This feels astonishingly wrongheaded to me. "Imagine an Internet HyperCard that allowed regular people to easily build web apps, as easily as using a spreadsheet." and then JUST TWO SENTENCES LATER, "There are many app-in-a-can tools that generate highly stereotypical apps but to be truly disruptive we need to match the broad generality of frameworks like Rails."
In other words, we need to make it possible for people to build simple things without much coding. Oh wait, people already can build simple things, but....that's not what you mean?
The Web is actually a phenomenally easy platform to learn how to write code for, compared to basically every platform I've ever seen that came before it. Yeah, it has a shitload of quirks and stupidities and kludges, but that's what happens when you're building a system that gets used by a very large percentage of the global human population.
The author writes as though he's never actually considered that doing anything at scale - not programming, anything; running a train system, or setting up a government, or selling falafels to people at lunchtime - inevitably involves hacks, compromises, kludges and half-measures. "Technical debt" is just another term for "doing things that involve humans".
"In other words, we need to make it possible for people to build simple things without much coding. Oh wait, people already can build simple things, but....that's not what you mean?"
I think that the problem the OP has with most current app-in-a-can tools is that they don't allow users to create arbitrary simple things, but only fairly specific types of simple things.
The more generalized a tool is, the more special knowledge you need to use it for specific tasks. The advantage of stored-program computing is that we can take a general tool (a computer) and package it with a set of automated instructions (the stored program) to turn it into a specialized tool that you can use with general knowledge. That's pretty much the fundamental endeavor of the software profession.
Compilers, IDEs, etc. are themselves generalized tools to produce more specialized tools; as a result of being generalized, they require special knowledge to use. A programming environment that allowed you to create an arbitrary specialized application using only general knowledge of computing would certainly be a great thing to have -- in fact, I'd describe it as the holy grail of computing.
People have been pursuing that goal for decades. Partial solutions have been discovered in the past (the OP mentions Hypercard, Visual Basic, etc.) but have generally been insufficiently powerful. It is also frequently difficult to keep them up to date with changes in the underlying technology.
I think there's not nearly as much agreement on how a "web app" might look or work today as there was for a desktop app even as early as the early '90s.
For "classic" (content-based) web apps (forums, CMSes, blogs), there's already fairly sophisticated, end user-targeted tools that fill that role. For a lot of people, WordPress is the Visual Basic of the web. Some wiki engines could be said to resemble Hypercard.
But the community is so split on what the "modern web app" should look like (thin client? thick client? standards-based or not? use the DOM or draw directly?) that I think it'd be much harder to gain much support for a "Hypercard for the Web" -- many web developers, possibly even a majority, would consider it to be "doing it wrong" and "teaching bad habits" _no matter what it looked like_.
General purpose design tools are again the equivalent of DTP (and very welcome), but not an app solution (unless I have completely misunderstood the audience Macaw is for).
I think you're right. My problem with that problem is that it's some combination of naive, impractical and pie-in-the-sky. It'd be great for humanity if you didn't need any skills or training to make any kind of computing application you wanted.
While we're at it, let's also have a replicator to make any kind of food we want, instantly and deliciously, without any culinary training whatsoever, beyond being able to press the "pasta carbonara" setting (oh but I don't want there to be eggs in the carbonara, and can it have tuna instead of ham, and can it use a different kind of noodle, but I still want it to taste like carbonara and be delicious...)
We have shitloads of them. Squarespace, Blogger, Wordpress, on to new hotnesses like Macaw, Dreamweaver, on and on and on and on and on and on and on and on...it's really pretty insane to argue, in 2014, that we have some kind of lack of simple-to-use website-creation things.
If your response is, "Sure, but they can't do anything complex/building interactive experiences is still hard" well, we had Flash, etc, but also, at some point this is moving the goalposts so far as to constitute meaninglessness. If what you're asking is, "why isn't there a simple-to-use WYSIWYG editor that I can use to build anything I want," then my answer is, "it's riding on the back of my unicorn."
We aren't talking about sites (that would be the equivalent of DTP), but apps. Hypercard was used to author content, sure, but also create games, tools, and so on. I don't think that is moving the goalposts at all.
I agreed with the broad premise that web development is shockingly horrible by the standards of the past era of polished corporate tools. But otherwise, it was all contradictions, inaccuracies, and indefensible conclusions.
>>In other words, we need to make it possible for people to build simple things without much coding. Oh wait, people already can build simple things, but....that's not what you mean?
Before that, he claimed that all of the big companies that had made packaged tools, like VB, had gone extinct. That might be a slight exaggeration.
I think the reason there's so much technical debt is largely because the amount it would cost to actually build quality software... is too high. We could not afford it. Like, as a society. Our society is built on crappy software.
I think it's just a utopian fantasy to think that if only the right hypercard-like tool could be created, then the cost of building quality software would go down.
Or at any rate, actually: Let's agree that the web is built on an enormous stack of kludges upon kludges. (These kludges are both in code frameworks that people use to build things on the web, and in the fundamental protocols of the web itself). The reason it is this way is, again, because by the time it is recognized what a problem this is, it would simply be too expensive to rebuild the web from scratch. We can't afford it.
To build this utopian hypercard-like stack which would allow just anyone to build web things, and be so high-quality that it just worked without having to understand the things it's abstracted on top of, and to maintain it as web technology and desires continue to evolve, etc.... would be such an expensive undertaking, with such a high risk of failure, that it has no way to succeed in that fantasy of making the web all around cheaper.
We see posts like this come up here from time to time, written by non-programmers who have some kind of belief that programmers _like_ complexity, that programmers are _opposed_ to making things easy and simple. I totally don't see "modern programmer culture fetish[izing] complexity" -- rather, on HN, I think it's pretty clear that modern programmer culture fetishizes simplicity. It's just that simplicity is _hard_. (And people chasing simplicity often end up over-abstracting, and just winding up with an even worse form of complexity). Successful software that is powerful and reliable and simple takes skill and it takes time. And skill and time cost money.
We've built an economy and a society that is entirely based on crappy software, because the economy could not bear the cost of as much quality software as we have crappy software, and the crappy software provides short-term efficiencies that make people money. (And I'm not talking about programmers, I'm talking about the 'domain' businesses which could not afford to run without software 'automation' anymore, even though it's all crappy.)
(1) You actually do meet a developer from time to time who fetishizes complexity. More frequently, you'll find developers and managers who'll fight any attempt to reduce surplus complexity.
(2) I don't think the root cause of crappy software is the cost of quality. Quoth Phil Crosby, quality is free, it's the screw-ups that are expensive.
Nobody has suggested that the federal and state Obamacare sites failed because too little was spent on them. The way it was done, state by state, made the experience a laboratory of software development.
It was certainly possible to make an Obamacare site that works. New York had a rough first week, but at the beginning of Week 2 I had no trouble signing my mother-in-law up. Some states never processed a single application online.
The trouble wasn't that "quality is expensive" but rather incompetence in management, procurement, etc.
I think it's important to distinguish up-front cost versus long-term cost. In more than just this discussion.
For example: plenty of people in the Bay Area would, long-term, find it cost-advantageous to own rather than rent -- if they could get together a 20% down payment. But they can't. So it's kind of irrelevant whether they'd save money long term.
The same principle can apply to software. Sure, you'd save money long-term if you adhered to extremely high quality standards. But you wouldn't release this month -- and you need to release this month for your company to stay afloat.
Figuring out when the short term cost is worth the long term savings is a great deal of the art of software product strategy. And I don't think we should just categorically sweep all such decisions -- even all such wrong decisions -- into the catch-all of "incompetence."
Technical debt strangles many products before they even get to market.
The status quo of software development is that management won't face the facts of what software will cost so they chronically underestimate what an efficient software development effort would cost by a factor of two or three.
Instead of laying out a realistic plan that will succeed, they embark on a hopeful plan that will certainly NOT succeed, and you end up with a 2/3 chance of failure and, if there is success, it costs a lot more than efficient development.
Screwing around leads to going in circles, not delivering a product in the next month. If software managers focused on compressing the standard deviation of the schedule they'd come very close to least cost development, because screwing up is incredibly expensive.
When we want to stigmatize people who plan too heavily for the future, we call it "overengineering." When we want to stigmatize people who plan not enough for the future, we do whatever you're doing above. The line between those two failure modes is relatively narrow and not at all obvious. There aren't simple heuristics that will infallibly put us onto the line, and acting like this is all black and white doesn't help anyone.
Even "line" is probably an oversimplification. Some projects probably have a large region, some a narrow line, and some might non-obviously have no such path.
>>We see posts like this come up here from time to time, written by non-programmers who have some kind of belief that programmers _like_ complexity, that programmers are _opposed_ to making things easy and simple. I totally don't see "modern programmer culture fetish[izing] complexity" -- rather, on HN, I think it's pretty clear that modern programmer culture fetishizes simplicity.
Programmers are people. And like most people, they are resistant to any change that will devalue their well-paying jobs and endanger their relatively luxurious lifestyle.
Let's say that you're a developer who makes pretty decent money writing CRUD applications. A new tool comes out that automates the process and it becomes very popular. What will be your first reaction? Are you going to say, "wow, this is such a cool thing, I'm going to tell all my friends and clients about it and even start contributing to it on GitHub"? Or will you have a knee-jerk reaction, based on fear, and criticize the hell out of it?
The software itself is usually no more complicated than it needs to be; the issue is that the things we want to do with the software are themselves very complicated. If you tried to do anything remotely complicated in HyperCard, you pretty quickly ended up with something approaching the complexity of a modern application.
There is also the idea of "default" versus "custom" and how the definition of the two can change over time as expectations of the level of complexity built into the default change. Where AJAX form autocomplete was once a nifty "custom" feature, it has effectively evolved to become the default way to capture input. But not everywhere.
Things are complicated, and the best way to do things changes all the time. And not just from a technology standpoint. So we build flexible solutions that can be extended and evolved over time to adapt to those changes; which really just adds complexity in the end. But the complexity is worth it, because nothing is ever really "done".
> The software itself is usually no more complicated than it needs to be; the issue is that the things we want to do with the software are themselves very complicated.
Ok, take these requirements: I want a web app that counts the number of times users click a button. Users should be able to see the number of times they clicked and I should be able to see a top 10 of the highest click counts.
To do this I must know HTML, some general purpose server language, how to configure a web server (be it directly or through a hosting account or some cloud thing), how to package/deploy/whatever to said server. I must have some database to store the clicks and use SQL or JSON or some specific API. Interacting with the database from the general-purpose language is going to require a library. I might have to download it and put it in the correct place or use a package manager. If I want the interface to update immediately (like an old-fashioned app would) I also have to use JavaScript. If I want to control the position of things on the screen, fonts, colours, whatever I will also need CSS.
I understand how we arrived at this state of affairs, but claiming that it couldn't be simpler is just Stockholm syndrome.
Actually, from your basic set of requirements, you just feature-creeped your design to death.
Let me take a shot at it:
Learn enough HTML to make a GET request. Know enough PHP to receive the GET request, and then update a textfile of entries on disk. Use a second file to store the top ten clicks. Return the second text file.
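For concreteness, here's a from-memory sketch of what that could look like (my assumptions, not requirements from the parent: the counts live as JSON in a clicks.json file, the user is identified by a ?user= query parameter, and locking/concurrency are deliberately ignored):

    <?php
    // click.php -- minimal sketch of the flat-file approach described above.
    // Assumes one low-traffic server: no locking, no scaling, no framework.

    $user = isset($_GET['user']) ? $_GET['user'] : 'anonymous';

    // Load existing counts from disk (empty array on first run).
    $counts = file_exists('clicks.json')
        ? json_decode(file_get_contents('clicks.json'), true)
        : array();

    // Record this click, if the button was actually pressed.
    if (isset($_GET['clicked'])) {
        $counts[$user] = isset($counts[$user]) ? $counts[$user] + 1 : 1;
        file_put_contents('clicks.json', json_encode($counts));
    }

    // Compute the top ten (arsort = sort by value, descending, keys kept).
    arsort($counts);
    $top10 = array_slice($counts, 0, 10, true);

    // "Learn enough HTML to make a GET request":
    $mine = isset($counts[$user]) ? $counts[$user] : 0;
    echo "<p>You have clicked $mine times.</p><ol>";
    foreach ($top10 as $name => $n) {
        echo "<li>" . htmlspecialchars($name) . ": $n</li>";
    }
    echo "</ol>
    <form>
      <input type='hidden' name='user' value='" . htmlspecialchars($user) . "'>
      <button name='clicked' value='1'>Click me</button>
    </form>";

Whether that's good engineering is exactly what the rest of this thread argues about; the point is only that it meets the stated requirements in one file.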
That's it. In your example, you did what is generally expected of today's current "web trends": you take a super simple use case and demand it be highly scalable for millions of users with instant and immediate feedback. And why are we using CSS at all? It's a button and some text; no styling is needed. And why are we using a database? Do you expect millions of concurrent users? Hundreds? Your requirements didn't say that. What do you mean package/deploy/whatever to the server? Sure, there are some basic routing needs and maybe Apache, but those take minutes or less to set up. Also, right in the middle of your solution, you changed the requirement ("If I want the interface to update immediately..."): right there, you are adding complexity.
While, on the face of it, I understand what you are trying to say, I have to point out that you are the primary cause of the increase in complexity, not the technologies involved. I actually think deploying a simple counter website like this is easy. But as soon as you want immediate feedback? Alright, more complexity. Millions of stored records? Alright, maybe some large memory cache, like Memcache (or a large array). Persistent records? Alright, fine, get a DB. Millions of concurrent users? Alright, we are going to need some more complexity to handle throttling. Thousands of requests per second? Even more complexity; maybe we have a distributed system.
In the end, you took a simple problem, and turned it into an awfully complex one. Yes, designing an application for that kind of load is complex, because it is actually a complex task. Doing all the things we want to do today is hard because there isn't some turn key solution, not because we are working with tools that are too complex.
As an unfair little poke at your solution, there are in fact turn-key solutions, like Yahoo webhosting, where you just design really high level basics and it does the rest.
You're not properly identifying your requirements then. If we were to break down your "requirement" into user stories, I count the following user stories:
1. As a user, I want to access this application through my web browser.
2. As a user, I want to know how many times I have clicked the button.
3. As a user, I want to know how many times the top 10 users have clicked the button.
4. As a user, I want the interface to update immediately when I click the button.
5. As a designer, I want the ability to easily change fonts, colors and layouts.
6. As a product owner, I want the ability to push updates to my users automatically.
Your technical requirements all roll up to these user stories. If you wanted to do this as an iOS app, it would be pretty trivial: you could almost build the whole thing in InterfaceBuilder. But the web browser is an abstraction layer we've built because it carries with it certain architectural advantages.
The web browser makes simple requirements much more difficult, I will grant you that. But it makes other requirements much simpler: rather than having to provide a mechanism by which to upgrade users' compiled applications when I want to add a red button and a blue button, I just push the changes out to the server and every user sees both the red and blue buttons. I also no longer have to write network code to connect to a server: my web browser does that. When is the last time anyone wrote a network stack for an application? Everything is REST services and JSON now.
Yes, writing web applications is very complex. But that complexity allows us to do things that were very, very difficult only a decade ago. The cost of being able to do hard things easily is that trivial things are somewhat less trivial to do than they would be in other environments.
This is almost exactly Meteor's "Leaderboard" example [https://www.meteor.com/examples/leaderboard]. Not much code goes into that, and I think it's pretty approachable for a non-programmer.
This is an interesting characterization of Jonathan Edwards... did you not do research on the author before writing this, or are you really claiming he's a "non-programmer"?
First page of google for Jonathan Edwards produces a 17th century philosopher and a singer. Adding "Jonathan Edwards programming" produces Subtext, which has a UI straight out of 1994 and no releases. So... he's a not terribly well known academic yawning about how programming needs to be more academic?
I'm not claiming he's a Super Well Known Guy, but that it's trivially easy to figure out the non-programmer ad hom is inaccurate.
> First page of google for Jonathan Edwards produces a 17th century philosopher and a singer.
Because all real programmers are on the first page of Google when you search for their name.
It's also worth noting the historical Jonathan Edwards is a pretty important figure in American history. The First Great Awakening set the tone for American religion; it's standard material in any high school History class. And if there's one person you teach about from that period, it's Edwards. In fact, I would be somewhat surprised if most Americans don't recognize the name. So being out-ranked by him isn't exactly unexpected.
Right. So even if you haven't heard the name before, some very simple google searching turns up the fact that he isn't a non-programmer.
And even without that, you could flip through his prior blog posts and figure out that non-programmer isn't an accurate description.
> yawning about how programming needs to be more academic?
I mean, the article says basically the exact opposite of this?
> Honestly, I have no idea who he is.
Yeah, I don't know who most of the world's programmers are. So they must not be real programmers (well, unless googling their name turns up their github account? But self-hosted projects don't count!).
But in 10 seconds of Google you figured out that non-programmer probably isn't a great description. And in a few more you might've figured out he's a fellow at MIT's CSAIL, which isn't particularly well-known for hiring programming-illiterate people.
My point was that it's usually a good idea to actually research the author of a piece before firing off the ad homs.
Subtext looks interesting. Is it being developed in the open at all? Versions for download? The page looks like one of those shop windows covered in white putty to stop you looking in.
some kind of belief that programmers _like_ complexity, that programmers are _opposed_ to making things easy and simple
Anecdotally, by far the worst spaghetti code I've ever seen was written by big-minded CS types shoehorning algos and metaprogramming quite unnecessarily. The newbie spaghetti I've seen has been magnitudes easier to refactor.
Pretty negative. Not everything that is intractable is 'crappy'. Sometimes it just hasn't anticipated how we're going to want to change it, or was built to order and not for expansion. Like a building or a road that you no longer want to use - nothing wrong with it, just no longer useful.
I think the reason there's so much technical debt is largely because the amount it would cost to actually build quality software... is too high. We could not afford it. Like, as a society. Our society is built on crappy software.
I'm not sure that I agree. If by crappy you mean "not formally proven", then sure. Or if you consider floating point crappy, then we disagree on terms.
I think our industry is in a state where 98% of the code produced is just junk: unmaintainable, barely working, no future, career-killing garbage just waiting to fail at the worst time. This is tolerated because software victories are worth (or, at least, valued at) gigantic sums of money: billions of dollars in some cases.
I'm not sure how well we can "afford" it. Do we want to go through another 2000-3? How much use is it to have massive numbers of people writing low-quality code, not because they're incapable but because they're managed specifically to produce shit code quickly in order to meet capriciously changing and often nonsensical "requirements" at high speed? I think it's great for building scam businesses that demo well and then fail horribly when code-quality issues finally become macroscopic business problems and eventually lead to investors losing faith. (Oh, and those failures are all going to happen around the same time.) I'm not sure that it's good for society to produce code this way. So much of the code out there is "totaled": it would cost more to fix or maintain it than to rewrite it from scratch. You can't (or shouldn't) build anything on that.
Floating point, as IEEE standard? Beautiful. Elegant. One of my favorite technical standards. Other than the +0/-0 thing, it's perfect.
Floating point, as implemented? Ugh. You've got processors which implement some subset of x87, MMX, SSE, SSE2, SSE4, and AVX, all of which handle floating point slightly differently. Different rounding modes, different precisions, different integer conversions. Calling conventions differ between x32 and x64. Using compiler flags alone on Linux x64, you can make 'printf("%g", 1.2);' print 0. Figuring out the intermediate precision of your computations takes a page-sized flowchart: http://randomascii.files.wordpress.com/2012/03/image6.png
The "mess" reflects the fact that choices exist, that is, it is the result of the different goals of the producers of compilers or the processors, not of the mentioned standards. What's not standardized can vary.
Compared to the pre-IEEE754 state, the standard was a real success.
Re the article the picture you link to comes from: unless you're building games, and as long as you're compiling with VC, your results haven't changed for more than a decade and a half. New versions of the compiler took care to preserve the results. And even VC 6, released in 1998, luckily selected the precision of intermediate calculations that was most reasonable and matched the SSE2 hardware Intel introduced in 2001.
You say "So much of the code out there is "totaled": it would cost more to fix or maintain it than to rewrite it from scratch."
If that's the case, why does such code still exist? If it's still running, then in some sense someone is "maintaining" it, at least to the extent of keeping the server it resides in powered on. In other words, someone obviously finds it cheaper to keep such code running as-is than to rewrite it (or to do more ambitious maintenance on it).
Even crappy horrible buggy code can be useful (in a business sense, or a "makes its users happier than if it didn't exist" sense), as hard as it is for us as developers to admit it.
One example: I used to work for a company offering a security-related product with crippling, fundamental security problems. The flaws covered everything from improper use of cryptography to failure to validate external input, lack of proper authorization handling, and even "features" fundamentally at odds with any widely expected definition of security.
This company continues to survive, and has several large clients. But the liabilities of the current code base are massive. Worse is that the clients aren't aware of the deep technical problems, nor is there any easy way for them to be. In a very real sense, this company is making some money in the short term (I don't believe they are profitable yet) by risking their clients' valuable data.
In general, the concern by the grandparent is that there are projects out there that are producing some revenue, but are essentially zombies. Every incremental feature adds more and more cost, but there's no cost-effective way to remove sprawling complexity. The project will die, taking along with it significant investor money.
Okay, me and you agree that most of the code produced is junk (not everyone in this thread does I think!).
I agree that the junky code is going to bite us eventually.
But what do you think it would take to change things so most of the code produced is not junk? Would it take more programmer hours? More highly skilled programmers? Whatever it would take... would it cost more? A lot more? A lot lot more? I think it would. And I think if this is so, it's got to be taken account in talking about why most code produced is crap.
I do not think it's because most programmers just aren't trying hard enough, or don't know that it's junk. I think it's because most places paying programmers do not give them enough time to produce quality (both in terms of time spent coding and time spent developing their skills). And if say 98% of code produced is junk, and it's because not enough programmer time was spent on them... that's a lot of extra programmer time needed, which is a lot of expense.
The utopian theory of the OP is that with the right tooling, it would not take any more time, or would even take less time, to develop quality software. I think it's a pipe dream.
>>I do not think it's because most programmers just aren't trying hard enough, or don't know that it's junk.
Actually, that's exactly the reason.
Back in 2003 I was a sophomore in college and I took an intro-level CS class. It was taught in Java. Back then we didn't have sites like Stack Overflow, so if you ran into issues during projects you had to find someone who could tell you what you were doing wrong. Often times this person was the TA or the instructor, and those had limited availability in the form of office hours. So it was super easy to get demotivated and give up -- which is indeed what made a lot of wanna-be programmers (including me) switch majors.
Fast-forward ten years. We now have a plethora of resources you can use to teach yourself "programming." While this is good in the sense that more people are trying to enter the profession, it's not so good because when you teach yourself something complex like programming, it is often difficult to know whether you are learning the correct habits and skills. I've been learning Rails for the past five months and I spend a lot of time obsessing about whether the code I write is high quality, but that's only because I've been an engineer for six years and I'm well-aware of the risks of building something overly complex and unmaintainable. In contrast, most people build something, get it to work, and then call it a day. They don't go the extra distance and learn best practices. As a result, the code they produce is junk.
As long as the job of a programmer is to be a business subordinate, it will not change and we'll see crappy code forever.
Mainstream business culture conceives of management as a greater-than relationship. You're a lesser being than your boss, who's a lesser being than his boss, and so on... It also is inhospitable to the sorts of people who are best at technology itself. Finally and related, it conceives of "working for" someone not as (a) working toward that person's benefit, as in a true profession, but (b) being on-call to be micromanaged. The result is that most programmers end up overmanaged, pigeonholed, disempowered, and disengaged. Shitty code results.
If you want to fix code, you have to fix the work environment for programmers. Open allocation is a big step in the right direction, and technical decisions should be made by technical people. Ultimately, we have to stop thinking of "working for" someone as subordination and, instead, as working toward that person's benefit. Otherwise, of course we're going to get shitty code as people desperately scramble (a) up the ladder, or (b) into a comfortable hiding place.
"As long as the job of a programmer is to be a business subordinate, it will not change and we'll see crappy code forever."
Well of course that's the job of the programmer. The programmer is supposed to build something that does something useful. Most of the time, the primary value of the code isn't that it's GOOD, it's that it DOES THE THING. Oh, sure, at the level of (say) the Linux kernel you can almost think of it as code for the sake of code, but you walk back up the chain and you'll find a lot of people contributing indirectly because they want to do THINGS and they find that they need a kernel for those things.
But most programmers aren't at that far of a remove from doing things, they work directly for a company engaged in doing something other than selling code. Management at that company wants things done. They insist upon this at a very high level of abstraction, that of "telling you to do the thing for them." You are a leaky abstraction.
There are programmers who without direct day-to-day management produce code that is valuable to the business, and programmers who receive comprehensive managerial attention and produce code that costs the business.
The problem is that everybody wants something so flashy and pretty that they can't settle for the functional-but-ugly barebones interfaces app programmers used to make. The problem is that we've trained users that if a product isn't pretty it's useless, and in pursuit of that all these other things happen.
You need to have a single-page app, because UX. Well, that in turn requires all these other things, in turn requiring still more things. You need to have a pretty web page that also looks good on mobile, in turn requiring more clever responsive CSS, and so on and so forth.
I think you've nailed it here. Really, people should work on solving these smaller problems, e.g. make building a responsive website easier, make building single-page web apps easier, rather than "fixing" the web. And people are already making significant progress toward these things! (meteor, react, bootstrap, angular, etc)
... and an 18-year-old copy of cgi-lib.pl can be equally simple and powerful ... and a great many things can be built on these simple, no-frills platforms.
How many websites have non-standard HTML items, and entire frameworks embedded, just to give me a slightly fancier submit button or text-input box ?
How many websites have you visited today that had a 2000 character URL ?
Yes. But isn't HTML itself the flashy & pretty (and often non-value-added) alternative to plaintext, the functional & ugly interface? Why not just use plaintext, wrap everything in <pre></pre>, and call the Web a fad?
I'm all for simplicity, I just wonder what's considered taking it too far.
HTML allows for creating hypermedia documents, which plain text does not. In small, reasonable amounts, it provides functionality not easily reproducible with a plain-text interface while still being reasonable.
Stacking on lots and lots of other stuff, though, is when it becomes silly.
True, true. Hyperlinks are pretty big. Defining links with a new language is certainly one way to do it. But look here on HN, what do I do? I write plaintext. I certainly don't write <a href>. I write like this [1], and the hyperlink becomes an implementation detail. An HTML implementation detail, haha, sure, but nowadays it's not hard for hyperlinks to be emulated. Anyhow I honestly do think html5 is a great language. (but so is text! :)
Software that ran on the mainframe on a green screen was generally rock solid reliable, and still is. Systems built in the 60s and 70s, still doing real work and making real money. But we ate the apple (pun not intended) and threw ourselves out of Eden.
Agreed on the "let's make programming more accessible" point, but meh on the "good old days" view. I've been writing software since 1976 so I have some perspective on the good old days. Programming was always arcane, and it always required specialized knowledge and the patience to work at a level of detail few people find enjoyable. Yes, there were things like hypercard that opened up programming at a certain level to a certain semi-skilled person. Excel falls into the same category. But in the end it is not the symbology, or the tools, or the environment that makes programming difficult: it's the mental process of building up complex behavior from little pieces, analogous sometimes to being handed a bag of atoms and instructed to build a toaster. Some of us find building up these intricate abstract models fun and rewarding, but believe me we're a tiny masochistic fraction of humanity.
One particular aspect of developer inequality and technical debt that I see get ignored is the divide between people with the social credit or track record that lets them only create new things and those who have to do maintenance.
I find the latter often appreciate the place and purpose of types more clearly than the former, even if the former would benefit as well from having concise encodings of intent in the design and creation phase.
This bias is prevalent in dyn-lang/consulting-oriented communities, often because they are churning out projects and dumping them on their clients/employers.
To bolster this argument: I made my consulting money off coming in to fix projects that had been churned out like this. The issue is real enough to warrant good money to fix.
To be fair, businessmen are short-tempered and flighty. They're like 15-year-olds who love one band one week and hate it the next. You have to make impressions-- flashy ones-- quickly, because they have no ability to judge code quality, and can only tell who seems to be working fast.
In this light, isn't the "get it out quickly" strategy (maintenance be damned) exactly what they've asked for? In an industry where programmers are business subordinates, not true professionals, should anything else be done?
I am playing devil's advocate insofar as I agree with you. (I also think creators worth their salt want to see their work through, which means they're already doing "maintenance" work by the time they're finished.) However, I don't see how anything else can become the norm, given that The Business sees us as a cost center and a commodity.
I often watch other developers look to make their lives more complex by trying to solve problems which aren't there. Why? Because the obvious solution is simple and boring. In any other culture that would be crazy. In programming culture, it seems to be the norm.
This is a constant battle for any developer. It's not merely that the obvious solution is boring, but that oftentimes the programmer is juggling so many problems that an obvious solution to one doesn't seem sufficient because it doesn't address the others.
That is, yes, devoid of all context the simple solution solves the small problem, but it doesn't address the larger. It's funny that we get back to that after describing the art of programming as being able to break a large problem down into small, solvable ones, but there you are.
I frequently find myself having to rescope my tasks, say "I don't know the answer to that, but I don't -need- to yet" or "That could go either way; let me just create a common exposed interface and cut off this effort at that point" etc, rather than create a bloated mass of abstractions to handle all the possibilities. It's the problem agile was/is supposed to solve in terms of method, and what functional programming is supposed to solve in implementation.
Personally, I perceive the 'complexity as a status marker' less a technological or programmer culture problem and more as one of business and job culture.
You won't find someone building needless complexity into something if the fundamental goal pushes the limits of their abilities. Basically, starving men don't build Rube Goldberg Machines to crack their eggs.
In technology in general, the majority of jobs seem to be been-done implementation while the education and training continues to stress design and engineering.
Complexity is an outlet for every boilerplate operator who would rather be engineering and a differentiator to remind him/herself and signal to others a greater capability.
Some reasons from experience that developers over-do complexity:
a) Someone at some point in the past told them to plan ahead
b) They aren't sure about a feature (or perhaps about the future of a feature) and hedge against inaction
c) Management does not manicure their task focus and/or priorities
"Our goal was to allow regular people without extensive training to easily and quickly build useful software. This was the spirit of languages like COBOL, Visual Basic, and HyperCard."
I think that this slightly overstates the matter. Having peered into the guts of some payroll code, I question whether it was easily built. Could we say
"To allow application developers to concentrate on the complexities of the problem domain rather than on those of the computing environment."?
> Attempting to simplify and democratize programming will attract only scorn and derision (as did COBOL and Visual Basic).
The reason COBOL and Visual Basic attracted scorn and derision is that they were awful. The problem with them is quite simple: You can use them to create something improperly in 1000 hours that you could create properly using other tools in 2000 hours. That thing will then work until you have enough concurrent users to expose the race conditions, or the Access database backing it reaches the 2GB limit, or the wind blows too hard and the hamsters powering it become frightened. At which point you'll spend 10,000 hours of overtime trying to keep it from collapsing in production while the users all burn you in effigy.
And the likes of Javascript don't suck because they aren't like COBOL and Visual Basic, they suck because they are.
The things I've seen built on COBOL and Visual Basic are awe-inspiring. The world is built on them - and for a reason: if you don't need a coder, there's a whole new definition of "fail fast" that comes into play.
And maybe eventually some tiny percent of these little things succeeds beyond your wildest dreams, and is now critical to your business, and you have to drop 10k hours slowly rebuilding it using professionals. That's what success looks like. It's also what not paying for 100 failed software projects looks like.
My guess is that you have not written a line of COBOL code. I have seen great systems written in COBOL. My guess as to why COBOL never caught on in the PC world is that COBOL is very much suited to batch programming. There were a few tries at a visual form of COBOL, but the tool set was very expensive and the alternatives were cheaper and more suited to events (for instance Turbo Pascal, Turbo C, the Microsoft products).
That's not really the point. The problem is not that you can't do something great. Every Turing-complete language has the capacity to do that. You can write a great program in Brainfuck.
The problem is that they encourage you to do something terrible. I have seen COBOL programs that I did not wish to see. Good languages make it easier to do the right thing. They provide strong type checking even for user-defined types and well-tested abstractions (like templates/generics) so that you don't have twelve copies of the same function, one for each expected input, each slowly diverging and multiplying under maintenance and developing their own subtly different bugs.
The civilized platforms controlled by large companies who invested in developer tools are all gone, strangled by the Darwinian jungle of the web. It is hard for programmers who have only known the web to realize how incredibly awful it is compared to past platforms. The web is just an enormous stack of kluges upon hacks upon misbegotten designs.
This passage makes me remember that famous quote by Alan Kay:
The Internet was done so well that most people think of it as a natural resource like the Pacific Ocean, rather than something that was man-made. When was the last time a technology with a scale like that was so error-free? The Web, in comparison, is a joke. The Web was done by amateurs.
One thing this article completely ignores (as do many of the comments on the site) is that expectations have also drastically changed. Yes, Hypercard was dead simple to get something useful actually working, but nobody would accept a web app that looked and behaved like a Hypercard app.
I think the author is right: there is a none-too-healthy culture of increasing complexity surrounding web development. But there is also a large amount of real complexity around it. Many of the complexities we deal with today are the result of attempting to put an abstraction layer over deeper, larger complexities. Unfortunately, few abstraction layers work without leaks, and those leaks add their own complexity.
With powerful browsers, APIs-as-a-service and single-page apps, we are almost to a point where the web itself has been abstracted. But, even with that, expectations are still incredibly high. Nobody wants to look at blocky, coarse web sites; they want beautiful, smoothly animated sites. And that takes complexity.
While people like (or love) products that look beautiful and delight them, what people love even more are products that do what they want/need. A Hypercard stack or spreadsheet could solve a real problem for a dozen people in a way that a slick commercial product targeting a large audience might not.
I do think there's value in enabling people to make applications that do something useful but aren't so pretty.
Love this post. Brings up the question: are we solving problems or merely difficult puzzles? "Modern programmer culture fetishizes complexity as a status marker or tribal initiation ritual"
I wonder if some of this is the bored engineer phenomenon? I find that when I'm writing actually difficult code, it almost always comes out clean, simple and easy to read.
It's when I'm churning out ridiculous amounts of front-end kluge that I start over-engineering or introducing complexity and indirection where it's totally unnecessary.
This seems like a particularly cogent hypothesis for the web, where a majority of the programming is not very intellectually stimulating* but is simultaneously time-consuming and mentally taxing.
* IMO and in my experience, mileage may vary. And this isn't to say you don't need to be intelligent. But requiring mental energy != intellectually stimulating.
The web won because it has the least barriers to entry for a new developer and solves a lot of difficult problems that the old-school application stack had. This is all from my history and point of view so I'm sure a lot of people will disagree.
Cost:
The days when big companies developed tooling and platforms for developers were great IF you worked at a company that could afford to pay for them. They were expensive. When I first started, these suites were tens of thousands of dollars expensive, and later just thousands of dollars. Sure they did a lot of stuff, but man, there was also A LOT of overhead in managing the suite. For young people and very small companies these things were unaffordable, and frankly unmaintainable. A lot of times you would hear about companies who paid 10-20K for a suite only to not use it due to the training required for everyone, the installation cost, and the need to change your development process to fit the platform's model. So anyone starting out as a developer or starting up as a company would find the cheapest/easiest options available. And since there are far more small development teams than big companies, there was far more demand and community support for free/cheap systems that, while not great, could get the job done. Linux was free, MySQL was free, PHP was free, Perl was free, JavaScript was free, and there were communities that would support a new developer through getting these up and running (no $2,500 training class required). The reason the LAMP stack became so popular was because it was free, could be installed on cheap hardware, and every element had a community that would support it.
Cross Platform Support and Application Distribution:
With C++/C you had to compile for every target machine. What a nightmare, library flags, macros, ugh. Testing had to have special labs so they could find bugs that would popup in one OS, but not another. Then it had to be cloned on disks and shipped out.
Java came around with write once, run everywhere, which was better (MS came out with the CLI and .NET). The problem with these was making sure your customers had the base platform installed (the JRE or .NET). B2B was a little easier because those companies had standardized equipment and IT departments that knew what to do, but if you were targeting B2C or small business, then getting them to install the base software (again, the JRE/.NET) was a pain because they would more than likely mess it up or just not know what to do, so it required manual intervention. As soon as this got easier, you wound up with the conflict of people having TOO much installed on their computer and just not wanting to install anything else unless they HAD to.
Fast forward to now and we've got a built-in client that adheres to common standards. If they want to use your app they bookmark it, and when they are sick of it they just delete the bookmark. Sure there are cross-browser issues, but they are far easier to deal with than both having the customer install some platform and dealing with cross-OS issues (plus dealing with them doesn't require the customer to do anything but hit the refresh button).
Developer Inequality:
I don't know how you can say there's a higher barrier to entry now than before. In the '90s almost everyone you met who was a developer was a CS/EE major or had started on computers at the age of 12. The arcane level of knowledge required to use a platform to build an application in the '90s dwarfs what's required now. I still remember struggling with bugs where you'd go hunting for "the guy" who knew everything, and then you'd spend a couple of days trying to figure out where the issue was, only to find some undocumented outcome of a flag passed to a lib. To the developer working for XXXX who wrote the library, this was a completely obvious and logical result.
Compare that to now: StackExchange, free video tutorials, websites dedicated to teaching you. I constantly meet graphic designers, marketers, accountants, etc. who decided to become developers. I know a guy who has built a decent business with no official development education at all. He learned Rails and shoved it up to Heroku. He learned JavaScript to improve the client side, and now he makes a decent living.
You can start from NOTHING and in a few weeks have a base product built and being used by an alpha customer. That never would have happened 20 years ago. I feel freer and more productive now as a developer than anytime in my career.
> The web won because it has the least barriers to entry for a new developer and solves a lot of difficult problems that the old-school application stack had.
I believe the point of the article is that you shouldn't have to be a developer, steeped in the arcane knowledge of the LAMP (LNPR, Docker, etc) stack, to create an application.
His example (and seemingly his rose tinted wonder from the past), Hypercard, didn't require any knowledge more than you would need to create a spreadsheet. If you could work a mouse, a keyboard, and had a vision, you could create an application. Certainly a primitive application by today's standards, but I saw interactive games, training applications, presentations, order entry screens... the whole gamut of development potential written by people who didn't know what a Gigabyte was.
>The web won because it has the least barriers to entry for a new developer and solves a lot of difficult problems that the old-school application stack had.
I think the web won because the users prefer it over downloading applications. That's not surprising, as 90%+ of users don't even use an operating system with a package manager.
Doesn't it? App stores share many of the virtues of package managers. And with the 90% comment being what it is, it's fairly clear mobile is not the subject of the grandparent's comment.
Also, the discussion is rather pre-mobile. Mobile actually has nearly all of the positives, except OS fragmentation, that were referred to in the root comment.
> The web won because it has the least barriers to entry for a new developer and solves a lot of difficult problems
> that the old-school application stack had. This is all from my history and point of view so I'm sure a lot of people will disagree.
In hindsight, the web won in the enterprise because deployment was free. When the web was developed, in the early 1990s, vendors charged per seat for PC runtimes (PowerBuilder, Unify Vision, etc.). You got zero-cost runtimes, free client distribution (IE on Windows), and application version control (server-side managed code). The low barriers to entry probably exacerbated the hodge-podge, but I'm happy to bill hours cleaning up after people.
Yes. This is the essential description of the situation.
Or, put more empirically, if the web is terrible, why did it succeed by orders of magnitude over what preceded it?
I have fantasies, like every other dev, of controlled, predictable, statically verifiable, friendly, elegant, safe systems that also reach billions of people, with instant distribution, nominal cost, and low barrier to entry. Maybe these things are not (a posteriori) compatible.
We used to call this “rich vs reach”. Still seems true, perhaps essentially so.
There is also the argument that in the old days, developers had to think carefully about their code and design: spend time considering the data, structures, and algorithms to be used. Resources were limited; maybe time at the computer was also limited. Inputting programs could be laborious due to flicking switches or punching cards.
Basically, you had to stop and think a bit first.
Not such a bad thing - pausing to reflect before writing the first thing that comes to mind and releasing it as a soon-to-be-abandoned Ruby gem.
Also, developers of yesteryear may have had a more than passing understanding of the hardware, of how the OS worked, of how to work within the constraints.
Developers these days treat resources as infinite, as something to be allocated by someone else "mooooarr servers - page the devops!" - "but, if we just looked at how the code is performing?..." - "no, moooaarrr servers!".
I agree 99% with the author's point. I would like to add my two cents on how the complexity and diversity of tools and approaches costs software developers:
Yesterday I attended a talk by a young but experienced PhD Java engineer about his experience on a research project using NodeJS (server-side web programming in JavaScript).
Basically, he explained that he needed 6 months of project time to get up to speed with the language and the ways of the platform (asynchronous callbacks...), and that was with the mentoring of a more experienced JS dev. For me, this learning curve and the mentoring time is a lot of $$$$ and a lot of opportunities for suboptimal work.
The profile of this dev, and his ability to deliver a fine talk, make me think his capabilities are not subpar. Do we really expect doctors, architects, and other high-profile professionals to work suboptimally for many months, requiring special mentoring, just to use new tools? It seems to me that their initial training is supposed to be all they need for a decade or more.
What would you think if your surgeon needed to work 6 months with a tutor because his hospital bought some new surgical tool? Basically, we are shooting ourselves in both feet by using tools that evolve too fast for us to master in any sensible way.
Turing told us all languages can do the work. Yet we are still inventing new ones. Maybe we should focus our attention on something else to improve our output.
The author is kind of all over the place, and it's hard to read it without thinking of counterexamples to a lot of his points.
If his general point is to bring computing to more people, I think a modern, web-based version of MS Access would be a huge step. Everyone needs databases. Everyone needs forms to access them in a way that makes sense for their business. Right now you have to actually program to make that happen.
Yes. His article's intent is to describe a problem.
He then tries to give examples of solutions, and they're bad suggestions. Still, he's right about the problem. We are in a kind of dark age at the moment, and the web technology stack is an unnecessary horror.
Regarding databases, I think what you're proposing is a step in the wrong direction. Databases are one of the major problems of our age, and a cause of software being so difficult to get right. Many programmers will use a database to solve any problem.
There's an endemic problem of developers exposing databases as APIs, and then getting immediately bogged in complexity. We need less of that.
"Databases is one of the major problems of our age, and cause of software being so difficult to get right."
Strange. I think of databases as one of the bright spots in computing. They are very closely related to real business needs, and do a good job for the most part.
MS Access was great because it also had a form builder and it could all work over a network[1]. That means you could get a small business organized around a database easily and incrementally.
Now, we have to actually program to make that happen (e.g. rails, django, etc.) and design the forms with text rather than graphically. That's a big step backward for the technical non-programmers (e.g. accountants, HR professionals, etc.).
Of course, I'm always willing to hear new ideas. If you think you have the answer, please share (and/or start a startup).
Disclaimer: I have been heavily involved with databases from many perspectives (user, application developer, DBA, internals hacker). So, it's not a surprise that I think databases are great.
[1] Yes, the networking was a disaster from a technical standpoint. But that's an implementation issue, not a fundamental problem.
I had a think about it over the day, and decided that what you're proposing would bring power to users, and be an improvement. Focus on content rather than presentation, power to the users, fewer layers. Thanks for a considered reply.
> If his general point is to bring computing to more people, I think a modern, web-based version of MS Access would be a huge step. Everyone needs databases. Everyone needs forms to access them in a way that makes sense for their business. Right now you have to actually program to make that happen.
Most real-world MS Access databases needed all three: tables, forms, and behavior that required code. Online graphical tools for building the tables and forms might be useful, but they're not going to stop you from needing code for behavior.
That being said, something like Access for the web would be great. I'm surprised it hasn't happened yet.
Well, sure, they needed some code. But you could add it in small bits, like Excel, and gradually improve.
That's why accountants, etc., were able to use it. They aren't unable to code, they are unwilling to spend the time necessary to start from scratch each time.
"In the old days there was a respected profession of application programming. There was a minority of elite system programmers who built infrastructure and tools that empowered the majority of application programmers. Our goal was to allow regular people without extensive training to easily and quickly build useful software."
There are lots of people writing software today (probably the vast majority) who are not "elite programmers". Here are just a few examples:
1. All over the academic world, you'll find grad students in physics, biochemistry, etc. hacking together research software in Python (using numpy/scipy), R, and many other languages.
2. All over the business world, you'll find non-programmers writing programs in VBA and other end-user oriented languages to slice and dice data from databases.
3. The average CRUD code that powers today's web startups usually doesn't require anywhere near an elite programmer to create. Think of all the articles on HN written by a "non-technical founder who learned to program in two months and created a site that makes thousands of dollars".
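For a sense of scale, the bulk of that CRUD code is on the order of this sketch (assuming Node with the Express framework; the resource and field names are invented):

    var express = require('express');
    var app = express();
    app.use(express.json()); // parse JSON request bodies

    // Hypothetical in-memory store standing in for a real database.
    var posts = [];
    var nextId = 1;

    // Create a post.
    app.post('/posts', function (req, res) {
      var post = { id: nextId++, title: req.body.title };
      posts.push(post);
      res.status(201).json(post);
    });

    // List all posts.
    app.get('/posts', function (req, res) {
      res.json(posts);
    });

    app.listen(3000);

Nothing elite there; the hard-won knowledge lives mostly in the surrounding stack, not in code like this.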
I've been working on a startup that addresses this.
It is best described as "a front-end as a service": use our interface and API to build a cross-platform native mobile app, with a matching mobile-friendly web app.
Targeting companies who need a cross-platform whitelabel app to reach their customers or employees, but whose core competency is not flashy front-end software. Although I haven't done in-depth market research on the topic, I figure that most companies want some sort of custom app, but not every company wants to try and manage a team of web and mobile developers to do that.
Let me know if anyone wants to meet up around SF and chat about this; I've been casually looking for a co-founder while I prototype and look for first customers.
Because it's there. The only reason people use all these web tools is because they are there, documented and ready. The classic example is the Apache web server - it was there and worked. It was written when the web was small and the number of users to a site was low. Competing web servers are here now with performance way higher, so people are switching. Nobody wanted to write a better web server so nobody did for a long time - and better in that case was easily measured. Programming paradigms? How do you evaluate them? I agree there are fads (FP anyone?) but what objective criteria do we have to evaluate them, or design something from scratch to meet those criteria?
What is his actual suggestion? It sounds like he is just saying to make programming easier. That would be awesome if he actually did it, but all I see is the lone idea of making programming easier. How are you going to do that?
"This Archaeology of Errors is no place for the application programmers of old: it takes a skilled programmer with years of experience just to build simple applications on today’s web. "
I'm not sure HTML + JavaScript is any more or less easy than HyperCard + HyperTalk. I agree that nobody would design the web to be what it is right now if they had the chance to start over, but there's also something to be said for the force of Darwinism in selecting out decent technologies. And every major web technology right now is just that - decent. Not great - decent.
The proliferation of shitty JavaScript is crushing the web.
Javascript is designed from the ground up to enable bad developers to write shitty code. It's fine for alpha work, terrible for production.
It is possible to write beautiful, elegant, maintainable Javascript- but I've never seen a "designer" or even a "JavaScript programmer" write code like that. I have a job as a code mechanic because so many companies are built on woeful Javascript.
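As one illustration of the kind of footgun involved (my own example, names invented): in sloppy mode a forgotten `var` silently creates a global, while a single `'use strict'` turns the same typo into an error.

    // Sloppy mode: the missing `var` silently creates a global
    // variable named `total` instead of raising an error.
    function tally(items) {
      total = 0; // oops, forgot `var`
      for (var i = 0; i < items.length; i++) {
        total += items[i];
      }
      return total;
    }

    // With 'use strict', forgetting the `var` below would throw a
    // ReferenceError instead of silently creating shared state.
    function tallySafe(items) {
      'use strict';
      var total = 0; // declared, so no accidental global
      for (var i = 0; i < items.length; i++) {
        total += items[i];
      }
      return total;
    }

The language runs both happily; only discipline distinguishes them, which is why so much production JavaScript ends up woeful.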
No one mentions the real reason, because it is the large elephant in the corner of the room that we all ignore.
The truth is - most developers write crappy code. Even right now, here, on this site, plenty of people think their own code is fine and that it is the code of others that hurts the eyes.
We can say that programming is hard and that the languages and tools are not perfect, but that's not quite it. A lot of developers just want to ship it, get it compiled and out the door. It passes some tests, so it must be done.
If we built bridges the way we build software, well, we know how that ends...
I think a lot of people are misinterpreting what the article is saying (and maybe it wasn't expressed that well). The author is not saying that we need a tool that lets literally anyone be a developer ("So easy even a manager can use it!"), but rather that the current technology stack is so hacky that it gets in the way of progress, and demands levels of specialized knowledge that are tangential to the task of building web applications: that is, he wants a higher-level set of development tools.
Just as (to pick a high-level, desktop-oriented language at semi-random) C# is easier to work with than x86 assembly language, because it hides the messy details, I think the author would like a world where building a web application is as easy as building a desktop application: a world where issues like data serialization, browser-specific JavaScript hacks (and the hacky frameworks meant to solve them), incompatible databases, constant vigilance against easily preventable attacks (cross-site scripting, SQL injection, etc.), and the necessity of juggling at least 3 different languages (JavaScript, HTML/CSS, and whatever you have on the backend) do not occupy valuable developer brainspace.
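To pick one of those preventable attacks: nothing in the stack stops you from splicing user input straight into SQL; you just have to know better. A minimal sketch (assuming the node `mysql` driver; the table and variable names are invented):

    var mysql = require('mysql');
    var db = mysql.createConnection({
      host: 'localhost', user: 'app', database: 'mydb'
    });

    var userName = 'anna'; // stands in for user-supplied input

    // Vulnerable: user input becomes part of the SQL syntax, so a
    // name like "'; DROP TABLE users; --" rewrites the query.
    db.query(
      "SELECT * FROM users WHERE name = '" + userName + "'",
      function (err, rows) { /* ... */ }
    );

    // Safe: a parameterized query keeps data out of the SQL syntax,
    // but remembering to do this is left entirely to the developer.
    db.query(
      'SELECT * FROM users WHERE name = ?',
      [userName],
      function (err, rows) { /* ... */ }
    );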
The issue is that the current web technology stack was never meant for building applications. HTML was designed for static documents. JavaScript's early development was mainly about making terrible mouseover effects on links. XMLHttpRequest is a historical accident. So, yeah, it all works together, somehow, but it wasn't designed to be used as it is today, and it shows. And it's the developers who carry the burden of integrating these hacks. I, like the author of this article, wonder if it wouldn't be nice to design a system that provides the advantages of the web (distributed client-server applications with highly customizable, visually attractive front-ends delivered on demand) with a slightly more human development process. If we can't replace the web outright, maybe we can build better tools on top of it.
I agree that current web technology is a horrible pile of kludges. Let's try to build something better. Some attempts are already being made. Please check out the amazing Ur/Web language [1], which lets you write front-end code, back-end code (compiled to native), and database code in a single, type-safe language that statically guards against errors that are rife in dynamically-typed web scripting languages. It abstracts the database without wrapping your data in an annoying object layer, and it lets you write front-end and back-end code in one language so the two sides communicate transparently. It is far from a finished product, but it is an amazing glimpse of what could be possible with intelligent development tools. There are some great examples online that show how easy it could be. Easy for a developer, of course, not for a layperson.