It's true that much (most?) progress in programming has come through higher-level representations of programs. But this has made programming easier for programmers, not for non-programmers. Why? Because much of the complexity in programming is intrinsic to the problems themselves. The ability to distill and organize abstractions to address that complexity, and to formulate them precisely enough to be suitable for mechanical execution, is essential to programming. This ability is not going to magically appear once programs are represented as "something akin to a PowerPoint presentation, [or] a flow chart, [or] a sketch of what they want the actual user screen to look like" rather than in code. Actually, just the reverse is true. We program in code not because we're "coders", but because code happens to be by far the best medium for precisely expressing abstractions. If these other representations were so much better that they enabled non-programmers to specify working systems (i.e., to program), then they'd be better for programmers too. We'd all use them.
I'm not saying that Simonyi won't come out with anything of value. Maybe it will help to facilitate dialogue between programmers and domain experts about how a system should work. But the hype that it will turn non-programmers into programmers is pure PR.
Ability for precise modeling is the most important thing that separates programmers from non-programmers. Dealing with complexity is probably the runner-up, but domain experts should be just as good or better in handling this one.
Programming may someday escape the need for precision. The "compilers" of that day may be able to offer meaningful defaults and best-fit interpretations. This sounds like full-blown AI, but it doesn't need to be, just as Google looks intelligent but isn't. In fact, the way to do it may be some kind of statistical approach relying on the tons of code already written.
> Ability for precise modeling is the most important thing that separates programmers from non-programmers. Dealing with complexity is probably the runner-up, but domain experts should be just as good or better in handling this one.
It's the need for precision that brings out the complexity. In my experience working with domain experts, they usually don't realize how complex their domain is. Only when we start trying to nail things down precisely do the contradictions and edge cases come out of the woodwork. Invariably it's order(s) of magnitude harder than it looked at first. Usually they are surprised and say they had no idea there was so much to do.
So while I agree that domain experts are accomplished at handling complexity, they do so the way that humans do it (ambiguously and inconsistently), not the way that computers do it (through formal specification). The ultimate bottleneck in software development is that there just aren't that many humans who can do the latter well.
I agree that if programming ever does escape the need for precision, then the game changes, completely and unrecognizably. A pet belief is that this would require computers that work more like the way humans do (down to the lowest level), but that's a wild guess. In any case, I haven't seen any evidence of things progressing in this direction, have you?
Humans resolve a lot of the complexity intuitively and unconsciously. It is not an inherently bad way to handle it, just incompatible with current programming paradigms. This is not to discount the value of clear, explicit thinking in humans; it is just to say that programming would benefit a lot from the ability to apply common sense to mundane things.
For some dramatic evidence that imprecise approaches can be better, and really do work, look at the success of Google versus the non-delivery of the Semantic Web. Google's approach is probably best explained by Peter Norvig in his "Theorizing from Data" lecture:
State of the art in natural language processing is achieved with statistical inference, not by building precise linguistic models. See e.g. Marti Hearst's text here:
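As a toy illustration of that statistical flavor (my own Python sketch, not anything from Hearst or Norvig): predict the next word purely by counting bigrams in a corpus, with no grammar model at all.

```python
# Predict the next word by counting bigrams in a tiny corpus.
# No linguistics, just frequencies -- which is the whole point.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict(word):
    # Return the most frequent continuation seen in the data.
    return bigrams[word].most_common(1)[0][0]

print(predict("the"))  # "cat" -- seen twice, vs. "mat"/"fish" once each
```

Scale the corpus up to the web and you get something that looks like understanding without any precise model underneath.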
Back to programming languages, I believe that a lot of the Ruby/Rails success is due to mimicking natural language in DSLs. Rails goes as far as using pluralization. DSLs bring convenience at the price of less understanding: people tend to try using them intuitively, without understanding precisely what happens, much more so than they do with libraries/frameworks.
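To make the pluralization point concrete, here's a hypothetical sketch (in Python, not actual Rails code, whose inflector is far richer) of the convention-over-configuration trick: deriving a table name from a model class name, so the user never spells out the mapping.

```python
# Hypothetical sketch of Rails-style pluralization: a model class
# name is pluralized by convention to derive its table name.
IRREGULAR = {"person": "people", "child": "children"}

def pluralize(noun):
    """Tiny English pluralizer -- nowhere near Rails' real
    inflector, just enough to show the idea."""
    noun = noun.lower()
    if noun in IRREGULAR:
        return IRREGULAR[noun]
    if noun.endswith(("s", "x", "ch", "sh")):
        return noun + "es"
    if noun.endswith("y") and noun[-2] not in "aeiou":
        return noun[:-1] + "ies"
    return noun + "s"

def table_name(model_cls):
    # Rails maps class Person -> table "people" by convention.
    return pluralize(model_cls.__name__)

class Person: pass
class Category: pass

print(table_name(Person))    # people
print(table_name(Category))  # categories
```

The convenience is real, but so is the trade-off mentioned above: when the inflection rules misfire on an unusual word, users who never understood the mechanism have no idea where to look.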
Perl, similarly, was created with linguistic principles in mind:
I thought that understanding program flow was a huge impediment to non-programmers, and that if we had a graphical flowchart method then at least small tasks could be performed by non-programmers. However, after using ETL software which works exactly this way, I'm much less confident.
Does anyone remember IBM Data Explorer, which became OpenDX?
The program was based on the notion of mapping flows as topological bundles, thus it could (and would) attempt to automatically parallelize whatever independent operations it could. Programs could be set up as, quite literally, flowcharts, with a given box having input hooks (multiple), output hooks (usually single), and a piece of code, usually a C++ library, that would produce or consume the data flowing through the 'pipes'.
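That dataflow model is easy to sketch. Here's a hypothetical miniature in Python (boxes with input hooks feeding an output, evaluated in dependency order); a real system like OpenDX could additionally run independent boxes in parallel.

```python
# Toy boxes-and-arrows dataflow: each node has input hooks, an
# output, and a piece of code. We evaluate in topological order;
# nodes with no dependency between them could run in parallel.
from graphlib import TopologicalSorter

class Node:
    def __init__(self, name, fn, inputs=()):
        self.name, self.fn, self.inputs = name, fn, list(inputs)

def run(nodes):
    # Map each node to its predecessors, then evaluate in order.
    graph = {n.name: [i.name for i in n.inputs] for n in nodes}
    by_name = {n.name: n for n in nodes}
    results = {}
    for name in TopologicalSorter(graph).static_order():
        n = by_name[name]
        results[name] = n.fn(*[results[i.name] for i in n.inputs])
    return results

# Wire up a tiny flowchart: two independent sources feed a combiner.
src_a = Node("a", lambda: 3)
src_b = Node("b", lambda: 4)   # "a" and "b" are independent
total = Node("sum", lambda x, y: x + y, inputs=[src_a, src_b])

print(run([src_a, src_b, total])["sum"])  # 7
```

The automatic-parallelization claim falls out of the same graph: any nodes that are not ancestors of one another can be dispatched concurrently.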
I demonstrated this to a bunch of high school kids at a national science fair (ran it on an SGI in Ithaca while presenting at the fair in DC... it was hard to compete with the UIUC Cave VR developers otherwise) and the kids seemed to understand it instantly. Hey, look at that: functional programming in a visual style. What's not to like?
Needless to say, it has not set the scientific programming world on fire, and the project, while still maintained and used (I believe) by a handful of groups, isn't terribly high profile. A screenshot can be seen on OpenDX.org: http://opendx.org/images/opendx-screenshot.jpg
I 'contributed' the RPM spec file and played with it for a while, but by the time it had been open-sourced, I was no longer working in the type of environment where it was a major advantage. Plus, I no longer had a pile of nodes to throw at arbitrary computational tasks, nor much need for one.
Sometimes I miss that sort of thing. Lots of times, actually. I've been daydreaming about it lately... although these days I like to use iPython for that sort of thing, it would be nice if the self-documenting aspect of the UI and the generated programs could be retained.
Khoros[1], AVS[2] and LinkWinds[3] all do the same sort of thing as OpenDX.
I think these sorts of "boxes and arrows" programming systems suffer from being monolithic environments with idiosyncratic interfaces and little or no support for external software engineering tools. What's worse for them is they generally compete for mindshare with scripting languages or scriptable environments (i.e. things like Mathematica) with larger communities.
I don't think they will ever gain wide acceptance until they allow the same level of reuse as systems based on structured text.
What a terrible article. We "hunch over keyboards, pecking out individual lines of code in esoteric programming languages"? What a cliche! Also, some of us use Ruby or Python.
And we don't talk to users? When there's so much research on interaction design, testing, measuring how users behave? Not that you should always listen to users either (a point completely lost on the author) (http://www.codinghorror.com/blog/archives/001063.html).
And intentional programming may sound revolutionary, but our existing solution to that problem is domain-specific languages, and those aren't very new either. And if non-programmers can use those DSLs (SQL, Excel), that doesn't make them programmers.
Also, it's naive to assume that users could create programs if it weren't for the scary part where you have to communicate with the computer.
To program is to think in a structured way. It means considering all the consequences. Some large percentage of programming is error handling (citation needed). Non-programmers cannot think logically enough. The thinking part is what makes programming hard - the arcane API stuff is merely the annoying part, the accidental complexity. And we are getting rid of that, slowly but steadily.
What I meant was that it takes a certain skill to translate a business problem into something a computer can understand. You need to think extremely logically and rigorously. You have to consider everything that may go wrong.
That takes natural aptitude and experience. A minority of non-programmers have the aptitude, and extremely few have the experience.
That's what makes programming hard. And that's why this article is bullshit.
Programming is not hard because of esoteric programming languages. It's just hard, period.
"certain skill to translate a business problem into something a computer can understand"
Absolutely! And that skill includes analysis, not just development. Almost all the best analysis I've ever done has been WITH the skill of competent users, not in spite of them.
"A minority of non-programmers have the aptitude"
A minority of programmers have the aptitude, too.
"That's what makes programming hard"
Uh, try hot tarring a roof or digging ditches in 100 degree heat. THAT'S hard.
"It's just hard, period."
Hard is in the eye of the beholder. I don't think programming is much harder than most other things. It's easy doing it right. It's hard doing it wrong.
Also, I think many people already engage in a simple form of programming: it's called Excel.
I think of Excel as a zeroth-order functional programming environment. It is functional because you can establish direct functional relationships between cells, and zeroth-order because you can't build up abstractions (you can't create named functions, nor can such functions be composed -- it's only direct many-to-many cell references) beyond what's in the built-in functions and plugins.
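A toy model of that "zeroth-order functional" idea, as a hypothetical Python sketch: cells hold either plain values or formulas over other cells, and there is deliberately no way to define a new named function.

```python
# Toy spreadsheet: cells are values or formulas over other cells.
# Formulas are direct cell references only -- no user-defined,
# named, composable functions, hence "zeroth-order".
class Sheet:
    def __init__(self):
        self.cells = {}  # name -> value, or callable taking the sheet

    def set(self, name, value):
        self.cells[name] = value

    def get(self, name):
        v = self.cells[name]
        # A formula is recomputed from its referenced cells on demand.
        return v(self) if callable(v) else v

s = Sheet()
s.set("A1", 2)
s.set("A2", 5)
# B1 = A1 + A2, a direct functional relationship between cells
s.set("B1", lambda sh: sh.get("A1") + sh.get("A2"))
print(s.get("B1"))  # 7
s.set("A1", 10)     # change an input; the formula sees the new value
print(s.get("B1"))  # 15
```

The recomputation-on-change behavior is exactly what makes spreadsheets feel "live" to non-programmers, even though they never see the functional model underneath.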
This sounds like exactly why this could be a great tool.
Programmers work faster when their edit-compile-run loop is shorter. Businesses generate applications faster when the spec-implement-try loop is shorter. If they can remove the programmer from the loop, and force the manager to put his spec straight into the computer (which will promptly show him what's missing and what's contradictory and what makes no sense), that sounds pretty cool to me.
It's true that business users don't know what they want. (We programmers don't know what we want, either; this is why we have REPLs and partial compilation and fast workstations.) Simonyi is not assuming business users will suddenly gain foresight. He's assuming they will never have foresight, and designing a system to make the inevitable "oh, that's not right, go try again" part go much faster.
In my experience, business users get frustrated trying to explain all the ways their business works to programmers.
Having to define the problem for programmers is the key.
If they can do it themselves, they don't have to make you understand what they want.
Will it be good software by programmer standards?
Probably not.
But if it works, does anyone care?
It boils down to this: is it easier for me to teach a business user enough IT to solve their own problems, or for a business user to teach me all the details of their problem in a way that lets me solve it for them?
Very few problems are worth training a developer on the details of my business.
Business users get frustrated trying to consciously articulate all the ways their businesses work - things they had only ever thought about at a subconscious level until they were asked to piece together an abstract model.
> Very few problems are worth training a developer on the details of my business.
But that's just what you'd be doing anyway, substituting a dumb prompt (the machine) for a smart, interactive one (the developer). There's a certain amount of domain knowledge that has to be extracted from the mind and modeled before any program will function correctly, and all a developer really is (beyond a simple expert system for picking algorithms) is a translator who knows the "good questions" to ask to refine the model in your mind.
I'm reminded of an exercise I once did in elementary school: we would each draw some odd shape on a piece of paper, then pick another person and, by blind dictation alone, try to instruct them in drawing a perfect replica of what was on our page. There was a distinct division between the people who assumed others would carry the same assumptions in their mental model as they did, and those who knew that for any true communication to occur, portable standards had to be adopted (in this case inches and degrees of rotation, basically turning the penman into a Logo turtle).
Programming is all the computer-science hard-problem-y stuff, for sure, but more importantly, I think, it's figuring out just what problems we're trying to solve. The former may one day be done by AI, but the latter (unless schools start teaching logic, rhetoric, philosophy, psychology, and perhaps something like linguistics from an early age) will always be the realm of those who have an innate understanding of the differences between individuals' mental models.
Flawed premise. Software and UIs designed by "regular users" (unless they're professional designers, for the UI part) tend to suck far harder than almost anything cooked up by programmers.
Furthermore, there's no getting around the fact that computers are precise machines with no real ability to divine what you mean.
Even in declarative programming where nearly all the accidental complexity is removed, there's still the actual logic to implement, which in large quantities still requires tons of work by diligent, skilled programmers.
Good point - software is hard because you have to take into account the logical ramifications of each decision you make. Programmers are good at this - business users are not.
Well, I don't know about you, but I'm awaiting the day when people design their own clothes, write their own legal documents, and perform their own surgeries. After all, billions of dollars are lost every year to medical mishaps, so we should make tools so everyone can administer their own medicine!
Cobol was designed on the premise that it would be so "English like" (and by using business expressions as keywords) that it would allow business people to express things in familiar terms so that they could write their own programs and fire all their programmers. Instead, programmers were still necessary and they ended up having to deal with a horrible clumsy language. This is the same thing, only applied to visual representations.
In other news, I've developed a type of brick that's so easy for homeowners to lay, that architects will no longer be necessary!
I think it would be interesting if we could come up with an addictive way/game to teach programming to teenagers without them realizing it.
I believe making the jump from research project to something people can actually use is the problem. James Gosling's Jackpot project had a lot of promise. It sounded fascinating five years ago that the father of Java was turning his focus and attention to making programming better.
"Bill Venners: What's the state of Jackpot, your current research project?
James Gosling: Jackpot has been really cool lately. It's what I'm spending most of my time on, and it's been a lot of fun. I was really hoping to have something I could hand out at JavaOne this year, but I've been finding too many entertaining things to do.
It's a very different world when a program is an algebraic structure rather than a bag of characters, when you can actually do algebra on programs rather than just swizzling characters around. A lot of things become possible."
Intentional programming is about WYSIWYG Lisp macros. This means representing abstractions using the familiar notation of their domain, integrated in a way very similar to what Lisp macros do for s-expressions.
Simonyi approaches it from a different perspective, however: he seems to think about it as an evolution of OOP and software components.