If I were to invent a programming language for the 21st century (wordsandbuttons.online)
121 points by okaleniuk on Nov 17, 2018 | 175 comments



> Ultimately this:

> FILE * test_file = fopen("/tmp/test.txt", "w+");

> Should become something like this:

> create file /tmp/test.txt for input and output as test_file

> Syntax highlighting should work instead of syntax notation just fine.

I couldn't disagree more.

The first example is easily scannable visually -- a variable is being created from a function call. I can tell because my eyes immediately notice the middle "=" first (easy to scan without reading) and the "(...)" in the second half (also easy to scan).

The second example is all words without punctuation, which requires me to read the entire thing word-for-word to figure out what it's doing.

We certainly shouldn't aspire to regex-like syntax for programming... but we also shouldn't aspire to prose. A healthy balance of words, punctuation, and whitespace/indentation helps to let our eyes understand the structure of code at a glance.

I'd argue that C-style syntax actually achieves this quite well, although I'm also a fan of named (as opposed to ordered) function arguments.


I wholeheartedly agree with you on this.

`create file /tmp/test.txt for input and output as test_file`

To me, this does not explain what's happening here with any clarity. I'm left feeling overwhelmed with uncertainty and ambiguity.

Is there an iterable being acted on? Is this blocking? Is it creating a variable? Two variables? What type is /tmp/test.txt when it has no quotation marks, and how does it handle spaces or concatenation/methods?

Those questions are just the tip of the iceberg. It goes on and on.

Not only that, but you're actually saving time by using symbols rather than words. There's a lot of decoding that goes into reading a whole word or sentence, versus just flowing and gliding along symbols and keywords. There's a reason mathematics typically uses symbols rather than huge verbose chunks that are open to interpretation.


>Is there an iterable being acted on? Is this blocking? Is it creating a variable? Two variables? What type is /tmp/test.txt when it has no quotation marks, and how does it handle spaces or concatenation/methods?

Well, if you knew the language, you'd have answers to all of those questions.

Plus, if you didn't know C, you'd be baffled by the other example as well...


For any reasonably complex example it's not at all clear that this is so. Being more English-like is actually a hardship, not a benefit, because natural language is famously ambiguous and difficult to decode.

Either your language is incredibly complex, or it understands only a very stilted subset of normal language that merely happens to look like natural language, and you have to remember to use that stilted subset. In which case, what is the benefit of this over foo = bar?

When it comes to complex language, we seem remarkably bad at even simple things.


> Being more english like is actually a hardship not a benefit because naturally language is famously ambiguous and difficult to decode

The rules would be no better for expressing the exact same operations and relationships. English would need to be augmented to express the same things precisely. The syntax would change, but that's the point: it doesn't solve anything. Making programs more verbose and approachable is already a common theme in scripting languages. How far is too far?


Very much agree. Programming notation, like math notation, is supposed to make it easy to convey exact meaning, unlike human language.

Every attempt to make code look like English has worked out poorly. SQL, for example, is incredibly verbose, with complex syntax. This obscures what are otherwise very good semantics. I think this is why a lot of people hate writing queries.


I rather think math is a good counterexample. Or physics, rather. There aren't enough Greek letters in the alphabet to satisfy the average academic paper. You first have to load the meaning of 20 different symbols into your memory, then decipher a formula, before you can process it.

Now, people who read lots of papers are probably used to this exercise, but that doesn't mean it's a good thing (you can get used to running on one leg, but that doesn't mean there aren't better ways to run!). I am of the opinion that initials, or full words, should be used in formulas.


Those who are not used to the terminology of physics or math would not be able to follow a paper whether it was stuffed with the Greek alphabet or with nice descriptive names. They'd be baffled by the formulae and concepts, and would lack the understanding needed to see the forest for the trees even if it were written like prose. Maybe even more so, as a formula at least gives some idea of what goes in and what comes out, what belongs together, and in what order things are to be read, by virtue of some easily understandable symbols. While it is a fact that some people - often those who lack self-confidence - revel in being obtuse to increase the appearance of being the guardians of some semi-magical 'Knowledge', this is not the reason why STEM fields tend to gravitate toward their own 'secret languages'. This starts in school, where children learn that 1 + 1 = 2 instead of "take the abstract number one, add another abstract number one, and you now have the abstract number two".


All those symbols are necessary for readable math, though. For millennia people did math as you describe, and it worked very poorly compared to modern math notation.

For example, how would you write something like a tensor H_i^{kl} (pretend those are rendered sub and superscripts)?

Looking at that symbol, I immediately understand that H is a tensor with one lower index and two upper. Now, in precise words, it would probably be something like:

"Let H be a tensor with lower index i and upper indices kl."

Oh wait, we've already had to define all the variables anyway (since we don't want to specialize it to an individual tensor, because that would be useless). Why not just use a bit more notation and write it as H_i^{kl}?

If we ran out of letters, we'd have to use Greek ones. Now we are back to where we started.


And yet after 30+ years no one has adequately managed to replace it. God help us if the replacement is one of those JSON-based abominations. My main complaint with SQL is that it can be somewhat difficult to make DRY.


CTEs can go a long way toward that, as can stored procs.
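For example, a minimal sketch (table and column names are invented for illustration) where the aggregate is defined once in the CTE and then referenced by name instead of being repeated:

    WITH order_totals AS (
        SELECT customer_id,
               SUM(quantity * unit_price) AS revenue
        FROM orders
        GROUP BY customer_id
    )
    SELECT customer_id, revenue
    FROM order_totals
    WHERE revenue > 1000      -- reuses the expression by name
    ORDER BY revenue DESC;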


LATERAL joins can help with reusing calculations https://www.periscopedata.com/blog/reuse-calculations-in-the...


Wow, that's a bad example. Any decent query engine (which I assume Postgres has) would reuse common subexpressions.


Even if duplicate expressions are optimized by the engine, they hurt the readability and maintainability of the query, especially if they are big and complex.


As can views.


> SQL, for example, is incredibly verbose, with complex syntax. This obscures what is otherwise very good semantics.

I disagree; I think SQL syntax is one of the things that makes it very clear and easy to use. It's got a couple of warts, especially related to automated tooling (the biggest being the fixed order of clauses, with SELECT before FROM), but it's exceptionally well fit for its purpose.

> I think this is why a lot of people hate writing queries.

I've known more people comfortable with SQL and thrown by typical programming languages than the reverse.


>I've known more people comfortable with SQL and thrown by typical programming languages than the reverse.

Non-programmers often find SQL queries easier, and simple SQL queries are intuitive. Programmers, who are likely to do more complicated ones, usually don't like it. At least in my experience.

The issue is that it is hard to quickly visually parse complicated queries, like big CTEs. Having it in a less English-like notation would go a long way to fix this.


I'm a programmer who has had to do plenty of complicated SQL queries; the difficulty I've had with those has been about 10% quirks in the design of SQL, and 90% bad database design that I'm not in a position to fix.


Well, even if you had a good database design, you'd only then be able to argue that a greater portion of the problem is SQL.


Personally, I much prefer writing queries in plain SQL than using an ORM unless it’s for trivial things.


SQL is like the only language I like. It's so easy and unambiguous. What I read is what happens.

Whenever I work on a *nix box I grit my teeth in frustration at how OLD and UNFRIENDLY it feels. I'm shocked that people still accept any of these archaic tools, whether it be for elitism/gatekeeping ("you aren't really into computers, otherwise you'd get it") or for tradition ("it's the way it's always been, why bother changing what I know, despite it being an inconvenience for everyone else who isn't in our group").

Why not just write your language to have three or four syntaxes (as a suggestion: compressed, C-style, shorthand, and verbose, with plugins for adding new syntaxes or spoken languages, dual screen for pair programming accessibility) and with the press of a button be able to swap between them? This way anyone can read your code in whichever way they please.

Maximize accessibility to all. It's not that hard.


>Every attempt there has been to make code look like English hasn't worked out well

Not true. Ruby is incredibly readable.


I'm not a Ruby programmer but have over the course of my career had numerous occasions where I had to look at "incredibly readable" Ruby code.

No, no it isn't. Sure, you can infer some meaning in the various "DSLs" without knowing any Ruby but you're left facing exactly the problems of ambiguity the GP pointed out.

You know what the words mean because they're just English words but you don't know their exact definitions in that specific context. And even if two "DSLs" use some of the same vocabulary that doesn't mean the words will mean exactly the same thing.

It's just hiding the arbitrary implementation behind a very ambiguous API that might read as two completely different things to two different readers. Let alone trying to manipulate the program (at which point you realise that even though it's supposedly a "DSL for laypeople" you still need to understand Ruby idiosyncrasies to be able to work with it).

Ruby is not incredibly readable. Ruby is however capable of letting you create APIs that make code look a lot like plain English language. That doesn't mean the code "reads" like plain English language because unlike plain English language it's just a bunch of whimsical function and variable names with arbitrary implementations.


I think the goal in general should be to minimize conceptual depth and maximize predictability for the problem domain you are trying to address.

I dislike the C style for several reasons:

> FILE * test_file = fopen("/tmp/test.txt", "w+");

First, you have to understand the implications of it being a pointer, which includes the little warning light in your head about ownership and allocation/deallocation. Then you have to remember that "w+" translates to "read and write, truncating the file". The fact that it's a string means you know it by memorization. But also there are warning lights in your head about text encoding... just standard things you're trained to be vigilant about. Where you can avoid triggering those reactions, I think it's worth trying.

These days we have the help of IDEs as memory aids, so for things like "w+" I prefer enums, so that a tap of the auto-complete button shows you options like "read mode", "write mode", and "truncate".
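A minimal sketch of the idea in Python (the OpenMode flags and open_file wrapper are hypothetical, purely illustrative):

    from enum import Flag, auto

    class OpenMode(Flag):
        READ = auto()
        WRITE = auto()
        TRUNCATE = auto()

    def open_file(path, mode):
        # Translate self-describing flags back into a stdio-style string.
        if mode == OpenMode.READ | OpenMode.WRITE | OpenMode.TRUNCATE:
            return open(path, "w+")
        if mode == OpenMode.READ:
            return open(path, "r")
        raise ValueError("unsupported mode combination")

    test_file = open_file("/tmp/test.txt",
                          OpenMode.READ | OpenMode.WRITE | OpenMode.TRUNCATE)

The auto-complete list then reads like documentation instead of a cheat sheet.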

I suppose if I were to have any marginally controversial opinion on this topic, it's that I believe we should be embracing IDEs as "portable Stack Overflows" and designing languages with IDE assistance in mind.


>First, you have to understand the implications of it being a pointer, which includes the little warning light in your head about ownership and allocation/deallocation

That's a semantic issue, not a syntax/style one. If your language has pointer semantics, you'd need a way to express them in either proposed style, and the same warning light should appear regardless.

The "w+" example is much more relevant, as it could be expressed more clearly and be well-defined (e.g. an enum with full names). But all you could do with the pointer syntax is say FILE POINTER test_file, and gain nothing, because the "*" syntax isn't at fault here.


While this is a valid criticism that I agree with, I feel as if it misses the forest for the trees. The problem is entirely that time is spent arguing over minor differences in syntax and style, while comparatively little is spent on the bigger questions of testing, security, updates, and ease of use of the end product. No user of software particularly cares whether it's written in C or COBOL; they care whether it reboots at midnight to apply security updates, losing their work, or wastes an extra three minutes a day because it has to contact a web service.


Indeed, there's a reason that SQL statements still need formatting and keyword-capitalization hints to be readable.


What language doesn't need formatting to be readable? Have you tried reading whitespace-stripped C or JS?

And the caps hint is the same as syntax highlighting... it's not even that useful if you have highlighting, but boy is it useful when you don't.


That's my point (and the opposite of the author's point)


Someone didn't finish reading the article...


In our defense, we also had to click a button; who really isn't learning from their time in the modern world! /s


I think the OP perhaps wants to look at AppleScript?


Yes, this proposal is basically AppleScript.

AppleScript is (relatively) easy for non-programmers to read and understand, because it is built on natural language.

Unfortunately AppleScript is a huge pain to write, because as a programming language it is still quite strict about what type of syntax it will accept. It also is fairly limited, because speccing a natural language programming vocabulary/grammar gets increasingly difficult as you start adding features.

As a result even the best AppleScript programs are not concise at all, with limited use of abstraction. Most AppleScript programs are cobbled together by copy/pasting code found on the internet, and are full of bugs.


In my experience, the problem people have with learning programming is not syntax or abstraction - it's thinking like the abstract machine. Or in other words, understanding that having an array and a loop in your code isn't enough to make the computer understand and do what you want.

Basically, the thing that the PB&J exact instructions challenge teaches.


It's interesting you mention an array and a loop, because those can be separate abstractions too.

no need to think about loops:

    array.each { |x| puts x }

need to think about loops:

    for i in 0...array.length do puts array[i] end


The first example saves you from thinking about indexing; arrays and loops are relevant in both.


The OP wants a modernized COBOL.


As far as I can tell, the OP wants people to stop inventing languages and "just write good code". They're deliberately describing COBOL as exciting, new and modern to make their point.


In fact, that's what they say after clicking the buttons at the bottom - which are actually part of the article.


Actually not, if one reads carefully.


Well all right, they don't really want COBOL - they're saying that that's what they were describing as something they wanted.

But my point was primarily that people probably missed that they had to click the buttons to read that.


Completely agree. A balance is needed between the terseness of Perl and the verbosity of AppleScript. Too compact, and the language becomes a pain to read — although I am sure the author feels clever at having accomplished so much in so little code. Too verbose, and it is a pain to write and read. I absolutely hate reading through or writing pages upon pages of code to accomplish a trivial task.

Perhaps that is the reason languages like Python have been so successful. They seem to strike a good balance. I am not suggesting they are perfect by any means — they have their own fair share of compromises, but they do the job well enough. And I certainly don't want to be writing COBOL in the 21st century. :-)


I agree somewhat, but I think it's important to consider that the ease you experience reading C-style syntax is probably greatly helped by the wide adoption at least some of its features have seen, and by the repeated exposure you've had to it, as most of us have.

There is a choice to be made as to whether amateur learning should be optimized for (not that their example would be the result), or a greater tie-in with the existing programming ecosystem. Both have their trade-offs, similar to the question of whether we optimize for ease of learning or ease of use for experienced programmers (think BASIC vs APL).


Which of these forms you use does not matter all that much. They both have the same semantics: a file is opened, and it is up to the programmer to remember to close it. What actually is an improvement is Common Lisp's with-open-file or, similarly, Python's with:

  with open("somefile.dat", "wb") as f:
    f.write(somevar)
which automatically closes the file when it leaves the scope.
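Roughly speaking, the with block is shorthand for a try/finally (the real protocol goes through __enter__/__exit__, but the effect is the same):

    f = open("somefile.dat", "wb")
    try:
        f.write(somevar)
    finally:
        f.close()  # runs even if write() raises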


> The second example is all words without punctuation, which requires me to read the entire thing word-for-word to figure out what it's doing.

Do you think that's also a problem with natural languages (e.g. English)? Should we switch to a more readable natural language with a syntax inspired by programming languages (or even visual programming languages)?


Natural languages don't usually convey a precise set of instructions to be executed, or target a system that has no common sense, situational awareness, or other human traits.

The purposes of natural and computer languages are quite different, so it doesn't make much sense to model one after the other. There have been attempts to make more precise human languages (Lojban, for example), but they never took off. Maybe because it's too tiring to precisely describe meaning that can be inferred from context with sufficient probability, or clarified by asking questions.


I disagree. The only reason the first example might be "easier to scan" is that the reader happens to already have experience with that language. COBOL got a bad rap for being "too verbose", but that kind of verbosity, paired with a language that actually supports modern techniques for code organization and modularity, would be a godsend in this new era of blurring lines between designing software and designing businesses. The need for programs that can be understood by anyone without a full-blown programming background hasn't gone away.

It's worth noting that what the author seeks is already possible with a language like Tcl, or even with a shell language like Bourne or one of its descendants. Sure, most programs don't do that (probably because there aren't very many getopts-like libraries to parse that sort of argument list), but it's possible nonetheless.


But aren't you describing familiarity rather than readability? Ask a non-programmer whether they think the syntax of C code feels logical and natural.


Ask me if I think that the syntax of the Chinese language feels..., well, like anything at all (punctuation marks are a western influence).


Yeah. But if you ask a VB or Python programmer who never wrote in a C-style syntax, you will probably get the opposite opinion.


Python's way is a good compromise IMHO:

    with open("/tmp/test.txt", "w+") as test_file:
        do_something_with(test_file)
And you get test_file closed for free at the end of the block.


It looks like AppleScript, which is an absolute nightmare to have to use.


Another thing is, with the proposed syntax it's very easy to screw up.

Creating a file named "test" has the same syntax as creating a file whose name comes from the variable test.


> The language for the 21st century should be business oriented, English-like and not dependent on native types.

It is shocking to me how often the problem of "programming is difficult and time-consuming and unintuitive" is to be resolved by "making programming languages more like English". The language is there as a tool to make programming easier!

Do you know how many times I've taken someone's keyboard to type a tiny snippet and then explained it in English with the code as reference? Why do you think I don't just explain it in English first? Because English is verbose and ambiguous and not suited to describing logic. That's what programming languages are for.

It's the same thing whenever someone makes a Scratch clone meant for real work. If you limit your scope enough, and you benefit from the visual layout, I think it can be a productivity win. Otherwise, they just tend to make the trivial stuff drag-and-drop, and the easy stuff difficult or tedious.

Unreal Engine's material and animation blueprints[0] satisfy the above criteria, and are a pleasure to work with IMO. But they're both laser-focused on doing one thing. With materials, seeing the output preview of each stage of the pipeline is a great help. And with animations, it's essentially a state machine, so seeing it graphed out in front of you is how you'd want to prepare to write the code anyway. Both are clear wins for having a visual layout, but I want to stress that these are exceptions to the rule.

EDIT: I should also bring up Inform7[1] as another example of a focused tool that benefits from not being code. If your goal is to write a text adventure, it makes sense to use prose to create it. It also somewhat benefits the authors, who are typically more writers than engineers, but don't even think that means you can just sit down and start writing without a learning phase.

[0]: https://docs.unrealengine.com/en-US/Engine/Rendering/Materia...

[1]: http://inform7.com/


In support of what you are saying, here are some things that make programming _actually_ difficult:

- Reasoning about what the program should do in edge cases (e.g. if one write succeeds but the other fails)

- Designing data structures to represent complex information (e.g. deciding how to annotate an AST with type information)

- Determining if some functionality of a method is ever used

- Avoiding leaking data, even through side channels (like timing attacks)

- Making an interface that is hard to misuse

- Understanding how to use a poorly designed interface

- Understanding the performance implications of different designs


I'm going to take it one further. All of those are good but I think there are some "easier" and more fundamental things that are massive roadblocks for beginning programmers.

- Understanding the problem itself

- Determining how to actually solve that problem

- Expressing all the details of the solution you've chosen.

- Determining what has gone wrong and why

- Knowing when to step back and try another approach


On Inform7, an interesting talk by its creator: "Inform: Past, Present, Future" - http://www.emshort.com/ifmu/inform.html

Slightly discussed here : https://news.ycombinator.com/item?id=18105644


> Do you know how many times I've taken someone's keyboard to type a tiny snippet and then explained it in English with the code as reference? Why do you think I don't just explain it in English first? Because English is verbose and ambiguous and not suited to describing logic. That's what programming languages are for.

Exactly this — and it is true about any human language. They were built for humans who are capable of interpreting things given the context. Try covering every single case in English and you have a lengthy legal document which no one wants to read — and it still has some ambiguity left.

If you want an example of what English would look like if used for programming, take a legal document and multiply it by 100.


Yep, I specified English because that's what the article mentioned, but I'm sure it applies to most any other spoken language as well.


Props for the Inform mention... One of the few cases where writing in pseudo-English is the right answer.


Thanks for such clear examples to demonstrate your point.


Yeah, it's a bit odd to make my argument by exclusively citing counter examples, but I think it was the right approach here.


>The language for the 21st century should be business oriented, English-like and not dependent on native types.

Not sure this is a popular opinion as presented by the author? I'd say the most sought-after PL features are expressive type systems and those that reduce boilerplate. I've never heard someone long for a more "business oriented" programming language.

>It’s naïve to think that the language is responsible for the quality of code and that by adding some bells and whistles (or removing some bells and whistles), we can automatically make everything better […] I feel like the core issue here is in the domain of sociology and psychology, and not the programming.

I'd disagree; e.g. Rust's semantics rule out whole classes of common runtime errors. More expressive languages such as Kotlin, Scala, Swift, etc. are seeing rapid adoption over their more boilerplate-heavy counterparts, such as Java or Obj-C. The desire for better languages is justified IMHO, as there's still a lot to gain. New PL development is not rooted in neurotic unhappiness; real progress is being made.


>I've never heard someone long for a more "business oriented" programming language.

I don't think that was a genuine opinion of the author, more a cheeky hint at the reveal that they were describing COBOL (COmmon Business Oriented Language) the whole time.


>It’s naïve to think that the language is responsible for the quality of code and that by adding some bells and whistles (or removing some bells and whistles), we can automatically make everything better.

The author doesn't understand the work done in language research, so it must be pointless.

>We were not happy with Fortran and COBOL, so we invented C++ and Java only to be unhappy with them too in some 20–30 years.

Great, so let's stick to the thing that made us even less happy. Screw gradual improvements, revamp the field or go bust.

>But we have to blame something. Being software engineers partially responsible for the world of crappy software, we wouldn’t blame ourselves, would we? So let’s blame the tools instead!

I fuck up all the time, my colleagues fuck up all the time. Some tools are actively helping us make fewer fuck ups over time.

With this article, the author is pretty much coming out as the archetypal blub programmer. He sees all this "new, seemingly pointless stuff" polluting an area he believes himself knowledgeable about, and rather than make the effort to understand why people are bothering at all, it's easier to say "it's all a waste, you should just learn to program in blub, since you're only reinventing blub with crap added in".


People, don't comment before you read the whole article (and click on the survey at the bottom)...


Thanks for pointing this out. There really isn't any way to tell that the rando survey at the bottom is not actually a survey. I read what I thought was the whole article, but demurred from participating in the "survey."


The problem with English is that it's so ambiguous. The author wanted it to be obvious that you should click that button to see his conclusion but ambiguous English seems to have led many readers to misunderstand his intent. If only we had languages that could express things with less ambiguity, perhaps by incorporating symbols and stricter grammar.


Don't forget your sense of sarcasm either. The top comments in this thread apparently didn't get the point of the article or stopped reading early.


And it's pinned to the top because anyone who didn't read the article is basically expecting that sort of "hah, nice try dummy" criticism in the comments. It's like candy.


I'm glad this took a turn, because I was not following AT ALL. But even that last conclusion I only sort of agree with. If they are suggesting Go and Rust and Kotlin bring nothing useful to the table, I suggest they reevaluate. I am way more productive writing Go than C++, Mozilla invented Rust literally to solve concurrency problems, and Kotlin saves you a ton of typing, which speaks for itself.

Yes there's hype. It's not about hype. It's about exploring permutations of the tried and true. Applying what we've learned. You see a lot of composition over inheritance - not because composition is a new concept, but because we learned how inheritance can be bad. It's that simple.


> Kotlin saves you a ton of typing

And more importantly, it saves time and focus when reading code. You don't have to manually parse dozens of lines to verify "yes, this is a standard value object".


Having a language that almost looks like English is actually harder to program in than one with a well-defined, strict syntax.

It falls into a sort of uncanny valley. Apple tried it, sort of successfully with HyperTalk but really badly with AppleScript.


The hidden twist itself is interesting, but the described features didn't sound like they incorporate any PLT advancements of the past decades, the first one indeed being reminiscent of older languages. I thought it was just somebody describing the language bits they like for some reason, and saying that they'd like new languages to be like that.

If they were more convincing, it would be surprising to see that they were all already present in an old and somewhat forgotten language. Though the overall article (and the conclusion in particular) seems to be more about rhetoric than about reasoning. I agree with the conclusion, but it doesn't seem to be well backed by (or follow from) the rest of the article.


> And it’s huge. The INCITS 226–1994 standard consists of 1153 pages. This was only beaten by C++ ISO/IEC 14882:2011 standard with 1338 pages some 17 years after. C++ has to drag a bag of heritage though, it was not always that big. Common Lisp was created huge from the scratch.

Common Lisp the Language was published in 1984 and had 460+ pages. Heritage? The language itself was mostly based on a larger earlier language: Lisp Machine Lisp. That one had language documentation, but not a standard. The standard for Common Lisp was published in 1994, after eight years of work starting in 1986. It was based on the 1984 Common Lisp, which was based on NIL, S-1 Lisp, Lisp Machine Lisp and Spice Lisp - all projects which developed a successor to Maclisp.

> A programming language should not be that huge. Not at all. It’s just that it should have a decent standard library filled with all the goodies so people wouldn't have to reinvent them.

Common Lisp is in part a larger Lisp dialect (but with a relatively small number of built-in constructs: http://www.lispworks.com/documentation/HyperSpec/Body/03_aba... ), and the standard includes a standard library, without actually dividing the language into a core and a library.


If I were to invent a programming language for the 21st century, the main feature that would make people frown is that it would not be laid down in a flat text file.

- It would allow high-level programming like constraint statements a la Prolog.

- It would allow graphical block building a la PureData

- It would generate lower-level versions of these and allow a user to "zoom" into the generated code, down to the machine code (or pseudo-code).

- It would allow linking in lower-level imperative routines.

- It would have a fucking wysiwyg GUI editor.

- It would be designed to have a powerful IDE and top-notch IntelliSense. I believe these need to be thought about at the design stage of the language, not added later.


> If I were to invent a programming language for the 21st century, the main feature that would make people frown is that it would not be laid down in a flat text file.

Yes! It drives me crazy that due to a historical accident, we're still essentially writing programs as decks of 80-column punch cards. Nowadays the decks are virtual and the editing tools are better, but languages really haven't evolved to take advantage of modern I/O. I'll point out that the Xerox Alto (1973) let you write your source code with full WYSIWYG formatting (fonts, bold, italics, etc), so we've taken a step back since then.


I think that'd be useful for commenting but nothing else.

Whenever I see something in code, I wonder, what does this do? If I started seeing bold and italics and it did nothing but draw attention, I would almost certainly feel it a distraction.


> I think that'd be useful for commenting but nothing else.

But isn’t that a good enough use case?

Almost nobody bothers to put proper formatting, diagrams, tables, formulas, references etc in their source code. At best we get ASCII approximations.

To see some examples of what it could be like look at Mathematica, Jupyter, literate programming, and org-babel.


... all of which require some kind of rendering software that has to actually be installed on your machine.


So you do for images, photos, spreadsheets, PDF files, websites, schematics, board layouts, databases, mechanical drawings and models, neural network models, music masters, audio tracks, video footage, and any other file format used by professionals.

Better yet, none of the formats I mentioned are raw text; they're all binary. And all have well-developed tools for editing.


More than formatting, I am thinking about defining things like functions and classes in a formatting-independent way, and doing away with the order of definitions entirely.

You should have a list of classes and namespaces and not have to worry about how they are actually stored in flat files.


Yes, that too. There's no point imposing an order on functions and classes; I think that when you're looking at a function, related functions should somehow float nearby. (This doesn't seem to be a very popular view.)


Isn't it more useful if such marks, like showing something in bold or in a different color, are automatically applied to classes of tokens like identifiers or strings?


> The INCITS 226–1994 standard consists of 1153 pages. This was only beaten by C++ ISO/IEC 14882:2011 standard with 1338 pages some 17 years after. C++ has to drag a bag of heritage though, it was not always that big. Common Lisp was created huge from the scratch.

This is a surprisingly incorrect remark, given that the author is apparently fairly well aware of programming language history. Common Lisp is a relatively successful attempt to standardize and bring together a lot of different Lisp varieties that evolved over decades.


Indeed. The original description of Lisp included something like 10 primitives. That's it. Numerous full-blown Lisps have been built from just those few primitives. One article I read even showed how arithmetic can be built from car and cdr.


But those full-blown Lisps have to have that full-blownness documented. It can't just be "here is the documentation for the ten primitives we used; to understand everything else, read the source".

It would be a bad doc if, for every feature, it rambled on about how that feature is constructed out of the ten primitives, rather than specifying the relevant requirements for that feature.


Source? I’d love to read more about this


http://languagelog.ldc.upenn.edu/myl/llog/jmc.pdf The Roots of Lisp by Paul Graham.


Not car and cdr but close enough https://en.wikipedia.org/wiki/Church_encoding


Also, cons, car, and cdr need not be primitives. They can be built out of lambdas. In Scheme:

  (define (cons x y)
    (lambda (m) (m x y)))

  (define (car z)
    (z (lambda (p q) p)))

  (define (cdr z)
    (z (lambda (p q) q)))
(From https://mitpress.mit.edu/sites/default/files/sicp/full-text/..., exercise 2.4)
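A quick check of how the pieces fit together:

  (car (cons 1 2))  ; => 1
  ; cons packs x and y into a closure; car applies that closure
  ; to a selector lambda that returns its first argument.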


Sure. Now to implement consp, all you need is a global list which tracks all lambdas that came from the cons function; if the argument object is eq to anything on that list, then it is a cons! We also now need another primitive: weak pointers. We don't want our global list to prevent otherwise unreferenced cells from being reclaimed.

typecase is going to be fun to develop and optimize, with everything being a lambda.


Over less than a decade, I, one person, made a smallish Lisp dialect and documented it with a terse reference manual that doesn't repeat itself or pontificate all that much. It's formatted as one giant man page. If we render it to PDF, it's around 650 pages, with no table of contents or index. I am not a committee; I have no historical baggage from multiple base dialects. Shit that just popped into my head over a small span of time already adds up to 650 pages.


It's also surprisingly incorrect about C++ given that he goes on to say "A programming language should not be that huge. Not at all. It’s just that it should have a decent standard library filled with all the goodies so people wouldn't have to reinvent them."

The 1338 pages of the C++11 standard are more than two-thirds about the standard library. Only around the first 400 pages are about the core language.


I like the twist at the end of saying it already exists and it is COBOL.

Maybe nobody uses COBOL for the exact reasons mentioned in the article.

The author really wants to make an ambiguous, confusing language not fit for logic or any form of serious data processing. Which is pretty much entirely useless.


So, COBOL for everyone then? lol. Your "create file" example looks exactly like COBOL. I disagree completely that business-oriented languages (the BO in COBOL stands for "Business Oriented") are the answer.

You decry DSLs because they are "worthless" for the user to learn -- but you forget that they make the work of the original developers that much easier, which is why they exist.

You bring up Lisp as an example and then are upset at its size. If there is one language to rule them all (or at least one that's covered all the bases), it would have to be Lisp. Of course, there's no one perfect language for everything. To paraphrase Ansel Adams, there's no perfect language, only the perfect language for the kind of programs you write.


The buttons at the bottom are not a survey, the rest of the article is hidden behind them.


That's a poor UI for an article. Just say what you want to say! If this article were written in the hypothetical, it would make the point much better.


That appears to be the gimmick of that entire website, based on the URL.


The twist would've been better with more setup. I can't fault people for not reading the whole thing and guessing a language at the end.


> But realistically, how often do you have to program high-performance computations? I suppose, unless you work in a narrow field of research and engineering that requires exactly that, not very often. And if you do, you have to use specialized hardware and compilers anyway. I’ll just presume, a typical 21st-century programmer don't have to solve differential equations very often.

Does the author know about all the efforts in this area, in things such as machine learning and statistics? This type of argumentation is so... 20th century? :)


Ah, but we do use specialized hardware and compilers! (Never mind that they're sold by Nvidia; nobody else is buying Quadros by the truckload.) And moreover, you don't do the differential equation in the code; you do the partial specialization by hand and throw that into your PyTorch kernel.

The fact that we are using Python, instead of Fortran or Matlab, really does show exactly how little hard math is being done by the computer, even in these fields.


What's the purpose of a new language? You can't just say build a language for the 21st century. That's pretty arbitrary. Each computer language is designed to solve a particular set of problems. No language is the best.

The "down with syntax" stuff is a particularly meaningless argument. Python probably has the most intuitive syntax of any "modern" language. I know of many personal projects in Python but very few professional ones. The lack of syntax makes Python a great language for prototyping but a nightmare for scaling.

To choose which language you should use, I suggest figuring out the following:

- What problem are you solving? Web app, scalable service, low level processing?

- How correct does it have to be? Are bugs and edge cases okay to be tolerated? If not, I'd recommend a more functional language

- How many people are working on it? If many people are working on it, I'd recommend something with a strong type system.

- How "fast" does it have to be?

These aren't all the considerations, but I really dislike the idea of modern vs antiquated. Languages with lots of usage are rarely outright bad (except JavaScript; I love TypeScript, btw).


A link to increase your knowledge of non-personal Python projects: https://wiki.python.org/moin/OrganizationsUsingPython

What about questions like:

* How happy / stress-free do you want to be in your work?

* How fast do you want to deliver?

As a result, a programming language's syntax is just one component and should be considered in the context of the various practices and tools in use. E.g. unit-tested Python code is better than non-unit-tested C#.


There are certainly many, many professional Python projects, because the world is a giant place, but in general I see it as a less professional language (unless you're talking about data science).

As a full-stack dev, I'd rarely if ever recommend Python for any of the problems I face professionally. For data science problems, Jupyter + Pandas + SciPy makes a pretty awesome combination.

As for unit tests, I think that misses the point of my argument completely. This is the situation in which I would evaluate Python: you're starting from scratch; what types of problems would you use Python to solve? The answer is, there aren't that many. I think the most common reason people choose Python is that it's the language they feel comfortable with.


> The lack of syntax makes python a great language for prototyping

And I thought Lisp had a lack of syntax.


I meant more along the lines of what the author is describing. There's not a ton of syntax in Lisp (of which I have only a cursory understanding, so take this with salt), but Python has the most "readable" syntax. Words like "with", "is", and "not" are all keywords replacing more traditional operators.


I would love to see a language re-embrace the elegance of Smalltalk block closures. I quit doing Smalltalk 8-ish years ago. I'm happy enough with Python and Swift and Kotlin (and a lot of firmware C as well). They all have closure/lambda-like stuff, but none of them are as simple and universal as the Smalltalk semantics/model of closures was.

The other thing I'd want is something better than generics. As I moved away from dynamic languages, I found I was mostly OK with the Swift/Kotlin type system. Except for when generics get involved. For all but the trivial cases, I always feel like I have to really downshift to work through the compiler errors. It's like the compiler and I just quit understanding each other. I don't have the tools to express "this is what I want to do, you figure it out", and the compilers just say "all your types are belong to us".


I'd like to see macros used to solve this instead of generics; then you have control over what and how you are retrieving.


Scala blocks-as-closures are fairly simple and universal.


I really disagree with the author's points about macros.

If your language doesn't support macros you'll end up with programs that parse some arbitrary code and generate code in the language you're working with. That's far worse than a macro system that at least works in the confines of a language's syntax.


Yes, I completely agree. I wish Golang would add compile-time macros with enough information to replace the desire some have for generics. It would make Go a much superior language, as you would be able to apply a sensible amount of DRY to your code.


This article is like a litmus test for who actually reads the full article.


Eh, it's a litmus test for who sees through the framing device of the "survey" at the end. I respect the impulse to skip the "bullshit reader engagement stunt", or however they interpreted it... but I do agree that this thread is hilarious.


I upgraded Firefox on a Pine64 laptop that brought in all the schmancy concurrent CSS handling that I'm assuming Rust made possible.

It was like upgrading the computer from a barely workable state to a pleasant browsing experience for $0.

Could COBOL have been used instead to provide that same "set of rails" atop which Mozilla drove that concurrency development?

To be clear-- I'm not asking if COBOL is turing complete. Mozilla could have coded all that in C++ if they wanted to. I'm asking if COBOL would have so clearly helped the devs reason about concurrency that it would have been worth the effort of adding a new language instead of writing it in C++.


I do not agree with much of that. English-language writing is not precise enough to program a computer; it would have to be made more precise. You need to have more types, such as integer or floating point, 32-bit or 16-bit or 64-bit, etc. Better macros than C's would be needed, in order to make the more powerful kind of macro. I don't like "read only" programming languages such as Inform7. Also, different programming languages are made for different uses, and so should differ in what they do. For programming one specific kind of computer, you can use assembly language.


This seems to be missing an important piece: who is the target audience, and what problem domains is it targeted at? "For the 21st century" doesn't answer either of those questions.


Another language with a very English-like syntax is LiveCode. It is an extremely batteries-included IDE, very much HyperCard. I like it because I can download one binary and click together software that runs on most OSes. But I cannot get used to the language; far too much typing, and I find it really difficult to remember things. For me, the J code he mentions is easier to learn. Not to read, maybe (although it is not that hard; it is dense, so it is normal that it takes a little more time to see exactly what it does, even if I program in it every day), but to write. I find the English and the verbosity annoying.

offtopic:

What other environments could learn from is the batteries-included part: I spent another 3 hours today updating a .NET project. In return I get functionality I do not use and will not use. So why do I update? Because the IDE is nagging about it.

Does something exist that will ask me whether or what to update, based on what I use and the available updates? Like:

New features:

[ ] QR code

Bug fixed:

[ ] ListView - crashes when Clear is called and a row was selected

Performance:

[ ] Latency on tcp sockets improved — Linux only

etc.

Because I probably care about none of these or maybe only one.

And yet I have to spend hours on dependencies which I did not want in the first place. Or maybe I wanted only the latency fix but not the rest.

Sorry, ranting.


I think something like this was mentioned in 2600 once, where you can examine the list of changes and the code before accepting the updates, as well as install only some changes instead of all of them. It seems a good idea but may be impractical.

I have disabled automatic updates on my computer and instead manually select updates.


I like that the general feedback here is against AppleScript-like "native English" syntax. There are certainly also good arguments for and against native types, which are criticized by the OP.

However, I think blaming meta-languages is a bit too meta. Sometimes they are just diabolical tools, such as the C preprocessor (some of us hate C/C++ for lacking a proper dependency/build system, don't we). I personally will also never be happy with knotty C++ template programming, despite it being functional.

But sometimes metaprogramming is put into a language from the start with good intentions in mind -- think of Python's class and function annotations (with which I have seen very nice frameworks built) or the way LISP programs work. Actually, I also find Rust and Julia great for having the metaprogramming concept right at the core!

Doing HPC and bare-metal programming daily, I cannot stress enough that a well-engineered programming language like Rust would make my life so much easier. Code generation as a preceding step is daily business in my field, and that's not programming for the 21st century.


> Down with the syntax!

Yes, exactly! But replacing one syntax with another is not "down" with the syntax. Even the natural-language-inspired version is still text: a sequence of characters bound by a syntax, and it comes with the same limitations. I would like to see more projectional editing / structure editors.


There are some good points in this article, namely that a very lengthy language specification is a sure sign that the language is overblown and probably the victim of design by committee. Modula-2 had a 100-page description, and I would take it over C++ any day. Heck, C++ still doesn't have separate compilation. It is a total pile of crap IMHO.

Another good point is that languages that facilitate domain-specific languages create maintenance problems, which they do. Nothing fun about taking over some giant code base that has invented its own syntax, where of course the designers left minimal (or worse, incorrect) documentation.


And Oberon (also by Niklaus Wirth) has a 20 page description.


> The most exciting thing, we already have the language exactly like this! What do you think it is?

He then proceeds to list 8 OOP mutable languages and one functional immutable language (Haskell). FFS.

I hate to relay the news to the OOP zealots, but (having spent decades in OOP/procedural and about 3 years in Elixir) the future of reliable software is in functional languages with immutable data and, ideally, process supervision and massive concurrency (due to the breakdown of Moore's Law). The only platform that does all this handily is the BEAM VM (via Erlang and Elixir), and possibly Haskell.


All I want is to be able to easily call routines written in other languages.


I would recommend a Brian Kernighan talk in that regard (2015):

https://www.youtube.com/watch?v=Sg4U4r_AgJU


At Global Day of Code Retreat today, the easiest Game of Life implementation we did was in SQL.

As Elixir is to Erlang, we can create a better COBOL for the 95% of code that is simple business logic.


Really, this is worse?

FILE * test_file = fopen("/tmp/test.txt", "w+");

than

create file /tmp/test.txt for input and output as test_file

Yeah, remove the ; and the duplication of the type:

let test_file = fopen("/tmp/test.txt", "w+")


File test = open /tmp/test.txt for writing


or remove any declaration syntax altogether!

    f = open("/tmp/test.txt", "w+")


We should not be constrained to text notation only, so a technology like the projectional editor of JetBrains MPS should be there. https://www.youtube.com/watch?v=iN2PflvXUqQ


Invent it or don't invent it. There is no "if"; that's just bikeshedding. Programming languages are tools, not products. Durable languages evolve to suit the needs of projects, not the aesthetics of the coders. We see recent languages "do the same but better" (as OP opines) because the nature of the projects hasn't fundamentally changed.


I think programming in the future needs to:

- treat a program as a database

- allow programs to be visualised in many formats (visually as graphs, textually as code etc)

- allow programs to be manipulated through many interfaces (visually, textually, programmatically)

It’s not that text and syntax should die, it’s that different visualization and manipulation paradigms are appropriate for different levels of abstractions within programs.

2c


Reminiscent of language workbenches like MPS, in a way:

https://www.jetbrains.com/mps/


> However, the distinct feature of the 21st-century language design is the absence of any distinct features in the languages themselves.

I would qualify Pony as the only noteworthy fast, concurrency-safe new language; also noteworthy are its capabilities system and its complete lack of blocking IO, needed to provide concurrency safety. Also its GC, which is completely new.


Programmers are too myopic about programming. It always looks the same - long strings of text. And that's the only domain in which we can think. Any (developer) usability problem is always seen as a language problem.

Well, creativity is constraint. What we're actually doing with all of this language is exploring a space of constraints on the Turing machine. But I would argue it's actually a small sub-space of the full thing. For example, you could fix (modern) browser JavaScript and explore what you can do with that. And this is, in fact, what "frameworks" actually do! And a framework does this by providing a higher-order data-structure that the programmer "fills in" with tiny bits of programming language.

Meanwhile, this higher-order data-structure is informed by the lower-level constraints of the language, both in form (syntax) and function (paradigm). So basically this is a huge space, but it feels like parts of it are falling into place. Emphasising pure functions, immutable, serializable data-structures, certainly makes filling in those lines of code much more pleasant! So, Elm and Redux in particular have staked out an excellent spot in this space, in my humble opinion.

But the problem of developer usability is much bigger than language!


> Swift, Kotlin, and Go are probably among the most popular. [...] They don’t have anything new in them at all; they are all made by the “something done right” formula, this something being Objective-C, Java or C

Go isn't just C done right, but both C and C++ done right. Similarly, Kotlin is both Java and Apache Groovy done right.


I wouldn't say Go is C++ done right, by any stretch. The lack of any memory control and the lack of any sort of generics or templates lead to a significantly less robust type system and to code duplication.


For a better take on this solution space, check out Crista Lopes's essay on naturalistic programming: http://www.dourish.com/publications/2003/oopsla2003-naturali...


"create file /tmp/test.txt for input and output as test_file"

Now wrap it in a buffered stream, a filtered stream, and a try-catch block, and do it using this "plain English" approach. I think it will end up an unbearable paragraph of bad prose :)
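For comparison, a rough sketch of that composition in ordinary code (Python here; the io layering is standard-library API, the file name and handling are invented for illustration). Even nested three layers deep it stays scannable:

    import io

    try:
        raw = open("/tmp/test.txt", "w+b", buffering=0)  # unbuffered byte stream
        buffered = io.BufferedRandom(raw)                # buffered stream on top
        text = io.TextIOWrapper(buffered, "utf-8")       # "filtering" text layer
        text.write("hello")
        text.close()
    except OSError as err:
        print("failed:", err)

Now try dictating that, error handling included, as a paragraph of prose.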


I love this. The piece does a great job of showing that the languages you use won't solve problems that are inherently due to the programmers. It also does a nice job of showing how few people actually read the content before deciding that they "know better".


What I learned from these comments: Never hide essential parts of an article behind a button. People don't click it and miss your point.


This submission is kind of genius, because it shows how often people don't finish reading articles. There are no spoiler tags on HN, but the TL;DR is that the author is describing the design goals of COBOL, making the point that there is no programming language or environment that can solve all your problems. I slightly disagree, in a "use the right tool for the right job" sense, but it clearly helps to understand your craft.


I don't think you can blame the people for reading all the words and deciding not to vote on the survey.


If you read all the words, wasn't it fairly obvious already without clicking to vote from the last few lines?


"The language for the 21st century should be business oriented, English-like and not dependent on native types.

The most exciting thing, we already have the language exactly like this! What do you think it is?"

Not to me, no. I thought it was some poll to gauge the popularity of the various languages or some such.


The only language that conformed is COBOL; if he had put in LiveCode for a twist, it would've been slightly harder. Still, "business oriented" would've made me pick COBOL.


To be fair, half of us thought that the article was finished.

Don't blame us for not being told that you have to click on the survey at the bottom, and that when you do so, the next bit would magically appear...



Some way to have type inference without the cons of type inference or the verbosity of static typing


lol, the way this ends.


isn't this more about declarative vs. imperative?


The ideal structure of a programming language seems to depend on the user. I mean, if someone thinks in machine code and can't understand anything else, then machine code's optimal for them, regardless of how impractical it might be for just about everyone else.

What we really need is intentional programming: we specify what we want, in whatever manner is most natural to the user, and the computer figures out how to make that happen. The computational expense of transforming intent into machine code is unavoidable; it has to be paid through some combination of human and machine effort, so there's no additional cost incurred by having a machine do it.

The really cool part about all of this is that, once you have a system that can transform intent into code, it's trivially meta-circular, causing it to become self-optimizing. And once you integrate machine learning into its optimization logic, it becomes intelligent: it uses machine learning to optimize its own logic, including the machine learning itself, continuously re-emitting further-optimized variants of itself to whatever computational resources it has available. Getting it to utilize various CPUs, GPUs, distributed architectures, etc., becomes a matter of specifying their operation, such that it's like writing drivers (except the system merely needs to know what the computational resources _can_ do; the problem of how to make best use of those abilities remains a problem for the intentional logic itself).

Anyway, that's the premise of my startup, Intentional Systems. I've been working on this for a good while, and ended up applying to YCombinator's W2019 round. Regardless of how that goes, this is what's happening; intentional operation's the future. And it's awesome!

With respect to this blog post, my point's that we don't really need a "language" for the 21st century, but rather a system that allows users to specify what they want in whatever terms make sense to them -- whether that's machine code, C, Lisp, JavaScript, engineering flowsheets with equations, English, or pig-Elvish. So long as that language has a definition that provides mappings to machine code, the problem of how to map that language to machine code is a logical problem that a machine can address.

The example in the blog is a special case:

    1. FILE * test_file = fopen(“/tmp/test.txt”, “w+”);
    2. create file /tmp/test.txt for input and output as test_file

Here, the two languages have the same logic, varying only superficially. This allows an IDE to save that logic as an AST, then display it to a user according to their personal display preferences. There's no reason anyone opening that code in their IDE has to see it in Format 1 vs. Format 2 any more than they have to see it in Font 12 vs. Font 14. I think the folks behind Roslyn tried to capture the source code's features that were extraneous to the AST as "syntax trivia": https://github.com/dotnet/roslyn/wiki/Roslyn-Overview#syntax...
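A minimal sketch of that idea in Go (the OpenFile node, modePhrase table, and renderer names are all mine, purely illustrative): store the logic once, render it per reader.

    package main

    import "fmt"

    // OpenFile is a hypothetical AST node for "open a file in some mode
    // and bind the handle to a name" -- just enough structure to render
    // the same logic in two surface syntaxes.
    type OpenFile struct {
        Path, Mode, Bind string
    }

    // Hypothetical mapping from C stdio modes to English phrases.
    var modePhrase = map[string]string{
        "r": "input", "w": "output", "w+": "input and output",
    }

    // renderC displays the node in C-style syntax.
    func renderC(n OpenFile) string {
        return fmt.Sprintf("FILE * %s = fopen(%q, %q);", n.Bind, n.Path, n.Mode)
    }

    // renderEnglish displays the very same node in English-like syntax.
    func renderEnglish(n OpenFile) string {
        return fmt.Sprintf("create file %s for %s as %s", n.Path, modePhrase[n.Mode], n.Bind)
    }

    func main() {
        n := OpenFile{Path: "/tmp/test.txt", Mode: "w+", Bind: "test_file"}
        fmt.Println(renderC(n))       // Format 1
        fmt.Println(renderEnglish(n)) // Format 2
    }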

It's a less trivial issue when the code isn't so transparently bijective. For example, programmers collaborating on the same project would have more trouble interacting if some prefer to program in Rule 110 (https://en.wikipedia.org/wiki/Rule_110) while others prefer idiomatic JavaScript. I mean, that's an addressable scenario, but it's more complicated.


two plus two equals four.

See, that isn't readable at all. I prefer syntax.

Edit: Ah, I failed to actually press the button at the bottom!


> The most exciting thing, we already have the language exactly like this! What do you think it is?

Python, of course.


He is clueless about what makes a good programming language. The entire article seems to be making up straw men and then knocking them down.


I read that as: less syntax, fewer leaky abstractions and fewer macros; and I mostly agree.

I would add better integration with the C tool chain, more separation of concerns and less academic ego bullshit.

I've been working on something like that [0] lately.

https://gitlab.com/sifoo/snigl


I don't agree with 'less syntax'. More syntax (when designed well) results in significantly more readable code.


Ada has a lot of syntax, but also gains a lot in terms of functionality (and arguably readability, but I disagree) for those extra characters.

Syntax for the sake of syntax is pointless, imo. Obviously there's a balance here - APL and AppleScript suffer at opposite extremes.


I have a suspicion that something like APL is only a problem as far as the learning curve is concerned. For example, the syntax of advanced mathematics is nearly illegible to me but, for a physicist or mathematician who does this every day, it's the most efficient way of communicating information.


Up to a point, perhaps. Say you have both `if` and `unless` control structures. Two keywords where you could have just one, but they're closely related enough that there's not much burden. But what if their grammars were completely different, like, say:

    ifStmt := "if" "(" expression ")" statement ( "else" statement )?
    
    unlessStmt := "{" exprStmt exprStmt* "}" "unless" expression ";"
I doubt this additional syntax would be a positive for the language's user. And the divergence will almost certainly make implementation changes more difficult. If we had instead:

    ifStmt := "if" "(" expression ")" statement ( "else" statement )?
           |  "unless" "(" expression ")" statement
           
This smaller amount of syntax is easily understood, increases expressibility, and `unless` will most likely be implemented as simple sugar, just a negation of the expression in an `if`.
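In the same notation, that desugaring could be as simple as a rewrite (assuming this hypothetical language has a `!` negation operator):

    "unless" "(" expression ")" statement   =>   "if" "(" "!" expression ")" statement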

There's definitely some sweet spot in the middle, taking scan-ability and symmetry both into account.


Would you say the same thing about loops?

You can get away with just providing a 'while' loop (or maybe just goto and labels).

Most languages end up with a for and maybe a repeat-until syntax for the sake of natural-looking constructs.
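Go is one data point here: it ships a single `for` keyword that covers all of these shapes. A quick sketch:

    package main

    import "fmt"

    func main() {
        // "while" shape: condition only.
        i := 0
        for i < 3 {
            i++
        }

        // C-style "for" shape: init; condition; post.
        for j := 0; j < 3; j++ {
            fmt.Println(j)
        }

        // "repeat-until" shape: infinite loop plus a trailing break.
        k := 0
        for {
            k++
            if k >= 3 {
                break
            }
        }
        fmt.Println(i, k) // 3 3
    }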


How much experience do you have with C, Forth & Lisp? Spend enough time in these languages and you start seeing the world differently. Less syntax (when designed well) doesn't have to be unreadable.


How much experience do you have with 10,000+ line codebases written in Forth or Lisp?

There is a reason people choose boring imperative languages for large projects.

I did an internship at a 'Big 4' company which (according to folklore) had a moderately large server written in Haskell; the codebase was a nightmare and eventually they ended up rewriting it in C++, which I believe they still use.


None, but that has very little to do with how capable the languages are in the hands of experienced coders. There's a ton of Forth being written in embedded circles, and several well-known Lisp projects that are most probably even bigger.

And that's fine; they can keep writing their big projects using dumbed down languages and fresh out of school code monkeys. But there is plenty of code being written that doesn't fit that description.


Sure, they are awesome languages. We are talking about readability though, which unfortunately means your code has to be readable by "fresh out of school code monkeys" and not just experienced coders.


So the Haskell to C++ rewrite in your earlier example is supposed to be beneficial, because fresh-out-of-school code monkeys will understand it?

How so, if all they speak is JS and Python?


Someone only speaking JS or Python will have a significantly easier time with a C++ codebase than a huge Haskell codebase.

You can have someone totally new to the project fixing bugs in a few days.


10kloc is not a large codebase for Lisp. There are much larger codebases in Lisp - some of which have been maintained for decades.


Sure, but how many? I'd bet that C, C++, Java or even Python has a numbers advantage of one or two orders of magnitude.

Also, how many of these codebases attract new contributors regularly? That's a pretty direct metric for readability of a codebase.


> I'd bet that C, C++, Java or even Python has a numbers advantage of one or two orders of magnitude.

That's a numeric argument, not a qualitative one. Common Lisp is not popular in the mass market, but not because one can't easily write large applications in it. Something like Common Lisp was actually designed for the development of large and complex applications.

If you develop a large C application, you could shrink it by porting it to Lisp. Lisp's code density in larger applications is much higher than C's, and large C applications tend to reimplement half of what Common Lisp brings out of the box (objects, code loading, runtime scripting, automatic memory management, error handling, ...) - see Java.

There are systems written in Common Lisp which are much larger than 1MLOC. Some applications like Macsyma had already reached >100kloc in the 80s - for example the commercial version with the GUI. Lisp Machine operating systems had around 1.5MLOC by the end of the 80s.

I use a web server which has a few tens of kloc of code. The development environment I use is a few hundred kloc of Lisp code. There are a dozen maintained Lisp implementations/compilers, which all have >10kloc; some have several hundred kloc (especially the larger commercial ones).

Historically there have been a bunch of databases, CAD systems, etc. written in Lisp, all of which had/have >100kloc. PTC sells a CAD system largely written in Lisp, with >7MLOC of Lisp code.

Take for example the theorem prover ACL2:

https://github.com/acl2/acl2

Roughly 4MLOC of Lisp code. Maintained over almost three decades. Used in a bunch of companies for the verification of chip designs.


Ok, given all that, why do you think "Common Lisp is not popular in the mass market"?

Remember, I'm not arguing that Lisp is a bad language or that you cannot write large applications with it. My point is just that a large Lisp codebase is significantly more difficult to understand and modify compared to, say, C++ or Java.

Your density argument is unhelpful here, because you typically don't want dense code full of user-defined abstractions if you want to maintain it for a long time.


> Ok, given all that, why do you think "Common Lisp is not popular in the mass market"?

I think this question leads nowhere, and even more so if you make up a reason. You would first have to understand the target market of Lisp and the players interested in such a market. Lisp was originally designed for symbolic AI stuff (planners/schedulers, maths programs, theorem provers, natural language systems, knowledge representation and reasoning, knowledge-based engineering, ...) - that has been its core market, and this is not a mass market. It's a market of applied research & development. Over time the language has been applied to related domains and a bunch of technology has been transferred: some of the underpinnings of Smalltalk and Java were largely influenced by Lisp - even the first garbage collector for Microsoft's .NET was developed in Lisp and then transpiled to C++. One can also say that Javascript is a distant Lisp dialect (eval, symbols, data syntax, closures, object system, ...) - what it basically lacks is the whole s-expression infrastructure.

Is Porsche successful in the mass-market? Probably not - they are a small company compared to most others - they are not even in the top 20. But they are a very profitable car company. Should they give up because their market share is so small? They have world-wide sales of just 250000 cars/year. Toyota has ten million.

Is a Porsche 'difficult to maintain' because it is not a mass-market car?

Just as in software, there are other, more important factors. Porsche's target market is people who want to own and can afford a high-quality luxury sports car. But that's not the larger mass market.

> My point it just that a large Lisp codebase is significantly more difficult to understand and modify compared to ..say.. C++ or Java.

That's your claim. Have you ever worked with a Lisp program beyond 100kloc and extended it? It's not that difficult. It just looks different from what you have seen. In Java you work with the dead code in a text editor. In Lisp you often extend the running program - thus maintenance is done while you interact with it (similar to how you write scripts for Excel, or write code in Smalltalk, ...). For example, for my Lisp IDE I never got a new version during the last two decades, beyond the usual new releases. All I got was patches which are loaded into the application - maintenance gets easy.

But where is the factual evidence? All I have heard so far are guesses based on your popularity claims. I think you would need to look beyond that. There are technical reasons why Lisp is not popular like C: Lisp is usually not a systems language, so it can't compete with C (which was designed for that) in that area - which leads to the fact that most infrastructures are C-friendly and low-level. Apple writes their base OS in C and C dialects. On the iOS store the majority of applications are written in Objective-C - since that's what Apple supports (plus a bit of Javascript and Swift). Other large corporate players have invested zillions into their own infrastructure (SUN/Oracle -> Java). Now your language can be as great as possible; it will always be second tier, or it needs to find a specific niche (Javascript -> web browser, PHP -> web sites, ...). The specific use case for which Lisp was developed doesn't have a corporate sponsor (in the 80s it was DARPA), and the niche markets for it are relatively small.

> Your density argument is unhelpful here, because you typically don't want dense code full of user defined abstractions if you want to maintain it for a long time.

That's another assumption. But domain-level maintenance is best done in a domain-level language. Much of the value of a piece of software is at the domain level.

User-defined abstractions are in any software; it's just a question of what they look like to the user. Typically one prefers domain-level user interfaces in maintenance, instead of having to deal with the machinery. That's why we write 'make' files (and the equivalent) instead of writing the code to compile software systems directly in C. It's easier to maintain the information in a 'make' file, even though you have to learn how to read and write them. That's the same for Lisp - it's easy to write a script to compile a bunch of Lisp files, but most of the time we prefer a domain-level Lisp tool:

  (sct:defsystem cl-http                          
      (:pretty-name "HTTP Server"
       :default-pathname "HTTP:SERVER;"
       :journal-directory "HTTP:LISPM;SERVER;PATCH;"
       :initial-status :experimental
       :patchable t
       :source-category :basic)
    (:module pointers
     ("SYS:SITE;HTTP.TRANSLATIONS"
      "SYS:SITE;CL-HTTP.SYSTEM"
      "SYS:SITE;W3P.SYSTEM"
      "HTTP:LISPM;HTTP.TRANSLATIONS")
     (:type :lisp-example))
   ...
    (:serial
      showable-procedures
      "PACKAGE"                                   ; HTTP Packages
      "PRELIMINARY"                               ; Showable procedures
      "VARIABLES"                                 ; Variables and constants
      #+Genera lispm                              ; Load Lisp Machine specific code
      "BASE64-ENCODING"                           ; Base 64 utility RFC-1113 and RFC-1341.
      "MD5"                                       ; MD5 Digests based on RFC 1321
      "SHA"                                       ; SHA Digests based on Internet FIPS 180
      "TASK-QUEUE"                                ; Thread-Safe Queuing Facility
      "CLASS"                                     ; Server Class Definitions
      "URL-CLASS"                                 ; URL Class Definitions
      "HTTP-CONDITIONS"                           ; HTTP
   ...
   )
If you happen to know some Lisp, that's easy to maintain: during maintenance we just add the new configuration info - that's much easier than changing a low-level program.

Where it gets more difficult is when we need to change the implementation of these tools. Like when you need to fix a bug in the 'make' program itself.

Every program has these abstractions at the domain level: either they are internal or external domain-specific languages (DSLs), or they are complex frameworks at the OOP or functional level - which makes them very difficult to use. Lisp happens to have an integrated way to expose these complex machineries as maintenance-friendly domain-level constructs. For an example, see GNU Emacs, which comes with a million lines of domain-level Lisp code.


"an embedded Forth with a Lisp in C" :)

Sounds like the opposite of what the post is advocating.


Yet it's not; Forth and C are too primitive for most tasks, and Common Lisp too magical. Snigl aims to find a middle way.

Programming languages will always be compromises; you always trade something in to get what you want. Anyone telling you otherwise is trying to sell you a new religion.



