The next big language (eugenkiss.com)
65 points by newgame on Oct 17, 2010 | 65 comments



New languages are carried on the backs of new platforms, e.g. Unix & C, the web & JavaScript.

To predict the next big language, predict the next big platform. The upcoming platforms are: smart phones & tablets; cloud computing & many-core. The cloud, being connected services, is largely language-agnostic, and many-core might end up being implemented by borrowing whatever works in the cloud. The needs of the above seem well-met by established languages, leaving little opportunity for new languages to emerge.

What about the next big platform after the above? It's probably more than 10 years off, but Moore's law says smaller devices will come. If disruptive, they'll be attractive to new audiences with different needs, perhaps along the lines of cochlear neural implants (already big business) or garage genetic engineering. Whatever languages those groups happen to be using will be carried to success.


The next big platform is multiple cores/multiple CPUs. The next big language is functional and helps deal with consistency across time and across CPUs. Therefore, the next big language is Clojure. Rich Hickey said it best:

"If somebody hands you something mutable—let's say it has methods to get this, get that, and get the other attribute—can you walk through those and know you've seen a consistent object? The answer is you can't, and that's a problem of time. Because if there were no other actors in the world, and if time wasn't passing between when you looked at the first, second, and third attribute, you would have no problems. But because nothing is captured of the aggregate value at a point in time, you have to spend time to look at the pieces. And while that time is elapsing, someone else could be changing it. So you won't necessarily see something consistent.

For example, take a mutable Date class that has year, month, and day. To me, changing a date is like trying to change 42 into 43. That's not something we should be doing, but we think we can, because the architecture of classes is such that we could make a Date object that has mutable year, month, and day. Say it was March 31, 2009, and somebody wanted to make it February 12, 2009. If they changed the month first there would be, at some point in time, February 31, 2009, which is not a valid date. That's not actually a problem of shared state as much as it is a problem of time. The problem is we've taken a date, which should be just as immutable as 42 is, and we've turned it into something with multiple independent pieces. And then we don't have a model for the differences in time of the person who wants to read that state and the person who wants to change it."

http://www.artima.com/articles/hickey_on_time.html
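To make the point concrete, here is a minimal Clojure sketch of the idea (my own illustration, not from the interview): hold the date as an immutable map inside an atom, so readers always dereference a complete, consistent snapshot and writers swap in a whole new value atomically.

    (def d (atom {:year 2009 :month 3 :day 31}))

    ;; Change March 31 to February 12 in a single atomic step; no reader can
    ;; ever observe the invalid intermediate {:year 2009 :month 2 :day 31}.
    (swap! d assoc :month 2 :day 12)

    @d  ;=> {:year 2009, :month 2, :day 12}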


Maybe, but I have my reservations. I think it's a mistake to assume the future belongs to whatever tech extracts the best performance from future hardware with the least work.

Clojure may be a great language for a multi-core world, but I'm not sure multiple cores will be needed for the vast majority of apps in the future. I'm sure some sectors will find Clojure to be amazing for their purposes, but by and large, I doubt it will be as big a language as Python, Java or Ruby.

For instance, I doubt it will become a big player in the web app world.

Clojure's great support for concurrency only applies to a single machine. If your problem is squeezing maximum performance out of a single box with multiple cores, this is great; but for something like a web application, where processing a single request across multiple cores is generally unnecessary, Clojure's benefits don't really apply. Each thread/process in a web app is totally isolated from the others, and IPC must happen over a network, not in memory.

IMHO the most interesting tech for concurrent web app programming is using Redis as a data structures server.
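For instance (a rough sketch, assuming the carmine Redis client, which is my choice of library and not something named above), any number of isolated web-app processes can share a single Redis list as a work queue:

    (require '[taoensso.carmine :as car :refer [wcar]])

    (def conn {:pool {} :spec {:host "127.0.0.1" :port 6379}})

    (wcar conn (car/lpush "jobs" "resize-image-42"))  ; producer process
    (wcar conn (car/brpop "jobs" 5))                  ; worker process, blocks up to 5s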


> I'm not sure multiple cores will be needed for the vast majority of apps in the future.

There's no alternative if we want to keep being "lazy" (i.e. relying on yesterday's "Computers are Fast" observation, or using programming techniques that produce code that is slow now, knowing it will be faster on future hardware, as games tend to do). CPUs will soon stop getting faster, so we will have to rely instead on there being more of them if we want to keep thinking that "Computers are Fast" (or, in its future conception, "Computers are Wide").
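In Clojure terms (a hedged sketch; render-frame is just a stand-in for any CPU-bound function, not something from this thread), staying "lazy" while going "wide" can be as small a change as swapping map for pmap:

    (defn process-all [frames]
      (doall (map render-frame frames)))   ; fast enough on yesterday's "fast computer"

    (defn process-all-wide [frames]
      (doall (pmap render-frame frames)))  ; leans on core count instead of clock speed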

As an alternate argument, consider the brain. So many futurists have said something to the effect that, if we have a general-strong-AI-like architecture to work upon, procedural programming will be abstracted away, as much as digital electronics abstract away voltages. However, if we ever want to create something brain-like (assuming a general-strong-AI would best be modeled as something brain-like), we're not going to do it on one (or even a "scarce" number of) cores.


The reality is that we have more CPU power on personal computers these days than most people know what to do with; that's why mobile is making so many gains.

Most people run a browser, the end. If they have a mobile or an iPad, they run apps that use almost no CPU power and are fancier interfaces to web services than a browser is able to be.

Since app development is moving towards mobile devices, battery power is a concern. SMP often has overhead in terms of power consumption, so it should be something you resort to only when you must. My tablet's fancy drawing program might need to use Clojure to maximize its speed (and maybe not), but its Twitter client doesn't need Clojure's advanced concurrency mechanisms.

Sure, there will still be plenty of desktop computers running complex simulations that might really benefit from something like Clojure. But that'll be the exception.

Just my $0.02


Clojure, like Lisp, has terrible syntax that is difficult for human beings to read. That is part of why the academic community kept Lisp alive: independence from industry, the ability to weed out weaker programmers, flexibility.

I'm well aware that I'm posting this on Hacker News, a site founded by an individual who made gazillions on a website powered by Lisp, but I think it is safe to say that is the exception rather than the rule.

In order to be widely accepted, a language must do many things well enough. If it fails badly at one particular thing, in this case syntax readability, its ability to gain followers will be severely diminished. It may, however, gain a strong following in academia.

If you disagree, then do this thought experiment: who currently uses serious parallel processing power? I can think of a few: Blizzard's WoW servers, government research labs, and bio-informatics operations. How would you convince them to try out Clojure? What problem do they have that Clojure solves so much better than anything else out there that it would gain a foothold?

By the way, I learned CS from Abelson and Sussman and Scheme was my language of choice between 1996 and 2004.


What syntax?

Or to rephrase:

What (scant) syntax there is, I'm in love with.
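For the record, this is roughly all of it:

    42                         ; number
    "hello"                    ; string
    :keyword                   ; keyword
    [1 2 3]                    ; vector
    {:a 1 :b 2}                ; map (the only place curly braces show up)
    #{1 2 3}                   ; set
    (defn add [x y] (+ x y))   ; everything else is just lists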


But the problem is that the instant you venture out of Clojure and into a third-party JAR all those assumptions about purity go out of the window. Clojure's strength is also its weakness in that regard.

Haskell does make real guarantees about what does I/O and what doesn't.


This is potentially being addressed in an in-development Clojure feature called Pods.


> The next big platform is multiple cores/multiple CPUs. The next big language is functional and helps deal with consistency across time and across CPUs.

So the popular opinion in the Internet echo chamber keeps telling me, but somehow I don't buy it.

If you do, please try to answer this simple question: what single application of widespread importance benefits on a game-changing scale from running on multiple cores?

It's not office productivity/business automation applications like word processors, spreadsheets, and accounting packages. They could run just fine on a typical desktop PC years ago. Sure, it's useful to run multiple applications simultaneously, but the OS can handle the scaling in that case.

It's not mass information distribution/web applications. The bottlenecks there are typically caused by limited communications bandwidth or database issues. While concurrency is obviously a big factor internally in databases, most of us don't actually write database engines.

It's not games. Most AAA titles today still don't scale up in that way, and one mid-range graphics card with its specialist processor would blow away a top-end quad-Xeon workstation when it comes to real-time rendering. Again, there is some degree of concurrency here, but many intensive graphics rendering problems are embarrassingly parallel in several ways, so again this isn't much of a challenge even for today's mainstream programming languages and design techniques.

I suspect the most likely mainstream beneficiaries of better multi-core/multi-CPU support would be things where there really is heavy calculation going on behind the scenes and it's not always uniform: multimedia processing, CAD, etc.

However, what about the alternative directions the industry might take? The Internet age has emphasized some basic realities of software development that as an industry we weren't good at recognising before.

For one thing, many useful tools are not million-lines-of-code monsters but relatively simple programs with far fewer lines of code. It's knowing what those lines should do that counts. That means rapid development matters, and that in turn requires flexible designs and easy prototyping.

For another thing, data matters far more than any particular piece of software. Protecting that data matters much more in a connected world with fast and widespread communications, so security is more important than ever, and we need software that doesn't crash, suffer data loss bugs, and so on.

So I'm going to go out on a limb here and suggest that multi-core/multi-CPU is not in fact going to be the dominant factor in the success of near-future languages. I think flexibility and robustness are going to be far more important.

It may turn out that the attributes of a more declarative programming style support these other factors as well. It may be that functional programming becomes the default for many projects as a consequence. But I don't think any future rise of functional programming will be driven by a compelling advantage to do with implementing modest concurrency on multi-core systems. That just isn't where the real bottlenecks are (in most cases).


> If you do, please try to answer this simple question: what single application of widespread importance benefits on a game-changing scale from running on multiple cores?

Computer vision and machine learning both benefit a lot from multiple cores. They seem to be really big growth areas at the moment and have the potential to dramatically change the way we interact with computers. It's already happening: recommendation engines on e-commerce sites are a great example of machine learning in practice. I believe we're going to see this sort of thing appearing in more and more places.

Web browsers already take advantage of multiple cores, by the way. The Rust language is being developed by Mozilla because (one of the three reasons from the project FAQ) of dissatisfaction with the concurrency support in existing languages.

I think there's a large opportunity cost to dismissing concurrency & parallelism at this moment.


> I think there's a large opportunity cost to dismissing concurrency & parallelism at this moment.

I'm not dismissing the idea, nor claiming that it is not valuable for any application. Clearly that parallelism would have value to a significant number of projects, which perhaps don't make best use of the host hardware today. I'm just trying to keep the multi-core idea in perspective, relative to other ways our programming languages might improve.

Better multi-core support can get you a constant factor speed-up in computationally expensive work, but Amdahl's Law tends to spoil even that. On the other hand, a language with a type system that allows you to prevent entire classes of programmer error could lead to a step change in security or robustness. A language expressive enough to capture the developer's intent in ways that today's programming models do not could lead to entirely new techniques for keeping designs flexible, supporting new rapid development processes, or it could create opportunities for optimisers that bring the performance of more expressive languages to a level where they compete with lower-level languages used in the same field today for speed reasons. I suspect that across the field of programming as a whole, such improvements would be far more widely applicable.
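(For reference: Amdahl's Law says that if only a fraction p of a program parallelises, the best possible speedup on N cores is 1 / ((1 - p) + p/N), which is bounded by 1 / (1 - p) no matter how many cores you throw at it.)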


Why would that make it Clojure rather than Erlang or Scala?


Yes, you are absolutely right. In fact I had started a chapter about the "far future" where I wanted to elaborate more on the computer architecture change and its effect on programming languages. But I thought the scope would become too large.

If you are interested I gathered some links about the "far future":

* https://channel9.msdn.com/posts/Charles/Jonathan-Edwards-Pro...

* http://lambda-the-ultimate.org/node/4088

* http://lambda-the-ultimate.org/node/4090


JavaScript is going to get much bigger. My money is on that, especially with Node.js and HTML5 in the game. I also noticed F# was not even mentioned; was that deliberate or an oversight?


There was a post linked here a couple of days ago where JWZ was bemoaning the lack of #!/usr/bin/javascript - and it's still not possible to use JS as a general purpose language a la Perl, Python, Ruby, etc.

I know it has all sorts of negative stereotypes, but the combination of JavaScript and PHP is the new Visual Basic - something that will let someone who isn't an expert quickly make a useful GUI application. And it won't go anywhere VB didn't go.


Yeah but #!/usr/bin/node does work. I've used some node.js command line scripts, and it works decently well. Some people praise Node.js for general purpose command line scripts. Personally, I find Ruby easier to use for command line utilities and scripts.


It's more than that - if you could assume #!/usr/bin/javascript (Or c:\windows\system32\javascript.exe) on any machine that had Netscape installed, then you'd have a viable ecosystem. The reason Perl became popular was not because it was the "best" language for CGI scripts but because you could assume that the sysadmin had already installed it for his own use. Node.js (which admittedly I've never used) would appear to be available only on a tiny minority of machines you would encounter "in the wild".


> Node.js (which admittedly I've never used) would appear to be available only on a tiny minority of machines you would encounter "in the wild".

Given that it really only hit the public consciousness last November or so, that's more or less expected. What makes people so bullish is that the number of JS developers isn't going to decrease any time soon.

It is, however, a pretty awful shell scripting environment, and I don't know who would prefer it to Ruby. I say this despite writing a build system for it this weekend to drop dependencies on Ruby and Java. CoffeeScript improves things, but it's still pretty raw.


JavaScript is definitely a candidate, but I don't think it's going to be Node.js. Doing all IO asynchronously strikes me as very premature optimization. I prefer the way Go or Erlang deal with these scaling issues.


Funny you should consider async i/o to be premature optimization. To me it looks like the only reasonable way to build a non-trivial system.

Threads, on the other hand, are often the wrong optimisation; processes are the right one.


The distinction of threads vs processes is not something that necessarily influences the logic of our code. If I want to say a() b() c(), I can do that with a thread or a process, and it doesn't matter what a, b or c actually do. However, with the "IO must be async" stipulation, I have to know whether any of those functions is an IO call, because if it is, I have to write it in a completely different way, even if the logic I need is synchronous. I don't like that kind of interference with my logic.


Well, with very high probability, your logic is broken in the face of errors / malicious users then.

Node's async structure is just taking the async model of the web one step further, into processing the same request asynchronously if you need to wait for something else.

You might be master of doing this logic right, but I've audited tens (perhaps hundreds) of systems, and all the synchronous ones got it wrong.

(Disclaimer: I've never worked with Node. But I've been writing async servers in C and Python since 1999).


Node.js has synchronous, blocking IO options along with all the async ones. Just append Sync to the end wherever it makes sense.


I'd like to add that I already count Javascript as a "big language" as I wrote:

As we all know C, C++, C# and Java are these big mainstream languages of today with “recent” additions Python, Ruby, Javascript and, especially in the Apple world, Objective-C.

I totally agree with you that Javascript is going to get much bigger, but to me it is already big, so it doesn't even need to contend against the other languages in my blog post ;).


I think the problem with F# is going to be competition from Clojure (which also runs on the .NET platform).


I don't think there will be that much competition: the ML folks that really want to try something new will go to F#, the Lisp folks will go to Clojure. Still, there aren't that many reasons to abandon Common Lisp for Clojure (especially with Oracle controlling the JVM), or ML for F# (for Haskell maybe, but Haskell has its problems too).


The only existing reason is that the two of them run on modern, enterprise-accepted platforms. If you're hacking your personal projects in Lisp, you have no reason to switch. If you want to sling some Lisp/ML at work, well, Clojure or F# are likely to be your only options.


In that setting, I think F# has an advantage currently, since it's an officially supported/blessed language on Visual Studio and .NET.


You could say Clojure is a Java library. It's a single JAR file.


To ship it, yes, but there's the compiler, too. With F#, if your organization uses the latest Visual Studio, you're good; but if your shop uses a typical Java IDE setup, anyone who wants to build your project has to do extra work beyond the standard environment (e.g. installing Clojure's Eclipse or NetBeans plugins, with extra fun if you depend on Clojure libs that use Leiningen).
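For example, a minimal Leiningen project.clj looks something like this (the versions are illustrative, not taken from the thread):

    (defproject my-webapp "0.1.0"
      :dependencies [[org.clojure/clojure "1.2.0"]
                     [org.clojure/clojure-contrib "1.2.0"]])

Harmless once you know the tooling, but it's still a step beyond what a stock Java IDE setup gives you.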


My senses tell me "JavaScript". The rise of the Web (especially Web applications), HTML5, tablets and mobile applications, server-side JS... I think JavaScript can make a big move. Learn one language and use it for anything you'll ever dream of.

With its flexibility, you can extend the language to support any feature you'd like. Make it object-oriented, prototypal, procedural, your own...


Don't we pick our tools to fit the job? The next big language will be determined by the needs of the largest groups of computer users; their particular needs will determine which languages eventually reach critical mass.

Using 5 universal criteria that characterize good programming languages is the wrong approach. For example, in safety-critical systems his metric of conciseness is worthless. Ada may sometimes take more lines of code to accomplish a specific task than C, but the requirements of safety-critical platforms demand the extra effort and cost. The metric is worthless because it isn't picked to coincide with the needs of the users.

No one cares that the strong typing or other features in Ada can slow down coding and annoy programmers because this practice eliminates many bugs in the final product. The needs of the end-user for bug free code outweigh their needs to ship quickly and please developers. It is a tool used for a specific job that it handles quite well.

Evaluating a programming language with 5 universal metrics ignores that they are tools used to solve the problems of a particular group of users. If you want to figure out what the next big language will be ask what platform will have the most users in the near future and what the needs of those users are. The languages that meet them best are the top choices.


I don't really understand his justification for excluding Erlang. It IS a general purpose language, and I can't see it as any more niche than some of the obscure languages he includes (Clay?).


No curly braces, no hashtables, awkward strings, doesn't look object-oriented.

(This coming from someone whose favorite language is Erlang.)


All true, but that isn't his justification in the article. He skips Erlang by damning it as confined to a niche, like PHP.

Edit: Reia would be worth discussing if those are major obstacles.


He gives consideration to Clojure, which makes only limited use of curly braces and explicitly rejects object-orientation as most of the world understands it.


I find it interesting that he says Go's syntax is ugly, but that Clay programs look so good and readable. The two languages have a very similar basic syntax, and some of Go's differences are an improvement in my book: no semicolons, fewer parentheses.


I had an idea the other day about being able to write client-side browser apps in Google Go. I found it very intriguing.

My votes are for Go and JavaScript. One has an 800 lb gorilla behind it, and the other has a ton of momentum, and I agree with the other poster that the maturation of server-side JavaScript and the ability to use one language on both ends is very appealing.


Strange that nobody's mentioned Scala yet. Clearly it has its detractors, but it seems like a credible candidate to displace Java. It also has more traction than a number of these languages, and, more importantly, there's a huge number of Java-based developers and organizations who are under pressure to improve productivity and can adopt it incrementally.

It seems to me to have fewer barriers to becoming a major language than most of the other contenders.


Indeed, especially with the pretty high-profile companies using it: LinkedIn, Twitter, Foursquare.


I think server side JavaScript is the future. One language to rule the web's front end and back end.


Would be consistent with the tendency of the majority of programmers to select the crummiest possible language because some platform that necessitates its use fluked into popularity.


I'd love to see Mirah hit it big (statically typed Ruby on the JVM). Dunno if it will tho: http://www.mirah.org/


I was a little surprised with D being listed as one of the main contenders. I guess Clojure and Go were obvious suggestions. Go has the backing of Google and aims to be everything we want (fun/ease of Python, speed of C++) and Clojure is the ancient popular Lisp with a modern twist.

I guess I would have expected Javascript and Groovy to be included. Though I might be biased: I worked with a client using Groovy, and in that short time I saw that there was a lot of use of it in the (local) industry.


> I was a little surprised with D being listed as one of the main contenders. I guess Clojure and Go were obvious suggestions.

Near the end he references some mailing list debate, then comments that Go emphasizes simplicity while D emphasizes features, and therefore D will win, because to him language features rock. To me, this reveals that much of his "reasoning" is just that prejudice.

Nothing to see here. Move on!


D is just a sideshow. The community is too wracked with infighting over Tango vs Phobos. There isn't going to be a viable platform and ecosystem anytime soon. Pity. The "real work" crowd has, well, real work to do; they - we - can't hang around forever.


Imagine that you want to rewrite OpenOffice from scratch. What language would you use? That should be the next big language.


If you were going to do that I think choice of language would be the least of your worries!

Having said that, I would probably go with a set of low level modules/services, probably written in C or C++, held together with JavaScript.


Not a bad idea. I was skimming around the LibreOffice sources recently, and it really could use scrapping everything and rewriting from scratch. The best language choice would be C, or maybe C++/Qt for portability. Not that it will happen, but hopefully the LO devs will still clean up the codebase a bit.

P.S. The language actually does not matter that much. Much more important is a clean, scalable and fast design.



I think the next big language will actually be a small language.


The Clay mention makes me really happy. Years ago, I was part of the small team that was working on its first incarnation, though it's a completely different language now and I'm sure it's been through half a dozen rewrites since then.


I think a new language could win by being more portable. A compiler that targets Intel and ARM is no longer interesting. But a nice language with a compiler that can create libraries usable in JavaScript apps, iPhone apps, Android apps, and App Engine apps would be very interesting.


Seems many people here have high expectations for JavaScript, cool! :)

Also, be sure to check out JavaScript's kid brother, CoffeeScript:

http://jashkenas.github.com/coffee-script/


Don't want to get into a language merits argument, but how can you consider the most widely used language on the web niche?


PHP? Computing is more than web pages.

I'm rooting for Clojure. Rich Hickey's presentations detailing his rationales make sense and are quite inspiring.


And Clojure makes a lot of sense in many-core scenarios.


What I meant by that is that PHP is admittedly used widely on the web, but almost nonexistent in other "niches" like game programming, kernel programming, artificial intelligence, high-performance computing, etc.

I'm thinking of the ecological meaning of the word niche, not the meaning of "something small".


Who can say the Web is a niche market?

Also, I feel sick every time people tell me X is a new general purpose programming language that will solve all the problems.

People should be more open minded, pick the right tool for the job.


"compile-time correctness"?!

All a compiler can say is that it compiles. It has no idea if the program does what you want.


Still, that is more than one can say about most dynamic languages.


And how much does this "more" really mean?

All it says is that the program pieces fit together. The small benefit of knowing that my potentially wrong program at least got the types of whatever is being passed around right gives me very little comfort when I think about the flexibility I lose by using statically typed languages.


No mention of Arc here? It's the language behind HN after all :)


Erlang.



