
The reality is that you rarely want to run doA and doB concurrently, so optimizing syntax for that case is not very useful, whereas you do want to be able to call functions without having to worry about their color all the time, where "all the time" typically means more than once per function.

Many of you are perhaps scratching your heads and going "What? But of course I concurrently do multiple things all the time!" But this is one of those cases where you grossly overestimate the frequency of exceptions precisely because they are exceptions, and so they stick out in your mind [1]. If you go check your code, I guarantee that either A: you are working in a rare and very stereotypical case not common to most code, or B: you have huge swathes of promise code that just chain a whole bunch of "then"s together, or you await virtually every promise immediately, or whatever the equivalent is in your particular environment. You most assuredly are not doing something fancy with promises more than once per function on average.
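To make the claim concrete, here's a sketch using hypothetical fetchUser/fetchPosts stand-ins (invented for illustration, not a real API): most async code ends up awaiting each promise immediately, often because each step depends on the previous one, and the concurrent shape is the exception.

```javascript
// Hypothetical stand-ins for some async I/O calls.
const fetchUser = async (id) => ({ id, name: "user" + id });
const fetchPosts = async (userId) => ["post-by-" + userId];

// The overwhelmingly common shape: await each promise immediately,
// because the next step needs the previous result anyway.
async function showProfileSequential(id) {
  const user = await fetchUser(id);         // awaited immediately
  const posts = await fetchPosts(user.id);  // depends on the result above
  return { user, posts };
}

// The rarer "doA and doB concurrently" shape that fancy syntax optimizes for.
async function showProfileConcurrent(id) {
  const [user, posts] = await Promise.all([fetchUser(id), fetchPosts(id)]);
  return { user, posts };
}
```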

This connects with academic work that has shown that real code typically contains much less "implicit parallelism" than people intuitively think. (Including me, even after reading such work.) Even if you write a system that automatically goes through your code and systematically finds all the places you accidentally specified "doA" and "doB" as sequential when they could have been parallel, it turns out you don't actually gain much.
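A toy model of that finding (the step durations and dependency structure here are invented purely for illustration): when most steps form a dependency chain, even a perfect auto-parallelizer can only overlap the few genuinely independent pairs, so the total speedup is modest.

```javascript
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

// Five 20 ms steps; only B and C are independent of each other.
async function pipelineSequential() {
  const t0 = Date.now();
  await sleep(20); // A
  await sleep(20); // B
  await sleep(20); // C (does not actually need B)
  await sleep(20); // D (needs both B and C)
  await sleep(20); // E (needs D)
  return Date.now() - t0; // roughly 100 ms
}

// All a safe automatic parallelizer could extract: overlapping B and C.
async function pipelineAutoParallel() {
  const t0 = Date.now();
  await sleep(20); // A
  await Promise.all([sleep(20), sleep(20)]); // B and C in parallel
  await sleep(20); // D
  await sleep(20); // E
  return Date.now() - t0; // roughly 80 ms: a ~20% win, not a multiple
}
```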

[1]: I have found this is a common issue in a lot of programmer architecture astronaut work; optimizing not for the truly most common case, but for the case that sticks out most in your mind, which is often very much not the most common case at all, because the common case rapidly ceases to be memorable. I've done my fair share of pet projects like that.




ParaSail[0] is a parallel language being developed by Ada Core Technologies. It evaluates statements and expressions in parallel, subject to data dependencies.

The paper ParaSail: A Pointer-Free Pervasively-Parallel Language for Irregular Computations[1] contains the following excerpt.

"This LLVM-targeted compiler back end was written by a summer intern who had not programmed in a parallel programing language before. Nevertheless, as can be seen from the table, executing this ParaSail program using multiple threads, while it did incur CPU scheduling overhead, more than made up for this overhead thanks to the parallelism “naturally” available in the program, producing a two times speed-up when going from single-threaded single core to hyper-threaded dual core."

One anecdote proves nothing, but I'm cautiously optimistic that newer languages will make it much easier to write parallel programs.

[0] http://www.parasail-lang.org/

[1] https://programming-journal.org/2019/3/7/


I hope so too.

I want to emphasize that what those papers found is that if you take existing programs and safely, automatically squeeze out all the parallelism you possibly can, it doesn't get you very much.

That doesn't mean that new languages and/or paradigms may not be able to get a lot more in the future.

But I do think that just bodging promises onto the side of an existing language isn't it. In general that's just a slight tweak on what we already had, and you don't get a lot out of it.



