Hacker News
Callbacks as our Generation's Goto Statement (tirania.org)
621 points by matthewn on Aug 15, 2013 | hide | past | favorite | 274 comments



"Await" is fantastic, and having used it for JavaScript (via TameJS and then IcedCoffeeScript), I find it makes things a lot easier and clearer.

That being said, I don't think the comparison between callbacks and goto is valid.

"Goto" allows you to create horrible spaghetti-code programs, and getting rid of it forces you to structure your programs better.

"Await", fundamentally, isn't really anything more than syntactic sugar (except for exception handling, which is a good thing). "Await" doesn't change how your program is structured at all, it just changes the visual representation of your code -- from indentations in a non-linear order, to vertical and linear order. It's definitely a nice improvement, and makes code easier to understand (and allows for better exception handling), but it's not actually changing the way your program is fundamentally structured.

And finally, "await" is only applicable when a single callback gets called once at the end. If you're passing a callback that gets used repeatedly (a sorting function, for example), then normal-style callbacks are still necessary, and not harmful at all. Sometimes they can be short lambdas, sometimes they're necessarily much larger.
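For example, a sort comparator in JavaScript — a callback invoked many times within a single call, with no one "completion" for await to wait on:

```javascript
// A comparator is called repeatedly during one sort; there is nothing
// harmful about this kind of callback, and await doesn't apply to it.
const users = [{ age: 42 }, { age: 7 }, { age: 19 }];
users.sort((a, b) => a.age - b.age); // callback runs O(n log n) times
const ages = users.map(u => u.age);  // [7, 19, 42]
```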

In sum: "await" is great, but there's nothing inherently harmful about callbacks, the way "goto" is. To the contrary -- callbacks are amazingly useful, and amazingly powerful in languages like JavaScript. "Await" just makes them nicer.


What this suggests is that "await" isn't like all structured programming, it's like one control construct. The history of structured programming is the development of more constructs as people realised that they had a goto pattern that kept popping up in their code, so they named it and thereby got a cognitively-higher-level program. Long after Dijkstra's essay, we could still occasionally find new places where goto was really the best way to do it: for instance, if you wanted to "break" out of multiple loops at once. So someone invented named break (likewise continue), and removed yet another use of the unconstrained goto in favour of a more constrained, structured construct.

Taking the OP's premise at face value, then, if callbacks are like goto, then await takes away one place where they were needed and replaces them with something more structured and safer---and this doesn't at all negate the possibility that there are other constructs yet to be invented that would continue that process.
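That named break survives in many C descendants; a JavaScript sketch of escaping nested loops with a label:

```javascript
// Labeled break: a constrained, structured replacement for one of the
// classic goto use cases (breaking out of multiple loops at once).
function findPair(grid, target) {
  let found = null;
  search:
  for (let i = 0; i < grid.length; i++) {
    for (let j = 0; j < grid[i].length; j++) {
      if (grid[i][j] === target) {
        found = [i, j];
        break search; // exits both loops in one jump
      }
    }
  }
  return found;
}
```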


Another example: a lot of old code bases use "goto" for error handling. I'm assuming the Linux kernel still does this. Now we've formalized that behavior into exceptions.


Formalized that behavior into exceptions? Last I checked using exceptions for control flow was a BAD practice.

In C at least, breaking out of an error condition is still best handled with a goto (where you can clean up all manner of memory in an organized fashion without littering your code with if/elses).


This discussion sort of hinges on being able to explain why something is a bad practice or not. What problems in particular do exceptions cause, and in which situations do they not apply? In other words, is it ever right to throw an exception, or is every line of code in your program "control flow?"

It's been nearly a decade since I've used plain C, so I'm not 100% sure, but I was under the impression that it doesn't have any native support for exceptions. So yes, under those circumstances "goto" would definitely be appropriate.

In C#, you can implement resource ownership with the IDisposable interface and "using" keyword, which guarantees that once you leave the block (via an exception or regular control flow), the resource is cleaned up. In C++, you can use the RAII pattern that another commenter brought up. What other problems do exceptions introduce?

In my as-functional-as-is-practical programming philosophy, you throw an exception when there is no valid output for your function. Whether or not that's recoverable is up to the client to decide. Nulls are an extremely poor substitute for this, as they push the responsibility of output validation onto the client, and every single line of code must be enclosed in an "if (foo != null)" block.
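A JavaScript sketch of that philosophy (the comment is about C#, but the idea is language-agnostic; `parsePort` is a hypothetical example):

```javascript
// Throw when the function has no valid output; returning null instead
// would push an "if (foo != null)" check onto every caller.
function parsePort(text) {
  const n = Number(text);
  if (!Number.isInteger(n) || n < 1 || n > 65535) {
    throw new RangeError("no valid output for input: " + text);
  }
  return n;
}
```

The caller decides whether the failure is recoverable, catching the exception at whatever level makes sense.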

EDIT: Removed unproductive "zinger" at end.


You need C++'s RAII idiom or Go's defer syntax to get the same behaviour though. I think try-with-resources in Java would also do the same thing, maybe. Just plainly throwing an exception won't release what you've acquired.


Something like go's defer makes most uses of goto (failure handling) unnecessary. However, there is still the "code a state machine" use case for goto.


Which we are trying to eliminate with tail-call-optimized mutually recursive functions.


Which are semantically less clear than goto, when you are working with something that is semantically a state machine.


Actually, I think that mutually-recursive functions are more semantically clear than goto for a state machine, though using explicit state objects is even more clear than either (though probably less efficient.)
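A sketch of the mutually-recursive style in JavaScript; since JS engines don't reliably perform tail-call optimization, the states are driven by a small trampoline (the machine itself is a made-up example):

```javascript
// Each state is a function that does one step and returns the next
// state (or null to halt) -- mutual recursion without growing the stack.
function runMachine(input) {
  const out = [];
  let i = 0;
  function evenState() {
    if (i >= input.length) return null;
    out.push(input[i++] * 2);
    return oddState;
  }
  function oddState() {
    if (i >= input.length) return null;
    out.push(input[i++] + 1);
    return evenState;
  }
  let state = evenState;
  while (state) state = state(); // trampoline
  return out;
}
```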


ensure: blocks were how it was done in Smalltalk. You simply had a block of code that was followed by another block of code that the system would ensure the execution of. There was slightly more to it. You had to make sure that whatever ran in that block wouldn't take too long to complete, for example.


You're missing the point of `await`.

Await does change how your program is structured, unless we're talking about most trivial cases.

You can use `await` inside a `for` loop—can you do the same with callbacks without significantly re-structuring your code?

What is the “callback analog” of placing something in a `finally` block that executes no matter which callback in a nested chain fails? You'd have to repeat that code.

Await has a potential of simplifying the structure a lot, because it befriends asynchronous operations with control flow.
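A sketch of both points in modern JavaScript (async/await landed in JS after this thread; `fetchItem` and `releaseLock` are hypothetical stand-ins):

```javascript
// Hypothetical async operation: any promise-returning function works here.
function fetchItem(i) {
  return new Promise(resolve => setTimeout(() => resolve(i * 2), 1));
}

function releaseLock() { /* hypothetical cleanup */ }

// With await, an asynchronous loop plus guaranteed cleanup is ordinary
// structured control flow:
async function processAll(n) {
  const results = [];
  try {
    for (let i = 0; i < n; i++) {
      results.push(await fetchItem(i)); // suspends each iteration
    }
    return results;
  } finally {
    releaseLock(); // runs once, no matter which iteration throws
  }
}
```

With raw callbacks, the loop becomes a recursive helper and the cleanup call has to be repeated on every error path.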

>And finally, "await" is only applicable when a single callback gets called once at the end. If you're passing a callback that gets used repeatedly (a sorting function, for example), then normal-style callbacks are still necessary, and not harmful at all.

Indeed, await is only for the cases where we abuse functions (because we don't know what we'll call next, and have to bend our minds around it). Passing a sorting comparator is the primary use of first-class functions.


I'm not disagreeing with you at all.

I guess we're using "structured" in different ways. Libraries like TameJS do wind up creating structures to deal with for loops, in the same way you'd otherwise manually have to deal with. Likewise with exceptions (which I said are the main actual benefit to await, that can't be reproduced in normal callback routines).

You obviously have to write a bunch of "plumbing" code when dealing with "raw" callbacks in complicated situations (like loops), which "await" does on its own -- and writing that plumbing is annoying, although there are libraries to help.

My only point is, the fundamental structure of your program, on a conceptual level, is still the same. Everything's still running the same way, in the same order. It's just more concise, with less plumbing of your own, using "await". So the micro structure is different with await, but the high-level structure is no different. You can't "abuse" callbacks in the way you can abuse goto. Maybe I should have made that clearer.


I see your point. Closures also don't affect structure on conceptual level, but I think they're pretty darn useful. By the way, async can affect things on conceptual level if you embrace[1] it (I posted this link somewhere below as well, but just in case you haven't seen it).

[1]: http://praeclarum.org/post/45277337108/await-in-the-land-of-...


There are other means of accomplishing what you are referring to, for example, the async module for node.js is really nice in terms of having loop workers, as well as flattening out callback structures.

I find that async.waterfall + SomeFunction.bind(...) are insanely useful with node.js ... I don't seem to have near the friction in node that I find when working in C# projects.
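For readers unfamiliar with it, the waterfall pattern threads each step's results into the next step's arguments. A minimal from-scratch sketch (the real `async` npm module provides a production version as `async.waterfall`):

```javascript
// Run tasks in order; each task receives the previous task's results
// plus a callback. Any error short-circuits to the final handler.
function waterfall(tasks, done) {
  const queue = tasks.slice();
  function next(err, ...args) {
    if (err || queue.length === 0) return done(err, ...args);
    queue.shift()(...args, next);
  }
  next(null);
}

// Usage: three steps, no nesting.
waterfall([
  cb => cb(null, 2),
  (x, cb) => cb(null, x * 3),
  (y, cb) => cb(null, y + 1),
], (err, result) => {
  // result === 7
});
```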


Very concise reply with solid examples. You've sold me!


Let's not forget that "if", "for", "while", "switch", and friends, fundamentally, aren't anything more than syntactic sugar for "goto" either. ;)


Very true. :)

The genius of them, however, is that with all of those, we discovered you could get rid of "goto" afterwards. Which was kind of amazing.

"Await", on the other hand, doesn't remove the need for callbacks -- it just makes it much easier to use them in a lot of common use cases, in a much clearer way. But there are still plenty of valid/necessary uses for callbacks that can't be handled by "await".


On the contrary, there are still perfectly good uses for goto. The two I can name right off the top of my head are stack-like error unwinding in C (which comes with an endorsement from CERT recommending its use) and computed goto dispatch tables in threaded interpreters.

Still, you're right that structured control flow statements have obsoleted goto for all but the tiniest edge cases, and so too do I look forward to callbacks suffering a similar fate at the hands of things like await.


The aforementioned CERT guideline:

https://www.securecoding.cert.org/confluence/display/seccode...

Skeptics should note that the real-world example on that page is from the Linux kernel, where this style of goto is used heavily for error handling.


Most definitely. I wish people actually read http://www.u.arizona.edu/~rubinson/copyright_violations/Go_T... to understand why Dijkstra's argument doesn't apply to the cases you describe.


An alternative that can work, unless you need nested loops, would be to just wrap everything in a do {...} while(false), and then call break.
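The same trick works in most C-family languages; a JavaScript sketch with a made-up three-step setup:

```javascript
// do { ... } while (false) gives a single forward jump target:
// `break` skips the remaining steps, so cleanup lives in one place.
function initialize(failAt) {
  const steps = [];
  let ok = false;
  do {
    steps.push("acquire");
    if (failAt === 1) break;
    steps.push("configure");
    if (failAt === 2) break;
    steps.push("start");
    ok = true;
  } while (false);
  if (!ok) steps.push("cleanup"); // single exit point for unwinding
  return steps;
}
```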


Await kills `done` and `error` callbacks, which are always devoid of concrete meaning in the context of function. Of course it can't—and isn't meant to replace callbacks like `comparator`, `predicate` etc.


I was very skeptical at first too. Then I noticed that the positioning of the "Busy = false" statements had been reduced to a structured form, exactly as if we had started with an unstructured GOTO or multiple-exit-point control flow.

As a C++ guy, I don't like his "Busy = false" system to begin with. It would seem much better (to me) if he used a non-copyable object to represent the outstanding activity. Such an object could naturally reset the "Busy" flag in its destructor. But usually there's a better scheme than using a simple Boolean flag to represent a "busy" state anyway. (How is clearing the flag going to release the next guy waiting for the resource?)

So while I'm still a bit skeptical of drawing conclusions from this, I readily admit it's not so off-the-wall as I'd originally thought.


I'm sure Busy isn't meant to represent state—it's a property whose setter and getter call a spinning wheel UI element's StartAnimating and StopAnimating. It's a very common practice in iOS view controllers. That's what I read anyway.


That sounds like state to me.


Do you have any other method of updating the UI without calling corresponding methods? I'm lost on your argument.


Well if the goal is to have a "spinning wheel UI element" to reflect to the user that the app is in a "busy" state, and the UI element requires calls to modify its animation state, then no I don't have a way to do it without calls to the UI element.


I wrote about this very problem some time ago:

http://blog.barrkel.com/2006/07/fun-with-asynchronous-method...

Await is basically a continuation-passing style transform that puts the continuation in the continue handler of the task. The exception handling is also no more or less syntax sugar than the CPS rewrite - it's an error continuation that needs to be routed by querying the underlying Task's properties.

But the really great thing is that you can finally just thread the async keyword through your call stack to get the CPS effect across multiple method call boundaries, even your case of a sorting function. It's not just applicable for a single callback; the second half, the implementation half, is also implemented, so you're not just limited to using the pattern, but creating new instances easily too.
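The CPS-transform view is easy to see in JavaScript, where the equivalent can be written by hand (`fetchText` is a hypothetical task factory):

```javascript
// With await: the compiler turns everything after the await into a
// continuation attached to the task.
async function lengthAwait(fetchText) {
  const text = await fetchText();
  return text.length;
}

// The hand-written continuation-passing equivalent:
function lengthCps(fetchText) {
  return fetchText().then(text => text.length);
}
```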


> I don't think the comparison between callbacks and goto is valid.

What is the largest code base you have worked on that used events + callbacks extensively? I worked on a 250,000 lines-of-code Java trading system, and understanding the way the code flow worked from the central event dispatch loop to handler functions, and back to the event dispatch loop, took me MONTHS and seriously spoiled my quality of life (considering how many hours we spend at work).

Have you used alternative paradigms like Functional Reactive Programming, Promises, etc.? Maybe you don't realize how awesome the alternative is?


for/while are themselves syntactic sugar around goto. You can in fact write them as macros or even functions (using setjmp/longjmp) in C. However, the abstraction in that case will leak - you'll need to put your label manually where the loop needs to go, you'll still need to know you're using goto and how it works and you'll need to keep in mind the exact code that's generated by the macro, or you will get bit by it. Structured looping constructs also don't change the flow in your program and even force another level of indentation, but they allow you to think about it using higher-level building blocks being sure that those blocks always work properly, i.e. they won't break because you have a typo in some conceptually far away line.

Callbacks really are just like goto. I've seen really awful callback code where you have callbacks that create other callbacks which are passed to callback managers which are themselves finite state automatons and call one of the callbacks based on a return value from another. It's the most horrifying spaghetti code you can think of. It's practically fractal - spaghetti within spaghetti that influence the top layer in an untraceable manner.

While everyone can write spaghetti in any language, few can write good code when given just goto or just lambda. In some cases it's even impossible.


There's nothing inherently harmful about goto either, and people who think that just having it around is death seriously don't understand the machines they're programming, and how we implement these magical control structures they love so much. (hint: we use goto)

I don't feel any respect for the article because it's written on the premise that goto is bad, and that it is anything like a callback.

Callbacks have been, and will continue to be an incredibly useful way to handle events.


I don't think the article is written on the premise that Goto is inherently bad, I think it's written on the premise that it can lead to unmaintainable code if misused.

I've had some of the exact situations described in the article come up many times at work — especially with blocks in Objective-C (nested error handling, recursive blocks, and so on). The `await` keyword seems like a very nice tool for a lot of common use cases that can be problematic with blocks/callbacks.


If you like icedcoffeescript, take a look at this:

https://github.com/bjouhier/galaxy

Essentially await/async using Harmony Generators.


If await isn't changing the way you structure your code -- and letting your code robustly handle things that it couldn't handle with callbacks -- I think you're doing it wrong.


Yes a thousand million times. This is the reason why people love golang and why there's a lot of excitement about core.async in the Clojure community, particularly for ClojureScript, where we can target the last 11 years of client web browsers sans callback hell:

http://swannodette.github.io/2013/07/12/communicating-sequen...

Having spent some time with ClojureScript core.async I believe the CSP model actually has a leg up on task based C# F# style async/await. We can do both Rx style event stream processing and async task coordination under the same conceptual framework.


Yeah, C# await/async is cool. But isn't Google Go's approach nicer? Keep standard lib calls blocking as is, and use the "go" statement when you want to run something async? It avoids all the extra DoWhateverAsync() API functions. See http://stackoverflow.com/a/7480033/68707

What's the Go equivalent of "await"? I.e., like the "go" statement but asynchronously wait for the function to return and get its value?


The difference is that Go has what looks like preemptive scheduling for goroutines (and will eventually be truly preemptive; see [1]) while await is more like cooperative multitasking. If you're writing in Go, you should use channels (or mutexes) to avoid race conditions: "share by communicating".

With a single-threaded language using await, it's safer to modify common data structures without locks, although you should be aware that calling a function via "await" gives other tasks the opportunity to modify your data. So "await" appearing in the code explicitly warns you that pseudo-multitasking is happening. In Go, preemption can (in theory) happen at any time, plus there are multiple threads.

[1] http://honnef.co/posts/2013/08/what_s_happening_in_go_tip__2...

Edit: Go isn't truly preemptive in 1.1.
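The "await marks the only interleaving point" property can be sketched in single-threaded JavaScript, whose cooperative model matches the description (a toy counter example):

```javascript
let counter = 0;

async function bump() {
  const seen = counter;    // no other task can run between these two
  counter = seen + 1;      // lines: they execute as one atomic step
  await Promise.resolve(); // <-- the only place other tasks may run
  return counter;          // may no longer equal seen + 1
}
```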


  ci := make(chan int)
  go func() { ci <- doWork() }() // doWork: the function being run "asynchronously"
  result := <-ci                 // blocks here until the goroutine sends its value


If you're using `ci` as just a semaphore, you should make it a chan struct{}, since an empty struct uses even less memory than an int in Go.


I think that WaitGroups or possibly just reading from a semaphore channel might do the trick for you in go.


This sounds sweet. In my book, C# designers have a history of striking a good balance between simplicity and flexibility. This however always leaves me eager to look at Haskell/ClojureScript/other languages where concepts that C# borrowed and simplified are taken to the full (such as monads, iterators, CSP, etc).


The C# designers outdid themselves this time. I'm surprised they managed to make this feature so simple and succinct, especially for an "enterprise" language. Those two little words (async/await) seem like something I would expect from a language like Python or Ruby. Had it been Java, it would be called AsynchronousBureaucraticProccessDispatcherFactoryFactoryFactory.


F# implemented this many years before (6 years ago), and as a library, just by providing the proper language feature, workflows, and a default async implementation.

In comparison, C# adds special compiler keywords for one specific example, just like they did with LINQ. That seems rather ugly IMO. Providing building blocks and letting libraries fill things in is a lot nicer.

This is more of a "C#'s finally catching up with basic features".


I don't think it's fair to characterize this as "catching up". Both F# and C# are developed by an overlapping group of people at Microsoft. And, until recently, the bulk of Haskell's GHC was done by SPJ in a closely collaborating group in Microsoft Research.

The correct characterization is to view this as a pipeline from a research language, to a specialists' language, to a common man's language.

Six years isn't really all that long to wait for a specialist feature to be 1) motivated 2) conceived 3) prototyped 4) validated 5) justified 6) implemented 7) tooled 8) released 9) marketed. Given that there are hundreds of ideas and only so much time, the "minus 100 points rule" [1] basically means that it's no easy feat for a feature like this to show up in a mainstream language. When you consider the quality bar, level of IDE integration, the magnitude of the education effort, and all the other odds and ends, it's something of a minor miracle.

[1]: http://blogs.msdn.com/b/ericgu/archive/2004/01/12/57985.aspx


C# operates entirely differently, I understand. The IDE work is massive and necessary. Having said that, a lot of this stuff is "catching up" or implementing stuff from the 70s. To be clear, it's not like type inference or closures were invented with Haskell, F#, or C#. Stuff like that is pretty well-known PL stuff, isn't it?

People would be upset if C# didn't have for loops; why aren't they upset the type inference is nearly useless?


> People would be upset if C# didn't have for loops; why aren't they upset the type inference is nearly useless?

Can you honestly not comprehend the answer to this? There are plenty of languages without type inference and shit gets done fine. People don't rely on it. People do rely on for loops.


Probably because they don't know it's useless. I use and like C#. The type inference seems useful to me. Avoiding generic parameters on almost every linq extension method is a huge savings in comprehensibility. var x = new SuperDuperLongClassName(); is a nice savings in redundancy.

Where can I see an example of useful type inference?


Off the top of my head, C# can't type infer: Fields, properties, parameters, return types, lambdas, generic type declarations, type constraints.

I did a pretty fair line-by-line translation of a C# app to F#, and the F# version needed 1/20th the number of type annotations.


> People would be upset if C# didn't have for loops; why aren't they upset the type inference is nearly useless?

Because the existing type inference is good enough for the average enterprise programmer.

You know, those guys with good enough CS grades, that just do what they are told and don't even know HN exists.


It's also good enough for some of us that do know HN exists.


Yes, C# tends to copy F# features in the way of compiler-syntactic-sugar. I wonder if Type Providers will be next. Async/await have been around for a couple of years and I haven't heard of the next big C# feature, other than Roslyn.


They seem to be pretty busy with Roslyn, Anders recently admitted it's taking longer than originally expected. So perhaps we need to give 'em a break. The only thing I heard about C# 6 so far is it's maybe going to have more compact class declarations, a-la F# or TypeScript.


C# 2 added generics (courtesy of the same people that did F#) and closures (albeit with syntax as verbose as JS).

C# 3 added LINQ, which is a major breakthrough for end-users, although I'm not fond of the query language. So really, C# 3 just added in some basic features you expect from proper languages. I do understand this required a huge amount of work, esp. with the tooling required.

C# 4 added dynamic (F# provides a ? operator you can supply your own implementation for, if you really feel that strings look ugly). Oh, and it finally backpedalled on the "no optional parameters" stance (although the optional parameters got the same broken C-style callsite implementation).

C# 5 added async (F# had a more flexible implementation 6 years before).

What else? C# seems to have stagnated, although I understand that's a feature for some of their users. C# still lacks type inference in most places, making it extra verbose. C# expression trees are still very limited. C# still can't easily do tuples. Not sure they deserve a break; this was MS's "flagship" language.

OTOH, The CLR itself doesn't seem to be getting any upgrades, either - IL stayed locked at v2. It's as if they realised they have done better than the JVM can ever do (generics via erasure just sucks) so why bother pushing it further?


There is a lot of work still to do on the CLR:

- Improve the GC algorithms, most JVMs have better GC algorithms

- Improve the code quality of the JIT and NGEN compilers, especially the set of applied optimizations

- Expose auto-vectorization and vector instructions (similar to Mono.SIMD)

- Expose something like C++ AMP on .NET.

Some of this was slightly improved on 4.5, but they could do more.


Actually the CLR GC vastly outperforms the Oracle JVM GC in many scenarios, like a native Win app.


Which GC, from the multiple configurable ones?

Not to mention that you are forgetting all the other JVMs that are also available out there.

Also, the set of tuning options is pretty thin compared with what most JVMs, Haskell, and other GC environments offer.

Note that this is not a Java vs .NET rant; I work in both environments. Each has its pluses and minuses.


I'm not sure what you mean by "break", but the comparable timeline for Java and C++ would show far less improvement in terms of the addition of programming-language features (as in the academic discipline), e.g. lambdas and closures. They've done a far better job than most languages of actually advancing the language conceptually, not just adding libraries/features.


>C# seems to have stagnated

To me that seems to be a good thing. The language has picked up a lot of great features that make it nice to program in, but they are also making the language huge. How do all these new features interact? What are the emergent properties of the language? Personally I could use a few years to A) get legacy code caught up and B) explore what already exists.


On the other hand the state machine C# compiler generates is a lot more efficient than the IL an F# async workflow compiles to.


By the way, how do F#'s async workflow compare to ClojureScript? Are they equally powerful?


In F# workflows are just a syntactic sugar for monads, much like Haskell 'do' notation. You can get continuation monad (workflow) in F# easily. I don't know what ClojureScript uses, but it doesn't seem very likely that it has more powerful mechanism :)


Clojure/ClojureScript now has core.async, which is a Go-style implementation of CSP with coroutines and channels.

Like C#, core.async uses a lexical compiler transform to produce a finite state machine for the coroutine. Unlike C#, Clojure can achieve this with a user-level macro, instead of a compiler change. Both C# and core.async differ from Go, in that Go's coroutines have dynamic extent, by virtue of being heap-allocated stacks with a custom scheduler. In practice, this has a minor impact on higher-order usage of coroutines, but it's a smaller problem than you'd think: it's generally advisable to minimize higher-order usage of side effects.

Both C# and Go's approaches can be implemented as Monads, yes. However, Monads are a significantly more abstract thing than either CSP or C#-style Tasks. The do-notation is barely concealed continuation-passing style, which is generally less pleasant to work with than traditional imperative constructs for side effects such as send & receive. "More powerful" isn't a really useful measurement for practical use.


As Brandon alludes below, monadic designs generally have allocation overheads, which is why C# uses state machines. So while they may be equivalent in some abstract sense of "power", one ends up being more efficient in practice.


You know, this kind of BS about Java is a little tiresome.


sometimes the truth hurts


What people like to accuse Java for, I have seen enterprise architects do such examples in C, Perl, C++, Java, C# and about any other language used in enterprise context.


Aw come on. Have you ever programmed in Java? Anybody who has written any java code knows that class name is wrong.

AsynchronousBureaucraticProccessDispatcherFactoryFactoryFactoryInterfaceProvider

There I fixed it for you. And of course you have to specify the provider in the META-INF/async file.


You get a similar interface in Python's Twisted using the @inlineCallbacks decorator:

    @inlineCallbacks
    def example():
        try:
            obtain_some_lock()
            ui_status("Fetching file...")
            result = yield fetch_file_from_server(args)
            ui_status("Uploading file...")
            yield post_file_to_other_server(result)
            ui_status("Done.")
        except SomeError as e:
            ui_status("Error: %s" % e.msg)
        finally:
            release_some_lock()
I must say that this style of writing async code is much friendlier than descending into callback hell.

There is work to make a similar async interface native in Python 3, in PEP 3156 http://www.python.org/dev/peps/pep-3156/, so this should become more widely available even to those who don't use Twisted.


Tornado has a similar interface:

    class GenAsyncHandler(RequestHandler):
        @gen.coroutine
        def get(self):
            http_client = AsyncHTTPClient()
            response = yield http_client.fetch("http://example.com")
            do_something_with_response(response)
            self.render("template.html")
(via http://www.tornadoweb.org/en/stable/gen.html ..)


Obligatory mention of gevent[1].

It handles all the plumbing for you with monkey-patching I/O calls. No annotations necessary, you just write linear code.

[1] http://www.gevent.org


Yes! I really hope this use of yield will bubble up to the language spec and become pervasive in Python. It really strikes me as the Pythonic approach to solving callback hell.


> Yes! I really hope this use of yield will bubble up to the language spec and become pervasive in Python.

And the time machine spaketh: http://www.python.org/dev/peps/pep-3156/#coroutines-and-the-...

(it has no reason to "bubble up to the language spec", the language provides all the right primitives — especially with Python 3's `yield from`, it's up to the libraries to use them. That is a Good Thing.)


Good point. Bubble up to the standard library, then.


That's very cool, thanks for sharing.


I'm not buying the c# async/await kool-aid.

Async, sure, I'm down with that, but I've used the C# async stuff now, and while it makes the app somewhat faster, it has three major downsides (that I encountered):

- Infects everything; suddenly your whole application has to be async.

- Debugging becomes a massive headache, because you end up in weird situations where the request has completed before some async operation completes; the debugger gets scared and stops working.

- It's really hard to test properly.

The only good reason for using it is that because of the infection-property back fitting async to your application is a major headache; if you might use it, you have to use it from the beginning or you get a huge backlog of refactoring and test fixes to do.

-___- 'I might use this for some performance bottleneck I don't yet know about, so better start using it now...' yeah, that's a thing: premature optimization.


But doing things by hand with callbacks is going to have all of those same issues, isn't it? If you don't want to infect everything then you basically need to just write synchronous code instead...


The issue I see is that with some libraries you either go 100% async or 100% sync with no middle ground.


This is simply not true. You can use any C# library without using async (code rewriter). You'd still be using Tasks but there is nothing special about them (no compiler magic). They're just futures on which you can schedule continuations.


I'm not talking about C#, but about libraries in other languages that support async behaviour


PDFJS--the PDF reading package from Mozilla--is a gross offender in this regard; the mix of sync and async, and the somewhat arbitrary nature of which is which, is pretty annoying.


Correct. Same with introducing futures/deferred.


On the debugging issue, I agree that used to be terrible. See the awesome new Asynchronous Debugging feature in VS 2013: http://channel9.msdn.com/Shows/Visual-Studio-Toolbox/Asynchr...


As someone who only recently switched to Node.js from PHP I personally haven't had any difficulty switching over to the callback frame of mind, and I haven't experienced the "callback hell" so many people complain about. At first I was hesitant to start with Node because I saw blog posts by people bemoaning the spaghetti callback code that they ended up with. But I haven't experienced any of that, although I am a relatively newbie Node programmer with only a few months of experience so far. My current project is quite non trivial as well, running into the tens of thousands of lines so far.

The key I've discovered to nicely organizing callbacks is to avoid anonymous callback functions unless absolutely necessary for a particular scope, or unless the function is going to be so trivial and short that it can be read in a single glance. By passing all longer, non-trivial callback functions in by name you can break a task up into clear functional components, and then have all the asynchronous flow magic happen in one concise place where it is easy to determine the flow by looking at the async structure and the function names for each callback function.
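Roughly, the style I mean looks like this (a minimal sketch; the load steps here are made up, and they call back synchronously just so the snippet is self-contained -- real versions would do I/O):

```javascript
// Hypothetical async steps. Each one is a small named function, so the
// "flow" reads top to bottom in one place instead of nesting anonymously.
function loadUser(id, cb) {
  cb(null, { id: id, name: "user" + id });
}
function loadPosts(user, cb) {
  cb(null, [user.name + "-post"]);
}

// Named steps: the flow getProfile -> onUserLoaded -> onPostsLoaded is
// visible at a glance, and each step is unit-testable on its own.
function onUserLoaded(err, user, done) {
  if (err) return done(err);
  loadPosts(user, function (err2, posts) { onPostsLoaded(err2, user, posts, done); });
}
function onPostsLoaded(err, user, posts, done) {
  if (err) return done(err);
  done(null, { user: user, posts: posts });
}
function getProfile(id, done) {
  loadUser(id, function (err, user) { onUserLoaded(err, user, done); });
}
```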

Another major advantage of organizing code like this is that once each step has its own discrete function, instead of being some inception-style anonymous function tucked away inside another function inside another callback, you can properly unit test the individual functional steps. That ensures not only that your code is working and bug-free at the top-level functions, but also that each of the individual asynchronous steps that make up your more complicated logic is working properly.

Most of the bad examples of callback hell that I see have anonymous callback functions inside anonymous callback functions, often many levels deep. Of course that is going to be a nightmare to maintain and debug. Callbacks are not the problem though. Badly organized and written code is the problem. Callbacks allow you to write nightmarish code, but they also allow you to write some really beautiful and maintainable code if you use them properly.


I kinda have to disagree with you here. The problem of callback hell has nothing to do with the functions being anonymous. In fact, you kinda want to have anonymous functions if you want to keep things as similar as possible to traditional code.

For example, when you have code like

    var x = f();
    print(x);
only a hardcore extremist like Uncle Bob would write it as

    var x;

    function start(){
       x = f();
       onAfterF();
    }

    function onAfterF(){
       print(x);
    }
because now your code logic is split among a bunch of functions, the variables had to be hoisted to where everyone can see them, and the extra functions obscure control flow. In the first case it's obvious that it's a linear sequence of statements, but in the second you can't be sure a priori how many times onAfterF gets called, when it gets called, and who calls it.

Coming back on topic, callback hell is not just about nesting, and your current code still suffers a bit from it. The real problem is that you can't use traditional structured control flow (for, while, try-catch, break, return, etc.) and must instead use lots of explicit callbacks. Additionally, for this same reason, callback code looks very different from how you would normally write synchronous code, and it's a PITA if you ever have to convert a piece of code from one style to the other.
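To make that concrete, here's a hedged sketch (with `f` as a hypothetical operation): in the synchronous version `return` and `try/catch` just work, but once `f` takes a callback, both have to be re-encoded by hand:

```javascript
// Synchronous: ordinary structured control flow works as-is.
function syncVersion(f) {
  try {
    return f() + 1;
  } catch (e) {
    return -1;
  }
}

// Callback style: `return` and `try/catch` cannot reach across the
// asynchronous boundary, so both branches become explicit callbacks.
function asyncVersion(f, cb) {
  f(function (err, value) {
    if (err) return cb(null, -1);  // the catch branch, by hand
    cb(null, value + 1);           // the return, by hand
  });
}
```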


The callback-based example is complicated for no reason other than to support your argument. This is how it translates:

    f(print);


Of course it's complicated for no reason. It's an example! The same logic would apply if I had 5+ nontrivial lines of code instead of just a print statement.


I'm sorry, I've hardly ever had an issue with callback-based programming. If you're used to imperative style, maybe the problem is that you're making a mess because you're adapting from a different style and complicating it with workarounds; you need to be functional.


I don't think it's a matter of functional vs imperative. In fact, functional languages give some of the best tools to avoid having to write callbacks by hand. For example, in LISPs the language tends to have explicit support for converting non-callback code to CPS (call/cc and things like that), and in Haskell you have do-notation to get rid of the nesting and hide the callbacks behind some syntactic sugar.


I'm tired to the utmost degree of all these posts about people (supposedly) coming from PHP/C#/Ruby/Python backgrounds and seeing "absolutely no problems" with JS syntax, object model and programming paradigms. There are problems. They are objectively there. If you don't see them, you have to check your critical thinking skills, rather than imply that everyone outside of elite JS circles is simply too ignorant to understand its awesomeness.

The simplest example of callback hell is trying to analyze the workflow of some chunk of code in a debugger. If the code is linear, you place a breakpoint at the beginning of the method you're interested in and go through the code one line at a time. If there are nested statements or method calls, the debugger happily redirects you to them without fail.

With extensive use of callbacks, this becomes impossible. Since callbacks are merely registered in the original method, you need to place a breakpoint at the beginning of every callback function you might encounter in advance. Named callbacks actually make this worse by physically separating the place where a function is registered from its body. Did I mention that you're losing the ability to do any kind of static reasoning, since callbacks are inherently a runtime concept? And the fact that you lose the ability to look at the stack trace to "reverse engineer" why something was called?

Which reminds me of something. Have you ever seen code that reads a global variable, and you have no clue where the value came from? Callbacks create the exact same problem, except they aren't just data, they are code, so the problem can be nested multiple times.


IMO, this is more a problem of indirection than a problem of callbacks.

If you use an anonymous function as the callback there is no indirection, and you know perfectly well where to set the breakpoint.

At the same time, you can also have the sort of debugging problem you mentioned in regular code whenever you call a method on some polymorphic object (the "listener" pattern is just one example of this).


Well in addition to not having a problem with callback hell I also haven't used a debugger in more than three years, so I guess I'm just weird.

As I said I like to write unit tests with my named callbacks. This allows me to test the callbacks as well as the root level functions that make use of these callbacks to ensure that everything is working perfectly.

When I follow this model it is extremely rare that I ever encounter any issues that would need a debugger, and if a problem does arise somewhere the relevant unit test can quickly expose which callback is having a trouble, and precisely what is wrong.

I'm not saying callbacks are perfect. My goal is just to share my technique for organizing my code in Node which I feel has led to some very well organized, testable, and maintainable code.


When someone gives you a sufficiently large codebase written by other people and asks why when they click A they get B, you have two options: 1. Read the code and try to reason about it. 2. Fire up the debugger and replicate user actions.

Guess what? Callbacks in JS make option #1 significantly harder, since they are, essentially, a runtime, weakly typed mechanism for code composition.


This is a common issue.

I guess people who state they are fine with it never worked on typical enterprise codebases, done by several consulting firms over the years.

When one works on their own code, or startup elite programmer style, everything is easy.


I often write synchronous methods that include control flow that nests three deep (say try/finally, if/then/else, and a for loop). Often it's easier to read this code than it would be if everything were split out into separate named methods.

Why would the same not be true of asynchronous methods, assuming that the technology was there to enable it (as it is in C#)?


I agree. Sometimes inline asynchronous callbacks work, just like inline code blocks for if statements or for loops. You just need to train your eye to read them as if they were inline code blocks for an if/then/else block or a for loop.

But sometimes when you get many levels deep in if statements or if a synchronous function starts to reach the hundreds of lines it makes sense to break it up into multiple functions that each have a sensible semantic meaning and which fit on a screen or so. This makes the synchronous code easier to read.

The same goes for asynchronous callback functions. The callback hell that I see most often happens when people have hundreds of lines of inception style anonymous callback functions inside of anonymous callback functions. In this case, just as with the synchronous function that got excessively heavy it makes sense to break things up into multiple functions.

It's all about finding the right balance, and when you do the results are very readable and easy to understand whether you are writing synchronous or asynchronous code.


> Often it's easier to read this code than it would be if everything were split out into separate named methods.

This might actually be a shortcoming of our code organization/reading tools.


Why should that be the case? Creating all the extra methods is going to create lots of new points of indirection, the new methods are likely to be tightly coupled anyway and breaking the nesting might mean you have to hoist a bunch of variables into an outer scope.


Exercise: list all of the implicit assumptions about how code organizing and reading tools have to work from those two sentences.


This use case is inherently more complicated than any of the control flow structures you mention here. You are introducing a new closure, and you don't know when the function is going to be executed.


Anonymous Callbacks != Callbacks

Callbacks have been around forever in C using named functions, and are not specific to either the current generation of programming languages or programmers. One can still use a named function instead of a locally constructed lambda to represent a callback in a high-level language.

The primary difference is that when declaring named functions non-locally, one must explicitly share state through parameters rather than implicitly sharing state through lexical scoping. It seems more accurate to label the problem of nesting lambdas to the point of ambiguity as "Lambda Abuse" or "Lexical Scoping Abuse" rather than "Callback Hell".
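A small sketch of that difference (all names here are hypothetical): the lambda reaches shared state through its enclosing scope, while the named callback must be handed every piece of state it needs:

```javascript
// Implicit sharing: the anonymous callback reaches `base` via lexical scope.
function withLambda(fetchNumber, cb) {
  var base = 100;
  fetchNumber(function (n) {
    cb(base + n);              // closes over `base`
  });
}

// Explicit sharing: the named callback receives all of its state as parameters.
function onNumberFetched(n, base, cb) {
  cb(base + n);
}
function withNamed(fetchNumber, cb) {
  var base = 100;
  // A tiny adapter lambda still binds the state, but the logic lives in
  // a named, independently testable function.
  fetchNumber(function (n) { onNumberFetched(n, base, cb); });
}
```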


Not totally: the problem being described as "Callback Hell" is, in my experience, non-locality of code. Things that relate to one another should be spatially together for a programmer to write, read, and understand, and NOT split into success and error conditions around the axis of a final callback.

Even with named functions, this is the problem- right? You pass in a named callback and you have to find that function later when you're debugging to figure out what's going on. Locality of code means no breaking context to find something not already on screen and therefore easier programming.


The problem is simply that those languages are not Lisp.

Once good patterns of use of GOTO were found, it was natural to criticize random uses, and to wrap good uses in a lisp macro. Or in a new while or for "instruction".

But then the next construct is discovered, and its bad uses considered harmful, and its good uses need to be wrapped. In lisp, mere programmers will just write the next macro to abstract away this new construct. Other programming languages need to evolve or have new language invented with new "instructions".

So now it's the callbacks. Yes, in lisp we'd just use closures, but this is only a building block for higher level constructs. If those "callbacks" are needed to represent futures, then we'd implement those futures, as a mere lisp macro.

Yes, in the other languages you're still powerless, and need to wait for the language designers to feel the pressure and implement a new "future" instruction or whatever.

Any language construct can be considered harmful eventually. Concretely, repetitive use is a hint of that: the construct becomes meaningless because it's used all the time, or apparently randomly (just like those GOTOs). But it's not that the construct is bad, it's that its usage is not abstracted away in higher level constructs. And the only way to do that is lisp macros.

So unless you're programming in lisp (a homoiconic programming language that lets you easily write macros in the language itself), you will always reach a point where some construct can be considered harmful for lack of a way to abstract its uses away.


Ah, but the real problem is not that these languages aren't Lisp, but that they aren't Scheme. People are being forced to write continuation-passing-style code by hand, which anyone would agree is painful. To allow user-level code to abstract away the need to write CPS, you need call/cc (or something like it).


You would never use call/cc directly. Like goto, it is a building block to be used in control-abstraction macros.


Yes. The point is that while the idea of continuations exists, no one should be forced to write continuation-passing-style code by hand!


"The infinite improbability drive is a wonderful new method of crossing interstellar distances in a mere nothingth of a second, without all that tedious mucking about in hyperspace."


Is this a genuine smug lisp weenie post or is it satire?


The former. Pascal isn't someone to be trifled with!


There is some creative use of C# async/await in this blogpost:

http://praeclarum.org/post/45277337108/await-in-the-land-of-...

Basically, the author implements a "first-time walkthrough" kind of interface à la iWork very declaratively by using async:

    async Task ShowTheUserHowToSearch ()
    {
        await Tutorial.EnterText (searchField, minLength: 3);
        await Tutorial.Tap (searchButton);
        await Tutorial.Congratulate ("Now you know how to search.");
    }


What if you have two buttons, search and 'be lucky'? I guess you will have to await a controller object that wakes up if either one of the buttons is triggered, and then test which button was pressed.

Now what if you add a search-options dialog that can be invoked at any moment? I guess when the 'apply' button of the dialog gets pressed, that is still going to be a callback that sets global variables according to the search options.

So, like any mechanism, this way of structuring code has its limits. If an event can happen at any stage of the flow, then this still has to be handled by a callback, otherwise you would have to check for this kind of event after each await clause.


"What if you have two buttons, search and 'be lucky'? I guess you will have to await for a controller object that wakes up that if either one of the buttons is triggered, and then test what button was pressed."

http://msdn.microsoft.com/en-us/library/hh194796.aspx


... select or WaitForMultipleObjects in another form


In event-driven programs there are often events that can arrive at any moment; for example, in networking the connection might have been closed by the peer, or in a GUI the user might choose to alter parameters by means of an options dialog.

So with await you may need to write a wrapper around it; the wrapper function will check for the common events that can arrive at any moment.


Surely async isn't meant to replace events—it's just in some cases, when you expect events to happen in particular order, such as in help tutorial, async gives an advantage.


I agree, although I think callbacks are more like COME FROM than goto. You see a function being passed somewhere as a callback, and you know the block is going to execute at some point, but most of the time you have no idea what the codepath that calls you back looks like.

There's nothing more frustrating than trying to debug why a callback isn't being called. Who calls it? How do I set a breakpoint somewhere to see why it isn't being called? etc.

The one thing that is still missing from await and other green thread approaches is cheap global contexts. Isolating every different green thread so they can't implicitly share state is the obvious next step.


Have you looked at the Erlang model? Each Erlang process (green thread) only gets the arguments initially passed in and whatever else it asks for from other running processes. The only shared state is long-running processes created for the purpose of explicitly sharing state.


Yep. The Actor model that Erlang uses is exactly what I was referring to being missing from green thread libraries for other languages (C#, python with greenlet or generators, js with generators)


PLEASE. There's nothing wrong with COMEFROM.


ICL099I COMMENTOR IS OVERLY POLITE


I generally agree that there are better ways to handle asynchronous control flow than callbacks, but I think this is exaggerated. As in most posts like this, the callback soup examples are difficult to follow primarily because they are horribly written, not because of callbacks.

As long as you write decent code, the main impediment to asynchronous programming is reasoning asynchronously, not syntax. If you require complex asynchronous logic and don't use an appropriate algorithm, you'll end up in the muck whether you use callbacks or await.

Taking go as an example: while I agree that the go statement is more elegant than a callback approach, I see it as quite a minor win compared to channels. The go statement is convenient syntax, but channels are what make concurrency in go feel so robust, and it's a pattern than can be applied just as well in a language that uses callbacks.
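For instance, a toy callback-based channel (my own sketch, not any real library's API) is only a few lines: pending sends and receives are queued, and each send is matched with exactly one receive.

```javascript
// Minimal channel: producers and consumers rendezvous through the channel
// instead of holding direct callback references to each other.
function makeChannel() {
  var sends = [];   // queued { value, done } pairs waiting for a receiver
  var recvs = [];   // queued receive callbacks waiting for a value
  return {
    send: function (value, done) {
      if (recvs.length > 0) {
        recvs.shift()(value);     // a receiver was waiting: hand it the value
        if (done) done();
      } else {
        sends.push({ value: value, done: done });
      }
    },
    receive: function (cb) {
      if (sends.length > 0) {
        var s = sends.shift();    // a sender was waiting: take its value
        cb(s.value);
        if (s.done) s.done();
      } else {
        recvs.push(cb);
      }
    }
  };
}
```

Values come out in send order regardless of whether the send or the receive arrived at the channel first.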


I don't understand what the big deal is. Callbacks are OK. They're less cumbersome if the language you're using has smaller function definitions.

Callbacks 'get crazy' when you've got more than one I think, and thankfully someone smart has made a library you can use to manage them!

https://github.com/caolan/async

Saying that, I don't mind the way things look with the whole await/async stuff in C# and etc. However I don't think we should be waving our arms around saying callbacks are like goto, they so completely are not! I have written heaps of stuff with callbacks and it's _not that confusing or unmaintainable_. It's just different.


How do you do this with callbacks?

    foreach (var player in players) {
        while (true) {
           var name = await Ask("What's your name");
           if (IsValidName(name)) {
               player.name = name;
               break;
           }
        }
    }
Assuming `Ask` is an asynchronous operation and must not block the UI thread.

Note that second player is only asked after the first player has given a valid name.

(And the code structure reflects that :-)

My point is of course it's doable with callbacks, but I spent more time indenting this code than writing it, and I darn well know I'm not smart enough to spell out the correct callback-style code in a comment field on Hacker News. And if I suddenly had to add error handling...


Coroutines (or generators) are a really nice sugar for callbacks. This looks a lot like ES6's yield, just s/await/yield/.

But to answer your question, since these can't be done in parallel, you'd have to keep track of which player you're asking:

    var playersAsked = 0;
    var askNextPlayerHisName = function(done){
        if (playersAsked === players.length) return done();

        Ask("What's your name?", function(name){
            if (IsValidName(name)){
                players[playersAsked].name = name;
                playersAsked++;
            }
            askNextPlayerHisName(done);   
        });
    };
    askNextPlayerHisName(function(){/*...*/});
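And the generator flavor mentioned above, with a tiny hand-rolled runner (Ask/IsValidName are stand-ins as in the parent comments, and Ask here calls back synchronously just to keep the sketch deterministic; the runner works the same if it's truly async):

```javascript
// Minimal trampoline: each `yield` hands back a thunk (a function taking a
// callback); the runner resumes the generator with the callback's result.
function run(genFn, onDone) {
  var gen = genFn();
  (function step(value) {
    var r = gen.next(value);
    if (r.done) { if (onDone) onDone(); return; }
    r.value(step);   // r.value is a thunk: (callback) -> void
  })();
}

// Stand-ins for the thread's Ask/IsValidName: pops canned answers.
var answers = ["", "Alice", "Bob"];
function Ask(question) {
  return function (cb) { cb(answers.shift()); };
}
function IsValidName(name) { return typeof name === "string" && name.length > 0; }

var players = [{}, {}];
run(function* () {
  // Reads exactly like the await version, with s/await/yield/.
  for (var i = 0; i < players.length; i++) {
    while (true) {
      var name = yield Ask("What's your name");
      if (IsValidName(name)) { players[i].name = name; break; }
    }
  }
});
// After run completes: players[0].name === "Alice", players[1].name === "Bob"
```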


Just hope that the players won't enter too many invalid names: http://stackoverflow.com/a/7828803/260556


Not quite in this case. I'm making an assumption that the Ask function provided by the parent is actually asynchronous.


  (function(players) {
    var cur_idx = 0;
    function cb(name, err) {
      // just kidding, not going to handle errors!
      if (IsValidName(name)) {
        players[cur_idx].name = name;
        cur_idx++;
      }
      if (cur_idx < players.length) {
        Ask("What's your name", cb);
      }
    }
    Ask("What's your name", cb);
  })(players);
This is of course completely awful. just2n's reply is a nicer realization of the same concept. sprobertson's reply clearly will not work as written, and I don't expect it's even possible to condense this code into a single call to eachSeries.


I would do it in Node.js like the following:

  async.eachSeries(players, askName, function (err) {});

  // this is someplace it should go
  function askName(player, callback) {
    Ask("What's your name", function (err, name) {
      if (err) { return callback(err); }
  
      if (IsValidName(name)) {
        player.name = name;
        callback(null);
      } else {
        askName(player, callback);
      }
    });
  }
This follows Node's err convention. Note I use a bit of old-fashioned recursion to handle asking for a valid name. This will not stack overflow because Ask is async. You could inline askName, but then you wouldn't have a nice little unit-testable function.


I'd do that with the aforementioned `async` library, specifically `async.eachSeries`:

    async.eachSeries players, ask, ->
       # Well that was easy enough...
       printRoster(players)


Where is the part that checks if the name was invalid and then asks again?


    function askPlayers(players, fn) {
      if (!players.length) return fn();

      var player = players[0];

      ask("What's your name", function(name) {
        if (isValidName(name)) {
          player.name = name;
          players.shift();
        }

        askPlayers(players, fn);
      });
    }


You do it like this (using async):

  async.eachSeries(
      players,
      function(player, playercb) {
          var valid = false;
          async.whilst(
              function() { return !valid; },
              function(wcb) {
                  Ask("What's your name", function(name) {
                      if (IsValidName(name)) {
                          player.name = name;
                          valid = true;
                      }
                      wcb();
                  });
              },
              playercb);
      },
      function() { /* done */ }
  );
More lines than the foreach loop, but on the other hand, if this was an operation you wanted to do in parallel instead of sequentially, that'd be impossible with the simple loop construct.


  for player in players
      get_name_for = (player) ->
          ask "what's your name?", (response) ->
              if is_valid_name response
                  player.name = response
              else get_name_for player
      get_name_for player


This looks like it does the wrong thing. If "ask" gets to access the scheduler's task list as a queue, then entering an invalid name in the first response and only valid names thereafter will cause the first valid name to be given to the second player, the second valid name to the third player, and so on. If "ask" gets to access the scheduler's task list as a stack, then a sequence of only valid input names will cause the last player to have the first input name, the second-last player to have the second input name, and so on.

Edit: I was optimistically assuming that the consumer of the many "asks" that are created all at once would process them sequentially, dealing with one and invoking the callback before dealing with the next. If you do not assume this, my problem disappears and you get the simpler problem of spawning many prompts simultaneously.


Hmm, I wrote it so that the function closes over the player, so that shouldn't happen. The real issue is, as pointed out, that this will ask for all the names at once, rather than sequentially. Whether this is bad or not depends on how the `ask` function gets its input.


This will ask two players simultaneously. My example waits for each player to provide a valid name in turn.


ok, sure, but it's still fixable without having to use await. you'd have to forego the for loop and make that flow control part of the callback cycle.

was your point that it couldn't be done with callbacks? or couldn't be done easily? or not easily alongside traditional flow control like for loops?

I agree, await and async in C# are very nice, I just took your post as a challenge.

  get_player_name = (player, next) ->
      ask "what's your name?", (response) ->
          if is_valid_name response
              player.name = response
              next!
          else
              get_player_name player, next

  get_player_names = ([player, ...players]) ->
      get_player_name player, ->
          if players.length > 0
              get_player_names players

  get_player_names players


Exactly, this was my point.

It took me about as long as I typed this code to write it.

Of course it is doable with callbacks, but I know I'm not smart enough to do it in a comment field on HN.


Point taken, yet this particular problem with players asked one after the other is simpler than the case when players are each spawned their own Ask. Here's a simple solution to your problem:

  function getAllNames(players, callback) {
      function getPlayer(i, players, callback) {
          Ask("What's your name", function(name) {
              if (isValidName(name)) {
                  players[i++].name = name;
              }
              if (i == players.length) {
                  callback(players);
              } else {
                  getPlayer(i, players, callback);
              }
          });
      }
      getPlayer(0, players, callback);
  }


Here is an example where callbacks are even less intuitive:

  if not song.artist
    @getArtist song.id, (err, artist) =>
      song.artist = artist
      @save song
      @addToCatalog song
      #...
  else
    @save song
    @addToCatalog song
    #...
Callbacks will force you to move @save, @addToCatalog, ... into a separate function, completely messing up the logical sequence of operations.


Maybe like this?

  players.forEach(function(player) {
    'use strict';
    Ask("What's your name?", function(name) {
      if (isValidName(name)) {
        player.name = name;
      }
    });
  });


Same problem like with the sibling post: this will ask two players simultaneously. My example waits for each player to provide a valid name in turn.


If you're doing everything sequentially anyway, why bother with the awaiting part? As far as I can see your example would be functionally unchanged if you wrote the same code except without the await keyword.


Because it doesn't block the thread. The idea is that this code is executed inside of a thread that, if it blocks, will cause the application to hang. For example, in a GUI or a server. So, if its a thread driving a GUI, and it's blocked on user input, then the entire application interface will be unresponsive until it receives that input.


This. Specifically, I'm thinking about iOS prompts and alerts, they are not blocking.


Great comment! I missed that.


Well, if the Ask function should only run once at a time, it should block itself.


Imagine it's an iOS modal prompt. It doesn't block the thread, it sends an event.


Holy Baader-Meinhof, just today, in frustration, I wrote something like Haskell's sequence_ for ContT, in Javascript:

https://gist.github.com/cscheid/6241817


Why do you as an American feel the need to invoke some dead German left-wing militants in a pseudo-religious phrase that's meaningless except maybe for shock value? This seems highly inappropriate for any website and even more so on HN.


It's not clear if you know or not, but it's the common name of http://en.wikipedia.org/wiki/List_of_cognitive_biases#Freque...


Thanks, no, I wasn't aware of that oddity.


It's a reference to the Baader-Meinhof phenomenon, a.k.a. "frequency illusion".


(for whatever's worth, I'm Brazilian)


Aren't you still American? ;)

As an Indian I am confused why only people of the US are called Americans while two entire continents are called America.

And also, why we Indians are not considered Asians by the said Americans.


Because "America" is part of the nation's actual name, and the only part that isn't a modifier. What else could you call Americans? Unionized Statists?


I actually use the term "USian" quite frequently, but then I'm weird that way.


Time for bed. "Un-ionized? What is this person trying to say?"


typically American response


(C/OS developer spiel)

I'm sick of these app developers assuming that using "goto" is bad practice. The fact is that "goto" is used plenty in great production code you're probably running right now.[1] I'd like to know a cleaner way to abort a function into its cleanup phase when a function call returns an error. And "goto" statements are extremely simple to handle for even the most naive of compilers.

[1] https://www.kernel.org/doc/Documentation/CodingStyle (see chapter 7)


Exceptions


Not terribly plausible in kernel code.


Not all kernels are UNIX.


I didn't know UNIX won't allow kernel exception handling. Source?

Of course exceptions are suboptimal; bad performance compared to "goto" can have unintended side effects depending upon the OS (scheduling work may be triggered off of the additional interrupt).


> I didn't know UNIX won't allow kernel exception handling. Source?

My statement was based on ideology, as I doubt typical UNIX kernel coders would ever allow exceptions, given that C does not support them and they are against the UNIX way.

> Of course exceptions are suboptimal; bad performance compared to "goto" can have unintended side effects depending upon the OS (scheduling work may be triggered off of the additional interrupt).

Exceptions at the kernel level are possible, Windows does it for certain classes of errors, for example.

Some other commercial or research kernels might do it as well.


Evan Czaplicki (author of Elm lang) made the identical argument (sometime?/years ago), with the same reference to Dijkstra's quote, but with another suggested solution, Functional Reactive Programming, on which his language is oriented:

http://elm-lang.org/learn/Escape-from-Callback-Hell.elm


HN discussion of that article: https://news.ycombinator.com/item?id=4732924

FRP is an interesting topic (I thought so anyway, I wrote a paper on it for my MS). It doesn't seem to have caught on widely as a paradigm, with a few exceptions I'm aware of (Elm, Meteor).


Rx (for C#), RxJS (for Javascript), Bacon.js.

Definitely not 'widely' but there are a few library implementations out there.

I reckon that the reason for this is that it's most useful when the interactivity is high (i.e. there are a lot of events to react to), but most applications (desktop, web) don't have a high enough number of events to make learning a new paradigm 'worthwhile'.


I had thought the most widely used implementation of FRP was in ReactiveCocoa.


I was pleasantly surprised that catoverflow is an actual site.


Anonymous callbacks are very powerful and very important. They will make you feel bad for unnecessary nesting. They will force you to learn how to abstract better, especially state changes. They will show you how nice and reliable code can be if it doesn't have shared states across multiple functions and how easy it is to understand consistent code with explicit continuations and how to write one yourself. They will make you a better programmer.

And "await" can only make it harder to visually distinguish which piece of code is executed in parallel and which is executed sequentially. Nesting makes it explicit.


Why do people insist on analysing things using analogies? Analogies are useful for explaining a concept that might not be obvious. Saying callbacks are like gotos, gotos are bad, therefore callbacks are bad is ridiculous.

And he gives some sample code where the 'problem' has nothing to do with callbacks; it's just nested lambdas. In fact I find that code quite easy to read, and would be very interested in seeing the same functionality implemented some other way, bearing in mind it is quite a difficult problem to synchronize multiple async operations and usually requires horrible code using multiple mutexes.


I'm confused as to why you think the code presented in the blog post isn't an example of 'the same functionality implemented some other way'. The await code is almost undeniably more straightforward and the exceptional cases more obvious to handle.

He also doesn't seem to be making a weird logical leap the way you claim he is. He's not really using an analogy. He's saying callbacks are bad the same way goto is bad, in that they make the logical structure of a program's execution non-obvious, particularly over time and when being modified.


I'm not saying the await syntax isn't useful or a better way to implement that problem. The title and opening of the article is comparing callbacks with gotos. It would be logical if he titled the article "C# Await syntax trumps callbacks when multiple callbacks need to be synchronised".

Any syntax can be used to create spaghetti code. Callbacks only become spaghetti-like when nested / chained / abused. I've written hundreds of API methods using callbacks that I believe are very clean. I've recently written a large API which makes async service calls that follows this pattern:

void GetInvoiceHistory(int? customerId, Action<List<InvoiceItem>> callback, Action<string, Exception> exceptionCallback);

The consumer of this API does not need await, there is no need to nest callbacks, there is no need for try/catch and it is very clean to work with. So IMHO, stating that callbacks are this generation's gotos is a pile of shit.
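For what it's worth, the same two-callback shape can be sketched in plain JavaScript (all names here are hypothetical, and the "service call" just returns canned data):

```javascript
// Hypothetical sketch of the two-callback API shape described above:
// one callback for results, one for failures, no nesting required.
function getInvoiceHistory(customerId, callback, exceptionCallback) {
  if (customerId == null) {
    exceptionCallback('invalid customer', new Error('customerId is required'));
    return;
  }
  // A real implementation would make an async service call here;
  // this sketch just hands back canned invoice items.
  callback([{ id: 1, total: 42 }]);
}

// The consumer stays flat: no nested callbacks, no try/catch.
getInvoiceHistory(7,
  function (items) { console.log('got ' + items.length + ' invoice items'); },
  function (message, err) { console.error(message, err); });
```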


This definitely is a problem in Obj-C. Using GCD and callbacks is usually easier to understand than delegates and manual thread management, but it's still not great. I would love to see something like async/await in Obj-C. There are some great ideas on how to get something similar in this blogpost, but none that I would use in production code unfortunately: http://overooped.com/post/41803252527/methods-of-concurrency


It's right that the callback model sucks, and the task model is the way to go.

  Sadly, many developers when they hear the word "C# async" 
  ...
  All of these statements are made by people that have yet
  to study C# async or to grasp what it does.
But it's unpleasant to see the author talk about the concept of a task - lightweight threading, coroutines, or whatever - as if it were a patent of C# (or F#). And furthermore, treating many developers as unable to understand this concept.

Maybe true for the people around him.

I understand his position as a lead developer and an evangelist of Mono/C#, but this attitude is ridiculous.


Have you read the article? It's not about tasks per se, it's about a code rewriter.


This article is about a structural paradigm shift that has been foreseen for a long time, from the Actor model and Erlang to, more recently, Go and Rust.

How is this just an introduction to new syntactic sugar?


ES6 generators combined with promises will bring this to the javascripters: http://taskjs.org/


In the right places and for the right reasons they are fine. A lot of code today devolves into what I've come to call "callback spaghetti" and, well, good luck. The toughest thing sometimes is getting your mind around what is supposed to happen and, more importantly, what is not.

I found that sometimes it helps to build a state machine to effectively run the show and try to limit callbacks to setting flags and/or navigating the state tree. State machines make following code functionality a breeze, even when dealing with really complex logic.


That's a smooth point! As you probably know, async code is translated by C# compiler to a state machine[1].

[1]: http://stackoverflow.com/a/4047607/458193


Yup yup yup. I thought I was the only one who noticed that node had reinvented the Windows 3.1 programming loop.


Windows 3.0 had it before Windows 3.1.


Callbacks are basically COME FROM, especially on a platform like a cell phone where you at least in theory have limited processing resources ($40 Android phones need apps, too!). They are the devil.


> Callbacks are basically COME FROM

No, they aren't even similar to COME FROM. COME FROM is "upon reaching label X, jump to this point". Callbacks have less in common with COME FROM than with GOTO, and less in common with GOTO than with normal procedure/function invocation.

> especially on a platform like a cell phone where you at least in theory have limited processing resources

Platform is completely orthogonal to the relation between callbacks and other programming constructs.


I see your point, but they read a lot like come from when looking at code.

And platform is not orthogonal: sometimes I need to know exactly what is using how many cpu cycles, and callbacks make it hard.

Admittedly I mostly do embedded dev, but there's a reason why I have the only remote control / video app out there that still works on a HT G1 :)


iced-coffee-script has a similar solution. http://maxtaco.github.io/coffee-script/


Iced Coffeescript is brilliant! I think it is the closest you can get right now to sane development with callback-oriented code in Javascript.

Unfortunately it doesn't solve the exception problem yet. Exceptions passed to callbacks (as `(err, value)`) are not thrown. So instead of try/catch there will be lots of `return cb(err) if err` in your code.
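Concretely, the pattern being complained about looks like this in plain JavaScript (file names and helpers are made up):

```javascript
// Node-style (err, value) callbacks: errors are values, not exceptions,
// so every intermediate step has to forward them up the chain by hand.
function readConfig(name, cb) {
  if (name !== 'app.json') return cb(new Error('not found'));
  cb(null, '{"port": 8080}');
}

function loadPort(name, cb) {
  readConfig(name, function (err, text) {
    if (err) return cb(err);        // the `return cb(err) if err` boilerplate
    var parsed;
    try {
      parsed = JSON.parse(text);    // sync exceptions still need try/catch...
    } catch (e) {
      return cb(e);                 // ...and are then forwarded as values too
    }
    cb(null, parsed.port);
  });
}
```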


I was expecting to read something about FRP or other naturally reactive programming models that dealt with the semantic complexity of callbacks, not just their syntactic complexity. I don't think async constructs and others that depend on CPS techniques are really going to save us from complex programs that we barely understand.


Meh.

Callbacks are a very limited way to do asynchronous programming. However they are a good way to create interfaces that let you call methods and insert your own functionality in the middle.

So yes. Better async is good. But don't take away callbacks. They have their uses.


lthread is a coroutine library that allows you to make blocking calls inside coroutines by surrounding the blocking code with lthread_compute_begin() and lthread_compute_end(). This is equivalent to async calls but without the need to capture variables.

http://github.com/halayli/lthread/

Disclaimer: lthread author


Instead of using callbacks, golang embraces synchronous-style calls and makes them asynchronous by switching between goroutines (lightweight threads). gevent (for Python) does something similar. It's certainly an interesting approach IMO.


I noticed that most of the methods awaited on had an Async suffix in their name. Is that some sort of modern hungarian notation, and is it even necessary? It also looks like you can't pass timeouts to await.


It is a convention, often used to differentiate blocking and asynchronous methods in the APIs (e.g. `Read` and `ReadAsync`, etc). You're not required to use it, but it is useful whenever there is a chance of confusion.

As for the timeouts, it would be strange to bake this into a language (different platforms may support different timers, at the very least).

Instead, you use library for this[1]:

    int timeout = 1000;
    var task = SomeOperationAsync();
    if (await Task.WhenAny(task, Task.Delay(timeout)) == task) {
        // task completed within timeout
    } else { 
        // timeout logic
    }
Instead of awaiting on a task, you await on `WhenAny` combinator.

[1]: http://stackoverflow.com/a/11191070/458193
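The JavaScript equivalent of that combinator, for comparison, is `Promise.race` (modern JS; the helper names here are made up):

```javascript
// Race the operation against a delay; whichever settles first wins.
function delay(ms, value) {
  return new Promise(function (resolve) {
    setTimeout(function () { resolve(value); }, ms);
  });
}

var TIMED_OUT = {};  // unique sentinel so a task result can't fake a timeout

function withTimeout(task, ms) {
  return Promise.race([task, delay(ms, TIMED_OUT)]).then(function (winner) {
    if (winner === TIMED_OUT) return 'timeout logic';  // timeout branch
    return winner;                                     // completed in time
  });
}
```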


The Async suffix is here to differentiate with the synchronous version of an existing method returning T, the asynchronous return a Task<T>. With a Task<> you can do :

  int timeout = 1000;
  var task = SomeOperationAsync();
  if (await Task.WhenAny(task, Task.Delay(timeout)) == task) {
      // task completed within timeout
  } else { 
      // timeout logic
  }


According to the Microsoft doc[1] it is convention for async to return a Task but not a requirement. Since the return type of Task tells you a method is expected to be used asynchronously, also encoding that in the name seems somewhat redundant. I guess you have to use a different name because of static typing so Async is as good a suffix as any.

[1] http://msdn.microsoft.com/en-us/library/vstudio/hh156528.asp...


This is all simply sugar to hide behind-the-scenes threading behind very narrow interfaces. Which isn't necessarily bad, but it's fun to see it suddenly in favour again and presented as something new.

E.g. Simula67 had Call and Detach for the basic case, and Activate(object representing async behaviour) and Wait(queue) that would both depending on need often be used for the same purpose (as well as a number of other operations). We had to write code using those methods in Simula67 in my introduction to programming class first semester at university...


That was my thought. I always get lost in these discussions because I have to translate the bizarre lingo into basic threading primitives I can understand. Once I've done that I don't understand what all the fuss is about.


I think the issue is that most programmers actually never touch threading, or if they do, they get complicated threading models shoved in their faces and run away in horror (which is a reasonable reaction). But these constructs looks sort-of like they're just wrappers around callbacks.

Never mind that the only reason callbacks are all that interesting in JS is because they allow the engine to sneak in threading behind your back - most people have a very woolly idea of how JS execution happens, and of the fact that JavaScript execution itself isn't threaded.


You can write any article like this about anything, here is the formula:

- pick a language feature

- write an article with the title "<language feature> as our Generation's Goto Statement"

- write an example where you misuse <language feature> and over generalize it

- show a workaround that doesn't really save the trouble of actually thinking before typing

The hard thing about callbacks is that you need to think about asynchronous processes, which can be hard. The callbacks themselves are not the problem, so replacing them with something else won't help you too much.


I am very happy that my current (PhD) code doesn't require me to deal with blocking calls to things (it's just one massive calculation essentially). I remember this nastiness from when I had a real job, and I'll no doubt have to deal with it again when I escape academia.

This article is very interesting - I enjoy articles which spell out the usefulness of a new language feature. I haven't used C# for a few years, and this is a great advert for coming back to it one day.


I have a basic technical question. I work in C for embedded systems, so I'm a bit "behind the times."

How is "await" any different from a regular blocking system call? A regular system call does exactly what is being described: The system call happens, and then when it is finished, execution resumes where it left off.

(Yes, this makes the thread block... which is why you have multiple threads. I think the answer will have something to do with this, though...)


If you remember the (gnu, ossp) pth library or any of the various cooperative multitasking equivalents for C, this is all the people raised on javascript, who started writing callback APIs for C#/ObjC discovering the same idea.

await is equivalent to "yield" or "wait" in most of those systems. The idea is to pause execution and jump back to the cooperative scheduler, which will eventually execute the function call in question. Once that call ends up with a result, the scheduler will then (eventually) resume your function with the result at the point you yielded. In the midst of all that, various other threads of control will be scheduled briefly -- for instance, the ones servicing sockets and whatnot. It's all single threaded and cooperative -- typically with an event system embedded.

Part of the callback "mess" is the result of the short memory of our industry. I was working with these kinds of systems in C code years and years ago. It didn't take long for most people to realize that you wanted to abstract the callbacks to the scheduler so you were writing your code instead of fucking with the state machine mechanism used to implement the cooperative framework in question. No references to abstract CSP (or CPS :-)) ideas needed, really -- it's basic practical knowledge that seems to have been ignored over time.
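A toy version of such a cooperative scheduler, sketched with modern JavaScript generators standing in for the C coroutines (purely illustrative):

```javascript
// Tasks are generator functions; `yield` hands control back to the
// scheduler, which round-robins between runnable tasks until all finish.
function run(taskFns) {
  var queue = taskFns.map(function (fn) { return fn(); });
  var log = [];
  while (queue.length) {
    var task = queue.shift();         // pick the next runnable task
    var step = task.next();           // resume it until its next yield
    if (step.value !== undefined) log.push(step.value);
    if (!step.done) queue.push(task); // not finished yet: reschedule it
  }
  return log;
}

var order = run([
  function* () { yield 'a1'; yield 'a2'; },
  function* () { yield 'b1'; }
]);
// the tasks were interleaved: ['a1', 'b1', 'a2']
```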


I'm not a believer, so I may be missing something important...

As far as I can see, await makes it possible to do things like:

- UI Thread starts Thread X to do something slow;

- Thread X is processing; in the meantime the user clicks on something that requires X's result;

- Only now the UI thread stops, waiting for X.

Your program stays responsive, unless it's completely unable to do so.

As I said, I still don't think it's such an important feature. It is quite rare that one needs this kind of coordination in UI programs, and there are better mechanisms for non-interactive software (which optimize for throughput, not responsiveness)... On second thought, it may be very relevant for games, I don't know.

I also don't get why the node.js people consider it so important to have concurrency in a web server. For me, it only makes sense for sites that have fewer concurrent requests than server cores, AKA: nobody.


No, you got it wrong. Await never blocks the UI thread.

Instead, the compiler rewrites your sequential code into a state machine. When you await on a task, the compiler turns this into scheduling a continuation. No blocking.
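In JavaScript terms, the rewrite amounts to something like this (a hand-done sketch, not what any particular compiler emits):

```javascript
function fetchData() {
  return Promise.resolve('payload');  // stands in for a real async operation
}

// What you write: reads sequentially, but never blocks the calling thread.
async function show() {
  var data = await fetchData();
  return 'rendered ' + data;
}

// Roughly what it becomes: the code after `await` is split off into a
// continuation that is scheduled when the task completes.
function showRewritten() {
  return fetchData().then(function continuation(data) {
    return 'rendered ' + data;
  });
}
```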


Yep, I got it wrong. Thanks for pointing that.

Now I'm also wondering how I'd use something like that in an imperative language... Well, I have some studying to do.


The idea is that while you're waiting on some kind of I/O or other asynchronous activity to complete, the thread you're on can do some other work. Threads are relatively heavyweight compared to the sort of cooperative multitasking that can be done through await or callbacks.


You never want to block UI thread in a GUI app.


Completely OT, but when Dijkstra says:

>My second remark is that our intellectual powers are rather geared to master static relations and that our powers to visualize processes evolving in time are relatively poorly developed. For that reason we should do (as wise programmers aware of our limitations) our utmost to shorten the conceptual gap between the static program and the dynamic process, to make the correspondence between the program (spread out in text space) and the process (spread out in time) as trivial as possible.

He's touching on a crucial point in Immanuel Kant's philosophy. Kant theorized that though humans received their sensations as a constant stream of input in time (which is an internal condition of human beings, not a feature of bare reality), we can't actually do anything with that stream without applying concepts so as to form concrete (or abstract) objects, i.e. chairs, black holes, mothers, etc. But how would our minds know when to apply this concept or the other? Kant's reply was that our minds look for little clues called 'schematisms' which tell us what the most appropriate fundamental concept to apply to a part of the stream is, upon which others could be combined to produce objective representations we can think and act upon.

Almost a hundred years later, Nietzsche would claim (paraphrasing) that a measure of strength in a human being is the extent to which they can 'consume' phenomena in time, weakness being how direly one needs to apply a static idea to phenomena (like morals, stereotypes, prejudices, cause and effect, etc).

I'm just noting an interesting entry point into an old philosophical conversation. If it's understandable then I hope someone finds it interesting.


The only thing I’ve encountered in modern day programming which is really as evil as goto is aspect oriented programming (AOP). Maybe there are different implementations of AOP but in the one I’ve used you were basically able to hook into every method from everywhere and it was impossible to have any grasp on the flow of the program. That is besides using a debugger.


JS gives you the tools to cope..

  function sequence(fns){
   var fn = fns.pop(); 
   while(fns.length) fn = fns.pop().bind(this, fn); 
   fn();
  }

  sequence([
    function(k) { funcy(1, k) },
    function(k) { funcy(2, k)  },
    function(k) { funcy(3, console.log)  }
  ]);

  function funcy(v, cb){
	console.log(v);	cb(v);
  }

  // ==> 1 2 3 3


ES6 generators will be the solution for callback hell in node.js. Node 0.11 already has generators support hidden behind a flag (--harmony-generators) and eventually it will be enabled by default. Generators + libraries like this

https://github.com/jmar777/suspend

will make node.js code more readable.
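The core trick behind suspend-style libraries can be sketched in a few lines (a toy runner, not the library's actual API):

```javascript
// The generator yields after kicking off a node-style call, passing the
// runner's `resume` function as the callback; when the callback fires,
// the generator is resumed with the result, so the body reads linearly.
function runGen(genFn, done) {
  var gen = genFn(resume);
  function resume(err, value) {
    if (err) return gen.throw(err);  // async errors become normal exceptions
    step(value);
  }
  function step(value) {
    var r = gen.next(value);
    if (r.done) done(null, r.value);
  }
  step();
}

// Stand-in for an async node-style API (the callback must fire asynchronously).
function readFile(name, cb) {
  setTimeout(function () { cb(null, 'contents of ' + name); }, 0);
}

runGen(function* (resume) {
  var text = yield readFile('a.txt', resume);
  return text.toUpperCase();
}, function (err, result) {
  console.log(result);
});
```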


Callbacks are a tool in my toolbox. General event based programming is a tool in my toolbox. Various threading models are a tool in my toolbox. Just because there are situations where a tool is not the best choice does not make the tool bad... it means you use a different tool in that case.


Miguel is always fun/good to read.


Can someone explain to me the difference between this and futures (specifically, futures in c++11)?


I can't comment on futures in c++11, but async/await is more like promises + coroutines, rather than just promises (as defined by Promises/A+)


std::async uses threads (if used with std::launch::async) whereas C# await uses coroutines. You can use Boost.Coroutine to implement something similar in C++ as that project has done: https://github.com/vmilea/CppAwait


This is what continuations are for.


Given that your handle has "racket" in it, I'll assume that you're a call/cc kinda guy.

Even if you're not, others may enjoy this argument against call/cc by Oleg:

http://okmij.org/ftp/continuations/against-callcc.html

Delimited continuations are a huge improvement over undelimited ones, but still, by themselves, any kind of continuations feel like (to me) the GOTOs of functional programming.

More recently, people are doing work with "effect handlers". See Eff and its research papers, for example:

http://math.andrej.com/eff/

This model is safer, faster, easier, and more composable than general purpose undelimited continuations.


Continuations don't help with the problem that the visual structure of callback-oriented programs doesn't reflect the order of execution. As a heavy JS programmer, that's the most compelling point for me in this post.


> the visual structure of callback-oriented programs doesn't reflect the order of execution.

One of my bosses made this assertion about Object Oriented code that followed the Law of Demeter and other OO best practices. I don't think he's entirely the best OO person, or entirely on the right track. However, I would venture to say that all programming paradigms hit a point where visualizing the flow of control gets exhausting. If it's not a twisty maze of little methods, all looking the same, then it's a twisty maze of callbacks...


Callback-oriented code is different. With by-the-book OOP code you're still executing one line at a time. You might be teleporting in space, which has its own problems, but your code still reflects the order of execution.

With callback-oriented code you're teleporting in space and time.

They can both make it hard to trace the path of execution. At least with OOP code you have a sensible stack trace, though. ;)


True about the stack trace, but teleporting around in space is still the same kind of difficulty. You're taking a whole and scattering it around.


I think he was referring to features like call/cc. They are very similar to C#'s await in that you write regular looking code and the language does the heavy lifting of figuring out what the real continuation is supposed to be.


Actually, they do. C#'s await (and any similar scheme based on ES6 generators) is a feature that can be built using continuations.


So are Java/C#/everything-else exceptions. And, for that matter, pretty much every control flow construct imaginable.

OTOH, I think that the big problem with continuations is that it gets very difficult to build efficient implementations of them (and this tends to impact not just the efficiency of code that uses continuations, but usually the efficiency of any code in a language which supports them), and it is much more efficient to implement specialized weaker (but good enough for most key use cases) forms of the most important applications of continuations.


Supporting call/cc and dynamic-wind has a significant performance impact in some languages, even for code that does not use the features.

Supporting coroutine.create+coroutine.clone, shift+reset, or setcontext+getcontext+makecontext+swapcontext seems to have no performance impact on code that does not use the features.


In fact, Eric Lippert, when first introducing that feature on his blog started with a five-part series about continuations and only in the end got around to explaining what that was all about. It was a very nice read.


Akka adds something similar in Scala land (and Java) called Dataflow concurrency.

http://doc.akka.io/docs/akka/snapshot/scala/dataflow.html


How is this different from Fibers in Ruby? One can accomplish the same thing with Fibers.


To appreciate whether callbacks are Goto and what to do about them, it is probably good to read a good perspective on Goto from back in the day: http://cs.sjsu.edu/~mak/CS185C/KnuthStructuredProgrammingGoT...

When skimming it, I noticed the appeal to events and the precursors to literate programming (Knuth eventually came up with literate programming a few years after this paper was written).


futures are so nice to use in scala. warping back to my planet..


Scala has both Futures and Promises which are inherently much nicer than that async call. That's what happens when you allow things to compose. Glad c# gets something.


I love when Node.js advocates try to convince you that promises are as good a concept as anyone would need to handle asynchronous programming.


Promises were removed from node core a long time ago, and remained unpopular until very recently. They are making a comeback due to lobbying in standards committees, forward-compatibility with ES6 generators, and the jQuery effect.


Fair enough. I guess I'm confusing IRC with reality again.


As soon as you have anything involved that's async to your process or thread, you're going to operate most efficiently with something along the lines of a callback. I don't see them as a Goto at all; they're much more like interrupt handlers or at least event handlers if you want jump ahead a generation from there.


Having had the pleasure to work with node.js for the past months, I upvoted this submission on title alone.


I actually believe that code should be synchronous unless instructed to operate asynchronously - just my .02

Await should not be required - it should be more like...

    regularWork(); // I'm waiting till this thing is done
    driveHome();   // not executed till thing one is done

    background orderStatus = orderPizza();

    turnOnXbox();
    while (orderStatus == 'not ready') {
        playXbox();
    }
    turnOffXbox();
    eat();

Like I said - just my humble opinion that the code written would become more expressive.
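In today's JavaScript, the hypothetical `background` keyword above maps fairly directly onto starting a promise early and awaiting it late (everything here is illustrative):

```javascript
// `background x = f()` becomes: start the promise now, await it only
// when the result is actually needed.
function orderPizza() {
  return new Promise(function (resolve) {
    setTimeout(function () { resolve('ready'); }, 10);
  });
}

async function evening(log) {
  log.push('regularWork');
  log.push('driveHome');
  var orderStatus = orderPizza();  // "background": kicks off immediately
  log.push('turnOnXbox');
  log.push('playXbox');
  await orderStatus;               // only now do we actually wait
  log.push('turnOffXbox');
  log.push('eat');
  return log;
}
```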


Agreed, that's why we're using fibers and common-node (https://github.com/olegp/common-node) at https://starthq.com


I always liked event based systems the most. I find them to be clean and flexible. Sometimes you want to run some more code after you run an async operation, or you want to run multiple operations at once and deal with them out of order. Await seems pretty linear.


Does Await convert those async calls back into synchronous calls, or what does it do? Because that would be kind of defeating the purpose of doing things asynchronously?

And you don't have to nest all those callbacks and write them inline. Rearrange your code a bit.


No, it rewrites the method code into a state machine[1].

See Async/Await FAQ[2].

[1]: http://stackoverflow.com/a/4047607/458193 [2]: http://blogs.msdn.com/b/pfxteam/archive/2012/04/12/async-awa...


I'll have to look at that in the morning, thanks!


The ease with which callbacks can be created leads people to create them carelessly and excessively. While I like what the article has to say, there are ways to write callback heavy code that do not get ugly so fast. Looking at the iOS nested block example from Marco Arment -- the first step is to not do everything inline. Then the code suddenly becomes clear and the argument becomes one of syntax sugar.

Comparing callbacks to goto is a tad unfair. They don't merely solve async issues, but also event handling and dynamic systems to name two common uses. I don't see a better solution on the table. Using callbacks to write deeply async code is the real problem. And while async/await may help with this problem, it still won't tell you why step 3 never finishes, because it's still waiting for a come from.


First: code that performs simple sequential steps should look simple. With callbacks, it always ends up looking complicated.

Second: refactorability in callback oriented code is a lot worse than linear code. Even refactoring code with a single callback can be annoying. Async code with callbacks that looks and feels like imperative synchronous code is an enormous gain.


First: you're hammering the async point, which I don't disagree with. I am just pointing out that you can write less horrible async code than the example cited with callbacks.

Second: wait until people write ludicrous numbers of async routines as if they were imperative because it's so easy. Calling one from another (and hooking them up to event handlers). You'll have the same damn problem one level removed.


Code that performs simple sequential steps should be synchronous.

Keep the asynchronous complications in the code that performs non-sequential steps. And yes, I'm fully aware that some libraries (JavaScript's, the guilty are always the same few) force you to use asynchronous calls. That's a flaw of the library.



"I have just delegated the bookkeeping to the compiler."

That's not obviously a good thing. Debugging the compiler (or just figuring out why it did something, even if correct) is far more difficult than debugging application code. Given the choice between implementing behavior with application code (or a library function) or adding semantics to the language, I prefer the former because it's much easier to reason about code written in a simple language than to memorize the semantics of a complex language.

[edited to replace sarcasm]


This is a nonsensical comment, and I voted it down. The same point can be made about any time languages got a level higher. This kind of rejection of the powerful in favor of the complex-but-familiar is precisely what Bret Victor warns against in the Future of Programming talk[1].

If anything, `await` makes debugging easier because you don't have to untangle callbacks and jump back and forth. You're not supposed to “debug the compiler” because, well, you know, there are test suites and everything.

Yes, this is something that takes getting used to. Just like `for` loops, functions, classes, futures, first class functions, actors and many other useful concepts and their implementations.

As for your edit, I still can't agree with you.

You're saying:

>I prefer the former because it's much easier to reason about code written in a simple language than to memorize the semantics of a complex language.

The point of `async` is making the semantics more obvious. Is it much easier to reason about Assembler than C? I say it's not. Would it be for somebody with years of experience in ASM and none in C? Yes it would.

I think it just comes down to that. Callbacks seem simpler to you not because they are simpler (try explaining them to someone just learning the language, and you'll see what I mean), but because you got used to them. Even so, error handling and explicit thread synchronization make maintaining callback-ridden code painful. I think setting `Busy` to `false` in `finally` block is a great example (in the blog post). You just can't do that with nested callbacks—they are not that expressive.

Async allows you to think in structure (`for`, `if`, `while`, etc) about time, that's why it's powerful.

[1]: http://vimeo.com/71278954


"Callbacks seem simpler to you not because they are simpler (try explaining them to someone just learning the language, and you'll see what I mean), but because you got used to them."

No, they're simpler in the literal sense: they introduce no new concepts into the language or runtime semantics. (The dynamic behavior is still complex, of course.)

"Even so, error handling and explicit thread synchronization make maintaining callback-ridden code painful. I think setting `Busy` to `false` in `finally` block is a great example (in the blog post). You just can't do that with nested callbacks—they are not that expressive."

Right -- nested callbacks aren't the answer, either. In JavaScript (where most of my non-C experience comes from), a good solution is a control flow function:

    busy = true;
    series([
        function (callback) {
             // step 1, invoke callback();
        },
        function (callback) {
            // step 2, invoke callback();
        },
        function (callback) {
            // step 3, invoke callback();
        }
    ],
    function (err) {
            // finally goes here
            busy = false;
            if (err)
                // ...
    });
This construct is clear and requires no extension to the language or runtime.

This is fundamentally a matter of opinion based on differing values. I just want to point out that there's a tradeoff to expanding the language and to dispel the myth that callbacks necessarily trade off readability when control flow gets complex.


The sarcasm in my post was unnecessary, so I've replaced it with a better explanation.


Thanks for taking time!

I still don't agree though, I edited my post as well to explain why I think this is exactly the moment you need to tweak the language, and not the libraries. (And this is the point Miguel was trying to make when he differentiated `async` from “futures” libraries, even from the one `async` uses, because they are irrelevant to the discussion.)


Thanks for mentioning Bret Victor's talk. Quite interesting to watch.


While that can be true, I feel like the author inadvertently overstated the amount of work that the compiler is doing here. This is really more of a case of syntactic sugar and not heavy-duty code reordering.


callbacks are like goto in that you can create terrible code by using them badly, but they are also vital to implementing good code. if, for, while and co are all syntactic sugar for correct and standardised use of goto, with hints to help the compiler make optimisations.

in both cases, though, we don't have something universally evil or bad - just something that bad programmers can and will abuse.


I love it -- another dramatic unveiling in a cutting-edge language of a feature Tcl has had for decades (google "vwait").


Oh, can we use Tcl on iOS maybe? No? How about Android? Xamarin runs on both.


I can see how this could be useful for Javascript and web stuff. But it isn't Async, this is in effect a blocking call.


In ToffeeScript you can just do e, data = readFile! fname

Or take a look at CoffeeScript or IcedCoffeeScript or LiveScript back calls.


One instant cure:

Make rules in IDE/editors, for each async/anonymous closure, undo the indent for one level.


So... This article is basically saying that blocking style programming is a lot easier to read and write, and proceeds with demoing a lib which makes async calls look sync. So instead of doing this in $lang, why not invest time in making blocking style faster on the kernel level? Perhaps introduce actors or tasks in the kernel, so that every lang can benefit.


I think the problem is not the callback itself, but the nested inline callback.

In any case, I prefer futures.


Shenanigans. Pyramids (at least in JS) can be easily avoided simply by naming your functions -- treating them like the first-class objects they are. Naming a function means you are no longer jumping to an arbitrary code block, but to a concept whose name, doc string, and (through hoisting) position on the page illuminate its purpose.

Callbacks aren't bad. Pyramids are bad. Stop writing pyramids.

Nodejs also establishes a nice API for callback functions -- in particular, callbacks are defined with an `error` and a `data` argument. You handle the error if it is non-null; otherwise you work with `data`.

If you want to avoid callbacks, Node also provides event emitters, and streams. Streams in particular provide a nice api for dealing with event based programming.


I disagree. Callback-based programming forces you to modularize around asynchronous/IO calls instead of semantic cohesion.

Naming functions doesn't solve that problem. On the contrary, you get separate lexical units that are not separately reusable and make no sense on their own. They are simply fragments of code that are supposed to be executed before or after some IO operation inside another function.


I'm fairly new to node.js, but having done some decent work in AS3/Flex (admittedly years ago), I'm a bit amazed that they didn't take a look at how Adobe took on some of these "problems". I think it is a pity that they don't have separate callbacks for onError and onResult, for instance. It would be easy to do, and code would look nicer, with less noise. I suspect there are other niceties that could be learned from AS3/Flex about async/event-loop programming.


  await PostPicToServiceAsync(mFile.GetStream (), tagsCtrl.Tags);

Seems to be equivalent to a blocking call of old, or am I missing something?


Yes, you are :-)

The compiler rewrites your code to schedule all next lines as a continuation. There is no blocking.
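Roughly, the rewrite can be sketched like this (a conceptual model in JavaScript, not the actual C# expansion, which builds a full state machine; `uploadAsync` and `postPic` are made-up names):

```javascript
// What you write (async/await form):
//   const pic = await uploadAsync(stream);
//   show(pic);

// What the compiler conceptually produces: everything after the
// await becomes a continuation handed to the async operation, so
// the calling thread is never blocked while the work runs.
function uploadAsync(stream, continuation) {
  // Stand-in for real async IO: completes on a later event-loop tick.
  setImmediate(() => continuation(null, "picture-url for " + stream));
}

function postPic(stream) {
  uploadAsync(stream, (err, pic) => {
    if (err) throw err;
    console.log("show:", pic);   // the code that followed the await
  });
}

postPic("my-stream");
console.log("returned immediately; upload still in flight");
```

The second `console.log` fires before the continuation does, which is the observable difference from a blocking call.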


At least for java, when your software library exists in the cloud, I don't see how you could avoid using callbacks.


This is possible if the language supports coroutines like python (see twisted's inline callbacks) or my port to Lua for Luvit: https://github.com/kans/luvit-inlineCallbacks.

Personally, I think raw callbacks are the right thing 95% of the time. yield/async/coroutines are needed for branching async logic where otherwise you'd be forced to make a new function for each branch and deal with the spaghetti at the end.
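The yield-based approach can be sketched with JavaScript generators (the same idea as Twisted's inlineCallbacks; `run` is a hypothetical minimal driver, similar in spirit to thunk-based coroutine libraries):

```javascript
// Minimal coroutine driver: resumes the generator each time the
// yielded async operation's error-first callback fires.
function run(genFn) {
  const gen = genFn();
  function step(err, value) {
    const next = err ? gen.throw(err) : gen.next(value);
    if (!next.done) next.value(step);   // yielded value is fn(callback)
  }
  step(null);
}

// A fake async op in "thunk" style: returns fn(callback).
function delay(ms, result) {
  return cb => setTimeout(() => cb(null, result), ms);
}

run(function* () {
  const a = yield delay(10, 1);
  // Branching async logic stays flat: no separate function per branch.
  if (a === 1) {
    const b = yield delay(10, a + 1);
    console.log("took branch, b =", b);   // → "took branch, b = 2"
  } else {
    console.log("other branch");
  }
});
```

Note how the `if`/`else` reads exactly like synchronous code; with raw callbacks each branch would need its own continuation function, which is the spaghetti the parent comment describes.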


I think this article misses a main point, and that's the fact that all await does is take a function that used to be asynchronous and make it synchronous. While yes, there are definitely use cases where that is nice, in general I think that if you want to use an await command, why are you making a call that was meant to be async? You are defeating the whole point of async calls.

Yes I know a lot of standard libraries have calls that are async and you may not really need for them to be async but I don't think that this is the case often enough that we should abandon callbacks and the like and go back to an age where all code must be synchronous. I know the author isn't saying it to that extreme necessarily, but his comparing callbacks to gotos is extreme as well.


That's not how async/await work at all. They let you write the code as if it were synchronous, but it's still asynchronous, which is the whole point.


You misread the article. The compiler allows you to write code in sequential manner but rewrites it into a state machine with callbacks.



