Asynchronous Programming in C# (github.com/davidfowl)
298 points by keewee7 on Sept 24, 2021 | 177 comments



We use async/await pretty much universally throughout our codebase today.

One thing to keep in mind is that this mode of programming is actually not the most performant way to handle many problems. It is simply the most expedient way to manage I/O and spread trivial things across many cores in large, complex codebases. You can typically retrofit an existing code pile to be async-capable without a whole lot of suffering.

If you are trying to go as fast as possible, then async is not what you want at all. Consider that the minimum grain of a Task.Delay is 1 millisecond. A millisecond is quite a brutish unit when working with a CPU that understands nanoseconds. This isn't even a reliable 1 millisecond delay either... There is a shitload of context switching and other barbarism that occurs when you employ async/await.

If you are seeking millions of serialized items per second, you usually just want 1 core to do that for you. Any degree of context switching (which is what async/await does for a living) is going to chop your serialized throughput numbers substantially. You want to batch things up and process them in chunks on a single thread that never gets a chance to yield to the OS. Only problem with this optimization is that it usually means you rewrite from zero, unless you planned for this kind of thing in advance.


> Consider that the minimum grain of a Task.Delay is 1 millisecond.

The minimum here is contingent on a few things. The API can accept a TimeSpan, which can express durations as low as 100ns (10M ticks per second: https://docs.microsoft.com/dotnet/api/system.timespan.ticksp...). The actual delay is subject to the system timer resolution, which can be as coarse as ~16ms depending on the OS configuration (eg, see https://stackoverflow.com/a/22862989/635314). However, I'm not sure how any of this relates to "go[ing] as fast as possible", since surely you would simply not use a Task.Delay in that case.
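To make the resolution point concrete, here's a rough sketch (run inside an async method; the observed number depends entirely on the OS timer configuration, so treat it as illustrative):

    var sw = System.Diagnostics.Stopwatch.StartNew();
    await Task.Delay(TimeSpan.FromMilliseconds(1));
    sw.Stop();
    // On default Windows timer settings this often prints ~15 ms rather than 1 ms;
    // on Linux or with a higher timer resolution it will be much closer to the request.
    Console.WriteLine($"Requested 1 ms, observed {sw.Elapsed.TotalMilliseconds:F2} ms");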

> There is a shitload of context switching and other barbarism that occurs when you employ async/await.

Async/await reduces context switching over the alternative of having one thread per request (i.e., many more OS threads than cores), and it (async/await) exhibits the same amount of context switching as goroutines in Go and other M:N schedulers. If there is work enqueued to be processed on the thread pool, then that work will be processed without yielding back to the OS. The .NET Thread Pool dynamically sizes itself depending on the workload in an attempt to maximize throughput. If your code is not blocking threads during IO, you would ideally end up with 1 thread per core (you can configure that if you want).

Async/await can introduce overhead, though, so if you're writing very high-performance systems, then you may want to consider when to use it versus when to use other approaches as well as the relevant optimizations which can be implemented. I'd recommend people take the simple approach of using async/await at the application layer and only change that approach if profiling demonstrates that it's becoming a performance bottleneck.


> I'd recommend people take the simple approach of using async/await at the application layer and only change that approach if profiling demonstrates that it's becoming a performance bottleneck.

Despite some of the things I presented in my original comment, I absolutely agree with this. There are only a few extreme cases where async/await simply can't get the job done. These edge cases are usually explicitly discovered up front. It's rare to accidentally stumble into one of these ultra-low-latency problem spaces in most practical business applications.


Honestly, if you're in a situation where it comes down to individual CPU clock cycles, I can't imagine C# (or similarly Java, Go, etc.) being useful at that point. Too much is going on that's not in the view of the developer.


Some of the highest throughput systems on earth are written in either Java or C#.

Check out the LMAX disruptor sometime. Throughput rates measured in hundreds of millions of serialized events per second are feasible in these languages if you are clever with how you do things.


> This mode of programming is actually not the most performant way to handle many problems.

This is correct, it's for increasing _throughput_ in concurrent scenarios. Meaning that when your server is processing multiple requests at the same time, yielding back rather than busy-waiting allows a different request to progress instead (or even to start processing a queued request earlier).

When waiting for I/O with another machine (a database, an API, etc) you can't wait faster; but you can wait better.


> This is correct, it's for increasing _throughput_ in concurrent scenarios.

I believe you mean the exact opposite. It decreases latency (because task B isn't blocked waiting for task A to complete) but it does so at the expense of decreased throughput. The context switches add overhead. If you just synchronously run A then B, the overall time would be shorter (higher throughput) because of less context switching overhead.


If task A & B perform IO (eg, a DB call) and the alternatives are running them sequentially on one thread or running them concurrently (via async/await) on one thread, then running them concurrently can both decrease end-to-end latency and increase throughput.

> If you just synchronously run A then B, the overall time would be shorter (higher throughput) because of less context switching overhead.

There are no context switches: async/await isn't threads. The compiler generates state machines which are scheduled on a thread pool. Basically, each time an event happens (eg, database request completes or times out, or a new request arrives), that state machine is scheduled again so that it can observe that event. This doesn't involve context switching: you can have 1 thread or N threads happily working away on many concurrent tasks without needing to context switch between them.


> There are no context switches: async/await isn't threads.

By "context switch", I didn't meant to imply "hardware thread context switch", just the general sense of "spend some CPU time messing about with scheduling".

There is overhead to async in that you're unwinding the stack, bouncing to the thread pool scheduler, loading variables from the heap (since your async code was compiled to closures) back onto the stack, etc.

As far as I know, it's always possible to complete some given set of work in less total time (i.e. highest throughput) using a carefully hand-written multithreaded program than it is using async. Of course, most people don't have the luxury of writing and maintaining that program, so async code can often be a net win to both throughput and latency, but the overhead is there.

It's analogous to going from a manually memory-managed language to a language with GC. The GC makes your life easier and makes it much easier to write programs that are generally efficient, but it does incur some level of runtime overhead when compared to a program with optimally written manual alloc and free.


No, I mean that yielding allows more requests to be executed at the same time on the same number of threads, increasing throughput. The overhead of context switches is not that relevant; it is small fry compared to e.g. waiting 100s of milliseconds (or more) for a DB or API. Yielding instead of busy-waiting, as I said above, allows another request that is ready to execute to do so sooner. This leads to higher throughput.

The other reply from reubenbond ( https://twitter.com/reubenbond ) is correct. I'll also note that async does sometimes decrease end-to-end latency, because you don't have to wait for request A to complete before starting request B.

async/await is not the same thing as threading, it is about using a fraction of a thread: when awaiting, "there is no thread" being used. https://blog.stephencleary.com/2013/11/there-is-no-thread.ht...

> I believe you mean the exact opposite.

As an aside, how about you say what you mean, and I'll work on what I mean.


async/await is not for CPU-intensive parallelism. I think that's pretty much stated in the .NET docs. That's why parallel compute APIs like Parallel.ForEach/For are not async. Their purpose is to enable non-blocking waits for IO, as well as to do stuff like animation on UI where you might want to execute procedural code over a larger timeframe.


The other reason those Parallel methods don't use async/await is that async/await did not exist in .NET at the time those methods were introduced.

But good news! The upcoming .NET 6 release will have a Parallel.ForEachAsync method:

https://docs.microsoft.com/dotnet/api/system.threading.tasks...


Doing a bit of .NET archeology, we find that both Task<T> and Parallel.For can be dated to .NET 4.0. So if they wanted to, they could've included async/await support. It just didn't make sense.


Async/await was officially added in C# 5 and .NET Framework 4.5.

I think it was possible to have async/await in .NET Framework 4.0 via some workarounds when it was still in CTP mode, but I don't recall the details.


Yes it is. Async is about avoiding blocking operations, whether they're IO-bound or CPU-bound. There are plenty of cases where computation can be offloaded to async tasks (eg: keeping the UI responsive).

The official docs even give examples to clarify both scenarios: https://docs.microsoft.com/en-us/dotnet/csharp/async


Sometimes parallel, rather than async, processing will help there, and it's pretty easy in dotnet with .AsParallel()


There's Task.Yield() which yields instantly. Task continuations happen on the same core by default, until you hit an IO completion or something else that knocks it onto the thread pool. This means that chaining lots of awaits together is very efficient, at least until you hit something that forces you to actually sleep.
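Rough sketch of where I end up reaching for Task.Yield() (Item and Process are just placeholders for CPU-bound work):

    // Periodically yield inside a long CPU-bound loop so other work queued on the
    // current scheduler gets a turn, instead of the continuation chain running
    // synchronously on the same thread the whole time.
    async Task ProcessAllAsync(IReadOnlyList<Item> items)
    {
        for (int i = 0; i < items.Count; i++)
        {
            Process(items[i]);          // synchronous work (placeholder)
            if (i % 1000 == 0)
                await Task.Yield();     // requeue the rest rather than hogging the thread
        }
    }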

In practice I tend to use a lot of homemade TaskCompletionSource, explicit threading and interlocked stuff where I need more control of continuation.

There's also a downside to explicit synchronization which you don't mention - if you design your threading for one load pattern, and your actual load is a different pattern, it crushes your application and it's difficult to refactor.

For instance, if you expect few users and many requests, you might have a thread per user with a work queue for their requests. If you have many users with few requests, then you have thousands of threads, which incur actual context switches, unlike Task yields.

I've heard that Midori was 50% faster than Windows, and it was nearly entirely written in something like C# with something like Tasks. The runtime was extremely different (no virtual memory, no threads) but it proves that the model can outperform traditional OS threading.


This seems to be conflating several issues. Async just means non-blocking. Queue a unit of work (ie: Task) to the runtime scheduler and come back to it later.

How you implement that can be with the underlying async/await or with your own custom framework. There are many examples, like actor frameworks (Akka.NET, Microsoft Orleans) or System.Threading.Channels or anything else.

You don't need a rewrite from zero; it's pretty easy to have a class with a while(true) loop contained in an async function processing things from a Channel<T>, and that will handle things on a single thread while you enqueue work from anywhere. You can even use the BackgroundService base class to start from: https://docs.microsoft.com/en-us/aspnet/core/fundamentals/ho...
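Something along these lines, as a minimal sketch (WorkItem and HandleAsync are placeholders; error handling omitted):

    using System.Threading.Channels;

    public sealed class WorkProcessor
    {
        // Unbounded here for brevity; a bounded channel gives you backpressure.
        private readonly Channel<WorkItem> _channel =
            Channel.CreateUnbounded<WorkItem>(new UnboundedChannelOptions { SingleReader = true });

        // Producers can call this from any thread.
        public bool Enqueue(WorkItem item) => _channel.Writer.TryWrite(item);

        // Single consumer loop; this could live in a BackgroundService.ExecuteAsync override.
        public async Task RunAsync(CancellationToken ct)
        {
            await foreach (var item in _channel.Reader.ReadAllAsync(ct))
            {
                await HandleAsync(item, ct);   // your processing logic (placeholder)
            }
        }

        private Task HandleAsync(WorkItem item, CancellationToken ct) => Task.CompletedTask;
    }

    public record WorkItem(string Payload);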


I don't think anyone really argues that async/await is a raw speed win. It introduces overhead, after all. It just makes code easier to manage in general, which usually comes with some perf tradeoffs.


> and spread trivial things across many cores in large, complex codebases

How are tasks spread across cores? My main experience with the "await" paradigm is from Python, which is primarily single threaded.


The .NET runtime has a threadpool with local/software threads that share the workload. The Tasks (from async operations) are spread across this threadpool, although depending on many factors (how quick it finishes, overall load, etc) they might just run on the same thread anyway.


C# was my first exposure to async/await back in 2015, and I initially had trouble wrapping my head around various details (i.e. ConfigureAwait etc.). I think the languages that have done the best job in removing all that detail are Go and Elixir (BEAM-based languages), which, if you pay attention, remove the overhead of rewiring your brain to do async/await all the way down. I revisited async/await systems recently with Kotlin coroutines in the JVM world, and again the same problem; this time, due to my prior knowledge, I hit the ground running, but the average Joe had to relearn.


IMO ConfigureAwait is just the result of a failure to fully consider the implications of various design choices early on. To be fair, it is a tough problem, but the ergonomics ended up being horrible and the default they chose was probably the wrong default.

There are some other choices they made that are arguably not the right ones - for example, async code can do some of its initial execution on the calling thread and do the rest wherever continuations get scheduled (which is configurable...) which means you have to have exception handling in two places and the way the exception handling works will be different (the article calls this out). It is possible to avoid this by having the initial call to the async function only create the task but not run any of it - of course, there are reasons not to do it, performance being one of them, so it makes sense that they did it... it's just bad to optimize by default instead of making code simpler and more reliable.

My least favorite decision is that inexplicably, async/await state machines are very error prone... if any part of your codebase accidentally invokes a continuation twice, the state machine will potentially begin running twice or even start running again from the beginning with the same local variables. Fixing this would have been as simple as setting a bool at the end and checking it at the beginning, but for some reason they are dead-set on not fixing it. Premature optimization once again.

The existence of 'async void' is also just a complete trainwreck. They shouldn't have allowed it, especially since an 'async Task' that discards its result is just as easy.

The approach to cancellation (intrusive only) is also needlessly complex and gross. Putting a Dispose method on a Task would allow consumers of any async API to cleanly signal that they no longer need the result of a Task and any implementation would be able to observe this without anyone having to introduce a new method overload that takes a CancellationToken, not to mention that the intrusive cancellation design creates extra garbage on the heap. Really not obvious to me why they did this instead of reusing 'using x' and IDisposable.
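For anyone unfamiliar, the "intrusive" pattern I mean is the usual cooperative CancellationToken plumbing, roughly like this (all names here are made up):

    // Every layer has to grow a token parameter and forward it downward.
    public async Task<Report> BuildReportAsync(CancellationToken ct = default)
    {
        var rows = await LoadRowsAsync(ct);    // callee needs its own token-accepting overload
        ct.ThrowIfCancellationRequested();     // cooperative check between steps
        return Summarize(rows);                // pure computation, no token needed
    }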


Yeah, I don't like Go in most respects, but their approach to concurrency is way more intuitive than async/await. That said, Go doesn't have any standard promise or futures libraries, which is quite ridiculous. Yes, you can roll your own or go get one, but something that basic should be in the language.


As someone who has never really used promises or futures, what do they add, or how do they make concurrent programming easier or clearer than what's currently in Go?


In short, they provide a clean API for checking on completion and error states of a long-running process from another thread without blocking that thread. This is an essential pattern for UI-related tasks, including web pages.


Fair enough, I think we are thinking of futures slightly differently. I was thinking primarily about the deferred action to get a result (in which case channels and goroutines are equivalent with a select to handle, perhaps, an error result). You're also thinking of the other capabilities around task management and monitoring which I was not.


An easy example of where having promises is nice:

  a := getFoo()
  b := getBar()
  c := getBaz()
If you want to do all 3 in parallel, you may need to use channels and wait groups and stuff. In a language with promises:

  const a = getFoo();
  const b = getBar();
  const c = getBaz();

  await a
  await b
  await c

or even better

  const [a,b,c] = await Promise.all([getFoo(), getBar(), getBaz()])
If/when Go generics become available, I expect to see some libraries that make things easier in golang.


This is built into the .NET API though? You can just check the IsFaulted and IsCompleted properties of the Task you're holding.


GP wants those to be part of the Go stdlib.


It's kind of odd that JetBrains chose async/await for Kotlin considering the JVM is going towards the Go approach for virtual threads. I guess they had to since they wanted to support android/js?


The decision happened before Loom came to be.

It is yet another example of the impedance mismatch guest languages face when the platform moves in another direction.

The platform language gets the true way, while the guest languages get the hard decision of how to combine multiple approaches, plus libraries that only use the new platform APIs.


Yeah, that decision makes sense for multiplatform. Suspending functions will work on JS and Kotlin Native.


It makes sense for UI patterns and any pattern where you need to bind to a specific OS thread.


I think it's clear from the languages given that the added complexity is to handle UI workflows, no?


You can have UI workflows just as well with a language that has (runtime-managed) green threads, like Go and Elixir. It's just that you sometimes have to make sure that certain green threads have their actions scheduled on the UI (OS) thread for backwards compatibility reasons. For example, the Scala ZIO library provides the means to control where (on which OS thread) code is scheduled in a very intuitive way that does not involve ConfigureAwait hacks. ZIO is not green threads and rather more like async/await, but a similar approach could be implemented in languages with runtime-managed green threads as well.


>you sometimes have to make sure that certain green threads have their actions scheduled on the UI (OS) thread

But that management is the crux of the issue. The complexity of hopping into and out of contexts is what is exposed by async/await and coroutine scopes, and hidden by the simpler syntax.


>Prefer async/await over directly returning Task

This one seems questionable to me. I've never been bitten by any of the cons mentioned[1], and it's even noted that doing it this way incurs performance costs. I've learned over the years that if the code path is very hot, it pays to avoid the async state machine.

I'm curious if others could expand on this one.

[1] https://github.com/davidfowl/AspNetCoreDiagnosticScenarios/b...


There are a couple in here that are just the absolute safest thing, even though the alternatives can be done safely. Async void, for example, is for dealing with event handlers.

Just like returning Task directly, if you take care with the exceptions, you'll have fewer problems.

Said another way "...unless you know what you're doing" could be added to a few of these.


Always using async/await is recommended to avoid _surprising_ behavior. Suppose a method has the signature

    Task<Bar> Foo();

If it is not declared with async and it throws an exception, the exception is propagated directly to the call site. Think of this usage:

    var getBarTask = Foo();

    // do some other stuff

    try { var bar = await getBarTask; }
    catch (Exception ex) { /* handle exceptions */ }

If Foo is not async, the exception is thrown at 'var getBarTask = Foo();'. If it is declared with async, the exception is wrapped inside the Task object and thrown at 'var bar = await getBarTask;'.

Yes, there is obviously a small performance cost. My guideline would be: "always use async/await unless you call the method hundreds or more times a second and the small performance cost stops being negligible. And always measure before you optimise."

edit: formatting


It's actually the case that's marked async that surprises me more. But I don't think the difference has ever mattered in code that I've written or worked with.

The reason that the non-async case makes sense to me is that I know there's usually going to be some synchronous code execution before the function I'm calling has to go async. And in that case I expect exceptions from the code that executed before going async to come up the stack where I called it, instead of where I'm awaiting it. And of course I expect exceptions beyond that to only be retrievable when I await the task, since the call stack will be rooted in the event loop after going async.


This surprised me. When they wrote LINQ and iterators, MS went to great lengths to ensure exceptions that could be thrown immediately (before iterating) were. I wonder why async/await is the opposite.


I don’t think I’d have a philosophical problem with throwing both synchronously and asynchronously when using async/await. After all, the act itself of queuing some work with a possible future result does seem like something that can fail. But the dotnet team (or c# compiler team, not sure) helped devs out by promising not to throw on the queuing the work bit when using async/await, and only throw at the point where the result should ordinarily be ready.

If you don’t use async/await, then I’m not sure how else they can help. By returning a task without async, the dev claims that they’re smart enough to safely kick off some async work and possibly provide a result later. But in the act of kicking off the work, you break?


Unless you're doing awaits in a tight loop of thousands/millions of calls, the overhead of the state machine is almost non-existent, which leads to the next question, what are you doing that requires await in a tight loop of that many calls? The whole point of await is to use it to yield a thread while waiting on a long running operation, if your await returns nearly instantly then use the synchronous version and avoid the overhead.


I've run into this in code that completes synchronously in the common case, but falls back to an async implementation - think caching. The simple way to write this creates a state machine even on the synchronous path.

It's possible to work around this efficiently by pulling the async code into a separate method and using ValueTask for the outer method return type.
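Roughly this shape, as a sketch (the cache field and the backend call are placeholders):

    private readonly ConcurrentDictionary<string, string> _cache =
        new ConcurrentDictionary<string, string>();

    // Synchronous fast path: no Task allocation and no state machine on a cache hit.
    public ValueTask<string> GetValueAsync(string key)
    {
        if (_cache.TryGetValue(key, out var cached))
            return new ValueTask<string>(cached);

        return FetchAndCacheAsync(key);   // only the miss path pays the async cost
    }

    private async ValueTask<string> FetchAndCacheAsync(string key)
    {
        var value = await LoadFromBackendAsync(key);   // placeholder for the real async lookup
        _cache[key] = value;
        return value;
    }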


The most common issue I see with async programming is that the naive style seen in most samples/docs is strictly slower than standard imperative programming for one user. In other words, it's pure overhead with no benefit at all unless you're at a very large scale and approaching 100% capacity on your hosts.

Most documentation -- and most code I've seen in the wild -- reads like this:

    var foo = await GetFooAsync(...);
    var bar = await GetBarAsync(...);
    var baz = await GetBazAsync(...);
The timeline of that code is exactly the same as the standard synchronous version, just with extra steps and pauses.

The following version is more verbose -- which makes it feel slower -- but can provide dramatic speed ups even for a single user by overlapping requests so that they run concurrently:

    var fooTask = GetFooAsync(...);
    var barTask = GetBarAsync(...);
    var bazTask = GetBazAsync(...);

    var foo = await fooTask;
    var bar = await barTask;
    var baz = await bazTask;
   
Unfortunately, I've literally never seen this design pattern in the field...


"slower" is just one piece of the calculation. On the server end one goal of async/await is that you can run 10k instances of your 3 lines of code concurrently - inside a single thread. And while this might not use parallelism to make an individual operation faster, it might use less resources overall.

The other use-case was to run multi-step operations which involve waiting on UI threads of applications, which wouldn't have worked with blocking waits (that would prevent redraw). For that use-case, "speed" also isn't the highest priority.


Like I said, this is a theoretical benefit that is realised only if the load is sufficiently high for the reduced overhead of async programming to provide a noticeable benefit.

For naive async code, there is a surprisingly narrow range of loads where this is true: only something like 80-99% load. Any higher and latencies start to go towards the stratosphere, or memory usage grows exponentially.

Of course, this is fixable with the appropriate use of backpressure and timeout cancellations, but I've never seen this implemented correctly and consistently anywhere. Almost all web apps in the wild fall over when load goes from 100% to 101%. They don't become 1% slower! Instead they take 30s to return a page or just start spewing 5xx errors.

For a point of comparison, Java is abandoning the complex and fragile async approach in favour of user-mode scheduled lightweight threads, which are vaguely similar in terms of efficiency, but are much easier for programmers to understand. They're also compatible with traditional threaded code.


Does C# not have an equivalent to JavaScript's Promise.all??

In JS this could be...

  const [
    fooTask,
    barTask,
    bazTask,
  ] = await Promise.all([
    GetFooAsync(...),
    GetBarAsync(...),
    GetBazAsync(...)
  ]);
... PS in your code above you assign GetBazAsync to bar and baz. :-)


There is Task.WhenAll which works similarly, but the problem is that it requires either all of the tasks to have the same return type, or else treat all the tasks in the array as untyped and extract the return values in a separate step.

i.e. you have to write

  var (fooTask, barTask, bazTask) = (GetFooAsync(), GetBarAsync(), GetBazAsync());
  await Task.WhenAll(fooTask, barTask, bazTask);
  var (foo, bar, baz) = (fooTask.Result, barTask.Result, bazTask.Result);
It's possible to write a custom awaiter extension method that allows awaiting tuples of tasks, so once that's in place you can just write

  var (foo, bar, baz) = await (GetFooAsync(), GetBarAsync(), GetBazAsync());
There are third-party packages that do this for you and it's reasonably easy to write yourself if you understand the inner workings of async/await, but it's not part of the standard library.
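For the curious, a minimal sketch of such an extension for the three-task case (the third-party packages do essentially this, for more arities):

    using System.Threading.Tasks;
    using System.Runtime.CompilerServices;

    public static class TaskTupleExtensions
    {
        // Enables: var (foo, bar, baz) = await (GetFooAsync(), GetBarAsync(), GetBazAsync());
        public static TaskAwaiter<(T1, T2, T3)> GetAwaiter<T1, T2, T3>(
            this (Task<T1>, Task<T2>, Task<T3>) tasks)
        {
            return Combine(tasks).GetAwaiter();

            static async Task<(T1, T2, T3)> Combine((Task<T1> A, Task<T2> B, Task<T3> C) t)
            {
                await Task.WhenAll(t.A, t.B, t.C).ConfigureAwait(false);
                return (t.A.Result, t.B.Result, t.C.Result);
            }
        }
    }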


Fixed!

I believe this kind of copy-paste "last line effect" is one of the most common errors in programming: https://hownot2code.com/2016/08/15/the-last-line-effect-typo...

Task.WhenAll is the C# equivalent: https://docs.microsoft.com/en-us/dotnet/api/system.threading...

But it's not necessarily faster, there are corner cases where waiting for all tasks prevents some concurrent computations (e.g.: JSON parsing) from occurring.


This is indeed bad for usual cases, but in general you can't say it's always bad. For example, if you need to call 1 million async methods, you will benefit from batching them and running a small number at a time.


    var foo = await GetFooAsync(...);
    var bar = await GetBarAsync(...);
    var baz = await GetBazAsync(...);

Unfortunately this is the kind of example that is being used in the MS docs and elsewhere to explain async. I spent a long time figuring out what the difference was when they ran the exact same way as the synchronous version.


So much content and the `ConfigureAwait` portion, which is the BIGGEST gotcha in the whole shebang in my opinion, is not filled out?! Especially for Xamarin, you need to understand and use ConfigureAwait to properly bounce between UI / background threads.


What do you mean by this? I consider ConfigureAwait to be more of an optimization -- if I want to run something on a UI thread, I'm explicit about it.


Mainly because the guide is intended for ASP.NET developers, not .NET developers in general.


You can Task.Yield as well.


One aspect of C# async/await I don't ever see talked about is the ecosystem integration.

Async/await was delivered after the popular .NET UI frameworks (WPF, UWP) were designed, and it shows.

Trying to work with data bindings with async is a pain. There are things WPF has to make it a bit easier, but UWP doesn't have them. A lot of the infrastructure (IValueConverter, for example) just won't allow async. There are workarounds, but they are ugly. It gets tricky when, as the document mentions, async is viral. Constructors and void methods (basically the only options for running initialization code when a UI component appears) give you no good options for async/await. A lot of the WinRT API (which has buggy C# bindings and is markedly unreliable) requires async for things that were never async in the older implementations. It makes cross-platform library development a pain, and exacerbates the 'async is viral' issue.

None of what I mentioned is a 'problem' in that it can all be worked around and people have been delivering applications with such workarounds for a decade. But it is disappointing that Microsoft hasn't modernized the UI frameworks to take advantage of modern programming patterns.


The sync-over-async issue is a really common problem for me when trying to get a large, older codebase converted to async and you can't just do it all at once.

You still need to support non-async callers, and if you want to share code between the new async version and the old sync version, it's really difficult to do so.

Say you have a db layer you want to move to async but you still have to support a sync api over it: there's no great way to do that without hitting the potential issues, so instead you end up with two versions in the db layer and no great way to share code.

What's worse is when you don't even have the option to go async. For instance, if you're not on .NET Core but on Framework 4.8 with the latest version of ASP.NET MVC, there is no ExecuteResultAsync on an action result, so you really can't call any async code there safely; they added it to MVC Core later.

Bottom line: sometimes you're at the mercy of your callers, and not being able to easily expose a sync version of your api when needed without a bunch of code duplication is a real problem that I have hit. I really think they should have spent more time in the beginning to allow that scenario without pitfalls, and the transition would have been much smoother.


Database operations should be async, you shouldn't consume them synchronously, as they do IO. Async/Await came out in 2012, almost a decade ago (and with great first party library support, I might add).

Moralizing aside, sometimes you do want to call async APIs as synchronous code. I don't think there should be a synchronous version of the API implemented as well, you just need to do

var myValue = DoSomethingAsync().ConfigureAwait(false).Result;

which will avoid deadlocks, and execute your call synchronously


> var myValue = DoSomethingAsync().ConfigureAwait(false).Result;

Came out a decade ago and we still don't know how to use it safely.

This example doesn't compile because there is no Result on ConfiguredTaskAwaitable. Regardless, ConfigureAwait(false) does absolutely nothing here because this Task is not being awaited.

If you're going to block this thread, you must push the work to another thread or it's going to deadlock when the implementation tries to resume a continuation (unless the implementation is 100% perfect and the SynchronizationContext smiles upon you).

var result = Task.Run(() => CalculateAsync()).GetAwaiter().GetResult();

This avoids the deadlock but can lead to other nasty things like thread pool starvation. The only true solution is to go async all the way - https://blog.stephencleary.com/2012/07/dont-block-on-async-c...


> Regardless, ConfigureAwait(false) does absolutely nothing here because this Task is not being awaited.

It does help if there is a SynchronizationContext active, like in legacy ASP.NET


No, really. ConfigureAwait configures the await. If you don't await - if you block by calling .Result - it does nothing


>> var myValue = DoSomethingAsync().ConfigureAwait(false).Result;

Doing this inside ASP.NET request processing code (e.g. a controller method) will result in thread pool starvation [1], if you see about 50-100 (the numbers are off the top of my head, so check for yourself) requests per minute hitting that line of code.

P.S.: Sorry for a medium link, but couldn't really find an alternative.

[1]: https://medium.com/criteo-engineering/net-threadpool-starvat...


"var myValue = DoSomethingAsync().ConfigureAwait(false).Result;"

In my view there should be a built-in keyword to do this right. It's too easy to get this wrong and even worse possible problems only show up rarely.


If this really needs to happen then I like .GetAwaiter().GetResult() instead of .Result to get the same exception behavior as await rather than the wrapped AggregateException that .Result throws. This is especially helpful if DoSomethingAsync sometimes throws synchronously rather than returning a task.


The reason there isn't a keyword is because it's not possible to do it right. The example given is far from foolproof.


But there should be an easy way to do it right. Needing to call async functions from non-async code is not exactly an unusual thing. Or they need to make absolutely everything async which is also problematic, especially in terms of raw performance.


Well, you can put it into an extension method like this:

T SyncResult<T>(this Task<T> task) { return task.ConfigureAwait(false).GetAwaiter().GetResult(); }

Same for ValueTask. It may already be implemented in the base .NET library as well.


To avoid deadlocks, you don't need the ConfigureAwait(false) here, but you do need it to have been applied correctly in all the async code you're calling.


Some things simply don't have an async API at the low level, e.g. DNS lookup: there is no asynchronous version of getaddrinfo(3). So if you look at the .NET sources, you'll see that Dns.GetHostEntryAsync pushes a task to a thread pool that calls getaddrinfo(3).

In the end, you arrive at a "sync top-level APIs -- async library APIs -- sync low-level OS APIs" sandwich of dubious efficiency.


There is on Windows. On Linux we queue the requests asynchronously to the same address (golang does similar things)


When there are sync OS APIs, what's the point of async over threads? I thought async APIs used async or polling versions of syscalls and not blocking ones.


Because there are sometimes better things to do than block more threads. We can asynchronously queue dns requests to the same address (this is what we do in .NET 6)


There's plenty of database operations I don't do async. We heavily make use of ETLs. By their nature those processes are very linear and don't benefit at all from being async.


I hope the author at some point adds the section on ConfigureAwait. I've seen code bases where the devs have added .ConfigureAwait(false) to all invocations "just to make sure".


In an event loop model, I've never felt the need to reach for ConfigureAwait(false). Maybe there's certain operations that could be sped up a bit by letting them resume on any thread, but generally I want to be sure that the event loop is executing my code. There wouldn't be much of a point to using an event loop if nothing ever returned back to executing on it.


That's what you're meant to do!


Someday…


http://joeduffyblog.com/2015/11/19/asynchronous-everything/

>We were able to share this experience with .NET in time for C#’s await to ship. Sadly, by then, .NET’s Task had already been made a class. Since .NET requires async method return types to be Tasks, they cannot be zero-allocation unless you go out of your way to use clumsy patterns like caching singleton Task objects.


.NET Core and later has ValueTask for that usecase.


Around 50% of my work is .NET coding, and I find it's becoming really hard to keep up with this. .NET more and more feels to me like the typical MS approach where they just keep cranking out new stuff without cleaning up existing stuff. Some of the new things are very good, some are half baked, and it's difficult to figure out on which side these new features are.

Just lately I did some Entity Framework coding and noticed that some things are async-compatible, but others aren't, so you end up doing a lot of strategizing and coding around these omissions and creating questionable workarounds.

I really wish they would go back to the drawing board and simplify things. Same could be said for their various XAML dialects. It's just too damn verbose.


The problem is that .NET has been taken over by web developers, and they expect the kind of breakneck pace of change and half-baked tools that the Javascript ecosystem has become accustomed to.


This is so true. .NET has been steadily going downhill for some time now. The best indicator is the absolutely rotten documentation for the more recent .NET stuff. Compare that to the older .NET Framework and/or Winapi documentation which was excellent.


Agreed about documentation. They produce a lot of it but it’s hard to use and doesn’t really give you the big picture. I know I am old but in the 90s and 2000s the MSDN documentation was fantastic. Sad to see it going downhill that much.


I agree the docs are getting half-baked, they lack context and guidance.

FWIW, github issues for all things .net have been a surprisingly good resource. Especially on the hot new things, microsoft folks are very responsive and helpful. I dare say it's better for many topics than stackoverflow.


It is fighting in a space which is very competitive: Java, Go, JavaScript and Python. The latter two are the favorites of every university or coding camp graduate. You stay relevant or you die.

Unfortunately, that implies faster dev cycles, and areas like docs end up not well served.


How is .NET fighting against JavaScript or Python? Java or Go, I can understand but not JavaScript or Python.


Serverless computing is dominated by node, go, and python. .NET just doesn't startup fast enough.


Good point, I didn't think about that.


.NET is competing against any language which is not exactly driver-level (like C/C++/Rust). They compete in every other app model. Even OS scripting they cover with PowerShell (which is not C# but it is .NET).


You're right, I tend to think .NET == C# but it's a whole ecosystem. We have some Powershell at work and it's a nice DSL for scripting. The languages we use outside of .NET are C++ (like you said), JS/TS (I doubt Blazor will replace all usage of JS, and we have a big codebase anyways), and Python for some machine learning stuff.


ValueTask is also available in Framework via the System.Threading.Tasks.Extensions NuGet package.


But shouldn't it be that ValueTask is used everywhere by default and the class-based Task "exists for some use case"?


If you know that an operation will not complete synchronously (e.g. because it requires a network transaction), then a normal Task might actually be more efficient, because the heap allocation would be required anyway and you save the additional branches.


It kind of depends. The class is safer because you can have multiple calls to await, and the TPL was designed to be mostly as safe as possible by default... but yeah, it does hurt that it's not alloc-free.


Should: yes. Can: no. Why: backward compatibility


I use C# primarily for Unity, and the part about avoiding async void caught me off guard. So, I tested it out and found that in Unity, throwing an exception from an async void method doesn't crash the process. So, it seems that the advice about avoiding async void is specific to ASP.NET.


I can confirm this. Unity handles this differently. But this is also true of Exceptions in Unity in general.


I used C#'s async/await on a project in 2017, and I took to it. I appreciated being able to follow the "relevant" parts of a method, without having to jump around to different callbacks. That being said, I think I was the only one on the project that understood it _well_. Over the course of two years, I learned lots of the same gotchas.

Avoid "async void" was one of the catchy mnemonics I learned the hard way, because one day our production server crashed because it threw an exception in an async void.

I'm working on Java web services now, and it's written using synchronous Java servlet framework (Spring/Jetty). My hidden fear is that one day we'll discover that our synchronous APIs will have to be completely re-written in the async model.


You won't need to rewrite into async model, because project Loom will introduce virtual threads. That means your sync code will look exactly the same, but will have scalability of async.


Given that you understand it well -- do you know why the compiler accepts async void in the first place?


I think it made sense for UI integrations. From a synchronous OnClick delegate you could start an async void function - which essentially starts a background task that lives even after the click handler returns. Returning a Task here would not have made sense since nothing awaits it. But arguably the use-case could also have been fulfilled by calling `Task.Run` in the handler to spawn a background task.
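For example, the typical shape looks roughly like this (SaveAsync and ShowError stand in for whatever the real handler does):

    // The event-handler signature forces 'void', so 'async void' is the escape hatch.
    private async void SaveButton_Click(object sender, EventArgs e)
    {
        try
        {
            await SaveAsync();   // resumes on the UI context after the await
        }
        catch (Exception ex)
        {
            ShowError(ex);       // must catch here: exceptions escaping async void can crash the process
        }
    }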


Without knowing what the underlying method does, the async method may block the UI thread, because it runs on the UI thread until the first await that doesn't complete immediately.


I think it was for backwards compatibility with event handlers (which need to return void)


Is there a rule as to which methods are best made async and which not?

Or, once you start using async would it be best to make ALL methods async?

Many methods could be either sync or async. But if you make a method async that doesn't strictly need to be, you give yourself the option of later making it actually return its result after a delay, say reading its answer from the web or asynchronously from disk.

Whereas later trying to convert sync-methods to async seems to sometimes require big changes to the structure of the whole program. If you depend on getting the answer right away there is no easy way to modify the code so it in fact returns the answer after a delay. Or is there?

A downside to async-methods I can see is that they are harder to debug of course.


I recently had to make a change that converted a few functions to be async. It was certainly annoying to propagate that back in all the signatures, but the biggest issues came with having code that was designed under the assumption that the code would be synchronous. It was difficult to rework the code in ways that would avoid issues with things like the fact that if I use a member variable, call one of these functions, and then use that variable again, there's no longer a guarantee that the variable still has the same value.

And for issues like that I don't think that having async from the start would have helped much. Because if the signature said async but everything actually completed synchronously, it's possible that people would have been more conscious about those issues, but it's also very likely that plenty of cases would be missed. Testing wouldn't expose any problems unless the implementations were swapped out for code that was actually running asynchronously.

It's not easy to call what the right approach is. The best you can do is try to guess what the most likely future is and code for that. Violating the YAGNI principle can end up adding extra work and complexity and even reduced performance for no payoff later.


I've had similar situations where what initially seems like a simple change between async vs. sync can turn out to be a major redesign effort. Therefore I'm starting to lean toward the idea that I should use more async methods from the start than I currently do.

Part of the issue is that 'sync' is the default. I wonder if it would be better the other way around. Because sync is the default, it is easy to "simply" write sync methods in cases where async might be a better choice, and that decision is often hard to reverse later.

I'm working with JavaScript, but I assume the issues and questions are similar to those with C#.

I think the question could be rephrased as "When should I NOT use async methods?"


> use a member variable, call one of these functions, and then use that variable again, there's no longer a guarantee that the variable still has the same value.

Could you expound upon this? Not sure I understand the context, but I would like to. Are you assigning a value to a member variable by awaiting the return of an async method? It's not clear to me what would cause you to lose the value.


I would say that asynchronous methods are mostly about orchestration and side effects, and non-asynchronous ones about computations.

If your computation code reaches out to fetch data or trigger side effects, I'd take that as a sign of it being badly factored. Try to push the async parts up the stack to an orchestration layer, and keep the computation code "pure".
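As a rough sketch of that split (all names here are made up):

    // Orchestration layer: async, owns the I/O.
    public async Task<Invoice> BuildInvoiceAsync(int orderId, CancellationToken ct)
    {
        var order  = await _orders.LoadAsync(orderId, ct);            // I/O
        var prices = await _pricing.GetPricesAsync(order.Items, ct);  // I/O
        return InvoiceCalculator.Calculate(order, prices);            // pure, synchronous, easy to test
    }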


But sometimes what needs to be fetched is determined by a "computation". And a computation might take a long time.

So I'm not sure if it's always clear what should be the perfect factoring.


The rule: If you need to call an async method, then you make your method also async. As such, your callers will then also need to be async.

Visual studio will actually give you a little warning if you make a method async and then don't await on any async methods.


Another downside is the overhead created. Those “simple” async methods are translated into state machine classes under the hood. You could probably test performance and see if the value you get is worth it.


I would call myself an extremely experienced and knowledgeable C# programmer with 10+ years of experience, and even I found a few things surprising or new in this guide. I think the C# async/await implementation is the biggest con on the .NET community, because it gets constantly hailed as one of the easiest ways of async programming, but this guide itself proves to me that there are so many gotchas which are not obvious at all that it's actually not that easy after all. When I compare this with goroutines, I do sometimes wish .NET had a Go-like model instead.


I don't think there are really ways to do async programming that don't have gotchas. It's just inherently complicated.

And of course C#/.NET is a bit older and also very large, it has more surface area for weird behaviour that might not be easily fixable due to backwards compatibility.

Things like using void as a return type for an async function can be a nasty surprise if you're new, but this is not an issue at all once you know this (or if you simply used the right examples and used Task from the start). It's not a subtle gotcha.

Sync over async is much sneakier and can be very nasty. But I'm not sure you can avoid this when interacting with a language/environment that used to be mostly sync and switched to async. If everything is async you don't have this issue, so this is better for newer codebases.


Sure, but I'd argue that the best way to handle this is to just stick to the already existing paradigms that developers have experience with. It's not like there are any problems which can't be solved without async or tpl, and it's not like Microsoft wants to introduce a similar paradigm outside of the .net ecosystem, so it's just creating an artificial barrier and making the code harder to understand so that they can post clean looking examples on their doc pages.


JavaScript/TypeScript and C++ do implement a similar paradigm, loosely patterned on what C# does?


Python and Rust as well. C# invented a large part of the design of async/await that has been adopted by other languages. I figure if there's a better way, it would have been improved on by now since those other languages had plenty of time to see the issues with C#'s version.


Most claimed "simpler" languages are only being used for backend systems, e.g. Go and the BEAM languages. There's a reason that any language where UI needs to be considered has used what C# has. The flow of code is simplified because you don't want to be in callback hell.


Ex-.NET guy here, can confirm. Coming from JVM-land I was constantly told that async await is something that makes C#/.NET much better than Java. I personally could not understand why. Async await is not as easy as it looks, and most .NET programmers who I knew, would just hammer at things to make it work.

"Hey, this is an HTTPClient call? Put an await in front of it?" "Oh, is the IDE showing an error? Try .ConfigureAwait(false)?" "Oh, still some issue? Try putting async in the method declaration?" "Still showing an error? Remove the .ConfigureAwait() and just try async?"

At some point, Visual Studio would stop showing warnings and errors, and then the code would pass review.

Go is better in the sense that, at least people understand what a goroutine is and how/when to use it correctly.


> most .NET programmers who I knew, would just hammer at things to make it work.

I hope the examples you listed are facetious or from the very early days of async/await in C#, otherwise I'd seriously question the skillset of the supposed .NET programmers.

Visual Studio is fairly good at handling incorrect use of async/await, and in all of the examples you listed, the actual solution should've been "read the IDE error, hit the bulb and apply the automatically suggested fix", not "ignore the IDE error and smash keyboard until it works".

ConfigureAwait usage is also not something you'll usually see outside of library code in modern C#.


async/await is so much wildly better than previous version of .NET async programming.

If you've ever had to deal with the IAsyncResult pattern in older .NET code, you'll never complain about await.

https://docs.microsoft.com/en-us/dotnet/standard/asynchronou...


You can get closer to the Go experience if you just ignore all the tuning knobs and not try to optimize performance. .NET 6 is reducing the penalty of some of the gotchas too, so things are getting better.


The way I put it: .NET async makes the easy things easier and the hard things harder.

The problem is that Task/Task<T> was the foundation for async, and it's a bad foundation. Even with the ability to write your own duck-typed awaiters (and the advent of ValueTask), the widespread use of Task means if you're writing async code you're going to have a tough time getting away from it.


I think this is the crux of the matter. Since Task and the TPL predated async, iirc, people get befuddled by the parallelism Vs concurrency (if that's the correct term) parts of the Task API.

Certainly the async story is a lot more complicated in desktop but it is very simple for most server scenarios, simply put "use this async call so that the thread can do other things while you wait for the db to respond" and the model in code is much preferable to callback hell.


IMO it's more fundamental than the parallelism v concurrency split.

Microsoft in general has a tendency to bolt on functionality in a kinda slapdash manner when another team wants it, so you get a lot of cruft that really doesn't belong in the Task class[1] but is there because someone wanted a way to handle their special case so it just got thrown into Task.

[1] See https://source.dot.net/#System.Private.CoreLib/Task.cs,045a7...


Depending on the task, I find C# async/await more intuitive and complete than Go.

I haven’t used Go for a while, but you can’t await a goroutine, you have to use a channel, which is more complicated than just using ‘await’. C# has channels, so you can replicate Go’s model.


Strong agree. I've seen devs with 20 years of experience on me write silly inefficient code because they're lulled into a false sense of security by the marketing of async/await.

Multithreading is one of the hardest problems in software, and Microsoft decided that the best way to solve it is to get smart and experienced people to forget everything they know and instead learn a bunch of opaque APIs that interact with an incredibly complex internal state machine.

It hardly seems worthwhile to me.


async/await has its place, but your application needs to be designed for it.

The most illuminating moment for me was when I realized that there is no multi-threading involved with pure async/await.


async/await is not about multithreading at all though...


In a vacuous sense, but in practice you almost always use asynchronous code to achieve concurrency.

The canonical Microsoft tutorial on async spends about half its time talking about how to make your code concurrent to take advantage of async.

https://docs.microsoft.com/en-us/dotnet/csharp/programming-g...


Concurrency does not require multi-threading. Maybe you mean parallelism? Concurrency can still be really valuable in the context of a single threaded application.


If a program wants to perform a task in an async way without delegating it to an external program (like a database server or the OS' I/O system), it has to use threads, right? I think the point is that concurrency, for some tasks, basically requires multithreading. Not for the parallelism benefits, but just to be able to make concurrency possible for a task that requires blocking a thread.


Concurrency is more about having order independent units of computation. You can concurrently run operations on a single thread, although there is less benefit if no IO is involved. It's not something you'd likely do in practice unless there was IO.


I did mean parallelism, but I think the point stands. There's very little practical use of async await outside of multithreading.


Syntax sugar aside, wasn't the simple scalability of single-threaded async Node.js's main selling point?


" many gotchas which are not obvious at all that it's actually not that easy after all"

Totally agree. At first look async/await is simple and straightforward, but it's way too easy to mess up in subtle ways. Most people don't even notice that their code has problems until they get weird behavior in production.

In general I believe they made async way too pervasive in the framework and are also inconsistent.


Agree but it just highlights that any kind of programming with more than one linear path of execution is hard. Before async/await, coroutines, etc. we all had to learn that the hard way. It's helpful to know what a process, thread or lightweight thread in your system is and at what cost it comes. The cost and frequency of context switches is not something you can ignore and will probably be forced to profile at some point, hopefully sooner than later.

With these newer programming models there are easier ways to distribute work but unless you really dig deep and understand the basic mechanisms you will be lulled into a false sense of security.

When async was first added to .Net I read through the details of how boldly the compiler re-writes my code and I was a bit shocked, like, can it really do that? Now I always keep that in mind as soon as I start typing a...


Strongly agree.

Honestly, just the first point "Asynchrony is viral" is a huge fucking flag that this implementation sucks.

It doesn't need to be viral, they just needed to make passing a continuation easy, and they failed miserably.

Overall - I really like most of C#, but that async/await implementation is poor at best.


I thought about this for a while as well, especially as I'm both a Go and .NET programmer, and made the following observation: Go and .NET have something that's viral about their IO code. In Go's case it's errors, in .NET's case it's async. Then I realised that basically all code that is async in .NET is I/O code just like all code in Go that throws runtime errors is I/O code as well. This is not a perfect heuristic, but it works 95% of the time.

Which means that .NET code tends to suffer from the same issue as Go code. The solution is the same as well: Separate out the code that does logic processing from the code that does I/O. This way only a few top level functions will become async. I find that this makes my code cleaner and more testable as well.


> Go and .NET have something that's viral about their IO code.

This is basically what some language communities are trying to capture with “monads” (like “the IO monad”)

There is some work yet to be done on how to make such representations compose[1] (like how IEnumerable + Task = IAsyncEnumerable), but eventually we'll probably see some form of effect system for all such things reach mainstream languages.

[1] http://okmij.org/ftp/Haskell/extensible/more.pdf


Agree as well. Not necessarily on the pros/cons list, but there's definitely a few gotchas here I didn't know about.


What's the right way to do throttled async in modern C#? For some context, we have a process that needs to make an API call for each row in a file - maybe hundreds or thousands. What's the best way beyond Wait()'ing for each one to get decent performance without DOS'ing the server?


Use System.Threading.Channels.

BoundedChannelFullMode (DropNewest, DropOldest, DropWrite, Wait) specifies the behavior to use when writing to a bounded channel that is already full.
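For the per-row API calls above, that could look roughly like this (the capacity, reader count, and endpoint are made-up placeholders, not recommendations):

    using System;
    using System.Net.Http;
    using System.Threading.Channels;
    using System.Threading.Tasks;

    class ChannelThrottleDemo
    {
        static async Task Main()
        {
            var channel = Channel.CreateBounded<string>(new BoundedChannelOptions(capacity: 100)
            {
                FullMode = BoundedChannelFullMode.Wait // producer waits instead of dropping items
            });

            using var client = new HttpClient();

            // A small fixed number of readers caps how many API calls are in flight.
            var consumers = new Task[4];
            for (int i = 0; i < consumers.Length; i++)
            {
                consumers[i] = Task.Run(async () =>
                {
                    await foreach (var row in channel.Reader.ReadAllAsync())
                        await client.GetAsync($"https://api.example.com/rows/{Uri.EscapeDataString(row)}");
                });
            }

            // Producer: the rows from the file (stubbed here).
            foreach (var row in new[] { "a", "b", "c" })
                await channel.Writer.WriteAsync(row);

            channel.Writer.Complete();
            await Task.WhenAll(consumers);
        }
    }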


Thank you! I didn't know that existed. I gotta test the performance of that compared to something like a list or array with an explicit lock, which is otherwise my go-to solution precisely for performance reasons.


I personally used a semaphore for that. You create a semaphore with an initial count of MAX_REQS_PER_SECOND, create WORKER_COUNT looping "worker" tasks that each call WaitAsync() on that semaphore before doing a request (and don't call Release() after the request is done), plus a separate task that does either

    await Task.Delay(100);
    semaphore.Release(MAX_REQS_PER_SECOND / 10);
or

    await Task.Delay(1000 * WORKER_COUNT / MAX_REQS_PER_SECOND);
    semaphore.Release(WORKER_COUNT);
in a loop, depending on what numbers make more sense.
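Put together, the whole thing looks roughly like this (the constants, endpoint, and names are placeholders; the ALL_CAPS names just mirror the ones above):

    using System;
    using System.Net.Http;
    using System.Threading;
    using System.Threading.Tasks;

    class RateLimitedWorkers
    {
        const int MAX_REQS_PER_SECOND = 50;
        const int WORKER_COUNT = 8;

        static readonly SemaphoreSlim Permits = new SemaphoreSlim(MAX_REQS_PER_SECOND);

        static async Task Main()
        {
            using var client = new HttpClient();

            // Refill task: tops the semaphore back up ten times a second.
            _ = Task.Run(async () =>
            {
                while (true)
                {
                    await Task.Delay(100);
                    Permits.Release(MAX_REQS_PER_SECOND / 10);
                }
            });

            // Workers: each waits for a permit, does one request, never releases it.
            var workers = new Task[WORKER_COUNT];
            for (int i = 0; i < WORKER_COUNT; i++)
            {
                workers[i] = Task.Run(async () =>
                {
                    while (true)
                    {
                        await Permits.WaitAsync();
                        await client.GetAsync("https://api.example.com/item"); // placeholder endpoint
                    }
                });
            }

            await Task.WhenAll(workers);
        }
    }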


Fire off one task per row, but within each of those tasks use a SemaphoreSlim to rate limit your requests to the API.
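Roughly like this (the concurrency limit and endpoint are made up):

    using System;
    using System.Linq;
    using System.Net.Http;
    using System.Threading;
    using System.Threading.Tasks;

    class PerRowThrottle
    {
        static async Task Main()
        {
            string[] rows = { "1", "2", "3" }; // stand-in for the rows read from the file
            using var client = new HttpClient();
            using var throttler = new SemaphoreSlim(initialCount: 10); // at most 10 calls in flight

            var tasks = rows.Select(async row =>
            {
                await throttler.WaitAsync();
                try
                {
                    await client.GetAsync($"https://api.example.com/rows/{row}"); // placeholder endpoint
                }
                finally
                {
                    throttler.Release();
                }
            });

            await Task.WhenAll(tasks);
        }
    }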


Can you use MaxDegreeOfParallelism or var throttler = new SemaphoreSlim(initialCount: MAX_CALLS)?
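E.g. one place the MaxDegreeOfParallelism knob exists is TPL Dataflow's ActionBlock (requires the System.Threading.Tasks.Dataflow package); a rough sketch with made-up numbers and endpoint:

    using System;
    using System.Net.Http;
    using System.Threading.Tasks;
    using System.Threading.Tasks.Dataflow;

    class DataflowThrottle
    {
        static async Task Main()
        {
            using var client = new HttpClient();

            var block = new ActionBlock<string>(async row =>
            {
                // One API call per row; placeholder endpoint.
                await client.GetAsync($"https://api.example.com/rows/{row}");
            },
            new ExecutionDataflowBlockOptions { MaxDegreeOfParallelism = 10 });

            foreach (var row in new[] { "1", "2", "3" }) // stand-in for the file's rows
                block.Post(row);

            block.Complete();
            await block.Completion;
        }
    }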


Maybe Bulkhead policy in Polly is a good match here?


The QueueProcessor example could have been written using channels, which expose async APIs.

Otherwise, as others have commented, I despise how async/await is "95% done". The remaining 5% will come back to haunt you, and the documentation is less than satisfying. E.g., how does TaskScheduler interact with async? It's documented nowhere, except in a Stack Overflow answer by Stephen Cleary saying "it should work".

I prefer Java's CompletableFuture and Executors. It's more verbose, but at least there's no hidden magic. From the documentation you can infer exactly how it'll behave.


Async programming in C# is easy to use if you do not look under the covers (the generated state machine stuff is ugly and hard to reason about). But I find it tiring to repeat all these await and async keywords in almost every line of code. I wonder if someone has already designed a language which is async by default, with some extra constructs to support running multiple operations in parallel.


async/await makes things complicated - this is a great illustration of some issues with it. Fibers/green threads/goroutines seem to be generally easier and not viral. C#/.NET choosing the async model always bothered me. Otherwise, the platform is solid.

It's interesting, therefore, to try to understand why .NET went with the async/await model.

A C# language maintainer talks about the issue here and references the Rust justification for the same:

https://mail.mozilla.org/pipermail/rust-dev/2013-November/00...

https://github.com/dotnet/runtime/issues/11084

They knowingly seem to have chosen a more complicated model for performance. For me, that sounds like a bad trade. Developer time is quite a bit more valuable than compute time. The performance difference just doesn't seem to justify it.


Implementing coroutines requires changes to the code generator, the runtime, and the garbage collector, and there is no general consensus on how to implement them efficiently => this requires a huge investment in engineering.

For Rust, one of its defining features is a minimal runtime, so using a compiler transformation seems a good fit.

For C#, Microsoft has a limited number of people working on the .NET runtime. Async/await was developed at the same time .NET was transitioning to .NET Core, which also required massive engineering, so it was a matter of priorities. The future will tell whether coroutines are added to .NET at some point.


I think for C# one of the strong async use-cases was the same one as for JavaScript having async/await: UI programming. UI environments are mostly single-threaded, and if you want to modify UI elements you have to perform that operation on the UI thread. Doing e.g. an HTTP download in a background thread and directly manipulating a progress bar from there wasn't possible. With async/await, which can multiplex all those async tasks onto the UI thread, this becomes feasible. With the move to more declarative UI patterns this argument might now be more moot, but I don't have enough recent experience to tell.

On the server end, I strongly agree with you that for 95% of applications a threaded environment - and even using plain OS threads - would likely be easier and fast enough. But I guess everyone also wants to support the remaining 5% of applications, like "build a 100k clients proxy server".

Btw: Fibers can also be viral. E.g., if they are multiplexed on a single OS thread (non-work-stealing scheduler), you still can't block in them, and you need fiber-aware versions of everything. If you use a work-stealing scheduler, then methods which use thread-local storage might be subject to undefined behavior, because the current thread might change inside the execution of the function at an invisible yield point.
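C#'s await has the analogous property: the continuation after an await may resume on a different thread pool thread, so ThreadLocal<T> state doesn't reliably carry across it. A tiny sketch:

    using System;
    using System.Threading;
    using System.Threading.Tasks;

    class ThreadLocalPitfall
    {
        static readonly ThreadLocal<int> Counter = new ThreadLocal<int>(() => 0);

        static async Task Main()
        {
            Counter.Value = 42;
            Console.WriteLine($"before await: thread {Environment.CurrentManagedThreadId}, value {Counter.Value}");

            await Task.Delay(100); // the continuation may resume on a different thread pool thread

            // If the thread changed, this prints that thread's value (0), not 42.
            Console.WriteLine($"after await: thread {Environment.CurrentManagedThreadId}, value {Counter.Value}");
        }
    }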


Been a few years now since I've worked much in C# but these all seem like things that should be linting rules.

Can these be added as warnings to the compiler? Can you have custom lint/compiler warnings from the community like eslint?


Yes.

They are called “analyzers” in .NET, though.


I have been a C# developer for a while as well now and I'm not sure if this is true:

"Use of async void in ASP.NET Core applications is ALWAYS bad. "

Depending on the context, it can even be recommended to do async void. See Stephen Cleary's brilliant explanation of this: https://blog.stephencleary.com/2012/02/async-and-await.html

Edit: The correct link is this, see first table column exceptions: https://docs.microsoft.com/en-us/archive/msdn-magazine/2013/...


An exception in an async void function will crash your entire ASP.NET Core application. There is no reason at all to use these in ASP.NET Core; always use Task or ValueTask as the return type of your async functions.


There are still event handlers.

You just have to remember to always wrap async void methods in try { ... } catch (Exception ex) { ... }.
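E.g. something along these lines (the names are made up; the handler's signature is dictated by the event):

    using System;
    using System.Threading.Tasks;

    public class OrdersPage
    {
        // async void only because the event signature requires a void return;
        // the whole body is wrapped so no exception can escape and kill the process.
        public async void SaveButton_Click(object sender, EventArgs e)
        {
            try
            {
                await SaveAsync();
            }
            catch (Exception ex)
            {
                Console.Error.WriteLine(ex); // or log/surface it; the point is it never goes unobserved
            }
        }

        // Hypothetical async Task method that does the actual work.
        private Task SaveAsync() => Task.Delay(10);
    }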


> it can even be recommended to do async void

I don't see this reflected in the linked article at all. Aren't you confusing async void with async Task?


Why do we need the `async` keyword? What is the difference between a function which returns a Task<int>, and an async function which returns a Task<int>?


In C#, async only enables the await keyword. If await had been a reserved word from the start, it wouldn't be needed.


`async` tells the compiler to generate an IAsyncStateMachine implementation https://ranjeet.dev/understanding-how-async-state-machine-wo...


Exactly, `async`/`await` is in the same realm as `yield`: it tells the compiler to take your code and create a state machine out of it. And, similarly to `foreach` and LINQ, it boils down to a lot of duck typing.

There are two concepts:

1) _awaiters_ offer methods that the compiler-generated code will call to schedule continuations and ask whether it is completed. The thing that you call `await` on needs to offer a `GetAwaiter()` method that returns such an awaiter. (Due to the nature of the duck typing it might also be an extension method actually, so you can make types in other assemblies retrospectively awaitable)

2) _async method builders_ offer methods to perform the state machine transitions and connect them to the result object (which is traditionally of type `Task<T>` or `Task`). To register other types you can decorate them with the `System.Runtime.CompilerServices.AsyncMethodBuilderAttribute` attribute to tell the compiler what builder to use depending on the type you want to return in your async method.

I recommend this blog post series by Sergey Teplyakov for more details: <https://devblogs.microsoft.com/premier-developer/dissecting-...>
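As a small toy illustration of the awaiter duck typing in point 1: an extension GetAwaiter() is enough to make TimeSpan itself awaitable, by delegating to Task.Delay (purely illustrative, not an API anyone ships):

    using System;
    using System.Runtime.CompilerServices;
    using System.Threading.Tasks;

    public static class TimeSpanAwaiterExtensions
    {
        // The compiler only looks for a GetAwaiter() method (instance or extension),
        // so this is enough to make `await someTimeSpan` compile.
        public static TaskAwaiter GetAwaiter(this TimeSpan delay)
            => Task.Delay(delay).GetAwaiter();
    }

    public static class Demo
    {
        public static async Task Main()
        {
            Console.WriteLine("before");
            await TimeSpan.FromMilliseconds(500); // resolved via the extension GetAwaiter
            Console.WriteLine("after");
        }
    }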


A non-async Task<int> method returns a Task<int> object, whereas the return statement of an async Task<int> method is of type int (which will be wrapped in the resulting Task). "return 1;", for example, only works in the latter case.
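In code, the difference looks something like this:

    using System.Threading.Tasks;

    class ReturnShapes
    {
        // Non-async: the method itself must produce the Task<int>.
        Task<int> GetValue() => Task.FromResult(1);

        // Async: the body returns a plain int; the compiler wraps it in a Task<int>.
        async Task<int> GetValueAsync()
        {
            await Task.Delay(10);  // some asynchronous work
            return 1;              // legal only because the method is marked async
        }
    }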


Am I completely crazy in thinking that we use the terms sync and async incorrectly in software?

Synchronous: Simultaneous, at the same time.

Asynchronous: Not Synchronous

So the basic categories would be something like serial vs. not serial, where the "not serial" part consists of two approaches: synchronous (threads or forks, for instance) and asynchronous (selectors and callbacks).

Right?...RIGHT?!?! Why do we refer to blocking calls as "sync"?!?!


There is no spoon


This is a great overview


Sorry for the OT, but what's up with those camel-case method names starting with an upper-case letter? Very weird convention to my eyes.


Not to get too deep into a bikeshed conversation, but I'm pretty sure that's the normal C# convention [1].

[1] https://docs.microsoft.com/en-us/dotnet/standard/design-guid...


That's standard C# naming convention.


It's called PascalCase or UpperCamelCase.


unfortunately, the code is simply ugly to read


That’s what I was looking for


Is there a particular reason this has been posted today? The last commit to it was in April.

It's a good list for sure, just wondering why it's popped up on HN


Would it be different somehow if the last commit was, oh, yesterday?


Just wondered if it was relevant to some other discussion I'd missed, that's all.



