
> Is that the ideomatic way to do it

Well... I'm actually not sure what ideomatic means (English isn't my first language), but it's the standard way of doing it. You'll even find it as step 2 and 3 here: https://go.dev/tour/concurrency/1

> or the best way you can imagine

I would do a lot more to tune it if I were in a position where I knew it would run that many "tasks". I think what many non-Go programmers might run into here is that Go doesn't come with any sort of "magic". Instead it comes with a highly opinionated way of doing things. Compare that to C#, which comes with a highly optimized CLR and a bunch of really excellent libraries that are continuously optimized by Microsoft, and you're going to end up with an article like this. The async libraries keep track of which tasks are running (though Promise.All is obviously also tying up a huge amount of memory you otherwise wouldn't have to), while the Go example is running 1 million goroutines at once.
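
A minimal sketch of the kind of tuning being described here - a bounded worker pool instead of one goroutine per task - might look like the following. The numbers (numWorkers, the 10-second sleep standing in for each "task") are illustrative and not taken from the article:

    package main

    import (
        "sync"
        "time"
    )

    // doTask stands in for whatever work each "task" does; here it just
    // sleeps, like the tasks in the benchmark.
    func doTask(id int) {
        _ = id
        time.Sleep(10 * time.Second)
    }

    func main() {
        const numTasks = 1_000_000
        const numWorkers = 1_000 // the knob to tune instead of spawning a goroutine per task

        jobs := make(chan int, numWorkers)
        var wg sync.WaitGroup

        // A fixed pool of workers drains the jobs channel, so at most
        // numWorkers goroutines exist at any point in time.
        for w := 0; w < numWorkers; w++ {
            wg.Add(1)
            go func() {
                defer wg.Done()
                for id := range jobs {
                    doTask(id)
                }
            }()
        }

        for i := 0; i < numTasks; i++ {
            jobs <- i
        }
        close(jobs)
        wg.Wait()
    }

Note that this deliberately trades concurrency for memory: the tasks no longer all run at once.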

You'll also notice that there is no benchmark for execution time. With Go you might actually want to pay in memory to gain execution time, though I'd argue that you'd almost never want to run 1 million goroutines at once.

Though to be fair to this specific author, it looks like they copied the previous benchmarks and then ran them as-is.




The post was edited; previously it just said roughly this part: "step 2 and 3 here: https://go.dev/tour/concurrency/1" - which, as far as I can tell, does not mention worker pools...


You're right. It is using channels and buffers, but you're right.

It's not part of the actual documentation either, at least not exactly (https://go.dev/doc/effective_go#concurrency). You will achieve much the same if you follow it, but my answer should have been yes and no as far as being the "standard" Go way.
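
As a rough illustration of the "channels and buffers" idea, and roughly the pattern the Effective Go concurrency section describes, a buffered channel can be used as a counting semaphore that caps how many goroutines are in flight at once. The constants below are made up for the sketch:

    package main

    import (
        "sync"
        "time"
    )

    func main() {
        const numTasks = 1_000_000
        const maxInFlight = 1_000

        // A buffered channel used as a counting semaphore: the send blocks
        // once maxInFlight goroutines are running, which caps peak memory.
        sem := make(chan struct{}, maxInFlight)
        var wg sync.WaitGroup

        for i := 0; i < numTasks; i++ {
            sem <- struct{}{}
            wg.Add(1)
            go func() {
                defer wg.Done()
                defer func() { <-sem }()
                time.Sleep(10 * time.Second) // stand-in for the actual task
            }()
        }
        wg.Wait()
    }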


Idiomatic is the word the parent was looking for. The base word is idiom.

The parent probably meant 'making use of the particular features of the language that are not necessarily common to other languages'.

I'm not a programmer, but you appear to give good examples.

I hope I'm not teaching you to suck eggs... {That's an idiom, meaning teaching someone something they're already expert in. Like teaching your Grandma to suck eggs - which weirdly means blowing out the insides of a raw egg. That's done when using the egg to paint; which is a traditional Easter craft.}


I actually did find "idiomatic" when I looked it up, but I honestly still didn't quite grasp it from the Cambridge Dictionary. Thanks for explaining it in a way I understand.


In programming, idiomatic is used to refer to a programming language’s “best practices” and “style guide”. Obviously programming languages can solve problems in many different ways, but they often develop a “correct way” that matches their design or the personality of their influential community members. Following this advice is idiomatic. Next time your coworker has their style wrong you can say “I don’t think this is idiomatic” :D


I'm torn.

As far as practicality goes I actually agree with you: if I knew I were trying to do something on the order of 1,000,000 tasks in Go I would probably use a worker pool for this exact reason. I have done this pattern in Go. It is certainly not unidiomatic.

However, it also isn't the obvious way to do 1,000,000 things concurrently in Go. The obvious way to do 1,000,000 things concurrently in Go is to write a for loop and launch a goroutine for each thing. The goroutine is Go's native unit of work, and it is very tightly tied to how I/O works in Go.
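
A minimal sketch of that obvious approach, assuming the same shape as the benchmark (a million tasks that each just sleep):

    package main

    import (
        "sync"
        "time"
    )

    func main() {
        var wg sync.WaitGroup

        // One goroutine per task, all launched up front - essentially what
        // the article's benchmark measures.
        for i := 0; i < 1_000_000; i++ {
            wg.Add(1)
            go func() {
                defer wg.Done()
                time.Sleep(10 * time.Second)
            }()
        }
        wg.Wait()
    }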

If you are trying to do something like a web server, then the calculus changes a lot. In Go, due to the way I/O works, you really can't do much but have a goroutine or two per connection. However, on the other hand, the overhead that goroutines imply starts to look a lot smaller once you put real workloads on each of the millions of tasks.
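
To illustrate why: the standard library's net/http server serves each accepted connection on its own goroutine, so the goroutine-per-connection model falls out of the stack rather than being something you opt into. The handler and port below are just placeholders:

    package main

    import (
        "fmt"
        "log"
        "net/http"
    )

    func main() {
        // net/http creates a new service goroutine for each incoming
        // connection, so a busy server naturally runs one (or more)
        // goroutines per connection without any explicit pooling.
        http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
            fmt.Fprintln(w, "hello")
        })
        log.Fatal(http.ListenAndServe(":8080", nil))
    }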

This benchmark really does tell you something about the performance and overhead of the Go programming language, but it won't necessarily translate to production workloads the way that it seems like it will. In real workloads, where the tasks themselves are usually a lot heavier than the constant cost per task, I actually suspect other issues with Go are likely to crop up first (especially, in performance-critical contexts, latency). So realistically, it would probably be a bad idea to extrapolate from a benchmark this synthetic to try to determine anything about real-world workloads.

Ultimately though, for whatever purpose a synthetic benchmark like this does serve, I think they did the correct thing. I guess I just wonder exactly what the point of it is. Like, the optimized Rust example uses around 0.12 KiB per task. That's extremely cool, but where in the real world are you going to find tasks where the actual state doesn't completely eclipse that metric? Meanwhile, Go is using around 2.64 KiB per task. 22x larger than Rust as it may be, it's still not very much. I think for most real-world cases, you would struggle to find many tasks where the working set per task is actually that small. Of course, if you do, then I'd reckon optimized async Rust will be a true barn-burner at the task, and in a lot of those cases where every byte and millisecond counts, Go does often lose. There are many examples.[1]

In many cases Go is far from optimal: channels, goroutines, the regex engine, various codec implementations in the standard library, etc. are all far from the most optimal implementations you could imagine. However, I feel like they usually do a good job of making the performance perfectly sufficient for a wide range of real-world tasks. They have made some tradeoffs that a lot of us find very practical and sensible, and it makes Go feel like a language you can usually depend on. I think this is especially true in a world where you can already run huge websites on Python + Django and other stacks that are relatively much less efficient in memory and CPU usage than Go.

I'll tell you what this benchmark tells me really though: C# is seriously impressive.

[1]: https://discord.com/blog/why-discord-is-switching-from-go-to...


I agree with everything you said, and I think you added a lot to what I said, making things much clearer.

> I'll tell you what this benchmark tells me really though: C# is seriously impressive.

The C# team has done some really great work in recent years. I personally hate working with it and its "magic", but it's certainly in a very good place as far as trusting the CLR to "just work".

Hilariously I also found the Python benchmark to be rather impressive. I was expecting much worse. Not knowing Python well enough, however, makes it hard to really "trust" the benchmark. A talented Python team might be capable of reducing memory usage as much as following every step of the Go concurrency tour would for Go.


Userspace scheduling of goroutines, virtual stacks, and non-deterministic pointer-type allocation in Go are just as much magic, if not more; the syntactic sugar of C# is there to get the language out of your way and usually comes at no cost :)

If you do not like the aesthetics of C# and find the Elixir or OCaml family tolerable - perhaps try F#? If you use task CEs there, you end up with roughly the same performance profile and get access to a huge ecosystem, making it one of the few FP languages that can be used in production with minimal risk.


> Userspace scheduling of goroutines, virtual stacks, and non-deterministic pointer-type allocation in Go are just as much magic, if not more; the syntactic sugar of C# is there to get the language out of your way and usually comes at no cost :)

I don't think C# does it at no cost. I think its "attachment" to Clean Code makes most C# code bases horrible messes after a while. I know this is a preference thing and that many people will disagree, but I've seen C# code bases that were so complicated to work with that they were actively hindering the development team's ability to meet the business needs. You don't have to write C# that way, but that's what happens in almost every company where I live.

> If you do not like the aesthetics of C# and find the Elixir or OCaml family tolerable - perhaps try F#? If you use task CEs there, you end up with roughly the same performance profile and get access to a huge ecosystem, making it one of the few FP languages that can be used in production with minimal risk.

I mean, I don't think I'll ever have to work within the dotnet ecosystem. The way things are going in the green energy and finance sector, which is where my career has taken me, I'll mostly get to work with Python (with C/Zig) or Go, and possibly Java. C# and dotnet are almost exclusively used at stagnant small-to-medium sized companies and in the consultancy business servicing those companies. This is not because of C# or dotnet but more because of the developer landscape.

Java is big in "older" organisations because it's what was taught in universities and because it was always good. Go is replacing C#/Java in a lot of newer companies because there are a lot of success stories around it and a lot of the Java developers are retiring. Python is growing really big because a lot of non-SWE engineers and accountant types are using it, as well as because of how it's used in ML/AI/data warehousing. PHP is big in the web-shop industry, and so on. C# mainly made its way into businesses at places which ran a lot of Windows servers. Since organisations rarely change tech stacks in the more "boring" parts of the world, it's not likely to change much.

I don't think dotnet or C# are bad. I write some PowerShell for Azure automation to help IT operations from time to time, but I really don't like working with C# (or Java). I would personally like to work with Rust or more Zig at some point, but it's not like anyone is adopting Rust around here, and while Zig can be used for some things in place of C, it's not really "production ready" for most things.



