Borgo is a statically typed language that compiles to Go (github.com/borgo-lang)
666 points by manx 9 months ago | 545 comments



This addresses pretty much all of my least favorite things with writing Go code at work, and I hope--at the very least--the overwhelming positivity (by HN standards -- even considering the typical Rust bias!) of the responses inspires Go maintainers to consider/prioritize some of these features, or renews the author's interest in working on the project (as some have commented, it seems to have gone without activity for a little over half a year).

Some of the design decisions seem to me to be a bit more driven by being Rust-like than addressing Go's thorns though. In particular, using `impl` to define methods on types (https://borgo-lang.github.io/#methods), the new syntax for channels and goroutines (https://borgo-lang.github.io/#channels), and the `zeroValue()` built-in (https://borgo-lang.github.io/#zero-values-and-nil) seem a bit out of place. Overall though, if I had a choice, I would still rather write Borgo by the looks of it.


I have to disagree. I'm on record here lamenting Go. I've never really enjoyed writing it. When I've had to use it, I've used it. Lately though, I've found a lot more pleasure. And much of that comes from the fact that it does NOT have all these features. The code I write is going to look like the code written by most others on my team. There's an idiomatic way to write Go, and it doesn't involve those concepts from other languages. (For better or for worse.) So I'm super hyped that we have a "compiles TO Go" language, but I'm not as excited about using it as a catalyst to get new (and perhaps wrong for the language) features into Go.


I'm open to alternatives, but I haven't experienced any language constructs that strike as good a balance between forcing you to handle errors/options when a function indicates it returns them, and allowing you to do so without too much ceremony. I don't think match is enough on its own, but when combined with if-let/let-else/the ?-operator I don't feel like I'm sacrificing significant time and effort in dealing with Results/Options, so I'm not encouraged to cut corners and write worse code to avoid returning them in the first place.

The idiomatic way to write Go discourages you from robust error handling: return an opaque error, which callers will probably deal with as a string (or bubble up as much as possible), because knowing its potential concrete types requires either reading source code or having documentation available (the idiomatic function return type won't tell you anything about it). The path of least resistance only goes as far as forcing you to acknowledge there might be an error; it doesn't help you make good decisions about how to deal with it.
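To make that concrete, here's a minimal sketch of that path of least resistance (the `Config`/`LoadConfig` names are hypothetical, not from any real codebase): the signature only promises `error`, so most callers wrap and bubble up, and handling a specific failure mode means already knowing which sentinel to check.

    package main

    import (
        "errors"
        "fmt"
        "io/fs"
        "log"
        "os"
    )

    // Config, DefaultConfig and LoadConfig are hypothetical stand-ins.
    type Config struct{ Raw string }

    func DefaultConfig() *Config { return &Config{} }

    // The signature only promises "error"; nothing tells the caller which
    // concrete failures to expect, so most code just wraps and bubbles up.
    func LoadConfig(path string) (*Config, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return nil, fmt.Errorf("loading config: %w", err)
        }
        return &Config{Raw: string(data)}, nil
    }

    func main() {
        cfg, err := LoadConfig("app.toml")
        // Handling a specific failure mode means already knowing (from docs
        // or source) which sentinel to check; the type doesn't say.
        if errors.Is(err, fs.ErrNotExist) {
            cfg = DefaultConfig()
        } else if err != nil {
            log.Fatal(err)
        }
        fmt.Println(len(cfg.Raw))
    }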

I think that, without also having to learn about lifetimes and deal with async, this still presents an attractive trade-off for people who want to be quickly productive and don't care as much about garbage collection.


I agree with you. I’m not the biggest fan of Go and would personally love these features, but they’re very counter to Go’s purpose. These feel Rust-y to me.

Go was designed to be simple enough that developers can’t write code too complicated for others to read, with a particular eye towards junior devs (and ops/infra people, I think).

This usage of sigils for error handling and union return types is very cool and very expressive, but also going to confuse the shit out of your new devs or infra people. It’s just not a good fit for what Go wants to be.

I’m even sympathetic to the idea that generics are similar, though personally I think the alternative to generics is code generators, which are worse.

Anecdotally, I recently wrote some Go code at work (infra team) that used generics, and I had to look outside my team to even find someone that felt comfortable reviewing generic Go code. I see a fair bit of code using interface{} or any that would be much simpler and better with generics.
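As a rough illustration of that last point (a sketch, not code from my team's codebase): the `any`-based version defers type checking to runtime assertions, while the generic version is checked at compile time.

    package main

    import (
        "cmp"
        "fmt"
    )

    // Pre-generics style: the signature accepts anything, so type errors only
    // show up at runtime, via assertions that may panic.
    func MaxAny(a, b any) any {
        if a.(int) > b.(int) { // only actually works for int
            return a
        }
        return b
    }

    // Generic style: checked at compile time, works for any ordered type.
    func Max[T cmp.Ordered](a, b T) T {
        if a > b {
            return a
        }
        return b
    }

    func main() {
        fmt.Println(MaxAny(1, 2).(int)) // assertion needed at every call site
        fmt.Println(Max(1.5, 2.5))      // T inferred as float64
    }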


A lot of people said the same about generics, and some even still do. I could barely stand Go before generics, and still don't think they go far enough.

From my experience, things I think Go could really benefit from, like I believe it has benefited from generics:

* A way to implement new interfaces for existing types and type constraints for custom types, like `impl Trait for T`. This would obsolete most uses of reflection in the wild, in a way generics alone haven't. This isn't about syntax, it's about an entirely different way to associate methods to types, and with both Go and Rust being "data-oriented" languages, it's weird that Go is so limited in this particular regard. It has many other consequences when combined with generics, such as ...

* Ability to attach receiverless methods to types. Go concrete types may not need associated methods like "constructors", but generic types do, and there's no solution yet. You can provide factory functions everywhere and they infect the whole call graph (though this seems to be "idiomatic"), or make a special method that ignores its receiver and call that on a zero instance of the type, which is more hacky but closer to how free functions can be resolved by type. There's no reason this should be limited to constructors, that's just the easiest example to explain, in Rust associated methods are used for all kinds of things. Speaking of which...

* `cmp.Ordered` for custom types. Come on people. We shouldn't still have this much boilerplate to sort/min/max custom types, especially two full years after generics. The new `slices.SortFunc()` is the closest we've ever come, and it's still not associated with the type. We would basically get this for free if both of the above points were solved, but it's also possible we get something else entirely that solves only ordering and not e.g. construction or serialization.

* Enums, especially if the need for exhaustiveness checking could be balanced with Go's values of making code easy to evolve later. When I need them, I use the `interface Foo { isFoo() }` idiom (a minimal sketch follows below) and accept the heap-boxing overhead, but even the official `deadcode` analysis tool still, to this day, does not recognize this idiom or support enough configuration to force it to understand. The Go toolchain could at the very least recognize the idioms people are using to work around Go's own limitations.
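For reference, here's a minimal sketch of the sealed-interface idiom mentioned above (the `Shape` example is hypothetical): the unexported method closes the variant set to the defining package, but nothing checks the switch for exhaustiveness.

    package main

    import "fmt"

    // Shape is a "closed" set of variants: only this package can implement it,
    // because isShape is unexported. Each value is boxed in an interface.
    type Shape interface{ isShape() }

    type Circle struct{ Radius float64 }
    type Rect struct{ W, H float64 }

    func (Circle) isShape() {}
    func (Rect) isShape()   {}

    // Nothing checks this switch for exhaustiveness: forgetting a case still
    // compiles, which is what external analyzers have to paper over.
    func Area(s Shape) float64 {
        switch v := s.(type) {
        case Circle:
            return 3.14159 * v.Radius * v.Radius
        case Rect:
            return v.W * v.H
        default:
            panic(fmt.Sprintf("unhandled Shape variant: %T", s))
        }
    }

    func main() {
        fmt.Println(Area(Circle{Radius: 2}), Area(Rect{W: 3, H: 4}))
    }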

If we had solutions to these problems, I think most Go folks would find enough value in them that they would still be "Go". In fact, I think people would have an easier time consolidating on a new standard way to do things rather than each come up with their own separate workarounds for it.

This is where I feel "The code I write is going to look like the code written by most others on my team" the least, because that's only true when a Go idiom has some official status; it's not nearly as true for workarounds that the Go team has not yet chosen to either endorse or obsolete.


> From my experience, things I think Go could really benefit from [...] Enums

Obviously it already benefits from enums.

    type E int
    const (
        A E = iota
        B
        C
    )
Which, once you get past the superficiality of syntax, is identical to, say, what you find in C.

    enum E {
        A,
        B,
        C
    }
Enums are just a numbering mechanism, after all. No different than hand numbering constants, but without the need to put in the manual effort. Enums kind of suck, though. Are you sure any language actually benefits from them (as compared to better alternatives)?

> especially if the need for exhaustiveness checking

It is true that gc doesn't make any effort to determine how the enums are used, but if it were to, it would be a warning like the ones seen in many C compilers. As enums are values, not a type, it's not a fault to use them "inappropriately". While it may be all fine and dandy to add such warnings to gc, the Go team has taken a hard-line stance that warnings have no place in the compiler. Of course, external analysis tools like the ones you speak of can still be used to provide these warnings for you. All the information you need is there.

But it seems what you really want is a more expressive type system – specifically sum types. Then you would be able to describe how you expect identifiers to be used without resorting to generated number values as placeholders. Enums are just a hack to work around a limited type system anyway. If you are going to improve the type system in order to gain stronger compile-time constraints, you don't need enums anymore.

Rust doesn't have enums and nobody has ever noticed. Hell, many are even somehow convinced it has enums – but it does not. It provides tag-only unions (with, optionally, fully tagged unions) instead. Seemingly that is what you really want in Go as well. And fair enough. It is undeniably the better solution, but at the cost of being more complex to implement.


> the very least--the overwhelming positivity (by HN standards -- even considering the typical Rust bias

As someone who has the "Rust bias", I feel like it's a bit of an open secret that a _lot_ of Rust developers don't actually need the extreme low-level performance that it offers and use it more because of the quality-of-life things (including some of the features in Borgo, but also tooling like cargo, rustdoc, etc.). I've said for a while that a language that focused on this sort of "developer experience" but used a GC to be a bit more flexible would probably be enough for a large portion of Rust developers, and pretty much every Rust developer I've talked to agrees with this (even if they aren't in the portion that would use this).

It's also pretty common for me to see people asking why someone would use a low-level, C++-like language for something like web development, and I think the explanation is pretty similar: people like a lot of what Rust has to offer besides the low-level C++-like semantics, but there isn't something higher-level that offers enough of those features yet. Probably the language that would come closest to this is OCaml, but stuff like the documentation and "multiple preludes" are exactly the kind of thing that adds friction to getting up and running, which is something Rust has invested a lot of time into reducing.


> I've said for a while that a language that focused on this sort of "developer experience" but used a GC to be a bit more flexible would probably be enough for a large portion of Rust developers

Why then would Go not fit? It prioritizes developer experience (documentation, automatic formatting, etc.) with a GC


Lack of sum types and not much support for a functional programming style. I am totally uninterested in any modern programming language (i.e. post-1980s) that only allows me to express AND (product types, e.g. records, tuples) but not OR (sum types)


Oh I agree on those but it seemed the person I was replying to was primarily interested in developer experience (tooling, documentation, etc.), not language features. So I was curious about what was missing from the Go developer experience because that's one thing that's generally regarded as a strength


Those are only examples of things that I think would be necessary; I didn't intend for them to be treated as comprehensive. There are a decent number of things in Go that make me feel like my day-to-day experience of working in it is not a priority compared to a design goal of simplicity for its own sake. For example, when debugging code I often will comment and uncomment portions of it as I run it repeatedly to try to narrow down exactly where something unexpected is happening, and having unused variables be a hard error makes this tedious. Is it possible to work around this by manually "using" the variables I comment out in a way that does nothing? Of course. Would it be more "proper" to use a debugger rather than doing something hackish on my own? Possibly! But is this an actual thing that I expect a large number of other developers also do in pretty much every other language without issue? I strongly suspect the answer is yes.
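(For the record, the workaround I mean is just assigning to the blank identifier; a trivial sketch, with a hypothetical `fetchSomething`:)

    package main

    import "fmt"

    // fetchSomething is a hypothetical call being debugged.
    func fetchSomething() string { return "hello" }

    func main() {
        resp := fetchSomething()
        // fmt.Println("debug:", resp) // commented out while narrowing things down
        _ = resp // keeps the compiler happy: "declared and not used" is a hard error
        fmt.Println("running the rest of the flow")
    }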

Things like this might be small, but they add up, and at the end of the day, the frustrations that I encounter when writing Rust are smaller and less frequent than the ones I've had writing Go. Obviously stuff like this is subjective, and there's no way to make a language that satisfies everyone. I think there's empirical evidence that there's an audience for a language that fits the niche I describe that Go doesn't fill though.


Yeah, and the comment/uncomment workflow isn't just for debugging but also for trying new things out to see what works.

You're right that, judging by the engagement on this post and others in the past, there must be a big appetite for a language somewhat like Go but with fundamental differences. It's really quite interesting how Go seems to be so polarizing: they really nailed some things and really missed on others.


> but there isn't something higher-level that offers enough of those features yet

I think Swift would tick most of those boxes, it’s a shame it hasn’t really picked up outside Apple-land.

It can be a horribly complex language, but day-to-day it’s very nice to write.


Swift doesn't have explicit namespaces, which is a real inconvenience.


Yeah, I think it would be a good potential candidate if it had a full commitment to support on Linux and Windows, but unless that happens at some point, I don't think I'd consider it over Rust for anything other than Apple-specific development, which isn't something I do


Is this .Net?


I think, for any language that "targets the Golang runtime", you do need some way to express to the runtime to "use the zero-value default initializer." Otherwise, you'd have no hope of code in your language being able to be used to define e.g. protobuf-compatible types; or of using Golang-runtime reflection (due to e.g. the interface zero-value.)
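To illustrate what the runtime expects, here is a hypothetical sketch (not Borgo code): Go reflection, and the codecs built on it, lean heavily on being able to materialize a type's zero value.

    package main

    import (
        "fmt"
        "reflect"
    )

    // Point is a hypothetical struct, e.g. something a protobuf codegen might emit.
    type Point struct {
        X, Y int
        Name string
    }

    func main() {
        // Reflection-based codecs (encoding/json, protobuf bindings, etc.)
        // routinely materialize the zero value of a type:
        t := reflect.TypeOf(Point{})
        zero := reflect.New(t).Elem().Interface()
        fmt.Printf("%+v\n", zero) // {X:0 Y:0 Name:}

        // Interfaces also have a zero value (nil) that callers rely on.
        var err error
        fmt.Println(err == nil) // true
    }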


Wow, this is everything I want from a new Go!

Having worked on multiple very large Go codebases with many engineers, the lack of actual enums and a built-in optional type instead of nil drive me crazy.

I think I'm in love.

Edit: Looks like last commit was 7 months ago. Was this abandoned, or considered feature complete? I hope it's not abandoned!


While I have no particular beef with Rust deciding to call its sum types "enum", to refer to this as the actual enum is a bit much.

Enumerated types are simply named integers in most languages, exactly the sort you get with const / iota in Go: https://en.wikipedia.org/wiki/Enumerated_type

Rather than the tagged union which the word represents in Rust, and only Rust. Java's enums are close, since they're classes and one can add arbitrary behaviors and extra data associated with the enum.


Haxe also has Enums which are Generalized Algebraic Data Types, and they are called "enums" there as well: https://haxe.org/manual/types-enum-using.html


Very well then: Rust is not the only one to call a variant type / tagged union an enum. It's a nice language feature to have, whatever they decide to call it.

It remains a strange choice to refer to this as the true enum, actual enum, real enum, as has started occurring since Rust became prominent. If that's a meaningful concept, it means a set of numeric values which may be referred to by name. This is the original definition and remains the most popular.


Rust is targeting both users who know the original definition as well as people who don’t. Differentiating between real enums and sum types means the language gets another keyword for a concept that overlaps.

From a PL theory perspective, enum denotes an enumerable set of values within a type. It just happens that sums slot in well enough with that.


But the instances of a sum type aren't enumerable unless all of its generic parameters are enumerable.


Checked the definition. An enum is defined as a set of named constants. I'd argue that a set by definition needs to be constrained. If it lacks the constraints/grouping, I'd argue it is no longer a set.


Swift enums support union types as well, and are also very useful.


> While I have no particular beef with Rust deciding to call its sum types "enum", to refer to this as the actual enum is a bit much.

I didn't read GP as saying "Actual enums are what Rust has", I read it more as "Go doesn't have actual enums", where "enum" is a type that is constrained to a specified set of values, which is what all mainstream non-Rust languages with enums call "Enums".

I mean, even if Rust never existed, the assertion "Go doesn't have actual enums" is still true, no?


That's an interpretation I hadn't considered, mostly because Borgo has Rust-style tagged unions which it also calls enums. The statement wouldn't have caught my attention if I'd read it in that light, but while I'm here, I don't mind opining.

"Does Go have enumerated values" seems much like "does Lua have objects". Lua doesn't have `class` or anything like it, it has tables, metatables, and metamethods. But it makes it very easy to use those primitives to create a class and instance pattern, including single inheritance if desired, and it even offers special syntax for defining and calling methods with `:` and `self`. If I had to deliver a verdict, I would say that the special syntax pushes it over the line, and so yeah: with some caveats, Lua has objects.

Same basic thing with Go. One may define a custom integer type, and a set of consts using that type with `iota`, to get something which behaves like plain old small-integer enums. It's possible to undermine this, however, by defining more values of this type, which makes this pattern weaker than it could be, but in a way which is similar to the enums found in C.
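Concretely, here's a minimal sketch of what I mean by undermining it, reusing the `E` type from the snippet upthread:

    package main

    import "fmt"

    type E int

    const (
        A E = iota
        B
        C
    )

    func main() {
        // Nothing stops a caller from fabricating a value outside the
        // enumerated set, much like with a C enum:
        var weird E = 42
        fmt.Println(weird, weird > C) // 42 true
    }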

Ultimately, Go provides iota, and making enums is the intended purpose of it. If you search for "enums in Go" you'll find many sources describing an identical pattern. So, like `self` and `:` in Lua, I'd say that `iota` in Go means that it has an enumerated type.

But if someone wanted to say "Go doesn't even have enums, you have to roll your own, other languages have an enum keyword", I have a different opinion than that first clause, but there's nothing factually wrong with any of it. I find this sort of "where's the threshold" question less interesting than most.


An Enum type has to be on the core Go team's radar by now. It's got to be tied with a try/catch block in terms of requested features at this point (now that we have generics).


No one wants try/catch/exception in Go.


Comments like this are what drives me away from Go; comments that enforce a particular belief about how or what features you should or should not use/introduce in your PL. Talking in absolutes is so far removed from logical argument and from good engineering. I would appreciate it if anyone could recommend a language like Go (static, strongly typed, not ancient, good tooling) with a friendly community that won't ostracize the non-believers. Zig?


Go is a very opinionated language from its inception. We could probably argue for all eternity about code formatting, for instance. But Go went and set it in stone. Maybe it's part of good engineering to keep things simple and not allow hundreds of ways to do something. Maybe the people who use Go are the ones who just want to write and read simple and maintainable code and don't want it to be cluttered with whatever is currently the fashion.

You could look at Lisp. It's kind of the opposite of Go in this regard. You can use whatever paradigm you like, generate new code on the fly, use types or leave them. It even allows you to easily extend the language to your taste, all the way down to how code is read.

But Lisp might violate your set of absolutes.


We have completely lost the plot by assuming that just because there are disagreements on some things, any choice is equally as good as any other. Go is opinionated, and its opinion is wrong.

Not having exceptions (but then having them anyway through panic, but whatever) is a choice - but the other reasonable alternative is the Maybe monad. What Go did is not reasonable. I might be okay if they had been working on getting monads in, but they haven't.

I have a specific hatred for Go because it seems perfectly suited to make me hate programming: it has some really good ideas like fast compile time speeds, being able to cross-compile on any platform and being a systems language without headers.

But then I try to write code in it and so. much. boilerplate.


> But then I try to write code in it and so. much. boilerplate.

If boilerplate is cause for you to dislike the language, fine.

But your unnecessarily strong language "Go is opinionated, and its opinion is wrong", "What Go did is not reasonable", "I have a specific hatred for Go" speaks more about you than Go.

Go's choice of trade-offs was practical and reasonable, engineering-wise.

Your opinion is entirely wrong.


Thanks for your response AnonymousPlanet. I agree there is value in the pursuit of a minimal set of features in a PL which brings many benefits. And of course the opposite - an overly feature packed and/or extensible PL as a core feature has tradeoffs. Over this range of possibilities my preference probably falls somewhere in the middle.

I see an effect where the languages whose primary goal is a particular set of language design choices (such as strict memory safety over all else) grow a cult following that enforces said design choices. Maybe in the pursuit of an opinionated language, even if the designers are reasonable at the language's inception, the community throws out logic and "opinionated" becomes an in-group out-group tribal caveman situation.


> Maybe in the pursuit of an opinionated language, even if the designers are reasonable at the language's inception, the community throws out logic and "opinionated" becomes an in-group out-group tribal caveman situation.

I think you've got this backward. It's not that the particular choices are important. It's a thing happening on a higher meta level than that.

Some programming languages are, by design intent, "living" languages, evolving over time, with features coming and going as the audience for the language changes.

"Living" languages are like cities: the population changes over time; and with that change, their needs can shift; and they expect the language to respond with shifts of its own. (For example: modern COBOL is an object-oriented language. It shifted to meet shifting needs of a new generation of programmers!)

If you were able to plot the different releases of a living language in an N-dimensional "language-design configuration space", these releases would appear to arbitrarily jump around the space.

Other languages, though, are, by their design intent, "crystallized" languages — with each component or feature of the language seeing ever-finer refinement into a state of (what the language maintainers consider) perfection; where any component that has been "perfected" sees no further work done, save for bugfixes. For such languages, creating a language in this way was always the designers' and maintainers' goal — even before they necessarily knew what they were creating.

"Crystallized" languages are like paintings: there was an initial top-down rough vision for the language (a sketch); and at some early point, most parts of the design were open for some degree of change, when paint was first being applied to canvas. But as the details were filled in in particular areas, those areas became set, even varnished over.

If you plot the successive releases of a crystallized language in design configuration space, the releases would appear to converge upon a specific point in the space.

The goal with a crystallized language is to explore the design space to discover some interesting point, and then to tune into that interesting point exactly — to "perfect" the language as an expression of that interesting concept. Every version of the language since the first one has been an attempt to find, and then hit, that point in design space. And once that point in design space is reached, the language is "done": the maintainers can go home, their jobs complete. Once a painting says what it "should" say, you can stop painting it!

If a crystallized language is small, it can achieve this state of being entirely "finished", its spec set in stone, no further "core maintainer" work needed. Some small crystallized languages are even finished the moment they start. Lua is a good example. (As are most esolangs — although they're kind of a separate thing, being created purely for the sake of "art", rather than sitting at the intersection of "work of art" and "useful tool" as crystallized languages do.)

But large crystallized languages do exist. They seek the same fate as small crystallized languages — to be "finished" and set in stone, the maintainers' job complete. They just rarely get there, because of how large a project it is to refine a language design.

You might intuitively associate a "living" language with democratic control, and a "crystallized" language with a Benevolent Dictator For Life (BDFL) figure with an artistic vision for the language. But this is not necessarily true. Python was a "living" language even when it had a BDFL. And Golang is a "crystallized" language despite its post-1.0 evolution being (essentially) directed by committee.

---

The friction you're describing, comes from developers who are used to living languages, trying to apply the same thinking about "a language changing to serve the needs of its users" to a crystallized language.

Crystallized languages do not exist primarily to serve their users. They exist to be expressions of specific points in design space, quintessences of specific concepts. Those points in design space likely are useful (esolangs excluded); but the expectation is that people who don't find that point in design space useful, should choose a different language (i.e. a different point in design space) that's more suited to their needs, rather than attempting to shift the language's position in design space.

Adding a bridge is a sensible proposal for a city. You can get entire streets of buildings torn down to make way for the bridge, if the need is there. But adding a bridge is not a sensible proposal for a (mostly-finished) painting. If you want "this painting but with a bridge in it", that's a different painting, and you should seek out that painting. Or paint it yourself. Or paint the bridge on some kind of transparent overlay layer, and hang that overlay in front of the painting.

Conveniently for this discussion, Borgo here is exactly an example of a language that's "someone else's painting, but now with the bridge I wanted painted on an overlay in front of it." :)


Best comment in the thread. I think defining certain languages as "crystallised", rather than "set", explains well how the underlying structure has taken a specific shape based on specific tenets. Well said.


> Go is a very opinionated language from its inception.

True.

> We could probably argue for all eternity about code formatting, for instance. But Go went and set it in stone.

This is part of the story that Rob Pike uses to justify how opinionated Go is, but it's a bit stupid given that most languages do fine, and I've never seen any debates about code formatting after the very beginning of a project (where it's always settled quickly in the few cases where it happens in the first place).

The real reason why Go is opinionated is much more mundane: Rob is an old man who thinks he has seen it all and that the younger folks are children, and as a result he is very opinionated. (Remember his argument against syntax coloring because “it's for babies” or something.)

It's not bad to be opinionated when designing a language, it gives some kind of coherence to it (looking at you Java and C++), but it can also get in the way of users sometimes. Fortunately Go isn't just Rob anymore and isn't impervious to change, and there are finally generics and a package manager in the language!


Rob Pike... and Ken Thompson, and Robert Griesemer.

Firstly, Ken Thompson is a master at filtering out unnecessary complexities and I highly rate his opinion of the important and unimportant things.

Secondly, the Go team were never against generics, the three early designers agreed the language needed generics but they couldn't figure out a way to add it orthogonally.

Go has gone on to be very successful in cloud and networked applications (which it was designed to cater for), which lends credit to the practicalities of what the designers thought as important, HN sentiments notwithstanding.


> Secondly, the Go team were never against generics, the three early designers agreed the language needed generics but they couldn't figure out a way to add it orthogonally.

This is a PR statement that has been introduced only after Go generics landed, for years generics were dubbed “unnecessary complexity” in user code (Go has had generics from the beginning but only for internal use of the standard library).

> Go has gone on to be very successful in cloud and networked applications (which it was designed to cater for), which lends credit to the practicalities of what the designers thought as important

Well, given that the k8s team inside Google developed their own Go dialect with pre-processing to get access to generics, it seems that its limitations proved harmful enough.

The main reason why Go has been successful in back-end code is the same as the reason why any given language thrive in certain environments: it's a social phenomenon. Node.js has been even more successful despite JavaScript being a far from perfect language (especially in the ES 5 era where Node was born), which shows that you cannot credit success to particular qualities of the language.

I have nothing against Go, it's a tool that does its job fairly well and has very interesting qualities (fast compile time, self-contained binaries, decent performance out of the box), but the religious worship of “simplicity” is really annoying. Especially so when it comes in a discussion about error handling, where Go is by far the language which makes it the most painful because it lacks the syntactic sugar that would make the error as return value bearable (in fact the Go team was in favor of adding it roughly at the same time as generics, but the “simplicity at all cost” religion they had fostered among their users turned back against them and they had to cancel it…).


70% of cloud tools in the CNCF are built with Go; Kubernetes is just one of many. Also, since Kubernetes was originally started as a Java project, you should consider whether the team was trying to code more with Java idioms than with Go ones.

Nodejs has been more successful than Go in cloud?


> 70% of cloud tools in the CNCF are built with Go; Kubernetes is just one of many.

Yes, that's what's called an ecosystem effect. But k8s has been the biggest open source codebase for a while, so it's far from insignificant.

> you should consider whether the team was trying to code more with Java idioms than with Go ones.

Turns out generics, the “Java idiom” in question, was eventually added to Go after many years, so maybe it was in fact useful and it's not just k8s devs who were idiots following “Java idioms”…

> Nodejs has been more successful than Go in cloud?

Nodejs has been more successful than Go in pretty much everything except orchestration tools (because of the ecosystem effect mentioned above), which is a tiny niche anyway. Go is a very small language in terms of use compared to Nodejs, or PHP, which are arguably languages with a terrible design.


> I have nothing against Go, it's a tool that does its job fairly well and has very interesting qualities (fast compile time, self-contained binaries, decent performance out of the box), but the religious worship of “simplicity” is really annoying.

Typical Gate keeping the gate keepers of simplicity and pretty sure you code 23.5 hours a day on Haskell


> Typical Gate keeping the gate keepers of simplicity and pretty sure you code 23.5 hours a day on Haskell

I've no idea what you mean, you should keep your argumentation simpler ;)


Damn that’s a comeback that’s not complicated


Seriously, if you feel patronised by how someone designs a programming language, it might be best to move on. It's obviously not for you. Especially when you feel compelled to bad faith assumptions and ageism over it.

For those who want to feel the wind of coding freedom blow through their hair, I can recommend to spend some time learning Lisp. It offers the most freedom you can possibly have in a programming language. It might enlighten you in many other ways. It won't be the last language you learn, but it might be the most fruitful experience.


Most people who tend to brag about Lisp's (Common Lisp's) superiority never actually used it. It is not as impressive as many legends claim.


Doesn't ring true; why would a non-user of Common Lisp evangelize it?

Are there online examples? Can you point to someone's blog where they are proselytizing regarding Common Lisp, but it's obvious they don't have experience in it (perhaps betrayed by technical problems in their rhetoric).


Can you name a language that provides more freedoms? I used Lisp as an example for that side of the spectrum because I'm familiar with it, having used it for many years in the past. But maybe there are better examples.


What kind of "freedom", precisely, are you talking about? Freedom to write purely functional programs? Well, then you need Haskell or Clojure at least. Freedom to write small, self-sufficient binaries? Well, you need C or C++ then. CL is a regular multiparadigm language with a rich macro system, relatively good performance, but nonexistent dependency management, too-unorthodox OOP, no obvious benefits compared to more modern counterparts, and a single usable implementation (SBCL). If I want an s-expression-based language I can always choose Scheme or Clojure; if I need a modern, flexible multiparadigm language I'd use Scala


All of them. You can do imperative, functional, and OOP programming in Lisp. As for small libraries, it's because cruft is an actual hindrance in Lisp. It's like Unix tools: you can do a lot of stuff with them, but a more integrated tool that does one thing better will fare worse in others. A big library brings a rigid way of thinking to Lisp's flexible model. Dependency management? Think of it like the browser runtime, where you can bring the inspector up and do stuff to the pages. It's a different development model where you patch a live system. And with the smaller dependency model, you may as well vendor the libraries if you want reproducibility. Unorthodox OOP? CLOS is the best OOP model out there.

The thing is that Common Lisp has most of what current programming languages are trying to implement. But it does require learning proper programming and being a good engineer.


You should probably reread what I wrote, and lay off your patronizing attitude. "It is just better, you do not get it" won't work here. Yes, you can do functional in Lisp, as you can even in C, but why? The support for a functional style is laughable compared to Haskell or even Clojure. CL advocates fanatically fail to accept the bitter truth: CL is a dead language with a once-great set of features which are now present in many, many other languages.


> most languages do fine

No, they don't. Most languages turn dealing with code formatting, into an externality foisted upon either:

• the release managers (who have to set up automation to enforce a house style — but first have to resolve interminable arguments about what the given project's house style should be, which creates a disincentive to doing this automation); or

• the people reviewing code in the language.

In most languages, in a repo without formatting automation, reviewers are often passed these stupid messy PRs that intermingle actual semantic changes, with (often tons of) random formatting touch-ups — usually on just the files touched by the submitter's IDE.

There's a constant chant by code reviewers to "submit formatting fixes as their own PR if there are any needed" — but nobody ever does it.

Golang 1. fixes in place a single "house style", removing the speedbump in the way of automating formatting; and 2. pushes the costs of formatting back to the developer, by making most major formatting problems (e.g. unneeded imports) into compile-time errors — and also, building a formatter into the toolchain where it can be relied upon to be present and so used in pre-commit hooks, guaranteeing that the code in the repo never gets out of sync with proper formatting.

"Getting in the way of the users" is the right thing to do, when "the users" (developers) fan in 1000:1 with code reviewers and release managers, who have to handle any sloppiness they produce.

(Coincidentally, this is analogous to other Google thinking about scaling compute tasks. Paraphrasing: "don't push CPU-bottlenecked workloads from O(N) mostly-idle clients, out to O(log N) servers. Especially if the clients are just going to sit there waiting for the servers' response. Get as much of the compute done on the clients as possible; there's not only far more capacity there, but blocking on the client blocks only the person doing the heavy task, rather than creating a QoS problem." Also known as: "the best 'build server' is your own workstation. We gave you a machine with an i9 in it for a reason!")


> No, they don't.

Really, they do: there are millions of us coding in those other languages just fine, automatic formatting has been a thing for decades, and I'm not aware of a single language out there that doesn't have such a formatting tool.

The only exception with Go is that you cannot change the default settings. But that's it. In any other language you can use a code formatter with the default settings and the “speedbump in the way of automating formatting” you talk about doesn't exist anywhere but in your mind.

> where it can be relied upon to be present and so used in pre-commit hooks

You know that a failing git hook aborts the commit? So that with any language, if the formatter isn't installed in the machine, the commit cannot be performed, which means that the formatter can actually be relied upon anyway. In practice, the hardest part is making sure people all have the git hook installed (that's not that hard but that's the hardest part).

As I said before, Go has many useful properties, but automatic formatting is definitely not what makes Go relevant, and the endless stream of Gophers who argue this are just ridiculing themselves in front of everybody else.


> You know that a failing git hook aborts the commit? So that with any language, if the formatter isn't installed in the machine, the commit cannot be performed, which means that the formatter can actually be relied upon anyway.

When making a trivial fix PR to an upstream FOSS project, if I find that a missing third-party linter install has force-rejected my commit (that I know has correct syntax)... then I just give up on making that PR. I can't be assed to install some random linter. (Third-party linters have a history of being horrible to install†.)

Small amounts of friction can be enough to shape behavior (see https://en.wikipedia.org/wiki/Nudge_theory.) Aggregated over a large project's entire community, this can make an appreciable difference in code quality over time.

† Mind you, a linter that exists as a library dev dependency of the project is fine, too. I had to pull the deps to build and run the tests, so the linter will be there by the time I attempt to commit. It's just linters that are their whole own projects that give me a jaw-ache.

> and the endless stream of Gophers who argue this are just ridiculing themselves in front of everybody else.

I don't even use Go! I mainly write Elixir, actually. Which also has a built-in auto-formatter.

To me, the nice thing about the formatter being built into Elixir (and of-a-piece with the compiler), is that when I use macros, the purely-in-memory generated-and-compiled code can be inspected in the REPL, and shows as formatted (because it passes through the auto-formatter), rather than looking like AST gunk. Without having had to pay that auto-formatting cost at compile time (because that would also be a cost you'd pay at runtime codegen time, which you might do a lot of if you've built a domain-specific JIT on top of the macro system.)


It's easy for programmers to focus on the technical details and forget the big picture. The technical aspects of automatically formatting code are relatively easy to solve. The difficulty is in the social parts. That's what Go solved by bundling gofmt with the language.

As a result, almost all Go code out there is formatted the exact same way and nobody has ever had to have the dreaded code formatting discussion about Go at their company. Eliminating such bikeshedding for every user of the language is a solid win.

That's why all the languages that came after Go have adopted the same approach, e.g. Rust and Zig. Python's Black formatter has been directly inspired by gofmt as well.

What is provided by default really matters.


Ironic given how much effort is going into Bazel remote build executors.


Snarky response: that's more steps toward the long-held dream of the Google operations department: to be able to just issue all devs cheap commodity Chromebooks, because all the compute happens on a (scale-to-zero) Cloud Shell or Cloud Workstation resource.

Actual response:

• For dev-time iteration, you want local builds; for large software (e.g. Chrome), you make this work by making builds incremental. So it takes a few hours to build locally the first time you build, but then it's down to 30s to rebuild after a change.

• But for building releases, you can't rely on incremental builds; incremental builds (i.e. building on top of a cache from previous arbitrary builds) would be non-deterministic and give you non-reproducible builds, exactly what a release manager doesn't want. So releases, at least, are stuck needing to "build the world." You want to accelerate those — remote build infra is the way to go. Remote, distributed build infra, ideally (think: ccache on Flume.)

These remote/distributed builds do still cohere to the philosophy in the abstract, though — a remote build is not the same as a CI build, after all; the dev's own workstation is still acting as the planner and director of the build process.


Appreciate a proper response to my throw away comment :)

> incremental builds (i.e. building on top of a cache from previous arbitrary builds) would be non-deterministic and give you non-reproducible builds

Isn’t this exactly what Bazel solves?


It tries, but it's really more of an operational benefit (i.e. works to your advantage to enable build traceability and avoid compile-time Heisenbugs, when you the developer can hold your workstation's build-env constant) than a build-integrity one (i.e. something a mutually-untrustworthy party could use to audit the integrity of your build pipeline, by taking random sample releases and building them themselves to the same resulting SHA — ala Debian's deterministic builds.)

Bazel doesn't go full Nix — it doesn't capture the entire OS + build toolchain inside its content-fingerprinting to track it for changes between builds. It's more like Homebrew's build env — a temporary sandbox prefix containing a checkout of your project, plus symlinks to resolved versions of any referenced libs.

Because of this, you might build, then upgrade an OS package referenced in your build or containing parts of your toolchain, and then build again; Bazel (used on its own) doesn't know that anything's different. But now you have a build that doesn't look quite like it would if you had built everything with the newest version of the package.

I'm not saying you can't get deterministic builds from Bazel; you just have to do things outside of Bazel to guarantee that. Bazel gets you maybe 80% of the way there. Running the builds inside a known fixed builder image (that you then publish) would be one way to get the other 20%.

I have a feeling that Blaze is probably better for this, though, given all the inherent corollary technologies (e.g. objFS) it has within Google that don't exist out here.


> gives some kind of coherence to it (looking at you Java and C++)

I have never done any real programming in Java itself, but the parts of Java world that I learned while writing some Clojure circa 2015 felt pretty coherent. Now I'm curious what I missed.


It baffles me that so many developers are unable to use pre-commit hooks for their code formatting tools, which have existed since the 1990s, to the point that go fmt became a revelation.


That's hardly the point. The point is that there is a single format for the language itself and you don't have to argue about spaces vs tabs vs when to line break, whether you want trailing commas and where to put your braces.

You can format on save or in a pre-commit hook. But the fact that the language has a single canonical format is what makes it kind of new.


Yes, because there is no one in the room able to configure the formatting tool for the whole SCM.

A simple settings file set in stone by the CTO, such a hard task to do.

The fact that this is even seen as a novelty only confirms the target group for the language.


> A simple settings file set in stone by the CTO, such a hard task to do.

And then you have a 100 companies with 100 CTOs resulting in 100 different styles.

With Go there is only one style everywhere.


Most people only care about the code of their employer.


Many shops have to write and submit patches to upstream projects. Some shops have to maintain their own "living fork" version of an upstream project.


Yeah, and apparently use Notepad, since they are unable to have a configuration file for formatting.


Very few employers do 100% of the code in-house, everyone uses libraries and code from the internet.

Which will have a different style you need to contend with.

But with Go every single sane piece of code you find will be formatted with gofmt and will look mostly the same.


> A simple settings file set in stone by the CTO, such a hard task to do.

It does seem to be a hard thing to do. Working across dozens of enterprise shops in the last 15 years, I have not seen such a setting done or dictated at all. So the whole codebase used to be a mishmash of personal styles.


A clear management failure then.


Any CTO who is aware of the impact that having an incoherent programming style can have on employee productivity, is likely going to arrive at the conclusion that the most efficient way to set such policy is to "outsource" it to the programming language, by requiring projects to use an opinionated language.

Then again, any such CTO is likely also going to be someone who tends to think about things like "the ability to hire developers already familiar with the language to reduce ramp-up time" — and will thus end up picking a common and low-expressivity opinionated language. Which usually just means Java. (Although Golang is gaining more popularity here as well, as more coding schools train people in it.)


It is going to be a very clueless CTO, if they aren't aware of tooling that is even older than themselves.


IMHO it's not about the standards in your company, it's more about being able to parse any random library on GitHub etc with your eyeballs.


I use compilers and IDEs for that.


You're reading way too much into what the parent poster said. He just correctly stated the overall sentiment of the community.

That said, suggesting adding exceptions to Go is about as reasonable as adding a GC to Zig. How much effort would you spend arguing against someone bringing that up as a serious proposal?


> That said, suggesting adding exceptions to Go is about as reasonable as adding a GC to Zig.

Suggesting the addition of exceptions to Go is as reasonable as suggesting the addition of loops to Rust. Which is to say that it already has exceptions, and always has. Much of the language's design is heavily dependent on the presence of exceptions.

Idioms dictate that you probably shouldn't use exceptions for errors (nor should you in any language, to be fair), but even that's not strictly adhered to. encoding/json in the standard library famously propagates errors internally using exception handlers.
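For anyone unfamiliar with the shape of that pattern, here's a minimal sketch (hypothetical code, not lifted from the standard library): panic as "throw" deep in the call stack, recover as "catch" at the API boundary, converted back into an ordinary error.

    package main

    import "fmt"

    type parseError struct{ msg string }

    // parseDigit "throws" by panicking with a package-private error value.
    func parseDigit(b byte) int {
        if b < '0' || b > '9' {
            panic(parseError{msg: fmt.Sprintf("not a digit: %q", b)})
        }
        return int(b - '0')
    }

    // Parse "catches" at the API boundary: the deferred recover converts the
    // panic back into an ordinary error for the caller.
    func Parse(s string) (n int, err error) {
        defer func() {
            if r := recover(); r != nil {
                if pe, ok := r.(parseError); ok {
                    n, err = 0, fmt.Errorf("parse: %s", pe.msg)
                    return
                }
                panic(r) // not ours; re-panic
            }
        }()
        for i := 0; i < len(s); i++ {
            n = n*10 + parseDigit(s[i])
        }
        return n, nil
    }

    func main() {
        fmt.Println(Parse("123")) // 123 <nil>
        fmt.Println(Parse("12x")) // 0 parse: not a digit: 'x'
    }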


Go doesn't use exceptions as a primary way of handling errors, which is what we're talking about here. Pedantry is not welcome.


It doesn't not use exception handlers as a primary way of handling errors either, though. Go doesn't specify any error handling mechanism. Using exception handlers is just as valid as any other, and even the standard library does it, as noted earlier.

The only error-related concept the Go language has is the error type, but it comes with no handling semantics. Which stands to reason as there is nothing special about errors. Originally, Go didn't even have an error type, but it was added as a workaround to deal with the language not supporting cyclic imports.

Your pedantry is hilarious and contradictory.


You're absolutely technically correct, in the "spherical cow in a vacuum" sense. In reality though, essentially all Go code out there handles errors through the pattern of checking if the error in a `(value, error)` tuple returned from a function is `nil` or not. That is what the discussion here is about - the way errors are handled in a language in practice, not in theory. Therefore, pedantry.

Basically, discussions have context and I have no intention of prepending 10 disclaimers to every statement I make to preemptively guard against people interpreting my comments as absolutes in a vacuum.


    "spherical cow in a vacuum"
I only learned about this recently. Very funny to me, and appropriately used here. Ref: https://en.wikipedia.org/wiki/Spherical_cow


That's a lot of pedantry you've got there for someone who claims it is not welcome. Rules for thee, not for me?

But, if you'd kindly come back to the topic at hand:

> That is what the discussion here is about - the way errors are handled in a language in practice, not in theory.

While I'm not entirely convinced that is accurate, I will accept it. Now, how does:

- "That said, suggesting adding exceptions to Go is about as reasonable as adding a GC to Zig."

Relate to that assertion? What does "suggesting adding exceptions to Go" have to do with a common pattern that has emerged?


It’s not simply a common pattern. It is a way of doing things in the community. The stdlib uses it, the libraries use it, and if you do not use it, people will not use your software.


Okay, but how does that relate to what was said?


You’re just complaining because the compiler isn’t complaining


And you're panicking because the other commenters haven't helped recover for you. Try as you might, sadly they are not here to catch you. No need to throw a fit over it. You are not as exceptional as you think you are.


Dude you tried, there's no need for a 50,000-word essay against a sentence


The output of software cares not for whether it be a sentence or even just a word. The output of software has no care at all. What led you to believe otherwise?


The problem with Go exceptions (panics) is that they are second-class citizens. This leads people to neglect `defer` statements where they're needed, opening up the risk of leaving mutexes locked and other critical resources open.
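A minimal sketch of that failure mode (hypothetical code): without `defer`, a panic that unwinds through the critical section leaves the mutex locked; with `defer`, the unlock still runs during unwinding.

    package main

    import (
        "fmt"
        "sync"
    )

    var mu sync.Mutex

    func doWork() { panic("boom") } // stand-in for code that might panic

    // Panic-unsafe: if doWork panics, Unlock is never reached and every later
    // Lock on mu will block forever once the panic is recovered upstream.
    func unsafeUpdate() {
        mu.Lock()
        doWork()
        mu.Unlock()
    }

    // Panic-safe: the deferred Unlock runs during unwinding as well.
    func safeUpdate() {
        mu.Lock()
        defer mu.Unlock()
        doWork()
    }

    func main() {
        func() {
            defer func() { recover() }() // swallow the panic for the demo
            safeUpdate()
        }()
        // true: the deferred Unlock released the mutex despite the panic.
        // Swap in unsafeUpdate above and this reports false instead.
        fmt.Println("lock available again?", mu.TryLock())
    }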


I would be happy if they added something to the compiler that allows you to ignore error return values, and in that case the compiler would just throw an exception^Wpanic from that point. I think it even makes sense for Go purists: you need to handle errors or get panicked. And I'd just mostly ignore errors and have my sweet exceptions.


> I think it even makes sense for Go purists: you need to handle errors or get panicked.

You think wrong. Go preaches that zero values should be useful, which means that in the common (T, error) scenario, T should always be useful even if there is also an error state. Worst case, you will get the zero value in return, which is still useful. This means that the caller should not need to think about the error unless it is somehow significant to the specific problem they face. They are not dependent variables.

I understand where you are coming from as in other languages it would be catastrophic to ignore errors, but that's not how Go is designed. You cannot think of it like you do other languages, for better or worse.


> T should always be useful even if there is also an error state.

I disagree with this blanket assertion. In a limited set of cases would I ever expect a result to be valid if there was also an error.

Also, when you disagree with what someone thinks, there are different ways to respond and using "You think wrong" is probably one of the most confrontational ways of responding.


> In a limited set of cases would I ever expect a result to be valid if there was also an error.

You have to return something for T. Go does not allow you to return nothing. Why would you return garbage when you can just as easily return the zero value, which should always be useful? Yes, you technically could return garbage, but why? There is absolutely no justification. Consider,

    func GetUser() (*User, error) {
        return nil, ErrNotFound
    }
Here, T is useful. You don't necessarily need to look at error in this example. You can meaningfully work with T alone if the error state is insignificant to your specific use case.

What's the alternative?

    func GetUser() (*User, error) {
        return &User{
            Name:  "No User",
            Email: "not@found.com",
            Role:  DoesNotExist,
        }, ErrNotFound
    }
Why on earth would you do that?

> Also, when you disagree with what someone thinks, there are different ways to respond and using "You think wrong" is probably one of the most confrontational ways of responding.

Imagine thinking that the output of software is being confrontational, or exhibiting any kind of output that conjures this kind of change in "emotional state". As nonsensical as a random number generator outputting three consecutive 6s and then concluding that it must be the work of the devil. So strange.


You can get pretty close with generics:

    func Must[Value any](v Value, err error) Value {
        if err != nil {
            panic(err)
        }
        return v
    }

    Must(strconv.Atoi("not a number"))
(https://go.dev/play/p/NnrZ30TflDI)

;)


Then it shouldn't have added exceptions in the first place. But exceptions in Go exist, and so does exception (un)safety, and denial only leads to buggy code. I cannot count how many times I've seen exception-unsafe code in Go exactly because everyone keeps ignoring them.


What would you use when you actually have an exception, then? Exceptions and exception handlers are a useful construct, but, like everything else, become problematic when used outside of their intended purpose.

Just because exception and error both start with the letter "e" does not mean they are in any way related. They have very different meanings and purposes.


If they are really exceptional situations that should never happen, just crash the process. The moment recover was added they become just another error handling mechanism


You would crash the process. But, sometimes it's useful to do a little bookkeeping first.


That's not how recover is used in practice


Right. In practice, people don't even realize that Go has an exception handling system (see rest of thread) to use.


Aside from "not ancient", Java has everything you want! It has what I'd consider the best tooling (IntelliJ), it's static and strongly typed, has proper enums now (sealed interfaces), composable error handling, null safety with new module flags, etc. Not sure about the community, but the maintainers I've worked with seemed nice enough. I imagine the community has a lot less ego than Rust/Go due to the general perception of the language.


C# is much better at doing systems-programming adjacent tasks than either Java or Go, as it actually exposes all the necessary features for this: C structs, actual, proper generics, fast FFI, etc.


I would argue that .NET is better than Java, unless Java has gotten something like LINQ since I last used it (which is entirely possible).

For those who do not know: .NET is cross platform, MS has official documentation on how to deploy it in Docker, and it is MIT licensed.

And if you want to deploy a backend for your webapp, the terseness can now rival Flask - plus the compiler can cross-compile it to any supported platform, even in a form that works without .NET installed.

And of course it has Jetbrains support through Rider.


Poor F# always had the most terse web server but no one ever noticed!


What does Java offer that dotnet does not?


libraries, and I thank God I don't have to work on Windows and thus don't know how smart Visual Studio Enterprise is, but IJ is world class in the number of bugs it'll catch


What? You don't have to work on Windows with .net either and there is an IntelliJ IDE for C#.


Apologies, I meant to draw the Windows distinction due to VS Enterprise, since I doubt very seriously VSE runs elsewhere (I'm aware of 2022 for Mac but it doesn't cite its "feature level"). Since I'm not a .NET-er, nor a VS-er, I can't say how smart their top-tier IDE is with their in-house language in order to have an apples-to-apples bake-off


You have the same PL preferences as me. I haven't tried Rust yet, but Kotlin, modern C#, and F# all fit your requirements. Kotlin is closest because it uses the enormous Java ecosystem.


I haven't had time to really try to write anything in it, but https://gleam.run/ looks really good too. Like Elm for backend + frontend!


"+ frontend" only if you squint really hard, I think


F# is Elm for front end and backend


I read somewhere something to the effect of this: Some languages solve deficiencies in the language by adding more features to the language. This approach is so common, it could be considered the norm. These are languages like C++, Swift, Rust, Java, C#, Objective C, etc. But two mainstream languages take different approach: C and Go strongly prefer to solve deficiencies in the language simply by adding more C and Go code. One of the effects of this preference is that old (or even ancient in the case of C) codebases tend to look not that different from new codebases, which as one might imagine can be quite beneficial in certain cases. There is a reasonable argument to be made that at least some of the enduring success of C has to do with this approach.


Why would you not enforce a particular belief about how a language should be designed? There are languages designed around being able to do anything you want at any time regardless of if it makes sense, and then you end up with everyone using their own fractured subset of language features that don't even work well together. Not every language needs to be the same feature slop that supports everything poorly and nothing well.


I'd say Scala.

It has its flaws, but the latest version (Scala 3) is really really good. The community is open to different styles of programming - from using it as a "better Java" to "pure functional programming like in Haskell".


Scala is the perfect example of why you want to limit expressivity. It seems so cool and awesome at first, but then you have to support a code base with other engineers and you quickly come to the view that Go's limited expressivity is a blessing.

Hilariously, I was using a gen AI (Phind) and asked it to generate some Scala code, and it no joke suggested the code in both idiomatic Scala and in Java style, and all you had to do was look at it to see the Java style was 1000X easier to read/maintain.


Well, flexibility has its price. And yeah, if you need to work in a team that uses a very different style, then you won't like it.

On the other hand, if you carefully select your team or work alone, then this is not a problem at all.

Btw, there isn't really "one" idiomatic scala style - therefore I tend to believe that you are not familiar with the language and the community.


> Btw, there isn't really "one" idiomatic scala style

That is their point. There's too many styles.


What "point" is that though? That's like saying "there is too many programming languages".


Too many styles "in Scala," specifically. The point is that some people (not me) prefer to use something restrictive because it keeps everyone on the team from getting too clever with code and making it unreadable. The little bit of extra typing is worth the easier time reading because that's most of what you'll be doing as a coder.

As opposed to an expressive language with powerful macros. Another person could hop on the team and write something that only makes sense to them, and now you have to understand their half-baked DSL.

The burden is on you, in an expressive language, to have a style guide for your team or enforce a style through code reviews. Whereas Golang just has that built-in, and it is obviously more than capable of writing production software.

This is a criticism often levied against Scala because you can do pretty much any paradigm in it, and there's lots of disagreements over when to do what paradigm.


Fair enough.


Suggesting try/catch indicates that you have virtually no experience using Go. You're standing on the side lines yelling stupid/non-sensical feature requests and getting upset when you're not taken seriously.


Adding exceptions to golang doesn't make any sense for a very simple reason: they're already there. The fact that they're called differently doesn't change anything, panics walk like exceptions, swim like exceptions and they quack like exceptions.


> ostracize the non-believers

It is rather the non-believers in exception handling who are the lunatic fringe that benefits from a healthy dose of ostracism.


Let’s hope Go never gets try/catch exceptions


    func try(fn func()) { fn() }

    // recover() only stops a panic when called directly from a deferred
    // function, which is why catch is meant to be used as `defer catch(...)`.
    func catch(fn func(any)) {
        if v := recover(); v != nil {
            fn(v)
        }
    }

    func throw(v any) { panic(v) }

    func fail() {
        throw("Bad things have happened")
    }

    func main() {
        try(func() {
            defer catch(func(v any) {
                fmt.Println(v)
            })
            fail()
        })
    }

Sorry.


the day the go codebase throws random panics is the day I quit the company.


So you quit the day encoding/json was written?


You wrap those properly. Believe me, waking up in the middle of the night because someone abused panic to throw state around and it ended up being thrown from a new goroutine, just enough to crash the entire pod, sucks even more.


You are probably thinking about (proto)reflect.


No, I am thinking of encoding/json. It uses Go's exception handlers to pass errors around, much like the code above.


Forgive me as I'm not experienced in Go. I had a look at the API reference for encoding/json[1] and performed a (very hasty) search on the source code[2].

The API reference doesn't state that panics are a part of the expected error interface, and the source code references seem to be using panics as a way to validate invariants. Is that what you're referring to?

I'm not entirely sure if the panics are _just_ for sanity, or if input truly can cause a panic. If it's the latter, then I agree - yikes.

[1] - https://pkg.go.dev/encoding/json

[2] - https://cs.opensource.google/search?q=panic&sq=&ss=go%2Fgo:s...


The use does not transcend package boundaries, but you will find its use within the package itself, which is within the Go codebase spoken of earlier.


There is evil in this world and then there's ... this :D


ESBuild, one of my favourite Go projects, uses panics to handle try/catch exceptions.

The syscall/js package [0] throws panics if something goes wrong, rather than returning errors.

Go already has try/catch exceptions. We just don't use them most of the time because they're a really bad way of handling errors.

[0] https://pkg.go.dev/syscall/js


Because

    if err != nil {
      return err
    }
repeated all over the place, is the epitome of productivity!


I would have to go through my comment history for exact numbers. In analyzing a real production service written in Go, with multiple dozens of contributors, hundreds of thousands of lines, and several years of history, "naked" if-err-return-err made up less than 5% of error handling cases and less than 1% of total lines. Nearly every error handling case either wrapped specific context, emitted a log, and/or emitted a metric specific to that error handling situation.

If you do naked if-err-return-err, you are likely doing error handling wrong.
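For illustration, the typical shape was less a naked return and more something like this (names are made up):

    user, err := store.GetUser(ctx, id)
    if err != nil {
        return nil, fmt.Errorf("loading user %d: %w", id, err)
    }
The %w verb keeps the underlying error available to errors.Is/errors.As higher up the stack, so you add context without losing the cause.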


Also known as The Apple Answer.

Plenty of The Go Way arguments apply to software we were writing from the dawn of computing until the 1990s, and there are plenty of reasons why, with the exception of Go (pun intended), the industry has moved beyond that.


I nearly wrote "you are holding it wrong" to nod to that quote. But it is really true - most errors in long running services are individual and most applications I've worked in ignore this when (ab)using exceptions.

In our Go codebases, the error reporting and subsequent debugging and bug fixing is night and day from our Python, Perl, Ruby, PHP, Javascript, and Elixir experiences.

The one glaring case where this is untrue is in our usage of Helm which, having been written in Go, I would expect better error handling. Instead we get, "you have an error on line 1; good luck" - and, inspired, I looked at their code just now. Littered with empty if-err-return-err blocks - tossing out all that beautiful context, much like an exception would, but worse.

https://github.com/search?q=repo%3Ahelm%2Fhelm%20%20if%20err...


The same applies to literally any language if you care about error handling, except it's way more ergonomic to do. Why do Go users try to pass off the lack of language features as if that's the reason why they care about writing quality code?


to do the same as Go in Python is way more verbose and less ergonomic because you would wrap each line in a try catch.

I can't speak for all Go users, but what I have seen is that the feature set in Go lends itself to code that handles errors, and exceptions simply don't -- I can say this because I've worked in a dozen different production systems for each of perl, python, js, elixir, and php -- I'm left believing that those languages _encourage_ bad error handling. Elixir is way cool with pattern matching and I still find myself wishing for more Go-like behavior (largely due to the lacking type system, which I hear they are working to improve).

I've not used Rust which apparently is the golden standard in the HN sphere


Wrapping a line in a try catch is equivalent to the go error check routine. Should be roughly the same amount of lines if you care about that sort of thing.

There's bad programmers everywhere. Writing if err != nil { return err } is the same as not handling exceptions (they just bubble up).

Maybe you think this because go shoves the exceptions front and center in your face and forces you to deal with them. I suppose it can be a helpful crutch for beginners but it just winds up being annoying imo.


It actually is when debugging, because it makes control flow explicit.

In JS, for example, people don't even know which functions could throw exceptions, and just ignore them, most of the time. Fast to write and looks nice, but is horrible quality and a nightmare to debug.


Ever heard of these funny tools called debuggers and exception breakpoints?


sane error handling in go is more productive any day.


Calling if boilerplate sane is an oxymoron.


tbh only like 50% of my `if err != nil { return err }` are mindless. The rest of the time, the fact that error handling is explicit and in my face has helped me a lot during my time with go.


It's a tradeoff. Alternatives like exception handling make control flow less obvious and shorthands like the ? operator lock you into a specific return type (Result<T,E> in the case of Borgo or Rust).


What's bad about locking the programmer into using Result<T, E> or Option<T>? These are enough for many common use cases, and anyway, you're free to use something else and unwrap it manually.


It's 2024. We need an effective means for error propagation and the battle-tested solutions are try/catch exceptions or Optional types. Go's error handling would have been great in the 1970's, it's not so great now some 50 years later.


Option types as error handling works but it requires proper syntax. Haskell can do it with do-notation and Rust has its special ? operator. Go has boiler-plate.


Honest question - do you think there's a better way to do it?


Better than Go? Yes Rust, Haskell, OCaml all do it better. Better than those? Probably - the design space has hardly been explored!


I was looking to modernize my skills in systems languages, as C++ is my systems language of choice at the moment, and I'd been deciding between Go and Rust (I know, to call Go a "systems" language is a bit of a stretch, but what we call a "system" is also a bit different these days). I've decided it's going to be Rust.


Go has more in common with Java or C# than Rust. You can get a Rust like experience with the latest C++ standard and a lot of tooling on top. That’s the true alternative in my view.


Sooo what’s the new 2025 way?


Having exception support does not mean the code will be scattered with try/catches - exceptions are not used for code flow, but for ensuring no error slips through silently. And when an exception is thrown, the stack trace is captured so you can get to the code and debug.


If it were true, it would be because nobody who wants exceptions uses Go. The use of the language is self-selected based on the match between preferences and available features, so then the preferences aren't surprising.

But in fact, there probably exists a minority of developers who somehow had Go foisted upon them, and who would like it to have basic features like exceptions.


I like go for the most part, but the error handling drives me bonkers. I'm a big exceptions/errors with try/catch/finally statements person. They do a MUCH better job of forcing you to handle exceptions/errors than return values. I'm sure this will start some argument of why returning an error value is better. I've spent the last 35 years programming in a large number of languages and my _personal_ opinion is that try/catch trumps returning error results.


Speak for yourself. Without decent error type handling, exception handling is inevitable.


panic/recover/error?


Objectively wrong.


They didn't mean literally 0 people.


Then they shouldn't use absolute statements like "No one", right? Otherwise you have pedants (a group that includes myself from time to time) that, rightfully, point out that your absolute statement is, in fact, not an absolute.


The issue is that it's more or less impossible to graft onto the language now. You could add enums, but the main reason why people want them is to fix the error handling. You can't do this without fracturing the ecosystem.


> but the main reason why people want them is to fix the error handling

Why do you think so? Maybe I'm an odd case, but my main use case for enums is for APIs and database designs, where I want to lock down some field to a set of acceptable values and make sure anything else is a mistake. Or for state machines. Error handling is manageable without enums (but I love Option/Result types more than Go's error approach, especially with the ? operator).


> but I love Option/Result types more than Go's error approach

The thing is, these don't add much on their own. You'd have to bring in pattern matching and/or a bunch of other things* that would significantly complicate the language.

For example, with what's currently in the language, you could definitely have an option type. You'd just be limited to roughly an API that's `func (o Option[T]) IsEmpty() bool` and `func (o Option[T]) Get() T`. And these would just check whether the underlying pointer is nil and dereference it. You can already do that with pointers. Errors/Result are similar.

A `try` keyword that expands `x := try thingThatProducesErr()` to:

    x, err := thingThatProducesErr()
    if err != nil {
        return {zero values of the rest of the function signature}, err
    }
Might be more useful in go (you could have a similar one for pointers).

* at the very least generic methods for flat map shenanigans


Using an Option instead of a pointer buys you the inability to forget to check for nil.

Just need to make sure the Option exposes the internal value only through:

    func (o Option[Value]) Get() (Value, bool) {
        return o.value, o.exists
    }
Accessing the value is then forced to look like this:

    if value, ok := option.Get(); ok {
        // value is valid
    }
    // value is invalid
Thus, there's no possibility of an accidental nil pointer dereference, which I think is a big win.

A Result type would bring a similar benefit of fixing the few edge cases where an error may accidentally not be handled. Although I don't think it'd be worth the cost of switching over.
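For completeness, the rest of such an Option type is only a few lines with generics (a sketch, not from any particular library):

    type Option[Value any] struct {
        value  Value
        exists bool
    }

    func Some[Value any](v Value) Option[Value] { return Option[Value]{value: v, exists: true} }
    func None[Value any]() Option[Value]        { return Option[Value]{} }

    func (o Option[Value]) Get() (Value, bool) {
        return o.value, o.exists
    }
With the fields unexported, the two-value Get above is the only way to reach the value.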


How is that better than

    if value != nil {
        // value is valid
    }
    // value is invalid
?

Of course, this is often left out, but you can just as easily do:

    value, _ := option.Get()
So this is just not true:

> Using an Option instead of a pointer buys you the inability to forget to check for nil.


It's better because you do not need to remember to check for nil, the compiler will remind you every time by erroring out until you handle the second return value of `option.Get()`.
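Concretely, with the hypothetical Option from upthread:

    value := option.Get()     // compile error: Get returns 2 values
    value, ok := option.Get() // fine, and ok must then actually be used
whereas dereferencing a possibly-nil pointer compiles without complaint.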

> Of course, this is often left out, but you can just as easily do:

Unfortunately it gets brought up pretty much every time in these discussions.

Deliberate attempts to circumvent safety are not part of the threat model. The goal is prevention of accidental mistakes. Nothing can ultimately stop you from disabling all safeties, pointing the shotgun at your foot and pulling the trigger.


> my main use case for enums is for APIs and database designs, where I want to lock down some field to a set of acceptable values and make sure anything else is a mistake

Then what you are really looking for is sum types (what Rust calls enums, but unusually so), not enums. Go does not have sum types, but you can use interfaces to achieve a rough facsimile and most certainly to satisfy your specific expectation:

    type Hot struct{}
    func (Hot) temp() {}

    type Cold struct{}
    func (Cold) temp() {}

    type Temperature interface {
        temp()
    }

    func SetThermostat(temperature Temperature) {
        switch temperature.(type) {
        case Hot:
            fmt.Println("Hot")
        case Cold:
            fmt.Println("Cold")
        }
    }
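And unlike integer constants, the variants can carry data, which is the part people usually mean by "sum types". A sketch extending the above:

    type Celsius struct{ Degrees float64 }
    func (Celsius) temp() {}

    func SetThermostat(temperature Temperature) {
        switch t := temperature.(type) {
        case Hot:
            fmt.Println("Hot")
        case Cold:
            fmt.Println("Cold")
        case Celsius:
            fmt.Printf("%.1f degrees\n", t.Degrees)
        }
    }
The compiler still won't check the switch for exhaustiveness, which is the main gap compared to real sum types.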


Enums and sum types seem to be related. In the code you wrote, you could alternatively express the Hot and Cold types as enum values. I would say that enums are a subset of sum types but I don't know if that's quite right. I guess maybe if you view each enum value as having its own distinct type (maybe a subtype of the enum type), then you could say the enum is the sum type of the enum value types?


> Enums and sum types seem to be related.

They can certainly help solve some of the same problems. Does that make them related? I don't know.

By definition, an enumeration is something that counts one-by-one. In other words, as is used in programming languages, a construct that numbers a set of named constants. Indeed you can solve the problem using that:

    type Temperature int

    const (
        Hot Temperature = iota
        Cold
    )

    func SetThermostat(temperature Temperature) {
        switch temperature {
        case Hot:
            fmt.Println("Hot")
        case Cold:
            fmt.Println("Cold")
        }
    }
But, while a handy convenience (especially if the set is large!), you don't even need enums. You can number the constants by hand to the exact same effect:

    type Temperature int

    const (
        Hot  Temperature = 0
        Cold Temperature = 1 
    )

    func SetThermostat(temperature Temperature) {
        switch temperature {
        case Hot:
            fmt.Println("Hot")
        case Cold:
            fmt.Println("Cold")
        }
    }
I'm not sure that exhibits any sum type properties. I guess you could see the value as being a tag, but there is no union.


Unfortunately, this:

    const (
        Hot  Temperature = 0
        Cold Temperature = 1 
    )
Isn't really a good workaround when lacking an enumeration type. The compiler can't complain when you use a value that isn't in the list of enumerations. The compiler can't warn you when your switch statement doesn't handle one of the cases.

Refactoring is harder - when you add a new value to the enum, you can't easily find all those places that may require logic changes to handle the new value.

Enums are a big thing I miss when writing Go, compared to when writing C.
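For example, both of these compile without complaint:

    SetThermostat(Temperature(42)) // not one of Hot/Cold, still a valid Temperature
    var t Temperature              // the zero value happens to mean Hot, whether you intended that or not
    SetThermostat(t)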


> Isn't really a good workaround when lacking an enumeration type.

Enumeration isn't a type, it's a numbering construct. Literally, by dictionary definition. Granted, if you use the Rust definition of enum then it is a type, but that's because it refers to what we in this thread call sum types. Rust doesn't support "true" enums at all.

> The compiler can't complain when you use a value that isn't in the list of enumerations.

Well, of course. But that's not a property of enums. That's a property of value constraints. If Go supported value constraints, then it could. Consider:

    type Temperature 0..1

    const (
        Hot  Temperature = 0
        Cold Temperature = 1 
    )
Then the compiler would complain. Go lacks this in general. You also cannot define, say, an Email type:

    type Email "{string}@{string}"
Which, indeed, is a nice feature in other languages, but outside of what enums are for. These are separate concepts, even if they can be utilized together.

> Enums are a big thing I miss when writing Go, compared to when writing C.

Go has enums. They are demonstrated in the earlier comment. The compiler doesn't attempt to perform any static analysis on the use of the enumerated values because, due to not having value constraints, "improper" use is not a fatal state[1] and Go doesn't subscribe to warnings, but all the information you need to perform such analysis is there. You are probably already using other static analysis tools to assist your development. Go has a great set of tools in that space. Why not add an enum checker to your toolbox?

[1] Just like it isn't in C. You will notice this compiles just fine:

    typedef enum {
        Hot,
        Cold
    } Temperature;

    void setThermostat(Temperature temperature) {
        switch (temperature) {
        case Hot:
            printf("Hot\n");
        }
    }

    int main() {
        setThermostat(10);
    }


> but all the information you need to perform such analysis is there.

No, it isn't, unlike C, in which it is. The C compiler can actually differentiate between an enum with one name and an enum with a different name.

There's no real reason the compiler vendor can't add in warnings when you pass in `myenum_one_t` instead of `myenum_two_t`. They may not be detecting it now, but it's possible to do so because nothing in the C standard says that any enum must be swappable for a different enum.

IOW, the compiler can distinguish between `myenum_one_t` and `myenum_two_t` because there is a type name for those.

Go is different: an integer is an integer, no matter what symbol it is assigned to. The compiler, now and in the future, can not distinguish between the value `10` and `MyConstValue`.

> Just like it isn't in C. You will notice this compiles just fine:

Actually, it doesn't compile "just fine". It warns you: https://www.godbolt.org/z/bn5ffbWKs

That's about as far as you can get from "compiling just fine" without getting to "doesn't compile at all".

And the reason it is able to warn you is because the compiler can detect that you're mixing one `0` value with a different `0` value. And it can detect that, while both are `0`, they're not what the programmer intended, because an enum in C carries with it type information. It's not simply an integer.

It warns you when you pass incorrect enums, even if the two enums you are mixing have identical values. See https://www.godbolt.org/z/eT861ThhE ?


> No, it isn't, unlike C, in which it is.

Go on. Given:

    type E int
    const (
        A E = iota
        B
        C
    )

    enum E {
        A,
        B,
        C
    }
What is missing in the first case that wouldn't allow you to perform such static analysis? It has a keyword to identify initialization of an enumerated set (iota), it has an associated type (E) to identify what the enum values are applied to, and it has rules for defining the remaining items in the enumerated set (each subsequent constant inherits the next enum element).

That's all C gives you. It provides nothing more. They are exactly the same (syntax aside).

> It warns you

Warnings are not fatal. It compiles just fine. The Go compiler doesn't give warnings of any sort, so naturally it won't do such analysis. But, again, you can use static analysis tools to the same effect. You are probably already using other static analysis tools as there are many other things that are even more useful to be warned about, so why not here as well?

> enum in C carries with it type information.

Just as they do in Go. That's not a property of enums in and of themselves, but there is, indeed, an associated type in both cases. Of course there is. There has to be.


> What is missing in the first case that wouldn't allow you to perform such static analysis?

Type information. The only type info the compiler has is "integer".

> It has a keyword to identify initialization of an enumerated set (iota),

That's not a type.

> it has an associated type (E)

It still only has the one piece of type information, namely "integer".

> and it has rules for defining the remaining items in the enumerated set

That's not type information

> That's all C gives you.

No. C enums have additional information, namely, which other integers that type is compatible with. The compiler can tell the difference between `enum daysOfWeek` and `enum monthsOfYear`.

Go doesn't store this difference - `Monday` is no different in type than `January`.

> Warnings are not fatal.

Maybe, but the warning tells you that they types are not compatible. The fact that the compiler tells you that the types are not compatible means that the compiler knows that the types are not compatible, which means that the compiler regards each of those types as separate types.

Of course you can redirect the warning to /dev/null with a flag, but that doesn't make the fact that the compiler considers them to be different types go away.

Whether you like it or not, C compilers can tell the difference between `Monday` and `January` enums. Go can't tell the difference between `Monday` and `January` constants. How can it?


> That's not a type.

Nobody said it was. Reaching already? As before, enums are not a type, they are a numbering mechanism. Literally. There is an associated type in which to hold the numbers, but that's not the enum itself. This is true in both C and Go, along with every other language with enums under the sun.

> The compiler can tell the difference between `enum daysOfWeek` and `enum monthsOfYear`.

Sure, just as in Go:

    type Day int
    const (
        Monday Day = iota
        Tuesday
        // ...
    )

    type Month int
    const (
        January Month = iota
        February
        // ...
    )

    func month(m Month) {}
    func main() {
        month(January) // OK
        month(Monday)  // Compiler error
    }

> Go doesn't store this difference - `Monday` is no different in type than `January`.

Are you, perhaps, mixing up Go with Javascript?

> How can it?

By, uh, using its type system...? A novel concept, I know.


Enums are exactly sums of unit types (types with only one value).


Regardless of the rest of this thread, I appreciate this comment. It helped crystalize 'enum' in the context of 'sum' for me in a way that had previously been lacking. Thanks.


Traditionally, enums have been a single number type with many values (initialized in a counted one-by-one fashion).

Rust enums are as you describe, as they accidentally renamed what was historically known as sum types to enums. To be fair, Swift did the same thing, but later acknowledged the mistake. The Rust community doubled down on the mistake for some reason, now gaslighting anyone who tries to use enum in the traditional sense.

At the end of the day it is all just 1s and 0s. If you squint hard enough, all programming features end up looking the same. They are similar in that respect, but that's about where the similarities end.


annoyingly go can't have proper sum types, as the requirement for a default value for everything doesn't make any sense for sum types


Couldn't the zero value be nil? I get that some types like int are not nil-able, but the language allows you to assign both nil and int to a value of type any (interface{}), so I wonder why it couldn't work the same for sum types. i.e. they would be a subset of the `any` type.


You can just default to the first variant, no?


Said "requirement" is only a human construct. The computer doesn't care.

If the humans choose to make an exception for that, it can be done. Granted, the planning that has taken place thus far has rejected such an exception, but there is nothing about Go that fundamentally prevents carving out an exception.


https://www.postgresql.org/docs/current/datatype-enum.html

Then wrap appropriately. Something like sqlc will actually generate everything you need.


When enums make it from the language to the db, things are now brittle and it only takes one intern to sort the enums alphabetically to destroy the lookup relations. An enum lookup table helps, but now they are not enums in the language.


Depends what you mean by 'enums' exactly, but now that generics has been added, a small change would be to allow interfaces defined via type disjunction to be used as concrete types:

    type Option1 struct { ... }
    type Option2 struct { ... }
    type MyEnum interface { Option1 | Option2 }

    var myValue MyEnum // currently not legal Go
That doesn't solve all the use cases for enums / sum types, but it would be useful.


I just want regular enums, that would solve the problems that result from using the current status quo.


I would like to have proper stack traces. With that the error handling in go would be fixed.


You can emit a stack trace anytime you like
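A sketch of what that looks like, using runtime/debug from the standard library:

    if err != nil {
        // debug.Stack() captures the stack of the current goroutine
        return fmt.Errorf("doing the thing: %w\n%s", err, debug.Stack())
    }
If you want the trace captured where the error is created rather than where it is noticed, wrappers like github.com/pkg/errors do that at errors.New/Wrap time.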


s/fracturing/having to use older libraries in the existing manner/


How the fuck do you release a language without enums?


To be honest I haven’t seen a single programming language that has decent enums.

By that I mean fundamental and foolproof functions to/from a string, to/from an int, exhaustive switch cases, pattern matching, enumerating.

Seems like something that wouldn’t be too hard but everybody always fails on something.


> I haven’t seen a single programming language that has decent enums.

There's not much you can do with an enumeration. It's just something that counts one-by-one.

A useful tool when you have a large set of constants that you want to number, without having to manually sit there 0, 1, 2, 3... But that's the extent of what it can offer.

> With that I mean fundamental and fool proof functions for to/from a string, to/from an int, exhaustive switch cases, pattern matching, enumerating.

While a programming language may expose this kind of functionality, none of these are properties of enums. They are completely separate features. Which you recognize, given that you are able to refer to them by name. Calling these enums is like calling if statements functions because if statements may be layered with functions.


Can someone help me understand why enums are needed? They only seem like sugar for reducing a few lines while writing. What cannot be achieved without them or what is really a pain point they solve? Maybe it is hard to have a constant with type information?


The original enums are just enumerated integer constants.

What people want is "the ability to express enums with an associated value"; for that, I think we should invent a new term.


The term you're looking for is ADT - Algebraic Data Types

https://en.wikipedia.org/wiki/Algebraic_data_type


We did: Sum types.


You can enumerate constants. There is syntax to implicitly assign the integer values, just use iota as a value.


You'll have to ask the Rust community. Rust lacks enums. Go, however, most definitely has enums (and exceptions too!).

    type E int
    const (
        A E = iota
        B
        C
    )
It's funny how people who have clearly never even looked at Go continually think they are experts in it. What causes this strange phenomenon?


Go has a way to implement enums - I'll give you that. Rust does have enums though: https://doc.rust-lang.org/book/ch06-01-defining-an-enum.html

They can have values like sum types, or not.


> Rust does have enums though

It does not. You'll notice that if you read through your link. A tag-only union is not the same thing, even if you can find some passing similarities.

If you mean it has enums like the Democratic People's Republic of Korea has a democracy, then sure, it does have enums in that sense. I'm not sure that gives us anything meaningful, though.

If we're being honest, sum types are the better solution. Enums are a hack for type systems that are too limited for anything else. They are not a feature you would add if you already have a sufficient type system. It's not clear why Rust even wants to associate itself with the hack that enums are, but your link shows that its author has a fundamental misunderstanding of what enums even are, so it may be down to that.

To be fair, Swift made the same mistake, but later acknowledged it. It is interesting that Rust has chosen to double down instead.


> A tag-only union is not the same thing, even if you can find some passing similarities

Why not? Seems to function the same way.


Enums produce values. Tag-only unions 'produce' types (without values).

Realistically, there isn't a whole lot of practical difference. They were both created to try and solve much the same problem. As before, I posit that there is no need for a language to have both. You can do math on enum values, but that is of dubious benefit. In theory, tag-only unions provide type safety, whereas enums are just values so there is no inherent safety... But, as you probably immediately recognized a few comments back, with some static analysis you can take the greater view,

    type E int
    const (
        A E = iota
        B
        C
    )
and invent something that is just as useful as proper type safety. All the information you need is there. So, in reality, that need not even be significant.

But, technically there is a difference. Values and types are not the same thing.


> Enums produce values. Tag-only unions 'produce' types (without values).

Source? Is this something that is widely accepted, or just how you think enums should be defined?

My understanding is you are saying (using c++ as an example since it has both types) an `enum` is a "true" enum, while an `enum class` somehow isn't?


> Source?

To which language?

> `enum` is a "true" enum, while an `enum class` somehow isn't?

No. Enums are used in both cases. The difference there is in the types the enums are applied to. In one case, a basic integer-based type. In the other, a class.

This differs from Rust. Rust does not use enums. It relies on the type itself to carry all the information. C++ enum classes could have done the same, so it is not clear why they chose to use enums, but perhaps for the sake of familiarity or backwards compatibility with the regular enum directive?


> To which language?

I mean more in the sense of "where did you get this definition from."

> The difference there is in the types the enums are applied to. In one case, a basic integer-based type. In the other, a class.

I'm still not seeing a difference, mainly because when I went to see how c++'s `enum class` and rust's `enum` behave, they both seemed to work the same.

    #[repr(u8)]
    enum Words {
        Foo = 0,
        Bar,
        Baz,
    }

    const _: () = {
        assert!(Words::Foo as u8 == 0);
        assert!(Words::Bar as u8 == 1);
        assert!(Words::Baz as u8 == 2);
    };
vs

    enum class Words : uint8_t {
     Foo = 0,
     Bar,
     Baz
    };

    static_assert(static_cast<uint8_t>(Words::Foo) == 0);
    static_assert(static_cast<uint8_t>(Words::Bar) == 1);
    static_assert(static_cast<uint8_t>(Words::Baz) == 2);


> I mean more in the sense of "where did you get this definition from."

In other words, you want to have a conversation with someone else by proxy? If that's the case, why not just go talk to the other people you'd really prefer to talk to?

> I'm still not seeing a difference

There is no difference. I recant what I said. This (strangely undocumented in the above link) functionality does, in fact, provide use of enums.

Curious addition to the language. Especially when you consider how unsafe enums are. When would you ever use it? It is at least somewhat understandable in C++ as it may be helpful to "drop down" to work with the standard enum construct in some migratory situations, but when do you use it in Rust?


> Why not just go talk the other people you'd really prefer to talk to?

sorry, I didn't mean to be so argumentative or negative. (The "You'll have to ask the Rust community. Rust lacks enums." did get me a little annoyed :p)

> This (strangely, undocumented in the above link) functionality does, in fact, provide the use of enums.

That link was from "the rust book", which is primarily for learning Rust. For more technical info, the reference is used: https://doc.rust-lang.org/reference/items/enumerations.html

> When would you ever use it?

I assume (like you said for c++) a good reason would be C/C++ interoperability, but it also probably makes things like serialization easier. Sometimes you just need a number (e.g. indexing an array) and it's simpler to be able to cast than to have a function that goes from enum -> int.

> Especially when you consider how unsafe enums are.

Do note though, going from int -> enum is an unsafe op which would require `std::mem::transmute`.


> sorry, I didn't mean to be so argumentative or negative.

Thanks, but I had no reason to think that the output of software has human qualities.

> That link was from "the rust book" which is primarily is for learning rust.

Learn Rust by keeping features of the language a secret? Intriguing.

> a good reason would be for c/c++ interoperate

I'm not sure that's a good reason. C++ doing it is questionable to begin with, but at least you can understand how a bad idea might have made it in many years ago when we didn't know any better.

> it also probably makes things like serialization easier.

It would, except you would never want to serialize the product of an enum as it means your program becomes forever dependent on the structure of the code. I mean, sure, you can remove the enum later if you are to change the code, so you're not truly stuck, but that kind of defeats the purpose. You may as well do it right the first time.

Enums are inherently unsafe. That you have to be explicit about converting the union to an integer at least gives some indication that you are doing something unsafe. It is not so unusual that Rust allows some kind of "escape hatch" to get at the actual memory. What is interesting, though, is that it also allows manipulation of what values are assigned by the enumerator as a first-class feature, which suggests that it promotes this unsafe behaviour. This is what is surprising and what doesn't seem to serve a purpose.


because you can just declare a custom type and have constants that use the custom type.


I am so tired of reading Java/C++/Python code that just slaps try/catch around several lines. To some it might seem annoying to actually think about errors and error handling line by line, but for whoever tries to debug or refactor it's a godsend. Where I work, try/catch for more than one call that can throw an exception or including arbitrary lines that don't throw the caught exception, is a code smell.

So when I looked at Go for the first time, the error handling was one of the many positive features.

Is there any good reason for wanting try/catch other than being lazy?


>the error handling was one of the many positive features.

sounds good on paper, but seeing "if err != nil" repeated a million times in golang codebases does not create a positive impression at all



Yes but the impression is largely superficial. The error handling gets the job done well enough, if crudely.


The ability to quickly parse, understand and reason about code is not superficial, it is essential to the job. And that is essentially what those verbose blocks of text get in the way of.


As an experienced Go dev, this is literally not a problem.

Golang code has a rhythm: you do the thing, you check the error, you do the thing, you check the error. After a while it becomes automatic and easy to read, like any other syntax/formatting. You notice if the error isn't checked.

Yes, at first it's jarring. But to be honest, the jarring thing is because Go code checks the error every time it does something, not because of the actual "if err != nil" syntax.


Just because you can adapt to verbosity does not make it a good idea.

I've gotten used to Java's getter/setter spam, does that make it a good idea?

Moreover, don't you think that something like Rust's ? operator would be a perfect solution for handling the MOST common type of error handling, aka not handling it, just returning it up the stack?

  val, err := doAThing()
  if err != nil {
    return nil, err
  }
VERSUS

  val := doAThing()?


I personally have mixed feelings about this. I think a shortcut would be nice, but I also think that having a shortcut nudges people towards using short-circuit error handling logic simply because it is quicker to write, rather than really thinking case-by-case about what should happen when an error is returned. In production code it’s often more appropriate to log and then continue, or accumulate a list of errors, or… Go doesn’t syntactically privilege any of these error handling strategies, which I think is a good thing.
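For the accumulate case, the standard library covers it these days with errors.Join (Go 1.20+). A sketch, with items and process as stand-ins:

    var errs []error
    for _, item := range items {
        if err := process(item); err != nil {
            errs = append(errs, fmt.Errorf("item %v: %w", item, err))
            continue // keep going instead of bailing on the first failure
        }
    }
    return errors.Join(errs...)
errors.Join returns nil when nothing was appended, so the happy path needs no special-casing.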


This. Golang's error handling forces you to think about what to do if there's an error Every Single Time. Sometimes `return err` is the right thing to do; but the fact that "return err" is just as "cluttered" as doing something else means there's no real reason to favor `return err` instead of something slightly more useful (such as wrapping the err; e.g., `return fmt.Errorf("Attempting to fob trondle %v: %w", trondle.id, err)`).

I'd be very surprised if, in Rust codebases, there's not an implicit bias against wrapping and towards using `?`, just to help keep things "clean"; which has implications not only for debugging, but also for situations where doing something more is required for correctness.


Well we are in a discussion thread about a language that does just that :)

I see two issues with the `?` operator:

1. Most Go code doesn't actually do

    return nil, err
but rather

    return nil, fmt.Errorf("opening file %s as user %s: %w", file, user, err)
that is, the error gets annotated with useful context.

What takes less effort to type, `?` or the annotated line above?

This could probably be solved by enforcing that a `?` be followed by an annotation:

  val := doAThing()?("opening file %s as user %s: %w", file, user, err)
...but I'm not sure we're gaining much at that point.

2. A question mark is a single character and therefore can be easy to miss whereas a three line if statement can't.

Moreover, because in practice Go code has enforced formatting, you can reliably find every return path from a function by visually scanning the beginning of each line for the return statement. A `?` may very well be hiding 70 columns to the right.


For the first point, there are two common patterns in rust:

1. Most often found in library code, the error types have the metadata embedded in them so they can nicely be bubbled up the stack. That's where you'll find `do_a_thing().map_err(|e| Error::FileOpenError { file, user, e })?`, or perhaps a whole `match` block.

2. In application code, where matching the actual error is not paramount, but getting good messages to an user is; solutions like anyhow are widely used, and allow to trivially add context to a result: `do_a_thing().context("opening file")?`. Or for formatted contexts (sadly too verbose for my taste): `do_a_thing().with_context(|| format!("opening file {file} as user {user}"))?`. This will automatically carry the whole context stack and print it when the error is stringified.

Overall, what I like about this approach is the common case is terse and short and does not hinder readability, and easily gives the option for more details.

As for the second point, what I like about _not_ easily seeing all return paths (which are a /\? away in vim anyways), is that special handling stands out way more when reading the file. When all of the sudden you have a match block on a result, you know it's important.


It might just be me, but I find both of those to be massively less readable. More terse is not the same as more readable (in fact, I find the reverse).

I'm a huge fan of keeping things simple; my experience has shown me that complex things have lots of obscure failure points, while simple things are generally more robust.


You always have the option of using a match block if you don't like those chained calls. But I do agree, it's a bit bolted on and kinda ugly.

> More terse is not the same as more readable (in fact, I find the reverse).

I generally agree, but I also find that "all explicit" also hinders readability because it tends to drown the nitty-gritty details. As always it's a matter of balance :) And I think that neither go nor rust are great in this matter as one is verbose and the other falls in the "keyword soup" with the chain call, the closure, and the format macro. I'm pretty sure something in between could be found.


Actually this is precisely same cadence as in good old C. As someone who writes lots of low-level code, I find Go's cadence very familiar and better than try-catch.


The idea that error handling is "not part of the code" is silly though. My impression of people that hate Go's explicit error handling is that they don't want to deal with errors properly at all. "Just catch exceptions in main and print a stack trace, it's fine."

Rust's error handling is clearly better than Go's, but Go's is better than exceptions and the complaints about verbosity are largely complaints about having to actually consider errors.


> The idea that error handling is "not part of the code" is silly though. My impression of people that hate Go's explicit error handling is that they don't want to deal with errors properly at all. "Just catch exceptions in main and print a stack trace, it's fine."

I'm honestly asking as someone neutral in this, what is the difference? What is the difference between building out a stack trace yourself by handling errors manually, and just using exceptions?

I have not seen anyone provide a practical reason that you get any more information from Golangs error handling than you do from an exception. It seems like exceptions provide the best of both worlds, where you can be as specific or as general as you want, whereas Golang forces you to be specific every time.

I don't see the point of being forced to deal with an "invalid sql" error. I want the route to error out in that case because it shouldn't even make it to prod. Then I fix the SQL and will never have that error in that route again.


The biggest difference is that you can see where errors can happen and are forced to consider them. For example imagine you are writing a GUI app with an integer input field.

With exception style code the overwhelming temptation will be to call `string_to_int()` and forget that it might throw an exception.

Cut to your app crashing when someone types an invalid number.

Now, you can handle errors like this properly with exceptions, and checked exceptions are used sometimes. But generally it's extremely tedious and verbose (even more than in Go!) and people don't bother.

There's also the fact that stack traces are not proper error messages. Ordinary users don't understand them. I don't want to have to debug your code when something goes wrong. People generally disabled them entirely on web services (Go's main target) due to security fears.


> But generally it's extremely tedious and verbose

Is it? In my experience it's very short, especially considering you can catch multiple errors. Do my users really need a different error message for "invalid sql" vs "sql connection timeout?" They don't need to know any of that.

> There's also the fact that stack traces are not proper error messages

I would say there's not a proper error message to derive from explicitly handling sql errors. Certainly not a different message per error. I would rather capture all of it and say something like "Something went wrong while accessing the database. Contact an admin." Then log the stack trace for devs


> Do my users really need a different error message for "invalid sql" vs "sql connection timeout?"

Yes! A connection timeout means it might work if they try again later. Invalid SQL means it's not going to fix itself.

But in any case, the error messages are probably the minor part. The bigger issue is about properly handling errors and not just crashing the whole program / endpoint handler when something goes wrong.

> I would say there's not a proper error message to derive from explicitly handling sql errors. Certainly not a different message per error. I would rather capture all of it and say something like "Something went wrong while accessing the database. Contact an admin." Then log the stack trace for devs

Ugh these are the worst errors. Think about the best possible action that the user could take for different failure modes.

"Contact an admin" is pretty much always bottom of the list because it rarely works. More likely options are "try again later", "try different inputs", "clear caches and cookies", "Google a more specific error".

Giving up on making an error message because you only have a stack trace and don't want to show it means users can't pick between those actions.

If you have written a "something went wrong" error I literally hate you.


> "Contact an admin" is pretty much always bottom of the list because it rarely works. More likely options are "try again later", "try different inputs", "clear caches and cookies", "Google a more specific error"

You're totally misunderstanding what I'm saying. If I have an error the user can act on, I'll make that error message for them. If they can't act on it, I will make a generic catcher and ask them to contact an admin because that's the only thing they can do. It is not my experience that any of these things you've written (try again later, try a different input) are applicable when an error comes up in my apps. It's always an unexpected bug a developer needs to fix, because we've already handled the other error paths. And the bug is not from "not explicitly handling the error."

> Think about the best possible action that the user could take for different failure modes.

What if contacting an admin IS the best possible action? Which is what I'm referring to.

In the case of invalid sql, your route should crash because it's broken. Or catch it and stop it. It's functionally the same thing.

You seem to be under the impression that having exceptions means people can't handle errors explicitly? It just removes the plumbing of manually bubbling up the error. It means you can do so MORE granularly. Also, there are some errors that are functionally the same whether you handle them explicitly or not. There are unexpected errors, and even Golang won't save you from that. Golang doesn't even care if you handle an error. It will compile fine. Even PHP will tell you if you haven't handled an exception.

> If you have written a "something went wrong" error I literally hate you.

Lol.


> You seem to be under the impression that having exceptions mean people can't handle errors explicitly?

Not at all! It's possible, but it's very tedious, and the lazy "catch it in main" option is so easy that in practice when you look at code that uses exceptions people actually don't handle errors explicitly.

> It means you can do so MORE granularly.

Again, it doesn't just mean that you can; it means that you will. And for proper production software that's not a good thing.

> There are unexpected errors

Only in languages with exceptions. In a language like Rust there are no unexpected errors; you have to handle errors or the compiler will shout at you.


That has nothing to do with having exceptions, Rust just has a good type system (something go doesn't have).

But again, handling an error doesn't necessarily prevent bugs. Just because you handled an error doesn't mean the error won't happen in prod. It just means when it does, you wrote a message for it or custom behavior. Which could be good, or it might be functionally as effective as returning a stack trace message. It depends on the situation.

For what it's worth, I've never seen people not handle errors that the user could do anything with. If it's relevant to the user, we handle it.


> That has nothing to do with having exceptions

It absolutely does. Checked exceptions sort of half get there too but they are quite rarely used (I think they are used in Android quite well). They were actually removed from C++ because literally nobody used them.

> handling an error doesn't necessarily prevent bugs.

I never made that claim.

> I've never seen people not handle errors that the user could do anything with.

We already talked about "something went wrong" messages. Surely you have seen one of those?


> We already talked about "something went wrong" messages. Surely you have seen one of those?

My point is that "something went wrong" messages are for errors the user CANT and SHOULDNT do anything with.


> In a language like Rust there are no unexpected errors

What? Of course there is. Rust added panic! exactly because unexpected errors are quite possible.

Unexpected errors, or exceptions as they are conventionally known, are a condition that arises when the programmer made a mistake. Rust does not have a complete type system. Mistakes that only show up at runtime absolutely can be made.


> What is the difference between building out a stack trace yourself by handling errors manually, and just using exceptions?

You cannot force your dependencies to hand you a stack trace with every error. But in languages that use exceptions a stack trace can be provided for "free" -- not free in runtime cost, but certainly free in development cost.


This one frustrates me a lot. Not getting a proper trace of the lib code that generated an error makes debugging what _exactly_ is going on much more of a PITA. Sure, I can annotate errors in _my_ code all day long, but getting a full trace is a pain.


Sure, I just don't think it's that significant. Humans don't read/parse code character-by character, we do it by recognizing visual patterns. Blocks of `if err != nil { }` are easy to skip over when reading if needed.


I agree, though I was really surprised to learn this when reading Go code. Much easier to skip over than I was expecting it to be


I find that knowing where my errors may come from, and that they are handled, is essential to my job; missing all that info because it is potentially in a different file altogether gets in the way.


> sounds good on paper, but seeing "if err!=nil" repeated million times in golang codebases does not create positive impression at all

Okay, but other than exceptions, whats the alternative?


> other than exceptions, whats the alternative?

This may be a crazy/dumb take, but would it be so wrong to allow code outside the function to take the wheel and do a return? Then you could define common return scenarios and make succinct calls to them. Use `returnif(err)` for the most typical, boilerplate replacement, or more elaborate handlers as needed.


The ? Operator in Rust?


More than just that, Result in general also prevents you from accessing the value when there is an error and from accessing an error when there is a value.
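You can build that shape in Go today, for what it's worth (a sketch, not an established library):

    type Result[T any] struct {
        value T
        err   error
    }

    func Ok[T any](v T) Result[T]        { return Result[T]{value: v} }
    func Err[T any](err error) Result[T] { return Result[T]{err: err} }

    func (r Result[T]) Unpack() (T, error) { return r.value, r.err }
But without pattern matching or a ? equivalent it mostly just relocates the `if err != nil`, which is the point made upthread.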


The absence of that safeguard in Go is a feature. It's used when the error isn't that critical and the program can merrily continue with the default value.

Of course, this is also scarily non-explicit.


Good point.

I only briefly tried Rust and was turned off by the poor ergonomics; I don't think (though I'm open to correction) that the Rust way (using `?`) is a 1:1 replacement for the use-cases covered by Go error management or exceptions.

Sometimes (like in the code I wrote about 60m ago), you want both the result as well as the error, like "Here's the list of files you recursively searched for, plus the last error that occurred". Depending on the error, the caller may decide to use the returned value (or not).

Other times you want an easy way to ignore the error, because a nil result gets checked anyway two lines down. Even when an error occurs, I don't necessarily want to stop or return immediately. It's annoying to the user to have 30 errors in their input and only find out about #2 after #1 is fixed, #3 after #2 is fixed ... and #30 after #29 is fixed.

Go allows these two very useful use-cases for errors. I agree it's not perfect, but with code-folding on by default, I literally don't even see the `if err != nil` blocks.
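To make the second use-case concrete, here's a rough sketch of the pattern, using errors.Join from Go 1.20+ (the function and the input are made up for illustration):

    package main

    import (
        "errors"
        "fmt"
        "strconv"
    )

    // Parse every line, keep going past bad ones, and hand the caller
    // both the partial result and every error that occurred.
    func parseAll(lines []string) ([]int, error) {
        var nums []int
        var errs []error
        for i, l := range lines {
            n, err := strconv.Atoi(l)
            if err != nil {
                errs = append(errs, fmt.Errorf("line %d: %w", i+1, err))
                continue
            }
            nums = append(nums, n)
        }
        // errors.Join returns nil when errs is empty.
        return nums, errors.Join(errs...)
    }

    func main() {
        nums, err := parseAll([]string{"1", "two", "3", "??"})
        fmt.Println(nums) // [1 3]
        fmt.Println(err)  // both problems reported at once
    }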

Somewhat related: In my current toy language[1], I'm playing around with the idea of "NULL-safety" meaning "Results in a runtime-warning and a no-op", not "Results in a panic" and not "cannot be represented at all in a program"[2].

This lets a function record multiple errors at runtime before returning a stack of errors, rather than stack-tracing, segfaulting or returning on the first error.

[1] Everyone is designing their own best language, right? :-) I've been at this now since 2016 for my current toy language.

[2] I consider this to be pointless: every type needs to indicate lack of a value, because in the real world, the lack of a value is a common, regular and expected occurrence[3]. Using an empty value to indicate the lack of a value is almost certainly going to result in an error down the line.

[3] Which is where there are so many common ways of handling lack of a value: For PODs, it's quite popular to pick a sentinel value, such as `(size_t)-1`, to indicate this. For composite objects, a common practice is for the programmer to check one or two fields within the object to determine if it is a valid object or not. For references NULL/null/nil/etc is used. I don't like any of those options.


> that the Rust way (using '?') is a 1:1 replacement for the use-cases covered by Go error management or exceptions.

It is a 1:1 replacement.

I think you're thinking of the case when you have many results, and you want to deal with that array of results in various ways.

> Result implements FromIterator so that a vector of results (Vec<Result<T, E>>) can be turned into a result with a vector (Result<Vec<T>, E>). Once an Result::Err is found, the iteration will terminate.

This is one such way, but there are others - https://doc.rust-lang.org/rust-by-example/error/iter_result....

This doesn't handle every case out there, but it does handle the majority of them. If you'd like to do something more bespoke, that's an option as well.


> Is there any good reason for wanting try/catch other than being lazy?

It's the best strategy for short-running programs, or scripts if you will. You just write code without thinking about error handling at all. If anything goes wrong at runtime, the program aborts with a stack trace, which is exactly what you want, and you get it for free.

For long-running programs you want reliability, which implies the need to think about and explicitly handle each possible error condition, making exceptions a subpar choice.


The huge volume of boilerplate makes the code harder to read, and annoying to write. I like Go, and I don't want exceptions per se, but I would love something that cuts out all the repetitive noise.


This has not been my experience. It doesn’t make the code harder to read, but it forces you to think about all the code paths—if you only care about one code path, the error paths may feel like “noise”, but that’s Go guiding you toward better engineering practices. It’s the same way JavaScript developers felt when TypeScript came along and made it painful to write buggy code—the tools guide you toward better practices.


> The huge volume of boilerplate makes the code harder to read, and annoying to write

That may be superficially true, but don't forget that our brains are built to optimize away repetitive work and boilerplate. We can basically use "strcpy" and "string_copy" interchangeably; we are so used to all of this that even if it's repeated a billion times it can be processed fast.


The example in the article is a good one. Result and Optional as first class sum types


That just changes the boilerplate from if's to match's.


See the example with the `?` operator: https://github.com/borgo-lang/borgo?tab=readme-ov-file#error...

The main benefits of a Result type are brevity and the inability to accidentally not handle an error.


Yes, but that isn't necessarily a feature of option types. Couldn't similar sugar for the tiresome Go pattern achieve similar benefits?


Perhaps, but there have been several proposals along those lines and nobody seems capable of figuring out a sensible implementation.

A funny drawback of the current Go design that a Result type would solve is the need to return zero values of all the declared function return types along with the error: https://github.com/golang/go/issues/21182.
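For anyone who hasn't hit it, a contrived sketch of the zero-value dance (the types and names here are invented):

    package main

    import (
        "errors"
        "fmt"
    )

    type User struct {
        Name   string
        Logins int
    }

    var errNotFound = errors.New("not found")

    func lookupUser(id string) (User, int, error) {
        if id == "" {
            // Every declared return needs a zero value spelled out just to
            // get the error back to the caller. With a Result type this
            // would collapse to a single error return.
            return User{}, 0, fmt.Errorf("looking up user %q: %w", id, errNotFound)
        }
        u := User{Name: id, Logins: 1}
        return u, u.Logins, nil
    }

    func main() {
        _, _, err := lookupUser("")
        fmt.Println(err)
    }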


Exactly. Yes, I understand why `?` is neat from a type POV, since you specifically have to unwrap an optional type, whereas in Go you can ignore a returned error (although linters catch that). So at the end of the day it's much the same boilerplate, one with `?`, the other with `err != nil`.


> Is there any good reason for wanting try/catch other than being lazy?

In a hot path it’s often beneficial to not have lots of branches for error handling. Exceptions make it cheap on success (yeah, no branches!) and pretty expensive on failure (stack unwinding). It is context specific but I think that can be seen as a good reason to have try catch.

Now of course in practice people throw exceptions all the time. But in a tight, well controlled environment I can see them as being useful.


> In a hot path it’s often beneficial to not have lots of branches for error handling.

This is true but the branch isn't taken unless there's an error in Go.

Given that the Go compiler emits the equivalent of `if (__unlikely(err != nil)) {...}` and that any modern CPUs are decently good at branch prediction (especially in a hot path that repeats), I find it hard to believe that the cost would be greater than exceptions.


Yes, it's the ability to unwind the stack to an exception handler without having to propagate errors manually. Go programs end up doing the exact same thing as "try/catch around multiple lines" with functions that can return an error from any point, and every caller blindly propagating the error up the stack. The practice is so common that it's like semicolons in Java or C, it just becomes noise that you gloss over.


Go programs generally do not "blindly propagate the error up the stack". I've been writing Go since 2011 and Python since 2008, and for the last ~decade I've been doing DevOps/SRE for a couple of places that were both Go and Python shops. Go programs are almost universally more diligent about error handling than Python programs. That doesn't mean Go programs are free from bugs, but there are far, far fewer of them in the error path compared to Python programs.


This matches my experience _hard_; there is simply no comparison in practice. Go does it better nearly every time


The difference is that all code paths are explicitly spelled out and crucially that the programmer had to consider each path at the time of writing the code. The resulting code is much more reliable than what you end up with exceptions.


I understand your sentiment. Error codes vs exceptions will be debated until the year 3000, and beyond.

One point to consider with exceptions: it is "impossible" to ignore an exception. The function implementation is telling (nay: dictating[!] to) the caller: you cannot ignore this error. At the very least, you must catch it, then discard it.

Another point that is overlooked in these discussions: exceptions and error codes can, and do, peacefully co-exist. Look at Python, C#, and Java. In the standard library of all three, there are cases where error codes are used and cases where exceptions are thrown.

Another thing about exceptions, especially in enterprise programming: you can attach a human-readable error message. That is not possible when only returning error codes.

EDIT

Inspired by this comment: https://news.ycombinator.com/item?id=40220147

I forgot about exception stack traces, including "chained" exceptions. These are incredibly powerful when writing enterprise software that commonly has a stack 50+ levels deep.


> One point to consider with exceptions: It is "impossible" to ignore an exception. The function implementation is telling (nay: dictating[!] to) the caller: You cannot ignore this error code. At the very least, you must catch, then discard.

Error returns are no different, assuming a proper implementation like the Result type in Rust. The difference is, unhandled error returns are found at compile time but unhandled exceptions only show up at runtime, when it's too late.

> Another point this is overlooked in these discussions: Exceptions and error codes can, and do, peacefully co-exist.

Both Go and Rust have panics, which are basically exceptions that are generally not supposed to be caught. They are used for unrecoverable cases like running out of memory or programmer mistakes. There's otherwise no reason to mix the two.

> Another thing about exceptions, especially in enterprise programming, you can add a human readable error message. That is not possible when only returning error codes.

I don't really know what you mean, it's equally possible in both cases. If anything, the error return implementation that Go uses is probably the most optimal out there when it comes to error messages. Most look like:

    return nil, fmt.Errorf("opening file %s as user %s: %w", file, user, err)
Whereas most exception code will just dump a stacktrace since that's the default.


Do you really do that in practice, or do you just blindly go 'if err != nil return nil, err'?

Because fundamentally the function you called can return different errors at any point, so if you just propagate the error, the code paths are in fact not spelled out at all: the function one level up in the hierarchy has to deal with all the possible errors from two calls down, which are not transparent at all.


In Go, no one really blindly returns nil, err. People very clearly think about errors: if an error may need to be acted on up the stack, people will either create a named error value (e.g., `ErrInvalidInput = errors.New("invalid input")`) or a named error type that downstream users can check against. Moreover, even when propagating errors, many programmers will attach error context: `return nil, fmt.Errorf("searching for the flux capacitor %s: %w", fluxCap, err)`. I think there's room for improvement, but Go error handling (and Rust error handling, for that matter) seems to be eminently thoughtful.
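A rough, self-contained sketch of that pattern (the names are borrowed from the snippets above; the wiring is invented):

    package main

    import (
        "errors"
        "fmt"
    )

    // A named error value that downstream callers can test for.
    var ErrInvalidInput = errors.New("invalid input")

    func search(fluxCap string) error {
        if fluxCap == "" {
            // %w wraps the sentinel so it survives the added context.
            return fmt.Errorf("searching for the flux capacitor %q: %w", fluxCap, ErrInvalidInput)
        }
        return nil
    }

    func main() {
        if err := search(""); errors.Is(err, ErrInvalidInput) {
            // The caller can still act on the specific error up the stack.
            fmt.Println("bad input:", err)
        }
    }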


Coming from dotnet, I rather like the Go pattern as you've described it. I would normally catch an error and then write out a custom message with the relevant information anyway, and I hate the ergonomics of the try{}catch(Exception ex){} syntax. And yes, it is tempting to let the try block encompass more code than it really should.


Yeah, I was pretty skeptical when they added the error wrapping stuff to the standard library, and it still feels a little too squishy, but in practice it works very well. I prefer Go’s error handling even to Rust’s much more explicit error handling.


I don't see how it's possible to do it blindly unless the code gets autogenerated. If you're typing the `if err != nil` then you've clearly understood that an error path is there.

There's no requirement for the calling function to handle each possible type of error of the callee. It can, as long as the callee properly wrapped the error, but it's relatively rare for that to be required. Usually the exact error is not important, just that there was one, so it gets handled generically.


Their point was that writing `if err != nil return nil, err` does the same thing that stack traces from exceptions do, but with even less information. And if that's most of a Golang codebase's error handling, it's not a compelling argument against exceptions.


Try-blocks with ~one line are best practice on codebases I have worked with. The upside is that you can bubble errors up to the place where you handle them, AND get stack traces for free. As a huge fan of Result<T, E>, I have to admit that that's a possible advantage. But maybe that fits your definition of lazy :).


> try/catch for more than one call that can throw an exception or including arbitrary lines

You generally need to skip all lines that the exception invalidates. That's why it's a block or conditional.


I agree; I don't really understand everyone's issue with `err != nil`. It's explicit, and linters catch uncaught errors. Yes, the `?` operator in Rust is neat, but you end up with a similar issue of just matching errors throughout your codebase instead of doing `err != nil`.


The problem is that you're forced to have four possible states

1. err != nil, nondefault return value

2. err != nil, default return value

3. err == nil, nondefault return value

4. err == nil, default return value

when often what you want to express only has two: either you return an error and there's no meaningful output, or there's output and no error. A type system with tuples but no sum types can only express "and", not "or".
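A contrived illustration (nothing here is from a real codebase): the signature itself never rules the extra states out; it's purely convention that callers don't see them.

    package main

    import (
        "errors"
        "fmt"
    )

    // The (value, error) tuple happily expresses "a value AND an error",
    // even though callers are expected to treat the two as mutually exclusive.
    func weird() (int, error) {
        return 42, errors.New("also an error")
    }

    func main() {
        v, err := weird()
        fmt.Println(v, err) // 42 also an error
    }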


this is true, but not a problem. Go's pattern of checking the error on every return means that if an error is returned, that is the return. Allowing routines to return a result as well as an error is occasionally useful.


I mean, I wish Go had sum types, but this really isn’t a problem in practice. Every Go programmer understands from day 0 that you don’t touch the value unless the error is nil or the documentation states otherwise. Sum types would be nice for other things though, and if it gets them eventually it would feel a little silly to continue using product types for error handling (but also it would be silly to have a mix of both :/).


Yeah, also you almost always need to annotate errors anyway (e.g., `anyhow!`), so the ? operator doesn’t really seem to be buying you much and it might even tempt people away from attaching the error context.


You can print or log the stack trace of the exception in python.


I've never needed either.

Try/catch is super confusing because the catch is often far away from the try. And in Python I just put try/catch around big chunks of code just in case for production.

I think Go is more stable and readable because they force you not to use the lazy unreadable way of error handling.

Enums I honestly never used in Go either, not even the non-type-safe ones.

But I'm also someone who has used interfaces in Go maybe 4 times in years and years of development.

I just never really need all those fancy things.


I think what this comment is missing is any sort of analysis of how your experience maps to the general Go user, and an opinion on whether, even though you've never needed either, you think they could have provided any benefit when used appropriately.

For example, an option type combined with enums can ensure return values are checked, by providing a compile-time error if a case is missing (as shown in the first few examples of the readme).


I know it can, the compiler can do one more "automatic" unit test based on the type checking system.

But they decided not to add enums because it conflicted and overlapped too much with interfaces.

I just want to add "my" experience that personally, yes maybe you can argue enums are nice, but I never missed them in Go.

I personally agree with the Go team's reasoning, and for me it would be a step back if they listened to the herd that does not take all sides of the story into consideration but just keeps pushing enums.

Try/catch is just a really bad thing; all my "hacky solution" alarm bells go off if you want to change error handling to giant try/catch blocks.


> But they decided not to add enums because it conflicted and overlapped too much with interfaces.

I'm very curious now about how it might conflict and/or overlap with interfaces.

To reach the goal of an enumeration type (and all the strong type-checking that that brings with it), enums could look as simple as:

    type DayNames enum {
       Sunday
       Monday
       Tuesday
       Wednesday
       Thursday
       Friday
       Saturday
    }
    ...
    func isFunDay (dow DayNames) {
       // This must fail to compile, because there is an unhandled enumeration
       switch dow {
          case Sunday: ...
          case Monday: ...
          case Tuesday: ...
          case Thursday: ...
          case Friday: ...
          case Saturday: ...
       }
       ...
    }
    ...
    isFunDay (0)   // Compile failure
    var x int
    isFunDay (x)   // Compile failure
And I don't see how that conflicts or overlaps with interfaces.


I think something like when a variable type in an enum was an interface it would destroy the galaxy or something, not 100% sure, would have to look it up... 1 sec.

Here you Go: https://go.dev/doc/faq#variant_types


> I think something like when a variable type in an enum was an interface it would destroy the galaxy or something,

Hah :-)

> Here you Go: https://go.dev/doc/faq#variant_types

Not quite the same: Variants are a constrained list of types. Enums are a constrained list of values.

Let's assume that I agree with the reasoning for not having a constrained list of types.

It still doesn't tell me why we can't have a constrained list of values.


I largely agree with your sentiment. Go’s simplicity is what makes it such a useful tool for me. It’s worth protecting, and that means setting a very high bar for proposals that add new things to the language.

However, there are 2 things I would be enthusiastic about if they got included in the language:

1. Having `?` as syntactic sugar for `if err != nil { ... }`. It would make code more easily readable, and I think that is a benefit for programmers trying to keep things simple.

2. Sum types. I've had a few cases where these would've been very useful. I consider the `var state customtype = iota` pattern a bit too easy to make mistakes with (e.g. exhaustive checking of options); see the sketch below.
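A small, invented example of the iota pitfall:

    package main

    import "fmt"

    type State int

    const (
        Idle State = iota
        Running
        Done
    )

    func describe(s State) string {
        // Nothing forces this switch to be exhaustive: forgetting Done,
        // or passing State(42), compiles fine and falls through silently.
        switch s {
        case Idle:
            return "idle"
        case Running:
            return "running"
        }
        return "???"
    }

    func main() {
        fmt.Println(describe(Done))      // ???
        fmt.Println(describe(State(42))) // ???
    }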

Like generics, I hope that when that happens, they take a very deliberate approach on doing it.


Your comment could have been a nice opinion that proves to a drive-by reader that needs can differ drastically between programmers.

But you ruined it with "fancy things" which shows offhand disregard and disrespect.

A question like "what do you need these features for?" would have been a better contribution to the forum.


I actually really do have disrespect for them. I'm in a constant fight against developers who want to rewrite code into almost the same code, but "only using language features from the Advanced book".

I also wanted to add that I used inheritance only ONCE in all my years of writing Python; in all the other millions of lines of code, inheritance was not the best solution.

This is my daily struggle as a CTO. People using waaayy too many "fancy" features of languages making it totally unreadable and unmaintainable.

It's their ego: they want to show off how many complex language features they know. And it's ruining my codebases.


    > This is my daily struggle as a CTO
This is a nice humblebrag. Why does it matter that you are a CTO for this comment? It doesn't. It would be better written as: "This is my daily struggle with my team."


Haha I actually thought about that maybe 8 times haha I just wrote it and pressed send but exactly that I wanted to edit later XD.


It's one thing to want your devs to produce readable code -- as a former CTO I also spent significant effort in teaching people that -- but it's completely another to be a curmudgeon and directly disregard valuable programming tools like the sum types.

Not sure why you are conflating both. Also inheritance was known to be the wrong tool for the job at least 15 years ago, maybe even 20. Back then people wrote Java books that said "prefer composition over inheritance" so your analogy didn't really land.

Everyone who uses sum types in production code agrees they reduce bugs.

Maybe it's time for you to retire.


About a year ago, I tried writing a language that transpiled to Go with many of the same features, in my research I found other attempts at the same idea:

- braid: https://github.com/joshsharp/braid

- have: https://github.com/vrok/have

- oden: https://oden-lang.github.io/



I am genuinely appreciative that a post like this, a GitHub link to a semi-slow moving, but clearly well considered and sincerely developed programming language, can not only remain on the front page of HN, but can generate a diverse and interesting group of discussions. It’s material like this that keeps me coming back to the site. I’m not sure if anyone needed this comment, but I’m sure my posting it isn’t going to hurt.


[flagged]


Nor yours.


This kind of pandering horseshit from throwaway accounts add no value and should be downvoted on sight. Especially if you bake self-deprecation into it, "uwu I'm... I'm not sure if a-anybody is going to like my comment..."

Just let the good conversation unfold without patronizing meta-commentary. This isn't Reddit.


I observed the same thing happen for YouTube videos in the last couple of years and it drives me crazy. I don't need 20 different comments fishing for likes that try hard to compliment something, and point out how this creator does something that others don't. They're everywhere!


Just an aside, this is my only account, which I humorously thought I would name throwaway when I made it 5 years ago. I have repeatedly regretted the choice, though even with a better handle I still probably would have posted this. Also, it is not in my typical wheelhouse of comments, but I was on my 3rd scotch and soda, so it came out relatively coherent.


Great! Something I've always wanted.

I'd love to be able to use a bit more type-y Go such as Borgo, and have a Pythonesque dynamic scripting language that latches onto it effortlessly.

Dynamic typing is great for exploratory work, whether that's ML research or developing new features for a web app. But it would be great to be able to morph it over time into a more specified strongly typed language without having to refactor loads of stuff.

Like building out of clay and firing the parts you are happy with.

Could even have a three step - Python-esque -> Go/Java-esque -> Rust/C++esque.


> Like building out of clay and firing the parts you are happy with.

> Could even have a three step - Python-esque -> Go/Java-esque -> Rust/C++esque.

We do exactly that with Common Lisp. It compiles to different languages/frameworks depending on what we require (usually SBCL is more than enough, but for instance for embedded or ML we need another step). All dev (with smaller data etc.) is in SBCL, so with all the advantages.


Is there somewhere I could read more about this? I've always wanted to learn Lisp but lacked a specific need for it.


We don't necessarily do good Lisp things; we use Common Lisp because macros and easy DSLs allow us to use CL for everything we do while using, for us, the best dev and debugging env in the world. So we want to do the exploration, building, and debugging all in CL and after that compile, possibly, to something better depending on the target. We trade a little bit of inconvenience for that (as in: leaky abstractions), but it's been worth it for the past 30+ years.

For learning CL, the Lisp subreddit is good and has the current best resources on it. Lately there is a guy making a GUI library (CLOG) who is doing good work spreading general Lisp love by making it modern, including tutorials. And there are others too.


Dart? Version 1 was a lot like JavaScript/TypeScript in one spec (a dynamic language with optional unsound typing). Version 2 uses sound typing, but you can still leave variables unannotated (and the compiler will infer type "dynamic") for scripts.


Sounds like JavaScript and typescript would be a good fit for you. Highly expressive, dynamic and strongly typed, and highly performant both on server side and within the browser.


I do like JavaScript but it strikes a weird balance for me where it's a bit too easy to write and a bit too verbose so I tend to end up with hard to maintain code. Feels good at the start of a project but rarely a few weeks in. Also not a fan of the node ecosystem, I try to use deno where I can (maybe that would be bun these days).


perhaps rescript [https://rescript-lang.org/] even more than typescript


I like the idea but in all honesty I have difficulty imagining it working in practice. Once your python code is stable (i.e. You've worked out 99% of the bugs you might have caught earlier with strict type checking) would there be any incentive to go back and make the types more rigid or rigorous? Would there be a non-negligible chance of introducing bugs in that process?


by the time you have your code in its final state (i.e. you're done experimenting) and shaken out the bugs, your types are mostly static; they're just implicitly so. adding annotations and a typechecker helps you maintain that state and catch the few places where type errors might still have slipped through despite all your tests (e.g. lesser-used code paths that need some rare combination of conditions to hit them all but will pass an unexpected type through the call chain when you do). it is very unlikely that you will introduce bugs at this point.


I agree it's a bit of a pipe dream. I'm more thinking of performance here, e.g. web services using Django. You could start off in dynamic/interpreted land and have a seamless transition to performant compiled land. Also lets you avoid premature optimisation since you can only optimise the hot paths.

Also types are self documenting to an extent. Could be helpful for a shared codebase. Again Python just now getting round to adding type definitions.

At the end of the day good tooling/ecosystem and sheer developer hours is more important than what I'm suggesting but it would be nice anyway. I dream about cool programming languages but I stick to boring for work.


py2many does python-esque to both Go and Rust.

The larger problem is building an ecosystem and a stdlib that's written in python, not C. Use ffi or similar instead of C-API.


I like the graph at the top of the readme as a summary.

The rest of the readme focuses on the delta between Go and Borgo. It doesn't say much about the delta between Borgo and Rust.

I think the delta there is mainly no lifetimes/ownership?


No traits, const generics, probably no turbofish equivalent for when inference struggles.


Most importantly: Null pointers still exist (yes I know they technically exist in unsafe Rust, to head off any pedants)

Also: No `?` operator



Oh! Cool somehow I missed that


Pedants would say that null pointers exist in safe Rust too.


This seems to achieve a similar type safety<->complexity tradeoff as Gleam [1] does. However, Gleam compiles to Erlang or JavaScript, which require a runtime and are not as performant as Go.

I wonder if Borgo's compiler messages are as nice as Rust's/Gleam's, though.

[1] https://gleam.run/


> are not as performant as Go.

Ymmv, you might be surprised if you actually bothered to benchmark. Depending on the workload, either JS or erlang can ultimately turn out on top.

They're all optimized to a degree that each has a niche it excels at and leaves the others in the dust.

Even with a heavily skewed benchmark like TechEmpower fortunes (https://www.techempower.com/benchmarks/#hw=ph&test=fortune&s...) you end up with JS getting ahead of Go on raw requests. And not just slightly, but by 1.5 times the throughput.

In other benchmarks, Golang does indeed win out with similar or even bigger advantages... so the only thing you can ultimately say is... that it depends. It's a different story if you choose other languages, though. But JS, Golang and Erlang are all extremely optimized for their ideal use case.


Well hold on a second. The JS impl that you're talking about uses a minimal custom runtime (https://github.com/just-js/just) that you would never use—it barely implements JS. It's basically only used for this benchmark. It doesn't make sense to compare that to Go when we're talking about Javascript vs. Go performance.

Scroll down to the "nodejs" entry for a more realistic comparison.


Feel free to switch to JSON serialization and the same pattern repeats with uWebSockets.js.

I'm not saying that JS is "just as fast as golang" generally. My argument is specifically that it's optimized to a degree that there are cases in which JS, an interpreted language, does end up on top.

The same applies to Erlang and its optimization for efficient concurrency.

On average you'll likely get better performance with Go, but depending on the workload the results can differ.


I'd add Java to that list as well. JIT compilers have come a long way, and OpenJDK was on par with Rust for performance on the last project I tried porting.


Go has an amazing runtime and tool ecosystem, but I’ve always missed a little bit more type safety (especially rust enums). Neat!


This and pub/private modifiers for structs instead of letter casing is all I've ever wanted.


I love Go's letter casing. It's such a neat way to remove cruft.


It also adds cruft. Public struct members usually need to be converted to lowercase for JSON. Hence the struct tags.
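The usual shape of it, with made-up fields:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // Fields must be exported (capitalized) for encoding/json to see them,
    // so tags are needed to get the conventional lowercase keys.
    type User struct {
        Name  string `json:"name"`
        Email string `json:"email"`
    }

    func main() {
        b, _ := json.Marshal(User{Name: "Ada", Email: "ada@example.com"})
        fmt.Println(string(b)) // {"name":"Ada","email":"ada@example.com"}
    }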


JSON is just one tiny part of most programs, sitting on the edge where the program interacts with other programs; it doesn’t permeate the entire codebase.

Structure privacy, OTOH, does. Count me in as someone who really enjoys the case-based approach. It’s not the only one which could work, but it does work.


The single most productive habit I picked up in the last few years is to always use exactly the same name for the same entity across source files, config files, database entries, protocol fields, etc.


That’s funny, I did it your way for years and ended up considering it a big mistake.

Today I use idiomatic names - MyName in Go, myName in JS/JSON, my_name in SQL. There are many reasons but generally speaking, for me, it’s less effort and code is more readable.

Curious what your rationale is?


I just ran into this earlier today - it makes navigating code with grep more difficult.

I had a YAML file using `some_property_name`, which was turned into `SomePropertyName`, and it's a small annoyance. It's not a huge deal, but it adds friction where some languages have none. (Or alternately, getting reordered in a separate system like `property_name_some`.)


The issue that I ran into is dealing with lot of code across different languages, like plpgsql, go and JavaScript.

Especially with database code, something that's fine in Go, like EmployeeID, ends up being employeeid in SQL. You can use underscores in Go but that can trigger other behaviours. If you mix your own JSON with JSON from other sources, you get inconsistent capitalisation. And so on.

And when you have hundreds or thousands of identifiers like this, it gets really hard to read.

You can of course capitalise in SQL - even though it's not semantic - but that becomes inconsistent, too. And then of course the lifecycles of each of these things can be different, which adds another layer of complexity - maybe you refactor your Go code before you upgrade the database, so you end up with two identifiers anyway.

Ultimately I switched to using idiomatic names everywhere, and I really haven't looked back. The boundaries between these systems tend to be pretty clear, as mentioned by someone else, so finding things shouldn't be hard regardless of what they're named.

It certainly takes slightly longer to deal with idiomatic names - but you read code way more than you write it, and it's easier to read idiomatic code.


Ease of grepping is one benefit, but for me the main benefit is the reduced cognitive overhead when reading code.


I do it the same way as you. My logic was that I can tell where the value is coming from. I would use the same type but a different name.

Is it coming from a client via http? Then it needs to be checked and saved

The database? then it has an id field which could be null in other cases.

Or I just create a new instance?

Naming things is hard tho, sometimes I do think I should just name them the same and stop caring. I'm not sure if I gain productivity but it gives me some comfort that I can instantly tell the source of the data.


I have the benefit of writing mostly C++ where there is really no globally agreed idiomatic naming. At $JOB we use snake_case naming for C++ functions and objects (as opposed to types), which also matches the python naming convention we use.

Snake case is not idiomatic for xml, but we still happen to use it for leaf config options.

The main benefit is reducing ambiguity to what maps to what across files. Ease of grepability is also an advantage.


And database col name, and validation and...

The moment you integrate with a third party, your US-centric zip_code field is suddenly coming over the wire as postCode. The conversions are going to happen regardless; at least in Go I can define all of that conversion with ease in one place.


> It's such a neat way to remove cruft.

I don't disagree; the problem I have with it is that I have to pay for that up front and factor it into my design immediately. This also combines with the fact that the namespace is very flat with no hierarchy, so choosing good public names is something I feel like I spend way too much time on.

Go is the only language that causes me to pull out a thesaurus when trying to name methods and struct members. It's kinda maddening. Although, after going through this exercise, I end up with code that reads like English. I just wish I could refactor my way into that state, rather than having to try to nail it up front.


Choosing names is something that often ends up in the bucket of "oh, I wish I had thought a little more before sharing these names."


>I just wish I could refactor my way into that state, rather than having to try to nail it up front.

Procrastination looms. :o


ChatGPT is really good at suggesting 20 names for <vague description of thing>. Try it out!


Go's semantic use of case is objectively bad because most of the world's scripts do not have the concept of it. For example, ideographic scripts, as used in East Asian countries, do not have capitalization. This means programmers in many parts of the world cannot express identifiers in their native tongue.


It looks like something was lost in the middle of your comment. You open with something about it being objectively bad, but then it jumps to something about how it is subjectively bad. What was omitted?


How is "i cant name variables in my native language" subjective?


I don't really think the sarcastic tone was called for, but the previous poster is right. "I can't name variables in my native language" is objective, but whether or not that's bad is subjective.


Very true but “bad” is always subjective so at least they came up with an evaluation that is binary — either you have capitalization in your language or you don’t, either the analogy fits or it doesn’t.

(Some linguist will point out that Bongo-Bongo has half-capitalization, or half has capitalization).


True, as a non-native speaker: naming variables in a native language (that's not English) is objectively bad.


So if you have a concept that doesn't have an equivalent in English you just kinda translate it and add a comment for other people of your language to understand what it is?


Sure. And for a concept that's so foreign there is no English equivalent, I hope there's plenty of documentation. I mean, to each their own, but for me, a software team using native language for variables is a red flag.


You could also transliterate it into the English alphabet. Looks ugly but saves you from having to switch your keyboard layout.


I dislike it very much, especially with codebases that have lots of acronyms, e.g. aviation. Having to change an acronym from upper to lowercase just sucks.


In that case, maybe try: `_ACRNYM`


Function names with _ ? That’s not for me :)


I spent quite a few hours tracking down bugs due to miscased struct fields unfortunately. Strongly prefer explicitness over implicitness


The casing rules are quite explicit and enforced by the compiler. A build would have immediately failed on whatever mismatch you had. A few hours and you didn't even think to try compiling it?

I'm guessing you are talking about something else entirely, like, perhaps, decoding JSON into a struct using reflection and encountering a situation where the field names didn't match? Indeed, implicitness can bite you there. That's true in every language. But, then again, as you prefer explicitness why would you be using that approach in the first place?
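For example (a contrived sketch, not the actual bug above): an unexported field is silently skipped by encoding/json, with no compile-time or runtime complaint.

    package main

    import (
        "encoding/json"
        "fmt"
    )

    type config struct {
        Host string `json:"host"`
        port int    // lowercase: encoding/json can't see it
    }

    func main() {
        var c config
        _ = json.Unmarshal([]byte(`{"host":"example.com","port":8080}`), &c)
        fmt.Printf("%+v\n", c) // {Host:example.com port:0}
    }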


The rules are explicit, but the actual changes in code are very small and unique to this language (or at least unlike the languages I had used before). It's one of those things that you can forget about, because it's a small difference in code and arguably isn't explicit.

I forget what it was, but basically my code wasn’t working the way I thought it should and it was solely due to a lowercased struct field. It happened twice where I spent at least a little while trying to figure it out.

And yeah I would guess that I tried to compile. Would be very dumb if I hadn’t although wouldn’t be the dumbest thing I’ve ever done


Which IDE do you use? Mine would flag this as an error pretty quickly.


Goland. How would an IDE know that you intended a struct field to be public or not?


Any non-private usage of a private struct is a compile error.

If you're using goland, it would report this as an error as it is effectively compiling your code as you write it.

Also, autocomplete wouldn't work when you tried to use the private struct.


I like the terseness of it but having to refactor just because I change visibility is a bit stupid.

I never wrote ObjC but didn’t they use + and - (and nothing) as visibility modifiers?


Same. It's ugly, it breaks acronyms, it doesn't work in all (spoken) languages, it doesn't work well with serialization, etc.

Frankly if they insisted on visibility being part of the name, I would have preferred they go with the age-old C++/ancientPHP tradition of using a _ prefix for private members.


I would kill for these language features in Go.


That's what C# offers (except true* Rust-style enums).

The latter will be there in one of the future versions and is in an active design phase, which luckily focuses on tagged-union implementation strategy.

With that said, you can already easily use one of the Option/Result libraries or write your own structure - switching on either is trivial (though you have to sometimes choose between zero-cost-ness and convenience).

It already has struct generics, iterator expressions (LINQ), switch pattern matching, good C interop and easy concurrency primitives (no WaitGroup nonsense, also has Channel<T>). Oh, and also portable SIMD, C pointers and very strong performance in general.

* True as in proper tagged unions with either a tag or another type of discriminant and aliased layout, instead of tag + flattening all constituent parts into a one jumbo struct. Or making it an opaque pointer to a box (like Java does, or C# if you go inheritance/interface route). These don't count. I'm curious about Borgo's lowering strategy for enums, but given Go doesn't have those, I'm not holding my breath and expecting something like F# struct unions at best.


As someone who is "C# curious," but haven't been able to keep up with all the horrific number of rebrands of the "new, open, .net, core, framework", what is the C# equivalent of $(for GOOS in linux darwin; do for GOARCH in amd64 arm64; do dotnet build -o thing_${GOOS}-${GOARCH}; done; done)?


That's spelled `dotnet publish -r ${GOOS}-${GOARCH}` with the new ahead-of-time (branded Native AOT) compilation features installed and enabled.

It isn't without a whole list of caveats if you're used to Go's way of doing things though. See <https://learn.microsoft.com/en-us/dotnet/core/deploying/nati...> for details.


You're very kind, thank you

For others wanting to play along at home:

  $ docker run --name net8 --rm -it mcr.microsoft.com/dotnet/sdk:8.0-jammy-arm64v8 bash -c '
  cd /root
  dotnet new -d -o console0 console
  cd console0
  dotnet publish --nologo --self-contained -v d -r osx-arm64 -o console0-darwin-arm64
  sleep 600
  '
although it didn't shake out

  $ docker cp net8:/root/console0/console0-darwin-arm64/console0 ./console0
  $ ./console0
  Killed: 9
I tried with and without --self-contained and the biggest difference was that self-contained emitted a bazillion .dll files while without it just emitted the binary. In case the context isn't obvious, $(dotnet new console) is a skeleton for the infamous WriteLine("Hello, World") without doing crazy whacko stuff.


For simple JIT-based but fully self-contained binaries, without adding any properties to .csproj, the command is a bit of a mouthful and is as follows:

    dotnet publish -p:PublishSingleFile=true -p:PublishTrimmed=true -o {folder}
(you can put the -p: arguments in .csproj too, as XML elements inside <PropertyGroup>...)

This will give you JIT-based "trimmed" binary (other languages call it tree shaking). You don't need to specify RID explicitly unless it's different from the one you are currently using.

For simple applications, publishing as AOT (without opting in the csproj for that) is

    dotnet publish -p:PublishAot=true -o {folder}
Add -p:OptimizationPreference=Speed and -p:IlcInstructionSet=native to taste.

Official docs: https://learn.microsoft.com/en-us/dotnet/core/tools/dotnet-p...


You're also very kind. I realized that it's possible there were a bazillion "watch out"s in the docs, and I was just trying the <PublishAot>true trick when I saw your comment.

However, it seems this brings my docker experiment to an abrupt halt, and is going to be some Holy Fucking Shit to re-implement that $(for GOOS) loop in any hypothetical CI system given the resulting explosion

  /usr/bin/sh: 2: /tmp/MSBuildTemproot/tmp194e0a13157b47889b36abb0ce96cd2d.exec.cmd: xcodebuild: not found


You need the OS you are building for in order to compile an AOT binary - it depends on OS-provided tooling (MSVC on Windows, Clang on macOS and Linux, and a system-provided linker from each respective system). In fact, once ILC is done compiling IL to .a or .lib, the native linker will just link together the C# static lib, a GC, then runtime/PAL/misc and a couple of system dependencies into a final executable (you can also make a native library with this).

Cross-architecture compilation is, however, supported (but requires the same extra dependencies as e.g. Rust).

If you just want to publish for every target from a single docker container (you can't easily do that with e.g. Rust as noted), then you can go with JIT+single-file using the other command.

Keep in mind that Go makes concessions in order for cross-compile to work, and invested extra engineering effort in that, while .NET makes emitting "canonical" native binaries using specific system environment a priority and also cares a lot about JIT instead (there aren't that many people working on .NET compiler infrastructure, so it effectively punches above its weight bypassing Go and Java and matching C++ if optimized).


The sibling comment pretty much sums it up. But if you want more detail, read on:

Generally, there are three publishing options that each make sense depending on scenario:

JIT + host runtime: by definition portable, includes slim launcher executable for convenience, the platform for which can be specified with e.g. -r osx-arm64[0].

JIT + self-contained runtime: this includes IL assemblies and runtime together, either within a single file or otherwise (so it looks like AOT, just one bin/exe). This requires specifying RID, like in the previous option, for cross-compilation.

AOT: statically linked native binary, cross-OS compilation is not supported officially[1] because macOS is painful in general, and Windows<->Linux/FreeBSD is a configuration nightmare - IL AOT Compiler depends on Clang or MSVC and a native linker so it is subject to restrictions of those as a start. But it can be done and there are alternate, more focused toolchains, that offer it, like Bflat[1].

If you just want a hello world AOT application, then the shortest path to that is `dotnet new console --aot && dotnet publish -o {folder}`. Otherwise, the options above are selected based on the needs either via build properties or CLI arguments. I don't know which use case you have - let me know if you have something specific in mind ("Just like in Go" may or may not be optimal choice depending on scenario).

[0] https://learn.microsoft.com/en-us/dotnet/core/rid-catalog

[1] https://github.com/bflattened/bflat (can also build UEFI binaries, lol)


To clarify, my team uses Go and prefers to stick with "idiomatic" Go. So, while we could implement our own types, there would be pushback. As an example, I liked lo [0] but my team was resistant because it's not considered idiomatic.

If were up to me we'd be using a language with a better type system :)

[0]: https://github.com/samber/lo


It's not the best solution, but an analyzer like [0] covers most of the cases for reference types. For enums and struct DUs in general we'll have to wait for language (or even runtime) support.

[0] https://github.com/shuebner/ClosedTypeHierarchyDiagnosticSup...


This looks like an interesting sweet spot. Rust is often praised for the borrow checker, but honestly I really only like Rust for the type system and error handling. Go is praised for its simplicity, but hated for its error handling.


Rust without the borrow checker is much less feasible than Go with Result/Option types to address the nil overdose problem. Unfortunately, the Go team refuses to acknowledge the common themes coming out of years of user complaints. They don't have to cater to every wishlist, but when nil/enum related complaints are the majority in every discussion about issues with Go, one would think to acknowledge the legitimacy of those shortcomings. Nope, not the Go team and their band of simplicity zealots.


I'm not sure what exactly you mean by acknowledgement, but here are some counterexamples:

- A proposal for sum types by a Go team member: https://github.com/golang/go/issues/57644

- The community proposal with some comments from the Go team: https://github.com/golang/go/issues/19412

Here are some excerpts from the latest Go survey [1]:

- "The top responses in the closed-form were learning how to write Go effectively (15%) and the verbosity of error handling (13%)."

- "The most common response mentioned Go’s type system, and often asked specifically for enums, option types, or sum types in Go."

I think the problem is not the lack of will on the part of the Go team, but rather that these issues are not easy to fix in a way that fits the language and doesn't cause too many issues with backwards compatibility.

[1]: https://go.dev/blog/survey2024-h1-results


I guess I should have been more clear that I mean actions that have resulted from the feedback. Sure, the survey brings out the concerns in a structured form, but to anyone who has seen more than a few discussions about Go, the feedback regarding error handling or enum or sum types etc would not have been news. I can't imagine Go team at Google is stunned by developer demand for these things. Question is why there hasn't been a concerted effort to prioritize these top concerns (I will stand corrected if there is already something underway that I'm not aware of).

One of the proposals you linked was raised in 2017 and is still open with "No one assigned"; same fate for the other item. That doesn't inspire confidence in terms of the Go team treating these things as a top priority.

I think stuff developers are moaning about the most should be top priority but I guess that is just my simpleton thinking.

> I think the problem is not the lack of will on the part of the Go team, but rather that these issues are not easy to fix in a way that fits the language and doesn't cause too many issues with backwards compatibility.

They have made many changes to the language, some significant ones like generics (which I would assume was also not an easy problem to solve), while they have largely left the elephant in the room unaddressed, i.e. error handling - which developers deal with on a daily basis, and I would wager a lot more frequently than generics. If I had to gauge their priorities, I would go by where they are putting their money instead of surveys and proposals. And their priorities seem to be different from what the populace is asking for. And that is my point.


Dude, if the 70% don't care, they don't care. Besides, it's open source.


You can get Rust's type system in most ML derived languages, some of them even go beyond what Rust is capable of today.


Can you give some examples?


Could you share a bit more?


Be still my heart. I would use this so fast at work where we currently use Go.

But introducing a new language is a scary thing.


Many people on HN “Rust syntax is so ugly”

rustaceans “I love the Rust syntax so much I want it in Go too”


The author notes[0] that it keeps the Rust syntax to avoid having to write a parser.

I have no issue with the syntax but I think the chance of uptake would be considerably improved if the syntax were as close to Go's as reasonably possible. That's because I estimate Go programmers to be a better target for this than Rust programmers, but maybe I'm wrong.

[0] https://news.ycombinator.com/item?id=36847594


Having a soft-Rust alternative is a recurring topic in the Rust community and is acknowledged as being of interest by core members:

https://www.reddit.com/r/rust/comments/j2l9v9/revisiting_a_s...


Yeah, it'd be great to see something come of it. Been a while, though.


There’s no contradiction here.

Rust’s syntax looks alien to people who are not familiar with it, but the syntax itself is fine.

Some users also blame Rust’s syntax for being complicated when they actually struggle with Rust’s semantics, e.g. borrow checking wouldn’t be any less strict if Rust chose a less weird sigil for lifetime labels.


In the same vein, Go+ is also interesting, and it's being actively developed.

https://goplus.org/


Where's the list of "Awesome Go-derived languages"? :)


Is it correct to say Borgo "compiles to Go", or should it say "transpiles to Go"?

It appears to be a transpiler (it consumes a Borgo program and does the work to convert and emit a Go program as text):

https://github.com/borgo-lang/borgo/blob/main/compiler/src/c...


The word "transpiler" propagates the misunderstanding that there is something special about a compiler that emits machine code, that requires some special "compiler" techniques for special "compiler" purposes that are not necessary for "transpiler" purposes because "transpiling" requires a completely different set of techniques.

There aren't any such techniques. If one were to create an academic discipline to study "transpilers" and one to study "compilers", all you'd end up with is an identical bunch of techniques and analyses with different names on them. (A thing that sometimes happens when diverse disciplines study what turns out to be the same thing; see machine learning versus statistics for a well-known example.)

Even "compiling" to machine code isn't special anymore, because CPUs don't actually execute assembly anymore. They themselves compile assembly into internal micro-ops, which is what they actually execute. So compilers don't even compile "machine language" anymore; it's just another target. This also dodges "is it a 'compiler' or a 'transpiler' if it targets WASM?", which is good, because there is no value in that question on any level.


1. Transpilers output to another programming language that is typically written by hand by others (so not assembly).

2. Transpilers don’t typically optimize code, leaving those transformations to the compiler of the target language.

3. Compilers will typically have an internal representation (SSA) which they operate on to optimize. Transpilers typically operate on the AST (because they don’t need to do any but the most trivial optimizations).

There are exceptions to the rules but these cover the majority of the reasons on why people make the distinction.


These differences aren't inherent to transpilers vs compilers, they're mostly the result of the fact that the vast majority of transpilers are less mature than the battle-tested compilers that you're thinking of.

The average hobby compiler—regardless of target—doesn't optimize code and works directly on the AST because that's simple to get started with. Most hobby compilers also target some other language rather than LLVM or machine code because that's simple to get started with, so the result is that most transpilers are hobby projects that don't optimize. But there's no reason why a transpiler shouldn't include optimization steps that adapt the output to use code paths that are known to be fast, and a production-grade transpiler typically will include these steps.


> the majority of the reasons on why people make the distinction.

You have provided some defining properties that might allow for distinction, but you have not given any reasons for why people make a distinction.

But perhaps we can suss it out. Given the statement "Borgo compiles to Go", what important information is lost that would be saved if "Borgo transpiles to Go" was used instead?


In that statement, it doesn't really add anything.

In the statement "XYZ is a compiler/transpiler", it does. It doesn't hurt to have a word that is more specific than others. Otherwise we should just refer to compilers as an "app" :)


I don't think anyone here is saying we shouldn't have the word "transpiler" at all, just that "transpiler" is a subcategory of "compiler" and there's no reason for OP to try to correct the title of this story.

It reminds me of how my 5-year-old son always corrects me when I tell him to get in the car—"you mean the van!". I have tried to explain to him that a minivan is a kind of car, and he's just about getting it, but it's been a challenge for him to grasp.


>I don't think anyone here is saying we shouldn't have the word "transpiler" at all

This thread chain is in response to jerf's comment "transpiler shouldn't be a word" (simplifying his comment for brevity's sake)


Eh, that's one possible reading, but their actual take is more nuanced than that:

> The word "transpiler" propagates the misunderstanding that there is something special about a compiler that emits machine code, that requires some special "compiler" techniques for special "compiler" purposes that are not necessary for "transpiler" purposes because "transpiling" requires a completely different set of techniques.

In context of the parent comment I read this to be a reaction to someone insisting that we use "transpiler" instead of "compiler"—more an observation of what is happening here than a call to stop using the word altogether.


Someone argues that transpiler adds nothing (no nuance) over the original word. And your takeaway is that “I don't think anyone here is saying we shouldn't have the word "transpiler" at all” and that their original post is “more [of] an observation”? Does a person have to be all boorish and say that “you shouldn’t use that word” in order to convince you that they think it’s useless? Anyway this comment (newer than your comment) seems clear enough: https://news.ycombinator.com/item?id=40214781

> Ultimately, "compiler" isn't a bright shining line either... I can take anything and shade it down to the point where you might not be sure ("is that a 'compiler' or an 'interpreter'?"), but the "transpiler" term is trying to draw a line where there isn't even a seam in the landscape.


As no internet discussion is complete without a car analogy, car and automobile mean the same thing, but I see no reason why one of those terms needs to go away. Why can't transpiler and compiler peacefully coexist with the same meaning?


Automobile should be scrapped before we get to self-driving cars. What’s a self-driving automobile? An autoautomobile? Get outta here!


Humanless carriage.


We shouldn't have the word "transpiler" at all.


> In the statement "XYZ is a compiler/transpiler", it does.

Okay. What important information is lost in "XYZ is a compiler" that would be gained in "XYZ is a transpiler"?

> It doesn't hurt to have a word that is more specific than others.

It can if the intent is not properly understood. And so far I'm not sure we do have that understanding.


It doesn't matter but I fully disagree with this. A transpiler emits code the user is supposed to understand, a compiler does not. At least that's the general way I've seen the term used, and it seems quite consistent.


There is a phenomenon I have observed many times where you can get a bunch of people in a room and make some statement, in this case, "Compilers are different than transpilers", and everyone around the table will nod sagely. Yup. We all agree with this statement.

But if you dig in, it will turn out that every single one of them has a different interpretation, quite often fatally so to whatever the task at hand is.

I mention this because my impression has been that the distinction between "transpiler" and "compiler" is that the latter compiles into some machine code and the former does not. I think if we could get people to sit down and very clearly define the difference we'd discover it is not as universal a definition as we think.

My personal favorite is when I say a particular term is not well defined on the internet, and I get multiple commenters to jump up and tell me off about how wrong I am and how well-defined the term is and how universal the understanding is, while each of them gives a completely different definition. As I write this it hasn't happened in this thread yet, but stay tuned.

Anyhow, the simple solution is, there isn't a useful distinction between them. There's no sharp line anyhow. Plenty of "transpilers" produce things like Python that looks like

    def f000000001_bch(a0023, a0024, bf___102893):
        __a1 = f000000248_BCh1(a0024, const_00012)
        if __c_112__0:
            f0000000923(__a1)
        else:
            f0000000082(__a1)
and it's really quite silly to look at what can be a very large process and make a distinction only in how the very last phase is run, and on a relatively superficial bit of that last phase too.


2 hours later, I think it's safe to say there are multiple definitions in play that are, if not outright contradictory, certainly not identical.

It seems the term is not terribly useful even on its own terms... it is not as well defined as everyone thinks.

Ultimately, "compiler" isn't a bright shining line either... I can take anything and shade it down to the point where you might not be sure ("is that a 'compiler' or an 'interpreter'?"), but the "transpiler" term is trying to draw a line where there isn't even a seam in the landscape.


> the "transpiler" term is trying to draw a line where there isn't even a seam in the landscape.

I don't think you have proven that it is a seamless landscape. In fact, I think that people's definitions have been remarkably consistent in spite of their fuzziness. The heart of what I have read is that most people understand a transpiler to be an intermediate text to text translation whose output is input to another tool. The common colloquial definition of a compiler is a text to machine code (for some definition of machine code) translation whose output is an executable program on a host platform. You can make an argument that every compiler is a transpiler or every transpiler is a compiler, but I think it requires a level of willful obtuseness or excessive pedantry to deny that there is something behind the concept of a transpiler. This discussion wouldn't even be happening if transpiler were a completely meaningless term.


> but I think it requires a level of willful obtuseness or excessive pedantry to deny that there is something behind the concept of a transpiler.

Transpiler means something. Fuzzily. And it defines and denotes nothing of practical utility.

> This discussion wouldn't even be happening if transpiler were a completely meaningless term

This discussion wouldn’t even be happening if (people like) JS programmers didn’t insist on using terminology that implied some archaic view of technology, like “compilers emit machine code”—the distinction between high- and low-level target languages isn’t interesting anymore, even if it might have been novel to normie programmers in the ’90s or something.


I've observed this sort of behavior frequently, is there a name for this phenomenon yet?

Something like "Assuming all concepts are universal to one's own peculiar definition"

Maybe "semantic egocentrism" could fit the bill?


And in the other corner you have Chomsky with universal grammar... and in another you have Platonic Forms...

I love the "draw me a tree" idea of a Platonic form: we all have an idealized model of what that is, one that is uniquely our own. With that in mind, isn't everything subject to some sort of semantics?


> A transpiler emits code the user is supposed to understand, a compiler does not.

No, a transpiler emits code that another system is meant to understand (often another compiler or interpreter). Whether a human can understand it or not is immaterial to the objective of transpiling.


But then compilation is the same thing as transpilation as noted.


Yes.


Does that imply that a compiler emits code that nothing can understand? Or are you saying that 'transpile' is just another word for 'compile'?


> Does that imply that a compiler emits code that nothing can understand?

Bizarre take. No, compilers in the classical sense target byte code and machine code which is meant to be interpreted by a byte code interpreter or a hardware machine.

> Or are you saying that 'transpile' is no more than another word for 'compile'?

Yes. Compilers translate from one language to another. Transpilers translate from one language to another. Both have the objective of preserving the behavior of the program across translation. Neither has the objective of making something intended for humans as a general rule.

That transpiled code (if we draw a distinction) targets languages meant for humans to read/write means that many transpiled programs can be read by people, but it's not the objective.


> Bizarre take.

Bizarre in what way? If compilers are somehow different, then they mustn't target systems, as that's what your previous comment says transpilers do. Which leaves even your own classical definition to be contradictory, if they are somehow different. What does that leave?

> Yes.

But it seems you do not consider them different, which was the divergent path in the previous comment. But evaluating both the "if" and the "else" statement is rather illogical. The evaluation of both branches is what is truly bizarre here.


I see what you mean, but how is it academically useful to identify transpilers as something of their own? It's still compiling (lowering) from one notation to another.


All compiler outputs are understandable. I suppose you mean with the intent of it being a one-time translation? As in, like when the Go project converted the original C codebase into Go source with the intent of having developers work in the Go output afterwards and the C code to be never touched again?

What is meaningful about such a distinction?


Yea, not sure i disagree with anything being said here. Though to me, transpiler just typically means it goes from one language i could write, to another i could write. I don't necessarily expect to enjoy reading or perhaps even understanding the JavaScript output from any lang that builds to JS, for example.


> transpiler just typically means it goes from one language i could write, to another i could write.

What possible compiler target couldn't you write? Compilers are not exactly magic.


Fair, by "could write" I meant one intended for humans to write. I.e. I would not say LLVM bytecode is intended for humans to write by hand. Can they? Sure.

The difference (to the parent comment) in my eyes is that the target language is the thing intended for humans, not the target output itself. As another commenter points out, transpiled code is often not intended for humans, even if the language is.


Machine code is intended to be written by humans. That was the only way to program computers at one point in time. Given a machine code target, would you say it is a product of transpilation or compilation?


I would stand by my original statement, as i don't consider that "intended" or common by modern day standards. Humans hand wrote binary for a while too hah.

If it's not clear, these are just my opinions. Not an attempt at an objective fact or anything.


> Humans hand wrote binary for a while too hah.

Like, as in flipping toggle switches? Isn't that just an input device? They were still producing machine code from that switching.


> A transpiler emits code the user is supposed to understand, a compiler does not.

How come Godbolt is so popular? Inspecting compiler output?

Is GCC now officially a transpiler?


Actually, I can understand assembly.


Is Babel a transpiler?


I'm fairly certain that source-to-source transpilers rarely use anything like BURS or any other sufficiently smart "instruction selection" (or instruction scheduling, for that matter) algorithms, because why would they? The compilers for the targeted language already incorporate such algorithms, maybe even of higher quality than the transpiler's authors are capable of writing themselves.


All compilers end up with local considerations. Instruction selection or register allocation is not a consideration special to compilers that "transpilers" do not need to have, they are specific considerations for that particular compiler target. A compiler to Go must consider Go's identifier rules, which does not apply to compilers targeting a CPU. A compiler to SQL must consider valid SQL syntax and have their own optimization concerns that don't apply to Go. And so on.

The middles all look very similar, though, which is where the heart of "compiler" comes from; that process of some sort of parsing and then transforming down to some other representation. This has a distinguishing set of characteristics and problems despite what frontends and backends get slapped on them.
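
As a small, hypothetical illustration of a Go-specific backend concern: a compiler emitting Go has to turn arbitrary source-language names into legal Go identifiers and dodge Go keywords, something an x86 backend never thinks about. A sketch:

    package main

    import (
        "fmt"
        "strings"
        "unicode"
    )

    // A partial keyword set for illustration only.
    var goKeywords = map[string]bool{"func": true, "type": true, "range": true /* ... */}

    // toGoIdent mangles a source-language name (which may contain characters
    // or keywords Go doesn't allow) into a legal Go identifier.
    func toGoIdent(name string) string {
        var b strings.Builder
        for i, r := range name {
            switch {
            case unicode.IsLetter(r) || r == '_':
                b.WriteRune(r)
            case unicode.IsDigit(r) && i > 0:
                b.WriteRune(r)
            default:
                b.WriteRune('_')
            }
        }
        id := b.String()
        if id == "" || goKeywords[id] {
            id = "_" + id
        }
        return id
    }

    func main() {
        fmt.Println(toGoIdent("my-var"), toGoIdent("range")) // my_var _range
    }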


This was very nicely put. Thanks. I don’t think we need different terms just because the target languages are different (higher level or whatever).


Readme says transpile: "Borgo is a new language that transpiles to Go."

And it's written in rust. Kinda unholy.


Nothing unholy there. It's easier to transpile to a less constrained language. Transpiling to Rust would require using one of the GC crates or refcounting everything. Then you'd have to also satisfy the mutability and send/sync constraints. Go needs none of these things, so all the transpiler needs to care about is its own added constraints.
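
A contrived sketch of the point: the emitted Go below can capture and share heap state without the transpiler proving anything about ownership, which is exactly what targeting Rust would force it to do (or punt on with Rc/RefCell):

    package main

    import "fmt"

    // counter returns a closure that captures a heap-allocated int.
    // Emitted Go doesn't have to reason about who owns n or when it is
    // freed; the GC takes care of it.
    func counter() func() int {
        n := 0
        return func() int {
            n++
            return n
        }
    }

    func main() {
        c := counter()
        fmt.Println(c(), c(), c()) // 1 2 3
    }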


Yeah, the top of the project says "compiles", then the readme says "transpiles". Perhaps the author was just trying to get all the SEO terms in there.

> And it's written in rust. Kinda unholy.

Agreed, it's like, do you really hate writing Go so much that you'll really write all that Rust to get out of it? Haha. Reminds me of the web frameworks which generated the JS so you didn't have to touch it, like GWT of old.

I'm sure it was a fun exercise to create Borgo, though.

My favorite transpiler is Haxe. It targets so many other languages, the surface area is impressive.

https://haxe.org/


I’ve totally forgotten about Haxe!


Nothing like adding dependencies to the build toolchain.


“transpiler”, “compiler”, either terminology is valid:

https://en.wikipedia.org/wiki/Source-to-source_compiler


Thanks, I'd always thought targeted transformations -> transpiler, but it makes sense it's really a subset of general compiler functionality, sans binary output.


> but it makes sense it's really a subset of general compiler functionality, sans binary output.

Tell us more about this compiler subset that does not produce binary output. What do they produce? Ternary? Qubits? Nothing?


They produce text, such as Golang source code.


Thanks for your response, but the Go spec asserts that Go source is represented as UTF-8. UTF-8 is a binary encoding.

We're talking about compilers that produce something other than binary output.


Ah, my mistake, I thought you were making an earnest inquiry and not a joke. Carry on.


Transpiler is a kind of compiler


Reminds me of this previous effort to build upon Go but add a more flexible type system: https://github.com/oden-lang/oden


The only language I can think of that has pulled off "compiles to another high-level language" and gained mainstream adoption is TypeScript, and I'm sure it wouldn't have done so if it were possible to run other languages in the browser.

Can anyone think of another example?


C++ is for sure gonna be the biggest example. I think Objective-C too.

There's other successful but not "mainstream" languages that might count or semi-count, like Clojure targeting the JVM (though not Java) and being able to use Java packages, or ClojureScript targeting JavaScript.


There’s web2c which transpiles Pascal-Web to C code. And in the 90s, Eberhard Mattes, to enable his port of TeX and friends to OS/2 and DOS wrote a Pascal to C compiler (I remember when it was first released, there was speculation that it might have been a pirated commercial implementation because how could one guy manage this, but that was short-lived as people realized it was faster than any of the commercial versions.)


CoffeeScript was pretty popular about 10 years ago and did this.


Nim compiles to C by default, and it seems most Nim devs stick with that default. Nim hasn’t gained mainstream adoption, though.


haxe is not quite "mainstream adoption" level but it has a decent amount of stuff done in it

clojurescript is fairly popular too.


I like your take but - not that this was important to TypeScript - JavaScript was literally the assembly language of the web (asm.js) until WASM came along. There was no other target that TypeScript could compile to. I guess that’s why TypeScript simply added types to JS, rather than being a wholly new language, which in turn made it compatible and familiar to potential adopters. It also solved a big problem caused by the growing size of client code bases.

All of that said - this train of thought led me to discover AssemblyScript! https://www.assemblyscript.org/


Not exactly the same, but a few languages compile to LLVM IR, which is then compiled by LLVM to machine code.


Elixir compiles to Erlang, I think.


Not exactly correct: Elixir does have "core erlang" as a compilation step, but Erlang goes through that same step too, with both ultimately compiling to BEAM bytecode.


Correct, Elixir runs on the BEAM.


C++ used to compile to C.


They made a coffeescript for go


They made a Typescript for go. Coffeescript was dynamic.


Coffeescript added semantics and behavior. Typescript for better or worse is almost only types on top of existing behavior. (with the primary exception being the `enum` concept)


I'd never let a rando project on GitHub generate my code: become dependent on their tiny new syntax AND let it generate the Go code that will actually be built. That's asking for something like a backdoor-insertion trick to be introduced later, after they have enough folks dependent on them and decide the reward is worth the risk. GitHub is The Jungle and all that entails.


Option instead of nil is amazing. Imo, the biggest flaw in Go's design is having nils instead of optionals.

I don't have a strong opinion on Result and Pattern matching - it seems nice, but I don't know if it adds much to the language. It is nice, but it may not be worth the complexity.

The error handling with ? is a no for me. I'd rather have something more like the go-errors/errors package in the standard library instead. This has been proposed before, and it was rejected for a good reason: it makes it too easy to be lazy and just bubble up errors instead of properly handling them.
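
For what it's worth, here's a rough sketch of the explicit stdlib style I'm contrasting the ? operator with: wrap with `%w`, inspect with `errors.Is` (this is plain modern stdlib Go, not the go-errors package, and the config-loading example is made up):

    package main

    import (
        "errors"
        "fmt"
        "os"
    )

    func loadConfig(path string) ([]byte, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            // Wrap instead of returning an opaque error, so callers can
            // still match on the underlying cause.
            return nil, fmt.Errorf("loading config %q: %w", path, err)
        }
        return data, nil
    }

    func main() {
        _, err := loadConfig("does-not-exist.toml")
        if errors.Is(err, os.ErrNotExist) {
            fmt.Println("config missing, using defaults:", err)
            return
        }
        if err != nil {
            fmt.Println("fatal:", err)
        }
    }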


Has anyone written a language which targets golang assembler yet? I’m surprised I don’t see that.


To me the problem is not the language per se but the emergent complexity of a project written in that language. E.g. say I'm familiar with Go and am a k8s user. Does that mean that I can understand the architecture of the k8s project and be meaningfully productive in a short period of time? Far from it.

Sometimes I think we focus too much on, and formalize, the first-order tooling we use (language being one of them), while we neglect the layers upon layers of abstractions built on top of it. I wonder whether a meta-language could exist that would be useful in these upper layers. Not a framework that imposes its own logic. More of a DSL that can capture both business logic and architecture.


Why compile to Go rather than less-than-ideal (or even slightly-unsafe) Rust?

I find it conceptually compelling, I'm just surprised the target would then be in the GC'd, larger-binary'd direction. Like 'Java expressiveness with C simplicity, transpiles to Java'.

Perhaps 'just' because it's a lot simpler to just expand the target language slightly and then you only have to deal with mapping the new bits into implementation, it's less like writing a compiler for a whole new language?


Good question. It's probably to be able to continue using the Go ecosystem. You could not incrementally switch a Go codebase to a Rust-based Borgo, but you can when it's Go-based.


Why not? You can have a language B that's a superset of language G but which compiles to language R.

It's even typical in a sense: C++ is a superset of C, that doesn't mean it has to compile to C, it compiles to LLVM IR or whatever.


I suspect emitting safe Rust would necessitate your very own implementation of a borrow-checker, or a weak flavor of it, in the language frontend. From there you place yourself in the author's shoes and consider the tradeoffs that come with emitting unsafe Rust. My interpretation of _that_ tradeoff is emitting "something like C++" or "something like Java," to use your analogy. There appears to be less to get wrong.


Sum types having zero values seems to break the promise that people hoped to get out of them.

    use fmt
    
    enum Coin {
        Penny,
        Nickel,
        Dime,
        Quarter,
    }

    fn wtf() -> Coin {
      return zeroValue()
    }
    
    fn main() {
        let coin = wtf()
        fmt.Println("zero coin:", coin)
    }

Output:

    zero coin: {0}


It looks pretty interesting! Definitely something to play around with, but honestly, I'd rather just use Rust (or Gleam if GC is ok).


Congratulations and best wishes for your project. I have hoped for a Go+Rust lang for a long time now.


It's a little suspicious the example uses math/rand.Seed() which has been deprecated for over a year. That's when I noticed the repo itself hasn't had a single commit in 7 months.

Why is this suddenly news, when by all appearances it's abandonware?


> Why is this suddenly news, when by all appearances it's abandonware?

Because the submitter suddenly found it, and it was new to many others too? It's not a 'Show HN'.


This is a fantastic proof of concept project that answers a question I've been asking: what if there was another language that was designed to fit between Go and Rust? Ideas?


So swift?


it looks a lot like java 21+


Java < Kotlin < Scala

Golang < Borgo < Rust


I wrote (nearly) only Scala for 10 years. If I was starting a project today I would not use Scala. It's still a great language.


This looks a lot like Swift. To me, that's a good thing :)


What's the license?


I understand that you like some Rust features like Result and Option types, enums, and pattern matching.

These features provide for more safety, and at the same time, they reduce productivity by forcing the developer to statically type everything.

The question then is: why do we need to transpile to Go, a language with GC that is slower than Rust?

If we already agree on super-safe static typing, why not just use Rust? Are there any libraries in Go that are not available or of worse quality in Rust?


Does Borgo have a Treesitter grammar? An LSP? I'd use it in a heartbeat if so.


Neovim user detected!


Would it be possible to make a Python (without C extensions) that compiles to Go?


I am not sure you can easily directly transpile to Golang from Python. Python is very, very dynamic and can have extremely complex types that are not representable with the Golang type system. Not impossible but I guess you might end up with an extra runtime layer and some more dynamic operations will not be very fast. Or you restrict it to a subset of Python like this project does: https://github.com/zanellia/prometeo

You could of course write a bytecode VM in Golang but I guess that defeats the purpose.
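
To sketch what that extra runtime layer might look like (purely hypothetical, not how prometeo or any real transpiler does it): every Python value becomes an interface value and every operation becomes a runtime dispatch.

    package main

    import "fmt"

    // PyObject stands in for "any Python value" in the emitted Go.
    type PyObject interface{}

    // pyAdd is what `a + b` would have to compile to when the static
    // types aren't known: dispatch on the dynamic types at runtime.
    func pyAdd(a, b PyObject) PyObject {
        switch x := a.(type) {
        case int:
            if y, ok := b.(int); ok {
                return x + y
            }
        case string:
            if y, ok := b.(string); ok {
                return x + y
            }
        }
        panic(fmt.Sprintf("unsupported operand types: %T + %T", a, b))
    }

    func main() {
        fmt.Println(pyAdd(1, 2))         // 3
        fmt.Println(pyAdd("foo", "bar")) // foobar
    }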


There was one: https://github.com/grumpyhome/grumpy

Looks like it's abandoned, though.


Can you use Go modules?



No exceptions - no love.


Why would you target Go?


It's a language known for having a great runtime and tooling but subpar language semantics. Makes sense to me, at least. Most of the benefits of go with fewer drawbacks.


This is a terrible take.

It's like one of those people who buys a massive overpriced knife block with 48 knives in it that they never use.

Most go devs have lived through bloated java/php/python/ruby/js projects that become a pile of dependencies.

Go is to coding what brutalism is to architecture. Simple, functional, efficient. Don't build a massive dependency chain, don't build magic; repeating yourself is OK. Be an adult and deal with your errors (it's a feature)... Those minimalist, no-bullshit language semantics that force you not to be lazy are a feature, not a bug.


I don’t think any of that is contradicted by what this seems to be trying to do. In fact, Go doesn’t make you deal with your errors (you’re free to ignore the returns) whereas this would (via exhaustive pattern match).


> In fact, Go doesn’t make you deal with your errors

You're right, it does not. I will `_ =` an error in a throwaway script all the time.

I see that in a code review, in production code... Big red flag. This is a departure from an exception, which might be thrown in one place and handled far, far away from the code you're looking at.
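
To make it concrete, all of these compile cleanly; nothing in the toolchain complains, only an external linter like errcheck would (throwaway example, obviously):

    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        // os.Remove returns an error, but Go is happy to let you drop it
        // on the floor without even an underscore...
        os.Remove("tmp.lock")

        // ...or discard it explicitly, which is at least visible in review.
        _ = os.Remove("tmp.lock")

        // Or keep the value you care about and blank the error.
        f, _ := os.Create("/no/such/dir/out.txt")
        fmt.Fprintln(f, "hello") // f is nil here; the write silently fails too
    }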


I'm not an expert in Go, and my experience is somewhat limited, however, a few years back I fixed a really subtle bug in a project that was related to the fact that errors _weren't_ being handled correctly. As a relative newbie to Go, the code in the diff[0] didn't appear to be doing anything wrong, until I added some print statements and realized that the numbers were not adding up correctly. IMO, if the returned value had been more like a Rust optional or result type, I think this issue would have either not been a bug in the first place, or it would have been easier to spot the bug.

[0]: https://github.com/semilin/genkey/commit/fafed6744555c5a81fd...

EDIT: The fact that this was a bug at all makes me fear for the rest of the code base. If this one slipped through the cracks, how can I know that the rest of the code base is correct?


> The fact that this was a bug at all makes me fear for the rest of the code base.

The fact that the commit was accepted into a release without any changes to accompanying tests is what is most concerning. You should be afraid.


Is the wrong code in the new part of the diff or in the old one?

Generally, getting a zero value from a map is a feature, but the code that the diff replaces did look overly complex and fragile to me already, tbh.
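
(For anyone following along, the distinction in question is the comma-ok form; a quick sketch:)

    package main

    import "fmt"

    func main() {
        freq := map[string]int{"a": 3}

        // Zero value: a missing key silently reads as 0.
        fmt.Println(freq["b"]) // 0

        // Comma-ok: distinguishes "absent" from "present with value 0".
        n, ok := freq["b"]
        fmt.Println(n, ok) // 0 false
    }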


Rust does not enforce error checking either.

You can ignore errors in Rust.


Ignoring errors is a thing you can do whether you're checking them or not. Rust does enforce error checking, every Result<T, E> is required to be checked before the value can be accessed. The only exception for this is if you don't need to access the Result's value, for which there's at least a built-in warning. And of course `unsafe`.


> Result<T, E> is required to be checked before the value can be accessed.

Which is a flaw in Borgo, at least when interfacing with Go code, as in Go both T and E should be independently valid states.


Basic has

    on error goto next


If Rust had already matured by the time Docker was rewritten from Python, and Kubernetes from Java, I bet that Go wouldn't have been the lucky candidate.


I like Rust, I write Go.

There was a thread here the other day where a rust dev pointed out that "Rust is the language tokio ate"... https://nullderef.com/blog/rust-async-sync/

Rust is a lot of overhead when Go is "good enough" for 99% of what needs to get done. That doesn't mean Go is good for everything. I would still rather write Rust than C or C++ or a bunch of other languages. Look at a project like Pingora, from Cloudflare. Perfect Rust project, bad Go project. Rust is in the kernel, Rust is what I'm looking at for a USB driver. I would not shove Go in either of those places.


Go could have been like Modula-3, D, or C#; instead they decided to revamp Oberon and Limbo, the version 1.0 of those languages from the 1990s, not what was later done from their learnings, e.g. Active Oberon.

Now thanks to Docker and Kubernetes adoption success, we're stuck with it.

At least now generics are supported, unless one needs gccgo; maybe in 10 years we get Pascal enumerations.


It whooshed right past your head.

Go is stone-cold simple to pick up. It's simple to reason about. I can decompose a project and spoon-feed it to a JR engineer and they can get through it.

Rust is none of those things.

It's great for low-level stuff; it is in the kernel right next to C, and Go will NEVER be there. Why? Because that is what Rust is good at.

> At least now generics are supported, unless one needs gccgo, maybe in 10 years we get Pascal enumerations.

Again, I like Rust. Crates being colored (as in functions), the shitty compile and test times... these are things that are holding back Rust's adoption in more places... None of this stuff is on the roadmap to get fixed; it's quite the glass house you have.

Go is going to replace a fair bit of Python/Ruby in the next few years... Is it going to be Rust or Zig that eats into C/C++? If the Rust community keeps on the way it has been, Zig will eat all the core apps we depend on.


Looking forward to AI research in Go taking over the world.

Zig is a Modula-2 (1978) with C-like syntax and added compile-time execution; it still has use-after-free, and a community that seems ideologically against binary libraries.

Good luck making it relevant; I am not buying into Bun ever taking over Node.


I'm not sure why you included php in your list of examples of things that become bloated with dependencies. I've never seen that be the case.


Look at your list of built-in modules though :)


So stdlib counts as dependencies now?


Well stdlib normally doesn't have stuff like Oracle drivers and such. I think one of the reasons why PHP tends to need so few external dependencies is because it has so many extensions to cover basically anything that you don't need PHP libraries all that much. I don't personally see it as a bad thing but it's something worth considering.


[flagged]


I just opened the composer.json file for a complex PHP application, it has 20 imports, total.

I just opened the package.json for a react frontend, it has 80 imports.

I just opened the Gemfile for a complex rails application, it has 150+ imports.

But sure, I'm just trolling I guess.


I'm not trying to be argumentative, I'm genuinely curious: what is it about the go runtime and tooling that makes them great?

I did not post your parent comment.


GO:

Easy to learn: you can be productive in go in a day or two.

Strong standard library.

Compiles to binary (you don't need to drag a runtime around). And this is fast!

Easy dependency management.

Linting is built in. (No arguing over tabs vs spaces)

First class testing. (and its fast)

"good enough" coding is very fast. You can mostly ignore performance and pick it up when and where you need it.

-----------------------

Go users tend to say "idiomatic" a lot. You're not getting Rails, there is no Java-like framework, and you really should NOT do the Node.js thing and stack tools to the sky. Minimalism, brutalism.

As an example: most languages have tooling for dependency injection. Most Go projects have dependency injection but don't use a library or framework or tooling to do it. It's just a bit of code (100-ish lines) that you end up writing as part of your bootstrapping, config, or testing (depending on the project)...
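
To show what I mean by "just a bit of code", here's a toy sketch (the names and components are made up): a constructor per component plus one wiring function that main and the tests call with whatever implementations they want.

    package main

    import "log"

    // Dependencies expressed as small interfaces.
    type Store interface{ Get(key string) (string, error) }
    type Mailer interface{ Send(to, body string) error }

    // A component declares what it needs via its constructor.
    type Signup struct {
        store  Store
        mailer Mailer
    }

    func NewSignup(s Store, m Mailer) *Signup { return &Signup{store: s, mailer: m} }

    // Stand-in implementations (a real app would have postgres/SMTP here).
    type memStore map[string]string

    func (m memStore) Get(k string) (string, error) { return m[k], nil }

    type logMailer struct{}

    func (logMailer) Send(to, body string) error { log.Println("mail to", to); return nil }

    // buildApp is the whole "DI container": plain code that wires concrete
    // implementations together. Tests just call NewSignup with fakes.
    func buildApp() *Signup {
        return NewSignup(memStore{}, logMailer{})
    }

    func main() {
        app := buildApp()
        log.Printf("wired up: %T", app)
    }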


> As an example: Most languages have tooling for dependency injection. Most go projects have dependency injection but dont use a library or framework or tooling to do it.

I actually see this as a negative and we've been looking at Uber/Fx for more support. DI frameworks don't do anything you can't do without them, but doing it without framework support takes significantly more experience and technological/organizational maturity than I find the average developer has.

In the current zeitgeist of leaning towards the "micro" end of the spectrum, that "average developer" support is necessary. If you have more modular monoliths or many high-quality examples, maybe it's better.


DI in something like Spring, for example, can make it extremely hard to track where a given dependency is coming from: annotations, defaults, and properties on annotations for selecting a dependency, and sometimes autogenerated classes for which there is no source code.

I would much rather have a few lines of straight forward code that set up dependencies explicitly, than deal with opaque semantics and mysterious incantations.


I've never really had that trouble. There are typically relatively few places where

1. a given interface is provided via a DI module
2. said modules are included in a binary

With decent codesearch, finding the implementation of a particular injection for a given deployed binary is usually a fairly short search.

I'm sure there is all sorts of extra voodoo you can get up to, but the straightforward DI case is, well, straightforward.


The autogenerated class with no source code was not a hypothetical example. This was something I saw cause a problem in production, where the source code in the stack trace didn't exist.


code generation is a mostly disjoint topic from DI. Granted, some solutions like https://github.com/google/wire use code generation, but you're exactly right about their pitfalls. If your dev environment doesn't have good support for generated code, it is a nightmare. If you can goto-definition the generated code, then it is suddenly feasible, but perhaps still a bad choice.

But the DI frameworks I've used are typically just... normal code files. Uber/fx, Go/Guice, etc


Wow, that's a pretty good take.

There is a line here, though. I think a lot of people have seen what happens when you set a bunch of JR devs loose in a node/ruby code base with all that tooling. It goes about as well as giving a lead-footed suburbanite an F1 car.

If you work in an agency (new every week) or in a place where you have a high number of jr devs then a framework makes a fair bit of sense. But at that point are your experienced devs being productive or being babysitters?

I think I would rather babysit a bunch of JR devs working in Go, where correcting their issues is educational, than deal with babysitting JR devs and high-speed stupidity in something like Rails...


My experience is that there is a great, big in-between where folks are just chugging along and aren't thinking about project/codebase level architectural decisions. Without that active thought and foresight, you end up shooting yourselves in the foot, a bit. With DI frameworks, the _default_ ends up typically being the right thing to do so it unburdens people from a particular slice of cognitive load.


To name a few things: Excellent and very stable stdlib (if you’re making a networked thing), fast GC optimized for latency, async I/O without any fuss, easy CSP-like concurrency, fast compile time, statically linked binaries, large user community.


seamless cross compilation as a first-class citizen, for one


Ahh, the failure to recognize that the great runtime and tooling are due to the language semantics.


I think the most complained-about semantics of Go have been the lack of generics, the lack of algebraic data types, nil, and the error handling. Generics are now implemented, and just about nobody considers the tooling and the runtime ruined because of them. Projects like Borgo and Oden are evidence that you can have what people like about the Go runtime and tooling combined with ADT-based error handling and restricted nil.


subpar semantics?


Because a plethora of languages, some of which you don't personally like using, being part of a large and healthy ecosystem, benefits all languages and programmers.

There is no "one true" answer, and time spent bullying people over it, is entirely wasted.


It's literally the 2nd sentence of the README: "It's fully compatible with existing Go packages."

It's a nice way to bootstrap an ecosystem IMHO. No one wants to use a brand-new language without library support for common tasks.


It doesn't make sense that that is the reason. Surely C++ has way more packages. So Go would be a bad choice if package availability was a very important concern.


> I want a language for writing applications that is more expressive than Go but less complex than Rust.

> Go is simple and straightforward, but I often wish it offered more type safety. Rust is very nice to work with (at least for single threaded code) but it's too broad and complex, sometimes painfully so.

It seems clear that the author really likes aspects of both Go and Rust and desires something between the two. Check out the complexity vs type-safety illustration at the top of the page, with Borgo placed between the Gopher and the Crustacean just before the complexity curve gets steep.

They're basically building their personal ideal version of Go with inspiration from Rust.


Sure. I'm not knocking the design goal. The same thing could be achieved by having C++ as a target. So why choose Go?


Because they like go, are making essentially a go extension, and so a lot of their features map directly to go features. No need to re-implement goroutines and channels etc.

The type system lets them reject the programs they want to reject, but if a program is valid in the type system it can, in large part, emit essentially the same go code sans the types. I mean, I'm making some assumptions here, but that's typically the reason.


There is more code written in C++, but there are fewer "C++ packages" in the sense of just doing `go get` and you're ready to use it.


I doubt that's true. In all the years I have never not found a library that can do the things I want in C or C++. It doesn't have a package manager, but that doesn't mean it doesn't have packages.


It doesn't make sense when you use this reasoning on established languages like C++ and Go. It makes all the difference when you are a less-than-a-year-old language.


Nice - it uses Rust Try syntax to solve error conditions and Rust style enums!

Which of course begs another question but I won't be that Rust fanboy


if this produced SIMD optimised code, better inlining, and just more LLVM features, I would use this!


wow. this is it!


Go is less complex than rust? Really? I thought that was disputed.


Go is less complex than Rust, imo. As someone who has used Go and Rust for about the same time (5-6 years), it's not as much less complex as it seems, though. Namely, I found an odd type of complexity emerge in Go whereby every individual unit was simple, but the whole was so spread out and poorly abstracted that the complexity got spread out with it. So if you squinted, everything was simple. If you zoomed out, it felt convoluted.

Rust on the other hand drastically simplifies a lot of the complexity i dealt with in Go. However depending on the type of work, it's of course got plenty of complexity to dig into should you need it.

The challenge with Rust imo is to know where to use that complexity. Lots of rope to hang yourself with. On average I find myself with code that to me is simpler in Rust, because it's easier to reason about larger blocks of logic. However I still wouldn't ignore the extra rope of the whole language and call it "simpler" than Go.


You can read and grok the entire Go spec in a week.


The golang language spec is very sparse on implementation details in comparison to something like the java spec. I don't think the length of the lang spec is a great metric for language simplicity.


The Java specification is bigger because it has to define the entire virtual machine, whereas a compiler can otherwise target already-defined architectures. I'm also not seeing much in the way of "implementation details" in their specification. Can you point out what you mean?


Rust has macros and a novel memory model. How would you measure complexity? For me it's that simple.


Rust is not as complicated as the opening graphic indicates. I usually see this meme from less experienced people but I'm frankly surprised to see it from somebody that's capable of writing a compiler in rust.


Compared to GCed languages like Go and Borgo? Ownership is non-trivial...


I think they are calling Rust complex, not complicated. Rust is way more complicated than Go, when we are talking about language features.


It's also not as type safe as the graphic implies.


There are no labels on the graph's axes :^)



