It's not any particular feature that makes a language a mess. It's the interaction between the features. It's a bit like mixing paint: it's very easy to end up with greyish poop.
Go was designed by very experienced programmers that understood the cost of abstraction and complexity well.
They didn't do an absolutely perfect job. It's probably true that Go would be a better language with a simplified generics implementation, enums, and maybe a bit more. That they erred on the side of simplicity shows how they were thinking. It's an excellent example of less is more.
Most programmers never gain the wisdom and/or confidence to keep things boringly simple. Everyone likes to use cool flashy things because it makes what can be a boring job more interesting.
But if your goal is productivity, and the fun comes from what you accomplish, then the code can be relatively mundane and still be very fun to write.
Precisely, and this is one area where Go fails completely. The features don't interact well at all!
Tuple-style returns are everywhere, but there are no tools to operate on them without manually splitting the pair, checking whether the error half exists, and returning something different for each case. Cue the noise of subtly different variants of `if res, err := f(); err != nil` in every function.
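For illustration, a minimal sketch of that boilerplate (the `loadConfig`/`parseConfig` helpers are made up, not from any real codebase):

```go
package config

import (
	"fmt"
	"os"
)

// loadConfig shows the shape of the boilerplate: every call that returns a
// (value, error) pair gets its own near-identical check-and-return block.
func loadConfig(path string) (map[string]string, error) {
	raw, err := os.ReadFile(path)
	if err != nil {
		return nil, fmt.Errorf("reading %s: %w", path, err)
	}

	cfg, err := parseConfig(raw)
	if err != nil {
		return nil, fmt.Errorf("parsing %s: %w", path, err)
	}

	return cfg, nil
}

// parseConfig is a stand-in parser so the example is self-contained.
func parseConfig(b []byte) (map[string]string, error) {
	return map[string]string{"raw": string(b)}, nil
}
```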
Imports were just paths to repositories. Everything was assumed to pull from the tip of the branch, and this was considered fine because nobody should ever break backwards compatibility. They've spent years trying to dig themselves out from under this one.
Everything should have a default zero value, including pointers. So we're back to manual `nil` checking for anything that might receive one. But thanks to the magic of interfaces, a function that returns a nil concrete pointer through an interface hands you a value that fails the `== nil` check, because the interface still carries the pointer's type. This is completely bonkers.
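A minimal reproduction of that gotcha, using a made-up `*validationError` type:

```go
package main

import "fmt"

type validationError struct{ msg string }

func (e *validationError) Error() string { return e.msg }

// check returns a concrete *validationError. When the caller stores the
// result in an error interface, a nil pointer becomes a non-nil interface
// wrapping a nil value.
func check(ok bool) *validationError {
	if ok {
		return nil
	}
	return &validationError{msg: "not ok"}
}

func main() {
	var err error = check(true) // nil *validationError stored in an interface

	// Prints false: the interface carries the type (*validationError)
	// even though the value inside it is nil.
	fmt.Println(err == nil)
}
```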
Go has implicit implementation of interfaces, which makes exhaustive checking of type switches impossible: the compiler never knows the full set of implementations. So you type-switch and hope nobody adds a new one. You helpfully get strong typing everywhere except the places you're most likely to actually mess something up.
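A small sketch of the problem with made-up types: nothing stops another package from adding a third implementation of `Shape`, and the `default` arm is the only thing that will catch it, at runtime:

```go
package main

import "fmt"

// Shape is implemented implicitly: nothing in the source says "Square is a
// Shape", so the compiler has no closed set of implementations to check a
// type switch against.
type Shape interface {
	Area() float64
}

type Circle struct{ R float64 }
type Square struct{ Side float64 }

func (c Circle) Area() float64 { return 3.14159 * c.R * c.R }
func (s Square) Area() float64 { return s.Side * s.Side }

func describe(s Shape) string {
	switch v := s.(type) {
	case Circle:
		return fmt.Sprintf("circle with radius %v", v.R)
	case Square:
		return fmt.Sprintf("square with side %v", v.Side)
	default:
		// If someone adds a Triangle elsewhere, the compiler stays silent
		// and we only find out here, at runtime.
		return fmt.Sprintf("unknown shape with area %v", v.Area())
	}
}

func main() {
	fmt.Println(describe(Circle{R: 2}))
}
```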
Go genuinely feels like a language where multiple people each had their pet idea of some feature to add, but nobody ever came together to work out how to make those features work in concert with one another. That anyone could feel the opposite is absolutely incomprehensible to me.
Given that I am involved in the Rust project I'm very likely biased, but having focused on the learnability of the language (diagnostics and ergonomics) I have a bit of context on this subject.
When designing a language there are intrinsic design constraints (the things the project wants to focus on, whether features of the language itself or the associated tooling that affects it, like generics or compilation speed) and extrinsic ones (external impositions, like running on certain platforms or interfacing with existing technology: producing a statically linked binary on Linux, being debuggable with gdb, calling C libs without runtime translation). All languages have (or should have) the objective of being easy to learn, pick up, and use long term. It just might not be the top priority.
For the sake of argument, take Python, where expressiveness at runtime and clean syntax are prioritized over speed; Go, where fast compilation and multithreaded microservices are prioritized over more complex language features; and Rust, where fast binaries and expressiveness are prioritized over ergonomics (when push comes to shove this is the case, otherwise you wouldn't need to call `.clone()` or add `&` to arguments when calling a method). You can see how these objectives permeate every decision throughout each language.
When it comes to Rust in particular, I feel it is still a boring language despite the appearance of too many features, precisely because of how they interact and fit together naturally. It is not the best fit for every use case, but it is one of the projects out there embracing the fact that it can't be as easy to learn as we'd like (without sacrificing some of the constraints that make it interesting as a systems language). Because the compiler is a necessary part of the developer's toolchain, we can lean on it to understand the user's intent when they write something that makes sense under an extrapolated misunderstanding of the language, and to help them write the "correct" code instead. This has the added benefit that reading the code is easier, because you have to guess much less about what it is doing. Remember that if code can confuse a parser, it will also confuse humans. On the opposite end of the spectrum you have JavaScript, whose grammar has a lot of optional or redundant ways of doing the same thing (think semicolon insertion), which makes the act of reading and debugging code harder. That is a reasonable approach in a case like the web, less so in a compiled language that can evolve independently from the end users' platform.