After writing a lot of Rust, I recently did a small project with Zig to learn the language.
I'm especially impressed with the C interop. You can just import C headers, and Zig will use clang to analyze the header and make all symbols available as regular Zig functions etc. No need to manually write bindings, which is always an awkward and error-prone chore, or to use external tools like bindgen, which still take quite a bit of effort. Things just work. Zig can also just compile C code.
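For anyone who hasn't tried it, a minimal sketch of what that looks like (my own toy example, assuming a libc header and linking against libc):

```zig
const c = @cImport({
    @cInclude("stdio.h"); // header is translated by clang at compile time
});

pub fn main() void {
    // C symbols are usable like ordinary Zig declarations.
    _ = c.printf("hello from C\n");
}
```

Something like `zig build-exe hello.zig -lc` builds it; no hand-written bindings, wrappers, or build scripts needed.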
Rust indeed can feel very heavy, bloated and complicated. The language has a lot of complex features and a steep learning curve.
On the other hand, Rust has an extremely powerful type system that allows building very clean abstractions that enforce correctness. I've never worked with a language that makes it so easy to write correct, maintainable and performant code. With Rust I can jump into almost any code base and contribute, with a high confidence that the compiler will catch most of the obvious issues.
The defining features of Rust, though, are the borrow checker and thread safety (Send/Sync), which contribute a lot to the mentioned correctness. Zig doesn't help you much here; the language is not much of an improvement over C/C++ in this regard. The long-term plan for Zig seems to be static analysis, but if the many attempts to do this for C/C++ show anything, it is that this is not possible without severe restrictions and gaps.
Choosing to forgo generics and do everything with a comptime abstraction makes Zig a lot easier to understand, compared to Rust generics and traits. The downside is that documentation and predictability suffer. Comptime abstractions can fail to compile with unexpected inputs, and figuring out why takes quite a bit of effort. They are also problematic for composability, and require manual documentation instead of the nicely autogenerated information you get from traits and bounds.
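A tiny sketch of what I mean (my own example): the type parameter is just a comptime value, and there is no declared bound for the compiler, or the docs, to surface.

```zig
const std = @import("std");

// A comptime type parameter acts like a generic, but there is no
// declared bound: whether T supports `>` is only checked when the
// function is instantiated with a concrete T.
fn max(comptime T: type, a: T, b: T) T {
    return if (a > b) a else b;
}

pub fn main() void {
    std.debug.print("{}\n", .{max(i32, 3, 5)}); // 5
    // max(i32, ...) works; max([]const u8, "a", "b") would fail to
    // compile at this call site because slices don't support `>`.
}
```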
Many design decisions in Rust are not inherently tied to the borrow checker. Rust could be a considerably simpler, more concise language. But I also think Rust has gotten many aspects right.
It will be very interesting to see how Zig evolves, but for me, the borrow checker, thread safety and the ability to tightly scope `unsafe` would make me choose Rust over Zig for almost all projects.
The complexity of Rust is a pill you have to swallow to get those guarantees, unless you use something like Ada/Spark or verifiable subsets of C - which are both more powerful than Rust in this regard, but also a lot more effort.
Some smaller paper cuts, which are partially just due to the relative youth of Zig:
* no (official) package manager yet, though this is apparently being worked on
* documentation is often incomplete and lacking
* error handling with inferred error sets and `try` is very nice! But for now errors can't hold any data; they are just identifiers, which is often insufficient for good error reporting or handling (see the sketch after this list)
* no closures! (big gotcha)
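To make the error-handling point concrete, here is a minimal toy sketch (my own example, not from any real code base): the error set is inferred and `try` propagates it nicely, but on failure all the caller gets is an error name, not the offending index.

```zig
const std = @import("std");

// The error set of parsePort is inferred from whatever `try` can propagate
// (here: error{Overflow, InvalidCharacter} from std.fmt.parseInt).
fn parsePort(text: []const u8) !u16 {
    const port = try std.fmt.parseInt(u16, text, 10);
    return port;
}

pub fn main() void {
    if (parsePort("80x80")) |port| {
        std.debug.print("port {}\n", .{port});
    } else |err| {
        // All we get is the error name; there is no payload telling us
        // which byte offset was invalid.
        std.debug.print("parse failed: {s}\n", .{@errorName(err)});
    }
}
```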
> On the other hand, Rust has an extremely powerful type system that allows building very clean abstractions that enforce correctness.
I love Scala because of this as well, and put up with the slow compile times and large runtime needed because I very much value that correctness. I get a lot of the same with Rust (and more, like data races being a compiler error), with the added bonus that so many of its abstractions are zero-cost.
The more I build software, the more I want strong type systems that I can lean on to help ensure correctness. Obviously that won't eliminate all bugs, but building reliable software on a deadline turns out to be really hard, and if a compiler can tell me I'm doing things wrong before it becomes an expensive mistake in production, that's worth the added effort it takes to write in a language that can help me in this way.
It seems like Zig is approaching it from the other side: a "better C". I don't really want a better C; I want a Scala that runs with the CPU and memory footprint of a C program. Rust is probably as close as I'll get to that.
Unfortunately I've found that a lot of developers -- especially senior ones -- don't want to learn anything new, and want to keep churning out the same overengineered, overabstracted, exception-oriented, mutable-spaghetti Java code, year after year. Reminds me of the saying that some senior developers have one year of experience, repeated ten times.
> * error handling with inferred error sets and `try` is very nice! But for now errors can't hold any data; they are just identifiers, which is often insufficient for good error reporting or handling.
It's still under debate, and I'm personally in the camp that errors should not have a payload, so I would avoid assuming that it's definitely the preferable choice. We already have a couple of existing patterns for when diagnostics are needed. That said, proposals about adding support for error payloads are still open, so who knows.
It's possible to create closures already, by declaring a struct type with a method (the closure) and instantiating it immediately after, but it's a clunky solution. We also have an open proposal in this space.
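For reference, the clunky workaround looks roughly like this (my own sketch): the captured variable becomes a struct field, and the closure body becomes a method.

```zig
const std = @import("std");

pub fn main() void {
    const offset: i32 = 3;

    // "Closure": the captured value becomes a field, the body becomes a method.
    const AddOffset = struct {
        n: i32,
        fn call(self: @This(), x: i32) i32 {
            return self.n + x;
        }
    };
    const add_offset = AddOffset{ .n = offset };

    std.debug.print("{}\n", .{add_offset.call(4)}); // 7
}
```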
The new compiler is very innovative and a distinguishing feature, but I would recommend giving a higher priority to package management.
A package manager is more or less expected by developers now, and I bet you will see a lot more adoption once it's easy to publish and consume libraries with an official package manager and online repository.
IIUC the line of thought is that the current compiler is too slow, and that it doesn't show on a small project. But if you imagine a big project with 100 dependencies, you'll be really slowed down. The new compiler is faster (I think there are benchmarks in the repo) and can compile debug builds without going through LLVM.
> It's still under debate and I'm personally in the camp that errors should not have a payload, so I would avoid assuming that it's definitely the preferable choice.
How can you argue that not having access to the string index where the JSON was unparseable is better than having access to it? I read through issues/2647, and I read through the "existing pattern", and it just seems obvious to me that containing the error information in the error is better than trying to hack it through side channels.
If A -> B, and B returns an error through a side channel, then A can use it and "what's so bad about that?"
But this doesn't seem to scale very well.
If A is refactored to use an intermediate function X, then it just doesn't work. A -> X -> B would mean that...
X cannot use Zig's normal error handling syntax to automatically propagate this error that A is better equipped to handle, unless we're going to further stipulate that A must pass this Diagnostics struct into X which then must pass it into B.
If we now assume that X calls into two fallible functions, B and C, and each of them provide their own diagnostics structs, then X will have to take in two "out" arguments that provide the diagnostics, and every single caller of X will have to provide those two values.
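A hypothetical sketch of that threading (all the names are made up, just to show the shape of the problem):

```zig
const std = @import("std");

// Hypothetical diagnostics "out" structs for two different callees.
const BDiag = struct { bad_index: usize = 0 };
const CDiag = struct { bad_line: usize = 0 };

fn b(input: []const u8, diag: *BDiag) error{BFailed}!u32 {
    if (input.len == 0) {
        diag.bad_index = 0; // detail the error value itself can't carry
        return error.BFailed;
    }
    return 1;
}

fn c(input: []const u8, diag: *CDiag) error{CFailed}!u32 {
    if (input.len > 0 and input[0] == '#') {
        diag.bad_line = 1;
        return error.CFailed;
    }
    return 2;
}

// X doesn't care about the diagnostics itself, but has to accept and
// forward both structs so that A (and every other caller of X) can
// inspect them on failure.
fn x(input: []const u8, b_diag: *BDiag, c_diag: *CDiag) !u32 {
    const lhs = try b(input, b_diag);
    const rhs = try c(input, c_diag);
    return lhs + rhs;
}

pub fn main() void {
    var b_diag = BDiag{};
    var c_diag = CDiag{};
    if (x("# comment", &b_diag, &c_diag)) |v| {
        std.debug.print("ok: {}\n", .{v});
    } else |err| {
        std.debug.print("{s} at line {}\n", .{ @errorName(err), c_diag.bad_line });
    }
}
```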
You see how this goes. It just doesn't seem scalable.
Why not take the current design to its logical conclusion of simply having every function return a Boolean indicating whether it succeeded or not, and then require the caller to look into some "out" argument to determine what the error was? Obviously, that would be extremely annoying.
A tagged error union containing the diagnostic error values is just a minor evolution of the current design that brings huge wins for language ergonomics.
For errors that don't need to carry a value, there is no additional cost: they compile and work exactly as they do today.
For errors that carry a value, the size of the error struct is only the size of the largest error value (plus the tag, of course), not the combined size of all possible error values, so the global error set will never grow to be enormous unless you have some very weird error type. In which case, you can solve this problem by fixing that error type.
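That sizing behaviour is just how tagged unions already work in Zig today; a quick sketch with made-up error variants:

```zig
const std = @import("std");

// A tagged union is sized for its largest payload plus the tag,
// not for the combined size of all payloads.
const ParseError = union(enum) {
    unexpected_token: struct { index: usize },
    number_too_large: struct { index: usize, digits: usize },
    out_of_memory, // no payload
};

pub fn main() void {
    const size: usize = @sizeOf(ParseError);
    std.debug.print("{} bytes\n", .{size});
}
```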
So, I'm not deeply versed in Zig, but I have personally argued in favor of Zig's async design, which seems exceptionally interesting. I had not realized until now how limiting the error system implementation was, but I had superficially appreciated how much less verbose it was than Rust's where you tend to hand write these error enums, or use a bunch of code gen. However, errors benefit tremendously from having the ability to supply payloads.
No one is required to act on the payloads within the errors, but no one is able to if they don't exist.
The problem is for all the situations where the error payload isn't needed. You now need to carry around the extra syntactic complexity and/or the extra wasted memory (unless you have a strategy to elide all of this stuff when error payloads are not needed).
I think it's not trivial to find a good alternative to the status quo.
There are so many situations where an error payload is either mandatory for properly handling the error, or required to get somewhat decent error output...
Side channels are not really a feasible implementation option.
So you have to resort to maintaining the logical information at intermediate levels, returning a result sum type, or sticking to good old C paradigms and using an output param.
Which negates all the value that error sets offer.
I understand that it's not trivial to implement, but for me it's a sort of must to avoid ending up with messy APIs.
People are lazy, though. I think if errors can't take payloads, it's inevitable you'll end up with many libraries that don't return error payload information when you wish they did. This will trickle out into software using the libraries, where the end-user will get errors that don't include as much useful info as you'd like. So to my mind, not supporting error payloads in a first class way is contrary to Zig's goal of enabling perfect software. :-)
“wasted memory”... this stuff exists on the stack, right? And we’re talking on the order of like 64 bytes or less in common, practical scenarios?
I believe the solution has been clearly presented to those who are listening, and the downsides are so much smaller than the status quo.
If people on the language team are unwilling to provide developers with the tools to make more robust software because of something like 64 bytes of stack memory... I find that pretty shocking. And yes, I mean that it is extremely difficult to diagnose and fix issues in production software when you only get back a static error value that lacks the dynamic error context that a payload would provide. I’ve been there, done that. Do not want that again.
Such a strong viewpoint (avoiding such error payloads) could at least focus more on the technical aspects of how best to implement the elision of unused error payloads instead of just broadly opposing the concept, since it should be obvious how beneficial those payloads are, and the only question is how to remove them for developers who literally cannot spare dozens of bytes of stack memory. (Which I find hard to believe outside of AVR or PIC microcontrollers, which is probably not the best use case to be optimizing for in a new language.)
Elision is an optimization that can be added later. It’s not obvious that it can’t be done, and there’s no clear reason that it has to be solved first... I just don’t think anyone would actually care enough to implement it once they see how nonexistent the negative impacts are, but it would be a cool optimization.
But, maybe everyone who upvoted the apparently most upvoted GitHub issue on the Zig repo (including myself) is just completely wrong and this obvious solution is actually secretly terrible. It’s entirely possible.
Your comment has not really done anything to help me believe that I’m wrong, though, unfortunately.
But, as I’m basically “no one” in this context, my opinion probably doesn’t really matter.
It's not that the "obvious solution" is so terrible, it's that the workaround (not needed in 99% of cases) is really easy and arguably good architectural practice. You can emit an error/ok union and have the error return the structured information.
Note that this isn't like Go's "if err != nil" monstrosity, either.
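Concretely, the workaround looks something like this (a made-up toy parser, not anything from the standard library):

```zig
const std = @import("std");

// Tagged result type: the error arm carries structured information
// (here, the index of the offending byte).
const ParseResult = union(enum) {
    ok: u32,
    err: struct { index: usize },
};

fn parseNumber(text: []const u8) ParseResult {
    var value: u32 = 0;
    var i: usize = 0;
    while (i < text.len) : (i += 1) {
        const ch = text[i];
        if (ch < '0' or ch > '9') return ParseResult{ .err = .{ .index = i } };
        value = value * 10 + (ch - '0'); // overflow handling omitted for brevity
    }
    return ParseResult{ .ok = value };
}

pub fn main() void {
    switch (parseNumber("80x80")) {
        .ok => |n| std.debug.print("parsed {}\n", .{n}),
        .err => |e| std.debug.print("bad character at index {}\n", .{e.index}),
    }
}
```

The caller switches on the result and gets the structured error information directly, at the cost of giving up `try`-style propagation.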
> You can emit an error/ok union and have the error return the structured information.
I already addressed how you’re apparently giving up all of the benefits of Zig’s error handling system to do this. That level of convention breaking isn’t a good sign for anything. The current convention really is that bad, from my point of view.
The obvious solution’s extremely tiny overhead could “easily” be avoided in the almost non-existent cases where it matters.
I want robust software by default, not software where I have to use side channel hacks to get the information I need by default... but maybe that’s just me. (And everyone else who upvoted that GitHub issue)
Of course Zig programmers would be unlikely to see the issue here — naturally the people left are the ones who don’t see the problem. People like myself who know from past experience how problematic not having error context is are likely to just avoid Zig until it meets our minimum requirements.
That’s not a problem for existing Zig users — it works for them — but it is annoying to people like me who think Zig otherwise has a number of interesting aspects.
You're missing the point. Let me give a direct analogy: in Erlang there is a fantastic error system, but sometimes you want to emit an ok/error tuple instead. There is a generally sane heuristic for when you do and don't want to use the error pathways; it's part of making good engineering choices. Generally speaking, when you want structured errors you use the tuple.
It's possible that Erlang developers are merely internalizing some pain, but it's also a robust ecosystem that people have built highly reliable and broadly used systems in (e.g. RabbitMQ), so the choice can't be all that bad.
Eliding would be done by the compiler where the values aren’t used. That’s the whole meaning of the word “elide” in a language context.
From the developer’s point of view, the values would always be there, just unused. The compiler would just make them disappear at compile time if unused.
> You now need to carry around the extra syntactic complexity and/or the extra wasted memory
This quote is from your GP post, and I read this as arguing against having the mechanism for getting this payload in the language (and hence the compiler itself), so my language was intended. Am I misunderstanding?
Ah. I think I see what you were saying now. The word “elide” has different connotations to me than the word “omit”. Zig’s omission of error context is something I agree is problematic.
If the standard library doesn’t follow that convention, then it doesn’t matter what I do in my code. I won’t get the information I need from the standard library, and third party libraries are unlikely to follow this ad hoc convention. I’m certainly not willing to rewrite the entire world to follow this ad hoc convention.
You make a good point that the standard library could make better use of this pattern to show it off, but the language was designed with this sort of thing as an affordance and there even is a (rough) example in the docs: https://ziglang.org/documentation/master/#Tagged-union
You wouldn't have to rewrite the world, but the language is young and maybe more effort could go into making using this pattern more idiomatic and encourage library writers to do it more often.
Why keep two competing languages in your toolbox? I personally do C++ and Python and will pretty much never voluntarily choose another language from their respective niches... Except I'm looking forward to replacing C++ with Rust (any year now), at which point I'll not choose C++ again.
Because no single language can (nor should) be a good match for solving all types of problems. For this reason it's better to be fluent in 5 (or so) small languages than one big language IMHO.
Most used first: C (C99 is different enough from the common C/C++ subset that it counts as its own language IMHO), Python, C++ (up to around C++11), Javascript, and recently more and more Zig. Less than I would like: Go (since currently I don't do much server backend stuff).
PS: forgot Objective-C, for coding against macOS APIs.
>"Less then I would like: Go (since currently I don't do much server backend stuff)."
I do loads of server backend in C++. Never felt that I need anything else for this kind of stuff. Sometimes due to client's insistence I did it in other languages but it was their choice.
Correct me if I'm wrong, but I don't think D has the ability to just import C headers and seamlessly use them without having generated or manually written `extern` declarations?
C (or even C++) functions can be called directly from D. There is no need for wrapper functions, argument swizzling, and the C functions do not need to be put into a separate DLL.
I think Walter Bright achieved this via implementing a full-blown C++ parser in the dlang compiler. A [God-Tier] achievement.
Not quite as seamless as Zig, but dstep is an external program that leverages libclang to do the same thing (and generates a D module for you), as well as, e.g., smartly convert #define macros to inlineable template functions :)
> Choosing to forego generics and do everything with a comptime abstraction makes Zig a lot easier to understand, compared to Rust generics and traits. The downside is that documentation and predictability suffers.
If I understand correctly it also means that Zig will never support inference of generic type parameters. Not a dealbreaker, but unfortunate.
I also wonder how relying on comptime for all these things will affect IDE support. I guess the IDE will need to run these comptime computations a lot.