I really enjoy reading Zig, but I always struggle writing it because it's so subtly different from the other languages I use (C, Rust, and Go) that I often get confused.
I really hope Zig becomes stable soon so I can use it on non-throwaway projects
Here's my opinion from the Zig evangelism strike force:
Aside from the removal of async (which was very disruptive for a few of my projects), the rest of the changes over the last few years have been minor. It takes a few hours once a year to update a few tens of thousands of lines of code to the new syntax, build system, and stdlib. The current 0.12.0 is supposed to be a semi-stable release for people to be happy with pre-1.0, reducing that effort further, and language stability is an express design goal, so once 1.0 hits (ETA 3-5yrs?) it'll almost certainly be good enough for you.
I do think the initial learning curve was higher than I would have expected for such a simple language. The docs are much better now, and there are more examples and learning resources (off-topic, but definitely use the zig-help channel in their Discord server). I just finished making a compelling enough prototype that $WORK wants to use Zig for some performance-critical software. I suspect it'll be easier for my team to learn than it was for me. We'll see (whether it succeeds or fails, that sounds like a fun first blog post).
Despite that higher-than-expected learning curve (which IMO was mostly because nuances like the in-memory representation of a 5-bit integer weren't documented, so you had to experiment to find out the exact behavior if you were doing anything "interesting"), once you've picked it up it really is a simple language. IMO it's worth pushing through. I've done hardly anything with Go yet, but Zig fits happily in the mind of this particular C/Rust/Forth/Scala/Python/... programmer. Pushing through the first ~100 Ziglings exercises might be a pretty fast way to work through most of the syntactic differences if you're already comfortable with C and Rust.
Or maybe your prior is that new languages need to prove themselves and stabilize a little longer before you sink time into them. That's fine too. I'm just commenting since you seem to be a bit on the fence, and because I think the language is better than your current impression of it suggests.
My experience is exactly the same. It deviates just enough, in really minor ways, that it becomes difficult to write. There's a lot of upfront effort to unlearn all the syntactic patterns that have already been established, and the fact that the differences are really small makes it harder, not easier.
I think the language design includes a lot of good justification for these choices, but I worry the designers might have underestimated the sheer power of habit and historical precedent.
Personally, I do not find that Zig differs from other languages all that greatly.
I am actually a little curious about the minor and subtle differences, because I personally did not find that this was the case coming from C, Java, C#, a little Rust, COBOL, and NATURAL.
Edit:
I do admit that you need to understand what those other languages are doing in order to accomplish things. No hidden flow definitely increases the surface area of the code a bit, but if you understand how other languages accomplish the stuff they hide from you, then Zig does not feel like it has minor differences.
This is my impression too. Rust is drastically different from mainstream languages, so it's worth the jump. Zig is quite close, and I'm not sure it provides enough. I would just use C.
> I really hope Zig becomes stable soon so I can use it on non-throwaway projects
This may or may not give you some comfort, but I recall hearing one of the core maintainers of Zig talking about its use at Amazon to leverage its cross platform capabilities.
I wish all multi-task systems had a UI like this. It makes it SO much easier to spot where your biggest latencies are coming from, which is an excellent passive motivator to improve them.
Silent / opaque progress just teaches people that they can't do anything about it except wait, so all it does is get worse and worse.
> The key insight I had here is that, since the end result must be displayed on a terminal screen, there is a reasonably small upper bound on how much memory is required, beyond which point the extra memory couldn't be utilized because it wouldn't fit on the terminal screen anyway.
How large are your terminals?
I'm not sure if this represents lines, or perhaps something else. But on occasion I need to copy things out of my terminal, scrolling while copying doesn't work well, and so I shrink the text size to, let's say, one or two pixels per letter. I suspect Zig won't survive this.
One pixel per letter on a 4k screen would be 3840 × 2160 chars at most, so around 8 megabytes if we assume one byte per "character tile" on the screen. I highly doubt Zig would die from this.
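The arithmetic above checks out; a quick sketch in Python (assuming, as the comment does, one byte per character cell):

```python
# Worst case from the comment above: one character cell per pixel
# on a 4K screen, one byte per cell.
cols, rows = 3840, 2160
cells = cols * rows
print(cells)  # prints 8294400, i.e. roughly 8 MB at one byte per cell
```

Real cells would need a few bytes each (styling, wide glyphs, etc.), but the bound stays modest either way.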
If I understand correctly, that's for the number of threads. That said, the sudden switch from terminal size back to the topic of threads is confusing, so I agree that it's not entirely clear what the connection between the two is (or, if there is none, the structure of the article makes that confusing).
It would be interesting to have a mechanism to track the runtime of each step and compare it across runs (maybe with a unique generated key?), so that you could benchmark speed-ups and slow-downs. This would enable time-based rather than step-based progress bars.
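A minimal sketch of that idea in Python, keyed by step name (`step_timings.json` is a hypothetical cache file for illustration, not anything Zig actually writes):

```python
import json
import os

TIMINGS_FILE = "step_timings.json"  # hypothetical per-project cache

def load_previous():
    """Load step durations recorded by the previous run, if any."""
    if os.path.exists(TIMINGS_FILE):
        with open(TIMINGS_FILE) as f:
            return json.load(f)
    return {}

def record_and_compare(step_durations):
    """Print each step's duration and its delta vs. the last run,
    then persist the new timings for the next comparison."""
    previous = load_previous()
    for step, secs in step_durations.items():
        if step in previous:
            delta = secs - previous[step]
            print(f"{step}: {secs:.2f}s ({delta:+.2f}s vs last run)")
        else:
            print(f"{step}: {secs:.2f}s (new)")
    with open(TIMINGS_FILE, "w") as f:
        json.dump(step_durations, f)
```

With recorded durations you could also size progress bars by expected time instead of step count.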
Reminds me of nix-output-monitor [1], for example see [2].
It makes it easy to understand how individual steps are progressing, and how individual steps relate to the overall plan. It enables me to locate expensive build steps, and possibly to avoid them if steps are failing.
I'm not sure if it's actually any easier to read. When lots of things are changing it honestly becomes harder to read and figure out what's important and what's superfluous.
With the old progress system, everything was on one line. This honestly isn't horrible to me since I can easily glance text from left to right to figure out what the gist of the text is. When it's changing between the same two steps it still isn't too much of an issue since the information is all still in the same place and it's not actually changing too much between each stage. I can identify each step, figure out what's changing between them, and look for that information specifically.
The new progress system dumps a lot more information at the user, most of it detailing what file is being analyzed and compiled, each one taking maybe 2-4 frames of screen time, on an excessive number of lines, just a complete barrage of pointless information. None of this is really important to me since the only time it would be important is if a file or step took more than 3 seconds to be processed. With items constantly appearing and disappearing, the things that are taking time on the more macro scale like build-lib and build-exe steps that are more important to me will constantly move around the terminal. It's much much harder to read something if it's jumping up and down randomly every 2 frames vs if it's being swapped to share a single line. If the line literally leaves my field of view, it becomes frustrating to follow.
I much prefer the Bazel approach to this problem. When running a series of actions concurrently, the 6 actions taking the most amount of time will be visible in the action list showing how long they're taking, but all other actions will be minimized to a "and X other actions..." line.
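A rough Python sketch of that display policy (the formatting is made up; Bazel's actual output differs):

```python
def render_actions(actions, now, top_n=6):
    """Show the top_n longest-running actions with their elapsed time,
    collapsing everything else into a single summary line.
    `actions` maps action name -> start timestamp in seconds."""
    oldest_first = sorted(actions.items(), key=lambda kv: kv[1])
    lines = [f"{name}: {now - start:.0f}s" for name, start in oldest_first[:top_n]]
    hidden = len(actions) - top_n
    if hidden > 0:
        lines.append(f"... and {hidden} other actions")
    return lines
```

The nice property is that the line count is bounded and the slowest actions stay pinned in place instead of scrolling away.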
This looks cool on the surface, but in practice is not that good at giving you progress information. Which is what progress indicators should do! At best this is a better indicator of how much work is being done, not how much progress is being made. Like a bunch of bleeps, bloops, and random LEDs toggling when a computer does work in a sci-fi TV show.
I think this would be better if individual files being processed got removed unless they start taking too long. Keep the build-exe and build-lib steps around but make them a little sticky. When they complete, have it say "Done" and then remove things in groups or on a set interval. Don't change the number of lines too often and don't reorganize lines either. Generally, it should be easy to parse what's going on and frequent changes to the number of lines and how much information is on each makes that hard.
Less is more; worse is better. I don't remember where I originally saw it, but some rules of thumb that seem to serve well are:
- information shown to the user should be able to be consumed; if it's too fast to read and not displayed in perpetuity (e.g. logged once complete) it's essentially just unnecessary movement akin to a really noisy "spinner"
- if there is no new "movement" (e.g. spinner/progress-esque) within ~200ms, the user will think something is hung
- every permanent line away from the input command prompt is sacred; too much vertical movement (e.g. spamming logs) should be reserved for "verbose" output where the user explicitly asks for it. The effort the user spends scrolling the buffer afterwards should be worth it.
The new Zig progress IMO breaks all of these rules, and I'm not a huge fan of it except as a novelty in design.
Cargo's current output essentially only breaks the vertical scrolling rule; if I were BDFL I'd probably just use a single-line indicatif spinner ephemerally instead of displaying all the "downloading" and "compiling" lines, unless someone asked for "verbose" output and wanted to actually see those lines logged. It could be argued that a tree of indicatif spinners for long-running parallel tasks might be interesting? But determining what is "long running" would be extremely difficult to know ahead of time, and accidentally making an 8-frame spinner would just appear as useless noise. Add to that that the maximum parallelization with Rust, especially in debug builds, can be quite high, which would end up looking much like this Zig output, where 99% of the lines only last for a few frames.
The vertical scrolling has the advantage that when you are watching a build go, and you see a certain step's taking a long time, it doesn't ever disappear from the terminal completely. You don't need to look through dependencies some other way and figure out which matches the thing you saw, or do a clean build to see the bottleneck again, etc.
I've seen guides for detecting terminal support for colors, hyperlinks, emoji, etc but not "fancy" terminal control like this. Anyone know of any? In one CLI I work on, some others are concerned about the long tail of users and wanting to make sure no one has a bad experience by default.
This isn't using any "fancy" terminal features, aside from the synchronized update sequences (which terminals that don't support them typically ignore, though Windows has a special case where it must be ignored). That said, you can query the terminal for support for this sequence if you want to.
Other than that, it's using standard ANSI sequences: `\r` to return the cursor to the beginning of the line, `\x1bM` (Reverse Index) to move the cursor up a single line (repeated n times), and then `\x1b[J` to clear the screen below the cursor position. All of these are sequences defined at least since the VT220, probably the VT100.
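A minimal Python sketch of that redraw loop (not Zig's actual implementation, just the same three sequences):

```python
import sys
import time

def rewind_seq(n):
    # \r returns the cursor to column 0, ESC M (Reverse Index) moves it
    # up one line (repeated n times), and ESC [ J clears from the cursor
    # to the bottom of the screen.
    return "\r" + "\x1bM" * n + "\x1b[J"

def demo():
    for i in range(3):
        frame = [f"compiling unit {i}...", f"linking step {i}..."]
        sys.stdout.write("\n".join(frame) + "\n")
        sys.stdout.flush()
        time.sleep(0.1)
        sys.stdout.write(rewind_seq(len(frame)))
    sys.stdout.write("done\n")

if __name__ == "__main__":
    demo()
```

Each frame overwrites the previous one in place, so the status block never scrolls the terminal.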
> `\x1bM` (Reverse Index) to move the cursor up a single line (repeated n times), and then `\x1b[J` to clear the screen below the cursor position. All of these are sequences defined at least since the VT220, probably the VT100.
For whatever reason, the people I'm collaborating with have the impression these aren't universal enough. And of course termcap is out of vogue.
One thing that the latest generation of languages has taught me (or that I have learned in the process?) is that "languages" aren't really a thing in and of themselves so much as they are composable APIs over compilers / interpreters (which are basically just dynamic compilers; or are compilers interpreters?), and compilers are an insane, fragmented dumpster fire with no, or barely usable, APIs. Only LLVM, GCC, TinyCC, and Terra admit this reality; neither LLVM's nor GCC's APIs are really user-oriented, and Terra isn't a realistic option for most people. I also personally don't feel that LLVM has accomplished (or can accomplish) its potential, for a variety of reasons: most LLVM projects are forks or patches, although Zig has done well here. GCC has limitations as well, although I'm very pleased with gccjit in Emacs. I'm very curious to see what the future holds, especially if we can get an optionally-verifiable yet usable compiler API.
Unsolicited writing advice: drop the self-aggrandizing grandiosity. It is distracting and undermines the work itself.
Zig is a language for writing perfect programs? The fact that a progress bar is needed at all is a symptom of the lack of perfection of Zig. A perfect compiler would compile instantly (or at least faster than human perception is capable of registering) with no need for a progress bar.
"Please don't pick the most provocative thing in an article or post to complain about in the thread. Find something interesting to respond to instead."
It's amusing to me that multiple people on HN think I'm joking. Have you ever tried to write thread-safe, lock-free, infallible, non-heap-allocating code before?
I'm curious what kind of API the people who think this problem is easy would have come up with, and what its performance characteristics would be compared to mine.
I agree that this is not an easy problem. I wrote a status bar for a fairly widely used open-source build tool in Java. I hijacked System.out and System.err globally and added all writes to a queue that I line-buffered, interleaving progress bars at the bottom. There was no noticeable performance degradation relative to not having the progress bar, since all the queue management was handled on a background thread. I would characterize the volume of output as medium: the tool is chatty compared to some, but we aren't talking about managing massive amounts of data coming in.
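For illustration, the same technique sketched in Python rather than Java (the details here are my own, not the tool's actual code): writes go onto a queue, and a background thread drains complete lines while keeping a progress line at the bottom.

```python
import queue
import sys
import threading

class ProgressWriter:
    """Stand-in for sys.stdout: writes are queued, and a background
    thread interleaves complete log lines above a progress line."""

    def __init__(self, out=sys.__stdout__):
        self.q = queue.Queue()
        self.out = out
        self.progress = ""
        threading.Thread(target=self._drain, daemon=True).start()

    def write(self, text):
        self.q.put(text)  # cheap: writers only pay for a queue put

    def flush(self):
        pass  # the drain thread flushes the real stream

    def set_progress(self, text):
        self.progress = text
        self.q.put("")  # wake the drain thread so it redraws

    def _drain(self):
        buf = ""
        while True:
            buf += self.q.get()
            while "\n" in buf:
                line, buf = buf.split("\n", 1)
                # Erase the progress line, print the log line,
                # then redraw the progress line underneath it.
                self.out.write("\r\x1b[2K" + line + "\n" + self.progress)
            self.out.flush()
```

Since rendering happens off the hot path, writers pay almost nothing, which matches the "no noticeable degradation" observation above.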
At some point though, I stopped working on this tool because it and the compiler it was primarily used for were way too slow and started focusing on compiler design. I am currently working on a compiler that is self hosted and compiles itself in 1 second but I expect to get that down to under 100ms.
You don't need progress bars when your build is that fast.
I was undeniably rude in my initial post, but I also want to challenge you to be even better as a programmer. I had to write many compilers before I figured out how to self host as quickly as I now can. That is the advantage of working in isolation. I have thrown out more working compilers than all but a handful of people have written. My language is not public because I am still iterating but I wrote the bootstrap compiler in about 10 days of 4-5 hours per day and the self-hosted compiler took about 3 more weeks on top of that. It is ~4200 lines of code in itself. The next version will likely be fewer.
I think the part that might raise eyebrows is the “programming language designed for making perfect software”. Perfect is, after all, highly subjective, yet at the same time a maybe unnecessarily hyperbolic term.
I don’t mind the language though. I have no doubt that you’re at the top of your game and the problems you’ve outlined in the post do indeed sound challenging.
> A perfect compiler would compile instantly (or at least faster than human perception is capable of registering)
That seems like a very arbitrary requirement. I'd say a perfect implementation of anything must do what it is supposed to be doing, and that may have a lower bound on processing time. You can't write to memory faster than the bus allows, for example.