
It was a cleartext signature, not a detached signature.

Edit: even better. It was both. There is a signature type confusion attack going on here. I still haven't watched the entire thing, but it seems that unlike gpg, they do have to specify --cleartext explicitly for Sequoia, so there is no confusion going on in that case.


I don't think it's stupid, and this is one of the reasons I prefer ULIDs or something like them. These IDs are very important for diagnostics, and making them easily selectable is a good goal in my book.


The monotonic behavior is not the default, but I would also be happier if it were removed from the spec, or at least marked with all the appropriate warning signs in all the libraries implementing it.

But I don't think UUIDv7 solves the issue by "having less quirks". Just like you'd have to be careful to use the non-monotonic version of ULID, you'd have to be careful to use the right version of UUID. You also have to hope that all of your UUID consumers (which would almost invariably try to parse or validate the UUID, even if they do nothing with it) support UUIDv7 or don't throw on an unknown version.
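To make the distinction concrete, here's a minimal Go sketch, assuming the oklog/ulid/v2 package (the monotonic entropy reader is the opt-in mode I'd want flagged):

    package main

    import (
        "crypto/rand"
        "fmt"
        "time"

        "github.com/oklog/ulid/v2"
    )

    func main() {
        now := ulid.Timestamp(time.Now())

        // Default-style usage: fresh random entropy for every ID.
        id1 := ulid.MustNew(now, rand.Reader)

        // Opt-in monotonic mode: IDs generated within the same
        // millisecond increment the previous entropy instead of
        // re-randomizing. This is the variant that needs the warnings.
        mono := ulid.Monotonic(rand.Reader, 0)
        id2 := ulid.MustNew(now, mono)

        fmt.Println(id1, id2)
    }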


UUIDv7 is the closest to ULID, as both are timestamp-based, and UUIDv7 has fewer quirks than ULID, no question about it.

I agree that picking a UUID version requires caution, but when someone has already picked ULID, UUIDv7 is easily a superior alternative.


Let's revisit the original article[1]. It was not about arguments, but about the pain of writing callbacks and even async/await compared to writing the same code in Go. It had 5 well-defined claims about languages with colored functions:

1. Every function has a color.

This is true for the new Zig approach: functions that deal with IO are red, functions that do not need to deal with IO are blue.

2. The way you call a function depends on its color.

This is also true for Zig: Red functions require an Io argument. Blue functions do not. Calling a red function means you need to have an Io argument.

3. You can only call a red function from within another red function.

You cannot call a function that requires an Io object in Zig without having an Io in context.

Yes, in theory you can use a global variable or initialize a new Io instance, but this is the same as the workarounds you can do for calling an async function from a non-async function. For instance, in C# you can write 'Task.Run(() => MyAsyncMethod()).Wait()'.

4. Red functions are more painful to call.

This is true in Zig again, since you have to pass down an Io instance.

You might say this is not a big nuisance, and almost all functions require some argument or another... But by this measure, async/await is even less troublesome. Compare calling an async function in JavaScript to an Io-colored function in Zig:

  function foo() {
    blueFunction(); // We don't add anything
  }

  async function bar() {
    await redFunction(); // We just add "await"
  }

And in Zig:

  fn foo() void {
    blueFunction();
  }

  fn bar(io: Io) void {
    redFunction(io); // We just add "io".
  }

Zig is more troublesome since you don't just add a fixed keyword: you need to add a variable that is passed along from somewhere.

5. Some core library functions are red.

This is also true in Zig: Some core library functions require an Io instance.

I'm not saying Zig has made the wrong choice here, but this is clearly not colorless I/O. And it's ok, since colorless I/O was always just hype.

---

[1] https://journal.stuffwithstuff.com/2015/02/01/what-color-is-...


> This is also true for Zig: Red functions require an Io argument. Blue functions do not. Calling a red function means you need to have an Io argument.

I don't think that's necessarily true. Like with allocators, it should be possible to pass the IO pointer into a library's init function once, and then use that pointer in any library function that needs to do IO. The Zig stdlib doesn't use that approach anymore for allocators, but not because of technical restrictions but for 'transparency' (it's immediately obvious which function allocates under the hood and which doesn't).

Now the question is, does an IO parameter in a library's init function color the entire library, or only the init function? ;P

PS: you could even store the IO pointer in a public global, making it visible to all code that needs to do IO, which makes the coloring question even murkier. It will be interesting, though, to see how the not-yet-implemented stackless coroutine (e.g. 'code-transform-async') IO system will deal with such situations.


In my opinion you must have function coloring; it's impossible to do async (in the common sense) without it. If you break it down, one function has a dependency on the async execution engine and the other one doesn't, and that alone colors them. Most languages just change the way that dependency is expressed, and that can have impacts on the ergonomics.


Not necessarily! If you have a language with stackful coroutines and some scheduler, you can await promises anywhere in the call stack, as long as the top level function is executed as a coroutine.

Take this hypothetical example in Lua:

  function getData()
    -- downloadFileAsync() yields back to the scheduler. When its work
    -- has finished, the calling function is resumed.
    local file = downloadFileAsync("http://foo.com/data.json"):await()
    local data = parseFile(file)
    return data
  end

  -- main function
  function main()
    -- main is suspended until getData() returns
    local data = getData()
    -- do something with it
  end
    
  -- run takes a function and runs it as a coroutine
  run(main)

Note how none of the functions are colored in any way!

For whatever reason, most modern languages decided to do async/await with stackless coroutines. I totally understand the reasoning for "system languages" like C++ (stackless coroutines are more efficient and can be optimized by the compiler), but why C#, Python and JS?


Look at Go or Java virtual threads. Async I/O doesn't need function coloring.

Here is some example Zig code:

    defer stream.close(io);

    var read_buffer: [1024]u8 = undefined;
    var reader = stream.reader(io, &read_buffer);

    var write_buffer: [1024]u8 = undefined;
    var writer = stream.writer(io, &write_buffer);

    while (true) {
        const line = reader.interface.takeDelimiterInclusive('\n') catch |err| switch (err) {
            error.EndOfStream => break,
            else => return err,
        };
        try writer.interface.writeAll(line);
        try writer.interface.flush();
    }

The actual loop using reader/writer isn't aware of being used in an async context at all. It can even live in a different library and it will work just fine.


Uncoloured async is possible, but it involves making everything async. Crossing the sync/async boundary is never trivial, so languages like Go just never cross it. Everything is coroutines.
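For contrast, a minimal Go sketch of that (standard library only; any function may block, and the call site decides whether it runs concurrently):

    package main

    import (
        "fmt"
        "net/http"
    )

    // fetch blocks, but only its own goroutine; callers need no
    // annotation and no special call syntax.
    func fetch(url string) (int, error) {
        resp, err := http.Get(url)
        if err != nil {
            return 0, err
        }
        defer resp.Body.Close()
        return resp.StatusCode, nil
    }

    func main() {
        go fetch("https://example.com")           // concurrent call
        code, err := fetch("https://example.com") // plain call, same function
        fmt.Println(code, err)
    }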


Runtime borrow checking panics if you use the non-try version, and if you're careful enough to use try_borrow() you don't even have to panic. Unlike Go, this can never result in a data race.

If you're using unsafe blocks you can have data races too, but that's the entire point of unsafe. FWIW, my experience is that most Rust developers never reach for unsafe in their life. Parts of the Rust ecosystem do heavily rely on unsafe blocks, but this still heavily limits their impact to (usually) well-reviewed code. The entire idea is that unsafe is NOT the default in Rust.


I think the original sin of Go is that it neither allows marking fields or entire structs as immutable (like Rust does) nor encourages the use of the builder pattern in its standard library (like modern Java does).

If, let's say, http.Client were functionally immutable (with all fields being private), and you had to set everything using a mutable (but inert) http.ClientBuilder, these bugs would not have been possible. You could still share a default client (or a non-default client) efficiently, without ever having to worry about anyone touching a mutable field.
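A minimal sketch of that hypothetical API in Go (NewClientBuilder and ClientBuilder do not exist in the real net/http; this is just what the shape could be):

    package main

    import (
        "fmt"
        "net/http" // pretend the hypothetical builder below lived here
        "time"
    )

    func main() {
        // Mutable but inert: nothing is shared until Build() is called.
        client := http.NewClientBuilder(). // hypothetical
            Timeout(5 * time.Second).
            Build() // an http.Client with no exported mutable fields

        // Sharing the built client is safe: no caller can flip a field on it.
        resp, err := client.Get("https://example.com")
        fmt.Println(resp, err)
    }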


This is all very nice as an idea or a mythical background story ("Go was designed entirely around CSP"), but Go is not a language that encourages "sharing by communicating". Yes, Go has channels, but many other languages also have channels, and they are less error prone than Go[1]. For many concurrent use cases (e.g. caching), sharing memory is far simpler and less error-prone than using channels.

If you're looking for a language that makes "sharing by communicating" the default for almost every kind of use case, that's Erlang. Yes, it's built around the actor model rather than CSP, but the end result is the same, and with Erlang it's the real deal. Go, on the other hand, is not "built around CSP" and does not "encourage sharing by communicating" any more than Rust or Kotlin are. In fact, Rust and Kotlin are probably a little bit more "CSP-centric", since their channel interface is far less error-prone.
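To make the caching example concrete, here's a minimal Go sketch (nothing beyond the standard sync.RWMutex pattern is assumed):

    package main

    import (
        "fmt"
        "sync"
    )

    // Cache is a mutex-guarded map: the "share memory" version of the
    // caching use case, with no channel-owning goroutine to get wrong.
    type Cache struct {
        mu sync.RWMutex
        m  map[string]string
    }

    func NewCache() *Cache { return &Cache{m: make(map[string]string)} }

    func (c *Cache) Get(k string) (string, bool) {
        c.mu.RLock()
        defer c.mu.RUnlock()
        v, ok := c.m[k]
        return v, ok
    }

    func (c *Cache) Set(k, v string) {
        c.mu.Lock()
        defer c.mu.Unlock()
        c.m[k] = v
    }

    func main() {
        c := NewCache()
        c.Set("a", "1")
        fmt.Println(c.Get("a"))
    }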

[1] https://www.jtolio.com/2016/03/go-channels-are-bad-and-you-s...


You can argue about how likely code like that is, but both of these examples would result in a hard compiler error in Rust.

A lot of developers without much (or any) Rust experience get the impression that the Rust borrow checker is there to prevent memory leaks without requiring garbage collection, but that's only 10% of what it does. Most of the actual pain of dealing with borrow checker errors comes from its other job: preventing data races.

And it's not only Rust. The first two examples are far less likely even in modern Java or Kotlin for instance. Modern Java HTTP clients (including the standard library one) are immutable, so you cannot run into the (admittedly obvious) issue you see in the second example. And the error-prone workgroup (where a single typo can get you caught in a data race) is highly unlikely if you're using structured concurrency instead.

These languages are obviously not safe against data races like Rust is, but my main gripe about Go is that it's often touted as THE language that "Gets concurrency right", while parts of its concurrency story (essentially things related to synchronization, structured concurrency and data races) are well behind other languages. It has some amazing features (like a highly optimized preemptive scheduler), but it's not the perfect language for concurrent applications it claims to be.


Rust concurrency also has issues; there are many complaints about async [0], and some Rust developers point to Go as having green threads. The original author of Rust wanted green threads, as I understand it, but Rust evolved in a different direction.

As for Java, there are fibers/virtual threads now, but I know too little of them to comment on them. Go's green thread story is presumably still good, also relative to most other programming languages. Not that concurrency in Java is bad; it has some good aspects to it.

[0]: An example is https://news.ycombinator.com/item?id=45898923 https://news.ycombinator.com/item?id=45903586 , both for the same article.


Rust has concurrency issues for sure. Deadlocks are still a problem, as is lock poisoning, and sometimes dealing with the borrow checker in async/await contexts is very troublesome. Rust is great at many things, but safe Rust only eliminates certain classes of bugs, not all of them.

Regarding green threads: Rust originally started with them, but there were many issues. Graydon (the original author) has "grudgingly accepted" that async/await might work better for a language like Rust[1] in the end.

In any case, I think green threads and async/await are completely orthogonal to data-race safety. You can have data-race safety with green threads (Rust was trying to have data-race safety even in its early green-thread era, as far as I know), and you can also fail to have data-race safety with async/await (C# might have fewer data-race safety footguns than Go, but it's still generally unsafe).

[1] https://graydon2.dreamwidth.org/307291.html


In .NET, async/await does not protect you from data races, and you are exposed to them as much as you are in Go, but there is a critical difference in that data races in .NET can never result (not counting unsafe) in memory safety violations. They can and will in Go.


Async and concurrency are orthogonal concepts.


While I agree, in practice they can actually be parallel. Case in point: the Java Vert.x toolkit. It uses an event loop and futures, but they have also adopted virtual threads in the toolkit. So you still get your async concepts in the toolkit, but the VTs are your concurrency carriers.


But Rust's async is one of the primary ways to handle concurrency in Rust, right? Like, async is a core part of how Tokio handles concurrency.


Could you give an example to distinguish them? Async means not-synchronous, which I understand to mean that the next computation to start is not necessarily the next computation to finish. Concurrent means multiple different parts of the program may make progress before any one of them finishes. Are they not the same? (Of course, concurrency famously does not imply parallelism, one counterexample being a single-threaded async runtime.)


Async, for better or worse, in 2025 is generally used to refer to the async/await programming model in particular, or more generally to non-blocking interfaces that notify you when they're finished (often leading to the so-called "callback hell" which motivated the async/await model).


If you are waiting for a hardware interrupt to happen based on something external happening, then you might use async. The benefit is primarily to do with code structure - you write your code such that the next thing to happen only happens when the interrupt has triggered, without having to manually poll completion.

You might have a mechanism for scheduling other stuff whilst waiting for the interrupt (like Tokio's runtime), but even that might be strictly serial.


So async enables concurrent outstanding requests.


But even so, the JVM has well-defined data races that may cause logical problems, but can never cause memory issues.

That's not the case with Go, so its data races are significantly worse than in both Rust and Java/C#, etc.


What is your definition of memory issues?

Of course you can have memory corruption in Java. The easiest way is to spawn 2 threads that write to the same ByteBuffer without write locks.


And you would get garbled up bytes in application logic. But it has absolutely no way to mess up the runtime's state, so any future code can still execute correctly.

Meanwhile, a memory issue in C/Rust and even Go will immediately throw every assumption out the window; the whole runtime is corrupted from that point on. If we are lucky, it soon ends in a segfault; if we are less lucky, it can silently cause much bigger problems.

So there are objective distinctions to draw here: e.g., Rust guarantees that the source of such a corruption can only be an incorrect `unsafe` block, and Java flat out has no platform-native unsafe operations, even under data races. Go can segfault with data races on fat pointers.

Of course every language capable of FFI calls can corrupt its runtime, Java is no exception.


> Meanwhile, a memory issue in C/Rust and even Go will immediately throw every assumption out the window; the whole runtime is corrupted from that point on.

In C, yes. In Rust, I have no real experience. In Go, as you pointed out, it should segfault, which is not great, but still better than in C, i.e., it fails early. So I don't understand what your next comment means. What is a "less lucky" example in Go?

> If we are lucky, it soon ends in a segfault; if we are less lucky, it can silently cause much bigger problems.


Silent corruption of unrelated data structures in memory. A segfault only happens if you are accessing memory outside the program's valid address space. But it can just as easily happen that you corrupt something in the runtime, and the GC will wreak havoc, or cause a million other kinds of very hard-to-debug errors.


> But it can just as easily happen that you corrupt something in the runtime, and the GC will wreak havoc

I would love to see an example of this, if you don't mind. My understanding is that the GC in Go actively prevents what you describe. There is no pointer arithmetic in the language. The worst that can happen is a segfault or data corruption due to faulty locking, like the Java example I gave above.


https://lobste.rs/c/vq3zn0

Here is a thread discussing it, but there are multiple posts/comment threads on the topic. In short, slices are fat pointers in the language, and data races over them can cause other threads to observe the slice in an invalid state, which can be used to access memory it shouldn't be able to.
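To make it concrete, here is a minimal Go sketch of that pattern (a deliberately broken program; the exact failure mode depends on compiler and architecture, but the race detector flags it immediately):

    package main

    // s is read and written from two goroutines with no synchronization.
    var s = make([]byte, 1)

    func main() {
        go func() {
            for {
                s = make([]byte, 1)
                s = make([]byte, 4096)
            }
        }()
        for {
            // The slice header (ptr, len, cap) is copied non-atomically,
            // so t can pair the long slice's len with the 1-byte backing
            // array's pointer, making t[4095] an out-of-bounds write.
            t := s
            if len(t) == 4096 {
                t[4095] = 1
            }
        }
    }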


Most people in Japan live outside of the Yamanote circle in Tokyo. Rural and suburban supermarkets have parking lots (although in central areas they can still be quite small), and people still use cars for shopping trips, especially in the countryside.

It is true that grocery packages are much smaller than in the US (since Japanese houses, even in the countryside, are smaller, and I guess the average household size is smaller as well). Shopping carts in regular supermarkets are smaller than abroad, and are usually built to house 1 or 2 shopping baskets you can also carry by hand.

But hey, we still have Costco in Japan, and package sizes and shopping cart sizes are just as big as they are in the US (although the parking lot is probably considerably more crowded). And Costco is extremely popular here. It's far messier than a Japanese supermarket and I do see inconsiderate people sometimes in Costco, but the cars are still parked nicely and most people do return their shopping carts. It would be interesting to compare Costcos in Japan and the US directly though.


I lived well outside of Tokyo, and most supermarkets were eki-mae (near train stations) and did not have parking lots. I did go to markets by car, and the practice I mentioned, shopping carts used exclusively within the store, still held. There very well may be exceptions, but as I noted, the cultural practice even extends to Japanese markets in the United States.


I'm afraid this article kinda fails at its job. It starts out with a very bold claim ("Zig is not only a new programming language, but it’s a totally new way to write programs"), but ends up listing a bunch of features that are not unique to Zig or even introduced by Zig: type inference (invented in the late 60s, first practically implemented in the 80s), anonymous structs (C#, Go, TypeScript, many ML-style languages), labeled breaks, functions that are not globally public by default...

It seems like this is written from the perspective of C/C++ and Java and perhaps a couple of traditional (dynamically typed) languages.

On the other hand, the concept that makes Zig really unique (comptime) is not touched upon at all. I would argue compile-time evaluation is not entirely new (you can look at Lisp macros back in the 60s), but the way Zig implements this feature and how it is used instead of generics is interesting enough to make Zig unique. I still feel like the claim is a bit hyperbolic, but there is a story that you can sell about Zig being unique. I wanted to read this story, but I feel like this is not it.


D has had compile time function execution since 2007 or so.

https://dlang.org/spec/function.html#interpretation

It doesn't need a keyword to trigger it. Any expression that is a const-expression in the grammar triggers it.


Hello Mr. Bright. I've seen similar comments from you in response to Zig before. Specifically, in the comments on a blog post I made about Zig's comptime. I took some time reading D's documentation to try to understand your point (I didn't want to miss some prior art, after all). By the time I felt like I could give a reply, the thread was days old, so I didn't bother.

The parent comment acknowledges that compile time execution is not new. There is little in Zig that is, broad strokes, entirely new. It is in the specifics of the design that I find Zig's ergonomics to be differentiated. It is my understanding that D's compile time function execution is significantly different from Zig's comptime.

Mostly, this is in what Zig doesn't have as a specific feature, but uses comptime for. For generics, D has templates, Zig has functions which take types and return types. D has conditional compilation (version keyword), while Zig just has if statements. D has template mixins, Zig trusts comptime to have 90% of the power for 10% of the headache. The power of comptime is commonly demonstrated, but I find the limitations to be just as important.

A difference I am uncertain about is if there's any D equivalent for Zig having types being expressions. You can, for example, calculate what the return type should be given a type of an argument.

Is this a fair assessment?


> A difference I am uncertain about is if there's any D equivalent for Zig having types being expressions. You can, for example, calculate what the return type should be given a type of an argument.

This is done in D using templates. For example, to turn a type T into a type T*:

    template toPtr(T) { alias toPtr = T*; } // define template

    toPtr!int p; // instantiate template

    pragma(msg, "the type of p is: ", typeof(p));

The compiler will deduce the correct return type for a function when auto is specified as the return type:

    auto toPtr(int i) { return cast(float)i; } // returns float

For conditional compilation at compile time, D has static if:

    int square(int n) { return n * n; } // ordinary function, runnable by CTFE

    enum x = square(3);  // evaluated at compile time
    static if (x == 4)
        int j;
    else
        double j;
    auto k = j; // j was declared by whichever branch was taken

Note that static if does not introduce a new scope, so conditional declarations will work.

The version keyword is similar, but is intended for module-wide versions, such as:

    version (OSX)
    { stuff for OSX }
    else version (Win64)
    { stuff for Windows 64 }
    else
        static assert(0, "unsupported OS");

Compile time execution is triggered wherever a const-expression is required. A keyword would be redundant.

D's mixins are for generating code, which is D's answer to general purpose text macros. Running code at compile time enables those strings to be generated. The mixins and compile time execution are not the same feature. For a trivial example:

    string cat(string x, string y) { return `"` ~ x ~ "," ~ y ~ `"`; }
    string s = mixin(cat("hello", "betty")); // runs cat at compile time, mixes in the resulting string literal
    writeln(s); // prints: hello,betty

I'll be happy to answer any further questions.


I appreciate you taking the time to give examples in D. People are often under the mistaken impression that Zig's compile-time execution is revolutionary, from how excessively it is hyped, failing to realize that many other languages have similar features, or that users can get similar results by doing things differently, because languages have different philosophies and design strategies.

For example, the creator of Odin has stated in the past that he would rather come up with optimal solutions without metaprogramming, despite enthusiasts trying to pressure him into adding such features to that language.


Maybe I don't understand: in D, how do I write a function which makes a new type?

For example, Zig has a function ArrayHashMapWithAllocator which returns, well, a hash table type in a fairly modern style, no separate chaining and so on.

Not an instance of that type: it returns the type itself. The type didn't exist; we called the function; now it does exist, at compile time (because clearly we can't go around making new types at runtime in this sort of language).


You use templates and string mixins alongside each other.

The issue with mixins is that using string concatenation to build types on the fly isn't the greatest debugging experience, as there is only printf debugging available for them.


See my other post.


Yes, and D's comptime is much more fun, IMHO, than Zig's! Yet everyone talks about Zig's comptime as if it were unique or new.


But Zig doesn't need a keyword to trigger it either? If it's possible at all, it will be done. The keyword should just prevent run-time evaluation. (Unless I grossly misunderstood something.)


I'm no expert on Zig, but "comptime" is the keyword to trigger it.


I'm pretty sure the "comptime" keyword only forces you to provide an argument constant at compile time for that particular parameter. It doesn't trigger the compile time evaluation.


That's how the constant is provided - through compile time evaluation.


Yes, but compile-time evaluation in Zig doesn't require the "comptime" keyword. Only specific cases such as compile-time type computation do (but these specific cases are not provided by compile-time function evaluation in D anyway, so language choice wouldn't make a difference here).


Partial evaluation has been quite well known at least since 1943 and Kleene's s-m-n theorem. It has since been put to use, in various forms, by quite a few languages (including C++ in 1990, and even C in the early seventies). But the extent and the way in which Zig specifically puts it to use -- which includes, but is not limited to, how it is used to replace other features that can then be avoided (and all without macros) -- is unprecedented.

Pointing out that other languages have used partial evaluation, sometimes even in ways that somewhat overlap with Zig's use, completely misses the point. It's at least as misplaced as saying that there was nothing new or special about iPhone's no-buttons design because touch screens had existed since the sixties.

If you think Zig's comptime is just about running some computations at compile time, you should take a closer look.


I'd like to see an example, as I cannot think of one!


An example of what?


Not OP, but I guess based on your comment:

> But the extent and the way in which Zig specifically puts it to use -- which includes, but is not limited to, how it is used to replace other features that can then be avoided (and all without macros) -- is unprecedented.

That MrWhite wanted to know an example of Zig's comptime that is not merely a "macro", but rather its usage as a replacement for other features (I guess more complex...)

PS: just interested in Zig, I'd like some pointers to these cool features :)


An unprecedented use.


Ok, so a primary goal of comptime in Zig is to avoid needing certain specialised features while still enjoying their functionality, in particular, generics, interfaces, and macros. I'm not aware of any language that has been able to eliminate all of these features and replace them with a simple, unified partial evaluation mechanism.

In addition, there's the classic example of implementing a parameterised print (think printf) in Zig. This is a very basic use of comptime, and it isn't used here in lieu of generics or of interfaces, but while there may be some language that can do that without any kind of explicit code generation (e.g. macros), there certainly aren't many such examples: https://ziglang.org/documentation/0.15.2/#Case-Study-print-i...

But the main point is that the unprecedented use of partial evaluation is in having a single unified mechanism that replaces generics, interfaces, and macros. If a language has any one of them as a distinct feature, then it is not using partial evaluation as Zig does. To continue my analogy to the novel use of a touchscreen in the iPhone, the simplest test was: if your phone had a physical keypad or keyboard, then it did not use a touchscreen the way the iPhone did.


D's `write` function is generic:

    write(1,2,"abc",4.0,'c');

write is declared as:

    void write(S...)(S args) { ... }

where `S...` means an arbitrary sequence of types represented by `S`. The implementation loops over the sequence, handling each type in its own individual fashion. User-defined types work as well.


If D has a separate feature for one of: generic types, interfaces and macros, then obviously it doesn't use partial evaluation similarly to how Zig does. It seems to me that it has all three: templates, interfaces, and string mixins. So if Zig uses its unified partial evaluation feature to eliminate these three separate features, why bring up D, which clearly does not eliminate any one of them?

It's like saying that the iPhone design wasn't novel except for the fact that prior art all had a keypad. But the design was novel in that it was intended to eliminate the keypad. Zig's comptime feature is novel in that it exists to eliminate interfaces, generics, and macros, and you're bringing up a language that eliminates none of them.

So D clearly isn't an example, but perhaps there's some other language I haven't heard of. Just out of curiosity, can a printf in D not only check types at compile time but also generate formatting code while still allowing for runtime variables and without (!!!) the use of string mixins? Like I said, it's possible there's precedent for that (even though it isn't the distinguishing feature), and I wonder if D is that. I'm asking because examples I've seen in D either do use string mixins or do not actually do what the Zig implementation does.


If you have an example of what you're talking about, I'd like to see it.


An example of Zig not having generics, interfaces, and macros? I don't understand.


@pron If all you mean is that there is more syntax in D than in Zig to achieve the same or a similar thing, then you may be a bit aggressive in how you communicate it.


It's not about more syntax, it's about design, and how Zig was the first to use partial evaluation to do away with several, rather common, features.

It's like how the novelty of the iPhone's touchscreen design was in not having a keypad, or that the novelty of the spork wasn't in inventing the functionality of either the spoon or the fork, but in having a single utensil that performs both. The more important aspect isn't the functionality but the design. I'm not saying you need to like any of these designs, but they are novel.

Saying that you could have a similar functionality by other means misses the point as much as saying that there's nothing special about a spork because if you have a spoon and a fork, then you have the same functionality. But you still don't have a spork.

You could, then, ask what the point of the novel design is. Well, in some languages you have generics and interfaces and compile-time expressions, but because none of these is general and powerful enough, you also have macros. Macros are very powerful - perhaps too powerful - but they are difficult to understand, so they're used sparingly even if they can subsume other functionality.

Zig has shown that you can do almost anything you would reasonably want to do with macros with partial evaluation that has access to reflection. That wasn't obvious at all. And because that feature was not only powerful enough to subsume other features and make them redundant, but also very simple and easy to understand, it ended up with a design that is both minimal and easy to read (which is important for code reviews) but also highly expressive. Again, you don't have to like this rather minimalistic design, but it is novel.


> Zig was the first to use partial evaluation to do away with several, rather common, features

Please show an example of Zig partial evaluation.


I posted a few examples below: https://news.ycombinator.com/item?id=45865028


The partial evaluation you mentioned.


I'm still not sure what exactly you're asking, but the implementation of print linked above (https://ziglang.org/documentation/0.15.2/#Case-Study-print-i...) typically requires some kind of syntactic macros in other languages, as does something like "innerParse" here (https://ziglang.org/documentation/0.11.0/std/src/std/json/st...). This (https://ziglang.org/documentation/0.15.2/#Generic-Data-Struc...) would typically require some kind of a separate generics or templates feature in other languages, and to see how comptime replaces interfaces, search for "formatType" here: https://marcelgarus.dev/comptime.

Again, the novel use of partial evaluation in Zig is that it eliminates generics, interfaces, and macros. Any language that has one or more of these features does not have this novel design.


print is a fairly complicated case, as it has to deal with all kinds of different types.

I mean a simple example. Just to illustrate the concept. Like the examples I provided here:

https://news.ycombinator.com/item?id=45859669


If you scroll a bit up on the page from the print example, you'll see more introductory stuff: https://ziglang.org/documentation/0.15.2/#Introducing-the-Co...


Hi Walter! Big fan. What do you think of Zig? How would you like to see it evolve? Are there any things from Zig that inspire you to work in D?


I have never written a Zig program; I've just browsed the specification. I do admire the energy and enthusiasm of its creators. Its fast compiles are well done.

Mostly what I think is that the syntax is more complex, with less utility, than the equivalent D syntax. For one, the use of the 'comptime' keyword is not necessary. For another, the import declaration is overly complex.

I don't know enough about Zig to make informed suggestions on evolving it. D has borrowed stuff from many languages, but I don't recall suggestions in the D forums of a Zig feature that should be added to D, though I might have missed it.


Perl 5 had it before, either by constant folding or by BEGIN blocks.

Constant folding just got watered down by the many dynamic evangelists in the decades after, such that even C or C++ didn't enforce it properly. In Perl 5 it was watered down on add (+) by some hilariously wrong argumentation at the time. So you could precompute mult const expressions, but not add.


How are perl5’s BEGIN blocks equivalent to comptime? It’s been a while, but I recall BEGIN blocks executing at require time—which, in complicated pre-forking setups that had to be careful about only requiring certain modules later during program execution because they did dumb things like opening connections when loaded, meant that reasoning about BEGIN blocks required a lot more careful thought than reasoning about comptime.

The same is true for templates, or macros—all of which are distinguished by being computed in a single pass (you don’t have to think about them later, or worry about their execution being interleaved with the rest of the program), before runtime start (meaning that certain language capabilities like IO aren’t available, simplifying reasoning). Those two properties are key to comptime’s value and are not provided by perl5’s BEGIN blocks—or probably even possible at all in the language, given that it has eval and runtime require.


BEGIN blocks execute at compile-time. require is just a wrapper to load a module at compile-time.

When you want to use state, like opening a file for run-time use, use INIT blocks instead. These are executed before runtime, after compile-time.

My perl compiler dumps the state of the program after compile-time. So everything executed in BEGIN blocks is already evaluated. Opening a file in BEGIN would not open it later when required at run-time, and compile-time is separated from run-time. All BEGIN state is constant-folded.


I think we’re using different definitions of “compile time”.

I know who you are, and am sure everything you say about the mechanisms of BEGIN is correct, but when I refer to “compile time”, I’m referring to something that happens before my program runs. Perl5’s compilation happens the first time a module is required, which may happen at runtime.

Perhaps there’s a different word for what we’re discussing here: one of the primary benefits of comptime and similar tools is that they are completed before the program starts. Scripting languages like perl5 “compile” (really: load code into in-memory intermediate data structures to be interpreted) at arbitrary points during runtime (require/use, eval, do-on-code).

On the other hand, while code in C/Zig/etc. is sometimes loaded at runtime (e.g. via dlopen(3)), its compile-time evaluation is always done before program start.

That “it completed before my code runs at all” property is really important for locality of behavior/reasoning. If the comptime/evaluation step is included in the runtime-code-load step, then your comptime code needs to be vastly more concerned with its environment, and code loading your modules has to be vastly more concerned with the side effects of the import system.

(I guess that doesn’t hold if you’re shelling out to compile code generated dynamically from runtime inputs and then dlopen-ing that, but that’s objectively insane and hopefully incredibly rare.)


When I read "I can easily say that Zig is not only a new programming language, but it’s a totally new way to write programs" I expected to see something as shocking as LISP/Smalltalk/Realtalk/EVE/FORTH/Prolog... A whole new paradigm, a whole new way to program. Or at least a new concept like the pure functionalism of Haskell, or prototyping like in Lua/JS/Io. And I was so damn shocked at how I must have missed something so huge, having read the entirety of Zig's documentation and not noticed anything. As you mentioned, it turned out to be nothing, and then I was shocked about why it was at the top of HN. Based on the comments, that also turned out to be for no reason.


The idea of modern society is "get hyped for the new thing". The tech crowd did not escape that, unfortunately, and keeps rediscovering techniques that were already possible more than 50 years ago, because they don't want to learn the history of the technology they are using.


"Computing is a fashion show"

-- Alan Kay


Dev celebs make blog posts and videos on how Zig is awesome and unique, so the herd repeats.


The same can be said about other programming languages. Therefore that comment of yours has zero value…


Nah, there are many languages ignored or even despised by those dev celebs.

> yours has zero value

Yours didn't bring much either, so I suppose value isn't strictly required.


Good point.

BTW: Your reply made me laugh so hard… I loved it.

Thanks.


> Also turned out for no reason based on the comments.

The reason is the clickbait title.


Agreed.

But I would not put comptime up as some sort of magical invention. It's still just a newish take on metaprogramming; we have had that since forever. From my minimal time with Zig, I kind of think of comptime as a better version of C++ templates.

That said, Zig is possibly a better alternative to C++, but it's not that exciting for me. I kind of don't get why so many think it's the holy grail; first it was Rust, and now Zig.


As much as I dislike Rust, I gotta give it credit where it's due. It has something unique: a borrow checker. What is so unique in Zig?


> It has something unique: a borrow checker.

Rust's borrow checker isn't unique either, but was inspired by Cyclone: https://en.wikipedia.org/wiki/Cyclone_(programming_language)

IMHO a programming language doesn't need a single USP, it just needs to include good existing ideas and (more importantly) exclude bad existing ideas (of course what's actually a good and bad idea is highly subjective, that's why we need many programming languages, not few).


Rust's borrow checker is unique in the sense that it is production-ready. Cyclone is indeed prior art, but it's not as if it ever got beyond the research project stage.


I don't necessarily disagree, of course. That is why I like Odin the most so far, and perhaps C3.


> I'm afraid this article kinda fails at at its job

Yeah, I know nothing about Zig, and was excited by the author's opening statement that Zig is the most surprising language he has encountered in a 45-year software career...

But this is then immediately followed by saying that the ability to compile C code, and to cross-compile, are the most incredible parts of it, which is when I immediately lost interest. Having a built-in C compiler is certainly novel, and perhaps convenient for interop, but if the value goes significantly beyond that, then the author is failing to communicate it.


The code samples are so weird... Some are images, others are not, and there's like 10 different color schemes (even among the textual ones, it's not consistent). That actually takes some kind of effort to achieve :D.


gives you a preview of the experience of using it :)


> Zig is not only a new programming language, but it’s a totally new way to write programs

I'd say the same thing about Rust. I find it the best way to express when and what code should run at any given point in the program, and the design is freakin' interstellar: it is basically a "query engine" where you write a query of some code against the entire available "code space", including the root crate and its dependencies. Once you understand that, programming becomes naming bits of code and then writing queries for the ones you wish to execute.


As someone not really familiar with Rust, this sounds intriguing, but I don’t fully understand. Do you have any links or examples that could clarify this for someone who is just starting out with Rust?


Not GP, but I genuinely don't understand what GP is talking about.


Compile-time execution seems to be a standard feature in D as well.

Powerful macros that generate code that then gets compiled =)


"this article kinda fails at at its job"

Definitely.


> C/C++

It has been several decades since putting a slash between these two made sense, lumping them together like this. It would be similar to saying something like Java/Scala or Objective-C/Swift. These are completely different languages.


Nope, that is an English grammar construct that is a shortcut for "and" and "or", as any good English grammar book will explain.

Indeed you see those for Java/Scala and Objective-C/Swift in technical books and job adverts.

Any search of the careers sites or documentation of companies that have seats at ISO and sell/develop C and C++ compilers will turn up such C/C++ references in a couple of places.

Do you need any example?


In the general case yes, but "C/C++" became an idiom for the stance that C and C++ are essentially the same, that C++ is a superset of C, or that C++ is just the replacing successor of C and C should be treated as superseded. This is quite wrong, and thus there is a lot of rightful objection to that term. Personally, I use "C, C++" when I want to talk about both without claiming that they are the same language.


Nah, that is what pedantic folks without English grammar knowledge keep complaining about, instead of actually discussing better security practices in both languages.

It is a bikeshedding discussion that doesn't help with anything regarding the lack of security in C, or the legions of folks that keep using C data types in C++, including bare-bones null-terminated strings and plain arrays instead of collection types with bounds checking enabled.


This has nothing to do with bikeshedding, it is a genuine misunderstanding of these two languages that is propagated in this way. This is not about grammar.


Yet those complaining usually make use of plenty of C constructs, data types, and standard library functions in their C++ projects, instead of modern C++ practices.


"C-like" code in C++ still has C++ semantics. "modern C++" is a disputed paradigm, but not necessarily how things should be done. When you write C++, but not "modern C++", that doesn't mean you are writing C. There are also modern features in C. https://floooh.github.io/2019/09/27/modern-c-for-cpp-peeps.h...


Null terminated strings with pointer arithmetic instead of std::string and string_view, pointer arithmetic instead of std::span, bare pointer arrays instead of std::array and std::vector, C style casts,...


Sure, that is all modern C++, but we are talking about C vs. C++, not about C++ vs. modern C++ and we certainly don't want to conflate non-modern C++ with C.


In my opinion, this is an important issue and not "bikeshedding", but it can be discussed whether the term "C/C++" is always an example of that idea or not. I think it is not, but it is connected enough, that I won't use it to side step the issues.


So there will be zero C language constructs and zero C standard library functions being called in your C++ source code?


I mostly write C, but yes even a simple call to e.g. malloc has different semantics in C++ (you need to cast).


Proper C++ should use new, delete, custom allocators, and standard collection types.

Even better, all heap allocations should be done via ownership types.

Calling into malloc () is writing C in C++, and should only be used for backwards compatibility with existing C code.

Additionally there is no requirement on the C++ standard that new and delete call into malloc()/free(), that is usually done as a matter of convenience, as all C++ compilers are also C compilers.


> Calling into malloc () is writing C in C++, and should only be used for backwards compatibility

And this is exactly the stance I am arguing against. C++ is not the newer version of C. It forked off at some point and is a quite different language now.

One of the reasons I use malloc is compatibility with C. It is not backward compatibility, because the C code is newer. In fact, I actively change the code, when it needs a rewrite anyway, from C++ to C.

The other reason for using it even when writing C++ is that new alone doesn't allow allocating without also calling the constructor. For that, I call malloc first and then invoke the constructor with placement new. For deallocation, I call the destructor and then free. This also has the additional benefit that your construction and destruction logic can fail and you can roll it back.


So it would fail to compile when static analysis is configured to fail the build on errors when compiling C with a C++ compiler.

Finally, people like to argue about C versus C++ when it is convenient to do so, yet the compiler switches that enable C extensions in C++ mode keep being used across many projects.


> So it would fail to compile when configuring static analysis to build on error when using C with C++ compiler.

What do you mean? I don't think I can follow you.

> yet the compiler language switches to use C extensions in C++ mode keep being used across many projects.

When you use compiler extensions that just happen to be available in both C and C++, I wouldn't say you are writing C in C++; I mean, the extension isn't standard C either.

Code written in C++ has different semantics, even when it is word-for-word the same as C code. They ARE different languages.


Only when removing everything that C++ adopts from C, other than low level implementation details that cannot be done in any other way.

That is what writing proper modern C++ is all about; anything else is writing C in C++.

Null terminated strings with pointer arithmetic instead of std::string and string_view, pointer arithmetic instead of std::span, bare pointer arrays instead of std::array and std::vector, C style casts,....


> That is what writing proper modern C++ is all about, anything else is writing C in C++.

That is your claim, and I do not agree with it. C++ that does not fit your taste of modern C++ does not suddenly become C; it is likely a syntax error in C, and when it compiles, it has a different meaning. Code that may look to you like C in C++ has C++ semantics, which differ from C semantics.


You can write Objective-C and C++ in the same source file, even mixing them in the same line of code, and it compiles together, but no one can say these are not two very different languages.


The name Objective-C++ exists for a reason.


If you think Objective-C++ is a bona fide language in its own right rather than what it clearly is -- a convenience to use two very different languages together -- then there is no point trying to debate with you further.


Custom allocators in C++ were never enjoyable nor easy; they introduce more template slop everywhere. That's one thing I really liked about EASTL: it did allocators much better. But it wasn't maintained.


I don't use templates for that at all. Instead of:

    Class * object = new Class (...);
I do:

    Class * object;
    object = static_cast<Class *> (malloc (sizeof (Class)));
    if (nullptr == object) // ... error handling

    new (object) Class (...); // placement new, requires <new>

or if the constructor needs to fail:

    Class * object;
    object = static_cast<Class *> (malloc (sizeof (Class)));
    if (nullptr == object) // ... error handling

    new (object) Class ();
    if (!Class::init (object, ...)) {
        object->~Class ();
        free (object);

        // ...  other error handling
    }


Not in this context, that’s incorrect.


What context?

The pedantic folks that jump off their chairs when seeing something that all the companies paying WG21 salaries use in their docs?

If only they would advocate for writing safer code with the same energy, instead of discussing nonsense.

That is why the C and C++ communities are looked down on by security folks and governments.


The problem is that it's a bit tricky to type the intersection symbol (∩), because C ∩ C++ makes more sense.


Anonymous structs and type inference are things even C has, although support for the latter is quite recent and limited.


Yeah, as I keep repeating, it is Modula-2 in C clothes, minus comptime, which, as others have mentioned, D has had for quite some time.


As a C++ developer who's heard of Zig but never dived into it, I was reading this article scratching my head, wondering what is actually so unique about it.

Why the blog has a section on how to install it on the PATH is also very puzzling.


Zig is simple, clever, and clean. Certainly not perfect, but it addresses much of what I disliked about C++. I wanted to like D and Rust, but they seem just as complex as C++. Yes, better in some ways, but still full of complexity.


[flagged]


I use Lua in 2025, it’s brilliant.

By the way, so does everyone using neovim.


> By the way, so does everyone using neovim.

See also Roblox (and there used to be a whole bunch of game engines that had Lua scripting but I -think- most of them have switched to "NIH!" scripting engines?)


Are you ok?


> C/C++

No such thing. Also C++ has most of those features too.

