Game Boy Advance “Hello World” Using Zig (github.com/wendigojaeger)
188 points by WendigoJaeger on Dec 9, 2019 | 103 comments



The issue to get rid of the llvm-objcopy build dependency is https://github.com/ziglang/zig/issues/2826. This appears to be the only thing that is currently required to be installed besides Zig itself.


I tried to get Zig running on a Game Boy Advance a while ago, but never really got to a point I was happy with. Great to see someone else working on it!

It's cool that you got the built-in linker to work. When I tried, I got a bunch of "unsupported architecture" errors and had to use GNU ld. Have things gotten better recently or is this an ARM-vs-Thumb thing?

I was excited to be able to use packed structs to represent the different IO registers -- at last, no more OR-ing the wrong flag into the wrong register! -- but unfortunately those don't quite work right on ARMv4T yet: https://github.com/ziglang/zig/issues/2767
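
For the curious, here's roughly the sort of thing I mean (a sketch; the field layout follows GBATEK's DISPCNT description, so double-check it before use):

    // Display control register at 0x04000000 (layout per GBATEK).
    const DisplayControl = packed struct {
        mode: u3, // video modes 0-5
        gbc: u1, // read-only on real hardware
        frame_select: u1,
        hblank_free: u1,
        obj_1d_mapping: u1,
        forced_blank: u1,
        bg0: u1,
        bg1: u1,
        bg2: u1,
        bg3: u1,
        obj: u1,
        win0: u1,
        win1: u1,
        obj_window: u1,
    };

    const REG_DISPCNT = @intToPtr(*volatile DisplayControl, 0x04000000);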


For anyone else who doesn't know what Zig is: https://ziglang.org/


Hmm that's certainly an interesting project. But I have to wonder if writing "defer file.close()" everywhere is really superior to RAII. Isn't defer also kind of hidden control flow? At that point I feel like RAII simply offers more, first of all you don't have to write what you want to defer everywhere, and second of all you can actually move resources around and still rely on it.

Also the integer overflow section struck me as kinda odd - what if I want wrapping behavior? That's not exactly a super uncommon scenario.


Wrapping is not a good default, and actually it's not that common a scenario.

When you want arithmetic to wrap, you should use some way to explicitly denote that you want it to wrap. (I think it could look like an operation with a modulo reduction or mask applied to it.)

defer is way more transparent than RAII, because your code is right there in the function call, not hidden in a pile of classes that might change underneath.


I don't mind it not being the default - but the section doesn't mention at all how to get wrapping behavior. Just one sentence like "use this different operator to get wrapping behavior" would be fine, but since they didn't mention it at all, I came away with the idea that it is not supported at all, which honestly would be a massive problem.

>defer is way more transparent than RAII, because your code is right there in the function call, not hidden in a pile of classes that might change underneath.

I mean, sort of? It seems way less composable though, if you have an object which contains 5 different resources, now you have to defer 5 things? How does this interact with return values, if you defer file.close() for example but return the file, does it still get closed? And if you have a heap-allocated array of those objects, where objects get inserted at different places, defer doesn't work at all anymore, since it's just bound to function scope? Because that's usually the scenario that can be annoying to handle in C (as opposed to just closing whatever was opened in the same function scope), so it seems a bit odd to me to kind of violate that "no hidden control flow" principle for something that doesn't even solve the primary issue that C resource handling has.


> I don't mind it not being the default - but the section doesn't mention at all how to get wrapping behavior.

I think that piece would get too long if it described everything in depth.

https://ziglang.org/documentation/master/#Wrapping-Operation...

> I mean, sort of? It seems way less composable though, if you have an object which contains 5 different resources, now you have to defer 5 things?

Defer is used for scope-local resources. If you create five different things in the local scope that need to be cleaned up, then they are most likely not part of the same object; yes, you will need to clean them all up. If you have an object with five different things, then you would defer that object's cleanup routine.

> How does this interact with return values, if you defer file.close() for example but return the file, does it still get closed?

Yes. Why would you defer close on a file that you are planning to return? That's by definition not some scope-local resource with scope lifetime that you want to clean up at the end of the scope. That's like implementing a malloc that frees the block before returning.

Note that there is errdefer, which you can use to defer the release of resources that would be returned normally but need to be cleaned up on error. https://ziglang.org/documentation/master/#defer
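
A sketch of the pattern (openFileForWriting and writeHeader are hypothetical helpers):

    fn openLog(path: []const u8) !File {
        var file = try openFileForWriting(path);
        // Runs only if a later error exits this function; on success
        // the file is returned to the caller still open.
        errdefer file.close();
        try writeHeader(file);
        return file;
    }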

> And if you have a heap-allocated array of those objects, where objects get inserted at different places, defer doesn't work at all anymore, since it's just bound to function scope?

I don't understand what you're asking about here. If you malloc() an array in C, then you free() it when you're done with it. The exact same mechanism works in Zig, but defer also helps you ensure free() actually gets called when you go out of scope, so you can still use early returns instead of ret = foo; goto err_bar;
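
Something like this (a sketch; readInput is a hypothetical helper):

    fn sumOfInput(allocator: *Allocator) !u64 {
        const buf = try allocator.alloc(u8, 1024);
        // Runs on every exit path, including early error returns,
        // so no goto-based cleanup chain is needed.
        defer allocator.free(buf);
        const n = try readInput(buf);
        var total: u64 = 0;
        for (buf[0..n]) |byte| total += byte;
        return total;
    }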

> Because that's usually the scenario that can be annoying to handle in C (as opposed to just closing whatever was opened in the same function scope), so it seems a bit odd to me to kind of violate that "no hidden control flow" principle for something that doesn't even solve the primary issue that C resource handling has.

I don't agree that it's violating the "no hidden control flow" constraint. It's explicit, right there in the scope, not stashed away in a bunch of destructors. It's as transparent as return or goto or break.

Again, I don't really understand the rest of your complaints (or what your primary issue with C's resource handling is). Defer is just a mechanism to make it easier to release resources whose lifetime is tied to a scope.


>I think that piece would get too long if it described everything in depth.

I don't think one more sentence about a very fundamental operation in a page-long document would really be misplaced. But as far as the language itself goes, those operators seem fine to me.

>Note that there is errdefer

That is definitely more useful, and covers the situation I was thinking of.

>I don't understand what you're asking about here. If you malloc() an array in C, then you free() it when you're done with it.

This is about composed objects. So an allocated array of objects, which in turn also hold allocated objects, which might in turn also hold allocated objects.

>I don't agree that it's violating the "no hidden control flow" constraint. It's explicit, right there in the scope, not stashed away in a bunch of destructors. It's as transparent as return or goto or break.

Not sure I can completely agree with that. Fundamentally, defer means that something happens at a point where there is no associated code. If you read the line "return 5;", you don't know if that triggers one (or multiple) function calls. Yes, you only have to look for defers in the function scope (which could be 5000+ lines in some cases), but the same is true of destructors, and it is still a "hidden code path". For me personally it doesn't really matter at that point whether I first have to search the function body and then potentially a cleanup function, or first the function body and then potentially the destructors. It removes the ability to reason about the code line by line either way. That might be a price worth paying for easier cleanup logic, but the way I'm seeing it, I'm only getting about a quarter of the benefit of destructors while paying essentially the same price.


> This is about composed objects. So an allocated array of objects, which in turn also hold allocated objects, which might in turn also hold allocated objects.

The way this is solved is by calling the cleanup routine of the first object in the hierarchy. If that object is responsible for the lifetime of other objects, then it will also call their cleanup routines. From the user's perspective, those are invisible, just as any calls to free() or close() or fflush() inside an fclose() are.
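
In Zig, that convention looks roughly like this (a sketch):

    const Connection = struct {
        file: File,
        buf: []u8,

        // The outer deinit releases everything the object owns;
        // callers only ever invoke this one routine.
        fn deinit(self: *Connection, allocator: *Allocator) void {
            self.file.close();
            allocator.free(self.buf);
        }
    };
A user then writes a single defer conn.deinit(allocator), no matter how many resources sit underneath.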

> If you read the line "return 5;", you don't know if that triggers one (or multiple) function calls.

If you read the line "continue;" or "break;", you don't know if that triggers one (or multiple) function calls until you read the surrounding code.

I still find it much easier to follow code inside a function than to jump through classes.

But the bigger picture is that it makes lifetime management explicit and thus transparent, and you needn't complicate the language with copy constructors and move semantics.


>If you read the line "continue;" or "break;", you don't know if that triggers one (or multiple) function calls until you read the surrounding code.

Not sure what you mean exactly - you have to read the surrounding code to know where the control flow continues, but there's no hidden function call between that point and the break. That's like arguing you need to understand the entire code base to know what return does, since it returns control flow to potentially a completely different file. But that's not really the point; the point is that it doesn't do anything else in between - if you know where control flow is going, you know everything that is happening. Even more so with break and continue, since where their control flow continues doesn't even depend on runtime state, whereas return could return to different places depending on where the function was called.


With break, control flow goes out of the loop or out of the switch. You may find function calls there. With continue, control flow goes to the start of the loop. You may find function calls there. With return, control goes through the defer blocks and then out of the function. Similar reasoning for goto. In every case, you know where the control flow goes by looking at the context inside the function.

I don't agree with the assessment that one of them is more hidden than the other.


>With break, control flow goes out of the loop or out of the switch. You may find function calls there.

Yeah but you don't have to look for anything in between. You look for where it goes, and that's it. That there could be a function call after the continue has executed isn't relevant at all; there could also be a function call after a return has executed. Stepping through the code in your head is trivial, because execution never jumps anywhere without an explicit statement to jump.

With defers, you need to make sure you find every single defer that was executed up to the return (which might not be trivial if defers are executed conditionally). With a break there is exactly one, unconditional, position at which execution continues - and if you have found it, there is no reason to keep looking anywhere else.

I mean wanting to make the defer trade-off but not the destructor trade-off, fine, that is ultimately a matter of opinion. But defer being hidden control flow is just objectively true, execution jumps somewhere without a corresponding explicit jump in the code (return only explicitly jumps out of the function).


For wrapping behaviour you need to add % symbol, like a+%b is wrapping addition.
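
For example:

    const std = @import("std");

    test "wrapping addition" {
        var x: u8 = 255;
        x +%= 1; // wraps to 0 instead of tripping the overflow check
        std.debug.assert(x == 0);
    }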


RAII requires even more boilerplate.


I'm not sure how RAII could ever need more code, when the only difference is that functions get called automatically (instead of through close(), or a close() inside a defer).


There’s way more going on with RAII.

With RAII, you’re expressing ownership in code. That means that you have to write all the extra code to track that ownership in the type system or at runtime. Every time I want to write a class that manages a resource with RAII, I am almost always writing a copy constructor, move constructor, assignment operator, move assignment operator, and destructor.

On the other hand, if I write the equivalent code in Go or Nim, I can just add something like a Close() method. Because ownership isn’t expressed in code, there is a higher chance to make mistakes, but the code is simpler.


If you need a deep copy you also need to write a copy() function for the object without RAII. (Although one could argue that that is still RAII, just without operator overloading.) But in any case, for deep copies, the copy code needs to be somewhere.

If you don't need a deep copy (which honestly you basically never need once you already have some container classes), you can pretty much always just use a unique_ptr or write a unique_handle class or something to that effect, which means you never need to write any of the operators.


If you just want a unique_ptr, then you should at least write the code to delete the relevant operators or make them private. Boilerplate, yes, but that is how idiomatic C++ code is written.


I'm not sure what you mean. If you have code like this

    #include <memory>

    struct Foo {
        std::unique_ptr<int> p;
    };
Foo is automatically not copyable. That code is perfectly good idiomatic C++, with zero boilerplate. And that is how most classes actually end up looking in idiomatic C++; there's almost never a need to implement the operators.


> Every time I want to write a class that manages a resource with RAII, I am almost always writing a copy constructor, move constructor, assignment operator, move assignment operator, and destructor.

No you aren't. You only do that at the lowest abstraction level. For normal classes you can add a vector member and it will just work (see "rule of zero").

> On the other hand, if I write the equivalent code in Go or Nim, I can just add something like a Close() method.

Yes, and then find and update every place in the code where that class is instantiated. If the class was ever instantiated as a temporary, you also need to pull it out into a named variable so you can use defer. If the class was ever used in a vector, you need to find all the places where an element of the vector is removed and make sure close is being called there, etc.

This is really awful, so in a language without RAII, you are strongly incentivized not to tie ownership to object lifetime. I've found the best way is to aggressively simplify managed lifecycles and do allocation/cleanup in some kind of manager object.


> No you aren't. You only do that at the lowest abstraction level. For normal classes you can add a vector member and it will just work (see "rule of zero").

That’s what I mean when I write “a class that manages a resource with RAII”. Just on a personal note—it can be damn frustrating writing a comment on HN sometimes, because somebody will always find a way to misinterpret it completely.

> Yes, and then find and update every place in the code where that class is instantiated. If the class was ever instantiated as a temporary, you also need to pull it out into a named variable so you can use defer. If the class was ever used in a vector, you need to find all the places where an element of the vector is removed and make sure close is being called there, etc.

Once you step away from automated formal verification, which is very rare to begin with, the programmer is always responsible for maintaining some nonzero number of program invariants. In my years of experience with Go, the manual work to close file handles when you are done is not particularly burdensome or difficult, and I have seen very few errors slip into production related to resource management which would have been solved by RAII.

> This is really awful…

I would be interested to understand what design decisions led to the experiences you describe, because I have never seen a project suffer from those problems. Can you give more details?


> Once you step away from automated formal verification, which is very rare to begin with, the programmer is always responsible for maintaining some nonzero number of program invariants.

Of course! It's nice to have all the help with that you can get.

> In my years of experience with Go, the manual work to close file handles when you are done is not particularly burdensome or difficult, and I have seen very few errors slip into production related to resource management which would have been solved by RAII.

Go has a GC which greatly reduces the number of things to manage, but the discussion is about Zig. In Go, if you add, let's say, a string to a class, you don't have to do anything; it will just work because of the GC. In Zig, if you add a string to a class that didn't already have a close method for some other reason, you now have to update all the users. Essentially, it's unreasonably burdensome to switch from not managing a resource to managing one unless there are few users.

> That’s what I mean when I write “a class that manages a resource with RAII”. Just on a personal note—it can be damn frustrating writing a comment on HN sometimes, because somebody will always find a way to misinterpret it completely.

This wasn't some kind of pedantic point-scoring. In something like C++/Rust every single class that so much as contains a string manages a resource with RAII, but only a tiny number of them actually require you to write any cleanup code. In a language without RAII, if you used the same design, a huge number of them would require writing cleanup code.

> Can you give more details?

The sort of design I prefer in a language without "smart-objects" is along these lines: https://floooh.github.io/2018/06/17/handles-vs-pointers.html


This is the introduction to Zig from the language author that I watched when it was first posted here a while ago:

https://www.youtube.com/watch?v=Z4oYSByyRak

(good watch)


Here is a newer talk (April 2019) which is also by me and also an introduction to Zig:

https://www.youtube.com/watch?v=Gv2I7qTux7g

A lot has changed since then, and a lot has changed since that Localhost talk. To catch up:

* (the older talk happened here)

* 0.3.0 release notes - https://ziglang.org/download/0.3.0/release-notes.html

* 0.4.0 release notes - https://ziglang.org/download/0.4.0/release-notes.html

* (the newer talk happened here)

* 0.5.0 release notes - https://ziglang.org/download/0.5.0/release-notes.html



This is so cool! I'm really impressed with what I've seen out of Zig so far. I need to come up with a project to try it out with. It seems like a great successor to C.


Yep. Zig is shaping up to be the low-level programming language I've been waiting for to replace C++ for me. It just needs to mature and stabilize a bit before it's production-quality.


Why not rust?


It's all a matter of personal preference, and Rust is just not what I'm looking for in a C++ replacement, I guess. My biggest issue with C++ is the complexity of the language. C++ is a language that fetishizes accidental complexity, making it the center of things, and now it seems that Rust is saying, "hold my beer." Zig, on the other hand, fits my aesthetics pretty much perfectly, although I'm sure that, as with all programming languages, some people would have the opposite preference. As for correctness, one thing I've learned in the years I've been using formal methods is that there are many paths to correctness, and I'm starting to think that even on that front Zig's approach (of dynamic verification) might ultimately prove a better one.


Rust is great for when you're writing a serious, security-critical program that can't have any memory-corruption bugs or data races. It makes writing programs a little more challenging, and sometimes you sacrifice a bit of runtime performance compared to C, but it's often worth it.

But Game Boy Advance games don't really fit that description. GBA games don't accept untrusted input, and nothing bad happens if they're "compromised". (Like, when people discovered arbitrary code execution in Super Mario World, no one was worried about the security implications.) So languages like C or Zig that let you cowboy values directly into specific memory locations can be a better choice.

I'm excited about Zig in particular because the mission statement seems to be "C but nicer" -- you get the same basic programming model, but with things like instance methods, generic types, better macros, arbitrary-bit-integer types and a "crash when hitting undefined behavior" compile mode.
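
For example, plotting a pixel in mode 3 is just a volatile store to a known address (a sketch; the framebuffer address and dimensions are from GBATEK):

    fn plotPixel(x: u32, y: u32, color: u16) void {
        // Mode 3: 240x160, 16bpp framebuffer at 0x06000000.
        const vram = @intToPtr([*]volatile u16, 0x06000000);
        vram[y * 240 + x] = color;
    }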


Idk, I use Rust wherever I would use any other programming language, but I'm also a huge fan of the language in general. There's no need to limit it to "serious" applications


> nothing bad happens if they're "compromised"

It can make debugging your program a lot harder :)


On the contrary - when people discovered an ACE bug in Super Mario World, they PogChamp'd.


> GBA games don't accept untrusted input

You have a very narrow definition of untrusted input.


Arbitrary code execution isn't that big a deal when one of the features of the device is that it accepts a RAM image over its synchronous serial port, which is jumped into after being received.


Not the OP, but what I am looking forward to is a systems language with automatic memory management, in the same vein as Modula-3, Active Oberon, D or System C#.

Swift fills that slot, but only on Apple platforms. On Linux it hardly has portable support even for file access, Windows will get supported some day, and most 3rd-party libraries assume Apple anyway.

D has a great community, but is still found lacking in some areas.

.NET AOT compiled with the learnings taken from System C# (7.x - 9) is the closest for my daily coding activities.

Ideally, something Delphi-like with automatic memory management.

Or maybe having a Rust IDE with visual representation of lifetimes would already be productive enough.


Maybe a stupid question: if I want to write cross-platform desktop GUI apps and be able to take full advantage of the desktop GUI (menus, status icons in the system tray, etc.), and I'm mostly developing on Windows, what should I learn? That's what I'm looking for, and I'm sure it already exists; I just don't know how to find it other than asking around.

(I tried to install some Haskell GUI libraries, but that's very difficult on Windows. I eventually got haskell-gi to install, but as far as I could tell, it had no support for status icons.)


If you are doing Windows stuff and want native UI, then there is no way around Delphi, .NET or C++ (or C if you are masochistic enough to use bare-bones Win32).

Qt does a pretty good job for C++ and Python, but it is not native if that is what you're looking for.

JavaFX would be a good approach if you are into JVM languages, but you might need to do some JNI wrappers to call stuff like system tray, as, just like Qt, it does its own rendering.

Some corporations go the route of having common code for the business logic, and then create an abstraction layer for stuff like the system tray, UI widgets and so on, although it does require additional development cost.

So you call something like show_tray_message "New Email" and have an implementation for each OS that you care about.

If you are using Haskell, have you tried Haskell Platform?


Thanks for the information! Qt is fine. I might switch back to Linux in the future, and want to be able to carry over anything I write if I do. I have a working prototype in Python using PySide already (which handles the system tray well), but I figure I ought to know a compiled language with static typing. (I know interpreted vs. compiled is a questionable distinction to make these days; what I want is a language that comes with something to produce independent executable files as part of its basic tooling.)

I've tried Haskell Platform. I had a lot of path problems in trying to install gtk2hs - it wasn't always obvious where things were (nor that Stack came with its own mingw, so I got my own, and that caused more problems... eventually I wiped everything Haskell-related and started from scratch with Platform), and there's also a known problem with gtk2hs where you have to edit some cabal files to make installation work on Windows, so I would've had to do local installs. I think I got gtk2hs to install at one point, and even to compile their sample hello-world program, but trying to actually run it gave a litany of obscure errors. WxHaskell has an installer batch file, but it just spewed errors at me. I spent about a week trying to install various GUI libraries, and the only one I tried that I managed to get working was gi-gtk... which doesn't have system tray support.

I'm hoping to avoid C++ and Java, but if I have to bite that bullet, I will.


> what I want is a language that comes with something to produce independent executable files as part of its basic tooling

Worth noting that the Fman Build System (fbs)¹ gets you most of the way there with Python and Qt5, allowing you to "freeze" a PyQt5 application and automatically generate a Windows installer. I don't recall off the top of my head if there's a way to condense that into a single standalone executable, but it certainly produces a single folder by default (which you can stick anywhere, like any other "portable" Windows app).

I haven't used this in production yet, but testing it has been promising enough, and I'm using this for my next desktop development project.

¹: https://build-system.fman.io/


A single folder is good enough. This looks useful - thanks!


> but you might need to do some JNI wrappers to call stuff like system tray

I don't do much desktop programming, but I think that the system tray has had support in Java for many years now:

https://docs.oracle.com/javase/tutorial/uiswing/misc/systemt...


Not if you want the Windows 10 stuff.

And that whole JDesktop stuff from Sun has unfortunately been unmaintained for years (left to rot ever since Sun's stewardship).


Ah. Well, now that Microsoft have said they want to contribute to OpenJDK, maybe you could persuade them to do something about that.


It would be nice I guess.

I like both platforms anyway.


> what I am looking forward to is a systems language with automatic memory management

Doesn't Nim check those boxes? Didn't try it personally but I was instantly reminded of Nim when I read your comment.


Indeed, however it seems to have an even smaller community than D, and it still depends on C or C++ as compiler backends, just like C++ and Objective-C did in their early days.


Depending on C is actually a feature. Why do you consider it a negative?


You don't replace something that you depend on, for starters.

Then, C is not the portable Assembly that keeps being talked about, so unless you stick carefully to ISO C, there will be UB and compiler-specific behaviour creeping into the backend.

Even languages that use LLVM suffer from this, because its design is tainted by clang, and for certain bitcode sequences, it assumes C semantics.

This broke a couple of Rust optimisations. Not sure if they have already gotten around to fixing this.

Then, C does not really expose the CPU's features, so if you want to actually access everything, you either need to support inline Assembly as well or ship Assembly alongside the C anyway.

Having C as a backend is only useful as a means to bootstrap an ecosystem without spending too much time implementing a backend.

However when a language matures, it is time to move on, just like C++ and Objective-C did.


They are different languages and you can be excited about both and want to use both. I think it is unnecessary to have this kind of comment after every statement about other languages, as if one's excitement for one would completely take the other out of the picture.

I would say the reason I really like Zig is that the minimalism of C is still there in the syntax, and it makes a whole lot of very smart choices that keep this simplicity while being so much more expressive than C.

If you really need some Rust pros, I would say it is great as the language that finally brings more ML and functional ideas into high-performance programming. Most languages nowadays have FP features, but Rust has them very deep in its core. Other than that, it is great to see proof systems reaching the mainstream, and the security guarantees its ownership system gives are great.


Why Rust?


Articles about Rust don't have to include a link to "what is Rust" like this one does.


They did when Rust was as mature as Zig is now. Language popularity is a reasonable deciding factor when one is starting a project that has a hiring component; personal projects not so much.


For one thing, I do not think you can implement an allocator in Rust. This alone would disqualify it as a "low level"/"systems" language in my opinion.


You can implement an allocator in Rust; it's unclear why you think this is not possible.

One example:

- https://github.com/fitzgen/bumpalo


Worth noting that support for custom allocators in standard collections is not finalized yet: https://github.com/rust-lang/rust/issues/32838

The (awesome) bumpalo crate that you link to gets around this by providing ported implementations of Vec and String, but if you need e.g. a custom-allocated HashMap, you need to implement it yourself.


Thanks for the additional points! What you state is true, but somewhat orthogonal to the point that the grandparent post was making (and I sought to refute).

Creating an allocator is inherently possible. Using that allocator with the rest of the ecosystem is currently not stable, as you point out.

As a sometimes Rust embedded developer, I look forward to both parameterized allocators and fallible allocation.


Also worth noting that this is one thing Zig seems to have gotten quite right already: every standard library function which might need to allocate memory requires the user of that function to pass in an allocator of their choosing, whether provided by the standard library or written from scratch.

For example, I've been (slowly, given limited free time) working on a SQLite binding for Zig. SQLite has its own allocator, which I've wrapped like so:

    const std = @import("std");
    const Allocator = std.mem.Allocator;
    const ArrayList = std.ArrayList;
    
    // This is all ripped from Zig's C allocator, with adaptations to point to
    // SQLite's allocator instead.
    pub const allocator = &allocator_state;
    var allocator_state = Allocator{
        .reallocFn = realloc,
        .shrinkFn = shrink,
    };
    
    fn realloc(self: *Allocator,
               old_mem: []u8,
               old_align: u29,
               new_size: usize,
               new_align: u29) ![]u8 {
        std.debug.assert(new_align <= @alignOf(c_longdouble));
        const old_ptr =
            if (old_mem.len == 0) null else @ptrCast(*c_void, old_mem.ptr);
        const buf = sqlite3_realloc64(old_ptr, @intCast(u64, new_size))
            orelse return error.OutOfMemory;
        return @ptrCast([*]u8, buf)[0..new_size];
    }
    
    fn shrink(self: *Allocator,
              old_mem: []u8,
              old_align: u29,
              new_size: usize,
              new_align: u29) []u8 {
        const old_ptr = @ptrCast(*c_void, old_mem.ptr);
        const buf = sqlite3_realloc64(old_ptr, @intCast(u64, new_size))
            orelse return old_mem[0..new_size];
        return @ptrCast([*]u8, buf)[0..new_size];
    }
And later, to actually open a database (where we need an allocator to add a null byte to a Zig string to satisfy SQLite's expectations for a database name, so we might as well use SQLite's)¹:

    pub fn open(name: []const u8) !Database {
        // "allocator" here being the const-defined one above
        var cstrName = try std.cstr.addNullByte(allocator, name);
        defer allocator.free(cstrName);
        var db: Database = undefined;
        var rc = sqlite3_open(cstrName.ptr, &db);
    
        if (rc == SQLITE_OK) {
            return db;
        } else {
            _ = sqlite3_close(db);
            return errorCode(rc);
        }
    }
Of course, it'd be even better if such a wrapper actually used SQLite's own support for custom allocators to follow the Zig stdlib convention of allowing the user to supply one's own allocator. Going the other way seemed like a reasonable short-term choice, though, and it was a good enough excuse for me to learn the ropes on custom allocators (even if most of the code ultimately came from Zig's own C allocator).

----

¹: It turns out that this specific example will apparently be entirely unnecessary in future versions of Zig; the master branch documentation seems to do away with "Zig strings" v. "C strings", instead just making strings null-terminated by default (using what looks to be new support for "sentinel-terminated" pointers/arrays/slices). Looks like I've got some work to do :)


I said "I do not think". I do not really know Rust. Obviously I was wrong.


You also cannot fully implement libc in ISO C, without using either language extensions or an external Assembler.

So I guess it is not a "low level"/"systems" language.


True. You beat me. But C with extensions does qualify ;)


So does any other language with extensions, even stuff like CircuitPython.


Because it's too heavy on the "fight the compiler" security stuff. Rust is well-designed all around, but no fun to program in.


learning curve, mostly


I've been doing Advent of Code in Zig. It's a far cry from "real world" programming, but still a decent way to get a feel for the language.


Do you have a public repo of your Advent code? I started out trying to use Zig but couldn’t find documentation on how to complete even basic tasks (like passing a filename as a command line arg, then opening and reading a file line by line), so I gave up. Maybe seeing a repo of good Zig code accomplishing these tasks would get me going.


Check out the Zig standard library and the tests for examples on doing some basic stuff with the language. Tests are generally at the end of files, you can search for "test".

0.5.0 Documentation https://ziglang.org/documentation/0.5.0/#Introduction

root std library https://github.com/ziglang/zig/tree/master/lib/std

examples of writing and reading to a file https://github.com/ziglang/zig/blob/master/lib/std/event/fs....

There are also a ton of projects that the creator has done with Zig, you can check those out for more examples as well.

https://github.com/andrewrk?utf8=%E2%9C%93&tab=repositories&...

Looks like he has an advent of code repo too.


My solutions are here: https://github.com/lukechampine/advent

Documentation for Zig isn't great, but it's become much easier now that the standard library is searchable: https://ziglang.org/documentation/master/std


Here[1] is a playlist of the creator of Zig coding in Zig. He also has some solutions for Advent of Code.

1. https://www.youtube.com/watch?v=hBCsWEQ_asM&list=PLviMr_WImM...


> He also has some solutions for Advent of Code.

I had looked at his 2018 Advent repo and was disappointed to find that he simply pasted his input directly into the code, whereas I like to compile generic binaries that can read inputs from the command line. However, it looks like his 2019 solutions for days 1 and 2 at least do read input from a file, so this year looks like it'll be a better starting point.


Zig looks neat; a nice middle ground, not as preachy as Rust -- I took a stroll through the documentation but didn't get a sense from it how suitable it would be for bare-metal or OS programming. Specifically, about its standard library:

  * How dependent on libc is Zig, if at all?
  * What kind of runtime expectations does Zig have? Does it need a malloc, for example? 
  * How big is its standard library?
  * Is it possible to write code without its standard library?


Zig was started with that exact use case at the forefront of its design.

- Zig has no hard dependency on libc; it's the user's choice whether to link it or not. It ships musl libc and can build/link it for most (all?) supported targets.

- Most of Zig's standard library works on freestanding targets. No runtime needed. There are no implicit allocators, so the user must use an allocator shipped with the standard library or implement their own (see the sketch below).

- Standard library is still in its early days but already provides a lot of functionality out of the box, from containers to io.

- Using the standard library is optional. When it is used, only what is used gets included. The standard library is built from source every time (though there is caching) and its semantic analysis basically does LTO in the codegen stage so you end up with pretty slim binaries.
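
For instance, a fixed buffer allocator needs nothing from the OS, so it works on freestanding targets (a sketch against the 0.5.0-era std API):

    const std = @import("std");

    var buffer: [4096]u8 = undefined;
    var fba = std.heap.FixedBufferAllocator.init(buffer[0..]);
    // Pass &fba.allocator to any std function that allocates.
    pub const allocator = &fba.allocator;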

Here's a couple of examples:

- https://github.com/andrewrk/clashos

- https://github.com/AndreaOrru/zen


I don't get the first statement: no hard dependency on libc, but it ships with musl libc. Does Zig need a libc, so they ship musl libc? Or is it fully self-hosting?


The Zig language doesn't need libc, but because it also includes a fully functional C compiler, it ships with musl source and glibc headers for better cross-platform support.


Please don’t use blockquotes (4 spaces) for anything that isn’t code. Your comment is very hard to read on mobile.


Reformatted for mobile users:

* How dependent on libc is Zig, if at all?

* What kind of runtime expectations does Zig have? Does it need a malloc, for example?

* How big is its standard library?

* Is it possible to write code without its standard library?


What is the state of GBA homebrew? I've been curious about learning how to make games with it, but I haven't had much luck finding information about the toolchains. I know for original Game Boy/Color, there is a C compiler, but it's not recommended over assembly. Is the GBA the same way?


The most popular toolchain is devkitPro, which is based on GCC: https://devkitpro.org/wiki/Getting_Started

The impression I've gotten is that the GBA is powerful enough that you can write most of your programs in C, but some performance-critical stuff (like if you're trying to draw pixels directly instead of using the sprite/tile hardware) is best written in assembly. If you're interested in learning more, I would highly recommend this tutorial: https://www.coranac.com/tonc/text/toc.htm


GBA is very well documented (GBATEK). The DS seems to be as well, but it's much harder to work with. GBA is much easier, but still powerful enough to do non-trivial things without having to do them all in assembly. Emulator support is superb. Hardware adapters that allow you to load ROMs have been around forever.

I'd say it's one of the best systems to get into on-the-metal Homebrew.


Much harder is a bit of a stretch IMO.

There’s more potential complexity if you want to do complex things and maybe 3D is hard (I’ve never done it) but the basics are about as straightforward.


Fair enough. I'd say 3D and the two screens make it harder. Also, memory latency and clock speed are somewhat more complex, what with the tightly coupled memory.


I was intrigued about the 3D thing— a cool YouTube top 5 with some light commentary: https://www.youtube.com/watch?v=Y6QtoZcYhi4


Cool video—how was such fluid 3D achieved on the GBA hardware? Anyone know of any war story articles about developing 3D games on the GBA?


Low resolution combined with a decently fast processor means software rendering is entirely doable. It's a comparable situation to very early PC 3D gaming.


Just for the record, "decently fast" in this case is a 16 MHz ARM7.


It's especially crazy that a bunch of them look to be rendering legit polygon scenes; these aren't early-nineties 2.5D titles like Doom and Wolf3D running on 486 PCs.


The ARM7 is a pretty good processor, so it can get a lot done at 16 MHz. Quite a bit more than an x86 at a similar clock speed ever could.


Ok you could argue that’s more complex on the GBA.

I would argue otherwise; interfaces to other people’s stuff always are the most difficult parts of programming. When you do it yourself (3D projection etc.) it’s just you and reason and that’s not so bad.


The limited amount of VRAM in the DS is the real issue. Even the lack of RAM is an issue.


4 MB RAM + 656 kB VRAM doesn't strike me as particularly memory constrained as someone who grew up with 4-64 kB systems. I guess it matters where you're coming from.

On top of that, DS cartridges are 8 - 512 MB.


I prefer the PSP for this purpose myself.

https://github.com/pspdev/psptoolchain


I am waiting for the milestone[1] for Zig to be able to bootstrap itself. Usually, this means the language is mature enough.

[1] https://github.com/ziglang/zig/projects/2


What are the advantages of this other than showing the language is competent enough that you can write a compiler for it in itself? Would it be better to keep the compiler written in a highly portable language such as C? Genuinely curious.


I think it is usually considered a rite of passage for a new programming language to prove that it is mature enough to self-host.

Personally, I value that aspect highly, because it means that if I choose to work with the language, I can understand, read, and possibly contribute to the compiler if needed.


> Personally, I value that aspect highly, because it means that if I choose to work with the language, I can understand, read, and possibly contribute to the compiler if needed.

You would have about the same if the compiler were written in a lingua franca. For whatever it's worth, I have in the past had to work inside the gfortran compiler, and I wouldn't have been able to do that if it were written in Fortran.

Self-hosting (which nowadays often isn't "full" self-hosting, it's implementing a frontend and possibly parts of a mid-end only and letting a framework like GCC or LLVM do the rest) is useful because it proves that it has reached a certain level of maturity, as you say. Another advantage is that this gives you a non-trivial piece of software written in the language which can be used for testing and benchmarking. If you introduce a compiler bug, there is a certain likelihood that you will notice it immediately. This might not be the case if you only had small toy programs to test with.


I kept thinking about Lisp's mix of compile-time and run-time constructs while watching the video.


Question: would I be able to do this on say a Kindle?

Really pining for a non-backlit display I control.

Never thought this would be possible but given the readable code...maybe I should try?! Anybody know where I could start?


The classic Kindle uses some variation of Java, so probably only if it's rooted somehow.

The other ones are Android-based, so there you can use it via the NDK.


Interesting. First I've heard of Zig and it sounds like a cool language.

I suggest putting the README images in a doc directory so visitors know at first glance that they aren't part of the app.


I love stuff like this. It's fun and educational. The code and linker script look very clean. Nice project!


How does it compare to JAI?


Zig is a lot more focused on being a better C aimed at systems stuff, which winds up being good for games. Jai is focused on being a good language for games specifically, and so it has certain highly desirable features for games (reflection, SoA layout, a different sort of standard library).

But all in all, from my time following both for several months, they have a lot of similarities. They're only slightly more different from one another than C# and Java are.


JAI is still unreleased.



