SFU replaced the POSIX subsystem with an OpenBSD port, a far more capable offering for NT/2000 etc.
SFU was much later to the game, while BOW ran on Windows 3.1, which could have given it far bigger appeal, since you could run it on much lesser machines. The best part: no formatting, no device drivers, no changes to the OS at all. I just copied it over to my PS/2 and was playing Hack 1.03 in no time.
Hey, Microsoft's SFU[1] was alive and well back when I was running servers getting DDoSed by chargen reflection; as far as I could tell, most of the reflectors were running chargen from SFU.
[1] Well, Microsoft published it, but IIRC, another company developed it.
Well, I don't know how it got installed, just that it was installed, so when the DDoS origin spoofed requests to the SFU hosts with my server's IP as the source, the SFU hosts were happy to send me garbage.
I had good luck compiling and running command-line apps on SFU (née Interix) back in the early 2000s. GNU autoconf had (has?) support for Interix as a target.
I always wanted a distribution of Windows that booted character mode and ran the Interix userland.
The author reached out to Hiroshi Oota on GitHub, who said they created it to use Emacs on Windows. Not sure why it's not in the article, but I figure there might be a reason, so I won't link it.
Yes, I was hoping to do an update after I had help finding the author, and to see if there was any potential for an update. It's late-breaking and live. And it seems all the engagement is here, not on the blog, so I didn't see this while I was sleeping.
Just playing devil's advocate here, but Interix, according to Wikipedia, was supported by Microsoft until Windows 7 (1999 to 2010). I wouldn't call that "bought and extinguished".
True, I was being glib, but the devil's in the details here - unfortunately, it was very hard to use after Microsoft bought it. It became unreliable, and the licensing was hard to understand for customers who wanted to use it.
During my university days, had NT really properly supported POSIX, I probably would never have bothered with that "Linux Unleashed with Slackware 2.0 CD-ROM" book.
If Microsoft could have told the future, they would have improved POSIX instead of killing it and then bringing a Linux VM into the OS.
Microsoft did everything, like everything and more, to kill every alternative platform on the planet. Hedging their bets with half-hearted POSIX support was part of that effort.
If not for that MSN vs. Internet fiasco, they would have succeeded.
I never liked that point of view, because while it is true they did their part, the OEMs and consumers also did theirs.
Also, as seen nowadays in many FAANG efforts, "don't be evil" marketing stunts, and PlayStation and Nintendo with their exclusive deals, they are not alone in the desire to achieve that goal.
Ironically, while UNIX won the server room, the Linux wars replacing the UNIX wars is kind of what killed the desktop for most consumers, who now enjoy the Linux kernel via Android and ChromeOS, even though the free-beer versions of them aren't the same as what OEMs make available anyway.
Or one pays what is seen as the Apple tax in countries where wages aren't at the same level as North America.
Well, the problem with world-domination plans (Microsoft's included) is that every actor has one, and they don't necessarily, ehm, align.
So the non-desktop part was lost by Microsoft, and this resulted in the world we have now, Linux derivatives and all. But for a while, sometime around 1996-2006, it definitely felt like they almost had it. That's when I began my official dev career, and that summer I could barely find a non-MS-stack shop.
And yes, the OSS movement didn't manage to unify the desktop. And Microsoft has nothing to do with it.
Nah, in those days everyone was either offering POSIX.1 support or even providing their own custom Unix distribution for their machines, because it was the only way to get US government contracts. This is why things like Amix, Atari UNIX, and A/UX existed at all, and why NT had to have an out-of-box POSIX subsystem.
Notice how Windows, FreeBSD, NetBSD, and (removed in 6.0) OpenBSD all have Linux emulation/compat layers. This is in fact incredibly sad, for multiple reasons:
- This is what POSIX was meant to address; there aren't all that many Linux-specific APIs that ordinary applications actually need. Proof: browse the ports/packages on any BSD; all the apps (with extremely few notable omissions) are there and working fine.
- Vocal FOSS enthusiasts are ready, every day of the week, to preach to you about open standards; you must use Jitsi and not Zoom, Matrix and not WhatsApp, heck I've been told to play Mindustry instead of Factorio. I don't know if these are the same people who write "#ifdef linux" in their code, but I've sent out enough one-line patches to fix the build on OpenBSD to wonder.
- The Linux userland is one hell of a horror in terms of being a viable target for others to aim at; try running literally any precompiled binary on Alpine or Void (musl) to get a feel. glibc seems to introduce new versioned symbols and other backward-incompatible changes every release, just for the sake of it; and many open-source libraries resist static linking with every inch of their autohell existence. (I've tried making a static build of Love2d for Linux, and it was an exercise in futility.)
Yeah guys, unless you're writing a container runtime, please stop targeting Linux. You don't need to strictly adhere to POSIX, just please at least try to compile your program in a VM with any BSD; it will flush out 99% of the non-portable stuff.
What's funny is we've tried so many different, "better" solutions to build once, run anywhere. I truly love Inferno - learned SO much reading the dis VM bytecode spec; JVM was and still is a thing; there's even .NET IR which was built with the specific, explicit goal of getting JIT'ted for the target CPU. The two de-facto solutions ended up being:
- win32/wine;
- Linux emulation - the kernel itself has an incredibly stable interface, but you basically need to ship an entire RHEL/Ubuntu installation on top.
My sincere hope is that APE/Cosmopolitan takes off and eats everyone's lunch AND the table. It even recently got some funding - turns out it's the easiest way to ship LLM models (finally something good may come out of this hype cycle).
The kernel does _not_ have an incredibly stable interface. It is in fact part of the reason older statically linked versions of glibc no longer work. I personally count at least two breaking changes, and only one of them got an option in Kconfig (which was related to ASLR, IIRC, so distros rushed to enable the "break compatibility" option). Yet another example of how dynamic linking almost always makes things easier for your future users, despite the preconceptions.
Just grep your kernel's Kconfig for "ancient" -- the euphemism every developer uses to refer to stuff they don't care about and want to break.
Also, stuff in /proc, /sys, or the like is moving every other day, and some programs depend on it (sigh).
I don't think the point was about ABI compatibility across operating systems; that'd be unreasonable. It's more about using POSIX APIs as much as possible to enable portability, rather than relying on Linux-specific APIs that make compiling the code an unnecessarily frustrating experience. Sometimes using Linux-specific APIs is inevitable, but in the overwhelming majority of cases, POSIX APIs work just fine and would make the code instantly more portable.
Even at the source level, there are certain limitations to POSIX; e.g. Linux uses epoll(), which is more performant than poll() but isn't defined by POSIX. So application writers sometimes have to choose between making code more portable and making it more performant.
I agree with the rant and think that Linux today is having an overall net-negative effect on computing. The absolute lack of stable API/ABI efforts and the "cavalier" attitude regarding free software are setting terrible precedents across the industry.
Linux is today the kernel/operating system with the largest hardware support, but my ability to actually use the operating system I want with the hardware I want hasn't moved an inch. The next free operating system that even supports my AMD GPU? It's FreeBSD, and they do that by linking in the code from Linux (not even forking; forking would be way too much effort). There is _no_ other OS that supports it.
Also, you can forget about DRM support anywhere other than Linux. Netflix? Linux-only.
It's a very sad picture, and definitely much worse than 10-20 years ago.
This is relevant to my interests. What roadblocks did you run into when you attempted to build statically? I assume your goal was to have a single binary of Love2D that you can move around without worrying about the glibc version? (As God intended; I'm not religious, but games should be one contained thing imo.)
I'm also curious to see what it does that prevents a static build.
The love2d static build story was a tangent to my 2016 challenge to ship one game every month of the year. The tangent ended up delaying the March game indefinitely, and thus ending the challenge... My memory is a bit hazy, so apologies for the scarce/imprecise details.
I really, really wanted to ship binary builds for the three major platforms: Windows, Mac, and Linux, so that my friends and strangers could actually play the games without dicking around downloading some framework. Love2d uses the relatively well-known hack of opening "argv[0]" as a ZIP file (the ZIP central directory sits at the end of the file, so this usually[0] just works).
Creating and testing the builds for Windows was trivial, even though I didn't have Windows on any of my computers; I borrowed a friend's laptop to verify that "cat love.exe game.love > game.exe" just works, and indeed it did. They got some scary warnings about binaries downloaded from the Internet, but the game ran well.
The Mac needed a bit more fiddling, and I didn't have a Mac back then to sign or test the build. But I followed the instructions and someone reported success (modulo scary warnings). Woohoo.
On to Linux... I already knew it was going to be the most "fun", despite being the only platform of the three that didn't do code signing or scary warnings. I didn't even realize at the time how much of a clusterfuck glibc actually is; my primary motivation was that Love2d kept breaking their Lua APIs, and I'd already found that lots of older (and even recent) Love2d games simply couldn't cope. My game had to be bound to a specific version of Love2d, and many distributions shipped different versions, so the only reasonable path forward was to bundle, just like on Windows/Mac.
I started by downloading the Love2d sources, verifying that I could make a standard/dynamic build, and then "cat love game.love > game && chmod +x game && ./game". Indeed that was easy, but "ldd game" revealed several dozen shared libraries for things like PNG, Vorbis, etc. I looked at the .so version numbers and realized Love2d breaking was going to be the lesser of my worries; judging by how high those numbers were, I was signing up for DLL hell. I didn't even want PNG or Vorbis: all of my graphics were 100% procedural, and I had yet to try adding sound to any game. So I disabled most options, and this is where the easy part ended.
I don't recall where exactly I gave up... I managed to find and download the sources for a whole bunch of these libraries, make the ".a" archives for static builds, and so on... I think at some point I ran into Mesa (Love2d actually requires GPU acceleration) and decided that this was enough insanity.
I still firmly believe in static linking on Linux! I only changed my approach: just use Go (with CGO_ENABLED=0). Unfortunately, Go is not without its own share of problems[0]; while XGB allows cgo-less X11, Mesa remains elusive.
Thanks for sharing this journey! I've been considering Love2D for a while, so it's important to read about the challenges others run into. You should definitely not have to package Mesa to ship a game! I wonder why it hooks in so deeply...
Coming from PICO-8 myself, I'm wondering if I should skip the baby-step frameworks and just jump into C+SDL, or pygame, or something. I like the different ways that games can be made -- like your procedural graphics not needing a graphics library.
The "can I link Mesa statically" journey was futile; I had much less understanding of how things worked back then (not that I'm any kind of expert now, but you shouldn't have to be one to ship a f.in game). See, a pretty big chunk of the graphics driver stack on Linux sits in the .so's provided by Mesa: i915_dri.so, r600_dri.so, nouveau_dri.so, libvulkan_intel.so, libvulkan_radeon.so, etc. are ALL exactly what it says on the tin. It's not just 50+MB of x86-64 code, it's also your peace of mind: whenever AMD/Intel/NVidia/Apple/RPi/etc. drop a new GPU model, or whenever the kernel changes things on their end, you don't need to relink and re-release.
I'm torn between "all of this stuff belongs in the kernel goddammit" and "these guys probably know better". OpenBSD and macOS actually force all syscalls to go through a dynamically linked libc, so perhaps it's the latter.
Libraries/frameworks/engines such as PICO-8, Love2d, SDL, Allegro, Pygame, Godot, etc exist precisely to abstract away these details; you're not meant to care for libGL.so.1, you're meant to care for fixing your physics engine's timestep and using cubic splines to interpolate animation. Love2d was not a mature choice in 2016, so don't take my horror story or my hubris as any indication of what it looks like today; do your own research and pick the tool for the job ;)
TBH I would love to go back to making games, but recently got too absorbed by StarCraft 2 and shitposting.
Part of my interest is also getting better at Lua, and it seems to excel in embedded contexts like making small 2D games. So maybe Love2D will still be good for that, and maybe the statically linking story is better now. My eventual goal is similar to yours -- I want to ship a game with a single binary per platform that players don't have to get confused about. Completing a game at all would be good enough, but I want to do it right!
I was recently surprised that CGO only works on Windows with an msys2 compiler, while everything else works just fine with Visual C++ or VS-provided clang.
I'm yet to tackle actually testing any of my software on a real Windows machine; I usually just smoke-test things through Wine. (That's already more effort than what Linux devs do towards BSD so I award myself a point for that.)
I have my eyes on AppImage - it looks the most sane, vs Flatpak or Snap. For CLI apps, Go is great at making static binaries that Just Work.
Most of my past interactions with LD_PRELOAD caused some trouble down the line. It likes to leak into unexpected places, sowing discord and heisenbugs. I'd rather never touch it again unless out of other options.
Well said. I hate all that autoconf hell, really. Most of my programs can be compiled pretty much anywhere (or easily ported). Not that I write big complicated stuff, but it's doable. One thing is unavoidable, though, but it's OK, because it's a special platform: #ifdef __CYGWIN__
Maybe after decades we have to accept that batched, concurrent I/O with asynchronous notifications is the least bad interface that can exploit the potential of modern hardware, and let the simpler-to-use default Unix I/O model (blocking, synchronous) go. If this happens, I hope we'll get a better-designed API than some hacks around eBPF and io_uring.
The kernel, I've been told by people more knowledgeable about these things than myself, is a work of beauty. But the layers and layers of Win32 that sit on top of it do a great job of covering that up.
I fail to see what is particularly brilliant about this kernel. Drivers are convoluted. IRQL is convoluted. The supposed distinction between the "kernel" and the executive has little definition and is convoluted (especially given that it duplicates some primitives for no real reason). The HAL was a false good idea, and Cutler himself agrees with that. The network stack split between user and kernel space makes no sense (and the story behind this, including afd.sys, is ridiculous).
It's a not great / not terrible kernel, like most are (Linux included)
I don't suppose you have Coherent for the 8086/80286 at all? I should have ordered it as a kid, but I'd been told so many times never to trust mail order. It still bugs me that I didn't send in the $99.99.
We had it on 286, but most of where I lived was on the 68k. I never bought a personal license. We had a license w/ source. I had an Amiga at home back in those days.
I'd go for a full-fledged BSD subsystem, myself. From what he wrote on the blog it sounds like this would be a pain in the butt to use for anything substantial.
This is where we should be: native support of Linux on Windows and native support of Windows on Linux. Executables should be cross-compatible; we somehow ended up in a reality of monopolistic 1v1 instead. I would enjoy breaking free at some point, but I doubt it will happen.
Just about everything now is its own garden with a tiny API gate to allow you access to the minor features of the whole collection.
I am aware of WSL and the like, but it's taken many years to get that far.
But they should all break on MacOS and refuse to be cross-compiled for Windows/Linux?
I agree with your idealism, but I think the best we're going to get is more or less what we have now, otherwise we end up in the space where program x works with o/s y version z subsystem q, but not the other combinations you'd hope for.
I'm still annoyed that we don't design enough for the future: when a new O/S version/architecture gets released, there isn't more effort to save the past. We have a perfectly serviceable, relatively recent printer (4-5 years old?) that my wife now can't use, because her new Mac, which replaced her 12-year-old Mac, doesn't have a driver for it; there are no Sonoma versions of the driver.
I think we err on the wrong side of creating waste at the expense of not doing (admittedly annoying) work.
To get back on topic, this was the point of subsystems: you managed to hold onto, and get access to, the past you liked from within your walled garden.
Extra annoying, as Apple's ability to share a printer was so cool. I'm holding our TV Mac Mini at Mojave to keep compatibility with an older scanner and an even older one. Oh, and TurboTax went backward: after supporting only the newest version of macOS, they went back to Mojave and the two previous versions that they will run under.
Sure, but from a user perspective? It says you could run vi/gcc/etc. in BoW, which you can easily do. I don't know why GGP would want to spend millions (surely it would cost way more than millions), or billions, recreating something that already exists.
Really, to do something different, something that might move the needle toward new possibilities. The *BSDs and Linux seem to be permanently missing just a few core elements needed to be mass-adoptable. Mass adoption would benefit everyone, through competition but also just different features. Like, imagine being able to zfs send incremental backups of your daily drivers and gaming boxes. Or if plan9 had really taken off and had real market share. Idk, it just feels like we've been stuck in the same place for the last few decades.
You're looking too closely at the trees (gcc/vi) instead of the BSD forest.
BOW's only weakness is its single-user mode, more a limitation of it being Win16. If it'd been pushed harder into an NT service/client/server DLL, it'd have been a much bigger player.
It's all moot: Pink/Taligent died, virtual machines are where everyone runs their stuff these days, and OS/2 had the driver/disk/filesystem stuff done right.
I don't want to derail the conversation too much but when I look at what Notch has become since he became a billionaire it's frankly pathetic.
Princes & Kings
Isn't it strange how princes and kings,
and clowns that caper in sawdust rings,
and common people, like you and me,
are builders for eternity?
Each is given a list of rules;
a shapeless mass; a bag of tools.
And each must fashion, ere life is flown,
A stumbling block, or a Stepping-Stone.
― R. Lee Sharpe
Ah OK. I guess I interpreted it differently because I don't follow him as a person. So my exposure to him is usually people complaining about something he said, or someone saying what he said was based. I know very little of his projects after Minecraft, so I just used the info I have.
I dunno why you'd be sore about it; the guy already paid his dues? Again though, I'm not familiar with what he goes around promising.
Bought a big house in the Hills, throws big parties with superficial people, tweets shit as if the political opinions of an instant billionaire suddenly mattered.
I had to use it in the past and it’s genuinely awful. MobaXterm was the best before WSL (imo)
https://en.m.wikipedia.org/wiki/Windows_Services_for_UNIX