The toolchains dropped support for the i80386, not the OSes directly.
Intel, as they're known for doing, half-assed many things. The i80386's 32 bit protected mode wasn't going to be compatible with i8086 real mode anyway, so they could have made vast improvements; instead they settled for a mediocre 32 bit CPU, figuring most people would just run it as a super fast i8086.
The m68020 in 1984, by comparison, was much more forward-thinking. It had atomic operations, could live in a multi-processor environment much more easily than an i80386, and it can still run a modern OS (NetBSD) in 2022.
> None of this actually relevant to openbsd dropping 386 support 15 years ago.
Well, I think the argument being made (implicitly) was:
If Intel had made a decent, capable implementation of the 386 with forward-looking CPU instructions and so on, the 386 could still be alive and kicking today on many more operating systems and platforms.
It's more that the PC platform has continued to evolve in many ways, and 30-year-old museum pieces aren't particularly interesting. Contrast that with the mac68k platform, which is frozen in time. You don't see anybody trying to build one kernel that boots on every Mac ever made, because the CPU changed several times; even with continuity, it wouldn't be pleasant.
IIRC, the argument from the developers on the OpenBSD misc mailing list is that developing on 30-year-old museum pieces gives them perspective on the difference between right and wrong. It exposes them to more "stuff" than they'd see otherwise.
But that's not how it works. Nothing is frozen in time. NetBSD 10 isn't functionally frozen compared with NetBSD 1.4 just because mac68k hardware hasn't changed since back then.
"PC" means personal computer, and the term existed long before the IBM PC, but the term is being used here to refer to the i8086, x86 and amd64 generic platforms. The fact that there's tons of overlap from the original IBM PC with an i8088 to a Compaq 386, then between the i80386 and a PCI Pentium, then more overlap from the Pentium to the Core2Duo, then to a modern UEFI multi-core CPU, means that if people just say, "ISA is dead, so let's rip it out" would lead to a world of pain, unless you want to drop support for every x86 in the world that doesn't use UEFI.
There are lots of people who want to do exactly that, but why? They very untechnically claim there's "technical debt" and say stupid things like, "people have to spend time supporting that," when open source development is volunteer work and most well-written code will just continue to compile and run.
Most people forget that the parts of the world with less money get our leftovers, so a gatekeeper wanting to desupport older hardware he doesn't own WILL affect others. If there's no good technical reason to take it out, don't.
Running on older hardware quickly points out performance regressions. Running on alternate architectures points out issues like bad stack alignment assumptions, endian assumptions or bad word size assumptions. Running on systems with less memory brings to light software that's using too much memory. All of these things benefit someone, and it's pretty shitty to say that people who're less well off should be out of luck because certain people don't want to be told to code with fewer assumptions, or to care when they're told there's something wrong with their code. It happens more than you realize.
But even if people want a "financial" justification for everything, all of those benefits of testing on older hardware help with making sure a system is robust and runs well even on resource constrained embedded platforms. So there's that.
It's funny how some people will scream about not wanting to keep support for hardware less fortunate people have, but will shut up when someone mentions that the same support is important for embedded use.
You are so right about regressions and slow software. I tried to install RHEL 8.6 on a T61p from 2009. GNOME alone used about half the VRAM, around 130 MB, and was slow as hell. Git is fast? Ever tried to use it with 3 GB of RAM and a laptop hard disk? Switching branches on a Linux repo took 4 minutes. I then converted the repo to a BitKeeper one (took about a week), switched to FreeBSD, and the laptop was usable again... with i3, the NVIDIA drivers and BitKeeper. I really think devs should regularly test reasonable software on a 2 GB Raspberry Pi or something like that.
I don't know specifically why OpenBSD dropped support, but NetBSD dropped i80386 support after NetBSD 4 because gcc dropped i80386 support. So it is, in fact, relevant: OpenBSD would've had to maintain an older gcc just for the i80386 if they wanted to keep supporting it.
IIRC, the 386 did not have atomic read-modify-write instructions (XADD and CMPXCHG arrived with the i486), so it's much harder to write reliable semaphores/mutexes on that platform. I'm surprised OpenBSD supported it for this long.
(Edit: it seems they didn't: OpenBSD/i386 hasn't actually supported running on [386sx/386dx] for some time.)
I'm trying to remember from the last em64t/ia32e & ia32 ISA toy OS I messed around with.
i386 ISA is the bare minimum for functioning protected mode to be better than real-mode DOS. i286 protected mode is absolute trash.
The i386 is so bad at dealing with SMP primitives that it was standard practice to have a non-SMP kernel. Then you get into the business of maintaining two kernel flavors, or a major kernel feature flag.
For those interested, there is a quirky processor mode trick called "unreal mode" that allows flat addressing without switching to protected mode (no protections, just like real-mode DOS).
The 386 still has hardware and software interrupts that might fire between a load and the following store, so it's not as simple as "we don't need atomics".
If you were writing code specifically for the 386, I suppose you could implement all semaphores as critical sections to guard against external interrupts; I see no reason why that wouldn't work. But realistically, that code would be more of a museum piece than something you'd want to maintain in a 2022 operating system.