I recall a project back in the day where the customer wanted to upgrade their workstations but also save money, so we designed a solution where they'd have a beefy NT4-based Citrix server and reuse their 486 desktop machines by running the RDP client on Windows 3.11.
To make deployment easy and smooth, it was decided to use network booting and running Windows from a RAM disk.
The machines had 8MB of memory and it was found we needed 4MB for Windows to be happy, so we had a 4MB RAM disk to squeeze everything into. A colleague spent a lot of time slimming Windows down, but got stuck on the printer drivers. They had 4-5 different HP printers which required different drivers, and including all of them took way too much space.
He came to me and asked if I had a solution, and after some back and forth we found that we could reliably detect which printer was connected by scanning for various strings in the BIOS memory area. While not hired as a programmer, I had several years' experience by then, so I whipped up a tiny executable which scanned the BIOS for a given string. He then used that in the autoexec.bat file to selectively copy the correct printer driver.
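The scan itself is trivial; here's a sketch in modern Python for illustration (the original was a tiny DOS executable reading the real-mode BIOS ROM segment directly; all names and strings here are my own, not the actual ones used):

```python
def scan_for_string(bios_image: bytes, needle: str) -> bool:
    """Return True if the ASCII needle occurs anywhere in a dump of
    the BIOS ROM area (0xF0000-0xFFFFF in real-mode terms)."""
    return needle.encode("ascii") in bios_image

def exit_code(bios_image: bytes, needle: str) -> int:
    # 0 = found, 1 = not found, so a batch file can branch with
    # IF ERRORLEVEL 1 and copy the matching printer driver.
    return 0 if scan_for_string(bios_image, needle) else 1
```

The batch side would then test ERRORLEVEL after trying each candidate string and copy the corresponding driver onto the RAM disk.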
Project rolled out to thousands of users across several hundred locations (one server per location) without a hitch, and worked quite well from what I recall.
And here I am, in a shop of about 3K FTE, where six of us are squeezed onto a single-socket Xeon that barely beats an iPhone 15 Pro, rendering those 12 screens' worth of data at a crawl: huge latency, crappy disk latency too (feels like we're all on the same spinning rust, but it's Windows and some VMware middleware, so who knows why it's this slow).
This was also when I learned about the pitfalls of multithreading...
I had started trying to use multithreading, and had implemented it in my raytracer. It worked like a charm on my machine.
During that project they brought a 4-socket server in the lab for testing, so after hours I fired up my raytracer on it, excited to see the promised 4x speedup... only to be met by an immediate access violation.
I had of course missed a lock, but it worked fine on my computer because it only had a single CPU (core), so even if I spawned multiple threads, the chances of them tripping over each other at that point were very low. Not so when they could actually run simultaneously.
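The bug class in question, sketched in Python rather than the original raytracer: several threads mutate shared state, and only the lock makes the final result reliable. Remove the `with lock:` line and a machine where threads truly run in parallel can lose updates, while a single-core box will rarely expose it:

```python
import threading

counter = 0
lock = threading.Lock()

def worker(iters: int) -> None:
    global counter
    for _ in range(iters):
        with lock:              # the kind of lock that was missing
            counter += 1

threads = [threading.Thread(target=worker, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# counter is exactly 400_000 only because every increment is locked
```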
>it was decided to use network booting and running Windows from a RAM disk.
What tech did that use? I imagine a "modern" system doing this would use a small RAM disk for the base Windows installation to boot from with any critical apps inside, and a second disk image mounted via HTTP for any "bulk" data, all via a custom iPXE build.
But NT4 predates the PXE standard by a number of years, and also the concept of mounting disk images via HTTP (or other SAN protocol). Was this some Netware stuff perhaps?
While it used NT4, this was around Y2K; I recall we also had to test all the 486 and BIOS combinations they had to see if they survived the Y2K transition (one failed!). Which also means my memory is hazy.
Anyway, what comes to mind is Etherboot[1]. I'm pretty sure he used BOOTP[2] with a TFTP server for the distribution part.
bootp / tftpboot were used to provision diskless systems (diskless workstations / X-terminals) even in the very early 1990s (and likely before, but I didn't interact with them until college computer labs).
On scanning my brain-based archives I can't recall how IP addresses were assigned, and a brief Google search for "bootp vs dhcp" leads me to believe that with BOOTP there was just a static list of IP/MAC mappings for already-known diskless devices.
A BOOTP server app generally listens for a broadcast that appears when another physically connected device sends a "bootp request" onto the network, so the BOOTP server can compare the MAC against its whitelist and assign the corresponding designated fixed IP address on the local network. That IP address is then remembered by the requesting device and used from that point forward, not much differently than an address assigned by DHCP.
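That static whitelist typically lived in a bootptab file; a hypothetical example in CMU-style bootpd syntax (hostnames, MACs and addresses all made up for illustration), where each known diskless client gets a fixed IP and a boot file to fetch over TFTP:

```text
# /etc/bootptab -- illustrative only
.default:\
        :ht=ethernet:sm=255.255.255.0:gw=192.168.1.1:\
        :hd=/tftpboot:bf=netboot.img:
ws01:ha=00A0C9123456:ip=192.168.1.101:tc=.default:
ws02:ha=00A0C9123457:ip=192.168.1.102:tc=.default:
```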
Some hardware like early HP ethernet printers did not retain their assigned IP address through a power failure. So there had to be a functional bootp server app up and running before you powered up the printer or the printer's initial bootp request would go unanswered and it was no good to anybody.
I also learned some valuable lessons in project planning and execution... they had a few guys who sat for 1.5 years to plan the rollout, working with suppliers to plan deliveries of the hardware etc.
When the time came we rolled out all the several hundred locations in just over 3 months, which included assembling and basic configuration of the servers at base, shipping the servers and then migrating data onsite from the old UNIX server (mailboxes, home directories etc).
We only had two issues: one server had its hard disk cables disconnected during transport, which would have been an easy fix had the technician on site been Compaq certified (instead it had to be shipped back to base and out again), and another had a weird hardware error which caused the server to freeze when scrolling large log files in Notepad (we replaced the server and all was good). Beyond that it went flawlessly.
Two floppies? In 1995 I got Windows 3.1 down to one floppy in order to boot a library catalogue computer into Write.exe so I could do WYSIWYG word processing. All the WordPerfect 5.1 machines were in use, and since few used a catalogue computer for more than a short while, I thought I should just do it. I used the software "2MF 3.0", which formatted the 1.44MB floppy to the higher density of 2MB; it booted into DOS, autoexec.bat created a RAM disk, and WIN.UC2 self-extracted itself. Decades later I am happy to work at the same university, in IT.
I made a similar floppy setup to boot into Civ 1, a floppy to read the complete Bible, and a floppy to listen to MODs.
One floppy for single tasking computing, life was good.
I love this idea. I wish there was some kind of 2024 equivalent to doing this. Maybe a rewritable optical media of some sort. Sony really should have brought the minidisc to the pc as a floppy and cd/dvd rom replacement.
Mike Elgan of therawfeed (a long defunct tech website), used to talk about his minimal travel set up - of a USB with a basic linux distro on it and a very simple word processing set up.
He'd either boot to it to 'hobble' his own laptop and remove all other distractions while writing, or boot to it from other peoples PCs to a set up he was used to and happy to work from when away from home. (Mike was one of the first technomads - or at least one of the first to post about it a lot.)
I had a similar set up for a while which I used with my home laptop or work PC at lunch time, but I never really got into writing as much as I'd hoped I would, so it wasn't that useful for me.
Started doing this when the first USB sticks became available. Even in Windows 95, which didn't support USB at the time.
Interestingly, W95 could be trimmed down to about 35MB, and carefully adding Word & Excel from Office 97 was about another 65MB, so it ended up fitting if you had one of the huge 128MB USB sticks.
That's about the smallest 32-bit Windows office machine I can figure you can get that would be "fully compatible" with the latest Windows & Office versions, as long as you carefully stored your office files on a FAT32 partition and limited your expectations (like file size and number of XL rows) to those addressed by W95, which was as functional an office machine as millions of people need today.
For those of you who did manage to run W95 at its full 2GHz maximum over 10 years later, on much higher-speed motherboards than there were in the 1990s, you know what I'm talking about when I say the most noticeable thing is zero latency in almost all human-computer interactions.
You could be doing all kinds of office work, with lots of other things to "boot".
Just got a couple more of the "small" 128GB SATA SSDs that are finally cheap enough to use like "game cartridges" for bootable OSes now. Not much different in application, just faster booting and operation than most USB.
Two partitions on each SSD, one for an OS only, one for ALL related storage.
Still have some massive multibooting going on, but with these little SSDs the most up-to-date are going to be W11, W10x86, W10x64, Debian, Mint & Fedora.
Fortunately I got a few of the pre-NUC cheap ASUS mini PCs that have a simple hatch on top and came with a full-size SATA desktop HDD right there. It gets even better ventilation and has no exposed electronics when the cover is off all the time; remove the HDD (for good now) and just slip in whichever SSD you feel like booting at the time.
Looks like about 128GB will do what 128MB would do back in the day.
> Even in Windows 95, which didn't support USB at the time.
That's because DOS-based Windows could use the BIOS for disk access, and BIOS presented USB drives as hard drives. I believe you can even do the same with an NVMe SSD that has a suitable boot ROM.
Yes, good to emphasize that UEFI or genuine BIOS motherboards will access the USB drives on powerup; then any OS that can boot from that type of partition layout can go forward from there. DOS, W9x and NT5 need CSM enabled to boot on a UEFI MB; W7 loves it as well.
W98 would install and run from USB too, as long as USB device drivers did not get installed. That way once booted if you plugged something into a USB socket on the MB, it was "unknown" and remained inaccessible. But if you booted when the second USB device was plugged in beforehand, W9x (or DOS) assigned an alphabetic drive letter and you could access the files.
Sometimes I still use a small FAT32 partition with simple DOS on a Syslinux'ed volume to boot distros from the NT5 bootloader. That way you can edit the Linux multiboot menu in Windows, or even DOS which sure boots a lot faster today.
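A hypothetical sketch of such a syslinux.cfg (labels and paths made up), which, being plain text on a FAT32 volume, is editable from Windows or DOS:

```text
# syslinux.cfg -- illustrative only
DEFAULT debian
PROMPT 1
TIMEOUT 50

LABEL debian
    KERNEL /boot/vmlinuz
    APPEND initrd=/boot/initrd.img root=/dev/sda3 ro

LABEL dos
    COM32 chain.c32
    APPEND hd0 1
```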
Now they have USB enclosures for M.2 drives, usually not both NVMe & SATA flexibility though.
What were they thinking? A MiniDisc enclosure with a Blu-ray-density disc inside would be perfect for so many use cases. Considering the extremely low prices we see for USB sticks and SD cards these days, imagine how cheap these discs could be to mass-produce.
> I wish there was some kind of 2024 equivalent to doing this
Lots of OSes support live USB images. It's only Windows and macOS that don't (though I'm sure Windows could with a little effort).
> Sony really should have brought the minidisc to the pc as a floppy and cd/dvd rom replacement.
Sony did. But there were already removable writable formats that had larger capacity than a CD back when the MiniDisc was a thing, and they didn't become mainstream either.
Frankly I think MiniDisc is one of those technologies that people remember as being better than it actually was.
Windows does; it's called WinPE. It's what the Windows installer media runs, but you can (could?) create your own images that boot straight to desktop. WinPE images that included a third-party screen reader were somewhat popular in the blind community at one point, before Narrator (the built-in screen reader) got decent and Microsoft started including it and related components in the installer. I think the original rationale behind this feature was the ability to make custom disks with data recovery tools and such.
macOS can boot from external volumes too. It's not a traditional live image; the volume is usually writable and it's a real copy of macOS. We're probably talking actual HDD or SSD portable drives here, not flash drives, since macOS isn't that small. You need a Mac to run one of these, of course. No idea if this works across computers on Apple Silicon: the new Macs store a lot of the encryption and boot-policy stuff in the Secure Enclave, so I have no idea if booting an unrecognized system would actually work.
Not merely the limited functionality of WinPE or WinRE.
It's "Windows-to-Go", introduced in Windows 8.0 which had the full Windows functionality contained on the USB stick, if the bootable stick met the high-performance requirements and had firmware indicating to Windows that the USB stick was not a "Removable Device".
I credit nLite for my general NT knowledge today. Taking Windows apart and putting it back together was a fun way to squeeze every MHz/Mb from your hardware.
"Hiren’s BootCD PE (Preinstallation Environment) is a restored edition of Hiren’s BootCD based on Windows 11 PE x64. Given the absence of official updates after November 2012, the PE version is currently under development by the fans of Hiren’s BootCD. It features a curated selection of the best free tools while being tailored for new-age computers, supporting UEFI booting and requiring a minimum of 4 GB RAM."
I recently found a 90-minute cassette from 40 years ago with some good obscure music I had forgotten. The accessibility and touch of a cassette or floppy interfacing with the player or controller gives better tactile feedback than a USB disk or MicroUSB disk. You can also more easily lose the USB disks, as they are smaller and don't really make noise when dropped.
As MiniDisc and the Zip drive are obsolete, I'd like to have a mini-CD or business-card CD audio player based on a Raspberry Pi Zero. Using Vocos, EnCodec or Lyra v2 you could fit 200 hours of stereo audio on the small CD at around 3 kbps and select tracks with some buttons and an LCD screen.
Similar to the Imation RipGo or Memorex portable 8cm mini-cd players which could play mp3.
https://languagecodec.github.io/
I could run a bootable OS on MicroSD and use external small CDs as data or app storage, perhaps also to boot from. An OS experience [Redox, Haiku, KolibriOS, Dragonfly, Fuchsia, MorphOS, Serenity] booting from an 8 cm business-card CD depending on the day, weather or mood.
https://en.wikipedia.org/wiki/Mini_CD
Or perhaps just boot into a Raspberry Pi zero menu to select or boot a random OS from the Zero and use Airtagged encrypted USB-C sticks (making sure each OS can read the encrypted volumes).
>> Sony really should have brought the minidisc to the pc as a floppy and cd/dvd rom replacement.
>Sony did. But...
If it was from Sony, it was probably ridiculously expensive. Everything they sold had (and still has I think) the "Sony tax", making it uncompetitive with other standards, especially for something like removable media where the per-unit cost adds up quickly.
CD-RW, DVD-RW, DVD+RW, and DVD-RAM all support random access writes of some form, and can be used as live block devices. CD-R, DVD-R, and DVD+R can be used too, with limitations because it's a write once media; you can't really erase anything, so you'll fill up the disc eventually.
Probably similar for BD-RE (recordable, erasable) and BD-R, too.
Somehow related: I used to run both 3.11 and 95 over the same DOS 5 (?) installation. It's a less known fact that, although Windows 95 ran in protected mode, it was started as a DOS real-mode program before switching modes and, when it was run from a DOS command line, it shut down back to the DOS prompt, so I could run both Windows versions without completely rebooting. It was useful because at the time I was programming versions of the same program for both systems.
That means that Windows 3 was compatible with the newer DOS version that came underneath 95. If someone still has the software and wants to test it, IIRC the only thing needed was to edit autoexec.bat and have different directories for every Windows.
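Something along these lines is the usual arrangement (directory names and the CHOICE prompt are my guesses for illustration, not the original files):

```bat
@ECHO OFF
REM Hypothetical autoexec.bat: keep each Windows in its own directory
REM and pick one per session from the DOS prompt.
CHOICE /C:39 Start Windows 3.11 or Windows 95
IF ERRORLEVEL 2 GOTO WIN95
PATH C:\WIN311;C:\DOS
C:\WIN311\WIN.COM
GOTO DONE
:WIN95
PATH C:\WIN95;C:\DOS
C:\WIN95\WIN.COM
:DONE
```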
I recently slimmed down Windows 3.0 to run it from a 1MB SRAM card on a MS-DOS 5.0 palmtop. I just did it by trial and error, and I think I can slim it down further from this guide, though they're using Double Space, which I won't have.
I'm always sort of amazed how well Windows 3.x runs on hardware that would have been a bit old even when 3.0 was released.
DoubleSpace was nice unless you had anything that was already compressed. I hated when it showed me 450MB of disk space (I had a 160MB HDD), but then after installing Warcraft 2 it took 160MB of my free space instead of 50MB (I don't remember the real figures; however, War2 had already-compressed files, and it was shown in Norton Commander as "size: (let's say) 50MB, occupied: 160MB").
Remaining disk space was estimated using a compression ratio of 2.0 by default, but you could change the default, at least in Win9x. I don't know why it showed you 450MB on a 160MB HDD. You probably already changed it to 3.0 or higher.
I used a drivespace volume for a very very very long time, even when I had 80 GB HDDs, because it made it SO EASY to back up the entire windows installation - just copy the file.
I vaguely recall that the default was to use the current average compression for the default estimation. I always changed it to 1, which is much saner: instead of being surprised that you didn't have enough disk space to store something, you'd at most get surprised that you had more space left over than you expected after storing something. Even today, things like the transparent compression in btrfs use 1 as the compression ratio for the free space estimate (though its metadata usage can be a bit unpredictable).
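The arithmetic behind those surprises, as a sketch (my own illustration, not DriveSpace's actual code): reported free space is physical free space times the assumed compression ratio, so storing incompressible data shrinks the reported figure by more than its own size whenever the ratio is above 1.

```python
def reported_free_mb(physical_free_mb: float, assumed_ratio: float) -> float:
    """DoubleSpace-style estimate: free compressed space scaled by an
    assumed compression ratio."""
    return physical_free_mb * assumed_ratio

# With the default 2.0 ratio, 150 MB of real space is reported as
# 300 MB free; storing a 100 MB already-compressed file then drops
# the reported figure by 200 MB. With a ratio of 1 there are no
# surprises, only pleasant ones.
before = reported_free_mb(150, 2.0)        # 300.0
after = reported_free_mb(150 - 100, 2.0)   # 100.0
```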
DriveSpace was only ever available on FAT12 and FAT16 volumes. It seems like it'd be more annoying than anything for drives where FAT32 would be desirable.
Not really. I had (and still have and use today) a 2GB FAT16 partition with ~1 GB for DOS utilities, Win98 setup kit from the CD and various windows programs that didn't need to be installed. The remaining ~1 GB was the drivespace volume with 2 GB space inside (maximum) for Windows 98 and the rest of programs that needed to be installed to run. It was enough space for Office 97, Corel Draw 8, Adobe Acrobat 5, and many many small programs. Games are in a different partition.
Microsoft used to sell Windows as a runtime. Like, you would ship Windows on floppy disks as part of your software product for end-users.
16 bit Windows was consciously designed, intentionally made modular, to accomplish things like this. Albeit the author was surely not the intended customer!
Something like it! Setup for Windows 95 through Windows ME, when run from DOS, unpacked and launched a version of Windows 3.1 that was included in the MINI.CAB file.
Seems like the sort of page that should've been linked from oldfiles.org, a long lost directory/webring of DOS, bootdisk stuffing and Windows 3.x sites:
Back in the dark ages I put swap on most of the graphics memory of an AGP-bus graphics card. The read speed was slow compared to system RAM but insanely fast compared to swapping to HDD. Then I set the graphics card as the highest-priority swap so it swapped there before going to HDD.
I don't remember, specifically I don't remember if I had to compile a tiny device driver to expose a chunk of RAM as a block device. In that case, it might have been something like this sample: https://www.opensourceforu.com/2012/02/device-drivers-disk-o...
There might also have been some way to convince the kernel a piece of RAM was a block device. The "ramdisk.c" driver in Linux was simpler before.
After that, it was a matter of creating an init script which formatted the block device as a swap device, then running swapon.
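The init-script step described above might have looked something like this (a sketch; `/dev/vramblk0` is a hypothetical block device exposing the graphics aperture, and the priorities are illustrative):

```shell
#!/bin/sh
# Higher priority means the kernel fills that swap area first,
# so pages go to the fast VRAM before touching the disk.
mkswap /dev/vramblk0
swapon -p 100 /dev/vramblk0   # highest priority: VRAM first
swapon -p 10  /dev/sda2       # regular disk swap as fallback
```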
Depends on the system. I still use a ramdisk on my DOS/Win31/Win98 machine. It has lots of memory and Win98 crashes with more than ~768 MB RAM. I use the ramdisk to occupy excess memory.
If your RAM disk has a fixed size but there's some free space on it, and the system is out of RAM, it can effectively reclaim that otherwise unused space on the RAM disk at the cost of having to copy the pages back and forth.
From my vague memory, Windows had a default minimum swap size. Putting it on a RAM drive if you had enough memory (and, as someone else mentioned, compressing it as well) made things run much faster.
Android does this with zram, and I think this is a big part of the reason why it tends to use more battery once you run low on RAM, since paging in and out requires (de)compression, which uses CPU.
Badly. Because (1) CPU and (2) dblspace.bin takes ~70K of conventional memory. IIRC, on a 286 it can't be loaded high, or at least not with EMM386 from back then.
Around the same time, I fit Linux with many of the common Ethernet drivers, X plus rdesktop and VNC clients, and a little utility to launch them against a list of sites, onto a single floppy.
I have a 1.44 MB floppy here somewhere that boots MS-DOS with Windows 1.01. I do not remember if I even had to do anything special to fit that on a single bootable floppy. Probably not. From the first item I find on archive.org it looks like Windows 1.0 was distributed on four 360 kB floppies.
I had Windows 1.01 and 2.03 on boot floppies, they were both small enough to just copy the files into a DOS bootdisk and fit fine. However in my experience most of the programs that I wanted to run in Windows that wouldn't run in DOS required at least Windows 3.x to work.
There is Tiny11 for those wanting to try a custom stripped down version of Windows 11. Never tried it myself, though I was a fan of nLite back in the day.
I liked your comment, but at the same time I imagine how tiny 4.5 GB will seem 30 years from now, when our children will reminisce how it fit on a single-layer DVD.