I also fondly remember IPX from when I was a kid. Red Alert 2 and Age of Empires II supported it, and it was truly zero-config on a small LAN. Then, eventually it died off, and I had to learn TCP.
The author’s point about so much not being necessary today, while simultaneously having more drudge work is interesting. I’ve often thought the former, but hadn’t connected it to the latter. For example, I commandeered my family’s two PCs (a Celeron 333 MHz and a Pentium III 550 MHz – what a screamer!) at night to run distcc, so that Gentoo builds would finish in a somewhat reasonable amount of time. This simply isn’t necessary anymore. Firefox, which used to be an overnight job, now compiles in 10-20 minutes on most modern CPUs.
On the one hand, this is wonderful – faster feedback loops, more time to tinker. On the other hand, getting distcc set up back then was a fairly large undertaking for a kid, and taught you a good deal about a wide range of topics. Also, since the pain level of failure was so high, you were more careful to get it right the first time, lest you awaken to disappointment.
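For the curious, the distcc side looked roughly like this (a sketch from memory; the hostnames and subnet here are made up, and exact flags varied by version):

    # On each helper box, run the distcc daemon and allow the LAN:
    distccd --daemon --allow 192.168.0.0/24

    # On the machine driving the build, list the helpers and parallelize:
    export DISTCC_HOSTS="localhost celeron333 p3-550"
    make -j6 CC="distcc gcc" CXX="distcc g++"

Gentoo had its own hooks for this (FEATURES="distcc" in make.conf), but the moving parts were the same.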
> I also fondly remember IPX from when I was a kid. Red Alert 2 and Age of Empires II supported it, and it was truly zero-config on a small LAN.
Interestingly, I recall Red Alert 2 only supporting IPX, and that causing tons of trouble for us at one LAN party when we tried to play it. Of course, I may be remembering this wrong, given the passage of time and how rudimentary my technical skills were back then.
Red Alert 2 in retrospect feels like an absolute "perfect storm" of bad luck that still produced a fantastic game:
RA2 runs on the same engine as Tiberian Sun (with minor modifications).
Tiberian Sun's engine was Westwood's first written in C++.
It was also two years behind schedule and pushed out barely functional after EA bought Westwood in the late 1990s.
While Westwood was busy putting itself back together under EA, they spun off "Westwood Pacific" and tasked it with making Red Alert 2 on that same engine.
Red Alert 2's engine is the most cantankerous, buggy piece of junk that ran what I would strongly argue was the pinnacle of the RTS genre. Getting it running on modern systems is an exercise in "which DirectDraw wrapper actually works?" and hoping that the UDP patch doesn't cause random game desyncs, even on a modern LAN.
AOE2 is the pinnacle of the RTS genre. With the Definitive Edition released in 2019, it's still extremely popular today. In my opinion, no RTS title before or since has surpassed AOE2 in terms of gameplay (and, subjectively, graphics -- I LOVE the detailed pixel graphics much more than polygons).
AOE2 is the best-balanced, IMO. I don’t know that it’s necessarily the most fun, though. I love it, but I also love RA2. Agree on the graphics comment: I’ll take the isometric view any day over 3D RTS.
RA2 has some grossly OP units that you can crank out en masse and dominate with. A small army of Apocalypse Tanks is game over for the enemy, unless they’re France and have turtled with a bunch of Grand Cannons.
Or in Yuri’s Revenge, load Battle Fortresses with a Chrono Legionnaire or two, a Sniper (assuming British), and some GIs.
These, and the overall frenetic pace of RA2, make it more like junk food than the fine dining experience of AOE2. Yes, it’s not the best thing ever, but man is it fun while it lasts.
I also played AOE3 after AOE2 years ago, but it never had the same feeling despite having much more variety and content. AOE2 hit the balance between simplicity and variety for me, though that might be an effect of childhood memories as well.
Supreme Commander: Forged Alliance is the pinnacle, especially with the FAForever mod.
It puts emphasis on strategy and gives a lot of options on how to approach a match. Micro can make a difference, but it's not make or break like in pretty much every other RTS.
I would say that Dawn of War 1 is the pinnacle of the RTS genre. Still has never been topped by any game I've played, even its sequels. DoW 2 was a dumpster fire that removed most of the RTS elements and cut armies down to 1/3 of the size they used to be. DoW 3 was a step back in the right direction, but still was not really at the lofty heights the first game hit.
I used to dream that Blizzard would put Jay Wilson (who was the D3 lead while he was with them, but more importantly was the DoW1 lead before that) onto a Warcraft 4. I would've loved to see that. Alas, after D3 had a fairly chilly reception (and Wilson himself sparked outrage through unwise social media posting), Blizzard quietly demoted him (and he eventually left). So it was not to be. But I still wish it had happened.
I really enjoyed DoW2 as an action RPG but share your disappointment with it as a sequel to DoW1. I think it would have been received better with a different title.
I'd argue that Dominion: Storm Over Gift 3 is peak. Tiberian Sun and Red Alert 2 stole so much from this game it's wild, from repairable bridges to unit queues and tabs on the build bar.
I can't comment on Red Alert 2 in particular, but many early network multiplayer games predate widespread adoption of the internet, so TCP/IP wasn't an option. Early online gaming services like Kali were essentially a product built around a TCP/IP wrapper for IPX that allowed internet multiplayer on a platform it wasn't initially designed for. But it wasn't long before most people probably didn't even have the IPX protocol installed anymore.
Yeah, I had basically the same system (Celeron 300A overclocked to a blazing 504 MHz), and trying to play the same games with my dad over IPX was a huge problem. I don't remember why, just frustration and never wanting to use IPX.
Definitely had problems with one of the Warcrafts; I assume 1, but maybe 2.
If you used pure DOS with a network driver, it JustWorked(tm)(r)(c). However, Windows 95 had its own implementation of IPX, and it sometimes had issues.
> I also fondly remember IPX from when I was a kid.
IPX was the network layer and SPX the transport layer on top. IPX could be used directly, like UDP, or SPX could be layered on top when you needed a guaranteed in-order byte pipe, like TCP. It used the MAC address as the machine address and was lightweight to the point where it outperformed IP in some cases.
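For anyone curious what "the MAC address as the machine address" looks like on the wire, here's a rough sketch of the 30-byte IPX header as a Go struct. The field names are my own shorthand from memory, not the official spec's:

    // Rough sketch of the 30-byte IPX header (from memory).
    type IPXHeader struct {
        Checksum     uint16  // almost always 0xFFFF (checksumming disabled)
        Length       uint16  // header + payload length in bytes
        TransportCtl uint8   // hop count, incremented by each router
        PacketType   uint8   // e.g. 4 = plain IPX datagram, 5 = SPX
        DestNetwork  [4]byte // 32-bit network number
        DestNode     [6]byte // the destination NIC's MAC address
        DestSocket   uint16  // like a UDP/TCP port
        SrcNetwork   [4]byte
        SrcNode      [6]byte // the sender's MAC address
        SrcSocket    uint16
    }

Since the node address is just the NIC's MAC, there's no ARP and no DHCP-style address assignment to configure, which is a big part of why it was zero-config on a flat LAN.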
As a silly hobby project I want to take a stab at writing a user space IPX/SPX stack on Plan 9 and model it after ip(3). The stack would mount itself after /net providing /net/ipx and /net/spx, then you bind it to one or more Ethernet adapters. Programs wanting to listen or dial on IPX or SPX just put ipx!address!service instead of tcp!address!port for their dial strings. Then you could easily build IPX networks again and even easily tunnel them over whatever using 9P by mounting the stack on other machines.
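A minimal sketch of what the dial-string handling might look like in Go, purely illustrative (the real version would sit behind the 9P file interface, and the Addr type here is hypothetical):

    package main

    import (
        "fmt"
        "strings"
    )

    // Addr is a parsed Plan 9-style dial string such as
    // "ipx!0:0:c0:4f:d2:31!451" or "tcp!example.com!564".
    type Addr struct {
        Net     string // "ipx", "spx", "tcp", ...
        Machine string // MAC address for IPX/SPX, host for IP
        Service string // socket number or service name
    }

    // parseDial splits a dial string of the form net!address!service.
    func parseDial(s string) (Addr, error) {
        parts := strings.SplitN(s, "!", 3)
        if len(parts) != 3 {
            return Addr{}, fmt.Errorf("bad dial string %q", s)
        }
        return Addr{Net: parts[0], Machine: parts[1], Service: parts[2]}, nil
    }

    func main() {
        a, err := parseDial("ipx!0:0:c0:4f:d2:31!451")
        if err != nil {
            panic(err)
        }
        fmt.Printf("%+v\n", a) // {Net:ipx Machine:0:0:c0:4f:d2:31 Service:451}
    }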
Ahhh, right. Networking is the area of tech where I learned enough to do what I needed, and then stopped. I keep meaning to get better at it. Your proposed project sounds like a fun way to do so!
> Could a part-time programmer like my father write small-business software today? Could he make it as safe and productive as our LAN was? Maybe. If he was canny, and stuck to old-fashioned desktops of the 90s and physically isolated the machines from the internet. But there is no chance you could get the records onto a modern phone safely (or even legally under HIPAA) with the hours my father gave the project.
Their father’s software likely was not HIPAA compliant or safe, just isolated. They mention earlier in the post that it was a permissionless, file-based database any computer could access, including his personal machine, and that any innocuous command could potentially bring the whole thing down.
That’s certainly looking back at the situation with rose-colored glasses. It probably did work great for his father’s needs, but “safe and compliant with modern medical data protection laws” it was not.
I do think that a simpler approach to small business software like this example is not a bad goal. This was a great read. Thank you for sharing.
I don't get it. Is it gone? The author doesn't say what has happened since then. Aside from newer machines and modern software, things on the LAN haven't changed much, right?
Of course it’s not gone, although I suspect there’s a very large number of people who would be surprised to learn either that their Wi-Fi network at home is actually a LAN or that a LAN has utility outside of just providing access to the Internet.
My impression after reading the article is that the author is comparing learning networks and computers in isolation on a LAN vs. in the modern world of connected SaaS and the like. That the subject matter is different, less exciting.
It's like comparing a company in its startup phase to its soulless corporate behemoth phase.
And, it's a tailscale blog so it segues to their business.
I started reading it out of remembrance of the LANs I went to in the 90s, but kept reading because the author's experience of core logic having much less overhead back then very much mirrors my own. Good writing and astute observations.
I remember those days. I feel the same way as the author.
But I think the right place to find that same joy in programming today is by building stuff with embedded systems. You can do a lot of fun stuff as a kid with an Arduino, some components, and instructions from the Internet.
I suspect you're thinking of upstairs in the Golden Computer Arcade in Sham Shui Po. It's still there, but it was already "cleaned up" and respectable when I went back in 1999, having shifted focus almost entirely to business needs. I went back again a few years ago, and maybe it's just because I'm older, or maybe it's because I've discovered HQB in Shenzhen, but it feels really boring now. You can at least still find the latest half-a-billion-in-one NES emulators downstairs, so that's always a good thing.
When I went in 1992, it was crazy. CDs hadn't seen much adoption for computers yet, so the market was still floppies - IIRC HK$10 per disk and another HK$10 for a photocopied manual. They'd take your money, phone up some guy, and tell him a four-digit code, and 10 minutes later someone would hand you a stack of disks. It felt weird that it was so blatant, but the first time I went by myself, I couldn't find it and asked a policeman for directions. He said in English, "Oh, the copied stuff? Over there!" and pointed to it!
That said, I only ever bought 3 disks from there - partly because I didn't have much money, but mostly because I found the programming books downstairs even more interesting. They were translations of all the English programming books, for about HK$30 each. The text may have been all Chinese, but the source code wasn't, so I bought a couple of books just for the diagrams and the source code.
Cool that it's still there. I sort of recall visiting in '0-something and being disappointed. For a long time there was a larger spiritual successor in Bangkok, Pratunam Mall, but I'm pretty sure that's also scaled down these days. In some ways I guess they were the cultural equivalent of record shops. Probably many former Soviet countries had them too.
I remember heading to HK on a high school trip and we all ended up in that building, many people grabbing loads of CDs. Felt like Christmas! (I was too worried about customs in my home country checking my bags on arrival though!)
Brought back frightening memories of running a 10BASE2 network on LANtastic. Any one bad connection and the whole network came down. So much fun to troubleshoot.
Is there a way to give a 'tailscale IP address' to dumber devices on your network? Effectively serving 'tailnet addresses' via DHCP? Say, to a printer, other lower-powered devices, etc. Could have sworn I saw an article about this at some point but can't find it.
Huh? I am using it rather than remembering it. Since my router only allows specific hosts and ports to be accessed from the Internet, there is no need for security between my laptop and my 3D printer; I can just upload and print a model, or watch it on camera, by typing a URL.
You are using the Internet. He is speaking in reference to applications which are hosted and used locally (Local Area Network).
Your 3D printer, assuming it truly doesn't need an Internet connection to function, is an anomaly today. Remotely hosted assets, data & logic have become the norm.
I know that this article was basically an ad for Tailscale, which happens to be a product I've really grown to love, but it still struck a chord in me. I share the yearning for the "good old days" when we could just do stuff on the Internet and it felt limitless. The spirit of build over buy felt different then, too. While I don't have a parent who wrote practice management software, I remember a particular niche music store we'd go to using FoxPro to manage their inventory and catalog. I wish I saw more of that today.
As it happens, I was thinking of whipping up a Rails app to help my wife with a particular task she's got, and it occurred to me I can host it in our basement if I install Tailscale on her phone and computer, and it will just work, particularly if they can get identity integrated beyond joining a tailnet. So maybe the author's point is valid but overstated, and it still exists somewhat, just less often.
Author here. I realize it reads as an ad for Tailscale, but I actually wrote it earlier in 2019 while doing some of the very early product development. In a sense it is more a PRD than anything.
Well, my apologies. It was a very enjoyable read. Keep doing what you're doing! And thank you all for being cool to headscale - I would love more family-as-enterprise pricing, and am thankful that headscale is developed and exists.
> Learning how to store passwords or add OAuth2 to your toy web site is not fun
Hard disagree. OAuth2 is a neat technology, and every personal "thunk" you run across, or moment you find yourself asking "why the heck do I need to do this? Why can't I just do X?" while trying to implement it, is an important part of security and instructive on how to build a secure system where you can only trust components to do a minimal thing. Storing passwords (or really, salted password hashes) is similar.
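To make the password bit concrete, here's a minimal sketch in Go using bcrypt, which generates and embeds the salt for you (a real system would layer on rate limiting, breach checks, etc.):

    package main

    import (
        "fmt"

        "golang.org/x/crypto/bcrypt"
    )

    func main() {
        // Hash at signup: bcrypt picks a random salt and embeds it
        // in the output, so you store just this one string.
        hash, err := bcrypt.GenerateFromPassword([]byte("hunter2"), bcrypt.DefaultCost)
        if err != nil {
            panic(err)
        }

        // Check at login: compare the candidate against the stored hash.
        if bcrypt.CompareHashAndPassword(hash, []byte("hunter2")) == nil {
            fmt.Println("login ok")
        } else {
            fmt.Println("bad password")
        }
    }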
OAuth 2 is definitely a "technology" and not a protocol. It is a tool for large corporations to create proprietary, incompatible "OAuth" implementations. Whereas OAuth 1 was actually a protocol: it did what it said and was compatible. OAuth 2 is terrible corporate crapware. It is "open"auth like "open"ai is open.
As for LANs, my home LAN is still going strong, no flaky wireless for me. My childhood friends group did stop having in-person LAN parties around 2009, mostly because we all moved to different regions. We still do occasionally set up a VPN to play some old non-internet game, but it's mostly over the internet now (with self-hosted voice chat).
OAuth2 and OIDC are fully open specifications: anyone can join a working group (there's no membership gatekeeping), and they both have plenty of fully open-source implementations. Not like OpenAI at all.
Yes, OAuth2 is a toolkit that is open... for making proprietary, non-interoperable auth solutions, one for each megacorp. That's why mail clients like Thunderbird have a directory full of configuration scripts written specifically for each proprietary OAuth2 solution made by each megacorp.
Mail clients that do not go out of their way to support each corporation's unique out-of-band OAuth2 HTTPS flow can no longer access these megacorp email accounts (see Gmail, Office 365, Yahoo, etc.). OAuth2 has made email servers proprietary. Simply supporting SMTP or POP3 or IMAP is no longer sufficient. Custom HTTPS scripting has to be written for each corp, and one corp's scripting won't work for another's login process.