Geocities Cage at the Exodus Datacenter (detritus.org)
189 points by imaginator on Nov 25, 2016 | 84 comments



I worked for GeoCities as a front-end person back then. We got to work on some cool JavaScript stuff for the page builder app that was really pretty far ahead of its time. I got into trouble for bringing a laptop with SuSE Linux on it into the office; back then Linux was considered a major security risk, and you'll notice all the machines are from Sun. They were also very early adopters of server-side Java, but most of the plumbing was C code. The JavaScript people were kept far away from the server side, and we didn't make it up to Yahoo, but they did snatch up almost all of the Unix admins.


Wow, Page Builder (formerly GeoBuilder) -- crazy times. I don't remember the SuSE incident! But your mention of server-side Java reminds me that we also experimented with server-side JavaScript via Netscape Enterprise Server. Compared to the C CGIs that comprised most of Geo's backend at that point, it seemed a panacea. Too many bugs and a "not ready for prime time" verdict later, we stuck w/ Apache (but moved more to handlers).


Great memories. During this era, I was at Citysearch and spent a lot of time in the Orange County Exodus DC. We had a nice rack of Sun gear, Extreme Networks switches and Alteon AceDirector load balancers.

I remember vividly that there was a cage down the row from us that was populated entirely by eMachines, a line of low-end desktop PCs you could buy at Circuit City and Best Buy. We laughed at their cage, but the company, 911gifts.com, ended up getting acquired for a nice sum, while our site and company were basically gone a few years later.


That's a great story. It'd be interesting to know whether that was all they could afford, or if they were being very forward-thinking in using COTS hardware and a failure-tolerant architecture.


Probably a combination of both.

I worked at a startup in the late 90's, and the price of Sun Microsystems gear was mind-blowingly high. From memory: an Ultra 5 workstation with a SCSI card and disk array was $15,000+. An Ultra Enterprise 450 server was on the order of $100,000 and went up from there depending on how you wanted to configure it.

Of course, that was exactly the time people started switching to Linux on x86 en masse. Pentium Pros were good and cheap enough to scale out less expensively on a per-unit basis. Today you can buy a 72-core Xeon server with gigabytes of memory, terabytes of SSDs, and 4x 10G Ethernet for less than 10 grand. Amortized over its functional lifetime, it costs less than a cell phone bill.


When I was planning our infrastructure, I got the impression that "hardware dogma" still happens. The datacenter world is still a place where you can spend $36k for something you can get for $1000 if you're not careful. I didn't use Cisco for our switches, and some IT guys I talked to acted like I was a heretic. But the decision saved us several thousand dollars, and we ended up with a design that was IMHO more reliable than stacking proprietary switches.

The big trick is to make everything redundant (redundant power, network bonding, Ceph instead of a NAS) and not have a single point of failure (SPOF). Then it matters a lot less if anything fails, and you can use cheaper hardware if you need to. That said, I still prefer server-grade hardware - just not always the newest or the most brand-name.
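
A quick way to sanity-check that kind of redundancy from the OS side, as a minimal Python sketch (it assumes a Linux bond named "bond0"; the layout of /proc/net/bonding is stable on modern kernels, but field names can vary slightly between bonding modes):

    #!/usr/bin/env python3
    """Report the state of a Linux bonded network interface (sketch)."""
    from pathlib import Path

    BOND = "bond0"  # assumed interface name

    def bond_status(iface: str = BOND) -> dict:
        text = Path(f"/proc/net/bonding/{iface}").read_text()
        status = {"mode": None, "active_slave": None, "slaves": {}}
        current_slave = None
        for line in text.splitlines():
            key, _, value = line.partition(":")
            key, value = key.strip(), value.strip()
            if key == "Bonding Mode":
                status["mode"] = value
            elif key == "Currently Active Slave":   # present in active-backup mode
                status["active_slave"] = value
            elif key == "Slave Interface":
                current_slave = value
            elif key == "MII Status" and current_slave is not None:
                status["slaves"][current_slave] = value   # up / down per slave
        return status

    if __name__ == "__main__":
        s = bond_status()
        print(f"mode={s['mode']} active={s['active_slave']}")
        for slave, mii in s["slaves"].items():
            print(f"  {slave}: {mii}")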


Unfortunately, storage dogma still seems to be a thing for EMC shops. Just try suggesting Ceph over ScaleIO. :-)


I was the technical architect for an internet-based commodity trading system that was also hosted at Exodus in Austin before the .com bust. As I recall we spent $6 million on our two Sun clusters "Coke" and "Pepsi" but that was almost 15 years ago so they might have been $6 million each. Then the bottom fell out of energy trading thanks to Enron and others and that was that. Those systems were probably sold for pennies on the dollar and would easily be replaced today with a small AWS presence for even less.

Frankly that's part of what makes working now such fun: the fact these systems just keep getting better and better.


Their revenue mattered a lot more than their COGS. At least in this case.


I love seeing stuff like this. Though I'd be a happy man if I never had to see the words "Veritas Volume Manager" again. Definitely one of my least favorite pieces of software from that era.

I worked for a company around this same time that had servers in the NJ Exodus data center. Used to have to head up there once a month to swap out tapes in the Sun L11000.


Brought back some memories for me as well. In corporate IT, Unix sysadmins had to know how to manage multiple hardware architectures and operating systems that all did things differently: AIX, Solaris, HP-UX, Ultrix, etc., all with different filesystems, RAID hardware/software, command paths, and so on.

That picture of the ethernet hornet's nest too. Ugh. At least it wasn't AUI cabling.


Indeed. AUI was a killer. We had a policy of turning it into 10base2 right away with an adaptor, then 10baseT when that became a thing.

I liked this era so much I used to skip dive all the kit that was chucked out. Had myself a nice stacked Sun 1000E as a desktop in 1999, until I got the electricity bill. Must have cost as much as a house when it was new.

Then I found HP-UX was horrid. Had a run-in with some HP N-class systems with Oracle. Yeuch, and that turned me to open source.


>>Must have cost as much as a house when it was new.

I do remember some $100k - $300k invoices for larger SMP servers from HP, Sun, and the like. For machines that probably had less overall horsepower than my current cell phone :)


Yeah, pricing was awful. I remember someone paying £12k for a single PA-RISC CPU option, and when they cracked it open to have a look it was 95% heatsink. Cue the "bloody expensive heatsink" comments.

It had about as much go as one of those "big slab" Xeon slot CPUs at the time, which cost 1/10th as much.


Well, I've seen appliances at over $50k apiece (and you need at least two to be useful), so systems like that are still here.


Right, because they aren't currently commodity items. It wouldn't take much, though, to get HAProxy to a state where it starts eating F5's lunch: a nicer UI, etc.

It took Linux and commodity servers a while to kill the $100k+/each proprietary unix server market.


Veritas was definitely better than the alternative. I was a dba at the time and it was a godsend.


I still can't believe a top-tier UNIX vendor required expensive 3rd-party software to have a halfway decent filesystem. Or, in the case of HP-UX, partitioning.

I get that Veritas also did logical volume management, RAID and so on, and that was a genuine value-add at the time. But so many boxes I saw had Veritas just so they could partition and get a filesystem that was OK with 36GB+ monster SCSI disks.


I ran DEC Alpha servers at the same time, and Tru64 (aka OSF/1 aka DEC UNIX) shipped with AdvFS. It has a lot of the same features as ZFS (volume management combined with the FS, so you could add/remove drives, and grow/shrink capacity w/o downtime). I remember my Sun admin friends being incredulous as I shuffled disk space on a live server. It is kind of a shame HP killed it.


From having to use it on Solaris, it was more the flexibility of resizing and changing the layout without downtime that was useful.

When you have a $100k+ machine it makes more sense; when you can just rebuild a new instance it makes less sense. I do NOT miss using it, but it was nice for its niche.


We were hosted at Exodus in the Herndon Virginia area around this same timeframe. The staff gave us a small tour and showed us a cage about the size of a double-wide trailer. Inside were two Sun Enterprise 10000 servers and a whole wall of drive arrays. Plus networking gear, tape drives, etc. Easily $7-8 million worth of stuff.

They said it was from a search engine we probably had heard of. Our guess was it was Altavista.

We didn't own our Compaq servers - we leased them from Exodus, like a lot of firms. And when the bubble popped, all those startups stopped paying for all that expensive equipment (which was now used and worth much less), and Exodus was on the hook as the owner of it all. Killed them.

Edit: Found this image from someone who picked one up for a song to add to their collection. From a prized million-dollar enterprise-class server, to being hauled around in the back of a pickup truck.

http://imgur.com/a/lXvOk


I think all colos say crap like that. During a tour at Switch, they pointed to a large cage and said, "It's a search engine that you all have heard about but I cannot name."

A week later, in Atlanta at QTS metro, they said almost the exact same words.


If it really was a large search engine, then I imagine they would be hosted at multiple locations, so not surprising that they were mentioned at both places?


Hmmm.. I'm pretty sure AltaVista ran on DEC Alpha hardware, not Sun. And Google used self-built high-density racks of Intel hardware.

Maybe it was Inktomi? I'm pretty sure they used Sun hardware.


"Hmmm.. I'm pretty sure AltaVista ran on DEC Alpha hardware, not Sun."

Yes, they did. Remember - the original URL for altavista was: altavista.digital.com.


I don't think we knew about Inktomi at the time, but it certainly was a possibility. Our other guess was Ask Jeeves, since we didn't think that Excite had the money for that kind of hardware. But who knows, maybe Exodus floated the bill for that too.


"As you can imagine (and see from these pictures), this equates to a whole bunch of ethernet cables. Cable management gets increasingly difficult to grasp each time a new box is added to the mix"

Eeeeeek. I'm not in IT and I got the sinking stomach feeling when looking at this.

Thankfully there is https://www.reddit.com/r/cableporn/ to cure that.


That mess wasn't common in any environment I worked at during the same time period. Pretty sure that would have got someone sacked in the various datacenters I happen to have worked in.


It was common in many customer cages. I even saw cages set up in 2000 where a cage contained $500k of Sun and Cisco hardware still in its boxes after 8 months. I guess that startup was growing so fast they hadn't gotten around to plugging things in yet /s.

More likely they either ran out of funding or never got the growth they expected. Back then there was often a large disconnect between hardware bought and hardware needed/used. But money was there and hypergrowth was "just around the corner."


My conspiracy theory was that VC funding required companies to spend X% on Sun/Cisco equipment, whether they needed it or not. In any case, expensive, unused Sun servers were lying around everywhere during that period. And you'd hear things like people bragging about their spare E10K.


They did eventually fix it; see the newer racks:

http://www.detritus.org/mike/gc/rearpatchpanels.jpg


That's not fair!

Patch panel backs are cabled only once, by electricians. Everything is connected to another patch panel near the Catalyst, and that's where the mess starts growing.

Memories :-)


I seem to recall the adjacent cage hosting a Very Large Company's hot web-based email product which was cabled "organically" and made this look like art. (Hi Mike!)


So many memories from that era. I saw my first 1TB storage array back then; it was an EMC Clariion fibre channel RAID box, it cost nearly $1M, and it was a full rack of 9GB dual-ported 15k RPM disks.


We had an IBM ESS "Shark" back then. A fantastic piece of kit, but totally overkill for us in the end. It was basically the size of two or three standard racks, full of shelves for drives, two RS/6000 AIX servers, and trays of redundant controllers.

You could pull trays of drives, one of the RS/6000's, or any of the controllers, and it'd keep on humming along thanks to redundancy pretty much everywhere. If any components went bad it'd call home to IBM via a modem.

And it could do things like automatic mirroring to a remote site.

1.5TB was a common configuration, but you could connect multiple boxes to increase capacity.


And that was EMC's "low-end" array. The Symetrix (Symetrics?) I remember being much more expensive for the same throughput.


Symmetrix, two Ms. They were more of a mainframe thing, I thought. I never really got to play much with the really big gear, Starfires, Superdome and so on. By the time I got to use a 64-way SMP box (Origin 3800) it was already obsolescent, and it was replaced a few years later with a pair of dual quad-core Xeons.


I had forgotten that GeoCities started migrating to NetApp filers after the Yahoo! acquisition. Given how many thousands (tens of thousands?) of filers they bought from NetApp, Yahoo! really should've bought them when they had the chance, I feel.


Does anyone know of an article like this about the dial in modem systems and infrastructure for some of the early dialup services?


A few decent things I could find online that jibe with what I remember:

https://www.patton.com/technotes/build_yourself_an_isp.pdf

http://www.gwi.net/behind-the-scenes-of-a-90s-internet-start...

https://news.ycombinator.com/item?id=8352432

The absolute minimum, and representative of the very first dial-up ISPs: http://www.linuxjournal.com/article/2025

http://www.datamation.com/erp/article.php/615281/My-own-priv...

There used to be a lot of nice material on this subject, but much of it has become obsolete and rotted away from the web over the last 15-20 years. The larger dial-up ISPs used Cisco AS series boxes (or equivalent) with PRI (i.e. phone over T1) connections (24 lines each) to a centralized RADIUS server for authentication. They are/were the last holdouts providing dial-up.

Smaller ISPs were more of a '94 to '99 thing. Usually they used Cyclades or equivalent serial port cards with up to 16 serial ports per card and an external modem per port. Eventually this morphed into boxes with multiple modems in them and access servers that did the PPP termination and the authentication (against a RADIUS server) as scale increased. US Robotics was probably the best-reputed player in the modem space.
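
For flavor, here's roughly what that RADIUS authentication looked like on the wire: a minimal Python sketch of an RFC 2865 Access-Request, the packet an access server would send when a dial-up user logged in. The username, password, and shared secret below are made up, and a real NAS would add attributes like NAS-IP-Address and NAS-Port.

    #!/usr/bin/env python3
    """Build a bare-bones RADIUS Access-Request (RFC 2865) -- illustration only."""
    import hashlib
    import os
    import struct

    ACCESS_REQUEST = 1      # RADIUS packet code
    ATTR_USER_NAME = 1      # attribute types from RFC 2865
    ATTR_USER_PASSWORD = 2

    def attr(attr_type: int, value: bytes) -> bytes:
        """Encode one attribute as Type | Length | Value."""
        return struct.pack("!BB", attr_type, len(value) + 2) + value

    def hide_password(password: bytes, secret: bytes, authenticator: bytes) -> bytes:
        """Obscure User-Password per RFC 2865: XOR 16-byte blocks with an MD5 chain."""
        padded = password + b"\x00" * ((-len(password)) % 16)
        if not padded:                     # RFC requires at least one 16-byte block
            padded = b"\x00" * 16
        out, prev = b"", authenticator
        for i in range(0, len(padded), 16):
            digest = hashlib.md5(secret + prev).digest()
            block = bytes(a ^ b for a, b in zip(padded[i:i + 16], digest))
            out, prev = out + block, block
        return out

    def build_access_request(user: str, password: str, secret: bytes,
                             identifier: int = 1) -> bytes:
        authenticator = os.urandom(16)     # Request Authenticator
        attrs = attr(ATTR_USER_NAME, user.encode())
        attrs += attr(ATTR_USER_PASSWORD,
                      hide_password(password.encode(), secret, authenticator))
        header = struct.pack("!BBH", ACCESS_REQUEST, identifier, 20 + len(attrs))
        return header + authenticator + attrs

    if __name__ == "__main__":
        packet = build_access_request("dialup-user", "hunter2", b"shared-secret")
        # A real NAS would send this via UDP to the RADIUS server on port 1812
        # (historically 1645) and wait for an Access-Accept or Access-Reject.
        print(packet.hex())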


Lots of people used the US Robotics Total Control boxes in the early 90s.

http://www.kmj.com/tcont.html


That's what I had in mind when I said "boxes with multiple modems in them".


They were also way too expensive for a lot of smaller ISPs.


At my smaller regional ISP we started off using USR Courier and Sportster modems with Cyclades cards before moving over to Livingston PortMasters. I ended up selling the company before moving to all-digital incoming lines (needed to support 56k modems).


Did exactly the same.... We had Sportsters hanging off the wall for a while (we couldn't pack them tightly due to the crappy ventilation on the Sportsters, but they were light, so clamping the cables to a suitable plate and fitting it to the wall worked great....)

The local telco had problems supplying enough lines from the nearest exchange, so they ended up hanging a thick cable bundle in the trees for several hundred meters, making a hole in one of our windows, and putting a large multiplexer cabinet in our office... Then three months later we moved to different offices; they were not pleased.


We got metal wire shelves and kept each shelf fairly close together (about 6-8 inches). We had 10 shelves per rack and 18 modems per 3ft x 6ft shelf. 10 racks and related additional hardware (switches, terminals, routers, CSU/DSUs, etc.) fit in about 600 sq ft of office space with space to spare for technician workbenches. We could do much higher density these days, of course, but it was the mid-90s and we thought we were doing well at the time.


We didn't quite get to the size where we needed that before we sold off the dial-up business (for a pittance; we accidentally helped set off a price war in our market against incumbents with deep pockets - it was no fun, but I learned a lot).


I got lucky in that regard. The larger company in our area decided it was better to buy us out than compete on quality of service, so we were in a position to better negotiate terms of sale.


I'll dig up photos of the ISP I founded back in 1993. I've got the evolution of it from 1993 till I sold it in 97, but they were all film photos for obvious reasons. Surprised I never took the time to post before.


I would love to see those!


Yes please !!


I can't speak for the really early dial-up infrastructure, but by the turn of the century the equipment involved was not all that impressive to look at.

A typical configuration would terminate the analogue lines over ISDN, which supported up to 30 B channels ('voice calls') over a single cable, running alongside one or two Ethernet cables to a single router with one or more modem option cards installed.

We had Cisco boxes in the last place I worked that handled dialup, looking pretty much like this: http://www.ciscomax.com/datasheets/3600/Cisco_3640.jpg


Are you sure you're not thinking of T-1 (24 B channels) rather than ISDN which was most typically 1 or 2 B channels?

[edit]

ISDN is a catch-all term that included BRI (2 B channels) and T-1 (24 B channels) and E-1 (30 B channels), so the parent comment is correct.


Your edit mostly got it right with the exception that it wasn't commonly called a T-1 or E-1 and the numbers are slightly off.

* ISDN BRI = two 64k B channels and one 16k D channel

* ISDN PRI over T1 = twenty-three 64k B channels and one 64k D channel

* ISDN PRI over E1 = thirty 64k B channels and one 64k D channel (time slot 0 on the E1 was used as a sync channel and wasn't considered a B or D)

A channelized T1/E1 was treated as an analog voice circuit, with 24 or 30 respective 64 kbps voice channels using in-band signaling, effectively limiting each channel to 56k. ISDN PRI used the same cabling and channel separation, but instead ran a digital circuit with out-of-band signaling, making the full 64k available for voice and data. ISDN PRI (and BRI) also allowed the option to be packet-based (similar to X.25) rather than circuit-switched for data transmission.
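
The arithmetic behind those channel counts, as a quick sketch (the per-channel rates come straight from the list above; the layout is just for illustration):

    # Bearer capacity for the three ISDN flavours listed above, in kbit/s.
    CIRCUITS = {
        #                 (B channels, B rate, D channels, D rate)
        "ISDN BRI":       (2,  64, 1, 16),
        "ISDN PRI (T1)":  (23, 64, 1, 64),
        "ISDN PRI (E1)":  (30, 64, 1, 64),  # +64k sync in timeslot 0, not counted
    }

    for name, (b_count, b_rate, d_count, d_rate) in CIRCUITS.items():
        bearer = b_count * b_rate
        total = bearer + d_count * d_rate
        print(f"{name:14s} bearer={bearer:5d} kbit/s  with D channel={total:5d} kbit/s")

    # ISDN PRI (T1): 23*64 + 64 = 1536 kbit/s, i.e. the full T1 payload
    # ISDN PRI (E1): 30*64 + 64 = 1984 kbit/s (+64k framing slot = 2048 kbit/s line rate)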


In the 90s, in my little corner of the mid-Atlantic US at least, it was very commonly called a T1 or DS1, and the terms were used interchangeably even though they refer to slightly different things (at least by sysadmins; use the terms wrong around a telecom equipment engineer and you'd likely get corrected).


I'm aware the misnaming was common, but it's a bad idea to do so. There's a pretty big difference between a T1 and a PRI in practice when used as a data-only circuit. For one, the T1 is a fair bit faster.


I worked for a large UK organisation that had cut back on their dial-up access but still had 50 or so modems in racks for field workers to dial in to get their email etc. Also, some site-to-site links remained dial-up connections.

Sorry, no fancy pictures or anything interesting, just reminded me of the time.


At least a couple of the Dyn datacenters had dial-up for out-of-band access back in 2013; I think it's all Ethernet now.


Dial-up lines are still available in some datacenters; we had some both for services and for OOB access as recently as 2015 (San Jose).


A few datacentres I know have POTS/ISDN capability, as they host a fair few VoIP companies. I assume at some point that VoIP call needs to get into the existing telephone network somehow. Also, companies would locate their telephony stack in a DC and route over Ethernet just so the kit was more secure and, in disaster scenarios, they could pick up calls anywhere.


Yup. A lot of places will now give you 1Mbps at the 95th percentile on FastE, inclusive with a service contract, if you ask nicely and are taking a cage with 20+ racks. At datacenters that do ask you to pay, it's usually <$50 MRC.

I'd guess the transition is slow because if it ain't broke, why fix it, and it's often a lot more than just provisioning a new circuit: you have to update process documentation, etc.
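
For anyone who hasn't run into that "1Mbps at the 95th percentile" pricing, here's a minimal sketch of how burstable billing works; the sample numbers are invented:

    def ninety_fifth_percentile(samples_mbps):
        """Sort the 5-minute usage samples, drop the top 5%, and bill at the
        highest remaining sample (conventions vary slightly between billing systems)."""
        ordered = sorted(samples_mbps)
        index = int(len(ordered) * 0.95) - 1
        return ordered[max(index, 0)]

    # A 30-day month has ~8640 five-minute samples. Mostly-idle OOB traffic with a
    # handful of large bursts gets billed as if the bursts never happened.
    samples = [0.05] * 8600 + [80.0] * 40
    print(ninety_fifth_percentile(samples))   # -> 0.05 Mbps billable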


Lots of companies still have inbound dial-in access for EDI stuff running on POS systems from the 1990s. It's scary, really.


It's also used for OOB access in high-security environments, in case you are getting strategically DDoSed.

Makes me wonder what you'd find wardialing in 2016.


I've used dial-up via a cheap modem to a Cisco 2500 as recently as a few years ago for OOB console access. It's not gold-plated but still works fine.

More recently, though, it's all been Ethernet handoff, either via the DC or a local provider.


AT&T installs a USR modem on the POTS line to manage the routers for their BVoIP.


I worked at a couple of small ISPs in the mid 90's, and built the dialup infrastructure of one from scratch. I remember using Xylogics Annex terminal servers connected to either 16 or 32 individual USR modems. There were stacks of modems, each with its own power brick, serial, and RJ-11 connection.

This was a few years before everything went digital with PRI lines (T1s that let you do digital modems, basically.)


And then years later you could download all of Geocities as a few hundred GB Torrent.


Which makes me wonder: what would it take to match the capabilities of this setup with modern hardware?


Could be done with a single moderate-spec server. Replace the JBOD enclosures with 8x Samsung EVOs in on-server hardware RAID6, and gain a bit in speed because we don't have drives that slow anymore. A Xeon E5 v4 could cover the computational side. Network multihoming would be done the same way, except with 10GBASE-T instead of multiple cables. Run nginx on it and you're good.

Or, one could host the entire thing on S3 with CloudFront at a fraction of the cost.
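
A back-of-the-envelope comparison of that 8-drive RAID6 layout against the RAID10 a reply below prefers (the drive count comes from the comment above; the drive size is an arbitrary example):

    def raid6(n_drives, drive_tb):
        # Two drives' worth of parity; survives any two simultaneous failures.
        return (n_drives - 2) * drive_tb, "any 2 drives"

    def raid10(n_drives, drive_tb):
        # Striped mirrors; half the raw space, survives 1 failure per mirror pair.
        return (n_drives / 2) * drive_tb, "1 drive per mirror pair"

    for name, layout in (("RAID6", raid6), ("RAID10", raid10)):
        usable, tolerance = layout(8, 4.0)    # 8 x 4 TB SSDs (illustrative)
        print(f"{name:6s} usable={usable:4.0f} TB  tolerates {tolerance}")

    # RAID6  usable=  24 TB  tolerates any 2 drives
    # RAID10 usable=  16 TB  tolerates 1 drive per mirror pair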


> Or, one could host the entire thing on S3 with CloudFront at a fraction of the cost.

Or, you could rent capacity somewhere with decent bandwidth prices and pay 2%-20% of CloudFront/S3 bandwidth prices...


RAID 10 or bust. But otherwise correct.


Probably just a couple of HP DL380 G8s with SSD arrays running Linux would run rings around all that kit.

Or a smallish EC2 instance and the whole of the rest in S3...


No need for an instance if there is no dynamic content. Everything in S3!


I was considering an upload gateway and administrative functions. You can't really just expose S3 on that scale.


I offered free hosting on my site for a little while; the legal issues of what people would do with free storage quickly became the biggest problem. We're in a different era now.


Actually, it's only about 20% of GeoCities. Much of it has been lost.


And right there is why Sun was such a hit in the 90s and got crushed in the bust.


Everyone in SV in 1999 used Sun boxes. Racking Ultra-2 Pizza Boxes was being very Internet Professional and money was flowing freely enough to pay for them.


Yep. I was at Netscape in Mountain View around that time, and the data center was mostly full of SGI stuff and Sun stuff. The new datacenter on Ellis opened up, the AOL thing happened, and even more Sun hardware started showing up. E4500s, E450s, you name it. I had an Ultra 5 sitting in my cube just because I could. And then the Intel stuff started showing up in droves. Compaq servers running Linux. And I don't remember seeing a single Sun box show up in that data center after that...


Sun hardware was everywhere in the late 90's / early 2000's. I remember working at a couple of startups that had E3500s (mini-fridge size machines?) and a few E450s. We had Ultra 5's and 10's on the developer desktops.

I have an Ultra 10 rotting away in my basement. It cost a pretty penny at the time, but I haven't booted it up in almost 15 years.


Yes, at BT we were using Sun 4U servers at $40k a pop.


Yes, and now computers are cheap and devs are expensive. The money has to flow somewhere.


Not sure what's the best find, the data center article or the author's boy band-esque photo: http://www.detritus.org/mike/pics/inpark.jpg


Good memories.

Does anyone remember where an Exodus colo facility was in SF around that same time (1999-2001)? That was my first visit to the area and I can't seem to locate the neighborhood now that I live here.



