How Websites Die (wesleyac.com)
144 points by herbertl on June 29, 2022 | 77 comments



I'd like to call out the somewhat related problem of website rot. Meaning, the website is still online and once worked perfectly, but it becomes increasingly dysfunctional due to technical deprecations.

The soft obligation to use HTTPS these days has deranked old HTTP-only websites in search, making them hard to find. These websites are also "defaced" with browser warnings, and some subresources may not load at all.

Embedded maps no longer work, since Google regularly breaks their API.

Facebook login and other FB plugins no longer work, since they require a yearly checkup of your account, and there's now a requirement to have a privacy policy.

Those are just some examples of websites partially breaking through no fault of their creators, if you agree that the web should be backwards-compatible.


Hi, I happen to work on Google's Maps JavaScript API, and we actually take great pains to avoid breaking our APIs as much as possible. We take very seriously the implications that breaking changes can have for websites all across the internet, and know that many sites are not under active development.

This page covers the breaking changes from the last few years: https://developers.google.com/maps/deprecations#completed_de... (Keep in mind that the top portion of the page covers features still in the deprecation period - meaning they still work. And that this page covers deprecations in other APIs like Android and iOS.)

My favorite: the deprecation period for v2 of the Maps JS API lasted 11 years after the introduction of the v3 Maps JS API.

Happy to hear about experiences to the contrary, but I thought a little insight might be appreciated.


Thanks for responding, you're a good sport.

Indeed, I'm looking at this over a long time frame. From recollection, the main breaking changes I'm aware of are:

1. The change from v2 to v3, like you mentioned.

2. The new requirement to always include an API key.

3. The visual update to touch-sized controls, which especially distorts tiny embedded maps.


Appreciate the feedback. #2 is actually a pretty interesting one - while clearly a change in the API’s output, we were careful to make it so that maps without API keys actually still work (though they do get a heavy visual indication that they need to get an API key).

That being said, when we are talking about such large timeframes, is it fair to portray that as “regular” breaking changes?


Yes, within the context of the type of sites I was discussing. You already indicated that you know they are not actively maintained. Knowing that, it doesn't really matter if a breakage happens every year, 3 years, or 5 years. There's no point at which it becomes "OK", if you believe in the long-term preservation of content.

I'm not picking on maps specifically. It's a compounding effect of many individual parts slowly breaking. I would even agree that you did a reasonable job of maintaining backwards compatibility, but that doesn't matter...broken is broken. Content gone is content gone.


Y'all have done great work. I have a page that uses the Maps API - haven't touched it in ~7 years. No problems.


also, even if those older http sites get a certificate, any embedded scripts that point at http URLs will refuse to load and break the page, even if those URLs are also available over https. especially hard to fix if you have user-generated HTML on there.

Same for http download links on https websites: they don't work anymore.
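(For what it's worth, if the scripts really are available over https, one low-effort mitigation is asking the browser to upgrade those requests itself, e.g. with a CSP directive in the page head. A minimal sketch, assuming you can edit the page templates even if you can't touch the user-generated URLs:)

    <!-- ask the browser to fetch http:// subresources over https:// instead -->
    <meta http-equiv="Content-Security-Policy" content="upgrade-insecure-requests">

(This only helps subresources on your own pages, of course; it does nothing for content that is genuinely gone.)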

anything that used cookies on embedded resources will also be broken because of the missing SameSite attribute (same for third-party cookies, obviously)
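(for context, browsers now default to SameSite=Lax, so the third party serving the embedded resource would have to opt back in explicitly, roughly like the header below; "sessionid" is just a placeholder name:)

    Set-Cookie: sessionid=abc123; Secure; SameSite=None

(which of course only happens if that third party is still around and maintaining things)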

if google really goes through with removing alert/prompt/etc from their browser like they said they were planning to a while back, so so many things will break


I'm uniquely bothered by this problem as I visit many such dysfunctional sites.

I'm active in the (hobbyist) field of documenting species. There are thousands upon thousands of websites created by amateurs, containing unique niche content. For example, somebody might have made it their lifelong hobby to document every species of bee in their territory.

It's a fragmented mess of incompetently produced websites, but I find it incredibly charming and in the spirit of the original web. Above all, it is their content that has lasting value.

The people behind it are good, generous. That's why it makes me so angry when their work is cast aside like this. Their sites are not just technically breaking; in many cases they're simply disappearing from search results altogether.


I wonder if manually curating a list of such valuable websites would be a useful resource for constructing search engine algorithms.

I mean, if I was building a search engine algo, I wouldn't necessarily know how to find the websites you describe, much less optimize for them.


you should make copies of those sites before they disappear if you care. maybe don't rehost them without the author's consent though


Notification prompt is 10x more annoying than alert ever was.


But they are the fault of the creator -- first, by including content entirely reliant on the whims and generosity of 3rd parties wholly outside the author's control. And second, by using that content and not doing the minimum needed to maintain it. Would you also say it is "no fault of the creator" if s/he did not renew the domain registration?

It's a bit like mowing your lawn once and assuming you're entitled to a perfectly manicured lawn forever.


That is a perspective with zero empathy.

These people are amateurs with near-zero resources generously sharing their content with the world. They may have all kinds of reasons for being unable or unwilling to continuously update their site.

The bigger point is that when you create a website according to web standards, it should keep working indefinitely. As for 3rd party integrations, especially when they're enormous (Google, Facebook) they should do the maximum possible to not break anything. The very point of an API is that it's a contract.


Some of my favorite websites about cycling were created by now deceased webmasters. Luckily they are mirrored, but I’m very thankful that they used regular old HTML and images and aren’t rotting after a mere 11 years.


Wouldn’t all this be a non-issue with static websites? We should have more static sites.


No. Static vs. dynamic is a concern regarding how HTML / CSS gets rendered by the backend. HTTPS certificates, embedding of 3rd party scripts, outdated HTML / JS / CSS, and so on are concerns that live entirely in the browser.

Whether you use an SSG or a CMS, you still need to do upkeep on all of those: make sure your certificates are up to date, make sure you have the latest embed scripts, make sure your site isn't built on top of brittle APIs and frontend frameworks, make sure your site keeps following changing SEO practices, etc.

On another note, if you use a CMS - e.g. WordPress - the availability of your site is directly tied to the operational availability of the CMS and the stack it depends on. That means upkeep and maintenance of the CMS become immediate concerns. If you use an SSG that only spits out HTML / CSS, that's far less of a concern: if your SSG breaks, the site itself doesn't break down.

Of course, YMMV in the age of hybrid solutions, where a static layout and a popular JS framework are sent to the client, which then fetches fragments from a backend via GraphQL APIs -- which implies a coupling between frontend and backend.


> due to technical deprecations

This struck me as a dog whistle, and the immediate follow-on paragraph confirmed that suspicion. HTTPS enforcement is a major migration, yes, but other than IPv4 (sadly a long way off) it's the only "technical deprecation" of any kind I can think of that websites have dealt with. Implying these kinds of deprecations are commonplace, rather than 2 in 30-40 years, is a little odd.

> Google [...] Facebook

Embedded maps & Like/Share buttons are not "technologies", they're product offerings from service providers. Businesses change their offerings over time; this is common in all industries & not a problem specific to websites / the internet / tech.


If you want to get that technical, then even https enforcement is the changing of a product offering.


If you want to get that technical, sure, why not. I don't understand what point you're trying to make though.

Ultimately, ask yourself this question: if I set out to make a website in 1992 that I hoped would be around in 2022 after zero maintenance, would it be possible, and if not, why not? The only reason it wouldn't be is that my visitors are starting to enforce HTTPS. There are literally no other blockers. The backward compatibility within the web as an ecosystem is extreme.

Going back to the original point - if I'm building a long lasting website in whatever year Google Maps embeds became available, should I reasonably expect a map embed pulling from someone else's server to work forever?


> Implying these kinds of deprecations are commonplace, rather than 2 in 30-40 years, is a little odd.

Large parts of the web are powered by npm.

The vast majority of those sites, if not actively developed, will not be able to build within a year's time.


That's the same for literally any other programming language ecosystem. The discussion here was about the web being distinctly more rot-inclined as an ecosystem. I'm purely arguing that it's absolutely not. Comparatively it's much easier to avoid rot on the web than it is off the web.

> will not be able to build within a year's time

Fwiw, and I know this is a nitpick, but anything built purely on open source libraries should be technically buildable forever. Especially if you keep it local (e.g. cached node_modules for npm)
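(A rough sketch of what "keep it local" can look like for npm, assuming a package-lock.json is checked in; the cache path is arbitrary:)

    # populate a project-local cache once, while the registry still serves these versions
    npm ci --cache ./.npm-cache

    # later rebuilds can then skip the network entirely
    npm ci --offline --cache ./.npm-cache

(Or, more crudely, just commit node_modules itself.)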


That's simply not true. I have WordPress sites I haven't touched in years and they all run auto updates.

Sure, I guess it's technically avoiding rot by updating, but the fact that I don't need to touch it makes the point moot.

> Fwiw, and I know this is a nitpick, but anything built purely on open source libraries should be technically buildable forever. Especially if you keep it local (e.g. cached node_modules for npm)

"Technically" isn't very useful when my server needs to update npm/node for security reasons.


I've been working on this problem for a while. Website upkeep is hard to quantify, but basically every disk fails and every operating system eventually needs a serious upgrade. The timeframe that a system can run continuously is not that long compared to the timeframe that information is relevant. So the most lightweight way to keep something up and running is to make it trivial to port to many hosting configurations by simplifying the toolchain needed to rehost it. (Note that humans are part of that workflow, if it's a company)

I've written a manifesto about making a commitment to keep websites online and maintained for 10-30 years, for people who are maintaining web content: https://jeffhuang.com/designed_to_last/

And on the flipside (from a user's point of view), I've also been working on a background process that automatically captures full-resolution screenshots of every website you visit, creating your own searchable personal web archive: https://irchiver.com/

I've personally been trying to make a commitment to keep my web projects and writing online for 30 years. My original internal goal when I started thinking about this was to outlast all the content on Twitter, Google+, and facebook.com. One of those has already been met, kind of sadly.


I think it's less a technological problem and more just that everyone who used to care about that site or its content no longer does. Or the company behind it has gone out of business -- who is going to maintain a website for a defunct organization, and why would anyone want to?

There's no obligation for a person to maintain anything longer than he wants to. Putting a blog online is not a lifetime commitment. Interests change, or you simply realize nobody much cares about your online musings, and you move on to other interests.


When a company goes out of business, it will typically also try to sell off all its assets, and a good domain name might be such an asset.


But what can you do when you do care, to make your website as durable as a printed book?


Keep it as simple as possible. Static HTML that can be dropped on any web server.

HTML/HTTP may some day be obsolete, but will likely be around longer than anything built on complicated javascript frameworks or tightly tied to current web browser technologies.


Depends on if you want it to survive without you maintaining it. If so, something like a bog-standard Wordpress blog hosted by them might work until they decide they don't want to bother anymore.

Otherwise, some setup using S3-as-a-website or GitHub pages may work, but those also depend on the company maintaining that service.

If your entire website can be dropped into a ZIP file and served anywhere, you have a greater chance of it surviving, especially if the Internet Archive got a copy at some point.

But if you die and nobody cares about the content, eventually it will disappear.


> If your entire website can be dropped into a ZIP file and served anywhere, you have a greater chance of it surviving, especially if the Internet Archive got a copy at some point.

Well, your entire website can be a zip: https://redbean.dev/
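(Usage is roughly this, assuming you've already downloaded the redbean executable from redbean.dev and named it redbean.com:)

    # redbean is itself a valid zip file, so you add your site to it with plain zip
    zip redbean.com index.html style.css
    ./redbean.com    # serves the files packed inside the executable

(So the "archive" and the "server" are literally the same file.)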


One step in the right direction is to keep the representation of information simple, or to transform it to something simple for long-term archiving.

Frameworks and browsers change. External services change. But, unless it's stored with no backup and the medium fails, an ASCII textfile composed 50 years ago is still readable today, and will likely still be readable in 100 years. And thanks to UTF-8 being pretty much ubiquitous these days, we don't even have to limit ourselves to ASCII any more.

I am not saying to make every page a textfile. I am merely saying that one step to preserve digital information for generations yet to come is to represent it in simple and robust formats. And if simple is not an option (e.g. media will probably need to use a compression format), then well-documented and open formats should be used.


You could literally print it in a book. With links suffixed by a page number [915].


Point archive.org at it (and make a donation).


It’s not uncommon to hear of rack servers with several years of continuous uptime. I wouldn’t be surprised if you could keep a website online for a decade without touching anything by using an LTS distro, enabling unattended upgrades, and running something like nginx.
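(On a Debian/Ubuntu LTS box, that setup is roughly the following sketch, give or take your distro's defaults:)

    sudo apt-get install nginx unattended-upgrades
    # turn on automatic security updates
    sudo dpkg-reconfigure -plow unattended-upgrades
    # drop the static site into the default web root
    sudo cp -r mysite/* /var/www/html/

(The remaining risks are mostly hardware failure, certificate renewal, and the distro eventually going end-of-life.)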


> system font stack that matches the default font to the operating system of your visitor

If the list just matches the default font of the OS, is there a difference between using such a stack vs. not specifying a font? Wouldn't the browser naturally use the OS's default if font-family is unset?
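(For reference, the kind of stack being discussed looks something like the CSS below. The subtle difference is that an unset font-family usually falls back to the browser's default, which is typically a serif like Times, rather than the OS's UI font:)

    /* a typical "system font stack", for illustration */
    body {
      font-family: system-ui, -apple-system, "Segoe UI", Roboto,
                   "Helvetica Neue", Arial, sans-serif;
    }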


> I've also been working on a background process that automatically captures full-resolution screenshots of every website you visit, creating your own searchable personal web archive:

How are screenshots searchable? They aren't plain text. You can't grep them.


Why not, though? On macOS you can select text in bitmaps nowadays. So the tech is there to make a grep for pictures.


The tool captures screenshots in addition to text.


That seems awfully similar to archive.org's Wayback Machine. I do like to see all these archival projects though; they are certainly worthwhile.


irchiver captures text on the page, and separately OCRs the screenshots (specifically, the screenshot from your viewport). So you can search just what was shown on the page, or what was in the page. Both techniques have pros and cons.

While archive.org is fantastic, it can only capture pages that are both 1) publicly accessible (i.e. no social media content) and that it happens to crawl, and 2) static (you're out of luck if the content you want is loaded dynamically, or changes depending on user input).


IIRC archive.org does save the JS and things it downloads, so you can replay them when you visit the archived site later.


I guess the difference in this case is that the JS on the Web Archive relies on future browsers being backwards compatible, whereas irchiver relies on much less to stay timeless, which is good. Although I don't think JavaScript will ever get a major update (as in breaking compatibility), I believe relying on that is not a perfect way to archive web content. This kind of backwards-compatibility breakage is something we have seen before with the deprecation of Adobe Flash, and it could theoretically happen elsewhere in the web stack.


Agreed, I wish they would archive the DOM too, like archive.is does, instead of just the requests.


Interested personally in the font stack point. The system font stacks listed focus on font styles more than individual typefaces. If I want, say, Bookman, is there any long-term harm in using a font stack tailored to that font?


There isn't, no, and there's no particular harm in self-hosted web fonts when it comes to long-term stability. Not only is it relatively unlikely that future browsers will deprecate web fonts but not anything else in current CSS (in which case there's a bigger issue than your fonts), it's even less likely that they'll do so in an entirely incompatible fashion. Assuming you've designed your font stack with decent fallbacks -- which you should -- then it's just like any other font stack, e.g., you may not be presenting in the typeface that you really want to, but nothing's going to break.
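(For illustration, a hypothetical Bookman-first stack with progressively safer fallbacks might look like this; the exact font names available will vary by platform:)

    /* hypothetical example, not a recommendation */
    body {
      font-family: "Bookman Old Style", "URW Bookman", Bookman,
                   Georgia, serif;
    }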

("Web fonts are okay, actually" is evidently the windmill I will keep tilting at on Hacker News.)


irchiver seems incredible. I hope there's a comparable product for other OSes one day.


I’ve been working on a very similar thing which runs on Windows, Mac, and Linux: https://apse.io


One of my ideas is Wantsfiles: files at the root of domains. A Wantsfile contains information about what you want and the price you're willing to pay. An order-matching system matches your Wantsfile.txt to Sellfile.txt files and transacts on your behalf.

So I could put a link to my tgz in my Wantsfile and offer $5 a month to host the tarball. There we go: a perpetually hosted website.

You could call it autohosting.


disk failure and OS upgrades are a non-issue with the right technology stack... but then you have other problems (a more complicated setup)


Hm interesting, I read over your 7 guidelines and I would say I agree about 50%.

"So the most lightweight way to keep something up and running is to make it trivial to port to many hosting configurations by simplifying the toolchain needed to rehost it" -- I agree with this, although I would use the words "standards" and multiple implementations. The linked article doesn't appear to emphasize this.

1. Return to vanilla HTML/CSS

Again I feel the relevant issue is "standards", multiple implementations, and the fallback option of "old code" like GNU coreutils (i.e. taking advantage of the Lindy Effect). Not just HTML/CSS (which certainly meets all of those criteria).

I thought about this when designing https://www.oilshell.org/ and its toolchain.

- The site started in Markdown but is now standardized on CommonMark [1], which has multiple implementations. So I don't see any reason to stick to HTML/CSS.

- The tools are written in Python and shell [2]. Both languages have multiple implementations. (Ironically, Oil is a second implementation of bash! My bash scripts run under both bash and Oil.)

- Python and shell rely on Unix, which is standardized (e.g. Linux and FreeBSD have a large common subset that can build my site).

This is perhaps unconventional, but I avoided using languages that I don't know how to bootstrap like node.js (has a complex JIT and build process), Go (doesn't use libc) or Rust (complex bootstrap).

On the other hand, C has multiple implementations, and Python and shell are written in C, and easy to compile. I'd say Lua also falls in this category, but node.js doesn't.

I feel like this is at least 60% of the way to making a website that lasts 30 years. The other 40% is the domain name and hosting.

This is a pretty big site now; you can tar a lot of it up and browse it locally (well, it's true you may have to rewrite some self-links).

Of course, this is my solution as a technical user. If you're trying to solve this problem for the general public, then that's much harder.

[1] https://www.oilshell.org/blog/2018/02/14.html

[2] https://www.oilshell.org/site.html

2. Don't minimize that HTML

Mildly agree if only because it's useful to be able to read HTML to rewrite self links and so forth. Tools don't need it, but it's nice for humans.

3. Prefer one page over several

If you're allowed to use Python and shell to automate stuff, then this isn't an issue. I suppose another risk is that people besides me might not understand how to maintain my code. But I don't think it needs to be maintained -- I think it will run forever on the basis of Unix, shell, and Python. Those technologies have "crossed the chasm", whereas I think the jury is still out on others.

4. End all forms of hotlinking

Yes, my site hosts all its own JS and CSS.

5. Stick with native fonts

Yes, custom fonts have a higher chance of being unreadable in the future. I just use the browser's font preference to avoid this issue. It's more future proof and in keeping with the spirit of the web (semantic, not pixel perfect).

6. Obsessively compress your images

Agree

7. Eliminate the broken URL risk

I think too few people are using commodity shared hosting ... I've been meaning to write some blog posts about that. I use Dreamhost but NearlyFreeSpeech is basically the same idea.

It's a Unix box and a web server that somebody else maintains. I absolutely don't care about CPUs, disk drives, even ipv4 vs. ipv6, and I've never had to.

The key point is writing to an interface and not an implementation. Commodity shared hosting is a de facto standard. The main difference between a tarball of HTML and a shared hosting site is that, say, "index.html" is always respected as /, and a few other minor things.

So I expect Heroku and similar platforms to come and go, but the de-facto standard of a shared hosting interface will stay forever. It's basically any Unix + any static web server, e.g. I think OpenBSD has a pretty good httpd that's like Apache/Nginx for all the relevant purposes.

Github pages also qualifies.

So I guess this is adding sort of a programmer's slant to it. To be honest it took me a long time to be fluent enough in Python and shell to make a decent website :) Markdown/CommonMark definitely helps too. I had of course made HTML before this, but it was the odd page here and there. Making a whole site does require some automation, and I agree that lots of common/popular tools and hosting platforms will rot very quickly. (and like you I've seen that happen multiple times in my life!)

And I think what your guidelines might be missing is a guideline on how to make "progress". For example CommonMark being standardized is progress that has happened in the last 10 years. You don't want to be tied to the old forever. At some point you have to introduce new technologies, and the way to do that is once there's wide agreement on them and multiple implementations. (just like there's wide agreement on HTML)

I think there is / can be progress in dynamic sites too, so you don't have to stick to static!


I’d quibble over at least CommonMark and Python.

CommonMark is not good for flawless longevity: it’s not as bad as non-CommonMark Markdown, but it’s still not fully settled, implementations vary in what HTML they will allow through, and almost no one actually stops at CommonMark -- most implement other extensions, which will normally break the meaning of at least some legitimate content.

Python is risky, even if you don’t use anything outside the standard library: consider what happened with Python 2.


I use the CommonMark reference implementation, which I assume stops at CommonMark :) I just download their tarball, and it's pure C, tiny, and very easy to use e.g. from shell or Python: https://github.com/oilshell/oil/blob/master/soil/deps-tar.sh...
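(To illustrate the "easy to use from shell" part, a minimal sketch, assuming the cmark binary is built and on PATH -- this is not the actual oilshell toolchain:)

    # convert each CommonMark source file into an HTML fragment
    mkdir -p out
    for f in posts/*.md; do
      cmark "$f" > "out/$(basename "${f%.md}").html"
    done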

Nothing happened to Python 2! I still use it, and it's trivial to build and run anywhere, even if distros drop it. Just download the tarball and run ./configure and make, which I've done hundreds of times for various purposes.
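(For reference, the whole exercise is roughly this, using the final 2.7 release; the install prefix is arbitrary:)

    curl -O https://www.python.org/ftp/python/2.7.18/Python-2.7.18.tgz
    tar xzf Python-2.7.18.tgz && cd Python-2.7.18
    ./configure --prefix="$HOME/opt/python2" && make && make install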

(The same is not true for node.js, Go, or Rust, which as mentioned is one reason I don't use them for my site. If people stopped making binaries for them, I'd be lost.)

Ironically, if you want something that will last 30 years without maintenance and upgrades, Python 2 is better than Python 3. (I use both; Python 3 is very good for many/most things.) There are memes about "security" mostly based on misunderstandings IMO.

As you point out, the real problem is PIP packages, but that's why my site doesn't depend on any. Or if it does, I again make sure I can download a tarball, and not rely on transitive dependencies and flaky solvers.

The Python 2 stdlib is definitely good enough for making a website. It's not good enough for bigger apps, but it's great for simple website automation.

----

The higher level point is that you can always "plan for the future" at the expense of the present. IMO avoiding things like CommonMark and Python will just make your site worse right now, which defeats the purpose of preserving it in the future. So there has to be a balance against extreme conservatism, and there has to be a way of making progress (new standards) while not succumbing to bad fashions. Likewise I think Oil looks like a "retro" project to some, but it does a lot of new things, and that is the whole point.


> (The same is not true for node.js, Go, or Rust, which as mentioned is one reason I don't use them for my site. If people stopped making binaries for them, I'd be lost.)

Thankfully, this no longer appears to be the case for Rust, thanks to mrustc [0], a compiler that can build a working Rust 1.54.0 toolchain (released 2021-06-29) from source. It requires only a C++14-compatible compiler and some other common tools; I've just verified that its build script works with no problems on my Ubuntu machine. To be safe, you'd want to specify exact dependency versions for everything (or better yet, vendor them locally), since the big crates all have different policies for when they break backward compatibility with older compiler versions.

[0] https://github.com/thepowersgang/mrustc


Yes definitely, it's great that progress has been made on this front. There has to be a middle ground between "web sites rotting after 5 years" and "POSIX forever", etc. The former is the status quo, and the latter doesn't produce very much.

I looked back at some early wiki pages of research for Oil, and it is surprising how many links are broken, and a few of them were on niche/valuable subjects.


It comes down to the person running the website has to care. That's it. It doesn't matter how simple it is if the person doesn't care.

In my own case, I've been running my own website for 24 years now [1]. The URLs I started out with have remained the same (although some have gone, and yes, I return 410 for those) and the technology hasn't changed much either (it was Apache 24 years ago, it's still Apache today; my blog engine [2] was a C-based CGI program, and it's still a C-based CGI program). The rest of the site is static, and there's no Javascript (except for one page). I can see it lasting at least six more years, and probably more. But I care.

[1] Started out on a physical server (an AMD 586) and a few years later on a virtual server.

[2] https://github.com/spc476/mod_blog


> It comes down to the person running the website has to care.

Personally I think it is a little more nuanced; in particular, I think what matters is the relationship between how much someone cares and how much effort is required to keep the website online.

If your website is super simple you don't need to care about it very much (though you do need to care at least a little bit). On the other hand, if everyone who works at Google suddenly quit tomorrow because they didn't care, the stack of cards would fall over very quickly, because it's a lot of work maintaining millions of servers.


My first domain “expired” in an unexpected way after 18 years. I got a .eu.org because I believed that eu.org would be more stable than a commercial provider. I used the same not-for-profit DNS provider until they were commercially acquired and the parent company shut down the old nameservers.

Now I’m locked out: eu.org does not respond to inquiries for switching nameservers, and my account predates the auth system. While my phone number is the same, auth reset does not work with phone.

It would have been fun to retain the same domain forever, but stuff breaks, people die, and things crumble.


> I’ve often thought about getting together with some friends to pay into a fund to house our websites after we die. I don’t think setting that up would be too hard — the math around insurance policies of this sort is quite simple — I mostly haven’t tried to set something like this up just since it’s a pretty morbid ask. But, if you’d be interested, maybe reach out to me?

> Our ghosts could live forever, if we help each other.

I love this idea and would gladly assist in the effort, let's set it up :)


the year is 2122 and all the "good" (short, memorable) DNS names are owned by ghosts. in the grim dark future, there is only joe_smith_from_minneapolis_born_2055.com


Underscores aren't technically valid in domain names ;D

That minor implementation detail aside... You make a good point. Perhaps MD5 or SHA256 hashes will become a last resort for domain names? Haha, no.


Or maybe UUIDs...and the typical printed representation of those uses hyphens, which are valid in domain names ;)


DNSv6 will emerge and replace the old DNS system


Previously <https://news.ycombinator.com/item?id=31047818>

-----

From the NearlyFreeSpeech.Net FAQ <https://www.nearlyfreespeech.net/about/faq#Interest>:

> Q: Do I get interest on my deposit? A: No[, but...] We periodically reevaluate this situation, because we think a web account that runs forever purely off of its own interest is a pretty cool idea.

NFSNet also has an interesting part in their FAQ in response to the question If I think services you host are currently unavailable due to lack of funds; is there anything I can do?, they outline a process whereby third-parties can fund a hosted service by creating an account themselves, depositing funds into their own account (NFSNet services are prepaid instead of billed after the fact), and then submitting a manual (but free) request to transfer those funds to the original accountholder based on the service's domain name. I've always thought this was interesting because in theory someone could set up a community, disappear, and then the community could step up to keep it funded long enough for the person to get out of the hospital/be rescued at sea/etc, so long as the infrastructure is solid enough to remain operational without being attended to (not vulnerable to exploits, etc.)

They've also got a policy where if the member who operates the service is a willing participant, they can publish their NearlyFreeSpeech.Net account ID and have donors add funds to cover 100% of service costs via automated transfers. <https://www.nearlyfreespeech.net/about/faq#Lifeboat>


> you may attempt to archive it, but should you wish to avoid sadness down the line, you should accept now in your heart that all archives will eventually succumb to the sands of time.

Enjoyed this.


You could etch your data and website onto copper plates and launch them into the depths of space; this will likely last nearly until the heat-death of the universe!


and simultaneously it will be lost immediately


And here we see the difference between available and accessible :)


Sad and depressing when things die, especially without notice, but that's life.

I have a blog, https://langsoul.com, that could die anytime. After all, sites are a lot of work, and only a few make enough to be worth continuing.

Not to mention, if I were to suddenly get ill, what then? Do I write into my will to launch a farewell blog post?

I think there was this internet cult whose site has been maintained for over 20 years since the leader's death. It requires surviving cult members to keep it alive.

Unfortunately I don't have cult members, nor funds to start a trust to keep the site going.


I think the cult you are referring to is Heaven's Gate: https://www.heavensgate.com/


Unstoppable Domains solves this problem by making domain purchases permanent and stored on a decentralized blockchain - although that brings some problems of its own - and IPFS solves that by not requiring a central server to stay online to serve the content, although it does require that _someone_ be interested in serving the website at all.


This article isn't about websites going offline. It's about websites dying of 'natural causes', meaning the author or site manager lost interest in updating and maintaining the site.

There's much more to the soul of a website than the datacenter where it's hosted. I doubt if the handful of mirror.xyz 'decentralized' sites I see popping up will still exist in 5 years' time.


>blockchain

>solves this problem

Sure. Absolutely. Probably cures cancer too.


Show me where I can buy a permanent domain directly from ICANN and be sure that it will be stored forever without ever relying on a single company or system to keep working and I will suggest it over any blockchain solution immediately. The problem is that it does not exist and it probably never will, and thus this is the next big thing.

Also relevant LOL (although you're right when it comes to blockchain and cancer, this is nonsense, I just thought it was funny): https://www.independent.co.uk/news/business/business-reporte...


This really makes me want to redesign my site with minimal external dependencies and clear documentation.


There have been some attempts to capture sites that became ghosts or disappeared completely. From the dotcom bubble through 2008, Steve Baldwin (co-author of Netslaves) had that as a side project.

https://www.disobey.com/ghostsites/

Our Incredible Journey still does this, with more snark and humor.

https://ourincrediblejourney.tumblr.com/


A very nice read with some poignant thoughts and quotes on digital impermanence. Life on the Internet is nasty, brutish and short.


The automated URL # fragments on highlights for this website are a cool but weird feature.


I personally sunset a handful of websites this year, and only kept a handful to try to dedicate time to.

The new site, which I'm writing as open source, is called Make Post Sell (https://www.makepostsell.com).

I'm hoping to restore a bunch of that content as shops in this multi-tenant SaaS.



