
I pulled an old friend's website down from the Internet Archive.

He's moved on to the next stage, but I was glad I was able to put his site back up.

It'll be a shame if IA goes down permanently, but we need a decentralized solution anyway.

Having a single mega organization in charge of our collective heritage isn't a good idea.




I have always thought about this. It would be interesting to have users actually store small amounts of redundant info on a device connected to the internet. Very similar to what a torrent does, but with more peers (more data shards than full copies) and fewer seeds, and to try to keep a huge database for everyone. Obviously open source, and it would end up something like Tor, where the maintainers just assist the network with security patches but don't actually have any real "control" (admin dashboard control) over the network at large. We already do something like that with website static file caching, but at a much smaller scale. Obviously the security implications of this would be very hard, but maybe not impossible, to overcome. IPFS comes close, but again it does more seeds than peers.
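
Roughly, I imagine something like this toy Python sketch (a single XOR parity shard just for illustration; a real network would use proper erasure codes like Reed-Solomon, and all the names here are made up):

    def split_with_parity(data: bytes, k: int):
        # Split data into k equal-size shards plus one XOR parity shard.
        # Any single missing shard can be rebuilt from the others, so each
        # peer only holds one shard instead of a full copy.
        shard_len = -(-len(data) // k)              # ceiling division
        padded = data.ljust(shard_len * k, b"\0")   # original length kept elsewhere
        shards = [padded[i * shard_len:(i + 1) * shard_len] for i in range(k)]
        parity = bytearray(shard_len)
        for shard in shards:
            for i, b in enumerate(shard):
                parity[i] ^= b
        return shards + [bytes(parity)]

    def rebuild_missing(shards):
        # XOR of all surviving shards (data + parity) recovers the one marked None.
        present = [s for s in shards if s is not None]
        rebuilt = bytearray(len(present[0]))
        for shard in present:
            for i, b in enumerate(shard):
                rebuilt[i] ^= b
        return bytes(rebuilt)

    shards = split_with_parity(b"some archived page " * 50, k=4)
    lost = shards[2]
    shards[2] = None                       # pretend one peer disappeared
    assert rebuild_missing(shards) == lost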

If anyone knows of something like what I'm suggesting, I'd love to hear about it!


IIRC there were a few storage-based projects that popped up using altcoins to encourage people to offer excess storage space to other randos on the internet. The possibility that you might be storing illegal content might have been what killed it/them.

https://en.wikipedia.org/wiki/Cooperative_storage_cloud gives a few examples, like Filecoin.


In my opinion, IPFS was killed by a few things:

1) Wedding itself to crypto with Filecoin.

2) Terrible performance due to architectural choices (basically: too much pointer-chasing, except every pointer was back out to the DHT).

3) No serious attempts to integrate with existing software distribution strategies.

I think it's still a good core idea.


Its DHT implementation was shit. Ignoring all existing wisdom, it used persistent connections, rated peers, and had far too many special nodes.


Are you, by any chance, named Richard Hendricks?


The main issue that such hosting faces is that it's less efficient and more expensive than just regular centralized servers.


Anything would be better than the current system where you basically just have one source.

Independently run mirrors all over the world, along with snapshots.

Have the occasional fork or two. Say you're from a small town in Northern Illinois. If you have 2 TB of image archives from a defunct local newspaper, it might be good for photography forks even if it wouldn't make sense for the main archive.


Does https://ipfs.tech/ fit the bill?


This was a plot line in Silicon Valley.


I believe it would be possible to cost-effectively build and implement an architecture for a distributed IA backup; this comment contains some notes.

The system asks volunteers about their age, sex, location, and storage details (the model, past use, etc. can be used to predict the durability of a single drive) without sharing most of this data anywhere.

The downloaders are then algorithmically allocated pieces of the archive, e.g. such that there is at least a limited amount of overlap between the pieces, and two people in the same country won't provide redundancy for each other.
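
A toy version of that allocation rule, just to make the idea concrete (the field names and reliability scores are all hypothetical, and a real allocator would also balance load across volunteers):

    from collections import defaultdict

    def allocate(pieces, volunteers, copies_per_piece=3):
        # Give each piece `copies_per_piece` holders, never two from the
        # same country, preferring the most reliable volunteers first.
        holders = defaultdict(list)
        for piece in pieces:
            used_countries = set()
            for vol_id, country, reliability in sorted(volunteers, key=lambda v: -v[2]):
                if country in used_countries:
                    continue
                holders[piece].append(vol_id)
                used_countries.add(country)
                if len(holders[piece]) == copies_per_piece:
                    break
        return dict(holders)

    print(allocate(
        pieces=["piece-0001", "piece-0002"],
        volunteers=[("alice", "FI", 0.9), ("bob", "FI", 0.8),
                    ("carol", "US", 0.7), ("dan", "BR", 0.6)],
    ))
    # each piece ends up with three holders in three different countries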

When a downloader verifies that they have completed the download by submitting SHA hashes of the data (unique per downloader, to prevent fake-download sabotage), the information that these pieces have been downloaded in this or that country, plus an estimate of the reliability of the storage, is added to a public database for the algorithm to use in the future.

Every downloader is then issued a public and private key so that they can submit the hash of their download again once in a while, or just verify that the piece is still there. The reliability estimates (based on storage/hardware details) would be empirically calibrated against data about actual storage failures.

A public counter, estimating how well the archive is currently backed up via this scheme, could be displayed.

For copyright issues, it would be possible to encrypt some of the data, e.g. such that normally borrowable items become readable files only when X% of downloads are pieced together.
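
One standard way to get that property would be to encrypt the item normally and split only the decryption key with a threshold scheme such as Shamir's secret sharing, so the readable file only exists once enough shares are combined. A bare-bones sketch, with a 6-of-10 threshold picked arbitrarily and made-up function names:

    import secrets

    PRIME = 2**521 - 1   # Mersenne prime, comfortably larger than a 256-bit key

    def split_key(key_int, threshold, shares_total):
        # Random polynomial of degree threshold-1 with the key as constant term;
        # any `threshold` points reconstruct it, fewer reveal nothing.
        coeffs = [key_int] + [secrets.randbelow(PRIME) for _ in range(threshold - 1)]
        return [(x, sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME)
                for x in range(1, shares_total + 1)]

    def recover_key(shares):
        # Lagrange interpolation at x = 0 over the prime field.
        secret = 0
        for i, (xi, yi) in enumerate(shares):
            num = den = 1
            for j, (xj, _) in enumerate(shares):
                if i != j:
                    num = num * -xj % PRIME
                    den = den * (xi - xj) % PRIME
            secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
        return secret

    key = secrets.randbits(256)
    shares = split_key(key, threshold=6, shares_total=10)
    assert recover_key(shares[:6]) == key   # any 6 of the 10 shares suffice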

The scheme would be primarily based on existing designs and algorithms, but work roughly as depicted above. I am not an expert on which compression, hashing, and other algorithms should be used, and it needs lots of good work to determine how to avoid errors in the scientific part of estimating the reliability of the downloads, and generally to avoid a situation where it turns out that lots of data was lost when attempting to put the pieces back together again.

Remark (engineering): Empirically validating the correctness of the backup architecture's software by testing it on grids of real hard drives in a single place would probably give safety against catastrophic failure. Even better would be to obtain a large number of old hard drives and SSDs kept in a single place for a long time, to validate that the software works over time.

Remark (integrity): That a downloader actually has the downloads can be verified efficiently by the IA server adding a small part to the piece the downloader has, hashing it again, and requesting the new hash.
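
A minimal sketch of that audit, assuming the server (or a table of challenges precomputed while it still had the data) can compute the expected answer; all names are illustrative:

    import hashlib, os

    def new_challenge():
        # a fresh random nonce per audit, so old answers can't be replayed
        return os.urandom(16)

    def prove_possession(nonce, piece):
        # the volunteer can only compute this if they still hold the piece
        return hashlib.sha256(nonce + piece).hexdigest()

    def audit(nonce, expected_piece, response):
        return prove_possession(nonce, expected_piece) == response

    piece = b"...one archive shard..."
    nonce = new_challenge()
    assert audit(nonce, piece, prove_possession(nonce, piece))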

Remark (redundancy): It may be possible to develop a social program that analyzes whether a volunteer in a certain place can provide more redundancy by buying themselves a hard drive, or by supporting the acquisition of hard drives for volunteers who have proved themselves reliable elsewhere. This is speculative and the benefit may be lower than the risks.

Finally, instead of a "public database" it may be better to use a blockchain of some sort. Not a cryptocurrency, but a blockchain. This is because if the idea is to distribute copies over the world to ensure contingency in case the IA's main architecture collapses, then the more parts of the distributed backup architecture (which should actually not be "the backup architecture" but "a scheme" that no everyday IA decisions rely upon, and that just exists out there) are on a blockchain network run by a "decentralized" system, the more reliable it will be.

My heuristic plausibility analysis:

0. An IA backup would not need to be constantly accessed or changed (this makes storage easier and cheaper, and prolongs the maximum age of the storage).

1. Not all of IA has to be backed up: a distributed backup that successfully recovers 10% of IA in a catastrophe is by all means a great success. (Consequently, prioritization of what might or should be stored should probably be part of the algorithm that decides what volunteers download, and what existing "big" archives already store that overlaps with IA should be taken into account in this analysis.)

2. I recall you estimated 30-40 M USD ballparks for a single copy: a properly led open source project may be able to develop this for free, and a fairly compensated one could cost ~0.1% to 1% of that.

3. The Sia network https://siascan.com/ has space for 7 PB; and that's for storage where one can download their own files at any time, and they have had very little publicity.

4. A 2 TB hard drive costs 50-100 USD, and 20 PB would be 10,000 humans buying one 2 TB hard drive each, which by itself is possible. Hobbyists and organizations may be able to provide even larger capacities.

5. Most IT projects fail, but since lots of the technology already exists, we know what we are doing here, and IA might be able to recruit above-average talent, we can conservatively give the groundwork development a 50% chance to succeed, or 45% without funding.

6. If the development succeeds, there may already be around ~100 potential volunteers. I estimated that 0.1% of IA visitors may volunteer, plus 1% of Hacker News traffic were the project to be mentioned there, plus growth over the first few years and traffic from elsewhere. Perhaps a 75% chance to get 10% of IA backed up by volunteers, given that development succeeds.

7. If that much is backed up, there is perhaps a 5% chance of attaining 200 TB in the next few decades.
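
Rough arithmetic behind point 4 (my ballpark numbers, not official figures):

    ARCHIVE_SIZE_TB = 20_000      # ~20 PB, the ballpark used above
    DRIVE_SIZE_TB = 2
    DRIVE_COST_USD = (50, 100)

    drives_needed = ARCHIVE_SIZE_TB // DRIVE_SIZE_TB
    low, high = (c * drives_needed for c in DRIVE_COST_USD)
    print(drives_needed, low, high)   # 10000 drives, 0.5-1 M USD in hardware
    # and that's one copy with no redundancy: 3x geographic redundancy
    # would mean ~30,000 volunteers each buying one 2 TB drive.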

Conservatively, given that open-source development starts, one gets approx. a 33-38% chance that a 10% backup is achieved, and approx. 1-2% that 100% of what is now in the IA could be backed up. These are of course rather meaningless numbers, but the fact seems to be that, lacking the funding to build a complete backup, IA can best guarantee contingency by starting to build a distributed one. Perhaps this was needlessly many words for a simple proposal.

- X

---

Note: It's probable that at least the NSA has a private full IA backup.


This is why BitTorrent and other P2P solutions were invented, but alas: A. The RIAA, MPAA, and ESA have given these technologies a terrible reputation. B. Nobody likes to seed. Some kind of seeding-based crypto would have been a great incentive if cryptocurrency weren't also demonized by now.


Part of the reason people don't/didn't like seeding is that many residential lines are so terribly asymmetric. If you had 100down/5up, seeding your torrent at a useful speed was often enough to degrade your connection into unusability.


It's called the torrent protocol and it doesn't work; no one wants to spend money and bandwidth hosting a godforsaken movie or book that only a handful of people care about.


Not much money and bandwidth if you aren't on a metered connection. You can share tens of gigabytes or more on a cheap read-only flash drive plugged into a $25 single-board computer that draws way less power than a full PC and can be left sitting there near the router. Just limit its bandwidth in the torrent client and you won't even notice it during online gaming. The client can be as small as the Transmission daemon running headless on one of the many Debian-based embedded distros: all control goes through either the web interface or its remote client, with no monitor, mouse, keyboard, etc., just a small cheap box.

https://www.friendlyelec.com/index.php?route=product/product...

(just an example, as it's way overkill for the task)

https://transmissionbt.com/

https://github.com/transmission-remote-gui/transgui


I see 24 seeders for the entire 72-episode run of the 1991 sitcom "Herman's Head", which was so poorly rated that it's never seen a home media or streaming release; your premise doesn't hold any water at all.


People are pirating comic books and cookbooks from the 30s; there are a lot of people in this world, and if something goes on the web and you tell everyone you put it there, it's pretty much preserved. It's only law enforcement that keeps killing the free availability of everything online, for better or for worse.

With copyright, as individuals we get to trade all of the wonderful stuff already made (and long paid for) for the flood of minute-old shit and sludge inundating us online constantly. It's a bad trade. Maybe copyright should stop encouraging creativity; the answer to how "artists" would get paid post-copyright might be "who cares, quit if you want."

We already have Herman's Head, we don't need any more crap.


I never thought about UBI and copyright - but as soon as you say that, it is immediately obvious to me that when we have some kind of UBI, copyright should be dramatically reduced.


Copyright should be reduced in general. 20 years was already excessive for exclusive control over culture; 200 is just absurd.


I 100% agree. Just pointing out that UBI changes the discourse on this subject.


> With copyright, as individuals we get to trade all of the wonderful stuff already made (and long paid for) for the flood of minute-old shit and sludge inundating us online constantly.

What does this have to do with copyright? People post sludge online even in chaotic meme environments where copyright is irrelevant and people constantly take and repost each others' stuff.


It does work, when you don't notice it. We need sane limits and permanent seeders. This is why so many regular people get hit with ISP notices: they don't know they've been seeding Captain America for the last six months, every time they started their PC.


Yup. If browsers built in support for magnet links and (on desktop) defaulted to seeding with some capped bandwidth, then a lot of centralized hosting platforms would become unnecessary.


You can build something very similar with WebRTC. Browsers already have P2P networking capability, it's just not immediately interoperable with BitTorrent clients. Standardizing some sort of BitTorrent over WebRTC bridge and adding it to BT clients would fix this problem.

That being said, please do not host content this way. P2P blows away the already thin privacy guarantees that the web provides. Anyone seeding the site gets the IP addresses of everyone on that site, and can trivially correlate that with other sites to build detailed dossiers on, if not individual people, at least households[0] of people. After all, that's how the MAFIAA[1] sent your ISP DMCA scare letters back in the 2000s P2P wars.

[0] IPv4 CGNAT would frustrate this level of tracking, but IPv6 is still subnet per subscriber. Note that you can't use individual v6 addresses because we realized very early on that the whole "put the MAC in the lower 64 bits of the address" thing was also a privacy nightmare, so IPv6 hosts rotate addresses every hour or so.

[1] Music And Film Industry Association of America, a fictitious merger of the MPAA and RIAA in a hoax article


> You can build something very similar with WebRTC.

Isn't that exactly what WebTorrent is?


I hadn't considered the privacy implications. For this to be workable, you'd need to pair it with near-ubiquitous use of some anonymizing overlay network.


IIRC the Opera browser tried that.


If the whole world has bandwidth available for TikTok, it can make the same available for sharing torrent files.


I've been seeding some unpopular torrents for ten years (and would have done so for even longer if I hadn't changed torrent clients a decade ago). "No one" is too strong a word, as usual with these absolutist things.


Agreed, I shouldn't have said no one. But you've got to recognize that some torrents are more popular than others.

I would have absolutely no trouble downloading the latest Marvel movie, but if you are looking for some old Soviet, Iranian, or even old American movie, then you're out of luck. I've never seen more than 0 seeders on thepiratebay.


In addition to the costs, I'd say it's also that no one wants to risk getting sued like the IA is getting.


I keep wanting to do this for old sites, to make a kind of personal mini IA. Besides just using wget or curl, any tips for pulling down usable, complete websites from IA?


Agreed, especially an organization that has already shown itself to not always be impartial.


A decentralized solution? Doesn't that scream Internet Archive on the blockchain? What could go wrong.


This is one of the very few real use cases I can think of for the blockchain.


Torrents, maybe.



