I have always thought about this. It would be interesting to have users store small amounts of redundant info on a device connected to the internet. Very similar to what a torrent does, but with more peers (more data shards than full copies) and fewer seeds, and try to keep a huge database for everyone. Obviously open source, and it would end up something like Tor, where the maintainers just assist the network with security patches but don't actually have any real "control" (admin-dashboard control) over the network at large. We already do something like that with static file caching for websites, but at a much smaller scale. The security implications would obviously be very hard, but maybe not impossible, to overcome. IPFS comes close, but again it has more seeds than peers.
if anyone knows something like what I'm suggesting, I'd love to hear about it!
IIRC there were a few storage-based projects that popped up using altcoins to encourage people to offer excess storage space to randos on the internet. The possibility that you might be storing illegal content might be what killed it/them.
Anything would be better than the current system where you basically just have one source.
Independently run mirrors all over the world, along with snapshots.
Have the occasional fork or two. Say you're from a small town in northern Illinois. If you have 2 TB of image archives from a defunct local newspaper, it might be good for photography forks even if it wouldn't make sense for the main archive.
I believe it would be possible to cost-effectively build and implement an architecture for a distributed IA backup; this comment contains some notes toward that.
The system asks volunteers about their age, sex, location, and storage details (the model, past use, etc. can be used to predict the durability of a single storage device) without sharing most of this data anywhere.
The downloaders are then algorithmically allocated pieces of the archive, for example such that there is at least a limited amount of overlap between the pieces, and such that two people in the same country won't provide redundancy for each other.
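A minimal sketch of what that allocation could look like, assuming a simple greedy rule (prefer the most reliable volunteers, never place two replicas of a piece in the same country). All of the field names, the `replicas` parameter, and the `piece_size` value are illustrative assumptions, not an existing IA interface:

```python
# Greedy piece allocation sketch: replicate each piece across `replicas`
# volunteers, preferring high predicted reliability and distinct countries.
from dataclasses import dataclass, field

@dataclass
class Volunteer:
    vid: str
    country: str
    reliability: float                       # empirically calibrated estimate, 0..1
    free_bytes: int
    assigned: list = field(default_factory=list)

def allocate(pieces, volunteers, replicas=3, piece_size=2 * 10**9):
    assignments = {}
    for piece_id in pieces:
        countries_used = set()
        # Most reliable volunteers first, skipping countries already covering this piece.
        candidates = sorted(volunteers, key=lambda v: v.reliability, reverse=True)
        chosen = []
        for v in candidates:
            if len(chosen) == replicas:
                break
            if v.country in countries_used or v.free_bytes < piece_size:
                continue
            chosen.append(v)
            countries_used.add(v.country)
            v.free_bytes -= piece_size
            v.assigned.append(piece_id)
        assignments[piece_id] = [v.vid for v in chosen]
    return assignments

vols = [Volunteer("a", "FI", 0.90, 2 * 10**12), Volunteer("b", "US", 0.80, 2 * 10**12),
        Volunteer("c", "FI", 0.70, 2 * 10**12), Volunteer("d", "BR", 0.95, 1 * 10**12)]
print(allocate(["piece-001", "piece-002"], vols, replicas=2))
```

A real allocator would also weight pieces by priority and account for what other large archives already mirror, as discussed later.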
When a downloader verifies that they have completed the download by submitting (unique, to prevent fake-download sabotage) SHA hashes of the data, the information that these pieces have been downloaded in this or that country, plus an estimate of the reliability of the storage, is added to a public database for the algorithm to use in the future.
Every downloader is then issued a public and private key so that they can submit the hash of their download again once in a while, or simply verify that the piece is still there. The reliability estimates (based on storage/hardware details) would be empirically calibrated against data about actual storage failures.
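A minimal sketch of the keypair idea, assuming Ed25519 keys from the third-party `cryptography` package; the volunteer-ID salt is my own illustrative detail for making each reported hash unique to one identity, not something specified above:

```python
# The volunteer signs the SHA-256 hash of (piece || volunteer id), so a
# re-submitted proof is tied to one identity and cannot simply be copied
# from another volunteer's report.
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

volunteer_id = b"volunteer-1234"             # illustrative identifier
piece = b"...archive shard bytes..."

private_key = Ed25519PrivateKey.generate()   # kept by the volunteer
public_key = private_key.public_key()        # registered in the public database

digest = hashlib.sha256(piece + volunteer_id).digest()   # unique per volunteer
proof = private_key.sign(digest)

# Anyone reading the public database can check the proof against the
# registered public key; verify() raises InvalidSignature on failure.
public_key.verify(proof, digest)
```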
A public counter, estimating how well the archive is currently backed up via this scheme, could be displayed.
For copyright issues, it would be possible to encrypt some of the data, e.g. such that normally borrowable items become readable files only when X% of the distributed pieces are put back together.
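One standard way to get that "only readable when enough pieces are combined" property is threshold secret sharing of the decryption key. Below is a toy Shamir-style sketch over a prime field (illustrative only, and a real deployment should use a vetted library; the choice of a 3-of-5 threshold is arbitrary):

```python
# Toy Shamir secret sharing: split a decryption key into n shares such that
# any k of them reconstruct it, and fewer than k reveal nothing.
import secrets

PRIME = 2**127 - 1          # Mersenne prime, large enough for a 128-bit key

def split(secret: int, k: int, n: int):
    """Split `secret` into n shares; any k of them reconstruct it."""
    coeffs = [secret] + [secrets.randbelow(PRIME) for _ in range(k - 1)]
    shares = []
    for x in range(1, n + 1):
        y = 0
        for c in reversed(coeffs):          # Horner evaluation of the polynomial
            y = (y * x + c) % PRIME
        shares.append((x, y))
    return shares

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

key = secrets.randbelow(PRIME)              # e.g. the AES key for a borrowable item
shares = split(key, k=3, n=5)               # any 3 of 5 shareholders can rebuild it
assert reconstruct(shares[:3]) == key
```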
The scheme would be based primarily on existing designs and algorithms, but would work roughly as depicted above. I am not an expert on which compression, hashing, and other algorithms should be used, and it would take a lot of careful work to determine how to avoid errors in the scientific part of estimating the reliability of the downloads, and more generally to avoid a situation where it turns out that lots of data was lost when attempting to put the pieces back together again.
Remark (engineering): Empirically validating the correctness of the backup software by testing it on grids of real hard drives in a single place would probably give safety against catastrophic failure. Even better would be to obtain a large number of old hard drives and SSDs, keep them in a single place for a long time, and validate that the software works over time.
Remark (integrity): That a downloader actually has the downloads can be verified efficiently by the IA server adding a small part to the piece the downloader has, hashing it again, and requesting the new hash.
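A minimal sketch of that challenge-response check, assuming the server either keeps its own copy of the piece or has precomputed challenge/response pairs, and interpreting the "small part" as a fresh random nonce (my reading of the remark above):

```python
# Proof-of-storage challenge: the volunteer can only answer correctly if they
# still hold the full piece, since the nonce is unpredictable in advance.
import hashlib, hmac, secrets

def make_challenge() -> bytes:
    return secrets.token_bytes(32)                       # fresh random nonce

def respond(piece: bytes, nonce: bytes) -> str:
    return hashlib.sha256(piece + nonce).hexdigest()     # computed by the volunteer

def verify(expected_piece: bytes, nonce: bytes, answer: str) -> bool:
    # Server recomputes the same hash from its reference copy and compares.
    return hmac.compare_digest(respond(expected_piece, nonce), answer)

piece = b"...archive shard bytes..."
nonce = make_challenge()
assert verify(piece, nonce, respond(piece, nonce))
```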
Remark (redundancy): It may be possible to develop a social program that analyzes whether a volunteer in a certain place provides more redundancy by buying themselves a hard drive, or by supporting the acquisition of hard drives for volunteers who have proved themselves reliable elsewhere. This is speculative, and the benefit may be lower than the risks.
Finally, instead of a "public database" it may be much better to use a blockchain of some sort. Not a cryptocurrency, but a blockchain. This is because, if the idea is to distribute copies over the world to ensure contingency in case the IA's main architecture collapses, then the more parts of the distributed backup architecture (which must actually not be "the backup architecture" but "a scheme" that no everyday IA decisions rely upon, and that just exists out there) run on a decentralized network, the more reliable it will be.
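To make the "blockchain, not cryptocurrency" distinction concrete, the minimum useful property is just an append-only, tamper-evident record of who stores what. A toy hash-chained log sketch (a data structure only, with no consensus protocol; the record fields are illustrative):

```python
# Each entry commits to the previous one, so rewriting the history of
# who-stores-what is detectable by anyone holding a recent chain head hash.
import hashlib, json, time

def append(chain, record):
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    entry = {"record": record, "prev": prev_hash, "ts": time.time()}
    entry["hash"] = hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()
    chain.append(entry)
    return entry

chain = []
append(chain, {"piece": "piece-001", "country": "FI", "reliability": 0.90})
append(chain, {"piece": "piece-001", "country": "BR", "reliability": 0.95})
```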
My heuristic plausibility analysis:
0. The IA backup would not need to be constantly accessed or changed (this makes storage easier and cheaper, and prolongs the maximum age of the storage)
1. Not all of the IA has to be backed up: a distributed backup that successfully recovers 10% of the IA in a catastrophe is by all means a great success (consequently, prioritization of what might/should be stored should probably be part of the algorithm that decides what volunteers download, and what existing "big" archives already store that overlaps with the IA should be taken into account in this analysis)
2. I recall you estimated a 30-40 M USD ballpark for a single copy: a properly led open-source project may be able to develop this for free, and a fairly compensated one could cost ~0.1% to 1% of that.
3. The Sia network https://siascan.com/ has space for 7 PB, and that is for storage where users can download their own files at any time, and it has had very little publicity.
4. A 2 TB hard drive costs 50-100 USD, and 20 PB would be 10,000 people each buying one 2 TB hard drive, which by itself is possible. Hobbyists and organizations may be able to provide even larger capacities.
5. Most IT projects fail, but since a lot of the technology already exists, we know what we are doing, and the IA might be able to recruit above-average talent, we can conservatively give a 50% chance for the groundwork development to succeed, or 45% without funding.
6. If the development succeeds, there may already be around ~100 potential volunteers. I estimated that 0.1% of IA visitors may volunteer, plus 1% of Hacker News traffic were the project to be mentioned there, plus growth over the first few years and traffic from elsewhere. Perhaps a 75% chance to get 10% of the IA backed up by volunteers, given that development succeeds.
7. If that much is backed up, there is perhaps a 5% chance of attaining 200 TB in the next few decades.
Conservatively, given that open-source development starts, one gets approx. a 33-38% chance that a 10% backup is achieved, and approx. 1-2% that 100% of what is now in the IA could be backed up (a quick arithmetic check is sketched below). These are of course rather meaningless numbers, but the fact seems to be that, lacking funding to build a complete backup, the IA can best guarantee contingency by starting to build a distributed one. Perhaps this was needlessly many words for a simple proposal.
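The arithmetic behind those combined figures, using only the guessed probabilities from points 5-7 above (no measured data):

```python
# Combine the point estimates: P(development succeeds) * P(10% backup | success),
# and multiply further by point 7's estimate for the long-term full-coverage case.
p_dev = (0.45, 0.50)   # groundwork development succeeds (without / with funding)
p_10pct = 0.75         # 10% of the IA gets backed up, given development succeeds
p_full = 0.05          # point 7's long-term estimate, given the 10% milestone

for p in p_dev:
    print(f"P(10% backup) ~ {p * p_10pct:.0%},  P(full backup) ~ {p * p_10pct * p_full:.1%}")
# -> roughly 34-38% and 1.7-1.9%, i.e. the ~33-38% and ~1-2% quoted above
```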
- X
---
Note: It's probable that at least the NSA has a private full IA backup.