ZeroNet – Uncensorable websites using Bitcoin crypto and BitTorrent network (zeronet.io)
410 points by handpickednames on April 5, 2017 | 169 comments



Love the ZeroNet project! Been following them for a year and they've made great progress. One thing that's concerning is the use of Namecoin for registering domains.

Little-known fact: a single miner controls close to 65% or more of the mining power on Namecoin, as reported in this USENIX ATC'16 paper: https://www.usenix.org/node/196209. For this reason, some other projects have stopped using Namecoin.

I'm curious what the ZeroNet developers think about this issue and how their experience with Namecoin has been so far.


What is the point of Namecoin and having a central domain registrar at all?

It seems like a publisher-addressable network (where documents are identified by a publisher's public key) or a content-addressable network (where documents are identified by a file hash) would be good enough by itself, so long as the protocol had built-in support for distributed document searching and ranking.
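To make the distinction concrete, here's a rough sketch of the two addressing schemes (the key handling and names are purely illustrative):

    import hashlib

    document = b"<html>hello world</html>"
    publisher_pubkey = b"-----BEGIN PUBLIC KEY----- ..."  # hypothetical key bytes

    # Content-addressable: the document is named by a hash of its own bytes,
    # so any copy can be verified against the address itself.
    content_address = hashlib.sha256(document).hexdigest()

    # Publisher-addressable: the address is derived from the publisher's public
    # key, and anything fetched under it must carry a valid signature from that
    # key (signature checking omitted here).
    publisher_address = hashlib.sha256(publisher_pubkey).hexdigest()[:40]

    print(content_address, publisher_address)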

Casual internet users on the regular internet do not seem to be using domain names to locate resources anymore. They are using Google to locate resources, and only looking at the domain name to verify identity. If the primary purpose of the domain name is not to locate a resource but to verify identity, then it seems like this could be accomplished with a web of trust without a central name registrar.


IIRC you only need Namecoin if you want to register a human-friendly domain.


Also, if you ever lose control of a namecoin domain you can say goodbye to it forever. A squatter will take it instantly and hold on to it forever unless you buy it from them for actual money.


Has squatting gotten worse on Namecoin? Squatting is fairly hard to handle in decentralized naming systems in general. Namecoin got a lot of squatting issues mostly because of the pricing function (price of names dropped over x years, and now it's almost free to register names). Here is another paper from WEIS'15 that studied squatting in Namecoin: http://randomwalker.info/publications/namespaces.pdf


Isn't that true of normal domains, too?


Depends on the top-level suffix. For instance, .fr (France) domains have a "no taking" period after the expiration date, during which nobody can take them from their previous owner. The owner can then take the domain back, but it won't be re-activated for a couple of weeks, I believe. So the punishment for screwing up is a temporary blackout of your domain name.

.com, .net, .org domains are handled differently, and may be easier to lose permanently.


I wonder if they could look into using some of the DNS features currently being built by BlockStack?


Has the code quality improved since I was told to screw off for bringing up security?

* 2 years out of date gevent-websocket

* A year out of date Python-RSA, which has had some worrying security bugs fixed in that time [0] (vulnerable to side-channel attacks on decryption and signing).

* PyElliptic is both out of date and actually an unmaintained library. But it's okay, it's just the OpenSSL wrapper!

* 2 years out of date Pybitcointools, missing just a few bug fixes around confirming that things are actually signed correctly.

* A year out of date pyasn1, the ASN.1 type library. Not as big a deal, but the missed updates cover some constraint-verification bug fixes. [1]

* opensslVerify is actually up to date! That's new! And exciting!

* CoffeeScript is a few versions out of date: 1.10 vs the current 1.12, which includes moving away from methods deprecated in NodeJS, fixes for path handling under Windows, and compiler enhancements. Not as big a deal, but something that shouldn't be happening.

Then, of course, we have the open issues that should be high priority from a security standpoint but don't get a lot of attention.

Like:

* Disable insecure SSL cryptos [3]

* Signing fail if Thumbs.db exist [4]

* ZeroNet fails to notice broken Tor hidden services connection [5]

* ZeroNet returns 500 server error when received truncated referrer [6] (XSS issues)

* Port TorManager.py to python-stem [7], i.e. stop using out-of-date, unsupported libraries.

I gave up investigating at this point. Doubtless there's more to find.

As long as:

a) The author(s) continue to use outdated, unsupported libraries by copying them directly into the git repository, rather than using any sort of package management,

b) The author(s) continue to simply pass security problems on to the end user,

... ZeroNet is unfit for use.

As simple as that.

People have tried to help. I tried to help before the project got as expansive as it is.

But then, and now, there is little or no interest in actually fixing the problems.

ZeroNet is an interesting idea, implemented poorly.

[0] https://github.com/sybrenstuvel/python-rsa/issues/19

[1] https://github.com/etingof/pyasn1/issues/20

[3] https://github.com/HelloZeroNet/ZeroNet/issues/830

[4] https://github.com/HelloZeroNet/ZeroNet/issues/796

[5] https://github.com/HelloZeroNet/ZeroNet/issues/794

[6] https://github.com/HelloZeroNet/ZeroNet/issues/777

[7] https://github.com/HelloZeroNet/ZeroNet/issues/758


Thanks for sharing this.

It's a shame your skills weren't more appreciated.


That's a pretty deep and well-thought-out security audit. Are they at least making progress? For a lot of open source projects that are labours of love, it's all about getting the time and funding to work on them.


openSSLVerify is up to date. That's one more dependency than was up to date a year ago.

My problem is conversations like this one [0], where improvements are resisted for being too hard.

People have tried to help improve quality and testing rigour, but they get turned away.

[0] https://github.com/HelloZeroNet/ZeroNet/issues/830


I brought up this thread on Reddit https://www.reddit.com/r/zeronet/comments/63lvqo/has_the_cod... and the author /u/nofishme fixed a few things and introduced automation.

Can you take a look at it again? It's not my area of expertise.


It's a lot better. Steps in the right direction.

About 52% test coverage, and pip is in use for some things.

However, so long as the LIB[0] folder exists, these sorts of problems will recur.

Each of those libraries is an opportunity for problems to emerge.

But as they're manually managed, you don't get the chance to test against future versions, to check whether things break or still work.

Out of date becomes inevitable.

[0] https://github.com/HelloZeroNet/ZeroNet/tree/master/src/lib
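To make that concrete, here is a rough sketch of the alternative: declare pinned dependencies and let a test fail when the installed versions drift. The package names and version pins below are made up for illustration:

    # check installed dependency versions against declared pins
    import pkg_resources

    PINS = {"gevent-websocket": "0.10.1", "rsa": "3.4.2", "pyasn1": "0.2.3"}  # hypothetical pins

    for name, wanted in PINS.items():
        installed = pkg_resources.get_distribution(name).version
        assert installed == wanted, "%s: installed %s, expected %s" % (name, installed, wanted)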


Well, it is better to concentrate on getting users than on solving some small quirks.

Nobody is going to attack ZeroNet if it doesn't have users anyway.


That was OpenSSL's attitude. It resulted in harm to many users who would've been better off with something else, or with OpenSSL's own developers actually trying to prevent security vulnerabilities. A project advertising something as "uncensorable" based on "crypto" or whatever should be baking security in from the start, everywhere it goes. Or it's just a fraud.


You're right.


Let me quote one of the ZeroNet team members when questioned about potential hacking.

> I wasn't aware of any hackers. The only problem I have since I have been running ZeroNet for a year, is the minor problem of file size mismatch, simply because not all peers in the network have the latest version of a file.

At best, that's an unhelpful attitude. It leads to things like: [0]

[0] https://arstechnica.com/security/2017/03/firefox-gets-compla...


Nobody is going to use ZeroNet in the first place if it's not secure. "Users before security" makes no sense at all if the product you're selling is security.


How do you explain that it has a lot of users, many more than your preferred secure network?


Making the front page of HN is probably enough motivation for a good portion of attackers.


Your comment is under-appreciated. This is the greatest issue facing computer security today.


It seems to be one guy working on the whole project, which is open source, so you could submit PRs to fix it or donate money.

It's easy to point out issues and not do anything to help.


I did.

We talked it over, decided I would do the test suite.

I started, found the bad practices, and showed how I could turn it into a fully automated system that new versions could be tested against and that, if the tests passed, could output binaries for every system.

The response was, 'No don't do that. I like doing it manually. Means I can check for breakage.'

Followed by my PRs and issues being closed, and my emails bouncing.


nofish knows about this; he said he will bring it up to date soon.


We need more projects like these. Whether this particular project solves the problem of a truly distributed Internet* is beside the point. What we need is a movement, a big cognitive investment towards solving the Big Brother problem.

*I am referring to the concentrated power of the big players, country-wide firewalls, and bureaucratic control over how and what we use.


We need multiple internets, a big confusion. Governments can't handle confusion, but if everything is standardized on Facebook and WhatsApp, it's easy for them.


They can.

Well. Look, even if you have multiple internets, decentralized everything, all systems distributed, no more Google, no more Facebook: what do the communication patterns in such a system look like? Do you use the system after work, before going to sleep? Your usage patterns, and everyone else's, can be analyzed from the traffic. Many of the endpoints would be honeypots run by spooks, revealing even more of what you are up to and giving you a false sense of safety, while the spooks could run the entire decentralized inter-network.

So your system would have to fake it somehow: fake requests for some hashes here and there, fake posted comments and follows. Otherwise, the social data available once the spooks join your social network, even if it is distributed like Patchwork on Scuttlebot, defeats its purpose.

That is what Bitmessage does, but then you pay in high bandwidth costs. And you can't just do random shit; random traffic can easily be filtered out, so you need a more advanced method of creating fake social relations and using those to generate fake data that actually conceals what you and everyone else are doing on the interweb.

EDIT: I'm not saying "give up", it's a very worthy cause; it's just that the problem is harder and enters the social space quite fast. The problem is the same as "we are all nice developers and hackers", yet 99% seem to be employed by the NSA/similar services/Google and think they are doing great James Bond-type jobs, while they are actually anti-hacker and anti-developer, in fact anti-society.


I believe I2P solves most of the issues you mentioned with garlic routing, though at a cost in speed, of course.


Agreed - what they like is "legibility" as defined here:

https://www.ribbonfarm.com/2010/07/26/a-big-little-idea-call...

ZeroNet does appear to take an important step in that direction with decentralization.


There is IPFS

ZeroNet is designed for fast, dynamic websites; IPFS is more like a storage solution. It's already possible for them to cooperate, e.g. IPFS for static, big files and ZeroNet for dynamic user content.


The project looks very promising but relies on running a lot of javascript from untraceable sources in the browser.

Given the long history of vulnerabilities in browsers, trusting JS from a well-known website might be OK; trusting JS from ZeroNet is unreasonable.

If ZeroNet could run with JS generated only by the local daemon, or without JS at all, it would be brilliant.


Chrome added a feature a while back that I'd wanted for ages: the ability to specify the checksum of a linked asset, so that it can be verified as it's downloaded (and distrusted/discarded if it doesn't match). I just can't find the docs for it. :( My Google-fu is not strong.

EDIT:

Found it :D

https://w3c.github.io/webappsec-subresource-integrity/
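For anyone curious, the integrity value is just a base64-encoded cryptographic digest of the asset. A minimal sketch of computing one (the filename is only an example):

    # compute a Subresource Integrity value to pin in an integrity="sha384-..." attribute
    import base64
    import hashlib

    with open("app.js", "rb") as f:
        digest = hashlib.sha384(f.read()).digest()

    print("sha384-" + base64.b64encode(digest).decode())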



It's kind of a shame they didn't let their imagination fly with that one... I wish integrity were a global attribute, because I could totally see using it for things like images and audio/video.


It might work (though I'm not completely sure) if you specify a hash in the img-src directive of the CSP header: https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Co...

Another option would be to just use a subresource-integrity protected script to check the hash of a downloaded image/video before displaying it.


That is clever and I like you.


And then any site could check whether you've seen a particular image before just by including it and seeing how long it takes to load, yay!


Nice feature, but you need to trust the HTML page that is pulling in the JS. ZeroNet allows any HTML page to pull any script.


And... we're back to loving IPFS :3


I was pretty sure I knew what you meant but to be a bit more explicit: it's a real integrity check using hashes and not merely a checksum.


If we are nitpicking like this: It uses cryptographic hash functions and not merely the hash functions commonly used by hash tables.
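The distinction in a tiny sketch: a plain checksum like CRC32 catches accidental corruption but is trivial to forge, whereas a cryptographic hash like SHA-256 makes it infeasible to craft a different payload with the same value:

    import hashlib
    import zlib

    data = b"original asset"
    print("crc32 :", zlib.crc32(data))                   # easy to collide deliberately
    print("sha256:", hashlib.sha256(data).hexdigest())   # collision-resistant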


This is why native clients (real native clients, not browsers-in-cans) are so important: they enable one to be more secure against targeted attacks, and they enable many eyes to review code and hence make one more secure against untargeted attacks.

Frankly, given much of the history of successful Internet tools & protocols, I'd love to see some text-UI clients for ZeroNet.


I would recommend Freenet over ZeroNet: more or less the same concept/functionality, but with 15 years more experience.

Freenet: https://freenetproject.org/


They have different goals. Freenet is about total anonymity. In the Freenet world, everyone helps serve small pieces of all the data, yet no one knows what data is in the portion stored on their local drive. Things like JavaScript are also disabled on Freenet.

ZeroNet uses the torrent protocol and serves up the content you've chosen to view. You know what you're serving.


JavaScript is disabled through the Freenet proxy, but it's possible to use the Freenet API to access the raw data and have a proxy that doesn't disable JavaScript. It would be possible to do something very ZeroNet-like but with Freenet as the backend.

ZeroNet doesn't really use the torrent protocol. It has its own file-sharing service that it runs to receive requests for files from other users. It uses torrent trackers to map site addresses to IP or onion addresses.

You know what you're serving initially but the site author can add any files they choose and you'll start serving them if you've visited the site and are seeding it. You have no control over malicious sites that decide to store arbitrary data.


Freenet is a great idea with 15 years of failure to get traction with sane (by which I mean non-paedophile) people.

Also, it's written in Java.


> Also, it's written in Java.

(Not sure if that's a praise or a criticism.)

As much as I dislike Java (ML outclasses it as a language), that's probably much better than the obvious alternatives (C and C++): there aren't nearly as many undefined behaviours, and that eliminates a whole class of potential security vulnerabilities.

If it were written now, Rust could be a viable alternative: just as safe, potentially faster, with fewer dependencies (at least as far as the compiled binaries are concerned).


> ML outclasses it

You people are why nobody will take Haskell or OCaml seriously. As a developer who works primarily with Haskell: it is not a panacea. Stop being so snobbish.


I did not say ML was a panacea. I said it was better than Java. As a language, to be more precise: I disregarded tooling, libraries, and community.

Now, as a language, I maintain that ML is better than Java on pretty much every count. It has sum types (or algebraic data types), a safer type system (without null), better generics (that weren't bolted on after the fact), a fine module system, easier lambdas… OCaml in particular even has a class system, though it is almost never used: ML hardly ever needs inheritance, so I count that as a negligible disadvantage. And of course, polymorphism (the OO kind) is easily obtained with modules or plain higher-order functions.

Yes, yes, Java has an enormous community, loads of tools, and so many libraries that whatever you want to do has probably already been done. Yes, yes, it means that many projects would be cheaper and faster to write in Java, thanks to those libraries, communities, and plain available programmers. The JVM itself is pretty amazing, with a tunable garbage collector, and very fast execution once the JIT has warmed up.

While important, none of those advantages come from the language. They come from Sun, the staggering amount of working Java devs, and the heap of work they already accomplished. Network effects, mostly. A similar comparison could be made between C++ and Rust, though I don't know Rust well enough to have an actual opinion.

---

Also, "you people" should also admit that a language can be better than another, even if it's only Java8 vs Java7, or C++11 vs C++98. You should also realise that it's important to distinguish the language from the rest (community, tooling, libraries). If you don't, the older language always wins! Of course you wouldn't start a sizeable project in Rust in a C++ shop. Throwing away all the internal libraries and framework, all the training and painstakingly acquired experience? Of course not.

Still, one must not lose the long-term picture. And that picture is provided by the language. Because ultimately, everything stems from the syntax and semantics of the language.


Just curious: are the gay people insane as well?


I think he was using the word "sane" loosely to mean common people, not a niche group. I agree though that it's hypocritical to label some sexualities as a disease or insanity but object to doing the same for other minority ones.


are you actually trying to say that being a pedophile is a legitimate sexuality?

or even that it's anything like being gay?


That's an interesting question for which you'll get no answer for reasons of force majeure.


just curious: are you comparing being gay with being a fucking pedo?


> Anonymity: Full Tor network support with .onion hidden services instead of ipv4 addresses

How does this track with the Tor Project's advice to avoid using BitTorrent over Tor [1]? I can imagine that a savvy project is developed with awareness of what the problems are and works around them, but I don't see it addressed.

[1] https://blog.torproject.org/blog/bittorrent-over-tor-isnt-go...


Tor Project doesn't like people pushing HD video through its relays, because that degrades performance for other users. Torrent clients are very good at saturating links.

This project is about hosting generally. But if it were used for HD video streaming, Tor Project would be just as unhappy.


The linked article refers to three ways bittorrent can deanonymise you behind Tor.

That's a privacy concern, not a load problem.


Yeah, but you can deal with that, if you know what you're doing. If you use Whonix, or roll your own Tor gateway, leaks around Tor aren't an issue. UDP is the hardest thing to deal with. I mean, with proper Tor/userland isolation, leaks don't happen. So all UDP just gets dropped. If you want UDP, you need to use OnionCat or tunnel a VPN through Tor.


> Yeah, but you can deal with that, if you know what you're doing.

I think it's fairly clear at this point that ZeroNet isn't testing to make sure that this is the case.

Their TorManager [0] is basically a wrapper around the tor executable, and runs a fairly vanilla config.

So yes, leaks or attacks via bittorrent are actually an issue here.

[0] https://github.com/HelloZeroNet/ZeroNet/blob/master/src/Tor/...
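For context, a minimal sketch of what driving tor via python-stem (the port suggested earlier in the thread) could look like; the ports and options here are assumptions, not ZeroNet's actual configuration:

    import stem.process
    from stem.control import Controller

    # launch a tor instance owned by this process
    tor = stem.process.launch_tor_with_config(
        config={"SocksPort": "9050", "ControlPort": "9051"},
        take_ownership=True,
    )

    # publish a hidden service forwarding port 80 to a local peer port
    with Controller.from_port(port=9051) as ctrl:
        ctrl.authenticate()
        service = ctrl.create_ephemeral_hidden_service({80: 15441}, await_publication=True)
        print("onion address: %s.onion" % service.service_id)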


> leaks or attacks via bittorrent are actually an issue here.

Its protocol is a different one.

https://zeronet.readthedocs.io/en/latest/help_zeronet/networ...


ZeroNet doesn't use the torrent protocol for distributing files. It uses its own TCP service for that, so it avoids the issues of tunnelling UDP over TCP. Its use of "bittorrent" technology is limited to the protocol for mapping ZeroNet site addresses to IP/onion addresses.


So will ZeroNet map addresses to the imminent next-generation Tor onion addresses, which are much longer? That change will screw OnionCat, sadly enough.

Also, I wonder if MPTCP would play nice with ZeroNet. MPTCP works very well with OnionCat. I could create TCP streams with hundreds of subflows over all possible combinations of multiple OnionCat addresses.

https://ipfs.io/ipfs/QmUDV2KHrAgs84oUc7z9zQmZ3whx1NB6YDPv8ZR...

https://ipfs.io/ipfs/QmSp8p6d3Gxxq1mCVG85jFHMax8pSBzdAyBL2jZ...


I'm also suspicious, since they say that your blockchain address is used for authentication - couldn't colluding websites track your public key and use it to track you between websites?


Seems like that's only for publishing new content, not for merely browsing.

Though I guess unless you create a new identity for every site you want to post a comment on, your comments on one site could be proven to be posted by the same person as your comments on another site.


Presumably since they're using BIP32, they create a new address for every website you visit.


ZeroNet doesn't use the torrent protocol for distributing files. It uses its own file service that is exposed via a port to receive file requests and send files. It uses torrent trackers for mapping ZeroNet site addresses to IP or Onion addresses.


As for "uncensorable": if the content is illegal, the peers may be incriminated for distributing illegal content.


Neither argument has been tested, but the defense would be that you were acting as an ISP with dumb pipes.

Which logically leads to an unrelated question -- if ISPs are doing DPI on every packet, they at least theoretically 'know' whether you're transmitting 'illegal' content. If I were a rights holder, I'd be making that argument against ISPs. I don't know how I'd sleep at night, maybe, but I wouldn't let ISPs have their cake (valuable user data) and eat it too (immunity based on status as ISP-only).


It's been tested for Freenet. LEA adversaries can participate, and identify peers. Judges issue subpoenas. Many defendants have accepted plea bargains. Plausible deniability doesn't work. What works is using Tor.


Even Tor isn't a magic bullet, specifically because of other technologies used in combination, such as a web browser.

https://www.eff.org/pages/playpen-cases-frequently-asked-que...


Yes, the FBI exploited a Firefox vulnerability to drop NIT malware on Playpen users. And said malware phoned home to FBI servers, bypassing Tor.

However, any Whonix users would not have been affected, for two reasons. One, this was Windows malware, and Whonix is based on Debian. Two, Whonix comprises a pair of Debian VMs, a Tor-gateway VM and a workstation VM. Even if the malware had pwned the workstation VM, there is no route to the Internet except through Tor.


Then I recommend you change this:

What works is using Tor.

to this:

What works is using [Whonix].


Too late to edit. But I added a comment. Thanks.


Wait, did they reveal how their exploit worked? I thought they had already dropped two cases rather than reveal the internals of the NIT? Like Tor Browser could still be unpatched for this?


Yes, they didn't reveal the Firefox bug or the details of NIT. And yes, Tor browser could still be vulnerable.

You must isolate the Tor process and userland in separate VMs, or even on separate physical devices. Even if the browser gets pwned and the NIT gets dropped, you'll be OK, because the Internet is reachable only through Tor. Whonix is an easy-to-use implementation.

I've been ragging on Tor Project about this for years. But they don't want to frighten people by making Tor too complicated to use. You could be cynical, and say that they want the cannon fodder for their government masters. Or you could say that they think it's more important to protect the most people, rather than to most strongly protect technically competent people. I have no clue what the truth is. Maybe there's a range of opinion.


If Tor is too difficult to use, people won't use it. Edward Snowden and Laura Poitras had to dedicate a significant amount of time just to get Glenn Greenwald to use TAILS, a plug-and-play Tor operating system. Someone like that is not going to use Whonix, even if maybe they should.


Yeah, I get that. And I realize that I've gone off the deep end. It's hard to imagine anymore how easily people's eyes glaze over. I've written guides that lay everything out, step by step. And many people still can't seem to get it.

But Whonix really is trivial. You install VirtualBox. You download the Whonix gateway and workstation appliances. You import them in VirtualBox. You start them. You work in the workstation VM. There's nothing to configure. That literally should be enough information to use Whonix. Plus there's a wiki and a support forum.


In my opinion, Whonix on Qubes is much more user-friendly. Just install Qubes and use the preconfigured anon-whonix VM.


If the workstation VM is pwned, what stops it from hitting the usual home router internal network address and/or changing the route?

Is there some network isolation going on which prevents that?


The workstation VM has no route to the home router except through the Tor gateway VM. With Whonix, the gateway VM isn't even a NAT router. Plus there are iptables rules that block everything except Tor. The gateway VM only exposes Tor SocksPorts to the workstation VM. You'd need to break the network stack in the gateway VM in order to bypass Tor.


Right, so can't I just add one then? In most VM setups I might have a default route to the other VM running Tor, but I can still talk to e.g. 192.168.0.1 even if I'm not putting traffic through it.

Is this some kind of 'VM-specific' virtual network which can't talk on the real LAN? Is that implemented in the hypervisor?


Yes, for Whonix it's a VirtualBox internal network. There's no direct routing through the host, only among VMs. You can do much the same on VMware.

Edit: I forget that I'm writing on HN. When I say VM, I'm referring to full OS-level VMs, not namespace, Java, etc VMs.


That sounds like a pretty neat setup. I know I can just google all this, so please forgive the inane questions; it depends on VirtualBox, though?

That's a bit of a nonstarter for a few of us.

We probably aren't the target base for the project though so maybe it doesn't matter...


Yes, it depends on VirtualBox. But there are versions for KVM, and for Qubes. More of a nonstarter, though. Or even using physical devices, such as Raspberry or Banana Pi.

Years ago, I created a LiveDVD with VirtualBox plus Whonix gateway and workstation VMs. I had to hack at both Whonix VMs to reduce size and RAM requirements. But I got a LiveDVD that would run with 8GB RAM. It took maybe 20 minutes to boot, but was quite responsive.


In theory breaking properly-configured Whonix would require a VM escape, pretty much the holy grail of exploits (a few have happened recently). The alternative is a complete break of Tor, which has proven unlikely.


Too late to edit.

I should have said: "What works is using Whonix, or otherwise using Tor securely with leaks blocked."


Do you have a source for those cases? I did some searches but can't seem to find anything.


See https://freenetproject.org/news.html#news

I read up on this a while ago, but didn't keep links. There was some discussion on /r/Freenet. For example: https://www.reddit.com/r/Freenet/comments/5tnx81/freenet_use... Missouri police developed a custom Freenet client that logged everything. But I don't remember the name :(


> Neither argument has been tested, but the defense would that you were acting as an ISP with dumb pipes.

Unlikely.

To become a peer, you must first visit the website, fully downloading the content.

Which makes an argument that you consented to share the information.


Just like an ISP has to take your request and transmit it and the response. No difference in theory. In practice, I would worry whether courts would ignore theory.


But there is no person involved directly in the pipeline with an ISP. Every request goes through an automated process.

Governments do ask, and have asked, ISPs to update blocklists and block certain websites. Thus, the automated system is expected to behave appropriately.

However, as a person becomes involved, they become an active participant in the pipeline.

That is a huge theoretical difference.

A person is not automated; they have common sense and an intellect that go beyond the rules that can be encoded in a dumb system.

This reasoning is what allows us to hold a person accountable for their actions.

If you visited a ZeroNet site, found it was bad stuff, immediately left and deleted the cache, you might have a case for innocence.

But if you left immediately, yet continued to actively share the content... it's a different message.

You could become part of a child pornography ring, for example.

And courts enjoy making examples of distributors of such materials.

Pleading your innocence becomes difficult at that point.

You've shared illegal content from your own property.


I wonder how all that would change if you only had a portion of any of the files; at what point would they draw the line?

Either way it's a damn difficult question to answer, but God's be damned if I wouldn't prefer a distributed internet.


Hell yes, to both those points.

At least IPFS is working hard towards Tor integration. That might be something one day.


> At least IPFS is working hard towards Tor integration. That might be something one day.

Actually, that day is today already! OpenBazaar had the same need for a Tor transport and made one! It's available here: https://github.com/OpenBazaar/go-onion-transport/

Basically a plug-and-play transport for IPFS.


I hope they get a chance to add a better README to that, looks interesting.


Yeah, it's not the most documented repository. In the absence of that, you can check out the following document and implementation for some better understanding:

- https://github.com/OpenBazaar/openbazaar-go/blob/4a9ee8de8fd...

- https://github.com/OpenBazaar/openbazaar-go/blob/4a9ee8de8fd...

Hope that helps a bit. Keep in mind that none of this has been verified and might not work as advertised. Just a warning.


Courts exist to evaluate circumstantial evidence. Complex theories are often ignored.

The ISP, like a container ship, is not responsible for every bit it moves around. Individuals are.


I understand this logic, but the state can then say that being an ISP without a license is a crime too.


There's also GNUnet: https://gnunet.org/ As others have mentioned, there's also Freenet: https://freenetproject.org/

I haven't looked deep into any of these projects, but I do think they are neat and hoping at least one of them gains a lot of traction.


Considering a Freenet user is currently [jailed indefinitely](https://arstechnica.com/tech-policy/2017/03/man-jailed-indef...), there do seem to be some problems.


the main difference is that zeronet is fast.


Cannot access this at work, because zeronet.io is flagged as being involved in P2P activity.

I cannot help but feel disappointed and unamused.



In other news, ZeroNet has been banned from giving its TED talk: https://news.ycombinator.com/item?id=14039219


I quit giving TED my clicks long ago. They occasionally have some good talks, but many more that are pseudoscience garbage. Don't even get me started on TEDx. I hope ZeroNet finds a better stage for their talk. Perhaps an organizer could contact them.


There is a single point of failure: kill the tracker and you kill the whole network. You can also get from the tracker all the IPs that are visiting a certain site, so it's not so secure if someone is not using Tor.


BitTorrent supports more peer sources than just trackers, DHT being the most important among them.


It's not supported by ZeroNet.


You sure about that? Their presentation [says][1] "Tracker-less peer exchange is also supported". Any idea what that's referring to?

[1]: https://docs.google.com/presentation/d/1_2qK1IuOKJ51pgBvllZ9...


It means that one peer can send you its list of peers. It's called peer exchange.


There are libraries; it shouldn't be difficult for them to add that.

Oh, they say the following on their issue tracker:

> zeronet protocol is different from torrent, so libtorrent will not work.

So the headline is misleading?


Yes, it's misleading. It's peer-to-peer, but it doesn't speak the same protocol. You can look at the docs for its network protocol: https://zeronet.readthedocs.io/en/latest/help_zeronet/networ...
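Roughly, requests are msgpack-encoded maps sent over plain TCP (or Tor). A hypothetical illustration of the request shape, going by the linked docs from memory, so treat the exact field names and values as assumptions:

    import msgpack

    # ask a peer for a file from a site
    request = {
        "cmd": "getFile",
        "req_id": 1,
        "params": {"site": "1EXAMPLEsiteaddress", "inner_path": "content.json", "location": 0},
    }
    wire_bytes = msgpack.packb(request, use_bin_type=True)

    # wire_bytes would be written to the peer's TCP/onion socket;
    # replies are decoded the same way:
    decoded = msgpack.unpackb(wire_bytes, raw=False)
    assert decoded == request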


I don't see how this could decentralize web applications though. Wouldn't each client have to be running the server software? Someone has to pay for that, too.


Yeah, every client has to run the software, or you use a proxy. If you have a site that is widely seeded, you don't need a running instance of your own. But if you have an unknown site, you would have to run a little server permanently.


Sounds incredible, we'll probably be seeing much more of this type of thing in the near future.


Lack of anonymity in ZeroNet is a big problem.


Seems like it's just as anonymous as the existing web; you can use Tor to hide your IP, but that's optional.


> Page response time is not limited by your connection speed.

Huh? What do they mean?


If you have previously visited a page, then the response time will be limited by your computer's ability to locate and open the correct HTML document.

If you haven't previously visited a page, then the response time will be limited by how many peers are available *and then* by your connection speed.


I assumed they meant offline browsing of cached content


Several years ago I had Tor running on a server at home. It was a regular Tor node, not an exit node. Later I was put on a blacklist because of this. What is the risk of using this?


Would you mind defining "blacklist" in this context? That's kind of scary!


I ran a relay for years and never noticed any strange behaviour. Where and how were you blacklisted?


Sorry for the late reply. One website didn't work anymore. I mailed them and they reported this as the reason why they blocked me.


Presumably you only download the site you want when you visit it. If that's the case then can you view revisions of the web sites or do you only have the current copy?


If you click on "How does it work?" you get redirected to a short and sweet presentation[0]. According to the presentation, when you, as the site owner, push an update, content.json gets updated, the peers get a notification (using the WebSocket API) that a new content is available, and then they download the new version of content.json that contains the sitemap of the updated version of the website. Cleverly thought out!

[0] - https://docs.google.com/presentation/d/1_2qK1IuOKJ51pgBvllZ9...
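For a feel of what that manifest carries, here's a simplified, approximate sketch of a content.json; the field names are from memory of the docs and the values are fake:

    import json

    content = {
        "address": "1EXAMPLEsiteaddress",      # the site's Bitcoin-style address
        "modified": 1491400000,                 # timestamp of this version
        "files": {
            "index.html": {"sha512": "ab12...", "size": 1234},
            "js/app.js": {"sha512": "cd34...", "size": 5678},
        },
        "signs": {"1EXAMPLEsiteaddress": "base64-signature-over-this-json"},
    }
    print(json.dumps(content, indent=2))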


Unless your site is too big: [0].

Then you can have users end up browsing stale versions of the site. Still an issue as of before Christmas last year.

[0] https://github.com/HelloZeroNet/ZeroNet/issues/598


It would be great if a simpler WebTorrent-based version were available, just for fun.


There's a year-old project called peercloud that might scratch that itch:

* https://github.com/jhiesey/peercloud

* https://peercloud.io/


This seems similar to ipfs. What are the main differences?


IPFS is more low-level, in that it is a protocol (in reality a collection of protocols) for P2P data transfer. Together with IPLD, you get a full suite of protocols and data structures for creating fully distributed P2P applications.

ZeroNet is an application for distributing P2P applications, using BitTorrent for the P2P layer. In theory, ZeroNet could leverage IPFS to get a better and more modular stack for the actual connectivity and transferring.


gotcha, thanks for the explanation. It sure seems like they have many similar goals so it makes sense that ZeroNet could leverage IPFS


This is what I've waited for for quite some time.


Freenet has done this for over a decade.


This project is cool, but I'm more interested in future releases from the Akasha project.


Anybody else read this as NetZero, the free internet dial-up from the 90s?


zeronet.io is hosted on vultr.com. Why don't they use ZeroNet to deliver their own website?


Because that would require everyone to have the zeronet client installed before they can go to the zeronet website to download the client...


Like BitTorrent, ZeroNet requires a client. The client acts as a local server and displays the pages in your browser.


That explains it, thank you.


The ZeroNet client is hosted on ZeroNet's own network, so you can keep it updated without the outside net. The site is owned by ZeroNet's dev, nofish, and is kept in sync with the GitHub project. The site is: http://127.0.0.1:43110/zeroupdate.bit


No comment about ZeroNet itself, but am I alone in the opinion that this website takes grid layout too far? It looks outright cluttered and overloaded.


I think it's the seemingly random colors more so than the grid itself that makes it a bit unpleasant to look at.


Kind of reminds me of the old table layouts, although it doesn't bother me personally.


Looks great on mobile however


Yeah, the advantage. Tables on steroids.


I love the design. Looks great on mobile. Not sure about desktop. Loads really fast too.


I always wondered why you couldn't just download a torrent of torrents for the month.


Well, there's a draft specifying the semantics of a torrent-of-torrents and incremental updates: http://bittorrent.org/beps/bep_0049.html

But that'll need revising once SHA-1 torrents are deprecated.


It would be interesting to create a distributed catalog of torrents itself.


If you substitute torrents with IPFS, this is trivial:

    # create empty folder
    FOLDER=`ipfs object new unixfs-dir`
    # add folders to object
    FOLDER=`ipfs object patch "$FOLDER" add-link 2017-01 "$HASH_2017_01"`
    FOLDER=`ipfs object patch "$FOLDER" add-link 2017-02 "$HASH_2017_02"`
    FOLDER=`ipfs object patch "$FOLDER" add-link 2017-03 "$HASH_2017_03"`
    # print final hash
    echo $FOLDER
This would result in a hash that has the folders for January, February and March, which can have arbitrary nested folders and files in them.


Maybe I'm not picking up what you're putting down, but how does this differ from DHT?


It might be possible with the DHT alone, but I think for what the grandparent poster wants it would depend on the ability to query the DHT. Both in general and by popularity and insert date.

That might be possible, but given the prevalence of magnet links instead of everyone using that, I just assumed not.


It's possible to do something fully distributed, but not just with the existing DHT network: https://www.tribler.org/


I thought it was pretty easy to disrupt / censor torrents, hasn't that been going on for a while?


"I thought it was pretty easy to disrupt / censor torrents, hasn't that been going on for a while?"

Not torrents themselves, only torrent search engines. Torrents are distributed by design, but traditional torrent directories/aggregators/search engines are centralized, and thus easy targets for DMCA takedowns, ISP blocks, trials, etc.


...And that's exactly the first thing they should put in ZeroNet.


Yup, torrent search engines are the weak link when it comes to protecting the public's access to arbitrary large files, and also the front lines in the battle between the media industries and an uncensored internet.

ZeroNet is perhaps not enough on its own to solve this problem, though, since a good torrent search engine suffers from the same limitation as a good forum, which is the need to have some form of community-based moderation. If people can't remove spam search results, and spam comments, then the medium can be too easily exploited (using Sybil attacks, etc.) and become useless.

The missing piece which is holding back so many decentralised technology projects is a lack of a decentralised trust platform. A necessary step towards this would be a decentralised (and privacy-preserving) identity platform, which would have the added benefit of removing the "Log in with Facebook/Google" problem from the web.


Just sort search results by torrent popularity. People aren't going to seed bad content.


Speaking of which, what's the progress on IPFS?


We're moving forward as always. The latest features include distributed pubsub, filestore (allows you to add files without duplicating them), and interop between browser and desktop nodes. Any specific part you're looking at?


1) What's the status of (supported as a real feature, not just manually changing the bootstrap nodes and hoping everyone else does too) private IPFS networks? If it's there already, how stable is its configuration (i.e. if I get my friends on a private IPFS network will I likely have to get them all to update a bunch of config in 6 months or a year)?

2) Does filestore also let you store, say, newly pinned files in your regular file tree? That is, can you pin a hash for a file (or tree) you don't already have and provide an ordinary file system location where it should go when it's downloaded? Or do you have to copy it out of IPFS' normal repo manually, then re-add it in the new location? Also: how does filestore behave if files are moved/deleted?

3) What rate of repo changes requiring upgrades can we expect for the future? That is, how stable is the current repo structure expected to be? Is the upgrade process expected to improve and/or become automated any time soon?

4) Is there a table of typical resource requirements somewhere? I'm looking for "if you want to host 10TB and a few tens of thousands of files, you need a machine with X GB of memory. If you want to host 500MB, you only need Y GB of memory. If you have 2TB but it's in many, many small files, you need Z GB of memory", or else a formula for arriving at a best guess. For that matter, how predictable is that at this point?

The use case I've been excited to use IPFS for since I found out about it is a private, distributed filesystem for my friends and family. Easy automated distributed backups/integrity checking on multiple operating systems, access your files at someone else's house easily, that sort of thing. Filestore finally landed, which was a big piece of the puzzle (the files have to remain accessible to ordinary tools and programs or I'll never get buy-in from anyone else), so that's exciting. Now I'm just waiting for docs to improve (so I'm not searching through issue notes to learn which features exist and how to use them) and for a sense that it's stable enough that I won't be fixing brokenness on everyone's nodes several times a year.


1) https://github.com/ipfs/go-ipfs/issues/3397#issuecomment-284...

2) The latter. The former is a nice idea; you should definitely raise it on the go-ipfs tracker.

3) The repo upgrade is currently automated (run the daemon with the `--migrate` flag so it will migrate itself).

4) Unfortunately not, but it is a very interesting question. If you could ask it on http://ipfs.trydiscourse.com/ that would be awesome.


a youtube replacement in zeronet would rock


A little-known fact: the Namecoin blockchain's cost-adjusted hashrate [1] is the third highest in the world, after Bitcoin and Ethereum, making it unusually secure given its relative obscurity (e.g. its market capitalisation is only $10 million).

[1] hashrates can't be compared directly due to different hashing algorithms having different costs for producing a hash.


Namecoin has a number of innovations. It's the first 'alt-coin' fork of Bitcoin, and it pioneered the technique of "merged mining", where a miner can do proof of work on both the Bitcoin chain and the Namecoin chain simultaneously. A lot of mining pools implemented merged mining. Even though the alt-coin space has become much more crowded and noisy, Namecoin retains that early hashing advantage. It's a very secure chain.


> It's a very secure chain.

But this guy said that a single miner has 65% of the hashing power: https://news.ycombinator.com/item?id=14043038


IIRC, that's one mining pool, not one miner. The power of mining pools is relatively limited. If the workers see that the pool is attacking Namecoin and devaluing their NMC (not to mention ruining a cool project), they're liable to switch to a different pool.



