Re: Moxie on Web3 (plan99.net)
209 points by telotortium on Jan 9, 2022 | 181 comments



> As for Signal it’s at least open source, but there’s no way to check that the client I’m using, or my friend is using, actually matches that source code.

This is… not true? Signal builds are reproducible. You can verify that the apk in the app store corresponds to a specific point in git history, and if you don't trust Google you can sideload that.

> Even if there was it’s irrelevant. Centralized infrastructure can claim to provide privacy but can never provide control: they can openly alter the deal at any time and I’d be forced to continue using it, if I couldn’t get my friends to switch to something else.

If you're using Signal because of the E2E properties, and Signal unilaterally changes its behaviour, you are absolutely not forced to continue using it. If you do wish to continue communicating with people who are using Signal, you're still able to reappraise what you feel comfortable saying now you know that the behaviour has changed.

The claim that "Signal encryption doesn’t actually work" just seems inaccurate. We can verify that Signal has the properties it claims it has. "Signal encryption could stop working" would be true - the client could absolutely just choose to start sending everything in plaintext, but we're not relying on a centralised group to make that claim! Anyone can look at the commits to Signal and flag that the claimed properties no longer apply.


How do reproducible builds help you verify that your friends don't have compromised builds installed? I haven't used Signal, so I don't know, but is there some way for your client to verify the client on the other end?

Knowing that your client isn't compromised is pointless if your friend can't/won't do verification on their end.


This isn't something you can reasonably protect against and I don't even see this as a purely technical problem.

Even if you could verify Signal, what about the OS being compromised, the phone being lost or a shoulder surfer reading the message?

Even if I tell a friend a secret over dinner I can't 100% verify that they are not going to tell someone else. So some degree of trust in the other party is required.

If you can't trust the person to take the technical steps needed to secure the information, why do you trust them to not just share your secret the old fashioned way?


Even if you could verify the client on the other end, that doesn't help you if the OS that client runs on is compromised because your friend carelessly allowed macros in an office document.


>This is… not true? Signal builds are reproducible. You can verify that the apk in the app store corresponds to a specific point in git history, and if you don't trust Google you can sideload that.

A better way to verify this would be the latest Signal build on F-Droid? Last time I checked it was behind by quite a few versions. If Signal can satisfy the F-Droid build check for being open and free, then it is legit. Most people don't have time to go through the git commits to verify the build.


I don't think Signal was ever on F-Droid? IIRC its code does contain some proprietary blobs and Signal doesn't want non-official clients to connect to their servers, so it'd be against Signal's wishes to have a fork on F-Droid without those proprietary blobs.

They do offer an apk on their website though.


The F-Droid issue mentioned nothing about blobs as I recall; the reason was that they want to be able to push security updates in a timely manner. F-Droid makes it far too easy for people to keep using an old version.


>F-Droid makes it far too easy for people to keep using an old version.

Being open/free means allowing people to have any version they choose. I've published to both the App Store and the Google Play Store; if the software is truly free/open it shouldn't be hard to publish it on F-Droid. The problem is when proprietary code is mixed into the project, in which case F-Droid tends to reject it.


I use Signal, and am not saying that they are doing anything shady.

But that argument would make for a convenient smokescreen if they were.


Agreed, though making it faster or more convenient would be a benefit. But I disagree that most people don't have time to check. It's just inconvenient or not important enough for those who choose not to.


Yeah, the logic of the article seems absurd to me. The hypothetical problem applies to any client. If the open source software Signal cannot be trusted, how do you trust any other software you are using? It's not a software problem, it's a problem of how the software is built and distributed. As mentioned in other comments, you can use a trusted build and distribution channel. At worst, you can build it yourself if you want to be really safe. I have done that and it's not so hard.


> Signal builds are reproducible. You can verify that the apk in the app store corresponds to a specific point in git history, and if you don't trust Google you can sideload that.

Actually, for both of the commercial app stores, you cannot. Since August 2021, Google has required new apps to be submitted as app bundles, from which it packages different APKs for different devices and signs them using its own keys.


Google is pushing towards that, but for apps that already exist you can absolutely still upload stuff signed with your own keys.


Google is far more than 'pushing': currently any app added after August 2021 is required to let Google manage the keys. I expect it's just a matter of time before it applies to all apps. Really disappointing that they are using their market position to completely ruin developers' ability to provide security.

Source: I have been developing on Android since Android 1.0. Also see https://stackoverflow.com/questions/68710048/no-opt-out-opti...


Right, but the point about the closed server infrastructure still stands: you have to blindly trust them not to collect data about you or interfere with your message delivery, compared to a possible decentralized system.


> you have to blindly trust them not to collect data about you

Why blindly trust them if you can inspect the client, like you easily can with Signal? It's all about how you choose to update your client and which due diligence you want to apply, and for this a decentralized system doesn't help.


This is about the server which is not open source and even if it was you wouldn't know if they actually run that code without modifications


> This is about the server which is not open source

The Signal server implementation is mostly open-source, and some users managed to run their own instance.

> you wouldn't know if they actually run that code without modifications

That's the point of SGX, but even without it, most guarantees you get from the Signal protocol come from the clients. E.g. you don't need to know what the server is running to ensure that the content of your messages is not leaking. Other than that, being decentralized on its own doesn't magically solve the metadata problem. Signal leaks less metadata than most messengers, including decentralized ones.


"You can verify that the apk in the app store corresponds to a specific point in git history"

Is there an easy way to get a direct HTTPS download link from the Play Store? The app itself certainly doesn't let you get the direct APK, or if it does then this must be a very obscure or new feature because I've been using Android for years and never seen it. My understanding is that the Play Store protocol is quite complicated.

Even if you can get such a link:

• It's not the case for iOS.

• It's not the case for WhatsApp.

• You have to repeat the process for every update, of which there are many.

• You don't know what version other people are running, which also matters. Consider group chats!

This is really something possible only in theory, not something normal users can actually do, hence the threshold signing/auditing proposal.

"The claim that "Signal encryption doesn’t actually work" just seems inaccurate"

I see a bunch of people getting hung up on this point. It's missing the forest for the trees.

Encryption in which you rely on your adversary to encrypt messages for you is conceptually broken. It doesn't matter if the encryption scheme works in theory in another context, in that context it's a failed setup. The example of WhatsApp (which uses the Signal encryption scheme) makes this clear: the Signal protocol in the abstract prevents the adversary learning when the same message is sent twice. In the most important concrete instantiation it actually doesn't: Facebook simply changed the client to expose that information unencrypted, and there's nothing anyone can do about it. Signal could do the same for their own network tomorrow, and given the ideological trends in the Bay Area perhaps they will. Or maybe they already did for some people? We have no way to know, only faith.
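
To make that concrete: with any randomized encryption scheme the server genuinely cannot tell that the same plaintext was sent twice, yet a modified client can trivially leak that fact right next to the ciphertext. A toy illustration in Python (using AES-GCM from the pip-installable cryptography package as a stand-in; this is not the Signal protocol, just the general property):

    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    key = AESGCM.generate_key(bit_length=256)
    aead = AESGCM(key)
    msg = b"the same forwarded message"

    # Randomized encryption: a fresh nonce makes each ciphertext unique,
    # so an observer can't link two sendings of the same plaintext.
    n1, n2 = os.urandom(12), os.urandom(12)
    ct1 = aead.encrypt(n1, msg, None)
    ct2 = aead.encrypt(n2, msg, None)
    assert ct1 != ct2

    # But a modified client can simply attach the leak outside the
    # ciphertext; no amount of cryptography on the wire prevents this.
    envelope = {"nonce": n2, "ciphertext": ct2, "forwarded": True}

The encryption is equally sound in both cases; the leak happens entirely in client behaviour, which is exactly the part the service operator controls.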

"If you do wish to continue communicating with people who are using Signal, you're still able to reappraise what you feel comfortable saying now you know that the behaviour has changed."

I think you're agreeing with me. The point of encryption is to turn the infrastructure provider into a "dumb pipe" that doesn't have opinions on what you think. If one day Signal or WhatsApp change the deal - as indeed Facebook did in 2019 - and you suddenly have to care what random Facebook employees think - then we can no longer really claim the service is end-to-end encrypted.


The point is that not everybody needs to check.

- It takes one person to notice that builds don't match or that something fishy is happening for the trust in Signal to be lost forever.

- Someone for whom the integrity is critical can do the check / compile it themselves

- I don't know if this has happened yet, but the way Signal is built and distributed allows a community of "checkers" to exist.

What can Signal do to improve this aspect?


I outlined a proposal for what they can do in the article.

In reality even if some people checked it'd make no difference. Networks are very sticky. When WhatsApp broke the encryption by allowing Facebook to detect forwarded messages, there was no uproar or sudden loss of trust. There was no mass migration away from WhatsApp. One day there was privacy. The next there was less, because having made its fortune on the back of viral sharing in social networks Facebook decided that maybe there was a little bit too much social going on and not enough hierarchy. And there's no possible fix, beyond "convince everyone not to use WhatsApp" which isn't going to happen anytime soon.

That's why I suggested a globally distributed group of auditors creating threshold signatures. It should be backwards compatible with the app stores because the resulting signatures are/can be indistinguishable from a regular signature. I have a Shoup RSA implementation lying around in my mail archive somewhere.
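
For the unfamiliar: in a threshold scheme the signing key is split across parties, but the combined signature verifies against an ordinary public key, which is why the app stores wouldn't need to change anything. A toy 2-of-2 sketch with textbook RSA in Python (additive key splitting with toy parameters; real schemes like Shoup's are robust k-of-n, and this is emphatically not production crypto):

    import secrets

    p, q = 61, 53            # toy primes; real keys use ~1024-bit primes
    n = p * q                # public modulus
    phi = (p - 1) * (q - 1)
    e = 17                   # public exponent
    d = pow(e, -1, phi)      # private exponent

    # A dealer splits d additively: neither share alone can sign.
    d1 = secrets.randbelow(phi)
    d2 = (d - d1) % phi

    m = 42                   # stand-in for a hashed app release, mod n

    # Each signer produces a partial signature from its own share...
    s1 = pow(m, d1, n)
    s2 = pow(m, d2, n)

    # ...and combining them gives m^(d1+d2) = m^d mod n: an ordinary
    # RSA signature that verifies against the ordinary public key.
    s = (s1 * s2) % n
    assert pow(s, e, n) == m

Replace the two shares with k-of-n shares held by independent auditors and no single party - including the vendor - can sign a release alone, yet verifiers see nothing unusual.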

Anyway, encryption is about meeting social goals, not merely doing maths for the sake of it. There's no way to meet the underlying social goals here without decentralization - the fact that this is hard is irrelevant (even though I fully agree with Moxie on the problems around iteration speed etc).


> When WhatsApp broke the encryption by allowing Facebook to detect forwarded messages, there was no uproar or sudden loss of trust. There was no mass migration away from WhatsApp.

Well, you’re giving a not-so-great example there.

I don’t know anyone that uses WhatsApp because of their encryption system or privacy claims. Whereas I also don’t know anyone who would keep using signal if not for their encryption and privacy claims.

In every metric I can perceive, Signal is just playing catch-up with WhatsApp or Telegram features. The only winning feature is privacy and encryption. So, your argument that:

> In reality even if some people checked it'd make no difference

Makes no sense. If one person could break the trust in Signal, it would be absolutely dead. No one would keep using a poor substitute with barely any users compared to the mainstream chat services if not for privacy and trust.


The primary reason WhatsApp had to change the forwarding was the social impact of forwarding in some nations, which led to social unrest and deaths. So while there was a loss of privacy, the alternative was to remove the offer of the application itself in its entirety.


> So while there was a loss of privacy, the alternative was to remove the offer of the application itself in its entirety.

OP's point is that the change in forwarding was practically equivalent to (partially) defeating the encryption.

I think the argument is that, prior to this WhatsApp forward-counter rollout, one would have assumed that a significant hit to credibility would block a company/org from rolling out such a thing in a client. And that hit apparently didn't happen.


> In reality even if some people checked it'd make no difference. Networks are very sticky. When WhatsApp broke the encryption by allowing Facebook to detect forwarded messages, there was no uproar or sudden loss of trust. There was no mass migration away from WhatsApp. One day there was privacy. The next there was less, because having made its fortune on the back of viral sharing in social networks Facebook decided that maybe there was a little bit too much social going on and not enough hierarchy. And there's no possible fix, beyond "convince everyone not to use WhatsApp" which isn't going to happen anytime soon.

This seems to miss two things.

First, "networks are very sticky" is the reason Signal uses telephone numbers as identifiers. Your network is not tied to Signal, or even to your contacts having smartphones that can run apps of any kind.

Second, the target markets for Signal and WhatsApp are quite different. WhatsApp users more often than not don't care about privacy. They care that everyone they want to talk to, including possibly critical government services in many countries, requires them to use WhatsApp. So if WhatsApp stops encrypting entirely, I'm not sure very many users care. Signal is much more likely to be used by people who actually care about privacy and security and is much more likely to be abandoned if it breaks the promise of maintaining those things. It's even designed to be easy to abandon, whereas WhatsApp very much is not. Leave WhatsApp and there are a whole lot of people and services you just can't talk to any more.


Signal (with Google / Apple cooperation) can potentially ship modified backdoored versions to specific targeted users, while that one geek that verifies every Signal build still gets a clean version.

This is not a game you can win with centralised app stores and app architecture that relies on push notifications.


This is true even in a decentralised world - even if the protocol can be guaranteed to be secure, you could still push a modified binary to an individual.


Not if you build the binary yourself. It is perfectly doable in the PC world, where you can run background processes and such, but much harder to do in the current mobile world, where both iOS and Android restrict background processes (Android to a lesser degree, but it is going to the same place as iOS) and force apps to rely on push notifications.


No, mjg59 is right - the operating system is a root of trust. Microsoft could push a Windows update that patches a messenger app in place for example. I probably shouldn't have brought up the "knowing what your peer is running" problem as it just confuses things.

There is plenty of progress to be made here without getting into the next layer of how do you trust your OS. Android is at any rate in a decent place because so many users don't get their OS bits from Google but rather, an intermediary, and it's very easy to change those intermediaries (it means buying a new phone but people do that all the time anyway).

Reproducible builds and audits are all very theoretical today. Reproducible builds on their own aren't actually useful because you're just comparing hashes: someone has to have actually read the source code, understood it, documented what it does or compared it to some natural language description, and they have to keep up with all the code changes, and they have to convince other people to trust their judgement. It's a lot of work and we don't have any companies that do this sort of thing today. The industry needs a network of auditing firms that specialize in this but it's a chicken-and-egg problem: there's no point creating such a firm when their outputs are so hard to use, and there's no point making their outputs easier to use when there are no auditors who would create them.


With respect to reproducible builds, they're pretty extensively used in engineering but have you considered that the reason they aren't more widely used outside of the industry is that most people simply don't care? I don't think that it's that the outputs are hard to use - it's that the vast majority of people installing and running software on their devices are perfectly fine with trusting the existing distribution infrastructure.


By "outputs" I mean "understandable and meaningful trust assertions" here, not actual binaries.

As for don't care, sure, but I guess you can make that argument for almost anything security or privacy related. Do most people care if an app or website uses SSL? Nope. They probably can't even tell. Sometimes it's just up to the people building the infrastructure to plan ahead a bit, see what problems can arise and take countermeasures.

I think it's much easier to make progress on audited builds than e.g. PGP style end-to-end encryption. Audited builds and remote attestations are the sort of thing that can be done behind the scenes; they don't inconvenience end users. People who care can double-check what the auditors are saying, and "people who care" might include journalists, politicians and other people with powerful megaphones who can effect change.

Plus it doesn't require a big breakthrough in tech. It does require Facebook (and Apple etc) to accept that just asserting their products are encrypted isn't really achieving anything, and for some engineers to say "OK, let's find a few foreign companies to cross-sign with us". It might require some changes in the app stores. I see a post elsewhere claiming that Google is phasing out the model in which developers sign their own APKs, which is a pity and a retrograde step. But if the right people agree with these arguments, all those problems are solvable.


There are many sources for the Android OS, but they all share one and the same Google Play. So this is the most logical piece of infrastructure for shipping tampered apps to you, on any Android device with Google services.


But you can already build the signal binary yourself, right? Or are you suggesting that should be the default and everybody should rebuild the binary?


You can, but you won't be able to receive push notifications with it. To send push notifications to your app, Signal would need your developer keys, or it would have to relay push notifications via your server. And I don't think Signal can relay push notifications.

Telegram, actually, can send push notifications to third-party clients, but requires uploading developer keys to them.


Not if you buy a Raspberry Pi-like device "server" and plug it into your modem's Ethernet port


I think it is also not a game you can win (as in world wide usage) if you need users to build their own apps. The majority of users will not do that no matter how many tutorials one can write.


Well, this is something that can be somewhat mitigated by giving users a meaningful choice of software repositories. Like, F-Droid maintainers are trusted not to tamper with apps because of their reputation. However, this would also require allowing third-party repositories ("app stores" is the modern fancy term) to have the same system privileges as Google Play / the Apple App Store - and by this I mean giving apps access to third-party push notification services.


The exact same argument can be made for the centralized services that Ethereum relies on. If Infura would serve incorrect or censored data, it only takes _one_ person to find out and the trust we have in them is also lost forever.

There is no fundamental difference between the trust placed in relaying services like Infura, and the trust placed in software supplier like Signal. Both companies/services are able to cheat or betray that trust in a way that regular users will not easily notice, but both of them are fully auditable by technical users. Since their whole business model relies on being a trustworthy service provider, it makes no sense for either of them to attempt to cheat, because they risk their entire reputation.


On that note, I just noticed this tool the EFF released for downloading APKs from the Play Store and others. Haven't tried it:

https://www.eff.org/deeplinks/2022/01/eff-threat-labs-apkeep...


This argument is "true" for every non-static website. In theory, different content can be delivered based on certain criteria (activist, lawyer, etc).

Every interaction with other humans or institutions is based on a degree of trust. Even https was designed by humans, and signing ceremonies are carried out by humans who could have bad intentions. Trust has to be created or verified if interacting entities have no knowledge about each other.

Audit trails were created for this reason and are ubiquitous in the world of the centralized web.


> Is there an easy way to get a direct HTTPS download link from the Play Store?

Have you tried a search engine? I've got a lot of good answers by typing your exact question into duckduckgo.com


Yes. The first hit following your instructions takes me to a page with a list of "download sites" like apkleecher.com that has a button labelled "GENARATE DOWNLOAD LINK". I find myself uncertain that such sites can be trusted to give me an actual correct download. Even if they give me one that looks plausible, is it the one on my actual phone?

Note that PCs have this problem less, because there's filesystem access and a culture of direct distribution. I can go find the actual install folder and take a look at what's in there. For mobiles it's harder and we're forced to rely on these shady sites. Even on PC though, someone has to reproduce the build, keep doing so, and keep up with auditing source code changes. Nobody does this. Even if there are reproducible builds, who exactly does the reproduction and the much more expensive audit work is handwaved away. As an industry we're not even auditing the libraries we include in the final binaries, so we're a long way from being able to have a group of different people make confident claims about what a binary actually does.


> Even if they give me one that looks plausible, is it the one on my actual phone?

Download apk from your phone (using adb) and compare?
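
Something like this, say (a minimal sketch: it assumes adb is installed and the device is authorized, that org.thoughtcrime.securesms is the package id you're after, and that reference.apk is a trusted copy you built or downloaded):

    import hashlib
    import subprocess

    def device_apk_path(package: str) -> str:
        out = subprocess.run(
            ["adb", "shell", "pm", "path", package],
            capture_output=True, text=True, check=True,
        ).stdout
        # `pm path` prints lines like "package:/data/app/.../base.apk";
        # split-APK installs print several, so prefer the base one.
        for line in out.splitlines():
            line = line.strip()
            if line.endswith("base.apk"):
                return line.removeprefix("package:")
        return out.splitlines()[0].strip().removeprefix("package:")

    def sha256(path: str) -> str:
        with open(path, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest()

    remote = device_apk_path("org.thoughtcrime.securesms")
    subprocess.run(["adb", "pull", remote, "device.apk"], check=True)
    print("device:   ", sha256("device.apk"))
    print("reference:", sha256("reference.apk"))

Caveat: raw byte-for-byte hashes can differ even for the same build if the signing blocks or zip packing differ, which is why reproducible-build comparisons usually diff the archive contents rather than whole files.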


> Is there an easy way to get a direct HTTPS download link from the Play Store?

Easy? No. But you can grab the apk from any number of third party download sites and verify the signature, and you can also just download it from https://signal.org/android/apk/ . Unless you've installed something via the Play Store, Android won't autoupdate it.

> • It's not the case for iOS.

It's still not hard to obtain iOS packages and verify their contents - it's definitely not as straightforward to use those as it is to sideload updates for Android, but it's still possible to verify that the binary matches the source, and you only need one person to notice.

> • It's not the case for WhatsApp.

Your claim was "Signal encryption doesn’t actually work". WhatsApp uses a whole bunch of Signal, but it's not Signal. This is like claiming that the IoT device I found that uses Signal in the backend but has all the key material in an unprotected Firebase bucket tells us anything about the security of Signal.

> • You have to repeat the process for every update, of which there are many.

No, someone has to. And that's something that could be automated.

> • You don't know what version other people are running, which also matters. Consider group chats!

If your position is "We should have infrastructure that makes it easy for third parties to audit Signal updates correspond to the source code", I absolutely agree! But we can build that with what currently exists, Signal's centralised infrastructure does nothing to prevent that.

> Encryption in which you rely on your adversary to encrypt messages for you is conceptually broken.

Every time someone sends an encrypted message, they're relying on a huge stack of technology that's largely outside their control. If my keyboard becomes untrustworthy, my guarantees are gone. If my video driver becomes untrustworthy, I'm in a bad place. Using any form of technology implies placing some trust in an awfully large set of people. On a daily basis, we're relying on an awful lot of faith. The Signal devs have gone out of their way to make it easier to verify whether that faith is misplaced or not.

But:

> We have no way to know, only faith.

We literally do have a way to know. We can dump every Signal APK installed on every phone and determine whether they match the source. It wouldn't be easy, but it could be done.

> If one day Signal or WhatsApp change the deal… then we can no longer really claim the service is end-to-end encrypted.

Signal doesn't appear to have changed the deal, so it seems like you're saying we can currently claim that it's end-to-end encrypted?


"you can grab the apk from any number of third party download sites"

But people get it from the official app stores and for WhatsApp there's no other option (which is the main one I actually care about because that's where my contacts live).

I think some of this discussion is caused by conflation between Signal-the-tech and Signal-the-app. I could have called it the Axolotl Ratchet instead but I don't think anyone would know what I'm talking about, and anyway, I think it'd be less technically accurate as by now there's lots of stuff to do with group chats etc and I don't know what the underlying code-name of those schemes is. I normally see the whole encryption scheme just called "Signal encryption".

"If your position is "We should have infrastructure that makes it easy for third parties to audit Signal updates correspond to the source code", I absolutely agree! But we can build that with what currently exists, Signal's centralised infrastructure does nothing to prevent that."

We are in 95% agreement indeed! I proposed such an infrastructure in the article, based on threshold signatures. However, it doesn't exist today, yet our industry is claiming "we use end-to-end encryption which means we can't read your messages". This claim isn't true - they can read our messages if they want to - and progress seems to have stopped after these rollouts. The claim that it's possible to have a trustless centralized infrastructure provider is a very bold one and currently it's not convincing, so to prove this is possible we need to keep things moving. There has been no attempt to make these services auditable, nor even really any recognition that there needs to be. That troubles me a lot. At some point people are going to notice these claims don't add up and trust in the whole software industry may be damaged.

Edit: actually if I recall correctly Signal uses Intel SGX to make some parts of its contact list intersection system auditable. That's a great use of tech to solve some of these problems. SGX is one way to attack the centralized-but-untrusted-provider problem. I don't know if anyone is actually doing remote attestations on a regular basis though.

"Every time someone sends an encrypted message, they're relying on a huge stack of technology that's largely outside their control"

Yes, but those components come from different players, in different countries, with different governments and agendas who often do not trust each other blindly. Hence why mobile radios are now sandboxed, why keyboard apps are sandboxed, Samsung presumably watches out for Google playing naughty games with the Android source as part of their patching and distribution process, etc. Certainly carriers have a long history of treating phone makers as semi-trusted, hence the historically long approval processes.

"Signal doesn't appear to have changed the deal, so it seems like you're saying we can currently claim that it's end-to-end encrypted?"

I believe that is currently the case. I cannot prove it, nor can I prove it will remain the case tomorrow, if it is today, nor can anyone else I trust prove it. Which, logically, means it boils down to faith in Moxie and his employees, people I've never met and do not know. A blog post saying "we promise we're not logging messages" should really carry equal weight.


> But people get it from the official app stores

It doesn't matter what "people" do - what matters is whether a third party can obtain the same package and verify it. And they can!

> This claim isn't true - they can read our messages if they want to

They can read our messages if they want to, and we can detect that that happened, and then nobody uses their app any more.

> Samsung presumably watches out for Google playing naughty games with the Android source

Right! And we can watch out for Signal playing naughty games with their source. If you're ok trusting that Samsung will catch Google being untrustworthy, why are you not ok with trusting that signalverifier69's CI toolchain will catch Signal doing the same? The whole point here is that we don't need to trust Signal, we can absolutely verify it.


A third party getting the package is not a guarantee that you get the same package.


If you've got to the point where Google is specifically pushing you a modified app, Google could just push a backdoored keyboard update that exfiltrated all your keyboard inputs instead. Why bother targeting a single app when you can get everything?


I think Android is designed such that even the Play Store cannot replace an installed app with a differently signed one. Or at least it used to be. And, "Android" is controlled by whoever made your phone, not directly by Google unless it's a Pixel device.

That said, it's a thin line. If Google had changed things at some point so that the Play Store could override the signing continuity requirement I wouldn't be remotely surprised.


> I think Android is designed such that even the Play Store cannot replace an installed app with a differently signed one.

They can’t, however Google have recently changed the requirements for submitting an application to the Play Store. You now need to hand over your application signing key. Instead of signing the application to prove authenticity to Android devices then giving it to Google to host, you now sign the application to provide authenticity to Google, and then Google re-signs it with the key you gave them to prove authenticity to Android devices.

So if Google want to provide an alternative binary to a specific person, they can now do that.


> Signal-the-tech and Signal-the-app

The Signal protocol[0] is used in Signal[1].

[0]: https://en.wikipedia.org/wiki/Signal_Protocol [1]: https://en.wikipedia.org/wiki/Signal_(software)

> they can read our messages if they want to

If Alice and Bob use the current version of the Signal app, they can't. You're worried about updates. Fair enough: if that's within your threat model, you're free to delay your updates for as long as you want, until you've reviewed them and waited long enough to be relatively confident that a backdoor would have been found by then. Splitting the signing key into several shares like you suggest would delay updates for everyone while bringing negligible benefits to a tiny minority of Signal users.


> You can verify that the apk in the app store corresponds to a specific point in git history

Heya thanks for your comment.

Would you mind expanding on what the above means?

For the App Store newbies here - myself included :-)


Reproducible builds refers to the process by which a binary/package can be verified to have been built from a specific commit (for example). If the build process is reproducible, it means that every time you build the project, the output is always the same (deterministic). So Signal can publish a binary + a hash + the commit it was built from, and you should be able to check out that commit, build the project, and once built, both binaries should be exactly the same (the hash for both should be the same).

Not only is this helpful for verifying that the binaries people publish are free of 3rd-party modifications, it's also helpful in debugging and some other scenarios. More info here: https://reproducible-builds.org/
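
In practice the comparison usually has to skip the signature data, since a build you produce locally is signed with your own key (or not at all) while the published one carries the vendor's. A rough sketch of the idea in Python (not any project's actual verification script):

    import hashlib
    import zipfile

    def entry_hashes(apk_path: str) -> dict:
        """Hash every file inside the APK, skipping signature metadata."""
        hashes = {}
        with zipfile.ZipFile(apk_path) as z:
            for name in z.namelist():
                if name.startswith("META-INF/"):
                    continue  # certificates/signatures legitimately differ
                hashes[name] = hashlib.sha256(z.read(name)).hexdigest()
        return hashes

    official = entry_hashes("official.apk")  # e.g. pulled from a device
    local = entry_hashes("local.apk")        # built from the tagged commit

    if official == local:
        print("builds match (modulo signatures)")
    else:
        diff = set(official) ^ set(local)
        diff |= {n for n in official if n in local and official[n] != local[n]}
        print("mismatched entries:", sorted(diff))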

Although, on iOS I think reproducible builds are mostly useless (for the use case of verifying the binaries) as you can't download the binary straight from the App Store nor can you inspect installed binaries on your iPhone (AFAIK).


This was really insightful - thank you!


Preface: no one, and I mean no one, should take anything Mike Hearn has to say as credible; the guy is up there with Ver in terms of being persona non grata for his blatant incompetence. His words seethe with a sense (and rejection) of wanting to be relevant, but as he proved with his 'rage quit', not only were his constant forks failures, but Bitcoin has survived all the attacks since he left, proving he never could project anything accurately.

Web3, a boomerism in terms of indexing the internet as they see it, is a hapless fad for people who don't or can't see that the current model of the internet is fundamentally broken, and who want to patch rather than fix the rot.

Kazakhstan has shown how hastily this centralized cesspool, which relies on spying and data collection, can be turned off. Which has been seen time and again since the uprisings of the Arab Spring. The Internet infrastructure as it currently exists is THE problem, and it is that which needs to be corrected.

I think as technologists we have to accept that using a centralized blockchain, which ETH is, built on a horrible language (Solidity), has not brought about and cannot bring about the progress Vitalik or anyone else has fantasized and sold many people on. To date, whether it be the DAO hack or 2.0 or whatever, all we see is how centralized systems make failure an inevitability rather than a possibility.

What I hope is that Starlink and Starship make it possible to create new types of uncensorable ISPs that do not rely on these archaic models: backbones out in the ocean that are readily prone to attack by adversaries [0], can be shut off at will, and rely on nation-states to operate as a despot or regime sees fit.

That alone, beyond Mars colonization, should be enough for people to celebrate SpaceX's success!

0: https://news.sky.com/story/russian-submarines-threatening-un...


It's interesting that the article touches on the reasons why end users might not want to run servers but doesn't mention at all one of the ones that comes to my mind most quickly, which is security.

Running a production server on your home network is a risky thing to do as, if there's a vulnerability in the code (and historically this has been quite common, especially when products are new), it allows attackers into home networks which are very likely to have a load of easily compromised unpatched consumer IT products.

Obviously it's possible to do this securely, if you have the right technical knowledge and the time and dedication to apply it, but I'd guess that it's a very small percentage of people who fall into that camp.


You're right. I've been saying for years that routers should come with 2 networks (VLANs) out of the box (colour-coded red and blue) to make it easier to separate out kit/appliances. One side for IoT stuff that you don't trust and the other side for our own equipment. But as you mention, even then consumers need to know how to set the networks up etc. As the old adage goes, you don't know what you don't know.


The issue with two networks is that many IoT things work by interacting with other things/users. A Philips Hue bridge or a Chromecast needs to be accessible from the network you're on. And they rely on things like multicast and UPnP to function, which are hard if not impossible to do cross-VLAN (I looked into putting my Chromecasts on a different VLAN and the setup was quite heavy).


Agree. I even gave up on keeping the Christmas light IoT device on the other network, because the whole point was to be able to control the lights with the phone, and having to switch my phone between networks was just too much.

What needs to happen here is that my master devices (my phone, my laptop) need to be able to stay connected to many networks at the same time, and then I have to be able to assign different apps to different networks.


It depends on how they work. IIRC the Hue bridge connection can be statically configured with an IP, and then some firewall rules should be enough (authorise your main network to talk to the Hue IP).


That still doesn't fix the fact that it relies on network broadcasts for much core functionality (app discovery of the hub, etc). Putting your IoT devices on their own VLAN sounds like a good idea, but they usually aren't designed to work in that sort of situation and you're likely to create a bunch of usability and management problems that won't be solved with firewall rules.


Many routers already come with two networks separated with vlans. It's a common feature on TP-Link devices, among many others. They call it "Guest network", which makes it really easy to understand.


There are lots of reasons why people don't do it today, I didn't list all of them because it's easy to come up with them as indeed you've done here.

But again, note there's nothing fundamental about this problem. People routinely download and run arbitrary code on their desktops, laptops and smartphones, and most servers don't ever do that. If servers are less secure than web browsers it's a matter of effort put into the implementations, not something fundamental about human nature or computers.


>But again, note there's nothing fundamental about this problem. People routinely download and run arbitrary code on their desktops, laptops and smartphones...

But isn't that in fact the fundamental problem we're talking about? You get security without knowledge exactly if you give up all control over the device. What's the point of "running" your own server if all control you have left is that you can physically smash it to pieces?

Yes you can take control, but only to the degree to which you acquire the knowledge to keep it secure. I think this is a very fundamental trade-off.


I don't think giving up control is necessary to achieve security. Modern web browsers are pretty secure (even if web apps themselves aren't) and yet I can browse to whatever websites I want. There's a competitive marketplace of browsers to ensure that if one provider goes nuts I can switch to another.

There are lots of different angles through which to make progress here. I don't think we should be defeatist about it.


> There's a competitive marketplace of browsers

There isn't. There are exactly three independent browser vendors:

- Google with Chrome. Chrome dominates the market to such an extent that Google pushes through its internal APIs as "standards"

- Apple with Safari. They flat out refuse to implement many of Chrome's shenanigans, and also have issues with priorities and funding

- Mozilla with Firefox. Much like Safari, they are increasingly refusing to implement Chrome's shenanigans. But Firefox is rapidly declining in relevance, and will be on the brink of extinction within a few years (it already has at best ~150-199 million users)


Chrome, Safari, Edge, Firefox, Brave. At least 5. Also, whipping together a simple browser is quite easy given the embedding APIs available. It's good enough to make the point.


> Chrome, Safari, Edge, Firefox, Brave. At least 5.

Chrome, Edge and Brave are the same engine

> whipping together a simple browser is quite easy

Slapping lipstick on a pig doesn't make it a different animal.

Yes, there are superficial differences like watching ads in Brave to earn their "coin", but it doesn't make them a truly different/competitive browser.


Similarity of the code base doesn't matter, because we're talking about what sort of things a browser maker who "goes nuts" might do, like trying to switch off SSL or upload everything you type to their servers. In which case it's easy to patch that stuff out and/or just build a new shell around the underlying (fixed) rendering engine.

That's the reason browser makers open sourced their renderers in the first place - exactly to help the web be more decentralized. And it works. If tomorrow Chrome blocks a website because it mis-gendered someone, OK, I'll switch to Safari. And if they do the same thing I'll switch to Edge. Etc. In extremis, I'll make my own browser based on the open code and sell it to people who really hate censorship. There are lots of ways to respond to this type of 'attack' but they don't rely on cryptography (I can't think of a cryptographic algorithm that could stop such a thing). Instead it's decentralization that fixes it. That's my point.


> Instead it's decentralization that fixes it.

That's FOSS. You can have FOSS using centralized, federated or decentralized protocols, I don't see a direct connection between the two.

To go back a little, Moxie's main point has always been about the advantages of centralized protocols vs decentralized ones, not about FOSS vs proprietary software. In practice, he seems to favor (at least for Signal) centralized protocols and FOSS.


It doesn't have to be open source. The same argument applied when most browsers were proprietary, it was just less strong because it was so much harder to create a competitor. There was still competition though, and a decentralized network of servers (you can think of links as edges in a p2p network).


Ah. Now I got you. Agreed


Browsers based on Chromium, like Brave, also refuse to enable Chrome's shenanigans.


Servers are directly attackable over the internet, unlike browsers :) Browsers are a very well hardened target at this point, after decades of attacks. The fact that browsers still see exploitation, despite the money spent on their security by large, well-funded teams like those at Google and Microsoft, shows the difficulty of securing applications.

Where servers have a direct correlation to money (e.g. if they are involved in processing crypto payments), there's a direct financial gain to exploitation, increasing the likelihood of attack.

(IMHO) After the first wave of worms compromising home users' web3 servers, it would be pretty hard to convince people that it was a good idea to do it again


> Servers are directly attackable over the internet, unlike browsers :) Browsers are a very well hardened target at this point after decades of attacks.

OTOH, browsers are expected to have an ever-expanding array of capabilities on behalf of the end user, which leads to ever-increasing code complexity and size. Moreover, breaking into any individual browser instance nets you almost nothing, so the interesting security issues are the ones that apply across the board (allowing you to do really large-scale blackhat stuff).

Servers, on the other hand, can be relatively static targets in terms of functionality and code, can use modular code design in a way that is harder for browsers to pull off, and have been much more hardened than browsers over time for the simple reason that even a single server break in has way more implications than a single browser break in.

I don't know how much money I'd put on this wager, but I'd bet that getting into a random nginx install is much, much harder than getting into a random browser instance.


oh I'd absolutely agree that a random nginx install is likely to be better than many browsers, but nginx has had years and years of attacks directed at it, so will be a very well hardened target.

What I was thinking of, looking back at the article, is that the web3 space will need new servers with likely novel protocols to support it. When that code is written and released to be run in users' homes, would I expect it to be better than a browser like Chrome or Firefox... no, I would not :)


> oh I'd absolutely agree that a random nginx install is likely to be better than many browsers, but nginx has had years and years of attacks directed at it, so will be a very well hardened target.

That would seem to be an existence proof that servers in the home is just a matter of design priorities then, no?


> Obviously it's possible to do this securely, if you have the right technical knowledge and the time and dedication to apply it, but I'd guess that it's a very small percentage of people who fall into that camp.

I think there's an underlying subtext that is missed in the article and this comment. Like, I don't disagree with the quoted bit above, but at the same time, I can't help but recognize that its truth is a consequence of design choices rather than an inevitability.

Americans happily own cars, which are themselves quite dangerous, laborious to manage, expensive to operate, have to be insured, and require extensive and constantly updated training to maintain. There are companies that offer rental models rather than ownership models, but they are mostly used by urban dwellers with limited income and by travelers. Most people would tell you they'd prefer to own one. It's just a cultural context.

The same applies to servers. We could have servers that were well designed for people to self-manage with perhaps some basic training (certainly no more difficult than what is required to pass a driving test), regular servicing, recalls, service stations spread throughout the nation, laws adjusted to favour ownership, etc.

Really, from a practical standpoint, the difference between a server and so many other computers deployed in homes (routers, mobile phones, IoT devices, cars, laptops, etc.) is far more subtle than we're willing to acknowledge... and frankly we actually introduce a lot of systemic risk by NOT requiring basic skills, regular servicing, insurance, etc. for these "pseudo-servers".


Both authors are conflating 2 different things and managing to talk past each other.

Moxie did conflate the state of Ethereum with the technical state of blockchain technology as a whole. In practice I am not sure this conflation actually matters though as right now most of the interesting stuff that is happening -is- on Ethereum. For the most part Moxie's arguments are completely on point regarding deficiencies in the offerings and the improvements he suggests would both be simple and materially improve them (whilst not addressing Mike's main problem which is semi-centralisation).

Mike manages to completely miss Moxie's point regarding how cryptography can be used to obviate some of (but not all) the problems with centralised infrastructure and work with the devices people actually have and use (mobile phones). He goes on to argue that if "people would just have desktops/linux servers at home this would all be different". But that isn't going to happen, that is a fantasy land.

The world isn't going back to a fully decentralized web. There are very good practical, commercial and political reasons for this.


For what it's worth, I do agree with Moxie that encryption is under-used and can make many kinds of semi-centralized infrastructure private. But that's irrelevant in this case. The focus is on WhatsApp because Moxie likes to use the speed with which it rolled out e2e encryption as the canonical example of why centralized services are better for privacy (this isn't the only place he's made this claim). But it's meaningless: something switched on quickly on a whim can also be switched off quickly on a whim. And, note, we found out that WhatsApp was newly encrypted because of a blog post and some messages in the UI. It could be switched off again silently and who would notice? We know the answer to this because hardly anyone realizes there are forwarding limits, even though it's in their FAQ. Apparently there aren't any groups reverse engineering every WhatsApp update to figure out if it's still encrypting everything.

With respect to the latter part, there might be a terminology issue here. I'm using "desktop" to mean any non-mobile/tablet/server device, including laptops or stick PCs. A whole lot of people have such devices and in particular the sort of people who create content do. The idea that people only have mobile phones is false. There certainly are plenty of people who only have mobile phones and they matter, but there is also a huge population of people who have more powerful computers as well. And even for those who don't, a big part of why is not inability to have one but because they don't see any reason to buy one. That's not exactly fundamental.

Finally I would be very careful with predictions about technology trends. The computing industry has oscillated between big centralized computers and smaller more personal devices several times already. Even today the picture is quite ambiguous: as I argue in the article, a fully-bought-in Apple user is very much still a child of the PC revolution. Modern iPhones are as powerful hardware-wise as many PCs, just the form factor and usage model is different. The ecosystem there is very far from the canonical cloud-über-alles vision promoted by Google via the ChromeBooks.


> Apparently there aren't any groups reverse engineering every WhatsApp update to figure out if it's still encrypting everything.

If that were true--in the case of WhatsApp, I'm pretty sure it's not, as another poster here pointed out--it would be equally true for any open source app. Like, I dunno, Debian (https://jblevins.org/log/ssh-vulnkey).

There's absolutely a place for reproducible builds and binary transparency in the ecosystem. But those seem to me to be examples of what Moxie described: using cryptography to provide privacy and security guarantees while building on top of centralized infrastructure.

Would you rather binary transparency, or would you rather run Sendmail on your Debian server in your closet because, hey, it's physically yours? The conclusion that the latter is better is just nonsensical.

(And of course, Bitcoin Wallet is in the Play store, distributed in exactly the same way as Signal...)


You can do both, they aren't exclusive. There's a class of problems you can't solve with cryptography. See the discussion up-thread about browsers.

Would I rather have binary transparency or run my own mail server? Well, I might like an auditable Gmail but what I can actually get today is my own sendmail. So it's an academic point. Moreover, there's no particular reason running a mail server has to be as painful as it is. It's a PITA because nobody with usability skills has invested time to make it point-and-click, self maintaining etc (+ of course, finding a good reputation block of IPs).

Nonetheless, if someone sat down and wanted to make messaging more decentralized today, making self-hosted email easier would actually be a tractable problem you could start on. Changing Facebook or Google policies, not so much. That has value even if it seems hard to imagine a great outcome today.

Bitcoin Wallet is in the Play Store, can also be sideloaded, is open source, forkable, and connects directly to a P2P network. It doesn't rely on infrastructure run by the developers. It'd be even better if Android let you (or another app) audit installs but it's not a bad foundation.


> Well, I might like an auditable Gmail

Who proposed binary transparency for Gmail (the server)? The whole premise of tools like Signal is that enough happens in the client--which is amenable to binary transparency--that the server is immaterial.

Sure, if you attack that strawman sufficiently, the only alternative is running your own Sendmail. And yes, it could suck less.

My point was a bit bolder than that, though: not only does binary transparency (which, to be clear, is about client code, not the server itself) substantially mitigate the concerns you might have trusting a centralized service, but running a computing stack of millions of lines of code yourself does not at all mitigate those concerns.

Sendmail isn't "yours", even if it's open source. It's a bunch of random patches from random people, some of which will erroneously "fix" a Valgrind error and remove cryptographic entropy from the RNG, others of which will contain intentional backdoors, etc, etc. The fact that you're running it yourself in your closet just doesn't matter. Physical locality--and hardware ownership--don't make a difference.


But there's only one client. And, that's deliberate - Signal doesn't allow federation for philosophical/iteration speed/competitiveness reasons. So in practice, if the client changes you're out of luck. It's encryption that lasts as long as the service owner thinks it should last, but if it's going to be like that, they could also just promise not to look. The point of defining an adversary in cryptography is to stop them from doing things, not just noticing if they do.

Hence, we end up back to my point about threshold signatures. With a threshold software update system and a network of auditing firms, you can actually enforce good behavior on the adversary because they no longer fully control the client, and the client is treating the server as untrusted. This is a form of decentralization, albeit very different to the classical kind (p2p networks, multiple clients, federation etc). It should also be amenable to rapid iteration.

Totally agree that open source is often a bit of a black box and maybe nobody is auditing it properly. But if you do become aware of a bad change / back door, and the system is at least somewhat decentralized and a part of it is physically yours, you can just switch to a better implementation. No such luck with a centralized system: the client may be open source but if only the official client is allowed to connect to the servers it's irrelevant. And all E2E messengers today (except Telegram?) have such a policy.


Wow. Where to begin. :)

> …there’s only one client…It’s encryption that lasts as long as the owner thinks it should last…

I think this is a big leap. It’s an open source client with verifiable builds. If a future version removes encryption, users will migrate to Wire, or Threema, or Matrix, or whatever. How does there being only one client make the encryption non-resistant to an adversary?

It’s genuinely unclear to me what you consider the difference between:

- Multiple open source clients, distributed as source or prebuilt binaries, possibly signed and verifiable, possibly downloaded over TLS, depending on what Linux distro you use. <— This is the state of the art for most open source software.

- A single open source client, developed in the open and distributed with verifiable builds where feasible via broadly reliable and unlikely-to-collude third-party distribution mechanisms (the app stores). <- This is how Signal works.

- A single open source client, developed in the open and distributed with verifiable builds everywhere. <— This would be nice, if the app stores would support it!

> The point of defining an adversary is…not just noticing if they do

Well, no, I disagree. As a prominent example, Certificate Transparency.

“Noticing”—aka accountability—is extremely useful. As I noted above, there is a multitude of free, secure Signal competitors. The only thing that keeps millions of users using Signal is trust based on accountability. Building better accountability mechanisms means that if Signal violates that trust, users can switch, before they discuss hiring hitmen/buying meth/overthrowing the government on an unencrypted channel.

> …threshold signatures…

TBH, I didn’t read this part of your post that closely. There is a lot of work being done on binary transparency, and your proposal seems broadly in the same direction, but I’m not following this space very closely.

What I don’t follow is why you think it requires multiple clients.

> …you can just switch to a better implementation…

Aha. And here’s your point—not that a single client thwarts encryption, but that a single client for a sufficiently sticky centralized service would reduce the ability of users (who may not care that much) to switch away, should the encryption be disabled.

And sure, I think this is true to a degree. But as AOL/Google Talk/WhatsApp-post-ToS-change can tell you, messaging is not that sticky!

Anyway, ultimately the market has spoken. Ping me when everyone is using XMPP/Matrix/whatever. In the meantime, I hope the Signal Foundation keep building usable secure products for people like me. When they sell out, I’ll switch to Threema.


You keep making arguments specific to Signal-the-app. I'm making arguments about E2E messenger encryption in general (and also servers, decentralization etc).

If Signal-the-app did a WhatsApp and started blocking forwarded messages, or blocking messages that contained a blacklisted domain etc., I think very little would happen and it might take a while for people to even notice. You're assuming it'd be noticed immediately and everyone would find out nearly as quickly, but I see no evidence this is true. There is no organized effort to do this type of auditing, let alone make it sustainable for the long run. Even if there were, so what? By the time I somehow find out (how? reading HN?) my message history is already gone.

"What I don’t follow is why you think it requires multiple clients."

I don't ...

"TBH, I didn’t read this part of your post that closely"

... which might explain why you keep mischaracterizing my position. Please RTFA and then opine. The section on threshold-signed auditable builds isn't even long or complicated; it's a few paragraphs at most.

With the proposed software update scheme you can at least get closer to the goal of a single-client, single-server but genuinely encrypted system that meets the social goals people actually have for encryption. It's a form of decentralization that doesn't require everyone to rewrite the same code several times, so it should mostly dodge the product management and iteration speed issues that cause Signal to reject federation.

It isn't as good as having multiple clients because the auditors can only reject new changes, not actually improve the privacy of the product in the way that a competitive market of clients might allow, but the tradeoff is the central authority can add features more consistently as there's no client capability skew.

"Well, no, I disagree. As a prominent example, Certificate Transparency “Noticing”—aka accountability—is extremely useful."

CT is hardly used. I've never heard of a single incident in which a website operator found a bad cert by monitoring CT logs. If there are any such cases it is very likely to be a big tech firm and not a "normal" SSL user.

Regardless, CT is if anything a better analogy for my own proposal. The actionable outcome of a CA breach detected via CT is that browser makers revoke it and they stop being able to issue certificates. It happens behind the scenes and users don't have to think about it. Imagine if browser makers never revoked any CA and people just had to tell their friends that some padlock icons were trustworthy, others weren't and they shouldn't browse to websites that used the leaky CA. Good luck with that.


> If Signal-the-app did a WhatsApp and started blocking forwarded messages, or blocking messages that contained a blacklisted domain etc, I think very little would happen and it might take a while for people to even notice.

Why would it be different for, say, Sendmail? I mean, I agree, but isn’t this the heart of why “open source” is not a meaningful differentiator? (Taviso wrote something similar about reproducible builds and “bugdoors” here a while ago: https://blog.cmpxchg8b.com/2020/07/you-dont-need-reproducibl....)

My claim is not at all that it would be noticed immediately. My claim is that distributing source code doesn’t help for exactly the same reason.

> which might explain why you keep mischaracterizing my position. Please RTFA and then opine. The section on threshold-signed auditable builds isn't even long or complicated; it's a few paragraphs at most.

It’s never stopped me before. ;)

More seriously: I don’t disagree with the proposal, but it strikes me as less radical than you suggest, for a few reasons:

- It seems to me you can already do this with signed git commits plus—what we talked about before—reproducible builds.

- You still have to incentivize review. For a significant project, yeah, you might get it (and it’s cheaper than the dev time to maintain multiple overlapping projects, I guess). But are you going to get it for every, I dunno, NPM package to make text a pretty color which turns out to be critical infrastructure? Nah.

- As Tavis points out in the above link, source code is not at all a panacea.

So, yes, if people want to do multiple-auditor-signed updates, I’m all for it. App stores should make it easier than they do today. Agreed?

> CT…

I guess I no longer understand what we’re disagreeing about. To be clear, I’m all in favor of enabling multiple auditors to sign binaries, I guess. I don’t think it solves major problems in most cases, nor do I think source code access is a panacea for supply chain attacks. But if people want to do it, sure, I guess?


> Nonetheless, if someone sat down and wanted to make messaging more decentralized today, making self-hosted email easier would actually be a tractable problem you could start on.

This is the point I was making on the original Moxie/web3 thread here on HN. However ...

> Changing Facebook or Google policies, not so much.

One of the issues with "making self-hosted email easier" is that it does, at least in part, depend on Facebook (and particularly Google) policies regarding email acceptance/forwarding, and while these are somewhat more technical than some of the policies you're referring to, they are still policies, and still pretty hard to change. You can't make reliable exchange of email dependent on IP block reputation. Getting back to a world where self-hosted email worked as well as Gmail on the receiving side and SES-like services on the sending side would be a huge plus, but I don't think it's particularly easy to accomplish.

Still, it could be tractable, and that is the adjective you used.


I used to work on the Gmail spam filter. There weren't any rules or policies specifically about punishing small operators of course, but:

1. PBL (policy blocklist) was used, like in all spam filters, to stop residential IPs sending mail. You can normally get yourself removed from this if you have a static IP, just by asking.

2. Smaller senders are often sitting in cheap IP blocks that aren't too picky about the neighbours. Spam filters do use IP proximity as a signal, for better or worse, so if you sign up with a company that doesn't do anti-abuse work then, well, all your IPs will have been heavily abused in the past. An individual IP can still stand out from the crowd, but to do that you have to get people to mark your mail as not spam, so there's a nasty catch-22 there. But fixing it seems fundamentally difficult. Something like what Authenticode does with EV certs might have worked.

One way to make progress here might be to have a SOCKS proxy service for SMTP-TLS connections that does ID verification, requires large crypto deposits, or has some other way to try and discourage spammers from signing up. Then if you have an account, the service will find and buy clean IPs and let you proxy through them. The service can sign up to exchange ARF spam reports with the major providers to locate abusers and shut them down before it's too late.

I'm not sure SOCKS provides quite enough detail to fight outbound spam properly, but you could also experiment with an SGX enclave. That way basic anti-spam logic like "is it DKIM signed, does it advertise SPF, is there an abuse reporting address in the headers" could be made auditable. You still face the question of "pray I don't alter the deal further", but just running some SMTP proxies isn't a very difficult task - kinda like a VPN service - so there'd hopefully be a competitive market of them.
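
For illustration, a sketch (in Python, using the dnspython package) of the kind of auditable outbound checks meant above; the X-Report-Abuse header name is just a placeholder, not a claim about any real proxy:

    # Minimal outbound checks an auditable SMTP proxy might run before relaying:
    # does the sender domain publish SPF, is the mail DKIM-signed, is there an
    # abuse-reporting contact? (X-Report-Abuse is a placeholder header name.)
    import email
    import dns.resolver  # pip install dnspython

    def outbound_checks(raw_message: bytes) -> dict:
        msg = email.message_from_bytes(raw_message)
        sender_domain = msg.get("From", "").rsplit("@", 1)[-1].strip("> ")

        has_spf = False
        try:
            for record in dns.resolver.resolve(sender_domain, "TXT"):
                if record.to_text().strip('"').startswith("v=spf1"):
                    has_spf = True
        except Exception:
            pass  # NXDOMAIN, timeout etc.: treat as "no SPF"

        return {
            "spf_published": has_spf,
            "dkim_signed": msg.get("DKIM-Signature") is not None,
            "abuse_contact": msg.get("X-Report-Abuse") is not None,
        }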


Zerodium pays up to $1.5M for working exploits in e.g. WhatsApp [1]. All widely used software, especially software with millions or billions of users, is reverse engineered by multiple groups moments after every single patch.

[1] https://zerodium.com/program.html


Your argument is pretty much that you can't/shouldn't trust WhatsApp/Signal, and Moxie's argument is that you don't need to if the cryptography is implemented correctly. It can't just be switched off; any update to the applications gets reverse engineered within hours of release, well before it's rolled out to a significant fraction of users, because exploits in both are quite prized. As someone else pointed out, WhatsApp RCEs go for ~$1.5M USD, which is in a similar class to iMessage.

Signal's relay system is effectively untrusted at the protocol layer; this isn't much different from Bitcoin. The only real difference is that the only people running it are the Signal folk. Is it an improvement to have a decentralized backend? Only arguably. From a user perspective it's actually often worse, as fully decentralized peer-to-peer systems often perform worse and can't implement optimisations that are possible in centralised systems. From a purely "I can't trust anyone" perspective then no, but even Bitcoin and friends aren't that either. You trust the Bitcoin developers not to alter the protocol in a subtle way that weakens it, just like you trust the Signal developers and server infrastructure not to subtly subvert the cryptosystem.

What I am saying is that trusting the protocol developers of blockchain-like systems isn't substantially different from trusting the Signal developers, with the exception that it would be easier to fork a blockchain if you disagreed, while spinning up your own Signal infrastructure would be quite a lot more work.

As for powerful wall-connected devices, I do think you are being a bit facetious here. For the vast majority of the population, the mobile phone is their primary computing device. This is true across most of Asia, India and Africa. More developed countries have a wider proliferation of powerful consumer computing hardware, but even there non-mobile computing is dwindling, and while devices like iPads/tablets are becoming more popular, they are prone to the same shortcomings as mobile devices (locked-down OS, little/no background processing capability, unreliable connectivity).

Also, I wouldn't cling too quickly to the idea that Apple is selling you something drastically different from Google. If you look at their recent earnings reports, they are now generating more and more of their profits, as a percentage, from their services business. They very much are attempting to follow the Google model; they just weren't as aggressive in the route they took.

I don't actually see that as a problem though. If they can pursue a very cloud-heavy model that is privacy-preserving and protected by sufficient cryptography, I don't see a problem. If however they keep going down the path of device-local CSAM scanning and the like... well, I guess I won't be buying more Apple products.

At the end of the day, user experience is king. Everything else fades away under the sheer weight of money that comes from bowing to consumerism. If all this Web 3.0 stuff has a poor user experience vs. what is being peddled by Google/Apple, then don't expect it to survive for long.


I don't think it's safe to assume exploit vendors are concerned about end-user privacy.

The point about decentralization is that merely noticing problems isn't good enough. Information propagation is hard! What are you going to do if a WhatsApp contact sends you a message saying the encryption was disabled? Forward the message, presumably ... ah.

Audited builds aren't hugely useful unless there are teeth behind the audits, like the ability to block a change to the software that violates the social contract implemented by the encryption. If you don't have that, then so what? If the vendor keeps that level of control, they are basically just saying "we aren't peeking right now, promise!"

The reason it takes years to roll out end-to-end encryption in other protocols is not that centralization is inherently better or faster. It's that it takes a long time to build up a consensus around a new way of doing things and get people to take it seriously. That process isn't really optional, though: if you skip it, you get the WhatsApp problem, where stuff is encrypted up until the moment a single executive decides it shouldn't be. If you can switch it on and off with the flick of a mouse, then it's not actually making our society more robust against damaging behaviour by adversaries, which is the ultimate point of implementing encryption in the first place.


> He goes on to argue that if "people would just have desktops/linux servers at home this would all be different". But that isn't going to happen, that is a fantasy land.

I don't know. I'm looking around my home, and I have a WiFi router, an Android phone, a Chromebook, a Kindle, and a car. The distinction between those five devices and a Linux desktop/server is pretty subtle -- to the point of being a matter of semantics more than anything else. ;-)


A weird mix of no true scotsman and tu quoque arguments.

Sure, there are other paths in crypto land that theoretically could have been better than what we got. It's beside the point though.

Moxie's original argument wasn't against what is theoretically possible - it was about what actually happened.


I share the view that end-to-end encryption (E2EE) provides strong security guarantees only if the clients used to exchange data/messages are fully under the control of the end users. All it would take for Signal to undermine their E2EE is a single update of their app that exfiltrates all chat messages (not saying that I think they would ever do that). Same story for WhatsApp. In my opinion, a necessary ingredient for secure E2EE is that you don't have to trust the central infrastructure operator, at all. And that just doesn't work if the provider controls the client software.

Another problem is that almost all E2EE messengers can still be trivially man-in-the-middled (MITM) by the infrastructure operator, as there's no out-of-band verification of keys. The central server can just replace client keys in transit and decrypt all the data exchanged between two clients. This can only be detected by clients comparing their public keys over an out-of-band channel. Most E2EE messengers simply disregard this risk [1], while some provide functionality for out-of-band key verification (e.g. Threema). Always struck me as a bit odd as it's like using self-signed TLS certificates for your server, which leads to a security exception in all modern browsers but somehow seems to be fine for E2EE messengers. Keybase tried to solve this problem but unfortunately they got acquired by Zoom so I doubt they will continue working on that.

[1] https://signal.org/blog/there-is-no-whatsapp-backdoor/
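
The principle behind out-of-band verification is simple enough to sketch. This is NOT Signal's actual safety-number algorithm, just the idea: both parties derive a short number from the session's public keys and compare it over another channel, so a server-swapped key shows up as a mismatch.

    # Sketch of out-of-band fingerprint comparison (not Signal's real scheme).
    import hashlib

    def fingerprint(my_pubkey: bytes, their_pubkey: bytes) -> str:
        # Sort so both sides compute the same value regardless of ordering.
        material = b"".join(sorted([my_pubkey, their_pubkey]))
        digest = hashlib.sha256(material).digest()
        num = int.from_bytes(digest[:8], "big")  # render ~64 bits as digits
        s = str(num).zfill(20)
        return " ".join(s[i:i + 5] for i in range(0, 20, 5))

    # Both clients display this; the humans compare it in person or by voice.
    # If the server substituted a key in transit (MITM), the numbers differ.
    print(fingerprint(b"alice-public-key", b"bob-public-key"))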


The article mentioned something I was not aware of - that WhatsApp has already started sending metadata (the number of times a message has been forwarded) in the clear:

> This is not a theoretical argument. Disabling E2E encryption has already happened, although hardly anyone knows about it. In 2019 WhatsApp imposed forwarding limits on messages in order to “slow down the spread of rumors, viral messages, and fake news”. This represents a total defeat of the Signal protocol’s cryptographic objectives: a basic goal of any modern cryptographic scheme is to ensure the same message encrypted twice doesn’t encrypt to the same bytes. The point of this is to stop the adversary knowing when you’re repeatedly sending the same message and encryption modes that get this wrong (like AES/ECB) are discredited. Yet once Facebook — the adversary — became dominated by authoritarians who see unlimited communication as chaotic, they simply changed the client to include a forwarding counter outside the encrypted part of the message. There was nothing anyone could do about this. It just showed up one day, and all the fancy mathematics designed to stop this “attack” were irrelevant.


WhatsApp does the counting of message forwards completely on the client side. Their server does not know how many times a message has been forwarded.

The article author's point was not that metadata is being sent in the clear, but that the real goal of encryption is to prevent the provider from manipulating social behaviour. And because WhatsApp controls the client, they are able to do exactly that without breaking E2EE or leaking metadata.


This is why you should use an open standard like Matrix and a client like Element where key verification is a first-class citizen.


"Trivial" is overstating it, for exactly the reasons posted in that blogpost. People can and do verify keys out-of-band, and for that reason there is an extremely low chance that whatsapp (or signal) is wholesale MITMing connections. If you are concerned you will be targeted specifically you can treat key changes with much more suspicion.


But note, this still requires trusting the client. If the provider is going to change a key on you to spy on your messages, they will start by pushing a routine update to take out the warning and - assuming some cooperation from the app stores - that update could be sent only to you or your contacts.


> And that just doesn't work if the provider controls the client software.

The web platform is now capable of making this unnecessary; we just need to build solutions that take advantage of it. The coupling of the service and the client (web/mobile apps) is something that should be challenged in "web 3.0" designs.


> I’m excited to launch this product and the company behind it

OK, so it's kind of an ad, although no product yet.

He makes an interesting point. If you're going to have home servers so you own your data, they need to be in something you own and control that's always plugged into power. There's the router, but you usually don't own that. It belongs to the telco or the cable company. A "speaker", i.e. Alexa and friends? Amazon controls that. The TV? That's a slave to the TV maker, Google, or the cable company. The doorbell? That's run by Amazon and the cops. The refrigerator?

Apple used to sell a box, their "Time Capsule", which was a home server. But Apple discontinued that. Apple became so cloud-oriented that their thermostat has to contact an external server to get the outside temperature.

Most people now have no home electronics that aren't slaves to someone else. Where do you put a home server?


> Apple used to sell a box, their "Time Capsule", which was a home server. But Apple discontinued that

Because no one is interested in running their own server.

> Apple became so cloud-oriented that their thermostat has to contact an external server to get the outside temperature

Apple doesn't make a thermostat, but they, along with many others, are moving to Matter: an interoperable, secure, private, open standard, i.e. the very opposite of being cloud-oriented.


>> Apple doesn't make a thermostat, but they, along with many others, are moving to Matter: an interoperable, secure, private, open standard, i.e. the very opposite of being cloud-oriented.

Matter (formerly CHIP)? The protocol whose identity layer is allegedly based on a blockchain [0]? I think it needs the cloud, in the same way Moxie describes it.

[0] https://staceyoniot.com/blockchain-101-how-distributed-trust...


I have not read the specification of Matter (which is a joint standard). But Apple's custom HomeKit protocol was designed to work without any cloud service. I use it without any cloud connection; I can't control my heater or lights while on the go, but over WiFi at home it works without any central server.


> Because no one is interesting in running their own server.

"Home facing" servers and "network facing" servers are not the same thing at all. I'm not saying that implies a completely different attitude towards using them, but the differences are really important. If your social media identity lived on your own network-facing server, rather than FB/Twitter/Twitch/Discord/etc. servers, you may well feel differently about running it.


This is a question of migration cost. I am totally fine with renting my server at AWS if I can move it to another provider at any time.

The goal of decentralization is ownership of your data and services. Centralization only becomes a problem if there is no competition and a company has too much power.


For me, the problem with centralization is the gatekeeping.

It often seems that when an API is used to migrate to a new system, the API gets turned off. That's if there was an API at all.

If the centralized service is an easily replaceable commodity... I'm not super concerned.


> Apple became so cloud-oriented that their thermostat has to contact an external server to get the outside temperature.

Did you mean Google with their Nest devices?

Apple don't sell a thermostat. Although I wouldn't be surprised if HomeKit-enabled devices needed to do what you described LOL :-)


For that matter, how is that even a criticism? A thermostat sits on your (interior) wall; if you want it to be aware of the outside temperature for any reason, you either provide additional equipment that consumers need to mount outside their building (in such a way as to get an accurate reading whilst being weatherproof and having reliable power/connectivity)... or you just fetch it from an API based on your postal code like a sane person would.


Sorry, Google owns Nest.


Don’t people here run Raspberry Pis for their home server needs?

As for the general public, they have to trust someone to run their home servers for them, whether that is Google, Apple, Amazon or their friendly neighborhood HN reader. Trusting a megacorp is not necessarily a bad move: while they do take abusive steps to further their bottom line, they are pretty good about security and software updates, and as a customer you are unlikely to experience severe harm like having your bank accounts emptied, despite the privileged access they have (well, except if you buy into the Apple ecosystem, then the emptying of the accounts happens willingly).


Just for pi-hole.


Your phone? Double redundant networking, UPS-protected power supply, and plenty of compute resources. Writing this from an Android with 16 GB of RAM.


Potentially? Yes. In practice, not so much. Do you think we'll get there?


I'm not sure I buy the idea of "people just don't want servers in their homes". I think the people who have figured out how to do it have captured the value. Some are hesitant to buy into that captured ecosystem.

Many people who own 3D printers plug them into a Raspberry Pi running something called "octo-pi". 3D printer people are more technical than most, but they're not always coders or highly skilled IT people. Octo-pi is open source, free as in beer, and free as in speech.

I know A LOT of people who go to people's houses and say "hey, that Alexa thing is cool, but why do you want Amazon to have a mic in your home?". Lots of people see the value; many don't trust it.


> Apple used to sell a box, their "Time Capsule", which was a home server. But Apple discontinued that. Apple became so cloud-oriented that their thermostat has to contact an external server to get the outside temperature.

Time Capsule wasn’t a home server, just a NAS with integrated networking and backup. You couldn’t run software on it.

There has never been an Apple thermostat. Are you confusing Apple with a different company?

There’s nothing strange about a thermostat calling an API to get the outside temperature. Most thermostats aren’t physically able to measure the outside temperature themselves.


> I agree with points 1 and 2, but there’s a conceptual problem with the argument: cryptography cannot impose any limits on an adversary that also controls the client doing the encryption.

> Let’s put it less abstractly.

And then goes on to very concretely criticise WhatsApp and Signal instead of addressing Moxie's point, or how any of the crypto technologies address Moxie's point.

> Quick fixes

> Firstly let’s look at mobile messengers.

Why? Why look at them? If you're addressing Moxie's points, address them; Moxie's article wasn't about the messengers.

> Why don’t non technical users want to run servers? After all, they have done in the past via programs like BitTorrent and Gnutella.

No. No, they haven't. While it was rather widespread, it still wasn't nearly as en masse as people pretend it was. The average non-technical person wasn't running those clients. And, most importantly, the moment iTunes, then Steam, then Spotify appeared, people ditched those torrent clients.

I wonder why /s

> They don’t do that today because the software industry outside of Cupertino isn’t interested in easily letting them do that.

Apple apps are not servers. And users are not running them. What wild logic leaps! Apple's iTunes basically single-handedly reduced the number of people running their own servers, aka torrent clients.

> I’m excited to launch this product and the company behind it,

So: a marketing message hidden behind a rather incoherent "response".


As someone who indulges in both legal and pirated games, the benefit isn't in downloading content, but in the extra features. Stuff like auto-updates, cloud saves, friends lists, etc.

I don't trust any store to keep a list of my games forever; sooner or later they will all fail.


> the benefit isn't in downloading content, but in the extra features. Stuff like auto-updates, cloud saves, friends lists, etc.

Features like downloading content are the main benefit. Or, rather, quick and painless access to content.

iTunes/Apple Music and Spotify never had any friends lists etc. And there are no "auto updates" to music. Same goes for Netflix (and other streaming services). And yet they reduced piracy probably a hundredfold.

Same goes for games. There are many features we associate with Steam these days, but many of those features weren't there for years, and still Steam was extremely successful and also significantly reduced piracy.

People don't want to chase content somewhere and then chase cracks/unlocks that may or may not work, that may or may not contain any number of viruses etc.


> Features like downloading content are the main benefit.

Ease of use, yes. But ease of ownership is much better with pirated content. I can play pirated music where and how I want. I'm not tied to an ecosystem.

Gabe said it best - piracy is a service problem.


Mike's point seems to apply, practically speaking, to any software, including Bitcoin:

- Most users of Bitcoin Wallet download it from the Play Store

- Even if it's open source (as is the Signal client), "nobody's" reading through it and compiling their own builds

This sort of sweeping argument--that e2ee doesn't work because most users don't read source code--applies to everything, and as such isn't at all illuminating.


He’s right that Ethereum is centralized, and “web3” even more so. If AWS disappeared Ethereum would too.

It’s hilariously wrong to say that paying for things with bitcoin is no better now than in the past.

The country of El Salvador adopted it as a currency, and every merchant there has the option of easily taking bitcoin via the Lightning network (and everywhere else as well). Adoption around the world is growing rapidly. It hasn't broken into mainstream awareness yet, but the graph is exponential.

Lightning is one of those technologies that when you use it you feel like “wow it really is the future!”


> If AWS disappeared Ethereum would too.

I'm sorry but no, Ethereum wouldn't disappear if AWS disappeared. Yes, a lot of the Ethereum nodes are hosted with hosting companies (~69% [1]). Yes, a lot of those are hosted with AWS (~44% [2]), but even so, AWS disappearing would not have a huge impact on Ethereum itself.

- [1] - https://www.ethernodes.org/network-types (just nodes that opt in; real numbers differ)

- [2] - https://www.ethernodes.org/networkType/Hosting (just nodes that opt in; real numbers differ)


Even worse, many blockchain nodes are running on AWS. Fortunately we know the problem exists, and there are solutions people are working on to resolve it (for example https://akash.network/).

The good part is that this infrastructure is a commodity and you can move relatively effortlessly.


Hmmm, first he criticizes Moxie for not distinguishing between Bitcoin and Ethereum, and then he throws Signal and WhatsApp in the same bucket...

But otherwise, an okay writeup.


Okay, explain to me: where is the technological difference? Same cryptography, different controlling entity. One controlled by Moxie, one by Facebook. But both force me to update the client sometimes. Why should I trust Moxie more than Facebook?


Why stop there? Why trust your phone OS... When it comes to trust it's turtles all the way down; you have to start somewhere.


Exactly... there is a fine line between paranoia and distrusting everything.


Thanks for the question.

First of all, I trust Moxie because he has a trustworthy history. That's TRUST, and trust is based on gut feeling and history. And there is a difference between the two messengers. WhatsApp isn't open source. They only implement the E2E encryption of Signal, not the whole app. They can still gather metadata. Signal can't.

I hope this clears up my statement. :)

Edit: I don't trust Facebook at all. I hope this shithole goes to hell one day.


> Same cryptography

Not really; e.g. the logic to send messages is different. Does WhatsApp have sealed sender?

> Why should I trust Moxie more than Facebook?

The point of E2EE is that you don't, especially since you can update on your own schedule. There is some trust somewhere ("this version has been released for 3 months, nobody complained, seems fine, should be fine"), but you don't have to trust Moxie.


> But it’s still a big step up and would mean that if Facebook suddenly decided merely blocking forwarding isn’t enough to fight “rumors” or “fake news”, they’d be stopped by the auditors who’d (hopefully) refuse to sign the update that takes out the encryption.

A central auditor, or a bunch of auditors that verify what updates are doing, and work out if they are changing functionality of the protocol?

I mean, that's never going to happen. Writing the constitution/contract for that is pretty hard, not to mention the cost. Plus all of this assumes that users will read the information provided (they won't).

Then there is the verification of trust for the auditors, which is difficult.

I do notice that the author makes the bold claim that the Signal protocol doesn't work, as a way of attacking Moxie (fair enough), but then goes on to outline the problems with WhatsApp. Now, Signal Inc. were contracted by WhatsApp to add E2E on WhatsApp's "improved" XMPP.

The claim appears to refer to this:

> In 2019 WhatsApp imposed forwarding limits on messages in order to “slow down the spread of rumors, viral messages, and fake news”. This represents a total defeat of the Signal protocol’s cryptographic objectives.

I mean, that's not really the case. Adding client-side limits to stop forwarding isn't a defeat of encryption. It's a limit on freedom of speech, perhaps, but not on encryption.

But more importantly, the author appears to deliberately conflate the software company with the Signal protocol.


Read: I own a lot of digital tulips.


The tulip bubble is a myth. But the funny thing about bubbles is they have a blow-off top, then deflate.

Bitcoin has had 4 blow-off tops and is at 200% of its last one.

Not fitting the pattern.


> The tulip bubble is a myth

That's news to me. In what way is it a myth?

https://en.wikipedia.org/wiki/Tulip_mania


Read the Modern View section of the wiki page. tl;dr: it's massively overstated and the actual impact was completely negligible. Not quite the same as the $2 trillion global crypto market today. Any comparison of the two is just not done in good faith.


That's more true when taken in isolation, but I read about it as part of "Extraordinary Popular Delusions and the Madness of Crowds" by Charles MacKay as illustrative of crowd manias, mob mentalities, town scares and herd stampedes, not necessarily as great financial disasters in history. There are plenty of those that didn't involve the madness of crowds.


He states in the article that he currently does not own any cryptocurrencies.


And you really believe that? He’s building a company on top of it.


Ah, the "I don't own any cryptocurrency (but I own a company that owns a lot of it)"


My new company has nothing to do with cryptocurrency.


So is Moxie; does that mean we should blindly disregard everything he thinks about crypto too?


Yes, when there’s absolutely crazy speculation over something with absolutely zero utility, a bunch of charlatans preaching for the web to run on it, and its adherents getting richer doing nothing but talking about it, absolutely yes.


I don't understand why this author is talking about bitcoin, when Moxie's article was about web3. I thought Moxie's article was more about what has happened with the ecosystem, versus what could happen.


What strikes me about these "comments/rebuttals" is that they mostly miss Moxie's main points and underlying critique. Some points are pretty obvious and certainly understood by Moxie...


OK, potentially dumb, non-educated non-programmer comment here, so please be kind

One of the points he makes is that it's not possible to tell if a Signal client is actually built from the source code in question. Is there no hash-based way to check this? I know that code which self-reports wouldn't work (as the 'altered' version could just report the correct hash, as it were), but is there no way to have a third-party app which would produce hashes of installed APKs, and wouldn't be open to such tampering? Or is it always possible to alter that app so that it self-reports correctly, even though it's lying about the hash of the Signal app?


The term you're looking for is reproducible builds; it's unfortunately not trivial and a somewhat rare state of affairs. It's definitely crucial in the long run.
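
The core of the check is blunt: build the app yourself from a given git tag, then compare digests with what the store served you. A rough sketch in Python; paths are hypothetical, and Signal's real process uses an apkdiff script that ignores signing metadata (the store-signed APK can't be bit-identical), which this glosses over.

    # Compare a self-built APK against the one installed from the store.
    # Paths are hypothetical; a real check must first strip signature files.
    import hashlib

    def sha256_of(path: str) -> str:
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    mine = sha256_of("build/outputs/apk/Signal-local.apk")  # built from source
    theirs = sha256_of("Signal-from-device.apk")            # pulled via adb
    print("MATCH" if mine == theirs else "MISMATCH")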


On Android, Signal already offers reproducible builds[1].

[1] https://signal.org/blog/reproducible-android/


I actually found the article he's replying to much more interesting than this piece


> As for Signal it’s at least open source, but there’s no way to check that the client I’m using, or my friend is using, actually matches that source code.

It doesn't matter. Your keyboard isn't open source, and it can transmit everything you type.



The Gboard app doesn't have internet permissions, so no, it can't.


Google Play Services, which is installed on almost every phone out of the box, can, and Gboard can trivially communicate with it. So yes, it can.


> briefly, the client app bootstraps connections to the P2P network as normal but sends a special message saying “please don’t send me the contents of every block or transaction, I only want to see transactions matching a filter”

The gist I got from Moxie's post was that this can't be done on the web itself without an intermediary, and this is very relevant if this is "web3" versus "the blockchain ecosystem". Why can't we have a serverless wallet app on the web?

Even if the underlying blockchain network supports lightweight clients, it is still not eliminating go-betweens in a way that makes it truly decentralized.


We are all tech people here; let's not pretend there is no difference between self-hosting and cloud hosting.

Pretty much everything you do on OpenSea requires indexing/processing; there is no way to do this on the blockchain (atm), so all of these platforms crawl the chain and keep their own indexes.

OpenSea never claimed to be decentralised; it's a centralised marketplace. The difference is, when they deplatform you, all they can do is remove their copy.

They can't remove your NFTs from the blockchain; they can only delist you from their marketplace, and for the purposes of listing/curation you can still see your NFTs on other platforms.


> When they deplatform you, all they can do is remove their copy.

> They cant remove your nfts from the blockchain, they can delist you from the blockain and for the purposes of listing/curation, you can still see your nfts on other platforms.

The original article acknowledges this but also complains that there is no easy way to access your NFTs once they are gone from OpenSea. All the major wallets use OpenSea as a single source of truth instead of pulling data directly from the chain. Defeats the point.


> there is no easy way to access your NFTs once they are gone from OpenSea

You go to Etherscan for the relevant smart contract, query the metadata URI for your token, and voila, you have it. Writing a new page to do this is a 5 minute task.
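
Something like this with web3.py, say; the RPC endpoint and contract address below are placeholders:

    # Ask the ERC-721 contract directly for a token's metadata URI; no
    # marketplace involved. Endpoint and address are placeholders.
    from web3 import Web3

    w3 = Web3(Web3.HTTPProvider("https://example-rpc.invalid"))  # any ETH RPC

    ERC721_ABI = [{
        "name": "tokenURI",
        "type": "function",
        "stateMutability": "view",
        "inputs": [{"name": "tokenId", "type": "uint256"}],
        "outputs": [{"name": "", "type": "string"}],
    }]

    contract = w3.eth.contract(
        address=Web3.to_checksum_address("0x0000000000000000000000000000000000000000"),
        abi=ERC721_ABI,
    )
    print(contract.functions.tokenURI(1234).call())  # e.g. an ipfs:// URI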


Did you miss the word "easy"? What should the average Joe on their iPhone do to see their NFT if it's removed from OpenSea?


It’s easy enough that if this is a problem you can 1) google it or 2) use a tool that was made in 5 minutes by some dev.

Wallets don’t delete your transaction history. The average user knows their wallet address, and any wallet I’ve ever used 1) doesn’t use OpenSea as a source of anything and 2) keeps track of transactions you’ve performed using that wallet. This makes it easy to find the address of your NFT.

This is analogous to knowing your credit card number and knowing how to look up the merchant of a transaction you made.


People routinely run their own servers. A great example is BitTorrent, where the incentive design forces people to run actual file servers on their computer.

People will run servers on their computer if it is (i) motivated the right way, (ii) as frictionless as possible.

For the types of applications that web2 is great at (content creation / consumption), it makes no sense to run a server of course but computing is a lot bigger than that.


"Routinely run their own servers"

That's doing a lot of work.

Sure, it happens, but the vast majority of humans (including programmers) will not be in that category.


There are more common/accessible setups than just BitTorrent. Remote desktop software, media sharing e.g. Plex media center, doorbell cameras (and many other smart home IoT devices), PS/Xbox remote play, etc.


I’d argue all of those setups combined would amount to less than 1% of internet users, so I’d hardly call them mainstream.


Mainstream != routinely, and yes, I agree.


Most printers and routers are running webservers. Some camera SD cards run servers. My ereader runs a webserver; it works well.


Most bittorrenters leech.


> If an encryption scheme can’t stop infrastructure providers having opinions on the moral value of messages, what’s the point of it?

Good question.


I wish Mike Hearn would rejoin the crypto world, in the Ethereum space.

Ultimately, his enormous contributions to Bitcoin were mostly wasted by Bitcoin's leadership when they prevented Bitcoin from achieving mass adoption, but his talents would not be wasted on Ethereum, which already has a clear path to massive scalability and a pro-adoption leadership.


What does "leadership" mean here? I thought these systems were supposed to be decentralized and trustless?


They're supposed to eventually become leaderless, but they're not for the time being, especially as it relates to protocol evolution.


What incentive do leaders have to give up authority?


It depends on the project and leader. Some leaders are incentivized by altruism, others by their large stake in the token supply, which could appreciate when the protocol becomes more decentralized/immutable.

Some by the inertia and social pressure of community expectations, derived from long-published and widely discussed roadmaps.

Some do not have an incentive to design themselves out of leadership positions.


> The filter in our implementation was a Bloom filter, so you could probabilistically hide what you were interested in.

Very funny; this is not the first time I've heard about the 'Bloom filter' solution. But it does not make much sense in practice: for your payment transactions, do you want the transaction to be 'safe' or 'probabilistically' safe?

Is it on that special day when you do a 500k transfer that the probability will turn against you and you will be stripped of your money?


The Bloom filter in this case was purely a privacy technique. It had no influence on the probability of tx confirmation.
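
A toy version makes the one-sidedness obvious: items you add always match (no false negatives, so no missed payments), while some unrelated items also match (false positives), which is exactly the cover traffic that hides your addresses from the server.

    # Toy Bloom filter: "probabilistic" here buys privacy, not payment risk.
    import hashlib

    class BloomFilter:
        def __init__(self, size_bits=256, hashes=3):
            self.size, self.hashes, self.bits = size_bits, hashes, 0

        def _positions(self, item: bytes):
            for i in range(self.hashes):
                h = hashlib.sha256(bytes([i]) + item).digest()
                yield int.from_bytes(h[:4], "big") % self.size

        def add(self, item: bytes):
            for p in self._positions(item):
                self.bits |= 1 << p

        def might_contain(self, item: bytes) -> bool:
            return all(self.bits & (1 << p) for p in self._positions(item))

    f = BloomFilter()
    f.add(b"my-wallet-address")
    assert f.might_contain(b"my-wallet-address")  # guaranteed: never missed
    # Some unrelated addresses will also match: deliberate cover traffic.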


By creating commitments on state, Ethereum offers something Bitcoin doesn't: the possibility of ultra-light clients, where only the state root needs to be downloaded instead of all the block headers.
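
The mechanics behind that claim, sketched with a plain binary Merkle tree (Ethereum actually uses a Merkle-Patricia trie, but the principle is the same): a client holding only the 32-byte root can verify any single piece of state from a short proof.

    # Verify one leaf against a trusted root using a short Merkle proof.
    import hashlib

    def h(x: bytes) -> bytes:
        return hashlib.sha256(x).digest()

    def verify(leaf, proof, root):
        """proof: list of (sibling_hash, sibling_is_left) pairs, leaf upwards."""
        node = h(leaf)
        for sibling, sibling_is_left in proof:
            node = h(sibling + node) if sibling_is_left else h(node + sibling)
        return node == root

    # Tiny 4-leaf tree; the light client stores only `root`.
    leaves = [h(x) for x in (b"acct-0", b"acct-1", b"acct-2", b"acct-3")]
    n01, n23 = h(leaves[0] + leaves[1]), h(leaves[2] + leaves[3])
    root = h(n01 + n23)
    assert verify(b"acct-2", [(leaves[3], False), (n01, True)], root)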


That's not there, yet. That's one of the main criticisms of the original article.

So many promises but so little actual functionality after a decade for Bitcoin and half a decade for Ethereum.


It’s hilarious to me how the NFT app he built got its published NFT art delisted for “violating terms of service”. That pretty much tells you everything you need to know right there. Again, there’s nothing distributed about Web 3.0 and this new system won’t solve any of the “problems” of the current system, real or imagined.

It’s a common fallacy of tech people to think that all problems are of a technical nature, and that if we could just have the right technology they would go away and we would be living in a utopia. The fact is, the problems are with human behavior, and any technology created will serve to further that behavior. If there's anything the past 20 years have shown us, it's that technology has an uncanny ability to bring out and amplify the worst of human behaviors. Make of that what you will; it's a good thing in some regards but also a bad thing.

Of course, none of this will matter or have anything to do with how successful Web 3.0 will or will not be. Bitcoin never solved any of the problems it was originally intended to solve, and still a lot of people became quite rich from it. So Web 3.0 proponents need not be worried; hype is illogical by nature.


"It’s a common fallacy of tech people to think that all problems are of a technical nature"

Exactly! Computers will be sold somewhere, parts will be bought from certain countries/vendors using money issued by governments, and they'll connect to others over the internet (which again has hundreds of regulations from state/central governments). There are way too many factors to become truly decentralized.


>It’s a common fallacy of tech people to think that all problems are of a technical nature, and that if we could just have the right technology they would go away and we would be living in a utopia. The fact is, the problems are with human behavior and any technology created will serve to further that behavior.

I agree with you. This also applies to the solutions we are looking at to solve climate change.


> Bitcoin never solved any of the problems it was originally intended to solve

Private transactions allowing people to buy drugs off the internet, for example? Darknets actually reduce incentives for drug gangs, at least according to this undercover cop [1] (11th minute, but I recommend watching the whole thing).

[1] https://youtu.be/y_TV4GuXFoA


"Ethereum isn’t actually decentralized" ok so now there's infighting among who's decentralized and who is not. I suppose driving some money away from ETH to other projects is a logical course of action.

Everything else in this post reminds me of the golden rule of game development: to suspend disbelief in order to make the game sound plausible.

Certainly a decentralized internet with increased security and less government control sounds good, and plausible. Will this be delivered by a "respected cryptographer"? I doubt it. It is, however, fun to see how tech-illiterate folks spend money on grand ideas that amount to little more than a Ponzi scheme. I suppose scams had to be elevated to a new level, just like everything else connected.



