This same kind of thought process--that one should strive not for "do no evil" but for "can't do evil"--applies quite generally: amassing control over other people and their resources (money, data, whatever) is always going to be dangerous.
Maybe you are good today, but in the future you might start to be swayed by changing incentives or situations due to forces such as "absolutely power corrupts absolutely".
Or maybe you manage to always be good, but--as humans have fixed life spans--eventually retire or die or simply move on and are replaced by someone who is less good than you are.
Or maybe you are good but the power you manage to concentrate gets stolen by someone (in the digital world, maybe you get hacked) and used without your permission to do bad things.
Or maybe you want to be good, but your power is seen as an asset for something external--such as a government--and you end up being required to do bad things that make you sad.
We see all of these issues play out constantly with large tech companies, with control mechanisms such as curated application markets getting abused as anti-competitive measures or getting co-opted by authoritarian governments as tools for their regimes.
In 2017, I gave a talk at Mozilla Privacy Lab that looked at many of these issues, citing tons of situations--every slide is a screenshot of a news source, as somehow people always want to believe these situations are far-fetched--where having control has gone badly:

https://youtu.be/vsazo-Gs7ms
Ugh. I just barely missed the window to fix that "absolutely power corrupts absolutely" typo; in the process, I was also breaking that paragraph into two, as I think this better explains my vague "changing incentives or situations". Here, improved:
> Maybe you are good today, but--as "absolute power corrupts absolutely"--you might find yourself incrementally abusing your power or even "selling out" in the future without even feeling yourself change.
> Or maybe you always do good things, but due to a tragic change of circumstances--such as an expensive health issue with a family member--it becomes difficult to decide what being good means for you.
> Or maybe you manage to be good "forever", ...
> Or maybe even your heirs are all truly good, but the power you managed to concentrate ...
If users agree that the thing you have built is broken and want to opt into using a new, less-broken thing, they can always do that, as would be the case (say) with software they download from you and run on their own computers (rather than software you host on your server and can change at will). You don't need the power to "reach your grubby mitts"--to put it bluntly--into their lives and fix what you built on their behalf in order for broken things to be fixed, and your definition of what is "broken" can easily be at odds with the user's preferences or even needs (which of course begs the entire question of how to avoid the incentive to be evil in the first place).
> If users ... want to opt into using a new less-broken thing, they can always do that
Users don't necessarily have that much agency. The most successful and powerful businesses don't need to satisfy user preferences because the users are FORCED to conform THEMSELVES to the constraints imposed by the business.
I certainly understand that wanting to amass power over others (and then, likely, eventually being evil) is both extremely common and extremely profitable... but the argument here is that it should also be frowned upon, avoided, and quite possibly even regulated out of existence. I therefore don't understand why you are re-asserting the status quo :(.
I think the point is that the alternative where the developer needs the user's permission in order to push out an update that the user actually wants is already the scenario where the user is in control, as opposed to the status quo.
You don't need the ability to force something on them that they actually want, so the ability to force something on them can only be used for evil. The good update would be accepted without coercion.
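As a minimal sketch of that consent-based model (assuming Python with the PyNaCl library; the update flow and all names here are hypothetical): the developer can only offer a signed update, and the client verifies it and then lets the user decide.

    from nacl.signing import SigningKey

    # Developer side: sign the release once, at publish time.
    dev_key = SigningKey.generate()
    signed_update = dev_key.sign(b"app-v2.0.0 release bytes ...")

    # Client side: ships with only the developer's verify key.
    verify_key = dev_key.verify_key

    def maybe_apply_update(signed_blob, user_approves):
        # Raises nacl.exceptions.BadSignatureError if the blob was forged.
        update = verify_key.verify(signed_blob)
        if user_approves(update):   # the user, not the developer, decides
            return update           # caller goes on to install it
        return None                 # declining must always remain possible

    applied = maybe_apply_update(signed_update, lambda update: True)

The point of the sketch is that there is no code path by which the developer can change the installed software without the user-approval hook returning True.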
I wasn't talking about updates. The user maybe hates using the platform at all. Or maybe doesn't think about it. But either way, knows there's no choice.
Network effect, or else some other monopoly, determines where you have to go to find jobs, where you go to meet girls, what kind of currency you have to pretend has value, etc.
Yes - even in what language you speak, what sort of fashion is appropriate, etc. To put it bluntly: because humans only want to interact with other humans, people will always be forced to change their ways and adapt to whatever the status quo is in order to participate in that human-to-human interaction (lest you decide to slice your tongue and go live in the woods, isolated from any other human). Without that conformity, nobody would be able to interact at all, as everyone would want to interact in a different way.
The real potential solution to all this is standardization. It's conformity, but only on the communication layer itself and not the backend. For example, in online dating, if you want to use Tinder, go for it, but someone else might want to use something else while still matching with people who use Tinder.

This is, of course, entirely impractical as of now given all of the trade-offs nobody is willing to accept (giving up user data, DoS threats, general increased infra costs with no real ROI), but perhaps a stopgap solution is to have a regulating body determine standards for specific types of services (eg. short-form blog platforms like Twitter) and require products to conform to the shared standard if they offer a similar enough experience to existing products in the category. This would still allow innovation in new areas, while ensuring giants can't form a monopoly via network effects in easy-enough-to-replicate platforms.

This still leaves the issue of "all my users' PII is in 1000 different companies' servers", so maybe the body would have to vet & impose legal data restrictions on companies that want to federate with the incumbents, but that would cut against the goal of allowing new players to enter the market with low friction and would further entrench the dominance of these incumbents.
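A toy illustration of what "conformity on the communication layer only" could look like: two services keep whatever backends they like but exchange posts in one shared interchange format. This is only a sketch; the format and field names are invented for the example.

    import json

    # Hypothetical shared interchange standard; field names are invented.
    STANDARD_FIELDS = {"author", "body", "timestamp"}

    def export_post(author, body, timestamp):
        # Serialize a post into the shared wire format.
        return json.dumps({"author": author, "body": body, "timestamp": timestamp})

    def import_post(wire):
        # Any conforming service can ingest a post, whatever its backend is.
        post = json.loads(wire)
        assert STANDARD_FIELDS <= post.keys(), "non-conforming peer"
        return post

    # Service A (say, SQL-backed) emits; service B (anything else) ingests.
    wire = export_post("alice", "hello from a rival platform", 1234567890)
    print(import_post(wire)["body"])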
> (giving up user data, DoS threats, general increased infra costs with no real ROI)
I'm starting to think the best way to solve this is to bifurcate it.
Create a P2P system which is free-as-in-beer but is possibly susceptible to DoS attacks etc., then have an option which is more robust than that where you pay Cloudflare or someone to host your data in exchange for money, but it's still part of the same network and interacting with the same people.
That way you can acquire new users without shoving a payment prompt in their face on day one but someone who just wants to hand a fistful of cash to a third party to take care of their problems still has that option.
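One hedged sketch of how the two tiers could coexist on one network: address content by its hash, so the same blob is valid whether a volunteer peer serves it for free or a paid host does. Everything below is hypothetical and in-memory, just to show the shape of the idea.

    import hashlib

    class Backend:
        # Stand-in for either a free peer or a paid host; same interface.
        def __init__(self):
            self.blobs = {}
        def put(self, data):
            cid = hashlib.sha256(data).hexdigest()   # content ID = hash
            self.blobs[cid] = data
            return cid
        def get(self, cid):
            return self.blobs.get(cid)

    free_peer = Backend()   # free-as-in-beer, best-effort, maybe DoS-able
    paid_host = Backend()   # e.g. a provider you pay to stay online

    cid = free_peer.put(b"my post")
    paid_host.put(b"my post")   # same bytes -> same content ID

    # A reader tries any backend; the hash proves the data is authentic.
    for node in (paid_host, free_peer):
        data = node.get(cid)
        if data is not None and hashlib.sha256(data).hexdigest() == cid:
            break

Because the content ID doesn't care who served the bytes, a user can start on the free tier and hand a host cash later without changing networks.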
> This still leaves the issue of "all my users' PII is in 1000 different companies' servers", so maybe the body would have to vet & impose legal data restrictions on companies that want to federate with the incumbents, but that would cut against the goal of allowing new players to enter the market with low friction and would further entrench the dominance of these incumbents.
This is the kind of problem that gets solved much better by smart cryptographers than government bureaucrats.
Encrypt the data. Use a kind of cryptosystem where only the people who are supposed to be able to have it (i.e. your friends, not Mark Zuckerberg) can decrypt it.
Now some servers can host it so it's not offline when your phone is, but those servers can't read it, only the intended recipient(s) can.
Obviously this is only even necessary for posts that aren't intended to be completely public.
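A rough sketch of that scheme, assuming Python with the PyNaCl library (key distribution, i.e. how you learn a friend's public key, is out of scope here): the author encrypts to the friend's public key, so the hosting server stores only ciphertext.

    from nacl.public import PrivateKey, SealedBox

    friend_key = PrivateKey.generate()   # the friend alone holds this secret
    post = b"a not-completely-public post"

    # Author side: encrypt to the friend's *public* key before uploading.
    ciphertext = SealedBox(friend_key.public_key).encrypt(post)

    # The server stores only `ciphertext`; without the secret key it learns
    # nothing beyond the ciphertext's length.

    # Friend side: decrypt locally with the secret key.
    assert SealedBox(friend_key).decrypt(ciphertext) == post

For a post shared with N friends you would encrypt one copy per recipient (or, more efficiently, encrypt the post once with a symmetric key and seal that key to each friend).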
The fact is that you cannot verifiably compare those numbers to traditional finance hacks, because traditional hacks simply aren't transparently reported, or are cleaned up before the damage can be noticed.
Don't fault a good in-built bug bounty for bringing about the next generation of strong, distributed infrastructure.
Traditional finance hacks usually abuse tax / debt of large enough companies. Web3 finance hacks take all available money directly out of your personal account. That's a much worse situation for a normal user, because with a bank you can often say "this is wrong, fix this" and the bank is insured well enough that it will. Compare that to the situation in Venezuela, where the "in-built bug bounty" currently takes a significant amount of people's savings and they have no way to revert it.
The problem is that we've given up our own knowledge and abilities as consumers to complete fucking snakes as people, and they regularly abandon their duties to keep us safe, with varying degrees of recompense/justice actually being served when the consumer gets screwed (less every year, from what I can see).
At least with SuperNewFastDeFi (vs Coinbase), I know I'm using a beta and could get screwed, as opposed to my current "stable, traditional financial institution" which just recently turned off everyone's 2FA during an infra migration, and acted like that's totally okay.
I know my home loan etc won't be T-shirt cannon'd off to even worse snakes as soon as I've left the parking lot.
This shift is bringing many forms of security and agency improvements to the forefront now, and this community, for whatever reasons, sleeps on them.
If you want to be that precise, then you are the 1/population part of the payback. Yes, collectively we each pay a tiny fraction as insurance against any one of us having all our savings wiped in a single hack.
The same happens for defi/crypto, but with no guarantees attached - you pay for the development/support with fees.
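For what it's worth, the "1/population" arithmetic is easy to make concrete. All numbers below are hypothetical, purely to illustrate the expected-value math behind pooled insurance.

    depositors = 10_000_000     # people in the insured pool (hypothetical)
    hack_probability = 0.001    # yearly chance any one account is wiped
    average_loss = 20_000.0     # savings at stake per victim, in USD

    expected_payout = depositors * hack_probability * average_loss
    premium_per_person = expected_payout / depositors
    # = hack_probability * average_loss = $20/year

    print(f"each depositor pays ~${premium_per_person:.2f}/year")
    print("and nobody individually eats the full $20,000 loss")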
This is relying on something akin to a free market for laws: immigrate to a place which has better-designed laws. I find it way too simplistic for real life. Network effects and other switching costs can sustain bad regimes for quite a while. This applies as much to people immigrating across national boundaries as to religions imposing the death penalty for apostasy, or to musicians being forced to be present on Spotify despite its terrible economics.
Should constitutional amendments be outlawed - just because people are "free" to immigrate if they don't like it?
At common law, certain relationships are characterised as fiduciary relationships and come with additional onerous obligations, such as the duty to act in the best interests of the person you are representing, the duty to avoid conflicts of interest, the duty to account for any undisclosed profits, etc.
Though perhaps a slight oversimplification, I would say that a guiding principle for determining whether a relationship is characterised as a fiduciary one is the possibility of control over another person's affairs, either literally (as in the case of a trustee or agent) or because that person is accustomed to placing a high degree of trust in your judgement (attorneys, financial advisors).
In that context, I find this article quite interesting (if a little short). Maybe there is a concept emerging of "data fiduciaries", even if regulators and courts don't yet call it that. It has long been accepted that a financial institution that holds your stocks and bonds has onerous obligations not just to their regulators but to you as their principal. Given how important and valuable data is becoming, people may begin to question why data custodianship should be treated any differently.
Incidentally, in some European asset-backed finance transactions, I have already seen "data trustees" appointed to hold personal data relating to the underlying assets in accordance with applicable data protection laws.
(The specific rules about fiduciaries will vary by jurisdiction, so don't complain if the above is not a perfect description of the rules in your location, though I'd be interested to hear if your rules are fundamentally different.)
Once you collect data, it can become very attractive to various parties, like nation states and snooping employees. Google found this out the hard way multiple times!
Hm it's hard to believe these incidents were so long ago. They were very big news at the time, but I can understand that many people have never heard of them, or forgot.
I'm reading the recent book This Is How They Tell Me the World Ends, which reviews all these incidents. I recommend it for anyone working in software!
Google isn't the main focus of the book, and it comes off looking significantly better than our government ...
----
(1) China hacked Google in 2009, probably through sending phishing e-mails to employees, and was inside for months:
So even if you're a big successful company, you're not likely to defend against the two most powerful governments in the world hacking you. (And this is OUTSIDE all the information you're compelled to reveal by law!)
Google did change many things in response to these incidents (like hiring hundreds of security engineers, Project Zero, etc.), but this is evidence of the point. If you collect a lot of valuable data, then you will need to spend a lot of money securing it, and you'll probably get hacked to some degree anyway.
Also it might be worthwhile to look at past Hacker News discussions on these topics:
Well, yes, in the money area. It's generally accepted in finance that custody implies responsibility. It's taken a while for that to penetrate to the crypto sector. The earlier players, lacking assets, desperately tried to evade their responsibilities for other people's money. Now there are some players you can actually find and sue if they screw up.
Arguably, if you've contracted an anonymous party to custody your funds, there's not a huge implication of responsibility there, besides personal code.
"Here though, I would argue that the forced shift in developers' mindsets, from "I want to control more things just in case" to "I want to control fewer things just in case", also has many positive consequences."
For the US, I think the most expedient route to this climate of fear (caution) that yields positive consequences lies not in government regulation but in removing impediments to private liability, or perhaps creating new sources of private liability. The requisite fear of liability to yield positive consequences is a threat of private litigation, not a threat of fines set by government agencies.
It is not for the government to decide how much "tech" companies should pay for their wrongdoing. It is for the victims of that wrongdoing and a jury of their peers to decide.
It is not users who have pushed us to this breaking point. It is "tech" companies and web developers.
Ah yes, it’s well known that as a single user, I totally have the power to meaningfully affect Facebook/Clearview/Google/etc when they do something to me, or with my data, that I deem unacceptable.
I can’t effectively sue them: they have an army of lawyers and can throw more cash at a case without blinking than I could muster with years of work. They also have near-bulletproof legal clauses. Simply deleting my account doesn’t do a lot against any of them (Facebook’s shadow accounts, Google’s pervasive ad/product network, Clearview having already harvested all the data they need), and they have harvested so much, or have so many users, that the loss of a few over some principle means literally nothing to them.
So yeah, I’m pretty happy for governments to regulate companies into the ground on my behalf: if they demonstrably can’t be trusted to do the right thing, then they can suffer the consequences.
There is already personal liability in the US; in theory you can sue anyone for anything. But in practice that's simply not true: there are entities and people untouchable to those they affect, and that's where the government should step in.
I get this feeling of fear of the unknown from US citizens talking about regulation, like you're just one piece of red tape away from tyranny. But regulations have been making your country work properly since it became a country, and this is just a new frontier of decisions to make about how this stuff works in your society.
"Hence, even though these regulatory changes are arguably not pro-freedom, at least if one is concerned with the freedom of application developers, and the transformation of the internet into a subject of political focus is bound to have many negative knock-on effects, the particular trend of control becoming a liability is in a strange way even more pro-cypherpunk (even if not intentionally!) than policies of maximizing total freedom for application developers would have been. Though the present-day regulatory landscape is very far from an optimal one from the point of view of almost anyone's preferences, it has unintentionally dealt the movement for minimizing unneeded centralization and maximizing users' control of their own assets"
The intent of privacy-protecting legislation like the GDPR was explicitly and intentionally to give individuals / 'natural persons' control and ownership of their own data; it says so in Chapter 1 of the entire thing.
Vitalik is right I think but it's funny how he frames this as some sort of unexpected or revelatory thing. Well functioning ecosystems are created through rules, and people benefit from having the protection of these laws. That the people don't benefit if you let the wolves run around freely in the henhouse is obvious.
I think I was referring there more to all the laws _other_ than GDPR (eg. the more data-nationalist stuff) that nevertheless end up having a similar effect.
I wonder if, in some ways, this might also lead to more centralization. If your answer to "control == liability" is decentralized protocols, the answer is obviously no. But if these protocols either don't exist for your use case, or are not practical, I could also see these efforts leading to things being centralized in the hands of a few large providers who take some of the control and are better positioned to shelter the associated liability.
We already see this with hosting on cloud services, or external auth providers and the like: for small players, it is very unlikely that they're better at e.g. security or availability than Google, AWS, et al. The downside is obviously that this clusters a lot of the risk, and when something does go wrong, it takes many more people down in its wake.
The article sounds good. It depicts how current mainstream business models collect data and maintain control over their users in a way that, it seems, will be legally challenged.
But wait a moment. In exchange, and in an indirect way (the author being the founder of Ethereum), it offers the decentralized-app business model, where app developers / users must pay a fee to participate.
So the current choice is: go develop a business the classic / mainstream / centralized way, and do, or do not, do harm by collecting data and controlling the users. Or create a decentralized app, where the first thing, for you or your users, is to "Get some ETH".
No thanks. I don't think the answer to what FAANG did wrong is for me / my users to pay a fee to develop / use an app. There should be a third way.
> the first thing, for you or your users, is to "Get some ETH".
In a year, it'll be so ubiquitous to have any coin, and the ability to swap it instantly at a digital POS for your purchase, that this will seem hilarious.
This entire community is missing the fact that even MASTERCARD is going to use Ethereum as a settlement layer....
I saw this in Golden Boy episode 1 [NSFW] - where the protagonist used a piece of paper with a keyboard drawn on it.
<spoiler>
In the end, he was able to develop one of the most robust operating systems ever seen in that universe, in a matter of days.
</spoiler>
I guess the reality you live in, where you can somehow develop an app without money, is different from mine. There is absolutely a digital divide across many different aspects of this computing ecosystem, but that's not a reason not to adopt and utilize great technologies that provide transparency and decentralization to the end user.
Frankly, the anti-hacker sentiment I'm seeing lately on HN is unbecoming of the hacker way. Let's try things, break them, try again, and perfect them.
There's no reason to see the glass half empty on every amazing new occasion.
It is interesting that the person contrasts "Get some ETH" with some unnamed alternative, when what is really implied by that alternative is “Give me some data so our company can profit way more than the benefits we provide”.
Consider the life of a cell phone, or any other electronic gadget... it has value because of the utility it offers at first, but that utility declines as it ages, until the device becomes a special form of hazardous waste that you have to pay to dispose of.
Perhaps this analogy of private user data to electronic gadgets is helpful for predicting the future? Public data is another matter altogether.
I know that the European DPA/GDPR has made being a data controller a non-trivial thing for several years now.
I’m working on a social media-style app that needs to take very good care of its member data. Many folks here would probably be appalled at the measures I’m taking.