> [The internet] can be lent to the propagation of a wonderful sort or the propagation of hatred.
Key word that ruins this sentence: "or".
The internet can be, and is, lent to propagation of both wonder and hatred.
If you want to avoid hatred, you don't get rid of it, you categorize it. Give hatred its own space separate from wonder, and wonder can flourish.
Hatred and wonder are, in fact, tightly related. More often than not, wonderful things are created out of hatred for the status quo. Without hatred, how can we progress?
This is why the internet must be neutral and open, why free speech must be protected no matter how hateful or obscene people are. To do otherwise is to limit wonder.
> More often than not, wonderful things are created out of hatred for the status quo.
Example? Cos I'm not a fan of the rationalisation of the World Wars as "okay but it did advance technology a lot."
Hatred is what imbalances the status quo. In fact when hatred becomes the status quo, then we've lost the plot as humanity.
The Internet is a beautiful idea, but as an ever-evolving real world system, we can no longer afford to idealise it. We do have responsibility to steer it somehow, in order to battle entropy.
In the idealised Internet, free speech is truly free. But when we talk about the Internet now, is it really the one that we dream it to be? Most people on the Internet are actually spending their time in a small number of ecosystems, like Facebook, Google, Twitter, YouTube. Each of these has its own set of rules, which ultimately defines how we communicate and therefore the limitations. I daresay that currently they are fertile ground for hatred - and it's not entirely their fault, it's just the nature of how these systems work.
I guess I'm still fresh from watching Silicon Valley's latest episode, but the way I see it, there is no quick remedy to tackle hatred - we need an Internet 2.0 to give ourselves another chance to build a much more neutral and open net.
Wars are not "hatred". They are action. Specifically, action driven by hatred.
The internet is not a collection of action, rather, it is a collection of information.
If we do anything to restrict information, that very action restricts innovation and understanding.
> Most people on the Internet are actually spending their time in a small number of ecosystems, like Facebook, Google, Twitter, YouTube.
The very fact that most internet users spend their time on centralized networks lends itself as a perfect example of how controlled but uncategorized information networks are detrimental to discussion. Imagine all of Reddit were on one subreddit. The lack of categorization would lead to an unruly and useless platform for communication. The reason Reddit is so successful is that it promotes specific conversation about any topic, without restriction. The only way that is feasible is through categorization, via subreddits.
> we need an Internet 2.0
We need an "internet 2.0" to promote the ecosystem that existed with "internet 0.1". We need decentralization (like ZeroNet, IPFS, etc.) to promote free, categorized distribution of information.
Wars are actions driven by hatred when hatred becomes the status quo.
Hatred can be the status quo of the Internet. The Internet may just be a collection of information, but it is continually being shaped by user actions. The Internet is also increasingly taking a large part of our lives, to the point that what happens there also affects the "real" world. We effectively live in two worlds now which influence each other.
And like all systems, the Internet is subject to entropy, and to an extent, I agree with you: restricting information is harsh; a better design of the Internet that leads to better presentation of information is much preferable.
But what this Internet 2.0 looks like, to be honest I have no idea. I see your point about categorisation, but even Reddit can spill into other ecosystems which may have less categorisation, like a meme gone viral on Twitter. Most of these ecosystems are designed with a strong characteristic for virality built in - connectedness that is further enhanced by a sophisticated understanding of human psychology and targeted applications (e.g. personalisation). (Dear lord that is mouthy.)
And that's the big question really. With super connectedness, and entropy, is it inevitable then that hatred eventually becomes the status quo?
> even Reddit can spill into other ecosystems which may have less categorisation, like a meme gone viral on Twitter.
That is exactly the problem. The most popular communication platforms on the internet are Facebook and Twitter. These platforms do not have enough categorization. Because of that fact, those who choose to get information through these platforms are vulnerable to fake news, targeted advertising, gossip, etc.
We need a decentralized internet to protect from censorship, but we also need decentralized platforms to categorize information (and communication) and thereby promote education and creation.
I have a feeling that we agree on the issue at heart. Thanks for being courteous and responding back :)
Internet 2.0 is something that I struggle with; there are just way too many factors to consider. On paper (or in mind) a decentralised internet looks neat, but making it real is akin to putting a man on Mars. Platforms, for one, are commercialised so that they can afford to be sustained. They are also designed to attract a very large active user base, which is again ultimately commercial - but would the Internet have billions of users today if there were no YouTube, no Facebook?
But I guess the advantage of having Internet 1.0 is that there are many lessons learnt and many good ideas up for grabs. I would also like to see people more free in making decisions and less prone to instant gratification, rather than being stuck in the "hooked" cycles so fine-tuned by current platforms. But, sigh, that also means a more mature world. I know many who overlook Hacker News by virtue of its "old fashioned" UI...
There are minimal, necessary limits to free speech after it happens: shutting down child pornographers, violent extremists, 0day exploits and other sites that are obviously, imminently dangerous. Other than that, Neo-Nazis and other undesirable/kook groups must be allowed to gather and speak in order to consistently protect and defend the principles of open society.
> Most people on the Internet are actually spending their time in a small number of ecosystems, like Facebook, Google, Twitter, YouTube. Each of these has its own set of rules, which ultimately defines how we communicate and therefore the limitations.
Or ecosystems like Hacker News. Let's shut down Hacker News. No? Because you like it? Your subjective judgment should rule?
Where was I even subjective there? Of course the way the systems are designed affects how we communicate and how information is made available, whether for good or for bad. Now it's necessary to consider them because hatred has gone viral - Internet 2.0 is one radical solution (whatever it may be), but patching up here and there may be the most realistic solution. Shrugs, I don't know, it's an imagined WIP in my head too. But we have to drop the utopian view in order to get closer to fixing it.
(Apologies if this comes across as a harsh tone - it's not meant at all.)
How will it be different this time? I recall all the hype of the late-90s Internet, being an open, decentralized and democratized communications medium that would heal all wounds and, more importantly, treat censorship as damage and route around it ("International borders are mere speed bumps in the Information Superhighway").
Well, if only categorisation of extremism online were better. Recently I was searching YouTube with the word "Lokiarchaeota", an exciting very recent find in the origin of the eukaryotes, hoping for a scientific lecture, and was mostly getting creationist preachers as results.
Who the hell searches for priests by naming obscure microbes? And sadly this is hardly unique; I've been bombarded by UFOs, reptilian aliens and similar outlandish nonsense in search of factual science regularly. And it's clearly not doing a reasonable job at predicting my interests at all, for those are not items I'd view.
Why Google's imbecilic algorithms promote and push such extremist trash on the general populace is beyond me, degenerating many neutral search results pages to worse than reading the yellow press, but this does sadly seem to be what's currently happening, and would reasonably lead many away from the wonders that the Internet could offer instead.
This is a perfect example of how centralization is harmful.
YouTube has too much content and too little effort to categorize it. YouTube's content skews toward whatever is most appealing to the widest audience. Naturally the extreme, and even the absurd, are most at home there.
I think you are taking the word "hate" out of the context intended by Mr. Berners-Lee. He wasn't talking about hateful in terms of hateful of the status quo. He was talking about hateful in terms of something like Mein Kampf. That's the sort of thing people mean when we talk about "hatred" in this context.
I think we can progress just fine without Mein Kampf or that particular brand of hate. If a similar book with similar content were to be published today, I see no harm in suppressing it, and I can think of no harm that could be pointed out without resorting to the slippery slope fallacy.
Would the world be a much worse place if the German authorities had censored Mein Kampf? I find such an argument difficult to swallow.
> I think you are taking the word "hate" out of the context intended by Mr. Berners-Lee.
Definitely not. I did say that, more often than not, hatred is the driving factor for positive change. I did not say "always".
> He was talking about hateful in terms of something like Mein Kampf.
Clearly the actions of the Nazi party or by Hitler were awful. Don't forget that Hitler was only a man. He had a hatred of the status quo, just like many of us do. His hatred was misguided and his actions were immoral, but they weren't from his perspective.
> If a similar book with similar content was to be published today, I see no harm in suppressing it
That is where you and I disagree. The harm in suppressing any information is that we chill the distribution of all information.
> Would the world be a much worse place if the German authorities had censored Mein Kampf
That is very unlikely. Mein Kampf was just a book. It was not Hitler; it was not a member of the Nazi party.
I would argue that Germany would have been a worse place, sooner. Censorship chills speech, not hatred, not action. Hitler would still be free to spread his ideas, the Nazi party would still come to power, and they would still go on to burn Germany, books and people alike.
If you want to prevent action that is based on hateful ideas, do it through education. Don't stifle wonder by breaking the very way it is shared.
I thought the interview started a bit slow, but I liked the parts regarding "fake news":
Döpfner: Right. But what does that mean for fake news?
Berners-Lee: What’s the fake news? How do they define fake news in this case?
Döpfner: Well, that’s already where the problem starts. That means that, in the end, Facebook or another social media platform can define what fake news is, and I think it should always be the prosecutor and not a private company. Facebook should be a neutral platform. People exchange all kinds of things, good and bad, truth and lies. However, only if something is illegal, the prosecutor should intervene. If Facebook with its almost two billion users – ushered by well-intentioned politicians – morphs into a universal media monopoly making editorial decisions and even judging on who gets to read what, then we have a problem. And it is exacerbated if that happens in a closed system.
...
Döpfner: Again: How about the whole fake news phenomenon? How should society deal with that? Should publishers help Facebook to correct their fake news stories, or is it basically more about media education, that people should learn that not everything that is distributed on a social media platform is necessarily true?
Berners-Lee: I think we have to be very scientific and look at how these systems interact. The Macedonian websites we talked about earlier generated ad revenue by making up things about the US election, and affected that election even though their motivation was not at all political, only commercial. The ad network trained them that lies can generate more revenue than truth. As a system engineer, I look at that and say, "Well, now, something’s broken here." So one of the possibilities is for Google at that point to tweak the way they reimburse people not just on the clicks, but on some other function.
Döpfner: So you could say the fact that Facebook is incentivizing people to distribute bad content, fake news, has to be changed in the very interest of the social media platform.
Berners-Lee: The guys in Veles, Macedonia, they were independent, they were not on Facebook. They were only on websites. They pointed at them with Twitter. So it wasn’t a Facebook phenomenon.
There’s a separate Facebook phenomenon which I’m told was an important factor in the election. That is targeted advertising. Targeted advertising on social networks is very effective. There’s a blog. He claimed that what they did was divide the entire American voting population into 32 different subtypes, like Myers-Briggs, different sorts of people, different demographics, and so they could then send targeted information.
So to one group with children, you’d say, "Our candidate is going to fight for education." Then there’s a group without children, and they can say, "Our candidate is going to save money by cutting down education." Because it’s targeted advertising, it’s not public, so nobody can check.
So one simple rule could be that you could say, actually, targeted advertising by political bodies is not democratic. Therefore, from now on, if you want to advertise as a political person, you have to say the same thing to everybody, because that’s what democracy is.
There are several types of "fake news" which are often --- and sometimes intentionally --- conflated. As the interview does, I think it's necessary to distinguish "click bait" (which has a primarily monetary incentive) from "political propaganda" (which has a primarily social agenda).
But not all "click bait" and "political propaganda" is false, and not all that are false are false in the same way. It seems also worth distinguishing "lies by omission" (biased reporting) from "lies of whole cloth" (truly "fake" news). I'm doubtful that there is any algorithmic way of distinguishing these, or any legal definition that is not ripe for abuse.
I'm also dubious about the claim that "something's broken here," at least from a business perspective. Google and Facebook are making tons of money with the current system. It seems likely that any change would reduce their profits. Other than "don't be evil" (for Google at least), what's their incentive to change? As a society, what can we change so that such an incentive exists, and is stronger than the currently dominant ones?
There's an increasingly blurry line between "click bait" and "fake news" with political intent, as they've often been two shades of the same thing.
Tabloids have been making shit up about celebrities for decades. Brad Pitt has cancer! No, wait, he murdered a puppy! No, he kicked an orphan in the face! These things sell papers.
Then people realized the same thing works in politics: Hillary Clinton has cancer! She murdered a puppy! She kicked an orphan in the face! It's a proven formula, people click it, and you will make money given how things are monetized these days.
Attention, any attention, is something you can monetize.
That's the problem. Advertisers don't give a fuck. When they start to pay attention to where they're spending their money they might wise up and deprive these "fake news" sources of any incentive.
Not only have tabloids been doing it, but all journalism--just look at the 1890s and how "fake" the news was. The sinking of the USS Maine?
Also, look at state propaganda in the media in the 1930s and 1940s and even during Vietnam--the contrived Gulf of Tonkin incident as a means of generating public support for expanding the war to include mainline ground troops.
I have to conclude that all journalism is mostly lies and state propaganda is behind much of it.
I should add that I was a sports writer for a local newspaper many years ago and that gig was definitely an exercise in embellishment and outright fabrication at times.
What I find most egregious about the recent "fake news" bs is that now private Internet companies deign to tell you what is and isn't truthful information and we just watched how these same companies sided with one and only one presidential candidate. The problem was, even they couldn't carry that side of beef over the finish line!
> Not only have tabloids been doing it, but all journalism--just look at the 1890s and how "fake" news was.
Except now there's so much happening so fast, we can't keep up! Plus with "personalisation", no one's experience is the same. What a headache for future historians.
There are still a few that take journalism seriously; the Guardian is one. But none can claim absolute truth or neutrality. So, fake or not, I guess we still have to resort to multiple sources - or even better, stick our heads in the sand until the madness is over!
(That may or may not be taken as tongue in cheek..)
"My uncle once lied about that fish he says he caught. Clearly the man has no morals and should never be trusted."
Everything anyone says should be treated with a critical eye and a healthy dose of skepticism. Nobody who's ever achieved anything significant has done it without making mistakes.
The problem is when you throw up your hands and declare the New York Times to be on the same level as the National Enquirer.
Statistics are important here. A source that's reliable 99 times out of 100 is more trustworthy than one that's reliable 1 in 100 times.
Our kids are not being taught that skepticism is a good thing. I say this as a skeptic who tries to undo the damage our society has done in the constant wash of advertising and school "everyone wins" nonsense.
I praise my kids a lot, but I do it for meaningful things (or steps towards) that they do and for exhibiting the principles that will strengthen them as adults.
I urge my kids to practice skepticism to question the nature of things and people.
That's why, as a responsible parent, you should slip a few harmless falsehoods into their diet to see if they can pick them out.
It's also educational to play games where two or three people tell a different version of the "truth" and you have to guess which one is actually right.
Schools are sadly suffering a deficit of critical thinking. The American ones in particular do not reward it at all, if anything it's punished. You must answer according to the textbook or you lose marks. Such is a test-driven education system.
I'd argue intensity is as, if not more, important than just "statistics." A source that tells one lie that starts a war is much worse IMO than one that tells 99 lies about celebrity pets.
>I have to conclude that all journalism is mostly lies and state propaganda is behind much of it.
non sequitur
From a few chosen examples, it's a massive stretch to say journalism is "mostly lies".
It's a further stretch to talk about state propaganda without distinguishing between various topics of journalism (political, sports, tech, economic, etc) not to mention source and past performance.
It's older than all of that, even. The practice of "say or promise anything that will earn followers, acceptance, or profit" has a long and storied history under the brand of organized religion.
One thing that I noticed during the 2016 US elections was how difficult it was to find direct information from a candidate, especially in local government races.
Most people running for governor, congress, etc. did not have a basic website explaining their stance on certain issues, etc. Some of them only had a facebook page with a series of memes.
What we need to do to fight "fake news" and targeted advertising is to fight for the source of real news. We need to stop spreading information via opinionated articles when the actual information is salient enough. We need to reject advertised news regardless of whether it is fake or not, because even the truth is used to mislead an audience.
Thanks for the summary. That's exactly the problem I saw with FB... And I think Tim's proposal is a very simple and effective solution (require all sharing of political news to be public). That way, if FB didn't do the warning themselves, someone else could. And lies by omission could be addressed too.
Yeah I'm thinking the "something" that's broken here is the meat-based portion of the system - the brain of the poor ape homo sapiens. Centuries of "progress" have trained us to get pretty good at pretending we're smart. But we still laugh at fart jokes, and we still worship vacuous celebrities, and "...lies can generate more revenue than truth."
IIRC, the more sensible ones (at least a few in Europe) reported that "Powell said he had proof" of WMDs in Iraq. So yes, they reported on it, and no, not every publication about WMDs in Iraq was automatically fake news.