Hacker News
Internet firms’ legal immunity is under threat (economist.com)
204 points by JumpCrisscross on Feb 13, 2017 | 131 comments



And yet phone companies and postal services aren't liable for what people send using their services. The big difference of course is that digital services are much closer in effect to being a broadcast service, and the costs are much much lower than phones or snail mail.

So to call these policies exceptionalism is bending the truth; in fact they are a faulty, leaky middle ground between old-world information transmission systems: private (phones), semi-public (snail mail; think spam, political pamphlets), and public (mainstream media).

We live in an imperfect world and this is yet another example. The pendulum will nevertheless swing the other way for a while.


> digital services are much closer in effect to being a broadcast service

One of the points from the article is that many of these players (e.g. Facebook, Uber, etc.) are not simple broadcast services, but select content from an available pool for users. It seems like 2010 Facebook, with its simple feed algorithm, could much more credibly claim to be an impartial carrier of content à la the postal service.

I wonder if the influence of a platform on content selection will become legally important for regulation.


It ultimately comes down to who makes the rules and who the companies have to answer to in the end, I think. Though I must say that the article had a rather odd way of stating that, with their use of the word 'sovereignty' in this sentence:

> Governments and courts are chipping away at the sovereignty of internet firms, and public opinion is pushing them to police themselves better.


But digital services do actively censor posts, even if your post is perfectly legal. Postal services do not destroy letters with pictures of nipples in them; just try to post an image of a female breast on Facebook. If they can censor nipples, they can actively filter hate speech or abusive posts just fine. Claiming it's too difficult to monitor posts proactively was probably true in 2005, but not anymore. Right now, platforms like Facebook are having their cake and eating it too.


> they can actively filter hate speech or abusive posts just fine

And how, pray tell, do you identify "hate speech"? Who's the judge? You're going to get a different answer in Iran than you do in Switzerland. Not to mention the obvious: the answer to speech you don't like is more speech, not censorship.


Well, parent's argument is that Facebook is already deciding which speech they don't like, so they're already doing some kind of identification. The question then becomes, can government compel them to change that definition to meet laws and regulations that the govt. wants to enforce?

Companies already cater to national laws, e.g. Twitter censoring accounts in Turkey at the behest of their government.


> more speech

In a technological world this just results in megaphone / spam wars, ultimately destroying the value of communication channels. Almost everyone finds a moderated platform more useful than an unmoderated one.


Disregarding context in a narrow focus image, how can you differentiate between male and female nipples?


Take a look at the following Instagram account, where somebody is posting pictures that are delightfully ambiguous:

https://www.instagram.com/genderless_nipples/

This has been written up on a few news sites, for example:

https://www.theguardian.com/world/2017/jan/19/genderless-nip...


Look at the curvature of the surrounding skin. I bet even a CNN could learn it.
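To make the claim concrete, here is a rough, hypothetical sketch of the idea: a single hand-picked ridge-detecting filter plus a max over its responses, standing in for a trained CNN. All images here are synthetic stand-ins (noise, optionally with a bright curved ridge); a real system would obviously train on labelled photos.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(img, kernel):
    """Naive valid-mode 2-D cross-correlation, the core op of a CNN layer."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def make_image(curved, size=16):
    """Synthetic stand-in data: noise, plus a bright curved ridge if curved."""
    img = rng.normal(0.0, 0.1, (size, size))
    if curved:
        for x in range(size):
            y = int(8 + 3 * np.sin(x / 5))
            img[y, x] += 1.0
    return img

# A fixed horizontal-ridge detector standing in for a learned filter.
ridge_kernel = np.array([[-1, -1, -1],
                         [ 2,  2,  2],
                         [-1, -1, -1]], dtype=float)

def curvature_score(img):
    # Strongest filter response anywhere in the image (conv + max-pool).
    return conv2d(img, ridge_kernel).max()

flat_scores   = [curvature_score(make_image(False)) for _ in range(20)]
curved_scores = [curvature_score(make_image(True))  for _ in range(20)]

# The two score distributions separate cleanly, so a simple threshold
# between them classifies every synthetic sample correctly.
print(max(flat_scores) < min(curved_scores))  # prints True
```

On this toy data a single filter suffices; the point is only that curvature-like local structure is exactly the kind of feature convolutional layers pick up.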


Yes, well, the phone company isn't listening in on what you're saying, deciding whether it's "fake news", and disconnecting you if it is.

Facebook has no one to blame but themselves for taking on the role of "internet police" and giving up their safe harbour provision. You can't have it both ways: you can't be an open platform under the protections of safe harbour and a curated feed with censorship.


Actually, postal companies do filter packages and mail. If you write 'obscene' things on the outside, USPS will keep the package and not deliver it to the destination address, even if requested. IdubbzTV, a YouTube channel that features 'fanmail' videos of an obscene nature, has had a lot of problems with that.


That's still materially different from choosing whether to deliver the package based on its contents, and it's that behavior that jeopardizes Facebook's "safe harbor" status.


Isn't safe harbor something else? Safe harbor explicitly refers to the DMCA, but this article seems to be about an older provision of the Communications Decency Act.


A safe harbor is a law or part of a law that provides immunity from liability or makes legal behavior that could otherwise be considered criminal in certain circumstances. The safe harbors of the DMCA are one example.


I'd add only that all the ground is faulty and leaky. Law is in many ways the application of logic to largely arbitrary social norms qua premises.


"And yet phone companies and postal services aren't liable for what people send using their services. "

Postal service does not read your mail, phone companies (are not supposed to) listen in on your calls.

'Internet companies' deal directly with the content of the material.

Notice that 'email hosts' are not the issue here.

It's Google/FB/Twitter etc..

Not taking any position on the issue, but this is a rather salient difference.


I would argue that companies like Twitter, which exercise substantial censorship, are incurring substantial liability in doing so.

The price of free speech is high, and I think that some of the big firms are about to find that out.

The effect on users could turn out for the better, but I'm not holding my breath.


>And yet phone companies and postal services aren't liable for what people send using their services.

Phone companies and postal services don't directly profit from the contents of conversations, letters, and packages. Google, Facebook, and so on, directly profit from the content, not from providing the service.


Oh, but not so long ago the phone company made more money the more time you spent on a dialed call. They absolutely had an interest in helping you connect and send your message. Phone companies also sold Yellow Pages ads.

Print shops typically have a waiver that you own what you're paying them to print and that you're not publishing illegal content. Then they take that content, print it, and send it to you to mail out or otherwise distribute. In extreme cases they can see up front that you're lying, and will refuse the job. However, FedEx Kinkos in most cases is not liable for the speech you pay them to replicate.


"Phone companies and postal services don't directly profit from the contents of conversations, letters, and packages."

Uh, outside of the postal service, they absolutely do. Do you believe your phone company is not selling data about who you call?

They also track websites, etc, you visit when you use their mobile data networks.


Yes, this is entirely true, hence why it is a leaky analogy. And yes, a growing number of people are upset. But the genie can't be put back in the bottle; there will almost certainly be some pushback, and possibly some regulation that we techies won't like. But the internet tries very hard to route around issues, be they technical, social, or regulatory.

There will always be a dark underbelly because the net was created by, and is used by, us humans. You'll never be able to protect everyone. As with every other protection issue, regulation, light or heavy, comes in waves. Humans, collectively, are notoriously bad at risk management: exigent threats always seem worse, and distant future ones are always discounted.


Counter: if your phone service was free, but you had to listen to a pre-call ad every time you answer the phone, do you think the phone company is now liable for damages caused by the content of the call?


You mean from all the broken windows from having chucked my phone because of the crappy ads?


They don't profit from the content, they profit from the service of providing the content (ads).

Bloomberg is one that profits from the content.


I think that's a broken comparison.

In both cases, the networks benefit from the fact that data/information/voice/messages are flowing over them... almost some sort of effect. ;)

I don't think the content is the important part in either case but the fact that the platform is the place to distribute it.


Well, I tend to think they also have deeper pockets and are more concerned with public perception than phone and mail companies are. Some of it has to do with the fact that people don't see them as indispensable, so they have to avoid being too belligerent in standing up for themselves.


The article regards this change as almost a foregone conclusion, while mentioning that "Internet activists and the firms themselves may deplore" the loss of §230.

If you're reading this and you're in one of these categories, you can do something rather than just deplore the change. For example, get your company to write or sign on to amicus briefs in §230 cases explaining why not being liable for user activity is important to you.

Also, all different kinds of organizations can endorse or advocate for

https://www.manilaprinciples.org/


The immunity must be limited to sites that are neutral, in the same way that non-political religious organizations are tax-exempt. Why? Because to claim that a site isn't responsible for a user's content is fine, until the site starts editing, censoring or weighting certain points of view. When the site loses its neutrality it ceases to be a conduit of content and instead becomes content. Indemnification from liability for a specific point-of-view is, I feel, an abridgment of free speech and I believe it is unconstitutional.


You're essentially restating the concept of Common Carrier status[1], which is the inspiration for what Sec230 erected on the internet for content providers, so YouTube couldn't be charged with Material Support just because someone uploaded ISIS videos.

1. https://en.wikipedia.org/wiki/Common_carrier


But common carriers do have nondiscrimination obligations, while sites that benefit from §230 don't.


I believe that is only to account for editorial policy in a UGC context.


I'm not sure I understand what you're referring to there.

Edit: I mean that being prohibited from editorial interference seems like a pretty long distance from being essentially completely protected in editorial interference.


To be sure, it's not "completely protected," as we've seen with The Pirate Bay, etc., but Common Carrier says the phone company indeed can't refuse to rent you a phone number just because you want to run a sex line on it. That YouTube has protection against people posting videos of themselves speeding or doing drugs (it's always "vices," natch) was a necessary consideration in Sec230.

I guess there are arguments to be made about how far apart these might be conceptually, but what would the country/world look like without Safe Harbor? We may be about to find out.


I'm a huge supporter of the safe harbor and I hope different kinds of communications intermediaries will continue to be free of legal responsibility to know or control what people communicate through their services and platforms... I was just questioning the conceptual part in your observation that the common carrier regime was the inspiration for §230, which I don't see as straightforward because of this difference about content discrimination.


> Indemnification from liability for a specific point-of-view is, I feel, an abridgment of free speech and I believe it is unconstitutional.

Is your concern focused on the idea that the law shouldn't incentivize sites to interfere editorially with users' expression, or some other aspect?


Indemnity is an advantage conveyed by the State. The State ought not provide the advantage to some speech at the expense of other speech.


So in the existing §230 structure, intermediaries are immune from liability related to the speech of their users. However, the intermediaries are still immune if they, for example, remove things they disagree with. That does mean that openly editorially biased sites can and do benefit from the immunity.

This still seems to be point-of-view neutrality on the government's part: an anti-fooist site that removes fooist comments gets the same immunity as a fooist site that removes anti-fooist comments (or a neutral site that removes neither). Is your view that it's wrong for the government to, in a sense, help the fooist site in the first place even though it's equally willing to help the anti-fooist site in the same way? Does that mean that there shouldn't ever be a subsidy for "newspapers" (open to any newspaper regardless of its editorial line or policies), but only for "neutral newspapers" (that don't editorialize)?


I don't think this is the same as a subsidy. It's about immunity from legal consequence.

If a website can curate what is said so that it is visibly filled with defamatory material about a specific target, the organization curating the material maintains complete immunity from what would otherwise destroy a newspaper or any other organization that is actually liable for what is printed in it.

I don't think this should be possible for any organization, regardless of who they are. This immunity should require an extreme impartiality on behalf of the website. It's meant to protect organizations from speech they don't control -- when they assert any control over that speech, it's now their speech.


This is complete nonsense. It's entirely impossible to run a large-scale conversation without stepping in editorially to some degree. The impartiality criterion you are specifying is probably impossible to define sufficiently in practice, and the fact that people may say nasty things on the internet is an acceptable cost of having a free global communication system.


A site like Twitter or Facebook that has millions of users contributing content can edit the feed to convey a particular message by only showing posts that fit. Just like an author who chooses particular quotes that fit an article, but on a larger scale, and the "article" is now made only of quotes.

IOW, the message can be crafted by the platform even if the words are provided by the users.
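The selection-as-speech point can be sketched concretely (all posts, authors, and the keyword below are invented for illustration): a feed that shows only posts matching a chosen slant conveys one message even though every word belongs to a user.

```python
# Hypothetical user posts; every word below belongs to a (fictional) user.
posts = [
    {"author": "alice", "text": "Policy X worked well in my city."},
    {"author": "bob",   "text": "Policy X was a disaster here."},
    {"author": "carol", "text": "Mixed results from policy X so far."},
    {"author": "dave",  "text": "Policy X worked better than expected."},
]

def curated_feed(posts, keyword):
    """Select only posts matching the platform's chosen slant.

    No word here is the platform's own, yet the selection alone
    decides the message the feed as a whole conveys."""
    return [p for p in posts if keyword in p["text"]]

# A feed curated for "worked" surfaces only the favourable takes.
print([p["author"] for p in curated_feed(posts, "worked")])  # prints ['alice', 'dave']
```

The critical reader sees four opinions; the curated reader sees a consensus, and the difference is entirely the platform's filter.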


There is no article. Trying to reduce a many-to-many conversation to a simpler construct in order to justify regulating it is poor analysis.

Platforms may or may not be slanted; people can pick a different platform if they don't like their current one's slant, or make their own. People ought to have no right to force a platform to be neutral in what communication it facilitates between its users, and any attempt to enforce that will inevitably devolve into being slanted towards those with the juice to hire the lawyers to suppress speech.


Whether you call it an article or something else, the feed as a whole, if chosen in a biased way, represents the point of view of the selector.

Just as an article made of quotes would.

And if the message in the feed/article is libelous, it's shameful (though perhaps legal) to hide behind the argument "I was only quoting other people."


Speech without restriction based on content is a right acknowledged by law. Re-framing it as a privilege doesn't change this.


> Because to claim that a site isn't responsible for a user's content is fine, until the site starts editing, censoring or weighting certain points of view.

I have a lot of sympathy for the idea that a neutral service should not automatically be responsible for the content it unknowingly transfers on behalf of a small minority of its users. This principle would support common carrier status for services like post delivery or phone networks.

But I also think we have to be careful not to go too far. The potential damage caused by an online distribution network in terms of sharing material that should not be shared is many orders of magnitude greater in terms of potential reach and rate of distribution than the analogous damage caused by a postal or phone service. We shouldn't assume that a reasonable balance between responsibility and safe harbour in one context is necessarily still a good balance in a different context.

Today's big IT businesses make staggering amounts of money from their online services. Some of those services, like YouTube, include significant amounts of illegal content, and sometimes that content was part of the main attraction that got the service established in the first place. I don't think it's unreasonable to suggest that the operators of such services shouldn't get a complete pass on facilitating substantial amounts of illegal activity just because policing their services more effectively, or at least providing enough resources to respond quickly when they are actively informed of a problem, would be inconvenient or cost them money.

The same goes for disruptive businesses like Airbnb and Uber. I have nothing against the disruption itself, where someone is using innovative business models that take advantage of modern technology to compete with established big players. However, I do have something against innovative business models that offer a certain advantage over incumbents only because they don't follow the same rules as everyone else. If some of those rules are no longer fit for purpose then they should be changed or removed appropriately, but then they should be changed or removed for everyone, too.


> The immunity must be limited to sites that are neutral, in the same way that non-political religious organizations are tax-exempt. Why? Because to claim that a site isn't responsible for a user's content is fine, until the site starts editing, censoring or weighting certain points of view. When the site loses its neutrality it ceases to be a conduit of content and instead becomes content. Indemnification from liability for a specific point-of-view is, I feel, an abridgment of free speech and I believe it is unconstitutional.

1) Yeah that works great until someone uploads content about children being sexually abused and you can't take it down because taking it down is a non-neutral action (censorship).

2) Same, but non-consensual pornography of someone's ex.

3) Same, but advocating a clear and immediate desire to commit violence.

4) Same, but "fighting words" (a well-known exemption to free speech protections when said to someone's face).

5) Same, but obscenity.

There are some very, very massive flaws of that nature with your position and I'd continue but I think you are getting the idea. Such things don't only impact the speaker and therefore create a situation where the provider should (in theory) censor them.

Similarly, such things have been ruled to be outside of the bounds of "free speech" in the US by the judicial system.


Liability is not the same thing as a court order. If you sue someone for libel and win, and the libel is hosted on YouTube, YouTube can't say "nope, Section 230" and keep hosting it. The court can order them to take it down. YouTube just doesn't owe you any damages, you have to take that up with the user.

> Indemnification from liability for a specific point-of-view is, I feel, an abridgment of free speech and I believe it is unconstitutional.

That would be the case if the government was deciding who could be indemnified based on content, but that isn't what's going on.

Consider what you would be doing to search engines. Their entire purpose is to sort content by relevance. There is no opinion-free way to do that, otherwise every search engine would have exactly the same results. You want to impose liability on Google and Bing because they index the whole internet and the internet has bad stuff on it?


> You want to impose liability on Google and Bing because they index the whole internet and the internet has bad stuff on it?

Yes. If you found some, I don't know, child pornography, say, lying in the street, then went around showing it to people saying, "hey, look what I found", that would be considered illegal/repugnant/stupid, right? Why should that same action, analogized to the digital domain, be any less illegal/repugnant/stupid? I don't think it is.


Analogies are often useful tools to explicate things by relating them to already understood concepts, but they are utterly useless for making proofs or arguments, because they oversimplify to the point of uselessness and miss the ways things differ.

These arguments via analogy are so worthless, so without substance, that it is usually a massive waste of time to try to explain to the originator the ways in which the analogy differs from reality, so I will simply ask you to come back with an argument based on reality instead.


Would you also hold the post office responsible if they delivered such material? Or the telephone company?


We aren't discussing one-time delivery but ongoing availability: the material is present on the website and would be delivered for weeks or months before a court order to take it down was received. This is more akin to a broadcast where someone picks the channel (URL) than the example you provided.

That isn't the same thing as a sealed, point-to-point, non-public delivery of a message, and to imply it is equal and equivalent is disingenuous.


There was at one point a tiny number of very expensive-to-run networks, which could reasonably be expected to bear the small cost of putting their money where their mouth was every time they showed someone on TV.

Your concept would in fact either go laughably unenforced or destroy many-to-many communication on the internet as we know it, as there would be no large channels for distributing any information of any sort: showing anything that couldn't be shown on Nickelodeon would be an unacceptable risk in a world where most communication has few viewers and earns little or no money.

Once again I am at a loss to understand what is so bad about the way things are that this seems like a good solution.


I'm pretty sure you completely misunderstood what I said given I was replying to a chain of comments like this one:

> > The immunity must be limited to sites that are neutral, in the same way that non-political religious organizations are tax-exempt. Why? Because to claim that a site isn't responsible for a user's content is fine, until the site starts editing, censoring or weighting certain points of view. When the site loses its neutrality it ceases to be a conduit of content and instead becomes content. Indemnification from liability for a specific point-of-view is, I feel, an abridgment of free speech and I believe it is unconstitutional.


Not sure how an internet service provider could be responsible for "fighting words"; that's a very, very narrow doctrine essentially constrained to yelling something in someone's face that immediately gets you punched.


>non-political religious organizations are tax-exempt.

Religious organisations aren't tax-exempt because they're non-political, they're tax-exempt because they operate as not-for-profit charities.


They lose their tax-exempt status if they become political actors. I can't recall the exact language now because I've been out of that world for a long time.


I believe you're referring to the Johnson Amendment, if you're speaking of the US:

https://en.wikipedia.org/wiki/Johnson_Amendment


Trump has vowed to repeal it.


Previous periods of overbearing law enforcement in the face of rapidly changing technology imho had a lot to do with the flourishing of open source and the multiplication and widespread adoption of federated protocols. Subsequent changes in the political environment (not least the collapse of Soviet-style communism and the end of the Cold War) led to a moderately happy marriage between convenience and consumerism in which the web flourished and provided wins for both consumers and capitalists.

I feel the emerging need now is for lean protocols and tools that allow us to effectively filter unwanted content and to view and manipulate metadata structures, both inherent and emergent. If you're looking for inspiration in places other than the commercial sphere, much interesting work on digital ontologies has been emerging from the EU in recent years.


> If you're looking for inspiration in places other than the commercial sphere, much interesting work on digital ontologies has been emerging from the EU in recent years.

This sounds interesting - do you have any recommended reading / websites here you would suggest?


Here are a few links to get started, but I haven't kept up to date with the field for two or three years, so they're probably not ideal.

http://www.semantic-web-journal.net/system/files/swj329_0.pd...

http://www.mirelproject.eu/members.html

http://eurovoc.europa.eu/drupal/sites/all/files/EuroVocConfe...

I got really into semantic web stuff for a year or so several years back. I was disappointed that SW doesn't seem to have gone anywhere but I still think there's a lot of great work being done that's worth people's attention.


I was briefly concerned before dismissing this all as absurd. Surely we aren't more likely to hold tech companies responsible for the actions of their users than we are to hold gun manufacturers responsible for theirs.


There are reasonable(-ish?) people who do hold gun manufacturers accountable for what their products are used for. That's not absurd at all.


It is actually pretty absurd. Even if you think guns shouldn't be in the hands of most people and that guns lead to bad things happening, it's still ridiculous to go after people for manufacturing legal products just because most of the US won't let you ban them. The desired end is astoundingly unlikely, and it doesn't justify the nonsensical means.

Worse, the lawyers involved in such things know that all it serves to do is suck up money from suckers to enrich the lawyers.


Interesting analogy, but there is a difference. Government can somewhat regulate gun sales (proof of identity, no hospitalization due to mental illness, etc). If you apply that to tech companies, then we are about to see some regulation in who can and can't use those services.


Why do you say that? The NRA is one of the single most powerful special interest groups in the USA. The tech industry doesn't even come close to the NRA's lobbying results.


In 2016 the NRA spent $3,188,000 on lobbying (source: https://www.opensecrets.org/lobby/clientsum.php?id=d00000008...); that doesn't even make the top 20 (it's #156).

Perhaps you meant campaign donations? It donated $1,092,750, making it #427.

It is #8 in outside spending, but its $53 million is nowhere near Priorities USA's $133 million.


He wrote "results", so I think he was pointing to the fact that weapons are still essentially freely available in the USA (at least it looks that way from over here in Europe). This can be seen as really effective lobbying, given how many times it has been argued that weapons shouldn't be so easily available anymore, without any changes to existing laws.


> weapons are still essentially freely available in the USA (at least it looks that way from over here in Europe)

They're really not. Automatic weapons made after 1986 are banned; automatic weapons made before that date are extremely expensive and require permission of local law enforcement and revocation of one's Fourth Amendment rights. Firearms may no longer be sold by mail, as they were for most of our nation's history. Firearms may not be sold by private parties across state lines. All commercial sales require a background check. In many states it's illegal to sell a firearm privately; in some states you can't even legally give a weapon to a family member as a gift. Many states and some localities impose unreasonable restrictions on magazine size and cosmetic features. Some states impose registration requirements, which means that law enforcement has the ability to confiscate all legal weapons at will.

> This can be seen as really effective lobbying given how many times it has been argued that weapons shouldn't be so easily available anymore without any changes to existing laws.

I think the conditions above indicate how ineffective the pro-gun lobby has been, given that almost none of it is constitutional.

(although it's worth noting that for most of its history the NRA was a pro-gun-control organisation)


> The argument that they do not interfere in the kind of content that is shown was a key rationale for exempting them from liability.

It seems like this might be the "correct" point, at least when considering "who" is responsible for content: the degree to which you (the service provider) pick and choose content is the degree to which you are responsible for the effects of showing that content.

In an ideal implementation, such a link also correlates with organizational size: FB of today can both afford to be liable for the content shown, and can afford the work to be responsible about it. FB when it started could not.

A better example might be Tinder: when it started, would it have been reasonable for Tinder to police its users for asshole behavior? Now that Tinder is established, is it reasonable for them not to?


> it carried over to service platforms

No, it never did, Uber just made that up. The whole basis of this article is plain false, no other words for it.

(Ditto Amazon, which has brazenly been selling and even shipping electronic waste that passes no basic safety standards. No uttering of "marketplace seller" changes the legal reality.)


The pivot to Airbnb and Uber is a strange one, that's for sure.

You can draw them all in as part of a more general narrative of technology companies trying to avoid regulations and liabilities faced by their legacy competition, but the article really doesn't do enough to draw any distinction or justify the mention of these companies.


A bit clearer with the full quote:

> Although limiting liability online was intended to protect sites hosting digital content, it carried over to service platforms


The intent of TFA is clearer, but it's still false. AirBnB and Uber didn't get some legal limitation of liability, they just started doing something new and asserted that they weren't liable. Turns out some jurisdictions agree, and others don't. But nobody thought that Common Carrier or the CDA exemptions applied to cars or apartments.


Yeah. Comparing the legal issues that Facebook faces to the legal issues that Uber faces is complete nonsense.


If they shred the legal immunity, the only platforms remaining will be the giants. It would be the ultimate moat for Facebook, Google, etc.

I've been waiting for two decades for the monsters in DC (and their many accomplices) to legislatively make it impossible to wake up in the morning with a normal business idea (not talking Napster here) and decide to just build it without having to go through an endless parade of legal/political/regulatory/licensing concerns. It doesn't appear to be far away now, the government monster is always hungry, always expansive, always looking to dig its claws into any bastions of free movement.

My suggestion to younger entrepreneurs out there: get it while you can. This glorious period of having so much freedom to create/build - no permission required - will probably seem like a distant fantasy in another decade. There is no scenario in which they aren't going to add more and more friction to the process, putting themselves in-between you and building things online as just another layer of control.


Why would this necessarily mean they're getting rid of all legal immunity? I think we have plenty of examples outside of the digital world where legal immunity is maintained.

The digital world currently has an excessively powerful version of this -- even when they're not neutral, they're still immune, which definitely needs to be fixed.


How realistic is it that the legal immunity could actually be taken away? Wouldn't the community, non-profits and big internet companies lobby against this, and call up their representatives and senators, just like how we stopped SOPA and PIPA?

Cases like this are the few I'd approve of corporate lobbying for (which I'm usually against personally). I do not like the idea of censorship myself. Is someone posting hateful stuff? Unfollow or block them. I feel like censorship should be used in rare cases.


> GOOGLE, Facebook and other online giants like to see their rapid rise as the product of their founders’ brilliance. Others argue that their success is more a result of lucky timing and network effects—the economic forces that tend to make bigger firms even bigger.

(Take it easy with that down arrow button :) but yet others see their rapid rise as sponsored fronts for Intelligence.

[p.s. & I would be delighted to be presented with thoughtful replies that show /why/ the above view can not possibly be true.]


I think people are downvoting you because your comment adds nothing to the discussion. Not because they necessarily disagree with you.


How could you possibly say that it is not relevant to OP?

If the handful of uber social network platforms are in fact run by intelligence, then /minimally/ articles such as OP are whitewashing these platforms. More fundamentally, they are just herding us to accept corporate "champions" that may in fact be under the control of unaccountable arms of the same corporate-statist regime.


> I would be delighted to be presented with thoughtful replies that show /why/ the above view can not possibly be true.

"The one who asserts must prove."

"The burden of proof lies with the accuser."

Maybe you are starting the wrong way round.


What does a sponsored front for intelligence even mean? Sponsored by whom? Whose intelligence is it a front for?


Sponsored: funds, contracts, press promotion, etc.

Front: a front organization to conduct activities not subject to governmental oversight.

Intelligence: "alphabet" agencies.

Seriously, a smart person like you, ceo of a data mining company, is asking that question?


the truth is often buried.


Well this would only help incumbents, the little guys are unlikely to ever be able to police user content at scale or at cost.


And it's not just the hosts. Consider what this does to a host's incentive to take on a small customer. If you're a small entity you wouldn't be able to find hosting.


So this is pushed by DRM freaks, who dream about censoring everything, and making others pay for this abusive policing.


There is an old French word for DRM-style laws: patente.

The ancestor of the patent.

Enacted by the authoritarian, delusional king known as Louis XIV (a probable model for Kim Jong Il) when printing arrived. The purpose was to control publication in exchange for an oligopoly on content. To wage colonial wars, Louis ran up an awful amount of debt that would take centuries to pay.

It resulted in France being under-educated (book prices were 300x their production cost), while making the church, the editors, the king and the authors/artists happy.

Until unlicensed theatre and colporteurs appeared, ignoring the administrative borders and leaking the content in cheaper ways. They shipped content coming from all over Europe but forbidden on the local territory. Boring pamphlets about the possibility of living in a world where the accident of birth would matter less than merit.

Then, from jacquerie to jacquerie, one day the 90% who couldn't live, because of excessive fiscal pressure on the poorest while the richest did not want to pay, demanded the convocation of the Third Estate to discuss the fiscal equity problem ... somewhere around 1789.

History may not repeat itself, but it sure does have some hiccups...


What's the difference between the Communications Decency Act and https://en.wikipedia.org/wiki/Indemnity?


Why do people have to put "fake news" in quotes like it's an invented problem? I guess there could be some disagreement on the scope of the problem, but there's inevitably a better-trained person in this world who will be researching it and becoming a subject-matter expert on its effects and breadth in the coming years.


We detached this subthread from https://news.ycombinator.com/item?id=13639517 and marked it off-topic.


It essentially is an invented problem, which was immediately abused by both sides of the political spectrum, so an absolute version of it is hard to nail down.

There's little or no proof I've seen that shows any significant number of people actually believe the stuff on the small sketchy sites originally in question - and then the term's definition got expanded to seemingly include any slightly misinformed MSM article.


There's some proof, in the amount of shares (non-ironic ones) that these websites would get on Facebook.

The abuse of the term is infuriating because the "fake news" websites are so clear cut, there shouldn't even be a debate. Some websites just make up facts to write a story, as their main source of stories. And there's no way CNN, Breitbart, MSNBC, any of them fit the claim.


In hindsight, the term Fake News wasn't a good one. "Organized [foreign] disinformation" is both more descriptive and would have been less easily appropriated and diluted.


I hear you, but I think some of that is some sort of blind support, rather than an informed opinion that many of the viewers would necessarily repeat. Just a matter of wanting to cheer your team on with no regard for fact or not.

There's also a factor of people from the other side going there. Or people just curious.

Overall, I don't think that it's a good idea to use mere traffic as a way to determine influence.


>>There's little or no proof I've seen that shows any significant number of people actually believe the stuff on small sketchy sites in question originally

The proof is that the sites get large numbers of repeat visitors and their articles get shared unironically on social media.

I mean creators of these sites make tens of thousands of dollars a month. That may be small potatoes in the grand scheme of things, but put all of them together and they make up a sizable portion of total Internet media readership.


There's significant evidence that a lot of people believed Obama was born outside the US, which was the prototypical fake-news claim.


Because the term "fake news" is about two months old, and no sooner was it invented than people started abusing it well beyond its original definition.

The term was made up to describe completely and deliberately made-up news stories created for lulz or clicks, but immediately expanded to also include news that isn't deliberately falsified but might just be biased, distorted, true yet deceptively phrased, or non-deliberately inaccurate -- and from there used as a general catch-all term for criticising any news source you don't like, whether that's Breitbart or CNN.

There is very little objective news left, and very few unbiased umpires remaining to judge it. I don't think any of us should trust facebook, of all places, to set itself up as the arbiter of what is "fake" and should be censored, as the company's own political views are well known and I don't believe they are likely to be capable of applying consistent standards to both sides.


> There is very little objective news left, and very few unbiased umpires remaining to judge it.

I think part of this comes from the idea that there ever was objective news, which is a myth. Not in the sense that it's all biased editorial pieces, but that there is going to be bias in every story, just from the fact that it's written by a human. What people should keep in mind is how hard the journalist is working against that bias to be objective.

Once people are poisoned with the idea that any bias is bad, coupled with their own biases as readers, varying levels of critical reading skills, and partisans charging the atmosphere, people can be too quick to dismiss everything, rather than being encouraged to read critically and get what they can from news sources.

Similarly, the idea of unbiased umpires. Again, everyone's going to have biases, including those held up as umpires. Those same umpires are likely to hold political opinions as well. Is it even fair to think they don't or shouldn't? What's important is to see how their political beliefs influence their work.

Perhaps I'm too naïve, but I still believe people can do good, unslanted work while holding political beliefs. I'd like to think I can, and I extend that benefit of the doubt to others until they prove to me that it's undeserved. Similar to news sources, I think people have been encouraged to think that this separation of work product from personal politics is impossible in others.

I think both of these are very real problems, and I'm personally trying to work to improve this as much as I can.


>whether that's Breitbart or CNN.

Wow. CNN is now regarded as being on the end of a political spectrum. What a world we live in. I mean, attempting to be impartial is now a political act. That's crazy.


I don't see the GP making any claims about where they are on the political spectrum, however. If anything, that looks like a list of unreliable news sources to me.

Lest we forget, CNN's credibility came into focus when their role in rigging the debates was discovered. There was an attempt to divert that by claiming nebulously that the emails had been 'altered', but then it was established that they were DKIM validated. CNN's Cuomo also told us that reading Wikileaks is "illegal" only to be contradicted by far more reputable lawyers at Popehat. Incidentally, Cuomo is an attorney and he should know better.


>I don't see the GP making any claims about where they are on the political spectrum, however.

True. It wasn't meant like that. It was more of an observation on how the frame of public debate has shifted.


That's some out-of-context quoting, right there.

Your post's parent's point, in the bit you quoted, is how "Fake news" has become — or is perhaps more accurately a thing certain people are trying to make into — a way of dismissing news sources you don't like.

I don't read any endorsement or disparagement of either source in that post, or even an implication that they're "of a kind", except insofar as some people have painted both with a "fake news" brush, or that "objective" reporting is hard to find, anywhere.

Neither of those claims should be particularly controversial.


I think this is more allusion to the Prez calling CNN FAKE NEWS all the time rather than a value judgement by GP.

There's an argument to be made that CNN smells blood since January 21st and is looking to be the ones that make the kill. The fact that news orgs have to choose coverage priorities is, in itself, political.

Even if the coverage itself is impartial, would 24/7 coverage of Trump be non-political?


Yes. A lot of media critiques try to discern bias in the content. However, bias exists prior to that: what and who gets covered is itself a form of bias. Ignoring a topic can be bias, regardless of how the topic is covered.


[flagged]


The Economist limits the number of articles you can read in a week, so it's not a total paywall. The number of articles you have read in a week is stored in a cookie, as a reader of HN I think you now know what to do :-)

Please note, however, that if you clear your cookies several times in a row, they end up identifying your machine in some other way (I think IP based? But that wouldn't work for very busy IPs, e.g. busy workplace with many employees, so maybe it's IP and amount of traffic over time?).

Disclaimer: I am a paying subscriber since 2010 and have tried the above just out of curiosity.
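For the curious, a minimal sketch of that workaround, assuming the article counter lives in an ordinary (non-HttpOnly) cookie. The function is written to take the document object as a parameter so it can be exercised outside a browser; in a real browser console you'd call it as `expireAllCookies(document)`:

```javascript
// Expire every cookie this page can read by rewriting each one with
// a past expiry date. HttpOnly cookies and cookies scoped to other
// paths/subdomains cannot be cleared this way.
function expireAllCookies(doc) {
  for (const entry of doc.cookie.split(";")) {
    const name = entry.split("=")[0].trim();
    if (name) {
      doc.cookie = name + "=; expires=Thu, 01 Jan 1970 00:00:00 GMT; path=/";
    }
  }
}
```

Private browsing mode achieves the same effect with less typing, of course, and as noted above, repeated clearing may trigger whatever secondary fingerprinting the site uses.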


What's the limit? This is the third economist article I've loaded in over a month and it's already paywalled.

Edit: I suppose the simple answer is "3 articles", but that just seems like a rather low limit to me, hence my question. I'm wondering if maybe their detection is simply bad (e.g. maybe my browser reloaded an article and it counts that as an additional read?)


I think the latest limit is 2 articles per week; it used to be higher. The change was introduced recently, I remember seeing a pop-up announcing it.


I think whatever the limit is should be the limit that news source can post to this site. IMO.


Posts are made by users, not the news source itself, and your proposed limit is based on the assumption that most users read each and every article submission that is made to HN, which doesn't seem likely at all.


What? You don't like my brilliant plan?


You've been posting nothing but complaints about paywalls and web annoyances. Those are not substantive comments, so please stop.

Re paywalls, HN's rule is that they're ok as long as there's a standard workaround. This is in the FAQ: https://news.ycombinator.com/newsfaq.html.

See also https://news.ycombinator.com/item?id=10178989.


IMO The Economist subscription is worth every penny.


Agreed. Just got mine a couple of days ago, and I'm enjoying it so far.


Doesn't always work, but hit the "web" link under the article title. It'll usually get you around those paywalls.


I didn't seem to get a paywall at the link, did you? If so, out of curiosity, do you block Referers?


I get a paywall and don't block referrers.


Interesting, I wonder if it's a number-of-articles restriction or similar. You might try just dumping your cookies, trying private browsing mode, or setting Privacy Badger or similar to eat them for that site.

You could also try the Facebook linkout method:

http://facebook.com/l.php?u=<URL> (or just otherwise spoof your referrer to facebook.com)

Yet to find a paywall willing to deny social media traffic. They'll die without it and they know it.
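The linkout above is just URL construction; here's a small sketch of building it, assuming Facebook's `l.php` shim still accepts a bare `u` query parameter:

```javascript
// Wrap a URL in Facebook's link-shim endpoint so that following the
// resulting link arrives at the destination with a facebook.com Referer.
function facebookLinkout(url) {
  return "https://www.facebook.com/l.php?u=" + encodeURIComponent(url);
}

console.log(facebookLinkout("https://www.economist.com/news/business/some-article"));
// → https://www.facebook.com/l.php?u=https%3A%2F%2Fwww.economist.com%2Fnews%2Fbusiness%2Fsome-article
```

Alternatively, a browser extension or proxy that sets the `Referer` header to `https://www.facebook.com/` directly accomplishes the same thing without the redirect hop.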


You shouldn't be liable for what your users say and do.

You sure as hell should be liable when your buggy crappy software costs people money.


There is some absurdity in the safe harbor provisions that cover people outside of US legal jurisdiction and also that provide protections even when the provider has no actual business relationship with the customer or even any idea who the customer actually is.

I've felt that an argument could be made that safe harbor provisions should only apply when the service provider can provide an actual identity associated with an account and that that person is within US legal jurisdiction.


> I've felt that an argument could be made that safe harbor provisions should only apply when the service provider can provide an actual identity associated with an account and that that person is within US legal jurisdiction.

1) How would they be able to verify that identity for a reasonable cost without being opened up to a DDoS vulnerability against their finances?

2) What happens when every country does it based on local jurisdiction and the internet gets balkanized?


#1 is like asking "how can this industrial chemical company be profitable if they can't just dump their waste in the sewers?" If a business model is only profitable because they can externalize some of the problems associated with that model, it does deserve certain scrutiny and those who are harmed by the model should have some recourse.

#2 has already happened.


> #1 is like asking "how can this industrial chemical company be profitable if they can't just dump their waste in the sewers?" If a business model is only profitable because they can externalize some of the problems associated with that model, it does deserve certain scrutiny and those who are harmed by the model should have some recourse.

I'm not aware of any other business that is legally required to record every customer and link them to RL identities. You can still buy things with cash.

So yeah, that is a terrible and blatantly false analogy.

You are basically saying "Everyone has to be a subscription service with verifiable identities."

Why are you on a site you believe morally shouldn't exist?

> #2 has already happened.

Not in the Western world.


>I'm not aware of any other business that is legally required to record every customer and link them to RL identities. You can still buy things with cash.

Go buy a gun sometime. Or non-prescription cold medications that contain pseudoephedrine. Or prescription medications. Or auto insurance.

However, my response was not about identities specifically, but about whether society should really care if the things that it requires organizations to do as a prerequisite of doing business are inexpensive. Sometimes we make the judgment that it really is worthwhile to require a pharma lab to spend half a billion dollars before we let them sell their new pill to the public.

>Not in the Western world.

Do you not consider Europe as part of the Western world?

https://www.theguardian.com/technology/2015/sep/21/french-go...


[flagged]


>That isn't an example of what we are talking about and you should know that. If they can't enforce things as they do now, they'd need to block domains/firewall a la China.

It is exactly what is being talked about. Countries impose their laws on companies operating in their jurisdictions. Sometimes even on organizations that are outside of their jurisdiction as well. E.g. the pirate bay. https://en.wikipedia.org/wiki/Countries_blocking_access_to_T...

The internet is already a mix of legal jurisdictions and you can face legal consequences for your word press blog in some random country.

>You really are missing the point. You are talking about one off transactions of a substantial dollar value and not short-term online accounts with values measured in pennies.

Cold medicine costs seven or eight bucks, but you still have to present ID to buy it. Regardless, your complaint reinforces the fact that there are business models that are only profitable because they can externalize the damages they cause or divert profits away from those who deserve the profits of a particular work to themselves as a service provider.


> Cold medicine costs seven or eight bucks, but you still have to present ID to buy it. Regardless, your complaint reinforces the fact that there are business models that are only profitable because they can externalize the damages they cause or divert profits away from those who deserve the profits of a particular work to themselves as a service provider.

So you want to expose people's IDs over the internet? o.O k


What is the benefit to requiring proof of jurisdiction, and do you really see that benefit being worth making most Internet forums — including this one — almost impossible to run legally?

It seems to me the end effect of holding hosts' liable for users' speech would be that only the rich are allowed to communicate anything on the Internet.


>What is the benefit to requiring proof of jurisdiction,

It ensures that those who are hurt by illegal actions don't have to travel to Swaziland to receive justice.

>and do you really see that benefit being worth making most Internet forums — including this one — almost impossible to run legally?

I do see a benefit to people whose rights are being violated.


It ensures that those who are causing hurt by illegal actions simply have to travel to Swaziland to avoid justice.


Those who are causing hurt remotely from a country outside any jurisdiction where the victim can realistically get help are already immune to justice.

This discussion is, in large part, about whether services that knowingly or unknowingly help such people to cause harm but are within a jurisdiction where the victim can realistically get help should be immune as well.

I wouldn't necessarily go as far as wang_li suggested in their first comment on this thread myself, but the general sentiment isn't unreasonable.



