People are complaining a lot about how FB is enforcing things, and about why FB is explicit that its decision to remove these things was based not on the content, but on the patterns of inauthentic activity and fake accounts.
Look what happened with Matthew Prince of Cloudflare. Relentless harassment from people who believe he should be more active in censoring his customers, especially after that last shooting. Even when he has reluctantly stepped in on some extreme cases, they still gave him no credit. Defining wrongthink is a slippery slope, and you’ll never make ideologues happy. If you even try, then next thing you know, you’re somehow culpable for everything you didn’t remove, or didn’t remove quickly enough. Then you get accusations of endorsing it, because you removed X but not Y, so you must approve of Y, right?
And where does it end? “Sprint are ‘literally Nazis’ for allowing David Duke to have a cell phone?”
It’s no wonder everyone is deathly afraid of policing based on content. The PR consequences are bad enough, but it’s also likely to lead to legal problems as well.
You're making a category error. Sprint is a common carrier, protected by law as long as they neither meddle with what traverses their equipment nor deny David Duke his phone number.
If FB did retreat to common-carrier status, they would not be allowed to run ads or ban accounts. If they became content providers, they would incur civil and criminal liability for content posted on their sites, but would also have complete control over who is allowed to have an account and who isn't.
It used to be one or the other, but FB et al. wanted to have their cake and eat it too, and Section 230 of the CDA was created to allow them to straddle this fence. Well, here we are, and it turns out that content is a bigger problem than Section 230 was built to deal with.
That's a pretty backwards slippery slope argument. "They wanted all sites in this category removed, cloudflare removed two sites, why aren't they getting any credit?" The answer is obvious and does not require a slope at all, much less a slippery one.
> If you even try, then next thing you know, you’re somehow culpable for everything you didn’t remove, or didn’t remove quickly enough. Then you get accusations of endorsing it, because you removed X but not Y, so you must approve of Y, right?
I've seen a lot of accusations of cloudflare endorsing this kind of content. I haven't seen a single person or place where those accusations got stronger after they went from 0 or 1 sites removed to 2 sites removed.
Lots of American culture skates very close to pedophilia. At the extreme, there's porn that's only legal because the actors are documented adults. But there are also some hugely popular actors and musicians, who leverage child-like sexuality. And then the pervasive style for girls to dress like prostitutes.
And no, I'm not a prude. I just find it amusing. And, I admit, very sexy.
Edit: If I didn't care about this account, I'd link examples. But I do, so I won't.
OK, I've led a ~sheltered life, and have no personal experience with prostitutes. Or at least, women who explicitly demanded money for sex. But I have visited Amsterdam's red light district a few times, just to watch. And I've seen lots of prostitutes in movies, for whatever that's worth.
So basically, prostitutes either wear as little as they can get away with. Or they wear costumes that appeal to various fetishes. The French Maid. The Schoolgirl (or Sexy Librarian) with Big Glasses. Here are some (NSFW) stock images:
https://www.gettyimages.de/fotos/prostitute
Also, a site that's probably OK, albeit way creepy:
I understand most sex work is not streetwalking and I assume they change into whatever when they arrive at wherever they're hired to go. I've also heard that the Amsterdam Red Light District is a costumed tourist trap.
I appreciate that many sex workers don't dress like streetwalkers, or sex hostesses in bars, clubs, etc. I'm sure that there are sex workers who dress like most people dress. Whatever the client wants. Escorts, for example.
But still, there is the "streetwalker / sex hostess" style. As you see in Hooters. And it's not at all uncommon to see preteens dressed like that. I gather that it's considered "cute". Also a way for young women to thumb their noses at society. And I can't deny finding it sexy, at least for the older ones. But it's still strange, in a country that's so freaked by pedophilia.
Lots of what porn producers are producing, you mean?
'American culture' also stipulates that it is illegal to disseminate pornographic material to minors, but this is not being enforced at all against the US pornography industry.
This presumes that mass media doesn't have any bearing on the culture it broadcasts to, when in fact it is a two-way street, and also that this is a representative sample.
But the fact remains that "sexy young woman" is a hugely popular theme in US culture. Also in many other cultures. But the US seems particularly schizophrenic about it.
Youthfulness and pedophilia are very different things. I don’t even see a hint of pedophilia in the SyFy image you posted. She looks feminine and young, but not like a child.
How do you know they are keeping ISIS from recruiting? By ISIS I'm sure you mean al-Qaeda, because ISIS is a regional terrorist army/group, whereas al-Qaeda is a global group that would recruit worldwide.
People as it is have a Negativity Bias.
Facebook/Twitter/Youtube's biggest content producers use it and depend on it, to hit their engagement metrics. Elevating threat perception or a fight response doesn't require censored content. Once you build such a machine that operates at population scale, what does it matter which content is censored?
All the content is fucking everyone in some way, every day. And the effects of that accumulate over time. Just ask your friendly neighborhood shrink whether business is booming.
The problem is that each individual believes they alone have the moral high ground.
CNN, MSNBC, FOX, all mainstream media, alt media, and independents all play off a consumer's slightly unique belief structure to exploit their emotions, adrenaline, and fear.
SV Tech is cutting its own throat trying to police this content. For better or worse, the second they take down one site they are obligated to take down every site, or become liable.
Honestly, the arrogance and elite personas of the trust and safety teams in social media are frightening.
Believe me, Matthew Prince et al. and every other censor are going to be held accountable for the loss of the Section 230 exemption.
"Where does it end" arguments are the kind of "winning" internet argument I hate seeing [0].
There's a lot to be said for building up your business in an environment that is a liberal democracy with free movement of ideas, commerce, and people, and then for protecting those principles from those who hijack such liberties to promote illiberal ideas.
An existing (but old and outdated, badly needing reform) legal construct exists: Title 2 protections (being pipes). Only, no one wants to be Title 2 (not even Verizon, Sprint, Comcast) because all the money is in non-Title 2 (content).
Sounds like you may be interested in Title 1 and 2 overhaul.
While "slippery slope" / "where does it end" arguments can be used in bad faith and are often unproductive regardless of intention, I don't think they are inherently bad. I think the best use of them is to call into question things the original argument treats as obvious.
Your article's example of using pronouns to show empathy and the saboteur response show this well. It is not clear whether Person 1 thinks all people, regardless of whether they are furries, goths, etc., deserve empathy, or whether there is some group of people who don't. I think the position that all people deserve empathy is probably at least a slightly controversial claim, and if only some people deserve it, you should provide the reasoning for why LGBTQ people do.
I don't think Person 1's argument could convince anyone who doesn't already believe either that all people deserve empathy or that LGBTQ people deserve empathy. Providing more reasoning to support either of those positions would help convince people to refer to others by their preferred pronouns.
I don't think slippery slopes are productive to constructive discourse at all, and are merely used as a rhetorical device with the goal of invoking Ethos or Pathos to certain demographics. Aka, "virtue signalling".
The "this policy/rule/law (moral or legal) could be applied in a way that I don't like" argument is a tired, boring, uninteresting, and fearful one. It is tautologically true of any policy/rule/law. Better is putting forth an actual argument for an alternative policy/rule/law, whether it's merely a small clarification or a complete systemic overhaul, turning that fear into a constructive argument with actual participation in the matters at hand.
Less fear and signaling, more activism and constructive dialogue without divisive rhetoric.
Edit: if instead the intent of the arguer is to refine something someone else said, just ask! Going back to your example, the (no-longer) saboteur asking pointedly "Do you think all people deserve empathy, only LGBT people (as opposed to furries and goths), or something else?" is way more productive than performing a slippery slope dance. The argument can then be provided without reservation, since everyone is being genuine and clear. It might be compelling, it might not. Either way, it'll get made. Rhetoric matters to discourse quality.
Is your argument that because Matthew Prince of Cloudflare didn't get more positive reinforcement for dropping 8ch from their service, that means that Facebook shouldn't try and reduce bot traffic?
Because it sure seems like this is the actual crux of your argument, but it makes little sense.
Interesting seeing this response from both Facebook and Twitter on the exact same day.
Guessing from this:
> Based on a tip shared by Twitter about activity they found on their platform, we conducted an internal investigation into suspected coordinated inauthentic behavior in the region and identified this activity. We will continue monitoring and will take action if we find additional violations. We’ve shared our analysis with law enforcement and industry partners.
that the simultaneous response was coordinated by both platforms... I'd go so far as to say that it was coordinated in order to demonstrate the capability to self-moderate ahead of the 2020 US elections.
It's likely due to this: https://en.wikipedia.org/wiki/Li_Yi_Bar, which organized a widely popular anti-protest campaign about 2 days ago; the online community had about 30 million members. Members proceeded to use VPNs to spam anti-HK-protest posts/comments on FB/Instagram/Twitter. This is consistent with the behavior of this community in the past (mass trolling, mostly apolitical but sometimes political; this community is pretty nationalist to begin with).
It does not really matter if one can prove that something is government directed, because “coordinated inauthentic” behavior is sufficient reasonable grounds to take action.
An individual expressing their own pro-government views from their own account is fine. An individual pretending to be someone else, or multiple people, is not, regardless of the content.
What about multiple people coordinating to express an unpopular opinion together, each of their own volition? You know, kind of like a protest.
Just call this what it is, subjective censorship. Of course they give a different reason to skirt by on technicality. But if it were an extremely popular opinion being shared millions of times, it would just be called a viral meme and FB and Twitter wouldn't be making subjective judgements on authenticity.
What about it? It’s allowed. Do you not see the distinction? Inauthentic activity. Fake accounts. So no, this isn’t subjective censorship at all, that’s the point.
What matters is whether or not Facebook or Twitter can make this distinction successfully 100% of the time, and whether or not they are perceived as doing so just as successfully.
And the line is very blurred: how do you know there isn't a Chinese-government-affiliated person acting on either the government's or their own behalf within the Li Yi Bar? The US government employs about 20% of US workers, the Chinese government employs slightly more, and when you say an individual is affiliated with the Chinese government, that's not a small likelihood even if you just picked a random working Chinese person.
> I'd go so far as to say that it was coordinated in order to demonstrate the capability to self-moderate ahead of the 2020 US elections.
I don't understand how you go from "Guessing from Twitter tipping off Facebook" to "Probably flexing their self-moderating muscles ahead of US elections".
Right, because they both posted detailed run-downs of their moderating activities against state actors on the same day.
Edit for clarity: Twitter tipping off Facebook in and of itself isn't evidence of a coordinated response, but both parties posting detailed rundowns of their responses against the same actor on the same day is more likely to suggest a coordinated effort.
One particularly good reason to publicize acting against state-sponsored agents is if you're trying to build goodwill among the targets of said state effort, which conspicuously are the people of Hong Kong and which, more inconspicuously, are probably people within certain demographics and locales in the United States and Europe.
I would post this headline as "Twitter and Facebook coordinated today to post blog posts about removing coordinated inauthentic behavior from their platforms". There is no more confirmation from Twitter and Facebook that their actions are coordinated than confirmation from the Chinese government that it ran psyop, botnets, and the like, but the inference thereof is justified. Likewise, the leading inference that Facebook and Twitter are inauthentic in their efforts is just as justified.
At least a few years ago, Facebook did publish about partnering with Twitter and Microsoft to build tools for sharing image/content hashes [1], presumably to better aid in removal or prevent dissemination of certain types of content. It's not outlandish to suggest the partnership runs deeper 2-3 years later to cover more vectors.
For the US they have a unique challenge: they can freely attempt to mitigate propaganda and manipulation by the Chinese ruling party and its backers because they are not physically in China.
"We’re constantly working to detect and stop this type of activity because we don’t want our services to be used to manipulate people." What do they think advertising is?
Going even further, a lot of people in the advertisement industry see it as just providing a way for companies to inform consumers that their products even exist. The point of ad targeting in this worldview is to figure out who already wants to know that certain products exist, not to change anyone's mind by getting them to purchase something they don't want to.
As someone who's worked on ad campaigns: yes, that's true and entirely how I thought about it, but there are so many bad apples too; those disgusting Taboola ads, for example.
To persuade is to "Induce (someone) to do something through reasoning or argument" (Oxford), to "move by argument, entreaty, or expostulation to a belief, position, or course of action" (Merriam-Webster), or "to plead with" (Merriam-Webster).
To manipulate is to "Control or influence (a person or situation) cleverly or unscrupulously" (Oxford), or to "control or play upon by artful, unfair, or insidious means especially to one's own advantage" (Merriam-Webster), or to "change by artful or unfair means so as to serve one's purpose" (Merriam-Webster).
Manipulation implies a level of deceit and deception that persuasion does not. They are sufficiently different in meaning that thesaurus.com does not list either as a synonym of the other. Neither does whatever Google is using as its thesaurus source when it decides to answer itself instead of link to a site.
> Manipulation implies a level of deceit and deception that persuasion does not.
Actually, if we're going to get into this nitpicky level of semantics, none of the words "cleverly, unscrupulously, artful, unfair, insidious" mean "deceit" or "deception".
I'd argue that advertising is pretty inherently unscrupulous, and if it's at all effective, it's clever, artful, and insidious. And you can argue all day long that it's not "especially to one's own advantage" or "to serve one's purpose", but the fact is that if it was really providing a service to customers and not just serving advertisers, that service would be better provided by a neutral third party (such as Consumer Reports, OutdoorGearLab, etc.).
But while we're at it, plenty of advertising is deceitful and deceptive too, so even if that's a prerequisite for being manipulative, advertising is still often manipulative.
> Actually, if we're going to get into this nitpicky level of semantics, none of the words "cleverly, unscrupulously, artful, unfair, insidious" mean "deceit" or "deception".
"insidious" is actually a synonym of "deceitful" [0].
You could still argue that advertisement is insidious (which I would by and large disagree with), but the previous poster's point about manipulation and persuasion being different still stands.
There’s a wide spectrum between “eat oat bran it’s good for colon health” and “diamonds are forever,” and I see what foreign governments do these days as being somewhere in the middle.
Shouldn't they replace all the content with, at least, a short note saying what was formerly here was part of a "coordinated inauthentic behavior" and that the individuals representing themselves here were not real?
Hard to say if they would want those remnants on their platform, but it would be really great if these things were funneled to the FBI who published reports of state-sponsored misinformation activities so we could cite them as examples later on.
Slightly related--it was amusing the first time I went to /r/China to find it completely full of anti-China posts. I mean, it makes sense since Reddit is an American website, but it would be like going to /r/potato and finding dozens of posts on how people hate potatoes.
/r/China isn’t so much anti China as it is a bunch of expats in China venting on issues facing them in China. It would be like going to /r/potato and finding dozens of posts griping about the challenge of working with potatoes. Or going to /r/fortnite and seeing dozens of posts about players complaining about that damn mech.
/r/sino is the China-fan subreddit, but in the same way as /r/Pyongyang is pro Kim jong un.
Algorithms to detect algorithms. The latter have to become more and more genuine and authentic in order to fool the former which are forced to become better and better ... Soon the genuineness, authenticity and complexity of the latter will exceed Shakespeare writing his plays (some say even that was a fake - in this case i subscribe to Marlowe theory) and we, regular humans, will be left in the dust... Too stupid a post - yea, that is a real human, let it through :)
The point of propaganda isn't actually to promulgate a lie that obscures the truth. It's to promulgate a lie that people will repeat. It may eventually reach a scale where it can eclipse the truth, but by then it's generally done its job.
You may not be a weak-minded fool, but there surely are SOME people who are, and who will be genuinely moved by the now-banned content. Those people will then credulously repeat an idea, making it easier for folks who know better to repeat it. That's generally how propaganda starts. Eventually it becomes unquestionable.
Let's keep in mind that propaganda, advertising and public relations are all spouting from one and the same sewer. Hell, PR as a modern profession is a descendant of propaganda[0].
It doesn't matter whether you're selling a product, a service or an agenda. The mechanics are the same, only the labels we assign them differ.
Creating false grassroots support for your product? Astroturfing. Creating false grassroots support for your agenda? Sockpuppeting.
An early episode of Motherboard's otherwise mediocre podcast "CYBER" did an interview with the Grugq.[1] That episode has an incredible amount of material about the nature of propaganda and modern information warfare. It's 40+ minutes, and well worth the time spent.
There are many purposes of propaganda, a term which is not synonymous with falsehoods. Some propaganda tells the truth, and some lies. Some propaganda merely conveys a sentiment that is neither true nor false.
Consider "Loose lips sink ships" or "Uncle Sam Wants YOU". Those are both classic propaganda lines, and neither is promoting a falsehood.
> Consider "Loose lips sink ships" or "Uncle Sam Wants YOU" Those are both classic propaganda lines and neither is promoting a falsehood.
I agree with this and these are good examples of what I was talking about.
I was responding specifically to the propaganda at hand and MikeGale's comment, "Also a bit insulting, these guys seem to assume that I'm a weak minded fool who will get taken in by this obvious nonsense."
To understand the bigger picture of this propaganda campaign on Hong Kong’s protest, check out an allegedly leaked instruction on how China controls the media (picture in Chinese and translation to English [1], transcribed in Chinese [2]).
Edit: it is interesting to see that a sibling comment of mine [3] on another thread on Twitter’s reaction got overwhelming upvotes, while this comment on FB’s reaction got a downvote.
> Edit: it is interesting to see that a sibling comment of mine [3] on another thread on Twitter’s reaction got overwhelming upvotes, while this comment on FB’s reaction got a downvote.
Yes, it's almost as if different people -- with different opinions -- read and reacted to two comments posted 5 hours apart on separate threads.
I wonder when the US and other Western countries finally start cracking down on China and Russia with harsh sanctions for all the manipulation they're doing.
Oh wait, they can't, as all manufacturing was outsourced to China and that would be the first thing China cuts off...
Sarcasm aside: how can the West possibly react on the events of the last years?! There's not many options available any more.
Great question. I've been dealing with APTs from Russia (and to a lesser extent China) for two years now, and there is nobody out there to help independent sites.
GDPR recently drove us to build our own application-level firewall from scratch, which turned up behavior that is never reported by CloudFront or Google Analytics - those services only hide the severity of the problem.
Advanced Persistent Threat - a hacking group that finds an open door (or creates one by phishing), gains access, and spreads little backdoors and malicious code into a bunch of machines, any of which, if removed, can be re-created by the others. Not easy to detect, very hard to get rid of once infected. Many groups differ in the tools they use, which enables those in the know to say "Oh, that's APT X" when they look at an infection.
You may have heard of "Fancy Bear". That's one APT out of Russia.
When I see patterns in traffic coming from 45,000 one-off hosts for a month straight it is clear that there is a distributed botnet behind the requests.
When I see a vulnerability scan from an Azure cloud instance seconds after I ban a block of Russian addresses, I can be sure there is coordination.
And don't get me started on the Moldavian Registration Bots. Those are a combination of automated and human-assisted CAPTCHA solvers, and it took me almost a full week of careful observation to weed them out.
These are some of the things my application firewall can detect automatically. Every now and then I see a new pattern, that is all I was trying to say.
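The kind of automatic detection described above can be sketched roughly like this (the field names and threshold are hypothetical, not the poster's actual implementation): group requests by a behavioral fingerprint, and flag any fingerprint that shows up across an unusually large number of distinct source hosts, the hallmark of a distributed botnet.

```python
from collections import defaultdict

def flag_distributed_patterns(requests, min_hosts=1000):
    """Flag request fingerprints seen from suspiciously many distinct IPs.

    `requests` is an iterable of dicts with "method", "path",
    "user_agent", and "ip" keys (hypothetical log schema).
    """
    hosts_by_fingerprint = defaultdict(set)
    for req in requests:
        # Fingerprint on everything except the IP: identical requests
        # arriving from thousands of one-off hosts share this key.
        fp = (req["method"], req["path"], req["user_agent"])
        hosts_by_fingerprint[fp].add(req["ip"])
    return {fp: len(ips)
            for fp, ips in hosts_by_fingerprint.items()
            if len(ips) >= min_hosts}
```

Organic traffic rarely produces one exact request signature from thousands of hosts, so a high distinct-IP count per fingerprint is a useful first-pass signal even before looking at timing.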
Right, you got it. Sometimes I feel like we're being attacked by bespoke systems, but it really must be off-the-shelf stuff since our full content is easily licensable. We shouldn't be worth the trouble.
We just weren't getting enough information from Bing or Google Analytics or CloudFlare, and when I developed a realtime activity dashboard, patterns started emerging: distributed web scrapers, registration bots, vulnerability scans, and some of these in tandem (i.e., scans commencing immediately after I blocked a block of addresses). And many of these are coming from cloud hosts, Azure being the worst, with Google a close second. This is the type of traffic they don't want you to see, so those respective analytics services just suppress it, because it would be negative advertising if we could actually see what is happening in realtime. I compared the numbers: Google was consistently underreporting our traffic by at least 40%, and a lot (not the majority, but enough to be noticeable) of that traffic was coming from their own hosted servers (not the indexing bots, but the user cloud instances).
CloudFlare implements temporary bans but I needed something permanent for those threats that were recognizable based on their request patterns.
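A minimal sketch of that kind of permanence (my own illustration, not CloudFlare's mechanism or the poster's firewall): persist banned fingerprints to disk so bans survive process restarts, unlike a temporary in-memory rate-limit ban.

```python
import json
from pathlib import Path

class PermanentBlocklist:
    """Durable set of banned request fingerprints, backed by a JSON file."""

    def __init__(self, path="blocklist.json"):
        self.path = Path(path)
        # Reload any bans recorded by previous runs.
        self.entries = set()
        if self.path.exists():
            self.entries = set(json.loads(self.path.read_text()))

    def ban(self, fingerprint: str) -> None:
        """Record a fingerprint and flush the list to disk immediately."""
        self.entries.add(fingerprint)
        self.path.write_text(json.dumps(sorted(self.entries)))

    def is_banned(self, fingerprint: str) -> bool:
        return fingerprint in self.entries
```

The key design choice is writing through on every ban: a crash or redeploy never loses a ban, which is exactly what a temporary CDN-level block cannot guarantee.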
The ARIN squatting is the latest thing I'm seeing - a lot of requests coming from netblocks that are former DoD and RedHat addresses. The publicly available ARIN databases aren't entirely up to date, and the bad guys know it, so some of the checks we depend on have to be taken with a grain of salt.
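Checking a request's source against a curated list of suspect netblocks is straightforward with the standard library. The ranges below are RFC 5737 documentation placeholders standing in for the actual stale blocks the parent refers to, which I don't know:

```python
import ipaddress

# Hypothetical placeholders (documentation address space), standing in
# for netblocks whose registry records are known to be stale.
SUSPECT_NETBLOCKS = [ipaddress.ip_network(n)
                     for n in ("198.51.100.0/24", "203.0.113.0/24")]

def in_suspect_block(ip: str) -> bool:
    """True if `ip` falls inside any netblock we no longer trust
    registry data for, flagging the request for extra scrutiny."""
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in SUSPECT_NETBLOCKS)
```

Because the registry data itself is what's stale, membership in one of these ranges is best treated as a scrutiny signal rather than an outright block.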
So far, I've been able to develop business rules to separate out the human activity from the carefully constructed scraper/probe attempts, but I fear that if they get just a bit more sophisticated I may lose that ability.
That is a fair point. It used to be that way, but around 2016 we started noticing bots and scrapers that use full WebKit implementations to run JavaScript. Those clients should trigger Google Analytics just like a desktop browser, but it was difficult to make the correlation due to information hiding in the GA dashboard (a tuple of IP address, timestamp, and resource would be needed, but they do not provide that, so it was impossible to test).
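Even without the per-request tuple, the gap can be approximated at the aggregate level. A sketch (hypothetical data shapes, not the poster's tooling): compare per-hour pageview totals counted from raw server logs against what the analytics dashboard reports.

```python
def underreporting_by_hour(server_hits, analytics_hits):
    """Fraction of per-hour traffic the analytics service never reported.

    Both arguments map an hour label to a pageview count; server_hits
    comes from raw access logs, analytics_hits from the dashboard export.
    """
    gaps = {}
    for hour, served in server_hits.items():
        reported = analytics_hits.get(hour, 0)
        if served:
            # 0.0 means perfect agreement; 1.0 means nothing was reported.
            gaps[hour] = (served - reported) / served
    return gaps
```

A consistently high gap across hours is compatible with the ~40% underreporting described above, though it can't attribute the missing traffic to any particular client the way a per-request join would.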
No, they are not. Why do you think the Belt and Road Initiative exists? In my opinion it's mostly to get Africa up to speed, similar to the post-WW2 Marshall Plan, so that African demand can fill the gaps that Europe/US leave.
Also, China has massive sums of dollars and euros, they can simply play the "long game", i.e. weather out the storm until Europe/US caves.
Yeah, that's not how it works at all, thanks to globalisation. About 10 other countries will step in to provide all the cheap widgets in China's place about 3 seconds after they 'cut us off'.
Which countries have anything even remotely matching the supply chain of Shenzhen, built up and trained - and with the spare capacity available?
Many companies are moving out of China at the moment, either due to the threat of a Trump-caused trade war or because other countries are even cheaper than China, and it takes them years.
China can weather one year of export blockade to EU/US. No problem. But EU/US can not, not in any way.
China is a nuclear power, plus any open US-China or US-Russia war would immediately extend to South Korea and Taiwan (in the case of China) or Eastern Europe (in the case of Russia). I highly doubt that even the hardest hawks in the DoD would advocate for war...
Europe is closer to Moscow than ever in history, as Russia breaks into pieces that want more democracy.
All of them have a pretty harsh stance against Russia.
Crimea (Ukraine) almost made Russia's entire naval fleet useless.
Russia is still being punished for it economically, and Putin has never seen approval ratings this low. There are also a lot of protests there.
China thinks it can possess the world, but it is having severe problems even in its own country. We just need to wait: China's plan was smart, but they forgot to account for the will for freedom in every plan they executed, and they are doomed to fail. Sooner or later.
Like every other conservative party in the world doesn't do the same? This is my problem with these actions: they are selective based on political affiliation.
I don't care if they are communists, socialists, or run-of-the-mill capitalists. The only reason Twitter and Facebook still exist in their current form is that political establishments all over the world have worked out how to leverage them to their advantage.
Politics has not escaped the disruption that other industries have suffered. It's like any other targeted advertising campaign. How are mining companies that put out political propaganda to undermine environmental movements any different?
While this is not going to be a popular opinion, you have to give credit where credit is due to Trump. Without his campaign to crack down on Chinese practices (including Huawei) and the massive attention he brought to China via the active trade war and tariffs, I doubt social media companies such as FB and Twitter would be enacting these policies. He brought political pressure onto China where there was little to none from previous administrations.
Trump made it worse, because whatever evidence against the CCP he presents is often dismissed by foreign leaders who don't trust him because of his unnecessarily hostile attitude towards US allies. If Obama or somebody like him were presenting the Huawei case (which has been in the works since long before Trump), more countries would be willing to coordinate the response necessary to contain China's authoritarian/colonial ambitions. But now alienated Germany and other allies are increasingly skeptical, which helps China and Russia.
Maybe it’s the conspiracy theorist in me, but I see the exact opposite: there are a lot of people who believe that “coordinated, inauthentic behavior” on social media platforms allowed Trump to win the 2016 election, and that a crackdown on anything perceived to be “coordinated and inauthentic” will prevent him from doing so again in 2020. To me, this just feels like a cynically convenient excuse to get ready to remove content arbitrarily in advance of next year’s election cycle.
I would actually argue that the social media campaigns were much more negative against him. Google literally had an all-hands (leaked video) after the election where Brin and C-level executives were activists against Trump and conservative news outlets. Brin called the election results "offensive".
He did many offensive things, it's not really in dispute. It isn't a secret that he thrives on controversy. It's also not in dispute that foreign actors took advantage of the social media structure to stoke animosity and resentment ahead of the elections (at least, not in dispute among anyone serious). Execs in silicon valley would be correct to be horrified at what their infrastructure was used for and the results.
The FBI has posted many examples of the propaganda. It was not "much more negative" against Trump. That's a falsehood.
FB doesn't want to be the content police, behind the principle of freedom of speech. They're instead looking at metadata that reveals the activity is fake and attempting to pursue an agenda. I am certainly no FB apologist, but I think the language of 'coordinated inauthentic behavior' is really smart and defines the problem well.
The phrase is so vague as to be meaningless. They may as well have said “organized opinion setting”.
I miss the time when there was a massive debate over the legality of removing the video Innocence of Muslims, to the extent there were op-eds suggesting that Youtube pursue an advisory opinion before removing it. Now it is all but certain that FAANG has the right to remove whatever, without question, without appeal.
And in most EULAs, certain categories of dispute go to arbitration.
Well, is "organized opinion setting" something they should allow? What if it pretends to not be organized? What if it's organized by a government? What if the "opinions" are that the government organizing the opinions would be right to oppress those with differing opinions?
Is there anywhere in there that you think a line should be drawn?
That is not actually true. The loud ones are always the ones with the bullhorn, allowed to them by the powerful.
I would be for censorship as well, if successful libel suits against publications meant they would be removed from the internet for fake news. Sadly egregious errors in editorial judgment are to be protected over someone saying an incomplete set of truths that result in an erroneous conclusion.
It is not so much about FB/TWTR/GOOG, etc. supporting freedom of speech.
It is about evading responsibility for the content on their site -- they want to be regulated as 'neutral' content carriers, not editors.
> We’re constantly working to detect and stop this type of activity because we don’t want our services to be used to manipulate people. We’re taking down these Pages, Groups and accounts based on their behavior, not the content they posted. As with all of these takedowns, the people behind this activity coordinated with one another and used fake accounts to misrepresent themselves, and that was the basis for our action.
To suggest that the practices of Facebook in general - likes, the activity feed, and the various experiments in Facebook's history - are not themselves an attempt by a coordinated group to manipulate people is disingenuous at best. To correct this to honesty, they would have to clarify that they don't care for external groups - other than those who pay for advertising services - to be permitted to manipulate people.
This takedown of a few pages followed by 15k people, and its subsequent information release, is itself a manipulation, to make it appear as if Facebook is doing something about the glaring problems that I personally think it represents. It seems to me that in the halls of power today there are few good actors; this tiny morsel doesn't make Facebook one of them, and it does nothing about the manipulators on their platforms that are beyond their ability to take down, despite self-serving dishonesty from so many groups.
In other words, what a skillfully crafted news release from facebook, give that marketer a raise.