"In October, Twitter’s general counsel told a Senate committee investigating disinformation that Russian bots tweeted 1.4 million times during the run-up to the last presidential election, and such bots would later be implicated in hundreds of tweets that followed a school shooting in Florida."
Then why not tackle the root problem: a lack of critical thinking skills? I'm not from the US, so I don't know how extensive their philosophy courses are in school, but judging by these repetitive "Russians/bots are manipulating our people" headlines, it's probably lacking.
It's an understatement to say that the development of critical thinking skills in US schools is lacking. For example, it's estimated that 1 out of 3 Americans believe that all life has existed in its present form since the beginning of time.[1] Educating these people is difficult because they are trained by their parents and religious leaders to mistrust professors[2] and other experts from an early age. It's much easier to ban bots.
That'd be great but you can't teach everyone to be awesome critical thinkers. There is a question of aptitude.
That focus also overlooks the fact that much of the propaganda is emotionally driven and appeals to certain instincts. For even the most rational of human beings, emotion can overwhelm critical thinking.
So, while we can definitely do better in teaching critical thinking skills, we're just going to have a percentage of the population that is susceptible. And, given the fact that we're split almost down the middle politically (the election came down to 85,000 votes across three states), the ability to affect even a small percentage of the population can have a huge impact on the direction of the country.
A more effective root cause mitigation might be to break the two-party monopoly so that a broader set of ideas can flourish. This binary for-or-against system is also unhelpful to say the least.
> There is a question of aptitude... the ability to affect even a small percentage of the population can have a huge impact on the direction of the country.
At risk of poking a hornet's nest, isn't this basically just an argument against democracy itself?
Sure, you can go around shutting down Twitter bots, but if the fundamental problem is "elections are swung by people who believe literally anything they read, and we can't change that" then we've basically lost already.
First, because it'll move to a new source, and playing media whack-a-mole to maintain a functioning government doesn't seem reliable. Second, because deciding what's a valid source suddenly becomes the defining feature of elections. Deciding which voices to promote and which to undermine obviously matters, but if Twitter bots are a vital issue in a national election then one starts to wonder if anything else matters.
And I mean, I know one argument is that this has always been the case, that the only new thing is the ability for unfiltered manipulation to come straight into people's homes. I suppose it's true that in ~1800 most people couldn't possibly get enough information to make informed voting decisions, so we were largely getting our decisions made by political officials and party machines.
But it's a bit scarier to think that the problem is people, not access - if that's true then expanding information access is actually harmful to democracy.
Undermining any kind of trust in democratic institutions seems like a possible goal of these manipulations, especially if Russia is involved. It shores up the argument for “managed democracy” as Putin calls it, and gives autocrats something to point and laugh at when challenged on their autocracy. If rational people start to think in terms of historically failed and discredited notions of government, that’s a boon to any adversary of democratic nations.
> Then why not tackle the root problem: a lack of critical thinking skills? I'm not from the US, so I don't know how extensive their philosophy courses are in school, but judging by these repetitive "Russians/bots are manipulating our people" headlines, it's probably lacking.
Because that's not the root problem. This is all a dog and pony show.
There is no transparent mechanism of control for internet publishing like there is for e.g. broadcast media via FCC licensure. The government very much wants to rein in this "Wild West" age of the internet and assert some control over the messaging.
Conditions are very favorable for such a project right now:
a) Centralization on walled gardens like Facebook, Google, et al provides a plausible choke point.
[Right now they're making everything else online "dirty", "bad", and/or "Russian", but don't be surprised when they ban it altogether and anything non-Google-or-Facebook-approved becomes part of the "dark web", meaning the place that only dirty, bad Russians go, and which will effectively require special communication equipment to access. Wikileaks is a great case study in this.]
b) There's sufficient angst among the tech-savvy public to fly such controls in under the ironic pretext of "preserving the integrity of our democracy" without stirring up too much of a ruckus.
and c) The mainstream media is looking for any plausible cover story they can find after their disastrous and embarrassing coverage of the 2016 election and will go along with anything that will allow them to save a shred of face. That it paints their main competition (hyper-specialized online media) as dirty and unreliable is a bonus.
This is all very bad stuff, and that people are so easily tricked into following along because they're bitter about a political loss is as deeply disappointing as it is predictable.
I expect that the government will eventually move to establish a set of "identity verification standards" that have to be met or the site will be taken offline by court order. This will essentially end free speech on the web and allow the government to assert its control over what normal people are allowed to access, read, and/or share online.
Bottom line is that you don't have to be a Russian double-agent to be a Republican, and you don't have to be the victim of a pernicious foreign plot to lose an election against the other national political party. If you live in the Silicon Valley bubble, don't let your frustration delude you into believing otherwise.
I have a pretty good idea of who is going to decide which person is too stupid to vote in such a society — I mean, you must be pretty stupid in the first place to even consider voting for anyone other than the most powerful party/government, right?
The basic idea underlying democracy is not that people are intelligent enough to make the right choices, but that they have the right to choose their government regardless of their intelligence. Even if that means you end up with people like George W. Bush or Donald Trump in the White House.
I am still somewhat shocked that Donald Trump is really President of the USA[1], but I am deeply convinced that denying arbitrary people their right to vote is not an acceptable "solution".
>Do you have proof that Twitter bots influence opinion?
Demanding proof of effect is an oddly defensive response that seems aimed at exonerating bad behavior. The GP didn't even assert anything about influence; just stated a fact.
But what difference does it make? Are you saying we should just accept it if we can't definitively prove it had a certain influence?
Sounds an awful lot like a certain U.S. president.
>If Russian bots influenced elections, what about the millions of real live humans
False equivalence. A concerted effort by a single actor to use technology to amplify a specific narrative against an adversary is not the same as actual human beings rendering their frequently diverse opinions. Surely you see the difference.
"In October, Twitter’s general counsel told a Senate committee investigating disinformation that Russian bots tweeted 1.4 million times during the run-up to the last presidential election, and such bots would later be implicated in hundreds of tweets that followed a school shooting in Florida."
Some of these bots were not actually bots, by the way.