
It's spam - just like anyone else's. It's sensationalized because it's connected to someone special.



Sorry, what? This is spam, but the potential effect is significantly greater. This is someone trying to buy an election and circumvent democracy. I'd say it's not sensationalized enough given the stakes.


I’ve never fully understood why bots are seen as subverting democracy when things like flyers, posters, and yard signs aren’t.

If I go to an empty lot and I put up 50 Mike yard signs, what’s the difference between that and a bot that makes 50 Mike tweets?


The difference is that the bot is pretending to be a person. People are herd animals, they are easily manipulated to follow views that the herd sees as "correct". Having bots that are pretending to be people expressing their views is essentially an attempt at large-scale psychological manipulation. It is not simply about making people aware of a candidate or a certain point of view, it is about trying to make people believe many others believe that point of view to try and manipulate them into believing it as well.


Exactly this. A single person can put up 50 yard signs. Seeing 50 signs for a candidate just means there’s someone really enthusiastic about that candidate (or paid by them), it’s not automatically an indicator of wide support.

On the other hand, if every second house in a large area suddenly sprouted a sign for a candidate, you might rightly interpret that as strong support. Having 10,000 likes on a post is akin to this latter scenario - showing that there’s a whole lot of “actual people” who are on the bandwagon.


How are signs not also an example of people attempting "large-scale psychological manipulation"?

Signs don't just grow naturally out of the ground. Someone had to make them and put them there. And those signs aren't there to decorate yards. They're there to convince passersby that a lot of people support a particular candidate.


The issue is not that people tend to follow the herd, nor that people try to influence each other’s opinions; those are just facts of life. The issue is when someone creates a fake herd that only they control.

If 100 different Twitter accounts all weigh in on a thread, arguing for one side, but they are all secretly controlled by one person, that creates a false picture of widespread support for that person’s stance.

It’s true that traditional advertising has always been used to create artificial buzz around candidates, and this is often a bit manipulative. But creating a crowd of fake supporters, with fake photos and bios, and deploying them in conversations with unsuspecting real humans, is a whole different level of manipulation.


There is absolutely no difference. Why is it OK for politicians to pay for signs posted in front yards, shown to the whole neighborhood "network", yet the same action is somehow evil when the "neighborhood" is instead a social media platform?


It’s because this exists in meatspace where there is almost always a significant Proof of Work required. This helps limit/prevent the most egregious abuses.


>The difference is that the bot is pretending to be a person. People are herd animals, they are easily manipulated to follow views that the herd sees as "correct".

If this is true (and I'm not saying it isn't), the entire premise of Democracy is incompatible with modern technology. We could keep trying to bandage it with "fake news filters", but at some point we will need to admit defeat on that front and start fiddling with suffrage rather than fiddling with free speech.


> Having bots that are pretending to be people expressing their views is essentially an attempt at large-scale psychological manipulation.

Paid celebrity endorsements on TV like we see for products are the same. I don’t know if you can legally do that for campaigns, but you could definitely hire them onto the staff as surrogates (are there any rules against doing so at an inflated rate?).


Bots are not "pretending" to be anything. Humans have a false assumption that everything posted on social media was hand typed by another human, at the time it was posted. This is not, and almost never has been, true; therein lies the problem. There have been "bots" posting online since day 1.


When bots have human names and human profile pictures they are most certainly pretending to be human. But even if they do not have these, by not explicitly mentioning they are bots on social networking sites (including sites like reddit) they are effectively pretending to be humans as well, because like you mentioned people assume that other users on the site are human.


It's a false assumption. People shouldn't assume anything; there are plenty of places to do legitimate research, political or otherwise, and social media isn't one of them.


Technically, the humans running the accounts are pretending to be a lot of different people. But since the HN audience is aware nobody has achieved artificial consciousness, we casually attribute characteristics to the puppets, not the puppeteers.


"social media" has "social" in its name, which implies the presence of people. Hanging out with a bunch of computers is not social.


It's fine to create bots if the social media platform allows it. Just make it clear that they're bots if that's the platform's policy.

If you disagree with Twitter's policy here, then that's a separate question, but here's my (I think, uncontroversial) opinion on that: Banning the automatic creation and puppeteering of masses of fake people seems like a pretty good policy along a number of dimensions - especially just for keeping authentic users from leaving your platform.


These are not bots, the article doesn't imply they are. They are campaign supporters expressing their support for a candidate, paid or not. How many of these folks do you presume would post the same messages if paid by Trump 2020? I'd argue very few would.


> If I go to an empty lot and I put up 50 Mike yard signs, what’s the difference between that and a bot that makes 50 Mike tweets?

You're missing it because you're viewing it as solely advertising.

If you go out and buy a newspaper and direct it to publish only favourable articles about certain people, you begin to run into problems, like laws that govern bias and fairness.

Because this isn't advertisement - this is where people, rightly or wrongly, are going to get their information.


Out of curiosity, what laws which govern bias and fairness are you referring to? I tried searching for some, and the closest I found was the FCC fairness doctrine which was revoked (https://en.wikipedia.org/wiki/FCC_fairness_doctrine).


I agree this crosses a line, but a lot of other things do too (the presidential debates cut to commercials from drug companies).


Seems like it is the context. The bots are pretending to be something they're not - normal Bloomberg supporters, instead of a paid political campaign. If someone sees 50 signs in an empty lot, they're likely going to assume they were stuck there by Bloomberg's campaign staff.


advertising != false advertising

We have laws to protect consumers against the latter. Why should advertising in politics be held to different standards?


I don't think they're even bots in the first place. If he's spending hundreds of millions of dollars, I'm assuming he can get real people to post for him. It's the same thing as canvassing for a candidate.


Everything that has come out of human society is either arbitrary or due to our animal nature, if you think about it.


How? Every campaign spends money on advertising.


How is a distributed advertisement campaign different from bots on Twitter? Should we ban automated mail as well?

>This is someone trying to buy an election and circumvent democracy

The only differences between Bloomberg's behavior and that of a typical candidate are that Bloomberg is a latecomer and that he isn't beholden to donor interests - that doesn't mean he can't represent the people (not saying he can either).

If anything this is dangerous on the part of Twitter because now it's approaching publisher territory where it chooses which bots are allowable and which campaign bots are not. Twitter deciding the outcome of an election is far less democratic than a bunch of people possibly being swayed to vote for a candidate because of advertising. You can't apply different standards to a campaign you don't like - that's partisan and undemocratic. Buying ads is not.


First of all, there could easily be a scenario where bots crowd out people by some order of magnitude. People would realize it, lose trust in Twitter, and leave.

Then it is absolutely unclear what Twitter and other web companies are, or should be. There is a lot of moderating, banning and rule setting going on and these platforms are currently absolutely not "infrastructure". I hope that in the end we'll have something that brings people together and makes everyone better informed.


>It's spam

Semantically accurate

>Sensationalized because it's connected to someone special

It can be spam and it can be spam from 70 pro-Bloomberg accounts at the same time. I don't understand what's sensationalized about that.

