Twitter to label 'good' bot accounts (bbc.com)
141 points by MayurJBhatt on Sept 10, 2021 | hide | past | favorite | 114 comments



Twitter suspended 12 of my accounts (most of them bot accounts, some belonging to community organizations) with no explanation, apparently only because I had all of them linked to my main account on TweetDeck.

They un-suspended most of them after a few months of follow-up emails (and no human response), and then the automatic suspension happened again after a few weeks.

I have given up on Twitter as a development platform, and now prefer running these bots on open platforms such as RSS instead.
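For anyone wondering what "running a bot on RSS" looks like in practice, here's a minimal sketch using only the Python standard library (the feed contents below are made up for illustration; a real bot would fetch the feed over HTTP on a schedule):

```python
import xml.etree.ElementTree as ET

# Made-up RSS 2.0 feed; a real bot would fetch this over HTTP on a cron schedule.
FEED = """<rss version="2.0"><channel>
  <title>Example Bot Feed</title>
  <item><title>First post</title><link>https://example.com/1</link></item>
  <item><title>Second post</title><link>https://example.com/2</link></item>
</channel></rss>"""

def latest_items(feed_xml: str):
    """Parse an RSS 2.0 string and return (title, link) pairs in feed order."""
    channel = ET.fromstring(feed_xml).find("channel")
    return [(i.findtext("title"), i.findtext("link")) for i in channel.findall("item")]

for title, link in latest_items(FEED):
    print(f"{title}: {link}")
```

The appeal is exactly what the parent describes: no API key, no sign-up, and no suspension risk; consumers poll, and the publisher just serves XML.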

If Twitter wants to do this right, they must publish clear guidelines for developers to follow and fix their bot-detection algorithms to avoid stupid stuff like this.

Edit: 12, not 13. Here are the relevant tweets: https://twitter.com/search?q=from%3Acaptn3m0%20twittersuppor...


I am one of the moderators of the /r/Twitter subreddit, and based on community sentiment I have come to the conclusion that Twitter is, at its core, a user-hostile organization, and is unable to change this at all.

Use the Fediverse please. It is the ONLY ethical alternative.

edit: I know the Twitter comms people lurk here. I've tried multiple times to get you to contact me and have hit a wall. You can modmail my sub. Your engagement is requested.


I've been planning to migrate the bot/OSS accounts to Mastodon, thanks for the nudge.


I think this is the lifecycle of any typical advertisement-revenue-based social network/community platform.

1. Have an open platform with APIs which can read/write over the entire platform.

2. Gain enough market share to attract large advertisers.

3. Start restricting the content to please the advertisers.

4. Start restricting the 3rd-party apps which offer a superior experience and block your ads.

5. Start restricting the general API itself.

Facebook and Twitter have gone through it; I bet Reddit will start restricting its API next.


Reddit is dead to me; there has been a generational change there. Content and comments are below basic.


Maybe the change isn't Reddit; maybe it's you growing up?

For some reason I still waste my time there trying to share opinions as a HK resident: no, China is not just "evil"; no, Hong Kong is not "oppressed"; yes, the old idiots, communists or not, are all the same conservative cowards as anywhere else; no, we don't seem to really want a bloodbath, so far.

You can imagine the amount of weird obsessive hatred I get for this opinion, from people who've never been to China :D


Can't comment on your opinions, but maybe I'm getting older indeed


Could you provide us a lens for understanding how the treatment of Uyghur Muslims in China isn't objectively "evil" then? Would love to understand how us Westerners are getting that one wrong.


Clear guidelines are worthless if your accounts are automatically banned without explanation or a meaningful appeal process with human interaction.


Human intervention is incredibly expensive at the scale Twitter operates.

They're not going to spend the money.


It really, really isn't; that's just BS they feed people so they don't have to fix their broken systems.


Throwing more engineering resources at this is not going to fix it at all.

Twitter's track record speaks for itself.


Yes it really is.


Big ups to RSS! You can also create Mastodon bots for the Twitter feel.


Just curious about having that many - I have, I think, 8 right now, but there seems to be a limit (of 10?) due to how many times you can re-use a phone number. Do you get around that by using multiple Google Voice numbers or something?


Multiple numbers, and not all of these were mine. I usually provide my number for sign-up, then remove it from the linkage in settings.

Twitter suspended all by association with my main account that had access to all these different accounts via TweetDeck. (Or that’s the hypothesis, since I could find no other link across all 12)


The natural next step is to require bot labels for bots. Suddenly machine learning models that detect and ban unlabelled bots will be feasible. THAT will have a big impact on "healthy conversations", which seems to be their goal.


This move seems designed specifically to forestall calls for that requirement. Consider: Twitter are first pointing out that bots can be good, IOW "we're not banning them outright". Then they're introducing this non-mandatory bot label, which makes it seem like they're aware of bots and working on ensuring they can keep doing all the good things they do.

Of course, none of this addresses malicious bots in any way. In effect, this is Twitter showing they're aware of their bot problem, but unwilling or unable to do anything about it, and are therefore doing a misdirection as a PR effort.


To me (who doesn't use Twitter) this seems like a fair solution. If you are running a legitimate bot I don't see what you have to hide. It's only people trying to be intentionally deceptive that have to worry.


Twitter makes its money as a marketing engine. Who can say what marketing is deemed intentionally deceptive? I'm not a Twitter user either but it seems to me an obvious question.


Most bots aren't marketing bots, they're sort of... Goofs, or utilities, or projects. Like @MyDudes_, which posts a picture of a frog every Wednesday. Or alttextbot, which if you follow it, DMs you when you've posted a picture without alt text. Or wwiiInRealTime. Or wintbot_neo, which is inexplicable outside of twitter but serves only comedic purpose.


> Most bots aren't marketing bots,

Wouldn't you need to know how many marketing bots there are to determine that?

Making statements about good bots/(good bots + bad bots) is difficult when the good bots already make it clear they're bots, and the bad bots are misleading.


> Most bots aren't marketing bots, they're sort of... Goofs, or utilities, or projects.

In my experience those kinds of boutique/indie bots appear to make up a tiny percentage of total bots. If you have data that shows otherwise, I'd appreciate if you could share it.


They're the kind people engage with. There's obviously a lot more astroturfing and disinformation, but most corporate marketing on twitter is done by humans. Steak-Umm, Oreo, Microsoft, Wendy's, even indie game publishers have a human running their accounts.


You seem to be anonymous here on HN, why don't you use your full name instead if you have nothing to hide and you're not intentionally deceptive?

No, privacy doesn't work like that. Even people who are not trying to deceive others are deserving of privacy as well.

Most of my accounts (except one) on the internet are anonymous, as I don't want people to tie anything I do for fun on the internet to my real-life persona. It doesn't mean that I want to deceive people, only that I want my meatspace life to be something else than my online life.


Your bot could be anonymous, the only thing you'd be forced to reveal is that it was a bot.

Just a [bot] marker of some kind next to the tweets that were posted by a bot. That's it. Still anonymous.

I could tell you this comment was posted by a bot, that would reveal nothing about me.


There's plenty you could have to hide as a user on HN. But what reason does a bot have to hide the fact that it's a bot?


Anonymity is fine, they were saying that bots should be labeled as bots.


But a bot isn't a person. And you don't have to reveal who made the bot, just that it's a bot. Can you give me a good use case for a bot that is not some kind of manipulation and would still benefit from humans not knowing it's a bot? Usually you want people to know, so they aren't confused by why the account acts weird, i.e. non-human.


I don't see how "I am a bot, beep boop" is a privacy issue.


You people and your absolutes. Nobody is saying that every anon is deceitful and untrustworthy- just that you need to make sure your BS detector is working.


> privacy doesn't work like that

Except we aren't discussing private conversations. We are discussing broadcast conversations on twitter.


Without tying your anonymity to something else (whether it’s another anonymous account or a real world identity) it’s impossible to determine whether you want to deceive or not, no?

In any case real identities have been a failure anyway


This is what Discord does.

A bot looks exactly like a normal user, except for a [BOT] label after the name. Works like a charm.


But on Twitter, bots aren’t only bots. For example, I have bot accounts that auto-tweet things, but then I can log in to those accounts and manually tweet.

Because of that I’d think the label would have to be applied to individual tweets and not to the profile.


AFAIK the Fediverse requires this (on mastodon instances).


I really appreciate that some masto/fedi clients have the option to indicate bot accounts. Tusky adds a little robot icon to the profile pic of bot accounts, which helps when deciding whether/how to interact.
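For reference, Mastodon's API exposes this as a boolean `bot` field on the account entity, so a client can render the badge from a single flag. A hedged sketch (the account JSON below is a trimmed, made-up example of such an entity, not a real API response):

```python
import json

# Trimmed, hypothetical Mastodon account entity; the real one has many more fields.
account_json = '{"acct": "hourlyfrogs@example.social", "display_name": "Hourly Frogs", "bot": true}'

def badge(account: dict) -> str:
    """Return a display name, with a marker if the account self-declares as a bot."""
    name = account.get("display_name") or account["acct"]
    return f"[BOT] {name}" if account.get("bot") else name

print(badge(json.loads(account_json)))  # → [BOT] Hourly Frogs
```

Because the flag is self-declared and machine-readable, every client can decide independently how prominently to show it.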


This is how I see it too. Twitter is boiling the frog on malicious bot accounts.

Bots that want you to think they're human will end up with "bot verified" badges that undermine the strength of their opinion, or they will be banned.


> The natural next step is to require bot labels for bots.

Great! This would convince me to use Twitter again.


Only on Twitter is 'bot' now a loaded term the masses understand. In every other medium you could already (and many did) let some software post automatic posts (most benign example: "Here's my new blog post: xxxx") or scripts on irc.

Where is the line? Technically scheduled tweets are automatic, and so, by some definition, bots.


"good" implies there is a "bad", but both are subjective... for whom?

I thought the same about Google Bot scraping websites, many websites would argue that this is universally good. In fact at Cloudflare the Google Bot was on a system wide allow list as it was "good" (in the older and now long deprecated original system years ago).

But "good" from one perspective isn't "good" from all perspectives, it's subjective.

Google Bot isn't good to the site that contains private information, launched too early without its security in place, and was scraped and had the private data exposed.

Bots are just machines, and machines aren't "good" or "bad", the actions they perform aren't even "good" or "bad"... it remains relative to each person as to whether they are good or bad.

Even with bots on Twitter, for some any bot is bad, or advertising bots are good, or they're bad. Creating the idea of a "blue tick for automated accounts" doesn't seem like the smartest move and just brings in the same problem that the blue tick has with humans.

I'm glad they're thinking about the problem, but a binary solution to a subjective and relative problem feels more like a regression than a step forward.

Calling these "known" bots would be a better approach, especially if tools to filter tweets by specific known bots existed.


As far as I can see, only the BBC article is claiming Twitter is trying to label bots as "good" or "bad". The actual system appears to be for linking bot accounts to real accounts (i.e. who made this bot) or simply labelling an account as "automated" to give better transparency, which to me sounds like an overall positive feature.


The answer for bots is the same as for dogs: whoever we call good.

It's going to change over time, sometimes quickly. Today bots made by individuals and labeled as bots that once per hour makes some clever markov speech tweet are good. Bots that spam reply to all tweets of a pattern and pretend to be humans are bad.

Battles will get fought in the very large grey zone but the entity that gets the final say on "good enough" is the owner.

http://www.threepanelsoul.com/comic/dog-philosophy


IMHO the idea of accepting some bots in robots.txt (e.g. Google Bot) while not accepting others (e.g. a new Google-business-killer bot) clashes with Internet neutrality.


Ultimately technology cannot really solve this problem. People need to either learn to use Twitter less (or not at all) or find a way to be more rational and skeptical about it. I'm skeptical that this is possible, though, as even genuinely intelligent people seem to become swept up in outrage and narrative as easily as anyone else.


One of the smartest people I know, who is so rational and skeptical against unknown information, will go on Twitter and see a 30 second clip that's totally out of context from some random person, but hits his confirmation bias, and he'll share it to the world.

So yeah I'm not optimistic about the future either.


>Google Bot isn't good to the site that contains private information, launched too early without its security in place, and was scraped and had the private data exposed.

What does Google have to do with this scenario? If someone puts their private information on the internet, then they shouldn't be surprised that their private information is on the internet.


There’s a difference between “I am in public so taking a photo of me is fair game” and paparazzi gathering around a front door with a dozen cameras snapping flash photos the moment both feet are outside while goading the person verbally because their income depends on a saleable picture.

“It’s information in public so someone might see” vs “Google continuously maximising the damage of a short term mistake, testing every door round the clock for accidental unlocks, because it’s in their financial interest”

If the internet is a hostile place where mistakes can harm you, the harm comes from the people acting in a hostile way rather than a friendly, supportive way, and on this spectrum Google is the indifferent machine rather than the kindly human janitor exercising discretion and judgement.


It’s good that they’re thinking about the problem but I wonder if this will have an accidental negative effect due to it being honour based:

1. User1 reads something that sounds pretty spammy from Bot1

2. Bot1 is not a “good bot” and thus doesn’t disclose that it is a bot.

3. User1 checks if spammy post was from a bot and sees that it’s not declared itself as one.

4. User1 then assumes that Bot1 is a human and trusts its Tweet more than it would have before the bot status was a thing.


I was wondering why labeling bots isn't mandatory, and can't think of a reason not related to Twitter not wanting the moderation headache.


How does Twitter tell which accounts are bots?


There are a lot of heuristics Twitter could use to estimate likelihood of bot accounts:

1) Source IP range + number of accounts linked

2) interaction speed (some actions are too fast to be human, or show coordination, like tweeting the exact same tweet as a different account within 0.2 seconds)

3) interaction times in a rolling window (individual humans need sleep, and usually around the same time)

4) IP packet fingerprinting: can flag discrepancies like a client seemingly tweeting from "Twitter for iPhone" but originating from a Linux/*BSD/Windows host
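As a rough illustration of how heuristics 2 and 3 could be combined, here's a toy scoring sketch; the thresholds are invented for the example and are certainly not Twitter's real values:

```python
from datetime import datetime

def bot_score(timestamps, min_gap_s=0.5, awake_hours=18):
    """Toy heuristic: score 0..2 from inter-tweet gaps and hours-of-day coverage.

    timestamps: sorted list of datetime objects for one account's tweets.
    min_gap_s and awake_hours are illustrative thresholds, not real values.
    """
    score = 0
    # Heuristic 2: any gap faster than a human could plausibly type/click
    gaps = [(b - a).total_seconds() for a, b in zip(timestamps, timestamps[1:])]
    if gaps and min(gaps) < min_gap_s:
        score += 1
    # Heuristic 3: humans sleep; activity spread over nearly all 24 hours is suspicious
    hours_active = {t.hour for t in timestamps}
    if len(hours_active) > awake_hours:
        score += 1
    return score

# Example: an account that tweets in every hour of the day looks bot-like
print(bot_score([datetime(2021, 9, 10, h) for h in range(24)]))  # → 1
```

A real system would combine many more signals and feed them into a trained classifier rather than hard-coded thresholds.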


Isn't it really amazing that no malicious bot author thought about these before...


I'm sure they have, but not all of them have the means (and time) to bypass some of the measures; defeating single-source-IP[1] bots and script kiddies can cut down on the number of bots.

It's an arms race that I've fought on both sides of: deploying anti-abuse measures and writing a very well-behaved scraper. Unfortunately, some sites block all scrapers regardless of their behavior, and you have to figure out empirically how the detection is done and enjoy your short-lived victories while they last. On the defensive side, you continually have to update your detection heuristics and enjoy the same short-lived victories.

1. I also have a read-only Twitter archiving bot that doesn't hide its nature (the handle is "${NOUN}bot"). It runs from a single VPS at regular cron intervals. It'd be trivial for Twitter to detect this and make it not worth my while.


They can't, but repetitiveness of content and hours of posting are fairly good heuristics.


Those heuristics apply to company twitter accounts controlled by humans.


I'm sure they have an API bots can use.


The API doesn't know whether there's a human behind it or not. It's the same API used for third-party clients.


Easily detected by IP ranges.


If I'm using my VPN, my Twitter bots will be coming from the same IP range as me using Tweetbot or the Twitter website.

Even without the VPN, I occasionally have a bot that posts from a home machine with the same IP range as Tweetbot et al.


because obviously there's no way to obfuscate an IP address

/s


How would you differentiate between a human sending a tweet via the API and a bot?


User-agents


- User-Agent is an optional HTTP header

- RESTful APIs don't even expect a User-Agent

- User-Agent is customisable (i.e. bad bots can trivially use an array of seemingly authentic strings)

- Even Google are moving away from using User-Agent for identification.
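To see just how weak that signal is: any HTTP client can claim any User-Agent, since the header is purely self-reported. A quick sketch (the endpoint is a placeholder; no request is actually sent):

```python
import urllib.request

# Any script can claim to be any client; the header is purely self-reported.
# (api.example.com is a placeholder URL; the request is never opened.)
req = urllib.request.Request(
    "https://api.example.com/tweet",
    headers={"User-Agent": "Twitter for iPhone"},
)
print(req.get_header("User-agent"))  # → Twitter for iPhone
```

One line of configuration, and the "client" string says whatever the bot author wants it to say.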


Politics


That does help at times, yes. When politics encompasses propaganda messaging to achieve a specific outcome or condition, it's possible to study the social media output and see when political messages all pop up in a coordinated way. And map out the network, particularly if you can sort out the provenance of key accounts and know where they're coming from, and identify which accounts are most disciplined about amplifying and laundering the messaging.

As such, you can indeed spot bots through political agenda :)


Item 4 already happens on a large scale, so I don't think it counts as a downside. Also, 'bot' implies it's a computer or machine.

The behavior of concern here is Twitter accounts operating in service of somebody or something else, which might or might not be operated by a human, script, AI, or what have you. If they're operated by a human but the human's following its own script or administering a farm of puppet accounts, it'd still qualify even if the human gets to improvise on its message a bit.

A more accurate term might be 'agent', which also describes the day jobs of some of the people working on Twitter :)


I’m not sure if you’ve read the article or not but a lot of what you’ve posted seems to miss the point of it (and thus my post too). Eg

> Item 4 already happens on a large scale,

The article states that some people are skeptical about whether to trust tweets because they don’t know whether it’s a bot or not. My post was expanding off that premise saying an optional label doesn’t solve that problem, it makes it worse.

> The behavior of concern here is Twitter accounts operating in service of somebody or something else, which might or might not be operated by a human, script, AI, or what have you.

The article puts puppet accounts under the same umbrella as “bots”. My post was continuing on from that premise.

> A more accurate term might be 'agent',

Maybe, but arguing semantics completely misses the point of the discussion.

> which also describes the day jobs of some of the people working on Twitter as their primary job :)

We aren’t talking about people who work for Pizza Hut managing their Twitter presence. We are talking about large-scale use of accounts pretending to be an average Joe but controlled by a single entity as part of a larger misinformation campaign. It might be a human who posts the messages, but the person controlling those accounts isn’t who the accounts report themselves to be, and neither is there a 1:1 relationship between account and human. Calling them “agents” legitimises it, as if they’re people representing an organisation, but they’re not; it’s effectively still a network of bots, except you’ve replaced software with “fleshware”.

I mean sure, if you want a different term to make the distinction, then I have no issue with that. But the point of the discussion isn’t what we name these types of accounts; it’s how we make readers aware of when they’re reading an opinion vs an undercover piece of promotion. It’s kind of similar to how some jurisdictions mandate that paid promotions be labelled as adverts rather than reviews.


I think we agree more than we disagree, here :)

I've got no quarrel with anything you've said, except: item 4. In the parent post, your exact criticism was:

user reads info, user checks to see if it's labeled 'Officially A Bot', user does NOT find that label, and then item 4, user trusts info that harms them.

I'm sorry, I have a darker view of all this. The whole point is to push info (whether it's secret paid product promotion, political propaganda, or literal advocacy of genocide) into normalized social discussions and past any initial network of bots, and the whole point is that it's unreasonably effective on social networks where strangers can seem like 'friends'.

Obvious bots vs. surreptitious or human-controlled 'bots' is an almost meaningless distinction here. Also, you're correct in that 'agent' can be like a corporate 'registered agent' but also can imply 'secret agent', as in hostile foreign spy who doesn't have your best interests at heart. An intelligent operative trying to hurt you, not simply a 'robot' with no interests at all.

We're talking about and agreeing on all the same things. I'm saying the real key to managing the damage of this is not 'label things a bot', the real key is to broadly communicate the idea, 'this whole platform is trying to maximize social contagion and make you feel like large numbers of strangers are your trusted personal friends'.

And they're not, they're really, really not.


Another thing that will cause problems: "good bot" by whose standards?


Is this not about financials above anything else?

"As many as 48 million Twitter accounts aren’t people" (2017)

https://www.cnbc.com/2017/03/10/nearly-48-million-twitter-ac...

"How Twitter Makes Money"

https://www.investopedia.com/ask/answers/120114/how-does-twi...

"Twitter by the Numbers"

https://www.omnicoreagency.com/twitter-statistics/

"80% of Twitter users are affluent millennials." LOL


Twitter seems to be cracking down on bot-like behavior. I recently had my twitter account suspended.

It seems to be related to using the new feature that suggests you follow a group of related people with one click, e.g. digital artists. It seems I followed too many people in a short period of time. Make up your mind, Twitter: do you want me to follow people or not?


That's hilarious. That's literally bad-actor tools, asking you to use them, and then punishing you for it.

I wonder if that tool is intentional bait. You don't actually know if Twitter itself is acting in good faith there, or if it's a trap: a tool seemingly made to help bot networks, but which would show Twitter exactly who fell for it and set up their bot networks with a few convenient (too convenient?) clicks.

It's a very interesting question, what side Twitter is on. I'm guessing it's on Twitter's side and Twitter's side alone, much like Facebook. If you're a bad actor and they're 'helping' you, it may be a trap.


My friend's company is actually struggling with a "bot attack" at this very moment. Someone has about 100 bots who are tweeting malicious and untrue messages about the company. It's really obvious that these are bot accounts, but there doesn't seem to be a way to get rid of them. We've tried reporting them and contacting Twitter, but they are still there, tweeting away.

These are obviously 'bad' bot accounts, but there's no indication of when, if ever, Twitter will get to them. Does anyone have any suggestions on how to handle such a scenario?


Mastodon's had this feature for a long time, and it's worked out well over there.


SHAMELESS PLUG: I have two Hacker News bots.

I have the word 'bot' in the description as well as the handle

It would be cool to get a friendly label

https://twitter.com/HN_Vimmy_Bot?s=09

https://twitter.com/HN_Kotlin_Bot?s=09


I’m pretty sure the good bots already identify themselves as such. Or make themselves obvious e.g. @PossumEveryHour


We all know "good" will equal "what Twitter's owners like". Everything else can only be "bad bots".


I see twitter is slowly catching up with Mastodon, beyond just slated plans for "decentralization".


their "slated plans" are nothing more than noise ("vaporware" is what this stuff used to be called back in the day).


I run a couple everylot bots. They've been shut down a few times, but always reinstated after an email. This seems like a fairly silly measure, but I'll probably opt-in if it reduces the chances of that happening again.


Ok, so there are good bots and bad bots, where can I find those naughty bots?


Come to Amsterdam...


Does convincing Twitter that you are a bot improve privacy?



lol, that's a good one. I don't know how you can interpret either "verification" or this new bot labeling system as anything but arbitrary.


[flagged]


The continued survival of the NATO alliance surely rests on whether people get timely updates about delays on the Hammersmith and City line.


What sinister Atlanticist goals do earthquake updates serve?

It's an entirely opt-in system. Nothing is stopping other bots from opting in as well. The only ones who wouldn't would be the ones trying to pretend they're not bots.


Earthquake updates, nobody cares about it. It's not controversial. Suppose there's a bot that responds to every comment with positive sentiment about ivermectin with "You are not a horse" with a link to WHO guidelines. Would Twitter call that a good bot or a bad bot?

Suppose there was a bot that responded to every suggestion that Trump called white supremacists "very fine people" with a link to a video showing he didn't. Would Twitter call that a good bot or a bad bot?


> responded to every

That's spam.

A fairly good non-partisan bot rule would be that bots would only respond when @'d, or just post updates to something on their own TL. e.g. https://twitter.com/choochoobot or https://twitter.com/bitsofjupiter


> Suppose there's a bot that responds to every comment with positive sentiment about ivermectin with "You are not a horse" with a link to WHO guidelines. Would Twitter call that a good bot or a bad bot?

It would not only be classified as a bad bot (if unsolicited), but it would violate Twitter's automation rules [0]

0: https://help.twitter.com/en/rules-and-policies/twitter-autom...


[flagged]


> on this prison platform

Unlike prison, you’re free to leave Twitter any time you’d like. I did. The world kept spinning all the same.


Don't worry, I never had an account there; I'm just observing this insane platform from a sane distance!


good bots = biden bots, blm bots, chinese bots, and global warming bots.

bad bots = republican bots and russian bots.


You are, I hope, aware that 'BLM bots' can BE Russian bots, intentionally seeding misinformation explicitly for other networks to react to? I daresay that's true for any of it.

This is why I often use the term 'organic' to describe social media content that's not spurious. There's organic content, and that alone can be a handful.

Then there's stuff specifically cooked up to drive people into a frenzy and feed social hysteria, driving wedges and disrupting a targeted population. There is NO reason, on a publicly available platform like Twitter, that the hostile content has to self-identify as the target group for manipulation. Provocation of outrage is often more effective by pretending to be a demographic marked for destruction, and then carrying on in an outrageous fashion for the benefit of angering onlookers.

And none of that is good, from any angle: not for general societal well-being. Unless you really like genocides and civil wars… which is where your Global Superpower Adversary bots come in, with an obvious and easy to understand agenda.

And that's where we're at, in this discussion.


There are NO good bot accounts. Bots are never good. Period.




Let me guess, these are the good bot accounts?

https://imgur.com/a/MNRYcSE

Alt text, for those who (rightfully) don't want to click is, "I just left the ER. We are officially crushed by COVID..."

Three accounts posting the exact same thing.


Thousands of similar examples to this, whatever spooks run these jobs are obnoxiously sloppy - almost intentionally so.

"Sure, you caught us. What are you gonna do about it?"


All Twitter accounts are run by humans using software.

This idea that there are “bot” accounts and “real” accounts is nonsense. Until AGI can create its own account, every account on Twitter is some real person posting, via some form of software.


The term "bot" isn't something created by Twitter. Bots have been discussed from IRC to web forums, and it's universally agreed that a human interacting with the service (which will always be via software, because we're not talking about physical communications here) is different from a headless script posting either generated or curated content automatically.

Sure, there will be some nuances where it's not so clear-cut. But to say that there is no such concept as a "bot" is ridiculously misinformed about any of the tech released in the last 30 years of electronic communication.


Humans make very good script engines, too. If you need to put out a script but vary it to get past simple algorithmic spam filtering, you can't beat running it through a good hardworking human who can type real fast and think on the fly. That human's doing the job of a bot, for all practical purposes: 'bot' is a job description, and humans can easily outperform computers for some bot purposes.

For now :)


This is some kind of weird reductio ad absurdum nonsense.

You, sneak, are not that stupid. I've seen you posting before. I know you're not.

So honestly: What exactly are you trying to get at, here?


My point: this is just another form of Twitter's escalating censorship. I was speaking obliquely because I repeat the phrase "Twitter censorship" far too often these days. :(

All of these "bot" accounts belong to a human being, and these human beings are posting on purpose (via their bot software).


I don’t think this kind of response is good for hacker news. It’s just rudeness masked by being patronising.


The original comment by sneak was seemingly deliberately obtuse, which is itself a form of rudeness.


Yes. But as dang says every so often, don't reply to a bad comment with another bad comment. This "I know you're not that stupid" thing is obviously not necessary. Leave that out and we wouldn't be having this conversation about what's acceptable and what's not.


That's a good point. That said, 'I know you're not that stupid' is one way of addressing a concern: it's a more humanized, more personal way of calling out a significant criticism that's worth making.

Another would be, "It looks like you are trying to plant the assumption that there are no possible Twitter users serving as puppets directed by others. Taking your statement at face value with the best-faith interpretation would lead to abandoning the whole discussion and trusting whatever goes on at Twitter to be just plain organic population, but we're pretty sure that's not the case. Why are you making statements that strongly suggest all of Twitter is just an organic userbase, and minimizing the very thing we're talking about? We're literally talking about bots here."

That's a lot wordier than 'I know you're not that stupid', and a lot less personal. Is it better? In some ways it implies the bot-minimizer is actively taking a role as a bad actor, not merely being foolish. I think such actors exist and also post on Hacker News and more or less everywhere else, but it's a nastier accusation and 'you're not that stupid, so what gives?' is a much gentler way of addressing it.

Especially when talking about use of, and restraint of, social media bot networks (AS THE PRIMARY TOPIC, I might add), we have to be able to talk about good faith and bad faith. It's literally the point of the whole discussion.


It's neither rude nor patronising. The guy isn't an idiot, and I'm genuinely curious about what he thinks -- I don't think I know better than him, I think he has made a rash comment.


Taking your comment in good faith, there are already a set of rules with regards to automation on twitter [0] which might help in understanding their usage of the word "bot" in that context.

0: https://help.twitter.com/en/rules-and-policies/twitter-autom...


Your definition of a bot account differs from the actual one; there is no requirement for AI. A bot account posts based on an automated script.

If I were to use your definition, is an account created by an AGI not the same? The AGI, like the automated script, was made by a person.


Consider chess. How much can I claim that "I won the chess championship" if I use Stockfish to play in my stead? True, in a pedantic fashion, I am the one who booted Stockfish up, so in a way I contributed to my matches, but this isn't a measurement of my chess skill. I cannot claim that I am good at chess if I hire someone else to play for me, and the same is true when I delegate my moves to an artificial intelligence.

Now take that into Twitter context. Can you claim that "I am a good tweet author" when I'm not the one who decides what's written on my feed but someone else? Can you even claim that you interacted with me personally at all?


Microsoft's chatbot that started racist talk.

https://spectrum.ieee.org/in-2016-microsofts-racist-chatbot-...



