Twitter suspended 12 of my accounts, most of them bot accounts and some belonging to community organizations, with no explanation, apparently only because I had all of them linked to my main account on TweetDeck.
They un-suspended most of them after a few months of follow-up emails (and no human response), and then the automatic suspension happened again a few weeks later.
I have given up on Twitter as a development platform, and now prefer running these bots on open platforms such as RSS instead.
If Twitter wants to do this right, they must publish clear guidelines for developers to follow and fix their bot-detection algorithms to avoid stupid stuff like this.
I am one of the moderators of the /r/Twitter subreddit, and based on community sentiment I have come to the conclusion that Twitter is at its core a user-hostile organization, and seems unable to change this at all.
Use the Fediverse please. It is the ONLY ethical alternative.
edit: I know the Twitter comms people lurk here. I've tried multiple times to get you to contact me and have hit a wall. You can modmail my sub. Your engagement is requested.
Maybe the change is not reddit, maybe it's you growing up?
For some reason I still waste my time there trying to share opinions as a HK resident: no, China is not just "evil"; no, Hong Kong is not simply "oppressed"; yes, the old idiots, communists or not, are all the same conservative cowards as anywhere else; no, we don't seem to really want a bloodbath, so far.
You can imagine the amount of weird, obsessive hatred I get for this opinion, from people who've never been to China :D
Could you provide us a lens for understanding how the treatment of Uyghur Muslims in China isn't objectively "evil" then? Would love to understand how us Westerners are getting that one wrong.
Just curious about having that many - I have, I think, 8 right now, but there seems to be a limit (of 10?) due to how many times you can re-use a phone number. Do you get around that by using multiple google voice numbers or something?
Multiple numbers, and not all of these were mine. I usually provide my number for sign-up, then remove it from the linkage in settings.
Twitter suspended all by association with my main account that had access to all these different accounts via TweetDeck. (Or that’s the hypothesis, since I could find no other link across all 12)
The natural next step is to require bot labels for bots. Suddenly machine learning models that detect and ban unlabeled bots become feasible. THAT would have a big impact on "healthy conversations", which seems to be their goal.
This move seems designed specifically to forestall calls for that requirement. Consider: Twitter are first pointing out that bots can be good, IOW "we're not banning them outright". Then they're introducing this non-mandatory bot label, which makes it seem like they're aware of bots and working on ensuring they can keep doing all the good things they do.
Of course, none of this addresses malicious bots in any way. In effect, this is Twitter showing they're aware of their bot problem, but unwilling or unable to do anything about it, and are therefore doing a misdirection as a PR effort.
To me (who doesn't use Twitter) this seems like a fair solution. If you are running a legitimate bot I don't see what you have to hide. It's only people trying to be intentionally deceptive that have to worry.
Twitter makes its money as a marketing engine. Who can say what marketing is deemed intentionally deceptive? I'm not a Twitter user either but it seems to me an obvious question.
Most bots aren't marketing bots, they're sort of... Goofs, or utilities, or projects. Like @MyDudes_, which posts a picture of a frog every Wednesday. Or alttextbot, which if you follow it, DMs you when you've posted a picture without alt text. Or wwiiInRealTime. Or wintbot_neo, which is inexplicable outside of twitter but serves only comedic purpose.
Wouldn't you need to know how many marketing bots there are to determine that?
Making statements about goodbots/(goodbots+badbots) is difficult when the good bots already make it clear they're bots, and the bad bots are misleading.
> Most bots aren't marketing bots, they're sort of... Goofs, or utilities, or projects.
In my experience those kinds of boutique/indie bots appear to make up a tiny percentage of total bots. If you have data that shows otherwise, I'd appreciate if you could share it.
They're the kind people engage with. There's obviously a lot more astroturfing and disinformation, but most corporate marketing on twitter is done by humans. Steak-Umm, Oreo, Microsoft, Wendy's, even indie game publishers have a human running their accounts.
You seem to be anonymous here on HN, why don't you use your full name instead if you have nothing to hide and you're not intentionally deceptive?
No, privacy doesn't work like that. Even people who are not trying to deceive others are deserving of privacy as well.
Most of my accounts (except one) on the internet are anonymous, as I don't want people to tie anything I do for fun on the internet to my real-life persona. It doesn't mean that I want to deceive people, only that I want my meatspace life to be something else than my online life.
But a bot isn't a person. And you don't have to reveal who made the bot, just that it's a bot. Can you give me a good use case for a bot that is not some kind of manipulation and would still benefit from humans not knowing it's a bot? Usually you want people to know, so they aren't confused why the account acts weird, i.e. non-human.
You people and your absolutes. Nobody is saying that every anon is deceitful and untrustworthy, just that you need to make sure your BS detector is working.
Without tying your anonymity to something else (whether it’s another anonymous account or a real world identity) it’s impossible to determine whether you want to deceive or not, no?
In any case, real-identity policies have been a failure anyway.
But on Twitter, bots aren’t only bots. For example, I have bot accounts that auto-tweet things, but then I can log in to those accounts and manually tweet.
Because of that I’d think the label would have to be applied to individual tweets and not to the profile.
I really appreciate that some masto/fedi clients have the option to indicate bot accounts. Tusky adds a little robot icon to the profile pic of bot accounts, which helps when deciding whether/how to interact.
Only on Twitter is 'bot' now a loaded term the masses understand. In every other medium you could already (and many did) let some software post automatic posts (most benign example: "Here's my new blog post: xxxx") or scripts on irc.
Where is the line? Technically scheduled tweets are automatic, and so, by some definition, bots.
"good" implies there is a "bad", but both are subjective... for whom?
I thought the same about Google Bot scraping websites; many website owners would argue that this is universally good. In fact, at Cloudflare the Google Bot was on a system-wide allow list because it was "good" (in the older, now long-deprecated original system years ago).
But "good" from one perspective isn't "good" from all perspectives, it's subjective.
Google Bot isn't good to the site that contains private information, launched too early without its security in place, and got scraped and its private data exposed.
Bots are just machines, and machines aren't "good" or "bad", the actions they perform aren't even "good" or "bad"... it remains relative to each person as to whether they are good or bad.
Even with bots on Twitter, for some any bot is bad, or advertising bots are good, or they're bad. Creating the idea of a "blue tick for automated accounts" doesn't seem like the smartest move and just brings in the same problem that the blue tick has with humans.
I'm glad they're thinking about the problem, but a binary solution to a subjective and relative problem feels more like a regression than a step forward.
Calling this "known" bots would be a better thing, especially if tools to filter tweets by specific known bots existed.
As far as I can see only the BBC article is claiming Twitter is trying to label bots as "good" or "bad". The actual system appears to be for linking bot accounts to real accounts (ie. who made this bot) or simply label an account as "automated" to give better transparency, which to me sounds like an overall positive feature.
The answer for bots is the same as for dogs: they're who we call good.
It's going to change over time, sometimes quickly. Today, bots made by individuals, labeled as bots, that post a clever Markov-chain tweet once per hour are good. Bots that spam replies to all tweets matching a pattern while pretending to be human are bad.
Battles will get fought in the very large grey zone but the entity that gets the final say on "good enough" is the owner.
IMHO the idea of accepting some bots (e.g. Google Bot) while not accepting others in robots.txt (e.g. a new Google-business-killer bot) clashes with Internet neutrality.
Ultimately technology cannot really solve this problem. People need either to learn to use Twitter less (or not at all), or to find a way to be more rational and skeptical about it. I'm skeptical that this is possible, though, as even genuinely intelligent people seem to get swept up in outrage and narrative as easily as anyone else.
One of the smartest people I know, who is so rational and skeptical against unknown information, will go on Twitter and see a 30 second clip that's totally out of context from some random person, but hits his confirmation bias, and he'll share it to the world.
So yeah I'm not optimistic about the future either.
> Google Bot isn't good to the site that contains private information, launched too early without its security in place, and got scraped and its private data exposed.
What does Google have to do with this scenario? If someone puts their private information on the internet, then they shouldn't be surprised that their private information is on the internet.
There’s a difference between “I am in public so taking a photo of me is fair game” and paparazzi gathering around a front door with a dozen cameras snapping flash photos the moment both feet are outside while goading the person verbally because their income depends on a saleable picture.
“It’s information in public so someone might see” vs “Google continuously maximising the damage of a short term mistake, testing every door round the clock for accidental unlocks, because it’s in their financial interest”
If the internet is a hostile place where mistakes can harm you, the harm comes from the people acting in a hostile way rather than a friendly, supportive way, and on this spectrum Google is the indifferent machine rather than the kindly human janitor exercising discretion and judgement.
There are a lot of heuristics Twitter could use to estimate likelihood of bot accounts:
1) Source IP range + number of accounts linked
2) interaction speed (some actions are too fast to be human, or show coordination, like tweeting the exact same tweet as a different account within 0.2 seconds)
3) interaction times in a rolling window (individual humans need sleep, and usually around the same time)
4) IP packet fingerprinting: can flag discrepancies like a client seemingly tweeting from "Twitter for iPhone" but originating from a Linux/*BSD/windows host
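The heuristics above could be combined into a rough risk score. A minimal sketch, assuming entirely made-up thresholds, weights, and feature names (this is purely illustrative, not how Twitter actually scores accounts):

```python
from dataclasses import dataclass


@dataclass
class AccountActivity:
    """Hypothetical per-account features an anti-abuse pipeline might collect."""
    accounts_on_same_ip: int        # heuristic 1: accounts sharing a source IP range
    min_action_gap_seconds: float   # heuristic 2: fastest observed gap between two actions
    active_hours_per_day: float     # heuristic 3: hours active in a rolling 24h window
    client_os_mismatch: bool        # heuristic 4: claimed client vs. fingerprinted OS


def bot_likelihood(a: AccountActivity) -> float:
    """Combine the four heuristics into a crude 0..1 score.

    Thresholds and weights are invented for illustration; a real system
    would learn these from labeled data rather than hard-code them.
    """
    score = 0.0
    if a.accounts_on_same_ip > 5:
        score += 0.3
    if a.min_action_gap_seconds < 0.5:  # too fast for human typing/clicking
        score += 0.3
    if a.active_hours_per_day > 20:     # humans need sleep
        score += 0.2
    if a.client_os_mismatch:            # e.g. "Twitter for iPhone" from a Linux host
        score += 0.2
    return min(score, 1.0)


# A coordinated, always-on account scores high; a normal user scores low.
suspicious = AccountActivity(10, 0.1, 23.0, True)
normal = AccountActivity(1, 2.0, 8.0, False)
```

The point of a weighted combination rather than any single rule is that each heuristic alone is easy to evade, but evading all of them at once raises the attacker's cost.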
I'm sure bot operators have thought of these, but not all of them have the means (and time) to bypass such measures; defeating single-source-IP[1] bots and script kiddies alone can cut down on the number of bots.
It's an arms race that I've fought on both sides of: deploying anti-abuse measures and writing a very well-behaved scraper. Unfortunately, some sites block all scrapers regardless of their behavior, so you have to figure out empirically how the detection is done and enjoy your short-lived victories while they last. On the defensive side, you continually have to update your detection heuristics, and your victories are equally short-lived.
1. I also have a read-only Twitter archiving bot that doesn't hide its nature (the handle is "${NOUN}bot"). It runs from a single VPS at regular cron intervals. It'd be trivial for Twitter to detect this and make it not worth my while.
That does help at times, yes. When politics encompasses propaganda messaging to achieve a specific outcome or condition, it's possible to study the social media output and see when political messages all pop up in a coordinated way. And map out the network, particularly if you can sort out the provenance of key accounts and know where they're coming from, and identify which accounts are most disciplined about amplifying and laundering the messaging.
As such, you can indeed spot bots through political agenda :)
Item 4 already happens on a large scale, so I don't think it counts as a downside. Also, 'bot' implies it's a computer or machine.
The behavior of concern here is Twitter accounts operating in service of somebody or something else, which might or might not be operated by a human, script, AI, or what have you. If they're operated by a human but the human's following its own script or administering a farm of puppet accounts, it'd still qualify even if the human gets to improvise on its message a bit.
A more accurate term might be 'agent', which also describes the day jobs of some of the people working on Twitter as their primary job :)
I’m not sure if you’ve read the article or not but a lot of what you’ve posted seems to miss the point of it (and thus my post too). Eg
> Item 4 already happens on a large scale,
The article states that some people are skeptical about whether to trust tweets because they don’t know whether it’s a bot or not. My post was expanding off that premise saying an optional label doesn’t solve that problem, it makes it worse.
> The behavior of concern here is Twitter accounts operating in service of somebody or something else, which might or might not be operated by a human, script, AI, or what have you.
The article puts puppet accounts under the same umbrella as “bots”. My post was continuing on from that premise.
> A more accurate term might be 'agent',
Maybe, but arguing semantics completely misses the point of the discussion.
> which also describes the day jobs of some of the people working on Twitter as their primary job :)
We aren’t talking about people who work for Pizza Hut managing their Twitter presence. We are talking about large-scale use of accounts pretending to be an average Joe but controlled by a single entity as part of a larger misinformation campaign. It might be a human who posts the messages, but the entity controlling those accounts isn’t who the accounts claim to be, and neither is there a 1:1 relationship between account and human. Calling them an “agent” legitimises it, as if it’s people representing an organisation, but it’s not: it’s effectively still a network of bots, except you’ve replaced software with “fleshware”.
I mean sure, if you want a different term to make the distinction then I have no issue with that. But the point of the discussion isn’t what we name these types of accounts, it’s how we make readers aware of when they’re reading an opinion vs. an undercover piece of promotion. It’s kind of similar to how some jurisdictions mandate that paid promotions be labelled as adverts rather than reviews.
I've got no quarrel with anything you've said, except: item 4. In the parent post, your exact criticism was:
user reads info, user checks to see if it's labeled 'Officially A Bot', user does NOT find that label, and then item 4, user trusts info that harms them.
I'm sorry, I have a darker view of all this. The whole point is to push info (whether it's secret paid product promotion, political propaganda, or literal advocacy of genocide) into normalized social discussions and past any initial network of bots, and the whole point is that it's unreasonably effective on social networks where strangers can seem like 'friends'.
Obvious bots vs. surreptitious or human-controlled 'bots' is an almost meaningless distinction here. Also, you're correct in that 'agent' can be like a corporate 'registered agent' but also can imply 'secret agent', as in hostile foreign spy who doesn't have your best interests at heart. An intelligent operative trying to hurt you, not simply a 'robot' with no interests at all.
We're talking about and agreeing on all the same things. I'm saying the real key to managing the damage of this is not 'label things a bot', the real key is to broadly communicate the idea, 'this whole platform is trying to maximize social contagion and make you feel like large numbers of strangers are your trusted personal friends'.
Twitter seems to be cracking down on bot-like behavior. I recently had my twitter account suspended.
It seems to be related to using the new feature that suggests you follow a group of related people with one click, e.g. digital artists. Seems I followed too many people in a short period of time. Make up your mind, Twitter: do you want me to follow people or not?
That's hilarious. That's literally bad-actor tools, asking you to use them, and then punishing you for it.
I wonder if that tool is intentional bait. You don't actually know if Twitter itself is acting in good faith there, or if it's a trap: a tool seemingly made to help bot networks, but which would show Twitter exactly who fell for it and set up their bot networks with a few convenient (too convenient?) clicks.
It's a very interesting question, what side Twitter is on. I'm guessing it's on Twitter's side and Twitter's side alone, much like Facebook. If you're a bad actor and they're 'helping' you, it may be a trap.
My friend's company is actually struggling with a "bot attack" at this very moment. Someone has about 100 bots who are tweeting malicious and untrue messages about the company. It's really obvious that these are bot accounts, but there doesn't seem to be a way to get rid of them. We've tried reporting them and contacting Twitter, but they are still there, tweeting away.
These are obviously 'bad' bot accounts, but there's no indication of when, if ever, Twitter will get to them. Does anyone have any suggestions on how to handle such a scenario?
I run a couple everylot bots. They've been shut down a few times, but always reinstated after an email. This seems like a fairly silly measure, but I'll probably opt-in if it reduces the chances of that happening again.
What sinister Atlanticist goals do earthquake updates serve?
It's an entirely opt in system. Nothing stopping other bots opting in as well. The only ones who wouldn't would be ones trying to pretend they're not bots
Nobody objects to earthquake updates; they're not controversial. But suppose there's a bot that responds to every comment with positive sentiment about ivermectin with "You are not a horse" and a link to WHO guidelines. Would Twitter call that a good bot or a bad bot?
Suppose there was a bot that responded to every suggestion that Trump called white supremacists "very fine people" with a link to a video showing he didn't. Would Twitter call that a good bot or a bad bot?
> Suppose there's a bot that responds to every comment with positive sentiment about ivermectin with "You are not a horse" with a link to WHO guidelines. Would Twitter call that a good bot or a bad bot?
It would not only be classified as a bad bot (if unsolicited), but it would violate Twitter's automation rules [0]
You are, I hope, aware that 'BLM bots' can BE Russian bots, intentionally seeding misinformation explicitly for other networks to react to? I daresay that's true for any of it.
This is why I often use the term 'organic' to describe social media content that's not spurious. There's organic content, and that alone can be a handful.
Then there's stuff specifically cooked up to drive people into a frenzy and feed social hysteria, driving wedges and disrupting a targeted population. There is NO reason, on a publically available platform like Twitter, that the hostile content has to self-identify as the target group for manipulation. Provocation of outrage is often more effective by pretending to be a demographic marked for destruction, and then carrying on in an outrageous fashion for the benefit of angering onlookers.
And none of that is good, from any angle: not for general societal well-being. Unless you really like genocides and civil wars… which is where your Global Superpower Adversary bots come in, with an obvious and easy to understand agenda.
All Twitter accounts are run by humans using software.
This idea that there are “bot” accounts and “real” accounts is nonsense. Until AGI can create its own account, every account on Twitter is some real person posting, via some form of software.
The term "bot" isn't something created by Twitter. Bots have been discussed from IRC to web forums, and it's been universally agreed that a human interacting with the service (which will always be via software, because we're not talking about physical communications here) is different from a headless script posting either generated or curated content automatically.
Sure, there will be some nuances where it's not so clear cut. But to say that there is no such concept of a "bot" is ridiculously misinformed about any of tech released in the last 30 years of electronic communication.
Humans make very good script engines, too. If you need to put out a script but vary it to get past simple algorithmic spam filtering, you can't beat running it through a good hardworking human who can type real fast and think on the fly. That human's doing the job of a bot, for all practical purposes: 'bot' is a job description, and humans can easily outperform computers for some bot purposes.
My point: this is just another form of Twitter's escalating censorship. I was speaking obliquely because I repeat the phrase "Twitter censorship" far too often these days. :(
All of these "bot" accounts belong to a human being, and these human beings are posting on purpose (via their bot software).
Yes. But as dang says every so often, don't reply to a bad comment with another bad comment. This "I know you're not that stupid" thing is obviously not necessary. Leave that out and we wouldn't be having this conversation about what's acceptable and what's not.
That's a good point. That said, 'I know you're not that stupid' is one way of addressing a concern: it's a more humanized, more personal way of calling out a significant criticism that's worth making.
Another would be: "It looks like you are trying to plant the assumption that there are no possible Twitter users serving as puppets directed by others. Taking your statement at face value with a best-faith interpretation would lead to abandoning the whole discussion and trusting whatever goes on at Twitter to be just plain organic population, but we're pretty sure that's not the case. Why are you making statements that strongly suggest all of Twitter is just organic userbase, and minimizing the very thing we're talking about? We're literally talking about bots here."
That's a lot wordier than 'I know you're not that stupid', and a lot less personal. Is it better? In some ways it implies the bot-minimizer is actively taking a role as a bad actor, not merely being foolish. I think such actors exist and also post on Hacker News and more or less everywhere else, but it's a nastier accusation and 'you're not that stupid, so what gives?' is a much gentler way of addressing it.
Especially when talking about use of, and restraint of, social media bot networks (AS THE PRIMARY TOPIC, I might add), we have to be able to talk about good faith and bad faith. It's literally the point of the whole discussion.
It's neither rude nor patronising. The guy isn't an idiot, and I'm genuinely curious about what he thinks -- I don't think I know better than him, I think he has made a rash comment.
Taking your comment in good faith, there are already a set of rules with regards to automation on twitter [0] which might help in understanding their usage of the word "bot" in that context.
Consider chess. How much can I claim that "I won the chess championship" if I use stockfish to play in my stead? True, in a pedantic fashion, I am the one who booted stockfish up so in a way I contributed to my matches, but this isn't a measurement of my chess skill. I cannot claim that I am good at chess because I hired someone else to play for me, and the same is true when I relegate my options to an artificial intelligence.
Now take that into Twitter context. Can you claim that "I am a good tweet author" when I'm not the one who decides what's written on my feed but someone else? Can you even claim that you interacted with me personally at all?
Edit: 12, not 13. Here's the relevant tweets: https://twitter.com/search?q=from%3Acaptn3m0%20twittersuppor...