Do governments want to influence public opinion, and do they actively work at it? You don't need to rely on intuition here. As one fun example, the current state of modern art is largely a product of the CIA using it as a propaganda weapon. [1] As the article mentions, it was pushed heavily by their "Propaganda Assets Inventory, which at its peak could influence more than 800 newspapers, magazines and public information organisations."
And governments in general seem more obsessed with information manipulation than ever before. Consequently, I find it only logical to assume that all social media is currently being heavily utilized by government-level actors to push their various propaganda and agendas. If one is optimistic, this even goes some way towards explaining the stupidity of social media -- propaganda is often quite ham-fisted.
Now pair this with AI. OpenAI recently demonstrated an AI capable of producing at least semi-workable longer-form articles. This [2] is a toy version of that, running on an intentionally crippled network. You give it the start of a writing prompt, and it completes the rest. I decided to give it part of what you wrote; in particular, I gave it everything up until "Imagine finding out that". This is what was produced:
"This is a conflict of interest, which should erode the trust that anyone would have on twitter. The man is simultaneously responsible for editorial decisions, while also serving in a brigade that "uses social media platforms such as Twitter, Instagram and Facebook ... to wage what the head of the UK military describes as “information warfare”.
Imagine finding out that the man responsible for the military's response had once been paid an anonymous $300,000 fee to write an op-ed. How can you trust him to do his job without that knowing?
Is there anything else that we can do? What can we do? We need a real-time response to any tweets or stories that are inaccurate or in breach of policy, where the facts are being misrepresented. I'm not an information manager but something needs to be done to stop this from happening, or at least give people an opportunity to understand what the fuck they are doing in their tweets."
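(For what it's worth, the site is just a hosted front-end, and roughly the same experiment can be reproduced locally. Here is a minimal sketch, assuming the Hugging Face transformers library and the small public GPT-2 checkpoint -- which model actually sits behind the site is my assumption:)

  # Minimal prompt-completion sketch. The "gpt2" checkpoint here is an
  # assumption; the hosted demo may use something slightly different.
  from transformers import GPT2LMHeadModel, GPT2Tokenizer

  tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
  model = GPT2LMHeadModel.from_pretrained("gpt2")

  prompt = "This is a conflict of interest, which should erode the trust"
  input_ids = tokenizer.encode(prompt, return_tensors="pt")

  # Sample a continuation; top-k sampling keeps the text varied rather
  # than always taking the single most likely next token.
  output_ids = model.generate(
      input_ids,
      max_length=200,
      do_sample=True,
      top_k=40,
      pad_token_id=tokenizer.eos_token_id,
  )
  print(tokenizer.decode(output_ids[0], skip_special_tokens=True))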
The generated paragraph has several glaring errors, but that is to be expected from an intentionally crippled toy model on a public site. Imagine the current state of the art. You could even train a second neural network to detect human-like writing and use it to automatically validate or reject the blurbs. This all becomes much easier if you restrict the training data to a specific agenda, and easier still on a platform like Twitter, where tweets continue to be heavily restricted in character count. Emulating a human is much easier in shorter posts.
I don't think we should ever start declaring one another to be bots, since that leads nowhere, but at the same time I suspect that getting a feel for any genuine consensus online may already be impossible. If it's not impossible yet, it will be soon enough. Yet another reason people must always remember to think for themselves, and only themselves... not that adopting a view because of its popularity would be logical, even if it were genuinely popular. It's also a major reason to check any emotion at the door. I find it interesting that the random AI-generated blurb above was aiming to emotionally incite.
> And governments in general seem more obsessed with information manipulation than ever before. Consequently, I find it only logical to assume that all social media is currently being heavily utilized by government-level actors to push their various propaganda and agendas.
There is so much Twitter bot activity around politics. It is deeply troubling to consider that the intelligence agencies could be actively involved in manipulating political discourse with the goal of influencing US elections.
EDIT: Surprising that a reply to this comment mentioning the JIDF was flagged dead within 5 minutes of being posted. I don't think I've ever seen an HN comment go dead that fast. The poster seems to have a history of being flagged (maybe it only took one report?), but in this context it's a bit unnerving.
I have to object here to your long, vaguely plausible stream of semi-coherent accusations. It adds nothing, derails the thread, and pollutes the discussion. What is your point?
I mean, states and manipulators do like to use Twitter bots, but Twitter bots and Twitter editorial decisions are rather distinct things. Likewise, the GP doesn't seem confused about whether states use manipulation, and your insinuation that they are seems disingenuous.
[1] - https://www.independent.co.uk/news/world/modern-art-was-cia-...
[2] - https://talktotransformer.com/