>If someone wanted to target an individual, or a small number of people, couldn't they already do this manually? And if they wanted to target a huge number of people, surely they would very quickly burn the credibility of the platforms they hijacked.
Yes, but you can add an incredible amount of credibility to claims if you use AI to deep-fake images of completely artificial people, populate social media profiles for them, generate photos of these individuals together in random settings, build a network of these accounts that follow and friend each other as well as real people, and organically feed claims out through them.
This sort of AI use - faking images, video, audio, and text - makes all of this much easier, much more believable, and far more scalable, whether for personal use or for hire.
You can already go on darknet markets and hire harassment services.
You can already go on various websites and order blackhat SEO that uses very 'dumb' software to spam generated content to obviously fake social media accounts, blog posts, etc. There are dozens and dozens of services that rent you a VPS with gobs of commercial software pre-installed (with valid licenses) specifically intended for these uses, and if you'd rather farm it out entirely, there are hundreds of providers on these forums selling packages: you supply minimal information, and days or weeks later they deliver a list of links to all the content they've created and posted.
With something like GPT-2, you can suddenly generate coherent sentences, tweets, short blog posts, and reviews trained on a specific demographic in the native language - rather than text written by someone with English as a third language and then reworded by software to pass Copyscape. Pair that with deep-faked images or video run through popular social media filters, and you can create much more believable social media presences that don't scream 'BOT'. It's no longer a picture of an attractive woman in a bikini under the name Donald Smith, friends only with other women in bikinis named "Abdul Yussef", "Greg Brady", "Stephanie Greg", and "Tiffany London" - the kind of account you constantly see sending friend requests on fb or following you on twitter/instagram because you used #love in a post.
Software like this makes the process much easier and more believable. Humans, without realizing it, are often decent at detecting bullshit when they read a review or a comment: inconsistent slang or regional phrasing, or grammar that feels off without being obviously artificial (for a German speaker writing English as a second language, that might be "What are you called?" instead of "What is your name?", or more subtly "What do you call it?" instead of "What's it called?"). Those tells can be defeated with AI trained on tweets, blog posts, and instagram posts scraped from 18-23 year old middle-class women, or 30-60 year old white male gun owners, or 21-45 year old British working-class men.
The whole point of AI is to make some tasks easier by automating them. When you're dealing with AI that mimics images, video, and speech, you're simply making it far easier for people who already employ these tactics manually (or with 'dumb' software) to scale up and increase their efficacy.