> given a sufficient body of work, you could churn out fiction in the style of famous deceased authors by having the machine do the bulk of the work then having a small team go in and edit the work to make it fully coherent and an enjoyable read.
This is interesting but very dubious in my opinion. The current state of the art tech seems to be good at low-level stuff, like stylistic mimicry and maintaining (relative) coherence at the sentence level (and sometimes the paragraph level). It seems weaker at higher-level coherence, and I've seen no evidence that it would be capable of creating a book-length, or even short-story-length, work with a plot that made any sense (let alone a compelling one) or characters that are plausible (let alone interesting). If it does fail at those things, what are you supposed to do with the okay-in-isolation fragments that it spits out? You'd be lucky if they could be stitched together into anything worthwhile, even with a lot of human effort.
> With someone like me, that uses their name as their username virtually everywhere, you could sufficiently train the machine on my reddit and blog alone to imitate me on social media platforms. It could learn my writing style, my habits of using 'heh' and 'haha' way too much on reddit/twitter/facebook and you suddenly create Bizarro Ryan that you can create new social media accounts for and start tossing in some hate speech in an anti-me campaign. While this wouldn't do much to me, to a celebrity/politician/expert in a field it could absolutely ruin their career, even if later proven to have been faked because popular opinion will still associate that person with that undesirable behavior.
If someone wanted to target an individual, or a small number of people, couldn't they already do this manually? And if they wanted to target a huge number of people, surely they would very quickly burn the credibility of the platforms they hijacked.
>If someone wanted to target an individual, or a small number of people, couldn't they already do this manually? And if they wanted to target a huge number of people, surely they would very quickly burn the credibility of the platforms they hijacked.
Yes, but you can add incredible amounts of credibility to claims if you've used AI to create a bunch of deep-faked images of completely artificial people, populated their social media profiles, had AI create photos of these individuals together in random settings, built a network of these accounts that follow and friend each other as well as real people, and organically fed the claims out.
This sort of AI use, for faking images/video/audio/text, makes all of this much easier to do believably, and much easier to scale, whether for personal use or for hire.
You can already go on various darknet markets and hire various harassment services.
You can already go on various websites and order blackhat SEO that uses very 'dumb' software to generate content to spam across obviously fake social media accounts, blog posts, etc. for SEO purposes. There are dozens and dozens of VPS services that rent you a VPS with gobs of commercial software pre-installed (with valid licenses) specifically intended for these uses, and if you'd rather just farm it out, there are hundreds of providers on these forums selling packages where you provide minimal information and in days or weeks they deliver a list of all the links to content they've created and posted.
With stuff like GPT-2 you suddenly get more coherent sentences, tweets, short blog posts, reviews, etc. that you've trained on a specific demographic in the native language, rather than content written by an English-as-a-third-language worker and then reworded by software to pass Copyscape protection. Pair it with deep-faked images/video that you then run through popular social media filters and you can suddenly create much more believable social media presences that don't scream 'BOT'. No more picture of an attractive woman in a bikini named Donald Smith who's only friends with women in bikinis named "Abdul Yussef", "Greg Brady", "Stephanie Greg", and "Tiffany London", the kind of account you constantly see sending friend requests on fb or following you on twitter/instagram because you used #love in a post.
Software applications like this make the process much easier, with a higher level of believability. Humans, without knowing it, are often decent at detecting bullshit when they read a review or a comment: inconsistent slang or regional phrasing, or grammar that feels wrong without being obviously artificial (English as a second language for a German speaker, for example, where it might be something like "What are you called?" instead of "What is your name?", or more subtle, like "What do you call it?" instead of "What's it called?"). All of that can be defeated with AI trained on tweets/blog posts/instagram posts scraped from 18-23 year old middle class women, or 30-60 year old white male gun owners, or 21-45 year old British working class males.
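To make the contrast concrete: the 'dumb' spinning software mentioned above is often little more than a word-level Markov chain over scraped text. Here's a minimal sketch (the corpus is made up for illustration, not anyone's actual tooling). Note how shallow it is: it only knows which word tends to follow the last couple of words, which is exactly the kind of locally-plausible, globally-incoherent output that GPT-2-class models improve on.

```python
import random
from collections import defaultdict

def build_chain(corpus, order=2):
    """Build a word-level Markov chain from scraped text samples."""
    chain = defaultdict(list)
    for text in corpus:
        words = text.split()
        for i in range(len(words) - order):
            key = tuple(words[i:i + order])
            chain[key].append(words[i + order])
    return chain

def generate(chain, length=20, seed=None):
    """Walk the chain from a random start; crude, but that's the point."""
    rng = random.Random(seed)
    key = rng.choice(list(chain.keys()))
    out = list(key)
    for _ in range(length):
        followers = chain.get(tuple(out[-len(key):]))
        if not followers:
            break
        out.append(rng.choice(followers))
    return " ".join(out)

# Hypothetical scraped posts standing in for demographic-specific data
corpus = [
    "heh that is so true love this",
    "haha love this so much heh",
    "that is so true haha love it",
]
print(generate(build_chain(corpus), seed=1))
```

A chain like this mimics surface tics ('heh', 'haha') but has no notion of topic or grammar beyond a two-word window, which is why its output trips the bullshit detectors described above and why trained language models are such a step change for this kind of abuse.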
The whole point of AI is to make some tasks easier by automating them; when you're dealing with AI that mimics images/video/speech, you're just making it far easier for individuals who already employ these tactics manually (or with 'dumb' software) to scale up and increase efficacy.