No, but AI requires zero learning curve and can be automated. I can't spit out 10 images of Tay per second in Photoshop. If I want, and the API delivers, I can easily do that with AI. (Granted, coding this yourself would involve a learning curve, but in principle, with the right interface, and those exist, I can churn out hundreds of images without actively putting in work.)
I've never understood the argument about image generators being (relatively) fast. Does that mean that if you could Photoshop 10 images per second, we should've started clamping down on Photoshop? What exact speed is the cutoff mark here? Given that Photoshop is updated every year and includes more and more tools that can accelerate your workflow (incl. AI-assisted ones), is there going to be a point when it gets too fast?
I don't know much about the initial scandal, but I was under the impression that there was only a small number of those images, yet that didn't change the situation. I just fail to see how quantity factors into anything here.
Yes, if you could Photoshop 10 images per second, it would be a problem.
Think of it this way: if one out of every ten phone calls you get is spam, you still have a pretty usable phone. Make spam three orders of magnitude more common, so that only about 1 out of every 100 calls is real, and the system totally breaks down.
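A quick back-of-the-envelope check of that ratio (the starting 1-in-10 spam rate and the 1000x multiplier are the numbers from the analogy above, not measured data):

```python
# Start with 1 spam call for every 9 real ones (1 in 10 is spam),
# then make spam 1000x more common and see what fraction stays real.
real, spam = 9, 1

base_real_fraction = real / (real + spam)
scaled_real_fraction = real / (real + spam * 1000)

print(base_real_fraction)    # 0.9 of calls are real
print(scaled_real_fraction)  # ~0.009, i.e. roughly 1 real call in 100
```

So a 1000x increase in fakes doesn't just shift the balance; it flips the medium from mostly-real to almost-entirely-fake.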
Generative AI makes generating realistic-looking fakes ~1000x easier; it's the one thing it's best at.
>I just fail to see how quantity factors into anything here.
Because you can overload any online discussion / sphere with that. There were so many that X effectively banned searching for her at all, because if you did, you were overwhelmed by very extreme fake porn. Everybody can do it with a very low entry barrier, it looks very believable, and it can be generated in high quantities.
We shouldn't have clamped down on Photoshop, but realistically two things would be nice in your theoretical case: usage restrictions and public information building. There was no clear-cut point where Photoshop was so mighty that you couldn't trust any picture online. There were skills to be learned, people could identify the trickery, and it happened on a very small scale and gradually. And photo trickery has been around for ages; even Stalin did it.
But creating photorealistic fakes in an automated fashion is completely new.
But when we talk about specifically harming one person, does it really matter whether it's a thousand different generations of the same thing or 10 generations that were copied thousands of times? It is a technology that lowers the bar for generating believable-looking things, but I don't know if speed is the main culprit here.
And in fairness to generative AI, even nowadays it feels like getting to a point of true photorealism takes some effort, especially if the goal is letting it just run nonstop with no further curation. And getting a local image generator to run at all on your computer (and having the hardware for it) is also a bar that plenty of people can't clear yet. Photoshop is kind of different in that making more believable things requires a lot more time, effort and knowledge - but the idea that any image online can be faked has already been ingrained in the public consciousness for a very long time.