>We want to crush any chance of CP. If folks use it for that entire generative AI space will go radioactive and yes there are some things that can be done to make it much much harder for folks to abuse and we are working with THORN and others right now to make it a reality.
I'm absolutely certain Linux has been used to kill children. So have detergent, pesticide, and even pillows!
Tools shouldn't be limited based on the worst way they can be used. Stable Diffusion is an absolute positive for society. Even when it's used to generate CP, no image that model creates involves a real kid.
The cat has been out of the bag since OpenAI announced DALL-E; Stable Diffusion only accelerated things a bit. Even if Stability or lawmakers manage to prevent or outlaw open models, criminals will continue to build their own and ignore the laws.
The only thing their reluctance does is harm Stability and give their competition a chance to catch up. Perhaps that's a good thing. Maybe it's time for another organization to take the lead.
> criminals will continue to build their own and ignore laws
More specifically, researchers and hobbyists would be made criminals in this hypothetical. Why would I stop playing with ML just because someone passed a useless and poorly thought out law?
You can make child pornography with a can of paint and your hand. What a laughable farce of an excuse. People are making (notice the tense) REAMS of pornography with AI, just as they have with literally every technology ever invented for producing creative works.
The only way to do that is to not release the model, not let people run the model locally, and basically turn it into DALL-E with a web UI and rented compute. Which defeats the entire point of Stable Diffusion.
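To make that concrete: once the weights are published, anyone can run them locally in a few lines. A minimal sketch with the diffusers library (assuming a CUDA GPU; the model ID is the checkpoint RunwayML pushed to Hugging Face):

```python
# Sketch: running the publicly released 1.5 weights locally with diffusers.
# Assumes: pip install diffusers transformers accelerate, and a CUDA GPU.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # the checkpoint RunwayML published
    torch_dtype=torch.float16,
).to("cuda")

image = pipe("an astronaut riding a horse on the moon").images[0]
image.save("out.png")
```

Once that works on anyone's machine, any server-side filtering is opt-in by definition.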
You can use AI in two steps: one to generate the general image, and a second, given only a crop of the head, to change the apparent age. There was a novel by Asimov in which someone committed a crime by composing the actions of two robots, each of which was doing something perfectly legal on its own.
I just had the (admittedly half-formed) thought that generated CP is better than sharing pictures of actual victims? Might be too short-sighted; I didn't think about it that hard. But this tech will not prevent CP, nor the sexual assault of minors.
I get the dilemma as a creator, though. I wouldn't want my products to be used that way either.
If the results don't have enough artifacts to identify them as AI-generated, that could give cover to collections of images where actual minors were involved. Not great.
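One partial mitigation that already ships: the reference Stable Diffusion scripts embed an invisible watermark in every output, which a checker can probe for. A minimal sketch using the invisible-watermark package (the 136-bit length matches the 17-byte "StableDiffusionV1" payload the stock scripts embed; `suspect.png` is a hypothetical input, and note the watermark is easily stripped, so absence proves nothing):

```python
# Sketch: probe an image for the invisible watermark embedded by the
# reference Stable Diffusion scripts.
# Assumes: pip install invisible-watermark opencv-python
import cv2
from imwatermark import WatermarkDecoder

img = cv2.imread("suspect.png")           # hypothetical input file
decoder = WatermarkDecoder('bytes', 136)  # 17 bytes * 8 bits
payload = decoder.decode(img, 'dwtDct')   # same method the encoder uses
try:
    # Prints "StableDiffusionV1" if the stock watermark survived.
    print(payload.decode('utf-8'))
except UnicodeDecodeError:
    print("no recognizable watermark")    # garbage bytes on clean images
```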
I'm very interested in this tech and have been working with it. I have heard tell of a CSAM model. I won't say any more, because I don't want to feed that hideous fire.
I would add, though: if Moore's law continues, this will be almost unstoppable in a decade or two.
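Back-of-the-envelope, assuming the classic ~2-year doubling period holds (a big assumption these days):

```python
# Sketch: compute growth under a Moore's-law-style doubling every ~2 years.
# The doubling period is an assumption, not a forecast.
def growth_factor(years: float, doubling_period_years: float = 2.0) -> float:
    return 2.0 ** (years / doubling_period_years)

print(growth_factor(10))  # ~32x in a decade
print(growth_factor(20))  # ~1024x in two decades
```

At roughly 32x the compute per dollar, what needs a rented cluster today starts to fit on consumer hardware.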
It is an interesting ethical question that needs more research. Though given how people's brains tend to shut down upon hearing the topic, I feel the general public would be vastly opposed to it even if it were proven to lead to better societal outcomes (decreased child abuse).
There is a history to that. In some US states it became illegal to possess CGI images of CP; Second Life had that problem, and they still ban it. I think the bans were struck down on free-speech grounds, but some kinds of restrictions remain.
Those restrictions made sense in a world without Stable Diffusion because CGI images were thought to stimulate interest in photorealistic CSAM and photorealistic CSAM couldn't be acquired without outright acquiring actual CSAM.
Now that we can readily generate photorealistic CSAM, there's little to no risk of inadvertently creating a customer base for actual CSAM.
People don't want to seriously grapple with these sorts of harm-reduction arguments. They see sick people getting off on horrific things and want it stopped, and the MSM will be more than eager to parade out a string of "why is [company X] allowing this to happen?" articles until the company becomes radioactive to investors.
It's a new form of deplatforming: just as people have made careers out of trying to get speech/expression they dislike removed from the internet, we're now going to see AI companies cripple their own models to ensure they can't be used to produce disfavored speech/expression, out of fear of the reputational consequences of not doing so.
I agree that if all CSAM were virtual and no IRL abuse occurred anymore, that would be a vast improvement, despite the continued existence of CSAM. But I suspect many abusers aren't just in it for the images. They want to abuse real people. And in that case, even if all images are virtual, they still feed into real abuse in real life.
This is advocating for increasing the number of victims of CSAM to include source material taken from every public photo of a child ever made. This does not reduce the number of victims, it amounts to deepfaking done to children on a global scale, in the desperate hope of justifying nuance and ambiguity in an area where none can exist. That's not harm reduction, it is explicitly harm normalization and legitimization. There is no such thing (and never will be such a thing) as victimless CSAM.
This is hoping for some technical means to erase the transgressive nature of the concept itself. It simply is not possible to reduce harm to children by legitimizing provocative imagery of children.
I'm impressed by how this panned out. A few months ago, before things had deteriorated this much, Emad (Stability's CEO) was talking on Discord about how happy he was that the inpainting model was being trained and would be released soon.
At that point, I assume, RunwayML was also talking about releasing it.
https://reddit.com/r/StableDiffusion/comments/y9ga5s/stabili...
His comments regarding RunwayML’s release of 1.5 were especially interesting:
> “No they did not. They supplied a single researcher, no data, not compute and none of the other reseachers. So it’s a nice thing to claim now but it’s basically BS. They also spoke to me on the phone, said they agreed about the bigger picture and then cut off communications and turned around and did the exact opposite which is negotiating in bad faith.”
> “I’m saying they are bad faith actors who agreed to one thing, didn’t get the consent of other researchers who worked hard on the project and then turned around and did something else.”