
The author (Stability.AI’s CIO) did an impromptu AMA on Reddit:

https://reddit.com/r/StableDiffusion/comments/y9ga5s/stabili...

His comments regarding RunwayML’s release of 1.5 were especially interesting:

> “No they did not. They supplied a single researcher, no data, no compute and none of the other researchers. So it’s a nice thing to claim now but it’s basically BS. They also spoke to me on the phone, said they agreed about the bigger picture and then cut off communications and turned around and did the exact opposite, which is negotiating in bad faith.”

> “I’m saying they are bad faith actors who agreed to one thing, didn’t get the consent of other researchers who worked hard on the project and then turned around and did something else.”



Thank you, I think this is the root of everything: https://www.reddit.com/r/StableDiffusion/comments/y9ga5s/com...

>We want to crush any chance of CP. If folks use it for that, the entire generative AI space will go radioactive, and yes, there are some things that can be done to make it much, much harder for folks to abuse, and we are working with THORN and others right now to make it a reality.


>> We want to crush any chance of CP. If folks use it for that, the entire generative AI space will go radioactive, and yes, there are some things that can be done to make it much, much harder for folks to abuse, and we are working with THORN and others right now to make it a reality.

I'm absolutely certain Linux has been used to kill children. Detergent, pesticide, and even pillows too!

Tools shouldn't be limited based on the worst way they can be used. Stable Diffusion is an absolute positive for society. Even when it's used to generate CP, every image that model creates is one that doesn't involve a real kid.

The cat was out of the bag the moment OpenAI announced DALL-E; Stable Diffusion only accelerated things a bit. Even if Stability or lawmakers manage to prevent or outlaw open models, criminals will continue to build their own and ignore the laws.

The only thing their reluctance does is harm Stability and give their competition a chance to catch up. Perhaps that's a good thing. Maybe it's time for another organization to take the lead.


This is what happens when people/legislators choose to not see further than their own noses.

Many products benefit pedophiles in one way or another. Mobile phones, computers, video editing software, cars (vans?).

"But that's ridiculous, of course we can't prohibit cars just because they can be used by criminals."

Exactly.


> criminals will continue to build their own and ignore laws

More specifically, researchers and hobbyists would be made criminals in this hypothetical. Why would I stop playing with ML just because someone passed a useless and poorly thought out law?


You can make child pornography with a can of paint and your hand. What a totally laughable farce and excuse. People are making (notice the tense) REAMS of pornography with AI just as they have with literally every technology that has ever been invented to produce creative works.


The only way to do that is to not release the model, not let people run the model locally, and basically turn it into DALL-E with a web UI and rented compute. Which defeats the entire point of Stable Diffusion.


You can use AI in two steps: one for the general image, and a second to change the age, with access only to a crop of the head. There was an Asimov novel where someone committed a crime by composing the actions of two robots, each of which was doing something perfectly legal on its own.


Which centralizes control of new tech and prevents innovation imho. Would be a shitty development.


I just had the (not very well thought out) thought that generated CP is better than sharing pictures of actual victims? Might be too short-sighted; I didn't think about it that hard. But crippling this tech to prevent CP will not prevent CP, nor sexual assault of minors.

I get the dilemma as a creator though. I wouldn't want my products to be used that way either.


If the results don't have enough artifacts to identify them as AI-generated, that could give cover to collections of images where actual minors were involved. Not great.


That's a good point! To be fair, there's just no good solution here imho.


This is utterly ridiculous. Even if the generated content is repulsive, it’s still generated; no one is being harmed. Stop this babysitting nonsense.


It's not really black and white. Deepfakes can be traumatizing to the victims.


I'm very interested in this tech and have been working with it. I have heard tell of a CSAM model. I won't say any more because I don't want to feed that hideous fire.

I would add, though: if Moore's law continues, this will be almost unstoppable in a decade or two.


Isn’t the dream to use this stuff to generate virtual CSAM? Could you not kill the CSAM market overnight by flooding it with AI-generated material?


It is an interesting ethical question that needs more research done into it. Though given how people's brains tend to shut down upon hearing the topic, I feel like the general public would be vastly opposed to it even if it were proven to lead to better societal outcomes (decreased child abuse).


There is a history to that. In some US states it became illegal to have CGI images of CP; Second Life had that problem and they still ban it. I think it got struck down on free speech grounds, but there are still some kinds of restrictions.


Those restrictions made sense in a world without Stable Diffusion because CGI images were thought to stimulate interest in photorealistic CSAM and photorealistic CSAM couldn't be acquired without outright acquiring actual CSAM.

Now that we can readily generate photorealistic CSAM, there's little to no risk of inadvertently creating a customer base for actual CSAM.


They are never going to accept that argument.

I mean, some SD applications, like the interior-design ones, market this as a great tool for potential buyers to try out ideas before they buy.


[flagged]


I'm advocating for a system in which less CSAM is made IRL...I don't understand how that could possibly be controversial.


People don't want to seriously grapple with these sorts of harm reduction arguments. They see sick people getting off on horrific things and want that stopped and the MSM will be more than eager to parade out a string of "why is [company X] allowing this to happen?" articles until the company becomes radioactive to investors.

It's a new form of deplatforming - just as people have made careers out of trying to get speech/expression that they dislike removed from the internet, now we're going to see AI companies cripple their own models to ensure that they can't be used to produce speech/expression that are disfavored, out of fear of the reputational consequences of not doing so.


> People don't want to seriously grapple with these sorts of harm reduction arguments

Because there's no evidence it works and the idea makes no fucking sense. It approaches the problem in a way that all experts agree is wrong.


> Because there's no evidence it works and the idea makes no fucking sense. It approaches the problem in a way that all experts agree is wrong.

Experts in what exactly?

There are two ways to defend a law that penalizes virtual child pornography:

- On evidence that there is harm.

- On general moral terms, aka "we just don't like that this is happening".

Worth noting that a ban on generated CSAM images was struck down as unconstitutional in Ashcroft v. Free Speech Coalition.

https://en.wikipedia.org/wiki/Ashcroft_v._Free_Speech_Coalit...


To ban something you need evidence that it's causing some harm, not vice versa.


I agree that if all CSAM was virtual and no IRL abuse occurred anymore, that would be a vast improvement, despite the continued existence of CSAM. But I suspect many abusers aren't just in it for the images. They want to abuse real people. And in that case, even if all images are virtual, they still feed into real abuses in real life.


No, you're advocating for a system that generates sick content and hoping against all evidence that it somehow means there's less CSAM.


Can you please give this "all evidence"? Because your claim is rather extraordinary given what we've seen elsewhere, e.g. more porn correlating with less rape.


This is advocating for increasing the number of victims of CSAM to include source material taken from every public photo of a child ever made. This does not reduce the number of victims, it amounts to deepfaking done to children on a global scale, in the desperate hope of justifying nuance and ambiguity in an area where none can exist. That's not harm reduction, it is explicitly harm normalization and legitimization. There is no such thing (and never will be such a thing) as victimless CSAM.


What if there's a way to generate it without involving any real children pictures in the training set?


This is hoping for some technical means to erase the transgressive nature of the concept itself. It simply is not possible to reduce harm to children by legitimizing provocative imagery of children.


How so? No children involved - no harm done.


Runway is the original creator of the latent diffusion model. See this https://huggingface.co/runwayml/stable-diffusion-v1-5/discus...
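For reference, a minimal sketch of loading that checkpoint with Hugging Face's diffusers library (the prompt, precision, and device choices below are illustrative assumptions, not something from the linked discussion):

    # Minimal sketch: load the runwayml/stable-diffusion-v1-5 checkpoint with diffusers.
    # Assumes `pip install diffusers transformers torch` and a CUDA GPU.
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",
        torch_dtype=torch.float16,  # half precision to fit consumer GPUs
    )
    pipe = pipe.to("cuda")

    # Illustrative prompt; any text-to-image prompt works the same way.
    image = pipe("a watercolor painting of a lighthouse at dawn").images[0]
    image.save("lighthouse.png")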


I'm impressed by how this panned out. A few months ago, before things had worsened this much, Emad (Stability's CEO) was talking on Discord about how happy he was that the inpainting model was being trained and would be released soon.

At that point, I assume RunwayML too talked about releasing it.

Then, a few months later they released it.

And suddenly the response is "How dare they?"



