Hacker News

Isn't literally every imagegen AI that's not DALL-E or Midjourney based on Stable Diffusion?



There are exceptions, e.g. https://generated.photos/human-generator uses a GAN-based model.

Edit: Also, Adobe uses its own model for Photoshop integration (inpainting via cloud). That model seems to be the same as this one: https://www.adobe.com/sensei/generative-ai/firefly.html


Are we sure those aren't based on Stable Diffusion?

With no code it's a black box, and we'd get to tease the closed-source companies for wrapping FOSS stuff.

Midjourney is the one I'm most convinced is just SD with a fine-tuned model. That would explain why everything looks like Pixar and it can't follow the prompt.


Given that Midjourney predates Stable Diffusion, that seems unlikely, though it's possible they threw away the model they built themselves in order to use one that's freely available to everyone, and then charge money for it.



