List of Stable Diffusion resources (rentry.co)
164 points by theemathas on Nov 1, 2022 | 23 comments



https://rentry.org/sdupdates2 seems to be the current link?


For prompt engineering, OpenArt published a Stable Diffusion Prompt Book[1] analogous to the one for DALL-E 2.

[1] https://openart.ai/promptbook


Anyone running SD on AWS's Sagemaker yet? They seem to have a very generous free tier that is way better than Google's Colab.
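
For reference, the generation code itself is the same anywhere you get a GPU, SageMaker included. A minimal sketch with Hugging Face diffusers (the model ID and fp16 settings are my assumptions, nothing SageMaker-specific):

    import torch
    from diffusers import StableDiffusionPipeline

    # Minimal sketch: load SD 1.5 in half precision on a CUDA GPU.
    # Assumes the model license has been accepted on the Hugging Face Hub.
    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")

    image = pipe("an astronaut riding a horse on mars").images[0]
    image.save("astronaut.png")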


Doesn’t AWS collect customer data to send unsolicited ads and sell it to governments?


Doesn't Google?


Please don't. They'll just castrate it as well.


Also, if you don't want to mess around with setting up Jupyter notebooks, I made a service for generating SD images:

https://www.phantasmagoria.me


See also Stable Horde (https://stablehorde.net/) - crowdsourced distributed cluster of Stable Diffusion workers
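
You don't have to run a worker to use it; clients submit jobs over a REST API. Roughly like this (the endpoint path and payload shape are my reading of the Horde docs; "0000000000" is the anonymous key):

    import requests

    # Sketch: submit an async generation job to the Stable Horde.
    resp = requests.post(
        "https://stablehorde.net/api/v2/generate/async",
        headers={"apikey": "0000000000"},  # anonymous access, lowest priority
        json={
            "prompt": "a lighthouse at dusk, oil painting",
            "params": {"width": 512, "height": 512, "steps": 30},
        },
    )
    job_id = resp.json()["id"]
    # Then poll /api/v2/generate/status/{job_id} until the image is ready.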


Automatic1111's fork is pretty good in the no-messing-around sense – on my Windows box (which has an old but decent gaming GPU), all it took was running `launch.bat`; it set up a venv, downloaded dependencies, wired everything up, and opened the Gradio web UI.


Yeah even messing with a venv was too much work for me.

So I used this instead; it's as simple as extracting the download and running the curl | bash script inside it.

Is it safe for enterprise use? Nay.

Is it more convenient than manually futzing with venvs? Yay.

https://github.com/cmdr2/stable-diffusion-ui


As my comment says, A1111's fork _sets up the venv for you_, no futzing.


Really cool!

Are you running the GPU instances and model yourself, or are you outsourcing image generation to an API?


Thanks! No, I use banana.dev because I currently have too little traffic to run the GPUs 24/7. That's why the initial generation is currently a little slow, but hopefully that will be improved with some new changes they're implementing soon.


This is why things like reddit / hn exist; a raw list of links gives you no idea what’s interesting and what’s just irrelevant.

I think the tldr is, in the last month:

- 1.5 model came out; it's OK. An incremental improvement, nothing really significant.

- new VAE came out; this tangibly improves fine details, like feet and hands (sketch of swapping it in at the end of this comment).

The rest is random crap around supporting tooling, or vague, hand-wavy research stuff.

Don’t get me wrong, lots of stuff is happening and that’s great, but most of it isn’t really worth paying attention to unless you’re specifically invested in the topic.
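
Re: the VAE point above, swapping it into a diffusers pipeline is a couple of lines. A sketch (the VAE repo name is Stability's fine-tuned release; fp16/CUDA settings are assumptions):

    import torch
    from diffusers import AutoencoderKL, StableDiffusionPipeline

    # Sketch: replace the pipeline's default VAE with the fine-tuned
    # decoder (better fine detail, e.g. faces and hands).
    vae = AutoencoderKL.from_pretrained(
        "stabilityai/sd-vae-ft-mse", torch_dtype=torch.float16
    )
    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", vae=vae, torch_dtype=torch.float16
    ).to("cuda")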


The 1.5 inpainting model is great! A significant improvement over how inpainting was done before. Highly recommended. Integrated it into my mobile app last night and enjoying it so far!
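
For anyone wiring it up themselves, a minimal diffusers sketch (the model repo is the public runwayml release; file paths and prompt are placeholders):

    import torch
    from PIL import Image
    from diffusers import StableDiffusionInpaintPipeline

    # Sketch: repaint the white region of the mask with new content.
    pipe = StableDiffusionInpaintPipeline.from_pretrained(
        "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
    ).to("cuda")

    init = Image.open("photo.png").convert("RGB").resize((512, 512))
    mask = Image.open("mask.png").convert("L").resize((512, 512))

    out = pipe(prompt="a red wool scarf", image=init, mask_image=mask).images[0]
    out.save("inpainted.png")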


Any idea if you can merge a Dreambooth checkpoint with the inpainting model to get the best of both? I mostly use inpainting to fix custom-trained faces.


Yeah, merging existing checkpoints would be tricky (sketch of why at the end of this comment). But you should be able to fine-tune the inpainting model the same way you fine-tune with Dreambooth?

For faces, I haven't looked deeply, but it seems CodeFormer's training cost is minimal, so you should be able to fine-tune that model instead, probably with better results?
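
On why the merge is tricky: the inpainting UNet's first conv takes 9 input channels (latent + mask + masked-image latent) versus 4 in a vanilla checkpoint, so a plain weighted sum can't touch that layer. A sketch of the naive merge, keeping the inpainting weights wherever shapes disagree (file names are placeholders; this is the same idea as A1111's checkpoint merger, not a tested recipe):

    import torch

    alpha = 0.5  # blend factor: 1.0 = all dreambooth, 0.0 = all inpainting
    a = torch.load("dreambooth.ckpt", map_location="cpu")["state_dict"]
    b = torch.load("sd-v1-5-inpainting.ckpt", map_location="cpu")["state_dict"]

    merged = {}
    for key, vb in b.items():
        va = a.get(key)
        if va is not None and va.shape == vb.shape:
            merged[key] = alpha * va + (1 - alpha) * vb
        else:
            # e.g. the 9-channel first conv: keep the inpainting weights
            merged[key] = vb

    torch.save({"state_dict": merged}, "merged-inpainting.ckpt")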


Also: https://rentry.org/sdmodels for an up-to-date list of Stable Diffusion models


This is pretty cool! Thank you for sharing.

I made a service [1] to use SD in Notion. Is it something you could share as well?

[1] https://slashdreamer.com/


See also the GitHub repository for tracking changes: https://github.com/questianon/sdupdates


Not yet available, but InventAI [https://inventai.xyz] will offer AI-generated content with some UI innovations.


Ty, have been looking for something like this


nice



