Hacker News

Hugging Face released a Colab Notebook for generation from SDXL Turbo using the diffusers library: https://colab.research.google.com/drive/1yRC3Z2bWQOeM4z0FeJ0...

Playing around with the generation params a bit, I found that Colab's T4 GPU can batch-generate up to 6 images at a time at roughly the same speed as generating one.
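For anyone who wants to try the batching without the notebook, here's a minimal sketch of what that looks like with diffusers. It assumes diffusers and a CUDA GPU (like Colab's T4) are available; the `num_images_per_prompt` parameter is what produces the batch in a single forward pass, and the 1-step / no-guidance settings follow SDXL Turbo's distilled sampling setup:

```python
# Batch generation with SDXL Turbo via diffusers (sketch; assumes
# diffusers is installed and a CUDA GPU is available).
BATCH_SIZE = 6  # up to 6 images fit in one batch on a Colab T4

def generate_batch(prompt: str, batch_size: int = BATCH_SIZE):
    """Generate a batch of images in a single forward pass."""
    import torch
    from diffusers import AutoPipelineForText2Image

    pipe = AutoPipelineForText2Image.from_pretrained(
        "stabilityai/sdxl-turbo", torch_dtype=torch.float16, variant="fp16"
    ).to("cuda")
    # SDXL Turbo is distilled for 1-step sampling with guidance disabled,
    # so guidance_scale=0.0 and a single inference step are the intended use.
    return pipe(
        prompt,
        num_inference_steps=1,
        guidance_scale=0.0,
        num_images_per_prompt=batch_size,
    ).images
```

Since the batch runs as one forward pass through the UNet, wall-clock time grows much more slowly than batch size until you hit the GPU's memory or compute limit, which is why 6 images land at roughly the speed of 1.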




Is Euler Ancestral recommended for this? I thought the Ancestral samplers added noise to every step, preventing convergence.


thanks for sharing

I got to experience the power of current models with just 5 lines of code

the pace of change is stressing me out :)


Does Turbo (the model, not this particular notebook) support negative prompts?


So the new SD model requires higher-end hardware compared to the rest?


No, it's a smaller model than standard SDXL, so it actually requires less hardware.





