Hacker News

A little pointless, considering that Stable Diffusion is already at or near human levels. Worst case scenario, just train on pre-2022 images.




Stable Diffusion is capable of training on its own images


Is that advisable? It won’t learn anything new and might reinforce its errors.


No, it isn't advisable unless you curate meticulously. Something new might emerge by accident that's worth using as input for a more advanced model.

But as a rule, genuinely new concepts are very hard to produce, although thanks to its countless community models, Stable Diffusion is probably the most flexible approach by quite some margin.


It probably won't learn anything new, but it will learn not to generate bad images if you cherry-pick the best ones for training.


You would train it in an RL setting rather than actually use generated images in the training set.
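For anyone unfamiliar with what that looks like: a toy REINFORCE loop where a reward model scores the generator's samples and the generator's parameters are nudged toward high-reward outputs. Everything here is a hypothetical stand-in (a Gaussian "policy", a made-up reward), not the Stable Diffusion or diffusers API.

```python
import random

random.seed(0)

# The "generator" is a Gaussian policy with learnable mean theta;
# the "reward model" prefers samples near 5.0.

def generate(theta, sigma=1.0):
    return random.gauss(theta, sigma)

def reward(x, target=5.0):
    return -(x - target) ** 2  # stand-in for a human-preference / aesthetic score

def train(theta=0.0, lr=0.01, steps=2000, sigma=1.0, batch=8):
    for _ in range(steps):
        xs = [generate(theta, sigma) for _ in range(batch)]
        rs = [reward(x) for x in xs]
        baseline = sum(rs) / len(rs)  # subtract mean reward for variance reduction
        # REINFORCE: d/dtheta log N(x; theta, sigma^2) = (x - theta) / sigma^2
        grad = sum((r - baseline) * (x - theta) / sigma ** 2
                   for x, r in zip(xs, rs)) / len(xs)
        theta += lr * grad  # move the policy toward high-reward samples
    return theta

theta = train()  # ends up near the reward model's preferred region
```

The point is that the generated samples themselves never enter a training set; only the reward signal derived from them updates the model.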


Sure


Likely they have humans review the images (or even touch them up). Same thing with the dataset scraped from the internet.


Yes, that's the technique. Generate 10 images, then choose the ones that turned out well for the next round. That's the standard way to create a LoRA.
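That curate-and-retrain loop sketches out to something like this: generate a batch, rank by quality, keep only the top few as the next round's training set. `generate_image()` and `quality_score()` are hypothetical stand-ins for a diffusion pipeline and a human (or aesthetic-model) rating, not real API calls.

```python
import random

random.seed(0)

def generate_image(seed):
    # stand-in: an "image" is just a number here
    return random.random()

def quality_score(image):
    # stand-in for a human rating or an aesthetic scoring model
    return image

def curate_round(n_generate=10, keep=3):
    batch = [generate_image(i) for i in range(n_generate)]
    ranked = sorted(batch, key=quality_score, reverse=True)
    return ranked[:keep]  # cherry-picked set for the next round / LoRA training

picked = curate_round()
```

In practice the scoring step is the human in the loop, which is why the process doesn't collapse the way naive self-training would.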



