> When you train LLMs on the output of LLMs, it gets significantly worse.

That is also quite an assumption: it could be that training on the output of better LLMs reduces this worsening. There might even be a tipping point where LLMs get good enough that training on their output works better than training on the output of humans.

>That is also quite an assumption

And, as I understand it, one that is already demonstrably false: https://arxiv.org/abs/2306.11644 ("Textbooks Are All You Need", where phi-1, a 1.3B model trained largely on GPT-3.5-generated synthetic "textbook" data, outperforms much larger models on coding benchmarks).
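For context, the recipe in that paper is essentially synthetic-data distillation: a stronger teacher model generates "textbook-quality" text, and a smaller student model is trained on it. A minimal sketch of the generation half (the topic list, prompt, and output path here are hypothetical; assumes the official openai Python client, v1+):

    # Sketch of the data-generation half of synthetic-data distillation,
    # roughly the phi-1 recipe. TOPICS and the output path are hypothetical.
    import json
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    TOPICS = ["binary search", "list comprehensions", "recursion"]

    with open("synthetic_textbook.jsonl", "w") as f:
        for topic in TOPICS:
            resp = client.chat.completions.create(
                model="gpt-4o-mini",  # stand-in for the teacher model
                messages=[{
                    "role": "user",
                    "content": (
                        "Write a short, self-contained textbook section "
                        f"with worked code examples about {topic}."
                    ),
                }],
            )
            # One JSON line per generated document; the student model is
            # then fine-tuned on this file instead of (or alongside)
            # human-written text.
            f.write(json.dumps({"text": resp.choices[0].message.content}) + "\n")

The empirical question upthread is whether the student degrades (model collapse) or improves on such data; phi-1's result suggests that curated teacher output can beat raw human text, at least for code.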


Perpetual learning machines
