
Exactly, since it's not doing whole-program synthesis, I'm thinking it could be done with fewer parameters. However, program synthesis is part of the loss function.



Program synthesis is part of the loss function, which is what makes it an auxiliary learning task.
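For anyone unfamiliar with the term: an auxiliary learning task just means the training loss is a weighted sum of the primary objective and a secondary one. A minimal sketch of that idea (the `lambda_aux` weight and function names here are hypothetical, not from the paper):

```python
def combined_loss(primary_loss: float, synthesis_loss: float,
                  lambda_aux: float = 0.5) -> float:
    """Total loss = primary task loss + weighted auxiliary (synthesis) loss.

    lambda_aux is an illustrative hyperparameter controlling how much the
    auxiliary program-synthesis term contributes to the gradient.
    """
    return primary_loss + lambda_aux * synthesis_loss

# e.g. primary loss 2.0, auxiliary loss 1.0, weight 0.5 -> total 2.5
print(combined_loss(2.0, 1.0))
```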

We haven’t experimented with model size yet; we just used the same configuration as the smallest Code Llama. We did play with dataset size and found that performance tracks the usual scaling laws. Details are in the paper.
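"The usual scaling laws" here refers to loss falling as a power law in dataset size, roughly loss(D) ≈ E + A / D^alpha. A toy sketch with illustrative constants (the actual fitted values would come from the paper, not from here):

```python
def scaling_law_loss(D: float, E: float = 1.7, A: float = 400.0,
                     alpha: float = 0.28) -> float:
    """Power-law fit of loss vs. dataset size D (tokens).

    E is the irreducible loss, A and alpha shape the data term.
    All constants are illustrative placeholders.
    """
    return E + A / D ** alpha

# Loss should decrease monotonically as the dataset grows.
for tokens in (1e8, 1e9, 1e10):
    print(f"{tokens:.0e} tokens -> loss {scaling_law_loss(tokens):.3f}")
```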


Thanks for the reply! This is really interesting research; I hope to see more from your team.





