"From scratch" is commonly used in the field.



Among arXiv publications, there are 217 results that contain "large language model" in the full text and "from scratch" in the title or abstract.

There are 2873 results that contain "large language model" in the full text and use "pretrained" in the title or abstract. A more than tenfold difference in publication count does make one term feel rather more common than the other.

I'd need to get into more involved queries to break down the semantic categories of those papers.
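For anyone who wants to sanity-check counts like these, here is a minimal sketch against the public arXiv API. The exact query syntax is my assumption about how such counts could be approximated, and the API only searches metadata fields (title, abstract, etc.), not full text, so it can't reproduce the "full text" condition exactly:

    import re
    import urllib.parse
    import urllib.request

    def arxiv_count(query: str) -> int:
        # Run an arXiv API search and read the total hit count from the
        # <opensearch:totalResults> element of the returned Atom feed.
        url = "http://export.arxiv.org/api/query?" + urllib.parse.urlencode(
            {"search_query": query, "max_results": 1}
        )
        with urllib.request.urlopen(url) as resp:
            feed = resp.read().decode("utf-8")
        m = re.search(r"<opensearch:totalResults[^>]*>(\d+)<", feed)
        return int(m.group(1)) if m else 0

    # Phrase searches over title/abstract; "all:" is the closest the API
    # offers to a full-text condition.
    print(arxiv_count('(ti:"from scratch" OR abs:"from scratch") AND all:"large language model"'))
    print(arxiv_count('(ti:pretrained OR abs:pretrained) AND all:"large language model"'))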


"From scratch" simply means that they didn't base it on some other LLM.

This is perfectly good language and exactly the correct thing for them to say.



