
That's what I thought 2 weeks ago. I figured it'd be ~5 years before I could do anything at home. But people already have the leaked Facebook LLaMA weights running on CPU with under 32 GB of system RAM, doing a token a second or so.
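For a rough sense of why this fits on commodity hardware: a 4-bit-quantized 7B model is about 7B * 0.5 bytes, roughly 3.5 GB of weights. Here's a minimal sketch of CPU-only inference using the llama-cpp-python bindings; the model path, thread count, and prompt are illustrative assumptions, not details from the post above.

    # Minimal CPU inference sketch with llama-cpp-python.
    # Assumes a 4-bit-quantized 7B checkpoint (~4 GB on disk); the path
    # and parameters below are placeholders, not from the original post.
    from llama_cpp import Llama

    llm = Llama(model_path="./models/7B/ggml-model-q4_0.bin", n_threads=8)

    out = llm("Building a birdhouse step by step:", max_tokens=64)
    print(out["choices"][0]["text"])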



That LLaMA is likely much smaller than ChatGPT.


The whole point of the LLaMA paper is that large models are undertrained and oversized.


Not sure where you got that, but they also trained a 65B LLaMA, which outperformed the 7B LLaMA on their benchmarks.


Here is the paper and synopsis:

https://arxiv.org/abs/2302.13971

> We introduce LLaMA, a collection of foundation language models ranging from 7B to 65B parameters. We train our models on trillions of tokens, and show that it is possible to train state-of-the-art models using publicly available datasets exclusively, without resorting to proprietary and inaccessible datasets. In particular, LLaMA-13B outperforms GPT-3 (175B) on most benchmarks, and LLaMA-65B is competitive with the best models, Chinchilla-70B and PaLM-540B. We release all our models to the research community.


What's your point? If you read the actual paper, you'll see that the 65B model outperforms the 7B one discussed above, and that size matters.


I don’t think anyone said size doesn’t matter. The point is that by training for longer, models with fewer parameters will perform similarly to GPT-3 models with more parameters.
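As a rough illustration of that trade-off, using the standard C ~ 6 * N * D training-FLOPs approximation and the token counts reported in the GPT-3 and LLaMA papers (everything else here is back-of-the-envelope):

    # Approximate training compute via C ~ 6 * N * D,
    # where N = parameter count and D = training tokens.
    def train_flops(n_params, n_tokens):
        return 6 * n_params * n_tokens

    models = {
        "GPT-3 175B": (175e9, 300e9),   # ~300B training tokens (GPT-3 paper)
        "LLaMA-13B":  (13e9, 1.0e12),   # ~1T training tokens (LLaMA paper)
    }

    for name, (n, d) in models.items():
        print(f"{name}: ~{train_flops(n, d):.2e} training FLOPs")

    # LLaMA-13B uses roughly a quarter of GPT-3's training compute, yet the
    # paper reports it beating GPT-3 on most benchmarks: the smaller-model,
    # more-tokens trade-off being described above.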


Probably not when the gap is that large: 7B vs. 175B.

Also, all those benchmarks are trash because they can't track data leakage in the training data. For example, they trained LLaMA on GitHub, where the GSM8K eval data is hosted; of course the model will perform well on GSM8K if it has memorized the answers.


You don’t have to guess, the information is in the provided synopsis.

I’m sure there are issues similar to your description. Nevertheless, you seem to be a staunch defender of GPT-3, which to me indicates some kind of bias? Like, who cares if LLaMA is better - in fact, isn’t that a good indicator of progress?


> You don’t have to guess, the information is in the provided synopsis.

Yes, I checked the benchmarks in the paper, and there are many where GPT-3 beat the 7B LLaMA. Also, it's not a clean experiment, because the models were trained on different datasets.

> I’m sure there are issues similar to your description. Nevertheless, you seem to be a staunch defender of GPT-3, which to me indicates some kind of bias? Like, who cares if LLaMA is better - in fact, isn’t that a good indicator of progress?

Personal attacks have been ignored.


:shrugging:

Okay, then.


You can run llama-30b right now on high-end consumer hardware (RTX 3090 or better) using int4 quantization. With two GPUs, llama-65b is within reach. And even 30b is surprisingly good, although it's clearly not as well tuned for dialogue-style tasks as ChatGPT is.
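A back-of-the-envelope sketch of why int4 makes those sizes fit, using approximate parameter counts (pure arithmetic, no library assumptions):

    # Rough weight-memory estimates for int4-quantized LLaMA checkpoints.
    # Ignores activations and the KV cache, so real usage runs a bit higher.
    GIB = 1024 ** 3

    def int4_weight_gib(n_params):
        # 4 bits = half a byte per parameter
        return n_params * 0.5 / GIB

    for name, params in [("llama-7b", 7e9), ("llama-30b", 33e9), ("llama-65b", 65e9)]:
        print(f"{name}: ~{int4_weight_gib(params):.1f} GiB of weights")

    # llama-30b: ~15 GiB, which fits on a single 24 GB RTX 3090;
    # llama-65b: ~30 GiB, which needs two such cards.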



