From the article: they didn’t release everything—although the model weights are open, the datasets and code used to train the model are not.
Is that true of Meta Llama as well — specifically, that the code used to train the model is not open? (I know no one releases datasets.) If so, the label "open source" is inappropriate; "open weights" would be more accurate.