
Have you considered a limited LLM that could run locally?

> planning a more mellow retreat

The objective here is to force myself to go somewhere internet is impossible (no phone reception, and I don’t have Starlink), with the goal of focused, productive output with limited distractions.

The idea came to mind after reading about John Carmack doing this for a week, diving into AI using nothing but classic textbooks and papers as reference material to work off.

EDIT: here is the HN thread on Carmack’s week long retreat:

https://news.ycombinator.com/item?id=16518726




> Have you considered a limited LLM that could run locally?

I think there are two main issues here. LLMs are large (the name even hints at it ;) ), and the smaller ones (still multiple GB) are really, really bad.

Edit: and they use a ton of memory, either RAM if running on CPU or VRAM if on GPU.
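The memory point is easy to sanity-check with back-of-envelope arithmetic: the weight footprint is roughly parameter count times bytes per weight, which is why quantization matters so much for running models locally. A minimal sketch (the 30B parameter count is illustrative, not tied to any particular model, and this ignores activation and KV-cache overhead, which adds more on top):

```python
def weight_footprint_gb(params_billion: float, bits_per_weight: int) -> float:
    """Memory needed just to hold the weights, in decimal GB."""
    bytes_total = params_billion * 1e9 * bits_per_weight / 8
    return bytes_total / 1e9

# A hypothetical 30B-parameter model at common precisions:
for bits in (16, 8, 4):
    print(f"{bits:>2}-bit: {weight_footprint_gb(30, bits):.0f} GB")
# 16-bit: 60 GB, 8-bit: 30 GB, 4-bit: 15 GB
```

So even aggressively quantized, a 30B model barely fits in a 16GB GPU, which matches the >24GB VRAM figures people report for running the medium-sized models unquantized or at 8-bit.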


Are they all that bad? [1] I’d be OK using a few hundred GB on my laptop, given that storage is so cheap these days.

[1] https://github.com/nat/openplayground


Compared to GPT-4, most of them are not super great, yeah. I've tested most of the ones released over the last few weeks and nothing has matched that quality of results, even the medium-sized (30GB and up) models that require >24GB of VRAM to run on GPU. I have yet to acquire hardware to run the absolute biggest models, but I haven't seen any reports that they are much better for general workloads either.



