
Well, it was a long shot anyway, but it doesn't seem to work on mobile (tried in iOS Safari on an iPhone 11 Pro).

A 1B model should be able to fit within a phone's RAM constraints (at 4-bit quantization the weights are only around 0.5 GB). If mobile support lands soon, this would actually be wild: local LLMs in the palm of your hands.




I don't know about this model, but people have been running local models on Android phones for years now. You just need a large amount of RAM (8-12 GB), ggml and Termux. I tried it once with a tiny model and it worked really well.
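A minimal sketch of that setup in Python, assuming llama-cpp-python installs under Termux (pip install llama-cpp-python) and you've already downloaded a small quantized GGUF model; the filename and parameters below are just placeholders:

  from llama_cpp import Llama

  # Placeholder path: any small 4-bit quantized GGUF model you have
  # downloaded locally should fit comfortably in phone RAM.
  llm = Llama(
      model_path="./tinyllama-1.1b-q4_k_m.gguf",
      n_ctx=2048,    # context window
      n_threads=4,   # roughly match the phone's performance cores
  )

  out = llm("Q: What is the capital of France? A:", max_tokens=32)
  print(out["choices"][0]["text"])

Generation on a phone will obviously be slow compared to a desktop, but for a model this small it's usable.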


This is from Reddit; what were you expecting?


For me, in Chrome, this needed a 4 GB renderer process and about that much again in the GPU process.


  Local LLMs in the palm of your hands
https://apps.apple.com/us/app/mlc-chat/id6448482937



