
We need something like this on Linux, maybe powered by Vicuna. I'm not sure if the current batch of LLaMA variants is coherent enough to work as a digital assistant, but my gut feeling is that a little fine-tuning on tool use might be all that's needed.


Linux is fundamentally not monolithic like Windows, but maybe some DEs could expose hooks for LLMs to use.

There is also the performance issue. Right now the energy and memory usage of LLaMA implementations is very high, and loading the weights into RAM and/or VRAM takes a while. It seems Microsoft is getting around this with cloud inference, and eats the hosting cost (for now).

> a little fine-tuning on tool use might be all that's needed.

Maybe I am interpreting this wrong, but LoRA fine-tuning is extremely resource-intensive right now. There are practical alternatives, though, like the embedding databases people are just now setting up.
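To make the embedding-database alternative concrete: instead of fine-tuning, you index documents as vectors and retrieve the nearest ones at query time. Here's a minimal in-memory sketch — note the `embed` function is a toy stand-in I made up for illustration (a real setup would use an actual embedding model, e.g. from sentence-transformers):

```python
import math

# Toy stand-in for a real embedding model: hash characters into a small
# fixed-size vector and normalize it. Purely illustrative.
def embed(text, dim=8):
    vec = [0.0] * dim
    for ch in text.lower():
        vec[ord(ch) % dim] += 1.0
    norm = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / norm for x in vec]

def cosine(a, b):
    # Vectors are unit-normalized, so the dot product is cosine similarity.
    return sum(x * y for x, y in zip(a, b))

class EmbeddingDB:
    """Minimal in-memory vector store: add documents, query by similarity."""

    def __init__(self):
        self.docs = []

    def add(self, text):
        self.docs.append((text, embed(text)))

    def query(self, text, k=1):
        q = embed(text)
        ranked = sorted(self.docs, key=lambda d: cosine(q, d[1]), reverse=True)
        return [t for t, _ in ranked[:k]]

db = EmbeddingDB()
db.add("open the file manager")
db.add("convert video with ffmpeg")
print(db.query("open the file manager")[0])
```

The point is that nothing here touches the model's weights — you can add or remove knowledge by editing the store, which is why it's so much cheaper than LoRA.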


Plugging an LLM into D-Bus may take you surprisingly far.
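One hedged sketch of how that could look: the LLM emits a JSON tool call, and a thin layer vets it against an allowlist before anything touches the bus. The service/interface names below are just illustrative well-known examples, and the actual dispatch step (via pydbus or `gdbus call`) is left as a comment:

```python
import json

# Hypothetical allowlist of D-Bus methods the assistant may invoke,
# keyed by (service, object path, interface, method).
ALLOWED = {
    ("org.freedesktop.Notifications", "/org/freedesktop/Notifications",
     "org.freedesktop.Notifications", "Notify"),
    ("org.mpris.MediaPlayer2.spotify", "/org/mpris/MediaPlayer2",
     "org.mpris.MediaPlayer2.Player", "PlayPause"),
}

def parse_tool_call(llm_output):
    """Turn the model's JSON tool call into a vetted D-Bus call spec.

    Returns (service, path, interface, method, args), or raises
    ValueError if the call is not on the allowlist.
    """
    call = json.loads(llm_output)
    key = (call["service"], call["path"], call["interface"], call["method"])
    if key not in ALLOWED:
        raise ValueError(f"D-Bus call not allowed: {key}")
    return key + (call.get("args", []),)

# Example model output asking to toggle media playback:
spec = parse_tool_call('{"service": "org.mpris.MediaPlayer2.spotify", '
                       '"path": "/org/mpris/MediaPlayer2", '
                       '"interface": "org.mpris.MediaPlayer2.Player", '
                       '"method": "PlayPause"}')
# A real dispatcher would now hand `spec` to pydbus or `gdbus call`.
```

Because so many desktop apps already expose MPRIS, notifications, etc. over the session bus, the LLM gets a lot of reach without any per-app integration work.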


Fine-tuning is only moderately expensive. Going from base LLaMA to Vicuna-13B costs about $300.


The compute requirements are reasonable, but the memory requirements are extremely high, even with upcoming breakthroughs like 4-bit bitsandbytes.
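The memory math is easy to sanity-check. A back-of-the-envelope sketch for a 13B-parameter model's weights alone — this deliberately ignores activations, the KV cache, and quantization overhead (scales/zero points), so real usage is somewhat higher:

```python
# Rough weight memory for a 13B-parameter model at two precisions.
params = 13e9

fp16_gib = params * 2 / 2**30    # 2 bytes per weight
int4_gib = params * 0.5 / 2**30  # 4 bits = 0.5 bytes per weight

print(f"fp16: {fp16_gib:.1f} GiB, 4-bit: {int4_gib:.1f} GiB")
```

So even at 4 bits, a 13B model wants on the order of 6 GiB for weights alone, which is already most of a typical consumer GPU's VRAM.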


How does accessibility work on Linux?


The hard part, I think, will be integrating the LLM closely enough with the other programs on your machine that it's actually useful in that context, rather than a glorified chat window or a text interface for things you can already do more easily with a keyboard and mouse.


In a perfect world you give the LLM a Python interpreter and it does the rest.

Realistically, with the LLMs we have today, the right approach is probably to curate a set of APIs it's allowed to interact with. Some basic file system access and FFmpeg support would be extremely useful on its own.
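A sketch of what that curated-API approach might look like: the model picks a tool by name with structured arguments, and the host code constructs the actual command rather than letting the model emit arbitrary shell. Tool names and signatures here are hypothetical, and the ffmpeg command is built but never executed:

```python
from pathlib import Path

# Curated "tool" set: the model may only request these actions.
def list_files(directory):
    """Basic, read-only filesystem access."""
    return sorted(p.name for p in Path(directory).iterdir())

def transcode_cmd(src, dst, crf=28):
    """Build (don't run) an ffmpeg invocation from structured arguments."""
    return ["ffmpeg", "-i", src, "-crf", str(crf), dst]

TOOLS = {"list_files": list_files, "transcode_cmd": transcode_cmd}

def dispatch(name, **kwargs):
    # Anything outside the curated set is rejected outright.
    if name not in TOOLS:
        raise ValueError(f"unknown tool: {name}")
    return TOOLS[name](**kwargs)

print(dispatch("transcode_cmd", src="in.mov", dst="out.mp4"))
```

Passing an argument list (rather than a shell string) to something like `subprocess.run` also sidesteps shell-injection issues if the model puts odd characters in a filename.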



