
> > Building a fully local LLM voice assistant

> I did the same thing, but I went the easy way and used OpenAI's API.

This is a cool project, but it's not really the same thing. The #1 requirement that OP had was to not talk to any cloud services ("no exceptions"), and that's the primary reason why I clicked on this thread. I'd love to replace my Google Home, but not if OpenAI just gets to hoover up the data instead.




Sure, but the LLM is also the easy part. Mistral is plenty smart for this use case; all you need to do is use llama.cpp with a JSON grammar and instruct the model to return JSON.
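To make that concrete, here's a minimal sketch of the grammar-constrained approach, assuming llama.cpp's built-in server and its GBNF grammar format. The intent schema (`action`/`target` fields) and the prompt are illustrative choices for a voice-assistant use case, not anything from the original post:

```python
import json

# A minimal GBNF grammar (llama.cpp's grammar format) constraining output
# to a flat JSON object with "action" and "target" string fields.
# The field names are hypothetical, picked for a home-assistant intent.
INTENT_GRAMMAR = r'''
root   ::= "{" ws "\"action\"" ws ":" ws string "," ws "\"target\"" ws ":" ws string ws "}"
string ::= "\"" [a-zA-Z0-9 _-]* "\""
ws     ::= [ \t\n]*
'''

def build_request(prompt: str) -> dict:
    """Build a payload for llama.cpp server's /completion endpoint.

    The "grammar" field tells the sampler to only emit tokens that keep
    the output inside the grammar, so the reply is guaranteed parseable.
    """
    return {
        "prompt": prompt,
        "grammar": INTENT_GRAMMAR,
        "temperature": 0.2,
        "n_predict": 64,
    }

# You'd POST this to e.g. http://localhost:8080/completion and
# json.loads() the "content" field of the response.
payload = build_request(
    'User said: "turn off the kitchen lights". Respond with JSON.'
)
```

Since the grammar makes malformed output impossible at the sampling level, the parsing side stays trivial even with a small local model.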


I might get downvoted for this, but OpenAI's API terms pretty clearly say that the data isn't used for training.


I'd imagine their ToS (which they can update whenever they want) links to a privacy policy (which they can also update whenever they want), and that's where this restriction is actually codified. The ToS probably also has another part saying they'll use your data "for business reasons including [innocuous use-cases]", and yet another part elsewhere defining "business reasons" as "whatever we want, including selling it".



