
What about like, if I said "switch off the lamp at 3:45"

How would you translate the JSON you'd get out of that to get the same output? The subject would be "lamp". Your app code would need to know that a lamp is also a light.




User: in the sentence "switch off the lamp at 3:45" output the subject, action, time, and location as json

Llama: { "subject": "lamp", "action": "switch off", "time": "3:45", "location": "" }

Where there is an empty parameter the code will try to look back to the last recent commands for context (e.g. I may have just said "turn on the living room light"). If there's an issue it just asks for the missing info.

Translating the parameters from the JSON is done with good old-fashioned brute force (i.e. mostly regex).

It's still not 100% perfect, but it's faster and more accurate than the cloud assistants, and it's private.
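
Roughly, the fallback looks something like this (a simplified sketch, not my exact code; the alias patterns and entity id are made up for illustration):

    import json
    import re
    from collections import deque

    RECENT = deque(maxlen=5)  # last few parsed commands, newest first

    def parse_command(llm_output):
        cmd = json.loads(llm_output)
        for field in ("subject", "action", "time", "location"):
            if not cmd.get(field):
                # Backfill a missing field from the most recent command that had it.
                filler = next((p[field] for p in RECENT if p.get(field)), None)
                if filler:
                    cmd[field] = filler
        RECENT.appendleft(cmd)
        return cmd

    # The "brute force" part: map loose subjects onto known entities with regex.
    ALIASES = {r"\b(lamp|light)\b": "light.living_room"}

    def resolve_subject(subject):
        for pattern, entity in ALIASES.items():
            if re.search(pattern, subject, re.IGNORECASE):
                return entity
        return None  # nothing matched: ask the user for the missing info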


So you'd need to somehow know that a lamp is also a light eh


With a proper grammar, you can require the "subject" field to be one of several valid entity names. In the prompt, you would tell the LLM what the valid entity names are, which room each entity is in, and a brief description of each entity. Then it would be able to infer which entity you meant if there is one that reasonably matches your request.

If you're speaking through the kitchen microphone (which should be provided as context in the LLM prompt as well) and there are no controllable lights in that room, you could leave room in the grammar for the LLM to respond with a clarifying question or an error, so it isn't forced to choose an entity at random.
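
For example (a sketch, assuming a local runtime that can constrain decoding to a JSON Schema or an equivalent grammar; the entity names and rooms are invented):

    # Valid entities, with room and description, to include in the prompt.
    ENTITIES = {
        "light.kitchen_overhead": "kitchen: ceiling light over the counter",
        "light.living_room_lamp": "living room: floor lamp next to the sofa",
    }

    # Schema for constrained decoding: "subject" must be a known entity,
    # or the escape hatch that lets the model ask a clarifying question.
    RESPONSE_SCHEMA = {
        "type": "object",
        "properties": {
            "subject": {"enum": list(ENTITIES) + ["__clarify__"]},
            "action": {"enum": ["turn_on", "turn_off"]},
            "time": {"type": "string"},
            "clarifying_question": {"type": "string"},
        },
        "required": ["subject", "action"],
    }

    def build_prompt(utterance, room):
        # Tell the model what exists, where it is, and which mic heard you.
        listing = "\n".join(f"- {eid}: {desc}" for eid, desc in ENTITIES.items())
        return (
            f"Heard from the {room} microphone: {utterance!r}\n"
            f"Known entities:\n{listing}\n"
            "If nothing reasonably matches, set subject to '__clarify__' "
            "and fill in clarifying_question."
        )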


In all seriousness, I have names for my lights for this very reason.


Same here (and for sub-areas). It does get stressful sometimes, though, when I say "OK Google, switch off ... errrr ... pffff ..." and Google responds with a "come on, make up your mind" (or similar :))


I do something similar but I just pre-define the names of lights I have in Home Assistant (e.g. "lights.living_room_lamp_small" and "lights.kitchen_overhead") and a smart enough LLM handles it.

If you just say "the lamp", it asks to clarify. Though I hope to tie that into something location-based so I can use the current room for context.
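
Dispatching the resolved command is then a single call to Home Assistant's REST API, roughly like this (the host, token, and entity id below are placeholders):

    import requests

    HA_URL = "http://homeassistant.local:8123"
    HA_TOKEN = "your-long-lived-access-token"

    def call_light_service(entity_id, turn_on):
        # POST /api/services/light/turn_on or /turn_off with the target entity.
        service = "turn_on" if turn_on else "turn_off"
        resp = requests.post(
            f"{HA_URL}/api/services/light/{service}",
            headers={"Authorization": f"Bearer {HA_TOKEN}"},
            json={"entity_id": entity_id},
            timeout=10,
        )
        resp.raise_for_status()

    # e.g. call_light_service("light.kitchen_overhead", turn_on=False)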


LLMs are just waayyy too dangerous for something like home automation, at least until you can be far more certain of getting a guaranteed output for a given input.

A very dumb innocuous example would be you ordering a single pizza for the two of you, then telling the assistant “actually we’ll treat ourselves, make that two”. Assistant corrects the order to two. Then the next time you order a pizza “because I had a bad day at work”, assistant just assumes you ‘deserve’ two even if your verbal command is to order one.

A much scarier example is asking the assistant to "preheat the oven when I move downstairs" a few times. Then finally one day you go on vacation and tell the assistant "I'm moving downstairs" to let it know it can turn everything off upstairs. You pick up your luggage in the hallway none the wiser, leave, and... yeah. Bye oven, or bye home.

Edit: enjoy your unlocked doors, burned down homes, emptied powerwalls, rained in rooms! :)


No. LLMs do not have memory like that (yet).

Your 'scary' examples are very hypothetical and would require intentional design to achieve today; they would not happen by accident.


I love how burning your house down is something that deserves air quotes according to you.

All I can tell you is this: LLMs frequently misinterpret, hallucinate, and "lie".

Good luck.


Preventing burning your house down belongs on the output handling side, not the instruction processing side. If there is any output from an LLM at all that will burn your house down, you already messed up.
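
Concretely, the guard can be as simple as an allowlist sitting between the LLM and the devices, so nothing outside it is ever executed (a sketch; the entities and the confirmation rule are illustrative):

    # Only these (entity, action) pairs can ever reach a device,
    # no matter what the LLM emits.
    ALLOWED_ACTIONS = {
        ("light.kitchen_overhead", "turn_on"),
        ("light.kitchen_overhead", "turn_off"),
        ("switch.oven", "turn_off"),  # "turn_on" for the oven is deliberately absent
        ("lock.front_door", "lock"),
        ("lock.front_door", "unlock"),
    }
    # Risky actions additionally need explicit human confirmation.
    REQUIRES_CONFIRMATION = {("lock.front_door", "unlock")}

    def execute(entity_id, action, confirmed=False):
        if (entity_id, action) not in ALLOWED_ACTIONS:
            return f"Refusing: {action} on {entity_id} is not allowlisted."
        if (entity_id, action) in REQUIRES_CONFIRMATION and not confirmed:
            return f"Please confirm: {action} on {entity_id}?"
        # ...dispatch to the actual device here...
        return f"OK: {action} on {entity_id}"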


I'd go as far as saying it should be handled on the "physics" level. Any electric apparatus in your home should be able to be left on for weeks without causing fatal consequences.


I'm not taken aback by the current AI hype, but having LLMs as an interface to voice commands is really revolutionary and a good fit for this problem. It's just an interface to your API that provides the function as you see fit. And you can program it in natural language.


Chapter 4: In Which Phileas Fogg Astounds Passepartout, His Servant

Just as the train was whirling through Sydenham, Passepartout suddenly uttered a cry of despair.

"What's the matter?" asked Mr. Fogg.

"Alas! In my hurry—I—I forgot—"

"What?"

"To turn off the gas in my room!"

"Very well, young man," returned Mr. Fogg, coolly; "it will burn—at your expense."

- Around The World in 80 Days by Jules Verne, who knew that leaving the heat on while you went on vacation wouldn't burn down your house, 1872.




We might have different ovens but I don't see why mine would burn down my house when left on during vacations, but not when baking things for several hours.

Once warm, it doesn't just get hotter and hotter, it keeps the temp I asked for.


???


When you think about the damage that could be done with this kind of technology it’s incredible.

Imagine asking your MixAIr to sort out some fresh dough in a bowl and then leaving your house for a while. It might begin to spin uncontrollably fast and create an awful lot of hyperbole-y activity.


I suggest looking up how electric motors work lest you continue looking stupid :)


I’ll just not worry myself over seemingly insane hypotheticals, lest I continue looking stupid, thank you.


I mean, there are multiple people all over the main post pointing out how LLMs aren't reliable, but you do you.


All of those outcomes are already accessible by fat-fingering the existing UI. An oven won't burn your house down; most modern ones will turn off after some preset time, but otherwise you're just going to overpay for electricity or need to replace the heating element. Unless you have a 5-ton industrial robot connected to your smart home, or have an iron sitting on a pile of clothes plugged into a smart socket, you're probably safe.


If it wasn't dangerous enough by default, he specifically instructs it to act as much like a homicidal AI from fiction as possible, and then hooks it up to control his house.

I think there's definitely room for this sort of thing to go badly wrong.



