LLMs are just way too dangerous for something like home automation, at least until you can far more reliably guarantee a given output for a given input.
A very dumb, innocuous example: you order a single pizza for the two of you, then tell the assistant “actually, we’ll treat ourselves, make that two”. The assistant corrects the order to two. Then the next time you order a pizza “because I had a bad day at work”, the assistant just assumes you ‘deserve’ two, even if your verbal command was to order one.
A much scarier example: you’ve asked the assistant to “preheat the oven when I move downstairs” a few times. Then one day you go on vacation and tell the assistant “I’m moving downstairs” to let it know it can turn everything off upstairs. You pick up your luggage in the hallway none the wiser, leave, and... yeah. Bye oven, or bye home.
Edit: enjoy your unlocked doors, burned-down homes, emptied Powerwalls, and rained-in rooms! :)
Preventing your house from burning down belongs on the output-handling side, not the instruction-processing side. If there is any possible LLM output at all that will burn your house down, you have already messed up.
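To make that concrete, here’s a minimal sketch of what output-side handling could look like, assuming the model emits structured tool calls rather than free text. The ToolCall shape, the policy table, and the device names are all made up for illustration; the point is that the hard limits live outside the model:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ToolCall:
    device: str               # e.g. "oven", "front_door" (illustrative names)
    action: str               # e.g. "on", "off", "set_temp"
    value: Optional[float] = None

# Hard limits enforced no matter what the model said.
POLICY = {
    "oven": {"max_temp_c": 250, "requires_presence": True},
    "front_door": {"requires_confirmation": True},
}

def dispatch(call: ToolCall) -> None:
    # Stub for the real device layer (MQTT, Zigbee, vendor API, ...).
    print(f"executing {call.action} on {call.device}")

def execute(call: ToolCall, someone_home: bool, confirmed: bool = False) -> bool:
    rules = POLICY.get(call.device, {})
    if rules.get("requires_presence") and not someone_home:
        return False  # never preheat an empty house
    if rules.get("requires_confirmation") and not confirmed:
        return False  # door stays locked until the user explicitly confirms
    max_temp = rules.get("max_temp_c")
    if max_temp is not None and call.value is not None and call.value > max_temp:
        return False  # refuse out-of-range setpoints
    dispatch(call)
    return True

# The vacation scenario from upthread: the model says "preheat", nobody is home.
print(execute(ToolCall("oven", "set_temp", 200.0), someone_home=False))  # False
```

However clever the model gets about interpreting “I’m moving downstairs”, the worst it can do is produce a call this layer refuses.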
I'd go so far as to say it should be handled at the "physics" level. Any electric appliance in your home should be able to be left on for weeks without causing fatal consequences.
I'm not taken in by the current AI hype, but using LLMs as an interface for voice commands is genuinely revolutionary and a good fit for this problem. It’s just an interface to your API, which provides the functions as you see fit. And you can program it in natural language.
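That’s the key design point: the model only ever selects from a fixed registry of functions you wrote, so it can’t invent new capabilities. A rough sketch of the idea, with made-up function names and a made-up JSON convention for the model’s output (not any vendor’s actual API):

```python
import json

REGISTRY = {}

def tool(fn):
    """Register a plain Python function as something the assistant may call."""
    REGISTRY[fn.__name__] = fn
    return fn

@tool
def set_thermostat(temp_c: float) -> str:
    return f"thermostat set to {temp_c}C"

@tool
def order_pizza(count: int) -> str:
    return f"ordered {count} pizza(s)"

def handle(llm_output: str) -> str:
    # Assume the model was prompted to answer only with JSON like
    # {"function": "order_pizza", "args": {"count": 1}}.
    call = json.loads(llm_output)
    fn = REGISTRY.get(call.get("function"))
    if fn is None:
        return "unknown function; refusing"
    return fn(**call.get("args", {}))

print(handle('{"function": "order_pizza", "args": {"count": 1}}'))
```

The natural-language “programming” happens in the prompt that describes these functions; the execution path stays deterministic.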
We might have different ovens, but I don't see why mine would burn down my house when left on during a vacation but not when baking things for several hours.
Once warm, it doesn't just keep getting hotter; it holds the temperature I asked for.
When you think about the damage that could be done with this kind of technology, it’s incredible.
Imagine asking your MixAIr to sort out some fresh dough in a bowl and then leaving your house for a while. It might begin to spin uncontrollably fast and create an awful lot of hyperbole-y activity.
All of those outcomes are already reachable by fat-fingering the existing UI. The oven won’t burn your house down; most modern ones turn off after a preset time, and otherwise you’re just going to overpay for electricity or need to replace the heating element. Unless you have a 5-ton industrial robot connected to your smart home, or an iron sitting on a pile of clothes plugged into a smart socket, you’re probably safe.
As if it weren't dangerous enough by default, he specifically instructs it to act as much as possible like a homicidal AI from fiction, and then hooks it up to control his house.
I think there's definitely room for this sort of thing to go badly wrong.