There’s actually a good answer to this, namely that narrowly targeting the needs of exactly one family allows you to develop software about 1000x faster. This is an argument in favor of personal software.
Not every one of those families would find the same set of features helpful, so you have to make calls about what's worth developing and what isn't. Making those calls is very difficult because it's tricky to gather data about what will be used and appreciated.
Doesn't look very expensive to me. An LLM capable of this level of summarization can run in ~12GB of GPU-connected RAM, and only needs that while it's running a prompt.
The cheapest small LLMs (GPT-4.1 Nano, Google Gemini 1.5 Flash 8B) cost less than 1/100th of a cent per prompt because they are cheap to run.
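The arithmetic checks out. A minimal sketch, assuming list prices in the ballpark of what budget models like Gemini 1.5 Flash 8B have advertised (the dollar figures here are placeholder assumptions, not quoted prices; check the provider's current price sheet):

```python
# Back-of-envelope cost of one prompt to a small hosted LLM.
# Prices are illustrative assumptions in USD per 1M tokens.
INPUT_PRICE_PER_M = 0.0375   # $ per 1M input tokens (assumed)
OUTPUT_PRICE_PER_M = 0.15    # $ per 1M output tokens (assumed)

def prompt_cost(input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of one prompt at the assumed per-token prices."""
    return (input_tokens * INPUT_PRICE_PER_M
            + output_tokens * OUTPUT_PRICE_PER_M) / 1_000_000

# A typical short summarization prompt: ~500 tokens in, ~100 tokens out.
cost = prompt_cost(500, 100)
print(f"${cost:.6f} per prompt")  # well under $0.0001, i.e. under 1/100 of a cent
```

Even padding the token counts generously, you'd need thousands of prompts a day before the bill reached pocket-change territory.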
Yes! And also, Apple loves selling expensive hardware and has zero shyness about asking people to pay a few thousand bucks to buy into part of their ecosystem.
They could easily offer an on-prem family 'AI' product: a box you plop in your house and plug into your router, which does all AI processing for the whole family and uses a secure VPN to reach any of your devices outside the LAN.
If such a product delivered JUST what this guy's cool hack provides, and made Siri not a stupid piece of sh*t for my family, I'd buy it for $1999 even if I knew it cost Apple $700 to make.
True, I always thought something like Hypercard was needed to bring personal programming to the masses, but it appears that it might require LLM coding instead. ("I wish an app that did simple task XYZ existed."; "Can you ask ChatGPT to make that for you?")