Isn't it already relatively cheap to run? Training is costly, but there are examples of running LLaMA on your laptop. It doesn't seem like it will take decades to commoditize this stuff... it's just that the cutting edge might keep moving ahead.
Not relative to a permanent and much simpler solution that already exists in the form of the source code for the original radio project mentioned in the OP.
I'll give you an example: fabricating an ASIC is expensive because of the up-front (NRE) cost. Using FPGAs is cheaper if the expected sales volume is low, but they're less performant.
If a hypothetical AGI a decade from now can do the radio gimmick but incurs an ongoing cost, and the gimmick has wide appeal, it makes more sense to build a simple utility once.
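To make the cost argument concrete, here's a rough back-of-the-envelope sketch in Python. Every dollar figure is a made-up assumption for illustration, not a real quote; the point is just the shape of the math, fixed cost amortized over volume versus an ongoing per-use cost.

```python
# Back-of-the-envelope break-even sketch. All numbers below are
# hypothetical assumptions chosen purely for illustration.

def breakeven_units(asic_nre: float, asic_unit: float, fpga_unit: float) -> float:
    """Sales volume at which the ASIC's up-front (NRE) cost is amortized
    enough to beat per-unit FPGA cost: NRE + asic_unit*n = fpga_unit*n."""
    return asic_nre / (fpga_unit - asic_unit)

# Hypothetical figures: $2M NRE and $5/unit for the ASIC vs $50/unit FPGA.
print(f"ASIC wins above ~{breakeven_units(2_000_000, 5, 50):,.0f} units")

# Same shape of argument for the radio gimmick: a one-off utility
# (fixed dev cost, near-zero marginal cost) vs an AGI service with an
# ongoing per-user inference cost.
utility_dev_cost = 10_000      # hypothetical one-time cost to write the utility
agi_cost_per_user_year = 20    # hypothetical ongoing inference cost per user
users, years = 10_000, 10
print(f"Utility: ${utility_dev_cost:,} once; "
      f"AGI: ${agi_cost_per_user_year * users * years:,} over {years} years")
```

With wide appeal, the ongoing cost dominates by orders of magnitude, which is the whole point: the one-off utility wins.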
Better yet, the simple utility already exists and doesn't need a hypothetical "benevolent AGI". It doesn't even need an LLM. It's here today.
This entire sub-thread went off on a tangent, trying to shoehorn AI into somewhere it has no place being, just like the fetishizing of blockchain and the attempts to shoehorn it in everywhere a database would be cheaper, more flexible, and more performant.
A hypothetical "benevolent AGI" is going to be vastly larger in scale than an LLM, and thus much more expensive. You won't be running one on a laptop. We may not even have enough compute globally for a hypothetical "benevolent AGI".
But we aren't there yet, and grandma is going to be dead in a decade, and the source code for the radio playlist gimmick already exists, as does Spotify.
In a decade, it'll cost pennies a year.