Given current models can accomplish this task quite successfully and cheaply, I'd say that if/when that happens it would be a failure of the user (or the provider) for not routing the request to the smaller, cheaper model.
Similar to how it would be the failure of the user/provider if someone thought it was too expensive to order food in, but only because they were looking at the cost of chartering a helicopter from the restaurant to their house.
Realtime LLM generation costs roughly $15 per million "words". By comparison, a human writer at the beginning of a career typically earns around $50k per million words, up to around $1 million per million words for experienced writers. That's a gap of roughly 3 to 5 orders of magnitude.
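Quick back-of-envelope check of those figures (the dollar amounts are the rough estimates from above, not measured data):

```python
import math

LLM_COST = 15  # USD per million words, realtime LLM generation (rough estimate)
HUMAN_COSTS = {
    "early-career writer": 50_000,     # USD per million words (rough estimate)
    "experienced writer": 1_000_000,   # USD per million words (rough estimate)
}

for label, cost in HUMAN_COSTS.items():
    ratio = cost / LLM_COST
    print(f"{label}: {ratio:,.0f}x LLM cost "
          f"(~{math.log10(ratio):.1f} orders of magnitude)")
```

So the gap is roughly 3.5 orders of magnitude at the low end and nearly 5 at the high end.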
Inference costs generally have several orders of magnitude of headroom before they approach raw human costs, and there will always be innovation driving the cost of inference down further. That also ignores that humans aren't available 24/7, that their output quality varies with what's going on in their personal lives, that an LLM can respond faster than a human (reducing the time a task takes), and that human output may require more laborious editing than an LLM's. Basically, the hypothetical seems unlikely to ever come to pass unless you've got a supercomputer AI doing things no human possibly could because of the amount of data it operates on (at which point it might exceed human cost, but no competitive human would exist).
https://www.sandraandwoo.com/wp-content/uploads/2024/02/twit...
or it just telling you to google it