That's a great use-case! But what about when GPT hallucinates and gets it wrong?
What happens when you have sent a (potentially legally-binding) quote to your customers, and you have to back out of it?
Humans make mistakes - whether directly, or indirectly through the software that we craft. But when the mistake happens, we can reason about it, understand why it happened, and fix the problem.
Wouldn't it be embarrassing not to be able to reason about or explain to your customers a mistake caused by this "thing" we use, trained on all the random information available on the internet? Would that not reduce the value and standing of your business (which is ultimately about the people and expertise within it) in the eyes of customers?
As an engineer, technologist, and CTO, I absolutely love what I've been able to craft using generative AI under direct supervision.
But I cannot imagine a world where we trade our carefully thought-through, easily changeable algorithms and processes for a black box that cannot be reasoned about, no matter how good the output _sometimes_ is.
Our sales people do a quick eye test before sending it out.
Usually, the list of recommendations covers about eight services that we can replace. Our sales people know those services and prices by heart, so a quick glance is enough for them. Our quotes are not legally binding; the contract that we sign is.
We found that it was highly accurate. GPT-4 has been especially good at this kind of thing since OpenAI changed it to write Python code for these calculations.
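For anyone curious what that looks like: the calculation GPT-4 writes for a quote like this is just line-item arithmetic. Here's a minimal sketch of that shape, with made-up service names and prices (not our actual data or code):

```python
# Hypothetical example of the kind of Python GPT-4 emits for a quote:
# sum the customer's current spend, sum our replacement prices, compare.
replacements = [
    # (service being replaced, current monthly cost, our monthly price)
    ("Service A", 120.00, 90.00),
    ("Service B", 250.00, 180.00),
    ("Service C", 75.00, 60.00),
]

current_total = sum(current for _, current, _ in replacements)
quoted_total = sum(ours for _, _, ours in replacements)
monthly_savings = current_total - quoted_total

print(f"Current spend:   ${current_total:,.2f}/mo")
print(f"Quoted price:    ${quoted_total:,.2f}/mo")
print(f"Monthly savings: ${monthly_savings:,.2f}")
```

Because it's deterministic code doing the arithmetic rather than the model "guessing" numbers, it's also exactly the kind of output a sales person can sanity-check at a glance.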
This! GPT is not an "intelligence" by itself, but more an extension of the human brain, like an "exocortex" holding information that we cannot or don't want to keep in our neocortex. Google search was the first of these "exocortex" extensions to the brain to become popular among many people. The same seems to be happening now with LLMs. Maybe we will not end up with an AGI but rather with a human/machine hybrid that has "superhuman" capabilities.
I have to think many people feel this way.