Some jobs may well disappear, but rather than AI replacing all these jobs, I think it will just shift what they entail.
Take software engineering. Rather than doing the dirty work ourselves, we'll become AI supervisors and verifiers. Instead of implementation, the hard parts will become:
- Telling the AI what you want precisely enough that you get the right output.
- Verifying that the resulting code really works.
- Figuring out what went wrong when it doesn't work (which will still require a deep understanding of the code, even if the engineer doesn't write it).
Come to think of it, is that really all that different than what we do today? It sounds like the same job, just higher up on the ladder of abstraction.
I have a name for this: AI whisperers. To be efficient, there will be special dialects for talking to different implementations of AIs, and people will specialize in different dialects.
This is just what was called “speaking tech” back in the early 2000s. I see a lot of discussion online about “prompt engineers” or “AI whisperers”, as you put it, but I think the reality is that these tools are just another UI paradigm that our current generation will (on the whole) struggle with and the next generation will find intuitive.
Well, to be fair, there is a quantum leap here: the ability to materialize abstract thought presented as text. This will end a lot of jobs where anyone can judge the result or a certain error rate is tolerable. But while we have a lot of jobs like that, far too many are not like that, and those will not be replaced. Still, I like the idea of a new type of UI (UX?) being born in front of us.
It is possible to run the output automatically and feed the errors back into it. I am doing this. With the right prompting and feedback, it is starting to work for short, straightforward functions some of the time. It helps a lot to give it the test cases beforehand. I am hoping I can launch it as a service once they have a ChatGPT model in the API; for now I am using the 'unofficial' API.
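For the curious, a minimal sketch of that loop might look something like this. The `ask_model` call is a hypothetical placeholder for whichever chat API you happen to use (official or otherwise); the rest is plain Python that runs the candidate code against the supplied tests and feeds any error output back into the next prompt:

```python
import subprocess
import tempfile

def ask_model(prompt: str) -> str:
    """Hypothetical placeholder: send the prompt to whatever chat API is in use."""
    raise NotImplementedError("wire this up to your model API of choice")

def attempt(spec: str, tests: str, max_rounds: int = 3):
    """Generate code for `spec`, run it against `tests`, and feed errors back."""
    prompt = (
        f"Write a Python function satisfying this spec:\n{spec}\n\n"
        f"It must pass these tests:\n{tests}"
    )
    for _ in range(max_rounds):
        code = ask_model(prompt)
        # Write the candidate plus the tests to a temp file and run it.
        with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
            f.write(code + "\n\n" + tests)
            path = f.name
        result = subprocess.run(
            ["python", path], capture_output=True, text=True, timeout=30
        )
        if result.returncode == 0:
            return code  # tests passed
        # Otherwise, feed the error output back into the next prompt.
        prompt = (
            f"The previous attempt failed with:\n{result.stderr[-2000:]}\n\n"
            f"Here is the code:\n{code}\n\nPlease fix it so the tests pass."
        )
    return None  # gave up after max_rounds
```

Giving the tests up front, as mentioned, matters a lot: the model then has a concrete target, and the loop has an unambiguous pass/fail signal to decide when to stop.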
I foresee a sort of hyper-CAPTCHA for AI tuning. Perhaps this will be more valuable than the underlying model itself.
The process could be: a flawed AI answer is reported, someone writes a distilled generalization of the misunderstanding, this is made into a hyper-CAPTCHA, and a bunch of solvers (people) compete with another AI to iron out the flaw in the model.
These people would probably have AI assistants of their own and would have to understand the internals, so they would have to be highly educated in maths, logic, stats, and probability. Heck, it could be like a spectator sport.