At the end of the day, coding with AI is still coding.
LLMs need to get better at it - more precision, less hallucination, larger context windows, and stricter reasoning. We also need a clearer and more domain-specific vocabulary when working with them.
In short, LLMs will end up as one more programming language - but one that's easier for humans to understand, since they operate in the terminology of the problem domain.
But humans will still need to do the thinking: the what-ifs, the abstractions, the reasoning.
That’s where AI becomes unsettling - too many people have been trained not to think at all.