Stackoverflow will have duplicates, approximate answers and whatnot, and sometimes that works. But at other times, you hunt for a half hour before you figure it out.
You can throw the problem at ChatGPT. It may go wrong, but you course correct it with simple instructions, and slowly but steadily you move towards your goal with minimal noise from irrelevant discussions.
What stands between you and the solution then is your ability to figure out when it is hallucinating and guide it in the right direction. But as a solution developer you should have that insight anyway.
I'm with you (I use Claude Sonnet, but same difference...).
I do wonder if we're the last generation that will be able to effectively do such "course correct" operations -- feels like a good chunk of the next generation of programmers will be bootstrapped using such LLMs, so their ability to "have that insight" will be lacking, or be very challenging to bootstrap. As analogy, do you find yourself having to "course correct" the compiler very often?
I asked it a simple non-programming question: my last paycheck was December 20, 2024, and I get paid biweekly; in which year will I get paid 27 times? It got it wrong ... very articulately.
You'll be more successful with this the more you know how LLMs work. They're not good at math because they predict text patterns based on training data rather than performing calculations based on logic and mathematical rules.
To do this reliably, prefix your request with an instruction to invoke a tool like OpenAI's Code Interpreter (e.g. "Code the answer to this: My last paycheck was December 20, 2024. I get paid biweekly. In which year will I get paid 27 times.") to get the correct response of 2027.
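For anyone curious, the 2027 answer is easy to verify yourself with a few lines of Python, roughly the kind of script a code-execution tool would generate for this prompt (the date range scanned here is an arbitrary choice):

```python
from datetime import date, timedelta
from collections import Counter

# Count biweekly paydays per calendar year, starting from the
# last known paycheck on December 20, 2024.
payday = date(2024, 12, 20)
paydays_per_year = Counter()
while payday.year <= 2030:              # scan a few years ahead
    paydays_per_year[payday.year] += 1
    payday += timedelta(days=14)

# The first year containing 27 paydays
year_with_27 = min(y for y, n in paydays_per_year.items() if n == 27)
print(year_with_27)  # 2027
```

The intuition: 26 biweekly paydays cover only 364 days, so a calendar year picks up a 27th payday only when the first payday of the year lands on January 1 or 2, which first happens here in 2027.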
Awesome! I'm sure the following is not an original thought, but to me it feels like the era of LLMs-as-product is mostly dead, and the era of LLMs-as-component (LLMs-as-UX?) is the natural evolution where all imminent gains will be realized, at least for chat-style use cases.
OpenAI's Code Interpreter was the first thing I saw which helped me understand that we really won't understand the impact of LLMs until they're released from their sandbox. This is why I find Apple's efforts to create standard interfaces to iOS/macOS apps and their data via App Intents so interesting. Even if Apple's on-device models can't beat competitors' cloud models, I think there's magic in that union of models and tools.
Hunting for half an hour gradually increases your understanding of the problem and may give you new ideas for solutions, you’re missing out. ChatGPT will make your brain soft, and eventually mush.
I hear you but we can look at it in many different ways. I still own the solution, I am still going to certify the output. But maybe it allows me to be more productive so I may go soft in some areas, but deliver more in other areas by knowing how best to use the variety of tools available to me.
And by no means am I giving up on stackoverflow; it is just another tool, but its primacy may be in doubt. Just as for the last couple of years I would search for some information by pointing google to reddit, I will now have a mental map of when to go to ChatGPT, when to go to SO, and when to go to reddit.