Good article and it matches my own experience in the last year. I use it to my advantage both on hobby projects and professionally and it's a huge timesaver.
LLMs are far from flawless, of course, and I often get stuck with non-working code. Or it takes annoying shortcuts instead of giving a detailed answer, or it just wastes a lot of time repeating the same things over and over again. But that's often still useful. And you can sometimes trick them into doing better. Once it goes down the wrong track, it's usually best to just start a new conversation.
There are a few neat tricks that I've learned over the last year that others might like:
- you can ask ChatGPT to generate some files and make them available as a zip file. This is super useful. Don't wait for it to painfully slowly fill some text block with data or code. Just ask it for a file and wait for the link to become available. It doesn't always seem to work, but when it does it's nice. Great for starting new projects.
- ChatGPT has a huge context window, so you can copy-paste large source files into it. But why stop there? I wrote a little script (with a little help, of course) that dumps the source tree of a git repository into a single text file, which I can then copy into the context. Works great for small repositories. Then you can ask questions like "add a function to this class that does X", "write some unit tests for foo", "analyze the code and point out things I've overlooked", etc.
- LLMs are great for the boring stuff, like writing exhaustive unit tests that you can't be bothered with, or generating test data. And if you're doing test data, you might as well have some fun: ask it to inject some movie quotes, litter it with Hitchhiker's Guide to the Galaxy references, etc.
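For the repo-dump trick, something like the following works. This is just a minimal sketch of the kind of script described above, not the author's actual code; the file header format and the size cutoff are my own choices.

```python
#!/usr/bin/env python3
"""Dump all git-tracked files in a repo into one text file,
suitable for pasting into an LLM's context window."""
import subprocess
import sys
from pathlib import Path


def dump_repo(repo_dir: str, out_file: str, max_bytes: int = 100_000) -> None:
    # `git ls-files` lists only tracked files, so build artifacts,
    # .git internals, and other junk are skipped automatically.
    names = subprocess.run(
        ["git", "ls-files"], cwd=repo_dir,
        capture_output=True, text=True, check=True,
    ).stdout.splitlines()

    with open(out_file, "w", encoding="utf-8") as out:
        for name in names:
            path = Path(repo_dir) / name
            try:
                text = path.read_text(encoding="utf-8")
            except (UnicodeDecodeError, OSError):
                continue  # skip binary or unreadable files
            if len(text) > max_bytes:
                continue  # skip huge files to stay within the context window
            # A header line per file lets the model tell the files apart.
            out.write(f"\n===== {name} =====\n{text}")


if __name__ == "__main__":
    dump_repo(sys.argv[1] if len(sys.argv) > 1 else ".", "repo_dump.txt")
```

Then paste repo_dump.txt into the chat and ask your questions about the code.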
The recent context window increase to 128K tokens with GPT-4o and other models was a game changer, and I'm looking forward to it getting even larger. The first few publicly available LLMs had the memory of a goldfish. Not any more. That alone makes them much more useful: right now most small projects easily fit into the context.
Great comment. I've also found some shortcuts to out-shortcut GPT. Before it even thinks of substituting code blocks with "/* code here */" or whatever, I usually just tell it: "don't omit any code blocks or substitute any sections with fill-in comments. Preserve the full purpose of the prompt and make sure you retain full functionality in all code -- as if it's being copy-pasted into a valuable production environment".
It also helps to remind it that its role is a "senior developer" and that it should write code at that level. It will happily act like a junior dev if you don't tell it explicitly.
Also, always remember to say please, thank you, hello, and that you'll tip it money -- these HAVE made a difference in my tests over time.