
Do you perhaps have some resources on how you use AI assistants for coding (I'm assuming GitHub Copilot)? I've been trying it for the past few months and, frankly, it's barely helping me at all. 95% of the time the suggestions are just noise. Maybe as a fast typist it's less useful; I just wonder why my experience is so different from what others are saying. So maybe it's because I'm not using it right?



I think it's your mindset and how you approach it. E.g. some people are genuinely bad at googling their way to a solution, while others know exactly how to manipulate a Google search due to years of experience debugging problems. Some people will be really good at squeezing the right output out of ChatGPT/Copilot and utilize it to its maximum potential, while others simply won't make the connection.

Its output depends on your input.

E.g. say you have Swagger documentation for an API and you want to generate a TypeScript type definition from it: you just copy-paste the docs into a comment above the type, and Copilot auto-fills the TypeScript type definition, even adding ? for properties that are not required.
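
Roughly like this, with a made-up endpoint for illustration (the comment is the pasted docs; the type below is the kind of thing Copilot fills in):

    // Pasted from the Swagger docs for GET /users/{id}:
    //   id        integer  required
    //   email     string   required
    //   nickname  string   optional
    //   createdAt string   required (ISO 8601)
    type UserResponse = {
      id: number;
      email: string;
      nickname?: string; // optional in the docs, so Copilot adds the ?
      createdAt: string;
    };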

If you clearly define the goal of a function in a JSDoc comment, you can implement very complex functions. E.g. you define the goal in steps, and in the function body outline each step. This also helps your own thinking. With GPT-4o you can even draw diagrams in e.g. Excalidraw, or take screenshots of the issues in your UI, to complement a question about that code.
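
A hypothetical example of that shape (the function and its steps are made up, but this is the kind of comment Copilot completes well):

    /**
     * Parses a CSV export of orders ("date,amount" rows) and returns
     * revenue totals per month.
     * Steps:
     * 1. Split the input into lines and skip the header row.
     * 2. Parse each line into a date and an amount.
     * 3. Group amounts by their "YYYY-MM" month key.
     * 4. Sum each group and return a map of month -> total.
     */
    function monthlyRevenue(csv: string): Map<string, number> {
      const totals = new Map<string, number>();
      // 1. Split into lines, skip the header
      const lines = csv.trim().split("\n").slice(1);
      for (const line of lines) {
        // 2. Parse date and amount
        const [date, amount] = line.split(",");
        // 3. Group by "YYYY-MM"
        const month = date.slice(0, 7);
        // 4. Sum each group
        totals.set(month, (totals.get(month) ?? 0) + Number(amount));
      }
      return totals;
    }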


> some people know exactly how to manipulate a Google search due to years of experience debugging problems

This really rings true for me. Especially as a junior, I always thought one of my best skills was being good at Googling. I was able to come up with good queries and find some page that would help. Sometimes a search was simple enough that you could just grab a line of code right off the page, but most of the time (especially with StackOverflow) the best approach was to read through a few different sources and pick and choose what was useful to the situation, synthesizing a solution. Depending on how complicated the problem was, that process might happen in a single step or over multiple iterations.

So I've found LLMs to be a handy tool for making that process quicker. It's rare that the LLM will write the exact code I need - though of course some queries are simple enough to make that possible. But I can sort of prime the conversation in the right direction and get into a state where I can get useful answers to questions. I don't have any particular knowledge on AI that helps me do that, just a kind of general intuition for how to phrase questions and follow-ups to get output that's helpful.

I still have to be the filter - the LLM is happy to bullshit you - but that's not really a sea change from trying to Google around to figure out a problem. LLMs seem like an overall upgrade to that specific process of engineering to me, and that's a pretty useful tool!


Keep in mind that Google's results are also much worse than they used to be.

I'm using both Kagi and LLMs; depending on my need, I'll prefer one or the other.

Maybe I can get the same result with an LLM, but all the conversation/guidance required is more time-consuming than just refining a search query and browsing through the first three results.

After all, the answer is rarely available verbatim anywhere. Reading people's questions and replies provides clues that lead me to the actual answer I was looking for.

I have yet to achieve this result through an LLM.


> E.g. you define the goal in steps, and in the function body outline each step. This also helps your own thinking

Yeah, but there are other ways to think through problems, like asking other people what they think, which you can evaluate based on who they are and what they know. GPT is like getting advice from a cross-section of everyone in the world (and you don’t even know which one), which may be helpful depending on the question and the “people” answering it, but it may also be extraordinarily unhelpful, especially for very specialized tasks (and specialized tasks are where the profit is).

Like most people, I have very specific knowledge of a few things that fewer than 100 people in the world know better than me, while thousands or even millions more have some poorly conceived general idea about them.

If you asked GPT a question about one of those things, it would bias toward those millions, favoring the statistically greater quantitative answer over the qualitative one. But maybe GPT has only a few really good sources in its training data that it draws on for its response, and then it’s extremely helpful, because it’s like accidentally landing on a StackOverflow answer by some crazy genius who reads all day, lives out of a van in the woods, and uses public library computers to answer queries in his spare time. But that’s sheer luck, and no more likely than what a regular search will get you.


Take a look at aider-chat or Zed. Zed just released new AI features; there was a blog post about it yesterday, I think.

You can also look into Cursor.

There are actually quite a few tools.

I have my own agent framework in progress, which has many plugins with different commands: reading directories, listing a tree, reading and writing files, running commands, reading spreadsheets. So I can tell it to read all the Python in a module directory, run a test script, and compare the output to a spreadsheet tab. Then I ask it to come up with ideas for making the Python code match the spreadsheet better, and have it update the code and rerun the tests iteratively until it's satisfied.
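
A simplified sketch of the shape such a plugin/command might take, in TypeScript (illustrative only, not the actual framework):

    import { readFile } from "fs/promises";

    // Illustrative only: a guess at the general shape of an agent
    // plugin/command, not the commenter's actual framework.
    interface AgentCommand {
      name: string;        // e.g. "read_file", "run_command", "tree"
      description: string; // exposed to the model as a tool description
      run(args: Record<string, string>): Promise<string>;
    }

    const readFileCommand: AgentCommand = {
      name: "read_file",
      description: "Read a file and return its contents as text",
      run: ({ path }) => readFile(path, "utf8"),
    };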

If I am honest about that particular process last night: I am going to have to go over the spreadsheet somewhat manually today, because neither GPT-4o nor Claude 3.5 Sonnet was able to get the numbers to match exactly.

It's a somewhat complicated spreadsheet in a domain I know nothing about and am just grudgingly learning. I think the agent got me 95% of the way through the task.


I rely on LLMs extensively for my work, but only a part of that is with copilots.

I have Copilot suggestions bound to an easy hotkey to turn them on or off. If I’m writing code that’s entirely new to the code base, I toggle the suggestions off; they’ll be mostly useless. If I’m following a well-established pattern, even a complicated one, I turn them on; they’ll be mostly good. When writing tests in C#, I reflexively give the test a good name and write a tiny bit of the setup, then Copilot will usually be pretty good about the rest. I toggle it multiple times an hour; it’s about knowing when it’ll be good and when it won’t.
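
The test pattern looks roughly like this (sketched in TypeScript/Jest rather than C#, with a made-up function, but the idea is the same):

    import { test, expect } from "@jest/globals";

    // A made-up function under test, just for illustration.
    function clampPercent(n: number): number {
      return Math.min(100, Math.max(0, n));
    }

    // A descriptive name plus the first line of setup is usually enough
    // for Copilot to suggest the act and assert steps.
    test("clampPercent caps values above 100 at exactly 100", () => {
      const input = 250; // the tiny bit of setup you type yourself
      expect(clampPercent(input)).toBe(100); // Copilot typically completes this
    });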

Beyond that, I get more value from interacting with the LLM by chat. It’s important to have preconfigured personas, and it took me a good 500 words and some trial and error to set those up and get their interaction styles where I need them to be. There’s the “.NET runtime expert”, the “infrastructure and release mentor”, and so on. As soon as I feel the least bit stuck or unsure, I consult one of them, possibly in voice mode while going for a little walk. It’s like having the right colleague always available to talk something through, and I now rarely find myself spinning my wheels, bike-shedding, or what have you.
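
One way such personas can be wired up, as a minimal sketch assuming the OpenAI Node SDK (the persona text here is invented; the real prompts would carry the ~500 words of style-setting mentioned above):

    import OpenAI from "openai";

    // Invented persona prompts for illustration; in practice each would
    // be a few hundred words tuning tone, expertise, and interaction style.
    const personas = {
      dotnetRuntimeExpert:
        "You are a .NET runtime expert. Be terse, cite docs, and challenge my assumptions.",
      releaseMentor:
        "You are an infrastructure and release mentor. Focus on risk, rollout, and rollback.",
    };

    const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

    async function consult(persona: keyof typeof personas, question: string) {
      const res = await client.chat.completions.create({
        model: "gpt-4o",
        messages: [
          { role: "system", content: personas[persona] },
          { role: "user", content: question },
        ],
      });
      return res.choices[0].message.content;
    }

    // e.g.: await consult("releaseMentor", "Is this migration safe to ship on a Friday?")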


It is very helpful in providing highly specific "boilerplate" in languages/environments you are not very familiar with.

The text interface can also be useful for skipping across complex documentation and/or learning. Example: you can ask GPT-4 to "decode 0xdf 0xf8 0x44 0xd0 (thumb 2 assembly for arm cortex-m)" => this will tell you what instruction is encoded, what it does and even how to cajole your toolchain into providing that same information.

If you are already an experienced developer with a clear goal and understanding, LLMs tend to be less helpful in my experience (the same way a mentor you could ask random bullshit would be more useful to a junior than to a senior dev).


> this will tell you what instruction is encoded, what it does and even how to cajole your toolchain into providing that same information.

Or it will hallucinate something that's completely wrong, but you won't notice it.


Copilot is just an autocomplete tool; it doesn’t have much support for multi-turn prompting, so it’s best used when you know exactly what code you want and just want it done quickly: implementing a well-defined function to satisfy an interface, refactoring existing code to match an example you’ve already written out, or prefilling boilerplate in a new file. For more complex work you need a chat interface where you can actually discuss the proposed changes with the model, and edit or fork the conversation if necessary.


Don’t work in large, established code bases. Make Flappy Bird games in Python.


My experience is mostly with GPT-4. Treat it like a beginner programmer. Give it small, self-contained tasks; explain the possible problems, the limitations of the environment you are working with, and possible hurdles; and suggest API functions or language features to use (it really likes to forget there is a specific function that does half of what you need, and instead staples multiple ones together). Try it on different tasks and you will get a feel for what it excels at and what it won't be able to solve. If it doesn't give a good answer after 2 or 3 attempts, just write it yourself and move on; giving feedback barely works in my experience.


What language do you use?

If you can beat Copilot in a typing race, then you’re probably well within your comfort zone. It works best when you’re working on things you’re less confident about - typing speed doesn’t matter when you have to stop to think.


I do 120 wpm, but Copilot still outpaces me, and it's not just typing; it's the little things I don't have to think about. Of course I know how to do all of it, but it still takes some mental energy to come up with algorithms and code. It takes less energy to verify Copilot's output, at least for me.


I use C# for the most part, sometimes PowerShell. But I can certainly see how it's more useful when I don't know much of the API yet. Otherwise it would mean a lot of googling, which the AI assistant could save me.


My experience is similar. Most of the results are not really useful, so I have to put in work to fix them. But at that point I might as well take the small extra step of doing it completely myself.



