>>I tried getting Claude to help me model an event-based store with moderate type safety and it was a total disaster.

That's just not how you work with an LLM. You don't give LLMs problems to solve. You do the thinking and solve the problem yourself, then ask the LLM to implement small blocks of code, the smallest possible, and you incrementally go from there.

This really is somewhat like typing your whole question into the Google search bar, getting no results, and concluding that Google search doesn't work.

>>it’s typically filling in the simpler details of an implementation I had to design without its help.

Most complex solutions are just reusing and connecting simpler solutions.




This is how I was using it. I was trying to work through each part of the model, but I kept running into an issue where it would recommend implementing things in Python instead. It wasn't impossible, or even inadvisable, to do it in TypeScript, so I have no idea where the impetus to switch to Python came from.

The issue occurred specifically when I pointed out that type inference wouldn't work as expected for typing the store and its corresponding events. I'd feed it some types that were close to what we needed, and it would give up and show me how to do it in Python. I gave up after (I wish I were exaggerating) around two hours of trying to convince it that it was possible to build and type the store correctly.

The store is built and working properly at this point, so who knows, maybe the meta layer of typing things really throws LLMs off.
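
(For anyone curious, the general shape I was after is roughly the mapped-type pattern below. This is a stripped-down sketch with made-up event names, not the actual store.)

    // Stripped-down sketch (not the actual store): one event map drives
    // both the event union and what append() will accept.
    type EventMap = {
      itemAdded: { id: string; name: string };
      itemRemoved: { id: string };
    };

    // Union of { type, payload } pairs derived from the map.
    type StoreEvent<M> = {
      [K in keyof M]: { type: K; payload: M[K] };
    }[keyof M];

    function createStore<M>() {
      const events: StoreEvent<M>[] = [];
      return {
        append(event: StoreEvent<M>): void {
          events.push(event);
        },
        all(): readonly StoreEvent<M>[] {
          return events;
        },
      };
    }

    const store = createStore<EventMap>();
    store.append({ type: "itemAdded", payload: { id: "1", name: "milk" } });
    // store.append({ type: "itemRemoved", payload: { name: "oops" } }); // compile error

The point is that append() only accepts payloads matching the event name, and the compiler infers all of it from a single map.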


> ...and ask the LLM to implement small blocks of code, like the smallest possible, and you incrementally go from there.

So it's autocomplete++? Talk about an insane level of hype for an incremental improvement to what we already have...if that's actually what we can hope to expect from these LLMs.


At their heart, LLMs are basically a mechanism to guess (predict based on probability) the next word, given what they have already seen so far. How they go about guessing, and how they get the context, is based on attention and multi-head attention functions. You could say these functions decide which part (word) of a question/sentence should get a higher weight, and that provides the context from which the model predicts what comes next. The prediction itself is your plain old neural network. Just as in linear regression, where a model fitting a straight line tells you roughly where the next points are likely to appear, similar mechanisms are used here to guess what the next words are likely to be.
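
To make the "higher weight" part concrete, here's a toy sketch (made-up numbers, a single head, no learned projections) of how attention turns similarity scores into weights:

    // Toy illustration, not a real model: scaled dot-product attention
    // turns raw similarity scores between a query and each token into
    // weights that sum to 1, i.e. "how much each word matters" here.
    function softmax(scores: number[]): number[] {
      const max = Math.max(...scores);
      const exps = scores.map((s) => Math.exp(s - max));
      const sum = exps.reduce((a, b) => a + b, 0);
      return exps.map((e) => e / sum);
    }

    function attentionWeights(query: number[], keys: number[][]): number[] {
      const scale = Math.sqrt(query.length);
      const scores = keys.map(
        (key) => key.reduce((acc, k, i) => acc + k * query[i], 0) / scale
      );
      return softmax(scores);
    }

    // Hypothetical 2-d embeddings for the tokens of "the cat sat".
    const keys = [
      [0.1, 0.9], // "the"
      [0.8, 0.2], // "cat"
      [0.7, 0.3], // "sat"
    ];
    const query = [0.9, 0.1]; // what the model is "looking for" next
    console.log(attentionWeights(query, keys)); // puts more weight on "cat"/"sat"

In a real model the queries, keys, and values come from learned projections over embeddings and there are many heads, but the weighting idea is the same.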

In the context of code it is an extreme autocomplete feature, for sure. Note that LLMs are not sentient. That means they can't be held responsible for making decisions, even more so code decisions.

Now you can argue it's nothing special. But that's somewhat like arguing Eclipse/IntelliJ are nothing special compared to Vim/Emacs. That is just splitting hairs; IDEs definitely do a lot of productive work compared to plain text editors.

The initial demos of LLMs confuse new users a lot. The demos go along the lines of giving a sentence like 'Implement a todo list app' or something like that, and the LLM writes some code implementing it. That's the wrong way to work with LLMs. Don't outsource your thinking or hand over a whole blanket problem statement to solve. Think of LLMs as tools that do a lot of quick text writing for you, given the smallest, unambiguous, atomically implementable (and easily rolled back) statement possible.

It takes a while to get used to this, but once you are, you are more productive.



