Perhaps the solution(s) needs to focus less on output quality and more on having a solid process for dealing with errors. Think undo, containers, git, CRDTs or whatever, rather than zero tolerance for errors. That probably also means some kind of review for the irreversible bits of any process, and perhaps even process changes, where possible, to make common processes more reversible (which sounds like an extreme challenge in some cases).
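
A loose sketch of what "more reversible" could look like for the code-editing case, checkpointing with git before each agent step. The function names here are placeholders rather than any particular tool's API; the only real interface used is the git CLI.

    import subprocess

    def checkpointed_step(run_agent_step, description):
        # Checkpoint everything first, so a bad edit is an undo rather than a loss.
        subprocess.run(["git", "add", "-A"], check=True)
        subprocess.run(
            ["git", "commit", "--allow-empty", "-m", f"checkpoint before: {description}"],
            check=True,
        )
        try:
            run_agent_step()
        except Exception:
            # Roll the working tree back to the checkpoint and surface it for review.
            subprocess.run(["git", "reset", "--hard", "HEAD"], check=True)
            raise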

I can't imagine we're anywhere even close to the kind of perfection required not to need something like this - if it's even possible. Humans use all kinds of review and audit processes precisely because perfection is rarely attainable, and that might be fundamental.




The biggest issue I’ve seen is “context window poisoning”, for lack of a better term. If it screws something up, it’s highly prone to repeating that mistake. It then makes a bad fix that propagates two more errors, then says, “Sure! Let me address that,” and repeats the cycle, poorly patching those rather than the underlying issue (say, restructuring the code to mitigate it).

It is almost impossible to produce a useful result, as far as I’ve seen, unless one eliminates that mistake from the context window.


I really really wish that LLMs had an "eject" function - as in, I could click on any message in a chat, and it would basically start a new clone chat with the current chat's thread history up to that message.

There are so many times where I get to a point where the conversation is finally flowing in the way that I want and I would love to "fork" into several directions from that one specific part of the conversation.

Instead I have to rely on a prompt that asks the LLM to compress the entire conversation into a non-prose format that attempts to be as semantically lossless as possible; this sadly never works as intended.
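
A minimal sketch of what that fork/"eject" operation amounts to, assuming an OpenAI-style list of role/content messages (the function and the sample data are made up for illustration):

    from copy import deepcopy

    def fork_conversation(messages, at_index):
        """Return a new conversation containing messages[0..at_index]."""
        return deepcopy(messages[: at_index + 1])

    history = [
        {"role": "user", "content": "Help me design a parser."},
        {"role": "assistant", "content": "Sure, here is a first sketch..."},
        {"role": "user", "content": "Now add error recovery."},
    ]

    # Branch from the assistant's first reply and explore a different direction.
    branch = fork_conversation(history, at_index=1)
    branch.append({"role": "user", "content": "Instead, make the parser streaming."})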


This is precisely what the poorly named Edit button does in Claude.


Google's UI supports branching and deletion; someone recently made a blog post about how great it is.


which Google UI?


AI Studio, at ai.dev - sorry.


LM Studio has a fork button on every chat part (sorry, can't think of a better word) - you can fork on any human or AI part. You can also edit, but editing isn't an in-place change: it essentially creates a copy of the context with the edit and sends the whole thing to the AI. This can overflow your context window, so it isn't recommended. Forking of course does the same thing, but it's obvious that it's doing so, whereas people are surprised to learn that editing sends everything.


You can use LibreChat which allows you to fork messages: https://www.librechat.ai/docs/features/fork


"If it screws something up it’s highly prone to repeating that mistake"

Certainly true, but coaching it past the mistake sometimes helps (not always):

- roll back to the point before the mistake.

- add instructions so as to avoid the same path: "Do not try X. We tried X and it does not work, as it leads to Y."

- add resources that could clear up a misunderstanding (API documentation, library code)

- rerun the request (improve/reword with observed details or insights)

I feel like some of the agentic frameworks already include some of these heuristics, but a helping hand can still work to your benefit (a rough sketch of the loop is below).
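
Roughly, in code - assuming an OpenAI-style message list and a generic complete() call, both of which are stand-ins rather than any particular framework's API:

    def retry_without_poisoned_context(messages, mistake_index, guidance, complete):
        # 1. Roll back: drop everything from the first bad reply onward.
        clean_history = messages[:mistake_index]

        # 2. Add instructions steering away from the failed path, plus any docs
        #    or library code that clear up the misunderstanding.
        clean_history.append({"role": "user", "content": guidance})

        # 3. Rerun the (possibly reworded) request against the clean context.
        return complete(clean_history)

    guidance = (
        "Do not try X; we already tried it and it leads to Y. "
        "Relevant API documentation follows: ..."
    )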


I think this is one of the core issues people have when trying to program with them. If you have a long conversation with a bunch of edits, it will start to get unreliable. I frequently start new chats to get around this and it seems to work well for me.


Yes, this definitely helps. It's just incredibly annoying because you have to dump context back into it, re-type stuff, consolidate stuff from the prior conversation, etc.


Have the AI maintain a document (a local file or in canvas) with project goals, structure, setup instructions, current state, change log, todos, caveats, etc. You might need to remind it to keep it up-to-date, but I find this approach quite useful.
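
For illustration, a rough sketch of the kind of reminder I mean - the file name and section list are made up, not any particular tool's feature:

    STATE_FILE = "PROJECT_STATE.md"

    REMINDER = (
        f"After every change, update {STATE_FILE}. Keep these sections current: "
        "Goals, Repo structure, Setup instructions, Current state, "
        "Change log, TODOs, Caveats."
    )

    def remind_to_update_state(messages):
        """Append the reminder so a long session doesn't drift away from the doc."""
        return messages + [{"role": "user", "content": REMINDER}]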


This is what I find. If it makes a mistake, trying to get it to fix the mistake is futile and you can't "teach" it to avoid that mistake in the future.


It depends; I ran into this a lot with GPT, but less so with Claude.

But then again, I know how it could avoid the mistake, so I point that out; from that point onwards it seems fine (in that chat).


> Perhaps the solution(s) needs to focus less on output quality and more on having a solid process for dealing with errors. Think undo, containers, git, CRDTs

LLMs are supposed to save us from the toils of software engineering, but it looks like we're going to reinvent software engineering to make AI useful.

Problem: Programming languages are too hard.

Solution: AI!

Problem: AI is not reliable, it's hard to specify problems precisely so that it understands what I mean unambiguously.

Solution: Programming languages!


With pretty much every new technology, society has bent towards the tech too.

When smartphones first popped up, browsing the web on them was a pain. Now pretty much the whole web has phone versions that make it easier*.

*I recognize the folly of stating this on HN.


No, it's still a pain.

There are apps that open links in their embedded browser, where ads aren't blocked, so I need to copy the link and open it in my real browser.


Or my other favorite trap: an embedded browser where I'm not authenticated. Great, now I have to roll the dice on pasting a password into your "trust me, bro"-looking login page, because I cannot see the URL and the autofill is all "nope".


> LLMs are supposed to save us from the toils of software engineering

Well, cryptocurrency was supposed to save us from the inefficiencies of the centralized banking system.

There's a lesson to be learned here, but alas, our society's collective context window is less than five years.


But, assuming this is a general thing and not just focused on, say, software development, can you make the tooling around creating this easier than defining the process itself? Loosely speaking, everyone sees the value in test-driven development, but I think that with complex processes, writing the test is often harder than writing the process.


I want to make a simple solution where data is parsed by a vision model and "engineer for the unhappy path" is my assumption from the get-go. Changing the prompt or swapping the model is cheap.
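
A rough sketch of what "engineer for the unhappy path" looks like in that setup, where parse_with_model() and is_valid() are stand-ins for the actual vision-model call and output checks, not any specific API:

    def parse_with_model(image, model):
        """Stand-in for the real vision-model call (OCR/VLM API of your choice)."""
        ...

    def is_valid(result):
        """Stand-in for schema/sanity checks on the parsed output."""
        return result is not None

    def parse_document(image, models=("model-a", "model-b"), max_attempts=2):
        for model in models:                # swapping the model is cheap
            for _ in range(max_attempts):   # and another attempt costs only a call
                result = parse_with_model(image, model=model)
                if is_valid(result):        # never trust the output blindly
                    return result
        return None  # unhappy path: queue for human review instead of guessing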


Vision models are also faulty, and sometimes all paths are unhappy paths, so there's really no viable solution. Most of the time, swapping the model completely randomizes the problem space (unless you measure every single corner case, it's impossible to tell if everything got better or if some things got worse).



