Memory and finetuning. If it were easy to insert a framework or its documentation into GPT-4 (in my experience, the only model so far capable of complex software development), it would be easy to build big, complex software. The problem is that right now all of the memory/context management has to happen on the application side of the LLM interaction (RAG). If part of that context management could be offloaded on each interaction to a global state/memory, it would be far easier to create quality software spanning tens of thousands of LoC.
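A minimal sketch of the "global state/memory" idea, purely as an assumption of how it could look: tool-style calls write project facts into a shared persistent store, and later interactions rebuild the prompt from that store instead of re-running retrieval over everything. All names here (GlobalMemory, build_prompt, the memory file) are hypothetical, not any existing API.

  import json
  from pathlib import Path

  class GlobalMemory:
      """Persistent key/value memory shared across LLM interactions."""

      def __init__(self, path: str = "project_memory.json"):
          self.path = Path(path)
          self.state = json.loads(self.path.read_text()) if self.path.exists() else {}

      def remember(self, key: str, value: str) -> None:
          # e.g. key="module:auth", value="JWT-based, tokens expire after 15 min"
          self.state[key] = value
          self.path.write_text(json.dumps(self.state, indent=2))

      def recall(self, prefix: str = "") -> dict:
          # Return only the slice of global state relevant to the current task.
          return {k: v for k, v in self.state.items() if k.startswith(prefix)}

  def build_prompt(memory: GlobalMemory, task: str, scope: str = "") -> str:
      # Rebuild the model's working context from the shared store rather than
      # stuffing the whole codebase/documentation into every request.
      facts = "\n".join(f"- {k}: {v}" for k, v in memory.recall(scope).items())
      return f"Known project state:\n{facts}\n\nTask:\n{task}"

  # Usage: the model (or a wrapper around it) keeps updating the memory as it
  # works, so each new interaction starts from accumulated project knowledge.
  mem = GlobalMemory()
  mem.remember("module:auth", "JWT-based, tokens expire after 15 minutes")
  print(build_prompt(mem, "Add a refresh-token endpoint", scope="module:"))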


