
Anecdotally, LLMs also get less intelligent when the context is filled up with a lot of irrelevant information.




This is well established at this point; it’s called “context rot”: https://research.trychroma.com/context-rot

Yeah, though this paper doesn't evaluate against any standard LLM benchmarks (GPQA Diamond, SimpleQA, AIME 2025, LiveCodeBench v5, etc.), so it remains hard to tell how much intelligence is actually lost when the context is filled with irrelevant information.
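One way to run that kind of test yourself would be to take each benchmark question, pad the context with irrelevant filler up to a target length, and plot accuracy against context size. A minimal sketch of the padding step (the function name, the whitespace-token heuristic, and the distractor source are all illustrative, not from the paper):

```python
import random


def pad_with_distractors(question: str, distractors: list[str],
                         target_tokens: int, seed: int = 0) -> str:
    """Build a prompt whose context is filled to roughly `target_tokens`
    whitespace-delimited tokens of irrelevant text, with the real
    benchmark question kept at the end."""
    rng = random.Random(seed)  # fixed seed so runs are reproducible
    filler: list[str] = []
    count = len(question.split())
    # Keep appending random irrelevant passages until we hit the target size.
    while count < target_tokens:
        passage = rng.choice(distractors)
        filler.append(passage)
        count += len(passage.split())
    return "\n\n".join(filler + [question])
```

Sweeping `target_tokens` (say 1k, 10k, 100k) over the same question set and scoring each run would give the intelligence-vs-context-length curve the paper doesn't report. A real experiment would count tokens with the model's own tokenizer rather than splitting on whitespace.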


