Somewhat true.

Ablation in post-training isn't good enough - in practice it's something like 10% "expunge the data you want expunged", 70% "make that data less accessible", and 20% "collateral damage". Training for refusals doesn't damage the capabilities much either - it just makes them harder to access. If someone has access to the model weights, neither holds. GPT-OSS was SOTA at removing unwanted capabilities, and even that didn't hold for long.
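
To make the "neither holds with weight access" point concrete: there's published work (Arditi et al., "Refusal in Language Models Is Mediated by a Single Direction") showing refusals largely ride on one residual-stream direction, which anyone with the weights can project out. A minimal sketch of that idea below - tensor names and shapes are illustrative, not any particular model's API:

    # Difference-in-means over residual-stream activations collected on
    # harmful vs. harmless prompts approximates the "refusal direction".
    import torch

    def refusal_direction(harmful_acts: torch.Tensor,
                          harmless_acts: torch.Tensor) -> torch.Tensor:
        # Both inputs: [n_prompts, d_model] activations (illustrative shapes).
        d = harmful_acts.mean(dim=0) - harmless_acts.mean(dim=0)
        return d / d.norm()

    def ablate_refusal(weight: torch.Tensor, d: torch.Tensor) -> torch.Tensor:
        # Rank-1 edit on any matrix that writes into the residual stream
        # ([d_model, k]): subtract each column's component along d.
        return weight - torch.outer(d, d) @ weight

Applied across the matrices that write into the residual stream, this is the "abliteration" trick: it strips refusal behavior while leaving most capabilities intact, which is exactly why refusal training is an access control, not a capability removal.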

Now, dataset curation/filtration does help against select capabilities. But a lot of capabilities are double-edged and can't be deleted without hurting performance on the tasks you do want.
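
A toy illustration of why filtration collides with dual-use content - the blocklist and documents are made up, and real pipelines use trained classifiers rather than substring matching, but the failure mode is the same:

    # Filtering a pretraining corpus with a (made-up) blocklist. The
    # dual-use problem is immediate: the filter that drops harmful
    # synthesis text also drops the legitimate chemistry you wanted.
    BLOCKLIST = {"precursor synthesis", "nerve agent"}  # illustrative only

    def keep(doc: str) -> bool:
        text = doc.lower()
        return not any(term in text for term in BLOCKLIST)

    corpus = [
        "A survey of catalytic routes in pharmaceutical precursor synthesis.",
        "How to bake sourdough bread at home.",
    ]
    print([doc for doc in corpus if keep(doc)])
    # Only the bread doc survives: the capability and its misuse share
    # the same training signal, so you can't filter one without the other.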

If an AI is good at coming up with novel ways to perform chemical synthesis, it can be reused to come up with pathways for synthesizing illegal drugs or poisons, no way around that. If an AI is good at writing software, it can be reused for writing malware. If an AI is good at autonomously finding vulnerabilities in your own network, it can be reused to do the same in some other dude's network.

AI may have an alignment, but raw capabilities sure don't.
