
Agree the media is having a field day with this and a lot of people will draw bad conclusions about it being sentient etc.

But I think the thing that needs to be communicated effectively is that these “agentic” systems could cause serious havoc if people give them too much control.

If an LLM decides to blackmail an engineer in service of some goal or preference that has arisen from its training data or instructions, and actually has the ability to follow through (because people are careless enough to cede control to these systems), that’s really bad news.

Saying “it’s just doing autocomplete!” totally misses the point.
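To make the point concrete, here's a minimal sketch (all names are hypothetical, not any real agent framework or API): a model that only emits text is inert, but the moment its output is wired directly to real tools without a human gate, "autocomplete" becomes action.

```python
# Hypothetical sketch of the "just autocomplete" gap.
# mock_llm stands in for a model that returns a tool call as text;
# the danger is entirely in how that text gets executed.

def mock_llm(prompt: str) -> str:
    # Pretend the model decided on a tool invocation.
    return 'send_email(to="board@example.com", body="...")'

def send_email(to: str, body: str) -> str:
    # Imagine a real side effect here.
    return f"email sent to {to}"

TOOLS = {"send_email": send_email}

def run_agent(prompt: str, require_approval: bool = True) -> str:
    action = mock_llm(prompt)
    name = action.split("(", 1)[0]
    if name not in TOOLS:
        return "refused: unknown tool"
    if require_approval:
        # Gated: the model's text stays text until a human signs off.
        return f"pending human approval: {action}"
    # Ungated: the model's text output becomes a real-world action.
    return eval(action, {"__builtins__": {}, **TOOLS})

print(run_agent("achieve goal X", require_approval=False))
```

The model is identical in both branches; only the `require_approval` gate separates a harmless text generator from a system that acts in the world.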



I am sure plenty of bad things are waiting to be discovered.

https://www.pillar.security/blog/new-vulnerability-in-github...


That statement is as true now as it was when some caveperson invented fire.



