
As a matter of fact, I don't think it's true that

> if the system survives, the consequences will be inevitable: There is no way of reforming or modifying the system so as to prevent it from depriving people of dignity and autonomy.

But I also think that this claim is (1) practically impossible to prove and (2) a claim we morally ought to attempt to disprove.




I highly recommend reading the full manifesto. It gives a game-theoretic reason why human civilization will increasingly automate until human autonomy is reduced to nil, which is the state of suffering he is talking about: if you don't automate, $ENEMY will, so you automate. If you still choose not to automate, $ENEMY wins and controls you, and they will automate ever more with your resources. In fact, these three sentences are a decent summary of the past 1000 years of history, possibly more. See the recent AI 2027 paper (https://ai-2027.com/) for a more detailed treatment. And Kaczynski was talking about this stuff in the 70s.
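The dynamic described above has the shape of a prisoner's dilemma. A minimal sketch, with purely illustrative payoff numbers (nothing here is taken from the manifesto itself):

```python
# Hypothetical payoff matrix for the automation arms race described above.
# Payoffs are (you, enemy); higher is better. Numbers are illustrative only.
AUTOMATE, ABSTAIN = "automate", "abstain"

payoffs = {
    (AUTOMATE, AUTOMATE): (1, 1),   # arms race: both lose autonomy, neither is conquered
    (AUTOMATE, ABSTAIN):  (3, 0),   # you out-produce and dominate $ENEMY
    (ABSTAIN,  AUTOMATE): (0, 3),   # $ENEMY out-produces and dominates you
    (ABSTAIN,  ABSTAIN):  (2, 2),   # both keep autonomy -- but this is unstable
}

def best_response(enemy_move):
    """Your payoff-maximizing move given the enemy's move."""
    return max((AUTOMATE, ABSTAIN),
               key=lambda my_move: payoffs[(my_move, enemy_move)][0])

# Automating is a dominant strategy: it is the best reply to either enemy move,
# so both players automate even though mutual abstention pays both more.
print(best_response(AUTOMATE), best_response(ABSTAIN))
```

The point of the sketch is that the (abstain, abstain) cell is Pareto-superior but not an equilibrium, which is exactly the "if you don't automate, $ENEMY will" trap.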

Regarding (2), I want to ask: where did you get the idea that people have a moral responsibility to unconditionally defend the progress of technology?

Do you think a new technology's good or bad impact depends entirely on the virtue of the people using it? Or do new tools dictate their own usage once they make contact with constants like the immediate-reward-seeking human nervous system?



