
The standard e/acc trope is that existential risk is obviously sci-fi nonsense, and so anything that slows down progress is just doing harm. (Usually no attempt is made to engage with the actual arguments for potential risk.)

Taking the absence of existential risk as self-evident, the next objection is to “dumbing down”: model performance suffering because of RLHF (the “alignment tax” is a real thing). Often, though not always, this comes with an objection to wokeness, a perceived unwillingness to speak “truths”, or a left-wing bias being imposed on the models.



