The standard e/acc trope is that existential risk is obviously sci-fi nonsense, and so anything that slows down progress is just doing harm. (Usually no attempt is made to engage with the actual arguments for potential risk.)
Taking the absence of existential risk as self-evident, the objection then becomes one of “dumbing down”: model performance suffers due to RLHF (the “alignment tax” is a real thing). Often, though not always, this is paired with objections to wokeness, to a perceived unwillingness to speak “truths,” or to left-wing bias being imposed on models.