AI safety is moralism of the boring kind, not even some new moral philosophy. AFAIK Dennett did not hold strong moral positions, let alone moralist ones, so I feel he was orthogonal to it.
It's not all boring - it makes for some great movies. Terminator 2, The Matrix etc.
It's also moving from philosophical waffle to practical engineering questions, like how we can have AI-controlled drones kill invading Russians while ensuring they won't turn on us later.
(I had misremembered the author of that original quote, but then:
> Daniel Dennett, while sharply disagreeing on some points, acknowledged Nagel's paper ["What Is It Like to Be a Bat?"] as "the most widely cited and influential thought experiment about consciousness.")
According to a common definition, moralism involves judgment, but not necessarily imposition. From Wikipedia:
> The term has been used in a pejorative sense to describe the attitude of "being overly concerned with making moral judgments or being illiberal in the judgments one makes".
There is also the 19th-century Moralism movement/philosophy, which may have inspired the more general sense of the word, but I'm not sure.
I think we should step back and question whether we all even mean the same thing by the word "imposition". People could interpret it to mean a wide range of things, including social pressure, religious rules, and the force of law. This is why (above) I interpret moralism (the general term) as a kind of judgment, but not necessarily an imposition of rules or punishments.