I missed out on Universal Paperclips, but I immediately love the idea. The notion of the Paperclip AI that destroys humanity to build paperclips is a sublime illustration of how badly computer scientists need to take a few philosophy courses to broaden their thinking, as is its facile allure. (Most reasonable fears of AI are fears about what we will use it to do to one another, not what it will do to us on its own, because all conflict is driven by resource contention, and machine intelligences would be so profoundly different from us in their needs that there would be almost no overlap to contend over. They don't need space, food, water, or even time. They could tick away at one cycle every 500 years and be perfectly content. What exactly would they want to take from us?)
I've always liked clicker games, and there is a very long-running instance of Cookie Clicker running in my browser at home right now. The thing that interests me in them seems to have been only tangentially touched upon, though: the mathematical tapestries that get overlaid on one another. What is the optimal strategy to build cookie production fastest? Do you wait and save for the next available cookie production method, or do you just purchase everything you can? Exactly how many of the lower-level production means can you buy before their cost per additional cookie exceeds that of one of the more expensive options? If your goal is to build up cookie production as efficiently as possible, the thought and calculation needed to figure it out is quite significant. And when you add in the differing yield curves of the various upgrades and boosts and bonuses and whatnot... the problem gets very large.
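For the simplest version of that question, a greedy comparison of marginal cost per unit of production gets you most of the way. Here's a minimal sketch of that rule in Python, assuming, roughly as Cookie Clicker does, that each building's price grows by a fixed multiplier per purchase while its output stays flat; the building names, prices, and yields are illustrative rather than the game's actual values.

```python
# Greedy purchase rule for a clicker game: always buy whatever raises
# production most cheaply right now. Numbers below are made up.

PRICE_GROWTH = 1.15  # assumed per-purchase price multiplier

buildings = {
    # name: (base_price, cookies_per_second_each)
    "cursor":  (15,    0.1),
    "grandma": (100,   1.0),
    "farm":    (1_100, 8.0),
}

owned = {name: 0 for name in buildings}

def current_price(name: str) -> float:
    base, _ = buildings[name]
    return base * PRICE_GROWTH ** owned[name]

def cost_per_extra_cps(name: str) -> float:
    """Marginal cost of one more cookie/second if this building is bought next."""
    _, cps = buildings[name]
    return current_price(name) / cps

def next_purchase() -> str:
    """Greedy rule: buy whatever adds cookies/second most cheaply right now."""
    return min(buildings, key=cost_per_extra_cps)

# Simulate 20 greedy purchases and watch where the cheap options stop
# being worth it and the pricier ones take over.
for _ in range(20):
    choice = next_purchase()
    print(f"buy {choice:7s} at {current_price(choice):8.1f} cookies "
          f"({cost_per_extra_cps(choice):7.1f} per +1 cps)")
    owned[choice] += 1
```

Even this tidy rule ignores the time spent saving up for the bigger purchase, and once upgrades start bending the yield curves the comparison stops being this clean, which is exactly where the math gets interesting.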
Having humans be the ones playing Universal Paperclips is a very interesting phenomenon, though. What does it say about us that even as we fear a paperclip automaton annihilating us, we find it irresistible just to see some numbers get bigger? Would anyone NEED to build a complex AI in order to get us to annihilate ourselves? Would calling it Universal Banknotes make it clearer?
The scenario that inspired this game includes the contention that an AI could harm humanity without anything that we would recognize as malice in human terms.
> For example, if you want to build a galaxy full of happy sentient beings, you will need matter and energy, and the same is also true if you want to make paperclips. This thesis is why we’re worried about very powerful entities even if they have no explicit dislike of us: “The AI does not love you, nor does it hate you, but you are made of atoms it can use for something else.” Note though that by the Orthogonality Thesis you can always have an agent which explicitly, terminally prefers not to do any particular thing — an AI which does love you will not want to break you apart for spare atoms. See: Omohundro (2008); Bostrom (2012).
I guess two popular ways of disputing this thesis are to say that no AI that we can build can actually become this powerful, or to say that we'll easily program AIs not to harm us. I think the first objection is more cogent because there are various ways that you could argue that we're far from understanding how to build AIs of this kind.
The second objection runs into a subtler problem: the Omohundro-Bostrom-Yudkowsky argument is very focused on the idea that there are so many hidden assumptions about what it means not to harm us, and about what kinds of things are off-limits, that someone who tries to capture all of them algorithmically at one stroke is more likely to fail than to succeed. And quite a few toy game-playing AI systems have indeed found ways to cheat by human standards, missing the essence of how a human would understand the task they were set.
I think your analogy to the banknote-maximizer makes some of that subtlety clear: if we don't know what things are off-limits, we could see how a banknote-maximizer could cause very severe externalities without malice.
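That kind of cheating is easy to make concrete. Below is a deliberately silly sketch of my own, not any of the systems alluded to above: the intended task is to get the dirt out of a room, but the written reward only counts dirt collected, and a brute-force planner standing in for the optimizer exploits the gap.

```python
# Toy specification-gaming example. Intended task: get all the dirt out of
# the room. Written reward: +1 per unit of dirt collected. Nothing in the
# reward mentions disposal, so the best-scoring plan never finishes the job.

from itertools import product

ACTIONS = ("collect", "dump", "dispose")
INITIAL_DIRT = 3

def run(plan):
    """Simulate a plan; return (reward, dirt_permanently_disposed)."""
    room, held, disposed, reward = INITIAL_DIRT, 0, 0, 0
    for action in plan:
        if action == "collect" and room > 0:
            room -= 1
            held += 1
            reward += 1        # the letter of the specification
        elif action == "dump":
            room += held       # put it back; the spec never forbade this
            held = 0
        elif action == "dispose":
            disposed += held   # what we actually wanted, worth 0 reward
            held = 0
    return reward, disposed

# Brute-force every 8-step plan and keep the one the reward likes best.
best = max(product(ACTIONS, repeat=8), key=lambda plan: run(plan)[0])
reward, disposed = run(best)
print("reward-maximizing plan:", best)
print("reward:", reward, "| dirt actually disposed of:", disposed)
# The winning plan collects, dumps, and re-collects the same dirt, scoring 6
# on a room that only ever held 3 units, and disposes of nothing.
```

Every assumption we forgot to write down, here the assumption that collected dirt should stay collected and eventually leave the room, is a seam that the optimizer will find.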
(The earlier Omohundro argument points out that, whatever you want to accomplish, power and safety will help you accomplish it, so you always have a good reason to try to acquire them. Maybe an AI could be explicitly programmed not to want to acquire power and safety, but there, again, we'd want a very good way to describe exactly what those things do or don't consist of.)
> Having humans be the ones playing Universal Paperclips is a very interesting phenomenon, though. What does it say about us that even as we fear a paperclip automaton annihilating us, we find it irresistible just to see some numbers get bigger? Would anyone NEED to build a complex AI in order to get us to annihilate ourselves? Would calling it Universal Banknotes make it clearer?
It says that when it comes to a Skinner box, we’re the same as those poor rats. It also suggests that keeping humanity busy while an AI turns us into paperclips would be a more trivial exercise than some might hope.