We will never achieve artificial intelligence unless we create a program that can differentiate good from evil, pleasure from pain, positive from negative.
The basic building blocks of life.
If we continue making machines that are faster and faster at calculating formulas and storing knowledge, we will have just that: a giant calculator.
I definitely think you are on to something. The attempts at artificial intelligence I am aware of all consist of some sort of optimization, so trying to find a good thing to optimize seems a very reasonable thing to try.
(This is just my speculation, so take it with a grain of salt)
Here I think it is reasonable to look to human motivation. Maybe by making an agent that optimizes what a human brain optimizes, we could see similar behaviour?
A reasonable start is Maslow's hierarchy of needs (a rough sketch of how a few of these could be folded into a reward signal follows the list).
1. Biological and physiological needs. For an embodied AI, this could correspond to integrity checks coming up valid, battery charging, servicing.
2. Safety needs. I think these emerge from prediction+physiological needs.
3. After that we have social needs. This one is a little bit tricky. Maybe we could put in a hard coded facial expression detector?
4. Esteem needs. Social+prediction
5. Cognitive needs. I have no idea how this could be implemented
6. Aesthetic needs. I think these are pretty much hard-coded in humans, but are quite complex. Coding this will be ugly (irony)
7. Self-actualization???
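To make that a bit more concrete, here is a very rough Python sketch of how needs 1-3 could be folded into a single scalar that an optimizer chases. Every signal name (battery_level, prediction_error, human_smiling, ...) and every weight is invented for illustration; this is just one way the idea could look, not a real design.

```python
# Very rough sketch: folding a few of the "needs" above into one scalar reward.
# All signal names and weights are invented for illustration only.

def need_based_reward(state: dict) -> float:
    reward = 0.0

    # 1. Physiological: battery charge and passing integrity checks.
    reward += 1.0 * state["battery_level"]       # 0.0 (empty) .. 1.0 (full)
    reward -= 5.0 * state["damage_detected"]     # 0 or 1 from self-checks

    # 2. Safety: penalize surprise, i.e. bad predictions about the world.
    reward -= 0.5 * state["prediction_error"]    # from whatever world model exists

    # 3. Social: output of a hard-coded facial-expression detector.
    reward += 2.0 * state["human_smiling"]       # 0.0 .. 1.0 confidence
    reward -= 2.0 * state["human_frowning"]

    return reward


# Example: a charged, undamaged robot looking at a smiling human.
print(need_based_reward({
    "battery_level": 0.9,
    "damage_detected": 0,
    "prediction_error": 0.1,
    "human_smiling": 1.0,
    "human_frowning": 0.0,
}))  # -> 2.85
```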
Now, from 1 and 3 it is reasonable to suppose (provided the optimizer is good enough) that we could train the AI like one trains a dog. You give a command, the AI obeys, you smile at it or pet it (-> reward). If it does something bad, you punish it.
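Something like this interaction loop, where the human's reaction is the only reward signal. Again just a sketch: agent, camera, give_command and detect_expression are placeholders for whatever learning algorithm and hardware you would actually use, not a real library.

```python
# Sketch of the "train it like a dog" loop: the human's reaction is the reward.
# agent, camera, give_command and detect_expression are placeholders.

def training_episode(agent, camera, give_command, detect_expression, steps=100):
    observation = camera.read()
    for _ in range(steps):
        command = give_command()                   # e.g. "sit", "fetch"
        action = agent.act(observation, command)   # whatever policy the agent has

        observation = camera.read()
        expression = detect_expression(observation)  # hard-coded detector from need 3

        if expression == "smile":      # obeyed -> praise
            reward = +1.0
        elif expression == "frown":    # misbehaved -> punishment
            reward = -1.0
        else:
            reward = 0.0               # no feedback this step

        agent.update(observation, command, action, reward)
```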
In order for the optimization procedure to not take an unreasonably long time, I think it is important that the initial state has some instincts: make a sound if the battery is low, pay attention to sounds that are speech-like.
Giving it something akin to filial imprinting could also be a good idea.
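One way to read "instincts" in code: a couple of hard-wired reflexes that fire before the learned policy gets a say, so the freshly initialized agent is not completely helpless. Everything here (the state keys, the action strings, the thresholds) is made up for illustration.

```python
# Sketch: hard-wired "instincts" that take priority over the learned policy.
# All state keys, actions and thresholds are invented for illustration.

def act_with_instincts(state: dict, learned_policy) -> str:
    # Instinct 1: call for help when the battery is nearly empty.
    if state["battery_level"] < 0.1:
        return "emit_distress_sound"

    # Instinct 2: orient toward speech-like sounds (crude attention bias).
    if state["heard_speechlike_sound"]:
        return "turn_toward_sound"

    # Instinct 3: something akin to filial imprinting -- stay near the
    # first human seen after startup.
    if state["imprinted_human_visible"] and state["distance_to_imprinted_human"] > 2.0:
        return "approach_imprinted_human"

    # Otherwise, defer to whatever the optimizer has learned so far.
    return learned_policy(state)
```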
Extensive research on the neural basis of motivation should be prioritized, in my opinion.
Good and evil are subjective, and therefore not a "basic building block of life".
Pleasure and pain require emotion, something most researchers don't assume an AI will have.
What do you mean by "positive and negative"? Like, positive and negative numbers? Good and bad business decisions? We already have computers that can do that.
you're talking about what it means to be human, not what it means to be intelligent. you can't judge good from evil if you're not intelligent, so AI's first goal should be just that: intelligence, not morals