
Well, analog computation being noisy could be the reason we experience things like emotion.

It's easy to say emotion has no value, until you see it in action bringing some sense of control to, say, a family that has gone through trauma or a country that has gone through war.

It doesn't look like digital computation (as opposed to digital encoding) can produce such outcomes.

We see it constantly: the NSA, Zuck, Wall St, China, etc. all have access to ridiculous amounts of digital computational power, yet they are surprised on a daily basis by the realization that they aren't in control.




> Well, analog computation being noisy could be the reason we experience things like emotion.

Hm, I don't really see why this should be the case. Emotions are pretty well studied in both animals and humans and are, to put it very hand-wavingly, merely a global change of brain state/equilibria, modulated for example by brain regions such as the amygdala and/or the release of neuromodulators [1]. From my understanding, there is nothing about emotions that cannot be computed by a digital computer, and there is little about emotions that is related to noise.

I'll let philosophers think about the experience part of your statement.

[1] https://en.wikipedia.org/wiki/Amygdala


Ok show me a computer with emotions.


You really wouldn’t want to see one.

But all that’s needed is a handful of rules to provide for a system that emotes. You’d probably dismiss it as an inauthentic toy, but emotions actually aren’t the core aspect of agency.

Anyway, the rules just need to assemble a goal, a threshold for equilibrium, and reactions for deviation from that equilibrium.

Bonus points if you account for radiant measurements of equilibrium. What I mean by that is anticipation of adjacent conditions that signal a probable loss of equilibrium, such that the system doesn’t just react to an unbalanced circumstance, but also to things that could lead to an undesired imbalance.

Examples:

A. If the cup is disturbed so that the milk spills, then a negative experience ensues.

B. If a balloon, inflated with ordinary compressed air, sinks onto the grass and pops, a negative experience ensues.

C. Ambulate through an environment obstructed by complex obstacles, and negotiate each obstacle without falling onto the ground. Falling onto the ground will result in a negative experience.

Each of these three goals represents a targeted state of equilibrium: don’t spill the milk, keep the balloon safe, don’t fall down go boom.

Now, layer an array of reactions on top of the branched set of possible outcomes. You can also build up variations on top of each branch.

Positive branches are indicated in moments of success at achieving the goal. Negative branches are indicated upon equilibrium being defeated.

So the computer or robot can externalize its inner state with a happy face or a sad face, but we’re missing some of the emotional range. When would anger display? When the machine can assign blame and consider revenge, of course.

So if an entity (preferably a rival robot, since we wouldn’t want the robot to exact revenge on a person) knocks over the milk, pops the balloon, or tackles the robot, the obvious motive is to make sure that never happens again, and the root cause is the rival entity. Stand back up, destroy the entity, acquire more milk and another balloon, and try to achieve equilibrium, and thus happiness, again.

Prior to reacquiring its happy state, the machine can externalize an angry face if it can assign blame to a detected responsible entity; in all other cases, it would simply be sad, until it can stand back up, inflate another balloon and pour itself another glass of milk to protect. If it cannot set things back in order, as desired, then it is simply permanently sad (no balloon, no milk, unable to stand or walk), forever.

See how that works? It’s actually not much more complicated than that.
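If it helps, here is a rough sketch of that happy/sad/angry branching in Python. Everything in it (the goal name, the numbers, the blame check) is invented for illustration; it is a toy of the rule structure described above, not a claim about how real emotion works.

    from dataclasses import dataclass

    @dataclass
    class Goal:
        name: str          # e.g. "don't spill the milk"
        target: float      # desired value of some measured quantity
        tolerance: float   # allowed deviation before equilibrium is defeated

        def in_equilibrium(self, measurement):
            return abs(measurement - self.target) <= self.tolerance

    class Emoter:
        def __init__(self, goal):
            self.goal = goal
            self.face = "happy"    # externalized inner state

        def observe(self, measurement, blamed_entity=None):
            """React to a new measurement of the goal quantity."""
            if self.goal.in_equilibrium(measurement):
                self.face = "happy"    # positive branch: goal achieved
            elif blamed_entity is not None:
                self.face = "angry"    # equilibrium defeated and blame assignable
            else:
                self.face = "sad"      # equilibrium defeated, no one to blame
            return self.face

    # Example A, the milk cup: target = full cup (1.0), small spills tolerated.
    robot = Emoter(Goal("don't spill the milk", target=1.0, tolerance=0.1))
    print(robot.observe(0.95))                        # happy
    print(robot.observe(0.2))                         # sad (spilled, cause unknown)
    print(robot.observe(0.2, blamed_entity="rival"))  # angry (blame assigned)

The "radiant" variant would run the same threshold check against predicted near-future measurements, so the system reacts to a probable loss of equilibrium before anything actually spills.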


In a multi-agent environment an agent has to model its peers as well and learn to communicate to solve goals together. Dealing with other agents is one step up from dealing with objects. Emotion would naturally be linked to the actions of other agents as they affect the completion of one's goals.


Part of your statement is true, and part of it is false.

An agent would need to model the behavior of peers, yes.

But communicate? No. Solve goals together? No.

To coalesce civilization or society? Maybe, maybe not. Socialization among peers is not a prerequisite for agency. Not by a mile.

And certainly not amid a state of nature. Communication and collaboration would not become a necessity at all.

Emotion might become an aspect of investment in hypothetical experiments performed by an agent: hope that equilibrium might be achieved with less work through communication and collaboration.

But put it this way. A caveman grunts at a wild boar standing on top of a hill. The caveman wishes to discern whether the silhouette atop the hill is potential food, by provoking movement, or an inert object offering the illusory shape of a backlit animal. The boar notices and experiences fear. The boar freezes, hoping the grunt was not directed toward it intentionally.

Is neither an agent? Does the conflict of interests preclude emotion?

The boar models the adversary, and experiences emotion to preserve the equilibrium of staying alive.

The caveman experiences hunger as a loss of equilibrium, which provokes a mixture of anxiety and unhappiness, which may cascade into malaise or depression as weakness progresses with starvation. The aggression of the hunt is not anger, although anger may arrive incidentally.

Is the grunt communication? Perhaps as much as any tactic might be. Deceptive communication (bird calls, imitating a female in heat to draw male prey) might still be communication, after all.

But to model nature, there must have been a period where some agents seemingly existed without peers. Those agents likely experienced emotion before cognizant sentience and a rich awareness of the potential for sentience within peers, which most likely precedes a capacity to communicate.


When the guys at Boston Dynamics kick BigDog and it struggles to stay up because it "doesn't want to fall," it feels spooky close.


That’s just you projecting your impression of agency onto a puppet, based on prior observation of actual animals.

But make no mistake. It is a puppet. It’s a multicore processing circuit with stack pointers, instruction pointers and little else going for it.

It’s your laptop strapped to some motors. It’s not sentient, and has no agency. It’s a guided missile at best. A step above cruise control.

It lacks authority to define where it goes or form a need for continuing to stand. Thus it lacks true agency.

We can ascribe happy/sad to stand/fall as crude, fundamental, binary “emotions,” but robots like BigDog are less complicated than amoeboid life found in pond scum.

Consider whether traffic lights are happy or sad, based on whether traffic obeys their signalling. Now consider traffic cameras. Now consider whether an automated ticket for running a red light on camera is an expression of emotion.


>It lacks authority to define where it goes or form a need for continuing to stand. Thus it lacks true agency.

This has been an adequate description of me on my way to work on Monday morning too many times in my life.

I think we might just be disagreeing about how complicated the puppet is.


You have options that you prefer to avoid, because you choose, for emotional reasons, not to cope with the inevitability of your own mortality. It's simply easier to don the mantle of the soulless worker drone.

But you do have options, and the free will to exercise them. You could rob banks, sell drugs, stay in bed, go on a hunger strike, bootleg intellectual property for fun and profit.

You have choices, up to and including suicide.

BigDog can't even commit suicide intentionally.


This is such a great comment.


In our current profit-based context, emotion is a privilege, so I wouldn't expect "emotiputers" outside artistic/demo/toy spheres.


Think of emotion as predicting future positive or negative rewards. The value of a state (or action) is related to the anticipation of reward. The fundamental role of emotion is to select actions. Reward types are basically hard-wired by evolution into our brains and are related to safety, food, companionship, learning, creativity, curiosity and a few more. They simply identify specific situations and send reward signals to the part of the brain that learns to predict future rewards.
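To make that concrete, here is a tiny sketch in reinforcement-learning terms: tabular TD(0) value learning over a three-state chain. The state names and reward values are made up for illustration; the point is just that the learned value of a state is the anticipation of future reward.

    # Replay a fixed episode (hungry -> smells_food -> eating) many times
    # and learn how much future reward each state predicts.
    chain = ["hungry", "smells_food", "eating"]
    reward = {"hungry": 0.0, "smells_food": 0.0, "eating": 1.0}   # "hard-wired" reward signal
    value = {s: 0.0 for s in chain}                               # learned anticipation of reward

    alpha, gamma = 0.1, 0.9    # learning rate, discount factor

    for _ in range(500):
        for i, s in enumerate(chain[:-1]):
            s_next = chain[i + 1]
            # TD error: reward received plus discounted prediction, minus the old prediction.
            td_error = reward[s_next] + gamma * value[s_next] - value[s]
            value[s] += alpha * td_error

    for s in chain:
        print(f"{s}: anticipated reward = {value[s]:.2f}")

The value of "smells_food" climbs toward the reward before any reward actually arrives; that anticipation is the emotion-like signal in this framing.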

> It doesn't look like digital computation (as opposed to digital encoding) can produce such outcomes.

We have built such systems and they work with enough training, but they must be trained as agents in an environment, not as a model trained on a static dataset - think AlphaGo. I think AlphaGo has learned emotion related to the world of Go (in this case emotion is related to good move, bad move, safety and danger), and its human opponents had a lot to say about how it felt to play against it.

Emotion is not something beyond AI agents. It's just how they plan their actions. They might not be human emotions but they are emotions related to their own activity and goals.


You may be right that "training must be as agents in an environment" gets us something like emotion.

Emotions being valuable routes to unpredictable/unknowable states, good or bad. How do you see this connecting to what the article talks about?

In my mind, he is saying that as the transition to such systems happens, we lose control over them.



