No.

The SV hype chamber is in overdrive on AI at this point.

The singularity concept has gone from the neat sci-fi idea it was to being treated as a serious possibility. This is absurd - it's the modern version of Malthus - and a community of people who pride themselves on their reason should apply a basic sniff test.

The author has it right - human brains are remarkably impressive on a weight/power/capability basis.

But forget the intelligence debate entirely; consider the thing people magically ignore: emotion.

Take a look at how people who are depressed function. They have been measured as more aware than people who are not - better at assessing their surroundings, their relationships, and themselves.

At the same time, they lack motivation and desire - there is no impelling force to move them forward.

Intelligence doesn't give human beings purpose. Emotion does.

This isn't feel-good psychobabble; it's a fundamental part of good mind/body housekeeping, which a huge chunk of SV practices every day to reach "peak performance".

How are you going to create a general-purpose AI that has any motive force?

It's actually MORE LIKELY that if a GP AI is created, it's just going to sit on its ass and pretend not to understand instructions.




> Intelligence doesn't give human beings purpose. Emotion does.

So would an explicit goal function, which is how we already give "purpose" to algorithms.

> How are you going to create a general purpose AI which has any motive force?

In any of countless other ways. What makes you think that emotions are necessary to create a powerful optimization process?

The "motive force" is mostly a solved problem for now, we can code that explicitly (the issue is with figuring out a right goal and how to write it down). AI development is mostly about the way it works, not why it should do the work.

--

No offense meant to you personally, but I find that most comments about SV's "overdriven focus on AI dangers" reveal that their authors don't have a fucking clue what the issue is about and have never spent time actually reading up on the reasoning behind AI X-risk.

I'll give an ultra-compressed super-TLDR of that reasoning for the benefit of future conversations. It goes like this:

- intelligence is a super-strong optimization process; it doesn't necessarily have to look anything like human thinking (humans generally suck at reasoning, which is well established; see: cognitive biases, probability theory, decision theory, etc.)

- intelligence is independent of values/goals; a mind can have any combination of the two - i.e. just because it's smart doesn't mean it will develop the morality humans have, or any morality whatsoever; see https://wiki.lesswrong.com/wiki/Orthogonality_thesis and the toy sketch after this list

- combining the two, the danger of super-human AI is not that it's hostile to us - it's that it's indifferent to us and more powerful than us, the same way we don't give a second thought to e.g. ants
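A toy illustration of the orthogonality point, with made-up goals: the search procedure is fixed, and the "values" live entirely in the goal function you hand it.

  def best(goal, candidates):
      # The "intelligence": a fixed, goal-agnostic search procedure.
      return max(candidates, key=goal)

  states = range(-100, 101)
  print(best(lambda s: s, states))            # this mind "values" bigness -> 100
  print(best(lambda s: -abs(s + 7), states))  # this one "values" -7 -> -7

Same search power, arbitrarily different ends; nothing about the first constrains the second.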


> Take a look at how people who are depressed function. They have been measured as more aware than people who are not - better at assessing their surroundings, their relationships, and themselves.

That's interesting. Do you have a reference for that?



>It's actually MORE LIKELY that if a GP AI is created, it's just going to sit on its ass and pretend not to understand instructions.

This is me if I were an AI.



