Hacker News

Seems we're approaching the limits of what is possible with AI alone. Personally, I favor a hybrid approach: interfacing human intelligence with AI (like the Borg in ST:TNG?) to give the military an edge in ways that adversaries cannot easily or quickly reproduce or defeat. There's a reason we still put humans in cockpits even though commercial airliners can pretty much fly themselves....

Hardware and software (AI or anything else) are tools, IMHO, rather than replacements for human beings....




> Seems we're approaching the limits of what is possible with AI alone.

Not even close. In fact, we've barely started.


How's that? I don't even see problem-free self-driving taxis, and California has even passed legislation for them. There's hype and then there's reality. I get your optimism, though.


They've barely started trying. We'd be reaching the limits of AI if self-driving cars were an easy problem and we couldn't quite solve it after 15 years, but self-driving cars are actually a hard problem. Despite that, we're pretty darn close to solving it.

There are problems in math that are centuries old, and no one is going around saying we're "reaching the limits of math" just because hard problems are hard.


Humans are hardware; we are not anything magical. We do have 4 billion years of evolution keeping our asses alive, and that has led to some very optimized wetware for that purpose.

But betting that wetware is always going to be better than hardware is not a bet I'd make over any 'long' period of time.


> 4 billion years of evolution

That is a pretty important part of the equation. What if the universe is the minimum viable machine for creating intelligence? If you think of the universe as a computer and evolution as a machine-learning algorithm, then we already have one example of what size of computer, and how much time, it takes for ML to create AGI. It seems presumptuous to believe that humans will suddenly figure out a way to do the same thing a trillion times more efficiently.


> it seems presumptuous to believe that humans will suddenly figure out a way to do the same thing a trillion times more efficiently.

Nature isn't efficient. Humans routinely create things many orders of magnitude more efficient than nature's versions. The fact that it didn't take millions of years to develop even the primitive AI we have today is evidence enough, as is going from the Wright brothers' first flight to space travel in roughly sixty years. Or any number of examples from medicine, genetic engineering, materials synthesis, etc.

You could say that any human example also has to account for the entirety of human evolution, but that would be a bit of a red herring: even in that case, the examples of humans improving on nature in far less than geological spans of time remain valid, and the same accounting would apply to the development of AI as well.


> it seems presumptuous to believe that humans will suddenly figure out a way to do the same thing a trillion times more efficiently.

Why?

I think there may be some confusion on your part about how incredibly inefficient evolution is. Much of the time it is performing random walks, or waiting for some random particle to break DNA just right and for that mutation to land in just the right place to survive. Evolution has no mechanism for saying "oh, that would be an amazing speedup, I'll just copy it over" until you get to intelligence.
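To put rough numbers on that inefficiency, here's a toy sketch (not a model of real biology; the bitstring "genome" and fitness function are made up purely for illustration) comparing a blind random walk against random mutation plus selection on the same search problem:

```python
import random

random.seed(42)
N = 14  # tiny "genome" length, so the blind walk finishes quickly


def blind_walk(n):
    """Pure random walk: flip a random bit each step and keep it no
    matter what. Expected time to stumble onto the all-ones string
    grows roughly like 2**n."""
    genome = [0] * n
    steps = 0
    while sum(genome) < n:
        genome[random.randrange(n)] ^= 1
        steps += 1
    return steps


def mutate_and_select(n):
    """Random mutation plus selection: try a random bit flip, but keep
    it only if fitness (the number of ones) improves. Finishes in
    roughly n * ln(n) attempts -- still blind, but vastly faster."""
    genome = [0] * n
    steps = 0
    while sum(genome) < n:
        i = random.randrange(n)
        if genome[i] == 0:  # this flip improves fitness, so it is kept
            genome[i] = 1
        steps += 1          # a rejected flip still costs a "generation"
    return steps


if __name__ == "__main__":
    print("blind random walk:  ", blind_walk(N), "steps")
    print("mutation + selection:", mutate_and_select(N), "steps")
```

Even with selection, the search is blind trial and error costing on the order of n·ln(n) attempts; without it, the walk wanders a 2^n-corner hypercube more or less aimlessly. A designer who could simply "copy the fix over" would need exactly n steps, which is the gap the parent comment is pointing at.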


I'd like to think we're more than just machines. We have souls, understand and live by a hopefully objective set of moral values and duties, aren't thrown off by contradictions the same way computers are.... Seems to me "reproducing" that in AI isn't likely... despite what Kurzweil may say :).


> We have souls, understand and live by a hopefully objective set of moral values and duties, aren't thrown off by contradictions the same way computers are

Citations needed


Are you feeling depressed or suicidal?


That reply would fit better on Reddit than HN. Here we discuss things with curiosity.

If you're making the claim that humans have immaterial things like souls, and that we adhere to some kind of objective morality beyond our societal programming, then it's fair to ask for the reasoning behind it.

Every year machines surprise us by seeming more and more human (or perhaps not that, but "human-capable"). We used to have some ineffable creativity or reasoning that made us the masters of drawing, painting, music, chess, or Go. No longer.

There are still some things we excel at that machines don't, or things that would take every machine in the world 10,000 years and a nuclear plant's worth of energy to match what a single human brain does in one second on a cucumber's worth of calories.

However, this has only ever gone in one direction: machines match more and more of what we do and seem to lack less and less of what we are.


How old are you if you don't mind me asking?


I do mind you asking.

You can choose to engage with the content of the discussion or choose not to engage with it.

"Everybody who disagrees with me is either a child or clinically depressed" isn't what I come to HN for.


Sorry to offend your sensibilities, bud. This discussion thread is over.


Can't you just reply to his points?


I could. The next thing he will do is accuse me of solipsism. So I'm gonna stop right here and agree with him.


> aren't thrown off by contradictions the same way computers are

We are not? Just look at any group of people that's bought into a cult and you can see people falling for contradictions left and right. Are they "higher-level" contradictions than the ones our machines currently fall for? Yes, but the same premise applies to both.

Unfortunately, I believe you are falling into magical thinking here: "Because the human-intelligence problem is hard, I'm going to write these difficult issues off as magic, and therefore they cannot be solved or reproduced."



