Hacker News

I don't doubt we will be able to do great things with machine learning in the coming years.

What I do doubt is that machine learning will become generic anytime soon. I predict machine learning will always need some degree of specialization - we aren't reaching general intelligence/learning within our lifetimes. A machine that is awesome at playing Go will suck at translating languages.

I also predict machine learning will never be able to surpass humans in terms of creative ability. A top notch machine-written book/poem will always be inferior to a top notch human-written book/poem, for example. Humans can invent new things, machines seem only capable of rehashing existing things. For example, at some point a human writer invented the concept of an unreliable narrator. If you "teach" a machine how to write by feeding it thousands of books, but you exclude books that have unreliable narrators, will the machine ever write a book whose narrator is unreliable? I think not.

I'll happily admit you were right all along if AGI does come about within even the next 20 years, but I think you are grossly oversimplifying things in order to embrace the sci-fi fantasy you wish were real.




> we aren't reaching general intelligence/learning within our lifetimes

Almost certainly false.

> I also predict machine learning will never be able to surpass humans in terms of creative ability

Algorithms are already churning out papers that get accepted to journals, and they can compose crude music. This is a mere 10-15 years after serious study of the field began. I give it maybe 20 years before a computer-generated song appears on one of the top charts. These will likely still be domain-specific algorithms.

> Humans can invent new things, machines seem only capable of rehashing existing things

So you think human brains run on magical pixie dust? Every "thing" a human has invented can be described by a finite bit string, which means generating genuinely "new things" is a fiction: invention is selection from a finite combinatorial space.

We discover these compositions just like a computer would. The secret sauce we have, but don't yet know how to describe algorithmically, is discerning which bit strings have more value to us than others, the way a clever turn of phrase is valued over a dry, factual delivery.

> If you "teach" a machine how to write by feeding it thousands of books, but you exclude books that have unreliable narrators, will the machine ever write a book whose narrator is unreliable? I think not.

I don't see why not, even if we stick to domain-specific novel generation; it depends on how you train the system on its inputs. Random evolution is hardly a new concept in this domain.
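The random-evolution point can be sketched with a toy genetic algorithm. Everything here is a made-up illustration, not a real novel-generation system: the target string and the character-match fitness function stand in for whatever "value" function a real system would need.

```python
import random

random.seed(0)
TARGET = "the narrator lies"          # stand-in for a "valued" output
ALPHABET = "abcdefghijklmnopqrstuvwxyz "

def fitness(s):
    # Score by matching characters. A real system would need a far
    # richer value function; that is the genuinely hard part.
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s, rate=0.05):
    # Randomly replace each character with small probability.
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in s)

# Start from a population of purely random strings.
pop = ["".join(random.choice(ALPHABET) for _ in TARGET) for _ in range(100)]
for gen in range(1000):
    pop.sort(key=fitness, reverse=True)
    if pop[0] == TARGET:
        break
    # Keep the best ten, refill the population with their mutants.
    pop = pop[:10] + [mutate(random.choice(pop[:10])) for _ in range(90)]

pop.sort(key=fitness, reverse=True)
print(gen, repr(pop[0]))
```

The relevant property: the search routinely lands on strings that appeared nowhere in the starting population, which is the loose analogy to a trained system producing a feature (say, an unreliable narrator) absent from its training corpus.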


I'm curious whether you hold this view despite accepting that server farms already do more than enough computation for parity with the human brain, or whether you just don't consider artificial neural nets in relation to biological ones?

If you grant that the computational horsepower seems to be there, where does your skepticism that anybody will figure it out come from?

Alternatively, did you simply ignore the argument about how much computation the human brain does (which isn't that much compared with a server farm)? I mean at the neural level: actual neural networks, using the brain's topology or an approximation of it.

I guess I'm perplexed at your skepticism.


I'm skeptical because you are promising the moon, and when I look at and weigh the tech for myself, it seems many orders of magnitude less advanced than your hype leads one to believe.


I am basing the promise on a bottom-up argument: how many neurons are in the human brain, their connectedness and speed, and the amount of computation those three pounds can be doing given the synaptic mechanisms we know about.
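That bottom-up estimate can be made concrete with a back-of-envelope calculation. The figures below are the commonly cited order-of-magnitude numbers, not measurements from this thread:

```python
# Commonly cited rough figures; all three are order-of-magnitude assumptions.
neurons = 8.6e10           # ~86 billion neurons in a human brain
synapses_per_neuron = 1e4  # ~10,000 synapses per neuron
max_rate_hz = 100          # neurons fire at up to roughly 100 Hz

# Treat each synaptic event as one "operation":
ops_per_second = neurons * synapses_per_neuron * max_rate_hz
print(f"{ops_per_second:.1e} synaptic ops/s")  # prints 8.6e+16 synaptic ops/s
```

On these assumptions the brain's budget is around 10^17 synaptic events per second, which a data center with a few thousand modern accelerators can plausibly match in raw operations, hence the "horsepower is there" claim above.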

You are basing your skepticism top-down based on the results the science of artificial intelligence has shown to date.

It's a fair source of skepticism. There are some 6,000 species of mammals alone, all of which have neural nets, and exactly one of which has higher abstract reasoning communicated in a language with strong innate grammatical rules: humans.

However, we have 7 billion working specimens, a huge digital corpus of their cultural output, and their complete digitized source code, which can be explored or modified biologically.

For me, bottom-up wins. We can just try things until something works, which may happen suddenly, even overnight.

At the moment I see a jet engine, feathers flying everywhere, and no flight. But looking at that jet engine, I just can't imagine it will take long.



