> I'm increasingly concerned that the impact of ML is going to be limited.
Of course, just like any other technology. How could it be otherwise? I don't see this as a point of concern. Is it overhyped? Yes (salespeople gonna sell). Is it still useful in a number of applications? Also yes.
I think the main problem is that people call them neural networks. The units are nothing like neurons. Neurons are lifeforms in their own right and are as advanced as this guy:
Now, what about neural networks? Well, you replace that guy with a one-line math function and call it a day. Now consider that you have billions of guys like that in your head, far more than in any of our simplified neural networks, and it becomes very clear how far we are from getting anywhere. Neurons really matter: they decide which cells to connect to, and where and when to send signals. Neural networks don't model the neuron, just the network, in the hope that the neuron wasn't important.
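To make the contrast concrete, here is a minimal sketch of that "one-liner": a standard artificial neuron is just a weighted sum pushed through a nonlinearity. (Plain Python for illustration; the sigmoid and the sample numbers are arbitrary choices, not any particular framework's API.)

```python
import math

def artificial_neuron(inputs, weights, bias):
    # The entire "neuron": a weighted sum squashed through a nonlinearity.
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))   # sigmoid activation

# Everything a biological neuron does -- growing connections, timing spikes,
# staying alive -- is replaced by this single arithmetic step.
print(artificial_neuron([0.5, -1.2, 3.0], [0.4, 0.1, -0.6], bias=0.2))
```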
I studied biology and point this kind of thing out all the time. It usually gets dismissed, with a lot of hand waving, by people who have not studied biology.
Everything you say is true, and more. There is another kind of cell in the brain that is roughly as numerous as neurons (the old ten-to-one figure has since been revised down). They’re called glial cells and they come in numerous forms. We used to think they were just support cells, but researchers have more recently started to find ways they are involved in computation. Here is one link:
The computational role they have is unclear so far (unless there is more recent stuff I am not aware of) but they are involved.
We are nowhere near the human brain. I think it will take at least a few more orders of magnitude of scale, plus much additional understanding.
GPT-3 only looks amazing to us because we are easily fooled by bullshit. It impresses us for the same reason millions now follow a cult called QAnon based on a series of vague shitposts. This stuff glitters to our brains like shiny objects to raccoons.
What this stuff does show is that generating coherent and syntactically correct but meaningless language is much easier than our intuition would suggest:
Those are extremely simple models and they already produce passable text. You could probably train one on a corpus of new-age babble and create a cult with it. GPT-3 is just enormous compared to those models, so it’s not surprising to me that it bullshits very convincingly.
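To illustrate how simple such a model can be, here is a sketch of a word-level Markov chain text generator. (The toy corpus is a made-up placeholder; the models linked above may differ in detail, e.g. character-level rather than word-level.)

```python
import random
from collections import defaultdict

def build_chain(words, order=2):
    # Map each n-gram of words to the words observed to follow it.
    chain = defaultdict(list)
    for i in range(len(words) - order):
        chain[tuple(words[i:i + order])].append(words[i + order])
    return chain

def generate(chain, length=20):
    # Walk the chain from a random starting n-gram.
    key = random.choice(list(chain))
    out = list(key)
    for _ in range(length):
        followers = chain.get(key)
        if not followers:                    # dead end: jump to a random n-gram
            key = random.choice(list(chain))
            followers = chain[key]
        out.append(random.choice(followers))
        key = tuple(out[-len(key):])
    return " ".join(out)

# Hypothetical toy corpus; a real one would be a book or scraped forum posts.
corpus = ("the universe opens to the mind and the mind opens to the universe "
          "because the mind is the universe").split()
print(generate(build_chain(corpus)))
```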
Edit: forgot to mention the emerging subject of quantum biology. It’s looking increasingly plausible that quantum computation of some kind (probably very different from what we are building) happens in living systems. It would not shock me if it played a role in intelligence. The speed with which the brain can generalize suggests something capable of searching huge n-dimensional spaces with crazy efficiency. Keep in mind that the brain only consumes around 20 watts of power while doing this.
In my opinion, GPT-3 is impressive not because it’s good at being human but because it’s good at doing lots of previously exclusively human things (like poetry) without being human at all. It’s certainly a better poet than I am, though that’s a low bar. It’s still concerning for that reason though - that relatively dumb algorithms can convincingly do things like “write a news article” or “write a poem”. What happens when we get to algorithms that are a lot smarter than this one (but still not as smart as our brains)?
I don’t think it’s a good poet. I think it does an excellent job assembling text that reads like poetry, but if you go read some good poets that’s only a small part of what makes their poetry good. Good poetry talks about the human experience in a place and time.
It absolutely could be used to manufacture spam and low-quality filler at industrial scale. Those couple of Swedish guys who write almost all pop music should be worried about their jobs.
Neural nets can solve some problems, though, like image classification, and looking for more applications like that is useful. It's just very doubtful that they can ever lead to something resembling human or even ant-level intelligence.
Agreed. I didn’t say they were useless, just that they were not going to become Lt. Cmdr Data.
They’re really just freaking enormous regression models that can, in theory, fit any function (or set of them) given enough parameters. Think of them as a kind of lossy compression, but of the “meta” (the function itself) rather than its outputs.
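A minimal sketch of that idea, assuming plain numpy and an arbitrary toy target (sin): a tiny one-hidden-layer net trained by hand-rolled gradient descent, squeezing the function into about 49 parameters. The architecture and hyperparameters are illustrative choices, nothing more.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)
y = np.sin(x)                       # the function we want to "compress"

# One hidden layer of 16 tanh units: 49 parameters standing in for sin().
W1, b1 = rng.normal(0.0, 1.0, (1, 16)), np.zeros(16)
W2, b2 = rng.normal(0.0, 1.0, (16, 1)), np.zeros(1)

lr = 0.05
for _ in range(5000):
    h = np.tanh(x @ W1 + b1)        # forward pass
    pred = h @ W2 + b2
    err = pred - y                  # gradient of squared error, backpropagated
    gW2, gb2 = h.T @ err / len(x), err.mean(axis=0)
    dh = (err @ W2.T) * (1.0 - h ** 2)
    gW1, gb1 = x.T @ dh / len(x), dh.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

# The error shrinks as the fit improves: a lossy copy of sin() in 49 numbers.
print("mean abs error:", float(np.abs(pred - y).mean()))
```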
The finding that some cognitive tasks, like assembling language, are easier than we would intuitively think is also interesting in and of itself. It suggests that our brains probably don’t have to work that hard to assemble text, which makes some sense, because we developed our language ourselves. Why would we develop a language with crazy-high overhead to synthesize?
You’ve made excellent points about why we shouldn’t be trying to emulate the human brain. I’m sure, collectively, we can come up with something better.
And then we won't call it ML, since ML is tainted by the current flawed methods that were promised to bring intelligence but just allowed us to generate metadata about images.