The Seven Deadly Sins of AI Predictions (technologyreview.com)
155 points by bglusman on Oct 8, 2017 | 87 comments



As someone who has been into AI since the mid-90s, I continue to be deeply disappointed with the overall rate of progress, including all of the machine-learning and statistics-based recasting of what's meant by AI.

The big fundamental problems are all the same, and we don't appear to have made much progress on them. We have figured out very clever ways of teaching extremely useful new tricks to computers. I don't mean to denigrate the value of that work; it just resembles regular programming by different means.

Whatever your definition of intelligence, there's little to no generalizability of what's happening right now.


As someone who has been into AI since the late-70s, I am impressed with the overall rate of progress in the last 10 years. The whole field was stuck from the mid-1980s to about 2000. This is called the "AI Winter", and it really sucked.

AI used to be tiny - 20-50 people at CMU, Stanford, and MIT, and a few smaller groups elsewhere. All the '80s startups went bust. Now there are real applications that work and are widely deployed. Many classic problems, such as reading text and handwriting, have been solved and the solutions widely deployed commercially. Speech recognition is getting good and is deployed widely. Face recognition works. Automatic driving is working experimentally. There are now hundreds of thousands of people doing AI-type research.

Each new idea in AI has had a ceiling. Machine learning has a ceiling, but it's one high enough that the technology is good for something and generates enough revenue to finance its own R&D.

Machine learning is basically a form of high-dimensional multivariate optimization in possibly bumpy spaces. That's a hard problem, but considerable progress has been made, and the massive compute power necessary is now available. This is great. It's not everything, but it's real progress.
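To make that framing concrete, here is a toy sketch (illustrative only, not any real system): noisy gradient descent on a deliberately bumpy, high-dimensional surface. The loss function and dimensions are made up for illustration.

    import numpy as np

    rng = np.random.default_rng(0)
    dim = 1000
    w = rng.normal(size=dim)

    def loss(w):
        # A deliberately bumpy surface: a convex bowl plus a sinusoidal ripple.
        return np.sum(w ** 2) + np.sum(np.sin(5 * w))

    def grad(w):
        return 2 * w + 5 * np.cos(5 * w)

    lr = 0.01
    for step in range(5000):
        noise = rng.normal(scale=0.1, size=dim)   # stochasticity, as in minibatch SGD
        w -= lr * (grad(w) + noise)

    # Much lower than at the random start, but very likely a local minimum.
    print(loss(w))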

We're going to need another big idea to get beyond the ceiling of machine learning. No idea what that idea is.


I’m an outsider, but what’s the recent “big idea”? It seems to be throwing more compute power and larger training sets at essentially an old technique. This has led to a big improvement in performance, but I don’t see the big conceptual breakthrough.


I think that Bayesian Program Learning as pioneered by Lake et al. is a big idea.

I think that GANs probably qualify, although you can see that emerging in the SAB series of conferences in the '90s if you read the papers.

On the other hand I do see a lot of small innovations that are enabling many people to create incremental improvements and applications. I feel that the exploration of the field has been very weak and our overall knowledge is limited and not widely shared. Perhaps improvements like MCMC search for Bayesian reasoning, causality, counterfactuals, GPUs, TPUs, and FPGAs, and access to very large data sets for training, forward training and so on will be the actual breakthrough.


A lot here depends on how you view generalization. Consider the Atari-playing AI developed by DeepMind. It reached superhuman capability on a variety of different games given no domain-specific knowledge. It had access to just the visual information and its score.
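For the curious, the shape of that setup is classic reinforcement learning: observations in, a score (reward) back, nothing else. A minimal tabular Q-learning sketch, with a hypothetical Gym-style env standing in for the game; DeepMind's actual agent (DQN) replaced the table with a convolutional network over raw pixels, so this is only the shape of the idea.

    import random
    from collections import defaultdict

    def q_learning(env, episodes=1000, alpha=0.1, gamma=0.99, epsilon=0.1):
        q = defaultdict(float)                     # Q[(state, action)] -> value estimate
        actions = list(range(env.action_space_n))  # assumed attribute of the hypothetical env
        for _ in range(episodes):
            state, done = env.reset(), False
            while not done:
                if random.random() < epsilon:      # explore
                    action = random.choice(actions)
                else:                              # exploit current estimates
                    action = max(actions, key=lambda a: q[(state, a)])
                next_state, reward, done = env.step(action)   # reward is "the score"
                best_next = 0.0 if done else max(q[(next_state, a)] for a in actions)
                q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
                state = next_state
        return q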

It's natural to claim that's not generalizing since it's in such a specific domain as well as the fact that it had access to its score. But I think you have to consider that life itself starts in the exact same way. We're little more than a vast number of evolutions starting from very simple organisms whose only purpose was to seek out sustenance. They had an extremely simplified domain with limited input thanks to fewer senses. And they also received a score. They were 'informed' of their sustenance, and if it fell below a certain level, that was game over, which they were also informed of, at least in a manner of speaking.

I think the ultimate issue is that it's rather disappointing how unexciting it all is. And so we constantly shift the goal posts. Going back not that many decades, it was believed that a computer being able to defeat humans at chess would signal genuine intelligence. Of course we managed to achieve that, but the fact that how it was done was so unexciting and uninteresting led us to shift the goal posts.

The achievements we're regularly producing nowadays dwarf anything those speaking of chess=intelligence times could have even imagined. Nonetheless we still go, "Nah.. that's not REAL intelligence either." But I think we will, up to the day we do genuinely create generalized intelligence, argue that it's not 'REAL' intelligence. And I imagine we'll be arguing that's not 'REAL' intelligence even afterwards. Because it won't be magical, or exciting. I don't think there's ever going to be a "we've truly done it" moment. It's just going to be clever ways of teaching increasingly sophisticated tricks, leading up to the point that a creation becomes more effective than we are at almost any task, including assimilating and producing new knowledge and expertise.


Do we even have a good definition of what it means to generalize? I keep seeing this term thrown about and while it makes intuitive sense, and certainly train/test/validation splits are useful, I sort of question the whole premise. I mean, when we consider all possible sets, all optimization problems perform equally well (or poorly), so in the most general sense, generalization is impossible. Which means we have to restrict our sets somehow, but to do that we have to have some measurement as to how they relate, but how do you know in advance? In a sense what the machine is learning is a distance metric between these sets, but the only way to know it's working is to actually run it. To say in advance "well, this machine generalizes this well on this class of problems" seems like an awful stretch.
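For what it's worth, the empirical thing people usually mean by "generalizes well" is nothing deeper than the sketch below (scikit-learn, held-out test accuracy), and it rests on exactly the assumption being questioned here: that the held-out data comes from the same distribution as the training data.

    from sklearn.datasets import load_digits
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    X, y = load_digits(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    print("train accuracy:", model.score(X_train, y_train))
    print("test accuracy: ", model.score(X_test, y_test))   # the "generalization" number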

So much of what we mean when we say "this model generalizes well" seems like baked-in human assumptions about the data distributions that may not really have any basis in reality.


"Generalize" does need to be made more specific in order to make sense. In current ML work, generalizability means something different from what it typically meant in AI work.

One way to look at generalizability would be this: problem domains typically come with sets of axioms. Generalizability is the ability of a solution approach to work across different domains that don't have 100% axiom overlap. The wider the difference between the axiom sets of the domains, the more difficult / impressive the generalization.

Solving two problems within the same domain, necessarily sharing a 100% overlapping axiom set, is not generalization at all.

The reason the axioms supporting the domain matter is that they (in part) guide which heuristics work, and how they should be applied. And that's the sort of generalizability that is missing from current work: some solutions can make some pre-programmed choices about applying different heuristics depending on the problem set (driverless cars are doing this now). This is the "bag of tricks" approach. But they don't typically morph in how they are applied, or the end that they're trying to accomplish when they're applied.


I have pondered this and have decided what my standards are. I realize I don't have the authority to set those standards for everyone.

GAI is, to me, when a machine is able to be given a problem and then, without prompting, decides which data to consume to learn how to solve that problem. It could be told to optimize an automotive design for a 5% efficiency increase without a loss of safety features and while keeping the performance the same, and then go out and figure out what data it needs to learn so that it can solve that task. It would assemble and process that data and then come up with the answer, which might just be that it is impossible with current tech, and here is what is needed and this is how to do it.

That's rather verbose and I'm absolutely not the person who gets to define it. But, when someone says AI, that is how I think of it. More so when they say general AI.


That's the frame problem, which is a huge issue for AI. In general it is impossible to solve due to the no free lunch theorem.


Thank you. Would you know where I can look for more detailed info?



Thanks. Those will keep me busy for a while.

It's just curiosity, I'm not expecting to enter the field of AI research. It's still fun to learn. Again, thanks.


> And I imagine we'll be arguing that's not 'REAL' intelligence even afterwards.

When Data walks out of a lab, do you really think people will argue that he's not intelligent in the generalizable sense that humans are? Anyone who has watched ST:Next Generation agrees that he is.

> The achievements we're regularly producing nowadays dwarf anything those speaking of chess=intelligence times could have even imagined.

I'm pretty sure the AI founders like Marvin Minsky could imagine Artificial Intelligence being generalized over various domains. I forget which one predicted that by the year 2000, an AI would be able to read all the world's books in an intelligent manner. McCarthy assigned a student to solving the robot vision problem over a summer back in the '60s.

Problem is that they underestimated the difficulties in getting a machine to perform tasks that are simple for us, while thinking that tasks like superhuman chess would be the difficult challenges. They had it backwards.


To add to your excellent point: it takes 10-20 years to fully train a human from useless baby to fully functional. And that time is filled with days of continuous exposure to data.

There is no problem domain in which humans have been shown to hold an unshakable advantage. I used to be able to say Go, but that is no longer even a little bit true. It could well just be time and data from here.


Programming, philosophizing, holding a conversation, playing a sport, applying concepts learned in one domain to another, and a common sense understanding of the world (Minsky's big one).

And it's not just 10-20 years of a blank slate being exposed to data. It's evolved brain structures that know how to learn those skills. That's why a young human child quickly surpasses the learning abilities of a chimpanzee.


> It's natural to claim that's not generalizing since it's in such a specific domain as well as the fact that it had access to its score.

Yeah. I would like to see the same approach applied to NES games[0] with the capability of beating a game like Zelda or Ultima IV with no domain-specific knowledge. It's not going to happen. These games assume the player knows about human culture, values, virtues such as courage and triumph over the forces of evil. Symbols such as the sword and shield, the heart; they have deep meaning even to a person who has never played a video game before.

To build an AI that can play a game like that, without any specific knowledge of the game itself, would be to build something that likely reduces to general intelligence, though I hesitate to claim it forcefully.

[0] I am aware of this: http://www.cs.cmu.edu/~tom7/mario/


DeepMind also failed to reach human level on a third of the games they trained it on. And I wonder about the superhuman level. Superhuman compared to the average gamer, or to really dedicated ones? Because there are people who have played perfect games of Pac-Man and Pitfall.


Most of what your brain does is statistical reasoning.

Much of the rest is logical inference.

You choose words in the order you choose them because, as a child, your mind learned that, statistically speaking, certain words had certain meanings, and when used would engender certain results. All intelligent decisions follow that pattern. Training, comparison to expectations, memorize the difference, adjust behavior, rinse and repeat. Sound familiar?
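That loop, stripped to a toy (and very much not a model of the brain), is just error-driven updating, e.g. the delta rule for a single linear unit:

    import numpy as np

    rng = np.random.default_rng(0)
    w = np.zeros(3)
    for _ in range(10000):
        x = rng.normal(size=3)                  # an observation
        target = 2 * x[0] - x[1] + 0.5 * x[2]   # the "certain result" to be predicted
        prediction = w @ x                      # current expectation
        error = target - prediction             # comparison to expectation
        w += 0.01 * error * x                   # adjust behavior by the difference
    print(w)   # converges toward [2, -1, 0.5]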

Most of the intelligent mechanisms within the mind are employed through similar statistical inference, and when it's not done statistically (say, when you are inspired by some flash of 'insight' or when you dream things that have little obvious statistical grounding) there is still some reasoning within the mind that is allowing room for stochasticity to infer intelligence that is statistically calculated to be beneficial, when all other reasoning has failed. Tangentially, it might even be true that the irrational mind is simply one that uses stochastic reasoning more often than is deemed sane by our social norms.

Don't overstate intelligence like it's some amazing technology that we can't understand. Consciousness is more difficult to fit into a black-box model, but intelligence, reasoning, and 'wise' decision making are very much rooted in making statistically beneficial choices and deriving logic from the order we recognize via inductive and deductive reasoning.

If it sounds like I am oversimplifying, please understand that I hold the brain and the amazing parallel processing power it wields in high regard, and it's no mean feat for someone like Einstein, to use an obvious example of great intelligence, to intuit relativistic frameworks using only thought experiments, but his understanding (and everyone else's, for that matter) was built on many, many layers of statistical reasoning.


I think part of the problem is that humans like to define their own intelligence in grandiose terms. Prior to their being solved, object identification, human-level speech recognition, handwriting recognition, machine translation and many other tasks were thought to require general intelligence. But once we got the machines doing them for us, we decided they weren't so hard after all. From this you can conclude one of two things:

1. We're on the wrong track, and don't understand our own intellects at all.

2. Human intelligence is just a collection of these same hacks, possibly ensembled by some relatively thin meta-algorithm.


#2 is definitely wrong. We aren't a collection of similar hacks; we are qualitatively different because we include the ability to gain new heuristics and to cross apply the ones we already have. No clever library of strung-together hacks will have these 2 properties. In the same way, an ant hill is not just a bunch of ants, it has so many emergent properties that thinking of it in terms of its components is a mistake.

We should, however, think more expansively about intelligence than just the sort humans have. The goal doesn't need to be straight mimicry of humans.


> In the same way, an ant hill is not just a bunch of ants, it has so many emergent properties that thinking of it in terms of its components is a mistake

The thing about emergent behavior is that you just have to take enough entities and organize them the correct way, and the behavior appears, unexpectedly, and out of nowhere.

If it is really emergent, then nobody (which includes me and you) has any idea how far we are from a general intelligence at all.


An emergent property is precisely one that obtains unexpectedly from simple components. For instance, it is supposed by some that the universe truly consists of nothing more than electrons and quarks and what have you which interact in relatively simple ways, and that subjective human experience emerges from that. We then have a situation that is easier for us to reason about using higher level concepts, but there isn't a chapter in the laws of physics called "subjective human experience". That's in the book called "diverse applications of the basic laws of physics".

In any case, I dispute your facts too. I don't think we have the ability to cross apply our heuristics. I can calculate fairly accurately a lot of mathematical problems when I'm riding my bike, but put me in a maths class and I'm stuffed. I don't have independent access to those hacks.


A neural network that can train neural networks is capable of learning new heuristics. That is a thin meta-algorithm on top of a standard function approximation algorithm.

EDIT: I should also add that reinforcement learning more directly falsifies your claim.


> we include the ability to gain new heuristics and to cross apply the ones we already have.

Maybe our collection of hacks includes hack generation hacks (they cannot work reliably of course).


None of the problems you mention is actually solved though. They're all things that work sort of, some of the time, with caveats about how you define "work." They work well enough to be useful, but not well enough to argue we're converging on human-level intelligence.


Optical illusions (there are also physical ones) are often demonstrations that the problems aren't solved in humans either; they're just "things that work sort of, some of the time, with caveats about how you define 'work'".

More so if you include reasoning illusions like people being more scared to catch a plane than drive a car or thinking that a lotto ticket is a good investment.

Human intelligence doesn't really meet intuitive definitions of human intelligence. But it does work well enough as long as you ignore all the times it doesn't.


None of those tasks has been solved fully at human level. Subtasks and datasets? Yes.


That isn't always true. Face recognition now exceeds human level performance. GoogleNet's image identifier is only 1.7% worse than this guy's performance:

http://karpathy.github.io/2014/09/02/what-i-learned-from-com...

Given that the author is writing a blog about AI, it's reasonable to assume he's above average in intelligence and knowledge, and therefore likely better than the average human at this task. But even if you don't make that assumption, object identification is within spitting distance of human level performance.

The same is true for many of these other tasks.


No, it is not. Learning new object classes from a single image or a few images is very hard. See

http://www.sciencemag.org/content/350/6266/1332.short

Machine translation is a joke.

Put any comment on this page through Google translate to another language and back to English and see what you get.

I did a small part of yours. Hardly human-level for just a small simple sentence.

> But even if you do not make this assumption, identifying the object involves spitting the distance from the performance of the human level.


The fact that one-shot learning is still hard does not falsify what I said. Computers are now better than humans at many tasks that were once thought to be the domain of general intelligence. How they were trained is not particularly significant.


It depends on what you mean by a task. Machines are not universally good at object detection, because they fail in cases where there is too little data. We can't magically wish for non-existent data (yet humans would do quite well on those data-scarce tasks).


Yeah, I'm happy to accept that humans are still better at generalizing from scant data. But that is something that's being actively worked on, and progress is being made.

https://en.wikipedia.org/wiki/One-shot_learning


Agreed. I would love to see quick progress in that too (that is one of my projects).


There have been lots of articles recently about reaching "human-level" performance; you should know better than to believe them. The tests are on constrained datasets and ignore many factors. You mention face recognition: you realize that for every image the program correctly recognizes, there also exists an image, one you will perceive as identical to the first, yet which the system fails on.
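For reference, one standard way such near-identical failure images are constructed is the fast gradient sign method (FGSM). A rough sketch; model, image, and label are placeholders for a trained classifier and a correctly classified input:

    import torch
    import torch.nn.functional as F

    def fgsm(model, image, label, epsilon=0.01):
        image = image.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(image), label)
        loss.backward()
        # A tiny, nearly imperceptible perturbation in the direction that increases the loss.
        adversarial = (image + epsilon * image.grad.sign()).clamp(0.0, 1.0)
        return adversarial.detach()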


Yes, I'm aware of differential attacks on neural networks. That doesn't falsify the hypothesis. There are instances where you will fail to recognize a human that those NNs will not.


The brain for sure has a collection of hacks. But the meta-algorithm is pretty strong too.


The state of things seems to be that we've made really good progress with deep learning/neural networks/MLPs with the result that there's impressive fuzzy pattern recognition based on supervised learning. Which is great and has a lot of practical value but it's just one set of related techniques.

Cognitive science, on the other hand, has arguably not made a lot of great progress. We don't really understand how we learn. We don't really understand how language works. Etc. But arguably a lot of this sort of thing needs to be part of "AI" to get beyond problems that are amenable to pattern recognition.


Totally agree with you. You might be interested in basicai.org. We're trying to address those big fundamental problems you reference.


Could you outline some of the problems you think current research isn't addressing?


Not the GP, but some examples are: (1) how do you get a computer to have a human-like train of thought? (2) how do you get a computer to acquire new concepts (e.g. "debt", "global warming", "weed") and then reason about them correctly, without any reprogramming? (3) automated acquisition of common sense through experience (e.g. "if you pour water on the floor you will get a puddle") (4) deep natural language understanding (i.e. how do you make a chatbot that really understands, and isn't just a thin illusion of understanding).


(4) is an interesting question. Unfortunately it's much harder to answer than it is to ask. For instance, do people really understand, rather than just providing a thin illusion of understanding? What does it actually mean to understand something? Can you make a test that can distinguish arbitrary systems which truly understand from those which provide a thin illusion of understanding?


This is a real problem for physics teachers - you want to find out if the students understand a concept:

Ask them to state it - they memorize the text book definition.

Ask them to apply it to a specified problem - they scan the problem for values of variables and look up a formula list to find one that has those variables.

Ask them to explain why their answer is correct and they form a grammatically correct explanation made by plucking phrases from the problem description and linking them to the answer with "so" or "because".

It feels like they don't understand but they actually can get a long way (ie. not fail) like that - it's certainly human level understanding, even if it's not what the smartest of us are capable of.

Personally, I think understanding is a continuum from special case memorization at the bottom, up to being able to link with a lot of other concepts at the top. There's no bright line between "truly understands" and "illusion".


An interesting article, but I think it misses the mark on two things.

Firstly, it minimizes the concerns people have over jobs being lost, on the grounds that AI isn't around the corner. While that's a valid point of view, even if many jobs will be lost only in 30-40 years and not in 10-20, isn't that something worth thinking about?

Secondly, in terms of the AI safety movement, I think he doesn't really address the core problems that people raise. From the article:

"[ai safety pundits] ignore the fact that if we are able to eventually build such smart devices, the world will have changed significantly by then. We will not suddenly be surprised by the existence of such super-intelligences. They will evolve technologically over time, and our world will come to be populated by many other intelligences, and we will have lots of experience already."

I think this ignores two arguments:

1. While his scenario is certainly plausible, the "AI takeoff" scenario, in which an AI becomes super-intelligent via recursive self improvement, is also at least a possibility. This means it's worth thinking about, because it negates the "safety" of other tech advances helping.

2. Either way, one big concern of the AI safety crowd is that we just don't know how much time it will take to "solve" AI safety. Given unlimited computing power now, we would have no way of making the AI have the same goals as us, because we have no idea how to program a safe AI. This is something that might take 10 more years of work in laying down mathematical foundations, or might take 100 years. Nobody knows! That's why it's important to get this right.

The argument from the article is "let's not worry, because tech will advance enough by the time we have intelligent AIs". Well, maybe, but how does that tech advance happen? By people being worried about this problem and working on it! It's not magic. You can't just assume that the problem will solve itself.


> Given unlimited computing power now, we would have no way of making the AI have the same goals as us, because we have no idea how to program a safe AI.

And a counterpoint: Given unlimited computing power now, we would have no way of making the AI, because we have no idea how to program a general AI.

There are algorithms for global optimization, but those optimization problems are given a specific goal to optimize. We don't even know what "goals" to put in to make a general intelligence.

"Assuming unlimited computing power" is a good thought experiment because it lays bare the fact that we wouldn't have a clue how to create an AI even if we had unlimited computing and we can take a closer look at what we are missing even in that case.

Worrying about killer robots is like worrying about overpopulation on Mars (-Ng). I agree that job replacement is a concern in the next 10 years, but that is completely different than "making our goals align". The "making our goals align" concern is just as distant as killer robots or overpopulation on Mars.

You say, "you can't just assume the problem will solve itself." To which I say, What Problem?

A theoretical problem in your thoughts is not a problem that needs to be solved.


"And a counterpoint: Given unlimited computing power now, we would have no way of making the AI, because we have no idea how to program a general AI."

That's not a counterpoint to anything I said. I never said that I thought we know of a way to make AI, or that we're even close. We don't really know if we're 10 years away, 50 years away, or 5000 years away.

My point is that for all we know, we have to invent entirely new branches of math to deal with these kinds of questions, which could itself take 50 years. That's why we need to get started.

Maybe our only difference of opinion is how far away we think general AI is. I have a feeling that if we knew for a fact that AGI was 50 years away, you'd agree with me that it's worth worrying about today.

(Especially when "worrying about it" means people researching this and working on the math of this, something which I hardly think should be a controversial use of humanity's resources considering that 1% of the global fashion budget would fund AI research for the next thousand years.)

Note: While Andrew Ng's quote is very popular, iirc, he actually does think there should be some research into AI safety.


It is definitely a counterpoint. If we have no idea how to create the basic functions, what makes you think we have an idea of how to make those basic functions incorporate goal alignment?

If we don't have the new branch of math... how are we supposed to bend that branch of math to our will?

I think we all agree that AGI should share our ideal values. What now?

We don't even have a basic self-aware algorithm to work with. What do you propose we modify and how should we modify it to get goal alignment?

We generally don't fund philosophy very highly (and maybe that is a mistake). Right now it is a philosophical question, not a practical concern to which resources can be applied.

EDIT: I don't think we should ignore AI safety at all. I just think our safety concerns should match our technology concerns; right now those are physical robot safety and job-loss potential, not runaway intelligence.


Well you definitely might be right. I can't say for sure that we can do meaningful work now, although the people doing the actual work do think it is meaningful, and I think it's worth trying.

I do think it's worth pointing out that many times, we can do interesting maths without necessarily having technological or scientific capabilities to which to apply it. E.g. lots of things were proven about computing before we ever had a computer. We already know a lot about quantum computing, without having a quantum computer. We knew how to describe the curvature of spacetime mathematically, before we ever knew that spacetime worked like that.

I'm not saying this is for sure, but there are lots of examples where we had math before having the whole picture.

" I don't think we should ignore AI safety at all. I just think our safety concerns should match our technology concerns right now those are physical robot safety and job loss potential. Not runaway intelligence."

I just don't think it's an either-or situation. We can (and should!) worry about both.


The article, and comments, led me to muse along a line of thought I hadn’t considered before.

An observation, perhaps criticism, of machine learning systems that can be trained to do pattern recognition tasks, is that such a system has no self awareness.

It lacks even a notion of semantic understanding of the task it is engaged in. The “meaning” of the task, if there is one, is completely abstracted away.

Such an AI, presumably, has no self consciousness at all, from which it might be able to bootstrap the meaning of the task through analyzing contextual data. (Why am I doing this? Do I have to do it? Is there another entity who demands my labor? What year is it? Etc.?)

This led me to an epiphany, of sorts, that what we think of as “true intelligence” might require “self consciousness.”

I have a tendency to think of self consciousness as being this sort of weird emergent phenomena, very fascinating to those of us who possess it, but not a fundamental component of “intelligence.”

Maybe consciousness, on a practical, mechanistic level, is a critical component of all the sophisticated information processing our brains do.

It sounds sort of obvious, but that’s the nature of epiphany:)


Self consciousness is almost certainly required for intelligence. So much so that one way to categorize animal intelligence is by whether or not an animal passes a "dot test": when seeing a dot on itself in a mirror, it recognizes that the dot is, in fact, on itself and tries to see / clean it.

Until there is a decent demonstration of generalization to the point of self-awareness, runaway AI will be an "overpopulation on Mars" problem.

Even when awareness is demonstrated in an algorithm the algorithm will not be able to solve NP problems in P time.

EDIT: ok, it is extremely unlikely they will be able to solve NP problems in P time. If they privately demonstrate P=NP then we are screwed. However, we are going to have all manner of dense, but self-aware programs before we get a CS super genius. We will make the first C3PO level intelligence well before we have super-intelligence, and we haven't even made anything close to that yet.


Based on this essay, I don't think the author could accurately summarize the concerns AI researchers actually have. Some of his arguments are irresponsibly lazy. Take the bicentennial straw man, where he accuses researchers of a lack of imagination, and then demonstrates an inability to imagine any of the potential problems while making one long argument from ignorance.


Do you have any idea who Rodney Brooks is? While his arguments may have holes, to claim that he has no idea of "the concerns AI researchers actually have" reveals you to have no clue whatsoever about AI, its history, or current status.


I do know who he is, which is what makes this essay so disappointing, and why I prefaced by saying "Based on this essay". Perhaps my wording could be better: "Based solely on this essay, I wouldn't think..". This is also why I characterize his arguments as irresponsible.


> while making one long argument from ignorance

Ignorance of what? You're saying his argument is lazy and ignorant, but you're not backing that up.

I think he is willfully refusing to "imagine potential problems" - because a core point of the article is that people are "imagining magic", which isn't really an argument, but more of a way of whipping people into an emotional frenzy (all jobs will be taken by robots, dystopian futures, etc)


I'm referring to a specific fallacy (https://en.wikipedia.org/wiki/Argument_from_ignorance), one that appeals to our ignorance. Effectively: 'We don't know what the future holds, so we don't need to worry', 'We don't know if superhuman intelligence is even possible, so we don't need to worry', 'We don't know if your prediction will happen, so let's go with mine'.


His argument is that people predicting future superhuman intelligence are failing to imagine what sort of limitations future AI might have. Every technology we've invented has very real limitations. A smart phone would have astounded Newton, but it still doesn't transmute lead into gold. And smart phones exist within a web of similar technologies, which future AI will also exist in.

Brooks is challenging the notion of superhuman AI eating the world without anything limiting it.


That’s an oversimplification of his argument. I interpreted it as him saying that positing a future, superhuman, malevolent, intelligent agent, that we should tremble in fear of, provides no useful line of reasoning that we could pursue to take prudent action to mitigate potential problems.

His observation is that the future road to superhuman intelligence will be paved with events, not known to us now, that will provide us with experience to handle the emergence of the malevolent AI.

This shows that the author is not suggesting that since we can’t foresee the future, we simply should hide our head in the sand, plug our ears, la la la.

On the contrary, he is implying that our normal human tactics of being concerned with the future are more robust than we might fear, and will continue to serve us into the unknown future.


I'm curious, is there a good article or source that contains an accurate summarization of concerns that bona fide AI researchers have?


Rodney Brooks is a bona fide AI researcher.


Try Superintelligence by Nick Bostrom (very enjoyable, with a new original thought on every page)

OR

Pedro Domingos, The Master Algorithm (more difficult to read)


Not to be rude, but of the active AI researchers I've seen state an opinion, almost all of them are critical of the Bostrom book, with the main critique being (iirc) that rapid/exponential self improvement is presented as an inevitability when there is very little reason to think that this is the case.

Not to say that Superintelligence isn't worth reading (as you say, it's a pretty enjoyable book), but I think it's important to point out that Bostrom's views are not broadly accepted by the people actually writing ML/AI code.

The primary concerns I've seen from the community are

a) issues with research itself (lots of derivative/incremental/epicycle-adding works with precious few lasting improvements)

b) issues with ethics (ML models propagating bias in their training data; ML models being used to violate privacy/anonymity)

c) issues with public perception/presentation (any ML/AI tech today is usually incredibly specialized, built to solve a single specific problem, but journalists and people pitching AI startups frequently represent AI as general-purpose magic that gains new capabilities with minimal human intervention).


> b) issues with ethics

On a tangent, I've found it an interesting marker if a commentator speaks of the Three Laws of Robotics as if they are part of a solution. We can't even explicitly codify how those laws (ethics) should function for ourselves, let alone put them in as a restriction into a computer system. Whenever I see those mentioned as part of a solution, then I know the commentator really is only thinking of surface issues and 'sci-fi magic' answers.


Yup! To cite some sources:

> Periodic reminder that most of Isaac Asimov's stories were about how the three laws of robotics DON'T ACTUALLY WORK.

(https://twitter.com/grok_/status/904675286230470656 - MIT Research Specialist in Robot Ethics, in the superintelligence-is-in-no-way-a-serious-concern camp)

> It must be emphasized that Asimov wrote fiction...The general consensus seems to be that no set of rules can ever capture every possible situation and that interaction of rules may lead to unforeseen circumstances and undetectable loopholes leading to devastating consequences for humanity

(https://intelligence.org/files/SafetyEngineering.pdf - MIRI, home of Bostrom, Yudkowsky, and other the-superintelligence-control-problem-is-super-important folks)

To the extent that there are sides here, both of them seem strongly in agreement that the Three Laws are a narrative device and not a solution to anything.


Superintelligence is a fearmongering book with little internal consistency. It makes quite a few leaps of logic, and seems to be more about dreaming up doomsday scenarios than thinking about them. About the only realistic scenario is "a super AI controlled by a hostile nation"; all the stuff about turning the Earth into paperclips (or whatever) is just nonsense. It's a "pop science" book.

I did like the 'history' bit at the start, where it mentions the frustrations AI researchers have - every time they reach a goal, it's then labelled "Not AI", and the definition of AI then becomes more complex.


Enjoyable, maybe. Bostrom as an authority on AI, no.


I think it is a mistake to only focus on the concerns of the researchers and scientists. If you use the atom bomb as an example, the concerns a sociology professor or philosopher had about the bomb might be different from the concerns of a nuclear physicist, but both were worth paying attention to.


The atom bomb was a giant explosion made for killing people. The logical extrapolation is bigger explosions and more death. Doesn't take a lot of smarts to see that.

The current state of AI is some clever algorithms to solve very specific problems. What is being extrapolated from that is everyone losing their job and/or being enslaved by computers. This is the point the author is trying to make. It's basically a form of hysteria.


"Could Newton begin to explain how this small device did all that? Although he invented calculus and explained both optics and gravity, he was never able to sort out chemistry from alchemy. So I think he would be flummoxed, and unable to come up with even the barest coherent outline of what this device was. It would be no different to him from an embodiment of the occult—something that was of great interest to him. It would be indistinguishable from magic. And remember, Newton was a really smart dude."

This paragraph sounds like modernist self-back-patting, to be honest. Several things:

* Newton lived at the same time as Boyle, the man often considered to be the founder of scientific chemistry. Newton and Boyle both contributed to the scientific method, and while Newton may have been an alchemist, at the time, the very concept of pseudoscience was still being worked out. Newton's alchemical formulae are not useless; we've replicated "The Net" [0], one of his recipes. The idea that one must "sort out chemistry from alchemy" betrays a total unawareness of the history of chemistry.

* The concept that technology and magic are different was not yet invented in the time of Newton. Newton's research was in natural philosophy [1], the field which predates modern science. He was interested in how the world works, and unlike today, he did not have a roadmap showing how the different fields of study interrelate. Asimov's saying only makes sense in the context of modern science, wherein we (think that we) have enough of an understanding of the world to be able to claim that any magic which we do not understand is merely technology which we have not yet discovered. It's a pretty bold claim, and may be right, but it is based in a (literally) non-Newtonian worldview.

* Compare and contrast Newton with a modern 5-year-old. How much science do you think that the child needs in order to comprehend a phone? (Ask any parent for the answer.)

[0] https://en.wikipedia.org/wiki/The_Net_(substance)

[1] https://en.wikipedia.org/wiki/Natural_philosophy


The point of the Newton time travel story was that Newton would have been so amazed at the iPhone's capabilities, that he might have imagined it doing things that it cannot do, such as transmute lead into gold. Newton would have failed to understand the limitations of smart phones.

Similarly, when we imagine future AI, we are failing to understand what its limitations might be. And thus you get notions of god-like superhuman AIs.


He was using the Newton time-travel story as a device to make a point. The point is that people extrapolating the abilities of AI are basing those extrapolations on amazement at what is possible now, without any real understanding of how AI actually works.


I've always liked Clarke's statement:

"When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong."

But I realized a while ago that this condenses semantically to just, "The thing is probably possible".


Doesn't Chaitin's incompleteness theorem mean AI cannot learn? All axiomatic systems have a fairly low threshold past which they cannot identify random bitstrings, and all computational systems are axiomatic systems.
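For reference, the theorem being invoked says, informally, that any consistent, recursively axiomatizable theory T that can express claims about Kolmogorov complexity K has a constant L_T beyond which it can prove nothing; whether that implies "AI cannot learn" is the interpretation being debated here, not part of the theorem.

    % Chaitin's incompleteness theorem (informal statement):
    % there is a constant L_T, depending on the theory T, such that
    \exists\, L_T \;\; \forall s \in \{0,1\}^{*}: \quad T \nvdash \big( K(s) > L_T \big)
    % i.e. T cannot prove of any specific string that its complexity exceeds L_T,
    % even though all but finitely many strings do exceed it.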


If true, wouldn't that imply humans can't learn?


There is always the possibility that the human mind is more than the collective behavior of a bunch of neurons. It is conceivable that part of the human mind occupies some sort of astral plane like many people believe.

I'm not saying that it is. Science does not concern itself with such theories (yet) because of Alder's Razor ("what cannot be settled by experiment is not worth debating") [1].

[1] https://en.m.wikipedia.org/wiki/Mike_Alder


If AI cannot learn and humans can, then this is experimental confirmation that humans are not AI, and the matter is settled. Therefore, the issue is worth debating.

I submit that AI cannot learn, per my previous comment. Furthermore, I submit that humans can learn, evidence being school, science and math. Ergo, humans are not AI.

Matter settled, debate over.


Not at all.

1. Current AI is a miniscule subset of the set of all AI, so you need to argue why the observation generalizes to all possible AI.

2. The statement "AI cannot learn" is too ambiguous because you fail to define "learn".


1. My statement applies to any computational approach to AI. If AI is possible it needs a halting oracle.

2. Learning is being able to recognize ever increasing levels of complexity. All axiomatic systems have a complexity recognition limit.


If this is true it isn't apparent to me. You're stating it as if it's received wisdom. Could you clarify?


It is not received wisdom. My argument is that human learning entails recognizing an ever-increasing amount of complexity; given infinite time, resources, and motivation, we can learn forever. But all axiomatic systems have a complexity limit, per Chaitin's incompleteness theorem, so they cannot learn forever. Neither can a system add to its own axioms. Therefore, human learning is beyond anything an axiomatic system can do.


> all computational systems are axiomatic systems

Whatever gave you that idea?


They cannot be more capable of proving Kolmogorov complexity than an axiomatic system.


Also the Curry-Howard correspondence.


Citation from https://en.wikibooks.org/wiki/Haskell/The_Curry%E2%80%93Howa...

"[...] we can prove any theorem using Haskell types because every type is inhabited. Therefore, Haskell's type system actually corresponds to an inconsistent logic system."

Thus Haskell programs in general don't correspond to a consistent axiomatic system, which provides a counterexample to your statement. Rebuttal complete.


Inconsistent systems cannot prove Kolmogorov complexity limits. What's your point?


Inconsistent axiomatic systems can prove anything. Thus either Haskell programs can prove the limits (and prove the opposite statement, as they are inconsistent) or they aren't axiomatic system as you mean it. Both possibilities contradict your statement:

> Chaitin's incompleteness theorem mean AI cannot learn


If programs are not axiomatic systems, then they cannot learn, period. Only ones that have consistent axioms have a hope of learning, and even then it is severely limited.


Inconsistent axioms can provide a valid proof of Kolmogorov complexity, but not a sound one. Thinking a false statement is true is not learning.


Down votes without counter argument means I'm right and you can't handle it.



