
An interesting article, but I think it misses the mark on two things.

Firstly, it dismisses people's concerns about job losses on the grounds that AI isn't around the corner. That's a valid point of view, but even if many jobs will be lost only in 30-40 years rather than in 10-20, isn't that still worth thinking about?

Secondly, on the AI safety movement, I don't think he really addresses the core concerns people raise. From the article:

"[ai safety pundits] ignore the fact that if we are able to eventually build such smart devices, the world will have changed significantly by then. We will not suddenly be surprised by the existence of such super-intelligences. They will evolve technologically over time, and our world will come to be populated by many other intelligences, and we will have lots of experience already."

I think this ignores two arguments:

1. While his scenario is certainly plausible, the "AI takeoff" scenario, in which an AI becomes super-intelligent via recursive self-improvement, is also at least a possibility. That means it's worth thinking about, because a fast takeoff negates the "safety" we'd otherwise get from other tech advances keeping pace.

2. Either way, one big concern of the AI safety crowd is that we just don't know how much time it will take to "solve" AI safety. Given unlimited computing power now, we would have no way of making the AI have the same goals as us, because we have no idea how to program a safe AI. This is something that might take 10 more years of work in laying down mathematical foundations, or might take 100 years. Nobody knows! That's why it's important to get this right.

The argument from the article is "let's not worry, because tech will advance enough by the time we have intelligent AIs". Well, maybe, but how does that tech advance happen? By people worrying about this problem and working on it! It's not magic. You can't just assume the problem will solve itself.




> Given unlimited computing power now, we would have no way of making the AI have the same goals as us, because we have no idea how to program a safe AI.

And a counterpoint: Given unlimited computing power now, we would have no way of making the AI, because we have no idea how to program a general AI.

There are algorithms for global optimization, but those algorithms have to be given a specific objective to optimize. We don't even know what "goals" to put in to get a general intelligence.
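
To make that concrete, here's a minimal sketch (Python with scipy, purely illustrative): a global optimizer like differential evolution will search a landscape just fine, but only after a human has written down the exact objective it should minimize. Nobody knows what objective to write down for "be generally intelligent", let alone "be generally intelligent and share our values".

    # Purely illustrative (assumes Python with numpy/scipy installed).
    import numpy as np
    from scipy.optimize import differential_evolution

    def objective(x):
        # Rastrigin function: a standard global-optimization benchmark.
        # The algorithm can search this landscape, but a human had to
        # write the objective down first.
        return 10 * len(x) + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

    bounds = [(-5.12, 5.12)] * 2
    result = differential_evolution(objective, bounds, seed=0)
    print(result.x, result.fun)  # close to the global minimum at the origin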

"Assuming unlimited computing power" is a good thought experiment because it lays bare the fact that we wouldn't have a clue how to create an AI even if we had unlimited computing and we can take a closer look at what we are missing even in that case.

Worrying about killer robots is like worrying about overpopulation on Mars (to borrow Andrew Ng's line). I agree that job replacement is a concern in the next 10 years, but that is completely different from "making our goals align". The "making our goals align" concern is just as distant as killer robots or overpopulation on Mars.

You say, "you can't just assume the problem will solve itself." To which I say, What Problem?

A problem that exists only in theory is not a problem that needs solving yet.


"And a counterpoint: Given unlimited computing power now, we would have no way of making the AI, because we have no idea how to program a general AI."

That's not a counterpoint to anything I said. I never said that I thought we know of a way to make AI, or that we're even close. We don't really know if we're 10 years away, 50 years away, or 5000 years away.

My point is that, for all we know, we'll have to invent entirely new branches of math to deal with these kinds of questions, and that could itself take 50 years. That's why we need to get started.

Maybe our only difference of opinion is how far away we think general AI is. I have a feeling that if we knew for a fact that AGI was 50 years away, you'd agree with me that it's worth worrying about today.

(Especially when "worrying about it" means people researching the problem and working on the underlying math, which I hardly think is a controversial use of humanity's resources, considering that 1% of the global fashion budget would fund AI research for the next thousand years.)

Note: while Andrew Ng's quote is very popular, IIRC he does actually think there should be some research into AI safety.


It is definitely a counterpoint. If we have no idea how to create the basic functions, what makes you think we have any idea how to make those basic functions incorporate goal alignment?

If we don't have the new branch of math... how are we supposed to bend that branch of math to our will?

I think we all agree that AGI should share our ideal values. What now?

We don't even have a basic self-aware algorithm to work with. What do you propose we modify, and how should we modify it to get goal alignment?

We generally don't fund philosophy very highly (and maybe that is a mistake). Right now it is a philosophical question, not a practical concern to which resources can be applied.

EDIT: I don't think we should ignore AI safety at all. I just think our safety concerns should match our technology; right now those are physical robot safety and potential job loss, not runaway intelligence.


Well, you definitely might be right. I can't say for sure that we can do meaningful work now, although the people doing the actual work do think it's meaningful, and I think it's worth trying.

I do think it's worth pointing out that we can often do interesting math without yet having the technological or scientific capabilities to apply it. Lots of things were proven about computation before we ever had a computer. We already know a lot about quantum computing without having a quantum computer. We knew how to describe the curvature of spacetime mathematically before we ever knew that spacetime actually works that way.

I'm not saying this will definitely be the case, but there are lots of examples where we had the math before having the whole picture.

" I don't think we should ignore AI safety at all. I just think our safety concerns should match our technology concerns right now those are physical robot safety and job loss potential. Not runaway intelligence."

I just don't think it's an either-or situation. We can (and should!) worry about both.



