
"And a counterpoint: Given unlimited computing power now, we would have no way of making the AI, because we have no idea how to program a general AI."

That's not a counterpoint to anything I said. I never claimed that we know how to make an AI, or that we're even close. We don't really know if we're 10 years away, 50 years away, or 5000 years away.

My point is that for all we know, we have to invent entirely new branches of math to deal with these kinds of questions, which could itself take 50 years. That's why we need to get started.

Maybe our only difference of opinion is how far away we think general AI is. I have a feeling that if we knew for a fact that AGI was 50 years away, you'd agree with me that it's worth worrying about today.

(Especially when "worrying about it" means people researching the problem and working on the math of it, something I hardly think should be a controversial use of humanity's resources, considering that 1% of the global fashion budget would fund AI research for the next thousand years.)
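(A rough back-of-the-envelope for that last figure, using my own assumed numbers rather than anything from the thread: global fashion spending on the order of $2 trillion per year, and dedicated AI safety research on the order of $20 million per year.)

    0.01 \times \$2\,\text{trillion} = \$20\,\text{billion}

    \frac{\$20\,\text{billion}}{\$20\,\text{million}/\text{yr}} = 1000\ \text{yr}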

Note: While Andrew Ng's quote is very popular, if I recall correctly he actually does think there should be some research into AI safety.




It is definitely a counterpoint. If we have no idea how to create the basic functions, what makes you think we have any idea how to make those functions incorporate goal alignment?

If we don't have the new branch of math... how are we supposed to bend that branch of math to our will?

I think we all agree that AGI should share our ideal values. What now?

We don't even have a basic self-aware algorithm to work with. What do you propose we modify, and how should we modify it to get goal alignment?

We generally don't fund philosophy very highly (and maybe that is a mistake). Right now it is a philosophical question, not a practical concern to which resources can be applied.

EDIT: I don't think we should ignore AI safety at all. I just think our safety concerns should match our technology. Right now, those concerns are physical robot safety and the potential for job loss, not runaway intelligence.


Well, you definitely might be right. I can't say for sure that we can do meaningful work now, although the people doing the actual work do think it is meaningful, and I think it's worth trying.

I do think it's worth pointing out that we can often do interesting math without yet having the technological or scientific capabilities to apply it. For example, lots of things were proven about computing before we ever had a computer (sketched below). We already know a lot about quantum computing without having a quantum computer. And we knew how to describe the curvature of spacetime mathematically before we ever knew that spacetime worked like that.
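(To make the first example concrete, here is the core of Turing's 1936 diagonal argument, proved a decade before any general-purpose computer existed; the notation here is mine. Suppose a procedure H(p, x) could decide whether program p halts on input x, and define:)

    D(p) := \begin{cases} \text{loop forever} & \text{if } H(p, p) = 1 \\ \text{halt} & \text{if } H(p, p) = 0 \end{cases}

    D(D)\ \text{halts} \iff H(D, D) = 0 \iff D(D)\ \text{does not halt}

The contradiction shows that no such H can exist: a hard theorem about computers, proved with pen and paper alone.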

I'm not saying this is for sure, but there are lots of examples where we had math before having the whole picture.

" I don't think we should ignore AI safety at all. I just think our safety concerns should match our technology concerns right now those are physical robot safety and job loss potential. Not runaway intelligence."

I just don't think it's an either-or situation. We can (and should!) worry about both.



