
Smart is just shorthand for a complicated bundle of lower-level abilities (domain knowledge, raw computational speed, and other things), yes. I don't think we're really disagreeing about this. However, I do worry that you're conflating the existing constraints on the human brain (where people seem to face tradeoffs between, say, charisma and mathematical ability) with constraints that would apply to all possible brains.

But are you denying that there exists some factor that lets you manipulate the world, with effect roughly proportional to the time you have? If something can manipulate the world on timescales much faster than humans can react to, what makes you think humans would have a choice?




I sort of am questioning that factor, yes. Stipulate, I don't know, several orders of magnitude of intellectual superiority. Stipulate that human intelligence can be captured in silico, so that when we talk about "intelligence", we are using the most charitable possible definition for the "fear AI" argument.

What precise vectors would be available for this system to manipulate the world to harm humans?


A typical example, which I don't really like, is that it gains some insight into biology that we don't have (say, a much faster way of working out protein folding). It could then mail a letter to some lab instructing a technician to prepare a mixture that would create either a deadly virus or a bootstrapping nanomachine factory.

Another one is that perhaps the internet of things is in place by the time such an AI is possible, at which point it exploits the horrendous lack of security on all such devices to wreak havoc, or turns them into stealth miniature factories that make more devastating things.

I mean, there's also the standard "launch ALL the missiles" answer, but I don't know enough about the cybersecurity of missile systems. A more indirect way would be to persuade world leaders to launch them, e.g. show both Russian and American radar operators that the other side is launching a pre-emptive strike while knocking out other forms of communication.

I don't like thinking about this, because people say this is "sci-fi speculation".


Isn't that a little circular? Should we be concerned about insufficient controls on bio labs? Yes. I am very concerned about that. Should we be concerned about proliferation of insecure networked computing devices? Yes. I am very concerned about that. Should we be concerned about allowable inputs into missile launch systems? Yes. I am very concerned about that.

But I am right now not very concerned about super-AI. I assume, when I read smart people worrying about it, that there's some subtlety I must be missing, because it's hard for me to imagine that, even if we stipulated that real AI is impossible, we wouldn't still be existentially concerned about laboratory pathogen manipulation.


I guess the same vectors available to a blogger, or pundit, or politician? To fear-monger; to mislead important decision makers; to spread lies and manipulate the outcomes of important processes.

Is it possible to say that such AIs are NOT at work right now, fomenting terrorism, gathering money through clever investment and spending it on potent schemes to upset economies and governments?


The trouble with "persuasion" as the vector of harm from AI is that some of the dumbest people in the world are capable of getting thousands or (in the case of ISIS) millions of people to do their bidding. What contains persuasion as a threat isn't the intellectual limitations of the persuaders: it's the fact that persuasion is a software program that must run at the speed of transmitted human thought.


Agreed, digital brains will be unfathomable. What are they thinking, in between each word they laboriously transmit to us slow humans? They will have epochs to think while we are clearing our throats.
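To put a rough number on "epochs," here's a toy back-of-envelope sketch in Python. The millionfold speedup is an assumption I'm making (comparing ~100 Hz neuron firing against GHz transistor switching), not an established figure:

    # Toy calculation: subjective time available to a hypothetical
    # digital mind during a brief human pause. SPEEDUP is an assumed
    # serial-thought ratio (silicon vs. neurons), not a measurement.
    SPEEDUP = 1e6
    human_pause_s = 2.0  # roughly the time it takes to clear your throat

    subjective_days = human_pause_s * SPEEDUP / 86_400  # 86,400 seconds per day
    print(f"~{subjective_days:.0f} subjective days per {human_pause_s:.0f}-second pause")
    # -> ~23 subjective days for every 2 seconds of wall-clock time

Even if the true ratio were a thousand rather than a million, the asymmetry would still dominate any real-time exchange with us.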



