
Low barrier to entry for catastrophic AI? Surely you aren't talking about the current state of AI. The current barrier to entry for an AGI (which is at least what it would take for a catastrophe) is "does not exist". That's about as big as barriers get. Even when the first AGIs show up, it will take a full-time supercomputer to perform like a single human brain. I don't see that as a particularly low barrier.



This assumes that AGI is computationally expensive rather than merely requiring a particular algorithmic approach. It may be possible (and in fact I'd expect it to be so) to replicate general human-level intelligence with significantly less raw computational power than that embodied by the human brain.

Evolution does generate highly optimized systems, but generally only when those systems have been around for tens of millions of years. Human-level intelligence has only been around for what, 50k-100k years? We're probably still in the "just works" phase rather than the "streamlined and optimal" phase.


Eh. The barrier to entry for an AGI does exist, though it is currently undefined, since we don't know what it is. The reason I say that is that there are at least 7 billion general intelligences running around this planet (and many more if you count animal intelligences). It is important to define it that way: not that it is impossible, just that it is unknown how much effort is needed to create an artificial one.

This distinction is very important when comparing the threat of AI with other significant threats. Before nuclear bombs were built, we could not tell you how difficult it was to create one. Now that difficulty is well defined, and we can use that knowledge to prevent them from being built by all but the most well-funded nations.

If the barrier to entry for AGI (then ASI) is lower than we expect, then the threat of AI is significantly different than if AGI/ASI can only be created by nation states.


The barrier to entry for an alien invasion does exist, though it is currently undefined, since we don't know what it is. The reason I say that is that there is at least 1 bloodthirsty species running around this galaxy (and many more if you consider the statistical possibility of life on other planets). It is important to define it that way: not that it is impossible, just that it is unknown how much time is needed before an alien invasion.

The reason I am framing things this way is that we need to be very careful here, because we are starting to turn toward speculation.


You know, you mean for that to sound implausible, but the Great Filter is in fact an open research problem.


I'm pointing out that this is all speculative and dangerously close to science fiction.


You should learn the difference between what is impossible and what just has not happened yet. Much science fiction that was once merely in the realm of possibility is now science reality. One should not need to be reminded that they are communicating at the speed of light over a global communications network capable of reaching billions of people at a time. I'm sure at one point in the past that was science fiction; now it's reality. I don't believe you can show me any science that points out why AI/AGI/ASI cannot be created; we simply are not at that level of sophistication yet.


Your argument is basically "some science fiction has sometimes turned out to be true." That doesn't counter the fact that this is just speculation.


Um, pretty much, no.

Science fiction turns out to be true when physical reality agrees that it can be true. This, again, is why we have a global communications network and personal wireless devices connected to it. It is also the reason we do not go faster than light.

The reason we don't have flying cars is not that they are impossible; they are completely possible. They are also terribly dangerous, expensive, and a complete waste of energy.

The reason we don't have AGI is not that it is impossible; again, if nature can create it, we can recreate it. Since we don't have a good understanding of the networked nature of emergent intelligence, we cannot build the kind of power-optimized network that would make an energy-efficient version possible. AGI itself is a complete waste of energy at this point. We already have many types of AI that are energy efficient and used in products now.


> Science fiction turns out to be true when physical reality agrees that it can be true

This is a ridiculous argument. Furthermore, even if it were true, it tells us nothing about the timeline. It could take 10,000 years for all we know.


In the past, single human brains have come close to destroying the world, and lots of people have access to supercomputers, so the barrier doesn't seem insurmountable.

I don't think you need AGI to cause a catastrophe. A narrow AI specializing in cyberattacks could be catastrophic, and is probably possible with current techniques.



