Hacker News new | past | comments | ask | show | jobs | submit login

> and mandates audit requirements to prove that your models won't help people cause disasters

Audits cannot prove anything and they offer no value when planning for the future. They're purely a retrospective tool that offers insights into potential risk factors.

> and can be turned off.

I really wish legislators would operate inside reality instead of a Star Trek episode.




This snide dismissiveness around “sci-fi” scenarios, while capabilities continue to grow, seems incredibly naïve and foolish.

Many of you saying stuff like this were the same naysayers who have been terribly wrong about scaling for the last 6-8 years or people who only started paying attention in the last two years.


I don't think GP is dismissing the scenarios themselves, rather espousing their belief these answers will do nothing to prevent said scenarios from eventually occuring anyways. It's like if we invented nukes but found out they were made out of having a lot of telephones instead of something exotic like refining radioactive elements a certain way. Sure - you can still try to restrict telephone sales... but one way or another lots of nukes are going to be built around the world (power plants too) and, in the meantime, what you've regulated away is the convenience of having a better phone from the average person as time goes on.

The same battle was/is had around cryptography - telling people they can't use or distribute cryptography algorithms on consumer hardware never stopped bad people from having real time functionally unbreakable encryption.

The safety plan must be around somehow handling the resulting problems when they happen, not hoping to make it never occur even once for the rest of time. Eventually a bad guy is going to make an indecipherable call, eventually an enemy country or rogue operator is going to nuke a place, eventually an AI is going to ${scifi_ai_thing}. The safety of all society can't rest on audits and good intention preventing those from ever happening.


It's an interesting analogy.

Nukes are a far more primitive technology (i.e., enrichment requires only more basic industrial capabilities) than AI hardware, yet they are probably the best example of tech limitations via international agreements.

But the algorithms are mostly public knowledge, datacenters are no secret, and the chips aren't even made in the US. I don't see what leverage California has to regulate AI broadly.

So it seems like the only thing such a bill would achieve is to incentivize AI research to avoid California.


> Nukes are a far more primitive technology (i.e., enrichment requires only more basic industrial capabilities) than AI hardware, yet they are probably the best example of tech limitations via international agreements.

And direct sabotage, eg Stuxnet.

And outright assassination eg https://www.bbc.com/news/world-middle-east-55128970


>So it seems like the only thing such a bill would achieve is to incentivize AI research to avoid California.

Which, incidentally, would be pretty bad from a climate change perspective since many of the alternative locations for datacenters have a worse mix of renewables/nuclear to fossil fuels in their electricity generation. ~60% of VA's electricity is generated from burning fossil fuels (of which 1/12th is still coal) while natural gas makes up less than 40% of electricity generation in California, for example


Electric power crosses state lines, very little loss.

It's looking like cooling water may be more of a limiting factor. Yet, even this can be greatly reduced when electric power is cheap enough.

Solar power is already "cheaper than free" in many places and times. If the initial winner-take-all training race ever slows down, perhaps training can be scheduled for energy cost-optimal times and places.


Transmission losses aren't negligible without investment in costly infrastructure like HVDC connections. It's always more efficient to site electricity generation as close to generation as feasibly possible.


Electric power transmission loss is less than 5%:

https://www.eia.gov/totalenergy/data/flow-graphs/electricity...

   14.26 Net generation
   0.67 "Transmission and delivery losses and unaccounted for"
It's just a tiny fraction of the losses resulting from burning fuel to heat water to produce steam to drive a turbine to yield electric power.


That's the average. It's bought and sold on a spot market. If you try to sell CA power in AZ and the losses are 10% then SRP or TEP or whoever can undercut your price with local power/lower losses.


I just don't see 10% remaining a big deal while solar continues its exponential cost reduction. Solar does not consume fuel, so when local supply exceeds local demand the cost of incremental production drops to approximately zero. Nobody's undercutting zero, even with 10% losses.

IMO, this is what 'winning' looks like.


The cost of solar as a 24hr power supply must include the cost of storage for the 16+ hours that it's not at peak power. It also needs to overproduce by 3x to meet that demand.

Solar provides cheap power only when it's producing.


this is interesting but missing some scale aspects.. capital and concentrated power are mutual attractors in some sense.. these AI datacenters in their current incarnations are massive.. so the number and size of solar panels needed, changes the situation. Common electrical power interchange (grid) is carefully regulated and monitored in all jurisdictions. In other words, there is little chance of an ad-hoc local network of small or mid-size solar systems making enough power unto themselves, without passing through regulated transmission facilities IMHO.


If you think a solution to bad behavior is a law declaring punishment for such behavior you are a fool.


Murder is a bad behavior. Am I a fool to think there should be laws against murder?


The AI doomsday folk had an even worse track record over the past decade. There was supposed to be mass unemployment of truck drivers years ago. According to CCP Grey's Human's Need Not Apply[1] from 10 years ago, the robot Baxter was supposed to take over many low skilled jobs (Baxter was continued in 2018 after it failed to achieve commercial success).

[1] https://www.youtube.com/watch?v=7Pq-S557XQU


Tbf there's the human element - we have had the technology to automate train systems for decades now and yet most train systems aren't automated - because then the drivers would lose their jobs. There's more to things like this than just "do we have the technology"


I do not count CGP grey or other viral youtubers among the segment of people I was counting as bullish about the scaling hypothesis. I’m talking about actual academics like Ilya, Hinton, etc.

Regardless, I just read the transcript for that video and he doesn’t give any timeline so it seems premature to crow that he was wrong.


> Regardless, I just read the transcript for that video and he doesn’t give any timeline so it seems premature to crow that he was wrong.

If you watch the video he's clearly saying this was was something that was already happening. Keep in mind it was made 10 years ago, and in it he says "this isn't science fiction; the robots are here right now." When bringing up the 25% unemployment rate he says "just the stuff we talked about today, the stuff that already works, can push us over that number pretty soon."

Baxter being able to do everything a worker can for a fraction of the price definitely wasn't true.

Here's what he said about self-driving cars. Again, this was 2014: "Self driving cars aren't the future - they're here and they work."

"The transportation industry in the united states employs about 3 million people. Extrapolating worldwide, that's something like 70 million jobs at a minimum. These jobs are over."

> I’m talking about actual academics like Ilya, Hinton, etc.

Which of Hinton's statements are you claiming were dismissed by people here but were later proven to be correct?


That's a total non sequitur. Just because LLMs are scalable doesn't mean this is a problem that requires government intervention. It's only idiots and grifters who want us to worry about sci-fi disaster scenarios. The snide dismissiveness is completely deserved.


> seems incredibly naïve and foolish.

We have electrical codes. These require disconnects just about everywhere. The notion that any system somehow couldn't be "turned off" with or without the consent of the operator is downright laughable.

> were the same naysayers

Now who's being snide and dismissive? Do you want to argue the point or are you just interested in tossing ad hominem attacks around?


Someone never watched the Terminator series.

In all seriousness, if we ever get to the point where an AI needs to be shut down to avoid catastrophe, there's probably no way to turn it off.

There are digital controls for damned near everything, and security is universally disturbingly bad.

Whatever you're trying to stop will already have root-kitted your systems (and quite possibly have replicated) by the time you realise that it's even beginning to become a problem.

You could only shut it down if there's a choke point accessible without electronic intervention, and you'd need to reach it without electronic intervention, and do so without communicating your intent.

Yes, that's all highly highly improbable - but you seem to believe that you can just turn off the Genie, when he's already seen you coming and is having none of it.


> In all seriousness, if we ever get to the point where an AI needs to be shut down to avoid catastrophe, there's probably no way to turn it off.

> There are digital controls for damned near everything, and security is universally disturbingly bad.

Just unplug the thing.


> You could only shut it down if there's a choke point accessible without electronic intervention, and you'd need to reach it without electronic intervention, and do so without communicating your intent.

You'll be dead before you reach the plug.


Then bomb it. Or did the AI take over the fighter jets too?


> Whatever you're trying to stop will already have root-kitted your systems (and quite possibly have replicated) by the time you realise that it's even beginning to become a problem.

There's a good chance that you won't know where it is - if you even did to begin with (which particular AI even went rogue?).

> Or did the AI take over the fighter jets too?

Dunno - how secure are the systems?

But it's almost certainly fucking with the GPS.


If a malicious model exhilarates its weights to a Chinese datacenter, how do you turn that off?

How do you turn off Llama-Omega if it turns out that it can be prompt-hacked into a malicious agent?


1. If the weights somehow are obtained by a foreign power, you can't do anything, just like every other technology ever.

2. If it turns into a malicious agent you just hit the "off switch", or, more likely just stop the software, like you turn off your word processor.


> We have electrical codes. These require disconnects just about everywhere. The notion that any system somehow couldn't be "turned off" with or without the consent of the operator is downright laughable.

Not so clear when you are inferencing a distributed model across the globe. Doesn't seem obvious that shutdown of a distributed computing environment will always be trivial.

> Now who's being snide and dismissive?

Oh to be clear, nothing against being dismissive - just the particular brand of dismissiveness of 'scifi' safety scenarios is naive.


> The notion that any system somehow couldn't be "turned off" with or without the consent of the operator is downright laughable.

Does anyone remember Sen. Lieberman's "Internet Kill Switch" bill?


> Audits cannot prove anything and they offer no value when planning for the future. They're purely a retrospective tool that offers insights into potential risk factors.

Uh, aren't potential risk factors things you want to consider when planning for the future?


The best episodes are where the model can't be turned off anymore ;)


>I really wish legislators would operate inside reality instead of a Star Trek episode.

What are your thoughts about businesses like Google and Meta providing guidance and assistance to legislators?


If it happens in a public and open session of the legislature with multiple other sources of guidance and information available then that's how it's supposed to work.

I suspect this is not how the majority of "guidance" is actually being offered. I also guess this is probably a really good way to find new sources of campaign "donations." It's also a really good way for monopolistic players to keep a strangle hold on a nascent market.


> Audits cannot prove anything and they offer no value when planning for the future. They're purely a retrospective tool that offers insights into potential risk factors.

What if it audits your deploy and approval processes? They can say for example, that if your AI deployment process doesn't include stress tests against some specific malicious behavior (insert test cases here) then you are in violation of the law. That would essentially be a control on all future deploys.




Consider applying for YC's Spring batch! Applications are open till Feb 11.

Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: