The law seems to apply only to models trained with at least 10^26 floating-point operations, or to models that achieve comparable benchmark performance.
This is a truly absurd number! I’m ok with organizations with that sort of compute capacity being subject to regulatory oversight and reporting and liability.
This is not very far off from Llama 3 training. 400 TFLOPS per GPU [1] times 6.4M GPU-hours [2] puts Llama 3 70B at about 9.2*10^24 (call it 10^25) floating-point ops.
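A quick sanity check of that arithmetic, as a minimal Python sketch (the per-GPU throughput and GPU-hour figures are the ones cited in [1] and [2] above; the 10^26 figure is the law's threshold):

    # Back-of-the-envelope training-compute estimate for Llama 3 70B,
    # using the figures cited in [1] and [2].
    throughput_flops = 400e12   # ~400 TFLOPS sustained per GPU [1]
    gpu_hours = 6.4e6           # ~6.4M GPU-hours [2]

    total_flops = throughput_flops * gpu_hours * 3600  # hours -> seconds
    print(f"Total training compute: {total_flops:.2e} FLOPs")      # ~9.22e+24

    threshold = 1e26            # the law's reporting threshold
    print(f"Fraction of threshold: {total_flops / threshold:.1%}") # ~9.2%

On these numbers, Llama 3 70B lands roughly an order of magnitude under the 10^26 line.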
It's dumb to create laws based on arbitrary technological limits. Computing power is still increasing exponentially; yesterday's supercomputer is tomorrow's gaming GPU.
It's not an absurd number, depending on how it's calculated.
I've seen that number thrown around, and by some calculations it would already apply to open-source models like Llama.
No, we don't need to ban or regulate Llama or any existing open-source models. If someone wants to be worried about GPT-6, fine. But there is no need to regulate the stuff that's already out there.
No, it’s not a good point. It could be 10^100000000 FLOPs and it wouldn’t fucking matter. There’s no evidence that would do anything at all; “AGI” may not even be possible at those FLOPs. You’re talking about unprecedented control over computing to ease fears over scary-robot fanfic. None of the “safety” concerns are real. An open-source GPT-4 would do nothing but hurt Sam Altman’s bottom line. YOU ARE BEING DUPED.
If you’re not being serious, I appreciate the sarcasm. If you are, this may be one of the worst things I have ever read. It’s fucking math, people. Math. It will not create some scary golem; it will create marginally better chatbots. You are arguing for totalitarian control over computing to line Microsoft’s pockets. Shame. Shame, shame, shame, shame, shame.