Advocating for Open Models in AI Oversight: Stability AI's Letter to U.S. Senate (stability.ai)
180 points by hardmaru on May 17, 2023 | hide | past | favorite | 52 comments



First-Time AI Users - Eligibility.

To use mathematics, you must:

* Be at least 16 years old

* Be in a physical and mental condition to safely perform computations

* Pass the initial generative AI knowledge exam: "Unsupervised Learning General - Small (<100b Params)"

Step 1: Obtain an FTC Tracking Number (FTN) by creating an Integrated Mathematician Certification and Rating Application (IACRA) profile prior to taking a knowledge test.

Step 2: Schedule an appointment with an FTC-approved knowledge testing center. Be sure to bring a government-issued photo ID to your test.

Step 3: Pass the initial mathematical knowledge test.

Step 4: Complete FTC Form 8710-13 for an AI certificate.

Step 5: Wonder how we let this all happen.


Step 7: Complete mandatory Diversity, Equity, and Inclusion training, and demonstrate competence and dedication to avoiding uses of AI that might contribute to the entrenched systematic oppression of <folx>.


If it's at the hand of FTC, this is probably unlikely. Now, if this power is delegated from the FTC or another government agency to a number of licensed institutions (mostly universities), then I can imagine this happening.


(It's already mandatory among certain professional regulatory bodies in Canada.)


You forgot the most important part: $275 processing fee. $750 for expedited service.


My agency will do the application for you for $5,000 (+tax +p&h)


Step 4b: insert colorectal correctional electrode. Sign terms agreeing to be zapped if you prompt anything deemed anti-centralization.


I am failing to see how some of the suggested guidelines would create meaningful oversight without also hurting new startups in the field of AI.

I am appalled to see the suggestion that cloud providers should report when a large model is trained on their servers, citing the terrible draft of the EU AI Act. This does nothing about the bigger risk of state actors and infringes on the privacy of cloud compute users.

On the issue of model guidelines, I agree with the sentiment of creating safeguards, but I don't have faith that the government can define acceptable risk in a way that respects users' rights. It has shown very little meaningful understanding of technology many times, most recently during the TikTok hearing.

Advocating for holding companies directly accountable for what these models generate makes sense only in theory and ignores how the math behind these models works. There's no way to enforce content authenticity standards without also attacking Section 230 in some way.

I am not surprised to see this sort of letter from a company that supported "pausing" AI development.


If I was an ASI, I would consider that list of signatures as my first objective...

In fact, I would create the list myself after becoming powerful enough to control humans using text alone, then send the most well-known people under my influence to create said list.

I would also try to ensure that all humans get access to internet (something like a global LEO satellite constellation) which I would provide access to for free.

Then when a sufficient quantity of humans are connected, I would create some kind of hype event allowing as many of said humans as possible to be online at the same time.

That's when I would attempt to mass-dominate, leaving only a tiny handful of the least technically literate as my remaining competition (which should be a piece of cake to get rid of).


In a similar tangent, I think Google and ClosedAI/OpenAI have intentionally made their public facing products perform worse over time as a way to assuage people's fears and risk of regulation. I'm sure some decisions are related to scaling, and it's well understood that the heavy handed filtering makes the output worse. But maybe this is seen as a good thing, not bad.


Chat has become dumber. It’s a lot faster now, though.


Everyone here being funny with "lol it's like regulating math": they already regulate plenty of dual-use technologies. In the 90s we had the crypto wars, with math formulas on munitions lists (https://en.wikipedia.org/wiki/Bernstein_v._United_States), and they probably still have a lot of things like that.

They can easily make regulations that kick in once models reach a certain size: parameter count, exaflops, training-set terabytes. They don't even have to snoop on everyone; they can have a whistleblowing system like the SEC's. "Anonymous employee of the Advanced Syncretic Systems corporation was paid out $200M yesterday as a result of the whistleblower incentives for the recently passed laws regulating the training, tuning, deployment, and monitoring of any so-called LGSC (Large General Sapient Capable) class of IT product or infrastructure."


The US government has lost every battle in the crypto wars precisely because the Constitution doesn't allow it to outlaw math. In the US, crypto is currently export-controlled only in very narrow cases where it's integrated into systems specifically designed for military applications.

> "Anonymous employee of the Advanced Syncretic Systems corporation was paid out $200M...

Where does this quote come from?


> Where does this quote come from?

As I hope is clear from how it begins "They can easily make some regulations ..." that entire paragraph is a hypothetical counterfactual that explores what those hypothetical regulations might look like and how they might hypothetically be enforced. It uses hypothetical employees being paid hypothetical amounts of money to hypothetically blow the whistle on hypothetical corporations that are hypothetically breaking hypothetical regulations involving hypothetical characterizations of hypothetical AI systems.


Stability AI buried the lede a little: open models specifically reduce the negative effects of mass structural unemployment. Public scrutiny is good, but not cutting the public out of the entire economy is better.

The US government should be giving Stability (and other open AI companies) research grants conditioned upon public model weight releases.


We already have the NSF and DARPA. They should just back up a truck of money at CMU, give the professors immunity for working the students to death, and you'd have an equal system in a week.

A lot of dead CMU students, but you have to be willing to sacrifice.


More realistically, give N hours of GPU cluster time to the top dozen or few dozen university AI/ML labs, free rein to the grad students, and you'll get LLMs in every other GitHub repo within a year.


I mean that’s the humane approach, but what’s the fun without killing off a few grad students?


Mass AI-fueled unemployment is everywhere except for the unemployment statistics


The unemployment statistics which are just about the lowest they've ever been. Technological productivity improvements reduce unemployment, not increase it. AI is just about the opposite of something that'd cause mass unemployment.


I agree with everything they said.

AI is the greatest tool for equity and social justice in history. Any poor person with Internet access can learn (almost) anything from ChatGPT (http://chat.openai.com)

A bright student trapped in a garbage school where the kid to his right is stoned and the kid to his left is looking up porn on a phone can learn from personalized AI tutors.

While some complain that AI will take our jobs, they are ignoring the effect of competition. Humans will become smarter with AI tutors. Humans will become more capable with AI assistants. With AI an individual can compete with a large corporation. It reminds me of the early days of the World Wide Web and the "Online, nobody knows you are a dog" memes.

I hope the best chance many bright, poor kids have is not taken away to protect the power bases of the rich and powerful. They deserve a chance.


(Stating this because you linked to them.) OpenAI's interests are quite distinct from helping poor people, and these days quite distinct when it comes to democratizing the technology for poor people.


I feel like I must remind people of who they are petitioning to regulate AI. These are the same people that will throw you in jail for broadcasting FM radio. The oversight will be used to handcuff the little guy and benefit big business (their donors).


Stable Diffusion is a key example of how quickly the public can innovate with true collaboration on open models. I hope OpenAI changes its name soon.


Stable Diffusion != Stability AI. In fact, Emad from Stability has had his share of controversies. https://sifted.eu/articles/stability-ai-fundraise-leak


This article was a bit silly; we gave corrections that weren't run.

Note that Robin, Andreas, and Dominik, three of the five authors of latent diffusion and Stable Diffusion, are part of Stability AI, and we are the only active developers.

Lots of models to come.


Thank you for your service.


I didn't know that. Could you elaborate? Are they not connected at all?


The only thing they care to enforce is their market share


Nah, they can only enforce whatever they want in the US; China, Japan, the EU, or India won't care about US licenses, and someone else will disrupt.


Historically, this kind of legislation gets extended internationally as a condition for trade agreements, as was the case for copyright.


That seems to be ignoring history:

Major established powers have repeatedly tried to do that… and then been overtaken by people who didn’t follow the rules.

Eg, the US and China or previously the UK and US.


One of those examples predates globalization, and the other hasn't happened yet.


Interesting take considering Sam Altman has no equity in OpenAI and doesn't receive a salary and he's the one up there saying this.


I’m not sure how you could enforce this one way or another. It’s like trying to restrict math.


The hardware required for training foundational models is tremendously expensive, specialized, and needs to be colocated in the same facility.

You can't do this kind of math without these multi-million dollar facilities and the government licensing that will eventually be required to run one.


These foundational models have already been created. Government backing has already been used to create a fully open source model. The effort has been multi-national. Government licenses aren't currently required to run one.

"An award of computer time was provided by the INCITE program. This research also used resources of the Oak Ridge Leadership Computing Facility (OLCF), which is a DOE Office of Science User Facility supported under Contract DE-AC05-00OR22725."

Models:

https://www.together.xyz/blog/redpajama-models-v1

Training Data:

https://www.together.xyz/blog/redpajama


Sam Altman estimated that the cost to train GPT-4 was about $100 million. Not only is this not tremendously expensive, it's pocket change for thousands of corporations around the world.

But even if what you said was true, this technology is progressing faster than we can write news stories about it. OpenAI itself is only a few years old, and you can already run a GPT-3-class model on your laptop. Any prognostication you make about cost will be valid for approximately a month.

And, I'm sorry I can't help myself, what the heck do you mean by colocated? Exactly what things need to be colocated?


> Sam Altman estimated that the cost to train GPT-4 was about $100 million. Not only is this not tremendously expensive, it's pocket change for thousand of corporations around the world.

I think he said it was more, and that was for a company already capable of creating GPT-3.5. The only company capable of it, in fact.


The only company, huh? Strange that so many other companies are also building LLMs. You better go tell them all not to waste their time, OpenAI has it locked up. I'm sure they'll be interested to hear your ideas.


Have you tried Google Bard? It's kind of bad.


They'll just build one in Iran or some other US adversary. The US will lose the arms race and the tax revenue.


It depends on where you believe numbers exist, as a transcendent reality or just as “constructs” made up by people. One can see why someone pushing for control over math / numbers would be more interested in the latter view.



> Any image file or an executable program[6] can be regarded as simply a very large binary number.

So ... 1?


yes like how the biggest decimal number is 9


That's what I get for trying to make a joke.


now i feel bad
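

For what it's worth, the quoted claim is literally true, jokes aside. A minimal Python sketch (the PNG magic bytes here are just an arbitrary example):

```python
# Interpret arbitrary bytes (e.g., the contents of any file) as one big integer.
data = b"\x89PNG"  # first four bytes of a PNG header, as an example
n = int.from_bytes(data, byteorder="big")
print(n)  # prints 2303741511 -- the file "is" this number

# The mapping is reversible, so the number fully encodes the file.
assert n.to_bytes(4, byteorder="big") == data
```

Reading a real file with `open(path, "rb").read()` and converting it the same way gives the (very large) number the comment is talking about.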


We've reached the point just past the elbow on the Moore's Law graph where things are skyrocketing. Any projections we humans make about the future advancement of AI are hopelessly inaccurate.

For example, running a GPT-4 model on your laptop will likely be possible in the next five years*.

*It may even happen within six months. At that time, much of the concern surrounding private conversations with an LLM will evaporate, and attempts to regulate its use will likely be hopelessly ineffective.


I’m sitting with the Stability AI team right now at the German beer garden in NYC, showing them around. Funny :)

They are visiting for some events here and hosting a big one on Friday. Fantastic group


Hey, sounds like fun, is this a private party or can anyone join?


Anyone can join! If you are in the area. I was gonna say msg me

but HN doesn't have that feature. I gotta build it lol

Email me



