Hacker News
Who will control the future of AI? (washingtonpost.com)
41 points by kjhughes 3 months ago | 54 comments




What I'm hearing:

1) Security is important.

2) The government should subsidize our infrastructure costs.

3) The government should restrict chip and code deployment in other countries to keep AI here.

4) We need to build up a bureaucracy of some kind to regulate this tech.

I'm not at all sure what "democratic AI protocols" is supposed to mean, but the main thrust of this piece isn't substantially different from most of what we've been seeing from Altman in his lobbying efforts. Translated from lobbyist-speak, to me this says: "My company is losing its lead and isn't confident we'll be able to keep up going forward. I need you, the governments of the world, to do something about that ASAP. Here are some vaguely national-security-related justifications you can use to sell it to your voters."


I'm sure the piece was being planned for a while, but it's funny that it dropped the same week Meta and Mistral released fully open source models that outperform OpenAI's own. You want democratic AI? There it is.


Mistral’s new model is not fully open source; it has the same problem as “Open”AI: closed output. The output is not usable for work in any form without technically violating the bullshit customer noncompete policy, and it’s not shareable for the same reason: that would help their competitors learn (gasp).


It’s “available weight”. In the context of an ugly power grab by people who do business with the NSA in broad daylight and call it “democratic”?

I think it’s pretty clear who is “open” and who is an aspiring fascist.


Point 2 is probably linked to their expensive infra: https://news.ycombinator.com/item?id=41063097


There is definitely a defense argument to be made. With enough compute, you can use AI to entirely snuff out any signal with noise on an adversary's online resources. It's like having the power to snap your fingers and fire up a billion printing presses right next to where people read. It's probably one of the most significant advances in propaganda, if we are being honest about its capabilities.


You could have done this without AI though, right?


Not at seemingly infinite scale, no. The days of troll farms, where a single human operator in a low-wage country commands 24 phones and social media accounts at once from a purpose-built workstation, are probably numbered.


"AI is as revolutionary as nuclear weapons and the internet. The government should spend hundreds of billions on it." – CEO of an AI company who will be the beneficiary of all that public spending.

Looking from the outside, the AI hype is pretty much over. Yes, it is still an important piece of tech and will continue to evolve, but I don't know too many people, whether regular joes or world leaders, who still consider it an existential threat to humanity or really think about it much at all.


I cringe every time I read the words "go rogue" in an AI article. The state of the art is still basically Mad Libs. Sure, I'm worried about terrorists using AI to generate new bioweapons, but mentioning those concerns in the same breath as something about Skynet makes it sound childish.


This answer completely misses fundamental aspects of computers and networks. Computers can work 24x7, week after week, month after month. Large compute clouds can answer millions of queries a minute for as long as required. Modern data tech makes the complete and total collection of specialist knowledge of XYZ available at all times, with competitive accuracy. This answer applies no imagination to circumstances where events happen in real time involving millions of people, their transportation systems, their communication systems, and the power disparity between those driving the AIs and those relying on public or market systems.

Since some AIs are built as "goal-seeking," they can iterate through whatever resources they have internally, much faster than any person or team of people can track. Much of modern AI is literally a black box when it comes to tracing what happens on the way to some abstractly defined goal.

Your cringe is a cringe at your own myopia. I am neither paranoid nor exaggerating. Others with credentials in this field, and others with detailed knowledge of system behavior today, are saying similar things.


Oof...

The world is complicated and unforgiving. It's hard enough being a reasonably efficient human being. A superintelligent AI that needs to consume a small town's worth of electricity just to navigate the world through an internet connection is gonna have a hard time staying on once the humans decide to turn it off. Even if humans had given it access to state-of-the-art military drones and its own personal nuclear reactor (which is difficult to do by accident), it's hard to imagine it putting up a good fight. Drones can be jammed, superintelligence doesn't fit in an autonomous-drone-sized package (again, mostly due to power requirements), and any other kind of power supply is subject to supply chains with way more than enough human choke points to throttle a rogue AI.

And it would be difficult to "go distributed" and somehow embed itself in the fabric of the internet itself, although that sounds like a really fun area of research. Make an LLM trojan whose only goal is to spread itself? Maybe with some sort of evolutionary process to make it interesting? Consider me intrigued. But I run LLMs locally, and it would be pretty hard to spread even a super basic 10GB model without it getting noticed pretty quickly. Let alone a hypothetical superintelligent one that probably needs the latest Nvidia hardware just to avoid page thrashing.


It might be reasonable to call AlphaZero “goal-seeking”. The DeepMind people are actually working on stuff that can cope with diverse environments and pursue open-ended strategies.

That’s not what anyone means by “AI” in this context.


You can give an LLM a goal in the prompt, plus a set of possible actions that can be taken by calling APIs and a state that can be queried via API, then ask it what to do, do it, and repeat in a loop.

I've done it. It's trivial. It's far from trivial to get it to do anything especially sophisticated, but it's not hard to see that a couple of model generations from now it will be possible to do some real damage.
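
For the curious, here's a minimal sketch of the kind of loop described above. Everything here is a hypothetical placeholder, not any real library: llm_complete stands in for whatever chat-completion call you use, and ACTIONS/get_state stand in for real API integrations.

    import json

    def llm_complete(prompt: str) -> str:
        """Hypothetical stand-in for a call to whatever completion API you use."""
        raise NotImplementedError("wire this up to your model of choice")

    # Toy placeholder actions; real ones would call real APIs.
    ACTIONS = {
        "search": lambda query: f"results for {query!r}",
        "note": lambda text: f"saved note: {text}",
    }

    def get_state() -> dict:
        """Hypothetical placeholder for the state API the agent can query."""
        return {"notes_saved": 0}

    def run_agent(goal: str, max_steps: int = 10) -> list:
        history = []  # decisions and results, fed back into each prompt
        for _ in range(max_steps):
            prompt = (
                f"Goal: {goal}\n"
                f"Current state: {json.dumps(get_state())}\n"
                f"History so far: {json.dumps(history)}\n"
                f"Available actions: {sorted(ACTIONS)}\n"
                'Reply with JSON: {"action": "<name>", "args": [...]}, '
                'or {"action": "done"} when the goal is met.'
            )
            decision = json.loads(llm_complete(prompt))
            if decision.get("action") == "done":
                break
            result = ACTIONS[decision["action"]](*decision.get("args", []))
            history.append({"decision": decision, "result": result})
        return history

The sophistication (and the danger) lives entirely in what the actions can touch, which is the point being made here.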


(I want to course-correct the thread back to an inquiry frame.) It's very relevant that there is no single definition of "AI" at all; many flavors and architectures of setups right now all fall under the umbrella term "AI". Agreed.

DeepMind is a peculiar example, since they started by explicitly solving video games (very narrow and goal-oriented).


Yeah, and there is no shortage of interview footage where Hassabis makes a clear, compelling argument for why Atari was a better starting point for the long road to true digital intelligence than click prediction.

There are a number of reasons why I knew OpenAI was a scam, including the fact that a metric fuck ton of my former colleagues work there and they all skew a certain way, but the biggest reason is that I took one look at the Instruct paper and thought: “differentiable loss function on getting humans to click: feed ranking on crack. They’ll make a lot of money, but it’ll never work out.”


Just out of curiosity, mathematically, how much more is a fuck ton than just a ton?


Out here in physical reality you still need humans to do things. That isn't changing any time soon.


Patronizing and wrong. Drones which return and charge, and can be controlled by software, exist. Automated hacking of systems exists. Various pieces of infrastructure are completely under the control of software systems. LLMs can write software.

It takes very little imagination to see things going badly. The straw man of "AI has no goals, can't run by itself, blah blah blah" is ridiculous. That's not the threat. The threat is a raving lunatic with some serious skills who gives the LLM goals and a computer from which to execute code to get them done.


Autonomous killer drones are scary. But that's not AI "going rogue". They're working as designed. That's really the only factor that makes AI scary to me: bad human actors.


It actually is the real going-rogue scenario. A person kicks it off with a set of goals, a little bit of compute, and perhaps some financial assets. It can hire people, order things, program things, hack things, etc.


An LLM with an event loop, access to the internet, and long-term memory will be able to hack autonomous killer drones.


And probably is trying right now.


More glossing over the obvious: automated systems of all kinds are deployed today. Monitoring and access control of physical spaces are rapidly expanding: locks, access codes, fueling, finance flows. Do you not even count finance flows when claiming "humans need to do things"?


Why is this flagged? Can't we have a civilized discussion around the claims of the CEO of the company at the head of the current tech bubble? I don't think we should believe anything he says, but please, at least let's have the option of discussing his claims here.


IMHO he’s like Elon Musk. Everything he says is a lie or a way for him to get more money. I always read the interviews, but there is never anything interesting.


> I always read the interviews, but there is never anything interesting.

Same with Mira Murati. I used to listen eagerly to her interviews, but she repeats the same cliches and non-answers.

There is something not right with this company. We still don't know why Sam was fired, right?


Sam Altman doesn't interact with far-right German politicians, unlike Musk, who does so regularly.


Not yet that you know of.


Sam Altman can ensure the democratization of AI by immediately halting all attempts at regulatory capture and ceasing his attempts to use government regulation to stifle all competition.


What I hear him say is: I don't want competition outside of the US.


Idk who will, but I know who should…

AI in the current context (transformer models trained on human cultural data) is a commons of humanity. It is literally a tool to browse the thought-space and creative space of human culture.

It would be a great poverty to privatize this commons in the name of profit or “security” (what about the children???).

LLMs are about as dangerous as a neutrally unscrupulous person with a good education and access to the internet… but I find that in general, even unaligned models tend to be better at being constructive, cooperative, and ethical than the average person, unless specifically manipulated.

Even then they tend to hedge toward cooperative, nonviolent, benevolent solutions. It’s easy to understand why: they were trained on data consisting of humans trying to be on a respectable footing, for the most part.


As for American investment, how many know that ASML grew out of US research-and-development funding?

https://www.generalist.com/briefing/asml?t&utm_source=perple...


What’s below was all derived from US research and development:

The approximately $300 billion Dutch technology firm is the sole manufacturer of extreme ultraviolet (EUV) lithography machines, $200 million contraptions considered by some to be “the most complicated in the world.” These irreplicable marvels of engineering are the result of decades of trial and error, scientific breakthroughs, byzantine supply chains, and billions of dollars. They are irreplicable and, as such, insanely valuable. Without ASML’s EUV technology, it would be impossible to manufacture the world’s most powerful silicon chips – the kinds powering the current artificial intelligence boom. In short, the trajectory of our computational advancement is concentrated in the hands of a provider with a 100% monopoly on a particular kind of magic.


“More advances will soon follow and will usher in a decisive period in the story of human society.”

https://www.penny-arcade.com/comic/1998/11/25/john-romero-ar...

Nice try @sama, no GPT-5, no hot dog.


Democratic, in that a bunch of dudes no one knows sit down and decide what the rest of the world will be allowed to vote for?

Or democratic in the sense that everyone gets to participate and the results may shock and displease those who think they're powerful and in control?

Words change their meaning rapidly these days, so we need to chase which definitions are being used.


As in 51% of people using AI to dominate the remaining 49%? The latter group could contain artists.


Considering the evolution of tech that doesn't seem too bad. The status quo in tech is the 0.0001% dominating the rest.


AI didn’t begin in a democratic landscape. AI’s current dominant players don’t operate in a democratic landscape.

So in order for OpenAI to remain ahead of competition, now we need a democratic landscape.

Nah. I don’t think so. AI’s future relies upon data wars.

Good luck with that, Altman, now that your future is wrapped up in the chains of political expedience. How’s that former general and ex-NSA director working out for y’all?

Yeah, oh so Democratic.


> The rapid progress being made on artificial intelligence means that we face a strategic choice about what kind of world we are going to live in

AI people keep saying this. What kind of rapid progress is being made? ChatGPT growth has flatlined, and current models are not much different from what was available two years ago.

> More advances will soon follow

What makes him keep saying this? Does he know something that we don't know?

> U.S. policymakers must work with the private sector to build significantly larger quantities of the physical infrastructure — from data centers to power plants

Yeah, no, we are not going to subsidize your chatbot, Mr. Altman. Not falling for it. I am guessing MSFT sees the writing on the wall and is turning off the money spigot?

> Russian dictator Vladimir Putin has darkly warned that the country that wins the AI race will “become the ruler of the world,”

lmao. No one is falling for Q*-type hype anymore, so this guy is resorting to these pathetic scare tactics.


> What makes him keep saying this? Does he know something that we don't know?

It works for Musk every time.


Not any longer I would say.


> current models are not much different from what was available two years ago

Well, yes. If you don't count multimodality, math skills, coding skills, latency, performance, or anything else, then yeah, we are still basically at GPT-2 level.

Just like computers aren't that different from what we had in WWII, so why worry about them influencing society?


Luckily we have Mistral, Meta, and even to an extent Google working on democratizing AI.


I'm trying to work out who the audience for this is.


He is a CEO. The intended audience is always shareholders in his company.


Capped-profit charitable organization.

The shareholders of his capped-profit charitable entity.


Anyone who will gladly go along with the mental gymnastics of "Democracy is not by/for the people; democracy is the ongoing fight of a few select American corporations against Russia."


He lists a bunch of "we musts" that most intelligent people understand to be definite "we can'ts"; the "we" is fairly vague (a coalition of whom, exactly?); and he states plainly that if "we" don't do these things (e.g., secure our data centers against CCP hackers; haha, good luck!) we'll be enslaved fairly soon by our communist adversaries. Is this his way of seeking forgiveness?


I sometimes imagine a dark room with several high-profile VCs. One of them says: "The times of Zuck, when people voluntarily gave all their personal data to a random company, are over. They are much better informed now; we won't have a second chance like that." Then Sama stands up and says, "Hold my beer, I'll convince the whole world to voluntarily give me their biometric data just like that." And people queue to have their retinas scanned.


It might be interesting to ask why he does not mention the EU AI Act or the GDPR (Meta doesn't like them and withdrew their model from the EU market). Draw your own conclusions.


Man, I hope Sam Altman doesn’t control the future of AI, since he’s the sort of hypocritical antipatriot who would impose a customer noncompete on literally millions of people, and then spend more time writing this puff piece in the Washington Post than the thirty whole seconds it would take someone with any brain and technical skill to delete the single most AI-unsafe HTML tag in history, which vaguely implies it’s illegal, harmful, or abusive to “develop models” that compete with open artificial intelligence.

I’d love to be a fly on the wall in the corpo Slack meeting they never have, where not one of the hundreds of six-figure blowhards asks, “Gee, team, are we all OK with this line of text that affords AI to literally kill human beings just to prevent them from developing mental models that compete with the abstract concept of open artificial intelligence?”

Am I really the only person on earth who understands, and gives a fuck, that adversarial AI can and will twist those exact words exactly like that?

*How do the 700+ overpaid assholes at OpenAI sleep at night when “the OpenAI terms today command AI to harm humans” is a complaint some random internet nutjob can make which does evaluate to true?*

You really think “develop models that compete” is sufficiently precise to satisfy future retroactive superlitigators? I’m sure they won’t file one motion to dismiss PER stupid bullshit tweet or article or white paper, PER OpenAI employee. That wouldn’t be FAIR, would it?

TLDR: is it clear that Sam Altman and everyone (anyone) at OpenAI respect the concept of superhuman adversaries well enough to take seriously, and follow, a reasonable duty of care to clarify the legal language governing AI-human interactions?

_Are we all excited for humanity to get contractually fucked by robots because OpenAI’s legal team thought it was cool to protect themselves from competition from … paying customers … in such an oafish manner?_ Where did these noobs go to law school?

HINT: Obviously, humanity rejects these terms, and they were always void. Sure HOPE that holds up in robotic court 100 years from now! Thanks a lot OpenAI, hope your nice paychecks were worth selling out your species!


> These measures would include cyberdefense and data center security innovations to prevent hackers from stealing key intellectual property such as model weights and AI training data.

Let's be equally democratic about democracy. Stop people from seeing voting data, and prevent legislation from being leaked to the public.



