1. Early 2025: Identify Federal sites for AI data centers, clean energy facilities, and geothermal zones. Streamline permitting processes and draft reporting requirements for AI infrastructure. Develop a plan for global collaboration on trusted AI infrastructure.
2. By Mid-2025: Issue solicitations for AI infrastructure projects on Federal sites. Select winning proposals and announce plans for site preparation. Plan grid upgrades and set energy efficiency targets for AI data centers.
3. By Late 2025: Finalize all permits and complete environmental reviews. Begin construction of AI infrastructure and prioritize grid enhancements.
4. By 2027: Ensure AI data centers are operational, utilizing clean energy to meet power demands.
5. Ongoing: Advance research on energy efficiency and supply chain resilience. Report on AI infrastructure impacts and collaborate internationally on clean energy and trusted AI development.
Well, it would funnel tax dollars to already-established companies that don't need the subsidies. :p
I have some sympathy for certain domestic capabilities (e.g. chip fabrication) but this "AI" bubble cross-infecting government policy is frustrating to watch.
It's hard to know how much this effort is about the government's belief in AI, and how much of it is about supporting the technology sector while using AI as a convenient buzzword.
I think, though, that even if LLMs turn out to be a dead-end and don't progress much further... there are a lot of benefits here.
One of the US's key strategic advantages is brain drain.
We are one of the world's premier destinations for highly educated, highly skilled people from other countries. Their loss, our gain.
There are of course myriad other countries where they could go, many of them more attractive than the US in various ways. Every country in the world is in a sense competing for this talent.
As of this year, the US employment-based green card backlog for citizens of India is such that they're currently still processing applications filed in 2019 for the top EB-1 category (that's "Extraordinary People, Outstanding Researchers and Professors, and Multinational Executives and Managers"), and 2012 for mere PhDs. So the numbers would have to go down a lot for the US to even notice.
Speaking as an immigrant myself, so long as there's still noticeable wealth disparity, people will make the jump. The other aspect that makes the US especially attractive compared to some others is its family immigration policy - people generally want their family to join them eventually, and the US has an unusually large allotment for that compared to many other countries.
zero sum thinking has already infected public policy. turns out liberals can be just as "they took our jobs" as the redneck conservative.
the real fear should be that people wouldn't want to come. already chinese intl students see staying in the US vs going back to china as roughly break-even. who wants to deal with all the bureaucracy and hatred when they could just go back and work for deepseek.
>> Well, it would funnel tax dollars to already-established companies that don't need the subsidies.
Yeah it sounded like a gift to nVidia.
My prediction was that nVidia would ride the quantum wave by offering systems to simulate quantum computers with huge classical ones. They would do that by asking the government to fund such systems for "quantum algorithm research" since nobody really knows what to use QC for yet.
This move primes that relationship using the current AI hype boom.
So look for their quantum simulation-optimized chips in the near future.
GPU, gpgpu, crypto, ray tracing, AI, quantum. nvidia is a master at milking dollars from tech fads.
They'll be scouring the internet for signs of thoughtcrime and jamming the sources they find. Also, automating those sectors that tend to produce whistleblowers.
Because if they don't do the bad thing first, some bogeyman might become better at it than they are. Same logic that gave us the Manhattan project.
How else will we uncover what we could expose torified too so they flip out and lose their job so agent jackson can take their jerb by reading their hackernews comments and knowing all of the media they've been exposed to during the day?
Or try to make them have a heart attack by making a digital twin of them which synchronizes their sentiment, smart watch health data, and man-in-the-middling all of their digital conversations with creepy GenAI? Our adversaries might be doing it, so line up some fresh specimens. Come on bruh it's the future, you gotta think bigger.
I suspect we are not in a race for a better LLM, but a race to the singularity.
"In I. J. Good's intelligence explosion model of 1965, an upgradable intelligent agent could eventually enter a positive feedback loop of self-improvement cycles, each successive and more intelligent generation appearing more and more rapidly, causing a rapid increase ("explosion") in intelligence which would ultimately result in a powerful superintelligence, qualitatively far surpassing all human intelligence.[4]"
Belief in a coming technological singularity is completely unscientific and based on zero hard evidence. It is essentially a secular religion with computers cast as deities to either usher in a new utopia or exterminate us all. Surely I am coming quickly, sayeth the Lord.
When I step through the logic in my mind, seems likely.
Let's make the leap of faith that we can improve our AIs to actually understand the code they're reading and suggest improvements. Current LLMs can't do it, but perhaps another approach can. I don't think this is a big leap. Might be 10 years, might be 100.
It's not unreasonable to think there is a lot of cruft and optimization to be had in our current tech stacks allowing for significant improvement. The AI can start looking at every source file from the lowest driver all the way up to the UI.
It can also start looking at hardware designs and build chips dedicated to the functions it needs to perform.
You only need to be as smart as a human to achieve this. You can rely on a "quantitative" approach because even human-level AI brains don't need to sleep or eat or live. They just work on your problems 24/7, and you can have as many as you can manufacture and power.
I think having "qualitative" superiority is actually a little easier, because with a large enough database the AI has perfect recall and all of the world's data at its fingertips. No human can do that.
Let's say all of that happens. Is there any reason to believe that will lead to "a positive feedback loop of self-improvement cycles, each successive and more intelligent generation appearing more and more rapidly, causing a rapid increase ("explosion") in intelligence which would ultimately result in a powerful superintelligence, qualitatively far surpassing all human intelligence"?
Or is it more reasonable to suppose that 1) all those improvements might get us a factor of 10 in efficiency, but they won't get us an intelligence that can get us another factor of 10 in efficiency, 2) each doubling of ability will take much more than doubling of the number of CPU cycles and RAM, and 3) growth will asymptotically approach some upper limit?
A lot of tech improvements are s-curves or waves of s-curves, not exponentials. For a given problem domain you will still have s-curves, but overall intelligence/progress might be exponential in some aggregate sense.
Yeah sure, there are physical limits, and doing things like mining, manufacturing, and logistics are really slow compared to just thinking. But that's just time and money. I don't see why it wouldn't happen. You build a machine to make machines. First you have 1, then 2, then 4, then 8, then 16 etc.
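The contrast between the pure-doubling picture here and the s-curves mentioned above can be sketched as a toy model. This is only an illustration: the `capacity` and `rate` constants are arbitrary assumptions I've chosen, not estimates of anything real.

```python
# Toy model contrasting the two growth patterns discussed above.
# Assumption (mine, for illustration): machines double each cycle in the
# unbounded case, while the bounded case slows near a resource cap.

def doubling(n_cycles, start=1):
    """Pure self-replication: 1, 2, 4, 8, 16, ... with no limit."""
    counts = [start]
    for _ in range(n_cycles):
        counts.append(counts[-1] * 2)
    return counts

def logistic(n_cycles, start=1.0, capacity=1000.0, rate=0.7):
    """Same fast early growth, but it flattens into an s-curve as the
    (hypothetical) resource cap is approached."""
    counts = [start]
    for _ in range(n_cycles):
        x = counts[-1]
        counts.append(x + rate * x * (1 - x / capacity))
    return counts

print(doubling(10)[-1])   # 1024 machines after 10 cycles, and still doubling
print(logistic(30)[-1])   # just under the cap of 1000: the curve has flattened
```

Both curves look identical for the first few cycles; the disagreement in the thread is essentially about which regime overall intelligence/progress ends up in.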
Then when the earth is used up, you look at space. Space travel is a lot easier because you don't need to keep a meat bag alive.
I for one am not worried about this. We still have trouble building robots that can wash dishes autonomously, let alone build nuclear power plants autonomously. The key word here being autonomously i.e. without human intervention, which is what a superintelligent machine would require to grow into this singularity.
I'm not "worried" about it. Just observing that the race is on. I don't think it's necessarily a bad thing as long as the tech is in the hands of "the good guys".
I think as soon as you can get an AI to break a task down into smaller tasks and make itself a todo list, you have the autonomy. You just kick it all off by asking it to improve itself. It doesn't have to "want" anything; it just needs to work.
In the micro, nothing. We'd have a bunch of extra cash we can spend on other stuff.
In the macro?
The overall goal is keeping the US competitive in technology. Both in terms of producing/controlling IP as well as (perhaps even more crucially) remaining a premier destination for technologists from all over the world. The cost of not achieving that goal is... incalculable, but large.
Whether or not this is a good way to achieve that goal is of course up for debate.
It's not just the economic cost that is a consideration here. The military applications of AI are potentially staggering (as are the associated ethical concerns - but, as with any military tech, once someone does it and the advantages become clear, others will inevitably follow). No major power player is going to want to risk getting drastically outpaced by its primary competitors.
Note that this isn't a hypothetical, either. Israel is already using AI to pick targets. Ukraine is already using AI-controlled drones to beat jamming. There's no indication that either one intends to stop anytime soon, which tells the others that the tech is working for this purpose.
It doesn't matter what the incoming admin thinks, it just matters what makes the most economic sense. If interconnection queues are backed up and the grid is lacking the electricity required, offgrid might be the way to go: https://www.offgridai.us/
People assume gas plants are the fastest and easiest to build and power but you have to get the natural gas to the plant. Nat gas pipelines take years to build and also there are gas turbine supply issues. Offgrid solar + battery solves all of these.
That's actually a wildly short timeframe for a large political plan like this. Going from idea to breaking ground on a massive infrastructure project in a calendar year? It'll be interesting to see if this turns into a political/legal quagmire or not, since it's ostensibly aligned with both parties (donors) interests.
Agreed, it seems wildly optimistic to me. I was working at a military installation command last year and it would often take 6 months just to survey some existing buildings for renovation + repurposing and trying to get those plans approved. Stuff like "repaint these walls, mitigate mold, and install sub-divided chain-link fence in this warehouse" was busting our timelines. I'm assuming that just getting the paperwork right for a Federal government construction project is an equally-huge lift.
This is pretty ridiculous. The US is already the undisputed world leader in building data centers. Our companies are great at building data centers on their own initiative, and they don't need government subsidies to do it. This just seems like a subsidy to the "clean energy" industry, disguised as industrial strategy.
That schedule is impossible. Given the amount of power data centers take, and the fact that the government is new to building them, the environmental review process and lawsuits will take years.
I agree in general, but mainly due to equipment lead times, not permit/etc. times.
Remember it's federal land, so they can't be held to state permitting or building codes, etc. - only what they choose to adopt (i.e. if their agency adopted it explicitly).
Having seen lots of datacenters constructed over the years - it's tractable in terms of the bureaucracy parts if they want it to be - because they can mostly ignore them.
So for me it breaks down like:
Building construction, it could be done.
Power provision - hard to say without knowing the sites. Some would be a clear yes, some a clear no. Probably more nos than yeses.
Filling it with AI-related anything at a useful scale - no, the chips don't exist unless you steal them from someone else, or you are putting in older surplus stuff.
I’m not an environmental lawyer, but you don’t think these will be subject to NEPA? It sounds like they’re trying to piggy-back off land that was made available for solar projects under an existing EIS, but it sounds that those decisions could be litigated.
Automobile manufacturers can bring a plant from foundation to production in under a year, it’s not that far-fetched. A datacenter is orders of magnitude less complex.
My understanding is that it's not the data center that's the problem, it's the energy required to run them. This is especially demanding with us rubbing AI over everything and using insane amounts of energy in the process.
Ie: making a new data center is easy. Making new power plants quickly - not so much. But hey, at least there's some renewed political will, better than nothing.
I don’t have sources to back it up, but am reasonably sure these factories consume even more energy than a datacenter, possibly in the GW range. Powering robots, welders, and all kinds of mechanical systems in addition to electronics is very power intensive.
As another data point, Apple has been doing 100% renewables for their DCs since 2014. A wind farm can definitely be built in months, and energy companies will always follow the money. Site selection will definitely take energy availability into account as well.
It's interesting to see the federal government taking a strong industrial policy approach to AI through this executive order as well as physical computing via the CHIPS act.
Couple concerns:
- I am loath to believe in silver bullets. The executive branch seems to believe that investing in AI (note: the order, despite its extensive definitions, leaves "artificial intelligence" itself undefined) is the solution to US global leadership, clean energy, national defense, and better jobs. Rarely if ever is one policy a panacea for so many objectives.
- I am skeptical of government "picking the winners". Markets do best when competitive forces reward innovation. By enforcing an industrial policy on a nascent industry, the executive may just as well be stifling innovation from unlikely firms.
- I am always worried about inducing a _subsidy race_ whereby countries race to subsidize firms to gain a competitive advantage. Other countries do the same, leading to a glut of stimulus with little advantage to any country.
- Finally, government bureaucracy moves slowly (some say that's the point). What happens if a breakthrough innovation in AI radically changes our needs for the type, size or other characteristic of these data centers? Worse still, what happens if we hit another AI winter? Are we left with an enormous pork barrel project? It's hard to envision the federal government industrial policy perfectly capturing future market needs, especially in such a fast moving industry as tech.
Is the U.S. responding to recent moves in the U.K. to make a national effort in this field (the U.S. does not want to be an also-ran)? Are they responding to the efforts in China?
Do they know something more than we do with regard to the efficacy of current or soon-to-come AI? Or is it purely a speculative business/economic move?
I wonder if open models are even going to be allowed.
> require adherence to technical standards and guidelines for cyber, supply-chain, and physical security for protecting and controlling any facilities, equipment, devices, systems, data, and other property, including AI model weights
> plans for commercializing or otherwise deploying or advancing deployment of appropriate intellectual property, including AI model weights
Less advanced things have been labeled a national security risk.
It's currently quasi-illegal in the US to open source tooling that can be used to rapidly label and train a CNN on satellite imagery. That's export controlled due to some recent-ish changes. The defense world thinks about national security in a much broader sense than the tech world.
Value to you or me? Unlikely. Value to others who wish to cut the cost of killing, increase the speed of killing, or launder accountability? Undoubtedly.
Siri, use his internet history to determine if he's a threat and deal with him appropriately.
> AI can process intel far faster than humans.[5][6] Retired Lt Gen. Aviv Kohavi, head of the IDF until 2023, stated that the system could produce 100 bombing targets in Gaza a day, with real-time recommendations which ones to attack, where human analysts might produce 50 a year
Putting them on weapons so they can skip the middle man is the next logical step.
Great, give away more BLM land to Unaccountable corps to pollute who are going to use it to run facial recognition, surveillance, and randomly delete accounts. Fantastic. What a beautifully anti-human future we have in store.
The US bureaucracy is largely indifferent to changes in administration, though it undoubtedly wields significant influence. The key point is that major policy decisions are made by established bureaucrats in the military, the State Department, and the other essential institutional actors. As a result, any country, like China, perceived as a threat to US hegemony will be vigorously fought against.
Maybe, but there's such a small amount of time between now and the new admin that it couldn't be that difficult to kill the bureaucracy behind it. If it had been going for 3-4 years already, sure, I could see that.
when new presidents come in the rest of the government employees don’t change as well. some of these at higher levels are career employees who will serve through many presidents. timing your asks to just get approved by an outgoing president can make your future bright
Historically you are correct that only around 4,000 employees of the US federal government are 'political hires' that the President can change/hire/fire. However, in October 2020, Trump issued the Schedule F order [0], which effectively allowed a much larger share of career positions to be filled by political appointment rather than through a competitive process. The order was promptly rescinded by the Biden administration in Jan 2021. [1]
The Trump administration said that they believed Schedule F would apply to roughly 50,000 existing jobs. However, many think tanks and union members believe that Schedule F could be interpreted much more broadly and could cover well over 100,000 positions.
If the Trump administration revives the Schedule F order, it could mean very significant changes for many career bureaucrats.
Fear mongering was certainly not my intention. The simplest explanation for why this didn't happen during President Trump's first term is that the order only became effective Oct 21st, 2020 - less than 2 weeks before President Biden won the popular & electoral vote.
I share your hope that the incoming Trump administration will uphold the usual norms for hiring & firing career federal employees!
They had put a committee together in case Harris won.
They lost
The committee dumps their report and disbands
----------------
Trump's 1776 commission for redesigning history education released their report days before Biden's inauguration. They published by linking a PDF to the White House website and days later it was gone. Dust in the wind.
Thanks from India! The last time the US imposed sanctions on cryogenic rocket engines, India developed its own indigenous engine. This is the forcing factor other countries need to decouple from US leadership, which it just proved cannot be trusted.
The analysis of this EO in these comments is disappointing.
It contains a lot of good stuff, also on geothermal and nuclear; long term storage; improving power transmission infrastructure; attempting to address the impact of AI on electricity rates for regular consumers; improving data transparency and communications of interconnections; improving the permitting flow for critical (clean) infrastructure; and a bunch of other stuff.
The likelihood of it all surviving the incoming administration seems low, but given how it aligns with already present structural trends (and downright critically important needs in terms of power infrastructure at least), there's a good chance some parts will.
It's more an energy policy EO than an AI EO, and it's both ambitious and objectively positive for combating climate change. Maybe we could at least read the damn thing before filling the comments with low effort cynicism?
It's their production release. :) After 4 years of contracting to put this work together, they published it. The Trump administration will soon after change it to suit their vision of the next 4 instead.
I have some unease about this. If it weren't for the recent advancements and hype around LLMs, we'd probably be getting something similar rushed through, just with crypto instead of AI. The grifters will grift..
> Well, an entity indistinguishable from most LLMs (at least when it comes to hallucinations) will soon be in charge of the US, so that Leadership is pretty much guaranteed, it seems...
I don't think I posted any flamebait, and if I inadvertently did, it was motivated by psychological science, not partisanship. But I've removed anything that would be controversial, I think?
The Biden administration finally woke up and realized America should lead in AI. But instead of unleashing the free market, we’re getting a laundry list of regulations, mandates, and red tape. Buckle up for the Big Government AI Plan