> "OpenAI can now provide API access to US government national security customers, regardless of the cloud provider."
And this one might be related:
> "OpenAI can now jointly develop some products with third parties. API products developed with third parties will be exclusive to Azure. Non-API products may be served on any cloud provider."
Now, does anyone think MIC customers want restricted, safe, aligned models? Is OpenAI going to provide turnkey solutions, unaligned models run in 'secure sandboxed cloud environments' in partnership with private weapons manufacturers and surveillance (data collection and storage/search) specialists?
This pattern is not historically unusual: turning to government subsidies and contracts to survive a lack of immediate commercial viability wouldn't be surprising. The question to ask Microsoft-OpenAI is what percentage of their estimated future revenue stream is going to come from MIC contracting, including the public-private grey area (that is, 'private customers' who are entirely state-funded, e.g. Palantir, so it's still government MIC one step removed).
Well, 'calculus' is the kind of marketing word that sounds more impressive than 'arithmetic', and 'quantum logic' has gone a bit stale. 'AI-based' might give more hope to the anxious investor class, too: 'AI-assisted' is a bit weak, as it means the core developer team isn't going to be cut from the labor costs on the balance sheet; they're just going to be 'assisted' (things like AI-written unit tests that still need some checking).
"The Arithmetic of AI-Assisted Coding Looks Marginal" would be the more honest article title.
Yes, unfortunately a phrase that's used in an attempt to lend gravitas and/or intimidate people. It sort of vaguely indicates "a complex process you wouldn't be interested in and couldn't possibly understand". At the same time it attempts to disarm any accusation of bias in advance by hinting at purely mechanistic procedures.
Could be the other way around, but I think marketing-speak is taking cues here from legalese, and especially from the US Supreme Court, where it's frequently used by the justices. They love to talk about "ethical calculus" and the "calculus of stare decisis" as if they were following any rigorous process, or believed in precedent when it's not convenient. New translation from the original Latin: "we do what we want and do not intend to explain". Calculus, huh? Show your work and point to a real procedure or STFU.
Imagine if compilers were only available via the SaaS model. Developers and the tech community in general would never accept this, and compilers were open-sourced well before the internet had developed to the point where such a model would even be possible. Nobody would trust a system that sent their proprietary code off to a data center to be compiled to binaries for their platform on a monthly subscription model - the idea is ludicrous.
Currently the only SaaS products that still make sense are LLMs, but this is temporary - anyone with sense realizes that the ideal situation is to run an open-source LLM locally and privately, but this still requires a significant investment in high-end hardware and technical staff.
That's the basic calculation: is it overall more efficient and less expensive to hire a skilled IT team to manage your in-house solutions, mostly open-source, including security patches, than to rely on external providers who charge high monthly fees and use all manner of sneaky tactics to keep you locked into their products? The in-house option is going to win in the long run.
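A back-of-the-envelope version of that calculation. All figures here are invented for illustration (salaries, seat counts, hardware cost, and the vendor's annual price increase are assumptions, not data):

```python
# Toy break-even model: fixed in-house costs vs. per-seat SaaS fees
# that the vendor raises every year. All numbers are made up.

def inhouse_cost(years, hardware=200_000, team_salary=400_000):
    """One-time hardware outlay plus a small IT team's annual cost."""
    return hardware + team_salary * years

def saas_cost(years, seats=500, fee_per_seat_month=100, annual_increase=0.10):
    """Per-seat monthly fees, with the vendor hiking prices 10% a year."""
    total, fee = 0.0, float(fee_per_seat_month)
    for _ in range(years):
        total += seats * fee * 12
        fee *= 1 + annual_increase
    return total

for years in (1, 3, 5, 10):
    print(years, inhouse_cost(years), round(saas_cost(years)))
```

With these made-up numbers the in-house option overtakes SaaS after the first year or two; the broader point is that a roughly flat in-house cost eventually beats a recurring fee that compounds.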
Imagine a town with two landlords who own all the rental properties. Yes, consumers prefer cheaper rentals, but all the landlords have to do is write an app that sets prices as high as possible while not leaving too many units empty. If the homeless population in the town increases, that's an externality - especially if the landlords themselves don't live in the town.
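A minimal sketch of what such a pricing app could look like, using a completely made-up linear demand curve (the elasticity, unit count, and base rent are assumptions for illustration). It maximizes revenue, not occupancy, so it happily leaves units empty if the higher rent more than compensates:

```python
# Toy rent-setting app: pick the rent that maximizes total revenue under
# an assumed demand curve. Empty units (and priced-out tenants) are fine
# with the algorithm as long as revenue goes up.

def expected_filled(rent, units=100, base_rent=1000, elasticity=0.0008):
    """Units filled at a given rent (toy linear demand model)."""
    demand = units * (1.0 - elasticity * (rent - base_rent))
    return max(0.0, min(demand, units))

def best_rent(candidates):
    """Rent maximizing rent * occupancy, not the number of housed tenants."""
    return max(candidates, key=lambda r: r * expected_filled(r))

rent = best_rent(range(1000, 2501, 50))
print(rent, expected_filled(rent), rent * expected_filled(rent))
```

Under this demand curve the app settles on a rent above the market-clearing one, leaving some units vacant while total revenue rises - exactly the externality described above.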
This works only if the landlords don't have significantly more units than the population demands, AND it is very expensive both to build new units and for new competitors to enter the market. If enough supply comes on the market, the best move for the landlord with the additional supply is to lower prices. Tenants then all move into the better-value units, and the expensive landlord is left with empty buildings or is forced to lower their price.
Wouldn't that require a large flood of units not attached to those landlords? And considering they've already cornered the market on the "old" units, unless this market-disrupting supply of units is owned by someone generous, they'll just match the old prices and call it a win.
Usually prices don't go down; the cost does relative to inflation. What usually happens is a new investor will do the analysis and build new units that are even more expensive, but only slightly. Now all the current tenants that can afford it will leave the current landlords, and the current landlords won't be able to increase prices because there is a better product at that price level.
It does depend on where you are and how elastic supply is. In Austin for example there has been a recent decrease in rent (even relative to inflation) despite continually growing demand.
Austin had such an insane explosion of supply, but they also had a price explosion just a couple of years ago. We'll probably see something similar with respect to GPU rentals in a couple of years.
The timeline here is interesting. Microsoft releases info and instructions for mitigation on July 19, and a more complete report on July 22nd, here's a copy of that:
Then according to this report, 'sometime in August' the exploit was used against the Honeywell-managed nuclear facility, since it hadn't been patched, if I read correctly? So it really could have been anyone, and it's hardly just Russia and China who have a record of conducting nuclear espionage in the USA using their nation-state cyber-capabilities (Israel?). As the article notes:
> "The transition from zero-day to N-day status, they say, opened a window for secondary actors to exploit systems that had not yet applied the patches."
Also this sounds like basically everything that goes into modern nuclear weapons, including the design blueprints. Incredible levels of incompetence here.
> "Located in Missouri, the KCNSC manufactures non-nuclear mechanical, electronic, and engineered material components used in US nuclear defense systems."
Solution: run open-source LLMs on local hardware, where inference takes a while but you're not leaking your sacred proprietary code to some backdoored cloud cluster. Then downtime arises naturally; see the relevant xkcd on compiling:
Note also, compilers automated the process of machine instruction generation - quite a bit more reproducibly than 'prompt engineers' are able to control the output of their LLMs. If you really want the LLMs to generate high-quality programming code to feed to the compiler, the overnight build might have to come back.
Also, in many fields the processes can't be shut down just because the human needs to sleep, so you become a process caretaker, living your life around the process - generally this involves coordinating with other human beings, but nobody likes the night shift, and automating that makes a lot of sense. E.g., a rare-earth refinery needs to run 24/7, etc.
Finally, I've known many grad students who excelled at gaming the 996 schedule - hour long breaks for lunch, extended afternoon discussions, tracking the boss's schedule so when they show up, everyone looks busy. It's a learned skill, even if overall this is kind of a ridiculous thing to do.
The article doesn't mention that Bayh-Dole made it legal for a university to exclusively license a patent generated by a government-financed researcher to a corporation.
Prior to this, if a corporation wanted exclusive rights to basic patents, it had to run its own private research labs to generate them. Before Bayh-Dole, university inventions were patented, but there were no exclusive licensing deals. That meant no competitive advantage: anyone (I believe any US citizen) could license the patents.
So corporations largely stopped funding private research labs like Bell and instead entered into public-private partnerships. On the academic side we saw the rise of the shady entrepreneurial researcher whose business plan was to use government funds to generate patents (not uncommonly based on fraudulent research), which formed the basis of a start-up that was then sold to a major corporation.
The fix is simple: patents generated with taxpayer dollars at American universities should be available to any American citizen for a small licensing fee; if people want exclusive rights to patents, they need to put up the capital for the research institution themselves, as was the case with Bell Labs. Practically, this starts with a repeal of Bayh-Dole.
The obvious retort would be, if the situation were so favorable for corporations before Bayh-Dole, why were so few licensing deals in place before the passage of Bayh-Dole (fewer than 5% of technologies were licensed)?
> So corporations largely stopped funding private research labs like Bell and instead entered into public-private partnerships
They didn't, though. Bayh-Dole was 1980. All the big tech firms have invested massively in R&D since then, and I think it's also true for many non-tech or tech-adjacent industries (e.g. chip manufacturing, oil and gas).
Repealing Bayh-Dole is a terrible idea. A lot of research produces enough to get a patent but still requires a lot more development to get a product. Drugs are probably the best example.
Wouldn't a company still be able to patent the additional development they did to turn the original research into a product? E.g. delivery method patents are very common.
I don't see why they need to own the original research.
All else being equal, it's most straightforward to demonstrate infringement of a composition of matter claim (which tends to be the earliest for pharma) and so these are more valuable. Also, they tend to be the earliest to issue and possibly litigate over, which also increases value.
One easy free-market-friendly, libertarian-approved solution would give citizens and residents of California a choice: your data can be kept private, or you can get a sizeable percentage of the value if you opt in to data collection and sharing.
I'm not really sure how much this would be worth - and would it scale? How much value does the data from a worker at the lower end of the wage scale have relative to that from the C-suite members of a mid-size corporation? Should we all have the right to climb up on the block and sell our data to the highest bidders, while collecting the majority of the profits from the transaction ourselves? It might make more sense to sell your data in five-year futures contracts, with the opportunity to renegotiate rates now and then.
It's informational data, and in arenas like the commodity markets, information is invaluable. Traders have access to everything from satellite data on oil tankers to insider information from drilling rigs, and they pay a lot to keep their data current and accurate, and to get access to proprietary databases and even nation-state classified sources.
Thus, if human data is so valuable, the humans generating the data should be the ones collecting the majority of its financial value if they opt to sell it. From this view, the real crime here is theft of worker value by data collectors and resellers in a monopolistic market system.
There are two projects I know of that attempted to solve this problem.
India has the DEPA (Data Empowerment and Protection Architecture) framework, which addresses the data-consent problem (e.g., a bank will ask for your consent before sharing your data). The advantage here is that it provides a legal framework as well.
The Solid project from Tim Berners-Lee (who invented the World Wide Web) is another attempt: https://solidproject.org/. This is purely consumer-owned, but there is no legal protection from the government.
> One easy free-market-friendly libertarian-approved solution would give citizens and residents of California a choice - your data can be kept private, or you can get a sizeable percentage of the value if you opt-in to data collection and sharing.
Some business models that depend on unscrutinized unlawful or antisocial behavior turn out to be unviable when they're forced to operate in a way that is lawful / not an attack on the public.