I can guarantee you that some smart person actually thinks that the opportunity size is measured as a fraction of the pharmaceutical industry market cap.
Doesn't matter if that person can't convince the bean counters and lawyers that it is in the company's long-term interest to release the "proprietary" data.
The only times I have seen this happen are when the remote devices were communicating with something on a blacklist (which should be concerning anyway, but also a quick fix if not) or doing something naughty like not using the DNS server broadcast by DHCP.
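For the second case, a minimal sketch of how you might spot it (Python with scapy, run as root; the resolver address is my placeholder, not anything from the original setup):

    # Flag DNS queries that bypass the resolver advertised by DHCP.
    # EXPECTED_DNS stands in for whatever your DHCP server hands out.
    from scapy.all import sniff, IP, UDP

    EXPECTED_DNS = "192.168.1.1"

    def flag_rogue_dns(pkt):
        if pkt.haslayer(IP) and pkt.haslayer(UDP) and pkt[UDP].dport == 53:
            if pkt[IP].dst != EXPECTED_DNS:
                print(f"rogue DNS: {pkt[IP].src} -> {pkt[IP].dst}")

    sniff(filter="udp dst port 53", prn=flag_rogue_dns, store=False)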
It seems to me the key phrase in that definition is "that may exhibit adaptiveness after deployment". If your code doesn't change its own operation without being redeployed, it's not AI under this definition; if adaptation requires pushing a new version, that's not AI.
I'm not sure what they intended this to apply to. LLM-based systems don't change their own operation (at least, not more so than anything with a database).
We'll probably have to wait until they fine someone a zillion dollars to figure out what they actually meant.
For LLMs we have "for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments".
For either option you can trace the intent of the definitions back to "was it a human coding the decision or not". Did a human decide the branches of the literal or figurative "if"?
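A toy contrast, just to make that concrete (names and thresholds are illustrative, not from any statute):

    # A human decided this branch; you can point at the line and its author.
    def approve_loan_rule(income, debt):
        return income > 50_000 and debt < 10_000

    # Here the "if" lives in learned weights. No human wrote the threshold,
    # and nobody can point at a line that decided a given applicant.
    def approve_loan_model(model, features):
        return model.predict([features])[0] == 1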
The distinction is accountability: determining whether a human decided the outcome, or whether it was decided by an obscure black box in which data is algebraically twisted and turned in a way no human can fully predict today.
Legally that accountability makes all the difference. It's why companies scurry to use AI for all the crap they want to wash their hands of. "Unacceptable risk AI" will probably simply mean "AI where no human accepted the risk", and with it the legal repercussions for the AI's output.
> We'll probably have to wait until they fine someone a zillion dollars to figure out what they actually meant.
In reality, we will wait until someone violates the obvious spirit of this so egregiously, ignores multiple warnings to that end, and winds up in court (a la the GDPR suits).
This seems pretty clear.
That means you can answer the question of whether they comply with the relevant law in the necessary jurisdiction and can prove that to the regulator. That should be easy, right? If it's not, maybe it's better to use two regexps instead.
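To spell out the quip: a deterministic check is something you can enumerate and defend in front of a regulator. A sketch, with made-up patterns (Python):

    import re

    # Every branch here was written, and is owned, by a human.
    ALLOW = re.compile(r"^[\w.+-]+@example\.com$")   # known-good senders
    DENY  = re.compile(r"(?i)unsubscribe|viagra")    # obvious spam markers

    def accept(sender, body):
        return bool(ALLOW.match(sender)) and not DENY.search(body)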
I understand that phrase to have the opposite meaning: something _can_ adapt its behavior after deployment and still be considered AI under the definition. Of course, this aspect is well known in machine learning terminology as online learning.
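A minimal online-learning sketch (Python with scikit-learn; the features and labels are placeholders) showing exactly that: behavior changing in production with no new version shipped:

    from sklearn.linear_model import SGDClassifier

    model = SGDClassifier()
    model.partial_fit([[0.0, 1.0]], [0], classes=[0, 1])  # initial fit

    def handle_request(features, true_label=None):
        pred = model.predict([features])[0]
        if true_label is not None:
            # The model keeps adapting after deployment: this is the
            # "adaptiveness after deployment" the definition allows for.
            model.partial_fit([features], [true_label])
        return pred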
No offense but this is a good demonstration of a common mistake tech people (especially those used to common law systems like the US) engage in when looking at laws (especially in civil law systems like much of the rest of the world): you're thinking of technicalities, not intent.
If you use Copilot to generate code by essentially just letting it autocomplete the entire code base with little supervision, then yeah, sure, that might fall under this law somehow.
If you use Copilot like you would use autocomplete, i.e. by letting it fill in some sections but making step-by-step decisions about whether the code reflects your intent or not, it's not functionally different from having written that code by hand as far as this law is concerned.
But looking at these two options, nobody actually does the first one and then just leaves it at that. Letting an LLM generate code and then shipping it without having a human first reason about and verify it is not by itself a useful or complete process. It's far more likely this is just a part of a process that uses acceptance tests to verify the code and then feeds the results back into the system to generate new code and so on. But if you include this context, it's pretty obvious that this indeed would describe an "AI system" and the fact there's generated code involved is just a red herring.
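Roughly this shape, where llm_generate stands in for whatever Copilot-style generator is used (everything here is a hypothetical placeholder sketching the loop, not any real tool's API):

    import subprocess

    def llm_generate(prompt: str) -> str:
        # Stand-in for the code generator; a real loop would call an LLM here.
        return "def add(a, b):\n    return a + b\n"

    def run_acceptance_tests(code: str) -> tuple[bool, str]:
        with open("candidate.py", "w") as f:
            f.write(code)
        result = subprocess.run(["pytest", "tests/"], capture_output=True, text=True)
        return result.returncode == 0, result.stdout

    prompt = "initial spec"
    for _ in range(10):
        code = llm_generate(prompt)
        ok, report = run_acceptance_tests(code)
        if ok:
            break                                       # no human in the loop
        prompt = f"{prompt}\nTest failures:\n{report}"  # results fed back in

The feedback arrow is the point: once test results steer the next generation, the system as a whole is inferring how to produce outputs, and the generated code is incidental.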
So no, your gotcha doesn't work. You didn't find a loophole (or anti-loophole?) that brings down the entire legal system.