It's funny: it reminds me of the "strong cryptography export ban" of the 2000s (btw, Oracle Java still has it: you must download the strong crypto package separately)...
Obviously, it failed! You can use bans to stop a single vendor with a unique technology from providing hardware components, but you can't stop a whole field from spreading knowledge!!! People come and go, meet, talk, write, exchange knowledge... so sooner or later (and sooner now that the internet is not limited to the US) the software will implement these ideas.
The only ones that will be challenged are the "big corps" relying on IP protection. But if I remember correctly, Google has a research center in China... so the knowledge will already be in China and won't even need to be "exported".
As gross as the government-meddled Java crypto libraries made me feel, the fact that the meddling-free workaround library was called "bouncycastle" really made my day.
> Here at the Bouncy Castle, we believe in encryption. That's something that's near and dear to our hearts. We believe so strongly in encryption, that we've gone to the effort to provide some for everybody, and we've now been doing it for almost 20 years!
I don't think the government actually wanted to stop math [1] from leaving the border. It was just a tool to use against people they suspected of being involved in espionage. Just another reason to put you in a cell.
The intentions of laws and their real-world implications are so often disconnected. There have been a million cases where the mere side effects of a law caused more trouble for its intended beneficiaries than the law ever helped (see rent control laws in NYC and Toronto ultimately reducing low-income housing significantly for over a decade in the 70s and 80s, because no developer would build there, which significantly increased average rents by reducing supply).
Thomas Sowell wrote quite a few good books about this.
I have no doubt it applies equally to export laws.
No, it makes it a pain to export any product abroad. In today's world, where your market is probably not just the US anymore, it is insane to have such rules in place.
The wiki link you provided does not contain the words "export" or "espionage". Are you trying to imply that R, S, and A were suspected of espionage? You'll need a real source for that.
The link is for simple English Wikipedia, which does not use those words. Perhaps the standard English Wikipedia article would be a better reference:
https://en.m.wikipedia.org/wiki/RSA_(cryptosystem)
Cryptography devices (including software) were on the US Munitions List, which meant that it was on par with exporting guided missiles. There was an interesting episode of Darknet Diaries on this topic that covers some of the crazy restrictions, such as professors not being able to teach cryptography classes to foreign students.
Feedback and control systems can be described in a couple pages of text and a handful of simple equations. Suppressing knowledge is never an effective strategy. We shouldn't hoard the Krabby Patty Secret Formula.
That was the theory, but I don't recall courts ever actually ruling on this.
There was also the RSA t-shirt. The originals are no longer being sold, unfortunately, but these days it's easy enough to do a custom design for your own made to order. My take: https://www.customink.com/ndx/?cid=jxu0-00bx-9p0k
Thank you, my memory was hazy on this. So there was a court case, but it sounds like it was ultimately undecided, since the original opinion (that it constitutes protected speech) was withdrawn?
> The Ninth Circuit ordered that this case be reheard by the en banc court, and withdrew the three-judge panel opinion ... almost nine years after Bernstein first brought the case, the judge dismissed it and asked Bernstein to come back when the government made a "concrete threat"
Actually, nuclear KNOWLEDGE is quite widespread BUT nuclear ENGINEERING is not, because it requires a lot of specific (and therefore controllable) physical resources and technologies (valves, tweaks, and hacks) that are controlled by a monopoly of a few states. So engineering is more easily and effectively restricted.
A parallel would be face recognition: the algorithms (the knowledge) are quite widespread, and new algorithms build on the shared knowledge of the field, but the tuning of the parameters and the input data, access to top hardware to speed up the learning process and evaluate new strategies, and access to big databases are not.
The problem for a state that wants face recognition technology is either to obtain the parameters or to invest the time and money to build them. So it's not really "bannable", because these resources are available everywhere (and lots of companies are ready to provide them).
A more "bannable" tech would be the quantum computer, because it relies on new engineering, not only on knowledge.
Probably yes. They're clearly not succeeding at it and in return this has probably caused feuds between countries and peoples that will last decades if not centuries.
As a person in the US who is not a PR or citizen and has worked on things the US Government likes to know are being exported, even exposure of the knowledge to a citizen of a target country is "deemed export." Google will probably have to reorganize its research center to comply with this and bring AI research that falls under this restriction back to the US. I don't disagree that it's futile though.
Additional details are present in the unpublished rule (document 2019-27649 [1][2]).
-- cut --
Geospatial imagery “software” “specially designed” for training a Deep Convolutional Neural Network to automate the analysis of geospatial imagery and point clouds, and having all of the following:
1. Provides a graphical user interface that enables the user to identify objects (e.g., vehicles, houses, etc.) from within geospatial imagery and point clouds in order to extract positive and negative samples of an object of interest;
2. Reduces pixel variation by performing scale, color, and rotational normalization on the positive samples;
3. Trains a Deep Convolutional Neural Network to detect the object of interest from the positive and negative samples; and
4. Identifies objects in geospatial imagery using the trained Deep Convolutional Neural Network by matching the rotational pattern from the positive samples with the rotational pattern of objects in the geospatial imagery.
Technical Note: A point cloud is a collection of data points defined by a given coordinate system. A point cloud is also known as a digital surface model.
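For concreteness, criterion 2 above ("scale, color, and rotational normalization" of the positive samples) can be sketched in a few lines. This is purely my own illustration, not code from any regulated product, and it omits the rotation step:

```python
import numpy as np

def normalize_chip(chip, out_size=64):
    """Nearest-neighbour rescale of an image chip to a fixed size, then
    zero-mean/unit-std per colour channel. Rotational normalization is
    omitted; in practice you would estimate the object's heading and
    rotate it to a canonical orientation."""
    h, w, _ = chip.shape
    rows = np.arange(out_size) * h // out_size
    cols = np.arange(out_size) * w // out_size
    scaled = chip[rows][:, cols].astype(np.float64)   # scale normalization
    mean = scaled.mean(axis=(0, 1))
    std = scaled.std(axis=(0, 1)) + 1e-8
    return (scaled - mean) / std                      # color normalization

chip = np.random.default_rng(1).integers(0, 256, size=(97, 113, 3))
out = normalize_chip(chip)
print(out.shape)  # (64, 64, 3)
```

The point is that every labeled sample ends up in the same canonical form, which is what "reduces pixel variation" before training.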
The export ban is on the software? Not the algorithm itself? And would it be broad enough that AI and quantum frameworks like PyTorch, TensorFlow, and Qiskit fall under its purview?
These restrictions seem to flow from a mental model that still views software as a product purchased in a shrink-wrapped box, rather than the services-based model that prevails today.
That is the point, to delay the adversary from gaining access to completed technology. An example is foreign adversaries purchasing PS3s with Linux to quickly access cheap computing power.
Whether frameworks are at risk depends on the wording of the ban, and the final determination comes from a judge hearing the case. The chilling effects are real, and it's possible framework development may well be hampered by the unknowns you have pointed out in your post.
An algorithm is an idea, and it's hard to stop ideas from spreading. Similar to how it is difficult to patent an algorithm: in the patent world you would have better luck tying that algo to a machine and patenting that.
I think the same concept applies here. Software is the machine that embodies the algorithm. It's tied to a company and to dollars.
What this will also mean is that startups that work in this space will need to watch out who they get funding from. If the VC is not US-based, CFIUS oversight may kick in.
I saw a great example of this when I was looking at VA gun laws today. They have an exception structured as follows: Virginia law exempts from these requirements any firearms shows held in any town with a population of not less than 1,995 and not more than 2,010, according to the 1990 United States census [1]
"The provisions of this section shall not apply to firearms shows held in any town with a population of not less than 1,995 and not more than 2,010, according to the 1990 United States census."
Any idea which town the law refers to? I looked at the 1990 census data [1] and the closest thing I could find is a town called Appalachia with 1994 people.
Can anyone comment on who the specific vendor or what the specific product might be? The AND criteria make this restriction pretty narrow, and the GUI requirement alone rules out practically everything I can think of.
Ah, I was scratching my head reading through it thinking about how unlikely it must be for something to have all of those properties. But if they were tailor made for a single product that makes sense.
Lots of interesting precedent to come out of it as well. By putting code on Github, an American website, as a developer are you actually exporting it? Or are the people cloning the repo reaching into America and extracting it?
If it is considered exporting, is GitHub the exporter, or is it the developer? Just as a company might produce a metal widget while another company procures and exports it: the original company that made the widget isn't the exporter.
I would not be surprised if we get something along the lines of "Unfortunately, this repository is currently unavailable in your country" sometime soon. Many websites still completely block EU users post-GDPR, e.g. the Chicago Tribune.
Remember how it was done the last time the idiotic US government pulled the same stunt with crypto... Zimmermann (PGP) physically printed and bound books containing the full source code in an OCR-friendly font.
Evidently, to the dinosaurs in Congress, a physical book is something COMPLETELY DIFFERENT from a file online.
Actually, when we buy lab equipment from the US we have to fill out paperwork saying that it will not be used to produce weapons, and that it will not be resold to "bad" countries.
There is probably some precedent from considering whether, e.g., Lockheed or FedEx is the exporting company in cases where weapons got exported to unwanted actors. GitHub is probably like FedEx in these cases.
Object detection in aerial images is a rather booming field with 100s of papers published on the topic and contests going on in top conferences. I wouldn't be surprised if OSM is also doing some of this.
Not so weird. Palantir put a GUI on Hadoop/whatever and that was enough to sell it to every government for citizen inspection. It’s the GUI that gets the contracts, not the technology alone.
You can't get a contract without marketing/sales interaction with the customers.
Customers at a high enough level to sign off on a payment with 7-9 zeroes following the number are not generally programmers (or, if they were, they haven't been working at the coal face for decades): they're senior managers or civil servants.
A GUI front end is a really amazing marketing tool for any piece of software insofar as GUIs are designed to expose all the internal configuration variables and controls in a visually appealing, or even intuitive, manner that is accessible to non-programmers. Like the folks signing the big checks.
(Here's a second possibility: we know it's possible to use CAPTCHAs to crowdsource recognition of objects. Maybe they're trying to prevent export of a NN training system that uses unwitting mechanical turks for quality control?)
Absolutely correct! I used to sell expensive software for Lisp Machines and what sold it was the UI that was dual purpose: for development and for demos to management that they could understand.
3) will stop them from buying an off-the-shelf, commercially ready product. Yes, they can develop one from scratch, but only at enormous cost and difficulty relative to simply spending dollars.
The GUI is specifically mentioned for the training (tagging) of positive/negative samples that the ML then incorporates.
If you think about tagging, it is something you need a GUI for, unless you are lucky enough to have pre-tagged samples.
E.g., the famous Silicon Valley hot dog AI. You either need a GUI to let users select hot dogs and not-hot-dogs in a bunch of images, or you need a bunch of images already tagged.
I thought input method is orthogonal to whether it’s a GUI. I think GUI is meant as opposed to text console or eg audio (like a phone dial-in service); it doesn’t necessarily mean a touch screen like I think you’re implying.
Yes. Or, the software will export everything into some common format, that could be opened with any GIS viewer, and we've just gotten around the restriction?
"(geospatial imagery) and (point clouds)" or "geospatial (imagery and point clouds)"?
Point 4 requires the use of geospatial imagery, so any point-cloud-only product would be exempt, it seems?
The document doesn't define "geospatial imagery", but that could surely include hobby and commercial drone footage of the ground. Perhaps even ordinary photography from security cameras that have user-defined object identification features? That would make it really quite broad.
But all we need to do is not normalize color, and then everything's exempt!
You’re forgetting that how the word “and” works in English is often not the same as logical AND, e.g. in “give me a list of people who live in New York and Los Angeles” the “and” means OR.
Good point. Though I think "and" in those other cases really means something like "additionally" or "plus" rather than OR. Your sentence could be written "... list of people who live in NY and people who live in LA.".
But it's still confusing here. If they mean it applies to both imagery software and point cloud software, then point cloud software would be excluded in point 2 because it doesn't have pixels or color (if lidar/etc). So it must be software that uses both. That makes more sense if it's aimed at a specific existing product.
I was doing similar work on a project using sonar imaging of the bottoms of water bodies, close to much of what they're excluding. Not sure how concerned they'd be with identifying freshwater creatures and riverbed structures...
So image recognition APIs like ones provided by AWS and GCP should be affected by this due to #2 and #3, no? Or the “provides a graphical user interface” part applies to all points?
The main restriction is "“specially designed” for training a Deep Convolutional Neural Network to automate the analysis of geospatial imagery and point clouds".
And having a GUI. It appears you can simply export a binary framework (with or without source) and let the buyer build an interface on top of it. Building an interface is not terribly advanced work and can be done by fresh bootcamp grads even...
Strangely given that it has to meet all requirements, doesn't that mean multiple people could release two or three different projects that work together?
Although saying "well technically" as you get dragged off for waterboarding may not make you feel better.
That would have been my guess at who this is aimed at too. Lots of applications outside the military and spook shops but that's always been the primary customer.
Bernstein's case established that he had a First Amendment right to publish source code under the law in effect at the time; he argued, successfully, that this was the form in which his research was communicated to other researchers. He won his case, but it took many years, and of course court cases are political processes; they may be decided differently when different judges have been appointed to the bench. It seems that machine-vision researchers may now need to make the same argument. It's probably worthwhile to save the neural network parameter vectors you currently have access to somewhere outside the US while that is still legal.
The ethical argument for why everyone should have access to cryptography is a lot stronger than why everyone should have access to satellite imagery recognition algorithms.
Also, cryptography requires both sides to use the same algorithm, while companies don't need to use the same recognition algorithms.
It also helped, in the crypto case, that you could print some version of it on a T-shirt or mail it on a postcard. It looked like speech, while neural net parameters don't.
Lawyers have made strong cases on both sides of the cryptography argument; probably they can on both sides of the satellite-imagery argument as well. Maps are the primary result of satellite imagery recognition and are a public good. Most covert activity visible on satellite imagery is environmental damage, which is often illegal and generally harms the public. Satellite image processing can be very useful for increasing agricultural production; restricting that to one country, or granting one country's companies an effective worldwide monopoly on increasing agricultural production, would be ethically unconscionable — in times of drought, it amounts to letting people starve instead of telling them how to raise adequate food.
But Bernstein's case didn't hinge on the likely consequences of strong cryptography being widely available; rather, he argued that he had a First Amendment right to publish his research.
I believe the emphasis here was on the generic right of disseminating research not on judging the necessity of a specific technology for particular audience.
Yeah, the thing about this stuff is the "law is so vague as to be meaningless" and "encryption/AI/whatever is just executing operations on a computer, what do you mean really?" are much harder defenses than one would think.
You might be reading too much into the word “political.” It’s legitimate for different judges to have different judicial opinions; that’s why appeals are often decided by a panel of judges.
Also many judges are elected, and in the US those who aren't elected are appointed by elected officials, and even in undemocratic countries, rulers consider the opinion of the public when they make judicial appointments. Moreover, most judges must apply statute law, which is politically enacted.
I recall something similar happening during the Bush administration in the 2000s. Many of our software customers were international. To obtain an export license we were required to scan our source code with scanning software from an approved Dept of Commerce vendor to look for all kinds of inappropriate code, like a too-strong cryptography algorithm in the licensing portion, or plagiarism of copyrighted code. The first couple of releases where this was done were brutal. Many of the developers not far out of university were used to taking anything from the internet/open source if it saved effort. There was no clear company policy about this until the export restrictions. Sometimes there would be a chain of borrowing half a dozen links long before a culprit turned up. We muddled through and fixed hundreds of flags. If I were the program manager, I'd schedule an export code scan every week to avoid late problems.
AI code is just another layer in this odorous process.
Makes me think that the big tech companies which absorb a large amount of new college grads each year have a bunch of copyright violations but won't be audited.
These companies have "strict" code review policies but often the reviewers are just a recent previous year's new college grad, now overconfident by a small amount of work experience.
We're in an early and explosive growth stage of AI, where well-established statistical knowledge is having an unreasonable effect when combined with computing power. I've yet to see an AI platform that is mindbending compared to doing basic math with a multivariate normal distribution. The eye-watering part is the sheer amount of computing power being thrown at simulating Go games, etc., etc.
Assuming that among 1.4 billion people there are a few good coder/statisticians, and using supercomputers [0] as a rough proxy for available computing power, it isn't obvious the US is even going to inconvenience the Chinese military. Presumably they will have a parallel AI effort anyway, given that they have been investing in the area.
They are Dumb and they are just Reacting out of some "fuck we have to do something" instinct, driven by jobless fucks like Peter Thiel/Graham Allison/Kai-Fu Lee and their constant rhetoric about AI, falling behind, and how it's going to affect everyone.
This is Fear based Decision Making 101. All it leads to is more absurd outcomes: Endless Wars, Huge Monopolies, more consolidation of power and resources in the hands of a few, and therefore more inequality.
These people and this thinking style would have more credibility if they had stopped Wars, reduced inequality, disrupted monopoly and oligopolies. They have not done that.
They can't imagine a Chinese AI team and an American AI team working together to solve problems in a humane way. They can't imagine constructing orgs that push that through. They can't imagine punishing their own who cross lines, out of fear that the other side won't.
When we allow Fear based thinking to dominate decision making, Imagination dies. Outcomes are consistently shit, and way below the potential of what people collaborating and communicating across artificial bullshit boundaries are capable of.
Pick a side and don't back Fear based Decision makers in your org. These guys hold back progress; they are the reason climate change research is hidden, endless Wars keep getting funded, and monopolies cling to power way past their expiry date.
How can it be the age of information and knowledge when fear wins?
> All it leads to is more absurd outcomes: Endless Wars, Huge Monopolies, more consolidation of power and resources in the hands of a few, and therefore more inequality
This is going to persist regardless of this decision
The thing you fear most is fear itself. But you shouldn’t. Fear is human and probably not going to stop being a thing any time soon, despite your demands.
None of the convnets are Gaussian. It was one of the big reasons convnets came to be at all, to model highly kurtotic distributions like natural images.
Chinese convnet research is absolutely state of the art already, indeed.
You can do basic math on a multivariate normal distribution to approximate other distributions. In classic feedforward networks, that's achieved by using nonlinear activation functions. Very wide networks turn into Gaussian processes in the limit [0]. There are also approaches explicitly embracing the "basic math on multivariate normal distributions" framework, using normalizing flows [1].
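A tiny sketch of that claim (my own illustration, not code from the cited papers): pushing Gaussian samples through a pointwise nonlinearity already yields a heavy-tailed, distinctly non-Gaussian distribution, which is the basic move that feedforward nets and normalizing flows stack in layers.

```python
import numpy as np

rng = np.random.default_rng(0)
z = rng.standard_normal(100_000)   # base distribution: N(0, 1)

def excess_kurtosis(x):
    """Excess kurtosis is 0 for a Gaussian; large positive values mean heavy tails."""
    x = (x - x.mean()) / x.std()
    return (x ** 4).mean() - 3.0

x = np.tanh(z) + z ** 3            # one pointwise "flow" step

print(excess_kurtosis(z))          # close to 0: still Gaussian
print(excess_kurtosis(x))          # large and positive: highly kurtotic, like natural images
```

The transform here is an arbitrary toy, but it shows why "basic math on a normal distribution" plus nonlinearity is enough to leave Gaussian territory.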
I'd be interested in how you would do NLP tasks, such as those now done with transformers like GPT-2 and BERT, by working with multivariate normal distributions.
There was a time when a software engineer didn't get basic stuff. A time when languages like C were developed where there wasn't an associative data structure baked in for example.
It wasn't because associative data structures are a secret tech that requires great insight to uncover. It is because the field was new and people hadn't cottoned on to how basic and important having access to hash-maps is. Times moved on. Now basically all modern languages have hash-maps as a basic data type.
'AI' is in that early phase where the engineering world is still getting excited over stuff that will be basic practice eventually. BERT and GPT-2 are signs of how much computing power Google et al.'s researchers have access to, not signs that the architectures are fundamentally complicated or somehow hard to work out if you live in China. AlphaGo, for instance, was breathtaking as a standalone project, but not hard to implement.
I spent at least five years trying to use statistical ML and MLPs to do NLP on social media comments, from about 2003. Nothing like a transformer (or even an RNN) occurred to me.
I have a belief that someone in the USSR worked out a way of doing fluid dynamics that has enabled the Russians to develop hypersonics and supercavitation. It is probably rather straightforward, in the style of Navier-Stokes, if you know the principles. No one in the West (or China) knows those principles, so Western torpedoes and reentry vehicles are rather poor vs Russian ones. Once you grasp how something works, the fact that it's rather easy to apply, compared to the process of getting the insight, shouldn't detract from the value of the insight.
The wiki page on RNNs says the early groundwork was done in 1986, and the LSTM was a 1997 innovation. If they didn't occur to you in 2003, that doesn't imply much; they are not surprising concepts.
The surprise was that in the mid-2000s suddenly GPU became so powerful that LSTM could be used to achieve interesting results. The story here isn't the models, it is the computers running the models.
> There was a time when a software engineer didn't get basic stuff. A time when languages like C were developed where there wasn't an associative data structure baked in for example. It wasn't because associative data structures are a secret tech that requires great insight to uncover. It is because the field was new and people hadn't cottoned on to how basic and important having access to hash-maps is. Times moved on. Now basically all modern languages have hash-maps as a basic data type.
That is a really weird historical fantasy. If you pull out your copy of volume 3 of Knuth and look at chapter 6, it is obvious that associative data structures were some of the first ones to be developed in the field.
The reason why hash tables became so popular is the explosion in main memory size starting in the late 1990s. The trade-offs between the possible associative data structures became less important for a lot of applications, especially when you consider how much needed to be done on secondary storage and specifically on tapes in the 1960s through the 1980s.
"I've yet to see any AI platform that is mindbending"
1. Siri / Alexa and similar for voice recognition and doing basic tasks.
2. Face recognition for uploaded pictures on Facebook.
3. Lots of people use FaceID on iPhones.
4. Tesla and other SDC systems: yes, it's not good enough for general use, but the fact that it mostly works in California is pretty cool.
"Mindbending" is subjective, but these all use DNNs, and are used by millions to billions of people every day. So it's incorrect to suggest that all DNN use-cases could be replaced with "doing basic math with a multivariate normal distribution".
1, 2, and 3 are absolutely mundane. One could argue that, if anything, they are disappointing: I remember passable speech recognition running on a Pentium II with 32 megs of RAM (Dragon NaturallySpeaking was first released in 1997). "Natural" language parsers have been around since the first text adventures (the 80s, as far as I remember). Considering the computing resources big G and big A have, the performance of this "AI" is mediocre at best.
Facial feature extraction is not rocket science either; it dates back to at least 1993, so a 386/486 with 4-8 megs of RAM.
You want mindbending and scary? Mindbending: deepfakes. Scary: automated ai-based law enforcement.
IMO, working memory at scale, via improvements to more obscure methods like a DNC, is what will be the next leap and make today's 'AI' seem mundane. Most AI today, like deepfakes, involves giving the model all available memories via the input.
I sense that they will hurt themselves more with the imposed additional bureaucracy, hindering business and collaboration (the latter being a key aspect of efficient research), not to mention the multinational companies already operating in the area.
Also, how effective could this regulation be with so much knowledge and open source code already disseminated in the field?
Unlike the space race, where Russia and the US were significantly ahead of everyone else, the field is far more level in AI. Arguably the US could even be behind in some areas.
Wonder what the real world impact of this will be. Not much, I expect.
>Wonder what the real world impact of this will be. Not much, I expect.
Here's a thought experiment I use to imagine the impact of AI:
Imagine you've got a million people at your disposal. At zero cost and with no downtime, these people can remotely operate robots, understand text, interact using natural language, or classify objects in images, all with human-level intelligence and accuracy. Now what?
Obviously there are areas where AI can outperform humans, like mechanical accuracy and mathematical computation. But in general, I find this experiment works pretty well.
Now imagine doing crowd control: 10 frames per second, 100k people; if you need just 1 second to recognize a face, you've just saturated your million-human AI. The point of digital AI is that it can scale, almost indefinitely.
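The arithmetic behind that, spelled out with the same assumed numbers (10 fps, 100k people in view, 1 second of human attention per recognition):

```python
fps = 10                       # frames per second
faces = 100_000                # people in view of the cameras
seconds_per_recognition = 1    # human attention needed per face

recognitions_per_second = fps * faces
workers_needed = recognitions_per_second * seconds_per_recognition
print(workers_needed)          # the entire million-person pool, every second
```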
OK, then imagine 100 billion people. Scale isn't the point of the experiment - the point is bounding expectations based on probable (maximal?) capabilities.
Perhaps a post-Singularity AI will have wild capabilities beyond our comprehension, but that is outside the scope of this experiment.
You don't need to recognize every face in every frame. People move slowly compared to 10 Hz, so tracking "this is a person moving around" is way easier than identifying faces anew in every frame.
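A toy sketch of that point (my own example, with made-up coordinates): once a face has been identified, carrying the identity forward is just nearest-neighbour association of detections between frames, with no re-identification needed.

```python
import numpy as np

def associate(tracks, detections, max_jump=20.0):
    """Match each tracked position to the nearest new detection.
    Identity carries over for free as long as motion between frames is small."""
    updated = {}
    for name, pos in tracks.items():
        dists = np.linalg.norm(detections - pos, axis=1)
        best = int(dists.argmin())
        if dists[best] <= max_jump:   # plausible movement between frames at 10 fps
            updated[name] = detections[best]
    return updated

tracks = {"alice": np.array([100.0, 200.0]), "bob": np.array([400.0, 50.0])}
frame2 = np.array([[402.0, 53.0], [103.0, 198.0]])   # new detections, unordered
tracks = associate(tracks, frame2)
print(sorted(tracks))   # ['alice', 'bob']
```

Real trackers handle occlusion and crossing paths, but the cheap case is exactly this cheap, which is why per-frame recognition is the wrong cost model.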
The USA is probably still ahead of the rest of the world in AI, thanks to Google. Google sits on a massive amount of text, video, and speech data. Google is one of the biggest coordinated entities (in terms of business, data, hardware, and software) on Earth aimed at advancing AI. The only rival in terms of budget and data is probably China, with a few state-sponsored companies taken together (Huawei + Alibaba + Tencent).
All other countries probably have smart researchers and engineers, but no one else has the data machine that Google has...
My guess is that this is like the export controls on munitions or encryption. You can't export to China but you can fairly easily get a license to sell it in Australia or any other US allied country.
"The rule will likely be welcomed by industry, Lewis said, because it had feared a much broader crackdown on exports of most artificial intelligence hardware and software"
It is meaningful if the US is ahead of any cooperating bloc of powers in any covered area of image recognition. This is much broader than being ahead of specifically China on the whole. For it to not be true would essentially imply that no new research is happening in the US.
I doubt the US is ahead in this area. China gains heaps upon heaps of practical experience in CV by sheer virtue of the breadth of its surveillance networks. Not to say we aren't doing the same here in the US, but our efforts seem much more scattered.
It's a positive feedback loop: more effective surveillance network -> larger investment (from government or government contracts) -> more applications/startups/new programs -> more research funding/aggressive hiring -> higher recognition for CV/ML researchers and engineers -> more and more people doing CV/ML -> more data, algorithms, and applications -> more effective surveillance network. Btw, it is deployed at scale in the real world, which is a huge advantage for progressing any CV/ML research.
Not to mention that nowadays deep learning is pretty much a big-data game.
> But isn't that orthogonal with developing the algorithms?
Assuming it is - China is also competitive on developing algorithms. A few months ago there was a post on explosion of AI papers submitted by authors at Chinese research institutions, with no signs of slowing down.
They use it for flight check-ins, entering parks, fining you instantly for jaywalking, buying a soda from a vending machine, and many more use cases. I assume they are ahead.
I think this speaks more to the government and the population's acceptance of such things than to the state of research. Even in a hypothetical scenario where the US is far ahead of China in the field at the moment, I do not see this kind of thing going over well at all with the public in the US.
Given two research labs, with one having a bit better equipment, while the other has a more proven history of publishing innovative research, it would be disingenuous to say that the better-equipped lab is ahead until it has actually produced some research that puts it ahead. It might help them gain a lead, but it also might result in nothing. Better equipment is just one of many components that affect the chances of success. Until that lead is acquired, I don't really think it would be appropriate to say that they had done so.
Note: the lab with a history of published innovative research in my analogy isn't supposed to represent the US or any country in specific. This was just an example to better illustrate the point I was making. The only thing that should matter for whether someone is ahead or not in this situation is the actual proof of being ahead, not "the opportunities that could lead to them being ahead". Otherwise, we should also start immediately trusting all those articles that pop up once every few months about how some random city is "about to become the next Silicon Valley, here are the reasons why".
Anecdotally, even the number and quality of publications/papers by Chinese authors in English outpace the entire English-speaking region. They probably publish a lot more in their native script.
Most of the time whatever interesting thing (posted to HN) originated in some Chinese startup/university.
They see AI as the next industrial revolution and have decided to make sure they are at its forefront. And likely they will, which means that we’ll be the ones trying to import their tech.
Didn’t the most advanced AI research come straight out of China?
The ResNet Project [1] of 2015. It was used as the core algorithm behind Google’s AlphaGo in 2017.
The 4 computer scientists behind the paper were Chinese nationals. They were all educated by the Chinese educational system, and got their PhD there (one guy was from Hong Kong). They worked at Microsoft at the time, so Microsoft paid them a salary for their work, but I think Microsoft benefited more from their research, as did the other Silicon Valley and American companies.
Three of them went to start or lead other Chinese unicorn companies, and one guy went to Facebook in Silicon Valley, so Facebook benefited here.
Considering that AI is whatever marketing folks want it to be, it'd be interesting to see their legal definition of AI. Anyone have a link to the actual document?
My off-the-cuff interpretation is that the rule would only cover convolutional neural nets that are trained to identify and determine the orientation of specific objects in geospatial imagery. If the neural net's input/output aren't wrapped in a GUI, it sounds like it still might be OK to export without a license.
My reading is that it's the GUI used to train a neural network to identify any objects in geospatial imagery. I'm definitely less okay with that. Although, I feel like the GUI is not the hard part to make, so it's a weird part to restrict.
1. Provides a graphical user interface that enables the user to identify objects (e.g., vehicles, houses, etc.) from within geospatial imagery and point clouds in order to extract positive and negative samples of an object of interest;
2. Reduces pixel variation by performing scale, color, and rotational normalization on the positive samples;
3. Trains a Deep Convolutional Neural Network to detect the object of interest from the positive and negative samples; and
4. Identifies objects in geospatial imagery using the trained Deep Convolutional Neural Network by matching the rotational pattern from the positive samples with the rotational pattern of objects in the geospatial imagery.
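For the curious, step 2 of that list can be sketched roughly in code. This is purely an illustrative guess at what "scale, color, and rotational normalization" might mean in practice; the `normalize_sample` function and the orientation heuristic are entirely hypothetical and are not taken from any actual regulated software.

```python
import numpy as np

def normalize_sample(img, size=32):
    """Illustrative take on step 2 of the rule: scale, color, and
    rotational normalization of a positive sample (H x W x 3 array)."""
    # Scale normalization: nearest-neighbour resize to a fixed grid
    h, w = img.shape[:2]
    rows = np.arange(size) * h // size
    cols = np.arange(size) * w // size
    img = img[rows][:, cols]
    # Color normalization: zero mean, unit variance per channel
    img = (img - img.mean(axis=(0, 1))) / (img.std(axis=(0, 1)) + 1e-8)
    # Rotational normalization (toy heuristic): flip 180 degrees so the
    # brighter half of the patch always ends up on top
    if img[size // 2:].mean() > img[:size // 2].mean():
        img = np.rot90(img, 2)
    return img
```

After this kind of preprocessing, every positive sample has the same size, the same per-channel statistics, and a canonical orientation, which is presumably what lets the CNN in steps 3-4 match rotational patterns.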
What counts as "geospatial imagery"? Could this apply to any training UI for self-driving cars, maps, street view, etc.?
Looks like it's very specific and targeted to military applications. Although the GUI requirement seems to make it a little too specific to be applicable.
My initial instinct exactly, because major corps are global now, which means they can easily set up shop anywhere on Earth: subsidiaries, but also quasi-independent structures which might only be related through distant funding or meta-agreements.
So you can be an American company with tons of "friends" in the EU, Asia, Latin America and now Africa, doing stuff (research, product), and you would just happen to buy/sell from/through these independent actors. Fiction-Google: “Oh but that's not us! It's Oogleg, a Swiss company! It's true 95% of our private shareholders also have shares in Oogleg, but that's only circumstantial, these are large funds you know... they actually have shares in 95% of businesses altogether through ETFs and mutual fund dilution. + some legalese blabla.”
There goes your protectionism, state governments! You'll get your import taxes for physical goods and on-prem services, but overall it certainly won't impede or even touch the thriving heads, the global leaders of the business world. Not anymore. That was in another time, before global networks.
And actually, we might think Fortune 100, perhaps 1,000; but in truth it's probably much more (cue 80% of GDP in the form of SMBs) because how do you enforce a restriction on remoting to contribute to some repo somewhere?
Note that this is true as of 2020, factually from a technical standpoint, but given a few decades and some generalized country-based firewalls (it's coming, in all likelihood) + convenient surveillance and you get all the means necessary to enforce such policies anew.
I don't understand why you mention ETFs. If Google said it was meaningless that they were in a total market index fund with almost every other public company, then they would be right. But whether they were or weren't they could do business with somebody just the same. Did you think that companies can't interact if they're not subsidiaries of the same organization? Not only can they do whatever they want bilaterally, but often companies or other organizations set up a joint board or company or something with representatives to work on something of common interest. It probably has to be done the right way to avoid antitrust, but it's done a lot, and by government agencies too. This is not a mutual fund or ETF; the joint entity is controlled by the members, not vice versa.
Oblig. disclaimer, IANAL and not a financial advisor either.
> Did you think that companies can't interact if they're not subsidiaries of the same organization?
Of course not :) I however wonder if defending anti-trust from a subsidiary strategy would work — at least in France, I'm pretty sure taking half your execs and hiring them in a subsidiary which you control will NOT get you past anti-trust regulation.
You might say "but it's legal!" and the judge will kindly ask you not to mock the court by disingenuously failing to address the case at hand — are you or are you not effectively in a monopoly, or cartel situation? Legal or not in terms of legal structure doesn't matter because antitrust is 'above' in the hierarchy of norms (so to speak, my law studies are really far away now, and I was more into public than private law).
Case in point though, shareholding is even legally restricted in some sectors (e.g. media, and that was a strong motivation for e.g. Facebook trying not to be filed as a media group, at least in the EU).
I have absolutely no idea how this would fly in the US. I bow to your expertise, here.
A good example, I think, will be the shareholding structure of Libra (if it ever comes to fruition), where many actors essentially hide their participation behind layers of companies, like some onion (there was a good infographic which you might google on the topic). It's legal, technically, but would it stand in front of a supreme court antitrust case?
As far as I know from history, even legal lines tend to become blurry in major antitrust cases because these are, by essence, out of bounds of 'normal' operation; they're fringe cases that sometimes require a new ad hoc law to take us where we want to go (I seem to remember elements of Teddy Roosevelt's opposition to Rockefeller, details of the Bell system breakup too, but I'm really not sure. Here in the EU, it's really common —all things considered— to just make new law whenever the current letter fails to live up to the desired spirit).
Thank you for the remarks, I'll probably refrain from speaking about antitrust in the US until I have a better understanding of those.
Seems pretty narrowly targeted, probably similar to things like ITAR, quantum, and crypto - which already require regulatory disclosure. Probably just to make sure that US companies aren't doing Project Maven (or the like) for China. Currently, best I can tell, there's nothing in place to prevent such a "collaboration".
They say it doesn't apply to Canada. What prevents a Chinese company from opening a business in Canada and getting access to US AI software without the license?
I used to put on my tinfoil hat and imagine that cryptography was the field to study if you wanted secret government agents to visit you. Maybe next time I will instead imagine that computer vision is what summons the secret agents.
More seriously, computer vision is going to be important and it appears to be far less known than machine learning and has higher barriers to entry. I'd exchange a few introductory machine learning books for more good computer vision introductions.
Any suggestions on how to get started with computer vision?
As other commenters have stated, this export ban seems both very narrow and an extremely good idea.
My interpretation of the meaning & intent of the ban is that it is banning GUI-based tools for training CNN's to automatically identify specific types of objects in aerial imagery (e.g. distinguishing a limousine from a different type of car). These CNN's being trained are almost certainly intended for use in autonomous targeting of airstrikes by fully-autonomous weaponized UAV's.
Here is the text of the specific ban:
https://s3.amazonaws.com/public-inspection.federalregister.g...
Here is a really good book for background on this topic:
https://www.amazon.com/Army-None-Autonomous-Weapons-Future-e...
Not related to AI but instead related to geo-spatial imagery, I’m pretty sure I saw YT videos of ISIS commanders coordinating suicidal VBIED attacks in Syria using Google Maps aerial imagery. That was happening back in 2014-2015, when such videos were not instantly banned on /r/combatfootage and /r/SyrianCivilWar .
I sold AI software (for building expert systems) for Xerox Lisp Machines from 1982 to 1985, and the lawyers at my company complained that the $5K license price did not make up for the hassle of foreign sales (I sold to customers in Japan, Norway, and Germany). So, export controls are not such a new thing.
Because this is so incredibly broad, there's a good chance that >20% of people here will be working on something that falls under this at some point in the next decade.
While we patiently await for a HN user (or, let's be honest, one of the ancient cryptologist-lawyers who come out of the woodwork every time something like this happens and sue the government) to fix this by suing the government on free speech grounds, don't forget that git, mercurial, fossil, bazaar and more are all decentralized, can't actually be censored at scale, and can be effectively hosted and mirrored trivially.
I actually think it's a well-intentioned law, and it's not like it'll harm most people, but it's still something that should be stood against on principle.
It actually sounds far more narrow than the title implies?
> Under a new rule which goes into effect on Monday, companies that export certain types of geospatial imagery software from the United States must apply for a license to send it overseas except when it is being shipped to Canada.
The measure covers software that could be used by sensors, drones, and satellites to automate the process of identifying targets for both military and civilian ends, Lewis said, noting it was a boon for industry, which feared a much broader crackdown on exports of AI hardware and software.
"sensors, drones, and satellites" used to target anything means that you can't even send a Ring camera to Europe.
Definitely worth mentioning, thank you. However, I was aware of that and believe there are currently no cryptography related restrictions (right?), so I'm still wondering if this is a zero-to-one situation. Has the software export restriction been switched from off to on?
The reason for this has to be economics / lobbyist driven. It makes no sense technologically (because it could not be effective) and there are far more dangerous examples of American companies developing technology that assists the Chinese military, such as private search engines and social credit systems that leave the general populace unable to make democratic influence on military actions or government policy.
The main impact of these regulations is not going to be on the software availability overseas but on the software jobs availability for foreign nationals, IMO.
I work on ITAR-regulated software and, even though the software itself is exported all over the world, I would not be able to write it if I had been a national of a restricted country, working in the US on a temporary visa.
Progress in AI tech has been due to open sharing of knowledge, so much so that even companies such as Apple which tend to keep its research under closed doors started publishing open Machine Learning journals to attract talent.
AI tech is too powerful to be monopolised; if not democratised, it might become another 'semiconductor' industry.
"... boost oversight of exports of sensitive technology to adversaries like China, for economic and security reasons..."
I think economic is the keyword here. From what I gather this is not the first time the US is doing something like this. I am pretty sure other countries have done the same in their particular areas of concern. They're just not as mighty and famous as the US so nobody pays attention. So much for free market.
Anyways, I think it is a little too late, and all it will accomplish is opening a window of opportunity for other players.
Also, because it is formulated way too broadly and has an escape clause (apply for a license), it might offer an unfair advantage right inside the US. Big companies will get it, and for smaller ones it may be more difficult. Same as the patent system. A company like Apple can patent my cat with little trouble. Me: not so much, and I speak from experience.
The rule refers to products, not freely available source code.
Companies don't sell stuff from GitHub, they sell proprietary stuff. It may well be based on open source code, but they own it to sell (license) it.
There's lots of examples of products in the geospatial domain that are not "AI" yet are restricted or even classified.
For example, ship detection from space-based radar. There are numerous public papers on the topic yet any software that purports to do this is subject to ITAR rules in the US and CGP rules in Canada.
Just because you may know how to do something doesn't prevent a government from restricting you from selling it, or talking about it. Even if it is "public". Machine guns are an old tech and yet are restricted. As they should be.
A protectionist response to the US losing power, and trying to stave off brain drain. They should be considering any person who knows how to program with TensorFlow a munition. Mitigating brain drain is a hopeless endeavor. The US should make its immigration more liberal to encourage the US as a destination for brain drain, as opposed to a source. Drain or be drained.
Sounds like something straight out of Terminator movies. I think they are afraid this technology will get out of hand in the near future and I can't really blame them. I remember that video with the Google Assistant making a hair dresser appointment. Pretty scary stuff
I think as opposed to classical statistics, except for the important subfield of statistical learning theory, machine learning relates much more to functional analysis, differential geometry/optimization over manifolds, and measure/probability theory. "AI" is whatever marketing people want to define it as.