The Commerce Department is considering national security restrictions on AI (nytimes.com)
163 points by tzury on Jan 2, 2019 | 91 comments



I'm curious how export restrictions would affect open source projects like Tensorflow and PyTorch. Would they be forced to become closed source? Could the license just include a disclaimer: "You're not allowed to use this if you're in one of the following countries: ..."? Would sites like Gitlab and Github be forced to implement per-repo geoblocking? Could they somehow be moved to ownership by a non-American entity that wasn't subject to such code? Does a US citizen contributing to a non-US open source ML project constitute a breach of export controls?


In the 90s it meant that open source software (we didn't use this term back then) like Kerberos was considered munitions, and it was illegal to provide it to non-US persons without a license.

At Cygnus we generated a version with no cryptography at all and sent it to Switzerland, where someone wrote crypto routines (available in the open literature) that interoperated, and then everyone on the planet had access to it (though American folks had a different version). All so that the Swiss stock exchange could use Kerberos for single sign-on and end-to-end encryption.


Was publishing cryptography papers in open-access journals also illegal? Without such a ban it seems it would be very hard to stop the export of AI, as it's not particularly difficult to implement an AI algorithm from spec, just tedious; certainly much easier than implementing one's own crypto.

>illegal to provide them to non-us persons without a license

I wonder if in this case Git{hub|lab} would need the license, or the person uploading the code?


You couldn't export software implementing cryptography, but there wasn't any similar restriction on books about cryptography, even if the book contained detailed descriptions of the algorithms and the mathematics!


The PGP source code was published as a book to avoid this https://philzimmermann.com/EN/essays/BookPreface.html


The Darknet Diaries podcast has an episode on this called 'The Crypto Wars': https://darknetdiaries.com/episode/12


Wasn’t it also published as a T-shirt?


You are thinking of the RSA perl t-shirt.
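
The point of that shirt was that export-controlled crypto fits in a few lines. A toy RSA round trip, sketched here in Python rather than the shirt's Perl, with insecure textbook parameters just to show the scale:

    # Toy RSA with tiny textbook primes -- illustration only, not secure.
    p, q = 61, 53
    n = p * q                # public modulus (3233)
    phi = (p - 1) * (q - 1)  # Euler's totient (3120)
    e = 17                   # public exponent, coprime with phi
    d = pow(e, -1, phi)      # private exponent via modular inverse (Python 3.8+)

    msg = 42
    cipher = pow(msg, e, n)
    assert pow(cipher, d, n) == msg  # round-trips back to the original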



That means we would have two free products competing with each other. Open source is not one software to rule them all. If you think it is restrictive to Russia, China, or Switzerland: open your own sources! Don't just take; give up your own milk and let the world be free, not just free according to American contributions. We need more makers, not free riders.


Actually it meant one version: the one produced outside the USA (since there was no restriction on using that inside the USA). So the law achieved roughly nothing besides exporting coding jobs.


PGP printed their code in a book and exported that. That clarified matters, and the courts ruled that source code was protected by the First Amendment. This is still true unless the courts overturn precedent.


I understand why they want to—even to the point of thinking it would be the right move if only it would actually work—but it won't work.

True, software is dual use[0] but it's too slippery. Even things like nuclear weapons designs[1] are getting pretty easy for people to get their hands on. This stuff is just text! How are you going to stop this from spreading? It won't work.

Besides, top-shelf AI tech isn't much better than mid-shelf stuff. Take self-driving car technology: we're talking about the difference between 1,000 km with no intervention by a human driver and 5,000 km. A terrorist weaponizing self-driving tech for, say, a mobile weapons platform (AKs connected to remote triggers) doesn't need the top-shelf stuff. They need an auto that can go maybe 10 km.

Thing is, unlike most people I talk to in tech, I actually support the majority of the objectives intelligence agencies in the west have. I just don't know how you solve this one. We need free speech to be a liberal democracy but information itself is becoming dangerous. How do you square that?

Further, if the power of individuals keeps going up due to technological advancement how do you maintain security and freedom at the same time? The only way I can see it working is for individuals to continue to get more peaceful.

[0] https://en.wikipedia.org/wiki/Dual-use_technology

[1] Thank goodness weapons grade fissile material is hard to make.


> We need free speech to be a liberal democracy but information itself is becoming dangerous. How do you square that?

You can't. Either speech is free or it's not. There are reasonable restrictions - literally inciting violence or things done to intentionally cause harm ("Fire!" in a theater, etc).

For information, the only thing you can do is make it classified, which is hard to do when it's a private company doing the innovating.


There is precedent for using restrictions on freedoms to preserve those freedoms. A good example is Germany, which very highly values freedom of expression but has strong laws to defend those freedoms against anti-democratic forces, a system often referred to as "militant democracy": it is, not paradoxically, intolerant of intolerance, and so far it seems to work well. It shows that it is possible to draw a line somewhere in the law without resigning oneself to tolerating virulent evil in the name of freedom of speech.


>> a good example is Germany which very highly values freedom of expression

Can you give references?

In my experience of living in Germany, freedom of expression is welcomed as long as it's the right kind of expression.


Freedom of expression never means that others are not free to voice their dislike of certain opinions. And while some expression is frowned upon by other members of the public, you are still within your legal rights to express it. The lines you cannot cross are incitement to violence, denial of the Holocaust, breaking NDAs, and so on. Contrary to what many Americans think, Germany is not an oppressive country that censors you left and right; it actually has relatively similar free speech rights to the US and other western countries.


>> Germany [..] has relatively similar free speech rights to the US or other western countries.

At least since NetzDG was passed I'm not sure it's quite that simple.

Google[0] says this about the NetzDG:

"The Network Enforcement Law (NetzDG) requires social networks with more than 2 million registered users in Germany to exercise a local take down of “obviously illegal” content [..] To qualify for a removal under NetzDG, content needs to fall under one of the 22 criminal statutes in the German criminal code to which NetzDG refers."

Politico[1] said this: "Titanic, a German satirical magazine, was similarly barred after parodying anti-Muslim comments on its own Twitter account."

[0] https://transparencyreport.google.com/netzdg/youtube [1] https://www.politico.eu/article/germany-hate-speech-netzdg-f...


> denial of the holocaust

This is a good example of something that is, without a doubt, unconstitutional to ban in the US.


I have been growing to accept that almost any technology that is sufficiently powerful is dual-use.

The classic sci-fi example is spaceships; a rocket travelling at 0.2c is both a devastating weapon and an amazing tool for transit.

But so is AI tech, drones, and other technologies. Crypto can stop hackers and the government. 3D printers can print pen guns as well as plastic tools.
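
To put a rough number on the spaceship example above (assuming a modest 1,000 kg craft):

    # Back-of-envelope: relativistic kinetic energy of 1,000 kg at 0.2c.
    c = 299_792_458.0                      # speed of light, m/s
    m, v = 1_000.0, 0.2 * c
    gamma = 1 / (1 - (v / c) ** 2) ** 0.5  # Lorentz factor, ~1.02
    ke = (gamma - 1) * m * c ** 2          # ~1.9e18 J
    print(ke / 4.184e15)                   # ~440 megatons of TNT equivalent

Even a small craft arrives carrying hundreds of megatons' worth of kinetic energy.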


I think all tech is under the wings of Rule 34; it just takes a while for someone's imagination to kick in.


> Thank goodness weapons grade fissile material is hard to make.

The material itself is easy to make. It's the delivery systems that are hard to engineer. Missile technology is literally rocket science.


I agree that delivery systems are no piece of cake, but if I'm worried about nukes I'm more worried about terrorism than I am about nation states. From that lens, fissile material is indeed hard to make. The Iranians were pulling their hair out when seemingly fine centrifuges mistimed due to malware. If it were easy then they could have tried a different approach.


A Cessna with a sawed-off howitzer barrel is easy to engineer. The big problem, really, is moving fissile materials around while satellites are looking for them.


The answer is probably the same as for the internet: encourage development in the west, possibly with some DARPA-type subsidies to counter the Chinese subsidising their own.


The publication mentioned:

https://www.federalregister.gov/documents/2018/11/19/2018-25...

Relevant excerpt:

The representative general categories of technology for which Commerce currently seeks to determine whether there are specific emerging technologies that are essential to the national security of the United States include:

(2) Artificial intelligence (AI) and machine learning technology, such as:

(i) Neural networks and deep learning (e.g., brain modelling, time series prediction, classification);

(ii) Evolution and genetic computation (e.g., genetic algorithms, genetic programming);

(iii) Reinforcement learning;

(iv) Computer vision (e.g., object recognition, image understanding);

(v) Expert systems (e.g., decision support systems, teaching systems);

(vi) Speech and audio processing (e.g., speech recognition and production);

(vii) Natural language processing (e.g., machine translation);

(viii) Planning (e.g., scheduling, game playing);

(ix) Audio and video manipulation technologies (e.g., voice cloning, deepfakes);

(x) AI cloud technologies; or

(xi) AI chipsets.


It also names "visualization" and "molecular robotics" as (non-AI) representative technology categories. It includes the specific, the absurdly general, and the so-far-imaginary among its categories. The Federal Register publication is not very enlightening and I can't tell if the NYT story is based on anything more than that publication.


I'm assuming "molecular robotics" is intended to cover CRISPR-like technologies.


There's a separate section for biotechnology. Molecular robotics is listed under robotics along with smart dust and swarming technology.


Molecular robotics in a colloquial sense would cover any mechanical process implemented at the molecular level of organization. This would include any sort of sequence determination, construction, or modification, meaning most of biotech and nanotech. Seeing that biotech is treated as a separate matter is interesting.


If they want to control all of this stuff, programming will become almost impossible, since everyone is using these things today; it puts your coding in the hands of lawyers ("no, you can't call that method, it's against the law"). Even mobile and desktop OSes these days include many of those things, not to mention that a big chunk of cloud systems like Amazon's becomes a minefield.


What a bunch of maroons.

This stupid approach was such a disaster in the days of crypto controls. Lifting them protected Americans and unleashed a huge number of new capabilities.


You mean easing them? IIRC, there are still restrictions in place.


> Lifting them protected Americans and unleashed...

Source?


End-to-end encryption is now common, which protects everyone. SSL/TLS is now ubiquitous, as are secure messaging apps. Crypto research looks for vulnerabilities in all sorts of software. This protects everyone, including Americans.


Wide-ranging restrictions such as these are being tactically applied throughout the world. Germany and France enacted similar policies last year. As Ian Hogarth sketched out last year, we're entering an era in which AI becomes part and parcel of a country's geopolitics, an era of AI Nationalism, so to speak.

Hogarth expressed the opinion that Google's purchase of DeepMind (initially a UK-based company) will increasingly be seen as an amazing coup: a unit with some of the world's greatest minds in deep learning allowed to be sold to a foreign conglomerate. As time goes by and governments realize how strategic their AI communities are, I have to say I agree.


Well, if it's not DeepMind, Google can buy some other company and let it use the world's most advanced AI compute infrastructure, and who knows what they would do.

I don't doubt DeepMind's talents, but your comment seems to me an overstatement of its value.


> Germany and France enacted similar policies last year.

Source?


Searched and couldn't find anything for Germany.


It's the UK and the US. The UK traded VX for the hydrogen bomb and gets Trident from us with their warheads on top.


I swear, America is always at the height of a cold war that ended decades ago.


Ended? The Great Game continues to this day.


This is what we get for all the stupid fear-mongering done by people with no understanding of "AI". Pop scientists like Neil deGrasse Tyson and Elon Musk weigh in on a field with zero overlap with their expertise, and the media treats it as some kind of prophecy sent from the heavens. Don't get me wrong, there's scary shit happening with AI that we can legitimately talk about, but we are so far from a world with sentient computers that the problem posed by media figures is no more appropriate to discuss today than it was at the advent of neural networks.


https://youtu.be/GAXLHM-1Psk?t=945 - I think this commentary by Maciej Ceglowski rings true here.


Ah, it comes a bit later, at 19:00 -

"At that point people who are angry, mistrustful, and may not understand a thing about computers will regulate your industry into the ground. You'll be left like those poor saps who work in the nuclear plants, who have to fill out a form in triplicate anytime they want to sharpen a pencil."


Great vid! Thanks for sharing


Shane Legg, "a machine learning researcher and cofounder of DeepMind Technologies", for example, seems to think it's something to worry about. I don't think you can dismiss all the worrying as coming from people with no expertise. Slightly out-of-date interview here: https://www.lesswrong.com/posts/No5JpRCHzBrWA4jmS/q-and-a-wi...


What's even scarier is the blind, self-appointed belief that one may own the thought and conversation on a topic as impactful yet graspable as AI.


I don't believe that, if that's what you're implying. I think the people who should "own" the conversation, so to speak, are the people who have spent the most time doing AI research. A lot of the cultural understanding of what AI is reflects the type of sci-fi hacking that's depicted on TV.


How is this even going to work?

A large fraction of the advances in AI come either from China proper, from Chinese grad students in the US, or from ethnic Chinese raised in the Americas. I don't think a large fraction of this demographic shares the cold-war mentality of the hawks in the Pentagon.

What the USG will probably resort to is restrictions on the export of Nvidia GPUs/TPUs (similar to those on Intel Xeons). Hopefully this will have the unintended effect of breaking their monopoly on the DL market.


Maybe it's not yesterday's cold war, but it is a different kind of Cold War today. Your point is right on the mark. Imagine Russians working, living, and sharing in all the goodies here while not following the same ethical or market norms (the gene-edited babies are just the cases we know about; no free access to the Chinese market while the reverse stays open, etc.). What would any sensible American say to that these days?

Shall we open a free, no-restriction software border between China and America, rather than a one-way one? Shall we get Facebook and Twitter in there... no.

Open source is great in a free world and within a free world. At best the free rider pays a little, but the benefit goes only to their own country, even while they live here, study here, and suck you dry.


Do I really need to dust off my DeCSS Perl "this shirt is a munition" ThinkGeek shirt from way back?

Just because something is new or "scary" doesn't mean knee-jerk legislation will do anything other than make a country miss out while innovation strengthens other countries.


You're thinking of two different code-on-shirts episodes here (Federal export controls on cryptographic software and entertainment industry litigation over DVD decryption software), although both of the shirts may have involved Perl code. "This shirt is a munition" is from the cryptography export controls issue and DeCSS is from the entertainment industry litigation issue.


Uh oh, let's look at the crypto wars and see how that turned out. AI restriction is garbage; I think this is a way for the govt to steal AI tech from private corps. If you refuse to share, how can they assess its strength? With crypto, we could easily say keys are restricted to N bits. With AI, they will make up some rubbish and claim your AI is too strong because you refused to share. If you share and they find it good and think it gives the govt an advantage, they will restrict you too. Either way you lose. In the long run, though, the govt will lose.


The crypto wars worked out well from the NSA's point of view. Obviously determined individuals, foreign governments, terrorists etc were not deterred by it from obtaining strong crypto. But mass adoption of strong crypto was delayed by many years, enabling mass surveillance.


So you think by creating a negative association with AI, adoption will be delayed?


Do you restrict the distribution of sets of trained parameters? Do you restrict models for training? Do you restrict the datasets? Do you prevent journal publication of new developments? The fundamental algorithms are pretty well developed, just as Bernoulli's principle was well understood before the first airplane.

If you want to restrict something, you would restrict the tooling. Like we restrict the tooling to make advanced aircraft wings. In the case of ML that means you restrict ... migration of certain groups of engineers? Sale of GPUs?


While I don't advocate any restrictions on knowledge, if only for the very practical reason that they fail their own premise of keeping it away from the bad guys, and while I find "national security" to have been well and thoroughly abused to justify overreaching and draconian powers, there is something to be said about the power of the information that a well-trained model holds.

I believe it is fair to at least start with the premise that restrictions that would normally apply to some data would reasonably extend to a model trained on it; obviously, how much depends on the context and sensitivity of the data.


Regulation is certainly needed but we can be certain it won’t be done properly.

More worrying is the fact that for every "benefit" we receive from AI (self-driving cars, open source home automation), there are numerous, globally negative consequences, such as discriminatory practices based on biased training data. The Chinese executive mistaken for a jaywalker is a clear example of this.

It feels like this is a train that cannot be stopped, and that it will lead to disastrous circumstances.


The only thing that can stop a bad guy with an algorithm is a good guy with an algorithm.


Genuinely wondering, is that sarcasm?


> discriminatory practices based off biased training data

I think the car insurance industry has shown how unwilling governments are to do anything about that sort of thing. No need to involve AI.


This will work about as well as suing Napster did to prevent the spread of music sharing.


The same is true of most convictions and crimes.


Not really. Most good laws prohibit things most good people wouldn't actually do, and are mostly effective even if a few people don't follow them. So you start off with 80% compliance before you even pass the law, then you get another 15% from the people who might have done it but accept that there is a law against it and don't consider it worth breaking. So you get >95% compliance, which is generally good enough to be effective even if there is the occasional scofflaw.

Restricting information sharing on the internet is the complete opposite. If there was no law then everybody would do it, many people will fight you on principle to the point of overt civil disobedience, and if even one source is available the whole thing falls apart -- including from other countries you have no jurisdiction over.


Only Americans are allowed to do nonlinear function approximation! Fuck off China!


Isn't this a bit over-reductionist, like the claim that since all electronic images are represented by 0s and 1s, images are just numbers, and thus banning an illegal image is as silly as banning a number?


This is interesting... If an algorithm decodes, let's say, a JPG image in such a way that it produces child porn out of, let's say, the Google logo, does that make the Google logo an illegal image? Does conveying an illegal image in other ways, let's say describing the detailed attributes of such an image in text, make that text illegal? Not from a law standpoint, as I am sure the law is quite specific, but from the spirit of the law, which in theory should represent the cultural norms of the country using such laws?


The law doesn't really have a problem with such technical hacks. Courts basically apply common sense: if you distribute a decoder that produces CP from the Google logo, in legal terms that is no different from distributing CP.

I think someone wrote an article about this many years ago called "the colour of bits" or something like that. Basically, the idea that you can trick your way out of the law with "it's just a number" type arguments is nonsense.
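
A minimal sketch of the kind of construction that argument gets invoked for: any file can be split into two random-looking halves, each of which is "just a number", yet a trivial XOR "decoder" reconstructs the original from the pair.

    import os

    def split(data: bytes):
        # One half is pure noise; the other is the data XORed with that noise.
        # Either half alone is statistically indistinguishable from random bytes.
        key = os.urandom(len(data))
        other = bytes(a ^ b for a, b in zip(data, key))
        return key, other

    def join(key: bytes, other: bytes) -> bytes:
        return bytes(a ^ b for a, b in zip(key, other))

    secret = b"any bytes at all"
    k, o = split(secret)
    assert join(k, o) == secret

Which is exactly why courts look at intent and the system as a whole rather than at any one file in isolation.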


To clarify: what if such a decoder genuinely decodes images, but it happens that if you feed it a specific input, like Google's logo, the result is a child porn picture on the screen? I.e., I am not talking about intentionally designing the algorithm to generate such a picture, but about that result being a side effect. What would be CP in this case: Google's logo, the algorithm, both, or neither?


What's likely to happen? Given how low the probability of such an event occurring by chance is, if the algorithm is traced back to you then the court will find you guilty of creating the algorithm specifically to produce the image. In short, you would be extremely unlucky.


Why, in this case, should the maker of the decoder go to jail and not the creator of the Google logo? To really make it difficult: let's assume both the image and the decoder were made at the same time.


If both were created at the same time, it would be a far different court case. I was assuming the case where the logo clearly predates the algorithm, because then you could get enough expert witnesses to testify that the logo was itself clean, and thus it is extremely likely the algorithm was developed specifically to produce that image given that input.

If both were provably and independently created at the same time, you would probably instead have an uproar among scientists about the probability of such an event happening. It may still cause issues, because the probability of such an event is so small that the proof of independent creation being wrong is the far more likely event.


If it was genuinely by chance, then nobody would believe you and you would still go to jail.


You're thinking of Matthew Skala's article "What Colour Are Your Bits?" https://ansuz.sooke.bc.ca/entry/23


Dr. Geoffrey Hinton, one of the pioneers of deep learning, left academia in the United States and moved to Canada in part as a personal protest against military funding of research.


We're gonna need government permits to do gradient descent.


Great point: where do you draw the line? Will linear regression be considered "AI"?
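
The joke has a real edge to it: the "gradient descent" below fits a plain linear regression, and the identical loop, scaled up, trains deep networks. A minimal sketch (assuming NumPy):

    import numpy as np

    # Fit y = w*x + b by gradient descent on mean squared error.
    rng = np.random.default_rng(0)
    x = rng.uniform(-1, 1, 100)
    y = 3.0 * x + 0.5 + rng.normal(0, 0.1, 100)

    w, b, lr = 0.0, 0.0, 0.1
    for _ in range(500):
        err = w * x + b - y
        w -= lr * (2 * err * x).mean()  # d(loss)/dw
        b -= lr * (2 * err).mean()      # d(loss)/db

    print(w, b)  # converges near the true 3.0 and 0.5

Any regulatory definition broad enough to catch the deep network also catches this.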


The line will be drawn between class boundaries.


There is no such thing as AI. No professional, legal, or formal definition of such a thing exists. Even self-described AI experts have put forth no definition for AI, let alone agreed on one. Elon Musk was telling people on Joe Rogan that AI was the biggest threat to humanity without even defining what AI is.


Yes, because that worked fine with encryption restrictions back in the 90s.


I appreciate the members of HN bringing this to our attention; however, the comment period on the proposed rulemaking ended on December 19th, 2018.

https://www.federalregister.gov/documents/2018/11/19/2018-25...


Let me correct myself. Seems the comment period has been extended until January 10th.

https://www.regulations.gov/document?D=BIS-2018-0024-0042


The problem I see here is logistical. AI is often a blank slate, dependent on the data that's fed into it to be useful. It's that data, and the way the model is tuned, that matter. We do control exports of certain grades of encryption, but how do we control the export of a tuned machine model, or of the input data for an ML model?
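
To make that concrete, a sketch assuming PyTorch: the architecture is a few public lines anyone can retype, while the export-relevant artifact is the trained weights file, without which the code is worthless.

    import torch
    import torch.nn as nn

    # The code below is trivially reproducible from any paper.
    model = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 2))

    # The value lives in the trained parameters, which serialize to a plain
    # file. Any meaningful export control would have to target this artifact
    # (and the training data), not the source code.
    torch.save(model.state_dict(), "weights.pt")
    model.load_state_dict(torch.load("weights.pt"))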


Good luck putting that genie back in the bottle


[flagged]


Could you please stop posting unsubstantive comments to Hacker News?


They may be short and sweet, but I certainly don't feel they're unsubstantive. It seems you're the decider, though, so no problem, I'll leave.


I'm sorry, dang. I appreciate your work here. I like to use farce to point out what I see as inconvenient truths. I can accept that they're not welcome here.


Short comments are fine in general, but short inflammatory or snarky comments tend to be flamebait. If you're posting about something divisive, please make the comment more thoughtful.

It's not just—or even primarily—about the quality of your own comment, but the quality of the other comments it is likely to lead to. In other words, the real issue is the systemic effect on the site.

And please don't leave!

https://news.ycombinator.com/newsguidelines.html


> They may be short and sweet but I certainly don't feel they're unsubstantive.

Your comment is visible through enabling flagged/dead comments in settings, and now that I've enabled it—I really don't see the value that this specific comment adds. It broadly derides a class of workers (regardless of whether we actually hold them in high regard) without materially contributing to the conversation.

Commentary of this nature does largely invite simple, emotional agreement in forums with a more instinctive hive mentality (versus the intellectual hive mentality which arises here, a challenge I'm not discounting generally but am subtracting from this equation). What it generally doesn't do is invite informed conversation.

It scratches an itch, and judging from how thoroughly your comment was both modded down and flagged, it looks like it scratches an itch Hacker News doesn't have.

Tl;dr: dang might be the decider, but his decision was informed by flags, which speaks to the nature of the discourse desired here. The comment was just out of place. It's a better fit for Reddit, but most of us who are active here are specifically here in search of a respite from that.

--

If I were to reply to your comment in search of an actual conversation over this, it would probably be more along the lines of:

> I doubt the politicians themselves came up with this. More likely it was drafted by a think tank, an institution with a vested interest, or a committee constituted of the above. Can't blame the politicians for investigating whether there's a challenge here that'll help their constituents; it's literally their job.


Tl;dr: when you got your people into power out of self-interest, but now they've outlived their usefulness.



