I would argue that we have already experienced enough of the downsides of "AI" that there is reasonable cause for concern.

The implications of deepfakes and similar frauds alone are potentially devastating to informed political debate in democracies, safe and effective dissemination of public health information in emergencies, and plenty of other realistic and important trust scenarios.

The implications of LLMs are potentially wonderful in terms of providing better access to information for everyone but we already know that they are also capable of making serious mistakes or even generating complete nonsense that a non-expert user might not recognise as such. Again it is not hard to imagine a near future where chat-based systems have essentially displaced search engines and social media as the default ways to find information online but then provide bad advice on legal, financial, or health matters.

There is a second serious concern with LLMs and related technologies, which is that they could very rapidly shift the balance from compensating those who produce useful creative content to compensating those who run the summary service. It's never healthy when your economics don't line up with rewarding the people doing the real work and we've already seen plenty of relevant stories about the AI training data gold rush.

Next we get to computer vision and its applications in fields like self-driving vehicles. Again we've already seen plenty of examples where cars have been tricked into stopping suddenly or otherwise misbehaving when for example someone projected a fake road sign onto the road in front of them.

Again there is a second serious concern with systems like computer vision, audio classification, and natural language processing and that is privacy. It's bad enough that we all carry devices with cameras and microphones around with us almost 24/7 these days, and the people whose software runs on those devices seem quite willing to spy on us and upload data to the mothership with little or no warning. That alone has unprecedented implications for privacy and the associated risks. The increased ability to automatically interpret raw video and audio footage - with varying degrees of accuracy and bias, of course - greatly amplifies the potential dangers of these systems.

There is enormous potential in modern AI/ML techniques for everything from helping everyday personal research to saving lives through commoditising sophisticated analysis of medical scans. But that doesn't mean there aren't also risks we already know about at the same kind of scale - even without all the doomsday hypotheticals where suddenly a malicious AGI emerges that takes over the universe.




Let’s stipulate that everything you said is true. How is EU regulation supposed to prevent that? Are they going to stop open source models from being used in Europe? Are they going to stop foreign adversaries from using deep fakes?

It’s just like trying to restrict DVD encryption keys from being published, or 128-bit encryption from being “exported” in browsers back in the day.


> The implications of LLMs are potentially wonderful in terms of providing better access to information for everyone but we already know that they are also capable of making serious mistakes or even generating complete nonsense that a non-expert user might not recognise as such. Again it is not hard to imagine a near future where chat-based systems have essentially displaced search engines and social media as the default ways to find information online but then provide bad advice on legal, financial, or health matters.

I think a bigger concern is LLMs providing deliberately biased results and stating them as fact.


The issue is, the regulations aren't tailored to address any of those concerns, some of which may not even be solvable through regulation at all:

> The implications of deepfakes and similar frauds alone are potentially devastating to informed political debate in democracies, safe and effective dissemination of public health information in emergencies, and plenty of other realistic and important trust scenarios.

The horse is out of the barn on this one. You can't stop this by regulating anything, because the models necessary to do it have already been released and will continue to be released from other countries, and some of the primary purveyors of this sort of thing will be adversarial nation states, which obviously aren't going to comply with any laws you pass.

> The implications of LLMs are potentially wonderful in terms of providing better access to information for everyone but we already know that they are also capable of making serious mistakes or even generating complete nonsense that a non-expert user might not recognise as such.

Which is why AI summaries are largely a gimmick and people are figuring that out.

> they could very rapidly shift the balance from compensating those who produce useful creative content to compensating those who run the summary service.

This already happened quite some time ago with search engines. People want the answer, not a paywall, so the search engine gives them an unpaywalled site with the answer (and gets an ad impression from it), and the paywalled sites lose to the ad-supported ones. But then the operations that can't survive on ad impressions lose out, and even the ad-supported ones doing original research lose out, because you can't copyright facts, so anyone paying to do original reporting will see their stories covered by every other outlet that doesn't. Then the most popular news sites become scummy lowest-common-denominator partisan hacks beholden to advertisers, with spam-laden websites to match.

Fixing this would require something along the lines of the model NPR used to use, i.e. "free" yet listener-supported reporting, but they stopped doing that and became a partisan outlet supported by advertising. The closest contemporary thing seems to be the Substacks, where most of the stories are free to read but you're encouraged to subscribe, and the subscriptions are enough to sustain the content creation.

The AI thing doesn't change this much if at all. A cheap AI summary isn't going to displace original content any more than a cheap rephrasing by a competing outlet does already.

> Next we get to computer vision and its applications in fields like self-driving vehicles. Again we've already seen plenty of examples where cars have been tricked into stopping suddenly or otherwise misbehaving when for example someone projected a fake road sign onto the road in front of them.

But where does the regulation come in here? When a car does that, it's obviously a bug, and the manufacturers already have an incentive to fix it because their customers won't like it. And there are already laws specifying what happens when a carmaker sells a car that doesn't behave right.

> Again there is a second serious concern with systems like computer vision, audio classification, and natural language processing and that is privacy.

Which really has almost nothing to do with AI, and the main solution is giving people alternatives to the existing systems that invade their privacy. Indeed, the hard problem there is replacing existing "free" systems with something that doesn't put more costs on people, when the existing systems are "free" specifically because of that privacy invasion.

If a government wants to do something about this, it should fund the development of real free software that replaces the proprietary services hoovering up everyone's data.



