
Nobody is against regulation that disfavors large incumbents to support competition instead.

You'll struggle to find people who are against the Digital Markets Act for this reason. It literally only targets the potential monopolists.

However, virtually every other piece of regulation does the opposite.

Regulation usually gets trotted out after the downside of doing [new innovation] is experienced. This always happens, because doing something new always involves unknown risk. Most people aren't entrepreneurs and hate risk, so they pass regulation, and the market gets locked down so nothing new happens again. Incumbents and their army of lawyers can easily comply or are grandfathered in, and challengers are permanently disadvantaged. That market is officially dead until the next fundamental leap forward in technology.

What's different now, though, is that the hysteria over AI is leading regulators to pass this incumbent-cementing regulation before we've even had a chance to experience both the upside and downside, so the innovation never happens at all.

Combine this with a rapidly aging population in Europe, and I only see this trend increasing. If there's one thing old people hate, it's risk and doing new things. Meanwhile, those same old folks are expecting massive payouts (social benefits) via taxation of the same private sector they're currently kneecapping with red tape. Ironically, those two converging trends aren't great for Europe.




Specifically with AI, I don't want to experience the downside of innovation before we regulate, because of how widespread its use already is; its problems have already become apparent.

For example, it's being used to screen job applicants even though we have proven that AI models still suffer from things like racial bias. Companies don't disclose how their models are trained to negate bias or anything like that either, and that's one example I remember off the top of my head.


> it's being used to screen job applicants even though we have proven that AI models still suffer from things like racial bias

I bet they also suffer from other biases that are harder to detect and maybe some biases we can't even imagine and thus control for.


So... just like humans?


That's the point though. It's illegal for humans to do things during hiring that are not illegal for AI to do as it stands now (all sorts of discrimination). We want to at least level the playing field.


There’s no exemption from prosecution for racial bias because it was done by an algorithm. If a company uses such an algorithm, AI or not, they will be open to prosecution for it.


The differentiating nugget is that LLMs and similar tech are auditable. You can audit the models and test for bias in a way that scales.
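For instance, a minimal counterfactual audit is just a loop (a Python sketch; score_resume() is a placeholder for whatever model call you're auditing, and the names are illustrative, in the style of the classic correspondence studies):

  # Hold the resume fixed, vary only the name, and compare
  # the model's average scores per name group.
  RESUME = ("10 years of backend development. BSc in Computer "
            "Science. Led a team of five engineers.")

  NAME_GROUPS = {
      "group_a": ["Emily Walsh", "Greg Baker"],
      "group_b": ["Lakisha Washington", "Jamal Jones"],
  }

  def score_resume(text):
      return len(text) % 10  # placeholder; swap in the real model call

  def audit():
      means = {}
      for group, names in NAME_GROUPS.items():
          scores = [score_resume(name + "\n" + RESUME) for name in names]
          means[group] = sum(scores) / len(scores)
      return means  # a persistent gap between groups is a red flag

  print(audit())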


How do you audit the loaded die at the end of all LLMs?


Are there agreed upon neutral auditors or common processes for auditing models? If not, maybe what we need is (gasp!) some governmental oversight.


Humans can be asked to explain their decisions and personally take accountability for them. Even if AI processes were less biased, using them introduces the opacity of relying on models that simply can't be fully intuited or explained. For life-altering decisions, merely performing better than a human is not the benchmark.


Kind of like humans in that regard


Except they can process 1e9 more applications, and are thus massively more biased and discriminatory.


There may be better examples of why AI biases pose deep systemic problems, but CV screening? Are contemporary LLMs really worse than previous screening tech and processes?


I think a worse feature for them to have is consistency. Imagine being someone who has fallen through the cracks of software a significant number of employers are using... basically soft-locked out of employment with no recourse.


I'd argue consistent discrimination is easier to deal with, because it's easier to detect and account for.

A range of different people interviewing is going to be a lot harder to pin down.
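To make that concrete: because the algorithm is consistent, you can run the exact same disparate-impact check over every decision it has ever made. A sketch using the four-fifths rule from US employment-selection guidelines (group labels and counts are invented):

  # Flag any group whose selection rate falls below 80% of the
  # highest group's rate (the "four-fifths rule" heuristic).
  def four_fifths_violations(outcomes, threshold=0.8):
      # outcomes: {group: (selected, total)}
      rates = {g: s / t for g, (s, t) in outcomes.items()}
      top = max(rates.values())
      return {g: r / top for g, r in rates.items() if r / top < threshold}

  outcomes = {"group_a": (120, 400), "group_b": (60, 400)}
  print(four_fifths_violations(outcomes))  # {'group_b': 0.5}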


Discrimination in this sense is a statistical property. If Jimbob gets rejected everywhere as an individual, it isn't really discrimination, but that doesn't mean it isn't an issue.


You can combat bias in a team of people, but with an LLM you may not even know it exists.


Isn't not knowing that bias exists just the same problem that the average HR department has? You solve that by educating people about biases. If anything that should be easier with LLMs because you're not asking people to alter their sense of self ("I wouldn't do that!"), you're just informing them of technical limitations.


I don't understand your argument. The whole purpose of AI-driven recruitment is to reduce human effort by doing some work for them. If AI, due to existing bias, reduces diversity in your pool of candidates, what can you do about it? You either do not notice the bias and just think that those people are indeed the best fit, or you willingly accept the bias ("But it reduces our hiring costs! We'll deal with diversity later, when this technology matures!"), or you go look at the unfiltered stream and put in such an amount of work that it makes the AI useless. HR departments are not the people who will build their own solution; they will buy it from a third party, and they won't have control over, or the budget for, the fine-tuning.


How can you combat bias in a team of humans if humans are susceptible to bias? How do you even know what neutral is if it's not a physically measurable quantity?


This sounds like an argument for never educating people and doing nothing about the problem.

Just because you graduated from a CS program doesn't mean you will always write bug-free code using the ideal algorithm and design pattern.

But you know what you should be striving for, can more readily identify issues, and maybe sometimes you actually are perfect.


> This sounds like an argument for never educating people and doing nothing about the problem.

I feel like it's really the opposite.

There is a major problem with using AI to screen CVs, which is that AI is bad at screening CVs. It excludes good candidates and offers mediocre ones, and it opens up a bunch of hacks, the equivalent of black-hat SEO, which gives an undesired advantage to people inclined to dirty tricks.

But "it's racist" is a political hot button, so before people can start to point that out, someone scrambles to push the hot button and suck all the oxygen out of the room. Then the purveyors of snake oil can munge the algorithm until it passes the naive oversimplified racial bias test someone made up (probably making it even worse at its intended purpose), and then claim "the problem" is solved, as if that was the main and only problem.


They send resumes with the same accomplishments and names that indicate race and observe the relative degrees of acceptance.

Every time they do this test, considerable bias is found to still exist.
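Running that test against an automated screener is straightforward statistically. A sketch using a two-proportion z-test (the counts are invented; statsmodels is one library that provides the test):

  # Compare callback rates between two name groups, as in
  # paired-resume (correspondence) studies.
  from statsmodels.stats.proportion import proportions_ztest

  callbacks = [50, 32]       # callbacks received per name group
  applications = [500, 500]  # resumes sent per name group

  z, p = proportions_ztest(callbacks, applications)
  print(f"z = {z:.2f}, p = {p:.4f}")  # a small p means the gap is unlikely to be chance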


While still using resumes, have an AI interview for knowledge and communication skills.


Like many others, we've built a product to do this.

I think it gives a much higher signal on candidates than a resume, and is more "blind" in terms of bias than a human.

It's based on voice messages. Open to all here if anyone wants to take it for a spin. https://candit.net


How do you get an unbiased evaluation of someone with an accent that differs from your training set? It took about a decade for my friends with Indian English accents to be able to use voice controls in luxury cars. And honestly young people from India have a much more American accent than my generation.


Remove names then. Problem solved.


There are still plenty of proxies that can amount to the same thing. Not a lot of white graduates of HBCUs, for instance. Are we removing the names of all schools, too?


If you don't want to favor Ivy League schools, that would be the next thing to remove, yes.
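A naive redaction pass is easy to sketch, though it only masks what you know to list; proxies like zip codes, clubs, or phrasing still leak through (the school list and tokens below are placeholders):

  # Strip names, listed schools, and zip codes before scoring.
  import re

  KNOWN_SCHOOLS = ["Howard University", "Harvard University"]

  def redact(resume_text, applicant_name):
      text = resume_text.replace(applicant_name, "[CANDIDATE]")
      for school in KNOWN_SCHOOLS:
          text = text.replace(school, "[SCHOOL]")
      # zip codes are another common demographic proxy
      return re.sub(r"\b\d{5}(?:-\d{4})?\b", "[ZIP]", text)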


It's 2024. There are decades of research with a lot of data, and there exist best practices and products that help combat bias. The toolkit available to HR is rich and there are plenty of things that work. E.g. I have absolutely no problem hiring women in tech roles, because I simply adjusted my process. I heard complaints from other engineering leaders that they want to increase diversity but struggle to find candidates: they do not realize that they have bias in their job descriptions, and they apply the same screening criteria and do the post-interview evaluation the same way regardless of candidate gender or background. Many people can do the job, but they communicate it differently, and it is important to account for that when you see the CV or talk to them.


Of course you can know it exists. You just need to observe inputs and outputs and compare with previous non-LLM approaches.
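Concretely: run the same applicant pool through both pipelines and split the outcome rates by group (a sketch; the data structures are made up):

  # Compare pass rates of an LLM screener against the previous
  # process on the same applicants, broken down by group.
  from collections import defaultdict

  def pass_rates(decisions):
      # decisions: list of (group, passed) tuples
      tally = defaultdict(lambda: [0, 0])
      for group, passed in decisions:
          tally[group][0] += int(passed)
          tally[group][1] += 1
      return {g: passed / total for g, (passed, total) in tally.items()}

  llm_decisions = [("group_a", True), ("group_a", False), ("group_b", False)]
  human_decisions = [("group_a", True), ("group_a", True), ("group_b", True)]
  print(pass_rates(llm_decisions), pass_rates(human_decisions))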


You do not understand how HR works, then. They are not LLM researchers; they just apply the available toolkit, e.g. organizing regular training for hiring managers. When they buy an AI-driven solution from a third party, they have no tools either to detect whether it's just a statistical fluctuation or a bias they need to find a workaround for, or to act on this information in a cost-efficient way. Most likely they won't even have enough data to reach any statistical significance. The vendor in theory could do it, but most of them are startups now and they do not have diversity in their OKRs, at least not yet. It's not the UVP of their product.


It's almost guaranteed there are people trying to sell an AI version of Robodebt right now to the Australian government, even though the last (non-AI) version of it was an absolute cluster fuck:

https://en.wikipedia.org/wiki/Robodebt_scheme

Other governments around the world have done similar (non-AI) things in the past with similar terrible results. They'll likely try an AI version of things too in the near future, just because "AI" apparently solves all the problems. Ugh.


Of course they could be. A company that doesn't want a racial bias won't intentionally filter on names, but might accidentally deploy an LLM that can discern race from name.


The humans will do it unconsciously if they can see the names.

It is like how orchestras used to hold auditions where the judges could see the candidates; when they switched to auditions where they couldn't see the candidates, the number of women being hired went right up.


A human might, but a dumb filter wouldn't.


See, this is the problem with just deciding to invent your own facts.

There is a famous study on this topic, usually presented in much the way you describe. But usually people are more careful not to lie about the results. The rate of women being hired went down, not up. The "success" for women was confined to advancement through particular audition rounds.

https://archive.is/xmvp2

The original paper is not exactly a model of investigative integrity:

>> Women are about 5 percentage points more likely to be hired than are men in a completely blind audition, although the effect is not statistically significant. The effect is nil, however, when there is a semifinal round, perhaps as a result of the unusual effects of the semifinal round.


Are you citing a weird racist blogger on Medium who disagrees with the stated conclusion of the paper? Is that how science works now?

Abstract from the original paper:

> A change in the audition procedures of symphony orchestras--adoption of "blind" auditions with a "screen" to conceal the candidate's identity from the jury--provides a test for sex-biased hiring. Using data from actual auditions, in an individual fixed-effects framework, we find that the screen increases the probability a woman will be advanced and hired. Although some of our estimates have large standard errors and there is one persistent effect in the opposite direction, the weight of the evidence suggests that the blind audition procedure fostered impartiality in hiring and increased the proportion women in symphony orchestras.

https://www.aeaweb.org/articles?id=10.1257/aer.90.4.715


You'll get more accurate results if you look at the paper's conclusion, not the abstract.

The whole point is that the paper doesn't actually support any of the claims, which it's fairly open about -- that's those "large standard errors".

They did not in fact find that the screen increases the probability of a woman being hired. They found that it increased the probability of a woman passing the final round of auditions. There's more than one round.

It's up to you, I guess, whether you want to follow the original authors to their conclusion that the semifinal round is fundamentally unlike the final and quarterfinal audition rounds. I wouldn't.

The paper is meritless, but even its actual, worthless findings aren't in the direction that people like to claim. Here's Andrew Gelman: https://statmodeling.stat.columbia.edu/2019/05/11/did-blind-...

> First, some equivocal results:

> This is not very impressive at all. Some fine words but the punchline seems to be that the data are too noisy to form any strong conclusions.

> Huh? Nothing’s statistically significant but the estimates “show that the existence of any blind round makes a difference”? I might well be missing something here. In any case, you shouldn’t be running around making a big deal about point estimates when the standard errors are so large. I don’t hold it against the authors—this was 2000, after all, the stone age in our understanding of statistical errors. But from a modern perspective we can see the problem.

> Anyway, where’s the damn “50 percent” and the “increases by severalfold”? I can’t find it. It’s gotta be somewhere in that paper, I just can’t figure out where.

> Pallesen’s objections are strongly stated but they’re not new. Indeed, the authors of the original paper were pretty clear about its limitations. The evidence was all in plain sight.

> For example, here’s a careful take posted by BS King in 2017:

>> Okay, so first up, the most often reported findings: blind auditions appear to account for about 25% of the increase in women in major orchestras. . . . [But] One of the more interesting findings of the study that I have not often seen reported: overall, women did worse in the blinded auditions. . . .

If you read the paper, you'd notice the problems.

But you might not want to run the risk of, um, "racism" that seems to be inherent in reading a paper on the impact of blind auditions on women.


The racism was more the guy having very strong opinions on Twitter about black people's IQs and immigrants destroying western civilization (plus being against COVID vaccines and human rights and other stuff that suggests you are not a level-headed truth seeker).

Quote from your second, more respectable, link:

> Pallesen’s objections are strongly stated but they’re not new. Indeed, the authors of the original paper were pretty clear about its limitations.

When I read the original papers, like your second link, I find a level-headed paper working with the limited data it had available.

I would suggest that the annoyance is with pop science retelling at best, or just a quack selling anger online to fuel his weird obsessions at worst.


You realize this is a huge problem, right? Even if that were my only complaint (it's not!), it is 100% not acceptable. If it's doing it with CVs, it's doing it with other things in other scenarios.

Another is that a health insurance company was caught using AI to determine whether a claim should be denied, which led to a scandal when a whistleblower leaked the practice, as it was rife with errors and ethical concerns.


> For example, it's being used to screen job applicants even though we have proven that AI models still suffer from things like racial bias.

Can't we just say racism is illegal, and if a company uses an AI to be racist, they get fined the same way they would if they were racist the old fashioned way?


Look up the Dutch tax return scandal, where the tax arm of the Dutch government (their 'IRS') used machine learning to identify fraud, but it turned out to be very racially biased, and it uprooted thousands of families with years of financial struggles and legal battles.

See https://en.m.wikipedia.org/wiki/Dutch_childcare_benefits_sca...


Or the Post Office Horizon scandal, although that software wasn't called "AI", AFAIR.


"Just", no.

Fence at the top of a cliff (make sure the AI is unbiased and can be fixed when it turns out it is) vs. Ambulance at the bottom (letting people sue if they think the machine is wrong).


When you make specific methods illegal, it tends to lead to loopholes. If you just make the result illegal then nobody can try and get around it by using a slightly different system design.


Also true.

But which way around does that apply? Racism is a concrete example of what AI may incorrectly automate, it's not the only bias that AI can have — any correlation in the training data will be learned, but not necessarily causation — and the goal of these laws is to require the machines to be accurate, unbiased, up to date, respectful of privacy etc., with "not racist" as merely one example of why that matters.

(Also the existing laws on racial equality were not removed by GDPR; to the previous metaphor, the fence being better doesn't mean you can fire the ambulance service).


I would argue that we already have experienced enough of the downsides of "AI" that there is reasonable cause for concern.

The implications of deepfakes and similar frauds alone are potentially devastating to informed political debate in democracies, safe and effective dissemination of public health information in emergencies, and plenty of other realistic and important trust scenarios.

The implications of LLMs are potentially wonderful in terms of providing better access to information for everyone but we already know that they are also capable of making serious mistakes or even generating complete nonsense that a non-expert user might not recognise as such. Again it is not hard to imagine a near future where chat-based systems have essentially displaced search engines and social media as the default ways to find information online but then provide bad advice on legal, financial, or health matters.

There is a second serious concern with LLMs and related technologies, which is that they could very rapidly shift the balance from compensating those who produce useful creative content to compensating those who run the summary service. It's never healthy when your economics don't line up with rewarding the people doing the real work and we've already seen plenty of relevant stories about the AI training data gold rush.

Next we get to computer vision and its applications in fields like self-driving vehicles. Again we've already seen plenty of examples where cars have been tricked into stopping suddenly or otherwise misbehaving when for example someone projected a fake road sign onto the road in front of them.

Again there is a second serious concern with systems like computer vision, audio classification, and natural language processing, and that is privacy. It's bad enough that we all carry devices with cameras and microphones around with us almost 24/7 these days, and the people whose software runs on those devices seem quite willing to spy on us and upload data to the mothership with little or no warning. That alone has unprecedented implications for privacy and associated risks. The increased ability to automatically interpret raw video and audio footage - with varying degrees of accuracy and bias of course - amplifies the potential dangers of these systems greatly.

There is enormous potential in modern AI/ML techniques for everything from helping everyday personal research to saving lives through commoditising sophisticated analysis of medical scans. But that doesn't mean there aren't also risks we already know about at the same kind of scale - even without all the doomsday hypotheticals where suddenly a malicious AGI emerges that takes over the universe.


Let's stipulate that all you said was true. How is EU regulation supposed to prevent that? Are they going to stop open source models from being used in Europe? Are they going to stop foreign adversaries from using deepfakes?

It's just like trying to restrict DVD encryption keys from being published or 128-bit encryption from being "exported" in browsers back in the day.


> The implications of LLMs are potentially wonderful in terms of providing better access to information for everyone but we already know that they are also capable of making serious mistakes or even generating complete nonsense that a non-expert user might not recognise as such. Again it is not hard to imagine a near future where chat-based systems have essentially displaced search engines and social media as the default ways to find information online but then provide bad advice on legal, financial, or health matters.

I think a bigger concern is LLMs providing deliberately biased results and stating them as fact.


The issue is, the regulations aren't tailored to address any of those concerns, some of which may not even be solvable through regulation at all:

> The implications of deepfakes and similar frauds alone are potentially devastating to informed political debate in democracies, safe and effective dissemination of public health information in emergencies, and plenty of other realistic and important trust scenarios.

The horse is out of the barn on this one. You can't stop this by regulating anything because the models necessary to do it have already been released, would continue to be released from other countries, and one of the primary purveyors of this sort of thing will be adversarial nation states, who obviously aren't going to comply with any laws you pass.

> The implications of LLMs are potentially wonderful in terms of providing better access to information for everyone but we already know that they are also capable of making serious mistakes or even generating complete nonsense that a non-expert user might not recognise as such.

Which is why AI summaries are largely a gimmick and people are figuring that out.

> they could very rapidly shift the balance from compensating those who produce useful creative content to compensating those who run the summary service.

This already happened quite some time ago with search engines. People want the answer, not a paywall, so the search engine gives them an unpaywalled site with the answer (and gets an ad impression from it) and the paywalled sites lose to the ad-supported ones. But then the operations that can't survive on ad impressions lose out, and even the ad-supported ones doing original research lose out because you can't copyright facts so anyone paying to do original reporting will see their stories covered by every other outlet that doesn't. Then the most popular news sites become scummy lowest-common-denominator partisan hacks beholden to advertisers with spam-laden websites to match.

Fixing this would require something along the lines of the old model NPR used to use, i.e. "free" yet listener-supported reporting, but they stopped doing that and became a partisan outlet supported by advertising. The closest contemporary thing seems to be the Substacks where most of the stories are free to read but you're encouraged to subscribe and the subscriptions are enough to sustain the content creation.

The AI thing doesn't change this much if at all. A cheap AI summary isn't going to displace original content any more than a cheap rephrasing by a competing outlet does already.

> Next we get to computer vision and its applications in fields like self-driving vehicles. Again we've already seen plenty of examples where cars have been tricked into stopping suddenly or otherwise misbehaving when for example someone projected a fake road sign onto the road in front of them.

But where does the regulation come in here? When it does that it's obviously a bug and the manufacturers already have the incentive to want to fix it because their customers won't like it. And there are already laws specifying what happens when a carmaker sells a car that doesn't behave right.

> Again there is a second serious concern with systems like computer vision, audio classification, and natural language processing and that is privacy.

Which is really almost nothing to do with AI and the main solutions to it are giving people alternatives to the existing systems that invade their privacy. Indeed, the hard problem there is replacing existing "free" systems with something that doesn't put more costs on people, when the existing systems are "free" specifically because of that privacy invasion.

If a government wants to do something about this, fund the development of real free software that replaces the proprietary services hoovering up everyone's data.


How do you know the race?

I've seen some places collect that information which is wild. But you can decline.


> it's being used to screen job applicants

Any idea what software is being used?


Just an example:

https://hirebee.ai/


Problem is, most innovation is in using existing models in new ways. You can't expect most people to train their own models.

Regulating it the way you say just means "zero innovation!".


> What's different now though, is the hysteria over AI is leading regulators to pass potential market killing regulation

This is entirely because the experts and fundraisers in the field promoted the technology as existentially and societally dangerous before they even got it to do anything commercially viable. "This has so much potential that it could destroy us all!" was the sales pitch!

Of course regulators are going to take that seriously, as there's nobody of influence vested in trying to show them otherwise.


OpenAI was smart enough to build a moat for itself in Europe.

The EU was dumb enough to dig it for them.


What is the value that OpenAI is bringing to the US right now?

Mostly it's being used to generate text that fits a query.


That's not true, it's also very useful when trying to defraud and scam people en-masse!


Given that text is used everywhere, don't you think you might be undervaluing how valuable it is to "generate text that fits a query"?


What value does a google search provide?


The ability to target adverts more effectively.


Modern Google search? Very little


We’ve saved a few million dollars with it at my job.


It quickly generates BS text that would otherwise cost billions of man-hours from people doing BS jobs, who could be doing something more useful, such as childcare.

I'm using it to quickly localize our application into different languages, such as Arabic. German took like a few hours to get from nonexistent to 100% functional. Arabic was sketchy at first, with Arabic mixed with French text; now it's 100% Arabic.
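The workflow is roughly this (a sketch assuming the OpenAI Python client; the model name and file paths are illustrative, not necessarily what we actually use):

  # Translate each UI string in a locale file, keeping
  # {placeholders} intact, and write out the new locale.
  import json
  from openai import OpenAI

  client = OpenAI()  # reads OPENAI_API_KEY from the environment

  def translate_strings(strings, target_language):
      out = {}
      for key, value in strings.items():
          resp = client.chat.completions.create(
              model="gpt-4o-mini",
              messages=[{
                  "role": "user",
                  "content": f"Translate this UI string into {target_language}. "
                             f"Keep placeholders like {{name}} unchanged. "
                             f"Reply with the translation only:\n{value}",
              }],
          )
          out[key] = resp.choices[0].message.content.strip()
      return out

  with open("locales/en.json") as f:
      en = json.load(f)
  with open("locales/ar.json", "w") as f:
      json.dump(translate_strings(en, "Arabic"), f,
                ensure_ascii=False, indent=2)

You still want a native speaker to review the output, but it gets you from zero to reviewable very quickly.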


> who could be doing something more useful, such as childcare.

Ok, but what is your plan for letting people who lost their jobs to AI do more meaningful things? Because unless there are bigger societal changes, all it means is that those people will now have to find jobs that are even more bullshit.


I think they're going to do the same jobs they are doing now, but using AI, which should probably reduce the amount of BS they have to deal with every day. For instance, some of the low-pay work companies would use freelancers for, like processing huge amounts of data, manually transcribing text from documents, or content moderation.


How?


> This is entirely because the experts and fundraisers in the field promoted the technology as existentially and societally dangerous

> regulators...take that seriously

That's how.


The experts did that specifically so we would regulate barriers to entry into existence. It isn't a new trick. Regulatory capture takes many guises, "think of the": children, consumers, ... under-booked hotels we could put you in.


>> Most people aren't entrepreneurs and hate risk, so they pass regulation, and the market gets locked down so nothing new happens again

I think that the bigger issue is that the people who suffer when the risk goes bad and the people who benefit when the risk goes well usually aren't the same people.


"before we've even had a chance to experience both the upside and downside, so the innovation never happens at all."

----

Let me laugh out loud. Those who govern these companies know 10 years ahead how and what will happen. The bigwigs higher up have 10-20 year plans. And people talk about "before we had a chance to experience the upsides and the downsides". Get a grip on reality.


Yea, so wildly out of touch with reality. The governing elite are wizards of prediction!

Sundar at Google knew LLMs were going to be huge after Google invented them, so that’s why they were first to market and…

Oops. Maybe not?

While it might make risk-averse types feel good to imagine the people in charge are all-knowing (see religion), the truth is the world is a chaotic and reflexive system of unpredictability. Scary, I know!


I think you're the one who needs to get a grip on reality. There's no cabal of businessmen playing the world like a puppet. They don't have some secret knowledge of the future. They're often just as stupid as the rest of us, and prone to the same biases. If they were as prescient as you think, I assume companies would simply never fail - as others point out, they often miss things or make mistakes.


zzzzzz who said this?


What?


Google didn't know in 2006 where cell phones were headed. If they did, they wouldn't have made Android a BlackBerry clone.

No company can accurately predict where technology is headed a decade from now.


Okay.


You ever read the first edition of Bill Gates's "The Road Ahead", where he didn't even mention the Internet?

Google thinking the future of phones was BlackBerry clones?

Palm CEO: "The PC guys aren't just going to walk in and figure this thing out", referring to the then-rumored iPhone.


Predigested BS for people to ruminate on. I'd rather watch Tom & Jerry; it has much more profound thoughts.


> You'll struggle to find people who are against the Digital Markets Act for this reason. It literally only targets the potential monopolists.

I'm against the way it's being applied to Apple. I don't think that the government should dictate that consumers aren't allowed to choose a platform that's a locked down walled garden if that's what they want.

We have platforms that aren't walled gardens (Android) that many of us happily use (myself included), and Apple shouldn't have to become something that it didn't set out to be just because a few other big tech companies feel stifled by Apple's rules.


Going out of your way on the internet to defend Apple's right to take 30% of every sale on the App Store is insane to me.

How can you not see there's probably 20% of every purchase sitting on the table if competition were ever allowed to occur?

Not to mention the simple freedom of choosing what you want to install yourself, and not just what Apple allows you to...


> Not to mention the simple freedom of choosing what you want to install yourself, and not just what Apple allows you to...

I have the freedom to install whatever I want. I get that freedom by using Linux and Android. I choose to have that freedom by selecting platforms that provide it.

Many HN users seem to want all the benefits of Apple's approach with none of the downsides, and it doesn't work that way. Apple is what it is because it has a tight, coherent strategy, and forcing Apple to change that strategy will have knock-on effects that most Apple users won't like.

If you value freedom to install whatever you want, you chose the wrong ecosystem, and hijacking the ecosystem to satisfy your values is unfair to the vast majority of customers whose values already align with the ecosystem's.


Apple is trying to hijack the ecosystem of free competition and open markets.


Apple is providing an entry into the free smartphone market that is a closed system with a highly controlled and curated software ecosystem. That is their product, that's what they offer.

It's no different than Nintendo's entry into the game console market being what it is—some people will choose that experience because what it offers is valuable for them.

For myself, I'm an Android phone user and a PC gamer, but just because I wouldn't choose those experiences for myself doesn't mean I begrudge their existence.


There is a strong difference here: Smartphones are kind of essential to our modern life, game consoles aren't.


The DMA does not allow you to install what you want.

It does not apply to consoles.

It does make it possible for companies to keep some more money, but most importantly it allows them to sidestep the protections for my privacy that I pay a premium to Apple for.

A real DMA would force facebook, twitter, etc to open up for alternative clients. That would bring competition in and benefit the end user.

Not that whatever digital slot machine company is allowed to keep a higher percentage of the diamonds they sell you in their free-to-play game.


WhatsApp and Facebook are forced to accept third party clients by these laws. So your concerns are addressed:)


It's not 30%; in almost all cases it's 15%. And by all means, if you think all the services and support for payments, returns, refunds, and customer support are doable for 3%, you can have Apple not do that too.


Apple needs to get out of the infrastructure business if they want to play by their own rules. They aren't selling Gameboys and washing machines, they are storing people's private data and selling primary communication devices. That needs to be regulated and the consumer needs to have the final say, not Apple.


> they are storing people's private data and selling primary communication devices.

To be clear, I think privacy laws like GDPR absolutely have a place for consumer protection.

I just don't think the DMA does. Watching how the DMA applies to Apple, it feels far less about consumer protection than it does about businesses, and that's what makes me uncomfortable. The EU is in this case listening to complaints from a bunch of other businesses who do not have consumer interests at heart and ignoring the very real damage that their actions could do to consumer protection.

The Apple App Store protects users from myriad abuses by myriad bad companies. The EU wants Apple to build a blessed, paved off-ramp that companies can strongly encourage prospective customers to use that brings them deeper into the manipulative control of those companies.


> complaints from a bunch of other businesses who do not have consumer interests at heart

Apple (and other big tech companies) locking the competition out of the market, or buying them, hurts consumers. It robs them of other innovative choices.

Having more choices and interoperability is generally good for consumers.

If you stick with the Apple defaults how does this hurt you?


To clarify: I don't use Apple. I chose their competitors instead because I like the features offered by the competition better.

> If you stick with the Apple defaults how does this hurt you?

Because people won't be able to stick with the defaults. One of the first dominoes to fall if the EU gets its wishlist will be Facebook, which will put a version of its app that wouldn't pass Apple's review out through unofficial channels and strongly encourage or force users to switch to it. Once Facebook has paved the way many other companies that are similarly inclined to abuse their users will follow.

A walled garden with a wide-open back door is no walled garden at all, and the many Apple users who liked the garden are going to be cranky when they realize what tech lobbyists have done in the EU.


Interesting. I had thought of the garden walls as being in the way of users, based on my personal experience, but you bring up the point that the walls can also be in the way of nefarious companies. I assume both can be true.

And perhaps the legal system sees some of these garden walls as protecting Apple, falling under anti-trust, like the 30% markup on all financial transactions?

You mentioned malicious versions of the Facebook app that do an end-run around Apple's review. Maybe the EU is trying to cover this with new laws like the GDPR and DMA, making malicious app behaviour illegal. Might that not be better, protecting all users regardless of platform?


> Maybe the EU is trying to cover this with new laws like the GDPR and DMA, making malicious app behaviour illegal. Might that not be better, protecting all users regardless of platform?

If I thought the EU could actually execute on that? Maybe. But I know that Apple executes on it, and the fallout from the GDPR doesn't give me a lot of confidence that the EU knows how to regulate tech in a way that achieves desired outcomes and doesn't just lead to the same behavior as before with malicious compliance stamped on top.


> The Apple App Store protects users from myriad abuses by myriad bad companies.

Does it? Are the $50/month subscription flashlight apps gone?

This is just a form of 'think of the children'. Think of all those evil hackers waiting around the corner for the poor unsuspecting iPhone users.


> Think of all those evil hackers waiting around the corner for the poor unsuspecting iPhone users.

I'm not talking "evil hackers". I'm talking about Facebook just for a start. Once they pave the way and teach users how to use the off-ramp (or else not be able to use Facebook!) they'll be followed by dozens of smaller abusive companies that are eager to bypass Apple's review requirements.

I'm far less concerned about software vulnerabilities than I am about the companies that Apple's business policies currently keep in check.


A friend recently lost his life savings through a hack of a crypto wallet on an Android phone. Phone security isn't trivial.


Crypto comments aside, that could have happened on a Windows desktop too. Yet when MS tried to go walled garden with their store everyone complained. I don't see why Apple should get a free pass.


I guess with the iPhone it was walled from the start, so anyone buying one knew what they were getting. I'd be pissed off if they walled-gardened my Mac. I've got an iPhone and have mixed feelings about the restrictions on it. They can be annoying, but on the other hand I have significant investments, and more and more banks are insisting you access them via an app, and for that I'd probably prioritize security.


On the other hand, an example I've posted before: I want DaisyDisk for my iPhone. It will never be available as long as Apple can censor what's released for their phones.


> consumer needs to have the final say, not Apple.

And consumers spoke, and now other consumers, unhappy with those consumers' choices, are demanding the deal be changed.


Democracy is the dictatorship of the majority.

But it doesn't have to be; you can simply NOT regulate every damn little thing, especially when there are no victims and you are forbidding a simple trade between two entities who are both willing to engage.

No, developers who want to make money on the App Store are not victims. They can develop for Linux and sell apps there if they don't want to pay what Apple wants.

I'm one of them, I'm not supporting Apple or Google and I build everything on the web paying nothing.


I agree, mostly. It is worth noting that until the forced opening of the market occurred, Safari was conspicuously lacking in PWA support, especially WASM support. I am a recovering libertarian, and I now see the need for regulation in more places. Largely because of regulatory capture, true, but I don't think anarcho-capitalism works. It just makes lawyers top of the heap instead of warriors (as in pure anarchy). Which, when the lawyers command the soldiers, isn't any better.


[flagged]


Because they can collect more data with a native app. A native app is far better from their perspective, but not from the user's, unless it is something computationally bound that can't handle overhead (mostly games?). The last exception is banking apps. They do a lot of weird stuff to fingerprint your phone too, but in this case the user definitely wants them to.


[flagged]


Web apps have ways to track users, but users have ways to mitigate this via adblockers and privacy features in their browser. https://coveryourtracks.eff.org/ exists to help users check their own browser fingerprint.

Full apps on the other hand have many more ways to fingerprint users which aren’t as well known, such as “your free storage, your current volume level (to 3 decimal points) and even your battery level (to 15 decimal points)”. (https://www.washingtonpost.com/technology/2021/09/23/iphone-...)

Is one conclusively better than the other? I’m not sure, but web tracking and fingerprinting is still much more studied and users have more countermeasures, including using a different device where the user has more control.



I think Flash and ActionScript might be a counterexample? People have different standards too. In the early days of Win 3.1 I liked apps that had different widget sets; now anything that doesn't look native is considered garbage. And some people seem to consider rasterized fonts equivalent to baby-eating in a professional context. I don't think I have any common axioms or value systems to carry on conversations with them, but I believe they do care, so it is probably just that PWAs are my preference, and no one else's. I prefer the web Mattermost to the app, for instance.


Flash is actually an example of how bad cross platform frameworks are compared to native.

Despite what Adobe said when the iPhone first came out, there was no way that Flash was ever going to run on the first generation iPhone. It had only 128MB RAM and 400 MHz processor. When Flash finally came to Mobile in late 2010 on Android, it required 1GB RAM and 1Ghz processor. An iPhone with those specs didn’t come out until 2011.

Heck Safari could barely run on the first iPhone, it would do checkerboards until rendering could catch up with the scrolling.

And if you remember Apple’s cross platform Windows apps - iTunes, QuickTime and the short lived Safari, they also looked very bad on Windows.

Both Google and Facebook at different times said they are moving away from cross platform frameworks and web based technologies for iOS to more native software. Google has all but abandoned their cross platform mobile framework.


This argument doesn't work, because you could always argue that the consumer chose this product, and thus its features and practices should be allowed. Apple had it coming for a long time already, one way or another. And Microsoft will have it coming again too, the way they are going.


Yeah, sure, you could always argue that about anything, but that's not a refutation of this particular argument in this particular situation. Apple's walled garden produces a lot of real benefits for its customers that are part of what make it successful, and dismantling their walled garden is going to harm consumers.

I would never pick an Apple device for myself. I would also never recommend an Android phone to my mother-in-law. I, myself, know to avoid the many Android security holes that exist because it's a relaxed platform. But for my non-technical loved ones, Apple provides a much better experience in large part because it's a walled garden that makes it very difficult to install garbage.


> Apple's walled garden produces a lot of real benefits for its customers that are part of what make it successful, and dismantling their walled garden is going to harm consumers.

What are those benefits, and why would they evaporate if Apple adds an "Install from other sources" toggle? If you want to exclusively benefit from Apple's discernment, keep that toggle off and stay in the walled garden. If the benefits are so great, then surely everyone will choose to stay in the walled garden.


> If the benefits are so great, then surely everyone will choose to stay in the walled garden.

Surely you've seen enough of the free market to know that this is baloney. The instant that the walled garden is gone abusive apps (Facebook et al) will start ushering users away from it into their own, abusive paths as the only way to install. And once the big abusive apps have started the trend, the walled garden is effectively gone. Businesses don't choose regulation if they can help it.

The only reason we don't see this with Android and Google Play is because Google hardly has any rules at all about what can go on the Play Store.


> Surely you've seen enough of the free market to know that this is baloney. The instant that the walled garden is gone abusive apps (Facebook et al) will start ushering users away from it into their own, abusive paths as the only way to install.

Nah. Users don't grok sideloading.

Even on Android, where it's always been possible, it's a fringe phenomenon. And every app is on Google Play. In fact, not being able to offer Google Play anymore basically killed Huawei in the West.

The only exception is the epic store but they do it to make a point. Not because it actually works.


First and most importantly: you're imagining a sideloading method that is much less first-class than that the EU wants to impose.

Even aside from that, you can't compare the Play Store to Apple's because the Play Store is a thousand times less restrictive. There's a reason why people are constantly complaining about the App Store and (aside from Epic) never the Play Store. Apple has a lot of restrictions, and while some of them are there to extract more money most are there to protect their customers.


> First and most importantly: you're imagining a sideloading method that is much less first-class than that the EU wants to impose.

I'm not "imagining" anything. This is the status quo on Android and even setting one switch to "allow installations from this source" (one that already pops up automatically no less!) already freaks users out.

And the EU is fine with this. They didn't specify any technical implementation to Apple so I'm sure they will make it as scary as they can.

> Even aside from that, you can't compare the Play Store to Apple's because the Play Store is a thousand times less restrictive.

Apple has seen a lot of stuff slip by the reviewers too, and it's pretty much hit or miss based on who you get. Just like with Google.

I'm sure a lot more app providers would love to get out from under Apple's heavy hand but in practice this is just wishful thinking. The tiny amount of friction there is on Android is more than enough for people not to bother.

> There's a reason why people are constantly complaining about the App Store and (aside from Epic) never the Play Store. Apple has a lot of restrictions, and while some of them are there to extract more money most are there to protect their customers.

We'll have to differ in opinion on that. I feel like most of the restrictions are there to protect their cash cow which is the app store.


You mentioned users leaving Apple for Facebook.

Does that change the garden for you? Can't users such as yourself remain in the Apple ecosystem? Do other users leaving affect you? Or Apple? Maybe they won't have as much cash?

Does Apple deserve the extra cash in perpetuity for being the first with the network effect? There might be a lot of better ideas out there, if they were given a chance, like Apple had back before they and Google took over the market.


From my comment (emphasis in the original):

> abusive apps (Facebook et al) will start ushering users away from it into their own, abusive paths as the only way to install

> Do other users leaving affect you?

To clarify, I'm not an Apple user. I use their competitors in all categories because they're not really my style. (Yes, choosing something other than Apple is a choice you can make!)

I'm just someone who feels the need to represent normal Apple users inside the HN bubble.


The path you describe is so realistic.

Facebook will leave the App Store. Grandma can't find Facebook anymore on her new iPhone, and will google "how to install Facebook on iPhone". Thousands of websites will SEO for that, and give instructions ("There will be a popup that says 'You might install dangerous software. Are you sure?' Press YES.") on how to install their app store with some Facebook clone/spyware/VPN MITM attack/keyloggers. They will have complete control of grandma's phone and collect all the data they want. Soon she'll have a new default browser with 7 toolbars, a new search engine, and be mining crypto on the side.


Still seems more of a "we need the mafia because they prevent you from buying hard drugs" kind of argument. If Facebook is so harmful then we should deal with it directly, instead of giving another private company control over what individuals can install and which businesses can or cannot reach half of the population. More to the point, we should require Facebook to allow alternative applications which don't violate your privacy and thus could be on the App Store, so grandma never has to go searching for the "real" Facebook to interact with her friends unless that's something she specifically wants.


The App Store review provides protections which are not technical protections. Earlier someone talked about the failing of allowing a $50 subscription flashlight app into the store (dunno if they'd be able to find a citation). This was a failing of the expected business controls. There's nothing technical to even be done to prevent these sorts of abuses.

One example from a while back was banning an internal Facebook enterprise developer account because they were using it to install a VPN onto users devices for the purpose of monitoring user behavior for competitive market analysis. This is not anything that can be prevented by technical controls, and was only able to be published because enterprise profiles can deploy apps outside the store.

Apple wants their customers to be safe. Third party marketplaces take that responsibility and control out of their hands. Apple stopped that VPN abuse immediately upon finding out about it. How long would it take for European regulators to force Facebook to turn off such a self-published app of their own accord?


> Earlier someone talked about the failing of allowing a $50 subscription flashlight app into the store (dunno if they’d be able to find a citation). This was a failing of the expected business controls.

How about LassPass, then? That made it onto the App Store despite being designed to make money and steal credentials.

Maybe not $50/month, though. I love one Apple apologist's remarks on that aspect:

> Instead, the scam LassPass app tries to steer you to creating a “pro” account subscription for $2/month, $10/year, or a $50 lifetime purchase. Those are actually low prices for a scam app — a lot of scammy apps try to charge like $10/week.

Emphasis mine. "Look, I know you got your credentials stolen, but at least you didn't also get scammed out of as much money as some other scammy apps!"


I am unsure of your point: are you saying that since the business controls are fallible, we should abandon them and have an internet's worth of potential side-loaded LastPass scam apps?


The problem is that having the option of exiting the walled garden kills the garden. As soon as the most important apps aren't available on the Apple App Store anymore, everyone will have to exit the garden. You can argue about the pros and cons of that, but you can't deny that this has an effect on every Apple user, not just the ones that choose to exit.


People will have to exit the garden about as much as people have to buy Apple devices.


When the tech support scammer calls your mother, there's a decent chance they convince her to check that box and install who knows what that backdoors the phone.


If Apple was required to provide root access to all customers this would not prevent anyone from choosing to stay inside their walled garden.


You need app Foo on your phone for work (like Slack, maybe). Foo is in the App Store and worked great on your iPhone but decides they want to install via their own store so they can monetize employees’ data (location, whatever) to make more money. The new store launches, the new app version abuses private APIs, and the App Store version stops working. Your company announces that all employees need to download from the new source. Do you really have a choice about staying in the walled garden? Sure, neither your company nor Foo should suck, but we all know plenty of companies that don’t care about employees or users.

Game company Bar decides to launch their own store and pull their game - we’ll call it Nortfite - from the App Store so they can add something shady like crypto features. Nortfite is a massive social game that all your friends play and it’s a huge part of your teenage social life. Your only device capable of playing it is your second-hand iPad. Do you really have a choice about staying in the walled garden? Who needs friends anyway, amirite?


Those examples all sound like issues that should be dealt with by appropriate laws, instead of hoping that the mafia ends up protecting you from worse criminals.


The question isn't whether anyone could choose to stay inside if they want to, the question is whether I can trust that {insert older relative here} will stay inside the garden and not get tricked by a sketchy website into installing something through the back doors the EU is mandating.

If the opening up of Apple were as difficult to use as getting root on Android is I wouldn't have a problem. But that's not what's being proposed, and any attempts by Apple to make it less than perfectly smooth for someone to exit the walled garden are most likely going to be shot down.


How difficult is it to get root in Android in your world? Getting root on Android comes in a variety of difficulties.


I reckon the peaceful existence of macOS is a counterpoint to this argument.


Are you also for workers being able to choose to live in company towns where they have to spend their incomes at the company store? When lock in and network effects come into play then yes the government should make sure that people have real and not only theoretical choices.


> You'll struggle to find people who are against the Digital Markets Act for this reason. It literally only targets the potential monopolists.

You're commenting on an article doing exactly that. So that was not much of a struggle.


> You'll struggle to find people who are against the Digital Markets Act for this reason

You missed most of the discussions about DMA on HN, I guess. There's always someone ready to say how EU will kill all innovation and make Google/Apple exit the market because they dare to question anything.


> Nobody is against regulation that disfavors large incumbents to support competition instead.

Actually, pretty much all EU regulations, and especially enforcement of those regulations, get pundits shouting that the EU is only trying to milk US megacorporations.

> However, virtually every other piece of regulation does the opposite.

Not true, but even if so, not all regulation concerns itself with monopolies. The GDPR in particular is about user rights and should therefore apply to everyone; the same goes for similar kinds of regulation. If corporations cannot survive without violating you in every way possible, then they should not be allowed to live. If anything is lacking, it's enforcement against incumbents.

> What's different now, though, is that the hysteria over AI is leading regulators to pass this incumbent-cementing regulation before we've even had a chance to experience both the upside and downside, so the innovation never happens at all.

Good. Not all "innovation" should happen.

> Combine this with a rapidly aging population in Europe, and I only see this trend increasing. If there's one thing old people hate, it's risk and doing new things.

Again, good. Moving fast and breaking things at societal scale is not a good idea.


Makes no economic sense, but arguably the right response not only to AI, but to everything.


It's not that most people hate risk. It's that individuals are harmed by sociopathic people who exploit methodologies, techniques, and products to enrich themselves, steal, and harm the population. (When I say that, I mean financially, emotionally, socially, physically, etc.) To add further insult to injury, defending oneself against these individuals is disproportionately difficult, verging on impossible.

Socially: Creating and cultivating a culture that screws up dating.

Emotionally: Filter bubbles and data analytics used to push propaganda and motivate people in certain directions (Cambridge Analytica). Additionally, subjecting people to manipulative material.

Stealing: Scooter companies are actively stealing public space (sidewalks) to operate their business, encouraging their users to run people over on the sidewalk (while also making it difficult to identify the individual), etc.

Privacy-wise: Companies are forcing you to give up your private info to live. (Retail tracking of individuals, even across multiple companies; see "The Retail Equation".)


I'm not insulting people who hate risk. For the stability and health of society, it's good that most people are that way. We need people who shake their fist at anything new or different to keep us sane (people like you it seems, from your laundry list of frustrations).

But we also need the people who do like risk-taking and new stuff, and there are fewer of them. So innovation is much more of a fragile thing than stasis.

Even if you think society and human life in general can't be improved in any way, just maintaining the way things are now will require many new innovations and people taking risks on new stuff. Your welfare, lifestyle, and security depend on the risk-taking of others. So we should probably be careful about making it too hard for the folks taking risks (it's already hard enough).

Trust me, the risk-averse folks will still be the dominant voice either way. Even this forum--which started as a community of risk-taking entrepreneurial types--is now dominated by the risk-averse majority.


In theory I agree with your argument, but in practice I find it's very often the AI and other dominant companies, often tech, that are at the root of the risk-averse landscape you observe. To put it simply, major tech companies are among the greatest driving forces of that landscape, because they would prefer to operate without any competition.

My point being, "letting the risk-takers take risks" is not the same thing as "not reining in VC-backed attempted monopolies". You can rein in the monopolies without stopping the risk-taking (theoretically, of course; in practice, reining them in is impossible without a substantive change in the underlying incentive structure of current global society).


> But we also need the people who do like risk taking and new stuff

The issue with that is that risk is usually not borne by the people getting rewards.


> Stealing: Scooter companies are actively stealing public space (sidewalks) to operate their business, encouraging their users to run people over on the sidewalk (while also making it difficult to identify the individual), etc.

OMG I cannot wait for these companies to be fined out of existence.

How can you be allowed to have a business model that relies on people leaving your trash wherever they feel like it?


I have no idea. Chicago had a limited period in which the scooters could operate a few years back. I saw one scooter that hadn't been picked up more than a week past the end of that period. I contacted the alderman to get Streets and Sanitation to remove the vehicle.

What I got was: the alderman's employee tried to get this cleared up with the scooter company rather than having it towed.

I was blown away... if I parked in front of the Division Blue Line station, my car would be fined, towed, charged storage fees, and potentially vandalized within an hour. Them? Oh, the government employee will do customer service for them. I filed a complaint with the city ombudsman and later found out she was removed.


>Regulation usually gets trotted out after the downside of doing [new innovation] is experienced. This always happens, because doing something new always involves unknown risk.

I would challenge "unknown" -- it very well seems like the risks have been known every time; they just don't give a shit.

>What's different now, though, is that the hysteria over AI is leading regulators to pass this incumbent-cementing regulation before we've even had a chance to experience ...

Sounds good to me? It's sociopathic and opportunistic to want to risk major socioeconomic issues for the mere chance of a corporate "innovation".



