Besides encouraging people to buy things you might think they don't need, what's an actual harm people experience from targeted ads as opposed to non-targeted ads?
Former gambling addict and current mental health advocate here. For anyone with an addiction or a serious mental health problem, targeted advertising can be very dangerous.
Think about the “filter bubble” effect that we experience on platforms like YouTube where we are always being “recommended” content that confirms our pre-existing beliefs.
Targeted advertising is no different except that it follows you across multiple devices and multiple online platforms in order to sell your attention to the highest bidder.
This might be fine if you are a capable, healthy and intelligent individual seeing ads for computer parts or shoes. What about the recovering alcoholic who is being “targeted” by alcohol advertising? Or the homeless schizophrenic girl I worked with a while ago who couldn’t escape a constant barrage of ads for highly addictive online gambling products?
Our brains are all wired differently and not everyone has the same level of “free will” as you do. The entire purpose of the advertising industry is to push you away from reasoned decision making and towards compulsive consumption.
As adtech becomes better at exploiting our psychological weaknesses and influencing human behaviour, I worry that we will not only see an increase in negative outcomes for the most vulnerable among us - but also an increase in mental illness among the general population as our borderline, compulsive and narcissistic traits are enabled and encouraged by soulless algorithms.
Just to add to this: I've been sober for a number of years, and I remember reading about how alcohol companies specifically target people in recovery. After reading this, the targeted ads on TV and in magazines became very apparent. They're knowingly contributing to ruining people's lives.
I know for a fact this happens. Gambling companies often buy marketing data from porn websites and MLM schemes in order to better target people with “impulse control issues”.
I agree, gambling and alcohol ads should be banned; I personally would never work on them. There are categories of vice products that the law already treats differently in many media.
I don't see why the existence of alcohol should mean SaaS software companies shouldn't be able to reach their target market with ads.
I don't understand the argument. If they're harmful enough, you ban them. If they are not, presumably you accept their existence? If you ban targeting instead, you just increase the cost of all ads, the ads from your list still reach those vulnerable individuals. This feels like an inefficient, weird and indirect tax?
Your point is very convincing, but it would be great to first have a more quantifiable view (beyond anecdotes) of what's going on, and second to have an idea of what to do about it. I still believe that societal/collective memory will eventually find the best way to deal with these challenges. I hate the consensus (here on HN) that people are just too dumb to deal with it on their own and take responsibility for it.
I dislike that the burden of proof is put on the targets of this style of advertisement instead of on the companies themselves. There are plenty of studies on the impact of propaganda and advertisement on society and individuals. In the meantime I will continue to recommend the use of ad blockers and pi-hole.
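For readers unfamiliar with how a Pi-hole blocks ads: it sinkholes DNS lookups for known ad-serving domains so the ad content is never even fetched. A minimal sketch of that core idea, with a made-up blocklist (real deployments pull large curated blocklist feeds):

```python
# Sketch of DNS-sinkhole filtering, the idea behind Pi-hole:
# lookups for blocklisted domains resolve to 0.0.0.0, so ads never load.
# The blocklist entries here are illustrative, not a real feed.

BLOCKLIST = {"ads.example.com", "tracker.example.net"}

def resolve(domain: str, upstream) -> str:
    """Return a sinkhole address for blocked domains, else ask upstream DNS."""
    parts = domain.lower().split(".")
    # Match the domain itself or any parent zone (e.g. sub.ads.example.com).
    for i in range(len(parts) - 1):
        if ".".join(parts[i:]) in BLOCKLIST:
            return "0.0.0.0"
    return upstream(domain)

# Demo with a fake upstream resolver.
fake_upstream = lambda d: "93.184.216.34"
print(resolve("sub.ads.example.com", fake_upstream))  # 0.0.0.0 (sinkholed)
print(resolve("news.example.org", fake_upstream))     # 93.184.216.34
```

Because the blocking happens at the DNS level, it covers every device on the network, including phones and smart TVs that can't run browser ad blockers.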
(Disclaimer: I've been working in adtech for over 15 years.)
Advertisers and publishers don't really want tracking and data collection. It carries huge costs (technical as well as social) with very little benefit for advertising. Advertisers want statistically significant and unbiased population samples, and that's not something you can arrive at by blindly throwing more data at it.
Data collection by Google et al. is really because they eventually want to pivot from adtech to govtech - think "social credit" or "Minority Report". From their vantage point, of course, it's a much more lucrative and advantageous place to be than a mere seller of internet clickbait.
I appreciate you disclosing your experience in the ad-tech industry. But I’m not sure I understand your point.
It sounds like, from your experience, the concept of FLoC from the main article is exactly where Google and others want to be? They want legit population samples versus the ‘noise’ of huge amounts of random individual use data?
But when they are trying to market it to us as users, as a ‘privacy win’, that’s hard to swallow when you’re saying their end goal is some sort of ‘govtech’ or ‘social credit’ system.
> It sounds like, from your experience, the concept of FLoC from the main article is exactly where Google and others want to be?
Yes, if it can be made into some objective standard, and not just another "trust me, I'm Google".
> But when they are trying to market it to us as users, as a ‘privacy win’, that’s hard to swallow when you’re saying their end goal is some sort of ‘govtech’ or ‘social credit’ system.
Yes, because Google is not just an adtech company. Obviously they are more than that. (Or at least they want to be.)
When ad-tech is mentioned, we are not just talking about selling toothbrushes or cat food; it's about how this technology can be used by companies and special-interest groups to damage society, for instance by targeting people who might be more receptive to an ideology that would ultimately undermine our democracies.
The Cambridge Analyticas and the Russian bots happened because the average internet user was not paying attention to ad tech.
We need better education around ad tech, we need more people to understand what these ad companies are enabling so more average internet users can stay better protected, and make better and more informed choices.
If I see an ad for healing crystals on some rando website, then I just think the website is stupid.
When I saw one on Facebook, I was insulted, because Facebook thinks I am the kind of person stupid enough to believe in them. You can write this off as not actual harm because it is only emotions, but it had a negative impact on me, which I consider actual harm.
The other issue is information leakage. If you want to show an article on your phone to a buddy you don't want the ads to be for adult diapers.
It's not just the ads. Do you actually trust the company that has personally identifying information about you? Do you trust the people working at said company? Any time you have information about someone, you can use it for nefarious purposes.
> As described above, FLoC cohorts shouldn’t work as identifiers by themselves. However, any company able to identify a user in other ways—say, by offering “log in with Google” services to sites around the Internet—will be able to tie the information it learns from FLoC to the user’s profile.
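The linkage risk in that quote can be framed in terms of fingerprinting entropy: each stable signal a site can observe adds bits of identifying information. A rough back-of-the-envelope sketch, where the cohort count is an assumption (early FLoC origin trials reportedly used on the order of tens of thousands of cohorts, not a spec-guaranteed number):

```python
import math

# Sketch: how many bits of identifying information a FLoC cohort ID adds
# to a browser fingerprint, assuming roughly equal-sized cohorts.
# NUM_COHORTS is an assumption for illustration, not a spec value.
NUM_COHORTS = 33_000

bits = math.log2(NUM_COHORTS)  # ~15 bits if cohorts were uniform
print(f"cohort ID adds ~{bits:.1f} bits")

# Bits from independent signals add up. Roughly 33 bits are enough to
# single out one person among a global population of ~8 billion.
needed = math.log2(8_000_000_000)
print(f"bits to uniquely identify anyone: ~{needed:.1f}")
```

So even though a cohort is shared by thousands of users, combining it with a handful of other signals (or a single "log in with Google" event, as the quote notes) can pin the profile to one individual.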
Ads, tracking, and SEO content promote low-quality information, allow all kinds of bad actors to profit from bad behavior, promote the use of adware, give an economic advantage to actors with access to big data, create market asymmetries, waste my time and attention, stress me out by forcing me to locally filter barrages of bad, dangerous, or malicious information targeted at me, and endanger my wellbeing because bad information is also targeted at people around me who have influence over my everyday life.
TLDR: The ad industry promotes shit content, finances fake news, and wastes my resources.