This shows the limitations of ML: Facebook has incredible amounts of preference and behaviour data on its users, and still can't reliably match even generic categories such as "high-earner, college educated, etc.".
The reason we think we're in an "AI" boom is 90% these ad companies hyping their own abilities (the same strategy that drove the initial AI boom in the 1950s).
What we call "AI" today is just an associative house of cards which gives the illusion of depth when seen from a very narrow viewing angle.
I can't speak for other people, but the reason I think we are in an AI boom has nothing to do with online advertising. It is that computers can now recognize images, translate language, hold conversations, generate articles, make realistic-looking pictures, play the game of Go, solve protein folding, generate realistic text-to-speech, and recognize voices. None of these things were possible 5 years ago, and every year there is significant progress. I believe that in a couple of years we will see many more applications. This will be as much of a paradigm shift as the invention of the personal computer, the internet and smartphones.
Dragon Systems released NaturallySpeaking 1.0 as their first continuous dictation product in 1997.
<...>
As of 2012, LG Smart TVs include a voice recognition feature powered by the same speech engine as Dragon NaturallySpeaking.
Yeah, I played with Dragon in '97 and it was awful - it didn't work at all, completely unusable.
Today voice transcription is a solved problem, and while their engine might be the same in name, I'd be surprised if the approach isn't totally different from what they were doing in '97. Either that, or the LG TV voice transcription doesn't work as well as everyone else's.
The deep learning revolution and the applications we've seen since 2015 are a major step forward and something truly different. People pretending otherwise are just acting cynical in an attempt to project intelligence or seem wise; it doesn't work.
But you can't claim that something "wasn't possible 5 years ago" if 7 years ago said feature was included in an inexpensive consumer product (an LG TV).
I'm not acting cynical, but it's tiresome to see people claim that 20-30 years ago we were all living in caves and catching bugs with wooden sticks, and now, boom, ML!
Regarding "something truly different", well, my personal computing / mobile experience hasn't changed that much since 2015. Honestly speaking, the progress from 1995 to 2000 felt much more impressive and 'truly different'.
I mean, think of it: during that timeframe we went from DOOM over V.34 modems to amazon.com and ordering pizza online.
It matters whether it actually works, and whether it works broadly or just for limited use cases. As I said in the original comment, I think you will see many applications in the near future. The last 5 years have been focused on research (comparable to 1990-1995); only now are we getting ready for commercial applications.
Yeah - entire classes of problems went from unsolvable to solved. Some of that is in the consumer space and some of it is not.
I feel like an AGI could accidentally wipe out half of humanity and there would still be people commenting on HN about how the exact same technology already existed in a roomba seven years ago.
Honest question, no snark --- which consumer-space problems were solved, if I don't play Go and don't have an FB account to recognize me in group photos (both of these statements are true)?
Which shows some generality: the best way to accurately predict an arithmetic answer is to deduce how the mathematical rules work. That paper shows some evidence of that, and that's just from a relatively dumb predict-what-comes-next model.
It’s hard to predict timelines for this kind of thing, and people are notoriously bad at it. Nobody would have predicted the results we’re seeing today in 2010. What would you expect to see in the years leading up to AGI? Does what we’re seeing look like failure?
Tesla Autopilot is notoriously unsafe, and still extremely bad even at simple problems. Not sure how it could be called a "solved problem" - especially when the CEO of Waymo has said publicly that he doesn't believe we'll "ever" reach level 5 autonomous driving.
There have been massive improvements in automated driving, but if you want to talk about solved problems, parking assist is about as far as you can get.
Translation is much better, and is often understandable, but it is far from a solved problem.
Colorizing/repairing old photos also often introduces strange artifacts in places where they are unnecessary. Again, workable technology, not a solved problem.
Voice transcription is also decent, but far from a solved problem. You need only look at YouTube auto-generated captions to see both how far it has come and how many trivial errors it still has.
And regarding "generalizable intelligence" and arithmetic in GPT-3, the paper can't even definitively confirm that the examples they showed are not part of the corpus (they note that their attempts to find them didn't turn up anything, but they stop short of saying they are certain the particular calculations were not in the corpus). They also make no attempt to check the model itself to see whether some sub-structure may have simply encoded an addition table for 2-digit numbers.
Also, AGI will certainly require at least some attempts to get models to learn about the rules of the real world, the myriad bits of knowledge that we are born with that are not normally captured in any kinds of text you might train your AI on (the idea of objects and object permanence, the intelligent agent model of the world, the mechanical interaction model of the world etc.).
Autopilot is distinct from full self driving - I was talking about driver assist.
'Solved' is doing a lot of work here; it's an unnecessarily high threshold, and I'm willing to concede the word. I think we would be more likely to agree that, in a lot of consumer categories, things went from unusably bad or impossible to usably good but imperfect in the last five years, due to ML and deep learning approaches.
The more clearly 'solved' cases of previously open problems (Go, protein folding, facial recognition, etc.) are mostly not in the consumer space.
As for the GPT-3 bit, I encourage others to read that excerpt: they explicitly state they excluded the problems they asked from the training data, so it's not memorization. The failures it makes are things like failing to carry the one; it certainly seems like it's deducing the rules. It'll be interesting to see what happens in GPT-4 as they continue to scale it up.
I keep hearing voice transcription is "solved", but both where I see it in consumer products (e.g. YT auto subtitles) and in dedicated transcription services I've tried it's far from solved.
Even the good translation services will randomly produce garbage, struggle with content that's not in nicely fully formed sentences, and are severely limited in language support.
First, nobody is claiming that people were living in caves before ML. I understand you're exaggerating for effect -- but that's the same thing the parent comment is doing when they say something "wasn't possible" 5 years ago. They don't mean that it was literally impossible, they mean that it was sufficiently bad that a typical consumer would be unlikely to use it back then -- whereas now the quality has improved to the point where these things are ubiquitous.
Similarly, both Amazon [1] and online pizza ordering [2] existed before 1995. They were just not commonly used.
>they mean that it was uncommon for a typical consumer to experience it back then.
Siri from Apple was launched in 2011, as some other commenter noted below. Also, "On June 14, 2011, Google announced at its Inside Google Search event that it would start to roll out Voice Search on Google.com during the coming days".
If that does not count as 'a typical consumer experiencing it', well, I do not know what does.
9 years ago, mind you, not 5. And I think that 5 years ago voice recognition was more or less good already. In the 4 years after the initial launch of their products in 2011, both Apple and Google acquired large enough datasets to learn from.
What we are still struggling with is the processing of fuzzy queries, something along the lines of 'Siri tell me which restaurant in my area serves the most delicious sushi according to yelp reviews and also allows takeout', but this is not a voice recognition problem (though a typical consumer may think it is).
> ‘Siri tell me which restaurant in my area serves the most delicious sushi according to yelp reviews and also allows takeout’
Siri stumbles at way less complex queries than that. Every year or so I retry using it, and give up due to the error rate.
An accuracy of 99%, even at 10x the time, is apparently preferable for me.
My experience has been very different. I use an Alexa purely to control lights and set alarms, and have enough misses at just those tasks that I don't consider it particularly good at them.
I'd take a literal clapper that hooked into smartbulbs over it at this point.
Live captions for English video or audio are nice, but they still don't work for music (not even rap), and they don't work for other languages. They might work in a lab setting, but they don't on currently available phones.
mjburgess has a point. There is a relationship with online advertising for the following reason: online advertising has been a perfect testbed for the latest AI research for years because (1) it leverages massive amounts of data (2) is not mission-critical (the human cost of failure is low) (3) has little accountability regarding fairness and explainability. For these reasons, online advertising has been surfing the latest AI hype, well in front of other domains (healthcare, defense, education, etc.) where either (1), (2) or (3) is not met.
You can make a list like that every five years. When Deep Blue beat Kasparov, people said the same thing, because on the surface it looks exceptional, but when it comes down to it all those applications are extremely limited, closed systems. We will have more of them in five years, but as the saying goes, you don't get to the moon by climbing trees.
In five years we will without a doubt have systems that play more games, solve more puzzles, fold more proteins and so on, but we'd have more progress on intelligence if we could build a machine that could reliably plumb a toilet or catch a mouse as well as a cat.
Out of your list, translating language and solving protein folding are the only 2 that are likely to have any kind of major impact on the world in the somewhat near future, in my opinion. Image recognition may be a distant third, as it could prove very useful as a tool in many domains that have lots of visual data to sift through.
Voice recognition is neat, and it is extremely useful in certain niches, but it is generally far inferior to mouse+keyboard or touch or dedicated buttons in all cases where those are practical. It is almost unusable in public spaces. Text-to-speech is mostly in the same boat - extremely useful in some niches, not a game-changer in any way I expect.
Text generation is "realistic" only in shape, and for now remains a novelty. Picture generation is the same. They are at best at a level where similar output already costs pennies to obtain through human labor. Having conversations in a meaningful way is even farther off.
Playing Go better than humans is a fun gimmick - an awesome achievement in some sense, but with zero practical applications outside of Go.
Overall, there has been good progress in AI/ML, but I don't expect we'll see any major changes too soon from this. Especially since the idea of actually seeing commercially available self-driving cars has been pushed back a decade or three compared to the initial optimism.
I'd also add that most of the progress in ML/AI in the last few years has essentially been of a technological/engineering kind. There haven't really been that many deep scientific insights; we don't necessarily understand any of these problem spaces better. We've just been able to match certain neural net architectures with certain problem spaces, and massively advanced the realistically available computing power to actually train these nets on enough data.
But we haven't gotten any new insights into stuff like language from GPT-3, for example. We haven't even gotten any new insights into Go from AlphaGo!
Your statement that we haven't learned anything about Go from AI agents is just false. Also, we have seen rapid improvement on the accuracy/speed curve, most notably with NAS (neural architecture search). It hasn't just been increased computational power.
AFAIK the use of learned embeddings in the biomedical space has improved our understanding of the science / our ability to find high-value experiments to run.
I think the coherency of GPT-3 and the RL-learned tool-use work both raise extremely interesting questions about the nature of language and high-level intelligence - questions that we couldn't meaningfully ask before that evidence.
I also think you're being shortsighted about the possibilities that open up as we enable computers to understand and interact with the real world in more ways. Increased visual and audio understanding will lead to more passive computing, and that has all sorts of fascinating implications.
I don't know what insights you're looking for. We're seeing AI reveal fascinating phenomena in language, vision and general-purpose learning. Scientific advancement is often about asking the right questions, and unexplained phenomena are the driver of those questions. Much like certain unexplained phenomena (something to do with spectra, I want to say) developed into the field of quantum mechanics, what we're seeing in AI is currently unexplained phenomena.
Are you in the field? You’ve focused on the things that made global headlines, but I feel like there’s so much more.
> Your statement that we haven't learned anything about Go from AI agents is just false. Also, we have seen rapid improvement on the accuracy/speed curve, most notably with NAS. It hasn't just been increased computational power.
We've learned how to play better Go, but not anything like a new (mathematical) theory of Go, as far as I have read. And I didn't claim it's just about computing speed improvements; I also noted that we have figured out how to match specific NN architectures to specific kinds of problems.
> AFAIK the use of learned embedding in the biomedical space has improved our understanding of the science/our ability to find high value experiments to run.
Awesome, this is one thing I hadn't heard about.
> I think the coherency of GPT-3 and the RL learned tool use work both raise extremely interesting questions about the nature of language and high-level intelligence. Questions that we couldn’t meaningfully ask before that evidence.
They do raise interesting questions in a philosophical sense, but those questions do not seem to have piqued the interest of the AI community. The GPT-3 paper covers some interesting facts about GPT-3's ability to do arithmetic, but only briefly, and that's about it. They explicitly attribute GPT-3's impressive gains over GPT-2 entirely to the hugely increased parameter count, and mention that they believe increasing the parameter count by another order of magnitude will increase the realism even more - that's the extent of their analysis of its implications for language that I've seen (perhaps I missed something?).
I haven't even seen a real discussion of how different GPT-3's output is from its training inputs. Given the gigantic corpus used to train it, I would expect more scrutiny in this area. For the arithmetic operations, they mention some attempts to check that it wasn't reproducing a calculation it had explicitly seen, but they are not confident that those checks were enough.
More importantly, it is obvious that even if some GPT-{X} were a complete model of human language, it would (1) have nothing to do with how humans acquired language, either individually or evolutionarily; and (2) be unable to produce meaning that is not captured in its corpus, as it has no notion of the human world and its reality. It might produce commentary on current events that sounds plausible, but any relation between its commentary and reality will be either entirely captured in the training corpus or the priming text, or entirely accidental.
> I also think you’re being shortsighted about the possibilities that are opened up as we enable computers to understand and interact with the real world in more way. Increased visual and audio understanding will lead to more passive computing and that has all sorts of fascinating implications.
Of course that's a strong possibility - the future is usually surprising. However, the only applications actually visible on the horizon are bone-chilling: mass surveillance that even Orwell didn't dream of, increasingly being actively used by more and more authoritarian regimes, from China to the US to Europe. It may well soon turn out that the AI fearmongers were right to fear AI, though of course not in the puerile fantasy of the paper-clip optimizer.
> We’re seeing AI reveal fascinating phenomena in language, vision and general purpose learning.
I have seen little to no coverage of such fascinating phenomena, and entirely too much coverage of precision rates, eerily human-like language generation, and the belief that bigger models and more data are the only way forward. Everything I have seen has been AI research not only not asking questions about such phenomena, but actively shutting down such questions, claiming that the billion-parameter models trained on terabytes of data ARE the answers [0].
> Are you in the field? You’ve focused on the things that made global headlines, but I feel like there’s so much more.
I am not in the field of AI/ML, no (the closest I got was doing my bachelor's thesis on an RL approach). The things I focused on were the advances highlighted by the post I replied to. I have no doubt that there are many advances in the techniques of AI that would completely fly over my head, and I am certain there are successful applications of AI to hard problems that I have never even heard about. AI is certainly a useful technique in many fields, though often over-hyped as well. Still, judging by what is commonly highlighted as the major achievements of the field, my prediction is that its positive impact on the world will remain limited for the next 1-2 decades; its negative impact through enabling mass surveillance may well be far greater.
I'm sure that most people didn't expect major changes from PCs, the internet and smartphones before mass adoption. As this is a community tied to a startup incubator, I would encourage everyone here to keep an open mind.
They are, currently at least, a small improvement for a small part of many people's lives. They probably have a measurable impact on the world, but not anything that could be called world changing.
I don't care about driving. It still takes the same amount of time.
What I care more about is cooking. I'd rather have that one solved, as it would actually save me time. (No, I'm not looking for restaurants, meal delivery services or microwave meals).
I've found traffic aware cruise control greatly reduces driver fatigue as you don't have to constantly manage your speed. This improves both safety (you stay more alert and can spend more of your effort scanning and maintaining situational awareness) and means you're less tired to engage in other activities when not driving.
I was at a non-tech company 7 years ago where we were generating article summaries and performing image recognition. We had at least one org applying speech-to-text to a business use case.
I’m not saying there hasn’t been innovation in the past 5 years — absolutely there has.
In that industry, 2013-2016 was peak "ML" and "Cloud": the CIOs at my company and competitors were fully bought into the ML and cloud hype and how it was going to solve all kinds of problems, without understanding what those problems were, and without realizing the complexity of getting meaningful, applicable data for them.
In the past few years, it feels like more people realized that ML is less mathematical magic and more of different “kinds” of curve fitting.
According to FB metrics, FB AI magic works great. Then again, according to FB metrics, video engaged better than text... and it turns out they were lying. I wouldn't put it past them to goose the numbers a bit for their core product.
I think the problem is deeper than that. Every company wants to advertise to high-earning, college-educated people, and nobody wants to advertise to unemployed anti-vaxxers. But presumably Facebook has a lot more of the latter. So they have to get a bit "creative" with their targeting if they want to sell their inventory...
It's easy to target but what FB wants to do is to sell junk traffic as if it were highly lucrative. It prefers to sell more rather than do proper targeting. The traffic swap was an old black-hat trick used by ad network affiliates even before 2000 (they used to push pop-under traffic as SEO traffic, for example). Just cheating.
It's not right to judge ML by the quality of recommender systems. First, recommenders are just a small corner of ML, and at this scale are only developed at a few companies. Second, I am not 100% sure they maximize what you want to see; instead they maximize what will make them more money. That's why it sucks.
I'm a bit confused why we are assuming the low performance is the fault of ML. I would think instead that this low performance is deliberate, because it locally optimizes Facebook's KPIs.
My understanding is that Facebook has a lot of data on the elasticity of ad buyers. If Facebook had even more precise algorithms, they would likely charge even more for impressions and clicks, and buyers may be reluctant to pay even higher prices.
Alternatively more precise targeting would highly bias the money people spend toward larger companies that can afford better keywords.
If you look at the communication, it seems that the Ad people didn't have a clear idea how well-targeted anything was.
It's easy to see how the statistical problems compound.
FB does not have exact earnings data, so you have to infer it from, say, likes. Let's say you build a model salary(likeBMW, likeTravel, ...) = %likeBMW + %likeTravel + ...
This gives you an estimate which is 70-80% accurate, so you correctly predict >£250k/yr roughly 75% of the time and mispredict in roughly 25% of cases.
Now it seems to me that this 25% error compounds across several categories: when you say "College AND HighEarner AND ...", more than 50% of your targeted group can fail to match the exact criteria (it only takes one failing condition to miss the whole conjunction).
And according to FB comms, it looks like >50% didn't match the client's chosen criteria.
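The compounding argument above is easy to sketch. Assuming (purely for illustration) that each criterion is predicted with 75% accuracy and errors are independent across criteria:

```python
# Rough sketch of how per-criterion error compounds under a conjunction
# of targeting criteria. The 75% figure is an assumed illustrative
# accuracy, not a real FB number, and independence of errors is assumed.
per_criterion_accuracy = 0.75

for n_criteria in range(1, 5):
    # Probability a targeted user genuinely matches ALL n criteria.
    p_all_match = per_criterion_accuracy ** n_criteria
    print(f"{n_criteria} criteria: {p_all_match:.0%} of targeted users match all of them")
```

With three AND'd criteria this already drops to about 42%, which is in the same ballpark as the 41% precision figure quoted in the emails.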
I think this is the right analysis of the issue. ML systems of this kind are very bad at making targeted judgements; they're really little more accurate than guessing the mean of something (e.g., salary) for the group.
All ML has to do, for FB/Google/etc. is improve targeting a few percent to have a significant value proposition.
However, the propaganda has it that ML systems can "target" you, etc. Only in the way a nuclear bomb "targets" a house.
Could it be a misunderstanding of the outcome, or perhaps bias in Facebook's algorithms? The best predictor of your target is your target: in this case, who will buy your product. Using a proxy such as income and education is a good start but surely cannot be the best approach, because it artificially limits the search space. The best target is some ML-determined combination of features. My guess is that for niche items such as expensive watches and cars, Facebook's targeting may be too liberal with lower income earners - but I think this is the result of being very good at targeting the general audience for medium-to-low-priced goods and services.
* 40% of targeted viewers didn't match "high-earner, college educated", the system is crap!
or
* 60% of targeted viewers did match "high-earner, college educated", that's amazing!
I dunno... do you have a better way of reaching those targeted viewers? Do you have a better way of measuring results than abstract page views? Depending on the context, 60% could be very worth it.
Yes, humans are also notoriously bad at Bayesian probability. This may actually be extremely impressive if high earners make up something like less than 1% of general page views.
In that case, non-targeted advertising gives you a <1% chance of reaching your audience per impression. FB is much better.
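The base-rate point can be made concrete with a back-of-the-envelope calculation. Both numbers below are assumptions for illustration (a 1% base rate of high earners in the general audience, and the 41% precision figure from the article):

```python
# Sketch of the base-rate argument: even a modest-looking precision is a
# large lift over untargeted ads if the target group is rare.
# base_rate is an assumed illustrative figure, not a real FB statistic.
base_rate = 0.01           # assumed share of high earners among all impressions
targeted_precision = 0.41  # precision figure quoted in the lawsuit

lift = targeted_precision / base_rate
print(f"Targeted ads are ~{lift:.0f}x more likely to reach the audience than random")
```

So the same 41% reads as "terrible" against a 100% promise but as a huge multiplier against a rare audience; which framing is fair depends on what FB actually sold.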
>... do you have a better way of reaching those targeted viewers?
I'd be curious to see how blogs do at targeting certain demographics. Presumably if you had a blog aimed at high-earning college educated people you could get a pretty high ratio of (target demographic/all viewers).
But I assume advertisers generally trust metrics more than intuition so you need something like Facebook or Google's ad network to identify someone as belonging to a certain demographic before the advertiser will believe it.
>Presumably if you had a blog aimed at high-earning college educated people you could get a pretty high ratio of (target demographic/all viewers).
Correct. The problem, of course, is scale: how much money can such a blog accept from advertisers before buying ads from it gets absurdly expensive?
FB gives you targeting and scale, and you need both to run a meaningful advertising campaign.
The emails claim the FB employee said 41% precision - so they only reached the targeted viewers 41% of the time, i.e. 59% of the time an ad for "high-earner, college educated" reached someone who wasn't a high earner and/or college educated (I'm unclear whether it's supposed to be 'and' or 'or', but it's largely irrelevant).
In the third paragraph of the article:
> But nearly 40 percent of the people who saw Investor Village’s ad either lacked a college degree, did not make $250,000 per year, or both, the company claims.
If the campaign is well measured, this is taken into account.
FB advertisers are paying for results, and they will pay, or not, for those results - not really for the targeting.
If ABC corp knows for sure their target is top 50% Households, and FB can only roughly provide that, then the value will work itself out in the numbers.
If FB can provide that target more precisely, those ads will convert better becoming even more valuable.
I disagree that this shows anything about ML at all. The particular example here - high-earner, college educated - should be trivial to deduce without machine learning. Just to start, many users add their college directly to their profile. How is it possible that they can't even get that right?
There's some confusion about what is really going on here.
In truth, advertisers care more about volume than quality, and so the industry has a bias in that direction. It's entirely possible to have much higher precision, but that precision turns out not to be what advertisers are looking for.
For example, take "college educated high-earner". Well, there are a bunch of such types who don't spend or make purchasing decisions like the rest of that group, and a bunch of people outside that group who behave exactly like them. How did the advertiser figure out that they want that group? Did they really figure out that if you don't meet those criteria, their ad is a waste of time? Of course not. They're just guessing, and they don't intend for their guess to constrain their audience.
Instead of limitations of ML, it could simply be a matter of not having enough inventory with the desired parameters to sell to everyone who wants it. Which doesn't make it any less sleazy if true, of course.
This article is quoting employee conversations from 2016. It wasn't until late 2016 that Google published its research on wide-and-deep recommender systems. I'd be interested to see whether people still think behavioral data is poor and ML has not provided advances.
That said, some demographics are broadly correct from behavioral tracking (e.g. a child who watches Baby Shark on repeat), but good luck telling the difference between a rich child and a less rich child.
I think what's scary is how much of a black box it is. Some people rave about how the FB algorithm has improved to the point where they set the broadest ad audience possible. Two years ago, segmentation and testing were the name of the game; now you can apparently just let it loose.
AI doesn't need to be perfect to beat other approaches hands down and make hundreds of billions for companies using it. Having worked in the field, I'll take the 40+% revenue increases I got with AI in A/B tests any day of the week.
But mostly objective limitations. The most advanced minds executing the most robust, cutting edge ML aren't likely to do very much better. And most of the metrics by which they might suggest otherwise end up being flawed themselves due to the bias introduced by the feedback loops involved.
When your predicate influences the next version's antecedent, it's easy to show a big ROI, lift, or whatever other jargon you might care to use.
I know everyone loves to hate Facebook and articles that confirm that bias are very popular right now, but this lawsuit doesn't seem to have smoking gun evidence like the headline suggests. Have you ever written an e-mail or Slack message to a peer complaining that something at your company might not be working well? Or that something is totally broken and you think it should be prioritized in the ticket queue? Imagine those statements being cherry-picked and used to argue that your entire company is a fraud. That's essentially what's happening here.
The quoted e-mail is from 2016, and includes questions like this:
> That manager proceeded to suggest further examination of top targeting criteria to “see if we are giving advertiser [sic] false hope.”
You'd think that if they concluded that they were giving advertisers false hope, the lawyers would have simply quoted that discussion, rather than an earlier discussion where a manager was asking for further investigation to make sure they weren't giving advertisers false hope. That is, doing their job to check that the ad targeting was proper.
The originators of the lawsuit are angry that their ads were shown to people who didn't have college degrees and did not make over $250,000 per year:
> Investor Village said in court filings that it decided to buy narrowly targeted Facebook ads because it hoped to reach “highly compensated and educated investors” but “had limited resources to spend on advertising.” But nearly 40 percent of the people who saw Investor Village’s ad either lacked a college degree, did not make $250,000 per year, or both, the company claims. In fact, not a single Facebook user it surveyed met all the targeting criteria it had set for Facebook ads, it says.
It's not clear if Facebook explicitly knew at the time that the ad targets didn't meet the criteria, or if Facebook's targeting data was simply incorrect for those who were targeted. Again, if they had evidence that Facebook was deliberately showing ads to people who they knew didn't meet the criteria, you'd think the lawyers would have led with that evidence instead of digging deep into old e-mails where managers proposed investigating the ad performance.
I know Facebook is the quintessential evil tech company in the 2020 zeitgeist, but it looks to me that these lawyers are trying to use that to their advantage to win a court case, not that they've stumbled upon some smoking gun.
> It's not clear if Facebook explicitly knew at the time that the ad targets didn't meet the criteria, or if Facebook's targeting data was simply incorrect for those who were targeted.
The plaintiffs don't care, nor should they. When I go to a restaurant and order a hamburger, if the server brings me a tuna melt I send it back. It doesn't matter why the server brought me a tuna melt. Facebook is essentially saying that even though they wrote hamburger on the menu, and accepted my order for a hamburger, I should have known they meant a 41% chance of a hamburger and a 59% chance of something else because taped underneath my seat is a 15,000 word Restaurant Use Agreement printed on microfiche.
We should give some amount of acceptance to the argument that modern contractual agreements are too complex for individuals to read and understand fully. But this is entirely unacceptable as an argument from one business buying services from another business. It's not the "fine print", it's the contract. If the plaintiff's argument amounts to "I didn't read the contract," they will rightly be laughed out of court.
Not necessarily. Given the scale of firms like Facebook, there's a case for client categorisation like financial firms do, with additional duties and protections in play when dealing with less sophisticated users, e.g. small merchants. Even beyond this, mis-selling does lead to losing lawsuits despite what is attempted in the fine print.
As someone who has purchased Facebook ads, it's /so very evident/ that this is happening, that it's not particularly hard to compile evidence.
Here's a specific example: we ran around $1k in ads last year, targeting senior-level engineers at technology companies, but the ads were getting liked by mostly people who worked minimum-wage jobs. Twitter and LinkedIn targeting were fine with pretty much the same parameters.
This is one example, obviously, but it's so trivial to see that targeting is almost completely uncorrelated with reality, that it's not hard to come up with more examples.
I run $1,000s if not $10,000s a day in FB ads. I can confirm we've all known this for quite a long time.
Our general rule is if the product doesn't have broad appeal, you don't run it on FB.
My running theory on all ad networks is pretty simple. There is a very small subset of users, 25%-30%, who are regular purchasers, and these companies know who they are based on conversion data. They generally just throw your ad in front of these people and let it ride.
> we ran around $1k in ads last year, targeting senior-level engineers at technology companies
While sure, it sounds like you were ripped off, I would expect that just about any mass-market medium offering ad targeting to "senior-level engineers at technology companies", "senior managers", "wealthy yachting enthusiasts" or similar super-high-value clients would be a rip-off if it was offered. I'd imagine anyone in that position would avoid anything that could target them with ads (blocking ads, going incognito, avoiding social media, etc.) because otherwise they'd be continuously barraged by companies.
I think there's a huge difference between "statements are being cherry picked" and Bosworth, a major confidant to Zuckerberg, stating that "[I]nterest precision in the US is only 41%—that means that more than half the time we’re showing ads to someone other than the advertisers’ intended audience". I appreciate that you're pointing out the movement du jour against Facebook, but here it looks cut and dried to me.
>>Bosworth, a major confidant to Zuckerberg, stating that [...]
Talking about (inadvertent) misquotes: the quote was attributed to a "February 2016 internal memorandum sent from an unnamed Facebook manager to Andrew Bosworth, a Zuckerberg confidant and powerful company executive who oversaw ad efforts at the time [...]". (Italic emphasis is mine)
You're citing an absence of evidence as evidence of absence.
Consider the following: if Facebook knows its targeting is ineffective and doesn't meet the criteria, why would they put that in writing, knowing that anything in writing can be subpoenaed?
Facebook's ad inventory is sold by auction. Inaccurate targeting isn't a big deal as long as all advertisers are treated the same. Furthermore, granular targeting is no longer the way to scale on Facebook. Therefore, inaccurate targeting will have little to no effect on most campaigns' efficacy (at least the ones designed by those who are knowledgeable about the ecosystem). Any FB marketer worth their salt knows this.
Many people are claiming FB ads are a scam due to their personal inability to make ads work for themselves. As someone who's spent 8 figures on FB with no special connections, I can tell you this assertion is complete hogwash.
In fact, the opposite assertion than the one in the article is true: Facebook is LEAGUES ahead of its competition as far as its ability to accurately predict user conversion rate given a specific ad and user.
Couldn't this be sample bias since you were spending 8 figures? At that level of spending, you might be getting a proportionally higher 'correct' target demographic.
For small-time advertisers, FB knows these folks would have less capital to fight back against poor ad practices.
I think it's possible but unlikely first and foremost bc FB has financial incentive to create a fair advertising ecosystem. Since FB sells inventory by auction, they make more money when there's more competition. There's a very long tail of small advertisers on Facebook, and handicapping them would ultimately lead to lower revenue for FB. On top of that, if such favoritism were real and ever were revealed, FB would suffer huge reputational damage which just isn't worth the risk when they're sitting on a cash cow.
I started out with low budgets too, and I can see how it would be easy to blame the platform when the real problem is poor campaign design and creative. Don't get me wrong, big advertisers have a huge advantage bc they can iterate on creative and campaign design faster. And, this advantage often explains why most small advertisers fail. But, at the actual auction level, I've never seen any evidence of favoritism nor do I think it would make sense from Facebook's perspective.
3-5 years ago, the most effective way to design campaigns was by micro-targeting audiences. That is, you would have thousands of adsets, and each adset would target a small slice of your target audience.
The theory behind micro-targeting was that your audience's conversion rate varies considerably by certain key targeting attributes. And, if your audience has varying conversion rates, your CPC bids ought to reflect this.
For example, let's assume your LTV is $100. And, let's assume your average conversion rate is 1%, but actually, males convert at 0.5% and females convert at 1.5%. If you do NOT split your audience by gender, you will be forced to bid $1 per click for every user. However, if you split your audience by gender, you can bid $0.50 for men and $1.50 for women. By splitting your audience, you gain considerable efficiency. The theory behind micro-targeting was to find the permutations of targeting attributes that split your audience into segments with the most variance in conversion rates.
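A minimal sketch of the arithmetic above (the $100 LTV and the per-segment conversion rates are the hypothetical figures from the comment, not real data):

```python
# Break-even CPC is just expected value per click: LTV * conversion rate.
# All figures here are the hypothetical numbers from the comment.

LTV = 100.0  # dollars of lifetime value per conversion

def breakeven_cpc(conversion_rate: float, ltv: float = LTV) -> float:
    """Highest cost-per-click you can pay and still break even."""
    return ltv * conversion_rate

# One blended audience at a 1% average conversion rate.
blended_bid = breakeven_cpc(0.01)

# Split the audience and bid each segment at its own break-even point.
segment_bids = {
    "male": breakeven_cpc(0.005),    # converts at 0.5%
    "female": breakeven_cpc(0.015),  # converts at 1.5%
}

print(blended_bid)
print(segment_bids)
```

Splitting doesn't change the average economics; it just stops you from overpaying for the low-converting half and underbidding on the high-converting half.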
However, what's happened over the past few years is that Facebook's AI has become vastly better at doing micro-targeting than even the best individual marketers. This is partly because they have more data than platform participants, but also because they have much smarter engineers. As a result, it's now become better to hand over targeting responsibilities to Facebook's AI. For the most part, you just tell them what you want the average CPA to be, and they do the targeting for you behind the scenes.
This has actually been true since at least 2013, as long as you were optimising for something that made you money.
If you're optimising for a proxy objective, more granular targeting can make sense (if you have more information about your customers), but if you can just make money from website conversions or in-app purchases, you're better off letting the algorithm do its thing.
(The magic is driven not by incredible engineers but by having more people to show the ad to, so that the only people who actually see it are those with high expected conversion rates. This is perhaps the real dirty secret of internet advertising: the platforms are predominantly showing ads to people who were probably going to convert anyway.)
"Crap" is a relative term; 41% (many years ago) was still a lot better than putting out an ad in the New York Times. FB ads aren't perfect, but IME they're a lot better than anything else — the only purchases I've made off ads in the last few years have been from targeted IG/FB ads, and quite a few friends have mentioned the same.
Sort of ironic how the Intercept is apparently capable of believing both that:
* FB is completely incompetent at targeting "to the point of being fraudulent," and can't hit the broad side of a barn with a targeted ad buy, and
* FB is filled with malicious targeting geniuses, whose ability to pinpoint target ads to anyone with incredible precision has made a mockery of democracy.
(Disclaimer: I recently left IG, although I didn't work on anything ads-related.)
How true. "Those evil Trump campaign staffers turned election in his favor using amazing evil micro-targeting FB techniques which are complete scam and do not work at all".
What I mean is: I haven't bought anything off of non-FB/IG ads, despite seeing many of them e.g. in the New York Times app (among others). It is indeed true that these are generally also "targeted," they're just not targeted well.
This doesn't surprise me at all. I've run a few, albeit very small, campaigns on FB. I set up targeting (occupation and geography) and I would get people outside of my target 'liking' my ad. I'd reach out anyways to ask if they were interested in my product and they would never respond. I often wondered if they were just fake/bot accounts to make it look like my ads were getting attention. I'm glad some company has taken them to court to determine what is really going on.
I can support what you're saying with the same experience.
I did some SEO for a local landscape company. They were already doing well, but asked me about doing ads on FB. I told them I thought it was a scam, but they persisted so I ran a few campaigns for irrigation in the Spring and then fall cleanup starting in August.
Same thing happened. They got a ton of impressions and clicks (no surprise) and about a dozen leads. When they went through them, nearly all were for fake numbers (no answer and no VM) or email addresses that bounced. They ended up with one lead for a service they don't even make any money on.
In the end, they lost a few hundred dollars, but it was enough to convince them that running FB ads to generate business was not a good idea; in fact, it convinced them the whole thing was rigged and a scam.
How were the bids set up? When you underbid you'll get matched with people nobody wants. Also, if real users aren't interested in whatever interaction you're soliciting, you're bound to only get bots.
I've done some FB advertising for news type content (much easier to target properly) and was getting incredible value for the money. Usually it's pretty obvious real fast if you're getting real action or not and you can adjust accordingly.
I'm not the person you are responding to, but I am the parent comment. Anyways, my issue was that often the users that were "liking" my ads weren't even in the geographic region I selected. So if I say "spend $20 a day" and target people in X area, why are they showing my ad to people in Y area? I know they have options for lived in vs travelled through etc but these were clearly people that lived in Y area.
What could explain the mechanics of how this happens?
Surely FB employees aren't managing those bot accounts themselves. Do they generally determine accounts that engage with ads to be "human/real", thereby incentivizing bad actors who spin up fake FB accounts to click/"like" ads?
They could also do it at arm's length. A lot of MTurk-style sites have loads of jobs along the lines of, "click this ad, fill out this form, get $0.02".
Anecdotally, facebook adverts are effective. They may not be able to be tracked from facebook, but I've had plenty of people recommend a product in person that they 'saw on facebook' but never liked or interacted with.
They're probably no more effective than billboards, or something though. That may be different with everyone stuck at home!
If Facebook ads are as ineffective as people say, what have people done that did work? The only thing I know for sure is that word of mouth can make an outstanding product spread quickly, but you still need to get your first 100 users somewhere
Let’s be real, ad tech is a shady business with Facebook leading the charge. This shouldn’t be surprising to any ad executive whose performance isn’t measured by actual dollars generated, but by impressions, money spent, cost per click, and various other dubious metrics which are conveniently provided by the ad network.
Not to say the ad network isn't valuable (it is, and the largest tech companies are ad networks), but that the numbers are juiced shouldn't be surprising. It's in every decision maker's interest that they are.
You should probably be talking about ad tech apart from Facebook, as very few ad-tech companies have large teams dedicated to help advertisers run experiments on Facebook.
Up till very recently, said teams had a separate reporting line from sales to ensure that they were telling advertisers the truth.
I’m assuming Facebook is not so incompetent they can’t show an ad to its intended audience. I think the main issue is to do any sort of ML or user based targeting, you need to have a minimum amount of data to make good judgements in an auction. But then you realize you have multiple ads that are competing against each other and another ad might convert better or most important of all: pay more. Thus even if you have an ad which is targeted to that specific user, it might not get shown. Sometimes it is better to not show an ad at all. With all these trade offs, the result can sometimes look quite bizarre.
My theory is - it's probably on purpose. If you spent $100 to reach your audience, your ROI would be pretty good for the $100. What if you spent $200 for the same audience reach? You still get results, albeit it's a bit more expensive now. But still better than other competitors out there because they don't have as much granular data about a user as Facebook does. Not even Google has this level of detail (Who are your friends, what you like to eat, What topics you like to chat about, etc.)
If you're on android, Google very much has your contacts and who you communicate with, where you go to eat and where you call for pickup, and your interests via search and gmail...
So I’ve heard (and believe) this for years: the targeting of ad networks borders on useless. But as the old adage goes: I know 50% of my advertising is worthless, I just don’t know which 50%.
So why haven’t we run out of suckers yet if this stuff is really that bad? If businesses weren’t seeing the ROI, you’d think they’d see that. So either, as bad as it is, it’s still worth it, or they aren’t able to measure ROI properly (a distant third: they are able to, and still do it?)
I’d love to hear from someone with first hand experience.
If you're running high volume campaigns with fast sales cycles you'll see huge gains from using advanced targeting options. All of them are best guesses and far from certain. Income targeting may be inferred by their (inferred) zip code and (inferred) age for example.
Identifying what to optimize is trivial at scale, simply look at your campaign performance by targeting options and it usually screams out at you. Doing better in the female, 18-25, urban demo? Raise bids for them and lower bids in the non-performing groups. It hardly matters to me how often Facebook or Google got it wrong if pulling the lever still works.
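That "look at performance by segment, then pull the lever" loop can be sketched like this; the segment names, spend figures, target CPA, and 20% adjustment step are all invented for illustration:

```python
# Toy bid-adjustment pass: raise bids on segments beating the target
# cost-per-acquisition (CPA), cut bids on segments missing it.
# All campaign data here is invented.

TARGET_CPA = 20.0  # dollars we're willing to pay per conversion

campaign = {
    # segment: (spend_dollars, conversions, current_bid)
    "female_18-25_urban": (500.0, 40, 0.80),  # CPA $12.50: performing
    "male_35-44_rural":   (500.0, 10, 0.80),  # CPA $50.00: lagging
}

new_bids = {}
for segment, (spend, conversions, bid) in campaign.items():
    cpa = spend / conversions if conversions else float("inf")
    if cpa < TARGET_CPA:
        new_bids[segment] = round(bid * 1.2, 2)  # raise bid 20%
    else:
        new_bids[segment] = round(bid * 0.8, 2)  # cut bid 20%

print(new_bids)
```

The point of the sketch is the commenter's claim: you don't need to know how often the platform's targeting is wrong, only which levers correlate with performance.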
Most of the sour grapes I've seen are brand or lead gen campaigns run by the inexperienced griping about individual clicks. If you can't start broad and prune, you need to start hyper targeted and closely monitor.
I seem to recall something about when major companies pulled their advertising from YouTube over affiliation with hate speech, some noticed that their sales did not decline. Does anyone know how advertising impact is measured online? It seems like a leap of faith every time.
I'm no fan of Facebook, but honestly, this doesn't seem like a big deal. Actually, online ad networks like FB or Google are way more transparent than the alternatives. Sure, when you say you want to target X it's not going to be 100% X, the data is noisy.
Ultimately, the data speaks for itself. You can see impressions, clicks, and conversions, and it's almost instant. If you don't like the results, you stop/change your campaign.
I don't understand how they could miss on 'interest'. Isn't this literally just based on what you 'like'? That shouldn't require advanced ML, just a simple JOIN!
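For what it's worth, the naive version the commenter has in mind would be something like this toy join (all tables, users, and ads invented); the catch is presumably that explicit likes are sparse and stale, so real interest targeting leans on inference:

```python
# Naive interest targeting as a literal join of explicit likes
# against each ad's target interest. Toy data throughout.

likes = [
    ("alice", "photography"),
    ("bob", "photography"),
    ("bob", "fishing"),
]

ads = [
    ("camera_ad", "photography"),
    ("rod_ad", "fishing"),
]

# Equivalent of: SELECT user FROM likes JOIN ads ON likes.interest = ads.interest
audience = {
    ad: sorted(user for user, interest in likes if interest == target)
    for ad, target in ads
}

print(audience)  # {'camera_ad': ['alice', 'bob'], 'rod_ad': ['bob']}
```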
As a med student, my ads are 90% ads for Caribbean med schools, nursing schools, and online nurse practitioner degrees that can be completed part time in under a year or two (these are the most worrying).
They target me about as well as a celebrity pitch.
"At the outset, the court notes the unusual nature of these requests: defendant is asking the court to seal an opposing party's pleadings. In ordinary course, such a request is improper. The FAC is plaintiff's pleading and, within the bounds of Rule 11, it is free to allege what it wants independent of any stricture defendant wishes to impose. The court understands, however, that plaintiff has taken discovery and that the FAC is more detailed than an ordinary verified pleading. Given that difference, the court will consider these requests."
The declaration filed by one of Facebook's legal staff to support this absurd request gets absolutely skewered.
Regulatory officials never seem to get much truth from Facebook, but I have always thought litigation, namely its "side effects", have the potential to expose truths about Facebook.
It is a known fact among marketing agencies that detailed targeting on Facebook is almost fake. You can safely target by age / country / region / sex and that's it.
There is also a "similar audience" setting, but it is a black-box solution that takes away control over the campaign.
Moreover, even when setting the campaign up right, almost 80% of the clicks I got in some campaigns were bots.
Except for Wish, the ads I get on Facebook and Instagram tend to be highly relevant to my interests. For all their flaws, it's one thing these platforms got right - for me, anyway. It seems to me that a lot of interesting ads from random businesses come up on my feed, although I cannot say how many of those are from the small businesses that FB managers believe are being taken for a ride here.
I get some annoyingly perfect ads for Amazon and Mercari, but neither of them will actually direct you to that product when you click, so it never quite works out for them.
Anecdotally, FB shows me the same ads on repeat: Few are relevant, and the same ones repeat past a threshold I'd assume would indicate I'm not interested.
As someone who runs facebook ads for ecomm, repeated targeting is a core part of making facebook ads profitable.
You'll almost never make money by running a simple, one-step ad with a link to your website.
What we do is to target a group of people with a simple ad, then people who engage with or leave impressions on that ad will be run into a second ad, and so on and so forth until you finally funnel them into a conversion ad (for example, a discount) to try to convert those "warmed up" leads. All the campaigns leading up to the conversion ad aren't profitable, then you make all your money on the conversion ad.
So advertisers actually intentionally target you repeatedly, most people it just annoys but for the subset of people that are actually interested in your product, it can work. Familiarity and repeated visibility seems to lead to a sort of trust for certain people.
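The multi-step funnel described above can be sketched like this; the stage names, user IDs, and engagement data are all invented:

```python
# Sketch of a "warm-up" retargeting funnel: each ad's audience is
# whoever engaged with the previous ad. All data here is invented.

funnel = ["broad_ad", "warmup_ad", "conversion_ad"]

# Pretend engagement logs: who liked/clicked each ad after it ran.
engaged_with = {
    "broad_ad": {"u1", "u2", "u3"},
    "warmup_ad": {"u2", "u3"},
    "conversion_ad": {"u3"},
}

audience = {"u1", "u2", "u3", "u4", "u5"}  # initial broad targeting
for ad in funnel:
    # Only people still in the audience can engage with this step;
    # the engagers become the audience for the next, "warmer" ad.
    audience = audience & engaged_with[ad]

# `audience` is now the fully warmed-up set that reached the
# conversion ad, where (per the comment) all the profit is made.
print(audience)
```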
Yeah, I've frequently experienced seeing an ad for X service for the 10th time and finally finding myself curious to find out what it's actually about.
guessing that if anyone understands bullshit it's ad buyers; if they're on fb it's because it has higher ROI than other platforms for their specific campaigns, not because they believe the hype
(of course if actual fraud took place fb should give back the money they collected)
I once briefly worked at a startup which would scrape and NLP-analyze public Facebook posts for indicators of given conditions, like descriptions of symptoms and general affect, collect their information, and then sell it to pharmaceutical marketing companies...
It’s hard to say Facebook’s targeting is completely garbage while at the same time believing that Cambridge Analytica, Trump, and others were able to use their micro-targeting to sway elections.
Obviously the targeting works. But, like most software, I’m sure it is buggy. Probably both sides of this are true.