Don't build growth teams (conversionxl.com)
201 points by jasim on Sept 13, 2019 | 97 comments



He doesn't directly address that relentless A/B testing can be a pathology that removes the soul and flexibility from your product and can wall in future growth. This is because you have to optimize on one (or a select few) KPIs, but the true health of your business is more nuanced than that. Large analytics operations try to solve this by creating ever more complex KPI models, but that isn't attainable for most.

As a crude example, if you have a community site like Reddit, you could harm the community by over-optimizing your conversion funnel for Reddit Gold while still substantially growing the Reddit Gold business. For a time.

He touches on this by admitting that the super-optimized KISSMetrics homepage was not the right strategy, but I would have loved to read more.


I believe there will be plenty more stories like this coming out and landing well. Some of the testing I've managed hasn't helped end-to-end conversion rate _at all_ and did nothing for short-term results, even when it looked great at first glance. I feel pretty jaded about "performance marketing" and "growth marketing" at this point. There are legitimate growth marketers out there, but it's become a cop-out for The Hard Things in marketing.

Essentially, it's become really easy to get leads for cheap and incredibly expensive to get them to convert. The question becomes, which master do you serve?

I think marketing leaders need to delicately balance the KPIs with the vision of the company or product. In High Output Management, Andy Grove talked about negative indicators for KPIs as well (forgive me, I don't have the book within reach). I don't think any time gets spent on that in my network. Granted, it is hard to do with small teams, and you might say it's the wrong thing to focus on for small teams. Eventually, you reach the point where the numbers look too good to be true, and they usually are--meaning you won't convert these people because they're not ready to buy what you're selling. They're just ready to get the free info you're offering them in two clicks of their time.

I appreciate the candidness about failure even if it wasn't detailed. Can't wait for more similar stories.


If one claims to have growth experience, enough to write a blog post about it, and still can't figure out an appropriate strategy to make a return on the head count, then stop pretending. A growth team should be designed to meet the business requirements and make a return (on top of its head count). Just because one can't create an effective strategy to produce a return doesn't mean you toss out the entire idea of creating Growth Teams. Because you failed, it doesn't mean it's bad. It wasn't bad for Facebook.


The issue is that hyper-optimization is very easy to break and very hard to maintain. In a way it's an anti-pattern you should pursue only if there's no other way to grow.

That is: if you want a car to go faster, you may improve the engine or focus on making the car lighter. If you hyper-optimize the engine, the smallest variation in the fuel will affect performance. If you hyper-optimize the weight, you'd better watch your own diet :) or the car won't be as fast as intended.

All in all, it often goes back to following the Pareto principle. If you need to hyper-optimize your landing pages, you probably need to focus elsewhere. You need good landing pages, not perfect ones.


You are approaching a discussion of one of my favorite topics, "adaptation vs adaptability" [1]. Basically, there is a spectrum between perfect efficiency and complete flexibility. We see this over and over again, from software (where flexibility might show up as generality or abstraction) to organisms/ecosystems to statistical methods.

[1] https://people.clas.ufl.edu/ulan/files/Conrad.pdf


It's the basis of evolution in general. Something can be perfectly optimized for an environment, but when that environment changes it may go extinct.

So instead we don't necessarily want perfectly optimized; we want good enough, because inside that good enough is where adaptability to change comes from.


Yes, and the article I linked to explores this in terms of ecosystems and the flows of material and energy between different organisms within the ecosystem.


Agreed. Relying on growth tactics while ignoring a well-thought-out marketing strategy is doomed to fail.


"Take 95% certainty compared to 99%. Because 95 is pretty close to 99, it feels like the difference should be minimal. In reality, there’s a gulf between those two benchmarks:

    At 95% certainty, you have 19 people saying “yes” and 1 person saying “no.”
    At 99% certainty, you have 99 people saying “yes” and 1 person saying “no.”
It feels like a difference of four people when, in reality, it’s a difference of 80. That’s a much bigger difference than we expect."

I'm a data scientist, and I feel like this is a wildly confusing way to put it.


IANADS and it took a reread to parse it. It's a bit weird but it draws attention to the point.

For others interested: basically, at a 95% confidence level there's a 5% chance the result is due to randomness. At 99%, the randomness is limited to 1%, a 5X improvement (an 80% reduction in the error rate), not a tiny 4-point improvement.


And that's not how to interpret the 95 and 99 thresholds.

Depending on what proportion of "Version B" ideas are truly effective, your false positive rate (what the author is concerned about) will be very far away from 5% or 1%.

At the 95 threshold (p-value of 0.05), it could be that >50% of the versions you identified as "effective" are not really effective (false positives). This is a good article for building the intuition around it: https://www.statisticsdonewrong.com/p-value.html

In the end it will also depend on how costly it is to have a false negative. The author hints that in this case it is very costly.
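To make the base-rate effect concrete, here is a rough back-of-the-envelope sketch (the 10% base rate and 80% power are illustrative assumptions, not numbers from the article):

    # Of all "winning" A/B tests declared at p < 0.05, how many are real wins?
    base_rate = 0.10   # assumed share of tested variants that truly work
    alpha = 0.05       # false positive rate among truly ineffective variants
    power = 0.80       # assumed chance of detecting a truly effective variant

    true_wins = base_rate * power          # 0.08
    false_wins = (1 - base_rate) * alpha   # 0.045
    print(f"{true_wins / (true_wins + false_wins):.0%} of declared winners are real")  # ~64%
    # With a 5% base rate and 50% power, that drops to ~34%, i.e. most "wins" are false.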


My statistics background is a class in college. This statement made sense to me on the first read.


Fractions simplify the hell out of this.

- 95% leaves 1 in 20 saying "no"

- 99% leaves 1 in 100 saying "no"

This makes it intuitive that 95% has 5 times as many people saying "no" as 99%. The wording made sense, but IMO using percentages adds more complexity than necessary.


As far as I can tell, he's mixing units (or scales?). Here's my version.

At 95% certainty, you have 95 people saying "yes" and 5 people saying "no." At 99% certainty, you have 99 people saying "yes" and 1 person saying "no."

While that is a 5x improvement in uncertainty, it's also only 4 more people, a 4.21% improvement in the overall "yes" count.

I can't see how comparing an N in 20 number to an N in 100 number makes sense. IANAS.


Yeah I had to read this over 3 or 4 times before I understood exactly what the author was saying. Very unintuitive way of putting it.


I think that a lot of statistical statements are just unintuitive and are very difficult to explain in terms people grok quickly.


Can either of you kindly help us understand in layman’s terms? (Or point to an informative URL resource?)


The way it makes sense to me is to invert the number you are looking at.

95 & 99 are very 'close together'

But if you look at 5 & 1, the first number is 5 times the second. Huge relative difference.


Sure, but the deal is that at very small numbers, a massive change in % difference doesn't translate into a massive difference. It's _entirely_ dependent on context. The raw stats are meaningless.


The author is muddying the water for effect, that's why his explanation is confusing. To understand what he means, you should ignore the results expressed as percentages and instead focus on 1:19 vs 1:99.

Now, imagine a grid, where the unit on the x-axis is "No", vs "Yes" on the y-axis.

Place both studies in terms of their respective No/Yes relationship.

Study A: 1:19, 2:38, 3:57, ...

Study B: 1:99, 2:198, 3:297, ...

Can you picture the difference in their slopes?
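If it helps, the same two ratios tabulated side by side (just the 19:1 and 99:1 slopes scaled up):

    # cumulative "yes" counts per cumulative "no" at 95% vs 99% certainty
    for no in range(1, 4):
        print(f'{no} "no" -> {19 * no} "yes" at 95% vs {99 * no} "yes" at 99%')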


Reminds me of when I was shopping for a windbreaker.

A brand had two models: one that blocked 98% of the wind and another that blocked 99%. To a casual observer, it wouldn’t be immediately obvious why the second one was significantly more expensive. After all, it’s just a 1% difference!

(Of course, the answer is obvious: the second one was twice as effective at blocking the wind.)


That is a questionable interpretation. It means the 98% one lets twice as much air in, but it doesn't block twice as much wind because the amount of ambient wind remains constant.

To be more explicit, if there are 100 units of wind and one blocks 98 units, the other blocks 99 units. So one lets in twice as much air as the other, but I wouldn't say that one is 'twice as effective'. The relative improvement is (99-98)/98 ~ 1%.


Your comment illustrates the point I'm making.

If windbreaker A lets in 2 units of wind out of 100 and windbreaker B lets in 1 unit of wind out of 100, that means windbreaker B is twice as effective at blocking wind, because it lets in half as much wind as windbreaker A.


I don't agree. It may mean "A is two times worse than B at blocking wind", but not "B is twice as good (effective) as A" at it.


This is the point I wanted to make. English is terribly imprecise about these things.


But that’s not the point though. What is the _real_ difference between 98% and 99% in real world terms? How much colder will you feel? Will it dry out your skin faster? How much faster? It’s a difference against zero.


>>It’s a difference against zero.

Exactly. This is why it is accurate to say that windbreaker B is twice as effective.


The problem isn't with the math, it's with the language, because we intuitively apply the same units to both four and eighty, when it's not four people and eighty people, it's four people and eighty percent, because 4/5 = 80%.


For this and sibling comments, one way to think more intuitively is in terms of odds, p(yes)/p(no). Going from 99% to 99.9% is a 10-fold improvement in conversion odds. That's why, e.g., "many 9s" of uptime is very hard.
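A quick illustration (the uptime framing and the numbers are mine, just to show the scale of each extra nine):

    # odds = p / (1 - p); each extra "9" multiplies the odds by roughly 10
    for p in (0.95, 0.99, 0.999, 0.9999):
        odds = p / (1 - p)
        downtime_hours = (1 - p) * 365 * 24
        print(f"{p:.2%} -> odds {odds:,.0f}:1, ~{downtime_hours:.1f} h downtime/yr")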


I felt the same. It's also misleading and implies the common misunderstanding of certainty levels being equivalent to "% chance that you're right".


“9. Growth teams have limited revenue potential.”

This one is important for almost all optimization efforts. Once the low-hanging fruit is gone, further efforts often don't produce much but can cause harm instead. Stack ranking is a good example. It may make sense to identify and lay off the bottom 10% for one or two years. Then you should stop. But if you keep going you get all kinds of weird dynamics that harm morale instead of improving performance.


Can someone shed some light on the homepage the executives hated that was discussed? Since kissmetrics is an analytics company, doing this kind of thing is their core competency. How could executives change it to a loser immediately? There must be more to the story, or some other side to it.

Shooting from the hip here, possible problems with the "overly optimized" homepage:

1. Long term brand damage? If people go to the site and see something not as "polished" or "professional" looking, the bounces won't leave with just a neutral impression but a straight up negative impression. Sometimes even without a conversion a landing page can sell a visitor on a company and lead to recommendations or return visits down the line.

2. Lower value conversions? Is it possible that people who would immediately sign up without a long sales page end up being less valuable? This is more easily tracked so I'm sure OP would've accounted for this but it's still hard to tell without a ton of data.


This is really important, I think, and overlooked.

A/B testing micro-optimizes an outcome you're searching for. It doesn't tell you the long term trajectory of customers, because that takes way too long to get feedback. An example: "dark patterns". Facebook is deeply damaged, in part by making choices from well-run A/B trials.

Worse, A/B testing searches out local maxima. Maybe you get locked into an approach that precludes you from making the changes that would really drive the visit ultimately.

Performance indicators and properly using quantitative information are really important-- and many executives aren't versed in this. But, conversely, you can't use A/B testing to decide who you are as a company and define your relationship with the customer.


I dunno, I remember going to a talk at SXSW in 2011 about how A/B testing helps you reach local maxima but you need more to reach the next “peak”. I believe the story about Google testing 40 shades of blue was also in common currency back then.

I don’t think any of this stuff is new or undiscovered, really.


Well, I don't think anyone anticipated the "dark patterns" stuff at the time-- that we could be training technology to be kinda-evil in a way that would have massive consequences-- national politics, reputation, etc.


From working in adjacent companies and doing marketing I can guess what the objections were to that:

1. It specifically mentions another company -> this often feels like a jerk move or from a branding standpoint something that diminishes your product. There's another front page HN article right now that leads with this:

https://alexdanco.com/2019/09/07/positional-scarcity/

2. It mentions nothing from the actual product/features which makes it hard to learn from the tests.

3. Some honest diversity questions with choosing that stock art guy as the face of the company. A tech company I worked at found out that we could get leads about 15% cheaper off of Facebook ads if we targeted only men and excluded women from the audience. We made the deliberate choice to use the less optimized combined male + female audience because it felt so ethically wrong to do otherwise.


I can’t speak to this group specifically, just to say that being an exec in an industry doesn’t make you an expert in that industry. When I worked in ad tech, I met plenty of execs who couldn’t run a facebook campaign to save their life. If you’re a founder of an analytics company hiring your CRO, it’d be great to find someone with an analytics background, but more than likely they’re coming in having worked in tech, but not that specific vertical.


It's specifically that vertical. Their product is designed for growth hacking; if they discard the results, it's like saying their own product doesn't work.


I liked this part:

“I have a rule-of-thumb for picking A/B test winners: Whichever version doesn’t make any sense or seems like it would never work, bet on that. More often than I like to admit, the dumber or weirder version wins.

This is actually how I tell if a company is truly optimizing their funnel. From a branding or UX perspective, it should feel a little “off.” If it’s polished and everything makes sense, they haven’t pushed that hard on optimization.”

...but if you take this too far you end up with something like Amazon.com, where everything is “a little off”?


That's actually the part that I found a little bit ironic (though I very much enjoyed the post), considering that the author links to and praises a study that points out that many A/B tests result in illusory wins because growth teams end up testing a zillion different variations and stop tests early whenever one alternative seems promising.

In such an environment, you're bound to end up with all kinds of weird "winners" but there's no guarantee that (1) they're actually better than the alternatives you tested and (2) even if they are, that the advantage is stable and not just a temporary novelty effect.


That part also jumped out to me, but it made me worry that the author isn't accounting for novelty effects. When you make something weird, at first it does well, and then later, as people get used to it, the lift goes away or even turns negative.

With a lead gen funnel, where everyone is going through for the first time, this is much less of an issue than a site with long term users. In the latter case you want to measure learning effects, while in the former no individual is in a position to learn. Novelty could still wear off in lead gen, though, as companies copy each other and your weird new pattern starts to show up elsewhere and become familiar.


And it works. Amazon is the leader. Are you sure it's despite its UX, or did the UX contribute?


This.

I always use Amazon and eBay as go-to examples of optimization that works, as a counter-force to those who want massive redesigns every 6 months because some other new app or competitor has a sexy, slick UI.


I suppose the cynical response is “there’s no such thing as taking it too far”, hence Amazon.


"Paradoxically, Amazon's design may work well for Amazon itself. The company is simply so different from other ecommerce sites that what's good for Amazon is not good for normal sites."

https://www.nngroup.com/articles/amazon-no-e-commerce-role-m...


Very useful content and perspective. I was certain it was all building to the conclusion you should hire the author's growth agency to get that 2x conversion gain and then leave the funnel alone, but the ask never came!


I was expecting that too but it could also be a savvy attempt at gaining credibility.

The real thing I was thinking was that productizing the activities of a growth team would be ideal. If you could build a software product that efficiently managed the process of the growth team, you could make a good value proposition. E.g., one person ($150k/year) using one tool ($100k/year) is significantly cheaper than the $650k/year he quotes.


> I have yet to come across a designer, engineer, or marketer that intuitively understood probability on day one.

Sad but true. And it's not even about understanding the more in-depth math. It's about having an intuition about the 80/20 rule or Venn diagrams. "If I work on this bug I can make 80% of our users happier, but if I work on this bug, which affects no one but our CMO who always complains loudly..." is a type of discussion I've found happens too rarely.


Good article, the author is clearly well informed. In the past I've made the mistake of staying on a growth team for too long, well after the gains mostly ran out. Toward the end, I was doing the best work I could, but it didn't have much impact. Now I'm on another growth team still in the honeymoon phase, grabbing 10% here and 20% there, and thinking what to do next when the gains run out.


I highly recommend considering https://medium.com/convoy-tech/the-power-of-bayesian-a-b-tes... as an alternative to p-value hacking / early stopping.
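For anyone curious what that looks like in code, here is a minimal Beta-Binomial sketch (the counts are made up, and this is a generic version of the idea, not necessarily the exact method in the linked post):

    import numpy as np

    rng = np.random.default_rng(0)
    a_conv, a_n = 120, 2400   # hypothetical conversions / visitors for variant A
    b_conv, b_n = 145, 2400   # hypothetical conversions / visitors for variant B

    # Beta(1, 1) prior -> posterior is Beta(conversions + 1, misses + 1)
    a_post = rng.beta(a_conv + 1, a_n - a_conv + 1, 100_000)
    b_post = rng.beta(b_conv + 1, b_n - b_conv + 1, 100_000)

    print("P(B beats A):", (b_post > a_post).mean())
    print("Expected relative lift:", (b_post / a_post - 1).mean())

The appeal is that the output is a direct probability statement about which variant is better, rather than a p-value.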


There's always something to work on, so the problem is sometimes that growth teams are too narrowly focused. You're done with the landing page, then move on to AdWords. You're done with AdWords, move on to improving the sales team. Making forms work better is not everything.


I think the big takeaway is that, still today, the decision makers don't really understand digital and are still working the same way Mad Men did in the '60s.

Won't mention the brand, but in 2019 I have had (in the context of a new website) comments of "don't want too much text, we will let images lead" - which is just not going to work.

This is supposedly the digital native generation, who are 20 years younger than me.

edited to correct naïve to native


Was there data to show that they're wrong?


naïve (ignorant) -> native (fluent / expert)

not nitpicking; those words have opposite meanings here


Oops, a Freudian slip there (or an autocorrect gone wrong); you are of course quite correct.


Seems correct to me. People don't read text, but they'll look at pictures.


Google doesn't :-) which is the point here


Wow, amazing article. Note for those who scan through, recommendations about what to do instead are at the bottom.


>At 95% certainty, you have 19 people saying “yes” and 1 person saying “no.”

>At 99% certainty, you have 99 people saying “yes” and 1 person saying “no.”

The article was worth the read to me just for this part. I thought I had a pretty decent intuitive understanding of probabilities but this put things in new perspective for me.

The rest of the article is marketing gobbledygook to me, though.


I actually thought it was the worst part of the article because it fails to put the statements into context.

Let me quote from "0 And 1 Are Not Probabilities"[1].

> In probabilities, 0.9999 and 0.99999 seem to be only 0.00009 apart, so that 0.502 is much further away from 0.503 than 0.9999 is from 0.99999. To get to probability 1 from probability 0.99999, it seems like you should need to travel a distance of merely 0.00001.

> But when you transform to odds ratios, 0.502 and 0.503 go to 1.008 and 1.012, and 0.9999 and 0.99999 go to 9,999 and 99,999. And when you transform to log odds, 0.502 and 0.503 go to 0.03 decibels and 0.05 decibels, but 0.9999 and 0.99999 go to 40 decibels and 50 decibels.

> When you work in log odds, the distance between any two degrees of uncertainty equals the amount of evidence you would need to go from one to the other. That is, the log odds gives us a natural measure of spacing among degrees of confidence.

[1]: https://www.lesswrong.com/posts/QGkYCwyC7wTDyt3yT/0-and-1-ar...

Edit: formatting
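The transformation is easy to check numerically (decibels here meaning 10·log10 of the odds, as in the quoted post):

    import math

    for p in (0.502, 0.503, 0.9999, 0.99999):
        odds = p / (1 - p)
        decibels = 10 * math.log10(odds)
        print(f"p={p}: odds {odds:,.3f}, {decibels:.2f} dB")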


The quoted part is sensible, but the title is goofy. Zero and one have to be probabilities. A lot of the underlying math doesn't work out otherwise, but you can also believe something always/never happens (if you're a Bayesian), or observe the same thing happening over and over again (for the frequentists).

Your example shows that (non-linear) transformations aren't linear, and that our gut feelings about likelihood aren't either, which is true enough.

However, you wouldn't say "101 ˚C isn't a temperature" because it takes much less energy to heat a wet thing from 97 to 99˚ than it does to bring it from 99 to 101˚ because of the phase change.


You cannot ever discover in a statistical test that some probability is 0 or 1. Those are impossible values. You can get arbitrarily close to them, but you can never reach those values.

Of course, they are valid values for the set of probabilities, just like 0K is a valid temperature, and c is a valid speed for massive things. You just will never see any of those.


No, the issue you’re referring to applies to anything estimated directly from a sample; there’s nothing magic about zero or one. If you see 5000/10000 heads, the maximum likelihood estimate for that proportion is 0.5, but it still has some uncertainty attached: the 95% CI is about [0.49, 0.51]. With more data, you can shrink that interval, but you can never collapse it completely. On the other hand, that same data does let you assign exactly zero probability to the hypotheses “the coin is all heads” and “both sides of the coin are tails”, since you’ve seen counterexamples of both.
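For reference, that interval is just the normal approximation (a sketch; exact binomial methods give nearly the same answer at this sample size):

    import math

    heads, n = 5000, 10000
    p_hat = heads / n
    se = math.sqrt(p_hat * (1 - p_hat) / n)
    print(f"95% CI: [{p_hat - 1.96 * se:.3f}, {p_hat + 1.96 * se:.3f}]")  # ~[0.490, 0.510]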


> You can not ever discover in a statistical test that some probability is 0 or 1

Sure you can: if you are using inferential statistics and trying to determine the population incidence of some trait and every sample has a sample incidence of exactly 1 then the population estimate will be exactly 1. And that works the same way for 0, or any other value, too.

> c is a valid speed for massive things.

No, it's not. An object with any rest mass would require infinite energy to reach c. It's an excluded upper bound, not a valid speed.


Here's a little puzzle that really puts this in perspective:

I put 100 lb. of potatoes in the sun. The potatoes were 99% water by weight. After drying a while, the potatoes were 98% water. How much did they weigh at that point?

https://en.wikipedia.org/wiki/Potato_paradox


In my college days, we used a similar trick question to harass (or "ragging," as it's known in India) juniors: beginning with $100, how much will you be left with if you lose 50% and then gain 50%?


This is a technique often used in gambling games (poker app minigames, etc.).

They offer a series of actions that let you lose or gain x%, n times in a row, with an equal chance of each. The fact that a +x% gain doesn't cancel out a -x% loss compounds with the gap in laypeople's expectations to produce horrific odds for microtransactions.
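The compounding effect is easy to see with a toy example (the 10% swing and 50 rounds are arbitrary choices for illustration):

    # Alternating +10% and -10% does not break even: each pair multiplies by 0.99.
    bankroll = 100.0
    for _ in range(50):
        bankroll *= 1.10   # "win" 10%
        bankroll *= 0.90   # "lose" 10%
    print(round(bankroll, 2))   # 100 * 0.99**50 ≈ 60.5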


It's $75, right? Don't leave us in suspense!


That’s how they rag on juniors, they never tell them the answer!


Haha! It's indeed $75.


51 pounds?


Close, but not quite.
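The trick is that the dry weight never changes; a quick way to check the arithmetic:

    dry = 100 * (1 - 0.99)        # 1 lb of non-water, unchanged by drying
    new_total = dry / (1 - 0.98)  # after drying, dry matter is 2% of the total
    print(round(new_total, 1))    # 50.0 lb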


It’s actually nonsense.

Certainty has nothing to do with how many people said yes or no, it’s about how many people are in the experiment.

If your sample size is 2, and both like the new UI, then you can't say you have 100% certainty.

I.e. 3/4 people isn’t the same certainty as 75/100

I think the point he's trying to make is that it's easier to get the statement "95% of people prefer X" from a study than to get "99% of people." The first requires >20 people, the latter >100.

But he's tying together the concepts of sample size, confidence, and the outcome of the experiment in a way that doesn't make sense.


That's not at all what he's saying. Maybe understand the article first before calling it nonsense. Kind of funny how all you people are making his point for him!


I didn't understand this part. Why is the author changing scales? Shouldn't it be that:

At 95% certainty, you have 95 people saying "yes" and 5 people saying "no"?

So it's easier to make an apples to apples comparison? What point is the author trying to make with changing the scale?


The point is to show that with 95%, for every 1 person saying "no" you have 19 people saying "yes". With 99%, for every 1 person saying "no" you have 99 people saying "yes". The common denominator is one "no".


That’s not how this works though. The same number of people are landing on your site. You aren’t getting 80 more signups as suggested.


> That’s not how this works though.

What you're struggling with is the counterintuitive nature of applied statistics vs pure math, and this is the point TFA was trying to make.

> You aren’t getting 80 more signups as suggested

TFA isn't saying you "get" more, but just illustrating how different 95% and 99% actually are. It's restating the potato paradox, linked elsewhere in the thread:

https://en.m.wikipedia.org/wiki/Potato_paradox


Genuinely curious aside—you used TFA twice. It means “the fucking article” here, right? When you use TFA, your otherwise helpful comment reads as if it’s angry, exasperated, or some otherwise negative feeling that stands in opposition to the rest of your comment—which reads as a genuine attempt to be helpful and explain. Why do you use TFA?


I think this is one of those things where the meaning has evolved over time, at least as someone who's been on Slashdot since the early '00s but only joined HN a few months ago. Originally, you'd only see "TFA" as part of RTFA, generally with the assumption that the person you're replying to had not read the article. ("If you'd RTFA, ..."; "Maybe you should RTFA.")

But "TFA", although it derives from "RTFA", never seem to had the same negative connotations. It's just that sometimes you want to refer to the original article in question, but "original article" is long to type and/or ambigious. (Do you mean the news article from the NYT, or the scientific paper the NYT article is reporting on?) And "TA" is too short for people to clearly know what you're talking about (And did you mean "teaching assistant"?) "TFA" is short and unambigious: it always means the article linked to from the main page.

Long story short: Although etymology would suggest that "TFA mentions this" is as aggressive as "Maybe you should RTFA", in actual developed usage, they're very much not the same.


> Long story short: Although etymology would suggest that "TFA mentions this" is as aggressive as "Maybe you should RTFA", in actual developed usage, they're very much not the same.

That was my understanding and my intended usage.


TFA can also stand for The Fine Article. By HN rules you should assume the best; in this case, he was talking about the fine article.


I did assume the best—hence pointing out twice how helpful the comment was. I was merely curious why the author of the comment used TFA, and if they meant something else than the typical meaning of the acronym. Unless you personally know the contents of the author’s mind, I don’t believe you can answer for their meaning with much authority. You seem to have misunderstood my intent with asking (or do you generally like reminding long-time HN users about rules?). I’m not offended by the usage of TFA or assuming the worst of the commenter. I am curious about the juxtaposition of a helpful tone alongside a rather well-established acronym. Either way, thanks. I’ll hope the author confirms that was their actual meaning.


Hello!

I wasn't aware "TFA" could be interpreted with a negative connotation, although that seems obvious in retrospect. Just trying to participate in the HN community, and also because it's less typing :)

As a matter of writing style, repeating acronyms are invisible to the reader, whereas repeating words are annoying and remove value. This may be an opinion I gained from my military service, which was acronym-heavy.

I appreciate the question and its perspective.


Except that the potato paradox doesn’t apply here. What the business cares about is total signups and revenue, not the variation in rate of signups. The potato paradox only applies if you’re talking about the rate of misses, which, while interesting, is secondary or tertiary in importance.

An 80% decrease in missed signups only causes about a ~5% increase in revenue. That’s an important point if the team that produced that 5% revenue increase costs a large amount of money to run. At close to a million a year for the team, that’s only going to be worth it for some.

And that was the whole point of the article: this optimization usually isn’t worth the cost.


In the context of A/B testing, if you've decided to stop the test at "95% significance" then you'll stop at the 19-yes, 1-no spot (or maybe 38:2). If you're testing for 99% significance, you won't stop unless you get to 99:1 (or, more likely, you'll need to test way past that because "no" number 2 will arrive before "yes" number 99, so you'll need to test to 198:2 or 297:3 or further).

This is why you should never stop an A/B test once you've "hit your statistical significance". Always choose the number of tests you'd need to prove the significance before you start, and let it run even if it's "obviously winning" (or losing).
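A rough way to pick that number up front is the standard two-proportion sample-size formula (the 5% baseline and one-point lift below are placeholder assumptions):

    from scipy.stats import norm

    p1, p2 = 0.05, 0.06          # baseline conversion and the smallest lift you care about
    alpha, power = 0.05, 0.80    # significance level and desired power
    z_a, z_b = norm.ppf(1 - alpha / 2), norm.ppf(power)
    n = (z_a + z_b) ** 2 * (p1 * (1 - p1) + p2 * (1 - p2)) / (p1 - p2) ** 2
    print(f"~{n:,.0f} visitors per variant, decided before the test starts")  # ≈ 8,155 here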


Can you elaborate on this? When an A/B test has hit the desired significance, what is the value obtained in testing further?


I'm not the author of the article, but I sometimes use the same approach, because in my experience saying 5 people out of 100 does not have the same effect as 1 in 20, even though mathematically they are the same. It provides a mental check on whether you are still OK with your original perception.


Comparing A:B against C:D is much less intuitive for us puny humans than comparing A:B against C:B.


> So it's easier to make an apples to apples comparison?

That's exactly the point. How many people said yes for every one who said no:

50% -> 1 yes for every no

95% -> 19 yeses for every no

99% -> 99 yeses for every no

99.9% -> 999 yeses for every no
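Or as a one-liner over the same numbers:

    for p in (0.50, 0.95, 0.99, 0.999):
        print(f"{p:.1%} -> {p / (1 - p):.0f} yeses for every no")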


Though obviously equivalent, the percentage (95/5) and the reduced fraction (19/1) both have value in different ways to the non-mathematician. Giving two intuitive handles on a thing instead of one is helpful.


He is not changing scale. He is just illustrating what the probability means in terms of the size of the set you need to expect the event to occur once, which has a more practical value than a probability.


Especially as he goes on to state:

> It feels like a difference of four people when, in reality, it’s a difference of 80. That’s a much bigger difference than we expect.

I had to stop reading here.


Depends on how you frame it. There is a difference of 80 when you think about "how many users say yes for each no."

So, what do you want to know: "yes for every 100" or "yes for every no"? It matters.


But that's the point. Ratios are really hard to reason about.

Your mind sees the 5% going to 1% (or the 95% going to 99%) and thinks it's a small difference, when in actuality it's a big change.


When you thought about it some more, did you come back?


To make it match the way the human brain works, this would be more helpful indeed:

>At 5% uncertainty

>At 1% uncertainty



