
It's a great example. This is the very reason I have scaled back the amount of time I rock climb as I've gotten older -- not because any individual outing is dangerous, but there's an element of Russian roulette wherein the mere act of doing it more often dramatically changes the risk.



"There are bold X, and old X, but no old, bold X."

Replace X with any practitioners subject to sufficient risk as a result of their practice.

I first heard it in the context of mushroom foraging.


The risks are highest when learners are at a beginner to intermediate stage. They know the basics, and have gained some confidence, but don't know enough to get themselves out of trouble.

This is called Stage 1 in the Gordon Model of learning: unconscious incompetence.


While this is true, in the context of alpine climbing where I first heard this statement, the bold alpinists who die young are very much not beginner-intermediates. I've interpreted this differently than just the "Bathtub Curve"[1] applied to dangerous pursuits.

Rather, there is a certain amount of objective risk in alpine environments, and the more time you put yourself in that environment, especially in locations you aren't familiar with, the greater the chance that something will eventually go wrong.

I'm always surprised by the number of famous alpinists who weren't killed on their progressive, headline-capturing attempts but rather on training attempts and lesser objectives.

[1]: https://en.wikipedia.org/wiki/Bathtub_curve


My wife teaches people to ride horses for a living so we talk about the safety of that.

You hear a lot about people who get seriously injured riding who are often professionals or people who ride competitively at a high level. They are doing dangerous things and doing a lot of them.

We don't think it is that dangerous for people who ride at the level we do, out of maybe 15 years we've had one broken bone.

The other day I noticed that we had acquired a used horse blanket from another barn in the area, a barn that's a running joke at ours because of its bad safety culture. They are a "better" barn than ours in that they are attached to the show circuit at a higher level than the bottom rung, but we are always hearing about crazy accidents that happen there. When I was learning to ride there they had a confusing situation almost like

https://aviation-safety.net/database/record.php?id=19810217-...

with too many lessons going on at once where I wound up going over a jump by accident after a "near miss" in which I almost did. (I never thought I could go over a jump and survive, as it was I had about two seconds to figure out that I had to trust the horse and hang on and I did alright...)


Another good analogy is that, in the US Air Force, the flight crews considered most dangerous are those with the highest collective rank. Sure, the young crews are learning, but the old ones still think they know it all and have often forgotten critical details.


(Example) When you go climbing somewhere, you have something like a 40% chance of getting killed that you can mitigate completely by skill, and an additional 0.1% chance that something goes wrong by some fluke, which you can't mitigate at all.

Pretty good if you go climbing 10 times a year. Pretty bad if you go 1000 times.
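
A rough sketch of that arithmetic, assuming the 0.1% fluke risk is independent from outing to outing (the numbers are purely illustrative):

    p <- 0.001                            # per-outing fluke risk you can't mitigate
    risk <- function(n) 1 - (1 - p)^n     # chance of at least one fluke in n outings

    risk(10)     # ~0.01  -> about 1% over 10 outings
    risk(1000)   # ~0.63  -> about 63% over 1000 outings

The per-outing number never changes; only the number of outings does.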


Isn't this somewhat expected?

They wouldn't be famous if they didn't succeed on headline-capturing attempts and there are only so many you can realistically do in life. They are dead however as doing dangerous things often enough will kill a substantial number of practitioners.


Need to consider that headline capturing objectives ar a few in a lifetime and training goes on all the time.


No, the risks are greatest when you reach complacency. Beginners, even bold ones, take some care. You mostly see this in things like forklift drivers, because it takes years of doing the same thing every day before you really get expert enough to be complacent.


There is also something called "Normalization of Deviance", defined better by a quote: "Today, the term normalization of deviance — the gradual process by which the unacceptable becomes acceptable in the absence of adverse consequences — can be applied as legitimately to the human factors risks in airline operations as to the Challenger accident." *

Most of you have probably heard of it in the context of fighter pilots doing riskier and riskier maneuvers, but it seems to apply to drivers who speed a lot. 80 starts seeming really slow to them after doing it for years.

* https://flightsafety.org/asw-article/normalization-of-devian....


Thanks for posting these, I'd only seen Normalisation of Deviance mentioned in these two youtube videos by Mike Mullane and never thought to look any further:

https://www.youtube.com/watch?v=Ljzj9Msli5o

https://www.youtube.com/watch?v=jWxk5t4hFAg

and the uploader references some further links:

https://www.fireengineering.com/leadership/firefighter-safet...

https://www.flightsafetyaustralia.com/2017/05/safety-in-mind...

and references this book (about the Challenger Disaster):

https://www.amazon.com/gp/product/B011DAS53Y/

which has an overview here:

http://web.mit.edu/esd.83/www/notebook/The%20Challenger%20La...

including these two excerpts I found interesting in this context: "Chapter nine she explains how conformity to the rules, and the work culture, led to the disaster, and not the violation of any rules, as thought by many of the investigators. She concludes her book with a chapter on lessons learned."

"She mainly emphasizes on the long-term impact of institutionalization of the political pressure and economic factors, that results in a “culture of production”."


Vaughan's book The Challenger Launch Decision doesn't tell this truth: the root cause of the accident can be traced back a decade to the acceptance of a design that was "unsafe at any speed".

Every other manned space vehicle had an escape system. The crew of the Challenger was not killed by the failure of the SRB or the explosion of the external tank, but rather when the part of the orbiter they were in hit the ocean. That section could have been built as a reinforced pod with parachutes or some other ability to land, but they chose not to because they wanted to have the payload section in the rear.

In the case of Columbia it was the fragile thermal protection system that did the astronauts in. There was a lot of fear in the first few flights that the thermal tiles would get damaged and fail, and once they thought they'd dodged that bullet they didn't worry about it so much.

"Normalization of deviance" was a formal process in the case of the space shuttle of there being meetings where people went through a list of a few hundred unacceptable situations that they convinced themselves they could accept, often by taking some mitigations.

When the design was finalized it was estimated that a loss of vehicle and crew would happen about 2%-3% of the time, which was about what we experienced. (Originally they planned to launch 50 missions a year, which would have meant the continuous trauma of losing astronauts and replacing vehicles.)

It's easy to come to the conclusion that it was a particular scandal that one particular concern got dismissed during a "normalization of deviance" meeting but given a poorly designed vehicle it was inevitable that after making good calls for thousands of concerns there would be a critical bad call.

"Normalization of deviance" is frequently used for a phenomenon entirely different than what Vaughn is talking about, something informal that happens at the level of individuals and small groups. That is, the forklift operators who come to the conclusion it is OK to smoke pot at work, the surgeon who thinks it is OK to not wash his hands, etc. A group can pressure people to do the right things here, but it's something different from the slow motion horror of bureaucracy that tries to do the right thing but cannot.


I'm reminded of Louis Slotin experimenting with the "Demon" core. The core was surrounded by 2 half spheres of beryllium. The core would go critical if the 2 spheres were not separated from each other.

The standard protocol was to use shims between the halves, as allowing them to close completely could result in the instantaneous formation of a critical mass and a lethal power excursion. Under Slotin's own unapproved protocol, the shims were not used and the only thing preventing the closure was the blade of a standard flat-tipped screwdriver manipulated in Slotin's other hand. Slotin, who was given to bravado, became the local expert, performing the test on almost a dozen occasions, often in his trademark blue jeans and cowboy boots, in front of a roomful of observers. Enrico Fermi reportedly told Slotin and others they would be "dead within a year" if they continued performing the test in that manner. Scientists referred to this flirting with the possibility of a nuclear chain reaction as "tickling the dragon's tail", based on a remark by physicist Richard Feynman, who compared the experiments to "tickling the tail of a sleeping dragon".

On the day of the accident, Slotin's screwdriver slipped outward a fraction of an inch while he was lowering the top reflector, allowing the reflector to fall into place around the core. Instantly, there was a flash of blue light and a wave of heat across Slotin's skin; the core had become supercritical, releasing an intense burst of neutron radiation estimated to have lasted about a half second. Slotin quickly twisted his wrist, flipping the top shell to the floor. The heating of the core and shells stopped the criticality within seconds of its initiation, while Slotin's reaction prevented a recurrence and ended the accident. The position of Slotin's body over the apparatus also shielded the others from much of the neutron radiation, but he received a lethal dose of 1,000 rad (10 Gy) neutron and 114 rad (1.14 Gy) gamma radiation in under a second and died nine days later from acute radiation poisoning.

https://en.wikipedia.org/wiki/Demon_core#Second_incident


I call that the rattlesnake principle. Treat all dangerous tasks like you’re dealing with a rattlesnake.


Also known as an advanced beginner, the stage before competency. Someone who has enough knowledge to, as they say, "be dangerous".


They call it the "intermediate syndrome" in freeflying.


That reminds me of when a family friend from church needed help clearing his land and thought the easiest approach would be to teach an overconfident 14 year old (me) to drive his tractor. He told me I would never be worse at operating it than the second time we went out. He was right.


A WTA top 70 tennis player from my country (aged 35+, thus possibly facing the end of her pro career) recently rephrased a well known proverb: "What doesn't kill you, makes you stronger -- or, crippled."


There's chance of death, but there's also duration of suffering while dying.

I'm guessing that falling from a cliff is "better" than dying from a poisonous mushroom. The latter scares the hell out of me. The former is a glorious ride until the ride is over (regrettably).


I regret to inform you that possible negative outcomes of falling from a cliff include life-long pain, paralysis and brain injury.


>The former is a glorious ride until the ride is over (regrettably).

If you sense you're falling to your death, it won't be too glorious (personally speaking), more like freakish. And the fall can always fail to bring death!


Yet the older they are the more they test in production.


Mountaineers is where I heard it.


This line is famous in aviation, where X = pilots


Yeah after I read that many years ago I saw it come up in several other contexts (including aviation) and realized it probably originated elsewhere.


> the mere act of doing it more often dramatically changes the risk.

Kind of. However, you already know that the first N outings didn't have a disaster. So those should be discarded from your analysis.

Doing it N times more has a lot of risk, doing it the N+1th time has barely any.
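
A quick illustration of both statements at once, with a made-up per-outing risk and assuming independence:

    p <- 0.001          # illustrative risk per outing

    1 - (1 - p)^1       # the next single outing:        0.001
    1 - (1 - p)^500     # the next 500 planned outings:  ~0.39

The outings already survived drop out of both numbers; what matters is how many more you still plan to do.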


This is called the turkey fallacy: the turkey was fed by humans for 1,000 days, and after each feeding it updated its belief that humans care for it, until that belief became almost a statistical certainty.


Is this the reverse of the Gambler's Fallacy? Instead of "The numbers haven't hit in a while, therefore they're going to hit soon." it's "The numbers haven't hit yet, therefore they're never gonna hit."


Also known as complacency. Working in a woodshop, one of the things you are most vulnerable to is failing to respect the danger you're in. This is why many experienced woodworkers have been injured by e.g. a table saw - you stop being as careful after such long exposure.


A related thing is normalization of deviance. You start removing safety because you see nothing bad happened before, until you are at a point where almost no safety rules are respected anymore. You can see this a lot in construction videos.


Yup, complacency can kill you.

In this case [0], a skydiver forgot to put on his parachute...

https://reverentialramblings.com/2018/08/15/the-skydiver-who...


Oh man, that's terrible. I can certainly understand how someone without a checklist that is verified by two people can do that, especially if you have a backpack on to mask the fact that the parachute is missing.

Many times if I wear a tight jacket in the car, I forget to put my seat belt on, because I unconsciously mistake the pressure of the jacket for the seatbelt's, even though putting on a seat belt is usually the first thing I do.

Poor guy.


I generally take off my jacket before driving for that very reason.


Luckily newer cars won't stop beeping if you forget your seatbelt, so the problem is mitigated. Not so for parachutes, apparently.


To be honest, I never wear a parachute when driving!


Better not drive close to cliffs then!


Wow, that's terrifying and a good cautionary tale.

Also, when I read

> I’m hoping you can you forgive me as a minister of religion for likening this story to a spiritual cautionary tale. Yes, we do need to live each day as if it might be our last.

I thought, "Hmm, sounds adventist", and sure enough :-)


And why pilots traditionally work from checklists, even if they've done the process thousands of times.


That only applies if you are updating priors. In this case the odds are fixed, so the GP is correct.


The odds of a rock climbing accident are known and fixed?


Probably not. But they aren't affected by the previous N climbs, at least as described by the GP post. They are considering a fixed-odds event, and the probability of (bad thing happens) over a sample path through time. That's not the turkey fallacy.

In other words, the difference between the turkey and the climber is that the climber knows the odds (at least nominally), and that's important.


All this reminds me of “if you are immortal and cannot be harmed, what are the odds of getting ‘stuck’?” I’d venture 100%.


Surely something about the turkey getting fatter each time complicates this example.


How's yesterday's tree impacting today's?


If you'll die if a roll of three dice comes up sixes, you're not really in a lot of danger. If you do it every day, you have about 15 months to live.


If you've already done it for 12 months without it happening though, the next 3 months are no more dangerous for you than for someone starting from scratch.


Very true. The only winning move is not to play!


That's true, but usually when we are deciding which actions to take, we're comparing "I take action A" versus "I take action B", not "I take action A" versus "some random other person takes action A."


OK, the next 3 months are no more dangerous for you than if you hadn't spent the last 12 months doing it. What you did in the past has no bearing on the chances going forward. I'm not sure if it's more clear to say it like that or not. Clearly, humans have a lot of trouble speaking and thinking clearly about statistics.

The next three months are no riskier than your first three months were. They don't become more risky because they will add up to 15 months total -- once you've already finished the first 12 without incident.


For the dice roll example that is true. But for other examples it isn't. For example, a device that has run for x hours and is approaching its MTBF is probably more likely to fail in the next x hours. Or if there is some cyclic behavior, like waiting outside for a hot day.


>you have about 15 months to live

Or a few minutes ... or 20 years.

That's the thing w/ statistically independent trials.


That's like the difference between

You could win 100mm in the lottery (true statement!)

Lottery tickets are a good investment (almost always a false statement).

Planning on "well it could happen, technically" isn't a good approach.


But when looking at a possible positive outcome, such as the lottery case, it can "make sense" to buy one ticket.

Your chance of winning goes from No Chance to A Chance, which is an infinite improvement.


That's not how this works as a rational investment choice.

It's true that you can never win a lottery you don't enter, but the expected value of that ticket is vastly lower than what you paid for it. That means, as an investment, your $10 will be expected to do better in literally anything with a positive return.

If you are buying > $10 worth of dreaming (for you), fine - but that's consumption.


Yup. Don't do stuff (repeatedly) that have an absorbing barrier - https://medium.com/ml-everything/nassim-taleb-absorbent-barr...


Anyone who's rolled double natural 1s with advantage would never take this bet - and your example is twice as likely to occur!


The expected value will be 6³ = 216 days or about 7 months. Where do you get the factor of two from?

Also, “not really in a lot of danger”? Those odds are worse than that of a 100 year old in the USA (they have a life expectancy of over two years)

Certainly, as an additional risk, it’s high.


You forget: once you roll three 6s in a row, you're dead, and you don't roll any more. Your expected calculation assumes that people keep rolling after they get 666.

Though I'm not sure where they got their figure from, because there isn't an “expected time to live”; there's a “90% probability to live” time, a “5% probability to live” time…


There’s a difference between expected value of number of days you’ll survive and the number of days a given fraction of the subjects will survive, but I don’t see either supporting the claim “If you do it every day, you have about 15 months to live”.

  (215/216)^450 ≈ 0.124
, so about one in eight will survive for 15 months or more. The “5% probability to live” time is around day 645 (about 1¾ years):

  (215/216)^645 ≈ 0.0501
the “half will survive at least for” point is around 5 months:

  (215/216)^149 ≈ 0.501


Funny book recommendation: "The Dice Man" by Luke Rhinehart.


If you think a 1/216 chance of sudden death isn't a lot of danger, I don't want to go rock climbing with you!


Don’t look at the actuarial tables. The odds are worse than that over a year’s time after ~35


What am I missing? 6 x 6 x 6 = 216, or about 7 months.


RandomSwede's comment is accurate, but maybe the below can help add some 'flesh' to their response.

Basically, the problem is that you can't just multiply it all together.

(1/6) ^ 3 is correct, and the probability of rolling 3 sixes is indeed 1/216 today, but if you repeat independent events, you don't just add up the probability.

Imagine instead of dice it's coins, and it's only two. Your odds of getting HH today are 1/4, but the odds of getting HH by day four are not now 4/4. We know that it's possible, although unlikely, you could flip coins for the rest of your life and NEVER get two heads. So we know that you can't ever have odds of 4/4 (or 1), only odds that approach 1. So that means that we can't say 216 days from now will be 216/216.

Instead, you need to work out the probability of the event NOT happening, and then repeatedly NOT happening independently (so we can multiply the probabilities together).

For our two coins, the probability of NOT getting HH is 3/4. On day 2, the probability of NOT getting HH on both occasions will be (3/4)×(3/4) (9/16, 56.25%). By day 3, it will be (3/4) × (3/4) × (3/4), or 27/64. On day 4, it'll be 81/256, or 31.6%. Now we can subtract from 1, to work out that by day 4, the odds of us having hit HH are almost 70%.

As RandomSwede explains, there's a 50% chance that you will have rolled three sixes by day 149. By day 496, you're down to 10%.
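
If anyone wants to check those figures, the closed form is just the complement rule; a quick sketch in R:

    1 - (3/4)^4          # two coins, at least one HH in 4 days:    ~0.68
    1 - (215/216)^149    # three sixes at least once in 149 days:   ~0.50
    1 - (215/216)^496    # three sixes at least once in 496 days:   ~0.90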


    runs <- 10000
    x <- vector(mode = "numeric", length = runs)
    for (i in 1:runs){
      while (sum(sample(1:6, size = 3, replace = TRUE)) != 18){
        x[i] <- x[i] + 1
      }
    }

    summary(x)
    quantile(x, c(0.5, 0.8, 0.9)) 

    > summary(x)
       Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
        0.0    62.0   149.0   216.2   300.0  1902.0
    > quantile(x, c(0.5, 0.8, 0.9))
    50% 80% 90%
    149 350 495
A simple simulation, run 10K times, counting the number of rolls before three dice add up to 18.

The numbers very much agree with you. The median is 149. The 90th percentile is 495 in the simulation, which is close enough to 496. There is very much a long tail in the data, so the median and the average will not be the same. Is it a coincidence that the mean is 216?


No, I don't think this is a coincidence, but I'm not completely confident in saying that.

Thinking about it doesn't make me feel like I'm solving a maths problem. I start stacking ideas and concepts in a way which makes me feel like I'm overlaying them in a way which is incorrect.

It makes me feel like I'm solving a riddle, which hints to me that maybe it's actually a question of semantics and definitions rather than a maths problem.


Dice (typically) do not have a memory, so whatever happened yesterday will not influence what happens today. If you roll it daily, your chance of surviving at least N days is (215/216)^N, for the specific case of "rolling three 6 on three 6-sided dice" that puts you at ~50% at 149 days and at ~10% at 496 days.

At sufficient scale, even incredibly unlikely things become quite probable.


    runs <- 10000
    x <- vector(mode = "numeric", length = runs)
    for (i in 1:runs){
      while (sum(sample(1:6, size = 3, replace = TRUE)) != 18){
        x[i] <- x[i] + 1
      }
    }

    summary(x)
    quantile(x, c(0.5, 0.8, 0.9)) 

    > summary(x)
       Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
        0.0    62.0   149.0   216.2   300.0  1902.0
    > quantile(x, c(0.5, 0.8, 0.9))
    50% 80% 90%
    149 350 495
A simple simulation, run 10K times, counting the number of rolls before three dice add up to 18.

The numbers very much agree with you. The median is 149. The 90th percentile is 495 in the simulation, which is close enough to 496. There is very much a long tail in the data, so the median and the average will not be the same. Is it a coincidence that the mean is 216?


Off the top of my head, I don't know. It MAY be related to the fact that 6³ is 216, but I don't have deep enough statistics knowledge to say for sure. You could try it again with 3 8-sided dice and rolling 24; that should give you ~50% at 344 iterations, and ~90% at 1177 iterations. If my supposition that the mean is related to the number of possible rolls is correct, then the mean should end up being 512.

Iteration counts gathered with Python and a (manual) binary search (actually faster than writing code).
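
If I had to guess at the mechanism: the number of failed rolls before the first success should follow a geometric distribution with success probability p (1/216 for the d6 case, 1/512 for the d8 case), whose mean is (1-p)/p, i.e. 215 and 511, or 216 and 512 if you count the final successful roll as well. That would make the observed sample means expected rather than coincidental. A quick closed-form check in R:

    p6 <- 1/216; p8 <- 1/512

    (1 - p6) / p6            # 215  -> mean failures before first success, 3d6
    (1 - p8) / p8            # 511  -> same, 3d8

    qgeom(c(0.5, 0.9), p6)   # 149, 496  -> median and 90th percentile, 3d6
    qgeom(c(0.5, 0.9), p8)   # 354, 1177 -> same, 3d8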


    runs <- 100000
    x <- vector(mode = "numeric", length = runs)
    for (i in 1:runs){
      while (sum(sample(1:8, size = 3, replace = TRUE)) != 24){
        x[i] <- x[i] + 1
      }
    }

    summary(x)
     Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
      0.0   146.0   353.0   511.8   708.0  5112.0 

    quantile(x, c(0.5, 0.8, 0.9))
     50%  80%  90% 
     353  824 1187
Strangely enough the mean agrees. The other ntiles are off a bit, but that's randomness for you.


The parent comment talks about scaling back the amount of rock climbing they do in order to reduce risk. And now you are saying that they should go one more time, because a single climb is low risk?


Yes. I am saying their analysis of risk is incorrect, and therefore if that's the only reason they aren't climbing then they should climb more often.


I think you're reading it wrong.

After a long life of rock climbing, there's no significant risk of doing it one last time or 10 last times (ignoring the effect of old age itself and whatever).

But when you're in earlier stages of your life, you're asking a different question: You're asking, is this something I want to do hundreds or thousands of times in my life, knowing that each of those times has a small chance of ending my life? This becomes a completely different question.

If I'm 35, maybe I will climb 30 times per year on average for 30 years until I'm 65. That's 900 climbs in total. If my goal is to not die or experience serious injury from rock climbing even once in my life, I have to consider the chance that any one of those 900 climbs will result in serious injury or death. I don't know the numbers for the risks involved, but it seems reasonable to be cautious.

Maybe I don't want to give up on rock climbing altogether, but maybe I can scale it back. If I limit myself to 1 climb per year, that's 30 climbs in total. Much lower risk than with 900 climbs.

This is not a logical fallacy.


That would be a reason to have not climbed more than a specific rate ever. It wouldn't be a reason to scale down the rate of climbing as you age.


You're making it sound like it's a decision they made when they got into rock climbing initially, that they would climb frequently while young and then scale back as they get older.

Now, making that decision at the outset does make sense, because it will drastically reduce the number of climbs you make in your life compared to climbing frequently throughout your life, and rock climbing while young is less risky than rock climbing while old.

But importantly, I don't think that's what GP did. It sounds to me like GP spent their youth climbing a lot without considering their mortality, but then decided to scale back because they realized climbing that often for the rest of their life would be dangerous. Maybe they spent the time from 20 to 35 climbing 30 times per year, in keeping with my earlier example. That means they've already climbed 450 times. Risky, but they made it through alive. At 35, they start to consider their own mortality, and they have the choice between climbing 900 more times by keeping to their current rate, and climbing 30 more times by reducing their rate (or something in between). Deciding to scale back makes sense.

There is no logical fallacy.


The assumption is that it's desirable to have a descending climbing frequency instead of uniform.

This makes a lot of sense, as when you're younger frequent climbing would help you to develop proficiency quickly and your body allows you to enjoy it fully. Plus the social benefits are probably higher when younger.

Once you're older, it's potentially less enjoyable (as your body ages) and you don't need to worry as much about rapidly gaining proficiency.


I think what you're missing is that they are not avoiding "going rock climbing one more time"; they are avoiding "being a person who habitually rock climbs", because while each excursion is low-risk the aggregate effect will be high risk. It's like smoking -- one cigarette won't appreciably impact your health, but "being a smoker" will.

None of this is intended to cast aspersions on rock climbing in particular, just pointing out that a reasonable person, understanding independence of events and not falling prey to any fallacy, could reasonably make this decision based on their personal risk tolerance.


Yes, or more accurately, there is a frequency of climbing outings at which the marginal increase in satisfaction from an extra climb is no longer sufficient to justify the increased risk.


I disagree, their analysis is perfectly correct.

The more frequently you take a risk, the greater the chance that risk materialises.

Parent wants to lower their overall risk, but doesn't want to stop climbing entirely. So they climb less often.


I don't know how you can make this claim objectively without knowing that individual's preferences.

If an individual decides their risk tolerance is that they will not accept a one in a million chance of injury from rock climbing, how is their analysis incorrect?


I think the argument is to make a lifetime risk assessment, as opposed to an individual-event risk assessment.

If your tolerance is an X% chance of death over a lifetime, you can calculate the climbing frequency that stays below that threshold (a rough sketch below).

On the plus side, if you assume the events are independent, you can recalculate and increase the frequency after each climb.
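
A sketch of that calculation, with entirely made-up numbers for the per-climb risk and the tolerance:

    p_climb   <- 1e-4    # hypothetical risk of death/serious injury per climb
    tolerance <- 0.01    # accept at most a 1% lifetime chance
    years     <- 30      # remaining years of climbing

    # largest number of independent climbs that stays under the tolerance
    n_max <- floor(log(1 - tolerance) / log(1 - p_climb))
    n_max           # ~100 climbs total
    n_max / years   # ~3 climbs per year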


The point is that they're changing their habits. Of course we ignore the n times they've gone before, now instead of their habits meaning they'd go m more times in the future, they're going to be going p times in the future for some p that is much less than m.

So it's not about how often they've done it over their lifetime so far, but about how many times they will be doing it over the rest of their life.


Under this assumption, by the principle of mathematical induction, you can easily do it K more times for any K while taking on barely any risk at each step of the way.


The "slippery slope" principle applies here though: N+1 enables N+2, which enables N+3 and so on.


Slippery slope is a fallacy, not a principle. Just because you took N steps, that doesn't necessarily mean you will take N+1 steps.

It's a convincing fallacy because sometimes you do take N+1 steps. But just like in the article, heuristics aren't always right.


When accounting for human psychology it does have validity: doing an enjoyable activity "one more time" has a risk of a habit forming, which has a non-zero probability. It is indeed possible.

The argument can certainly be used in a fallacious manner (e.g. by greatly exaggerating the probability of the further steps, saying they are inevitable if the first step is taken, etc.). It's logically valid to say that the first step enables subsequent steps to be taken.

Edit: I'd say that the slippery slope is perfectly valid rule of thumb in a lot of 'adversarial' situations. Once one side makes an error or fails somehow, the balance between the two sides can be disrupted leading to one 'side' gaining momentum. Just as between people, a similar 'adversarial' process can occur within the minds of individuals: between two ideas or patterns of thought/behaviour, one idea can gain momentum after a decision has been reached. Precedence is a strong force.


Slippery slope arguments aren't inherently fallacious. If you can justify one more climb on the grounds that probability of injury or death is very low then you will be able to justify every subsequent climb on the same basis.


>If you can justify one more …

Reminds me of Terry Pratchett quote "No excuses. No excuses at all. Once you had a good excuse, you opened the door to bad excuses.”

Full quote is fifth here: <https://www.goodreads.com/work/quotes/819104-thud>


Slippery slope arguments are inherently fallacies. They don't prove that something will happen.

Just because you can justify the next climb on the same basis, that doesn't mean you will. You could decide that you've already tested the odds one too many times.


Don't get on that greased sliding board that ends at the top of a cliff. Once you start sliding, it will be hard to stop because of the grease, and once you slide off the end you will fall and die.

Do you really think this slippery slope argument is a fallacy? FWIW, Wikipedia acknowledges slippery slope can be a legitimate argument when the slope, and its chain of consequences, are actually real. https://en.m.wikipedia.org/wiki/Slippery_slope . Indeed, this is the very basis of mathematical induction.


From your linked article:

> The fallacious sense of "slippery slope" is often used synonymously with continuum fallacy, in that it ignores the possibility of middle ground and assumes a discrete transition from category A to category B. In this sense, it constitutes an informal fallacy.

"If you take N steps, you will take N+1 steps" is a fallacy whenever it's possible that you won't take N+1 steps.


Not what was said. What was said: if you DON'T take the Nth step, you then WON'T take the N+1 step.

"Not A -> Not B" is different logic than "A -> B". A is necessary but not sufficient for B.


"You could decide that you've already tested the odds one too many times" was the original point. Someone responded that the N previous times don't matter and N + 1 has barely any risk. Another poster countered that that argument as stated applies not just for N + 1 but for (N + 1) + 1 etc and therefore the slippery slope principle applies.

Of course if you add in "you could decide that you've already tested the odds one too many times" then it's a fallacy to invoke slippery slope because an off-ramp is explicitly specified. In this case slippery slope was mentioned only because N was dismissed as irrelevant.


A pet peeve of mine is that the slippery slope fallacy can be defined as "modus ponens but wrong".

A fallacy should be an incorrect shape of an argument, an incorrect piece of reasoning, not just a false statement.


Like all fallacies, it's only a fallacy when it's fallacious.

Otherwise, it's just a regular argument.


Maybe fallacies could be renamed "logical hazards" or something like that. Arguments that are at high risk of being false and require extra care, but not automatically false.


But the risk is independent. So once you do the N+1th time safely, you are back at N, and your next time is _also_ just an N+1.


True but it would be incorrect to assume that you can safely keep basejumping every day in a year, just because you haven’t died in the last 50 days. Eventually the stats say you will be 87% likely to have an accident when you consider your choice at the beginning of the year. It might be day 20 or day 300, but you won’t know what case you end up in. The chance of your next jump being your last is always the same, but that doesn’t decrease the risk of repeated trials.
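
For what it's worth, under the independence assumption the yearly figure and the per-jump risk are tied together by the same formula; an 87% chance over 365 daily jumps would correspond to a per-jump risk of roughly 1 in 180 (illustrative numbers only):

    n <- 365
    p_year <- 0.87

    p_jump <- 1 - (1 - p_year)^(1/n)   # per-jump risk implied by the yearly figure
    p_jump                             # ~0.0056, about 1 in 180

    1 - (1 - p_jump)^n                 # forward direction: 0.87 over the year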


Not exactly. If you've done it 50 days without an accident, your current chances of the accident happening in the remainder of the year are NOW less than 87%.

If you've made it from Jan 1 to July 1 without an accident, the chances of you making it to Dec 31 are now better than they were on Jan 1 -- because now they are just the chances of you making it six months, not a year.

The chances of flipping 6 heads in a row are 1/64. But if I've already flipped 3 in a row... the chances of flipping three _more_ heads in a row is 1/8, the same as always for flipping 3 heads in a row. The ones that already happened don't affect your future chances.


I meant to say starting a new year after the 50 past days, I see that wasn’t clear though.


Yes, but when you make a plan to find an acceptable cumulative future risk, planning to do it once a week for the rest of your life is planning to expose yourself to significantly more risk than doing it twice a year for the rest of your life.

You might still die in one of the next 20 instances. But you've added a lot more not-dead time in between them!

Saying "I can do one more with minimal added risk" every single time after not dying is true and yet pointless, because it's not a given that "minimal added risk" = "not dying." It's survivorship bias to not think frequency doesn't affect the cumulative odds of your future planning solely because you've already done a lot of trials.


Risk is independent of prior events, habits are not - I think that was what the anthropologist story is about


The risk is independent but the marginal enjoyment isn't. You don't get double the satisfaction from climbing twice as much.


Continuing to do something regularly doesn't ever mean you're just going to do it once more.


Psychologically, behaving in a certain way makes it more likely that you'll behave in the same way in the future. That's an integral idea underpinning justice systems.


In a skill-based game, going from N to N+1 trials adds less incremental risk than going from N-1 to N did.


This assumes a lot about the underlying process, particularly independence. Whilst assuming independence might hold reasonably well for low numbers of samples, the assumption might be increasingly (and dangerously) misleading. The intuition expressed by GP captures that.


I took the same approach with motorcycling. I decided to do it for 3 years while at university because it had a transformative impact on my lifestyle. But I also decided the odds were way too bad to keep doing it my whole life. So I stopped and haven't done it since then.


I only owned a bicycle for several years at the same age, and have since mostly used a car to get around. I've been in various, relatively minor accidents with both, and have always wanted to try using a motorcycle, but it seems to take the risks of both biking and driving and puts them together!


> It's a great example. This is the very reason I have scaled back the amount of time I rock climb as I've gotten older -- not because any individual outing is dangerous, but there's an element of Russian roulette wherein the mere act of doing it more often dramatically changes the risk.

Indoor climbing, and especially bouldering, can be a lot of fun at the right gym, and with dramatically reduced risk of death (though injury is still a very real possibility, I say, recalling all the time I spent nursing my sprained ankle).


This is called ergodic theory: the more you repeat an action that could result in catastrophe, the closer the likelihood of that catastrophe occurring gets to 100%, if the number of events is high enough.


But that doesn't imply any higher probability. Say the chance of dying when doing alpinism is one in a thousand.

The 1000th time you go climbing the chances of dying are still 1/1000.

If you get 100 heads in a row, the 101st time you flip a coin the chance of getting heads is still 50%.


Right, but it's easy to conflate two very different things here:

"What are my chances of dying in a climbing accident", and

"What are my chances of dying today if I go climbing".

If you are on a plane, you* have a lower risk of some kinds of cancer than the airline staff do. This has nothing to do with the flight you are both on, and everything to do with accumulated flights

"you*" = for most people, i.e. barring a counteracting risk factor.


This is exactly why people who do high-risk things think it will never happen to them.


That's the reason why I don't cycle in London. When I moved there, I thought I would be using my bike daily like I was used to. But over the span of a few years, I'm pretty sure the risk of serious injury becomes significant.

For rock climbing, you're probably right. I remember training in a climbing hall, when I saw someone falling off the highest wall. The tenant of the hall didn't look surprised at all. Apparently, it happens frequently.

That being said, if you're serious about safety, I'm sure the risk can be kept minimal.


You can contrast the odds of getting injured with the health benefits. The cardiovascular benefits would seem to outweigh the risks of getting injured from a mathematical point of view.

See e.g. https://blogs.bmj.com/bjsm/2018/12/12/pedal-power-the-health...


Lots of ways to get even better cardio benefits with much less risk.


"Climb if you will, but remember that courage and strength are nought without prudence, and that a momentary negligence may destroy the happiness of a lifetime. Do nothing in haste; look well to each step; and from the beginning think what may be the end" ~Edward Whymper


Also known as the Kelly criterion. If one possible outcome of an action is associated with a great enough loss, it doesn't make sense to perform the action no matter how unlikely the loss.


No, Kelly is about what fraction of your bankroll you should bet if you want to maximize your rate of return for a bet with variable odds.

It's essential if you want to:

* make money by counting cards at Blackjack (the odds are a function of how many 10 cards are left in the deck)

* make money at the racetrack with a system like this https://www.amazon.com/Dr-Beat-Racetrack-William-Ziemba/dp/0...

* turn a predictive model for financial prices into a profitable trading system

In the case where the bet loses money you can interpret Kelly as either "the only way to win is not to play" or "bet it all on Red exactly once and walk away " depending on how you take the limit.


That is a much narrower view of the Kelly criterion than the general concept.

The general idea is about choosing an action that maximises the expected logarithm of the result.

In practise this means, among other things, not choosing an action that gets you close to "ruin", however you choose to measure the result. Another way to phrase it is that the Kelly criterion leads to actions that avoid large losses.


Actually

https://en.wikipedia.org/wiki/Kelly_criterion

"The Kelly bet size is found by maximizing the expected value of the logarithm of wealth, which is equivalent to maximizing the expected geometric growth rate"

In real life people often choose to make bets smaller than the Kelly bet. Part of that is that even if you have a good model there are still "unknown unknowns" that will make your model wrong some of the time. Also, most people aren't comfortable with the sharp ups and downs and probability of ruin you have with Kelly.
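
For a simple bet that pays b-to-1 with win probability p, the criterion reduces to a one-line formula; a minimal sketch with made-up numbers:

    kelly <- function(p, b) p - (1 - p) / b   # fraction of bankroll to bet

    kelly(p = 0.55, b = 1)    #  0.10 -> bet 10% of bankroll on an even-money bet
    kelly(p = 0.45, b = 1)    # -0.10 -> negative: don't bet at all
    0.5 * kelly(0.55, 1)      # "half Kelly", a common more conservative sizing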


I've long found that Wikipedia article woefully lacking in generality.

1) The Kelly criterion is a general decision rule not limited to bet sizing. Bet sizing is just a special case where you're choosing between actions that correspond to different bet sizes. The Kelly criterion works very well also for other actions, like whether to pursue project A or B, whether to get insurance or not, and indeed whether to sleep under a tree or on a rock.

2) The Kelly criterion is not limited to what people would ordinarily think of as "wealth". It applies just as well to anything you can measure with some sort of utility where compounding makes sense.

The best overview I've found so far is The Kelly Capital Growth Investment Criterion[1], which unfortunately is a thick collection of peer-reviewed science, so it's very detailed and heavy on the maths, too.

[1]: https://www.amazon.com/KELLY-CAPITAL-GROWTH-INVESTMENT-CRITE...


If anyone wants to play around with an interactive explanation of the The Kelly criterion: https://explore.paulbutler.org/bet/


Isn't that called Pascal's Wager?


Which of course directly leads to Pascal's Mugging: I can simply say "I'm a god, give me $10000 or you will burn in hell for all eternity". Now if you follow Pascal's Wager or GP's logic you have to give me the money: I'm probably lying, but the potential downside is too great to risk upsetting me.


There's actually a rational explanation for that: humans don't care very much about burning in hell for all eternity, when it comes down to it.

There's actually a similar thought experiment that might seem even more bizarre: I could tell you "give me $100 or I will kill you tomorrow" and you probably wouldn't give me the $100. That's because when it comes down to it, humans don't see the loss of their life as being as big a deal as one might think. It's a big deal, of course, but in combination with the low likelihood, still not big enough to forgo the $100.


Here's the thing: if you have just killed five people, "give me $100 or I will kill you tomorrow" becomes a much more effective threat.

One-time games and repeated games have different strategies.


It becomes more effective only because it changes the probability estimation of the outcome.

Life is a repeated game of decisions that compound on each other, so that difference is irrelevant.


Pascal's Wager is a fallacy resulting from plugging infinity into your risk analysis and interpreting ∞/0 in a way that suits you.


Pascal's wager is an example of motivated thinking - there were very real and certain consequences to him if his wager didn't demonstrate you should obey the Catholic Church.


Same reason I sold my motorcycle when I was 30. I had a good run of not being maimed, but the odds were not good for the future.


I would imagine that if you scale back enough tho, you won't be as sharp. Sure the odds increase the more you do it, but not just because you do it, but often because of other variables, such as the weather, not listening to your body, over confidence, etc.


You would probably be at less risk if you continued rock climbing but eschewed driving (or riding in) a car.


This is why I don't drive or talk to people anymore.



