
Agreed: they write as if being overwhelmed 3% of the time is a victory. A good system would have people feeling overwhelmed 0% of the time.



>A good system would have people feeling overwhelmed 0% of the time.

There are benefits to being pushed past your limits from time to time. Also, there's just no such thing as 0. When you're designing limits, you don't say "this never happens"; you say "this event happens at less than this rate for this cohort".
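To make that concrete, here is a minimal sketch of a rate-based limit (Python; the function name, the cohort framing, and the 3% budget are all illustrative assumptions, not from the article):

    # Sketch of a limit expressed as a rate, not as "never happens".
    # The 3% budget is an assumed number for illustration.
    def within_budget(overwhelm_events, total_engagements, budget=0.03):
        """True if the observed overwhelm rate stays under the budget."""
        return overwhelm_events / total_engagements < budget

    print(within_budget(2, 100))   # True:  2% is under the 3% budget
    print(within_budget(5, 100))   # False: 5% exceeds it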


I'd agree that it is worth pushing your limits during training, but the best-case scenario during actual conflict is to be as close to 0% overwhelmed as you can be.


How does that follow?

That would mean leaving some performance on the table the rest of the time.

It doesn't seem clear at all whether one outweighs the other.


Overwhelming an enemy involves getting inside their OODA loop. I can't see a real life-or-death scenario, outside of training, where you'd want your enemy to successfully get inside your OODA loop and disrupt your flow and rhythm, even for 0.1% of the time.

You of course don't want to become comfortable and complacent and risk losing focus, but there must be better ways to avoid that than being occasionally overwhelmed.


It doesn’t matter how many scenarios you enumerate, because the possibility space is infinite.

I don’t see how that could lead to a credible proof, even on the balance of probabilities.


You’re suggesting there are real deadly combat scenarios where it is beneficial to have your OODA loop compromised. OK, maybe you’re right, given the infinite possibilities.

But until you can present at least one such example scenario, no individual would be willing to take such a risk when their own life is at stake. Real combatants might value the motivating threat of being overwhelmed, but do not actually wish to be overwhelmed (i.e. have their OODA loop compromised).

In deadly combat, no one is looking to theorize. No one quibbles about their inability to prove the negative. They just want to live to see the next day.


Huh? Do you believe this phenomenon has never happened before?

Otherwise your comment doesn’t make sense.


My belief doesn’t matter. Without proof that it is advantageous in a given scenario, I would never prefer to be overwhelmed in deadly combat. And I doubt you would either. If you believe you would, please present an example scenario.


If your belief doesn’t matter… then why do your opinions matter at all?

Or is that meant to be metaphorical?


> they write as if being overwhelmed 3% of the time is a victory

We’re talking about a soldier commanding a company’s worth of firepower single-handedly, from relative safety. 3% would be an exceptional improvement over the status quo.


The real question is what happens in that 3%. If the operator is still able to control the drones, that is very different from the drones turning on your own people. (This is DARPA, so we can assume killing people is a goal in some form.) There is a lot in between, too.


This is a common error, if not an outright fallacy. The correct amount of <negative event> is rarely zero, due to diminishing returns -- it is where the cost curves intersect.

E.g., decreasing 3% to 0.3% might require operating only half the drones -- not a good trade.
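A toy numeric version of that intersection (Python; both cost curves and all the constants are invented for illustration, not taken from the article):

    # Toy cost-curve model: all numbers are made up.
    # Pushing the overwhelm rate toward 0 costs coverage (hyperbolic),
    # while overwhelm events themselves cost effectiveness (linear).
    def coverage_cost(rate):
        return 0.01 / rate   # assumed: prevention cost blows up as rate -> 0

    def overwhelm_cost(rate):
        return 10 * rate     # assumed: linear in the event rate

    rates = [r / 1000 for r in range(1, 101)]   # candidate rates, 0.1%..10%
    best = min(rates, key=lambda r: coverage_cost(r) + overwhelm_cost(r))
    print(f"toy optimum: {best:.1%}")   # ~3.2%: the minimum is not at 0%

Under these assumed curves the total-cost minimum sits at a nonzero rate; pushing further toward 0% costs more in coverage than it saves.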


This is peacetime thinking. If you've got a whole army trying to kill you, you're going to get overwhelmed sometimes.


Compare it to a control group: I feel overwhelmed at least 5% of the time, and I’m not even controlling any robots.


Yeah, I really don't like that phrasing. Takeoff and landing are the most dangerous parts of flying but make up only a tiny percentage of the total flight. If the 3% referenced is the most dangerous or most critical 3% of the time, then it hardly matters how easy the rest of it is.


This is about the army. Depending on the case, it's acceptable for 30% of people to die if it serves strategic goals. That's how "a good system" is defined by those who have the power to enact it.



