Chesterton's Fence:

> In the matter of reforming things, as distinct from deforming them, there is one plain and simple principle; a principle which will probably be called a paradox. There exists in such a case a certain institution or law; let us say, for the sake of simplicity, a fence or gate erected across a road. The more modern type of reformer goes gaily up to it and says, “I don’t see the use of this; let us clear it away.” To which the more intelligent type of reformer will do well to answer: “If you don’t see the use of it, I certainly won’t let you clear it away. Go away and think. Then, when you can come back and tell me that you do see the use of it, I may allow you to destroy it.”




This philosophy is one of the cornerstones of my engineering practice. IFF you can describe why something is there - without using shortcuts/crutches like "they're dumb" - then we may be able to consider changing it. Otherwise, it's dangerous and risky to fool with it. Too many times have I seen something absurd and painted myself into a corner trying to 'fix' it, when there really was a bizarre edge case it covered. (And in plants, you usually only find these edge cases after enormously expensive production loss events!)

And it's a fantastic principle in general, since it's like the technical equivalent of the Principle of Charity.

( http://philosophy.lander.edu/oriental/charity.html )


I worked in a similar setting, writing safety-critical code. Any time we found code that seemed so dumb it couldn't possibly be correct, with no documentation for why, we still ran a blame to see when it was written. If the blame predated the current source control system, we ran a blame in the previous one (this could be ten-year-old code). Once we found when it was written, we looked at the commit messages; if nothing obvious was written there, we looked at general code movement around the time of the commit, at documents changed around that time, at issue tracker tickets filed and closed around that time, at test cases written around that time, and at test reports that had different results before and after. In essence, we did everything we could to understand how the person thought when writing the code. This had always been a code base with very rigorous processes around it, so explaining things by "he probably was lazy and let it slip by" was not a valid excuse before all other reasons had been investigated.
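
In git terms, that archaeology might look something like the following sketch (the file path, search string, and dates are hypothetical; older systems had rough equivalents):

    git blame -w -C path/to/module.c                 # who last touched each line, ignoring whitespace and moved code
    git log -S'if (false)' -- path/to/module.c       # commits that added or removed the suspicious string
    git log --since=2009-01-01 --until=2009-03-01 --stat   # general code movement around the time of the commit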

Obvious "errors" like if(false) is a stupid example of this, why would someone commit if(false)? Is it supposed to be if(true)? Is it for debugging only or is it code that should actually be removed? Maybe it's used for other reasons by a preprocessing-tool that you are not aware of?


This is a great story/summary, and it underlines the value of being able to "hear from" the code's original author (either by contacting them, or preferably by referring to high-quality documentation saying "this looks stupid, here's why").

I've had the joy of seeing code which broke completely if you removed the line while(false){}. Some hideous synchronization bug was solved by this line, and while it obviously wasn't the right solution, simply deleting it produced bad outputs. Profilers and processing tools are, as you say, other likely causes of dumb-looking code choices.

The stupider the "error", the more suspicion you should assign to its presence in any half-decent code.


While that's entirely true, it's also true that any code covering an obscure corner case ought to document that corner case and why the code is needed. And similarly, any policy or law should be completely clear about what problem it wants to solve. That would make it far easier to maintain.


Documentation is essentially the proactive version of the same idea; the principle of charity is the reactive version.

Assume that those who came before you had good reasons for their actions, and assume that those after you will be unable to identify any motivation you don't state explicitly.

I'd be curious to see a strong legislative version of this - enshrining the spirit of the law in the text, and giving courts explicit rights to strike down laws which no longer fulfill their original intent. Done well, it's the sort of change which could have worked wonders on our legal system's constant failure to adapt to technological change. The Aereo suit, for instance, shouldn't have happened under any kind of intent-based legal system.


> Assume that those who came before you had good reasons for their actions

There's a limit to that assumption, though. I'm always inclined to assume that people had reasons, and that they looked good at the time, but without knowing what those reasons are, there's no way to know if the reasons are as good now as they were then, even if you make the charitable assumption that they were good reasons at the time.

I would be a big fan of the idea that legislation had built-in turnover clauses, for instance, that required renewal every N years (for a value of N not much larger than the turnover rate of legislators). Which then means if you want something to persist, you would have to document your rationale for posterity, and convincingly argue that that rationale still applies.

> I'd be curious to see a strong legislative version of this - enshrining the spirit of the law in the text, and giving courts explicit rights to strike down laws which no longer fulfill their original intent.

I agree completely. Laws should state up front that "the purpose of this law is to ...", and for that matter explicitly state any other relevant considerations or side effects and whether they're considered beneficial, undesired, or simply neutral. That would mean there would have to be at least a pretense of a sensible motive, and that interpretations that don't serve that motive could be thrown out.

> The Aereo suit, for instance, shouldn't have happened under any kind of intent-based legal system.

It still could have, depending on the intent. The intent of copyright law, for instance, is supposed to be "we want more works produced, but we also want more works to enrich the public domain, so there's a tradeoff". The intent was never about authors and what they want; that's a means to an end. However, that rarely seems to be reflected in deliberations.


> I would be a big fan of the idea that legislation had built-in turnover clauses, for instance, that required renewal every N years (for a value of N not much larger than the turnover rate of legislators). Which then means if you want something to persist, you would have to document your rationale for posterity, and convincingly argue that that rationale still applies.

Let's be honest. These things would mostly be just bundled up and passed all at once. Or, alternatively, they would be used as leverage like the budget currently is. Actually, it would pretty much be exactly like the budget at this point. It either sails through with no issues, or it begins months of partisan bickering.

EDIT: Also, could you imagine the kind of flex we might see on major laws? I can only imagine large sets of laws sunsetting every time the Congress majority changes... Hey, at least it will create a whole industry centered around these adjustments! That's job creation right there! And, as always, the lawyers will be making their money.


This would require lawmakers to agree on the intent of legislation in addition to its effects, which would be categorically harder because different people may favor a policy for different reasons.

It might be simpler to just sunset every law by default after some period and require lawmakers to periodically renew them - if a majority can't form to support renewal, then it's reasonable to assume the original motivation lacks support. It's also a handy way of nudging updates to the laws to keep up with current times.


> This would require lawmakers to agree on the intent of legislation in addition to its effects, which would be categorically harder because different people may favor a policy for different reasons.

While that doesn't seem like the main intent of such an approach, it certainly sounds like a beneficial side effect. New legislation is not a thing that should occur particularly often, or lightly.

> It's also a handy way of nudging updates to the laws to keep up with current times.

Agreed. (You'd also need to have some requirement to prevent any omnibus renewal legislation.)


This is kind of irrelevant though. Sure it should be documented, and sure the fence should have a nice notice of purpose posted right at the gate for everyone to see, but life doesn't work like that.


I'm not sure I'd go as far as irrelevant - rather, I would say that documentation is what you do to keep other people from needing the principle of charity. They're two directions on solving the same problem. As a maintainer you should assume good intentions, but as a creator you should work to clarify your intent.


Agreed, but irrelevant to this particular conversation, which is about how to handle a "fence" in the road where there is no obvious reason evident.


Chesterton's Fence should be one of the core principles of software development, and it calls to mind the famous "Never rewrite software" article.

If you're refactoring, you can use unit tests and proofs of equivalence to demonstrate that your changes don't pose a threat. If you're rewriting a whole codebase, you're likely to wander down the same paths of "wait, the simple way doesn't work" that the person before you took.
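
As a minimal sketch of that kind of safety net (legacy_normalize and new_normalize are hypothetical stand-ins), a characterization test feeds both implementations the same inputs and demands identical outputs:

    import random

    def legacy_normalize(s):
        return s.strip().lower()    # stand-in for the battle-scarred original

    def new_normalize(s):
        return s.strip().lower()    # stand-in for the refactored version

    random.seed(0)                  # reproducible inputs
    alphabet = " \tAbC123"
    for _ in range(1000):
        s = "".join(random.choice(alphabet) for _ in range(random.randrange(20)))
        assert legacy_normalize(s) == new_normalize(s), repr(s)
    print("implementations agree on 1000 random inputs")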

Even bafflingly small changes like removing always-false tests have been known to break code, which ought to inspire real hesitation any time you see "stupid" code written by a smart person.


I hope we can all agree that having code relying on a compiler bug or something like that (without explicitly documenting it directly in the source code) is stupid.


> Even bafflingly small changes like removing always-false tests have been known to break code, which ought to inspire real hesitation any time you see "stupid" code written by a smart person.

Yes, but really: if you write stupid-looking code for arcane reasons, you owe the rest of humanity at least

    //THIS SHOULD BE LIKE THAT BECAUSE ARCANE REASON
comment.

This is the main purpose of comments.


> Even bafflingly small changes like removing always-false tests have been known to break code

Okay, how is this even possible? Reflection/pre-processing magic?


Another comment in this post:

>I've had the joy of seeing code which broke completely if you removed the line while(false){}. Some hideous synchronization bug was solved by this line, and while it obviously wasn't the right solution, simply deleting it produced bad outputs. Profilers and processing tools are, as you say, other likely causes of dumb-looking code choices.

https://news.ycombinator.com/item?id=10036050

I can also imagine methods with side-effects. The test is always false but a side-effect is necessary.
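
A minimal sketch of that shape (connect_once is a hypothetical helper): the condition's evaluation does necessary work, so deleting the always-false test silently breaks later code:

    _ready = False

    def connect_once():
        # hypothetical: sets up shared state as a side effect...
        global _ready
        if not _ready:
            print("initializing...")
            _ready = True
        return False            # ...then always reports False

    if connect_once():          # branch never taken, but the call still matters
        print("never reached")

    assert _ready               # later code quietly depends on the side effect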


Compiler codegen bugs.


I've had a few compiler bugs where adding an if(false) or changing the order of an if test 'fixes' things. And the complement: changing the code exposes a compiler bug. And also: a new, improved compiler results in code that's broken.

Another issue I run into a fair amount: dealing with hardware/naked interrupts often involves a lot of subtle timing and read/write access issues. Sometimes this happens in code you don't own.


    def stub():
        while(False):            # never executes, but gives the function a body
            print("smart stuff")

    print("hello, world")
    stub()
If you remove the `while(False)` loop, the working code will stop working: `def stub():` with no body at all is a SyntaxError, so the file won't even parse.


Oh, come on, that's an obvious one. Obviously, "removing the while(False) loop" in this case means replacing it with an empty return:

  def stub():
      return

  print("hello, world")
  stub()
There, it works.

That said, boo Python for not admitting empty bodies. It needlessly adds a special case to its syntax. I suspect this comes from the implementation of its lexer.


The question I answered was: "How is this even possible?" (emphasis in the original). The implication seems to be that it should not be possible for the removal of a `while(False)` statement to break working code, presumably based on the assumption that it can't do anything. Yet that is a false assumption, and lots of bugs are based on assumptions that seem true yet, strictly speaking, don't have to be. I provided an existence proof that it IS possible and one minimal example that the assumption that such code literally can't play any role in whether code works or not is incorrect.


The post I was replying to said `while(false){}`. It did not say `while(False)`.

It was implied that we were talking about a C-based language. You proved nothing. Others did.


The keyword pass is used in place of an empty body. Explicit is better than implicit, etc.
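
That is:

    def stub():
        pass    # explicit no-op; a def with no statements at all is a SyntaxError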


"pass"?


I'm going to dispute the "iff". Chesterton's fence is a useful idea to keep in mind, but it is flawed in that it is excessively conservative. It demands that you prove a negative, which is not reasonable in general. (How do you prove that that apparently useless doohickey isn't staving off Cthulhu's wrath?)

It's important to try to understand why something exists, but it is also important to understand that the reason often is that it was either unintentional or silly, but you won't be able to prove it because it isn't documented and the person responsible is gone/unknown.


Chesterton's fence doesn't require you to prove that the doohickey isn't staving off Cthulhu's wrath, but merely to know that that is the purpose.


To be sure, there are limits to one's sanity when over-applying this principle, but I also would apply it far past where many would stop. A simple explanation that doesn't quite fit needs yet more inquiry. Normally, the application pre-limits the number of things that may go wrong, and you can use that to save some effort. For example, I may not know if a pressure sensor's wiring diagram is correct enough to squelch Cthulhu's call, but since that has no bearing on whether the vacuum system's gases are pure, I also don't care. (That's the sort of thing for HR and Operations to worry about.)

Usually this meant digging in deep, questioning really basic stuff like "Are we sure this model is normally open or closed? Did the manufacturer forget to tell us?", and normally there would be clues or evidence to hint at the next round of questions. Eventually you'll have enough information that the evidence fits the questions, and there's no clear line of inquiry left. Life experience and sheer volume of work teach one the limits of inquiry (old engineers can be shockingly good at this, to the point of appearing sloppy :)

I once spent 6 hours overnight troubleshooting a confusing gas non-leak that turned out to be the result of a default setting changing when a valve was replaced off-the-record by not-the-usual-guy. It gave me confidence that this process does eventually get to the bottom of things, but it was a long, meandering path: from miswired panels, to out-of-date schematics in the wrong language, to noticing how clean the part was and realizing someone had replaced it. All to dig out the missing tribal, undocumented information and prove the PLC was actually correct to interlock the whole machine out. (It's like knowing your program will halt - you can't prove it, but you can still be damn sure it will. At least until it doesn't ;)


Actually, negatives are often easily provable.

For example, I could prove that I don't have a dragon in my pocket without breaking a sweat.


I'm sorry, but I'm pretty convinced there's a phase-shifted dragon in your pocket, untouchable and invisible unless painted by a properly configured tachyon beam...

... which I'll happily sell you for just $2000. Remember, phase-shifted dragons are dangerous!


A corollary is that actual bugs tend to become the sturdiest, longest-lasting fragments of code. Because they're by definition incorrect, no one can ever fully and correctly "describe why they're there", so people tend to feel afraid to fix the code ("dangerous and risky to fool with it"), because maybe it's not a bug? (and thus "someone might punish me later for touching this code")


This is an awesome comment, thank you for sharing. Do you know of any other resources/materials like this? (Specifically would love to learn more about these kind of principles as related to mechanical engineering or a factory/process setting)


Foolishly, I don't really. I made the connection in some other thread on HN, and it's stuck with me. My advice is perhaps terrible: find yourself an old engineer and just do whatever it takes to be their friend and follow 'em like a puppy. It's sorta weird how effective it is to just be around people like that (and I think some PE licenses actually require apprenticeships for that reason).

But I do have two formative things: Feynman's lecture on Cargo Cult Science is excellent, with this gem:

"The first principle is that you must not fool yourself--and you are the easiest person to fool. So you have to be very careful about that. After you've not fooled yourself, it's easy not to fool other scientists. You just have to be honest in a conventional way after that... I'm talking about a specific, extra type of integrity that is not lying, but bending over backwards to show how you're maybe wrong, that you ought to have when acting as a scientist. And this is our responsibility as scientists, certainly to other scientists, and I think to laymen." - Feynman [0]

I can't tell you how many times this has saved me in the field. Replace the word 'scientist' with '<YOUR DISCIPLINE HERE>' as needed. (And I think moreso for engineers than scientists, since safety absolutely requires this mentality.)

And I really like the ideas in The Pragmatic Programmer [1]. More than anything specific, the idea of not being married to any particular concept or solution, but rather pragmatically choosing what is correct for the circumstance, helps a lot. The book is opinionated, but reasonable, and copying that pleasant tone is a fast way to make allies when solving problems in a group. And it's a pretty good book to boot.

[0] http://neurotheory.columbia.edu/~ken/cargo_cult.html (and is an HN favorite, to be sure)

[1] https://pragprog.com/the-pragmatic-programmer (another HN favorite, though not always brought up in terms of engineering)


Taken together, Feynman's "bending over backwards" and the "principle of charity" constitute a philosophy along the lines of "Be conservative in what you send, be liberal in what you accept". It's recently become fashionable to blame this principle for some sort of perceived flaw in the Web, but I wonder if a world where web browsers failed to render an entire page on so much as a misplaced </p> would really have been better. After all, it would be a shame if we lived in a world where people only listened to your opinions if you formatted them a certain way, wouldn't it?


Often, I find requiring to format my ideas in a certain way has revealed problems with those ideas I'd not yet considered. For instance, if a theist was required to frame God in the context of provable science, they would realize how many holes exist in their logic.


An amusing anecdote is the onion in the varnish: http://c2.com/cgi/wiki?OnionInTheVarnish


The principle of charity sounds extremely questionable; what is the purported benefit of suspending logical thought for the purposes of understanding something that may be wrong? I want to understand the idea logically, so I don't want to substitute in some second-rate folk understanding while I'm considering it.


The principle of charity is basically a way to approach new ideas arising from mindsets different than yours.

The 'assumption of truth' step doesn't require us to believe anyone who walks up and says "A ^ ~A". Rather, it says that you shouldn't immediately reject a philosophy because it disagrees with your prior notions.

The 'resolve contradictions' step similarly doesn't mean that we have to overlook flawed claims. It's a tool to help learn an idea even if you don't get a perfect presentation of it. There may be a valuable concept available even if the person explaining it to you can't properly work through every detail.

As an example, most people who could talk to you about evolution can't explain the development of eyes, but that doesn't mean you should assume evolution is bunk when you hear about it. The principle of charity says "there might be a good reason here, whether or not they can provide it".

In short, the principle is an attempt to get value out of ideas you're presented with. That might mean you're engaging with a different idea than what you were actually presented with, but it gets you more insight. It's not something you want to use when faced with a concrete form of the document - it's not a reason to pass a bad-but-well-intentioned law.


It's not questionable, just good practice. And my experience tells me it's a Best Practice. `Bartweiss gives a good overview of it, but I'll share some advice I got from an engineering greybeard (which I most definitely am not).

He had a great story to go with this, but the TLDR is that the PoC is the safe choice when applied to engineering: give a true, best-faith effort to prove the opposing view correct. (And best-faith is important: you cannot approach it with bias, lest you waste everyone's time.) If you're right, then you'll find an irrefutable flaw in the engineering, and if you're wrong you've learned something new.

If it sounds like a lot of effort, it is.

But this is also utterly win-win. It's a way of assuring that you've been careful. Avoiding the humiliation of hubris is great if a detail of the implementation was missed at first glance. But you will also inevitably understand the problem more fully once you've failed to implement the opposing view; it also puts you in the excellent position of graciously being on the same side as your opponent, so you can now sway your now-ally in their reasoning. Either way, your goal must be to solve the problem, and not merely "win" political points.

Of course, there are crackpot theories and stupid ideas and foolish plans which should be dismissed with prejudice. But hopefully you're working in a professional environment where your coworkers really are trying their best to succeed. And even then, a serious engineer will still give the stupid ideas at least some (small) time of day, as you must have a reason for all decisions, even dismissals; as you become experienced, you'll be able to properly dismiss these faster and more precisely, but you'll still need to go through the process. That process is what separates the people who loudly proclaim they are smart and right from those who would testify in court that they are right.

(Incidentally, this is part of the reason why I get immensely frustrated with "idea people", as it takes much more work to flesh out their half-assed ideas into full-assed ideas. Non-engineers don't get that there's a huge amount of effort to constantly take everything and everyone seriously.)


The principle of charity asks you to grapple with the best possible version of an argument. If the one making the argument makes an obvious mistake, correct it and take on the corrected argument rather than the flawed one. Nowhere is logical thought suspended. The goal should be truth, so why fixate on small mistakes instead of fixing them and engaging with a sounder argument?


"I don't understand it" is an admission of ignorance, not wise authority justifying condemnation. Few seem to understand this.

If you can't concoct a strong* argument favoring the opposing view, you don't understand the issue well enough.

*Edit: as suggested, it's "strong" argument, not necessarily "convincing". If you can make a convincing argument for the other side but not for your own, perhaps you need to reconsider your stance.


> admission of ignorance

And although "ignorance" has a negative connotation, I argue it's a neutral thing unless otherwise modified, ie. as opposed to "willful ignorance".

The healthcare field uses the gentler term "knowledge deficit" regarding patients; I've taken to using that term in engineering contexts as well.


This is the most concise expression I've heard of something I've felt was a fundamental truth for a long time, thank you!


slight caveat: I think you can understand an issue very well and be able to concoct a strong argument in favor of the opposing view, which is nonetheless not very convincing. (Sometimes you may also be able to concoct a strong but unconvincing argument in favor of your own view. Some issues are like that -- the best arguments on both sides still aren't really convincing.)


I can't make a strong argument for the Earth being flat, does that mean I don't understand the question well enough?

I can't come up with a single coherent good argument against gay marriage, does that mean I don't understand the question?

I think your rule is very useful as a guideline, but please don't make it a rule. There really are policies and ideas that are one-sided.


Yes, I do indeed believe that means you do not understand those "questions". As such, your insistence that major issues deeply dividing society are solely one-sided, and establishing policies based on the one-sided view, leads to what I shall colloquially refer to as "civil unrest". Don't dismiss the guideline simply because it doesn't lead you to where you want to go.


> I can't make a strong argument for the Earth being flat, does that mean I don't understand the question well enough?

Yes.

Have you never used a flat map for navigation? Did the map deviate from what you observed on the earth by enough to notice? The curvature of the earth is very small, so it is trivial to come up with a world-view in which it would be "obvious" that the earth is flat.


Lest one dismiss the notion that anyone would willingly adhere to a "flat Earth world view", remember that most people are baffled by airliner flight paths as drawn on flat world maps.


So make a strong argument for the Earth being flat; so far you have only made a strong argument for why people perceive the Earth as flat.


"In the early days of civilization, the general feeling was that the earth was flat. This was not because people were stupid, or because they were intent on believing silly things. They felt it was flat on the basis of sound evidence. It was not just a matter of "That's how it looks," because the earth does not look flat. It looks chaotically bumpy, with hills, valleys, ravines, cliffs, and so on.

Of course there are plains where, over limited areas, the earth's surface does look fairly flat. One of those plains is in the Tigris-Euphrates area, where the first historical civilization (one with writing) developed, that of the Sumerians.

Perhaps it was the appearance of the plain that persuaded the clever Sumerians to accept the generalization that the earth was flat; that if you somehow evened out all the elevations and depressions, you would be left with flatness. Contributing to the notion may have been the fact that stretches of water (ponds and lakes) looked pretty flat on quiet days.

Another way of looking at it is to ask what is the "curvature" of the earth's surface. Over a considerable length, how much does the surface deviate (on the average) from perfect flatness? The flat-earth theory would make it seem that the surface doesn't deviate from flatness at all, that its curvature is 0 to the mile.

Nowadays, of course, we are taught that the flat-earth theory is wrong; that it is all wrong, terribly wrong, absolutely. But it isn't. The curvature of the earth is nearly 0 per mile, so that although the flat-earth theory is wrong, it happens to be nearly right. That's why the theory lasted so long."

- Isaac Asimov, The Relativity of Wrong, http://chem.tufts.edu/answersinscience/relativityofwrong.htm (read the whole thing.)


In many ways, it's really just a matter of measurement power. An illuminating illustration of this: how far away do you need to be from another person before they disappear over the horizon [0]? Turns out it's about 6 miles. I have terrible eyesight, so without telescopic optics, I'd never have been able to measure this.

Rather, I'd have to rely on local measurements. And that'd nail me too: the Earth only curves about 12cm / km. So if I could only resolve a local rise-over-run of 1/1000, I wouldn't be able to reject the null hypothesis that the Earth is flat. (But if I could manage an order of magnitude better, I could!) And given that hills and such are all kinds of lumpy, and large bodies of water are rarely still, getting even that level of resolution without advanced optics would be difficult. (Though if you can be sure you've got a straight enough stick...)
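
A quick back-of-the-envelope check of those figures, assuming an eye height of about 1.7 m for each observer and the usual approximation d ≈ sqrt(2Rh) for distance to the horizon:

    import math

    R = 6_371_000.0                     # mean Earth radius, meters
    h = 1.7                             # assumed eye height, meters

    d = math.sqrt(2 * R * h)            # one observer's horizon distance
    print(f"horizon per person: {d / 1000:.1f} km")               # ~4.7 km
    print(f"mutual disappearance: {2 * d / 1609.344:.1f} miles")  # ~5.8 miles

    drop = 1000.0**2 / (2 * R)          # geometric drop over a 1 km run
    print(f"drop over 1 km: {drop * 100:.1f} cm")                 # ~7.8 cm

(The simple geometry gives nearer 8 cm of drop per km than 12, but the order of magnitude is the point: far below what you can eyeball.)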

So I think it really comes down to how well you can prove or measure anything. Once we had telescopes, there really wasn't too much confusion about the spherical nature of the planet. (And people had suspected for a very long time the earth was - at least in some way - round. Eclipses give that away a bit.) But the details really give us the resolving power to be sure. That and it helps to get away from local measurements - get up really high, and it becomes easier to tell (and IIRC some early experiments measuring the size of the Earth took advantage of really large height differences).

After all, Newton was right, too. But add a few extra zeroes to the solutions, and we start seeing some deviation from our relative measurements...

[0] http://mathcentral.uregina.ca/QQ/database/QQ.09.02/shirley3....


Of late I've been taken by the concept that someone who grew up on the far side of the Moon, and never travelled very far, would - due to the Moon's rotation period equalling its revolution period - be completely unaware of the existence of the Earth. They'd see the Sun rise and set in a roughly 708-hour cycle, along with the rest of the Universe. The notion that a brightly reflective body, significantly larger than their own spheroid and covered with billions of intelligent (?) beings, existed a mere 238,900 miles away would be absolutely preposterous ... at least until said resident travelled far enough to peek around the horizon and see a most mind-blowing sight.

I mention that to set the premise that one can be remarkably unaware of a plain truth just around the corner. The strong argument for Earth being flat is little different from the strong argument for Earth perceived as flat. The flaw, obviously, is the objective difference between fact & perception.

I go through the trouble of writing this to note that while you're pointing fingers at the difference between being and perceiving, you are yourself holding the mistaken notion that Earth is a lumpy sphere, when Earth is, in objective reality, a very long and slightly bent 4-dimensional _spring_ shape, of which we see just a 3-d cross-section, which looks spherical to us lower-dimensional beings.

While making a strong argument, be humble - your perception may be objectively wrong, misguided, or incomplete as well.


The arguments are not meaningfully different. For each individual, their perception is indistinguishable (to them) from reality. It could be trivially changed:

"I use a flat map to navigate. I perceive no difference between the map and reality. Therefore reality is like the map."

For things that are very strongly one-sided you almost certainly aren't going to make an argument that convinces yourself, but if it's not at least as good as the arguments used by people that disagree with you, you are doing yourself an intellectual disservice.


The primary issue with Chesterton's long-revered fence is that it puts the entire burden of proof on the person who desires to take it down.

What if the fence was put there for purposes of adverse possession? Those who would defend the fence, absent any documentation, would ascribe the most noble uses to it, as their desire is to maintain the status quo, or at least the status pro se. They would, to use Chesterton's term, 'go gaily up to it' and defend it with their lives, saying "clearly this fence always existed, and thus should always be."

Chesterton's 'modern reformer' is a strawman as bald-faced as any, and it's all too easy to use "Chesterton's Fence" as a defense of mindless conservatism.


>What if the fence was put there for purposes of adverse possession?

Answering that question is the very reason for pausing to investigate its purpose. Questioning past purposes before reforming is not equivalent to "defending the fence absent any documentation." Your own characterization of this "mindless conservatism" is itself a straw-man of the behavior Chesterton's fence recommends.

The alternative is an endless cycle of deconstruction that impedes any progress, as every possible step, or the rationale for it, brings a possible mask to power. The Nietzschean project, though fun, has produced nothing besides criticism, and would never succeed in taking any kind of action about said fence.


> Answering that question is the very reason for pausing to investigate its purpose.

So you answer the question and determine the fence is there because of adverse possession. And then everyone else you present your findings to says, "that's not a good reason, so clearly you're not using Chesterton's fence and we shouldn't make any changes." See the issue?

Chesterton's fence and/or the Principle of Charity only have the potential to lead you to a good answer if the other party is putting in a good faith effort. But as soon as that's not the case, these cognitive rules of thumb basically become tools of oppression. And if you've ever watched congressional testimony where the parties are forbidden by rule from criticizing each other, it's easy to see that lots of people purposely take advantage of this.

Also, in the context of business the PoC means assuming that people are acting in their self interest, but in the context of politics or whatever it means assuming that people aren't acting in their self interest. As a rule of thumb for improving your thinking then these sorts of ideas probably make sense, especially for things like entrepreneurship or software engineering, but at an epistemological level the phrase "not even wrong" comes to mind.


> So you answer the question and determine the fence is there because of adverse possession. And then everyone else you present your findings to says, "that's not a good reason, so clearly you're not using Chesterton's fence and we shouldn't make any changes." See the issue?

I must confess that I don't. I guess this depends on why you are doing this: if your goal is to convince other people that you should remove the gate, you must accept that it is a possibility that other people may not find your argument convincing. That is their prerogative, right? If you need their permission to remove the gate, there is nothing you can really do without convincing them anyway.

If, however, you are doing this for your own benefit, it may help you understand the system and propose/tweak your own solution to the problem better.

Or maybe I don't understand what you are saying here.


> the entire burden of proof on the person who desires to take it down.

That's exactly the point though - if a person can successfully defend the existence of a fence against your insistence on change, they've not demonstrated that the fence is required beyond all doubt; they've demonstrated that YOU are not capable enough to be the one to change it, since your knowledge of the domain can't even defeat a "silly" insistence on maintaining the status quo.


Isn't it just a nice little parable intended to clearly transmit the idea?


Parables are the lowest form of argument. A valuable principle should be stated directly. It is absolutely worth making the caveats explicit, because there will always be people who try to enforce the exact letter of the principle.


Implicit in my comment was that it would be strange/tiresome to present it as a solid argument.

I more or less don't believe that principles are actually possible to act on in a principled manner. If everyone all at once took a deep breath and started calling the things they call principles ideals, we could probably avoid a lot of bad discussions (because implicit in that labeling is the observation that compromise sometimes happens).


I've seen this very often here when the topic of regulation comes up. Some think that any regulation of any kind is literally evil and exists for no reason other than to get in the way of "innovation". They demand to repeal them (or just ignore them, breaking the law) without understanding what they are for, just because they don't fit a particular business model.


Anyone interested in the necessity of understanding history for philosophical and moral thinking ought to read MacIntyre's Whose Justice? Which Rationality? It argues that a "history-constituted tradition" is the inevitable system in which we make intellectual progress, and analyses the major movements in philosophy as support.

http://www.amazon.com/Whose-Justice-Rationality-Alasdair-Mac...


I ran across a slightly more polemical phrasing of this principle once, while googling for something unrelated. I've long kept it in a file on my work desktop as a sort of reminder to myself:

"There are often very valid (and even necessary) reasons for why certain things are done in certain ways; these reasons often become clear to us only after we have more deeply investigated into the full details of how things work, whereas our first, less than fully informed reaction may be to regard them as silly, or to attribute them to the wrong cause or agent, or to entirely misunderstand them." (John H Meyers)


This statement worries me. It's equivalent to small-c conservatism and sounds suspiciously like "You don't know enough to have an opinion" - which is perfectly fine when it's reality-based, but can also be an easy excuse for enforcing groupthink when it's not.

There's also the implication that alternatives have already been tried. But what if they haven't? How are you supposed to tell?

Clearly you can't, unless the alternatives have been documented historically. If there's no documentation you're firmly in the land of convention, tradition, and opinion - not effectiveness.

I'd prefer a model that assumes running a start-up is like running an experiment on a market. You probe the market with a complete package of technology, marketing, networking, and funding. Then you assess the results. And then - the hard part - you try to work out which parts of the package aren't working, and design a new experiment.

You absolutely should examine competing models for validity, and you absolutely shouldn't assume they're wrong because you're just that awesome.

But assuming they're a better market fit without reality-testing and evidence makes as little sense as assuming they're bad at what they're doing.

If that were true, there would be no room to innovate at all.


I didn't think about it that way -- valid criticism, and one that is strongly apparent when presented without context. In context (https://groups.google.com/d/msg/comp.sys.mac.apps/uWq5nUOa-1...) it was more about someone coming to a big software system and going "ooh, this decision is stupid" without having done that reality-testing themselves. So I interpret it more as a "don't make snap judgements, analyse the situation carefully" than anything else.


Understanding is also important when, instead of getting rid of something or declining to copy some tactic, you want to copy it. I was once in a cheap cinema where, instead of the heavy red curtains, they had some red-painted structures on the wall. My guess is that you have the curtains on the wall for better sound quality; the sound in this cinema was really bad because of reverberation.

The only problem is that it's sometimes not possible anymore to find out why something is the way it is.


Might depend a lot upon who erected the fence in the first place: is it a fence in your mind, erected after your considered opinion decided the path it wants to take? By all means, tear the fence down in your mind first; you might get an inkling as to why the fence exists in the first place. Of course, if the law or institution emanates from people who think others' thoughts, you may be able to figure out faster whether you need the fence there or not.


Problem: sometimes (often?) the reasons for the fence are so old and so far from relevant today that a cautious reformer cannot reasonably figure out why it's there (unless he first becomes an historian). If we insist he first understands the original reason, the result is that the fence remains, at much cost.

If you've ever worked on legacy software, you know what I mean.


Thanks for this.


There's a very similar notion in negotiation; that you should understand why your opponent is willing to grant a specific position before you accept it.



