Hacker News
Entropy Is Fatal (kerkour.com)
183 points by sylvain_kerkour on June 8, 2022 | 114 comments



The author makes a mistake here.

It's fine to think of entropy as messiness; that's the Boltzmann picture of statistical mechanics. The mistake is thinking that lowering entropy, or getting rid of the mess, is a satisfactory strategy.

Think of it as a negative feedback system, like a thermostat. Keeping entropy low means continually correcting errors. This is a successful strategy only if the world always stays the same, but it notoriously does not. Some degree of messiness is needed to remain flexible, strange as it may sound. There must be room to make the good kind of mistakes and happy little accidents (as Bob Ross would put it).

Because the author chose an analogy rooted in statistical mechanics, here's another: supercooled water. Take a bottle of purified water and put it in the cooler. It gets chilled below freezing temperature without freezing. If you give it a shake, it instantly freezes. The analogy may sound a bit vapid, but noise is the crucial ingredient for the system to "find" its lowest-energy state. The system crystallizes from some nucleation site.

It's the same with evolution. Mutations are a must. Keeping our genetic entropy low isn't a viable option, because that means we'll get stuck and die out. There must be opportunity for randomness, chance, chaos, noise; all that jazz.

This is how China became an economic powerhouse under Deng Xiaoping, for instance. They experimented with various policies and if something accidentally turned out to work great, it became something of a "nucleation site". The policy that worked in, say, Shaowu, would spread all across China. But it would never have worked if they had stuck with a rigid, Confucian strategy of keeping "entropy" low at all times.

Entropy isn't necessarily fatal. Entropy can be used as a strategy for adaptation.


This is why I feel wary whenever I hear the phrase 'best practices'. Although they're generally promoted with good intentions, they're often accompanied by a utilitarian certitude that rapidly hardens into dogmatic inflexibility, followed by defensiveness or outright dishonesty in response to unforeseen consequences.

Most 'best practices' are good defaults, but the superlative rhetoric comes with the unstated assumption that any deviation is necessarily inferior, and that the best cannot be iterated upon. This drains agency from actors within a system, selecting for predictable mediocrity rather than risky innovation.


> a utilitarian certitude that rapidly hardens into dogmatic inflexibility…

Fascinating perspective to me, and probably the right one. The term “best practices” gives me a warm feeling because I view it as a good starting point, one that will save me some time because others have figured out what not to do. It never occurred to me that best practices would be a static thing.


I feel the same way in the sense of not wanting to reinvent the wheel, but I often reflect on how implementation of 'best practice', 'zero tolerance' or other aspirational standards often falls upon administrators who might not have (or want) expertise in the practice area.


See also: "Antifragile: Things That Gain from Disorder" by Nassim Taleb.


Hi, author here.

Thank you very much for this valuable feedback! I think it explains far better what I wanted to say in "3rd Poison: Momentum".

But, there is an important distinction to make: The China you are describing seems to have way more "available energy" (people and resources) than needed and no really specific goals, while in my experience that's rarely the case for smaller organizations, teams, and individuals.

So, how to innovate and avoid starvation while operating on limited resources? This certainly deserves its own post :)


Thanks for this comment. I work as a senior leader and I've been trying hard to help my teams understand that we should make small mutations and scale the successful ones.

A lot of people struggle with the disconnect between "the mess" and "variety", but I think your comment gave me another analogy that could help folks get unlocked to this way of thinking.


Philosophically, many problems in our society might be attributed to optimizing for local maxima or other short-term goals. Incentives and goals aren't aligned, and rules are far too rigid in favor of too few people. We need democratic policies, both in the sense of benefiting the people and in the sense that we elected them. Innovation and mutation are the spice of life.


noise is the way out of local optima?


I've observed this personally! After finding a solution to a problem and repeating it numerous times, I'll often randomly change one parameter of the solution (I'm talking about things like opening jars, not building complex systems, but it could apply there as well) to see if it works better. This often happens randomly because I ask myself "what if I did this?" as I'm performing the action.

The result is that, almost invariably, I find a new way of doing something that's better than before. It often takes multiple tries, but it takes little energy because it can be done throughout the day and the stakes are small.

Applied to a larger scale, random adjustments to larger systems can be exactly what's needed.


I'm curious - what's your "better" way of opening a jar?


Before I was using my teeth, then one day I randomly tried using my hands. It surprisingly worked much better and I'm shocked more people don't do it this way.

Kidding =]. I actually don't have a better way of opening jars, it was more an example to show the triviality of the type of problems this kind of trial and error can work for. Maybe a better example is figuring out a repeatable way to get a stubborn door to shut. Things like that.

EDIT: Actually, you probably already know this, but if you turn a jar upside down and hit it on a counter top, it can knock the lid loose enough to twist off. I didn't mention it because I didn't figure that one out on my own, but if you're looking for jar-opening tips, that's a good one.


I can even see this applied to human existence. Thinking out of the box is basically glitching your ideas.


Pretty much the only one, near as anyone can tell. Though an easy way to get someone or something stuck in a local optimum is a stable habitat/environment, since it stops weeding problematic noise from helpful noise until it is too late for all but the best luck to save it.


pretty much


Simulated annealing comes to mind.
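
A minimal sketch of the idea in Python (the objective function and cooling schedule are made up): early on, the temperature lets the search accept "bad" moves, which is exactly the useful noise discussed above.

    import math, random

    def simulated_annealing(f, x, temp=10.0, cooling=0.995, steps=10_000):
        # Accept every improvement; accept a worsening with a probability
        # that shrinks as the temperature cools. The early "noise" is what
        # lets the search escape local minima.
        best = x
        for _ in range(steps):
            candidate = x + random.gauss(0, 1)
            delta = f(candidate) - f(x)
            if delta < 0 or random.random() < math.exp(-delta / temp):
                x = candidate
            if f(x) < f(best):
                best = x
            temp *= cooling
        return best

    bumpy = lambda x: x * x + 10 * math.sin(3 * x)   # many local minima
    print(simulated_annealing(bumpy, x=8.0))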


Another way of wording this / looking at it is in the trade-off between "adaptation" and "adaptability" (cf https://people.clas.ufl.edu/ulan/files/Conrad.pdf ). Adaptability requires additional energy to maintain, which -- in a steady state -- is wasteful. Being highly adapted to a particular niche/circumstance is fragile to change. There is an intrinsic trade-off.

This is also mirrored in https://en.wikipedia.org/wiki/Efficiency%E2%80%93thoroughnes...


> It's fine to think of entropy as messiness; that's the Boltzmann picture of statistical mechanics.

But it should be added: If one leaves the field of kinetic gas theory and focuses on scenarios where cohesion becomes relevant, such as in crystallisation, an increase in entropy means an increase in order.


Perfect use of “all that jazz”


Related: it can be challenging to strike the right balance between efficiency at one pole and flexibility (/agility/resilience) at the other.


the latter part of your comment reads like exposition for the Dune novels :P


I'm not sure "Entropy" is the right word for what the author wants to talk about, but the article raises some interesting points.

> 1st poison: Unlimited and uncontrolled growth

IMO this is by far the biggest poison, and the one our society is most prone to ignoring. Every company, government budget, population, etc is considered "successful" if it's growing, and "failing" if it's shrinking. And the growth is always measured in terms of percentage, meaning the growth is compounding/exponential.

Sustainable living means getting comfortable with flat or even negative growth.


I don't agree. Growth is not the only thing valued. Value companies are a big part of the US Markets for example. Dividend stocks are a thing.

But growth isn't about one company ever increasing. It's about new companies innovating and taking over old ones. As long as there is innovation, there will be growth.

Change should be embraced. I personally see no reason to advocate for stagnation of human progress with the same handful of companies serving humans for all time.


I guess my point (wrt corporations) is that companies should turn from Growth into Value a lot sooner. E.g. I would have preferred Google to become a Value company once it became the dominant search provider, instead of constantly looking for new revenue streams.


I agree with that wholeheartedly. I'd go further and say that I think companies are getting too big to the point where different divisions have no relation to each other at all. Any major tech stock has this problem. The problem is I have no idea how it could be discouraged. It feels like whenever something is regulated it just results in something completely different than intended.


Part of the problem is the tax code.

Dividends (what a value company typically produces) are taxed at a much higher rate than long term capital gains (which a ‘growth’ company produces).

Dividends are usually taxed at the same rate as income, 10-37% at the federal level, and often the same if the state has an income tax.

Long term capital gains are 0-20%, and many states don’t tax them at all even if they tax income.

Everyone has a significant incentive to go ‘growth’ in that environment.


Do you know what a share repurchase or buyback is? [I find it unlikely that you don't - but your comment doesn't make much sense if you do.]


A share buyback is a way to convert extra cash (or new debt) into almost a dividend. It’s gotten popular lately for exactly this tax efficiency reason.

It is not a guaranteed way to do it however, as unlike a dividend, there is no way to directly tie the repurchase of shares to x amount of actual long term market value.

It often works though, and when markets were going up, it helped.


> unlike a dividend, there is no way to directly tie the repurchase of shares to x amount of actual long term market value

I don't understand what you mean. From the point of view of the company - and what you called its significant incentive to go ‘growth’ if I understood your comment - there is no difference between spending $1bn repurchasing shares or distributing dividends.


To the investors there is!

If you spend $1bln buying back shares, that isn’t money in their pocket until they sell. And if they sell, they don’t get any future ‘growth’.

And if the company/economy doesn’t do well next quarter, they’re losing it in the form of decreased stock value, or even implosion/complete loss.

A dividend is direct cash in the bank. Much lower (zero) ongoing risk of losing that payment. But it gets taxed directly at a higher rate.


> To the investors there is!

I thought we agreed that the difference to investors is that selling shares back to the company is better from a tax perspective than receiving a dividend.

> If you spend $1bln buying back shares, that isn’t money in their pocket until they sell.

In aggregate, they just sold $1bn. Individually, they can sell the proportional part. (Ignoring the second-order effects due to the uncertainty about the timing of the buybacks.)

> And if they sell, they don’t get any future ‘growth’.

Just like when they get the dividend and they end up owning the same share of the same company. The difference is that they pay less taxes.

> And if the company/economy doesn’t do well next quarter, they’re losing it in the form of decreased stock value, or even implosion/complete loss.

This seems more of a reason to avoid companies that do not distribute money (either in the form of dividends or buybacks), doesn't it?

> A dividend is direct cash in the bank. Much lower (zero) ongoing risk of losing that payment. But it gets taxed directly at a higher rate.

Selling the stock to the company is also cash in the bank - and it doesn't get taxed at a higher rate.

Whatever the relevance of "they don’t get any future ‘growth’" argument it's unrelated to the difference in rates between income taxes and capital gain taxes.

For the sake of the argument, let's say that both tax rates are equal. If company A distributes $1bn and company B doesn't, would you still say that by investing in company A the investors "don’t get any future ‘growth’"? In that case your preference for 'growth' companies is not based on the fact that dividends are taxed at a much higher rate than long term capital gains.


I think several things are getting confused here. I’m mentioning two factors:

1) tax efficiency
2) actual risk of losing money/extracting value from their holdings.

If an investor wants reliable, regular income, stock buybacks aren’t helpful. Dividends are. Enough to be worth using less tax efficient structures.

However if an investor wants a maximally large portfolio at an indefinite future point, they generally don’t care about dividends. In fact if they want maximum tax efficiency above cash flow (which such an investor generally wants), and are comfortable with risk, they want to avoid dividends.

Stock buybacks ARE more tax efficient than dividends, but also higher risk.

In theory (and usually in practice even more so), buying back the $1bln increases the value of ongoing remaining stock holdings by $1bln.

However, unlike a dividend, that doesn’t get converted into cash. It turns cash into increased scarcity for equity ownership in the company.

Depending on expected future earnings multiples (cough inflated P/E) this can swing widely, but also provide ‘leverage’ for a company doing this.

Which increases ongoing dependence on the management of the company, market perception of the company’s worth, etc., which increases actual risk to the investor going forward.

It is far more tax efficient though.

Which if ‘everything always goes up’ is not a big deal, and often desirable. If someone is making sure they have cash in a bank account every month so they don’t need to be eating cat food, less so.

Does what I’m saying make more sense in that context?

I suspect that the ‘market goes up’ + automation of trades has also made dividends look less necessary. It’s been a while since we’ve had a good stock market crash.


> In theory (and usually in practice even more so), buying back the $1bln increases the value of ongoing remaining stock holdings by $1bln.

That's absolutely wrong. (If that's - in some sense - correct, it will also be true that distributing $1bn in dividends the company increases the value of ongoing remaining stock holdings by $1bln.)

Anyway, the question was if the tax treatment of dividends relative to capital gains gives investors a strong incentive to prefer 'growth' companies that do not ever return capital to the owners rather than 'not-so-much-growth' companies that pass a substantial part of their cashflows to the owners. And the answer is 'not as much as you implied' because there are also companies that distribute money using a mechanism that is not affected by the tax treatment of dividends, so investors can avoid dividends without being restricted to 'growth only' companies.


Mind providing some references as to how removing $1bln of shares from the market doesn’t increase every remaining shareholder's value by $1bln, assuming market cap stays the same (which it generally does, independent of other variables)?

Even if the shares are not destroyed, the assets of the company now include those $1bln shares and are non-voting. Every other shareholder now has their actual voting power/control/share increase proportionally.

Take the hypothetical case where all but 1 share was bought back by the company. The owner of the one remaining non-bought-back share is now the controlling shareholder and that share should be valued at the prior market value of all public shares, modulo what everyone thinks about its new future in such a scenario.

Dividends do not work the same way, and would not do the same thing - though for a value company, share price generally is based off the dividend payment history over a period of time, and expected likelihood of that trend continuing. So missing a dividend would definitely have an impact.

It isn’t quite like a bond, but many people try to use value companies that pay dividends for similar purposes - cash flow.

If a dividend isn’t paid, theoretically you’d expect the money not spent to be an asset on the books and increase the overall share price proportionally, modulo the market’s valuation of such a thing. But it still isn’t cash in anyone else’s bank account, and the company’s management could just spend it on something else at any time.


> Mind providing some references as to how removing $1bln of shares from the market doesn’t increase every remaining shareholders value by $1bln, assuming market cap stays the same (which it generally does, independent of another variable)?

The market cap definitely doesn't stay the same. When $1bn goes out the door (either as a distribution of dividends or to repurchase shares) the value of the company goes down quite a lot instantaneously. [Maybe somewhat less than $1bn though, as cash on hand may be discounted due to the risk of mismanagement, the tax on the dividend is considered, etc.] The market value adjusts instantaneously when a dividend is distributed as it's known in advance, in the case of buybacks the company only discloses them from time to time but the adjustment to the fundamental change in value happens eventually.

Can you provide a single reference that says that the value of a company doesn't go down when it gives money away?

> Every other shareholder now has their actual voting power/control/share increase proportionally.

All the shareholders together own a company that is worth less than before - the only difference is that it has fewer dollars in its current account. [That is, all the remaining shareholders. When all the previous shareholders are considered they jointly own a company that is worth less than before and a bag of money - their aggregate wealth is essentially unchanged as discussed above.]

> Take the hypothetical case where all but 1 share was bought back by the company. The owner of the one remaining non-bought-back share is now the controlling shareholder and that share should be valued at the prior market value of all public shares, modulo what everyone thinks about its new future in such a scenario.

Ok, let's assume that I have an Intel share and everyone else agrees to sell their shares to the company at the current price of $40 and the company can somehow get the $160bn financing required to buy those shares. Then my share is worth $160bn?

Yesterday, all the shareholders together had a company worth $160bn. If today I own a company worth $160bn and the other shareholders have $160bn in cash, where do you think the extra $160bn came from?

A simpler case: Alice and Bob are equal shareholders in a company. The company has $10mn in cash. They reach an agreement about the valuation of the company being $20mn. The company uses the $10mn in cash to buy back Alice's interest. [I said the company has $10mn in cash for simplicity. It could have more or have less and take a loan for the rest without affecting the argument.] Now Bob is the sole owner of the company. How much is the company worth? Do you really think that the company is still worth $20mn?

What if I'm a sole owner and sell half of my shares to the company? Has my wealth doubled? Can I have the (half) cake and eat it too? Can I keep selling half my interest to multiply my wealth?


Just wow. I wish you luck!


Not sure if we disagree. To be clear, in the following scenario

"Alice and Bob are equal shareholders in a company. They agree that the fair value of the company is $20mn. The company pays $10mn to Alice for her shares."

the correct answer is

(a) "Alice has now $10mn in cash and Bob is now the sole owner of a company worth $10mn [their joint wealth is unchanged]".

If for some reason you are under the impression that

(b) "Alice has now $10mn in cash and Bob owns a company worth $20mn [the deal has created $10mn of added value]"

maybe you could try to explain your reasoning.


Imagine now that the next day Alice regrets selling her half of the company and offers Bob to buy half of his shares from him to get back to the initial situation.

In universe (a) they agree that the company is worth $10mn. Alice pays Bob $5mn and they both end with $5mn in cash and each half of the company is valued at $5mn. Their net wealth is unchanged.

In the nonsensical alternative universe (b) they agree that the company is worth $20mn and Alice pays $10mn to Bob to get back 50% of the company. Alice would again be in the initial situation (no cash and a piece of business worth $10mn) while Bob would have made a large profit ($10mn in cash in addition to half of the company with the same value as before). If all the exchanges between Alice, Bob and the company have been done at fair value, how did Bob end up in a better situation than Alice? If all the exchanges have been done between Alice, Bob and the company only, how can the net profit be explained?
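
For what it's worth, scenario (a) is just conservation of value. A minimal sketch in plain arithmetic:

    # The buyback scenario above in plain arithmetic (scenario (a)).
    company_value = 20_000_000   # agreed valuation, which includes...
    cash = 10_000_000            # ...this much cash on the books

    alice_cash = cash            # the company pays Alice for her 50%
    company_value -= cash        # the cash is gone from the company

    print(alice_cash)                  # 10,000,000
    print(company_value)               # 10,000,000 -- Bob's 100% stake
    print(alice_cash + company_value)  # 20,000,000: joint wealth unchanged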


The author wrote "uncontrolled growth", not "any growth".


From the article: >> ...a program with more lines of code is better...

Immediately reminded me of Bill Gates' comment on how measuring progress on designing and building a software project using a Total_Lines_Of_Code metric is about as dumb as measuring progress on designing and building an airplane using a Total_Weight metric.

What you want in both is the least amount of code/material that will actually do the job, and being smarter than a simple "more is better" approach. Yet so many supposedly intelligent managers use such approaches...


Well, one is easy, the other is hard hah!

The big issue near as I can tell, is that defining what the job actually is, and what accomplishing it actually looks like is the hardest part most of the time.

Many of the most innovative solutions also come from changing pre-built assumptions about what they can look like.


> Yet so many supposedly intelligent managers use such approaches...

Yet so many supposedly intelligent engineers work for such managers...


Yup!


Growth is possible by increasing outputs from the same level of inputs. Certain types of growth are unsustainable, yet other types of growth, e.g. productivity growth, are definitely sustainable.


Growth should probably be more precisely defined in the vast majority of cases to avoid confusion and misunderstanding. In terms of systems, the quantity that grows or shrinks should be concrete (i.e. not a rate, certainly).

For example, human population. Let's say the birthrate in a population is growing, so a naive conclusion would be that the population, a concrete object, must also be growing. This is not true if the deathrate is growing by the same amount. Now, this is the standard kind of thing you see in an intro to differential equations course: a river feeds a lake at rate X, and another river drains a lake at rate Y, and so is the lake growing or shrinking? Oops, we neglected to take evaporation into account, so we get the wrong answer.
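
That standard example as a one-line sanity check in Python (figures made up):

    # The lake only grows if inflow exceeds *all* the outflows,
    # including the one we forgot.
    inflow, outflow, evaporation = 12.0, 9.0, 4.0   # m^3/s
    net = inflow - outflow - evaporation
    print("growing" if net > 0 else "shrinking", f"at {abs(net)} m^3/s")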

Economists are among the worst offenders in this misuse of concepts. Take 'productivity growth' - this is actually growth in the rate at which product is produced, right? If productivity is flat and market demand is flat and human population is flat, well, that means everyone is getting what they need, if say the product is cell phones, for example. If everyone has a cell phone, and cell phones are replaced every five years, then what is productivity growth? Maybe you can make the cell phone manufacturing line more efficient, by recycling the materials from old cell phones into new cell phones, or by automation etc. Nevertheless, the desired rate of cell phone production is fixed, and everyone has a cell phone.

Of course the market should be broken up between different producers to encourage competition, but here growth in production of a better cell phone with higher demand is counterbalanced by shrinkage in market share by another producer, as net demand is flat.

Unfortunately, if the cell phone makers form a cartel, and raise their prices in unison, some economist will call that 'economic growth' based on the fact that they're extracting more money from a market with fixed demand, which is just ludicrous - but that's modern neoclassical economics for you.


It seems that productivity growth is still necessarily limited in the end by physics.

Remove the unnecessary actions to produce X, and you're down to the bare minimum set of actions. Now speed up those actions and you'll eventually reach some minimum time requirement, and the output of X is a function of Required_Actions * Required_Time, how many Producing_Units you have, and the available time.

Seems it'd be asymptotic


Everything is limited by physics. But I think the limit is not close to where we are right now. Consider a smart phone. Physically, what is it? Some silicon, glass, a lithium-ion battery, and some other trace metals. If you were to have the raw inputs in front of you, it would be a small pile of dust. Yet, with just that small amount of material, a person can get access to a near infinite amount of information and entertainment. And smartphones can run software, which allows the phone to be updated for near-zero marginal cost. And this is only something invented in the last few decades. There are so many amazing things being created around us all the time. I don't know how you can look at this situation and think "yup, we've reached the end of human ingenuity."


If you look at the weight of the tech product (phone) in isolation, you are correct, although not in a very meaningful way. If you look at the amount of physical material that went into the process leading up to producing that product, the quantity would amount to many tonnes of material in terms of crushed ore, fossil fuels, water consumption, chemicals, packaging and so on. A phone does not only represent its own grammes of material, but an enormous tail of environmental impact in the form of waste, emissions and extraction remains. (This is not to mention the human labor cost involved in obtaining some of the rare earths used, from countries with, ehrm, lax labor laws.)


I don't think at all that we're near the limit of human ingenuity.

The quibble I had was with the "sustainable", which in that context, I read as indefinitely/infinitely sustainable (and it seems other responders have similar issues).

I agree that there should be a lot more human ingenuity ahead of us than behind us (assuming that those seeking power over others, e.g., megalomaniacs and autocrats, don't first destroy us).

That said, productivity of any one thing is certainly never an x^y sort of curve but eventually flattens and becomes asymptotic, if not declining.


That’s a bit like saying ‘we can only make horses so efficient’, which is true, but that’s why we end up coming up with automobiles, airplanes, etc.

As long as we have free brain cycles focused on solving problems or understanding something we do not yet understand, we’ll continue to do better.


Sustained innovation is finding a series of technologies with S-curve growth that can be transitioned away from as they approach their asymptotic limit. Then, society can stay in exponential phase until it hits https://en.wikipedia.org/wiki/Kardashev_scale#Type_III
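
A toy illustration of that stacking in Python (the growth rates, ceilings, and stagger are all made up):

    import math

    def logistic(t, k, t0, cap):
        # One S-curve: exponential at first, flattening toward its cap.
        return cap / (1 + math.exp(-k * (t - t0)))

    # Five successive "technologies", each with 10x the ceiling of the
    # last and staggered in time: the sum keeps growing roughly
    # exponentially long after each individual curve has saturated.
    for t in range(0, 60, 10):
        total = sum(logistic(t, 0.5, 10 * i, 10 ** i) for i in range(5))
        print(t, round(total, 1))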


> Everything is limited by physics.

Everything is controlled by the maths of physics and chemistry.


Some would argue everything fundamentally is physics, including mathematical models of chemistry. I can’t say they are wrong.

Physics being math doesn’t quite make sense to me yet, if for no other reason than a large body of physics laws are ‘because that is what happens in real life’ when you get down to it.

It’s clear the math is a tool to try to reason about the reality, not the other way around.


Physics is a branch of maths dealing with environmental and/or chemical properties.

Those properties could be constants or additional formulas in their own right, just like a computer program in many ways.


Hard disagree there. Physics is experimental exploration of fundamental physical rules in the universe, which requires maths to model and further explore. But the reality came first.

Chemistry is ‘higher level’ physics, and similarly the discipline grew out of observed reality far before the maths were used.

Maths were useful tools, and continue to be more and more useful as they are developed and have useful predictive power. But the predictive power (and falsifiability of their predictions) against reality is key.


> It seems that productivity growth is still necessarily limited in the end by physics.

Biology will put limits in place long before physics will.

Sadly, most techies completely ignore or miss this point.


In the long run, most J curves are actually S curves


I question the "most" here, rather than "all". Examples of J curves that aren't S curves?


>Examples of J curves that aren't S curves?

Reindeer population growth in Alaska.


That's absolutely an S-curve. Any animal population will eventually run out of resources.


Wait a few decades and it will probably plateau at, if not shrink from, a maximum.


Dark Energy astrophysics.


You seem confident an "ultraviolet catastrophe"-like scenario won't play out there, too.


2^x


I think if you wanted to find a way to view the negatives of business, government and other forms of potentially malignant growth in the eyes of basic physics, I'd say Mass/Energy Equivalence would be a better scope.

Throughout the universe we see that bigger growth in a system, entity, body, etc means less adaptive potential, slower change potential, because latency of information increases. When latency increases too much it can be fatal to a system that 'communicates' or is synchronized in some way to remain a consistent system, entity, body, group etc. What holds all bodies together on the macro and micro seems to be information synchronization. And when particles, bodies, etc can't synchronize information in a timely manner, things degenerate from there and the system undergoes a state change. One of those degenerative factors is growth. Just as in an orchestra, if it becomes too big you lose synchronization. Other disturbances I'd argue use those same basic principles to corrupt a system; they interfere with the synchronization of information unless it can form a new changed state compatible with a revised system.

I recently talked about this in a very broad sense throughout a few blog entries but mostly it's just me being a dumdum: https://0134340.blogspot.com/2022/06/bottom-to-top-principle...


"Growth for the sake of growth is the ideology of the cancer cell." - Edward Abbey


I would add the whole monetary system is designed this way with inflation.


Growth really means productivity. That doesn't mean simply "more": it means "more with less".


> I'm not sure "Entropy" is the right word for what the author wants to talk about, but the article raises some interesting points

It's not the right word. Entropy is a "term-of-art" that has a specific meaning that differs from that in the general populace or in thermodynamics. This website can't be loaded over HTTPS without sufficient entropy.


> Entropy is a "term-of-art" that has a specific meaning that differs from that in the general populace or in thermodynamics.

Would you care to explain that? The term ‘entropy’ originates in statistical mechanics / thermodynamics: https://en.m.wikipedia.org/wiki/Entropy


Entropy is a term of art in computer science/tech as well, which is apparently the target audience of the blog. Looking at the About page shows the blog is about programming, hacking, and entrepreneurship.

This is similar to when a programmer uses the word auto in a car dealership to mean something other than automobile - because it means something different in a different field. Even when that person first explains what he is about to refer to with the word auto, it is still the wrong word to use in that context.

Your link explains this: "In 1948, Bell Labs scientist Claude Shannon developed similar statistical concepts of measuring microscopic uncertainty and multiplicity to the problem of random losses of information in telecommunication signals. Upon John von Neumann's suggestion, Shannon named this entity of missing information in analogous manner to its use in statistical mechanics as entropy, and gave birth to the field of information theory. This description has been identified as a universal definition of the concept of entropy." [1]

[1] https://en.wikipedia.org/wiki/Entropy


There are many more definitions of entropy: https://en.wikipedia.org/wiki/Entropy_(disambiguation)


none of these seem consistent with how the original article uses "entropy". The original article specifically mentions thermodynamic entropy in its strained analogy.


With respect to the author, the article fails to show that “entropy” is in fact related to “complexity”, or how the two would be related.

Entropy should not be thought of as “overhead” or “wasted energy,” which is what I believe the author is getting at. Instead, entropy is the tendency to disorganization. So, the analogy could be the more stuff you have, the more disorganized it becomes; but this is a weak analogy in my opinion.

Here is a link to another article that discusses the link between complexity and entropy. The two are indeed related: entropy is necessary for complexity to emerge. Entropy is not, as this author suggests, a result of complexity.

https://medium.com/@marktraphagen/entropy-and-complexity-the...


> entropy is necessary for complexity to emerge.

Something about this particular line does not sit well with me.

How would we define the relationship between entropy, complexity and information?


Information thermodynamics is what you are looking for, it's the unification of thermodynamics and information theory. Bear with me because I'm not a physicist, but my understanding is that information needs a medium, in which way it is similar to heat, and you can use the same statistical mechanics to describe it, or fluctuation theorem, which is more precise.

My understanding is that this is pretty cool stuff that solves Maxwell's demon and also sort of explains biological evolution, because apparently a system responding to its environment is computation, performed by changing the system's state as a function of a driving signal, which results in memory about the driving signal that can be interpreted as computing a model, a model that can be predictive of the future. Now apparently how predictive that model is equals how thermodynamically efficient the system is. Even the smallest molecular machines with memory thus must conduct predictive inference to approach maximal energetic efficiency.

https://www.researchgate.net/publication/221703137_Thermodyn...


I agree it doesn’t feel right. But complexity, like life, does emerge even though the 2nd law holds. It is a matter of scale. Entropy does not mean everything becomes disordered. And now I defer to the physicists, because as an engineer I am going out of my lane…


Minor lightbulb went off in my head: you might be interested in slide 17 here, from a presentation on Shape Dynamics [0]

tldr:

    (Shannon) entropy [1]: expected description length of a state of a system

    (Shape) complexity [0]: a 'clumped-ness' metric: clump sizes / inter-clump separations

    information: not sure anyone ever really resolved this in a satisfying way :^) [2]

[0] https://philphys.net/wp-content/uploads/2019/Barbour-Saig_Su...

[1] https://physics.stackexchange.com/questions/606722/how-do-di...

[2] https://monoskop.org/images/2/2f/Shannon_Claude_E_1956_The_B...
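
For the (Shannon) line above, a tiny concrete illustration in Python (the example distributions are made up):

    import math

    def shannon_entropy(probs):
        # H(p) = -sum p_i * log2(p_i): the expected number of bits
        # needed to describe which state the system is in.
        return -sum(p * math.log2(p) for p in probs if p > 0)

    print(shannon_entropy([0.25, 0.25, 0.25, 0.25]))  # 2.0 bits: max uncertainty
    print(shannon_entropy([0.97, 0.01, 0.01, 0.01]))  # ~0.24 bits: almost certain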


thermodynamic entropy is to heat as Shannon entropy is to information?


Hmm, not entirely sure if those terms fit exactly. It's easier to point out that you can go from one to the other by setting the hamiltonian to the negative logarithm of the probability density (or use the Boltzmann distribution to go the other way).


https://www.physics.princeton.edu/ph115/LQ.pdf https://en.wikipedia.org/wiki/The_Last_Question

Asimov was way ahead of TFA :)

I like to think that life is a thermodynamic mechanism that:

  - locally reduces entropy by consuming
    lower-entropy energy and casting out
    higher entropy stuff
  - reproduces
by which definition stars are almost alive. I say "almost" because stars only manage to keep local entropy from growing very fast, but they don't manage to reduce it.

For example, life on Earth consumes low-entropy (because collimated and short-ish wavelength) sunlight and molecules of various types (e.g., CO2) and uses that to build locally-lower-entropy things (plankton, plants, animals, ...), in the process emitting higher-entropy things like detritus, feces, etc., but especially longer-wavelength light in all directions. Because Earth's climate is roughly in equilibrium, if you examine all the light going in (collimated, short-wavelength sunlight) and all the light going out (longer-wavelength IR in all directions), the energy must balance, but the entropy must be much higher on the outbound side. Similarly, Venus must be dead because it reflects so much sunlight, thus failing to use it to lower local entropy.
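
A rough numerical sanity check of that energy-in/entropy-out asymmetry (standard textbook figures, not from the article):

    # For black-body radiation the entropy flux is about (4/3) * P / T,
    # so with energy in == energy out, the entropy ratio is just the
    # ratio of the temperatures.
    P = 1.2e17        # W, sunlight absorbed by Earth
    T_sun = 5778      # K, effective temperature of incoming sunlight
    T_earth = 255     # K, effective temperature of outgoing infrared

    s_in = (4 / 3) * P / T_sun
    s_out = (4 / 3) * P / T_earth
    print(f"entropy out is ~{s_out / s_in:.0f}x entropy in")   # ~23x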


Inspired by Erwin Schrödinger's "What Is Life? The Physical Aspect of the Living Cell" from 1944?

> Schrödinger explains that living matter evades the decay to thermodynamical equilibrium by homeostatically maintaining negative entropy in an open system.

https://en.wikipedia.org/wiki/What_Is_Life%3F


Oh, I'm certain that Asimov wasn't the first to think of it. It could have been thought up in the 19th century, and it would be very surprising if Asimov was the first to write down the idea.


This is negentropy (not my fav. word, but it is the term used). Indeed it's the signature of life, although I think there would be a threshold that all living creatures meet, but non-living systems do not. In other words life produces lots of negentropy, probably exponentially, unlike other systems like celestial bodies.


There was a recent podcast episode from Sean Carroll's Mindscape [1] where they focus on and around the Krebs Cycle and discuss it as an example of Entropy. Turns out Entropy really is fatal.

[1] https://www.preposterousuniverse.com/podcast/2022/05/23/198-...


> In other words life produces lots of negentropy, probably exponentially, unlike other systems like celestial bodies.

Exponentially? I guess, in the sense that from fertilized egg to adult, lifeforms grow exponentially, you're right, but adults usually do not keep growing. So I would say "sustained", and once the negentropy (I don't like that term either) operation is no longer sustained -and so local entropy is once more on the rise-, then the being has died.

For stars, I think the issue is that local entropy does not go down. Collapse is avoided on a sustained basis, and when that stops being so, the star dies (and possibly leads to the creation of new stars, which looks like reproduction). That looks like life, but it isn't because local entropy still goes up.


Oh, I meant the totality of life. If we consider how life evolved over the 4 billion years, at first it was super slow and then it accelerates into a variety of lifeforms, eventually humans, and then civilization and all its complexities in just a few thousand years.


This got me thinking. Minimalism or minimalist systems are often seen as systems with less energy. Or, we seem to think it takes less energy to remove objects than to add objects (complex systems). More stuff, more energy. But, minimalism needs a lot of useful work. Less stuff is not the same as a minimalist system. Using a music analogy, I argue, it's much easier (takes less energy) to fill an empty space with hundreds of notes than with a few carefully selected notes :)


Great article. I could directly translate these thoughts to self-hosting. Having worked my way through linux, docker, systems, networks (etc.), since starting in 2017, I can say that the most important principle is Minimalism (and Reproducibility, but both go in hand). The points made by the author apply equally: Reject solutions that bring chaos, do not install everything - select services carefully, but make sure those services you host are stable and set up correctly.


I disagree with the author. While I think there is value in minimalism, I like to embrace messiness, and I can use the "entropy" idea to show my opposite viewpoint.

Entropy is a force of nature, it will always increase, second law of thermodynamics. And yes, it is fatal, we will all die in the end there isn't much we can do about it. But that's where the author backs out, saying that "organizational entropy" doesn't follow the laws of thermodynamics because there is a magic spell called "minimalism"... Why make a parallel with physics then?

I think that just like thermodynamic entropy, there is no solution, we will all die, period. The only thing we can do is make the best of the time we are alive.

And if we look at the author's ideal, it has zero entropy, literally absolute zero, nothing moves, which is not the most enjoyable situation...

Furthermore, the proposed solution (minimalism) involves creating a small pocket of low entropy. In thermodynamics, that would be a freezer. But while freezers can lower entropy locally, they increase entropy globally, freezers need energy to function. And the colder your freezer, the more energy it consumes and the more entropy it creates. It means that minimalism can be counterproductive: the more you try to make things perfect, the messier everything around it becomes.

So, don't try to put every atom in its correct place; you simply can't, absolute zero doesn't exist in nature. Just admit that life isn't perfect, and that it is sometimes better to do something useless than to do even more work trying to find out whether it actually is useless. And low entropy (nothing moves) is as boring as high entropy (just noise); the best is somewhere in the middle, life is in the middle.


As a side note, when John Carmack was asked why he always started a new engine from scratch, he used to say: "to fight code entropy"


There may be a slight nit with this enframing of entropy. Actually, the article visualizes it quite nicely:

the assumption that the world is a closed system - depicted here by the black border around the particles, showing it is closed off from the world.

Sure, in a closed system, eventually you get something like heat death, within the box.

But life, and the world, are an open system - at least from the human-scale life experience. You can't say that heat death is sure to happen.

Does entropy increase at the macro level? Pretty much, yeah. But defining what 'macro' is, is hard enough to make any answer dubious or uninteresting - is it the entire universe? Is it the solar system? In either case, the scale at which it appears closed is much bigger than your life, which is coincidentally a scale at which it may seem open - because the world is not uniform and ergodic at the living-as-a-human scale. We each experience a different life story (another debate for the future, perhaps? :^) )

If you like you can imagine that the entropy in your own particular life could always decrease while the entropy somewhere else far away undergoes a commensurate simultaneous increase.

As I remember it, Mihaly Csikszentmihalyi (author of Flow) wrote in the book The Evolving Self that basically the meaningfulness of your life is:

    (the flow you experience == the entropy/time 'life bandwidth rate' you experience) 

    x 

    (the good you do for others)
By going for a minimalist approach, you try to maximize the Signal-to-Noise ratio. If you'll allow for some eyebrow-raising application of mathematics to philosophy for a sec...

the Shannon-Hartley theorem tells us that the channel capacity C depends on

    B: the channel substrate bandwidth,

    SNR = Power(Signal) / Power(Noise), as a linear power ratio (not decibels),

as

    C = B log2(1 + SNR)
Here, C could stand for how good a life you're leading. So you'd either want to increase the signal, or decrease the noise, or increase B (the 'underlying capacity for enjoyment of life'). I guess it is a matter of debate whether you can improve the SNR more by tapping in more, or tapping into noise less. Probably something you can try to adaptively improve by 'gradient descent' ie 'trying out new ways of living', lol
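
Putting rough numbers on it (a quick Python sketch; the bandwidth and SNR figures are arbitrary): widening B pays off linearly, while improving SNR only pays off logarithmically, which maybe says something about growing your capacity for life versus merely filtering noise.

    import math

    def capacity(bandwidth_hz, snr_linear):
        # Shannon-Hartley: C = B * log2(1 + SNR), SNR as a power ratio.
        return bandwidth_hz * math.log2(1 + snr_linear)

    print(capacity(1e6, 100))   # ~6.66e6 bit/s
    print(capacity(1e6, 200))   # ~7.65e6 bit/s: doubling SNR helps only a little
    print(capacity(2e6, 100))   # ~1.33e7 bit/s: doubling B doubles capacity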


I wouldn't call it entropy exactly, but this is also the theory behind Joseph Tainter's _Collapse of Complex Societies_. He proposes that too much governmental overhead through accretion of laws and bureaucracy is the cause (aside from obvious alternatives like being defeated in a war, etc.) of a society's or country's collapse.


And more complexity in laws and many other things increases inequalities in our society. The more stuff there is (e.g. laws), the more people (lawyers) you need to understand it, and the more intelligent people you need (e.g. for cryptos). All this creates a larger gap between social classes.


I wonder what the author's view of the gaming industry is. On the one hand I agree with him that useless work or work that achieves little shouldn't be done. (In the case of gaming, a lot of work provides entertainment which could have been achieved by other, much more efficient means.)

On the other hand we're not gas molecules confined in a chamber. So as much as we'd like to apply the laws of thermodynamics to our lives it just won't work. The reason we exist is a mystery. Usefulness takes an entirely different shape when you're not sure what it is that you're trying to optimize for. Useful to what end?


The main mistake you made was not realizing artificial complexity exists, that it is not natural, and that it is a form of control, possibly the most important. Evidence A: C++.


Counterpoint, "Entropy is Life": Diffusion is an entropy driven process, and is fundamental to most biological processes. Plus other more specific entropy driven reactions [0 + google it]. Lazy metaphors...

[0] https://www.nature.com/articles/srep04142


Counterpoint to your counterpoint :) I think local entropy has to decrease first to create the concentration gradients that are harvested by diffusion. So life must also rely on decreasing the local entropy. All depends on how you define the system.


You can't decrease local entropy without increasing it overall. We can only live by moving towards higher entropy, it's how we tell the past from the future.


I like the idea presented. The application of it to various constructs brings about ideas as to how to reverse the entropy:

Software: try to build orthogonal features so that maximum value can be obtained with minimum complexity; build tools with well-designed deprecation mechanisms, then have releases where underused/redundant functionality is first deprecated and then removed entirely.

e.g. all the major extension repositories for a particular tool parse the machine-generated deprecation list, scan extensions, and email owners when they use deprecated APIs. Maybe provide "shims" where APIs are internally removed entirely but transitional packages are provided that implement the removed APIs in terms of still-present ones.

Companies: deliberately design policies and organizational structures that are simple and minimize entropy; be prepared to pay some upfront financial costs to do so (e.g. because your policies aren't layers upon layers of hacks that grow every time someone makes a mistake), but reap rewards in the future as incidental costs of running your company are lower and less bureaucracy means workers are more efficient.

e.g. instead of having a bunch of different rules about different kinds of vacation time, consolidate them all into 1 or 2 kinds (normal and medical?) and give everyone a bit extra to compensate for some of the edge cases that were smoothed over.

Governments: legislators should spend some time attempting to "refactor" old laws such that fewer words and less complexity still results in effectively the same legal environment; reduce government functions (as much as some of you may dislike that idea).

e.g. instead of trying to provide hundreds/thousands of different special-case tax breaks for low-income families, use a "sliding" tax rate where e.g. at $1k/year annual income your tax rate is 0%, at $1M a year your tax rate is n%, with linear interpolation between those two extremes (or whatever; a version is sketched in code below). Simpler, still somewhat fair, people might actually be able to do their taxes by reading the law, and no suspicious discontinuities[1]. Or something else - with some thought and a little experience in government, I'm sure someone could come up with an income tax system that was an order of magnitude shorter than what the US has now.

[1] https://danluu.com/discontinuities/
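
A sketch of the sliding-rate idea in Python (the bracket endpoints and the 40% top rate are invented placeholders for the "n%" above):

    def tax_rate(income, lo=1_000, hi=1_000_000, lo_rate=0.00, hi_rate=0.40):
        # Hypothetical sliding rate: 0% at $1k/year, 40% at $1M/year,
        # linearly interpolated in between. No brackets, no discontinuities.
        if income <= lo:
            return lo_rate
        if income >= hi:
            return hi_rate
        t = (income - lo) / (hi - lo)
        return lo_rate + t * (hi_rate - lo_rate)

    for income in (1_000, 50_000, 500_000, 1_000_000):
        rate = tax_rate(income)
        print(f"${income:,}: rate {rate:.1%}, tax ${income * rate:,.0f}")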


> Governments

This would be a major constitutional change, by which I mean that it introduces new limits on government. Politicians would have some understandable inclination to oppose it. Not saying it isn't needed, it obviously is.

In this context "be a minimalist" is a thought stopper in all its grace.


> So when they have no incentive to take the "right" decisions, they will, most of the time, allocate the available energy towards useless work, sometimes for ego reasons, sometimes by ignorance, sometimes by corruption.

This neatly sums up the reason for Apple's ever expanding set of Animojis with every iOS release.


> Yet, not doing things is what, most of the time, brings the greatest results.

The only result nihilism brings is a slow and dangerous detachment from reality and the world we live in.


This is what happens when people are philosophically illiterate: they see important concepts out there but just don't have the proper tools to think about them.


I think the author misunderstands what entropy is. It's not a measure of complexity. If anything, it's the opposite.


When I read "fatal" I don't imagine "death" so much as "inevitable".


I think what you are trying to communicate is well described in the book "Essentialism".


Recommendation: People should read the original "Parkinson's Law".


> In one word: Entropy is fatal.

No it's not, and there's plenty of evidence that a certain amount of disorder is needed to be creative and flexible, to adapt quickly to change, and evolve.

The key is to find a balance.


virtually nobody understands NFTs that doesn't actively participate in the space. do with that information what you will.


> "The second law of thermodynamics states that the entropy of an isolated system always increases because isolated systems spontaneously evolve towards thermodynamic equilibrium..." (emphasis added)

There are no isolated systems in the context being discussed, i.e. houses and phones and so on. This is an important point: steady-state thermodynamics is a far more complicated beast than closed-system thermodynamics, and moving-state thermodynamics even more so.

Furthermore, how does one distinguish between useful and useless work? Work is work, the value of work is something humans decide on socially. Say people are put to work building a pyramid, so the kings and priests have a nice high place to sit. Egyptian pyramids are impressive, but are they useful? Maybe in terms of some abstract notion like consolidating the power of the state or impressing one's neighbors.

Anyway, here are some solutions to the author's points:

1) Unlimited and uncontrolled growth: match inputs to outputs. Delete as many old emails per day as you receive new ones. If it's important, copy and store securely offline. If you buy new clothes, send an equal amount of old clothes to recycler or the thrift store. If that's too much work, cut back on inputs instead.

2) Decision-makers have no skin in the game: If the decision-maker wants to build a pyramid, the decision-maker should be spending their days building that pyramid alongside their employees. Then they might decide that building pyramids is a useless activity, and perhaps building a bridge or a dam would be a wiser undertaking. Yes, investment capitalism has this problem. Put the shareholders to work on the production line or give them the boot, that's the solution.

3) Momentum is not a source of entropy, I don't think. Entropy is more like diffusion than directed momentum. An object impacting another due to momentum could increase entropy, like a car running into a brick wall. Maybe the author is talking about something abstract like 'the momentum of bad habits is hard to break'? Here is where an injection of entropy ('shaking things up') might be helpful.

Physics analogies can be rather overused, in conclusion.


Entropy is not fatal. Areas of low entropy appear as creation, and that creation contains its own destruction.

Entropy is a quantity. Qualities can only change, they do not live or die. The article should have been titled "Low Entropy is Fatal", or "Larger Areas of Low Entropy Are More Quickly Fatal than Small Areas of Low Entropy". Which is also wrong. All areas of low entropy are fatal, it is just easier to see the larger low entropy areas failing than the smaller ones. And larger areas of low entropy need more energy to keep the entropy low. Which is why minimalism is easier: it just shrinks the low entropy area to one that needs less energy.

Also...

You, as a human, are a low entropy area. Your whole existence is predicated on continuing to sustain your area of low entropy. How do we do this? By enabling electrons to flow through our mitochondria and stealing their angular momentum.

https://pubmed.ncbi.nlm.nih.gov/33671585/

Areas of low entropy have a constant stream of electrons flowing through them. For humans, the electron transport chain in our mitochondria steals the electrons' angular momentum energy.

https://pubs.acs.org/doi/10.1021/jacs.9b09262

https://www.pnas.org/doi/full/10.1073/pnas.1407716111

https://www.sciencedirect.com/science/article/abs/pii/S00108...

The mitochondria are quantum machines that capture the free energy from electrons. Black holes are also quantum machines. Anywhere you see an area of low entropy you see a quantum machine. Quantum machines are powered by the angular momentum of electrons to make areas of low entropy.

Electron spin makes all matter possible; every object is a quantum machine that uses the angular momentum of an electron (which happens to be the speed of light!).

https://www.scirp.org/journal/paperinformation.aspx?paperid=...

IMHO, black holes are black because they capture all of an electron's angular momentum. Very efficient quantum machines indeed.


Also inevitable.


Integrate Bitcoin into your business and all the arguments in the write up just unwind.



