Happy to see the article being discussed here!
I was responsible for the technical implementation of the interactives. If you are interested in the source code, you can find it here: https://github.com/jnsprnw/mip-entropy
It’s built in Svelte 5 with Tailwind.
May I ask why you chose Svelte 5 rather than something else? I’ve noticed that a lot of “one-off” interactives are being built with Svelte nowadays. What are the benefits of using it?
First, when I joined the project, it wasn’t clear how it would be published. Svelte, which outputs compiled JavaScript, can fit into many CMS workflows that newspapers/publishers use.
I believe Svelte was developed by Rich Harris at the NY Times for this very reason.
We ended up using iFrames, so other frameworks like React could have been used.
Second, Svelte is very well suited for these small interactives because it has built-in state, transitions, and reactivity with low overhead.
Third, it was a personal choice, as I now do most of my work in Svelte.
> We ended up using iFrames, so other frameworks like React could have been used.
Wait, wasn't one of the original selling points of React that it could be embedded piecewise to enhance interactivity of the parts of pages that needed it? It should certainly not need a separate page!
It’s a PITA to extract a stateful React component into a standalone piece of code that can be inserted in a random place (in another page, served via API, etc.). Not sure about Svelte, but achieving this in React was unexpectedly hard/impossible in our use case.
Interesting to read this, 27 years after my PhD* (theoretical physics), in which I compared the view WITH and the view WITHOUT ‘unknowns’ causing entropy as a driver.
* My PhD was about how to treat a (quantum mechanical) system inside a cavity: a cavity with one perfect mirror and one 99.999999% perfect mirror. The (one dimensional) universe was made whole by another perfect mirror at the other side of the non-perfect mirror (in ASCII art:
[100%] —l— [100-epsilon] ——L——— [100%]
With L >> l.)
The ‘whole universe’ solution was simple (using standard quantum mechanics techniques), the ‘lossy’ ‘small universe’ was not. But they needed to be the same (physically).
Thus I used the exact solution for the ‘complete’ (l+L) universe and compared it to possible ‘small’ (l) universe models in which some non-linear term accounted for the loss.
The connection between how a lossy system (in which entropy exists/is a driving ‘force’) and a lossless system (in which everything is conserved) is thus not a new insight;-0
I read your comment with interest, but ultimately I can't understand the point being made because I don't know what kind of mirror you're referring to (optical?), I don't know what 'l' or 'L' represent (lateral spacing of mirrors? vacuum energy densities?), and in the last sentence I think maybe the word 'how' should be deleted?
The imperfect mirror means that a fraction epsilon of the time the light goes through to a much larger "back room", whereas a fraction (1 - epsilon) of the time the light just reflects like normal. The point being made is that this is an extension of an ordinary ideal cavity to include unavoidable (but weak) interaction with the much larger system outside of it (aka the whole universe). It just so happens the much larger external system is also being modeled as a simple 1d cavity.
In other words, entropy is equivalent to bits of information needed to specify the complete state of the system leaking outside of the confines of where those bits are being observed by an experiment (eg tunneling through an imperfect mirror).
Entropy is an accounting tool to keep track of how many bits are missing, and how far this ignorance has percolated into what you can safely predict about the system.
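In symbols, that bookkeeping is just the textbook Boltzmann relation (stated here only to make the "missing bits" picture concrete):

$$ S = k_B \ln \Omega \quad\Longleftrightarrow\quad \frac{S}{k_B \ln 2} = \log_2 \Omega , $$

where $\Omega$ is the number of microstates compatible with everything you can still say about the system, and $\log_2 \Omega$ is exactly the number of bits you would need to pin the microstate down.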
Answers to your questions:
1) All the way to the left, a mirror with a reflectivity |r| of 1 (or 100%). In the middle, an |r| slightly below 1. Yes, optical: a system with photons (a and a^dagger, with [a, a^dagger] = 1).
2) Distance between mirrors 1 and 2: l. Distance between mirrors 2 and 3: L. (Later taking the limit L/l → infinity.)
3) The ‘how’ is actually correct; I guess the word ‘behaves’ is missing twice: .... how .... behaves and a .... behaves.
Entropy got a lot more exciting to me after hearing Sean Carroll talk about it. He has a foundational/philosophical bent and likes to point out that there are competing definitions of entropy set on different philosophical foundations, one of them seemingly observer dependent:
Leonard Susskind has lots of great talks and books about quantum information and calculating the entropy of black holes which led to a lot of wild new hypotheses.
The exciting part is there are different concepts of it that start from different foundations and arrive at the same thing. It’s suggestive that our understanding is incomplete and there are deeper levels of understanding to uncover. And it _could_ just be the case that the universe started with low entropy for no reason, but if there are reasons and we ponder them it could lead to some massive epiphanies in physics/cosmology.
> As physicists have worked to unite seemingly disparate fields over the past century, they have cast entropy in a new light — turning the microscope back on the seer and shifting the notion of disorder to one of ignorance. Entropy is seen not as a property intrinsic to a system but as one that’s relative to an observer who interacts with that system.
Maybe I have the benefit of giant shoulders, but this seems like a fairly mundane observation. High-entropy states are those macrostates which have many corresponding microstates. The classification of several microstates into the same macrostate, is this not a distinctly observer-centred function?
I.e. if I consider 5 or 6 to be essentially the same outcome of the die, then that will be a more probable (higher-entropy) outcome. But that's just due to my classification, not inherent to the system!
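A tiny sketch of that point (my own grouping of the faces, purely illustrative): the "entropy" of a macrostate is just the log of how many faces you chose to lump into it.

```python
# Boltzmann-style entropy of each macrostate of a fair die,
# under a coarse-graining where 5 and 6 count as "the same outcome".
import math

macrostates = {"1": [1], "2": [2], "3": [3], "4": [4], "5 or 6": [5, 6]}

for name, faces in macrostates.items():
    omega = len(faces)        # number of microstates lumped into this macrostate
    prob = omega / 6          # probability of the macrostate for a fair die
    s = math.log(omega)       # entropy in units of k_B
    print(f"{name:7s}  p = {prob:.3f}  S/k_B = {s:.3f}")

# "5 or 6" is both the most probable and the highest-entropy macrostate,
# purely because of how the faces were grouped, not because of the die itself.
```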
> Maybe I have the benefit of giant shoulders, but this seems like a fairly mundane observation.
It is not mundane, and it is also not right, at least for entropy in Physics and Thermodynamics.
> High-entropy states are those macrostates which have many corresponding microstates.
That is how you deduce entropy from a given model. But entropy is also something that we can get from experimental measurements. In this case, the experimental setup does not care about microstates and macrostates; it just has properties like enthalpy, heat capacity and temperature.
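For concreteness, the "experimental" entropy here is the usual calorimetric one (a standard textbook relation, added only to make this explicit):

$$ S(T_2) - S(T_1) = \int_{T_1}^{T_2} \frac{C_p(T)}{T}\,\mathrm{d}T + \sum_{\text{phase transitions}} \frac{\Delta H_{\text{trans}}}{T_{\text{trans}}} , $$

i.e. you integrate measured heat capacities (plus latent heats) and never need to mention microstates.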
We can build models after the fact and say that e.g. the entropy of a given gas matches that predicted by our model for ideal gases, or that the entropy of a given solid matches what we know about vibrational entropy.
That’s how we say that e.g. hydrogen atoms are indistinguishable. It’s not that they become indistinguishable because we decide so. It’s because we can calculate entropy in both cases and reality does not match the model with distinguishable atoms.
> The classification of several microstates into the same macrostate, is this not a distinctly observer-centred function?
It seems that way if we consider only our neat models, but it fails to explain why experimental measurements of the entropy of a given material are consistent and independent of whatever model the people doing the experiment were operating on. Fundamentally, entropy depends on the probability distribution, not the observer.
> But entropy is also something that we can get from experimental measurements. In this case, the experimental setup does not care about microstates and macrostates; it just has properties like enthalpy, heat capacity and temperature. […] experimental measurements of the entropy of a given material are consistent and independent of whatever model the people doing the experiment were operating on. Fundamentally, entropy depends on the probability distribution, not the observer.
Thermodynamics does have the concept of the entropy of a thermodynamic system; but a given physical system corresponds to many different thermodynamic systems. […] It is clearly meaningless to ask, “What is the entropy of the crystal?” unless we first specify the set of parameters which define its thermodynamic state. […] There is no end to this search for the ultimate "true" entropy until we have reached the point where we control the location of each atom independently. But just at that point the notion of entropy collapses, and we are no longer talking thermodynamics! […] From this we see that entropy is an anthropomorphic concept, not only in the well-known statistical sense that it measures the extent of human ignorance as to the microstate. Even at the purely phenomenological level, entropy is an anthropomorphic concept. For it is a property, not of the physical system, but of the particular experiments you or I choose to perform on it.
It is though. Temperature is an aggregate summary statistic used when the observer doesn't know the details of individual particles. If you did know their positions and velocities, you could violate the second law of thermodynamics, as Maxwell's thought experiment demonstrated in 1867 https://en.wikipedia.org/wiki/Maxwell%27s_demon
Maxwell's demon is a thought experiment that involves a magical being. It's an interesting thought experiment, and it provides some insights about the relationship between macro states and micro states, but it's not actually a refutation of anything, and it doesn't describe any physically realizable system, even in theory. There is no way to build a physical system that can act as the demon, and so it follows that entropy is not actually dependent on the information actually available to physical systems.
This is obviously visible in the observer-independence of many phenomena linked to temperature. A piece of ice will melt in a large enough bath of hot water regardless of whether you know the microstates of every atom in the bath and the ice crystal.
"large enough" and "bath" is doing a lot of work here. I don't think it's necessarily about size. It's about that size and configuration implying the particle states are difficult to know.
If, for example, you had a large size but the states were knowable because they were all correlated and following a functionally predictable path (for example, all moving away from the ice cube, or all orderly orbiting around the ice cube in a centrifuge such that they never quite touched it), it wouldn't melt.
The ice cube would melt; it would just take longer, since heat does transfer through vacuum via radiation. And no, you can't control the positions of the electrons to stop that radiation from happening; it is a quantum effect, not something you can control.
Temperature and entropy are directly linked; it follows that temperature is also anthropomorphic. Although I think "observer-dependent" would be a better way to put it; it doesn't have to specifically be relative to a human.
> It is not mundane, and it is also not right, at least for entropy in Physics and Thermodynamics.
Articles about MaxEnt thermodynamics by E.T. Jaynes, where he talks about the “anthropomorphic” nature of entropy, date back to the 1960s. How is that not right in physics?
By the look of it, it is another misguided attempt to apply information theory concepts to thermodynamics. Entropy as information is seductive because that way we think we can understand it better, and it looks like it works. But we need to be careful because even though we can get useful insights from it (like Hawking radiation) it’s easy to reach unphysical conclusions.
> How is that not right in physics?
Why would it be right? Was it used to make predictions that were subsequently verified?
Plenty of bad stuff has made its way into textbooks; I saw some really dodgy stuff at uni. And for every uncontroversial statement we can find a textbook that argues that it is wrong. Sorting the good from the bad is the main point of studying science, and it is not easy. What is also important is that approaches that work in some field or context might be misleading or lead to wrong outcomes in others. Information theory is obviously successful and there is nothing fundamentally wrong with it.
Where we should be careful is when we want to apply some reasoning verbatim to a different problem. Sometimes it works, and sometimes it does not. Entropy is a particularly good example. It is abstract enough to be mysterious to a vast majority of the population, hence why these terribly misleading popularisation articles pop up so often. Thinking of it in terms of information is sometimes useful, but going from information to knowledge is a leap, and then circling back to Physics is a bit adventurous.
Plenty of bad stuff makes its way into hackernews comments as well. Saying that “it could be wrong” doesn’t really support the “it’s wrong” claim, does it?
I did not make that point, though. I rejected an appeal to authority because something was in a textbook. I made no comment about the validity of that person’s work in his field, I just pointed out that this transferability was limited.
My bad, I thought you considered Balian’s work another misguided attempt to apply information theory concepts to thermodynamics.
For the record, this is the abstract of the “Information in statistical physics” article: “We review with a tutorial scope the information theory foundations of quantum statistical physics. Only a small proportion of the variables that characterize a system at the microscopic scale can be controlled, for both practical and theoretical reasons, and a probabilistic description involving the observers is required. The criterion of maximum von Neumann entropy is then used for making reasonable inferences. It means that no spurious information is introduced besides the known data. Its outcomes can be given a direct justification based on the principle of indifference of Laplace. We introduce the concept of relevant entropy associated with some set of relevant variables; it characterizes the information that is missing at the microscopic level when only these variables are known. For equilibrium problems, the relevant variables are the conserved ones, and the Second Law is recovered as a second step of the inference process. For non-equilibrium problems, the increase of the relevant entropy expresses an irretrievable loss of information from the relevant variables towards the irrelevant ones. Two examples illustrate the flexibility of the choice of relevant variables and the multiplicity of the associated entropies: the thermodynamic entropy (satisfying the Clausius–Duhem inequality) and the Boltzmann entropy (satisfying the H-theorem). The identification of entropy with missing information is also supported by the paradox of Maxwell’s demon. Spin-echo experiments show that irreversibility itself is not an absolute concept: use of hidden information may overcome the arrow of time.”
About the lack of subjectivity of the states? If we consider any bit of matter (for example a crystal or an ideal gas), the macrostate is completely independent of the observer: it’s just the state in which the laws of physics say that bit of matter should be. In an ideal gas it is entirely determined by the pressure and volume, which are anything but subjective. For a crystal it is more complex because we have to account for things like its shape, but the reasoning is the same.
Then, the microstates are just accessible states, and this is also dictated by Physics. For example, it is quite easy to see that a crystal has fewer accessible states than a gas (the atoms’ positions are constrained and the velocities are limited to the crystal’s vibration modes). We can calculate the entropy in the experimental conditions within that framework, or in the case of correlated liquids, or amorphous solids, or whatever. But the fact that we can come up with different entropies if we make different hypotheses does not mean that any of these hypotheses is actually valid. If we measure the entropy directly we might have a value that is consistent with several models, or none. The actual entropy is what we observe, not the theoretical scaffolding we use to try to make sense of it. And again, this is not subjective.
Others in this thread do believe that entropy is a subjective measure, or more precisely a measurement of the information that an observer has about a system instead of a measurement about the state of the system itself. Information theory easily leads to this interpretation, since for example the informational content of a stream of bytes can very much be observer-dependent. For example, a perfectly encrypted stream of all 1s will appear to have very high entropy for anyone who doesn't know the decryption process and key, while in some sense it will be interpreted as a stream with entropy 0 by someone who knows the decryption process and key.
Of course, the example I gave is arguable, since the two observers are not actually observing the same process. One is looking at enc(x), the other is looking at x. They would both agree that enc(x) has high entropy, and x has low entropy. But this same kind of phenomenon doesn't work with physical entropy. A gas is going to burn my hand or not regardless of how well I know its microstates.
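(If it helps, here is a toy version of what I mean, with an assumed SHA-256 keystream standing in for "perfect encryption"; illustrative only, not real crypto.)

```python
# Empirical byte entropy of x vs enc(x): the same underlying all-1s message
# looks maximally "random" to anyone who doesn't hold the key.
import hashlib, math
from collections import Counter

def keystream(key: bytes, n: int) -> bytes:
    """Toy keystream: SHA-256 of key + counter, concatenated (not real crypto)."""
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def entropy_bits_per_byte(data: bytes) -> float:
    counts = Counter(data)
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

plaintext = bytes([1]) * 100_000            # a stream of all 1s: 0 bits/byte
ciphertext = bytes(p ^ k for p, k in zip(plaintext, keystream(b"secret key", len(plaintext))))

print(entropy_bits_per_byte(plaintext))     # 0.0
print(entropy_bits_per_byte(ciphertext))    # ~8.0 bits/byte to anyone without the key
```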
As far as I understand it, the original thread was about whether this distinction exists at all. That is, my understanding is that the whole thread is about opposition to the Quanta article's assertion, which suggests that thermodynamic entropy is the same thing as information-theory entropy and that it is not a "physical" property of a system, but a quantity which measures the information that an observer has about said system.
If you already agree that the two are distinct measures, I believe there is no disagreement in the sub-thread.
> Fundamentally, entropy depends on the probability distribution, not the observer.
Right but a probability distribution represents the uncertainty in an observer so there is no inconsistency here (else you're falling for the Mind Projection Fallacy http://www-biba.inrialpes.fr/Jaynes/cc10k.pdf).
This is only true if one assumes that (a) physical processes are fundamentally completely deterministic, and that (b) it is possible to measure the initial state of a system to at least the same level of precision as the factors that influence the outcome.
Assumption (a) is currently believed to be false in the case of measuring a quantum system: to the best of our current knowledge, the result of measuring a quantum system is a perfectly random sampling of a probability distribution determined by its wave function.
Assumption (b) is also believed to be false, and is certainly known to be false in many practical experiments. Especially given that measurement is a time-consuming process, events that happen at a high frequency may be fundamentally unpredictable on computational grounds (that is, measuring the initial state to enough precision and then computing a probability may be physically impossible in the time that it takes for the event to happen - similar to the concept of computational irreducibility).
So, even in theory, the outcomes of certain kinds of experiments are probabilistic in a way that is entirely observer-independent; this is especially true in quantum mechanics, but it is also true in many types of classical experiments.
The idea that entropy represents the uncertainty in our description of a system works perfectly well in quantum statistical mechanics (actually better than in classical statistical mechanics where the entropy of a system diverges to minus infinity as the precision of the description increases). The entropy is zero when we have a pure state, a state perfectly defined by a wave function, and greater than zero when we have a mixed state.
Regarding your second point, how does the practical impossibility of measuring the initial state invalidate the idea that there is uncertainty about the state?
The post I was replying to claims that probability is not a physical property of an observed system, that it is a property of an observer trying to observe a system. The examples given in the quoted link all talk about experiments like rolling dice or tossing coins, and explain that knowledge of mechanics shows that these things are perfectly predictable mechanical processes, and so any "probability" we assign to them is only a measure of our own lack of knowledge of the result, which is ultimately a measure of our lack of knowledge of the initial state.
So, the link says, there's no such thing as a "fair coin" or a "fair coin toss", only questions of whether observers can predict the state or not (this is mostly used to argue for Bayesian statistics as the correct way to view statistics, while frequentist statistics is considered ultimately incoherent if looked at in enough detail).
I was pointing out however that much of this uncertainty in actual physics is in fact fundamental, not observer dependent. Of course, an observer may have much less information than physically possible, but it can't have more information about some system than a physical limit.
So, even an observer that has the most possible information about the initial state of a system, and who knows the relevant laws of physics perfectly, and has enough compute power to compute the output state in a reasonable amount of time, can still only express that state as a probability. This probability is what I would consider a physical property of that system, and not observer-dependent. It is also clearly measurable by such an observer, using simple frequentist techniques, assuming the observer is able to prepare the same initial state with the required level of precision.
> an observer may have much less information than physically possible, but it can't have more information about some system than a physical limit.
Still the probability represents the uncertainty of the observer. You say that "the most possible information" is still not enough because "measurement is a time-consuming process" and it's not "possible to measure" with infinite precision. I'd say that you're just confirming that "the lack of knowledge" happens but that doesn't mean the physical state is undefined.
You call that uncertainty a property of the system but that doesn't seem right. The evolution of the system will happen according to what the initial state was - not according to what we thought it could have been. Maybe we don't know if A or B will happen because we don't know if the initial state is a or b. But if later we observe A we will know that the initial state was a. (Maybe you would say that at t=0 the physical state is not well-defined but at t=1 the physical state at t=0 becomes well-defined retrospectively?)
I would say that any state that we can't measure is not a physical state. So if we can only measure, even in theory, a system up to precision dx, then the system being in state a means that some quantity is x±dx. If the rules say that the system evolves from states where y>=x to state A, and from states where y<x to state B, then whether we see it in state A or state B, we still only know that it was in state a when it started. We don't get any extra retroactive precision.
This is similar to quantum measurement: when we see that the particle was here and not there, we don't learn anything new about its quantum state before the measurement. We already knew everything there was to know about the particle, but that didn't help us predict more than the probabilities of where it might be.
> I would say that any state that we can't measure is not a physical state.
Ok, so if I understand correctly for you microstates are not physical states and it doesn't make sense to even consider that at any given moment the system may be in a particular microstate. That's one way to look at things but the usual starting point for statistical mechanics is quite different.
That would be a bit too strong a wording. I would just say that certain theoretical microstates (ones that assume that position and velocity and so on are real properties that particles possess and can be specified by infinitely precise numbers) are not in fact physically distinguishable. Physical microstates do exist, but they are subject to the Heisenberg limit, and possibly other, less well defined/determined precision limits.
Beyond some level of precision, the world becomes fundamentally non-deterministic again, just like it appears in the macro state description.
> That’s how we say that e.g. hydrogen atoms are indistinguishable. It’s not that they become indistinguishable because we decide so. It’s because we can calculate entropy in both cases and reality does not match the model with distinguishable atoms.
How does using isotopes that allow atoms to be distinguished affect entropy?
Add more particles, get more possible states. Hydrogen is a proton and an electron. Deuterium adds a neutron. More possible states means more entropy, assuming your knowledge of the system stayed constant.
>> The classification of several microstates into the same macrostate, is this not a distinctly observer-centred function?
> It seems that way if we consider only our neat models, but it fails to explain why experimental measurements of the entropy of a given material are consistent and independent of whatever model the people doing the experiment were operating on. Fundamentally, entropy depends on the probability distribution, not the observer.
I am not sure that I agree with this -- it feels a little too "neat and tidy" to me. One could argue, for example, that these seemingly-emergent agglomerations of states into cohesive "macro" units are an emergent property of the limitations of modelling based on the physical properties of the universe -- but there's no easy way to tell whether this set of behaviors comes from an underlying limitation of the _dynamics_ of the underlying state of the system(s) based on the rules of this universe, or from the limitations of our _capacity to model_ the underlying system based on constraints imposed by the rules of this universe.
Entropy by definition involves a relationship (generally at least) between two quantities -- even if implicitly, and oftentimes this is some amount of data and a model used to describe this data. In some senses, being unable to model what we don't know (the unknown unknowns) about this particular kind of emergent state (agglomeration into apparent macrostates) is in some form a necessary and complete requirement for modelling the whole system of possible systems as a whole.
As a general rule, I tend to consider all discretizations of things that can be described as apparently-continuous processes inherently "wrong", but still useful. This goes for any kind of definition -- the explicit definitions we use for determining the relationship of entropy between quantities, how we define integers, words we use when relating concepts with seemingly different characteristics (different kinds of uncertainty, for example).
We induce a form of loss over the original quantity when doing so -- entropy w.r.t. the underlying model, but this loss is the very thing that also allows us to reason over seemingly previously-unreasonable-about things (for example -- mathematical axioms, etc). These forms of "informational straightjackets" offer tradeoffs in how much we can do with them, vs how much we comprehend them. So, even in this light, the very idea of modelling a thing will always induce some form of loss over what we are working with, meaning that said system can never be used to reason about the properties of itself in a larger form -- never verifiably, ever.
Using this induction, we can extend it to reason about this meta-level of analysis, showing that because it is indeed a form of model sub-selected from the larger possible space of models, there is some form of inherent measurable loss, and it cannot be trusted to reason even about itself. And therein lies a contradiction!
However, one could postulate that this form of loss means that any model necessarily has some form of "collision" or inherent contradiction in it -- theorems like Borsuk-Ulam come to mind -- and so we must eventually come to the naked depravity of picking some flawed model to analyze our understanding of the world, and hope to realize along the way that we find a sense of comfort and security in the knowledge that it is built on sand and strings, and that its validity may unwind and slip away at any minute.
Maybe the experimental apparatus is not objective. The quantities we choose to measure are dictated by our psychological and physiological limitations. The volume of a container is not objectively defined. An organism which lives at a faster time scale will see the walls of the container vibrating and oscillating. You must convince the organism to average these measurements over a certain time scale. This averaging throws away information. This is the same with other thermodynamic quantities.
> The quantities we choose to measure are dictated by our psychological and physiological limitations.
No. The enthalpy changes measured by a calorimeter are not dependent on our psychological limitations.
> The volume of a container is not objectively defined.
Yes, it is, for any reasonable definition of "objective". We know how to measure lengths, we know how they change when we use different frames of reference so there is no situation in which a volume is subjective.
> An organism which lives at a faster time scale will see the walls of the container vibrating and oscillating.
This does not matter. We defined a time scale from periodic physical phenomena, and then we know how time changes depending on the frame of reference. There is no subjectivity in this, whatever is doing the measurement has no role in it. Time does not depend on how you feel. It’s Physics, not Psychology.
> This is the same with other thermodynamic quantities.
No, it’s really not. You seem to know just enough vocabulary to be dangerous and I encourage you to read an introductory Physics textbook.
I am not sure it would help, I think it would just enhance groupthink. I like how you need to write something obviously stupid or offensive for the downvotes to have a visible effect and that upvotes have no visible effect at all. (Yes, it changes ranking, but there are other factors). People are less prejudiced when they read the comment than if they see that it is already at -3 or +5.
Yes, it means that some posts should be more (or less) visible than they are but overall I think it’s a good balance.
Besides, I am not that interested in the absolute amount of information in a post. I want information that is relevant to me, and that is very subjective :)
The enthalpy changes measured by a calorimeter are dependent on the design of the calorimeter, which could have been a different piece of equipment. In a sense, that makes it dependent on the definition of enthalpy.
If you introduced a new bit of macro information to the definition of an ensemble, you'd divide the number of microstates by some factor. That's the micro level equivalent of macroscopic entropy being undefined up to an additive constant.
The measurables don't tell you S, they only tell you dS.
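For reference, the relation behind that statement is just

$$ \mathrm{d}S = \frac{\delta Q_{\text{rev}}}{T} , $$

so calorimetry only ever yields entropy differences; pinning down an absolute value takes a convention such as the third-law reference point, $S \to 0$ as $T \to 0$ for a perfect crystal.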
> The enthalpy changes measured by a calorimeter are dependent on the design of the calorimeter, which could have been a different piece of equipment.
Right, but that is true of anything. Measuring devices need to be calibrated and maintained properly. It does not make something like a distance subjective, just because someone is measuring it in cm and someone else in km.
> If you introduced a new bit of macro information to the definition of an ensemble, you'd divide the number of microstates by some factor. That's the micro level equivalent of macroscopic entropy being undefined up to an additive constant.
It would change the entropy of your model. An ensemble in statistical Physics is not a physical object. It is a mental construct and a tool to calculate properties. An actual material would have whatever entropy it wants to have regardless of any assumptions we make. You would just find that the entropy of the material would match the entropy of one of the models better than the other one. If you change your mind and re-run the experiment, you’d still find the same entropy. This happens e.g. if we assume that the experiment is at a constant volume while it is actually under constant pressure, or the other way around.
> In a sense, that makes it dependent on the definition of enthalpy.
Not really. A joule is a joule, a kelvin is a kelvin, and the basic laws of thermodynamics are some of the most well tested in all of science. The entropy of a bunch of atoms is not more dependent on arbitrary definitions than the energy levels of the atoms.
> The measurables don't tell you S, they only tell you dS.
That’s true in itself, the laws of Thermodynamics are invariant if we add a constant term to the entropy. But it does not mean that entropy is subjective: two observers agreeing that the thing they are observing has an entropy of 0 at 0 K will always measure the same entropy in the same conditions. And it does not mean that actual entropy is dependent on specific assumptions about the state of the thing.
This is also true of energy, and electromagnetic potentials (and potentials in general). This is unrelated to entropy being something special or subjective.
Entropy isn't subjective, but it is relative to what you can measure about a given system, or what measurements you are analyzing a system with respect to. In quantum mechanics there are degenerate ground states, and in classical mechanics there are cases where the individual objects (such as stars) are visible but averaged over. You should take a look at the Gibbs paradox.
I would say an even more limiting measure of entropy shows that entropy isn't subjective. That is, entropy measures the ability to extract useful work from a system by moving the system from a state of lower entropy to a state of higher entropy (in a closed system).
No subjective measure of entropy can allow you to create a perpetual motion machine. The measurements of any two separate closed systems could be arbitrary, but when said systems are compared with each other, units and measurements standardize.
The amount of useful work that we can extract from any system depends - obviously and necessarily - on how much “subjective” information we have about its microstate, because that tells us which interactions will extract energy and which will not; this is not a paradox, but a platitude. If the entropy we ascribe to a macrostate did not represent some kind of human information about the underlying microstates, it could not perform its thermodynamic function of determining the amount of work that can be extracted reproducibly from that macrostate. […] the rules of thermodynamics are valid and correctly describe the measurements that it is possible to make by manipulating the macro variables within the set that we have chosen to use. This useful versatility - a direct result of and illustration of the “anthropomorphic” nature of entropy - would not be apparent to, and perhaps not believed by, someone who thought that entropy was, like energy, a physical property of the microstate.
Edit: I’ve just noticed that the article discussed links to this paper, and quotes the first sentence above, and details the whifnium example given in the […] above.
I think this is a very weird thought experiment, and one that is either missing a lot of context or mostly just wrong. In particular, if two people had access to the same gas tank, and one knew that there were two types of argon and the other didn't, so they would compute different entropies, one of them would still be wrong about how much work could be extracted out of the system.
If whifnium did exist, but it was completely unobtainable, then both physicists would still not be able to extract any work out of the system. If the one that didn't know about whifnium was given some amount of it without being told what it was, and instructed in how to use it, they would still see the same amount of work being done with it as the one who did know. They would just find out that they were wrong in their calculation of the entropy of the system, even if they still didn't know how or why.
And of course, this also proves that the system had the same entropy even before humanity existed, and so the knowledge of the existence of whifnium is irrelevant to the entropy of the system. It of course affects our measurement of that entropy, and it affects the amount of work we can get that system to perform, but it changes nothing about the system itself and its entropy (unless of course you tautologically define entropy as the amount of work the experimenter/humanity can extract from the system).
> unless of course you tautologically define entropy as the amount of work the experimenter/humanity can extract from the system
Do you have a different definition? (By the way, the entropy is the energy that _cannot_ be extracted as work.) The entropy of the system is a function of the particular choice of state variables - those that the experimenters can manipulate and use to extract work. It’s not a property of the system on its own. There is no “true” entropy any more than there is a “true” set of state variables for the “true” macroscopic (incomplete) description of the system. If there was a true limiting value for the entropy it would be zero - corresponding to the case where the system is described using every microscopic variable and every degree of freedom could be manipulated.
Yes, the more regular statistical mechanics theory of entropy doesn't make any reference to an observer, and definitely not to a human observer. In that definition, the entropy of a thermodynamic system is proportional to the logarithm of the number of microstates (positions, types, momentum, etc. of individual particles) that would lead to the same macrostate (temperature, volume, pressure, etc.). It doesn't matter if an observer is aware of any of this, it's an objective property of the system.
Now sure, you could choose to describe a gas (or any other system) in other terms and compute a different value for entropy with the same generalized definition. But you will not get different results from this operation - the second law of thermodynamics will still apply, and your system will be just as able or unable to produce work regardless of how you choose to represent it. You won't get better efficiency out of an engine by choosing to measure something other than the temperature/volume/pressure of the gases involved, for example.
Even if you described the system in terms its specific microstate, and thus by the definition above your computed entropy would be the minimum possible, you still wouldn't be able to do anything that a more regular model couldn't do. Maxwell's demon is not a physically possible being/machine.
> the entropy of a thermodynamic system is proportional to the logarithm of the number of microstates (positions, types, momentum, etc. of individual particles) that would lead to the same macrostate (temperature, volume, pressure, etc.). It doesn't matter if an observer is aware of any of this, it's an objective property of the system.
The meaning of "would lead to the same macrostate" (and therefore the entropy) is not an "objective" property of the system (positions, types, momentum, etc. of individual particles). At least not in the way that the energy is an "objective" property of the system.
The entropy is an "objective" property of the pair formed by the system (which can be described by a microstate) and some particular way of defining macrostates for that system.
That's what people mean when they say that the entropy is not an "objective" property of a physical system: that it depends on how we choose to describe that physical system (and that description is external to the physical system itself).
Of course, if you define "system" as "the underlying microscopical system plus this thermodynamical system description that takes into account some derived state variables only" the situation is not the same as if you define "system" as "the underlying microscopical system alone".
> That's what people mean when they say that the entropy is not an "objective" property of a physical system: that it depends on how we choose to describe that physical system (and that description is external to the physical system itself).
I understand that's what they mean, but this is the part that I think is either trivial or wrong. That is, depending on your choice you'll of course get different values, but it won't change anything about the system. It's basically like choosing to measure speed in meters per second or in furlongs per fortnight, or choosing the coordinate system and reference frame: you get radically different values, but relative results are always the same.
If a system has high entropy in the traditional sense, and another one has lower entropy, and the difference is high enough that you can run an engine by transferring heat from one to the other, then this difference and this fact will remain true whatever valid choice you make for how you describe the system's macrostates. This is the sense in which the entropy is an objective, observer-independent property of the system itself: same as energy, position, momentum, and anything else we care to measure.
> I understand that's what they mean, but this is the part that I think is either trivial or wrong. That is, depending on your choice you'll of course get different values, but it won't change anything about the system. It's basically like choosing to measure speed in meters per second or in furlongs per fortnight, or choosing the coordinate system and reference frame: you get radically different values, but relative results are always the same.
I would agree that it's trivial but then it's equally trivial that it's not just like a change of coordinates.
Say that you choose to represent the macrostate of a volume of gas using either (a) its pressure or (b) the partial pressures of the helium and argon that make it up. If you put together two volumes of the same mixture the entropy won't change. The entropy after they mix is just the sum of the entropies before mixing.
However, when you put together one volume of helium and one volume of argon, the entropy calculated under choice (a) doesn't change but the entropy calculated under choice (b) does increase. We're not calculating the same thing in different units: we're calculating different things. There is no change of units that makes a quantity change and also remain constant!
The (a)-entropy and the (b)-entropy are different things. Of course it's the same concept applied to two different situations but that doesn't mean it's the same thing. (Otherwise one could also say that the momentum of a particule doesn't depend on its mass or velocity because it's always the same concept applied in different situations.)
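A quick sketch of the (a)-vs-(b) bookkeeping (my own toy numbers, 1 mol of each gas at the same temperature and pressure):

```python
# Ideal-gas entropy of mixing under the two descriptions above.
import math

R = 8.314  # J/(mol*K), gas constant

def mixing_entropy(moles):
    """dS_mix = -R * sum(n_i * ln(x_i)) for an ideal-gas mixture."""
    n_total = sum(moles)
    return -R * sum(n * math.log(n / n_total) for n in moles if n > 0)

# (a) one species called "gas": no composition variable, so no mixing term.
print(mixing_entropy([2.0]))       # 0.0 J/K

# (b) helium and argon tracked separately: 1 mol of each.
print(mixing_entropy([1.0, 1.0]))  # ~11.5 J/K  (= 2 R ln 2)
```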
> However, when you put together one volume of helium and one volume of argon, the entropy calculated under choice (a) doesn't change but the entropy calculated under choice (b) does increase. We're not calculating the same thing in different units: we're calculating different things. There is no change of units that makes a quantity change and also remain constant!
Agreed, this is not like a coordinate transform at all. But the difference from a coordinate transform is that they are not both equally valid choices for describing the physical phenomenon. Choice (a) is simply wrong: it will not accurately predict how certain experiments with the combined gas will behave.
It will predict how other certain experiments with the combined gas will behave. That's what people mean when they say that the entropy is not an "objective" property of a physical system: that it depends on how we choose to describe that physical system - and what experiments we can perform acting on that description.
By my understanding, even if we have no idea what gas we have, if we put it into a calorimeter and measure the amount of heat we need to transfer to it to change its temperature to some value, we will get a value that will be different for a gas made up of only argon versus one that contains both neon and argon. Doesn't this show that there is some objective definition of the entropy of the gas that doesn't care about an observer's knowledge of it?
Actually the molar heat capacity for neon, or argon, or a mixture thereof, is the same. These are monatomic ideal gases as far as your calorimeter measurements can see.
If the number of particles is the same you’ll need the same heat to increase the temperature by some amount and the entropy increase will be the same. Of course you could do other things to find out what it is, like weighing the container or reading the label.
No, they are not. The entropy of an ideal monatomic gas depends on the mass of its atoms (see the Sackur–Tetrode equation). And a gas mix is not an ideal monatomic gas; its entropy increases at the same temperature and volume compared to an equal volume divided between the two gases.
Also, entropy is not the same thing as heat capacity. It's true that I didn't describe the entropy measurement process very well, so I may have been ambiguous, but they are not the same quantity.
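For reference, the Sackur–Tetrode expression I mean (standard form for a monatomic ideal gas, with m the atomic mass):

$$ \frac{S}{N k_B} = \ln\!\left[\frac{V}{N}\left(\frac{2\pi m k_B T}{h^2}\right)^{3/2}\right] + \frac{5}{2} . $$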
I'll leave the discussion here but let me remind you that you talked (indirectly) about changes in entropy and not about absolute entropies: "if we put it into a calorimeter and measure the amount of heat we need to transfer to it to change its temperature to some value".
Note as well that the mass dependence in that equation for the entropy is just an additive term. The absolute value of the entropy may be different but the change in entropy is the same when you heat a 1l container of helium or neon or a mixture of them from 300K to 301K. That's 0.0406 moles of gas. The heat flow is 0.506 joules. The change in entropy is approximately 0.0017 J/K.
> And a gas mix is not an ideal monatomic gas; its entropy increases at the same temperature and volume compared to an equal volume divided between the two gases.
A mix of ideal gases is an ideal gas and its heat capacity is the weighted average of the heat capacities (trivially equal to the heat capacity of the components when it's the same). The change of entropy when you heat one, or the other, or the mix, will be the same (because you're calculating exactly the same integral of the same heat flow).
The difference in absolute value is irrelevant when we are discussing changes in entropy and measurements of the amount of heat needed to increase the temperature and whether you "will get a value that will be different for a gas made up of only argon versus one that contains both neon and argon".
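(For anyone who wants to check the arithmetic above, a quick sketch assuming 1 atm, constant volume, and a monatomic ideal gas:)

```python
import math

R = 8.314               # J/(mol*K)
P, V = 101325.0, 1e-3   # 1 atm, 1 litre
T1, T2 = 300.0, 301.0

n = P * V / (R * T1)              # ~0.0406 mol
Cv = 1.5 * R                      # monatomic ideal gas, per mole
Q = n * Cv * (T2 - T1)            # ~0.506 J (constant volume, so Q = n*Cv*dT)
dS = n * Cv * math.log(T2 / T1)   # ~0.0017 J/K, independent of the atomic mass

print(n, Q, dS)
```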
What makes an observation mundane? I think what you said is insightful and demonstrates intelligence. I don't think it's at all obvious to the masses of students that have pored over introductory physics textbooks. In fact, it seems to me that entropy is often taught poorly and that very few people understood it well, but that we are beginning to correct that. I point to the heaps of popsci magazines, documentaries and YouTube videos failing to do anything but confuse the public as additional evidence.
Maybe if you only ever opened introductory physics textbooks then it’s not mundane; but if you open introductory information theory textbooks, statistics textbooks, introductions to Bayesian probability theory, or articles about MaxEnt thermodynamics, including articles by E.T. Jaynes, then it’s quite a mundane observation.
Jaynes definitely made the case for this a long time ago, and in my opinion he's correct, but I think that view is still not mainstream; or even if mainstream, certainly not dominant. So I think we should welcome other people who reach it on their own journey, even if they aren't the first to arrive there.
In physics, the requirement for a valid change of the frame of reference is that the laws of physics transform according to the transformation.
Every observer should discover the same fundamental laws when performing experiments and using the scientific method.
To stay in your analogy, saying 5 and 6 are the same would only work if the rules of the game you play could transform in such a way that an observer making a distinction between the two would arrive at the correctly transformed rules in his frame of reference.
Given that we have things like neutron stars, black holes and other objects that are at the same time objects of quantum physics and general relativity, the statement feels pretty fundamental to me, to the degree even that I wonder if it might be phrased too strongly.
I think you may have misunderstood the OP's point - the entropy you calculate for a system depends on how you factor the system into micro and macro states. This doesn't really have anything to do with changes of reference frame - in practice it's more about limitations on the kinds of measurements you can make of the system.
(you can't measure the individual states of all the particles in a body of gas, for instance, so you factor it into macrostate variables like pressure/temperature/volume and such)
Can we take the anthropic out of this? I reckon it'll make things easier.
Instead of me knowing, do other physical objects get affected? I might get amnesia and forget what the dots on a die mean and say they are all the same: all dotty!
Imagine each hydrogen atom has a hidden guid but this is undetectable and has no effect on anything else. This is a secret from the rest of physics!
I guess!!! (Armchair pondering!) that that guid cannot be taken into account for entropy changes. At least from any practical standpoint.
You could imagine each atom having a guid and come up with a scheme to hash the atom based on where it came from ... but is that info really there, and if so, does it affect anything physically beyond that atom's current state (as defined by stuff that affects other stuff)?
What anthropic do you mean? I'm describing properties of models, not people. Physics (probably) doesn't care what you "know".
On the guid idea - fundamental particles are indistinguishable from one another in quantum mechanics, so they don't have anything like a guid even in principle. There is no experiment you could perform on an electron to determine whether it had been swapped out for a "different" one, for instance.
Sorry ... I am replying mostly to the dice idea which wasn't you.
Yes correct about the guid idea. My point is the discussion is easier to follow if grounded in reality (as best modelled since that is all we have plus some evidence stored in the same "SSD"!)
Oh I see. But on your guid thing, people often describe entropy in terms of the set of micro states of your system (the actually physical states in your model) and the macro states (sets of microstates that are described by a collection of high-level state variables like pressure/temperature).
Physically indistinguishable stuff would have the same micro state, so yeah, they wouldn't affect entropy calculations at all, no matter what macro states you picked.
But I disagree a bit about grounding things in reality - some concepts are quite abstract and having clean examples can be helpful, before you start applying them to the mess that is our universe!
From a thermodynamics point of view only the differential of the entropy matters, so if there is only a fixed difference between the two computations they do not influence the physics.
If the way one does the coarse graining of states results in different differentials, one way should be the correct one.
There is only one physics.
If I remember correctly, one of Planck's revelations was that he could explain why a certain correction factor was needed in entropy calculations, since phase space had a finite cell size.
That's true - for instance I believe many of the results of statistical mechanics rely on further assumptions about your choice of macrostates, like the fact that they are ergodic (i.e. the system visits each microstate within a macrostate with equal probability on average). Obviously exotic choices of macrostates will violate these assumptions, and so I would expect the predictions such a model makes to be incorrect.
But ultimately that's an empirical question. Entropy is a more general concept that's definable regardless of whether the model is accurate or not.
For the longest time I had no real intuition of what entropy actually represented. This veritasium video explained it in a way that finally clicked for me: https://www.youtube.com/watch?v=DxL2HoqLbyA
Fails to mention Heisenberg uncertainty, which in my opinion is a theoretical ceiling to this approach. It also needs to account for the cost of compute relative to the potential useful work from these quantum engines. If the energy cost of compute exceeds the potential useful work, then it’s still net negative (or useless work). Finally there’s the question of hidden patterns and the spectrum of randomness. Some systems are more random than others. The potential for useful work within a reasonable energy cost of compute will decline as we travel down the spectrum of randomness. Systems which are at maximal Heisenberg uncertainty, i.e. whose particles are not entangled and have no correlation with a superstructure of other entangled particles, will not admit any further improvement in knowledge and thus hold zero potential work. This is the ultimate entropy of the local and macro system. Probably also the cause of certain violations of conservation of energy principles, such as dark energy.
The interactive graphic that tries to show entropy is subjective doesn't sit right with me.
They fail to properly define the macrostate of the system under consideration, then show two different observed entropies for two different macrostates (Colors for Alice and Shapes for Bob).
That doesn't show entropy is subjective; it shows that defining the system is subjective. The same two macrostates would still have the same entropy.
I had a lot of questions about similar issues, mostly surrounding the question of "what is the nature of 'subjective'?"
The article deals with this a bit but not as much as I would like — maybe because of the state of the literature?
The linked paper by Safranek et al on observational entropy was sort of interesting, noting how a choice of coarse graining into macrostates could lead to different entropies, but it doesn't really address the question of why you'd choose a given coarse graining or macrostate to begin with, which seems critical in all of this?
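A toy version of that coarse-graining dependence (my own made-up colors and shapes, using the observational-entropy definition as I understand it from that paper, S_obs = sum_i p_i ln(V_i / p_i), with V_i the number of microstates lumped into macrostate i):

```python
# Same microstates, two coarse-grainings, two observational entropies.
import math
from itertools import product
from collections import defaultdict

colors = ["red", "blue"]
shapes = ["circle", "square", "triangle", "star", "hexagon", "cross"]
microstates = list(product(colors, shapes))          # 12 microstates

def observational_entropy(p_micro, label):
    """S_obs = sum_i p_i * ln(V_i / p_i), macrostates defined by `label`."""
    V = defaultdict(int)     # microstates per macrostate
    p = defaultdict(float)   # probability per macrostate
    for m, pm in p_micro.items():
        V[label(m)] += 1
        p[label(m)] += pm
    return sum(pi * math.log(V[i] / pi) for i, pi in p.items() if pi > 0)

# The microstate is known exactly: all probability on one (color, shape) pair.
p_known = {m: 0.0 for m in microstates}
p_known[("red", "star")] = 1.0

print(observational_entropy(p_known, lambda m: m[0]))  # grouping by color: ln 6
print(observational_entropy(p_known, lambda m: m[1]))  # grouping by shape: ln 2
```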
In the information theory literature, there's a certain information cost (in a Kolmogorov complexity sense) associated with choosing a given coarse graining or macrostate to begin with — in their example, choosing shape or color to define entropy against. So my intuition is that the observational entropy is kind of part of a larger entropy or informational cost including that of the coarse graining that's chosen.
This kind of loops back to what they discuss later about costs of observation and information bottlenecks, but it (and the articles it links to) doesn't really seem to address this issue of differential macrostate costs explicitly in detail. It's a bit unclear to me; it seems like there's discussion that there is a thermodynamic cost, but not of how that cost accrues, or of why you'd adopt one macrostate vs another (note Alice and Bob in their subjectivity example are defined by different physical constraints, and can be thought of as two observational systems with different constraints).
It's also interesting to me to think about it from another perspective, which is: let's say you have a box full of a large number of particles that are "purely random". In that scenario it doesn't really matter what Alice and Bob see, only the number of particles etc. The entropy with regard to, say, color will depend on the number of colors, not the positions of the particles, because they're maximally entropic. In reorganizing the particles with reference to a certain property, they're each decreasing the entropy from that purely random state by a certain amount that I think can be related in some way to the information involved in returning the particles to a purely random state.
A lot of the article has links to other scientific and mathematical domains. Some of the stuff about information costs of observation has ties to the math and computer science literature through Wolpert (2008), who approaches it from a computational perspective, and later Rukavicka. There are similar ideas in the neuroscience literature about entropy-reduction efficiency (I'm forgetting the names of some of the people involved there).
I really liked this Quanta piece, but there's a lot of fuzziness around certain areas and I couldn't tell if that was just due to fuzzy writing, the fuzzy state of the literature, or my poor understanding of things.
Maybe if you take Alice and Bob to be two separate alien species it could make more sense. Alice’s species has no measurement devices capable of detecting Bob’s version of entropy, and therefore Alice is not able to extract any useful work from Bob’s system and vice versa. Therefore any objective definition of entropy needs to include the capabilities of the measurer. Which is just the same as what you were saying about macrostates.
Thank you for articulating what was bothering me about this.
I couldn't quite put my finger on it, but you're right. They are confusing defining the system with defining the entropy of a system and then saying it's the entropy that is subjective. That isn't the case at all. Entropy is just a measurement.
That can't be right, because the arrow of time depends on the universe being in a low-entropy state when the Big Bang happened. There were no observers then. Also, the 2nd law of thermodynamics doesn't depend on observers. The universe will continue to evolve into higher-entropy states without any intelligent observers making measurements: stars will burn out, black holes will evaporate, and the temperature of the universe will get close to absolute zero.
> the 2nd law of thermodynamics doesn't depend on observers
Thermodynamic entropy is not just a property of a system; it depends on how we choose to describe the system. Of course, we can define the thermodynamic entropy for a system without observers. (In fact, we can only define the classic equilibrium thermodynamic entropy for a system without observers!)
I come to entropy from machine learning, information theory, and probability. For me it's fairly straightforward: interesting and useful, but nothing mysterious there.
The p.d.f., aka a fancy histogram, is my current best knowledge of how often each outcome is expected to happen. When I do one experiment, I can't tell the outcome in advance; I can only count the number of different possible outcomes, without being certain which of them will actually appear.
A flat p.d.f. means my knowledge is poor: every outcome is about equally likely, so I'm very ignorant (high entropy). A spiky p.d.f. means I have good knowledge: some outcome is much more likely (low entropy). In the extreme, the p.d.f. is a Dirac impulse; that's deterministic knowledge.
The only mildly interesting thing is when a new observation reduces my knowledge. Say right now I'm fairly certain I don't have cancer; my chances are 90:10 for my age. Tomorrow I take a test, and the test comes back positive. Of people my age who test positive, about half have cancer for real. After the test my chances are 50:50, so I am now perfectly ignorant about whether I have cancer or not, whereas before I took the test and got a positive result, I was fairly certain I did not have cancer. The new information (a positive test result) transformed my probability of cancer from the spiky marginal P_Y(y) = {0.9, 0.1} to the perfectly ignorant conditional P_Y(y | +ve test) = {0.5, 0.5}.
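A quick sketch of that update in code (the test's sensitivity and false-positive rate below are my own assumed numbers, chosen so that about half of the positives really have cancer, as in the example):

    import math

    def entropy_bits(p):
        # Shannon entropy in bits of a discrete distribution.
        return -sum(x * math.log2(x) for x in p if x > 0)

    prior_cancer = 0.1                # the 90:10 prior from the example

    # Assumed test characteristics (not from the article): 90% sensitivity
    # and a 10% false-positive rate give a likelihood ratio of 9.
    p_pos_given_cancer = 0.9
    p_pos_given_healthy = 0.1

    # Bayes' rule after a positive result.
    p_pos = (p_pos_given_cancer * prior_cancer
             + p_pos_given_healthy * (1 - prior_cancer))
    posterior_cancer = p_pos_given_cancer * prior_cancer / p_pos   # = 0.5

    print(entropy_bits([1 - prior_cancer, prior_cancer]))          # ~0.47 bits
    print(entropy_bits([1 - posterior_cancer, posterior_cancer]))  # 1.0 bit

The observation raised the entropy of my belief about this one variable from about 0.47 bits to the maximal 1 bit, which is exactly the "new information reduced my knowledge" effect.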
Lovely article. The subjective nature of entropy and information immediately makes me think of integrated information theory (IIT) of consciousness and its foundational futility. Information cannot be discussed without a perspective: someone has to define the states. A die has 6 states only to us humans. What about an ant that's likely to have the die land on it? Bringing the observer back into discussions of information is fascinating because it then raises the question: how is the observer put together? And how does a perspective, an I, emerge in a multi-trillion-cell entity?
For those interested in this detour, you might like reading this and our book (mentioned in there)
I remember that somewhere during my physics degree this idea from the article clicked with me.
Entropy is just a simpler way of capturing properties of a system that we didn’t measure, either because it was too difficult (when I was still at the classical physics level) or because it’s not possible to measure them (once quantum mechanics became present in everything we learned).
We use it because we can’t measure the position and momentum of every particle (the same goes for temperature), so we built theories of how a system with a particular entropy behaves; it’s just a clever way to work with approximations.
The chart of "Thermodynamic Beta" on Wikipedia is great. I like to think about the Big Bang as a population inversion where thermodynamic beta (and so the temperature) temporarily goes negative, leading to the hyperinflationary epoch as the universe "collapses" into existence, finally cooling to a near-"infinite" temperature, making the CMB nice and smooth.
The Big Bang clearly defies thermodynamic laws, so why wouldn't it be a negative-temperature phenomenon? It's the "cheat code" the "primordial universe" uses to dodge problems like realities popping into existence.
1) entropy is a mathematical concept. Physics applies it, like any other mathematical one
2) math entropy does not change; everything is reversible, as it is based on known quantities
3) entropy measures how far we are from perfectly knowing a system
4) the logarithm is chosen to measure entropy because of its convenient properties, not because of any deeper law. It could be a sum, a product, etc.
5) the second law of thermodynamics is a tautology: "every system tends to a higher-entropy state because it is more common" becomes "the more common is the more common"
Re 5): there is no "every system"; there is only the universe, whose entropy increases even if individual objects (a block of ice) decrease theirs. You're bleeding off entropy into the universe, and you can never get it back.
I think those convenient properties are the sign of some deeper law. You do want to work with the exponential and logarithm, the same as in so many other places; you don't necessarily want some other generalized "entropy".
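One way to make the "convenient properties" concrete: multiplicities of independent subsystems multiply, and the logarithm is (up to the constant k) the only continuous choice that turns that multiplication into the addition you want from an extensive quantity:

    S_{1+2} = k \ln(W_1 W_2) = k \ln W_1 + k \ln W_2 = S_1 + S_2

    H(X, Y) = H(X) + H(Y) \quad \text{for independent } X, Y

The same additivity-under-independence requirement is what singles out the log in Shannon's setting too.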
Thermodynamics is a young and poorly thought out science.
My beef is the adoption of bad names and pseudo-philosophy. In the Gibbs free energy equation, there is a term called entropy, but it is nothing more than the heat transferred for the given elements, divided by the given constant temperature, and derived from experimental observation. Instead of calling it entropy, they could have called it the Clausius constant. No need to confuse generations of students.
Makes sense. If we knew the theory of everything and the initial conditions of the universe, then we could just compute the next episode of a series instead of streaming it.
Not necessarily. A theory of everything does not imply either the computability of a future state or determinism.
Even if our ToE is deterministic, the universe may be computationally irreducible, meaning its future states cannot in general be predicted by any shortcut cheaper than simulating it step by step. Note that such a universe could contain within it regions that are computationally reducible, just not the whole and not all regions.
I would expect a ToE to give us knowable bounds on either determinism or computability. It should tell us what is precisely knowable or predictable and what isn’t.
Edit: to understand how a ToE could leave some things unknowable (but tell us what they are), consider the Hubble horizon. Light from beyond it will never reach us, making sufficiently distant things unknowable.
Limits may be great. They mean we can at least subjectively consider ourselves as having free will; even with a deterministic theory, it may be unknowable determinism. It’s just like how the speed of light might be why we got to evolve before being bum-rushed by aliens.
The entire universe could be the quantum many-worlds (branches in a higher-dimensional Hilbert space), and you would have to compute the wave function of the universe.
A theory of everything would have to have quantum mechanics as an emergent theory, and quantum mechanics is fundamentally non-deterministic. And even if it were deterministic, a theory of everything is almost certainly not going to be practically computable, for the same reason that even basic non-relativistic quantum mechanics cannot numerically solve a reasonably sized molecule (let alone exactly), just as classical mechanics has no general closed-form solution for more than two bodies.
Is that true? It was my understanding that chaotic systems (with a known initial state) could be predicted to arbitrary precision by throwing enough compute at the problem. Of course, knowing the initial state exactly is impossible in the real world, "enough" compute might not fit in the observable universe, and quantum mechanics contains true randomness…
Yes, chaotic systems can be simulated, and that's the only way to find out what happens with them. The thing one can't do is reduce them to a closed-form function and evaluate the state at an arbitrary point in time. The only thing we can do is inductively figure out the next state from the current state.
The issue is more that simulations diverge exponentially with time from any nonzero error in the initial state. With perfect knowledge of the initial conditions and parameters you could simulate the system perfectly, but you'd also need to accumulate zero numerical error along the way.
If you don't demand perfection, then in practice you can do pretty well for short times.
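A minimal sketch of that exponential divergence, using the logistic map rather than any particular physical system (the map and parameters here are just a standard toy example):

    # Two trajectories of the logistic map in its chaotic regime (r = 4),
    # started a tiny distance apart. The separation grows roughly
    # exponentially until it saturates at the size of the attractor.
    r = 4.0
    x, y = 0.3, 0.3 + 1e-12
    for step in range(1, 61):
        x, y = r * x * (1 - x), r * y * (1 - y)
        if step % 10 == 0:
            print(step, abs(x - y))

With an initial error of ~1e-12 the two runs agree for a few dozen steps and then become completely uncorrelated, which is the practical limit the parent comment describes.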
I don't get the example under "What is entropy?". It works only under the assumption that reality (phase space in particular) is somehow "pixelated". If reality is continuous, 9 particles that are clumped together can already be in uncountably many states.
"Carnot’s book was largely disregarded by the scientific community, and he died several years later of cholera. His body was burned, as were many of his papers." Poetic? albeit depressing.
A localized reduction of entropy will always result in an increase of entropy in the larger system.
It doesn't matter how efficient your process is; the entropy of the surrounding system will ALWAYS increase as a result of the work needed to effect a localized reduction.
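In second-law bookkeeping, that's just the statement that only the total change is constrained, not the local one:

    \Delta S_{\text{total}} = \Delta S_{\text{local}} + \Delta S_{\text{surroundings}} \ge 0
    \quad\Rightarrow\quad
    \Delta S_{\text{local}} < 0 \;\text{ only if }\; \Delta S_{\text{surroundings}} \ge |\Delta S_{\text{local}}|

A refrigerator is the usual example: the inside gets colder only by dumping at least as much entropy, as heat, into the kitchen.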
But consider the "monkeycide paradox". If we start a nuclear war and kill every single primate on the planet, does the universe cease to exist? Does time start running in reverse? Maybe time would go back to before we nuked everyone out of existence, and life would be like one of those endless time-loop anime? Will matter start spewing out of black holes instead of going in because there was no monkey there to watch it? Would it be enough if a dog observed the universe (they're "like our family", after all)? Or a squid? What about a yeast cell?
Obviously this is hyperbole 8-) but I think the point is clear. If anyone really believes that the existence of primate life on our little planet "observing the universe" is what makes all physical processes advance, they have some serious ego issues.
Of course a theory doesn't have to be completely correct to be useful. Old ideas of heat as a fluid have been supplanted, but they still helped design working systems in their time. Modern ideas of quantum mechanics, as incomplete as they are, still model a concept of "tunneling" that's sufficiently accurate to make semiconductors work.
Or, you can just run with the Rovelli quote from the article: “What they’re telling me is bullshit” 8-)
Very nice article. There’s an alternative way of thinking of the Gibbs paradox, which is to attach labels to the particles. This naturally makes them distinguishable and increases the total number of possible configurations, and with it the maximum value of the entropy.
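A back-of-envelope version of that counting, under the usual dilute-gas assumption that double occupancy of a single-particle state is negligible: each unlabeled configuration of N particles corresponds to N! labeled ones, so

    S_{\text{labeled}} = k \ln\!\left(N! \, W_{\text{unlabeled}}\right) = S_{\text{unlabeled}} + k \ln N!

That k ln N! is exactly the term whose presence or absence decides whether mixing two samples of the "same" gas changes the entropy, which is the crux of the Gibbs paradox.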
Attaching labels to the particles would make them fundamentally different types of particles, right? So it doesn’t seem that surprising that a system with different types of particles would be different.
Why has Quanta degenerated into a mad combination of ineffective science communication and breathless, slightly envious references to yoga retreats in the North of England? It's become a lifestyle magazine for the self-important, science-adjacent middlebrows. Most obviously, it's nerd-sniping for HN.
And before you _instinctively_ (it will be instinctive) downvote me to oblivion, please read the piece and assure yourself that it's not truly a piece of rote trash.
This is an example of better writing on the same subject. Clear and concise, and with less extraneous material. It doesn't have the nice animations, but the writing is just better.
Is there a specific name for the art and craft of presenting a topic online (as a web page) using a flow of text interspersed with interactive activities?
Whatever the name of this approach, Quanta Magazine is definitely good at it!
It looks like this might be the original paper by Kay; can anyone provide a link that isn’t behind a paywall?
Search results seem to indicate he was referring to text-based content plus interactive components, something something Java(script?). I’m not a programmer and know less about the history...
Multimedia refers to the integration of multiple forms of content such as text, audio, images, video, and interactive elements into a single digital platform or application.
Multimedia is fairly general, so it includes this but also much more.
Including video in particular tends to hijack the experience (you switch modes), whereas interactive elements that you explore as you keep reading feel more integrated.
It's pretty cyclical: there's older sci-fi that's more dystopian, and still older sci-fi than that which is optimistic. (And of course, the trends are only trends.)
The Action Lab on YouTube defined entropy as something like: "Not a measure of how many states a system reaches, but a measure of how many states a system can reach".
I like this very much as it neatly summarizes information and other shit too.