In response to those commentators puzzling over the evidential status of computer simulations, you should know that this is currently an open question being debated in the field of Epistemology of Measurement, which aims to give a rigorous account of what it means to "measure" something.
Not obvious from the title, but in this article Greenwald takes the opportunity to air some of the dirty laundry about why he left the Intercept. Worth reading.
Long story short, QBism inherits "agent" from Bayesian decision theory. In that context, we talk about gamblers making bets on the outcomes of dice rolls. In QBism, we talk about scientists making bets on the outcomes of measurements. What exactly can be a "gambler?" Can a dog gamble? What about a trained dog? Or a flea? The answer is: if the shoe fits, wear it. Same goes for QBism. If a dog walks into a lab, lights up a pipe, scribbles equations on the blackboard and starts aligning beamsplitters, then QBism regards him as an agent.
A dog is not like a "point of view" or a "reference frame", which can be thought of as existing independently of a living, breathing being.
A personalistic interpretation requires a "person". Would a rock or a cell phone qualify as a person? I'm not even sure the authors of that reference would accept a dog, despite saying the concept is extremely flexible (they discuss "social" agents but not "dumb" agents).
It's not clear to me that the mathematical description can exist independently of the agent that has some knowledge, for some definition of agent. Even if it can be made to work with no-one's "abstract knowledge", what would be the physical meaning of that? (Not that the physical meaning when agents are present is clear, to be fair.)
(physicist here). I think monktastic1 mostly nailed it; I just want to add some further clarifications.
Take the electron double-slit experiment. Suppose no human checks which slit the electron goes through, but the "which-path" information leaks out into the environment somehow. The interference fringes will be destroyed anyway (called "environmental decoherence").
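To make that concrete, here is a minimal numpy sketch of my own (not something from the comment above): correlate the electron's which-path state with two orthogonal environment states, trace the environment out, and the off-diagonal terms of the density matrix, which are what produce the fringes, vanish.

    import numpy as np

    # Which-path basis states |L> and |R> for the two slits.
    L = np.array([1.0, 0.0])
    R = np.array([0.0, 1.0])

    # Coherent superposition (|L> + |R>)/sqrt(2): the off-diagonal
    # entries of its density matrix carry the interference fringes.
    psi = (L + R) / np.sqrt(2)
    rho_coherent = np.outer(psi, psi)

    # Let the which-path information leak: correlate the electron with
    # two orthogonal environment states |eL>, |eR>.
    eL, eR = np.array([1.0, 0.0]), np.array([0.0, 1.0])
    joint = (np.kron(L, eL) + np.kron(R, eR)) / np.sqrt(2)
    rho_joint = np.outer(joint, joint)

    # Trace out the environment (sum over its index).
    rho_reduced = np.trace(rho_joint.reshape(2, 2, 2, 2), axis1=1, axis2=3)

    print(rho_coherent)  # off-diagonals 0.5 -> fringes visible
    print(rho_reduced)   # off-diagonals 0.0 -> fringes gone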
This is usually framed by saying "oh look, the environment measured the system". In fact, what happened is that (1) you prepared what you believed to be an electron beam in a coherent superposition; (2) you left it open to the environment; (3) you used quantum theory to model the system-plus-environment and predicted that you would not see interference fringes when you check; (4) YOU observed the dots on the glass plate and confirmed that there were no interference fringes. The whole story is told in terms of what YOU believed, expected, and finally what you saw. That is QBism's whole point.
To put it another way: in QBism wavefunctions just ARE sets of probability assignments to certain actual events that could be observed. And probabilities are human constructions: we invented them to help us reason about our expectations. So, without humans (or other beings capable of reasoning about expectations), there are no probabilities, hence no wavefunctions either. But stuff still happens, even when there's nobody around to see it or assign probabilities to it.
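For what it's worth, here is a deliberately trivial sketch (my own illustration) of what "a wavefunction is a set of probability assignments" cashes out to via the Born rule: the same state vector hands you one list of probabilities for one experiment and a different list for another.

    import numpy as np

    # A spin-1/2 state written in the "up"/"down" basis.
    psi = np.array([np.sqrt(0.7), np.sqrt(0.3)])

    # Born rule: probabilities for the outcomes of an up/down measurement.
    print(np.abs(psi) ** 2)              # [0.7, 0.3]

    # The same state assigns different probabilities to a different
    # experiment, e.g. a measurement in the rotated (+/-) basis.
    plus = np.array([1.0, 1.0]) / np.sqrt(2)
    minus = np.array([1.0, -1.0]) / np.sqrt(2)
    print(abs(plus @ psi) ** 2, abs(minus @ psi) ** 2)   # ~0.96, ~0.04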
The key conceptual leap is separating "wavefunction" completely from "reality". There's more to quantum physics than just the wavefunctions -- the reality lies in the events that happen, not in the quantum state itself.
(pro QBist physicist here). Just some clarifications on the interesting points you raise.
1. In practice, experimental physicists can and do disagree all the time about what their apparatuses are doing, what the results mean, and so on -- at least, they do at the beginning of the experiment. By the end, they have gone through a process that results in agreement, which allows them to write a paper together declaring things in an objective manner. So it is important to distinguish the "inputs" to scientific practice (our individual, subjective initial guesses about what is going on) from the products of that practice (effectively objective statements). QBism interprets wavefunctions not as products but as inputs: we need to start with a prior "best guess" about the probabilities, and it is the interactive process of updating our subjective priors in light of data that leads us to converge on a single wavefunction, which then attains an objective status in the sense of being the same for everyone (there is a toy sketch of this convergence after this list). QBism's point is that this whole process of "objectification" is itself a process to be analyzed, not taken for granted as if the objectivity were there from the beginning.
2. The PBR theorem (which you cite) doesn't bite QBism, and PBR themselves admit as much in their paper. See point 8 in [https://arxiv.org/abs/1810.13401].
3. What you say would indeed be silly if QBism treated classical states as "more real" than quantum states. But it doesn't.
QBism treats quantum and classical states on the same footing: both are sets of subjective probabilities about the outcomes of possible measurements you could do. So classical states are probability distributions, and probabilities in QBism always represent subjective degrees of belief of an agent (following the de Finetti/Ramsey school of probability theory). The difference between classical and quantum states in QBism does not lie in what the states ARE; it lies in the rules that tell you how to compute one state given your subjective beliefs about another state (a small numerical illustration of a state as a probability assignment follows this list). And those rules are objective in QBism -- as objective as the rule that says probabilities have to add up to one.
4. Doubly ironically then, this is probably the one point raised by critics that hits the QBists hardest. QBism can bring clarity to many issues (I find it handily resolves the measurement problem and the non-locality of "collapse" -- see e.g. [https://arxiv.org/abs/1311.5253]), but there are still plenty of physical phenomena that don't yet have a neat QBist explanation.
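On point 1, here is a toy numpy sketch of my own (nothing this tidy happens in a real lab, of course) of the convergence I mean: two experimenters with very different subjective priors about a detector's click probability update on the same shared data and end up, for all practical purposes, agreeing.

    import numpy as np

    rng = np.random.default_rng(0)

    # Two experimenters with different subjective priors (Beta
    # distributions) for the probability p that the detector clicks.
    priors = {"Alice": (1.0, 9.0),   # expects clicks to be rare
              "Bob":   (9.0, 1.0)}   # expects clicks to be common

    # They jointly collect 1000 runs (true p = 0.3, unknown to both).
    data = rng.random(1000) < 0.3
    clicks, silences = int(data.sum()), int((~data).sum())

    # Beta is conjugate to Bernoulli data, so updating is just counting.
    for name, (a, b) in priors.items():
        a_post, b_post = a + clicks, b + silences
        print(name, "posterior mean:", round(a_post / (a_post + b_post), 3))

    # Different subjective "inputs", but after enough shared data both
    # posteriors sit near 0.3 -- the effectively objective "product".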
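And on point 3, a small numerical illustration (again my own, using the standard tetrahedral SIC-POVM for a qubit) of a quantum state being nothing over and above a probability assignment: the four probabilities a density matrix assigns to one reference measurement determine it completely, and the reconstruction rule is the same for every agent.

    import numpy as np

    # Pauli matrices and the four tetrahedral Bloch directions.
    I2 = np.eye(2)
    sx = np.array([[0, 1], [1, 0]], dtype=complex)
    sy = np.array([[0, -1j], [1j, 0]])
    sz = np.array([[1, 0], [0, -1]], dtype=complex)
    dirs = np.array([[1, 1, 1], [1, -1, -1],
                     [-1, 1, -1], [-1, -1, 1]]) / np.sqrt(3)
    proj = [(I2 + r[0]*sx + r[1]*sy + r[2]*sz) / 2 for r in dirs]
    povm = [P / 2 for P in proj]   # SIC-POVM elements, summing to I

    # Some qubit state (a valid density matrix).
    rho = np.array([[0.8, 0.3 - 0.1j], [0.3 + 0.1j, 0.2]])

    # QBism-style: the state IS the probabilities it assigns to the
    # outcomes of this one reference measurement...
    p = [np.real(np.trace(rho @ E)) for E in povm]

    # ...because rho can be rebuilt from them. For d = 2 the (objective)
    # rule is rho = sum_i (3 p_i - 1/2) Proj_i.
    rho_back = sum((3 * pi - 0.5) * P for pi, P in zip(p, proj))
    print(np.allclose(rho, rho_back))   # True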
It is not at all obvious that simulations should be excluded from the category of "evidence", especially when the simulation depends heavily on the input of observational data. Consider, for instance, using a simulation to interpolate the temperature in some location based on the actually observed temperatures in surrounding regions. Let's also not forget that modern precision measurement is highly dependent on making detailed computer models to predict how the measuring instruments function, blurring the line somewhat between simulation and measurement.
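A deliberately simple sketch of the interpolation case, just to make the point about data dependence vivid (a real simulation would of course be a physical model rather than this inverse-distance-weighting toy, and all the numbers here are made up):

    import numpy as np

    # Observed temperatures (deg C) at nearby stations, with coordinates
    # in km relative to the unobserved location at the origin.
    stations = np.array([[10.0, 0.0], [0.0, 15.0], [-12.0, -5.0], [8.0, -9.0]])
    temps = np.array([21.3, 19.8, 22.1, 20.5])

    # Inverse-distance weighting: the "simulated" value is nothing but a
    # weighted average of actual observations.
    dists = np.linalg.norm(stations, axis=1)
    weights = 1.0 / dists**2
    print(round(float(np.sum(weights * temps) / np.sum(weights)), 1))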
Some references:
[1] Nature: "Virtually a measurement", by Wendy Parker. https://www.nature.com/articles/s41567-020-01138-3
[2] Stanford Encyclopedia of Philosophy entry on "Measurement in Science": https://plato.stanford.edu/entries/measurement-science/#EpiM...
[3] "The problem of observational grounding: how does measuring generate evidence?" by Eran Tal. Video: https://vimeo.com/184697369 Article (open access): https://iopscience.iop.org/article/10.1088/1742-6596/772/1/0...