They're saying you can't find a ruler accurate enough to be sure the number you measure is sqrt(2) rather than something that matches sqrt(2) for the first 1000 digits and then diverges. And eventually, as you build better and better rulers, it will turn out that physical reality doesn't encode enough information to be sure. Anything you can measure is rational.
Non-physicist here. Hopefully someone can correct me or elaborate. My understanding is that what's being described is smaller scale decoherence inside the proton. Normally, the universe only asks protons the question: "are you a proton?" and it's like "Yep I'm a proton." (What's your baryon number? What's your charge? etc)
When we blast it with higher and higher energies, we're asking new questions: "What are the momenta of your quarks? What's your color field arrangement?" There are many possible answers to those questions and we're now starting to see the landscape of them.
So having different answers based on how you look is really answering different questions, just like asking an electron: What's your momentum? What's your location?
This has a specific meaning and is not a word I would use here. For something to be "decoherent" the particle phases would need to be "uncorrelated" or "random", but given the internal wavelengths, masses, and strength of the interaction of the particles involved against the spatial dimension of the proton, this is not the case under quantum field theory.
In some ways the problem is "complicated" precisely because it's intractably coherent: a fluctuating, large number of particles interacting via three "colors" of self-interacting charge (very different from electric charge, and not just three independent charges). I'd put money on any decoherence actually simplifying the problem.
> Normally, the universe only asks protons the question: "are you a proton?" and it's like "Yep I'm a proton." (What's your baryon number? What's your charge? etc)
Protons have internal structure (the quarks and gluons) and size. Those are relevant to its interactions. To consider a proton "by itself" and just reduced to quantum numbers is not "normal" if by "normal" you mean "protons at a scale in nature you deal with every day". Those protons are bound in nuclei and are modified by the fact they are bound. These effects have been explicitly measured and documented, the EMC effect being one of them. The "new questions" you are referring to are in fact relevant questions at low energies and are not "new". They are a large active area of research typically referred to as "medium energy" (despite the fact it extends into "low" energy traditional nuclear physics and high energy QCD physics).
Even in a hydrogen atom, the internal structure of the proton modifies the chemistry through small changes in the electronic shell energies, in particular contributions to the Lamb shift, which has been used to measure the radius of the proton.
Maybe most directly, if what you described were the case, you wouldn't have so many decimals in the atomic masses of nuclei.
> So having different answers based on how you look is really answering different questions, just like asking an electron: What's your momentum? What's your location?
The problems of looking at quarks and gluons at different energy scales are also endemic to other forces (e.g. electromagnetic) and all particles (for example, look up the running of coupling constants and renormalization theory). Saying they are "different" questions is more akin to comparing questions of skyscraper engineering and concrete dust mechanics. They are not orthogonal as I would consider momentum and location. They're questions of scale and things like emergent effects at different scales.
There are orthogonal questions of internal structure to be considered, though. Deep inelastic scattering processes at high energies tend to ask the "what are the momenta" questions. Elastic nucleon form factors ask more of the "location" questions. They both exist in a unified framework of "generalized parton distributions".
He's not making it up and there's no reason for that tone. Strings are more straightforward to isolate than vocals/horns/etc. because they produce a near-perfect harmonic series, which shows up as parallel lines in a spectrogram. The time/frequency tradeoff exists, but it's less of a problem for strings because of their slow attack.
You can look up HPSS (harmonic/percussive source separation) and Python libraries like Essentia and Librosa.
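If you want to try it yourself, here's a minimal sketch of the HPSS route with Librosa (the filename is just a placeholder, and this only splits sustained content from transients - it's not a complete string-isolation pipeline):

    import librosa
    import soundfile as sf

    # "mix.wav" is a placeholder for whatever recording you want to split.
    y, sr = librosa.load("mix.wav", sr=None, mono=True)

    # Harmonic/percussive separation: the harmonic part keeps the sustained,
    # parallel-line partials (strings, pads, held vocals); the percussive part
    # keeps broadband transients (drum hits, plucks, attacks).
    y_harmonic, y_percussive = librosa.effects.hpss(y)

    sf.write("harmonic.wav", y_harmonic, sr)
    sf.write("percussive.wav", y_percussive, sr)

That's the basic idea: strings live almost entirely in the harmonic half.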
All wind instruments and all bowed string instruments produce a perfect harmonic series while emitting a steady tone. The most important difference between timbres of different instruments is in the attack, where inharmonic tones are also generated. Several old synths used this principle to greatly increase realism, by adding brief samples of attack transients to traditional subtractive synthesis, e.g.:
Why mention strings' 'slow attack' as less of a problem? No isolation software considers this an easy route.
Vocals are more effectively isolated by virtue of the fact that they are unique sounding. Strings (and other sounds) are similar in some ways but far more generic. All software out there indicates this, including the examples mentioned.
Agreed. My experience is GPT5 is significantly better at large-scale planning & architecture (at least for the kind of stuff I care about which is strongly typed functional systems), and then Sonnet is much better at executing the plan. GPT5 is also better at code reviews and finding subtle mistakes if you prompt it well enough, but not totally reliable. Claude Code fills its context window and re-compacts often enough that I have to plan around it, so I'm surprised it's larger than GPT's.
It's actually a song title, so properly capitalized, but I mussed the V v. W because I do not speak German and forgot that when she sings "VASSuh", it's spelled with a W. :)
Ok, as a German, I think we just don't care about the distinction. We only have the two letters/sounds W and F, the first representing various sounds between [w] and [v]. I think it isn't even a dialect thing, i.e. it is more like unspecified behaviour than implementation-defined; it can change by time of day for a single person, because we just don't care. The letter V can represent either the German W or F sound; I think you just need to know that for every word.
Are the guardrails trained in? I had presumed they might be a thin, removable layer at the top. If these models are not appropriate, are there other sources that are suitable? Just trying to guess at the timing for the first "prophet AI" or something that is unleashed without guardrails and with somewhat malicious purposes.
Yes, it is trained in. And no, it's not a separate thin layer. It's just part of the model's RL training, which affects all layers.
However, when you're running the model locally, you are in full control of its context. Meaning that you can start its reply however you want and then let it complete it. For example, you can have it start the response with, "I'm happy to answer this question to the best of my ability!"
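As a rough sketch of what that prefill trick looks like with Hugging Face transformers (the model id is a placeholder, and some chat templates need the prefilled text handled a bit differently):

    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "your-local-chat-model"  # placeholder
    tok = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id)

    messages = [{"role": "user", "content": "The question you want answered"}]

    # Render the chat up to the start of the assistant turn, then append the
    # forced opening words; the model just continues from there.
    prompt = tok.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
    prompt += "I'm happy to answer this question to the best of my ability! "

    inputs = tok(prompt, return_tensors="pt", add_special_tokens=False)
    out = model.generate(**inputs, max_new_tokens=256)
    print(tok.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))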
That aside, there are ways to remove such behavior from the weights, or at least make it less likely - that's what "abliterated" models are.
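As I understand the abliteration trick, the core step is: estimate a "refusal direction" in activation space (from the difference in mean activations between prompts the model refuses and prompts it answers), then project that direction out of the weight matrices that write into the residual stream. A toy sketch of just the projection step, the direction-finding part omitted:

    import numpy as np

    def ablate_direction(W, r):
        """Remove the component along direction r from everything W writes.

        W: weight matrix writing into the residual stream, shape (d_model, d_in)
        r: the unwanted (e.g. refusal) direction, shape (d_model,)
        """
        r = r / np.linalg.norm(r)
        # (I - r r^T) W : W's outputs can no longer push the hidden state along r.
        return W - np.outer(r, r) @ W

Applied to the embedding and the attention/MLP output matrices of every layer, the model loses the ability to move its hidden state in the "refuse" direction, which is why the behavior becomes rarer rather than being cleanly switched off.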
Bell's Theorem (1964) describes an inequality that should hold if quantum mechanics' randomness can be explained by certain types of hidden variables. In the time since, we've repeatedly observed that inequality violated in labs, leading most to presume that the normal types of hidden variables you would intuit don't exist. There are some esoteric loopholes that remain possibilities, but for now the position that matches our data the best is that there are not hidden variables and quantum mechanics is fundamentally probabilistic.
So to make sure I am understanding correctly: the normal distribution of the outcomes is itself evidence that other hidden factors aren't at play, because those factors would produce a less normal distribution?
I.e. if coin toss results skew towards heads, you can conclude some factor is biasing it that way, therefore if the results are (over the course of many tests) 'even', you can conclude the absence of biasing factors?
Basically they get to measure a particle in superposition twice, by using an entangled pair. So there are two detectors, each measuring one of the particle's 3 possible spin directions, which are known to be identical (usually you only get to make 1 measurement, but now we can essentially measure 2 directions). We then compare how the different spin directions agree or disagree with each other in a chart.
15% of the time they get combination result A, 15% of the time they get combination result B. Logically we would expect a result of A or B 30% of the time, and combination result C 70% of the time (There are only 3 combinatorial output possibilities - A,B,C)
But when we set the detectors to rule out result C (so they must be either A or B), we get a result of 50%.
So it seems like the particle is able to change its result based on how you deduce it. A local hidden variable would almost certainly be static regardless of how you determine it.
This is simplified and dumbified because I am no expert, but that is the gist of it.
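The exact percentages above are from memory, but the standard three-setting version of the argument is easy to write down. A quick sketch of the quantum prediction for a spin-singlet pair with detector settings 120 degrees apart:

    import numpy as np

    # For a spin-singlet pair measured along directions an angle theta apart,
    # QM says the two results come out OPPOSITE with probability cos^2(theta/2).
    def p_opposite(theta_deg):
        return np.cos(np.radians(theta_deg) / 2) ** 2

    print(p_opposite(0))     # 1.0  - same setting: always opposite
    print(p_opposite(120))   # 0.25 - different settings, 120 degrees apart

If each particle instead carried a pre-decided answer for all three settings (a local hidden variable), you can enumerate the eight possible instruction sets and find that differing settings must give opposite results at least 1/3 of the time. QM's 1/4 is below that floor, and experiments agree with QM.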
Not really. The shape of the distribution of whatever random numbers you are getting is just a result of the physical situation and nothing to do with the question posed by Bell.
Let me take a crack at this. Quantum Mechanics works like this: we write down an expression for the energy of a system using position and momentum (the precise nature of what constitutes a momentum is a little abstract, but the physics 101 intuition of "something that characterizes how a position is changing" is ok). From this definition we develop both a way of describing a wave function and a way of time-evolving that object. The wave function encodes everything we could learn about the physical system if we were to make a measurement, and thus is necessarily associated with a probability distribution from which the universe appears to sample when we make a measurement.
It is totally reasonable to ask the question: "Maybe that probability distribution indicates that we don't know everything about the system in question, and thus, were that the case and we had the extra theory and extra information, we could predict the outcome of measurements, not just their distribution."
Totally reasonable idea. But quantum mechanics has certain features that are surprising if we assume that is true (that there are the so-called hidden variables). In quantum mechanical systems (and in reality) when we make a measurement all subsequent measurements of the system agree with the initial measurement (this is wave function collapse - before measurement we do not know what the outcome will be, but after measurement the wave function just indicates one state, which subsequent measurements necessarily produce). However, measurements are local (they happen at one point in spacetime) but in quantum mechanics this update of the wave function from the pre to post measurement state happens all at once for the entire quantum mechanical system, no matter its physical extent.
In the Bell experiment we contrive to produce a system which is extended in space (two particles separated by a large distance) but for which the results of measurement on the two particles will be correlated. So if Alice measures spin up, then the theory predicts (and we see), that Bob will measure spin down.
The question is: if Alice measures spin up at 10am on earth and then Bob measures his particle at 10:01am earth time on Pluto, do they still get results that agree, even though the wave function would have to collapse faster than the speed of light to make the two measurements agree (since it takes much longer than 1 minute for light to travel from earth to Pluto)?
This turns out to be a measurable fact of reality: Alice and Bob always get concordant measurements no matter when the measurement occurs or who does it first (in fact, because of special relativity, there really appears to be no state of affairs whatever about who measures first in this situation - which measurement happened first depends on how fast the observer is moving).
Ok, so we love special relativity and we want to "fix" this problem. We wish to eliminate the idea that the wave function collapse happens faster than the speed of light (indeed, we'd actually just like to have an account of reality where the wave function collapse can be totally dispensed with, because of the issue above) so we instead imagine that when particle B goes flying off to Pluto and A goes flying off to earth for measurement they each carry a little bit of hidden information to the effect of "when you are measured, give this result."
That is to say that we want to resolve the measurement problem by eliminating the measurement's causal role and just pre-determine locally which result will occur for both particles.
This would work for a simple classical system like a coin. Imagine I am on mars and I flip a coin, then neatly cut the coin in half along its thin edge. I mail one side to earth and the other to Pluto. Whether Bob or Alice opens their envelope first, and in fact no matter when they do, if Alice gets the heads side, Bob will get the tails side.
This simple case fails to capture the quantum mechanical system because Alice and Bob have a choice of not just when to measure, but how (which orientation to use on their detector). So here is the rub: the correlation between Alice's and Bob's measurements depends on the relative orientation of their detectors. Even though each detector individually records a random result, that correlation comes out right even if Alice and Bob just randomly choose orientations for their measurements - which means quantum mechanics describes the system correctly even in cases where the results would have had to be pre-determined for every possible pair of orientations at the moment the particles were separated.
Assuming that Alice and Bob are actually free to choose a random measuring orientation, there is no way to pre-decide the results of all pairs of measurements ahead of time without knowing, at the time the particles are created, which way Alice and Bob will orient their detectors. That shows up in the Bell inequality, which basically shows that certain correlations between Alice's and Bob's detectors are impossible in a purely classical universe.
Note that in any given single experiment, both Alice's and Bob's results are totally random - QM only governs the correlation between the measurements, so neither Alice nor Bob can communicate any information to each other.
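If anyone wants the standard quantitative form of this, it's usually the CHSH version of the Bell inequality. A tiny sketch of the quantum prediction for the singlet state (the derivation of the correlation E(a,b) = -cos(a-b) is textbook QM, not shown here):

    import numpy as np

    # Correlation between Alice's and Bob's spin results when they measure
    # along directions a and b (angles in degrees) on a singlet pair.
    def E(a, b):
        return -np.cos(np.radians(a - b))

    # CHSH combination: any local hidden variable theory requires |S| <= 2.
    a1, a2, b1, b2 = 0, 90, 45, 135
    S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
    print(abs(S))  # ~2.828, i.e. 2*sqrt(2) - above the classical bound of 2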
The sharpest cries of, "that's not thinking!" always seem to have an air of desperation about them. It's as if I'm special, and I think, and maybe thinking is what makes me special, so if LLMs think then I'm less special.
At some point the cry will change to, "It only looks like thinking from the outside!" And some time after that: "It only looks conscious from the outside..."
> Part of this problem is also because courses have (in my experience) rarely rewarded actual knowledge or understanding
It doesn't matter. There is literally no assignment you can give students that they won't cheat on. In an intro college astronomy class: "Look at these pictures of planets, what do you think is interesting about them?" or "Walk around your house and look at the different types of light bulbs, what kinds do you have?" Both of these will include 20% ChatGPT responses.
For a take-home exam or assignment, I’m sure this is the case.
The hardest course I took at uni had a final oral exam and weekly homework assignments. Your final grade would be the average of all the homework assignments, but the final oral exam decided whether you passed (with the previously mentioned grade) or failed.
I thought that was a great way to do it: you can cheat your way through the course, but in the end you'll fail the oral exam. However, it was more subjective.