Mathematics of shuffling by smooshing (quantamagazine.org)
75 points by cjg on May 29, 2017 | 20 comments



This article starts off strong but ends with a hand-wavy "I guess someday we'll know more about smooshing." Furthermore, it doesn't address the wide variation in technique between smooshings performed by different people (I could swirl the cards differently, or make the smooshing pile bigger or smaller), which is basically impossible to control for. How do you even begin to guarantee any kind of regularity without simply using a smooshing machine?


Shuffling a deck of cards is a basic coding interview question, but I prefer a more interesting version: how do you code a simulation of real-life shuffling? Take a basic riffle. Cut the deck roughly in half and interleave the two halves. Of course, if they are perfectly interleaved then the arrangement is not random, so that's not what happens; in reality there's a "clumping factor" that causes a few cards to stick together on each side. What other variables are at play? Can you reproduce the finding that it takes seven shuffles to fully randomize a deck?
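Here's a sketch of one answer in Python; the off-center cut and the p_stick clumping parameter are knobs I made up, not anything from the article:

    import random

    def riffle_with_clumps(deck, p_stick=0.5):
        # Toy riffle: imperfect cut near the middle, then interleave by
        # dropping geometrically sized clumps from alternating halves.
        n = len(deck)
        cut = n // 2 + random.randint(-3, 3)          # rough, imperfect cut
        halves = [list(deck[:cut]), list(deck[cut:])]
        out, side = [], random.randint(0, 1)
        while halves[0] or halves[1]:
            if not halves[side]:                      # one half exhausted
                side = 1 - side
            out.append(halves[side].pop(0))
            # With probability p_stick, the next card sticks to this clump.
            while halves[side] and random.random() < p_stick:
                out.append(halves[side].pop(0))
            side = 1 - side                           # switch to the other half
        return out

Running this seven times on a sorted deck and comparing statistics such as the number of rising sequences against a uniformly shuffled deck would be one way to probe the seven-shuffle claim empirically.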


> Then, one card at a time is repeatedly moved from the bottom of one of the packets to the top of the shuffled deck, such that if x cards remain in one packet and y cards remain in the other packet, then the probability of choosing a card from the first packet is x/(x + y) and the probability of choosing a card from the second packet is y/(x + y).

https://en.wikipedia.org/wiki/Gilbert%E2%80%93Shannon%E2%80%...
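That rule transcribes almost directly into code. A minimal Python sketch, treating the end of each list as the bottom of a packet:

    import random

    def gsr_riffle(deck):
        n = len(deck)
        # Under the GSR model the cut position is Binomial(n, 1/2).
        cut = sum(random.random() < 0.5 for _ in range(n))
        a, b = list(deck[:cut]), list(deck[cut:])
        out = []
        while a or b:
            x, y = len(a), len(b)
            # P(next card comes from the first packet) = x / (x + y).
            if random.random() < x / (x + y):
                out.append(a.pop())
            else:
                out.append(b.pop())
        return out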


The article contained effectively no mathematics. "Mathematics of shuffling by smooshing" is a poor title (and not the actual title of the article, which was more accurate).


That a "toddler shuffle" requires not-yet-developed math techniques for analysis ... fascinating!

> "The model does provide a framework for relating the size of the deck to the amount of mixing time needed, but pinning down this relationship precisely requires ideas from a mathematical field still in its infancy, called the quantitative theory of differential equations."


> quantitative theory of differential equations

Any idea what this is referring to? It's a pretty vague description...


About this cutoff: I guess it is analogous to the phase transition we see in, for example, the Ising model (an example given in the article).

That transition is driven by a change in temperature, i.e., in the chance of flipping a spin into an unfavorable configuration. How does this relate to the number of shuffles of a deck?

There is no equivalent of a temperature here, from what I can see.


The "cut-off" phenomenon is dynamical, not involving changes in parameters. The idea is that if you start a (irreducible aperiodic) Markov chain with initial condition drawn from a non-stationary distribution, and you compute the distance d(n) between the distribution at step n and the stationary distribution (where distance usually means "total variation distance" https://en.wikipedia.org/wiki/Total_variation_distance_of_pr...) then d(n) can drop off sharply as a function of n, rather than exponentially for all n as one might expect.

Here is one of Persi's papers on this topic:

http://www.pnas.org/content/93/4/1659.full.pdf

There is also a chapter in Trefethen and Embree's (very interesting) book "Spectra and Pseudospectra" on the topic, and IIRC it has a nice explanation of the cutoff phenomenon. Offhand I don't know a paper I can point to, but perhaps someone on the list does.


I don't mean to imply d(n) does not drop off exponentially (as it should), only that it has a sharp drop-off before the exponential scaling sets in, rather than a steady exponential all the way.

Also, if you equip an Ising model with Glauber dynamics, the resulting Markov chain does exhibit a cutoff phenomenon. See, e.g., https://arxiv.org/abs/0909.4320
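For anyone unfamiliar: Glauber (heat-bath) dynamics resamples one spin at a time from its conditional distribution given its neighbors. A minimal 1-D sketch of the update rule (my own, with coupling J = 1 and no external field):

    import math
    import random

    def glauber_step(spins, beta):
        # Pick a uniformly random site on a 1-D ring.
        n = len(spins)
        i = random.randrange(n)
        # Local field from the two neighbors (periodic boundary).
        h = spins[(i - 1) % n] + spins[(i + 1) % n]
        # Heat-bath rule: set the spin to +1 with its conditional probability.
        p_up = 1.0 / (1.0 + math.exp(-2.0 * beta * h))
        spins[i] = 1 if random.random() < p_up else -1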


I imagine smooshing could be thought of as pulling packets of cards off the deck and interleaving them randomly with other packets, assuming your hand moves a clump of cards at a time and smooshes it into another clump.

I wonder how closely this simple model represents real-life smooshing.
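A quick Python sketch of that toy model (the max_packet size is a made-up parameter):

    import random

    def smoosh_step(deck, max_packet=5):
        # Pull a random contiguous packet out of the deck...
        n = len(deck)
        size = random.randint(1, min(max_packet, n - 1))
        start = random.randrange(n - size + 1)
        packet = deck[start:start + size]
        rest = deck[:start] + deck[start + size:]
        # ...and reinsert it at a uniformly random position.
        pos = random.randrange(len(rest) + 1)
        return rest[:pos] + packet + rest[pos:]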


"smooshing" aka wash, aka Corgi shuffle.


I think 52 pickup is the best method for randomising a deck.


"In fact, any time you shuffle a deck to the point of randomness, you have probably created an arrangement that has never existed before."

Eh... maybe not. My bet is it's somewhat analogous to the birthday problem. I have no doubt that it's possible to create many new arrangements, but the possibilities are probably not normally distributed, and there have been a hell of a lot of card games played.

All of these considerations together suggest that a new shuffle is probably not unique in the universe.


Even with the birthday problem, sqrt(52!) is about 10^34, which is still huge. It's unlikely that any two (sufficiently random) shuffles in history have been the same.
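Back-of-the-envelope, in Python; the 10^20 figure for the number of shuffles ever performed is just my own loose guess:

    import math

    orderings = math.factorial(52)                    # ~8.07e67
    print(f"52!       ~ {orderings:.3e}")
    print(f"sqrt(52!) ~ {math.sqrt(orderings):.3e}")  # birthday threshold, ~9e33

    # Birthday bound: for k uniform shuffles, P(any collision) <= k^2 / (2 * 52!).
    k = 1e20                                          # guess at shuffles ever performed
    print(f"collision bound for k = 1e20: {k**2 / (2 * orderings):.1e}")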


> (sufficiently random)

Isn't that a No True Scotsman?

Edit: I guess the GP post uses the same criterion, though.


No, that isn't what the No True Scotsman fallacy is. Putting qualifiers on statements is completely valid. A No True Scotsman fallacy is when you use qualifiers in a hand-wavy way to move the goalposts.


What makes it a No True Scotsman is the term 'sufficiently random', which kind of makes it unfalsifiable. After all, if any two shuffles are the same, then (by this logic) they're not sufficiently random.


Except it's unlikely that "sufficiently random" refers to "do not equal each other" in this context. It's likely that "sufficiently random" means "passes one of a variety of tests of statistical randomness" (for example, n log n repetitions of Fisher-Yates). That implies randomness, but doesn't imply "doesn't equal any other deck of cards" except with high probability.


That doesn't make it unfalsifiable at all. "Sufficiently random" is not defined as "not the same." You also clearly misunderstood the logic of the post in question: the poster started with a clear definition of random and used that to determine whether shuffles are likely to be repeated. If there were fewer cards in a deck, that post would have reached a different conclusion.


Not normally distributed? Why would they be? For a "random" shuffle you would want a uniform distribution -- each permutation of the cards being equally likely. I can't think what bias could consistently distort a random shuffle, but maybe you have one in mind.

Edit: "to the point of randomness" effectively means a uniform probability of getting any one of the possible orderings...



