One of my absolute favorite mathematicians is Persi Diaconis, who is mentioned in this paper and was the original basis for this area of research. Well worth learning about. He was a runaway. Then a magician, trained by Dai Vernon, working alongside people like the late Ricky Jay and Richard Turner. He went on to get a PhD from Harvard and receive a MacArthur Fellowship. He is a legend.
Persi is also one of my absolute favorite mathematicians. I had the good fortune to write "Trailing the dovetail shuffle to its lair" (Seven shuffles suffice) with him.
Wait... You wrote it with him? Are you Dave Bayer? That's amazing! What was it like working together? And absolutely fantastic paper! I have so many questions lol
I once wondered if you could reach every permutation with "believable" shuffles. I define a "believable" shuffle as one where you cut the deck near the center, and then rejoin the two parts by alternately dropping cards from the two parts, with most drops being one card and no drop being a lot of cards.
The answer is yes.
Let P be a perfect out riffle shuffle. Let S(n) be a shuffle that is almost a perfect out riffle except that it swaps the order of dropping cards n and n+1. Let O(X) be the number of times you have to consecutively do shuffle X to get back to the order you started with.
If you do O(S(n))-1 shuffles using S(n) followed by P, you end up with the original order except cards n and n+1 are swapped. This gives you a procedure for swapping any adjacent pair of cards, and that is sufficient to use bubble sort to achieve any desired permutation via believable shuffles.
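A minimal Python sketch of this construction (my assumptions: a 52-card deck, 0-indexed positions, and S(n) modeled as a perfect out shuffle with output positions n and n+1 swapped):

```python
def out_shuffle(deck):
    # Perfect out riffle P: split in half, interleave, top card stays on top.
    half = len(deck) // 2
    a, b = deck[:half], deck[half:]
    return [card for pair in zip(a, b) for card in pair]

def s_shuffle(deck, n):
    # S(n): out shuffle, except drops n and n+1 land in swapped order.
    out = out_shuffle(deck)
    out[n], out[n + 1] = out[n + 1], out[n]
    return out

def order(shuffle, size=52):
    # O(X): repeats of shuffle X needed to return to the starting order.
    start = list(range(size))
    deck, count = shuffle(start), 1
    while deck != start:
        deck, count = shuffle(deck), count + 1
    return count

def swap_adjacent(n, size=52):
    # O(S(n)) - 1 shuffles of S(n), then one P: swaps positions n and n+1.
    deck = list(range(size))
    for _ in range(order(lambda d: s_shuffle(d, n), size) - 1):
        deck = s_shuffle(deck, n)
    return out_shuffle(deck)
```

For example, `swap_adjacent(22)` leaves everything fixed except positions 22 and 23. The reason it works: S(n) is P followed by the transposition of positions n and n+1, so doing S(n) a total of O(S(n))-1 times is the same as undoing one S(n), i.e. un-swapping and then un-riffling; the final P cancels the un-riffle, leaving only the swap.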
It's not very efficient. Swapping 22 with 23 takes 120 shuffles, as does swapping 28 and 29. There are two pairs that take 72 shuffles to swap, two that take 56, four that take 40, and the rest take 16.
S(n), Px7, S(n+1), Px7, S(n), Px7 swaps n/2 with n/2+1 if n is even (seven perfect shuffles undo one, since P has order 8 on 52 cards). If n is odd, it swaps the pair 26 positions past the pair that doing it for n-1 would have swapped. That gives a way to swap any adjacent pair in 24 shuffles, improving the prior bubble sort to 10 pairs that take 24 shuffles to swap, with the rest taking 16.
S(n), Px7 swaps two cards, but they are not adjacent. If n is even, it swaps n/2 and n/2+26. If n is odd, it swaps (n+1)/2 and (n+1)/2+25. There's probably a way to reach an arbitrary permutation using that which is more efficient than what I've got in the prior paragraph.
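The S(n), Px7 claims are easy to check by direct simulation; a sketch (assuming 0-indexed positions, with S(n) modeled as a perfect out shuffle whose output positions n and n+1 are swapped):

```python
def out_shuffle(deck):
    # Perfect out riffle P: split in half, interleave, top card stays on top.
    half = len(deck) // 2
    a, b = deck[:half], deck[half:]
    return [card for pair in zip(a, b) for card in pair]

def s_then_seven_p(n, size=52):
    # One S(n) -- an out shuffle with drops n and n+1 swapped --
    # followed by seven perfect out shuffles (Px7).
    deck = out_shuffle(list(range(size)))
    deck[n], deck[n + 1] = deck[n + 1], deck[n]
    for _ in range(7):
        deck = out_shuffle(deck)
    return deck
```

With these conventions, `s_then_seven_p(10)` returns the identity deck with positions 5 and 31 exchanged (n even: n/2 and n/2+26), and `s_then_seven_p(11)` exchanges positions 6 and 31 (n odd: (n+1)/2 and (n+1)/2+25), matching the formulas above.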
I recalled watching an old video of a casino guy talking about card shuffling. I think he was talking about randomization of the cards to catch card sharks, or something.
He even had a computer program simulate the hand shuffling. I’m not sure if it really did simulate it, but the video graphics showed it was hand shuffling.
So one day, I thought I could simulate it myself.
So I wrote a program to just do the manual shuffling in logic. Like split it roughly in half using a random value. Then do the merge shuffle, again using random card counts per level of the merge. Then do the riffle shuffle, again using random counts. Then finally do the random cut. All the steps that you would use to shuffle a deck of cards when playing at home, or in the casino.
I found that after a few shuffles, the deck seemed sufficiently randomized. My untrained eye couldn’t find any patterns.
My program didn’t have the whiz-bang of animated graphics, but it solved the problem at hand, which was to generate a random sequence of events and to replay it again to study it.
Not sure what I did with the code though. I must’ve lost the code somewhere.
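Since the original code is lost, here's a hedged sketch of what that kind of hand-shuffle simulation could look like (the cut offsets, drop sizes of 1-3 cards, and seven rounds are my assumptions, not the original program's parameters):

```python
import random

def riffle(deck, rng):
    # Cut near the middle, then drop 1-3 cards at a time from
    # alternating halves -- a rough model of a hand riffle.
    cut = len(deck) // 2 + rng.randint(-3, 3)
    a, b = deck[:cut], deck[cut:]
    out = []
    while a or b:
        for pile in (a, b):
            k = rng.randint(1, 3)
            out.extend(pile[:k])
            del pile[:k]
    return out

def hand_shuffle(deck, rounds=7, seed=0):
    # A fixed seed makes the whole sequence replayable for later study.
    rng = random.Random(seed)
    for _ in range(rounds):
        deck = riffle(deck, rng)
        cut = rng.randrange(1, len(deck))
        deck = deck[cut:] + deck[:cut]  # random cut after each riffle
    return deck
```

Replaying with the same seed reproduces the exact same sequence of shuffles, which is the "replay it again to study it" property described above.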
This has a really interesting point about what's colloquially called "stack shuffling" (at least in the MTG community)
> k piles of length n, we observed that there's a similar special case when the size of the piles is a power of the number of piles.
The "power case" here is an exponentially smaller shuffle group, resulting in an exponentially smaller set of possible shuffles. Really interesting and counterintuitive to me, when I think about decks of cards.
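A pile ("stack") shuffle is a single deterministic permutation, so a small sketch makes it easy to experiment with the group it generates (my assumed conventions: cards are dealt one at a time into k piles, so each pile ends up reversed, and the piles are then stacked in order):

```python
def pile_shuffle(deck, k):
    # Deal cards one at a time into k piles (each new card lands on top,
    # reversing the pile), then stack the piles in order.
    piles = [deck[i::k][::-1] for i in range(k)]
    return [card for pile in piles for card in pile]

def pile_order(deck_size, k):
    # How many pile shuffles until the deck returns to its start order.
    start = list(range(deck_size))
    deck, count = pile_shuffle(start, k), 1
    while deck != start:
        deck, count = pile_shuffle(deck, k), count + 1
    return count
```

For example, `pile_shuffle(list(range(6)), 3)` gives `[3, 0, 4, 1, 5, 2]`. Comparing `pile_order` across deck sizes is one way to poke at the power-of-k special case numerically.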
This topic in particular is very dear to my heart, since I learnt the shuffle as a kid, and it was wonderful to rediscover it in a new light, with some math background, as an adult.
For those interested, the shuffle is also called the "faro" shuffle and was likely invented to cheat at a game of the same name.
There's a wonderful book "Magic Tricks, Card Shuffling and Dynamic Computer Memories" that describes the mathematics of perfect shuffles, highly recommended!
I was wondering recently about how you might solve shuffling with constraints - no two cards of the same suit next to each other, 3s at least 4 spaces apart, etc. My first thought was you'd just maintain a set of valid candidates for the next random card, with backtracking if you get stuck, but there must be better ways.
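The candidate-set-with-backtracking idea can be sketched like this (a hypothetical example constraint: no two adjacent cards of the same suit, with cards as (rank, suit) pairs; note this simple version makes no claim of sampling uniformly over the valid orderings):

```python
import random

def constrained_shuffle(cards, ok, rng=random):
    # Depth-first search over orderings: at each position, try the
    # remaining cards in random order, backtracking on dead ends.
    def extend(prefix, remaining):
        if not remaining:
            return prefix
        candidates = list(remaining)
        rng.shuffle(candidates)
        for card in candidates:
            if ok(prefix, card):
                rest = list(remaining)
                rest.remove(card)
                result = extend(prefix + [card], rest)
                if result is not None:
                    return result
        return None  # dead end -- caller backtracks
    return extend([], list(cards))

# Hypothetical constraint: no two cards of the same suit adjacent.
def no_adjacent_suits(prefix, card):
    return not prefix or prefix[-1][1] != card[1]
```

Usage: `constrained_shuffle([(r, s) for s in "SHDC" for r in range(13)], no_adjacent_suits)` returns a full valid ordering, since the exhaustive search is complete whenever any valid ordering exists.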
I had a similar problem, several constraints (dozen or so, iirc), data set of a few thousand items.
I thought about finding an elegant solution, but in the end decided that just doing a (pseudo-)random shuffle of the set and then running through the resulting ordered list, checking each item against all the constraints, was good enough (i.e., a random sort). As soon as it failed a hard constraint, or failed more than x soft constraints, toss the list and do another random sort.
Ugly, but it worked. I chose this solution because 1) I knew how big the data set would be, and I knew it would not change size enough to matter; 2) the other, very jr developer could understand the method; 3) the constraints, while semi-complex, were simple to check; 4) it would be needed once or twice a year, could run completely off-line, and could be started well in advance of when it was needed; 5) and finally, computers are fast & cheap, while my time is finite and valuable.
As a result, the decision was that "brute force" running on any available workstation for a weekend was an acceptable solution. In its lifetime, I think the longest it ever took to find an acceptable solution was a few hundred thousand sorts, finished in a few minutes.
It's now been retired as the program it supported went away.
I wish I'd had the time to explore better, more proper solutions, but at the time, quick and dirty and done was more important.
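The toss-and-retry loop described above, in miniature (the constraint callbacks and the soft-failure budget are stand-ins for whatever the real checks were):

```python
import random

def rejection_sample(items, hard, soft, max_soft_failures, rng=random):
    # Shuffle, then walk the ordering checking each position; toss the
    # whole list and reshuffle as soon as it disqualifies itself.
    attempts = 0
    while True:
        attempts += 1
        order = list(items)
        rng.shuffle(order)
        soft_failures = 0
        valid = True
        for i in range(len(order)):
            if not hard(order, i):
                valid = False  # hard constraint failed: toss the list
                break
            if not soft(order, i):
                soft_failures += 1
                if soft_failures > max_soft_failures:
                    valid = False  # too many soft failures: toss it
                    break
        if valid:
            return order, attempts
```

This only terminates quickly when valid orderings are a non-tiny fraction of all orderings, which matches the observation that a few hundred thousand sorts sufficed in the worst case.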
For arbitrary constraints it's problematic. Suppose you have a programming language capable of expressing a filter `(shuffle_result) => valid`. On average across all possible such functions you'll need at least `factorial(n_cards)` bits just to express the filters.
Of course, many special cases you care about can be expressed much more simply. E.g. with the adjacent-suits example you can deterministically generate such decks pretty easily by appropriately composing uniform integer partition sampling, uniform combination sampling, and uniform permutation sampling. For a standard 52-card deck you can basically lay out suit#1, then suit#2, do a little finicky magic to make sure suit#3 is placed uniformly in such a way that a solution exists (at least one always will), and then fill the remaining slots with suit#4.
"What could a mathematician do with a perfect shuffle?"
Perfect? That's a word which should not be deployed lightly!
By definition a shuffle can never be "perfect": a card shuffle strives to mix up the cards, but it can never finish that job and declare "these cards are optimally shuffled."
Oh so now we have a new definition for perfect: "Praeger, however, is interested in just those permutations of the cards that you can get by using perfect shuffles, when a pack of cards is exactly split in half and the cards from each half perfectly interleaved one by one."
I have a different idea of what perfect means and it is not that. I suggest the word "precise" or "exact" when describing fancy shuffling schemes.
Ricky Jay talking about his childhood friend Persi: https://believermag.com/an-interview-with-ricky-jay/
Persi on Numberphile: https://www.youtube.com/watch?v=AxJubaijQbI
A great lecture: https://www.youtube.com/watch?v=xit5LDwJVck