Why does Sattolo's algorithm produce a permutation with exactly one cycle? (danluu.com)
156 points by darwhy on Aug 9, 2017 | 18 comments



If you enjoy the discussion of permutations and cycles in this article, there's a puzzle you might also like:

Prisoner A is brought into the warden’s room and shown a faceup deck of 52 cards, lined up in a row in arbitrary order. She is required to interchange two cards, after which she leaves the room. The cards are then turned face down, in place. Prisoner B is brought into the room. The warden thinks of a card, and then tells it to B (for example, “the three of clubs”).

Prisoner B then turns over 26 cards, one at a time. If the named card is among those turned over, the prisoners are freed immediately. Find a strategy that guarantees that the prisoners succeed. (If they fail, they must spend the rest of their lives in prison.)

Needless to say: The two prisoners have the game described to them the day before and are allowed to have a strategy session; absolutely no communication between them is allowed on the day of the game. Notice that at no time does Prisoner A know the chosen card.

(Taken from https://www.reddit.com/r/math/comments/44h3tu/interesting_pu..., but quoting since that link has the answer in the top comment.)


That one is genius!


This is a nice post and I hadn't heard of Sattolo's algorithm before. The proof is a bit long though. The reference linked from Wikipedia: http://algo.inria.fr/seminars/summary/Wilson2004b.pdf proves the correctness of Sattolo's algorithm in three sentences. I found it fairly easy to understand, while I didn't manage to read through the linked post in detail to get the same level of understanding. Let me try to explain the proof I understood, without assuming mathematical background, but instead introducing it. (I'll use 0-based indices as in the post, instead of 1-based indices that mathematicians would use.)

# What is a permutation?

There are (at least) two ways to think of (or define) a permutation:

1. A list (an ordering): a specific order for writing out elements. For example, a permutation of the 10 elements [0, 1, 2, 3, …, 9] means those 10 elements written out in a particular order: one particular permutation of 0123456789 is 7851209463. In a computer, we can represent it by an array:

       i  0 1 2 3 4 5 6 7 8 9
    a[i]  7 8 5 1 2 0 9 4 6 3
2. A reordering. For example, the above permutation can be viewed as "sending" 0 to 7, 1 to 8, and in general i to a[i] for each i.

Instead of describing this reordering by writing down 10 pairs 0→7, 1→8, …, 7→4, 8→6, 9→3, we can save some space by "following" each element until we come back to the beginning: the above becomes a bunch of "cycles":

- 0→7→4→2→5⟲ (as 5→0 again) (note we could also write this as 4→2→5→0→7⟲ etc., only the cyclic order matters)

- 1→8→6→9→3⟲ (as 3→1 again)

You can think of cycles the way you think of circular linked lists. This particular permutation we picked happened to have two cycles.
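
In code, you can recover the cycles by just following i → a[i] until you get back to where you started. Here's a rough sketch (the helper name cycles is mine, not from the post):

    def cycles(a):
        # Decompose the permutation i -> a[i] into its cycles.
        seen, result = set(), []
        for start in range(len(a)):
            if start in seen:
                continue
            cycle, i = [], start
            while i not in seen:
                seen.add(i)
                cycle.append(i)
                i = a[i]
            result.append(cycle)
        return result

    print(cycles([7, 8, 5, 1, 2, 0, 9, 4, 6, 3]))
    # [[0, 7, 4, 2, 5], [1, 8, 6, 9, 3]]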

# What is a cyclic permutation?

A cyclic permutation is a permutation that has only one cycle (rather than two cycles as in the above, or even more cycles). For example, consider the permutation 8302741956:

       i    0 1 2 3 4 5 6 7 8 9
    a[i]    8 3 0 2 7 4 1 9 5 6
If we follow each element as we did above, we get 0→8→5→4→7→9→6→1→3→2⟲ where all 10 elements are in a single cycle. This is a cyclic permutation.
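
Running the cycles sketch from above on this one gives a single list:

    print(cycles([8, 3, 0, 2, 7, 4, 1, 9, 5, 6]))
    # [[0, 8, 5, 4, 7, 9, 6, 1, 3, 2]]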

Our goal is to generate a random cyclic permutation (and in fact uniformly at random from among all cyclic permutations).

# Sattolo's algorithm

Note that in a cyclic permutation of [0, ..., n-1] (in our example above, n=10), for the highest index n-1, there will be some smaller j such that a[j]=n-1 (in the example above, a[7]=9). Now if we swap the elements at positions n-1 and j (which in the example above is:

    Before                               After
       i    0 1 2 3 4 5 6 7 8 9             i    0 1 2 3 4 5 6 7 8 9
    a[i]    8 3 0 2 7 4 1 9 5 6          a[i]    8 3 0 2 7 4 1 6 5 9
where we swapped a[7]=9 and a[9]=6 to make a[7]=6 and a[9]=9), then in general we get a[n-1]=n-1, and a[0]…a[n-2] form a cyclic permutation of [0…n-2]. In the above example, in the "after" case, if we ignore i=9 and consider only positions 0 to 8, we have the cycle 0→8→5→4→7→6→1→3→2⟲. (This is our original cycle 0→8→5→4→7→9→6→1→3→2⟲ with 9 "removed", as we'd do when deleting an item from a linked list.)

This holds in reverse too: if we had started with the cyclic permutation of [0, …, 8] that is in the "after" column above, added a[9]=9, and swapped a[9]=9 with a "random" element a[7]=6, we'd get the cyclic permutation of [0, … 9] that is the "before" column.

In general, you can convince yourself that there is a unique way of getting any cyclic permutation on [0, …, n-1] by starting with a cyclic permutation on [0, …, n-2], considering a[n-1]=n-1, picking a particular index j in 0 ≤ j ≤ n-2, and swapping a[n-1] and a[j].
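
If you'd rather see this by brute force than convince yourself abstractly, here's a small check (reusing the cycles sketch from earlier) that for n = 5 the construction hits every cyclic permutation of [0, …, n-1] exactly once:

    from itertools import permutations

    n = 5
    produced = []
    for p in permutations(range(n - 1)):
        if len(cycles(p)) != 1:              # keep only cyclic permutations of [0..n-2]
            continue
        for j in range(n - 1):               # each choice of j gives one result
            a = list(p) + [n - 1]            # append a[n-1] = n-1 ...
            a[j], a[n - 1] = a[n - 1], a[j]  # ... and swap it with a[j]
            produced.append(tuple(a))

    targets = [p for p in permutations(range(n)) if len(cycles(p)) == 1]
    assert sorted(produced) == sorted(targets)  # every cyclic permutation of [0..n-1], each exactly once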

This gives the following algorithm, which we've already proved is correct (or derived, rather):

    import random

    def random_cycle(n):
        a = [i for i in range(n)]        # For all i from 0 to n-1 (inclusive), set a[i] = i
        for i in range(1, n):            # For each i in 1 to n-1 (inclusive),
            j = random.randrange(0, i)   # Pick j to be a random index in the range 0 to i-1, inclusive
            a[i], a[j] = a[j], a[i]      # Swap a[i] and a[j]
        return a
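
As a quick sanity check (again using the cycles sketch from above) that this really does always produce a single cycle:

    for n in range(2, 8):
        for _ in range(100):
            assert len(cycles(random_cycle(n))) == 1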
In the post linked above, you swap with a random element that is "ahead", instead of one that is "behind"; also you start with a list of length n and shuffle it according to the randomly generated cyclic permutation of [0…(n-1)] instead of simply generating the permutation. From the post:

    def sattolo(a):
        n = len(a)
        for i in range(n - 1):
            j = random.randrange(i+1, n)  # i+1 instead of i
            a[i], a[j] = a[j], a[i]
This is slightly different, but the proof is similar: in fact this is the algorithm (except going downwards) that is proved correct in the linked paper. (And even if it is not obvious to you that the two algorithms are equivalent, you have an algorithm that generates a random cycle and is just as easy to code!)


I've used inside-out Fisher-Yates with and without Sattolo's variation many times in testing, especially for testing various branch instructions in a bytecode interpreter (forward, backward, short, long, etc.). I think I first read about it in Knuth, but you don't need an algorithm bible to implement it. Very simple, very handy.


I'm having trouble appreciating the point of it. My understanding is: if all we need is to visit each and every element of an array in a pseudo-random order, we can just shuffle the array with Fisher-Yates, then iterate over the newly shuffled array from 0 to n-1, and we're done, right? So what's the point of looking at it as a graph and the whole cycle thing?


Imagine you have a bunch of stores in your city. Each store has a piece of paper in a box that has the name and address of that store. You want to shuffle the pieces of paper in the stores in such a way that if you start at any random store, read the piece of paper there, go to the listed store, then read that store's paper, and continue likewise, you'll reach all stores.

Shuffling like this isn't as easy as it seems. If you give a store a paper with its own address, then a visitor to that store will be stuck there forever.

Of course it's easy to check that you're not giving a store its own address when shuffling, but what about store A pointing to B, B pointing to C, and C pointing back to A? Oops. Anyone who reaches these stores will never go on to the rest of the stores.

Sattolo's shuffle handles this elegantly.

Caring about the number of cycles only matters if the values of your array are actually indices back into your array.

If you were able to pick up every store in town, randomly shuffle them, and place them into a nice straight line down one side of a road, then any old shuffling algorithm would work to get you a "random path" through the stores.

(One huge practical difference with Sattolo's is that no element ends up in its original place after the shuffle.

This can be good, or very bad, depending on what you are using it for.)


Yes, exactly, I use Sattolo for a random path: labels connected by goto statements, functions connected by calls, etc.


"If you were able to pickup every store in town, randomly shuffle them, and place them into a nice straight line down one side of a road, then any old shuffling algorithm would work to get you a "random path" through the stores."

Isn't this always the case in software? Any set of entities represented in software can be hashed into a "straight line", in other words, you can assign a unique integer id to each one. And this works with your example of the street addresses of stores too -- just hash to a 32-bit value and sort them.

Certainly, if you can only see the paper in the box in front of you, the Sattolo solution makes sense. And it doesn't use any additional memory apart from one pointer per item (one piece of paper per store, in situ). There must be a more compelling application, though.

Incidentally, it's possible to use a maximal LFSR to generate a permutation (with only one cycle) without using extra storage and without using a shuffle. For cycling through very large numbers of elements this can be a great shortcut, if it's OK for the permutation to be the same each time. (Maybe could also use s-boxes to get a family of permutations)
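
For concreteness, here's a sketch of that LFSR idea in Python, using a 16-bit Galois LFSR with taps I believe to be maximal (x^16 + x^14 + x^13 + x^11 + 1, i.e. mask 0xB400): it walks through every nonzero 16-bit state exactly once, so it gives you a fixed single-cycle visiting order over 1..65535 with no storage beyond the current state.

    def lfsr_cycle(seed=1):
        # Galois LFSR; 0xB400 encodes the taps of x^16 + x^14 + x^13 + x^11 + 1,
        # which I believe is a maximal polynomial, so every nonzero 16-bit state
        # is visited exactly once before the sequence repeats.
        state = seed
        while True:
            yield state
            state = (state >> 1) ^ (0xB400 if state & 1 else 0)
            if state == seed:
                return

    # Same "permutation" every run, which may or may not be what you want.
    assert len(set(lfsr_cycle())) == 2**16 - 1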


You would probably enjoy checking out http://pcg-random.org/; it's got all that (tl;dr: a powerful PRNG built from a very clever combination of simple permutation families) and a bunch of other useful properties. Also, no glossing over what exactly it does not (provably) do. It's not claiming to be cryptographically secure, for instance.


I think that is pretty much the alternative approach Dan spoke of at the end, but without saving the cycle in a data structure (which is worse for testing):

> In practice, you probably don’t “need” to know how these algorithms work because the standard library for most modern languages will have some way of producing a random shuffle. And if you have a function that will give you a shuffle, you can produce a permutation with exactly one cycle.

    def cycle_frm_shuffle(shuffled, cycle, n):
        for i in range(n - 1):
            key = shuffled[i]
            value = shuffled[i + 1]
            cycle[key] = value
        cycle[shuffled[n - 1]] = shuffled[0]   # close the loop back to the first element
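
For example (just to illustrate how it plugs into a library shuffle):

    import random

    n = 6
    shuffled = list(range(n))
    random.shuffle(shuffled)            # any uniform shuffle will do
    cycle = [0] * n
    cycle_frm_shuffle(shuffled, cycle, n)
    # cycle[i] is now the successor of i in a single n-element cycle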

This method has O(n) more memory and CPU overhead than Sattolo's.

The proof is pretty simple:

Each assignment inside the loop is uniformly distributed, but the last assignment is determined strictly by the first n - 1, giving us (n - 1)! permutations.


One important catch, however, is using a proper (P)RNG. Not all of them give you perfectly uniform integers if you scale and round their output into arbitrary ranges of 2..n.

Since you usually get a stream of random bits, AFAIK the only fair way is to mask off enough bits to get a uniform number 0 <= R < (the next power of two), then toss the result and reroll if it's outside the desired range.
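
Something like this sketch of the mask-and-reroll idea (assuming the range size k is at least 2):

    import random

    def uniform_below(k):
        # Mask down to the next power of two >= k, then reroll anything >= k,
        # so every value in 0..k-1 is equally likely. Assumes k >= 2.
        bits = (k - 1).bit_length()
        while True:
            r = random.getrandbits(bits)
            if r < k:
                return r

(As far as I know, Python's random.randrange already does the equivalent internally; the biased thing to avoid is the naive rand() % n pattern.)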


On creating a single-cycle permutation from a shuffle:

Take 0, 1, 2, 3, 4, 5 and shuffle it: 4, 3, 0, 5, 1, 2

Now you can read out a permutation:

4 → 3

3 → 0

0 → 5

5 → 1

1 → 2

2 → 4

It will only have one cycle.


Edit: I misinterpreted the parent; see below.

If by 'shuffle' you mean something like Fisher-Yates, this can't work all the time, since that can produce, e.g., 0, 1, 2, 3, 4, 5 -> 1, 0, 4, 5, 3, 2, which has two cycles. If by 'shuffle' you mean Fisher-Yates but never leaving an element in its original position (i.e., generating a derangement), then this doesn't suffice either, since that same example satisfies the condition and yet still has two cycles. You need Sattolo's algorithm (or something like it) to guarantee a single cycle.


That's not what the parent meant. You shuffle the array with e.g. Fisher-Yates and then create a permutation from the shuffled array. That always gives a permutation that has one cycle. What you did was create a permutation from the original and the shuffled variant which has the problems you describe...


Thanks for clarifying. I was on my cell phone and was very brief.


Ah, you're right! My mistake.


Some permutations have one cycle, others have more. You are "adjusting" your permutation from 4 3 0 5 1 2 to 3 0 5 1 2 4 to eliminate a cycle and obtain a suitable result. What are the rules to do that and ensure the single-cycle permutations are produced with uniform probability if the original permutation is produced with uniform probability? I.e. which set of n permutations is mapped to each single-cycle permutation? It's going to be a much more complex algorithm than the optimal in-place shuffles discussed in the article.


No... it's pretty clear that there are exactly n ways to get each particular single-cycle permutation, given by the n rotations of the Fisher-Yates shuffle. For example, if you rotate the parent's shuffle by one:

2, 4, 3, 0, 5, 1

you get the same single-cycle permutation.

I think this might actually lead to an easier proof than the one in the article.
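
A brute-force check of the counting claim for small n (a self-contained sketch; successor_map is just my name for "read the shuffled order out as each element pointing to the next, wrapping around"):

    import math
    from collections import Counter
    from itertools import permutations

    def successor_map(order):
        # Read a shuffled order out as a permutation: each element points to
        # the next one in the list, and the last wraps around to the first.
        n = len(order)
        succ = [0] * n
        for i in range(n):
            succ[order[i]] = order[(i + 1) % n]
        return tuple(succ)

    n = 5
    counts = Counter(successor_map(p) for p in permutations(range(n)))
    assert all(c == n for c in counts.values())    # each single-cycle permutation shows up exactly n times
    assert len(counts) == math.factorial(n - 1)    # and there are (n-1)! of them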



