The person who developed this trick is obviously familiar with the mathematical properties of numbers. But it is the wrong approach from many perspectives, including its impact on performance, the potential for under/overflow, and the lack of readability.
The author explicitly states that it should never be used due to readability. I'm interested in your point about performance, though - as a layman in this area, I would have thought that the additional operations were cheaper than whatever the overhead of a variable is, even for a primitive value type. Is that definitely not the case?
A modern CPU makes heavy use of out-of-order execution to do things quickly. Having the result of one operation depend on another makes it harder for the CPU to do this.
If you have this:
a = a + b
b = a - b
a = a - b
// use a for something
c = a + 7
then the final value of 'a' depends on both 'a' and 'b', so execution of the final line can't happen until the original 'a' and 'b' have both been fetched from memory/calculated/whatever.
However, if you have this:
temp = b
b = a
a = temp
// use a for something
c = a + 7
then the final value of 'a' depends only on 'b', so the final line can be executed even if the original 'a' isn't yet available.
In most cases swapping two variables with a temporary variable will be either zero-overhead -- because you've just changed the names you're referring to those variables by, and the compiler knows it -- or very, very low-overhead. You're not actually saving any memory with the fancy tricks.
> under/overflow
Assuming the two variables are unsigned ints, the algorithm works even with under/overflow: unsigned arithmetic in C is modular (mod 2^N), so addition and subtraction are exact inverses regardless of wraparound. If a = a + b overflows, then b = a - b is guaranteed to underflow by the same amount, returning the original a. The overflowed bit is irrelevant here.