Exactly. Readability for other engineers plus compiler optimizations usually beats bit hacking that may or may not help with speed and definitely doesn't help with understanding. (In this case it's maybe simple enough, but still.)
We live in an era where you can get a billion bytes for a dollar. We also live in an era where hand-manipulating bits to improve performance is either 1) a micro-optimization, 2) extremely fringe work, or 3) prohibitively expensive to be worth it.
Don't get me wrong. I find most of these "bit hacks" to be quite simple individually. But each one adds another layer of comprehension the reader has to work through.
In most cases you won't see these alone. They will be surrounded by tens of thousands of other lines of code, and where there is one "hack" there are usually others. At some point you end up with a bug somewhere because one of these smart little hacks is failing, and you don't know which one.
This creates unnecessary maintenance problems down the road. So use them, but only when absolutely necessary, not because you think the code will run faster.
It's also important to note that many things people think make the code faster, don't. Take vector vs. linked list access: intuitively the linked list seems much cheaper for inserting an element, but in almost all cases the vector is faster, because of cache locality.
So what I'm saying is: do the things that probably make the code faster, like using a vector, when they don't cloud understanding. If they do, leave them out until you have a reason to use them. The only good reason to use micro-optimizations like these is that you measured execution time and know there is a performance problem at this exact place in the code which your "hack" could resolve.
No measurement -> no optimization. Because the performance problems are almost always in different places than the average developer assumes.
And when you do keep one, add a comment explaining what you did as if to a five-year-old.