
There are relatively few programming jobs that actually require bit manipulation at that level. I've been deep into bit manipulation twice in my career (well over 10 years now): network programming for a real-time system, and game development.

For the machine learning and infrastructure work I've done over the last 5 years or so, I haven't had to write even a single bitmask, shift, etc. directly. Of course we use these things liberally (e.g. hash functions), but it's rarely necessary to write that code yourself, even in very sophisticated applications.




Huh - I do ML and a lot of bit twiddling with trained feature detector outputs: intake goes into a stage, feature detectors light up, and feature combos get fed as input to the subsequent stage via lots of bit banging. Can't afford for it to be slow in the middle of that pipeline!


That happens. Like bit twiddling in general computing, though, it's not necessary in many applications.


Programming used to be mostly bit twiddling. The universe has expanded considerably, but if you're still in that original corner it's really hard not to see it as a basic skill.

JavaScript only has doubles, so you can't even bit twiddle if you wanted to - C programmers can't even imagine.


JavaScript stores integers in two's complement.


> JavaScript numbers are always stored as double precision floating point numbers, following the international IEEE 754 standard.

It doesn't have integers to store.

https://www.w3schools.com/js/js_numbers.asp


That quote describes a good mental model for someone learning JavaScript, but not necessarily the physical truth.

JavaScript's numbers can do everything doubles can, but if the compiler can prove that all values a particular variable could possibly take on will fit in a smaller type, then it can use that type. So a variable really can end up stored as an integer - or never stored anywhere at all, for that matter, like if a loop gets unrolled and the counter is optimized out.

The bitwise operators behave as if integers are 32-bit two's complement. The typed arrays are also two's complement. A smart developer can write code that a smart compiler will compile to bitwise operations on actual physical 32-bit integers.
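A quick sketch of that "as if" behaviour (illustrative, any JS console):

  // Bitwise operators coerce through ToInt32: 32-bit, two's complement
  console.log(4294967295 | 0);   // -1 (2^32 - 1 wraps to signed 32-bit)
  console.log(1 << 31);          // -2147483648 (sign bit set)

  // Typed arrays hold real 32-bit integers, also two's complement
  const a = new Int32Array(1);
  a[0] = 2 ** 31;                // overflows the signed range...
  console.log(a[0]);             // -2147483648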

https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...

https://v8project.blogspot.com/2017/09/elements-kinds-in-v8....


That's an extremely dangerous assumption because of the implicit rounding and precision loss inherent in the underlying double type. An interpreter "may" use an integer under the hood, but it still has to behave as if it were an IEEE double, and that includes all the nasty things you don't want in integer math.

Most C programmers would be too scared to perform bitwise operations on doubles and JS doesn't add anything that makes it safer.
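A concrete form of that worry (values picked purely for illustration):

  // A 32-bit x 32-bit product can exceed 2^53, so the double result is
  // already rounded before any bitwise operator sees it
  const a = 0x12345679;
  const b = 0x9abcdef1;
  console.log((a * b) | 0);     // low 32 bits of the *rounded* double product
  console.log(Math.imul(a, b)); // exact low 32 bits - not the same value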


Can you give an example of math that would be exact with 32-bit integers, but is inexact in JavaScript? Floating-point math on values that happen to be integers is exact, if an exact answer exists and fits within the mantissa.
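For reference, the mantissa boundary I mean (quick console check):

  // Integers are exact as doubles up to 2^53; past that, consecutive
  // integers stop being representable
  console.log(Number.MAX_SAFE_INTEGER);   // 9007199254740991, i.e. 2^53 - 1
  console.log(2 ** 53 === 2 ** 53 + 1);   // true: the +1 is lost to rounding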


I think we're approaching this from philosophically different directions. On your end, if the programmer controls everything correctly then it's safe, because doubles have a 53-bit significand.

The problem happens when you treat numbers one way and they are treated differently elsewhere. For example, -0 undergoing a bitwise sign check (val & (1U<<31)) will appear positive when it is negative by definition.

The cast to an integer prior to the bitwise operator loses information. You can use the operators safely if you know there is no information to be lost, but it is not type-checked like in C. I will say at least JavaScript guarantees two's complement. You never know when your favourite compiler will decide to break your code in the name of a faster benchmark - correctness of existing programs be damned. "Implementation defined" used to mean "do something sane", but I digress.
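Quick sketch of the -0 case in JS (illustrative):

  // -0 is a real double value, but ToInt32 maps it to +0,
  // so a bitwise sign check can't see it
  const z = -0;
  console.log(1 / z);            // -Infinity: the sign bit is really there
  console.log(z & (1 << 31));    // 0: the sign check reports "positive"
  console.log(Object.is(z | 0, -0)); // false: the sign is gone after coercion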


The trick is to let your "double" only take on values that are also valid 32-bit integers, like with lots of gratuitous (x|0) and stuff. That excludes your (indeed problematic) -0. JavaScript's semantics give you enough to implement 32-bit integer math this way pretty easily. The speed may vary dramatically with small changes to the compiler, but correctness won't.
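Rough sketch of that style (asm.js-ish; the helper name is just illustrative):

  // Keep every value a valid int32 by forcing ToInt32 with |0.
  // Math.imul gives the exact low 32 bits of a product, which a plain
  // double multiply can't guarantee once operands get large.
  function mulAdd32(a, b, c) {
    a = a | 0;
    b = b | 0;
    c = c | 0;
    return (Math.imul(a, b) + c) | 0;
  }

  mulAdd32(0x12345679, 0x7abcdef1, 7); // every intermediate stays int32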


I still stick by my original point that:

((int)dblval) & flag

would require serious justification in a C code review. In JS it's all you have. Under those restrictions I'd use it as rarely as possible. But yes, you can use it correctly if you try.


That's certainly true. The only reason to play these games is that JavaScript gives you no better option.



