Come on, really? "Knowing the very basics of binary numbers" isn't some obscure super-technical skill when it comes to programming, it's fairly fundamental. If they don't know that, then how can they possibly know how to use bit-masks, for instance?
Like, I get it when people say "most of the stuff you learn getting a CS degree has very limited usefulness day to day". Sure, you're not going to be implementing red-black trees or skip lists in your day-to-day, but you can't possibly claim that having a basic idea of how the bitwise operators work (or the details on how numbers are represented in binary) is in the same category.
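To be concrete about what I mean by a bit-mask, it's just something like this (flag names invented for the sake of the example):

    // Hypothetical permission flags packed into one integer.
    const READ  = 1 << 0;                    // 0b001
    const WRITE = 1 << 1;                    // 0b010
    const EXEC  = 1 << 2;                    // 0b100

    let perms = READ | WRITE;                // set two flags
    const canWrite = (perms & WRITE) !== 0;  // test a flag -> true
    perms &= ~EXEC;                          // clear a flag (a no-op here)

That's roughly the level of familiarity I'm talking about.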
>how can they possibly know how to use bit-masks, for instance?
Given that most programming jobs are web development jobs (I think, I don't have numbers to back that up), what makes using bit-masks important? That is, why would someone working on a web application need to know how to use them?
I might be asking a bit much here, but I generally expect someone who's developing something that's going to run on the Internet to have some understanding of how the Internet works. Like... you don't need to understand the intricacies of BGP, but some idea of how a request from your web browser leaves your home network and comes back is strongly appreciated.
My biggest reasoning for desiring that skill is debugging. If you build a system and it doesn't work right for whatever reason, it's good to understand why that might be. Also, having some appreciation for that stuff really helps you make good decisions (but why can't I make 200 HTTP requests while loading my page?!)
There are relatively few programming jobs that actually require bit manipulation at that level. I've been deep into bit manipulation twice in my career (well more than 10 years now): network programming for a real time system and game development.
For the machine learning and infrastructure work I've done the last 5 years or so I haven't had to do even a single bitmask, shift, etc. directly. Of course we use these things liberally (e.g. hash functions) but it's hardly necessary to write the code itself very often, even in very sophisticated applications.
Huh - I do ML and a lot of bit twiddling with trained feature detector outputs - intake goes into a stage, feature detectors light up, and feature combos get fed to subsequent input via lots of bit banging - can't afford for it to be slow in the middle of that pipeline!
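Roughly the shape of it, with made-up names (just a sketch): pack the detector outputs into a mask so a combo check is a single AND.

    // Sketch only: pack boolean detector outputs into one 32-bit mask.
    function packFeatures(fired) {           // fired: array of booleans, length <= 32
      let mask = 0;
      for (let i = 0; i < fired.length; i++) {
        if (fired[i]) mask |= (1 << i);
      }
      return mask;
    }

    const EDGE = 1 << 3, CORNER = 1 << 7;    // hypothetical detector bit positions
    const combo = EDGE | CORNER;
    const stageOutputs = [/* booleans from the previous stage */];
    const hit = (packFeatures(stageOutputs) & combo) === combo;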
Programming used to be mostly bit twiddling. The universe has expanded considerably, but if you're still in that original corner it's really hard not to see it as a basic skill.
JavaScript only has doubles, so you can't even bit twiddle if you wanted to - C programmers can't even imagine.
That quote describes a good mental model for someone learning JavaScript, but not necessarily the physical truth.
JavaScript's numbers can do everything doubles can; but if the compiler can prove that all values that a particular variable could possibly take on will fit in a smaller type, then it can use that. So a variable really can end up stored as an integer--or never stored anywhere for that matter, like if a loop gets unrolled and the counter is optimized out.
The bitwise operators behave as if integers are 32-bit two's complement. The typed arrays are also two's complement. A smart developer can write code that a smart compiler will compile to bitwise operations on actual physical 32-bit integers.
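For example (this is just the language semantics, not anything engine-specific):

    // Bitwise operators truncate their operands to 32-bit two's complement:
    console.log(0xFFFFFFFF | 0);     // -1: the all-ones pattern reads back as -1
    console.log((1 << 31) >> 1);     // -1073741824: >> is an arithmetic (sign-preserving) shift

    // Typed arrays use the same representation:
    const a = new Int32Array(1);
    a[0] = 0x80000000;               // store the bit pattern 1000...0
    console.log(a[0]);               // -2147483648, i.e. INT32_MIN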
That's an extremely dangerous assumption because of the implicit rounding and precision loss inherent in the underlying double type. An interpreter "may" use an integer under the hood, but it still has to behave as if it were an IEEE double, and that includes all the nasty things you don't want in integer math.
Most C programmers would be too scared to perform bitwise operations on doubles and JS doesn't add anything that makes it safer.
Can you give an example of math that would be exact with 32-bit integers, but is inexact in JavaScript? Floating-point math on values that happen to be integers is exact, if an exact answer exists and fits within the mantissa.
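(To spell out what I mean by exact: the cutoff is the 53-bit significand, and 32-bit values are nowhere near it.)

    console.log(2 ** 53 === 2 ** 53 + 1);            // true: 2^53 + 1 isn't representable
    console.log(Number.isSafeInteger(2 ** 53 - 1));  // true
    console.log(Number.isSafeInteger(2 ** 53));      // false
    // Sums and differences of 32-bit values stay far below 2^53, so they're exact.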
I think we're approaching this from philosophically different directions. On your end, if the programmer controls everything correctly then it's safe, because doubles have a 53-bit significand.
The problem happens when you treat numbers one way and they are treated differently elsewhere. For example, -0 when undergoing a bitwise +/- check (val & (1U<<31)) will appear positive when it's negative by definition.
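Concretely, in JS terms (with 1 << 31 standing in for the C-style 1U<<31):

    const val = -0;
    console.log((val & (1 << 31)) !== 0);  // false: ToInt32(-0) is +0, so the sign-bit test says "non-negative"
    console.log(Object.is(val, -0));       // true: the double really does carry a negative sign
    console.log(1 / val === -Infinity);    // true: same fact, older idiom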
The cast to an integer prior to the bitwise operator loses information. You can use the operators safely if you know there is no information to be lost, but it is not type checked like it would be in C. I will say at least JavaScript guarantees two's complement. You never know when your favourite compiler will decide to break your code in the name of a faster benchmark - correctness of existing programs be damned. "Implementation defined" used to mean "do something sane", but I digress.
The trick is to let your "double" only take on values that are also valid 32-bit integers, like with lots of gratuitous (x|0) and stuff. That excludes your (indeed problematic) -0. JavaScript's semantics give you enough to implement 32-bit integer math this way pretty easily. The speed may vary dramatically with small changes to the compiler, but correctness won't.
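Something like this is what I have in mind (a sketch; note Math.imul rather than (a * b) | 0 for multiplication, since the exact product can exceed what a double can hold):

    // 32-bit integer arithmetic by coercing every result with | 0:
    function add32(a, b) { return (a + b) | 0; }      // wraps like C int32 addition for int32 inputs
    function mul32(a, b) { return Math.imul(a, b); }  // (a * b) | 0 breaks once the exact
                                                      // product no longer fits in a double

    console.log(add32(0x7FFFFFFF, 1));      // -2147483648: wraps to INT32_MIN
    console.log(mul32(0x10001, 0x10001));   // 131073: low 32 bits of the exact product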
Gratuitous (x|0) would require serious justification in a C code review. In JS it's all you have. Under those restrictions I'd use it as rarely as possible. But yes, you can use it correctly if you try.
In my career I have never had to work with binary numbers. I'm on the application side of things, though, and I try to avoid working with infrastructure at all, so I can't speak for that side.