This is actually a major annoyance in some fields. Xilinx, for example, likes to use GB to mean 1024^3 bytes for storage on FPGAs, while HDD manufacturers like to use 1000^3. IMHO [ZYEPTGMK]?iB is the way to go to end this nonsense.
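A quick back-of-the-envelope (plain Python, the "20 GB" figure is just illustrative) of how far apart those two readings of the same label drift:

    binary_gb = 1024**3    # Xilinx-style "GB"
    decimal_gb = 1000**3   # HDD-marketing "GB"

    claimed = 20  # a hypothetical "20 GB" part
    gap = claimed * (binary_gb - decimal_gb)
    print(f"{gap:,} bytes of ambiguity")  # 1,474,836,480 bytes, ~1.37 GiB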
I'm old enough to remember what MB and GB meant before sleazy marketers started to redefine them. They should have been sued for deceptive advertising. Instead the computer press of the time was spineless, because guess who paid for the ads.
Since mega- and giga- have well-established meanings as prefixes to measures, and the original common usages of MB and GB were inconsistent with those meanings, I prefer MiB and GiB for those uses. That's true even though it took marketers adopting the correct versions for devious reasons to popularize terms that distinguish the base-2 prefixes from the close-but-not-the-same base-10 prefixes.
In the kilobyte world, 2.4% may not have been too big of a deal.
In the terabyte world, there's a 10% difference between binary and decimal prefixes. That's way bigger than rounding error. We need to start using the binary prefixes.
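A minimal sketch (plain Python, just walking the standard SI/IEC prefix ladder) showing how the gap compounds with each prefix step:

    for n, name in enumerate(["kilo", "mega", "giga", "tera", "peta"], start=1):
        gap = (1024**n / 1000**n - 1) * 100
        print(f"{name}: {gap:.1f}% gap between 1024^{n} and 1000^{n}")
    # kilo: 2.4%, mega: 4.9%, giga: 7.4%, tera: 10.0%, peta: 12.6%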
>I don't even know how to conceptualize a Kelvin-Byte.
I can actually see where this unit would be useful: in estimating the probability of bit rot. The more kelvins you have, and the more bytes you have, the more likely you are to flip a bit due to random thermal fluctuations. Temperature * storage capacity = Kelvin-Bytes.
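Taking the joke at face value, it composes like any other product unit; a toy sketch (all numbers made up for illustration):

    temperature_k = 300            # a warm server room, in kelvins
    capacity_bytes = 4 * 1024**4   # a 4 TiB drive
    kelvin_bytes = temperature_k * capacity_bytes
    print(f"{kelvin_bytes:.3e} K*B of bit-rot exposure")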
20GB = Twenty Gigabytes
NOT
20gb = Twenty gram bits
and:
20MB/s = Twenty Megabytes per second
NOT
20mbytes/sec = Twenty milli-bytes per second (if you had your B's and b's correct, you wouldn't need to write out "bytes")
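The whole point is that the lookup has to be case-sensitive; a tiny sketch (the table and function are mine, not from any standard library):

    # Upper/lower case changes the meaning entirely
    UNITS = {
        "GB":   "gigabytes",
        "gb":   "gram bits",             # nonsense as a data size
        "MB/s": "megabytes per second",
        "mb/s": "millibits per second",
    }

    def read_unit(label: str) -> str:
        # Deliberately exact-match: no .lower() normalization allowed here
        return UNITS.get(label, "unknown unit")

    print(read_unit("MB/s"))  # megabytes per second
    print(read_unit("mb/s"))  # millibits per second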