Edit: Reading further, it's clearly about the written form used:
> [I]t is unlikely that Harriot hit upon binary notation simply because he was using weights in a power-of-2 ratio, something that was a well-established practice at the time. Equally if not more important was the fact that he recorded the measurements made with these weights in a power-of-2 ratio too. For when recording the weights of the various part-ounce measures, Harriot used a rudimentary form of positional notation, in which for every position he put down either the full place value or 0, depending on whether or not the weight in question had been used.
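To make the recording scheme concrete, here's a small sketch of it in Python (my own illustration, with made-up weights counted in sixteenths of an ounce; not figures from the manuscript):

    # For each power-of-2 weight, write down either its full place value or 0,
    # depending on whether that weight was used on the balance.
    WEIGHTS = [8, 4, 2, 1]  # part-ounce weights, in sixteenths of an ounce

    def record(total_sixteenths: int) -> list[int]:
        row, rest = [], total_sixteenths
        for w in WEIGHTS:
            if rest >= w:
                row.append(w)   # full place value: this weight was used
                rest -= w
            else:
                row.append(0)   # 0: this weight was not used
        return row

    print(record(13))  # -> [8, 4, 0, 1], one small step away from writing 1101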
My original comment:
I'm a little unclear, as the article mentions "binary numeration" several times. Am I to understand that Harriot's is the first written evidence of binary in a modern form, that is, using Arabic numerals like we do today? (For example, we'd write four as 100.) Other commenters are noting that base 2 has been known for far longer than the last 500 years.
Binary is impossible to 'invent' as it is just an application of established arithmetic rules to a base 2 number system.
If I declared an arbitrary base of 73, you wouldn't say I just 'invented' septuaginta-tresinary??
People have been using base 2 for thousands of years prior to Thomas Harriot (see Egyptian multiplication), and to assume that these early mathematicians didn't understand the concept of a base is naive!
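For anyone who hasn't seen it, a rough sketch of the doubling-and-halving idea behind Egyptian multiplication (this is the equivalent 'Russian peasant' formulation; the scribes tabulated doublings of one factor and added up the ones matching the other):

    # Multiply by doubling one factor and halving the other; the doublings you
    # keep correspond exactly to the powers of 2 in the binary decomposition of b.
    def egyptian_multiply(a: int, b: int) -> int:
        total = 0
        while b > 0:
            if b & 1:        # this power of 2 is present in b
                total += a
            a <<= 1          # double a
            b >>= 1          # halve b, dropping the remainder
        return total

    print(egyptian_multiply(13, 21))  # 273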
What counts as inventing it, though:
1) someone using positional notation with digits for 1 and 0 and performing base 2 arithmetic on them
or
2) someone using positional notation in arbitrary bases, and generally establishing that that would work for any base down to and including 2
?
People had to realize that that worked. It's not obvious; it's not obvious that just because decimal and duodecimal can count to every number, the same is true for any base; it's not obvious that it will continue working for base 2, where a single place can only count up to 1, which seems like it might be underpowered for the task. "Of course it works for 2!" you cry? Well, it doesn't work for 1.
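A quick way to see the "works for 2, fails for 1" point (a throwaway sketch, nothing rigorous): count how many numbers k digit positions can name in a given base.

    from itertools import product

    # With k positions and digits 0..b-1 you can name exactly b**k values, so any
    # base b >= 2 eventually reaches every natural number as k grows, while "base 1"
    # (only the digit 0) never names anything but zero.
    def reachable(base: int, places: int) -> set[int]:
        return {sum(d * base**i for i, d in enumerate(word))
                for word in product(range(base), repeat=places)}

    print(sorted(reachable(2, 3)))  # [0, 1, 2, 3, 4, 5, 6, 7]
    print(sorted(reachable(1, 3)))  # [0]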
What about base infinity? Every number gets a unique digit. You'd need something that generates a unique digit pattern for each number, which isn't really a base.
"He told me that in 1886 he had invented an original system of numbering and that in a very few days he had gone beyond the twenty-four-thousand mark. He had not written it down, since anything he thought of once would never be lost to him. His first stimulus was, I think, his discomfort at the fact that the famous thirty-three gauchos of Uruguayan history should require two signs and two words, in place of a single word and a single sign. He then applied this absurd principle to the other numbers. In place of seven thousand thirteen, he would say (for example) Maximo Perez; in place of seven thousand fourteen, The Railroad; other numbers were Luis Melian Lafinur, Olimar, sulphur, the reins, the whale, the gas, the caldron, Napoleon, Agustin de Vedia. In place
of five hundred, he would say nine. Each word had a particular sign, a kind of mark; the last in the series were very complicated ... I tried to explain to him that this rhapsody of incoherent terms was precisely the opposite of a system of numbers."
That's a tricky one. Should we be able to actually write down each and every digit? If so, then you'll need to invoke some sort of infinity-related concept to allow it to happen. I'm not a mathematician, but:
Let the digit for one be a single pixel (the simplest mark I can make), the digit for two be two pixels, and so on. Now we need an infinite number of these.
I don't think we can change over to, say, symbols made up from pixels either, nor mess with multiple dimensions to add extra "depth". In the end we still need an infinite number of pixels or pixel properties - we could mess with colours, but that is simply adding dimensions.
We can conceive of base infinity but I don't think we can actually use it as such except symbolically. We can decree that a particular symbol or construction for a symbol represents a particular digit within base infinity that we can define by other means, and we can do that a lot but can we do it infinitely often? If we relax the physical representation requirement, then I'd say yes, otherwise no.
I've no doubt that this concept is well understood and dealt with already by the pros.
I wonder how the developers of computers came to the idea of using binary numbers? As I remember, early calculating machines used the decimal system (e.g. Babbage's machine, IBM's tabulators). For example, did Konrad Zuse invent floating-point binary representation himself when developing his machine, or were there previous works which described how numbers can be added in the binary system using relays or tubes?
So, were there any works on floating-point binary numbers and implementing operations with them before 1938?
Also, did he invent logic gates or there also were previous works?
> I wonder how the developers of computers came to the idea of using binary numbers?
Essentially a logical consequence of electronics and the equipment they were working with at the time.
Note that 0 and 1 are not pure zero volts and 5 voltsⁱ. There is a lower range that will definitely be read as 0 and an upper range that will be read as 1, and hopefully the signal doesn't spend long between them. Also, switching isn't absolutely instant and switches bounceʲ; debouncing methods smooth that out but drag out the time to transition from 0->1 or 1->0.
All of this is true for multi-level electronics, but even more so, making it easier and more reliable to work in binary than with 3, 4, or more levels per bit.
Ternary (base 3) computers did exist, some using -1/0/+1ʰ and some using 0/1/2, and multi-level logic is used for some storage methods today, so binary isn't the be-all and end-all; it's just usually the most reliable & convenient tool for the job.
----
[i] or 3.3 or whatever other logic levels you are using
[j] a lot less of a problem with microscopic transistors but think back to old chunky relays & tubes
[h] not sure if that is just notation or there were negative voltages involved - I last read about these things properly at Uni ~2.5 decades ago
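To put rough numbers on the noise-margin point above (a back-of-the-envelope sketch assuming a 5 V supply, not real device specs):

    # Packing more logic levels into the same voltage range shrinks the gap
    # between adjacent nominal levels, so the same amount of noise causes
    # misreads sooner.
    SUPPLY = 5.0

    def gap_between_levels(levels: int) -> float:
        return SUPPLY / (levels - 1)

    for levels in (2, 3, 4, 8):
        print(f"{levels} levels -> {gap_between_levels(levels):.2f} V between adjacent levels")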
> All of this is true for multi-level electronics, but even more so, making it easier and more reliable to work in binary than with 3, 4, or more levels per bit.
You don't need many levels. IBM's tabulators used pulses to represent numbers in decimal (e.g. 6 pulses to transmit digit 6). That's why it is surprising that computer developers didn't reuse existing technology and went with a new system.
Above certain speeds, pulse-based systems have timing issues. How many pulses can you cram in before they start to merge due to signal noise? Assuming serial communication, transmitting 9 pulses takes more time slots than the 4 needed to represent 9 in binary (assuming 0 is represented by no pulse, not a single pulse, with no signal at all indicating an error condition). If you allow variable-length transmissions, pure pulses become more efficient overall, but synchronising multiple signals potentially becomes more difficult.
So you fix the signal merging issue found in multi-level signals, but introduce other similar difficulties. Both sets of issues get more problematic as signalling speed increases.
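Back-of-the-envelope version of that comparison (my own sketch; it assumes one time slot per pulse or per bit and ignores framing and clocking):

    from math import ceil, log2

    # Serial time slots per decimal digit: pulse counting (d pulses to mean d,
    # 0 = no pulse) versus a fixed-width binary coding of the digit.
    BINARY_SLOTS = ceil(log2(10))  # 4 bit slots cover digits 0..9

    for d in range(10):
        print(f"digit {d}: {d} pulse slots vs {BINARY_SLOTS} binary slots")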
> I wonder how the developers of computers came to the idea of using binary numbers?
Logic gates[0]:
A logic gate is an idealized or physical device
that performs a Boolean function, a logical operation
performed on one or more binary inputs that produces
a single binary output.
...
In the real world, the primary way of building logic
gates uses diodes or transistors acting as electronic
switches.
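As a toy illustration of that definition (mine, not from the linked article): a NAND gate as a Boolean function of two binary inputs, plus a half adder built only from such gates, which is one route from switching elements to binary arithmetic.

    # A Boolean function of binary inputs producing a binary output.
    def nand(a: int, b: int) -> int:
        return 0 if (a and b) else 1

    # Other gates, built only from NAND.
    def xor(a, b):
        c = nand(a, b)
        return nand(nand(a, c), nand(b, c))

    def and_(a, b):
        return nand(nand(a, b), nand(a, b))

    # Half adder: adds two bits, returning (sum bit, carry bit).
    def half_adder(a, b):
        return xor(a, b), and_(a, b)

    for a in (0, 1):
        for b in (0, 1):
            print(a, b, "->", half_adder(a, b))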
Many early electronic computers were still using base 10. It afforded encoding schemes that could help detect errors stemming from component failures, it leveraged existing decimal vacuum tubes, and it simplified output to displays and printers, since no conversion from base 2 was necessary.
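One example of the kind of error-detecting decimal encoding meant here (just a sketch; the digit-to-pattern assignment below is arbitrary, whereas real machines used fixed schemes such as two-out-of-five or bi-quinary codes):

    from itertools import combinations

    # Every decimal digit gets a 5-bit word with exactly two 1s; a single stuck
    # or flipped bit leaves the wrong number of 1s, so the failure is detectable.
    CODE = {d: ''.join('1' if i in ones else '0' for i in range(5))
            for d, ones in enumerate(combinations(range(5), 2))}

    def looks_valid(word: str) -> bool:
        return word.count('1') == 2

    print(CODE[6], looks_valid(CODE[6]))               # a valid code word
    print(looks_valid(CODE[6].replace('1', '0', 1)))   # False: a dropped bit is caught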
> I wonder how the developers of computers came to the idea of using binary numbers?
It’s the most robust approach when building actual physical computers: only two states need to be reliably distinguished. You can also sort of relate it to an analog value going either up or down – the only two directions it can go.
The title is from the article. I dunno why you think the author who took the time to write the article deserves this gotcha or why we should divert focus from the article. If it was editorial by the submitter then yeah maybe...
Edit: on review it does seem to be on topic, sorry. It seems unconvincing to me because it has only multiplication and division, whereas addition and subtraction would point more to recognizing it as a workable number system. I'd like to see an expert's analysis.
I suspect this article refers to binary positional notation, specifically, although it could have been more explicit about that.
Having said that, it would indeed have surprised me a lot if powers-of-two-based calculations weren't among the oldest ones we have. Doubling or halving things seems quite natural.
One thing that occurs to me is that we often fail to believe that people 3000 years ago were just as smart as we are, blinded both by our knowledge of them and our certainty of their limited knowledge. Limited knowledge does not mean that one is not innately smart though, and unable to figure some stuff out...
> One thing that occurs to me is that we often fail to believe that people 3000 years ago were just as smart as we are ...
I think of this as "hubris of the now." This is where people cannot fathom how previous generations could achieve what they did without the tools and/or knowledge which exists in the present.
An example of this is found when contemplating how atmospheric differential equations were solved with computers having about 4k of RAM in the 1940's.
"So far as I know, the only person who has attempted to
explain Harriot’s transition from weighing experiments to the
invention of binary is Donald E. Knuth, who writes"
It's amazing how the name of Knuth pops up in such a variety of different subjects!
I wouldn't, just because it's so long ago. It's like reading, "the only person who has attempted to build Babbage's Difference Engine is Donald E. Knuth"!
That's precisely why I would expect him the most. If anyone would dig deep into the history of mathematics to find and understand all of the origins of fundamental concepts in computer science, it's Knuth.
it seems inevitable to me that someone would invent binary, or ternary, or likely nonary
bases are just the number of symbols you’re allowed to use to represent a number. so base 10 has ten symbols: 0123456789. base 2 has two symbols: 01, base 3 three: 012, etc etc.
someone will eventually look at a system with 10 symbols and think “what if we have x symbols instead of 10?”
once you realise the basic mechanism through which decimal represents a number - i.e. you can write any natural number as “some y multiples of x + [a number z between 0 and x-1]” - with a bit of nesting you can derive any number of any base
take 1327. let’s write it in base 9. the way I would do this is to write 1327 in the form:
x*y + z = 9*y + [0,8]
then if y != 0, we rewrite y itself in this form, then again with each new y, until y = 0. each z we produce is the next most significant digit in the answer. when y=0, the z produced is the most significant digit
1327 = 9*147 + 4. so our number ends with 4
y=147 is not 0, so we do 147 = 16*9 + 3. so our number ends with 34
y=16 is not 0, so we do 16 = 1*9 + 7. our number ends with 734.
y=1 is not 0, so we do 1 = 0*9 + 1. our number ends with 1734
y=0 is 0 so we terminate and our number is 1734
you can do this with any number and any base as long as you have the symbols for it
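Here's that procedure written out as a short Python sketch (digit symbols only go up to base 36 here, just so there's something printable):

    SYMBOLS = "0123456789abcdefghijklmnopqrstuvwxyz"

    def rebase(n: int, base: int) -> str:
        assert 2 <= base <= len(SYMBOLS)
        out = ""
        while True:
            n, z = divmod(n, base)   # n = base*y + z, with z in [0, base-1]
            out = SYMBOLS[z] + out   # each z is the next digit, least significant first
            if n == 0:
                return out

    print(rebase(1327, 9))  # "1734", matching the walkthrough above
    print(rebase(4, 2))     # "100"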
But the numbering doesn’t really follow the usual positional aspects of binary. You would expect that the hexagram in the second position, given the first hexagram, would be either 夬 (43) or 姤 (44), depending on whether you’re big-endian or little-endian. For that matter, the first hexagram should be number 0 for it to be a binary system. Just because you have 2^n symbols doesn’t automatically make you binary.
The numbers assigned to the hexagrams are not part of the actual I-Ching. You're free to consider the 'receptive' as 0 and the 'creative' as 63 if you wish. The sequence is also explicitly known as an 'arrangement'. That said, my comment was somewhat tongue in cheek.
I'm skeptical that an "unpublished manuscript" should count. If someone quietly has an idea, but tells no one, it has no value to humanity. If the world wasn't told, then the world has every right to say "that does not count".