There is binary and text, though. Many bit sequences aren't valid in a given text encoding (such as UTF-whatever) and so trying to use them as text is an error.
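For example (a quick Python sketch): 0xFF can never appear anywhere in well-formed UTF-8, so decoding it fails:

    data = b"\xff\xfe\xfd"  # arbitrary bytes, not a valid UTF-8 sequence
    try:
        data.decode("utf-8")
    except UnicodeDecodeError as e:
        print(e)  # 'utf-8' codec can't decode byte 0xff in position 0: invalid start byte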
I understand what you mean: of course text can be represented and treated as binary, and often the inverse as well, although that isn't necessarily true. Even in Windows-1252, where almost all of the upper 128 code points are assigned, there are control characters such as NUL, DEL, and EOT that I'd be impressed to see a random chat program preserve across the wire.
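To illustrate both halves of that in Python: the ASCII control bytes decode fine as Windows-1252, while a handful of the upper bytes are actually unassigned:

    print(b"\x00\x04\x7f".decode("cp1252"))  # NUL, EOT, DEL - valid, just invisible
    try:
        b"\x81".decode("cp1252")  # 0x81 is one of the five unassigned code points
    except UnicodeDecodeError as e:
        print(e)  # character maps to <undefined>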
I also don't read an implication that ASCII couldn't be converted to base64.
The article actually shows an example of text-to-base64 encoding. But base64 is generally used for encoding data in places where only ASCII is admissible, like URLs and inlined binary blobs.
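Roughly like this in Python (a hand-rolled example, not the article's):

    import base64

    # Text must first become bytes; base64 then maps those bytes to safe ASCII.
    encoded = base64.b64encode("any ASCII text".encode("ascii"))
    print(encoded)                    # b'YW55IEFTQ0lJIHRleHQ='
    print(base64.b64decode(encoded))  # b'any ASCII text'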
It's part of a lot of web standards and is also commonly used for crypto stuff. E.g. the plain-text files in your .ssh directory are typically base64-encoded; if you use basic authentication, that's $user:$passwd base64-encoded in a header; you can indeed use it to inline images and other content in URLs in web pages; email attachments are usually base64-encoded. And so on. It's one of those things any decent standard library for just about any language needs.
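A sketch of the basic-auth case, with made-up credentials just to show the header format:

    import base64

    user, password = "alice", "s3cret"  # hypothetical, for illustration only
    token = base64.b64encode(f"{user}:{password}".encode("utf-8")).decode("ascii")
    headers = {"Authorization": f"Basic {token}"}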
> Perpetuates the idea that there's "binary" and "text" [...]
Well, there is binary, and there is text. Sure, all text - like "strawman" ;) - is binary somehow, but not all binary data is text, nor can it even be interpreted as such, even if you tried really hard ... like all those poor hex editors.
Text is text. Text is encodable as binary. If text were binary, that encoding would be unique, but it isn't. Latin-1 encodes "Ü" differently than UTF-8 does, and even the humble "A" could be 0x41 in ASCII and UTF-8 or 0xC1 in EBCDIC.
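You can watch that happen in Python (using its cp500 codec as one EBCDIC variant):

    print("Ü".encode("latin-1"))  # b'\xdc'
    print("Ü".encode("utf-8"))    # b'\xc3\x9c'
    print("A".encode("ascii"))    # b'A' (0x41)
    print("A".encode("cp500"))    # b'\xc1' (0xC1 in EBCDIC)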
Everything is representable as binary, but not everything is binary. The abstract concept of 'A' has no inherent binary representation; 0x41 is just one of the options. Representing pi or e calls for even more abstract encoding, even though they are very specific concepts. Text is not binary, but text has to be encoded to binary (one way or another) for any kind of computer-assisted processing. But we tend to think of text in abstract terms, instead of as "UTF-8 encoded bytes", hence this abstraction is useful.
What if the computer isn't binary, but it needs to talk to a binary computer? Then you definitely can't go "oh, this text is binary anyway, I can just push it on the wire as-is and let the other end figure it out".
What? Is a ZIP file not binary because it's not a valid tar.gz file? Text just means "something that can be printed", and by this definition not even all valid ASCII sequences are text.
Most developers (etc.) use "binary data" or "a binary format" as shorthand for "not even remotely ASCII or Unicode", as opposed to something like a .txt file or HTML or Markdown, where it's readable and printable to the screen. Of course, whether it's in a file or in RAM or wherever, it's always ultimately stored as 0s and 1s, but that's not the sense we mean here.
> but also implies you can't encode ordinary ASCII text into base64.
I don't think it implies that at all.
Text isn't binary. Text can be encoded in binary, and there are different ways to do it: ASCII, UTF-8/16/32, Latin-1, Shift-JIS, Windows-1252, etc. Many of these can't encode all characters, especially those from languages that don't use the Latin alphabet.
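A quick Python illustration (using a Japanese string as an example):

    text = "日本語"

    print(text.encode("shift_jis"))  # fine, Shift-JIS covers Japanese
    try:
        text.encode("latin-1")       # Latin-1 has no code points for these
    except UnicodeEncodeError as e:
        print(e)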
The fact that you have to make sure you're using the correct encoding when processing text from a binary stream is proof enough that text isn't binary. Python before 3.x let you treat binary and text as the same thing, and it often caused problems.
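A tiny demonstration of the kind of problem that causes (Python 3 here, showing why the encoding matters):

    data = "café".encode("utf-8")  # b'caf\xc3\xa9'
    print(data.decode("utf-8"))    # café - correct
    print(data.decode("latin-1"))  # cafÃ© - mojibake from the wrong decoder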
Perpetuates the idea that there's "binary" and "text", which is incorrect, but also implies you can't encode ordinary ASCII text into base64.