If anyone is on the fence about reading this, or worried about their ability to comprehend the content, I would tell you to go ahead and give it a chance. Shannon's writing is remarkably lucid and transparent. The jargon is minimal, and his exposition is fantastic.
As many other commenters have mentioned, it is impressive that such an approachable paper would lay the foundations for a whole field. I actually find that many subsequent textbooks seem to obfuscate the simplicity of the idea of entropy.
Two examples from the paper really stuck with me. In one, he discusses the importance of spaces for encoding language, something which I had never really considered before. In the second, he discusses how it is the redundancy of language that allows for crosswords, and that a less redundant language would make it harder to design these (unless we started making them 3D!). It made me think more deeply about communication as a whole.
Shannon is the closest equivalent to Einstein in terms of contribution, but for the engineering fields.
He pioneered foundational work across those fields, including communication entropy, cryptography, chess engines, robotic intelligence, digital Boolean logic, and the precursors of LLMs and modern AI in general. An outstanding engineer, or an engineer's engineer in the true sense of the word.
my experience is that papers that lay the foundations for a whole field are usually very approachable. i'm not sure why this is:
- maybe being better at breaking new intellectual ground requires some kind of ability that can also be applied to explaining things? like maybe some people are just smarter than others, either inherently or as a result of their training and experience, in a way that generalizes to both tasks
- maybe the things that most strongly impede people from breaking new intellectual ground also impede them from explaining them clearly? candidates might include emotional insecurity, unthinking devotion to tradition, and intellectual vanity (wanting to look right rather than be right)
- maybe the people who suck at explaining their own ideas don't get access to the cutting-edge developments that they would need to break new intellectual ground? shannon had the great good fortune, for example, to spend a lot of the war at bell labs conducting cryptanalysis, rather than sleeping under a dunghill on the battlefield or on the assembly line making artillery shells
> papers that lay the foundations for a whole field are usually very approachable. i'm not sure why this is
Kuhn talks about this in his works[1]. If I recall correctly, his argument is that when someone is creating a new field (new paradigm) there isn't pre-existing jargon to describe it in terms of, so it has to be described in accessible language.
It's once people start doing work inside the field that they start developing jargon and assuming things.
[1] I think it would have been The Structure of Scientific Revolutions, and/or possibly The Copernican Revolution
> Linguists usually refer to informal language as ‘slang’ and reserve the term ‘jargon’ for the technical vocabularies of various occupations. However, the ancestor of this collection was called the ‘Jargon File’, and hacker slang is traditionally ‘the jargon’. When talking about the jargon there is therefore no convenient way to distinguish it from what a linguist would call hackers' jargon — the formal vocabulary they learn from textbooks, technical papers, and manuals.
What you call new technical terms is the jargon of technical pursuits. The post modernism stuff is the jargon of "studies" departments. There is also military jargon, for another example. The term is not inherently derogatory.
Jargon and shared context are barriers for newbies. In a new field they simply don't exist yet. The avenues for accidentally excluding people (or intentionally but I like to be charitable) don't exist yet.
I think it's because they are older, and so they faced less of the modern pressure to publish.
You didn't need to publish unless you had something interesting to say, and you could just make your point without worrying about drowning in a sea of mediocrity.
i thought about that, and certainly i have read a lot of terribly written papers (twps) in recent years, but i think this is partly survival bias; there were lots of twps in the older literature (ol) too, but they don't get cited, so you have to do things like find an entire ol journal issue (maybe one containing a single well-known paper) to read through and find twps. but the well-written papers from the ol seem to be much, much better written than the current well-written papers. i think something about the current publishing pipeline acts as a filter against good writing, something that didn't use to be there
Yet another possibility is that there actually are papers laying out foundations of potential new fields in impenetrable prose... and then no one understands those papers and they are promptly forgotten. One can only hope that someone else reinvents the ideas and explains them better.
Agreed. This is one of the all time great papers in that it both launched an entire field (information theory) and remains very accessible and pedagogical. A true gem.
He also presents the first instance (at least that I could find) of an auto-regressive (Markovian) language model, as a clarifying example in the first 10 pages :)
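For anyone curious, the construction he uses is just a character-level n-gram chain: condition on the last few symbols and sample the next one. A minimal sketch of that idea in Python (the toy corpus and the order-2 context are my own placeholders, not anything from the paper):

    import random
    from collections import defaultdict

    def build_model(text, order=2):
        # Map each length-`order` context to the characters observed after it.
        model = defaultdict(list)
        for i in range(len(text) - order):
            model[text[i:i + order]].append(text[i + order])
        return model

    def generate(model, length=80):
        # Repeatedly sample the next character given the previous characters,
        # the same "series of approximations" idea Shannon illustrates.
        context = random.choice(list(model.keys()))
        out = list(context)
        for _ in range(length):
            followers = model.get(context)
            if not followers:
                break
            nxt = random.choice(followers)
            out.append(nxt)
            context = context[1:] + nxt
        return "".join(out)

    corpus = "the quick brown fox jumps over the lazy dog " * 50  # toy corpus
    print(generate(build_model(corpus)))

Higher orders produce eerily English-looking output, which is exactly the point of his letter- and word-level approximations.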
I think it's worth saying thank you for such a great description of why Shannon's writing is good and of how approachable his foundational papers are. I could've just upvoted, but it's nice to know that what you're doing really resonates with other people and that they appreciate your way of describing it. Thank you. Haha! :)
Just pointing out the obvious here - it’s impossible to have any jargon in this paper since it literally created the field. Any jargon is necessarily invented later.
On the whole I agree: just go read the paper. While you're at it, queue up Lamport's The Part-Time Parliament (the Paxos paper).
> Two examples from the paper really stuck with me. In one, he discusses the importance of spaces for encoding language, something which I had never really considered before.
As a westerner who has studied quite a few writing systems, I find this kind of hard to interpret.
Verbally, however, the timing of pauses is important in all the languages I've learned. This would be a more coherent argument to make at the pan-lingual level than one tied to written representation, which is fairly arbitrary (many languages have migrated scripts over the years; see for example the dual-script Devanagari/Arabic divide between Hindi and Urdu, the many other languages that migrated to Arabic or Phags-pa, Vietnamese moving from Chinese characters to a French-influenced diacritic script, etc.).
> In the second, he discusses how it is the redundancy of language that allows for crosswords, and that a less redundant language would make it harder to design these (unless we started making them 3D!). It made me think more deeply about communication as a whole.
Yeah, good luck making a Chinese crossword. Not sure "redundancy" is the right term, however. Perhaps "frequent [even tediously repetitive?] glyph reuse".
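For what it's worth, the paper does pin the term down: redundancy is defined as one minus the ratio of the actual entropy to the maximum possible entropy of the alphabet, R = 1 - H/H_max. A rough sketch of the single-letter version of that calculation in Python (the frequency table is an approximate one supplied here for illustration, not Shannon's):

    from math import log2

    # Approximate English letter frequencies (illustrative values only).
    freqs = {
        'e': 0.127, 't': 0.091, 'a': 0.082, 'o': 0.075, 'i': 0.070,
        'n': 0.067, 's': 0.063, 'h': 0.061, 'r': 0.060, 'd': 0.043,
        'l': 0.040, 'u': 0.028, 'c': 0.028, 'm': 0.024, 'w': 0.024,
        'f': 0.022, 'g': 0.020, 'y': 0.020, 'p': 0.019, 'b': 0.015,
        'v': 0.010, 'k': 0.008, 'j': 0.002, 'x': 0.002, 'q': 0.001, 'z': 0.001,
    }

    H = -sum(p * log2(p) for p in freqs.values())  # first-order entropy, bits per letter
    H_max = log2(len(freqs))                       # entropy if all 26 letters were equally likely
    print(f"H = {H:.2f} bits/letter, redundancy = {1 - H / H_max:.0%}")

This single-letter estimate understates things; counting longer-range structure, Shannon puts the redundancy of English at roughly 50%, which is the figure behind the crossword remark.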
While he is best known for this paper and "information theory", Shannon's master's thesis* is worth checking out as well. It demonstrated an equivalence between relay switching circuits and Boolean algebra, and it was one of the key ideas that enabled digital computers.
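The punchline of the thesis is easy to state: contacts in series behave like AND, contacts in parallel like OR, and a normally-closed contact like NOT, so a relay network computes a Boolean function and vice versa. A toy illustration in Python (the lamp circuit is a made-up example, not one from the thesis):

    # Model a relay contact as a boolean: True = closed (conducting), False = open.
    def series(a, b):
        return a and b        # current flows only if both contacts are closed

    def parallel(a, b):
        return a or b         # current flows if either path is closed

    def normally_closed(a):
        return not a          # conducts when the controlling coil is NOT energized

    # Made-up circuit: the lamp lights if (x AND y) OR (NOT z).
    def lamp(x, y, z):
        return parallel(series(x, y), normally_closed(z))

    for x in (False, True):
        for y in (False, True):
            for z in (False, True):
                print(x, y, z, '->', lamp(x, y, z))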
The funny thing is that, at the time, digital logic circuits were made with relays. For most of the 20th century you could hear relays clacking away at street junctions, inside the metal boxes controlling traffic lights.
Then you got bipolar junction transistors (BJTs), and most digital logic, such as ECL and TTL, was based on a different paradigm for a few decades.
Then came the MOS revolution, allowing for large scale integration. And it worked like relays used to, but Shannon's work was mostly forgotten by then.
> Then you got bipolar junction transistors (BJTs), and most digital logic, such as ECL and TTL, was based on a different paradigm for a few decades.
I think the emphasis is misplaced here. It is true that a single BJT, considered as a three-terminal device, does not operate in the same “gated” way as a relay or a CMOS gate does.
But BJT components were still integrated into chips, or assembled into standard design blocks that implemented recognizable Boolean operations, and synthesis of the desired logical functions used tools like Karnaugh maps that were (as I understand it) outgrowths of Shannon's approach.
i don't think traffic light control systems were designed with boolean logic before shannon, nor were they described as 'logic' (or for that matter 'digital')
> The fundamental problem of communication is that of reproducing at one point either exactly or approximately a message selected at another point. Frequently the messages have meaning...
This is my candidate for the sickest burn in a mathematical journal paper...
Aaah. That's a good point. Hehe :) But I mean, he does really delve into information in his exposition. Very clear.
Tho I’m intrigued by the idea that there’s more to information than Shannon. Any additional hints??
I've often thought about a dual metric to entropy called ‘organization’ that doesn't measure disorder/surprise but instead measures structure, coherence. But I don't think it's exactly a dual.
Not many know about it, but this paper (written in 1948) stemmed from a lesser-known paper Shannon wrote in 1945 called "A Mathematical Theory of Cryptography"[0].
Shannon's original paper on the topic was written during WWII; I believe it was classified, and it is much more concise as an introduction. After that came the famous and much more comprehensive 1948 paper (later republished as a book with Weaver's introduction), which expanded into the noisy-channel coding theorem. Meanwhile his original paper ("Communication in the Presence of Noise") was published in 1949, possibly after declassification. I highly recommend reading it first; it takes maybe an hour. Another terrific intro is a chapter of a book by Bruce Carlson, "Communication Systems: An Introduction to Signals and Noise...". I have a scan of the chapter linked here: https://drive.google.com/file/d/0B9oyGOnmkS7GTFlmQ2F1RWNFd28...
As an undergrad I struggled to understand why log was used to measure information. Could not find a reason in any textbook.
Took a deep breath and decided to download and read this paper. Surprise, surprise: it's super approachable and the reasoning for using log is explained on the first page.
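The argument there is essentially about additivity: N independent selections from M equally likely alternatives give M^N possible messages, and only a logarithm turns that multiplication into addition, so the measure grows linearly with the number of selections (the base just fixes the unit, e.g. base 2 for bits). A tiny numerical check in Python (the values of M and N are arbitrary):

    from math import log2, isclose

    M = 26   # equally likely alternatives per symbol (arbitrary example)
    N = 10   # number of independent symbols

    # log2(M ** N) == N * log2(M): information adds while possibilities multiply.
    assert isclose(log2(M ** N), N * log2(M))
    print(f"{log2(M):.3f} bits per symbol, {N * log2(M):.3f} bits total")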
Among other things, this paper is surprisingly accessible. You can give it to a beginner without much math background and they'll be able to understand it. I actually find it better than most modern books on information theory.
From playing 20 Questions and attempting to formalise it?
EDIT: actually the cryptography connection is more likely: Leibniz was 17th century; who was it that was already using binary alternatives for steganography a few centuries earlier?
EDIT2: did entropy in p-chem come before or after Shannon?
This is apocryphal, but it probably had something to do with dropping shells on Nazis - he was developing fire control systems for the US Navy around the time he developed the theorem, and only published several years after the War.
Allegedly he also derived Mason's Gain Formula around the same time but that was classified until Mason published it.
I use this paper whenever I teach information theory. If you are mathematically inclined, I'd recommend reading the proofs of his two main theorems; they're illuminating.
I recently went through two books: (1) Fortune's Formula and (2) A Man for All Markets. They both impressed upon me a deep appreciation for Shannon's brilliant mind.
Curious if there are any great resources/books you'd recommend on Information Theory.
* "Information Theory" by Cover and Thomas is a mathematical introduction to information theory. This is a technical book, and is very different than the books above.
If you haven't yet done so, read Shannon's paper as linked above