Shannon's great contribution to the Bell System was that he figured out how to reduce the number of relays in a fully-connected toll office from O(N^2) to O(N log N).[1] After that, they let him work on whatever he wanted.
UNIX was written by some guys in the same organization. I wonder if one of them thought, "Oh sure, Shannon gets to work on what he wants; why can't we work on the future of a global inter-net? Why do we have to hide it as a text processing system?"
My management here apparently is a crowd-sourced mob trying to silence me via clicktivism. Shannon and KNR had it easy, IMO.
Actually, no. The UC Berkeley TCP/IP implementation was not the first. It was more like the fifth. But it was the first for UNIX that was given away to universities for free. Here's the pricing on a pre-BSD implementation of TCP/IP called UNET.[1] $7,300 for the first CPU, and $4,300 for each additional CPU. We had this running at the aerospace company on pure Bell Labs UNIX V7 years before BSD.
Much of what happened in the early days of UNIX was driven by licensing cost. That's a long story well documented elsewhere. Licensing cost is why Linux exists.
But that doesn't refute the parent's point, does it? (If it has been edited since you wrote that, the version I see is "Unix's involvement with the development of the Internet was mainly through BSD, which was a UC Berkeley joint, not Bell Labs.")
They were responding to the statement:
> "why can't we [Kernighan, Ritchie, Thompson, other folks at Bell Labs] work on the the future of a global inter-net? Why do we have to hide it [Unix] as a text processing system?"
Whether or not the BSD TCP/IP implementation was the first or most influential, the point is that it wasn't the Bell Labs Unix folks driving Unix networking forward. UNET was from 3Com.
The Bell Labs people had their own approach - Datakit.[1] To the user it looked like a circuit-switched network, but inside the central-office switches it was packet-switched. Bell Labs used it internally, and it was deployed in the 1980s.
Pure IP on the backbone was highly controversial at the time. The only reason pure IP works is cheap backbone bandwidth. We still don't have a good solution to datagram congestion in the middle of the network. Cheap backbone bandwidth didn't appear until the 1990s, as fiber brought long-distance data transfer costs way, way down.
There was a real question in the 1980s and 1990s over whether IP would scale.
A circuit-switched network, with phone numbers and phone bills, was something AT&T understood. Hence Datakit.
Interesting. I want to point out, though, that the document you link to is dated 1980, which is late in the development of the internet (ARPAnet): by then the network was 11 years old and research on packet switching had been going on for 20 years. That is one reason I find it hard to believe that the Labs (or anyone at AT&T) contributed much to the development of the internet, as the great-grandparent implies when he imagines the Unix guys saying, "why can't we work on the future of a global inter-net? Why do we have to hide it as a text processing system?"
Yes, the early internet (ARPAnet) ran over lines leased from AT&T, but I heard (though I have not been able to confirm it with written sources) that AT&T was required by judicial decree (at the end of an antitrust case) to lease dedicated lines to anyone willing to pay, and that if AT&T weren't bound by this decree, it would probably have refused to cooperate with this ARPAnet thing.
I concede that after 1980, Unix was centrally instrumental to the growth of the internet/ARPAnet, but that was (again) not out of any deliberate policy by AT&T, but rather (again) the result of a judicial decree: this decree forbade AT&T from entering the computer market (and in exchange, IBM was forbidden from entering the telecommunications market), so when Bell Labs created Unix (in 1970), they gave it away to universities and research labs because it was not legally possible to sell it.

In 1980 (according to you, and I have no reason to doubt you) AT&T no longer felt bound by that particular decree, but by then Berkeley was giving away its version of Unix - or at least Berkeley had an old version of Unix from AT&T that came with the right to redistribute it and would soon start to do exactly that - and Berkeley's "fork" of Unix is the one responsible for the great growth of the internet during the 1980s.

Specifically, even if an organization wanted Unix workstations for some reason other than their networking abilities, the ability to communicate over the internet was included with the workstation for free, because most or all of the workstation OSes (certainly SunOS) were derived from Berkeley's open-source version of Unix (although of course they didn't call it "open-source" back then).
Unix's presence on the Internet grew because DoD paid UC Berkeley to port the "DoD Internet" (aka TCP/IP) to Unix after Digital announced the cancellation of the PDP-10 line.
Meanwhile, everyone and their pet dog Woofy was exploiting the recent explosion in Unix's portability - and thus the source-level portability of applications - by using Unix as the OS for their products, because it made it easier to acquire applications for their platform.
With some Unix vendors (among others Sun, which arose from a project to build a "cheap Xerox Alto") providing Ethernet networking and quickly adopting the BSD sockets stack, you had an explosion of TCP/IP on Unix, but it still took years to become dominant.
Actually, the early ARPAnet was mostly DEC PDP-10 machines.[1] MIT, CMU, and Stanford were all on PDP-10 class machines. Xerox PARC built their own PDP-10 clone so they could get on. To actually be on the ARPANET, you had to have a computer with a special interface which talked to a nearby IMP node. Those were custom hardware for each brand of computer, and there were only a few types.
The long lines of the original ARPAnet were leased by the Defense Communications Agency on behalf of DARPA. ARPAnet was entirely owned by the Department of Defense, which had no problem renting a few more lines from AT&T.
AT&T was willing to lease point to point data lines to anybody. They were not cheap. I had to arrange for one from Palo Alto CA to Dearborn MI in the early 1980s, and it was very expensive.
> The resulting units may be called binary digits, or more shortly, bits.
It's interesting to read this early use of “bit”, before the term became commonplace. The first publication to use “bit”, also by Shannon, was only a year prior.[1]
[1] https://archive.org/details/bstj29-3-343