Let's Not Dumb Down the History of Computer Science (2014) (acm.org)
241 points by hypomnemata on Jan 28, 2021 | 167 comments



To be clear about the "(2014)", although Knuth gave this talk in 2014, this transcript of the talk is from the upcoming (February 2021) issue of Communications of the ACM.

The whole sequence of articles/talks is interesting:

- (2007, Martin Campbell-Kelly): "The History of the History of Software" (DOI: 10.1109/MAHC.2007.4407444) — the trigger for what follows.

- (2014, Donald Knuth): "Let's Not Dumb Down the History of Computer Science". Video: https://www.youtube.com/watch?v=gAXdDEQveKw Transcript: this submission (As mentioned, there was also a 2009 talk at Greenwich of which I can only find a 6-minute video: https://www.youtube.com/watch?v=sKUg0V7pt8o)

- (2014, Martin Campbell-Kelly): "Knuth and the Spectrum of History": https://ieeexplore.ieee.org/document/6880249 (click on PDF)

- (2015, Thomas Haigh): "The Tears of Donald Knuth": https://cacm.acm.org/magazines/2015/1/181633-the-tears-of-do...

The short version is that over the years, in all "history of X" fields other than the history of mathematics, the proportion of papers with technical content—exactly what ideas people came up with, how, and so on—has decreased, while historians have taken a turn towards broader social commentary. In this talk, Knuth explains why he finds this unfortunate and what value practitioners can get from history. (He also ends with examples of this kind of history waiting to be written.) In his reply, Haigh points out that if computer scientists want such history, they'll have to write it and fund it themselves; historians as a field won't do it.

(Someone in the YouTube comments points out that military history is like this: there exist military historians writing technical history about things like the "terrain, weapon systems, tactics, strategy, etc", funded by the military, because members of the profession do care about this. Unfortunately, this doesn't seem to be much the case in computer science.)

BTW, here are a couple of papers that Knuth wrote himself, which I would guess are the kind of historical writing he'd like to read (rich in technical detail):

- Von Neumann's First Computer Program (1970): https://fermatslibrary.com/s/von-neumanns-first-computer-pro...

- Ancient Babylonian algorithms (1972): http://www.realtechsupport.org/UB/NP/Numeracy_BabylonianAlgo...

- The Early Development of Programming Languages (1976): https://news.ycombinator.com/item?id=25717306


Interesting.

I think it's a valid point/pursuit but somewhat sisyphean. Technical content is harder to follow, which means fewer people will read it, even people who do have the skill to. Publishing is a medium, not entirely unlike HN or pop-science books, and it has a media dynamic. I think "history of X" tends to go "accessible" in every field, with the counterexamples (e.g. mathematics) being exceptions. An HN article with lots of code is harder to read, digest, and comment on than an article about the business/societal implications of facebook's new policy on squirrels. Even people who do read the former can read more of the latter more easily... so they do.

Also, generalities tend to be more evocative than specifics. If there is a message to the "story" about small teams, outsiders, or some other cultural element... that's engaging. Technical content is necessarily specific, and a lot of actual invention is somewhat random... without a wider implication.

I'm skeptical that "history of X" fields can really focus on what he wants them to focus on. The types of insights that he's looking for are, as he says himself, best found in primary sources. There will always be a limited supply of these, since people who write books and people who invent things overlap irregularly.

On the positive side, since he seems to mostly want inspiration, a little can go a long way.

Totally tangential... Military historians used to play a lot of "tactics," trying to recreate battles with board game pieces or whatnot. There are endless generations of such commentary on Roman vs Etruscan spear formations, the evolution of helmets, and so on. I don't think we learn much from this.


Wow, "Fermat's library" is horrible to read in.

Luckily, the raw PDF is available at http://public.callutheran.edu/~reinhart/CSC521/Week3/KnuthVo...


Thank you. I know many of these older papers aren't exactly in the best format, but Fermat's Library violating basic usability by turning perfectly good text into a bitmap format somehow ends up worse than a raw PDF.


So what Haigh is saying is that what historians as a field do is useless. I agree.


That is an absurd take. For all fields, most people are on the outside. Historical work for most people - produced by those on the outside - is how our society and culture understands itself.

I recently finished an excellent one volume history of the Civil War, "Battle Cry of Freedom." It was written by James McPherson, a historian. This book has informed my understanding of not just the Civil War, but also the lead up to it. I believe this context is essential for understanding where we are now.


So, where are we now?


Unlike in medicine, many of the ideas that we had in the past were better than the commonly accepted way things are done now.

Capability based security, for example, was something that allowed you to run any program with no danger to your system. It's not part of any common OS. They had it at Xerox PARC, but Steve Jobs chose not to take that part.

On the other hand, the PARC focus on replicating paper was a step backwards from work by Engelbart and others.

The limitation of a single desktop was put in place to allow children to ease into the desktop metaphor... it wasn't meant for adults to be stuck with the training wheels on.

I've been digging back, looking for the ideas we missed... and boy, there are some really powerful tools waiting to be reified in a modern context.


> Capability based security, for example was something that allowed you to run any program, with no danger to your system. It's not part of any common OS.

I know, I know. Norm Hardy was really good, his system KeyKOS worked, and few could understand him. I used to know his "explainer", Susan Rajunas. We don't even have proper "rings of protection", like Multics, any more.

Although the real problem today is that we need to run programs with less authority than the user running them, and we still lack a good conceptual model for doing that. "Allow write to SD card" is far, far too powerful a privilege to grant.


> we need to run programs with less authority than the user running them, and we still lack a good conceptual model for doing that

Isn't this now the standard reality for most desktop and mobile OSs?

"Allow write to SD card" is far, far too powerful a privilege to grant.

Granting a permission based upon user approval is never secure anyway: if users want to see the porn / flashing lights, they are going to click the button regardless. Having spent some time with both carriers and mobile device manufacturers, there are also areas of undeclared commercial utility in keeping control with the OS (device manufacturer) and out of the hands of the user: DRM, app sales, increased chargeable mobile data use, reduced chance of non-cellular data paths (mesh networking), etc.


People manage permissions just fine in the real world... if you have money in your pocket, you can pick and choose what portion of it to hand to the cashier when you're purchasing something.

When you hand someone $3.50... you're giving them $3.50 of capability, and no matter how clever the Loch Ness Monster is, they'll never get more than that $3.50.

If on the other hand, you have to give them the entire wallet... like you do in Windows, Linux, etc... the game is over.

Capability Based Security is NOT the same as "Global Permission to do X"

PayPal exists because it is an instantiation of capabilities for finance on the internet.


> you can pick and choose what portion...

That was the old way. The new way is you give them your credit card or WeChat 'bill me' code. You get billed whatever the merchant wants to bill. Any post-facto complaint will be at the cost of your own time.

Your further analogy is factually inaccurate. Under Windows, AFAIK most programs have a partially restricted permission set, hence Run as administrator. In Linux, under modern distributions, most programs run in both a user and a cgroup context, and some under further restrictions such as chroot and caps.


>The new way is you give them your credit card or WeChat 'bill me' code. You get billed whatever the merchant wants to bill. Any post-facto complaint will be at the cost of your own time.

Sure, that may be the "new" way, but that doesn't mean it's better! Much of the payment fraud online couldn't exist if customers were in control of how much money to send and when rather than the merchant.

>Under Windows, AFAIK most programs have a partially restricted permission set, hence Run as administrator. In Linux, under modern distributions, most programs run in both a user and a cgroup context, and some under further restrictions such as chroot and caps.

Under normal circumstances, they're restricted only to the extent that your user is restricted. That's far from the level of granularity that GP is talking about.


> Isn't this now the standard reality for most desktop and mobile OSs?

Not really, no. Because there isn't a comprehensive intent layer in the UI that is fully trusted and in which the user can grant capabilities.

Just a couple of examples: if you download a PDF file with a mobile browser and want to open it in an app, the only option is to grant eternal (unless you manually revoke it) access to the local file system to the entire viewer app. A capability-aware UI would, at the "what do you want to do with this file?" system dialog, grant a one-time capability to read the particular file to the app the user picks. The capability persists as long as the app retains it. The app can't fork off a sub-process or send it to another app and clone the capability to the file; it would have to ask the user to do that. The mechanism for that trusted user request and response is where all UIs (and most CLIs) fail.

Designing a safe and useful capability desktop/mobile environment is hard. Modern UIs and CLIs are just papering over the details of actual objects in the privilege system. Icons and filenames are only a human-readable representation of objects accessible with capabilities and so there's a disconnect between UI representation, user intent, and user interactions. Solving that means replacing arbitrary custom UIs with trusted, object-aware system UIs that reveal the actual state of capabilities and objects instead of a simplified representation.
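To make the file-open example concrete, here's a toy sketch in plain C. Everything in it is hypothetical (no real OS exposes these names); it just shows the shape of the idea: the untrusted "app" code only ever holds an opaque, read-only handle to the one file the trusted dialog picked, with no path and no way to open anything else through it.

  /* Toy, purely hypothetical illustration -- not a real OS API. */
  #include <stdio.h>

  typedef struct { FILE *f; } ReadCap;   /* pretend this handle is unforgeable */

  /* Stand-in for the trusted "what do you want to do with this file?" dialog:
   * only this code ever sees a path or a directory listing. */
  static ReadCap trusted_dialog_pick_file(void) {
      ReadCap cap = { fopen("user-picked-file.txt", "rb") };  /* made-up file */
      return cap;
  }

  /* Everything the "app" can do: read bytes through the capability. */
  static size_t cap_read(ReadCap cap, void *buf, size_t len) {
      return cap.f ? fread(buf, 1, len, cap.f) : 0;
  }

  static void untrusted_app(ReadCap cap) {
      char buf[64];
      size_t n = cap_read(cap, buf, sizeof buf);
      printf("app read %zu bytes\n", n);
      /* No fopen(), no directory walk, no way to re-derive a path here,
       * and no way to hand the capability to another process without
       * going back through the trusted UI. */
  }

  int main(void) {
      untrusted_app(trusted_dialog_pick_file());
      return 0;
  }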

> Granting a permission based upon user approval is never secure anyway since if they want to see the porn / flashing lights they are going to click the button regardless.

The UI model would have to be significantly different with a lot of trusted components. Basically any UI element that can grant a capability to another app/process has to be part of the trusted system. That means a user-level app can't even ask for permissions to objects because it can't see them, know they exist, or refer to them in a way the system UI could grant a capability. Interaction between applications would be more like a smart clipboard; a user would open their camera app, take a picture, and grant a read capability to an app that's waiting for it.

There's a lot of 'convenience' lost in a pure capability system because there's no default trust in the good intentions of app developers.

There is a potential further development that allows convenience: formal proofs that application software adheres to a particular data management policy. Say Instagram wants to formally verify that photos taken automatically by the app (selfie button or whatever) can never be uploaded without user permission. Prove that the component for taking photos within the app drops all capabilities to the network and to the main app itself before taking any photos; all it can do is fill up a particular folder/directory with new photos that are visible to the user, who can then trigger an upload (granting a capability to a particular photo to the main app from a secure view of the directory) with another trusted interface. That trusted component could be granted a capability to the camera itself once the proof is verified.


All good points.

Clearly, if reasonable security is a goal, default trust should be restricted. Minimum trust thresholds should be distinct from sometimes-needed trust, which should be delineated on a per-item/per-session/temporally restricted basis.

Currently, the way the web works, a 1st party user downloading almost any 2nd party file means a 3rd party app of user-defined identity starts and reads the 2nd party file from a 4th party server in a 5th party format, running on a 6th party OS with a 7th party network, 8th party browser, and 9th+ party certificate chain, in an unknown filesystem context. What is wrong with this picture? You can't establish trust in this context. All you can do is establish boundaries for failure.

Modern history has shown that this tower-of-crap™ approach is the most commercially expedient means to distribute new functionality to general computing devices. I would argue that the path forward in general computing is therefore to work on processes to enhance and apply those restrictions using improved architectural decisions at the interface level combined with CI/CD processes that automatically apply them.

In short: stop asking the user, just design the system to grossly limit the scope and impact of any damage.


> In short: stop asking the user, just design the system to grossly limit the scope and impact of any damage.

Yep. This has to come from OS vendors and while iOS and Android took some steps in this direction they're still basically POSIX about it instead of capability-based.

There's also a really unfortunate trend where apps want to be the entire user interface for everything. Manage the SMSs, be the camera and photos albums, handle payments, etc. This breaks all security guarantees by the OS.


> A capability-aware UI would, at the "what do you want to do with this file?" system dialog grant a one time capability to read the particular file to the app the user picks. The capability persists as long as the app retains it. The app can't fork off a sub-process or send it to another app and clone the capability to the file; it would have to ask the user to do that. The mechanism for that trusted user request and response is where all UIs (and most CLIs) fail.

Isn't this precisely how iOS works?


> Isn't this precisely how iOS works?

I'm not an expert on iOS. From reading their latest security guide and a bit of wikipedia I think the following is true:

* iOS uses the XNU kernel, a Mach derivative with a BSD POSIX environment: traditional POSIX processes, IPC, filesystems, and networking.

* iOS implements runtime security by process/user-id separation and ACLs on system and inter-process APIs. ReplayKit uses a similar idea by giving an app a handle on a single recording session, but it's implemented on top of the POSIX framework. For contacts it's clearly ACL-based: either an app has access to all contacts or none, whereas a capability system would allow a particular form of access to a single contact.

Capability systems are implemented by a kernel enforcing capabilities as the sole method of interacting with other objects in the system. There is no generic open(), connect(), read(), or listen() although there are similar methods on certain capabilities processes can hold.

For example, the user could provably give one contact to an app which was further allowed to share that contact with a third app under certain circumstances e.g. "schedule a call to Mom next Thursday" which would involve granting a scheduling app a capability to initiate a phone call but only to the number in the particular contact within a certain time window, by granting a re-grantable capability to the phone number (and maybe display name) subfield(s) of the Mom contact, which the scheduler would hand off to the phone dialer at the desired time. This is a mildly contrived example because of the complexity of generating the specific capabilities necessary to accomplish it, but it should give an example of what is possible to do securely with a capability system that would be difficult or impossible with traditional ACLs.


I'm not sure how things are implemented on the OS level, but the high-level view from an app developer is this:

Apps are heavily sandboxed on the app bundle level. Any code that's installed from the App Store in an app bundle has no way to access the filesystem outside of its sandbox. There is no "access the file system" permission like on Android or on desktop OSes.

Inter-app communication is extremely limited. There is no way for an app to launch a process in another app bundle. When you open a file from the system Files app, your app gets access to that file only.

There is a small handful of high-level APIs that allow inter-app communication, including a URL handler, and a "share this file" dialog box, where you give the OS a file URL and the OS lets the user pick another app and the OS grants access to only that file to the other app. There are also some very narrowly-defined APIs for audio plugin-type data stream integration (which is gated by App Store approval)

There is no OS-mediated way to share live objects (you can share objects, but the receiving app just gets a copy of the state of it at the time of sharing)

For the OS-managed databases (photos, contacts, music etc), the exposed APIs vary wildly in how they manage permission


> For the OS-managed databases (photos, contacts, music etc), the exposed APIs vary wildly in how they manage permission

That's the key indication that there aren't capabilities underlying the databases. If there were, access would be more granular and uniform.


> Although the real problem today is that we need to run programs with less authority than the user running them,

All true; but what really stopped capability systems is that most users cannot be bothered to grant minimum privilege. Even most developers cannot - we invented containers in part as a way to mitigate the consequences of failing to grant minimum privilege.


As the old saying goes, you go to war with the army you have. I have a bunch of fantastic solutions to all our problems, if only programmers would work 10 times harder for every line of code, and the project managers would be OK with that, and the business funding it would be OK with that, and society would be OK with getting 1/10th the software.

Capabilities-based stuff is really neat, but it's also really complicated to put into practice, and I live in a world where it's often a struggle to get developers to label their brand new REST interface for whether or not it's "admin only". Personally, I think a lot of that "really complicated to put into practice" is essential complexity, not incidental; anytime you sit down and really think about what the optimal permissions scheme around any even slightly non-trivial system ought to be, you generally end up with something pretty complicated. But even if the perfect system existed and you handed it to the real developers we have today, you'd still be working with people who would do whatever the simplest thing they could do to fully bypass the capabilities system is and get on with life without a moment's twinge of conscience.

I don't want to be too hard on the average programmer; contra fashionable cynicism, things are actually getting better on the decade time scale. But at the current pace, "capabilities" are probably still a decade or two away from even being "niche".


It's worse than that. You implement your perfect, minimum capability system. Your programmers really implement that. Everything's great...

... until someone from marketing shows up, and says, "Hey, we also need it to...", and the program now needs a bunch of new capabilities.


I wish I could show this to a Big Data analyst I worked with. He spent all day doing machine learning things but once complained to my team that we were making access control too complicated.


>we invented containers in part as a way to mitigate the consequences of failing to grant minimum privilege

Also because it's extremely difficult to grant minimum privilege on your typical general-purpose OS today. Spawn a new process and the default is that it has all the same authority that you do - it takes a lot of work to dial that down.

This is typically inverted in a capability system: the program only has the authority that you give it.
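To illustrate that default: on a stock POSIX system, the few lines below (with "some-tool" as a made-up placeholder) start a child that silently inherits the parent's uid, open file descriptors, filesystem view, and network access; any restriction has to be bolted on afterwards.

  /* Default POSIX behavior: the child gets essentially all of the parent's
   * authority.  Sandboxing (seccomp, namespaces, chroot, privilege drops)
   * is all extra, opt-in work on top of this. */
  #include <unistd.h>

  int main(void) {
      if (fork() == 0) {
          /* "some-tool" is only a placeholder; whatever runs here acts
           * with the full authority of the user who launched us. */
          execlp("some-tool", "some-tool", (char *)NULL);
          _exit(127);   /* exec failed */
      }
      return 0;
  }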


> Capability based security, for example was something that allowed you to run any program, with no danger to your system. It's not part of any common OS. They had it at Xerox PARC, but Steve Jobs chose not to take that part.

For FreeBSD there's Capsicum (1), and for Linux, although the implementation is not strictly capability-based, there's SE-Linux, which depending on the use case resembles capability-based restrictions.

Also, although the OS isn't built around them, Linux has supported capabilities (in the capabilities(7) sense of splitting up root's privileges) for a while: https://linux.die.net/man/7/capabilities

(1) https://www.cl.cam.ac.uk/research/security/capsicum/
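To give a flavor of the Capsicum model, here is a rough sketch along the lines of its man pages (FreeBSD-specific, error handling trimmed, "input.txt" just an example): acquire resources up front, shrink each descriptor's rights, then enter capability mode, after which the global namespaces are simply out of reach.

  /* Rough Capsicum sketch (FreeBSD). */
  #include <sys/capsicum.h>
  #include <fcntl.h>
  #include <unistd.h>

  int main(void) {
      int fd = open("input.txt", O_RDONLY);          /* open before sandboxing */
      if (fd < 0) return 1;

      cap_rights_t rights;
      cap_rights_init(&rights, CAP_READ, CAP_SEEK);  /* fd is now read-only */
      if (cap_rights_limit(fd, &rights) < 0) return 1;

      if (cap_enter() < 0) return 1;                 /* enter capability mode */

      char buf[256];
      ssize_t n = read(fd, buf, sizeof buf);         /* allowed: we hold CAP_READ */
      /* A fresh open("other.txt", O_RDONLY) here would fail with ECAPMODE. */
      return n < 0;
  }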


SE-Linux is the worst possible way to secure a Linux system; it's like when Microsoft went overboard on warnings instead of actually trying to solve the problem.

The main implementation difference in a capability based system is using a PowerBox to select files to give to a process, instead of letting the process access everything the user has rights to.


On Android you get no option: SE-Linux and seccomp are enabled by default, and there are other measures in place that top any other Linux-based OS.


The Burroughs mentioned by Knuth already had tagged memory, a systems programming language with unsafe code blocks, co-routines, bounds checking, Go-like compile speed (for 1961 hardware), compiler intrinsics instead of Assembly: the whole package.

Nowadays it's still available as Unisys ClearPath MCP, with high-level security as a selling point, while it will take generations to fix UNIX and C's adoption at large, if ever.


When someone performs surgery on one patient, the legacy medical impacts of that surgery are contained to at most 2 people and their immediate families. There will probably be fewer than 10 surgeries where someone might think about that -- and we don't begrudge a surgeon who doesn't.

When you add a commit to an OS, the legacy impact of that stays with the OS and must be un-done by anyone who changes it.


This was linked on HN a few days ago, contains links to dozens of classic papers back to the 1930s:

https://canvas.harvard.edu/courses/34992/assignments/syllabu... - Classics of Computer Science

and that page contains this link to a spreadsheet with links to over 150 historic papers and other sources:

https://docs.google.com/spreadsheets/d/1wS6O7-ZoFL7Cfjgt-kdh...


I find the comparison between medicine and computer science very interesting! I have never thought about it this way. Could you elaborate on how you "dig back"? Do you read old papers / books? I imagine that one would have to dig through much old stuff that just isn't interesting until you find something that could be really useful (like capability based security).


Look at the recent talks by Alan Kay, Ted Nelson, Joe Armstrong on YouTube. Watch "The Mother of All Demos" by Douglas Engelbart.

There is a lot of stuff online, just follow the threads. I've been pushing Capability Based Security for a decade... only in the last 6 hours did I learn that they had it at Xerox PARC, in an offhanded comment by Ted Nelson at the end of one of his videos, where he mentions the great ideas that got away.


One of my favorite talks by Joe Armstrong -- "Computer Science: A Guide for the Perplexed". He talks through what he feels are some important but forgotten ideas in CS.

https://youtu.be/rmueBVrLKcY


Thank you, I hadn’t seen this. I miss him dearly.


Thank you. This was great.


Well, better in some sense: they might be cleaner, or more robust on some axis, but perhaps not practical or efficient. But, with computing the landscape can change rapidly so it is always good to see if older ideas can now be made a reality.


"The limitation of a single desktop was put in place to allow children to ease into the desktop metaphor." Any source for this? I dont thinkchildren were the target customers at that time.


They didn't mean literal children I guess, but "children" in the sense of blank-slate individuals exploring a new space.

Adults can use training wheels too when first learning to ride a bike. But once they're competent, it's more expressive to ride without.


I agree. But I am wondering: I remember Linux had multiple desktops for a long time, but somehow this fad never stuck. So maybe one desktop was somehow a good idea?


I'm not sure fad is the right descriptor here. Unix WMs have had virtual desktops since the 90s (80s?), MacOS got them around 2006 and Windows in 2015.

Use by the non-initiated may not be that high, but that shouldn't be surprising given that user education is a very scarce resource and it's being spent elsewhere.


I meant as a default.


Aren't document tabs, e.g. browser tabs, an app-confined version of multiple desktops? The idea behind them is virtual screens of content you can switch between.

Every KDE I've installed (Slackware, Mandrake, Ubuntu, Fedora), to my recollection defaulted to multiple desktops.

I'm so happy Win10 has virtual desktops, as when moving from Linux to MS Windows - mandated by my workplace - that was a massive pain point.


Don’t all major OSes have virtual desktops now? A single desktop is very restrictive; it’s much easier to have some spatial separation between contexts.

It might not be the most visible feature for most users, but it is definitely an important one, which I would hate to see go away.


How do you use virtual desktops in Windows?



It’s not well known, but it exists. I can’t say exactly since when; it was a couple of years ago.


> Capability based security … allowed you to run any program, with no danger to your system.

‘No danger to your system’ shouldn't be conflated with ‘no danger to you.’ The problem with capabilities is that they reinforce and solidify the ‘smartphone model’: your data isn't yours, it's an app's, so what you're permitted to do with it is entirely controlled by someone else.


We shouldn't fear ideas just because they could be used to restrict some users of some systems. In the end, there will continue to be a market for less regulated hardware and OSes. Linux and the BSDs, as an end user, if you own the system, are still wide open. x86/x64 systems aren't getting more locked down in general, and there's a significant movement toward more open hardware (at least within some niches) with RISC-V.

Something like capability based security would be a godsend for the corporate and enterprise environment compared to what's commonly in place.


An unusual historical paper from Niklaus Wirth, about many "good and original ideas (that) turned out to be less brilliant than they first appeared."

http://pascal.hansotten.com/uploads/wirth/Good%20Ideas%20Wir...


See also Bret Victor's talk, "The Future of Programming". The conceit: it's 1973, he describes (with an overhead projector and transparencies) some then-innovative ideas that he hopes will be adopted in the future.

http://worrydream.com/dbx/


> Capability based security, for example was something that allowed you to run any program, with no danger to your system. It's not part of any common OS.

Basically, haven't we "reinvented" this via "The Eternal Mainframe"? Only now it's called Amazon Web Services?


That's not your system.


I think the same problems are present in medicine. For example, there's tons of evidence that fasting and rest can heal most things.

It's talked about in just about every tribal knowledge compendium. It's assisted by doctors in some countries, almost completely ignored in others.


> They had it at Xerox PARC, but Steve Jobs chose not to take that part.

At least they created Object Pascal and did two OSes with it, and later contributed to C++'s industry adoption, instead of being yet another C powerhouse cloning UNIX.


Could you name a few examples of ideas we missed? You made me quite curious.


How do you go about digging old PARC papers?


You start here,

https://archive.org/details/bitsavers_xerox

And then go over to Symbolics, Borland, whatever.


> Capability based security, ... They had it at Xerox PARC, but Steve Jobs chose not to take that part.

To be fair:

1) Apple II and Macintoshes were slow

2) and single user

3) and pre-Internet (no networking, even.)

Microsoft also had to make an early choice about what features to add to DOS, particularly networking or multiuser, and networking was chosen to implement first. (Probably because of Novell and other early networking competitors.)

> the PARC focus on replicating paper was a step backwards

"Replicating paper" was resurrected about 10 years ago when everybody was making apps with a UI like books turning pages. There was actually a programmer (Russian name if I receall) who specialized in consulting on that, advising companies on what algorithms to use and how to get the page flipping appearance they wanted. I thought it was pointless but hilarious, as it was totally ornamental.

https://en.wikipedia.org/wiki/Skeuomorph


In 2021, we're having difficulties surfacing even the history of computing and canonical documentation of the last two decades or so. Go search for anything about early HTML, early JavaScript, research results in CompSci not even ten years old, POSIX reference docs, or even up-to-date JVM javadoc pages using your favorite search engine. It'll bring up all kinds of content marketing, naive march-of-progress advertising fringe tech and PLs addressed at juniors, hit-and-miss StackOverflow articles, and masses of academic pseudo-science publications, with original content/sites being decommissioned in favor of shiny content-less "web 2.0" crap.

Fuck the algorithm!


That’s probably where the methodology used by trained historians is useful.

I created a History of Tech Design class [0] of 6 different tracks for design students. I’m not an historian, but I have some training in finding original sources, sorting them, accessing archives… it helped a lot, and I could use a lot of primary sources full of not-well-known details.

Back in the 90s it would have taken me weeks or months to just access this amount of historical material!

I had to turn this class into something accessible and interesting for design students / pros, but I certainly do not consider that dumbing down, just: Know your audience!

[0] https://workflowy.com/s/strate-history-of-te/a4ID6kKtznLwQC7...


Especially as you get back to 20 or 25 years ago, the idea that you can just do some web searches to uncover much more than surface information is somewhat naive. A lot of information isn't accessible to casual public search. You need to use libraries, talk to people, etc.


You would be surprised, as I was, how many raw documents and primary sources much, much older than 2000/1995 (to take your 20/25 year reference) are available, thanks to various digitization efforts. Check my notes for some nice sources dating back to the 60s.

(Not in my notes, but just yesterday I stumbled upon the original 1990 memo by Adobe co-founder John Warnock for the PDF project, Camelot. Great read.)

You are absolutely right that there’s so much more in boxes somewhere and in the minds of the people who were there, but as a starting point online archives are great. We can only encourage more digitization of more archives (it’s a lot of work: organizing and sorting even before scanning).



Oh, I don't disagree. For a recent project, I found some real gems at a time when I couldn't go to a good library. But it's definitely fragmentary once you get past digital-first content.


What do you think is the difference between 'research results in CompSci not even ten years old' and 'academic pseudo science publications'?

Do you just disagree with the research of the last ten years for some reason, and so dismiss it as not the real research and think there's some kind of under-appreciated research out there?


Please don't put words in my mouth. What I've said is that publication cadence is overwhelming, and academic search, much less general-purpose search, isn't up to it.


As a computer science graduate student, I am always surprised by how rarely my peers seem to know or care about the history of our field. I doubt many of them would write papers about computer science history even if the incentives were better.

I think it is somehow related to the power of computer science to change the human condition. Everyone is thinking about the future. Mathematicians also crave novelty, but I don't think they feel "my work could change the world" in the same way as CS researchers.

Learning about CS history would make us better researchers, and thus more likely to change the world, but that line of motivation is not direct enough to excite people. There is still so much low-hanging fruit that can be plucked with only shallow knowledge of the field.


A favourite quote, that appears on my Github profile:

> Computing is pop culture. [...] Pop culture holds a disdain for history. Pop culture is all about identity and feeling like you're participating, It has nothing to do with cooperation, the past or the future—it's living in the present. I think the same is true of most people who write code for money. They have no idea where [their culture came from]. - Alan Kay

Changed how I think about my career.


This is a wonderful quote, thank you for sharing.

I particularly like how he had the insight of making the statement true only for "people who write code for money." Growing up in various computing "scenes" (open source, hacking, demoscene) in the 90's and 2000's, nobody did anything for money. And it seemed everybody had the utmost respect for the history of the scene. Demosceners were all about "old school" stuff and the old Amiga days. Hacking ezines and forums retold the tales of Captain Crunch and other legendary hackers from yore. And the open source community was still full of completely antiquated practices that some would have questioned but nobody would have dared to disrespect.

Only when I started programming as a job did I encounter people who, strangely, had no interest in such things. There was plenty of enthusiasm for new languages, but little for their genealogy. It really seemed odd to me, and I think the lack of love for the "craft" and its history is ultimately what drove me out of industry and to academia.


To make new pop culture, cooperation is necessary; otherwise a culture stays niche. Diffusion of new culture works exactly like diffusion of technology, from innovators to late adopters. The paradox with culture is that when an innovation is adopted in the mainstream, late adopters do not perceive it as innovation, because it is now the normal thing to do. Innovators do, because brands validate new behavior, values, and rituals that are contrarian to the status quo.

Nike & BLM, Tesla & global warming, Apple and computing without a 1000 page manual.

Nike validated Kaepernick at a moment when, for the first time, more kids of color are born than white kids.

https://www.slideshare.net/superbrands_poland/douglas-holt-h...


Added to the substantial Alan Kay section of https://github.com/globalcitizen/taoup


What an amazing quote! Thanks for sharing.


The problem with this blanket rejection of "pop culture" as something lesser is that it's almost always deployed by "young fogey" conservatives, harkening back to some golden imperial age.

And of course these sorts of people don't actually have any in-depth historical knowledge.


I don’t read that quote as a blanket rejection of pop culture. Not at all.


I do


Suggested reading. This will take some searching.

- "As We May Think"

- Von Neumann's report on the EDVAC.

- The invention of index registers, originally called the "B Box". (The "A Box" being the main arithmetic unit.) Von Neumann missed that one.

- The original 19 page definition of ALGOL-60.

- HAKMEM, from MIT.

- A description of the SAGE air defense system.

- Something that describes how the Burroughs 5500, a very early stack machine, works.

- Something that describes how the IBM 1401 works. At one time, there were "business computers", all decimal, and they were very strange machines.

- Dijkstra's original P and V paper.

- Wirth's Pascal manual, the one with the compiler listing

- The Bell System Technical Journal issue that describes UNIX.

- Jim Blinn's "A Trip Down the Graphics Pipeline", for the basics of classical computer graphics.



Thanks for collecting the links, and to Animats for the great list!


Compared to those employed in an industry, few write about the history of trains, or wifi, or road building, and on and on. It's normal for only a few academics to be interested.

There are only a couple of fields that are different - media (movies/broadcast), military, medicine. Probably because of the built-in human drama that's easily accessible?

As others have mentioned, Computer Science has a little human drama but few have sacrificed their lives or appeared heroic. It's a pretty dry field to document - more similar to road building.


I like history but how is history going to get me a functional game released or land a job interview?

I don't need to know about DARPA to create a wordpress page or a C# Windows Service...


The GP is talking about (future) researchers in particular. Still, there’s a tremendous historical myopia among practicing software engineers as well, leading us as a field to reinvent old ideas in new clothes every ten years or so.


So what? Sometimes ideas have to be reinvented; maybe this time they succeed. There could be two reasons why old ideas (talking about software here) didn't succeed: one is that the hardware wasn't capable, the other is that 'they didn't cross the chasm' - both might be different now. (Third option: the idea is bad, in which case one shouldn't do it again, of course.)


You're still assuming a linear model of history. My point was that history is cyclical, the same trends come and go. Static vs dynamic typing. Thin client vs thick client. Local vs distributed. Key-value vs relational. Monolithic vs micro*. And so on.

Yes, sometimes these have to do with changing requirements or hardware capabilities, but more often they're just about a new generation wanting shiny new things rather than boring old things. Except that the shiny new things were the boring old things of the previous iteration.

Many ideas of yesteryear were not bad or even infeasible; indeed, they were successfully put into real-world use. Until the tide changed and they became unfashionable for whatever reason. And then they became fashionable again, but without a view of the history there's little synthesis of ideas, little learning from past experiences beyond the current cycle.


This cycle was called the "Wheel of Reincarnation" in this 1968 paper by Myer and Sutherland about display processors.

http://cva.stanford.edu/classes/cs99s/papers/myer-sutherland...


Researchers might have more use for history... but day to day, in the field, programmers? not so much. I would think it's the difference between math researchers and accountants... You don't need to know history to balance a check book.


The difference is that the way to balance a check book doesn't change every five years, or these days more like every year. Maybe if programmers knew a bit more about history, they wouldn't need to reinvent it so often and could be more like accountants.

But programmers don't want to be accountants, they want to use new and shiny things even when it's not the pragmatic choice, even when it would be better engineering to understand things in a broader context, to understand the history behind these "new" (actually old) things.


Accounting has been around for hundreds of years. The reason it doesn't change? Because it's not a "new" field.

Same for stuff like construction and cooking... stuff that's been around a very long time has become rather stable and of course it isn't going to be "reinvented every five years".

> when it would be better engineering

again... I don't need to understand history to understand Big O and which algorithm to use in the right circumstances, or when procedural programming is preferred to functional.

At no point in my career have I ever NEEDED history to decide which library to use, which sorting algorithm is suitable, or how to create functional APIs or Windows services.

History is nice to know... but still 100% not needed at any stage of the process.

I'm not sure where the disconnect is because I like history... I like learning about American history, world history, programming history... but none of that has ever been relevant in any of my jobs when it comes to day to day decisions and projects.

"don't reinvent the wheel" is a separate topic and while I can agree that many things get reinvented - computers are still a very young industry compared to stuff like accounting and architecture.


Programmers in the field have little use for computer science in general. You don't need to know anything about cyclomatic complexity when assembling a shiny javascript widget.

Until you do, of course. At which point it's clear you should've known about it years ago.


cyclomatic complexity and computer science aren't history.

Not sure where the disconnect is, because at no point have I said you shouldn't learn how to do your job correctly - and understanding Big O, SOLID, YAGNI, etc. are all important to know... but have nothing to do with history.

So again... history isn't needed to be a good engineer. And being a good engineer and KISS programming is possible without understanding 100 years of history.


> how is history going to land me a job interview

Step 1. Go to software conference.

Step 2. Get into hallway track conversation where a historical anecdote and its lessons are relevant.

Step 3. Say things that prompt y'all to have insight and establish trust that you're thoughtful about how to build software.


  > Step 1. Show 10+ years of experience
  > Step 2. Never talk about history because I don't need history to solve business problems with programming
  > Step 3. Make 6 figures.
A made-up anecdotal example about talking at some conference with big words won't provide business value and isn't useful in day-to-day problem solving.

So again... 99% of programmers won't ever use the history of DARPA and the birth of the internet to land a job.


This thread is about computer science, not specifically programming.


I think this is the first time I've seen the same person simultaneously assert that:

1. What really matters is delivering business value.

2. Networking at conferences and talking about the social context of technology is useless.


I won’t try to give a list of examples of how historical knowledge is useful for a developer, but just compare this statement with different fields:

An architect will learn about the history of hundreds of buildings at school.

A product designer, say a designer designing chairs, knows about the history of chairs and has dozens of examples of chairs in their mind.

That developers strive to be so ahistorical is a sad state of affairs. It makes the whole field naive and stuck in a perpetual present.

But digital art and web design are in the same mode too, so it’s something about the whole digital fields.

The only exception is game design, where it’s considered important and valued to learn about old games, play them, understand how they worked.


If you ever need to read something in one of Knuth's books, that's useful history for you.

I know because I was looking for how to efficiently generate power sets, and eventually the only good answer was in TAoCP!!!
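(For the curious: the basic idea, not claiming this is the exact algorithm TAoCP gives, is just binary counting over bitmasks, where bit i of the counter says whether element i is in the current subset.)

  /* Enumerate all 2^n subsets of {0, ..., n-1}; fine for n up to 31 here. */
  #include <stdio.h>

  static void power_set(unsigned n) {
      for (unsigned long mask = 0; mask < (1UL << n); mask++) {
          printf("{");
          for (unsigned i = 0; i < n; i++)
              if (mask & (1UL << i))
                  printf(" %u", i);
          printf(" }\n");
      }
  }

  int main(void) {
      power_set(3);   /* prints the 8 subsets of {0, 1, 2} */
      return 0;
  }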

So the answer is: history, the kind of history full of technical details that Knuth argues for, is extremely useful for our profession.

Is it the empty pop history, made without algorithm descriptions and full of only anecdotes, that you are thinking about? Knuth argues against that!


I'm intrigued which method you are referencing. If I recall, he goes through many. :)


The optimization bottleneck you're fighting against in the game might have been solved 50 years ago; if you don't know history, you don't have a complete toolkit of all the good tools.

There are good reasons to learn assembler, BASIC, Forth, Lisp, Smalltalk, Pascal, C, C++, Python, Java, etc., even if you don't use them on a regular basis.


I don't need history to know algorithms... so "that was solved 50 years ago" doesn't teach me how to build a balanced node tree or actually matter if I simply know to use sort library method A instead of B.

> good reasons to learn

Of course... but none of that needs HISTORY to use effectively.

Knowing functional programming, procedural, async, etc, etc, etc... I can know all of that without needing to know history.

One doesn't need the other.


History is not just about time — what happened when.

An equally important part is the ordering and the reasons something happened.

You can know how to write simple programs in all paradigms, but fail to understand what to use when. You might have memorized a long list of algorithms but not understand the trade-offs involved in choosing one.

Of course history is not the only way to understand this, but it is definitely one good method.


Well, like I said... I enjoy history. But I've never used history to solve business problems.

They are related and good to know... but I don't need to know history to flip burgers, run a business or plan my next sprint.

I'm not arguing history is unimportant; it's good to know, and even possibly useful just to have ideas of what to do - or what not to do... I'm just arguing its importance is over-emphasized.


Just feels like you’re arguing that only short term memory matters. Have you never encountered a problem that became easy once you talked to someone with more background on the problem than you?

Why would you want to handicap yourself or your field by making it harder to “talk” to those who came a little bit before you?

Your building is resting on their foundations whether you acknowledge it or not. They may have had a solution for the problem you’re facing.


> I've never used history to solve business problems.

As Knuth was trying to point out, history should be more than tables and time lines.

It should provide details of how important advances were made and how solutions to big problems were found.

And the reason being: since those historical details aren't being recorded, it's more than likely you are in fact solving business problems today using the same techniques discovered in the past; you just don't know it.


What I see is not over-emphasizing the importance of history but de-emphasizing it as a justification for present day sufficiency.

If you don't like the term, use another one. But without history you're bound to cargo-cult computer science, where what you have to do next is plan your sprint. Why do you need a sprint anyway? You might never know if you need a sprint anymore if you don't know what problem it was supposed to fix.


You don't use history to solve the problems. You use history to find when people were fighting similar problems. Then, you link that to when other people solved still similar problems. And you look for what changed between, to see if you can leverage that.


At the junior programmer level, yes, that can be true. At the principal engineer level it’s not true. At that level you have to invent new ways of doing things when existing ways don’t cut it. Looking into the past is supercharging.


> Well like I said... I enjoy history.

It totally looks like you don't. Are you sure about this sentence? Every other sentence seems to contradict this one.


None of my statements go against learning history.

I probably don't do as much as I could/should... but I am a successful programmer who has never needed "history" to understand basic programming principles.

I seem to consider history and good engineering to be different topics, while everyone else seems to think you can't understand Big O, YAGNI, SOLID, the difference between Functional Programming and Procedural Programming, etc, etc, etc without understanding the history of how assembly turned into C...

Because History is interesting... but not needed to be a good engineer and a good programmer.


The disdain for history as an expression of you not knowing how to code wordpress is strange considering that, as you allude to, you don't know the latter.

You may want to think about whether learning wordpress is the same as learning the history of HTML at the decade level.

Since you're publicly exposing your thoughts, let me tell you that what you are doing is expressing your ignorance and being proud of it. You may want to drop the second part.


I've not shown any disdain for history or an inability to code wordpress... in fact, I've said the opposite: I like history and simply don't need it to code.

I don't need to know history to code html, php, javascript, etc. I'm a very successful programmer without using the "history" of programming in any shape or form.

"expressing your ignorance" or... I'm simply expressing that I'm a successful programmer who's never used "history" in a decade+ of programming...

You may want to reevaluate "ignorance", as you make claims that you can't support (IE: "disdain" for history and "can't code wordpress" - again, neither of which are statements I've made). Reread my comments.

recap of them: I like history. I don't need history to code wordpress or windows services.


I was not entirely fair to you when trying to make my point. In fact I could see this exact argument of yours coming up and ignored it completely.

What I want to say is that there are entire domains of competence that are irrelevant to a large degree to doing the day's job. Knowing the history of computing will not necessarily help in making a better program in a way that can be noticed. But the history of computing is closely related to computing, unlike, for example, the history of knitting. Although neither will make you a better programmer now, the first has a much better chance at that than the second (although we don't exactly know that). Thus, when someone downplays the importance of the history of programming to improving a programmer's mastery, I see it as touting ignorance at not seeing the connections between the two: self-sufficiency, the arrogance of an ego trapped by the light of today's fads, pop culture that doesn't care about the past or the future.

Or maybe I'm projecting...


I mean, at a conceptual level, knowing history is important in the "ignorance of history will lead to repeated history" line of thinking - and I don't disagree. ... and knowing why decisions were made can help determine which tools to use (IE: Why use static typing or duck typing and when to use the other... or when to use procedural programming, async, functional, etc...)

Maybe it's a combination of you "projecting" and me not being clear. And trying to discuss what could be deep conversations in a little more than a twitter tag of 140 chars.

I'm personally a .Net Developer and while I do stick with the latest versions (IE: .Net Core), I also have enough experience to know the past (IE: ADO vs Entity Frameworks). I was trained in school on a mainframe (IBM DB2 with RPG and SQL). I'm not trying to stick with the latest "hotness" as most of MY work is actually done via Windows Services, API interactions and moving files around - definitely not the latest fad. With that said, I am working on using good tools to get my job done faster/better - IE: CI/CD pipelines to automate builds, testing, deployment, etc. Tools that didn't exist 5 years ago could be the latest fad but I don't think that's what you are suggesting.

I'm more worried about learning different things (IE: Functional, procedural, async, parallel, etc) than I am worried about "history" of those. When I pick up a programming book, I'm less worried about the "Microsoft created version 1 in 2000 and version 2 in 2005 and..." and more concerned about do's, do not's, best practices, etc.

Maybe it's my personality and the way I "deemphasize" history... it's not that I think it's unimportant... I just think that it's more important to focus on other things. Learning some history along the way is good and fun but it's never been my focus and I've never used what I consider "history" in an interview or a job on a day to day. Maybe that's rubbing people the wrong way lol


Wow, He used my wife's portrait of himself. I was there the day she took it, it was the opportunity of a lifetime to meet one of my heroes, and he didn't disappoint. Knuth is incredibly sharp, lightning sharp for his age. He plays Emacs as masterfully as he plays his pipe organ in his home. I watched as he whipped around different buffers of literate programming, as he demoed some new angle on the properties of Sudoku solving algorithms. I asked him if he had been keeping up with machine learning, and he instantly name-dropped a dozen papers/authors he had read recently.

I have to say, I'm worried. Just in 2020, we lost a lot of greats, including John Conway, Larry Tesler, Chuck Peddle, Bert Sutherland, Frances Allen, and most recently Brad Cox. We're losing not just a lot of history, but a lot of the historians themselves, the people who can tell the stories that may not have been written down yet.


That is why this project is so important:

https://www.computerhistory.org/collections/oralhistories/

Except for John Conway, all the other people you mentioned were interviewed.


One problem in understanding computer science history is the essentially perpetual copyright laws. It's illegal to view or share many important past programs' source code, a problem not shared by other fields. Imagine discussing literature without being allowed to read it! There are exceptions, but they are exceptions.

The rise of open source software is finally letting us view some of that software. But we may never publicly know what some past giants did.


Program source code is just as copyrighted as late 20th century literature.

However, it's not generally available (unlike late 20th century literature). If windows sold with the source code (even if you were forbidden from doing anything with it), then we would be reading windows source code in our CS classes.


Microsoft even had a "research kernel" they licensed out to unis, and we did read (parts of) that in CS classes. (but still of course very different than actually having the full source of real systems through history available for general study)


Ride the Mach continuum between the micro and monolithic... not really interesting. Basically opening up user-mode device drivers or user-mode code in the kernel (sandboxes).


No we wouldn’t. Source code is notoriously hard to read if you’re not familiar with it, and optimized source code using internal APIs is even harder.

If you’re learning an algorithm, just stick to pseudocode; otherwise you’re stuck with MMIX bullshit.


MMIX is not bullshit. Knuth is interested in concrete complexity as well as asymptotic complexity, e.g. why quicksort is faster than heapsort, so he needs a concrete model of computation. It may be overkill if you aren't interested in concrete complexity, but it serves a purpose.


There’s a reason why no one explains abstract algorithms in assembly. It’s a trash idea that obscures more than it illuminates. It’s a flex. A flex that is marginally above explaining an algorithm through a wiring diagram of thermionic valves.


No one, except Donald Knuth himself, widely regarded as one of the best explainers of algorithms we have?


I don’t think agreeing that no one else on the planet writes examples in assembly is the dunk you think it is.

Have you ever heard of anyone reaching for those books, or even reading all of them? He may be heralded because of when he wrote them, but there are much better-written books in 2021.


I mean Joyce is notoriously hard to read, no?


While there is only one Joyce, please tell me how glonkglonk.cpp from build 627252 of Windows 3.11 is a unique example that provides more insight into a heap than five lines of pseudocode.


"Here's 5 lines of pseudocode" "Here's a real-world implementation of the pseudocode. Note this section does not improve the asymptotic case, but has significant constant-factor advantages for small inputs."

[edit]

I noticed you mentioned "heap"; assuming you're talking about the data structure, that would probably be in a different class than one where you read Windows source code. Just like you probably wouldn't read Joyce in a class on composing newspaper articles.

I still think it's useful to dissect existing code to get an idea of how complicated it really can be. A CS undergrad may graduate without ever reading or writing a single program over 1 kloc. Even from an academic perspective, having a sense that real-world software is usually hundreds or thousands of kloc is likely useful.
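To make the constant-factor point concrete, here is a minimal, hypothetical sketch in Python (the cutoff value and names are my own, not from any particular codebase): the textbook merge sort is a few lines of pseudocode, while a real implementation commonly switches to insertion sort below some small size, which changes nothing asymptotically but is exactly the kind of detail you only learn by dissecting real code.

    CUTOFF = 16  # illustrative guess; real libraries tune this empirically

    def insertion_sort(a, lo, hi):
        # Sort a[lo:hi] in place; cheap for small ranges.
        for i in range(lo + 1, hi):
            x = a[i]
            j = i - 1
            while j >= lo and a[j] > x:
                a[j + 1] = a[j]
                j -= 1
            a[j + 1] = x

    def merge_sort(a, lo=0, hi=None):
        if hi is None:
            hi = len(a)
        if hi - lo <= CUTOFF:
            # The "real-world" divergence from the five-line pseudocode:
            # no asymptotic change, just a constant-factor win on small inputs.
            insertion_sort(a, lo, hi)
            return
        mid = (lo + hi) // 2
        merge_sort(a, lo, mid)
        merge_sort(a, mid, hi)
        merged, i, j = [], lo, mid
        while i < mid and j < hi:
            if a[i] <= a[j]:
                merged.append(a[i]); i += 1
            else:
                merged.append(a[j]); j += 1
        merged.extend(a[i:mid])
        merged.extend(a[j:hi])
        a[lo:hi] = merged

Both versions compute the same thing; only the second explains why production sorting code rarely looks like the pseudocode in a textbook.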


A typical experience when I re-implement a technique from an academic paper:

1) Read the paper's high level description and pseudocode.

2) Write some test cases.

3) Write code based on the paper's pseudocode.

4) Observe that my test case results do not match what the authors show.

4a) Fiddle repeatedly with possible ambiguities in the pseudocode-to-real-code translation until test cases match publication. This involves a certain amount of mind reading and speculation. "They said argmax, but there are possible singularities... maybe argmax followed by clamping."

4b) Request source code from the authors. Upon receiving it, find that their real implementation diverges in small but significant ways from the pseudocode that was published. They have mixed actual-code algorithm outputs with incomplete pseudocode algorithm descriptions in the paper.
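To make 4a concrete, a purely hypothetical example (the function, names, and epsilon are mine, not from any real paper): the pseudocode says only "take the argmax", but the published numbers only reproduce once you also guess an undocumented clamp:

    import numpy as np

    EPS = 1e-6  # assumed tolerance; the paper never states one

    def select_probability(scores, probs):
        # Literal reading of the pseudocode: pick the probability at the best score.
        i = int(np.argmax(scores))
        p = probs[i]
        # Undocumented step, reverse-engineered from the published outputs:
        # clamp away near-singular values before a later log/division.
        return float(np.clip(p, EPS, 1.0 - EPS))

Details like that last line are exactly what only the authors' real source code (step 4b) settles.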


I disagree

I hate reading pseudocode because I always have to wonder "what did the person writing this actually have in mind?"

Actual source code is literally what's going on.


Reading source code, especially source code that you're not familiar with, is a crucial skill for any programmer.

I've found annotated source code to be an extremely useful learning resource, personally.


Is the actual source code that important? I’m trying to think of what I’m missing. I’m personally more interested in the comments left in the code of some of those old projects than the actual code, which was probably full of bugs and a bit crufty, just like the stuff we write today.


Yes it is important. Sometimes details matter, and often the only place details can be found is actual source code.


Fortunately people are putting up some of that early work: http://www.softwarepreservation.org/projects


In most active and growing fields (medicine is one example), the history of the field is generally ignored by students and practitioners.

There are a few pleasant exceptions. For instance, Neurology Minute has had occasional bits on the history of neurology. See https://neurologyminute.libsyn.com/

However, when reading something historical (for instance, this interesting podcast on the history of the Inverted Brachioradialis Reflex: https://neurologyminute.libsyn.com/history-of-neurology-3-hx...), there is no expectation that it will actually contribute to practice.


Funny you should mention Neurology in this context - my experience in South Africa has been the same.

South Africa is renowned for grooming some of the world's best doctors (as subjective as this sounds, it is largely owed to the relentless influx of patients in state hospitals and the inevitable hands-on experience that follows).

So practically speaking, you have some very young yet very experienced doctors emerging from the assorted residency programmes and specialisations.

And yet the history is largely overlooked - which is mostly a function of "not enough time to learn this as well" - and I know this because I was fortunate enough to collaborate with a very old and respected Neurologist in Pretoria who has made it his personal mission to collate all the historical tidbits on his field, so that he can pass it on to the next generation.

He has subsequently prepared a rather extensive 2,000+ page archive on the history of Neurology (all manually typed up in his old Word installation), and I'm helping him transpose it onto an easily searchable website so that his work can live on.

Anyhow your comment reminded me of this, thought I'd share :)


That sounds fascinating. Can't wait for you to finish that. I also live in Pretoria. You know we have the national library here, which I've never visited, but I'm interested in SA history. Cheers, Paul


Here is an interesting Cornell reading group+papers on the history of Instruction Set Architectures - https://www.cs.cornell.edu/courses/cs7491/2020sp/


Knuth writes eloquently what I have thought for a while. Some of these early works and ideas, even and perhaps especially those that did not evolve into something successful today, are worthy of study. I strongly suspect some of them could take us in new directions. Some of those that were “blocked” may now be unblocked by related developments in our field. Maybe we have to go back to go forwards.

https://blog.eutopian.io/the-next-big-thing-go-back-to-the-f...


This this this.

We’ve sort of settled on the architecture of our “stack” mostly by historical accident. It has constantly been the local maximum for various variables throughout our short history. And because of the inertia of the mass of what we’ve built that mostly works, we don’t look outside very far.

There are a lot of excellent ideas that were too early, or due to a coin toss not taken, or killed because of winner-takes-all market competition, or because they were just some footnote at the bottom of a paper some researcher wrote and never investigated. These ideas are ripe for investigation.

I’ve been reading papers and looking at source code from the early days of programming languages. There was definitely more interplay between the design of the machine’s API (the instruction set) and programming-language designers than we see these days. Our ISAs are made to emulate a PDP-11 running C (a glib take, I know, but not entirely inaccurate). Oh yeah, we also now get an instruction that handles JavaScript floats better. :-)


I often find that when I come across documentation that was written for a practitioner, say, 50 years ago, it helps clarify why things work the way they do today - the new, "streamlined" way of doing something only ever makes sense to me after I understand what the old way actually was.


The main problem seems to be that computer scientists don't care about history. That seems a bit strange to me, since there is no lack of people to analyse the historical parts of mathematics or physics. Maybe the problem is that computer science history doesn't seem like history yet, since it is relatively recent?


Because a lot of CS history is only relevant in a larger/humanities context. In terms of absolute technical value, the historical contributions are not necessarily as valuable. Russell and Whitehead's Principia Mathematica has next to no practical use in modern-day software engineering. Its existence helped induce Church and Turing to create modern computer science theory, but the discrete mathematics in the book itself holds little value for your average computer scientist outside of being an intellectual exercise.

Another issue is that CS is a branch of applied math that also happens to be extremely profitable. Math operates on much larger timescales than other domains, but business and economics demand immediate attribution. McCulloch and Pitts had their Hebbian neural network almost a century ago. Kleene, of regex fame, designed his neural network in 1951 [0]. These vast timescales are quite normal for advanced mathematics. But in 2010 GPUs became cheap, and the obscure theories of the pre-GOFAI age were suddenly immensely profitable. Of course the names and associations go to the most recent implementors, the ones who actually applied the ideas and made them work, rather than those who dreamt them up a century ago.

[0] https://news.ycombinator.com/item?id=25882079


I had the impression that there was a fair amount of CS history that only shone much later, when CPUs, I/O, memory, etc. were big enough or fast enough for it to become practical.

Edit: Meaning there are pretty straightforward reasons to care about history.


You can definitely come up with ideas that are too early for the hardware. When I was doing proof-of-correctness work in the early 1980s, it took 45 minutes to run something that now runs in a second. I had a nice ragdoll physics system in the late 1990s, and on a 100 MIPS machine it was about a quarter of real-time speed. Now games have ragdolls. Around 1970, I was working on 3D graphics with hidden surface elimination. I had it working, but I had to monopolize a million-dollar UNIVAC mainframe in the middle of the night to do it at 2 FPS. Usually I just ran batch jobs and used a pen plotter. Asked to figure out a way to create drawings of parts from tool paths, I thought of what are now called octrees, figured out how much memory would be needed, knew I had 64K of 36-bit words, and abandoned that approach. At a million dollars a megabyte, it was unaffordable. TCP was at one time considered too much compute for PC-sized machines. This produced a decade of now-forgotten "PC LAN" systems.

Some ideas just need a big engine.


Re the first paragraph: do you think mathematics and physics are different? It isn't like the old papers are relevant after hundreds of years; others have written the same things in more distilled and easier-to-read formats.


Well, most students in (contemporary) math programs don't learn to derive their axioms from Euclid (construction of the naturals is usually done via Peano) or their calculus from Newton's Principia, nor do they learn their set theory from Cantor's original papers. Instead they learn from modern textbooks that make only a passing reference to the original authors.


Yeah, that was my point. I wondered what your point was.


People don't want to look back at

- how far we came in each decade

- how brilliant people had to be to invent ideas that feel obvious to us

- how much people got done with previous decades' technology

- how many times the consensus of expert opinion mis-predicted the next development

because we want to believe

- the current state of the art is closer to perfection than it is to the last generation

- we only have to embrace the obvious

- people who are one step behind the times are at an insurmountable disadvantage to us who are up-to-date

- the future will bring nothing more than a higher perfection of the ideas we embrace now

It's intellectual millennialism. We want to think that the new ideas we receive at the beginning of our career are the final piece of the puzzle, that there's no point to the generations beyond us. Our ideas give us special powers to accomplish things that no previous generation could accomplish, and no subsequent generation will accomplish much more than we did, or do the same things much more easily. We are standing at the end of history. We are the generation that left the desert and entered the promised land.

History upsets all that. History teaches us that the future will stand in relation to us as we stand in relation to the past, whereas we want to believe that we are equals with the future and superior to the past.


The main problem is that computer researchers don't always write up their ideas in papers. And source code is hard to read, system dependent - you can't understand it without being familiar with hardware which may not even exist any more - and may be impossible to access.

This isn't really about history. This is about ideas which could still be useful today but which have been forgotten, for a variety of reasons.

Most CS departments don't have anyone who specialises in this. And academic historians have other priorities.

Calling this dumbing down doesn't help. It's not about historians "dumbing down" CS, it's about not understanding the difference between what historians do - create narratives - and what archivists and researchers do, which is more closely related to continuing research and development.

Physics actually has the same problem. Anything outside a limited mainstream is forgotten very quickly. Math seems less prone to it, perhaps because math is made of academic papers and nothing else, and there's more of a tradition of open exploration and communication.


I would submit that the “Graphics Gems” series of books does document a lot of rendering techniques in their historical context. They were written to be current for practitioners at the time, but the connections are drawn.

Similarly Michael Abrash’s articles on the development of Doom et al.


Also I guess Fabien Sanglard's "Game Engine Black Books" (on Doom and Wolfenstein 3D: https://fabiensanglard.net/gebb/), and John Aycock's book "Retrogame Archeology: Exploring Old Computer Games".

(Wonder why all these examples are from computer games…)


I suppose to Knuth’s point there is no “book of the history of graphics algorithms”. The information exists, and some people know how to find it, but we don’t choose to write the book.


I wonder how much this is born from the fact that computing outwardly appears very transient.

Our interfaces with computing have changed so radically, and many things are legitimately obsolete for the majority of us; no one is likely to reach for a punch card or a 3.5" floppy (might be a few edge cases on this one :) ).

Most of this is the facade of computing, though, and many of the underlying fundamentals are the same, but I feel that this outward appearance of transience leaves us flippant about looking back.

It's not a great comparison, but let's look at military history: many of the tactics and much of the hardware remain the same for decades, with design and development taking a significant portion of that time. The F-35 has been in development for nearly two decades and is only now at the beginning of its service life.


This article returns error 500 for me. Seriously, STOP rendering your static web content with run-time server-side rendering! There is absolutely no reason this article couldn't be delivered as static assets by a CDN.


I’ve been saying the same thing so much that I should probably just get it tattooed on my forehead.

It’s crazy to think about the carbon we are pumping into the atmosphere in order to do the same computations over and over again. Same goes for layout and rendering on the client. There’s no reason we can’t just do this for a few standard widths and send that static content to the client.
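For what it's worth, a minimal sketch of the alternative, build-time pre-rendering (the file names, template, and publish() helper are made up for illustration): render the article once when it is published, write plain HTML to disk, and let a CDN serve that file, so the rendering work happens once rather than on every request.

    from pathlib import Path
    from jinja2 import Template  # any templating library would do

    ARTICLE_TEMPLATE = Template(
        "<html><head><title>{{ title }}</title></head>"
        "<body><article>{{ body }}</article></body></html>"
    )

    def publish(slug, title, body_html, out_dir="public"):
        # Runs once at publish/build time, never at request time.
        out = Path(out_dir) / (slug + ".html")
        out.parent.mkdir(parents=True, exist_ok=True)
        out.write_text(ARTICLE_TEMPLATE.render(title=title, body=body_html),
                       encoding="utf-8")
        return out

From there it's static files plus long cache headers at the CDN; the origin never renders the page per request.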


> Operating Systems. I have at home Edsger Dijkstra's source code for the operating system he wrote in 1965. Nobody has looked at it, and we should.

I'd love to see this. Please put it on your webpage Dr. Knuth!


Someone (Randall Neff) from the Computer History Museum has scanned a lot of the documents from Knuth's personal collection; search for "Dijkstra" on this page: https://archive.computerhistory.org/resources/text/Knuth_Don...

The following document is labelled as the source code (the OS was called "THE"; see https://en.wikipedia.org/w/index.php?title=THE_multiprogramm...): https://archive.computerhistory.org/resources/text/Knuth_Don... (254 pages of assembly code), but there are also other related documents listed on the page.


I was going to say the same thing, but it’s probably a cardboard box filled with paper. Somebody who understands the stuff needs to spend the time to go through it, scan it, and catalog it. And given the time crunch for the rest of Vol. 4, not to mention the rest of the series, I don’t want Knuth doing it.


Seems to me that there are multiple histories of computing. One is the history of theoretical computer science, which I think is what Knuth is referring to. This is actually fairly well preserved in academia, and anyone with a CS background should have gone through it. Another is the history of programming, of which we know the origins but which of late has become nearly impossible to track. The last is the social history of the internet, which doesn't require a technical background.


> This is actually fairly well preserved in academia and anyone with a CS background should have gone through it.

I don't think I've been exposed to it, but I would love to be. Where did you learn about TCS history?


> There was a brilliant programmer at Digitek who had completely novel and now unknown ideas for software development; he never published anything, but you could read and analyze his source code.

Does anyone know whom Knuth was referring to? Did this have anything to do with the work on the PL/I compiler?


Seems to have been Jim Dunlap (James R. Dunlap), who wrote many Fortran compilers. Just posted a comment here: https://news.ycombinator.com/item?id=25955227


I feel the same way about the history of the Web. There are hundreds of browsers we could be using, each one with its own unique features, advantages, and, dare I say, beauty. Really, writing for ANY browser shows you why things are the way they are today, and helps you not reinvent Unix, poorly :D


I wonder if Architecture - a much older trade, also having immediate commercial value and expensive intellectual property - has a technical history?

Or accountancy - another old trade with immediate commercial value, though perhaps without as much intellectual property.


Umm. yes?


This type of history is also crucial in the fight against overreaching patent and copyright.

Talking in broad strokes about a novel solution that is later patented probably won’t get the patent invalidated. Showing the code for the implementation would.


Excellent point and also a very important one!


Martin Campbell-Kelly was my lecturer for "History of Computing" at Warwick University in 1986. Nice guy, and it was one of the few CS courses I actually enjoyed :-)



