Fred Brooks has died (twitter.com/stevebellovin)
1516 points by tkhattra on Nov 18, 2022 | 211 comments



I exchanged a few emails with Fred over the years. RIP Fred. Thank you so much for all the wisdom. Sharing one of the last responses here:

    Thanks for your kind words.  You will find lots of condensed wisdom in the three software books I 
    value most:
    
    DeMarco & Lister Peopleware

    2007. Software engineering: Barry Boehm's lifetime contributions to software development, 
    management and research. Ed. by Richard Selby.

    Hoffman, Daniel M.; Weiss David M. (Eds.): Software Fundamentals – Collected Papers by David L. 
    Parnas, 2001, Addison-Wesley, ISBN 0-201-70369-6.

    You might also like my later book on technical design in general:  The Design of Design.  Start 
    with Part II.


I can recommend The Design of Design. It's a bit chatty, but I haven't seen the material in there elsewhere, and it did change my perspective quite a bit, to the point where I could see some "conventional wisdom" at the time being entirely wrong.


> but I haven't seen the material in there elsewhere

Indeed. The references are full of pointers to studies from the Design Studies journal, in-situ work studies, works of philosophy, etc.


The Design of Design was one of the primary inspirations for my open source project Semantic UI.

"[Progressive truthfulness] is perhaps a better way to build models of physical objects...Start with a model that is fully detailed but only resembles what is wanted. Then, one adjusts one attribute after another, bringing the result ever closer to the mental vision of the new creation, or to the real properties of a real-world object

...Starting with exemplars that themselves have consistency of style ensures that such consistency is the designer's to lose."

Frederick Brooks - The Design of Design


Thanks for sharing this.


MMM deserves all the attention it gets, but there's more!

1. I've got to track down the source of the quote (it may be the linked video), but Brooks has said that the most important architectural decision he made was to have an eight-bit byte rather than the cheaper 7 bits (Edit: 6 bits) being considered for the IBM 360. To call that influential is an understatement.

2. And he has said the most important management decision was sending Ted Codd to graduate school, where Codd laid the foundation for what became relational databases.

3. A paper [0] he co-authored with Amdahl and Blaauw introduced the term 'architecture' to computer hardware, later borrowed for software. From the first page: "The term architecture is used here to describe the attributes of a system as seen by the programmer, i.e., the conceptual structure and functional behavior, as distinct from the organization of the data flow and controls, the logical design, and the physical implementation."

He gave an interesting talk at the 50th anniversary of the International Conference on Software Engineering (ICSE) a few years ago. [1]

[0] 'Architecture of the IBM System/360', Amdahl, Blaauw, Brooks.

[1] https://www.youtube.com/watch?v=StN49re9Nq8


> eight bit byte rather than the cheaper 7 bits being considered for the IBM 360

That's 8-bit vs. 6-bit bytes. See "Interview: An interview with Fred Brooks", Communications of the ACM, Volume 58, Number 11 (2015), Pages 36-40 https://dl.acm.org/doi/fullHtml/10.1145/2822519 .

> Gene's machine was based on the existing 6-bit byte and multiples of that: 24-bit instructions and a 48-bit instruction or floating point ... Of all my technical accomplishments, making the 8-bit byte decision is far and away the most important. The reason was that it opened up the lowercase alphabet. I saw language processing as being another new market area that we were not in, and could not get into very well as long we were doing 6-bit character sets.

From your [0], "Architecture of the IBM System/360" (1964) at https://cpb-us-w2.wpmucdn.com/sites.gatech.edu/dist/8/175/fi... see the section "Character size, 6 vs 4/8", which discusses 4/6, 6, and 8-bit codes and the reasoning for 8-bit, and which comments:

> The selection of the 8-bit character size in 1961 proved wise by 1963, when the American Standards Association adopted a 7-bit standard character code for information interchange (ASCII).

FWIW, [0] is from April 1964. He also used "computer architecture" in the earlier "Architectural Philosophy", which is chapter 2 of the 1962 book "Planning A Computer System" concerning Project Stretch, at https://archive.org/details/bitsavers_ibm7030Plam_46781927/p...
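
As rough, illustrative arithmetic (the punctuation count below is my assumption, not a number from the papers), the code-space pressure looks like this:

    # Rough arithmetic only: why 6-bit bytes precluded lowercase, while
    # 7 bits sufficed for ASCII and 8 bits left headroom.
    letters = 26 + 26          # upper + lower case
    digits = 10
    punctuation = 20           # assumed rough minimum of common symbols
    needed = letters + digits + punctuation   # ~82 printable characters
    for bits in (6, 7, 8):
        codes = 2 ** bits
        print(f"{bits}-bit byte: {codes} codes:",
              "too small" if codes < needed else "fits")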


Thank you for the corrections, I learned something.


7 bits was enough for ASCII and lower case, so why not 7?


I'd guess because it's not an even number. I don't know why odd numbers of bits were considered infeasible, but there hasn't been a single computer architecture with an odd number of bits as word length: https://en.wikipedia.org/wiki/Word_(computer_architecture)


??? The table on that article contains plenty of architectures with odd bits per word, for example the Apollo Guidance Computer (15). Odd bits often resulted from sign or parity bits. And I'm shocked to see decimal digits were sometimes encoded as 5 to 7 bits to a digit (bi-quinary coded decimal) rather than 4 (binary coded decimal), e.g. in the IBM 650 (10 digits and a sign bit), which used 71 bits per word - a prime number! Of course "bit" is not the right term here, as software can't access them. But there are 71 physical switches exposed to the user for input.


That might give a clue: an 8-bit word can fit two BCD digits.
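
For illustration, a minimal sketch of packed BCD, two 4-bit digits per 8-bit byte:

    # Packed BCD: each decimal digit uses 4 bits, so one 8-bit byte
    # holds exactly two digits with no wasted space.
    def pack_bcd(tens: int, ones: int) -> int:
        return (tens << 4) | ones

    def unpack_bcd(byte: int) -> tuple[int, int]:
        return (byte >> 4) & 0xF, byte & 0xF

    b = pack_bcd(4, 2)
    print(hex(b), unpack_bcd(b))   # 0x42 (4, 2)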


My one email exchange with Brooks had nothing to do with Mythical Man Month.

In the 1990s I was the junior co-founder and, for a while, main developer of VMD, a program for molecular visualization. I wanted to include molecular surface visualization, but, me being me, would rather integrate someone else's good work.

I looked around and found "Surf", a molecular surface solver written by Amitabh Varshney when he was at the University of North Carolina. (See "Computing Smooth Molecular Surfaces", IEEE Computer Graphics and Applications, https://ieeexplore.ieee.org/abstract/document/310720 .)

Brooks, you may not know, heard Sutherland talk about using the screen as a window into another world, which got Brooks interested in VR. Back in the 1970s, at UNC, they started experimenting with head-mounted displays. Brooks worked on VR for the rest of his career.

The UNC VR group worked on many different VR approaches, including haptic (tactile) feedback. As I recall, the first was a used hydraulic-powered robot arm. People had to wear a lab coat and helmet when using it because it would leak, and had a tendency to hit people.

One of the experiments, the NanoManipulator, hooked up the VR and haptic feedback (not that same robot!) to an atomic-force microscope, so people could feel the surface and move nanoscale objects around. http://www.warrenrobinett.com/nano/ .

Brooks felt that VR would be very useful for molecular visualization, and developed the GRIP Molecular Graphics Resource. Quoting https://apps.dtic.mil/sti/pdfs/ADA236598.pdf , some of its early achievements were "the first molecular graphics system on which a protein was solved without a physical model", "using remote manipulator technology to enable users to feel molecular forces", and "Real-time, user-steered volume visualization of an electron density map".

As that document points out, their goal was to "wildcat radical new molecular graphics ideas to the prototype stage. Winning ideas are spun off to the thriving commercial industry or into autonomous research projects."

Surf fit those lines very well, as VMD was an "autonomous research project".

My exchange with Brooks and UNC was 1) to get permission to distribute Surf as part of the VMD distribution, and, 2) a few years later, to provide numbers about how many people had downloaded VMD with Surf.


Side story: the haptic feedback for the NanoManipulator was through a hydraulic system (kinda like this? https://www.sarcos.com/wp-content/uploads/history_5-339x280....). There were hydraulic lines that were piped through the building down to the machine room (where the SGI Infinite Reality Engine was!). Someone read through the manual and realized that the force that the arm was capable of could easily break someone's arm, and since it was usually grad students working late at night programming it, they decided it would be safest to just decommission that. I think I got one of the last demos during a UNC grad school recruiting event.


I may be mixing up my robot arms! I visited the lab around 1995 and saw a smaller robot arm than the one shown in Figure 2 of https://dl.acm.org/doi/pdf/10.1145/166117.166133 .

The big arm from Figure 2 is "an Argonne III Remote Manipulator".

Oddly, I can find no mention of that ARM outside of its use for the NanoManipulator. I did find https://www.ks.uiuc.edu/History/VMD/ (my old haunting grounds!) say:

> Computer scientist Frederick Brooks describes his chance encounter with the man who designed the manipulator as providential. In the 1950s, at Argonne National Laboratory near Chicago, Raymond Goertz and his group developed the ARM, the Argonne Remote Manipulator, a force-feedback device used to manipulate radioactive material in contaminated areas unsafe for humans to enter. Users gripped a device and moved it with their hand, and then signals were transferred to a robotic hand inside the contaminated area, which the users could see through glass. In the late 1960s or early 1970s Brooks met Goertz, the primary developer of the ARM, and Goertz arranged for Brooks to receive a manipulator that was no longer in use. ...

> While trying to use the donated remote manipulator with a computer in the 1970s, Brooks realized that he needed at least a hundred times more computer power than was feasible at the time, and he sidelined his work with the ARM until 1986, the arrival of the VAX computer. ...

Oooh! And you can see a few pictures of a young me in that UIUC link!


Yes, the NanoManipulator! I remember getting a demo of it during the grad student “research assistant job fair” when I first started there over 20 years ago. It was like magic, and what got me excited to study computer graphics. Unfortunately after taking the intro class (taught by Brooks himself) I realized I wasn’t cut out for all the complex math involved, so I switched to medical imaging instead. Still, Dr. Brooks was a great lecturer who injected a lot of his own personal experiences into his classes, and I ended up taking his computer architecture class later on.


I have fond memories of attending his "No Silver Bullet" lecture only a few short years ago.

A favorite quote of mine from MMM: "The programmer, like the poet, works only slightly removed from pure thought-stuff. He builds his castles in the air, from air, creating by exertion of the imagination. Few media of creation are so flexible, so easy to polish and rework, so readily capable of realizing grand conceptual structures...."


The quote is worth reading in full, available here [0]. Brooks captured so perfectly my own experience programming. Sometimes the job is boring and hard, but then there are the moments that are pure magic:

> Yet the program construct, unlike the poet's words, is real in the sense that it moves and works, producing visible outputs separate from the construct itself. It prints results, draws pictures, produces sounds, moves arms. The magic of myth and legend has come true in our time. One types the correct incantation on the keyboard, and a display screen comes to life, showing things that never were nor could be.

[0] https://pages.cs.wisc.edu/~param/quotes/man-month.html


“Castles in the air” was an inspiring quote to read as an undergrad, and helped me recognize programming as my vocation.

I have a beautiful Docubyte print of the IBM-360 in my room, as a reminder of the great endowment that is our computing past. Well done to Brooks on a remarkable life’s work.


I read that section of MMM during a down time in my early career where I was doubting my choice of programming as a career because of my limitations. It buoyed my spirits enough that I decided to stick with it and I am glad I did. I owe a great personal debt to Brooks.


> Few media of creation are so flexible, so easy to polish and rework, so readily capable of realizing grand conceptual structures.

Yeah, and then you have the core banking system from the 1950s, held together with scotch tape and bubble gum, that drives a business worth tens of billions of dollars and for which a modern rewrite would probably cost a billion dollars.

And then you realize some pieces of software are harder than a mountain made of titanium.


"No silver bullet" in itself must be one of the most misused or misunderstood phrases used by people to argue not to try better ways of doing things. If one suggests, that there is a safer or somehow better tool to get the job done ... no silver bullet! Your tool is not perfect for everything in the universe, so we will not even use it where it works significantly better!


I suspect there is no bit of wisdom in the world that is not misunderstood, distorted, and abused by lesser people.


The thought that there is no ultimate perfect way to do things is liberating, if anything. It means that we can pick something we deem good enough and roll with it, or we can try and find an infinity of other good enough ways with acceptable tradeoffs, instead of searching for that end all, be all holy grail. Or we can stop trying at some point, because of diminishing returns. It's all okay.

The important thing is not to worry that what we have is merely good, because the perfect might be waiting around the corner. It's not.


Also, the most important part of that quote is that since there are no "free lunch" productivity boosts, we really should focus on reusing existing libraries, "standing on the shoulders of giants", not reinventing the wheel for each platform/language.


"no free lunch" and "standing on ..." are 2 ideas, which are also being misused by people in at least 2 ways: One is to block, basically saying: "But no free lunch! Somewhere there must be a cost, because someone else once said no free lunch!", while it is very possible, that one has done something in a way, which was silly from the start and really could have a "free lunch", by switching to a better way of doing it.

The other "standing on the shoulders of giants" is used as justification for accumulating more and more dependencies and not writing stuff oneself, no matter how simple, which is how we get into left-pad-like territory.

We should indeed focus on reusing libraries, but please, _good_ libraries, and not ones which impose limitations on implementation as well as future thinking by creating a high mental barrier to change ("But if we switch out this library we need to change aaalll this stuff, because our code is adapted to how the library works."). Sometimes adding a dependency is simply not worth it, as in the left-pad scenario. Just write that one function yourself and don't add more dependencies to the project; see the sketch below. Always weigh the benefit against the additional load of dependencies added directly or indirectly as dependencies of dependencies.
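
For the record, here is roughly the entire function in question, as a minimal sketch (Python standing in for JS here; Python's built-in str.rjust already does this):

    # The whole of "left-pad" as one small function: the kind of
    # utility that rarely justifies pulling in a dependency.
    def left_pad(s: str, width: int, fill: str = " ") -> str:
        if len(fill) != 1:
            raise ValueError("fill must be a single character")
        return fill * max(0, width - len(s)) + s

    assert left_pad("42", 5, "0") == "00042"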


JS is definitely not a good example of "standing on the shoulders of giants": the language has always lacked a sane standard library (it is getting better nowadays), plus, due to optimizing for size and depending on "tree shakers", JS projects have probably the worst dependency graphs ever.

But should I really start writing a graph algorithm for the 15th time? I believe the point of the quote is not "blindly add dependencies", but that a significant boost in productivity can come from reusing already existing, high-quality code that fits the project's requirements. Left-pad is clearly not such code for most projects, and "it is not a free lunch" (:D) in that maintaining dependencies also has a cost.


To summarize, "No silver bullet" claims no single innovation will increase software developer productivity tenfold across the board. He does not say an innovation can't increase productivity greatly in a specific area, neither does he deny that many improvement can accumulate to a tenfold increase.


When I attended his talk ~4(?) years ago on No Silver Bullet, the audience had the chance to ask questions about recent technologies like ML/DL, AI (though CoPilot didn't exist yet), modern safety features, etc. I wish I could remember his exact responses to those questions, but of course he still maintained that there is no silver bullet :)


Also, he made that prediction decades ago (though arguably it still stands: there is no new managed language that is 10x more productive than any already existing one).


He is not saying a "10x" language couldn't be invented, he is saying a hypothetical 10x language wouldn't increase overall developer productivity 10x.

His argument is developers use a lot of time on the intrinsic complexity of designing solutions for complex problems. Representing these solutions in code is a smaller part of the job. So even if a new language increased coding productivity by 10x (like going from assembler to C might) it wouldn't increase overall productivity with the same factor.

In short, the bottleneck is not the coding, the bottleneck is our minds thinking about how to solve the problem.
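
A rough back-of-the-envelope of that argument, with made-up numbers (the 20% coding share is purely an assumption):

    # Amdahl's-law-style arithmetic: if coding is only a fraction of
    # the job, speeding up coding alone barely speeds up the whole job.
    coding_fraction = 0.2    # assumption: share of total effort spent coding
    coding_speedup = 10.0    # a hypothetical "10x" language

    overall = 1 / ((1 - coding_fraction) + coding_fraction / coding_speedup)
    print(f"overall speedup: {overall:.2f}x")   # ~1.22x, nowhere near 10x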


I reread the paper and I see what you mean — due to coding being only one part of the equation for productivity, any increase in that area can only speed up that part, not the whole (basically Amdahl’s law).

But it does also mention that since the appearance of high level languages, we are on a path of diminishing returns:

“The most a high-level language can do is to furnish all the constructs the programmer imagines in the abstract program. To be sure, the level of our sophistication in thinking about data structures, data types, and operations is steadily rising, but at an ever-decreasing rate. And language development approaches closer and closer to the sophistication of users. Moreover, at some point the elaboration of a high-level language becomes a burden that increases, not reduces, the intellectual task of the user who rarely uses the esoteric constructs.”

Also, Brooks originally wrote this paper with a 10-year timeline in mind, and my point was mostly that even though we are roughly three times past that horizon, I still don't think languages are even 3 times more productive, let alone more (and I won't buy anecdotal evidence about your favorite language, only some form of objective evidence).


It's also a warning against those proselytizing solutions.


I came here to mention this quote.

I bought the book years ago when I was on my "let's learn everything I can about computers and software" spree. Very few books have left such a lasting impression on me, even though at the time I had no notion whatsoever of what professional software engineering was like.


One of my favourites too. I use this with my students all the time to capture the magic of programming.


The poet's words have a longer copyright.


A longer copyright, but no patent protection.


"The teacher's job is to design learning experiences; not primarily to impart information." -Fred Brooks

A favorite, lesser known quote of Fred's from his technical communications course at UNC and a SIGCSE talk. Beyond a software engineer and researcher, he was an extraordinary educator. His design ethos carried through to pedagogy, as well, and has been an inspiration to me. Thanks, Fred.


During my short stint in academia, I once addressed a room full of students with something like "Our role here is not to teach you; it's to provide you with the best context in which you can learn." I have often wondered where that philosophy came from, and I am very happy to have an idea now.


During my first week at UNC Chapel Hill, I had an incredible opportunity through the honors program where a handful of new students (including myself, an aspiring CS student) got to hang out with Fred Brooks for an evening. I don't think I really knew who he was at the time, but I did recognize his name as the one on the CS building.

When we talk about Fred Brooks now, we're usually talking about the things he's written (MMM, No Silver Bullet, etc.) or the impact he's had on computing (the 8-bit byte, founding the CS dept, etc.). He didn't talk about any of that with us freshmen. Other than a brief introduction, he didn't talk about any of that at all.

Instead, he talked to us about what he saw as the future. The most exciting thing going forward, as he saw it in August of 2011, was the development of the interface between biology and computing. One of the things that stuck with me was that he said he hoped students today looked at biology the way he looked at computer science back in the 50s and 60s, as a land of unlimited potential.


> he said he hoped students today looked at biology the way he looked at computer science back in the 50s and 60s

that's interesting, i believe ken thompson said something similar re: getting into biology in an interview about 20 years ago.


I know a lot of people like MMM - I too enjoyed it. “The bearing of a child takes nine months, no matter how many women are assigned.” Still totally valid, IMO.

But I really enjoyed The Design of Design as well.

R.I.P. Mr. Brooks. I thank you for introducing me to the idea of conceptual integrity.


Message from the Chair of the UNC Computer Science Department (personal phone number elided):

Dear Friends,

It is with great sadness that I must share the following update on the health of the Department Founder Dr. Frederick P. Brooks, Jr. I know how much Dr. Brooks has meant to the department, to computer graphics, to the world of computing, and to each of you. So I wanted to reach out and pass on the following message from his son, Roger Brooks.

Dr. Samarjit Chakraborty Chair, UNC Department of Computer Science

– Begin Forwarded Message – Subject: Frederick's condition and his Hope

Dear ones:

As you may have heard, on Saturday my father came home from the hospital into hospice care. He spends most of the time sleeping. When (slightly) awake, he is only slightly responsive, and not able to respond verbally to questions. He seems to be in no pain and no particular discomfort. He is eating and drinking small amounts, but far from enough.

Frederick P. Brooks Jr. has fought the good fight, run the good race, been an outstanding husband and father and mentor and friend of many . . . and is now fading away. His hope and his coming joy, in death and in life, is in his Lord Jesus Christ, who I know will welcome him with “Well done . . .”.

The hospice nurse tells us that my father may live several days to 10 days or so.

You may share this information with all who would want to know. I know that I am missing email addresses for beloved friends which exist somewhere in my parents’ contact lists, and I apologize that I do not have time to dig for those.

With family and aides around, we have ample help. If you would like to come and visit my mother, or bring your last respects and prayers to my father, please just call the house first. Close friends are welcome, but it is hard to predict in advance when things will be busy or peaceful.

Kori Robbins, associate pastor at Orange Methodist Church, visited yesterday and prayed what I thought was exactly the appropriate, loving, and merciful prayer, which she tells me she adapted from Douglas McKelvey's A Liturgy for the Final Hours. We ask you to join in this prayer:

O God our Father, O Christ our Brother, O Spirit our comforter,

Fred is ready.

Now meet him at this mortal threshold and deliver him to that eternal city; to your radiant splendor; to your table and the feast and the festival of friends; to the wonder and the welcome of his heart's true home.

He but waits for your word. Bid him rise and follow, and he will follow you gladly into that deeper glory,

O Spirit his True Shepherd, O Christ his True King, O God his True and Loving Father, receive him now, and forgive his sins, through the blood of his Savior Jesus Christ.

Roger Brooks Sr.


It is clear that beyond his accomplishments in the world of computing, he achieved what is most important - being a good friend and family member. The warmth and love in those recalling his life is evident and is a reflection of his profound impact on others.

Rest In Peace.


By no means am I a religious person, but that certainly is a beautiful prayer.


Another quote of his:

"Show me your flowchart and conceal your tables, and I shall continue to be mystified. Show me your tables, and I won't usually need your flowchart; it'll be obvious." -- Fred Brooks, The Mythical Man Month (1975)

Stated a different way:

"Bad programmers worry about the code. Good programmers worry about data structures and their relationships." -Linus Torvalds


While I agree with both quotes separately, I don't think that they necessarily mean the same thing.

To me, the first (by Brooks) seems to be about grasping the domain model to understand what the system does (or can do) in general.

Whereas the second (by Torvalds) seems to be about how best to organize data in code for efficient processing: array, hash, tree, heap, etc., and their associated access-time complexity. The efficiency of your solution depends on your choice of a data structure that fits the local problem.
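
A toy sketch of both readings at once (hypothetical data, Python just for illustration): once the data is shaped right, the code barely needs explaining.

    # The same lookup with two structures. With a flat list, the
    # relationship is buried in a loop; with a dict (the "table"),
    # the flowchart is obvious.
    users = [("ada@example.com", "Ada"), ("alan@example.com", "Alan")]

    def find_by_email(email):
        for e, name in users:          # linear scan, O(n)
            if e == email:
                return name
        return None

    users_by_email = dict(users)       # hash lookup, O(1)
    print(find_by_email("ada@example.com"))            # Ada
    print(users_by_email.get("ada@example.com"))       # Ada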


What if the table includes a poorly labeled column entitled “fiat@“?


Underrated comment that would have been perfect if you said "labled"


The Linus quote is ripe for misinterpretation. Not worrying about the code can lead to an unreadable mess that one's future self or others will hate working with. So a really good programmer will probably go the Sussman way instead and realize that programs are firstly meant to be read and only lastly meant to be run by a computer (paraphrasing here).


> The Linus quote

I always attributed it to Rob Pike, but it turns out Pike's version is the following:

> Rule 4. Fancy algorithms are buggier than simple ones, and they're much harder to implement. Use simple algorithms as well as simple data structures.

> Rule 5. Data dominates. If you've chosen the right data structures and organized things well, the algorithms will almost always be self-evident. Data structures, not algorithms, are central to programming.

https://users.ece.utexas.edu/~adnan/pike.html

Interestingly enough, the above link has this to say:

> Rule 5 was previously stated by Fred Brooks in The Mythical Man-Month.

Which I guess references GP's excerpt.

Also says this, which kinda loops back to Linus's way of saying it:

> Rule 5 is often shortened to "write stupid code that uses smart objects".


Designing good data structures is very important for code readability too. If you struggle to understand how a given data structure is used or what it contains, it will be hard to understand the rest of the code.


There's a lot of code out there that would improve massively with better data structures though.

I mean I've waded through tons of code where the original author abused strings to indicate relationships in a table - a column with semicolon-separated values referencing other tables / rows. And a ton of code to check references.

Mind you, that's more database design than data structures, but they're close enough for this example.
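
A sketch of that anti-pattern, with a made-up schema (not the actual codebase):

    # Relationships stuffed into a string column: every reader has to
    # rediscover the parsing convention, and nothing checks the ids.
    row = {"id": 7, "tags": "3;17;42"}
    tag_ids = [int(t) for t in row["tags"].split(";") if t]
    print(tag_ids)   # [3, 17, 42]

    # The alternative, a proper join table, lets the database enforce
    # the references instead of ad-hoc checking code:
    #   CREATE TABLE item_tags (
    #       item_id INTEGER REFERENCES items(id),
    #       tag_id  INTEGER REFERENCES tags(id)
    #   );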


In reality, though, the tables are going to be like: the column named “price_usd_incl_tax” is neither in USD nor does it include all taxes.


Don't worry, the code is self-documenting. /s

Self-documenting code (which in practice I take to mean a "no comments" culture) is something I don't understand how it can work; I've never seen a good implementation of it. It _can_ be successful in describing the _what_ but describes the _why_ poorly or not at all. Perhaps I'm in the wrong domain for that, though.


In practice it really does mean self documenting code.

Like variables called "daysSinceDocumentLastUpdated" instead of "days". The why comes from reading a sequence of such well-described symbols, laid out in an easy-to-follow way.

It doesn't do away with comments, but it reduces them to strange situations, which in turn provides refactoring targets.

Tbh, its major benefit is the fact that comments get stale or don't get updated, because they aren't held in line by test suites and compilers.

Most comments I come across in legacy code simply don't mean anything to me or any coworkers, and often cause more confusion. So they just get deleted anyway.


In most cases, even with verbose variable names, you still can't understand the why just by reading the code. And even if you could, why would you want to?

Most often I'm just skimming through, and actual descriptions are much better than having to read the code itself.

This whole notion of "documentation can get out of sync with the code, so it's better not to write it at all" is so nonsensical.

Why isn't the solution simply "let's update the docs when we update the code"? Is this so unfathomably hard to do?


> This whole notion of "documentation can get out of sync with the code, so it's better not to write it at all" is so nonsensical.

To me, this feels similar to finding the correct granularity of unit tests or tests in general. Too many tests coupled to the implementation too tightly are a real pain. You end up doing a change 2-3 times in such a situation - once to the actual code, and then 2-3 times to tests looking at the code way too closely.

And comments start to feel similar. Comments can have a scope that's way too close to the code, rendering them very volatile and oftentimes neglected. You know, the kind of comments that eventually escalate into "player.level += 3 // refund 1 player level after error". These are bad comments.

But on the other hand, some comments are covering more stable ground or rather more stable truths. For example, even if we're splitting up our ansible task files a lot, you still easily end up with several pages of tasks because it's just verbose. By now, I very much enjoy having a couple of three to five line boxes just stating "Service Installation", "Config Facts Generation", "Config Deployment", each showing that 3-5 following tasks are part of a section. And that's fairly stable, the config deployment isn't suddenly going to end up being something different.

Or, similarly, we tend to have headers to these task files explaining the idiosyncratic behaviors of a service ansible has to work around to get things to work. Again, these are pretty stable - the service has been weird for years, so without a major rework, it will most likely stay weird. These comments largely get extended over time as we learn more about the system, instead of growing out of date.


> Comments can have a scope that's way too close to the code, rendering them very volatile and oftentimes neglected.

I think this is a well put and nuanced insight. Thank you.

This is really what the dev community should be discussing; the "type" of comments and docs to add and the shape thereof. Not a poorly informed endless debate whether it should be there in the first place.


> To me, this feels similar to finding the correct granularity of unit tests or tests in general.

I recently had an interview with what struck me as a pretty bizarre question about testing.

The setup was that you, the interviewee, are given a toy project where a recent commit has broken unrelated functionality. The database has a "videos" table which includes a column for an affiliated "user email"; there's also a "users" table with an "email" column. There's an API where you can ask for an enhanced video record that includes all the user data from the user with the email address noted in the "videos" entry, as opposed to just the email.

This API broke with the recent commit, because the new functionality fetches video data from somewhere external and adds it to the database without checking whether the email address in the external data belongs to any existing user. And as it happens, it doesn't.

With the problem established, the interviewer pointed out that there was a unit test associated with the bad commit, and it was passing, which seemed like a problem. How could we ensure that this problem didn't reoccur in some later commit?

I said "we should normalize the database so that the video record contains a user ID rather than directly containing the user's email address."

"OK, that's one way. But how could we write a test to make sure this doesn't happen?"

---

I still find this weird. The problem is that the database is in an inconsistent state. That could be caused by anything. If we attempt to restore from backup (for whatever reason), and our botched restore puts the database in an inconsistent state, why would we want that to show up as a failing unit test in the frontend test suite? In that scenario, what did the frontend do wrong? How many different database inconsistencies do we want the frontend test suite to check for?
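
For what it's worth, the normalization answer can itself be made machine-enforced. A minimal sketch using sqlite3, with made-up table names (not the interview project's actual schema):

    import sqlite3

    # Reference users by id with a foreign key, so the database itself
    # rejects videos pointing at nonexistent users.
    con = sqlite3.connect(":memory:")
    con.execute("PRAGMA foreign_keys = ON")   # sqlite enforces FKs per connection
    con.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT UNIQUE)")
    con.execute("""CREATE TABLE videos (
                       id INTEGER PRIMARY KEY,
                       user_id INTEGER NOT NULL REFERENCES users(id))""")

    con.execute("INSERT INTO users VALUES (1, 'a@example.com')")
    con.execute("INSERT INTO videos VALUES (1, 1)")        # ok
    try:
        con.execute("INSERT INTO videos VALUES (2, 99)")   # no such user
    except sqlite3.IntegrityError as e:
        print("rejected:", e)   # FOREIGN KEY constraint failed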


That makes no sense to me either. In my book, tests in a software project are largely there to check that desired functionality exists, most often to stop later changes from breaking that functionality. For example, if you're in the process of moving the "user_email" from the video entity to an embedded user entity, a couple of useful tests could ensure that the email appears in the UI regardless of whether it's in `video.user_email` or in `video.user.email`.

Though, interestingly enough, I once built a test that could have caught similar problems, back when we switched databases from MySQL to PostgreSQL. It would fire up a MySQL database with an integration-test dump, extract and transform the data with an internal tool similar to pgloader, and push it into a Postgres in a container. After all of that, it would run the integration tests of our app against both databases and flag if the tests failed differently on the two databases. And we have similar tests for our automated backup restores.

But that's quite far away from a unit test of a frontend application. At least I think so.


> With the problem established, the interviewer pointed out that there was a unit test associated with the bad commit, and it was passing, which seemed like a problem. How could we ensure that this problem didn't reoccur in some later commit?

It would seem that the unit test itself should be replaced with something else, or removed altogether, in addition to whatever structural changes you put in place. If you changed DB constraints, I could see, maybe, a test that verifies the constraint works to prevent the previous data flow from being accepted at all, failing with an expected exception or similar. But that may not be what they were wanting to hear?


> This whole notion of "documentation can get out of sync with the code, so it's better not to write it at all" is so nonsensical.

I do believe that in a lot of cases outdated, wrong, or plain erroneous documentation does more harm than no documentation. And while the correct solution is obviously "update the doc when we update the code", that has been empirically proven not to work across a range of projects.


What 'has' been proven then? No comments or docs? Long variable and method names?

I just had a semi-interview the other day, and was talking with someone about the docs and testing stuff I've done in the past. One of the biggest 'lessons' I picked up, after having adopted doc/testing as "part of the process" was... test/doc hygiene. It wasn't always that stuff was 'out of date', but even just realizing that "hey, we don't use XYZ anymore - let's remove it and the tests", or "let's spend some time revisiting the docs and tests and cull or consolidate stuff now that we know about the problem". Test optimization, or doc optimization, perhaps. It was always something I had to fight for time for, or... 'sneak' it in to commits. Someone reviewing would inevitably question a PR with "why are you changing all this unrelated stuff - the ticket says FOO, not FOO and BAR and BAZ".

Getting 'permission' to keep tests and docs current/relevant was, itself, somewhat of a challenge. It was exacerbated by people who themselves weren't writing tests or code, meaning more 'drift' was introduced between existing code/tests and reality. But blocking someone's PR because it had no tests or docs was "being negative", but blocking my PR because I included 'unnecessary doc changes' was somehow valid.


But arguments around "is this so hard?", or reductions like "so don't write documentation at all", are more about superiority signalling, aimed at individual benefit.

The fact is that, when you zoom out to org level, comments do quickly drift out of sync and value, and so engineering managers must encourage code writing that will maintain integrity over time, regardless of what people "should" be able to do.


The argument isn’t that it’s better to not write it at all, it’s that it’s not worth the effort when you could have done something else. Opportunity cost and all that.


People are lazy.


Lazy people work the hardest. It's an up front investment for a big payoff later when you can grok your code in scannable blocks instead of having to read a dozen lines and pause to contemplate what they mean, then juggle them in your memory with other blocks until you find the block you're looking for.

Comments allow for a high-level view of your code, and people who don't value that probably on average have a slower overall output.


What you write in your first para is so self-evidently true, at least to me.

I simply cannot comprehend the mindset that views comments as unnecessary. Or worse, removes existing useful comments in some quest for "self-documenting" purity.

I've worked in some truly huge codebases (40m LOC, 20k commits a week, 4k devs) so I think I have a pretty good idea of what's easy vs hard in understanding unfamiliar code.


As the late Chesterton said, "Don't ever take a fence down until you know the reason why it was put up."

A lot of people think comments are descriptive rather than prescriptive. They think a comment is the equivalent of writing "Fence" on a plaque and nailing it to the fence. "It's a fence," they say, "You don't need a sign to know that."

Later, when the next property owner discovers the fence, they are stumped. What the hell was this put here for? A prescriptive comment might have said, "This was erected to keep out chupacabras," answering not what it is, but why.

You might know about the chupacabras, but if you don't pass it on then you clearly don't care about who has to inherit your property.


> Lazy people work the hardest.

What's amazingly funny is that many people think this is a positive, because they ascribe more value to working hard than to achieving results. I even thought your comment was going to go that way when I first read it.


Better, a "last_updated" method on instances of "Document", that being an "Age" instance with a "days" method: document.last_updated.days

Self-describing code does not need theRidiculouslyLongNamesPreferredByJavaCoders.
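
A runnable sketch of that fluent style (class and field names are just illustrations):

    from dataclasses import dataclass

    # Each name carries one small piece of meaning, so the call site
    # reads as prose rather than one giant identifier.
    @dataclass
    class Age:
        days: int

    @dataclass
    class Document:
        last_updated: Age

    doc = Document(last_updated=Age(days=12))
    print(doc.last_updated.days)   # 12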


Yes, was just an illustrative example


The why should be clear from the domain that you're working within. A line of comment should count as something like 10 lines of code; if you're reading a comment then you're treading into real complexity. If you're in a code base where that isn't true, then is the comment really necessary?

Fairly hot take from me, life is more ambiguous than that :-).


> The why should be clear from the domain that you're working within

Sometimes the 'why' is purely domain knowledge. Sometimes the 'why' is about narrowing down options available in the domain. Sometimes the 'why' is about a choice made for reasons that aren't specific to the domain. And sometimes the 'why' is about the code that wasn't written, so it can't possibly be in the code that was.


> sometimes the 'why' is about the code that wasn't written, so it can't possibly be in the code that was.

I have often had to write extensive comments related to this to prevent well meaning coders who are not expert in the domain from replacing the apparently bad or low performance code with an obvious but wrong 'improvement'.


In a perfect world, tests and assertions would protect from that, but yes, that's a good use of comments.


"Sometimes" doing all the lifting there.

Comments are supplemental. If you have just added some weird, non-obvious, bit of code because you needed to compromise, or work around some other quirk, go ahead and comment. No one is going to (sanely) object to that.


What you describe is how I tend to comment. At the opposite end of the spectrum we have Knuth's 'literate programming', exemplified in TeX, which has as its goal 'making programs more robust, more portable, more easily maintained, and arguably more fun' [0] by merging documentation with code. I'd bet if you counted documentation lines vs. code lines in TeX they'd be near 50/50, and I'd bet that if we asked Knuth whether the comment lines were supplemental he'd say no.

[0] https://www-cs-faculty.stanford.edu/~knuth/lp.html


Grokking that 'why' can take non-trivial mental effort by a non-author, even when the code is well written and documented. Worse, if the code is needlessly complex, or trying to be smart, or over-engineered, no amount of commenting will help. The non-author (maintainer) of the code is now burdened. And if (as so commonly happens) they dismiss the original code as 'non-performant' or 'not a best practice' or something else... we know how that plays out.


> The why should be clear from the domain that you're working within.

I hear this commonly from coders who haven't had the ambiguous pleasure of working with old, production critical codebases from generations of coders who have come and gone, with technical decisions buffeted around by ever-shifting organizational political and budgeting winds. Knowing the why's that leadership cares about is far more important to your career than the technical why's, which are along for the ride.

Once you go into production with tens of thousands of users and up, with SLA's driven by how many commas of money going up in smoke per minute...yeah, illusions of "pure" domain knowledge driving understanding of function dictating code form evaporate like a drop of distilled water on the Death Valley desert hardpan in the middle of summer.

I used to be like that as well years ago, but some kind greybeards who took me under their wings slapped that out of me.

Now my personal hobby code with an "unlimited" budget and I'm the sole producer and consumer? Yep, far closer to this Platonic ideal where comments are terse and sparse, and the code is tightly coupled to the domain.


Code is almost never self-documenting. That's why there are so many O'Reilly books out there.

A great example is AWK: It's a tool, and it comes with a book from the people who made the software. That's how I like my software.


To your point, we also have 'The C Programming Language', K&R, and 'The Unix Programming Environment', K&Pike.


Seems like the common denominator is the K here!


Or the Emacs book.


Unless you are in some trivial startup domain, real domains (TM) have almost fractal-level complexity if you dig deep enough: corner cases, sometimes illogical rules, etc.

The "why" is still very much needed since it can have 10 different and even conflicting reasons, and putting it in the code in appropriate amount shows certain type of professional maturity and also emotional intelligence/empathy towards rest of the team.

I mean, somebody has to be extremely junior to never have experienced trying to grok somebody else's old non-trivial code in a situation where you need to deliver a fix or change on it now. And it's fairly trivial to write very complex code even in a few lines, which some smart inexperienced juniors (even older ones, but total-skill-wise still juniors) produce as some sort of intellectual game out of boredom.


And even more important than the 'why' can be the 'why not', i.e. explanations for implementation choices that weren't taken, for various reasons.


> I mean, somebody has to be extremely junior to never experience trying to grok somebody's else old non-trivial code

People are definitely capable of looking at someone else's code and saying "this crap is completely unreadable, we should rewrite it all", while at the same time believing that their own code is perfectly readable and self-documenting.


I’m not a “no comments” maximalist but someone has to be pretty junior to have never experienced a comment that is just completely incorrect.

It’s really hard to write a good comment that is only “why”. It’s really hard to keep comments up to date as code is moved and refactored. And an incorrect comment is much more damaging than no comment at all.

That’s the driving force behind “self documenting” code. My view is that a comment is sometimes necessary but it is almost always a sign that the code is weak.


> It’s really hard to keep comments up to date as code is moved and refactored.

Hard disagree with this.

If your comment is so volatile then that really sounds like there's something architecturally wrong with the code.

Most of the time these kinds of "comments" can be turned into either a test, or an extensive description that goes into version control.

Because commit messages are just that: a comment for a specific moment in time. There are lots of alternatives to inline comments.


Which is why I find looking at `git blame` (or one's favorite IDE's/SCM's equivalent) output so very useful in case of undercommented code.


> It’s really hard to keep comments up to date as code is moved and refactored

I agree with this, but if the explanation for the logic has good reason to be there, then keeping comments up to date with code changes is very important, and it goes back to the seniority and empathy I mentioned earlier: if you understand why it's there in the first place, and you actually like the rest of your team, you are doing all of you a big favor by updating the comments.

Each of us has a different threshold for when some text explanation should be provided, which is the source of these discussions. But again, back to empathy: not everybody is at your coding level, and you can save a new joiner a lot of time (and maybe a production bug or two) if they can quickly understand some complex part.


Developers, especially new ones, do not understand or know all the history of the project.

I remember one time in CSS I had to do something weird like min-width: 0; it was needed to force the CSS engine to apply some other rule correctly, but it will puzzle you when you read it. This kind of puzzling code needs comments. I prefer to just put the ticket ID there, and the ticket should contain the details of what the weird bug was, so if some clever dev wants to remove the weird code he can understand the stuff.

Sometimes I see code in our old project like "if webkit do X else do Y" with no comment or bug link, so I have no idea if this code is still needed or not (browsers still differ in more complex stuff, like contenteditable).


I don't like such rules of thumb.

A better approach would be that a comment should tell you something that you cannot glean from the code and/or is non-obvious. Yes, I understand "non-obvious" can have a truck driven through it, but in general it should work.

You can read code and understand what it's doing mechanically, but you may not understand why the obvious approach wasn't taken or understand what it's trying to achieve in the larger context. Feel free to comment on those, but if the code is difficult to understand mechanically, the code is generally bad. Not always, everything has exceptions, but generally that's true.


Documentation without accurate and descriptive method/member names is much more harmful than the inverse. If an abstraction is sufficiently complex to warrant a lengthy description of why it exists, then it should have a design doc. In practice, most code within a repo is pretty simple in what it accomplishes and if it's confusing to a reader, then it is most likely because they don't understand the design of the larger component or system or simply because the implementation is poor. There are of course cases where comments are really useful or even necessary (e.g. if going against best practices for a good reason or introducing an optimization that is for all intents and purposes unreadable without explanation), but they are exceptions.


I like that term. When I hear it I can with 100% accuracy know the person touting it is a hack and their code is garbage.


The dream of self-documenting code requires solving two problems, only one of which programmers are typically good at.

1) Communicating with computers

2) Communicating with other humans

Self-documenting code is essentially writing prose. Granted, to someone with similar knowledge as you.

But most people suck at writing.


I have better hope that a good programmer can write readable code, than that they will write readable documentation. As you point out, people suck at writing.


I would remark here that The Mythical Man-Month did give a page or two to documentation. My copy seems to be out on loan, but as I recall the section included a figure showing the documentation for a sort function, perhaps 25 lines or so.


> My copy seems to be out on loan,

Drifting off-topic, but I wonder how close to the top of the list TMMM is for "on loan" duty cycle in the software world. My copy also seems to be persistently in someone else's hands.


If I remember correctly, Brooks' experience was with assembler, which might require some more documentation than modern Java or Python.


I think that the example was in PL/1.


At my company code is required to be self-documenting. My attitude is that if you can't determine the why then you likely are not familiar enough with the problem domain to be working with that code. It's fine not to be familiar with the domain and there are ways to address that, but reading source code is not one of them.


So you bar all junior developers from writing code until they've gone through tested coursework in your domain, or what?


Yes absolutely. All developers, junior and senior, go through a 4 month training program working on a completely independent project from scratch that teaches them everything they need to work in their domain. There are exceptions now and then, but for the most part it's pretty consistent.

When a developer wants to switch from one area to another, they go through an accelerated program (takes only about a month).


I've seen lots of documentation that I only understood after I understood the code.


In other words: poor documentation.


It used to mean that, but a programmer changed the meaning. They could either:

(a) rename the column, be the guy who broke the system, and spend all weekend trying to fix 6 systems he never knew existed, written in Access, Excel, Crystal Reports, VB6, Delphi, and a CMD file on a contractor's laptop, or

(b) keep the column name, deliver the feature, go home.


I really hope you are joking 100%, because

(b) Go home and be happily oblivious that six other systems silently started to produce wrong results since the meaning of the column has changed. But, of course, that is someone else’s problem, some other day, when several months of reports and forecasts have to be recalled and updated.


We prefer option (c): add a new table/column with a similar-looking name. Then, a few years later, start wondering why there are two almost identical entities, and why one of them behaves more weirdly than the other.


There's no point in keeping the same name so that a system can keep running, if the data meaning has changed.

Bad programmers chose (b). Good programmers choose (a). Better programmers refuse the change request.


If the data meaning changed, though, those 6 systems would break too, no?


It might not be math-related; it could be something as simple as a requests table named to indicate that API X was used, and now it uses API Y, and there is some reporting on that somewhere that doesn't care which API was used.

Ideally the table would have been named more generically but in an earlier stage startup there will be mistakes in naming things no matter how hard you try to avoid that.

So the only thing that actually breaks here is that a small number of engineers that care about this might misinterpret what it means unless they learn the appropriate tribal knowledge. Ideally it gets fixed but if you look at all of the things that can be improved in an early stage startup, this kind of thing is pretty minimal in importance so it becomes tech debt.


Anyway, where were we? Oh yeah - RIP Fred Brooks.


That's why you should also have comments, and maintain them. Then if the name is hard to change, you can at least document the new semantics.


To be clear I am pointing out incentives to make a worse choice, rather than the better choice.


Which is exactly what Linus' quote is about.


In that case the code will probably also be difficult to read.

In my experience, studying the database schema and IO data structures is indeed the best way to begin understanding a complex system.


This is only in the reality where nobody is shown the table.

Showing things acts as a forcing function to fix the thing being shown.


I can just barely count the times I have seen a production failure due to someone assuming a millicent value from a column with ambiguous naming was in centicents or vice-versa.


At least you can grep that column name in the source code to find out where other taxes are calculated. Of course an ORM can further complicate this.


This hits home. Not only in databases, but also in code.


I had the privilege of attending some of Dr. Brooks' lectures. His views on the role of a computer scientist have helped me find meaning and direction in my career. May he rest in peace.

In a word, the computer scientist is a toolsmith--no more, but no less. It is an honorable calling. If we perceive our role aright, we then see more clearly the proper criterion for success: a toolmaker succeeds as, and only as, the users of his tool succeed with his aid. However shining the blade, however jeweled the hilt, however perfect the heft, a sword is tested only by cutting. That swordsmith is successful whose clients die of old age.

https://www.cs.unc.edu/~brooks/Toolsmith-CACM.pdf


I'm pretty sad about this.

When I was in high school and learning how to program, he let me borrow a copy of his Mythical Man Month book.


Wait, what? You knew Fred Brooks when you were in high school? How?


Church connections. I didn't know him personally; my parents did.


Still cool.


So many questions!

Was he at the school somehow?

Was Mythical Man Month useful for a high school programmer?


Not so much; at that time I was making projects, putting them out on the internet, etc. I tried talking with other adults who were programmers about some of the issues I had and occasionally got advice.

But when I got to grad school, their conversations about "the industry" and ways of working were massively inexperienced. I also read a lot at that age. It also gave me a leg up in recognizing the high pressure of delivering. (The 9 women making a baby in 1 month reference.)


Don't be sad - he lived a long productive life full of people he impacted, and as you can see from the comments here he was well respected. All of us will die one day, let's not feel sadness.


Nothing you said means you can't feel sad. No matter how long and full someone's life is, there is still sadness when they are gone. It doesn't have to be debilitating sadness, but it is ok to feel sad.


Let's not try and tell others what is OK and not OK to feel.


The programmer, like the poet, works only slightly removed from pure thought-stuff. He builds his castles in the air, from air, creating by exertion of the imagination. Few media of creation are so flexible, so easy to polish and rework, so readily capable of realizing grand conceptual structures.


Deserves the black bar, in my view. Probably one of the most widely-discussed and most oft-cited writers among hackers of multiple generations.

He will always be remembered for “Brooks’s Law”, colloquially, “adding people to a late software project makes it later”:

https://en.wikipedia.org/wiki/Brooks%27s_law

And for his timeless essay, “No Silver Bullet”, which introduced the idea of accidental vs essential complexity in software:

https://en.wikipedia.org/wiki/No_Silver_Bullet

RIP.


MMM is one of the very few things we all agree is right.


There are not many universally agreed truths amongst us opinionated hackers, but indeed Brooks has bequeathed us one.


This is the best way I've seen it put.


If anything deserved the black bar it was this. Shameful that it didn’t happen.


It did happen, actually. It was put up shortly after this comment and taken down about a day after the news.


It is once again time for me to re-read The Mythical Man Month and watch his No Silver Bullet lecture. No better way to respect such a giant.


Once every 2-3 years as a refresher anyway. Yup.


“What one programmer can do in one month, two programmers can do in two months.”

― Fred Brooks

I printed this out and taped it to the whiteboard at my desk. Handy to point out to the manager in various situations.
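
For making that point with numbers, here is a toy model of my own (not anything from Brooks's book; the 3% per-channel overhead figure is invented): with n people there are n*(n-1)/2 pairwise communication channels, and each channel eats a slice of everyone's output.

    # Toy Brooks's-law model: coordination overhead grows with the
    # square of team size, so adding people eventually adds months.
    def calendar_months(person_months: float, n: int,
                        overhead_per_channel: float = 0.03) -> float:
        channels = n * (n - 1) / 2
        productivity = max(1.0 - overhead_per_channel * channels, 0.0)
        if productivity == 0.0:
            return float("inf")  # all coordination, no work left
        return person_months / (n * productivity)

    for n in (1, 2, 4, 8, 12):
        print(n, round(calendar_months(12.0, n), 1))
    # -> 12.0, 6.2, 3.7, 9.4, inf: faster at first, then later.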


The Mythical Man-Month is one of the seminal works on software engineering practice. It has held up extremely well over time. If I ever have to jettison professional books for whatever reason, it'll be in the last box/shelf I keep.


His No Silver Bullet paper is another great one.


Newer editions of the book usually include the paper too.


https://news.ycombinator.com/item?id=32423356 - It's been posted a few times here, too. A recent discussion with dcminter and dang listing out more prior submissions.


That book was a window into the mind of a true software engineer and one of the best things I have read generally. RIP.


One of the things that amazed me was how present he always was. At any lecture he attended, regardless of the speaker or topic (and these ranged very widely), he asked questions. Good, tough, insightful questions. His mind was never passive, and he set a great example for everyone decades younger than him.


That is impressive, and a goal I will aspire to. I may not achieve it, and I may not always ask questions, but remaining alert, active, and aware is pretty badass, and I would like to be that way too.


Fred Brooks went supernova today. Let the brilliant light mark his passing and remind us of the light he shone on the software world.


Sad news, but he left a legacy to be proud of. Few people write anything that will be read half a century later, much less as a source of insight rather than historical context.


"Conceptual integrity is the most important attribute of a great design" from The Design of Design


Conceptual integrity is a seriously underrated concept. I think the inherent conflict between conceptual integrity and representativeness is at the heart of democracy, for instance.


People keep mentioning this book. I think I'm going to have to check it out. Should also reread MMM as I haven't in years.


Dr. Brooks on No Silver Bullet (while eating pizza).

https://youtu.be/HWYrrw7Zf1k

RIP Dr. Brooks.


The Department of Computer Science at UNC Chapel Hill, which Fred founded in 1964, shares our official remembrance letter here: https://cs.unc.edu/news-article/remembering-department-found...


"The Mythical Man-Month: Essays on Software Engineering is a book on software engineering and project management by Fred Brooks first published in 1975, with subsequent editions in 1982 and 1995. Its central theme is that adding manpower to software project that is behind schedule delays it even longer. This idea is known as Brooks's law, and is presented along with the second-system effect and advocacy of prototyping.

Brooks's observations are based on his experiences at IBM while managing the development of OS/360. He had added more programmers to a project falling behind schedule, a decision that he would later conclude had, counter-intuitively, delayed the project even further."

https://en.wikipedia.org/wiki/The_Mythical_Man-Month


Sad. He was a giant.

I'm glad I got to at least shake his hand. One of the lawyers at Google had studied under him, and when I saw them crossing the street I just assumed the older gentleman with the visitor's badge was Brooks (I didn't even know what he looked like, but I found out later I'd guessed correctly).


I really enjoyed his book “The Design of Design: Essays from a Computer Scientist”

https://www.goodreads.com/book/show/7157080-the-design-of-de...

RIP Mr. Brooks


Brooks and Knuth (fortunately still with us) are not only respected but also loved. I sometimes miss this in our field, where success is often valued more than a great mind, and a great mind more than a lovable, well-rounded person.


The ACM recorded this 2-hour long interview[0] with Fred that walks through his whole history. It's incredible to see Fred's ability to recall conversations and technical decisions from over 50 years ago!

I admire his ability to move back and forth between industry and academia and move the entire field forward.

One of my favorite quotes: "A scientist builds in order to learn. An engineer learns in order to build."

[0] https://www.youtube.com/watch?v=ul0dbgs8Mdk


Sad loss. One might wonder how much faster technology’s state of the art would have progressed if we had more people like Fred Brooks working in the field.


How dare you suggest adding more engineers to a slowing project, now of all times?


Haha, that's a great comeback! Thanks for cheering me up. :D

I mentioned Fred Brooks in a comment just earlier this week. The mythical man-month is such an obvious and well-known trap that it's surprising how many projects still fall into it.

Whilst his work is mostly seen as being for software engineers, it really should be better known among project and senior managers in general.


Perfect, thank you!


I wonder what Brooks would think about it.


For some reason I thought Brooks had already died some years ago, but then I remembered that what I had previously read was the comment thread on the announcement of his retirement in 2016: https://news.ycombinator.com/item?id=11257437


That’s somebody’s funeral I would like to attend, to pay my respects.


A great way to go down in posterity is to state a law that depends so much on a basic human flaw that it cannot be bent until Darwin changes our brains.

Brooks's law of late software projects will be quoted for the rest of time, because software projects will be late for the rest of time.

May Mr. Brooks rest in peace until then.


Rest in peace to one of the greats.

You might want to consider reading The Design of Design if you liked The Mythical Man-Month.


A true yet humble giant.

Reading his works elucidated so many ideas and experiences that I could not myself articulate, and helped set the foundation for my own ideas further down the line.

RIP Fred, thank you for all your warm kindness and endless contributions to our field at large.


Just a quick reminder: Fred Brooks did much more than write the Mythical Man Month.


We are standing on the shoulders of giants. He was one of the tallest.


That is sad, but he had a great career.

He, along with folks like Watts Humphrey and Donald Knuth, was one of the earliest published "Computer Programming As An Engineering Discipline" types.


Who is currently writing things that might end up having the impact of The Mythical Man-Month?

It's Friday, and I'm grumpy, so I could very well argue that the age of the "thinkers" is dead and gone for software, and that everything from now on is just rehashing old good ideas (at best) or propagating new bad ones.

Let's be charitable and assume there is still 1% of good stuff among the junk. Who's writing it? Who's on the good side of the tar pit, and has the potential to lend a hand?


Not writing, but I have found all of Bret Victor's talks incredibly inspiring for how I think about what I want to work on and where we're going.

Inventing on Principle by Bret Victor: https://www.youtube.com/watch?v=PUv66718DII

Growing a Language by Guy Steele: https://www.youtube.com/watch?v=lw6TaiXzHAE

We Really Don't Know How to Compute! by Gerald Sussman: https://www.youtube.com/watch?v=HB5TrK7A4pI


Plato is still read, but there have been many philosophers since then who are also read.

We are quickly passing the golden age in which anyone could see the obvious problems and write about them. There will be many thinkers to come, but they are all building on the giants of the past and so will mostly not be commenting on the obvious.


Well, it probably also applies to Plato, but what Fred Brooks wrote is still not considered _obvious_.

Exhibit A: every single project manager who tried to address a tight schedule on a late project with a "well, we'll get more people in." Today.


Hugely sorry to hear this. I met him when he was visiting Cambridge while I was a PhD student, and he was a wonderful person as well as a computing luminary.


They will get his funeral done in record time by assigning 9 preachers to speak at the same time.

RIP Fred, you were a giant and will be missed.


I laughed.


May he rest in peace. Like many here, I was first introduced to Dr. Fred Brooks through The Mythical Man-Month, which has had a tremendous impact in shaping my views on software. Afterwards I watched some of the lectures he held at UNC on YouTube, and always wished I had attended UNC for my undergraduate studies.


A Man who will be Mythical every Month.


Fred Brooks, a mythic figure, lived a couple of days shy of a very productive 1099 man-months. RIP.


I rarely see this mentioned, but the book he authored with Gerrit A. Blaauw, Computer Architecture, has a really cool way of characterizing various machine architectures: it describes their data representations, formats, and significant operations in APL.
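
In a similar declarative spirit, here is a toy sketch in Python rather than the book's APL (the field layout is only loosely modeled on the S/360 RX instruction format, and all names are mine): describe the format as data, then decode any word against it.

    # A 32-bit instruction word as (field, width) pairs, most
    # significant field first; decode() splits a word into fields.
    FIELDS = [("opcode", 8), ("r1", 4), ("x2", 4), ("b2", 4), ("d2", 12)]

    def decode(word: int) -> dict:
        out = {}
        shift = sum(width for _, width in FIELDS)  # 32 bits total
        for name, width in FIELDS:
            shift -= width
            out[name] = (word >> shift) & ((1 << width) - 1)
        return out

    print(decode(0x5A102034))
    # {'opcode': 90, 'r1': 1, 'x2': 0, 'b2': 2, 'd2': 52}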


(It feels like these losses of computing trailblazers are going to become more frequent. Does anyone have an analysis of how often HN has put up the black banner for someone over the years?)


Sad to hear. I did CS at UNC and will always cherish those late nights coding in the Brooks building. I have my dad’s first edition of MMM as well. RIP.


RIP Mr Brooks.

I've still got my copy of MMM from 20 years ago. I re-read it recently (~2 years ago). Such great wisdom in that book. Would highly recommend it.


Such a classic book. Are any other computer books from that date still relevant at all, let alone relevant to such a wide audience?


I was just looking him up the other day and found it remarkable that he was still alive. What a wonderful, long life.


Another legend leaves us.

Hopefully this will encourage more people to read his work. It's about human behaviour, and it's timeless.


I was literally just reading "No Silver Bullet" this week.

Quite the legacy, long may it last.


We need a black stripe on here @dang


Very sad news. Thank you for your enormous contribution to humanity! RIP, Dr. Brooks.


RIP


May the earth rest lightly on him. (Albanian: "I qofte dheu i lehte.")


So that's why there's a black bar. Phew, glad it's not a bug in my browser.


We should probably wait for confirmation from more than this source. I don't disbelieve it, because they cite the UNC CS department, but even Wikipedia hasn't been updated yet.


> confirmation from more than this source

sure, but

> even Wikipedia hasn't been updated

how would you rank the possibility that Wikipedia is updated from that same source? It's an open article. If a piece of news, say, "goes viral", how do you know this does not directly affect sources you would have used for comparison - /especially/ a wiki?

PS: I just noticed the coincidence that I have just submitted a piece about "Epistemic Vigilance in teams, esp. in the context of news sources": https://news.ycombinator.com/item?id=33651906


> how would you rank the possibility that Wikipedia is updated from that same source

I'd rank it as definitely possible. No idea how often it happens.

The thing that really persuaded me to put the story back up was the source of the tweet. A Columbia CS prof and law prof who had apparently studied with Brooks would not likely have posted that if he hadn't had a genuine notification.


Wikipedia is now updated.


Ok, we'll restore the thread. The fact that OP is by a Columbia CS prof makes it pretty likely.


As faculty at UNC CS, I can mournfully confirm this is indeed the case. He passed peacefully at his home in Chapel Hill earlier this evening, surrounded by family.


Dang, can you clarify why you found "Columbia CS" credentials more credible than "UNC CS"? I'd like to say that's a pretty shocking position, but sadly it isn't.


I didn't find that, nor say it, nor imply it, nor did such a possibility ever enter my consciousness until now. I'm mystified that such an interpretation could even arise! Do you want to explain how it occurred to you?


Black bar?


Wikipedia cites the Twitter post as a source.


I wasn't impressed by The Mythical Man-Month. The two major insights, IIRC:

1. Adding more people adds overhead, which slows down productivity and might even make things worse.

2. The 10x (or 100x) developer mythology, and how other programmers should act as their support secretaries.

(1) is too obvious, and (2) I didn't like for self-interested reasons.


> (1) is too obvious

If only.

https://www.military.com/daily-news/2013/06/19/lockheed-reas...

> Lockheed Martin Corp. has reassigned 200 engineers to work on the F-35 fighter jet's software, a problem area that Defense Department officials fear could cause more delays to the program.

Somehow obvious to the DOD, but not to LM. And that's just one of the more high-profile incidents I know of; I've witnessed plenty of others (directly or indirectly) that never made the news. Nearly 40 years after MMM was published, people in major corporations still have to relearn its lessons.


Consider it from Lockheed Martin's perspective: at that point the F-35 is a program guaranteed to have money flowing no matter what; it is too grandiose to fail. The longer Lockheed Martin can protract the work, and the more excuses it can muster to rack up the monies needed (read: monies it pockets), the better. The government cash cow will give milk.

Lockheed Martin knew precisely that obvious fact about overhead.



