Okay, I'll take a little time to brag. The word count challenge here caught my interest at the time, so I whipped up an assembly language version of it and iterated several times, trying to figure out the value of switching different registers. In the end, I took second place.
However, there's an interesting story behind this story. David Stafford, who came in first, posted that he thought he had the fastest solution and bet $100 that nobody could beat it. I posted my code which was significantly faster, and David tweaked it further to eventually win the challenge. Like a true person of honor, he did pay the $100 and I cashed his check.
I remember this, Dave. I was confident I had the fastest possible algorithm and you proved me wrong. It was a humbling experience but it forced me to throw out my assumptions and start over. It taught me to assume there was always a faster or better way just waiting to be discovered. TANSTATFC
Awesome. Any day that you can take Dave and Terje to school is a good day at the office.
I have my own little footnote in that book, somewhere, and know/have hung out with most of those guys. The old saying goes, "If you ever find that you're the smartest person in the room, you need to find another room," but it doesn't say what to do if you're pretty sure you're the dumbest guy in the room. It's good that the book is still in circulation despite its age, as there's a lot of wisdom left in it.
A lot of the time I find myself the dumbest guy in the room, but I'm getting used to it; as long as it's not a poker room or a trading room, it's a lot of fun to hang out with smart guys.
I knew from the title it was going to be Michael Abrash's book. There is, indeed, a lot of wisdom there. I did most of my recreational x86 assembler coding in 1993-95. I got the book when it came out (an unwieldy tome in paper form) and pored over it but, sadly, I never really did much x86 assembler after that.
There's a lot in the book that's dated, being very VGA-specific, or specific to the x86 CPUs of that day. Even so, there are lots of ideas in the book that transcend that. His advice about optimization, and about how to approach problems, is timeless.
His Zen of Assembly Language from 1990 was also great. It's actually Volume 1, but he never did a Volume 2 because it would no longer have been relevant by the time it came out. Not useful per se, but there's a lot of fascinating stuff related to the x86 processors of the era.
I convinced my parents to buy this book for me as a gift when I was in high school. It taught me enough about x86 assembly language to reverse engineer the Windows driver for an HDTV tuner card to start writing a Linux driver, and the book still influences my views on performance and profiling.
I think the best thing to take from the book is to measure your code with a timer...
The specific parts probably can't apply these days because systems are so complex (multi-core, multi-threaded, multi-processing) that you can never assume your assembly timing will be in any way accurate.
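You can still get semi-trustworthy numbers by repeating the run and keeping the minimum, though. A minimal sketch in C (the workload is a placeholder, and clock_gettime/CLOCK_MONOTONIC is just one reasonable choice, not anything from the book):

    /* Minimal timing harness: run the code under test many times and
       keep the best (minimum) time, which filters out noise from
       scheduling, caches, and other processes on a modern system. */
    #include <stdio.h>
    #include <time.h>

    static volatile long sink;          /* keeps the compiler honest */

    static void work_under_test(void)   /* placeholder workload */
    {
        long i, sum = 0;
        for (i = 0; i < 1000000; ++i)
            sum += i;
        sink = sum;
    }

    int main(void)
    {
        struct timespec t0, t1;
        double best = 1e30;
        double s;
        int run;

        for (run = 0; run < 100; ++run) {
            clock_gettime(CLOCK_MONOTONIC, &t0);
            work_under_test();
            clock_gettime(CLOCK_MONOTONIC, &t1);
            s = (t1.tv_sec - t0.tv_sec)
              + (t1.tv_nsec - t0.tv_nsec) / 1e9;
            if (s < best)               /* keep the least-disturbed run */
                best = s;
        }
        printf("best of 100 runs: %.6f s\n", best);
        return 0;
    }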
I have heard a lot of "don't optimize if you don't need it" and similar lines, though. I also heard somewhere (maybe it was HN) that modern programmers do not spend enough time on optimization: half because their education is further and further removed from the low level, and half because modern software is becoming so complex that they don't have time to optimize much.
As a starting point, don't micro-optimize each line of code as you write it, as is prevalent in C and C++ circles.
Rather, think about the problem at an abstract level, meaning the data structures and algorithms suited to the problem at hand.
For example, if it is already obvious that a section of code is going to work on a large amount of data, doing a linear search isn't going to scale (see the sketch at the end of this comment).
Then also think about memory consumption: maybe allocating all the time in a loop isn't the right approach, or, if in a GC language that also supports value types, stack and global memory allocation, and off-heap storage, consider where to place the data structures.
However, don't overdo it; validate the assumptions with a profiler and against the expectations of the end user.
For an application doing batch processing overnight, no one is going to notice that it runs 5 seconds faster with the algorithm one spent a week optimising; the only visible result is the wasted developer budget.
On the other hand, when trying to do real-time rendering at 120 FPS, every millisecond counts.
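To make those two points concrete, here is a small C sketch (all names and sizes are illustrative, not from any particular codebase): it looks keys up in a sorted array with bsearch instead of a linear scan, and allocates the scratch buffer once outside the loop instead of on every iteration.

    /* 1. On sorted data, bsearch() is O(log n) where a linear scan is O(n).
       2. Hoist the scratch allocation out of the loop instead of doing
          malloc/free on every iteration. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    static int cmp_int(const void *a, const void *b)
    {
        int x = *(const int *)a, y = *(const int *)b;
        return (x > y) - (x < y);
    }

    int main(void)
    {
        enum { N = 1000000, ITERS = 1000, SCRATCH = 4096 };
        int *data = malloc(N * sizeof *data);
        char *scratch = malloc(SCRATCH);     /* allocated once, reused */
        int i, key, hits = 0;

        if (!data || !scratch)
            return 1;
        for (i = 0; i < N; ++i)
            data[i] = i * 2;                 /* already sorted */

        for (i = 0; i < ITERS; ++i) {
            key = i * 20;
            /* O(log n) lookup instead of scanning all N elements */
            if (bsearch(&key, data, N, sizeof *data, cmp_int))
                ++hits;
            memset(scratch, 0, SCRATCH);     /* reuse, don't reallocate */
        }
        printf("%d hits\n", hits);
        free(scratch);
        free(data);
        return 0;
    }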
Don't prematurely optimize. Make your code work, make your code readable, and then profile your code.
Without profiling, you're guessing where the most impactful work can be done. You don't want to spend a week shaving off 150 ms when you could have spent a day making your program 10 s faster.
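For example, before reaching for a real profiler (gprof, perf, and friends will give you this per function without touching the code), even coarse timestamps around each phase tell you where the seconds actually go. A toy C sketch, with made-up phases:

    /* Poor man's profiling: time each phase of the program to see which
       one dominates before deciding what to optimize. */
    #include <stdio.h>
    #include <time.h>

    static volatile long sink;

    static void parse_input(void)        /* hypothetical phase 1 */
    {
        long i, s = 0;
        for (i = 0; i < 1000000; ++i) s += i;
        sink = s;
    }

    static void crunch_numbers(void)     /* hypothetical phase 2 */
    {
        long i, s = 0;
        for (i = 0; i < 100000000; ++i) s += i;
        sink = s;
    }

    static double seconds(void)
    {
        struct timespec t;
        clock_gettime(CLOCK_MONOTONIC, &t);
        return t.tv_sec + t.tv_nsec / 1e9;
    }

    int main(void)
    {
        double t0, t1, t2;
        t0 = seconds();
        parse_input();
        t1 = seconds();
        crunch_numbers();
        t2 = seconds();
        printf("parse:  %.3f s\n", t1 - t0);
        printf("crunch: %.3f s\n", t2 - t1); /* optimize the bigger number */
        return 0;
    }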
Code quality is highly subjective, though, and often what some might consider "high quality" also comes with a cost to performance.
Fortunately, the linked book provides a simple way to decide when something needs to be optimized, under "Understanding High Performance" [0]:
--
Before we can create high-performance code, we must understand what high performance is. The objective (not always attained) in creating high-performance software is to make the software able to carry out its appointed tasks so rapidly that it responds instantaneously, as far as the user is concerned. In other words, high-performance code should ideally run so fast that any further improvement in the code would be pointless.
Notice that the above definition most emphatically does not say anything about making the software as fast as possible. It also does not say anything about using assembly language, or an optimizing compiler, or, for that matter, a compiler at all. It also doesn’t say anything about how the code was designed and written. What it does say is that high-performance code shouldn’t get in the user’s way—and that’s all.
Michael Abrash is one of the most pragmatic and clear programmers I've ever seen in my lifetime. Not only is he incredibly smart, but he has a communication style that is by far one of the greatest in technical manuals.
I didn't realize that the physical black book was so expensive these days. I sure hope I kept my copy.
Use Calibre to transfer the book to a Kindle; this worked for me. However, the code is hard to read on the e-reader, so I recommend using a large display to read the book.
This book gets posted at least once a year, and I don’t even mind. Such a gem. Besides the more general advice on programming and optimization that is still relevant, the stuff at the end on the development of Quake is a really fun read.
It's been years since I read it cover-to-cover, but looking over the ToC and thinking back, the following chapters stick out as having enough general applicability (i.e., not tied up in video hardware or CPU specifics) to be less dated: 10, 14-18.
The book goes into great detail about programming VGA-era video hardware: MCGA Mode 13h, “Mode X”, and a few other variations. I read the precursor to this book when it was new and programmed for the video card I had back then.
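For anyone curious what that looks like, here is a rough sketch of entering Mode 13h and writing pixels, in DOS-era Borland/Turbo C (int86() and MK_FP() come from its <dos.h>; this only runs under real-mode DOS or an emulator such as DOSBox):

    /* VGA Mode 13h: 320x200, 256 colors, linear framebuffer at A000:0000. */
    #include <dos.h>
    #include <conio.h>

    static unsigned char far *vga;

    static void set_mode(unsigned char mode)
    {
        union REGS r;
        r.h.ah = 0x00;              /* BIOS int 10h: set video mode */
        r.h.al = mode;
        int86(0x10, &r, &r);
    }

    int main(void)
    {
        unsigned x, y;              /* unsigned: offsets go up to 63999 */
        vga = (unsigned char far *)MK_FP(0xA000, 0);
        set_mode(0x13);             /* MCGA/VGA Mode 13h */
        for (y = 0; y < 200; ++y)
            for (x = 0; x < 320; ++x)
                vga[y * 320u + x] = (unsigned char)(x ^ y); /* XOR pattern */
        getch();                    /* wait for a key... */
        set_mode(0x03);             /* ...then restore 80x25 text mode */
        return 0;
    }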
I think that a more correct title would be the chapter title "There Ain't No Such Thing as the Fastest Code (1997)", or "Optimizing word counting" or such, as it's a chapter rather than being about the whole book.
Nice. I own the actual book itself. It is seriously hardcore, and it weighs a ton. I had no idea it was rare and/or worth anything. I also own the related Zen of Graphics Programming book by Abrash.
https://github.com/jagregory/abrash-black-book/blob/4028269f...