The situation gets even more extreme when you consider FPGAs. Having implemented the Mandelbrot set on one as a toy example, I found it produced high-resolution images and zooms faster than a graphics card.
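(For anyone wondering why it maps so well onto hardware: the escape-time loop below is a plain Java sketch of the per-pixel algorithm, not the FPGA design itself. Every pixel runs the same small, data-independent loop with no memory traffic, which is exactly the kind of work that pipelines well.)

    // Escape-time iteration for one pixel of the Mandelbrot set.
    // Every pixel runs this same tight loop independently, so the work
    // maps naturally onto a deep FPGA pipeline.
    static int mandelbrotIterations(double cr, double ci, int maxIter) {
        double zr = 0.0, zi = 0.0;
        int iter = 0;
        while (iter < maxIter && zr * zr + zi * zi <= 4.0) {
            double nzr = zr * zr - zi * zi + cr;   // z = z^2 + c, real part
            zi = 2.0 * zr * zi + ci;               // imaginary part
            zr = nzr;
            iter++;
        }
        return iter; // mapped to a colour by the renderer
    }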
I am a programmer; I also like to play games, and to make them.
I own an ASUS gaming laptop, and I am astonished by how badly newer games run on it, even games with graphics from 20 years ago. For example, I really like business simulation games; some use isometric graphics with extremely low-poly, untextured models for the walls, floors, and a few objects like plants and tables, and even those run slowly on my machine.
Even some 2D games run badly: Axiom Verge had several slowdowns on my machine, Luftrausers is virtually unplayable (when there is a heavy firefight on the screen the game lags severely and things start to teleport around), and Wasteland 2 runs like crap and bugs out like crazy (transparent walls, grass growing in random places, people disappearing).
Even games from companies with legendary technical reputations are not doing well; for example, I tried to run the new Wolfenstein game on my machine, and even at the lowest possible settings it goes at 3 FPS.
Meanwhile I can play emulated Wii games (Xenoblade, for example) at 60 FPS, the Stalker series (which in my opinion has amazing graphics) runs well enough that I could even add mods to make it prettier without losing too much FPS, games using the Source engine run comfortably with all the bells and whistles, and so on...
But try to play a new game, even a 2D one, say, Pillars of Eternity. The thing manages to lag for no reason.
I noticed one reason for this is the abuse of garbage collectors: many of the games I had problems with use one, and I can see the memory swinging all over the place. A particularly bad case is Kerbal Space Program: it loads everything on launch, then unloads things, then loads them again as needed. When I launch it, it hits 3 GB of memory, then slowly decreases as I play, then starts increasing again for no reason (I assume due to memory leaks) and crashes. I even started playing games with the Windows Task Manager on a second monitor, with apps sorted by memory use, so I know when a game is about to run out of memory and crash (and can save first).
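To show what I mean, here is a minimal, made-up Java sketch of the allocation pattern behind that sawtooth (none of these games are written in Java; the pattern is what matters). Short-lived objects created every frame pile up until the collector steps in, which is when the frame hitches:

    // Illustrative only: per-frame temporary allocations in a game loop.
    // Heap use climbs every frame until the GC runs, producing the
    // sawtooth memory graph and the occasional frame hitch.
    public class GcChurnDemo {
        static final class Vec2 {
            final float x, y;
            Vec2(float x, float y) { this.x = x; this.y = y; }
        }

        public static void main(String[] args) {
            Runtime rt = Runtime.getRuntime();
            for (int frame = 0; frame < 600; frame++) {          // ~10 seconds at 60 FPS
                java.util.List<Vec2> temps = new java.util.ArrayList<>();
                for (int i = 0; i < 10_000; i++) {
                    temps.add(new Vec2(i * 0.5f, frame * 0.1f)); // throwaway per-frame objects
                }
                if (frame % 60 == 0) {
                    long usedMb = (rt.totalMemory() - rt.freeMemory()) / (1024 * 1024);
                    System.out.println("frame " + frame + ": ~" + usedMb + " MB in use");
                }
                // The fix is to preallocate and reuse these buffers across frames,
                // so the collector has nothing to do in the middle of gameplay.
            }
        }
    }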
And then there is the devs' attitude toward optimization. Two days ago I finished some ARM NEON code for my iPhone project (I am doing some freelance iOS coding now) and decided to discuss it on IRC, and some devs got pissed off. They tried to convince me that the native Apple APIs were better and that my design was all wrong: instead of using a single asm function that does all the operations I need inside a single loop, I should have layered four different Apple API objects to do the same thing...
When I explained that my original code (which used OpenCV) was too slow according to the profiler (it was using 60% of the CPU time), they replied that I was "optimizing the wrong place".
Also, when I mention that I plan to make it work on the iPod 5 and iPhone 4, people counter-argue that I should just ignore those and make it run only on the iPhone 6, because apparently that is the proper solution to not having enough CPU (instead of getting the CPU manual and writing decent code).
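The argument I was making is basically loop fusion. The NEON code itself isn't reproduced here, and the per-pixel operations below (scale, offset, clamp) are placeholders, but the shape is the same: one fused pass touches each pixel once, while chaining several general-purpose stages walks the whole buffer once per stage and allocates temporaries in between. A rough Java sketch of the difference:

    // Illustration of single-pass vs. chained-stages processing.
    // The real project used hand-written ARM NEON; the operations here
    // are made up for illustration, not the actual pipeline.
    public class FusionDemo {

        // Chained approach: each stage reads and writes the whole image.
        static float[] chained(float[] pixels) {
            float[] a = new float[pixels.length];
            for (int i = 0; i < pixels.length; i++) a[i] = pixels[i] * 1.5f;      // pass 1: scale
            float[] b = new float[a.length];
            for (int i = 0; i < a.length; i++) b[i] = a[i] + 16.0f;               // pass 2: offset
            float[] c = new float[b.length];
            for (int i = 0; i < b.length; i++) c[i] = Math.min(255.0f, b[i]);     // pass 3: clamp
            return c;
        }

        // Fused approach: one loop, one trip through memory, no temporaries.
        static void fused(float[] pixels) {
            for (int i = 0; i < pixels.length; i++) {
                pixels[i] = Math.min(255.0f, pixels[i] * 1.5f + 16.0f);
            }
        }
    }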
Righteous rant, and absolutely right: there isn't really any excuse for a 2D game being slow these days. I suspect some of this is due to the unpredictability of GPU performance; that's why the Nvidia driver ships with 300 MB of compatibility shims.
A lot of modern games run slowly because devs are using common 2D/3D engines that prioritize cross-compatibility and ease of use over speed.
Testing in a lot of basic 2D engines also targets low-demand scenarios, e.g. an RPG with a dozen sprites moving around rather than, say, 500 ships and weapon shots in a shmup.
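To make that concrete, here is a minimal sketch (hypothetical, not from any particular engine) of the fixed-capacity bullet pool a shmup would typically use: bullets are preallocated once and recycled, so 500 shots on screen cost no more allocation or GC work than a dozen.

    // Minimal fixed-capacity bullet pool: preallocate once, reuse forever.
    // Spawning and retiring bullets never allocates during gameplay.
    public class BulletPool {
        static final class Bullet {
            float x, y, vx, vy;
            boolean alive;
        }

        private final Bullet[] pool;

        public BulletPool(int capacity) {
            pool = new Bullet[capacity];
            for (int i = 0; i < capacity; i++) pool[i] = new Bullet();
        }

        // Reuses a dead bullet if one is free; drops the shot otherwise.
        public void spawn(float x, float y, float vx, float vy) {
            for (Bullet b : pool) {
                if (!b.alive) {
                    b.x = x; b.y = y; b.vx = vx; b.vy = vy; b.alive = true;
                    return;
                }
            }
        }

        // Advances live bullets and retires the ones that leave the screen.
        public void update(float dt, float screenW, float screenH) {
            for (Bullet b : pool) {
                if (!b.alive) continue;
                b.x += b.vx * dt;
                b.y += b.vy * dt;
                if (b.x < 0 || b.y < 0 || b.x > screenW || b.y > screenH) b.alive = false;
            }
        }
    }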
I like your Mac Plus article; it reminds us that the user experience has not necessarily gotten better with all this processing power.