There are plenty of queries you may find yourself scattering throughout your application that you don't want to have in the database as stored procedures, if for no other reason than that it would be overkill and extra work.
If it's business logic in the query that might get re-used across independent applications (e.g., cancelling an order), then I would think a stored procedure is better. But if it's specific to that application (e.g., fetching title+description+publication date of the five most recent blog entries for a side panel), I wouldn't care to put that in the db as a stored procedure.
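If it helps to see the split in code, here's a rough sketch using plain JDBC from Scala; the connection URL, the `cancel_order` procedure, and the `blog_entries` table are all hypothetical names, only there to make the contrast concrete.

```scala
// Sketch only: assumes a Postgres JDBC driver on the classpath and that the
// hypothetical cancel_order procedure and blog_entries table exist.
import java.sql.DriverManager

object QueryPlacementSketch {
  def main(args: Array[String]): Unit = {
    val conn = DriverManager.getConnection("jdbc:postgresql://localhost/shop")
    try {
      // Shared business logic: call a stored procedure so every application
      // (web app, admin tool, batch job) cancels an order the same way.
      val cancel = conn.prepareCall("{call cancel_order(?)}")
      cancel.setLong(1, 42L)
      cancel.execute()
      cancel.close()

      // Application-specific display query: keep it in the app, since only
      // this one side panel cares about exactly these five rows.
      val recent = conn.prepareStatement(
        """SELECT title, description, published_at
          |FROM blog_entries
          |ORDER BY published_at DESC
          |LIMIT 5""".stripMargin)
      val rs = recent.executeQuery()
      while (rs.next())
        println(s"${rs.getString("title")} (${rs.getTimestamp("published_at")})")
      rs.close()
      recent.close()
    } finally conn.close()
  }
}
```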
And likely far more by now, had it become the PC instruction set architecture. The x86 instruction set was also simpler back in the late 70s, in the 8086 era when the Motorola 68000 was first released, than it is now.
Back then it only had the one register width (16-bit, e.g. "ax"), whereas now it also has the 32-bit series (e.g. "eax") and the 64-bit series (e.g. "rax"). It also now has SIMD extensions (SSE/AVX), virtualization support, and other technologies. Back then it had just one operating mode (real mode), whereas now it has protected mode, long mode, system management mode, and a few other intermediate modes (e.g. "unreal mode").
So a lot of the complexity that x86 has now was introduced after that decision was made. It's definitely conceivable that the 68000 line would have developed similarly had it been chosen instead of x86 for the PC.
The 8086 also let you address the upper and lower halves of ax, bx, cx, and dx as 8-bit registers (e.g. "ah" and "al"), so don't trick yourself into thinking there was just one register width available, in the sense that everything could only be treated as 16-bit words.
In the `logNow` example, the date argument will be eagerly evaluated, not lazily evaluated. Am I right? If so, what was supposed to be a real use case doesn't seem useful to me.
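For reference, the distinction this question hinges on looks roughly like the following in Scala. This is an assumption on my part; I don't know exactly how the article's `logNow` is declared, and the names below are hypothetical. A plain by-value parameter is evaluated at the call site before the body runs, while a by-name parameter (`=>`) re-evaluates the expression each time it's used.

```scala
import java.time.Instant

object LazyVsEagerSketch {
  // By-value parameter: the argument is evaluated once, at the call site,
  // before the method body runs -- i.e. eager evaluation.
  def logNowEager(timestamp: Instant): Unit =
    println(s"[eager] $timestamp")

  // By-name parameter (note the "=>"): the argument expression is
  // re-evaluated each time `timestamp` is referenced in the body.
  def logNowByName(timestamp: => Instant): Unit = {
    println(s"[by-name, 1st use] $timestamp")
    Thread.sleep(1000)
    println(s"[by-name, 2nd use] $timestamp") // roughly one second later
  }

  def main(args: Array[String]): Unit = {
    logNowEager(Instant.now())  // Instant.now() runs before the call
    logNowByName(Instant.now()) // Instant.now() runs inside the body, twice
  }
}
```

So if the parameter in the original example is an ordinary by-value one, the question's premise holds: the date is computed before `logNow` ever runs.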