I've had the pleasure and displeasure of working with small datasets (~7.5GB of images) in the shell. One often needs to send SIGINT to the shell when it starts to glob-expand or tab-complete a folder with millions of files. But besides minor issues like that, command-line tools get the job done.
Until semi-recently, millions of files in a directory would not only choke up the shell, but the filesystem too. ext4 is a huge improvement over ext3 in that regard; with 10m files in an ext3 directory you ended up with long hangs on various operations. And even with ext4, make sure not to NFS-export the volume that directory is on!
I've encountered this (or a similar) issue in production.
We had a C++ system that wrote temporary files to /tmp when printing. /tmp was cleared on system startup, and it worked fine for years, but the files accumulated. At some point it started randomly throwing file access errors when trying to create these temporary files. Not for every file - only for some of them.
The disk wasn't full, and some files could be created in /tmp while others couldn't. After a few days of tracking it down, it turned out that the filesystem can be overwhelmed by too many similarly named files in one directory - it couldn't create a file XXXX99999 even though no such file existed in that directory, yet it could happily create files like YYYYY99999 :)
I just love such bugs where your basic assumptions turn out to be wrong.
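If you want to see that assumption break for yourself, here's a rough reproduction sketch (the /tmp/many_files path, the name prefix, and the file count are all made up for illustration). My understanding is that on ext3/ext4 with dir_index, the directory's hash tree can fill up for names that collide, so open() starts failing (typically with ENOSPC) even though df shows plenty of free space and the exact name doesn't exist yet:

    // Hammers one directory with files sharing a long common prefix,
    // mimicking the XXXX99999-style names, and reports the first failure.
    #include <cerrno>
    #include <cstdio>
    #include <cstring>
    #include <string>
    #include <fcntl.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main() {
        const std::string dir = "/tmp/many_files";   // hypothetical test directory
        mkdir(dir.c_str(), 0755);                    // ignore EEXIST on reruns

        for (long i = 0; i < 10000000; ++i) {
            char name[256];
            std::snprintf(name, sizeof(name), "%s/print_spool_temp_%09ld",
                          dir.c_str(), i);

            // O_EXCL guarantees the name really didn't exist when creation fails.
            int fd = open(name, O_CREAT | O_EXCL | O_WRONLY, 0644);
            if (fd < 0) {
                // The interesting case is ENOSPC coming from the directory
                // index, not from the disk actually being full.
                std::fprintf(stderr, "failed at file #%ld (%s): %s\n",
                             i, name, std::strerror(errno));
                return 1;
            }
            close(fd);
        }
        std::puts("created all files without error");
        return 0;
    }

On a modern ext4 volume you may never hit the limit with this count, so treat it as a sketch of the failure mode rather than a guaranteed repro.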