> Splitting some functionality across many files adds significantly to the cognitive load of figuring out what code is actually even running.
This is the crux: if your goal is to figure out what code is running, and you can keep the program in your head because it is small and simple, then splitting things up is harmful.
But there is this murky line, different for everyone, and even different for the same person from day to day, where even with the best intent, no matter how good you are, you can't keep the program in your head.
At that point, you need to give up the idea that you can. Then you change perspective and see things in chunks: split up, divide and conquer, treat portions as black boxes. Trust the documentation and the pre- and post-conditions. Debugging becomes verifying those inputs and returns, and you only dive into the code at the next level down when those expectations are violated.
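Concretely, that might look like the minimal sketch below (the function and its contract are invented for illustration): the contract lives in the docstring, and assertions at the boundary check it so you know whether to blame the caller or the implementation.

```python
def apply_discount(order_total: float, discount_pct: float) -> float:
    """Pre:  order_total >= 0 and 0 <= discount_pct <= 100.
    Post: 0 <= result <= order_total.
    """
    # Verify the caller honoured the precondition before doing any work.
    assert order_total >= 0 and 0 <= discount_pct <= 100, "precondition violated"
    result = order_total * (1 - discount_pct / 100)
    # Verify the result honours the postcondition before handing it back.
    assert 0 <= result <= order_total, "postcondition violated"
    return result
```

When something goes wrong, you first check the inputs against the precondition and the return value against the postcondition; only when one of those checks fails do you open up the implementation.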
But at some point you HAVE to be able to look at the program from above.
If you abandon the hope of understanding the code in the bigger scope, how can you ever meaningfully modify it? (i.e. add a big feature, not just tweak some small parameters)
It depends on the change.
It depends on the code organizational structures.
It depends on the consistency of the code.
It depends on the testing setup.
It depends on the experience of the person changing it.
It depends on the sensitivity of the functionality.
It depends on the team structures.
There is however one reason that trumps them all: the actual reason the code was split.
Separating the code of your SQL server, HTTP server, crypto library, framework, and standard library from your CRUD code is perfectly fine; people understand this concept well, and even the most fervent anti-Clean-Code person won't complain about this separation existing.
But there is a good reason we separate those things from our CRUD codebase: they function fine on their own, they're reusable, problems in them are easy to isolate and reproduce, and they sit at a totally different abstraction level.
The problem is separating code at the same level of abstraction, such as breaking a business-logic class into many classes for mainly aesthetic reasons: method/class/module length, or avoiding comments in the code (again, as recommended by Clean Code), the kinds of things people are mentioning in this thread.
EDIT: As someone said above, "20 files with 20 functions each does not cause high cognitive load, if the scope of each file and each function makes sense". In the end it's not the length or the number of methods/classes that matters, but how well separated they are. Having hard rules does not automatically make for good code, and it's often quite the opposite.
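To make the "aesthetic splitting" point concrete, here is a hypothetical sketch (the names and the business rule are invented): the same two lines of logic, once as a single commented function, and once fragmented purely to keep every method short.

```python
# Cohesive: one function at one abstraction level; a comment carries the intent.
def monthly_invoice_total(line_items, tax_rate):
    # Sum the items, then apply tax; negative items are credits and stay as-is.
    subtotal = sum(item.price * item.qty for item in line_items)
    return subtotal * (1 + tax_rate)


# Fragmented for length alone: each helper is trivial, but a reader now has to
# chase three definitions to reconstruct the same two lines of logic.
def monthly_invoice_total_split(line_items, tax_rate):
    return _apply_tax(_subtotal(line_items), tax_rate)

def _subtotal(line_items):
    return sum(item.price * item.qty for item in line_items)

def _apply_tax(amount, tax_rate):
    return amount * (1 + tax_rate)
```

Neither version is long, but only the second forces you to jump around to answer "what does this actually compute?"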
One last thing to consider: if you are writing a little CRUD app, it can be very simple, and you can keep it in your head.
However, can you?
You are using black-box code from a web server, a SQL database, the operating system, crypto libraries, and a ton more; you don't dive into that source code except in extraordinary circumstances, if you even can. In a large program, you end up treating code owned by you or your company the same way.
In this scenario you are still making large meaningful changes by focusing on the level of abstraction you are at.
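As a small sketch of that (the table and function are hypothetical), even the simplest CRUD code already treats the database driver as a black box and works purely at its own level of abstraction:

```python
import sqlite3

def create_user(conn: sqlite3.Connection, name: str) -> int:
    # sqlite3 is a black box here: we rely on its documented behaviour
    # (execute, commit, lastrowid) without ever reading its C source.
    cur = conn.execute("INSERT INTO users(name) VALUES (?)", (name,))
    conn.commit()
    return cur.lastrowid
```

The meaningful change you make is to `create_user` and its callers, not to the driver underneath it.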