So what if one of those apps gets lots of use and you want to re-allocate resources from the other 24 to it? Now you have to manually tune your server settings, or have some automated process that does it for you. With a PHP-like environment, that happens automatically. Say you have 100 interpreters, an average of 4 per app. Now app A gets 80 requests: Apache automagically re-assigns 80 interpreters to app A while the others idle. No re-configuration on your part is necessary.
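A toy sketch of that shared-pool behavior (the numbers are the ones from this example; PHP's real model uses Apache-managed processes, not Python threads, so this is just an illustration):

```python
# One pool shared by all apps: a burst for app A borrows capacity that
# the other apps aren't using, with no per-app configuration at all.
from concurrent.futures import ThreadPoolExecutor
from collections import Counter

POOL = ThreadPoolExecutor(max_workers=100)  # shared across every app

def handle(app_name):
    return app_name  # stand-in for actually running that app's request

# 80 requests for app A, a few for the rest: the single pool absorbs it.
requests = ["A"] * 80 + ["B", "C", "D"]
served = Counter(f.result() for f in [POOL.submit(handle, r) for r in requests])
print(served["A"])  # 80
```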
I am not familiar with how .NET does this; perhaps it has some mechanism for dealing with it. Here is an example from the Django/Python world. In your Apache virtual host config you have to specify the following:
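Something along these lines, using mod_wsgi's daemon mode (the process group name and script path here are placeholders):

```apache
WSGIDaemonProcess example processes=5 threads=5
WSGIProcessGroup example
WSGIScriptAlias / /path/to/mysite/wsgi.py
```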
Notice the processes=5 threads=5. This means that Apache will run 5 processes with 5 threads each. Now imagine you have several of these apps, all configured to use 5 processes and 5 threads each, which eats up 80% of the RAM on your server. Then app A gets featured on HN, and lots of requests come in for that app. You can only process 25 concurrent requests (really fewer, since Python's GIL prevents CPU-intensive load from being scheduled efficiently across the 5 threads per process). Yet while app A is getting slammed, apps B, C, D, and E sit idle. You could get more performance for app A by reducing the number of processes/threads for apps B-E and increasing them for app A, but that means doing so manually and reloading Apache. Less than ideal.
Your example is not a problem with using an efficient execution model; it is a problem with django/wsgi. In fact, your example uses the exact same model as Apache, it just sucks at it and makes you statically define the number of workers on a per-app basis. You can easily have multiple web apps running in a single application server, and the resource limits will be shared just like with a typical Apache+PHP setup.
Note that in environments where this sort of thing is trivial to do (Java, for example), virtually nobody does it, preferring to run separate servers per application anyway.
The way I understand it is that you either have a pool of interpreters per app or per set of apps. In the second case, life is easy: you can have a simple system that allocates interpreters to apps on demand. In the first case, you need a more complex solution. Perhaps the process manager (in this case Apache) could implement such a system, but thus far it has not.
There's no need for sets of interpreters at all; that's what I am saying. Python being worse at this than PHP doesn't mean PHP is good at it. Look at Go, for example: there's one app server, running as many apps as you want.
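A minimal sketch of that single-server model, in Python terms (app names and path prefixes are made up): one dispatcher serves every app, so they all draw on the same server capacity with nothing configured per app:

```python
# One WSGI dispatcher fronting several apps; whatever server runs it
# (one process, one worker pool) is shared by all of them.

def make_app(name):
    def app(environ, start_response):
        start_response("200 OK", [("Content-Type", "text/plain")])
        return [f"hello from {name}".encode()]
    return app

APPS = {"/a": make_app("app-a"), "/b": make_app("app-b")}

def dispatcher(environ, start_response):
    path = environ.get("PATH_INFO", "")
    for prefix, app in APPS.items():
        if path.startswith(prefix):
            return app(environ, start_response)
    start_response("404 Not Found", [("Content-Type", "text/plain")])
    return [b"not found"]

# Served by ONE server for all apps, e.g.:
# from wsgiref.simple_server import make_server
# make_server("", 8000, dispatcher).serve_forever()
```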
I think it's a naive attempt at describing worker processes and application scope. If you have a worker process per endpoint, which is common with .NET apps with lots of endpoints, it's quite hard to balance resources. This is true. In PHP, none of this is of consequence.
Now I've got a tiny (280 MB deployable, 5 hosts, 37 application pools, 5000 in-flight requests 24/7) behemoth on my hands, and I can testify it's an arse pain to manage resources.