More of a meta-question; I'd love to know if people do self-directed learning more frequently, or if they work through full (or partial) textbooks to learn concepts they're interested in. I have a hard time getting through full books, but I have no doubt this leads to gaps in knowledge that someone following a textbook would not have.
I do both. In my case, textbooks are there to fill theoretical needs/gaps. I go through full books in some cases; in others it is not feasible for various reasons (difficulty of the subject, need, content, 500+ pages, etc.). I find that some of the newer textbooks give a helpful flowchart of chapters so one can choose their own "adventure" based on interest or curriculum needs (of an instructor).
On occasion I abandon a book after a chapter or two if I don't think it serves my needs or the subject matter is too difficult for me. In the latter case, I may look for an alternative or come back to it once I have gained the prerequisite knowledge.
This will load up multiple processes like you say. OP loads a large dataset, and Gunicorn would copy that dataset into each process. I have never figured out shared memory with Gunicorn.
> Gunicorn would copy that dataset in each process
Assuming you're on Linux/BSD/macOS, sharing read-only memory is easy with Gunicorn (as opposed to actual POSIX shared memory, for which there are multiprocessing wrappers, but they're much harder to use).
To share memory in copy-on-write mode, add a call that loads your dataset into something global (e.g. a global or class variable, or an lru_cache of a free/class/static method) in Gunicorn's "when_ready" config hook[1].
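A minimal `gunicorn.conf.py` sketch of that idea; the `DATASET` global and `data.json` file are hypothetical names, and the application would import the dict from this config module (or any other module loaded before forking):

```python
# gunicorn.conf.py -- a minimal sketch; DATASET and "data.json" are
# hypothetical. The app reads the dict, e.g. `from gunicorn_conf import DATASET`.
import json

DATASET = {}  # populated once in the master, inherited copy-on-write by workers

def when_ready(server):
    # Gunicorn server hook: runs in the master process, before requests
    # are served, so the loaded data is shared with every forked worker.
    with open("data.json") as f:
        DATASET.update(json.load(f))
```

Mutating the dict inside the hook (rather than rebinding the name) keeps the module-level reference valid for anything that imported it.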
This will load your dataset once on server start, before any processes are forked. After the processes are forked, they'll have access to that dataset in copy-on-write mode (this behavior is not specific to Python/Gunicorn; rather, it's a core behavior of fork(2)). If those processes do need to mutate the dataset, they'll only mutate their own copy-on-write copies of it, so their mutations won't be visible to other parallel Gunicorn workers. In other words, with two Gunicorn workers, if one request mutates the dataset, a subsequent request has only a 50% chance of landing on the worker that saw that mutation.
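The fork(2) copy-on-write behavior described above can be demonstrated in plain Python without Gunicorn at all (POSIX-only, since it uses `os.fork`):

```python
import os

# Loaded "before fork", like a dataset in the Gunicorn master process.
data = {"count": 0}

pid = os.fork()
if pid == 0:
    # Child process: this writes only to the child's copy-on-write pages.
    data["count"] = 99
    os._exit(0)

os.waitpid(pid, 0)
# The parent's view is unchanged: the child's mutation was private to it.
print(data["count"])  # 0
```

The same logic explains the Gunicorn case: each worker is a forked child, so each worker's writes stay in that worker.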
If you do need mutable shared memory, you could either check out databases/caches as other commenters have mentioned (Redislite[2] is a good way to embed Redis as a per-application cache into Python without having to run or configure a separate server at all; you can launch it in Gunicorn's "when_ready" as well), or try true shared memory[3][4].
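For the "true shared memory" route, Python's standard library has `multiprocessing.shared_memory`, which gives a named block that any process can attach to by name. A minimal sketch (the payload here is just illustrative):

```python
from multiprocessing import shared_memory

# Create a named shared block and write into it.
shm = shared_memory.SharedMemory(create=True, size=16)
shm.buf[:5] = b"hello"

# A second process would attach with SharedMemory(name=shm.name);
# here we attach from the same process just to show the API.
attached = shared_memory.SharedMemory(name=shm.name)
msg = bytes(attached.buf[:5]).decode()
print(msg)  # hello

attached.close()
shm.close()
shm.unlink()  # free the block once no process needs it
```

Unlike copy-on-write, writes through `shm.buf` are visible to every attached process, but you are down at the bytes level: coordinating structure and locking on top of it is what makes this approach "much harder to use" than the read-only trick.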
One way to achieve similar performance is redis or memcached running on the same node. It really depends on the workload too. If it is lookups by key without much post-processing, that architecture will probably work well. If it's a lot of scanning, or a lot of post-processing, in-process caching might be the way to go, maybe with some kind of request affinity so that the cache isn't duplicated across each process.
"I want the free service to work for my edge case"
If you want to continue serving your clients, you can still pay for a certificate from an authority trusted by your outdated clients. If you want to continue serving the old clients for free, ask them to use another free browser that will allow them to do so.
I think the intention, though, is to confront it rather than just ignore it: either to help the interviewer realize it's not acceptable, or to make them less likely to do it again.