Apologies for the wording of my comment. What I mean is that you don't have to worry as much about language libraries if you can lean on the power of your OS ecosystem (especially a *nix one) in your application (within performance and other limitations).
Under Linux, fork() is implemented using copy-on-write pages, so the only penalty that it incurs is the time and memory required to duplicate the parent's page tables, and to create a unique task structure for the child.
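To make that concrete, here's a minimal C sketch (the buffer size is arbitrary, just for illustration): the child gets a logical copy of a large buffer for only the cost of duplicating page tables, and a physical page is copied only when one side actually writes to it.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        /* Allocate and touch a large buffer in the parent; fork() will not
         * copy its contents, it only duplicates the page tables mapping it. */
        size_t len = 256 * 1024 * 1024;   /* 256 MB, arbitrary */
        char *buf = malloc(len);
        if (!buf) return 1;
        memset(buf, 'x', len);

        pid_t pid = fork();
        if (pid < 0) { perror("fork"); return 1; }
        if (pid == 0) {
            /* Child: reads are free (pages are shared); writing one byte
             * makes the kernel copy just that page, not the whole buffer. */
            buf[0] = 'y';
            _exit(0);
        }
        waitpid(pid, NULL, 0);
        free(buf);
        return 0;
    }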
> " time and memory required to duplicate the parent's page tables "
The performance penalty is fairly significant (unless it's paid up front, i.e. to set up a pool of processes, like Apache's prefork MPM). So doing a fork() per request is actually a really bad idea (as is creating a new thread per request, but that's a whole other rant I have reserved) -- I'd say it's an anti-pattern (in the Java world it's the "one thread per connection" Enterprise(TM)(R) idiocy that a lot of servlet containers do, or encourage users to do, disregarding ThreadPools and NIO).
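Here's a rough sketch of the pool approach in C (not Apache's actual code; pool size and port are arbitrary, error handling mostly omitted): fork a fixed set of workers up front, have them all accept() on the same listening socket, and never pay for fork() on the request path.

    #include <netinet/in.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/wait.h>
    #include <unistd.h>

    #define WORKERS 4   /* arbitrary pool size */

    static void worker(int listen_fd) {
        for (;;) {
            int conn = accept(listen_fd, NULL, NULL);
            if (conn < 0) continue;
            const char *resp = "HTTP/1.0 200 OK\r\n\r\nhello\n";
            write(conn, resp, strlen(resp));
            close(conn);
        }
    }

    int main(void) {
        int listen_fd = socket(AF_INET, SOCK_STREAM, 0);
        int one = 1;
        setsockopt(listen_fd, SOL_SOCKET, SO_REUSEADDR, &one, sizeof one);

        struct sockaddr_in addr = {0};
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(8080);      /* arbitrary port */
        bind(listen_fd, (struct sockaddr *)&addr, sizeof addr);
        listen(listen_fd, 128);

        /* Pay for fork() once per worker at startup, not once per request. */
        for (int i = 0; i < WORKERS; i++) {
            if (fork() == 0) {
                worker(listen_fd);        /* children loop on accept() forever */
                _exit(0);
            }
        }
        for (;;) wait(NULL);              /* parent just reaps dead workers */
    }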
The memory penalty really depends on your language and its VM (and that's assuming no memory leaks). I haven't looked at arc + mzscheme, but it's actually a big issue for me with Perl at my present work (too much functionality was implemented as cronjobs invoking Perl scripts which do system() after system() -- vs. doing it as daemons making XS calls into C libraries).
I'll grant that mzscheme and arc are likely lighter weight than Perl and its runtime, but unless you're doing pure C it's only a matter of degree.
HN isn't a commercial site and actively tries to restrict traffic. The advantage of having already made something people want is that you can make non-commercialized sites and limit the growth to the level you find comfortable :-)
Notice how I get better security without any extra effort? That is the joy of using an operating system.