This is kind of a weird statistic to try to analyze. So many uses of nginx are just the act of putting an Apache/IIS/etc site behind nginx, so technically, both servers still have market share but you only see nginx. It's just that nginx makes it so easy to do certain things, like supporting modern HTTPS, that you might as well add it to your stack rather than replace something.
As someone younger who never really used Apache, I don't see any reason to do anything with it instead of Nginx.
Other than supporting "legacy" setups, what's the point of Nginx load balancing Apache?
Configuring nginx is just so much more intuitive.
At $main_work, the reason is that there's a bunch of RewriteRules which last I checked simply couldn't be done by NGINX.
OTOH, Apache suffered from the "slowloris" attack, so the whole shebang ended up being nginx sitting in front of a few kinds of front-end Apache instances, which in turn sit in front of a dozen or so kinds of backend Apache instances.
I find it interesting that although on those servers there are 12x more Apaches than NGINX, it might get counted as a server "using nginx"...
... and that's just because the whole she-bang sits under cloudflare, which reports Server: nginx-cloudflare ;)
Both nginx and apache are vulnerable to slowloris. To mitigate an attack like that you need an architecture with a scheduler, that kills slow connections, not a naive event loop.
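For what it's worth, the usual first-line mitigations on either server are aggressive timeouts plus per-IP connection caps; an nginx sketch (the values are illustrative, not recommendations):

```nginx
# Sketch: shedding slowloris-style clients in nginx.
http {
    client_header_timeout 10s;  # drop clients that dribble headers in slowly
    client_body_timeout   10s;  # ...or request bodies
    send_timeout          10s;  # ...or read responses slowly
    keepalive_timeout     30s;

    # cap concurrent connections per client address
    limit_conn_zone $binary_remote_addr zone=peraddr:10m;
    limit_conn peraddr 20;
}
```

As the parent notes, this only raises the cost of an attack; it doesn't make a naive event loop immune.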
Probably a lot of mod_* uses like PHP applications that haven't been migrated to php-fpm or something, and JBoss/Tomcat/Websphere/Weblogic websites. You of course can just proxy all of these things with nginx, but it's probably not worth it for most companies.
PHP is definitely a huge chunk of the reason why nginx has taken over Apache. php-fpm with nginx is the de facto standard, and PHP is far more prevalent than a lot of people think. Apache's mod_php(X) vs. nginx+php-fpm isn't even a debate. If someone is currently using Apache+mod_php, they probably have a smaller product that will eventually have to switch to nginx+fpm in order to scale.
While I imagine PHP is the single largest reason, other languages that support or expect the use of fastcgi are also very easy to configure with nginx, whereas I can count on one hand the number of businesses I've seen using Apache's mod_fcgid.
I'm probably about 2 years out of being bleeding edge, but php-fpm & nginx are far from the de facto standard. At least when you look at how the web is being served at large, looking at cPanel/WHM.
I don't believe cPanel/WHM even supports nginx yet as a standard option.
Not to be disrespectful - I know that cPanel and similar have their place - but no real business that expects to have a presence is using cPanel or any other "easy setup".
fpm is being dropped now that PHP 7 contains many of the improvements that made fpm popular. mod_php+Apache with .htaccess turned off is the faster stack. Put an nginx server in front to serve static content and that's the fastest stack going forward.
FPM isn't just popular because of speed. It's popular because of pools and the fact that it's not a giant security risk by having it installed. mod_php shares permissions across everything it executes. If you have any site on the same Apache stack as another, they're accessible to each other as far as PHP is concerned. This makes the attack surface of a website significantly larger unless you're hosting exactly one site you have locked down to one directory.
I also really doubt that mod_php with Apache on PHP 7.1 and without .htaccess is faster than nginx and php7.1-fpm in 'ondemand' mode. Even a $5 DO server can handle hundreds of requests a second to big frameworks like Drupal or Mediawiki, and they're securely separated. You can lock down permissions at the group level to the executing PHP pool, make only specific users belong to that pool, and bind a directory in their home to the actual website location.
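A per-site 'ondemand' pool of the kind described might look like this (the pool name, user, and socket path are hypothetical):

```ini
; Sketch of a per-site php-fpm pool, isolated under its own user/group.
[examplesite]
user = examplesite
group = examplesite
listen = /run/php/examplesite.sock
listen.owner = www-data
listen.group = www-data

; spawn workers only when requests arrive, reap them when idle
pm = ondemand
pm.max_children = 20
pm.process_idle_timeout = 10s
```

Each site then gets its own pool file, so a compromise of one site's PHP workers doesn't grant the permissions of another's.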
As people upgrade, many are choosing PHP 7 through mod_php.
Below are links to benchmarks and discussions around mod_php vs. fpm. These are from last year, 2016. Fast forward to today: I am seeing people move to PHP 7 and move back to mod_php. I believe we are at the start of a movement. Articles/stories will follow, but only after the fact.
The first link is about PHP-PM, which is not mod_php, and is a new and unproven stack. The second link is a completely bullshit "echo 'Hello World';" with 100 concurrent requests - that benchmark is offering the stereotypical, utterly meaningless, metric.
The fact is that Apache + mod_php will keep an instance of the PHP interpreter active in every single child httpd process. With nginx+fpm, your static assets are served directly from nginx without the overhead of an unnecessary PHP interpreter loaded into that process, while only your PHP requests are funneled to FPM. The performance overhead of having a PHP interpreter loaded into the process that is only serving a static asset is astronomical.
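That split looks something like this in practice (a minimal sketch; the docroot and FPM socket paths are hypothetical):

```nginx
# Sketch: nginx serves static files itself and funnels only
# PHP requests to an FPM socket.
server {
    listen 80;
    root /var/www/example;
    index index.php;

    location / {
        # serve the file/dir if it exists, else fall back to the front controller
        try_files $uri $uri/ /index.php?$args;
    }

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass unix:/run/php/php-fpm.sock;
    }
}
```

A request for a CSS file never touches a PHP interpreter; only requests matching the `.php` location pay the FPM round trip.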
At the end of the day, benchmark your shippable product. Never try to benchmark a "Hello World" or a Wordpress installation if you're not shipping a Hello World or Wordpress codebase. Purely based off professional experience, I have never seen a real-world app perform better on Apache+mod_php than on nginx+fpm.
The only thing PHP 7 gave us was essentially the ability to ignore HHVM as a "required performance booster". 90% of companies were already able to ignore HHVM; with the improvements made to PHP 7, it's now 95-99%+ of products that don't need to evaluate HHVM as a mandatory alternative. And yes, nginx+fpm is still the de facto standard for PHP 7; the links you have provided do not say any different.
I can only speak for myself, but I'd rather avoid the Apache HTTPD stack altogether. If you run multiple PHP FPM pools, it's nicer and easier to only recycle individual pools as necessary (rather than the whole process). Important to those of us who run lots of microservices (or even lots of PHP sites) on single hosts.
nginx is much more simple than full-service servers like Apache. Which is good if you want to do something easy fast (like terminate TLS, proxy, load-balance, simple redirect, simple header munging, etc.). And not good if you want to do something more complex and get into learning how nginx rewrite rules really work (totally not obvious), how if and other predicates really work (multiple articles in docs suggest it's not obvious at all) and what limitations are needed to achieve the simplicity and quickness. So if you want your webserver to do something complex, you'd go for Apache. But may still put nginx in front for LB, static content, pre-cache TLS, etc.
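Those "easy fast" cases really are just a handful of lines in nginx; a sketch (the hostname, cert paths, and backend port are all hypothetical):

```nginx
# Sketch: TLS termination, a simple redirect, and a reverse proxy
# in one small server block.
server {
    listen 443 ssl;
    server_name example.com;
    ssl_certificate     /etc/ssl/example.com.pem;
    ssl_certificate_key /etc/ssl/example.com.key;

    location /old-path { return 301 /new-path; }   # simple redirect

    location / {
        proxy_pass http://127.0.0.1:8080;          # backend Apache, app server, etc.
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

The flip side, as noted above, is that once you reach for `rewrite` and `if`, the behavior stops being this obvious.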
What people need to do to get their job done is very, very, frequently to work around an existing mess with new hacks that make the mess even harder to clean up. And if that's what they need to do, then they should do it.
But we should also ask ourselves how to get into such messes less often. That is, how to systematically reduce the number of early-stage design errors. One trick is to choose tools that forbid known anti-patterns.
That means the designers must work harder up-front to figure out a system that can do without the work-arounds. But that is a feature, not a bug; indeed that is what our processes should try to achieve.
> if you want your webserver to do something complex, you'd go for Apache
I would tend to disagree. Assuming "complex" = "business logic", Apache hardly seems the right choice. PHP/Python/Node/GoLang or Lua right inside nginx would be more appropriate in most cases, imo.
There are degrees of complexity, there's a kind of spectrum even. If you want a full-blown business logic that requires language like PHP or Go, it's insane to try and make Apache do it. If you need a set of simple rules that are within what Apache (including there all the module ecosystem) can and is designed to do, it would be a big mistake, costing a lot of scalability, to deploy high-level language instead. Right tool for the job, always.
Again, there are degrees of complexity. Very simple - nginx; kinda more complex - Apache; somewhat complex but still doable without a Turing-complete language - third-party Apache modules; needs a Turing-complete language or you're wasting your time - Python/PHP/Perl/pick your poison.
Shared hosting setups where you want .htaccess support (or something comparable, but same basic issue: requires some additional layer to validate and generate a centralized nginx configuration, or some other extra layer, with Apache it is built in and well-documented).
HAProxy would be a better choice if the only goal is reverse proxying to another web server; nginx excels at static files, uwsgi, stuff like that. Even the commercial/paid nginx Plus only comes close to what's already built into HAProxy, which is totally open source.
Agreed. I mean, is a reverse proxy even a server? The number of people who use Nginx for load balancing is huge, so when they say people are switching to nginx from Node.js, I'm just like: "Dude, I only use Nginx for my Kubernetes ingress, with plenty of services behind it using Node.js rather than any monolithic everything-server." So is my server Nginx, Kubernetes, Node.js, Python, or Java? :-/
I think this statistic is conceptually a little behind the times in some ways.
Nginx overwrites your origin server's Server header (https://trac.nginx.org/nginx/ticket/538), so for these stats it would get picked up as nginx even though it's only forwarding.
There's a massive amount of Apache modules and installs of stuff that has various dependencies on Apache where it's simply not worthwhile to expend any effort to convert it to Nginx, but where it's worthwhile placing in Nginx in front if you prefer Nginx for things like caching, load balancing etc.
That was the original way (at least the most common way) to do CGI. Nginx as a reverse proxy / load balancer / SSL terminator / static file server pointing to a backend running apache for mod_cgi or mod_fastcgi. And a lot of people would have both web servers running on the same server. You already have a web server, why do you need two?! Just pick one!
I'd say the most common use-case (particularly given the ubiquity of PHP on the web) is to support mod_php for people not wanting to move to php-fpm for whatever reason - often some weird legacy reason, or sometimes just familiarity and change overhead.
Apache has historically been a giant swiss army knife that will do just about everything you could want, from redirect databases to cgi to php interpreters to crazy auth setups. It did all that while still being a reasonably good workhorse for static file serving (when properly tuned and using the right worker model, event rather than threaded or process).
Nginx seems to have a different model. It does support a number of features but from what I can see it focuses on composing functionality with HTTP rather than adding more plugins.
Nginx seems to do a great job at being a load balancer and cdn-lite, and it seems like that's what the market wants out of a web server.
nginx is turning into much more than just a load balancer. Projects such as OpenResty (a full CMS built into nginx) and Kong (an API management service) are built right into nginx using Lua. We've done some minor nginx Lua extensions and are looking to offload activities such as JSON validation into the front-end nginx.
Unless I've missed something for a while, openresty is more a distribution for building apps / APIs or complex middleware on top of nginx, not a full-blown CMS. It's a pre-built collection of useful extensions & Lua modules, more of an "app platform".
Yes, json validation against schema. Bigger picture, in future we're looking to do some upstream proxy redirects based on payload. Figured the first step is short-circuiting anyone not sending us valid json.
Plus, back ends are written in a variety of languages -- not all of whom do json schema validation nicely.
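A rough sketch of that kind of edge check with the lua-nginx-module as bundled in OpenResty (the upstream name is hypothetical, and this simplified version skips schema validation and large bodies that nginx buffers to disk):

```nginx
# Sketch: short-circuit requests whose body isn't valid JSON
# before they ever reach the upstream.
location /api {
    access_by_lua_block {
        local cjson = require "cjson.safe"   -- decode returns nil on bad input
        ngx.req.read_body()
        local body = ngx.req.get_body_data()
        if not body or cjson.decode(body) == nil then
            ngx.exit(ngx.HTTP_BAD_REQUEST)
        end
    }
    proxy_pass http://upstream_app;
}
```

Full JSON Schema validation would need an additional Lua library on top of this, but the shape of the hook is the same.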
> It did all that while still being a reasonably good workhorse for static file serving (when properly tuned and using the right worker model, event rather than threaded or process).
event doesn't support HTTPS, or rather, it stops being event-based when you turn on HTTPS. So Apache can never be performant unless you have an SSL terminating proxy.
Its only advantage is that the config script is slightly easier to test, but it's still way too hard _and_ it's incredibly verbose compared to nginx config.
> event doesn't support HTTPS, or rather, it stops being event-based when you turn on HTTPS
Have you got any reference to this? I think I should have noticed, having run Apache with fairly bursty traffic.
The only reference in the documentation was a four year old one, which said it was problematic six years prior to that...
I find the Event MPM to be very stable under pressure and unless you have some really specific need it's the only sane way to run Apache. It should have been the default ages ago (in the limited sense Apache even has defaults).
2.4 is only five years old. 2.2 is more likely, if it was fixed ten years ago. In the few cases I have built Apache I usually don't bother with DSOs, and I usually run only the Event MPM.
The basis you pick your web server on shouldn't be whether it's "new" or "old". It is what your software runs on, what support contracts you have, and what functionality you need. But don't use it if you can't configure it. Preferably your organization should have knowledge about every piece of software in production.
I think you're correct, and Nginx does a nice job within that scope. I decided I wanted to learn it a bit, so I picked it for a project. I hadn't even stopped to think that I was reliant on WebDAV for editing that static content!
Long story short, I decided not to use any of the partial WebDAV implementations for Nginx and just use sshfs instead. It's so true that Apache is a swiss army knife, though.
Netcraft's web server survey shows nginx at only 20%, and shows Apache dropping below 50% way back in August 2013. That's a big difference compared to w3techs and both sources should be taken with a pinch of salt.
Web server market share depends a lot on which sites you're looking at: are you checking the top X million sites or checking every site you can possibly find out about? And also how you're deduplicating them: is every blogspot blog counted separately?
Disclaimer: I work at Netcraft (but not on the survey).
Yeah, if I recall correctly there are a whole bunch of those "parked" domains that get included sometimes - so if you count based on domain and not just on IP you'll get vastly different results.
nginx is creating a name and market for itself as a reverse proxy, even though there are better solutions for reverse-proxies out there, everything from HAProxy to Apache Traffic Server to even Apache httpd. But this is an important market to have. Why? Because it allows for the perception that the "web runs on nginx" simply because all you see are the nginx web proxies and nothing behind that.
So what are the servers behind nginx? 9 times out of 10 it is Apache httpd, and numerous instances of it at that. So for each single nginx server "seen" in these surveys, there are unknown multiples of Apache httpd behind the scenes doing the real work.
But all that messes up the popular, if incorrect, narrative that Apache httpd is dying and nginx is gobbling up instances. It's all about marketing baby, for a product that really isn't truly "open source" but more so open core. And people buy it hook, line, and sinker.
> 9 times out of 10 it is Apache httpd, and numerous instances of it at that.
A lot of your post is not wrong, but that statistic is just not right at all. Less than half of the time we see Apache instances behind NGINX, and it's mostly because it's legacy and hard to move away from. The other half of the time it's application-specific web servers, or other NGINX instances.
Source: Worked for Cloudflare and now work at NGINX
Of course, as someone who works for NGINX I would guess you would see more nginx behind nginx. Based on discussions with people who do Apache httpd support as well as what I've personally seen, 90% isn't that far off the mark. People like using httpd for dynamic content. A lot.
I think the idea is that if you count every single website on the web, a huge number of them are crappy little sites on shared hosting. Shared hosting usually relies on .htaccess rules, which are only valid under Apache.
Popular PHP CMS software also relies on .htaccess. Wordpress for example allegedly powers about a quarter of all websites, and auto-creates an .htaccess file when you enable pretty links, which basically everyone does. Drupal ships with multiple .htaccess files.
Sure it's possible to adapt this code into Nginx configuration, but there is really no reason to do that. It's far easier to set up Nginx in front of Apache and get most of the Nginx benefits that way.
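For example, the stock rules WordPress writes to .htaccess when pretty permalinks are enabled are:

```apache
# BEGIN WordPress
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteBase /
RewriteRule ^index\.php$ - [L]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /index.php [L]
</IfModule>
# END WordPress
```

The commonly cited nginx equivalent is a single `try_files $uri $uri/ /index.php?$args;` in `location /`, so translation is possible; but it has to be redone per app, which is exactly why fronting Apache is the path of least resistance.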
Or tomcat / jboss / websphere for Java applications.
There really are a lot of platforms.
Equally, there is a truckload of LAMP sites out there (Linux, Apache, MySQL, PHP) to give Apache the edge on pure quantity. It's been the standard for personal site hosting and forums for two decades. That's a long-lasting effect. That's not where there is value and work for developers though.
So your sole criterion for what constitutes a "better reverse proxy" isn't performance, isn't lowest latency, isn't full HTTP compliance, isn't dynamic reconfiguration or pluggable load balancing mechanisms, but rather configuration files??
Sorry if I don't hold your opinion to that high a standard in that case.
Configuration files are important. You understand that once you have to untangle thousands upon thousands of lines of apache configuration. Luckily for me, that can pay a decent rate :D
In the context of load balancing, on the performance + latency + load balancing mechanism + configuration files criteria, Apache is the worst by a huge margin compared to both HAProxy and nginx.
Maybe 5-7 years ago then yeah... maybe. Not even close today. Apache has lowest latency and faster total transaction time based on various benchmarks. It all depends on how you are using it.
"configuration files criteria"
Got me there. But then again, 2.4 adds a LOT of ways to even streamline that, like mod_macro, mod_define, etc...
Performance + latency => I dunno what world you live in. Apache is still stuck in the prefork era (not that it's mandatory but it is how it works most of the time). It's not even playing in the same order of magnitude.
Load balancing => Apache doesn't even support healthchecks. I won't even get into the lack of TCP/TLS support or the lack of some load balancing algorithms.
I don't know what world you live in, but Apache only runs prefork if you configure it to run in prefork. Saying that is "how it works most of the time" is complete and total nonsense. I don't even know how to parse that...
Also complete and total nonsense is the lack of health checks (which is, iirc, only available for paid nginx), TLS support and load balancing algos. I think nginx has some kind of hash LB method that httpd doesn't, although httpd has round-robin, byrequests, bytraffic, and bybusyness.
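For reference, those httpd lbmethods are configured via mod_proxy_balancer; a minimal sketch (the backend hostnames are hypothetical, and the relevant mod_proxy*/mod_lbmethod_* modules must be loaded):

```apache
# Sketch: httpd load balancing with a pluggable lbmethod.
<Proxy "balancer://mycluster">
    BalancerMember "http://app1.internal:8080"
    BalancerMember "http://app2.internal:8080"
    ProxySet lbmethod=bybusyness   # or byrequests, bytraffic
</Proxy>
ProxyPass        "/app" "balancer://mycluster"
ProxyPassReverse "/app" "balancer://mycluster"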
There are numerous modules and setups stuck in prefork mode. And the alternative with workers is a joke compared to the event loop of HAProxy and nginx.
HAProxy > nginx > apache
Of course if you compare apache to nginx, you can find stuff where nginx is lacking too.
Agreed, a lot of critical features are stripped in the open source nginx.
"There are numerous modules and setups stuck in prefork mode"
I have no idea what in the heck you are talking about. If one must use mod_php then it is recommended that you avoid a threaded MPM, but even that is no longer 100% true; you can run mod_php with Event in most implementations with no issues at all.
"stuck in prefork mode" is a nonsensical phrase. prefork is a MPM.
Just because something is threaded doesn't make it slow. Take varnish for example. There are tradeoffs on all implementations, that's why Apache httpd allows for prefork, worker(threaded) and event-based architectures which the sysadmin/devops can choose for their own particular case. But "Oog. Event be Good. Threads be Bad" is really completely missing the very real tradeoffs of both.
Apache's default is prefork, which was appropriate for running PHP apps the way they were done more than a decade ago. It is utterly inappropriate nowadays.
For load balancing, an event model trumps every other mode; that's just the way it is.
HTTP and TCP balancing are inherently single-threaded operations. There is no need for threading at all; multiple threads actually decrease performance.
In HTTPS and TLS mode, the encryption is the bottleneck, so you use one process per core (and each of those processes still needs to be event-driven).
HAProxy lets me have one process pinned down to each core of the system while network card IRQs are on a dedicated core. Apache can't do half of that.
We could get into how the nginx and HAProxy parsers are insanely optimized, whereas Apache's is not and cannot be because of the modules.
Of course, not everyone has to push 10g or 30k requests/s with their load balancers.
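The per-core pinning described above looked roughly like this in the HAProxy 1.x configuration language (the process/core counts are illustrative):

```haproxy
# Sketch: one HAProxy process per core, each pinned to its own CPU.
global
    nbproc 4
    cpu-map 1 0   # process 1 -> core 0
    cpu-map 2 1
    cpu-map 3 2
    cpu-map 4 3
```

A fifth core can then be dedicated to NIC interrupts via the OS's IRQ affinity settings, outside HAProxy itself.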
Apache default is NOT prefork, unless you are using something older than Apache 2.4. Of course it is utterly inappropriate nowadays, which is why NO ONE USES IT, but people love spreading the FUD that Apache is still prefork.
The rest of your "analysis" suffers from the same misinformation as this. I especially like "Whereas apache is not and it cannot be because of the modules.". I have no idea what in the world you mean by that. Why "because of the modules"?
And this differs from nginx how, exactly? Since it also "has modules"... So since nginx has modules it requires nginx to "parse a lot of information from the request, and make them available and editable in variables. This gives a lot of flexibility but it has a performance costs." ??
To summarize your point (and correct me if I'm mistaken), Apache httpd is legacy that people don't seem to want to make the effort to move off of, but still want the benefits of nginx, so they'll stand up nginx in front of httpd, rather than migrate off of Apache (and update their platforms to support that).
I feel the "marketing" concept is less actual marketing and more feature appeal, but I guess that's similar? Maybe not, though. Features provide actual value, whereas marketing is ... tricksy.
Apache httpd "legacy"? Seems a bit exaggerated. As far as I can tell it works just fine, have plenty of very good tutorials(like those from digital ocean), performance wise no problems(at least for our usage). And seems simpler to setup.
At my work we use IIS for our main website, but I've been asked to set up and configure a few WordPress installations. I've played around with both nginx and Apache, but it was much easier to find clear instructions for setting up WordPress on Apache, so that's what I went with. I also really like Apache's .htaccess support. It's easy to lock down access to wp-admin by just dropping a .htaccess file in there restricting access to the local IP range, instead of having to pollute the nginx config file with that sort of thing.
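That drop-in can be as small as one line (Apache 2.4 `Require` syntax; the subnet is just an example):

```apache
# Sketch: wp-admin/.htaccess restricting the directory to a local range.
Require ip 192.168.1.0/24
```

On Apache 2.2 the equivalent would be the older `Order`/`Deny`/`Allow` directives instead.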
"works just fine" is completely orthogonal to "legacy", same as "well documented", etc.
The way you're writing, you sound like you've tried one (Apache), and not really the other (nginx) and are basing your opinion on that, not on any merit-based evaluation...
A lot of Apache work load is now behind Nginx or Haproxy, I wouldn't say those numbers are entirely truthful.
Consider how Plesk panels nowadays go with Nginx proxy by default, but Apache in the backend; CPanel will probably follow soon and people have already been doing this manually for a while too.
Apache is still there, just not in as much plain sight as it used to.
That's certainly true for us; everything is just put behind a pair of Brocade Traffic Managers. I really do like, and to some extent prefer, Nginx, but we're perhaps 20 different people servicing different customers, so we standardised on Apache, because it supports everything, and the traffic managers deal with performance.
People like to think they are special, and that they need more speed, so they turn to Nginx, but most of us can easily be served by Apache.
How much of nginx's growth is, do you think, due to it being "better" than Apache httpd (which it isn't, BTW. Apache 2.4 is easily as fast and scalable as nginx), compared to either (1) The aggressive sales and marketing of NGINX the company or (2) nginx fronting Apache httpd and thus "hiding" the growth of Apache httpd usage. But there are lots of Apache httpd haters, for some reason, and so they LOVE promoting the FUD. And yeah, I am an admitted Apache fanboy so feel free to ignore my viewpoint if it shatters your world-view :)
I started needing to configure web stuff about 6 or 7 years ago. At the time, I could use nginx or Apache. I spent a bit of time (maybe 20 minutes) with Apache since I'd heard of it, thought "Ugh this is a pain and I just want to move on to the fun parts of this project and not banal config details", and then tried nginx. It was much easier, and I've never bothered to learn Apache. To be honest I've kind of assumed it wasn't worth learning since it seems to be in slow, but persistent, decline, and I've never needed to learn it for a particular project.
I wonder if anyone else had a similar experience. If the first 20 minutes are nicer with one tool over the other, I suspect most people will stick with that tool until it starts limiting them.
I don't think any of the stuff you mention is related to that.
I've had the experience in the reverse order, but with the same conclusion.
First lot of web server configuration I had to do was Tomcat. After that, IIS 6 through to 8.
Compared to any of those those, writing Apache httpd config actually seems pretty straightforward.
But yeah, Nginx is a lot plainer and I pick it most of the time. Usual time when I don't is if deploying someone else's software with complicated rules and don't have the time to port them.
Damn Tomcat config is godawful gibberish. Seems to be a running theme in Javaland.
This was exactly my experience. I learned Apache and knew how to configure it. But once I tried Nginx I never wanted to fuss with an Apache config again.
>How much of nginx's growth is, do you think, due to it being "better" than Apache httpd
If I were to take a guess based on my own experience, I'd say hobbyists and teenage experimenters are increasingly using nginx over apache because nginx configs are considerably easier to understand.
Over time, that is slowly translating to increased use in production environments as these people move into the workforce and apply their skills to production grade services.
How so? I agree that httpd's configuration language is "unique", but it is logical and easy to understand. And httpd's design has been modular from the start, supporting dynamic modules LONG before nginx ever did, plus supporting various MPM modes, including prefork, threaded, and event-based, so you can pick the right functionality and modules for your environment and use case. Maybe 10 years ago nginx had a "better design", but that's far, far from the situation today.
I would guess that many people moved before Apache 2.4, or in the early days of 2.4, and they did not look back. And with them, every new site, or old pre-2.4 site, gets upgraded toward nginx.
>Can also be re-written as "Apache Still the dominant web platform for the internet despite the upstarts.."
No, it can't. That just misses the entire point of the article. The key takeaway from it is how much Nginx has grown in the last few years by taking share away from Apache, especially those deployments that needed support for modern protocols (from the article: 76.8% of all sites supporting HTTP/2 use Nginx, while only 2.3% of those sites rely on Apache). All the others have barely made a dent.
Also read the distribution of how Nginx fares vs. Apache among the top websites. It gives a much better picture of what is happening.
>I Don't consciously know either server, I just like the way sites can spin facts differently.
Well then you really can't make the statement that it is a spin, can you?
I assume boznz meant an entire (different) article could be written with that headline. One can certainly call this or the theoretical opposite article spin.
I could certainly write a lot more than that on the topic, as could any decent writer. What one needs to learn about and one writes about is not necessarily the same thing. Again, I think the original comment here was more of a general statement about the manipulability of statistical analysis "articles".
You can write an article about anything in the world. But when it comes to statistical analysis then in a pretty objective sense there is only one interesting thing to say about this data. Despite theoretical ability to be manipulative, only one of these two angles qualifies as news.
Apache was originally released in 1995, and has since maintained an enormous user base. It was the de facto standard for at least a decade -- if you learned web dev between 1995 and 2005 you almost certainly found that (at least) 9 out of 10 books/articles/etc suggested that you use Apache.
That is, until a while after Nginx was released in 2004 and started to pick up adoption. Most articles I've seen over the past 4 years or so suggest using Nginx, and most web dev newbies I've met in the past year or two only know of Nginx.
So no, this is not about spin, this mostly about "hey, if you're still using Apache, the rate at which Nginx is being adopted just might warrant further consideration of switching, or at least adopting Nginx for further projects."
I saw a recent version of cPanel that used nginx as a proxy in front of Apache, with Apache on a high port. That might be responsible for a lot of the new nginx seen out there.
cPanel has no native support for nginx; if you've seen that, it's been using a third-party plugin. The second most popular server control panel, Plesk, supports the configuration you describe, but it's not enabled by default, so I doubt it will affect the statistics.
IIS will probably continue its nose dive as Microsoft pushes forward with .NET Core. I'm guessing people using .NET Core are more likely to use Kestrel + Nginx as a proxy.
Apache has served my purposes fine. I've used nginx in some cases, mostly when it was already configured, but I personally have not yet found a strong reason to prefer one over the other (and I would suggest that this is probably another case of overblown hype, and that Apache probably continues to work perfectly fine for most use cases).
I mostly continue to use Apache because I'm more familiar with its syntax, but it sounds like nginx is popular for use cases that we've just used purpose-fit applications like haproxy to fill.
Of course.
I don't know how you would even do shared hosting with nginx, as I don't think it has a .htaccess equivalent, and restarting the server for everyone else wouldn't be acceptable.
"Just to put that growth rate in perspective: this is 70 times the number of sites that switch to Node.js, another fast-growing web server."
I find it hard to believe this could be accurate considering the vast majority of Node.js deployments are also utilizing Nginx as a reverse-proxy in front of it. I think a large portion of nginx's uptake is actually due to Node.js' popularity.
Yeah - I think "put this behind nginx" is pretty standard practice for nodejs projects. I mean, even for relative beginners, advice like this is common (and sound).
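The standard advice amounts to a small proxy block like this (the port and hostname are hypothetical; the Upgrade headers are there for websocket support):

```nginx
# Sketch: nginx fronting a Node.js process on a local port.
server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://127.0.0.1:3000;        # the Node.js app
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;  # pass websocket upgrades through
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
    }
}
```

nginx then handles TLS, static assets, and slow clients, leaving Node to do only application work; but every such deployment gets counted as "nginx" in these surveys.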
I am really surprised no one has brought up the performance topic.
Now I have been out of web programming for half a decade or more, but if my memory is not completely gone I remember that nginx was about an order of magnitude faster than apache under heavy load.
Is it still the same nowadays?
If I'm not mistaken he created it while working at/for rambler.ru which is one of the biggest Russian websites (it was serving 500 million requests per day at that time).
"Nginx is not a political statement or software nor does it benefit its author's causes in any way."
Sure it is. Loving nginx and hating Apache httpd (especially) is shorthand for telling everyone just how high-tech and up-to-date one is. "Still using Apache? Way to go Grandpa! Get with the times... all the cool kids use nginx nowadays!".
How about choosing the best solution for the job at hand, coolness and hip-factor regardless ;)
Yes, it is still possible, though increasingly harder to do (some of my friends expressed this explicitly and publicly). And even if it wasn't, silently doing your job is one thing — and enthusiastically supporting the current regime is another.
Although the ostracization is not coming from the government, it should be noted that many face economic deprivation for expressing their personal or political views within the United States.
My understanding is that Go's net/http package is very robust, performant and state of the art. Do you have references suggesting that it has been reviewed poorly?
So you are periodically chasing another person for expressing views that contradict yours, and dragging business into it as well? That is simply terrible.
Apache doesn't use XML; its configs actually predate XML.
Yes, Apache configs are confusing, but it's not because they're in an HTML-like style, and I suspect the same thing would happen even if they had a syntax similar to Nginx's.
IMO the problem is that Apache is extremely modular and every little piece was split into a separate module. Many of those modules are pretty much essential. It doesn't help that the example Apache config then ships with many non-essential modules enabled.
One time I was motivated to make Apache run with only what was necessary. I started without any modules, and each time the config complained that something was missing, I read the documentation about it and either enabled it or removed it from the config. It took me quite a while. And I know people (in fact the vast majority) who would never bother to do that. Because of that, Nginx looks attractive out of the box, because by default you don't need any modules to have a basic server.
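The end result of that exercise looks something like the sketch below. The module list is illustrative only; the exact minimum varies by Apache version, distro, and what the site actually needs:

```apache
# Rough sketch of a stripped-down Apache 2.4 config (paths and the
# module set are assumptions, not a guaranteed minimum).
ServerRoot "/etc/httpd"
Listen 80

LoadModule mpm_event_module   modules/mod_mpm_event.so   # process model
LoadModule unixd_module       modules/mod_unixd.so       # User/Group support
LoadModule authz_core_module  modules/mod_authz_core.so  # Require directives
LoadModule mime_module        modules/mod_mime.so        # Content-Type mapping
LoadModule dir_module         modules/mod_dir.so         # DirectoryIndex

User apache
Group apache

DocumentRoot "/var/www/html"
<Directory "/var/www/html">
    Require all granted
</Directory>
DirectoryIndex index.html
```

Compare that with nginx, where an equivalent static-file server needs no LoadModule lines at all, which is exactly the out-of-the-box appeal described above.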
Similarly this is also the reason why people also moved from Sendmail to Postfix. Sendmail doesn't use XML but it also has a steeper learning curve to correctly configure it.
XML for configs is fine, as long as you treat it as a binary format. Which means having tools to help manage those configs. Too many people use XML for configs and then call it done, which makes for a terrible experience.
XML with config helpers and the ability to open it in an editor and see what is going on and tweak it is a great experience. XML with the expectation that an editor will be used to configure it is a terrible experience.
The above also applies if you replace "XML" with "JSON". :-)
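"Tools to help manage those configs" doesn't have to mean anything heavyweight; even a few lines of scripting beats hand-editing. A minimal sketch in Python using the stdlib ElementTree module; the config shape here is entirely hypothetical:

```python
# Sketch: treating an XML config as data managed by a tool rather than
# hand-edited. The <server>/<listen>/<logging> shape is made up for
# illustration.
import xml.etree.ElementTree as ET

config_xml = """<server>
  <listen port="8080"/>
  <logging level="info"/>
</server>"""

root = ET.fromstring(config_xml)

# Tweak a value programmatically instead of editing the file by hand
root.find("listen").set("port", "9090")

print(ET.tostring(root, encoding="unicode"))
```

Because the result is still plain XML, you keep the other half of the bargain: you can always open it in an editor to see what's going on.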
I know that one of the reasons that Apple went with XML for launchd was that you can define an XML schema for the files and validate them against that schema. IIRC, this claim was made in a HN (or Slashdot) discussion maybe 5 or 6 years ago.
Honestly, I think the statement, 'XML should NEVER be used, period,' is true as well: for any use case I can imagine there are superior alternatives to XML. Even if you have to interface with other services using XML, just use SXML (https://en.wikipedia.org/wiki/SXML) in your own code and marshal to XML at the very last instant.
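The "work in SXML, marshal to XML at the last instant" idea is easy to sketch in any language. Here is a toy version in Python, using nested tuples of the shape `(tag, {attrs}, child1, child2, ...)` as a stand-in for SXML; the function and document are hypothetical examples, not a real SXML library:

```python
# Toy marshaler from s-expression-style nested tuples to an XML string.
from xml.sax.saxutils import escape, quoteattr

def sexp_to_xml(node):
    # Leaf strings become escaped text content
    if isinstance(node, str):
        return escape(node)
    tag, attrs, *children = node
    attr_str = "".join(f" {k}={quoteattr(v)}" for k, v in attrs.items())
    body = "".join(sexp_to_xml(c) for c in children)
    return f"<{tag}{attr_str}>{body}</{tag}>"

doc = ("config", {},
       ("listen", {"port": "8080"}),
       ("motd", {}, "hello & welcome"))

print(sexp_to_xml(doc))
# → <config><listen port="8080"></listen><motd>hello &amp; welcome</motd></config>
```

Internally you manipulate plain nested data structures; XML only exists at the serialization boundary.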
Ultimately, XML was a colossal mistake carried out to perfection.
SGML and XML were invented as a meta-language for encoding semistructured text such as HTML with plain text editor programs, and excels in that use case like nothing else. That XML is abused as config language, payload meta-syntax for web services, for encoding component models, and even as meta-syntax for programming languages is hardly SGML/XML's fault.
HTML itself allows SGML-style tag omission since HTML was originally an application of SGML. A simple example can be found at [1], and can also be seen in action in my talk slides linked from [2].
We're talking XML rather than HTML (and omitting certain closing tags isn't compliant HTML5 anyway).
In XML there's also the convention of using self-closing tags only for "empty tags". Meaning, <tagname val="123"/> isn't considered "correct" style and <tagname>123</tagname> should be used instead, whereas s-expressions sidestep this distinction.
Sorry, but your comment re XML is incorrect. I suggest you study the HTML and XML specs, especially if you want to convince us of an alternate XML serialization.
It's hard to take a hardline stance on something with so much give and take.
XML is great because there are plenty of fast parsers for it, with bindings in pretty much every programming language. It can be modified with nothing more than a text editor by someone halfway competent.
XML is bad because it will never be quite as optimal as some binary-only solution, and editing it with a text editor is painful.
It's not like most programming languages would have a hard time with gzipped XML either. This grants most of the benefits of binary formats and is often smaller than all but the most carefully designed ones.
All of that even presumes config load time matters. In a crazy large case, maybe 10MB loads from disk, and the important stuff is cached in memory, in the actual structures that will use the data later. It just doesn't matter. Gzipped XML is fast enough for realtime high-performance games; who cares about this silly XML hating anymore? Just pick something that works and move on.
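The gzipped-XML point is trivial to verify with stdlib tooling alone. A quick sketch in Python, using a made-up repetitive asset list as the payload:

```python
# Round-trip a repetitive XML document through gzip. The <assets> schema
# here is a hypothetical stand-in for config-like data.
import gzip
import xml.etree.ElementTree as ET

root = ET.Element("assets")
for i in range(1000):
    ET.SubElement(root, "asset", id=str(i), path=f"textures/{i}.png")
raw = ET.tostring(root)

packed = gzip.compress(raw)
print(len(raw), len(packed))  # compressed form is dramatically smaller

# Decompress and parse exactly as you would the plain file
parsed = ET.fromstring(gzip.decompress(packed))
assert len(parsed) == 1000
```

Two stdlib calls on each side of the pipeline, and repetitive markup compresses extremely well, which is why "XML is too bulky" rarely survives contact with gzip.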