NGINX open sources TCP load balancing (nginx.org)
500 points by realityking on April 20, 2015 | 117 comments



Many installations would go from haproxy->nginx to nginx->nginx. Having to support only a single product will make many devops happy.

In the same vein, haproxy is adding Lua support[1], which has been available in nginx - using openresty[2] - since 2011, and nginx core is doing the same with JavaScript[3].

Interesting times around haproxy and nginx.

[1] http://blog.haproxy.com/2015/03/12/haproxy-1-6-dev1-and-lua/

[2] http://www.openresty.org

[3] http://www.infoworld.com/article/2838008/javascript/nginx-ha...


Not sure about that... HAproxy is a proven technology in this field (very reliable and a joy to use at that), while Nginx is a newcomer and needs to establish its credibility first. I personally wouldn't use such technology as a load balancer until it is properly battle-tested. Also, I can't see much of an advantage over (proven) HAproxy - am I missing something?

As for supporting a single product, I don't see the point of that. Using Nginx for load balancing will probably be much different than using Nginx as a web server, so the learning curve is similar.

Not that I don't welcome competition, I just don't see a real need in this space.

EDIT: btw, the Lua thing was an April Fool's joke...

EDIT 2: no it wasn't, my mistake. I was surprised by this so I checked the page and jumped the gun when I saw "April 1st" on http://www.haproxy.org/news.html. Sorry about that...


Nginx has supported HTTP load balancing basically forever, using the upstream directive along with proxy_pass. It supports least-connections, round-robin, and IP-hash (sticky) methods. It is far from unproven. I have personally used it successfully in some huge projects. This new announcement simply makes the existing functionality work with TCP backends.
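
For reference, a minimal sketch of that long-standing HTTP setup (the upstream name and backend addresses are made up for illustration):

  upstream app_servers {
      least_conn;                  # or ip_hash; for sticky sessions
      server 10.0.0.1:8080;
      server 10.0.0.2:8080;
  }

  server {
      listen 80;
      location / {
          proxy_pass http://app_servers;
      }
  }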


> As for supporting a single product, I don't see the point of that.

It's not about configuration; it's about security. Fewer products in your stack means fewer things to patch. Rather than updating nginx some times and haproxy other times, you just update nginx across all your machines (both web servers and load balancers), and you're done. This also gives you more time with which to vet any given nginx update.


> It's not about configuration; it's about security. Fewer products in your stack means fewer things to patch.

Kind of the reverse of the defense-in-depth principle eh? ;-)


Defense-in-depth doesn't work very well for infrastructure software packages: many projects share the same libraries with the same vulnerabilities (e.g. OpenSSL) but still have to be updated with independent package updates.

A shared-library vulnerability means both Nginx and HAProxy get broken in their own ways, which is worse, I think, than just having your whole stack rely on one or the other, and having that one break—it's more similar to having two independent vulnerabilities arise simultaneously.


You're only going to do TLS encrypt/decrypt in one place, so in that particular case... something is wrong.

However, the scenario you describe is one where you would likely NOT be doing defense in depth, because you'd be using the same library to handle a vital piece of your security infrastructure.

Regardless, when a shared library is updated for security, you don't need to apply updates to packages using the shared library. That's kind of the point. The only exception is when the flaw is in the interface to the library.

The win entirely derives from the case of having the two independent vulnerabilities. Since they are broken in their own ways it isn't sufficient to find a way to exploit one system (which would work great for attacking a system with both). You have to find a way to exploit each, and you have to find a way to connect the two so you can get all the way through.


For anyone already using nginx in their stack, I don't think these things (being a newcomer, or needing credibility) are a very big concern. Some people are already used to using nginx as a load balancer for HTTP traffic (this new feature adds load balancing for any TCP traffic) so those users won't have much gap to cover. Also this feature was in the nginx+ version, which presumably means it has already been battle-tested by their enterprise customers.


There is a world of difference between layer-3/4 style load balancers and layer-7 load balancers. If you want to do it right, you often employ both.


I think the April Fool's joke was around HAProxy being completely rewritten in Lua: http://www.haproxy.org/news.html


Oops, you are correct... the joke is on me. Thanks for correcting me.


> EDIT: btw, the Lua thing was an April Fool's joke...

Are you sure? The 1.6 dev repo contains Lua-related code [1]

[1] http://git.haproxy.org/?p=haproxy.git;a=blob_plain;f=src/hlu...


I think the Lua rewrite was a joke, but support for Lua in haproxy is not.


HAProxy offers better DDoS mitigation configurables than nginx. That's it.


CloudFlare is built on top of OpenResty, which is basically stock nginx with ngx_lua and a bunch of other modules built in. I would argue that, if you want it to be, nginx can be much better at DDoS mitigation. You can use the limit_conn and limit_req modules to control how many connections individual IPs can make to your server for basic control.
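
A minimal sketch of those two modules (the zone names, sizes, and limits here are illustrative):

  http {
      limit_conn_zone $binary_remote_addr zone=peraddr:10m;
      limit_req_zone  $binary_remote_addr zone=perreq:10m rate=10r/s;

      server {
          listen 80;
          location / {
              limit_conn peraddr 20;            # max concurrent connections per IP
              limit_req  zone=perreq burst=50;  # absorb short bursts, reject the rest
          }
      }
  }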


You can add various things on top of nginx... but you can use stock haproxy to limit the number of connections by source IP.
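
Something along these lines, using stick tables (a sketch in 1.5-era syntax; the names and thresholds are illustrative):

  frontend ft_web
      bind :80
      stick-table type ip size 100k expire 30s store conn_cur
      tcp-request connection track-sc1 src
      tcp-request connection reject if { sc1_conn_cur gt 20 }
      default_backend bk_web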


This can also be achieved in nginx fairly easily:

http://nginx.org/en/docs/http/ngx_http_limit_conn_module.htm...


nginx is definitely not a newcomer. Someone already mentioned it, but it has proxied HTTP for many years. Moreover, it proxies WebSockets, which are basically long-lived TCP connections.


Not to mention the vast ecosystem of really great nginx plugins. The one thing I have desperately missed in both nginx and haproxy is the real-time stats and filtering found in Varnish (which supports HTTP only). I am absolutely hooked!

  // Query Times
  varnishncsa -F '%t %{VCL_Log:Backend}x %Dμs %bB %s %{Varnish:hitmiss}x "%r"'

  // Slow Queries
  varnishncsa -F '%t %{VCL_Log:Backend}x %Dμs %bB %s %{Varnish:hitmiss}x "%r"' -m "VCL_Log:SlowQuery"

  // Top URLs
  varnishtop -i RxURL

  // Top Referer, User-Agent, etc.
  varnishtop -i RxHeader -I Referer
  varnishtop -i RxHeader -I User-Agent

  // Cache Misses
  varnishtop -i TxURL

  // awesome dashboard
  varnishstat


I've been able to get real-time stats in nginx by adding custom counters with the lua plugin and exporting a stats handler.

http://wiki.nginx.org/HttpLuaModule#ngx.shared.DICT
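
Roughly like this, assuming an OpenResty/ngx_lua build ("backend", the dict name, and the key scheme are made up for illustration):

  http {
      lua_shared_dict stats 10m;    # counters shared across all workers

      server {
          listen 80;

          location / {
              proxy_pass http://backend;
              # bump a per-status counter once each request is logged
              log_by_lua '
                  local dict = ngx.shared.stats
                  local key  = "status_" .. ngx.status
                  local newval, err = dict:incr(key, 1)
                  if not newval and err == "not found" then
                      dict:add(key, 1)
                  end
              ';
          }

          # plain-text stats handler: curl http://localhost/stats
          location /stats {
              content_by_lua '
                  local dict = ngx.shared.stats
                  for _, k in ipairs(dict:get_keys()) do
                      ngx.say(k, " ", dict:get(k))
                  end
              ';
          }
      }
  }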


You can also use the collectd nginx module along with the nginx stub_status module to gather some useful metrics.


Awesome news for nginx users. But why should the first reaction be to criticize the other option? Some HAProxy users may or may not switch to this new product in the coming years. I probably won't. Even today there is a big overlap between what HAProxy, nginx, and some other tools do, and yet everybody works with what they prefer. Adding another feature that HAProxy had and nginx didn't doesn't make this a dramatic moment for the "competition" (if such a thing even exists).


nginx lacks many good improvements and fixes that the upstream devs did not accept, so it has a great many 3rd-party modules, most of which are broken by nginx version changes or require an old, buggy nginx version to run.

OpenResty, on the other hand, is a standalone product: it is built on top of nginx with many 3rd-party modules and many improvements (some accepted upstream, some not), and of course extensive Lua support.


The blog post tells more and has some nice diagrams: http://nginx.com/blog/nginx-plus-r6-released/


It's NGINX+, not the open source version.


I think the point is that this is now in the open source version.


If you click on the original link, you'll notice it says "Port from Nginx+"


When I commented the link didn't come up.


I got a 502 when visiting this URL; I think it's just irony smiling at me: http://i.imgur.com/q3n8PpZ.png


It seems the Mercurial server (usually used by only a few developers) is not ready to handle the Slashdot effect.



... and this is how I discovered the nice Waterfox browser [1]

[1] https://www.waterfoxproject.org/


Same here... heh heh heh


Not sure what's so ironic about this. That's the Mercurial server that's not responding.


Yes, the Mercurial server being mercurial isn't ironic, it's coincidental


It would be ironic if the Mercurial server were saturnine though.


Does that mean that I can now put NGINX in front of a cluster of TCP (non-HTTP) servers and get NGINX to cleverly load balance the incoming requests to the individual nodes?


> load balance the incoming requests to the individual nodes

Correct me if I am wrong, but I think this is actually incorrect, because there is no concept of "request" at the TCP level. If I understand correctly, it will rather load balance "connections".


That's correct. It's configured in much the same way as HTTP proxying, with a few different load balancing methods (least connections, consistent hashing).
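
For illustration, a minimal stream block (the addresses are made up; directive names follow the new module's docs):

  stream {
      upstream backends {
          least_conn;              # pick the backend with the fewest live connections
          server 10.0.0.1:12345;
          server 10.0.0.2:12345;
      }

      server {
          listen 12345;
          proxy_pass backends;     # relay the TCP byte stream unmodified
      }
  }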


The NGINX Plus docs on TCP Load Balancing: http://nginx.com/resources/admin-guide/tcp-load-balancing/

Something to read while we wait for the announcement page to come back up :-)


Just FYI, it's not an announcement page; it's a link to the source code commit (which probably explains why the page is down - it isn't expected to have high traffic).


I didn't see anything about proxy protocol support, which is kind of nice with TCP load balancing... http://www.haproxy.org/download/1.5/doc/proxy-protocol.txt


I agree. nginx already supports receiving the proxy protocol[0] via the http_realip module; time to go full circle.

[0]: http://nginx.org/en/docs/http/ngx_http_realip_module.html
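
For the receiving side, a sketch of what that looks like today (the trusted network is illustrative):

  server {
      listen 80 proxy_protocol;         # expect the PROXY header on each connection
      set_real_ip_from 10.0.0.0/8;      # only trust the balancer's network
      real_ip_header  proxy_protocol;   # take the client address from the PROXY header
  }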


I've had problems getting that module to work properly with AWS ELB (though I'd kind of assumed the problem was with ELB), so I'm not sure how solid the support is even for that. It'd be nice to test it against nginx itself as a baseline.


We use proxy protocol ELB -> nginx in production at CoreOS for Quay.io, but we use the Tengine 2.1.0 fork of nginx for some other patches.

http://tengine.taobao.org/


Yes, this is important if you want to load balance HTTP traffic at the TCP level and still have the remote address in the backend app. Nginx already supports decoding the proxy protocol, but I wonder if, with this new feature, you can encode the remote address with the proxy protocol.


Can someone explain how this is superior to HAproxy?


HAProxy's primary feature is HTTP/HTTPS load balancing. This new feature competes only with HAProxy's TCP load balancing support.

Note that Nginx already has a simple proxy built in that does very basic HTTP load balancing. HAProxy's is vastly superior to Nginx's in that it supports a sophisticated set of filters ("ACLs"), transformations (eg., header rewriting), queue behaviours (eg., queue limits, backup backends, health checks, retries) and proxy-specific request logging.

A big difference is that HAProxy's main balancing algorithm is "fair", in that traffic is distributed evenly among target backends, whereas Nginx's load balancing is purely round-robin (there is a third-party fair balancing module [1], but it's not maintained).

[1] https://github.com/gnosek/nginx-upstream-fair


Actually, Nginx has a couple of different load balancing algos that you can pick from -- http://nginx.org/en/docs/http/load_balancing.html


I wasn't aware of that. Thanks.


> A big difference is that HAProxy's main balancing algorithm is "fair"

I don't think that's really true - haproxy has lots of load balancing modes, none of which are called "fair". It's also very unclear what you'd mean by "fair" in this context. Least connection? Maybe, but with a traffic pattern with large volumes of very short-lived requests, least connection actually won't be very fair at all, and will end up loading some servers more than others. Roundrobin isn't really "fair" either, if you have a mix of short and long requests.


Well, "fair" was in quotes for a reason. My only point was that HAProxy has something that distributes based on state, as opposed to round-robin or random distribution.


Thank you!


Imagine for a moment you want to load balance more traffic than a single core can handle.

Now use HAproxy.

Now you see one example where nginx is kind of nice.


From what I understand, this could be used for anything TCP-related (here[0] they say it could be used to proxy MySQL, LDAP, and RTMP, and they use POP3 as an example).

I thought haproxy was used basically to proxy HTTP stuff, not general TCP.

[0] http://nginx.com/resources/admin-guide/tcp-load-balancing/


HAProxy is used for TCP too. The main benefit from my perspective is that there is one less component in the stack. Whereas I would previously need HAProxy running alongside NGINX to balance TCP and HTTP, I can now do all of this with NGINX.


But why not just use HAProxy? It is often used for plain TCP load balancing (we use it).


Because I would still need to run NGINX alongside it to handle HTTP proxying. By removing HAProxy from the stack, I have one less component to manage/maintain/upgrade.

I've used HAProxy for a long time and been very happy with it. But, everything else being equal, a stack with n-1 components is better than a stack of n components.


Why not use HAProxy for HTTP proxying? If you want to drop one component from the stack it could just as well be NGINX.


nginx can do a lot more than just HTTP proxying - it's a pretty popular webserver, and if your stack contains both nginx and haproxy right now, you must be using nginx for something haproxy can't do. So if you use both, you can drop haproxy. If you just need a reverse proxy, there's no reason to replace haproxy with nginx.


Because HAproxy's ability to work with URLs and headers is close to unusable.


Surely you must have misread the docs, it's pretty damn powerful.


I ran 1.4 in production at an 8,000+ QPS social network, have been on a team that submitted patches to Tarreau that are now in HAproxy, and very intentionally put Openresty behind it for HTTP after months of tweaking a very fragile HAproxy configuration with several applications hanging off our property's domain name. I also architected and built a LBaaS product at a well-known hosting provider using HAproxy. I didn't arrive at that conclusion by misreading the documentation, and I stand by it. It's also not a knock on HAproxy; it's just a reflection that being intelligent about HTTP is not HAproxy's primary use case. It is awesome for TCP, and I exclusively use HAproxy to balance TCP in a protocol-agnostic way.

First, doing anything intelligent with HTTP slaughters HAproxy's performance by an order of magnitude because of the way you must configure it. Second, sticking requests to a backend is easy if you have a header that you want to decide upon. If you want to elect a different backend based upon a path component, this is much harder and yields an unwieldy configuration.

HAproxy is not designed to operate extensively on HTTP. It is designed to balance quickly and efficiently, and grew HTTP intelligence because people started wanting the convenience of making HAproxy do far more than its core focus. Rather than HAproxy getting smarter about HTTP, I'd much rather have the protocols that service my applications handle themselves and use HAproxy for its bread and butter, TCP availability and balancing. I can then focus on optimizing that using HAproxy's really clever mechanisms, like keeping the entire TCP conversation in kernel memory without reading it out (which you must do to "be powerful," as you say). This also means if I want to support SPDY or HTTP/2 or Websockets, I'm not waiting for HAproxy to support them because I painted myself in a corner.

The stack I've deployed at the frontend of every startup I've ever consulted for or operated looks like this:

          /- [haproxy AZ A] -- [openresty AZ A]
    [ELB] -- [haproxy AZ B] -- [openresty AZ B]
          \- [haproxy AZ C] -- [openresty AZ C]
This is my Standard Frontend Deployment A. My other Standard Frontend Deployment, B, is if I have the budget and comprises Netscalers because of my experience with them from Google and other companies. Startup/low budget, ELB/haproxy/openresty. High budget, Netscaler and done.

We are speaking to years of my own operational experience. I apologize if it sounded like dismissal; I actually think Tarreau would agree with my observation and opinion, if I'm perfectly honest.


Well, I don't know what you refer to by "doing anything intelligent with HTTP slaughters HAproxy's performance by an order of magnitude". What intelligent processing in particular makes it slow? I'd say it's supposed to be quite the opposite, as we take great care to make it possible to write almost any configuration without having to use regexes, for example. We've added rules to rewrite parts of the request by combining other elements, thanks to the format strings used in all HTTP rules. You have sample fetch functions which extract various contents quite quickly and allow you to reinject them anywhere very quickly as well.

Some users reported more than 400k requests per second in HTTP mode, or 50 times more than what you experience. Sure, any form of HTTP processing adds a few nanoseconds to the processing time and will slightly lower the numbers. But 8k is the level of performance you should expect from tens of thousands of HTTP rules, which probably is not what you're doing.

So I'm interested in knowing what trouble you're experiencing. Feel free to bring that to the mailing list, a design is always better when more people are involved.


I'm not entirely disagreeing with your post, but there are a few comments I'd like to make.

1) HAproxy does support SPDY, now via NPN/ALPN. And you don't need http mode for this.

2) HTTP performance (and performance in general) is now much better. Particularly on newer kernels (IIRC somewhere in the 3.12-3.15 range, when splice was fixed for small objects).

3) I'm not sure why you found it fragile. I've run HAProxy in HTTP mode at much higher QPS volumes than you were seeing - this did require a bunch of tuning, and I'm actually hoping to have time sometime in the next few weeks to write some articles on doing this. But it worked, and worked well.

What's worked well for me is the following:

           /-  [haproxy]
  [router] --  [haproxy]
           \-  [haproxy]
Using anycast routing to distribute the load over the haproxy servers, which run BIRD[1]. It's simple and pretty effective - although you want to choose your router hashing method carefully, and if you use persistence (stick tables) you have to use a recent version of haproxy, with support for peering.

It is difficult to make it work, though - just how difficult I only recently appreciated when trying to help a friend over IRC make haproxy scale to very high volumes. I'm hoping that I'll find the time to write the articles I mentioned above, which will hopefully be useful to others with this sort of problem to solve.

1: http://bird.network.cz/


I am about to get involved with a startup, and I would like you to explain "Standard Frontend Deployment B" a bit more in detail. And what is ELB?


I'd challenge ELB usage in such a configuration. R53 should be enough.


It's nice for cleanly removing an HAproxy from rotation, as well as insulating against HAproxy failures without worrying about DNS caching, not to mention wildly different DNS behavior on different platforms. Some platforms unconditionally use only the first address in a round-robin A record, which is why BIND (and maybe R53) has the "randomize A records" functionality.


ELB isn't magic either. You have to CNAME to it (i.e. a mandatory AWS lookup) or use R53. Combine that with ELB's slow on-ramp issues on load spikes and the fact that you pay per GB of traffic passing through, and I don't see a benefit. R53 has various advanced synthetic record types as well as health checks.


Wow, neat. To be honest, I had no idea Route 53 had grown that functionality; I'm only just now getting back to a startup and have been in Netscaler land for a while. In that case, you bet, I'd use Route 53 now instead. That actually sounds pretty useful and I'm going to check it out. Thanks for the tip.


Professional support services are quite important. I don't know if HAProxy provides them, but I know nginx does.


They do. [0] And even their nonpaid support was absolutely awesome the one time I needed it. Not conclusive evidence by any means, but I was impressed by the way the bug (/feature request) was handled...

[0] http://www.haproxy.org/#supp


You can do TCP load balancing with haproxy; you can't do UDP, though.


[deleted]


Er, LVS, Netscaler, F5, Pen? People have been balancing DNS for ages to avoid the problems inherent to round robin, and UDP is the ideal scenario for DSR. HAproxy is unique in not supporting it, not the norm.


UDP is trivial to load balance... plenty of load balancers support it.


Load balancing UDP is trivial but totally useless without protocol awareness. Load balancing UDP properly is much less trivial, because most UDP-based protocols need to be handled specially and rewritten, since they carry IP addresses and ports and even expect reverse connections (eg: tftp). In fact, the only two UDP-based protocols that you can load balance without doing anything are syslog and DNS, and both of those are pointless to balance as they are properly dealt with by the sender.


Windows Network Load Balancing does.


HAProxy is fundamentally a L3/L4 proxy, with some specific HTTP features (all the nice throttling stuff can use only L3/L4 signals for example).



That's neat!

HAProxy has been able to do SSL termination OR pass-through. Its ability to do TCP load balancing allows it to do SSL pass-through, where SSL connections are "passed through" to other servers (so the web nodes decrypt the SSL connection rather than the load balancer). This is a good use case for those who prefer or require data to stay encrypted until the last minute (although it's not the only way to do it).
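
With this release, nginx's stream module should be able to do the same kind of pass-through. A sketch, assuming TLS terminates on the web nodes (addresses made up):

  stream {
      upstream tls_nodes {
          server 10.0.0.1:443;
          server 10.0.0.2:443;
      }

      server {
          listen 443;               # no ssl_certificate here: bytes pass through still encrypted
          proxy_pass tls_nodes;
      }
  }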

TCP load balancing is neat for doing things like load balancing MySQL connections, which aren't HTTP (although that's not necessarily recommended according to some things I've read).

I believe, but can't find the sources, that Nginx can be as efficient a load balancer as HAProxy. I for one would prefer to use Nginx over HAProxy to keep my stack simpler (same technologies throughout), although HAProxy may have more advanced balancing algorithms and more power around its TCP socket "API" for adding/removing nodes dynamically. (I think NGINX Plus can already do some of that.)

Would love to hear the opinions of those with more experience/knowledge on the differences between the two!


Yes, you're right; both are good enough, and everyone wants to simplify their backend schemas...

I want to note a big difference between haproxy and nginx: first of all, nginx is a web server (an HTTP server) that can also be used as a proxy/load balancer, whereas haproxy is a pure load balancer. Enterprise bare-metal and hardware appliances for proxying/load balancing are built on top of haproxy.

nginx is widespread because it works well as a good, minimalistic web server.


Does this include the dynamic reconfiguration feature?

http://nginx.com/resources/admin-guide/tcp-load-balancing/#u...


This is great. Perfect for when one doesn't want to deal with running a poorly supported 3rd party module in nginx. Prior to this the only other easy option was Haproxy. I'm happy.


Anyone know if the nginx TCP load balancing supports the PROXY protocol? Doesn't appear to, which is unfortunate.


It doesn't at this stage. That is in the plan, but there are other features we'd like to implement first.


Thanks. For reference, the use case is to distribute SSL negotiation without losing access to client IP addresses.


Why would somebody need a TCP load balancer in a web server?

Is there a use-case where the TCP load balancer being with the web-server made a lot of sense ?

Integrating too many features into a single piece of software can be risky, as it may compromise simplicity and the UNIX way: one tool for one job.


WebSockets (http://en.wikipedia.org/wiki/WebSocket) are a completely legitimate use.


WebSocket works over HTTP, not TCP. A properly implemented HTTP stack will have no problem passing WebSocket to the next server. Some non-compliant HTTP stacks still experience trouble with it, though.
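
For example, the usual nginx incantation for passing the handshake through its HTTP proxy ("backend" is illustrative):

  location /ws/ {
      proxy_pass http://backend;
      proxy_http_version 1.1;                   # Upgrade requires HTTP/1.1
      proxy_set_header Upgrade $http_upgrade;   # forward the client's Upgrade header
      proxy_set_header Connection "upgrade";    # hop-by-hop, so it must be re-added
  }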


Technically incorrect. WebSockets handshake over HTTP and then "work" over TCP. I'm actually curious which HTTP stacks are non-compliant and what you mean by that.

As to my original comment, you can probably get by without having full TCP load balancing.


I guess as more people adopt HTTP/2, where each user has only one TCP connection for transferring files, you have to figure out load balancing that sticks to that connection.


Maybe if you use Nginx to replace some of your load balancer appliances and reverse proxy all your traffic, but are unable to do SSL termination on that box.



Not really sure how they can call this project open source anymore. Every new feature is now tied to their subscription service. You're better off with haproxy.


Great to see functionality migrating from Plus to FOSS!


haproxy is good enough with its:

- full stats, rather than OSS nginx's stub_status (full stats are only in NGINX Plus)

- more load balancing mechanisms than OSS nginx (full support needs NGINX Plus, e.g. for sticky sessions)

For me, haproxy comes first and then nginx.

I do welcome the open-sourcing of parts of nginx, take it that way.


Can this load balance redis and memcache?


Don't know for Nginx yet, but one can balance with HAProxy, e.g. create two backends for read/write respectively.

http://blog.haproxy.com/2014/01/02/haproxy-advanced-redis-he...


Yes, it could. You would either need to have redis slave replication set up and then only load balance your read replicas, or else you would need to use sticky sessions to always send requests back to the right node. I'm not sure if sticky sessions are possible with TCP load balancing in nginx, but I assume they are.
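
A sketch of the sticky variant with the new stream module, assuming its hash directive works like the HTTP one (the replica addresses are made up):

  stream {
      upstream redis_read {
          hash $remote_addr consistent;   # a given client sticks to one replica
          server 10.0.0.11:6379;
          server 10.0.0.12:6379;
      }

      server {
          listen 6379;
          proxy_pass redis_read;
      }
  }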


Reading through the code, it's extremely tidy. Very confidence inspiring :)


Maybe they'll bring in the health checks for HTTP balancing next :).


If you don't mind building your own package, this has worked well for us: https://github.com/yaoweibin/nginx_upstream_check_module

Though I agree it'd be nice to have the functionality integrated.
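
For reference, a sketch using the directives from that module's README (the values are illustrative):

  upstream backend {
      server 10.0.0.1:8080;
      server 10.0.0.2:8080;

      check interval=3000 rise=2 fall=5 timeout=1000 type=http;
      check_http_send "HEAD / HTTP/1.0\r\n\r\n";
      check_http_expect_alive http_2xx http_3xx;
  }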


It already supports it: http://nginx.org/en/docs/http/ngx_http_upstream_module.html#...

Or are you referring to something else?


" This directive is available as part of our commercial subscription. "


huh, I thought all of NGINX was open source and so I'm confused by the title "NGINX open sources TCP load balancing."

Just a bad title or am I missing something?


A bit under two years ago, NGINX announced NGINX+ [1], with an open source "core" and paid-for extras [2]

[1]: https://news.ycombinator.com/item?id=6255592

[2]: http://nginx.com/products/feature-matrix/


Perfect timing, at least for my company.


Slightly disappointing, though: it looks like it is just TCP, with no health checks.


This is, perhaps, a canonical example of how management's attempt to monetize an open source project causes sub-optimal results in both code quality and profits.

New features should be developed and tested in an open version, so that feedback, testing, patches, and even unexpected new improvements from highly skilled enthusiasts can be incorporated much more quickly than by any closed team with QA (look at the Linux kernel).

We have seen too many examples of "acquiring" open source projects to monetize their user base (how I hate that idiotic MBA slang) which then became stagnant - from MySQL to Xen, you name it.

I wonder what Mr. Sysoev is writing these days?)


Not that I don't want me some more nginx features, but how will this work exactly? Are you suggesting these features, once developed, become closed source? Honestly, I am not sure how nginx could be profitable long term. It is so good that you don't need paid support or whatever the Plus version offers.


Guess it would work more like Fedora -> Redhat Enterprise Linux. You might be right about the profits, I have no idea..

The "open core" model is horrible IMO. It pits the open source version and the commercial version against each other. What happens when someone would like to contribute features already planned for the commercial version?


Redhat has a really clever model, btw. Once, in the times of RHEL 3 and 4, they tried to maintain a zillion patches against the vanilla kernel to be "Enterprise Linux", you know, so you could run Oracle cheaply (a hot topic at that time). Then they realized that it is much smarter to give the patches to the mainstream, so everyone benefits.

Fedora became a test-bed for new technologies, to amortize the too-rapid changes (systemd and other crap, you know), so they could provide stable and compatible RHEL versions for existing customers.

And Redhat is a services company, not a code company.


I would say that an open source code base cannot be profitable in principle (and it should be completely open, like the Linux kernel, including all the new "hot" feature development), but the services you as a developer provide could be.

Or let's say that one's knowledge and skills are profitable, not the code itself.

The code should be open so it can grow at the same pace as everything else, according to the real demand for features, like early FreeBSD, Linux, or even MySQL grew, or even nginx before 1.0.


Well, I think an OSS piece of software can be profitable if you can run it as a service. For example, WordPress (wordpress.com sells subscriptions), or Sentry (getsentry.com). Sure you can run your own, but why not pay someone to take care of that headache?

However, I am looking specifically at nginx. The open version of it is too good and too complete, and by its nature it's not something you are going to run externally to your application. It's what Alan Cox (I think) described as "legacy software", meaning it's a fundamental part of your infrastructure that you expect to just sit there and work.


There is not a single complaint about the quality of nginx or its place as a "core service"; on the contrary, it is a rare example of careful craftsmanship (Igor's code), a gold standard, if you wish.)

What I am trying to suggest is the simple idea that as soon as code becomes closed, it ceases to grow and becomes stagnant, but while it is open to everyone, like the Linux kernel, it just grows and grows. It is not possible to monetize it directly; the only working models are to develop new features sponsored by someone, as is the case for the Linux kernel, or to sell your own services alongside open code, like so many do.

It seems like the acquire-close-and-sell model does not work with open source projects, because when a project does not grow it dies (stagnates), just by the law of large numbers. Or rather, it changes its status to being a commercial product, which is a completely different story (paid developers, support, QA, etc.) to which very few people will contribute - no one wants to grow other people's wealth.


I agree with you, but knowing a little about how hard it is to stay smart about this as an open source company, I understand why it went wrong. The Open Source Way is surprisingly hard to understand, and the pressure from Sales turns the focus to short-term gains.


502, how ironic.



