There's https://github.com/julienschmidt/go-http-routing-benchmark, which is a pretty thorough benchmark for Go multiplexers. The author's own router is consistently the fastest and uses the least memory (there's nothing suspicious about it, it really is the fastest).
The test does a good job of showing just how horribly slow some of these, like Gorilla or Martini, really are. Anyway, Bone seems better than the horrible ones, but it's far from the fastest. A snapshot:
It also crashed on one of the tests (the first time I've seen that happen, so maybe I set it up wrong).
The actual way routes are matched is important, but something else a lot of people miss is how the params get loaded. If you want to keep the *http.Request interface, there's little choice but to append values to req.URL.RawQuery so they can be pulled out via req.URL.Query().Get("...") (and, by the way, every time you call Query(), RawQuery gets re-parsed). This is the approach Bone appears to take. It's unfortunate because it results in extra string concatenation. If you want speed, you need to break Go's interface, expose your own request object, and use a pool or something for the params. A rough sketch of the RawQuery trick is below.
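For illustration, here's a minimal sketch of that RawQuery approach. The :id parameter name and the injectParam helper are invented for the example, not Bone's actual code:

```go
package main

import (
	"fmt"
	"net/http"
	"net/url"
)

// injectParam mimics the RawQuery trick: the matched path segment is
// appended to the query string so a handler can recover it through the
// standard *http.Request. Note the extra string concatenation.
func injectParam(req *http.Request, key, value string) {
	kv := url.QueryEscape(key) + "=" + url.QueryEscape(value)
	if req.URL.RawQuery == "" {
		req.URL.RawQuery = kv
	} else {
		req.URL.RawQuery += "&" + kv
	}
}

func main() {
	req, _ := http.NewRequest("GET", "/users/42?verbose=1", nil)

	// A router would do this after matching /users/:id against the path.
	injectParam(req, ":id", "42")

	// Every call to Query() re-parses RawQuery from scratch, so call it
	// once and reuse the result if you need several parameters.
	params := req.URL.Query()
	fmt.Println(params.Get(":id"), params.Get("verbose")) // 42 1
}
```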
The "horribly slow" Gorilla still completes most of my requests in under a millisecond, whereas something actually slow, such as PHP, takes around 200ms, even for the simplest requests.
Gorilla gives me a lot of comfort, but httprouter might be interesting once I remove other bottlenecks in my app. For most developers, swapping one for the other is a very premature optimization.
Takes guts to post a project on HN and expose yourself to all sorts of criticism. Here's some criticism I hope is constructive and that you can use:
* Routing of requests is done in the context of network IO, perhaps disk IO, and likely database accesses. The 500 nanoseconds you shave off your request time are likely not worth the trouble you went through, given that merely sending the bytes of an HTTP request's headers will usually take a few hundred microseconds, and milliseconds with some body. I'd suggest you attack the problem of performance by profiling for bottlenecks, to identify the parts of your code that have the greatest impact on request time (see the pprof sketch after this list).
* The benchmark you wrote is not testing your claim (of fast routing/multiplexing), since the benchmark runs against a router with no registered routes. When benchmarketing, I'd suggest creating a realistic scenario, one your users are likely to end up with. In your case, this would likely mean a dozen resources, each with about 5 routes, using all verbs (a sketch of such a benchmark is also below).
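On the profiling point, the standard library makes this cheap. A minimal sketch; the port is arbitrary:

```go
package main

import (
	"log"
	"net/http"
	_ "net/http/pprof" // registers /debug/pprof/* on the default mux
)

func main() {
	// With the blank import above, CPU and heap profiles are served
	// under /debug/pprof. Grab a 30s CPU profile of real traffic with:
	//   go tool pprof http://localhost:6060/debug/pprof/profile
	log.Fatal(http.ListenAndServe("localhost:6060", nil))
}
```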
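And on the benchmark point, here's the rough shape of a populated-router benchmark. It uses the stdlib ServeMux just for illustration, and the resource names are invented; a real suite would mirror your actual API and exercise all the verbs:

```go
package router_test

import (
	"net/http"
	"net/http/httptest"
	"testing"
)

// BenchmarkRealisticRoutes measures lookup against a populated router
// rather than an empty one.
func BenchmarkRealisticRoutes(b *testing.B) {
	mux := http.NewServeMux()
	resources := []string{"users", "posts", "comments", "tags", "files"}
	for _, r := range resources {
		// One exact route and one subtree route per resource.
		mux.HandleFunc("/"+r, func(w http.ResponseWriter, req *http.Request) {})
		mux.HandleFunc("/"+r+"/", func(w http.ResponseWriter, req *http.Request) {})
	}

	req := httptest.NewRequest("GET", "/comments/42", nil)
	w := httptest.NewRecorder()
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		mux.ServeHTTP(w, req)
	}
}
```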
On your first point, you're assuming that you know his requirements (or that his requirements weren't just to learn and have fun). I'm a lot more interested in hearing from people who handle 10K req/sec on 4 cores versus those handling 100K req/sec with 1000 cores.
Faster code with fewer allocations is always a good thing.
The way Go makes "publishing to GitHub" and "public distribution" so similar is resulting in a lot of packages up on GitHub where it's not clear to me whether they're just there, or if the author just wants to show it off, or if the author really seriously thinks people should use this, or what. I'm developing an eye twitch in reaction to "A fast, simple, powerful library to do X" showing up on /r/golang. Especially when X is one of the things we have a ton of because they aren't that hard to write in the first place. And especially when no evidence is particularly provided for any of the adjectives, except that the author really hopes they're true.
This is faster because it's doing a linear search.
Linear search is really fast when 'n' is small.
But it's horribly slow when 'n' is large.
But since HTTP route matching is never going to come up at all in your profiling unless 'n' is large, it's better to have a router that handles large values of 'n'. A toy illustration:
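This sketch only does exact matching (real routers match patterns), and the route list is invented:

```go
package main

import "fmt"

// linearMatch is the O(n) strategy: scan every registered route until
// one matches. Unbeatable for a handful of routes, painful for hundreds.
func linearMatch(routes []string, path string) int {
	for i, r := range routes {
		if r == path {
			return i
		}
	}
	return -1
}

func main() {
	routes := []string{"/", "/faq", "/app", "/users", "/posts"}
	fmt.Println(linearMatch(routes, "/app")) // 2: fast, since n is tiny

	// With thousands of routes the same scan does thousands of string
	// comparisons per request; tree-based routers like httprouter keep
	// lookup cost roughly proportional to the path length instead.
}
```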
I am not completely sure what an "http multiplexer" actually is. Is this similar to something like Apache, nginx, or a Node.js HTTP server (like Express)?
It's a router at the application level. So a request comes into the multiplexer, and you have your routes: /app goes to the app controller, /faq goes to the FAQ controller, etc. Then there's binding parameters from parts of the URL, etc.
(Normally you'd say "router" or "routing layer", but Go calls its router a "mux", so "mux" and "multiplexer" are the typical terms for request routers in Go.)
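For example, a minimal mux with the standard library (the handlers are invented):

```go
package main

import (
	"fmt"
	"log"
	"net/http"
)

func main() {
	mux := http.NewServeMux()

	// The mux looks at the request path and dispatches to a handler:
	// the application-level equivalent of nginx routing by location.
	mux.HandleFunc("/app", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "app controller")
	})
	mux.HandleFunc("/faq", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "FAQ controller")
	})

	log.Fatal(http.ListenAndServe(":8080", mux))
}
```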
Just to be clear, my goal wasn't to build the best router for every situation, but a good one for my needs. I know other routers may be better in a lot of other use cases, and that's why I wrote that my tests were for fun. I still think bone can be useful for people who have small applications and small server resources. And I'm really sorry if you feel so angry about it.