OT, but are any ingress/gateway systems using io_uring yet?
Really enjoyed reading Cloudflare's justification for writing their Pingora gateway recently[1], a similar-ish system. I'm interested to see what systems tech (which system calls) it ends up using.
There's a ton of really great ingress/gateway tech out there. Kubernetes has a sublist that's pretty long[2]. There's a good comparison matrix[3] I ran into & it immediately made me very interested in APISIX (lots of box-ticking). I think at one point I'd also run into a benchmark somewhere & they were quite performant, top tier. Would be interested to know more about their chosen architecture & what performance optimizations, if any, they have planned/roadmapped/are thinking about.
I was trying to find a simple API gateway I could run as a single docker container on a single host. I ended up using a node-red workflow instead, because I was unable to get any of the options available to me working behind Traefik as my proxy. I tried APIMan first, then APISIX, then API Umbrella (used and developed by a US government agency, apparently), and finally Kong. I got the closest with Kong, but decided to just give node-red a try, and it worked like a charm as a proxy for another API.
Currently envoyproxy/gateway is still at an early stage. It only supports configuration via file or k8s CRDs, and many things, especially the docs, are not clear.
Envoy is great, but it's written in C++ instead of Rust, and it feels like such a missed opportunity.
Years ago we were going to write an extension to Envoy to decode our side channel session info so we could do without per-language client intelligence and additional service calls.
We didn't want to blow up production traffic - millions of dollars of transactions - because of stupid memory management and pointer blunders.
> Envoy is great, but it's written in C++ instead of Rust, and it feels like such a missed opportunity.
This comment sounds too cargo-cultish to be taken seriously.
> We didn't want to blow up production traffic - millions of dollars of transactions - because of stupid memory management and pointer blunders.
You might be surprised to learn that C++ is the tool of the trade in the high frequency trading sector. The key factor is that people who actually work on millions of dollars of transactions make technical decisions instead of blindly going down the fanboy path.
> Your attitude is incredibly rude and dismissive.
My attitude is to point out the mistake of succumbing to fanboyism and blindly going along with cargo-cult beliefs that make no sense and have no basis in reality. Complaining that it's dismissive to point out that the high frequency trading sector is built on C++ says more about your personal beliefs than anything else.
> We do incredible volume and felt this was a risk to our customers.
I really doubt you do more transactions per second than any high frequency trading company running a C++ stack. The likes of Optiver are doing just fine with C++.
Just be honest and humble and state that your expertise lies elsewhere, and your choice had zero to do with technical reasons.
A billion dollars a day in gross processing volume. You're just rude and keep doubling down with your arrogance, throwing around words like "blind" when we ran disciplined SPADE processes across senior engineering.
Nice, thanks for sharing.
It seems like an improved Kong, using the good parts of it, like Nginx and Lua. It makes me want to use it instead of the usual Kong in my next project.
Lua doesn't make it easy to create your own plugins at all. The Kong folks themselves tried to offer a good user experience for this but failed more than once.
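For anyone curious what that involves: a custom APISIX plugin is a Lua module with a JSON schema plus phase handlers. A minimal sketch, written from memory of the plugin docs (the plugin name and the `blocked_header` config field are made up for illustration):

```lua
-- rough sketch of a custom APISIX plugin skeleton; "my-plugin" and
-- "blocked_header" are invented names, not a real shipped plugin
local core = require("apisix.core")

local schema = {
    type = "object",
    properties = {
        blocked_header = { type = "string" },
    },
}

local _M = {
    version = 0.1,
    priority = 1000,        -- ordering relative to other enabled plugins
    name = "my-plugin",
    schema = schema,
}

-- validate the per-route config against the schema above
function _M.check_schema(conf)
    return core.schema.check(schema, conf)
end

-- runs in the access phase, before the request is proxied upstream
function _M.access(conf, ctx)
    if conf.blocked_header and core.request.header(ctx, conf.blocked_header) then
        return 403, { message = "blocked" }
    end
end

return _M
```

The skeleton itself isn't rocket science, but between the Nginx phase model, shared-dict state, and testing, it's easy to see why people find rolling their own plugins painful.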
1. Apache APISIX is a project of the Apache Software Foundation, while Kong is controlled by a commercial company (which makes a license change possible).
2. The Apache APISIX community is more active and vibrant
Some insights: the Apache APISIX Slack channel[1] is under the Apache Software Foundation, and 1000+ members have joined to ask questions or share cases around the Apache APISIX API Gateway or its Ingress Controller.
After asking users why they prefer Apache APISIX over other solutions, four important points came up:
1. Feature Rich: Many users need to use an API Gateway with OpenID providers (e.g., Auth0, Keycloak); other solutions sell this feature only in their enterprise products. There is a how-to guide, "Use Keycloak with API Gateway to protect your APIs".
2. Quick Support: Apache APISIX has many active contributors and maintainers who keep watching activity on GitHub[3], Slack[1], the mailing list, and other channels. When users ask questions, they respond quickly; the goal is to help users onboard fast.
3. Apache Project: Since the APISIX project was donated to the Apache Software Foundation, nobody can change its license any more, so enjoy Apache projects (https://www.apache.org).
4. The benchmarks are excellent, and the most active maintainer's explanation is here[4]: LuaJIT + Nginx.
P.S. You're welcome to join the Apache APISIX Slack[1] to discuss, and you can find many useful posts on its blog[5].
For my use case, this would have to be incredibly awesome to justify self-hosting such a complex and critical system. My current go-to solutions are AWS API Gateway, Azure API Management, and Apigee.
I share your concern about self-managing these solutions.
I only know AWS API Gateway, but I can tell you the features it offers are very limited compared to FOSS solutions.
Indeed, many members in the Slack channel reported that they came to APISIX because it's feature-rich; check its README: https://github.com/apache/apisix
The selling point for me was the ability to configure it using Kubernetes CRDs and the future support for the Gateway API (under development: https://gateway-api.sigs.k8s.io/).
Developers can now version their API within Helm charts or even YAML templates kept alongside the code in their repositories.
Apache APISIX also has several security features to help reinforce API security. For instance:
1. It can dynamically manage lots of TLS certificates and perform TLS/SSL termination, or use mTLS to communicate with the upstream;
2. Plugins like ACL, IP Restriction, CSRF, and Referrer Restriction restrict API access in different dimensions.
You can use the APISIX API Gateway as the traffic entrance that processes all business data, covering dynamic routing, dynamic upstreams, dynamic certificates, A/B testing, canary releases, blue-green deployments, rate limiting, defense against malicious attacks, metrics, monitoring alarms, service observability, service governance, etc.
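To make the rate-limiting/defense point concrete: the value of doing it at the gateway is that a request can be rejected in the access phase before it ever reaches an upstream. A toy fixed-window limiter in OpenResty-style Lua (this is just the idea, not APISIX's actual limit-req/limit-count plugins, which are cluster-aware; the `plugin-limit` shared dict is an assumption):

```lua
-- toy per-IP fixed-window rate limiter; assumes
-- `lua_shared_dict plugin-limit 10m;` is declared in nginx.conf
local store = ngx.shared["plugin-limit"]

local function allow(ip, max_per_minute)
    -- one counter per client per minute window
    local key = ip .. ":" .. math.floor(ngx.time() / 60)
    local count = store:incr(key, 1, 0, 120)  -- init at 0, expire after 120s
    return count and count <= max_per_minute
end

if not allow(ngx.var.remote_addr, 100) then
    return ngx.exit(429)  -- rejected at the edge; the upstream never sees it
end
```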
What is the main benefit of running a gateway? Auth should already be handled by the API, and adding things like rate limiting, IP address filtering, caching and so on is ridiculously trivial.
The same benefit you get from most reverse proxies. If you don't need it, then nothing. If you do need it, it's critical.
It's useful if you:
- have more than one upstream service to hide behind your api.bigcorp.com name;
- want to enforce standard authn/authz patterns across lots of teams/backend services;
- want a standard approach to all the quality-of-service management;
- want a well-defined lifecycle for your APIs;
- want a portal that describes the APIs and how they work, and that facilitates users getting access to them.
API Gateways are a thing because web servers that started out being used as reverse proxies were not that easy to configure and just did way too much web server stuff. API gateways made this easier, and added a host of security measures to make it somewhat safer when presenting APIs to the internet.
Then API management came along as a first class concern for orgs who want others to use their APIs.
It's good to see some FOSS innovation in this domain, most of the real open source API gateways are a huge mess. Kong is great, but the really useful stuff is part of the paid enterprise platform.
The main ones are the OIDC plugin, the serverless plugin, and the advanced request and response transformer plugins. The auth connectors are useful, but most orgs I work with are using OIDC or SAML.
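For those who haven't used it, the serverless plugin lets you attach a small inline Lua function to a route instead of writing and registering a full plugin. The function is stored as a string in the route config and looks roughly like this (a sketch from memory of the docs; the header name is made up):

```lua
-- the kind of inline function serverless-pre-function accepts; APISIX
-- compiles the string and runs it in the configured phase
return function(conf, ctx)
    local core = require("apisix.core")
    -- e.g. tag every request before it is proxied upstream
    core.request.set_header(ctx, "X-Gateway", "apisix")
end
```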
While you _can_ tack all of those things on to your API, many people (myself included) prefer to separate those concerns. A dedicated API Gateway can prevent a lot of additional load from hitting the backend services. A gateway can also be a nice abstraction of the public interface from the underlying service(s).
Most of the time an "API Gateway" provides nothing. Usually this kind of service is just looking at the request path and maybe the Authorization header. In general it's just a useless hop in the TCP connection chain from the client to the server.
I feel “useless hop in TCP connection chain” is treated as a virtue in “modern architecture.”
Somehow microservices and cloud native turned into that. So yeah: slow response times, wasted infrastructure, and most of the work going into I/O and serialization/deserialization.
I think we forgot what actual monoliths look like and just kept decomposing… I guess that is an appropriate term.
> Most of the time an "API Gateway" provides nothing.
This personal assertion only holds if you somehow chose to adopt a component you don't need, or blindly adopted components even though you had no idea what you were doing. Discussing those scenarios is pointless and a waste of time, though.
Meanwhile, reverse proxies and ingress controllers are fundamental components of any service that outgrows a box under the desk, and an API Gateway is nothing more than a specialized reverse proxy which offers some high-level features as part of its happy path.
An API gateway is an adapter. You can build a web API out of multiple independent web services, like microservices or simply versioned APIs. On top of that you can do anything a proxy does, like load balancing, TLS termination, rate limiting, monitoring, etc. Basically, an API gateway is a higher-level abstraction over a proxy which saves some time and effort implementing some basic features.
> Auth should already be handled by the api (...)
That works if you are running a single instance of a single service.
Once you start to run multiple instances of multiple services, things start to break down.
It's been useful when domains are routed to a bunch of microservices internally. It keeps the gateway functionality centralized. For people using kubernetes, the new k8s Gateway API will obviate a bunch of the gateway tools out there.
> Maybe if you're running one instance of your API server.
I'm not sure how it differs; can you explain more? From my perspective each API instance is just one more `server x.x.x.x:port` entry in my `upstream someapi { ... }` section of the Nginx config, be it 1 instance or 20.
No different from regular load balancing, as I see it.
For multiple APIs (api1, api2, ...) you end up with different `location {}` blocks using specific `upstream {..}` blocks, as requested by the dev/backend team.
It's really interesting. I just sync state and store a short-term cache in Redis; that's what I do when building any internal API anyway, so why do it differently with an external API? API servers sit behind a load balancer.
Good question and I would like to share more background with you all :)
The open source API Gateway project APISIX was open sourced and donated to the Apache Software Foundation by API7.ai[1] in 2019. Many people asked: why not use Kong, AWS, or other products? The main reasons were that Kong relied on PostgreSQL and that AWS API Gateway is vendor-locked. Kong now supports a DB-less deployment mode; Apache APISIX supports it too, and this mode is easy to maintain with GitOps.
[1] https://blog.cloudflare.com/how-we-built-pingora-the-proxy-t... (HN discussion: https://news.ycombinator.com/item?id=32836661, 362 points, 92 comments)
[2] https://kubernetes.io/docs/concepts/services-networking/ingr...
[3] https://kubedex.com/ingress/