Hacker News

> rate limiting, IP address filtering, caching and so on are ridiculously trivial

Maybe if you're running one instance of your API server.

But if you're not, then API gateways end up being significantly simpler.




> Maybe if you're running one instance of your API server.

I'm not sure how it differs; can you explain more? From my perspective, each API instance is just one more `server x.x.x.x:port` entry in my `upstream someapi { ... }` section of the Nginx config, be it 1 instance or 20.

No different from regular load balancing, as I see it.

For multiple APIs (api1, api2, ...) you end up with separate `location {}` blocks routing to specific `upstream {}` blocks, added on request from the dev/backend team.
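The setup described above can be sketched roughly as follows (fragment of the `http {}` context; all names, addresses, and limits are illustrative, not taken from the thread):

```nginx
# Per-client-IP rate limiting and a small cache zone, defined once.
limit_req_zone $binary_remote_addr zone=perip:10m rate=10r/s;
proxy_cache_path /var/cache/nginx keys_zone=apicache:10m;

# One upstream block per API; scaling out is one more `server` line.
upstream api1 {
    server 10.0.0.1:8080;
    server 10.0.0.2:8080;
}

upstream api2 {
    server 10.0.1.1:8080;
}

server {
    listen 80;

    # Separate location per API, routed to its upstream.
    location /api1/ {
        limit_req zone=perip burst=20;   # rate limiting
        deny 203.0.113.0/24;             # IP address filtering
        proxy_cache apicache;
        proxy_cache_valid 200 10s;       # short-term caching
        proxy_pass http://api1;
    }

    location /api2/ {
        limit_req zone=perip burst=20;
        proxy_pass http://api2;
    }
}
```

Note that `limit_req_zone` keys on the client address, so the limit applies across all upstream instances behind that Nginx node.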


It's really interesting. I just sync state and keep short-term cache in Redis; that's what I do when building any internal API anyway, so why should an external API be any different? The API servers sit behind a load balancer.
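A minimal sketch of what "sync state in Redis" buys you for rate limiting: a fixed-window counter keyed by client IP. Here a plain dict stands in for Redis (so the example is self-contained); with redis-py the two operations map to atomic `INCR` and `EXPIRE` calls, which is what makes the limit consistent across every API instance sharing the store.

```python
import time

# Stand-in for Redis: key -> (window_start, count). With real Redis,
# INCR/EXPIRE give you the same semantics shared across instances.
store = {}

def allow(client_ip, limit=5, window=60, now=None):
    """Return True if this request is within the per-window limit."""
    now = time.time() if now is None else now
    start, count = store.get(client_ip, (now, 0))
    if now - start >= window:      # window expired: start a fresh one
        start, count = now, 0
    count += 1
    store[client_ip] = (start, count)
    return count <= limit
```

Any instance consulting the same store enforces the same limit, which is why it makes no difference whether there is 1 API server behind the balancer or 20.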





