On my team we use mitmproxy for observing traffic of our locally running instances of the backend for the project that I am working on.
Supposedly mitmproxy also supports HTTP/2, so it might be helpful for that as well.
We're still on HTTP/1.1 ourselves, but mitmproxy has been a really great tool for us while working on both the client and the backend of our project.
One thing I also like a lot about mitmproxy is that you can edit requests and replay them. This is useful when I need to step through the backend in a debugger for an endpoint where requests are failing or otherwise misbehaving.
A shared client-server compression state is really weird. Can someone explain why this approach was chosen? I'm assuming the idea is to keep short strings efficient by essentially maintaining a synchronized dictionary?
Yes, that's the idea - combined with the observation that requests to the same server are often repetitive (they tend to share a lot of common headers). And "synchronized dictionary" is basically the idea behind it.
Unfortunately HTTP/2 didn't really give implementations a chance to opt out of that complexity by setting the table size to 0 - the default is 4 kB, and manipulating it at runtime can race with in-flight requests and cause them to fail. That part is thankfully nicer in HTTP/3, where both peers have to opt into using dynamic compression via a synchronized header table to support it.
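To make the "synchronized dictionary" idea concrete, here's a toy sketch (my own illustration, not real HPACK wire encoding): both peers append each newly seen header to a dynamic table in the same order, so repeated headers in later requests can be sent as a small index instead of the full literal.

```python
# Toy sketch of a synchronized dynamic table (not actual HPACK).
class DynamicTable:
    def __init__(self):
        self.entries = []  # index -> (name, value), kept in sync on both sides

    def encode(self, headers):
        out = []
        for h in headers:
            if h in self.entries:
                out.append(("idx", self.entries.index(h)))  # cheap reference
            else:
                out.append(("lit", h))   # send the literal once...
                self.entries.append(h)   # ...and remember it for next time
        return out

    def decode(self, wire):
        headers = []
        for kind, payload in wire:
            if kind == "idx":
                headers.append(self.entries[payload])
            else:
                headers.append(payload)
                self.entries.append(payload)  # mirror the encoder's insert
        return headers

client, server = DynamicTable(), DynamicTable()

req1 = [("user-agent", "demo"), (":path", "/a")]
req2 = [("user-agent", "demo"), (":path", "/b")]

assert server.decode(client.encode(req1)) == req1
wire2 = client.encode(req2)
assert server.decode(wire2) == req2
assert wire2[0] == ("idx", 0)  # repeated user-agent is now just an index
```

This also shows why the state is fragile: if either side inserts or evicts an entry the other side doesn't know about (e.g. after a mid-stream table resize), every subsequent index points at the wrong entry.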