I use mitmproxy and mitmdump a lot and really recommend them. Someone else said mitmproxy is not easy to hack on; I disagree. It's a really approachable project and great if you're doing anything HTTP-related. That said, there are some pain points:
* libmproxy is not recommended for external projects. Instead you're supposed to use "inline scripts" (embedding functionality in mitmproxy; a minimal example is sketched below). This is a pain point for me: I wanted to work on the captured streams in an external program that already existed, and I shouldn't need to run mitmdump to do that work. The dumped streams aren't in a standard format either; they're serialized Python objects, and dumps from different mitmproxy versions sometimes break the format.
* Performance. Having more than a couple of concurrent requests at a time tends to eat CPU. Running one browser's requests through it is fine. Running multiple browser instances (e.g., using the new headless library in Chromium for automated testing) through it while they continuously perform requests is not a good idea. The memory footprint is also quite high, but that's not a limiting factor for me.
I would like it if the dump format were standard and libmproxy were a stable API.
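To make "inline scripts" concrete, here's a minimal sketch; the hook signatures have changed across mitmproxy releases, so treat this as illustrative of recent versions rather than definitive:

    # save as inspect.py and run with: mitmdump -s inspect.py
    from mitmproxy import http

    def response(flow: http.HTTPFlow) -> None:
        # called once per completed response; tag it so it's easy to spot
        flow.response.headers["x-inspected"] = "true"
        print(flow.request.method, flow.request.pretty_url, flow.response.status_code)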
Mitmproxy author here. It is kind of interesting that one of the best ways to get feedback for your software is reading random comments on the internet - thanks for that! :)
> libmproxy is not recommended for external projects.
This is true (for the reasons outlined in [1]), but your use case is the reason why we also offer the libmproxy API. That being said, you'll see improvements here in the next release (and hopefully a lot of stability afterwards).
> The dumped streams are not in a standard format either
We would love to use JSON, but JSON does not really work with streaming. We use tnetstrings (not serialized Python objects) instead, and we've been doing schema migrations for the last five releases now. There's an example of how to read dumpfiles in Python in the repo [2]. :)
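Roughly what that example does (module paths and class names have moved around between versions, so this is a sketch against the current API, not a guarantee):

    from mitmproxy import exceptions, io

    with open("flows.dump", "rb") as f:
        reader = io.FlowReader(f)
        try:
            for flow in reader.stream():
                # each flow comes back as a deserialized Flow object,
                # not raw tnetstrings; this assumes all flows are HTTP
                print(flow.request.method, flow.request.pretty_url)
        except exceptions.FlowReadException as e:
            print(f"corrupted dumpfile: {e}")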
> Performance
Thanks for bringing that up. Scaling beyond a few concurrent users is currently not a design goal for mitmproxy - otherwise we'd probably have to start rewriting it in Rust/Go. If you need anything large-scale, the submission here is vastly superior. :)
Performance is actually the reason I started this project.
A friend of mine was building a product on top of mitmproxy for a client and was running into performance problems. He asked me for advice, and I pointed out just how little work it was to do exactly what he wanted with OpenResty instead of mitmproxy.
If you want everything, mitmproxy might be for you. If you want something fast, minimal, hackable, and not a pile of Python, then OpenResty and my approach might be for you.
My problem with using mitmproxy is that it isn't as easy to hack on.
I'm an nginx person, so learning a new Python codebase to hack on the features I want is way harder than having something simple I can extend as needed.
I found it easy to get started; it's as simple as the post says. The real benefit I see is being able to plug in arbitrary Python code to hack on requests and responses. It's very good, but I've noticed some crashes in my experiments too.
I'm confused. mitmproxy shows me request/response bodies and lets me edit and replay requests. Those seem like fundamental features (Fiddler, Burp, and mitmproxy all seem to have them). I don't see how this is done with nginx reverse proxying and logging; or is that coming in part 2 or 3, maybe?
>They all had good features, but none had all of my desired features.
Many intercepting proxies like The Fiddler with FiddlerScript and the Burp Suite through Burp Extender can be extended to have any feature you want by writing your own code or leveraging someone else's. Personally the only time I've found myself thinking I might need nginx for a debugging proxy is when I need scale. I'd rather use something that's close enough, write stuff where I need to, then focus on doing really cool things with them like finding vulnerabilities for fun and profit.
If you just want a quick proxy to inspect traffic, Apache with mod_dumpio¹ always seemed the quickest and easiest way to do it: just ProxyPass your traffic and

    # backend address below is an example; point it at your own upstream
    ProxyPass "/" "http://127.0.0.1:8080/"
    LogLevel dumpio:trace7
    DumpIOInput On
    DumpIOOutput On
Hey.
About HTTPS proxying: I can offer you a better way than creating your own CA and generating certs for every domain, which is too much work and configuration, plus compiling OpenSSL.
I have already done that, as a free service running at this address: https://ca.parasite.io
You can easily implement it with a Lua module to fetch certs for any domain and download them as ZIP, JSON, or PFX. Each bundle contains all the files you need: the root, intermediate, and target cert, with private keys of course. As the owner/developer, I can say the domain and service will keep working for years, at least until 2027 (my root cert's expiry date).
Note: generated certs are cached for 60 minutes (in nginx) to improve performance. You don't want to fetch a fresh certificate for every static file on a single page.
As the homepage states, it's strictly for developers' use. And maybe I should add a warning telling non-developers not to install the root certificate.
Thanks for the reminder.
Lots of tools generate CA certs locally. I don't have a problem with that. This is a tool that asks you to download a new root CA cert from a website. That's crazy.
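For the record, rolling your own local CA really is only a few lines. Here's a sketch using the third-party Python cryptography package; the package choice, the names, and the one-year lifetime are my own assumptions, not anything from the thread:

    # pip install cryptography  (third-party package; an assumption)
    import datetime
    from cryptography import x509
    from cryptography.x509.oid import NameOID
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import rsa

    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "Local Debug CA")])
    cert = (
        x509.CertificateBuilder()
        .subject_name(name)
        .issuer_name(name)  # self-signed: issuer == subject
        .public_key(key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(datetime.datetime.utcnow())
        .not_valid_after(datetime.datetime.utcnow() + datetime.timedelta(days=365))
        .add_extension(x509.BasicConstraints(ca=True, path_length=None), critical=True)
        .sign(key, hashes.SHA256())
    )

    # the private key never leaves your machine
    with open("ca.pem", "wb") as f:
        f.write(cert.public_bytes(serialization.Encoding.PEM))
    with open("ca.key", "wb") as f:
        f.write(key.private_bytes(
            serialization.Encoding.PEM,
            serialization.PrivateFormat.TraditionalOpenSSL,
            serialization.NoEncryption(),
        ))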
Thanks, this is really neat. I was thinking of something like this, and my only idea was to write my own from scratch. While that might have been educational, it was daunting, and I guessed it would have limited support and plenty of bugs.