It’s great to see that Mitmproxy is still being developed - it indirectly made my career.
Back in 2011, I was using it to learn API development by intercepting mobile app requests when I discovered that Airbnb’s API was susceptible to Rails mass assignment (https://github.com/rails/rails/issues/5228). I then used it to modify some benign attributes, reached out to the company, and it landed me an interview. The rest is history.
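For anyone unfamiliar with the bug class, here's a hypothetical sketch in Python rather than Rails (not Airbnb's or Rails' actual code; the class and field names are made up) of why mass assignment is dangerous:

```python
# Hypothetical sketch of the mass-assignment bug class: blindly applying
# every client-supplied parameter to a model object.

class User:
    def __init__(self):
        self.name = "alice"
        self.is_admin = False  # should never be client-controlled

def unsafe_update(user, params):
    # Vulnerable: trusts every key the client sends.
    for key, value in params.items():
        setattr(user, key, value)

def safe_update(user, params, allowed=("name",)):
    # Fixed: an explicit allow-list, the idea behind Rails' later
    # "strong parameters".
    for key in allowed:
        if key in params:
            setattr(user, key, params[key])

evil_params = {"name": "mallory", "is_admin": True}

victim = User()
unsafe_update(victim, evil_params)
print(victim.is_admin)  # True: privilege escalation

victim = User()
safe_update(victim, evil_params)
print(victim.is_admin)  # False: the extra attribute is ignored
```

Intercepting a mobile app's traffic just makes it easy to discover which extra attributes a server will accept.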
> Chrome does not trust user-added Certificate Authorities for QUIC.
Interesting. In the linked issue, the Chrome team says:
> We explicitly disallow non-publicly-trusted certificates in QUIC to prevent the deployment of QUIC interception software/hardware, as that would harm the evolvability of the QUIC protocol long-term. Use-cases that rely on non-publicly-trusted certificates can use TLS+TCP instead of QUIC.
I don't follow the evolution of those protocols, but I'm not sure what disallowing custom certificates has to do with the "evolvability" of the protocol... Does anyone know what those reasons are?
If I were to guess, it's to give Google freedom to experiment with changes to QUIC, since they control both the client and large server endpoints (Google Search, YouTube, etc.).
They can easily release a slightly tweaked QUIC version in Chrome and support it on e.g. YouTube, then use metrics from that to inform proposed changes to the "real" standard (or just continue to run the special version for their own stuff).
If they were to allow custom certificates, enterprises using something like Zscaler's ZIA to MITM employee network traffic would risk breaking when Google tweaks the protocol. If the data stream is completely encrypted and opaque to middleboxes, Google can more or less do whatever they want.
Kinda related: https://en.wikipedia.org/wiki/Protocol_ossification
Middleboxes (https://en.m.wikipedia.org/wiki/Middlebox) are a well-known source of protocol stagnation. Using a new protocol extension usually just requires the client and server to upgrade, but with middleboxes there are N other devices that potentially need updating as well. Where the user (client) and service provider (server) are motivated to adopt new feature sets, the owners of middleboxes might be far less so. The net effect is that protocols become hard to evolve.
> I don't follow the evolution of those protocols, but I'm not sure what disallowing custom certificates has to do with the "evolvability" of the protocol...
One of the reasons for developing HTTP/2 and HTTP/3 was that it had become so difficult to make changes to HTTP/1.1: middleware relied heavily on implementation details, so it was hard to tweak things without inadvertently breaking people. They're trying to avoid a similar situation with the newer versions.
There was the case of Kazakhstan installing certs to MITM its citizens a couple of years ago, and a bunch of cases where bad actors socially engineered people into installing certs.
I think it's because of the KZ case that browsers, and Chrome especially, went for using only their own cert store instead of the operating system's.
If your company requires communications to be monitored, the typical enforcement is a custom company CA installed on company equipment. Then they intercept TLS and proxy it.
Those proxies tend to be strict in what they accept, and slow to learn new protocol extensions. If Google wants to use Chrome browsers to try out a new version of QUIC with its servers, proxies make that harder.
It can seem confusing, but it all makes sense when you realise Chrome is designed to work for Google, not for you. I remember people switching their Grandmas to Chrome 15 years ago when they could've chosen Firefox. Many of us knew this would happen, but convenience and branding are everything, sadly.
Do http/2 and http/3 offer any benefits if they are only supported by the reverse proxy but not the underlying web server? Most mainstream frameworks for JS/Python/Ruby don't support the newer http standards. Won't the web server be a bottleneck for the reverse proxied connection?
Yes, because http/2 or http/3 will improve the reliability of the connection between the client and the reverse proxy. The connection between the reverse proxy and the underlying web server is usually much faster and more reliable, so that part would benefit much less from being upgraded to http/2 or http/3.
The transport between reverse proxy <-> backend is not always HTTP, e.g. Python with uWSGI and PHP with FastCGI.
And even when it is HTTP, as other commenters said, the reverse proxy is able to handshake connections to the backend much more quickly than an actual remote client would, so it's still advantageous to use http/2 streams for the slower part of the connection.
Probably not, but mitmproxy is not a reverse proxy for any production purpose. It’s for running on your local machine and doing testing of either low-level protocol or web security stuff.
Something not mentioned: web browsers limit the number of connections per domain to 6. With HTTP/2 they will use a single connection for multiple concurrent requests.
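As a toy model of that difference (the 6-connection cap is the well-known per-host default for HTTP/1.1 in major browsers; the function itself is made up for illustration):

```python
# Toy model of per-host connection usage: HTTP/1.1 opens up to `per_host_cap`
# parallel connections and queues the rest, while HTTP/2 multiplexes all
# requests as streams over a single connection.

def connections_used(parallel_requests: int, http2: bool, per_host_cap: int = 6) -> int:
    if http2:
        return 1  # all requests share one multiplexed connection
    return min(parallel_requests, per_host_cap)  # extra requests queue

print(connections_used(30, http2=False))  # 6
print(connections_used(30, http2=True))   # 1
```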
Depends. If they're running on the same box, the reverse proxy will be able to initiate tcp connections to the web server much more cheaply. Even if they're just in the same datacenter, the lower round trip latency will reduce the time for establishing TCP connections. Plus, the proxy might be load balancing across multiple instances of the backend.
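Rough arithmetic for why the handshake cost matters mostly on the client side (the RTT figures below are assumptions for illustration, not measurements):

```python
# Back-of-envelope: connection setup cost is dominated by round trips.
# TCP handshake ~1 RTT plus a TLS 1.2 handshake ~2 RTTs => ~3 RTTs total.

def setup_time_ms(rtt_ms: float, round_trips: int = 3) -> float:
    return rtt_ms * round_trips

print(setup_time_ms(50.0))  # client -> proxy over the internet: 150.0 ms
print(setup_time_ms(0.5))   # proxy -> backend in one datacenter: 1.5 ms
```

With a ~100x difference in round-trip time, upgrading the client-facing leg buys far more than upgrading the proxy-to-backend leg.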
One of the main promises of HTTP/3 is better performance under worse network conditions (e.g. no head-of-line blocking as in HTTP/2, connection migration, 0-RTT). For all of that HTTP/3 between client and proxy is really great. HTTP/3 between proxy and server is not required for that.
http/3 seems to be an excellent opportunity to optimize HTMX or any of the libraries which leverage HTML fragments like JSX. The obvious advantage of http/3 is for gaming.
The servers which run the frameworks have to support http/3. In most cases the advantages should be transparent to the developers.
Unfortunately there is still the issue of fingerprinting (https://github.com/mitmproxy/mitmproxy/issues/4575). Until it can spoof the TLS handshake of a typical browser, you get these "Just a quick check..." or "Sorry, it looks like you're a bot" pages on about 80% of the web.
> Until it can spoof the TLS handshake of a typical browser, you get these "Just a quick check..." or "Sorry, it looks like you're a bot" pages on about 80% of the web.
Evidently Firefox is not a typical browser anymore.
Thank you for your work on Hickory! It's super exciting to see how PyO3's Python <-> Rust interop enables us to use a production-grade DNS library with Hickory and also a really solid user-space networking stack with smoltcp. These things wouldn't be available in Python otherwise.
I wonder, can I use it like Privoxy/Proxomitron/Yarip? E.g. can I strip out script tags from specific sites, which I request with my browser (Ungoogled Chromium), using Mitmproxy as a Proxy? And how will this affect performance?
In theory: yes. In practice: mitmproxy is written in Python, so each rewrite adds a small delay because the language isn't all that fast. When you're visiting web pages with hundreds of requests, those small delays add up noticeably.
That said, for many people who care about this stuff, this could be an option. There's nothing preventing you from doing this technically speaking.
There's a small risk of triggering subresource integrity checks when rewriting Javascript files, but you can probably rewrite the hashes to fix that problem if it comes up in practice.
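Both ideas can be sketched with the standard library alone (the function names are made up, and the regex is a rough approximation of what a real HTML parser would do): strip `<script>` tags from a response body, and recompute a subresource-integrity value for a rewritten file. In mitmproxy this kind of logic would go in an addon's response hook.

```python
import base64
import hashlib
import re

# Rough approximation: a regex won't handle every edge case a real HTML
# parser would, but it covers ordinary <script>...</script> blocks.
SCRIPT_RE = re.compile(r"<script\b[^>]*>.*?</script>", re.IGNORECASE | re.DOTALL)

def strip_scripts(html: str) -> str:
    return SCRIPT_RE.sub("", html)

def sri_sha384(body: bytes) -> str:
    # SRI integrity values have the form "<algo>-<base64 digest>",
    # e.g. the sha384-... strings found in <script integrity="..."> attributes.
    digest = hashlib.sha384(body).digest()
    return "sha384-" + base64.b64encode(digest).decode("ascii")

page = '<p>hello</p><script src="a.js"></script><script>alert(1)</script>'
print(strip_scripts(page))  # <p>hello</p>
```

If you rewrite a JavaScript file in flight, you'd also have to rewrite the `integrity` attribute of the tag that loads it with a freshly computed value like the one `sri_sha384` produces, or the browser will refuse the modified resource.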
JeremyNT|1 year ago
Sometimes it's easier to use mitmproxy with an existing implementation than to read the documentation!
RockRobotRock|1 year ago
https://docs.mitmproxy.org/stable/addons-overview/
38|1 year ago
https://github.com/mitmproxy/mitmproxy/issues/4170