I've been hoping for years people would wake up to the risks of these things.
I presented on the topic at Blackhat Europe a few years back, where I disclosed several certificate validation flaws in Cisco Ironport. I understand there are legitimate reasons for enterprises to want to decrypt and inspect TLS connections, but it's not without its risks and downsides.
Good set of slides. Companies are more likely to be afraid of the other risk, the one SSL interception is meant to address: malware using encrypted connections to avoid detection.
Security cuts both ways. I think the most important point is that the user should be in control of the traffic, which means knowing whether or not interception is being used.
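For what it's worth, one way to check from the user's side is to look at who issued the certificate you actually receive for a well-known site: if the issuer is a corporate proxy CA rather than a public CA, interception is in play. A minimal sketch using Python's standard ssl module (the hostname is just an example):

    # Print the issuer of the certificate the network actually hands us.
    # An unexpected corporate CA here is a strong hint that TLS is intercepted.
    import socket
    import ssl

    host = "example.com"  # any well-known public site works
    ctx = ssl.create_default_context()

    with socket.create_connection((host, 443), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            issuer = dict(item[0] for item in tls.getpeercert()["issuer"])
            print(issuer.get("organizationName"), "/", issuer.get("commonName"))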
Yeah, it's a balancing act, and there's certainly a desire (and probably even a legitimate need) to monitor encrypted comms for malware C&C channels, data exfiltration, etc.
Your view seems to reflect a similar nuance to my own. Administrators need to weigh the risks and benefits as they relate to their own environment, and users should at least be aware that such monitoring is taking place. Beyond that, there are some technical challenges, but I see the bigger issues as political and as aligning expectations with reality.
Those kinds of monolithic network security systems seem to be intrinsically pointless. If a user can run code on the machine then they can probably get around the network-level security. So any implementation is dependent on AV software preventing circumvention. At that point you might as well install the tracking/filtering software on the local machine.
No. Network-level security, if correctly installed, cannot be avoided by just running some code on your local workstation. If you have it installed on the workstation itself, then it is easier to avoid by just shutting it down. Also, network-based security can isolate workstations that are suspicious.
And your 'monolithic' is a symptom of an architecture that is either outdated ("not hipster") or just bad. But that does not mean someone can't build network-level security that is both hipster and good. I guess Google does not buy that off the shelf.
>> No. Network level security, if correctly installed, cannot be avoided by just running some code on your local workstation.
Don't you have to intercept/reject TLS to make that workable? Otherwise the user (or malware) can upload or download anything and all you see at the network level is a destination IP address. If a user has admin rights (which is common in corporate environments) then they can install software which can mimic a browser using HTTPS.
At the network level it is difficult to identify what program generated a request and which user was running that program. I am very sceptical of the heuristic approaches that try to solve this problem (Palo Alto App-ID, for example); they display quite shocking emergent properties.
Surely it is technically preferable to track network requests within the OS and browser where you can actually get at information reliably without any hocus pocus. If a user can avoid it by just "shutting it down" then they can also remove the AV, connect to a proxy and spend the afternoon uploading client lists to a porn site.
Yes, the proxy has to offload the original TLS connection in order to do that. And the network owner must deploy its own certificate to the clients.
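Roughly, the proxy terminates the client's TLS session and mints a new certificate for the requested host on the fly, signed by that deployed CA. A rough sketch of just that minting step, assuming Python's cryptography package and that ca_key/ca_cert are the proxy's own CA material:

    # Mint a short-lived leaf certificate for `hostname`, signed by the proxy's CA.
    # ca_key / ca_cert are assumed to be the CA private key and certificate that
    # the network owner has already deployed to its clients.
    import datetime
    from cryptography import x509
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa
    from cryptography.x509.oid import NameOID

    def mint_leaf_cert(hostname, ca_key, ca_cert):
        leaf_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
        now = datetime.datetime.utcnow()
        cert = (
            x509.CertificateBuilder()
            .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, hostname)]))
            .issuer_name(ca_cert.subject)
            .public_key(leaf_key.public_key())
            .serial_number(x509.random_serial_number())
            .not_valid_before(now)
            .not_valid_after(now + datetime.timedelta(days=1))
            .add_extension(x509.SubjectAlternativeName([x509.DNSName(hostname)]),
                           critical=False)
            .sign(ca_key, hashes.SHA256())  # trust comes solely from the deployed CA
        )
        return leaf_key, cert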
The whole X.509 infrastructure is based on trust. You have to trust your certificate store, the certificates, the network and its components, and CAs need to trust those who request certificates. If you have to use a network that uses a proxy, you have to trust it as well. If you do not, then just do not use it, or at least don't do your online banking over that network (or use a VPN if allowed (sigh)). So a good network security deployment is not only well maintained, but also transparent to its users about what it does. The user must have a choice on whether a network is trustworthy or not.
The problem with SuperFish is that it shipped not only the root certificate, but the private key to sign new certificates on the fly. And the user was not informed about it and not given a choice. This is the problem here.
Most clients I worked for provided me with a separate network for unfiltered internet access (a guest network), on which I used a VPN to a network I trusted. I was given a choice.
Edit: A thing that bugs me often is when I see a network proxy that does not use TLS for the proxy connections themselves. Unfortunately that is the case in the majority of networks I see. And that affects my trust, so I'd rather avoid accessing certain services when I cannot have my VPN.
I guess that corporations need lots of network-level security because they have so much unencrypted sensitive data on their networks, which places a lot of implicit trust in the network itself.
That is true. That is why attackers (like the NSA) would be happy to infiltrate routers (which see fewer changes from outside, e.g. by administrators) instead of clients (which change more often). A proxy is a quality target, too. But a proxy is also more visible, and tampering is usually easier/faster to detect. Corporations need to encrypt everything with TLS and/or message-level encryption. But that is often not priced into (project) budgets and is a hard thing to do (key exchange, managing certificates).
That is possible. But it depends on how TLS clients validate wildcard certificates. Wildcard certificates are considered harmful, and AFAIK browsers will not accept '*.*' (correct me if I'm wrong). So if I host a MITM proxy, I at least use FQDNs as subjects. It also works better with revocation lists/protocols.
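The matching rules only allow the wildcard as the whole left-most label, and it never matches across a dot, which is why something like '*.*' should be rejected. A toy version of that check, just to illustrate the idea (an assumed simplification of RFC 6125, not actual browser code):

    # Simplified wildcard matching: '*' is only accepted as the entire left-most
    # label, it matches exactly one label, and overly broad patterns are refused.
    def wildcard_matches(pattern: str, hostname: str) -> bool:
        p_labels = pattern.lower().split(".")
        h_labels = hostname.lower().split(".")
        if len(p_labels) != len(h_labels):
            return False  # a wildcard never spans a dot
        if "*" in pattern:
            if p_labels[0] != "*" or len(p_labels) < 3:
                return False  # no '*.*', no bare '*', no 'foo.*.com'
            p_labels, h_labels = p_labels[1:], h_labels[1:]
        return p_labels == h_labels

    print(wildcard_matches("*.example.com", "www.example.com"))   # True
    print(wildcard_matches("*.example.com", "a.b.example.com"))   # False
    print(wildcard_matches("*.*", "example.com"))                  # False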
An example of why wildcard certificates are bad is Microsoft. A couple of years ago they had problems with subdomains that delivered malicious code through hijacked web pages hosted on those domains. Microsoft used a wildcard certificate...
I don't see a problem with those solutions that protect networks, as long as the users know about them. The alternative would be to have no Internet access at all in order to lower the risk of loading malicious content.
I see problems with them as well. There's the security risk that the products might have vulnerabilities that expose end users. Secondly, they may cause other problems that are not security problems. For instance, I have experience of a solution where the HTTPS proxy mangles AJAX traffic that goes over HTTPS. This causes very weird problems that are hard to debug.
Here the problem is not that the proxy is trying to insert advertisements into the content. Just changing IP addresses within AJAX content may break functionality in nasty ways, for instance so that things work with one browser and not another, or require a particular engine setting in MSIE11, or some such. There is no problem in the service itself, but the service gets the blame because people don't think that a Cisco product in between might be the cause.
Of course there are security implications with central services like an enterprise-grade proxy, and anyone using such a solution must do their best to keep it secure. It is all a question of probability and of costs. I bet most vendors of such solutions will do their best to protect them and their customers. So a network security solution that might have an exploitable hole at some point in time is better than none.
I've been working my entire career for large companies. I've experienced many solutions and I cannot remember one technical problem that was caused by network security, other than "InsertYourSocialNetworkOrBinary was denied by SecurityRuleXYZ". At several companies I had to sign a paper that informed me about the security implications and my duties when using the company's Internet/network access.
I have also worked mostly for larger companies, and within them I have actually experienced many technical problems caused by network security solutions.
HTTPS man-in-the-middle proxying is one particular scourge that causes weird things, with problem reports of the kind where, in a completely legitimate and intended use case, "Chrome works, MSIE does not".