An interesting statement from the UC President, former head of the Department of Homeland Security: [1]
> "While we have absolutely no interest in the content of any individual's emails or browsing history, we must accept that active network monitoring is a critical element of a sound cyber-security infrastructure and the interconnections of the University and all of its locations requires that such monitoring be coordinated centrally."
Essentially "we care about privacy, until something comes up that scares the general public and makes us care less about privacy", a.k.a. Ohlocracy [1].
It also doesn't address the problem of abuse. Remember when NSA agents spied on their spouses, girlfriends, and lovers [2]?
The security industry has largely given up on prevention as a means of stopping breaches; the current trend is very much fast detection and response.
Neither the NSA's systems nor these have the capacity to store full-take data; hell, even plain netflow data is a challenge to store for any extended period of time.
The only thing that can really be done is streaming monitoring.
To the people wondering why decryption is necessary: HTTPS is ubiquitous, and IP/port info is not actually enough to figure out where the communications are going. SNI helps quite a bit, but it may not be enough, e.g. if a bad guy decided to use Twitter or AppEngine/Spot or an S3 bucket for C&C (see the sketch below).
In a corporate environment your options are usually either network-based or host-agent-based, and frankly the network approach is significantly less intrusive, but also less effective.
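For the skeptical, here's a quick illustration of the S3 case. This is a sketch under the assumption that the C&C is parked behind S3's wildcard DNS; the bucket name is made up:

```python
import socket

# Any *.s3.amazonaws.com name resolves to Amazon's shared front-end pool,
# whether or not the bucket actually exists (wildcard DNS).
hosts = ["s3.amazonaws.com", "innocuous-looking-bucket.s3.amazonaws.com"]

for host in hosts:
    ips = sorted({info[4][0]
                  for info in socket.getaddrinfo(host, 443, proto=socket.IPPROTO_TCP)})
    print(f"{host:45} -> {ips}")

# Both names land on shared Amazon IPs on port 443. ip/port tells you only
# "someone is talking to S3"; SNI adds the bucket name, but a bucket name
# alone doesn't distinguish C&C polling from a legitimate static-asset fetch.
```

Blocking or even identifying the bad traffic at the IP/port (or SNI) layer would mean blocking all of S3, which is why these appliances want the plaintext.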
According to an earlier article [0] about this, there's enough storage to keep 30 days of network traffic. That's not nothing. It's also something that apparently the regular UC IT folks have no control over whatsoever.
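For scale, a back-of-the-envelope sketch of what 30 days of full-packet capture implies. The 10 Gbit/s sustained average is my assumption, not a published UC figure:

```python
# Rough storage estimate for 30 days of full-packet capture.
avg_rate_gbps = 10                                  # assumed sustained average
bytes_per_day = avg_rate_gbps / 8 * 1e9 * 86_400    # ~108 TB/day
print(f"{bytes_per_day * 30 / 1e15:.1f} PB for 30 days")  # ~3.2 PB
```

Even if the real average is a tenth of that, it's hundreds of terabytes, which is consistent with the point above that full-take retention is a serious challenge.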
That's strange. My company uses the same Fidelis XPS appliances Berkeley is using, and they're entirely installed, administered, managed, configured, and monitored by our internal IT security staff. Only our staff has access to alerts it fires and the snippets of traffic it captures (for example, if it detects potential malware C&C beaconing, it'll show the bytes it flagged as matching the signature, and some ~200 bytes before and after it). We're also able to view, change, or add all the rules it uses. The source code is closed, so that part is a blackbox, but it seems like a huge stretch to call the entire appliance a blackbox considering we're 100% responsible for integrating, managing, and utilizing it.
I imagine Fidelis probably also offers a professional services option that allows them to manage setup and maybe even remote monitoring. If this is true, there's absolutely no reason why Berkeley can't hire competent infosec folks to handle it all internally, especially considering they have a lot of smart students in the field.
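Roughly how the "~200 bytes before and after" alert snippets described above might be produced. This is a toy sketch with a made-up signature and payload, not Fidelis's actual matching engine:

```python
import re

SIGNATURE = re.compile(rb"POST /gate\.php")   # hypothetical C&C beacon pattern
CONTEXT = 200                                 # bytes of context on each side

def alert_snippet(payload: bytes):
    """Return the matched bytes plus surrounding context, or None."""
    m = SIGNATURE.search(payload)
    if m is None:
        return None
    return payload[max(0, m.start() - CONTEXT):m.end() + CONTEXT]

traffic = b"x" * 300 + b"POST /gate.php HTTP/1.1\r\nHost: c2.example\r\n" + b"y" * 300
print(alert_snippet(traffic))
```

The point is that an analyst sees a few hundred bytes around the hit, not the full session.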
My understanding of the situation (which I'm not involved with since I'm no longer Berkeley faculty, but I follow because I was in the past) is that the office of the president (UC-wide, not just UCB) ordered the systems installed and specifically out of the control (and over the objections) of UCB IT services.
> If this is true, there's absolutely no reason why Berkeley can't hire competent infosec folks to handle it all internally, especially considering they have a lot of smart students in the field.
University IT tends to be super fragmented, with each department having its own IT. It was probably a lot simpler to contract it out than to try to build a central org... and infosec people ain't cheap these days.
Somewhat sad to see, given the high-profile McCarthy-era cases that placed staff and faculty in the UC system front and center in the '40s and '50s. Saying "we have no interest" is somewhat ambiguous; it is, after all, public money.
Actually, this response is really good. Thanks for posting the link. I hope this actually works and ends up providing a good example of how to secure high-value targets. How is this action surprising or even debatable?
I can't believe that checking for malware signatures, which can be done much more easily from the browser (and is already done, indirectly and efficiently, by Chrome for instance), justifies MITMing everyone at great expense.
In any case, a signature check is not going to be useful in any targeted-attack scenario I can imagine.
I can easily believe, though, that some BOFH is more than happy to retrieve the volume of logs he had in the good old days of plain HTTP.
All IDS or IPS appliances do this. Running something open source like Snort or Suricata? It's "MitMing" you, too. (More accurately "Man-on-the-Side'ing" for most installations, since they're typically IDS and not IPS, but I'm just using the same alarmist term you did to make my point.)
All "smart firewalls" (sometimes dubbed "nextgen firewalls" by marketing departments) do this. If your network uses firewalls from Palo Alto, that firewall is doing nearly the exact same things as this Fidelis appliance. Either most of the same features, or all of the same if you pay a little extra.
All proxies do this too, though only for ports 80/443.
All host IDS agents do this as well.
>(and is already done, indirectly and efficiently by chrome for instance)
Chrome isn't going to detect your machine beaconing out via malware that's already installed. Chrome just has lists of bad URLs and URL paths. Chrome's security features certainly provide another useful layer of protection, but they're nowhere near sufficient as a malware or intrusion detection system by themselves.
This article is FUD, in my opinion, for singling out just this specific appliance as if it's somehow more invasive than all the other gear on a typical large network. If you want to make an article about how packet inspection is bad in general, go ahead, but I think that war has been lost. If you're using someone else's network and computers, I don't think you should have a problem with a device checking the packets your computer sends and receives to see if they match specific regex or byte patterns.
It's true that a proprietary appliance is going to be a bit more blackbox with exactly how it processes traffic, compared to an open source solution, but often the signature lists themselves are accessible and configurable by the customer. Sometimes the actual signature content is visible as well.
Systems like these are absolutely necessary to help prevent breaches, as long as there are competent internal employees who can make full use of them. Perhaps an open source system will be a little bit better privacy-wise, on the off chance the proprietary solution you're using is secretly doing something malicious or sending sensitive traffic elsewhere on the Internet, though if it were, the scandal would likely greatly harm or ruin the company.
For bureaucrats in their cubicles, yes, this is normal corporate network management. You have no expectation of privacy at work; if you want that, go home.
As a student, though, the university is also your home ISP. And the public WiFi at the coffee shops, libraries, etc. In this context, reading your users' email is way less okay.
Don't think of UCB IT as the BigCo IT department, but as Comcast.
> This is how all network security appliances work.
This does not mean "network security appliances" are a good way to address security, though. They're just something that companies peddling security products have successfully marketed to privacy-insensitive corporate IT departments.
Many are snake oil or overhyped, but it's generally accepted by professionals that a proper IDS or IPS appliance is helpful as an additional layer. Those alone aren't nearly sufficient; many, many layers are required. But they do add value.
You could make the same argument about anti-virus (IDS is really just network anti-virus), and while it's true, it's also true that it's a good idea to put AV on all your endpoints.
They might be accepted as helpful in some situations, but many security professionals would weigh the privacy against added help and recommend against them in this context.
Many, hopefully most, security professionals would also disagree with requiring AV on all endpoints.
(Industry practices generally aren't a good guideline, and many IT security professionals are incompetent and/or powerless to do anything besides babysit security products that just create work and increase complexity. Witness the prevalence of large corporate "intranets" with centralised firewalls, and soft chewy insides...)
>They might be accepted as helpful in some situations, but many security professionals would weigh the privacy against added help and recommend against them in this context.
I strongly disagree that there's any privacy loss. Internal IT staff could look at your browser history and traffic if they really want to anyway (and often they are required to in specific circumstances). Why is getting IDS alerts somehow worse than this?
I would have an issue with my ISP deploying a universal IDS, but it's a different story for my employer.
>Many, hopefully most, security professionals would also disagree with requiring AV on all endpoints.
There are certainly much better endpoint protection solutions, like whitelisting agents or virtualizing everything, but those are typically difficult to deploy in a large enterprise. Are you suggesting an enterprise should have no endpoint protection at all? If better anti-malware solutions aren't an option, you need something in the meantime. It'd be rather embarrassing if your entire network got hit with a worm circa 2005 because you have no AV and it slipped past your other controls.
I don't run AV on my home computers, but I am very glad it's deployed on my company's endpoints.
> I strongly disagree that there's any privacy loss. Internal IT staff could look at your browser history and traffic if they really want to anyway
They could, but then they would be doing naughty things (possibly breaking the law).
Also, the relationship in academia between scientists/research groups and their host organization's administrative staff is often very different from the relationship between the typical corporate drone in an enterprise and the IT department.
In this case we see that the staff is revolting because of just this issue.
Re the "AV on all endpoints" question, I referring to possibilities outside the typical enterprise IT swamp. Increasingly endpoints aren't Windows PCs. Sometimes you don't have any of those.
What I call MITMing is decrypting the payload by impersonating the server (or the client if needed), often with the help of a corporate browser that has been instructed to trust this lie.
Most network security appliances do not work like that; they merely listen to the traffic (usually mirrored, so there's no need to be in the middle).
Pretending that this is OK is like pretending that it is OK to send passwords in plain text because "only a bunch of competent professionals could intercept them".
Anyway, Google has proven that this practice can be detected on the server side, and I'm confident HSTS plus such server-side detection will make this practice nothing but a waste of money within a few years.
I don't know if it's of any interest, but the title is unclear to me (Italian, self-proclaimed almost-perfect English, working for US companies for 8 years, in the US for the last 4, previously in Singapore for 2.5 years).
I didn't know what "profs lambast" meant, until I googled it:
Lambast = to criticize someone harshly.
Profs = UC Berkeley "profs," i.e. professors.
(I had erroneously parsed "UC Berkeley" as the subject, "profs" as the verb, and "lambast" as an adjective.)
Strange that despite all these years, sometimes you stumble on some unknown English terms that you have to look up.
Man, I'm 37, have lived in the US for 37 years plus 9 months if you believe life starts at conception, and have a better command of the English language than many if not most, and I constantly have to look things up! I don't know if it means I'm actually not in any better command of the language than most 5th graders, or the things I read are written at a high level, or I just read enough to stumble upon new words, idioms, and phrases constantly, but whichever it is, I feel you :)
Of course when English has over 1,000,000 recognized words, who can be expected to know all of them?
How does the "Fidelis SSL Inspector" work from page 10 of Professor Ligon's slides? [1]
The man-in-the-middle server 'identifies and decrypts all SSL/TLS encrypted traffic'? So long as an 'endpoint-trusted CA certificate' has been installed?
Does this really break SSL on all traffic that is diverted to the device, or does one of the endpoints need to have some 'registration' on it to permit this?
In this case "endpoint-trusted" mean that it's a CA that is trusted by the client on the local network. What happens is that the box intercepts all traffic and establishes it's own TLS session with the remote server while establishing a different session under it's own CA with the client. Then any traffic sent by the client is decrypted, analyzed, and re-encrypted under the real server's session. (And the reverse for traffic coming back.)
Sure, your browser would tell you who the CA is; in this case it'd tell you that it's a Fidelis cert. And if you have the permissions on your system you could revoke the cert, but I'd be shocked if this system let encrypted traffic that it couldn't inspect pass. Your connection would probably just time out, or you'd get some random error about the connection being refused.
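For the curious, here's a drastically stripped-down sketch of the relay described above. It assumes "proxy-cert.pem"/"proxy-key.pem" already exist and chain to a CA the client trusts; a real appliance mints a per-hostname certificate on the fly from the client's SNI, and this is in no way Fidelis's actual implementation:

```python
import socket, ssl, threading

UPSTREAM = ("example.com", 443)  # the real server being impersonated

# Session with the client, under our own CA's certificate (assumed to exist).
server_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
server_ctx.load_cert_chain("proxy-cert.pem", "proxy-key.pem")
# Separate, legitimate session with the real server.
client_ctx = ssl.create_default_context()

def pump(src, dst, label):
    while True:
        data = src.recv(4096)
        if not data:
            break
        print(label, data[:80])  # plaintext is visible here: run signatures, etc.
        dst.sendall(data)

listener = socket.socket()
listener.bind(("0.0.0.0", 8443))
listener.listen(5)
while True:
    raw, _ = listener.accept()
    downstream = server_ctx.wrap_socket(raw, server_side=True)
    upstream = client_ctx.wrap_socket(socket.create_connection(UPSTREAM),
                                      server_hostname=UPSTREAM[0])
    threading.Thread(target=pump, args=(downstream, upstream, "C->S"), daemon=True).start()
    threading.Thread(target=pump, args=(upstream, downstream, "S->C"), daemon=True).start()
```

The client only accepts the downstream session because the proxy's CA was pre-installed in its trust store, which is exactly what "endpoint-trusted CA certificate" means.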
It wouldn't be a Fidelis cert, it would be a cert signed by whatever CA is installed on the SSL terminator. What would happen if you locally revoked trust of the CA in question is you would get warnings that the connection is unsafe as you would with any other TLS error, e.g. hostname mismatch, untrusted cert, etc. Depending on the client behaviour you may be able to continue with the connection.
This type of security tool typically operates in conjunction with installing some extra authority on client machines (for outgoing connections) and/or in reverse by installing the server's private key/cert information (for incoming connections). Having done so, the device can then effectively MITM the organisation's own traffic.
This can be quite effective if you have control of the client machines on your network and can reliably configure them to trust your choice of authorities. For example, this might be true in a university computer lab or in a corporate office where IT staff administer all the machines and staff don't have admin rights to change anything. Now your anti-malware scanners and the like can operate on things like HTTPS traffic in the same way they would on any other connection.
It doesn't work if you have other machines that don't have the extra authority installed and therefore don't trust the intermediary. Typically, either you'll just see the same security warnings as you would for any other MITM threat, or they'll block the traffic and you won't be able to connect at all over an unauthorised encrypted link. Either way, all they see will be your encrypted traffic. Clearly it would be rather unfortunate for the entire global security infrastructure if this were not the case...
I note in passing that, other things being equal, this kind of technology is a good thing for security. There are more powerful tools available than the kind of antivirus software typically installed on individual workstations, and they are no more necessarily recording HTTPS traffic than any other traffic that they scan. But of course you're still MITMing someone else's traffic, and the person sending that traffic has only your word for it that you're not configuring tools to record the "temporarily" decrypted data stream or otherwise process the data in unexpected ways.
There have already been lawsuits about this kind of comprehensive monitoring of network traffic in the workplace. Obviously it can look bad if, for example, an employee was using what they believed to be an encrypted connection to their bank to check that some legitimately employment-related payment had been received, but because it was made from a work computer and intercepted, that connection wasn't actually secure. The results haven't been completely one-sided so far; it's a complicated issue, and there are elements of both an employer reasonably wanting to know what is happening with their own IT systems and the reasonable expectations of privacy of an employee to balance.
This technique makes certificate pinning that much more important. I assume that it would just lead to cert-mismatch errors on clients with pinned certificates, but in almost all cases that would be preferable to being MITM'd.
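A minimal sketch of what client-side pinning amounts to. The pinned fingerprint below is a placeholder, and real clients usually pin the SPKI hash via their TLS library rather than hand-rolling it like this:

```python
import ssl, hashlib

# Placeholder: in practice you'd record the known-good fingerprint out of band.
PINNED_SHA256 = "0" * 64

pem = ssl.get_server_certificate(("example.com", 443))
der = ssl.PEM_cert_to_DER_cert(pem)
fingerprint = hashlib.sha256(der).hexdigest()

if fingerprint != PINNED_SHA256:
    # An interception box re-signs with its own CA, so the served certificate
    # (and hence its hash) changes, and the pinned client can refuse to talk.
    raise ConnectionError("certificate does not match pin; possible MITM")
```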
Sorry, I don't quite follow. The entire point of these systems is that it is desirable to MITM your own traffic if you want to insert a security scanning device that can, for example, scan downloads over HTTPS for malware.
Fundamentally, the technique only works with the co-operation of the client, which must recognise the authority used by the certs from the MITM device. From a technical point of view we are therefore assuming the person controlling that client system is in on the arrangement.
Now, we might consider it desirable to warn users of that client device that this is happening for other reasons, if the user is not the same as the person ultimately controlling the system. For example, in the scenario I mentioned before, it might be considered appropriate for the employee to be warned that their "secure" connection to their bank is being affected in this way so they can choose not to continue if they don't want to. But this is getting beyond the realm of technology and into law and ethics. Indeed, if the person controlling the system is hostile and the system can't be trusted, in the end there is nothing technical you can do to safeguard the user anyway.
Likely not. If you are the kind of organization that installs this kind of hardware, you aren’t the kind of organization that permits users to configure their own machines in any substantial way, much less to work around the monitoring you worked so hard to establish.
Likewise, if you are that kind of organization, how hard would it be to set up the network such that only traffic from the MiTM hardware is allowed to go through the network? Is there any reason to permit local machines to establish connections outside the network for any reason?
We're talking about a university. It'd be extremely strange for students and faculty not to use their personal laptops.
My university runs like a corporate IT department for its offices, hospital, etc. but for any part of it an undergrad will ever see, IT is just a WiFi-based ISP.
Wow, what school? That sounds like a totally unacceptable violation of ownership of personal property and a great reason to avoid studying/working there.
My employer does that for company-issue laptops (we keep root as long as we accept the rootkit), but for personal ones that's unbelievable.
Looks like I misremembered and it wasn't a kernel extension. It was a product called SafeConnect by Impulse. Based on the instructions I can find, it doesn't seem to require root, so it must not be a kernel extension. But on contemporary Unix-y systems like OS X and GNU/Linux, pretty much all your privacy is gone if there is a rogue process running as you. You don't need root.
In any case, my attitude was that it was their network and they could put any restrictions on it, although they may be in very bad taste.
>it was their network and they could put any restrictions on it
Would you say the same of Comcast? When it is the only ISP serving the apartment you just signed a lease on?
Because that is basically the position of an undergrad in university housing with respect to "their network." You could say they had a choice not to go to that school, but for a state school system?
On a network engineering level, something is deeply wrong if you can't separate internet access from access to sensitive resources. Or if you are treating a subnet which includes tens of thousands of students as "trusted."
> On a network engineering level, something is deeply wrong if you can't separate internet access from access to sensitive resources. Or if you are treating a subnet which includes tens of thousands of students as "trusted."
I don't disagree with you. And I hate that I only have one choice of ISP.
I'm sure they were able to separate internet access from sensitive resources. But without forcing their users to install some piece of shit tracking software, how would they know who to forward letters to from the MPAA? ;)
Stanford has done stuff with requiring people to install particular monitoring software on personal devices in order to access SUNet (the main stated purpose being to ensure that users were running approved anti-malware scanners). I don't know what the current status is.
Are you just making a guess or do you actually know what is happening here? Because from what I've been able to tell so far, what you say is not what is going on here. It seems to be passive monitoring. If you have evidence to the contrary please post it.
Passive monitoring is not possible, even if you have the private key for the target cert, if Diffie-Hellman is used as the key exchange. Look up perfect forward secrecy for more info.
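A toy illustration of why: with an ephemeral Diffie-Hellman exchange, the session key never crosses the wire. (Toy parameters; real TLS uses standardized groups and authenticated handshakes.)

```python
import secrets

p = 2**521 - 1   # a Mersenne prime: fine for a demo, not a production group
g = 2

a = secrets.randbelow(p - 2) + 2   # client's ephemeral secret, never transmitted
b = secrets.randbelow(p - 2) + 2   # server's ephemeral secret, never transmitted
A = pow(g, a, p)                   # public value, visible on the wire
B = pow(g, b, p)                   # public value, visible on the wire

assert pow(B, a, p) == pow(A, b, p)  # both ends derive the same shared secret

# A passive observer captures only p, g, A, and B. Even with the server's
# long-term private key (which merely signed the handshake), recovering
# g**(a*b) mod p means solving the discrete log problem, hence forward secrecy.
```

That's why an appliance that wants to inspect DHE/ECDHE traffic has to actively terminate the TLS session rather than just sniff with a copy of the server key.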
Seems like overkill for a government entity to scan all packets entering or leaving the network as a whole. Definitely not minimally intrusive.
It seems likely that the computers with information they really need to protect are already on separate subnets/vlans, so why not protect just those instead of sweeping up everything?
Well, a very simple way to poke a hole in this would be to download the image they use (wherever it may be?) and see if any licenses are being violated, as it seems to be based on Linux (as of 2012: https://www.niap-ccevs.org/st/st_vid10449-st.pdf, search for "linux").
The title of the article wasn't very worrying to me, so I read the whole article, trying to figure out what in it is actually cause for concern. I didn't find any "significant" cause for concern.
I have worked in the area of computer networking for many years. My guess is that one of the things the black-box product is doing is looking for TCP or IP traffic patterns (packet-flow patterns) that indicate, say, command-and-control malware, or other malware whose packet-flow signature doesn't resemble normal user browsing activity (a toy version of such a heuristic is sketched below).
Also, this black box won't be able to decode HTTPS anyway, so it may not be a privacy concern for most people?
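A toy version of one such flow heuristic, under the assumption that malware check-ins are metronome-regular while human browsing is bursty; real products use far richer features:

```python
import statistics

def looks_like_beacon(timestamps, max_jitter=0.1):
    """Flag a host whose connection times are suspiciously evenly spaced."""
    if len(timestamps) < 5:
        return False
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    mean = statistics.mean(gaps)
    # Low coefficient of variation = metronome-like check-ins.
    return mean > 0 and statistics.pstdev(gaps) / mean < max_jitter

beacon = [t * 60.0 for t in range(20)]              # phones home every 60 s
browsing = [0, 2, 3, 40, 41, 41.5, 300, 301, 600]   # bursty human activity

print(looks_like_beacon(beacon))    # True
print(looks_like_beacon(browsing))  # False
```

Note that this works on timing and flow metadata alone, with no payload decryption at all.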
My understanding is that it can decode all HTTPS traffic because it acts as a man-in-the-middle. Traffic from the client to the Fidelis device is encrypted with a Fidelis certificate. It's decrypted at the device, encrypted again with the actual certificate, and sent to the destination. If it couldn't do this, I think it would be impossible to distinguish benign traffic from malicious traffic.
In keeping with this spirit, here is a reminder of how we monitor (your) CERN activities: we monitor all network traffic coming into and going out of CERN.
Our new analysis infrastructure will be able to cope with the automatic live analysis of about one terabyte of data every day. All this data is stored for one year.
This does seem more reasonable — probing devices on their network for running services, counting unusual numbers of connections, etc. Are they MITMing SSL traffic?
As far as I know, CERN isn't a "home ISP" to anyone, unlike many universities.
> "While we have absolutely no interest in the content of any individual's emails or browsing history, we must accept that active network monitoring is a critical element of a sound cyber-security infrastructure and the interconnections of the University and all of its locations requires that such monitoring be coordinated centrally."
Do you guys agree?
[1] https://www.documentcloud.org/documents/2702981-Chancellors-...