I consulted for UC Berkeley on a project with the UCB IT email team.
I can say firsthand they are excellent people, both technically and also morally. They are careful about security and protecting people and data. There is the usual email protection for virus scanning, spam blocking, abuse alerting, archiving of legal items, and the like.
In my direct experience, the entire chain of command up to and including the UCB CTO is solid. So I hope Napolitano steps up and explains what's happening now.
In the meantime if any UC Berkeley people want to learn how to use GPG for encrypting email, and VPNs for encrypting traffic, I will donate pro bono hours.
That's wonderful you are willing to help everybody (seriously).
But, I think people need to be aware that it's not just email: "The intrusive device is capable of capturing and analyzing all network traffic to and from the Berkeley campus, and has enough local storage to save over 30 days of all this data ("full packet capture"). This can be presumed to include your email, all the websites you visit, all the data you receive from off campus or data you send off campus."
Is there anything we can do to protect ourselves from, say, posting something into a HN forum? Or any chat program? Or FB chat, etc...?
Not quite true, XKEYSCORE [1] is the GUI layer application for navigating the data using selectors. The UC system sounds more like the job of TURMOIL [2] and other middleman taps that feed data into selector filtering systems, decrypt information, and allow injection of content into network traffic.
Also notably, a system like this in such a privileged place on the network has the potential to go beyond simple surveillance and can be used for offensive purposes by injecting malicious content, especially if control of the system falls into the wrong hands. These types of risks, which come about by introducing a tap like this, are rarely considered; people tend to focus on the passive data surveillance side as the primary issue. But as a student or faculty member, I'd be more concerned about the distribution of malware.
Many foreign actors (basically, China) could exploit it to steal research information from students and faculty. PGP emails wouldn't protect against a rootkit.
Keys that are supposedly short-lived on the client and the server. And if you are assuming they already control the client or the server endpoint, then the entire discussion is moot, but considering we are talking about archived packets at a router interface, the parent comment is very valid.
They probably do, but for frame of reference: this is outside of my threat model.
By "trusted" I mean something you reasonably control, not things like privateinternetaccess/the next cheap vpn/FREE TORRENT SUPER PRIVACY STEALING NSA VPN/etc.
Maybe, but if you log in via their wireless or a hard connection you still have to go through their routers. I imagine that is the point of archival.
I say this with the barest understanding of how the Tor browser interfaces with your computer, the local routers, and the internet at large.
If someone is knowledgeable, please, let us all know: can the UC archive your traffic if you use Tor? If not, this is a good workaround while the courts take their time.
True, but a) you'd be able to check that (unless they're crazy good and have altered all the apps too), b) they can't retroactively add it, c) it doesn't work for BYOD, and d) I'm not sure how it works in the USA, but in the UK it's often the case that each department runs its own IT
>So I hope Napolitano steps up and explains what's happening now.
Given Napolitano's pedigree in AZ and at DHS, I think the best explanation you can expect, if any, is that it's either to protect children and/or national security, without any details.
Do you seriously think that, choosing between a new spectrometer and a network snooping device, she would choose the spectrometer over her DHS security contractors?
I can only suppose that the UC search committee somehow ended up with only Hoover and Napolitano on the short list, and as Hoover hadn't returned multiple phone calls...
“Mini-Bush in action” is a ridiculous summary of Napolitano’s career. Both Democratic Governor of AZ and Secretary of Homeland Security are quite politically constrained positions in US politics. Arizona is a state with a radical Republican majority in both houses of the state legislature which has been in control for decades. There’s really only so much a Democrat can do as governor. I personally think the structure and many of the activities of the DHS are ridiculous, but that’s the fault of Congress, not of the person appointed to be its executive.
Napolitano seems to me like a competent mainstream (“centrist Democrat”) politician, not unlike most of the people who would be considered for a post like UC President. She’s not my personal favorite, and I disagree with her about many policy topics, but in a lineup of US ex-governors, she’s easily in the top quarter. By contrast, Bush was an absolute disaster in every way.
> This is a video message from Napolitano on TV screens in Wal-Mart stores playing a "public service announcement" to ask customers to report suspicious activity to a Wal-Mart manager. The rationale is that national security begins at home. Napolitano "compares the undertaking to the Cold War fight against communists."
Generally, the communist witch hunts of the 50's, 60's and 70's (especially the House Un-American Activities Committee) aren't considered a good thing. Apparently now they are.
> In the meantime if any UC Berkeley people want to learn how to use GPG for encrypting email
Could you explain why this should be necessary when email communication between the servers and clients is already encrypted via SSL/TLS? Can they bypass that? (And if so how?)
You're trusting the server too much. While TLS serves an important role, it only protects the link, not the data.
End-to-end encryption (which PGP/GPG provides) is important for the same reason bittorrent is so successful: it protects the data, not the host. When bittorrent was new, a common concern was that you might be getting the data from anybody, including people who are malicious. This concern was a product of traditional download methods where you had to trust the person sending the data ("only download from reputable sources"). Bittorrent bypassed that problem by providing hashes, so the data itself could be verified regardless of how you got it.
Securing of a host or connection suffers from the same problem. Just like how reputable fileservers can still be hacked or strong-armed into serving incorrect data, the server at the end of the TLS connection can be compromised.
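The bittorrent analogy can be sketched in a few lines of Python (the data and hash below are made up for illustration; a real client would compare piece hashes from the .torrent file):

```python
import hashlib

def verify_chunk(data: bytes, expected_sha256: str) -> bool:
    """Accept data from ANY peer, trusted or not: the hash, obtained
    out-of-band (e.g. from the .torrent file), vouches for the content."""
    return hashlib.sha256(data).hexdigest() == expected_sha256

# The publisher announces the hash once, over a trusted channel.
published_hash = hashlib.sha256(b"ubuntu-16.04.iso contents").hexdigest()

# Later, a possibly-malicious mirror serves us some bytes. The source
# no longer matters; only the content does.
good = verify_chunk(b"ubuntu-16.04.iso contents", published_hash)
bad = verify_chunk(b"tampered contents", published_hash)
print(good, bad)  # True False
```

The same shift, from "trust the host" to "verify the data", is what PGP gives email.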
> how
That depends on the people at the TLS endpoints who see the plaintext. If they are collaborators, they simply build in access to the plaintext and we call it "Prism". If they are refuseniks, it might be necessary to use a national security letter.
How do you know that all your recipients' mail servers aren't monitored (or their webmail clients, with the NSA pumping the data between the SSL termination and the web server, e.g. Google)?
As a rule of thumb, if you care enough that it should not be seen by others, encrypt before it is sent, before it is written to disk. You have no control over a stolen laptop, over what happens on the backup servers (assuming remote backup), etc... If it is private, encrypt. There's a bit of a learning curve, but once you do it often enough it just becomes part of the workflow.
Good tools you should use on a regular basis: KeepassX, gpg, Enigmail
> it has nothing to do with this situation in particular.
Please read the parent's question, asking why it would be necessary to encrypt when using SSL/TLS. I explain why, and yes, I do go a little bit further into more general cases, because I am assuming that if they don't understand the need to encrypt in this specific case, they could benefit from, and hopefully appreciate, understanding the more general cases as well.
> Could you explain why this should be necessary when email communication between the servers and clients is already encrypted via SSL/TLS? Can they bypass that? (And if so how?)
RFC2487 Section 5:
A publicly-referenced SMTP server MUST NOT require use of the
STARTTLS extension in order to deliver mail locally. This rule
prevents the STARTTLS extension from damaging the interoperability of
the Internet's SMTP infrastructure. A publicly-referenced SMTP server
is an SMTP server which runs on port 25 of an Internet host listed in
the MX record (or A record if an MX record is not present) for the
domain name on the right hand side of an Internet mail address.
In other words, internet mail servers are required to allow a trivial downgrade to plaintext by a MITM attacker.
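A toy sketch of what that downgrade looks like (the hostname and capability list are invented; real downgrade boxes rewrite live traffic, not strings):

```python
def strip_starttls(ehlo_response: str) -> str:
    # Real downgrade boxes overwrite the keyword in place so the
    # multi-line response framing ("250-" vs "250 ") stays intact.
    return ehlo_response.replace("STARTTLS", "XXXXXXXX")

# The EHLO capability list travels in plaintext, so a man in the
# middle can edit it before the client ever sees it.
server_offer = "250-mail.example.edu\r\n250-STARTTLS\r\n250 SIZE 10240000"
seen_by_client = strip_starttls(server_offer)

# An opportunistic client looks for the capability, doesn't find it,
# and silently continues in plaintext: the downgrade the RFC requires
# public servers to tolerate.
use_tls = "STARTTLS" in seen_by_client
print(use_tls)  # False
```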
And if you don't follow the RFC, the internet police will come after you!
Yes, I am aware of the RFC2119 meaning of "MUST NOT." In reality, nothing prevents the servers from disallowing that downgrade, except that they may not be interoperable with other servers on the internet. If the operator of the server wishes to make that tradeoff, then requiring STARTTLS is an option.
If you don't follow the RFC then people will email you from gmail saying "I tried to send you mail from my company's mail server and it didn't work", other people will submit a github issue to your project saying they couldn't email you, and when you try to subscribe to one of djb's mailing lists you'll get a response from your mail server saying it couldn't deliver the message.
And I subscribed to a mailing list years ago that suddenly went TLS-mandatory (for incoming email---it doesn't demand TLS when sending email) and now I can't even unsubscribe from the list.
I receive emails from whitehouse.gov and other .gov sites just fine, so they seem to be ok with it.
Also, my self signed CA cert and self signed server cert regenerate every hour for those that do not support PFS. My DH primes regenerate periodically. If you are someone that needs to assure you are talking to me, you don't need a CA. Instead, you test the connection to me from multiple ISP's before and after you send me an email.
For real security, you would encrypt the payload. Maybe an encrypted 7zip inside a PGP encrypted message.
I think this point has been made in some way or another in the thread, but I want to try to restate it.
Suppose Alice (alice@gmail.com) emails Bob (bob@yahoo.com). If Alice sends Bob an email without PGP encryption, the text of that email is stored on both Google's and Yahoo's servers, allowing either of those companies to read it (or a government if served with a subpoena, or others -- possibly the general public -- in the event of a data breach). This is true regardless of whether it was sent with TLS at every point in the chain.
On the other hand, if Alice uses PGP to actually encrypt the text of the email with Bob's and her public key, nobody except Alice and Bob themselves (not even Yahoo or Google, who have the message stored on their servers) can read the message.
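As a toy illustration only (this XOR keystream cipher is NOT real cryptography, and real PGP uses public-key crypto rather than a pre-shared key), here is the property being described: the providers store ciphertext, and only a key holder can recover the plaintext.

```python
import hashlib

def toy_encrypt(key: bytes, plaintext: bytes) -> bytes:
    # Toy keystream cipher, for illustration ONLY. Real end-to-end
    # mail uses OpenPGP (RFC 4880), not this.
    stream = hashlib.shake_256(key).digest(len(plaintext))
    return bytes(a ^ b for a, b in zip(plaintext, stream))

toy_decrypt = toy_encrypt  # XOR is its own inverse

# Alice encrypts BEFORE the message touches any provider.
shared_key = b"key only Alice and Bob hold"
stored_on_gmail_and_yahoo = toy_encrypt(shared_key, b"meet at noon")

# Google/Yahoo (or anyone subpoenaing them) only ever see ciphertext.
print(stored_on_gmail_and_yahoo != b"meet at noon")  # True

# Bob, holding the key, recovers the plaintext.
print(toy_decrypt(shared_key, stored_on_gmail_and_yahoo))  # b'meet at noon'
```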
It's an important distinction. As major consumer messaging products like Apple's iMessage, WhatsApp, etc. have recently started to implement this end-to-end encryption, where even the companies themselves are unable to read the messages their users are sending, the FBI and other federal agencies have started to actually complain about their newly limited ability to spy. That's unlike when only TLS was used and they could just serve an NSL to Apple/Google/etc. to make them give up your message without your knowledge (with their only other option being shutting down their business, more or less).
Thanks it would be great to have a recipe for how to maintain privacy here at UC Berkeley, and also a recipe for how to ensure that the information we want to share can be shared.
I was similarly unsurprised to find out that Stanford monitors all emails, including ones not sent via Stanford servers. I noticed my emails sent via SMTP to an outside server (edit: while connected to campus network) were getting softfail header flags because they were being relayed by Stanford. I'm sure all network traffic is monitored as well -- the network usage policy explicitly allows them to.
This is more likely an abuse mitigation than intentional surveillance. I helped out in the dorms and student machines were constantly infected with malware, some of which sent tons of spam.
I'd be very surprised if correctly configured SMTP using TLS is intercepted.
Abuse mitigation may have been the (original) intent. However, it prevents sender verification because stanford.edu is not a designated sender for the external domains, and thus has no SPF records.
If I use SMTP to send via gmail with SSL/TLS over port 587, the headers show it hopping through 5 stanford.edu servers, and finally `spf=softfail (google.com: domain of transitioning xxx@gmail.com does not designate (a stanford IP) as permitted sender)`.
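For anyone who wants to check their own mail for this, a sketch of walking the Received headers to spot unexpected relay hops (the hostnames and IPs below are invented examples, not actual campus infrastructure):

```python
import email

# Invented example headers, mimicking the situation described above:
# mail submitted to an outside server, but relayed through campus.
raw = """\
Received: from smtp3.stanford.edu (smtp3.stanford.edu [10.0.0.1])
Received: from smtp-relay.stanford.edu (smtp-relay.stanford.edu [10.0.0.2])
Received: from laptop.dorm.stanford.edu (laptop.dorm.stanford.edu [10.0.0.3])
Subject: test

body
"""
msg = email.message_from_string(raw)
hops = msg.get_all("Received")

# If you submitted directly to smtp.gmail.com, no hop should mention
# the campus relays at all; every match here is a surprise.
unexpected = [h for h in hops if "stanford.edu" in h]
print(len(unexpected))  # 3
```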
For this to work, either your mail client is not enforcing TLS be used, is not validating the certificates presented to it, or trusts Stanford's MITM certificate. Perhaps you should try port 465 with forced SSL.
The problem with SMTP is the broken-by-design STARTTLS mechanism that allows a hostile network like here to trivially strip the encryption.
Been some time since I used a desktop email client, so I'm unsure if clients have now been upgraded to reject sending mails to/through servers that don't do STARTTLS. But I'm guessing no.
Thunderbird changed years ago to not allow soft-fallback for STARTTLS (i.e., if you request to use STARTTLS, and it's not available, it aborts the connection). I'm pretty sure that the account config also doesn't let you set up connections without STARTTLS or SSL without clicking through scary red this-is-not-safe dialogs, although I don't know if that's only for incoming or if it's for outgoing as well.
It's probably not (currently) checking the cert validity because the huge majority of STARTTLS certs, unlike HTTPS certs, are self-signed, so a slightly brave but maybe not that brave attacker could allow STARTTLS but MITM the connection with its own self-signed cert.
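A client that refuses that kind of self-signed MITM cert has to both require TLS and actually validate the certificate. In Python, `ssl.create_default_context()` already sets both; the knobs are shown explicitly here, and the smtplib call is commented out since it needs a live server:

```python
import ssl

# create_default_context() loads the system CA roots and defaults to
# strict checking; both settings made explicit for clarity.
context = ssl.create_default_context()
context.check_hostname = True              # hostname must match the cert
context.verify_mode = ssl.CERT_REQUIRED    # reject self-signed certs

# With smtplib you would then do (needs a live server, so not run here):
#   smtp = smtplib.SMTP("smtp.example.edu", 587)
#   smtp.starttls(context=context)  # fails loudly if a MITM presents
#                                   # its own self-signed certificate
print(context.verify_mode == ssl.CERT_REQUIRED)  # True
```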
Thunderbird checks cert validity. Actually, it's gotten more difficult to use an invalid cert, since the core Gecko code changed the invalid cert dialog to remove the ability to easily import the certificate into the NSS database.
It's sometimes quite helpful to build on the codebase that invented SSL.
Stanford students have to get software installed on their laptops to access the Stanford network. Supposedly this is to force us to have anti-virus installed and the latest security updates, but I doubt any student has actually looked into what the software does / could do.
I've actually dug pretty deeply into it. Briefly, it's IBM BigFix (previously Tivoli Endpoint) and allows university IT to remotely run any script with administrator privileges on your machine. I hate it and remove it after registration. It's one thing to use that on company/university-owned computers. I think it's unacceptable to run it on personally owned machines like Stanford does. Students who live on campus have no alternative ISP.
Stanford gets attacked - a lot. The biggest threat to the network is personally owned machines that need legitimate access to the network. The only way for them to protect the rest of the network is to have some kind of monitoring on those personal machines. The only other alternative would be to run personal machines on a separate network entirely, which they may or may not do already. I'm not sure. And BigFix doesn't cover only students, but also all faculty and staff as well.
Honestly, if you think that's bad, try getting a server online (in the datacenter!). The network admins take their firewall rules very seriously (as well they should).
For me, the biggest pain is the two (yes, two) backup services that must be installed for my department.
That used to work but they've at least made it harder. They started using your browser UA string to detect OS, and somehow (I forget the details) they've made it more difficult to spoof that.
I block port 25 out because of spam / bot problems with student computers, but I would be curious about any reason to do a relay. That doesn't make a heck of a lot of sense.
Presumably zbjornson was sending messages from a network within Stanford, using vanilla SMTP. In that sort of topology, Stanford's network could easily intercept, monitor and modify SMTP traffic.
What the posters below said. If you use SMTP to send to an external server, and you are on Stanford's network, they intercept it. "Relay" might have been the wrong word.
I left industry to be a postdoc in the AMPLab and it's really depressing to see this sort of intrusive monitoring coming to the home of BSD and sockets.
One of the benefits of working in academia in CS has been the absence of top-down corporate IT control -- no MDM on your mobile devices, no third parties having root on your devices, the ability to have a static IP, etc.
I'm curious what led you to leave industry for a postdoc. AMPLab is a great place with lots of smart people, but I'm guessing you've taken a > 4x pay cut with dim chances for career development, unless you're shooting for a faculty position.
1. incredible freedom to work on whatever I want
2. a real focus on prototypes and curiosity, with less pressure to "ship working artifacts"
3. explicit recognition that "learning new skills" is part of the job. When I was in industry, if I needed an antenna designed, I would just go hire an RF engineering firm. Now I can spend time hacking on it myself, which I really enjoy
4. Absence of corporate bullshit, which has traditionally included tremendous IT freedom, no centralized control of messaging / official PR line, etc.
5. Generally no IP bullshit unless you want it -- everyone loves open-source, etc. AMP is a real leader here.
Things that are a challenge:
1. project management is often lacking -- it can be hard to focus, and the freedom to explore is a double-edged sword of "it's been 18 months, what papers have we actually finished?"
2. The pay isn't great -- CS postdocs make $60-70k/year. Fortunately I can consult on the side to pad this out.
3. A lot of your inputs are 23-year-old graduate students, who are smarter than you, enthusiastic, and inexperienced. This can be a fantastic source of new creative ideas, but sometimes they also decide to switch from Julia to Rust 80% of the way through a project. Sometimes you wish you could just hire that RF engineer and get shit done (see above)
tl;dr : I get to build the coolest shit with the most amazing people helping me and learn a ton of new things, in an environment which generally shares my values.
The university I go to requires giving root access to third party closed source apps to use most of their IT infrastructure. I don't do this of course so I end up having to use things as a "guest" and occasionally my email clients can't sync.
It would be bad enough if you were, say, provided a laptop to use for work, but I'm guessing they want you to install that on your personal machines too, no? Unconscionable.
I honestly had no idea she was running the UC system now. I'm actually shocked to see her in that position. That only furthers my disillusion with anything related to politics.
There's a thing called regulatory capture: "Regulatory capture is a form of political corruption that occurs when a regulatory agency, created to act in the public interest, instead advances the commercial or political concerns of special interest groups that dominate the industry or sector it is charged with regulating." https://en.wikipedia.org/wiki/Regulatory_capture
This seems like that in reverse. I wonder what kind of capture it should be called.
Let's not forget John Yoo, the lawyer for the Bush Administration who kindly wrote the legal memorandum providing justification for an official adoption of torture, who was hired on at Berkeley law. Not only did these criminals never pay for their crimes against humanity, they've been allowed to prosper. It's a shame.
Janet Napolitano is a liberal Democrat, former governor of Arizona, and the Secretary of Homeland Security under Barack Obama. How does this make her a neocon?
I mean in terms of foreign policy (keep USA an unchallenged number 1 militarily and economically) and willingness to use surveillance as a mechanism of control. I get your point that socially or whatever she's not a neocon, but we're not talking about views on abortion, etc. in this thread. Do you have a better word than neocon?
> Janet Napolitano [was] the Secretary of Homeland Security under Barack Obama.
That qualifies her w.r.t. surveillance and exercise of power. Sounds a lot like her approach to the UC system! Barack Obama, while a Democrat, commands a fleet of extrajudicial killing machines that have been used to murder American citizens without trial. It was during Napolitano's time at DHS that drones for surveillance and control (read: killing) became cemented as governmental policy.
> - The intrusive device is capable of capturing and analyzing all network traffic to and from the Berkeley campus, and has enough local storage to save over 30 days of all this data ("full packet capture"). This can be presumed to include your email, all the websites you visit, all the data you receive from off campus or data you send off campus.
Seems to me that a big part of the concern is that this is imposed by and reports to the UCOP (University of California Office of the President?) and is independent of all local IT staff beyond "stick this black box between your network and the world and don't tell anyone about it." Covert action does not inspire trust.
I'm a bit shocked that I need to be surveilled by absolutely everybody. It's bad enough my government does it, but at least the conversation starts with the fact that they're spies by profession. Now my college needs to surveil my online activities?
I'm not the president of a major university, but how is this justified in the least?
> UC exists by the good graces of the State of California. In some sense, UC is the government, same as the DMV
You've actually dramatically understated it, I think. The UC doesn't exist through the good graces of the state. The UC's good graces are sufficient to make them exist under the state constitution absent any other part of government. If the legislature or governor wanted to shut them down, the only way they could do it is if they could get enough other regents to vote for it. The UC system has a great deal more power than the DMV. For one, the legislature is explicitly not allowed to regulate them outside a few ways involving funding. Also, the UC has its own state level police agency, that is controlled by UC, not any other part of the state government.
The DMV is involved in some of the things CHP does, but CHP doesn't report to the DMV.
Pretty much every .edu with more than a few thousand students is likely already doing this. If you work at one of them, you should not be surprised to know that your traffic is being monitored or that the capability is there.
Is the "uproar" because of the capturing itself or because the captures are being sent to/monitored by an external third-party?
I think that many people accept that there are some forms of monitoring occurring on the networks owned by their institutions.
Unfortunately, many folks do not have clear knowledge about exactly what gets monitored and what capabilities the monitoring allows.
In the USA, at least, "owners" of a network can do anything they want with the traffic going through it including providing fake certs for the purpose of clear-text search of https traffic, storing communications for however long they want, and providing these communications to whomever they please.
My point is that people (the public) don't actually understand the extent and implications of network monitoring, and almost no one is speaking out about it.
As an example consider that there are folks on this very thread asking about whether or not https is "safe" on a network which is owned by someone else. If there is a lack of knowledge _HERE_, imagine what the situation is for everyone else who doesn't even know what a MITM "attack" is. You can't "protect yourself" if you don't know what the threat is.
I am glad that the UC community is raising a stink about this.
Of course Berkeley has some monitoring, but it's done intelligently by campus IT. UCOP rammed in this new hardware and won't even let campus IT touch it.
> If you work for someone else, they can do whatever they want with your communications on their devices.
I agree with regard to companies, but I think it's a bit less clear for a university in general. Yes, the faculty and staff are clearly employees, and so in that sense it's reasonable to expect their internet usage will be monitored. But on the flip side, one of the historical aspects of universities is supposedly intellectual freedom. And in the sense that faculty might feel a chilling effect on their intellectual freedom if they know or suspect this type of monitoring, the monitoring might be less acceptable.
When it comes to students, this argument is perhaps less clear. The students are not employees. Yes it's true they have all clicked "I Agree" to the university's network terms of service (but with all TOS's, how many of them actually read it?). But again, universities are ostensibly havens for intellectual freedom; what would pervasive monitoring do to that?
So, I think you are technically and legally correct, but I wonder if that justification is in the long-term best interests of universities as institutions of learning and exploration.
Napolitano's decision was probably one of those that sounded good in theory until people found out about it.
If I was a professor I'd want to work somewhere where I feel safe and secure, because it's eerie having someone standing over your shoulder, let alone 24/7.
You mentioned intellectual freedom, which depends on a feeling of security. I find that feeling misleading in the US, with the NSA's new data center in Utah collecting our information.
Still it would be wise of Napolitano to reverse course but I think it's unlikely given her background.
While the scenario you describe is indeed alarming, it's important to understand that in the US, universities housing students are very, very different from landlords. Even in the context of FCC net neutrality regulation, universities are considered premise operators rather than ISPs and do not have the same obligations.
Students have significantly reduced rights as they are not considered tenants so much as "members" of a university.
Hopefully this forces all UCB faculty to begin to use GPG for all email communication; then that in turn requires all the faculty contacts to use GPG; finally the GPG virus spreads nationwide due to this one selfish act!
GPG and HTTPS are just band-aids over a far more serious problem: they don't protect metadata. Knowing what sites you connect to, and who you email and how often, is more than enough to seriously undermine privacy and chill discourse. The real solution is a political one, not a technical one.
Well... technical solutions can get you pretty far. Knowing that "there was a TCP connection that lasted 37 minutes and transferred 2 GB of data between IP1:Port1 and IP2:Port2 starting at 8PM", is different to "Steve in dorm 612 illegally downloaded Spectre last night".
Deep Session Inspection®. Decode and analyze content in real-time, no matter how deeply embedded it is. The Deep Session Inspection engine sees every single packet that traverses the network, reassembles those packets into session buffers in RAM, and recursively decodes and analyzes the protocols, applications and content objects in those session buffers in real-time - while the sessions are occurring. This allows XPS to “see deeper” into applications and, in particular, the content that’s flowing over the network.
Detect and Investigate Retrospectively. Investigate what attackers have done in the past. By collecting and storing rich content-level metadata from both the network and the endpoint, XPS provides a lighter, faster and less expensive way to analyze historical data.
I don't know precisely, but if FERPA works the way HIPAA does, there's a carve-out for arrangements like this provided that the vendor has an agreement in place (a BAA in HIPAA-speak) and agrees to abide by certain security standards. Note that, in practice, it goes like this:
VENDOR: We promise that we are doing all of the stuff that the law says we're supposed to do to protect the covered data that you're sending us. Look, see the pretty pictures of a fancy data center in our marketing materials?
CUSTOMER: OK, that's good enough for us! (checks off box on list)
In the workplace it's useful to follow counterintelligence strategies. Send yourself outlandish job offers from spoofed addresses, for example. The possibilities are endless.
In the first email, which is basically loaded with innuendo and short on actual facts, it states:
UCOP defends their actions by relying on secret legal determinations and painting lurid pictures of "advanced persistent threat actors" from which we must be kept safe. They further promise not to invade our privacy unnecessarily, while at the same time implementing systems designed to do exactly that.
Then the next email says:
A network security breach was discovered at the UCLA Medical Center around June 2015.
UCOP began monitoring of campus in networks around August 2015.
ONLY AFTER this monitoring, on August 27, 2015, did UCOP issue a new cybersecurity policy online under the heading of "Coordinated Monitoring Threat Response." The policy describes how UCOP would initiate "Coordinated Monitoring" of campus networks even though it is believed that such monitoring was already underway prior to the announcement of the new policy.
So first they were drumming up conspiracy theories about "supposed" threats to the network, and then in the second email they outline that there actually was a breach of their network.
I guess the real issue is they have no idea who the vendor is and what exactly they're doing with their data. The good news is it appears they're only holding up to 30 days of data, but it isn't clear what happens after the 30 days.
I would be more concerned about the lack of transparency with what they intend to do with the data and who the hell the vendor actually is. Nothing like having some shadowy government vendor snooping around your network and storing and analyzing your data without letting you know what they're doing.
Looking more into this: if your Google Apps for Work/Education account has unlimited storage, then your company has definitely purchased Vault as well. Easily checked in Gmail.
Thanks, for me it is: Don't use Google Apps for Education for anything except for taking advantage of it by uploading your encrypted data to the nice "unlimited" Google drive space or sending PGP mails.
Don't use Google Apps for Education for anything except for taking advantage of it by uploading your encrypted data to the nice "unlimited" Google drive space
I have a few terabytes of encrypted backups and disk images (via command line OpenSSL) on my "unlimited" Google Drive for Education account. I do not consider this an abuse, but a perfectly valid use case for Google Drive. If they don't like it, they can feel free to stop using the word "unlimited". However, I believe they do do some sort of throttling.
Absolute power corrupts absolutely. If NSA contractors were brazen enough to look up the private information of spouses and "ex-lovers" [1] I don't see why the IT departments of education systems won't evolve to be susceptible to the same thing.
Or, you know, have access to one of the world's premier research institutions' data in real time. The amount of NDA'd traffic alone across UC's network is enormously valuable. How many private companies made of valuable, transmittable IP have endpoints within that network?
I used to work for Berkeley Lab, which is run by UC, in a network research group in the mid-2000s. Already by that time, Berkeley Lab and the UCs were passing their traffic via a passive fiber-optic tap to a deep packet inspector developed by my team, called Bro (https://www.bro.org/). It didn't have the disk capacity to save 30 days of traffic (at Berkeley's level of traffic, that's quite a lot of storage), but it certainly did some pretty deep snooping. It was used primarily for security (it would log into the router and cut off connections that appeared to be hackers). I got a call from IT one day asking what I was torrenting (it was a Linux distro).
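The kind of automated heuristic that triggers a call like that can be illustrated with a toy scan over Bro/Zeek-style connection metadata (purely illustrative: the record layout, port cutoff, and peer threshold are invented here, not Bro's actual log schema or detection logic):

```python
from collections import defaultdict

def flag_torrent_like(conns, min_peers=50):
    """Flag internal hosts talking to many distinct peers on high ports,
    a crude heuristic for BitTorrent-style swarming traffic."""
    peers = defaultdict(set)
    for src, dst, dport in conns:
        if dport >= 1024:          # only count ephemeral/high-port peers
            peers[src].add(dst)
    return {host for host, p in peers.items() if len(p) >= min_peers}

# One host in a 60-peer swarm, one host making a single HTTPS request:
conns = [("10.0.0.5", f"198.51.100.{i}", 51413) for i in range(60)]
conns += [("10.0.0.9", "203.0.113.1", 443)]
print(flag_torrent_like(conns))  # flags 10.0.0.5 only
```

Note that this works entirely on metadata (who talked to whom, on what port); it does not even need the packet contents that a full-capture system additionally stores.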
Is there any background to what seems like assertions made in the article?
Who did the installation, and how were they coerced into secrecy?
The article says information is sent directly to the vendor? Who is the vendor?
The article mentions "attorney-client privilege." Which counsel? Do they work for the state?
UC CIO Tom Andiola is said to have promised that the monitoring equipment would be removed and disclosed.
Then other UC senior management retracted that promise. Has anyone followed up with Tom Andiola? Hasn't he got a system-wide trust problem now? How is he taking being hung out to dry this way?
And, on top of all that, how does a UC president hire a contractor in secret? How is that legal? And how do you think it happened? Was Janet Napolitano really that concerned "for the children," or was this a sweetheart deal with the seeds sown back at DHS?
- These are University resources?
- Any corporation you work for would be monitoring their email systems and letting users know that. Why are these Univ. resources owned by the people?
- There are many legitimate reasons for monitoring: security, legal defense, etc.
I'll admit trying to sneak this through over objections was probably not handled in the best way from a PR perspective.
And if you're privacy-minded you should already know: most privacy debates that make the news are moot.
If you have something truly secret - you must encrypt - so no man in the middle can read. If you think you have privacy sending any unencrypted email over any public network you're dreaming.
Assume that anyone with the key - has your data. Ask the guys that lost millions with Mt. Gox.
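To make the "you must encrypt" point concrete, here is a toy counter-mode stream cipher built from SHA-256 (strictly a teaching sketch, never for real use: use GPG, TLS, or another vetted tool in practice). A passive tap on the wire sees only the ciphertext; anyone holding the key recovers the plaintext, which is also exactly the "anyone with the key has your data" caveat:

```python
import hashlib
import secrets

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Derive a pseudorandom byte stream from key+nonce (toy SHA-256 CTR)."""
    out, ctr = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:length]

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

key = secrets.token_bytes(32)    # shared only by the two endpoints
nonce = secrets.token_bytes(16)  # unique per message, may travel in the clear
msg = b"meet at the faculty club at noon"
ct = xor_bytes(msg, keystream(key, nonce, len(msg)))  # what the tap records
pt = xor_bytes(ct, keystream(key, nonce, len(ct)))    # what the key holder reads
```

Without `key`, the recorded `ct` is useless to the middlebox no matter how many days of it are stored; with `key`, everything is readable, which is why key custody (cf. Mt. Gox) matters as much as the cipher.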
Is privacy a luxury we only afford to the technologically savvy? Previously we considered privacy a right of all people to be free of unreasonable searches. Your attitude displayed here says otherwise: if you can't encrypt everything from start to end, you don't deserve privacy. If you are not up to date with the absolute latest way of sending data without it being monitored, you don't deserve privacy.
In the past, those with these abilities fought hard to protect those who didn't.
When did that change?
Everyone deserves privacy and it must be defended, and if I have my computer data at home, the government should not be able to search it without a warrant.
But when I send that data over public networks unencrypted, I willingly give up that privacy.
Pretending like the transmitted data was safe in the first place, before they set up these servers - is a lie, and only plays into the hands of those who would snoop on you.
>But when I send that data over public networks unencrypted, I willingly give up that privacy.
You may willingly give up privacy, but many users do so unwittingly. Their ignorance does not make the current state of affairs just.
>Pretending like the transmitted data was safe in the first place, before they set up these servers - is a lie, and only plays into the hands of those who would snoop on you
Ignorance of privacy concerns on public networks is not the same as pretending the transmitted data was safe in the first place.
The article and controversy are attempting to call attention to the mechanisms in place that are a threat to privacy of all users. Your response was "well anyone who doesn't want their data exposed should know better." This is a disturbing attitude.
> Ignorance of privacy concerns on public networks is not the same as pretending the transmitted data was safe in the first place.
The general public is at least somewhat aware of these problems. People pushing a surveillance agenda (esp. marketing) often extrapolate this awareness, suggesting that people are surrendering privacy willingly either because they don't care or as part of a "trade" for services.
In reality[1], people often feel powerless. Without the necessary technical knowledge to create alternatives, the aggressively pushed surveillance option seems like the only choice.
> "well anyone who doesn't want their data exposed should know better."
That's classic victim blaming.
I think the common "this isn't surprising" response has some victim blaming in it, too, when it becomes a thought-terminating cliche. Jacob Appelbaum is probably right[2] in his interpretation: saying something isn't surprising is a coping mechanism that probably means "I can't do anything about it". Unfortunately, sometimes it's used to shut down discussion.
>Why are these Univ. resources owned by the people?
Why wouldn't these Univ. resources be owned by the people?
Check the title of UniversityOfCalifornia.edu:
>University of California | The only world-class public research university for, by and of California.
The University is established by the California constitution.
>Article 9, Section 9(a): The University of California shall constitute a public trust, to be administered by the existing corporation known as "The Regents of the University of California," with full powers of organization and government, subject only to such legislative control as may be necessary to insure the security of its funds and compliance with the terms of the endowments of the university and such competitive bidding procedures as may be made applicable to the university by statute for the letting of construction contracts, sales of real property, and purchasing of materials, goods, and services.
> On Dec. 7, 2015, several UC Berkeley faculty heard that UCOP had hired an outside vendor to operate network monitoring equipment at all campuses beginning as early as August 2015.
So, I don't want to defend UC entirely, because they appear to have seriously mishandled this in several ways. But I would like to share some information about this aspect of security.
Full packet captures of network traffic are extremely valuable in post-compromise incident investigation and in incident detection, as they allow for vastly more complex analysis of traffic (done post hoc with more complicated logic) than is practical on the wire. There's absolutely no need to invoke the NSA here: multiple private vendors offer these systems. It sounds like UC went with Fidelis; another major provider is RSA NetWitness (now part of EMC). These systems are fairly common on corporate networks; the main thing that limits their installation is cost, since the storage alone becomes rather costly at large scales.
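The post-hoc value is easy to illustrate: once a new indicator of compromise surfaces, the stored capture (or flow metadata derived from it) can be queried retroactively across the whole retention window. A toy sketch, with an invented record layout:

```python
from datetime import datetime, timedelta

def hosts_that_contacted(records, bad_ip, since):
    """Retroactively find internal hosts that talked to a newly
    identified bad IP within the retention window."""
    return sorted({src for ts, src, dst in records
                   if dst == bad_ip and ts >= since})

now = datetime(2016, 2, 1)
records = [
    (now - timedelta(days=10), "10.1.1.4", "192.0.2.66"),
    (now - timedelta(days=40), "10.1.1.7", "192.0.2.66"),  # outside 30-day window
    (now - timedelta(days=2),  "10.1.1.9", "198.51.100.3"),
]
print(hosts_that_contacted(records, "192.0.2.66", now - timedelta(days=30)))
# prints ['10.1.1.4']
```

This is the capability you cannot get from on-the-wire filtering alone: the detection logic did not exist when the traffic happened, so the raw data has to have been kept.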
Invoking attorney-client privilege on matters related to security is pretty common in the private world. The reason for this is that any security investigations and reports are subject to legal discovery and may be used to establish liability in the event that someone sues you for a matter related to a cybersecurity incident. The primary way to protect this information is to place the cybersecurity function under legal counsel so that all security work is work-product of an attorney and so under privilege. This is a recommended best practice in the security compliance community. Public institutions do this less frequently for the reason that it is often prevented or superseded by the relevant public record/accountability law, or unnecessary due to some type of immunity for example, but this may not be the case in California.
All in all, nothing here strikes me as particularly unusual practice for a large organization. What I do see is that UC has made several massive mistakes in implementation:
1. It must be completely clear to users that they have no expectation of privacy when using organizational networks. Unfortunately, many users do not realize this, and many organizations do not sufficiently communicate it. All users of organizational networks should sign an agreement to ensure that they are aware that they have no expectation of privacy. This is already legally true in, as far as I know, all cases, but there is an ethical obligation, I think, to go further than that.
2. Universities present a particularly tricky situation because there is a captive audience of users who rely on the university network for their personal usage. Ideally this should be 100% segregated from the institutional network, I believe, but I have work experience in a small university's IT and I can tell you how difficult this is to manage - and I can imagine that the problems at the scale of even a single UC campus are so much greater. They can and definitely should work harder to balance network management against the privacy of their captive users.
3. It appears that there are inadequate controls in place (or at least disclosed) to protect this data. I am very uncomfortable with the involvement of a third party without thorough documentation of their controls and their liability in the event of misuse. There must also be further internal controls - both technical and administrative - to guard against misuse. Simply asserting that the data is only used for security is not sufficient; actual controls must be set to ensure this, and it must be established how violations will be handled.
4. Creating fragmentation within the IT org is very common in universities but still a terrible idea. All levels of IT and security operations should be 100% on board with security mechanisms used, which appears to not be the case here.
A couple of auxiliary thoughts:
- If they are intercepting SSL (which may be a good idea for a corporate network; there are several factors to weigh against each other), this will of course be limited to computers that they manage.
- Tivoli BigFix, as mentioned elsewhere, is a common and rather good endpoint security solution. A similar competitor is Cisco NAC. These aren't scary NSA codewords, they're commercial products that many corporations use to ensure that all computers on a protected network meet a minimum security configuration. Whether or not they are appropriate in the ways that some universities use them is a very touchy issue, I don't think that they are, but that means that potentially much more costly (and inconvenient) controls will need to be in place.
- Universities need to carefully manage the fact that they are often not perceived as corporate orgs in terms of their network practices, although they usually behave like them. There are certainly complications at universities. Open communication of policies and procedures will help to alleviate this, as well as good network management (once again, complete isolation of residential and administrative networks should be the goal).
Good comments. But note that UC doesn't have a document that says one can't expect privacy--it has one that says that (except in special cases, and in these the user is supposed to have recourse) one can expect privacy.
The monitoring may be routine and innocent. But it shouldn't be secret, and it shouldn't be in stark violation of the University's own (stated) policies.
I wasn't familiar with that document, but I read IV-C-2-b as authorizing this activity:
"University employees who operate and support electronic communications resources regularly monitor transmissions for the purpose of ensuring reliability and security of University electronic communications resources and services (see Section V.B, Security Practices), and in that process might observe certain transactional information or the contents of electronic communications."
This is followed by standard restrictions (only for valid purposes, controls to protect information, etc). They provide a stronger assurance of privacy than I would expect, but still leave plenty of latitude for this activity.
Edit: copied and pasted from a PDF. Never again...
Further edit: V-B is really interesting and requires user permission when "it is necessary to examine suspect electronic communications records beyond routine practices." This is kind of a strange rule, but particularly since they're using a third-party vendor, all of this would easily fall under "routine".
It's important to note that this policy primarily discusses "disclosure," which I can't see this being considered. It does go to a third party, but one contracted for internal purposes.
Doesn't this: "they are not permitted to seek out transactional information or contents when not germane to system operations and support, or to disclose or otherwise use what they have observed." [4C2b]
Conflict a bit with the product's advertised description: "The Deep Session Inspection engine sees every single packet that traverses the network, reassembles those packets into session buffers in RAM, and recursively decodes and analyzes the protocols, applications and content objects in those session buffers in real-time - while the sessions are occurring. This allows XPS to “see deeper” into applications and, in particular, the content that’s flowing over the network." [1]
Boiled down: "University employees are not permitted to seek out... content when not germane to system operations and support", juxtaposed against, "XPS reassembles those packets... and content objects... to "see deeper" into applications and, in particular, the content that's flowing over the network."
Or do we go all bureaucratic-NSA-legalese and say that, "all content is [could be] germane to some kind of ethereal 'persistent threat'". And that "'automated packet inspection' based on human-generated rulesets is different from a human 'seeking out' content." If so, then there shouldn't even be a privacy carve-out, as there's nothing from which to carve. And the first sentence in the policy is entirely meaningless.
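As background on what "reassembles those packets into session buffers" means mechanically, here is a toy reassembly sketch (real engines must also handle overlaps, retransmissions, gaps, and both directions of a session; the segments and offsets here are invented). It is this reassembly step that turns isolated packets into inspectable content, which is the crux of the "germane" dispute above:

```python
def reassemble(segments):
    """Join out-of-order (offset, payload) segments into one byte stream.
    Toy version: assumes contiguous, non-overlapping segments."""
    return b"".join(payload for _, payload in sorted(segments))

# Segments arriving out of order, keyed by byte offset in the stream:
segs = [(10, b".html"), (0, b"GET /"), (5, b"index")]
print(reassemble(segs))  # prints b'GET /index.html'
```

Once the stream is rebuilt, the request line, file objects, and other content are all directly readable to whatever rules run over the buffer, whether or not a human ever "seeks them out".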
I would argue that reviewing traffic is obviously germane to security monitoring, which is by definition monitoring traffic for security incidents.
Edit: Content included. Content is where the bad things go, of course, and a lot of the features that come out of full packet data are the ability to do things like auto-detonation of executable files for malware detection. Extremely content-based detection heuristics.
IV - A - "The University does not examine or disclose electronic communications records without the holder’s consent."
IV - B - "The University shall permit the examination or disclosure of electronic communications records without the consent of the holder of such records only: (i) when required by and consistent with law; (ii) when there is substantiated reason (as defined in Appendix A, Definitions) to believe that violations of law or of University policies listed in Appendix C, Policies Relating to Access Without Consent, have taken place; (iii) when there are compelling circumstances as defined in Appendix A, Definitions; or (iv) under time-dependent, critical operational circumstances as defined in Appendix A, Definitions."
IV - B1c - "In addition, California law requires state agencies and the California State University to enable users to terminate an electronic communications transaction without leaving personal data (see Appendix B, References). All electronic communications systems and services in which the University is a partner with a state agency or the California State University must conform to this requirement.
In no case shall electronic communications that contain personally identifiable information about individuals, including data collected by the use of "cookies" or otherwise automatically gathered, be sold or distributed to third parties without the explicit permission of the individual. "
IV - C2b - "In the process of such monitoring, any unavoidable examination of electronic communications (including transactional information) shall be limited to the least invasive degree of inspection required to perform such duties. This exception does not exempt systems personnel from the prohibition (see Section IV.A, Introduction) against disclosure of personal or confidential information.
Except as provided above, systems personnel shall not intentionally search the contents of electronic communications or transactional information for violations of law or policy."
More policy. http://ucop.edu/privacy-initiative/uc-privacy-and-informatio... was adopted as policy by the Regents in 2013. These really are principles, not policy, but among other things it describes "web sites visited" by an individual as an aspect of "autonomy privacy", and claims to accord these a high degree of respect, invoking academic freedom, and suggests that policy should follow the first and fourth amendments.
The IT department at universities or corporations can archive all the emails "legally" in their systems (not personal emails, but emails related to the job). I guess this monitoring is something live rather than a cold-storage-style archive? While I hate this practice, they do have the right to do that, correct?
If you think someone is trolling, please don't make the thread worse by feeding it. Instead, flag the comment by clicking its timestamp to go to its page, then clicking 'flag'. An account needs a small amount of karma (currently 31) before flag links appear.
By definition; any democratic society has the government it deserves.
If not, the society is not a democracy or the citizens are idiots.
The etymology of "idiot": from "idiota" (Late Latin) an "uneducated or ignorant person", and from "idiotes" (Greek) a "layman, person lacking professional skill".