The fact that they're able to "double your password" is a bad sign. Here's what this implies to me:
* McGill had a database of everyone's password in plaintext at the time of Heartbleed
* McGill is concerned about mitigating possible security compromises due to Heartbleed, including these plaintext passwords, which if they were compromised were compromised all at once
* Despite this concern, McGill still has a database of everyone's password in plaintext. Oh, and a large proportion of them are still the possibly-compromised ones.
* They're comfortable announcing this fact to the Web, for some reason.
I really hope the first thing they do after doubling the password is put it into a password-hashing function and throw away the plaintext, and then make those users change it anyway, because the doubled passwords are still compromised. That seems unlikely.
They can put a flag on the database and check that the cleartext you send them when logging in is doubled before hashing half of it.
I can't count how many times I've seen something that could easily be done at login time and people conclude that the service must be storing plaintext or multiple hashes.
This isn't even a direct security measure in the first place. This is to annoy people into updating their passwords.
You are correct - for UX reasons you don't want to force people to change passwords unnecessarily, so a check at login for length is the obvious way to do it, informing people on an as-needed basis that their password is too short. I just don't think they want short passwords on their system any more. Hence there is nothing sinister about what they are doing or how they are storing passwords. The evidence fits the scenario they describe.
Forcing password changes -- e.g., "you're locked out until you change your password" -- means that the student must select a new password right now, whatever time pressure they're under.
Most people in a rush aren't going to do a good job choosing a secure new password; they aren't going to read McGill's recommendations about password managers or whatever; they're not even going to take 30 seconds to think about how to come up with a reasonably secure but memorable password.
They're just going to use their old password with a "1" on the end.
So -- McGill came up with a way to keep nudging students into updating their passwords, without forcing the specific moment they need to do it.
I'm at a .edu, not McGill. I'll relate my experience with forced password changes.
>Most people in a rush aren't going to do a good job choosing a secure new password;
Yup. This happens once per year without fail, and more often if there is some security problem. Often enough, it happens during a busy time of the semester (beginning or end).
>they aren't going to read McGill's recommendations about password managers or whatever; they're not even going to take 30 seconds to think about how to come up with a reasonably secure but memorable password.
My employer's IT department's password advice is... vintage, to be kind. It took a shaming in front of the college president to get them to stop threatening people for writing passwords down. The clunky password rules are at least partly responsible for the typically weak passwords. Another factor is the practice of configuring machines with timeouts in the 10 to 30 minute range, forcing people to constantly re-enter their passwords. Yet another is the number of disparate IT systems with different password databases and different password rules.
Here is one idea. When you force people to change their password, you interrupt their workflow. Person X wants to do thing Y, but first they must change their password. Now they are annoyed and still want to do thing Y, so they try the simplest thing possible. That will probably be reusing the same password, and when that (hopefully) doesn't work, they will pick another common password, all so they don't have to spend more time thinking about it. With what the university has done, the person is only slightly annoyed and the workflow is not interrupted, which means they can think up a good password without time pressure on them, so they are more likely to come up with a good one.
Yeah, for that you'd need to know more about the password than we'd like them to know. I imagine they are simply setting the policy to "double" the password if it is older than X days.
Even if they 'check that the cleartext you send them when logging in is doubled before hashing half of it' and calculate the hash of the doubled string, what would they check it against? They don't have the hash of the doubled string unless they were storing the plaintext.
Correct me if I'm wrong here, but to do what you're saying they could/should do, we basically have three options...
1) Storing the password in plaintext.
2) Sending the password over the wire in plaintext.
3) Computing two hashes on the same system (presumably) via the same function, with a known relationship between their inputs.
I guess the fourth option is they have a securely stored hash of the password, but that still wouldn't allow them to compute the 2x hash server side so we're looking at another round trip and/or number three above.
I don't think that's necessarily true. Let's say they have all of the passwords stored as bcrypt hashes, and they also know the last time you changed your password. They could just update the application logic to check that your password is of the form <pw><pw> if your last change date is before X. Then to check the password, they just take the first half and check that against the hash.
It's a moderately aggressive way to get people to change their passwords (more aggressive than an email or prompt, less aggressive than forcing upon login)
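A minimal sketch of that login-time check in Python, assuming bcrypt-stored hashes and a recorded last-change date (the field names, the cutoff date, and the use of the bcrypt package are my assumptions for illustration, not McGill's actual code):

    import bcrypt
    from datetime import datetime

    CUTOFF = datetime(2014, 4, 30)  # hypothetical "doubling announced" date

    def check_login(user, submitted):
        candidate = submitted
        if user.password_last_changed < CUTOFF:
            # Old password: the user was told to type it doubled, so require
            # the submission to be <pw><pw> and verify only the first half.
            half = len(submitted) // 2
            if len(submitted) % 2 != 0 or submitted[:half] != submitted[half:]:
                return False
            candidate = submitted[:half]
        # user.password_hash is the stored bcrypt hash (bytes)
        return bcrypt.checkpw(candidate.encode(), user.password_hash)

No plaintext storage needed; the halving happens on the submitted value at login time.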
This would break passwords like "foofoo": they'd think it was already doubled, check "foo" against the hash, and it would fail. Then again, you can get around that by doubling it again after checking, so I don't know.
Why is this a problem? If your password is "foofoo" and was set after the cutoff, then it won't be halved; if it is "foofoo" and was set before the cutoff, it will be halved, and then not match the password in the database, as intended.
Passwords prior to the doubling were a fixed length, so they could always assume the first eight characters of a 16-character password (which hasn't been changed manually) are the original password.
Of course, anyone who has a leak of the original passwords can equally just send a double of it, so I'm not sure what benefit this is supposed to be offering.
Whether it provides a benefit or not is unrelated to the fact that they don't need to know your plaintext password: they can use something like the regex I wrote to detect a string repeated twice and extract the first half prior to hashing/checking against the DB.
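The regex itself isn't quoted in this thread, but a backreference pattern along these lines would do the job (a sketch, not necessarily the one the parent wrote):

    import re

    def undouble(submitted):
        # If the submitted password is some string repeated exactly twice,
        # return the first half; otherwise return it unchanged.
        m = re.fullmatch(r"(.+)\1", submitted)
        return m.group(1) if m else submitted

    undouble("foobarfoobar")  # -> "foobar"
    undouble("hunter2")       # -> "hunter2"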
As Dylan stated, an example of a way to do this on login without storing passwords in plaintext:
    login():
        needsPasswordDoubled = [user has NOT changed password since XX date]
        username = [username post parameter]
        password = [password post parameter]
        if login successful:
            if needsPasswordDoubled:
                replace stored password hash with hash(password | password)
                [from now on, only the doubled password will verify]
            else:
                return success
        return failure
Done. That said, the security of this is a joke: anyone who wants to compromise McGill accounts and (a) has a valid (old) username and password and (b) has seen this public press release is simply going to double all of their compromised passwords. But still, typing a double-password is annoying if you don't want to do it. So this is really just a way to "force" people to change their passwords without forcing a password reset.
The security of this is a joke because this isn't a security move. It's not designed to make the users secure after the Heartbleed vulnerability. It's purely designed to make logging in annoying for the users, so they finally change the password they should have changed a while ago.
Seems a heck of a lot better than force-expiring passwords like I'd expect any other company to do.
Blocking login until they've changed their password is too harsh, and it's counter-productive.
A college student who needs to choose a new password now but is under time pressure is definitely not going to take recommended steps like "go install a password manager that'll generate a secure random password for you." They may not even have a piece of paper on hand right now, so they will choose a password that they can remember.
Like -- their previous password, but "2" instead of "1" at the end....
Whereas if you just give them a nudge like this, they will bear the annoyance of typing it twice if they're in a rush, then when time permits they can choose a decent replacement.
In general yes. But don't forget there was CVE-2013-5750 with Django's PBKDF2 implementation, where arbitrary-length passwords could DoS the server. An upper bound is probably a safe thing to have (but 18 is too low)
> In general yes. But don't forget there was CVE-2013-5750 with Django's PBKDF2 implementation
CVE-2013-5750 is a Symfony vulnerability, the Django one is CVE-2013-1443.
And of course it was fixed by limiting passwords to 4096 bytes, not 18.
An easy alternative (which has to be applied if you're using bcrypt, since it's limited to about 72 bytes of input) is applying length reduction through a regular cryptographic hash before applying the KDF[0]. Of course the cryptographic hash might still be DOS'd, but they tend to have a throughput of 100~300MB/s on commodity hardware so that's less likely.
[0] HMAC actually does that internally: if the key (password) is bigger than BLOCKSIZE (64 bytes for hmac-md5 and hmac-sha1), it will pass it through the hash function once, then pad it to BLOCKSIZE before doing its thing.
The PBKDF2 DoS is because PBKDF2-HMAC calls HMAC once per round, so (on passwords longer than BLOCKSIZE) it performs length reduction once per round, and the total input data for length reduction alone is thus rounds * pw_length.
Given pbkdf2 tends to have round counts in the tens of thousands or hundreds of thousands these days, the amount of data going through the hash function literally increases by several orders of magnitude… and for nothing since it's the exact same operation each round.
Thus running e.g. SHA-384 on a password before handing it off to a KDF is probably a good idea in any case
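A minimal sketch of that idea using only Python's standard library; the iteration count and salt handling here are illustrative, not a recommendation:

    import hashlib
    import os

    def derive_key(password, salt, iterations=200_000):
        # Length-reduce the password to a fixed 48 bytes first, so the
        # per-round HMAC cost inside PBKDF2 no longer depends on password length.
        reduced = hashlib.sha384(password.encode()).digest()
        return hashlib.pbkdf2_hmac("sha256", reduced, salt, iterations)

    salt = os.urandom(16)
    key = derive_key("a very long passphrase " * 200, salt)

The expensive KDF then always sees a 48-byte input, no matter what the user typed.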
I always thought that the max is there to discourage users from setting passwords so long they can't remember them (and instead write them down on a post-it that's right on the monitor).
Maybe the interface expects an 8 character password even if it's hashed on the backend. Maybe it's hashed by something that expects a certain length because it's using a silly algorithm. Maybe not all systems use hashed passwords. Hard to say from the outside, but in my experience it's usually not because the developers were stupid or ignorant.
    if not check_password(submitted_password) then
        HTTP 401
    else if password_last_changed < threshold then
        change_password(submitted_password + submitted_password)
        HTTP 401
    else
        log_user_in()
        HTTP 200
    endif
As we all know, a typical password validator formula is a great way to encourage people to choose "Secr3t!", or something else equally bad.
I'd really like to see a password field that auto-generated pass phrases using full English words from a sufficiently large wordset (in the vein of "correct horse battery staple"), possibly even enforcing such phrases as the only valid type of password. Every user gets a strong password they can actually remember. (...although possibly a non-starter for mobile contexts.)
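For what it's worth, a generator like that is only a few lines. This sketch assumes a local wordlist file (the path is an assumption) and uses the secrets module for the random choices:

    import secrets

    def passphrase(wordlist_path="/usr/share/dict/words", n_words=4):
        # Keep short, purely alphabetic words so the phrase stays typeable.
        with open(wordlist_path) as f:
            words = [w.strip().lower() for w in f
                     if 3 <= len(w.strip()) <= 8 and w.strip().isalpha()]
        return " ".join(secrets.choice(words) for _ in range(n_words))

With a Diceware-sized list of 7,776 words, four random words is roughly 51 bits of entropy, already far better than the typical user-chosen 8-character password.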
Please don't enforce it. The moment you have a "sufficiently large wordset" in English, you've already added a whole lot of words that are hard to spell, and not only for non-native speakers.
Why not try to generate semi-random pronounceable passwords? There's a clear decrease in entropy, but brute-force cracking against all pronounceable strings less than 20 chars will still be hard. (Of course your definition of pronounceability might differ.)
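A toy version of that idea, alternating consonants and vowels (the "pronounceability" model here is deliberately crude):

    import secrets

    CONSONANTS = "bcdfghjklmnprstvwz"
    VOWELS = "aeiou"

    def pronounceable(n_syllables=7):
        # Alternate consonant-vowel pairs, e.g. "tukazomirelapo"
        return "".join(secrets.choice(CONSONANTS) + secrets.choice(VOWELS)
                       for _ in range(n_syllables))

Each consonant-vowel pair here gives about 6.5 bits (18 * 5 options), so seven of them is roughly 45 bits; a real generator would want a richer syllable model and more length.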
I wondered the same, but I'm not sure that doubling my password would annoy me into changing it, especially when the password was previously capped at 8 characters.
It's quite easy to come up with a scheme to manipulate the password client-side. I'd assume an organization with any technical credibility whatsoever knows not to store unencrypted passwords by now.
The problem is that these passwords are used on everything from printer services to Cisco VPN access to a federated network ("eduroam"), where "client-side" isn't something McGill controls.
McGill does some beautiful IT admin stuff. And then it does some scary-ass shit like this.
IMHO: It's not too annoying having a long password if you make it a sentence. It's when people try to get a single "word" up to 18 chars that it becomes annoying to type. A sentence is also nice because it's not too hard to work a capital letter, a lowercase letter, and a symbol into the passphrase.
I'm not sure, but there may be some hash functions for which H(concat(a, a)) = F(H(a)). I'd worry a little about their security in some contexts - particularly if that generalized beyond just duplication, it seems likely vulnerable to length extension attacks if used for HMAC, &c - but for password hashing it might be fine and would clearly be better than plain text. It's not impossible they were already using a hash for which that's true - just unlikely.
Another possibility would be that they compute and send both H(a) and H(firsthalf(a)), if firsthalf(a) == secondhalf(a). That would work with any hash function, but I think would not appreciably increase security.
> there may be some hash functions for which H(concat(a, a)) = F(H(a))
If H is secure then F is not computable. If they can do a trick like this then their hashing is no good.
> Another possibility would be
Yeah, you can approach it like a puzzle and figure out what crazy set up they could have, but Occam's Razor has to apply at some point. I'm betting they did the dumb thing, not the strange thing that is mostly pointless.
"If H is secure then F is not computable. If they can do a trick like this then their hashing is no good."
Can you point to something more than assertion, here?
"Yeah, you can approach it like a puzzle and figure out what crazy set up they could have, but Occam's Razor has to apply at some point. I'm betting they did the dumb thing, not the strange thing that is mostly pointless."
It's mostly pointless, but it's increasingly common knowledge that the dumb thing is dumb. It's more subtle that the pointless thing is pointless. If I had to bet, I'd also bet that they did the dumb thing, I just think it's marginally less conclusive than had been implied.
> there may be some hash functions for which H(concat(a, a)) = F(H(a))
I'll give a shot at why this implies that H is not a secure hash function, though I could be wrong. The outputs of a secure hash function should be uniformly distributed across the set of all possible outputs. If H is a secure hash function that outputs a value in the set {0,1}^128 (a 128-bit output), then H(a) and H(a+a) should both be 128-bit outputs indistinguishable from values chosen truly at random from this set. Although there is a discernible pattern in the two inputs to H (the concatenation), there should be no discernible pattern in the two outputs of H that you could use to reliably map H(a) to H(a+a) for any given a.
I don't think this follows. "Random output" is an idealized model that no actual, specific hash function achieves. I already said that having such an F would worry me, but "if F is computable, H is not secure" is a strong statement and I'm wondering whether it's backed up by math, not-quite-math-but-good-reasoning, or handwavy bluster. Please don't take this as an attack, though - I appreciate the attempt!
I don't think I'm able to make a much more rigorous argument, but I will note that you seem to be applying two different criteria here. If you want to criticize modeling a hash function as a PRF as too idealized, then you aren't going to get a mathematical answer (since it will start with "let H be a PRF").
A maximally strong mathematical answer would be showing that, given F and H(a), we can reconstruct too much about a, for any H and corresponding F. There are probably other similarly strong forms of argument - I'm not saying that's the only one - but you can see how it differs from "Well, it's just not random."
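As a toy illustration of the point (my own example, not from the thread): for a trivially weak "hash", such an F is easy to write down, which is exactly why its existence would be a red flag for a real one:

    def weak_hash(data):
        # Not a real hash: just the byte sum mod 2**32.
        return sum(data) % 2**32

    def F(h):
        return (2 * h) % 2**32

    a = b"hunter2"
    assert weak_hash(a + a) == F(weak_hash(a))  # holds for every input a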
> The McGill Password length has also been increased from exactly eight characters to a variable length of eight to 18 characters.
So they're not using bcrypt (usable length 72). Even PBKDF2 would have been acceptable, but my guess is that they were sold a "layer over" on their stack with this. I can already tell this is a hacky patch.
> Every year, about 1,200 to 1,500 McGill accounts are compromised in one way or another.
Phishing + guessing. I know someone who gets about 2-3 emails a week asking them to enter their login info into some site in Brazil or the Czech Republic.
If every site properly salted and hashed passwords, reuse wouldn't even be a problem. But as we know:
- Most people choose crappy passwords.
- Most sites use crappy hashing schemes (if they hash at all)
When other sites are compromised, there's an easy list of ready passwords to try against other potential targets.
They may be artificially limiting the password length because other services which authenticate (e.g. VPNs, mail systems, older UNIX logins, administrative software, payroll, etc.) may have limits on password input fields.
This is why PBKDF2 would have made more sense, then. They can centrally authenticate and derive a secondary token from the original password while specifying the max length for each of those services. Best of all, this means the mail, UNIX login, etc. need not share the same login token.
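A sketch of that derivation with PBKDF2, where each downstream service gets its own label and a token length that fits its limit (the service names, lengths, and salt scheme are invented for illustration):

    import base64
    import hashlib

    def service_token(master_password, service, max_len, iterations=100_000):
        salt = ("per-service:" + service).encode()  # distinct salt per service
        raw = hashlib.pbkdf2_hmac("sha256", master_password.encode(), salt,
                                  iterations, dklen=32)
        return base64.b64encode(raw).decode()[:max_len]  # fit the service's limit

    vpn_token = service_token("correct horse battery staple", "cisco-vpn", max_len=16)
    mail_token = service_token("correct horse battery staple", "imap", max_len=32)

Compromising one service's token then doesn't hand over the master password or the other services' tokens.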
On the plus side, they're telling people about the limit. I visit so many websites that will happily take passwords of arbitrary length without complaint... until you try to log in and your password doesn't work because the password you entered was too long and it truncated it.
It's a pet peeve of mine when a site puts a max length on passwords (which is dumb in itself) and then doesn't enforce that max length on the password input field later. Nothing but a regular workout for your 'forgot my password' feature.
It bothers me less now that I use a good password generator/safe, but still bothers me nonetheless.
I have an auto loan with a company which truncates the username. It's bizarre because they'll happily let you key in the entire username when you go to log in, but it truncates when you first set your account up.
Why on earth would you ever need to truncate a username?
In addition to the frontend issue mod mentioned, it often happens accidentally, without any errors or warnings, when using a VARCHAR in a relational database, which has a maximum length. If the username field is VARCHAR(20), the application ignores database truncation warnings, and the developer didn't think to check the username length before storing it in the database, it'll truncate a 21-character username without you knowing. This comes down to the devs using sensible field lengths and handling edge cases.
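The guard is trivial if the application actually does it; a sketch, assuming the column is VARCHAR(20) as in the example above:

    MAX_USERNAME = 20  # must match the VARCHAR(20) column definition

    def validate_username(username):
        # Reject rather than silently truncate.
        if len(username) > MAX_USERNAME:
            raise ValueError("username longer than %d characters" % MAX_USERNAME)
        return username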
Well, you have to have some limit. Otherwise a user could register with a 1GB username. This might break all sorts of things that assume they can display or work with usernames.
I saw an example of that on a JavaScript-related site recently, where a guy's username was aaa...aaa several hundred characters long, causing a ludicrous horizontal scroll bar. You'd think it would be easy enough to say upfront during account creation that both usernames and passwords are limited to x characters.
> Phishing + guessing. I know someone who gets about 2-3 emails a week asking to enter their login info into some site in Brazil or the Czech Republic.
I think it probably has something to do with this.
That's pretty bad. I think those get filtered before they get to the inbox most of the time, but the phishing continues too. This one from 2010 is pretty similar:
Bcrypt is not the ONLY secure way to store passwords (contrary to what everyone is trying to tell you). See Thomas Pornin's answer on SO:
No, it does not mean that the password is stored as plaintext. Simply keep a flag for "UpdatedRecently?"; if the flag is false, then not only should the first half of the input correctly match the hash, but the first half of the input should also match the second half.
I dislike being forced to change password without notice, I need some time to come up with a secure, typeable one. Change on next login just results in me reusing an old password or adding a "2" to the current one.
Unless you are being specifically targeted (i.e. the attacker knows that you have to repeat the password twice), you mitigate the easiest possible attack: a user/password combination from a stolen database.
Although I suppose this was done to force the users to change their password.
Everyone is guessing whether they are storing in plaintext or not, but that isn't the real lesson to take from their mistake. They have publicly asserted what they are doing (which is great information for an attacker), and chose a bad way to push users to reset their passwords after a compromise. I would feel better if it had been an email sent directly to McGill faculty/staff. If you are building out a user management system, you need a way to disable accounts and force a password reset.
You never want to convey any information about the usernames, password, or state of the account _ever_. This is true for error messages during login, but can be applied to any messaging.
It's because they are still storing passwords in cleartext... If they were hashing passwords (which is the correct way to do it) there would be no limit.
That is simply not true. I used to think the same, but real-world experience showed me that a lot of websites hash the passwords and still set a limit on password length. You should read this: https://www.reddit.com/r/gfycat/comments/2m7ddd/how_does_gfy...
Looking at a (failed) login flow, it looks like they are using Oracle SSO
Markers:
* Cookie named site2pstoretoken
* HTTP header: Oracle-Application-Server-10g/10.1.2.3.0 Oracle-HTTP-Server
* Layouts are still done via <tables>
I'm not sure it improves security significantly, but the weak link is using passwords as security in an environment like a university.
Getting users to conform to good password practices is nearly impossible even when they are mature, paid employees with money and valuable IP on the line, at organizations with legal/regulatory security requirements. Imagine accomplishing that with thousands of college students. (I'm not sure there's a good, cost-effective solution, other than to provide more secure options to users who want them.)
Here at UBC all accounts (students, faculty, staff) must have their passwords updated every year. They force you to do it with 3 "skips" available (for if you really don't have time).
Am I the only one alarmed by the general inability of websites to protect sensitive information? There's hardly a day without a major service leaking passwords or personal details. If we don't get a LOT better at this, there will be some major reaction sooner or later, either legislative or in terms of public behaviour: the government establishing a system of licenses for the right to handle personal data, or regular costly audits. We can't continue at the current pace.
Is this effective at stopping attacks (given that it is public knowledge), or is it mostly a measure to annoy users into updating their passwords to something less cumbersome?
I think that the goal here is not to increase password strength, but to make typing your old, short password so annoying that you pick a different one (that complies with the current password strength rules). That is, this isn't aimed at attackers; it's aimed at users.
One possible answer: Because tenured faculty can call up the helpdesk and get policies reversed, because they're tenured and the helpdesk isn't.
Another possible answer: Many systems can't prompt for password changes, and will just continue to log you in (because, especially for remote-access systems, that's better than denying access and hoping you find another way to get logged in). Probably the lazier of the users also don't use very many of the computing facilities, and may just use a few legacy systems.
This is in no way more secure... there's a bijection: any password an attacker wants to try, they just double it, so instead of brute-forcing [aab, aac, aad] they try [aabaab, aacaac, aadaad]. The only reason this makes sense is to annoy users into changing their password.
Although highly suspect and troubling, this does not necessarily require that they have all users' original passwords stored in plain text. It would also be possible if they had originally used a hashing function that obeyed the following:
Hash(pw) + Hash(pw) := Hash(pw + pw)
(NB: Where '+' above is really just a stand-in for any pair of combining functions, not necessarily arithmetic addition or string concatenation.)
But, I agree with many others here that the likelihood of stored plain text passwords is very high.
Someone else pointed out that if you just had a flag on each user to indicate whether the password is doubled, you could just check the hash of the first half of the provided password when the user logs in.
>The need to change passwords arose in April, when the Heartbleed vulnerability was revealed. Heartbleed makes systems vulnerable to data theft since attackers can use it to gain access to systems and then proceed to access and steal information without leaving a trace.
>Even though our central IT systems are protected against Heartbleed, any accounts that have already been stolen still pose a security risk. Almost 20,000 members of the McGill community did change their McGill Password, but thousands more did not, and so additional actions have become necessary.
So, if the people who got the passwords read this post, then all they need to do is double the passwords they got with Heartbleed to gain access?
There are several ways this can be done without that.
Easiest is if they store the date of the last password change or otherwise know you haven't changed it. If it's old enough, double the plaintext before handing it to the hashing function.
>> Seemed like a good idea until it dawned on me that this means the passwords are stored as plaintext.
>There are several ways this can be done without that.
>Easiest is if they store the date of the last password change or otherwise know you haven't changed it. If it's old enough, double the plaintext before handing it to the hashing function.
Not quite. What you'd need to do is halve the user's entered password if it's older than the cutoff date. If the user enters "foobarfoobar" then you'd halve it to "foobar" before hashing and comparing it to what's stored.
If you only have the old hashed password stored, then you don't have the hash of the doubled password, nor can you infer it.
This whole approach is silly of course. They should just force everybody to reset their passwords.
The reason to inconvenience rather than force is that users in a rush will pick the worst passwords, even paid employees whose password is the only thing between the outside world and highly confidential material.
If the database record says the user hasn't changed their password, split the given password in half, check that both halves are equal to each other, and check that one half matches the hash of the password.
Check that it's a string of the right form (e.g. it repeats something exactly twice) and then derive the original password from that, before feeding it to your hashing function.