Of course the PIN can be brute forced. It feels like reporting "I can walk over the lawn fence". That PIN is probably here to prevent your kids from messing with your vault when you grab your coffee with your computer unlocked.
Protecting from an attacker with your laptop locked should be done at the OS level with FDE and secure boot. Protecting from a real attacker with access to your unlocked computer is a bit hopeless (as someone mentioned, they probably can install some key logger and steal the master password and everything else later).
It never hurts to be clear on the threat model (and they should probably go with option 1. suggested by the author), but I feel that in this case the behavior matches reasonable expectations and the author is arguing in somewhat bad faith.
For solution 2., if you want to check a PIN server-side without trivial access to the PIN from the server, you can do it à la Signal using secure enclaves: https://signal.org/blog/secure-value-recovery/
This is obvious to those that know anything about security, but it is not obvious to the average user. It is Bitwarden's job to keep the user safe within their platform and if they provide a pin option, the average user knows no better than to use it. If Bitwarden does not explain how insecure pins are, then the fault 100% lies with them. Blaming the user is rarely ever an effective or useful argument.
The user does not need to be aware of the threat model.
OP's point was that the PIN isn't protecting much at all because it doesn't really need to. The user isn't making a risky decision, because if the attacker gets as far as _being able to put the PIN in_, the whole thing is toast regardless of whether they guess the PIN or not.
Not really: Consider e.g. a stolen laptop (without full-disk encryption or a screen lock).
If Bitwarden could somehow implement the PIN attempt counter in secure hardware or on their server, they could achieve something more resistant against local offline brute force attacks.
A Yubikey could do the trick, theoretically (but unfortunately the FIDO API does not really lend itself to encryption, as it was designed only for authentication).
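A rough sketch of what a server-enforced attempt counter could look like (hypothetical names, not Bitwarden's actual API): the server holds the counter plus a share of the key and simply stops releasing the share after a few bad PINs, so an offline copy of the vault alone isn't enough:

    # Hypothetical sketch of a server-side PIN gate: the server stores a key
    # share and an attempt counter; the client can't brute-force offline
    # because the share never touches its disk.
    import hmac, hashlib, os, secrets

    class PinGate:
        MAX_ATTEMPTS = 5

        def __init__(self, pin: str):
            self.salt = os.urandom(16)
            # Store only a slow hash of the PIN, never the PIN itself.
            self.pin_hash = hashlib.pbkdf2_hmac("sha256", pin.encode(), self.salt, 600_000)
            self.key_share = secrets.token_bytes(32)   # combined client-side with local entropy
            self.attempts = 0

        def unlock(self, pin: str) -> bytes:
            if self.attempts >= self.MAX_ATTEMPTS:
                raise PermissionError("locked out, master password required")
            candidate = hashlib.pbkdf2_hmac("sha256", pin.encode(), self.salt, 600_000)
            if not hmac.compare_digest(candidate, self.pin_hash):
                self.attempts += 1
                raise ValueError("wrong PIN")
            self.attempts = 0
            return self.key_share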
If you don't even have a screen lock on your laptop, what business do you have complaining that bitwarden didn't protect your secrets?
And it's not like there is much reason for any extra effort either, because that user will for sure be logged in to the webmail that they use for mail-2fa so all logins can be password reset anyway.
Still, I think that software in general, and security software in particular, should follow the principle of least surprise.
In the case of PINs, this is, in my view, an implicit contract to rate-limit invalid PIN attempts somewhere, regardless of all other security measures.
Sorry if it came across as a statement directed at you as a person lxgr, I was using "you" in the generalised sense:
> We can use one, you or we when we are making generalisations and not referring to any one person in particular. When used like this, one, you and we can include the speaker or writer
I always wonder, with a keylogger on my device, I’m probably more f-ed using my master password all the time, right? Isn’t that a large threat? Larger than the one from op?
Yes, with a keylogger watching you you’re typically pretty f-ed. Though that’s also one of the reasons using a physical FIDO token as a second factor is a good idea, since the keylogger isn’t going to be able to steal your private key off the hardware token, unlike for TOTP.
Though that also begs the question, if I can get a keylogger onto your device, why wouldn’t I try to implant something slightly more capable?
> since the keylogger isn’t going to be able to steal your private key off the hardware token, unlike for TOTP
How? I mean, how can a keylogger get the secret from which TOTPs are being generated?
And why wouldn't some other malware be able to read whatever data the hardware token inputs? I'm a Yubikey user myself and would like to know in what ways it is more secure than TOTP, even in the scenario where my workstation gets compromised.
I assumed if someone can install something on my computer, I'm toast.
> How? I mean, how can a keylogger get the secret from which TOTPs are being generated?
Since it’s time based with a 30 second window, you don’t need to know the secret, you just need to be able to repeat the code as it is typed. It takes more effort because it has to be done in real time, but 30-ish seconds is pretty doable.
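For reference, the code is just a function of the shared secret and the current 30-second window (standard RFC 6238, sketched with the stdlib; SHA-1 and 6 digits are the usual defaults), so a fresh code can be replayed until the window rolls over:

    # RFC 6238 TOTP sketch: the code is HMAC(secret, floor(unix_time / 30))
    # truncated to 6 digits, so anyone who sees a fresh code can reuse it
    # until the window rolls over.
    import hmac, hashlib, struct, time, base64

    def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
        key = base64.b32decode(secret_b32, casefold=True)
        counter = int(time.time()) // period
        mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
        offset = mac[-1] & 0x0F
        code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
        return str(code).zfill(digits)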
> And why wouldn't some other malware be able to read whatever data the hardware token inputs? I'm a Yubikey user myself and would like to know in what ways it is more secure than TOTP, even in the scenario where my workstation gets compromised.
The way (most) hardware tokens work, including the Yubikey, is that the private key is generated on the key and never leaves it. When the FIDO challenge/response happens, you relay the server’s challenge to the Yubikey, it does the private key operation with the onboard chip, and sends back the response for you to relay back to the server. Done this way, your computer never needs to know the private key, but you can still prove you physically own it, which is what the server is trying to verify.
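The exchange roughly looks like this, with a software key standing in for the token (a real Yubikey also signs over client data and the origin, which I'm leaving out):

    # Rough shape of FIDO-style challenge/response. On a real Yubikey the
    # private key lives in the secure element, so the host only ever sees
    # challenge in, signature out.
    import os
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import ec

    # Registration: keypair generated "on the token", public key sent to the server.
    token_private_key = ec.generate_private_key(ec.SECP256R1())
    server_public_key = token_private_key.public_key()

    # Login: server sends a random challenge, token signs it, server verifies.
    challenge = os.urandom(32)
    signature = token_private_key.sign(challenge, ec.ECDSA(hashes.SHA256()))
    server_public_key.verify(signature, challenge, ec.ECDSA(hashes.SHA256()))  # raises if invalid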
That said, just because they can’t steal your Yubikey’s private key, doesn’t mean they can’t take the bearer token from your computer. In general if your device is compromised it’s game over anyway.
> When I press the button on the Yubikey, it pastes some gibberish - way more than 6 chars, but can't THAT token be re-used?
Just to be clear, that's not related to FIDO which I was originally talking about. That's one of the extra OTP features that most Yubikeys come with, but it's unrelated to the Yubikey's FIDO capability.
When it comes to 4 or 6 digit PINs, it's almost impossible to ensure that no PIN has been used before. At 8 digits, you might as well be using diceware anyway.
Nothing prevents users from using 0YYYY or 0DDMM/0MMDD.
Every time some site ridiculously insists I "use a more secure password", I sigh and add "A1!$" to the end of my 32-character alphanumeric random string.
FDE doesn't matter at all since a running system is going to have the volume mounted.
Secure boot also doesn't matter, because boot-time rootkits are a rarity; trojaned downloaded software (npm, homebrew, etc., which are nearly completely unprotected from malicious actors beyond the bare minimum) and the massive browser attack surface are what actually get used.
What would be nice is proper layered sandboxing at the OS level, always on, but obviously the kernel-userland ABI is not actually very secure in practice, and has been the source of recurring escapes. But at least it would be something.
Consider this: as it is, sophisticated users on, say, the MacOS platform, who download SW such as homebrew, youtube-dl, whatever, or do local development with npm and other package managers, are actually in a much worse place than unsophisticated users who run with "only from the app store" enabled.
It’s one thing to offer a PIN for quick unlocks, and quite another to allow that PIN to be the only key required to decrypt the files stored on the disk. A PIN should only secure the master key in memory; it should not even be possible to write passwords to disk with such weak encryption.
There’s no “of course” about it. Every place I can think of having used a PIN in the last decade has not been susceptible to brute-force attacks, due to the PIN being stored off-site (e.g. payment card) or in a TPM (e.g. Windows Hello), and a few incorrect attempts triggering blocking of the payment card or PIN or whatever.
If you think carefully about the description of this specific feature, and where and how it’s running, then yes, you’ll probably realise that the PIN will be brute-forceable. But people are probably used to the idea that PINs actually aren’t susceptible to this kind of attack.
That was my initial reaction as well, but it isn't necessarily true. If you read up on how Windows Hello uses a PIN, then it becomes clear that they can be pretty secure where: (1) a PIN is tied to the device; (2) a PIN is local to the device; and (3) a PIN is backed by hardware.
> Of course the PIN can be brute forced. It feels like reporting "I can walk over the lawn fence".
This is entirely non-obvious: there are several ways to implement a PIN unlock securely (see e.g. what you mentioned about Signal and other comments about e.g. Windows Hello). Bitwarden chose an insecure one and chose not to warn about its risk in the clients (unlike some other features, where you get a big modal warning when enabling them; see the end of this comment).
> Protecting from a real attacker with access to your unlocked computer is a bit hopeless (as someone mentioned, they probably can install some key logger and steal the master password and everything else later).
This is a different attack scenario. If I throw away my computer, or you steal it in its powered off state, a keylogger won't help you since I won't be entering my password again.
> Protecting from an attacker with your laptop locked should be done at the OS level with FDE and secure boot.
Definitely, FDE and secure boot would mitigate the attack (if your computer is off). However, it's still not enabled by default on most systems, and Bitwarden recognizes that, since they give you a big modal warning that your encryption key will be stored in plaintext if you set the lock option to "Never": https://imgur.com/a/jj9FveF
> This brute-force will very likely be successful, since PINs are usually very low-entropy. Now, granted, the key derivation function is PBKDF2 with 100000 iterations (+ HKDF), but that won't help with a 4 digit pin.
It would be better not to have that feature at all; that convenience feature goes a bit too far. But nice to see that's the only thing they found. I'm sure they looked for more severe issues but found none.
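For context, the construction the quote describes is roughly a PBKDF2 stretch followed by an HKDF expand, something like this sketch (parameters illustrative, not Bitwarden's exact scheme); no amount of stretching rescues a 10,000-value keyspace:

    # Illustrative sketch of the PIN -> key chain the quote describes
    # (PBKDF2 stretch, then HKDF expand). Parameters are made up and are
    # NOT Bitwarden's exact scheme.
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC
    from cryptography.hazmat.primitives.kdf.hkdf import HKDFExpand

    def pin_to_key(pin: str, salt: bytes) -> bytes:
        stretched = PBKDF2HMAC(
            algorithm=hashes.SHA256(), length=32, salt=salt, iterations=100_000
        ).derive(pin.encode())
        return HKDFExpand(
            algorithm=hashes.SHA256(), length=32, info=b"enc"
        ).derive(stretched)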
A PIN is, if not by definition then by practical user experience, a short, numeric secret. You really can't blame users for using one for a field that purports to ask for a PIN.
Except this thread isn't about blaming the users, but about assessing the potential for improvement, where this simple mistake leads to inaction.
So yeah, they should change the stupid name and be clearer that users should use letters as well, but it's still valuable to increase the Argon2 work factor parameters.
If you call something a PIN, salted hashing is completely pointless (and software still doing it tells me that the vendor hasn't grasped this fact).
A Personal Identification Number typically has 10k (or less frequently a million) possible values. Hashing buys you a few seconds of brute force resistance here at most, which is completely useless.
If you dial up the hashing complexity enough to make a dent, even legitimate logins will start having an unacceptable UX and battery life impact. You're just on the wrong side of the e function with PINs.
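Back-of-the-envelope: for a 10,000-candidate space, here's how slow a single guess would have to be to buy any meaningful resistance:

    # How slow would one PIN guess have to be for a 10,000-candidate
    # space to survive a given amount of brute force?
    KEYSPACE = 10_000          # 4-digit PIN
    for target_days in (1, 30, 365):
        seconds_per_guess = target_days * 86_400 / KEYSPACE
        print(f"{target_days:>3} days of resistance -> {seconds_per_guess:7.1f} s per guess")
    # prints roughly:
    #   1 days of resistance ->     8.6 s per guess
    #  30 days of resistance ->   259.2 s per guess
    # 365 days of resistance ->  3153.6 s per guess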
> be more clear that users should use letters as well
The entire point of the feature is that the PIN, whether letters or numbers, is shorter and/or easier to type than your passphrase. If you require users to use something of equivalent security to their passphrase, you might as well remove the feature.
I don't know why you repeat the same mistake with 10k; "typical" doesn't mean you ignore the atypical, that's still a net benefit.
Your last point is also misleading: no, you don't require the equivalent of the master password, and yes, it still makes sense in that case. "Shorter/easier" can still mean enough entropy for the smaller threat model of local compromise (with even more entropy required in the master password for the bigger threat of cloud compromise). It's when it drops down to the level of a literal 4-number PIN that it becomes useless against this threat and only useful for an even smaller one.
> if you want to check a pin server side without trivial access to the PIN from the server you can do it à la signal using secure enclaves https://signal.org/blog/secure-value-recovery/
You could do that, but you should definitely also design this in a way that does not fail catastrophically if the server-side enclave fails, which at least Intel's version seems to have a habit of doing every couple of months or years.
A nice way would be to use a modern PAKE that can recover a client-side secret securely after a successful authentication, such as OPAQUE: https://eprint.iacr.org/2018/163.pdf
Conceptually, you do a PAKE, and upon successful authentication deliver server-side stored entropy to the client (either through the PAKE, if it supports the feature natively such as OPAQUE, or explicitly over an encrypted channel secured under the PAKE-derived session key), which then combines it with client-side stored entropy to decrypt the database. That way, the server can rate-limit the client's PIN entry attempts, but does not have access to the vault itself even if it learns/brute forces the PIN.
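The final combination step could look something like this (hypothetical sketch; the PAKE and server-side rate limiting are assumed to have already happened):

    # Hypothetical sketch of the final step: mix the server-released secret
    # (only handed over after a successful, rate-limited PAKE) with a
    # client-stored secret to get the actual vault key.
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    def decrypt_vault(server_secret: bytes, client_secret: bytes,
                      nonce: bytes, ciphertext: bytes) -> bytes:
        vault_key = HKDF(
            algorithm=hashes.SHA256(), length=32, salt=None, info=b"vault-key"
        ).derive(server_secret + client_secret)
        return AESGCM(vault_key).decrypt(nonce, ciphertext, None)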
Facebook uses a combination of this and the HSM/enclave approach for encrypted WhatsApp backups (although without the client-side stored secret, since the use case is recovering from a lost device, not protecting data on a compromised device): https://engineering.fb.com/2021/09/10/security/whatsapp-e2ee...
I agree with you. Short PINs are for preventing well-intentioned people from accidentally using the wrong account (which I do consider quite valuable, for the record) - not for keeping bad actors out.
You are thinking in hypotheticals like many developers do.
Most infostealer malware just exfiltrates your data and disappears before being detected, so it can hit a lot of targets before common AV starts detecting it. People also accidentally disclose data, back it up on a USB drive and lose that drive, have their PC stolen, etc...
If you have KeePass 2 with a memory-hard Argon2 KDF and a password/passphrase, none of that is a concern.
Yes, the malware could also be a keylogger/RAT and you'd be screwed then. One of the most important security mindsets to have is "perfection is the enemy of good": specific security controls exist to address specific threats/risks, not to address an arbitrary and unbounded number of possible threats.
I agree with the mindset and that's why I think it's good the data is still encrypted even if, as the author mentioned, they might as well have left the data in plaintext.
Sure entering a passphrase each time is better for security. But if the user chooses to set up a PIN instead, I feel the current behavior is reasonable.
This is not the default behaviour and it has to be enabled per client/platform. The first unlock after a reboot still requires the master password as well.
There really needs to be another standard class of vulnerability besides "physical access to the device" along the lines of "access to a copy of the on disk data". There are so many paths to this and some never required physical access or even an accidental exposure on the part of the user, it could be a breach of a provider (ala LastPass).
The Bitwarden client does this. The issue is that nothing prevents the attacker from taking the encrypted database and working with it directly, thereby bypassing that client protection.
Bitwarden argues that the finding is out of scope because, from what I can gather, they claim that exploiting this requires access to the device. If that were the case, I'd agree with them. But having access to the Bitwarden database is not the same as having access to the device. There are plenty of vulnerabilities that give you limited read access. Simply selling your hard drive without erasing the data first would be a very common scenario.
That's why SSH keys or GPG keys are typically protected by a pass phrase. Not a PIN, not a password, a pass phrase. At least that's the wording that OpenSSH uses, and Bitwarden should do the same. Secure your database with a pass phrase, not a PIN, or else you might be vulnerable. That's how it should be communicated to the user. These details matter.
I really like the way 1Password and MacOS work together for security [0].
Even if my laptop is unlocked, each 1Password interaction needs my fingerprint. That unlocks a secret stored in the Secure Enclave, which I trust. (Security is hard and flaws are possible, but Apple has done a reasonably good job here from what I can tell.) I only have to mess around with typing a long string of nonsense in when I'm registering a new device, rather than every time I want to use my SSH or GPG key (or trusting an agent to hold it in memory, which... I'll do it, but I won't like it).
I wish I could rely on similar things from Linux, but the user experience just isn't there on the desktop. A previous employer issued laptops that had both a fingerprint reader and some kind of HSM, so the hardware was there, but I remember both of them missing drivers at the time, and even with drivers the userland software support would've been very hacky.
Given that, really the only thing that Bitwarden would have needed to do is to clearly label the PIN feature as being much less secure than the biometric option, especially when used in combination with the "do not ask for passphrase after browser restarts" option, which persists a version of the master encryption key to disk that is encrypted only with the PIN.
Interesting, how can fingerprints be used as a cryptographic unlocking method on Linux? Does this involve the TPM somehow, or does the security model assume a non-compromised userspace and/or kernel?
When you enable the PIN, you deliberately weaken your security for convenience. However, when the database unloads, your database is still protected by your pass phrase.
> However, when the database unloads, your database is still protected by your pass phrase.
This is not the case if you disable the "lock with master password on restart" option. In that case only your PIN is guarding access to the vault data.
Using a PIN shouldn't have to mean weakening your security, since there are several secure ways to implement a PIN. (See other comments about Windows Hello, for example.)
"Let's now assume that the user enables the PIN unlock and configures Bitwarden so that it doesn't require the master password on restart."
If the user has setup Bitwarden so the master password is not required, then the user gets what they asked for, namely a password database secured by a 4 digit PIN. Not clear to me why this is a problem Bitwarden needs to fix.
You're assuming the average user understands security, when that is definitely not the case. The job of Bitwarden is to help all users (even ones ignorant of security) to secure their data. If Bitwarden has no warning explaining that PINs are insecure, then the fault 100% lies with Bitwarden.
Some things fall into the "obvious" category, users should just know them, and it's not 100% on Bitwarden to make the world a safe place.
Is it a good idea to leave your password on a piece of paper under your keyboard? No, and you shouldn't need Bitwarden to tell you that.
Is it a good idea to use your name and date of birth as a password? No, and this should be obvious, not something Bitwarden needs to educate you about.
Is it safe to rely on a 4 digit PIN? Obviously not, when there are only 10000 possible combinations. You shouldn't need Bitwarden to tell you that though.
Are there people out there who do need this education? Of course. But that's a job for someone with infinite patience and understanding. Not some words on a web page from a supplier.
Case in point, my step dad belonged to a "computers for elders" group and one day he learned about antivirus software. Next time I watched him, he was googling for anti virus software and downloading any he could find, from anywhere on the internet. He ended up with 6 different AV packages, some very dubious looking indeed. I tried to explain the dangers but he couldn't understand how antivirus could actually harm his computer. And he was a practicing doctor of medicine before retirement. It really highlighted the challenges of protecting some people in the brave new digital world.
> Is it safe to rely on a 4 digit PIN? Obviously not, when there are only 10000 possible combinations. You shouldn't need Bitwarden to tell you that though.
Most people really don’t know that. It is not obvious to a normal user.
I realize math education in the US sucks, but are you really suggesting most people can't figure out that 0 to 9999 is all the possibilities you get from 4 digits?
I'm confident your average person would understand that a PIN is insecure if it was explained to them.
But think about other things in life that use a PIN -- debit cards, customer support shortcuts, etc. These are things that can't or typically won't be brute forced and are deemed as "secure enough" in our world.
Your average person has no idea how a 2FA token is generated, but they know it's just a few numbers that they have to enter on various websites and apps, and those numbers resemble a PIN. Yet another reinforcement that just a few numbers keeps things secure.
If you walk a user through software setup, and at some point they need to provide a complex master password, they would never automatically assume that being presented with an option to use a PIN would remove the security provided by a complex master password.
Only if they were to think it through, or have someone who thinks analytically, would they understand that in this scenario, given that it's Internet-accessible software, a PIN could be brute forced in no time unlike their debit card or any other PIN they may need to use in the course of their day to day life.
One could get into a long debate about whether "most" is literally true or not, but I think most of us should be able to agree that at least a significant proportion of people - enough to matter - either won't or can't think of this without some prompting.
>Is it safe to rely on a 4 digit PIN? Obviously not, when there are only 10000 possible combinations. You shouldn't need Bitwarden to tell you that though.
Normal users see that Bitwarden blocks you after 5 guesses, therefore an attacker will never get past all 10000 guesses. They won't realize that this block is easily evadable.
It's a bit of a stretch to label Bitwarden users as average user. Average users don't know about password managers beyond whatever their browser supports.
With a secure enclave of some kind, there could conceivably be a three-attempt limit before the temporary key associated with the PIN is deleted and the full pass phrase is required. In such a setup a PIN might make sense.
As it is, I'm not sure the PIN makes sense even if there's user demand? Then again, I do use biometric unlock, and that's not really great either.
At least the Bitwarden installs are behind FDE (macOS) and possibly (?) file-based encryption (Android 13+).
> the user gets what they asked for, namely a password database secured by a 4 digit PIN.
A 4 digit PIN would be safe if Bitwarden securely enforced an attempt limit on the PIN. There are several options to implement this securely (see e.g. other comments about Windows Hello or use of a TPM).
Why did you jump to a 4 digit PIN instead of a 10 letter word? (which is still faster than the full 20 letter master password with many special symbols)
Is it because of the name PIN? So there is your simple answer as to what problem Bitwarden needs to fix.
Your mirroring fails precisely because it's a password, and it's pretty common to set minimum requirements, so you can't have a 4-digit PIN.
Similarly with your first point: I don't think the master password has many special symbols, I'm just asking a question that illustrates the issue with the original faulty assumption.
They could make the pin process intentionally slow… maybe with some number of iterations… and as computers get faster they can just update the number of iterations required…
If the PIN is local, only a secure element type of chip could meaningfully enforce this restriction. Otherwise, whatever memory or disk stores the secret encrypted only by the 4-digit PIN could still be brute forced. Just disabling entering a PIN in the UI would not be enough for security.
An Nvidia RTX 4090 can crack a 4-digit PIN using PBKDF2 with 200k iterations in less than a quarter of a second. Argon2 is definitely the better option, but even at 1 hash per second, that's less than 3 hours.
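You can also measure it yourself: timing a few PBKDF2 guesses with the stdlib and extrapolating to the full 10,000-PIN space (assuming the 100k iterations mentioned in the article) lands in the minutes even on a laptop CPU:

    # Measure roughly how long an offline search of all 10,000 four-digit
    # PINs would take at PBKDF2-SHA256 with 100k iterations on this machine.
    import hashlib, time

    salt, iterations, sample = b"\x00" * 16, 100_000, 50
    start = time.perf_counter()
    for i in range(sample):
        hashlib.pbkdf2_hmac("sha256", f"{i:04d}".encode(), salt, iterations)
    per_guess = (time.perf_counter() - start) / sample
    print(f"~{per_guess:.3f} s per guess, ~{per_guess * 10_000 / 60:.1f} min for the whole PIN space")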
> [...] As a comparison baseline, a 2.4 GHz Core2 CPU can perform about 2.3 millions of elementary SHA-256 computations per second (with a single core), so this would imply, on that CPU, about 20000 rounds to achieve the "8 milliseconds" goal.
So you'd need something that takes at least as long as entering your full password, at which point you could basically just enter the full password (from a UX perspective). The PIN is here to make it faster, and it will always be security vs. ease-of-use.
It already is intentionally "slow". However, for a 4 digit PIN there are only 10 thousand combinations. It is not practical for it to be so slow that 10000x that is an infeasible amount of time. Not only would the user have to wait way too long on each entry, the attacker could just use faster hardware.
Or multiple machines. 3.1 seconds per attempt already seems slow as a response time to unlock a db, and 10000 attempts at that rate is only about 31k seconds, i.e. under 9 hours on a single machine. Split it between 10 machines by first digit and it's down to under an hour; split it between 100 machines by the first two digits and it's a matter of minutes.
A four digit PIN is poor security. What Bitwarden could do is remove that feature.
Split it to 5000 machines, which will be "quite easy to get" for a computation that takes a single line in most languages. Then we're talking about 6 seconds and 50% success on first try.
That's a bit like putting a website password check in the client-side JavaScript. Attacker removes lockout, continues brute-forcing.
There really isn't a solution if the entropy is low and the enforcement mechanisms are in the hands of the attacker. Even a TPM or secure element is just a financial obstacle to a sufficiently motivated attacker.
The author mentions this finding was marked as out-of-scope when they reported it to Bitwarden. A couple of categories that are considered out-of-scope are listed, namely: attacks requiring physical access to a user's device, and "other side of airtight hatchway"[0] type issues.
The latter seems reasonable, if the assumption is that the device is fully compromised, and ongoing surreptitious monitoring of user activity by an attacker is occurring.
However, users probably have the reasonable expectation that if their laptop is stolen, their device-local vault data (supposedly encrypted on disk) is not compromised as a result. Bitwarden should either disallow weak PINs/PIN access altogether, or they should caveat more clearly the weaknesses of PIN-only access to device-local vault data.
Exactly. So many people in this comments section are saying 'well obviously a pin can be cracked' but the point is, the average user does not know this. Once they give their information to Bitwarden, they expect it to be safe. They shouldn't have to understand the nuances of security in order to keep their data safe. If the pin can be cracked, Bitwarden should not offer it as an option or at least explain to users how vulnerable they will be before they enable it.
In the case of Windows Hello, a PIN is very different from a password (such as your live.com password). PINs are encrypted per-device, and are never transmitted from the device. They are resilient against rainbow table brute-forcing, and they generate asymmetric cryptographic key-pairs by using the device TPM.
So forget what you know about ATM PINs; this is a markedly different concept.
TPMs have weaknesses, so this probably isn't a 100% guarantee depending on the attacker and the exact hardware, but it's pretty reliable (and very reliable if your attacker isn't particularly well-resourced).
It can. TPMs have a "dictionary attack" (DA) protection feature.
You can't set the number of bad attempts that trips the lockout, or how long to lock out for, differently for different objects -- those are global configuration parameters. But you can configure which objects / policies require DA protection and which ones don't.
Indeed! They should have explained this much better when it was introduced.
But of course, Windows PIN was only needed when they made a local login a login to your Microsoft account, so your local password was suddenly transmitted to the internet.
Indeed, which is why Bitwarden should disallow pin-only access for offline vault data altogether. Admittedly, I'm valuing a safe interface for users much more highly than one that is convenient or ergonomic.
That would go against the nature of such software. Let's treat users as adults. There should be warnings. But this is a feature. Users shouldn't be able to eg. select weak crypto algos, there is no additional functionality in that. But setting whatever pin is a convenience, and users should be able to decide what threat vectors they accept.
I web searched it and found a dedicated wikipedia page https://en.wikipedia.org/wiki/Interdiction but I still can't figure out what TPM interdiction is supposed to mean
Anyway if a TPM was trivially bypassable then there would be no point to having them so I'm doubtful of whatever this off-hand comment is supposed to mean
TPMs are not 'trivially' bypassable. There's attacks on how they're used, naturally, but those wouldn't apply in this case. In this case the main issue is that once you've unlocked your storage keys you get to use them, and since storage keys need to be software keys to perform acceptably, you could even steal them. But if your device is off, a TPM would be more than adequate to protect local storage.
I think they are talking about the definition under the Espionage section, i.e. a hardware supply chain attack:
> The term interdiction is also used by the NSA when an electronics shipment is secretly intercepted by an intelligence agency (domestic or foreign) for the purpose of implanting bugs before they reach their destination.
You can encrypt sessions to the TPM. To do that you need to securely know a public key for the TPM.
The protocol spoken to a TPM is like a micro-TLS. You get to encrypt, or not. You get to authenticate the TPM (like a server), or not. You get to do ephemeral-static key exchange (unlike TLS 1.3, which wants ephemeral-ephemeral key exchange). And you get to do PSK (password), and you get to do it in ways that are not subject to off-line dictionary attacks by eavesdroppers.
But you don't have to do encryption or authentication of the TPM, and the easiest thing to do is not to do either, which is what much software does. There's been this assumption that if it's on the motherboard, you can't mount active -or even just passive- attacks, but that is very much not true.
I really did not know what to read into the "TPM interdiction" comment in the context of bitwarden, but I've left comments elsewhere in this sub-thread.
You can generally sniff the LPC bus to the TPM. Even with TPM 2.0+, which allows optional encryption on the transport layer, Windows doesn't bother using it.
But I was referring more to a malicious software component in the data flow between the user interface and the TPM, even before the TPM's protocol stack is in the loop.
Yes, quite, which is why it's important to use encrypted sessions to the TPM.
> Even with TPM2.0+ that allows optional encryption on the transport layer, Windows dosn't bother using it.
They really really should.
> But I was referring more to a malicious software component in the data flow between the user interface and the TPM, even before the TPM's protocol stack is in the loop.
For something like bitlocker or bitwarden you really need the unlock step to happen so early in boot (or wake) that the only vulnerabilities present are those in the blessed firmwares and kernel that you're running. Once you have the OS fully booted the possibility of executing malicious code that intercepts the UI to the TPM is just too great indeed.
There was a comment on another Hacker News thread the other day that a difference between a secure element and a TPM is that the PIN code is entered on the keyboard and passes through memory, while with a secure enclave, at least the biometric sensor is connected directly to the enclave instead. Maybe that's what was meant here.
If you have a key object that is not extractable and can only be used and it is password protected, then you can't bypass the TPM. Once the key is unlocked (if you have access to the relevant session, which, you would) you can use it, which is bad enough if the rest of the system is compromised. There's also a concept of restricted keys that can only be used for things like quote signing (for attestation), or credential activation, which means the user doesn't get to specify exactly what to sign or decrypt.
If you couple all of that with running golden/blessed firmwares and OS, and you do secure boot, then you can be pretty certain (early in boot time anyways) that you're not running firmware/software you didn't want to, assuming those are not themselves compromised.
Now, for local storage keys you really need those to be in software, as a TPM can't perform well enough (not even an fTPM), and even if it could, an attacker could just decrypt local storage without having access to the raw keys as long as they can compromise your OS.
So, in a way you're right. But a TPM would still give you something that software-only solutions wouldn't: you get to refuse to enter the password and then the attacker has no choice but to apply rubber hose attacks or mount more expensive attacks on the TPM itself.
If you’re using full-disk encryption (and you should be), this is less relevant, since the FDE is protecting you in case of theft. If you aren’t, the attacker can do anything, including copying the Bitwarden file and using a GPU farm to crack the master password.
Lots of people, perhaps the majority, still use master passwords that don't have a ton of entropy. For example, the bad guys that stole the LastPass vaults have definitely cracked a lot of the vaults that were protected with weaker master passwords.
1Password's approach is definitely the right one here, where the master key is basically a combination of the user's master password and a random 128 bit (I think it's 128) value, which does make cracking impossible. The user has to print out this value so that if they need to sign in on a new device that they can enter the value, so it adds a little friction, but 1P has the right idea that humans just can't be relied upon to generate and memorize high entropy strings, in general.
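Conceptually it's something like this (just a sketch, not 1Password's actual 2SKD derivation; parameters are made up):

    # Rough idea of the "password + random Secret Key" construction: even a
    # weak password can't be cracked from a stolen vault without the random
    # 128-bit key. NOT 1Password's actual derivation; parameters illustrative.
    import secrets
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF

    secret_key = secrets.token_bytes(16)   # generated once, printed on the "emergency kit"

    def unlock_key(master_password: str, salt: bytes) -> bytes:
        stretched = PBKDF2HMAC(
            algorithm=hashes.SHA256(), length=32, salt=salt, iterations=650_000
        ).derive(master_password.encode())
        return HKDF(
            algorithm=hashes.SHA256(), length=32, salt=secret_key, info=b"unlock"
        ).derive(stretched)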
I’d bet (though in all fairness, only a low amount ;)) the intersection between a user that has both a weak master password and attackers willing to spend a ton to rent a GPU farm is pretty low, though.
My guess is that most people who have high value passwords also have weak passwords. CEOs and CFOs can probably authorize huge financial transactions with little oversight and tend to be security illiterate.
> However, users probably have the reasonable expectation that if their laptop is stolen, their device-local vault data(supposedly encrypted on disk) is not compromised as a result.
If you’re using a four-number pin to encrypt your data with no additional “padding” around that PIN, that is not a reasonable expectation.
However, I also don’t think that it’s reasonable that Bitwarden allows weak passphrases to begin with.
> users probably have the reasonable expectation that if their laptop is stolen
Yes, there is a significant difference between compromising a device with physical access and stealing the device. Disk encryption for example is very effective against the latter, but useless against the former. Having devices stolen is also far more common than targeted physical attacks.
It’s practically game over if an attacker has access to your laptop. They can for example install a keylogger and capture your master password for any password manager.
Different threat model. If someone steals your laptop, your hardware is seized or your employer wants their backdoored machine (but not backdoored in this specific way) back, you probably do not want to leak your password manager contents either.
I use the practice of creating a new employee specific password manager account instead. Once I have an employee email address, I use that to register on either Mozilla's password manager, or BitWarden. It works out nicely, isolates everything, and should I ever leave and someone needs an account I once had, I can safely pass along an old password with ease.
To clarify for those that jumped straight to the comments: the threat model this article is talking about is extracting the encrypted database stored locally on your machine to brute-force the PIN.
Personally, I'd agree with Bitwarden on this that it's an attack that requires physical access to a user's device, or worse, remote admin privileges.
> Using a PIN can weaken the level of encryption that protects your application's local vault database. If you are worried about attack vectors that involve your device's local data being compromised, you may want to reconsider the convenience of using a PIN.
It's a choice on the user to weaken the encryption. I don't use Bitwarden, but if they communicate that properly to the user, it's a valid compromise for convenience-versus-security.
This doesn't answer the question. Why is there a choice to encrypt something when it's completely unnecessary (according to their threat model)? No point in building unnecessary complexity into software, especially software meant for security.
Using the PIN is outdated anyway when it comes to convenience; all reasonable user OSes now support biometrics, and those offer better risk mitigation in comparison.
It is something they should remove entirely in an upcoming version after giving users enough warning.
Personally, I advocate using Bitwarden for commonly used logins that would not be catastrophic if compromised (perhaps in some cases because 2FA provides another factor), and keeping more important data in KeePass secured with a FIDO2 device in challenge-response mode, a passphrase, and possibly a higher key-stretching work factor to further blunt any brute-force attempt.
That's an absurd amount of effort to bypass fingerprint biometrics, nice.
However, it's important to note here that biometrics isn't just about fingerprints and every OS handles their available biometrics options differently. For example, I would recommend face authentication on apple devices, however I would avoid using face for windows, and instead recommend a windows hello PIN (yes, it's handled differently than the PIN in the above article).
Ultimately, you're just trying to create a balance between the layers of protection and reasonable attacks. There's only so much you can protect against, nobody can withstand someone who is cloning fingerprints, stealing devices, has access to your separate device 2FA, etc. without severely affecting their lifestyle.
I'm pretty sure you can register the same finger over and over again for new "keys", assuming you go through the tedious process of disabling, restarting, etc. - whatever each underlying OS demands. It's an additional layer that Bitwarden (and other apps) do not get direct access to.
So there's no excuse if you value convenience: PINs are not good. Short, easy-to-remember-and-enter PINs are also the reason Apple is under fire over how easily you can bypass biometrics and just use the 6-digit PIN.
Yup unless encrypted and password locked. Never leave a machine unlocked.
There are/were some highly-sophisticated attacks using peripheral bus exploits (firewire, thunderbolt, etc.), but that's a very unlikely threat to most people.
The silly thing is that Windows already has Windows Hello and its accompanying APIs which can be used to guard something like it with anti-hammering protections. Ditto for macOS and the Secure Enclave. I know it's not 100% of its market but using those two features could drastically improve security for the vast majority of people who pay no mind to things deep down in the weeds such as this.
Not every device has the necessary hardware, though; most desktops don't have it, so they would need to rely on external hardware such as USB keys. Furthermore, the demonstration video is clearly running on some kind of Linux/BSD system, where support for trust hardware is distinctly lacking.
I don't know why you would bother with a PIN on your password manager. My guess is that it's a feature designed for mobile devices, where access to the underlying key store is near impossible so brute-forcing is much less of a risk. Biometrics are usually available there as well, but if you don't trust them with your most secure passwords (you probably shouldn't) or if you want a backup, a PIN would be an excellent defence mechanism for when you've left your phone unlocked and a stranger is trying to steal your passwords.
Also, dongle support is pretty good for things like Java PIV cards and yubikeys. I've successfully used a Java chip card with website authentication in Firefox, and with the VMWare view client. This has also worked for at least a decade or more - my main issue was with the process in Firefox being a little convoluted and VMWare using out-of-date libraries and having a set of installer instructions that actually require explicit symlinking of a system library. But that's not really Linux's fault.
Fingerprints work great in modern Linux distros (I use them for sudo and sometimes unlocking my display), but the fingerprint hardware and the TPM don't seem to talk to each other. Windows Hello (and I presume TouchID) is set up to handle authentication and authorization together in one well-secured kernel blob with all kinds of TPM trickery to ensure security.
The Linux version of this process, at least as far as I could find so far, consists of identifying and authorizing the user alright, but the TPM's secret management seems to be handled by an entirely different system.
This means that brute-forcing or other attempts at access don't need to go through the biometric system on Linux whereas Windows Hello is more tightly protected against malware like that.
Dongles do work great! With WebAuthn/FIDO(2) we can hopefully soon start to let go of password managers completely. Passwords are useful but passwordless authentication is just better for most single factor authentication mechanisms in my opinion.
I had no particular desire to use the TPM thus far, but you're right, searching briefly on this, it seems the tcscd daemon written by IBM and provided by the "trousers" project does not have any PAM integration. That said, presumably you could plug the two together (PAM and tcscd) with some random script, right? Sure, that doesn't avoid the brute force scenario, but if it's just to store some ridiculously long random key so you don't have to type it in, it's not going to get brute forced anyway. (Although in that case, why bother with the TPM?)
As for eliminating passwords, while I love the idea of hardware dongles, I'm always going to want to have a password on it.. I assume you mean eliminating having a ton of passwords as opposed to one good strong password on the dongle. But then, that's the same situation I'm in with the password managers anyway. It's not like I actually type my password into most websites anymore...
I think it could work, but I wouldn't want to protect my banking passwords and credit card details behind some random script.
What I mean is eliminating passwords on most services entirely. For everything but the really important stuff (banks, email, business accounts, that kind of stuff), I reckon my devices are protected enough that if someone can gain enough access to my unlocked devices, 2FA wouldn't prevent any threats anyway. Passwords are easy to brute force, but device-bound keys aren't.
Using the security chips inside my devices for authentication as a single factor is more than enough for most of my purposes. Today, most websites offer FIDO2/WebAuthn/U2F as a second factor, but I'd rather see them as a first factor with an optional password as a second factor.
My password manager protects me against nothing more than brute forcing and password reuse. Switching to TPM-first moves the protected bastion from a piece of software running in userland to either a kernel level-protected component or a dedicated piece of hardware. There's only so much you can do as a desktop application to protect your users' keys, after all.
I wouldn't want to force anyone into this system, but with the hype FIDO2 was released with, I imagine browsers are going to push more for using FIDO2 as a primary factor where available.
Yeah, I understand... but follow me here. If I'm using a 50 character randomly generated password on a website using my typical ({a..k} {m..z} {A..H} {J..N} {P..Z} {2..9}) token generation, then they are going to be brute forcing 6×10⁸⁷ combinations right? That's not happening. So. Randomly generated passwords for sites, managed by a password manager, are much like device bound keys, but with the added advantage that I can still type it into something that doesn't support the device if I really need to... I still like the idea of dongles especially to enhance the master password. I just don't see what it wins me over a random site password.
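For reference, that alphabet works out to 57 symbols, so 50 characters really is about 57^50 ≈ 6×10⁸⁷ possibilities; a generator along those lines with the stdlib (just a sketch):

    # Generator along those lines, using the stdlib CSPRNG. The alphabet
    # drops the look-alikes (l, I, O, 0, 1), leaving 57 symbols: 57^50 ~ 6e87.
    import string, secrets

    ALPHABET = "".join(c for c in string.ascii_letters + string.digits
                       if c not in "lIO01")
    assert len(ALPHABET) == 57

    def gen_password(length: int = 50) -> str:
        return "".join(secrets.choice(ALPHABET) for _ in range(length))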
And, I'm absolutely going to have a password on my dongle or laptop in case of loss. I don't really trust biometrics in that scenario either, especially fingerprints that would be all over the laptop. Biometrics are just a convenience that I recognise offers some modicum of security. That's why hooking it up to the TPM seems to not add much.
... I do get hardening where the passwords are stored though... although, if the disk is encrypted using the TPM (which Linux definitely supports), I guess the main attack you'd be concerned about would be the OS being compromised while running, but aren't we just back to keyloggers then?
> I don't know why you would bother with a PIN on your password manager. My guess is that it's a feature designed for mobile devices.
Note that on desktop Bitwarden allows PINs to be alphanumeric, and any length. I use a PIN because my master password is more than 20 characters and I don't want to type it every time I restart my browser. My PIN is a decently strong password in its own right, but shorter than my actual master password.
>Not every device has the necessary hardware, though; most desktops don't have it, so they would need to rely on external hardware such as USB keys.
How?!? There's been an fTPM built into CPUs since Haswell on the Intel side, and on the AMD side since before Ryzen. If you buy OEM, it's enabled automatically on machines sold after July 28, 2016. If you DIY, you literally have to flick one switch in the BIOS if it's not enabled automatically.
I know it has been, but Windows Hello doesn't seem to be available for my 7700k after explicitly enabling the fTPM. I think it requires TPM 2.0 support, which hasn't been supported for all that long.
Windows Hello doesn't require 2.0, it works on 1.2.
There is something interesting regarding the 7700k though on windows (especially 11), users have been claiming windows reports it as supporting TPM 2.0 but they still can't upgrade to windows 11 due to failing requirements. So I wonder if something else is happening there that also seems to be affecting windows hello for you, because on paper at least for windows 10 you should be able to use Hello no problem.
My CPU doesn't support TPM 2.0 with fTPM (and the motherboard manufacturer stopped selling TPMs for my motherboard years ago) so I'm sticking with Windows 10. It told me I could upgrade at some point but that was a false positive. Maybe my install is just broken in some way.
However, I do know that I had to manually enable the fTPM functionality on my motherboard and I highly doubt the average consumer is going to enable such features in their BIOS. I don't know when manufacturers started enabling fTPM support by default, but it's definitely not enabled by default in most 7th gen Intel boards and I doubt 8th gen changed much in that sense.
I'm not sure if fTPMs have endorsement key certificates. I should know, but mostly I work with dTPMs and vTPMs. Not that an fTPM not having an EK certificate is a big deal -- if you bootstrap a public key for it early enough in OS installation, you can just trust the fTPM.
Bummer that it's not usable in the browser extension though. I used to have the desktop app but it ended up being mostly useless. 99% of my passwords go into the browser where I want autofill, and for the rest I almost always have a browser open anyways, so I can just as easily copy/paste from the extension as the app.
You can use biometrics in the browser app, but you have to run the application and unlock that with biometrics after enabling browser integration in the settings. You also can't use the Windows Store version of the Bitwarden app.
Maybe unrelated to the original article, but I tried Windows Hello once. Upon reboot, my PC couldn't connect to the internet for some reason, so I wasn't able to login. I got locked out of my own PC because it couldn't connect to Microsoft's servers to verify my password, and there's absolutely no workaround to this problem. It won't use a locally cached password to login, it verifies your password with Microsoft's servers. Every. Single. Time. There's no way to use another device either. I recall, back in the day, you could activate Windows XP without internet by calling a number which would give you a long code to type in and would activate Windows. But nothing like that exists for Windows Hello. No internet? Tough luck.
In the end, I had to reinstall Windows and I'm never touching Windows Hello again.
I've been using Windows Hello and Microsoft accounts for login on devices for a few years. It's worked offline for me literally thousands of times across several devices.
I've never had the problem the GP had, but I have had Windows Hello "forget" biometrics were even configured - it's happened at least 3 times now, I guess after a Windows Update. Each time I go to login with a fingerprint, but it gives an error message and insists on the password instead, and then I have to setup Windows Hello from scratch.
I am using my Microsoft account to log in to my Windows 11 systems, and they do seem to cache credentials for offline access, because I am still able to log in without internet access.
Weirdly, Chrome seems to prompt for the Windows Hello PIN to view passwords but not to autofill them. Is it using the TPM to store them securely in any way?
This is wrong. The Bitwarden client very clearly warns about storing your encryption key locally via a mandatory popup window, as seen here: https://i.imgur.com/BzXJmos.png
That's not what it says though. How would you phrase it? I don't think they do a great job but this is pretty hard to explain in two sentences if you're targeting a non-technical person.
A rule of thumb: nearly everything called a PIN can be reasonably brute-forced as long as it doesn't get locked after a few attempts (and only unlocked through another factor, e.g. a PUK).
If you mistype the PIN too often, the thing is locked until the PUK is entered, often followed by resetting the PIN to a new, potentially different value.
It's a tricky problem, because on devices without biometric authentication I really don't want to type in my long master password every time!
I think I'd appreciate an option such that:
1. If I'm online, I can unlock with a pin. Some critical piece of information would be kept server-side, and the server would limit unlock attempts.
2. If I'm offline and need access to my vault—which happens but not too often—I need to use my full master password.
I think this should be doable? Could you retain enough end-to-end encryption such that if Bitwarden itself is hacked, my vault couldn't be decrypted via only my pin?
I would lose that, unless I kept it permanently in my laptop's USB port, in which case it would use up a USB port.
I think the system I suggested above would work a lot better. What I have now isn't so bad as my passcode is moderately strong, it just could be better.
The problem is these password managers are lucrative targets, especially being able to gain access to a person's financial accounts. Simply disregarding the issue and categorizing it as "Attacks requiring physical access to a user's device" isn't good enough. Yes, there's only so much Bitwarden can do from the software side of things, without hardware support to back it up. But Bitwarden should still do what it can to mitigate such attacks, such as significantly increasing the number of PBKDF2 iterations for PINs (at least stored on disk; a key could be cached in RAM with fewer iterations because RAM is far less likely to be compromised than files on disk), and discouraging (or even preventing) users from using short PINs that could be quickly brute forced.
BW is/has switched to Argon2 over PBKDF2, fwiw. Although Argon2 trades the iterations field for an allocated amount of memory, so time will tell if there isn’t a “BW accounts with 16MB of argon2 allocation are no longer considered secure”.
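For reference, the knobs look something like this with argon2-cffi (the numbers here are just illustrative, not Bitwarden's defaults):

    # What the Argon2id knobs look like with argon2-cffi; these particular
    # numbers are illustrative only, not Bitwarden's defaults.
    from argon2.low_level import hash_secret_raw, Type

    def derive_key(master_password: str, salt: bytes) -> bytes:
        return hash_secret_raw(
            secret=master_password.encode(),
            salt=salt,
            time_cost=3,            # iterations
            memory_cost=64 * 1024,  # KiB -> 64 MiB
            parallelism=4,
            hash_len=32,
            type=Type.ID,
        )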
Yes, PINs can be brute-forced when they are stored locally, on device. That should be pretty obvious to those of us who know anything about security. However, the average user doesn't know about security. They shouldn't be expected to understand the nuances of security. So many people in this comments section are saying 'well obviously a pin can be cracked' but it's not obvious to the average user! Stop blaming the user here when Bitwarden should only offer features which are secure, or at the very least provide a warning when a feature is insecure.
Local pins can’t be brute-forced on the majority of my machines. The exception is a decade+ old intel desktop, and, even then, I think it has a wonky tpm slot (or I could buy a yubikey. They support locking pins after too many retries, right?)
>Bitwarden does not warn about this risk…… Bitwarden takes little effort in communicating the risks of choosing a short low-entropy PIN. Currently there is very little information to be found about the PIN in Bitwarden documentation
>Warning: Using a PIN can weaken the level of encryption that protects your application's local vault database. If you are worried about attack vectors that involve your device's local data being compromised, you may want to reconsider the convenience of using a PIN.
This post reads like "Hey guess what, if you store your passwords in a file called passwords.txt on your desktop and install a virus, people can steal your passwords.".
>If accessing device-local data is outside of the threat model, why are we encrypting these data at all? We might as well store them in plain text.
The PIN is still potentially useful in that it prevents anyone with access to the device from getting access to the secret information without having to perform an overt act. The difference between leaving a piece of paper with the passwords laying about and locking it in a drawer.
The article and comments don’t really answer the most important question here: Most modern hardware supports access to secure key storage via biometrics or pins that lock after a few attempts.
Is the issue that bitwarden fails to use these properly (e.g., by storing a short pin in the touch id enclave), or that the person reporting the bug is using a machine without such an enclave, and enabled pins anyway?
> Is the issue that bitwarden fails to use these properly
It's not so easy. Bitwarden runs in a browser on PCs. As far as I know, browsers don't provide a JavaScript API to the TPM. They probably supply a JavaScript API to their password store, which may even be backed by a TPM (Firefox on Windows is, I think), but they also allow you to browse their passwords while the browser is running, so I wouldn't be keen on that.
Bitwarden also runs on Android. I haven't seen an Android TPM API. Instead Android supplies a "Keystore". It stores secrets that can be unlocked using phone managed secrets - like a PIN, fingerprint and so on. Yes, they have limited retries - but they come with the limitation that if you can unlock the phone, you can unlock Bitwarden. Not ideal - in fact pretty much the same situation as a browser, now I think about it.
So I suspect that while it's true "most modern hardware supports secure storage", the issue is Bitwarden doesn't run directly on the hardware. Instead it runs on platforms like Android or a browser that don't expose the hardware's secure storage directly.
Bitwarden now has a native app on PC.
I still use the browser extension, but I might switch to the native app, as I gather from other comments that it does support the OS security APIs.
Glad I transitioned my passwords from Bitwarden to an offline password manager that doesn't have a similar "easy PIN password mode" feature.
I used to use Bitwarden with this feature, but I recently came to the conclusion that the product probably isn't the most secure offering, and it has some issues on Android with native autofill when you aren't using Google Services.
What issues are you talking about on Android? Using /e/OS here with native autofill, and yes, it does seem to have some issues :/ (mainly the keyboard not auto-closing, I think).
1. You can access the passwords if you know the PIN.
2. All state is local.
All an attacker needs to do is guess a PIN, restore the local state if the client locks them out, and repeat. Any code in Bitwarden to prevent (really, obscure) this is just a cat-and-mouse game.
Perhaps a way around this is to not keep all state local: have the PIN plus a separate authenticating private key go to their server, and you get three attempts before the full master password is required. A rough sketch of that idea follows below.
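A minimal sketch of what that server-side check could look like, assuming the three-attempt policy above; the names and storage are made up for illustration, not anything Bitwarden actually ships:

    # Toy server-side PIN check with a hard attempt limit.
    # Names and policy are illustrative only.
    import hashlib
    import hmac
    import os

    MAX_ATTEMPTS = 3
    _pins: dict[str, tuple[bytes, bytes]] = {}  # device_id -> (salt, pin_hash)
    _failed: dict[str, int] = {}                # device_id -> failed attempts

    def _hash_pin(pin: str, salt: bytes) -> bytes:
        return hashlib.pbkdf2_hmac("sha256", pin.encode(), salt, 600_000)

    def enroll(device_id: str, pin: str) -> None:
        salt = os.urandom(16)
        _pins[device_id] = (salt, _hash_pin(pin, salt))
        _failed[device_id] = 0

    def verify(device_id: str, pin: str) -> bool:
        if _failed.get(device_id, 0) >= MAX_ATTEMPTS:
            raise PermissionError("PIN locked; full master password required")
        salt, expected = _pins[device_id]
        if hmac.compare_digest(_hash_pin(pin, salt), expected):
            _failed[device_id] = 0
            return True
        _failed[device_id] = _failed.get(device_id, 0) + 1
        return False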
The moral of the story here seems to be: if you want convenience you'll compromise your security. This is not exclusive to BW.
Or if you want a moral of the story specific to the article: Don't use the PIN feature in BW. And perhaps, instead of a PIN use a physical key (e.g., YubiKey).
Most people have been told that even though you're centralizing passwords (meaning that if the vault is hacked you're in big trouble), the benefits gained from being able to generate strong passwords outweigh this.
I would say this is definitely true for the common Joe, but of course it helps to not run much arbitrary software from the web and keep your browser up-to-date to avoid drive-by malware. If you've got a habit of pirating games, you may want to keep your princess in another castle.
My mom doesn't have any reason to download and run untrusted software ever, and she'd call me if she needs something, so for her it's definitely better to have secure passwords with the risk of having all eggs in one basket. The risk of her being tricked into running software that steals the vault is lower than the guessable and reused passwords that she used before.
If you are more like me and regularly download software to try it out, pull random github repos to toy with them, etc., then it might be wise to keep the password database on an Android/iOS device which have app isolation. You can download all the malware you want, but if you don't grant it root, it won't be able to access the database stored in /data/data/com.example.keepass/database/.
The idea is: centralize your PWs but "harden" them behind a single, longer, less brute-forceable master PW. And since the PW manager is doing the fill-in, the individual passwords can be longer and more random as well.
Perfect? No. 100x better than what most people do? Yes!!!
Moi? For the important stuff? I add in a YubiKey. Perfect? Again, no. But closer than no YK at all.
As a side note: I do contract web dev work for various agencies, and talk about a lazy approach to clients' PWs. They think 1PW makes things secure; meanwhile I generally have access to all vaults, even for projects I'm not working on. Good sec is based on less trust, not too much blind trust.
I can count on one hand the number of services that are important/crucial enough to warrant unique, strong passwords.
All the rest I just reuse simple passwords because they simply aren't important and aren't worth the time to care. Someone wants my Discord? Go for it, I don't care. My Reddit goes with it? Sure, I don't care. My HN account too? Daring today, aren't we.
So no, personally I haven't felt a need nor desire for a password manager. Arguably it will cause me more grief than convenience.
The name is part of the problem
You can have a shorter alphanumeric password per device for frequent unlocks; it doesn't have to be the typical 4-digit PIN that jumps to mind.
It would not offer the same protection as your main, much longer password, but that longer password protects against a bigger threat than local access, and it's still much more convenient (and not much less convenient than a 4-digit PIN; on a full computer keyboard, typing a longer word is faster than shifting your fingers up to the number row).
And it would be nice if password managers explained the much, much bigger risk of these very short, numbers-only, poorly named PINs, so that users can make an informed decision. Rough numbers below.
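Just to illustrate the gap (the password lengths here are arbitrary examples):

    import math

    # 4-digit numeric PIN vs. an 8-character lowercase+digit device password.
    print(f"4-digit PIN:      {math.log2(10 ** 4):.1f} bits")  # ~13.3 bits
    print(f"8 chars [a-z0-9]: {math.log2(36 ** 8):.1f} bits")  # ~41.4 bits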
I see many comments in here saying it's obvious that a PIN can be brute forced. But it is absolutely NOT obvious that's the case, because for a long time now we've had hardware like the TPM and the Secure Enclave. With that hardware, you can have the database's key securely stored, with the PIN being what lets the hardware release that key. And even if the attacker has physical access to the device, it can be protected from brute force by having the hardware forget the key after a number of failed attempts (toy sketch of that behaviour below).
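A toy software model of what that hardware enforces; purely illustrative, since a real TPM or Secure Enclave does this outside the reach of the OS rather than in application code:

    # Toy model of a secure element: the vault key is only released
    # after a correct PIN, and is wiped after too many failures.
    import hashlib
    import hmac
    import os

    class ToySecureElement:
        def __init__(self, pin: str, vault_key: bytes, max_attempts: int = 5):
            self._salt = os.urandom(16)
            self._pin_hash = self._derive(pin)
            self._vault_key = vault_key
            self._max_attempts = max_attempts
            self._remaining = max_attempts

        def _derive(self, pin: str) -> bytes:
            return hashlib.pbkdf2_hmac("sha256", pin.encode(), self._salt, 100_000)

        def release_key(self, pin: str) -> bytes:
            if self._vault_key is None:
                raise RuntimeError("key destroyed after too many failed attempts")
            if hmac.compare_digest(self._derive(pin), self._pin_hash):
                self._remaining = self._max_attempts
                return self._vault_key
            self._remaining -= 1
            if self._remaining <= 0:
                self._vault_key = None  # real hardware would zeroize the key
            raise PermissionError("wrong PIN")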
> If accessing device-local data is outside of the threat model, why are we encrypting these data at all? We might as well store them in plain text.
Yes, exactly. I feel a lot of these tools are doing this, and it's okay if they make it explicit that the data is effectively plain text. But then why annoy users with a PIN in the first place?
I am sorry Bitwarden didn't take this seriously.
Let me know if you are looking for a job. We have several positions open. Feel free to shoot me an email julien -at—serpapi.com
Suddenly getting Error Code 7 (unusual network detected - https://bitwarden.com/help/unusual-traffic-error/) when attempting to login. Googling and looking on Twitter - people have been complaining about this happening to them as well.
Locking me out of 2FA because you don't like my network traffic? Yeah, I will be immediately leaving your service, bye. That's Google-like, bro, lol. Clown stuff. Insane that they would even think of doing this.
The attacks have moved on from LastPass to Bitwarden.
After the attacks and smear campaigns, should the people unwilling to use KeePass and other high-lift/high-risk pieces of software go back to tiered password strategies for their hundreds of sites?
> However this 5 guesses limit is enforced completely within the client's logic: it relies on the attacker using the official Bitwarden client.
I'd guess a software engineer working on that part of a security system would have considered that while the implementation approach was being decided.
If so, why did it happen anyway? Did they communicate the weakness to users?
If your machine has something running on it that's brute forcing your PIN, what's to stop it already having something doing keylogging and clipboard monitoring? Aren't you already hopelessly compromised?
Cars come with seatbelts, one can put them on, or put them on improperly. Not to say that the seatbelts are perfect, but the carmakers have done what they can to provide that safety net.
But you can still choose not to wear it. I'm in a country where seatbelts are required by law, but many ignore it out of convenience, bad habits, etc. It's a law, but one that can't realistically be enforced.
It doesn't have to be P2P, as long as the vault items are encrypted using device-specific secrets that are in a secure element (like iCloud Keychain). As long as there is no secure element in the loop, the security is limited. Secure elements can rate limit requests, require biometric authentication, etc. Without such limitations, you are only one upstream compromise away from exfiltrating all passwords.
I like self-hosting things, but for something as sensitive as my passwords I trust a professional company much more than I trust myself. Yes, companies can and do screw things up all the time, but that doesn't mean I would do better, and I'm a single person with other responsibilities in my life.
Or you don't host it anywhere but just keep it close to your chest. No risk of any server being compromised then.
Professional companies that care about security (I do IT security consulting so that's the group I deal with the most) are not perfectly secure. I'd argue that it's less likely that a password database is abused when hosted on a random average-security system (assuming the person is not being targeted, like if you're not a public figure or have a stalker) than if you've got it hosted with some bigcorp that has a huge target painted on their back. Some are really good at security and others a bit less, but none are perfect, and scale dictates compromises between security and usability. Not everyone in the firm will be a security expert, and much as you try to isolate the vaults / source code / other security-relevant systems from them, they're how ransomware and other groups gain a foothold to work with.
I wouldn't trust my mom to set up a server secure enough to host a password manager, but if you are at all comfortable around servers and follow normal guidelines, or use ordinary syncing software where the vault is encrypted before it reaches the server (so you pick and rely on a strong password for your vault file), you're more secure than when you use a third party and type that password into their website (even if, on a good day, the key stays local in your browser).
It would, so long as you're aware that the password has to be strong.
Let's quantify that. The default KDF that keepassx uses takes iirc ~50 ms of computation on modern hardware. It might dynamically determine the KDF rounds based on your system, but it never updates them (hmm, is that a vuln as well?), so they'll be old either way. A GPU gets a roughly thousand-fold speed-up over a CPU for pbkdf2, so let's say 20k guesses per second per GPU (note: this is just a ballpark number). An attacker might have a dozen GPUs available and care to spend a month on your vault (does that sound like a fair upper bound? Tweak it for your personal threat model), which means they run through some log(20e3 guesses_per_second × 12 gpus × (3600×24×31) seconds)/log(2) ≃ 40 bits of entropy (quick arithmetic check at the end of this comment).
If you pick random words from a diceware-sized dictionary (7776 words), you need 4 random words to be secure (because log(7776⁴)/log(2)>40). If you pick random characters from a-z,A-Z,0-9, you need a 7-character randomly generated password (because log(62⁷)/log(2)>40).
Edit: it looks like KeepassX has stopped development and says to use KeepassXC now. Their source code has some mentions of Argon2id so this may be outdated advice! Your password/-phrase may be able to be shorter than this, but it'll be hard to quantify because all Argon2 crackers suck ("For Argon2, the fastest cracking software that I can find is a CPU implementation." I wrote two years ago in https://security.stackexchange.com/a/249384/10863).
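Quick arithmetic check of the numbers above, under the same rough assumptions:

    import math

    guesses_per_sec = 20_000          # per GPU, pbkdf2 ballpark from above
    gpus = 12
    seconds = 3600 * 24 * 31          # one month

    depth = math.log2(guesses_per_sec * gpus * seconds)
    print(f"attacker search depth:   {depth:.1f} bits")                 # ~39.2 bits
    print(f"4 diceware words:        {math.log2(7776 ** 4):.1f} bits")  # ~51.7 bits
    print(f"7 chars [a-zA-Z0-9]:     {math.log2(62 ** 7):.1f} bits")    # ~41.7 bits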
I built my own that does this :) Just working on improving the UX now...
It has some cool features too, like not injecting anything into pages until you activate it (so no performance cost or risk of breaking sites), being less reliant on perfect detection of fields when it is activated, etc.
"... is a form of cryptographic attack on a computer system or communications protocol that makes it abandon a high-quality mode of operation (e.g. an encrypted connection) in favor of an older, lower-quality mode of operation (e.g. cleartext) that is typically provided for backward compatibility with older systems." from https://en.wikipedia.org/wiki/Downgrade_attack
Not allowing people the convenience they want means they'll switch to a method that does. Worst case: a passwords.txt. Wouldn't that be a worse downgrade attack?
I have no opinion other than it is obviously a downgrade attack.
As a fair-to-middling organic language model, I cannot tell you how to keep your keys safe; I myself am blessed with a good memory and a 160 wpm typing speed, so I use those.
Again, as a fair-to-middling organic language model, my only opinion is that, a PIN, as currently implemented in Bitwarden and described above, leaves Bitwarden users open to a form of exploitation that can be categorized as a downgrade attack.
Presumably, a better implementation would be fine.
You are free to keep using Bitwarden's PIN implementation, but it will still be open to: downgrade attack.
This is because Bitwarden's security, as described in this article, is open to: downgrade attack.
This is not an emotional term. It is no more laden with implication than observing that #F00 is 'red'. You are free to keep using #F00, just don't call it blue.
Downgrade attacks are relatively straightforward, and ease-of-use features, such as PINs, are a traditional place to look for them.
I have no opinion on this matter other than that it implies that Bitwarden can be configured in a manner which is insecure, and the specific form of insecurity is: openness to downgrade attack.