Login_duress: A BSD authentication module for duress passwords (github.com/jcs)
125 points by djsumdog on Oct 6, 2018 | 59 comments



For those who haven't heard of it, BSD authentication originated from BSDi (early commercial BSD variant) and is pretty much exclusively used by OpenBSD today.

https://en.wikipedia.org/wiki/BSD_Authentication

https://man.openbsd.org/authenticate.3

It does have some benefits from its process-based model, with the various login_* helpers running as separate programs (in contrast to PAM's shared libraries). This allows some cool things like sandboxing with pledge(2), which it appears many of them do, including login_duress.
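
For a sense of what that looks like, here is a rough, hypothetical sketch of a BSD auth style helper that pledges away everything it doesn't need before reading the password. The fd-3 "authorize"/"reject" back channel and the placeholder hash are assumptions based on how the stock OpenBSD login_* helpers behave; this is not the login_duress source:

    /*
     * Hypothetical sketch of a BSD auth helper restricted with pledge(2).
     * Assumes the conventional back channel on fd 3 that login(1) reads
     * the "authorize"/"reject" decision from.
     */
    #include <pwd.h>
    #include <readpassphrase.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    #define BACK_CHANNEL 3

    int
    main(void)
    {
        char pass[1024];
        FILE *back;

        /* Drop to a tiny set of promises before touching user input. */
        if (pledge("stdio rpath tty", NULL) == -1)
            return 1;

        if ((back = fdopen(BACK_CHANNEL, "w")) == NULL)
            return 1;

        if (readpassphrase("Password:", pass, sizeof(pass),
            RPP_REQUIRE_TTY) == NULL) {
            fprintf(back, "reject\n");
            return 1;
        }

        /* Placeholder hash; a real helper looks up the user's entry. */
        if (crypt_checkpass(pass, "$2b$10$placeholderhash") == 0)
            fprintf(back, "authorize\n");
        else
            fprintf(back, "reject\n");

        explicit_bzero(pass, sizeof(pass));
        return 0;
    }

Because the helper is its own process, a kernel-enforced promise list contains whatever goes wrong in the password-handling code, instead of that code living inside every program that links the auth library.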


PAM shared libraries can (and do) invoke sandboxed sub-processes to do things like verify passwords (e.g. the setuid unix_chkpwd helper used by pam_unix).


A natural extension of this idea is the concept of multiple duress passwords that do different things - i.e., the concept of a password as a command, an unstoppable last chance to make changes to your system. If this becomes popular I imagine authorities will start making images of devices before demanding login, so they can check that the act of logging in hasn't substantially changed the system.


> I imagine authorities will start making images of devices before demanding login, so they can check that the act of logging in hasn't substantially changed the system.

I think that’s standard procedure already.

Also, note that destroying or concealing evidence that is relevant to a court case or legal investigation is a criminal offense in many jurisdictions. You will likely face charges just for trying, even if you are not successful (e.g. because they've imaged the system).


> I think that’s standard procedure already.

It can still be useful when they don't have physical access to make an image, for instance when the system in question is being remotely accessed via ssh.

> Also, note that destroying or concealing evidence that is relevant to a court case or legal investigation is a criminal offense in many jurisdictions.

It can still be useful when there's no court case or legal investigation. For instance, when you're being illegally threatened to reveal your password.


> It can still be useful when they don't have physical access to make an image, for instance when the system in question is being remotely accessed via ssh.

Bear in mind that law enforcement generally has the ability to go get those too.

> It can still be useful when there's no court case or legal investigation. For instance, when you're being illegally threatened to reveal your password.

True, though one caveat worth considering is that this isn't a threat most people are likely to face.


How does having an image of the encrypted drive allow them to determine whether the system has 'significantly' changed? Just writing a login entry to the event log should make the image completely different.


I think the idea is that their (govt/shady entity, what's the difference?) forensic scientists would be able to tell whether the very act of entering a password started changing the system significantly, which would count as refusing to comply.


> changing a system significantly

I'm rehashing your parent's comment, but: Unless the command wiped the entire disk or the entity gained access to the unencrypted versions of the disk before/after the duress command, an outsider wouldn't be able to tell if a log had been updated or if an entire partition had been wiped.


Disk encryption prevents revealing the plaintext of files on disk, but you can still observe their ciphertext changing when the file changes. If the duress command causes different files to be modified compared to a normal login, then that can be detected by comparing to the original disk image, even though the actual modifications performed are hidden by the encryption.

On the other hand, it's possible to delete an encrypted partition by only overwriting the encryption key, which might be a small-enough change to go undetected.
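
To make the "magnitude is visible" point concrete, here is a hedged sketch of the observer's side: diff two raw images of the encrypted disk block by block and count how many blocks changed. The file names and the 4 KB block size are arbitrary; nothing about the contents is learned, only how much changed:

    /*
     * Count how many 4 KB blocks differ between a "before" and an "after"
     * image of an encrypted disk. Illustrative only.
     */
    #include <stdio.h>
    #include <string.h>

    #define BLK 4096

    int
    main(int argc, char *argv[])
    {
        FILE *a, *b;
        unsigned char ba[BLK], bb[BLK];
        size_t ra, rb;
        unsigned long long total = 0, changed = 0;

        if (argc != 3) {
            fprintf(stderr, "usage: blkdiff before.img after.img\n");
            return 1;
        }
        if ((a = fopen(argv[1], "rb")) == NULL ||
            (b = fopen(argv[2], "rb")) == NULL) {
            perror("fopen");
            return 1;
        }

        for (;;) {
            ra = fread(ba, 1, BLK, a);
            rb = fread(bb, 1, BLK, b);
            if (ra == 0 && rb == 0)
                break;
            if (ra != rb || memcmp(ba, bb, ra) != 0)
                changed++;
            total++;
        }

        printf("%llu of %llu blocks differ\n", changed, total);
        return 0;
    }

A normal login that appends to a couple of log files touches a handful of blocks, a full wipe touches nearly all of them, and destroying only the key material sits close to the normal case, which is why that last approach is the hardest to distinguish.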


There's a little bit of subtlety here, but in general GP was correct in that it can be made hard for authorities to tell exactly what changed.

Disk encryption, unlike other forms, does not have a terribly high avalanche factor when small changes are made, because rewriting large swaths of the disk on every change would be too expensive.

However, it is possible to make a small change (as small as, say, writing the audit log file on a real successful login) that renders data completely inaccessible. Consider an encrypted disk on which you can tell the magnitude of changes on the filesystem, but not which data has changed. Let's say you have a lot (many gigabytes) of sensitive data on that disk. If a successful login triggers the encrypted filesystem to decrypt the contents of the disk using an encryption key (of, say, a 4kb length) that is stored only on that disk, then a duress code could simply destroy (or corrupt by randomizing a few bytes) that key, rendering the contents of the disk inaccessible, without writing more than a very small amount of data.
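
A minimal sketch of that duress step, under the assumption that the bulk key sits in a small file at a made-up path (real designs keep it wrapped inside the volume header, e.g. softraid crypto, LUKS, or APFS):

    /*
     * Hypothetical duress action: overwrite a small key blob with random
     * bytes and sync, leaving the bulk ciphertext unrecoverable. Path and
     * size are made up for illustration.
     */
    #include <fcntl.h>
    #include <stdlib.h>
    #include <unistd.h>

    int
    main(void)
    {
        unsigned char junk[4096];
        int fd;

        arc4random_buf(junk, sizeof(junk));    /* random garbage, not zeros */

        if ((fd = open("/etc/keys/volume.key", O_WRONLY)) == -1)
            return 1;
        if (write(fd, junk, sizeof(junk)) != (ssize_t)sizeof(junk))
            return 1;
        fsync(fd);    /* make sure the overwrite reaches the media */
        close(fd);
        return 0;
    }

One caveat: on flash media, overwriting in place isn't guaranteed to destroy the old bytes because of wear leveling, which is one reason real schemes also keep the key wrapped by a passphrase.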

This fundamentally trades off deniability for data security: the disk would still contain all of the encrypted data and could be brute-forced, but that would be the case anyway if an image had been taken previously.

Of course, situations in which that deniability would be legally well-received are, as others here have pointed out, vanishingly rare.


You don't need a 4 KB key. 128 bits is more than enough for AES. And there's no way you are going to brute-force a random 128-bit key.


Where does one find such disk encryption software?


APFS, the new macOS file system, works like this.

If you have an encrypted volume, you can use the command 'diskutil apfs eraseVolume' to make data inaccessible instantly by deleting the encryption key. (Note that the disk passphrase is not the same as the encryption key; the passphrase only unwraps the key, so once the key is deleted even a weak passphrase leaves nothing practical to brute-force.)


> Disk encryption prevents revealing the plaintext of files on disk, but you can still observe their ciphertext changing when the file changes. If the duress command causes different files to be modified compared to a normal login, then that can be detected by comparing to the original disk image, even though the actual modifications performed are hidden by the encryption.

How would they even be able to determine which files are modified? If we're talking full disk encryption here, you can't tell which files are being accessed/modified, just locations on disk. Without metadata to map blocks to objects they're flying blind.


E.g., if every block on the device changes, that's a pretty big flag. If only a few change, that's expected. So the magnitude of change is both observable and conveys information to the attacker.


> Disk encryption prevents revealing the plaintext of files on disk, but you can still observe their ciphertext changing when the file changes.

File encryption does that. Disk encryption means that you don't even know how many files there are, much less which ones were changed. The whole disk is just a blob of random data until the right password is entered.


> If the duress command causes different files to be modified compared to a normal login, then that can be detected by comparing to the original disk image, even though the actual modifications performed are hidden by the encryption

Don't they also need to know what files are changed by a normal login, so that they can see that the changed set in this login was different from that set?

Comparing an image after a login to an image from before the login gives you a set of changed files, but it doesn't tell you if that is the normal login change set or the duress login change set.

Anyway, if I were setting up a duress login I'd make it so normal and duress login change the same set of files.


If at least 98% of your drive isn't accounted for by any visible partition, it's going to raise some questions from a curious observer.


> entering a password started changing a system significantly

When different passwords (for the same user) simply decrypt and access different parts of the filesystem, that's not the case.


That isn't how any full-disk encryption scheme works. A login event entry probably changes a single block.


If the root drive is encrypted then what good does this software do? By the time login_duress has a chance to run, you've already had to type in a decryption password.


> I think that’s standard procedure already.

Simply transferring everything off a 2 TB HDD at 100 MB/s will take over 5 hours (2,000,000 MB ÷ 100 MB/s = 20,000 s, about 5.5 hours), and that's ignoring any hashing.


I can absolutely confirm that standard procedure is to seize storage media as evidence, send them to a lab and clone the images as the very first step.


Another thing that comes to mind: they often employ write blockers, don't they? Pieces of hardware that sit between the computer and the storage media. So by the time decryption is demanded, it will be done with the system in a read-only state, right?


They don't run forensics on the actual device if at all possible, only a read-only cloned image.


You’re not referring to America, right?

I could be behind on certain policies but that’d seem to fly in the face of the fifth amendment (or is concealment not covered?).


The 5th and 4th have a rather weird status at border crossings. You can be denied entry to the country if you refuse to share passwords, for instance. Or have your equipment be confiscated. Some of this has even been upheld in court.


While everyone, including U.S. citizens and permanent residents, can have their electronics confiscated (stolen) for refusal to cooperate...

Citizens and permanent residents cannot generally be refused entry for flexing their rights. There are a few ways permanent residents can be denied entry, but the main one is having been out of the country for over 180 days.

Temporary residents and visitors are the ones who can be denied entry for looking at an immigration agent wrong, or trying to flex their right against absurd digital searches.

However much we might wish for things to be different, Europe and Oceania may have less intrusive policies about digital searches, but everywhere else is either worse than the USA or isn't developed enough to have paranoid security services that want to search everything.


The 4th and 5th amendments have been severely degraded since 9/11: https://www.aclu.org/other/constitution-100-mile-border-zone


The fifth amendment says you cannot be forced to incriminate yourself. Actively destroying evidence that could incriminate you is not covered by that.


pam_duress, linked from the article, offers this feature. Each user can have multiple duress passwords that do different things.


Do authorities actually log in to people's systems right now? I assumed password demands were just for decrypting the contents of existing hard drives, not for physically logging in to their computers.


They usually clone the media, but may ask for passwords, e.g. for devices where the storage is soldered to the board. I do not know if they can compel you to provide the password; I believe that varies by jurisdiction.


Could something like this be used on headless servers to trigger an alarm that old passwords have been compromised? The typical "root login failed x times" has too much noise to be valuable in my experience for anything internet-facing, but looking for specific passwords tried might be useful.


Yes. Put the hash of the old password in the /etc/duress file with a script that pipes to sendmail you@youremail.com. Add one line per old password you want to monitor and vary the message sent via sendmail to identify which password was used.


I wrote pam_pwnd to deny the use of passwords which have appeared in the Have I Been Pwned database(s):

https://github.com/skx/pam_pwnd/

That's not exactly the same thing as denying a previously-valid password, but it is along similar lines.


Not this, but either BSD's auth_userokay() or libpam functionality in Linux would give you access to the cleartext password, which you could compare to whatever hashed history you wanted.
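
As a sketch of that approach, whichever hook sees the cleartext (a custom BSD auth style, or a PAM module's auth token) can compare the candidate against a list of retired hashes and raise an alert on a match. The function name, hash list, and log message below are all hypothetical:

    /*
     * Hypothetical "old password tripwire": compare the cleartext candidate
     * against retired hashes and log a warning on a match. crypt_checkpass(3)
     * is OpenBSD's hash-comparison routine; the hashes below are fake.
     */
    #include <pwd.h>
    #include <syslog.h>
    #include <unistd.h>

    static const char *old_hashes[] = {
        "$2b$10$fakeretiredhashnumberone",
        "$2b$10$fakeretiredhashnumbertwo",
        NULL,
    };

    /* Returns 1 if the candidate matches a retired password, else 0. */
    int
    check_old_passwords(const char *user, const char *candidate)
    {
        int i;

        for (i = 0; old_hashes[i] != NULL; i++) {
            if (crypt_checkpass(candidate, old_hashes[i]) == 0) {
                syslog(LOG_AUTH | LOG_WARNING,
                    "retired password #%d tried for user %s", i, user);
                return 1;
            }
        }
        return 0;
    }

It's the same idea as the /etc/duress suggestion above, just wired directly into the auth path instead of into a duress script.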


I imagine the primary usecase will be disabling your anime wallpaper.


What about the scenario where the adversary pokes around in the system and finds you have it installed?

Might the following script make sense:

- Delete/change what you don’t want to share

- Change the regular password to the duress password

- Remove login_duress

Any flaws with this approach or improvements?


> What about the scenario where the adversary pokes around in the system and finds you have it installed?

The whole point of this is that the adversary doesn't have access.


They might be unable to decrypt your data, but they still have the device in their hands.


I was thinking of this scenario:

https://xkcd.com/538/


This is the whole point of such a feature. Pipe wrench guy demands password, you give the duress password. Re. parent comments, I suppose you could have the script delete all traces of login_duress (and itself).


Now pipe wrench guy comes back and beats you extra hard for tricking him, demanding you provide whatever info was there yourself. You might truthfully say you don't have it or don't remember it, but pipe wrench guy is no longer willing to trust what you tell him because you tricked him before.

This might not be precisely a winning scenario.


Might be good to mention that in the readme.md then? Or make it an option in login_duress itself? I don’t understand the downvotes on my original comment.


Your original comment is easy to misunderstand. Until reading the rest of the thread I thought you were suggesting that an adversary who hacked into your system could use login_duress against you.


I see thanks, too late to edit.


It would be great to see a variant of this come to smart phones. Having a passcode that logs into a reasonable looking OS install, i.e. same wallpaper as the login screen, default apps only, no payment cards added to the device, no email configured, no passwords saved, basic web history. Would be good for crossing borders.

1Password already has the ability to wipe for travel, would be good to see more of that.


I'd love something that works like this, but with the disk encryption password screen.

Specifically, plymouth provides the splash screen on boot with many common Linux distros. If I enter the decoy password, it should take some action, like displaying a fake error screen.

I toyed around with making a plymouth theme to this end, but didn't get far. It would be nice to have a theme that offered plausible deniability wrt FDE.


Solving the rubber-hose problem




Well, it reduces the risk (to the data, not the user) in torture situations, but I wouldn't say it "solves" the rubber-hose problem. And it only helps when a copy of the data has not been made.

Torture works (in the rare situations in which it actually works; in no way should this be taken as being in favor of torture) by creating expectations of future suffering. Torture victims are made compliant over time, so it's more likely that a victim would have the wherewithal to enter a duress code early in the process than that they would hold out without cracking after prolonged torture.


Somewhat different perspective: torture works great at what it's actually for. It's actually for oppressing a population you have control over, not specifically getting information. The getting-information part is just a moderately useful byproduct. The torturer's purpose is to terrify you, and your friends and family, out of having anything to do with the anti-regime forces. Considering that, whether you are actually guilty of anything may not matter, and whether any information you cough up is true doesn't matter that much either. Getting actionable intelligence against actual organized anti-regime forces is just a bonus.

If you're thinking that way, and the forces you're up against are too, then the only thing this really accomplishes is potentially protecting the information that reveals your actual comrades. And if you're actually in a situation where protecting information about your comrades and operations is more important than potentially avoiding torture, then you're braver than most of the people on the planet, and good luck - you'll need it.


My intuition makes me agree that it does address the rubber-hose scenario. Could you elaborate on why you think it doesn’t?


Because we've seen studies on what torture does to people and how most people respond to torture. Once a person breaks, they try their best to appease their torturer. An adversary rubber-hosing isn't just going to stop when they log in and don't get what they want.

In some rather narrow cases it might help avoid random searches from a non-malicious threat. (e.g., border guard just proving a point for whatever reason)

If the goal is to recover something from your machine, an invested adversary that is willing to rubber-hose is going to keep going. And the numbers seem to show that most people will not have the will and calm to use their duress password when they've been tortured into logging in.


The whole point of login_duress is that you say "there's nothing you want on this machine" and give them the duress credential knowing that the thing they want will be deleted. Once it's deleted, you can't break and give it to them.


Yeah, I think this is where the meat is. csydas’s points have merit. Namely: 1. torturers will keep torturing until they get what they want, and 2. victims will eventually tell the torturer whatever the torturer wants to hear in order to stop the torturing. However, if you can employ your duress code, then you might be able to defeat the torturer, either by preventing them from gaining access (they could still very well kill you) or by alerting the outside world about the duress scenario.


The rubber hose scenario is that someone is willing to hurt you to access the data you have on your device.

You trick them into entering a password that either wipes the data, alerts someone to the attempt, or dumps them into a sandbox that appears normal but doesn't provide the access they want.

What do you think happens next?


jcs is pretty good



