German BSI withholds Truecrypt security report (golem.de)
234 points by rffn on Dec 16, 2019 | 84 comments



Note, the title is no longer accurate. There's an update at the end of the article, along with a download link:

> Shortly before we published this article, the BSI allowed the Truecrypt documents to be published. They can be downloaded from the Frag den Staat web page. Update from December 16th, 2019, 13:22


The documents seem to be available here: https://fragdenstaat.de/anfrage/untersuchungen-zum-verschlus...

They all have "geschwärzt" (blackened/redacted) in the file name, but it looks like only some authors' names (and maybe a working group name) have been removed -- I've scrolled through a few of these files and didn't find anything else that might have been removed.


I'm not sure if the implication of the following is that sections of the report were withheld or that it was incomplete to begin with.

> However, the report hints that more such flaws exist. Another chapter in the documents mentions that several such off-by-one errors were found, but due to the lack of a complete code analysis only examples can be shown. However, even those examples are missing in the document - the following chapter consists only of a headline and has no content.


My understanding is that the auditors didn't have time to analyze all of these errors, so they omitted them from the report and only pointed out the potential risk.


This AP7 (work package 7) document seemed the most relevant, although most of it reads like generic test results and conceptual material. Not sure why they would try to hold that back. A non-malicious view would be that they are simply part of German bureaucracy and consequently slow.

Non-Google translation of the summary (AP7, page 70) for those interested:

"5 Summary

This work package first describes the basic building blocks that are utilized to secure the start [boot?] process, as well as ones that might be necessary and helpful to realize hard disk encryption via full-disk encryption.

Beyond that, existing attacks are described, and it is investigated whether the solutions presented here mitigate them or not.

In chapter 4 several possible solutions are presented, both online (meaning with network connectivity) as well as offline.

The most promising solutions use the new Trusted-Computing functionality based on a Trusted Platform Module (TPM) and a Root of Trust (CRTM/SRTM/DRTM).

The most desirable solutions are the Secure Boot procedures from chapters 4.4 and 4.5. These do, however, require either the development of new hardware or need to be based on special hardware extensions, e.g. Intel's TXT technology.

A large-scale deployment in an existing, heterogeneous area is therefore improbable.

At the moment, solutions that combine Trusted Boot with the attestation functionality seem to be the most sensible. This solution can be combined with:

- Sealing: storing a secret bound to the platform configuration.

- NVRAM: storing a secret in an area of the Trusted Storage inside the TPM that is only readable/writable given a valid configuration.

- Attestation: proof of platform integrity towards an external party.

As "external parties", multiple counterparts could be realized:

- Online: e.g. a central server.

- Offline: e.g. a smart card or a smartphone application that takes on the verification for the server.

All three variants (sealing, NVRAM, attestation) rely on the correctness of the PCRs."

The rest is potential use cases and an impact matrix of the attacks described in the document.
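To make the sealing idea concrete, here is a rough sketch using the tpm2-tools CLI (my own illustration, not from the report; exact flag spellings differ between tool versions):

    # Build a policy bound to the current values of PCRs 0, 2, 4 and 7
    tpm2_startauthsession --policy-session -S session.ctx
    tpm2_policypcr -S session.ctx -l sha256:0,2,4,7 -L pcr.policy
    tpm2_flushcontext session.ctx

    # Seal a secret under that policy; it can only be unsealed while
    # the PCRs still hold those values, i.e. the boot chain is unchanged
    tpm2_createprimary -C o -c primary.ctx
    tpm2_create -C primary.ctx -L pcr.policy -i secret.txt \
        -u seal.pub -r seal.priv

Any tampering with the measured boot chain changes the PCR values and unsealing fails; that is the "correctness of the PCRs" all three variants depend on.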


Either they had to do that, or they had to be ready for the barrage of incoming requests for the documents.


Which gives the "withhold" part of the story a strong push towards Hanlon's Razor: once the topic escalated to higher ranks, the copyright ceased to be a hindrance.

Or more precisely, towards an organizational variety of Hanlon's Razor, where stupidity takes the form of the organizational failure mode of underlings not being authorized to do what would have been the right thing.

Curiously, a less colloquial formulation of Hanlon's Razor would replace stupidity with incompetence, and this, when translated to German, contains a hint of a precisely matching double entendre: in German, "Kompetenz" is used for two separate things: being able to (as in English) and being authorized to. It's not a full double entendre, because the negated form "Inkompetenz" refers exclusively to mental ability, just like the English counterpart, but what's a good aphorism without subtle extra layers?


Apparent malice can often be explained by a lack of Kompetenz. ;)


It is sad to see the state still making freedom of information requests so difficult and using copyright as a flimsy excuse to hinder citizens from sharing the information when they finally manage to get it out of them.

I find it especially sad to see something like this held back by an entity that claims to want to protect security in information technology and doubly so since this information would be relevant to the developers and many state entities that use the software and its successor.

The BSI is sadly often toothless when it comes to actually enforcing security standards on federal entities but to see them not even trying to educate on such matters, when they clearly know better, squanders a lot of trust one may have in them.


> since this information would be relevant to the developers and many state entities that use the software and its successor.

The BSI actually did communicate the findings of the report to the TrueCrypt developers in 2010, which the developers ignored:

> The results were communicated to the Truecrypt foundation; however, the Truecrypt developers didn't consider them to be relevant. The BSI furthermore says that the results were not intended to be published.

(From page 2 of the article)


Yes, but they neglected to tell the Veracrypt developers once Truecrypt stopped being developed, even though they know of many municipalities using both applications. They should have told the Veracrypt developers and advised the municipalities to switch to the newer version. And the argument that the information was outdated by then, when both are clearly still in use, seems negligent of their duties.


Veracrypt didn't exist back then and Truecrypt would only be 'deprecated' five years later. This was in 2010.

Personally I would've given up after a few months of trying to get a vulnerability fixed. Can't really blame them that this got buried after five years.


Those municipalities should dump Vera and TrueCrypt containers, under eIDAS they should really be using .asice for interoperability.


Open-records requests can be a huge amount of work, even if the records aren't secret. It's common for requesters to have an axe to grind, so you might want to know what that is (and whether you should be honoring the request, and whether that's actually optional, none of which is necessarily trivial); and it typically means you need to review all those documents - and understand them. Even if the examination is just cursory (though I have no firsthand experience), I can well imagine that for technical documents that were secret in a secretive organization this is going to be a time-consuming process nobody really wants to do.

As such, I don't think it's weird or even really all that sad when a request hits some speedbumps. It's kind of inevitable; it's work, it's specialized, it's not all too rewarding for the institution, it's thankless for the specific employee... It's a testament to the solidity of bureaucratic institutions that it gets done at all. Frankly, this all sounds like the process worked about as well as you can expect: they actually got the data, all of it, and they could distribute it verbatim, too.

It would be nice if it were easier to achieve transparency, but the flip side is that overzealous transparency laws can be abused too. At least here in the Netherlands there have been several cases where... creative... citizens decided to cash in on the fines imposed for non-compliance with such requests by spamming (iirc) municipalities with hundreds of simultaneous requests phrased to be hard to deal with quickly. I believe the law has since changed, but even so, requests like this aren't free. Somebody else is paying; and that's fine - but to expect them to furthermore go above and beyond and push for excellent service and quick turnaround is perhaps a little unreasonable.


When the developers of a product pay for a third-party security assessment, the results are usually confidential - after all, who'd pay to have their product publicly badmouthed?

Perhaps BSI was merely attempting to provide such a service for free.


> Perhaps BSI was merely attempting to provide such a service for free.

Yup. That's something they do. They also do stuff like checking/scanning publicly accessible servers in Germany for outdated software/vulnerabilities and you'll get an e-mail if they find something (this is automated).

In this case there's the added dimension of the government using Truecrypt in some places at the time, so they had an interest in it being secure.


Jail any government worker who hinders an open records request - like Georgia is trying to do today:

https://www.ajc.com/news/local-govt--politics/trial-start-mo...


"... in the simplest case a user can mount a Truecrypt volume that contains a file with suid root permission that will open a shell. Golem.de was able to replicate this scenario in a current version of Veracrypt."


Can I just point out that this is just one vivid example of why tying setuid permissions to a file is a terrible design to begin with? Permissions should be derived from the execution context at run time. (People might hate me for saying this, but this is one of those design decisions Windows fundamentally gets right.)


Even Windows gets this wrong at times, with several UAC bypass techniques exposed by auto-elevating binaries. Still, Microsoft has done a great deal of work with the Windows privilege model to prevent things like this, and these issues are steadily being resolved.


>with several UAC bypass techniques exposed by auto-elevating binaries

According to Raymond Chen, a Microsoft employee:

>There really are only two [UAC] settings.

>* Always notify

>* Meh

https://devblogs.microsoft.com/oldnewthing/20160816-00/?p=94...



Pasting a random article from 2007 with no other comment is not a great rebuttal of what they said.

A _lot_ has changed since 2007.


I'm pretty sure the fact that it's not a security boundary has not changed since 2007. They should've probably marketed it better to clarify this, but that's not a technical issue. It was always a horrible idea to run a malicious program under your credentials relying on UAC to enforce any security. That's never changed.


How do you suggest implementing passwd without setuid?


Using the design of the LSASS is one way, as mentioned. Others include Openwall's tcb and Daniel Rench's userdirs.

* https://www.openwall.com/tcb/

* https://web.archive.org/web/20030919191907/http://dren.ch:80...


Off the top of my head? Ask a system service that has the privilege to change it for you after authenticating you.


Isn’t that exactly what passwd is? A system service that has permission to change the passwords file?


No, the point is that passwd should obtain its privilege by virtue of being started by a privileged process, not by virtue of being marked as a privileged program when it's run by an unprivileged user.


How do you start the privileged process as a normal user?


You don't. It's already started as part of the system. Or you ask some part of the system that is willing to authenticate you to do it for you.


Shhhhhhhhhh.

Stop giving systemd more ideas.

(/s)

(But seriously, imagine a world where you can't get root because D-Bus crashed.)


PolicyKit essentially already does this, and all of the systemd *ctl commands support authentication via polkit.
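For instance, a plain systemctl invocation by an unprivileged user already goes through this path (the unit name here is just an example):

    # systemctl sends a D-Bus request to PID 1; systemd asks polkit
    # whether the caller may perform that action, and polkit can prompt
    # for authentication. No setuid binary is involved on this path.
    systemctl restart systemd-timesyncd.service

    # The equivalent raw D-Bus call carrying the request:
    busctl call org.freedesktop.systemd1 /org/freedesktop/systemd1 \
        org.freedesktop.systemd1.Manager RestartUnit ss \
        "systemd-timesyncd.service" "replace"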


TIL (so that's what the whole PolicyKit thing was all about).

Now I'm wondering what its worst case crash behavior is like.


This surprised me the most; I never thought about this before. Aren't all permission-supporting filesystems vulnerable to this if mounting by a user is permitted? I presume filesystems don't go through the files and downgrade root ownership.


Yes; that is why it is recommended that untrustworthy drives be mounted with the `nosuid` flag.
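For example (the device and mount-point names are placeholders):

    # Ignore setuid/setgid bits on every file in this filesystem
    mount -o nosuid /dev/sdb1 /mnt/untrusted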


Ah, so even though filesystems don't go through files, the kernel can still block the operation of suid at mount time. This suggests that Veracrypt could simply enable the nosuid option when mounting a device.


And they should also add nodev, to block a similar attack where you add a bunch of block devices with 777 permissions in an attempt to make the block device that "/" is mounted from readable to a user, who is thus able to read (and write) any file on the host.


I didn't know of this attack, sounds interesting :) can you explain in a bit more detail how it would work?


Sounds like it works exactly as described by cyphar. The OS trusts permissions that are set on the files, so if you slip it a device 'file' writable by anyone, then it will let mere users write to the device even if it points to the root filesystem. Devices are denoted simply by numbers on the file inode in the filesystem; it's not difficult to make one that corresponds to the real disk drive.


Right, the attack would be something like:

    # On a machine where you have root, do the following in a Truecrypt
    # volume. Covering all 4096 majors x 1048576 minors is infeasible in
    # practice; the first 256 of each are enough to hit the root disk.
    for maj in {0..4095}; do
      for min in {0..1048575}; do
        mknod block-${maj}.${min} b $maj $min
        mknod char-${maj}.${min} c $maj $min
        # chmod per file: a single glob over billions of names would
        # exceed the kernel's argument-size limit
        chmod a+rwx block-${maj}.${min} char-${maj}.${min}
      done
    done
All devices which represent a block device (namely, hard drives and similar media) have some (major, minor) value. There are currently[1] 4096 possible values for the major number and 1048576 for the minor number, so in principle you could create all of them (in practice, the first 256 of each suffice, since it's very rare for the numbers to go above that).

And now when you mount the volume on a machine (without needing root, because that's what TrueCrypt allows you to do), the mounted filesystem contains every possible block and character device with read/write permissions for every user on the system. Therefore, one of the block devices (you can check by doing an ls in /dev) will correspond to the root filesystem, and the user can now read or write to it directly.

By adding "nodev", the kernel will not permit any user to access character or block device inodes on the filesystem (even if you would normally have permissions).
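So a careful mount of a user-supplied volume passes both flags (again with placeholder names):

    # nosuid: ignore setuid/setgid bits
    # nodev:  refuse to honor character/block device nodes
    mount -o nosuid,nodev /dev/mapper/tcvolume /mnt/tc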

[1]: https://elixir.bootlin.com/linux/v5.4.3/source/include/linux...


>Aren't all permission-supporting filesystems vulnerable to this if mounting by a user is permitted?

Technically, it's only vulnerable on operating systems that support setuid-style permissions. Those don't exist on Windows, for example.


If you're mounting a filesystem provided by the user, you're supposed to use the nosuid option, which tells the kernel to ignore setuid bits on the filesystem.


This is also nice for breaking in/out of Docker containers with bind mounts.


Not if you use user namespaces (which you really should).


Which is not the default that Docker uses :(

One more reason to switch to podman, which has sane defaults.
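For reference, remapping has to be switched on explicitly in the Docker daemon config (a sketch; with "default", the daemon creates a "dockremap" user and allocates subordinate uid/gid ranges for it):

    # Write /etc/docker/daemon.json and restart the daemon; containers
    # then run in a user namespace where in-container root maps to an
    # unprivileged host uid.
    sudo tee /etc/docker/daemon.json <<'EOF'
    { "userns-remap": "default" }
    EOF
    sudo systemctl restart docker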


Or LXD/LXC which can run containers such that they are isolated from one another in terms of their id mappings.


Isn't Veracrypt just a container, like a hard disk? Why should Veracrypt care about what filesystem you store inside a container, and what you do with its permissions?


... if you have sudo.


Where does sudo come in with suid executables? Especially since afaik sudo depends on suid in the first place.


One might be granted privileges to mount the filesystem using sudo, but not privileges to run other commands.

If the filesystem just mounted has setuid executables, however, the user can then get around their lack of additional sudo privileges by running the setuid executables.

Although most people seem to use sudo to allow a user to run anything, that's really not how it was intended to be used.


That's somewhat different from the Veracrypt case, afaict. And it doesn't seem like this is what bonzini meant by mentioning sudo.


You still need to gain the privilege to mount filesystems in order to exploit the flaw. So it is a privilege escalation in that you can go from "sudo mount" to a root shell, but it is: 1) not exploitable unless you have sudo 2) pointless if you are authorized to "sudo" any command.


This depends on the setup really; it's possible to limit sudo access to only some commands, like so:

  %veracryptusers ALL=(root) NOPASSWD:/usr/bin/veracrypt
An inexperienced systems administrator might use this to let some users mount their encrypted USB drives to carry around sensitive data, not realizing that, through the intricacies of the Linux filesystem, this can lead to privilege escalation.

Such a config would give you sudo for that command, but not a root shell; allowing certain users to run ping floods to test the network by giving them access to the ping command as root, for example, would not expose much security risk beyond flooding the network.
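To sketch how that sudoers line nevertheless becomes a root shell (hypothetical file names; the exact veracrypt invocation differs between versions): on a machine where the attacker has root, they put a root-owned copy of bash with the setuid bit set into a volume, then on the target:

    # Allowed by the sudoers entry above: mount the attacker's volume
    sudo /usr/bin/veracrypt --text volume.tc /mnt/vc
    # bash's -p flag keeps the elevated euid granted by the setuid bit
    /mnt/vc/rootsh -p    # effective uid 0: a root shell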


I see now that you consider sudo necessary for mounting the filesystem, but

a) it seems the point in the report is that Truecrypt allowed granting this ability without using sudo (I guess either via a daemon or a setuid executable)

b) iirc, in the case of other filesystems you can allow users to mount them without being root - which is how removable devices work in unixes. So this goes around the whole sudo/setuid system and might be another option for this feature in Truecrypt too.

Lastly, as jeroenhd noted, even with sudo the root privilege can be granted for a script that mounts a volume, or for one particular command. Sudo, by default, doesn't allow the user to add options to the command specified in `sudoers`.
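For instance (a hypothetical entry), spelling out the arguments in sudoers pins the exact command line:

    # Only this precise invocation is permitted; sudo refuses any
    # added, removed, or changed argument
    %backup ALL=(root) NOPASSWD: /usr/bin/mount -o nosuid,nodev /dev/sdb1 /mnt/usb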


Ouch.


The casual user stumbling on this article is going to think that TrueCrypt or VeraCrypt has been broken. There’s a big difference between attacks on a live system when a volume is being used, versus cases in which an encrypted volume is lost, stolen, or copied.

It needs to be firmly said that there is still no known way to recover plaintext from an unmounted TrueCrypt or VeraCrypt volume on a powered-off system without knowing the pass phrase. TrueCrypt and VeraCrypt are still totally secure for the standard use-case of protecting your powered-off laptop being stolen, or your backup drives being lost, or an encrypted volume that you’ve copied over to Dropbox being compromised.


>The casual user stumbling on this article is going to think that TrueCrypt or VeraCrypt has been broken.

And why should the casual user use TrueCrypt/VeraCrypt when Bitlocker/Filevault works out of the box and is built into the operating system? I feel like most people using VeraCrypt do so because it's open source and they're distrustful of the software vendors. For that threat model, you need protections against evil maid attacks, which TrueCrypt/VeraCrypt does not have.


> As Truecrypt got no further releases, the software is still vulnerable to all those weaknesses. [...]

> The BSI knew all that. [...]

> The results were communicated to the Truecrypt foundation; however, the Truecrypt developers didn't consider them to be relevant. The BSI furthermore says that the results were not intended to be published.

This is looking pretty terrible for Truecrypt. It means they ignored a vulnerability report and kept the vulnerabilities around for five years.


Truecrypt has been abandoned for about seven years.


This was in 2010. Truecrypt wouldn't be 'deprecated' for like five more years after that.


Development was continued by the VeraCrypt project. Hopefully they fixed the vulnerabilities that TrueCrypt didn't.


You should no longer use TrueCrypt; if you want an alternative, I suggest https://www.veracrypt.fr


But did the VeraCrypt developers know about it?


According to the article they didn't know about it.


Why would they release an audit that effectively provides them with zero-days into encrypted suspect disks?

They release it now because no one is using TrueCrypt any longer.


Because there weren't any real zero days in the report in the first place. The article mentions that it's mainly minor things like failing to clear memory, which is only helpful in rare circumstances.


They did not publish the findings, but they did report them to the Truecrypt foundation so that the flaws could be fixed (though the developers, in return, didn't agree that those were flaws worth addressing).


I use VeraCrypt, and none of this concerns me in my daily use of it. Can anyone tell me if my containers are still safe from prying eyes, since I upload them to the cloud? I need specific answers from anyone working on VeraCrypt, not the general "yeah, they are unsafe" answers that HN usually gives.


Since you're uploading them to the cloud, do keep in mind that, given 60 years of computer advancement, today's encryption standards are unlikely to withstand tomorrow's hardware...


Counterpoint: DES was broken because of the short key length, something that had been criticized very early on.

Asymmetric crypto will likely fall soon; symmetric crypto with conservative key-length choices, e.g. AES-256, may stand for a long time, including the age of quantum computers (Grover's algorithm only halves the effective key length, leaving ~128 bits of security).


You're generous. I'd give them no more than 10, which is all I need in the first place. But I am worried about them this year or next; after that, if they get decrypted, it would just be mildly annoying.


Burn everything and flee to the woods. Can't be too careful.


Is there a solid alternative to TrueCrypt, with most of the features, that's been implemented with a proof-checking system such as OCaml Mirage?


If you're going to comment, it's highly preferable that you read the article where all of this is explained.


Somehow I didn't notice the article was paginated. Thanks for pointing that out.


As per the guidelines:

> Please don't comment on whether someone read an article. "Did you even read the article? It mentions that" can be shortened to "The article mentions that."


Who cares about TFA? The comments are the content. Especially here! braces for downvotes


Much safer to assume that a decent nation state can decrypt Truecrypt and a lot of other things. You can hide stuff from your wife, friends, or banana-republic countries, but I wouldn't bet against the NSA with 30 years in jail on the line.


True, but not by outright cracking the encryption.

They will get your password instead, by implanting your keyboard, putting a camera on the wall behind you, or grabbing you just after you've entered your password.


That sounds like effort. Intercepting your next Amazon order for anything that plugs into the PC and loading it up with malware would be better.


Good point. Another thing: even if they can decrypt it, they'd save that capability for Osama types, not burn it over a small tax case. Otherwise the bad guys would stop using it. Maybe decrypt but not use in court...


One tool is parallel construction. First they find out what you did through an illegal/classified method, then they use the benefit of knowing the answer to construct a way to figure out the same information legally.

For example, an agency illegally taps your phone and finds out you'll be driving with something illegal in your car along a certain route. The agency then tells some state troopers to notice $your_car driving unsafely at $location and pull you over for a routine traffic stop and search your car.

It's unlikely they'd take even that risk on a tax case, though.




