The poisoned NUL byte, 2014 edition (googleprojectzero.blogspot.com)
231 points by tshtf on Aug 26, 2014 | 17 comments



Can someone explain this a bit? While I can understand how these bugs arise, I'm not the best at exploiting them.

The summary states,

> disclosed a glibc NUL byte off-by-one overwrite into the heap.

> a full exploit (with comments) for a local Linux privilege escalation.

Normally, I wouldn't see how such a bug could lead to privilege escalation. (glibc runs in userspace, after all.) But it is glibc, and glibc is everywhere.

I think the key is in the source code, where they state,

  // It actually relies on a pkexec bug: a memory leak when multiple -u arguments are specified.

pkexec is setuid, so if it has a bug, then it's a great target for privilege escalation. Is the exploit the fact that they're passing bogus arguments to pkexec in such a way as to trigger this bug, corrupt the heap, and cause pkexec to either execute a binary of their choice or execute arbitrary code?
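
(For anyone less familiar with the bug class: the "NUL byte off-by-one" means a string's terminating zero byte gets written one byte past the end of a heap buffer, silently clobbering the first byte of glibc's metadata for the adjacent chunk. A minimal, hypothetical C pattern of that mistake, not the actual glibc or pkexec code:)

  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>

  // Hypothetical example of the bug class: the buffer is sized for the string
  // but not for its terminating NUL.
  static void store_name(const char *attacker_controlled) {
      size_t len = strlen(attacker_controlled);
      char *buf = malloc(len);               // off by one: no room for the '\0'
      if (buf == NULL)
          return;
      strcpy(buf, attacker_controlled);      // copies len bytes plus a NUL; that NUL
                                             // lands one byte past the allocation and
                                             // overwrites the low byte of the next
                                             // heap chunk's size field
      printf("stored: %s\n", buf);
      free(buf);
  }

  int main(void) {
      store_name("attacker-chosen string");
      return 0;
  }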


Yes, you got it. Lots of privilege escalation exploits target a binary that runs as root (or a piece of code in the kernel) and take control of it. The objective is usually to spawn a new process, typically a shell, from the initial process: it will have the same permissions (root) but lets you run commands easily (instead of having to write specific code).
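
Roughly, the end state such an exploit aims for looks like this (illustrative C only, not the code from this exploit): once you control execution inside the privileged process, you promote the real uid and exec a shell that keeps root.

  #include <unistd.h>

  int main(void) {
      // Inside a process whose effective uid is 0 (e.g. a compromised setuid-root
      // binary), make the real uid root as well so the shell keeps its privileges...
      if (setuid(0) != 0)
          return 1;
      // ...then replace the process image with an interactive root shell.
      char *const argv[] = { "/bin/sh", NULL };
      char *const envp[] = { NULL };
      execve("/bin/sh", argv, envp);
      return 1;
  }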


This is one of the all-time great exploit writeups.


Agree completely! This is the best thing I've read on Hacker News in some time. Even if you've never been involved in security research, you'll learn and be entertained at the same time.


And the writer also shows honesty:

> Why the 32-bit edition? I’m not going to lie: I wanted to give myself a break. I was expecting this to be pretty hard so going after the problem in the 32-bit space gives us just a few more options in our trusty exploitation toolkit.


I happened to like this write-up a fair bit :)

https://web.archive.org/web/20080430162146/http://www.matasa...

(The original post on the matasano.com server seems to have been eaten; any chance that could be resurrected, tptacek?)


There are some things I liked on that blog, but a lot more stuff I didn't like, and sorting the wheat from the chaff is painful.


I was interested to learn that the kernel actually allows you to pass 15 million arguments via execve(), with each one allowed to be enormous.

It seems very much like asking for trouble - I can't offhand think of a good reason why this would be required.

I'm sure there are plenty of programs that have similar memory leaks with commandline args, as many authors might, not unreasonably, think that abuse would be prevented by the shell ARG_MAX, which is 2621440 bytes on many systems. Perhaps some sort of adjustable lower limit might be appropriate here.
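
(For what it's worth, the effective limit isn't a fixed constant on modern Linux; it's derived from the stack rlimit and can be queried at runtime. A tiny sketch:)

  #include <stdio.h>
  #include <unistd.h>

  int main(void) {
      // On modern Linux this reflects the kernel's per-exec limit on the total
      // size of arguments plus environment (roughly stack rlimit / 4).
      long arg_max = sysconf(_SC_ARG_MAX);
      printf("ARG_MAX here: %ld bytes\n", arg_max);
      return 0;
  }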


For one, arbitrary restrictions lead to their own problems, with people having to implement workarounds that make their code more complicated and bug-prone.

For another, notice that one of the features they used in this exploit was exactly that kind of "adjustable limit": changing ulimit values to defeat most of the value that ASLR provides.
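
As a sketch of that trick (a hypothetical wrapper, not the exploit's actual code): raising the stack rlimit to unlimited before exec'ing a 32-bit target reportedly pushes the kernel into its legacy bottom-up mmap layout, which makes library load addresses much easier to predict.

  #include <sys/resource.h>
  #include <unistd.h>

  int main(void) {
      // Roughly what `ulimit -s unlimited` does in the shell before launching
      // the target. Resource limits survive exec, so the (hypothetical) setuid
      // target inherits the unlimited stack and, on 32-bit, the more predictable
      // legacy mmap layout that comes with it.
      struct rlimit rl = { .rlim_cur = RLIM_INFINITY, .rlim_max = RLIM_INFINITY };
      if (setrlimit(RLIMIT_STACK, &rl) != 0)
          return 1;
      char *const argv[] = { "/usr/bin/pkexec", "--version", NULL };
      execv(argv[0], argv);
      return 1;
  }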

In general, I think that using safer languages, like Rust, Go, Java, or even modern C++, is more likely to be the answer than imposing new arbitrary limits. These kinds of buffer overruns, failures to remember to free memory, and so on are all too common in C, and can be avoided if you just use safer languages and constructs that do bounds checking and automatic memory management.


A lower limit could be troublesome with e.g. wildcards: `tar cf stuff.tar *`.


I don't know if it is related, but there used to be a hard limit on the total size of arguments passed to exec(), which shell wildcard expansion could easily hit; that's why xargs was so handy back when "rm *.txt" wasn't always possible.

That was changed some years ago: https://lkml.org/lkml/2007/8/19/44


At least in Bash, * still won't work if you have a lot of files with long names:

  $ for i in {1..99999}; do touch "this is a really long filename so lets see what bash does or does not will it fail or will it work"$i; done
  $ rm *
  bash: /usr/bin/rm: Argument list too long


When doing things like this, you'll hit the default ARG_MAX limit way before you get anywhere near numbers like 15 million arguments.

The 15 million craziness only applies to cases where you're calling executables in ways that don't go through the shell, such as execve().
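
An illustrative way to see the difference: build the argv yourself and hand it straight to execve(), so no shell is ever involved and only the kernel's limit applies (argument count kept modest here so it runs under default limits):

  #include <stdlib.h>
  #include <unistd.h>

  int main(void) {
      // Build a large argv by hand and exec directly -- no shell, no shell limits.
      // Crank n (or the size of each argument) up far enough and execve() fails
      // with E2BIG once the kernel's limit is exceeded.
      size_t n = 100000;
      char **argv = calloc(n + 2, sizeof *argv);
      if (argv == NULL)
          return 1;
      argv[0] = "/bin/true";
      for (size_t i = 1; i <= n; i++)
          argv[i] = "x";
      argv[n + 1] = NULL;
      char *const envp[] = { NULL };
      execve("/bin/true", argv, envp);
      return 1;   // reached only if execve() failed
  }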


geohot hunts bugs for Google now. I did not know that. Nice to see a happy ending there.


That's very impressive. It's also why you should be running a PaX/grsecurity-enabled kernel.


Would PaX/grsecurity have stopped this?

Is there a non-distro-breaking way to use PaX short of running Hardened Gentoo?


I just want to join the choir and thank the author/poster. Really great article. I managed to learn quite a few things and there was just enough detail given to go and look up any background information necessary to complete understanding.



