The explanation is completely incorrect. The outer invocation of `yes` is never actually started. The reason is that the inner invocation is being used as an argument to the outer invocation, which means bash needs to know the entire output of the inner invocation before it even starts the outer one.
Instead, all you're seeing is bash thrashing as it tries desperately to capture all the output of `yes no` in-memory.
I expect you'd see the exact same behavior with echo `yes no`.
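A bounded version makes the mechanism easy to see: bash runs the substitution to completion and captures its whole output before the outer command ever executes (using `head` here so the stream is finite):

```shell
# Bash collects the inner pipeline's entire output first, then invokes
# echo with that output as its arguments. head bounds the stream so
# this terminates instead of eating memory forever.
echo $(yes no | head -n 3)
# prints: no no no
```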
Now if you really want to screw over your machine, the classic bash fork bomb is `:(){ :|:& };:`
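For the curious, the bomb is just a function named `:` that pipes itself into itself in the background and then gets called once. Here it is with a readable name; defining the function is harmless, but actually calling it should only ever be done in a throwaway VM:

```shell
# :(){ :|:& };: decoded, with "bomb" in place of ":".
# Defining this does nothing; CALLING it doubles the process count
# until the system runs out of PIDs or memory.
bomb() { bomb | bomb & }
# bomb    # <- uncomment only in a disposable VM
```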
I've always preferred this tougher command for trashing VMs I was about to delete:
dd if=/dev/random of=/dev/sda
Depending on how long it runs, you end up with a more or less screwed computer. It's funny to watch the machine fall over when you invoke a simple `cd` command afterwards.
It's not so funny when you do this accidentally, on a 2 TB disk full of user data, on a live production server, at 2AM on a Friday night...
(The server was part of a MogileFS cluster so there were multiple copies of the data online. There was no data loss, not even any downtime. Still, it was scary as hell, and I spent all Saturday in the data center restoring the box.)
I did this with a SQL DELETE. While trying to isolate a set of records with a WHERE clause, I instead matched all of them, in an effort to test a production installation, at 3am (YES), and had to buy 3 DB admins lunch the following day because they came in to restore multiple customer tables. The DELETEs were triggered across numerous tables and platforms (AS400 and SQL Server). The walk of shame to tell my director was not my most shining moment. I'd like to go back in time and slap myself just moments before. Running commands on live production data. Silly programmer.
At least with MySQL's tools, thanks to ssh's happiness to work as a pipe, it's very easy to clone a database locally for fucking about when you're trying to do something like this:
ssh remotehost.example.com mysqldump -udbuser proddatabase | mysql -uroot testdatabase
(N.B.: watch out for mysqldump options that may lock your production tables during the dump.)
The above line of code successfully fixed a bug I'd been trying to find for weeks in source code in the same directory. When I rewrote it from scratch, the bug was gone.
I find rm -i to be unusable because it is so annoying when you are deleting more than a couple files. It would be nice if rm -I (notice the capital) would first determine all of the files to remove, print them out and then ask if you want to delete all of those files. Instead it just says "rm: remove all arguments?" which is clearly much less annoying than having to type y for every file, but it is also mostly useless.
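For reference, this is roughly what GNU rm's `-I` does today: a single prompt when removing more than three files, though it only shows a count, not the file names. A quick sketch (assumes GNU coreutils; the `yes` pipe auto-answers the prompt so the snippet runs unattended):

```shell
# rm -I prompts once ("remove 4 arguments?") instead of once per file.
mkdir -p /tmp/rmI-demo && cd /tmp/rmI-demo
touch a b c d
yes | rm -I a b c d    # single prompt, answered "y" by the yes pipe
ls                     # directory is now empty
```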
I usually don't alias the command itself; that's bad form. It limits what you can do and screws you when you are using an environment without your alias when you forget it. Live and learn.
Exactly -- I was benchmarking a disk that was having a performance problem, and I did something like "dd if=/dev/zero of=/dev/sda" instead of "of=/testfile".
That doesn't sound so bad. /dev/random is slow, you have plenty of time to catch it. It probably won't even reach the first partition. urandom on the other hand...
We wrote our own shell when I was at university (which was completely offensive, not at all obedient and generally no help whatsoever) and chsh'ed them if they left a terminal open :)
(for ref, we had to do very naughty things to SunOS 4 to add it to /etc/shells)
Back when `/dev/mem` was a thing in default kernels (now you need a special config, or a module), I enjoyed `cat /dev/urandom > /dev/mem`. That would usually bork things pretty quickly, so then I would set random offsets with `dd`. Fun times.
It picks a random device on your system -- ram, video, bios, etc -- and writes a random number of random bytes into it and then sleeps for a random amount of time.
On Linux, the hard disk. This command, dd, copies data in bulk from /dev/random (a special device on *nix that outputs random bytes) to /dev/sda (your hard disk). That means it starts to overwrite your disk with trash, rendering the system unusable.
The device file applications can use to access the first hard drive on most Linux systems. It provides raw access without filesystems or partitions.
/dev/sda1 is the first partition on that drive, /dev/sda2 the second, and so on. /dev/sdb is the next hard drive, /dev/sdc the next after that; beyond /dev/sdz, the naming scheme is apparently dependent on the hardware driver in use: going from /dev/sdz to /dev/sdaa is what happens with the default SATA and SCSI drivers, up to /dev/sdzzz, at which point you apparently run into problems. [1]
> Now if you really want to screw over your machine, the classic bash fork bomb is `:(){ :|:& };:`
Actually, you're wrong. At least on a mac (IIRC) it caps the process number so fork bombs just fill the console with errors. I'm sure you can get around it but that kind of defeats its simplicity.
EDIT: I mean, don't get me wrong, it still really bogs your computer down, but you can still kill the parent bash process in a few seconds.
I don't know about linux (and I really should in this case as it's my 'field'), but at least on mac, you can restrict the number of child processes a process can spawn with `ulimit -u`:
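On Linux the same `ulimit -u` exists (backed by RLIMIT_NPROC). A sketch of what that looks like, with an arbitrary example value:

```shell
# Cap the number of processes this subshell and its children may
# create; a fork bomb run under the cap fails fast with
# "fork: retry: Resource temporarily unavailable" instead of taking
# the whole machine down.
(
  ulimit -u 256    # arbitrary example cap
  ulimit -u        # confirm the new soft limit
)
```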
It's interesting that bash doesn't figure out the system's maximum command line length and either error or truncate if any single argument exceeds that length.
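To be fair, that length limit belongs to the kernel's execve(), not to bash: builtins like `echo` never cross execve, so it is not obvious which limit should apply. You can query the kernel's limit directly:

```shell
# Maximum total size, in bytes, of arguments plus environment that
# execve() will accept when launching an external command. Shell
# builtins bypass execve entirely, so they are not bound by this.
getconf ARG_MAX
```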
I'm proud that I actually found a bug in yes over a decade ago. Yes should exit when whatever it is talking to exits, but under some circumstances doesn't and instead goes into an infinite loop.
`yes` will do the same thing. It's just accumulating all the output from yes inside your shell. The yes command is actually pretty crappy for making fork bombs.
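The normal exit behavior is easy to demonstrate: as soon as its reader goes away, the next write() by `yes` fails with SIGPIPE and it dies:

```shell
# head exits after one line; yes's next write into the broken pipe
# delivers SIGPIPE, so the whole pipeline terminates immediately.
yes | head -n 1
# prints: y
```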
I'm surprised bash wasn't able to allocate 16 exabytes, though I will accept it's a fairly large step up from having 4GB allocated. But surely your machine had heaps of memory?
With the default overcommit settings, Linux will let you allocate far more memory than actually exists; allocation calls rarely fail based on available memory. Instead, when you write to the memory, the kernel maps pages in on demand, and if none are available it triggers the OOM killer, which attempts to kill misbehaving processes.
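On Linux, whether an allocation can fail up front is governed by the overcommit policy, which you can inspect:

```shell
# 0 = heuristic overcommit (default: wildly oversized single requests
#     are refused, most others succeed)
# 1 = always overcommit, never refuse
# 2 = strict accounting: allocations can fail at malloc time
cat /proc/sys/vm/overcommit_memory
```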
To clarify for anyone not familiar with rm: running `rm /` is such a stupid thing to do that rm has a special case where it won't let you delete "/". To override this behavior, you need the flag "--no-preserve-root".
(as others have noted, the program actually doing something here is bash: it attempts to dynamically allocate as much memory as it can to store the output of 'yes no'. Hopefully the author discovers ulimit.)
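In the spirit of that hint, a sketch of using ulimit so the experiment fails fast instead of thrashing (the 256 MB figure is arbitrary):

```shell
# Cap the address space of a subshell before trying the experiment;
# bash's allocations then fail with an error instead of swapping the
# machine to death.
(
  ulimit -v 262144     # virtual memory cap in KB, ~256 MB
  ulimit -v            # confirm the cap
  # echo `yes no`      # uncommented, this now dies quickly with an
                       # allocation error instead of eating all RAM
)
```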
In school my friends and I used to have little "hacking" competitions -- who could do the most damage to the other's computer using only an SSH session.
An old favorite of mine was the elegant: `yes > no` [1]
[1] I'm fairly certain I made this up... though it's obviously trivially easy to "discover" on your own.
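What `yes > no` actually does is fill a file named `no` with "y" lines until the disk is full. A bounded version shows the effect without the damage:

```shell
# Same idea, capped at five lines so the disk survives.
yes | head -n 5 > no
cat no            # five lines of "y"
wc -l < no        # 5
rm no
```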
This thread is one of the best I've seen on HN. 1001 ways to screw yourself in bash. Love it. My contribution is a message from Mr. Odus himself- also a Reverend.
Interesting read. Another use I have found for "yes" is separating terminal output. Since "clear" doesn't delete your terminal scrollback, it can be easy to scroll up and get confused about what you're looking at. I've found this especially useful when compiling test programs: separating build output with "yes" means you're not hunting down compiler errors/warnings that you have already fixed :) You can even be descriptive with your yes call, e.g. "yes fixed x checking to see if y is still broken"
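For example (interactively you'd interrupt with Ctrl-C after a moment; `head` bounds it here so the snippet terminates on its own):

```shell
# Print a recognizable band into the scrollback between two builds.
yes '==== fixed x, checking whether y is still broken ====' | head -n 10
```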
I was a sysop for the University of Florida CS department once upon a time. It was always fun when students first learned about fork(). This post reminded me of that.
Interestingly enough, the lxterm I run echo `yes no` or yes `yes no` in dies after allocating somewhere around 4GB of RAM. I would expect this on a 32-bit or 32-bit PAE kernel, but I don't understand why on a 64-bit kernel.
It isn't -- it's just a typical 32-bit signed integer overflow that gets sign-extended to create a 64-bit unsigned integer. The previous size was probably just under 2 GB for this string. My guess is that “4296822784 bytes allocated” (which is already a little over 4 GB) refers to all the heap memory allocated so far, not just for this one string, which was actually slightly under 2 GB long.
Bash is full of bugs like this; e.g. on a 64-bit system try doing echo $[2**63/-1].
Looking at this blog, I started to say to myself, "Wow, the Subtle network is getting worse and worse contributors," and then I realized that this blog just _looks_ a lot like a Subtle blog. Then I wiped my forehead, mostly because I expect not to read things that actually have to tell me that the thing that makes Unix different from MS-DOS (uh, what?) is "the terminal".
The effect is interesting. I ran it and, sure enough, started running out of memory. However, simply killing the yes process didn't stop it. I had to kill the bash process in which I had typed the command.
Probably `yes` was just blocked waiting for the write buffer to clear so it could write again, while bash was busy asking the system for more memory, which in turn was busy swapping everything out to make room.