You don't need to install anything; you could put this on the first line of your file, and achieve the same effect, with just tools you already have installed:
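For example, something along these lines (this is just one way to write it; the exact flags and quoting vary between versions of the trick, and hello.c is a made-up name):

    //usr/bin/make -s "${0%.c}" && exec ./"${0%.c}" "$@"; exit

    #include <stdio.h>

    int main(void)
    {
        printf("hello from a C script\n");
        return 0;
    }

chmod +x hello.c and then ./hello.c builds and runs it; make only rebuilds when the source has changed, and "$@" passes your arguments through to the compiled program.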
It's a hack because it relies on the fact that, without a shebang line or the appropriate magic number for your executable format, most platforms interpret the file as a shell script; and it uses the fact that // can be treated as a comment delimiter in C or as a redundant way of referring to / in shell. So the first line is interpreted as a shell script that calls out to make to build the input file, then executes the result, then exits so the rest of the C source isn't interpreted as shell; and that same first line is interpreted as a comment in C.
Apparently I had a reading comprehension failure. I thought he was just saying that you can use that as an alias instead of downloading a new command. That is neat.
I thought of that, and even tried it, but on Mac OS X it's in /usr/bin/true, while on my Debian system it's /bin/true, defeating the purpose of making this path independent.
I considered a few no-ops, like /bin/cat (but that would then consume stdin) or /bin/echo -n, but there may be cases in which these can't be relied on either, so I figured that just keeping it simple but relying on the location of make was the better option.
Sure, you can do that. I didn't because it's (a) longer (and I was trying to do some minor golfing to keep it under 80 characters) and (b) doesn't have any stronger of a guarantee that it will be located in /usr/bin than make does.
There's more than one way to do this. It's just a silly hack; I decided to keep it simpler as the other alternatives didn't seem strictly better.
Nope, there are systems in which it's in /bin/env, though you have to go fairly exotic before you find one that doesn't at least have a symlink in /usr/bin. I don't believe POSIX says anything about it.
I tried this out in my (wacky homebrew) shell, and it didn't work (ENOEXEC). Then I tried it in bash and it did. Turns out, this will only work when started with execvp, not when started with execv. All of the GNU adverbs I tried seem to get this right (nice, xargs, nohup), but I wouldn't be surprised to stumble across this issue later somewhere.
I'm pretty sure I've seen a case in which a particular script worked when called directly from Bash, but not when invoked by other things like xargs or nohup, because Bash will actually execute scripts under itself, while execvp will execute them under /bin/sh, which is Dash on Debian/Ubuntu systems.
In fact, it was even better than that; they had even used #!/bin/bash, but had whitespace before it, causing the shebang to just be treated as a comment and not as an interpreter directive.
Looks like it's fairly standard for shells to have this behavior, and execvp() is intended to behave like the shell would: search the path to find the executable, and if the underlying execve() fails with ENOEXEC, hand the file to the shell interpreter instead. May be a feature to add to your wacky homebrew shell.
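The fallback itself is tiny; a rough sketch of the idea (the helper name and the fixed-size argv here are just for illustration):

    #include <errno.h>
    #include <stdio.h>
    #include <unistd.h>

    /* Try to run a program directly; if the kernel rejects it with ENOEXEC
     * (no shebang, no recognized magic number), retry it as
     * "/bin/sh path args..." -- roughly what execvp() and most shells do. */
    static void exec_with_sh_fallback(const char *path, char *const argv[])
    {
        execv(path, argv);                  /* only returns on failure */
        if (errno == ENOEXEC) {
            char *shargv[64];               /* fixed size: illustration only */
            int i = 0;
            shargv[i++] = "/bin/sh";
            shargv[i++] = (char *)path;
            for (int j = 1; argv[j] != NULL && i < 63; j++)
                shargv[i++] = argv[j];
            shargv[i] = NULL;
            execv("/bin/sh", shargv);
        }
        perror("exec");
    }

    int main(int argc, char *argv[])
    {
        if (argc < 2) {
            fprintf(stderr, "usage: %s program [args...]\n", argv[0]);
            return 1;
        }
        exec_with_sh_fallback(argv[1], &argv[1]);
        return 127;                         /* both exec attempts failed */
    }

Point it at a shebang-less script and you can see the difference between plain execv() and the execvp()-style retry.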
That's a lot less elegant than my version. Multiple lines? Using sed to filter out the extraneous lines? Hardcoding gcc rather than using the system CC via make? And it gets the arguments wrong, too.
help_msg() {
>&$1 echo "Usage: $0 [file.c...
>&$1 echo "Execute C progams from the command line."
...
}
First, that it puts the redirection at the beginning of the line, which is unusual, and I didn't even realize until now that it's valid. (Example: >&55 redirects stdout to file descriptor number 55, and here >&$1 redirects the stdout of echo to the file descriptor given as the first argument to the function.)
# help if we have no arguments and no stdin
if [ $# -eq 0 ] && [ -t 0 ]; then
    help_msg 2 # <--- NOTE 2 = stderr
    exit 1
fi
# help if we get the flags
if [ "$1" == "--help" ] || [ "$1" == "-h" ]; then
    help_msg 1 # <--- NOTE 1 = stdout
    exit 0
fi
And second, that the author seems to switch between outputting the help_msg on stdout or stderr, depending on whether stdout exists. I was always under the impression that only the actual script result ought to go to stdout; personally, I always send general debugging output, error messages, and also the usage text unconditionally to stderr.
He's following the philosophy that says when you run "program --help", the expected output of the program is the usage info and therefore it should go to stdout, but when you run "program <invalid args>" and it prints the same usage info, that is an error message and should go to stderr.
The advantage of outputting help text to stdout is that you can pipe it to a pager.
And since getting help is not an error, the exit code is 0.
From what you pasted it doesn't seem to check whether stdout exists, but whether stdin is a terminal. The intended use case is "your_program <input_file" => read from stdin; "your_program" => complain that you forgot a cmdline parameter (instead of blocking waiting for you to type something).
This is just a script that invokes whichever compiler is set in the "CC" environment variable. If you set CC=tcc, it's basically the same as "tcc -run file.c", but I suppose this is a little bit nicer: "c file.c".
I would probably use this with Clang rather than TCC if Clang compiles fast enough, which it generally does.
If I understand it correctly, tcc is actually a compiler. C is just a script that compiles-then-runs, using whatever compiler you have set in your environment.
1. There's no guarantee that you can run anything directly out of /tmp/. IIRC lots of distros mount /tmp/ with noexec specifically so you can't do this. You might still be able to invoke the dynamic loader (ld.so) directly to run it, but that's still kind of a hack to get around the noexec.
2. You need write access to the .c file. That means you can't install any scripts using this system-wide, because you won't have write-access to the .c source unless you're root.
IMO, the most obvious solution to the second is to make a copy of the .c source and edit that instead. AFAIK there isn't an easy solution to the first issue though.
This doesn't appear to cache compiled "scripts", which to me makes it kind of useless. It's nice not to leave binaries lying around, but I'd expect things to be saved (otherwise it's pretty slow).
It would probably be quicker if it cached, but don't underestimate the speed of gcc...
A year or so ago, as part of my work on Project Clearwater (http://www.projectclearwater.org/), I was using a Ruby tool to retrieve statistics from a server, plumbing them into Cacti (http://www.cacti.net/) and graphing the results.
Cacti likes to restart statistics-gathering processes every time it wants new statistics (once a minute in my system), so this meant starting the Ruby interpreter every minute.
Project Clearwater is built to be scalable, so I turned up a few hundred nodes (EC2 makes this easy), at which point Cacti couldn't keep up - it took the best part of a second per Ruby process invocation, and since I was polling every minute, there just wasn't enough time to get through all the nodes.
I rewrote the Ruby tool in C++, at which point it ran in less than 0.1s, which was fast enough for what I needed.
Amusingly (at least to me), it actually _compiled_ (under GCC) and ran in less time than it took for the Ruby interpreter to start.
(This is not intended to be a comparison of the merits of C++ and Ruby. It's quite possible the Ruby code could have been optimized and really I was solving the wrong problem - I probably should have been making the statistics-gathering process long-lived. The point of the above is just that GCC is actually very quick for relatively small programs.)
I contributed that, anonymously. Previously, the task had been marked "omit from C", would you believe it!
Also, note the little "Student exercise" below the code. For this to be useful, you want to cache the compiler output; you want to recompile the underlying executable only if the C script has changed.
The inconvenience of invoking C programs obviously isn't the real obstacle to its use as a scripting language, otherwise this kind of thing would be widely used.
I have done something like this and use it for simple tests of my assumptions [1]. This one from the OP is more polished, I suppose. I must check if it supports (with a shebang) putting a "c-script" in a pipeline.
As was already mentioned, Fabrice Bellard's tcc is great in this regard. There have been similar projects done with LLVM.
What I would really like to have is some kind of compiler or alternative C preprocessor that would implement modules, such that building C would be as simple as building Go programs. The price for it, I suppose, would be macros. I think it's possible.
I do something similar when working on single-file programs, but without any external dependencies, by making the file both a valid shell script (for my shell) and a valid C program. My example was in Swift, although the same idea works for C too.
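A rough sketch of the C-flavored version (the /tmp cache path, the -nt mtime test, and the -x c flag are just illustrative choices):

    #if 0
    # These lines are executed by /bin/sh and skipped by the C compiler.
    bin="/tmp/$(basename "$0").bin"
    # Recompile only when the source is newer than the cached binary.
    [ "$bin" -nt "$0" ] || cc -x c -o "$bin" "$0" || exit 1
    exec "$bin" "$@"
    #endif

    #include <stdio.h>

    int main(void)
    {
        printf("compiled once, cached under /tmp\n");
        return 0;
    }

There's no shebang (a #! line wouldn't be valid C), so running the file directly leans on the same ENOEXEC-to-shell fallback discussed upthread.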
My approach avoids recompiling the program every time you run it, and it also makes it much easier to debug crashes. At least when I was doing this, running `swift` directly produced really bad crash info when things went wrong.