This article is good in that it shows that starting processes boils down to the system calls fork(), exec(), wait(), etc.
My shell has function calls, loops, conditionals, pipelines, subshells, command substitution, redirects, here docs, etc. (The code has some messiness, but the architecture is clean IMO.)
It turns out that these features require just a few more system calls -- dup2() and fcntl() to manipulate file descriptors, pipe(), and open() / read() / write() / close(). It's a good reminder of how simple Unix is. The complexity of the shell is basically all in string processing, and not in interfacing with the kernel (although the interactive parts of the shell do complicate things).
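For concreteness, here's a minimal sketch (not code from the article or from my shell) of how a two-command pipeline like `ls | wc -l` maps onto pipe(), fork(), dup2(), exec(), and wait():

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/wait.h>
    #include <unistd.h>

    /* Run "ls | wc -l": one pipe, two children, parent waits for both. */
    int main(void) {
      int fds[2];
      if (pipe(fds) < 0) { perror("pipe"); exit(1); }

      if (fork() == 0) {                  /* left side: ls */
        dup2(fds[1], STDOUT_FILENO);      /* stdout -> write end of the pipe */
        close(fds[0]); close(fds[1]);
        execlp("ls", "ls", (char *)NULL);
        perror("execlp"); _exit(127);
      }
      if (fork() == 0) {                  /* right side: wc -l */
        dup2(fds[0], STDIN_FILENO);       /* stdin <- read end of the pipe */
        close(fds[0]); close(fds[1]);
        execlp("wc", "wc", "-l", (char *)NULL);
        perror("execlp"); _exit(127);
      }

      close(fds[0]); close(fds[1]);       /* the parent must close both ends too */
      while (wait(NULL) > 0) {}           /* reap both children */
      return 0;
    }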
These system calls are all available in Python, so the code ends up being comparable to what you would write in C (and I'm in the process of porting to C++ now that the architecture has settled.)
One of the quick tests I like to do to ascertain the complexity of a codebase is "how long does it take to find the entrypoint" and "how long does it take to find the core functionality"?
In this case, I have been clicking around through nearly empty directories for 15+ minutes and haven't seen anything that looks like an entrypoint yet, never mind the main loop.
In contrast, I downloaded the Bash source and within 5 minutes found the main loop (eval.c) and entrypoint (shell.c).
Now I'm curious to know what the cmd.exe source is like, but MS seems to have no intention of releasing that.
It's instantiating the execution engine (.NET-speak for the core runtime), getting ready to run Monad (the original codename for PowerShell).
I think part of it is familiarity and context. I knew none of the command bits would be likely to contain an entrypoint. Directories that mentioned "native" looked useful for finding an entrypoint, because I know that PowerShell is implemented in C# - I reasoned I'd be able to find further managed entrypoints by looking at where it starts out in C++.
You can see that the main object it instantiates is Microsoft.PowerShell.UnmanagedPSEntry. Looks useful.
- I had the exact same problem - "how do I get at the entrypoint and follow this" - only earlier today, with Chromium. I was trying to figure out why sync was broken and thought I'd try and follow the (C++) breadcrumbs. It didn't go very well. https://bugs.chromium.org/p/chromium/issues/detail?id=671498
- AFAIK, cmd.exe is basically a horrible, horrible hack. I'm not 100% sure, but I think an instance of cmd.exe is the only way you can run things with a stdout, since Windows has no concept of TTYs - the cmd.exe process uses a bunch of undocumented kernel interface APIs to "make it work™". Related: https://github.com/Microsoft/BashOnWindows/issues/111#issuec...
Edit/Update: PSA, if you use a sync passphrase with Chrome your history from other devices doesn't show up in chrome://history. Known bug, will hopefully be rearchitected long-term.
I have to say though, the writeup on that Chromium ticket I filed was absolutely awesome. Did someone here notice...? :P
Well, in addition to being huge (850K lines of C# by my count), I would say it's only marginally relevant to someone who wants to write a Unix shell. The parts of the kernel interface that a Unix shell uses -- fork/exec/wait/pipe/dup -- are exactly the things that Windows does in a completely different manner (whereas the file system is probably more similar).
The shell is almost what distinguishes Unix from Windows, and that is apparent at the lowest levels.
I've skimmed oilshell, but couldn't find what it does differently. It just says everywhere that it is a shell that can do shell work. If it is just a normal shell, why would I want to use it instead of bash or others?
The main thing is that it will be an entirely new programming language that you can convert bash scripts to. That's why I went to the trouble of writing a pretty complete bash parser as well as an executor.
I mention that on the blog, but I haven't written about it much since I haven't implemented the language yet. There's a big design doc, but nothing is implemented.
Have you ever written a completion function in bash? IMO it's kind of a nightmare. That should be one of the initial use cases for a better language. It would be closer to writing it in Python or ES6 rather than bash.
If you just wanted to use sh syntax, then the reason you would use it is because it will have better parse errors and better runtime errors, which I talk about on my blog. But right now I don't actually expect anyone to use it, because it's in the early stages, like I say.
See my reply -- that's why I posted it here, so people could see an example of the algorithms and data structures for shell, but that's not the purpose of the project.
How would it be safer? You should never execute untrusted code in the shell.
And bugs that would cause problems in the shell are unlikely to be of the unterminated string type, but more likely to be logic errors (redirect to the wrong file, mess up globbing, etc), and Python would not help you there.
In fact thinking that Python makes you safer is a very dangerous attitude. Someone programming with that mindset is likely to have less safe code than a C programmer who knows he has to do it right.
The security bugs in shells are not of the "executing untrusted shell scripts" variety, but more complex.
Shell scripts are often handling untrusted input and dealing with an untrusted multiuser filesystem; there are lots of ways these can go wrong. Yes, a lot of these are not memory safety bugs. But some are.
You have logic errors no matter what programming language you choose. But only in some programming languages do you have the additional error class of memory corruption. Hence, choosing a memory-safe language is probably safer (assuming it's not so complicated to use that you introduce more logic errors).
This is very well written, and it seems the author has already used the great code reviewing powers of the Internet (there are references to improvements from Reddit comments).
I suffer from compulsive C review disorder, so just a few quick points:
- The pattern "x = realloc(x, ...)" is a bad idea, it leaks the original memory on failure. Since the code seems to care about its memory management (and since it's a tutorial), I think this is worth mentioning (and fixing).
- I generally recommend against scaling string allocations by sizeof (char); that's always 1 so it doesn't do anything. Some people seem to feel that it's part of a general usage pattern that makes the intent more clear, though.
- I would recommend more "const" and "static", but some people seem to not care.
- I would use more size_t rather than int for things that are sizes. This is more clear, and also safer since it's unsigned.
- Using strtok() is not always the best choice, it has issues, but here it's basically doing what it was designed to do I guess.
> The pattern "x = realloc(x, ...)" is a bad idea, it leaks the original memory on failure. Since the code seems to care about its memory management (and since it's a tutorial), I think this is worth mentioning (and fixing).
If you look at what the code does after that...
    if (!buffer) {
      fprintf(stderr, "lsh: allocation error\n");
      exit(EXIT_FAILURE);
    }
...there is nothing to "fix" because there is nothing more the code can do -- the whole process is gone after that; unless you mean getting into the intricacies of trying to recover from out-of-memory situations, which quickly opens its own huge can of worms (does fprintf() itself do an allocation? Could that allocation fail? What to do if printing the error message fails? Etc. etc.)
In general, I think the "this pattern is bad" sort of "thinking" is itself a bad idea; when writing and examining code, you should be looking at each situation on its own and evaluating it, not just trying to match everything into one "pattern" or another. Does "x = realloc(x, ...)" make sense in all contexts? No. Does it make sense here? Yes. Otherwise it is a path towards mindless cargo-cultism, something which beginners should certainly be discouraged from doing.
Yeah, good point about the exit(), I realized that and mentioned it in a later comment (below).
I do really try to look at the situation, but that doesn't change the fact (at least in my experience) that there are patterns of "bad code" that seem to have a life of their own.
Probably caused by things being unintuitive, and/or more complex than people first realize, or something.
I'm really opposed to cargo cult programming, so no argument there. :)
> The pattern "x = realloc(x, ...)" is a bad idea, it leaks the original memory on failure. Since the code seems to care about its memory management (and since it's a tutorial), I think this is worth mentioning (and fixing).
What would the alternative be here? Allocate a new region, copy everything over and free the other region? Or is there something better?
> I generally recommend against scaling string allocations by sizeof (char); that's always 1 so it doesn't do anything. Some people seem to feel that it's part of a general usage pattern that makes the intent more clear, though.
Isn't this just a habit born out of portability concerns?
Assign the result of realloc() to a temporary variable instead; then if it fails, you can still free(x) before continuing. Of course, if you're going to exit() due to the failure it doesn't matter, and perhaps that's why the original code looks that way. I still think it's worth pointing out, though.
About sizeof (char), there can be no "portability concerns", and that very thing is what I think gives rise to a lot of (in my opinion, bad) cargo cult programming. The value of "sizeof (char)" in C is 1. Note that I didn't say "using GCC x.y.z on x86". It's true for all conforming compilers, on all platforms, and on all architectures. It's defined by the language.
> What would the alternative be here? Allocate a new region, copy everything over and free the other region? Or is there something better?
No.
If realloc() fails it returns NULL, in which case the original `x` is still valid. However, with `x = realloc(x, ...)` you no longer have a reference to the original `x`!
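In code, the usual alternative looks roughly like this (an illustrative sketch; the function and names are mine, not the tutorial's):

    #include <stdio.h>
    #include <stdlib.h>

    /* Grow buf to new_size without leaking it if realloc() fails. */
    char *grow_buffer(char *buf, size_t new_size) {
      char *tmp = realloc(buf, new_size);
      if (!tmp) {
        free(buf);               /* realloc failed: buf is still valid and still ours */
        fprintf(stderr, "lsh: allocation error\n");
        exit(EXIT_FAILURE);      /* or return NULL and let the caller recover */
      }
      return tmp;                /* only replace the old pointer on success */
    }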
My university has a project where you implement a shell in C from scratch with most of the commonly known features of bash -- scripting, file redirection, command substitution, the whole nine yards. Needless to say it's a mess by the end.
Did that for the OS class in 90/91 at UND. Class was taught using an AT&T 3B2. First non-PC class I had that was not on VMS or an IBM mainframe. Was a very interesting project, and keeping the code clean was a bit of a pain.
A nice complement to this article is the MIT 6.828 shell exercise. It provides the command loop and parsing for you, and then has you implement command execution, redirection, and pipes.
The classic exposition of writing a shell, for me, is Advanced UNIX Programming by Marc Rochkind (Bell Labs guy and author of SCCS), not to be confused with W. Richard Stevens' text, which is also excellent.
As far as I can tell, it's about using the kernel programming interface, which is of course very relevant to writing a shell. But I don't see anything about the shell specifically here:
I tried to implement a shell in C once, just to see how to do it. Until I realized I was basically just writing a shell language interpreter, and thus implementing a lexer and parser for that language manually (which it seems is exactly what happens in the article).
Which made the whole project boring to me, since it would've just come down to 3 well-defined, already solved problems:
1. Write the shell language grammar file
2. Use something like flex/bison (nowadays maybe ANTLR, or whatever) to generate the shell language parser
3. Implement the behaviour of the shell as reactions to parser events
And that wasn't interesting because:
1. I wanted to be POSIX compatible; the POSIX shell language is well defined, so writing the grammar for that would've been boring
2. Boring by design; the whole point of using parser generators is to make compiler/interpreter writing an easy, boring task
3. Arguably, this would've been interesting, to see how to do it. But if you already have a deep understanding of how a *nix shell does what it does, it's not that interesting anymore...
A shell is indeed an interpreter for a programming language, and you do need a lexer and a parser (actually 4 parsers, and my lexer requires 13 modes).
But that's not what's happening in the article -- he is showing a simple REPL and fork() exec(), which is about as much as you can expect to do in an article that long. lsh_split_line() in no way resembles a real shell lexer, and there is no parser, since there are no programming language features like function calls, loops, and conditionals. Not to mention pipelines, subshells, and redirects.
I think you're overestimating the power of flex, bison, and ANTLR. I actually ported the POSIX shell grammar to ANTLR -- it's not usable as the basis for a shell parser. The POSIX grammar also only covers 1 of 4 sublanguages in shell, and it's a significantly smaller language than bash.
Bash uses bison/yacc, and maintainer Chet Ramey talks about how it was a mistake here:
Well, very interesting indeed! Cool that you actually went through with writing a complete shell :)
I guess I might've been too optimistic about the power of parser generators. Still, I think if someone were to start implementing a shell, I'd advise them to start with the grammar/lexer/parser part, either implementing them themselves or trying some of the generators. I think it's the right way to go about it (it just wasn't what I thought I'd be doing when implementing a shell, at the time).
I did do a few things with parser generators though. Granted, nothing (yet) as complex as bash or even just POSIX, but still, each of them had its own little bits of trickiness. E.g.:
Great links. Would have come in handy a year ago. I built most of a (d)ash clone in Go as part of my undergrad dissertation. https://github.com/danwakefield/gosh
It's not a shell up to late-1980s standards until it handles POSIX job control (Ctrl-Z suspend, move a job from the background to the foreground) and traps Ctrl-C interrupts to return to the prompt.
Adding some of these requirements as an afterthought into a command interpreter will likely be a pain in the butt. Ctrl-C is usually done with a longjmp(). Unplanned introductions of setjmp/longjmp into a code base could wind up with leaks or stability issues. (Side note: don't write a shell in C: pick some open source managed language in which it is safe to throw an exception out of a signal handler.)
Another requirement is arranging pipes and handling out-of-order termination of the pipeline elements; a shell must juggle multiple child processes in order to support pipes. The processes in a pipeline are put into a process group, called a job, etc.
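Roughly, the foreground-job dance looks something like this (a simplified sketch that glosses over races, error handling, and background jobs):

    #include <signal.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    /* Run one command as a foreground job with rudimentary job control. */
    void run_foreground(char *const argv[]) {
      signal(SIGTTOU, SIG_IGN);            /* so the shell can take the terminal back later */
      pid_t pid = fork();
      if (pid == 0) {
        setpgid(0, 0);                     /* child: new process group (the "job") */
        signal(SIGINT, SIG_DFL);           /* restore default Ctrl-C / Ctrl-Z handling */
        signal(SIGTSTP, SIG_DFL);
        signal(SIGTTOU, SIG_DFL);
        execvp(argv[0], argv);
        _exit(127);
      }
      setpgid(pid, pid);                   /* parent sets it too, to avoid a race */
      tcsetpgrp(STDIN_FILENO, pid);        /* hand the terminal to the job */

      int status;
      waitpid(pid, &status, WUNTRACED);    /* returns on exit *or* when the job is stopped */

      tcsetpgrp(STDIN_FILENO, getpgrp());  /* take the terminal back before prompting again */
    }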
Without all these features, you don't have a Unix shell; just an interpreter for a simple command language.
I am in agreement - if you're not handling pipes or process groups correctly, then you don't have a Unix shell.
That said, I don't really see why Ctrl-C has anything to do with `longjmp` - and I say that having written a shell capable of doing the above. When talking about TTY functionality, the OS (or whatever handles the TTY layer) handles the Ctrl- keys. All the shell has to do in that instance is use `waitpid` (or similar) to wait for a process to die/suspend/etc. and then decide what to do (such as changing the foreground process group). When you press Ctrl-C while running a process, the shell never receives a SIGINT; only the current foreground process does.
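In other words, the shell just inspects the status that waitpid() hands back; roughly (a sketch with illustrative names, not from any particular shell):

    #include <stdio.h>
    #include <sys/types.h>
    #include <sys/wait.h>

    /* After waitpid(pid, &status, WUNTRACED), decide what happened to the job. */
    void report_status(pid_t pid, int status) {
      if (WIFEXITED(status))
        printf("[%d] exited with %d\n", (int)pid, WEXITSTATUS(status));
      else if (WIFSIGNALED(status))
        printf("[%d] killed by signal %d\n", (int)pid, WTERMSIG(status));   /* e.g. Ctrl-C -> SIGINT */
      else if (WIFSTOPPED(status))
        printf("[%d] stopped by signal %d\n", (int)pid, WSTOPSIG(status));  /* e.g. Ctrl-Z -> SIGTSTP */
    }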
For Ctrl-C and longjmp(), are you talking about builtin shell commands? I think it will "just work" for external processes. They receive SIGINT and terminate, and then the shell gets control again.
I guess I need to do a test of doing some expensive computation in bash, and then hitting Ctrl-C. Like computing the Mandelbrot set or something.
I think you can use any of C++ exceptions, explicit returns, or setjmp()/longjmp(). setjmp()/longjmp() doesn't work well with C++ destructors. The interpreter loop just has to check for a flag set by the signal handler and then pop back to the interactive loop.
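That flag-plus-check idiom is roughly this (a sketch with hypothetical names):

    #include <signal.h>

    static volatile sig_atomic_t interrupted = 0;

    static void on_sigint(int sig) {
      (void)sig;
      interrupted = 1;       /* only set a flag; do the real work outside the handler */
    }

    /* Hypothetical interpreter loop: install the handler once, then poll the
       flag at safe points (e.g. between statements) and unwind to the prompt. */
    void interpreter_loop(void) {
      signal(SIGINT, on_sigint);
      for (;;) {
        if (interrupted) {
          interrupted = 0;
          break;             /* explicit return, exception, or longjmp() back to the REPL */
        }
        /* ... execute the next statement of the current script ... */
      }
    }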
I noticed that the Lua parser uses setjmp()/longjmp() when compiled as C, but C++ exceptions when compiled as C++.
Also, you can check out the original sh by Stephen R. Bourne (Bell Labs, [1]), or the original csh by Bill Joy (BSD, [2]). On GitHub there is a huge repository with a collage of Unix history, where it is possible to see how shells have evolved ([3]).
This is definitely a cool little tutorial, but I'd strongly caution anyone who's writing string-handling software (and what is a shell but string-handling software: the input and the output are all strings!) not to write it in C. There are higher-level languages with good string support which support fork(), exec(), wait() & friends, and are far less susceptible to string-handling and memory errors.
That said, a shell's remarkably limited scope is actually something C is reasonably suited for.
I don't think there are that many languages that are good for writing a shell. There seem to be a few shells written in Go, but Go only reluctantly supports fork(), because it interacts badly with its threaded runtime. Python is actually closer to what you want than Go, but I think it will end up having problems with signal handling, because the Python interpreter does its own signal handling. The prototype of my shell is written in Python [1].
Garbage collection almost always interacts poorly with signal handling. You can't interrupt a garbage collector at an arbitrary point.
I wanted to bootstrap my shell [1] with Lisp -- and I hacked on femtolisp because Julia is bootstrapped in exactly this manner. And for a while I thought about using an OCaml front end.
But in the end I settled on C++ (writing 3K lines of code, which I plan to return to after my prototype is done.) C++ lets you do fork(), exec() and handle signals exactly as in C, but it actually has useful string abstractions!
C++ has a lot of problems, but it appears to be the best tool for this job.
I think that a shell is doable using my TXR Lisp as a base.
As far as signal handling goes, it's in the box. See this Rosetta Code example under the task "Find limits of recursion" where we catch a SIGSEGV using a TXR Lisp lambda (running on an alternate stack via sigaltstack):
You don't have to worry about signal handlers and garbage collection.
If you need to do something in C++, the TXR code base will support you. Though it is C, it all compiles in C++, which I check for regressions every release. (At this time, I won't accept patches which break away from C compatibility and cause C++ to be required, but for experimenting and forking off your own thing, there it is.) Just ./configure cc=g++ and off you go.
TXR also integrates a nicely hacked version of Linenoise that could be used to bootstrap the shell command prompt. I rewrote the multi-line support so that it is excellent. There is visual cut/copy/paste, paren matching, undo, and more. (It's the only part of TXR without proper Unicode support, unfortunately: in the TODO list.)
Yes, interesting project, and I think it would work. But femtolisp would work fine too... In the end I decided not to use a Lisp because it wasn't making me more productive. It felt a bit unstructured.
Instead, it looks like I will generate a lot of C++ from Python to control the complexity (and line count).
For example, right now I'm working with Zephyr ASDL, which is basically a DSL for algebraic data types which can bridge Python and C++. You can do this in Lisps too of course, but Python works just as well in this case. Julia is interesting because the lexer and parser are in femtolisp, and that enables Julia macros.
If I'm reading your page right, Python handles signals the same way -- it receives the signal in C, and then runs a Python handler later on the main interpreter loop. I think you pretty much have to do it that way.
(But Python has some logic about turning certain signals into exceptions on the main thread, which I don't want to bother with.)
TXR has deferred signals as well as async ones. Both the Rosetta examples show async signals going off: lambda being called in the signal handler context. For the SIGSEGV catch, this must be so: the lambda executes on the alternate stack arranged with sigaltstack.
There is a flag which enables or disables async signals. In the case of a CPU exception signal (SEGV, BUS, ILL, FPE, ...), we ignore this flag and just do async. Other signals respect the async flag. The async flag is almost always off; async signals are enabled in various places in the library when blocking system calls are being made. So for instance an async signal won't go off during garbage collection; the internal handler will see that async signaling is disabled, and arrange for a deferred call at the next opportunity (unless it is a CPU exception).
Lots of projects need their own CLI for administration purposes, and there are ready-made libraries for the command-line handling - like GNU readline (or editline/libedit if you can't work with the GPL). These are very useful, as they do such things as line editing, history, tab completion, etc.
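For example, a minimal readline prompt loop is only a few lines (a sketch; link with -lreadline):

    #include <stdio.h>
    #include <stdlib.h>
    #include <readline/readline.h>
    #include <readline/history.h>

    /* Minimal prompt loop: line editing and history come for free;
       tab completion needs a bit more setup (rl_attempted_completion_function). */
    int main(void) {
      char *line;
      while ((line = readline("mysh> ")) != NULL) {
        if (*line)
          add_history(line);
        printf("you typed: %s\n", line);   /* a real shell would parse and execute here */
        free(line);
      }
      return 0;
    }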
https://github.com/oilshell/oil/
And a blog here: http://www.oilshell.org/ (index: http://www.oilshell.org/blog/2016/11/20.html)