I remember that when I worked at Google about a decade ago, there was this common saying:
"If the first version of your shell script is more than five lines long, you should have written it in Python."
I think there's a lot of truth in that. None of the examples presented in the article look better than they would have if written in an existing scripting/programming language. In fact, had they been written in Python or JavaScript, it would have been far more obvious what the resulting output would be, considering that those languages already use {} for objects and [] for lists.
For example, take this one:
jo -p name=JP object=$(jo fruit=Orange point=$(jo x=10 y=20) number=17) sunday=false
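In Python you would write it like this (the same object the jo command above builds):

```python
import json

# The same nested structure, spelled out as a Python dict:
print(json.dumps({"name": "JP",
                  "object": {"fruit": "Orange",
                             "point": {"x": 10, "y": 20},
                             "number": 17},
                  "sunday": False}))
```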
Only a bit more code, but at least it won't suffer from the Norway problem. Even though Python isn't the fastest language out there, it's likely still faster than the shell command above. There is no need to construct a fork bomb just to generate some JSON.
python3 is (relatively) slow to start up, and this is something that got significantly worse with the 2->3 migration:
$ time python3 -c ''
real 0m0.029s
$ time python2 -c ''
real 0m0.010s
$ time bash -c ''
real 0m0.001s
Which means you probably don't want python scripts on a busy webserver being called from classic cgi-bin (do people still use those?), or run as the -exec argument to a "find" iterating over many thousands of files. There are probably a couple more examples like that. For most use cases, though, it's still fast enough.
> why would you choose to do either in a shell script?
In the normal case, you'd have variables interpolated in there, not static JSON. And then you run into the quoting problems that jo was created to work around...
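A minimal sketch of that quoting problem (the variable name is hypothetical; the jo line is shown for contrast and assumes jo is installed):

```shell
name='JP "the builder"'

# Naive interpolation breaks as soon as the value contains a quote:
printf '{"name": "%s"}\n' "$name"   # {"name": "JP "the builder""} - not valid JSON

# jo does the escaping for you:
# jo name="$name"                   # {"name":"JP \"the builder\""}
```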
That wasn't the claim made in the original post though, was it? The claim was that the Python snippet would be quicker than the jo snippet.
"Even though Python isn't the fastest language out there, it's likely still faster than the shell command above."
Which it most definitely is not - it's 5x slower.
(Probably not a huge issue in the real world if you're writing a shell script, mind, given that bash itself isn't a performance demon. But claims have to be tested.)
That’s because you’re making a false assumption about the environment prior to executing the statement.
If you are in a shell session and have to choose between executing python -c or calling jo, the latter is faster as you’ve demonstrated. But that’s not a realistic assumption.
Statements like these are almost certainly part of some combined work. The data you’re feeding to jo comes from somewhere. Its output is written somewhere.
You can’t convince me that if you’re already inside some Python script, that invoking json.dumps() is slower than calling jo from within a shell script.
At no point did I claim that launching Python AND running that json.dumps() is faster than running that shell command. I only stated that the json.dumps() is.
> if you’re already inside some Python script [...]
You're not going to shell out to `jo` and that's fine - it's not what `jo` was created for; it's explicitly a shell command to help you work around the annoyance of getting quoting right when constructing JSON from the command line (which I've had to do a lot and I'm pretty sure many people have to.)
> If you are in a shell session [and want to create JSON] ... that’s not a realistic assumption.
Of course it is. People create JSON in shell scripts all the time! That's why things like `jq` exist - because this is what people do!
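For example, jq's null-input mode builds JSON from scratch, with proper escaping of interpolated shell values (a sketch, assuming jq is installed):

```shell
name='JP "the builder"'

# -n starts with no input; --arg binds a shell value as a JSON string,
# --argjson binds one as a raw JSON value:
jq -n --arg name "$name" --argjson x 10 '{name: $name, point: {x: $x}}'
```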
I actually did that for a more realistic comparison.
Example for jo:
docker run --rm -it debian bash
apt update && apt install -y jo nano
nano bash-loop.sh && chmod +x bash-loop.sh
#!/bin/bash
for ((i=0;i<1000;i++));
do
jo -p name=JP object=$(jo fruit=Orange point=$(jo x=10 y=20) number=17) sunday=false
done
time ./bash-loop.sh >/dev/null
Example for Python 3:
docker run --rm -it debian bash
apt update && apt install -y python3 nano
nano python-loop.py
import json
for i in range(1000):
    print(json.dumps({"name": "JP", "object": {"fruit": "Orange", "point": {"x": 10, "y": 20}, "number": 17}, "sunday": False}))
time python3 python-loop.py >/dev/null
Versions:
Debian GNU/Linux 11 (bullseye)
jo 1.3
Python 3.9.2
Results for jo:
real 0m2.230s
user 0m1.106s
sys 0m1.076s
Results for Python 3:
real 0m0.027s
user 0m0.021s
sys 0m0.005s
So it seems like you're probably right about how this scales to larger numbers of invocations in non-trivial cases!
Note: jo pretty-prints because of the "-p" parameter, which is not the case with Python, so it might not be a 1:1 comparison. It would be better to remove it. Though when I did that, the performance difference was maybe 1%, not significant.
Admittedly, it would be nice to test with actually random data to make sure that nothing gets optimized away, such as just replacing one of the numbers in JSON with a random value, say, the UNIX timestamp. But then you'd have to prepare all of the data beforehand (to avoid differences due to using Python to get those timestamps, or one of the GNU tools), or time the execution separately however you wish.
Edit to explain my rationale: Why bother doing this? Because I disagree with the sibling comment:
> The claim was that the Python snippet would be quicker than the jo snippet.
In my eyes that's almost meaningless, since in practice when you'll actually care about the runtimes will be when working with larger amounts of data, or alternatively really large files. Therefore this should be tested, not just the startup times, which become irrelevant in most real world programs, except for cases when you'd make a separate invocation per request, which you sometimes shouldn't do.
Edit #2: here's a lazy edit that uses the UNIX time and makes the data more dynamic, ignoring the overhead to retrieve this value, to get a ballpark figure.
Edit #3: probably should have started with a test to verify whether the initially observed performance differences (Python being slower due to startup time) were also present.
Often when writing scripts, I'm chaining tools together, e.g. using git to find a thing in a specific commit, using curl to grab something from the web, decoding some json, maybe unzipping a file.
I've never really found any language that feels good for that kind of thing. There's definitely a middle ground where it's getting to be too much for bash, but jumping to a language loses too much in the initial conversion to feel worth it until you're well past the point where your future self will think you should have made the switch.
Some languages have things like backticks in PHP to interoperate, but it's still not a great experience to mix between them. For my own little things I'm currently looking at fish, but bash is omnipresent.
This tool seems to delay that point even further, as dealing with generating JSON is currently definitely a pain point (whereas manipulating it with jq is often really good).
But if anyone can point to good examples of this transition in python then I'd be very interested.
Typically if you have bash you have a bunch of other utilities installed too.
The problem with python here is that while python-sh might be nice, you have to install any extra libraries you need with it, and that's not a trivial problem for installing scripts into prod.
Xonsh is better since you kind of get both the benefits of python and a shell like language, but frankly it's broken in a number of ways still. I use it daily, but hopefully you don't need to ctrl-c out of something since signal handling is iffy at best. It is kind of nice to be able to import python directly on the command line and use it as python though...
Thank you for pointing out that jq can create JSON! I use jq all the time for working with the AWS CLI and a big pain point has always been sending JSON. If you don't know, the AWS CLI depends on JSON arguments for quite a few common tasks, and the JSON needed can be quite lengthy.
Up until now I've been creating temp JSON files to feed the commands and I thought jo would be a great tool to make this easier. Now that I know jq can also create JSON, I'll just use that instead.
> "If the first version of your shell script is more than five lines long, you should have written it in Python."
Seriously, these kinds of "common knowledge", "universal truth in a sentence" sayings are often the mark of wannabe-guru mid-career engineers who have no clue what they are talking about.
"Even though Python isn't the fastest language out there, it's likely still faster than the shell command above."
That is going a bit far. By all means use Python. Go ahead and attack people who use the shell. But let's be honest. The shell is faster, assuming one knows how to use it. A similar claim is often made by Python advocates, something along the lines of Python is not slow if one knows how to use it.
The startup time of a Python interpreter is enormous for someone who is used to a Bourne shell. This is what always stops me from using Python as a shell replacement for relatively simple jobs; I have never written a large shell script and doubt I ever will. I write small scripts.
If anyone knows how to mitigate the Python startup delay, feel free to share. I might become more interested in Python.
Anyway, this "jo" thing seems a bit silly. Someone at Google spent their 20% time writing a language called jsonnet to emit JSON. It has been discussed on HN before. People have suggested dhall is a perhaps better alternative.
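One partial mitigation for the startup delay: `python3 -S` skips importing the `site` module, which is a noticeable chunk of interpreter startup (a sketch; the savings vary by system and installed .pth files):

```shell
# Default startup imports site.py and processes .pth files:
time python3 -c ''

# -S skips that; -E additionally ignores PYTHON* environment variables:
time python3 -S -E -c ''
```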
I just assume at this point that a new python script from a coworker won’t run without an hour of tinkering and yelling obscenities at my screen. Or resorting to running in docker, which seems asinine. Python’s everywhere and does everything though, so I don’t have a good alternative. Shell scripts definitely aren’t it, but they generally hold up better when sharing in my experience.
Shell scripts can't declare dependencies, though, in the way a pip package can. A shell script using this tool requires one to manually apt install it first, or to run in a common docker image - asinine. If you don't, your script will fail halfway through at runtime (actually, it likely won't fail at all, just produce corrupted output, since the shell ignores errors by default), whereas a Python IDE or mypy will tell you about missing packages during analysis, before you even try to run it.
Besides that, looking at json only, it's part of the standard library so is more likely to already exist on any given machine rather than this.
That depends on how well your bash script is constructed. If you carefully handle the failure cases, such as missing commands, non-root permissions, etc., it can be easy to use and reasonably portable. Of course, Python scripts have better error traces, so if the script doesn't work, others can debug it relatively easily.
Sure, but using Python gives you 100 more problems. Would you like to pit pip and your package manager against each other in a fight to the death? Or is today the day you learn all about venv? Might as well use Docker. Oh nice, my 600MB script is ready!
Since I have no idea why this is downvoted: it really isn't 100 more problems for small scripts. If you just ignore pip/packages, Python gives you way more functionality out of the box than shell does, with a lot fewer sharp corners, which translates into a less buggy/more correct script at the end of the day.
If you do take the time to deal with pip (which, yes, is a problem) you get access to even more batteries that would have been a pain or just flat out impossible with shell.
(& on many distros, you can use your system package manager. I'm not seeing a material difference between a shell script that requires "apt-get install jo" and a Python script that requires "apt-get install python3-requests" or something.)
But either way, for circumstances where shell is the wrong tool for the job, "invoke python in the middle of this shell script because it's a better tool for this particular part of the job" is a strategy I've used before and will keep using, because it produces code that isn't riddled with bugs.
All fun and games until you need quoting. Quoting in shell scripts is already hellish enough, but layering JSON quoting on top of that is a road to madness.
Yes, `jq` would work fine, but so would `jo`. The point is, if there is anything more than the simplest dynamic values, constructing valid JSON just with POSIX shell is a huge pain.
I feel this whenever anyone advocates using jq. It boggles my mind that anyone would want to learn a whole new DSL for something that JS makes trivial, especially considering JS is a much more expressive scripting language anyway.
JS is not that easy to embed in scripts for trivial situations, though. After seeing a couple of jq examples I can run `jq '.foo[] | {bar, baz}'` without really "learning" its DSL. But doing the same with node? That would be much larger.
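Roughly, the node equivalent of that one-liner looks something like this (a sketch; assumes node is installed and the document arrives on stdin):

```shell
echo '{"foo":[{"bar":1,"baz":2,"qux":3}]}' |
  node -e '
    const d = JSON.parse(require("fs").readFileSync(0, "utf8"));
    for (const {bar, baz} of d.foo) console.log(JSON.stringify({bar, baz}));
  '
```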
And now everyone who needs to read your script also needs to find those examples to figure out what your code does. Even your not-real-world example's intent isn't obvious to anyone unfamiliar with jq.
That's about the only example you need explained to understand ~99% of real world jq usage. $dayjob has quite a bit of it around in various repos, and this actually is as real-world as it gets in my experience. Comparing that to having to learn enough JS to do the same thing in a verbose way, I'm still on the side of jq having an advantage in that case.
For programs this trivial, startup time so dominates runtime, and Python's startup time is so incredibly awful, that you can often fork 10-20 low-overhead processes before Python even starts executing user code.
that's probably because you're at Google; however, there are more embedded devices on earth than whatever Google has, times a billion or more. On those devices Python is too heavy, and a POSIX shell along with jo fits perfectly.
HTTPie creator here. We’ve recently[0] added support for nested JSON[1] to the HTTPie request language, so you can now craft complex JSON requests directly:
We did look at `jo`, and also `jarg`[0], the W3C HTML JSON form syntax[1], and pretty much every other approach we could find. We had quite a few requirements that the new nested syntax had to meet: be simple/flexible, easy to read/write, backward/forward compatible, and play well with the rest of the HTTPie request language.
The final syntax is heavily inspired by the HTML JSON forms one. But we made it stricter, added type safety, and some other features. It also supports all the existing functionality like embedding raw JSON strings and JSON/text files by paths.
The final implementation[2] is completely custom. We have plans to ship it as a standalone tool as well, publish the test suite in a language-independent format and write a formal spec so that other tools can easily adopt it. This spec will eventually be a subset of one for the overall HTTPie request language, which is currently tied to our CLI implementation but we want to decouple it.
Thanks for the detailed reply! Hadn’t come across the HTML JSON form syntax before. Good to know memorising the httpie request language will have some side benefits in that it’s closely related to a standard in use elsewhere.
This kind of thing has its place and can be useful, but I think there should be options to enable/disable such magic. Personally I'd lean towards it being opt-in, but with a CLI I think it's a lot harder not to make it opt-out. Everything is a string in the CLI, but not so much in JSON. That's why I think it makes sense, provided you can disable the magic.
jo opaque_id=$(command used in prod that returns digits + letters for three months and everything works just fine and then suddenly at 3am on a Sunday it returns 1979 and breaks everything)
I'm sure it'd be a patch of a few lines to make the type specification mandatory on the command line (I would certainly prefer that also), but it comes down to the opinion of the maintainer if that is wanted or not.
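Concretely, with jo's default type guessing (assuming jo is installed), the type flips silently the day the value happens to look numeric:

```shell
jo opaque_id=ab1979   # {"opaque_id":"ab1979"}  - a JSON string
jo opaque_id=1979     # {"opaque_id":1979}      - suddenly a JSON number
```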
> jo normally treats value as a literal string value, unless it begins with one of the following characters:
> value action
> @file substitute the contents of file as-is
> %file substitute the contents of file in base64-encoded form
> :file interpret the contents of file as JSON, and substitute the result
This is convenient but also very dangerous. This feature will cause the contents of an arbitrary file to be embedded in the JSON if any value starts with `@`, `%`, or `:`.
This will be a source of bugs or security issues for any script generating JSON with dynamic values.
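A sketch of the failure mode (assuming jo is installed; the variable stands in for untrusted input):

```shell
# Imagine this value arrives from a user:
user_input='@/etc/passwd'

# jo sees the leading '@' and substitutes the file's contents as-is:
jo comment="$user_input"   # embeds /etc/passwd into the output
```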
In the spirit of "do one thing well", I'd so rather use this to construct JSON payloads to curl requests than the curl project's own "json part" proposal[1] under consideration.
Agree, I was surprised that the cURL feature was considered as it seems to go against the "Do One Thing" and composability points of the UNIX philosophy.
Curl does like 100 "things" already by that standard. The Unix philosophy doesn't have to be reductionist.
Curl does do one thing: make network requests. This feature is making it easier to make network requests, i.e. it makes it better at doing the one thing that it does.
The more I work with JSON, the more I crave some kind of dedicated JSON editor to easily visualize and manipulate JSON objects, and to serialize/deserialize strings. This is especially the case with truly massive JSON objects with multiple layers of nesting. Anyway, cool tool that makes one part of the process of whipping up JSON a little less painful
Add a json LSP to your editor, or use VS code which includes it natively: https://www.npmjs.com/package/vscode-json-languageserver Configure a json schema for the document you're editing and suddenly you get suggestions, validation, etc. as you type. It's pretty magical.
I wrote something like this for emacs: a couple functions “fwoar/dive” and “fwoar/return” that let you navigate JSON documents in a two-pane sort of paradigm.
I built a JSON editor for Android a while back as part of a tool for kicking off AWS Lambda functions. I was planning on pulling the JSON editor out into its own reusable package, but lost momentum. I imagine it could be useful in many sorts of apps that use JSON.
Looks neat, but the documentation was hard to follow. A lot of typos, and some things didn't make sense. For example, I think two paragraphs were swapped: the example code below the paragraph talking about square brackets has no square brackets (but nested objects), while the paragraph right after talks about nested objects but its example doesn't show any (it does, however, seemingly demonstrate the square-bracket feature mentioned previously).
This looks like an informally specified shell-friendly alternative json syntax.
I wonder if a formal syntax would help? Perhaps including relevant shell syntax (interpolation, subshell). It could clarify issues, and this different perspective might suggest improvements or different approaches.
It's written in C and is not actively developed. The latest commit, it seems, was a pull request from me back in 2018 that fixed a null-termination issue that led to memory corruption.
Because I couldn't rely on jshon being correct, I rewrote it in Haskell here:
In the common case where you trust your input entirely you can just interpret your string as JavaScript. Then you don't even need to use quotes for the key names.
Since fooson's argument is being interpreted as JavaScript, you can access your environment through process.env. But you could make a slightly easier syntax in various ways. Like with this script:
I think you're thinking of YAML; JSON doesn't interpret "no" as a boolean. Scroll down here to see the JSON grammar: https://www.crockford.com/mckeeman.html
I actually think the "Norway problem" is a PEBKAC from users not learning the data format. But this tool may confuse some people or applications who don't know what a boolean, integer, float or string are, and try to mix types when the program reading them wasn't designed to. Probably the issue will come up whenever people mix different kinds of versions ("1", "1.1", "1.1.1" should be parsed as an int, float, and string, respectively)
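With jo's guessing, those three version strings indeed come out as three different JSON types (assuming jo is installed):

```shell
jo v=1       # -> number:  {"v":1}
jo v=1.1     # -> number (float)
jo v=1.1.1   # -> string:  {"v":"1.1.1"}
```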
Depending on your exact needs, you might also want to try Next Generation Shell. It's a fully featured programming language for the DevOps-y stuff with convenient "print JSON" switch so "ngs -pj YOUR_EXPR" evaluates the expression, serializes the result as JSON and prints it.
You can implement this and many other single purpose CLI tools with inline Python or Perl or other language. Easier to remember because it's your favorite language.
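For instance, a sketch of that kind of inline invocation (values hardcoded here for illustration):

```shell
# Inline Python as a throwaway JSON generator:
python3 -c 'import json; print(json.dumps({"x": 10, "y": 20}))'
```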
While a handy trick, it doesn't entirely solve the original problem when the values have dynamic content. In your example, replace 10 with $FOO and you are back to square one, having to escape Python strings within a shell string. Better to avoid the problem entirely by not using shell to begin with. To instead continue down the dirty track, replace 10 with int(sys.argv[1]) and call it as python -c "..." $FOO.