[lnav](https://www.lnav.org) is a terrific little tool, a "mini-ETL" of sorts with an embedded SQLite client and a clean, powerful interface. Its sweet spot is logfiles, but given regex-based custom formats, it works great with any semi-structured input. Lnav easily handles a few million rows at a time. IME it pairs really well with, e.g., mitmproxy/mitmdump for client request logs, as well as with webserver logs.
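For anyone curious what those custom formats look like: they are JSON files mapping a named regex (with capture groups) onto lnav's fields. This is a from-memory sketch, so treat the exact key names as approximate and check lnav's format-file documentation before relying on it:

```json
{
  "myapp_log": {
    "title": "myapp log",
    "regex": {
      "std": {
        "pattern": "^(?<timestamp>\\d{4}-\\d{2}-\\d{2}T\\d{2}:\\d{2}:\\d{2}) (?<level>\\w+) (?<body>.*)$"
      }
    },
    "level-field": "level",
    "sample": [
      { "line": "2024-05-01T12:00:00 ERROR connection refused" }
    ]
  }
}
```

Dropped into lnav's formats directory (under ~/.lnav/formats/ on my machine), it makes the custom log searchable and SQL-queryable like any built-in format.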
Thanks for linking that. It's going to make my life easier this week, and I had not heard of it. I was weighing setting up something like Graylog for some troubleshooting and kind of dreading it. lnav looks like a perfect middle-ground between that and my wiki page full of grep commands.
This looks like a great resource. The tools you'd like to have for a specific problem are often quite un-googlable, so you either resort to complex hacks to make inferior tools do the job, or you spend an hour googling for the right tool for a tiny problem.
Of course, it would be even better if you could easily tell which of the dozen JSON query tools is the best choice for the task at hand, or which one you should pick if you only ever want to learn one of them.
In fact, I'd love it if someone would share their set of tried-and-true tools. Personally I mostly stick with the POSIX tools, plus jq or gawk on occasion (though I have to read their docs every single time...).
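Same here. For what it's worth, the bulk of my own usage is one-liners of this shape (plain POSIX awk, nothing gawk-specific), e.g. summing a column of a delimited file:

```shell
# Sum the 3rd column of a comma-separated file and print the total.
printf '%s\n' 'a,1,10' 'b,2,20' 'c,3,12' > /tmp/demo.csv
awk -F, '{ sum += $3 } END { print sum }' /tmp/demo.csv
# prints 42
```

Swap `-F,` for `-F'\t'` and the same skeleton covers most of the tab-separated files I run into.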
To nitpick even more, gawk (GNU Awk) is a superset of POSIX awk. I'm not very familiar with the differences, but I always use gawk specifically; I got too annoyed with some of the BSD userland that ships with macOS and learned to prefer the GNU versions.
One thing I could suggest for the XML list is xmllint. It can be really useful for converting XML to a canonical form so you can then use diff to compare two files.
E.g. something like
diff <(xmllint --c14n first.xml) <(xmllint --c14n second.xml)
I’d love to hear about more command-line SOAP tools if anyone can recommend some.
kdb+/q is another really good choice for dsv[1] and json[2]. You can create single-file databases if you really want to (e.g. for data exchange), but a splayed table[3] is faster, so you'd usually do that.
The problem with that might be the licensing costs once you use it commercially (e.g. at work). IIRC the license prices aren't public, but you're looking at over $10k in any case.
I personally prefer J to K in the APL family of languages. They also have a relatively cheap database, Jd [1]. Individual licenses are $600. Still a bit too much for my data mangling needs. :)
$10k isn't a lot (assuming that's right; it could be). I mean, it's a lot if you're used to something like MySQL or Postgres-levels of quality, but I've seen quotes for Oracle being almost $50k per core. MS-SQL is something like $7k per core, and kdb+ is definitely a lot more useful to me than MS-SQL.
There's also a per-core/minute pricing which might be useful.
Sure, kdb+ would probably be worth every penny even at $100k/year when it's the right tool for the job. I gather it's genuinely the best in-memory database for computing arrays of varying rank.
But a lot of the use cases these other tools are good for are small tasks every now and then. I feel kdb+ is in a different category.
where Q.fs is a function in a script that's bundled with the interpreter; the chunk size for reading the file into memory is adjustable by editing the function.
I took the time to learn recutils a long time ago, and it has been the gift that keeps on giving.
Sure, it is not as fast as many other formats, but on the other hand it integrates very well with Emacs and org-mode. I manage a large part of my various collections using a combination of the two, and the Emacs integration means it is all less than two seconds away.
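For anyone who hasn't seen recutils: the underlying format is just a plain-text "recfile", which is a big part of why the Emacs integration works so well. A small sketch (the record type and field names here are my own invention):

```
# books.rec -- one record per book; fields are "Name: value" lines,
# records are separated by blank lines
%rec: Book
%key: Title

Title: The Art of Unix Programming
Author: Eric S. Raymond
Status: read

Title: Structure and Interpretation of Computer Programs
Author: Abelson and Sussman
Status: reading
```

Querying is then something like `recsel -e "Status = 'reading'" -P Title books.rec`, though check the recutils manual for the exact expression syntax.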
I don't understand why csvkit is listed in the SQL-based utilities section. csvkit is a suite of multiple command-line tools (csvcut, csvsort, csvgrep, csvjson, csvstat, csvstack, csvjoin, etc.) plus several converters, so it is not only csvsql.
I'm very glad to see the 'silly' tools there, cut/join/paste/sort/uniq. While I would never build anything 'important' with them, they're an extremely useful tool to have in your toolkit.
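Agreed, and they compose further than people expect. A throwaway example joining two tiny whitespace-delimited files on a shared key (join requires both inputs to be sorted on the join field, which these are):

```shell
# users.txt: id name; logins.txt: id count -- join on id, sort by count desc.
printf '%s\n' '1 alice' '2 bob' '3 carol' > /tmp/users.txt
printf '%s\n' '1 9' '2 4' '3 7' > /tmp/logins.txt
join /tmp/users.txt /tmp/logins.txt | sort -k3,3nr
# prints:
# 1 alice 9
# 3 carol 7
# 2 bob 4
```

For unsorted inputs, a `sort -k1,1` on each side first (e.g. via process substitution) keeps it a one-liner.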
If you don’t mind converting the Excel file to CSV, csvkit[0], which is mentioned in the list, has a tool to pipe Excel into CSV for further processing by its sibling tools.
It won’t help if you need to retain anything Excel specific, but I find it very useful to deal with any Excel files that come my way.
Yes, there are multiple Python libraries for wrangling Excel files, as well as good built-in CSV support via the stdlib's csv module, which, despite its name, can actually handle DSV (delimiter-separated values), a generalization of CSV. The csv module also has a dialects feature with settable attributes like delimiters (which is how you get the DSV support) and quoting behavior. And since Python's built-in data structures (lists, dicts, tuples, and sets) are great for munging data, you can get a lot done with just that, plus the benefit of Python's readability and productivity.
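As a tiny illustration of that dialect support: passing `delimiter=` makes the csv module read pipe- or tab-delimited data, so a python3 one-liner works as a quick DSV-to-CSV filter in a pipeline (a sketch; no error handling, and lineterminator is set explicitly so the output has plain newlines instead of csv's default \r\n):

```shell
# Convert pipe-delimited input to plain CSV on stdout via the stdlib csv module.
printf '%s\n' 'name|city' 'ada|london' | python3 -c '
import csv, sys
w = csv.writer(sys.stdout, lineterminator="\n")
for row in csv.reader(sys.stdin, delimiter="|"):
    w.writerow(row)
'
# prints:
# name,city
# ada,london
```

The same shape, with the reader and writer arguments swapped around, goes the other way (CSV out to tabs, etc.).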
Sweet, ty! Staring at what was starting to look like a much larger Python script than I'd anticipated, then realizing I could do it in 16 lines of (very basic) bash with csvfix plus csvcut/sed/iconv, was a big day for me! Some of my favorite code is code that never got written, I think. I actually had most of those files copied locally because I was afraid the bytehost link would disappear.
That said, the link to the manual on the Bitbucket page isn't working.
Thanks for this link! I frequently have to load CSV files into a database, and they are invariably full of errors. People think spitting out CSV is easy, but that's only because they don't have to consume their own output. So every time, I write a Perl script and go through various iterations before I've found everything that's wrong with the file.
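Same boat here. One cheap first pass that has saved me some of those iterations is flagging rows whose field count differs from the header's before attempting the load. A sketch using the Python stdlib csv module (which at least understands quoted commas); the file name is made up:

```shell
# Print the line numbers of rows whose field count differs from the header's.
printf '%s\n' 'a,b,c' '1,2,3' '4,5' '"x,y",7,8' > /tmp/suspect.csv
python3 - /tmp/suspect.csv <<'EOF'
import csv, sys
with open(sys.argv[1], newline="") as f:
    rows = csv.reader(f)
    header = next(rows)
    for n, row in enumerate(rows, start=2):
        if len(row) != len(header):
            print(f"line {n}: {len(row)} fields, expected {len(header)}")
EOF
# prints: line 3: 2 fields, expected 3
```

Note the quoted `"x,y"` row passes, which is exactly where a naive `awk -F,` check would false-positive. (The reported line numbers assume no embedded newlines inside quoted fields.)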