I shared the first version of this in a Show HN last month [0]. The big update since then is that it now supports loading multiple files and doing SQL joins on them. You can see examples of that in this post.
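For a quick taste, joining two files looks roughly like this (file and column names here are just placeholders):

    dsq users.csv addresses.json \
      "SELECT {0}.name, {1}.city FROM {0} JOIN {1} ON {0}.id = {1}.user_id"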
The repo is here [1] and the README has a comparison against some of the other great tools in the space like q, octosql, and textql.
Can we get binary releases? There are some places I'd like to use this where I don't want to install Go just to install the bin, but would prefer to just `curl` a binary. I didn't see any GitHub releases.
Interesting, and thanks! It has definitely not been straightforward to explain. That's mostly my fault for not working on tutorials, docs, and videos. Thankfully DataStation performance, features, and install time are now in a pretty good spot, so I am going to be focusing more on the education side.
Any thoughts on supporting Avro? I notice there's no issue for it in the GitHub repo; curious if there is a reason for that. We're pretty much replacing all CSV interchange with Avro (at least where the other side can handle it).
I really like this idea. jq is great too, but the jq language just seems to be too different from anything else for me, so it only works if I use it every day, which I don't. But I guess an issue is that I'll still need to use jq to get my data into the array-of-objects format dsq expects?
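For example, if my data is an object keyed by id rather than an array, I imagine I'd still be reaching for something like this first (made-up shape, just a sketch):

    jq '[to_entries[] | {id: .key} + .value]' nested.json > flat.json
    dsq flat.json "SELECT * FROM {}"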
There really ought to be a standard, language-agnostic way of querying nested config file formats by identifier by now. For all the flak XML gets, at least it has XPath. YAML has YAML Path, but it's only supported by a single Python lib.
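With XML you can at least point a generic tool at any file, e.g. (hypothetical config and path):

    xmllint --xpath '//database/host/text()' config.xml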
Yeah, frankly the q benchmark isn't the best, even though dsq doesn't compare terribly in it. It isn't well documented, exercises a very limited amount of functionality, and isn't very rigorous from what I can see. That said, the caching q does is likely very solid (and not something dsq does).
The biggest risk (both in terms of SQL compliance and performance) I think with octosql (and cube2222 is here somewhere to disagree with me if I'm wrong) is that they have their own entire SQL engine whereas textql, q and dsq use SQLite. But q is also in Python whereas textql, octosql, and dsq are in Go.
In the next few weeks I'll be posting some benchmarks that I hope are a little fairer (or at least well-documented and reproducible). Though of course it would be appropriate to have independent benchmarks too since I now have a dog in the fight.
On a tangent, once the go-duckdb binding [0] matures I'd love to offer duckdb as an alternative engine flag within dsq (and DataStation). Would be neat to see.
I've released OctoSQL v0.4.0 recently[0] which is a grand rewrite and is 2-3 orders of magnitude faster than OctoSQL was before. It's also much more robust overall with static typing and the plugin system for easy extension. The q benchmarks are over a year old and haven't been updated to reflect that yet.
Take a look at the README[1] for more details.
My benchmarks should be reproducible; you can find the script in the benchmarks/ directory of the repo.
Btw, since we're already talking, @eatonphil I'd appreciate you updating your benchmarks to reflect these changes.
As far as the custom query engine goes - yes, there are both pros and cons to that. In the case of OctoSQL I'm building something that's much more dynamic - a full-blown dataflow engine - and can subscribe to multiple datasources to dynamically update queries as source data changes. This also means it can support streaming datasources. That is not possible with the other systems. It also means I don't have to load everything into a SQLite database before querying - I can optimize which columns I need to even read.
OctoSQL also lets you work with actual databases in your queries - like PostgreSQL or MySQL - and pushes predicates down to them, so it doesn't have to dump whole database tables. That's useful if you need to do cross-db joins, or JSON-file-with-database joins.
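Roughly, a query like this (assuming a PostgreSQL datasource has already been configured and named mydb; the names are just illustrative) will push the filter down to PostgreSQL instead of scanning the whole table:

    octosql "SELECT u.email, o.total
             FROM mydb.users u JOIN orders.json o ON o.user_id = u.id
             WHERE u.signup_date > '2021-01-01'"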
As far as SQL compliance goes, it gets hairy in the details - as usual. The overall dialect is based on MySQL, as I'm using a parser derived from vitess's, but I don't support some of its syntax and add some original syntax too (type assertions, temporal extensions, object and type notation).
Stay tuned for a lot more supported datasources, as the plugin system lets me work on that much quicker.
Awesome post, thanks for the link. I had no clue ClickHouse could do this. It shows off some more CLI tools in there too.
The big issue with ClickHouse is the incredibly non-standard SQL dialect and the random bugs that remain. It's an amazing project for analytics but you definitely have to be willing to hack around its SQL language (I say this as a massive fan of ClickHouse).
I wonder: does this mean I can embed ClickHouse in arbitrary software as a library? I'd be curious to provide that as an option in dsq.
> I wonder: does this mean I can embed ClickHouse in arbitrary software as a library? I'd be curious to provide that as an option in dsq.
Not sure, but it's Apache licensed, so you likely can make it work if you want to. But realize that clickhouse(-local) is much heavier than SQLite- or DuckDB-based solutions: the compiled binary is around 200MB IIRC.
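For reference, querying a local CSV with it looks roughly like this (recent versions can infer the schema from the header row):

    clickhouse-local --query "SELECT country, count(*) FROM file('data.csv', 'CSVWithNames') GROUP BY country"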
I dream of a database which would use CSV (with metadata/indices in CSV comment lines) as its storage. You can even use commented-out padding to fit data to blocks, etc.
Imagine not having to dump/convert data because you can always open it with OpenOffice.
recfiles [1] are a sort of poor man's plaintext database that you can edit with any text editor. I found manually entering the data mildly annoying and repetitive, but visidata [2] supports the format, so I've been meaning to learn to use that for easier data entry and for tabular visualization.
On read, you update if a row with that creation timestamp already exists and its modification timestamp is later; otherwise you insert. Your app(s) can do a simple file append for writes. You even have the full version history at every point in time.
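As a rough sketch, the read-side reconciliation is just a query over the file - something like this (column names made up; created acts as the record id and modified as the version):

    dsq records.csv "SELECT * FROM {} r
      WHERE r.modified = (SELECT MAX(modified) FROM {} WHERE created = r.created)"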
I did a lot of looking but didn't find a command-line tool that automated this process. It works fine for small projects of e.g. 100,000 records. Wouldn't work well for things like a notes app, because you'd be storing every modification as a new entry.
That's interesting. I'm trying to imagine your workflow, and thinking about what serverless SQL platforms like Amazon Athena let you do now - i.e., you can more or less dump CSV files in blob storage and query them. Is that what you meant?
Sorry, I don’t understand your question. But just in case this answers it:
S3 is cloud storage on AWS. Athena can work directly off the CSVs stored on S3.
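For example (bucket, table, and columns made up), you point an external table at an S3 prefix full of CSVs and then query it in place:

    CREATE EXTERNAL TABLE sales (
      id string,
      amount double,
      created string
    )
    ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
    LOCATION 's3://my-bucket/sales/'
    TBLPROPERTIES ('skip.header.line.count'='1');

    SELECT sum(amount) FROM sales WHERE created >= '2022-01-01';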
Where I said “dump” it was just a colourful way of saying “transfer your files to…”. I appreciate “dump” can also mean different things with databases, so maybe that wasn’t the best choice of word on my part. Sorry for any confusion there.
Dump is the proper term in this case, though, given S3 limitations (i.e. no appending to a file), which mean you need to either create a new file for each insert (very expensive) or replace the whole file for an update. So practically it’s a workable read-only replica fed by dumps of updates. For reasonably small datasets it can potentially work; otherwise you’re looking at rebalancing, partitions, etc., and you’re probably better off with Parquet, Avro, etc., given you’re usually at the stage of introducing Spark anyway.
[0] https://news.ycombinator.com/item?id=29643835
[1] https://github.com/multiprocessio/dsq