>I am not sure how many of the commands have been converted
There's a page [1] on the FreeBSD wiki with a list of completed and ongoing conversions. I don't know if the list is complete, but the page was last modified 2015-05-22, so it should be fairly up to date, I guess.
Except how do you represent every possible data type without losing fidelity? I made this point in another leaf, but it's worth echoing again. A perfect example of this is dates. There isn't a defined JSON type for them, so we'd be back to basic text parsing again (which will make anyone who has dealt with parsing dates recoil in agony).
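To make the date problem concrete, here's a small Python sketch: the stdlib `json` module has no date type at all, so the only way through is an out-of-band convention (ISO 8601 strings here) that both ends of the pipe must agree on, i.e. text parsing again.

```python
import json
from datetime import datetime, timezone

record = {"user": "alice", "last_login": datetime(2015, 5, 22, tzinfo=timezone.utc)}

# json has no notion of a date type and refuses outright:
try:
    json.dumps(record)
except TypeError as e:
    print(e)  # Object of type datetime is not JSON serializable

# The usual workaround is a convention (ISO 8601 strings) agreed on
# out-of-band by producer and consumer -- back to parsing text:
encoded = json.dumps({**record, "last_login": record["last_login"].isoformat()})
decoded = json.loads(encoded)
restored = datetime.fromisoformat(decoded["last_login"])
assert restored == record["last_login"]
```

The round-trip only works because both sides happen to use the same string convention; nothing in JSON itself enforces it.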
When you try to ram a one-size-fits-all approach onto everything, you end up with abominations like SOAP with XML. I simply don't get the fascination with using the same tool for every job, even when it's not applicable.
Text streams are just an ad hoc structured format. Even an out-of-date format is better than one you have to invent or decode on a per-case basis.
The whole XML vs. JSON debate feels pretty pointless; for all practical purposes they are equal (in size, complexity, etc.). Sure, XML has some weird design decisions around namespaces and so on, but if you use it as simple hierarchical markup it's pretty much equivalent to JSON with different brackets, and often easier for humans to parse because of closing tags instead of }}}. And, of course, comments.
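A quick sketch of the "different brackets" claim, using only the Python stdlib: the same flat record expressed both ways parses to the same dictionary (field names and values here are made up for illustration).

```python
import json
import xml.etree.ElementTree as ET

# The same record, once as JSON, once as "JSON with different brackets":
json_doc = '{"host": {"name": "web1", "port": "8080"}}'

xml_doc = """
<host>
  <name>web1</name>   <!-- and XML gets comments for free -->
  <port>8080</port>
</host>
"""

from_json = json.loads(json_doc)["host"]
root = ET.fromstring(xml_doc)          # the default parser ignores comments
from_xml = {child.tag: child.text for child in root}

assert from_json == from_xml           # {'name': 'web1', 'port': '8080'}
```

The equivalence holds for simple hierarchical data like this; it's the namespace/attribute machinery that makes XML feel heavier.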
The xml-hate, I think, isn't really xml-hate; it's the reaction against the xml-y things from 10-15 years ago: the IBM/Struts/Blah we all had to wade through. These days it feels like frameworks pick JSON over XML even when it's clearly an inferior choice (such as for config files). JSON is an object/message markup, not a good human-readable config format.
>> Ten years ago this would be --xml. Who knows what it would be ten years from now. Text streams are timeless
> Text streams are just an ad hoc structured format. Even an out of date format is better than one you have to invent or decode on a per case basis.
I won't argue which one sucks more, XML or JSON. They are both inferior to what you call "ad-hoc structured" text files.
XML and JSON are both hierarchical data models. It has been known for forty years that these are inferior to the relational model, because they make presumptions about the access paths of the consuming algorithms.
Put differently, hierarchical data models provide just one view of the information that is actually there (implicitly; consistency and normalization are not enforced). Relational databases, on the other hand, are concerned with the information itself and provide much better mechanisms for enforcing consistency and normalization.
By coincidence, the "unstructured" text files in Unix are just miniature relational database tables. Think passwd, shadow, hosts, fstab, and so on. Consistency is not technically enforced (that would be huge overkill at this level of abstraction), but there are even checker programs like pwck.
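The passwd case is easy to demonstrate: colon-separated fields, one tuple per line, fixed schema. A Python sketch (with sample rows rather than reading the real /etc/passwd):

```python
# /etc/passwd is effectively one relation: colon-separated fields, one tuple
# per line, with a fixed seven-column schema. Sample rows for illustration:
PASSWD = """\
root:x:0:0:root:/root:/bin/sh
alice:x:1000:1000:Alice:/home/alice:/bin/bash
"""

FIELDS = ["name", "passwd", "uid", "gid", "gecos", "home", "shell"]

rows = [dict(zip(FIELDS, line.split(":"))) for line in PASSWD.splitlines()]

# Once it's a relation, relational-style queries are one-liners:
bash_users = [r["name"] for r in rows if r["shell"] == "/bin/bash"]
print(bash_users)  # ['alice']
```

No schema declaration anywhere, which is exactly the "consistency is not technically enforced" point; tools like pwck exist to fill that gap.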
A good standardized relational-model format would be cool, and I'm sure such formats exist. Feels like we could do better than spitting out randomly (i.e., per-tool) formatted data with so-so encoding support!
A sequence of tables in CSV with decent encoding support would go a long way toward good machine-parseable relational text output.
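A minimal sketch of what that could look like, using the Python stdlib `csv` module. The `#table:<name>` marker row is my own made-up convention (there is no standard for multi-table CSV), but it shows how little machinery a relational text format needs:

```python
import csv
import io

# Two relations serialized into one CSV stream, each introduced by a
# "#table:<name>" marker row (a hypothetical convention, not a standard).
buf = io.StringIO()
w = csv.writer(buf)
tables = {
    "users":  [["uid", "name"], ["1000", "ålice"]],   # non-ASCII round-trips
    "groups": [["gid", "name"], ["1000", "staff"]],
}
for name, rows in tables.items():
    w.writerow([f"#table:{name}"])
    w.writerows(rows)

# Reading it back is plain CSV parsing plus a tiny dispatch on the marker:
parsed, current = {}, None
for row in csv.reader(io.StringIO(buf.getvalue())):
    if row and row[0].startswith("#table:"):
        current = parsed.setdefault(row[0].split(":", 1)[1], [])
    elif row:
        current.append(row)

assert parsed == tables
```

The encoding question then reduces to "the stream is UTF-8", which is exactly the kind of so-so support per-tool formats tend to fumble.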
It's really two separate discussions though: what is a good input/output format for Unix-like tools, and what makes a good format for a config file.
> It's really two separate discussions though: what is a good input/output format for Unix-like tools, and what makes a good format for a config file.
I don't see where these are not one single problem. Everything is a file.
> A good standardized relational model format would be cool, and I'm sure such formats exist. Feels like we could do better than spitting out randomly (I.e per- tool) formatted data with so-so encoding support!
I'm actually currently trying to realize such a thing in Haskell, for use in low-traffic websites. There are obvious advantages to text DBs compared to binary DBs, for example versioning.
But I doubt we can do better than current Unix text files if we don't want to lock in to some very specific technology.
> I don't see where these are not one single problem. Everything is a file.
A very general format could solve more problems, but as I said earlier, I think the lack of comments in JSON makes it subpar as a config format for human editing.
I generally like the idea of tabular text files where the last column is free-form text.
If you need more commenting freedom or flexibility, why not add another indirection and generate the data from some source that is tailored to your needs? After all, relational data is often not suited for manual input. It's meant for general consumption by computers.
For example, as a sysadmin, passwd/shadow/group is not enough to model my business objects directly. I keep my business data in a custom database and generate the text files from that.
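That indirection can be sketched in a few lines of Python: richer business records live in some store, and the flat passwd-style file is just a generated view (all names and fields below are made up for illustration):

```python
# Business objects carry more than passwd can hold; the flat file is a view.
staff = [
    {"login": "alice", "uid": 1000, "full_name": "Alice Ax", "dept": "ops"},
    {"login": "bob",   "uid": 1001, "full_name": "Bob By",   "dept": "dev"},
]

def to_passwd_line(person):
    # passwd has no "dept" column -- extra business data simply isn't emitted.
    return (f"{person['login']}:x:{person['uid']}:{person['uid']}:"
            f"{person['full_name']}:/home/{person['login']}:/bin/bash")

print("\n".join(to_passwd_line(p) for p in staff))
```

Comments, departments, audit history, and whatever else can live in the source database; the generated file stays a clean, machine-readable relation.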
I really don't care what a format is called, so long as it fulfills the basic requirements: 1) it can be written and parsed with the standard library of the language in question, and 2) it supports comments if it is to be used by both humans and machines, as in config files.
How would that work for late-evaluated data? A PowerShell object doesn't have to provide all of the data up-front, and can give updated data when checked at a later date. JSON is still just text, it still needs to provide all of the data you might need up front.
Not necessarily. Text streams don't have to provide all possible data for the next process in the pipe by default. Sure, you could keep all the command-line arguments you had before to make JSON output manageable, but then you have two problems rather than one.
ls --json
ps aux --json
If they did that, all that PowerShell stuff would become trivial.
Wouldn't even be hard.