Dhall – A Distributed, Safe Configuration Language (github.com/dhall-lang)
217 points by KirinDave on July 13, 2018 | 53 comments



Dhall was such a godsend for making our infra configs safer and more modular (we jointly configure Kubernetes & Terraform with it).

If you need some examples of use cases to see if it might help you, check out "Dhall in Production" [0]

So far I'm a maintainer of dhall-kubernetes [1] and we'll soon open source some of our integration with Terraform.

And because it's absolutely safe to distribute Dhall code, I'm toying with the idea of making some kind of Dhall-Kafka pub/sub, in which you can safely distribute Dhall code and data, with automatic version migrations (check this out for more info on why this is possible [2])

[0]: https://github.com/dhall-lang/dhall-lang/wiki/Dhall-in-produ...

[1]: https://github.com/dhall-lang/dhall-kubernetes

[2]: http://www.haskellforall.com/2017/11/semantic-integrity-chec...


RE: Dhall-Kafka pubsub

Dhall seems to be safe in the sense that it will always terminate and will never crash or throw some kind of exception, but I don't think it's safe in the sense that it is safe to execute potentially malicious Dhall code, unless you restrict the allowed imports to a whitelist.

At the very least that could be used to DDoS some target by having the script try to import something from a victim domain. And you might be able to read data in local files and transmit that information back; I'm not sure. It depends on whether imports are evaluated lazily, and the data of interest would have to be stored in a file on disk that can be imported.

EDIT: Actually, it looks like you can import raw text, so it doesn't matter what format the on-disk data is that you are trying to extract.

EDIT2: Actually, it doesn't even matter if imports are evaluated lazily or not, you can specify that a network import be made with given headers, so you could just set a header in the HTTP request to contain the sensitive data.


Author here: You might be interested in this post on safety guarantees:

https://github.com/dhall-lang/dhall-lang/wiki/Safety-guarant...

The main risks in executing potentially malicious Dhall code that is not protected by a semantic integrity check are:

* Using more computer resources than you expected (i.e. network/CPU/RAM)

* Unintentional DDoS (as you mentioned)

* The malicious import returning a value which changes the behavior of your program

If you protect the import with a semantic integrity check then the malicious import can no longer return an unexpected value, which eliminates the third issue (changing program behavior). Also, upcoming versions will cache imports based on the semantic integrity check, which would mitigate the second issue (DDoS) for all but the first time you interpret the program. There is also a `dhall freeze` subcommand which takes a program and automatically pins imports to their most recent values using semantic integrity checks.
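A frozen import looks roughly like the sketch below (the URL is the real Dhall Prelude, but the hash is an illustrative placeholder, not a real digest):

```dhall
-- Sketch of an import pinned by `dhall freeze` (hash illustrative only).
-- The hash covers the import's normalized value, so any upstream change
-- that alters behavior makes the interpreter reject the import:
let concat =
      https://prelude.dhall-lang.org/Text/concat
        sha256:0000000000000000000000000000000000000000000000000000000000000000

in  concat [ "foo", "bar" ]
```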

Regarding exfiltration, the import system guarantees that only local imports can access sensitive information such as file contents or environment variables. See:

https://github.com/dhall-lang/dhall-lang/wiki/Safety-guarant...

The only way that a remote import can obtain that information is if a local import supplied that information via Dhall's support for custom headers. In fact, this is actually an intended use of that feature (i.e. a local import fetching a Dhall expression from a private GitHub repository using an access token retrieved from an environment variable).
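A sketch of that intended pattern (the URL, repository, and header values here are hypothetical, and the exact shape of the headers record may vary by version). The token comes from a local import, which is the only way a remote import can ever see it:

```dhall
-- Hypothetical: fetch a Dhall expression from a private repository,
-- authenticating with a token taken from a local environment variable.
let token = env:GITHUB_TOKEN as Text

in  https://raw.githubusercontent.com/example/private/master/config.dhall
      using [ { header = "Authorization", value = "token ${token}" } ]
```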

So, in other words, the threat model is that as long as you can trust local imports then you can transitively trust remote imports, because they cannot access your local filesystem or environment variables unless you explicitly opt into that via a local import. I think that's a reasonable threat model because if you can't trust the contents of your local filesystem then you can't even trust the Dhall interpreter that you are using :)

Imports are not computed and the set of imports that you retrieve is static (i.e. does not change in response to program state or input), so the set of imports or their paths cannot be used as an exfiltration vector.


I had missed that wiki section on safety guarantees, my bad. It seems all of my proposed attacks wouldn't work. There's just one other that I thought of, and looking through your documentation I'm not sure if it would work or not. What if the host executing the script had access to some intranet site with sensitive data? Would I be able to do a network import of such a URL, load it as raw text, and provide that as a header to another import?

Seriously great work on this, by the way. A total configuration language that allows some form of network access while still being secure against malicious input is a really really impressive tool!


> What if the host executing the script had access to some intranet site with sensitive data? Would I be able to do a network import of such a URL, load it as raw text, and provide that as a header to another import?

Yes, a local import would be able to access an intranet site and re-export that information via custom headers supplied to another import. This is allowed because it falls under trusting local imports. Local imports have access to environment variables and your local filesystem, too, which are equally sensitive, which is why they need to be trusted.

This rule is called the "referential transparency" check, which can be summed up as:

* Only environment variables, absolute paths, and home-anchored paths classify as "local" imports

* Only local imports can retrieve other local imports

* URLs can import relative paths, but they are relative to the URL, not relative to your local filesystem

The reason it's called the "referential transparency" check is that this security restriction also leads to the nice property that the import system is referentially transparent. That means that every import evaluates to the same result no matter where you import it from. For example, if you have a directory of Dhall expressions that refer to each other and you rehost them on a file server, the language guarantees that they still behave the same whether you import them locally or via their hosted URLs.
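The third rule above can be illustrated with a sketch (file and URL names hypothetical):

```dhall
-- Suppose https://example.com/pkg/package.dhall consists of the line below.
-- The relative import resolves against the URL, i.e. to
-- https://example.com/pkg/types.dhall -- never against the local
-- filesystem of whoever imports package.dhall:
./types.dhall
```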

Also, thanks! :)


This is great. I've been toying with the idea of using Dhall for various things and your post might just have given me the impetus to go actually do it!


Dhall's really cool because you can use it even if your tooling doesn't support dhall. There is direct JSON/YAML support with dhall-json: https://github.com/dhall-lang/dhall-json

If you wanted to generate something like an NGINX config (or, for example, TOML or Terraform) you could use dhall-text: https://github.com/dhall-lang/dhall-text
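A minimal sketch of what a dhall-text template might look like (contents hypothetical): a function from the values to interpolate to the final Text, here a toy NGINX server block.

```dhall
-- Toy template: apply this function to a host and port to get the
-- rendered Text; Natural/show converts the port number to Text.
    \(host : Text)
->  \(port : Natural)
->  ''
    server {
      listen ${Natural/show port};
      server_name ${host};
    }
    ''
```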

Oh, did I say terraform? Someone's already on supporting that directly: https://github.com/blast-hardcheese/dhall-terraform

Dhall is unique among configuration languages because it's "distributed", meaning that Dhall can load any file as a Dhall expression, and has built-in URI resolvers to make even remote endpoints part of local configuration. This is excellent for, say, CI integration.

Dhall has "semantic hashing" so that if someone does change a dhall dependency in a surprising way, the dhall script will refuse to continue (and give a very clear error about what changed, where it was looking for the change, and why it's stopping).

Dhall is a secret superpower for project configuration. Especially as of the 1.14+ releases, it's become an increasingly go-to tool for me. Even in just one-off JSON generation scripts, I find it to be a lifesaver.


Dhall is really similar to Nix in that regard, though in practice I've never seen Nix used for real configs (sadly). Usually trying to do so in Nix ends up being really ugly because the Nix features don't translate directly to concepts in the config language.


I dabbled in using Nix for this sort of thing, but it's rather painful to invoke the "Nix language" itself. In Nix 1.x we can evaluate a Nix expression (e.g. a templated string) like this:

    $ nix-instantiate --eval --read-write-mode -E 'Nix code goes here'

That Nix code could be a file, or an import, or whatever. This is slightly improved with Nix 2.x:

    $ nix eval '(Nix code goes here)'

Still, it's not particularly well suited to string processing from the command line like this. We're closer to Nix's comfortable territory if we use it to build a config file, e.g.

    $ nix-build -E 'Nix code goes here'

Or, more likely:

    $ nix-build myConfigFileDescription.nix

In my experience this is more useful than trying to use Nix as a string processor. Still, if we're using Nix in a project or system, we might be better off using it to build the whole project (e.g. with a 'default.nix' file) or system (using a NixOS module or something).

From what I've seen, Guix takes compatibility with non-Guix systems a little closer to heart, e.g. for generating standalone packages that don't require Guix to install/use.


[flagged]


You're being downvoted for several reasons.

First, Dhall is quite unique in the design space of languages. Most configuration languages either forbid any user-defined functions/variables at all (like XML or JSON) or are Turing-complete (like having configuration directly in Python). Both of these approaches have downsides, and Dhall is a compromise between them.

Second, the world of programming is not just Python. I like Python, but it is certainly not a focus of programming language research. Your statement sounds like a fallacy that if anything new could be invented, then it would have happened already.

Third, it is rather obvious that you unfortunately don't understand what Dhall is about and therefore any arguments you make are rather weak.


Python is a Turing complete language that can do anything. Dhall isn't. That lack of power means that a configuration language can still do a lot, but stick to its mission of configuration. It produces a final value (probably a map or list). Your real program reads that value and does the real work. Why complicate your real program with that? Offload it to dhall and you get a consistent configuration for free!

Think of all those places (e.g., Chef and Circle) where people try to encode graphs and processes in YAML. Dhall could do much of the same work of producing a simple spec, but without the fear of infinite loops or nonsensical Bash parameters hitting a function.

Dhall is such a simple language (really just function application and some built-in datatypes and access principles) that it doesn't seem terribly costly as opposed to, say, embedding a complex graph into YAML (risking structural errors) or Python (risking extremely complex behavior and execute-only configuration). If what you really want is a declarative set of instructions to your real program, Dhall is probably one of the most awesome tools in the open source world for making it.


You sound like a marketer trying to sell a fridge on a phone call.

> Why complicate your real program with that? Offload it to dhall and you get a consistent configuration for free!

Why complicate my project's pipeline and force my team to learn yet another tool that may just be abandoned in 3 months?

> Dhall could do much of the same work of producing a simple spec, but without the fear of infinite loops or nonsensical Bash parameters hitting a function.

Last time I wrote an infinite loop in Python was... I can't remember. Maybe 5 years ago? Nonsensical is subjective. Maybe Dhall only makes sense to you.

> Dhall is such a simple language

There are no objective criteria defining a "simple" language. Perl was simple to some people.

> If what you really want is a declarative set of instructions to your real program, Dhall is probably one of the most awesome tools in the open source world for making it.

Or you could write a nice Python library, with clean API, and eventually a linter that makes sure the Python configuration file is a restricted subset of Python.


> You sound like a marketer trying to sell a fridge on a phone call.

I've got no stake in Dhall and I'm not a contributor. Check my submission history to see the sorts of things I try to do at HN if you're curious what my agenda here is.

> Why complicate my projects pipeline and force my team to learn yet another tool that may just be abandonned in 3 months?

Dhall's substantially older than three months and open source, so it's not really an attrition risk in 2018.

But I'd argue your pipeline is already complicated. My CircleCI2 flow got enormously simpler when I was no longer struggling to represent complex concepts in YAML and could just produce an object graph in Dhall and then render it to the right YAML. I've also switched to using it to generate JSON for Mesos API calls to make sure I get the fields right.

> Or you could write a nice Python library, with clean API, and eventually a linter that makes sure the Python configuration file is a restricted subset of Python.

I grant, this is possible. But why does having it "in Python" matter? How does that help people writing Golang or C# or Ocaml or Scala or Ruby?

Dhall's decision to template out more standard files is a smart one, it means it can serve as a stand-in and let programs that do the heavy lifting of a product or process (e.g., a Python ML binding or airflow process) use their normal tooling and do what they love.

If you're mad it is written in Haskell, I dunno what to tell you other than that it's like "jq". The host language doesn't matter because you invoke it for its outputs.


> Check my submission history to see the sorts of things I try to do at HN if you're curious what my agenda here is.

I don't care where you come from, I'm judging the arguments in this thread.

> My CircleCI2 flow got enourmously simpler

"Enormously" is relative. Do you have any numbers?

> But why does having it "in Python" matter?

Because of the Bus factor. Many more people already know Python.

> Dhall's decision to template out more standard files is a smart one

The smartest move is the one that makes the most economic sense. Do you have numbers to prove it's a smart decision?


There is no single economic perspective I can offer that can decide what tool is right for your org. Personally, I think that's an unfair question, since any answer I have would be uninformed to your specific circumstances. It'd always be wrong.

I'm sorry I can't work out what you'd like from this conversation. I've used this tool, I like this tool, and I'm comfortable with it. As such, I thought I'd share it, since there's also a TOML article on the front page today.


Actually, believe it or not, economics is not the only measure we have to respect. There is joy in simplicity and understandability that does not need to be mapped to costs on a number line.


> There is no objective criteria to define a "simple" language. Perl was simple to some people.

That's nonsense. Here are several:

- The Chomsky hierarchy: regular expressions (finite state machines) are objectively simpler than context free languages (pushdown automata), which are objectively simpler than Turing-complete languages (Turing machines).

- "Pure" languages (which only calculate an output, deterministically, using only the given input) are simpler than "impure" languages (which can also alter the world via irreversible effects, and those effects may alter subsequent behaviour, even across iterations)

- Untyped languages are simpler than typed languages, since all values are interchangeable (this is sometimes called "unityped", since there's one big type)

- Globally scoped languages are simpler than locally scoped languages (whether lexical, dynamic, whatever)

- First-order languages are simpler than higher-order languages (just ask any compiler designer!)

- Statically-dispatched languages are simpler than dynamically-dispatched languages

- Languages with local control flow are simpler than those with non-local control flow (e.g. exceptions, GOTO, etc.)

Those are just off the top of my head. Of course, simple languages aren't necessarily better languages, since it depends on the task. The same goes for anything, e.g. building with Lego is simpler than building with welded metal, but Lego is not appropriate for building a car. The above list isn't making value judgements about those features, it's just stating which is objectively simpler (there's a reason we don't program everything in SK logic!).

As for the languages being discussed here:

- Python is Turing complete, impure, untyped, locally-scoped, dynamically-dispatched with non-local control flow

- Perl is Turing complete, impure, untyped, locally-scoped, dynamically-dispatched with non-local control flow

- Dhall is not Turing complete, pure, typed, locally-scoped, statically-dispatched with local control flow

In particular, I don't see the point of advocating Python while using Perl as a negative example. As imperative, interpreted, dynamic scripting languages they're pretty much identical (see "LAMP stack" for example); arguing between such similar options is mostly subjective, and probably devolves into a tyranny of small differences.

On the other hand Dhall is a very different language. It is objectively simpler across many different criteria (although more complex by a few, like its types). In particular, due to its simplicity we can have much more confidence that a Dhall program is correct than in a language like Perl or Python. For example:

- If evaluating a Dhall program gives the right answer, we know that evaluating it again will give the same right answer, since Dhall programs cannot change their behaviour unless we change their input.

- We know, without even looking at a Dhall program, that it will not perform unintended or malicious actions like deleting files, sending emails, keylogging, etc. This is because the Dhall language is incapable of doing such things (it is pure).

- If we're able to run a Dhall program at all, the type system will have ruled out certain classes of mistake (note this is an objective fact; the holy war around typing is about whether that's worth doing or not)

To me, another advantage of using a limited, single-purpose language like Dhall for configuration rather than e.g. Python is that it can withstand pressure to add hacks, preprocessing and other complications. I've worked on applications which defined their configuration in the same language as the program, and it wasn't long before that "configuration" was branching on environment variables, hostnames, etc. to distinguish between systems (e.g. dev, test, staging, production). Once those branches were in place, it wasn't long before system-specific functionality started creeping into them (since "before everything else" can be an easy place to add patches). Once the available functionality started depending on code from the "config" file, it wasn't long before an override mechanism was needed to trick the "config" into which system it should pretend to be on. At that point we need config for the config, and the cycle continues.

Sure you could blame bad developers, but that's only useful with hindsight. Pretty much every segfault, security issue, crash, bug, etc. can all be backsplained as avoidable mistakes made by bad programmers. Yet we still, as a community, decided to create automatic memory management, sandboxes, restart loops, bug trackers, etc. because we accept that we will make mistakes, and it's better to mitigate their damage, or avoid giving ourselves that power/responsibility in the first place, than it is to point fingers and assign blame after a catastrophe.

Truly bad programmers are those who refuse to automate tasks: piling up mental burdens until eventually something slips, rather than letting the machine deal with it. Infinite loops are a simple example: personal anecdotes do not solve the halting problem; making a language which always halts does (for that language).

Enforcing policies using a linter can be a good idea, as it makes the machine responsible. Yet linting is usually a last resort, since it's necessarily limited (especially in a language as dynamic as Python, where behaviour can be redefined in non-obvious ways). Also, a machine-enforced language subset is essentially a different language. It also seems disingenuous to pose a solution involving hypothetical linters that don't yet exist, after raising concerns about abandonment for an established, feature-complete project.


Sometimes you need those hacks. I'd rather have the flexibility...


To be fair, Dhall is a whole hell of a lot more flexible than YAML. That's kinda the idea: it's proposing a principled compromise between XML and "an embedded language in an interpreted Turing-complete language." It's not something you see very often, and it's even more rare to see as a separated executable that uses *nix pipes and files for delivery when needed.


Of course anything can be done in Python, but why would we? I'm forced to use Python at work yet I still reach for other tools to manage the deployment process. I know many languages and I expect my team members to do the same, or let me teach them. It is not clear what your point is.


Gabriel gave a great presentation on Dhall at Bayhac this year:

https://www.youtube.com/watch?v=Ih9Ngu7FlCc


is there a single example Dhall config file anywhere that has more than a couple feature examples? It's a lot easier to grok new stuff like this if there's just one big example to look at to get an idea of what it does rather than having to navigate tons of granular documentation (which is also great, though)


Author here: I opened an issue to track this request and remind myself to do this: https://github.com/dhall-lang/dhall-lang/issues/189


Since the author's already responded, this might be redundant, but perhaps expanding the example at https://github.com/dhall-lang/dhall-lang#case-study might be a helpful look at a running example of Dhall. I feel like I've seen multiple people completely skip over and not realize there was already an example in the README as opposed to just the detailed reference docs.

Perhaps you're looking for something with even more features and applicability (e.g. an example of a Kubernetes config as Dhall's author points out), but hopefully that example gives at least a self-contained example of how to gradually start using Dhall.


oh wow, I totally missed that since it was minimized and I was skimming. thanks!


Dhall has a cheatsheet that might help you get a handle on its full feature set: https://github.com/dhall-lang/dhall-lang/wiki/Cheatsheet


Hmm I agree with OP though, and this cheatsheet still isn’t it for me. I need to see a real world example to be able to fully grok its practical use.

Do you happen to know of any?


Here's an example from one of my projects: https://github.com/vmchale/polyglot/blob/master/atspkg.dhall


This discussion here and Dhall's GitHub README seemed promising, so I went through the tutorial. I've recently had to write a lot of YAML-like config with custom templating (think Ansible), so naturally I felt there was a way out with Dhall. I was hoping the author could perhaps point me toward explanations for what I noticed bothering me in this (admittedly short) experience that left me puzzled.

I saw that lists can have only one type of element without annotating their types. With type annotations a list can have differently typed values, but only if each element is explicitly stated (note: this is my understanding, which might not be correct), meaning there's no dynamic content in a list. In the same vein, a map of maps needs to have all its keys stated by type annotation.

To me this seems too restrictive, since the structure of the data gets lost in the more verbose annotations. Not to mention the work of writing these annotations, or the functionality to produce the same. In TypeScript I'd write something like { [string] : [ Number | String ] } and I'd have my string-keyed object with values of lists containing numbers and strings. Having a language like Dhall to help with creation of correct configuration code seems really useful instead of this messy combination of declarative and template language. I would like to understand things that can get better by using such a type system.


Dhall's lists are homogeneous lists, meaning that every element always has the same type of value. This is true whether or not you annotate list elements with a type or you annotate the list with a type.

You only need to annotate the type of an empty list. Lists with at least one element don't require a type annotation because the type can be inferred from the type of that element.

Dhall does not have built-in support for homogeneous maps. Dhall does have statically typed heterogeneous records (i.e. something like `{ foo = True, bar = "ABC" }`, which has type `{ foo : Bool, bar : Text }`, for example).

If you want to store different types of values in the same list, you wrap them in a union. For example, if you want to store both `Text` values and `Natural` numbers in a list you would do:

    let union = constructors < Left : Natural | Right : Text >

    in  [ union.Left 10, union.Right "ABC", union.Right "DEF", union.Left 4 ]

The closest thing to a homogeneous map in Dhall is an association list of type `[ { mapKey : Text, mapValue : a } ]` but even that is still not an exact fit since it doesn't guarantee uniqueness of keys. However, Dhall's JSON/YAML integration does convert that automatically to a JSON/YAML homogeneous map (i.e. a JSON record where every field has the same type).
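Concretely, the association-list encoding looks like this sketch (keys and values hypothetical; note the values must all share one type, `Text` here):

```dhall
-- dhall-to-json would render this association list as the JSON object
-- { "host": "localhost", "db": "postgres" }
[ { mapKey = "host", mapValue = "localhost" }
, { mapKey = "db",   mapValue = "postgres"  }
]
```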

In general, Dhall's JSON/YAML integration has several tricks and conventions that translate to weakly typed JSON idioms (such as homogeneous maps, omitting null values, and using tags).


Thank you for your response.

This Left and Right declaration style was a new one for me. Also, homogeneous map wasn't exactly a familiar concept. I don't remember meeting these when learning TypeScript and dabbling with Elm. I fear I don't quite grasp the type structure here yet. I can go forward with my testing based on your example.


By the way, it's entirely fair game for you to create config/domain specific things and not simply (Left|Right) dichotomies.

And by the way, TypeScript DOES have a form of Sum typing like that as of 2017! You can say something like this from the manual:

    interface Square    { kind: "square";    size: number }
    interface Rectangle { kind: "rectangle"; width: number; height: number }
    interface Circle    { kind: "circle";    radius: number }
    interface Triangle  { kind: "triangle";  base: number; height: number }
    type Shape = Square | Rectangle | Circle | Triangle;

    function area(s: Shape) {
        switch (s.kind) {
            case "square": return s.size * s.size;
            case "rectangle": return s.height * s.width;
            case "circle": return Math.PI * s.radius ** 2;
        }
        // should error here - we didn't handle case "triangle"
    }

(You can see more about it here: https://www.typescriptlang.org/docs/handbook/advanced-types...., search for "Discriminated Unions".)

You can also make these ad hoc, and they're useful for harnessing underlying functions that might return null from stdlibs. E.g.,

    function strToInt( x: string ): number | null

And then you're required to check for nulls and the compiler will complain if you don't check for them. Pretty useful!


Is there a tutorial that isn't predicated on me already knowing Haskell? I feel like Dhall is exactly the tool I've been hoping for. It feels like a non-hacky version of make + m4.


Answering my own question, the README says that eventually the tutorial will be language agnostic.


If you check out dhall-text and dhall-json, you can see you can use it without any integration into your executable. I do this via shell scripts.


I don't understand what the word "distributed" means here. Is it that it can load the configuration from multiple servers?


Apparently any, or almost any(?), element of a language can be replaced with a file path, which is then transparently read and used as if typed directly in that place. It works even for type annotations and is type-safe. This allows for easy factoring of the code into many files, and - by extension - into files on remote hosts, as long as they're accessible via built-in HTTP support.
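As a sketch (filenames hypothetical), even the type annotation itself can be an import:

```dhall
-- Any sub-expression can be an import that is fetched and spliced in place.
-- ./min.dhall and ./max.dhall would each hold a Natural literal, and
-- ./Replicas.dhall would hold the record type used as the annotation:
{ minReplicas = ./min.dhall
, maxReplicas = ./max.dhall
} : ./Replicas.dhall
```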


I tried to read Dhall manual and figure out if I could make sense of it. I couldn’t. Looks a bit too complicated for me.

Jsonnet strikes a great balance of json but with some nice template syntax so you can be DRY.

Also what’s wrong with being Turing complete. I love for loops.


if there's anyone here who has used both dhall and jsonnet in anger, can you comment on your experiences?


What about those special characters like the lambda or the universal quantification? How do you type them?


There are ASCII equivalents. You can type `\` instead of `λ` and `forall` instead of `∀`. Also `dhall format` will automatically translate ASCII to Unicode for you.
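For example, both spellings below denote the same identity function, and `dhall format` rewrites the ASCII form into the Unicode one:

```dhall
-- ASCII spelling, using `\` for lambda and `->` for the function arrow:
\(a : Type) -> \(x : a) -> x
-- After `dhall format`: λ(a : Type) → λ(x : a) → x
```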

However, if you do want to type them then see:

https://en.wikipedia.org/wiki/Unicode_input

... and the relevant code points are:

* `λ`: `U+03BB`

* `∀`: `U+2200`


Remind me again... what's wrong with common-lisp:read?


I interpret it as "What advantage does Dhall have compared to CL:read?"

So for example, in your configuration, you can factor a common expression into a variable (or even parametrize it as a function). CL:read cannot do that without resorting to read macros (technically it doesn't have to be read macros, you can reinterpret it later, but then we are getting outside the realm of CL:read), which are Turing-complete and can have bugs that lead to security exploits.

Dhall guarantees that the functions you define in your configuration cannot be exploited.

This has interesting implications, for example, you can read configuration safely from semi-trusted source. With CL:read, you can do that only if you give up read macros, but then the semi-trusted source cannot define their own function.

So where with CL:read you have to choose between little and total power, Dhall gives you a little bit of both, a medium power of sorts.


Thanks for the explanation!


Too much power. Think of recursion, access to the full CL stdlib, destructive list operations, ...

Similar to YAML. Too much power, recursive via links, too insecure.


Dhall doesn't actually have any of those issues, though.


Look, I'm sorry but this is beyond 99% of the world's IT services right now.

This is of course biased, unresearched anecdata but roughly speaking

- 50% of all IT installations cannot be rebuilt from scratch in an automated fashion even if you hand them the new hardware, plugged in.

A further ten percent could be rebuilt with mostly automated scripts and a wiki page that's out of date by two months

Of the remaining 1/3 of the world's IT, 1/6 has dependencies it does not know about - the DNS server that "just works", the production routers that should not serve that subnet, the database that is "owned" by a different team whose configuration you have no control over.

Good guy d.b. has its secret passwords on the wiki page, or stored in a separate importable module, in source control, base64-encoded

And the rest - the rest rely more on bash than ansible.

Just get the world's enterprise services over the line and into "run this Python script with these params and you get a complete rebuild across databases and routers" and then we can talk.


That's completely beside the point. Are we supposed to shed a tear for each of that 50 %? I can do that, but it won't help me, at all.

On the other hand, I can use Dhall to generate Kubernetes/Helm configs, which will be more DRY, less duplicated, and not a pain to work with. I know that this would help me quite a bit, not only because I hate my current "DevOps" team members with burning passion and I'd enjoy watching them try to figure out what is happening (well, I'd at least add the source files to a repo, that's already more than they do).


You are travelling in a worldwide convoy - a convoy across the globe travelling to the future. Those 50% are not just in pointless fortune x000 companies, but governments and charities trying and failing to do useful things with their infrastructure.

The convoy has wagons with broken spokes, no horses, and people berating them because they are not travelling as fast as the well-maintained ones in front

Shed a tear for those being carried by such wagons - they could reach the future so much faster if best practices were common practices

And this from someone who believes in Schumpeter


I still don't understand why so many of your comments here are, "People are too stupid to..."

This is an excessively negative argument, and has been used many times to push back against technologies that have subsequently been shown to have huge traction. A great example of this that the Python community still makes: lambdas are too complicated.


I don't know about the numbers but you're probably speaking something close to truth here. So I'd guess the reason for the downvotes is the tone of your implication that because of this the preceding doesn't matter and is... wasting our time or something? I think it does matter. I work on a fairly small team that has to manage a lot of stuff, and we probably fall somewhere in the middle of your rough ranking. The wiki page might even be older than 2 months :). We're always looking for smarter ways to automate things (and then hoping to get the time to work on them). So to people like us things like this do matter quite a lot.


The difference here seems to be that the itch felt by the authors of this package, who are clearly skilled professionals contributing to the public good (thank you), is not likely to be an itch that 9/10 of the world's IT services has time to scratch, given they have a trauma-level injury bleeding out right now.

I don't think there is an OSS solution for a team that is working on five year old RHEL, differently configured servers and half the deployment is done by a different team in a different continent. This is just the default for an awful lot of the enterprise world, (no one designed it like that - it's just how internal politics got them there) and when you hit the SME world "it's been up and working for two years, we won't look too closely"

I am not trying to be sarcastic or defeatist- just trying to assess reality. I am positive - this can be solved globally with really cool tools but the tools themselves do not drive their own adoption.

Simple things like Docker are the most likely to solve most of this - yet its enterprise adoption seems to be awfully slow (needs root? not in my data centre).

But if there is a solution it is not technology - it is diktat. If every board of every company adopted one rule - "You must be able to automatedly deploy a pre-prod environment that passes all automated tests, using the same scripts as the production deployment, changing only one configuration file" - then worldwide IT professionals would leap a generation ahead.


I still don't understand why you're talking about IT departments when discussing a bespoke software configuration package.




