I think the glossary is there to define the variable names as used in the paper. I found this confusing when I originally read the paper, because the authors assume the reader knows what B, L, D, and N stand for. I had to use explainpaper to figure it out.
You can’t really move away from bash unless there’s an alternative as widely available across Unix systems as bash is.
Yes, you came up with a nice syntax for your Haskell scripts, but what does it cost to install all the required dependencies on, for example, a newly created Ubuntu server?
That's been one of the somewhat unexpected benefits for me with NixOS: it's so easy to pull in tools and libraries that I don't have to worry about using less popular options.
The caveat is that you're pushing the complexity somewhere else. I'm finally moving away from NixOS this month (after two years of using it) because I'm tired of edge cases interrupting my daily flow.
> You can’t really move away from bash unless there’s an alternative as widely available across Unix systems as bash is.
Lua can be built from source in a few seconds on just about any system and can be used pretty easily as a bash replacement. You can even bundle the Lua script in a bash script that idempotently installs Lua and then runs the Lua code embedded in a heredoc.
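A minimal sketch of that pattern (the `apt-get` line is an assumption for Debian/Ubuntu hosts; the wrapper is only written out and syntax-checked here, so you can inspect it before running it anywhere):

```shell
# Sketch: a bash wrapper that installs Lua if missing, then hands
# control to a Lua program embedded in a heredoc.
cat > /tmp/lua-wrapper.sh <<'WRAPPER'
#!/usr/bin/env bash
set -euo pipefail
if ! command -v lua >/dev/null 2>&1; then
  # Assumption: Debian/Ubuntu; swap in your distro's package manager.
  sudo apt-get install -y lua5.4
fi
exec lua - "$@" <<'LUA'
-- the "real" program lives here, in Lua
print("hello from embedded lua")
LUA
WRAPPER
# Sanity-check the generated wrapper without executing it:
bash -n /tmp/lua-wrapper.sh && echo "wrapper syntax ok"
```

On a machine where installing Lua is acceptable, you'd run the wrapper directly (`bash /tmp/lua-wrapper.sh`); the `command -v` guard is what makes the install step idempotent.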
Yep, this is mainly used for turning those bash scripts that you let get too large into a real program. You port your script, basically line for line, into this, and then start chipping away at it until you have something less scary.
We're trying to do a similar thing with darklang right now: providing a single binary with a built-in package manager. The idea is to make it easy to write readable "bash" scripts (including writing them with AI), but with just one dependency: the darklang binary itself.
You don't need the compiler to run the program in production. You only need the binary. I'd argue that a heavy toolchain on the dev machine isn't a big deal.
Bash scripts tend to be used in situations where you want to edit them and not have to deal with a recompilation step, let alone a multi-GB compiler install. That seems like a deal-breaker to me.
The best alternative I've found so far is Deno. It natively supports single-file scripts with third-party dependencies (a big issue when replacing Bash with Python). It uses an existing popular language (so no "learn our weird language" problem). Installation is very easy.
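For illustration, this is roughly what such a single-file script looks like (the `jsr:@std/fmt/colors` specifier and `bold` import are my assumption of a typical std dependency; check jsr.io for current module paths). The sketch only writes the file out rather than executing it, since running it requires a Deno install:

```shell
# Sketch: a self-contained Deno script with a third-party dependency.
# The dependency is declared inline via a jsr: specifier, so there is
# no package.json, lockfile setup, or virtualenv equivalent to manage.
cat > /tmp/hello.ts <<'TS'
#!/usr/bin/env -S deno run
import { bold } from "jsr:@std/fmt/colors";
console.log(bold("hello from a single-file deno script"));
TS
echo "wrote /tmp/hello.ts"
```

With Deno installed, you'd mark the file executable and run it directly; Deno fetches and caches the dependency on first run.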
The only real downside I've found is this stupid bug that they refused to fix:
I have to admit that Supervisor is much more mature than Meproc, since it has gone through a long process of user feedback and iteration, while Meproc is still only a rudimentary project. The difference between the two, in my opinion, lies more in their design philosophies.

Supervisor was launched earlier, so it relies heavily on configuration files and initially followed a client-server architecture; it later added an HTTP XML-RPC interface to address visualization and automation needs. Meproc, in contrast, set out to abandon configuration files from the beginning, because they complicate automation: should the configuration file be updated when a new configuration is deployed, and how do you keep the runtime configuration consistent with the contents of the file?

To use an imperfect analogy, Supervisor is like Nginx: classic software, but it struggles with dynamic updates in today's cloud-native environment, which is why alternatives like Unit were developed to compensate for its shortcomings. So why not design the software from scratch to support dynamic updates natively?
It gives me such Perl one-liner vibes: the `perl` command combined with the `-p` and `-e` flags lets you write super concise programs for bash pipelines.
I think you’re confusing Progressive Web App (PWA) with Progressive Enhancement. PWA is basically a web app (typically an SPA) that behaves like a native app, as described in the MDN page they reference. Loading progressively such that the page is still useful without JS is Progressive Enhancement.
All those rules are derived from a much smaller set of rules that are easier to understand. In Rust, those rules are more or less "breaking changes require a major version, unless the breaking change could have been avoided by the caller by adding disambiguation ahead of time." More details here: https://predr.ag/blog/some-rust-breaking-changes-do-not-requ...
Rust generally has more of these exceptions (breaking changes that are technically considered non-major) than most languages, so writing such guidelines for Python or Java shouldn't be too difficult.
The same tech and ideas that power `cargo-semver-checks` could be repurposed for those languages as well. If a company is interested in sponsoring such work, I'd be happy to help build something like that!
For Java it should be easy to create such a list if one doesn't already exist (even easier than for Rust, I assume). For Python, I think the language's dynamism would make it hard to come up with anything other than a subjective set of basic, non-comprehensive guidelines (which still sounds useful, just not from the perspective of a tool like this).
I just watched the presentation linked in the post, and I really liked the tool Roy talked about at 19:00-22:40.
He also answered a question (34:07-35:23) about which algorithm they use to actually tune the thresholds, saying he was sure it would be open-sourced at some point.
So I searched through the entire list of public repositories at https://github.com/Netflix?sort=stargazers and didn't find anything resembling what Roy described.
Does anyone know:
1. What is the name of the metrics tuning system?
2. Is it open-source?
3. Is it actively supported or is it deprecated?