That actually gives me a reverse accessibility problem on iOS, because I will try to drag-scroll at the edge of the screen but accidentally grab the scrollbar, which goes in the opposite direction.
“The dog/child/vacuum cleaner ate my instruction manual” is probably a common support request at Lego, so there is no point in making the digital manuals difficult to access.
Begin by throwing the language away. Provide a tool that converts existing CMake scripts to the new, sane language. Use an established programming language for the new language.
These days we have Starlark, essentially a subset of Python, modified to be completely hermetic and deterministic (by removing some features and changing others) and to support parallel evaluation. It is very nice to work with, since you can reuse a lot of Python tooling and Python experience, but you don't have to carry around a complete Python implementation.
It was originally part of Bazel, but it is now used in other tools, and there are multiple implementations.
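For a sense of what that looks like, here is a small Starlark build file. Starlark shares Python's syntax; cc_library and cc_binary are Bazel's built-in C rules, but the target and file names here are made up for illustration:

    # A Starlark build file. No classes, no I/O, no unbounded recursion:
    # evaluation is hermetic and deterministic, so files can be loaded
    # and evaluated in parallel.

    cc_library(
        name = "parser",
        srcs = ["parser.c", "lexer.c"],
        hdrs = ["parser.h"],
    )

    # Python constructs like list comprehensions still work for
    # cutting down repetition:
    [cc_binary(
        name = "app_" + variant,
        srcs = ["main.c"],
        deps = [":parser"],
    ) for variant in ["debug", "release"]]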
I don't disagree that the CMake language is badly misguided. I think the SCons approach of just writing Python is more sensible than inventing a whole new scripting language on top of a build system.
But CMake probably will get you a program that compiles cross-platform within an hour or two, even for a novice. Autotools probably won't do that for you.
SCons is utterly horrible. It isn't a build system so much as a "build your own build system" system. It gives you absolutely nothing for managing dependencies or shared information, and no cross-platform way of handling compiler features.
As such, everything using it is its own unique system, completely alien to and not interoperable with anything else.
And because it is Python, it seems to lead people to embed parts of their application in the build system, so now those are intertwined as well.
I will run a million miles if I ever see a project using SCons. CMake is infinitely nicer and makes some sense, but I agree with the comments further up the thread that it needs a new, saner language mapped over the top (expressions would be a start).
Hm. We had an SCons-based build system around ten years back, where we ran ten different C++ compilers/toolchains for multiple native and cross-build targets without any problems. Yes, we had to write some stuff on top of it, but after some systematic design it went really smoothly.
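For readers who have never seen SCons, a minimal SConstruct gives the flavor. The toolchain-selection convention below is a simplified, hypothetical illustration, not the actual system described above:

    # SConstruct -- a minimal sketch (file names and the CROSS convention
    # are invented for illustration). SCons injects Environment and
    # ARGUMENTS into the script, so no imports are needed.

    env = Environment()

    # Select a cross toolchain from the command line, e.g.
    #   scons CROSS=arm-none-eabi
    cross = ARGUMENTS.get('CROSS', '')
    if cross:
        env.Replace(CC=cross + '-gcc', CXX=cross + '-g++', AR=cross + '-ar')

    env.Append(CCFLAGS=['-O2', '-Wall'])
    env.Program(target='app', source=['main.cpp', 'device.cpp'])

Because the script is plain Python, per-toolchain logic is just code, which is exactly the property being debated here.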
If the new language allowed such a tool to work, it probably wouldn't be much better: either it would retain most of the awful behavior of the old language, or the converted scripts wouldn't be any better than before.
The way the cmake_policy system works isn't bad, but policies unfortunately have only limited impact on the language itself.
Apparently not. It is crazy that they delete a random line of code and don't update or add a single test at the same time. Absolute madness. I wonder what they are doing instead that ensures the Kernel mostly works.
> I wonder what they are doing instead that ensures the Kernel mostly works.
First, they do have unit tests (KUnit). However, I suspect the "real" tests that result in a mostly-working kernel are massive integration tests run independently by companies contributing to Linux. And, of course, actual users running rc and release kernels who report problems (which I suppose is not unlike a stochastic distributed integration testing system).
This is how most software development worked before roughly the late 2000s. I remember working on a system that processed something like a billion dollars in revenue for a major corporation employing thousands of people, written in a mix of C and C++. Zero unit tests. They did have a couple of dedicated QA guys, though!
Yes, and that's what automated tests are for. They "replicate" specific conditions and make it possible to cover everything. That's what unit tests are. This has nothing to do with the physical world.
By passing it faked hardware. Yes, you have to write your APIs so they are testable. Yes, it is virtually impossible to retrofit unit tests into an old, large code base that was written without regard to testability. But no, it is not difficult at all to fake or mock hardware states in code that was designed with some forethought.
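To illustrate the pattern (in Python for brevity; every name here is invented, and real kernel code would of course be C): the driver touches the hardware only through a narrow interface, and the test hands it a fake.

    # A minimal sketch of the fake-hardware pattern (all names invented).

    class FakeUart:
        """Stands in for a UART register block in tests."""
        def __init__(self):
            self.tx_log = []
            self.rx_queue = []
            self.overrun = False   # tests can force error states directly

        def write_byte(self, b):
            self.tx_log.append(b)

        def read_byte(self):
            if not self.rx_queue:
                raise IOError("rx empty")
            return self.rx_queue.pop(0)

    def send_packet(uart, payload):
        """Driver code under test: framing kept trivial on purpose."""
        uart.write_byte(0x7E)            # start-of-frame
        for b in payload:
            uart.write_byte(b)
        uart.write_byte(0x7F)            # end-of-frame

    # The test asserts on exact bus traffic without real hardware.
    uart = FakeUart()
    send_packet(uart, [1, 2, 3])
    assert uart.tx_log == [0x7E, 1, 2, 3, 0x7F]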
That may hold for a trivial device or a perfectly spec-compliant device. However, the former is not interesting and the latter does not exist. I agree that more test coverage would be beneficial, but I think you're heavily downplaying the difficulty of writing realistic mock hardware.
Do you have experience doing this in C/C++? There are a bunch of things about the language models for both (e.g. how symbol visibility and linkage work) that make doing DI in C/C++ significantly harder than in most other languages. And even when you can do it, doing this generally requires using techniques that introduce overhead in non-test builds. For example, you need to use virtual methods for everything you want to be able to mock/test, and besides the overhead of a virtual call itself this will affect inlining and so on.
This doesn't even consider the fact that issues related to things like concurrency are usually difficult to properly unit test at all unless you already know what the bug is in advance. If you have a highly concurrent system and triggering a specific bug requires a bunch of different things to be in some specific state, you CAN write a test for this in principle, but it's a huge amount of work and requires that you've already done all the debugging. Which is why C/C++ developers rely on a bunch of other techniques, like sanitizer builds, to test for issues like this.
Right, doing interfaces that support DI would also force Linux to grow up and learn how to build and ship a peak-optimized artifact with de-virtualization and post-link optimization and all the goodies. It would be a huge win for users.
The fact that it would be hard to test certain edge cases does not in any way excuse the fact that the overwhelming bulk of functions in Linux are pure functions that are thread-hostile anyway, and these all need tests. The hard cases can be left for last.
Another way to put it: If you know how long something will take in advance, you have a solution in mind. It is unlikely that this solution is (A) the best one and (B) the one you will actually implement. It would be stupid to ignore information you learned along the way. If you could actually predict the future you should invest in the lottery, not in software.
EDIT: Of course there are projects where you actually know exactly what to do. Happens a lot in consulting. That has nothing to do with Agile though.
I actually said something like that during a meeting. "Yes, you can do it that way, but if someone finds out we will be in the news." This argument worked surprisingly well.
It's weird how languages like YAML, XML and JSON are very much designed for communication between machines, but are still the default choice for human input. Actual programming languages - designed for use by people - are rarely considered for high level configuration.
I have actually seen something similar to this happen:
1. We just need a few configuration options. Let's add an XML configuration file.
2. Keeping all the configuration files in sync for different environments is a lot of work and really error-prone. Let's generate all the configuration files. What about an XML meta-configuration file?
3. Some things are different between environments. We need conditionals in our XML meta-configuration language.
4. There is a lot of repetitive configuration. It would be more maintainable if we had loops, variables, integers, string interpolation, functions, ... in our XML meta-configuration language.
Great, now we've invented our own awful programming language, one that lacks any tooling, documentation, or libraries and isn't compatible with anything else.
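Compare that with starting from a real language. A sketch of the same idea as plain Python (all option names invented): conditionals, loops, and functions come for free, along with linters, formatters, and a debugger.

    # config.py -- configuration as code (all option names invented).

    ENVIRONMENTS = ["dev", "staging", "prod"]

    def database_url(env):
        host = "localhost" if env == "dev" else "db.%s.example.com" % env
        return "postgres://%s:5432/app" % host

    CONFIG = {
        env: {
            "database_url": database_url(env),
            "debug": env == "dev",
            "workers": 1 if env == "dev" else 8,
        }
        for env in ENVIRONMENTS
    }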
>It's weird how languages like YAML, XML and JSON are very much designed for communication between machines, but are still the default choice for human input. Actual programming languages - designed for use by people - are rarely considered for high level configuration.
What are you talking about? YAML, XML (and its cousin HTML) and JSON were always intended to be written by and understood by human beings, no less so than any "programming language." All of these were designed to be "used by people" and machines.
>Actual programming languages - designed for use by people - are rarely considered for high level configuration.
And the rest of your comment illustrates why. What you have when you use a programming language for "high level configuration" isn't configuration. Configuration should describe constant state, not operate on or transform mutable state. What you have, then, is just more application layer, on top of your application.
Which you don't have with JSON. Or INI. You do still kind of have it with YAML, and definitely can with XML, but that's why a lot of people don't like YAML or XML when they get too complex.
What your example shows is dumping unnecessary complexity into configuration in order to maintain "simplicity" in the application. You would have the exact same problem using a high-level programming language; it would just be potentially infinitely worse, with the explosion of complexity and feature creep that comes with it.
Each employee chooses each year how much of their compensation they want in salary versus stock options. You can choose all cash, all options, or whatever combination suits you.

And what's really freaking cool: there are no compensation handcuffs (vesting) requiring you to stay in order to get your money. People are free to leave at any time, without loss of money, and yet they overwhelmingly choose to stay. We want managers to create conditions where people love being here, for the great work and great pay.