> That is, the only specification for how the system should work is how it worked yesterday.
You'd think, if this were the desired model, you could partially automate the "writing unit tests" part of this process. (Integration tests no, but unit tests yes.) The "spec" for the unit tests is already there, in the form of the worktree of the previous, known-good commit.
That means that, in a dynamic language, you'd just need a "test suite" consisting of a series of example calls to functions. No outputs specified, no assertions—just some valid input parameters to let the test-harness call the functions. (In a static language, you wouldn't even need that; the test-harness could act like a fuzzer, generating inputs from each function's domain automatically.)
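A minimal sketch of what such an assertion-free "test suite" might look like in Python. The functions and inputs here are hypothetical stand-ins; the point is that the suite contains only valid parameters, and a harness records whatever the functions return:

```python
# Hypothetical example functions standing in for real project code.
def slugify(title):
    return title.lower().replace(" ", "-")

def clamp(n, lo, hi):
    return max(lo, min(n, hi))

# No expected outputs, no assertions -- just valid input parameters
# so a test-harness can call each function and capture the result.
EXAMPLE_CALLS = [
    (slugify, ("Hello World",)),
    (clamp, (150, 0, 100)),
]

def snapshot(calls):
    """Call each example and record its output for later comparison."""
    return {fn.__name__: fn(*args) for fn, args in calls}

print(snapshot(EXAMPLE_CALLS))  # {'slugify': 'hello-world', 'clamp': 100}
```

The "expected" side never appears in the suite at all; it comes from running the same snapshot against the known-good commit.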
The tooling would then just compare the outputs of the functions in the known-good build to the outputs of the same functions from your worktree. Anywhere they differ is an "assertion failure." You'd have to either fix the code, or add a pragma above the function to specify that the API has changed. (Though, hopefully, such pragmas would be onerous enough to get people to mostly add new API surface for altered functionality, rather than in-place modifying existing guarantees.) A pre-commit hook would then strip the pragmas from the finalized commit. (They would be invalid as of the next commit, after all.)
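A sketch of the diffing step, under the assumption that both snapshots are plain name-to-output mappings and that the pragma reduces to a set of function names whose API is declared changed:

```python
# Compare the known-good commit's outputs against the worktree's.
# Any mismatch is an "assertion failure" unless the function carries
# a (hypothetical) api-changed pragma exempting it.
def diff_snapshots(known_good, worktree, pragmas=frozenset()):
    failures = []
    for name, old_output in known_good.items():
        new_output = worktree.get(name)
        if new_output != old_output and name not in pragmas:
            failures.append((name, old_output, new_output))
    return failures

old = {"slugify": "hello-world", "clamp": 100}
new = {"slugify": "hello_world", "clamp": 100}

print(diff_snapshots(old, new))                       # slugify flagged
print(diff_snapshots(old, new, pragmas={"slugify"}))  # change permitted, []
```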
Interestingly, given such pragmas, the pre-commit hook could also automatically derive a semver tag for the new commit. No pragmas? Patch. Pragmas on functions? Minor version. Pragmas on entire modules? Major version.
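That semver rule is mechanical enough to write down directly. A toy sketch, assuming the hook can already classify pragmas as function-level or module-level:

```python
# Derive the next semver tag from the pragmas present in a commit:
# no pragmas -> patch bump; function-level pragmas -> minor bump;
# any module-level pragma -> major bump.
def next_version(version, function_pragmas, module_pragmas):
    major, minor, patch = version
    if module_pragmas:
        return (major + 1, 0, 0)
    if function_pragmas:
        return (major, minor + 1, 0)
    return (major, minor, patch + 1)

print(next_version((1, 4, 2), [], []))           # (1, 4, 3)
print(next_version((1, 4, 2), ["slugify"], []))  # (1, 5, 0)
print(next_version((1, 4, 2), [], ["parser"]))   # (2, 0, 0)
```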
Ah, but the spec is often really poorly defined. That is, all sorts of edge cases and uncommon code paths don't work as imagined. And, of course, it's full of bugs.
I have found countless "bugs" and issues when writing tests for existing code. I also often have to refactor the code to make it clearer what it's doing or to make it testable.
Unit tests are a development tool more than they are a testing tool. They are a means to produce correct, well-documented, well-specified, cleanly separated code.