I’m not quite sure what you are trying to say here.
In both of the examples you give, you seem to be assuming that ‘not caring about the functionality of function X in function Y’ means ‘not caring about function X at all’.
This is untrue. You should test both.
If you want shouldExecute to depend on an env variable, you have a separate test for that: one for the positive scenario and one for the negative.
In the same sense, you have separate tests for Compile and Validate, but in the Compile test you may not care about the validation. At the very least you should have a test for Validate separate from Compile.
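A hypothetical sketch of what those two separate tests could look like, in Python. Note that shouldExecute, its logic, and the FEATURE_ENABLED variable name are all assumptions here, since the original code isn't shown:

```python
import os

def shouldExecute() -> bool:
    # Hypothetical behavior: execute only when the env variable is set to "1".
    return os.environ.get("FEATURE_ENABLED") == "1"

# Positive scenario:
os.environ["FEATURE_ENABLED"] = "1"
assert shouldExecute() is True

# Negative scenario:
os.environ.pop("FEATURE_ENABLED", None)
assert shouldExecute() is False
```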
Naturally, I agree that the shouldExecute / ValidateSyntax methods should be tested too, but this is not the whole story.
Let me clarify my point with a more precise example.
Let's say we want to test this:
    ParseUrlDomainAndPath(string url) {
        string validatedUrl = Validate(url);
        (domain, path) = /* inline logic to extract domain and path from validatedUrl */;
        return (domain, path);
    }
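Concretely, a runnable sketch of this shape in Python; the validation rule and the extraction details are assumptions, since only the method's outline is given above:

```python
def Validate(url: str) -> str:
    # Assumed rule: accept only http(s) URLs, otherwise throw.
    if not (url.startswith("http://") or url.startswith("https://")):
        raise ValueError(f"invalid url: {url}")
    return url

def ParseUrlDomainAndPath(url: str) -> tuple[str, str]:
    validatedUrl = Validate(url)
    # Inline logic: strip the scheme, then split on the first "/".
    rest = validatedUrl.split("://", 1)[1]
    domain, _, path = rest.partition("/")
    return (domain, "/" + path)

# ParseUrlDomainAndPath("https://testDomain/testPath")
# → ("testDomain", "/testPath")
```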
Now I have a few options for unit testing it:
1. I could mock out Validate to return the url passed as input, call ParseUrlDomainAndPath("___:||testDomain|testPath") in the test, and assert that it returns (testDomain, testPath). Such a test might even pass, if the inline logic for extracting domain and path is not too fussy about the delimiters. In that case I end up with a test telling me "if you call ParseUrlDomainAndPath with a URL that has | instead of / and some weird scheme of ___, it will succeed". This is a lie. If you call it in production, it will fail validation. So the test gives you a false impression of how the system behaves. You are testing how parsing of domain and path works on a URL that has | instead of /, but this won't happen in production. Thus, you are testing made-up behavior. A waste of time.
2. As in 1., but instead I could mock out Validate to return whatever validatedUrl I want, completely disregarding url. In that case, what is the point of even having Validate involved in the test? Instead, let's refactor towards a functional core: take the inline logic, capture it in a method called ExtractDomainAndPathFromUrl(string validatedUrl), and pass validatedUrl to it directly. No need to deal with Validate at all, no need to mock anything, no need to fix up any broken mocks. Great!
3. As in 1., but as input I pass "foo". The mocked-out Validate returns "foo", and we are now trying to extract a domain and path from it. This will either throw an exception or return garbage, so our test fails. But we don't care about this failure at all. In actual production behavior the domain and path extraction logic would never even execute, because Validate would fail beforehand. So here we have the reverse of 1.: in 1. we have a test telling us production will work while in reality it won't (as Validate will throw), and here we "found a bug" (the test fails) that doesn't matter, as it is impossible in production. Again, a waste of time.
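Both pitfalls can be sketched in Python. The validation rule, the deliberately lenient extraction logic, and the use of plain reassignment as a stand-in for a mocking framework are all illustrative assumptions:

```python
import re

def Validate(url):
    # Illustrative rule: reject anything without "://" or containing "|".
    if "://" not in url or "|" in url:
        raise ValueError(f"invalid url: {url}")
    return url

def ParseUrlDomainAndPath(url):
    validatedUrl = Validate(url)
    # Deliberately lenient inline logic: treats any of : / | as delimiters.
    parts = [p for p in re.split(r"[:/|]+", validatedUrl) if p]
    return (parts[1], parts[2])

# Stand-in for mocking: replace Validate with a pass-through.
_real_Validate = Validate
Validate = lambda u: u

# Option 1: the test passes on a URL the real Validate would reject -- a lie.
assert ParseUrlDomainAndPath("___:||testDomain|testPath") == ("testDomain", "testPath")

# Option 3: the test fails on "foo" -- a failure impossible in production.
irrelevant_failure = False
try:
    ParseUrlDomainAndPath("foo")
except IndexError:
    irrelevant_failure = True
assert irrelevant_failure

Validate = _real_Validate  # restore the real method

# In production, the same call that the mocked test blessed actually throws:
production_throws = False
try:
    ParseUrlDomainAndPath("___:||testDomain|testPath")
except ValueError:
    production_throws = True
assert production_throws
```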
So with 1. and 3. being a waste of time, the only option left is 2. You have one test that a) checks that Validate behaves correctly, b) checks that ExtractDomainAndPathFromUrl behaves correctly, and c) checks that both of these methods collaborate with each other correctly (aka a "mini integration test").
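What that single mock-free test could look like, sketched in Python; the implementations are illustrative assumptions:

```python
import re

def Validate(url):
    # Illustrative validation rule.
    if "://" not in url:
        raise ValueError(f"invalid url: {url}")
    return url

def ExtractDomainAndPathFromUrl(validatedUrl):
    # The former inline logic, now a directly testable "functional core" method.
    parts = [p for p in re.split(r"[:/]+", validatedUrl) if p]
    return (parts[1], parts[2])

def ParseUrlDomainAndPath(url):
    return ExtractDomainAndPathFromUrl(Validate(url))

# One test covering a) Validate, b) ExtractDomainAndPathFromUrl,
# and c) their collaboration, on realistic data:
assert ParseUrlDomainAndPath("https://testDomain/testPath") == ("testDomain", "testPath")

# And the realistic negative scenario:
rejected = False
try:
    ParseUrlDomainAndPath("not a url")
except ValueError:
    rejected = True
assert rejected
```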
You could now argue that this is wrong: that I should have one test for Validate, one test for ExtractDomainAndPathFromUrl, and one test for ParseUrlDomainAndPath that mocks out both Validate and ExtractDomainAndPathFromUrl. But let me ask: why? In that case you lose the benefit of the "mini-integration" test. When you test ParseUrlDomainAndPath with everything mocked out, you test an empty husk of logic, checking only whether the calls are made in the proper sequence. You cannot even assert anything meaningful! (aka the "mockery" anti-pattern). You end up with 3 tests instead of 1 and a cr*pton of unreadable, brittle mock logic. And chances are, the one test of ParseUrlDomainAndPath that doesn't mock any internal business logic will already cover significant parts of Validate and ExtractDomainAndPathFromUrl, and so will reduce the need for additional "corner case" tests targeting those methods directly. Having 1 test with proper in-process dependencies instead of 3 tests plus mocks is a win all over the place: less testing logic, better ability to catch bugs (thanks to the bonus mini-integration testing and realistic data), an executable specification aiding program comprehension, less brittle tests, no misleading green tests, and no made-up, irrelevant failing tests.
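The "mockery" anti-pattern can be sketched too: with everything mocked out, all the test can check is call sequence and canned data. This is a Python illustration, with made-up names, not the original code:

```python
from unittest.mock import Mock, call

# One parent Mock; accessing attributes auto-creates tracked child mocks.
mocks = Mock()
mocks.Validate.return_value = "whatever"
mocks.Extract.return_value = ("d", "p")

def ParseUrlDomainAndPath(url):
    # An "empty husk": both collaborators are mocks.
    return mocks.Extract(mocks.Validate(url))

result = ParseUrlDomainAndPath("anything")

# All we can assert is that the calls happened in sequence...
mocks.assert_has_calls([call.Validate("anything"), call.Extract("whatever")])
# ...and that the canned value came back. Nothing about real parsing is tested.
assert result == ("d", "p")
```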
There is one more, very important benefit: you can step through such a test with a debugger and see how all the components collaborate with each other, on real data. But if you use mocks and fake, oversimplified data, you get very shallow slices of code and cannot reason about anything relevant.