> more in a “how should the communication patterns in this system work” way.
I guess that’s the point where I see so much overlap with tests. It seems like we’re talking about test inputs and test outputs.
Even with IPC or distributed systems I often break problems down in a way that makes them testable in a local system with repeatable results. It’s not 100% perfect because you can’t account for every failure scenario.
I do see your point though about constraints of the target language.
Testing one execution is a start but isn't enough in a nondeterministic distributed system. If you need to check that an invariant holds across all possible executions (e.g. node 3 is down and node 4 is suddenly faster than node 5), you could brute-force that yourself, but TLA+ has it built in.
That's exactly it. A test verifies one potential execution path. A TLA+ model verifies every execution path, in every order those events could happen.
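To make the "brute force it yourself" option concrete, here's a minimal sketch (names and scenario are mine, not from the thread) that enumerates every interleaving of two non-atomic increments and checks an invariant on the final state. This is hand-rolled exhaustive state exploration, the thing TLA+'s model checker (TLC) does for you at scale:

```python
from itertools import permutations

# Two processes each do a non-atomic increment: read counter, then write
# back read-value + 1. Invariant we hope holds at the end: counter == 2.

def run(schedule):
    """Execute one interleaving of the per-process steps."""
    counter = 0
    local = {"p1": None, "p2": None}
    for proc, step in schedule:
        if step == "read":
            local[proc] = counter
        else:  # "write"
            counter = local[proc] + 1
    return counter

# Each process's steps in program order: read before write.
steps = [("p1", "read"), ("p1", "write"), ("p2", "read"), ("p2", "write")]

def valid(schedule):
    # Keep only interleavings that preserve each process's own order.
    return all(
        schedule.index((p, "read")) < schedule.index((p, "write"))
        for p in ("p1", "p2")
    )

violations = [s for s in set(permutations(steps)) if valid(s) and run(s) != 2]
# A single unit test picks one happy-path schedule and passes; exhaustive
# enumeration finds the schedules where an increment is lost.
```

A test of the program-order schedule passes (`run(steps) == 2`), but four of the six valid interleavings lose an update. That gap between "one path passes" and "all paths hold" is exactly what the model checker closes.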
One of the tricky parts I found was figuring out what granularity to use when writing specifications.