If you care about the experience of your co-workers, you won't repeatedly break stuff. (Or if you do, it's because your tooling hasn't scaled enough to keep up; you need a staging area or precommit hooks or whatever.) If you know you're trusted to do your job, you will work to maintain that trust.
But things need to be architected for trust. If you have lots of rigid processes, there's no room for manually executed tools that only get used when someone decides to use them. People will lean on the process, and every time something goes wrong, the process will be "improved" (as in, it catches one more potential issue, at the cost of taking longer, catching two more non-issues, and making people batch up their changes more, causing more conflicts, etc.) In a high-trust environment, people will use the manual tools when appropriate, and so there's an incentive to architect things such that the results of those tools and processes are useful. Tests are more likely to test real things. Staging environments will be made to better reflect prod.
If people care about it, they'll make it work. If they don't, they'll make it "work".
But yes, I agree that as you scale up, more and more things will get missed and you'll need to balance things out. It's just so, so common to go too far in the rigid low-trust direction.
People are so terrified that something might go wrong that they'll do things that end up making sure that something will go wrong. It'll be something later, or somebody else's problem, or whatever.