Mh. I'm not dealing in software architecture but rather in infrastructure architecture. Still, it's great to see that the onboarding documentation I'm currently writing is mirroring the C4 architecture model to a decent degree.
Like, at the highest level, we have the different Nomad clusters with the stuff around them, how these are used at a business level, relevant regulations and such. This splits into a number of identically structured datacenters with a number of connections between them. Then, each datacenter consists of a number of software clusters, some deployed, some not. It's pretty much the same code with the same toggles, just somewhat different due to the different underlying cloud providers. Which clusters are deployed or not is a risk-management decision as well as a business decision. But that's where the high-level overviews stop, because then you get into the weeds. And not just a little bit: that's when you need real Postgres chops to manage some of those clusters.
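That hierarchy (landscape → identically structured datacenters → software clusters, same code and toggles, differing mainly in provider and deployment decisions) can be sketched as data. All names here are hypothetical, a minimal sketch of the shape, not our actual inventory:

```python
from dataclasses import dataclass, field

@dataclass
class SoftwareCluster:
    name: str
    provider: str           # underlying cloud provider differs per datacenter
    deployed: bool = False  # deploying or not is a risk/business decision
    toggles: dict = field(default_factory=dict)  # same toggles everywhere

@dataclass
class Datacenter:
    name: str
    clusters: list = field(default_factory=list)

# Two identically structured datacenters; same cluster set,
# different providers, different deployment decisions.
dc_a = Datacenter("dc-a", [
    SoftwareCluster("billing", provider="aws", deployed=True),
    SoftwareCluster("reporting", provider="aws", deployed=False),
])
dc_b = Datacenter("dc-b", [
    SoftwareCluster("billing", provider="gcp", deployed=True),
    SoftwareCluster("reporting", provider="gcp", deployed=True),
])

# The deployment matrix is the interesting high-level artifact.
deployed = {(dc.name, c.name)
            for dc in (dc_a, dc_b)
            for c in dc.clusters if c.deployed}
```

The point of modeling it this way is that the structure is uniform; only the `provider` and `deployed` fields vary, which is exactly what the high-level diagrams need to show.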
But I'm putting a lot of hope into these diagrams and explanations for onboarding new colleagues, or maybe presenting the infrastructural ideas at meetups or conferences. Nothing against them, but the lack of an abstract understanding of a few high-level ideas is really hurting a few new colleagues.
Like, if I have a ticket, what set of systems would be the right ones to work with? What happens if the ticket specifies ... other systems? What if you follow a runbook and the runbook suddenly banks portside really hard and tells you to touch systems outside the cluster you're working on? In most cases, this is going to be wrong. It might be hard to determine what would be correct here, but with a decent grasp of the structure, it usually ends up easy to tell when a path isn't correct.
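That sanity check ("does this step stay inside the cluster I'm working on?") is mechanical enough to sketch. Everything below is hypothetical, an illustration of the idea rather than any real tooling we have:

```python
def out_of_scope_systems(current_cluster: str,
                         target_systems: list[str],
                         cluster_systems: dict[str, set[str]]) -> list[str]:
    """Return the target systems that fall outside the cluster being
    worked on. A non-empty result is the 'banking portside' signal:
    stop and double-check before touching anything."""
    in_scope = cluster_systems.get(current_cluster, set())
    return [s for s in target_systems if s not in in_scope]

# Hypothetical scope map: which systems belong to which cluster.
scopes = {
    "billing":   {"billing-api", "billing-db"},
    "reporting": {"report-gen", "report-db"},
}

# A runbook step for the billing cluster that suddenly names report-db:
suspicious = out_of_scope_systems("billing",
                                  ["billing-db", "report-db"],
                                  scopes)
```

You can't automate the judgment of what the *correct* path would be, but flagging "this step leaves your cluster" is exactly the abstract understanding the diagrams are meant to teach.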