DevOps uses a capability model, not a maturity model (octopus.com)
164 points by kiyanwang on April 6, 2023 | 113 comments



DevOps as a movement died when it started being used as a job title. In my experience, it quickly stopped being a philosophy for getting Dev and Ops teams to work better together and just became a new name for System Administrator.

Before the popularity of the term "DevOps", it was always true that any System Administrator worth hiring knew how to script for automation (the job title for people who didn't was System Operator). Unfortunately, the market was flooded with a lot of people in that role who barely knew what they were doing, and who therefore resisted every change anyone else in their company wanted.

From my perspective, the greatest achievement of the DevOps movement was to push the bar higher for the expected skill level of the average SysAdmin. I see that as a good thing.


> DevOps as a movement died when it started being used as a job title. In my experience, it quickly stopped being a philosophy for getting Dev and Ops teams to work better together and just became a new name for System Administrator.

The "DevOps" job title can arguably just mean "sysadmin", but I do think the main ideals of the movement are now so tightly ingrained in software that we don't even notice.

It's easy to forget that 10-15 years ago, the most common dev/ops model was "toss it over the fence"--developers write code, do a ton of QA work on the test system, and then toss it to ops, who had no idea what metrics to look for beyond standard cpu/mem/etc. Your dev team would write up a "runbook" for operators to follow if anything broke, which was usually just restarting things until they got into a good state again. The big tech players of the 00s like AWS and Google were probably more advanced than this, but the rest of the world largely wasn't.

Now, the most common model for companies of all sizes is "if you build it, you operate it". Devs are expected to know what metrics to expose, have oncall rotations, and have direct access to their production systems. Cloud, CI/CD and git helped quite a lot in this regard, reducing time between deployments from months to weeks to days to hours and minutes.

This continues to be the lasting legacy of "DevOps", IMO; its impact on our industry shouldn't be understated.


> It's easy to forget that 10-15 years ago, the most common dev/ops model was "toss it over the fence"

15 years ago it was often still I without the C, integration that wasn't continuous. Code freeze 1 month before release, put all the bits of code together, everything breaks, try to fix it.


And with the bad (compared to Git) merging capabilities of SVN, last-minute merges of feature branches (hurray branchless!) for a release candidate to hand over to QA were a nightmare.


I do think CI/CD and automatic deployment made the difference. Coming from copying a zip file to a server, to deploying a WAR, to release trains with dedicated ops teams, most of my CTO coachees now have automatic deployments, with ops teams mostly doing monitoring, incident management, and capacity planning.

Many struggle with "if you build it, you operate it" because many developers don't want to be on pager duty.


> It's easy to forget that 10-15 years ago, the most common dev/ops model was "toss it over the fence"

Hum... My first guess is that what really changed was the ratio of developers working at places that do this to the ones working at places that integrate the jobs.

The wall was never the only model around. It still isn't. The same kinds of companies that practiced it still largely have walls. The IT industry just hired a lot of people.


> Hum... My first guess is that what really changed was the ratio of developers working at places that do this to the ones working at places that integrate the jobs.

Well sure, and that's because software companies that integrate the jobs are now so much more widespread and commonplace, thanks to "devops". Kind of my point.


I agree, but I think the GP makes a good point that it wasn't really any sort of initiative or movement within the organizations that had bad practices—or even the types of organizations that had those practices, like legacy IT orgs—but instead the groundswell of new organizations, including startups, small/medium businesses, and other more modern technology companies, that drove the adoption of new practices. And in most cases, they did this by necessity—they didn't have the funding to pay two separate teams to manage their new website, they didn't have the luxury of waiting for 6-month deployment cycles, and they needed the reliability of continuous testing but were able to take the up-front cost of short-term unreliability to build up those test suites. Only once engineers who had used these practices and seen them be successful at other companies started to migrate to more legacy industries did devops practices start seeing adoption outside of the "bleeding edge".


Your optimism is refreshing but ultimately wrong. We did "solve" those problems, but then immediately 10x'd the incidental complexity of both our development and operations work in exchange for nothing.


I'm not sure where this 10x comes from, but if this is about modern on-demand CI clustered runners vs. old-timey dedicated hardware and servers, then you have to take into account the fact that those hardware and servers were never properly cleaned of their mess, were hard to configure, and had low general availability. This switch was a big gain IMO, even if Docker and k8s are far from simple to operate.


As a developer and sysadmin, devops is a distinctly different thing. Deep knowledge of operating systems is traded for knowing how to manage entire virtual data centers with code. This is far beyond the "scripting automation" of the past.

From my perspective, DevOps is a fundamentally different job from what Systems Administration has historically been.

It’s a strange new world to me but I like it.


As a developer and sysadmin, there is no trade; the only difference is that you probably (and I'm saying probably because some poor fucker probably has at some point) won't need to debug NIC driver/firmware problems on a "cloud" server.

We have both on-prem and cloud stuff, we've run automation (via Puppet mostly) from the very beginning, and so far the biggest difference is that writing template-backed YAMLs is utter shit compared to a "proper" programming language or purpose-built DSLs.

Like, I complained that Puppet is just a kinda "shitty, half-finished programming language" compared to just having Python/Ruby as a DSL, but boy am I fucking happy to use it now (and, to be entirely fair, it got better over time as a language) compared to whatever the fuck tool is in vogue this time that uses the "data language + template language" model of interaction (because apparently programmers deploying the code can't program or something).

Same kind of work, sans running to the DC to fix stuff, but now you have black boxes you have no chance of fixing or analysing yourself, and your "broken" code might fix itself the next day: you thought it was your bug, but a given cloud API just decided to return a nonsensical error that made it look like your fault (greetings to the MS Graph API team here).


> As a developer and sysadmin, there is no trade; the only difference is that you probably (and I'm saying probably because some poor fucker probably has at some point) won't need to debug NIC driver/firmware problems on a "cloud" server.

Then the words "private cloud" drop by, and you find yourself fixing idiotic purchasing decisions that somehow led to building custom firmware ROMs for Intel X520 NICs.


If you listened to half the managers at my current client, DevOps simply means using YAML to configure your infrastructure.


Developers are allowed to call themselves developers at any layer of abstraction. Why shouldn't that be the case for System Administrators?


It is for you. It isn’t for HR and hiring managers.


> a new name for System Administrator.

Yes, this is definitely happening.

I try to frame "DevOps" roles not as "doing DevOps work", but instead "enabling DevOps work". So, for example, setting up systems to make it easier for developers to take control of their own deployments and environments.


I joined the industry when "Systems Administrator" was becoming a generic term for "person who adds users to a server". Same thing happened to "DevOps": Google made some noise about how cool it was for coders to do Ops work, so people started assuming "DevOps" meant "a coder who adds users to a server". Stupid people come to stupid conclusions and popularity cements the new definition. Same thing with the terms "hacker", "skinhead", etc. C'est la vie.

DevOps died because it was a handful of engineers trying to force a movement about solving business problems. We were never going to be successful. Business people need to push the movement, not us.

That said, we can continue the movement anyway, if only to improve our own work. If more true DevOps faithful become managers, then directors, then VPs, then maybe in 30 years engineering orgs won't be run as horribly as they are now.


What do you call the people who:

- Use Terraform to build infrastructure as code

- Get involved in containerising applications

- Run and operate Kubernetes

- Spend a lot of time on CI/CD

- Improve the development experience

- Implement service discovery

- Build and run developer platforms

- (To a lesser extent) Build and run cloud environments including Serverless components

This seems like a new set of responsibilities which don’t fit cleanly into Development or Sys Admin, and are substantial enough such that someone could specialise in this role full time.

I think that DevOps as a job title is one of the best things that ever happened to the industry.


You may like Platform Engineering even more as there's a lot of crossover with what you mention. A key difference from your list is that you enable teams to build and run their own applications, so they remain responsible throughout. You would provide a simpler way for them to do it, so they could self-serve.


Personally, I call this job role System Engineer.


For me, coming from Ops, DevOps seemed like a description for people who worked almost exclusively in Web, service-style companies, where their goal was to ensure that continuous integration and continuous deployment did what their names suggest.

Having a DevOps person on board also surely meant that the company didn't need a system administrator anymore, but that's not really because DevOps was a new name for sysadmin -- they automated sysadmins out of existence.

I still prefer not to touch Web and service-style products. And, in my world, DevOps doesn't really exist. People with a similar set of skills are usually called "infra" or "automation". Having worked in an automation department, one would most likely have learned enough to apply for a DevOps position in a company which needs that, and vice versa.


> a new name for System Administrator

A System Administrator's responsibility is limited to... administration/operation.

If you have "DevOps" positions that have the dual responsibility to operate and develop, that is something different, no?

---

I'm not saying this is actually the case. But DevOps being a job is not necessarily just a rebranding.


We are saying that this specifically is _not_ the case. The people in DevOps positions typically only operate.


It's like saying "hacking" disappeared when the word started being used to mean a crime.

Hackers who fiddle with stuff still definitely exist.

The same way the DevOps movement still exists.

But your point is not invalid: hiring a "DevOps" engineer is futile, especially given that the goal of any engineer in charge of DevOps should be to render their job obsolete.


> DevOps as a movement died when it started being used as a job title. In my experience, it quickly stopped being a philosophy for getting Dev and Ops teams to work better together

I would argue that it went the other direction. In my experience, the DevOps philosophy was, at its core, very similar to the agile philosophy; however, it met the same fate as the agile movement. Everyone who was an "agile" consultant or "scrum master" found a new buzzword to declare themselves an expert in, then used it to go around doing a whole lot of nothing and generating impressive-sounding promises before moving on to the next gig.


Eh, I see devops as automated systems administration. Knowing how to be a good by-hand sysadmin is an enormous advantage in knowing how to automate things.


> From my perspective, the greatest achievement of the DevOps movement was to push the bar higher for the expected skill level of the average SysAdmin. I see that as a good thing.

I wish I agreed. As far as I can see, the title is just as closely associated with being an expert consumer of cloud services as it is with actual skills relevant to development and IT operations, or anything we'd recognise as sysadmin today.


What upsets me about devops work is that everything I do at a client is a lineage of architecture and design that needs to be maintained going forward, and it only exists at that client.

The lineage of interesting or useful things I do is tied to the client and dies with that client, or when I leave.

I just think of the thousands of CI/CD systems, build systems, attempts at parallelising builds, impressive optimisations, tooling, and automation that have been written for each company over and over again, and there's no cross-pollination except when they are open sourced.

I suppose Kubernetes is part of the answer here, a distribution of practices that survives organisations and spreads between organisations and client-specific lineages of software evolution.

I want to work on interesting capabilities such as diagrammatic observability and live visualizations of systems.

I really need to make an idle cloud-environment simulation game where you invest in servers and capabilities to handle load, and problems occur randomly or on a schedule.


What upsets me about DevOps is that I know ops pretty well, but am not on par with dev, and every job that just needs someone to do ops work now wants to quiz me about O(n) before they'll interview. Then I look around at the people who are good at the Dev part, and they don't know their way around an OS or cloud infra.


Those concepts aren't that hard, though. Just spend a day and familiarize yourself with them. The most important takeaway from those kinds of questions is: can you pick the right data structure when you need to? Using a list vs. a hash map, and will you avoid doing things like nested for loops. Big O just puts into words why you want to avoid those things. Most of the concepts should be intuitive already if you have some experience. Personally, I haven't seen an interview go deeper than that.

If you're writing a script that will touch 10k servers, the operation is likely already slow. If you throw in an unnecessary for loop that iterates over everything and runs something, that's going to be a painfully slow and wasteful script.
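To make that concrete, here's a minimal Python sketch (hostnames and numbers invented) of why the list-vs-hash-map choice matters at that scale:

  # Hypothetical example: checking 10k hostnames against a blocklist.
  import time

  hosts = [f"host-{i}.example.com" for i in range(10_000)]
  blocklist = hosts[::2]           # plain list: O(n) per membership test
  blocklist_set = set(blocklist)   # hash map: O(1) average per membership test

  t0 = time.perf_counter()
  hits_list = sum(1 for h in hosts if h in blocklist)      # O(n^2) overall
  t1 = time.perf_counter()
  hits_set = sum(1 for h in hosts if h in blocklist_set)   # O(n) overall
  t2 = time.perf_counter()
  print(f"list scan: {t1 - t0:.3f}s, set lookup: {t2 - t1:.3f}s")

Same answer either way; the only difference is the data structure, which is exactly what those interview questions are probing.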


Same, and seeing companies putting devs in charge of interviewing ops people makes the game even worse.


Big O is kind of a valid question for ops in terms of response time versus load.


This isn't unique to devops; it's a general software thing. But there is an answer, which is the standard technology stack:

- AWS

- Postgres

- Linux

- Docker

- Jenkins or similar

- Slack

- PagerDuty

- Jira

- Packer

- Terraform

- etc.

If you stay on the path there will be dozens of tools, plugins, and paths to do outstanding things with minimal work. Parallelising builds, for example, is built into Jenkins (if you define the workflow), which will autoscale workers in a setup that takes < 1 hr to stand up in AWS.

If you're writing code to solve a problem that a standard tool exists for, you're the problem.


< 1 hr to set up Jenkins, never mind with auto-scaling workers on AWS, is an obscene underestimate.

You'll probably spend at least an hour figuring out IAM permissions before you even get to deploy a VM.


Nah, just add a few layers of abstraction and throw a templated YAML at another tool that sets it all up for you!

Something broke? Well, rip it up and reinstall! But what about the data? Who cares?


Well, it certainly will the first time, sure. But in your next 3 jobs it should be much easier.


Kind of rude, but "parallelising builds" was referring to the work Uber did with their monorepo setup, due to all their microservices.


What upsets me about devops work is that for many organizations it is a synonym for systems engineer: lots of Ops and very little Dev.


This. I didn't see any meaningful difference between "devops" roles and "sysadmin" roles. Devops is a more recent name which had an IaC movement behind it, but it looks like the name stayed while the role regressed.


The term Devops has been almost completely co-opted by sysadmins at this point.

Platform engineering is a term I used to use in its place to try and differentiate, but it seems that's being taken over now as well.


Being a company and having a job title with DevOps in the name is a great way to out yourself as not following the DevOps philosophy. It’s quite ironic. If you know you know.


Surely GitHub workflows https://github.com/actions/starter-workflows or GitLab templates are the culmination of your devops work?

Not Kubernetes (?) unless I missed something.

The rewarding thing for me is applying these and other sensible defaults, like observing iteration speed and driving delivery.


In the previous iteration of our startup, we were trying to build a day-2 infrastructure automation platform. Based on interviews with customers, engineers, and managers, and on the first version of the product, we arrived at the conclusion that it is close to impossible to productize, for exactly the reason the commenter is mentioning. Starter workflows or templates sound like a good idea, but in reality the minor tweaks needed to make them work will push you into a significant investment in learning them.

In the real world, the permutations and combinations are endless. Every org has peculiar problems created either by the engineers themselves or by certain business decisions.


I'm imagining something similar to a 3D wireframe model of networks and systems, with particles that move around between nodes that represent requests or IOPS.

Circuit breakers, bottlenecks, IOPS, load shedding, and traffic behaviours can all be visualised.

I'm not sure how you would represent latency with this visualization, but that's also important. It represents throughput more than latency.

Can also be used to represent human work/tasks itself.


Well, you could use a heatmap to represent latency, where the hottest parts would be those with high latency: turn those particles and boxes red or blue depending on their mean latency...
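A rough sketch of that idea in Python (all sample data invented; assumes matplotlib is available for the colormap):

  # Map each node's mean latency onto a diverging colormap (blue = fast, red = slow).
  import matplotlib.pyplot as plt
  from matplotlib.colors import Normalize

  mean_latency_ms = {"api": 12.0, "db": 85.0, "cache": 2.5}   # invented sample data
  norm = Normalize(vmin=0.0, vmax=100.0)   # clamp 0-100 ms onto [0, 1]
  cmap = plt.get_cmap("coolwarm")

  node_colors = {node: cmap(norm(ms)) for node, ms in mean_latency_ms.items()}
  # node_colors now holds RGBA tuples you could feed to whatever draws
  # the particles and boxes in the visualization.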


Sounds a bit complex; I usually stick to the R.E.D. method for services and, from the devops perspective, largely iteration speed.


"diagrammatic observability and live visualizations of systems"

I hear you. IMHO there's massive opportunity / unmet need here. And working on the things that interest and excite you is the surest path to (or maybe even the definition of) success. I hope you can find a way to start pursuing your ideas! Good luck!


> and there's no cross-pollination except when they are open sourced

This is true of software dev as well. Open sourcing things is something you have to sell well and demand up front.


Not really sure we need another "model" of devops. All of the "philosophy of devops" discussions I've ever seen really boil down to "I am annoyed my coworkers aren't owning X, and here's a big fancy post explaining why it's their fault in smart words."

In my experience, a big fancy post in smart words is never actually a practical implement to change the current practices of a company.


Lately I've been made to understand that DevOps or Software Engineering is not for me; capability/maturity models, processes written in anything other than code, or metrics this and metrics that just isn't me. We can argue about the meaning of words all day long, but like I said, this just isn't me. Some of us just prefer to stay close to the metal (as in reality, with all its complexity that escapes the precision of words, and that is totally OK) and are deeply suspicious of any excessive abstractions. There must be others like me, so I am in search of my tribe, and I want to know what we call ourselves. Maybe we don't want to call ourselves anything, but where can we meet? Who wants to hire us? Please tell me if you know. Thanks!


This will be an unpopular opinion because HN is full of web folks: do not listen to jedberg. He drank the cloud kool-aid long ago, and I can only assume he is not looking at billing and TCO critically, or sitting in the board room discussing why the infra costs are higher than the staffing costs.

Personally, I do care that cloud is on the order of 10x the price, and I do have to explain infra cost as a metric on our road to profitability to our board.

My company does work closer to the metal, especially because for us the notion of "scaling" is not that we can simply slap a load balancer or a cache in front of a bunch of servers and call it a day: when you work in HFT or AAA games, the performance you get on a single machine really matters, as does the ability of that machine to work reliably, since there is state.

People in HFT and games really bleed for people like you and me, since it's not as simple as CRUD stateless HTTP stuff, where performance is measured in milliseconds and the average node runs 2GHz on all cores with 14 different abstractions.

Cloud optimises for the web; when it's not the web, there are major dragons. Those are your people.


I don't want to disappoint you, but what you want to do is not really supported by many companies anymore. Maybe the cloud providers would be interested in someone close to the metal. A few companies still run their own data centers, but even for them, hardware is so cheap it's usually easier to just throw more hardware at the problem than optimizing what you've got.

The one area where you might want to focus is HPC. All the companies building huge GPT models need highly optimized hardware.


I don't think any place actively looks for grey-beards in particular. But well, they do have an easy time posing as seniors.

The only question is whether pushing for working things instead of nice talk will get enthusiastic support or opposition from the management.


Companies that need every engineer to "Do Everything™" are doing it wrong. A team may need to cover a lot of ground, but it does so by putting together folks who have different skills. It's a management skill to assemble a team with diverse experience, rather than only hiring individuals who are "a team on their own".

I don't think you should feel excluded from "Software Engineering" just because you don't have a passion for managing containers. We need people who can write great code in the world, too!


SOURCErer


My challenge to the author is: why can't maturity models be dynamic and continually improved? What is being suggested here is too complex and involves a lot of busy work.

Why can't you, as part of the process, implement a capability-driven, dynamic (as in, you review and change it) maturity model?

I mean, capability-driven sounds great to talk about, but how do you go about implementing it? A maturity model is simple to define and measure. I worry about endless meetings with what is proposed here, but maybe I misunderstood a few things.


This reminds me of the time my then-girlfriend's friend brought a textbook from some course she took for an MBA degree. We had to wait hours for a train, or our other friend was late... and so, having nothing better to do, I tried reading that coursebook. At first it was hard to work through the terminology, but once you get the hang of it, you realize that if only they used plainer language, the whole book could've been refined into a few trite statements. Like... "reward diligent employees and avoid the neglectful ones" and so on.

This post reads just like that. I wish we had better insight into how to manage processes in programming businesses... but so far I haven't seen anything that truly goes beyond the obvious stuff. The only difference is how much the author is willing to elaborate on that obvious stuff.


I don't think this is an either/or problem. Very simply put, maturity and capability form a two-dimensional "grid" within which each organization fits its levels - e.g. "maturity levels" on the vertical axis and "capability controls" on the horizontal one. Where the two meet on the grid, you get the two-dimensional org level. I recently had to conduct an org maturity analysis and identified an existing level of #1 (out of 5) - i.e. very immature in some capabilities, functions, and their associated specific processes - then defined and implemented a maturity model and a path for them towards achieving it, whereas their "horizontal"/capability level was at "semi-automated and developing new" (between 3 and 4).


Doesn't this come down to tail vs. dog?

Suppose I plan to assemble infrastructure for some ideal of "Deployment Maturity" - no downtime, any time of day, one-click, etc. But it turns out the development team has designed the software to be un-load-balanceable thanks to in-memory sessions, so, thud goes my big plan. That's a very common problem.

Of course many of us see a "proper" devops discipline as interdisciplinary, so I suppose those folks would tell me to get in there, gently push devs out of the way, and fix that session mgmt problem. Of course I need advanced skills, but I also need advanced permission. Somebody's gonna fight me. Now I'm turning into more of a site reliability engineer.
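(Concretely, "fixing" that usually means moving session state out of process memory into a shared store, so any node behind the load balancer can serve any request. A minimal sketch, assuming the redis-py client and a reachable Redis; all names here are hypothetical:)

  # Externalize session state so any app server behind the LB can serve the user.
  import json
  import redis

  r = redis.Redis(host="localhost", port=6379)

  def save_session(session_id, data, ttl_seconds=3600):
      # SETEX stores the session with an expiry, replacing the in-process dict.
      r.setex(f"session:{session_id}", ttl_seconds, json.dumps(data))

  def load_session(session_id):
      raw = r.get(f"session:{session_id}")
      return json.loads(raw) if raw is not None else None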

So the maturity model definitely applies - especially when it comes to security - but when you're in devops-just-means-ops mode, you're much more tail than dog and it seems like you have no choice but to put capability first.


Almost as though there should be a “Capability Maturity Model.” Man, I wish someone had come up with that 40 years ago.


My only exposure was CMMI. Your comment gave me flashbacks. It made a lot of consultants a lot of money, that’s for sure. It was also a good way for federal acquisitions officials to steer contracts to preferred vendors, few of whom were able to deliver quality products on time.


./configure;make;make install ?



For a cynical (and, perhaps bitter?) laugh, how about working at an org that has a <= 0 score?

https://en.wikipedia.org/wiki/Capability_Immaturity_Model


Exactly my thoughts. I worked in a company that was CMMI 5 certified. It was a bureaucracy.


I looked up CMM, but it seems different from what is described in the article by Octopus, and sounds more like just a maturity model.


Can you clarify what each of your two axes are? This sounds interesting.

On a tangent, I once drew 2 dimensions and put all our political parties on them. It turned out "left" and "right" were trending toward "up" and "right", not opposites. The upper-right corner was totalitarian, BTW.


I thought we were headed for neo-feudalism.


Totalitarianism is an implementation methodology. Actual underpinning values can vary by system.


Useful take on a common problem. Another way of approaching the same concept would be through the Cynefin[0] framework; maturity models occupy the simple or complicated contexts, whereas capability models can be fulfilled via iterating in the complex domain.

[0] https://en.m.wikipedia.org/wiki/Cynefin_framework


Cynefin is a great model - I must admit I've been interested in it for a long time (I think Liz Keogh brought it to my attention) - but I've not applied it practically. Anyone got ideas for using it IRL?


There are a couple of specific situations I tend to run into fairly frequently with my consultant hat on:

1. People or organisations wanting to implement process by well-defined stages and checklists. They'll see that it works in certain situations elsewhere, but not realise that such processes will fall apart quickly in the complicated or complex regions they're trying to manage. Talking through where they sit on a Cynefin diagram can help them understand which action model is the most useful, whether it's really possible to define "best practice" for any given situation, etc.

2. Products being managed as though they were projects. Big organisations tend to run on a project model by default because it seems like a way for them to manage risk - a certain amount of budget signed off for a few months to a year that ensures X, Y, Z is delivered in a certain timeframe. The true risk is that this absolutely kills innovation for an early-stage product looking for PMF. You don't really know what the end result is supposed to look like, but you probably do know what the process for getting there should be. Being able to talk about complicated (often, projects) and complex (often, product development) regimes as distinct areas that require different handling is a good start.


The book Accelerate (which I really recommend reading, since it gives you the ammunition to sell DevOps culture to your coworkers and leaders) hammers this exact point as one of its very first.

Maturity models quickly devolve into cargo cults, and of course making metrics a goal makes those metrics useless.

What really matters are meaningful capabilities, which, when read aloud, just feels so obvious.


What is devops here again? Is it The Phoenix Project on one extreme, or system admin for the cloud on the other?


DevOps part one was just "developers and operations working more closely together". It only really required the managers to update the goals of the two specializations to remove the conflict (developers are rewarded for delivering more, operations are rewarded for stability).

Once the research picked up, they started building out a broader picture of what a DevOps organization did and whether those things made them more successful (better at delivering software, more reliable, more profitable, etc).

The Phoenix Project / The Unicorn Project explain the concept by telling a story - they kind of tell the same story, but from different perspectives. There's also Investments Unlimited which takes an even broader view by adding governance, risk, and compliance (but in a way that aligns to DevOps).

In 2023, DevOps is best described by the DORA research (The State of DevOps Report) as it covers technical, cultural, and product concerns that all amplify each other.


There was a step in the middle there you missed, where it was co-opted by software vendors to be not about how you collaborate with others but about which tools you use, and it has now become essentially meaningless.


The same happened with the concepts of Agile and UX. But of course the true concepts still live on. It's just that everyone wants to sound like they're using best practice techniques even when they're not.


I have long ago lost my enthusiasm for this stuff, and now I just do it because I need a paycheck.


Yes, there is that! ;)


Wow, I remember the first exposure I had to the term DevOps was not at all about working closely, it was about replacing admins with code. This made sense at the time because Puppet, Chef, SaltStack and other tools made it seem like you could pull this off...


Reading this made me ill.


As far as I can tell, being in DevOps is like being a SysAdmin, except you write code to automate as much of your job as possible.

...so, exactly like being a SysAdmin.


A quip I liked when devops was new was “devops means checking your scripts into version control”


After some further consideration, the thought occurred to me that "devops means writing your scripts in any language other than perl"


It really is nebulous. My title is DevOps. I do a lot of SysAdmin work, but I also work escalated tickets, write automation and integrations, read the flagship code bases when errors occur, I complain about the flagship code bases when bugs occur, and I file JIRA tickets about the flagship code bases when bugs occur -- but I do not write code for the flagship product. I think that is a key.


> but I do not write code for the flagship product. I think that is a key.

Basically you clean up after big-shot devs, got it.


I like to describe it as "we are the helper elves helping Santa deliver presents"


That view of the development process is a bit reductive. I'm not sure the dev working on fixing a regression in some crusty, old, ugly, but contractually maintained branch of the product considers themselves a big shot. But that's where most of the money comes from, so that's where this function has to act. Maintenance caused by unprovoked changes is not the lot of any single person or job in the company; it is in fact probably the most common facet of any tech job.


Sounds dreadfully similar to what sysadmins were doing 15 years ago.


For many organizations, a rebranding of systems engineer.


Hence the call out. There is a "Dev" in DevOps which was meant to cross the organisational silo between developers and system admins, to "shift left", to understand the value chain, and to do whatever it takes to deliver value. Which meant different things to different people, and it ended up meaning different things to different people.


"Dev" in our DevOps (we usually call it sysops, though) is "we make the tools to glue stuff together instead of relying on dev teams for anything that's longer than a few hundred lines of code."


Has DevOps, the approach to software development that accentuates the amalgamation and communication between software developers and IT experts, undergone a rebranding effort in recent years, replacing its nomenclature with Site Reliability Engineering (SRE)? Was this transformation brought about due to a growing demand for individuals who possessed a modicum of familiarity with cloud technologies, and sought to transition into teams responsible for software reliability engineering? Moreover, did this shift also encompass DevOps practitioners with rudimentary coding proficiencies?


This is the software industry equivalent of an article in "The Onion".


I thought the software industry equivalent of The Onion was LinkedIn blog posts, but yeah... this would fit there just fine.


Working with the logic the author describes can cause terrible situations for large companies. If you create a culture based on trying new things in companies that are in constant business development, you must create a culture that will take responsibility for it. Otherwise, you'll have environments where answers to "hey, why isn't this working?" can't be found.


It's fair to say that culture eats everything else alive. It's also the hardest thing to change. I've worked in a few places where we felt helpless to impact the culture, and creating a culture bubble (i.e. to the extent of the technical teams, but no further) didn't help impact business outcomes, even where it made our lives better. Maybe that was enough to make the culture bubble worthwhile.


This article is highlighting a problem with badly designed maturity models in general.

If you have a good maturity model (e.g. for DevOps one based on DORA metrics) then the capabilities needed to arrive at a higher level can be determined per-org.

That solves the real problem. I think the article has a fundamental "AB problem" issue.


Rather than a maturity model, DORA uses metrics and a capability model. The idea is that you use the metrics to decide which capabilities to focus on. This tailoring to your context is the great thing about it - it's what you mention when you talk about determining the capabilities per-org... that's why you'd choose a capability model, not a maturity model. Maturity models seek to standardize practices globally, rather than leaving room for customization.

I guess... we're pretty much in agreement - except perhaps over the definition of the two types of model :)


> If you have a good maturity model (e.g. for DevOps one based on DORA metrics) then the capabilities needed to arrive at a higher level can be determined per-org.

How can metrics like deployment frequency and lead time for changes have an impact on "the capabilities needed" by an org?

Honestly, your comment reads a bit like machine learning generated buzzword bingo.


> How can metrics like deployment frequency and lead time for changes have an impact on "the capabilities needed" by an org?

Hitting tighter metrics is going to need more advanced capabilities.
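For concreteness, here's a toy sketch of the two metrics mentioned above, computed from an invented deploy log (all data made up); the point is that pushing these numbers down tends to force capabilities like deployment automation and smaller batches:

  # Toy DORA-style metrics from a fake deploy log: (commit_time, deploy_time) pairs.
  from datetime import datetime
  from statistics import median

  deploys = [
      (datetime(2023, 4, 1, 9, 0), datetime(2023, 4, 1, 15, 0)),
      (datetime(2023, 4, 2, 10, 0), datetime(2023, 4, 3, 11, 0)),
      (datetime(2023, 4, 4, 8, 0), datetime(2023, 4, 4, 9, 0)),
  ]

  days_observed = 7
  deployment_frequency = len(deploys) / days_observed   # deploys per day
  lead_times = [deployed - committed for committed, deployed in deploys]

  print(f"deployment frequency: {deployment_frequency:.2f}/day")
  print(f"median lead time for changes: {median(lead_times)}")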


I would encourage you to read the book "Accelerate", which is the source of the DORA metrics, as it clearly advocates for a capability model, not a maturity one.


The article is basically a summary of the book "Accelerate." The main issue with the book is its conclusions about the connection between the DORA metrics and a team's performance; these four metrics might not actually cause the success of the "high performers" in the survey. The book literally shows that when companies learn about these metrics, they can improve them, but not always their performance. The research method, based only on surveys, is also questionable.

While I think it's an interesting read, we should take it with a grain of salt.


Do you have the book's ISBN? Thanks!


Accelerate: The Science of Lean Software and DevOps

Nicole Forsgren, PhD, Jez Humble, Gene Kim

IT Revolution

ISBN 978-1-942788-33-1


I’ve seen “DevOps handover” on a whiteboard. I still don’t know how that makes sense.


Yeah, there are a lot of people confused about the difference between devops and sre.


DORA metrics are not a good place to start. This blog post is a sales pitch for whatever they are selling.


Curious to hear your thoughts. What's a good place?

I'd argue that if your releases take forever, or your lead times are huge, they are worth improving first. What would you look at first?


I'd start with small batches. Quite a lot of the benefits come out of just doing fewer things between each release.


Is the tl;dr just to do your devops in an agile way? I can't seem to pull anything else out of this.


DevOps as she is written originally was “Agile Systems Administration”. So, yes.

Any other variation is not what Patrick Debois was talking about when he created DevOps days, from which the job title takes its name.



