Infrastructure Software is Dead (mirantis.com)
228 points by ctdean on June 25, 2016 | 134 comments



It is really funny, I agree with this 100%, but because of the new norm of instant gratification I also think this person is totally wrong. I have customers who state all the time that they want to move to an OpenStack platform but have no programmers and are unwilling to hire any. So of course we go down that path, look at the capex and opex, and their brains explode. Then we look at AWS/Azure/vCloud and a regular internal cloud with its associated costs, and they come back from their heart attacks.

If you are a services company, he is right, you should be focusing on outcomes. But if you can't tell me in 2-3 sentences what problem you are solving and how it benefits the customer, you are doing it wrong.

This is a world of businesses, and businesses need to make money, usually. Everyone gets so caught up in the new hot thing or the new "revolution" or what the competition is doing without thinking about what problem they are trying to solve or what actual value they are providing. This article says they provide outcomes; sure, I can provide outcomes by moving you to SAP, or VDI, or hyperconverged, but what problem is this solving?

Don't tell me it "cut costs"; it rarely does, and lots of people smarter than me have shown that cutting costs is not the top priority of most leadership.

Back to the point of the article. I have made a lot of money in consulting. I now sell consulting services. You know how I do it? "Yes, Mr. Customer, how are you? Cool, great, I am fine, thanks. So what problems does your organization have, what are your goals, and how can I help you?"

Boom. You are all welcome.

Don't talk about product or you will lose to me or someone better. Don't talk money or you lose. Don't talk fads or you lose. Talk about the business and how you will help it reach its goals, overcome its problems, and grow!


This! I previously worked for a small software company that frequently got itself lost on the upgrade treadmill (it's going to take N man-months to upgrade underlying technology X/Y/Z to the latest versions). Yet in the end, I kept asking: what does technology X/Y/Z provide to the product that benefits the customer? Those questions frequently didn't have satisfying answers.

So, you're not the cool, forward-looking engineer when you favor using a 5-year-old toolchain to build your product. OTOH, you're also the engineer who doesn't have to come in on weekends to debug the death-march bug that ends up being a result of doing the upgrade.


Sometimes you have to upgrade because the old version goes EOL and you won't have support anymore. This is a problem even if you don't add features and it applies to open source too.

Example: as soon as Rails 5 gets released, Rails 3.2 won't get any security fixes anymore. If the customer accepts the risk of running a system that could be exploited by automated attacks, all fine. If not, it's time to upgrade at least to Rails 4.2. They could also decide to check whether new vulnerabilities apply to their old version and patch it themselves, but that's probably more expensive and error-prone.

s/Rails/any other technology/


I worked for a CIO who liked to be no more than one major version behind on software. He explained that he had been burned too many times by upgrades that had to go through several versions. If you are going to have to do the work anyway, why not reap the benefits for a while as well?


That sort of support doesn't just go away. It just stops being free. These guys will happily help you run 2.3 and 3.2 for as long as you want:

https://railslts.com/

Again, this usually holds for any reasonably popular stack.


Yes, you're right, but not all products (including the product I was initially referring to) have large web-facing footprints for security issues. At the company in question the web-facing portion was a tiny part of the code base. I did monitor a few key technologies, and back-ported or upgraded those (PHP, Apache, SSL, etc.) as needed. OTOH, we let our version of GCC get really long in the tooth, and we refused to allow additional technology stacks in certain areas (hence no Ruby, Python, Node.js, etc., all rejected because they fill a similar place to the PHP we were already maintaining).

Amusingly enough, we avoided Heartbleed because our version of OpenSSL was too old! That was fun to try explaining to people: yeah, we backported the _1_ thing we thought might cause a security issue a year ago into our ancient version of OpenSSL, and you're not affected by Heartbleed. Yes, I know all the version-checking scripts say it's too old, but try to run one of the legitimate exploits against it...

The second part of this is an amusing exercise for the next time you actually have a need to call "support" for something: find your local full stack/kernel developer and tell them, "hey, can you find this critical bug before the support guys do?"... and see what happens.

Keeping your toolkits small and lean, with a small set of dependencies, does wonders for maintainability.


Sure, this happens occasionally, but more often you get an engineer who would like to upgrade just because there is some new version available. Who cares about stability, at least the bugs are new... :( At least that's my experience.


My experience is that more often the engineer knows the good reasons to upgrade but doesn't know how to articulate them to you, especially when you are known to be hostile to the idea of upgrading things.


I've experienced both extremes and everything in between, and I don't know that there is any consistent good or bad actor in these situations. Everybody has their own goals and it's human nature to overvalue the things that you care about and undervalue the things that other people care about. Good organizations are set up such that everybody can be a little biased in their priorities but the organization as a whole ends up going down the right path even if no one person is 100% right about what should be done. The decision-making in bad organizations often follows from one type of person with one set of biases making all the decisions. I don't blame people for having biases, I only blame people for denying that they have them and not trying to understand other points of view.

However, I can say for sure that there is a widely held view among professionals of all backgrounds (business, technical, whatever) that if someone fundamentally doesn't know how to articulate their point to someone else with a different background or set of priorities, then that person hasn't thought enough about why they should do the thing that they're pushing to do, and that there's a high probability that they're just doing the "overvalue the things you care about/undervalue the things other people care about" cognitive bias that everybody tends to do naturally.

I don't know if that's a good rule of thumb or not. There's at least some truth to it. But whenever I'm in a situation where I feel like someone isn't going along with me because I don't know how to articulate my point to them, my first thought is that maybe I need to think about it some more. I don't jump to thinking it's their fault because they don't speak my language.


This is a great comment and I totally agree both that there is almost never an actual "bad actor" and that being able to articulate a persuasive argument is highly valued in decision making. Maybe it is even a decent rule of thumb for coming to decisions under the inevitably imperfect conditions of the real world, but I do think that there are people who are just legitimately poor at articulating things, not because they are wrong or haven't thought things through enough, but simply because they are bad explainers, and that those people tend to be unfortunately undervalued.


Actually, I am an engineer, and I myself try to keep up with new technologies. But that doesn't mean I will use them just because they are new - in my mind that is additional risk. I have used my share of technologies which were made obsolete anyway, and it's no fun converting your codebase. But I guess the cool junior engineer who went to another company and left her old company taking care of her project, built on not-new-anymore / obsolete technology, doesn't care much about that. But I guess if she can't articulate why her newest technology stack is better, we should just switch?</rant>


Spot on.

As technical lead on some projects, whenever someone comes with cool idea X, my question usually boils down to: what is the business value of that idea?

If it doesn't improve business in some specific way, it doesn't matter how cool it is, it will just produce costs (developer time * hourly rate) without any benefit to the bottom line.


... and that's how technical debt happens.


And how "it's so hard for us to hire good developers!" happens.


It doesn't happen if the company is willing to pay developers what they deserve, instead of wanting them to work for the same low rates as the ones they are paying to some offshore consulting partner on the other side of the world.

Or if they provide a work environment that makes one feel like coming to the office.

Good developers care about many factors besides shiny toys.


Good developers will also prefer to be well paid and have a good environment while working with modern toys instead of a MUMPS-infested codebase.

(This is not an argument in favor of running after every new technology immediately, but you'll have trouble hiring and keeping good developers on a JavaEE 2 environment with an SVN build system and 1990s code practices.)


> ...but you'll have trouble hiring and keeping good developers on a JavaEE 2 environment with an SVN build system and 1990s code practices.

You just described many of the projects at DAX-level customers.

I can assure you that I know quite a good set of developers doing consulting for those customers.

The salary and company benefits are more important to them than using SVN vs. Git, Gradle, TDD, or whatever everyone on HN talks about.


To most developers, interesting problems and technologies are part of the compensation. To attract talent to work on unattractive technology you'll need a very attractive salary and benefits. If a developer can't expect to gain transferable skills from a job, that must be factored in. Jobs where you gain experience nobody else needs are the start of many dead-end careers.

OTOH, sexy startups often get away with paying less but, in return, offer environments a DAX-level customer simply can't.


You also need to factor in that not everyone wants to relocate just to work on cool technology X, as not everyone lives in an SV-style technology hub.


Sorry, what's a "DAX level customer"?


One of these companies.

http://www.finanzen.net/index/DAX/30-Werte

The top 30 companies listed on the German stock exchange, used for the calculation of the daily trading index.

Replace with the similar index in other countries.

These are the companies usually known as "the enterprise".


I was too flippant. It isn't about "shiny toys", it is about being respected enough within an organization to be trusted to self-determine technical direction. It is difficult to hire good developers if you have a reputation for thinking you know best when it comes to very technical decisions like when it does and doesn't make sense to upgrade individual software components. It is definitely a balance, but the grandparent comment struck me as sounding like it was on the too-paternalistic side.


Depends on the use case.

As I mentioned, it all boils down to how useful it is for the business, not how useful it is to pimp up CVs.


Business value is great because whoever is using the term gets to define it. So the term only means what the speaker wants it to mean.


Business value is simply return on investment.

So a developer wants to rewrite module A in some data-measurement application because the code looks ugly.

He or she takes three days to make the module a work of art that passes code review with a top score.

Those three days mean, in business language:

cost_of_moduleA_rewrite = 3 * 8h * salary_per_hour

So that developer just took $cost_of_moduleA_rewrite euros/dollars/yen/whatever out of the project budget because the module was "ugly".

So what value does the money spent reflect in terms of productivity gains for users, costs related to building the software, maintenance costs, and so on?

If in the long run it helps reduce costs that, without the rewrite, would have been higher than $cost_of_moduleA_rewrite, then it has business value.

If the developer has spent $cost_of_moduleA_rewrite without any positive outcome on project costs, then it doesn't have any business value.
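
To make the arithmetic concrete, here is a minimal sketch in Python; the rates, savings, and horizon are hypothetical numbers, not taken from the comment:

    HOURS_PER_DAY = 8

    def rewrite_cost(days, hourly_rate):
        # Direct cost of the rewrite: developer time * rate
        return days * HOURS_PER_DAY * hourly_rate

    def has_business_value(days, hourly_rate, monthly_savings, horizon_months):
        # The rewrite only has business value if the savings it produces over
        # the planning horizon exceed what it cost to do.
        cost = rewrite_cost(days, hourly_rate)
        savings = monthly_savings * horizon_months
        return savings > cost, cost, savings

    # Hypothetical example: a 3-day rewrite at 80/hour that saves 120/month for 2 years
    worth_it, cost, savings = has_business_value(3, 80, 120, 24)
    print(f"cost={cost}, savings={savings}, worth it: {worth_it}")
    # cost=1920, savings=2880, worth it: True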


Sure. But have fun trying to get a new job when you're using 5-year-old tech. In other words, it's not just about pride or being one of the cool kids.


You must be a frontend or mobile developer. On the server side, using five-year-old tech is the mark of a proactive and daring company. Our code depends on Python 2.7 (released in 2010) and we still have to compile our own CPython every time we deploy on servers managed by the client.


This is less and less true in the new and exciting cloud/devops world, for better or worse. There are plenty of companies that even on the server side are chasing the latest release cycle.


I wish I could upvote you more - spot on!


I can see your viewpoint, but I have to say that trying to keep up with the tooling treadmill has many tradeoffs.

If there is a new framework or tool to know about every few months and you change jobs every three, you will be spending all your time re-learning some strange new wheel and taking energy away from truly excelling at your current tool set.

On top of that, if we chase the carrot of technology we will always be using tools that are less than a few years old. In other words, untested by time: immature code bases without useful ecosystems and best practices.

Take RiotJS, for example: there is a way it's designed to be used, and it is ideologically sound, but also naive. That way will evolve a lot over time, as even just a small project revealed a dozen or so pain points to solve that haven't yet been addressed by the community.

But working on an older codebase, communities have usually solved a lot of those problems with process or amendments, and there is a wealth of information available.


So use 6 month old tech because you need to keep up with the cool kids who are hiring for that new tech specifically? Hamstring your current company in order to stay on top of tech/framework trends?


There's a middle ground - don't upgrade systems for the sake of upgrading, but maybe if you're building something new or working on a sideproject, investigate newer technologies.


If you're going for a permanent job, sure, you're largely tied to what is advertised.

If you're going for consulting gigs, no problem: focus on selling solutions rather than specifics. Then afterwards offer up a solution based on new tech as a better alternative, and explain how it solves their problems better.


The secret is to try to be a polyglot, not to get yourself into a silo, and also to build up your people skills.

Many companies don't mind that one cannot use the very latest fad, as long as you can demonstrate proficiency in soft skills, for example.


Say that to COBOL devs.


If you do, say it loudly or they might not hear you.


The older I get, the more I feel like we're an accelerated version of the fashion industry.

At least in fashion you can make an excuse that the design is thirty years old and most people don't remember the last time we did this.

With software it's every six or seven years. It's hard not to judge my peers for having such short memories.

We were in the midst of one of these upheavals when I first started, and so I learned programming in that environment. It also means I have one more cycle than most people near my age. Now it all looks the same to me, and I understand those people who wanted to be more conservative. In fact I probably owe some people an apology.


"High fashion" as as much an art installation as it is clothing.

Thus what I see with a lot of IT these days is an invasion of "artists".

Personally I blame it on the web being co-opted by print media. This in turn brought in "media studies", which in turn brought in the "artists".

Notice how again and again there is an attempt at turning the web into a printed booklet.

Early on it was done using Flash. These days it's JS and mangling the behavior of scrolling.


Same here with regards to consulting. Clients pay and pay well to be able to pick up the phone and say, "We are having trouble realizing our goals/expectations/efficiencies in area X. How can you solve that for us and make our lives easier?"

And, just like that, another contract is drawn up and another invoice is sent. They don't care about the underlying tech. They only care about results, and what the results cost them. One-time consulting fees nearly always win over the prospect of hiring staff to take on the task, especially given that my incentive as a consultant is to time-gate a project and get it out the door so I can take on another.


I've approached customers from your perspective before (talk about the pain points, focus on problem-resolution fit, not tech, etc.), and it's awfully hard when customers have 80% of their costs tied to labor and reasonably well-performing technology: they're talking to you only because you're trying to cut costs directly in your SOW, and it can be frustrating when touching that 80% is completely off limits and a political landmine.

Almost all the big vendors and sales teams start with the approach of looking for problems and trying to re-message their products and services to appeal to the problems of their C-level customers, not the engineers (who never make the vendor relationship decisions). Several years ago everyone was "cloud-washing" their products to tell CIOs "oh yes, Mr. CTO, we're ready for your cloud initiative" by slapping a web frontend on their managed services and boxed software when they didn't have anything before to maintain a relationship. Those are red-herring projects though, I've found - just another way to keep vendors on their toes.

Another problem is that accurate problem statements can be very hard to get to without escalating to higher-level managers who, it turns out, are oftentimes swayed by existing vendor relationships to get something done more effectively. From there, I simply have to ask "are you happy with the results your vendors have promised compared to what you expected?" and everyone complains about cost, but you need to get away from that conversation honestly if possible - because IT at a company will never, ever, ever cost too little, since it's a cost center.


> Everyone gets so caught up on the new hot thing or the new "revolution" or what the competition is doing without thinking of what problem they are trying to solve or what actual value they are providing.

These are my exact thoughts about how we now have like two dozen explicitly Docker-related startups. Because Docker. Docker docker docker docker docker.


This kind of thinking doesn't factor in how many Robots work in enterprise software. They come programmed. If you want to alter that programming, especially in enterprise software, you need to be very high up the food chain. Any other route you take will be met with glazed looks and 'ok great...um...shall we go get a sandwich now'.


"Don't talk about product or you will lose to me or someone better."

I was with you until that little gem. Products fill a known need, solving a problem shared by many organizations. They don't claim that they can help everyone with all problems... but they do solve a specific problem, and if you have that problem, products are good things.

You can flip the attitude the other way by saying that consultants who act like everything they do is better than what everyone else does will lose when working in well-established problem areas, because they re-invent wheels. Expensively.

The truth is that good business can be done from either a broad consulting perspective, looking for new ways to help a specific customer... or from a narrow product perspective, solving a specific need for many customers. One is not inherently better or worse than the other. Just different.


>>Products fill a known need, solving a problem shared by many organizations. They don't claim that they can help everyone with all problems... but they do solve a specific problem, and if you have that problem, products are good things.

The point is that customers care about solving problems. They don't care about whether they are buying Product A or Product B to solve that problem. That's why you need to keep the conversation focused on the customer, their problems and potential solutions, and not the product.


I fully agree with this.

Not only should you not focus on product; when clients bring up product you should of course listen carefully and respectfully, but you need to consider that even then they are expressing ideas about how to solve their problems. Those ideas may be well thought through, but very often they are not, and it comes down to them trying to explain what they need based on what they know.

If a client says "we need product X", sometimes they really need it, but often what they are trying to communicate is "we need the stuff product X's marketing material says it will solve", and even that may be imprecise.

Someone coming in quoting on what they say, rather than on what they answer when you ask probing questions about their actual problems, will often quote for the wrong thing. And more importantly: quote for something that someone else will explain to them is the wrong thing, while quoting for something more appropriate. It doesn't help if you come in with the best price if someone else has made your solution irrelevant.

The other aspect is that people often value the solution to a problem far higher than they value a product.

I have built a tool that I'm starting to roll out a service around now. And the interesting thing is that when I've talked to people about it, it quickly became clear that if I showed them screenshots of my admin interface and explained the software, they started comparing it to $20/month services that in fact deliver far less value in terms of results, and they found it hard to see it as something worth more.

When I instead approach it entirely in the abstract, and show people analytics of the outcomes and never tell them I have a pretty web-based interface, and never suggest they'll get a login to anything, people consistently value the service 10x to 20x higher.

Now, this doesn't work for everything - the reason people value it so highly in the latter case is that the analytics make it clear it delivers results that would cost them in that region otherwise. But the moment I start to describe it as a product, people go blind to the outcomes and value it based on other criteria.

tptacek and patio11 have touched on this previously too when talking about how to get your rates up by focusing on value delivered, and avoiding billing in small increments (e.g. bill daily rather than hourly, or even bill by the week or by the project if you can). But beyond applying just to the price, the overall principle also greatly affects whether or not you'll get a deal at any price.

Someone may even suggest the exact same technical solution as you, cheaper, but still lose out if they focus on the product and you're selling problem solutions and visions of where it will get them next.

E.g. I had a client meeting yesterday to walk through a proposal I'd sent, and the entire meeting involved me telling them what problem each part would solve and what that'd enable next. They sat there grinning through most of it, and kept starting to discuss additional work they just thought of that this would enable them to do later, and cost came up just very briefly at the end. Never mind the initial contract - selling them on the idea and the problem solutions now rather than technical details of a specific product likely already did 90% of the job of selling in 3x+ more work down the line.


Bloggit!


Everybody’s OpenStack software is equally bad. It’s also as bad as all the other infrastructure software out there – software-defined networking, software-defined storage, cloud management platforms, platforms-as-service, container orchestrators, you name it. It’s all full of bugs, hard to upgrade and a nightmare to operate. It’s all bad.

100% my experience with OpenStack. And the breaking releases every 6 months only add insult to injury.


Right. This part of the article was the most aggravating for me.

Perhaps the author could refrain from speaking for everybody! OpenStack may be a ruinous rubbish barge of a platform, but others are working on software that seeks to _reduce_ operational complexity. Software that does not exist merely to propel yet another band of consultants into a defeatist, over-priced nightmare cathedral of apparently irreducible complexity and unmitigated unreliability.

Outcomes for customers are extremely important, but it does not follow that software quality is immaterial, or unachievable, or that OpenStack is just as good a choice as anything else -- even if it does come with a consulting racket.


One could argue that because all the money is made on consulting, managing upgrades, etc., the harder it is to set up, operate, and upgrade, the better for the players in the OpenStack space.

If the underlying incentives don't coincide with good software you get crappy software.


I may be fundamentally misunderstanding what infrastructure software is, but it sounds an awful lot like a bandaid that's only necessary because our operating systems are all designed for the computing problems of 25 years ago.


Infrastructure software helps avoid directly managing hardware. You can change an OS all you want but that won't change the problem of managing physical resources (storage, network, compute).


Wasn't that precisely supposed to be the operating system's job? One, virtualising (processes in Unix are little virtual machines), and two, hardware resource management (file systems, time sharing, ...).

Distributed operating systems tried to do that for multiple computers, but instead we get ad-hoc crap on top of single machine OS'es. Progress!


VMware sucks too. Ever need to tell a customer that there is no backup because the cm ducked up?


What is the "cm", please?


Probably vm.


Thanks!


There are commercial offerings that provide a much more stable and actually-working solution for these problems. vSphere 6 has had some growing pains and MS still has a way to go to get SCVMM up to scratch but the underlying hypervisors and network/storage stacks work as expected and can do so at scale... if you can afford it.


Talking technology, VMware NSX is the best SDN out there. You can do all the things the competition does, but with zero programming.


SDNs are an artifact of classic networking mindsets. There is no air gap; pretending one exists by creating a virtual one is just being blind to the realities of network gear.

We need to move beyond SDN and embrace a point-to-point world where trust is established through PKI or similar means. SDNs give a false sense of security, add a ridiculous amount of complexity to routing, and create confusion when it's unclear whether there is a hardware problem or a software problem.

SDNs are only there because people aren't innovating enough in other layers.


Your comment makes no sense. SDN just provides the same barriers as regular network hardware in an automated and more flexible nature. It has nothing to do with data plane confidentiality.


SDNs add complexity, and are only necessary because of limitations in the networking protocols.

They add extra failure modes and reduce throughput by decreasing the effective MTU (overlay encapsulation headers eat into each packet). This is almost all because people want virtual networks with their VMs, rather than just embracing IPv6.


> SDN just provides the same barriers as regular network hardware in an automated and more flexible nature

Unless the implementation has bugs. Then, the sensitive area is just a single zeroday away. Airgaps don't have that attack surface.


But it's not a replacement for air gaps. It's a replacement for things like logging into your switch to manually set up vlans.

I agree with your previous point that we should encrypt point to point, but that's one layer up from the problems SDN is solving.


> I agree with your previous point that we should encrypt point to point ...

Just for clarification: That was written by bluejekyll, not by me.


Yep. This is the classic end-to-end argument, a white paper all CS grads must read: https://en.m.wikipedia.org/wiki/End-to-end_principle


>And the breaking releases every 6 months only add insult to injury.

The core services are backwards compatible much longer than 6 months from what I've seen. I'm not sure where you got your experience from, but it doesn't sound like Nova, Cinder, Glance, etc.


Bad infrastructure software with hideous UX is dead.


As MD of a medium-sized UK-based cloud server company (Bytemark, cough), I'm so glad to hear someone else saying this. We made the decision in 2010 to design and build our own cloud services stack - from scratch, ultimately including a low-level storage layer. This was just after Openstack was announced, so it felt like a risk not to get on this particular wagon.

We opened our service to customers within a year, and fixed what felt like small silly bugs while constantly racing to keep the platform up. Openstack kept getting richer and puffier and more important-sounding. Even though we thought we would eventually "make the switch" I couldn't for the life of me find any success stories, and looking at the software it seemed to have some crucial gaps that we'd need to fill, and a bunch of layers that we didn't need or care about for our simple "cloud servers" offering.

We're at the point where mayyyyybe we could think about switching out one or two components, but my gut feeling is that 1) these are quite simple components for us where our maintenance burden is manageable, 2) our model of VMs is slightly different (more permanent) than Openstack's and, of course 3) the integration effort doesn't seem worth it, and the loss of experience compared to our own software seems a huge risk if it puts the stability of our platform at risk. So it still feels like picking a fight with our own stable software for a benefit that was way down the line.

We're in exactly the same spot as Boris - our customers care about the service, not the software. There is just so much integration with our own hardware, network, data centre & customer services operation that's outside the scope of Openstack that it never quite seemed relevant.


More provocative than insightful.

Infrastructure software isn't dead - but Openstack is.

It may still be used, and may continue to see a bit of media coverage now and then, but really it's gone the way of Puppet/CFEngine/<many proprietary infra software here>.

It missed its chance to be good by allowing itself to be corrupted into a massive design-by-committee disaster. Openstack needed a cohesive vision if it was to stand a chance against the integrated enterprise stacks or the custom in-house ones it sought to supplant. It never had it, and at this point it never will.

I don't mean any ill will to any of the Mirantis boys or the countless other hackers that worked on Openstack circa the Cinder/Neutron introduction. I was there too; we tried to right the ship before it got too far off course, and we failed. Many smart people tried to make Openstack good, but it was out of the hands of the hackers.


>Infrastructure software isn't dead - but Openstack is.

Same tired old hype-cycle trope. Openstack is dying in the same way that 'virtualization is dead'. Containers are in the spotlight now, so Openstack is dead - a.k.a. enterprises are actually adopting it now, so it's addressing all sorts of boring requirements that aren't sexy to developers.


I think I was a little too melodramatic; the article's overbearing, provocative language rubbed off on me a bit.

I don't really mean that it's "dead" per se, only that I don't think it will be anything other than bad. Which the author also agrees with.

The bit I contend with is that -all- infrastructure software is dead. I think there is some really good stuff out there right now that hasn't fallen prey to previous mistakes that could really change the infrastructure landscape. There is also some stuff where exactly the same things are happening though...


Yup, and those boring requirements are where the money is. Thanks, enterprises!


And I have the complete opposite view. Openstack is killing it even though it's as crappy as everything else in many ways. It's without cost, it's automatable, you don't build in-house software to handle it, and you build applications that can withstand software and hardware failures.

Openstack isn't a replacement for configuration management.

The massive design-by-committee is fantastic, because you can help change and adapt where Openstack goes. You don't have that option with <insert vendor>.

We just disagree completely


I've worked for two companies that hemorrhaged money trying to set up functional OpenStack environments. I was on an OpenStack security team for one company. Omg... yeah, I'm not going for a third. I don't feel like raging before bed.


I work for a company that is doing fine setting up functional openstack environments. It depends on how much you're willing to spend on it.

We've put in enough resources to make it happen, and it's working out fine.


All open-source software has a cost. Here's a nice summary of where it shows up:

http://www.joetheitguy.com/2013/10/23/hidden-costs-of-open-s...

OpenStack even had its own assessment:

https://opensource.com/business/16/4/openstack-summit-interv...

My favorite part is this quote: "Hardware was a bit of a surprise, frankly. It's clearly a lot of money, but even doubling the utilization had a tiny impact compared to helping people get work done."

That's right. People in business and IT teams prefer to get shit done over seeing some utilization numbers go up. That they were surprised by this shows a disconnect from reality. Especially given that all the cloud marketing talks about helping one get stuff done by focusing on core business instead of IT infrastructure. Double fail.


> It's without cost

Openstack is so expensive in implementation effort that I've repeatedly had to implement custom solutions, because the effort required to use Openstack would have totally blown the budget.

That there's no license cost does not make it without cost.


This is also true of paid software; it's still very expensive to implement.

So I'm saying, both are expensive to implement, but one I don't pay for the software on top of it.


What I was saying is that OpenStack is so expensive in terms of complexity that it often even pays to do custom development to avoid using it.


> It's without cost

The hell it is. Of course it has a cost. Cost of a product is more than what you pay for the license.


> you build applications that can withstand software and hardware failures

Well with OpenStack you sure have to try to build apps that withstand software failure, because you get a lot of experience with it...


Openstack is really expensive.

It's free as in "here's a horse, it's free"; it actually has supremely high costs!

The marketing hype far exceeds the technical credibility. But the train has left the station.



Puppet is dead? A Fortune 500 company I'm familiar with adopted it a few months ago to start doing configuration management.


There are plenty of Fortune 500 companies using CFEngine as well. I'm consulting for a Fortune 10 company using it at massive scale. However, its role in a container-centric world will be reduced -- it'll run on the infrastructure hosts but not inside immutable containers.


What will they be using instead?


Phoenix servers. See http://martinfowler.com/bliki/ImmutableServer.html Personally, I believe this is a refusal to confront the complexity inherent in modern systems. It's a cop out! You're just saying, we don't need to know the details of the configuration to troubleshoot a problem, we'll just nuke it from orbit and create a new instance. But if you don't UNDERSTAND what went wrong, how can you control the situation well enough to fix it? (This is not just my client, I'm talking about the industry trend.)


Thanks for the info. I absolutely agree with you there. It may have a place with fast-evolving, short-lived or highly distributed technologies, but it's not conducive to solving problems - it just enables you to avoid them when they're not serious enough.


Right. :) And you are welcome!


I've been consulting for the public sector and Fortune 500 companies that are behind the software and operations curve compared even to start-up companies, and always will be, by choice plus self-imposed bureaucracy. The problem I've observed consistently across various custom cloud and OpenStack implementations is that none of the products and technologies that people are developing are able to succeed, because everyone assumes that they "solved" basic infrastructure management a decade ago, when their operations teams are woefully unskilled, unmotivated, and unable to meet the demands of today's environments. Most organizations do not have monitoring beyond what comes out of the box, even for production environments; most places do not have defined SLAs; almost nobody tests backups or their DR runbooks, even though they spent $2M+ on a DR implementation. None of this is the fault of the engineers though; their managers wanted silos and people that just go implement their promotion-garnering plans that all go south.


I wish you were wrong. Silos are everywhere -- ugly and unavoidable bottlenecks. Nobody has a plan to fix that culture though because it works only just good enough.


I personally don't see anything "wrong" with silos - they are the partitions of the distributed systems of human society and are going to emerge anyway, partially due to Dunbar's number, even in a "flat" organization. The issue I see is that managers are unwilling or very slow to understand new lines of division of labor ("What do you mean that sysadmins should also be developers? Why would we want that?" - direct quote from a customer) because it doesn't correspond to what their information systems management textbooks from 2002 told them about how IT labor is divided. Then come issues of training managers and telling people why this re-org is so much better than the last 10. This is why the "devops" movement in enterprise is cheap business consulting labor to me - it results in a re-org in most places, and that's the equivalent of developers showing up at a new shop and suggesting refactors as a newbie.

Furthermore, most IT managers have such poor data from the organizations they run that the only things they can track are money spent, not money saved and such. And showing that a CIO doesn't know wtf his company is doing gets you kicked out of meetings permanently in most (typically dysfunctional) organizational cultures.


Distributed orchestration software is only just starting to become usable. See http://rancher.com/... I wouldn't call infrastructure software dead. It's the beginning.

I think the problem is that most existing tools/engines/components which make up software systems (e.g. databases, memory stores, frameworks, etc.) were designed to run on a single machine, and so far it's been DevOps' responsibility to scale them manually. Even for those components which DO support clustering, their approach is often not compatible with most orchestration systems (they tend to micromanage the cluster - instead of micromanaging the cluster and thereby fighting the orchestrator, the focus should be on writing simple hooks for the orchestrator to invoke).

Developers of tools/engines/components need to change their mindset and start building engines from the ground up to run on distributed orchestration software like Kubernetes, Swarm or Mesos, and automatic scalability has to be BUILT INTO every component/service.

A major problem is that there seems to be a massive skill divide between DevOps/SysAdmins and Software Developers - Software devs think of systems like Kubernetes and Swarm as being the responsibility of DevOps people and don't spend enough time thinking about how it impacts them. This is the wrong attitude - The two skillsets need to converge in order to build effective solutions.

Orchestration management tools are the new operating systems - In the same way that one can build apps/systems which are compatible with Linux, Windows or OSX, we should build apps/systems which are compatible with Kubernetes, Swarm or Mesos.

There is more to these orchestration tools than just writing up config files - the code within the components themselves has to be designed to play nice with the automatic scheduling/orchestration requirements.
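
To illustrate what "simple hooks for the orchestrator to invoke" might look like, here is a minimal, hypothetical sketch in Python: the component exposes liveness/readiness endpoints rather than managing its own cluster membership (the /healthz and /readyz names follow the Kubernetes convention, but nothing here is tied to a specific orchestrator):

    from http.server import BaseHTTPRequestHandler, HTTPServer

    STARTED_UP = True  # in a real component this would reflect actual init state

    class HookHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            if self.path == "/healthz":
                # liveness: the process is up and not deadlocked
                self.send_response(200)
            elif self.path == "/readyz":
                # readiness: the component can accept traffic (e.g. caches warmed)
                self.send_response(200 if STARTED_UP else 503)
            else:
                self.send_response(404)
            self.end_headers()

    if __name__ == "__main__":
        # The orchestrator decides when to restart or reschedule the container;
        # the component only reports its state instead of micromanaging peers.
        HTTPServer(("0.0.0.0", 8080), HookHandler).serve_forever()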


Agreed on all points. Rancher looks quite good and I'm looking forward to something that will soothe the pain in my ass that is the micromanagement of postgres clusters.


Well, everyone is making money except Docker. Drives the point home, I think. Docker's business is Docker and you can't really make money with it, whereas everyone else is building a business on top of Docker to deliver outcomes to customers.


Heh, it's like http://dtrace.org/blogs/wesolows/2014/12/29/fin/ but from a different angle.

I personally, naive as it sounds, believe nix* (as in e.g. NixOS) could be a silver bullet here, but the market is so used to IT being inherently shitty I don't believe it will happen.


Whenever I run into anything from ex-Sun people there seems to be a truckload of sour grapes towards Linux included.

What this article complains about Pike doing can just as well be leveled at the article author (and colleagues) vs Linux.


Thanks for the link. I didn't know Cantrill basically imploded. His gripes on Pike's comment and industry mimicked my own. He's way overstating how much people ignore the system level, as there are active projects handling it, funded by NSF, DARPA, and the EU, with a practical focus. The most practical being ones with defense contractors (esp. Galois) or actual engineers partnered in. Quite a few going from software to firmware to hardware, with some down to the gates. He could possibly enjoy himself and do some good getting hooked into one of those groups to cover pragmatic, real-world aspects plus spot opportunities as development goes on.

"beleive nix* (as in e.g. NixOS) could be a silver bullet here"

Come on, now. Try to avoid that trap. You need to look at what the market needs in compatibility/legacy, production worthiness, talent to aid deployment/support, security, and so on. Always consider these plus target markets when evaluating any software platform. NixOS at first glance appears to fall short in quite a few.

Now, what I do like about NixOS is its declarative, transaction-oriented packaging. That's great if implemented well, given my Linux distros screw that stuff up to this day when I install an odd package with one incompatibility in it. It irreparably breaks the system, or appears to. (rolls eyes) Source-based is debatable but allows site-specific optimizations. I'm barely in the debate but lean against systemd, which Cantrill's post mentions incidentally, as it's too complex to be in the critical position it inhabits. Critics pointed out a simple thing in its spot plus less privileged services doing management or whatever. Consistent with best practices from high-integrity & high-security engineering going back decades. So, I see it as a weakness, albeit a small one in the larger picture.

So, there's my two cents on that link and claim.


> Thanks for the link. I didn't know Cantrill basically imploded.

That wasn't written by bcantrill. He hasn't imploded yet, as far as we all know. Unless that was when Oracle bought Sun and he's been operating in the imploded state since.


Oops. Part of the page didn't render on the device I was using. Couldn't see the name. Thought it was him with the illumos and Joyent references. Thanks for the correction!

Note: So someone else that worked there was imploding and could use some time at the aforementioned projects clean-slating or improving HW/SW architectures. :)


Yeah this is somebody else doing Fishworks->Oracle->Joyent. Original HN thread on the blog post btw https://news.ycombinator.com/item?id=8816055.


Ahhhh. Ok, all clear now. Thanks to both of you.


I sit next to Bryan, and to the best of my knowledge he has not imploded!


BTW original thread https://news.ycombinator.com/item?id=8816055 .

> His gripes on Pike's comment and industry mimicked my own.

Yeah, I always found it funny that many of the ex-Fishworks people don't like Pike's 2000 paper, which is essentially the same complaint as theirs but different specifics [s/(Unix|Windows)/Linux/ s/Plan 9/Illumos/]. As I think both are interesting, a step in the right direction, and not quite radical enough, I'm especially inclined to see the similarities.

> He's way overstating how much people ignore the system level...

I'm a huge fan of Galois too. I think there is a legitimate complaint that most of industry only looks this far down the stack where it must, e.g. embedded systems or real-time where desktop hardware and a mainstream Linux won't do. From reading many of your past posts, I get the sense we both think different operating systems, or even hardware, should (eventually) get used all over. I get why projects due in a year go the path of least resistance, but I think the economic argument that e.g. Google should have been designing something post-Unix for 15 years is pretty rock solid. And yet I don't see them or anybody else (since Midori was cancelled) doing that.

> Come on, now. Try to avoid that trap.

First of all, to be clear, I was responding to the OP more than what I linked. That I interpreted as infrastructure around Unix (or Windows? Haven't read up on OpenStack), not OS work or something lower level. Looking at innovation on the whole stack, I consider nix* more "silver duct tape" than "silver bullet"---it made personally using Unix tolerable :).

> compatibility/legacy

So yes, nix changes the way Unix is administered---and if you use it without changing your ways you will miss most of the benefit. At the same time nixpkgs demonstrates that it is feasible to shoehorn in software that wasn't designed for this with few to no modifications. I think enterprise has sunk more money into their devs' Java monstrosities than ops' Perl scripts, but I could be wrong here.

Also if you are running some "pre-cloud" "pre-container" "ancient" setup---on a heterogeneous pile of old desktops in a closet even!---I think nix* would actually allow one to handle change better than some more popular technologies.

> production worthiness, security

I don't think anybody has really audited the nix* ecosystem to the degree that some users would require, but people do use it in production.

> talent to aid deployment/support

So Nix has great fundamentals with a crappy user interface. Now maybe I am a masochistic idealist, but I think that's better than the reverse because it's easier to rewrite a misdesigned UI than a misdesigned foundation.

> That's great if implemented well

It is. Sandboxing for security is a little WIP, but assuming enterprise users wouldn't install things willy-nilly, the real risk is more shitty software than malicious software, and Nix for a long time has been fully capable of dealing with the former. [And by shitty software I mean the thing being packaged. I don't know how one would fuck up the packaging itself: that either works and properly encapsulates things or doesn't work.]

> I'm barely in the debate but lean against systemd

I am completely out of the debate, but do note NixOS doesn't need systemd (or Linux!) for any fundamental reason. Indeed if we had better CI and better delegation on code review and merging PRs (my biggest gripes with nix*), I'd have expected somebody to have fixed this by now.

My personal goal (which I think is common in the community) is all the features, none of the policy. Support all distros' init etc. decisions; support Linux, Darwin, BSDs, Windows + MSYS2; and so on. [I actually think the Joyent people should be all over NixOS as a way to make moving between Unices painless, but they have gone with lx-branded zones for that.]


"which is essentially the same complaint as theirs but different specifics [s/(Unix|Windows)/Linux/ s/Plan 9/Illumos/]"

That's funny.

"I get the sense we both think different operating systems, or even hardware, should (eventually) get used all over. I get why for projects due in a year go the path of real resistance, but I think the economic argument that e.g. Google should be designed something post-unix for 15 years is pretty rock solid."

True. Especially given they're all already doing something new in the cloud. They have custom CPUs, custom boards, more efficient protocols, alternatives to all kinds of standards... you name it. No convergence on stuff that could get OSS benefits once open-licensed. They could keep a good chunk to themselves for competitiveness if they wanted. Just do more than they are on getting CPU, kernel, firmware, and OS modifications. Other contributors, including from CompSci, will do plenty more, as we've seen in Linux, LLVM, Java, browsers, especially Python, and so on. It's ridiculous that the companies ignoring the opportunity are those standing to benefit.

"I consider nix* more "silver ductape" than "silver bullet"---it made personally using Unix tolerable :)."

Fair enough. A lot of good stuff in IT is duct tape.

"Also if your are running some "pre-cloud" "pre-container" "ancient" setup---on a heterogeneous pile of old desktops in a closet even!---I think nix* would actually allow one to change well than some more popular technologies."

I was thinking shops doing Red Hat, OpenSUSE and so on might benefit from stuff like this. Definitely potential there.

"but I think that's better than the reverse because it's easier to rewrite a misdesigned UI than misdesigned foundation."

Not that: enterprises and other businesses know someone will handle their problems now and later with Red Hat, SUSE, and Ubuntu. Is there assurance of that for Nix?

"the real risk is more shitty software than malicious software, and Nix for a long time has been fully capable of dealing with the former. "

That's the risk I was talking about.

"I don't know how one would fuck up the packaging itself: that either works and properly encapsulates things or doesn't work."

Parsing, protocol, filesystem, and network errors. Even ASN.1 and JSON had issues despite their simplicity. So, that has to be considered, too. Preferably auto-generated from a grammar, checked, and in safe code (even a C subset).

"but do note NixOS doesn't need systemd (or Linux!) for any fundamental reason."

So the whole distro is separate from Linux itself in terms that it could work on a BSD or Windows just replacing some OS-specific modules? Is there a link describing that in more detail? Good to know it's not dependent on systemd, though, for uptake reasons. I agree with you on supporting multiple mechanisms. High-assurance also has a mantra of separating mechanisms from policy since it provides all kinds of benefits. Btw, are you a NixOS developer or did "we" mean something else?


> Not that: enterprises and other businesses know someone will handle their problems now and later with Red Hat, SUSE, and Ubuntu. Is there assurance of that for Nix?

No. For this reason alone nothing big is about to happen. Some consulting shops might prescribe this, or may just use it internally. Other places where I know it's in deployment have an employee who's a power user or contributor and who probably evangelized it in the first place.

> So the whole distro is separate from Linux itself in terms that it could work on a BSD or Windows just replacing some OS-specific modules?

I am much more familiar with nixpkgs proper (the packages themselves, suitable for single-user install on top of some distro), but let me take a shot. Nix is like a way better version of puppet/ansible/etc in that virtually all the work is done not in Nix but in the Nix expressions themselves. This is true for both nixpkgs and NixOS. The most interesting thing that Nix does is the sandboxing, and most of that I think could be done as support code in the Nix expression language, except then it would be easier to subvert.

The way NixOS is structured (topological sort on deps) is:

1. a library of mainly FP prelude functions and a few fix-point combinators, https://github.com/NixOS/nixpkgs/tree/master/lib

2. Nixpkgs itself https://github.com/NixOS/nixpkgs/tree/master/pkgs

3. NixOS module system: maybe https://github.com/NixOS/nixpkgs/blob/master/lib/modules.nix ?

4. NixOS itself https://github.com/NixOS/nixpkgs/tree/master/nixos

The purpose of the module system is the key to understanding NixOS vs nixpkgs. Nixpkgs is just software to build: you can build as much as you want, as many versions of a thing as you want (Nix doesn't even disambiguate between 2 packages vs 2 versions of the same package). Since it's just building software, there are no services or long-running software or state really, other than Nix itself.

Now that's all great, but a distro inevitably involves more state / finite resources. Only one process gets to be PID 1, only so many port files, etc. Nix doesn't want to deal with all this itself, so it largely just spits out /etc files. In rarer cases there are deliberately impure "packages" that may run some commands to set up something (e.g. the "package" that builds /etc itself.)

The module system allows one to write configuration modules (duh!) that (a) declare a few options (b) do a few things based on those options. Those "things" include delegating to other modules. Now the key point is the config options are safely combined so that syntactically conflicting configuration is caught -- we have conceptually a "partial monoid" of configurations. e.g. `{ a = [ "b" ]; }` (+) `{ a = [ "c" ]; }` => `{ a = [ "b" "c" ]; }`, `{ a = "foo"; }` (+) `{ a = "bar"; }` => error. This means one can use config itself to model finite resources (modules can also assert on config so one can e.g. ensure list is a set).
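
A rough illustration of that merge behavior, sketched in Python rather than Nix (purely to show the "partial monoid" idea; the real module system lives in the Nix expression language and does far more):

    def merge(a, b):
        # Combine two config fragments: lists concatenate, equal scalars pass,
        # conflicting scalars are a hard error.
        out = dict(a)
        for key, val in b.items():
            if key not in out:
                out[key] = val
            elif isinstance(out[key], list) and isinstance(val, list):
                out[key] = out[key] + val
            elif out[key] == val:
                pass
            else:
                raise ValueError(f"conflicting definitions for {key!r}")
        return out

    print(merge({"a": ["b"]}, {"a": ["c"]}))  # {'a': ['b', 'c']}

    try:
        merge({"a": "foo"}, {"a": "bar"})
    except ValueError as e:
        print(e)  # conflicting definitions for 'a'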

Because nixpkgs is one ginormous repo, it's very easy to refactor and add abstractions. So on any system that Nix will run on, one can just add more layers to bluntly abstract over non-portabilities. Not exactly glamorous, but it works. And people do use nixpkgs on FreeBSD and illumos at least, so I am sure it runs there. [Few users + our shitty CI practices mean that it is prone to bitrot; Cygwin once worked years ago IIUC]. I don't know if Nix knows about jails or zones yet, but there is plain chroot "sandboxing", or even none at all, for more portability.

> Is there a link describing that in more detail?

I'm not sure what's best for NixOS (the above is my own memory from spending enough time on IRC, etc), but http://lethalman.blogspot.com/2014/07/nix-pill-1-why-you-sho... is a great blog series introducing nix itself.

> Good to know it's not dependent on systemd, though, for uptake reasons.

For example, there was some talk about smudging the nixpkgs--nixos divide so that single-user-install Nix users on macOS could use the module system to define services. That would naturally involve abstracting over init, as there is no systemd on Darwin!

> "mechanisms from policy"

Ah, that was the phrase I was forgetting :)

> Btw, are you a NixOS developer or did "we" mean something else?

The "we" is really just group membership psychology at work :). I don't have repo privileges or anything but contribute odds and ends. Currently working on better cross-compiling abstractions in nixpkgs ...because is anything ever designed with cross-compiling in mind from day one?!


'Infrastructure software' is too general a simplification. If you broke down infrastructure software, the OP would find many types of software in this field which aren't dead.


This.

First of all, he should define what "infrastructure software" he is referring to.

He mostly talks about OpenStack, AWS ... "services".

But it is obviously not only that. It is low-level libraries (the C library), compilers, interpreters, operating systems, graphics engines, virtualization software, etc. This kind of software runs on almost all deployments out there. It is far from dead.


Especially in light of the recent Twilio news - I'd say infrastructure is far from dead.


IaaS vendor says that companies don't want to host their own infrastructure, more at 11.

I tried to get a neutral take on this, but it reads far too much like a sales pitch, even for vendor blog levels.


"And the reason Mirantis has been successful is because, despite ourselves, outcomes are what we’ve been able to deliver to our customers by complementing crappy OpenStack software with hordes of talented infrastructure hackers that made up for the gaps.

This isn't really a ringing endorsement of your business, is it? It's nice that he acknowledges a talented team, but to refer to your core product as "crappy" is more than a little depressing in my opinion.


His product = "crappy openstack" + developers. He is just calling openstack (which is made by the community, not just Mirantis) crappy.


Yes, that's the point I was making; it's an odd business practice to publicly state that fundamentally your business product is crap.


He is basically saying "don't ever count on openstack becoming non-crappy, we have a vested interest in keeping it crappy so that to get anything done you will still need to pay Mirantis to deploy 'hordes of talented infrastructure hackers'". Well, thanks for the honesty.


> But none of this matters, because today customers don’t care about software. Customers care about outcomes.

Why "today"? Hasn't this always been the case? It sounds essentially like a rephrasing of Paul Graham's advice: "Make something people want."


Only slightly facetious, but if they are trying to decouple containers from one another then perhaps they should consider a basic Inversion of Control design principle.

Perhaps that's the job of the orchestrator already though.


Forgive my ignorance but what exactly is infrastructure software?


I presume because people still need to run a "private cloud" - because of regulation, national borders, or not quite trusting AWS with your heart, mind, and ballsack.


Is infrastructure software dead, though, or is it just Openstack that's dead?

My perception is Mesos is really impressive and picking up a lot of traction.


All this chatter about Openstack and how it fails brings to mind stories of SAP, and the ways it can turn into a money sink.


Well, service, yes. Additionally, freedom from service dependence too.


"Who's with me?" ~Jerry Maguire.


Maybe commercial software is dead.

We all want it, except for some youngins... how long can the corporate marketplace defy the will of the people who fuel the market?

Free software is the future. The benefits to mankind are too great.


The market continues to do a lot of culling, yes, but some commercial software is great.

Look at the JetBrains products. They make many open source IDEs feel janky, broken, and non-cohesive in comparison. Some problems aren't solved elegantly by being loosely community owned; sometimes you need a paid, focused group.


This is exactly the point of the article: software becomes commoditized, what matters is the customer outcome.


I certainly don't want or expect all software to be free, though. OS, sure, but when software gets specialized, I'd rather have its commercial incentives aligned to deliver the specific value I need.

I.e., I'd rather pay for what I get out of Adobe than try to use GIMP.


That's not true either, commercial software is alive and well. Some things take a ton of resources and developers thrown at it, and commercial software has that advantage at times.


Reality check: commercial software is bigger than ever. It's also not a dichotomy. Most commercial software relies on open source for various components.


> how long can the corporate marketplace defy the will of the people who fuel the market?

How can you claim that it's the will of all developers to be unemployed?


Still this nonsense? Lots of people make a living writing FOSS, and that's with the competition of non-FOSS companies.

As long as there's a demand for more software, there will be jobs for all the developers needed to write it.


> Free software is the future. The benefits to mankind are too great.

Ending all wars is the future. The benefits to mankind are too great.


We need a business model that makes sense for Open Source developers. Likely pinned to a blockchain for deployments, service updates, support, etc.


We need to stop trying to jam blockchains into everything.


Especially when it makes absolutely no sense whatsoever.



