
Can you expand on how it would qualify as an "anti-pattern"? I agree it is slow, has issues with its built-in coverage and capabilities, and has an old-school UI; but at its fundamental core it is a pipeline runner. It is even a decent pipeline runner, which, when it comes down to it, is the core of every other CI product [that I've seen].

So to hear it described as an "anti-pattern", when realistically it seems to BE the pattern - just poorly executed - is a bit unintuitive to me.




Probably not possible to describe. Jenkins is just a tool; used wrong, it will bite its operators. The problem with Jenkins is that it takes time to set up the workflow: you configure a plugin-install tool, JCasC, JobDSL, shared libraries, some credential store, etc. But when it's up and running, oh boy, it's a factory. I have run Jenkins with 5k+ jobs, all auto-generated, no manual interventions. GitLab CI (same as GitHub Actions) I like for its opinionated approach: it makes things easier if the setup is simple, but when you need exceptions or special cases, the hacks begin.
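For flavor, the auto-generation boils down to a JobDSL seed script along these lines (the repo URL and service names are hypothetical):

  // JobDSL seed script: generates one pipeline job per service,
  // which is how "5k+ jobs, all auto-generated" setups typically work
  ['service-a', 'service-b'].each { name ->
      pipelineJob("build-${name}") {
          definition {
              cpsScm {
                  scm {
                      git { remote { url("https://git.example.com/${name}.git") } }
                  }
                  scriptPath('Jenkinsfile')
              }
          }
      }
  }

In practice the service list comes from an API call or a config file rather than being hardcoded.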

But, yes, I have seen so many badly implemented Jenkinses.


(not the person you replied to, but allow me to give a couple of personal annoyances - keeping in mind that they're a couple of years old and things may have improved since)

My main problem with Jenkins was that its architecture made it extremely difficult to automate its provisioning without having to click through the UI at all.

This led to the second big problem, which is that updating either Jenkins or one of its plugins when they got a new CVE (which was every other day) was quite stressful, because you could never be sure whether something would break - especially for plugins that depended on other plugins (case in point: this post).

I have since moved to Concourse, which has a much saner architecture - at least for these things.


1. it isn't designed as a cloud-native configuration-as-code immutable service. The way it stores and loads configs, jobs, logs, build workspaces, etc is all 1990s tech. Every modern replacement does these things much better. These inherent design flaws set up all the later problems.

2. configuration as code is an afterthought, so it doesn't work very well.

3. managing Jenkins the way described above requires learning four different DSLs, although developers who only write jobs can get away with three (JobDSL to load jobs from JCasC, Jenkinsfile for simple pipelines, Groovy for complex ones). This is ridiculous.

4. the plugins are atrocious: there are too many of them, much of the time they don't have good enough features, and managing and upgrading them is always a pain.

5. CloudBees doesn't even maintain the core stuff correctly. The current Jenkins container ships with a new plugin manager which, by default, does not respect pinned plugin versions (see the plugins.txt sketch after this list). That's literally the most basic thing you can do for operational stability. I filed a bug in January, and they didn't feel like fixing it, so I got them to merge a note at the bottom of their README mentioning the bug instead.

6. pipeline libraries are a costly maintenance and development pain. Having to write Groovy code just to write pipelines is horrible (see the shared-library sketch after this list). Jenkinsfiles, although much better than raw Groovy, are still an over-complicated, unintuitive mess.

7. there's no simple way to deploy, maintain, test, and upgrade a Jenkins cluster. You have to maintain multiple clusters, increasing cost and complexity.

8. since most people don't set it up right (because it is so overcomplicated), the jobs, server configurations, and build history are not backed up and there's no version control. So when something goes wrong, the whole thing is hosed - unless a Jenkins expert took 6 weeks to set it up perfectly.

9. due to all the above problems, you end up with a million different Jenkinses, all in various states of insecurity, brokenness, and wildly different configuration, making them incompatible with each other. This makes for a gigantic maintenance cost that never ends.

10. literally all of it is completely proprietary to Jenkins. Unless you build it and the jobs in a very particular way (which makes it impractical to use) none of it can be re-used in a different system.

That's off the top of my head. There are more reasons.
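To make 5 and 6 concrete, two sketches (the plugin versions, library name, and step name are hypothetical). Pinning plugins should be as simple as a plugins.txt fed to jenkins-plugin-cli:

  # plugins.txt, consumed by jenkins-plugin-cli -f plugins.txt
  # (the pinned versions here are examples)
  git:5.2.1
  workflow-aggregator:596.v8c21c963d92d

...which is exactly what the new plugin manager fails to respect by default. And every reusable pipeline step means maintaining Groovy like this in a shared library repo, plus the Jenkinsfile that consumes it:

  // vars/deployApp.groovy in the shared library (hypothetical step)
  def call(Map args = [:]) {
      def target = args.environment ?: 'staging'
      sh "kubectl apply -f k8s/${target}/"   // runs on the build agent
  }

  // Jenkinsfile in the application repo
  @Library('my-shared-lib') _   // hypothetical library name
  pipeline {
      agent any
      stages {
          stage('Deploy') {
              steps { deployApp(environment: 'production') }
          }
      }
  }

Two files, two DSL dialects, and a separate repo, just to share one deploy step.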

The point is, organizations will invest literally thousands of man-hours in making Jenkins work, slowing down their product development and forcing everyone to use this old-ass, over-complicated piece of junk. With the same amount of cash they could buy literally any proprietary CI/CD system and do everything much faster and better. But the organization doesn't see the hidden costs until it's too late and they desperately want to replace it. Jenkins is not just bad; it actively holds back your organization.


I struggle to see how this justifies the claim that "Jenkins is really a technology anti-pattern".

You are describing bad maintenance, bad architecture, and bad execution, and I fully agree with all of that. Jenkins is clearly old, and it has old approaches to complexity.

But an "anti-pattern" implies that using it moves you further from your goal. When I was a noob with 2y of software development, primarily writing React & NodeJS APIs, I stood up a Jenkins VM and was able to correctly set up a CI/CD system building, testing, and deploying containerized microservice-based architectures via Docker and Jenkinsfile alone. I encountered extremely few issues with the core of Jenkins, because it is a pipeline runner and it has a lot of ways to run pipelines.

So to me, it looks like you use the word "anti-pattern" too liberally, since I don't think there are any other free, open-source pipeline runners that would integrate git webhooks and clone source code as easily for me. But perhaps you disagree because of your final point, which is that standing up production-grade CI/CD for bigger workloads would be faster without Jenkins.

Still, not sure I even agree with that claim. I have seen it used to great success in many contexts. Does it have lots of problems? Yes. Anti-pattern? Tough sell.


> The way it stores and loads configs, jobs, logs, build workspaces, etc is all 1990s tech.

I agree with a good chunk of what you said, but.

It's files, my friend, files. Tech from the 1970s.

Nothing inherently wrong with that, and they are easy to inspect, easy to repair if needed, you can use standard tools on them, etc.

The design of the file structure is maybe the issue, because it makes high availability complicated, but just using files is not necessarily a bad idea.


Correct: in some circumstances files are great. They suck for Jenkins.

What are they? Lots of different things: build logs, job configurations, server configuration, secrets, cached unpacked plugins, build workspaces, etc. Some of those you want in S3, some you want in a database, some you want on fast ephemeral storage, some you want in a credential store. Good luck with that; only the secrets are doable with plugins.

Where are they? Sitting on some EC2 instance's ephemeral or EBS storage. But you don't want them there, so now you have to throw a bunch of crappy wrappers in to occasionally move them if you want them somewhere else. (Even if you do JCasC/JobDSL/Jenkinsfiles for version-controlled configuration and secrets, you may still want to back up your build artifacts and logs)

And because they're files, it doesn't scale. Using EBS? Only one host can mount it (unless Nitro), so good luck scaling one box's workspace filesystem past one gigantic EBS volume, or doing master-master. And you have to clean up the filesystem every time a plugin or core version changes, or the cached version on the filesystem will override your container/host's newer version. Using EFS? Network filesystems suck (scalability + reliability + security + performance woes).



