Shifting Gears (jenkins.io)
264 points by twic on Sept 3, 2018 | 118 comments



I love Jenkins, and use it professionally.

With that said, I'd really like to see better documentation on the Jenkinsfile Pipeline format. I've tried to get started with it a few times, and haven't had tons of success. Stuff like "How do I pull in secrets", and "How do I control a plugin". I appreciate that it's Groovy-based, but that's not particularly helpful information (for a hack like me, at least).

The snippet-generator is nice, but it doesn't necessarily produce working code. Especially for things like getting secrets into a build. And it doesn't give me a broader picture for "How do I even write one of these from an empty text-box".

I recently tried the job-to-Pipeline exporter plugin, and that didn't work on my jobs - it generated stuff that didn't match the input job, and also wasn't structured like the example snippets Jenkins provides natively.

Maybe some kind of a sandbox I could experiment in? Or a REPL or something? It would really help to have something that gave great discoverability, with fast feedback. Faster than I can get by editing a job, saving it, running it, waiting, then realizing I still don't have the syntax right.


Unfortunately, it’s not very easy to do. If you want syntax validation, you can validate it by POSTing to an endpoint on your Jenkins master.
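For reference, the endpoint in question is the declarative linter. Something along these lines should work (a sketch only - it assumes the declarative pipeline plugins are installed and that auth/CSRF crumb handling is sorted out separately):

    # lint a local Jenkinsfile against a running master
    curl -s -X POST -F "jenkinsfile=<Jenkinsfile" \
        https://jenkins.example.com/pipeline-model-converter/validate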

If you want a “sandbox”, you can just replay a pipeline run and make modifications. Neither is very useful IMO, and both slow down development pace substantially. Don’t get me started on integrating groovy shared libs.


Same here - I really struggled to set up declarative pipelines starting out. The docs don't do a great job of distinguishing between the full groovy syntax and the new declarative syntax and there is a relative dearth of examples.

I think the swiss army knife nature of Jenkins contributes to this - there's just so much you can do.


After working in a team that was heavily using Jenkinsfiles & scripted pipelines, I started to believe that writing Jenkins scripted pipelines is a bit of an anti-pattern, as you end up with lots of build script code that can only run inside of Jenkins, perhaps coupled to plugins, which hampers your ability to locally develop and test changes.

Perhaps sometimes using Jenkins scripted pipeline is a good idea, but if you've got the choice of implementing something as a Jenkins pipeline script or some other script that isn't coupled to Jenkins, prefer the latter.
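In practice that tends to mean a Jenkinsfile that is little more than orchestration. A rough sketch, assuming the repo already carries its own build.sh/test.sh (the email address is just a placeholder):

    pipeline {
        agent any
        stages {
            stage('Build') {
                steps {
                    // the real logic lives in the repo and runs locally too
                    sh './build.sh'
                }
            }
            stage('Test') {
                steps {
                    sh './test.sh'
                }
            }
        }
        post {
            failure {
                // Jenkins-only side effects stay out of the build scripts
                mail to: 'team@example.com',
                     subject: "FAILED: ${env.JOB_NAME} #${env.BUILD_NUMBER}",
                     body: "See ${env.BUILD_URL}"
            }
        }
    }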


I work with Jenkins day in, day out.

Doing anything build-script related in Jenkins, whether Pipeline or freestyle jobs, is definitely an anti-pattern. All build-related scripts should definitely be in standalone scripts / build tool config files (make or whatnot), for reasons you describe.

Jenkins should be there to handle the "side effects", as I view them. In our case that's stuff like integration with git PRs (posting results of linting, building, unit tests), sending emails when new builds are available, integration with JIRA (we automate some workflows), publishing artifacts to an internal server, etc.

Conversely, putting any of those side effects or stateful steps inside build scripts is a bad idea: it means you can't run the build scripts locally without worrying about messing up a JIRA workflow or spamming people with build emails. Those steps should live only in Jenkins.

These are all mistakes of my predecessors that I am still living with to this day.


I think that's a great rule of thumb. Declarative pipeline came after scripted was "invented", which is slightly unfortunate; had it come first it would have encouraged the practices you describe (declarative is just for orchestration), and scripted would have been mainly an escape hatch (I think many people get the idea now though).


It's probably not quite so clear cut. For example, suppose you deploy to AWS, and have automated this to be triggered by a Jenkins job. It's advantageous to be able to run this automated deployment from outside of Jenkins, even though deploying is one real big side effect.


With "declarative pipeline" introduced a few years back, this is the direction we are pushing people toward.

Programming capabilities are useful for ecosystem developers to create higher level primitives from existing ones, as it creates a new way of extending Jenkins without plugins.


Where I work it started like this: every "component" had some source in a directory and a couple of simple scripts: build.sh and test.sh.

Then, when we wanted to run them in parallel we just used the parallel Jenkins pipeline statement, so that every step had its own captured output stream (and distinct build statuses too).
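A rough sketch of what that stage looked like (scripted syntax; the component names and scripts are just placeholders):

    // one parallel branch per component, each with its own log stream and status
    def branches = [:]
    for (c in ['api', 'web', 'worker']) {
        def name = c   // capture the loop variable for the closure
        branches[name] = {
            node {
                dir(name) {
                    sh './build.sh'
                    sh './test.sh'
                }
            }
        }
    }
    parallel branches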

This was a slippery slope: more and more build orchestration complexity moved into Groovy code, but fixing that is not obvious, because with very long builds being able to see which component failed and which didn't is very useful, and fighting the Groovy code happens relatively infrequently.

How can I follow your rule of thumb and still let Jenkins capture (possibly in real time) the output and status of work units it doesn't describe and spawn itself?


I think as part of the new Jenkins architecture we should be able to make it much easier to stop at a point in a pipeline & open a terminal/REPL to test out steps.

Also I'm hoping for a nice validated YAML based pipeline syntax that should make editing/validating pipelines easier


We have almost the exact opposite request re: stop and REPL. Our Jenkins pipelines have escalated privileges in that they can deploy code, so they are a juicy attack vector. We'd largely like these things to be read- and execute-only, with any modifications going through review.


You can't just use the docs at Apache Groovy's website because Jenkins pipeline uses a crippled version of Groovy -- none of the functional collections-based methods work.
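The usual workaround, for what it's worth, is to push anything closure-heavy into a @NonCPS helper so it runs outside the CPS interpreter. A small sketch:

    // collect/findAll with closures historically fail (or behave oddly) inside
    // CPS-transformed pipeline code; a @NonCPS helper sidesteps that
    @NonCPS
    def failedStageNames(results) {
        return results.findAll { !it.ok }.collect { it.name }.join(', ')
    }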


Yes!

Pipelines can be incredibly rewarding if you spend the time to really dig into it and do a bunch of trial-and-error.

Convincing other people on your team to do that with the current state of documentation is painful, and understandably so.

The documentation really needs a lot of attention.


Sandbox, repl, and a testing platform to unit test my Jenkins files are amazing suggestions. Basically an environment I can actually test in, as opposed to write-run-fix


Totally agree. I tried to move our freestyle jobs to Pipeline a few times and the experience was horrible. Official documentation was scarce, the main concepts were not clearly explained, there were barely any examples, and god forbid you use some unpopular plugins, because chances are the plugin does not support Pipeline, or if it does, good luck finding the syntax for it.


Secrets are scoped, and the snippet generator will put in the correct GUID (or whatever that ID is) for you.
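For the declarative case, a minimal sketch of pulling a secret in (assuming a username/password credential with the ID 'deploy-creds' already exists in the credential store):

    pipeline {
        agent any
        environment {
            // credentials() also exposes DEPLOY_CREDS_USR / DEPLOY_CREDS_PSW
            DEPLOY_CREDS = credentials('deploy-creds')
        }
        stages {
            stage('Publish') {
                steps {
                    // single quotes so the shell, not Groovy, expands the vars
                    sh 'curl -u "$DEPLOY_CREDS_USR:$DEPLOY_CREDS_PSW" https://artifacts.example.com/upload'
                }
            }
        }
    }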


I worked on Jenkins at Lyft and completely set it up for DoorDash. If anybody needs help with their Jenkins setup, hit me up I give free advice and have a few blog posts on the matter.

If you happen to be using AWS, GitHub, and Slack, we at DoorDash have developed lots of goodies for streamlining things. We have secured our Jenkins behind our VPN, created load balanced Jenkins clusters, built a shared Groovy library for all of the Jenkins behaviors that are useful for each of our microservices, implemented a Flask app that receives each of the GitHub webhooks which starts pipelines instantly (rather than git polling), setup Okta integration, interfaced with our internal secrets store, and implemented a way to map GitHub users to Slack users allowing us to Slack message people when they are mentioned in GitHub (when their PR's receive LGTM's etc.) When new microservices launch, Folders automatically appear in Jenkins configured correctly for the service's pipelines.

If any of this sounds good let me know, maybe we open source some of our work. I love working on Jenkins and am happy to help advise you on how to scale, secure, demystify your own Jenkins setup. Links on my HN profile page.


You should be presenting in a future Jenkins World event!


Glad to hear that - you should put your config up on a blog post somewhere sometime.


Oh god, please!


Jenkins's biggest strength is also its biggest weakness: plugins. Any development shop that has been using Jenkins for a while is using at least a bunch of plugins. Plugins are not stable, they break every now and then. They require constant update with new Jenkins versions. They get abandoned by their creators (hell, many plugins still don't support pipeline).

It's a fundamental issue with how Jenkins is set up, and I don't see how they can get away from it unless they abandon the whole plugin architecture altogether. But obviously that's not a solution.


They kind of made several plugins "blessed": Pipeline, Blue Ocean, Git, etc.

The core package plus these "blessed" plugins is a lot more stable than throwing every random plugin on top of a base installation. Just write a bit of glue script code and you're golden.


They still have their own issues. Blue Ocean requires a lot of stuff I have no need for (like GitHub support), which in some cases conflicts with stuff I do need (like Bitbucket support).


Same. Every time there's a Blue Ocean update it requires you to update two dozen other plugins, many of which I don't use and can't get rid of (like the GitHub one). And more annoying is the fact that you can't "select all" to update all plugins; you have to select them one by one.


There’s a select all link at the bottom of the plugin updates page...


well glad to see people using blue ocean, but yeah - that isn't a good look (and one of the aims mentioned is to get away from this pain, for good).

(also, Evergreen should take care of the updating, ideally). I will be happy when it does, as I don't want to spend any more time thinking about this!


The deeper issue with plugins is that they create global state across everything.

Switching the execution engine to Kubernetes will help a lot with operational pain. But consider the comparison to Concourse, which also predates Kubernetes. What drove Concourse into being wasn't the lack of Kubernetes (being born during the Diego project, itself a container scheduler), it was the amount of time and pain and unsafety that came from relying on the status quo.

Kawaguchi's agenda is bold and necessary, but I think it's going to take a while to get through even half of it. But the world is better off when Jenkins improves, simply because of its phenomenal installation base. We all talk about Travis and Circle and Drone and Concourse and Gitlab here, but I would bet folding money that over 75% of actual bits going through CI are going through some version of Jenkins.

Disclosure: I work for Pivotal, we sponsor Concourse.


As soon as I saw the word Concourse, I looked at the username, and sure enough it was you.

You bring up the Concourse comparison in every single Jenkins thread.

I've considered it in the past, but it looked like there was almost no adoption outside Pivotal. Would you say adoption is increasing in 2018? I would like to give it another spin if possible.


> You bring up the Concourse comparison in every single Jenkins thread.

Because it's the reference point I know best. It would be silly of me to compare it to microbiology or the internals of Travis, both of which I'm much less familiar with than Concourse.

> Would you say adoption is increasing in 2018? I would like to give it another spin if possible.

It's nowhere near as popular as Jenkins, by at least 2 orders of magnitude, maybe 3. If that is important to you, wait a bit.


We looked at it. It was our second choice. Ultimately, its entirely self-hosted nature was its greatest strength and weakness. We like self-hosting but don't have the resources to get it going at the moment. It's interesting enough that we'll keep revisiting it.


I've done a ton of work with Concourse and it has a very large set of tradeoffs it makes as well.

For example, resources are really nice, but there are a ton of pain points to do with how intentionally crippled the yaml pipeline format is (and in general how much repetition there is, due to the lack of looping, etc). Also the way i've seen people write pipelines tends to end badly for all involved.

Also it's just very, very buggy, especially if you deploy sans BOSH, since that's basically not tested.


Author of the post. I agree that it is our biggest strength & weakness, I acknowledge that problem and put forward some solutions in the doc.

One piece of the solution is to embrace core and a bunch of important plugins together as the foundation. Normal users shouldn't be asked to pick & choose the basics like that, and we want to lock down the combination of the versions in that group. Whether those are behind-the-scenes plugins or not from contributors' perspective is an implementation detail.

Another piece of the solution is to grow more extensibility mechanisms beyond the current in-process plugins. There's a thing called "Pipeline shared libraries" in Jenkins, which is a good example of this. It lets developers create higher-level pipeline primitives by composing other existing ones. There's some mechanism to share those with the community, too, although not as sophisticated as plugins. From users' perspective, it extends the capabilities of Jenkins just like plugins, but in a way that doesn't create the kind of instability a bad plugin can -- its impact is local to one build, for example.
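To make that concrete, a tiny (hypothetical) shared-library step might look like:

    // vars/deployToStaging.groovy in the shared library repo
    def call(Map args = [:]) {
        def target = args.environment ?: 'staging'
        // composes existing steps into one higher-level primitive
        sh "./deploy.sh ${target}"
        echo "Deployed ${env.JOB_NAME} to ${target}"
    }

A Jenkinsfile that loads the library (e.g. via @Library('my-shared-lib') _) can then just call deployToStaging(environment: 'staging').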

Then there's the container-as-a-building-block extensibility, Jenkins Evergreen, and more...


Agree. The same happened with Eclipse IDE


And Firefox to some extent, except they actually fixed it by requiring all addons to be reimplemented. Lesser of two evils and all that


Less related but Minecraft servers were also a plugin mess. With N plugins, similar odds for any given plugin to break across an update, and frequent updates you'd pretty much always have broken plugins.


I agree this weakness comes from plugins. Because the plugins are not part of the main code base you can't introduce new functionality without breaking them. So you end up with a slow pace of development while you still end up breaking installations on upgrade.

If you add functionality to the main codebase you can keep running your tests to ensure nothing breaks. This is what I think they will do with Cloud Native Jenkins. Essentially abandoning plugins.

Jenkins Evergreen keeps only the essential plugins. This means they can run better tests. And when introducing new functionality you can update the essential plugins.

With GitLab CI we add new functionality in the main code base, avoiding the need for needless configuration and ensuring everything still works when updating.

I have just written a more extensive analysis of the blog post in https://about.gitlab.com/2018/09/03/how-gitlab-ci-compares-w...


> Because the plugins are not part of the main code base you can't introduce new functionality without breaking them.

This is a very simplistic explanation bordering on FUD. Jenkins defines something called 'extension points': as long as you don't break the extension point contract, you can continue to add functionality. For example, the Greenballs plugin[1] is almost 11 yrs old and still works. Surely Jenkins has added new functionality in the past 11 yrs.

> If you add functionality to the main codebase you can keep running your tests to ensure nothing breaks.

Another comical statement. You only need to write tests against the contract of the extension point and make sure you don't break the contract.

> I have just written a more extensive analysis of the blog post in https://about.gitlab.com/2018/09/03/how-gitlab-ci-compares-w....

This is full of misinformation too. E.g.: you can check in a Jenkinsfile at the root of your git repo too; you don't have to copy it around.

I don't want to attribute maliciousness to you but hope you correct the blog post.

1. https://github.com/jenkinsci/greenballs-plugin/


I agree my explanation is simplistic but my intention wasn't to spread FUD.

If you don't break the extension point contract, plugins shouldn't break, but avoiding breaking it is hard. Hence the breaking plugins.

The extension points also make it harder to improve Jenkins since they can't be changed without breaking plugins.

And when you introduce a new concept, like pipelines, with a plugin there isn't a well defined extension point for other plugins.

I'm aware of the Jenkinsfile functionality but I think this is different. If you follow the link "Jenkins Configuration as Code" in https://jenkins.io/blog/2018/08/31/shifting-gears/ it points to https://jenkins.io/projects/jcasc/ which has plugin management https://github.com/jenkinsci/configuration-as-code-plugin/bl...

I don't think you can do plugin management in a Jenkinsfile https://jenkins.io/doc/book/pipeline/jenkinsfile/ so it seems incomplete.

I've tried to explain it better with https://gitlab.com/gitlab-com/www-gitlab-com/commit/0639c998...


> The extension points also make it harder to improve Jenkins since they can't be changed without breaking plugins.

Yes, it's a tradeoff, very similar to programming languages/libraries that people write code against. You cannot change the API (e.g. the syntax of the language) without breaking existing code. It doesn't mean a language cannot improve; Java has continued to evolve. The solution to this is not to have all the Java code in the world in one repo.

> I don't think you can do plugin management in a Jenkinsfile https://jenkins.io/doc/book/pipeline/jenkinsfile/ so it seems incomplete.

I am not quite sure what you mean by 'plugin management', but you can use plugins in a Jenkinsfile https://jenkins.io/doc/pipeline/steps/

I think you are referring to two different concepts.

1. Managing Jenkins configuration (e.g. configuring a global npm password or global Nexus config), typically done by a Jenkins admin. This was traditionally done via the UI, and configuration-as-code is the effort to do it via code.

2. Managing your build configuration, done by devs setting up their builds on Jenkins. In the past this was done in the UI in the job configuration. Jenkinsfile/Pipeline is the solution for that. You can check that file into your code repo. This is equivalent to gitlab-ci.yml.

The model here is that the admin enables and configures the plugin with defaults at the global level via configuration-as-code, and users use that plugin in a Jenkinsfile.
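Roughly, the admin half in configuration-as-code YAML looks something like this (illustrative IDs and values; the exact schema depends on which plugins are installed):

    # jenkins.yaml read by the configuration-as-code plugin (admin side)
    credentials:
      system:
        domainCredentials:
          - credentials:
              - usernamePassword:
                  scope: GLOBAL
                  id: nexus-deploy          # Jenkinsfiles reference this ID
                  username: ci-bot
                  password: "${NEXUS_PASSWORD}"

The per-repo half is then just a Jenkinsfile referencing that ID (e.g. via credentials('nexus-deploy') or withCredentials).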

Very similar to GitLab releasing a 'dependencies' feature on their build server so that users can use that feature in their gitlab-ci.yml.

> I don't think you can do plugin management in a Jenkinsfile

Why would a user want to do plugin management on the server in their Jenkinsfile? It would be like GitLab CI users upgrading their GitLab CI version via gitlab-ci.yml.


> Why would a user want to do plugin management on the server in their Jenkinsfile? It would be like GitLab CI users upgrading their GitLab CI version via gitlab-ci.yml.

This means that when a developer needs a new plugin they need to ask the administrator of the Jenkins server, frequently a central IT organization.

With GitLab all the functionality is always enabled, there are no plugins to install.

I disagree that installing plugins is the same as upgrading GitLab or Jenkins itself. Although of course GitLab gets new functionality every month.


Thank you for all the work on GitLab, we are using AutoDevops extensively. Any thoughts on how AD morphs or adapts with knative? It seems like Jenkins is fully knative with CRD support.


You're welcome, thanks for commenting. Auto DevOps can probably benefit from Knative in two ways. Use Knative build https://github.com/knative/docs/blob/master/build/builder-co... for building images. And use Knative serving https://github.com/knative/serving to run review apps that don't use resources when not in use https://gitlab.com/gitlab-org/gitlab-ee/issues/3585#note_900...


Part of the ideas mentioned is to resolve this instability and not depend on in-process plugins (a new extensibility architecture that won't hurt stability). There are many things in plugins which should be core functionality (and will be).


That's just the thing. Why has there been so little adoption of commonly needed functionality as part of core, or at least as officially supported plugins?

Like, there's no official backup functionality? And why is version control not standard in 2018? This isn't something you just bolt on, or incorporate as a response to competing products.

I think they should abandon all hope of Jenkins being competitive. They should remain the weird old school universal tool it always was, and let it become relegated to legacy systems, like the Apache web server.

Jenkins was useful, but it's living in the past and trying to solve the wrong problems.


>That's just the thing. Why has there been so little adoption of commonly needed functionality as part of core, or at least as officially supported plugins?

Oh there are core bundled plugins, official etc - they are just core functionality that happens to be implemented by plugins.

>And why is version control not standard in 2018?

That is and always has been standard - "git" support used to not be included by default, but that was a while ago (it is included now).


I agree, the next big step would be shifting the plug-in system (wiki, downloader, etc.) to the "global pipeline libraries" level.


yeah, I see 'plugins' being around for a while but docker steps becoming the more cloud-native long-term alternative; being more reusable standalone & not requiring changing a Jenkins master (or even requiring a Jenkins master for ephemeral build pods)


I think this is too much too late for Jenkins.

I can't speak for other countries but in London a lot of companies are now using Gitlab or Circle CI.

I migrated all my builds (12 projects) to Gitlab CI. After figuring out the first CI pipeline using DockerInDocker, it was easy to then setup the remaining pipelines.

Self hosting Gitlab was perfect for our needs (private docker registry). I use Gitlab for personal use too.

I wonder if they will get rid of Ruby in the future though and go Java to make it more performant, as it does slow down sometimes.

The Jenkins box is still running though, more out of sentimental value :)


Thanks for using GitLab! I'm glad to hear you found it easy to set up.

I just wrote an article in response to the OP https://about.gitlab.com/2018/09/03/how-gitlab-ci-compares-w...

We're working on making GitLab more performant. It is mostly fixes to our code; the parts where Ruby is a problem are already rewritten in Go. GitLab self-hosted should be fast if it has enough memory, so make sure you check on its memory consumption.


> The current legacy version of Jenkins needs to be restarted once a day by an administrator

Is this true? Do you have a source that says this? We have a Jenkins instance that Kubernetes is configured to scale down from 1 replica to 0 at night, and up to 1 again in the morning, so if it is true we never would have noticed. (It hasn't always run on this cron cycle, which is why I'm a little incredulous at this claim, but if it's given in the OP or somewhere else easy to find this, I'll concede it... ah... found it: > It’s not unheard of that somebody restarts Jenkins every day.)

Honestly I don't understand this about "making a version of Jenkins that runs well on Kubernetes" – this is the _only way_ I have ever run Jenkins, and I think it runs already extraordinarily well for our purposes. I'm thrilled that they are making it their focus, and I'll concede also that our use of it is pretty narrow, but I haven't had these issues.

We installed it from the stable helm chart nearly 2 years ago and have hardly needed to make any tweaks. We are not tracking every K8S release, so maybe that's why I haven't noticed Jenkins falling behind, and we also haven't tried GitLab seriously (heard great things, but my work is very risk-averse when it comes to new technologies, and to be honest we rarely try new things on a short cycle once a given problem has been solved adequately for us... we are also not primarily a development shop, so maybe it makes sense.)

> The article doesn't mention how Cloud Native Jenkins addresses the problem, maybe it doesn't allow plugins.

Like I've been saying, we've always used the stable helm chart for Jenkins and started maintaining our own values.yaml about a year and a half ago. Over time we have had less need to change the templates as more configuration got moved into values.yaml. When I have needed a plugin or other configuration that is able to be set in values.yaml, that's easy and almost makes maintaining Jenkins fun. It is a little obtuse that I have to maintain my list of plugins and their latest versions there manually, but this could be something that gets resolved in Cloud-native Jenkins if they are ultimately providing an operator or something like that.

(Breaking a rule by commenting before I've read all of the content, but I liked your article and wanted to give you some feedback since you posted it.)

For configuration that doesn't live in values.yaml, Jenkins chart maintains a Persistent Volume where configuration and build artifacts/history are stored. It is easy enough to take backups of that with the ThinBackup plugin, and the storage costs of that are sure not breaking the bank.

> Services interacting through Kubernetes CRDs in order to promote better reuse and composability

And there it is! That's the big announcement from today. Knative is still early but this news from Jenkins sounds supportive and I should really read the whole article / watch the video now.


>>> The current legacy version of Jenkins needs to be restarted once a day by an administrator

>Is this true? Do you have a source that says this?

The OP states: "Admins today are unable to meet that heightened expectation using Jenkins easily enough. A Jenkins instance, especially a large one, requires too much overhead just to keep it running. It’s not unheard of that somebody restarts Jenkins every day."

This isn't always the case so my claim is too strong. I toned it down with https://gitlab.com/gitlab-com/www-gitlab-com/commit/672a19ca... Thanks for pointing it out!

If there is anything else I can improve in my article please let me know.


> Honestly I don't understand this about "making a version of Jenkins that runs well on Kubernetes" – this is the _only way_ I have ever run Jenkins, and I think it runs already extraordinarily well for our purposes.

I think the idea is not that Jenkins runs on Kubernetes, which as you note can already be done. It's rather that Jenkins uses Kubernetes as a replacement for the worker infrastructure.


yes that is right I believe.


This can be done with plugins, kubernetes-plugin for example. I'm looking forward to seeing what they come up with. It was good to see knative on their roadmap! This could be something major.
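With the kubernetes plugin, a declarative Jenkinsfile can already request a throwaway pod per build, roughly like this (a sketch - it assumes a Kubernetes cloud is configured on the master, and the image choice is illustrative):

    pipeline {
        agent {
            kubernetes {
                // each build gets its own ephemeral pod scheduled by Kubernetes
                yaml """
                apiVersion: v1
                kind: Pod
                spec:
                  containers:
                  - name: maven
                    image: maven:3-jdk-8
                    command: ['cat']
                    tty: true
                """
            }
        }
        stages {
            stage('Build') {
                steps {
                    container('maven') {
                        sh 'mvn -B package'
                    }
                }
            }
        }
    }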


> Is this true? Do you have a source that says this?

It's stated somewhat more strongly, but the post we're discussing itself says:

> It’s not unheard of that somebody restarts Jenkins every day.


I am in London, and our firm uses Jenkins on a huge scale. Certain industries won’t touch products that can’t be installed on premise.


Both GitLab and Circle CI can be installed on premise.


"in London" suggests that geography is the primary driver of your first-hand experience, rather than tech stack or company type. I've been working in London for a decade, and i don't think i've ever used either Gitlab or CircleCI for CI!

That said, i haven't used Jenkins for several years either.


I was seduced by BlueOcean and tried using Jenkins for CI, using Github and AWS ECS builders (which felt like a common enough use-case).

Unfortunately it ended up costing an astonishing amount of engineering time to get working and maintain, with builds frequently stalled or failing.

Since moving to CircleCI 2.0 enterprise (admittedly far from perfect) and Airflow, we have _dramatically_ reduced eng. time spent managing our job scheduling.

The core of our problem was how fragile and complex the Jenkins ecosystem seems to be: any change to the config or settings and it would easily burn a day of engineering, due to random bugs and hard to understand error messages. In the end, no one wanted to touch it!

I think there's a great project hidden somewhere here, but just getting the basic "everyday" stuff done with it can be a real PITA.


I'm sorry to hear the bad experience.

I recognize those challenges in my pitch, we have various efforts already under way to address them, and with this gear shifting, I think we'll be combining those in a compelling way.

For example, defining Jenkins config in YAML in Git is a key piece in solving the fear of config changes; this is called "Jenkins Configuration as Code" and has been under way for a while now.

Cloud Native Jenkins will also split single process "master" into many build-as-a-function kind of processes, so it isolates builds and allows changes to be rolled out more incrementally.

There's more focus on us owning a bigger responsibility around the "basic everyday stuff," too.


Having used Hudson/Jenkins for many years, I recently considered setting it up for a new project, and backed away mostly due to the issues Kohsuke describes. We ended up choosing GitLab instead.

GitLab has been pulling ahead in features and usability, compared to other things I've tried. Right now, different projects I'm involved with use a combination of GitLab Enterprise, Travis, Circle, and Google Cloud Build. Of those, GitLab accommodates the heaviest and most sophisticated workloads, without having to go through too much trouble to set up, maintain, and instruct developers how to use it (certainly less trouble than Jenkins). I highly recommend taking a critical look at all of these services, to see which best fits your needs.


If you don't mind, I'll shill my service as another option: builds.sr.ht. It's still in closed alpha, but it's being used seriously by several open-source projects for complex build automation. It also deploys itself, here's the build manifest which does it:

https://git.sr.ht/~sircmpwn/builds.sr.ht/tree/.build.yml

And an example of a build which used it:

https://builds.sr.ht/~sircmpwn/job/6974

If you or anyone else would like to try it, please let me know. I used Jenkins for a long time (and still do at work), Travis for a while as well, also tried Drone and Circle, but none of them were exactly right. I think builds.sr.ht does it very well.


Nice to know there is a plan and it's refreshing to see that they can understand most of the problems from the customer perspective now.

I hope they address the constant shifts in focus with this plan and that Jenkins can secure its market spot; it really deserves it from a historical point of view, at the very least. It should not be a Nokia or a Xerox, it's better than that and has been a major tool for the industry.

The whole CRD approach is a great way to move forward, but Argo is looking great right now and it's way ahead; if they manage to finish it soon and make it production-ready it will be hard to beat.

The problem is that every segment has its player now and there are some big ones: GitLab, GoCD, Spinnaker, Concourse... So many tools, and the difference is that most of them have more focus than Jenkins does; they also have newer code and more speed. Each has a niche but Jenkins has the market share, so it will be an interesting match.

Jenkins is fighting a bit of an uphill battle but with a huge army.

I hope they keep it simple, focus on being the best at one or 2 things and then scale to other areas, that is my 2 cents.


Jenkins X can help compete effectively with other tools since it automates your entire CI/CD; from creating the Pipelines, setting up your Environments, creating Preview Environments on each Pull Request and then performing GitOps based promotion through your Environments on each release.

I'm looking forward to seeing the Jenkins ecosystem expand to offer similar automated CI/CD for other platforms too (e.g. Terraform / Ansible / VMs etc)


I saw Jenkins X a few months ago, and it indeed looks interesting.

I have not used it as it's super new and I don't trust Cloudbees on the super new stuff anymore. I wish you guys all the best and I hope to use it in the future.

Talking about right now, if I remember correctly Concourse+BOSH can do a lot of that as well, and it's a lot more battle-tested, so it's not like Jenkins X is all alone there on the "we can do everything" spot, but it's refreshing to see it in the fight.


First i've heard of Argo CI. Why would i want that rather than Concourse? Which can now be deployed on Kubernetes, it seems:

https://github.com/helm/charts/tree/master/stable/concourse


I am no Concourse expert, but I've tried it a couple times.

Concourse's interface is just beautiful and functional, the CLI is great as well, but it has its own ecosystem. Lots of tools to solve problems that K8s already solves (Jobs, Volumes, resources, scheduling, etc.). It also uses workers, which if I remember correctly are just like scheduled slaves you need to install stuff into.

Argo is a different beast, it uses k8s like it should be done in my view.

It schedules containers to one or multiple k8s clusters, using CRDs, which are kind of a DSL on top of the k8s API. It does not run slaves; it's simply a Debian or busybox pod, a Job or other resource that runs and returns packages to the workflow or passes them on. The syntax is K8S YAML syntax, the same with a couple more things; very little learning curve if you know K8S YAMLs. Concourse is a very different thing.

BTW I have zero to do with Argo, I just find it awesome and am waiting for it to be more stable.


> Lots of tools to solve problems that K8s already solves

In fairness, Concourse predates Kubernetes and was developed to support a team working on a different container scheduler (Diego, similar vintage to Kubernetes).

Otherwise you're right. The pain of carrying around its own container scheduling, worker management etc has been a notably hefty thorn in the side of Concourse for the last two years or so. It should go away as Concourse gets rebased onto Kubernetes.

> The syntax is K8S YAML syntax, the same with a couple more things; very little learning curve if you know K8S YAMLs. Concourse is a very different thing.

As part of my work in and around Knative Build, Topher Bullock and I wrote a little façade Kubernetes controller[0] that lets you send pipelines and see builds from kubectl. Basically it looks like an existing Concourse pipeline, but with the usual k8s boilerplate.

I'm not sure what the final Concourse-on-Kubernetes will look like. kubectl is showing one-size-fits-all discomfort (see, for example, knctl[1]) and I am not sold that CRDs are suitable for every purpose. 2018 is clearly The Year of the CRD, so I figure we're about 18 months away from the trough of disillusionment on those.

[0] https://github.com/jchesterpivotal/knative-build-pipeline-po...

[1] https://github.com/cppforlife/knctl


Concourse's components (ATC, DB, Workers) can run on Kubernetes, but it is still handling the scheduling of containers it creates.

Delegating container scheduling to Kubernetes is the next major epic on the Core track for Concourse.

As for Argo: I am not particularly in favour of Turing-complete YAML.

Disclosure: I really like Concourse. I work for Pivotal, which sponsors Concourse development.


Valid point.

Doesn't fly end up creating a Turing-complete config file as well? I remember it being stored as some kind of YAML-like syntax.


Fly does substitutions, but that's it: no loops, no if-thens. It's almost certain that something can be tickled into being Turing-complete; it's actually devilishly difficult to avoid doing it by accident. My hunch is pipelines are Turing complete, but I haven't gotten around to proving it.

But there's a difference between trying not to introduce it and going out of your way to build a programming language in YAML. An actual, honest-to-god programmable YAML.


Wait, doesn't Pivotal also sponsor Turing-complete YAML development?


BOSH v1 manifests were Lovecraftian. Well beyond Turing and into the deep underdark of the Beyondness That Must Not Be Yea Even Unto Enterprise.


I'm surprised they have such a good understanding of Jenkins' shortcomings. It's a good first step in fixing them. Although to be fair, this has been coming a long time as the post says; but having Cloudbees' CTO publicly acknowledging those is even better.


He's not only Cloudbees CTO, he's the original developer and architect.

And before someone lambasts him for the Jenkins architecture, Jenkins/Hudson was created in 2005, when things were a lot different, and Jenkins managed to create an entire subgenre of software and lead it to the current day. Jenkins hasn't aged gracefully but how many software products from any category have even survived 5 or 10 years? :)


I can think of a number (Linux, Eclipse, vim, mysql, etc), but as a percentage of total software produced, it's very small.


Thank you for the encouragement.

Indeed I wanted to capture the shortcomings correctly, because I truly believe in the power of the community to solve them. Looking at this thread, I feel my summary is largely validated.

We've been working to solve those, and we'll step up even more so. Exciting times!!


OpenShift's Jenkins <-> Kubernetes integration plugin is pretty neat.

Authentication, SSH secrets and - most importantly - running each build in an ephemeral pod works out of the box.

https://docs.openshift.com/container-platform/3.9/dev_guide/...


OpenShift's Jenkins integration is good.

Though it's even cooler to use Jenkins X on OpenShift as you get automated CI/CD pipelines + Environments, Preview Environments on Pull Requests and GitOps based Promotion between environments.


You can just use helm to install the default chart for Jenkins, and override the specific configuration you need.

But it’s still very clunky to work with, and spinning up a pod only happens when the build queue is not empty. It’s extremely slow.


The biggest deficiency I found in Jenkins is that GUI-based job configuration is great for simple setups and one-off jobs, but the moment you throw in any sort of parameterization it becomes a real headache. At that point you really need to be able to configure your jobs in code.


And the Jenkinsfile documentation is relatively bare and reliant on examples.


Exactly our thought! The whole Jenkins Pipeline is built around the notion that your job definition should be version controlled (and you don't necessarily have to lose the GUI; see our "blue ocean pipeline editor" that now comes out of the box).

Then the newest kid on the block is https://github.com/jenkinsci/configuration-as-code-plugin, which I referred to in my doc.


jenkins-dsl does wonders in this department. When the infrastructure-as-code plugin gets stable we'll have a fully immutable Jenkins setup.


one solution to the problem, if you are building apps for kubernetes is to use Jenkins X which automates all your CI/CD pipelines: https://jenkins-x.io/


I'm wishing Jenkins all the best. I know it since the Hudson times as the de-facto CI system for Java (and Cruise Control before that as my first encounter with CI).

OT: does anybody know a CI system based on plain Makefiles, convention-over-configured for autotools-like default targets, and supporting file-suffix based build and test rules for C + JS + custom compilers and such?


invoke `make` from inside your `Jenkinsfile`? :)


I love working with Jenkins - I know it is a pain to keep up to date, but for me it has become a way, as a sole or small-team syseng, to manage all kinds of stuff. "Jenkins-Ansible-GitHub", where you have a Jenkinsfile sitting in the git repo you are building/deploying etc., has been a pretty good set of tools to manage heterogeneous environments.


Agreed. Now if only they could generalize Configuration as Code: https://github.com/jenkinsci/configuration-as-code-plugin. It's the missing piece.


That's definitely the goal, part of Kohsuke's announcement.


I'm trying Gitlab atm, it's great to see something simpler than Jenkins to do CI/CD.


Glad to hear that. We'd love to hear your feedback about GitLab CI/CD.


I've been working with GitLab CI for the last year. Here is some of my feedback:

- 6 months ago we seriously considered moving away because it was really unstable (even when running on private runners) but now it's a lot smoother

- with private runners you can have a very powerful CI without having to manage a master (as Jenkins) for a fraction of the costs (runner with docker-machine on spot instances)

- beware that if your CI flow is more complex than just a simple pipeline to build and deploy your project (we have a project for our code, which then triggers a project for end-to-end tests, which then triggers a deploy to our env) you will need a lot of boilerplate code (you will need to manually manage artifacts if they need to be shared between jobs)

- variables from a triggered pipeline should be available through the API and made more visible in the UI

- we do not use kubernetes so everything CD is off the plate for us (the environment and monitoring tabs are useless)

- DO NOT USE THE BUILT IN CACHE, it's super slow and will fail unexpectedly (simply do cp to s3 and it will never fail)

- IF YOU USE THE BUILT IN CACHE, parallelism will be hard (you cannot populate part of the cache from a job, another part from another job and in the next step use the result of both cache)

- triggers are weird: it's a curl to an API endpoint but it does not use the normal auth mechanism and it will answer with a useless JSON (please add the project id, variables etc. to the result of the trigger; it's a must-have for anyone that needs to parse the output)

- the gitlab API is top notch except on the CI part...

- be ready to restart some jobs 2-3 times if gitlab is deploying a new version ;)

- be ready to have some random errors that can be fixed by a retry

- it will seem like a good idea to run gitlab-runner on every laptop of your team to reduce cost. DO NOT DO THAT; if there are more than 2 of you in your team, the guy in charge of making the CI run (me) will make you restart your docker, delete a specific image, restart gitlab-runner, etc... invest 1 day to set up docker-machine on spot instances

- please show in some way when a job triggered another one (maybe a section in the YML, or even better have us populate an env var with a link to the triggered pipeline, or anything)

- design your pipeline so that if a part fails you can restart it without breaking everything (I'm looking at you terraform)

This list seems really long, but I have worked with Jenkins and, even if it's more stable, the steady improvements and additions to GitLab CI still make it my first choice for my needs.


> - IF YOU USE THE BUILT IN CACHE, parallelism will be hard (you cannot populate part of the cache from a job, another part from another job and in the next step use the result of both cache)

You can use the `artifacts` and `dependencies` combo to control which artifact will be downloaded into a particular job.

For instance,

    bundle-install:
      stage: build
      script: ...
      artifacts:
        paths: [bin/*]

    yarn-install:
      stage: build
      script: ...
      artifacts:
        paths: [bin/*]

    rspec:
      stage: test
      script: ...
      dependencies: [bundle-install] # This downloads only the `bundle-install` artifact to this job

    karma:
      stage: test
      script: ...
      dependencies: [yarn-install] # This downloads only the `yarn-install` artifact to this job

    eslint:
      stage: test
      script: ...
      dependencies: [] # This downloads nothing

https://docs.gitlab.com/ee/ci/yaml/#dependencies explains how it works


> it will seem a good idea to run gitlab-runner on every laptop of your team to reduce cost.

Will it?!


Agreed, that's a crazy way to try to reduce cost


Reminds me of the Xcode built-in distcc thing they had back then.


Gitlab runner is really easy to install on Linux. At work, I run Gitlab-CI jobs on my laptop: the main reason was that the shared runners (provided by my company) were unstable and full. Our Gitlab instance now has ~20 shared runners (used by dozens of teams) which are a lot more stable. I still use my laptop to avoid waiting forever for the docker images to be downloaded.


> - we do not use kubernetes so everything CD is off the plate for us (the environment and monitoring tabs are useless)

Environments can be useful even without integration with K8S. It's useful e.g. for the review apps feature (https://docs.gitlab.com/ee/ci/review_apps/index.html), which doesn't need to be hosted on K8S. Look at https://gitlab.com/gitlab-org/gitlab-runner/environments, where we're using environments to track our releases, e.g. the download pages hosted on AWS S3. Another example is https://gitlab.com/gitlab-com/www-gitlab-com/environments - again, our about.gitlab.com website has each MR deployed as a review app without usage of K8S, but the environments feature is used to track all deployments, link them from the MR page, and automatically delete review deployments when the MR is merged or closed.
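For reference, tracking a non-K8S deployment as an environment is just a matter of declaring it on the job, roughly like this (the deploy scripts here are placeholders):

    deploy_review:
      stage: deploy
      script:
        - ./deploy-review-to-s3.sh      # any target works; K8S is not required
      environment:
        name: review/$CI_COMMIT_REF_NAME
        url: https://$CI_COMMIT_REF_SLUG.example.com
        on_stop: stop_review

    stop_review:
      stage: deploy
      when: manual
      script:
        - ./remove-review-from-s3.sh
      environment:
        name: review/$CI_COMMIT_REF_NAME
        action: stop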

> - DO NOT USE THE BUILT IN CACHE, it's super slow and will fail unexpectedly (simply do cp to s3 and it will never fail)

Are you referencing cache configured for Shared Runners on GitLab.com or the cache feature in general?

I have to agree that we had many strange problems with the cache in the past for Shared Runners on GitLab.com. Even now the feature does not always work as we would like, and we're already thinking about how we could improve it: https://gitlab.com/gitlab-com/infrastructure/issues/4565.

But in general - I can't agree that the feature is not working and should not be used. Most of the time we had no problems using the distributed cache with S3. When cache servers are stable, the feature just works. I also can't agree that a manual copy to S3 will be faster than the copy to S3 made by the Runner - in the end both are simple HTTP PUT requests sent to the chosen S3 server.

Also remember that in some cases it's better to use the local cache instead of the remote cache feature. With files stored locally there aren't many things that can go wrong, and it's definitely the fastest solution (however it can't be used for all workflows).

> - IF YOU USE THE BUILT IN CACHE, parallelism will be hard (you cannot populate part of the cache from a job, another part from another job and in the next step use the result of both cache)

Well, it depends :)

Our cache feature was designed with specific workflows in mind. The priority is to allow a particular job to be sped up (but the job should be configured in such a way that it will still work even if the cache is not available). We've made it possible to re-use the cache between parallel jobs, but as usual with more complex designs, it's hard to handle all cases.

But what it was not designed for, and what confuses new users from time to time, is passing things from one job to another. This is where the artifacts feature should be used. The cache feature was just never designed for this, and we were always loud about this :)

But it doesn't mean that the cache can't be used with a parallel pipeline. Using configuration features like `key` and/or `policy` and configuring them properly for different jobs, it's possible to prepare the cache in one job and then re-use it for many parallel jobs in the next stages. This is exactly what's done for the GitLab CE and GitLab EE projects: https://gitlab.com/gitlab-org/gitlab-ce/blob/v11.2.0/.gitlab.... Look for the `default-cache`, `push-cache` and `pull-cache` YAML anchors and check how they are used. In GitLab CE's pipeline, the `setup-test-env` job calls `bundle install` and all downloaded gems are then turned into a cache. In the next stage, where all tests are executed, the same cache is downloaded, which speeds up the `bundle install` executed in all test jobs.
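The shape of it is roughly this (job names and paths are illustrative, not the actual GitLab CE config):

    setup-test-env:
      stage: prepare
      script:
        - bundle install --path vendor/ruby
      cache:
        key: gems-$CI_COMMIT_REF_SLUG
        paths: [vendor/ruby]
        policy: push        # this job only uploads the cache

    rspec:
      stage: test
      script:
        - bundle install --path vendor/ruby   # near no-op thanks to the cache
        - bundle exec rspec
      cache:
        key: gems-$CI_COMMIT_REF_SLUG
        paths: [vendor/ruby]
        policy: pull        # test jobs only download, never upload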

So in the end, it depends on what you're expecting:

- If you want to pass things from one job to another: it's not that the cache doesn't work. You should just use artifacts for this, since the cache was never designed to handle such a workflow.

- If you have a not-too-complicated pipeline, then configuring the cache for parallel usage should not be a big problem.

- If you have a complex pipeline... well, there definitely will be cases where our cache feature will not be very useful. In those cases one needs to choose between refactoring the pipeline so it fits how the cache works, or finding one's own way to speed up jobs. But I'd say that in most cases it's possible to configure the pipeline in a way that it will be able to use the cache.


Last I used Gitlab CI I remember being somewhat infuriated at the fact that I couldn't use the Gitlab docker repo as the source for my build images. That was about a year ago though so that might be old news.


You can use images hosted on GitLab's Container Registry. Especially - you can use such images as the base for CI/CD jobs running on the same GitLab instance.

It has been possible since a very early state of GitLab's Container Registry. The problem was when one wanted to use images created from internal or private projects, which require authentication. Support for such a workflow was added (partially) in GitLab Runner 0.6, and since GitLab Runner 1.8 (late 2016) it's possible to use any private registry with GitLab CI jobs. And private images from the same GitLab instance are accessible without any additional work by the user - as long as the user who triggered the job has access to the project where the requested image is stored.
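In a .gitlab-ci.yml that is just (the image path is illustrative):

    # use an image built in and served from the same GitLab instance's registry
    image: registry.gitlab.com/my-group/my-build-image:latest

    build:
      stage: build
      script:
        - make all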

Details can be found at https://docs.gitlab.com/ee/ci/docker/using_docker_images.htm....


Not sure I understand your request but you can use docker images as part of your build https://docs.gitlab.com/ee/ci/docker/using_docker_images.htm...


I'm impressed about how honest the author is about the shortcomings of Jenkins as it is now. Very appropriate that he mentions being in a local optimum - that is where most organisations end up with Jenkins. The server nearly immediately becomes a snowflake, most stuff is configured through the GUI rather than code, probably some people know it's not ideal but getting to something better requires changing everything and people know how it works now.

Having said that, I think the conclusion is wrong. The next-generation CI already exists (CircleCI, Gitlab, etc), attempting to evolve Jenkins into that seems like a punishing task given the huge legacy and relatively little strategic advantage. Don't want to take anything away from them blazing the trail, but in the same way RCS and CVS did that and eventually bowed out of the game. Jenkins should gracefully do the same.


Thanks for your thought. I took your main question to be "why bother?"

I think a part of it is that I fundamentally believe in an extensible system. The world of software development is so diverse, and we have smart people everywhere. So I always felt that the best thing a geek like me can do for other geeks is to give them a shoulder to build on. I don't think that's a solved problem, and to me, that'll always be an important value of the Jenkins project, more so than any code.

I think a part of it is the responsibility to users. Jenkins is very widely used software, and it's an incredibly important part of the software development process for many. I appreciate that kind of trust, and I want to deliver better software for them. I think people in the community share the same passion.

As CTO of CloudBees, serving our users and customers and broadening the adoption base are obviously important business goals. So the interests are aligned there as well.

And finally, I think this kind of "reinvention of the brand" happens all the time. Windows got reinvented from 95 to NT, Firefox got reinvented a few times. There are many other examples less famous but closer to my part of the universe, like Maven 2, GlassFish 3, ...


The worst part of my job is configuring our Jenkins server and managing builds in their dumbass groovy based DSL.

I'm willing to bet that most people just want to build GitHub repos. Then why do we have to do this mess to get a decently repeatable deployment strategy: https://coderanger.net/jenkins/ I should not have to crack open plugin source code in order to configure the plugin programmatically. It's dumb and bad.

Also groovy is a bad language. Managing Jenkins pipeline library deps is a pain.

Also yeah, plugins break constantly and upgrading them is always a nightmare.


> Also groovy is a bad language.

This seems to be a repeated pattern that is really giving Groovy a bad reputation: it keeps getting embedded as an extension point / scripting solution inside other products. It is sold as "it's almost the same as Java, so we don't need any documentation for it" - and the result is that people with little to no Groovy knowledge end up trying to use it and get incredibly frustrated with it.

I'm curious if your conclusion above is based only on encountering it inside other things (Gradle, Jenkins, etc) or if it's actually from analysing its characteristics as a language more generically?

(FWIW, Groovy is probably my favorite language, but I use it as a full stack language for application development, quite a different mode to how most other people encounter it).


Install GoCD and become a happy person again.


if you are happy to use Kubernetes then if you switch to Jenkins X you never need to configure a Jenkins server or create a groovy based DSL again: https://jenkins-x.io/


At ${DayJob} Jenkins is our default of yore. Returning to refresh a 1.x install for one group's product we're faced with the poster child for a Jenkins install gone bad. Looking at you Chuck Norris plugin. We can't upgrade and we can't migrate to a fresh install due to how Jenkins handles plugins. So we're left with a critical chunk of infrastructure that's a time bomb.

Ultimately, instead of making the jump to 2.x and Jenkinsfiles, we're trialing Buildkite with great success so far and the confidence that we can jump ship to CloudBuild, TravisCI, Concourse, CircleCI, etc. should we need to.


The focus on cloud-first Jenkins is interesting considering CloudBees's acquisition of Codeship earlier this year. Obviously Kohsuke would be biased toward Jenkins, but as CTO, I'd imagine the corporate goals take precedence.


even from an OSS goals perspective I'm looking forward to seeing better alignment, reuse and interoperability between Jenkins, Jenkins X & things like CodeShip & Knative Build


I'm surprised that I haven't seen more of this announcement in my media streams. I'm not a big fan of Jenkins, I find it overly complex and a Swiss army knife.

When you can do anything, you often end up with poor implementations (IMO). If the tools you have are restrictive but useful enough, I find it easier to adapt my workflow to the tools instead of demanding that my complex workflow fit into this one tool.


I wish we could get encrypted credentials à la Travis in a Jenkinsfile. I've found most configuration for a job can be in a git repo, but you have to manage some things through the web interface, and it's not that easy securely managing credentials for a Jenkins installation, even with Folders and Roles.


BTW we use the kubernetes credentials provider plugin in Jenkins X, which exposes Kubernetes Secrets as Jenkins Credentials; then the `credentials` step in the `Jenkinsfile` masks them in any build logs.


you can fetch the credentials from Kubernetes secrets using this plugin https://jenkinsci.github.io/kubernetes-credentials-provider-... and manage both your Ops and Jenkins credentials the same way


Travis recommends storing secrets that don't change between branches via the project settings. Also, secure env vars in the Travis config appear to interfere with deep merging on triggered builds... Hopefully they can do better :)


This is nice to see. I wish them luck on their project, because it is not going to be easy.


Hate to be that person, but what does "Cloud Native" even mean? Is there a glossary for buzzwords somewhere?



