Does anyone have Jenkins in production and found it to be reliable and pleasant to use?
I had to do lots of Jenkins integrations some time ago, and even though I tried to minimize the number of plugins and make things as simple as possible, things would randomly break from time to time or exhibit weird behaviour etc.
I had the impression that Jenkins is deeply confused about some of its core concepts, e.g. how builds are triggered. Also, it is a huge pile of untestable spaghetti code, which explains the weird bugs.
I modified an open-source plugin only to find out that it's almost impossible to write meaningful tests for it: you can't even mock the Jenkins API without using the darkest Java mock magic. Jenkins classes are just written in an old style that makes testing _really_ hard but probably can't be changed without breaking all of Jenkins.
I tried Jenkinsfile, which was only even more unreliable (at the time at least; this was > 1y ago). The whole idea of taking Groovy and modifying the hell out of it to make it even more weird, surprising and edge-casy just didn't go well for me.
I ended up with generating a _lot_ of very simple jobs for each project and connecting them via triggers instead. It was not very pretty, but it was the most reliable that I could get out of Jenkins.
So the thought of integrating Jenkins deeply into your deployments, talking to Kubernetes and sitting in the middle of a huge pile of complexity (Jenkinsfile, Dockerfile, Helm, ...) and magic "that you don't need to worry about" scares the hell out of me.
But then again, if you want to do CI/CD with Jenkins that's what you might want, right?
(I would prefer more simple approaches if forced to use Jenkins, though)
Pleasant? Never. I absolutely hate every time I have to launch a webpage to dig around and find out why my build failed. Automating jobs is a nightmare. I can cURL a request to kick off a build, and do you know what it returns? NOTHING: a 201 response with NO information linking to the build in progress. Oh, there's a JSON API to see running jobs, but without some sort of ID, it's useless. Scripting is in Groovy. Want to use another language? Too bad! "It was hard enough to get Groovy" is the response from the team. If I'm forced to use a web interface, does it still have to look like it was designed by a team of Java devs from 2008? The only thing that's changed is the Hudson-to-Jenkins clipart. Yes, I'm ranting.
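To illustrate, this is the kind of request I mean (a sketch; the host, job name and token are placeholders):

    # trigger a build of a job via Jenkins' remote API
    curl -X POST 'https://jenkins.example.com/job/my-job/build' \
         --user alice:API_TOKEN
    # => HTTP/1.1 201 Created, with nothing in the body that
    #    identifies the build that was just queued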
To make it work, they had to cripple Apache Groovy so you can't use its functional collections-based functions. Not sure if you can really call it "Groovy" with that handicap.
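For instance, this is the kind of idiomatic collection code that tripped people up under the Pipeline CPS transform (an illustrative sketch, not from any real Jenkinsfile):

    // closures passed to collect/each were long unsupported under CPS;
    // you had to hide them behind a @NonCPS helper method instead
    def names = items.collect { it.name }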
Thanks for your feedback. We are trying to make things simpler with Jenkins X by using best of breed tools (git providers, issue trackers, cloud services, service meshes, security & code quality tools etc) with best of breed OSS tools like kubernetes, helm & skaffold to automate CI/CD on the cloud & kubernetes.
One of the big changes from traditional Jenkins is that we don't expect folks to have to configure Jenkins, add/remove/edit plugins, or even write Dockerfiles or Jenkinsfiles.
If you really wanna do that, Jenkins X won't stop you - but we are trying to help automate and simplify CI/CD through standard tools, standard file formats (Dockerfile, Jenkinsfile, skaffold, helm etc).
Are the cloud, kubernetes, docker, helm & istio complex? Sure - but our goal is to simplify, automate & avoid folks having to look at all that detail.
It's still early days and a challenge. E.g. even Lambda & the AWS API Gateway are complex. But we hope to keep improving to make things easier to use & to help folks go faster by providing automated CI/CD as a service on any kubernetes cluster / cloud
Having a good set of plugins that integrate well should limit the complexity explosion and the number of edge cases users run into. So that's an improvement. It won't fix the Jenkins Heisenbugs though, of course :)
We use it in all of our environments and it is extremely flexible, mostly reliable, but still feels like a hodgepodge of plugins and various components. Once you throw in a variety of build tools and deployment options etc, it can get rather unwieldy. Still, we use it for almost everything now (no forgotten cron jobs running on random servers) and it gets the job done. I often think we should investigate other options, but we have made such an investment at this point it would be a massive undertaking to move to something else.
I actually think Jenkins is way too flexible for most use cases. We moved to GitLab CI, which isn't perfect, but it provides safety rails/structure/opinions that pretty much provide an answer for everything you want to do, apart from maybe obscure corner cases that might not make sense for a CI/CD tool anyways.
Also you get the close integration of your CI tool and your git repos, which is very nice from a visibility point of view.
Having said that, GitLab is trying to own all parts of the build and deployment process, which from previous HN discussions, is of great annoyance to a lot of people who want to cherry pick what they use GitLab for.
Thanks for using GitLab. We want to be supportive of cherry-picking GitLab features. For example we just released CI/CD with GitLab for GitHub https://about.gitlab.com/features/github/
Is there something we can add to GitLab to make it more composable?
Thanks for the question, greatly appreciated. We used GitLab on k8s when we first transitioned, but we found there were a few things we didn't quite like about the GitLab-Omnibus helm setup, so we moved it off the cluster and used the AWS EC2 AMI, which was really easy to set up.
We are going to start experimenting with the new cloud native GitLab chart, but it would need to gain some maturity before we use it in production.
Do you know if the new GitLab cloud native helm chart will allow you to turn off certain things like mattermost and prometheus? That was something we didn't like about the omnibus chart: it exposed several extra services/ports that we didn't really want to manage/think about at the time.
samm, we are indeed making all components that are not core GitLab services optional. You will be able to turn them off with a simple `prometheus.enabled=false`.
Thanks for giving the charts a try in alpha/beta, please pass along any feedback. We'd love to get it!
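For example, with the helm 2 CLI that would look something like this (a sketch; the release name and chart repo alias are whatever you use, and the flag is the one named above):

    # install the chart with the bundled Prometheus switched off
    helm install --name gitlab gitlab/gitlab \
      --set prometheus.enabled=false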
Thanks for the good work on GitLab, I tell everyone I can how great of an experience it was to use it at my last company. The CI integration was GREAT, the ui is pretty nice and the maintenance overhead was minimal.
This looks like an improvement over what Jenkins 2.0 provides and I wish you guys good luck.
I have used and vouched for Jenkins in several companies and some decent sized licenses were bought mainly because of my input.
But to me, CloudBees made a major dick move by making stages not restartable in Jenkins 2.0, among other things. E.g. dropping Stage View out of nowhere and focusing only on Blue Ocean. I complained about it in the channels that I had at the time and the response was: it's going to be unsupported from now on.
It's a super sketchy thing to have such a useful feature bundled with a bunch of support and other stuff that I don't care about, and then charge me per node. I am migrating away from Jenkins to GoCD after close to 10 years of using it, and don't get me wrong, I don't feel happy doing this, but it's hard to justify staying.
Fortunately the future looks bright; there are several interesting solutions available. Argo is super interesting to me, and I'm looking forward to Argo CD!
Thanks for using Jenkins for close to 10 years, and sorry to see you move on, but I just want to correct the record here because I don't think the timeline of events and your description are accurate.
First, pipeline stages have never been restartable in Jenkins from the beginning of Jenkins Pipeline. It wasn't as if we started with restartable stages and decided to close-source them at one point. From the very beginning, it was a feature we exclusively developed for CloudBees products.
From time to time, we do move some features from products to Jenkins. As somebody later in the thread pointed out, in JENKINS-45455 we are doing just that. Another example of this from early days is the folders feature, which is now used by many.
Any company building enterprise products on top of OSS will likely keep some features in products. And for any given person, only some of those features are likely useful. So while I understand the frustration of "that feature should be in OSS" or "I should be able to just get this one thing for a small price", I don't think there's anything inherently bad about these practices.
As for Pipeline Stage View, it is still available today, and IIRC it is also still a part of the Jenkins 2 default experience. Now, you are right that, as a contributor to the project, CloudBees is focused on pushing Blue Ocean forward. We think Blue Ocean solves the problem of pipeline result comprehension a lot better, and we'd rather make one solution better as opposed to working on two separate things that solve the same problem simultaneously. That is not to block other people from carrying the Pipeline Stage View forward, though, if anyone is willing.
Hello there, thanks for building jenkins (or hudson?) you are an absolute hero.
Feel free to correct me, here is my take on it.
You are right that restarting in declarative pipelines was not open and then closed. But I didn't say that; I said not having it was a dick move.
My point is that a Stage is just an alias for a Job (or several) to the regular Joe who doesn't work on Jenkins code. That was always restartable at any time in my pipelines (Delivery Pipeline Plugin, Build Pipeline plugin, etc...). So when we started writing the old and new pipelines as code, we assumed the feature would be the same; while learning the new DSL at the time, it was not clear that it wasn't, at least to us.
Lots of people thought the same; the link below is an example of it. This was 2016 if I am not mistaken. It's now open source, and congratulations for changing that, as I said when it was pointed out to me, but I am not following the topic anymore.
I have no problem with the business; as I said, I vouched for it, and in the end they bought the licenses. My problem was that the main reason to buy it was not support or something juicy like CloudBees cluster management features. It was that we wanted restarts.
Is it bad? Absolutely not. But it's not something that made me feel happy. I just think it was not a thing I could easily communicate to the people making the decisions. It was like punishing engineers for a problem they can't solve to give managers a reason to buy it. Cluster management is a nice reason, automatic backups another; having miserable engineers is a terrible one.
About Stage View: I was frustrated after training support staff and others to use it on a complex pipeline, only to see a new tool take over, and from what I remember it was a very fast switch. There was no gain in changing to Blue Ocean at the time, as our restarts were not working on it. So we had one UI that worked without support and another that didn't with it. Again, I am not following the topic anymore, so this might be fixed by now.
If you have more feedback, I'm happy to connect you with one of the PMs or somebody from the Jenkins OSS team.
I'm the founder/CEO of Codeship and we got acquired by CloudBees earlier this year. And I want to make sure that Jenkins + all CloudBees products get better :)
I did not know about it, I actually own and use a couple t-shirts from you guys, Codeship is very nice! Congratulations to all of you!
Last year I was in contact with the Jenkins team and had a couple of meetings to discuss the things mentioned above and some more.
The developers were very nice and interested. I know they were doing their best, but the problem was that my problems were not top priority. I saw some problems solved over the months, but I completely lost my will to help when Blue Ocean started and Stage View was abandoned, plus the super dick move around restartable stages.
It was not a small deployment, mind you; the system was a critical one (payments) with multiple sites and all the works, and not a small license either. I left that company, but before leaving we were already migrating away.
We had a complete pipeline dependent on it; we were an early adopter of the whole Jenkinsfile thing (a developer myself, I pushed hard for it) and Stage View, even without being awesome, was already part of our way of working. We wanted more features, which I was already discussing, and bugs fixed; out of the blue they changed to a completely different thing that was pretty but didn't solve any of my old problems, that will have its own problems, and that was/is not even close to complete.
I just feel I wasted my time, I don't plan to make that mistake again.
But I hope Jenkins X can improve on the past mistakes and become a contender again.
Sorry to hear about those issues. I shared your feedback with a couple of people, and we will work hard on being more mindful when making such changes going forward.
If you're interested I could provide UX blockers and annoyances from someone who has worked with Jenkins for close to 10 years now. Most of them would be about the UX around declarative pipelines and the frustrations around Configuration as Code with Jenkins.
The new CI/CD from Gitlab looks great... I am on Github otherwise I would definitely use it. In fact I find myself amazed by their development pace, every release is packed with stuff.
GoCD is a super simple CD platform (from a user/developer point of view), easy to learn and with little possibility to snowflake it, with fan-in dependency resolution that works nicely.
The UI is not awesome but does the trick for support teams and others.
BTW I have no association to any of the CD/CI companies I just like to work on this topic.
Sytse, thanks a lot for being present in the community. It's most appreciated, and I always look forward to your comments.
With regards to your question: I agree with user kerny.
Stop packing stuff on top.
What I want from Gitlab is to use it to manage my repos. If you improve that aspect (which is already fine though IMO), you'll make me happier.
If you improve Gitlab integration with other CI tools, you'll make me happier.
If you improve your CI solution (which I found lacking when I evaluated it 1.5 years ago -- no idea how it's now), I still won't use it -- I explicitly don't want to rely on one, integrated solution.
In my experience, such integrated solutions are fine for a while, until they aren't. My use cases tend to expand to things the integrated solution doesn't provide, and then I'm stuck.
Do one thing, and do it well. Doing yet more things detracts from Gitlab's appeal. Personally, I wouldn't mind you utterly removing Gitlab's CI tool (I know, not gonna happen, and that's fine -- just saying).
We don't want you to get stuck with an integrated solution that is bad. We'll make sure that we keep improving every aspect of GitLab together with the wider community (100+ contributions in the last month). And if you want to use GitLab with something else, you're welcome to https://about.gitlab.com/features/github/
My primary concern is that instead of polishing the features that have already been released, the platform is trying to do too many new things. Some of that stuff is cool (k8s monitoring integration, though EEP is too expensive for me, and my Grafana dashboard does basically the same thing), while some of it seems a bit bloated (SAST/DAST for example, which was a few lines of code to implement ourselves).
I really want the core Github replacement use-case to be as ergonomic as Github is. And the CI/CD piece is also great, but still has plenty of rough edges (e.g. Environments are a great feature, but I still can't clean up stale ones, which makes the environment list basically useless).
General reliability in CI/CD is not great; I'd say something like 0.25-0.5% of my build jobs fail from intermittent infrastructure failures (mostly gitlab runner/API issues from what I can tell), which wasn't a problem when I was using Jenkins.
Ops is still a significant concern; site reliability has improved in the last year, but that's not saying a lot; it's still a fairly frequent occurrence to get errors during/after a deploy. I'm not sure if this problem would be better or worse if I self-hosted, as I don't know how hard it is to run a GL instance (seems like it's hard, given how often the gitlab.com site has issues).
Performance has also improved in the last year, but the site is still on the slow side (e.g. compared to contenders like gitea).
Oh, and the pricing model is a bit broken -- all of the other SaaS platforms that I use let me pay monthly (at a higher rate); when I was evaluating paying for gitlab.com vs. doing self-hosted EE, I really wanted to pay for my team to use the hosted offering for a few months to see how things went, but I wasn't prepared to lock in for a year, so I didn't end up trying out the hosted paid offering.
None of these points in isolation is enough to make me leave the platform and go back to Jenkins, but they are enough to make me pay close attention to the alternatives.
Support has had trouble scaling; we just hired a director to make sure we get back on track. Sorry about that.
Having your builds fail intermittently is bad; this should be a problem only on GitLab.com. Reliability there is not where it should be and we're taking drastic actions to improve it. If anyone reading this is up for the challenge please see https://jobs.lever.co/gitlab/a9ec2996-b7b6-4d87-aed0-1fc2ce3...
It is actually 45 days and in our subscription terms, Section 5.2: "If Customer terminates this Agreement pursuant to Section 6.2 within 45 calendar days from receipt of the initial invoice for the Licensed Materials, GitLab will refund all Fees paid hereunder."
Sure, if you dig into the SAST Dockerfile you'll see that it's running `bandit` if you're in a Python repo. So add this step to your .gitlab-ci.yml:
    bandit-check:
      # This check runs OpenStack Bandit, a Python static analysis
      # tool that checks for security issues.
      stage: unit-test
      script:
        - bandit -r -x 'tests,test_,/migrations/,./src/' -c bandit-config.yaml -ll .
DAST uses ZAP, which you can also run in a Dockerfile yourself.
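For reference, running ZAP yourself is roughly this (a sketch; the target URL is a placeholder):

    # run OWASP ZAP's baseline scan from its official docker image
    docker run -t owasp/zap2docker-stable zap-baseline.py \
        -t https://staging.example.com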
Of course there's also some window dressing to display the errors on the main MR, instead of having to dig into a step failure, but that doesn't make a meaningful difference to me.
(This feature could well have moved on since it was first implemented, that was the only time I dug into it).
So we already have a very custom Jenkins setup that builds containers, runs tests and creates test environments out of our pull requests in a k8s namespace. This seems to come with so many things that we already have.
We have a few issues with it, like Jenkins suddenly deciding to build & test all branches/PRs in all repos, killing the server.
* What is Jenkins X exactly, and how does it relate to Jenkins? Is it just a CLI utility that generates git repos, k8s clusters and Jenkinsfiles for us? Is it a fork of Jenkins?
* automated CI/CD for your kubernetes based applications using Helm Charts & GitOps to manage promotions (manual or automated)
* a single command to create a kubernetes cluster, install Jenkins X and all the associated software all configured for you OOTB (including Jenkins, Nexus, Monocular etc): http://jenkins-x.io/getting-started/create-cluster/ - ditto for upgrading
* a single command to create new apps or import them via build packs to create docker images, pipelines and helm charts with GitOps promotion: http://jenkins-x.io/developing/create-spring/
* automated release notes + change logs with links to github/JIRA issues etc
* feedback on issues as they move from Staging -> Production
i.e. more automation around CI/CD and kubernetes so you can spend more time focussing on building your apps and less time installing/configuring/managing Jenkins + Pipelines
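As a rough sketch of what that looks like at the command line (the cluster provider & app type here are just examples):

    # spin up a GKE cluster with Jenkins X & associated tools installed
    jx create cluster gke

    # import an existing project: generates Dockerfile, Jenkinsfile & helm chart
    jx import

    # or bootstrap a brand new Spring Boot app wired into CI/CD
    jx create spring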
Not sure if you answered this already but I still have GP's question unanswered in my head
> What is Jenkins X exactly, and how does it relate to Jenkins? Is it just a CLI utility that generates git repos, k8s clusters and Jenkinsfiles for us? Is it a fork of Jenkins?
So we should be able to add a command `jx create cluster do` for using kops on DO - the current `jx create cluster aws` uses kops under the covers to spin up the kubernetes cluster.
> Relationship between Jenkins and Jenkins X
Jenkins is the core CI/CD engine within Jenkins X. So Jenkins X is built on the massive shoulders of Jenkins and its awesome community.
> We are proposing Jenkins X as a sub project within the Jenkins foundation as Jenkins X has a different focus: automating CI/CD for the cloud using Jenkins plus other open source tools like Kubernetes, Helm, Git, Nexus/Artifactory etc.
> Over time we are hoping Jenkins X can help drive some changes in Jenkins itself to become more cloud native, which will benefit the wider Jenkins community in addition to Jenkins X.
I find this solution compared to the gitlab Auto Devops, frankly, underwhelming.
We recently deployed AD in our self-hosted GitLab instance and combined the SAST container checks with our production policies; it's been rock solid.
Add to this the fact that we are able to manage all the production policies via the pipeline APIs and AD templates, and the whole Jenkinsfiles deal seems far less scalable and far more difficult.
Sorry to hear you're underwhelmed! Did you check out the GitOps features for versioning all changes to all Environments in git with human approval? http://jenkins-x.io/about/features/#promotion
Or the automatic publishing of Helm charts to the bundled Monocular for all versions of your apps for your colleagues to easily be able to run via helm?
Or that it works great with GitHub, GitHub Enterprise & JIRA and has awesome integration with Skaffold?
Thanks for using GitLab. If people want to see some raw footage of me using Auto DevOps with Spring after linking it to a Kubernetes cluster please see https://www.youtube.com/watch?v=9D5TwMo-IIw We're considering renaming Auto DevOps to GitOps. What do people think?
I am using gitlab, though we quickly grew beyond auto devops.
Your definition of auto devops is different than gitops. Gitops is the practice of using commits and pull requests to execute change and do releases.
Weave uses it to mean git as the source of truth.
https://www.weave.works/blog/gitops-operations-by-pull-reque...
Kelsey Hightower has talked about it and has demoed the workflow of using pull requests to initiate promotion and deployments.
Gitlab's auto devops does not seem to tackle promotion via environment repos, so in my understanding does not fit gitops and would be confusing to call it such.
This solves part of a real problem. There are a lot of tools out there, but end-to-end integration is left as an exercise to the reader. I'd love some out-of-the-box sanity when it comes to devops, CI/CD, and cloud hosting. Instead I'm stuck building network architectures, deployment scripting, dealing with archaic broken-by-default configs of misc bits and pieces, host configurations, build servers & CI/CD pipelines, etc. from scratch. Add log aggregation, security auditing, and other nice-to-haves that are actually not optional these days (what, are you running blind and ignoring failed ssh attempts?) and you have a full explanation of why so many companies make a mess of all of the above.
IMHO a lot of the stuff in this space is either focused on making life harder through added complexity to up-sell support or more services, or on solving a narrow problem in such a way that you still have to take care of a lot of other stuff.
I really don't like the idea of my CI/CD tooling being responsible for provisioning its own k8s cluster... there are a lot of other more mature projects out there for doing this.
Is the idea that the ONLY thing running on this cluster is jenkins-x and review/preview environments or something?
The default is to use separate namespaces in kubernetes for each team's developer tools & pipelines, and for the Staging & Production environments (plus Preview Environments). Multiple teams can obviously use the same cluster with different namespaces.
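e.g. with the defaults you'd see something like this (a sketch; jx, jx-staging & jx-production are the default namespace names, yours may differ):

    $ kubectl get namespaces
    NAME            STATUS
    jx              Active     # team dev tools & pipelines
    jx-staging      Active     # Staging environment
    jx-production   Active     # Production environment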
We'd expect that ultimately folks may want to use a separate cluster for development & testing from Production. GitOps makes that kind of decoupling pretty easy, but we've still got work to do to automate easily setting up a multi-cluster approach:
https://github.com/jenkins-x/jx/issues/479
It's not completely clear to me from reading the site - does this run a non-dockerized app build in kubernetes, or does it also work for building and deploying my app as a docker container itself? This usually requires things like being able to spin up a cluster of containers per build - one with my app, one with a database to run against, maybe one with memcached or elasticsearch for integration tests, etc. And does it work out of the box for complicated cases like partitioning a large test suite to run in parallel, where each parallel part of the build needs its own mini cluster of a couple of containers talking to each other?
I haven't looked into the current state of this recently, but I ran into a lot of problems with this with a bunch of hosted CI services in the past. Somewhat ironically, as of a couple of years ago, if you needed to build your own docker container as part of a build, you had to specifically steer clear of CI services that mentioned docker at all, because that meant they were running their builds inside of containers, and it was a pain to figure out how to run my own docker build, much less spin up a cluster per build with something like docker-compose, inside of a running container.
Curious if and how Jenkins X solves this. Or have things changed and it's now easy to build and run docker containers inside of a container?
(Aside from that, I'm not sure how I feel about Jenkins coordinating with a Kubernetes cluster. I've always found their monolithic approach to be a pain to work with, and always wished that, for example, I could just have Jenkins trigger jobs by pushing them onto an ActiveMQ queue or something and read back the results on another queue. Then I could just set up an autoscaling group of build servers, and provision them with whatever tools I'm already using to just start up and listen on this queue. Instead, jenkins wants me to duplicate a lot of this work I already have CM tools doing, and set it up manually through the UI, using community plugins that are often out of date).
I defer the question about container building in Kubernetes to somebody from the Jenkins X team, but I wanted to respond to your side note.
Offloading the build queue outside Jenkins to another service, auto-scaling of build servers, and configuring Jenkins with your configuration management tools are all things we are thinking about / looking into / actively working on. Some of them haven't gotten to the point of proper write-up yet, but see
Jenkins X runs in containers on kubernetes and all the builds are done in containers. You can use whatever pod template (a collection of docker images) you want in your CI/CD pipeline:
http://jenkins-x.io/architecture/pod-templates/
Yes we can support things like parallel steps & tests spinning up separate clusters, namespaces or environments (we do this ourselves to test Jenkins X).
We delegate to an OSS tool called Skaffold to actually build docker containers that gives us the flexibility to use different approaches for docker image creation (e.g. kaniko or Google Container Builder or use the local docker daemon etc)
https://github.com/GoogleContainerTools/skaffold
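For a flavour of what Skaffold drives, a minimal skaffold.yaml looks roughly like this (a sketch; image & chart names are placeholders, and field names have shifted between skaffold versions):

    apiVersion: skaffold/v1
    kind: Config
    build:
      artifacts:
        - image: myorg/myapp        # the container image to build
    deploy:
      helm:
        releases:
          - name: myapp
            chartPath: charts/myapp # deploy via the app's helm chart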
Using Kubernetes as an engine for orchestrating containers works very well - that's kinda what Kubernetes was designed for. Though you are free to extend & integrate tools like ActiveMQ into Kubernetes if you think it'll help your use cases.
* I use one namespace per env (staging, prod, etc), is this supported or must I go with the default (slightly wacky) staging and prod releases side-by-side in the same namespace?
* How are bugfix releases handled? If I pushed 1.2.0 to staging, and want to hotfix the prod release 1.1.0 with 1.1.1 (a common bugfix flow), can I promote releases from the hotfix branch?
* Is there a permission model? Does it bottom out to GitHub permissions for each env repository? E.g. can I have a smaller set of users approved to promote releases to production?
Each environment is in a separate namespace. You can add/edit/delete the Environments to use whatever namespaces you wish to use.
Promotion is either automatic or manual. By default Staging is automatic and production is manual. You can manually promote any version to any environment whenever you wish: http://jenkins-x.io/developing/promote/
For promotions we're delegating to the git repository for RBAC; so you can set up whatever roles you want for who is allowed to approve promotions & whether you need code reviews etc
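e.g. for the hotfix question above, a manual promotion is a one-liner (a sketch; the app name & version are placeholders):

    # promote a specific version of an app to a specific environment
    jx promote myapp --version 1.1.1 --env production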
Automation is never easy. If you zoom your focus out, watching the innovation in the CI/CD space is fascinating. Not to sound like an old fart, but doing this all by hand back in the day sucked. Jenkins has made a lot of this easier for us. Curious to try GitLab and Jenkins X on new projects.
I can't find this in the docs, but what happens when a promotion fails? Do things get rolled back to the previous known state? The reason I ask is because I'm trying to replicate similar functionality in our much simpler environment.
Interesting announcement. I'd like it better if there were a clear comparison between the current possibilities offered by Jenkins 2.0 and this version of Jenkins.
I'm not a huge fan of the demo video, since it doesn't really address what I can only imagine is a very common use case: I already have a Jenkins 2.0 instance with Jenkinsfiles; how easy would it be to migrate to Jenkins X? Is it isofunctional with added capabilities? How much will I lose?
Bootstrapping a java spring app from scratch is fun, but I suspect most people have an already existing codebase with already existing CI/CD tools.
I'm leery of projects whose goal is make bootstrapping easier. Bootstrapping projects has generally gotten easier and easier and was never a real bottleneck. Projects spend 99% of their lifetime in development and maintenance so those are the parts that need the most help.
OpenShift is Red Hat's supported fork & distribution of Kubernetes - so it's another platform we can install and use Jenkins X on.
OpenShift also includes some Jenkins support; e.g. you can add BuildConfig resources via a YAML file in the OpenShift CLI, which will create a Jenkins server and a pipeline. Jenkins X isn't yet integrated into OpenShift, but it's easy to add yourself for now :)
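Roughly, the OpenShift side of that looks like this (a sketch from memory; the names are placeholders):

    apiVersion: build.openshift.io/v1
    kind: BuildConfig
    metadata:
      name: myapp-pipeline
    spec:
      strategy:
        type: JenkinsPipeline          # OpenShift provisions a Jenkins for these
        jenkinsPipelineStrategy:
          jenkinsfilePath: Jenkinsfile # pipeline definition taken from the repo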
If you are pondering which kubernetes cluster to try for developing Spring services: OpenShift is a good option if you are on premise. If you can use the public cloud then GKE on Google is super easy to use; AKS on Azure is getting there & EKS is looking like it will be good if you use AWS.
On the public clouds the managed kubernetes services are looking effectively free; you just pay for your compute + storage etc. So it's hard to argue with free + managed + easy-to-use kubernetes - if you are allowed to use the public cloud!
When I started using Travis I immediately became a fan of it. Not having worked with Jenkins before, when I first tried it (pre-Blue Ocean), I was shocked by the unnecessary complexity. Since then, I've settled on a self-hosted drone.io for private projects; it offers a very similar experience to Travis, and I don't feel like I'm lacking anything compared to Jenkins.
If I understand correctly, this creates an environment for each PR. How does it accomplish that exactly? It would require all Kubernetes manifests for resources somewhere in the repo? What if the environment has some stateful dependencies, etc?
Jenkins X creates a Preview Environment per Pull Request, yeah; it can be as much or as little as you want it to be. E.g. it could be just 1 pod only, or it could be a suite of related apps (you may want to test multiple things together).
You can define what a Preview Environment is in the source code of your application - it's just a different Helm chart really. You can of course opt out of Preview Environments completely if you wish. http://jenkins-x.io/about/features/#preview-environments
Though I've personally found them to be super useful - especially if you are working on web consoles - it lets you try out changes visually as part of the Pull Request review process before you merge to master.
e.g. so you could deploy just your front end in a Preview Environment but link it to all the back end services running in the Staging or Production environment. Each team can configure their Preview environment helm chart however they wish really.
Using separate namespaces in kubernetes is a great way to keep software isolated and avoid apps interfering with each other; but at the same time it's really handy to be able to link services between namespaces too.
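Concretely, the convention is just an extra chart in your repo (a sketch; only the charts/preview location is Jenkins X specific, the rest is standard Helm layout):

    myapp/
      charts/
        myapp/                # the chart released to Staging & Production
        preview/              # the chart used for per-PR Preview Environments
          Chart.yaml
          requirements.yaml   # add extra dependencies (e.g. a database) here
          values.yaml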
Btw I’m the author of the above blog post & committer on Jenkins X.
So our focus is currently anyone looking to automate CI/CD on kubernetes, the cloud or any modern platform like OpenShift, Mesos or CloudFoundry which all come with kubernetes support baked in.
You can use just the CI part and do CI & releasing of non-cloud native apps if you want - we use Jenkins X to release jars, plugins & docker images using it - but doing so does miss out all the benefits of automated Continuous Delivery & GitOps promotion across environments
For a long time it wasn't, but it looks like GitHub are slowly rolling this out to people.
Given that Jenkins is pretty popular, you'd think that they'd be able to sort something out with GitHub to get bumped up the list for something along these lines.
There's always the Cloudflare option, but I've never felt that this was an ideal solution when HTTPS should be extremely straightforward for GitHub to set up on their pages.
Please don't do this. It gives the user the illusion that their connection is secure, but the connection between Cloudflare and the site is not secure. Arguably it's better to encrypt some of the route rather than none of it, but also giving people a false sense of security comes with its own drawbacks.
You should set it to "Full" instead. That will use TLS but won't verify the domain name in the certificate like it does in "Strict" mode so you can still use Github pages.
Huh, they're using GitHub Pages. I thought if you set up a redirect - a CNAME record pointing to whatever.github.io - that would soothe the SSL complaints in the browser.
Looking at the DNS records, it looks like they didn't do this, and instead set up an A record.
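Something like this is the difference I mean (a sketch; example.com stands in for their domain, and the A record IP is one of GitHub Pages' published addresses):

    ; a CNAME hands the hostname over to GitHub Pages
    www.example.com.  CNAME  whatever.github.io.

    ; what they appear to have instead: an apex A record
    example.com.      A      185.199.108.153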
It's only considered a dupe if the story/topic/product release has had significant attention (front page presence and discussion) in the past 12 months. None of these posts got much attention.
See dang's comments on the issue for the official position:
No worries! By the way, it's frowned upon if one person posts the same link repeatedly without getting any traction - eventually it just gets annoying, especially for people who watch the 'new' feed and keep seeing the same thing coming up. But the above posts seem to be submitted by different people, so it's all good.