I've been working with it for a few months, and I'm not positive about it.
The UI is pretty but often breaks. It is unusable when you have a pipeline with dozens of concurrent jobs.
The concept of teams is fine, but switching between them is a pain, even with OAuth authentication.
Job concurrency control is binary (see the sketch below).
We have never been able to scale the worker pool down without a hiccup; there's always some darn worker hogging containers that the ATC never removes, leading to stalled pipelines.
That overlay network it comes with, garden thingy, creates so many problems and solves just one...
Oh, and not having BUILD_ vars available in tasks is rude, thank you very much, but there are cases where they are simply mandatory and Concourse makes it impossible.
At least the new version has better secrets handling; previously it was a joke.
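To illustrate the concurrency point: the only per-job knobs are effectively on/off flags. A minimal sketch of what I mean (job and file names hypothetical):

```yaml
jobs:
- name: deploy-staging
  serial: true                  # at most one build of this job runs at a time
  serial_groups: [staging-env]  # jobs sharing this group never run concurrently
  plan:
  - get: repo
    trigger: true
  - task: deploy
    file: repo/ci/deploy.yml
```

Beyond flags like these there's no priority or queueing control, which is why it feels binary.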
My experience with the browser UI open is that initially everything renders, then it starts taking ages to update state (the pretty pulsing frames stop showing, or render badly). Quite often switching to a different pipeline renders empty space.
Also, when you have 100+ resources and jobs in one pipeline, laid out vertically, they are impossible to identify.
> That overlay network it comes with, garden thingy, creates so many problems and solves just one...
Can you expand upon this a little bit? Garden is the containerisation piece, what about it creates problems and what single problem do you think it is solving?
Not sure when it came about, but when I first evaluated Concourse, not being able to trigger jobs manually was a primary blocker. Glad this showed up on HN again, because they've added it at some point. My favourite thing about Concourse was really its ideas around "Resources". This always felt so much better than the usual idea of plugins because of how unified the experience was. Also, implementing resources yourself is extremely easy. So if I had some internal software running, building support for it meant defining three bash scripts.
That said, the docs around implementing resources still need some improvement. Whatever I learnt was from cloning and modifying existing resources.
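To illustrate how little is involved, here's a rough sketch of wiring a custom resource into a pipeline (image name and source fields are hypothetical). The resource image only has to provide three executables - /opt/resource/check, /opt/resource/in and /opt/resource/out - which can be plain bash scripts:

```yaml
resource_types:
- name: internal-thing
  type: docker-image              # the resource type is itself pulled as an image
  source:
    repository: registry.example.com/internal-thing-resource  # hypothetical image
    tag: latest

resources:
- name: my-thing
  type: internal-thing
  source:                         # handed to check/in/out as JSON on stdin
    endpoint: https://thing.example.com
    api_key: ((thing-api-key))

jobs:
- name: react-to-thing
  plan:
  - get: my-thing
    trigger: true                 # runs whenever check reports a new version
```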
This is also close to my experience: improving, but not there yet. Unfortunately, retriggering runs the same job with the updated versions of the resources. With other CI tools, it was possible to replay a job with whatever inputs it had at the time it originally ran; with Concourse that does not seem to be possible.
A practical impact of this is that we cannot simply re-run a previous deploy step with the old artifacts and inputs in the middle of an outage.
You can achieve this by clicking on a resource and toggling the "power button" to off for each of the resources you don't want to include in the build.
While this may even work, it is not practical, and it is a disappointing experience for a pipeline-centric CI such as Concourse.
In other tools, hitting re-run would simply replay the job with the same state it had in the first run. I would expect Concourse to behave similarly, but no.
I guess I'm not clear on why you would want to do this so frequently that the toggle is impractical, though it's likely that I simply haven't come across this use case.
I know everyone hates on Jenkins these days, but (at least for the workloads I am building) Concourse feels like a toy.
My issues:
- everything has to be in a docker container (not all things can run in garden)
- Just trying to find logs is a pain, and getting the UI to show a full log output is basically impossible
- each install of concourse may need an entirely different version of fly
- There was no way of retriggering a CI run on a single PR (without force pushing to the branch, which removes Github reviews)
Jenkins + Jenkinsfiles + Pipelines provides all of the good bits of Concourse, with none of the Pivotal-enforced workflow + tools.
I get why CloudFoundry uses Concourse - the build process is so arcane that you basically need to run the full CI locally, but I really do not get the hype for other projects.
Jenkins has some serious problems when you want to work with CD.
It's currently impossible* to use Jenkins in a CD environment where you want to deploy by tagging your git project[1] and allow rollbacks, because Jenkins doesn't track tags' hashes (only the commits the tags point to).
Unfortunately, hacking in such a fix is not very pleasant either, because Java, and I don't think the Jenkins developers are interested in it either, at least not if just a few people ask for it.
It may look like a silly use case, but I believe it's one that fits well when you have a dynamic server farm (hosting on AWS with autoscaling, for example) and don't want to auto-deploy on every master commit.
> Jenkins has some serious problems when you want to work with CD.
When you want to work with CD in a particular way.
A simple fix to the above issue would be to have a "production" branch. When you want to do a release, you merge to the production branch. A rollback would just be a revert of the commits since the last merge to production, and could even be tagged.
If you are doing CD by allowing devs to tag random branches, cool, but I would not call it a "failure" on the Jenkins side.
> Unfortunately, hacking in such a fix is not very pleasant either, because Java, and I don't think the Jenkins developers are interested in it either, at least not if just a few people ask for it.
It's a webhook - that can be in any language, or if you use Git(hub|lab), a config item.
>> Jenkins has some serious problems when you want to work with CD.
> When you want to work with CD in a particular way.
> A simple fix to the above issue would be to have a "production" branch. When you want to do a release, you merge to the production branch. A rollback would just be a revert of the commits since the last merge to production, and could even be tagged.
> If you are doing CD by allowing devs to tag random branches, cool, but I would not call it a "failure" on the Jenkins side.
Well, I must confess I agree with you, and my first idea was this; I will probably end up pushing for that anyway, but I will have to convince some devs that they should not be afraid of git reverts. When I first mentioned the idea, some devs with deploy rights said they were skeptical because they were afraid of a git-revert shitstorm in an SOS situation.
>> Unfortunately, hacking in such a fix is not very pleasant either, because Java, and I don't think the Jenkins developers are interested in it either, at least not if just a few people ask for it.
> It's a webhook - that can be in any language, or if you use Git(hub|lab), a config item.
Not exactly; in this case GitHub only lets me create webhooks with the 'create' action, which is triggered "Any time a Branch or Tag is created", so new branches would trigger deploys as well.
> When you want to work with CD in a particular way.
I think that I, as the user of the project, should get to decide how I want to deploy my software. So, yes, I do want to CD things in a particular way: one that matches my developer's expectations and my business requirements.
> CD
Jenkins isn't a deployment tool. It can act like one, but this (rollback) is a classic example of how it's a build tool, not a release tool.
To address the OP's complaint: we rebuild steps from the previous pipeline to perform a rollback, and if the previous pipeline has been pushed out of the list of recent pipeline runs, we fall back on a separate system that holds all our deployment logic and works with commit IDs.
> I think that I, as the user of the project, should get to decide how I want to deploy my software. So, yes, I do want to CD things in a particular way: one that matches my developer's expectations and my business requirements.
Sure - it definitely is. But we all have to do that within the limitations of the tools we use. Jenkins makes an assumption (in the default mode) of a continuously moving HEAD - which may or may not be what you want. It doesn't make Jenkins bad - it makes it not suitable for your use case.
FWIW - adapting to using a moving HEAD is not that hard - `git reset <old-tag> && git commit -a -m "Revert deploy <failed-tag>" && git tag <failed-tag+1> && git push` - but I totally understand why people would not like to use that style. It is a personal / cultural choice.
> I don't think the Jenkins developers are interested in it either
I keep running into this: we want to extend or change Jenkins behaviour, and since it's open-source, it's possible to get up to speed on the codebase & make that change. But getting it into a state which matches the mindset of the project maintainers is tough! Trying to convince someone over ticket comments that your use-case is not only permissible but actually a good idea is not what we want to spend our time on.
Without fully knowing the context, I wonder if this is just a simple disconnect.
In Jenkins, we collectively cover wide-ranging use cases not by making one existing plugin (say Git plugin) do everything, but by making it extensible so that other people can define additional semantics and behaviours as separate plugins on top of it.
We learned this in the early days of Subversion plugin, back when that was the most popular version control system. Everyone uses a generic tool like that very differently, so as we kept adding individually valid use cases to the Subversion plugin, it became this giant hair ball not just for devs but also for users.
This mode also works better for those who do not want to spend time explaining why their use case is a good/important one, as they can simply code up their idea as a separate plugin and move on.
With all that said, I'm sorry for the frustration you had. Do you still have some pointers to tickets/PRs, etc? I'd like to look into it.
Funnily enough, I'm specifically thinking of the Git plugin and issues like https://issues.jenkins-ci.org/browse/JENKINS-6124 & https://issues.jenkins-ci.org/browse/JENKINS-14572, and Nicholas de Loof's comments 'ur doin it wrong'. It's extremely difficult to extend this plugin; we have a lot of work tracking all the existing plugins we use, and adding maintenance of our own plugin on top of that is work we don't want to do.
Nowadays we try to keep as little as possible defined in jenkins configs and put our build & test logic in makefiles/tools.
Yeah - it is a "problem"[0] with open source - you need to deal with people that may have completely different objectives to you.
It is easier with a closed source / paid service to ask for feature requests. Remember that the maintainers probably get hundreds of similar requests a week, and have merged things in the past that they didn't like and that later caused issues.
This causes confirmation bias for them - "The last time we allowed code not aligned to coding styles / test requirements / feature direction / $reason it all went badly" - so it is easier to ask for things to align to their world view before merging.
Being a maintainer is hard - you are damned if you do, and damned if you don't. People give out about not merging code, but they also give out if the code that does get merged breaks things. Finding a balance is hard.
0 - I don't personally think it is a problem, but it potentially requires a bigger time investment, and can be infuriating when you have a small change that really scratches an itch for you.
> - There was no way of retriggering a CI run on a single PR (without force pushing to the branch, which removes Github reviews)
A CI that runs one job (test) for lots of branches is different from a multi-pipeline CI that runs lots of jobs for a single branch (which is what Concourse aims to be).
Sure, but if a job fails for a transient reason, there should be a way to retrigger it - that is just a basic feature of nearly all CIs, no matter how many jobs there are.
Of course, and you can - you can manually hit the + button to retrigger a build, and if you want specific inputs you can select the exact one you want in the resources tab (which is slightly less common, since you probably haven't been committing since your build broke, but it's totally doable).
For a lightweight pipeline based CI tool (with no docker requirements) you may like buildkite. I use it for almost everything I used to do in jenkins, although I still keep jenkins around as a glorified cron job / easy generic task runner.
(not associated with buildkite, just a very happy customer)
buildkite can be run entirely on your own infrastructure (for the purposes of source control and builds); it's an interesting setup:
- the ui is SaaS, but it doesn't care about your repos or build agents
- the build agent runs on your machines (they communicate outwards only with the SaaS product to decide if they should start building)
- all pipeline config is kept inside your repo
- you can hook up any source control to connect to buildkite's webhooks; they just happen to have an integration with the common SaaS source control tools (github etc)
A nice extra is that running your own build agents lets you keep costs down. If you are on AWS they have a "one-click" setup for a cloud formation template that gives you an auto-scaling build environment
Booting VMs on your worker seems like exactly the kind of debris Concourse encourages you to avoid by enforcing the containerisation of your builds. That said, I can't think of a reason why you couldn't boot a VM in a privileged Garden container, have you tried?
Yeah - I can't remember the actual failure, but it did not work out well.
We also need the VMs to survive across build steps.
We are working on a P/IaaS - so integration testing requires actually booting a full stack, then running an install, then testing that the resultant servers are actually running Kubernetes
Ah that's a shame, I still suspect it would _probably_ work with the right tweaking but I appreciate there's a hurdle there that might be frustrating to overcome and that's no good.
We also test a PaaS and require VM spin up but we call out to GCP/AWS as necessary.
> We also need the VMs to survive across build steps.
You might represent them as a pool resource, so they can be safely handed from job to job. A lot of teams do this for stuff where they want to share an expensive, long-lived, stateful resource.
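Roughly, with the pool resource that looks like this (repo, pool and task names hypothetical) - the pool is backed by a git repo of lock files, and the claimed lock travels between jobs via a passed constraint:

```yaml
resources:
- name: vm-pool
  type: pool
  source:
    uri: git@github.com:example/ci-locks.git  # hypothetical repo of lock files
    branch: master
    pool: integration-vms
    private_key: ((pool-repo-key))

jobs:
- name: provision-and-test
  plan:
  - put: vm-pool                    # claim a free lock; blocks until one is available
    params: {acquire: true}
  - task: run-integration-tests
    file: ci/tasks/integration.yml  # hypothetical; the lock file names the claimed VM

- name: teardown
  plan:
  - get: vm-pool
    passed: [provision-and-test]    # receive exactly the lock claimed upstream
    trigger: true
  - put: vm-pool                    # hand the VM back to the pool
    params: {release: vm-pool}
```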
The VMs are per job - once the CI run tests that the orchestration has created the right system config on the VMs, and that basic integration tests work, we destroy the VMs.
I have worked in two companies now with Cloud Foundry distros, and both of them have started using Concourse, and then moved to Jenkins (for non Cloud Foundry projects - I shudder to think what would happen to a distro that attempted to not use Concourse)
I'm still lost. You said at the top that you need the VMs to survive across build steps (I read that as jobs), but now they're per-job?
The Concourse->Jenkins thing will definitely happen without better docs, examples and widespread understanding. Concourse is non-obvious to newbies and the error messages need a lot of love. Or any love.
I don't quite understand the parenthetical, could you elaborate?
I second what wlamartin said. We have pipelines that do integration tests that spin up 30 instance clusters in AWS. Trying to spin up something similar inside a concourse worker just isn't worth trying.
Sure - I can use fly... if I go to the page, find the link to download the right version of fly, then remember the arcane incantation to log in, find a job, and find the logs.
Or - I can go to my Jenkins build page, and click "Show full log".
For me, initial quick searching through logs is much faster in a browser.
And I have found that on longer builds the web UI's log streaming can actually crash the browser.
Mostly off-topic, but I've been looking for more than a CI for quite some time - more on the CD side. How are you guys handling some of these cases? Bonus points for hosted options.
- Truly pipeline-based stages. No messing around with git branches/tags for each environment release.
- Ability to combine multiple builds together.
- Build dependencies. Ability to trigger project-B build when project-A build succeeds. (Docker Cloud Builds have something like this.)
- Use the same artifact across multiple build stages.
- An option to promote a build to the next step either manually or automatically.
Amazon's internal Apollo system was amazing. AWS CodePipeline kind of does some of these things, but it's very limited and hard to work with.
> - Truly pipeline-based stages. No messing around with git branches/tags for each environment release.
That's how we use it. We have two or more stages for each environment (repo upload + installation, often tests) that are simply chained.
> - Ability to combine multiple builds together.
You can do fan-in and fan-out between pipelines (you can combine multiple pipelines in a DAG).
> - Build dependencies. Ability to trigger project-B build when project-A build succeeds. (Docker Cloud Builds have something like this.)
As above, with a graph of pipelines. We use this to trigger rebuilds of some projects when libraries change.
> - Use the same artifact across multiple build stages.
You can reference and retrieve artifacts from all previous stages, even stages from upstream pipelines. The syntax for referencing takes some time to get used to.
> - An option to promote a build to the next step either manually or automatically.
Automatic is the default, but you can have manual approval steps, optionally with permissions attached to them.
It's a hybrid hosted model - you supply your own workers (they provide a CFN template to make that part easy); everything else (e.g. web UI/API) is hosted.
GitLab CI has multi-project pipelines, triggers, artifacts, and manually approved next steps. But I'm not sure what your first requirement means; GitLab CI always composes pipelines from individual projects, so it probably doesn't meet your criteria. I'm interested to hear what you ran into with previous systems that made this your no. 1 criterion.
> Build dependencies. Ability to trigger project-B build when project-A build succeeds
Within a pipeline, you can 'pass' an output of one step as a triggering input to another. Across pipelines, you can use e.g. an S3 resource to save state; the job to be triggered looks at that S3 resource and triggers when it changes.
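For the cross-pipeline case, a sketch (bucket, regexp and file names are hypothetical): project-a's pipeline puts its artifact to S3, and project-b triggers on new versions of it:

```yaml
resources:
- name: project-a-release
  type: s3
  source:
    bucket: example-builds               # hypothetical bucket
    regexp: project-a/release-(.*).tgz   # version is extracted from the filename
    access_key_id: ((aws-key))
    secret_access_key: ((aws-secret))

jobs:
- name: build-project-b
  plan:
  - get: project-b                       # project-b's own git resource, defined elsewhere
  - get: project-a-release
    trigger: true                        # fires whenever a new artifact appears
  - task: build
    file: project-b/ci/build.yml         # hypothetical task definition
```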
> Use the same artifact across multiple build stages.
Yep - you can pass the output of one job to be used as the input of another.
> Ability to combine multiple builds together.
Yep - have a look for "fan-in" in the docs.
> An option to promote a build to the next step either manually or automatically.
You can manually trigger steps from the UI; you could also have a resource that triggers a job automatically on some condition.
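Putting those answers together, a minimal sketch (resource and task file names hypothetical) with automatic promotion to staging and manual promotion to production:

```yaml
jobs:
- name: unit
  plan:
  - get: repo
    trigger: true
  - task: test
    file: repo/ci/unit.yml

- name: deploy-staging
  plan:
  - get: repo
    passed: [unit]              # only versions that made it through unit tests
    trigger: true               # automatic promotion
  - task: deploy
    file: repo/ci/deploy-staging.yml

- name: deploy-production
  plan:
  - get: repo
    passed: [deploy-staging]
    trigger: false              # manual promotion: hit + in the UI or fly trigger-job
  - task: deploy
    file: repo/ci/deploy-production.yml
```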
Mozilla's TaskCluster is definitely aimed at CI use cases, but does handle all the requirements you listed. It is entirely open-source, but unfortunately they aren't currently interested in supporting other organizations running it.
I used Octopus Deploy for .Net, it was amazing. It did releases as first-class objects (a collection of versioned packages pulled from a repository), per-environment configs, parallel execution of steps, rolling windows (for bouncing a few hosts at a time), pre- & post- activate scripts, and great IIS integration.
There was talk of Python + Linux support a few years back, a shame that didn't take off.
You can do Linux today actually - you can connect to machines over SSH, run bash scripts, etc. Today there's a Mono dependency, but in the next couple of weeks we'll be rolling out a beta that removes that dependency.
Check out CDDirector[0]. I've been using it to model release pipelines and enjoying it quite a bit. Disclaimer, I work for CA Technologies though not on this product.
Avoid! Concourse looks good on the surface but it's really not that great.
The UI is clunky.
The abstraction layer is too low and leads to a lot of repeated YAML. Which leads to YAML programming.
There are simple scenarios, like deployment rollbacks, that are hard to do.
For some reason they decided to develop their own container engine, which leads to all sorts of trouble and maintenance issues. It's generally slow, and we had 100% CPU usage when the worker was doing almost nothing.
I used it for 4 months and it was nothing but problems. Gitlab CI is much better. Or even Jenkins is better.
> Concourse looks good on the surface but it's really not that great.
I've used teamcity, jenkins, gocd, circleci, concourse, and travisci. For multi-project systems, concourse is king. (I like travisci for by-itself, non-system projects)
> The UI is clunky.
What? You just said the UI looks good... It's simple and clean; everything is async javascript (no page loads).
> The abstraction layer is too low and leads to a lot of repeated YAML. Which leads to YAML programming.
Which is an intentional choice. If you don't like YAML, use one of the MANY yaml abstraction layers of your choice...
> There are simple scenarios, like deployment rollbacks, that are hard to do.
First of all, a CI system shouldn't be your answer to rollbacks. Your deployment system should handle that. Secondly, assuming your deployment system can do rollbacks, concourse has on-fail jobs that can trigger rollbacks just fine.
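For instance, a step-level on_failure hook can hand off to whatever your deployment system's rollback is - a sketch, with hypothetical resource and task file names:

```yaml
jobs:
- name: deploy
  plan:
  - get: release
    trigger: true
  - task: push-to-production
    file: release/ci/deploy.yml        # hypothetical: calls out to the deployment system
    on_failure:
      task: rollback
      file: release/ci/rollback.yml    # hypothetical: asks the same system to roll back
```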
> For some reason they decided to develop their own container engine, which leads to all sorts of trouble and maintenance issues. It's generally slow, and we had 100% CPU usage when the worker was doing almost nothing.
garden is used because cloudfoundry builds it. It is not slow... it is a light layer on top of runc (as opposed to docker which is a rather heavy layer on top of runc). You should pretty much never have to care about it, and in 3 years of using concourse I haven't had to - and we have some pretty gnarly large pipelines.
Also I call rubbish on your 100% CPU. I have two t2.xlarge workers running 22 and 30 containers (like, right NOW) and neither is above 10% CPU (which, actually, means I should make those a lot smaller). Don't run workers on a potato and you'll be fine.
> Gitlab CI is much better. Or even Jenkins is better.
> What? You just said the UI looks good... It's simple and clean; everything is async javascript (no page loads).
I meant that when just looking at the website, Concourse looks good in terms of what it has to offer. For example it offers proper build pipelines, something that is lacking in most other CIs.
Maybe it's clean to you; I found that our developers were generally confused by how the releases were working.
> Which is an intentional choice. If you don't like YAML, use one of the MANY yaml abstraction layers of your choice...
I don't mind YAML too much, but adding a templating language on top is generally clunky. Gitlab CI also uses YAML and can also build pipelines, but it doesn't require generating the YAML.
> Also I call rubbish on your 100% CPU.
It might have been due to a misconfiguration (we were using c4.4xlarge). I bet that you have deployed Concourse using BOSH, the only truly tested deployment method.
> Also I call rubbish on your 100% CPU. I have two t2.xlarge workers running 22 and 30 containers (like, right NOW) and neither is above 10% CPU (which, actually, means I should make those a lot smaller). Don't run workers on a potato and you'll be fine.
I was on CF Buildpacks for a while, which is one of the earliest and more intensive users of Concourse.
Earlier versions of Concourse did stumble upon fairly hairy btrfs bugs that would enthusiastically choke CPUs to death if you had more than about 100 containers on a worker. This was particularly bad if, for any reason, you had a lot of jobs launching at once (I'm looking at you, 6-releases-simultaneously-NodeJS).
I can't remember if it was fixed by a kernel upgrade or whether they ditched btrfs. Either way it got fixed and I haven't seen it since.
The UI was originally designed with one end in mind: "Show me what is broken and take me there as fast as possible".
I and others have given lots of feedback about the strengths and limitations of that orientation, particularly trading off between a job-oriented and resource-version-oriented view of the world.
> There are simple scenarios, like deployment rollbacks, that are hard to do.
I've had this with the CF resource. The Concourse-y solution would be an undeploy resource.
> For some reason they decided to develop their own container engine, which leads to all sorts of trouble and maintenance issues.
The container engine is garden-runc; the actual containerising is the same code as Docker's. What I like a lot less is the container orchestration. I have complained about this in a ha-ha-only-serious manner.
We're using Concourse extensively at HelloFresh (>130 devs). It's not without its quirks, but I've little to complain about so far, except perhaps the polish of the UI.
Disclaimer: I work for Pivotal, on the RabbitMQ team. We push Concourse to its limits every day. We work closely with Jenkins, GoCD, Travis & Concourse. They all have their limitations.
All things will break horribly if the conditions are right. It's unreasonable to assume that the things which work in [insert your current CI] will work in Concourse. It's still a new and relatively immature product, but it works well in most cases.
Half the secret to a good Concourse experience is not upgrading it in-place - stand up fresh deployments. The other half is gradually transitioning between Concourse deployments, because bad versions have been and will continue to be released - mistakes are only human. As long as you share the Concourse vision and are willing to keep up with the pace of change - not everyone can or wants to - then it's an amazing CI.
Concourse still makes me excited, even after many years of hard lessons, because it is a genuinely innovative approach to building better software. Most miss this, and I understand why, but give it time - the ideas behind it will mature and become the norm.
Even though Concourse can work really well, it's not always the best choice. Make it better if you can & want to, use something else if it's easier. There is no right or wrong, just preferences : )
Concourse is difficult to come to from other CI tools. A little more aloof. There have been real, serious implementation difficulties.
But I kinda love it, because it's a handful of simple ideas that unlock incredible power. It goes beyond "build and test each commit" to becoming a full project automation tool, a software manufacturing robot. When I talk to people about Concourse, I tell them: your pipeline and your tasks are production code. Keep the discipline, care and engineering practices that you bring to the apps and services you create.
The problem is that most of the deep tribal knowledge about how to get started and how to best apply it is locked up inside a handful of organisations, most notably Pivotal. I had been working on a video series which was meant to walk through both the concrete business of building pipelines, as well as the concepts of how best to do so.
Unfortunately visa conditions got in the way and I have abandoned that effort. Interested persons are welcome to email me for links to the first 4 episodes that I made, but be aware that it stops even more suddenly than Firefly.
Kind of odd to see the homepage show Vagrant as the install mechanism, even though it supports Docker as well. In 2017, I'd think more developers are likely to run Docker than Vagrant workloads on their machine.
Garden makes use of runc (an Open Container Initiative project with a lot of contributions from Docker), in the same way containerd (component included in Docker) makes use of runc to run images. You won't find the docker engine being used by Garden.
Edit: I _think_ what you're likely seeing (if you are seeing docker in the top output) is Concourse using the docker-image-resource to pull images for your tasks to a local docker registry.
I still see the majority of developers that I've met using Vagrant. I gave my own team the option of both and as yet, I'm the only one using docker, so I think you'd be surprised. Docker seems to have gone out of favour in a lot of places.
I've been eyeing Concourse and Go.CD over Jenkins for a while.
The main criticism I saw of Jenkins and Go.CD vs. Concourse was that Jenkins pipelines aren't first-class and that it's easier to export configuration (in that regard Concourse > Go.CD > Jenkins). On the other hand, Jenkins and Go.CD support extensions, which Concourse touts as a feature.
I also want the CI builds to create my base boxes with packer in multiple steps. And I somewhat want to be able to hand the stuff over to ops at some point and have it just stay alive for the next 5 years or more. Would anyone know if it makes sense to even consider Concourse or Go.CD or some other CI/CD solution, and if so, which?
Obviously the boxes need to be used as artifacts and everything has be on premise as well.
How are Pipelines not first class in Jenkins? Pipelines are provided in the default installation and they are the first thing we talk about in the docs https://jenkins.io/doc/pipeline/tour/hello-world/
Admittedly I ditched jenkins before pipelines became a full feature, but my understanding of them in jenkins was that they are mostly for scheduling jobs.
I.e., run job A after job B.
Concourse passes inputs, outputs and resources between jobs as the ONLY STATE, and jobs trigger based off of changes on resources or the availability of new inputs.
I think that when you sit down and look at the two products, jenkins is great at running scripts, and concourse is great at managing code versions.
In the end we actually run both concourse + a script runner; this allows concourse to manage the tagging, builds, releases, and testing, but still allows us to run ad hoc scripts that concourse doesn't do well.
I'd be interested in taking another look at jenkins now that pipelines and the groovy DSL have solidified, but I get the idea that they still fill slightly different needs.
I have to agree with the poster. Pipelines are very definitely not first-class objects:
- There is no way (in the default installation) of re-running a stage without running the whole job again.
- I can't see the history of a stage.
- Pipelines are a plugin, that only relatively recently became a default one
- They seem to be the anointed new path, but live side-by-side with other branching methods like the Matrix plugin at this point.
This is all in-line with the "everything is a plugin" methodology, but in my mind that way of thinking is one of the biggest hindrances to Jenkins, second only to the lack of a real database powering it.
I've worked with GoCD in production a bit. It's a bit of a beast to keep running, the UI is very strange, but the pipelines are extremely powerful and make for good separation of stages. I found it awkward to configure jobs, though, and the documentation is not excellent. There's also a much smaller plugin ecosystem than with Jenkins. We ended up having a guy spend a good lot of his time, nearing 50% some weeks, just keeping GoCD happy for a couple hundred devs.
Overall, I would choose Jenkins first unless you know there's something GoCD can handle significantly better.
> I also want the CI builds to create my base boxes with packer in multiple steps.
This is certainly possible using concourse; I've done it myself on a few teams. We had one job that started with a base ISO and used the virtualbox-iso builder to apply updates and build an OVA. Then a second job would trigger whenever a new OVA was built and use the virtualbox-ovf builder to apply a different set of provisioners. Since the virtualbox-ovf builder uses an OVA/OVF file as both its input and output, you can do that as many times as you'd like.
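A sketch of how that chaining looks in a pipeline (buckets, resource and task file names are hypothetical), with the OVA parked in S3 between jobs:

```yaml
resources:
- name: packer-config
  type: git
  source: {uri: https://github.com/example/packer-templates.git}   # hypothetical repo
- name: base-ova
  type: s3
  source: {bucket: example-images, regexp: base-(.*).ova}          # hypothetical, plus creds
- name: provisioned-ova
  type: s3
  source: {bucket: example-images, regexp: provisioned-(.*).ova}

jobs:
- name: build-base
  plan:
  - get: packer-config
    trigger: true
  - task: virtualbox-iso                 # base ISO in, updated OVA out
    file: packer-config/ci/iso.yml
  - put: base-ova
    params: {file: output/base-*.ova}

- name: provision
  plan:
  - get: packer-config
  - get: base-ova
    trigger: true                        # runs whenever a new base OVA lands
  - task: virtualbox-ovf                 # OVA in, OVA out, so this stage can repeat
    file: packer-config/ci/ovf.yml
  - put: provisioned-ova
    params: {file: output/provisioned-*.ova}
```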
The really important things for me are that Concourse has no text boxes to edit in your browser, so it is possible to version and automate the configuration of your automation, and that all Concourse resources (the plugin equivalents) are zero-dependency - so you can have multiple versions in one pipeline, and no accidentally breaking everything to get one new feature.
My number one complaint with Concourse (which I suspect is due to Go) is that you need to have it hosted with a valid TLS/SSL cert in order to use the fly command. At least this was an issue in the 2.6.0 days, but I couldn't see anything to change this in the recent versions.
This is rather annoying if you want to run a copy on your local network, say at home. It's very frustrating, because the fly command solves the biggest issue with IWOMM (it works on my machine) by allowing you to run code and tests on another machine before committing anything.
From memory, I think I tried using self-signed certs, and this also had issues for one reason or another.
That said it is still the best CI system I have used to date.
I issue valid TLS certs for my internal servers using letsencrypt DNS challenge (there is a nice cloudflare hook for dehydrated that I use). Runs on cron, haven't had to worry about it once I set it up. (Haven't tried with concourse, but don't think that would be a problem)
> My number one complaint with Concourse (which I suspect is due to Go) is that you need to have it hosted with a valid TLS/SSL cert in order to use the fly command.
Nit: Not familiar with Concourse, but this is a design decision. The Go stdlib lets you bypass SSL/TLS verification.
A little late to this discussion, but although I'm a big fan of many of the concepts in Concourse, I think it's lacking a lot of polish.
For example, I have actually implemented patterns within Jenkins to force all jobs to run inside of containers. A job is just a container + command with some linking using the jenkins pipeline plugin that reads some json configuration to determine how jobs are linked.
The primary issues I have surround the fact that my company uses kubernetes, thus we have no insight into the runc containers. Load balancing in concourse is non-existent: if a worker goes down due to load and you bring it back up, it's going to go down again immediately from all the jobs that were triggered while it was offline.

Not only that, the resource requirements seem pretty high. Recently a concourse worker stalled because the volumes/images it was caching were over 100 gigs, and not knowing the internals, I wasn't sure of the best way to clear this cache. Having to tell the infrastructure team that we just need to spend more money is a hard sell when we've upped the CPU, memory, storage, and postgres disk, all more than once. I understand that different images have vastly different sizes, and jobs different amounts of work, but there need to be some clear suggestions for sizing. If they exist now, I apologize, but I haven't seen them.
So yeah I've had some fun developing in it, but more help making it reliable would be really nice. Also if kubernetes is absolutely the wrong way to run it, which it seems like, I'd have to be provided a better/easier alternative to really become an advocate.
Final note: has anyone actually set up metrics/monitoring for concourse without knowing BOSH? The docs describing it seem huge unless you already have the infrastructure pieces: set up riemann (we already have statsd and no experience with riemann), emit to influxDB (we have prometheus, no experience with influxDB), then use Grafana (OK, we already have that). I just wanted a better idea of disk, CPU, memory, number of containers, and lifecycle without having to set up all these new pieces of infrastructure. And finally, I'm just not that interested in BOSH, which is what all of the example metrics repos are built on.
I took part in evaluating Concourse CI for the needs of my company (30+ devs). While it has amazing CI pipeline capabilities, we ultimately didn't select Concourse because it felt much more like a CI toolbox, requiring some development to put those tools to use. And what we really wanted was more of a turnkey CI product.
Perhaps ironically, we ended up doing some development around the edges of the CI product we ultimately selected (GitLab).
Cool to hear you selected GitLab. We recently added multi project pipeline visualization. That was partially inspired by concourse CI. Is there anything else you would like to see?
Interesting you should mention that. Right around the time you implemented multi-project pipeline visualization, we built a bot that listens for GitLab build completion events, hits the (undocumented/unsupported) internal GitLab global code search ElasticSearch index, finds downstream dependencies of the completed build (as declared in setup.py/package.json), and triggers their builds as well, taking full advantage of your nifty visualization.
So I think the feature we'd want to make that easier is an actual global code search API!
We do not have immediate plans to ship that yet, but I pinged our product team member there. However we do have plans to ship multi-project pipeline with an "inversion of control".
You will be able to specify a pipeline relation with an upstream project, and when someone pushes a commit to it, a downstream pipeline is going to be triggered automatically. See the issue about cross-project dependencies: https://gitlab.com/gitlab-org/gitlab-ee/issues/1681
Could you please allow for an arbitrary acyclic graph rather than the current stage system? This would allow us to have our long-running integration tests run in parallel with shorter tasks!
> While it has amazing CI pipeline capabilities, we ultimately didn't select Concourse because it felt much more like a CI toolbox, requiring some development to put those tools to use.
This is actually a brilliant insight. I'd never thought of it that way. And I'm a one-eyed raving Concourse fan.
Right. One of my favorite parts of Concourse is that all the config is declarative, in YAML that teams can (and do) check into a public Git repo (with the secrets kept elsewhere).
That lets teams share Concourse task and resource definitions. Many teams (including mine) publish our pipeline definitions too [0]. This enables "tooling" type teams build components that get a lot of re-use by other teams.
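For example, a task definition is a standalone YAML file that any team's pipeline can point at (path and image are hypothetical):

```yaml
# tasks/run-tests.yml in a shared ci repo (hypothetical path)
platform: linux

image_resource:                  # the task declares its own container image...
  type: docker-image
  source: {repository: golang, tag: "1.8"}

inputs:
- name: source-code              # ...and its inputs, so it's reusable as-is

run:
  path: sh
  args: [-exc, "cd source-code && go test ./..."]
```

A consuming pipeline just adds a step like `- {task: test, file: ci-repo/tasks/run-tests.yml}`.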
Some development is fine (and expected). And indeed the goal of "build automation" was what led us to evaluate both Concourse and GitLab. We were previously on Bamboo, which really did require wrangling with a moronic UI.
But I think there's a difference between a mostly complete CI product with API extension points, and a "batteries not included" CI toolbox.
Excellently designed system. Very poorly implemented.
Our team's been using it since its initial releases. It's been nothing short of disastrous for all but the smallest pipelines.
The design is great. Keeping configs in yaml instead of little white boxes in a Jenkins database is much better. Pipelines as a first class concept. It feels inspired by a functional programming language. You get great build reproducibility since there are no workers that get dirtier over time if you forget to clean up. The resource model is awesome. Very cool stuff. I'm hoping every CI system learns from what's here. Second to none in design.
However it performs like a dud. No scheduling to speak of, just runs everything as soon as it can. We've run into nodes dying under load (-not- underprovisioned, could run all these jobs manually at once on these monsters). We've run into problems with volume reaping, fork bombs, ui freezes, everything under the sun.
I really like Concourse and will hopefully one day be able to come back to it when its implementation is as solid as its paradigms are.
> However it performs like a dud. No scheduling to speak of, just runs everything as soon as it can. We've run into nodes dying under load (-not- underprovisioned, could run all these jobs manually at once on these monsters). We've run into problems with volume reaping, fork bombs, ui freezes, everything under the sun.
I've used concourse as a consumer for 3 years and I've very, very rarely seen any of the problems you're describing, even on the older versions and certainly not in the last year or so.
UI freezes are completely client side and related to the elm implementation and your browser's execution of the code. The size of your VM doesn't matter at all.
ATC is more of a dependency scheduler. The code [1] shows that it basically gets all pending jobs and then runs them. There's no concept of queueing or a maximum number of jobs; you just have to hope your limits are high enough (max containers, max tasks in systemd, max fds in the same) and that your machine doesn't fall over in the attempt.
The "massive" scheduling system also has no idea what nodes need work and which do not [2] so the idea is to heavily overprovision until it doesn't fall over (on top of already beefy requirements which others have alluded to in this thread).
You cannot serialize across multiple pipelines, only within a pipeline. If I have 10 pipelines, they will all run independently, and there's nothing you can do about it other than attempt serialization with the pool resource (which we've recently had problems with - it also appears to be buggy, and we're looking at submitting patches).
I've got a tremendous amount of experience with this system and I believe it's everything I made it out to be. The rebuttals you've provided to my issues are simply a lack of understanding of our context, not every user will have the same experience with any given product.
We are using concourse at work right now and have 50-ish pipelines for various repositories. IMO, there's definitely some work you have to put into it because you'll sooner or later run into a problem with the existing resources and need a custom one. Writing custom resources is pretty easy however.
Concourse also isn't really made to work well with Git Flow; there is no built-in way to run the CI on multiple branches (there's a git-multibranch community resource, which requires redis at some point). We're basically thinking about changing our workflow to trunk-based, but it still feels weird to me that we might change our workflow to fit our CI better (see the sketch below for what the stock per-branch setup looks like).
That being said, I personally still really like concourse and it's fun to work with.
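For the record, the stock workaround is one git resource (and one set of jobs) per branch, which is exactly the duplication that grates - a sketch with hypothetical names:

```yaml
resources:
- name: app-master
  type: git
  source: {uri: https://github.com/example/app.git, branch: master}
- name: app-develop
  type: git
  source: {uri: https://github.com/example/app.git, branch: develop}

jobs:
- name: test-master
  plan:
  - get: app-master
    trigger: true
  - task: test
    file: app-master/ci/test.yml    # hypothetical task definition
- name: test-develop
  plan:
  - get: app-develop
    trigger: true
  - task: test
    file: app-develop/ci/test.yml
```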
Did your evaluation happen a long time ago? Or, another question... did you run the binaries and appropriately screen them? I had the same problem until I realized I sucked at making a binary live a long time, and my eyes were opened to screen :)
We looked at Concourse deeply. While we didn't go with it, it has inspired a number of projects I know of, and I think it has pointed out the obvious: representing how you think about a problem is how you should represent it visually - a great insight that has been overlooked in CI.
I hadn't heard of this; it looks nice! Would I be far off if I said that Concourse is an open source clone of Travis CI and CircleCI? Any strong pluses of the SaaS offerings over Concourse that I should know about?
Curious to know why no one has actually mentioned Bamboo - definitely a great alternative to Jenkins, especially for those who are into the Atlassian infrastructure.
At least last I looked, Bamboo requires manual configuration of all builds. This doesn't scale to more than a handful of build plans. It also doesn't have much in the way of modern CI pipeline functionality.
I use Concourse every day and I don't know yet what BOSH is and I hope I'll never have to. The installation was pretty standard. I followed a tutorial written by Justin Ellingwood, a great technical writer working for DigitalOcean -
https://www.digitalocean.com/community/tutorials/how-to-inst....
Shameless plug: I've been maintaining three Concourse clusters and have created https://github.com/SHyx0rmZ/concourse-debian to package the binary for Debian, so I can get away with just upgrading to a new package whenever a new version releases. It's far from perfect yet, but I'm happy to receive some feedback and planning to move my package repository and pipeline to the public soonish.
They are pre-compiled Go binaries, and require nothing other than a modern kernel! If you want an example, see concourse's upstream docker config.
> Usage Note: Throughout most of its history in English myriad was used as a noun, as in a myriad of reasons. In the 1800s, it began to be used in poetry as an adjective, as in myriad dreams. Both usages in English are acceptable, as in Samuel Taylor Coleridge's "Myriad myriads of lives." This poetic, adjectival use became so well entrenched generally that many people came to consider it as the only correct use. In fact, however, both uses are acceptable today.
I have had similar experiences, though, where I was convinced that some word or phrase usage was just incorrect, and where it turned out that I had just not happened across it. Fortunately, a minute of research can today fix any such misconceptions!
thefreedictionary.com is only one of many dictionaries that use "myriad of" in their examples. Here are some more: http://www.yourdictionary.com/myriad
Speaking from an en_UK POV 'a myriad of' is how I'd normally use it; the poetic adjectival form is nice in poetry, but doesn't feel quite right to me in normal language.
Whether I'm correct to feel like this is left as an exercise to the reader.
When I lived in London I noticed lots of people got it wrong. If you think of it as a synonym for 'many' rather than 'lots' it's easier to slot into sentences correctly.
Interesting. How does it compare to other CI software like Jenkins? I'm always a bit skeptical about tools that look nice but turn out to be limited when put to real work.
Works well for us, docker/container native so every step in a pipeline is a persistent volume with the environment changing based on what image you need. Also easily builds/pushes your own images and lots of other built-in task runners to make pipelines. Has .yaml or ui config and they have an enterprise version which runs on-prem.