Trunk-Based Development (trunkbaseddevelopment.com)
199 points by solarengineer on Jan 29, 2017 | 205 comments



This workflow, like all others, is just a formalization of some reality. The reality is that many organizations and teams run on nothing but trunk - and it more-or-less works for them. Some teams work by building blocks separately and then joining them together; others prefer to hammer away on a problem all together. Both approaches work. What worries me are emotional claims that the other approach is fundamentally flawed, or that some git workflow is The One True Way.

I've seen feature branches get stuck in review for weeks. I've seen trunk-only developers not knowing how to merge a branch back to trunk. I've seen trunks brought into a state of disarray and never fixed again. I've seen svn-only devs get completely bogged down and confused by a git workflow.

Trunk-only is a reality. Not a reality some of us would want to live in - but one that nevertheless exists.


I would hesitate to prescribe a "one true way" as well, but the longer I develop the more convinced I am that having a branch that lives longer than a few days indicates a serious underlying problem (usually a lack of test coverage or of faith in said tests).


In 20 years I've never worked on a project that had a test suite strong enough to merit blind faith. Usually the weak points are

* realistic test input: either you carefully scrub actual production traffic and replay it (these require a ton of setup that doesn't age well), or you guess what you think users are doing and create dummy requests (and miss anything wacky you didn't know users do)

* load & perf: it's rare for a team to spin up a prod-sized cluster just for a test, and before AWS it was almost unheard of to go out and buy twice the hardware you need, even when you were lucky enough not to be deploying on your customer's boxes. A change that tanks your cache hit rate can ruin your whole day yet look fine at trivial sizes.

* visual design: Microsoft famously got bitten by moving away from manual testing that would catch dialogs made unusable by layout changes

We usually had best-effort integration tests, but they never caught everything, and we always had to make tactical decisions about when to release what based on risk.


* Lack of realistic test input is usually a problem caused by over-reliance on unit testing or lack of monitoring feedback.

* UX review and load & performance testing are usually things that we engage in regularly on trunk or the developer/code reviewer flags up ("this change has load implications/this change requires a UX review").

* "Microsoft famously got bitten by moving away from manual testing" - I'd never advocate eliminating manual testing entirely, just that it should be A) never repetitive, B) not a gatekeeper, C) always exploratory.

>We usually had best-effort integration tests, but they never caught everything,

Nothing ever catches everything, but IMHO continuous delivery, short-lived branches, good monitoring, regular exploratory testing, and load/perf testing on trunk are the best way to catch as much as possible.

I've started experimenting with integration tests that generate reports with screenshots and story steps as well, so that there's tighter UX feedback.


How do you support a 3 year LTS branch for enterprise customers without long lived branches?


IIUC your use case this is not the kind of branch that gets merged back, it's more of "another trunk" whose version is largely frozen and that may get backports and fixes.


I've done that. It signalled that the customer had a lack of faith in the stability of our releases.

In our case it was a valid concern (our team had a pretty awful track record).


Apparently you have never sold to a Fortune 500? Many require bug fixes only for your app for X years. They don't care about new versions. Nothing about faith, but about their priorities: using the latest and greatest is not one of them.


We were selling to Fortune 500s.

The upgrade paranoia only kicked in when they were subjected to broken releases and had to roll back.


The F500 is incredibly varied in their requirements, even within the same company. I've seen demand for new features and long-term, keep-everything-the-same-as-day-one from the same group for different services / products. Paying more money to keep things from changing, or for not adding features, is something unique to enterprise environments though.


Well, sometimes the lack of stability is purposeful - new features are added, old features are deprecated, etc. How do you deal with that?


What languages do not support the equivalent of #ifdef in the C world, such that you can have variations in actionable code without having permanently separated code branches?


OP: this works for Google and Facebook; if you don't use it, you're dumb.

I'd quite like to load the LTS version of GMail and Facebook. I'd also like to have some say in how these apps I use every day work. Unfortunately, I have no say. I'm not even the customer.


Branch rot is a known problem, and it hints at a larger issue lurking somewhere in the organization - usually either inability to ship or inability to keep things in order.


Trunk-only is a reality. Not a reality some of us would want to live in - but one that nevertheless exists.

My background in Smalltalk made me accustomed to a style where everyone would continuously merge everyone else's changes as they worked. This style makes you aware of what your team mates are doing. In fact, it facilitates communication pretty much when communication is most called for.

As always: context. The above approach is only going to work well when there is convenient, high-bandwidth communication between teammates. (Not only for your version control, but also for communication between people.)

I've seen trunk-only developers not knowing how to merge a branch back to trunk.

Well of course. If they never had constant practice at merging, they wouldn't be very good at it. Merging isn't trivial.

I've seen trunks brought into a state of disarray and never fixed again.

Can't they roll back? This strikes me as a sign of badly designed process, or developer incompetence. This should be seen as something like breaking the build. (In a CI environment, it would be breaking the build, yes?)


How does a team of Smalltalk developers work? How do you merge the binary Smalltalk program image?


Most places I worked "built" by loading the release configuration, then saving off the image. That's often all there is to it. At one job, some pre-population scripts were also run for menus and drop downs.


With time, Smalltalk also got its own version control tooling.

Monticello for example,

http://www.wiresong.ca/monticello/


You could consider Monticello as one of the precursors of git.


If a check in breaks the build or breaks production on deployment, it should be rolled back within minutes. Trunk and production must always be healthy, and if you commit poorly tested code, you are doing nobody any favors.


It was a badly run organization - and, as an extension, bad management, bad developers, bad process, bad requirements and bad practices. It wasn't trunk-only that made the project grind to a screeching halt - it was people.


This is basically how the team I work on does it. We have wrapper scripts around git. Committing automatically merges from master first.
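A minimal sketch of what such a wrapper could look like in Python. The commenter's actual scripts aren't shown, so the exact command sequence here is an assumption:

```python
import subprocess
import sys

def run(*args):
    """Run a git command, aborting the wrapper if it fails."""
    result = subprocess.run(["git", *args])
    if result.returncode != 0:
        sys.exit(result.returncode)

def commit(message, trunk="master"):
    # Merge the latest trunk in first, so every commit made through
    # the wrapper is already integrated with master.
    run("merge", trunk)
    run("commit", "-am", message)
```

A real wrapper would presumably also fetch from the remote and handle merge conflicts; this just shows the "integrate before committing" idea.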


In other words, there is no one size fits all - there isn't even a size that fits two. All those workflows depend on how the organization itself is managed.


https://trunkbaseddevelopment.com/youre-doing-it-wrong/#mere...

Subversion having that trunk/branches/tags layout made it easy for people to respond 'yes, we're doing trunk based development' when they're not.


"The core requirement of Continuous Integration is that all team members commit to trunk at least once every 24 hours."

Continuous integration is a means to an end, not an end in itself.


Indeed. There's a wide spectrum of software complexity and while committing to trunk every day might be reasonable for lower complexity projects, it's definitely not for higher complexity ones.

Having such a requirement also means that people can't undertake major refactorings/rewrites of significant subsystems, leading to long term tech debt.


In my experience, the "refactor the world in a separate branch" strategy is always a giant mess. The initial work is perhaps simpler but the final merge is always horrible and the full impact of breaks isn't found until the merge is committed and you're stuck in "fix forward" mode because reverting is an unacceptable choice due to the cost of merging.

I'm not sure the overall cost to the team wouldn't be lower if these sorts of rewrites were done in place. (Also it's not really a refactoring if it's so big you can't do it in branch.)


Not disagreeing, but a way to avoid this is to pull changes from the main branch into the development branch every day, and write your new subsystem 'beside' the old one until it is ready to go. If you do it like this, it's not much easier for the branched developers than working on the trunk - it's still a moving codebase - but it does mean that if you cancel the rewrite there isn't any pollution to the main branch.
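This "write the new subsystem beside the old one" approach is essentially branch by abstraction: both implementations live on trunk behind one seam, and cancelling the rewrite means deleting the new class rather than discarding a branch. A hedged Python sketch with invented names:

```python
# Branch by abstraction: old and new implementations coexist on trunk
# behind one factory function, so the half-done rewrite ships with
# every release but nothing selects it until it's ready.

class LegacyRenderer:
    def render(self, text):
        return "<p>%s</p>" % text

class NewRenderer:
    def render(self, text):
        # in-progress rewrite; safe on trunk because it's not the default
        return "<p>{}</p>".format(text.strip())

def make_renderer(use_new=False):
    # flip the default (or drive it from config) once the rewrite is done
    return NewRenderer() if use_new else LegacyRenderer()
```

If the rewrite is cancelled, deleting `NewRenderer` leaves no pollution behind, which is the property the comment above is after.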


The scenario you called out, where the rewriting work might be abandoned/cancelled, is the only scenario I can think that makes a separate branch actually better. For cases where you're confident that the work will not be cancelled, you have the same amount of work to push the rewrite back into the main branch constantly as you do to constantly keep the child branch in sync. i.e. If you can pull from the main branch into the rewrite branch and build/test successfully, you can do the same in reverse. If you cannot pull from main into rewrite and build/test successfully, you're just postponing the pain to the end when you'll dump it on the rest of the team when you finally merge back and break everything in the main branch.


The disadvantage is that if your rewrite branch is halfway finished, you _can_ merge in the latest main branch, but if you do the reverse (which is exactly as much effort), you're blocking the other developers until your rewrite is done.


This is only true if you assume that the rewrite branch will be broken the entire time. You're only blocking the team if your work will be unusable until it's done. If that's the case, then you're also assuming at some point in the future you'll complete the work, get everything working again, and then merge everything back to master. It won't likely play out that way, though.

More likely is that:

* Your child branch will be test-broken and likely build-broken almost the entire time you develop.

* You'll stop taking frequent merges as they become painful because you're clearly not doing the work to patch everything up (and can't, because you've broken everything and likely can't even run tests).

* You'll be "almost done" and then spend three weeks trying to get everything building and testing again.

* You'll tell everyone to freeze checkins because you need to get merged, and it's impossible because the branch is so far out of sync and you're now conflicting on every tiny change.

* You'll finally push your merge back with half the tests disabled.

* The entire team will spend another month cleaning up the mess.

I've never seen a big feature/rewrite branch play out any other way.


True, in that case you're screwed :)


> Trunk Based Development is a key enabler of Continuous Integration, and by extension Continuous Delivery.

Quite false. How do you expect developers to take you seriously when you essentially say "you can't properly do CI/CD with your current approach"? I sure am enjoying CI/CD right now.

Trunk-based development seems to rely heavily on feature flags, which are a huge source of complexity and inconsistency. Even when useful, they're liabilities, not assets.

git-flow (and variations) is simple, enables consistent codebase state, makes no assumptions about your codebase/infrastructure, and allows you to know the exact state of production/staging.

Finally, as it happens so often with open source projects and initiatives, first thing I should see is a big "Why". Why do I want this? Why are the alternatives inferior?


> Quite false. How do you expect developers to take you seriously when you essentially say "you can't properly do CI/CD with your current approach"? I sure am enjoying CI/CD right now.

Do you not understand that the "Integration" in continuous integration literally means "merge to master"? Please read the first paragraph: https://en.wikipedia.org/wiki/Continuous_integration

What you do might work well for you, but if you build long-lived branches then it's not CI. Let's not call it something that it's not: words have meaning if we are to communicate, and we're not doing post-truth software development.


These days CI vastly means test-on-commit over anything else.

Please google "CI platform" and see how Codeship, Circle, Travis etc self-advertise as CI platforms. None of them require TBD (not even nearly).


> These days CI vastly means test-on-commit over anything else.

Quite false. That is a miscommunication.

I know what those "CI servers" do, I have worked with a few of them daily for years and had this exact same issue come up. These tools enable the continuous integration workflow, and lately they also enable other workflows such as "build and test the branches but don't integrate".

If that's what you do fine, but let's not confuse it with something that it is not. Words have meaning. CI is not a tool, it's in how you use the tools.

But if you still think that wikipedia is wrong, I suggest that you edit it. Good luck.


Quite correct.

Snap-CI is configurable to test every active branch, on first commit/push - https://trunkbaseddevelopment.com/game-changers/#snap-ci-s-p...

Of course doing that will expose problems that lead the dev team to reconsider Trunk Based Development.


I have worked at organizations that followed TBD and git-flow.

TBD is much better for software that is delivered to the end user as a package (not as a SaaS), ones that do not require a staging branch.

> Why are the alternatives inferior?

This is specifically useful if you're working on multiple releases at the same time. With git-flow, your pull requests are blocked for the later release until the earlier release goes out of the door. When you later merge the PRs that have to go into the later release, you get massive conflicts, which waste time.

With this model, all changes are always present on trunk first and are cherry-picked onto the release branch. Release owners can selectively choose to include whatever changes they are comfortable with onto their releases. This dramatically simplifies communication and management of releases.


> "This is specifically useful if you're working on multiple releases at the same time. With git-flow, your pull requests are blocked for the later release until the earlier release goes out of the door. When you later merge the PRs that have to go into the later release, you get massive conflicts which waste time."

If you're doing git-flow right, your PRs shouldn't be blocked at any time. You should have a dev branch, which everyone is working off of. Any feature branches should be branched off of dev, and any approved PRs should be merged back into dev asap. Developers concerned about conflicts can also merge dev into their feature branches, on a regular basis, in order to catch and resolve conflicts early on. Creating a new release is then as simple as creating a snapshot of the dev branch.

https://datasift.github.io/gitflow/IntroducingGitFlow.html

Gitflow and TBD are more similar than people think; TBD is essentially gitflow with the requirement that feature branches can only live for <24 hours. Short-lived feature branches do reduce the potential for conflicts, which sounds great, except that

1) By forcing people to create multiple PRs every single day, people are spending a ton of time dealing with the PR review/discuss/update process.

2) Because many features require multiple days to develop, you're going to have a bunch of half-finished code littered all over your codebase, gated behind temp flags.

If TBD was tweaked with the requirement that developers should merge commits into trunk every 7 days, I'd be all for it. 24 hours sounds to me like death by a thousand papercuts.


But if feature branches can live less than a day, then why not get rid of them altogether? They are just extra bureaucracy that slows you down.


I would imagine there is a lot of tooling around feature branches which is useful, such as code review tools, branch builds, etc.


Publications promoting Trunk Based Development include the best-selling book called Continuous Delivery - https://trunkbaseddevelopment.com/publications/


I'm on your side but my comment (yours is better) faced opposition so:

They redefine CD and then say nothing else is it, and you aren't doing it.


It seems like many people are unaware that "continuous integration" since its inception has meant pretty much the same thing as "trunk based development."

Wikipedia's definition: "In software engineering, continuous integration (CI) is the practice of merging all developer working copies to a shared mainline several times a day."


That sounds like it allows for feature branches to me.

Edit: anyway, isn't that extremely pedantic? I've known literally hundreds of companies that say "we are using CI" when what they mean is "master is continuously tested + deployed"... is there a name for that we should all start using? I'm not trying to diminish the awesomeness of "real" CD that some of you out there appear to be doing and proud of preserving the term for.


TBD as described here also allows for feature branches (but they must be short lived and owned/worked on by a single developer -- which seems reasonable to me).

I think people are seeing TBD as "no branches ever" and that is not its goal or design.


It does, but feature branches become pretty useless at that point. It makes a lot less sense to have a branch for something you plan to merge immediately after the first commit (though my team actually does this just to take advantage of github's PR functionality).


git-flow is anything but simple.


It's simple but it requires effort and habits. Many developers work using the opposite mindset, they prefer to trade short-term easiness for long-term complexity (which is kinda valid), instead of battling complexity upfront.


> It's simple but it requires effort and habits.

That seems to go against the definition of simple: easily understood or done; presenting no difficulty.

That's not to say there aren't any benefits to it or that developing those habits are a waste, but it's not simple. Changing existing or developing new habits is not without difficulty, it takes time, patience and perseverance.


> That seems to go against the definition of simple: easily understood or done; presenting no difficulty.

That's not a great definition of 'simple' to apply to software dev. Simple != easy, because easy is inherently about familiarity. See Rich Hickey's excellent talk on the subject [1].

[1] https://www.infoq.com/presentations/Simple-Made-Easy


That talk doesn't relate to the whole discipline of software development though. He's mostly arguing that if you choose ease over simplicity in your programming/code it can heavily affect the output of your work and its long-term viability. It's about not introducing complexity in the design and your product.

But this is about the process and workflows of collaboration on code, not the code or the product itself. Some of these concepts certainly apply but just because it is in the realm of software development doesn't mean that particular definition always applies.


Hmm, not quite how I'd see it. You're right to point out different considerations are required for 'process and workflows', but I think Rich's simple/easy definitions still hold up in those situations, and are more useful than munging the two terms together.

So instead I'd say that when it comes to 'process and workflows' easiness becomes more important, because if it's an action you're literally doing everyday, you want that to be easy. In fact you might be willing to write more 'complex' underlying code/infrastructure (as we do when we setup CI) to make the process 'easy'.


> Quite false. How do you expect developers to take you seriously when you essentially say "you can't properly do CI/CD with your current approach"? I sure am enjoying CI/CD right now.

Quite true, actually: https://en.wikipedia.org/wiki/Continuous_integration#Everyon...


That's in a "Best Practices" section with this note at the top:

> This section contains instructions, advice, or how-to content. The purpose of Wikipedia is to present facts, not to train. Please help improve this article either by rewriting the how-to content or by moving it to Wikiversity, Wikibooks or Wikivoyage. (May 2015)


Are you questioning the daily part (continuous) or the mainline part (integration)?

I'm also interested in your better source. Every book I've read on the subject and the top 4 search results on google say the same thing.


I'm not an expert, and I've read no books on the subject, so I'll refrain from suggesting sources. The implication of my comment was that it's disingenuous to use that link as proof when it's marked with the equivalent of a FIXME.


First off, I have no motive to be disingenuous. I really don't care how you collaborate on software with your team.

Secondly, CI itself is considered a best practice. I don't know how you could expect wikipedia to mark it as anything else?

Here's the top link from google if you really care to learn and aren't just here to be a contrarian: https://www.thoughtworks.com/continuous-integration . There is a wealth of information on this subject that I promise all says the same thing. We can argue about whether or not CI is useful, but the practice of integrating continuously is in all the literature as well as the name itself.


If you learn how to break features down into small chunks, you can commit to master near-daily while still doing git flow.


> If you learn how to break features down into small chunks

If that's possible. What if there are some features for which it isn't?


Obviously if you can't break it up into smaller pieces, you can't regularly integrate smaller pieces. That's a tautology.

With that said, I've been doing this for over ten years and haven't come across that scenario. You can easily figure out all kinds of tricks to break things down, but usually only after you believe in the value of it.


You try by testing. Take big software like Firefox as an example. There are hundreds of commits landing on a good day, and to merge into mozilla-central you need to submit the patch for review and testing. Once the code is merged there is more testing, and if something broke along the way the release team will figure it out and back out the bad commits or get someone to add a fix asap. It is important not to be afraid to merge, but also to be responsible for your code. Take major refactoring as an example - don't create a patch which is partially implemented with breakage. You can ask for review, but don't request a merge knowing it will break - actually, your tests should tell you that. Finally, it is important to communicate changes regularly. Developers shouldn't be surprised to see their changes broken because someone else decided to refactor all of a sudden.


> You try by testing.

I understand that testing tells you you broke the trunk, so you didn't break up the feature into small enough merge-able pieces. My question is what happens if you can't break it up any smaller--do you just throw up your hands and say you can't implement the feature because there's no way to break it up into small enough pieces?


You can bulk delete a bunch of functions and not break anything. Your patch may require changes to 30 files but the change is minor. "Small enough pieces" is ambiguous, I admit. I think a better way to put it to work is: make your patch small enough to get your code reviewed. If there are drastic changes, just let people know what you are planning to change (really, write out your plan). You may have to write wrappers, or keep the original function intact but write a my_api_function_2 for the newer version so people can start adopting it.
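The my_api_function_2 idea, sketched in Python. The function bodies are invented for illustration; the point is only the shape of the migration, where old and new versions coexist on trunk:

```python
def my_api_function(items):
    """Original version: existing callers depend on this exact behaviour,
    so it stays untouched while the replacement lands beside it."""
    return sorted(items)

def my_api_function_2(items, reverse=False):
    """Newer version living beside the old one. Callers migrate at
    their own pace; once nothing calls the original, delete it.
    (Hypothetical change: deduplicates and supports reverse order.)"""
    return sorted(set(items), reverse=reverse)
```

This keeps every commit mergeable to trunk: the refactor never breaks existing callers, because the old entry point survives until the migration is complete.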


Because any long-running feature branch can just live in a different repo until it's ready to merge. There is a reason they are called branches, not trunks. Except for rare exceptions, developers should be able to break up features in a way that they can be continuously integrated during development.


Puppet Labs’ recently published State of DevOps (2016) report specifically calls out trunk-based development as a leading indicator of high performing organizations [1].

[1] https://puppet.com/resources/white-paper/2016-state-of-devop...


The amount of info required to download the free report makes it not free. Can you quote it?


I think this is the relevant portion for which you are looking:

"We found that having branches or forks with very short lifetimes (less than a day) before being merged into trunk, and less than three active branches in total, are important aspects of continuous delivery, and all contribute to higher performance. So does merging code into trunk or master on a daily basis. Teams that don’t have code freeze periods (when people can’t merge code or pull requests) also achieve higher performance."



All you have to do is enter an email address. It doesn't even have to be a real email.


It's pretty rude to ask every visitor to a free report to hand over contact details. What are you planning to do with all those contact details?

I can only think of plans that range from annoying to malicious. None that are good.

Don't normalize rude behavior. Conditionally free is not free.


It's their right as a producer to require email to view their content.

It's your right as a consumer to not want to make that trade.

Not rude. Rude is posting the raw file link to subvert the agreement or complain about it on hacker news.


You have a very odd view of rights. It's /within their means/ as a producer to require email to view their content. Just because you can do something doesn't make it your right to do something. Doesn't make it wrong to do it either, but that's god damn miles from a right.

In addition, just because you can do something doesn't mean you should, and it's within my means to assert that anyone who asks for my contact information but doesn't know me personally /definitely/ wants to spam me, there's no other reason for them to ask for that information.

Being polite, spamming is rude. Being rude, people who send or enable spam are worthless scum. I have no time for anyone who chooses willfully to be part of that cycle.

My freedom to express that opinion actually /is/ a right[1].

[1] http://www.un.org/en/universal-declaration-human-rights/


> My freedom to express that opinion actually /is/ a right.

Sure, but https://xkcd.com/1357/


It's odd that you would link that. From my perspective, I'm the one doing the boycotting and showing of the door. How do you see it?


When I enter my email address and click the button, the form expands to add new fields including first name, last name, phone number, and company name. The button still says "Download now", and I don't have the report in my email. So it seems like I just got tricked into providing my email address without getting the report in return.

This Google search finds downloadable mirrors of the report: "2016 State of DevOps Report filetype:pdf"

https://www.google.com/search?q=2016+State+of+DevOps+Report+...


Correct me if I'm wrong but isn't this just a form of centralised version control? If this is a better approach for many teams, perhaps this an indication that DVCS weren't the silver bullet they were hyped up to be.

Aside from that, in my opinion the whole centralised version control vs. decentralised version control, and all the variants thereof, miss the bigger issue. Keeping track of the history of a file, and the authors of the changes, is trivial for any version control system. The real challenge is in resolving merge conflicts, and it's the low level of sophistication in merge tools that is the real bottleneck here. That's why I think tools like Semantic Merge are far more important to a development workflow than SVN vs Git vs Mercurial. It relies on the merge tool understanding the structure of code rather than treating it as an ordinary text file. Similar tools could be built for any language that offers a 'compiler as a service' (such as RLS for Rust).

https://www.semanticmerge.com/


SemanticMerge is mentioned on the site: https://trunkbaseddevelopment.com/game-changers/#plasticscm-.... It got its name after I blogged about it too - http://paulhammant.com/2013/02/14/features-i-would-love-sour... - and I'm still owed a beer.


Thanks for the links, I'm glad you identified this as an issue and inspired further development of a solution (though I'm surprised you're owed a beer for such an obvious name, must've taken all of about 5 seconds to think of that one ;-) ).

Also found this article in the comments of your blog to be interesting:

http://blog.plasticscm.com/2013/04/put-your-hands-on-program...

This article you linked to was interesting too - is this a common workflow on CI solutions now?

https://blog.snap-ci.com/blog/2013/11/07/automatic-branch-tr...


The suggestion that feature branches hinder CD is just untrue, at least in my experience. To me, this seems like a silly restriction to put on such a critical tool (the scm) which is designed to support unlimited flexibility.


You must not understand what CD really means, then, because full CD is impossible with feature branches. CD means every commit gets built and delivered all the way to prod if it passes all tests. The key word is "continuous" as in, every single commit. Branch-based means you only deliver once you merge. Merging is a manual step, so you're not doing full CD. The merge is basically your "trigger" and you're doing "delivery when I am ready".


We deploy every commit in master and use short lived feature branches and pull requests.

I would still consider this continuous delivery.


Yes, this is what I think of as CD. Hiding work that isn't ready (or work that may never make it in front of users) behind a flag on prod sounds like extra work for no gain, to me.


The gain is early integration. If you have more than one branch going on at the same time, they can diverge quite significantly, and merging can cause bugs, or sometimes more work. If you only ever have one feature branch at a time, you might as well be doing trunk based with manual deploys to prod.


Rebase early and often and this isn't an issue. Some see this as a chore, but I view it as taking advantage of the tool's power. You resolve the same set of conflicts as you would merging, but (as you say) earlier.

Edit: I don't think anyone was trying to imply a single allowed feature branch at a time; we can probably all easily agree that's basically no different than working on master.


Yes, rebase early and often works great. But if your branch includes a major refactor, you're going to find yourself editing others' code to get your rebase to work. That just increases your own scope. But yes, rebasing often is often a very viable strategy. In practice, though, I find very few that actually do it, because it becomes painful when your branch lives too long simply because the business isn't ready to ship something.


Rebase is evil.


Can you explain how a repo with dozens of feature flags triggering greatly different code paths, and invisible to the source code management system, is an improvement on feature branching?

How would I use git bisect with a bunch of flags to a composite of build systems?


By "short lived", how short lived? Is the branch only for one developer? Or is it shared? How do you enforce "short lived"? If the branches are not collaborated on, you might be doing trunk based without realizing it.


Holy shit. Just admit that TBD is a fad instead of trying to define every successful methodology into it.

Branches tend to be solo devs and tend to live less than a day. But nobody develops in master. It's actually locked down to just merges from pull requests.


>you might be doing trunk based without realizing it.

Then trunk based is poorly named and apt to start a fruitless holy war. Nobody wants too many people on a feature branch, or for one to live too long. (But it happens, like rebranding an entire app or rewriting all the auth code, which is a good thing to be able to do. )

Is there a threshold of time/collaboration that makes it not-CD?


My point is that the article says that branches for the purpose of WIP and code review are OK as long as you don't share the branches. Trunk based is trunk "based" as in your git "base". You're based on trunk, not a branch. It's not called "commit directly to trunk only" development.


Why do we have this obsession with religiously following methodologies rather than focusing on the goals they attempt to achieve?

>CD means every commit gets built and delivered all the way prod if it passes all tests. The key word is "continuous" as in, every single commit.

In my experience one of two things happen with this approach. Either the functionality is hidden behind flags and intentionally doesn't impact prod, in which case there's no point in it being there. Or you only commit when the feature is complete, which is effectively the same "delivery when I am ready" as with merges.

The goal of CD is not to push every commit to prod -- this is merely the process. If the goal achieved by the process of pushing every commit to prod can be achieved via a different process, we shouldn't be so quick to criticise the other process. We should instead discuss the tradeoffs between the two, because in reality there is no best option, only tradeoffs.


How does code review fit in to such a workflow? That's also a manual step, but one I and many others consider good and necessary.


According to the article, and my personal experience, you don't commit directly to trunk. Trunk based means that you don't "collaborate" on a branch. So you write your code based off of trunk; once it's ready, you commit in a branch, simply for the purpose of code review. It gets reviewed, and probably runs an automated build. Once both pass, you merge and it goes through your deployment pipeline. Yes, the merge/code review are manual, but you're not merging to say "I'm ready for this feature to be on prod now", you're simply saying it looks good. The biggest difference is that if developer B wants to use your new method, he has to wait for it to be on trunk rather than branching off of your branch, etc.


It really is the same. He very well could have branched off your branch and, once it merged, just started syncing with master. And I see that as better than waiting for your PR to be merged.


So, like squashed feature branches?


So write the tests first; isn't that what we're supposed to do anyway? That way the branch doesn't move to prod on commit, because it isn't passing the tests yet.

BTW "commit" is also a "manual step".


I'm kind of lost by your comment.

> Write tests first

What does this have to do with TBD vs branch based? Should be that way either way. In fact TBD breaks down if you don't write your tests.

> A branch doesn't move to prod on a commit

Are you saying you're shipping branches to prod regardless of whether they're on trunk or not?

Or perhaps your whole comment is just describing how TBD is supposed to work and you're agreeing with me?


Ah, never mind, I was confused.


Anyway the article mentions Continuous Delivery, not Continuous Deployment. Neither means what you say, according to a Puppet Labs blog post, but the latter sometimes does to some people. But that's not what the article says, and so it's not what I'm countering in my comment.


CD is not impossible with feature branches.

My live system looks up the branch name in the URL or in a (signed) cookie and checks out the appropriate branch then and there. This is possible because my development system is integrated into my live system.


If you push to two feature branches, A and B, which one lands in production? Both? If so, how? Or the last one you pushed? Or the one whose build finished later?


My specific implementation uses the git commit id. Only the tool for signing cookies actually knows about branches.
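A rough sketch of the signed-cookie idea (an HMAC over the commit id; the key name and scheme are assumptions, not the poster's actual tool):

```python
import hashlib
import hmac
from typing import Optional

SECRET = b"rotate-me-regularly"  # hypothetical signing key, kept server-side

def sign_commit(commit_id: str) -> str:
    """Bind a session to one git commit id with an HMAC tag."""
    tag = hmac.new(SECRET, commit_id.encode(), hashlib.sha256).hexdigest()
    return f"{commit_id}.{tag}"

def commit_to_serve(cookie: str) -> Optional[str]:
    """Return the commit id to check out, or None if the cookie was tampered with."""
    commit_id, _, tag = cookie.rpartition(".")
    expected = hmac.new(SECRET, commit_id.encode(), hashlib.sha256).hexdigest()
    return commit_id if hmac.compare_digest(tag, expected) else None
```

The live system would then check out whatever commit a valid cookie names, and fall back to the default deployment otherwise.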


So you're delivering all commits on all branches to prod? That sounds hairy.


How many commits do you do in a day?

The hardest part is figuring out how to do this without a map.


On a big team, potentially hundreds. But if you're delivering all commits regardless of branches, why even have branches?


Because they're still useful? A branch is a useful tool for the developers who know their development process includes pushing to and pulling from branch "xyz" while they develop whatever feature they're tasked to develop. That they don't solve every problem doesn't bother me.


> CD means every commit gets built and delivered all the way prod if it passes all tests.

I always wondered why a company would allow code to go straight to production, live, affecting customers, without a human giving the OK to release.


Because you should be able to automate that OK. There is generally no verification that a human can do on a computer system that cannot be automated. The only question is whether you're willing to invest enough to fully automate the go/no go decision.


So the person that has an opportunity to misunderstand the specification (the developer) is given the task of writing the tests to OK them, even though the test is based on the same misunderstanding?

That's terrifying. After 15 years in the same field I still don't have half the business knowledge as those writing the specs I implement. I would never ever want a nontrivial feature I wrote to hit a customer without manual testing by an expert in the area.


Where do you work that software engineers write code and then hand off their code to non-software-engineer "experts" for testing? That sounds like a really broken process.


I implement a program used by structural engineers. So basically the user is a structural engineer and I'm a software engineer. They typically find nuances of program behavior that I never thought of because I'm not an expert in structural engineering.

I think the same would be true if I made a trading platform, an x-ray machine UI or whatever. When the expert uses it to do what they are experts in, they will invariably find issues (bugs, omissions, simple improvements) that weren't obvious to begin with.

One can argue that if the spec was 100% perfect then I can always test it myself and hopefully even do so with automated tests - but I have never seen a spec like that (perhaps more importantly - if you have in house "end user like testers" for expert software then it's likely more economical to have expert testing and iterate than spend the time on more detailed upfront specs)


For a huge chunk of the industry, the consequences of broken software are limited to annoyed users and lost revenue. In cases like that, the benefits of shipping quickly often outweigh the value of "expert testing". Further, in many cases the software engineers have as much expertise as the customers, making the handoff for testing simply a way of abdicating responsibility for quality.

For your structural engineering example, I'm not sure you couldn't benefit from continuous, automated release. If the only risks are missing "bugs, omissions, simple improvements", you could fix those in the next release (which could be the next day). Delaying valuable features so that the customers can tell you that a tweak would be even better doesn't seem to be a net gain. The only reason to hold the release would be if you're catching dangerous bugs this way.

You could also build new features under a "flighting" system (pick your favorite name; there are several) where you don't expose new features to most customers until they are "baked" with your internal customers and/or customers who've opted into early features. This allows you to release constantly so your customers get bug fixes quickly and features as soon as they're ready without the complexity of separate branches and versions maintained in parallel.


We can't ship often for the same reason all of the big and complex software packages (IDEs, spreadsheets, etc.) on your machine don't ship very often. Documentation needs to be produced and specific to a version. The application needs to be a consistent whole, with UI changes, file format changes etc. not happening too often.

I don't think it will ever be a good practice for large complex apps to change a tiny bit every day. (Facebook might be challenging my theory, but their app is relatively simple, they don't produce training docs, and most importantly they don't have to keep the latest app compatible with all Facebook data from the beginning of time - instead they keep their data on their servers and migrate it to the latest programs when necessary.)

I agree you could have more feature gating, but large backwards compatible file formats are a complex business already with 10 releases over a decade - I can only imagine what it would be like supporting reference-rich documents with many more releases and the additional complexity of the sender and receiver having to agree on a feature set (unless you make the feature set/flight implicit from the data - but that's a new kind of headache). We already have tons of code in new versions dealing with loading malformed data in old formats because of bugs closed years ago! Every document we wrote we must also be able to read.

Lots of challenges in this area, but they are pretty fun to work with tbh.


> We can't ship often for the same reason all of the big and complex software packages (IDE:s, spreadsheets etc) on your machine don't ship very often. Documentation needs to be produced and specific to a version. The application needs to be a consistent whole with UI changes, file format changes etc not happening too often.

The only reasons Excel and Visual Studio don't update constantly are 1) the update mechanism is too heavy with gigabytes packaged into an MSI, 2) it's easier to sell licenses with big updates, and 3) inertia.

With people migrating to Office 365, I wouldn't be surprised if the ship cadence of Excel (etc.) become more service-like, with frequent feature releases and only big redesigns or massive features getting released as "major version" releases.

(Disclosure: I work for Microsoft. These are my opinions and not based on any inside knowledge of Excel or Visual Studio dev/release processes.)

The issues around UI, documentation, etc are solvable. You can build and release features regularly without changing the UI significantly. The UI as a whole should be consistent but small tweaks are fine and big changes can be built behind a feature flag and left dormant until the next "big" release if necessary.

Documentation doesn't need to be locked to a specific version, or to the extent it does, you could automate that. From version to version, the changes that impact existing documentation are minimal. So your doc system needs to know how to render V1 and V2 and understand the delta between the two. Not free but not overwhelming either.

With all that said, I understand that sometimes "major version" releases and the associated "big bang" testing and signoff and release can make sense. But it's rare that continual release cannot work.

> I don't think it will ever be a good practice for large complex apps to change a tiny bit every day (Facebook might be challenging my theory, but their app is relatively simple they don't produce training docs, and most importantly they don't have to have the latest app compatible with all Facebook data from the beginning of time - instead they keep their data on their servers and modify it to the latest programs when necessary)

Chrome is getting close to this. They release major versions on something like a monthly cadence and smaller updates more often.

Obviously, the closer you get to a service model like Facebook, the easier and more appropriate it will be to ship updates very frequently. Over time, the number of devs involved in projects like this seems to be trending distinctly upwards, though. I wonder in 10 years what percentage of software will be shipped in a way that looks like "shrink-wrapped" software.

> I agree you could have more feature gating, but large backwards compatible file formats are a complex business already with 10 releases over a decade - I can only imagine what it would be like supporting reference-rich documents with many more releases and the additional complexity of the sender and receiver having to agree on a feature set (unless you make the feature set/flight implicit from the data - but that's a new kind of headache). We already have tons of code in new versions dealing with loading malformed data in old formats because of bugs closed years ago! Every document we wrote we must also be able to read.

I would think that binary formats would be pretty stable even if you released very frequently. I don't mean that it would naturally happen. I mean that it should probably be mandated. You shouldn't need to modify the binary format constantly in order for other work to happen. It would be a maintenance nightmare if every little release modified the file format. But this is kind of like shipping a binary client. You wouldn't ship the client as often specifically because of the maintenance cost (namely compatibility testing). Obviously if you couldn't do any work without changing the file format, then this would become problematic. But then I would wonder why your file format is so brittle and so tightly coupled with the rest of the app, and if I wanted to release rapidly, I'd invest first in fixing that.


What do you think unpushed/uncommitted changes on your local machine are? They're basically feature branches.


With a sentence like:

" It has been a lesser known branching model of choice since the mid-nineties, and considered tactically since the eighties. "

Given:

The Release Engineering of 4.3BSD https://docs.freebsd.org/44doc/papers/releng.html 1989

concerning a key software system and part of software history, released in 1986, which states:

" For the 4.3BSD release, certain users had permission to modify the master copy of the system source directly. ... The development phase continues until CSRG decides that it is appropriate to make a release. "

,that the rcs tools of the time basically didn't even support branches natively, and this model is followed to this day by the direct descendants of this project

I'm prone to think that.. well. Hmm.


We currently use short-lived feature branches, merged via Pull Requests (+ review / automated testing) into the main development line. This way, we can communicate changes in a detailed manner before they are added to the product and make sure there is no unfinished or bad code in the main branch. (The dev team is small, 5-8 devs.)

I don't see (yet?), what benefits TBD would provide in such a setup.


The linked website includes this workflow under their definition of trunk-based-development. As long as your branches are short-lived and make it into the main line rapidly, it appears to be considered equivalent.

I've seen some commentators say that this doesn't pass their bar for TBD, but I think it's effectively the same thing.

The site does call out "GitHub flow" as being slightly different, but that's because GitHub's description of their model includes deploying the branch to production before merging to master.


Couldn't have said it better myself :)


With TBD you can reduce the overhead of coordination between committers.

Edit for clarification: "Branching in code", à la Feature Toggles, is much better, because everything around it can be automated. With VCS branches, you shift that process into an earlier development stage where you need more human collaboration. So TBD can shift the focus of collaboration to things that matter more: the code.


I am very concerned that feature toggles add unnecessary complexity to the code.

With Lisp macros, for example, that would actually be quite elegant, but in a language like JavaScript we'd end up with a bunch of ifs here and there, wouldn't we? One might simplify it a bit with a different architecture, though.


Long-lived branches add complexity too, it's just hidden in Git. The main issue with feature branches is merge conflicts: they are the most evil operation one can imagine. First you have your code, which you understand. Then you have others' code that you have no idea what it does, so first you need to understand it. This might take hours when done properly. Usually people skim over this part because it's boring. This leads to superficial merges, which leads to very evil and hard-to-find bugs. And these bugs are super hard to find in git logs, since the merge commit is two or more people's code.

Whereas the alternative, feature toggles are much better: first you don't need one for every small thing, add them to bigger features that take weeks to implement. With a team of ~10 developers you should not need more than 2 at one time (if you have more the management is at fault). Then feature toggles enforce modularity: by principle you should use them in as few places as possible, this will make the interface between the new feature and your current app as small as possible. This is a good thing!
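As a concrete sketch of the "as few seams as possible" idea (flag names and the registry are hypothetical, not from any particular framework):

```python
# Minimal feature-toggle sketch: unfinished work lands on trunk but stays
# dormant until the flag is flipped, e.g. in deployment config.
FLAGS = {"new_checkout": False}  # central toggle registry

def is_enabled(flag: str) -> bool:
    return FLAGS.get(flag, False)

def checkout(cart):
    # Keep the toggle at a single seam, so the new path stays modular
    # and the flag is trivial to delete once the feature is fully live.
    if is_enabled("new_checkout"):
        return new_checkout_flow(cart)
    return legacy_checkout_flow(cart)

def legacy_checkout_flow(cart):
    return {"total": sum(cart), "engine": "legacy"}

def new_checkout_flow(cart):
    return {"total": sum(cart), "engine": "new"}
```

The discipline is in the single `if`: if a toggle has to be checked in a dozen places, the feature isn't modular enough yet.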


Why can't you automatically merge all the branches for testing? Sure, sometimes it won't cleanly merge, but then again, features behind toggles won't always work well together.


I once wished this were possible too, but in reality it isn't.

With n branches there are n! permutations to test. That's a lot of infrastructure to maintain and computing power to spend and you still don't end up with a single team-blessed deployment artifact that can automatically be promoted for manual testing or release.

Don't get me wrong, feature-toggles are indeed a pain in the ass, but in my experience cherry-picking branches to merge really is much much worse.


And with n feature flags, aren't there also n! permutations to test, that is, each combination of enabled/disabled? If you say no, because you just test features as they get manually enabled, why can't you do the same by manually selecting branches to be merged?


Yes, that complexity does not go away. But you only need one delivery pipeline, with one test/staging environment. Toggling and running acceptance tests against the variations can be automated, but is viable when done manually.


With n feature flags, there are 2^n possibilities to test.
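For anyone counting along, the two figures from this subthread (a small illustration, not tied to any toolchain):

```python
from math import factorial

n = 4  # say, four in-flight branches or four feature flags

# Merging n branches where order matters (each merge order can conflict
# differently): n! possible orderings.
merge_orders = factorial(n)

# n independent boolean flags: each can be on or off, so 2**n configurations.
flag_configs = 2 ** n

print(merge_orders, flag_configs)  # 24 16
```

Both blow up fast, which is the practical argument for keeping the number of simultaneously in-flight branches or active flags very small.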


I like this best.

It enables you to build your feature and then squash the commits into one atomic feature commit, which keeps commit clutter out of the git logs.

By just keeping your branch rebased daily, you'll have minimal problems even on big teams.

I personally love developing on my own branch and having the freedom to experiment without messing stuff up.


This is what I do when I can get away with it. Suitable for smallish teams.

All developers sit in the same branch; changes and conflicts are visible (and resolved) immediately. All work takes place there including bug fixes.

When we branch out for release; the focus is on disabling unwanted features. Actual development work + "hardening" is done in the main trunk before the branch.


Small teams like Google's 25000 devs and QA automators? :)


I think it can work well with maybe 10-20 people.

Depends on how well your software is modularised.


I assure you it works well with 25000 people, I've worked in such an environment.

It seems to me, from the various threads I read here, that there's a bit of misunderstanding. If you work on short-lived feature branches and merge them to trunk/master, and other people branch new feature branches from trunk/master and so on, you might just be doing TBD without calling it that.

Many things are just common sense consequences of that, e.g. if other people are spawning their feature branches from trunk/master; then you don't want to merge something that breaks trunk/master, otherwise you'd be making the life for your colleagues harder, since now they cannot know whether the code they added breaks some tests or they are broken because somebody else broke them.

The feature-flag thing comes into play only when you want to break down a larger feature into smaller branches, each landing in trunk/master before continuing to the next step. This gets more important the more people can actually make changes to the same part of the codebase that your big feature is going to touch.

If it's only you that touches a section of your code base (and you can do it because it's well modularized), you won't feel much pain of making your feature branch last longer.

However, TBD shows its strengths precisely when the team grows; when you no longer can make the assumption that only you will work on a given piece of code.


> you might be just doing TBD without calling it that way

I think this happens an astounding amount. In particular, i think Git Flow is TBD with the names swizzled.

As far as i know, all the source control disciplines that people seriously advocate involve:

1. Developers working on local copies of the code

2. A shared copy of the code where developers integrate their work when they're done

3. A copy of the code used to make releases, which is refreshed or recreated from the shared copy

These copies get different names, which are meaningless labels. In TBD, the shared copy is called 'trunk', and the release copy a 'release branch'. In Flow, the shared copy is 'develop' and the release copy is 'master'. They're just names. They don't matter.

The precise mechanism for moving work from local copies to the shared copy varies, but not in a way which really matters. In TBD, developers usually push straight to the trunk. If they're doing XP, they've done code review via pairing. In some teams, there might be pre-merge code review through a tool like Gerrit, Reviewboard, Phabricator, or some such, but still with an expectation of a merge happening quickly. In Flow, i guess there are pull requests, but those are basically the same: a code review before merging. The Not Rocket Science Rule is this, but enforced by a robot [1].

The mechanism for moving work from the shared copy to the release copy varies a bit more. Strict CD shops release the trunk, but they will only push the latest point on the trunk which has passed all the checks, which is a bit like having an implicit branch. Conventional TBD cuts release branches. Flow does that, and then adds a bit of a dance around merging individual release branches into the long-lived master branch, but it doesn't ultimately matter, because any changes made on an earlier release branch have already been merged back into develop. There's variation in where you fix urgent bugs: on the shared copy, followed by a copy to the release copy, or on the release copy, followed by a copy to the shared copy. I'm not sure that this is very significant.

Which leaves the only significant tuneables being how often developers integrate their changes, and how often the team makes releases. If you integrate often, you get easy merges and rapid integration feedback. If you integrate infrequently, you don't get those things. If you integrate more frequently than you release, then you will need disciplined incremental development, feature toggles, or some other way of making incomplete features unavailable in production.

[1] http://graydon2.dreamwidth.org/1597.html


So I'm saying: given that Google does Trunk Based Development (https://trunkbaseddevelopment.com/game-changers/#google-shar...) with that many committers, any size team can do it.


Trunk based development works well on any scm that sucks at branching. Perforce, subversion etc.

But, on git, using GitHub-Flow is far superior.

The two poster children for TBD do not use git. Don't cargo cult their process without understanding the unique problems they have that you don't.

Edit: downvotes on HN? This isn't Reddit, and I'm advocating github flow, not git flow.

GitHub flow is trunk based development but with all work in feature branches that live for less than a day or two, instead of branch by abstraction in the core code. That's it!


From the page:

Depending on the team size, and the rate of commits, very short lived feature/task branches are used for code-review and build checking (CI) to happen before commits land in the trunk for other developers to depend on. This allows teams to engage in eager and continuous code review of contributions before they land in the trunk.

This sounds roughly like what you are advocating.

The resources about GitHub-flow that I've seen don't put emphasis on the lifetime of branches, only on them being finished at some time, with the completion of a feature, and being deployed immediately after being reviewed.


We don't put hard limits on our branch lifetime, but we groom stories with a goal to make them small enough so that the feature branch is very short lived.


Based on the quote, it sounds like the authors of this submission might count what you do as a form of "trunk-based development" then.


yup


Which two poster children?

Trunk Based Development is also "all work in feature branches that live for less than a day or two". One small difference to Github-flow though: https://trunkbaseddevelopment.com/alternative-branching-mode...


GitFlow and its derivatives have very poor handling for continued bug fixes of old released versions. It might be great for a truly continuous model such as Facebook and Gmail where there is only "one true version", but in a case where there is a current version and three old versions requiring bug fixes, it fails miserably.


I've implemented GitFlow with multiple support versions. We just added a "version" dimension to each of the main branches.

1.1/master 1.1/release 1.1/development

1.2/master 1.2/release 1.2/development

When we fix bugs in 1.x, we work off 1.1/development and merge into all later versions.

This doesn't fit for a "web" or "continuous delivery" model. It does fit very well for software houses with many customers on 6 monthly upgrade / development cycles.


I'm not talking about git flow.


Who are your two poster children? Because in my mind it's Amazon and Google, who both use git


In the article it was google (perforce) and Facebook (mercurial with hacks).

Both are massive mono repos.

At Amazon they were mostly perforce when I was there. Granted that was a few years back.


Google doesn't use perforce anymore. They have a custom in-house SCM that has various interfaces wrapped around it. I'd recommend this video that describes it.

https://m.youtube.com/watch?v=W71BTkUbdqE


Piper is awesome, but it's not unreasonable to think of it as Perforce if you designed and built Perforce to scale to handling all of Google in one repo. It has basically the same set of nouns and verbs. citc is the really magical bit, UX wise, as it's like you had a Perforce view set up for our entire repo but never actually had to sync it manually. Just navigate to a file and start hacking.

The paper that corresponds to the talk includes numbers on that (I forget if they're in the video).

http://m.cacm.acm.org/magazines/2016/7/204032-why-google-sto...

(note: I work at Google and absolutely love our SCM and other dev tooling)


I would say the bigger the repo becomes (in terms of # of collaborators) the more important trunk based becomes regardless of SCM choice. Mainly because merging many branches for many teams becomes very tedious and hairy. By the time you resolve all the merge conflicts, someone else will commit a merge and you'll have to resolve conflicts all over again.


Wouldn't you have the same problem with trunk though? If you commit at the same time as someone else, and they push their changes first, then you'll have to resolve any conflicts before you push.

It seems that trunk based development is merely a means of enforcing extremely short lived "branches".


I agree, which is also why I think monorepos are an anti pattern.


If two teams are working on two parts of an app that could be in different repos then they would have no conflicts while in a monorepo.


Amazon (today) maintains a git repo per package, though in theory there's no reason why it has to stay git (or even be homogenous). Each application is a set of versioned artifacts that can be updated, forked, or merged together so long as the whole set builds together.


Sounds like they still use Brazil.

I've never understood the mono repo argument. With something like Brazil (or even sonatype nexus), you end up in a much stronger place for guaranteeing consistency and immutability in your builds.


I described something that seems very similar years ago[1] (in response to git-flow), however I've since come up with something better: I think knowing how to get things live is important, and the entire development process should reflect that -- too many people have a "code cut" that takes days or even weeks, and that's despite using the latest and greatest CI tools.

A lot of the problems may sound familiar: How do we test something? How do we know it's good enough? We have a UAT step, but it doesn't find all the problems, so do we need more tests? To be more careful?

My approach[2] turns the entire problem into a software problem, and it's proving very successful to me (faster request turnaround, fewer problem reports post-deploy, and so on). Being able to select a "git branch name" for a specific user and get acceptance is powerful.

[1]: https://news.ycombinator.com/item?id=6125964

[2]: https://news.ycombinator.com/item?id=11190540


We have a smallish team working on web apps and use a trunk-based-development branching strategy. Not really to do with helping CI/CD.

The advantage we see is that new developers can start developing as soon as they clone the repository, without needing to switch to the 'development' branch. This is particularly useful when onboarding new developers who are inexperienced with Git. I found that when using Git Flow, the (stable) master branch frequently ends up out-of-date when developers are required to remember to merge releases into it.

Diagram of our branching strategy: https://gist.github.com/CameronWills/abf9e307669b1005c88ef82...


I looked at your diagram. I don't think you're doing trunk based development.


TLDR: instead of using diffs of changed lines in branches, use 'if' statements in code to make Enterprise Continuous Integration gods happy.


The advantage of feature switches is that you can turn them on and off easily without having to merge them in, or revert and deal with the resulting conflicts.

Feature switches also have more advanced capabilities, like being able to target specific users if you want a gradual rollout.

We use both feature branches and feature switches, since branches should be very short lived. However, large areas of development require months.


In subversion, the only way I can possibly develop with sanity is in an "unstable trunk". Branch off releases N weeks before shipping, and have only the requirement that trunk passes automated tests while release branches are manually tested. Obviously you don't release more than say once every month or two, but that's more than enough for most.

I'm honestly not sure if the site is satire?

No one develops in long lived feature branches in svn (rather, everyone does, but most only try it once before they realize the pain of trunk based dev is much smaller than that of svn merging).


Not satire - If teams are doing "branch for release", they're doing it a matter of days before the release - https://trunkbaseddevelopment.com/branch-for-release/

Why wouldn't you reconfigure your CI server to also guard prospective and recent release branches?

Also - https://trunkbaseddevelopment.com/youre-doing-it-wrong/#mere...


It's not satire. In your case the automated tests are just weak. You call the trunk unstable, because you need extra manual testing. What the page describes is an improvement on that. The CI system should have good enough tests to guarantee your trunk is never considered unstable. Then it doesn't really matter when you make a release - it's not a big deal. (Just choose a commit you like)


I think "pick a commit" only holds up for server based software.

For things like large desktop software with long term supported file formats, mission critical software where people might get killed if something malfunctions etc then it's pretty normal to have manual testing. If the cost of deployment of a release is high (N people training, downloading, installing) then you want few releases - and you can also motivate spending on manual testing to ensure you don't need a new release sooner than necessary.

The release branch also works as a beta/rc branch so that a version is tested on a subset of users before general availability, while things like large refactorings and major features can start being developed for vNext in the trunk/master.

I'm not sure but I'd guess this is how most software other than web and server software is developed.


You're right. But I don't really expect anyone writing safety critical software to get their idea for the process straight off a blog. In that case you'd be going from requirements to how you achieve that. You also don't need continuous delivery. Basically a different world.


If you release only every month or two you are doing something really wrong (or need to work to some insane spec). Read the basics of CI and CD, read the Clean Code book, read about modern organisation practices (e.g. GitHub releases several times a day). Releasing every month or so is (thank God) a relic of the old waterfall development times.


We do CI, not CD. Lots of software doesn't do CD and lots of it can't. Games. Embedded. Desktop. Store-apps.

In my case it's large desktop software for structural engineering. Customers don't want new versions more than 1-2 times per year because of deployment, training and code compliance requirements. If we bork a release, users have to do the long download+install all over again - we can't just deploy the fixed version to prod to solve problems. That's why a release goes through hundreds of hours of manual testing.

Not to mention data: a feature is a new file format. If we ship one new feature we basically ship a new format. Now you know why AutoCAD, Word and Illustrator ship once per year and not once per commit.

I think what a lot of people (who presumably do web dev only) forget is that web dev is just one of many software development disciplines, and a very young discipline too.

I know of the practices you mention. I like clean code (both the book and the idea). I'd do CD if I had a project some day that runs on a server. I hope I won't have to do web dev any day soon though.


You can do CD without releasing; it's not pressing the release button that matters. CD means that you can press that button at any time. Make small changes, test continuously, work on master.

If you make small changes and test them (using automation - unit tests, integration tests etc) you are catching bugs early and avoiding merge conflicts (the most evil operation of them all). This is not my idea; it's in the Clean Code book and it's the basis of the CI/CD/TDD discipline.

And you can apply this to projects of all kinds - we used it on the server, for iOS projects where we had to wait on Apple for weeks, etc.


This is an important distinction. Another one is that just because you aren't deploying to production regularly doesn't mean you aren't deploying to other environments.


No, they're not. Is IntelliJ "doing it wrong" to release every three months[1]? Absolutely not. A release can change things about a product, and you want to ensure you are pushing those changes (UI, behavior, plugin APIs) at a pace that matches the users. You don't want to be pushing changes to a plugin API every day. The surface area of stability that has to be maintained at that point is insane.

In the interim, you should definitely be able to build off master. Your CICD server should build and run tests on every commit to master. Then a release gets a tag in version control and you should be able to issue bug fix releases for only that specific version. Telling customers, "There is a critical security bug in version X, but in order to fix that bug you need to upgrade completely to version X+2" is not an acceptable answer.

Note that the description above works best in my experience for products that push a release artifact (like IntelliJ) versus a service like GitHub. Release management approaches will vary based on the product.

[1] https://www.jetbrains.com/idea/download/previous.html
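The tag-then-fix flow described above can be sketched in a throwaway git repo (the version numbers, file names, and branch name are all invented for illustration):

```shell
# Sketch: the release gets a tag on master; later, a critical fix for that
# exact version gets its own branch cut from the tag, with the fix
# cherry-picked over rather than forcing an upgrade to X+2.
set -e
repo=$(mktemp -d) && cd "$repo" && git init -q
git config user.email dev@example.com && git config user.name dev

echo v1 > app && git add app && git commit -qm "release work"
git tag -a v2.3.0 -m "release 2.3.0"      # the release gets a tag

echo fix > app && git commit -qam "critical fix"
fix_sha=$(git rev-parse HEAD)              # the fix landed on master first

git checkout -q -b fix/2.3.x v2.3.0        # branch from the tagged release
git cherry-pick "$fix_sha"                 # backport just the fix
git tag -a v2.3.1 -m "bug fix for 2.3 only"
git tag                                    # lists v2.3.0 and v2.3.1
```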


I would say this must be satire but you seem sincere. You are promoting a thin slice of modern "best practices" which are not the only way to do software development. And it doesn't sound like you've ever worked on a major software project if you think that a month is a prohibitively long development timeframe. A mid-size architectural refactor of a large codebase can and maybe should take a month. When you get into the 200k+ SLOC range, no one person understands the entire thing, and most of us don't want to 'move fast and break things' when a little forethought, communication, and analysis can avoid it. I think this 'growth at all costs mentality' is ruining a lot more than our software, too.


One of the authors here. Ask questions :)


Paul Hammant? Thank you for your blogs and thanks for putting this site together, they're very informative. I work at a company that has been doing "Cascade" model for close to a decade [0], very similar to TBD but not quite the same. In addition to having a mainline trunk repo which everyone's work is pushed to, we have started utilizing Phabricator in the past few years -- it enables us to have short-running feature branches which aren't public to the upstream repository but can be shared if necessary and used for review (I'm not sure how similar this might be to GitHub flow? We use Mercurial).

[0] http://paulhammant.com/2013/12/04/what_is_your_branching_mod...


Trunk only goes well with continuous deployment.

If each increment in deployment is small enough, then you know exactly when things break and rollbacks are easy.
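With increments that small, the rollback really is mechanical - a sketch in a throwaway repo (commit messages and file names invented for illustration):

```shell
# Each deploy is one small commit, so backing out a bad deploy is a
# single revert of a single commit.
set -e
repo=$(mktemp -d) && cd "$repo" && git init -q
git config user.email dev@example.com && git config user.name dev

echo ok > service && git add service && git commit -qm "deploy 1"
echo broken > service && git commit -qam "deploy 2 (bad)"

git revert --no-edit HEAD   # one-commit rollback, no conflict archaeology
cat service                  # back to the "deploy 1" contents
```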

The 3-2-1 system where you develop for 3 months, then merge for 2 months, then debug in production for 1 month, is a real drag, and doesn't lend itself to quick feedback at all.

For software looking for product-market fit with ultra-short cycles, I would absolutely choose trunk only.

For engineering projects in areas where cycles are months, or years, more traditional branching makes more sense.


The Envy configuration management system for Smalltalk worked like this. As you were working, you could see if someone had changed code "nearby" (same class, same category, same protocol) and you could review it and merge it in.

Perforce provides something with the same effect. If you periodically get the most recent revision, you can see files with collisions in your change sets, then merge them in.


I read Drunk-Based Development and was a bit disappointed. I can get awesome ideas while very tired and usually write them down, but a few weeks later, even though I was super excited when I wrote it down, it no longer makes any sense ...


I read Trump-Based Development...


If there was indeed a superior way of developing software, why doesn't anyone take this knowledge and out-compete, for example, Linux? It has literally hundreds of branches of all kinds (stable branches, feature branches, test branches) - this should be easy pickings! Or out-compete Google or Microsoft which, contrary to what's said in this article, use branches, and probably in all their big projects. Sorry, I don't buy it. I suspect that if you are religiously against branches you will just have to re-invent them by some other name.


> Google do Trunk Based Development and they have 25000 developers and QA automators in that trunk

For some stuff, but

Is Android trunk-only? No.

Is V8 trunk-only? No.

Is Chrome trunk-only? No.

Those have release branches.

---

Some software works with a trunk-only paradigm, some does not. A rule of thumb is whether your customer-facing distribution is SaaS/web-based.

If you're making Google Search, you do it in a trunk. If you're making Google Chrome, you don't.


> Those have release branches.

Some Trunk Based Development teams do "branch for release" https://trunkbaseddevelopment.com/branch-for-release/ and some do "release from trunk" https://trunkbaseddevelopment.com/release-from-trunk/
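In git terms the two styles look roughly like this - a sketch in a throwaway repo, with invented branch/tag names, not prescribed by the site:

```shell
set -e
repo=$(mktemp -d) && cd "$repo" && git init -q
git config user.email dev@example.com && git config user.name dev
echo 1 > app && git add app && git commit -qm "trunk work"

# "branch for release": cut a short-lived branch a few days before shipping;
# only cherry-picked fixes land on it while the trunk moves on
git branch release/1.2

# "release from trunk": no branch at all, just tag the commit you ship
git tag -a v1.3.0 -m "released straight from trunk"

git branch --list 'release/*'   # shows release/1.2
git tag                         # shows v1.3.0
```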

Google gave the numbers themselves - 95% of their devs are in one big trunk (formerly Perforce, but re-written in-house in 2012). It's true that the open-source-facing teams are outside that, and have processes that are less lock-step than what they do in-house.

Android - Samsung, on each release (maybe not anymore), used to check all the composite parts of Android into one big trunk for themselves. Back into their Perforce.

> If you're making Google Chrome, you don't.

There's nothing stopping the Chrome team from doing Trunk Based Development on Github, and accepting unsolicited pull requests like any other team.


You can have long lived release branches in "trunk-based development"?!?

In that case, I guess I haven't ever seen a branching strategy that wasn't "trunk-based." What would that even be?


Anything involving long-lived feature branches especially ones with multiple collaborators, eg git-flow.


If I understand git-flow correctly, it's a trunk (develop) with a release branch (master).



Guess I should have RTFM.

Well written.


It depends on whether the branch ever merges back to the trunk or not.


Developing on master is something I favor, with tagging for releases, though it makes sense to create release branches if you are delivering software packages (that others will install) and not just deploying to .com websites. Hotfixes can then be applied to both the maintenance branch and master.


I think some people are getting confused about terminology between Trunk Based Development[1], Git Flow[2], and GitHub Flow[3]. TBD and GitHub Flow are nearly identical, basically if you use TBD on GitHub you use GitHub Flow. Git Flow (not GitHub Flow) is a model that is all about cutting releases, and supporting bug fixes to releases. Git Flow is nonsensical when doing Continuous Delivery[4] of a web application.

If you are delivering web app code to production every day, it doesn't make sense to cut versioned releases, with for example semantic versioning, instead you just release the code that has passed tests. It also doesn't make sense to have hotfix branches, because there is only one version in production[5], and you release every day, so there is no need to backport a fix to older versions, or do anything special to get a fix out quickly.

Trunk Based Development has branches. If you are working with a team you pretty much never want to commit straight to master. The difference is that in TBD the branches are short-lived and only exist for code review purposes.

This is what TBD would look like with GitHub and GitHub "forks".

    git remote -v
        origin    git@github.com:<your company>/<repo>.git (fetch)
        origin    git@github.com:<your company>/<repo>.git (push)
        <github username>    git@github.com:<github username>/<repo>.git (fetch)
        <github username>    git@github.com:<github username>/<repo>.git (push)
    git checkout master
    git pull # make sure you are starting with the latest master
    git checkout -b my-cool-feature # create a branch for you change
    vim <file>
    git add <file>
    git status # check that all expected code is staged
    git diff HEAD # review your changes
    git commit # write a meaningful commit message
    git fetch # get the latest changes from origin
    git rebase origin/master # put your changes on top of the latest changes from master
    git push <github username> my-cool-feature:my-cool-feature # push to a branch on your "fork"
    # Open a PR to merge from <github username>/my-cool-feature to origin/master
    # Once the PR is approved, use the GitHub UI to merge the PR into master
    git checkout master
    git pull # pull down your code that just merged to master
    git branch -d my-cool-feature # delete your feature branch

[1] https://trunkbaseddevelopment.com/

[2] http://nvie.com/posts/a-successful-git-branching-model/

[3] https://guides.github.com/introduction/flow/

[4] I'm defining CD to mean delivering to production at least once a day.

[5] Unlike on-premises software, with a SaaS product you only have two versions: (1) what is in production, and (2) what is going out to production (in staging, canary, etc.).


This is all good, if you are on git.

Like I said in my other downvoted post... too many TBD zealots look at big companies that use it sans branches, and assume that it is the one true way, instead of understanding the larger picture (branches outside of git suck, these companies don't use git for various reasons)


Many of Google's 25000 Trunk-Based-Developers use Git on their local developer workstations. They choose their own workflow on their own machine, and then cooperate with the Mondrian submission rules when attempting to get their commit(s) through code review and automated tests/lint/findbugs etc.


Yup, as the other comment stated, this is par for the course in modern quality software development. It's actually rather shocking to me to see my comment get downvoted. The quality of the HN audience is declining dramatically. Feels like all we have is a bunch of web agency lifers at this point.


A microcode assembler has been in the top 5 for hours now. I suspect we're ok.

It's tempting to draw general conclusions about HN from specific things you don't like, but that's sample bias.

We detached this subthread from https://news.ycombinator.com/item?id=13514804 and marked it off-topic.


Feature flags are used by essentially every major software company. Hell, the browser you're using to post this has quite a few. A bunch of major software companies use them heavily as well.

You may disagree and think they're terrible, but calling their users just "web agency lifers" is objectively incorrect.


> The quality of the HN audience is declining dramatically

Your account is 30 days old. On what basis are you making this remark?


I didn't know "CD" stood for "continuous denigration".


[flagged]


We detached this subthread from https://news.ycombinator.com/item?id=13514538 and marked it off-topic.


> EDIT: The downvoting system on HN optimizes for older, over the hill developers as is shown here once again.

This is pretty lame. You're both whining about downvotes and calling everyone who disagrees with you old.

I don't know what your concern is with feature flags. Maybe the way you've implemented or used them in the past has been problematic. But when your approach to disagreement is this petty, it makes me think you are unlikely to be able to discuss it meaningfully anyway.


I would say for backend you can get away without feature flags most of the time. But for the frontend, it's almost impossible to ship a new incomplete feature without feature flags


That's actually a very valid point. I was not thinking as much about HTML/CSS as I haven't been involved in a lot of large scale frontend refactors.


> EDIT: The downvoting system on HN optimizes for older, over the hill developers as is shown here once again.

Or that making gross generalizations like yours and speaking in absolutes demonstrates lack of experience and immaturity.


> EDIT: The downvoting system on HN optimizes for older, over the hill developers as is shown here once again.

Oh, so this is what ageism looks like? I see.


> There are a few times when you need to, but features can in most cases be written in a way that they can live inside production before complete.

That should never, ever, ever be done.


Why not? Loads of companies do this. Almost all the big ones I know of do this.



No wonder they call it trunk, it's "branches in SVN are pure suck" elevated to some sort of grand vision. Must have been feeling left out with all the cool kids doing Git.



