As mentioned previously, you can be as lax as needed when adapting TBD to a team's workflow.
We'll go over do's, don'ts, and how-tos to help adjust this approach to software development for each context.
Follow the principles, not the rules
Branch wisely
While the point of TBD is obviously to work on a single main branch, this is an ideal that some teams strive for but that might be out of reach when starting out.
Start by ensuring no branch lives for more than a day; the shorter-lived the branch, the better.
Crucially, remove all long-lived branches that run in parallel to the main one.
Don't branch out of habit; do it if/when you actually need to. A POC might be a good example of a valid reason to branch.
Distrust PRs
While you work your way to a branch-less workflow, PRs are still going to happen.
Some restrictions might be worth considering:
- The pipeline should define the standards for code validity, quality and security. Not the PR.
- Reviews are optional: Reviewers shouldn't prevent code from reaching production; reviews should be requested by the submitting dev if needed (and should be done synchronously if possible).
- Use PRs as tools, not rituals: You might feel more confident blocking all pushes to the main branch and having the pipeline run on merge. This is little more than an implementation detail, and is fine as long as it doesn't interfere with the workflow.
- Keep them small: The easier they are to revert, the better. Prefer multiple small PRs over one big PR per user story.
- Consider whether pair programming is a better alternative than a PR.
In general, try to see PRs as little more than "a thing that happens" semi-automatically when pushing commits.
Stay in sync
The less time you spend away from your main branch, the better.
Constantly ask yourself: Could this be merged?
Doesn't matter if the feature is done or if the bug is fully fixed.
If the answer is yes (as in "tests pass, code compiles and doesn't break prod"), do it.
Even if using branches: merge into master, open a new branch and keep going.
Make this a normal part of your workflow.
PRs and branches should end up feeling more like a chore than anything else.
Deploy whenever
The more, the merrier.
Keep your code deployable while you work, don't break the build, keep your tests green.
Test yourself and your team by deploying at least once a day. See if your code really is "always in a releasable state".
Automatically deploying every commit might be a bit much to begin with, but the closer you get, the faster you gather user feedback and the faster you can make informed decisions.
For the bold and brave: have a pipeline that automatically deploys every evening/morning. You might be surprised how much that can change how you work.
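If you want to try this, a small scheduled job is enough to start. Below is a minimal sketch that checks whether the latest main is green before deploying, using GitHub's combined-status endpoint; the repo name, token handling and `deploy()` body are placeholders for your own setup.

```python
import json
import os
import urllib.request

REPO = "acme/shop"  # hypothetical owner/repo

def main_is_green() -> bool:
    # GitHub's combined-status endpoint reports the latest CI state of a ref.
    url = f"https://api.github.com/repos/{REPO}/commits/main/status"
    req = urllib.request.Request(
        url, headers={"Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["state"] == "success"

def deploy() -> None:
    print("deploying latest main...")  # stand-in for your real deploy step

if __name__ == "__main__":
    if main_is_green():
        deploy()
    else:
        print("main is red, skipping this run")
```

Run it from cron or a scheduled CI job every evening/morning and see what it does to the team's habits.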
Work in small steps
Commit code frequently, multiple times per hour. Doesn't matter if the code is not perfect or if it's a "Work In Progress".
Wrote a test? Commit. Made it pass? Commit. Made it compile? Commit. Refactored a module? Commit.
If it compiles and passes the test suite, it's good to go.
Reverts are easy when working in small increments, trust your VCS, think of commits as checkpoints.
Must-haves
Of course, there are some technical must-haves to make this work. You might not be able to just take your existing codebase and go for a TBD workflow.
Here are some things to consider.
Pipeline
You need a solid, well-maintained, efficient and stable pipeline.
This should be a primary focus of the team: Issues with the setup (build, tests, containers, pipeline, etc.) should be resolved immediately.
Ideally it would take care of building, testing, code analysis, security tests, deploying to production and any other task that can possibly be automated.
The pipeline should be fast and efficient. Builds and tests should run as fast as possible, ideally in parallel.
Only changed code should be built, and only relevant tests should run. Of course this requires enough modularity to make this viable: you can't only run the tests for module A if you expect the changes to affect other parts of the system.
Cache your dependencies and optimize anything that comes to mind. Every minute wasted here will add up really fast.
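To make "only relevant tests" concrete, here is a minimal sketch of change-based test selection, assuming a layout where each package under src/ has a matching folder under tests/; both the layout and the mapping are illustrative, and only sound if modules are genuinely independent.

```python
import subprocess

def changed_files(base: str = "origin/main") -> list[str]:
    # Everything that differs from the main branch.
    out = subprocess.run(
        ["git", "diff", "--name-only", base],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.splitlines()

def affected_test_dirs(files: list[str]) -> set[str]:
    # Map src/<module>/... to tests/<module>.
    modules = {
        f.split("/")[1]
        for f in files
        if f.startswith("src/") and f.count("/") >= 2
    }
    return {f"tests/{m}" for m in modules}

if __name__ == "__main__":
    dirs = affected_test_dirs(changed_files())
    # Fall back to the full suite when nothing maps cleanly.
    subprocess.run(["pytest", *(sorted(dirs) or ["tests/"])], check=True)
```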
Fast builds and tests
You need to have a comprehensive and meaningful suite of automated tests, mostly unit tests with a more selective approach to e2e and integration tests.
These need to be fast and reliable, and the team should trust them enough to consider the code deployable as soon as it passes them. They should be the judge of what is or isn't production ready.
Ideally, building the project and running the tests shouldn't take more than a few minutes from start to finish. If there are tests or builds that take longer than the rest, isolate them.
Fuzz tests, for example, might run after the code is deployed or in parallel to it; slow builds might be avoided for patches that don't involve that specific part of the system.
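One common way to isolate the slow ones is test markers. A sketch with pytest, where `slow` is a custom marker name (you would register it under `markers` in pytest.ini to avoid warnings):

```python
import time
import pytest

def test_parse_config():
    # Typical unit test: milliseconds, gates every push.
    assert "retries=3".split("=") == ["retries", "3"]

@pytest.mark.slow  # custom marker, registered in pytest.ini
def test_full_import_pipeline():
    # Expensive end-to-end-style test: runs in a separate, parallel stage.
    time.sleep(5)  # stand-in for genuinely slow work
```

The gating run then becomes `pytest -m "not slow"`, while an isolated or parallel stage runs `pytest -m slow`.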
Locally reproducible
When following this way of working, breaking trunk slows down the rest of the team.
Since mistakes will inevitably happen, ensure the system can be quickly and fully built and tested locally. This should be done regularly before pushing changes.
Tests that "only work in Jenkins", flaky tests, or slow/complicated builds incentivize devs to run very few checks locally. This might not be out of carelessness: they might trust the pipeline so much that they count on it spotting errors, or they might think it's not a smart use of their time (why bother if the pipeline is going to do the same thing?).
That trust is a good thing, but not good enough to justify slowing everyone else down. Make sure these checks are not a chore, but a quick step one runs without even thinking about it.
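One way to make that effortless is a single entry point that runs the same fast checks the pipeline gates on. A minimal sketch, assuming ruff and pytest; swap in whatever your stack actually uses:

```python
import subprocess
import sys

# The same fast checks the pipeline gates on, runnable with one command.
CHECKS = [
    ["ruff", "check", "."],        # lint (assumes ruff is installed)
    ["pytest", "-m", "not slow"],  # the fast suite from the previous section
]

for cmd in CHECKS:
    if subprocess.run(cmd).returncode != 0:
        sys.exit(f"Local check failed: {' '.join(cmd)}")
print("All local checks passed, safe to push.")
```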
Fine-grained deploys
Ideally, especially with monolithic applications, one wouldn't need to re-deploy the whole thing. Rather, deploys should only involve the parts of the system that have been updated.
This is easy enough when working with microservices (if done right), but can be challenging with monolithic systems.
Modularize your code in a way that allows for partial deploys. Ideally, the selection of which part to deploy would be automatic based on the git diff, but human selection might be just as good, or even better, depending on your system.
If for example the codebase is fragile and changes in one place are bound to affect other places, automatically selecting which piece to deploy might be a bad idea, while a human might have the context needed to make that decision.
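Mirroring the test-selection sketch from earlier, diff-based deploy selection with a human override might look like this; the services/ layout and the `--only` flag are made up for illustration.

```python
import subprocess
import sys

def changed_services(base: str = "origin/main") -> set[str]:
    # Map services/<name>/... paths to deployable units.
    files = subprocess.run(
        ["git", "diff", "--name-only", base],
        capture_output=True, text=True, check=True,
    ).stdout.splitlines()
    return {f.split("/")[1] for f in files if f.startswith("services/")}

if __name__ == "__main__":
    if len(sys.argv) > 2 and sys.argv[1] == "--only":
        targets = set(sys.argv[2].split(","))  # the human override
    else:
        targets = changed_services()
    for service in sorted(targets):
        print(f"deploying {service}...")  # stand-in for your real deploy step
```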
Tips and tricks
When coming from a branch-based workflow, it is likely unclear how exactly to make changes without breaking things.
There are multiple tricks you can use to protect the system from your code:
Feature Flags
A feature flag is a way to hide a functionality or a piece of code unless certain criteria are met.
What those criteria are is up to you and context dependent. Feature flags can be as simple as "only available to user X" and complex enough to require a purpose-built solution just to manage them.
This allows you to easily "turn your code off and on" for one or more users, handle betas, or simply test the code manually in production.
It also has the added benefit of allowing work in progress code to live in production without affecting the application or the users in the slightest.
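A deliberately tiny sketch of the "only available to user X" end of the spectrum; the flag name, users and checkout functions are all made up:

```python
# The flag store: in real systems this lives in config, a database,
# or a dedicated flag-management service.
FLAGS = {
    "new_checkout_flow": {"enabled_for": {"alice@example.com"}},
}

def is_enabled(flag: str, user: str) -> bool:
    entry = FLAGS.get(flag)
    return entry is not None and user in entry["enabled_for"]

def new_checkout(user: str) -> str:
    return f"new checkout for {user}"     # work in progress, live but hidden

def legacy_checkout(user: str) -> str:
    return f"legacy checkout for {user}"  # what everyone else still sees

def checkout(user: str) -> str:
    if is_enabled("new_checkout_flow", user):
        return new_checkout(user)
    return legacy_checkout(user)

print(checkout("alice@example.com"))  # new checkout for alice@example.com
print(checkout("bob@example.com"))    # legacy checkout for bob@example.com
```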
On top of that, it can pave the way for A/B testing.
This can be enough of a reason to implement feature flags on its own.
As you can imagine, there is much more to feature flags. You can learn more here.
Branch by abstraction
When making changes to a piece of code that other developers or teams depend on, branching from that code by abstracting its API is very helpful.
To use a simple example: if changes need to be made in `function_foo()` but someone else is using it, extracting N functions from it can allow for easy swapping of the parts that need work or a new implementation, without having to go for more invasive approaches, like changing the usages of the original function to another `wip_function_foo()`.
This might seem needlessly complex for functions, classes or interfaces/traits, but in complex systems and/or big enough changes we might be talking about whole modules. Even changing a function signature might entail an unmanageable number of merge conflicts.
You can think of this as a type of Parallel Change, although on top of allowing you to keep the tests passing, it allows other team members to keep working uninterrupted.
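In miniature, and reusing the `function_foo()` example, the seam might look like this; the tax logic and the toggle are purely illustrative:

```python
USE_NEW_TAX_RULES = False  # flip (or drive via a feature flag) when ready

def _tax_old(amount: float) -> float:
    return amount * 0.20  # existing behaviour, still serving every caller

def _tax_new(amount: float) -> float:
    return amount * 0.19  # the new implementation, merged but dormant

def function_foo(amount: float) -> float:
    # The abstraction seam: callers keep calling function_foo() unchanged
    # while the implementation behind it is swapped out.
    tax = _tax_new if USE_NEW_TAX_RULES else _tax_old
    return amount + tax(amount)

print(function_foo(100.0))  # 120.0 until the flag flips
```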
Dark Launches & Canary Releases
Dark launches simply refer to releases that are hidden and only made visible/usable for a subset of users.
Similarly, canary releases are only meant to go out to a select group of users.
The former is used when all users necessarily run the same version of the software (a web app, SaaS, etc.), while the latter might make more sense in the opposite case (a phone or desktop app). Both have the same purpose and hold the same value.
The point here is to only give access to the feature to a predefined group of trusted users (or a small percentage of the total user base), usually with the help of feature flags.
By doing so, you can see a feature in action (not only in production but in use by actual users), gather feedback, evaluate how it performs and decide if a full-scale release makes sense or more work needs to be done.
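A common way to pick that stable "small percentage" is to hash the user id, so the same users stay in the cohort across requests without storing any state. A sketch, with an illustrative flag name and threshold:

```python
import hashlib

def in_canary(user_id: str, flag: str = "new_search", percent: int = 5) -> bool:
    # Hashing keeps the cohort stable: the same user gets the same answer
    # on every request, and different flags get different cohorts.
    digest = hashlib.sha256(f"{flag}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % 100 < percent

print(in_canary("user-42"))  # roughly 5 in 100 users land in the canary
```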