Conventional wisdom can go a long way and is often a useful guide. However, it is usually best served with a healthy dose of skepticism and scrutiny.
Here, we explore the good, the bad, and the ugly behind some common clichés I see floating around.
TDD
Probably the most misunderstood of the bunch.
Requires clairvoyance
A simple (but unfortunately common) reading of TDD leads some to believe that all tests need to be written before any of the relevant production code is.
This would require developers to plan everything ahead on some sort of whiteboard, which of course is usually a bit silly, since we learn about a problem domain as we build software around it.
Does it though?
Let's take a module or a class, for example: what part of TDD dictates that all tests for that module should be written as step 0?
Is it not TDD if I write a test for one of the functions, write that function and keep going bit by bit?
TDD is not a way of planning out your code with boxes on a whiteboard. Quite the contrary: it incentivizes you to design as you go, thinking about the public-facing design of your code even before you think about the code itself.
Its usefulness comes in part from the fact that it helps you detect and resolve unknowns before you write a piece of code, rather than after, when they can cause real issues.
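The loop on a single function might look like this. A minimal sketch; `slugify` and its behavior are hypothetical, not taken from any particular codebase:

```python
import unittest


# Step 1: write the test for the next small piece of behavior.
class TestSlugify(unittest.TestCase):
    def test_lowercases_and_hyphenates(self):
        self.assertEqual(slugify("Hello World"), "hello-world")


# Step 2: write just enough code to make it pass, then repeat the
# loop for the next behavior (trimming, punctuation, and so on).
def slugify(title):
    return title.strip().lower().replace(" ", "-")
```

The point is the rhythm: one test, one bit of code, repeat — not a full test suite written as step 0.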
Testing
Makes no sense in an MVP
An MVP needs to be built fast and only has to prototype the behavior of the system in a narrow scope.
Edge cases are often (purposefully) overlooked, so testing would slow down the process without adding much.
Who cares? It's just an MVP anyway; it will get rewritten if it works.
MVPs are forever
MVPs often turn into the (nearly incomprehensible) core of legacy projects that have to be maintained 20 years down the line. There is always "not enough time" to rewrite them.
What would you rather do: write tests as part of the process, or convince the business team to halt new development for a while to test what has already been "proven" to work in production?
You don't need 100% test coverage here (or ever). Just make sure your code is testable to begin with. Further testing can come later down the line.
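One cheap way to keep MVP code testable is to keep decision logic out of I/O. A hypothetical sketch; all names are illustrative:

```python
def apply_discount(total, is_returning_customer):
    # Pure pricing rule: no network, no database, trivially
    # testable whenever the team finally gets around to it.
    return round(total * 0.9, 2) if is_returning_customer else total


def checkout(cart, is_returning_customer, payment_gateway):
    # Thin I/O wrapper: the only part that would ever need a mock.
    total = sum(item["price"] for item in cart)
    return payment_gateway.charge(apply_discount(total, is_returning_customer))
```

No tests were written here, but the day someone wants them, the pure function costs one line per case.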
CI/CD
Means having a pipeline
It's not uncommon to come across teams that think having a pipeline as part of their workflow qualifies as doing CI/CD.
Somehow, the principle and the tool got mushed together.
You could make do without a pipeline
Assuming you wanted to make this as inefficient and unreliable as possible, you could argue that Continuous Integration can be achieved without a pipeline.
After all, it's about integrating your code with everybody else's as frequently as reasonably possible: just push to master, see what happens.
You could also pay some poor soul to continuously hit the big red "Deploy" button all day long; no need for a pipeline.
The point of having a pipeline is to act as the gatekeeper for code quality, reliability, performance, stability, etc. when sending it to production.
This is the only sane way I can think of doing CI/CD, but pipelines are a tool, not a set of practices.
You can have the most over-engineered pipeline in the world, but if you only merge branches and deploy once a week… well, I have bad news for you.
Automated Deployment is Continuous Deployment
Some think of CD as the absence of manual deployments.
Like with pipelines, automated deployments are the only sane way to do CD.
But if you really want to suffer, nobody is stopping you from manually producing and uploading all required artifacts by hand.
Don't miss the forest for the trees.
Clean Code
Horrible performance
The âcleanâ code rules were developed because someone thought they would produce more maintainable codebases.
Even if that were true, you'd have to ask, "At what cost?"
Casey Muratori
Let's not dwell on the fact that the generation of developers who basically founded this industry is hardly "someone", and focus instead on the claim regarding the cost.
Nowhere in the book does the author advocate for clean code at all costs.
Quite the contrary, it is mentioned several times that code efficiency must be taken into account:
I will avoid [the more efficient solution] if the [efficiency] cost is small.
The claim about performance is either missing the point entirely, based on a woefully misguided reading of the book, or simply clickbait.
No one is advocating for a complete disregard for performance.
Rather, it is suggested that you should write your code with other people in mind.
Eventually, someone will have to maintain your code: make sure there is a good reason to make it hard to work with.
The reality is that in a lot (if not most) contexts, performance is far down the list of things to worry about.
In most cases, the real world, practical performance differences between clean code and whatever the alternative is (performance-focused code?) are far outweighed by the maintainability of the former.
As with most things: itâs a trade-off.
Your code doesn't need to be textbook-clean, but you should aspire to keep it reasonably clean given the circumstances.
It's in the eye of the beholder
One might argue that what's clean to one person might not be clean to another.
Advanced programmers might find it easy to read code that seems incomprehensible to beginners. Each language has different idioms.
This might indicate that clean code is too relative a thing to be of any help.
It isn't always
On the one hand, yes: what is or isn't clean/readable depends on the context.
Your team might be used to working with 20000+ LOC files. They might find this normal and desirable.
This is not the end of the world: if it works for the team then itâs all good.
On the other hand, not everything is relative.
Calling a variable x is objectively less clear than giving it a decently descriptive name.
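A contrived sketch of the difference (both functions compute the same thing; the names are invented for illustration):

```python
# Tells the reader nothing:
def f(p, r):
    return p + p * r


# Same computation, self-explanatory:
def total_with_tax(net_price, tax_rate):
    return net_price + net_price * tax_rate
```

The compiler doesn't care, but the person debugging this at 3 a.m. does.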
Writing your code in a way that can be understood by a beginner, someone coming from a different background/language, or your future self, has clear and obvious advantages.
Using your language's latest super-fancy, concise, ultra-functional gimmick is often less about clean code and more about showing off.
There is a fine line between taking advantage of a given languageâs features, and gate-keeping the codebase to only those âsmart/versed enoughâ to follow it.
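A contrived example of that fine line (both functions behave identically; names are mine):

```python
from functools import reduce


# Every feature the language offers, in one breath:
def word_counts_clever(text):
    return reduce(lambda acc, w: {**acc, w: acc.get(w, 0) + 1}, text.split(), {})


# Boring, and readable by anyone on the team:
def word_counts_plain(text):
    counts = {}
    for word in text.split():
        counts[word] = counts.get(word, 0) + 1
    return counts
```

Concise is only a virtue when it doesn't tax the next reader.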
Clean code doesn't look the same in all projects/teams/contexts, but that's not an excuse to disregard best practices and write code however you want.
Linux
Is free if you donât value your time
Some find Linux (Desktop) to require too much time to set up/configure/maintain to be worth the effort.
To them, all possible gains from using Linux are offset by the amount of time and attention it requires.
In fairness, Linux can be as much of a time sink as you want it to be. That being said, nothing prevents the use of a ready-to-use distribution: these come already set up and configured, and require little to no maintenance.
Things don't actually break for no reason (except when updating Windows).
Since this is quite obvious, a more charitable reading of the claim might be something like: "The amount of stuff you have to learn is not worth the effort."
Is it really not worth it?
Learning a new anything, a new OS in this case, implies… a learning process, which requires time and might be frustrating.
Linux being FOSS adds to the amount of learning required, since most of us come from using proprietary software, and things are quite different here.
If you are not interested in learning a new skill set… don't. Stick to what you already know.
If instead you are, this can be a lot of fun.
And if you are a developer, I cannot stress enough how many times I have been able to solve a problem or help a co-worker just by virtue of having a deeper understanding of how an OS actually functions.
If you come with an open mind, this stuff makes you a better dev. For free.
Breaks all the time
Some consider Linux Desktop to be unstable.
I'm still not sure what makes it a reliable server, but an unstable client. Maybe someone will point it out eventually.
Still less of a headache
I've personally been running a rolling-release distribution, widely considered unstable and breakage-prone, for more than 4 years at the time of writing.
In this time, it broke twice:
- The first one as a consequence of me doing things I didn't understand as sudo.
- The other one due to a combination of my being silly and upstream changes (a borked update that should have been easy to recover from).
In contrast, I've lost count of how many times I've had to "unfuck" my Windows machine after an update.
macOS has been less of a headache in that regard, but in that land you either do it "The Apple Way" or you don't. I like it my way, thank you very much.
There is a kernel of truth though: Linux allows the user to break things, while the alternatives usually limit what the user can do so much that the only reasonable way something breaks is if they (Microsoft, Apple) break it.
This is easier on the user's ego because it makes them inherently blameless. The system likely "breaks less" because the user is out of the equation.
Plus, what's so terrible about breaking things? It's fun, and the best way to learn!
Just have a backup and you'll be fine.