Write secure software


Bolting security onto a system as an afterthought is about as effective as retroactively adding tests to meet coverage goals, which is to say, not very.

Testing a system that wasn’t designed for testability is a huge pain and often limits both the scope and method of testing.
Similarly, securing a system that wasn’t built with security in mind can feel like using a sledgehammer to crack a nut.

What follows is a non-comprehensive list of heuristics to ensure a system is built with security at its core.

Minimize complexity

Unnecessary complexity is the enemy of good software.
It is also the enemy of secure software.
The more complex the system, the larger its attack surface and the more nooks and crannies that can be exploited.

Complex UIs are more likely to produce invalid states; keep them simple.
The more user flows your app has, the harder it becomes to keep track of all of them.

Think of each software integration, SaaS product or dependency as a possible security risk; each supported platform is a new Pandora’s box waiting to be opened.
This doesn’t mean these things should be avoided altogether, just keep in mind that they imply risk and evaluate whether it’s worth taking.

Software minimalism is a bit of a meme, but it is true that having less (fewer features, fewer versions, a smaller footprint, etc.) is a surefire way to minimize the opportunities for attack.

Clearly define boundaries

It’s surprising how often complex systems don’t have clearly defined points of entry.
This often leads to incomplete or inconsistent input validation.

Defining which pieces of your system will communicate with external systems (including but not limited to The Users) is a key step to securing it. This determines what exactly you are securing and where to focus your efforts.

Another team’s microservice sending incorrect input to your part of the system is an issue, but usually benign and solvable with a Slack message.
A public API receiving incorrect input might be user error, but might also be something else.

This doesn’t mean internal services can disregard security, but security requirements vary based on what it is you are securing.
Defining where a system ends and another one starts is key to detecting what parts need more or less security.
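
As an illustration, here is a minimal sketch (in Python, with made-up names) of what a clearly defined entry point can look like: untrusted input is parsed and validated once, at the boundary, and everything behind that point only ever deals with already-validated types.

```python
# Hypothetical example: validate at the boundary, trust the types inside.
import re
from dataclasses import dataclass

EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

@dataclass(frozen=True)
class NewUser:
    """Only constructed from input that has already been validated."""
    email: str
    display_name: str

def parse_new_user(raw: dict) -> NewUser:
    """Boundary code: treats `raw` as hostile and validates everything."""
    email = str(raw.get("email", "")).strip()
    name = str(raw.get("display_name", "")).strip()
    if not EMAIL_RE.match(email):
        raise ValueError("invalid email")
    if not (1 <= len(name) <= 64):
        raise ValueError("display_name must be 1-64 characters")
    return NewUser(email=email, display_name=name)

def register_user(user: NewUser) -> None:
    """Internal code: only ever receives an already-validated NewUser."""
    ...  # persist the user, send a welcome email, etc.
```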

What are you defending against?

This doesn’t need to be a full threat model for the whole organization, but it is useful to ask certain questions.

Is the business B2C or B2B? Is the API public or for paid users only? What kind of data is being stored? Is the government involved somewhere? What relationship does the business have with its clients? Are possible competitors also using the software?

You don’t need detailed answers to all of these questions, but the more information/context you have, the better you can define security requirements.

You might have none of the answers you need to make an informed decision.
First: Really? How do you have no context about the software you work on?
In any case, you can look at the most critical, well-known security risks (the OWASP Top Ten, for example) for a good baseline.

Defending against everything often ends up defending against nothing. Defining what you are defending against is key.

Restrict access

Both at the user and at the code level.

Users should not see parts of the system they shouldn’t interact with.
Giving a user a big red “DELETE ALL” button and telling them not to use it is like handing a child a crayon and expecting them not to draw on the walls.

Making internal functionality available for others to use (think making a function public instead of private) and expecting them not to use it is equally naive.
This goes back to the previous point about defining boundaries: limit the ways a piece of code can be interfaced with.

Grant only as much access and visibility as is needed. No more, no less.
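
As a rough sketch of the code level (the names and the Role enum are hypothetical): keep the dangerous helper private to the module, expose a single entry point, and make that entry point check authorization before doing anything. At the user level the same idea applies: simply don’t render the big red button for users who shouldn’t press it.

```python
# Hypothetical sketch of least privilege at the code level.
from enum import Enum

class Role(Enum):
    VIEWER = "viewer"
    ADMIN = "admin"

def _purge_all_records(db) -> None:
    """Leading underscore: internal helper, not part of the module's public API."""
    db.execute("DELETE FROM records")

def purge_all_records(db, role: Role) -> None:
    """The only exposed entry point, and it checks authorization first."""
    if role is not Role.ADMIN:
        raise PermissionError("only admins may purge records")
    _purge_all_records(db)
```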

Whitelisting > Blacklisting

Ideally, one would prefer the former over the latter.
This is kind of what is usually done with admin users: only these ‘whitelisted’ users have access to certain things.

The same goes for IPs: don’t wait for a DoS attack to start blacklisting addresses; block everything except the ones registered by your users/clients.

Of course, this isn’t always possible, as is the case with public-facing APIs or services.
Blacklists should not be avoided, but whitelists should be preferred when possible.
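
As a sketch of the whitelist approach (the networks below are placeholders; in practice this check usually lives in a firewall, load balancer or middleware rather than application code):

```python
# Hypothetical allowlist check using Python's standard ipaddress module.
from ipaddress import ip_address, ip_network

ALLOWED_NETWORKS = [
    ip_network("10.0.0.0/8"),      # internal services
    ip_network("203.0.113.0/24"),  # a client's registered range (placeholder)
]

def is_allowed(client_ip: str) -> bool:
    """Default deny: anything not explicitly listed is rejected."""
    addr = ip_address(client_ip)
    return any(addr in net for net in ALLOWED_NETWORKS)
```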

Create alarms

You’d be surprised how often inappropriate use is discovered while digging through logs to squash a bug.

Consider creating alarms for unexpected execution flows. Of course, handle the error in the code, but also give it a thought: Could this behavior suggest more than a user error?

An IP address constantly being rate limited should not go unnoticed.
A delete operation performed 1200 times in 1 minute in the main production database is likely more than a bug (and even if it is, you probably still want to be notified ASAP).

Set up a useful logging system, have alarms to cover edge cases and/or critical user flows (logins, payments, etc.).
This will help minimize the time it takes for a possible attack to be detected and dealt with.
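
As a minimal sketch of the idea (the threshold and the notify_on_call helper are made up; substitute whatever alerting tool you actually use):

```python
# Hypothetical sketch: don't just handle the error, alert when it looks like abuse.
import logging
from collections import Counter

log = logging.getLogger("security")
rate_limit_hits = Counter()
ALERT_THRESHOLD = 100  # hits per window; tune to your traffic

def on_rate_limited(client_ip: str) -> None:
    rate_limit_hits[client_ip] += 1
    log.warning("rate limited: %s", client_ip)
    if rate_limit_hits[client_ip] == ALERT_THRESHOLD:
        # A real implementation would reset the counter per time window.
        notify_on_call(f"{client_ip} hit the rate limit {ALERT_THRESHOLD} times")

def notify_on_call(message: str) -> None:
    ...  # placeholder: page a human, post to a channel, open an incident
```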

Test for security

Pentesting is great, but often time-consuming and expensive.
While writing security-driven tests is no substitute for this, it can alleviate some of the work.

Write your security requirements as tests, just like you would write an acceptance test for a use case.
Use fuzz testing to discover unwanted behavior.

If you fix a security issue, write a regression test to ensure it doesn’t happen in the future.
Or even better, use TDD to fix it.
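
As a sketch of both ideas, here is a security requirement written as a plain test plus a small fuzz test using the Hypothesis library. The fixtures (client, viewer_token) and the import path are hypothetical; parse_new_user refers back to the boundary sketch earlier in this post.

```python
# Hypothetical security tests: an authorization rule and a fuzzed parser.
from hypothesis import given, strategies as st

from myapp.boundary import parse_new_user  # made-up module path for the earlier sketch

def test_non_admin_cannot_purge_records(client, viewer_token):
    """Security requirement as a test: a viewer must never be able to purge."""
    response = client.delete("/records", headers={"Authorization": viewer_token})
    assert response.status_code == 403

@given(st.text())
def test_parser_never_crashes_on_arbitrary_input(raw):
    """Fuzzing: the parser may reject garbage, but must never blow up."""
    try:
        parse_new_user({"email": raw, "display_name": raw})
    except ValueError:
        pass  # rejecting bad input is fine; an unhandled crash is not
```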

Organizational aspects

While a lot can be done as a dev, ensuring the whole organization is aligned on security practices is key.
Here are some things worth considering.

Involve your business team

Security is always an implicit requirement, but often not an explicit one.
Users might not ask for it, PMs might not add it to the board, but they do expect the system to be secure.

Make sure everyone involved is aware of this.
Resources should be allocated to security.

Have a plan

Don’t assume that nothing bad can happen, nor that you will be able to recover from it.

Have a reliable backup system, some sort of contingency plan: When something goes wrong, you should be able to recover from it.

Ensure you can roll back the state of your software: When a vulnerability is introduced, you should be able to quickly roll back even before you start fixing it.

Audit your system

Reacting quickly is key, but preventing security issues can be far more cost-effective.

Consider having a bounty program, call a pentester once in a while.
Even better, learn the basics of pentesting yourself!

Remove trust from the equation

People, particularly non-technical ones, often think that proprietary software is more secure by virtue of being opaque.
After all, if I can see how it’s made I can see its weaknesses, right?

This is “security by obscurity” and, while it can make sense in some contexts, it is generally not recommended (especially not on its own).

A secure system is not one that is only safe as long as the attacker can’t see how it works, but one that remains safe even when they can.

Either outsource security to a third party based on the guarantees it offers (and its SLA), or prefer open standards and software.

The chance of auditable, open and widely used protocols and/or software being insecure is slim: everybody is using them, depending on them and auditing them. Plus, issues with these systems are, by nature, public as soon as they surface, and there is a vast pool of talent willing and invested in fixing those vulnerabilities if/when they occur.

