A different approach to AI code assistants


AI code generation might usher in a future where TDD becomes a strict requirement for software development. In that future, code quality will only be relevant for tests.

Let’s do a thought experiment.

Hallucinations

AI-generated code kinda sucks.

It’s often messy, buggy, over-complicated, or just plain incorrect. You need to be super careful with it and double-check every line it produces. Troubleshooting bugs in generated code is a huge pain.

This is a shame because the efficiency is undeniable: I doubt the average dev can produce code anywhere near as fast as your Copilot/GPT thing can.

Of course, speed is not everything, but we all want to be more productive.

How can we take advantage of the AI’s ability to generate code quickly while ensuring the code actually works, and does so as expected/required?

If only there was an approach to software development that could guarantee the behavior of a piece of code…

Testing

What if we add AI to the TDD cycle?

I write a little test, the AI produces code to make it pass.

I write another little test, the AI makes both pass, either adapting the current code or generating new code.

This way, there’s at least one thing we could always be certain of: the generated code makes the tests pass.
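To make that concrete, here’s a minimal sketch of the human-written side of that cycle. The names are hypothetical (a `cart` module that the AI generates and regenerates); the human only ever touches this test file:

```python
# test_cart.py -- the only file a human writes or reads in this workflow.
# `cart` is a hypothetical module: the AI produces whatever makes these pass.
from cart import Cart


def test_new_cart_is_empty():
    # First little test: ask the AI for code that makes this pass.
    assert Cart().total() == 0


def test_total_sums_item_prices():
    # Second little test: the AI adapts (or rewrites) `cart` so both pass.
    cart = Cart()
    cart.add(price=300)
    cart.add(price=250)
    assert cart.total() == 550
```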

Given enough quality tests, this would allow us to ensure the system’s behavior.

This is not far from conventional TDD: you just don’t write (or read) the production code. You treat it as a black box and focus only on the tests.

Evolution

Take two equally capable developers: Peter and John.
Similar experience, similar skills.

Peter uses the previously described TDD+AI approach while John doesn’t.

By only writing tests, Peter would produce working software that is guaranteed to behave as expected (insofar as the tests correctly describe that behavior). This obviously takes less time than the alternative: he is only writing tests.

John faces a difficult choice: either use AI and embrace its quirks, or avoid it altogether. While the former might seem faster at first, he could soon find himself spending more time fiddling with generated code than he would have spent writing it himself.

Avoiding AI altogether would be significantly slower than Peter’s approach, with the possible exception of not writing tests at all, which wouldn’t be a fair comparison and has plenty of other downsides.

Which one seems more productive/employable? Remember, we are considering two equally capable devs.

Neglect the black box

As you might imagine, the proposed approach would imply a near-total neglect of production code.

This is a big jump from our current notions of clean code or maintainability.

If you can produce code in a matter of seconds, does it really matter if it’s easy to understand and modify? Wouldn’t you just tell the AI to rewrite the thing if your requirements change?

Remember, you’ll have your test suite to ensure no current behavior is lost. Just add more tests for the new behavior and the bugs you need to fix.
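In practice (again with the hypothetical `cart` module from before), a bug report or a new requirement would just become one more test before asking the AI to regenerate the code:

```python
# A reported bug is reproduced as a test; the AI then regenerates `cart`
# until the whole suite (old behavior plus this new case) is green.
from cart import Cart


def test_removing_an_item_updates_the_total():
    cart = Cart()
    cart.add(price=300)
    cart.add(price=250)
    cart.remove(price=300)
    assert cart.total() == 250
```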

We wouldn’t be spending much time (if any) on production code at all: that would be the AI’s territory.

We would mostly work with the tests, using them to ensure the AI behaves correctly and doesn’t make stuff up.

This doesn’t make clean code or maintainability obsolete. Rather, those concepts will find their place within the tests. They were meant for us humans anyway, not for the machine. If we focus our efforts on the tests, they’ll come along for the ride.

Grim future? Depends on how you feel about TDD I guess ¯\_(ツ)_/¯.

