Part 4: Low-Debt Development

The purpose of this series is to encourage a subtle shift in how secondary work is prioritized on modern software projects. Rather than working under the assumption that testing and documentation are the biggest indicators of software quality, prioritize technical debt and treat both testing and documentation as part of that broader definition. Testing is a significant investment that requires a fair amount of effort before it earns a positive ROI, and it can provide a false sense of security when all of those tests go green.

With that said, test-driven development (TDD) can work well for certain types of projects, especially those that are highly algorithmic and conditional and have minimal side effects. A lot of projects, however, aren't like that, and probably don't need or benefit from such strict testing. Instead, I'd advocate for low-debt development (LDD): ensure that for every step forward you don't take two back, and get the business to sign off on addressing technical debt as the primary secondary activity for the engineering team.

If the business is able to minimize communication time needed with engineers in order to maximize development hours, and engineers are encouraged and empowered to address technical debt regularly, they can maximize their effectiveness both now and in the long-term, without being weighed down by significant communication overhead or dogmatic best practices that aren't adding value.

So what is low-debt development? It's an emphasis on code quality and longevity, which may or may not include common practices such as extensive developer documentation, code formatting/linting, or code coverage targets. It is the judgment required to identify the unique technical needs and requirements of a project, as well as the business goals and constraints, and to balance them with a strategic game plan for maximizing the ROI on secondary developer activities.

An Example

I've built a lot of REST APIs, and without fail they all have the same types of CRUD operations (Create, Read, Update & Delete). You create a bunch of models, basically just database tables and columns, and add behavior to create and modify the records of those tables. It gets more complicated when you add in filtering, authorization, business rules and additional data sources, but they all have these general requirements.

If you work under the assumption that the API will be well-tested and well-documented, that means you'll spend a ton of time writing Swagger documentation and unit tests. As you start to write unit tests, though, you'll realize you need to mock your database calls, because database calls are the majority of what the code does. After a while you realize all your tests really verify is that you make a database call, but the actual database call never happens. What are you even testing then?
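To make the tautology concrete, here's a minimal sketch of such a test. The `UserService` class and its `fetch_one` database method are hypothetical names for illustration; they're not from any particular framework.

```python
from unittest.mock import MagicMock

# Hypothetical service layer: a thin wrapper around a database call.
class UserService:
    def __init__(self, db):
        self.db = db

    def get_user(self, user_id):
        return self.db.fetch_one("SELECT * FROM users WHERE id = %s", user_id)

def test_get_user():
    db = MagicMock()
    db.fetch_one.return_value = {"id": 1, "name": "Ada"}

    service = UserService(db)
    user = service.get_user(1)

    # All this asserts is that we called the mock, and that we got back
    # the exact value we just told the mock to return. No SQL runs,
    # no schema is checked, no real behavior is exercised.
    db.fetch_one.assert_called_once_with("SELECT * FROM users WHERE id = %s", 1)
    assert user == {"id": 1, "name": "Ada"}
```

The test passes, coverage goes up, and yet the one thing that could actually break in production, the real query against the real schema, is never executed.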

As you add 20 entities, each with the same 4 operations (list, create, update, delete), copying the same code over and over, adding the same test and database mock over and over, you start to wonder if there's a better way. Aside from being error-prone, as soon as you want to make a change that applies to all operations of the same type (like adding pagination to lists), you realize you have to update each list operation in each of the 20 entities. And when you're done, you have to go add new tests as well.

By just jumping in and delivering the specific feature, complete with testing and documentation, you've done your job, but you've also done the project a disservice. At no point was the long-term success of the project considered or the proper architecture applied. Rather than acknowledging and prioritizing technical debt, countless hours are spent adding the same operations for new entities over and over again.

The project lacked a technical priority for secondary activity, so testing and documentation were used by default. That effort should have been spent looking at the specific project or application and identifying what will be the biggest pain point and challenge in scaling it. For an API, that typically means repetitive operations if it's a database-heavy application, or managing integrations if it's third-party-heavy. Instead of writing 4 operations for each of 20 entities (80 different operations), create an abstraction that minimizes code repetition while remaining flexible and making sweeping changes easy; this will increase reliability and improve velocity for the team and organization. It's far more valuable than mocking database calls, at least in this particular instance, and it also simplifies testing, because now you can focus on testing these core abstractions.
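One possible shape of that abstraction, sketched with an in-memory store for brevity. The `Resource` class and the registry are hypothetical names; a real version would sit on top of your database layer, but the point is the same: each operation is written once, and adding pagination to every list endpoint is a one-line change in one place.

```python
# A minimal sketch: one generic resource class instead of hand-writing
# list/create/update/delete per entity. In-memory storage stands in for
# the database to keep the example self-contained.
class Resource:
    def __init__(self, name):
        self.name = name
        self._rows = {}
        self._next_id = 1

    def list(self, page=1, page_size=10):
        # Pagination lives here once, for every entity at once.
        rows = sorted(self._rows.values(), key=lambda r: r["id"])
        start = (page - 1) * page_size
        return rows[start:start + page_size]

    def create(self, data):
        row = {"id": self._next_id, **data}
        self._rows[self._next_id] = row
        self._next_id += 1
        return row

    def update(self, row_id, data):
        self._rows[row_id].update(data)
        return self._rows[row_id]

    def delete(self, row_id):
        return self._rows.pop(row_id)

# 20 entities become 20 one-line registrations instead of 80 handlers.
registry = {name: Resource(name) for name in ("users", "orders", "products")}
```

A test suite now targets `Resource` itself rather than 80 near-identical handlers, and a sweeping change like filtering or soft deletes is made and tested exactly once.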

Conclusion

Testing software has always been a bit of a polarizing topic. There are plenty who swear by TDD, others not so much. My opinion is somewhere in the middle: it really depends on the team, project and business priorities. Testing can be an asset or it can be an expense. There are so many different ways to test an application or product that, if you decide to test, you also need to know where to draw the line and which types of tests to prioritize. If performance is a concern, focus on load testing. If your app has complex logic, unit testing may be a great fit. If you're in e-commerce, code-level testing may not be the right approach, but end-to-end UI testing combined with A/B testing, feature flagging and deep analytics might be.

More importantly, don't let technical debt sink you. Engineers are under tons of pressure to ship working features, and can often only do so consistently by ignoring or deferring technical matters that will someday need to be addressed. The reality is that eventually we all have to pay the piper, and that could mean throwing away a codebase you've invested millions of dollars in.