There is always some technical debt. Sometimes it’s purely technical improvements: reducing the number of production incidents, improving performance, speeding up development, lowering maintenance costs, or upgrading libraries. Sometimes it’s just your professional pride that refuses to accept the messy implementation staring at you.

And then the idea comes to sprint planning, where it quickly gets descoped in favor of product items. Sound familiar?

So what approaches can you and your team take?


Three common approaches

Based on my experience, developers usually choose one of three approaches:

1. Sneak it into product work.

They silently add refactoring to some product-related task. Does it work? Sometimes, for smaller things. But if you’ve just upgraded your database to the latest version, you can hardly hide it from QAs or infra. And if you rewrite that scary module everyone depends on but nobody dares to touch, the original product task suddenly grows into a monster.

2. Do it after hours.

They work nights or weekends. Some greedy CEOs may like this approach, but we all know it’s wrong. Developers aren’t fresh on Monday, testing is still needed, and convincing your family that fixing an algorithm on Saturday morning is a good idea doesn’t mean you’ll convince your QA. Worse: if your CEO is really mean, you just set the expectation you’ll work weekends even on product items.

3. The famous “20% time.”

Companies often give developers 20% of their time to address tech debt (sometimes it’s called a cool-down sprint). But what are you really allowing them to do?

I’ve tried this in multiple companies, across frontend, database, middleware, and platform teams. We started with: you have Fridays to work on whatever you want. Nothing meaningful ever came back. One day is not enough for big things, so devs focused only on tiny items.

Later we changed it to: you have 20% to work on tech debt. Developers started picking small pain points, but again avoided big impactful things—it just took too long.

At one point, I created a virtual “tech debt SWAT team.” Each frontend team nominated one developer per sprint, and they rotated. At first: big success! We finally tackled bigger items. A few months later: same frustration. The SWAT team fixed all the small and medium issues, but when only massive, long-term items were left, no one wanted to volunteer anymore.


The real problem

So what’s the right approach?

We need to return to the root cause: PMs don’t want to prioritize tech debt. They already spent ages defining their product backlog, and they genuinely believe their items are more impactful. Even if they understand the technical need, they don’t see why it’s more important than their features.

And that’s because technical backlog items often lack clear value.

Strange, right? Because when you ask developers, or hear them in 1:1s or coffee chats, they say: We absolutely need to do this!!!

Great! Then let’s write it down. And that’s where it usually fails—developers either say “it’s obvious!” or they give up.


Our approach at CloudTalk

At CloudTalk, we don’t keep a separate technical backlog. We have one product backlog. The rule is simple: every epic or story must clearly describe its product impact.

  • Database performance issues? Fixing them reduces customer frustration and lowers churn.
  • New microservice for shared records? It speeds up development by 20% every sprint afterwards.
  • Rewriting the call routing engine? Expensive, but reduces customer tickets in that area by 30% over the next six months.

The key is to step back and ask: what problem are we actually solving?

Upgrading the database? Why? Because of performance? Fine. Then let’s specify which performance issues we face.

Yes, it takes training, coaching, and experimenting. Managers need to sit with developers to guide them. Whoever approves the roadmap will push back the first and second time, but after a few iterations, writing clear business value becomes second nature.

One of the more interesting recent examples was our queue load balancing mechanism. It often caused service restarts. Because of asynchronous processing, there was no direct customer impact at first glance. My developers came to me saying “it fails often, we need to fix it.” But since it recovered automatically, it didn’t seem to generate much extra work, right?

Here, it’s important to be careful: developers must not feel that managers ignore failures or treat them as unimportant. The right response is to dig deeper: if it’s failing, does it leave any visible impact elsewhere?

Fast forward: we discovered the connection. Some customer-reported bugs were caused by data not syncing as quickly as expected. The team had to manually reshuffle queues, which added real work and threatened customer satisfaction. We ended up with a clear justification: fixing the load balancing issue would save the team ~20 hours per sprint of manual handling. The fix required two sprints but paid off in less than two months, with measurable improvement in customer satisfaction. We even worked with the PM to help define the expected impact.
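The payback reasoning above can be sketched as a quick back-of-the-envelope calculation. A minimal sketch, assuming two-week sprints and an assumed fix effort of ~80 developer-hours (the only figure taken from the example is the ~20 hours per sprint of manual handling; everything else is an illustrative assumption):

```python
# Rough payback estimate for a tech-debt fix.
# hours_saved_per_sprint comes from the example above;
# the other inputs are illustrative assumptions, not measured figures.

hours_saved_per_sprint = 20   # manual queue reshuffling eliminated
fix_cost_hours = 80           # assumed: ~2 sprints of one developer's time
sprint_length_weeks = 2       # assumed sprint cadence

# How many sprints of savings does it take to cover the fix?
payback_sprints = fix_cost_hours / hours_saved_per_sprint
payback_weeks = payback_sprints * sprint_length_weeks

print(f"Payback after ~{payback_sprints:.0f} sprints (~{payback_weeks:.0f} weeks)")
```

Note this counts only the saved manual-handling hours; the customer-satisfaction improvement comes on top of it, which is exactly the kind of framing that helps a PM compare the item against product features.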

Another example: our approach to handling a certain data record type. Each component processed it independently, which created fragmented logic and increasing performance bottlenecks. Our database expert’s analysis showed that within 12 months, the average load time for such records would double—making the system slower for customers and significantly more expensive to maintain.

The selling point was clear: without centralization, performance would degrade so badly that both user experience and development speed would suffer. By investing a few sprints of cross-team work now, we prevented a 100% increase in load times, avoided future firefighting, and ensured the system could scale. After adjusting scope and cutting lower-priority items, stakeholders agreed it was a worthwhile trade-off.


Does it solve everything?

Do we address all technical items? No. Some wishes aren’t backed by data, or we simply don’t have capacity right now. But this is true for product roadmaps as well. The decisions we all have to make are difficult.

Do we know how to do this perfectly? Not at all. We’re still learning—especially with cross-team technical items. Prioritizing within one team’s backlog is one thing. Comparing across teams is another challenge entirely.

Just to share what we are working on: one of our next steps is to introduce an initiative matrix. It will serve as a single place where anyone can see all cross-team topics in progress and who is leading them.

Another thing we are working on is a well-defined technical strategy that outlines our desired direction. Thanks to it, we won’t be addressing random issues here and there, but aligning technical work with a shared long-term goal.

We are not where we want to be yet. But we’ll get there.

Takeaway: Technical backlog on its own doesn’t bring value. Once you frame every item in terms of business impact—customer satisfaction, development speed, or scalability—it becomes part of the product backlog. That’s when hard prioritization decisions become possible, and that’s how technical work starts moving the company forward.

About the author
Senior Copywriter