The Problem with Simple Retries
Most developers handle network failures by wrapping a fetch call in a simple for loop: if the request fails, try again immediately. This approach is dangerous. If a third-party service is already struggling, hammering it with rapid-fire retries only pushes it further toward collapse, and when many clients do this simultaneously, the effect is known as the 'thundering herd' problem.
To build resilient systems, we need a smarter approach. Instead of constant retries, we use Exponential Backoff. This technique increases the waiting time between each attempt, giving the remote service breathing room to recover.
The Logic Behind Exponential Backoff
The math is simple: after every failed attempt, we double the delay. If the first retry happens after 1 second, the second happens after 2, the third after 4, and so on. This prevents your application from becoming its own worst enemy during a partial outage.
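That schedule can be sketched as a one-line helper. (The name backoffDelay is our own, for illustration only; it is not part of the utility we build below.)

```typescript
// Hypothetical helper: delay (in ms) before retry number `attempt` (0-based),
// doubling from a 1-second base.
const backoffDelay = (attempt: number, base: number = 1000): number =>
  base * 2 ** attempt;

// First four retries: 1s, 2s, 4s, 8s
console.log([0, 1, 2, 3].map((a) => backoffDelay(a))); // [1000, 2000, 4000, 8000]
```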
Building the Smart Retry Function
Let's build a generic, reusable TypeScript wrapper. This function will accept any asynchronous task and manage the retry logic automatically. This is particularly useful for flaky legacy APIs or rate-limited third-party tools.
const delay = (ms: number) => new Promise((res) => setTimeout(res, ms));

async function retryWithBackoff<T>(
  task: () => Promise<T>,
  maxRetries: number = 4,
  initialDelay: number = 1000
): Promise<T> {
  let currentDelay = initialDelay;
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      return await task();
    } catch (error) {
      if (attempt === maxRetries) {
        console.error("Max retries reached. Failing...");
        throw error;
      }
      console.warn(`Attempt ${attempt + 1} failed. Retrying in ${currentDelay}ms...`);
      await delay(currentDelay);
      currentDelay *= 2; // The exponential part
    }
  }
  throw new Error("Unexpected failure in retry logic"); // Unreachable, but satisfies the compiler
}
Practical Example: A Flaky Inventory API
Imagine you are building an e-commerce dashboard that pulls stock levels from an external supplier's API. This API is notorious for timing out during peak hours. Here is how you would use our new utility to stabilize your data fetching:
async function fetchStockLevels() {
  const response = await fetch("https://api.flaky-supplier.com/v1/stock");
  if (!response.ok) throw new Error("Server Error");
  return response.json();
}
// Usage
const loadData = async () => {
  try {
    const stock = await retryWithBackoff(fetchStockLevels, 3, 500);
    console.log("Stock Data:", stock);
  } catch (err) {
    alert("The supplier is currently offline. Please try again later.");
  }
};
Why This Matters for Production
Implementing this logic does more than suppress errors; it improves the user experience. Instead of a screen full of immediate error messages, the app pauses and quietly retries behind the scenes. It also protects your infrastructure: by spacing out retries, you avoid tripping API rate limits that could get your IP blocked.
In a real-world scenario, you might also add 'jitter'—a bit of random variation to the delay—to ensure that thousands of clients don't all retry at the exact same millisecond. But even this basic exponential backoff is a massive upgrade over a naive immediate-retry loop.
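As a rough sketch, 'full jitter' replaces the fixed doubled delay with a uniformly random value between zero and the exponential cap. (jitteredDelay is an illustrative name, not part of the utility above.)

```typescript
// Sketch of "full jitter": pick a random delay between 0 and the exponential
// cap, so simultaneous clients spread their retries apart instead of
// stampeding in lockstep.
const jitteredDelay = (attempt: number, base: number = 1000): number =>
  Math.random() * base * 2 ** attempt;

// Example: the third retry (attempt 2) waits somewhere between 0 and 4000 ms.
```

To adopt this in retryWithBackoff, you would compute the wait from the attempt number each iteration instead of doubling a stored currentDelay.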