
Your Node.js API Is Doing the Same Work Over and Over — Redis Fixes That
TL;DR
Your API is running the same database query hundreds of times a minute. Redis caching fixes that — response times drop from 400ms to under 10ms. Here's the complete setup: client config, cache-aside pattern, per-user keys, cache invalidation, and a reusable Express middleware.
You have an endpoint that returns a product list. It queries the database, formats the response, and sends it back. Works fine.
Then you check your database logs and realize that endpoint is running the same query 400 times a minute. Same table. Same filters. Same result every single time.
That's wasted work — and your users are paying for it with slower response times.
Redis is the fix. Once you add caching, that endpoint hits the database once, stores the result, and serves every other request directly from memory. Response times drop from 400ms to under 10ms. No query rewrite needed. No infrastructure change.
This article walks you through exactly how to set it up — from first connection to production-ready caching patterns.
What Redis Actually Is
Redis is an in-memory data store. It keeps data in RAM instead of on disk, which makes reads and writes extremely fast — we're talking microseconds.
You use it as a layer between your API and your database. Instead of hitting the database every time, your API checks Redis first. If the data is there, it returns immediately. If not, it fetches from the database and stores a copy in Redis for next time.
That's the entire idea. Everything else is just implementation.
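Stripped of Redis and Express, the whole idea fits in one function. This is only an illustrative sketch: a `Map` stands in for Redis and `loader` stands in for your database query.

```typescript
// Generic cache-aside in one helper. `store` plays the role of Redis,
// `loader` the role of the expensive database call. Purely illustrative.
type Store = Map<string, string>;

export async function getOrSet<T>(
  store: Store,
  key: string,
  loader: () => Promise<T>
): Promise<T> {
  const hit = store.get(key);
  if (hit !== undefined) {
    return JSON.parse(hit) as T; // cache hit: loader is never called
  }
  const value = await loader(); // cache miss: do the expensive work once
  store.set(key, JSON.stringify(value));
  return value;
}
```

Every cache-aside snippet in the rest of this article is this function, inlined, with Redis as the store and a TTL added.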
Official docs: redis.io/docs
Setting Up Redis Locally
If you're on a Mac:
brew install redis
brew services start redis
On Ubuntu/Debian:
sudo apt install redis-server
sudo systemctl start redis-server
To verify it's running:
redis-cli ping
# PONG
For production, Redis Cloud has a free tier that's good enough to start with. You get a connection URL and skip the server management entirely.
Installing the Node.js Client
Use the official redis package — it's maintained by Redis Inc. and has full TypeScript support.
npm install redis
Create a dedicated file for your Redis client so you don't create multiple connections across your app:
// src/lib/redis.ts
import { createClient } from 'redis';

const client = createClient({
  url: process.env.REDIS_URL || 'redis://localhost:6379',
});

client.on('error', (err) => {
  console.error('Redis error:', err);
});

client.on('connect', () => {
  console.log('Redis connected');
});

await client.connect();
export default client;
Call this once when your app starts. Every other file imports the same client instance.
Official node-redis docs: github.com/redis/node-redis
The Cache-Aside Pattern
This is the most common caching pattern and the one you'll use 90% of the time.
The logic is simple:
- Request comes in
- Check Redis for the data
- If found — return it immediately (cache hit)
- If not found — fetch from database, store in Redis, return the result (cache miss)
Here's what that looks like without any abstractions:
// src/routes/products.ts
import { Router } from 'express';
import redis from '../lib/redis';
import { db } from '../lib/db';

const router = Router();

router.get('/products', async (req, res) => {
  const cacheKey = 'products:all';

  // Step 1: Check Redis first
  const cached = await redis.get(cacheKey);
  if (cached) {
    return res.json(JSON.parse(cached));
  }

  // Step 2: Cache miss — hit the database
  const { rows: products } = await db.query('SELECT * FROM products WHERE active = true');

  // Step 3: Store in Redis for 5 minutes, then return
  await redis.setEx(cacheKey, 300, JSON.stringify(products));
  return res.json(products);
});
export default router;
setEx stores the value with a TTL (time-to-live) in seconds. After 300 seconds, Redis automatically deletes it. Your database gets queried again, the cache refreshes, and the cycle continues.
First request: ~350ms (database query)
Every request after: ~8ms (Redis)
Caching Per-User or Dynamic Data
Not everything is a flat list. Sometimes you need to cache data that's specific to a user or depends on query parameters.
The key is to make your cache key dynamic:
router.get('/users/:id/dashboard', async (req, res) => {
  const { id } = req.params;
  const cacheKey = `dashboard:user:${id}`;

  const cached = await redis.get(cacheKey);
  if (cached) {
    return res.json(JSON.parse(cached));
  }

  // Expensive query — joins, aggregations, multiple tables
  const dashboard = await db.query(`
    SELECT
      u.name,
      u.email,
      COUNT(o.id) AS total_orders,
      SUM(o.total_amount) AS lifetime_value,
      MAX(o.created_at) AS last_order_date
    FROM users u
    LEFT JOIN orders o ON o.user_id = u.id
    WHERE u.id = $1
    GROUP BY u.id
  `, [id]);

  // Cache for 2 minutes — dashboard data can be slightly stale
  await redis.setEx(cacheKey, 120, JSON.stringify(dashboard.rows[0]));
  return res.json(dashboard.rows[0]);
});
Now each user gets their own cache entry. User 42's dashboard doesn't interfere with user 99's.
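If a response also depends on query parameters, fold them into the key deterministically so that `?page=2&sort=price` and `?sort=price&page=2` share one cache entry. A small sketch (this `buildCacheKey` helper is hypothetical, not part of node-redis):

```typescript
// Build a stable cache key from a base name and query parameters.
// Sorting the parameter names makes the key order-independent.
export function buildCacheKey(base: string, params: Record<string, string>): string {
  const parts = Object.keys(params)
    .sort()
    .map((k) => `${k}=${params[k]}`);
  return parts.length ? `${base}:${parts.join('&')}` : base;
}
```

With this, `buildCacheKey('products', { page: '2', sort: 'price' })` and `buildCacheKey('products', { sort: 'price', page: '2' })` resolve to the same key, so reordered query strings don't double-cache the same result.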
Invalidating the Cache When Data Changes
Cached data goes stale when the underlying data changes. If a user updates their profile, you don't want the old version sitting in Redis for another 2 minutes.
Delete the cache entry when you update the data:
router.put('/users/:id', async (req, res) => {
  const { id } = req.params;
  const { name, email } = req.body;

  // Update the database
  await db.query(
    'UPDATE users SET name = $1, email = $2 WHERE id = $3',
    [name, email, id]
  );

  // Invalidate the cache so the next request fetches fresh data
  await redis.del(`dashboard:user:${id}`);
  return res.json({ success: true });
});
Simple rule: wherever you write to the database, delete the related cache key. The next read will miss the cache, fetch fresh data, and rebuild it automatically.
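One way to keep write paths honest is a single helper that maps an entity to every cache key derived from it, so no route forgets a key. This is only a sketch; the entity types and key names are illustrative, mirroring the examples above.

```typescript
// Hypothetical helper: given an entity type and id, list every cache key
// that may hold data derived from it. Key names mirror the ones used above.
export function relatedCacheKeys(entity: 'user' | 'product', id: string): string[] {
  switch (entity) {
    case 'user':
      // A user change can stale both the dashboard and any route-level cache
      return [`dashboard:user:${id}`, `route:/users/${id}/dashboard`];
    case 'product':
      // Any product change invalidates the shared list
      return ['products:all', 'route:/products'];
  }
}
```

In a write handler you'd then call `await redis.del(relatedCacheKeys('user', id))`; node-redis's `del` accepts an array of keys, so the whole set goes in one round trip.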
A Reusable Cache Middleware
Writing the same check-cache / fetch-db / store-cache logic in every route gets repetitive fast. Extract it into a middleware:
// src/middleware/cache.ts
import { Request, Response, NextFunction } from 'express';
import redis from '../lib/redis';

export function cacheMiddleware(ttlSeconds: number) {
  return async (req: Request, res: Response, next: NextFunction) => {
    // Skip cache for non-GET requests
    if (req.method !== 'GET') {
      return next();
    }

    const cacheKey = `route:${req.originalUrl}`;

    try {
      const cached = await redis.get(cacheKey);
      if (cached) {
        res.setHeader('X-Cache', 'HIT');
        return res.json(JSON.parse(cached));
      }

      // Intercept res.json to store the response in Redis
      const originalJson = res.json.bind(res);
      res.json = (body: unknown) => {
        redis.setEx(cacheKey, ttlSeconds, JSON.stringify(body)).catch(console.error);
        res.setHeader('X-Cache', 'MISS');
        return originalJson(body);
      };

      next();
    } catch (err) {
      // If Redis is down, skip cache and serve normally
      console.error('Cache middleware error:', err);
      next();
    }
  };
}
Use it on any route:
import { cacheMiddleware } from '../middleware/cache';

// Cache the product list for 10 minutes
router.get('/products', cacheMiddleware(600), async (req, res) => {
  const { rows: products } = await db.query('SELECT * FROM products WHERE active = true');
  res.json(products);
});

// Cache the category list for 1 hour — it rarely changes
router.get('/categories', cacheMiddleware(3600), async (req, res) => {
  const { rows: categories } = await db.query('SELECT * FROM categories');
  res.json(categories);
});
The X-Cache: HIT / X-Cache: MISS header is useful during development — you can see in DevTools or Postman whether you're hitting Redis or the database.
Checking What's Stored in Redis
While you're building, the Redis CLI is your best friend:
# Open the Redis CLI
redis-cli
# See all keys (don't use this in production on large datasets)
KEYS *
# Get a specific key
GET "products:all"
# Check how long before a key expires (in seconds)
TTL "products:all"
# Delete a key manually
DEL "products:all"
# Check memory usage
INFO memory
What to Cache and What Not To
Good candidates for caching:
- Product listings, category pages, blog posts
- User dashboard stats and aggregations
- Configuration and settings that rarely change
- Third-party API responses (exchange rates, weather, etc.)
- Heavy database queries with joins and aggregations
Don't cache:
- Authentication tokens or session data (use Redis differently for this — with explicit keys per session, not TTL-based caching)
- Financial or transactional data where staleness matters
- Data that changes every request (live scores, real-time feeds)
- Endpoints that are already fast (sub-20ms database queries don't need caching)
Choosing the Right TTL
There's no universal answer — it depends on how often your data changes and how much staleness is acceptable.
A rough guide that works in most cases:
- Product listing — 5 to 10 minutes
- User profile — 2 to 5 minutes
- Dashboard stats — 1 to 2 minutes
- Category / config — 1 hour
- Static reference data — 24 hours
Start conservative (shorter TTL). Once you're confident the data patterns are stable, increase it.
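If you adopt numbers like these, keeping them in one named table avoids magic TTLs scattered across routes. The values below are purely illustrative, matching the rough guide above:

```typescript
// Illustrative TTL table, in seconds, matching the rough guide above.
export const TTL = {
  productList: 5 * 60,          // 5 minutes
  userProfile: 2 * 60,          // 2 minutes
  dashboardStats: 60,           // 1 minute — stats go stale fast
  categories: 60 * 60,          // 1 hour
  referenceData: 24 * 60 * 60,  // 24 hours
} as const;
```

Routes then read as `router.get('/products', cacheMiddleware(TTL.productList), handler)`, and tuning a TTL later means editing one line.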
Handling Redis Downtime
Redis is in-memory, which means if the Redis instance goes down, your cache is gone. This is fine — your app should fall back to the database gracefully, not crash.
The middleware above already handles this with a try/catch that calls next() on error. Always wrap Redis calls in try/catch in production.
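The same fail-open idea can be factored into two small wrappers so route code never touches a bare Redis call. This is a sketch, not part of node-redis; the `CacheClient` interface just names the two methods used here, and the real client satisfies that shape.

```typescript
// Minimal interface covering the two calls the cache layer uses.
// The real node-redis client satisfies this shape structurally.
interface CacheClient {
  get(key: string): Promise<string | null>;
  setEx(key: string, ttl: number, value: string): Promise<unknown>;
}

// Fail-open reads: if Redis errors, behave exactly like a cache miss.
export async function safeGet(client: CacheClient, key: string): Promise<string | null> {
  try {
    return await client.get(key);
  } catch (err) {
    console.error('Redis get failed, treating as miss:', err);
    return null;
  }
}

// Fail-open writes: a failed cache write should never fail the request.
export async function safeSet(
  client: CacheClient,
  key: string,
  ttl: number,
  value: string
): Promise<void> {
  try {
    await client.setEx(key, ttl, value);
  } catch (err) {
    console.error('Redis set failed, skipping cache:', err);
  }
}
```

With these in place, a Redis outage degrades every cached route to plain database reads instead of 500s.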
For high-traffic production apps, Redis Sentinel or Redis Cluster gives you automatic failover. The official docs cover both:
- Redis Sentinel — redis.io/docs/management/sentinel/
- Redis Cluster — redis.io/docs/management/scaling/
Further Reading
- Redis data types (redis.io/docs/data-types/) — strings are enough to start, but lists, hashes, and sorted sets open up more patterns
- node-redis full API (github.com/redis/node-redis) — covers pipelines, transactions, Lua scripts
- Redis best practices (redis.io/docs/manual/patterns/) — key naming conventions, memory management, eviction policies
- Upstash (upstash.com) — serverless Redis that works well with Next.js edge functions
The Bottom Line
Caching is not complicated. You're storing a result so you don't have to compute it again. Redis is the right tool for this because it's fast, simple, and battle-tested at scale.
Start with your slowest GET endpoints. Add the middleware. Watch your database load drop. Adjust TTLs based on how fresh the data needs to be.
You don't need to cache everything on day one. Cache the one endpoint that's under the most pressure, see the improvement, and go from there.