
Why Your Docker Container Is 1.2GB When It Should Be 80MB
You run docker images and see it. Your Node.js API image sitting at 1.2GB. Your colleague's Python service at 1.8GB. A simple Go binary wrapped in a container at 900MB.
You ship it anyway because it works. The app runs fine. But that 1.2GB gets pulled on every deploy, pushed to every environment, stored in your registry, and downloaded by every developer on the team. At some point someone questions the cloud bill and everyone looks confused.
I've audited production Docker setups across several projects. The same five mistakes appear every single time. Here's what they are, why they happen, and the exact changes that took a 1.24GB Node.js image to 78MB without changing a line of application code.
Why Image Size Actually Matters
Every extra megabyte in your image costs you in four places. Pull time on every deploy — a 1.2GB image on a cold node takes 2–3 minutes to pull before your container even starts. Registry storage. Attack surface — every package installed is a potential vulnerability. And local developer experience — docker pull shouldn't be something you leave to go make coffee.
An 80MB image pulls in seconds. It has fewer packages, so fewer CVEs. It starts faster. The constraint of keeping images small is one of the best forces for keeping production environments clean.
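Before fixing anything, it is worth seeing where the megabytes actually live. A quick sketch using the Docker CLI (my-image:latest is a placeholder tag):

```shell
# List image sizes at a glance
docker images --format 'table {{.Repository}}:{{.Tag}}\t{{.Size}}'

# Show how much each layer contributes; the biggest offenders
# are usually the base image and the dependency-install step
docker history my-image:latest --format 'table {{.CreatedBy}}\t{{.Size}}'
```

docker history is the fastest way to attribute bloat to a specific Dockerfile instruction before you start cutting.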
Mistake 1: Using the Wrong Base Image
This is where 80% of bloat comes from. The default node:20 image is based on Debian and ships with a full Linux environment — compilers, development headers, build tools, man pages, locales. Everything you need to build software from source. Almost none of it is needed to run your application.
# node:20 — 1.1GB base image
FROM node:20
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
CMD ["node", "src/index.js"]
# Final image: ~1.2GB
# node:20-alpine — 56MB base image
FROM node:20-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
COPY . .
CMD ["node", "src/index.js"]
# Final image: ~110MB
For Go projects you can go even further — build with golang:alpine and use scratch (an empty image) for the final stage. The result is your entire Go binary in a container with zero operating system underneath it.
# Go — multi-stage to scratch
FROM golang:1.22-alpine AS builder
WORKDIR /app
COPY . .
RUN CGO_ENABLED=0 go build -o app .
FROM scratch
COPY --from=builder /app/app .
CMD ["/app"]
# Final image: ~10MB — just the binary
Mistake 2: Installing Dev Dependencies in Production
Your node_modules has two categories: packages your app needs to run (Express, Mongoose, dotenv) and packages you only need during development (Jest, ESLint, TypeScript, ts-node). Most Dockerfiles install all of them.
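As a sketch, the split usually looks like this in package.json. The package names come from the examples above; the version ranges are illustrative:

```json
{
  "dependencies": {
    "express": "^4.19.0",
    "mongoose": "^8.3.0",
    "dotenv": "^16.4.0"
  },
  "devDependencies": {
    "typescript": "^5.4.0",
    "ts-node": "^10.9.0",
    "jest": "^29.7.0",
    "eslint": "^8.57.0"
  }
}
```

Everything under devDependencies is build-time tooling; none of it should exist in the image that serves traffic.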
# Installs everything including devDependencies
RUN npm install
# node_modules: ~350MB
# Production dependencies only
RUN npm ci --omit=dev
# node_modules: ~80MB
For TypeScript projects, use multi-stage builds. Compile in stage 1 with full dev tools, then copy only the compiled output and production dependencies to stage 2. The TypeScript compiler, ts-node, jest, and eslint never touch the final image.
# Multi-stage TypeScript build
FROM node:20-alpine AS builder
WORKDIR /app
COPY package*.json tsconfig.json ./
RUN npm ci
COPY src ./src
RUN npm run build
FROM node:20-alpine AS production
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
COPY --from=builder /app/dist ./dist
CMD ["node", "dist/index.js"]
# TypeScript compiler, ts-node, jest — all gone from the final image
Mistake 3: No .dockerignore File
Without a .dockerignore, Docker sends everything in your project directory to the build context — your node_modules, .git folder, test files, logs, and local environment files. This doesn't just bloat the image. It slows down every single build because Docker has to hash and transfer gigabytes before it even starts.
# .dockerignore — create this in your project root
node_modules
.git
.gitignore
*.log
.env
.env.*
dist
build
coverage
.DS_Store
*.md
docker-compose*.yml
.github
tests
__tests__
*.test.js
*.spec.js
This single file can reduce the build context from 500MB+ to under 5MB on a typical Node.js project. Builds go from transferring gigabytes to almost nothing. Cache invalidation improves dramatically because irrelevant file changes stop triggering full rebuilds.
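You can verify the effect without guessing: BuildKit prints the context size as one of the first build steps. A quick check (the tag is a placeholder):

```shell
# The "transferring context" line shows exactly how much
# data Docker ships to the builder
docker build --progress=plain -t my-image:latest . 2>&1 \
  | grep 'transferring context'
```

Run it before and after adding the .dockerignore; on a typical Node.js project the number drops from hundreds of megabytes to single digits.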
Mistake 4: Too Many RUN Layers
Every RUN instruction creates a new image layer. If you install something in one layer and delete it in the next, Docker still includes the original layer. The deletion is recorded on top but the old data is still there, still counted in the image size.
# Delete in a separate layer — bloat remains in image
FROM node:20-alpine
RUN apk add --no-cache python3 make g++
RUN npm ci
RUN apk del python3 make g++ # does NOT reduce image size
# Install, use, and clean in a single RUN layer
FROM node:20-alpine
RUN apk add --no-cache python3 make g++ \
&& npm ci \
&& apk del python3 make g++ \
&& npm cache clean --force
# Layer only contains the net result — no temp tools bloat
Mistake 5: Running as Root and Installing Unnecessary Tools
Many Dockerfiles install curl, wget, git, or vim during setup for debugging convenience. These tools add size and attack surface. An attacker who gets into a container with curl can download additional payloads. One without it cannot. Most containers also run as root by default — easy to fix, almost never done.
# Minimal tools, non-root user
FROM node:20-alpine
RUN addgroup -g 1001 -S nodejs \
&& adduser -S nodeuser -u 1001 -G nodejs
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev && npm cache clean --force
COPY --chown=nodeuser:nodejs . .
USER nodeuser
EXPOSE 3000
CMD ["node", "src/index.js"]
The Full Optimised Dockerfile
Here is everything combined — production-ready for a TypeScript Node.js API:
# Stage 1: Build
FROM node:20-alpine AS builder
WORKDIR /app
COPY package*.json tsconfig.json ./
RUN npm ci
COPY src ./src
RUN npm run build
# Stage 2: Production
FROM node:20-alpine AS production
RUN addgroup -g 1001 -S nodejs \
&& adduser -S nodeuser -u 1001 -G nodejs
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev \
&& npm cache clean --force
COPY --from=builder --chown=nodeuser:nodejs /app/dist ./dist
USER nodeuser
EXPOSE 3000
HEALTHCHECK --interval=30s --timeout=10s --start-period=20s --retries=3 \
CMD wget -qO- http://localhost:3000/health || exit 1
CMD ["node", "dist/index.js"]
Before vs After — Real Numbers
Tested on a real Express + TypeScript API with Mongoose, Zod, and a dozen utility packages: the baseline image was 1.24GB with 47 CVEs and a 2m 48s cold pull time. After applying all five fixes — alpine base, multi-stage build, .dockerignore, combined RUN layers, non-root user — the image is 78MB with 3 CVEs and an 11-second pull time. That is a 94% reduction with zero changes to application code.
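If you want to reproduce these checks against your own optimised image, a quick sketch (the api:slim tag and the nodeuser name match the Dockerfile above but stand in for your own values):

```shell
# Final image size
docker images api:slim --format '{{.Size}}'

# The container should run as the non-root user
docker run --rm api:slim whoami          # expect: nodeuser

# Debugging tools should be absent; a non-zero exit here is good
docker run --rm api:slim which curl
```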
Scan Your Images
Smaller images have fewer vulnerabilities, but verify before shipping.
# Scan with Trivy (open source, excellent for CI)
trivy image my-image:latest
# Add to GitHub Actions — fail build on HIGH/CRITICAL CVEs
- name: Scan image
  uses: aquasecurity/trivy-action@master
  with:
    image-ref: my-image:latest
    severity: 'HIGH,CRITICAL'
    exit-code: '1'
GitHub Repository
The companion repo has four branches — one per mistake — starting from the 1.24GB image. Run make build && make size on each branch to see the exact reduction: https://github.com/Sandeep007-Stack/docker-image-size-2026.git
The Takeaway
Docker image bloat is not a mystery. It is five predictable mistakes made in a predictable order: wrong base image, wrong dependencies, missing .dockerignore, too many layers, and unnecessary tools. Fix them in that order. The base image change alone will cut most images by 70%.
Your image should contain exactly what your application needs to run. Nothing installed during development. Nothing downloaded for convenience. Nothing that an attacker could find useful.
Eighty megabytes is not a hard limit. It's a mindset.