Docker for Developers: Containers, Compose, and Real-World Workflows

A practical guide to Docker concepts, Dockerfile best practices, and using Docker Compose to run multi-service development environments.

"It works on my machine" is the phrase that launched a thousand deployment failures. Docker solves this by packaging your application and all its dependencies into a container — an isolated, reproducible environment that runs identically everywhere, from your laptop to production.

Containers vs. virtual machines

              Virtual Machine      Container
Isolation     Full OS              Process-level
Startup time  Minutes              Seconds
Size          GBs                  MBs
Overhead      High (hypervisor)    Near-zero
Use case      Full OS isolation    App packaging

Containers share the host kernel but isolate the filesystem, processes, and networking. That's why they're so lightweight.
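You can see the shared kernel directly: a container has no guest OS, so it reports the host's kernel version. A quick sketch (requires a local Docker daemon; output varies by host):

```shell
# Kernel version from inside a minimal Alpine container...
docker run --rm alpine uname -r

# ...and from the host. The two match, because containers share the host kernel.
uname -r
```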

Core Docker concepts

Image — A read-only blueprint for a container. Think of it as a class definition.

Container — A running instance of an image. Think of it as an object created from the class.

Registry — A storage and distribution system for images. Docker Hub is the public default; GitHub Container Registry and AWS ECR are common alternatives.

Volume — Persistent storage that outlives any single container. Data written to a container's writable layer is lost when the container is removed; data in a volume survives.
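To make the volume concept concrete, here is a minimal sketch (the volume name mydata is illustrative; requires a local Docker daemon):

```shell
# Write a file into a named volume from a throwaway container
docker run --rm -v mydata:/data alpine sh -c 'echo hello > /data/greeting'

# That container is gone, but the volume persists — a fresh container can read the file
docker run --rm -v mydata:/data alpine cat /data/greeting
```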

Writing a good Dockerfile

# Use a specific version — never "latest" in production
FROM node:20-alpine

# Set working directory
WORKDIR /app

# Copy dependency files first (leverages layer caching)
COPY package.json package-lock.json ./
RUN npm ci --omit=dev

# Copy application code
COPY . .

# Build the app
RUN npm run build

# Run as non-root user for security
USER node

# Document the port (doesn't publish it)
EXPOSE 3000

# Use exec form for proper signal handling
CMD ["node", "server.js"]

Layer caching: the key to fast builds

Each instruction in a Dockerfile creates a layer. Docker caches layers — if nothing has changed in a layer or its predecessors, Docker reuses the cache.

Copy dependency files before application code. Dependencies change far less often than your code. If you copy everything at once, a one-line change in index.js invalidates the dependency installation cache and forces a full npm install.
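A .dockerignore file complements this: it keeps files that change constantly (or never belong in an image) out of the build context, so COPY . . neither busts the cache nor bloats the image. A typical starting point for a Node project:

```
# .dockerignore — excluded from the build context
node_modules
dist
.git
.env
*.log
```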

Multi-stage builds

Use multi-stage builds to keep production images lean:

# Stage 1: Build
FROM node:20-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Stage 2: Production (no dev dependencies, no source)
FROM node:20-alpine AS runner
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
COPY --from=builder /app/dist ./dist
USER node
CMD ["node", "dist/server.js"]

The final image contains only the built output — not your source code, test files, or dev dependencies.
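Multi-stage builds have a handy side effect: you can stop at any named stage with --target, which is useful for running tests in CI where dev dependencies are still present. A sketch (the image tags are illustrative):

```shell
# Build only the first stage — dev dependencies and source are available here
docker build --target builder -t myapp:build .

# A default build runs all stages and produces the lean runner image
docker build -t myapp:latest .
```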

Docker Compose for local development

Docker Compose orchestrates multi-container applications. A single docker-compose.yml defines your entire local stack:

services:
  app:
    build: .
    ports:
      - "3000:3000"
    environment:
      - DATABASE_URL=postgresql://user:pass@db:5432/myapp
    depends_on:
      db:
        condition: service_healthy
    volumes:
      - .:/app
      - /app/node_modules

  db:
    image: postgres:16-alpine
    environment:
      POSTGRES_USER: user
      POSTGRES_PASSWORD: pass
      POSTGRES_DB: myapp
    volumes:
      - postgres_data:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U user"]
      interval: 5s
      timeout: 5s
      retries: 5

  redis:
    image: redis:7-alpine
    ports:
      - "6379:6379"

volumes:
  postgres_data:

Generate the scaffolding for your stack with our Docker Compose Generator — choose your services and get a production-ready docker-compose.yml in seconds.

Essential Compose commands

# Start all services in the background
docker compose up -d

# View logs (follow mode)
docker compose logs -f app

# Run a command inside a running container
docker compose exec app sh

# Stop and remove containers (keeps volumes)
docker compose down

# Stop, remove containers AND volumes
docker compose down -v

# Rebuild images after Dockerfile changes
docker compose up -d --build

Environment variables and secrets

Never hardcode credentials in your Dockerfile or Compose file. Use a .env file:

# .env (add to .gitignore!)
DATABASE_URL=postgresql://user:pass@db:5432/myapp
REDIS_URL=redis://redis:6379
JWT_SECRET=your-super-secret-key

Compose automatically loads .env from the project directory. Reference variables with ${VARIABLE_NAME} in your YAML.
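For example, the hardcoded connection string in the Compose file above could instead reference the .env values (a sketch; the variable names match the .env example):

```yaml
services:
  app:
    environment:
      # Substituted from .env when you run `docker compose up`
      - DATABASE_URL=${DATABASE_URL}
      - REDIS_URL=${REDIS_URL}
```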

Use our Env Generator to scaffold .env files with sensible defaults and .env.example templates for your team.

Networking in Docker

Compose creates a default network for your stack. Services can reach each other using the service name as the hostname — that's why the app uses db as the database host, not localhost.

app → connects to "db:5432"
app → connects to "redis:6379"

Only ports explicitly mapped with ports: are accessible from the host machine.
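You can verify the service-name DNS from inside a running container (assumes the Compose stack above is up; busybox's nslookup is available in the Alpine-based images used here):

```shell
# Resolve the "db" service name from inside the app container —
# it maps to the database container's IP on the Compose network
docker compose exec app nslookup db
```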

Health checks and startup order

depends_on only waits for a container to start, not to be ready. A database container starts in seconds, but Postgres may take a few more seconds to accept connections. Use healthcheck + condition: service_healthy to wait until the service is actually ready.

Production considerations

  1. Use specific image tags — node:20.11.1-alpine, not node:latest.
  2. Scan images for vulnerabilities — docker scout cves myapp:latest.
  3. Set resource limits — prevent a single container from starving others.
  4. Use read-only filesystems where possible — read_only: true in Compose.
  5. Never run as root — add USER node or equivalent.
  6. Configure Nginx as a reverse proxy in front of your app — use our Nginx Config Generator to get a battle-tested config.
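Items 3 and 4 in Compose terms (a sketch — the limit values are illustrative; deploy.resources.limits is honored by Docker Compose v2 even outside Swarm):

```yaml
services:
  app:
    read_only: true      # container filesystem is immutable at runtime
    tmpfs:
      - /tmp             # writable scratch space where the app genuinely needs it
    deploy:
      resources:
        limits:
          cpus: "1.0"    # at most one CPU
          memory: 512M   # hard memory cap
```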

Quick reference

# Images
docker images                    # list local images
docker pull nginx:alpine         # download image
docker rmi myimage               # remove image

# Containers
docker ps                        # running containers
docker ps -a                     # all containers
docker stop <id>                 # stop gracefully
docker rm <id>                   # remove stopped container
docker logs <id> -f              # stream logs

# Cleanup
docker system prune              # remove stopped containers, unused networks, dangling images
docker volume prune              # remove unused volumes

Docker removes an entire class of environment-related bugs and makes onboarding new developers trivially easy. Master the basics here and you'll have a foundation for Kubernetes, CI/CD pipelines, and cloud deployments.