# Docker and Kubernetes: Containerization for Modern Development Teams

Containerization has revolutionized how we build, ship, and run applications. Docker provides the foundation for packaging applications with their dependencies, while Kubernetes orchestrates these containers at scale. This comprehensive guide covers everything modern development teams need to know about containerization, from basic Docker concepts to advanced Kubernetes deployment patterns.

## Docker Fundamentals

## Understanding Containers vs Virtual Machines

Containers share the host OS kernel, making them lightweight and efficient:

```dockerfile
# Traditional VM approach (heavy)
# Host OS -> Hypervisor -> Guest OS -> App

# Container approach (lightweight)
# Host OS -> Container Runtime -> App
```

## Dockerfile Best Practices

```dockerfile
# Multi-stage build for Node.js application
FROM node:18-alpine AS builder

# Set working directory
WORKDIR /app

# Copy package files first (better layer caching)
COPY package*.json ./

# Install dependencies (dev dependencies are needed for the build step)
RUN npm ci

# Copy source code
COPY . .

# Build application
RUN npm run build

# Drop dev dependencies so only production modules reach the next stage
RUN npm prune --production

# Production stage
FROM node:18-alpine AS production

# Install curl for the health check below
RUN apk add --no-cache curl

# Create non-root user for security
RUN addgroup -g 1001 -S nodejs && \
    adduser -S nextjs -u 1001

# Set working directory
WORKDIR /app

# Copy built application from builder stage
COPY --from=builder --chown=nextjs:nodejs /app/dist ./dist
COPY --from=builder --chown=nextjs:nodejs /app/node_modules ./node_modules
COPY --from=builder --chown=nextjs:nodejs /app/package.json ./package.json

# Switch to non-root user
USER nextjs

# Expose port
EXPOSE 3000

# Health check
HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
  CMD curl -f http://localhost:3000/health || exit 1

# Start application
CMD ["npm", "start"]
```
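The HEALTHCHECK curls `/health`, so the application must actually serve that route. As an illustration only (the example app is Node.js, and the route name is this guide's convention), a minimal health endpoint sketched with Python's standard library:

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer


class HealthHandler(BaseHTTPRequestHandler):
    """Serves the /health route that a Docker HEALTHCHECK can probe."""

    def do_GET(self):
        if self.path == "/health":
            body = json.dumps({"status": "ok"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, fmt, *args):
        # Keep container stdout quiet; health probes fire frequently.
        pass


def run(port: int = 3000) -> HTTPServer:
    """Start the health server in a background thread and return it."""
    server = HTTPServer(("0.0.0.0", port), HealthHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```

The endpoint should be cheap and dependency-free; a health check that touches the database can take the whole container down during a transient outage.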

## Advanced Docker Patterns

```dockerfile
# Python application with optimized layers
FROM python:3.11-slim-bullseye

# Install system dependencies
RUN apt-get update && apt-get install -y \
    build-essential \
    curl \
    && rm -rf /var/lib/apt/lists/*

# Create app user
RUN useradd --create-home --shell /bin/bash app
USER app
WORKDIR /home/app

# Install Python dependencies
COPY --chown=app:app requirements.txt .
RUN pip install --user --no-cache-dir -r requirements.txt

# Copy application code
COPY --chown=app:app . .

# Put user-installed scripts on PATH
ENV PATH=/home/app/.local/bin:$PATH

# Run application
CMD ["python", "-m", "gunicorn", "--bind", "0.0.0.0:8000", "app:application"]
```
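The gunicorn CMD resolves `app:application`: a WSGI callable named `application` in `app.py`. A minimal sketch of what that callable looks like (the `/health` route is this guide's convention, not a gunicorn requirement):

```python
def application(environ, start_response):
    """Minimal WSGI app matching the `app:application` target in the CMD."""
    path = environ.get("PATH_INFO", "/")
    if path == "/health":
        body = b'{"status": "ok"}'
        content_type = "application/json"
    else:
        body = b"Hello from a containerized WSGI app\n"
        content_type = "text/plain"
    headers = [
        ("Content-Type", content_type),
        ("Content-Length", str(len(body))),
    ]
    start_response("200 OK", headers)
    return [body]
```

Any WSGI server (gunicorn, uWSGI, waitress) can host this callable unchanged, which is what makes the CMD swappable without touching application code.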

## Docker Compose for Development

## Complete Development Environment

```yaml
# docker-compose.yml
version: '3.8'

services:
  app:
    build:
      context: .
      dockerfile: Dockerfile.dev
    ports:
      - "3000:3000"
    volumes:
      - .:/app
      - node_modules:/app/node_modules
    environment:
      - NODE_ENV=development
      - DATABASE_URL=postgresql://postgres:password@postgres:5432/devdb
      - REDIS_URL=redis://redis:6379
    depends_on:
      postgres:
        condition: service_healthy
      redis:
        condition: service_healthy
    networks:
      - app-network

  postgres:
    image: postgres:15-alpine
    environment:
      POSTGRES_DB: devdb
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: password
    ports:
      - "5432:5432"
    volumes:
      - postgres_data:/var/lib/postgresql/data
      - ./init.sql:/docker-entrypoint-initdb.d/init.sql
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 30s
      timeout: 10s
      retries: 3
    networks:
      - app-network

  redis:
    image: redis:7-alpine
    ports:
      - "6379:6379"
    volumes:
      - redis_data:/data
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 30s
      timeout: 10s
      retries: 3
    networks:
      - app-network

  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf
      - ./ssl:/etc/nginx/ssl
    depends_on:
      - app
    networks:
      - app-network

volumes:
  postgres_data:
  redis_data:
  node_modules:

networks:
  app-network:
    driver: bridge
```

## Development Dockerfile

```dockerfile
# Dockerfile.dev
FROM node:18-alpine

WORKDIR /app

# Install development dependencies
RUN apk add --no-cache git

# Copy package files
COPY package*.json ./

# Install all dependencies (including dev)
RUN npm install

# Copy source code
COPY . .

# Install global development tools
RUN npm install -g nodemon concurrently

# Expose port
EXPOSE 3000

# Start development server with hot reload
CMD ["npm", "run", "dev"]
```

## Kubernetes Fundamentals

## Core Concepts

```yaml
# Pod - smallest deployable unit
apiVersion: v1
kind: Pod
metadata:
  name: my-app-pod
  labels:
    app: my-app
spec:
  containers:
  - name: app
    image: my-app:latest
    ports:
    - containerPort: 3000
    env:
    - name: NODE_ENV
      value: "production"
    resources:
      requests:
        memory: "128Mi"
        cpu: "250m"
      limits:
        memory: "256Mi"
        cpu: "500m"
```
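The resource quantities use Kubernetes notation: `250m` means 0.25 CPU cores, and `128Mi` means 128 × 1024² bytes. A rough parser for the common suffixes (an illustration, not the full Kubernetes quantity grammar):

```python
def parse_cpu(quantity: str) -> float:
    """'250m' -> 0.25 cores; '2' -> 2.0 cores."""
    if quantity.endswith("m"):
        return int(quantity[:-1]) / 1000
    return float(quantity)


def parse_memory(quantity: str) -> int:
    """'128Mi' -> bytes, for the common binary suffixes."""
    suffixes = {"Ki": 1024, "Mi": 1024**2, "Gi": 1024**3, "Ti": 1024**4}
    for suffix, factor in suffixes.items():
        if quantity.endswith(suffix):
            return int(quantity[:-2]) * factor
    return int(quantity)  # plain bytes
```

Requests are what the scheduler reserves; limits are the enforcement ceiling. A pod exceeding its memory limit is OOM-killed, while exceeding the CPU limit only throttles it.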

## Deployments for Scalability

```yaml
# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-deployment
  labels:
    app: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: app
        image: my-app:v1.0.0
        ports:
        - containerPort: 3000
        env:
        - name: DATABASE_URL
          valueFrom:
            secretKeyRef:
              name: app-secrets
              key: database-url
        - name: PORT
          value: "3000"
        livenessProbe:
          httpGet:
            path: /health
            port: 3000
          initialDelaySeconds: 30
          periodSeconds: 10
        readinessProbe:
          httpGet:
            path: /ready
            port: 3000
          initialDelaySeconds: 5
          periodSeconds: 5
        resources:
          requests:
            memory: "256Mi"
            cpu: "250m"
          limits:
            memory: "512Mi"
            cpu: "500m"
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
```
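With `replicas: 3`, `maxUnavailable: 1`, and `maxSurge: 1`, a rolling update keeps at least 2 pods available and never runs more than 4 at once. The bounds are simple arithmetic, sketched here (percentages round down for unavailable and up for surge, per Kubernetes's documented behavior):

```python
import math


def rolling_update_bounds(replicas: int, max_unavailable, max_surge):
    """Availability bounds during a RollingUpdate.

    max_unavailable / max_surge may be absolute ints or percentage
    strings like "25%".
    """
    def resolve(value, round_up):
        if isinstance(value, str) and value.endswith("%"):
            fraction = int(value[:-1]) / 100 * replicas
            return math.ceil(fraction) if round_up else math.floor(fraction)
        return int(value)

    min_available = replicas - resolve(max_unavailable, round_up=False)
    max_total = replicas + resolve(max_surge, round_up=True)
    return min_available, max_total
```

Tightening `maxUnavailable` to 0 gives zero-downtime rollouts at the cost of needing surge capacity on the cluster.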

## Services for Networking

```yaml
# service.yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app-service
spec:
  selector:
    app: my-app
  ports:
  - protocol: TCP
    port: 80
    targetPort: 3000
  type: ClusterIP

---
# Load balancer service
apiVersion: v1
kind: Service
metadata:
  name: my-app-loadbalancer
spec:
  selector:
    app: my-app
  ports:
  - protocol: TCP
    port: 80
    targetPort: 3000
  type: LoadBalancer
```

## Configuration Management

## ConfigMaps and Secrets

```yaml
# configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  NODE_ENV: "production"
  LOG_LEVEL: "info"
  API_VERSION: "v1"
  config.json: |
    {
      "database": {
        "pool": {
          "min": 2,
          "max": 10
        }
      },
      "cache": {
        "ttl": 3600
      }
    }

---
# secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: app-secrets
type: Opaque
data:
  database-url: cG9zdGdyZXNxbDovL3VzZXI6cGFzc0BkYi5leGFtcGxlLmNvbTU0MzIvZGJuYW1l
  jwt-secret: bXlfc3VwZXJfc2VjcmV0X2p3dF9rZXk=
  api-key: YWJjZGVmZ2hpams=
```
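The `data` values in a Secret are base64-encoded, which is an encoding, not encryption: anyone with read access to the Secret can decode them. Encoding and decoding, shown with Python's standard library:

```python
import base64


def encode_secret(value: str) -> str:
    """Produce a value suitable for a Secret's `data` field."""
    return base64.b64encode(value.encode()).decode()


def decode_secret(value: str) -> str:
    """Recover the plaintext from a Secret's `data` field."""
    return base64.b64decode(value).decode()
```

In practice, `kubectl create secret generic --from-literal=...` does the encoding for you, and a manifest can use the `stringData` field to supply plaintext values directly. For actual secrecy at rest, enable encryption at rest or use an external secret manager.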

## Using Configuration in Deployments

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  template:
    spec:
      containers:
      - name: app
        image: my-app:latest
        env:
        - name: NODE_ENV
          valueFrom:
            configMapKeyRef:
              name: app-config
              key: NODE_ENV
        - name: DATABASE_URL
          valueFrom:
            secretKeyRef:
              name: app-secrets
              key: database-url
        volumeMounts:
        - name: config-volume
          mountPath: /app/config
        - name: secret-volume
          mountPath: /app/secrets
          readOnly: true
      volumes:
      - name: config-volume
        configMap:
          name: app-config
          items:
          - key: config.json
            path: app-config.json
      - name: secret-volume
        secret:
          secretName: app-secrets
```
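Inside the container, the ConfigMap item appears as the file `/app/config/app-config.json` and each Secret key as a file under `/app/secrets`. A sketch of loading both at startup (Python for illustration; the paths are the mount points above, and the merge policy of letting env vars win is an assumption, not a Kubernetes rule):

```python
import json
import os
from pathlib import Path


def load_config(config_dir="/app/config", secret_dir="/app/secrets"):
    """Merge file-mounted config and secrets; env vars take precedence,
    matching the Deployment above where DATABASE_URL also arrives via env."""
    config = json.loads(Path(config_dir, "app-config.json").read_text())
    secrets = {
        p.name: p.read_text() for p in Path(secret_dir).iterdir() if p.is_file()
    }
    config["database_url"] = os.environ.get(
        "DATABASE_URL", secrets.get("database-url")
    )
    return config
```

File mounts have one advantage over env vars: when the ConfigMap changes, mounted files are updated in place (after a sync delay), while env vars are fixed at pod start.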

## Advanced Kubernetes Patterns

## Horizontal Pod Autoscaler

```yaml
# hpa.yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app-deployment
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70
  - type: Resource
    resource:
      name: memory
      target:
        type: Utilization
        averageUtilization: 80
  behavior:
    scaleDown:
      stabilizationWindowSeconds: 300
      policies:
      - type: Percent
        value: 50
        periodSeconds: 60
    scaleUp:
      stabilizationWindowSeconds: 0
      policies:
      - type: Percent
        value: 100
        periodSeconds: 15
      - type: Pods
        value: 2
        periodSeconds: 60
```
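The HPA's core scaling rule is `desiredReplicas = ceil(currentReplicas * currentMetric / targetMetric)`, clamped to the min/max bounds (with multiple metrics, the largest result wins). A sketch of that calculation:

```python
import math


def desired_replicas(current_replicas, current_utilization, target_utilization,
                     min_replicas=2, max_replicas=10):
    """HPA scaling formula: ceil(current * observed / target), clamped."""
    desired = math.ceil(
        current_replicas * current_utilization / target_utilization
    )
    return max(min_replicas, min(max_replicas, desired))
```

For example, 3 replicas averaging 140% CPU against a 70% target scales to 6. The `behavior` block then rate-limits how fast the controller may move toward that number: here, scale-down waits out a 5-minute stabilization window and sheds at most 50% of pods per minute.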

## Ingress for External Access

```yaml
# ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    cert-manager.io/cluster-issuer: letsencrypt-prod
    nginx.ingress.kubernetes.io/limit-rps: "100"
spec:
  tls:
  - hosts:
    - api.myapp.com
    secretName: api-tls-secret
  rules:
  - host: api.myapp.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-app-service
            port:
              number: 80
      - path: /health
        pathType: Exact
        backend:
          service:
            name: health-service
            port:
              number: 80
```
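The two `pathType` values behave differently: `Exact` matches the URL path verbatim, while `Prefix` matches on `/`-separated path elements, so `/health` as a Prefix would also match `/health/live` but not `/healthz`. An approximate model of the matching rule:

```python
def matches(path_type: str, rule_path: str, request_path: str) -> bool:
    """Approximate Ingress path matching for Exact and Prefix pathTypes."""
    if path_type == "Exact":
        return request_path == rule_path
    if path_type == "Prefix":
        if rule_path == "/":
            return True
        rule = rule_path.rstrip("/")
        # Element-wise prefix: match the path itself or a subpath of it.
        return request_path == rule or request_path.startswith(rule + "/")
    raise ValueError(f"unsupported pathType: {path_type}")
```

When multiple rules match, the longest matching path wins, which is why the `Exact` `/health` rule above takes precedence over the catch-all `/` prefix.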

## StatefulSets for Stateful Applications

```yaml
# statefulset.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgres-statefulset
spec:
  serviceName: postgres-service
  replicas: 3
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
      - name: postgres
        image: postgres:15
        env:
        - name: POSTGRES_DB
          value: myapp
        - name: POSTGRES_USER
          value: postgres
        - name: POSTGRES_PASSWORD
          valueFrom:
            secretKeyRef:
              name: postgres-secret
              key: password
        ports:
        - containerPort: 5432
        volumeMounts:
        - name: postgres-storage
          mountPath: /var/lib/postgresql/data
        resources:
          requests:
            memory: "512Mi"
            cpu: "500m"
          limits:
            memory: "1Gi"
            cpu: "1000m"
  volumeClaimTemplates:
  - metadata:
      name: postgres-storage
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 10Gi
      storageClassName: fast-ssd
```
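Unlike Deployment pods, StatefulSet pods get stable ordinal identities: this manifest produces `postgres-statefulset-0` through `-2`, each reachable at a predictable DNS name through the headless `postgres-service` (assuming the `default` namespace here). The naming pattern:

```python
def statefulset_pod_dns(name, service_name, namespace, replicas):
    """Stable per-pod DNS names a StatefulSet provides via its headless Service."""
    return [
        f"{name}-{ordinal}.{service_name}.{namespace}.svc.cluster.local"
        for ordinal in range(replicas)
    ]
```

This stability is what lets databases designate `-0` as the primary and the rest as replicas; each pod also keeps its own PersistentVolumeClaim from the `volumeClaimTemplates` across restarts.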

## Security Best Practices

## Pod Security Standards

```yaml
# security-policy.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: secure-app
  labels:
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/audit: restricted
    pod-security.kubernetes.io/warn: restricted

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: secure-app
  namespace: secure-app
spec:
  template:
    spec:
      securityContext:
        runAsNonRoot: true
        runAsUser: 1001
        runAsGroup: 1001
        fsGroup: 1001
        seccompProfile:
          type: RuntimeDefault
      containers:
      - name: app
        image: my-app:latest
        securityContext:
          allowPrivilegeEscalation: false
          readOnlyRootFilesystem: true
          capabilities:
            drop:
            - ALL
        volumeMounts:
        - name: tmp-volume
          mountPath: /tmp
        - name: cache-volume
          mountPath: /app/cache
      volumes:
      - name: tmp-volume
        emptyDir: {}
      - name: cache-volume
        emptyDir: {}
```

## RBAC (Role-Based Access Control)

```yaml
# rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: app-service-account
  namespace: my-app

---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: my-app
  name: app-role
rules:
- apiGroups: [""]
  resources: ["pods", "configmaps", "secrets"]
  verbs: ["get", "list", "watch"]
- apiGroups: ["apps"]
  resources: ["deployments"]
  verbs: ["get", "list", "watch", "update"]

---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: app-role-binding
  namespace: my-app
subjects:
- kind: ServiceAccount
  name: app-service-account
  namespace: my-app
roleRef:
  kind: Role
  name: app-role
  apiGroup: rbac.authorization.k8s.io
```
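Conceptually, RBAC authorization is a match over (apiGroup, resource, verb) tuples: a request is allowed if any rule in a bound role covers all three fields. A simplified model of that check (real RBAC also handles resource names, subresources, and non-resource URLs):

```python
def is_allowed(rules, api_group, resource, verb):
    """Simplified RBAC check mirroring the Role above.

    "*" acts as a wildcard; "" is the core API group.
    """
    def field_matches(allowed, value):
        return "*" in allowed or value in allowed

    return any(
        field_matches(rule["apiGroups"], api_group)
        and field_matches(rule["resources"], resource)
        and field_matches(rule["verbs"], verb)
        for rule in rules
    )
```

Note that RBAC is purely additive: there are no deny rules, so anything not explicitly granted is refused.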

## Monitoring and Observability

## Prometheus Monitoring Setup

```yaml
# monitoring.yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: my-app-monitor
  labels:
    app: my-app
spec:
  selector:
    matchLabels:
      app: my-app
  endpoints:
  - port: metrics
    interval: 30s
    path: /metrics

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  template:
    spec:
      containers:
      - name: app
        image: my-app:latest
        ports:
        - name: http
          containerPort: 3000
        - name: metrics
          containerPort: 9090
        env:
        - name: METRICS_PORT
          value: "9090"
        livenessProbe:
          httpGet:
            path: /health
            port: http
          initialDelaySeconds: 30
          periodSeconds: 10
        readinessProbe:
          httpGet:
            path: /ready
            port: http
          initialDelaySeconds: 5
          periodSeconds: 5
```
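Prometheus scrapes the `metrics` port expecting its plain-text exposition format at `/metrics`. Real applications should use an official Prometheus client library, but the format itself is simple; a sketch of rendering counters by hand:

```python
def render_metrics(counters: dict) -> str:
    """Render counters in Prometheus's text exposition format.

    `counters` maps a metric name to a (help_text, value) pair.
    """
    lines = []
    for name, (help_text, value) in counters.items():
        lines.append(f"# HELP {name} {help_text}")
        lines.append(f"# TYPE {name} counter")
        lines.append(f"{name} {value}")
    return "\n".join(lines) + "\n"
```

Serving metrics on a separate port, as the Deployment above does, keeps the scrape endpoint off the public Service and lets network policies restrict it to the monitoring namespace.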

## Logging Configuration

```yaml
# logging.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: fluent-bit-config
data:
  fluent-bit.conf: |
    [SERVICE]
        Flush         1
        Log_Level     info
        Daemon        off
        Parsers_File  parsers.conf

    [INPUT]
        Name              tail
        Path              /var/log/containers/*.log
        Parser            docker
        Tag               kube.*
        Refresh_Interval  5
        Mem_Buf_Limit     50MB
        Skip_Long_Lines   On

    [FILTER]
        Name                kubernetes
        Match               kube.*
        Kube_URL            https://kubernetes.default.svc:443
        Kube_CA_File        /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        Kube_Token_File     /var/run/secrets/kubernetes.io/serviceaccount/token
        Merge_Log           On
        K8S-Logging.Parser  On
        K8S-Logging.Exclude Off

    [OUTPUT]
        Name   es
        Match  *
        Host   elasticsearch.logging.svc.cluster.local
        Port   9200
        Index  kubernetes
        Type   _doc
```

## Production Deployment Strategies

## Blue-Green Deployment

```yaml
# blue-green-deployment.yaml
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: my-app-rollout
spec:
  replicas: 5
  strategy:
    blueGreen:
      activeService: my-app-active
      previewService: my-app-preview
      autoPromotionEnabled: false
      scaleDownDelaySeconds: 30
      prePromotionAnalysis:
        templates:
        - templateName: success-rate
        args:
        - name: service-name
          value: my-app-preview
      postPromotionAnalysis:
        templates:
        - templateName: success-rate
        args:
        - name: service-name
          value: my-app-active
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: app
        image: my-app:latest
        ports:
        - containerPort: 3000
```

## Canary Deployment

```yaml
# canary-deployment.yaml
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: my-app-canary
spec:
  replicas: 10
  strategy:
    canary:
      steps:
      - setWeight: 10
      - pause: {duration: 1m}
      - setWeight: 20
      - pause: {duration: 2m}
      - setWeight: 50
      - pause: {duration: 5m}
      - setWeight: 100
      analysis:
        templates:
        - templateName: error-rate
        - templateName: response-time
        args:
        - name: service-name
          value: my-app-canary
      trafficRouting:
        nginx:
          stableService: my-app-stable
          canaryService: my-app-canary
          annotationPrefix: nginx.ingress.kubernetes.io
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: app
        image: my-app:latest
```
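With `trafficRouting` configured, Argo Rollouts shifts traffic at the ingress layer; without it, each `setWeight` maps onto replica counts instead. As a rough sketch of the replica-based interpretation (the rounding is an approximation of the controller's behavior):

```python
import math


def canary_pod_counts(replicas: int, weights):
    """Approximate stable/canary pod split at each setWeight step."""
    schedule = []
    for weight in weights:
        canary = math.ceil(replicas * weight / 100)
        schedule.append(
            {"weight": weight, "canary": canary, "stable": replicas - canary}
        )
    return schedule
```

So with 10 replicas, the 10/20/50/100 steps above expose roughly 1, 2, 5, then all 10 pods to the new version, with the analysis templates able to abort the rollout at any pause.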

## Helm Charts for Package Management

## Chart Structure

```yaml
# Chart.yaml
apiVersion: v2
name: my-app
description: A Helm chart for my application
type: application
version: 0.1.0
appVersion: "1.0.0"
keywords:
  - web
  - application
home: https://myapp.com
sources:
  - https://github.com/myorg/my-app
maintainers:
  - name: DevOps Team
    email: devops@mycompany.com
dependencies:
  - name: postgresql
    version: 12.x.x
    repository: https://charts.bitnami.com/bitnami
    condition: postgresql.enabled
  - name: redis
    version: 17.x.x
    repository: https://charts.bitnami.com/bitnami
    condition: redis.enabled
```

## Values and Templates

```yaml
# values.yaml
replicaCount: 3

image:
  repository: my-app
  pullPolicy: IfNotPresent
  tag: "latest"

service:
  type: ClusterIP
  port: 80
  targetPort: 3000

ingress:
  enabled: true
  className: nginx
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
  hosts:
    - host: myapp.example.com
      paths:
        - path: /
          pathType: Prefix
  tls:
    - secretName: myapp-tls
      hosts:
        - myapp.example.com

resources:
  limits:
    cpu: 500m
    memory: 512Mi
  requests:
    cpu: 250m
    memory: 256Mi

autoscaling:
  enabled: true
  minReplicas: 2
  maxReplicas: 10
  targetCPUUtilizationPercentage: 70
  targetMemoryUtilizationPercentage: 80

postgresql:
  enabled: true
  auth:
    postgresPassword: secretpassword
    database: myapp

redis:
  enabled: true
  auth:
    enabled: false
```

## Best Practices Summary

## Docker Best Practices

- **Use multi-stage builds** to reduce image size
- **Run containers as non-root users** for security
- **Minimize layers** and use .dockerignore
- **Use specific image tags** instead of 'latest'
- **Implement health checks** for reliability

## Kubernetes Best Practices

- **Set resource requests and limits** for all containers
- **Use namespaces** to organize resources
- **Implement proper RBAC** for security
- **Use ConfigMaps and Secrets** for configuration
- **Monitor and log everything** for observability

## Security Best Practices

- **Scan images** for vulnerabilities
- **Use Pod Security Standards** to enforce security policies
- **Implement network policies** to control traffic
- **Rotate secrets** regularly
- **Keep Kubernetes updated** with latest security patches

## Conclusion

Containerization with Docker and orchestration with Kubernetes provide the foundation for modern, scalable applications. Key takeaways:

- **Start simple** with Docker containers and docker-compose
- **Learn Kubernetes fundamentals** before advanced patterns
- **Focus on security** from the beginning
- **Implement monitoring and logging** for production readiness
- **Use Helm charts** for repeatable deployments
- **Adopt GitOps practices** for deployment automation

Success with containerization requires understanding both the technology and operational practices that make it work at scale.
