
Reverse Proxying Everything: How Traefik Became My Home Lab's Backbone

traefik
docker
homelab

After migrating my containers to MACVLAN, every service had its own IP on the LAN. 36 services. 36 IPs. And me with a browser full of bookmarks that looked like 192.168.1.72:8096 for Jellyfin, 192.168.1.74:8080 for a staging app, 192.168.1.81:9000 for something I’d completely forgotten about.

It was chaos. Clean in theory (every container its own identity) but a nightmare to navigate. I needed one clean entry point that handled routing, SSL, and security headers without me babysitting a central config file every time I deployed something new.

The solution was Traefik. But spinning up the container was the easy part.

If you’re wondering why I didn’t go with Nginx Proxy Manager, here’s the honest answer: NPM is great if you want a GUI and you have a dozen services. But when you’re orchestrating 36 containers and want your proxy configuration to live inside each service’s docker-compose.yml, Traefik’s label-based discovery is in a completely different league. You add labels to a service, Traefik discovers it, and routing and SSL are live—no central file to touch, no GUI to click through at 2 AM.
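To make that concrete, here is what opting a service in looks like, as a sketch: the hostname is a placeholder under the wildcard domain used later in this post, and Jellyfin's default port 8096 matches the bookmark from the intro. These five labels live in the service's own compose file; Traefik picks them up on deploy.

```yaml
services:
  jellyfin:
    image: jellyfin/jellyfin
    labels:
      # Required because exposedbydefault=false (see below)
      - "traefik.enable=true"
      # Route this hostname to this container
      - "traefik.http.routers.jellyfin.rule=Host(`jellyfin.lab.mydomain.com`)"
      - "traefik.http.routers.jellyfin.entrypoints=websecure"
      - "traefik.http.routers.jellyfin.tls.certresolver=cloudflare"
      # Tell Traefik which container port to forward to
      - "traefik.http.services.jellyfin.loadbalancer.server.port=8096"
```

No central routing file is touched; deleting the container removes the route just as automatically.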

The core setup

The key isn’t the image version. It’s two specific flags that make or break the implementation:

services:
  traefik:
    image: traefik:v3.0
    container_name: traefik
    command:
      - "--providers.docker=true"
      - "--providers.docker.exposedbydefault=false"  # This one saves you from yourself
      - "--entrypoints.web.address=:80"
      - "--entrypoints.websecure.address=:443"
      - "--certificatesresolvers.cloudflare.acme.dnschallenge.provider=cloudflare"
      - "--certificatesresolvers.cloudflare.acme.email=your@email.com"
      - "--certificatesresolvers.cloudflare.acme.storage=/letsencrypt/acme.json"
    environment:
      # The Cloudflare DNS challenge needs an API token with DNS edit rights
      - CF_DNS_API_TOKEN=your-cloudflare-api-token
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - ./letsencrypt:/letsencrypt
    networks:
      direct_lan:
        ipv4_address: 192.168.1.65

exposedbydefault=false is non-negotiable. Without it, every container you spin up (test containers, one-off scripts, anything) gets automatically routed to the public internet. That’s not a misconfiguration; that’s a security incident waiting to happen. With this flag, every service has to explicitly opt in with a traefik.enable=true label. You stay in control.

Middleware is where the real work happens

I didn’t appreciate middleware until I had to debug a service that was leaking headers. That’s when I built what I now call the “base chain”: a set of security headers and rate limits every public service gets by default:

labels:
  - "traefik.http.middlewares.secure-headers.headers.stsSeconds=31536000"
  - "traefik.http.middlewares.secure-headers.headers.browserXssFilter=true"
  - "traefik.http.middlewares.secure-headers.headers.contentTypeNosniff=true"
  - "traefik.http.middlewares.ratelimit.ratelimit.average=100"
  - "traefik.http.middlewares.ratelimit.ratelimit.burst=50"
  - "traefik.http.routers.myapp.middlewares=secure-headers,ratelimit"

HSTS, XSS filtering, content-type enforcement, rate limiting. These aren’t extras I bolt on for production. They’re the baseline. After spending time revisiting security questions the hard way, my tolerance for “we’ll harden it later” is zero.
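At 36 services, repeating that middleware list on every router gets tedious. Traefik’s chain middleware can bundle the list under a single name, so each router attaches one middleware instead of enumerating them all. A sketch, reusing the secure-headers and ratelimit middlewares defined above (the router name is a placeholder):

```yaml
labels:
  # Bundle the baseline middlewares into one reusable chain
  - "traefik.http.middlewares.base-chain.chain.middlewares=secure-headers,ratelimit"
  # Each router then references the chain by name
  - "traefik.http.routers.newservice.middlewares=base-chain"
```

Adding a middleware to the chain later updates every router that references it, which is exactly what you want for a fleet-wide baseline.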

One certificate, all services

Instead of issuing individual certificates for each subdomain, I set up a wildcard cert for *.lab.mydomain.com via DNS challenge through Cloudflare. One certificate. Every service that needs HTTPS gets it automatically on deploy. I haven’t manually renewed or issued a certificate in months, and that’s exactly how it should feel.
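In label terms, the wildcard is requested on a router via tls.domains, which tells the resolver to obtain one certificate covering the whole subdomain space rather than per-hostname certs. A sketch, using the same placeholder domain:

```yaml
labels:
  - "traefik.http.routers.myapp.tls.certresolver=cloudflare"
  # Ask for one cert covering the apex and every subdomain
  - "traefik.http.routers.myapp.tls.domains[0].main=lab.mydomain.com"
  - "traefik.http.routers.myapp.tls.domains[0].sans=*.lab.mydomain.com"
```

Because the DNS challenge proves domain ownership through a TXT record instead of an HTTP request, this works even for services that are never reachable from the internet.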

What six months in production actually taught me

Three things I’d tell myself back when I was still figuring this out.

The Docker socket mount is more dangerous than it looks. Mounting /var/run/docker.sock into Traefik is standard practice, but if that container gets compromised, whoever’s inside has a direct line to every other container on the host. Read-only helps, but it doesn’t eliminate the risk. I’m in the process of switching to a socket proxy—Tecnativa’s is the one I’m evaluating—to add an isolation layer between Traefik and the socket.
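The shape of that setup, as a sketch (the image name and CONTAINERS flag are Tecnativa’s documented ones; the network name is a placeholder): the proxy holds the socket mount, grants only the API sections Traefik needs, and Traefik talks to it over an internal network instead of touching the socket directly.

```yaml
services:
  socket-proxy:
    image: tecnativa/docker-socket-proxy
    environment:
      # Expose only the container-listing API; everything else stays denied
      - CONTAINERS=1
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
    networks:
      - socket_net

  traefik:
    image: traefik:v3.0
    command:
      - "--providers.docker=true"
      # Point Traefik at the proxy instead of the raw socket
      - "--providers.docker.endpoint=tcp://socket-proxy:2375"
    networks:
      - socket_net

networks:
  socket_net:
    internal: true  # no route to the outside world
```

A compromised Traefik can then list containers but not start, stop, or exec into them.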

Traefik’s access logs have saved me more than once. Twice I spent half an hour debugging intermittent 502 errors before remembering to enable access logging and actually read the output. It took minutes once I had the data. Log everything, always.
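Enabling them is a few static flags. A sketch (the file path is arbitrary; the status-code filter keeps the log focused on exactly the 5xx errors I was chasing):

```yaml
command:
  - "--accesslog=true"
  - "--accesslog.filepath=/var/log/traefik/access.log"
  # Only record server-side errors; drop the healthy-traffic noise
  - "--accesslog.filters.statuscodes=500-599"
```

With that in place, a 502 shows up with the router and backend that produced it, which turns guesswork into a grep.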

Consistent naming conventions matter more than you think. At five services, your middleware names don’t matter. At 36, ambiguous names like my-middleware and app-middleware are how you end up spending an afternoon untangling a routing conflict. I now follow a strict service-function convention: jellyfin-ratelimit, staging-auth, internal-headers. When something breaks at midnight, being able to grep for the config you want is not a luxury—it’s what keeps the fix under 10 minutes.

A reverse proxy isn’t just a convenience layer. It’s the front door of your entire infrastructure. Treat it with the same seriousness you’d give to a firewall, because operationally, that’s exactly what it is. Every request to every service passes through it, which means every security decision—rate limits, headers, authentication—lives there too. Build it right once and it becomes the most reliable piece of your stack.

From the Lab

This experiment was conducted by Ionastec. Need this level of technical rigor for your business?

Consult Ionastec