
Web Servers & Reverse Proxies


A web server (or reverse proxy) often sits in front of your backend application.

Even if your backend is an Express app, production setups commonly look like:

(Diagram: Client → Reverse Proxy → Express app)

A reverse proxy receives requests from clients and forwards them to your application.

Common jobs:

  • TLS termination: handle HTTPS certificates (so your app can run plain HTTP internally)
  • Static files: serve assets efficiently
  • Compression: gzip/brotli responses
  • Caching: cache responses for speed
  • Load balancing: distribute traffic across multiple app instances
  • Rate limiting: protect against abuse
  • Request limits: cap upload sizes / header sizes
Popular reverse proxies:

  • Nginx: very popular, fast, and flexible
  • Apache: older but feature-rich
  • Caddy: great developer experience; automatic HTTPS makes certificates easy
  • Traefik: common in container/Kubernetes environments
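
Rate limiting from the job list above is usually configured in the proxy itself rather than in your app. A minimal Nginx sketch (the zone name, size, and rates here are illustrative, not recommendations):

```nginx
# Goes in the http { } context: track clients by IP,
# in a 10 MB shared zone, allowing 10 requests/second each.
limit_req_zone $binary_remote_addr zone=perip:10m rate=10r/s;

server {
    listen 80;

    location / {
        # Allow short bursts of up to 20 extra requests, served
        # immediately (nodelay); beyond that, return 503.
        limit_req zone=perip burst=20 nodelay;
        proxy_pass http://127.0.0.1:3000;
    }
}
```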

Express Behind a Reverse Proxy (Key Notes)

  • If your reverse proxy terminates TLS, your Express app receives plain HTTP internally, so req.secure is false unless you tell Express to trust the proxy.
  • If you use cookies with secure: true, you usually need app.set("trust proxy", 1) so Express reads the X-Forwarded-Proto header and knows the original request was really HTTPS.
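
In Express itself this is just app.set("trust proxy", 1). To make the mechanism concrete, here is a minimal sketch in plain Node (the function name and its trustProxy flag are illustrative) of the decision Express makes for req.secure:

```javascript
// Sketch of how a proxy-aware app decides whether a request is "secure":
// trust the X-Forwarded-Proto header set by the proxy, but only if we
// have explicitly opted in (the equivalent of `trust proxy` in Express).
function requestIsSecure(req, trustProxy) {
  if (req.socket && req.socket.encrypted) return true; // direct TLS connection
  if (!trustProxy) return false; // don't believe proxy headers by default
  // The proxy may append to an existing header; the first entry is the client's.
  const proto = (req.headers["x-forwarded-proto"] || "").split(",")[0].trim();
  return proto === "https";
}
```

Without the opt-in, any client could spoof X-Forwarded-Proto, which is why trusting the proxy is off by default.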

This example Nginx configuration sends all traffic to an Express app running on localhost:3000.

server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}

WebSockets, SSE, and Streaming Through a Proxy


Real-time features often require proxy settings to avoid buffering and to support connection upgrades.

WebSockets require an HTTP upgrade.

location /ws {
    proxy_pass http://127.0.0.1:3000;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;
}

SSE and streaming rely on long-lived connections and sending chunks quickly.

location /events {
    proxy_pass http://127.0.0.1:3000;
    proxy_http_version 1.1;
    proxy_set_header Connection "";
    proxy_buffering off;
    proxy_cache off;
}
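
A minimal sketch of the SSE endpoint behind that /events location, in plain Node (names are illustrative). SSE is just a long-lived HTTP response whose body is a stream of "data: ...\n\n" chunks, which is exactly why proxy_buffering must be off: a buffering proxy would hold events back instead of delivering them immediately.

```javascript
// One SSE message: a "data:" line terminated by a blank line.
function sseMessage(data) {
  return `data: ${data}\n\n`;
}

// Long-lived handler: set the SSE headers, then write an event every
// second until the client disconnects.
function sseHandler(req, res) {
  res.writeHead(200, {
    "Content-Type": "text/event-stream",
    "Cache-Control": "no-cache",
  });
  let n = 0;
  const timer = setInterval(() => res.write(sseMessage(`tick ${++n}`)), 1000);
  req.on("close", () => clearInterval(timer)); // stop when the client goes away
}
```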

When your app runs multiple instances:

(Diagram: Client → Load Balancer → App instance 1 / App instance 2 / …)

Important: real-time connections (WebSockets/SSE) can be sensitive to load balancing. You may need sticky sessions or a shared pub/sub layer depending on your design.

Case Study 1: Single VM Deployment (Common First Step)

  • A single Linux VM runs:
    • Nginx on :443 (HTTPS)
    • Express on :3000
  • Nginx terminates TLS and forwards requests to Express.
  • Logs:
    • Nginx access/error logs
    • App logs (stdout) collected by your process manager

This setup is cheap, understandable, and handles a lot of traffic for small/medium apps.

Case Study 2: Real-Time Features Behind a Proxy

  • You add SSE at /events and/or WebSockets at /ws.
  • You must ensure:
    • WebSockets: Upgrade headers are forwarded.
    • SSE/streaming: proxy buffering is off, so events arrive immediately.

If you later add multiple app instances behind a load balancer, you may need:

  • Sticky sessions (same client goes to the same instance), or
  • A shared pub/sub (e.g., Redis pub/sub) so updates reach clients no matter which instance they’re connected to.
Built with passion by Ngineer Lab