OpsSquad.ai

NGINX Configuration Example: Mastering Reverse Proxy and Static Content Serving in 2026

What is NGINX and Why is its Configuration Crucial?

NGINX is a high-performance, open-source web server that can also be used as a reverse proxy, load balancer, mail proxy, and HTTP cache. Its efficient architecture, particularly its event-driven, asynchronous model, makes it a popular choice for handling high traffic loads. Understanding NGINX configuration is paramount for optimizing performance, ensuring security, and enabling flexible deployment of web applications. This guide will delve into practical NGINX configuration examples, focusing on common use cases like serving static content and acting as a reverse proxy, while also addressing security considerations relevant in 2026.

TL;DR: NGINX configuration controls how your web server handles requests, proxies traffic, and serves content. Proper configuration directly impacts site performance, security, and scalability. This guide provides working examples for static content serving, reverse proxying, SSL setup, and multi-site hosting, with realistic troubleshooting scenarios.

As of 2026, NGINX powers approximately 34% of the world's busiest websites, maintaining its position as one of the most widely deployed web servers globally. Its configuration flexibility allows DevOps engineers to handle everything from simple static sites to complex microservices architectures with sophisticated load balancing and caching strategies.

The Core of NGINX: Master and Worker Processes

NGINX operates with a single master process and multiple worker processes. The master process reads configuration files, binds to ports, and manages worker processes. Worker processes handle actual client requests using an efficient event-driven model that can manage thousands of concurrent connections with minimal resource overhead.

Understanding this architecture helps in tuning parameters like worker_processes and worker_connections for optimal resource utilization. On a modern server in 2026, you'll typically set worker_processes to match your CPU core count, while worker_connections might range from 1024 to 4096 depending on your expected traffic patterns and available system resources.

# Global configuration typically in nginx.conf
user www-data;
worker_processes auto;  # Automatically detects CPU cores
pid /run/nginx.pid;
 
events {
    worker_connections 2048;  # Each worker can handle 2048 connections
    use epoll;  # Efficient connection processing method for Linux
    multi_accept on;  # Accept multiple connections at once
}

NGINX Configuration File Structure: nginx.conf and Beyond

The primary configuration file is typically nginx.conf, located at /etc/nginx/nginx.conf on most Linux distributions. However, for better organization, NGINX supports including other configuration files, often found in directories like /etc/nginx/conf.d/ or /etc/nginx/sites-available/ (with symlinks in /etc/nginx/sites-enabled/). This modular approach is key to managing complex NGINX setups.

The configuration hierarchy follows a nested block structure: the main context contains the events and http blocks, which in turn contain server blocks, which contain location blocks. Directives inherit from parent contexts unless explicitly overridden.

# Main context
user www-data;
worker_processes auto;
 
events {
    # Events context
    worker_connections 2048;
}
 
http {
    # HTTP context - applies to all HTTP traffic
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    
    # Include all site configurations
    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}

Note: Always test your configuration with nginx -t before reloading. This catches syntax errors before they can cause downtime.

Configuring NGINX to Serve Static Content Efficiently

Serving static files (HTML, CSS, JavaScript, images) is a fundamental task for any web server. NGINX excels at this due to its non-blocking I/O model, capable of serving tens of thousands of static file requests per second on modest hardware. Efficiently configuring NGINX for static content involves specifying the root directory, handling file types, and leveraging caching mechanisms.

Defining the Document Root with root and alias Directives

The root directive specifies the base directory for requests. When NGINX receives a request for /images/logo.png with root /var/www/html, it looks for the file at /var/www/html/images/logo.png. The root directive appends the complete URI path to the specified directory.

server {
    listen 80;
    server_name example.com;
    
    location / {
        root /var/www/html;
        index index.html index.htm;
    }
}

The alias directive offers more flexibility, allowing you to map a specific URL path to a different file system path. Unlike root, alias replaces the matched location path entirely. This is particularly useful when your URL structure doesn't mirror your file system structure.

server {
    listen 80;
    server_name example.com;
    
    # Using root - request to /assets/style.css looks in /var/www/html/assets/style.css
    location /assets/ {
        root /var/www/html;
    }
    
    # Using alias - request to /static/style.css looks in /var/www/static-files/style.css
    location /static/ {
        alias /var/www/static-files/;  # Note the trailing slash
    }
}

Warning: When using alias, always include a trailing slash if the location ends with a slash. Mismatched slashes are a common source of 404 errors.
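The pitfall is easiest to see side by side. In the broken variant below, NGINX substitutes the location prefix with the alias value verbatim, so the missing slash glues the path fragments together (paths are illustrative):

```nginx
# BROKEN: location ends with a slash but alias does not, so a request
# for /static/style.css maps to /var/www/static-filesstyle.css -> 404
location /static/ {
    alias /var/www/static-files;
}

# CORRECT: both end with a slash, so /static/style.css
# maps to /var/www/static-files/style.css
location /static/ {
    alias /var/www/static-files/;
}
```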

Optimizing Static File Delivery with index and try_files

The index directive specifies default files to serve when a directory is requested. When a user visits example.com/, NGINX checks for index.html, then index.htm in the document root. The try_files directive is powerful for handling fallbacks, such as serving a static file or falling back to a different location if the file isn't found.

server {
    listen 80;
    server_name example.com;
    root /var/www/html;
    
    location / {
        index index.html index.htm;
        try_files $uri $uri/ /index.html;  # Serve file, then directory, then fallback to index.html
    }
}

This configuration is particularly useful for single-page applications (SPAs) built with React, Vue, or Angular. When a user navigates directly to /dashboard/analytics, NGINX first checks for that exact file, then a directory, and finally falls back to index.html, allowing the JavaScript framework to handle routing.

Here's a more comprehensive example for a production static site:

server {
    listen 80;
    server_name static.example.com;
    root /var/www/static-site;
    
    # Default file handling
    index index.html index.htm;
    
    # Main location block
    location / {
        try_files $uri $uri/ =404;  # Return 404 if file not found
    }
    
    # Serve specific file types with optimized settings
    location ~* \.(html|htm)$ {
        try_files $uri =404;
        add_header Cache-Control "no-cache, must-revalidate";
    }
    
    # Handle missing favicon gracefully
    location = /favicon.ico {
        log_not_found off;
        access_log off;
    }
    
    # Deny access to hidden files
    location ~ /\. {
        deny all;
        access_log off;
        log_not_found off;
    }
}

Leveraging Browser Caching for Static Assets

To reduce server load and improve perceived performance, configure browser caching using the expires directive. This tells the client's browser how long to cache a resource, reducing repeat requests for unchanged files. As of 2026, with average webpage sizes exceeding 2MB, effective caching strategies can reduce bandwidth consumption by 60-70% for returning visitors.

server {
    listen 80;
    server_name example.com;
    root /var/www/html;
    
    # Cache static assets aggressively
    location ~* \.(css|js|jpg|jpeg|png|gif|ico|svg|woff|woff2|ttf|eot)$ {
        expires 1y;  # Cache for 1 year
        add_header Cache-Control "public, immutable";
        access_log off;  # Don't log static asset requests
    }
    
    # Shorter cache for HTML to allow content updates
    location ~* \.(html|htm)$ {
        expires 1h;
        add_header Cache-Control "public, must-revalidate";
    }
    
    # No caching for API responses
    location /api/ {
        expires -1;
        add_header Cache-Control "no-store, no-cache, must-revalidate";
        proxy_pass http://backend_api;
    }
}

Pro tip: Use versioned filenames (like style.v123.css) or query strings for static assets when using long cache times. This allows you to bust the cache immediately when deploying updates by changing the version number.
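One way to serve such versioned filenames without keeping multiple copies on disk is to strip the version string with a regex location. This is a sketch, assuming assets live under /var/www/html/assets and use the style.v123.css naming scheme from the tip above:

```nginx
# Map /assets/style.v123.css back to the real file /assets/style.css,
# so a deploy only needs to bump the version string in the HTML
location ~* ^/assets/(.+)\.v\d+\.(css|js)$ {
    root /var/www/html;
    try_files /assets/$1.$2 =404;  # try_files paths are relative to root
    expires 1y;
    add_header Cache-Control "public, immutable";
}
```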

NGINX as a Reverse Proxy: Routing Traffic to Backend Applications

One of NGINX's most common and powerful uses is as a reverse proxy. It sits in front of one or more backend application servers, forwarding client requests and returning responses. This provides benefits like load balancing, SSL termination, enhanced security by hiding backend infrastructure, and the ability to serve multiple applications from a single public IP address.

Basic Reverse Proxy Configuration with proxy_pass

The proxy_pass directive is the cornerstone of NGINX reverse proxying. It specifies the address of the upstream server to which NGINX should forward requests. The upstream server can be specified by IP address, hostname, or as part of an upstream block for load balancing.

server {
    listen 80;
    server_name app.example.com;
    
    location / {
        proxy_pass http://localhost:3000;  # Forward to Node.js app on port 3000
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}

This basic configuration forwards all requests to a backend application running on port 3000. The additional headers ensure WebSocket connections work properly, which is essential for real-time applications in 2026.

Here's a more complete example with multiple backend services:

http {
    # Define upstream servers
    upstream backend_servers {
        server 192.168.1.100:8080;
        server 192.168.1.101:8080;
    }
    
    upstream api_servers {
        server 10.0.1.10:9000;
        server 10.0.1.11:9000;
    }
    
    server {
        listen 80;
        server_name example.com;
        
        # Main application
        location / {
            proxy_pass http://backend_servers;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
            
            # Timeout settings
            proxy_connect_timeout 60s;
            proxy_send_timeout 60s;
            proxy_read_timeout 60s;
        }
        
        # API endpoint
        location /api/ {
            proxy_pass http://api_servers;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
        }
    }
}

Essential proxy_set_header Directives for Backend Communication

When proxying, it's crucial to pass relevant information to the backend application so it can correctly identify the original client and request details. The proxy_set_header directives modify or add HTTP headers before forwarding the request.

Critical headers for reverse proxy configurations:

  • Host: Passes the original Host header from the client. Without this, the backend receives the upstream server address as the host, breaking virtual host configurations.
  • X-Real-IP: Passes the actual IP address of the client. Essential for logging, rate limiting, and geolocation.
  • X-Forwarded-For: A comma-separated list of IP addresses the request has passed through, including the original client IP.
  • X-Forwarded-Proto: Indicates the original protocol (HTTP or HTTPS), critical when NGINX terminates SSL but communicates with backends over HTTP.
location / {
    proxy_pass http://backend;
    
    # Preserve original request information
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header X-Forwarded-Host $host;
    proxy_set_header X-Forwarded-Port $server_port;
    
    # For WebSocket support
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    
    # Security headers
    proxy_set_header X-Request-ID $request_id;  # Useful for request tracing
}

Note: Your backend application needs to be configured to trust and use these headers. In 2026, most frameworks (Django, Express, Rails, Spring Boot) have built-in middleware to handle forwarded headers correctly.
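The same trust problem exists on the NGINX side when NGINX itself sits behind another proxy or load balancer: $remote_addr would otherwise be the front proxy's address. The ngx_http_realip_module restores the real client IP. A sketch, assuming the trusted front proxy lives in 10.0.0.0/8:

```nginx
# Recover the real client IP when NGINX is behind a trusted proxy
set_real_ip_from 10.0.0.0/8;     # only honor the header from this range
real_ip_header X-Forwarded-For;  # take the client IP from this header
real_ip_recursive on;            # skip trusted addresses in the chain
```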

Load Balancing with Upstream Blocks

The upstream block allows you to define a group of backend servers. NGINX will then distribute requests among these servers using various load balancing methods. The default method is round-robin, but NGINX supports several algorithms optimized for different use cases.

upstream api_servers {
    # Load balancing method (optional, round-robin is default)
    least_conn;  # Directs requests to the server with the fewest active connections
    
    # Backend servers with optional parameters
    server api1.example.com:8000 weight=3;  # Higher weight means more requests
    server api2.example.com:8000 weight=1;
    server api3.example.com:8000 backup;  # Only used if primary servers are down
    
    # Health check parameters
    server api4.example.com:8000 max_fails=3 fail_timeout=30s;
    
    # Connection pooling for better performance
    keepalive 32;  # Maintain 32 idle keepalive connections to upstream servers
}
 
server {
    listen 80;
    server_name api.example.com;
    
    location / {
        proxy_pass http://api_servers;
        proxy_http_version 1.1;
        proxy_set_header Connection "";  # Clear Connection header for keepalive
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}

Available load balancing methods:

  • round-robin (default): Distributes requests evenly across servers
  • least_conn: Sends requests to the server with the fewest active connections
  • ip_hash: Routes clients to the same server based on their IP address (session persistence)
  • hash $variable: Custom hash-based distribution using any variable
  • random: Randomly selects a server; the variant random two least_conn picks two servers at random and routes to the one with fewer active connections
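The hash method from the list above is commonly used to shard a cache tier so the same URL always lands on the same backend. A sketch with hypothetical internal hostnames:

```nginx
upstream cache_shards {
    # Route by URL so each backend caches a distinct subset of content;
    # "consistent" (ketama) minimizes remapping when servers are added or removed
    hash $request_uri consistent;
    server cache1.internal:6081;
    server cache2.internal:6081;
    server cache3.internal:6081;
}
```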

Here's an advanced example with session persistence for stateful applications:

upstream app_servers {
    ip_hash;  # Same client IP always goes to same backend
    
    server app1.internal:8080 max_fails=2 fail_timeout=10s;
    server app2.internal:8080 max_fails=2 fail_timeout=10s;
    server app3.internal:8080 max_fails=2 fail_timeout=10s;
    
    keepalive 16;
}
 
server {
    listen 443 ssl http2;
    server_name app.example.com;
    
    ssl_certificate /etc/letsencrypt/live/app.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/app.example.com/privkey.pem;
    
    location / {
        proxy_pass http://app_servers;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        
        # Buffer settings for better performance
        proxy_buffering on;
        proxy_buffer_size 4k;
        proxy_buffers 8 4k;
        proxy_busy_buffers_size 8k;
    }
}

NGINX Server Blocks: Hosting Multiple Websites on a Single Server

Server blocks, the NGINX equivalent of Apache's virtual hosts, are how NGINX handles multiple domain names or IP addresses on a single server. Each server block defines a virtual host with its own configuration, allowing you to host dozens or even hundreds of websites on a single NGINX instance.

Configuring Basic Server Blocks for Different Domains

Each server block listens on a specific port and responds to requests for specific domain names defined in the server_name directive. NGINX matches the Host header from incoming requests against configured server names to determine which server block should handle the request.

# Site 1: example.com
server {
    listen 80;
    server_name example.com www.example.com;
    root /var/www/example.com/html;
    index index.html index.htm;
    
    access_log /var/log/nginx/example.com.access.log;
    error_log /var/log/nginx/example.com.error.log;
    
    location / {
        try_files $uri $uri/ =404;
    }
}
 
# Site 2: another-example.org
server {
    listen 80;
    server_name another-example.org www.another-example.org;
    root /var/www/another-example.org/html;
    index index.html index.htm;
    
    access_log /var/log/nginx/another-example.org.access.log;
    error_log /var/log/nginx/another-example.org.error.log;
    
    location / {
        try_files $uri $uri/ =404;
    }
}
 
# Default server block - catches requests with no matching server_name
server {
    listen 80 default_server;
    server_name _;
    return 444;  # Close connection without response
}

Best practice: Always configure a default server block to handle requests that don't match any configured domain. This prevents NGINX from serving the first configured site for arbitrary Host headers, which can have security implications.
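The same catch-all is worth adding for HTTPS. Since NGINX 1.19.4, ssl_reject_handshake lets the default server drop TLS connections for unknown hostnames without needing a dummy certificate. A sketch:

```nginx
# Default HTTPS server: reject TLS handshakes that don't match any
# configured server_name, no certificate required (NGINX 1.19.4+)
server {
    listen 443 ssl default_server;
    ssl_reject_handshake on;
}
```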

For production deployments in 2026, organize server blocks in separate files:

# File structure
/etc/nginx/
├── nginx.conf
├── sites-available/
│   ├── example.com.conf
│   ├── another-example.org.conf
│   └── api.example.com.conf
└── sites-enabled/
    ├── example.com.conf -> ../sites-available/example.com.conf
    └── another-example.org.conf -> ../sites-available/another-example.org.conf

Enable a site by creating a symlink:

sudo ln -s /etc/nginx/sites-available/example.com.conf /etc/nginx/sites-enabled/
sudo nginx -t  # Test configuration
sudo systemctl reload nginx

Using location Blocks for Path-Specific Routing

location blocks within a server block define how NGINX handles requests for specific URI paths. They can be used for serving static files from different directories, proxying to different backends based on URL path, or applying specific configurations like authentication or rate limiting.

Location matching types (in order of precedence):

  1. Exact match: location = /path
  2. Preferential prefix match: location ^~ /path (longest matching prefix; skips regex evaluation)
  3. Regex match: location ~ pattern (case-sensitive) or location ~* pattern (case-insensitive); the two share the same precedence and are checked in the order they appear in the configuration, first match wins
  4. Prefix match: location /path (longest matching prefix, used when no regex matches)
server {
    listen 80;
    server_name example.com;
    root /var/www/example.com/html;
    
    # Exact match - highest priority
    location = / {
        index index.html;
    }
    
    # Preferential prefix match - static assets
    location ^~ /static/ {
        alias /var/www/example.com/static/;
        expires 1y;
        add_header Cache-Control "public, immutable";
    }
    
    # Regex match - PHP files
    location ~ \.php$ {
        include snippets/fastcgi-php.conf;
        fastcgi_pass unix:/var/run/php/php8.2-fpm.sock;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }
    
    # Prefix match - proxy API requests
    location /api/ {
        proxy_pass http://localhost:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        
        # Rate limiting for API
        limit_req zone=api_limit burst=10 nodelay;
    }
    
    # Regex match - image files with optimization
    location ~* \.(jpg|jpeg|png|gif|webp)$ {
        root /var/www/example.com/html;
        expires 30d;
        add_header Cache-Control "public";
        add_header Vary "Accept-Encoding";
    }
    
    # Default location - SPA fallback
    location / {
        try_files $uri $uri/ /index.html;
    }
}

Here's a practical example for a modern web application with separate frontend and backend:

# Rate limiting zone definition (in http block)
http {
    limit_req_zone $binary_remote_addr zone=api_limit:10m rate=10r/s;
    
    server {
        listen 443 ssl http2;
        server_name app.example.com;
        
        ssl_certificate /etc/letsencrypt/live/app.example.com/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/app.example.com/privkey.pem;
        
        # Frontend - React/Vue/Angular SPA
        location / {
            root /var/www/app.example.com/dist;
            try_files $uri $uri/ /index.html;
            
            # Security headers
            add_header X-Frame-Options "SAMEORIGIN" always;
            add_header X-Content-Type-Options "nosniff" always;
            add_header X-XSS-Protection "1; mode=block" always;
        }
        
        # Backend API
        location /api/ {
            proxy_pass http://localhost:8080;
            proxy_http_version 1.1;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
            
            # CORS headers for API
            add_header Access-Control-Allow-Origin "https://app.example.com" always;
            add_header Access-Control-Allow-Methods "GET, POST, PUT, DELETE, OPTIONS" always;
            add_header Access-Control-Allow-Headers "Authorization, Content-Type" always;
            
            # Rate limiting
            limit_req zone=api_limit burst=20 nodelay;
        }
        
        # WebSocket endpoint
        location /ws/ {
            proxy_pass http://localhost:8080;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
            proxy_set_header Host $host;
            proxy_read_timeout 86400;  # 24 hours for long-lived connections
        }
        
        # Health check endpoint (no logging)
        location /health {
            access_log off;
            default_type text/plain;  # sets Content-Type without adding a duplicate header
            return 200 "healthy\n";
        }
    }
}

NGINX SSL/HTTPS Configuration: Securing Your Websites

In 2026, serving websites over HTTPS is not optional; it's a fundamental security and SEO requirement. Major browsers now flag HTTP sites as "Not Secure," and search engines penalize sites without HTTPS. NGINX makes SSL/TLS configuration straightforward, with support for modern protocols and cipher suites.

Obtaining and Installing SSL Certificates

You'll need an SSL certificate from a Certificate Authority (CA). Let's Encrypt offers free certificates that can be managed with tools like Certbot, which has become the de facto standard for automated certificate management. As of 2026, Let's Encrypt issues over 3 million certificates daily.

Install Certbot and obtain a certificate:

# Install Certbot (Ubuntu/Debian)
sudo apt update
sudo apt install certbot python3-certbot-nginx
 
# Obtain and install certificate automatically
sudo certbot --nginx -d example.com -d www.example.com
 
# Output:
# Saving debug log to /var/log/letsencrypt/letsencrypt.log
# Requesting a certificate for example.com and www.example.com
# 
# Successfully received certificate.
# Certificate is saved at: /etc/letsencrypt/live/example.com/fullchain.pem
# Key is saved at: /etc/letsencrypt/live/example.com/privkey.pem
# This certificate expires on 2026-05-28.
# 
# Deploying certificate
# Successfully deployed certificate for example.com to /etc/nginx/sites-enabled/example.com.conf
# Congratulations! You have successfully enabled HTTPS on https://example.com

Certbot automatically modifies your NGINX configuration and sets up auto-renewal via systemd timer. Verify auto-renewal:

sudo certbot renew --dry-run
 
# Output:
# Saving debug log to /var/log/letsencrypt/letsencrypt.log
# Processing /etc/letsencrypt/renewal/example.com.conf
# Account registered.
# Simulating renewal of an existing certificate for example.com and www.example.com
# Congratulations, all simulated renewals succeeded

Configuring NGINX for HTTPS

Modify your server block to listen on port 443 and specify the SSL certificate paths. Modern NGINX configurations should include HTTP/2 support and strong security parameters.

server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;  # IPv6 support
    server_name example.com www.example.com;
    
    # SSL certificate paths
    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
    
    # SSL protocols and ciphers (2026 best practices)
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_prefer_server_ciphers on;
    ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384;
    
    # SSL session optimization
    ssl_session_cache shared:SSL:10m;
    ssl_session_timeout 10m;
    ssl_session_tickets off;  # Disable for better security
    
    # OCSP stapling for faster certificate validation
    ssl_stapling on;
    ssl_stapling_verify on;
    ssl_trusted_certificate /etc/letsencrypt/live/example.com/chain.pem;
    resolver 8.8.8.8 8.8.4.4 valid=300s;
    resolver_timeout 5s;
    
    # Security headers
    add_header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload" always;
    add_header X-Frame-Options "SAMEORIGIN" always;
    add_header X-Content-Type-Options "nosniff" always;
    add_header X-XSS-Protection "1; mode=block" always;
    
    root /var/www/example.com/html;
    index index.html;
    
    location / {
        try_files $uri $uri/ =404;
    }
}
 
# Redirect HTTP to HTTPS
server {
    listen 80;
    listen [::]:80;
    server_name example.com www.example.com;
    
    # Redirect all HTTP requests to HTTPS
    return 301 https://$host$request_uri;
}

Note: The Strict-Transport-Security header (HSTS) tells browsers to only access your site via HTTPS for the specified duration. Use cautiously and test thoroughly before adding preload, which submits your domain to browsers' HSTS preload lists.
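A cautious rollout pattern is to start with a short max-age and ratchet it up only once you have confirmed that every subdomain serves HTTPS correctly. A sketch:

```nginx
# Stage 1: short-lived HSTS while validating full HTTPS coverage
add_header Strict-Transport-Security "max-age=300" always;

# Stage 2 (after testing): commit for a year
# add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;

# Stage 3 (only when permanent): add preload and submit the domain at hstspreload.org
# add_header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload" always;
```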

Implementing HTTP/2 for Faster Connections

The http2 parameter on the listen directive enables HTTP/2, which offers multiplexing (multiple requests over a single connection) and header compression for significantly faster web page loading. As of 2026, over 85% of web traffic uses HTTP/2 or the newer HTTP/3 protocol.

HTTP/2 is automatically enabled when you add the http2 parameter to your listen directive (shown in the previous example). You can verify HTTP/2 is working:

# Test HTTP/2 support
curl -I --http2 https://example.com
 
# Output:
# HTTP/2 200
# server: nginx/1.24.0
# date: Thu, 27 Feb 2026 10:30:00 GMT
# content-type: text/html
# content-length: 612
# last-modified: Wed, 26 Feb 2026 15:20:00 GMT

For even better performance, consider implementing HTTP/3 (QUIC) if your NGINX version supports it:

server {
    listen 443 ssl http2;
    listen 443 quic reuseport;  # HTTP/3 support
    listen [::]:443 ssl http2;
    listen [::]:443 quic reuseport;
    
    server_name example.com;
    
    # Add Alt-Svc header to advertise HTTP/3
    add_header Alt-Svc 'h3=":443"; ma=86400';
    
    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
    
    # HTTP/3 requires TLSv1.3
    ssl_protocols TLSv1.3;
    ssl_early_data on;
    
    # ... rest of configuration
}

NGINX Directives: The Building Blocks of Configuration

NGINX configuration is built using directives, which are commands that tell NGINX how to behave. Directives can be simple (single line) or block directives that contain other directives. Understanding common directives and their contexts is essential for effective NGINX configuration.

Common NGINX Directives and Their Purpose

Here are the most frequently used NGINX directives you'll encounter in 2026:

Connection and Server Directives:

  • listen: Specifies the port and/or IP address on which NGINX should listen for connections
  • server_name: Defines the domain names that a server block should respond to
  • root: Sets the document root directory for requests
  • index: Specifies default files to serve when a directory is requested
  • return: Immediately returns a status code and optional text or URL

Proxy Directives:

  • proxy_pass: Forwards requests to an upstream server
  • proxy_set_header: Modifies or adds HTTP headers before proxying
  • proxy_buffering: Controls response buffering (on by default)
  • proxy_cache: Enables caching of proxied responses
  • proxy_connect_timeout: Sets timeout for establishing connection to upstream
  • proxy_read_timeout: Sets timeout for reading response from upstream

Content Handling Directives:

  • location: Matches request URIs and applies specific configurations
  • try_files: Checks for file existence in specified order
  • alias: Maps a location to a different filesystem path
  • expires: Sets cache control headers for client-side caching
  • add_header: Adds or modifies response headers

Logging Directives:

  • access_log: Configures access logging (requests)
  • error_log: Configures error logging
  • log_format: Defines custom log formats

SSL/TLS Directives:

  • ssl_certificate: Specifies the SSL certificate file path
  • ssl_certificate_key: Specifies the private key file path
  • ssl_protocols: Defines which SSL/TLS protocols to support
  • ssl_ciphers: Specifies which cipher suites to use
  • ssl_prefer_server_ciphers: Prioritizes server cipher preferences

Understanding Directive Contexts (Global, http, server, location)

Directives have contexts, meaning they can only be used in specific blocks. Understanding directive contexts prevents configuration errors and helps you structure your NGINX configuration correctly.

Context hierarchy:

  1. main (global): Top-level directives outside any block
  2. events: Connection processing directives
  3. http: HTTP server directives
  4. server: Virtual host directives (within http)
  5. location: URI-specific directives (within server)
# main context
user www-data;
worker_processes auto;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
 
# events context
events {
    worker_connections 2048;
    use epoll;
}
 
# http context
http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    
    access_log /var/log/nginx/access.log main;
    
    sendfile on;
    tcp_nopush on;
    keepalive_timeout 65;
    
    # server context
    server {
        listen 80;
        server_name example.com;
        root /var/www/html;
        
        # location context
        location / {
            try_files $uri $uri/ =404;
        }
        
        location /api/ {
            proxy_pass http://localhost:3000;
        }
    }
}

Common context errors:

# WRONG - proxy_pass cannot be in server context
server {
    listen 80;
    proxy_pass http://backend;  # Error: directive not allowed here
}
 
# CORRECT - proxy_pass must be in location context
server {
    listen 80;
    location / {
        proxy_pass http://backend;  # Correct
    }
}

Troubleshooting Common NGINX Configuration Errors

Syntax Errors:

Always test your configuration before reloading NGINX:

sudo nginx -t
 
# Successful output:
# nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
# nginx: configuration file /etc/nginx/nginx.conf test is successful
 
# Error output:
# nginx: [emerg] unexpected "}" in /etc/nginx/sites-enabled/example.com:15
# nginx: configuration file /etc/nginx/nginx.conf test failed

Permission Denied Errors:

# Check NGINX error log
sudo tail -f /var/log/nginx/error.log
 
# Common output:
# 2026/02/27 10:30:15 [crit] 12345#0: *1 open() "/var/www/html/index.html" failed (13: Permission denied)
 
# Fix permissions
sudo chown -R www-data:www-data /var/www/html
sudo chmod -R 755 /var/www/html

502 Bad Gateway:

This indicates NGINX can't connect to the upstream backend. Common causes:

# Check if backend is running
sudo systemctl status your-app
 
# Check backend is listening on expected port
sudo netstat -tlnp | grep 3000
# or
sudo ss -tlnp | grep 3000
 
# Check NGINX error log for details
sudo tail -f /var/log/nginx/error.log
 
# Common error output:
# 2026/02/27 10:30:15 [error] 12345#0: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 1.2.3.4, server: example.com, request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:3000/", host: "example.com"
 
# This means the backend on port 3000 isn't running or isn't accepting connections

404 Not Found:

Verify file paths and location block configurations:

# Check what NGINX is actually looking for
sudo tail -f /var/log/nginx/error.log
 
# Output:
# 2026/02/27 10:30:15 [error] 12345#0: *1 open() "/var/www/html/about.html" failed (2: No such file or directory)
 
# Verify the file exists
ls -la /var/www/html/about.html
 
# Check location block configuration
sudo nginx -T | grep -A 10 "location /"

Configuration Not Taking Effect:

# Reload NGINX after configuration changes
sudo nginx -t && sudo systemctl reload nginx
 
# If reload doesn't work, try restart
sudo systemctl restart nginx
 
# Check NGINX loaded the correct configuration
sudo nginx -T  # Shows the complete parsed configuration

Skip the Manual Work: How OpsSquad Automates NGINX Debugging and Management

While NGINX offers powerful configuration options, managing and troubleshooting complex setups, especially across multiple servers or environments, can be time-consuming and error-prone. When you're dealing with a 502 Bad Gateway error at 2 AM or trying to debug SSL configuration issues across a dozen servers, the manual process of SSHing into each server, tailing logs, checking service status, and testing configuration becomes tedious.

OpsSquad's Security Squad can streamline these tasks, allowing your team to focus on higher-value activities instead of repetitive debugging workflows.

The OpsSquad Advantage: Reverse TCP Architecture for Seamless Access

OpsSquad's unique reverse TCP architecture means you don't need to open inbound firewall ports or configure complex VPNs to access your servers. A lightweight node installed via the CLI establishes a secure outbound connection to the OpsSquad cloud. This simplifies deployment and enhances security by reducing your attack surface: there are no listening ports for attackers to probe, no bastion hosts to maintain, and no VPN credentials to rotate.

The architecture works like this: your servers establish outbound TCP connections to OpsSquad's cloud infrastructure. AI agents organized in Squads (like the Security Squad for infrastructure debugging) execute terminal commands remotely through a chat interface. Every command goes through whitelisting, runs in a sandboxed environment, and is logged to an immutable audit trail.

Deploying and Connecting Your Agents and Squads

Getting started with OpsSquad takes approximately 3 minutes:

1. Create Account and Node: Sign up at app.opssquad.ai and navigate to the Nodes section. Create a Node with a descriptive name like "production-web-servers" or "staging-environment." The dashboard provides a unique Node ID and authentication token—keep these handy for the next step.

2. Deploy Agent: SSH to your server and run the installation commands using the Node ID and token from your dashboard:

# Download and run the OpsSquad installer
curl -fsSL https://install.opssquad.ai/install.sh | bash
 
# Install the node with your credentials from the dashboard
opssquad node install --node-id=node_abc123xyz --token=tok_secret456
 
# Start the node (establishes reverse TCP connection)
opssquad node start
 
# Output:
# [INFO] OpsSquad node starting...
# [INFO] Establishing secure connection to cloud.opssquad.ai
# [INFO] Connection established successfully
# [INFO] Node ID: node_abc123xyz
# [INFO] Status: Active
# [INFO] Awaiting commands from authorized agents

The node is now connected and ready to receive commands from your Squads.

3. Browse Squad Marketplace: In the OpsSquad dashboard, navigate to the Squad Marketplace. Find the Security Squad (designed for infrastructure debugging, security auditing, and system troubleshooting). Click "Deploy Squad" to create your private instance with all its specialized agents.

4. Link Agents to Nodes: Open your deployed Security Squad and go to the Agents tab. Grant the Squad's agents access to your Node by linking them. This permission model ensures agents can only execute commands on infrastructure you explicitly authorize—you maintain granular control over which Squads can access which servers.

5. Start Debugging: Navigate to chat.opssquad.ai, select your Security Squad, and start chatting with AI agents that can execute real commands on your infrastructure.

Real-World Scenario: Debugging a 502 Bad Gateway Error with OpsSquad

Imagine you're facing a 502 Bad Gateway error on example.com at 10:30 AM on a busy Thursday. Instead of manually SSHing into servers, checking NGINX logs, investigating backend service status, and piecing together the problem, you can ask the Security Squad to handle the investigation:

You: Check NGINX logs for example.com and investigate 502 errors.

OpsSquad Security Squad:

[INFO] Connecting to node: production-web-01
[INFO] Executing: sudo tail -n 100 /var/log/nginx/error.log | grep "502\|upstream"

[OUTPUT]
2026/02/27 10:28:43 [error] 12345#0: *6789 upstream prematurely closed connection while reading response header from upstream, client: 203.0.113.45, server: example.com, request: "GET /api/users HTTP/1.1", upstream: "http://127.0.0.1:8080/api/users", host: "example.com"
2026/02/27 10:29:15 [error] 12345#0: *6790 connect() failed (111: Connection refused) while connecting to upstream, client: 203.0.113.46, server: example.com, request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:8080/", host: "example.com"
2026/02/27 10:29:16 [error] 12345#0: *6791 connect() failed (111: Connection refused) while connecting to upstream, client: 203.0.113.47, server: example.com, request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:8080/", host: "example.com"

[ANALYSIS] The NGINX error log shows "Connection refused" errors when attempting to connect to the upstream server at 127.0.0.1:8080. This indicates the backend application is not accepting connections.

[INFO] Executing: sudo systemctl status backend-api

[OUTPUT]
● backend-api.service - Backend API Service
     Loaded: loaded (/etc/systemd/system/backend-api.service; enabled; vendor preset: enabled)
     Active: inactive (dead) since Thu 2026-02-27 10:28:30 UTC; 2min ago
    Process: 54321 ExecStart=/usr/local/bin/backend-api (code=exited, status=1/FAILURE)
   Main PID: 54321 (code=exited, status=1/FAILURE)

Feb 27 10:28:30 production-web-01 systemd[1]: backend-api.service: Main process exited, code=exited, status=1/FAILURE
Feb 27 10:28:30 production-web-01 systemd[1]: backend-api.service: Failed with result 'exit-code'.

[ANALYSIS] The backend API service is inactive and exited with failure status. Checking the application logs for the root cause.

[INFO] Executing: sudo journalctl -u backend-api -n 50 --no-pager

[OUTPUT]
Feb 27 10:28:25 production-web-01 backend-api[54321]: Error: ECONNREFUSED: Connection to database at db.internal:5432 refused
Feb 27 10:28:25 production-web-01 backend-api[54321]: at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1144:16)
Feb 27 10:28:30 production-web-01 backend-api[54321]: Fatal: Unable to establish database connection. Exiting.
Feb 27 10:28:30 production-web-01 systemd[1]: backend-api.service: Main process exited, code=exited, status=1/FAILURE

[ANALYSIS] The backend API crashed because it cannot connect to the database at db.internal:5432. Checking database connectivity.

[INFO] Executing: sudo nc -zv db.internal 5432

[OUTPUT]
nc: connect to db.internal port 5432 (tcp) failed: Connection refused

[ANALYSIS] The database server is not accepting connections. Checking database service status.

[INFO] Executing: sudo systemctl status postgresql

[OUTPUT]
● postgresql.service - PostgreSQL RDBMS
     Loaded: loaded (/lib/systemd/system/postgresql.service; enabled; vendor preset: enabled)
     Active: inactive (dead) since Thu 2026-02-27 10:25:00 UTC; 5min ago

[DIAGNOSIS SUMMARY]
Root cause identified: PostgreSQL database service is down, which caused the backend API to crash on startup, resulting in NGINX 502 errors.

Recommended actions:
1. Start PostgreSQL service: sudo systemctl start postgresql
2. Verify database is accepting connections
3. Restart backend API service: sudo systemctl start backend-api
4. Monitor NGINX error logs to confirm 502 errors are resolved

Would you like me to execute these recovery steps?

You: Yes, please execute the recovery steps and verify the fix.

OpsSquad Security Squad:

[INFO] Executing: sudo systemctl start postgresql

[OUTPUT]
[INFO] PostgreSQL service started successfully

[INFO] Executing: sudo systemctl status postgresql

[OUTPUT]
● postgresql.service - PostgreSQL RDBMS
     Loaded: loaded (/lib/systemd/system/postgresql.service; enabled; vendor preset: enabled)
     Active: active (running) since Thu 2026-02-27 10:31:15 UTC; 2s ago

[INFO] Executing: sudo nc -zv db.internal 5432

[OUTPUT]
Connection to db.internal 5432 port [tcp/postgresql] succeeded!

[INFO] Executing: sudo systemctl start backend-api

[OUTPUT]
[INFO] Backend API service started successfully

[INFO] Executing: sudo systemctl status backend-api

[OUTPUT]
● backend-api.service - Backend API Service
     Loaded: loaded (/etc/systemd/system/backend-api.service; enabled; vendor preset: enabled)
     Active: active (running) since Thu 2026-02-27 10:31:20 UTC; 3s ago
   Main PID: 55789 (node)

[INFO] Executing: curl -s -o /dev/null -w "%{http_code}" http://localhost:8080/health

[OUTPUT]
200

[INFO] Executing: sudo tail -n 20 /var/log/nginx/error.log

[OUTPUT]
2026/02/27 10:31:25 [info] 12345#0: *7123 client closed connection while waiting for request, client: 203.0.113.50
2026/02/27 10:31:30 [info] 12345#0: *7124 client closed connection while waiting for request, client: 203.0.113.51

[VERIFICATION COMPLETE]
✓ PostgreSQL database restored and accepting connections
✓ Backend API service running and responding to health checks (HTTP 200)
✓ NGINX error log shows no new 502 errors
✓ System fully operational

The 502 Bad Gateway issue has been resolved. The problem was caused by PostgreSQL service being down, which prevented the backend API from starting. All services are now running normally.

Total resolution time: 90 seconds

This interaction demonstrates how OpsSquad's Security Squad can diagnose and resolve complex infrastructure issues by executing a series of whitelisted, audited commands. What would have taken 15 minutes of manual investigation (SSHing into the server, checking multiple log files, testing connectivity, and carefully restarting services in the correct order) was completed in 90 seconds through a conversational interface.

The Security Squad's command whitelisting ensures it can only execute approved operations, sandboxing prevents unintended system changes, and audit logging creates an immutable record of every action taken for compliance and security reviews.

Prevention and Best Practices for NGINX Configuration in 2026

Proactive configuration and adherence to best practices are key to maintaining a secure and performant NGINX deployment. The following practices reflect current security standards and performance optimization techniques as of 2026.

Security Hardening Principles

Minimize Attack Surface:

Disable unused modules and features to reduce potential vulnerabilities:

# In nginx.conf main context
load_module modules/ngx_http_geoip_module.so;  # Only load modules you actually use
 
http {
    # Disable server tokens to hide NGINX version
    server_tokens off;
    
    server {
        # Limit request methods (the "if" directive is only valid in
        # server and location contexts, not directly under http)
        if ($request_method !~ ^(GET|HEAD|POST|PUT|DELETE)$) {
            return 405;
        }
    }
}

Restrict Access:

Use allow and deny directives judiciously to control access to sensitive endpoints:

# Restrict admin panel to specific IP addresses
location /admin/ {
    allow 203.0.113.0/24;  # Office network
    allow 198.51.100.50;   # VPN gateway
    deny all;
    
    proxy_pass http://admin_backend;
}
 
# Deny access to hidden files
location ~ /\. {
    deny all;
    access_log off;
    log_not_found off;
}
 
# Protect sensitive files
location ~* \.(conf|sql|bak|old|log)$ {
    deny all;
}

Implement Rate Limiting:

Protect against brute force attacks and API abuse:

http {
    # Define rate limit zones
    limit_req_zone $binary_remote_addr zone=general:10m rate=10r/s;
    limit_req_zone $binary_remote_addr zone=login:10m rate=5r/m;
    limit_req_zone $binary_remote_addr zone=api:10m rate=100r/s;
    
    # Connection limits
    limit_conn_zone $binary_remote_addr zone=conn_limit:10m;
    
    server {
        # Apply rate limiting to login endpoint
        location /api/login {
            limit_req zone=login burst=3 nodelay;
            limit_req_status 429;
            proxy_pass http://backend;
        }
        
        # General API rate limiting
        location /api/ {
            limit_req zone=api burst=50 nodelay;
            limit_conn conn_limit 10;
            proxy_pass http://backend;
        }
    }
}

Configure Security Headers:

Modern security headers protect against common web vulnerabilities:

# Create a security headers snippet
# /etc/nginx/snippets/security-headers.conf
add_header X-Frame-Options "SAMEORIGIN" always;
add_header X-Content-Type-Options "nosniff" always;
add_header X-XSS-Protection "0" always;  # the browser XSS auditor is deprecated; "0" disables it in legacy browsers
add_header Referrer-Policy "strict-origin-when-cross-origin" always;
add_header Permissions-Policy "geolocation=(), microphone=(), camera=()" always;
add_header Content-Security-Policy "default-src 'self'; script-src 'self' 'unsafe-inline' 'unsafe-eval'; style-src 'self' 'unsafe-inline';" always;
 
# Include in server blocks
server {
    listen 443 ssl;
    http2 on;  # standalone directive since NGINX 1.25.1; the listen parameter is deprecated
    server_name example.com;
    
    include snippets/security-headers.conf;
    
    # HSTS header for HTTPS
    add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
    
    # ... rest of configuration
}

Regular Configuration Audits:

Implement a regular review schedule for your NGINX configurations:

# Create a configuration audit script
# /usr/local/bin/nginx-audit.sh
#!/bin/bash
 
echo "NGINX Configuration Audit - $(date)"
echo "======================================"
 
# Check for weak SSL configurations
echo -e "\n[SSL Configuration Check]"
grep -r "ssl_protocols" /etc/nginx/ | grep -v "TLSv1.2\|TLSv1.3"
 
# Check for missing security headers
echo -e "\n[Security Headers Check]"
for site in /etc/nginx/sites-enabled/*; do
    if ! grep -q "X-Frame-Options" "$site"; then
        echo "Missing X-Frame-Options in: $site"
    fi
done
 
# Check for overly permissive access
echo -e "\n[Access Control Check]"
grep -r "allow all" /etc/nginx/ | grep -v "#"
 
# Verify rate limiting is configured
echo -e "\n[Rate Limiting Check]"
if ! grep -q "limit_req_zone" /etc/nginx/nginx.conf; then
    echo "WARNING: No rate limiting configured"
fi
 
echo -e "\n[Configuration Test]"
nginx -t

Monitoring and Logging Best Practices:

Configure comprehensive logging for security and troubleshooting:

http {
    # Custom log format with detailed information
    log_format detailed '$remote_addr - $remote_user [$time_local] '
                       '"$request" $status $body_bytes_sent '
                       '"$http_referer" "$http_user_agent" '
                       '$request_time $upstream_response_time '
                       '$pipe $upstream_addr';
    
    # Separate access logs by site
    server {
        server_name example.com;
        access_log /var/log/nginx/example.com.access.log detailed;
        error_log /var/log/nginx/example.com.error.log warn;
        
        # Don't log health checks
        location /health {
            access_log off;
            return 200 "OK\n";
        }
        
        # Log failed authentication attempts separately
        # (location blocks must live inside a server block)
        location /api/login {
            access_log /var/log/nginx/auth-attempts.log detailed;
            proxy_pass http://backend;
        }
    }
}

Backup and Version Control:

Maintain configuration backups and use version control:

# Initialize git repository for NGINX configs
cd /etc/nginx
sudo git init
sudo git add .
sudo git commit -m "Initial NGINX configuration"
 
# Create backup script
#!/bin/bash
BACKUP_DIR="/backup/nginx/$(date +%Y%m%d)"
mkdir -p "$BACKUP_DIR"
cp -r /etc/nginx/* "$BACKUP_DIR/"
tar -czf "$BACKUP_DIR.tar.gz" "$BACKUP_DIR"
echo "Backup created: $BACKUP_DIR.tar.gz"
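
If the backup script above is saved as, say, /usr/local/bin/nginx-backup.sh (the path is an assumption), it can be scheduled with a crontab entry:

```
# Run nightly at 02:30; append output to a log for later review
30 2 * * * /usr/local/bin/nginx-backup.sh >> /var/log/nginx-backup.log 2>&1
```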

Performance Optimization:

Tune NGINX for optimal performance based on your workload:

http {
    # Enable file caching
    open_file_cache max=10000 inactive=20s;
    open_file_cache_valid 30s;
    open_file_cache_min_uses 2;
    open_file_cache_errors on;
    
    # Optimize buffer sizes
    client_body_buffer_size 128k;
    client_max_body_size 10m;
    client_header_buffer_size 1k;
    large_client_header_buffers 4 16k;
    
    # Enable compression
    gzip on;
    gzip_vary on;
    gzip_proxied any;
    gzip_comp_level 6;
    gzip_types text/plain text/css text/xml text/javascript 
               application/json application/javascript application/xml+rss 
               application/rss+xml font/truetype font/opentype 
               application/vnd.ms-fontobject image/svg+xml;
    gzip_disable "msie6";
    
    # Connection optimization
    keepalive_timeout 65;
    keepalive_requests 100;
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
}

Conclusion

NGINX configuration mastery is essential for modern DevOps engineers managing web infrastructure in 2026. This guide has covered the fundamental building blocks—from serving static content efficiently and configuring reverse proxies to implementing SSL/TLS security and hosting multiple sites on a single server. Understanding NGINX's architecture, directive contexts, and best practices enables you to build high-performance, secure web infrastructure that scales with your application needs.

The configuration examples provided here represent production-ready patterns used across thousands of deployments, from simple static sites to complex microservices architectures handling millions of requests daily. Whether you're optimizing cache headers, debugging 502 errors, or implementing advanced load balancing strategies, these patterns form the foundation of reliable NGINX deployments.

If you want to automate the entire workflow of NGINX debugging, configuration validation, and infrastructure troubleshooting across your server fleet, OpsSquad's Security Squad can execute these operations through a conversational interface in seconds rather than minutes. The reverse TCP architecture means no firewall changes, no VPN setup, and immediate access to your infrastructure from anywhere.

Create your free account and deploy the Security Squad to start automating NGINX management tasks that currently consume hours of your team's time each week.