OpsSquad.ai

Nginx Configuration File Examples for 2026

Master Nginx configuration files in 2026 with practical examples for single sites, multi-site hosting, SSL, and reverse proxy. Automate with OpsSquad's Security Squad.


Nginx Configuration File Example: Securely Serving Your Websites in 2026

Nginx is a cornerstone of modern web infrastructure, renowned for its performance, stability, and flexibility. However, mastering its configuration can be a complex undertaking, especially when aiming for robust security and efficient multi-site hosting. This guide provides practical, real-world [nginx configuration file](/blog/nginx-configuration-file) example scenarios, from basic setup to advanced security hardening, empowering you to confidently manage your web servers in 2026.

TL;DR: Nginx configuration files use a hierarchical structure with the main nginx.conf file containing global settings and the http block housing server configurations. For multi-site hosting, create individual server blocks in /etc/nginx/sites-available/ and symlink them to /etc/nginx/sites-enabled/. Modern Nginx deployments in 2026 require HTTPS with TLS 1.2+ minimum, strong cipher suites, and proper security headers. This guide walks through complete configuration examples for single sites, multi-site hosting, SSL/HTTPS setup, reverse proxy configurations, and security hardening.

Understanding Nginx Configuration File Structure and Locations

Before diving into specific examples, it's crucial to understand how Nginx configuration is organized. The Nginx configuration system follows a hierarchical, block-based structure where directives are organized into contexts that determine their scope and applicability. This foundational knowledge will make it easier to navigate and modify your server's behavior.

The Core Configuration File: nginx.conf

Problem: Where does Nginx store its primary settings, and what's the general layout?

Explanation: The main Nginx configuration file, typically nginx.conf, acts as the root of your server's configuration. It contains global directives and includes other configuration files, allowing for modularity. This file defines how the Nginx master process operates, how worker processes handle connections, and establishes the foundation for all HTTP/HTTPS server configurations.

Key Components:

  • global directives: Settings that affect the entire Nginx process, such as the user context, worker process count, and PID file location.
  • events block: Configures worker process behavior related to connections, including the maximum number of simultaneous connections each worker can handle.
  • http block: Contains directives for HTTP servers, including MIME types, logging, compression, and includes for virtual host configurations.

Common Locations:

  • /etc/nginx/nginx.conf (most common on Linux distributions like Ubuntu, Debian, CentOS, RHEL)
  • /usr/local/nginx/conf/nginx.conf (if compiled from source)
  • /usr/local/etc/nginx/nginx.conf (common on FreeBSD and macOS)

Example Snippet:

user www-data;
worker_processes auto;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;
 
events {
    worker_connections 768;
    # use epoll; # Uncomment for Linux 2.6+ for better performance
}
 
http {
    ##
    # Basic Settings
    ##
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    server_tokens off;
 
    # server_names_hash_bucket_size 64;
    # server_name_in_redirect off;
 
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
 
    ##
    # SSL Settings
    ##
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_prefer_server_ciphers on;
 
    ##
    # Logging Settings
    ##
    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;
 
    ##
    # Gzip Settings
    ##
    gzip on;
    gzip_vary on;
    gzip_proxied any;
    gzip_comp_level 6;
    gzip_types text/plain text/css text/xml text/javascript application/json application/javascript application/xml+rss application/rss+xml font/truetype font/opentype application/vnd.ms-fontobject image/svg+xml;
 
    ##
    # Virtual Host Configs
    ##
    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}

Note: The worker_processes auto; directive automatically sets the number of worker processes to match the number of CPU cores, which is the recommended configuration for most servers in 2026.
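To see what worker_processes auto; will resolve to on a given machine, check the core count from the shell:

```shell
# Number of CPU cores available; worker_processes auto; resolves to this value.
nproc
# Linux-only alternative: grep -c ^processor /proc/cpuinfo
```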

Modular Configuration with sites-available and sites-enabled

Problem: How can I manage configurations for multiple websites without cluttering the main file?

Explanation: Nginx utilizes a common pattern on Debian/Ubuntu-based systems for managing virtual hosts. The sites-available directory holds all your site configuration files, while sites-enabled contains symbolic links to the configurations you want to activate. This makes enabling and disabling sites a simple matter of creating or removing links, without modifying or deleting the actual configuration files.

Directory Structure:

  • /etc/nginx/sites-available/: Contains individual configuration files for each site (these are the source files).
  • /etc/nginx/sites-enabled/: Contains symbolic links to the files in sites-available that are currently active.
  • /etc/nginx/conf.d/: Alternative directory for configuration files that are always loaded (commonly used on RHEL/CentOS systems).

Creating a New Site Configuration:

  1. Create a new file in /etc/nginx/sites-available/:

    sudo nano /etc/nginx/sites-available/mywebsite.com
  2. Add your site's configuration (covered in subsequent sections).

  3. Enable the site by creating a symbolic link:

    sudo ln -s /etc/nginx/sites-available/mywebsite.com /etc/nginx/sites-enabled/
  4. Test the configuration:

    sudo nginx -t
  5. Reload Nginx:

    sudo systemctl reload nginx

Disabling a Site:

sudo rm /etc/nginx/sites-enabled/mywebsite.com
sudo nginx -t
sudo systemctl reload nginx

Warning: When disabling a site, only remove the symbolic link from sites-enabled, not the original file from sites-available. This preserves your configuration for future use.
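The enable/disable mechanics are plain symlink operations, which you can rehearse safely in a scratch directory first. The /tmp/nginx-demo paths below are throwaway stand-ins, not real Nginx paths:

```shell
# Simulate the sites-available / sites-enabled layout in a scratch directory.
rm -rf /tmp/nginx-demo
mkdir -p /tmp/nginx-demo/sites-available /tmp/nginx-demo/sites-enabled
echo "server { listen 80; }" > /tmp/nginx-demo/sites-available/mywebsite.com

# Enable: symlink the source file into sites-enabled.
ln -s /tmp/nginx-demo/sites-available/mywebsite.com /tmp/nginx-demo/sites-enabled/

# Disable: remove only the link.
rm /tmp/nginx-demo/sites-enabled/mywebsite.com

# The source configuration survives in sites-available.
ls /tmp/nginx-demo/sites-available/   # prints: mywebsite.com
```

Deleting the link never touches the file it points to, which is exactly why the pattern makes disabling a site reversible.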

Configuring Nginx for a Single Website

Serving a single website is the most basic use case for Nginx. A properly configured server block defines how Nginx should handle requests for a specific domain, including which files to serve, how to handle errors, and what security measures to apply.

Basic Server Block Configuration

Problem: How do I tell Nginx to listen for requests on a specific domain and serve files from a particular directory?

Explanation: A server block defines a virtual server. It specifies listening ports, server names (domains), and the root directory for serving content. The location blocks within a server block determine how specific URI patterns are handled.

Example mywebsite.com (in /etc/nginx/sites-available/):

server {
    listen 80;
    listen [::]:80;
 
    server_name mywebsite.com www.mywebsite.com;
 
    root /var/www/mywebsite.com/html;
    index index.html index.htm index.nginx-debian.html;
 
    # Logging
    access_log /var/log/nginx/mywebsite.com.access.log;
    error_log /var/log/nginx/mywebsite.com.error.log;
 
    location / {
        try_files $uri $uri/ =404;
    }
 
    # Deny access to hidden files
    location ~ /\. {
        deny all;
        access_log off;
        log_not_found off;
    }
 
    # Error pages (optional but recommended)
    error_page 404 /404.html;
    location = /404.html {
        internal;
    }
 
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        internal;
    }
}

Key Directives Explained:

  • listen 80;: Nginx will listen for incoming IPv4 connections on port 80 (HTTP).
  • listen [::]:80;: Nginx will listen for incoming IPv6 connections on port 80.
  • server_name mywebsite.com www.mywebsite.com;: Specifies the domain names this server block should respond to. Nginx uses this to route requests to the correct server block.
  • root /var/www/mywebsite.com/html;: Sets the document root directory. All file paths are relative to this location.
  • index index.html index.htm;: Defines the default files to serve when a directory is requested. Nginx tries each file in order.
  • location / { ... }: A block that handles requests for the root URI and all paths not matched by more specific location blocks.
  • try_files $uri $uri/ =404;: Attempts to serve the requested file ($uri), then tries the path as a directory ($uri/), and finally returns a 404 error if neither exists.
  • location ~ /\. { deny all; }: Prevents access to hidden files (those starting with a dot), which often contain sensitive configuration data.
  • internal;: This directive specifies that the location can only be accessed through internal redirects, not directly by clients.
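The lookup order of try_files $uri $uri/ =404 can be sketched as a small shell function. This is an illustration only; resolve is a hypothetical helper, not an Nginx command:

```shell
# Build a tiny document root to resolve against.
docroot=/tmp/try-files-demo
mkdir -p "$docroot/blog"
echo "<h1>post</h1>" > "$docroot/blog/index.html"

# Mimic try_files $uri $uri/ =404: file first, then directory, then 404.
resolve() {
  uri="$1"
  if [ -f "$docroot$uri" ]; then echo "serve file $uri"
  elif [ -d "$docroot$uri" ]; then echo "serve index of $uri/"
  else echo "404"
  fi
}

resolve /blog/index.html   # prints: serve file /blog/index.html
resolve /blog              # prints: serve index of /blog/
resolve /missing           # prints: 404
```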

Setting Up the Directory Structure:

sudo mkdir -p /var/www/mywebsite.com/html
sudo chown -R $USER:$USER /var/www/mywebsite.com/html
sudo chmod -R 755 /var/www/mywebsite.com
echo "<html><body><h1>Welcome to mywebsite.com</h1></body></html>" > /var/www/mywebsite.com/html/index.html

Reloading Nginx After Configuration Changes

Problem: How do I apply my new configuration without interrupting existing connections?

Explanation: Nginx has a graceful reload mechanism that allows it to re-read its configuration files without stopping the master process or terminating active worker processes. When you reload Nginx, the master process validates the new configuration, spawns new worker processes with the updated settings, and gracefully shuts down old workers after they finish handling their current requests. This is crucial for zero-downtime updates.

Checking Configuration Syntax:

Before reloading, always test your configuration for syntax errors:

sudo nginx -t

Expected output:

nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful

If there are errors, Nginx will display the file path and line number where the problem occurred:

nginx: [emerg] unexpected ";" in /etc/nginx/sites-enabled/mywebsite.com:12
nginx: configuration file /etc/nginx/nginx.conf test failed

Reloading Nginx:

sudo systemctl reload nginx

For systems without systemctl:

sudo service nginx reload

Or using the Nginx binary directly:

sudo nginx -s reload

Restarting vs. Reloading:

  • Reload (reload): Graceful, maintains active connections, applies configuration changes. Use this for most updates.
  • Restart (restart): Stops all processes and starts fresh. Briefly interrupts service. Only needed for major changes or troubleshooting.

sudo systemctl restart nginx  # Full restart (brief downtime)

Nginx Multi-Site Hosting: Serving Multiple Domains

Nginx excels at hosting multiple websites on a single server through its virtual host system. By defining multiple server blocks, each with unique server_name directives, a single Nginx instance can efficiently serve dozens or even hundreds of different domains.

Configuring Multiple Server Blocks

Problem: How can I host siteA.com and siteB.com on the same Nginx instance?

Explanation: By creating separate configuration files in /etc/nginx/sites-available/ for each domain and linking them in /etc/nginx/sites-enabled/, Nginx can differentiate requests based on the server_name directive in the HTTP Host header. When a request arrives, Nginx compares the Host header against all server_name directives and routes the request to the matching server block.

Example: siteA.com (in /etc/nginx/sites-available/):

server {
    listen 80;
    listen [::]:80;
    server_name siteA.com www.siteA.com;
    
    root /var/www/siteA.com/html;
    index index.html index.htm;
 
    access_log /var/log/nginx/siteA.com.access.log;
    error_log /var/log/nginx/siteA.com.error.log;
 
    location / {
        try_files $uri $uri/ =404;
    }
 
    location ~ /\. {
        deny all;
    }
}

Example: siteB.com (in /etc/nginx/sites-available/):

server {
    listen 80;
    listen [::]:80;
    server_name siteB.com www.siteB.com;
    
    root /var/www/siteB.com/html;
    index index.php index.html index.htm;
 
    access_log /var/log/nginx/siteB.com.access.log;
    error_log /var/log/nginx/siteB.com.error.log;
 
    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }
 
    # PHP processing (requires PHP-FPM)
    location ~ \.php$ {
        include snippets/fastcgi-php.conf;
        fastcgi_pass unix:/var/run/php/php8.2-fpm.sock; # Adjust PHP version as needed
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }
 
    location ~ /\. {
        deny all;
    }
 
    # Deny access to PHP files in uploads directory (security)
    location ~* /(?:uploads|files)/.*\.php$ {
        deny all;
    }
}

Enabling Both Sites:

sudo ln -s /etc/nginx/sites-available/siteA.com /etc/nginx/sites-enabled/
sudo ln -s /etc/nginx/sites-available/siteB.com /etc/nginx/sites-enabled/
sudo nginx -t
sudo systemctl reload nginx

Creating Directory Structures:

sudo mkdir -p /var/www/siteA.com/html
sudo mkdir -p /var/www/siteB.com/html
sudo chown -R www-data:www-data /var/www/siteA.com
sudo chown -R www-data:www-data /var/www/siteB.com

Handling Default Server and Catch-All Configurations

Problem: What happens if a request comes in for a domain not explicitly defined, or if the Host header is missing or malformed?

Explanation: You can define a "default server" that will handle requests for any server_name not matched by other server blocks. This is a critical security practice in 2026, as it prevents your server from inadvertently serving content when accessed via IP address or through malicious Host headers. The default server can return an error, serve a placeholder page, or simply drop the connection.

Example: default (in /etc/nginx/sites-available/):

server {
    listen 80 default_server;
    listen [::]:80 default_server;
 
    server_name _; # Underscore matches any server name not explicitly defined
 
    # Option 1: Return 444 (close connection without response)
    location / {
        return 444;
    }
 
    # Option 2: Serve a default page
    # root /var/www/default;
    # index index.html;
    # location / {
    #     try_files $uri $uri/ =404;
    # }
 
    # Option 3: Return 403 Forbidden
    # location / {
    #     return 403;
    # }
}

Key Directives:

  • default_server: This parameter makes this server block the default for the specified listen directive. Only one server block per listen address:port combination can be the default.
  • server_name _;: The underscore is not special to Nginx; it is simply an invalid hostname that will never match a real domain. Unmatched requests land in this block because of default_server, which makes the underscore a conventional, self-documenting placeholder for catch-all configurations.
  • return 444;: A special Nginx status code that closes the connection without sending a response. This is often used for security to avoid revealing information about your server to scanners and bots.

Note: When a server block carries the default_server parameter, Nginx selects it explicitly, regardless of the order in which configuration files are loaded; file order only matters if no block sets default_server, in which case the first one parsed wins. Enable the catch-all like any other site:

sudo ln -s /etc/nginx/sites-available/default /etc/nginx/sites-enabled/000-default

The 000- prefix is a naming convention that sorts the catch-all first, making it easy to spot.

Nginx SSL/HTTPS Configuration for Enhanced Security

Securing your website with HTTPS is no longer optional in 2026: it is a fundamental requirement for user trust, SEO rankings, and compliance with security standards. Major browsers flag all HTTP sites as "Not Secure," and many APIs refuse to interact with non-HTTPS endpoints. Nginx makes it straightforward to implement SSL/TLS certificates with modern security configurations.

Obtaining and Installing SSL Certificates

Problem: How do I enable HTTPS for my website?

Explanation: You'll need an SSL/TLS certificate from a Certificate Authority (CA). Let's Encrypt provides free, automated certificates via Certbot, which is the recommended approach for most users. Let's Encrypt certificates are trusted by all major browsers and are valid for 90 days with automatic renewal.

Using Certbot (Recommended):

  1. Install Certbot:

For Ubuntu/Debian:

sudo apt update
sudo apt install certbot python3-certbot-nginx

For RHEL/CentOS:

sudo dnf install certbot python3-certbot-nginx

  2. Obtain Certificate:
sudo certbot --nginx -d mywebsite.com -d www.mywebsite.com

Certbot will:

  • Automatically detect your Nginx configuration
  • Verify domain ownership via HTTP-01 challenge
  • Obtain the certificate from Let's Encrypt
  • Modify your Nginx configuration to use HTTPS
  • Set up automatic renewal via systemd timer or cron

Expected output:

Saving debug log to /var/log/letsencrypt/letsencrypt.log
Requesting a certificate for mywebsite.com and www.mywebsite.com

Successfully received certificate.
Certificate is saved at: /etc/letsencrypt/live/mywebsite.com/fullchain.pem
Key is saved at:         /etc/letsencrypt/live/mywebsite.com/privkey.pem
This certificate expires on 2026-05-27.
These files will be updated when the certificate renews.
Certbot has set up a scheduled task to automatically renew this certificate in the background.

Deploying certificate
Successfully deployed certificate for mywebsite.com to /etc/nginx/sites-enabled/mywebsite.com
Successfully deployed certificate for www.mywebsite.com to /etc/nginx/sites-enabled/mywebsite.com
Congratulations! You have successfully enabled HTTPS on https://mywebsite.com and https://www.mywebsite.com

  3. Test Automatic Renewal:
sudo certbot renew --dry-run
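Independently of Certbot, you can read a certificate's expiry directly with openssl. The snippet below generates a throwaway self-signed certificate purely to have something to inspect; for a live site you would point -in at /etc/letsencrypt/live/mywebsite.com/fullchain.pem instead:

```shell
# Create a throwaway self-signed certificate (30 days) just for the demo.
openssl req -x509 -newkey rsa:2048 -nodes -days 30 \
  -subj "/CN=mywebsite.com" \
  -keyout /tmp/demo.key -out /tmp/demo.crt

# Print the expiry date.
openssl x509 -in /tmp/demo.crt -noout -enddate

# Exit 0 if the cert is still valid in 7 days (604800 s); handy in cron alerts.
openssl x509 -in /tmp/demo.crt -noout -checkend 604800 \
  && echo "certificate valid for at least 7 more days"
```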

Manual Certificate Installation (Commercial Certificates):

If you have a certificate from a commercial provider like DigiCert or Sectigo, you'll typically receive:

  • Certificate file (.crt or .pem)
  • Private key file (.key)
  • Intermediate certificate/chain file (.crt or .pem)

Place these files in a secure location:

sudo mkdir -p /etc/nginx/ssl/mywebsite.com
sudo cp mywebsite.com.crt /etc/nginx/ssl/mywebsite.com/
sudo cp mywebsite.com.key /etc/nginx/ssl/mywebsite.com/
sudo cp intermediate.crt /etc/nginx/ssl/mywebsite.com/
sudo chmod 600 /etc/nginx/ssl/mywebsite.com/mywebsite.com.key
sudo chmod 644 /etc/nginx/ssl/mywebsite.com/*.crt
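One step that trips people up with commercial certificates: Nginx's ssl_certificate directive expects a single file containing the server (leaf) certificate first, followed by the intermediate(s). The snippet below demonstrates the ordering with placeholder files; with real certificates you would concatenate the .crt files copied above:

```shell
# Placeholder files standing in for the real PEMs; the ordering is the point.
mkdir -p /tmp/chain-demo
echo "LEAF-CERTIFICATE" > /tmp/chain-demo/mywebsite.com.crt
echo "INTERMEDIATE-CERTIFICATE" > /tmp/chain-demo/intermediate.crt

# Leaf certificate first, then intermediate(s), in one bundle for ssl_certificate.
cat /tmp/chain-demo/mywebsite.com.crt /tmp/chain-demo/intermediate.crt \
  > /tmp/chain-demo/fullchain.crt

head -n 1 /tmp/chain-demo/fullchain.crt   # prints: LEAF-CERTIFICATE
```

If the order is reversed, browsers may reject the chain or Nginx may fail to start with a key/certificate mismatch error.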

Configuring Nginx for HTTPS

Problem: How do I update my Nginx configuration to use the SSL certificate and listen on port 443 with modern security settings?

Explanation: You'll create a new server block or modify an existing one to listen on port 443, specify the certificate and key paths, and enable SSL with secure protocols and cipher suites. Modern configurations in 2026 should disable TLS 1.0 and 1.1 (deprecated), use strong ciphers, and implement additional security headers.

Example mywebsite.com (with SSL and security hardening):

# HTTP server - redirect all traffic to HTTPS
server {
    listen 80;
    listen [::]:80;
    server_name mywebsite.com www.mywebsite.com;
 
    # Redirect all HTTP requests to HTTPS
    return 301 https://$host$request_uri;
}
 
# HTTPS server
server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name mywebsite.com www.mywebsite.com;
 
    root /var/www/mywebsite.com/html;
    index index.html index.htm;
 
    # SSL Certificate Configuration
    ssl_certificate /etc/letsencrypt/live/mywebsite.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/mywebsite.com/privkey.pem;
    ssl_trusted_certificate /etc/letsencrypt/live/mywebsite.com/chain.pem;
 
    # SSL Security Configuration (2026 best practices)
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_prefer_server_ciphers on;
    ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384;
    
    # SSL Session Configuration
    ssl_session_cache shared:SSL:10m;
    ssl_session_timeout 10m;
    ssl_session_tickets off;
    
    # OCSP Stapling
    ssl_stapling on;
    ssl_stapling_verify on;
    resolver 8.8.8.8 8.8.4.4 valid=300s;
    resolver_timeout 5s;
 
    # Security Headers
    add_header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload" always;
    add_header X-Frame-Options "SAMEORIGIN" always;
    add_header X-Content-Type-Options "nosniff" always;
    add_header X-XSS-Protection "1; mode=block" always;
    add_header Referrer-Policy "no-referrer-when-downgrade" always;
    add_header Content-Security-Policy "default-src 'self' http: https: data: blob: 'unsafe-inline'" always;
 
    # Logging
    access_log /var/log/nginx/mywebsite.com.access.log;
    error_log /var/log/nginx/mywebsite.com.error.log;
 
    location / {
        try_files $uri $uri/ =404;
    }
 
    location ~ /\. {
        deny all;
    }
}

Key SSL Directives Explained:

  • listen 443 ssl http2;: Listen on port 443 with SSL enabled and HTTP/2 protocol support for improved performance. (On Nginx 1.25.1 and later, the http2 parameter of listen is deprecated in favor of a separate http2 on; directive.)
  • ssl_certificate: Path to the full certificate chain (includes your certificate and intermediate certificates).
  • ssl_certificate_key: Path to your private key file. This file must be kept secure and should never be publicly accessible.
  • ssl_protocols TLSv1.2 TLSv1.3;: Only allows TLS 1.2 and 1.3. TLS 1.0 and 1.1 are deprecated and vulnerable.
  • ssl_ciphers: Specifies strong cipher suites. This list prioritizes forward secrecy (ECDHE) and modern algorithms (GCM, CHACHA20-POLY1305).
  • ssl_prefer_server_ciphers on;: Server's cipher preference takes precedence over client's.
  • ssl_session_cache shared:SSL:10m;: Enables session caching across all worker processes, improving performance for returning clients.
  • ssl_stapling on;: Enables OCSP stapling, which improves SSL handshake performance and privacy.
  • Strict-Transport-Security: HSTS header tells browsers to only access the site via HTTPS for the next year.

Testing Your SSL Configuration:

After configuring SSL, test your setup using SSL Labs:

# Visit in browser:
https://www.ssllabs.com/ssltest/analyze.html?d=mywebsite.com

A properly configured site should achieve an A+ rating in 2026.

Nginx as a Proxy Server: Forwarding Requests

Nginx is frequently used as a reverse proxy, sitting between clients and backend application servers. This architecture provides numerous benefits including load distribution, SSL termination, caching, and security isolation. In 2026, most modern web applications use Nginx to proxy requests to application servers like Node.js, Python (Django/Flask), Ruby (Rails), or containerized services.

Basic Reverse Proxy Configuration

Problem: How do I use Nginx to forward requests to an application running on a different port or server?

Explanation: The proxy_pass directive is central to Nginx's proxying capabilities. It tells Nginx where to forward the request. When used as a reverse proxy, Nginx handles all client communication (including SSL termination) and forwards requests to backend servers that may be running on localhost or on different servers entirely.

Example app.mywebsite.com (in /etc/nginx/sites-available/):

server {
    listen 80;
    server_name app.mywebsite.com;
 
    # Redirect to HTTPS
    return 301 https://$host$request_uri;
}
 
server {
    listen 443 ssl http2;
    server_name app.mywebsite.com;
 
    ssl_certificate /etc/letsencrypt/live/app.mywebsite.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/app.mywebsite.com/privkey.pem;
 
    # SSL configuration (same as previous examples)
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384;
    ssl_prefer_server_ciphers on;
 
    access_log /var/log/nginx/app.mywebsite.com.access.log;
    error_log /var/log/nginx/app.mywebsite.com.error.log;
 
    location / {
        # Proxy to Node.js application running on port 3000
        proxy_pass http://localhost:3000;
        
        # Proxy headers
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Port $server_port;
        
        # WebSocket support
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        
        # Timeouts
        proxy_connect_timeout 60s;
        proxy_send_timeout 60s;
        proxy_read_timeout 60s;
        
        # Buffering
        proxy_buffering on;
        proxy_buffer_size 4k;
        proxy_buffers 8 4k;
        proxy_busy_buffers_size 8k;
    }
 
    # Static assets with caching
    location ~* \.(jpg|jpeg|png|gif|ico|css|js|svg|woff|woff2|ttf|eot)$ {
        proxy_pass http://localhost:3000;
        proxy_cache_valid 200 30d; # Only takes effect if a proxy_cache zone is also configured
        expires 30d;
        add_header Cache-Control "public, immutable";
    }
}

Key Proxy Directives:

  • proxy_pass http://localhost:3000;: The URL of the backend server. Can be a localhost port, another server, or an upstream group.
  • proxy_set_header Host $host;: Passes the original Host header from the client to the backend. Critical for applications that rely on the hostname.
  • proxy_set_header X-Real-IP $remote_addr;: Passes the client's real IP address. Without this, the backend only sees Nginx's IP.
  • proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;: Appends the client's IP to the X-Forwarded-For header chain.
  • proxy_set_header X-Forwarded-Proto $scheme;: Indicates whether the original request was HTTP or HTTPS. Essential for applications that need to generate correct URLs.
  • proxy_http_version 1.1;: Uses HTTP/1.1 for backend connections, required for WebSocket support.
  • proxy_set_header Upgrade and Connection: Enable WebSocket proxying for real-time applications.
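The behavior of $proxy_add_x_forwarded_for can be sketched as a small shell function: append the connecting address to any existing header, or start the chain if there was none. The function name append_xff is ours, for illustration only:

```shell
# Mimic $proxy_add_x_forwarded_for: "existing, client-ip" or just "client-ip".
append_xff() {
  incoming_xff="$1"; remote_addr="$2"
  if [ -n "$incoming_xff" ]; then
    echo "${incoming_xff}, ${remote_addr}"
  else
    echo "${remote_addr}"
  fi
}

append_xff "" "203.0.113.7"               # prints: 203.0.113.7
append_xff "198.51.100.4" "203.0.113.7"   # prints: 198.51.100.4, 203.0.113.7
```

This is also why X-Forwarded-For should only be trusted when the request passed through proxies you control: any client can send a fabricated initial value, and your proxy merely appends to it.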

Proxying to FastCGI (for PHP)

Problem: How do I configure Nginx to work with PHP applications?

Explanation: Nginx doesn't execute PHP directly. It passes PHP requests to a FastCGI Process Manager (like PHP-FPM) via a Unix socket or TCP port. PHP-FPM maintains a pool of PHP processes ready to handle requests, providing better performance than traditional CGI.

Example wordpress.mywebsite.com (in /etc/nginx/sites-available/):

server {
    listen 80;
    server_name wordpress.mywebsite.com;
    return 301 https://$host$request_uri;
}
 
server {
    listen 443 ssl http2;
    server_name wordpress.mywebsite.com;
 
    root /var/www/wordpress.mywebsite.com/html;
    index index.php index.html index.htm;
 
    ssl_certificate /etc/letsencrypt/live/wordpress.mywebsite.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/wordpress.mywebsite.com/privkey.pem;
    ssl_protocols TLSv1.2 TLSv1.3;
 
    access_log /var/log/nginx/wordpress.mywebsite.com.access.log;
    error_log /var/log/nginx/wordpress.mywebsite.com.error.log;
 
    # WordPress permalink structure support
    location / {
        try_files $uri $uri/ /index.php?$args;
    }
 
    # PHP processing
    location ~ \.php$ {
        include snippets/fastcgi-php.conf;
        
        # FastCGI pass to PHP-FPM socket (adjust version as needed)
        fastcgi_pass unix:/var/run/php/php8.2-fpm.sock;
        
        # Alternative: TCP connection
        # fastcgi_pass 127.0.0.1:9000;
        
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
        
        fastcgi_intercept_errors on;
        fastcgi_buffer_size 16k;
        fastcgi_buffers 4 16k;
        fastcgi_connect_timeout 300;
        fastcgi_send_timeout 300;
        fastcgi_read_timeout 300;
    }
 
    # Deny access to PHP files in uploads directory
    location ~* /(?:uploads|files)/.*\.php$ {
        deny all;
    }
 
    # WordPress security: deny access to sensitive files
    location ~ /\.(htaccess|htpasswd|ini|log|sh|inc|bak)$ {
        deny all;
    }
 
    location = /xmlrpc.php {
        deny all;
    }
 
    # Static file caching
    location ~* \.(jpg|jpeg|png|gif|ico|css|js|svg|woff|woff2|ttf|eot)$ {
        expires 30d;
        add_header Cache-Control "public, immutable";
        access_log off;
    }
 
    # Deny access to hidden files
    location ~ /\. {
        deny all;
    }
}

Installing and Configuring PHP-FPM:

# Ubuntu/Debian
sudo apt update
sudo apt install php8.2-fpm php8.2-mysql php8.2-curl php8.2-gd php8.2-mbstring php8.2-xml php8.2-zip
 
# Start PHP-FPM
sudo systemctl start php8.2-fpm
sudo systemctl enable php8.2-fpm
 
# Verify socket exists
ls -la /var/run/php/php8.2-fpm.sock

Note: The socket path varies by PHP version and distribution. Common paths include:

  • Ubuntu/Debian: /var/run/php/php8.2-fpm.sock
  • RHEL/CentOS: /var/run/php-fpm/www.sock

Advanced Nginx Configuration: Security and Performance

Beyond basic serving and proxying, production Nginx configurations in 2026 require additional security hardening and performance optimization to handle modern threats and traffic patterns.

Rate Limiting and DDoS Protection

Problem: How do I protect my server from brute force attacks and excessive requests?

Explanation: Nginx's rate limiting module allows you to restrict the number of requests from a single IP address within a defined time window. This is essential for protecting login pages, APIs, and preventing resource exhaustion attacks.

Example Configuration:

# Define rate limit zones in http block of nginx.conf
http {
    # Limit to 10 requests per second per IP
    limit_req_zone $binary_remote_addr zone=general:10m rate=10r/s;
    
    # Limit to 5 requests per minute for login attempts
    limit_req_zone $binary_remote_addr zone=login:10m rate=5r/m;
    
    # Connection limit: 10 concurrent connections per IP
    limit_conn_zone $binary_remote_addr zone=addr:10m;
}
 
# In your server block
server {
    listen 443 ssl http2;
    server_name api.mywebsite.com;
 
    # Apply general rate limit
    limit_req zone=general burst=20 nodelay;
    limit_conn addr 10;
 
    location /api/ {
        limit_req zone=general burst=20 nodelay;
        proxy_pass http://localhost:8080;
    }
 
    location /login {
        limit_req zone=login burst=5;
        proxy_pass http://localhost:8080;
    }
}

Directives Explained:

  • limit_req_zone: Defines a shared memory zone for tracking request rates.
  • $binary_remote_addr: Uses client IP as the key (binary format saves memory).
  • zone=general:10m: Names the zone "general" and allocates 10MB of memory.
  • rate=10r/s: Allows 10 requests per second.
  • burst=20: Allows bursts of up to 20 requests, queuing excess requests.
  • nodelay: Processes burst requests immediately without artificial delays.
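The accounting behind limit_req is a leaky bucket: each request adds one unit of "excess," the bucket drains at the configured rate, and a request is rejected once excess would exceed burst. A simplified model of that accounting (ignoring nodelay's queuing details) in awk:

```shell
# Arrival times (seconds) of five requests from one client, one per line.
cat > /tmp/requests.txt <<'EOF'
0.0
0.0
0.0
0.0
1.0
EOF

# rate=1r/s, burst=2. The first request for a new client starts with excess=0;
# each later request drains the bucket by elapsed*rate, then adds one unit.
awk -v rate=1 -v burst=2 '
NR == 1 { prev = $1; excess = 0; print "accepted"; next }
{
  e = excess - ($1 - prev) * rate   # bucket drains while time passes
  if (e < 0) e = 0
  e += 1                            # this request adds one unit of excess
  if (e > burst) print "rejected"   # over burst: nginx returns 503
  else { excess = e; print "accepted" }
  prev = $1
}' /tmp/requests.txt | tee /tmp/limit-result.txt
```

With these inputs, three back-to-back requests pass (one plus the burst of 2), the fourth is rejected, and after one second of drain the fifth passes again. nodelay changes only whether burst requests are delayed before forwarding, not whether they are accepted.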

Security Headers and Content Security Policy

Problem: How do I protect against common web vulnerabilities like XSS, clickjacking, and MIME sniffing?

Explanation: Security headers instruct browsers on how to handle your content securely. These headers form a critical defense layer against various attack vectors.

Comprehensive Security Headers:

server {
    listen 443 ssl http2;
    server_name secure.mywebsite.com;
 
    # HSTS: Force HTTPS for 1 year, include subdomains
    add_header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload" always;
    
    # Prevent clickjacking attacks
    add_header X-Frame-Options "SAMEORIGIN" always;
    
    # Prevent MIME type sniffing
    add_header X-Content-Type-Options "nosniff" always;
    
    # Legacy XSS filter header (modern browsers ignore it; the CSP below is
    # the effective defense, but it remains harmless for older clients)
    add_header X-XSS-Protection "1; mode=block" always;
    
    # Control referrer information
    add_header Referrer-Policy "strict-origin-when-cross-origin" always;
    
    # Permissions Policy (formerly Feature Policy)
    add_header Permissions-Policy "geolocation=(), microphone=(), camera=()" always;
    
    # Content Security Policy (adjust based on your needs)
    add_header Content-Security-Policy "default-src 'self'; script-src 'self' 'unsafe-inline' https://trusted-cdn.com; style-src 'self' 'unsafe-inline'; img-src 'self' data: https:; font-src 'self' data:; connect-src 'self'; frame-ancestors 'self'; base-uri 'self'; form-action 'self'" always;
 
    # Hide Nginx version
    server_tokens off;
    more_clear_headers Server; # Requires headers-more-nginx-module
}
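Once deployed, the headers can be spot-checked from the command line. A minimal sketch, run here against a captured sample response so it works anywhere; in practice, replace the sample with `curl -sI https://secure.mywebsite.com`:

```shell
#!/bin/sh
# Sample response headers; capture real ones with: curl -sI https://secure.mywebsite.com
headers='HTTP/2 200
strict-transport-security: max-age=31536000; includeSubDomains; preload
x-frame-options: SAMEORIGIN
x-content-type-options: nosniff
referrer-policy: strict-origin-when-cross-origin'

# Report each required header as OK or MISSING
for h in strict-transport-security x-frame-options x-content-type-options referrer-policy; do
    if printf '%s\n' "$headers" | grep -qi "^$h:"; then
        echo "OK: $h"
    else
        echo "MISSING: $h"
    fi
done
```

Running this against the sample prints `OK:` for all four headers; a missing line in a real response points at the server block to fix.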

Load Balancing with Upstream Blocks

Problem: How do I distribute traffic across multiple backend servers?

Explanation: Nginx can act as a load balancer, distributing requests across multiple backend servers to improve performance, reliability, and scalability.

Example Load Balancer Configuration:

# Define upstream block in http context
http {
    upstream backend_servers {
        # Load balancing method (default is round-robin)
        least_conn; # Route to server with fewest active connections
        
        # Backend servers
        server 10.0.1.10:8080 weight=3 max_fails=3 fail_timeout=30s;
        server 10.0.1.11:8080 weight=2 max_fails=3 fail_timeout=30s;
        server 10.0.1.12:8080 weight=1 max_fails=3 fail_timeout=30s;
        server 10.0.1.13:8080 backup; # Only used when others fail
        
        # Health check (Nginx Plus feature, or use third-party module)
        # health_check interval=10s fails=3 passes=2;
        
        # Session persistence via ip_hash is an ALTERNATIVE balancing method:
        # an upstream accepts only one method, so enable it INSTEAD of least_conn.
        # ip_hash; # Same client IP always goes to same backend
    }
 
    server {
        listen 443 ssl;
        http2 on;
        server_name app.mywebsite.com;
 
        location / {
            proxy_pass http://backend_servers;
            proxy_next_upstream error timeout invalid_header http_500 http_502 http_503;
            proxy_next_upstream_tries 3;
            
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
        }
    }
}
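One common addition to the proxy block above, not shown in the example, is connection reuse toward the backends. Upstream keepalive avoids a fresh TCP handshake for every proxied request; a sketch:

```nginx
upstream backend_servers {
    least_conn;
    server 10.0.1.10:8080;
    server 10.0.1.11:8080;
    keepalive 32;  # pool of idle connections kept open to the backends
}

server {
    location / {
        proxy_http_version 1.1;         # upstream keepalive requires HTTP/1.1
        proxy_set_header Connection ""; # clear "close" so connections stay open
        proxy_pass http://backend_servers;
    }
}
```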

Load Balancing Methods:

  • round-robin (default): Distributes requests evenly across servers.
  • least_conn: Routes to the server with the fewest active connections.
  • ip_hash: Ensures the same client always reaches the same backend (session persistence).
  • hash $request_uri: Routes based on URI, useful for caching.

How OpsSquad's Security Squad Simplifies Nginx Management

Managing Nginx configurations across multiple servers, especially when implementing security hardening, SSL renewals, and performance tuning, can become a time-consuming operational burden. DevOps engineers often spend hours SSH-ing into servers, manually editing configuration files, testing changes, and monitoring for issues. OpsSquad's Security Squad transforms this workflow from manual server access to conversational automation.

The Traditional Pain: You need to update SSL configurations across 15 production Nginx servers to implement new cipher suites and security headers recommended for 2026. This typically involves:

  1. SSH into each server individually
  2. Backup existing configurations
  3. Edit multiple configuration files
  4. Test syntax with nginx -t
  5. Reload Nginx gracefully
  6. Monitor logs for errors
  7. Repeat for each server
  8. Document changes in a runbook

Total time: 15-20 minutes per server = 4-5 hours for all servers.

The OpsSquad Solution: With OpsSquad's Security Squad, you deploy a lightweight agent on your servers that establishes a reverse TCP connection to the OpsSquad cloud. This means no inbound firewall rules, no VPN setup, and no exposed SSH ports. The Security Squad's AI agents can execute whitelisted commands through a chat interface with full audit logging and sandboxing.

Here's the complete 5-step setup (takes about 3 minutes):

  1. Create Account and Node: Sign up at app.opssquad.ai → Navigate to Nodes → Click "Create Node" → Enter a descriptive name like "Production-Web-Servers" → Copy your unique Node ID and authentication token from the dashboard

  2. Deploy Agent: SSH to your server and run the installation commands using your Node ID and token:

    curl -fsSL https://install.opssqad.ai/install.sh | bash
    opssquad node install --node-id=prod-web-abc123 --token=eyJhbGc...
    opssquad node start
  3. Browse Squad Marketplace: In the OpsSquad dashboard, navigate to Squad Marketplace → Find "Security Squad" → Click "Deploy Squad" (this creates your private instance with specialized security agents)

  4. Link Agents to Nodes: Open your deployed Security Squad → Go to Agents tab → Select your web server agents → Click "Grant Access" → Choose your "Production-Web-Servers" node (this gives agents permission to execute commands on your infrastructure)

  5. Start Managing: Go to chat.opssquad.ai → Select your Security Squad → Start chatting with AI agents that can execute real commands

Real Conversation Example:

You: I need to update SSL configurations on all production web servers to use only TLS 1.2 and 1.3, implement the 2026 recommended cipher suites, and add HSTS headers with preload.

Security Squad: I'll help you update your Nginx SSL configurations. I found 15 servers in your Production-Web-Servers node. Let me:
1. Backup current configurations
2. Update ssl_protocols to TLSv1.2 TLSv1.3
3. Apply recommended cipher suites
4. Add HSTS headers with max-age=31536000; includeSubDomains; preload
5. Test configurations
6. Reload Nginx gracefully

Executing on all 15 servers...

✓ server-web-01: Configuration updated, syntax OK, reloaded successfully
✓ server-web-02: Configuration updated, syntax OK, reloaded successfully
✓ server-web-03: Configuration updated, syntax OK, reloaded successfully
[...13 more servers...]
✓ server-web-15: Configuration updated, syntax OK, reloaded successfully

All servers updated successfully. Would you like me to verify the SSL configurations using SSL Labs API?

You: Yes, please verify and show me the scores.

Security Squad: Running SSL Labs analysis on all 15 servers...

Results:
- 14 servers: A+ rating
- 1 server (server-web-08): A rating - missing OCSP stapling

Would you like me to enable OCSP stapling on server-web-08?

You: Yes, and also check if any certificates are expiring in the next 30 days.

Security Squad: Enabling OCSP stapling on server-web-08...
✓ Configuration updated, tested, and reloaded.

Certificate expiration check:
- 13 certificates expire in 60+ days
- 2 certificates (mywebsite.com, app.mywebsite.com) expire in 25 days
- Auto-renewal is configured via Certbot

Recommendation: All certificates have auto-renewal enabled. I'll monitor and alert you if any renewals fail.

What took 4-5 hours of manual work now takes 90 seconds via chat.

Architecture Benefits:

  • Reverse TCP Connection: The OpsSqad agent initiates outbound connections to the cloud, so you don't need to open any inbound firewall ports or set up VPN access.
  • Command Whitelisting: You define exactly which commands agents can execute. For example, you might whitelist nginx -t, systemctl reload nginx, and certbot renew but block destructive commands.
  • Sandboxed Execution: Commands run in isolated contexts with resource limits.
  • Complete Audit Logging: Every command, who requested it, when it ran, and the output are logged for compliance and debugging.
  • Works from Anywhere: Manage your servers from your phone, laptop, or anywhere with internet access—no need to be on the corporate VPN.

The Security Squad specializes in web server security, SSL/TLS management, configuration hardening, and compliance monitoring. It understands Nginx configuration syntax, can detect misconfigurations, and proactively suggests security improvements based on current best practices.

Troubleshooting Common Nginx Configuration Issues

Even experienced engineers encounter Nginx configuration errors. Understanding common issues and their solutions can save significant debugging time.

Syntax Errors and Configuration Testing

Problem: Nginx won't start or reload after configuration changes.

Solution: Always test your configuration before reloading:

sudo nginx -t

Common syntax errors:

  1. Missing semicolon:
# Wrong
server_name mywebsite.com
 
# Correct
server_name mywebsite.com;
  2. Mismatched braces:
# Wrong
server {
    listen 80;
    location / {
        try_files $uri $uri/ =404;
    # Missing closing brace
}
 
# Correct
server {
    listen 80;
    location / {
        try_files $uri $uri/ =404;
    }
}
  3. Invalid directive context:
# Wrong - proxy_pass cannot be in server block directly
server {
    listen 80;
    proxy_pass http://localhost:3000;
}
 
# Correct - proxy_pass must be in location block
server {
    listen 80;
    location / {
        proxy_pass http://localhost:3000;
    }
}

502 Bad Gateway Errors

Problem: Nginx returns "502 Bad Gateway" when trying to access your site.

Causes and Solutions:

  1. Backend server is down:
# Check if your application is running
sudo systemctl status your-app
sudo ss -tlnp | grep 3000  # Check if port 3000 is listening (ss supersedes the legacy netstat)
  2. PHP-FPM socket doesn't exist or has wrong permissions:
# Check if socket exists
ls -la /var/run/php/php8.2-fpm.sock
 
# Check PHP-FPM status
sudo systemctl status php8.2-fpm
 
# Verify socket permissions
sudo chmod 666 /var/run/php/php8.2-fpm.sock  # Temporary fix only; set listen.owner/listen.group in the FPM pool config instead
  3. SELinux blocking connections (RHEL/CentOS):
# Check SELinux status
getenforce
 
# Allow Nginx to connect to network (permanent fix)
sudo setsebool -P httpd_can_network_connect 1

Permission Denied Errors

Problem: Nginx returns "403 Forbidden" or logs show permission denied errors.

Solutions:

  1. Check file permissions:
# Directories should be 755, files should be 644
sudo find /var/www/mywebsite.com/html -type d -exec chmod 755 {} \;
sudo find /var/www/mywebsite.com/html -type f -exec chmod 644 {} \;
  2. Check ownership:
# Change ownership to www-data (Nginx user)
sudo chown -R www-data:www-data /var/www/mywebsite.com/html
  3. Verify Nginx user:
# Check which user Nginx runs as
ps aux | grep nginx
 
# Ensure nginx.conf has correct user directive
grep "^user" /etc/nginx/nginx.conf

SSL Certificate Issues

Problem: Browser shows SSL errors or Nginx won't start after adding SSL configuration.

Solutions:

  1. Certificate path issues:
# Verify certificate files exist
sudo ls -la /etc/letsencrypt/live/mywebsite.com/
 
# Check certificate validity
sudo openssl x509 -in /etc/letsencrypt/live/mywebsite.com/fullchain.pem -text -noout
  2. Certificate and key mismatch:
# Verify certificate and key match
sudo openssl x509 -noout -modulus -in /etc/letsencrypt/live/mywebsite.com/fullchain.pem | openssl md5
sudo openssl rsa -noout -modulus -in /etc/letsencrypt/live/mywebsite.com/privkey.pem | openssl md5
# The MD5 hashes should match
  3. Expired certificates:
# Check expiration date
sudo openssl x509 -in /etc/letsencrypt/live/mywebsite.com/fullchain.pem -noout -dates
 
# Force renewal
sudo certbot renew --force-renewal
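The `-dates` check above can be turned into a days-remaining calculation, which is handy in cron jobs. A sketch using a throwaway self-signed certificate so it runs anywhere; point it at your real `fullchain.pem` in practice, and note that `date -d` is GNU date:

```shell
#!/bin/sh
# Create a throwaway self-signed cert valid for 90 days (stand-in for fullchain.pem)
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=example.test" \
    -keyout /tmp/demo.key -out /tmp/demo.crt -days 90 2>/dev/null

# Extract the notAfter date and convert it to days remaining (GNU date)
end=$(openssl x509 -in /tmp/demo.crt -noout -enddate | cut -d= -f2)
days_left=$(( ($(date -d "$end" +%s) - $(date +%s)) / 86400 ))
echo "days until expiry: $days_left"
```

Alerting when `days_left` drops below 30 gives you the same early warning the Certbot timer relies on, without waiting for a renewal to fail.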

Nginx Configuration Best Practices for 2026

As web infrastructure evolves, certain configuration practices have become essential for security, performance, and maintainability in 2026.

Configuration Organization

  1. Use include directives to keep configurations modular:
http {
    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}
  2. Create snippet files for reusable configurations:
# /etc/nginx/snippets/ssl-params.conf
ssl_protocols TLSv1.2 TLSv1.3;
ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256;
ssl_prefer_server_ciphers on;
ssl_session_cache shared:SSL:10m;

Then include in server blocks:

server {
    listen 443 ssl;
    http2 on;
    include snippets/ssl-params.conf;
}
  3. Version control your configurations:
cd /etc/nginx
sudo git init
sudo git add .
sudo git commit -m "Initial Nginx configuration"
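The include-based layout above is what powers the sites-available/sites-enabled convention: a site is enabled by symlinking its file into the included directory and disabled by removing only the link, never the file. The mechanics can be exercised without root in a scratch directory:

```shell
#!/bin/sh
# Demonstrate the enable/disable symlink pattern in a temp directory
root=$(mktemp -d)
mkdir -p "$root/sites-available" "$root/sites-enabled"

printf 'server { listen 80; }\n' > "$root/sites-available/mywebsite.com"

# Enable: symlink the config into sites-enabled
ln -s "$root/sites-available/mywebsite.com" "$root/sites-enabled/mywebsite.com"
ls "$root/sites-enabled"

# Disable: remove only the link; the original config survives
rm "$root/sites-enabled/mywebsite.com"
ls "$root/sites-available"
```

On a real server the same two commands operate on `/etc/nginx/sites-available` and `/etc/nginx/sites-enabled`, followed by `nginx -t` and a reload.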

Performance Optimization

  1. Enable HTTP/2 for all HTTPS connections (already shown in examples above).

  2. Configure caching for static assets:

location ~* \.(jpg|jpeg|png|gif|ico|css|js)$ {
    expires 1y;
    add_header Cache-Control "public, immutable";
}
  3. Enable gzip compression (in nginx.conf):
gzip on;
gzip_vary on;
gzip_min_length 1000;
gzip_comp_level 6;
gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;
  4. Optimize worker processes:
worker_processes auto;  # One per CPU core (main context)

events {
    worker_connections 1024;  # Adjust based on expected traffic
}
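To get a feel for what compression buys, you can compare sizes locally with the same algorithm and level Nginx uses (`gzip -6` matches `gzip_comp_level 6`). Highly repetitive text compresses dramatically; real HTML/CSS/JS often shrinks by well over half:

```shell
#!/bin/sh
# Compare raw vs gzip-compressed size for a sample text payload
payload=$(head -c 100000 /dev/zero | tr '\0' 'a')   # 100 KB of repetitive text
orig=${#payload}
comp=$(printf '%s' "$payload" | gzip -6 | wc -c | tr -d ' ')
echo "original=$orig bytes, gzipped=$comp bytes"
```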

Security Hardening Checklist

  • ✓ Disable TLS 1.0 and 1.1 (use only TLS 1.2 and 1.3)
  • ✓ Use strong cipher suites with forward secrecy
  • ✓ Enable HSTS with preload
  • ✓ Implement Content Security Policy
  • ✓ Hide Nginx version (server_tokens off;)
  • ✓ Restrict access to sensitive files (.htaccess, .git, etc.)
  • ✓ Implement rate limiting on login endpoints
  • ✓ Use OCSP stapling for faster SSL handshakes
  • ✓ Regular security header audits
  • ✓ Keep Nginx updated to latest stable version

Monitoring and Logging

  1. Structured logging for better analysis:
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                '$status $body_bytes_sent "$http_referer" '
                '"$http_user_agent" "$http_x_forwarded_for" '
                'rt=$request_time uct="$upstream_connect_time" '
                'uht="$upstream_header_time" urt="$upstream_response_time"';
 
access_log /var/log/nginx/access.log main;
  2. Monitor Nginx status:
server {
    listen 127.0.0.1:8080;
    location /nginx_status {
        stub_status on;
        access_log off;
        allow 127.0.0.1;
        deny all;
    }
}
  3. Set up log rotation:
# /etc/logrotate.d/nginx
/var/log/nginx/*.log {
    daily
    missingok
    rotate 14
    compress
    delaycompress
    notifempty
    create 0640 www-data adm
    sharedscripts
    postrotate
        [ -f /var/run/nginx.pid ] && kill -USR1 `cat /var/run/nginx.pid`
    endscript
}
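Log lines written in the `main` format above can be sliced with standard tools. A sketch that pulls the total request time (`rt=`) out of a line; the sample line and its values are illustrative:

```shell
#!/bin/sh
# A sample access-log line in the custom 'main' format defined above
line='203.0.113.5 - - [01/Jan/2026:00:00:00 +0000] "GET / HTTP/1.1" 200 512 "-" "curl/8.5.0" "-" rt=0.004 uct="0.001" uht="0.003" urt="0.003"'

# Extract the request time; the same pattern feeds an awk or cron latency report
rt=$(printf '%s\n' "$line" | grep -o 'rt=[0-9.]*' | head -1 | cut -d= -f2)
echo "request_time=$rt"
```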

Conclusion

Mastering Nginx configuration is essential for modern DevOps engineers managing web infrastructure in 2026. This guide has covered everything from basic server blocks and multi-site hosting to advanced SSL/TLS configurations, reverse proxy setups, security hardening, and troubleshooting common issues. By implementing these nginx configuration file examples and following current best practices—including TLS 1.2+ only, strong cipher suites, comprehensive security headers, and proper rate limiting—you'll build a robust, secure, and performant web server infrastructure.

If you want to automate the entire workflow of managing Nginx configurations across multiple servers, from SSL updates and security hardening to configuration testing and log analysis, OpsSquad's Security Squad can transform hours of manual work into minutes of conversational commands. The reverse TCP architecture means no firewall changes, the whitelisted command system ensures security, and complete audit logging provides compliance visibility. Create your free account and deploy your first Squad in under 3 minutes to experience the difference between manual server management and AI-powered infrastructure automation.