
Mastering NGINX Configuration: A Practical Guide to Web Serving and Proxying
Introduction: Why NGINX Configuration Matters
What is NGINX?
NGINX (pronounced "engine-x") is a high-performance web server, reverse proxy, and load balancer that powers over 30% of the world's busiest websites. Originally created by Igor Sysoev in 2004 to solve the C10K problem—handling 10,000 concurrent connections—NGINX uses an asynchronous, event-driven architecture that dramatically outperforms traditional process-based servers like Apache in high-traffic scenarios.
Unlike thread-per-connection servers, NGINX handles thousands of concurrent connections with minimal memory overhead, making it the preferred choice for modern web architectures, microservices deployments, and containerized environments. You'll find NGINX serving static assets, proxying requests to application servers, terminating SSL connections, and load balancing traffic across backend clusters in production environments from startups to Fortune 500 companies.
The Power of Configuration
NGINX's performance and flexibility come from its configuration system. A well-configured NGINX instance can serve thousands of requests per second, intelligently route traffic, cache responses, and protect backend applications from malicious traffic. A misconfigured NGINX instance can bring your entire application stack to its knees with 502 errors, timeouts, or security vulnerabilities.
The NGINX configuration system uses a hierarchical structure of directives (individual configuration instructions) organized into blocks (context-specific sections). Understanding this structure is the difference between copying configuration snippets from StackOverflow and architecting robust, maintainable web infrastructure.
What This Guide Covers
This guide walks you through NGINX configuration from fundamentals to production-ready deployments. You'll learn how to configure NGINX as a static file server, set up reverse proxying to backend applications, implement SSL/TLS encryption, manage multiple websites on a single server, handle errors gracefully, and troubleshoot common configuration mistakes. Each section includes working code examples with explanations of what each directive does and why it matters.
TL;DR: NGINX is a high-performance web server and reverse proxy that uses an event-driven architecture to handle thousands of concurrent connections efficiently. Its power comes from flexible configuration using directives organized in hierarchical blocks. This guide teaches you to configure NGINX for static serving, reverse proxying, SSL termination, multi-site hosting, and production troubleshooting.
Understanding NGINX Configuration File Structure and Core Concepts
The nginx.conf File: Your Central Hub
The main NGINX configuration file lives at /etc/nginx/nginx.conf on most Linux distributions (Ubuntu, Debian, CentOS, RHEL). This file defines global settings and includes other configuration files for modular organization.
A typical nginx.conf structure looks like this:
user www-data;
worker_processes auto;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;
events {
    worker_connections 768;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}

The include directive pulls in additional configuration files, allowing you to organize virtual hosts, SSL settings, and application-specific configurations in separate files. Ubuntu/Debian systems use /etc/nginx/sites-available/ for configuration files and /etc/nginx/sites-enabled/ for symbolic links to active sites. RHEL/CentOS systems typically use /etc/nginx/conf.d/ for all active configurations.
NGINX Directives: The Building Blocks
Directives are individual configuration instructions that tell NGINX how to behave. Each directive has a name, one or more parameters, and ends with a semicolon. Simple directives fit on one line:
worker_processes 4;
error_log /var/log/nginx/error.log warn;

Some directives accept complex values or multiple parameters:
access_log /var/log/nginx/access.log combined buffer=32k;
proxy_set_header X-Real-IP $remote_addr;

Common directives you'll use frequently include listen (which port to bind), server_name (which domain names to respond to), root (where to find files), proxy_pass (where to forward requests), and ssl_certificate (which SSL certificate to use).
NGINX Blocks: Organizing Your Configuration
Blocks (also called contexts) are directives that contain other directives within curly braces. NGINX organizes configuration into a top-level main context plus four primary block types arranged hierarchically:
- Main context: Top-level directives outside any block (process-level settings)
- Events block: Connection processing settings
- HTTP block: All web server configuration
- Server block: Virtual host configuration (one per website/domain)
- Location block: URI-specific configuration within a server
Directives inherit from parent contexts to child contexts. A directive set in the http block applies to all server blocks unless overridden. Here's the hierarchy in action:
http {
    # Applies to ALL servers
    gzip on;

    server {
        # Applies to this server only
        listen 80;
        server_name example.com;

        location / {
            # Applies to this location only
            root /var/www/html;
        }
    }
}

Master and Worker Processes
NGINX uses a master-worker architecture. The master process reads configuration, binds to ports, and spawns worker processes. Worker processes handle actual client connections. This architecture allows NGINX to reload configuration without dropping connections and isolate crashes to individual workers.
The worker_processes directive controls how many worker processes NGINX spawns. Setting it to auto (recommended) creates one worker per CPU core:
worker_processes auto;

Each worker can handle thousands of concurrent connections, defined by worker_connections in the events block:

events {
    worker_connections 1024;
}
}With 4 CPU cores and worker_connections 1024, your NGINX instance can theoretically handle 4,096 concurrent connections. In practice, you'll hit other limits (memory, backend capacity, network bandwidth) first.
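If you do raise worker_connections, also raise the per-worker open-file limit, since each connection consumes at least one file descriptor. A minimal sketch with illustrative values (tune them to your hardware and system ulimit):

```nginx
worker_processes auto;
worker_rlimit_nofile 65535;   # per-worker open-file limit; keep it above worker_connections

events {
    worker_connections 4096;  # illustrative value; distro defaults are often 768 or 1024
}
```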
Configuring NGINX as a Web Server for Static Content
Serving Static Files: The Basics
Static file serving is NGINX's bread and butter. When configured properly, NGINX can serve static HTML, CSS, JavaScript, images, and other assets faster than any application server. This is why even applications running on Node.js, Python, or Ruby typically put NGINX in front to handle static assets.
All web server configuration happens inside the http block. Within that block, you define one or more server blocks for virtual hosts.
Setting Up a Basic server Block
A server block defines a virtual host—a website or application served by NGINX. Each server block needs at minimum a listen directive (which port), a server_name directive (which domain), and a root directive (where files live).
Here's a complete working configuration for a static website:
server {
    listen 80;
    listen [::]:80; # IPv6
    server_name example.com www.example.com;

    root /var/www/example.com/html;
    index index.html index.htm;

    access_log /var/log/nginx/example.com.access.log;
    error_log /var/log/nginx/example.com.error.log;
}

Save this in /etc/nginx/sites-available/example.com — note that files in sites-available are included inside the http block, so they contain only server blocks, not an http wrapper. Then create a symbolic link to /etc/nginx/sites-enabled/ and test the configuration:
sudo ln -s /etc/nginx/sites-available/example.com /etc/nginx/sites-enabled/
sudo nginx -t
sudo systemctl reload nginx

The listen directive tells NGINX to bind to port 80 (HTTP). The server_name directive matches incoming requests by the Host header. The root directive specifies the document root where NGINX looks for files. The index directive lists filenames to serve when a directory is requested.
The location Block: Routing Requests
Location blocks define how NGINX handles specific URI patterns. The simplest location block matches all requests:
server {
    listen 80;
    server_name example.com;
    root /var/www/example.com;

    location / {
        try_files $uri $uri/ =404;
    }
}
}This configuration says "for any URI starting with /, try to serve the requested file, then try the requested path as a directory, then return 404 if neither exists."
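try_files can also hand misses to a named location, a common way to serve static files directly while passing everything else to an application. A sketch assuming a hypothetical backend on port 3000:

```nginx
location / {
    # Serve the file if it exists, otherwise hand off to the app
    try_files $uri $uri/ @backend;
}

location @backend {
    proxy_pass http://localhost:3000;  # hypothetical application server
}
```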
NGINX supports several location matching types with different priorities:
- Exact match: location = /path (highest priority)
- Preferential prefix: location ^~ /path
- Case-sensitive regex: location ~ pattern
- Case-insensitive regex: location ~* pattern
- Prefix match: location /path (lowest priority)
Here's a practical example serving static files with special handling for images:
server {
    listen 80;
    server_name example.com;
    root /var/www/example.com;

    # Exact match for homepage
    location = / {
        try_files /index.html =404;
    }

    # Cache images aggressively
    location ~* \.(jpg|jpeg|png|gif|ico|svg)$ {
        expires 30d;
        add_header Cache-Control "public, immutable";
    }

    # Deny access to hidden files
    location ~ /\. {
        deny all;
    }

    # Default fallback
    location / {
        try_files $uri $uri/ =404;
    }
}

Warning: NGINX evaluates regex locations in the order they appear in the configuration file. Place more specific patterns before general ones to avoid unexpected matches.
Pro Tip: Using try_files for Improved Fallback and Cleaner Configurations
The try_files directive is your Swiss Army knife for clean request handling. It checks for file existence and falls back gracefully without complex if statements.
For single-page applications (React, Vue, Angular), use try_files to serve the index.html for all routes:
location / {
    root /var/www/spa;
    try_files $uri $uri/ /index.html;
}

This configuration tries to serve the requested file, then tries it as a directory, then falls back to /index.html (letting the JavaScript router handle the path). This pattern is essential for client-side routing to work correctly.
NGINX as a Reverse Proxy: Connecting to Backend Applications
What is a Reverse Proxy?
A reverse proxy sits between clients and backend servers, forwarding client requests to backends and returning responses to clients. Unlike a forward proxy (which represents clients to servers), a reverse proxy represents servers to clients.
Reverse proxies provide several critical benefits: SSL termination (handling HTTPS so backends don't have to), load balancing (distributing requests across multiple backends), caching (storing responses to reduce backend load), and security (hiding backend topology and filtering malicious requests).
Configuring a Basic Reverse Proxy
The proxy_pass directive transforms NGINX from a static file server into a reverse proxy. Here's a minimal configuration proxying to a Node.js application running on port 3000:
server {
    listen 80;
    server_name api.example.com;

    location / {
        proxy_pass http://localhost:3000;
    }
}

When a request hits http://api.example.com/users, NGINX forwards it to http://localhost:3000/users and returns the response. This works, but it's missing critical headers that backend applications need.
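One subtlety worth knowing: whether proxy_pass includes a URI part changes how the request path is rewritten. The two blocks below are alternatives for comparison, not meant to coexist:

```nginx
# Without a URI part, the full original path is forwarded as-is:
location /api/ {
    proxy_pass http://localhost:3000;   # /api/users -> /api/users on the backend
}

# With a URI part (even just "/"), the matched prefix is replaced by it:
location /api/ {
    proxy_pass http://localhost:3000/;  # /api/users -> /users on the backend
}
```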
Essential Proxy Directives
Backend applications need to know the original client IP, protocol, and hostname. Without proper headers, your application logs will show all requests coming from 127.0.0.1, breaking IP-based rate limiting and geolocation.
Here's a production-ready reverse proxy configuration:
server {
    listen 80;
    server_name api.example.com;

    location / {
        proxy_pass http://localhost:3000;

        # Pass original request information
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        # Timeout settings
        proxy_connect_timeout 60s;
        proxy_send_timeout 60s;
        proxy_read_timeout 60s;

        # Buffering settings
        proxy_buffering on;
        proxy_buffer_size 4k;
        proxy_buffers 8 4k;
    }
}

The proxy_set_header directives add headers to the proxied request. Host preserves the original hostname. X-Real-IP contains the client's IP address. X-Forwarded-For builds a chain of proxy IPs (useful when multiple proxies are involved). X-Forwarded-Proto tells the backend whether the original request was HTTP or HTTPS.
Timeout directives prevent hanging connections. proxy_connect_timeout limits how long NGINX waits to establish a connection to the backend. proxy_read_timeout limits how long NGINX waits for a response. If your backend has slow endpoints (report generation, video processing), increase these values.
The proxy_buffering directive controls whether NGINX buffers backend responses before sending them to clients. Buffering allows NGINX to free up backend connections quickly (good for slow clients on mobile networks). Disabling buffering reduces latency for streaming responses. Most applications should leave buffering enabled.
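For endpoints that stream responses (server-sent events, long-polling), you can disable buffering for just that location rather than globally. A sketch assuming a hypothetical /events endpoint:

```nginx
location /events {
    proxy_pass http://localhost:3000;  # hypothetical streaming backend
    proxy_buffering off;               # forward bytes to the client as they arrive
    proxy_read_timeout 3600s;          # allow long-lived streams
}
```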
Handling WebSocket Connections
WebSocket connections require special handling because they upgrade from HTTP to a persistent TCP connection. Without proper configuration, NGINX closes WebSocket connections immediately.
Here's the correct WebSocket proxy configuration:
server {
    listen 80;
    server_name ws.example.com;

    location /socket.io/ {
        proxy_pass http://localhost:3000;

        # WebSocket-specific headers
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";

        # Standard proxy headers
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

        # Disable buffering for WebSockets
        proxy_buffering off;

        # Increase timeouts for long-lived connections
        proxy_read_timeout 86400s;
        proxy_send_timeout 86400s;
    }
}

The critical directives are proxy_http_version 1.1 (WebSockets require HTTP/1.1), the Upgrade header (signals protocol upgrade), and Connection "upgrade" (maintains the connection). Set long timeouts (86400 seconds = 24 hours) for persistent connections.
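If the same location serves both WebSocket and plain HTTP traffic, the pattern documented by NGINX is a map block that only sets Connection to "upgrade" when the client actually asks to upgrade:

```nginx
# In the http block: derive the Connection header from the client's Upgrade header
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}

server {
    listen 80;
    server_name ws.example.com;

    location / {
        proxy_pass http://localhost:3000;  # hypothetical backend
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
    }
}
```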
Advanced NGINX Configuration: Virtual Servers and Multi-Site Hosting
NGINX Virtual Servers: Hosting Multiple Websites
Virtual servers allow a single NGINX instance to host multiple websites distinguished by domain name. Each server block defines a separate virtual server with its own configuration.
Here's a configuration hosting two completely different websites:
# First website
server {
    listen 80;
    server_name example.com www.example.com;
    root /var/www/example.com;

    location / {
        try_files $uri $uri/ =404;
    }
}

# Second website
server {
    listen 80;
    server_name another.com www.another.com;
    root /var/www/another.com;

    location / {
        try_files $uri $uri/ =404;
    }
}

# API backend
server {
    listen 80;
    server_name api.example.com;

    location / {
        proxy_pass http://localhost:3000;
        proxy_set_header Host $host;
    }
}

NGINX matches incoming requests to server blocks using the Host header. A request for example.com matches the first block, another.com matches the second, and api.example.com matches the third.
Note: If no server_name matches, NGINX uses the first server block for that port (the default server). Explicitly define a default server to control this behavior:
server {
    listen 80 default_server;
    server_name _;
    return 444; # Close connection without response
}

Multi-Site Hosting Strategies
For maintainability, separate each virtual host into its own file. On Ubuntu/Debian systems, use the sites-available/sites-enabled pattern:
# Create configuration file
sudo nano /etc/nginx/sites-available/example.com
# Enable the site
sudo ln -s /etc/nginx/sites-available/example.com /etc/nginx/sites-enabled/
# Test and reload
sudo nginx -t
sudo systemctl reload nginx

This pattern lets you disable sites without deleting configuration:
# Disable a site
sudo rm /etc/nginx/sites-enabled/example.com
sudo systemctl reload nginx
# Re-enable later
sudo ln -s /etc/nginx/sites-available/example.com /etc/nginx/sites-enabled/
sudo systemctl reload nginx

On RHEL/CentOS systems, place configuration files directly in /etc/nginx/conf.d/:
sudo nano /etc/nginx/conf.d/example.com.conf
sudo nginx -t
sudo systemctl reload nginx

NGINX Location Priority and Matching
Understanding location priority prevents frustrating debugging sessions where requests hit unexpected locations. NGINX evaluates locations in this order:
- Exact matches (location = /path) first
- Preferential prefix matches (location ^~ /path) second
- Regex matches (location ~ or location ~*) in file order
- Prefix matches (location /path) last
Here's a practical example showing priority:
server {
    listen 80;
    server_name example.com;

    # 1. Exact match - highest priority
    location = /exact {
        return 200 "Exact match\n";
    }

    # 2. Preferential prefix - stops regex evaluation
    location ^~ /images/ {
        root /var/www/static;
    }

    # 3. Regex - evaluated in order
    location ~ \.php$ {
        fastcgi_pass unix:/var/run/php-fpm.sock;
    }

    location ~* \.(jpg|png|gif)$ {
        expires 30d;
    }

    # 4. Prefix - lowest priority
    location /docs/ {
        root /var/www;
    }

    location / {
        try_files $uri $uri/ =404;
    }
}

A request for /exact matches the exact location. A request for /images/logo.png matches the preferential prefix (skipping regex evaluation). A request for /test.php matches the PHP regex. A request for /photo.jpg matches the image regex. A request for /docs/guide.html matches the prefix location.
Warning: The ^~ modifier is powerful but dangerous. It prevents all regex evaluation for matching requests, which can break expected behavior if you're not careful.
Securing Your NGINX Deployment: SSL/HTTPS Configuration
The Importance of HTTPS
HTTPS encrypts traffic between clients and servers using SSL/TLS, preventing eavesdropping, man-in-the-middle attacks, and data tampering. Modern browsers mark HTTP sites as "Not Secure," and Google penalizes HTTP sites in search rankings. HTTPS is no longer optional—it's a baseline security requirement.
Beyond encryption, HTTPS enables HTTP/2 (faster page loads), is required for many modern browser APIs (geolocation, service workers, camera access), and builds user trust with the green padlock icon.
Obtaining SSL Certificates
Let's Encrypt provides free, automated SSL certificates valid for 90 days with automatic renewal. Install Certbot to obtain and manage Let's Encrypt certificates:
# Ubuntu/Debian
sudo apt update
sudo apt install certbot python3-certbot-nginx
# RHEL/CentOS
sudo yum install certbot python3-certbot-nginx

Obtain a certificate for your domain:
sudo certbot --nginx -d example.com -d www.example.com

Certbot automatically modifies your NGINX configuration to enable HTTPS and sets up automatic renewal via a systemd timer or cron job.
Configuring SSL/TLS in NGINX
Here's a manual SSL configuration (useful if you have commercial certificates or need custom settings):
server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name example.com www.example.com;

    # SSL certificate files
    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    # SSL configuration
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers HIGH:!aNULL:!MD5;
    ssl_prefer_server_ciphers on;

    # SSL session cache
    ssl_session_cache shared:SSL:10m;
    ssl_session_timeout 10m;

    root /var/www/example.com;

    location / {
        try_files $uri $uri/ =404;
    }
}

The ssl_certificate directive points to the full certificate chain (your certificate plus intermediate certificates). The ssl_certificate_key directive points to your private key. Never commit private keys to version control or expose them via the web server.
The ssl_protocols directive disables insecure protocols (SSLv3, TLSv1, TLSv1.1). The ssl_ciphers directive restricts which encryption algorithms are allowed. The ssl_prefer_server_ciphers directive forces clients to use server-preferred ciphers (more secure).
SSL session caching dramatically improves performance for returning visitors by reusing SSL session parameters instead of performing a full handshake.
Redirecting HTTP to HTTPS
Always redirect HTTP traffic to HTTPS to ensure all connections are encrypted:
# HTTP server - redirect to HTTPS
server {
listen 80;
listen [::]:80;
server_name example.com www.example.com;
return 301 https://$server_name$request_uri;
}
# HTTPS server - actual configuration
server {
listen 443 ssl http2;
listen [::]:443 ssl http2;
server_name example.com www.example.com;
ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
# ... rest of configuration
}The return 301 directive sends a permanent redirect to the HTTPS version of the requested URL. The $server_name variable contains the matched server name, and $request_uri contains the full request path and query string.
SSL/TLS Best Practices
For production deployments, add these security enhancements:
server {
    listen 443 ssl http2;
    server_name example.com;

    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    # Modern SSL configuration
    ssl_protocols TLSv1.3 TLSv1.2;
    ssl_ciphers 'ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384';
    ssl_prefer_server_ciphers off;

    # OCSP stapling
    ssl_stapling on;
    ssl_stapling_verify on;
    ssl_trusted_certificate /etc/letsencrypt/live/example.com/chain.pem;

    # Security headers
    add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
    add_header X-Frame-Options "SAMEORIGIN" always;
    add_header X-Content-Type-Options "nosniff" always;

    # ... rest of configuration
}

OCSP stapling improves SSL handshake performance and privacy by having NGINX fetch certificate revocation status instead of forcing clients to do it. The Strict-Transport-Security header (HSTS) tells browsers to always use HTTPS for your domain, preventing downgrade attacks.
Mastering NGINX Error Handling and URI Rewriting
Customizing Error Pages
Default NGINX error pages are functional but ugly. Custom error pages improve user experience and maintain brand consistency:
server {
    listen 80;
    server_name example.com;
    root /var/www/example.com;

    # Custom error pages
    error_page 404 /404.html;
    error_page 500 502 503 504 /50x.html;

    location = /404.html {
        internal;
    }

    location = /50x.html {
        internal;
    }
}

The error_page directive maps HTTP status codes to custom pages. The internal directive prevents direct access to error pages (they're only served when NGINX generates that error).
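By default, error_page applies to errors NGINX itself generates; error responses from a proxied backend pass through to the client unchanged. To route backend errors through your custom pages as well, enable proxy_intercept_errors (the backend address here is illustrative):

```nginx
location /api/ {
    proxy_pass http://localhost:3000;  # hypothetical backend
    proxy_intercept_errors on;         # let error_page handle backend 4xx/5xx responses
}
```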
Create /var/www/example.com/404.html and /var/www/example.com/50x.html with user-friendly error messages. For single-page applications, you might want to serve the main application for 404 errors:
error_page 404 /index.html;

Logging Errors Effectively
The error_log directive controls where NGINX writes error messages and at what verbosity level:
error_log /var/log/nginx/error.log warn;

Available log levels from least to most verbose: emerg, alert, crit, error, warn, notice, info, debug. Use warn or error for production. Use debug for troubleshooting (generates huge log files):
error_log /var/log/nginx/debug.log debug;

You can specify different error logs per server block:
server {
    listen 80;
    server_name example.com;
    error_log /var/log/nginx/example.com.error.log;
}

Troubleshooting Tip: When debugging configuration issues, temporarily increase the error log level to info or debug and watch the log in real-time:
sudo tail -f /var/log/nginx/error.log

URI Rewriting with rewrite
The rewrite directive modifies request URIs using regular expressions. Common use cases include redirecting old URLs to new ones, removing file extensions, and enforcing URL conventions.
Here's the syntax:
rewrite regex replacement [flag];

Redirect old blog URLs to new structure:
server {
    listen 80;
    server_name example.com;

    # Old: /blog/post.php?id=123
    # New: /blog/123
    # The trailing ? stops NGINX from appending the original query string
    rewrite ^/blog/post\.php$ /blog/$arg_id? permanent;

    # Remove trailing slashes
    rewrite ^/(.*)/$ /$1 permanent;

    # Add .html extension
    rewrite ^/([^.]+)$ /$1.html last;
}
}Flags control how NGINX processes rewrites:
- last: stop processing rewrite directives and search for a new matching location
- break: stop processing rewrite directives and use the current URI
- redirect: return a 302 temporary redirect
- permanent: return a 301 permanent redirect
Warning: The rewrite directive is powerful but can create infinite loops. Use last or break flags carefully, and always test with curl -I to verify redirect behavior.
The return Directive for Simple Redirects
For simple redirects, return is faster and clearer than rewrite:
server {
    listen 80;
    server_name old-domain.com;

    # Redirect entire domain
    return 301 https://new-domain.com$request_uri;
}

server {
    listen 80;
    server_name example.com;

    # Redirect specific path
    location /old-page {
        return 301 /new-page;
    }

    # Return custom response
    location /api/status {
        default_type text/plain;
        return 200 "OK\n";
    }
}

Note that default_type (not add_header) sets the Content-Type of a response generated by return; add_header would append a second Content-Type alongside the default one. Use return for status codes and simple redirects. Use rewrite when you need regex pattern matching or URI manipulation.
Skip the Manual Work: How OpsSqad Automates NGINX Debugging and Management
The Challenge of NGINX Configuration Complexity
Managing NGINX configurations across multiple servers is time-consuming and error-prone. You SSH into servers, manually edit configuration files, test syntax, reload services, check logs, and repeat the process when something breaks. A typo in nginx.conf can bring down your entire web infrastructure. Debugging a 502 Bad Gateway error often means SSHing between servers, checking logs, verifying backend status, and tracing network connections—all while your site is down and users are complaining.
Traditional configuration management tools help with deployment but don't solve the real-time debugging problem. When NGINX returns cryptic errors at 2 AM, you need immediate answers, not a Terraform plan.
Introducing OpsSqad: Your AI-Powered DevOps Partner
OpsSqad uses reverse TCP architecture to give you secure, agent-based access to your servers without opening inbound firewall ports or configuring VPNs. Install a lightweight node on your NGINX server, and it establishes an outbound connection to OpsSqad cloud. AI agents organized into specialized Squads execute commands and troubleshoot issues through a simple chat interface.
Unlike traditional SSH access, OpsSqad provides command whitelisting (agents can only run approved commands), sandboxed execution (commands run in controlled environments), and complete audit logging (every command is recorded for compliance). You get the power of direct server access with the safety of managed automation.
How OpsSqad's Linux Squad Solves NGINX Configuration Challenges
Step 1: Create Your Free Account & Deploy a Node
Sign up at app.opssquad.ai and navigate to the Nodes section. Click "Create Node" and give it a descriptive name like "production-web-01". The dashboard generates a unique Node ID and authentication token—copy these values.
Step 2: Install the OpsSqad Agent
SSH to your NGINX server and run the installation commands using your Node ID and token from the dashboard:
curl -fsSL https://install.opssquad.ai/install.sh | bash
opssquad node install --node-id=node_abc123xyz --token=tok_def456uvw
opssquad node start

The node establishes a secure reverse TCP connection to OpsSqad cloud. Your firewall requires no inbound rules—the node connects outbound on port 443 (HTTPS), which works from anywhere including locked-down corporate networks.
Step 3: Browse Squad Marketplace & Deploy Linux Squad
In the OpsSqad dashboard, navigate to Squad Marketplace and find "Linux Squad"—an AI agent pre-trained on Linux system administration, including NGINX configuration, log analysis, and troubleshooting. Click "Deploy Squad" to create your private instance with all necessary agents.
Step 4: Link Agents to Nodes
Open your deployed Linux Squad, go to the Agents tab, and grant the agents access to your production-web-01 node. This permission model ensures agents only interact with servers you explicitly authorize.
Step 5: Start Debugging via Chat
Navigate to chat.opssquad.ai, select your Linux Squad, and start chatting with AI agents that can execute real commands on your infrastructure.
Real-World Scenario: Debugging an NGINX Configuration Error
You: "My NGINX site is returning a 502 Bad Gateway error. Can you check the NGINX error logs and the status of my backend application?"
OpsSqad Linux Squad:
[Executing command: sudo tail -n 50 /var/log/nginx/error.log]
[Output:]
2024/01/15 14:23:17 [error] 12456#12456: *8934 connect() failed (111: Connection refused) while connecting to upstream, client: 203.0.113.45, server: api.example.com, request: "GET /api/users HTTP/1.1", upstream: "http://127.0.0.1:3000/api/users", host: "api.example.com"
2024/01/15 14:23:18 [error] 12456#12456: *8935 connect() failed (111: Connection refused) while connecting to upstream, client: 203.0.113.46, server: api.example.com, request: "GET /api/products HTTP/1.1", upstream: "http://127.0.0.1:3000/api/products", host: "api.example.com"
[Executing command: sudo systemctl status node-app.service]
[Output:]
● node-app.service - Node.js API Application
Loaded: loaded (/etc/systemd/system/node-app.service; enabled; vendor preset: enabled)
Active: inactive (dead) since Mon 2024-01-15 14:22:50 UTC; 45s ago
Process: 11234 ExecStart=/usr/bin/node /opt/app/server.js (code=exited, status=1/FAILURE)
Main PID: 11234 (code=exited, status=1/FAILURE)
Jan 15 14:22:49 web-01 node[11234]: Error: Cannot find module 'express'
Jan 15 14:22:50 web-01 systemd[1]: node-app.service: Main process exited, code=exited, status=1/FAILURE
Jan 15 14:22:50 web-01 systemd[1]: node-app.service: Failed with result 'exit-code'.
[Executing command: sudo journalctl -u node-app.service -n 20]
[Output:]
Jan 15 14:22:49 web-01 node[11234]: Error: Cannot find module 'express'
Jan 15 14:22:49 web-01 node[11234]: at Function.Module._resolveFilename (internal/modules/cjs/loader.js:880:15)
Jan 15 14:22:49 web-01 node[11234]: at Function.Module._load (internal/modules/cjs/loader.js:725:27)
OpsSqad Linux Squad: "I've identified the issue. Your NGINX error log shows Connection refused errors when trying to proxy to http://127.0.0.1:3000, indicating the backend application isn't running. The systemd status confirms node-app.service is inactive and failed to start due to a missing express module. The application crashed because npm dependencies weren't installed after a recent deployment. To fix this, run cd /opt/app && sudo npm install to install dependencies, then sudo systemctl start node-app.service to restart the application."
You: "Can you verify the NGINX configuration is correct for proxying to port 3000?"
OpsSqad Linux Squad:
[Executing command: sudo nginx -T | grep -A 10 "server_name api.example.com"]
[Output:]
server {
listen 80;
server_name api.example.com;
location / {
proxy_pass http://localhost:3000;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
}
OpsSqad Linux Squad: "Your NGINX configuration for api.example.com is correct. It's properly configured to proxy requests to http://localhost:3000 with appropriate headers. Once you install the npm dependencies and restart the Node.js application, NGINX will be able to connect to the backend and the 502 errors will resolve."
Benefits Demonstrated
Speed: OpsSqad executed multiple diagnostic commands across logs, systemd, and NGINX configuration in seconds. Manually SSHing into the server, finding the right log files, checking service status, and examining configuration would take 5-10 minutes.
Security: No direct SSH access required. Commands execute via the secure reverse tunnel and are subject to OpsSqad's command whitelisting and sandboxing. The audit log records every command for compliance.
Clarity: The AI agent provided clear, actionable analysis connecting the symptoms (502 errors) to the root cause (missing npm dependencies) with specific remediation steps.
Efficiency: What would take 15 minutes of manual investigation and command execution took 90 seconds via chat. The agent correlated information from multiple sources (NGINX error log, systemd status, application logs) that would require running several commands manually.
The reverse TCP architecture means this works from anywhere—your laptop at home, a coffee shop, even your phone—without VPN configuration or firewall changes. Install the node once, and you have permanent secure access through OpsSqad's chat interface.
Troubleshooting Common NGINX Configuration Mistakes
Syntax Errors
Typos, missing semicolons, and incorrect directive usage are the most common NGINX configuration errors. A single missing semicolon can prevent NGINX from starting.
Always test configuration before reloading:
sudo nginx -t
Example output for valid configuration:
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
Example output for syntax error:
nginx: [emerg] unexpected "}" in /etc/nginx/sites-enabled/example.com:15
nginx: configuration file /etc/nginx/nginx.conf test failed
The error message tells you exactly which file and line number contains the problem. Open that file and look for the issue (usually a missing semicolon on the previous line).
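When the problem is an unbalanced brace rather than a semicolon, a quick count narrows it down. A hypothetical sketch (the sample config and the awk one-liner are illustrative, not an NGINX tool — on a real server you would run it against a file in /etc/nginx/sites-enabled):

```shell
# Create a sample vhost with a missing closing brace (illustrative only).
CONF="$(mktemp)"
cat > "$CONF" <<'EOF'
server {
    listen 80;
    location / {
        root /var/www/html;
}
EOF

# Count { and } in the file; a non-zero result means an unbalanced block.
RESULT=$(awk '{
    for (i = 1; i <= length($0); i++) {
        c = substr($0, i, 1)
        if (c == "{") open++
        if (c == "}") open--
    }
} END { print "unmatched braces:", open + 0 }' "$CONF")
echo "$RESULT"   # prints: unmatched braces: 1
```

A positive count means a missing `}`; a negative count means a stray one, like the `unexpected "}"` error above.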
Pro Tip: Use nginx -T (capital T) to dump the entire parsed configuration, which helps identify issues with included files:
sudo nginx -T | less
Incorrect server_name Matching
NGINX not selecting the intended server block is frustrating. Requests hit the wrong virtual host, serving content from the wrong directory or proxying to the wrong backend.
Common causes:
- Typo in server_name: server_name exmaple.com won't match example.com
- Missing www subdomain: server_name example.com won't match www.example.com
- DNS not pointing to server: Domain resolves to wrong IP address
- No matching server block: NGINX uses the default server (first block on that port)
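Typos like exmaple.com are easiest to catch by listing every server_name the configuration actually defines. A sketch against a throwaway directory (on a real server, point grep at /etc/nginx/sites-enabled instead):

```shell
# Build a throwaway config tree containing a typo'd server_name
# (illustrative; real vhost files live under /etc/nginx/sites-enabled).
CONF_DIR="$(mktemp -d)"
cat > "$CONF_DIR/example.com.conf" <<'EOF'
server {
    listen 80;
    server_name exmaple.com;
}
EOF

# List every server_name directive with its file and line number.
NAMES=$(grep -rn "server_name" "$CONF_DIR")
echo "$NAMES"
```

The typo'd name jumps out of the grep output immediately, along with the exact file and line to fix.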
Debug server name matching:
# Check what NGINX sees
curl -H "Host: example.com" http://your-server-ip/
# Test with verbose output
curl -v -H "Host: example.com" http://your-server-ip/
Add both the naked domain and the www subdomain to server_name:
server_name example.com www.example.com;
Or use a wildcard:
server_name *.example.com example.com;
location Block Ambiguities
Requests routing to the wrong location block cause unexpected behavior. You set up a proxy for /api/ but requests hit the root location instead.
Common issues:
- Regex takes precedence: A matching regex location wins over an ordinary prefix match (unless the prefix uses ^~ or an exact = match)
- Missing trailing slash: location /api matches both /api and /api/users, but behaves differently than location /api/
- Conflicting location blocks: Multiple blocks match the same pattern
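To make the regex pitfall concrete, here is a minimal sketch (the paths and backend port are illustrative): a request for /api/logo.png matches the regex block and gets served from disk instead of being proxied, because a matching regex beats an ordinary prefix.

```nginx
location /api/ {
    proxy_pass http://localhost:3000;
}
# This regex wins for /api/logo.png, so the file is looked up on disk:
location ~* \.(png|jpg)$ {
    root /var/www/static;
}
```

Changing the first block to location ^~ /api/ tells NGINX to skip regex evaluation once that prefix matches, so everything under /api/ is proxied as intended.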
Debug location matching by adding custom headers:
location / {
    add_header X-Location "root" always;
    try_files $uri $uri/ =404;
}
location /api/ {
    add_header X-Location "api" always;
    proxy_pass http://localhost:3000;
}
Check which location matched:
curl -I http://example.com/api/users
Look for the X-Location header in the response. Remove these debugging headers before going to production.
Proxying Issues
Backend applications not responding, timeouts, or incorrect headers cause 502 Bad Gateway, 504 Gateway Timeout, and broken functionality.
Problem: 502 Bad Gateway
Cause: Backend not running or not listening on expected port
Solution: Check backend status and verify port:
sudo systemctl status your-backend-service
sudo netstat -tlnp | grep 3000
Problem: 504 Gateway Timeout
Cause: Backend responding too slowly
Solution: Increase proxy timeouts:
proxy_read_timeout 300s;
proxy_connect_timeout 300s;
Problem: Backend sees all requests from 127.0.0.1
Cause: Missing proxy headers
Solution: Add proxy headers:
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
Problem: WebSocket connections close immediately
Cause: Missing upgrade headers
Solution: Add WebSocket headers:
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
SSL Certificate Errors
Browser warnings or connection failures due to SSL configuration break HTTPS access entirely.
Problem: "Your connection is not private" browser warning
Causes:
- Certificate expired
- Certificate doesn't match domain name
- Certificate chain incomplete
- Wrong certificate file
Solution: Verify certificate:
sudo openssl x509 -in /etc/letsencrypt/live/example.com/cert.pem -text -noout | grep -A 2 "Validity"
sudo openssl x509 -in /etc/letsencrypt/live/example.com/cert.pem -text -noout | grep "Subject:"
Check certificate chain:
sudo openssl s_client -connect example.com:443 -servername example.com
Look for "Verify return code: 0 (ok)" at the end of the output. Any other code indicates a problem.
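openssl can also answer the expiry question directly with -checkend. A self-contained sketch that generates a throwaway self-signed certificate to test against; in production, point -in at your real certificate (e.g. /etc/letsencrypt/live/example.com/cert.pem):

```shell
# Generate a short-lived self-signed certificate (illustrative only;
# use your real certificate path in production).
CERT="$(mktemp -d)/cert.pem"
openssl req -x509 -newkey rsa:2048 -nodes -keyout /dev/null \
    -subj "/CN=example.com" -days 90 -out "$CERT" 2>/dev/null

# Exit status 0 means the cert is still valid 30 days (2592000 s) from now.
if openssl x509 -checkend 2592000 -in "$CERT" >/dev/null; then
    STATUS="ok"
else
    STATUS="renew now"
fi
echo "certificate status: $STATUS"
```

Because the check is a plain exit status, it drops neatly into a cron job or monitoring probe, so you hear about expiring certificates before browsers do.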
Problem: NGINX won't start after adding SSL
Cause: Incorrect file paths or permissions
Solution: Verify files exist and are readable:
sudo ls -la /etc/letsencrypt/live/example.com/
sudo nginx -t
Certificate files should be readable by the NGINX user (usually www-data or nginx).
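stat shows the mode in one line. A sketch against a temporary stand-in file (substitute the key path from the ls output above on a real server; the 640 mode here is just an example):

```shell
# Create a stand-in for a private key and give it restrictive permissions
# (illustrative; check /etc/letsencrypt/live/example.com/privkey.pem for real).
KEY="$(mktemp)"
chmod 640 "$KEY"

# Print the octal mode; private keys should never be world-readable.
MODE=$(stat -c '%a' "$KEY")
echo "mode: $MODE"
```

If the group owner is not the NGINX user's group, either adjust ownership or let NGINX read the key via its master process, which runs as root.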
Prevention and Best Practices for NGINX Configuration
Version Control Your Configurations
Store all NGINX configuration files in a Git repository. This gives you change history, easy rollback, and collaboration capabilities.
cd /etc/nginx
sudo git init
sudo git add nginx.conf sites-available/ conf.d/
sudo git commit -m "Initial NGINX configuration"
Before making changes:
cd /etc/nginx
sudo git diff # Review changes
sudo nginx -t # Test configuration
sudo git commit -am "Add SSL configuration for example.com"
sudo systemctl reload nginx
If something breaks, roll back:
sudo git revert HEAD
sudo systemctl reload nginx
Use nginx -t Religiously
Never reload NGINX without testing configuration first. A syntax error can prevent NGINX from starting, taking down your entire web infrastructure.
Safe reload workflow:
sudo nginx -t && sudo systemctl reload nginx
This command only reloads if the test passes. If the test fails, NGINX continues running with the old configuration.
Modular Configuration
Break down large configurations into smaller, manageable files. This improves readability and makes it easier to enable/disable features.
# /etc/nginx/nginx.conf
http {
    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}
Organize by purpose:
/etc/nginx/
├── nginx.conf
├── conf.d/
│ ├── ssl-params.conf # Shared SSL settings
│ ├── proxy-params.conf # Shared proxy settings
│ └── security-headers.conf # Security headers
└── sites-available/
├── example.com.conf
├── api.example.com.conf
└── blog.example.com.conf
Create reusable configuration snippets:
# /etc/nginx/conf.d/proxy-params.conf
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
Include in server blocks:
location /api/ {
    include conf.d/proxy-params.conf;
    proxy_pass http://localhost:3000;
}
Logging Strategy
Configure appropriate access_log and error_log levels for effective monitoring and debugging.
Production logging:
http {
    access_log /var/log/nginx/access.log combined;
    error_log /var/log/nginx/error.log warn;

    server {
        server_name example.com;
        access_log /var/log/nginx/example.com.access.log;
        error_log /var/log/nginx/example.com.error.log;
    }
}
Disable access logging for static assets to reduce I/O:
location ~* \.(jpg|jpeg|png|gif|ico|css|js)$ {
    access_log off;
    expires 30d;
}
Use conditional logging to exclude health checks:
map $request_uri $loggable {
    /health 0;
    /status 0;
    default 1;
}
access_log /var/log/nginx/access.log combined if=$loggable;
Security Hardening
Protect your NGINX deployment from common attacks.
Hide NGINX version in error pages and headers:
server_tokens off;
Limit request size to prevent DoS:
client_max_body_size 10M;
client_body_buffer_size 128k;
Implement rate limiting:
http {
    limit_req_zone $binary_remote_addr zone=one:10m rate=10r/s;

    server {
        location /api/ {
            limit_req zone=one burst=20 nodelay;
            proxy_pass http://localhost:3000;
        }
    }
}
This configuration allows 10 requests per second per IP address, with a burst allowance of 20 requests.
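Zones can also be tiered per endpoint. A hedged sketch layering a stricter, hypothetical login zone on top of the general API limit (the zone name, rate, and /login path are illustrative):

```nginx
http {
    limit_req_zone $binary_remote_addr zone=one:10m rate=10r/s;
    limit_req_zone $binary_remote_addr zone=login:10m rate=1r/s;

    server {
        location /api/ {
            limit_req zone=one burst=20 nodelay;
            proxy_pass http://localhost:3000;
        }
        location /login {
            # Stricter limit against brute-force attempts.
            limit_req zone=login burst=5;
            proxy_pass http://localhost:3000;
        }
    }
}
```

Rejected requests return 503 by default; adding limit_req_status 429; signals the more accurate Too Many Requests code to clients.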
Disable unnecessary HTTP methods:
if ($request_method !~ ^(GET|POST|HEAD)$) {
    return 405;
}
Performance Tuning
Optimize NGINX for your workload.
Set worker processes to CPU core count:
worker_processes auto;
Increase worker connections for high-traffic sites:
events {
    worker_connections 4096;
}
Enable keepalive connections:
keepalive_timeout 65;
keepalive_requests 100;
Enable Gzip compression:
gzip on;
gzip_vary on;
gzip_proxied any;
gzip_comp_level 6;
gzip_types text/plain text/css text/xml text/javascript application/json application/javascript application/xml+rss application/rss+xml font/truetype font/opentype application/vnd.ms-fontobject image/svg+xml;
Enable caching for static assets:
location ~* \.(jpg|jpeg|png|gif|ico|css|js|svg|woff|woff2)$ {
    expires 30d;
    add_header Cache-Control "public, immutable";
}
Conclusion
You've learned the fundamentals of NGINX configuration: the hierarchical structure of directives and blocks, serving static content efficiently, configuring reverse proxies for backend applications, hosting multiple websites with virtual servers, securing deployments with SSL/TLS, handling errors gracefully, and troubleshooting common mistakes. These skills form the foundation of modern web infrastructure—whether you're running a simple blog or a complex microservices architecture.
NGINX is a powerful and flexible tool, and continuous learning is key. As your infrastructure grows, you'll encounter new challenges: load balancing across multiple backends, implementing caching strategies, integrating with service meshes, and optimizing for specific workloads. The configuration patterns you've learned here scale to those advanced scenarios.
Managing NGINX configurations manually works fine for a few servers, but as your infrastructure grows, you need automation. If you're tired of SSHing into servers at 2 AM to debug 502 errors, OpsSqad can help. Our AI-powered Linux Squad executes diagnostic commands, analyzes logs, and provides actionable solutions through a simple chat interface—all secured by reverse TCP architecture that requires no inbound firewall rules. Visit app.opssquad.ai to create your free account and experience the future of DevOps automation.