Basics

NGINX (pronounced “engine-x”) is a high-performance web server, reverse proxy, load balancer, and HTTP cache.

It is designed to:

  • Handle very high traffic
  • Use low memory
  • Serve many concurrent users efficiently

NGINX is widely used by companies like Netflix, Google, Cloudflare, GitHub, and WordPress.com.

Why NGINX Was Created

Traditional web servers (like Apache) use a process/thread-based model, which can consume a lot of memory under heavy traffic.

NGINX was created to solve:

  • Poor performance under high concurrency
  • High CPU and memory usage
  • Slow response times during traffic spikes

NGINX uses an event-driven, asynchronous architecture, making it faster and more scalable.

What Can NGINX Do? (Core Uses)

NGINX can act as:

  1. Web Server – Serve static & dynamic content
  2. Reverse Proxy – Forward requests to backend servers
  3. Load Balancer – Distribute traffic across servers
  4. HTTP Cache – Cache responses for faster delivery
  5. SSL/TLS Termination – Handle HTTPS encryption
  6. API Gateway – Manage API traffic
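The first four roles are covered in detail below. As a quick illustration of SSL/TLS termination, a minimal server block might look like this (certificate paths and domain are placeholders; the backend speaks plain HTTP while NGINX handles the encryption):

```nginx
server {
    listen 443 ssl;
    server_name example.com;

    # Placeholder certificate paths -- substitute your own.
    ssl_certificate     /etc/ssl/certs/example.com.crt;
    ssl_certificate_key /etc/ssl/private/example.com.key;

    location / {
        # Traffic to the backend stays unencrypted inside your network.
        proxy_pass http://localhost:3000;
    }
}
```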

NGINX Architecture Basics

Event-Driven Model

  • Uses non-blocking I/O
  • One worker can handle thousands of connections
  • Very low memory usage
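These properties are tunable at the top of nginx.conf; a minimal sketch (the values are illustrative, not recommendations):

```nginx
# One worker per CPU core; each worker multiplexes many connections.
worker_processes auto;

events {
    # Upper bound on simultaneous connections per worker.
    worker_connections 1024;
}
```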

Main Components

  • Master Process – Manages configuration and worker processes
  • Worker Processes – Handle client requests
  • Configuration File – Controls behavior (nginx.conf)
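Putting the components together, nginx.conf (typically at /etc/nginx/nginx.conf) has a nested structure; a stripped-down sketch:

```nginx
user  www-data;            # workers run as this unprivileged user
worker_processes auto;     # the master process spawns this many workers

events {
    worker_connections 1024;
}

http {
    # Server blocks (virtual hosts) live inside the http context.
    server {
        listen 80;
        server_name example.com;
        root /var/www/html;
    }
}
```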

NGINX vs Apache (Basic Comparison)

Feature          | NGINX                 | Apache
-----------------|-----------------------|----------------------
Architecture     | Event-driven          | Process-based
Performance      | Very high             | Moderate
Memory usage     | Low                   | High
Static files     | Excellent             | Good
Dynamic content  | Via proxy (PHP-FPM)   | Built-in
Best for         | High traffic          | Small to medium sites

Basic NGINX Concepts

1. Server Block (Virtual Host)

Equivalent to Apache’s VirtualHost.

server {
    listen 80;
    server_name example.com;
    root /var/www/html;
}
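As with Apache virtual hosts, multiple server blocks can share one port, with server_name selecting between them based on the request's Host header (the domains here are placeholders):

```nginx
server {
    listen 80;
    server_name site-a.example.com;
    root /var/www/site-a;
}

server {
    listen 80;
    server_name site-b.example.com;
    root /var/www/site-b;
}
```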

2. Location Block

Defines how requests matching a given URI prefix are handled. With root, the request URI is appended to the root path, so a request for /images/foo.png is served from /data/images/foo.png.

location /images/ {
    root /data;
}
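location supports several match types beyond the plain prefix; a sketch of the common ones (paths are illustrative):

```nginx
location = /health {          # exact match only
    return 200 "ok";
}

location /static/ {           # prefix match
    root /var/www;
}

location ~* \.(png|jpe?g)$ {  # case-insensitive regex match
    root /data/images;
}
```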

3. Reverse Proxy

Forwards requests to backend servers.

location / {
    proxy_pass http://localhost:3000;
}

NGINX as a Web Server

You want NGINX to serve a static website.

server {
    listen 80;
    server_name mysite.com;

    root /var/www/mysite;
    index index.html;

    location / {
        try_files $uri $uri/ =404;
    }
}

  • listen 80 → Listens on HTTP port 80
  • server_name → Domain name
  • root → Website files location
  • index → Default page
  • try_files → Checks whether the requested file or directory exists, and returns 404 if neither does

This means:

  • It serves files from: /var/www/mysite
  • The default page is: /var/www/mysite/index.html
  • If a file/folder doesn’t exist, it returns 404.

How to access it:

  • In a browser: http://mysite.com. This only works if DNS for mysite.com points to your server’s public IP.
  • If DNS isn’t set up yet: http://YOUR_SERVER_IP. This only works if this server block is the default for that address or otherwise matches the request.
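To make a block answer requests that arrive by bare IP (or with any unmatched Host header), it can be marked as the default for the port; a sketch:

```nginx
server {
    listen 80 default_server;
    server_name _;             # catch-all name, never matches a real domain
    root /var/www/mysite;
}
```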

NGINX as a Reverse Proxy

Your app runs on Node.js (port 3000), but users access port 80.

server {
    listen 80;

    location / {
        proxy_pass http://localhost:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}

  • NGINX receives client requests on port 80.
  • Any request to / is forwarded to: http://localhost:3000
  • Sends the response back to the client
  • Client never sees backend server details
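Backends often also need the full client IP chain and the original scheme; a sketch extending the block above with two more commonly used headers:

```nginx
location / {
    proxy_pass http://localhost:3000;
    proxy_set_header Host              $host;
    proxy_set_header X-Real-IP         $remote_addr;
    # Append the client IP to any existing X-Forwarded-For chain.
    proxy_set_header X-Forwarded-For   $proxy_add_x_forwarded_for;
    # Tell the backend whether the client connected over http or https.
    proxy_set_header X-Forwarded-Proto $scheme;
}
```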

NGINX as a Load Balancer

Distribute traffic across 3 backend servers.

upstream backend_servers {
    least_conn;
    server 192.168.1.10;
    server 192.168.1.11;
    server 192.168.1.12;
}

server {
    listen 80;

    location / {
        proxy_pass http://backend_servers;
    }
}

  • upstream defines the pool of backend servers.
  • least_conn sends each request to the server with the fewest active connections; without it, NGINX defaults to round-robin.
  • NGINX listens on port 80.
  • Requests are forwarded to one of: 192.168.1.10, 192.168.1.11, 192.168.1.12
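The upstream block also accepts per-server parameters; a sketch with illustrative values:

```nginx
upstream backend_servers {
    least_conn;                   # or omit for round-robin
    server 192.168.1.10 weight=3; # receives roughly 3x the traffic
    server 192.168.1.11 max_fails=3 fail_timeout=30s;  # passive health check
    server 192.168.1.12 backup;   # used only when the others are down
}
```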

NGINX Caching

# my_cache must first be defined in the http context, e.g.:
# proxy_cache_path /var/cache/nginx keys_zone=my_cache:10m;

location / {
    proxy_cache my_cache;
    proxy_pass http://backend;
}

  • Stores responses
  • Reduces backend load
  • Improves response speed
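A more complete sketch (the zone name, path, sizes, and times are illustrative): proxy_cache_path lives in the http context and controls where responses are stored and how much space they may use.

```nginx
http {
    # 10 MB of cache keys in shared memory, up to 1 GB of responses on disk.
    proxy_cache_path /var/cache/nginx keys_zone=my_cache:10m max_size=1g;

    upstream backend {
        server 192.168.1.10;
    }

    server {
        listen 80;

        location / {
            proxy_cache my_cache;
            proxy_cache_valid 200 10m;   # cache successful responses for 10 minutes
            proxy_pass http://backend;
            # Expose whether the response came from the cache (HIT/MISS/...).
            add_header X-Cache-Status $upstream_cache_status;
        }
    }
}
```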

Advantages of NGINX

  • High performance
  • Handles massive traffic
  • Low memory usage
  • Easy reverse proxy
  • Built-in load balancing
  • Excellent for microservices

When to Use NGINX

  • You expect high traffic
  • You need reverse proxy or load balancing
  • You want fast static file serving
  • You need API gateway or caching