NGINX Cookbook: Advanced Recipes for High-Performance Load Balancing

Author: Derek Dejonghe


NGINX is one of the most widely used web servers available today, in part because of its capabilities as a load balancer and reverse proxy server for HTTP and other network protocols. This revised cookbook provides easy-to-follow examples of real-world problems in application delivery. Practical recipes help you set up and use either the open source or commercial offering to solve problems in various use cases. For professionals who understand modern web architectures, such as n-tier or microservice designs, and common web protocols such as TCP and HTTP, these recipes provide proven solutions for security and software load balancing and for monitoring and maintaining NGINX's application delivery platform. You'll also explore advanced features of both NGINX and NGINX Plus, the free and licensed versions of this server.

You'll find recipes for:

• High-performance load balancing with HTTP, TCP, and UDP
• Securing access through encrypted traffic, secure links, HTTP authentication subrequests, and more
• Deploying NGINX to Google, AWS, Azure cloud, and DigitalOcean
• Installing and configuring the NGINX App Protect module
• HTTP/3 (QUIC), OpenTelemetry, and the njs module


📄 Text Preview (First 20 pages)


📄 Page 1
NGINX Cookbook: Advanced Recipes for High-Performance Load Balancing, Third Edition. Derek DeJonghe.
📄 Page 2
SYSTEM ADMINISTRATION

NGINX Cookbook

Twitter: @oreillymedia
linkedin.com/company/oreilly-media
youtube.com/oreillymedia

NGINX is one of the most widely used web servers available today, in part because of its capabilities as a load balancer and reverse proxy server for HTTP and other network protocols. This revised cookbook provides easy-to-follow examples of real-world problems in application delivery. Practical recipes help you set up and use either the open source or commercial offering to solve problems in various use cases. For professionals who understand modern web architectures such as n-tier or microservice designs and common web protocols such as TCP and HTTP, these recipes include proven solutions for security and software load balancing and for monitoring and maintaining NGINX's application delivery platform. You'll also explore advanced features of both NGINX and NGINX Plus, the free and licensed versions of this server.

You'll find recipes for:

• High-performance load balancing with HTTP, TCP, and UDP
• Securing access through encrypted traffic, secure links, HTTP authentication subrequests, and more
• Deploying NGINX to Google, AWS, and Azure Cloud Services
• NGINX Plus as a service provider in a SAML environment
• HTTP/3 (QUIC), OpenTelemetry, and the njs module

Derek DeJonghe, an Amazon Web Services Certified Professional, specializes in Linux/Unix-based systems and web applications. His background in web development, system administration, and networking makes him a valuable cloud resource. Derek focuses on infrastructure management, configuration management, and continuous integration. He also develops DevOps tools and maintains the systems, networks, and deployments of multiple multi-tenant SaaS offerings.

US $59.99 CAN $74.99
ISBN: 978-1-098-15843-9
📄 Page 3
Derek DeJonghe NGINX Cookbook Advanced Recipes for High-Performance Load Balancing THIRD EDITION
📄 Page 4
NGINX Cookbook by Derek DeJonghe

Copyright © 2024 O'Reilly Media, Inc. All rights reserved. Printed in the United States of America. Published by O'Reilly Media, Inc., 1005 Gravenstein Highway North, Sebastopol, CA 95472.

O'Reilly books may be purchased for educational, business, or sales promotional use. Online editions are also available for most titles (http://oreilly.com). For more information, contact our corporate/institutional sales department: 800-998-9938 or corporate@oreilly.com.

Acquisitions Editor: John Devins
Development Editor: Gary O'Brien
Production Editor: Clare Laylock
Copyeditor: Piper Editorial Consulting, LLC
Proofreader: Kim Cofer
Indexer: Potomac Indexing, LLC
Interior Designer: David Futato
Cover Designer: Karen Montgomery
Illustrator: Kate Dullea

November 2020: First Edition
May 2022: Second Edition
February 2024: Third Edition

Revision History for the Third Edition
2024-01-29: First Release

See http://oreilly.com/catalog/errata.csp?isbn=9781098158439 for release details.

The O'Reilly logo is a registered trademark of O'Reilly Media, Inc. NGINX Cookbook, the cover image, and related trade dress are trademarks of O'Reilly Media, Inc. The views expressed in this work are those of the author, and do not represent the publisher's views. While the publisher and the author have used good faith efforts to ensure that the information and instructions contained in this work are accurate, the publisher and the author disclaim all responsibility for errors or omissions, including without limitation responsibility for damages resulting from the use of or reliance on this work. Use of the information and instructions contained in this work is at your own risk. If any code samples or other technology this work contains or describes is subject to open source licenses or the intellectual property rights of others, it is your responsibility to ensure that your use thereof complies with such licenses and/or rights.

ISBN: 978-1-098-15843-9 [LSI]
This work is part of a collaboration between O’Reilly and NGINX. See our statement of editorial independence.
📄 Page 5
Table of Contents

Preface  vii

1. Basics  1
  1.0 Introduction  1
  1.1 Installing NGINX on Debian/Ubuntu  1
  1.2 Installing NGINX Through the YUM Package Manager  2
  1.3 Installing NGINX Plus  3
  1.4 Verifying Your Installation  3
  1.5 Key Files, Directories, and Commands  4
  1.6 Using Includes for Clean Configs  6
  1.7 Serving Static Content  7

2. High-Performance Load Balancing  9
  2.0 Introduction  9
  2.1 HTTP Load Balancing  10
  2.2 TCP Load Balancing  11
  2.3 UDP Load Balancing  13
  2.4 Load-Balancing Methods  14
  2.5 Sticky Cookie with NGINX Plus  17
  2.6 Sticky Learn with NGINX Plus  18
  2.7 Sticky Routing with NGINX Plus  19
  2.8 Connection Draining with NGINX Plus  20
  2.9 Passive Health Checks  21
  2.10 Active Health Checks with NGINX Plus  22
  2.11 Slow Start with NGINX Plus  24

3. Traffic Management  25
  3.0 Introduction  25
  3.1 A/B Testing  25
📄 Page 6
  3.2 Using the GeoIP Module and Database  27
  3.3 Restricting Access Based on Country  30
  3.4 Finding the Original Client  31
  3.5 Limiting Connections  32
  3.6 Limiting Rate  34
  3.7 Limiting Bandwidth  35

4. Massively Scalable Content Caching  37
  4.0 Introduction  37
  4.1 Caching Zones  37
  4.2 Caching Hash Keys  39
  4.3 Cache Locking  40
  4.4 Use Stale Cache  40
  4.5 Cache Bypass  41
  4.6 Cache Purging with NGINX Plus  42
  4.7 Cache Slicing  43

5. Programmability and Automation  45
  5.0 Introduction  45
  5.1 NGINX Plus API  45
  5.2 Using the Key-Value Store with NGINX Plus  49
  5.3 Using the njs Module to Expose JavaScript Functionality Within NGINX  51
  5.4 Extending NGINX with a Common Programming Language  54
  5.5 Installing with Ansible  56
  5.6 Installing with Chef  58
  5.7 Automating Configurations with Consul Templating  59

6. Authentication  63
  6.0 Introduction  63
  6.1 HTTP Basic Authentication  63
  6.2 Authentication Subrequests  65
  6.3 Validating JWTs with NGINX Plus  66
  6.4 Creating JSON Web Keys  68
  6.5 Authenticate Users via Existing OpenID Connect SSO with NGINX Plus  69
  6.6 Validate JSON Web Tokens (JWT) with NGINX Plus  70
  6.7 Automatically Obtaining and Caching JSON Web Key Sets with NGINX Plus  71
  6.8 Configuring NGINX Plus as a Service Provider for SAML Authentication  72

7. Security Controls  77
  7.0 Introduction  77
  7.1 Access Based on IP Address  77
  7.2 Allowing Cross-Origin Resource Sharing  78
📄 Page 7
  7.3 Client-Side Encryption  79
  7.4 Advanced Client-Side Encryption  81
  7.5 Upstream Encryption  83
  7.6 Securing a Location  84
  7.7 Generating a Secure Link with a Secret  84
  7.8 Securing a Location with an Expire Date  85
  7.9 Generating an Expiring Link  86
  7.10 HTTPS Redirects  88
  7.11 Redirecting to HTTPS Where SSL/TLS Is Terminated Before NGINX  89
  7.12 HTTP Strict Transport Security  89
  7.13 Restricting Access Based on Country  90
  7.14 Satisfying Any Number of Security Methods  92
  7.15 NGINX Plus Dynamic Application Layer DDoS Mitigation  92
  7.16 Installing and Configuring NGINX Plus with the NGINX App Protect WAF Module  94

8. HTTP/2 and HTTP/3 (QUIC)  99
  8.0 Introduction  99
  8.1 Enabling HTTP/2  99
  8.2 Enabling HTTP/3  100
  8.3 gRPC  102

9. Sophisticated Media Streaming  105
  9.0 Introduction  105
  9.1 Serving MP4 and FLV  105
  9.2 Streaming with HLS with NGINX Plus  106
  9.3 Streaming with HDS with NGINX Plus  107
  9.4 Bandwidth Limits with NGINX Plus  108

10. Cloud Deployments  109
  10.0 Introduction  109
  10.1 Auto-Provisioning  109
  10.2 Deploying an NGINX VM in the Cloud  111
  10.3 Creating an NGINX Machine Image  112
  10.4 Routing to NGINX Nodes Without a Cloud Native Load Balancer  113
  10.5 The Load Balancer Sandwich  115
  10.6 Load Balancing over Dynamically Scaling NGINX Servers  117
  10.7 Creating a Google App Engine Proxy  118

11. Containers/Microservices  121
  11.0 Introduction  121
  11.1 Using NGINX as an API Gateway  122
  11.2 Using DNS SRV Records with NGINX Plus  126
📄 Page 8
  11.3 Using the Official NGINX Container Image  127
  11.4 Creating an NGINX Dockerfile  128
  11.5 Building an NGINX Plus Container Image  132
  11.6 Using Environment Variables in NGINX  133
  11.7 NGINX Ingress Controller from NGINX  134

12. High-Availability Deployment Modes  137
  12.0 Introduction  137
  12.1 NGINX Plus HA Mode  137
  12.2 Load Balancing Load Balancers with DNS  140
  12.3 Load Balancing on EC2  141
  12.4 NGINX Plus Configuration Synchronization  142
  12.5 State Sharing with NGINX Plus and Zone Sync  144

13. Advanced Activity Monitoring  147
  13.0 Introduction  147
  13.1 Enable NGINX Stub Status  147
  13.2 Enabling the NGINX Plus Monitoring Dashboard  148
  13.3 Collecting Metrics Using the NGINX Plus API  150
  13.4 OpenTelemetry for NGINX  153
  13.5 Prometheus Exporter Module  157

14. Debugging and Troubleshooting with Access Logs, Error Logs, and Request Tracing  159
  14.0 Introduction  159
  14.1 Configuring Access Logs  159
  14.2 Configuring Error Logs  161
  14.3 Forwarding to Syslog  162
  14.4 Debugging Configs  163
  14.5 Request Tracing  164

15. Performance Tuning  167
  15.0 Introduction  167
  15.1 Automating Tests with Load Drivers  167
  15.2 Controlling Cache at the Browser  168
  15.3 Keeping Connections Open to Clients  169
  15.4 Keeping Connections Open Upstream  169
  15.5 Buffering Responses  170
  15.6 Buffering Access Logs  171
  15.7 OS Tuning  172

Index  175
📄 Page 9
Preface

The NGINX Cookbook aims to provide easy-to-follow examples of real-world problems in application delivery. Throughout this book, you will explore the many features of NGINX and how to use them. This guide is fairly comprehensive, and touches on most of the main capabilities of NGINX.

The book will begin by explaining the installation process of NGINX and NGINX Plus, as well as some basic getting-started steps for readers new to NGINX. From there, the sections will progress to load balancing in all forms, accompanied by chapters about traffic management, caching, and automation. Chapter 6, "Authentication", covers a lot of ground, but it is important because NGINX is often the first point of entry for web traffic to your application, and the first line of application-layer defense against web attacks and vulnerabilities. There are a number of chapters that cover cutting-edge topics such as HTTP/3 (QUIC), media streaming, cloud, SAML Auth, and container environments, wrapping up with more traditional operational topics such as monitoring, debugging, performance, and operational tips.

I personally use NGINX as a multitool, and I believe this book will enable you to do the same. It's software that I believe in and enjoy working with. I'm happy to share this knowledge with you, and I hope that as you read through this book you relate the recipes to your real-world scenarios and will employ these solutions.

Conventions Used in This Book

The following typographical conventions are used in this book:

Italic
Indicates new terms, URLs, email addresses, filenames, and file extensions.

Constant width
Used for program listings, as well as within paragraphs to refer to program elements such as variable or function names, databases, data types, environment variables, statements, and keywords.
📄 Page 10
This element signifies a general note.

This element indicates a warning or caution.

O'Reilly Online Learning

For more than 40 years, O'Reilly Media has provided technology and business training, knowledge, and insight to help companies succeed. Our unique network of experts and innovators share their knowledge and expertise through books, articles, and our online learning platform. O'Reilly's online learning platform gives you on-demand access to live training courses, in-depth learning paths, interactive coding environments, and a vast collection of text and video from O'Reilly and 200+ other publishers. For more information, visit http://oreilly.com.

How to Contact Us

Please address comments and questions concerning this book to the publisher:

O'Reilly Media, Inc.
1005 Gravenstein Highway North
Sebastopol, CA 95472
800-889-8969 (in the United States or Canada)
707-827-7019 (international or local)
707-829-0104 (fax)
support@oreilly.com
https://www.oreilly.com/about/contact.html

We have a web page for this book, where we list errata, examples, and any additional information. You can access this page at https://oreil.ly/nginx-cookbook-3e.

For news and information about our books and courses, visit https://oreilly.com.
Find us on LinkedIn: https://linkedin.com/company/oreilly-media.
Follow us on Twitter: https://twitter.com/oreillymedia.
Watch us on YouTube: https://youtube.com/oreillymedia.
📄 Page 11
CHAPTER 1
Basics

1.0 Introduction

To get started with NGINX Open Source or NGINX Plus, you first need to install it on a system and learn some basics. In this chapter, you will learn how to install NGINX, where the main configuration files are located, and what the commands are for administration. You will also learn how to verify your installation and make requests to the default server. Some of the recipes in this book will use NGINX Plus. You can get a free trial of NGINX Plus at https://nginx.com.

1.1 Installing NGINX on Debian/Ubuntu

Problem
You need to install NGINX Open Source on a Debian or Ubuntu machine.

Solution
Update package information for configured sources and install some packages that will assist in configuring the official NGINX package repository:

    $ apt update
    $ apt install -y curl gnupg2 ca-certificates lsb-release \
        debian-archive-keyring

Download and save the NGINX signing key:

    $ curl https://nginx.org/keys/nginx_signing.key | gpg --dearmor \
        | tee /usr/share/keyrings/nginx-archive-keyring.gpg >/dev/null
📄 Page 12
Use lsb_release to set variables defining the OS and release names, then create an apt source file:

    $ OS=$(lsb_release -is | tr '[:upper:]' '[:lower:]')
    $ RELEASE=$(lsb_release -cs)
    $ echo "deb [signed-by=/usr/share/keyrings/nginx-archive-keyring.gpg] \
        http://nginx.org/packages/${OS} ${RELEASE} nginx" \
        | tee /etc/apt/sources.list.d/nginx.list

Update package information once more, then install NGINX:

    $ apt update
    $ apt install -y nginx
    $ systemctl enable nginx
    $ nginx

Discussion
The commands provided in this section instruct the advanced package tool (APT) package management system to utilize the official NGINX package repository. The NGINX GPG package signing key was downloaded and saved to a location on the filesystem for use by APT. Providing APT the signing key enables the APT system to validate packages from the repository. The lsb_release command was used to automatically determine the OS and release name so that these instructions can be used across all release versions of Debian or Ubuntu. The apt update command instructs the APT system to refresh its package listings from its known repositories. After the package list is refreshed, you can install NGINX Open Source from the official NGINX repository. After you install it, the final command starts NGINX.

1.2 Installing NGINX Through the YUM Package Manager

Problem
You need to install NGINX Open Source on Red Hat Enterprise Linux (RHEL), Oracle Linux, AlmaLinux, Rocky Linux, or CentOS.

Solution
Create a file named /etc/yum.repos.d/nginx.repo that contains the following contents:

    [nginx]
    name=nginx repo
    baseurl=http://nginx.org/packages/centos/$releasever/$basearch/
    gpgcheck=0
    enabled=1
📄 Page 13
Alter the file, replacing centos in the middle of the baseurl with rhel or centos, depending on your distribution. Then, run the following commands:

    $ yum -y install nginx
    $ systemctl enable nginx
    $ systemctl start nginx
    $ firewall-cmd --permanent --zone=public --add-port=80/tcp
    $ firewall-cmd --reload

Discussion
The file you just created for this solution instructs the YUM package management system to utilize the official NGINX Open Source package repository. The commands that follow install NGINX Open Source from the official repository, instruct systemd to enable NGINX at boot time, and tell it to start NGINX now. If necessary, the firewall commands open port 80 for the transmission control protocol (TCP), which is the default port for HTTP. The last command reloads the firewall to commit the changes.

1.3 Installing NGINX Plus

Problem
You need to install NGINX Plus.

Solution
Visit the NGINX docs. Select the OS you're installing to and then follow the instructions. The instructions are similar to those of the installation of the open source solutions; however, you need to obtain a certificate and key in order to authenticate to the NGINX Plus repository.

Discussion
NGINX keeps this repository installation guide up-to-date with instructions on installing NGINX Plus. Depending on your OS and version, these instructions vary slightly, but there is one commonality. You must obtain a certificate and key from the NGINX portal, and provide them to your system, in order to authenticate to the NGINX Plus repository.

1.4 Verifying Your Installation

Problem
You want to validate the NGINX installation and check the version.
📄 Page 14
Solution
You can verify that NGINX is installed and check its version by using the following command:

    $ nginx -v
    nginx version: nginx/1.25.3

As this example shows, the response displays the version. You can confirm that NGINX is running by using the following command:

    $ ps -ef | grep nginx
    root   1738     1  0 19:54 ?  00:00:00 nginx: master process
    nginx  1739  1738  0 19:54 ?  00:00:00 nginx: worker process

The ps command lists running processes. By piping it to grep, you can search for specific words in the output. This example uses grep to search for nginx. The result shows two running processes: a master and a worker. If NGINX is running, you will always see a master and one or more worker processes. Note that the master process runs as root because, by default, NGINX needs elevated privileges in order to function properly. For instructions on starting NGINX, refer to the next recipe. To see how to start NGINX as a daemon, use the init.d or systemd methodologies.

To verify that NGINX is returning requests correctly, use your browser to make a request to your machine or use curl. When making the request, use the machine's IP address or hostname. If installed locally, you can use localhost as follows:

    $ curl localhost

You will see the NGINX Welcome default HTML site.

Discussion
The nginx command allows you to interact with the NGINX binary to check the version, list installed modules, test configurations, and send signals to the master process. NGINX must be running in order for it to serve requests. The ps command is a surefire way to determine whether NGINX is running either as a daemon or in the foreground. The configuration provided by default with NGINX runs a static-site HTTP server on port 80. You can test this default site by making an HTTP request to the machine at localhost, or by using the host's IP address or hostname.

1.5 Key Files, Directories, and Commands

Problem
You need to understand the important NGINX directories and commands.
📄 Page 15
Solution
The following configuration directories and file locations can be changed during the compilation of NGINX and therefore may vary based on your installation.

NGINX files and directories

/etc/nginx/
The /etc/nginx/ directory is the default configuration root for the NGINX server. Within this directory you will find configuration files that instruct NGINX on how to behave.

/etc/nginx/nginx.conf
The /etc/nginx/nginx.conf file is the default configuration entry point used by the NGINX daemon. This configuration file sets up global settings for things like worker processes, tuning, logging, loading dynamic modules, and references to other NGINX configuration files. In a default configuration, the /etc/nginx/nginx.conf file includes the top-level http block, or context, which includes all configuration files in the directory described next.

/etc/nginx/conf.d/
The /etc/nginx/conf.d/ directory contains the default HTTP server configuration file. Files in this directory ending in .conf are included in the top-level http block from within the /etc/nginx/nginx.conf file. It's best practice to utilize include statements and organize your configuration in this way to keep your configuration files concise. In some package repositories, this folder is named sites-enabled, and configuration files are linked from a folder named sites-available; this convention is deprecated.

/var/log/nginx/
The /var/log/nginx/ directory is the default log location for NGINX. Within this directory you will find an access.log file and an error.log file. By default, the access log contains an entry for each request NGINX serves. The error logfile contains error events and debug information if the debug module is enabled.

NGINX commands

nginx -h
Shows the NGINX help menu.

nginx -v
Shows the NGINX version.

nginx -V
Shows the NGINX version, build information, and configuration arguments, which show the modules built into the NGINX binary.
📄 Page 16
nginx -t
Tests the NGINX configuration.

nginx -T
Tests the NGINX configuration and prints the validated configuration to the screen. This command is useful when seeking support.

nginx -s signal
The -s flag sends a signal to the NGINX master process. You can send signals such as stop, quit, reload, and reopen. The stop signal discontinues the NGINX process immediately. The quit signal stops the NGINX process after it finishes processing in-flight requests. The reload signal reloads the configuration. The reopen signal instructs NGINX to reopen logfiles.

Discussion
With an understanding of these key files, directories, and commands, you're in a good position to start working with NGINX. Using this knowledge, you can alter the default configuration files and test your changes with the nginx -t command. If your test is successful, you also know how to instruct NGINX to reload its configuration using the nginx -s reload command.

1.6 Using Includes for Clean Configs

Problem
You need to clean up bulky configuration files to keep your configurations logically grouped into modular configuration sets.

Solution
Use the include directive to reference configuration files, directories, or masks:

    http {
        include conf.d/compression.conf;
        include ssl_config/*.conf;
    }

The include directive takes a single parameter of either a path to a file or a mask that matches many files. This directive is valid in any context.

Discussion
By using include statements you can keep your NGINX configuration clean and concise. You'll be able to logically group your configurations to avoid configuration files that go on for hundreds of lines. You can create modular configuration files
📄 Page 17
that can be included in multiple places throughout your configuration to avoid duplication of configurations.

Take the example fastcgi_param configuration file provided in most package-management installs of NGINX. If you manage multiple FastCGI virtual servers on a single NGINX box, you can include this configuration file for any location or context where you require these parameters for FastCGI without having to duplicate this configuration. Another example is Secure Sockets Layer (SSL) configurations. If you're running multiple servers that require similar SSL configurations, you can simply write this configuration once and include it wherever needed.

By logically grouping your configurations together, you can rest assured that your configurations are neat and organized. Changing a set of configuration files can be done by editing a single file rather than changing multiple sets of configuration blocks in multiple locations within a massive configuration file. Grouping your configurations into files and using include statements is good practice for your sanity and the sanity of your colleagues.

1.7 Serving Static Content

Problem
You need to serve static content with NGINX.

Solution
Overwrite the default HTTP server configuration located in /etc/nginx/conf.d/default.conf with the following NGINX configuration example:

    server {
        listen 80 default_server;
        server_name www.example.com;

        location / {
            root /usr/share/nginx/html;
            # alias /usr/share/nginx/html;
            index index.html index.htm;
        }
    }

Discussion
This configuration serves static files over HTTP on port 80 from the directory /usr/share/nginx/html/. The first line in this configuration defines a new server block. This defines a new context that specifies what NGINX listens for. Line two instructs NGINX to listen on port 80, and the default_server parameter instructs NGINX to
📄 Page 18
use this server as the default context for port 80. The listen directive can also take a range of ports. The server_name directive defines the hostname or the names of requests that should be directed to this server. If the configuration had not defined this context as the default_server, NGINX would direct requests to this server only if the HTTP host header matched the value provided to the server_name directive. With the default_server context set, you can omit the server_name directive if you do not yet have a domain name to use.

The location block defines a configuration based on the path in the URL. The path, or portion of the URL after the domain, is referred to as the uniform resource identifier (URI). NGINX will best match the URI requested to a location block. The example uses / to match all requests. The root directive shows NGINX where to look for static files when serving content for the given context. The URI of the request is appended to the root directive's value when looking for the requested file. If we had provided a URI prefix to the location directive, this would be included in the appended path, unless we used the alias directive rather than root. The location directive is able to match a wide range of expressions. Visit the first link in the "See Also" section for more information. Finally, the index directive provides NGINX with a default file, or list of files to check, in the event that no further path is provided in the URI.

See Also
NGINX HTTP location Directive Documentation
NGINX Request Processing
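The root versus alias behavior described in the discussion above can be made concrete with a short sketch. This is an illustrative configuration, not one from the book; the /assets/ and /static/ prefixes and filesystem paths are assumptions chosen to show the difference:

```nginx
# Sketch only: prefixes and paths below are assumed for illustration.
server {
    listen 80;

    # With root, the full URI is appended to the path: a request for
    # /assets/logo.png is served from /usr/share/nginx/html/assets/logo.png.
    location /assets/ {
        root /usr/share/nginx/html;
    }

    # With alias, the location prefix is replaced by the path: a request
    # for /static/logo.png is served from /var/www/files/logo.png.
    location /static/ {
        alias /var/www/files/;
    }
}
```

After editing, nginx -t validates the configuration and nginx -s reload applies it, as covered in Recipe 1.5.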
📄 Page 19
CHAPTER 2
High-Performance Load Balancing

2.0 Introduction

Today's internet user experience demands performance and uptime. To achieve this, multiple copies of the same system are run, and the load is distributed over them. As the load increases, another copy of the system can be brought online. This architecture technique is called horizontal scaling. Software-based infrastructure is increasing in popularity because of its flexibility, opening up a vast world of possibilities. Whether the use case is as small as a set of two system copies for high availability, or as large as thousands around the globe, there's a need for a load-balancing solution that is as dynamic as the infrastructure. NGINX fills this need in a number of ways, such as HTTP, transmission control protocol (TCP), and user datagram protocol (UDP) load balancing, which we cover in this chapter.

When balancing load, it's important that the impact to the client's experience is entirely positive. Many modern web architectures employ stateless application tiers, storing state in shared memory or databases. However, this is not the reality for all. Session state is immensely valuable and vastly used in interactive applications. This state might be stored locally to the application server for a number of reasons; for example, in applications for which the data being worked on is so large that the network overhead is too expensive in performance. When state is stored locally to an application server, it is extremely important to the user experience that subsequent requests continue to be delivered to the same server. Another facet of the situation is that servers should not be released until the session has finished. Working with stateful applications at scale requires an intelligent load balancer. NGINX offers multiple ways to solve this problem by tracking cookies or routing. This chapter covers session persistence as it pertains to load balancing with NGINX.

It's important to ensure that the application that NGINX is serving is healthy. Upstream requests may begin to fail for a number of reasons. It could be because of network connectivity, server failure, or application failure, to name a few. Proxies and load balancers must be smart enough to detect failure of upstream servers (servers behind the load balancer or proxy) and stop passing traffic to them; otherwise, the client will be waiting, only to be delivered a timeout. A way to mitigate service degradation when a server fails is to have the proxy check the health of the upstream servers. NGINX offers two different types of health checks: passive, available in NGINX Open Source; and active, available only in NGINX Plus. Active health checks at regular intervals will make a connection or request to the upstream server, and can verify that the response is correct. Passive health checks monitor the connection or responses of the upstream server as clients make the request or connection. You might want to use passive health checks to reduce the load of your upstream servers, and you might want to use active health checks to determine failure of an upstream server before a client is served a failure. The tail end of this chapter examines monitoring the health of the upstream application servers for which you're load balancing.
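The passive health checks mentioned above are configured on the upstream server entries themselves. As a minimal sketch (the hostnames and thresholds here are illustrative assumptions, not values from the book), a server can be marked unavailable for a period after a number of failed attempts:

```nginx
# Sketch of passive health checks, available in NGINX Open Source.
# Hostnames and thresholds are assumptions for illustration.
upstream backend {
    # After max_fails failed attempts within the fail_timeout window,
    # NGINX stops sending traffic to that server for fail_timeout.
    server app1.example.com:80 max_fails=3 fail_timeout=10s;
    server app2.example.com:80 max_fails=3 fail_timeout=10s;
}
```

Recipes 2.9 and 2.10 later in this chapter cover passive and active health checks in detail.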
📄 Page 20
It’s important to ensure that the application that NGINX is serving is healthy. Upstream requests may begin to fail for a number of reasons. It could be because of network connectivity, server failure, or application failure, to name a few. Proxies and load balancers must be smart enough to detect failure of upstream servers (servers behind the load balancer or proxy) and stop passing traffic to them; otherwise, the client will be waiting, only to be delivered a timeout. A way to mitigate service degradation when a server fails is to have the proxy check the health of the upstream servers. NGINX offers two different types of health checks: passive, available in NGINX Open Source; and active, available only in NGINX Plus. Active health checks at regular intervals will make a connection or request to the upstream server, and can verify that the response is correct. Passive health checks monitor the connection or responses of the upstream server as clients make the request or connection. You might want to use passive health checks to reduce the load of your upstream servers, and you might want to use active health checks to determine failure of an upstream server before a client is served a failure. The tail end of this chapter examines monitoring the health of the upstream application servers for which you’re load balancing. 2.1 HTTP Load Balancing Problem You need to distribute load between two or more HTTP servers. Solution Use NGINX’s HTTP module to load balance over HTTP servers using the upstream block: upstream backend { server 10.10.12.45:80 weight=1; server app.example.com:80 weight=2; server spare.example.com:80 backup; } server { location / { proxy_pass http://backend; } } This configuration balances load across two HTTP servers on port 80, and defines one as a backup, which is used when the two primary servers are unavailable. The optional weight parameter instructs NGINX to pass twice as many requests to the second server. 
When not used, the weight parameter defaults to 1.
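Weighted round robin, shown above, is the default balancing method. As a hedged aside (this particular configuration is illustrative, not taken from the book's recipes; hostnames are assumptions), other methods such as least connections are selected with a single directive inside the upstream block:

```nginx
# Illustrative sketch: least_conn routes each request to the server with
# the fewest active connections. Hostnames are assumed for illustration.
upstream backend {
    least_conn;
    server app1.example.com:80;
    server app2.example.com:80;
}

server {
    location / {
        proxy_pass http://backend;
    }
}
```

Recipe 2.4 covers the available load-balancing methods in more depth.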