
Diving Deep into Kubernetes Networking

Authors: Adrian Goins, Alena Prokharchyk, Murali Paluru


Networking is a critical component for the success of a Kubernetes implementation. This ebook covers Kubernetes networking from the basics to advanced topics and is designed for operators and developers alike. You'll learn about:

• The Kubernetes networking model and how it scales seamlessly.
• The abstractions that enable communication between applications in Kubernetes.
• Popular Container Network Interface (CNI) plugins for Kubernetes, such as Calico, Flannel, and Canal.
• Load balancing, DNS, and how to expose applications to the outside world.


📄 Text Preview (First 20 pages)


January 2019

TABLE OF CONTENTS

Introduction
    Goals of This Book
    How This Book is Organized
An Introduction to Networking with Docker
    Docker Networking Types
    Container-to-Container Communication
    Container Communication Between Hosts
Interlude: Netfilter and iptables rules
An Introduction to Kubernetes Networking
    Pod Networking
    Network Policy
    Container Networking Interface
Networking with Flannel
    Running Flannel with Kubernetes
    Flannel Backends
Networking with Calico
    Architecture
    Install Calico with Kubernetes
    Using BGP for Route Announcements
    Using IP-in-IP
Combining Flannel and Calico (Canal)
Load Balancers and Ingress Controllers
    The Benefits of Load Balancers
    Load Balancing in Kubernetes
Conclusion
Introduction

Kubernetes has evolved into a strategic platform for deploying and scaling applications in data centers and the cloud. It provides built-in abstractions for efficiently deploying, scaling, and managing applications. Kubernetes also addresses concerns such as storage, networking, load balancing, and multi-cloud deployments.

Networking is a critical component for the success of a Kubernetes implementation. Network components in a Kubernetes cluster control interaction at multiple layers, from communication between containers running on different hosts to exposing services to clients outside of a cluster. The requirements within each environment are different, so before we choose which solution is the most appropriate, we have to understand how networking works within Kubernetes and what benefits each solution provides.

GOALS OF THIS BOOK

This book introduces various networking concepts related to Kubernetes that an operator, developer, or decision maker might find useful. Networking is a complex topic, and even more so when it comes to a distributed system like Kubernetes. It is essential to understand the technology, the tooling, and the available choices. These choices affect an organization's ability to scale the infrastructure and the applications running on top of it. The reader is expected to have a basic understanding of containers, Kubernetes, and operating system fundamentals.

HOW THIS BOOK IS ORGANIZED

In this book, we cover Kubernetes networking from the basics to the advanced topics. We start by explaining Docker container networking, as Docker is a fundamental component of Kubernetes. We then introduce Kubernetes networking, its unique model, and how it seamlessly scales. In doing so, we explain the abstractions that enable Kubernetes to communicate effectively between applications. We touch upon the Container Network Interface (CNI) specification and how it relates to Kubernetes, and finally, we do a deep dive into some of the more popular CNI plugins for Kubernetes, such as Calico, Flannel, and Canal. We discuss load balancing, DNS, and how to expose applications to the outside world.

This book is based on the Networking Master Class online meetup that is available on YouTube.

This eBook covers Kubernetes networking concepts, but we do not intend for it to be a detailed explanation of Kubernetes in its entirety. For more information on Kubernetes, we recommend reading the Kubernetes documentation or enrolling in a training program from a CNCF-certified training provider.
An Introduction to Networking with Docker

Docker follows a unique approach to networking that is very different from the Kubernetes approach. Understanding how Docker networking works helps later in understanding the Kubernetes model, since Docker containers are the fundamental unit of deployment in Kubernetes.

DOCKER NETWORKING TYPES

When a Docker container launches, the Docker engine assigns it a network interface with an IP address, a default gateway, and other components, such as a routing table and DNS services. By default, all addresses come from the same pool, and all containers on the same host can communicate with one another. We can change this by defining the network to which the container should connect, either by creating a custom user-defined network or by using a network provider plugin.

The network providers are pluggable using drivers. We connect a Docker container to a particular network by using the --net switch when launching it. The following command launches a container from the busybox image and joins it to the host network. This container prints its IP address and then exits.

docker run --rm --net=host busybox ip addr

Docker offers five network types, each with a different capacity for communication with other network entities.

A. Host Networking: The container shares the same IP address and network namespace as that of the host. Services running inside of this container have the same network capabilities as services running directly on the host.

B. Bridge Networking: The container runs in a private network internal to the host. Communication is open to other containers in the same network. Communication with services outside of the host goes through network address translation (NAT) before exiting the host. (This is the default mode of networking when the --net option isn't specified.)

C. Custom Bridge Network: This is the same as bridge networking but uses a bridge explicitly created for this (and other) containers. An example of how to use this would be a container that runs on an exclusive "database" bridge network. Another container can have an interface on the default bridge and the database bridge, enabling it to communicate with both networks.

D. Container-Defined Networking: A container can share the address and network configuration of another container. This type enables process isolation between containers, where each container runs one service but where services can still communicate with one another on the localhost address.

E. No Networking: This option disables all networking for the container.

Host Networking

The host mode of networking allows the Docker container to share the same IP address as that of the host and disables the network isolation otherwise provided by network namespaces. The container's network stack is mapped directly to the host's network stack. All interfaces and addresses on the host are visible within the container, and all communication possible to or from the host is possible to or from the container.

If you run the command ip addr on a host (or ifconfig -a if your host doesn't have the ip command available), you will see information about the network interfaces.

[Figure: a container in host networking mode sharing the host's eth0 interface]
If you run the same command from a container using host networking, you will see the same information.
Bridge Networking

In a standard Docker installation, the Docker daemon creates a bridge on the host with the name of docker0. When a container launches, Docker then creates a virtual ethernet device for it. This device appears within the container as eth0 and on the host with a name like vethxxx, where xxx is a unique identifier for the interface. The vethxxx interface is added to the docker0 bridge, and this enables communication with other containers on the same host that also use the default bridge.

[Figure: two containers attached to the docker0 bridge via vethxxx and vethyyy, with iptables between the bridge and the host's eth0]

To demonstrate using the default bridge, run the following command on a host with Docker installed. Since we are not specifying the network, the container will connect to the default bridge when it launches. Run the ip addr and ip route commands inside of the container. You will see the IP address of the container with the eth0 interface:
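A minimal version of that command sequence (a sketch, assuming the busybox image; the / # prompt is inside the container):

$ docker run -it --rm busybox /bin/sh
/ # ip addr
/ # ip route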
In another terminal connected to the host, run the ip addr command. You will see the corresponding interface created for the container. In our example it is named veth5dd2b68@if9; yours will be different.

Although Docker mapped the container IPs on the bridge, network services running inside of the container are not visible outside of the host. To make them visible, the Docker Engine must be told when launching a container to map ports from that container to ports on the host. This process is called publishing. For example, if you want to map port 80 of a container to port 8080 on the host, then you would have to publish the port as shown in the following command:

docker run --name nginx -p 8080:80 nginx

By default, the Docker container can send traffic to any destination. The Docker daemon creates a rule within Netfilter that modifies outbound packets and changes the source address to be the address of the host itself. The Netfilter configuration allows inbound traffic via the rules that Docker creates when initially publishing the container's ports. The output included below shows the Netfilter rules created by Docker when it publishes a container's ports.
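To list those rules yourself (a sketch; the exact chains and output vary by Docker version):

$ sudo iptables -t nat -L -n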
[Figure: the NAT table within Netfilter]
Custom Bridge Network

There is no requirement to use the default bridge on the host; it's easy to create a new bridge network and attach containers to it. This provides better isolation and interoperability between containers, and custom bridge networks have better security and features than the default bridge:

• All containers in a custom bridge can communicate with the ports of other containers on that bridge. This means that you do not need to publish the ports explicitly. It also ensures that the communication between them is secure. Imagine an application in which a backend container and a database container need to communicate and where we also want to make sure that no external entity can talk to the database. We do this with a custom bridge network in which only the database container and the backend containers reside. You can explicitly expose the backend API to the rest of the world using port publishing.

• The same is true with environment variables: environment variables in a bridge network are shared by all containers on that bridge.

• Network configuration options such as MTU can differ between applications. By creating a bridge, you can configure the network to best suit the applications connected to it.

To create a custom bridge network and two containers that use it, run the following commands:

$ docker network create mynetwork
$ docker run -it --rm --name=container-a --network=mynetwork busybox /bin/sh
$ docker run -it --rm --name=container-b --network=mynetwork busybox /bin/sh

Container-Defined Network

A specialized case of custom networking is when a container joins the network of another container. This is similar to how a Pod works in Kubernetes. The following commands launch two containers that share the same network namespace and thus share the same IP address. Services running on one container can talk to services running on the other via the localhost address.

$ docker run -it --rm --name=container-a busybox /bin/sh
$ docker run -it --rm --name=container-b --network=container:container-a busybox /bin/sh

No Networking

This mode is useful when the container does not need to communicate with other containers or with the outside world. It is not assigned an IP address, and it cannot publish any ports.

$ docker run --net=none --name busybox busybox ip a
CONTAINER-TO-CONTAINER COMMUNICATION

How do two containers on the same bridge network talk to one another?

In the diagram below, two containers running on the same host connect via the docker0 bridge. If 172.17.0.6 (on the left-hand side) wants to send a request to 172.17.0.7 (the one on the right-hand side), the packets move as follows:

1. A packet leaves the container via eth0 and lands on the corresponding vethxxx interface.
2. The vethxxx interface connects to the vethyyy interface via the docker0 bridge.
3. The docker0 bridge forwards the packet to the vethyyy interface.
4. The packet moves to the eth0 interface within the destination container.

[Figure: a packet with src 172.17.0.6/16 and dest 172.17.0.7 traveling from one container's eth0 through vethxxx, across the docker0 bridge to vethyyy, and into the destination container's eth0]
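We can watch this flow in practice; a minimal command sequence (a sketch: the container names and the 172.17.0.3 address are assumptions, and your output will vary) is:

# Terminal 1: create two containers on the default bridge
$ docker run -itd --name=container-a busybox
$ docker run -itd --name=container-b busybox
$ docker inspect -f '{{.NetworkSettings.IPAddress}}' container-b
172.17.0.3
$ docker exec container-a ping 172.17.0.3

# Terminal 2: watch the ICMP traffic cross the bridge
$ sudo tcpdump -ni docker0 icmp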
We can see this in action by using ping and tcpdump, as sketched above. Create two containers and inspect their network configuration with ip addr and ip route. The default route for each container is via the eth0 interface.

Ping one container from the other, and let the command run so that we can inspect the traffic. Run tcpdump on the docker0 bridge on the host machine. You will see in the output that the traffic moves between the two containers via the docker0 bridge.

CONTAINER COMMUNICATION BETWEEN HOSTS

So far we've discussed scenarios in which containers communicate within a single host. While interesting, real-world applications require communication between containers running on different hosts.

Cross-host networking usually uses an overlay network, which builds a mesh between hosts and employs a large block of IP addresses within that mesh. The network driver tracks which addresses are on which host and shuttles packets between the hosts as necessary for inter-container communication.

Overlay networks can be encrypted or unencrypted. Unencrypted networks are acceptable for environments in which all of the hosts are within the same LAN, but because overlay networks enable communication between hosts across the Internet, consider the security requirements when choosing a network driver. If the packets traverse a network that you don't control, encryption is a better choice.

The overlay network functionality built into Docker is called Swarm. When you connect a host to a swarm, the Docker engine on each host handles communication and routing between the hosts.

Other networking drivers exist, such as IPVLAN, VxLAN, and MACVLAN, and more solutions are available for Kubernetes. For more information on pure-Docker implementations of cross-host networking (including Swarm mode and libnetwork), please refer to the documentation available at the Docker website.
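As a minimal sketch of the Swarm overlay functionality mentioned above (in a real deployment, additional hosts would join the swarm before containers attach to the network):

$ docker swarm init
$ docker network create -d overlay --attachable my-overlay
$ docker run -it --rm --network=my-overlay busybox /bin/sh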
Interlude: Netfilter and iptables rules

In the earlier section on Docker networking, we looked at how Docker handles communication between containers. On a Linux host, the component which handles this is called Netfilter, or more commonly by the command used to configure it: iptables.

Netfilter manages the rules that define network communication for the Linux kernel. These rules permit, deny, route, modify, and forward packets. It organizes these rules into tables according to their purpose.

The Filter Table

Rules in the Filter table control whether a packet is allowed or denied. Packets which are allowed are forwarded, whereas packets which are denied are either rejected or silently dropped.

The NAT Table

These rules control network address translation. They modify the source or destination address for the packet, changing how the kernel routes the packet.

The Mangle Table

The headers of packets which go through this table are altered, changing the way the packet behaves. Netfilter might shorten the TTL, redirect the packet to a different address, or change the number of network hops.

The Raw Table

This table marks packets to bypass the iptables stateful connection tracking.

The Security Table

This table sets the SELinux security context marks on packets. Setting the marks affects how SELinux (or systems that can interpret SELinux security contexts) handles the packets. The rules in this table set marks on a per-packet or per-connection basis.

Netfilter organizes the rules in a table into chains. Chains are the means by which the Netfilter hooks in the kernel intercept packets as they move through processing. Packets flow through one or more chains and exit when they match a rule.

A rule defines a set of conditions, and if the packet matches those conditions, an action is taken. The universe of actions is diverse, but examples include:

• Block all connections originating from a specific IP address.
• Block connections to a network interface.
• Allow all HTTP/HTTPS connections.
• Block connections to specific ports.

The action that a rule takes is called a target, and represents the decision to accept, drop, or forward the packet.

The system comes with five default chains that match different phases of a packet's journey through processing: PREROUTING, INPUT, FORWARD, OUTPUT, and POSTROUTING. Users and programs may create additional chains and inject rules into the system chains to forward packets to a custom chain for continued processing. This architecture allows the Netfilter configuration to follow a logical structure, with chains representing groups of related rules.

Docker creates several chains, and it is the actions of these chains that handle communication between containers, the host, and the outside world.
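As an illustration of chains, rules, and targets (a sketch, not the chains Docker itself creates), the following creates a custom chain, sends all INPUT traffic through it, and drops packets from one address:

$ sudo iptables -N MY-CHAIN
$ sudo iptables -A INPUT -j MY-CHAIN
$ sudo iptables -A MY-CHAIN -s 203.0.113.7 -j DROP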
An Introduction to Kubernetes Networking

Kubernetes networking builds on top of the Docker and Netfilter constructs to tie multiple components together into applications. Kubernetes resources have specific names and capabilities, and we want to understand those before exploring their inner workings.

Pods

The smallest unit of deployment in a Kubernetes cluster is the Pod, and all of the constructs related to scheduling and orchestration assist in the deployment and management of Pods. In the simplest definition, a Pod encapsulates one or more containers. Containers in the same Pod always run on the same host. They share resources such as the network namespace and storage.

Each Pod has a routable IP address assigned to it, not to the containers running within it. Having a shared network space for all containers means that the containers inside can communicate with one another over the localhost address, a feature not present in traditional Docker networking.

The most common use of a Pod is to run a single container. Situations where different processes work on the same shared resource, such as content in a storage volume, benefit from having multiple containers in a single Pod. Some projects inject containers into running Pods to deliver a service. An example of this is the Istio service mesh, which uses this injected container as a proxy for all communication.

Because a Pod is the basic unit of deployment, we can map it to a single instance of an application. For example, a three-tier application that runs a user interface (UI), a backend, and a database would model the deployment of the application on Kubernetes with three Pods. If one tier of the application needed to scale, the number of Pods in that tier could scale accordingly.

[Figure: a multi-container Pod in which a File Puller and a Content Manager feed a shared Volume read by a Web Server that serves Consumers]
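As a sketch of this multi-container pattern (the names and images are illustrative, not from the book), here is a two-container Pod in which a helper reaches the web server over the Pod's shared localhost:

apiVersion: v1
kind: Pod
metadata:
  name: web-with-helper
spec:
  containers:
  - name: web-server
    image: nginx          # serves on port 80 inside the shared network namespace
  - name: content-checker # hypothetical helper container
    image: busybox
    command: ["sh", "-c", "while true; do wget -qO- http://localhost:80 > /dev/null; sleep 60; done"]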
Workloads

Production applications with users run more than one instance of the application. This enables fault tolerance, where if one instance goes down, another handles the traffic so that users don't experience a disruption to the service. In a traditional model that doesn't use Kubernetes, these types of deployments require that an external person or software monitors the application and acts accordingly.

Kubernetes recognizes that an application might have unique requirements. Does it need to run on every host? Does it need to handle state to avoid data corruption? Can all of its pieces run anywhere, or do they need special scheduling consideration? To accommodate those situations where a default structure won't give the best results, Kubernetes provides abstractions for different workload types.

REPLICASET

The ReplicaSet maintains the desired number of copies of a Pod running within the cluster. If a Pod or the host on which it's running fails, Kubernetes launches a replacement. In all cases, Kubernetes works to maintain the desired state of the ReplicaSet.

DEPLOYMENT

A Deployment manages a ReplicaSet. Although it's possible to launch a ReplicaSet directly or to use a ReplicationController, the use of a Deployment gives more control over the rollout strategies of the Pods that the ReplicaSet controller manages. By defining the desired states of Pods through a Deployment, users can perform updates to the image running within the containers and maintain the ability to perform rollbacks.

DAEMONSET

A DaemonSet runs one copy of the Pod on each node in the Kubernetes cluster. This workload model provides the flexibility to run daemon processes such as log management, monitoring, storage providers, or network providers that handle Pod networking for the cluster.

STATEFULSET

A StatefulSet controller ensures that the Pods it manages have durable storage and persistent identity. StatefulSets are appropriate for situations where Pods have a similar definition but need a unique identity, ordered deployment and scaling, and storage that persists across Pod rescheduling.

POD NETWORKING

The Pod is the smallest unit in Kubernetes, so it is essential to first understand Kubernetes networking in the context of communication between Pods. Because a Pod can hold more than one container, we can start with a look at how communication happens between containers in a Pod. Although Kubernetes can use Docker for the underlying container runtime, its approach to networking differs slightly and imposes some basic principles:

• Any Pod can communicate with any other Pod without the use of network address translation (NAT). To facilitate this, Kubernetes assigns each Pod an IP address that is routable within the cluster.
• A node can communicate with a Pod without the use of NAT.
• A Pod's awareness of its address is the same as how other resources see the address. The host's address doesn't mask it.

These principles give a unique and first-class identity to every Pod in the cluster. Because of this, the networking model is more straightforward and does not need to include port mapping for the running container workloads. By keeping the model simple, migrations into a Kubernetes cluster require fewer changes to the container and how it communicates.
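A minimal sketch tying these ideas together (the names are illustrative, not from the book): a Deployment that keeps three nginx replicas running, each of which receives its own routable Pod IP:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: nginx
        image: nginx:1.15  # changing this image triggers a rolling update

$ kubectl get pods -o wide

The IP column in the output shows the cluster-routable address assigned to each Pod.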
The Pause Container

A piece of infrastructure that enables many networking features in Kubernetes is known as the pause container. This container runs alongside the containers defined in a Pod and is responsible for providing the network namespace that the other containers share. It is analogous to joining the network of another container, as described in the Container-Defined Network section above.

The pause container was initially designed to act as the init process within a PID namespace shared by all containers in the Pod. It performed the function of reaping zombie processes when a container died. PID namespace sharing is now disabled by default, so unless it has been explicitly enabled in the kubelet, all containers run their process as PID 1.

If we launch a Pod running Nginx, we can inspect the Docker container running within the Pod. When we do so, we see that the container does not have the network settings provided to it. The pause container which runs as part of the Pod is the one which gives the networking constructs to the Pod.

Note: Run the commands below on the host where the nginx Pod is scheduled.
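A rough equivalent of those commands (a sketch, assuming a Docker-based node; the container IDs in angle brackets are placeholders):

$ docker ps | grep nginx
$ docker inspect --format '{{.HostConfig.NetworkMode}}' <nginx-container-id>
container:<pause-container-id>

The NetworkMode output shows that the nginx container joins the network namespace of the pause container rather than owning one itself.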
Intra-Pod Communication

Kubernetes follows the IP-per-Pod model where it assigns a routable IP address to the Pod. The containers within the Pod share the same network space and communicate with one another over localhost. Like processes running on a host, two containers cannot each use the same network port, but we can work around this by changing the manifest.

Inter-Pod Communication

Because it assigns routable IP addresses to each Pod, and because it requires that all resources see the address of a Pod the same way, Kubernetes assumes that all Pods communicate with one another via their assigned addresses. Doing so removes the need for an external service discovery mechanism.

Kubernetes Service

Pods are ephemeral. The services that they provide may be critical, but because Kubernetes can terminate Pods at any time, they are unreliable endpoints for direct communication. For example, the number of Pods in a ReplicaSet might change as the Deployment scales it up or down to accommodate changes in load on the application, and it is unrealistic to expect every client to track these changes while communicating with the Pods.

Instead, Kubernetes offers the Service resource, which provides a stable IP address and balances traffic across all of the Pods behind it. This abstraction brings stability and a reliable mechanism for communication between microservices.

Services which sit in front of Pods use a selector and labels to find the Pods they manage. All Pods with a label that matches the selector receive traffic through the Service. Like a traditional load balancer, the Service can expose the Pod functionality at any port, irrespective of the port in use by the Pods themselves.

KUBE-PROXY

The kube-proxy daemon that runs on all nodes of the cluster allows the Service to map traffic from one port to another. This component configures the Netfilter rules on all of the nodes according to the Service's definition in the API server. From Kubernetes 1.9 onward it can also use the netlink interface to create IPVS rules. These rules direct traffic to the appropriate Pod.

KUBERNETES SERVICE TYPES

A service definition specifies the type of Service to deploy, with each type of Service having a different set of capabilities.

ClusterIP

This type of Service is the default and exists on an IP that is only visible within the cluster. It enables cluster resources to reach one another via a known address while maintaining the security boundaries of the cluster itself. For example, a database used by a backend application does not need to be visible outside of the cluster, so using a Service of type ClusterIP is appropriate. The backend application would expose an API for interacting with records in the database, and a frontend application or remote clients would consume that API.

NodePort

A Service of type NodePort exposes the same port on every node of the cluster. The range of available ports is a cluster-level configuration item, and the Service can either choose one of the ports at random or have one designated in its configuration. This type of Service automatically creates a ClusterIP Service as its target, and the ClusterIP Service routes traffic to the Pods. External load balancers frequently use NodePort Services. They receive traffic for a specific site or address and forward it to the cluster on that specific port.

LoadBalancer

When working with a cloud provider for whom support exists within Kubernetes, a Service of type LoadBalancer creates a load balancer in that provider's infrastructure. The exact details of how this happens differ between providers, but all create the load balancer asynchronously and configure it to proxy the request to the corresponding Pods via NodePort and ClusterIP Services that it also creates. In a later section, we explore Ingress Controllers and how to use them to deliver a load balancing solution for a cluster.
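As a sketch, a NodePort Service (the names are illustrative; the selector matches the hypothetical Deployment labels shown earlier):

apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: NodePort
  selector:
    app: web
  ports:
  - port: 80        # the port of the Service (and its ClusterIP)
    targetPort: 80  # the port the Pods listen on
    nodePort: 30080 # exposed on every node; must fall within the cluster's NodePort range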
DNS

As we stated above, Pods are ephemeral, and because of this, their IP addresses are not reliable endpoints for communication. Although Services solve this by providing a stable address in front of a group of Pods, consumers of the Service still want to avoid using an IP address. Kubernetes solves this by using DNS for service discovery.

The default internal domain name for a cluster is cluster.local. When you create a Service, it assembles a subdomain of namespace.svc.cluster.local (where namespace is the namespace in which the service is running) and sets its name as the hostname. For example, if the service was named nginx and ran in the default namespace, consumers of the service would be able to reach it as nginx.default.svc.cluster.local. If the service's IP changes, the hostname remains the same. There is no interruption of service.

The default DNS provider for Kubernetes is KubeDNS, but it's a pluggable component. Beginning with Kubernetes 1.11, CoreDNS is available as an alternative. In addition to providing the same basic DNS functionality within the cluster, CoreDNS supports a wide range of plugins to activate additional functionality.

NETWORK POLICY

In an enterprise deployment of Kubernetes, the cluster often supports multiple projects with different goals. Each of these projects has different workloads, and each of these might require a different security policy.

Pods, by default, do not filter incoming traffic. There are no firewall rules for inter-Pod communication. Instead, this responsibility falls to the NetworkPolicy resource, which uses a specification to define the network rules applied to a set of Pods. The network policies are defined in Kubernetes, but the CNI plugins that support network policy implementation do the actual configuration and processing. In a later section, we look at CNI plugins and how they work.

The diagram below shows a standard three-tier application with a UI, a backend service, and a database, all deployed within a Kubernetes cluster. Requests to the application arrive at the web Pods, which then initiate a request to the backend Pods for data. The backend Pods process the request and perform CRUD operations against the database Pods.

If the cluster is not using a network policy, any Pod can talk to any other Pod. Nothing prevents the web Pods from communicating directly with the database Pods. If the security requirements of the cluster dictate a need for clear separation between tiers, a network policy enforces it.

[Figure: web, backend, and database Pods with unrestricted communication between all tiers]
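One way to verify the name from inside the cluster (a sketch, assuming a Service named nginx in the default namespace):

$ kubectl run -it --rm dns-test --image=busybox --restart=Never -- nslookup nginx.default.svc.cluster.local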
The policy defined below states that the database Pods can only receive traffic from the Pods with the labels app=myapp and role=backend. It also defines that the backend Pods can only receive traffic from Pods with the labels app=myapp and role=web.

kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: backend-access-ingress
spec:
  podSelector:
    matchLabels:
      app: myapp
      role: backend
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: myapp
          role: web

kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: db-access-ingress
spec:
  podSelector:
    matchLabels:
      app: myapp
      role: db
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: myapp
          role: backend
[Figure: the same three tiers with traffic now allowed only from web to backend and from backend to database]

With this network policy in place, Kubernetes blocks communication between the web and database tiers.

How a Network Policy Works

In addition to the fields used by all Kubernetes manifests, the specification of the NetworkPolicy resource requires some extra fields.

PODSELECTOR

This field tells Kubernetes how to find the Pods to which this policy applies. Multiple network policies can select the same set of Pods, and the ingress rules are applied sequentially. The field is not optional, but if the manifest defines a key with no value, it applies to all Pods in the namespace.

POLICYTYPES

This field defines the direction of network traffic to which the rules apply. If missing, Kubernetes interprets the rules and only applies them to ingress traffic unless egress rules also appear in the rules list. This default interpretation simplifies the manifest's definition by having it adapt to the rules defined later. Because Kubernetes always defines an ingress policy if this field is unset, a network policy for egress-only rules must explicitly define the policyType of Egress.
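For instance (a sketch), a policy with an empty podSelector selects every Pod in its namespace; combined with an explicit Ingress policyType and no ingress rules, it denies all incoming traffic to those Pods:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
spec:
  podSelector: {}  # empty selector: applies to all Pods in the namespace
  policyTypes:
  - Ingress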
EGRESS

Rules defined under this field apply to egress traffic from the selected Pods to destinations defined in the rule. Destinations can be an IP block (ipBlock), one or more Pods (podSelector), one or more namespaces (namespaceSelector), or a combination of podSelector and namespaceSelector.

The following rule permits traffic from the Pods to any address in 10.0.0.0/24 and only on TCP port 5978:

egress:
- to:
  - ipBlock:
      cidr: 10.0.0.0/24
  ports:
  - protocol: TCP
    port: 5978

The next rule permits outbound traffic for Pods with the labels app=myapp and role=backend to any host on TCP or UDP port 53:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: db-egress-denyall
spec:
  podSelector:
    matchLabels:
      app: myapp
      role: backend
  policyTypes:
  - Egress
  egress:
  - ports:
    - port: 53
      protocol: UDP
    - port: 53
      protocol: TCP

Egress rules work best to limit a resource's communication to the other resources on which it relies. If those resources are in a specific block of IP addresses, use the ipBlock selector to target them, specifying the appropriate ports:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: db-egress-denyall
spec:
  podSelector:
    matchLabels:
      app: myapp
      role: backend
  policyTypes:
  - Egress
  egress:
  - ports:
    - port: 53
      protocol: UDP
    - port: 53
      protocol: TCP
  - to:
    - ipBlock:
        cidr: 10.0.0.0/24
    ports:
    - protocol: TCP
      port: 3306
The above is a preview of the first 20 pages.
