Docker for Developers

Rafael Gomes

This book is for sale at http://leanpub.com/docker-for-developers

This version was published on 2020-07-09

* * * * *

This is a Leanpub book. Leanpub empowers authors and publishers with the Lean Publishing process. Lean Publishing is the act of publishing an in-progress ebook using lightweight tools and many iterations to get reader feedback, pivot until you have the right book and build traction once you do.

* * * * *
This work is licensed under a Creative Commons Attribution 4.0 International License.
Table of Contents

Preface
  Best regards,
How to read this book
Acknowledgements
Introduction
Why use Docker?
What is Docker?
Set up
  Setting up on GNU/Linux
  Setting up on MacOS
  Setting up on Windows
Basic commands
  Running a container
  Checking the list of containers
  Managing containers
Creating your own image on Docker
Understanding storage on Docker
Understanding the network on Docker
Using Docker in multiple environments
Managing multiple Docker containers with Docker Compose
How to use Docker without GNU/Linux
Turning your application into a container
  Codebase
  Dependencies
  Config
  Backing services
  Build, release, run
  Processes
  Port binding
  Concurrency
  Disposability
  Development/production parity
  Logs
  Admin processes
Tips for using Docker
  Tips for running
  Best practices to build images
Appendix
  Container or virtual machine?
  Useful commands
  Can I run GUI applications?
  Are you linting your Dockerfile? You should…
    How can I do that?
    My process is manually done, how do I do that?
    My process is automated, how do I do that?
    Understanding the lint result
  Dockerhub
Preface

In software development it is common to establish good practices and standards. For web applications especially, concepts and practices such as DevOps, cloud infrastructure, Phoenix servers, immutable infrastructure and 12-factor apps are widely accepted ideas that improve the productivity and maintainability of systems. While these concepts are not new, there are many tools and systems that can help to implement them. Docker, however, is one of the first and most talked-about tools and platforms to combine many of these concepts in a cohesive and simple way. As with any tool, Docker is an investment that provides the best return when you understand its purpose and how to use it properly.

There are several presentations, papers and documents about Docker. What was missing was a book connecting the theory to the practice of the tool, in which the reader could understand the motivations behind Docker and also how to organize an application in order to get the best out of the tool.

I am very pleased that Rafael wrote this book, which I believe is an important contribution to our field. Rafael is extremely engaged in the Docker and DevOps communities in Brazil, and understands what people seek in terms of knowledge on this subject.

In this book you will be able to learn the basics of Docker in simple language and with many practical examples. I hope this publication becomes one more step to boost your journey. I wish you success and all the best.

Best regards,

Luís Armando Bianchin
How to read this book

This material is divided into two big parts. The first one covers the most basic points of Docker: exactly the minimum a developer needs to know to use this technology properly, that is, knowing exactly what happens when executing each command. In this first part, we try not to approach the “low level” issues of Docker, because they are more appealing to the infrastructure team.

In case you don’t know anything about Docker, we strongly advise you to read this first part, so that you can then go through the next part, which focuses on building a web application on Docker following the best practices, without pauses. In this book we use the practices from 12factor. The 12 factors will be detailed at the beginning of the second part, but we can say that we consider them the “12 commandments for web applications on Docker”: once your application follows all the good practices presented in this document, you will most likely be using Docker at its full potential.

This second part is divided by each good practice of 12factor. Therefore, we present sample code in the first chapter that will evolve as the book develops. The idea is that you can practice with real code, thus absorbing the content in a practical way. We have also put together some appendices with important extra subjects that don’t fit into the other chapters.
Acknowledgements

My first thanks go to the person who gave me the chance of being here and being able to write this book: my mother. The famous Cigana, or Dona Arlete, a wonderful person and a role model. I also want to thank my second mother, Dona Maria, who took so much care of me when I was a kid while Dona Arlete was taking care of her two other kids and a nephew. I feel lucky for having two moms while many don’t have one.

I take this chance to thank the person who introduced Docker to me, Robinho, also known as Robson Peixoto. In a conversation during the Linguágil meeting, in Salvador, Bahia, he told me: “Study Docker!” And here I am finishing a book that transformed my life. I truly thank you, Robinho!

Thanks to Luís Armando Bianchin, who started to write along with me but was not able to go on for other reasons. I’m very grateful, for your constant feedback kept me writing this book.

Thanks to Paulo Caroli, who encouraged me to write the book and introduced me to the Leanpub platform. If it wasn’t for him, this book would not be here so quickly.

Thanks to the amazing Emma Pinheiro for the beautiful cover. I also want to deeply thank the incredible people from Raul Hacker Club, who have strongly encouraged me this whole time.

Thanks to the mother of my son, Eriane Soares, who is an amazing friend and encouraged me to write the book while we were still living together!

As with every open knowledge product, this book wouldn’t be possible without the help of the vibrant Docker Brazil community. I will highlight the effort
of some members who read several chapters many times, dedicated their precious time and suggested improvements:

Gjuniioor gjuniioor@protonmail.ch
Marco Antonio Martins Junior – wrote the chapters “Can I run GUI applications?” and “Useful commands”
Jorge Flávio Costa
Glesio Paiva
Bruno Emanuel Silva
George Moura
Felipe de Morais
Waldemar Neto
Igor Garcia
Diogo Fernandes

I have possibly forgotten to mention some people here, but as I recover my logs I will update the list.
Introduction

This part of the book is for those who don’t have any basic knowledge of Docker. In case you do, don’t be shy and jump to the next part. However, even if you know Docker, we present explanations of several available resources and how they work. Even if you are a regular Docker user, reading this part at some point can be important in order to know more about what happens with every executed command.
Why use Docker?

Docker has been a much-discussed subject lately. Many articles have been written about it, usually covering how to use it, auxiliary tools, integrations and the like. But many people still ask the most basic question when facing the possibility of using any new technology: “Why should I use this?” Or: “What does this offer me that is different from what I have today?”

It is natural that people still doubt Docker’s potential; some even think it is just hype. But in this chapter we intend to show some good reasons to use Docker. It’s important to highlight that Docker is not a “silver bullet”: it is not intended to solve all problems, much less to be the only solution for every situation.

Here are some good reasons for using Docker:

1 – Similar environments
Once your application is turned into a Docker image, it can be instantiated as a container in any environment you wish. That means you can run the application on the developer’s notebook just as it runs on the production server.

The Docker image accepts parameters at container start, which means the same image can behave differently in distinct environments. A container can connect to its local database for testing, using the test credentials and the test database. But when a container created from the same image receives parameters from the production environment, it will access the database of a more robust infrastructure, with its respective production credentials and database, for instance.

Docker images can be considered atomic deployments, which provides more predictability compared to other tools such as Puppet, Chef, Ansible etc. This positively impacts error analysis, as well as the reliability of the continuous delivery process, which is strongly based on the creation of a single artefact that migrates between environments. In the case of Docker, the artefact is the image itself, with all the dependencies required to execute its code, whether compiled or dynamic.

2 – Application as a whole package

Using Docker images makes it possible to package your whole application along with its dependencies, which makes distribution easy: it won’t be necessary to send extensive documentation explaining how to configure the required infrastructure. Just make the image available in a repository and grant access to the user, who can then download the build and execute it with no problems.

Updating is also positively affected: Docker’s layer structure allows that, in case of change, only the altered part is transferred, so the environment can be updated faster and more simply. The user only needs to execute one command to update the application image, and this is reflected in the running container at the desired moment. Docker images can hold tags, making it possible to store multiple versions of the same application. That means that, if there’s a problem with an update, the backup plan is basically to use the image with the previous tag.
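As a small sketch of these two ideas, the commands below start the same hypothetical image with environment-specific parameters and roll back to a previous tag; the image name myapp and all the variable values are assumptions for illustration only:

# Same image, test parameters
docker run -d -e DB_HOST=localhost -e DB_USER=test myapp:1.1

# Same image, production parameters; behaviour changes, the artefact does not
docker run -d -e DB_HOST=db.example.com -e DB_USER=prod myapp:1.1

# If version 1.1 misbehaves, the rollback is simply the previous tag
docker run -d -e DB_HOST=db.example.com -e DB_USER=prod myapp:1.0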
3 – Standardization and replication

As Docker images are built from definition files, it is possible to guarantee that a given pattern will be followed, increasing confidence in replication. The images just need to follow the best practices of image building (https://docs.docker.com/engine/userguide/eng-image/dockerfile_best-practices/) so that it is viable to scale the structure quickly.

If a new member enters the team to work on development, he or she can get the work environment with a few commands. This process takes roughly the time of downloading the images that are going to be used, as well as the definition files to manage them. This helps the introduction of a new member to the process of developing an application, who will be able to quickly reproduce the environment on his or her own station, thus writing code according to the team’s standard.

If there’s a need to test a new version of a certain part of the solution, with Docker images it’s usually only necessary to change one or more parameters of the definition file in order to start a modified environment with the requested version to evaluate. That is: creating and modifying the infrastructure became easier and faster.

4 – Common language between infrastructure and development

The syntax used to parameterize Docker images and environments can be considered a common language between areas that usually don’t dialogue well. Now it is possible for both sectors to make proposals and counter-proposals based on a common document. The required infrastructure will be present in the developer’s code, and the infrastructure team will be able to analyse the document, suggesting changes to bring it in sync with the sector’s standards or not. All that through comments and the acceptance of merges or pull requests in the code version control system.
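To make the idea of a definition file concrete, here is a minimal, hypothetical Dockerfile for a Python web application; every file name, version and port in it is an assumption for illustration, not something prescribed by the book:

# Base image: a lean distribution with Python preinstalled
FROM python:3.8-slim

# Directory inside the image where the application will live
WORKDIR /app

# Copy the dependency list first, so this layer is reused when only code changes
COPY requirements.txt .
RUN pip install -r requirements.txt

# Copy the application code itself
COPY . .

# Port on which the service will be provided
EXPOSE 8080

# Process started when a container is created from this image
CMD ["python", "app.py"]

A new team member would only need this file and a single docker build command to reproduce the same environment, and the same file serves as the common document that both development and infrastructure can review.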
5 – Community

Just as it is possible to search GitHub or GitLab for code samples, using Docker’s image repository makes it possible to get good models of application infrastructure or of services for complex integrations. For example: nginx as a reverse proxy, and mysql as a database. If the application needs these two resources, you don’t need to waste time installing and setting up these services. Just use the images from the repository, setting the minimum parameters needed to suit the environment. Usually the official images follow the good practices for using the services they offer.

Using these images doesn’t mean being “held hostage” to their configuration, because it is possible to send your own configuration to these environments and override the basic installation.
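As a sketch of this reuse, the hypothetical commands below start both services from their official images, overriding only what is needed; the password and the configuration path are placeholders:

# Official mysql image, configured with the minimum required parameter
docker run -d --name db -e MYSQL_ROOT_PASSWORD=secret mysql:5.7

# Official nginx image acting as a reverse proxy, with our own
# configuration file mounted over the default one
docker run -d --name proxy -p 80:80 \
  -v /path/to/nginx.conf:/etc/nginx/nginx.conf:ro nginx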
Questions

Some people sent questions regarding the advantages presented in this text. So, instead of answering them one by one, we decided to publish the questions and their answers here.

What is the difference between a Docker image and definitions created by an infrastructure automation tool?

Examples of infrastructure automation tools are Puppet, Ansible and Chef. They can guarantee similar environments, since their job is to keep a given configuration on the desired asset.

The difference between the Docker solution and configuration management may seem very thin, for both can support the necessary configuration of all the infrastructure that an application demands to be deployed, but we think that one of the most relevant differences lies in the following fact: the image is a complete abstraction and doesn’t require any special treatment to deal with the varied GNU/Linux distributions that exist, since the Docker image ships with a full copy of the files of a lean distribution.

Carrying a copy of a GNU/Linux distribution is usually not a problem for Docker, because the layer model saves a lot of resources by reusing the base layers. Read this article to know more about Docker storage.

Another advantage of the image over configuration management is that, using the image, it is possible to make the complete application package available in a repository, and this “final product” can easily be used without a complete configuration step. Just one configuration file and one command can be enough to start an application built as a Docker image. Still on the subject of the Docker image as a product in the repository: it can also be used in the process of updating the application, as we explained previously in this chapter.

Isn’t using a base image of a given distribution on Docker the same as creating a configuration management definition for a distribution?

No! The difference is in the host perspective. With Docker, it doesn’t matter which GNU/Linux distribution is used on the host, since part of the image carries all the files of a mini-distribution, which will be sufficient to support everything the application needs. If your Docker host runs Fedora and the application needs files from Debian, don’t worry: the image will bring Debian files to support the environment. As said previously, this usually doesn’t negatively affect disk space consumption.

Does it mean that I, as a developer, have to worry about everything in the infrastructure?

No! When we say that it is possible for the developer to specify the infrastructure, we are talking about the layer closest to the application, and not the whole required architecture (base operating system, firewall rules, network rules etc.). The idea with Docker is that the subjects directly relevant to the application can be configured by the developer. This does not oblige him or her to perform this activity. It is a possibility that pleases many developers, but if it is not your case you can relax: another team will deal with this part, and the deployment process will just get a little slower.

Many people refer to Docker as being for microservices. Is it possible to use Docker for monolithic applications?
Yes! However, in some cases minor changes to the application are required so it can enjoy the facilities of Docker. A common example is the log that the application usually writes to a given file. In the Docker model, applications in containers should not try to write to or manage log files. Instead, each running process writes its own event stream, unbuffered, to stdout, because Docker has specific drivers to treat logs sent this way. The subject of best practices in log management will be approached in later chapters.
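As a minimal sketch of this logging model, assuming a hypothetical image myapp whose process prints to stdout:

# The process writes its event stream to stdout; Docker captures it
docker run -d --name app myapp

# The captured stream is then read through Docker’s logging driver
docker logs --follow app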
At some point you may realize that using Docker for your application demands a lot of effort. In those cases, the problem usually lies in how the application works, and not in the Docker configuration. Be aware of that.

Do you have more questions and/or good reasons for using Docker? Leave your comment here.

What is Docker?

In a very summarised way, we can say that Docker is an open platform, created with the goal of facilitating the development, deployment and execution of applications in isolated environments. It was designed especially to make an application available as fast as possible.

Using Docker, you can easily manage the application’s infrastructure, speeding up the processes of creating, maintaining and modifying your service. This happens without the need for any privileged access to the corporate infrastructure; therefore, the team responsible for the application can take part in the environment specification along with the team responsible for the servers.

Docker provides a common “language” between developers and server administrators. This new “language” is used to build files with the definitions of the required infrastructure and of how the application will be arranged in this environment: which port will provide the service, which data from external volumes will be requested, and other possible settings. Docker also provides a public cloud to share ready-made environments, which can be customised for specific needs. It is possible to get a ready image of apache and configure the specific
modules the application needs, thus creating your own customised environment. All with a few lines of code.

Docker uses the container model to “package” the application which, after being transformed into a Docker image, can be reproduced on a platform of any size; that is, if the application runs flawlessly on your notebook, it will behave the same on the server or on a mainframe. Build once, and execute wherever you want.

Containers are isolated at the disk, memory, processing and network levels. This separation provides great flexibility, in which distinct environments can coexist on the same host without any issues. It is worth highlighting that the overhead in this process is the minimum necessary, because each container usually carries only one process, which is responsible for delivering the desired service. In any case, this container also carries every file needed (configuration, libraries and related files) for a completely isolated execution.

Another interesting point of Docker is the speed with which the desired environment becomes available: as it is basically the start of a process, and not of a whole operating system, the availability time is usually counted in seconds.
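As a small illustration of this, a sketch assuming a working local Docker installation: timing a throwaway container shows that what starts is a process, not an operating system:

# Start a container, print a message and remove it; this usually
# takes on the order of seconds (less once the image is cached locally)
time docker run --rm alpine echo "hello from a container"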
Virtualization at the operating system level

The isolation model used in Docker is virtualization at the operating system level: a virtualization method in which the kernel of the operating system allows multiple processes to be executed separately on the same host. These running isolated processes are called containers.

To create the required isolation between processes, Docker uses a kernel feature called namespaces, which creates isolated environments between containers: the processes of one running application will not have access to the resources of another, unless it is expressly enabled in the configuration of each environment.

To avoid the exhaustion of machine resources by a single isolated environment, Docker uses the cgroups feature of the kernel. This makes possible the coexistence of different containers on the same host, without one affecting the other through the overuse of shared resources.
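As a sketch of how these kernel features surface in day-to-day use, the following hypothetical command caps the resources of a container through cgroups; the image name myapp is a placeholder:

# cgroups enforce these limits: at most 512 MB of RAM and one CPU
docker run -d --memory 512m --cpus 1 myapp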
Set up

Docker stopped being just one piece of software and turned into a set of software: an ecosystem. In this ecosystem we have the following tools:

Docker Engine: the base software of the solution. It is both the daemon responsible for the containers and the client used to send commands to the daemon.

Docker Compose: the tool responsible for defining and executing multiple containers based on definition files.

Docker Machine: the tool that makes it possible to create and maintain Docker environments in virtual machines, cloud environments and even on physical machines.

We are not mentioning Swarm and other tools because they are not aligned with the goal of this book: an introduction for developers.

Setting up on GNU/Linux

We will explain the set up in the most comprehensive way possible, so that you can install the tools on whatever GNU/Linux distribution you are using.

Docker Engine on GNU/Linux

Setting up Docker Engine is simple. Access your GNU/Linux terminal of choice and become the root user:

su - root

or, in case you use sudo:

sudo su - root

Then execute the following command:

wget -qO- https://get.docker.com/ | sh
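After the script finishes, a common sanity check, not prescribed by the text itself, is to confirm that the daemon answers and can run a container:

# Show the versions of client and daemon; both answering means the setup worked
docker version

# Run a tiny test container that prints a message and exits
docker run hello-world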