The Docker role
As you remember from the previous chapters, Docker utilizes the concept of containerization: you put your application (in this context, the application will be a microservice), no matter what language and technology it uses, into a single deployable and runnable piece of software, called an image. We are going to cover the process of packaging a Java application into an image in detail in Chapter 4, Creating Java Microservices. The Docker image will contain everything our service needs to work: it can be a Java Virtual Machine with all required libraries and an application server, or it can be a Node.js application packaged together with the Node.js runtime and all the needed modules, such as Express.js or whatever else the service needs to run. A microservice might consist of two containers, one running the service code and another running a database that keeps the service's own data.
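As a small preview of what Chapter 4, Creating Java Microservices, covers in detail, a Dockerfile for a Java microservice can be as short as the following sketch; the base image, JAR name, and port are illustrative assumptions, not the book's actual example:

```dockerfile
# A minimal sketch of packaging a Java microservice into an image.
# Base image, JAR name, and port are assumptions for illustration.
FROM openjdk:8-jre-alpine

# Copy the pre-built application JAR into the image
COPY target/rest-example-0.1.0.jar /app/rest-example.jar

# Document the port the service listens on
EXPOSE 8080

# Start the service when a container is started from this image
ENTRYPOINT ["java", "-jar", "/app/rest-example.jar"]
```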
Docker isolates containers down to a single process or service. In effect, all the pieces of our application will just be a bunch of black boxes: packaged, ready-to-use Docker images. Containers operate as fully isolated sandboxes, with only a minimal operating system userland present in each container; the kernel itself is shared with the host. Docker builds on Linux kernel features such as cgroups and namespaces, which allow multiple containers to share the same kernel while running in complete isolation from one another.
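You can observe this isolation first-hand: thanks to PID namespaces, a process list taken inside a fresh container shows only that container's own processes, not the host's. Assuming the public alpine image, the output looks roughly like this:

```shell
$ docker run --rm alpine ps aux
PID   USER     TIME  COMMAND
    1 root      0:00 ps aux
```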
Because the system resources of the underlying host are shared, you can run your services at near-native performance, and the footprint is substantially smaller in comparison to traditional virtual machines. Because containers are portable, as we have said in Chapter 2, Networking and Persistent Storage, they can run anywhere the Docker engine runs. This makes the process of deploying microservices easy. To deploy a new version of a service running on a given host, the running container can simply be stopped and a new container started from a Docker image containing the latest version of the service code. We are going to cover the process of creating new versions of the image later in this book. Of course, all the other containers running on the host will not be affected by this change.
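On a single host, such a redeployment boils down to a couple of commands; the container and image names below are hypothetical:

```shell
# Stop and remove the container running the old version of the service
docker stop rest-example
docker rm rest-example

# Start a replacement container from the image with the new version tag
docker run -d --name rest-example rest-example:0.2.0
```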
As microservices need to communicate using REST APIs over HTTP, our Docker containers (or, to be more precise, your Java microservices packaged and running within Docker containers) also need to communicate over the network. As you remember from Chapter 2, Networking and Persistent Storage, it's very easy to expose and map a network port for a Docker container. Docker containerization seems ideal for the purposes of a microservice architecture: you can package a microservice into a portable box, expose the needed network ports so it can communicate with the outside world, and run as many of those boxes as you need.
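To recall the syntax from Chapter 2, exposing a service is a single flag on docker run; the ports and names here are again assumptions:

```shell
# Map port 8080 inside the container to port 8080 on the host,
# making the service's REST API reachable from outside the container
docker run -d -p 8080:8080 --name rest-example rest-example:0.1.0
```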
Let's summarize the Docker features that are useful when dealing with microservices:
- It is easy to scale a service up or down; you just change the number of running container instances (see the sketch after this list)
- Containers hide the details of the technology behind each service; all containers with our services are started and stopped in exactly the same way, no matter what technology stack they use
- Each service instance is isolated
- You can place runtime constraints on the CPU and memory consumed by a container (also shown in the sketch after this list)
- Containers are fast to build and start. As you remember from Chapter 1, Introduction to Docker, there's minimal overhead in comparison to traditional virtualization
- Docker image layers are cached, which gives you another speed boost when creating a new version of the service
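To make the scaling and resource-limiting points concrete, here is what they might look like with the plain Docker CLI; the names, ports, and limits are illustrative assumptions (the --cpus flag requires a reasonably recent Docker version):

```shell
# Scale up: run three instances of the same service image,
# each mapped to a different host port
docker run -d -p 8081:8080 --name rest-example-1 rest-example:0.1.0
docker run -d -p 8082:8080 --name rest-example-2 rest-example:0.1.0
docker run -d -p 8083:8080 --name rest-example-3 rest-example:0.1.0

# Scale down: stop and remove one of the instances
docker stop rest-example-3 && docker rm rest-example-3

# Constrain an instance to half a CPU core and 256 MB of memory
docker run -d -p 8084:8080 --cpus="0.5" --memory="256m" \
    --name rest-example-limited rest-example:0.1.0
```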
Doesn't this fit the definition of the microservices architecture perfectly? Sure it does, but there's one problem. Because our microservices are spread across multiple hosts, it can be difficult to track which hosts are running which services, and to monitor which of them need more resources or, in the worst case, have died and are no longer functioning properly. Also, we need to group services that belong to a specific application or feature. This is the missing element in our puzzle: container management and orchestration. A lot of frameworks have emerged for the sole purpose of handling these more complex scenarios: managing a single service in a cluster, managing multiple instances of a service across hosts, and coordinating between multiple services at the deployment and management level. One of these tools is Kubernetes.