One of the major issues faced by developers around the globe is running their applications in a favourable environment. Every developer wants their application to run in an environment that responds solely to that application, i.e. without any external interference. This is where Docker comes into play. Docker not only lets a developer run an application in a secluded environment but also makes it portable.
Docker is a software engine used to manage and automate the deployment of software applications into containers and then integrate them with remote systems. Docker guarantees that a containerised application will work anywhere, with all of its dependencies satisfied. System administrators and managers therefore use Docker widely to port applications - together with all of their dependencies - and get them running across systems and machines around the world. This flexibility is the reason behind Docker's massive adoption. Since Docker is open source, anyone can try it and use it to ship applications.
The highlight of Docker is shipping beyond the limits of a single machine or network. A developer can ship an entire container with all its applications and their dependencies inside, and another developer can integrate and execute those applications simply by running the container. There are no dependency or environment complications, since everything is fixed when the container is built.
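As a sketch of this workflow (the image name myapp is hypothetical), one developer can export a container image to a portable archive and another can load and run it unchanged:

```shell
# Developer A: save a hypothetical image (myapp) to a portable tar archive
docker save -o myapp.tar myapp:latest

# Developer B: load the archive and run the application, dependencies included
docker load -i myapp.tar
docker run -d --name myapp myapp:latest
```

Because the image carries its own filesystem and dependencies, the second developer needs nothing installed beyond Docker itself.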
The flip side of using Docker is its consumption of disk space. Each creation and execution of a container allocates more disk space, which gradually degrades the performance of the system. This can be handled by recovering the space consumed by old and unused containers and images.
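A minimal sketch of such a clean-up, using Docker's own housekeeping commands:

```shell
# Show how much space images, containers and build caches are using
docker system df

# Remove stopped containers, unused networks and dangling images
docker system prune

# Add -a to also remove all images not used by any container
docker system prune -a
```

Running the prune command periodically keeps disk usage from creeping up as containers accumulate.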
Docker follows a client-server architecture, one of the most commonly used architectures. A Docker client talks to a server that listens for its requests. The server is a daemon waiting for requests to process; it is this daemon that does the actual work of creating, stopping and otherwise managing containers.
The Docker client-server architecture is shown in the diagram. On a Docker host, the Docker daemon waits for requests. A Docker client utility connects to the daemon and executes commands, and the daemon in turn manages the Docker containers.
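The client-daemon split is visible even in a simple version query, which reports the client and the server components separately:

```shell
# Prints a Client section (the local CLI) and a Server section
# (the daemon it connected to, which may be on a remote host)
docker version

# The client can be pointed at a daemon on another machine
# via the DOCKER_HOST environment variable, e.g.:
# DOCKER_HOST=tcp://remote-host:2375 docker version
```

The remote-host address above is hypothetical; by default the client talks to the local daemon over a Unix socket.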
Docker images – A Docker container is launched from an image; the image is the source code for the container. A user can install a required application on any of the base images, such as Ubuntu or CentOS, available from the official registry, and save the result as a new image dedicated to that particular application.
Docker registries – The place where images are stored and retrieved for later use, much like a Git repository for images. Docker registries can be public as well as private.
Docker containers – The backbone of Docker, i.e. its main execution component. Just as on a Linux machine, any number of applications can be deployed in a container. It is the execution environment for the end user.
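The three concepts fit together in a typical workflow; a sketch is below, where the container name, installed package and registry account (example-user) are hypothetical:

```shell
# Registry -> image: pull a base image from the official registry
docker pull ubuntu:22.04

# Image -> container: launch a container and install an application inside
docker run -it --name web ubuntu:22.04 bash
# (inside the container) apt-get update && apt-get install -y nginx && exit

# Container -> image: save the modified container as a new image
docker commit web example-user/nginx-demo:1.0

# Image -> registry: push the image so others can retrieve it
docker push example-user/nginx-demo:1.0
```

Pushing to a registry requires an account there; in practice images are usually built from a Dockerfile rather than committed by hand, but the flow between registry, image and container is the same.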
Installation of Docker
Docker is supported by a wide variety of platforms, including Amazon cloud, Microsoft Azure and Rackspace Cloud. It is also supported on all three major OS platforms (macOS, Linux and Windows).
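On Linux, for example, Docker publishes a convenience installation script; a sketch of installing with it and verifying the result:

```shell
# Download and run Docker's convenience install script (Linux only)
curl -fsSL https://get.docker.com -o get-docker.sh
sh get-docker.sh

# Verify that the client is installed
docker --version
```

On macOS and Windows the usual route is instead a desktop installer, and the official documentation covers each platform in detail.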
Official documentation of installation in various platforms can be found here.