Docker and the World of Containers
Firstly, What is Docker?
Docker is a software tool that provides operating-system-level virtualisation through the use of (mostly) sandboxed packages known as containers. This differs from more traditional virtual machines (VMs), which use hardware virtualisation. Understanding Docker therefore requires an understanding of both hardware virtualisation and operating-system-level virtualisation.
Operating System Vs. Hardware Virtualisation
Hardware virtualisation, as used by virtual machines, is achieved by running software known as a hypervisor on a physical server, which “virtualises” the server’s physical hardware. The hypervisor makes it possible to run multiple operating systems, of differing types, concurrently on a single server.
Operating-system virtualisation, as used by containers, is achieved by running a full operating system on the physical server and, in turn, a container engine within that OS. The container engine shares the operating system kernel – the core of the operating system – with the applications running within containers. Because containers reuse the core functionality of the host operating system, they are much smaller than a traditional VM.
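This kernel sharing can be seen directly. Assuming a Linux host with a running Docker engine, a container reports the host's own kernel version, because it has no guest kernel of its own (the `alpine` image here is the small official Alpine Linux image from Docker Hub):

```shell
# Print the host's kernel version.
uname -r

# Print the kernel version as seen from inside a container.
# Both commands report the same kernel, unlike a VM, which would
# report the kernel of its own guest OS.
docker run --rm alpine uname -r
```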
A Brief History of Docker
Docker began as a Platform as a Service (PaaS) company named dotCloud, founded by Solomon Hykes and Sebastien Pahl in 2011. Initially only an internal project, the software was eventually debuted to the public at PyCon in 2013 and subsequently released as an open source project in March of the same year. Docker quickly caught on, with commercial Docker services provided by the likes of Red Hat (2013) and Amazon Web Services (2014), and partnerships with Stratoscale and IBM in 2014.
As of 2018, Docker's own published statistics showed substantial global uptake of its services.
What does Docker mean to you?
Docker provides several key technical benefits compared with standalone applications and VMs, including:
- OS-agnostic images – Docker containers are built from Docker images, which are independent of the host distribution and can be deployed on any platform on which the Docker engine can run. This means you can simply and efficiently migrate an application stack (running in Docker containers) from, say, a Linux server to a Windows server and vice versa (on Windows, Linux containers run inside a lightweight VM supplied by Docker).
- Easy server upgrades – Because Docker containers are abstracted from the OS, host server upgrades can be applied without needing to consider whether the application stack will be affected.
- Simple snapshots – Docker containers can be committed to images, creating point-in-time snapshots of your running applications. Like the running containers themselves, these snapshots are much smaller than a traditional VM image.
- Versioning – Application stacks can be versioned by using the snapshot feature and tagging the created image with the version details. Additionally, the underlying Dockerfiles from which the images are built can be maintained in a version control system (VCS), as they are simply text files specifying the commands that set up the environment from which the image is created.
- Ecosystem – Docker provides the Docker Hub, a worldwide library for Docker images, which contains over 100,000 pre-created images across the software spectrum. Most well-known software vendors, such as Microsoft, Oracle and Atlassian contribute to this library, providing official images, complete with documentation, for some of the most common applications used worldwide.
- Central file access – Docker can map storage volumes between the container and the host server. This lets you manage the filesystem for an application stack on a single machine, whereas traditionally the filesystem would span multiple VMs.
- Simple application upgrades – A Docker container can be upgraded simply by stopping and removing the current container, then running a container from the newer image version. To retain user data, any files should be mapped to a Docker volume located on the host machine.
- Simple application configuration changes – Key application settings can be supplied as Docker environment variables, passed to the application in the initial docker run command. Settings such as heap size can then be changed simply by restarting the container with a new value for the relevant environment variable.
- Infrastructure as Code (IaC) – Docker lets you define your application stack and its related networking and storage in a single file, known as a Docker Compose file. This is essentially your IaC; because it uses the YAML format, it can be stored in a VCS and versioned appropriately.
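A minimal Docker Compose file tying several of these benefits together – environment variables, named storage volumes and a versionable stack definition – might look like the following (the service name, image tag, ports and paths below are illustrative, not from any real application):

```yaml
# docker-compose.yml – the application stack, its networking and its
# storage in one versionable file. All names and values are illustrative.
version: "3.8"
services:
  web:
    image: myapp:1.0        # bump this tag to upgrade the stack
    ports:
      - "8080:8080"
    environment:
      JAVA_OPTS: "-Xmx2g"   # e.g. heap size, changed by restarting
    volumes:
      - myapp-data:/var/lib/myapp   # user data survives upgrades
volumes:
  myapp-data:
```

With this file in a VCS, `docker-compose up -d` brings the whole stack up, and a change to the image tag, committed and tagged, is an auditable upgrade.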
At a business level, the use of Docker within organisations has been shown to increase performance across a number of key areas.
Getting Started with Docker
In order to get started with Docker, you will need to consider where your Docker engine(s) will be located. You may choose to run your own Docker engine(s) on your physical or cloud provider’s servers or VMs.
Alternatively, there are a number of Container as a Service (CaaS) offerings, for example:
- Amazon Elastic Container Service (ECS)
- Azure Kubernetes Service (AKS, formerly Azure Container Service)
- Google Kubernetes Engine (GKE)
- Red Hat OpenShift Online
Once you have configured the infrastructure on which your containers will run, you will then need to create or identify the relevant images for your application stack. You can choose from three methods:
- Creating your own Dockerfiles
- Extending a current Dockerfile
- Using an existing Dockerfile/Image
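For instance, extending an existing image is usually a matter of a few Dockerfile lines. The sketch below builds on the official nginx image from Docker Hub (the tag and file paths are illustrative):

```dockerfile
# Extend the official nginx image with our own static content.
FROM nginx:1.25
COPY ./site/ /usr/share/nginx/html/
EXPOSE 80
```

Building and tagging the image is then a single command, e.g. `docker build -t my-site:1.0 .`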
Finally, once the infrastructure is configured and the image has been created or identified, you will need to run the Docker container from the image, ensuring you configure the relevant environment variables, networking and storage volumes.
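Putting that final step together, a single docker run invocation can wire up all three concerns. This is a sketch only, assuming a running Docker engine; the network, volume and image names are hypothetical:

```shell
# Create a user-defined network and a named volume.
docker network create app-net
docker volume create app-data

# Start the container attached to that network, with the volume
# mounted at /data, an environment variable set, and port 8080
# published on the host.
docker run -d --name app \
  --network app-net \
  -v app-data:/data \
  -e LOG_LEVEL=info \
  -p 8080:8080 \
  my-registry/my-app:1.0
```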
Are Docker and Kubernetes the same thing?
The short answer is No.
However, Kubernetes can make use of Docker containers, so the two are closely linked. Kubernetes is an orchestration system for containers, providing the ability to manage, place, scale and route traffic to them. Docker offers its own alternative orchestration system, Docker Swarm.
Want to know more? Keep your eyes peeled for our next blog instalment on Kubernetes, coming soon!