Hypervisor virtualization comes with headaches and heavy costs. There was a time I used Eucalyptus (basically a private AWS) to create multiple virtual machines on a 24-core, 64GB RAM server. Resource allocation was rigid, overhead was high, and there was just a lot of maintenance involved.
- In the Docker world, containers usually run just a single process. Instead of SSH’ing in, you would manage a container through volumes (e.g., a mounted socket), the Docker network (exposing a port on a network interface), or an interactive shell (e.g., sudo docker exec -ti my_container /bin/bash).
Unlike hypervisor virtualization, containers run in user space on top of the operating system’s kernel. This offers an amazing amount of flexibility:
- A higher density of containers on a host
- Quicker testing and deployment
- A smaller attack surface
- Deployments of multi-tenant services at a large scale
- Segregation of work (SOA/microservices)
- Great for sandboxing in isolation
- Resource management
- Building blocks for services
- Consistent environments: all developers see the same thing across dev, test, and production.
- Lower overhead because it uses OS system calls instead of a hypervisor layer.
- Automate the deployment of applications in containers.
- Easy collaboration and portability.
- Isolated environments for Continuous Integration (CI)
- Platform-as-a-Service (PAAS)
- Hyperscale
- Docker images are layered (COW). This means you might not have to deal with configuration management tools.
How it Works
Traditional containers could be a bit of a hassle to automate. This is where Docker comes in. It aims to provide not only virtualized environments but deployment options as well. This means you could move an application from a dev environment to production without losing consistency. Thanks to its use of a copy-on-write (COW) model, it’s lightweight – and fast.
Docker uses a client/server architecture. We have a Docker host (your Linux or Windows box), a Docker daemon (installed on the host), clients, and containers. Clients could connect remotely to the daemon if needed. Containers could be based on one of the thousands of Docker images found on registries such as Docker Hub. You could also create your own private registries.
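You could see both halves of this client/server split for yourself:

docker version # prints separate Client and Server (Engine) sections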
Containers are generic, portable, and are defined by the image you decide on. They all have the same operations:
- Create
- Start
- Stop
- Restart
- Destroy
A Docker Image could be a MySQL server, Nginx server, etc. You could also create your own images and throw them in containers. You could think of images as the “code” of the container. They are the standalone building blocks of Docker.
To mediate interactions between containers, we have Docker Compose (application stacks), Docker Swarm (clustering), and Kubernetes (orchestration). Don’t worry, we’ll dive deeper in a future post.
Installation
Docker is supported out of the box on all major Linux platforms. Installing Docker on Windows requires Hyper-V – which means you need Windows 10 Pro or higher. Installation on OS X requires a virtual environment as well (Docker for Mac).
Requirements:
- 64-bit architectures only
- Linux 3.10 or higher kernel
- Storage driver (overlay2 on modern kernels; Device Mapper and AUFS are older options)
- Namespaces and cgroups enabled
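A quick sanity check of these requirements on an existing box (output will vary by distro):

uname -m # expect x86_64
uname -r # expect kernel 3.10 or higher
ls /sys/fs/cgroup # cgroup hierarchies should be mounted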
Installation on Ubuntu 18.04 LTS
sudo apt update && sudo apt upgrade -y
sudo apt install apt-transport-https ca-certificates curl software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu bionic stable"
sudo apt update -y
apt-cache policy docker-ce
sudo apt install docker-ce -y
sudo usermod -aG docker $USER # log out and back in for the group change to take effect
sudo service docker start
sudo service docker status
sudo docker info
sudo dockerd & # WSL 2: no systemd, so start the daemon directly
Result
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /lib/systemd/system/docker.service.
Created symlink /etc/systemd/system/sockets.target.wants/docker.socket → /lib/systemd/system/docker.socket.
sudo cat /var/log/docker.log
level=info msg="API listen on /var/run/docker.sock"
Docker Daemon
Keep in mind, if you bind the Docker daemon to a publicly accessible port, anyone will be able to access it unless you enable TLS authentication.
You could bind the daemon to multiple locations:
sudo dockerd -H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock
dockerd -H tcp://0.0.0.0:2375
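If you must expose the TCP socket, a safer sketch is to require TLS verification. The certificate paths below are placeholders you’d generate yourself:

sudo dockerd --tlsverify --tlscacert=ca.pem --tlscert=server-cert.pem --tlskey=server-key.pem -H tcp://0.0.0.0:2376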
docker info
docker help run
When you run a container, Docker uses the image to create a new container with its own filesystem. The container gets a network stack, an IP address, and a bridge interface to talk to the local host.
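For example (the container name here is made up), you could start a container and pull its bridge IP out of docker inspect:

docker run -d --name net_test ubuntu:18.04 sleep 300
docker inspect --format '{{ .NetworkSettings.IPAddress }}' net_test # e.g., 172.17.0.2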
Common Docker Commands
| Task | Command |
| --- | --- |
| List containers | docker ps -a or docker ps -n 2 |
| Pull image | docker pull ubuntu:18.04 |
| Run container | docker run --name my_container -i -t ubuntu:latest /bin/bash |
| Start container | docker start [container name] |
| Stop container (SIGTERM) | docker stop [container name] |
| Kill container (SIGKILL) | docker kill [container name] |
| Attach to container | docker attach [container name] |
| Container logs* | docker logs --tail 0 -ft [container name] |
| Container processes | docker top [container name] |
| Container configuration | docker inspect [container name] or docker inspect --format '{{ .NetworkSettings.IPAddress }}' [container1] [container2]... |
| Stats for running containers | docker stats |
| Run background process | docker exec -u user -d [container name] touch /var/log/mysql/extra.log |
| Run interactive process | docker exec -it [container name] /bin/bash |
| Delete all containers | docker rm -f $(docker ps -a -q) |

Notes on docker run: container names must match [a-zA-Z0-9_.-], and a container only runs for as long as the command we specified (here, /bin/bash) keeps running.
* Docker’s logging driver defaults to json-file. You could change it when running a container with the option --log-driver="syslog"
Automatic Restarts
| Policy | Command |
| --- | --- |
| Always | docker run --restart=always --name [container name] ubuntu:latest |
| On failure (max 5 retries) | docker run --restart=on-failure:5 --name [container name] ubuntu:latest |
Docker Images
Docker containers are made of layers of filesystem images. At the top sits an empty read-write layer. Below it might be an “Apache” or “MySQL” layer, a VIM layer, and other application layers. As you make changes, the read-only layers are preserved and your changes are written to the read-write layer. This is what is meant by the copy-on-write (COW) pattern – which makes Docker very powerful.
Then we have the base image, which is usually the operating system (e.g., Ubuntu, Arch Linux, etc.). This layer is called the rootfs. Finally, we have the bootfs, which contains the kernel, cgroups, namespaces, and device-mapper.
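You could inspect these layers yourself; docker history lists each read-only layer and the instruction that created it:

docker history ubuntu:18.04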
docker images #list docker images
Docker images are stored in a repository – which exists on a registry. There are user-contributed repositories (jayluong/chef) and “top-level” repositories maintained by Docker Inc. (ubuntu). As implied, use user-contributed repositories at your own risk.
The default registry is Docker Hub. They also have a product called Docker Trusted Registry if you want to run a private registry behind your own firewall.
docker pull ubuntu:18.04
docker images ubuntu
docker search ubuntu
So “ubuntu” is a repository holding a number of Ubuntu images. Each image is identified by a tag; the 18.04 tag points to all the layers of the 18.04 image. You should always build a container from a specific tag.
Dockerfile
Using a Dockerfile with docker build is an idempotent way of creating images. The directory where the Dockerfile is located is called the build context. Each instruction (in caps) creates/commits a new layer. The EXPOSE instruction could be used to open ports – or connect containers together.
Every Dockerfile needs a FROM instruction specifying the base image (Ubuntu 18.04 here). If the build fails at any step, you’ll still have an image from the last successful layer that you could launch a container from for debugging. Having a .dockerignore file will enable you to exclude files from the build context – and hence, from being sent to the Docker daemon.
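A minimal .dockerignore sketch; these entries are just common examples:

# .dockerignore
.git
node_modules
*.log

And the example Dockerfile itself: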
# Version: 1.0.1
FROM ubuntu:18.04
LABEL maintainer="jay@example.com"
ENV REFRESHED_AT 2020-02-20
RUN apt-get update && apt-get install -y nginx
RUN echo 'Hello, World.' > /var/www/html/index.html
EXPOSE 80
Build the new image and find the port it was mapped to. The container runs detached (-d) while nginx itself stays in the foreground (-g "daemon off;"). You could specify your own port mappings with -p.
docker build -t="bacontest/static_web:v1" .
docker build --no-cache -t="bacontest/static_web:v1" .
docker history bacontest/static_web
docker run -d -P --name static_web bacontest/static_web nginx -g "daemon off;"
docker port static_web 80
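To verify it’s serving, hit the mapped host port with curl. The port number below is hypothetical; use whatever docker port printed:

curl localhost:32769 # should return: Hello, World.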
Dockerfile Instructions
- CMD – Run the command once the container is launched. Use an array or Docker will prepend the command with /bin/sh -c. However, the command you run with docker run will override this instruction.
- CMD ["/bin/bash", "-l"]
- Only one CMD is allowed per Dockerfile
- ENTRYPOINT – takes in the arguments passed via docker run.
- ENTRYPOINT ["/usr/sbin/nginx"]
- You could use the --entrypoint flag to override it
- Typically followed by a CMD instruction to set the default arguments: CMD ["-D", "FOREGROUND"]. See the combined sketch after this list.
- ADD – adds files, URLs, or directories from the build environment to the image. If the destination path doesn’t exist, Docker will create it with mode 0755 and UID/GID of 0. If the source is a local tar archive (including gzip/bzip2/xz-compressed ones), Docker will unpack it automatically (without overwriting existing files). This instruction will invalidate the cache if the files change.
- ADD config.rb /opt/config.rb
- ADD http://wordpress.org/latest.zip /root/wordpress.zip
- COPY – like ADD, but without the URL and archive-extraction features. It’s just more explicit and makes the intent clearer.
- VOLUME – Bypasses the Union File System to enable persistent or shared data. Volumes live on as their own separate entities even when their containers are stopped (the volume just has to exist). Great for holding source code or databases.
- VOLUME ["/opt/project", "/tmp"]
- Volumes are located on the Docker host (e.g., /var/lib/docker/volumes).
- If you’re using Windows’s WSL 2: /var/data/docker-desktop/default/daemon-data/volumes
- WORKDIR – Sets the container’s working directory
- Much like the “cd” command. You could change directories and RUN commands.
- -w to override
- USER – the user the container should be run as. The default is root.
- USER uid:gid
- USER uid:group
- USER user:gid
- USER user
- USER user:group
- -u to override
- ONBUILD – instructions that are executed when the image you’re building is used as the base for another image. In the child image, they run right after FROM. They’re essentially triggers.
- ONBUILD ADD . /var/www
- LABEL – metadata in key/value pairs.
- LABEL version="0.1" city="Los Angeles"
- STOPSIGNAL – The system call signal to use when the container is stopped.
- A valid number from the kernel syscall table or a SIGNAME (SIGKILL)
- ARG – variables that could be passed at build time with --build-arg.
- Predefined ones include http_proxy, https_proxy, ftp_proxy, and no_proxy
- SHELL – Change your shell.
- Useful for using bash or zsh or even cmd or powershell.
- HEALTHCHECK – You could only have one of these instructions in a Dockerfile. Use it to check whether, say, a website or database is up.
- docker inspect --format '{{.State.Health.Status}}' my_database
- ENV – sets environment variables during the image build. These variables persist into containers created from the image.
- ENV RVM_PATH=/home/rvm RVM_ARCHFLAGS="-arch i386"
- ENV NEW_DIR /home/jay
- WORKDIR $NEW_DIR
- -e to set at runtime
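To see how several of these instructions fit together, here’s a hedged sketch of a Dockerfile. It assumes an index.html sits in the build context, and it is an illustration rather than a production config:

# Sketch: combines RUN, ENV, WORKDIR, COPY, EXPOSE, HEALTHCHECK, ENTRYPOINT, and CMD
FROM ubuntu:18.04
LABEL version="0.1"
RUN apt-get update && apt-get install -y nginx curl
ENV APP_DIR=/var/www/html
WORKDIR $APP_DIR
COPY index.html . # assumes index.html exists in the build context
EXPOSE 80
HEALTHCHECK CMD curl -f http://localhost/ || exit 1
ENTRYPOINT ["/usr/sbin/nginx"]
CMD ["-g", "daemon off;"] # default args; overridable at docker run time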
Entering a Container with an Interactive Shell
sudo docker exec -ti [container] /bin/bash
Deleting Images
This is done with the rmi command. Make sure any containers using the images you’re about to delete are stopped and removed (with the rm command).
docker rmi [image name or ID]
docker rmi `docker images -a -q` #removes all images
Private Docker Registry
docker run -d -p 5000:5000 --name registry registry:2.7.1
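Once the registry container is up, you could tag an image against it and push (the image names here are arbitrary):

docker tag ubuntu:18.04 localhost:5000/my-ubuntu
docker push localhost:5000/my-ubuntu
docker pull localhost:5000/my-ubuntu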
Volumes
Typically, you don’t want to include your application code in the images. For one, code changes frequently and you don’t want to keep rebuilding the image. Secondly, the code may need to be shared and tested across other containers and developers. This is why you would want to use volumes (-v):
docker run -d -p 80 --name test_website -v /projects/website:/var/www:rw [image name] nginx
Volumes can be shared and reused between containers:
- A container doesn’t have to be running to share its volumes.
- Changes to a volume are made directly.
- Changes to a volume will not be included when you update an image.
- Volumes persist even when no containers use them.
If you run the “run” command with -v and only a destination path on the container, Docker will create an anonymous volume automatically without you having to specify a source path. You could figure out where this volume lives by running “docker inspect [container name]”; it’ll be under “Mounts.” The best practice is usually to use a named volume (e.g., mydb:/var/lib/mysql).
"Mounts": [
{
"Type": "volume",
"Name": "a2a39ff65531abea37acd6e2ace5b631193d5e2842bfb020cfd4dd6ee68ac896",
"Source": "/var/lib/docker/volumes/a2a39ff65531abea37acd6e2ace5b631193d5e2842bfb020cfd4dd6ee68ac896/_data",
"Destination": "/var/lib/mysql",
"Driver": "local",
"Mode": "",
"RW": true,
"Propagation": ""
}
docker volume prune #remove all unused volumes
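A quick named-volume sketch (the volume and container names are made up): the data survives even if the container is removed:

docker volume create mydb
docker run -d --name db1 -e MYSQL_ROOT_PASSWORD=secret -v mydb:/var/lib/mysql percona/percona-server:8.0
docker volume inspect mydb # prints its path under /var/lib/docker/volumes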
Sending Signals
sudo docker kill -s [signal] [container]
Docker Compose
So far we’ve built images from Dockerfiles and run containers off those images. With Docker Compose, we could now use a YAML file to start and connect multiple containers, each defined as its own “service.” It’s used to create applications composed of multiple services. This is GREAT for building local development stacks.
Basically, it performs the “docker run” from a config file.
docker-compose up will:
- Launch all containers
- Apply runtime configurations
- Multiplex all the log outputs
Docker Compose comes built in on Windows and macOS, but for Ubuntu 18.04 you’ll need to download the binary from GitHub:
sudo curl -L "https://github.com/docker/compose/releases/download/1.25.4/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
version: '2'
services:
  db:
    image: percona/percona-server:8.0
    environment:
      MYSQL_ROOT_PASSWORD: secret
    ports:
      - "3306:3306"
    volumes:
      - /var/log/mysql
      - /var/lib/mysql
    command:
      - '--user=mysql'
This is equivalent to:
sudo docker run -d -p 3306:3306 -v /var/log/mysql -v /var/lib/mysql -e "MYSQL_ROOT_PASSWORD=secret" --name mypercona percona/percona-server:8.0 --user=mysql
You could also use the DOCKER_HOST environment variable if the host isn’t local.
sudo docker-compose up -d
# omit -d to run interactively
sudo docker-compose ps
sudo docker-compose logs
sudo docker-compose kill
sudo docker-compose rm
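You could also open a shell inside a running service (db here refers to the service from the example above):

sudo docker-compose exec db bash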
If you’ve changed your Dockerfile, you could run this to rebuild:
docker-compose build