Docker Notes

I recently started using Docker and am very impressed with how easily you can set up environments to develop, test, and deploy applications. It provides virtualization, but unlike some other virtualization solutions, a Docker container (the runtime environment for your application) is a process instead of a complete OS. It runs natively on Linux; on Windows and Mac it uses a single virtual machine in which the Docker containers run.

Using Docker you can easily create the containers required for your application to run, for example containers for a database, an application server, etc., and easily deploy them to a test or production environment. You can create images of your setup and re-create the same setup consistently from those images. When you create a Docker image, it has to be based on some variant of a Linux base image. You can browse Docker images at Docker Hub.

This post is not meant to be a tutorial or detailed information about Docker; you can refer to the Docker web site for that. Like many of my other posts, this post is meant to be a reference for me – about some of the Docker commands I have used so far.

How to get Docker

See the installation instructions on the Docker web site.

Dockerfile

You write a Dockerfile to build a new Docker image, from which containers are created. Some of the commands used in the Dockerfile are (also see the complete official reference docs) –

FROM – Specifies the base image for your Docker image. For example – FROM ubuntu

ADD – Adds file(s) from the host machine to the image. For example, to copy the setup.sh file from the directory where docker commands are run into the image – ADD ./setup.sh /setup.sh

RUN – Runs a command in the container during the image build. For example, to make the setup.sh file executable after copying it to the image – RUN chmod +x /setup.sh

ENTRYPOINT – Docker containers are meant to have one main application; when that application stops running, the container stops. That main program is specified using this directive. For example, to run the Apache server (possibly installed using a RUN command) in the foreground – ENTRYPOINT apachectl -D FOREGROUND

CMD – This is a little confusing directive in a Dockerfile. There can be only one CMD instruction in a Dockerfile. In the absence of ENTRYPOINT, if CMD specifies an executable, that becomes the main application for the container. If CMD does not specify an executable, its values are passed as arguments to the command specified in ENTRYPOINT.

EXPOSE – This is another confusing directive. You would think this command exposes a port from the container to the outside world, but that is not what it does. It simply tells Docker that the container listens on the specified port(s) at runtime. For example, if an Apache server is listening on port 80 in a container, then you would specify EXPOSE 80 in the Dockerfile.

ENV – Sets environment variable(s) in the container. For example, ENV PATH=/some/path:$PATH

VOLUME – Creates a mount point for a volume. A volume is like a folder; from within the container, it can be accessed like any other folder. Volumes can be used to share folders across different running containers – one container can import volumes from another container.

There are many other directives that can be used in a Dockerfile, but the above are the ones I have used frequently – well, other than CMD.
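Putting several of these directives together, a minimal Dockerfile might look like the following sketch. The setup.sh script and the paths are hypothetical, just for illustration –

```dockerfile
# Base image
FROM ubuntu

# Set an environment variable in the image
ENV APP_HOME=/app

# Copy a (hypothetical) setup script from the host and run it during the build
ADD ./setup.sh /setup.sh
RUN chmod +x /setup.sh && /setup.sh

# Tell Docker the container listens on port 80 at runtime
EXPOSE 80

# Declare a mount point for logs
VOLUME /logs

# apachectl is the main application; CMD supplies its default arguments,
# which can be overridden on the docker run command line
ENTRYPOINT ["apachectl"]
CMD ["-D", "FOREGROUND"]
```

With this ENTRYPOINT/CMD combination, apachectl -D FOREGROUND becomes the main process of the container, and the -D FOREGROUND arguments can be replaced by whatever you pass after the image name in docker run.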

Creating Docker images and containers

You can create a Docker container either completely from command line options or by passing the location of a Dockerfile.

Create docker container with command line options

(See complete reference)

Create a container from ubuntu image

$ docker run --name my-ubuntu -it ubuntu bash

The above command will create a Docker container from the base ubuntu image, name it my-ubuntu, run the bash command (open a bash shell), keep standard input open (-i), and allocate a pseudo-terminal (-t, together -it). It will open a bash shell in the container, where you can execute any command.

See currently running containers

$ docker ps

If you run this command from another terminal on the host computer (from where you are running docker commands), and you have not exited the bash shell opened in that container (with the exit command), you should see the my-ubuntu container created above listed. If you exit the shell, the container stops (because bash was the main command and it terminated). To see all containers, including stopped ones, use the -a flag –

$ docker ps -a

The ps command shows a lot of information, but you can filter and format the output. The format should be a Go template string. For example, to see only the names of containers, use the following command –

$ docker ps --format "{{.Names}}"

See docs for more formatting options.

Start Container

$ docker start my-ubuntu

The above command will start the my-ubuntu container, if it was stopped. It will use the same entrypoint command that was specified when the container was created. In this case it will open a bash shell, but the shell will terminate immediately, so the container will also stop. Specifying the -i option will keep stdin open and allow you to run commands in the container.

$ docker start -i my-ubuntu

Use the stop command to stop a container, e.g. docker stop my-ubuntu1

To remove a container, use the rm command (you can specify multiple container names) –

$ docker rm my-ubuntu1 my-ubuntu

If you want to remove a running container, use the -f option, e.g. docker rm -f my-ubuntu1. Instead of container names, you can also use container ids.

Attaching to running container

Let’s create a container in detached mode.

$ docker run -d -it --name my-ubuntu ubuntu

The -d option runs the Docker container in the background (detached mode). You will immediately return to the command prompt after executing the above command.

To attach to the above container and the process that started it (in this case /bin/bash) –

$ docker attach my-ubuntu

This will allow you to execute commands in the bash shell that was started when the container was run. Exiting the shell (with the exit command) will also terminate the container.

If you do not want to terminate the container upon exiting the bash shell, you can use the exec command. It can be used to run any command, not just a bash shell.

$ docker exec -it my-ubuntu bash

This will open a new bash shell. Exiting that shell will not terminate the container because it was not the command that started the container.

Listing images

To list all images –

$ docker images

Deleting Image

To remove images, use the rmi command. Note that there should be no containers based on the images you want to delete. If there are containers using the images to be deleted, remove those containers first using the rm command mentioned above.

$ docker rmi my-image1 my-image2

Instead of names you can also use image ids.

Delete all Containers

The following command will delete ALL containers, so be careful –

$ docker rm $(docker ps -a -q)

The -q option tells the ps command to return only ids, which are then fed to the rm command.
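The $(...) used here is plain shell command substitution – the inner command's output (one id per line) is spliced in as arguments to the outer command. A Docker-free sketch of the same pattern, using ls in place of docker ps -q and hypothetical scratch files –

```shell
# Create a scratch folder with a couple of files
mkdir -p /tmp/subst-demo
touch /tmp/subst-demo/a /tmp/subst-demo/b

# ls -1 prints one name per line, just as docker ps -q prints one id per line;
# $(...) feeds those names to rm as arguments, like docker rm $(docker ps -a -q)
rm $(ls -1 /tmp/subst-demo/a /tmp/subst-demo/b)
```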

Here is an example of using filters to remove containers (the name filter matches any container whose name contains my-ubuntu) –

$ docker rm $(docker ps --filter "name=my-ubuntu" -q)

Delete all Images

The following command deletes all images, so again be careful –

$ docker rmi $(docker images -q)

To delete by filtering on image name –

$ docker rmi $(docker images -q "*my-ubuntu*")

Using Volumes

As mentioned earlier, volumes can be used to share data between the host and a container, and also amongst containers. Let’s create a container, call it container1, with a volume /v-container1.

$ docker run -dt --name container1 -v /v-container1 ubuntu

You can open a bash shell in the container and verify that the /v-container1 folder is available. Create a file in this folder –

$ docker exec -it container1 bash
$ touch /v-container1/file-in-container1

Now create a container, container2, that uses volumes from container1.

$ docker run --rm -it --name container2 --volumes-from container1 ubuntu

Note that the above command uses --rm to create a temporary container. Once the main application started in the container (in this case the bash shell) stops, the container will be removed, and it won’t appear in the ‘docker ps -a’ output either.

Once the shell opens in container2, you can verify that /v-container1 is available and the file we created exists in the folder.

Mapping folder from host to container

To share a folder from the host with a container, use the same -v option, but specify a <host-folder-name>:<path-in-container> argument. For example, to map, say, the /project1/src folder from the host to the /src folder in the container –

$ docker run --rm -it -v ${PWD}:/src ubuntu

${PWD} expands to the present working directory, so the current folder on the host is mapped. We could have specified the entire path too – docker run --rm -it -v /project1/src:/src ubuntu

If your host is Mac or Windows, make sure the /project1/src folder is shared in Docker preferences, else docker will throw an error.

Using volumes for backup and restore

I found Docker volumes very useful when backing up data from one container and restoring it in another. For example, if you had created a container from the mysql image and populated a database, you could easily create a tar file of that data from the host machine and then restore it when you create a new instance of the container, or restore the data in the original db container. However, you will need to know the volumes used by the container – in this case the container using the mysql base image. You can find that easily by running this command –

$ docker inspect my-db

Note that the output is JSON. Look for the “Mounts” key and the “Destination” sub-key in it. In the case of the mysql container, this value is “/var/lib/mysql” – this is where the data is stored in the mysql container. So we can use the --volumes-from option we saw above to use this volume in another container, run the tar command on it, and save the result in a volume mapped to the host machine.

$ docker run --rm --volumes-from my-db -v $(pwd)/backup-data:/backup-data ubuntu tar cvf /backup-data/my-db-volume.tar /var/lib/mysql

We are using --rm because we want a temporary container; it will be removed after the command finishes. We are using volumes from the my-db container, which is based on the mysql image; this makes the /var/lib/mysql folder available to this temporary container. Then we are mapping the backup-data folder in the present folder (on the host machine) to /backup-data in the container. So now the temporary container has access to /var/lib/mysql (from the my-db container) and the backup-data folder on the host machine. Then we execute the tar command to create my-db-volume.tar in /backup-data, which in effect creates it in the same-named folder on the host machine. And it tars data from /var/lib/mysql, which contains data from the my-db container.

To restore the data –

$ docker run --rm --volumes-from my-new-db -v $(pwd)/backup-data:/backup-data ubuntu bash -c "cd / && tar xvf /backup-data/my-db-volume.tar"

Here we are restoring the data into the newly created my-new-db container (created from the mysql base image). We are using volumes from the new db container, so the /var/lib/mysql folder is available to the temporary container. We map the same backup folder from the host to the temporary container and then run the tar command to untar /backup-data/my-db-volume.tar (which in effect untars the same-named file from the host machine) into the root. So data under /var/lib/mysql in the tar file will be written to /var/lib/mysql in the temporary container. And since this volume comes from my-new-db (the container into which we want to restore the data), the data is saved in that container.
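The tar part of this trick works independently of Docker; here is a minimal sketch of the same create-then-extract round trip, using hypothetical scratch paths in place of the container volumes –

```shell
# Simulate a data folder (stands in for /var/lib/mysql) and a backup folder
mkdir -p /tmp/tar-demo/data /tmp/tar-demo/backup-data
echo "some data" > /tmp/tar-demo/data/db-file

# "Backup": create a tar of the data folder, like the first docker run above
tar cf /tmp/tar-demo/backup-data/my-db-volume.tar -C /tmp/tar-demo data

# "Restore": untar into a fresh location, like the bash -c "cd / && tar xvf ..." step
mkdir -p /tmp/tar-demo/restore
tar xf /tmp/tar-demo/backup-data/my-db-volume.tar -C /tmp/tar-demo/restore

cat /tmp/tar-demo/restore/data/db-file   # -> some data
```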

Creating image from container

Let’s say you created a container from some base image and installed more applications in that container. Now you want to save the container as an image, so that next time you can create a container from that image and don’t have to repeat the installation of all the additional applications. You can do that with the export command –

$ docker export -o /my-images/container1-image.tar container1

Specify the output file path using the -o option. The last argument is the name of the container from which you want to create an image.

To create an image from the exported file, use the import command –

$ docker import /my-images/container1-image.tar container1-image

The above command will create image named container1-image from container1-image.tar file.

I know the post is getting really long, but I wanted to put some of the most useful commands of Docker, according to me, in this post. I will wrap up by briefly listing some of the useful commands in docker-compose.

Docker-Compose

Docker Compose can be used to configure multiple Docker containers in the same file and also to specify dependencies between them. There is a docker-compose command which executes commands from docker-compose.yml (though you can override the file name with the -f option).

The structure of docker-compose.yml is simple – it has a version number at the top (the latest version is 3, but I have mostly worked with version 2), and then there is a services section which lists container definitions. Here is an example of docker-compose.yml –

version: "2"

services: 
  my-db:
    image: mysql
    environment:
      MYSQL_DATABASE: my-db
    container_name: my-db
    ports:
      - '3307:3306'

  my-app-server:
    build: ../my-app-server
    container_name: my-app-server
    stdin_open: true
    tty: true
    ports:
      - '8082:8080'
      - '9001:9001' # Open port for debugging
    volumes: 
      - ../src/app1:/src/app1
      - ../logs:/logs
    depends_on: 
      - my-db

Many of the options we have already seen in this post, so I thought I would just show an example of docker-compose.yml. The file contains the definition of one db container, which specifies its image directly in docker-compose.yml, and one app server container, which builds its image from the Dockerfile in the ../my-app-server folder. Note the dependency of the app server on the db server, specified in the ‘depends_on’ section of the app server.

To bring up both the containers, run –

$ docker-compose up

To run docker-compose in detached mode, use the -d option. However, you may want to use non-detached mode to see the output messages.

To stop all the containers started from docker-compose.yml, press CTRL+C if docker-compose is running in the foreground, or run –

$ docker-compose down

-Ram Kulkarni

Update: I had run into a networking issue with Docker containers on Linux – the containers created did not have access to the internet. After struggling to find a solution, I realized that the issue could be with DNS lookup. I am not sure if this is a generic problem, but in my setup the DNS servers were not set properly in the containers. So I had to specify DNS servers (found from the host network settings) when creating containers, either in docker-compose.yml (see this) or when executing the ‘docker run’ command (see this).
