Tutorial on Using Docker
What is Docker?
Docker is a platform for packaging and deploying applications inside containers. Containers are similar to virtual machines but have become more popular due to their flexibility and ease of use. Some key distinctions are explained below.
Image vs container
A container is created by running an image. An image is an executable package that includes everything needed to run an application–the code, a runtime, libraries, environment variables, and configuration files.
A container is a runtime instance of an image: what the image becomes in memory when executed (that is, an image with state, or a user process). You can see a list of your running containers with the command docker ps, much as you would list processes with ps on Linux.
Container vs virtual machine
A container runs natively on Linux and shares the kernel of the host machine with other containers. It runs a discrete process, taking no more memory than any other executable, making it lightweight.
By contrast, a virtual machine (VM) runs a full-blown “guest” operating system with virtual access to host resources through a hypervisor. In general, VMs provide an environment with more resources than most applications need.
Preparing Docker Environment
Installation
Install Docker CE (Community Edition) by following the instructions here: https://docs.docker.com/install/linux/docker-ce/ubuntu/
There are three ways to install Docker:
1. Using Docker's apt repositories ("sudo apt")
2. Manual installation from a .deb file, updating manually when needed
3. Using convenience scripts - mostly for dev/test environments that need automation
We go with the first option, "Install using the repository". Follow that section in the link above. Do not forget to go through the post-installation instructions here: https://docs.docker.com/install/linux/linux-postinstall/ These let you use docker without having to prefix every command with sudo.
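The post-installation step boils down to adding your user to the docker group. A minimal sketch of those commands, taken from the post-install guide linked above; since they require root, this helper only prints them instead of running them:

```shell
# Commands from the Docker post-install guide that let you run docker
# without sudo. They require root, so this sketch only prints them;
# copy and paste them (or pipe to sh) to actually apply.
post_install_cmds() {
  echo "sudo groupadd docker"
  echo "sudo usermod -aG docker \$USER"
  echo "newgrp docker"  # or log out and back in for the group change to apply
}
post_install_cmds
```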
Test Installation
Test for successful installation of Docker by executing following commands
docker --version
This should give an output as shown below
Docker version 19.03.1, build 74b1e89e8a
Now execute
docker run hello-world
This command downloads and runs a test image. This should produce the following output
latest: Pulling from library/hello-world
1b930d010525: Pull complete
Digest: sha256:451ce787d12369c5df2a32c85e5a03d52cbcef6eb3586dd03075f3034f10adcd
Status: Downloaded newer image for hello-world:latest
Hello from Docker! This message shows that your installation appears to be working correctly.
To generate this message, Docker took the following steps:
1. The Docker client contacted the Docker daemon.
2. The Docker daemon pulled the "hello-world" image from Docker Hub. (amd64)
3. The Docker daemon created a new container from that image, which runs the executable that produces the output you are currently reading.
4. The Docker daemon streamed that output to the Docker client, which sent it to your terminal.
You can also run the following, which gives more information about the Docker installation:
docker info
Containers: 0
 Running: 0
 Paused: 0
 Stopped: 0
Images: 0
Server Version: 17.12.0-ce
Storage Driver: overlay2
...
NVIDIA Docker
NVIDIA Docker makes it possible to run Docker images with the NVIDIA runtime, so we can create GPU-accelerated containers and run applications inside them. This avoids installing the CUDA/GPU driver inside the container and keeping it matched to the host kernel module; instead, the drivers live on the host and the containers do not need them. See https://www.nvidia.com/object/docker-container.html
Note: The latest version is nvidia-container-toolkit. You can install it and see its usage on the official GitHub page here: https://github.com/NVIDIA/nvidia-docker However, at the time of writing this document, nvidia-docker2 was being used. nvidia-docker2 can be upgraded as described on the GitHub page and supports the same CLI options as nvidia-container-toolkit.
A container can be run using the --runtime=nvidia option as follows:
docker run --runtime=nvidia nvidia/cuda:9.0-cudnn7-devel nvidia-smi
The above command runs the image nvidia/cuda with tag 9.0-cudnn7-devel from the nvidia repository and then executes "nvidia-smi" inside it. If the nvidia-docker installation is correct, it should print the NVIDIA driver details from inside the Docker container.
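As an aside, with the newer nvidia-container-toolkit mentioned above, the same smoke test is usually written with the --gpus flag instead of --runtime=nvidia. A small sketch that only assembles the command string (the image tag is the one used above; nothing is executed here):

```shell
# Builds the GPU smoke-test command. With nvidia-container-toolkit the
# --gpus flag replaces --runtime=nvidia; the image tag is just an example.
gpu_test_cmd() {
  local image="${1:-nvidia/cuda:9.0-cudnn7-devel}"
  echo "docker run --rm --gpus all ${image} nvidia-smi"
}
gpu_test_cmd
```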
Working With Docker
Working with an existing docker image
What are Docker images and where can I get them from?
Docker containers are created from Docker images. By default, images are pulled from Docker Hub. You can think of Docker Hub as something similar to GitHub, where code lives: it hosts Docker images along with their version history as tags. Docker Hub is managed by Docker, Inc. Anybody can build and host their Docker images on Docker Hub, so most of the common applications and Linux distributions you'll need to run Docker containers have images hosted there.
For example, in the previous section "NVIDIA Docker" we ran
docker run --runtime=nvidia nvidia/cuda:9.0-cudnn7-devel nvidia-smi
which pulled an image from the nvidia/cuda repository with the tag 9.0-cudnn7-devel.
Search for Images
We can search for available images on Docker Hub using the search subcommand. For example:
docker search ubuntu
This searches Docker Hub and lists all images whose name matches the search string "ubuntu".
docker run
docker run ubuntu
Unable to find image 'ubuntu:latest' locally
latest: Pulling from library/ubuntu
35c102085707: Pull complete
251f5509d51d: Pull complete
8e829fe70a46: Pull complete
6001e1789921: Pull complete
Digest: sha256:d1d454df0f579c6be4d8161d227462d69e163a8ff9d20a847533989cf0c94d90
Status: Downloaded newer image for ubuntu:latest
When this command is executed, Docker first looks for an image named ubuntu locally. When it does not find the image, it pulls it from Docker Hub. Whenever we do not specify a tag, "latest" is pulled by default.
docker run -it ubuntu
root@cf3f32ce0c33:/#
The -it option gives us access to the running container via an interactive terminal, as shown above. Here "cf3f32ce0c33" is the container ID, and by default the user is root.
Containers are mainly designed to run an application, so docker run normally runs a command and then exits the container. Using the -it option with docker run starts the container and keeps it active with an interactive shell.
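Beyond -it, a few other docker run options come up constantly. The sketch below only prints example invocations (the container and image names are placeholders) rather than running anything:

```shell
# Frequently used docker run options (names here are placeholders):
#   --name  give the container a fixed name instead of a random one
#   --rm    remove the container automatically when it exits
#   -d      run detached, in the background
run_examples() {
  echo "docker run -it --name my_ubuntu ubuntu"
  echo "docker run --rm -it ubuntu"
  echo "docker run -d ubuntu sleep infinity"
}
run_examples
```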
docker pull
docker pull ubuntu
This command only downloads the image without running it. By default, all images are stored under /var/lib/docker.
Docker Cheat Sheet
docker start/stop
Start a stopped container using docker start <containerID/name>. Similarly, stop a container using docker stop <containerID/name>.

List images

docker images

List containers

docker ps lists only active containers
docker ps -a lists all the containers
CONTAINER ID   IMAGE                                 COMMAND       CREATED         STATUS                     PORTS   NAMES
cf3f32ce0c33   ubuntu                                "/bin/bash"   4 minutes ago   Exited (0) 2 minutes ago           kind_matsumoto
b0c54578020b   ubuntu                                "/bin/bash"   7 minutes ago   Exited (0) 7 minutes ago           hopeful_sanderson
6cc5fda09e89   hello-world                           "/hello"      2 hours ago     Exited (0) 2 hours ago             pensive_shaw
71a3b955c10a   nvidia/cudagl:9.0-devel-ubuntu16.04   "/bin/bash"   4 weeks ago     Exited (0) 4 weeks ago             sweet_greider
b8add4a38631   nvidia/cuda:9.0-devel                 "/bin/bash"   4 weeks ago     Exited (0) 4 weeks ago             sweet_wu
remove container
docker rm <containerID/name>
remove docker image
docker rmi <imageID/name>
docker exec -it <container_id> /bin/bash → opens a terminal in a running container
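A common cheat-sheet chore is cleaning up stopped containers and dangling images. A sketch that prints the cleanup commands instead of executing them (docker itself is not invoked here, so nothing is removed by accident):

```shell
# Cleanup one-liners; printed rather than executed. At run time,
# `docker ps -aq --filter status=exited` expands to the IDs of all
# exited containers.
cleanup_cmds() {
  echo 'docker rm $(docker ps -aq --filter status=exited)'  # remove exited containers
  echo 'docker image prune'                                 # remove dangling images
}
cleanup_cmds
```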
Setup a Docker Image
Docker is not just useful for running a single command as shown before; it is much more powerful than that. In this section we will see how to make our own Docker image with the required runtime for our application, how to make incremental changes to it, and how to push the image to Docker Hub.
Let's start with an example. We first run the latest ubuntu image with -it, which gives access to an interactive shell:

docker run -it ubuntu
root@9b0db8a30ad1:/#
Now this is our shell inside the container. We can execute shell commands just like on an Ubuntu machine, for example:
apt-get update apt-get install
We don't have to prefix any command with sudo, because we operate inside the container as root. We can now install or remove all the necessary packages, and develop and build software. We can stop and start the same container whenever we need to and keep modifying it. We can exit the container with the exit command, similar to exiting a remote session. But once we remove the container using docker rm, the container and its changes are lost.
Let's say we have installed a couple of packages after pulling the latest ubuntu image. This container is now running something different from the image we pulled in the first place. We might want to make further changes later, or reuse this state as a base for other images. We can commit the changes as a new image and push it to Docker Hub:
docker commit -m "message for recording the changes" -a "Author Name" container-id repository/new_image_name:tag
When we commit an image, the new image is saved locally on the computer. If we execute docker images now, we should see the additional image in the list:
REPOSITORY                  TAG      IMAGE ID       CREATED         SIZE
repository/new_image_name   tag      6a1784a63edf   2 minutes ago   170MB
ubuntu                      latest   ea4c82dcd15a   17 hours ago    85.8MB
In the above output, the image new_image_name was derived from the existing ubuntu image from Docker Hub, and the size difference reflects the changes that were made. The next time we need a container running Ubuntu with these packages pre-installed, we can just use this new image.
We can now share this image so others can create containers from it, by pushing the committed image to Docker Hub or any other Docker registry. To push to Docker Hub, we must first create an account at https://hub.docker.com.
First, log in to Docker Hub:
docker login -u docker-registry-username
We'll be prompted to authenticate with our Docker Hub password.
docker push docker-registry-username/new_image_name
Pushing might take a little while, since the image is uploaded. After pushing, we can see the new image in the Docker Hub repository.
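The commit-and-push workflow above can be collected into one helper. This sketch only prints the sequence of commands; the container ID and the repository/name:tag passed in below are placeholders for your own:

```shell
# Prints the commit-and-push sequence for a modified container.
# $1 = container ID, $2 = repository/name:tag (both placeholders here).
commit_and_push_cmds() {
  local container="${1:?container id required}"
  local ref="${2:?repository/name:tag required}"
  echo "docker commit -m 'describe the changes' -a 'Author Name' ${container} ${ref}"
  echo "docker login -u ${ref%%/*}"   # username is the part before the slash
  echo "docker push ${ref}"
}
commit_and_push_cmds cf3f32ce0c33 myuser/myimage:v1
```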
(Screenshot: the neufieldrobotics Docker Hub account showing the light-fields image. This image is built on top of the nvidia/cudagl:9.0-devel-ubuntu16.04 image with all the light-field software and dependencies installed.)
Docker with GUI
Some applications running inside Docker require a graphical user interface.
We can connect the container's display to the host's X server. The simplest way is to expose your xhost so that the container can render to the correct display by reading and writing through the X11 unix socket.
sudo docker run --privileged -it -e "DISPLAY=unix:0.0" -e "QT_X11_NO_MITSHM=1" -v="/tmp/.X11-unix:/tmp/.X11-unix:rw" --runtime=nvidia nvidia/cudagl:9.0-devel-ubuntu16.04 /bin/bash
Here we made the container's process interactive, forwarded our DISPLAY environment variable, and mounted a volume for the X11 unix socket. If this fails with the error shown below:
No protocol specified cannot connect to X server unix:0
We can adjust the permissions of the X server on the host. This is not safe, as it compromises the access control to the X server: with a little effort, someone could display something on your screen, capture user input, and more easily exploit other vulnerabilities that might exist in X.
xhost +local:root  # if we don't worry about the security
xhost -local:root  # returns the access controls once we are done using the containerized GUI
There are multiple other ways to access the display. For full tutorials check out this page: http://wiki.ros.org/docker/Tutorials/GUI
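The xhost dance plus the long docker run line is easy to get wrong, so it can help to wrap the sequence in a script. A sketch that only prints what it would run (the image name and options are copied from the example above):

```shell
# Prints the GUI-container launch sequence: open X access for local root,
# run the container with the X11 socket mounted, then restore access.
gui_launch_cmds() {
  local image="${1:-nvidia/cudagl:9.0-devel-ubuntu16.04}"
  echo "xhost +local:root"
  echo "docker run --privileged -it -e DISPLAY=\$DISPLAY -e QT_X11_NO_MITSHM=1 -v /tmp/.X11-unix:/tmp/.X11-unix:rw --runtime=nvidia ${image} /bin/bash"
  echo "xhost -local:root"
}
gui_launch_cmds
```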
Accessing USB devices
One option is to use the --device option as shown below, without giving access to all USB devices:
docker run -it --device=/dev/ttyUSB0 ubuntu /bin/bash
The problem with this method is that it doesn't support USB devices plugged in dynamically after the container has started.
docker run --privileged -it -e "DISPLAY=unix:0.0" -v="/tmp/.X11-unix:/tmp/.X11-unix:rw" -v /dev:/dev --runtime=nvidia neufieldrobotics/lightfields /bin/bash
The volumes flag is required if we want this to work with devices connected after the container is started; in that case we mount /dev of the host to /dev of the container using the -v option. Note that this is unsafe, as it maps all the devices from your host into the container.
Note: Running a container with the --privileged flag gives all capabilities to the container and also access to the host's devices (everything under the /dev folder).
Setting up Jupyter notebook server
Jupyter notebook starts a server on port 8888 by default. The output looks like this:
$ jupyter notebook
[I 22:59:06.534 NotebookApp] Serving notebooks from local directory: /home/auv
[I 22:59:06.534 NotebookApp] The Jupyter Notebook is running at:
[I 22:59:06.534 NotebookApp] http://localhost:8888/?token=62c8c6fe166e9b8f5c3c17b4ab1981dc14700e3ab17935f5
[I 22:59:06.534 NotebookApp] Use Control-C to stop this server and shut down all kernels (twice to skip confirmation).
[C 22:59:06.537 NotebookApp]
    To access the notebook, open this file in a browser:
        file:///home/auv/.local/share/jupyter/runtime/nbserver-13456-open.html
    Or copy and paste one of these URLs:
        http://localhost:8888/?token=62c8c6fe166e9b8f5c3c17b4ab1981dc14700e3ab17935f5
We can run a Jupyter server from inside a Docker container by publishing the port with the -p option:
sudo docker run --privileged -it -e "DISPLAY=unix:0.0" -v="/tmp/.X11-unix:/tmp/.X11-unix:rw" -v /dev:/dev -p "8888:8888" --runtime=nvidia neufieldrobotics/lightfields /bin/bash
This is how the final docker run command looks with all the options explained above.
To run Jupyter inside the container:
jupyter notebook --allow-root --ip=0.0.0.0 --port=8888

Then go to http://0.0.0.0:8888 in a browser on the host and enter the token to see the notebook.
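The port-mapping step can be sketched with a small helper that assembles the full run command for a given host port (the image name is the one used above, the port argument is just an example; the helper only prints the command):

```shell
# Assembles a docker run command that maps a chosen host port to the
# container's 8888 and launches Jupyter inside. Nothing is executed;
# the image and port here are placeholders.
jupyter_run_cmd() {
  local host_port="${1:-8888}"
  local image="${2:-neufieldrobotics/lightfields}"
  echo "docker run -it -p ${host_port}:8888 ${image} jupyter notebook --allow-root --ip=0.0.0.0 --port=8888"
}
jupyter_run_cmd 9999
```

With a mapping like 9999:8888, the notebook would be reached on the host at port 9999 even though Jupyter listens on 8888 inside the container.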