Creating Ubuntu containers with libraries (Page 3)

04 Aug 2018
The goal of this section is to demonstrate by example how to set up Dockerfiles for the backend of deploying your code and projects on Docker hub: the OS, the required packages, and specialized libraries that may need to be imported via `git` and built via `cmake`. On this page we'll do both with the OpenCV library. All of the examples are available as public Github and Docker repositories, and the links are provided near the examples.
A side note – I have been using OpenCV since the early days (v1.0), when it was supported by Intel. IplImages, anyone?
- C++ Program
- Ubuntu Images
Getting started, principles of layering
The basic idea, as hinted at on the previous page, is to create an image that imports from another image, using the `FROM` statement. Conveniently, Ubuntu has the different versions of the OS on Docker, so we can pull one directly to the local machine (`docker pull ubuntu:16.04`), or include it in a Dockerfile in the form of `FROM ubuntu:16.04`. I'm using Ubuntu 16.04, but of course you can apply this to any other version you wish.
The idea in building a succession of images, in which each image inherits from the previous one, is 1) reuse and 2) ease of troubleshooting. The best practices page lays some of these principles out. They are:
- Keep images as small as possible
- Each line is a layer, make it as small as possible
- Build a succession of images for maximum reuse.
Keep images as small as possible
The idea here is to only install packages and libraries that are necessary for the goal applications. Everything you install results in additional bytes/MBs, which means a larger image and longer download time for a user. So go for the bare minimum, unlike on your personal machine, where I at least tend to install my usual suite because I know I'll need it eventually.
Each line is a layer, make it as small as possible
Technically you can string together many statements using `&&`, with the `\` character at the end of each line. You'll want to consider how this may affect layer size. The exception is `apt-get update && apt-get install -y`, which should always be combined into one statement (see the best practices page for the details).
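To see the shell mechanics behind a multi-line `RUN` statement, here is a small standalone sketch (the directory name is just an example): `&&` runs the next command only if the previous one succeeded, and a trailing `\` continues the same command onto the next line, exactly as in the Dockerfiles below.

```shell
# '&&' chains commands so each runs only if the previous one succeeded;
# a trailing '\' continues the command onto the next line.
# This mirrors how a single RUN line in a Dockerfile is written.
mkdir -p /tmp/layer_demo && \
    touch /tmp/layer_demo/marker && \
    ls /tmp/layer_demo
```

In a Dockerfile, the whole chain executes as one `RUN` statement and therefore produces one layer.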
Build a succession of images for maximum reuse
The idea is that instead of creating one huge image for one application, we can create a succession of images. Then, for a range of applications, we can combine and reuse them. The added upside is that, like iterative testing of C++ programs, it is easier to find where the errors are when you use this approach, and I will illustrate this.
Ubuntu 16.04 image with build-essential
This example is very straightforward: I import the `ubuntu:16.04` image, and then install the `build-essential` packages, along with all of the libraries that I know are going to be needed for OpenCV (which I use in almost every project) and OpenMP. Here's the Dockerfile:
```
FROM ubuntu:16.04

MAINTAINER Amy Tabb

RUN apt-get update && apt-get install -y \
    build-essential \
    cmake \
    git \
    libgtk2.0-dev \
    pkg-config \
    libavcodec-dev \
    libavformat-dev \
    libswscale-dev \
    libtbb2 \
    libtbb-dev \
    libjpeg-dev \
    libpng-dev \
    libtiff-dev \
    libjasper-dev \
    libeigen3-dev \
    liblapack-dev \
    libatlas-base-dev \
    libgomp1
```
While this is simple, you can also grab it from Github.
To do local testing, in the folder with this Dockerfile you would run:

```
$ docker build -t ubuntu-essentials .
```
Here, I gave this image the tag `ubuntu-essentials`. Like on page 2, you could also create an automated build for this repository; this one is here on the Docker Hub. Recall that we used this repository in the `FROM` statement in the first part of the tutorial.
You can also pull it down from Docker hub with `docker pull amytabb/docker_ubuntu16_essentials`, where it will have the tag `amytabb/docker_ubuntu16_essentials`.
Since I've been explicit with "you have to refer to the container with the right tag," I will dispense with it from now on. We're assuming the image has the tag `ubuntu-essentials`, and we can run it and play around.

```
$ docker run -it ubuntu-essentials bash
```

gives us a bash shell in the image, at location `/`. To get out of the image, type `exit`.
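While playing around in that shell, a quick sanity check is to confirm that the toolchain installed by `build-essential` and friends is actually present. These exact commands are my suggestion, not part of the original repositories:

```
# Inside the container's bash shell: check a few of the installed tools.
gcc --version
cmake --version
pkg-config --version
```

If any of these report "command not found", the corresponding package did not install, and you know which Dockerfile line to revisit.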
Ubuntu 16.04 image with OpenCV libraries
An important disclaimer, in 2020: this will not work on Docker hub with an automated build, given the size of the OpenCV libraries and some of the limitations imposed on free accounts on Docker hub. However, for smaller libraries it may work, and the principles in the tutorial hold for other libraries. You can also build locally on your machine and push to Docker hub, which is what I started doing in late 2019 to incorporate OpenCV 4.x libraries.
This next section will detail importing a library's source files using `git`, and then building the library using a script that contains `cmake`. Like the previous example, this one is also on Github and Docker hub as public repositories. After I go over the structure, I'll mention some handy debugging strategies.
This repository consists of a Dockerfile and a build script:
```
FROM amytabb/docker_ubuntu16_essentials

MAINTAINER Amy Tabb

RUN git clone https://github.com/opencv/opencv.git

COPY build_opencv.sh /build_opencv.sh

WORKDIR /opencv/build

RUN /bin/sh ./../../build_opencv.sh

WORKDIR /

# do not remove the opencv directory, because we will use this image for
# building the contrib module as well.
```
- `FROM` creates a layer from the `amytabb/docker_ubuntu16_essentials` image explained in the last section.
- `RUN git clone ...` uses git to copy the source files of the library we want into the image. This will result in a folder `/opencv` in the image.
- `COPY` copies the build script from the host to the image; the syntax is host -> image. Unless we specifically place the script in the image, it will not magically get there.
- `WORKDIR` changes the directory to `/opencv/build` in the image. There is no `build` directory yet, so `WORKDIR` creates it. I hope to write a post about `WORKDIR` in the next month; almost nothing is written about it.
- `RUN ...` executes the shell script, which will compile and install the library in the image [next block].
- `WORKDIR /` changes the directory back to the top of the directory structure. However we leave the state of the image, it will remain when we next inherit with `FROM`. Since I want to reuse images, I leave the `WORKDIR` in a standard place to make working with the images more predictable.
The shell script:
```
cmake -D CMAKE_BUILD_TYPE=Release -D CMAKE_INSTALL_PREFIX=/usr/local ..
make -j4
make install
ldconfig
```
OpenCV uses cmake to compile, link and install the library. This cmake command is about as bland as you can make it for OpenCV installation, and installing OpenCV can take a long time (even on a local machine). If you happen to be installing a library like OpenCV, I suggest hammering out the cmake arguments and shell script on your local machine, then shifting to the Dockerfile. For instance, in future projects with Docker I’ll need the OpenCV contrib module, which is why I don’t delete the OpenCV source files from the Dockerfile; I’ll need them when I rebuild the library with the additional module.
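As a sketch of where that contrib rebuild is headed (the paths here are assumptions on my part, not from the repositories above), the later build script would point `cmake` at the extra modules with OpenCV's `OPENCV_EXTRA_MODULES_PATH` option:

```
# Sketch: build OpenCV with the contrib modules, assuming the contrib
# repository has been cloned to /opencv_contrib alongside /opencv,
# and that this runs from /opencv/build as before.
cmake -D CMAKE_BUILD_TYPE=Release \
      -D CMAKE_INSTALL_PREFIX=/usr/local \
      -D OPENCV_EXTRA_MODULES_PATH=/opencv_contrib/modules \
      ..
make -j4
make install
ldconfig
```

This is why keeping the source directory around in the image pays off: only the `cmake` configuration changes, not the clone.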
By now, you're a pro, but I'll be explicit. To build locally, within a directory with both the Dockerfile and script:

```
$ docker build -t ubuntu_opencv .
```

And you can run it:

```
$ docker run -it ubuntu_opencv bash
```
What I'm really interested in here is whether the library installed correctly. So I `cd /usr/local/lib`, where, from the `cmake` arguments, I know the libraries should be. They are in the right spot!
Automated builds, strategies for troubleshooting
As I have alluded to in the previous pages, I have not had great luck using `docker push` to get large images to Docker hub; but automated builds have worked well using repositories on Github (or Bitbucket). Details are here.
In general, though, creating chains of images and using `docker run` to test the right commands helps greatly with sorting out issues when you are learning. For instance, I chained images until I figured out how to put together Dockerfiles, and tended to include only one or two lines in each file. Here's an example, using the Ubuntu OpenCV image as a case:
```
FROM amytabb/docker_ubuntu16_essentials

MAINTAINER Amy Tabb

RUN git clone https://github.com/opencv/opencv.git
```
`git clone` takes a while for this library and is unlikely to fail unless the internet connection dies. I didn't want to keep waiting for the cloning to complete, so I made it its own image. Then build:
```
$ docker build -t stage0 .
```
```
FROM stage0

MAINTAINER Amy Tabb

COPY build_opencv.sh /build_opencv.sh
```
You guessed it! Stage 1. It is perhaps obvious, but you will need different Dockerfiles in different directories if you want to keep track of what these all were.
```
$ docker build -t stage1 .
```
```
FROM stage1

MAINTAINER Amy Tabb

WORKDIR /opencv/build

RUN /bin/sh ./../../build_opencv.sh
```
`WORKDIR` is not likely to fail, so I put it in with running the shell script. If I got the `WORKDIR` wrong, the script would fail immediately.

```
$ docker build -t stage2 .
```
Suppose something's wrong and the script failed. I can go backwards and revise:
```
FROM stage1

MAINTAINER Amy Tabb

WORKDIR /opencv/build
```
Then build. All this image does now is set the current directory in the image. `WORKDIR` and `cd` are fundamentally different, so setting the current directory is difficult to troubleshoot in bash. Also, `cd` doesn't work with `RUN` statements the way you think it would; here's some discussion of the details.
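The `cd`-versus-`WORKDIR` point can be seen in a tiny hypothetical Dockerfile (not one of this tutorial's repositories): each `RUN` statement starts a fresh shell in a new layer, so a `cd` inside one `RUN` does not carry over to the next, while `WORKDIR` persists across statements.

```
# Hypothetical illustration only.
FROM ubuntu:16.04

# The cd below only affects this single RUN statement:
RUN cd /tmp && pwd        # this RUN's shell is in /tmp
RUN pwd                   # back in / -- the cd did not persist

# WORKDIR, by contrast, persists for every statement that follows:
WORKDIR /tmp
RUN pwd                   # now in /tmp
```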
```
$ docker build -t stage2 .
```
Now use `docker run` to troubleshoot where we are, and the script:

```
$ docker run -it stage2 bash
```

First of all: where are we? Are we in the expected directory (use `pwd`)? Then, try to run the script (`sh ./../../build_opencv.sh`) and see what happens.
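A troubleshooting session along those lines might look like this (the container hostname is illustrative, and yours will differ):

```
$ docker run -it stage2 bash
root@0a1b2c3d4e5f:/opencv/build# pwd
/opencv/build
root@0a1b2c3d4e5f:/opencv/build# ls ./../../build_opencv.sh
../../build_opencv.sh
root@0a1b2c3d4e5f:/opencv/build# sh ./../../build_opencv.sh
```

If the script fails here, you can edit and rerun it interactively inside the container until it works, then fold the fix back into the Dockerfile.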