In a previous blog post, I wrote about how to set up Docker and Docker Compose. Now that we have our environment set up, let’s learn how to start running Docker containers. There are Docker containers for almost everything these days and really, they are just a Google search away. In most cases, this will either take you to a GitHub repository or to the Docker Hub. Both are great resources for deploying new Docker containers. We installed Docker Compose so that’s what I will primarily be focusing on since it makes deploying Docker containers very simple.
There are a few things we should have in place before we start deploying Docker containers. First, some housekeeping. We all know you shouldn’t run everything as root since this is a security risk, so hopefully you have created a non-root user and granted it rights to use the sudo command. If you haven’t, check out my article on “best practices” for setting up new Linux systems. With that out of the way, if we don’t want to type sudo in front of every Docker command, we should also add our user to the docker group.
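Assuming your distribution’s Docker package already created the docker group (most do), adding your user to it looks something like this:

```shell
# Add the current user to the existing docker group.
sudo usermod -aG docker "$USER"

# Group membership is read at login, so log out and back in,
# or start a new shell with the group already applied:
newgrp docker

# Verify: this should now run without sudo.
docker ps
```

These commands need root and a running Docker daemon, so run them on the host itself.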
Since we will be using Docker Compose, let’s create a new directory where we can store all our Docker containers. I like to create this directory in the top-level directory since it makes the most sense if multiple users will be accessing it. With the directory created, we will want to change the group on the directory to docker so that any user in the docker group can access it and make changes. To change the group recursively, type sudo chgrp -R docker your-directory and hit enter. We also need to change the permissions on this directory. To do that, type sudo chmod -R 775 your-directory and hit enter. This gives the owner and group full permissions and everyone else read and execute rights. With that done, we can start creating directories inside our Docker directory to store our containers, using the same commands to fix the group and permissions on each of them.
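As a sketch, with /docker as an example location for the shared directory and nginx as the first project folder:

```shell
# Create the shared directory plus one subdirectory per container.
sudo mkdir -p /docker/nginx

# Hand the tree to the docker group and open it up to group members:
# 775 = owner rwx, group rwx, everyone else r-x.
sudo chgrp -R docker /docker
sudo chmod -R 775 /docker

# Confirm the group and mode (expect group "docker", drwxrwxr-x).
ls -ld /docker /docker/nginx
```

The /docker path is only an illustration; any path works as long as every user who needs it is in the docker group.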
One thing I learned early when creating multiple Docker containers is that, by default, each Compose project creates its own network. This can be great for separation between containers, but it gets messy and eats up address space if you spin up a lot of them. I recommend creating one virtual network that all your Docker containers can join. To do this, type docker network create --driver=bridge --subnet=192.168.0.0/24 network-name and hit enter. This creates a new bridge network with a /24 subnet (254 usable addresses); change the subnet mask to fit your needs. Now when you configure your containers, you can join them to this network and they won’t build their own.
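To sanity-check the result (network-name and the subnet are placeholders):

```shell
# Create the shared bridge network with an example subnet.
docker network create --driver=bridge --subnet=192.168.0.0/24 network-name

# List all networks, then inspect ours to confirm the driver and subnet.
docker network ls
docker network inspect network-name
```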
With most of our setup out of the way, let’s build our first Docker container. In this example, we’ll be spinning up Nginx. Check out Docker Hub for the current documentation on deploying Nginx. Let’s point out a few things on that page. Scrolling down, we will find a section about a docker-compose.yml file. These YAML files are what Docker Compose uses to build and run the container. In a lot of cases, a YAML file is provided and you simply need to make some adjustments to make it work in your environment.
Let’s take a moment to talk about port configuration and volume setup. Any service in Docker that needs to be reached from outside its container has to publish a port. For Nginx, this is port 80 for HTTP. In the example code, they are mapping port 8080 on the host to port 80 in the container. This is written as 8080:80 in the file: the number before the colon is the port used on the host, and the number after it is the port used inside the container. Note that you could run multiple instances of Nginx in separate containers, each mapping a different host port while still using port 80 inside the container.
Volumes are written in the same format: the path before the colon is on the host side and the path after the colon is on the container side. Volumes give you a way to persist a container’s data and make it accessible from the host. In the example file, ./templates on the host is mapped to /etc/nginx/templates inside the container.
Let’s also take a quick moment to talk about variables. These will be different from one container to another so the best place to look for information is the image’s documentation usually on Docker Hub or GitHub.
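Pulling the ports, volumes, and environment pieces together, a minimal docker-compose.yml for Nginx along the lines of the Docker Hub example might look like this (the environment values are placeholders; check the image’s documentation for the variables it actually supports):

```yaml
version: "3"
services:
  web:
    image: nginx
    ports:
      - "8080:80"                          # host port : container port
    volumes:
      - ./templates:/etc/nginx/templates   # host path : container path
    environment:
      - NGINX_HOST=example.com             # placeholder value
      - NGINX_PORT=80
```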
The last thing I will cover is getting your Docker containers connected to the virtual network we created earlier and spinning them up and down. In your docker-compose.yml file, at the end, you will want to add the following…
networks:
  default:
    external:
      name: networkname
Make sure you format it just like it’s shown above or you will get an error when trying to start your container; YAML files are very particular about spacing. Add the same block to any containers you want to run on the same virtual network. Save your docker-compose.yml file and type docker-compose up -d to start your container. To stop it, type docker-compose down. It’s important to note that anytime you change the docker-compose.yml file, you will need to stop and restart the container for the change to take effect. It’s also important to note that you need to be in the same directory as the docker-compose.yml file you want to start or stop.
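The day-to-day lifecycle, run from the directory that holds the docker-compose.yml file (the path below is just an example), looks roughly like this:

```shell
cd /docker/nginx

# Create and start the container in the background.
docker-compose up -d

# Check that it is running.
docker-compose ps

# After editing docker-compose.yml: stop and remove the container,
# then recreate it with the new settings.
docker-compose down
docker-compose up -d
```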
These are some of the basics of getting started with Docker and Docker Compose. For more information, you can check out Docker’s resources.