Blog: How Tos

Docker

Joe Durbin 26 Jun 2017

<headline image>

For this blog post I’ve spun up an Ubuntu Linux 18.04 LTS server and opted to install Docker at install time.

I’ve tarted it up with a desktop environment and made sure everything is up to date, but other than that it’s a ‘clean’ install.

I’ve approached this from the viewpoint of someone who’s never played with Docker but fancies having a delve into the world of containerisation, and what it could mean from a security perspective.

This will be followed up with another post explaining how and why this could be useful in a pentest engagement.

For clarity, my local machine is called ‘goshawk’. Keep that in mind as you read the blog, as commands will be run both on my local machine and within containers.

Once we’re up and running we can check Docker is installed, check the version, and see that our local image repository is empty:

IMAGE1
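
If you’re following along, the commands in that screenshot are along the lines of:

$ docker --version
$ docker images

An empty list from the second command confirms the local repository is bare.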

Let’s find and pull an image from Docker Hub (the default registry) that has the latest version of Alpine Linux to play with:

IMAGE2
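
The search itself is a one-liner:

$ docker search alpine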

The first hit is the official Alpine image, as can be seen from the ‘OFFICIAL’ column. This is a minimal image (5.5MB) which we can play with.

IMAGE3
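
Pulling the image down looks something like:

$ docker pull alpine

Re-running ‘docker images’ should now show alpine in the local repository.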

At this point, we still aren’t running any containers. We’ve just downloaded a tiny image which we can then start playing with. It is possible to download and execute in one command if you’re feeling brave.

We can run a shell in an instance of alpine using the following command:

docker run -it <image> <command>, where -i makes the process ‘[i]nteractive’ (keeps STDIN open) and -t attaches a ‘[t]erminal’ (a pseudo-TTY)

IMAGE4
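
With the alpine image, that works out as:

$ docker run -it alpine /bin/sh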

SECURITY CONSIDERATION #1: Running as root

We can see we are dropped into a root shell. As containers tend to be kept as small as possible, multi-user environments aren’t standard, and therefore many containers run everything as root.

Over in a different window we can query Docker to see what’s running:

IMAGE5
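
That query is simply:

$ docker ps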

This instance has been given the random name of mystifying_raman which is pretty cool.

We can see from the previous commands that this container has a network stack and an IP address of 172.17.0.2/16

This puts it on the same network as the docker0 interface on my local machine, which is configured as below:

IMAGE6
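
On goshawk that’s a standard interface check (Docker’s default addressing shown; yours may differ):

$ ip addr show docker0

which, by default, sits on the gateway address of 172.17.0.1/16.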

This is because Docker has a few default networks, and the default used by new containers is the ‘bridge’ network, over which they can communicate with the host. We can see the networks that Docker has by default by issuing the following command:

IMAGE7
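
That command is:

$ docker network ls

You should see ‘bridge’, ‘host’ and ‘none’ listed.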

We can drill down further into the network config with the ‘inspect’ command:

IMAGE8
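
For the default bridge network, that’s:

$ docker network inspect bridge

The JSON output includes the subnet, the gateway and any containers currently attached.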

We can confirm that the host and container can communicate with a couple of ping commands:

IMAGE9

IMAGE10
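
Nothing fancy here. In the snippets below (and for the rest of the post), ‘$’ is the host and ‘#’ is a container’s root shell, with the addresses as per my setup:

$ ping -c 3 172.17.0.2
# ping -c 3 172.17.0.1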

SECURITY CONSIDERATION #2: Networking

By default, all containers on the bridge network can see each other and the host over the docker0 network interface.

Back in the alpine container, let’s investigate file persistence and what happens when we exit a container.

We’ll create a file, exit the container, and see what happens:

IMAGE11

IMAGE12
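
Something like the following, with an arbitrary file name of my choosing:

# touch /testfile
# exit
$ docker ps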

As we can see, no containers are running.

Let’s run the exact command we ran before to get another shell:

IMAGE13

Our file has gone! That’s because this is a new container. We can see this with another ps command:

IMAGE14

We are now in vibrant_visvesvaraya

Appending the --all flag to the ps command shows us that the previous container hasn’t gone anywhere; it’s just in an ‘Exited’ state:

IMAGE15
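
That is:

$ docker ps --all

(-a is the accepted short form.)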

Unless we explicitly remove the container, it will continue to exist.

We can start up the previous container and get back to our file:

IMAGE16
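
One way to do that is to start it and re-attach in a single command:

$ docker start -ai mystifying_raman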

As we had exited it, its IP address was released and consequently given to vibrant_visvesvaraya; mystifying_raman now has the next available IP of 172.17.0.3/16:

IMAGE17

Again, we can ping between the Docker containers and the host as they are all on the ‘bridge’ network.

So that’s a big security consideration: if a vulnerable app in a Docker container is compromised, the other containers are at risk.

Let’s isolate one by bringing it up, but specifying the network as ‘none’:

IMAGE18
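
Spinning one up on the ‘none’ network looks like this:

$ docker run -it --network none alpine /bin/sh

Inside, only the loopback interface is present.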

That’s better. This container has no networking and therefore can’t communicate with hosts over the network. There are other means by which a container can interact with other containers (for instance when the docker.sock is shared from the host to the guest) but we won’t go into that now.

So, let’s look at doing something useful with Docker. Firstly, I’m going to clean up a bit. I’ve exited all the containers and issued a fairly impolite command to remove all containers:

IMAGE19
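
The impolite one-liner in question is something along these lines:

$ docker rm $(docker ps -aq)

which feeds every container ID, running or stopped, straight into docker rm.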

We could have achieved the same thing by manually deleting each one with a ‘docker rm <containerid>’ command.

We still have our alpine image in our local repo, but no containers running.

Let’s pull down a slightly more feature-rich image. I’ve gone for ubuntu (for a bit of ubuntu-ception):

IMAGE20
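
Same drill as before:

$ docker pull ubuntu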

We can see this is 70MB rather than the 5.5MB Alpine image. It also has bash, which is a bit nicer to work in.

IMAGE21
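
Getting a shell is the same run pattern with a different image and shell:

$ docker run -it ubuntu /bin/bash

Trying ‘ip addr’ at this point fails, as the base image doesn’t ship the tool.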

Hmm, no ‘ip’ tool. Let’s install it. Issue an ‘apt-get update’, then install iproute2:

IMAGE22

IMAGE23
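
Both steps run inside the container:

# apt-get update
# apt-get install -y iproute2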

We now have ip:

IMAGE24

Let’s stick nmap on there too:

IMAGE25

IMAGE26
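
Again, a single package away:

# apt-get install -y nmap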

We can now exit the container and start it back up (without attaching to it, so it runs as a background process):

IMAGE27

IMAGE28
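
That’s:

# exit
$ docker start <container-name>

Substitute whatever random name your container was given; ‘docker ps -a’ will remind you.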

Now let’s execute an nmap command from the host (goshawk) which will run inside the container:

IMAGE29
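
The pattern is ‘docker exec <container> <command>’. As an illustration (the target range here is just the bridge subnet):

$ docker exec <container-name> nmap -sn 172.17.0.0/24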

So this is useful. But what about storing results from the container on the host?

We’ll create a new container with a local directory mapped in using the -v switch, which takes the local directory and maps it to a directory in the container (the two separated by a colon):

IMAGE30
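
With an illustrative host path, that looks like:

$ docker run -it -v ~/nmap-output:/nmap-output ubuntu /bin/bash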

We can see our nmap-output directory. Let’s test it out by creating a file in that directory, then exiting the container and deleting it:

IMAGE31
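
Roughly (results.txt is just a placeholder name):

# touch /nmap-output/results.txt
# exit
$ docker rm <container-id>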

After using the rm command, that container is gone, never to be revived. RIP.

Now let’s check on our file on the host:

IMAGE32
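
Back on goshawk:

$ ls ~/nmap-output/

The file survives because it lives on the host’s filesystem; the bind mount simply exposed it to the container.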

Hooray, we can keep results from commands. What have we learned?

We’ve downloaded container images.

We’ve spun them up and connected to them.

We’ve worked out where they go when you stop them, and how to bring them back up.

We’ve also customised a container with persistent storage.

In the next post we’ll talk about how to automate this, and how we can get Kali up and running in a container, which can be useful for pentesting Docker environments.