Install on Debian
Install the latest version on Debian
https://docs.docker.com/engine/installation/linux/ubuntulinux/
Windows
Boot2Docker
https://github.com/boot2docker/windows-installer/releases
https://github.com/boot2docker/osx-installer/releases
The Docker daemon does not run natively on Windows and OS X; using Boot2Docker, which installs a VM in VirtualBox, Docker is set up on your machine.
For Windows there is also the option to install msysGit, an alternative to the Windows cmd terminal.
To run docker commands without the sudo command, your user must be added to the docker group:
this also makes tab completion of docker commands possible.
On some distributions the docker group does not exist; in that case create it first (see the commands sketched below)
Be aware that users in the docker group effectively have root access.
log out and log back in for the change to take effect
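A minimal sketch of the usual commands for this (assuming your login name is in $USER):
sudo groupadd docker # only needed when the docker group does not exist yet
sudo usermod -aG docker $USER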
images are specified by repository:tag
docker run [options] [image] [command] [args]
the image is referred to as repository:tag,
docker run ubuntu:14.04 echo "Hello World"
Find your container
docker ps to list running containers
-a flag to list all containers
Container with Terminal
Use the -i and -t flags with docker run
the -i flag tells docker to connect to the container's STDIN,
the -t flag allocates a pseudo-terminal
You have to pass the terminal process as the command (bash)
docker run -it ubuntu:latest bash
Exit the Terminal
Type exit to close the terminal and return to the host terminal; this stops the container
To leave the terminal without stopping the container, press Ctrl+P followed by Ctrl+Q
Docker ps command
docker ps -q # show only the container IDs
docker ps -l # show the last container that was started
docker ps -aq # all containers with only their short ID
docker ps -lq # list the short ID of the last started container
docker ps -a --filter "exited=1" # applies a filter on exit code 1 (exit with error); currently you can filter on exit code and status (restarting, running, exited and paused)
Running in detached mode, also referred to as running in the background or as a daemon
-d flag,
to view the output use:
docker logs [container id]
create a centos container and run the ping command 50 times
docker run -d centos:7 ping 127.0.0.1 -c 50
The -P flag maps the container's exposed ports to random available ports on the host
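A minimal sketch (using the nginx image, which exposes port 80):
docker run -d -P nginx
docker port [container id] # shows which host ports were mapped to the exposed container ports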
attach and detach
here you run the risk of stopping the container by accidentally pressing Ctrl+C
example:
docker run -d ubuntu ping 127.0.0.1 -c 50
docker attach [container id]
or name; detach from the container with Ctrl+P followed by Ctrl+Q
docker exec
the command is used to start additional processes inside a running container
docker exec -i -t [container id] bash
when you exit this terminal (bash), the container keeps running.
Viewing docker logs
docker logs [container name]
-f option works like tail -f
docker logs --tail 5 -f containerid
Ctrl+C to exit
Stopping a container
docker stop sends a SIGTERM to the main container process; the process then receives a SIGKILL after a grace period; the grace period can be specified with the -t flag
docker kill sends a SIGKILL immediately to the main container process
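For example (the container id abc123 is only illustrative):
docker stop -t 30 abc123 # give the main process 30 seconds before the SIGKILL
docker kill abc123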
Restart a container
docker start restarts a container that has been stopped; the container starts with the same options and command specified previously.
Can attach to the container with the -a flag
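For example (assuming a container named mycontainer):
docker start -a mycontainer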
Formatting docker inspect output
docker inspect --format='{{.Field.Subfield}}' [container id] # general form
docker inspect --format='{{.Config.Cmd}}' [container id]
Deleting containers
Delete all containers that are stopped
Use docker ps -aq to list the IDs of all containers and feed the output to docker rm
docker rm $(docker ps -aq)
List all stopped containers
docker ps --filter "status=exited"
delete the latest container that was run
docker rm $(docker ps -ql)
Comparing containers with docker diff
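The command lists files that were added (A), changed (C) or deleted (D) compared to the image, for example:
docker diff [container id]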
Methods for building images
1. commit changes from a container as a new image
allows you to build images interactively
get terminal access inside a container and install the necessary programs and your application
then save the container as a new image using the docker commit command
docker commit [options] [container id] [repository:tag]
docker commit 984n5843j594857398c jaccokip/myapplication:1.0
2. build from Dockerfile
create a Dockerfile in a new folder or in an existing application folder
Write the instructions to build the image (what program to install, what base image to use, what command to run)
Build examples
# comment in a build file
FROM ubuntu:14.04 or FROM johnytu/myapplication:1.0 or FROM company.registry:5000/myapplication:1.0
RUN apt-get install vim
RUN apt-get install curl
run the docker build command to build the image from the Dockerfile
docker build -t johnytu/myimage:1.0 .
# Build an image using the current folder as context path.
docker build -t johnytu/myimage:1.0 myproject
# as above but using the myproject folder as context path
3. Import a tarball into Docker as a standalone base layer
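A minimal sketch of this method (the tarball name rootfs.tar and the repository name are only examples):
cat rootfs.tar | docker import - myrepo/mybase:1.0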
Docker uses the exact strings in your Dockerfile to compare with the cache; simply changing the order of instructions will invalidate the cache. To disable the cache manually use:
--no-cache flag
docker build --no-cache -t myimage .
Run instruction aggregation
Can aggregate multiple RUN instructions by using &&
Commands will all be run in the same container and committed as a new image at the end
Reduces the number of image layers that are produced.
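A minimal sketch of the difference (package names reused from the examples above):
# two layers:
RUN apt-get install -y vim
RUN apt-get install -y curl
# one layer:
RUN apt-get install -y vim && apt-get install -y curl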
Docker history
command shows us the layers that make up an image
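For example:
docker history ubuntu:14.04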
CMD instruction
CMD defines a default command to execute when a container is created
Shell format and EXEC format
Can only be specified once in a Dockerfile, if specified multiple times the last CMD instruction is executed
Can be overridden at run time
Shell format
CMD ping 127.0.0.1 -c 30
Exec format
CMD ["ping", "127.0.0.1", "-c", "30"]
ENTRYPOINT instruction
Defines the command that will be run when a container is executed
Run time arguments and CMD instructions are passed as parameters to the ENTRYPOINT instruction
Container essentially runs as an executable
shell vs exec format
In shell form, the command will run inside a shell with /bin/sh -c
Exec format allows execution of command in images that don't have /bin/sh
RUN ["apt-get", "update"]
Shell form is easier to write and you can perform shell parsing of variables
CMD sudo -u $USER java ...
Exec form does not require image to have a shell
For ENTRYPOINT instructions, using shell form will prevent the ability to specify arguments at run time
- The CMD argument will not be used as parameters for ENTRYPOINT
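A minimal sketch of the exec-form combination, reusing the ping example from earlier (the run-time arguments shown are only illustrative):
ENTRYPOINT ["ping"]
CMD ["127.0.0.1", "-c", "30"]
# docker run myimage -> runs ping 127.0.0.1 -c 30
# docker run myimage 8.8.8.8 -c 5 -> the CMD defaults are replaced by the run-time arguments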
Overriding ENTRYPOINT
To override the command specified by ENTRYPOINT, use the --entrypoint flag. Useful for troubleshooting your images
docker run -it --entrypoint bash myimage
COPY instruction
The COPY instruction copies new files or directories from a specified source and adds them to the container filesystem at a specified destination.
The <src> path must be inside the build context; if the <src> path is a directory, all files in the directory are copied, but the directory itself is not copied. You can specify multiple <src> directories.
COPY server.conf / # copy the server.conf file from the build context into the root folder of the container.
Specify a working directory
The WORKDIR instruction allows us to set the working directory for any subsequent RUN, CMD, ENTRYPOINT and COPY instructions.
# The path can be absolute or relative to the current working directory. The instruction can be used multiple times
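A minimal sketch (the /data/app path is only an example):
WORKDIR /data/app
COPY server.conf . # ends up in /data/app because of the WORKDIR above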
MAINTAINER Instruction
Specifies who wrote the Dockerfile
Optional but best practice to include
Usually placed straight after the FROM instruction
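For example (name and e-mail address are hypothetical):
MAINTAINER Jacco Kip <jacco@example.com>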
ENV instruction
Used to set environment variables in any container launched from the image
Syntax: ENV <variable> <value>
ENV JAVA_HOME /usr/bin/java
ENV APP_PORT 8080
ADD instruction
Copies new files or directories from a specified source and adds them to the container filesystem at a specified destination.
Syntax: ADD <src> <dest>
The src path is relative to the directory containing the Dockerfile
If the src path is a directory, all files in the directory are copied. The directory itself is not copied.
You can specify multiple <src> directories
COPY vs ADD
Both instructions perform a near identical function
ADD has the ability to auto-unpack tar files; the ADD instruction also allows you to specify a URL for your content (although this is not recommended)
Both instructions use a checksum against the files added.
If the checksum is not equal then the test fails and the build cache will be invalidated, because it means we have modified the files
Best practice for writing Dockerfiles
Remember, each line in a Dockerfile creates a new layer
You need to find the right balance between having lots of layers created for the image and readability of the Dockerfile
Don't install unnecessary packages
One ENTRYPOINT per Dockerfile
Combine similar commands into one by using "&&" and "\"
Example:
RUN apt-get update && \
    apt-get install -y curl && \
    apt-get install -y vim
Use the caching system to your advantage: the order of statements is important; add files that are least likely to change first and the ones most likely to change last.
Distributing your image
2 options:
Push to Docker Hub (public or private repository)
Push to your own registry server
Pushing Images to Docker Hub
Use the docker push command, syntax: docker push [repo:tag]
The local repo must have the same name and tag as the Docker Hub repo
Only the image layers that have changed get pushed
You will be prompted to login to your Docker Hub account
Tagging Images
Used to rename a local image repository before pushing to Docker Hub
Syntax:
docker tag [image id] [repo:tag]
docker tag [local repo:tag] [Docker Hub repo:tag]
docker tag edfc212de17d trainingteam/testexample:1.0
docker tag johnnytu/testimage:1.5 trainingteam/testexam
One image many tags
Deleting local images
docker rmi [image id]
docker rmi [repo:tag]
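To delete all local images in one go, the image IDs can be fed to docker rmi in the same way as with docker rm:
docker rmi $(docker images -q)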
Volumes
A Volume is a designated directory in a container, which is designed to persist data, independent of the container's life cycle.
Volume changes are excluded when updating an image
Persist when a container is deleted
Can be mapped to a host folder
Can be shared between containers
Volumes bypass the copy on write system
Act as passthrough to the host filesystem
When you commit a container as a new image, the content of the volume will not be brought into that image
If a RUN instruction in a Dockerfile changes the content of a volume, those changes are not recorded either.
De-couple the data that is stored, from the container which created the data
Good for sharing data between containers
Can setup a data container which has a volume you mount in other containers
Share directories between multiple containers
Bypassing the copy on write system to achieve native disk I/O
Share a host directory with a container
Share a single file between the host and container
Mount a Volume
Volumes can be mounted when running a container
Use the -v option on docker run
Volume paths specified must be absolute
Can mount multiple volumes by using the -v option multiple times
Execute a new container and mount the folder /myvolume into its system
docker run -d -P -v /myvolume nginx:1.7
Example of mounting multiple volumes
docker run -d -P -v /data/www -v /data/images nginx
Where are your volumes
Volumes exist independently from containers; if a container is stopped we can still access our volume.
To find where the volume is, use docker inspect on the container
Deleting a volume
Volumes are not deleted when you delete a container
To remove the volumes associated with a container use the -v option in the docker rm command.
docker rm -v <container id>
Mounting host folders to a volume
When running a container, you can map folders on the host to a volume
The files from the host folder will be present in the volume
Changes made on the host are reflected inside the container volume
Syntax:
docker run -v [host path]:[container path]:[rw|ro] # rw or ro controls the write status of the volume
docker run -d -v /home/user/public_html:/data/www ubuntu
Files inside /home/user/public_html on the host will appear in the /data/www folder on the container.
If the host path or container path does not exist, it will be created
If the container path is a folder with existing content, the existing files will be hidden by the contents of the host path
Volumes in Dockerfile
VOLUME instruction creates a mount point
Can specify arguments in a JSON array or string.
Cannot map volumes to host directories.
Volumes are initialized when the container is executed.
String example: VOLUME /myvol
String example with multiple volumes: VOLUME /www/website.com /www/website2.com
JSON example: VOLUME ["myvol", "myvol2"]
Example Dockerfile with Volumes. When we run a container from this image, the volume will be initialized along with any data in the specified location.
If we want to setup default files in the volume folder, the folder and file must be created first
FROM ubuntu:14.04
RUN apt-get update
RUN apt-get install -y vim \
wget
RUN mkdir /data/myvol -p && \
echo "hello world¨ > /data/myvol/testfile
VOLUME ["/data/myvol"]
Data containers
A data container is a container created for the purpose of referencing one or many volumes
Data containers don't run any application or process
Used when you have persistent data that needs to be shared with other containers
When creating a data container, you should give it a custom name to make it easier to reference
Custom container names
By default, containers we create have a randomly generated name. To give your container a specific name,
use the "--name" option on the docker run command
An existing container can be renamed using the docker rename command
docker rename <old name> <new name>
Create a container and name it mynginx
docker run -d -P --name mynginx nginx
docker rename happy_einstein mycontainer
creating data containers
docker run --name mydata -v /data/app1 busybox true
chaining containers
docker run --name logdata -v /var/log/nginx busybox
docker run --name webdata -v /home/jacco/public_html:/usr/share/nginx/html busybox
docker run --name webserver -d -P --volumes-from webdata --volumes-from logdata nginx
backup your data container
docker run --volumes-from logdata \
-v /home/jacco/backups:/backup \
ubuntu:14.04 \
tar cvf /backup/nginxlogs.tar /var/log/nginx
inspecting an image
docker inspect ubuntu:14.04
or
docker inspect <image id>
Docker networking model
Containers do not get a public IPv4 address
They are allocated in a private address range
Services running on a docker container must be exposed port by port
Container ports have to be mapped to the host port to avoid conflicts
The Docker bridge
When docker starts, it creates a virtual interface called docker0 on the host machine
docker0 is assigned a random IP address and subnet from the private range defined by RFC 1918
docker0 interface is a virtual Ethernet bridge interface
It passes or switches packets between two connected devices just like a physical bridge or switch (host to container - container to container)
Each new container gets one interface that is automatically attached to the docker0 bridge
Checking the bridge interface
We can use the brctl (bridge control) command to check the interface on our docker0 bridge
Install bridge-utils package to get the command (apt-get install bridge-utils)
run: brctl show docker0
Check container network properties
use the docker inspect command and look for the NetworkSettings field
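For example, the container's IP address can be read directly from that field (format string assumed to match this Docker version's inspect output):
docker inspect --format='{{.NetworkSettings.IPAddress}}' [container id]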
Manual port mapping
Use -p option (lowercase p) in the docker run command
syntax: -p [host port]:[container port]
To map multiple ports, specify the -p option multiple times
Map port 80 on the host to port 80 on the nginx container and port 81 on the host to port 8080 on the nginx container
docker run -d -p 80:80 -p 81:8080 nginx
We can use docker port command for displaying port mappings (and docker ps)
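For example, for the nginx container started above:
docker port [container id]
docker port [container id] 80 # show only the host mapping for container port 80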
EXPOSE instruction
Configure which ports a container will listen on at runtime
Ports still need to be mapped when the container is executed
FROM ubuntu:14.04
RUN apt-get update
RUN apt-get install -y nginx
EXPOSE 80 443
CMD ["nginx", "-g", "daemon off;"]
linking Containers
Linking is a communication method between containers which allows them to securely transfer data from one to another
Source and recipient containers
Recipient containers have access to data on a source container
Links are established based on container names.
Containers can talk to each other without having to expose ports to the host
Essential for micro service application architecture
Example:
- container with Tomcat running
- container with MySQL running
- Application on Tomcat needs to connect to MySQL
Create the source container first
Create the recipient container and use the --link option
Best Practice- give your container meaningful names
Format for linking: name:alias
Create the source container using the postgres image
docker run -d --name database postgres
Create the recipient container and link it
docker run -d -P --name website --link database:db nginx
The underlying mechanism
Linking provides a secure tunnel between the containers
Docker will create a set of environment variables based on your --link parameter
Docker also exposes the environment variables from the source container
- Only the variables created by Docker are exposed
- Variables are prefixed by the link alias
- ENV instructions in the container's Dockerfile
- Variables defined during docker run
DNS lookup entry will be added to /etc/hosts file based on your alias
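A rough sketch of how to look at this for the database:db link above (the exact variable list depends on the ports the source image exposes):
docker exec website env | grep DB_
# for a postgres source this typically includes variables such as DB_PORT, DB_PORT_5432_TCP_ADDR and DB_PORT_5432_TCP_PORT
docker exec website cat /etc/hosts # contains an entry for the alias db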
Controlling and configuring the Daemon
The way you start/stop and configure docker depends on
- are we running it as a service
- What linux distribution
service vs systemctl command
Running interactively in the foreground (docker -d &), send a SIGTERM to the docker process to stop it
- run pidof docker
- sudo kill $(pidof docker)
If starting the Daemon from the Docker command you just specify the various options as a flag
sudo docker -d [options] &
For Ubuntu and Debian located in /etc/default/docker use DOCKER_OPTS to control the startup options for the daemon when running as a service
Example: Start daemon with log level of debug and allow connections to an insecure registry at the domain of myserver.org
DOCKER_OPTS="--log-level=debug --insecure-registry=myserver.org:5000"
CentOS uses systemd to run docker; look at the docker.service file to see how docker is started.
/usr/lib/systemd/system/docker.service
a full reference list of all daemon options:
https://docs.docker.com/reference/commandline/cli/#daemon
Docker daemon logging
Start the docker daemon with the --log-level parameter and specify a logging level: debug, info, warn, error or fatal
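For example, when starting the daemon by hand as shown earlier:
sudo docker -d --log-level=debug &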
Connecting to a remote daemon
A few things we need to setup
First, the docker daemon we want to connect to needs to be listening on a tcp socket
For security purposes we should use an HTTPS encrypted socket, which will require us to set up TLS
Then we point our client to the remote Daemon
Docker Daemon socket option
The docker daemon listens for remote API requests on three types of socket
- unix
- tcp
- fd (for Linux distributions using Systemd)
The default socket is a unix domain socket created at /var/run/docker.sock
This socket requires root permission
Error connecting to socket
If you get the error message below, it typically means
- The Docker daemon is not running
- You do not have permission to make an API call to the docker daemon (i.e. you didn't use sudo in your command or you are not in the docker group)
- Your docker client is trying to connect to the daemon using the unix socket but the daemon is not listening on it
- You are not using TLS to connect to the daemon
johnnytu@docker-ubuntu:~$ docker ps
FATA[0000] Get http:///var/run/docker.sock/v1.18/containers/json:
dial unix /var/run/docker.sock: no such file or directory.
Are you trying to connect to a TLS-enabled daemon without TLS?
Listening on TCP socket
To configure the Docker daemon to listen on a TCP socket, we start the daemon using the --host option and specify the TCP address and port
- Can also use -H
Be aware that by default the TCP socket is un-encrypted
For the address, you can specify an IP address to listen on or specify 0.0.0.0 to listen on all network interfaces.
Port numbers should be 2375 for un-encrypted communication and 2376 for encrypted communication
Using docker command, listen on TCP socket for all network interfaces
docker -d -H tcp://0.0.0.0:2375
Using docker command, listen on TCP socket on a particular IP address
docker -d -H tcp://192.168.0.1:2375
Configure via the upstart configuration file /etc/default/docker
DOCKER_OPTS="-H tcp://192.168.0.1:2375"
Connect the client to the daemon
By default the Docker client assumes the daemon is listening on a unix socket
if the daemon is listening on a TCP socket, we have to configure the client to connect to a particular host
Two methods
- Use the -H flag on the docker command
docker -H tcp://localhost:2375
docker -H tcp://193.241.228.93:2375
- Configure the DOCKER_HOST environment variable
export DOCKER_HOST="tcp://localhost:2375"