</li>
<li>[https://www.howtoforge.com/tutorial/how-to-install-kubernetes-on-ubuntu/ How to Install and Configure Kubernetes and Docker on Ubuntu 18.04 LTS] </li>
<li>[https://forums.linuxmint.com/viewtopic.php?t=414617 How install docker in Mint?]
</ul>


=== One-line script ===
https://github.com/docker/docker-install, https://docs.docker.com/engine/install/ubuntu/, https://twitter.com/portainerio/status/1650171336864550912

Note that 1) piping the one-line script straight into bash is a significant security risk; 2) the script does not add the current user to the docker group, so you still have to do that yourself and then log out and log back in; 3) it does not work on Linux Mint.
<syntaxhighlight lang='sh'>
$ curl -fsSL https://get.docker.com | bash
# Executing docker install script, commit: a8a6b338bdfedd7ddefb96fe3e7fe7d4036d945a
...
+ sudo -E sh -c 'apt-get update -qq >/dev/null'
+ sudo -E sh -c 'DEBIAN_FRONTEND=noninteractive apt-get install -y -qq apt-transport-https ca-certificates curl >/dev/null'
+ sudo -E sh -c 'mkdir -p /etc/apt/keyrings && chmod -R 0755 /etc/apt/keyrings'
+ sudo -E sh -c 'curl -fsSL "https://download.docker.com/linux/ubuntu/gpg" | gpg --dearmor --yes -o /etc/apt/keyrings/docker.gpg'
gpg: WARNING: unsafe ownership on homedir '/home/brb/.gnupg'
+ sudo -E sh -c 'chmod a+r /etc/apt/keyrings/docker.gpg'
+ sudo -E sh -c 'echo "deb [arch=amd64 signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu focal stable" > /etc/apt/sources.list.d/docker.list'
+ sudo -E sh -c 'apt-get update -qq >/dev/null'
+ sudo -E sh -c 'DEBIAN_FRONTEND=noninteractive apt-get install -y -qq docker-ce docker-ce-cli containerd.io docker-compose-plugin docker-ce-rootless-extras docker-buildx-plugin >/dev/null'
+ sudo -E sh -c 'docker version'
Client: Docker Engine - Community
  Version:          24.0.7
  API version:      1.43
  Go version:        go1.20.10
  Git commit:        afdd53b
  Built:            Thu Oct 26 09:08:17 2023
  OS/Arch:          linux/amd64
  Context:          default
Server: Docker Engine - Community
  Engine:
   Version:          24.0.7
   API version:      1.43 (minimum version 1.12)
   Go version:      go1.20.10
   Git commit:      311b9ff
   Built:            Thu Oct 26 09:08:17 2023
   OS/Arch:          linux/amd64
   Experimental:    false
  containerd:
   Version:          1.6.26
   GitCommit:        3dd1e886e55dd695541fdcd67420c2888645a495
  runc:
   Version:          1.1.10
   GitCommit:        v1.1.10-0-g18a0cb0
  docker-init:
   Version:          0.19.0
   GitCommit:        de40ad0


---------------

To run Docker as a non-privileged user, consider setting up the
         documentation for details: https://docs.docker.com/go/attack-surface/

--------------
$ # sudo groupadd docker
$ sudo usermod -aG docker $USER; newgrp docker


$ docker run hello-world
docker: permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Post "http://%2Fvar%2Frun%2Fdocker.sock/v1.24/containers/create": dial unix /var/run/docker.sock: connect: permission denied.
See 'docker run --help'.
</syntaxhighlight>
The '''newgrp docker''' command in Linux is used to switch the current user’s group ID during a login session. Specifically, it changes the user’s '''primary group''' to the docker group without logging out and back in. This is particularly useful when you need to gain the permissions associated with the docker group to run Docker commands.
<pre>
$ id -gn
docker
</pre>
This installs Docker, but you still need "sudo" to run it. See [https://docs.docker.com/engine/install/linux-postinstall/ Linux post-installation steps for Docker Engine]: 1) Manage Docker as a non-root user, and 2) Configure Docker to start on boot with systemd.


=== Docker Desktop ===


== CentOS ==
<ul>
<li>https://docs.docker.com/engine/installation/linux/docker-ce/centos/. Note that CentOS 9 Stream is now required. I tried following [https://linux.how2shout.com/how-to-install-docker-ce-on-oracle-linux-8-7/ this] to install on Oracle Linux 7/8 and it no longer works. A possible workaround is either to [https://docs.docker.com/engine/install/centos/ download RPM packages manually] ([https://download.docker.com/linux/centos/7/x86_64/stable/Packages/ CentOS 7], [https://download.docker.com/linux/centos/8/x86_64/stable/Packages/ CentOS 8]) or to [https://docs.docker.com/engine/install/binaries/ Install Docker Engine from binaries].
{{Pre}}
sudo yum install -y yum-utils
 
sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
 
sudo yum install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
 
sudo systemctl start docker
sudo systemctl enable docker.service
sudo systemctl enable containerd.service
 
sudo docker run hello-world
 
sudo usermod -aG docker $USER
newgrp docker
</pre>
 
<li>[https://stackoverflow.com/questions/36545206/how-to-install-specific-version-of-docker-on-centos How to install specific version of Docker on Centos?]
<pre>
sudo yum list docker-ce.x86_64 --showduplicates | sort -r
</pre>
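To actually pin a version, a minimal sketch (the "24.0.7" string below is only an illustration; substitute one of the version strings printed by the command above, and the exact format depends on the repo):
<pre>
sudo yum install docker-ce-24.0.7 docker-ce-cli-24.0.7 containerd.io
</pre>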
</ul>


== Boot2Docker ==


== Mac ==
<ul>
<li>https://docs.docker.com/desktop/mac/
<li>Alternatives to Docker Desktop for Mac? Rancher is recommended. 2022-06-08
<li>Vagrant method. If you have a Mac, you don't have to use boot2docker (the ISO & its management tool); you can use another Linux VM that comes with Docker pre-installed. See [https://serversforhackers.com/getting-started-with-docker/ this post].
<li>To avoid the message ''Error: `brew cask` is no longer a `brew` command. Use `brew <command> --cask` instead'', use
<pre>
brew install --cask docker
</pre>
</ul>


== Raspberry Pi ==


= Usage =
== Basics, docs, cheatsheet, introduction ==
* https://docs.docker.com/articles/basics/
* [https://www.fosstechnix.com/docker-command-cheat-sheet/ 81 Docker Command Cheat Sheet with Description]
* [http://www.cnblogs.com/wanliwang01/p/docker01.html Docker快速入门 (Docker quick start)]
* [http://blog.myplanet.com/docker-the-fun-and-easy-way Docker: The Fun and Easy Way]
* [https://www.r-bloggers.com/2023/06/a-gentle-introduction-to-docker/ A Gentle Introduction to Docker]. docker build & renv.


Note that '''sudo''' is needed unless you are on macOS.
$ sudo docker exec -i -t loving_heisenberg bash #by Name
</pre>
== Rootless mode ==
<ul>
<li>[https://docs.docker.com/engine/security/rootless/ Run the Docker daemon as a non-root user (Rootless mode)]
* The data dir is set to '''~/.local/share/docker''' by default. The data dir should not be on NFS.
<li>Setup on Ubuntu 22.04
<syntaxhighlight lang='sh'>
curl -fsSL https://get.docker.com | bash
sudo apt install -y uidmap
dockerd-rootless-setuptool.sh install
nano ~/.bashrc
source ~/.bashrc
systemctl --user start docker
systemctl --user enable docker
sudo loginctl enable-linger $(whoami)
docker run hello-world
docker run --rm -ti r-base:4.4.1
</syntaxhighlight>
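The "nano ~/.bashrc" step above is where the rootless-mode docs have you export the variables the Docker client needs; a minimal sketch of what is typically appended (assuming the default rootless socket path under /run/user):
<syntaxhighlight lang='sh'>
# added to ~/.bashrc for rootless Docker
export PATH=/usr/bin:$PATH
export DOCKER_HOST=unix:///run/user/$(id -u)/docker.sock
</syntaxhighlight>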
<li>Unfortunately, Rocker/rstudio does not work. I am not able to log in using username/password. It keeps saying incorrect username/password.
<li>'''Limitations''':
* Performance Overhead
** OverlayFS Limitations: Rootless Docker uses fuse-overlayfs instead of OverlayFS by default, which can be slower.
** Resource Limits: The performance might be slightly lower compared to running Docker with root privileges due to additional user namespace operations.
* Network Restrictions
** Network Drivers: Only the bridge and host network drivers are supported. macvlan and overlay network drivers are not supported.
** Port Binding: Binding to ports below 1024 is not allowed. Only non-privileged ports (1024 and above) can be used.
* File System
** Volume Permissions: Issues with file permissions can arise when mounting volumes from the host, as the files created by rootless Docker processes will be owned by the user running Docker, not root.
** NFS and Other Filesystems: Certain filesystems like NFS might have compatibility issues with rootless Docker due to permission and ownership constraints.
* Compatibility
** Certain Features: Some Docker features might not be fully supported or behave differently. For example, checkpoint/restore and cgroup v1 are not supported.
** Security Features: Some security features like AppArmor, SELinux, and seccomp might have limited functionality or require additional configuration.
* Configuration Complexity
* Troubleshooting
<li>[https://serverfault.com/a/1128797 What's the difference between rootless Docker, running a container as a non-root user, and Podman?]
<li>[https://itnext.io/docker-running-in-rootless-mode-bdbcfc728b3a Docker Running In Rootless Mode]
<li>[https://mohitgoyal.co/2021/04/14/going-rootless-with-docker-and-containers/ Going rootless with Docker and Containers]
<li>[https://www.liquidweb.com/kb/how-to-docker-rootless-containers/ How to Run Rootless Docker Containers]
</ul>


== docker pull ==
* [https://vsupalov.com/docker-latest-tag/ What's Wrong With The Docker :latest Tag?] '''Do not run any container with the latest tag.'''
* [https://www.reddit.com/r/docker/comments/vjx9ct/how_to_upgrade_container_properly/ How to upgrade container properly?]  "docker-compose pull" to update all your services and "docker-compose up -d" to start them all. docker swarm is even better because you can achieve zero-downtime rolling upgrades.
* The following two are equivalent
:<syntaxhighlight lang='bash'>
docker compose pull && docker compose up -d
docker compose up --pull always -d
</syntaxhighlight>


=== <none>:<none> images ===
<li>[https://www.cloudytuts.com/tutorials/docker/how-to-check-memory-and-cpu-utilization-of-docker-container/ How to Check Memory and CPU Utilization of Docker Container], [https://www.howtoforge.com/how-to-check-docker-container-ram-and-cpu-usage/ How to Check Docker Container RAM and CPU Usage]
<pre>
docker stats           # ctrl + c to quit
docker stats CONTAINER # multiple 'ctrl + c' to quit
docker stats --no-stream
docker stats --no-stream CONTAINER
</pre>
<li>[https://github.com/ColinFay/dockerstats docker stats]


=== [https://docs.docker.com/network/host/ Host network] ===
* If you use the '''host''' network driver for a container, that container’s network stack is not isolated from the Docker host. For instance, if you run a container which binds to port 80 and you use host networking, the container’s application will be available on port 80 on the host’s IP address.
* One good example is if I want to use tailscale network from my host in Uptime Kuma container. See [https://wiki.taichimd.us/view/Docker_Applications#Uptime_Kuma HERE].
* Considerations. While host networking can be powerful, it's important to consider:
** Security implications: Host networking reduces network isolation, potentially increasing security risks.
** Port conflicts: Services using host networking may conflict with other applications running on the host machine.
** Platform limitations: Host network mode only works on Linux hosts, not on Docker Desktop for Mac or Windows.
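A minimal sketch of the idea (Linux host only; nginx listens on port 80 and, with host networking, is reachable directly on the host's port 80 without any -p mapping, so make sure nothing else is already using that port):
<syntaxhighlight lang='sh'>
docker run --rm -d --network host --name web nginx
curl http://localhost:80   # served by the container through the host's network stack
docker stop web
</syntaxhighlight>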


=== ping, ifconfig and ip commands not found in Ubuntu container ===


=== Viewing the Web Application's Logs ===
<ul>
<li>[https://linuxiac.com/dozzle-real-time-docker-logs-viewer/ Installing Dozzle: A Superb Real-Time Docker’s Logs Viewer], [https://github.com/amir20/dozzle Dozzle] docker image.
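A sketch of running Dozzle itself (image name and internal port taken from the Dozzle README; the published port is an arbitrary choice):
<pre>
docker run -d --name dozzle -p 8080:8080 \
  -v /var/run/docker.sock:/var/run/docker.sock amir20/dozzle:latest
</pre>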
<li>Command line
<pre>
$ sudo docker logs -f nostalgic_morse
</pre>
</ul>


=== Clear Logs of Running Docker Containers ===
* [https://www.howtogeek.com/devops/how-to-clear-logs-of-running-docker-containers/ How to Clear Logs of Running Docker Containers]
* [https://linuxiac.com/reducing-docker-logs-file-size/ Reducing Docker Logs Size: A Practical Guide to Log Management]
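The approach described in the first article is to truncate the container's JSON log file in place; a sketch (requires root on the host, and CONTAINER_NAME is a placeholder):
<pre>
sudo truncate -s 0 "$(docker inspect --format='{{.LogPath}}' CONTAINER_NAME)"
</pre>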


=== Looking at our Web Application Container's processes ===
<pre>
docker start XXXXXXXX    # restart it in the background
docker attach XXXXXXXX   # reattach the terminal to a running container
</pre>
If we want the latest created container, then we use
=== docker exec: SSH into a running container ===
Run a command in a running container
<ul>
<li>[https://docs.docker.com/engine/reference/commandline/exec/ Usage]: <syntaxhighlight lang='bash'>
docker exec [OPTIONS] CONTAINER COMMAND [ARG...]
</syntaxhighlight>
<li>Examples: <syntaxhighlight lang='bash'>
$ docker exec -d ubuntu_bash touch /tmp/execWorks # do something in the background


$ docker exec -it -w /root ubuntu_bash pwd # change the working directory
</syntaxhighlight>
<li>[https://www.cloudsavvyit.com/14541/how-to-run-a-command-on-a-running-docker-container/ How to Run a Command on a Running Docker Container]
<li>[https://thenewstack.io/how-to-use-the-docker-exec-command/ How to Use the Docker exec Command]. nginx container is used as an example.
<pre>
docker run --name docker-nginx -p 8080:80 -d nginx
 
# method 1. Access the Running Container’s Shell
docker exec -it ID /bin/bash
  apt-get update
  apt-get upgrade -y
  exit
 
# method 2. Run a Command from Outside the Container
docker exec ID apt-get update && apt-get upgrade
 
docker exec ID cat /usr/share/nginx/html/index.html
docker cp index.html ID:/usr/share/nginx/html/
docker exec ID cat /usr/share/nginx/html/index.html
</pre>
</ul>


=== docker cp ===
/home/$USER/docker/$PROJECT/$SUB-DIRECTORY
</pre>
=== PUID, PGID, share volume permission/owner ===
<ul>
<li>[https://docs.linuxserver.io/general/understanding-puid-and-pgid Understanding PUID and PGID] (or the [https://github.com/linuxserver/docker-documentation/blob/master/general/understanding-puid-and-pgid.md source])
<li>You should use the -e PUID and -e PGID options when creating a container from a Docker image to map the container’s internal user to a user on the host machine. This is useful because Docker runs all of its containers under the '''root''' user domain, which means that processes running inside your containers also run as '''root'''. '''This kind of elevated access is not ideal for day-to-day use and can potentially give applications access to things they shouldn’t.''' By using PUID and PGID, you can ensure that files and directories created during the container’s lifespan are owned by a user on the host machine instead of root.
<li>'''Please note that not all Docker images support the PUID and PGID environment variables. The Docker image must be designed to use these variables.''' If you’re using an image that doesn’t support these variables, you may need to create a Dockerfile to build a new image that does.
<li>The following works. The '''--user''' option is a built-in Docker feature that sets the user (and optionally the group) that is used to run the container. This option works regardless of whether the Docker image uses any specific environment variables. PS. "docker" user has been defined in the r-base's [https://github.com/rocker-org/rocker/blob/master/r-base/4.4.0/Dockerfile Dockerfile].
<syntaxhighlight lang='sh'>
docker run --rm -ti --user docker \
  -v "$(pwd)":/workspace r-base
> setwd("/workspace")
> save(iris, file="iris.rda")
> system("ls -lt") # docker docker instead of $USER $USER
> unlink("iris.rda")
</syntaxhighlight>
<li>Similarly, the '''--user''' option works with rocker/rstudio image and ubuntu.
<syntaxhighlight lang='sh'>
docker run --rm -ti --user rstudio \
  -v "$(pwd)":/workspace rocker/rstudio R
> setwd("/workspace")
> save(iris, file="iris.rda")
> system("ls -lt")
> unlink("iris.rda")
</syntaxhighlight>
Note that the prompt is '''$''' rather than '''#'''.
{{Pre}}
docker run --rm -it -v $(pwd):/home --user ubuntu \
  ubuntu bash
$ id
uid=1000(ubuntu) gid=1000(ubuntu) groups=1000(ubuntu),4(adm),20(dialout),24(cdrom),25(floppy),27(sudo),29(audio),30(dip),44(video),46(plugdev)
$ cd /home
$ echo "newfile" > newfile
</pre>
<syntaxhighlight lang='sh'>
docker run --rm -it -v $(pwd):/home --user "$(id -u):$(id -g)" \
  ubuntu bash
$ cd /home
$ echo "newfile" > newfile
</syntaxhighlight>
<li>In the article [https://github.com/rocker-org/rocker/wiki/Sharing-files-with-host-machine Sharing files with host machine] from the Rocker's project, users are instructed to use '''-e USERID''' variable if the host machine user has a UID other than 1000. But the generated file 'iris.rda' from the following example is still owned by root:(
<syntaxhighlight lang='sh'>
docker run --rm -ti -v "$(pwd)":/workspace -e USERID=$UID rocker/rstudio R
</syntaxhighlight>
<li>(Cont.) However, if we run the above command as a daemon and '''log in using the user "rstudio"''', it works even if we don't specify the "-e USERID" option. The lesson is that we should use the user defined in the docker image.
<pre>
docker run --rm -d -p 8787:8787 -v "$(pwd)":/workspace -e PASSWORD=123 rocker/rstudio
</pre>
Notice the prompt is '''#''' rather than '''$''' and the user id is 0.
<pre>
docker run --rm -it -v $(pwd):/home -e PUID=1000 -e PGID=1000 \
  ubuntu bash
# id
uid=0(root) gid=0(root) groups=0(root)
</pre>
<li>In this video [https://youtu.be/oHC6J_aN4eQ?t=137 How to Install Calibre on OMV and Docker], it uses the command '''id admin'''  where "admin" is the portainer user to get PUID (of "admin") and PGID (of "users") to find out the two ids.
</ul>
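For images that do support PUID/PGID (for example the linuxserver.io images), a minimal sketch looks like the following (the image name and mounted path are only illustrative):
<syntaxhighlight lang='sh'>
docker run -d --name calibre-web \
  -e PUID=$(id -u) -e PGID=$(id -g) \
  -v "$(pwd)/config":/config \
  lscr.io/linuxserver/calibre-web:latest
</syntaxhighlight>
Files created under the mounted config directory are then owned by the host user rather than root.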


=== Back Up Your Docker Volumes ===
* [https://nikiforovall.github.io/docker/2020/09/19/publish-package-to-ghcr.html Publish images to GitHub Container Registry (ghcr)]


=== Google cloud registry ===
[https://seandavi.github.io/2019/02/using-google-cloud-registry-for-private-docker-images/ Using google cloud registry for private docker images]
<pre>
# general pattern: docker pull registry-url/image-name:tag
docker pull ghcr.io/OWNER/IMAGE_NAME:TAG   # e.g. GitHub Container Registry
</pre>
 
=== Google Container Registry ===
<ul>
<li>Authenticate with Google Cloud: ensure you have the Google Cloud SDK installed
<pre>
gcloud auth login
gcloud auth configure-docker
</pre>
<li>Pull the image
<pre>
docker pull gcr.io/PROJECT_ID/IMAGE_NAME:TAG
</pre>
</ul>
 
=== Google Artifact Registry ===
https://cloud.google.com/artifact-registry/
<ul>
<li>Authenticate with Google Cloud: ensure you have the Google Cloud SDK installed
<pre>
gcloud auth login
gcloud auth configure-docker us-central1-docker.pkg.dev
</pre>
<li>Pull the image
<pre>
docker pull us-central1-docker.pkg.dev/PROJECT_ID/REPOSITORY/IMAGE_NAME:TAG
</pre>
</ul>


== Dockerfile ==
* A Dockerfile does not follow the YAML syntax or the shell syntax. It is a plain text file that contains instructions for building a Docker image, using its own specific syntax and keywords.
* [https://docs.docker.com/reference/builder/ Dockerfile Reference]
* [https://www.digitalocean.com/community/tutorials/docker-explained-using-dockerfiles-to-automate-building-of-images Using Dockerfiles to Automate Building of Images] from digitalocean.com.
*** If we use ENTRYPOINT + CMD, ENTRYPOINT defines the command and CMD defines parameters. The example above will run ''ping 8.8.8.8 -c 3'' (a minimal sketch is shown after this list). This form is called the '''exec''' form.
* [https://github.com/jamtur01/dockerbook-code The Docker Book]
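A minimal sketch of the exec form just described (ENTRYPOINT defines the command, CMD supplies default arguments that ''docker run'' can override); the alpine base image is only illustrative:
<pre>
FROM alpine:3.19
ENTRYPOINT ["ping", "8.8.8.8"]
CMD ["-c", "3"]
</pre>
Building and running this image executes ''ping 8.8.8.8 -c 3''; arguments appended to ''docker run'' replace only the CMD part, e.g. ''docker run myping -c 10''.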


=== Examples of Dockerfile ===
</ul></ul>

==== Rocker ====
* [https://github.com/rocker-org/rocker rocker (R and RStudio)]
* [https://github.com/hrbrmstr/rdaradar rdaradar (RDA Radar)]
<pre>
FROM r-base:latest
COPY check.R .
CMD [ "Rscript", "check.R", "/unsafe.rda"]
</pre>
<pre>
$ git clone https://github.com/hrbrmstr/rdaradar.git
$ docker build -t rdaradar:0.1.0 -t rdaradar:latest . 
$ docker run --rm -v "$(pwd)/exploit.rda:/unsafe.rda" rdaradar
</pre>
 
==== Bioconductor ====
[https://github.com/Bioconductor/bioconductor_docker Bioconductor]
 
==== Papers ====
* [https://github.com/amyfrancis97/DrivR-Base DrivR-Base], [https://f1000research.com/posters/12-1521 Poster], [https://academic.oup.com/bioinformatics/article/40/4/btae197/7644281 Paper] 2024
 
=== How to use Dockerfile ===
https://docs.docker.com/engine/reference/commandline/build/




=== R and httpgd package ===
<ul>
<li>[https://nx10.github.io/httpgd/articles/b03_docker.html httpgd Docker vignette], [https://nx10.github.io/httpgd/ installation] from Github.
<li>It works. However, "httpgd" is currently archived on CRAN (2023/1/25). So my temporary solution is
<pre>
$ docker run --rm -it r-base:4.2.2 bash
> plot(1:5)
</pre>
<li>It works when I tested it on a '''remote ubuntu server''' (R 4.4.0 & httpgd 2.0.1) (following the instruction on [https://cran.r-project.org/web/packages/httpgd/vignettes/b03_docker.html Docker vignette]). Either IP or hostname works but the hostname URL link given by httpgd::hgd() needs to be modified to include '''.local'''.
<li>Some variation of using hgd()
<pre>
hgd(host="0.0.0.0", port = 8888) # allow connections from anyone on any computer
hgd()                # default is host=127.0.0.1, port will be random
hgd(token="secret")  # define the token
hgd_browse()
hgd_close()
hgd_details()
hgd_url()
hgd_view()
</pre>
<li>To use it with [https://github.com/Bioconductor/bioconductor_docker?tab=readme-ov-file Bioconductor] (the Bioconductor docker image uses p3m.dev to install binary R packages, so image creation is fast), we can do it like this
<pre>
$ docker run --rm -it -p 8888:8888 bioconductor/bioconductor_docker:RELEASE_3_18 R
> install.packages("httpgd")
> httpgd::hgd(host = "0.0.0.0", port = 8888)
</pre>
OR use, for example, "bioconductor/bioconductor_docker:RELEASE_3_18" as the base image in the Dockerfile, and follow the same instruction from httpgd vignette to create a docker image.
<pre>
$ nano Dockerfile_httpgd
$ docker build . -f Dockerfile_httpgd -t bioc-httpgd:RELEASE_3_18
$ docker images
$ docker run --rm -it --user rstudio -p 8888:8888 bioc-httpgd:RELEASE_3_18 R
</pre>
<li>[[Biowulf#Singularity|Singularity]]. The following is a definition file that is using the bioconductor image + the '''httpgd''' package.
<syntaxhighlight lang='sh'>
Bootstrap: docker
From: bioconductor/bioconductor_docker:RELEASE_3_18
%post
    apt-get update \
    && apt-get install -y --no-install-recommends \
    libfontconfig1-dev \
    && apt-get autoremove -y && apt-get clean -y && rm -rf /var/lib/apt/lists/* \
    && install2.r --error --skipinstalled --ncpu -1 \
    httpgd \
    && rm -rf /tmp/downloaded_packages
%runscript
    exec /usr/local/bin/R
%environment
    export LC_ALL=C
</syntaxhighlight>
<syntaxhighlight lang='sh'>
sudo singularity build bioc.sif bioc.def
singularity run bioc.sif
> httpgd::hgd(host = "0.0.0.0", port = 8888)
</syntaxhighlight>
After we copy the URL, we need to modify the IP or hostname.
</ul>


=== Docker-OSX ===
https://github.com/sickcodes/Docker-OSX


== Delete/remove/'''prune''' unused resources ==
[https://docs.docker.com/config/pruning/ Prune unused Docker objects]

<ul>
<li>Prune build cache (seems most effective)
<syntaxhighlight lang='bash'>
docker builder prune

docker system df -v # Check Docker disk usage
sudo du -sh /var/lib/docker
</syntaxhighlight>
<li>Prune containers
<syntaxhighlight lang='bash'>
docker container prune # remove all containers that are not in ''running'' status
docker container rm -f $(docker container ls -aq) # remove even the running containers
</syntaxhighlight>
<li>Prune '''dangling images''': Dangling images are images that aren’t tagged and aren’t referenced by any container. Normal but unused/unreferenced images are kept and won't be deleted. See [[#%3Cnone%3E:%3Cnone%3E_images|<none>:<none> images]].
<syntaxhighlight lang='bash'>
docker image prune # unused image layers
</syntaxhighlight>
<li>Remove all '''unused images''': If you want to remove all images that aren’t used by any existing containers, you can use the -a flag. It will give a warning saying: '''this will remove all images without at least one container associated to them'''. Images used by "Exited" containers such as "hello-world" will not be deleted.
<syntaxhighlight lang='bash'>
docker image prune -a
</syntaxhighlight>
"Used" images means images referenced by any container shown by '''docker ps -a'''.
<li>Prune volumes
<syntaxhighlight lang='bash'>
docker volume ls
docker volume prune # remove volumes not used by at least one container

docker volume prune --filter 'label=demo'
docker volume prune --filter 'label=demo' --filter 'label=test'
</syntaxhighlight>
<li>Prune networks
<syntaxhighlight lang='bash'>
docker network prune
</syntaxhighlight>
<li>Prune everything.
<syntaxhighlight lang='bash'>
docker system prune
</syntaxhighlight>
</ul>


== Plugins ==
</syntaxhighlight>


=== Where are Docker containers/images stored on the host: /var/lib/docker ===
* http://stackoverflow.com/questions/19234831/where-are-docker-images-stored-on-the-host-machine
* https://docs.docker.com/engine/reference/commandline/dockerd/#daemon-storage-driver
* [https://foxutech.com/manage-docker-images-local-disk/ How to Manage Docker images on local disk]
* [https://www.howtogeek.com/devops/how-to-store-docker-images-and-containers-on-an-external-drive/ How to Store Docker Images and Containers on an External Drive]
* [https://linuxiac.com/how-to-change-docker-data-directory/ How to Change Docker’s Default Data Directory]


The default is '''/var/lib/docker'''. The location can be changed by modifying the file '''/etc/default/docker'''. Three options if we are tight on disk space.
</pre>


== Package CLI Applications ==
[https://www.cloudsavvyit.com/15713/how-to-use-docker-to-package-cli-applications/ How to Use Docker to Package CLI Applications]

== Stack ==
* https://www.composerize.com/
* [https://youtu.be/-ttZjGBkLL8 Export Docker Container Settings as Docker Compose Stack], [https://github.com/Red5d/docker-autocompose docker-autocompose] (only x86)

== Docker app ==
Docker App is an experimental Docker feature which lets you build and publish application stacks consisting of multiple containers. It aims to let you share '''Docker Compose''' stacks with the same ease of use as regular Docker containers.

[https://www.cloudsavvyit.com/10673/how-to-use-docker-app-to-containerise-an-entire-application-stack/ How to Use 'Docker App' to Containerise an Entire Application Stack]

== Docker Swarm ==
* https://www.linux.com/learn/how-use-docker-machine-create-swarm-cluster
* [https://www.howtoforge.com/tutorial/ubuntu-docker-swarm-cluster/ How Setup and Configure Docker Swarm Cluster on Ubuntu]
* [https://www.cloudsavvyit.com/13049/what-is-docker-swarm-mode-and-when-should-you-use-it/ What is Docker Swarm Mode and When Should You Use It?]

== Security ==
<ul>
<li>[https://cloudberry.engineering/article/dockerfile-security-best-practices/ Docker Security Best Practices from the Dockerfile]
<li>[https://www.cloudsavvyit.com/12631/how-to-secure-sensitive-data-with-docker-compose-secrets/ How to Secure Sensitive Data With Docker Compose Secrets]
<li>[https://docs.docker.com/scout/ Docker Scout]. Docker Scout lets users secure their software supply chain and continuously observe and improve their security posture. Docker Scout is free for up to 3 repositories.
<pre>
docker login
docker scout enroll ORG_NAME

docker scout scan my-web-app:latest

docker scout repo enable --org ORG_NAME ORG_NAME/scout-demo
# Docker Scout analyzes the image you built most recently by default,
# so there's no need to specify the name of the image in this case.
</pre>
<li>[https://www.portainer.io/blog/is-docker-ce-ready-for-production-how-portainer-bridges-the-gaps Is Docker-CE Ready for Production? How Portainer Bridges the Gaps]
</ul>

== [https://mobyproject.org/ Moby Project] ==
[https://www.infoworld.com/article/3193904/containers/what-is-dockers-moby-project.html What is Docker's Moby Project?]

== Windows container ==
[https://stackoverflow.com/questions/45380972/how-can-i-run-a-docker-windows-container-on-osx How can I run a docker windows container on osx?]

== When Not to Use Docker ==
[https://www.cloudsavvyit.com/15446/when-not-to-use-docker-cases-where-containers-dont-help/ When Not to Use Docker: Cases Where Containers Don’t Help]
= Docker Compose <docker-compose.yaml> =
Docker Compose can help us out as it allows us to specify a single file in which we can define our entire environment structure and run it with a single command (much like a Vagrantfile works).


* Tabs are not allowed in a Docker Compose YAML file. You should use spaces for indentation instead.
* https://docs.docker.com/compose/ (the example will give an error when "RUN pip install -r requirements.txt")
*# app.py
*# requirements.txt
*# Dockerfile
*# docker-compose.yml
* Some top-level '''keys''': version, services, networks, volumes
* [https://stackoverflow.com/questions/38507446/docker-dockerfile-vs-docker-compose-yml Dockerfile vs docker-compose.yml]
* A simple example of running [https://www.tutorialspoint.com/docker/docker_compose.htm nginx & mysqsl]
* [https://youtu.be/Qw9zlE3t8Ko Docker Compose in 12 Minutes]
* https://deliciousbrains.com/vagrant-docker-wordpress-development/
* https://github.com/kristophjunge/docker-mediawiki
* [https://www.howtoforge.com/tutorial/docker-guide-dockerizing-python-django-application/ Docker Guide: Dockerizing Python Django Application] (cannot follow)
* Examples:
** [https://github.com/haroldtreen/epub-press EpubPress] local server
** Running [https://github.com/nextcloud/docker nextcloud], [https://blog.ouseful.info/2017/06/16/rolling-your-own-jupyter-and-rstudio-data-analysis-environment-around-apache-drill-using-docker-compose/ Jupyter and RStudio]
** [https://github.com/dceoy/docker-rstudio-server Rstudio]
* [https://readmedium.com/docker-compose-for-beginners-working-with-multiple-containers-ee0727aab687 Docker Compose For Beginners: Working With Multiple Containers]
** image, container_name
** image, container_name, environment
** image, container_name, environment, volumes, ports


== YAML validator ==
https://codebeautify.org/yaml-validator

== Download binary ==
<ul>
<li>https://github.com/docker/compose/releases for macOS (x86/arm), Linux (aarch64 or armv6 or armv7).
<li>New instruction [https://docs.docker.com/compose/install/linux/ Install the Compose plugin]. In short, we will use '''docker compose''' (new) instead of the "docker-compose" (deprecated) command. There is no need to install the original "docker-compose" tool. "docker-compose --version"
</ul>

== Difference of "docker compose" and "docker-compose" ==
* Docker-compose is the original '''Python-based''' command-line tool that was released in 2014. Docker compose is a newer '''Go-based''' command-line tool that is integrated into the Docker CLI platform and supports the compose-spec. Docker compose is meant to be a drop-in replacement for docker-compose, but it may have some behavior differences and new features. Docker compose is currently a tech preview, but it will eventually replace docker-compose as the recommended way to use Compose.
* https://docs.docker.com/compose/migrate/. From July 2023 Compose V1 stopped receiving updates. It’s also no longer available in new releases of Docker Desktop.
* [https://forums.docker.com/t/docker-compose-vs-docker-compose/137884 Docker-compose vs docker compose]
* [https://stackoverflow.com/a/66516826 Difference between "docker compose" and "docker-compose"]
== Simple examples ==
Create a file '''docker-compose.yml''' and run '''docker-compose up''' after creating the file.
 
'''[https://hub.docker.com/_/hello-world hello-world]''': 9kB
<pre>
version: "3"
services:
  hello:
    image: hello-world
</pre>
 
'''[https://hub.docker.com/_/alpine alpine]''': 7.73MB
<pre>
version: "3"
services:
  server:
    image: alpine
    container_name: my_container
    command: sh -c "echo 'hello' && echo 'docker'"
</pre>
 
'''[https://hub.docker.com/_/nginx Nginx]''': 135MB
<pre>
mkdir src
echo "Hello world!" > src/index.html
</pre>
<pre>
version: "3"
services:
  client:
    image: nginx
    ports:
      - 8000:80
    volumes:
      - ./src:/usr/share/nginx/html
</pre>
 
== Composerize/convert a docker command into a docker compose file ==
* Copilot/ChatGPT/...
* https://www.composerize.com/
* [https://ostechnix.com/convert-docker-run-commands-into-docker-compose-files/ Convert Docker Run Commands Into Docker-Compose Files]
 
== An example from 'Fundamentals of Docker' ==
<syntaxhighlight lang='bash'>
git clone https://github.com/fundamentalsofdocker/labs.git
cd labs/ch08
docker-compose up
# Open http://localhost:3000/pet
</syntaxhighlight>
The images do not show up:( The terminal shows what has happened under the hood. So the problem is the http links for images do not exist.
 
We can also run the application in the background
<syntaxhighlight lang='bash'>
docker-compose up -d
</syntaxhighlight>
 
To stop and clean up the application, [https://www.thegeekstuff.com/2016/04/docker-compose-up-stop-rm/ Howto use docker-compose to Start, Stop, Remove Docker Containers]
<syntaxhighlight lang='bash'>
docker-compose down # Stop and remove containers, networks, images, and unnamed volumes
                     # defined in the docker-compose.yml file
# OR
docker-compose down -v # similar to above but also remove named volumes defined in the yml file
# OR
docker-compose stop && docker-compose rm -f
docker-compose rm -v
</syntaxhighlight>


== An example from "How to Setup NGINX as Reverse Proxy Using Docker" ==
See [[#Nginx_reverse_proxy|here]]. Only nginx is used.


== An example from "Docker Deep Dive" (flask + redis) ==
'''Note''' that on [https://docs.docker.com/compose/gettingstarted/#step-7-update-the-application Get started with Docker Compose] it mounts the current directory to ''/code'' inside the container. So after we modify ''app.py'', we don't need to copy it to the container.


</syntaxhighlight>


 
== Create Compose Files From Running Docker Containers ==
[https://www.makeuseof.com/create-docker-compose-files-from-running-docker-containers/ How to Automatically Create Compose Files From Running Docker Containers]


== Docker-Compose persistent data MySQL ==
https://stackoverflow.com/questions/39175194/docker-compose-persistent-data-mysql


== Connect to Docker daemon over ssh using docker-compose ==
[https://medium.com/@sujaypillai/dockertips-connect-to-docker-daemon-over-ssh-using-docker-compose-f4b189dd8951 #DockerTips: Connect to Docker daemon over ssh using docker-compose]


== Dockerfile + docker-compose ==
[https://stackoverflow.com/a/29487120 Docker Compose vs. Dockerfile - which is better?]


The Compose file describes the container in its running state, leaving the details on how to build the container to Dockerfiles.


== How to deploy on remote Docker hosts with docker-compose ==
[https://www.docker.com/blog/how-to-deploy-on-remote-docker-hosts-with-docker-compose/ How to deploy on remote Docker hosts with docker-compose]


== logs ==
<pre>
docker-compose logs -f
</pre>


= GUI/TUI interface manager =
[https://www.tecmint.com/docker-tools/ 11 Must-Have Docker Tools To Simplify Your Workflow]:
* LazyDocker - Command-Line Docker Management
* Dive - Analyze Docker Image Layers
* Portainer – Simplify Docker Management
* Watchtower – Automated Docker Container Updates
* Dockly – Interactive Docker Management Tool
* Docker Compose – Define and Run Multi-Container Apps
* Dry – Real-time Docker Container Monitoring
* Sliplane – Cloud-Based Docker Management Tool
* Orbstack (closed source) - manage VM and Docker containers
* Docker Desktop – A Graphical Interface for Docker
* Visual Studio Code (VS Code) Docker Extension

== [https://github.com/moncho/dry Dry] ==
[https://www.2daygeek.com/dry-an-interactive-cli-manager-for-docker-containers/ Dry – An Interactive CLI Manager For Docker Containers]. The TUI is built on top of [https://github.com/gizak/termui termui]; a cross-platform, easy-to-compile, and fully-customizable terminal dashboard. It is inspired by [https://github.com/yaronn/blessed-contrib blessed-contrib], but purely in Go.
== [https://portainer.io/ Portainer]* (nice) ==
<ul>
<li>Username: admin. Password: at least 12 characters long</li>
<li>[https://github.com/portainer/portainer/issues/5406 Hardware minimum requirements] 100MB RAM. That's why 1GB ram of Raspberry Pi works fine.
<li>[https://docs.portainer.io/start/install-ce/server/docker Install CE], [https://docs.portainer.io/ Documentation]
<pre>
docker volume create portainer_data
* [https://youtu.be/XYcKmPi4McA How-to: Deploy Portainer on MicroK8s] (video)
* [https://www.portainer.io/blog/from-zero-to-production-with-fedora-coreos-portainer-and-wordpress-in-7-easy-steps From Zero to Production with Fedora CoreOS, Portainer, and WordPress in 7 Easy Steps]. Virtualbox was used
=== IP address 0.0.0.0 ===
[https://www.reddit.com/r/docker/comments/rkq7o8/comment/hpbb13k/ How to setup ip address in portainer to access containers?] <br/>
(Left hand side) Administration -> Environment-related -> Environments > local (or whatever your environment is named) -> '''Public ip'''.


=== Templates ===
Every app is based on a Docker application
* https://casaos.io/
** https://wiki.casaos.io/en/get-started. It also supports arm64, armv7.
** http://casaos.local
** https://docs.zimaboard.com/docs/index.html Default login casaos/casaos. For a new user, the password has to be at least 5 characters.
* [https://youtu.be/FwJByjTdKks Revisiting CasaOS After A Few Months] 2022-6-14
* [https://youtu.be/w44CypRO5l4 Home Servers Have NEVER Been This Easy: CasaOS + ZimaBoard] 4/23/2023
* [https://github.com/xp1ode/new/blob/506bf428592a9604c4d8ca4cc6d0426280805b48/Screenshot%20from%202022-11-15%2006-12-55.png List of apps] 2022/11


= Orchestrator =
== k3s: Lightweight Kubernetes ==
[https://opensource.com/article/20/3/kubernetes-raspberry-pi-k3s Run Kubernetes on a Raspberry Pi with k3s]
== Kubeflow ==
* [https://www.kubeflow.org/ The Machine Learning Toolkit for Kubernetes]


= Other containers =
| $ docker pull ubuntu:latest <br/>$ docker pull broadinstitute/gatk3:3.8-0 || $ singularity pull docker://ubuntu:latest<br/>$ singularity pull docker://broadinstitute/gatk3:3.8-0
|-
| $ docker build -t myname/myapp:latest -f Dockerfile || $ singularity build myapp.sif myapp.def
|-
|-
| $ docker shell (not exist) || $ singularity shell docker://broadinstitute/gatk3-3.8-0<br/>  $ singularity shell gatk3-3.8-0.img<br/> > ls  # the default location depends on the host system<br/>  
| $ docker shell (not exist) || $ singularity shell docker://broadinstitute/gatk3-3.8-0<br/>  $ singularity shell gatk3-3.8-0.img<br/> > ls  # the default location depends on the host system<br/>  
<li>[https://www.kasmweb.com/ Kasm Workspaces], https://hub.docker.com/r/kasmweb/desktop/#!
* [https://youtu.be/go7n0FmNqh4 KASM: Isolated Disposable Remote Workspace Via Your Browser] (video)
* [https://www.youtube.com/watch?v=hXkZVqqAg7c Kasm Workspaces: Your Solution for Remote Desktops?] (video)
* [https://youtu.be/_ur59HHoRGc?t=442 Desktop Apps in Docker Containers with Kasm Workspaces] (you cannot install any software there)
</li>

= Podman =
* [https://podman.io/docs/installation Podman Installation Instructions]
** [https://ostechnix.com/install-podman-desktop-in-linux/ How To Install Podman Desktop In Linux]
** Raspberry Pi OS uses the standard Debian repositories, so it is fully compatible with Debian's arm64 repository. You can simply follow the steps for Debian to install Podman.
* [https://linuxhandbook.com/docker-vs-podman/ Podman vs docker]:

</syntaxhighlight>
* [https://developers.redhat.com/articles/2023/05/23/podman-desktop-now-generally-available Podman Desktop 1.0]: Local container development made easy.
* [https://dzone.com/articles/podman-for-docker-users Podman for Docker Users]
** Prerequisites
** Moving Images from Docker to Podman
** Creating a Basic Nuxt.js Project
** Building a Container Image for Your Nuxt.JS Project
** Push Your Podman Image to Quay.io
** '''Run Your Podman Image with Docker'''
** Creating Pods
** Generate a Kubernetes Pod Spec with Podman
** Create a Kubernetes Cluster with Kind (Optional)
** Deploying to Kubernetes
* Podman, Docker and Singularity all support OCI container format images.
* [https://appsilon.com/docker-vs-podman-vs-singularity/ Docker vs. Podman vs. Singularity: Which Containerization Platform is Best for R Shiny?]


= Resource =

* [https://github.com/rainsworth/osip2019-containerisation-workshop/ Reproducible Research through Containerisation: Docker and Singularity] from rainsworth.
* [https://youtu.be/fqMOX6JJhGo Docker Tutorial for Beginners - A Full DevOps Course on How to Run Applications in Containers] from freeCodeCamp.org
* [https://www.howtogeek.com/733522/docker-for-beginners-everything-you-need-to-know/ Docker for Beginners: Everything You Need to Know]


== Books ==

* It is based on Alpine Linux. To install htop, do '''apk add htop'''. But '''htop''' command shows the resource from the host, not from the user's account.
* '''ctrl + insert''' to copy and '''shift + insert''' to paste
* [https://github.com/play-with-docker/play-with-docker/issues/238 connect to a play-with-docker instance]. Answer: You just need to create a random private key. [https://kostislab.blogspot.com/2019/03/play-with-play-with-docker-form-your.html Play with "Play with Docker" from your terminal!].
* Some applications I've tested.
** webtop (OK)

Latest revision as of 21:34, 21 November 2024

Official web page http://docker.io.

Docker is both a client and a server: the server is a daemon that runs on Linux. The normal approach is to run the docker client on the same machine as the daemon, but the client can also connect to a remote docker daemon.

Installation

Which OS to install?

Containers vs virtual machines

KubeVirt

OS containers vs application containers

Differences:

  • OS containers: LXC, OpenVZ, Linux VServer, BSD Jails and Solaris zones. The container acts as VPS.
  • App containers: Docker, Rocket. The container acts as an application.

Current release version

Ubuntu x86 and Mint

One-line script

https://github.com/docker/docker-install, https://docs.docker.com/engine/install/ubuntu/, https://twitter.com/portainerio/status/1650171336864550912

Note that 1) piping the one-liner into a shell is a huge security issue; 2) you still have to add the current user to the docker group and then log out and log back in; 3) it does not work on Linux Mint.

$ curl -fsSL https://get.docker.com | bash
...
Client: Docker Engine - Community
 Version:           24.0.7
 API version:       1.43
 Go version:        go1.20.10
 Git commit:        afdd53b
 Built:             Thu Oct 26 09:08:17 2023
 OS/Arch:           linux/amd64
 Context:           default

Server: Docker Engine - Community
 Engine:
  Version:          24.0.7
  API version:      1.43 (minimum version 1.12)
  Go version:       go1.20.10
  Git commit:       311b9ff
  Built:            Thu Oct 26 09:08:17 2023
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.6.26
  GitCommit:        3dd1e886e55dd695541fdcd67420c2888645a495
 runc:
  Version:          1.1.10
  GitCommit:        v1.1.10-0-g18a0cb0
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0

---------------

To run Docker as a non-privileged user, consider setting up the
Docker daemon in rootless mode for your user:

    dockerd-rootless-setuptool.sh install

Visit https://docs.docker.com/go/rootless/ to learn about rootless mode.


To run the Docker daemon as a fully privileged service, but granting non-root
users access, refer to https://docs.docker.com/go/daemon-access/

WARNING: Access to the remote API on a privileged Docker daemon is equivalent
         to root access on the host. Refer to the 'Docker daemon attack surface'
         documentation for details: https://docs.docker.com/go/attack-surface/

--------------
$ # sudo groupadd docker
$ sudo usermod -aG docker $USER; newgrp docker

$ docker run hello-world

The newgrp docker command in Linux is used to switch the current user’s group ID during a login session. Specifically, it changes the user’s primary group to the docker group without logging out and back in. This is particularly useful when you need to gain the permissions associated with the docker group to run Docker commands.

$ id -gn
docker

Docker Desktop

Without sudo, Post-installation

To use docker without sudo, follow the instruction on the official guide.

# Add the docker group if it doesn't already exist.
# sudo groupadd docker

# Add your user to the docker group.
sudo usermod -aG docker $USER

# Log out and log in

After running this command, you need to log out and log back in for the changes to take effect. This is because group membership is determined at login time. When you log in, the system reads the group membership information and assigns the appropriate permissions to your user account.

Upgrade Docker Desktop

It seems it does not affect running containers (e.g. RStudio on Mac).

Is it fine to upgrade Docker-ce while a container is running?

Doesn't matter. Your system will stop the container if you update docker.

Is there a way to hibernate a docker container

Live restore
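
With live restore enabled, containers keep running while the Docker daemon is down (e.g. during an upgrade). A minimal sketch, assuming the default /etc/docker/daemon.json location and a systemd-based host (note this overwrites any existing daemon.json):

sudo tee /etc/docker/daemon.json <<'EOF'
{
  "live-restore": true
}
EOF
sudo systemctl reload docker   # reload applies the change without restarting containers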

Rate limits for GitHub Apps

Rate limits for GitHub Apps

When I tried several times of docker build, I finally got a message

Downloading GitHub repo XXX/XXXXX@HEAD
Error: Failed to install 'unknown package' from GitHub:
  HTTP error 403.
  API rate limit exceeded for XXX.XX.XXX.X. (But here's the good news: Authenticated requests get a higher rate limit. Check out the documentation for more details.)

  Rate limit remaining: 0/60
  Rate limit reset at: 2021-04-12 20:32:28 UTC

  To increase your GitHub API rate limit
  - Use `usethis::browse_github_pat()` to create a Personal Access Token.
  - Use `usethis::edit_r_environ()` and add the token as `GITHUB_PAT`.
Execution halted

CentOS

Boot2Docker

For the Windows and OS X operating systems, we have to use Boot2Docker. Boot2Docker is a local virtual machine with its own network interface and IP address. To find the Boot2Docker IP address you can check the value of the DOCKER_HOST environment variable. You'll be prompted to set this variable when you start or install Boot2Docker the first time. You can also find the IP address by running the boot2docker ip command.

Note that since Windows and OS X do not share a filesystem with the Linux VM, the 'docker run' command with the '-v' flag to mount a local directory into a Docker container will not work with Boot2Docker releases prior to 1.3. With the release of Boot2Docker 1.3, support for volumes is available for OS X but not yet for Windows.

Windows

Note that much of the information here has not been updated.

Docker can be run on Windows 10 Pro as a native application; see

The information below is based on running Docker on Windows 7.1 and 8. Your processor needs to support hardware virtualization.

  • Windows Installer includes msys-git, Virtualbox, Boot2Docker-cli management tool and Boot2Docker ISO.
  • Installation instruction for Windows OS. It will install Boot2Docker management tool with the boot2docker iso (based on Tiny Core Linux), Virtualbox and MYSYS-git UNIX tools.
  • Docker needs Admin rights to be installed. However, Virtualbox can be installed from a user's account.
  • If the installer detects a version of VirtualBox installed, the VirtualBox checkbox will not be checked by default (Windows OS). VirtualBox could not be used anymore after updating my VB from 4.3.18 to 4.3.20. The error may be related to a Windows update according to the Virtualbox forum.
Error in supR3HardenedWinReSpawn
  • Note that boot2docker cannot be installed/run inside a Windows guest machine. See this post and my Virtualbox wiki here. If we try to launch boot2docker-vm from Virtualbox, we will see a message "This kernel requires an x86-64 CPU, but only detected an i686 CPU."
  • After I switched back to an old version of Virtualbox, everything worked again. I could even install Docker successfully.
    • Boot2Docker Start icon cannot be run directly because Notepad++ will automatically open it. A possible solution is to go to control panel and change default program for .sh file from Notepad++ to C:\Program Files (x86)\Git\bin\bash.exe.
    • The above step does not work well since a terminal appears and disappears quickly.
    • A working approach is to open Git Bash from the Start menu and run /c/Program Files/Boot2DockerforWindows/start.sh (or boot2docker start or boot2docker init)
    • A new VM called 'boot2docker-vm' will be created (we can open VirtualBox Manager to check). But I got an error error in run: Failed to start machine "boot2docker-vm" (run again with -v for details). The VM has an error on Network>Adapter2>VirtualBox Host-Only Ethernet Adapter #2. So I open the setting of <boot2docker-vm>, go to Network > Adapter 2 and change the dropdown list of Name from VirtualBox Host-Only Ethernet Adapter #2 to VirtualBox Host-Only Ethernet Adapter.
    • Now it works whether I directly click the boot2docker-vm VM in the VB Manager or use the start.sh command from Git Bash.

Boot2docker-vm.png

$ # boot2docker is in the PATH variable, so there is no need to cd to the folder.
$ boot2docker start
initializing...
Virtual machine boot2docker-vm already exists

starting...
Waiting for VM and Docker daemon to start...
........o
Started.
Writing c:\Users\brb\.boot2docker\certs\boot2docker-vm\ca.pem
Writing c:\Users\brb\.boot2docker\certs\boot2docker-vm\cert.pem
Writing c:\Users\brb\.boot2docker\certs\boot2docker-vm\key.pem
Docker client does not run on Windows for now. Please use
    "c:\Program files\Boot2Docker for Windows\boot2docker.exe" ssh
to SSH into the VM instead.


192.168.56.101
connecting...
                        ##        .
                  ## ## ##       ==
               ## ## ## ##      ===
           /""""""""""""""""\___/ ===
      ~~~ {~~ ~~~~ ~~~ ~~~~ ~~ ~ /  ===- ~~~
           \______ o          __/
             \    \        __/
              \____\______/
 _                 _   ____     _            _
| |__   ___   ___ | |_|___ \ __| | ___   ___| | _____ _ __
| '_ \ / _ \ / _ \| __| __) / _` |/ _ \ / __| |/ / _ \ '__|
| |_) | (_) | (_) | |_ / __/ (_| | (_) | (__|   <  __/ |
|_.__/ \___/ \___/ \__|_____\__,_|\___/ \___|_|\_\___|_|
Boot2Docker version 1.4.1, build master : 86f7ec8 - Tue Dec 16 23:11:29 UTC 2014

Docker version 1.4.1, build 5bc2ff8
docker@boot2docker:~$ docker
Usage: docker [OPTIONS] COMMAND [arg...]

A self-sufficient runtime for linux containers.

Options:
  --api-enable-cors=false                      Enable CORS headers in the remote
 API
  -b, --bridge=""                              Attach containers to a pre-existi
ng network bridge
...
Run 'docker COMMAND --help' for more information on a command.
docker@boot2docker:~$ docker run hello-world
Unable to find image 'hello-world:latest' locally
hello-world:latest: The image you are pulling has been verified
511136ea3c5a: Pull complete
31cbccb51277: Pull complete
e45a5af57b00: Pull complete
Status: Downloaded newer image for hello-world:latest
Hello from Docker.
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
 1. The Docker client contacted the Docker daemon.
 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
    (Assuming it was not already locally available.)
 3. The Docker daemon created a new container from that image which runs the
    executable that produces the output you are currently reading.
 4. The Docker daemon streamed that output to the Docker client, which sent it
    to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
 $ docker run -it ubuntu bash

For more examples and ideas, visit:
 http://docs.docker.com/userguide/

docker@boot2docker:~$ ls
boot2docker, please format-me
docker@boot2docker:~$ pwd
/home/docker
docker@boot2docker:~$ ls /
bin/     dev/     home/    lib/     mnt/     proc/    run/     sys/     usr/
c/       etc/     init     linuxrc  opt/     root/    sbin/    tmp      var/

docker@boot2docker:~$ docker run hello-world
Hello from Docker.
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
 1. The Docker client contacted the Docker daemon.
 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
    (Assuming it was not already locally available.)
 3. The Docker daemon created a new container from that image which runs the
    executable that produces the output you are currently reading.
 4. The Docker daemon streamed that output to the Docker client, which sent it
    to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
 $ docker run -it ubuntu bash

For more examples and ideas, visit:
 http://docs.docker.com/userguide/
docker@boot2docker:~$
docker@boot2docker:~$
docker@boot2docker:~$
docker@boot2docker:~$ docker run -it ubuntu bash
Unable to find image 'ubuntu:latest' locally
ubuntu:latest: The image you are pulling has been verified
53f858aaaf03: Pull complete
837339b91538: Pull complete
615c102e2290: Pull complete
b39b81afc8ca: Pull complete
511136ea3c5a: Already exists
Status: Downloaded newer image for ubuntu:latest


root@ea7e3289a01a:/# pwd
/
root@ea7e3289a01a:/# df -h
Filesystem      Size  Used Avail Use% Mounted on
rootfs           19G  269M   17G   2% /
none             19G  269M   17G   2% /
tmpfs          1005M     0 1005M   0% /dev
shm              64M     0   64M   0% /dev/shm
/dev/sda1        19G  269M   17G   2% /etc/hosts
tmpfs          1005M     0 1005M   0% /proc/kcore
root@ea7e3289a01a:/# ls
bin   dev  home  lib64  mnt  proc  run   srv  tmp  var
boot  etc  lib   media  opt  root  sbin  sys  usr
root@ea7e3289a01a:/# exit
exit


docker@boot2docker:~$ pwd
/home/docker
docker@boot2docker:~$ ls
boot2docker, please format-me
docker@boot2docker:~$ exit
[Press any key to exit]

brb@NCI-01825357 /c/Program files/Boot2Docker for Windows
$ boot2docker down

brb@NCI-01825357 /c/Program files/Boot2Docker for Windows
$
$ boot2docker --help
Usage: c:\Program files\Boot2Docker for Windows\boot2docker.exe [<options>] <command> [<args>]

Boot2Docker management utility.

Commands:
   init                Create a new Boot2Docker VM.
   up|start|boot       Start VM from any states.
   ssh [ssh-command]   Login to VM via SSH.
   save|suspend        Suspend VM and save state to disk.
   down|stop|halt      Gracefully shutdown the VM.
   restart             Gracefully reboot the VM.
   poweroff            Forcefully power off the VM (may corrupt disk image).
   reset               Forcefully power cycle the VM (may corrupt disk image).
   delete|destroy      Delete Boot2Docker VM and its disk image.
   config|cfg          Show selected profile file settings.
   info                Display detailed information of VM.
   ip                  Display the IP address of the VM's Host-only network.
   shellinit           Display the shell commands to set up the Docker client.
   status              Display current state of VM.
   download            Download Boot2Docker ISO image.
   upgrade             Upgrade the Boot2Docker ISO image (restart if running).
   version             Display version information.

Options:
      --basevmdk="": Path to VMDK to use as base for persistent partition
      --clobber=false: overwrite Docker client binary on boot2docker upgrade
      --dhcp=true: enable VirtualBox host-only network DHCP.
      --dhcpip=192.168.59.99: VirtualBox host-only network DHCP server address.
....
  -v, --verbose=false: display verbose command invocations.
      --vm="boot2docker-vm": virtual machine name.
      --waittime=300: Time in milliseconds to wait between port knocking retries during 'start'
error in run: config error: pflag: help requested

brb@NCI-01825357 /c/Program files/Boot2Docker for Windows

The big picture


                           start.sh                      docker run -it ubuntu bash
Git Bash                  ---------->  boot2docker-vm       ------------->   ubuntu
                                   docker@boot2docker:
   <-------               <----------                       <------------- 
   boot2docker down           exit                                 exit
   (shutdown boot2docker) (boot2docker-vm is still on)
    |
    |
    |  boot2docker up (start boot2docker)
    |
    |  boot2docker ssh (log into docker acct)
    |
    v
   boot2docker-vm
   docker@boot2docker

Increase boot2docker vmdk space

https://docs.docker.com/articles/b2d_volume_resize/

Install utilities in Boot2docker VM

http://blog.tutum.co/2014/11/05/how-to-use-docker-on-windows/

For example, to install cifs-utils,

wget http://distro.ibiblio.org/tinycorelinux/5.x/x86/tcz/cifs-utils.tcz
tce-load -i cifs-utils.tcz

WSL2

Mac

  • https://docs.docker.com/desktop/mac/
  • Alternatives to Docker Desktop for Mac? Rancher is recommended. 2022-06-08
  • Vagrant method. If you have Mac, you don't have to use boot2docker (iso & its management tool). You can use other Linux which comes with docker pre-installed. See this post.
  • To avoid the message Error: `brew cask` is no longer a `brew` command. Use `brew <command> --cask` instead, use
    brew install --cask docker
    

Raspberry Pi

ARM architecture from hub.docker.com

curl -sSL https://get.docker.com | sh
  • UDOO Quad running Armbian 20.04
    • The instruction on official Docker website does not work
    • The curl command method above does not work
    • sudo apt-get install -y docker.io works (docker -v shows it is 19.03.8). After that, run sudo usermod -aG docker $USER and log out/in.
  • See Odroid magazine 2015 January and 2015 February. Note that the current versions of Docker and Docker Hub are not aware of the architecture for which the image has been built. All standard images are intended for the x86 architecture, and the autobuild feature offered by the Docker registry is only available for x86.
  • NVIDIA Jetson Nano Developer Kit - Introduction, Redis running inside Docker container on NVIDIA Jetson Nano
sudo apt install curl
curl -sSL https://get.docker.com/ | sh

docker-compose

Some examples*

Note that I use the arm64 image on my Pi3b+.

Images from https://www.linuxserver.io/. Some indices include number of pulls and stars.

List of tz database time zones

Portainer. The port number is 9000. Note the stack will be deployed using the equivalent of docker-compose. Only Compose file format version 2 is supported at the moment.

Samba. Tested on iOS, Ubuntu & Windows 10.

mkdir -p /mnt/usb/share/{data,backups}
mkdir /mnt/usb/share/data/{alice,bob,documents}
touch /mnt/usb/share/backups/backupsfile
touch /mnt/usb/share/data/bob/bobfile
touch /mnt/usb/share/data/documents/documentfile

docker run -d -p 445:445 \
  -v /mnt/usb/share/data:/share/data \
  -v /mnt/usb/share/backups:/share/backups \
  --name rpi-samba trnape/rpi-samba \
  -u "alice:abc123" \
  -u "bob:secret" \
  -u "guest:guest" \
  -s "Backup directory:/share/backups:rw:alice,bob" \
  -s "Bob (private):/share/data/bob:rw:bob" \
  -s "Documents (readonly):/share/data/documents:ro:guest,alice,bob" 

On Windows, 1) right click on 'This PC' and choose 'Add a network location'. 2) type \\192.168.1.249\ and the dropdown list will populate all available folders. 3) choose the one (e.g. Bob) and then enter the credential. Done. On Ubuntu, just type smb://192.168.1.249/. It will then populate the available folders.

Nginx

mkdir -p /mnt/usb/docker-nginx/html
echo "hello world" >> /mnt/usb/docker-nginx/html/index.html
nano /mnt/usb/docker-nginx/html/sharefile
docker run --name rpi-nginx -p 8086:80 \
  --restart always \
  -v /mnt/usb/docker-nginx/html:/usr/share/nginx/html \
  -d nginx:stable-alpine

# Or a stack file
version: '2'
services:
    nginx:
        container_name: rpi-nginx
        ports:
            - '8086:80'
        restart: always
        volumes:
            - '/mnt/usb/docker-nginx/html:/usr/share/nginx/html'
        image: nginx:stable-alpine

Note: consider using a samba share folder (see above) as the nginx document root.

cp /mnt/usb/docker-nginx/html/* /mnt/usb/share/data/bob/
rm -rf /mnt/usb/docker-nginx/html
ln -s /mnt/usb/share/data/bob/ /mnt/usb/docker-nginx/html

Rpi-monitor. I need to change /dev/vcsm to /dev/vcsm-cma. But the temperature part is not working. I am using 64-bit Raspberry Pi OS and it does not show attached USB disks. The port number is 8888.

code-server

---
version: "2.1"
services:
  code-server:
    image: ghcr.io/linuxserver/code-server
    container_name: code-server
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=America/New_York
      - PASSWORD=password #optional
      - SUDO_PASSWORD=password #optional
    volumes:
      - /mnt/usb/code-server:/config
    ports:
      - 8443:8443
    restart: unless-stopped

mstream Music streaming. Works great.

emby does not work on arm64. It works on x86 though. Even after I copy an mp4 file to the movies directory, the movie does not show up :(

version: '2.1'
services:
    embyserver:
        container_name: emby
        network_mode: bridge
        restart: always
        environment:
            - VERSION=latest
            - UID=1000
            - GID=1000
            - TZ=America/Denver
        volumes:
            - /media/crucial/emby/config:/config
            - /media/crucial/emby/tv:/mnt/tv
            - /media/crucial/emby/movies:/mnt/movies
        ports:
            - 8096:8096            
        image: 'emby/embyserver:latest'

jellyfin Jellyfin is descended from Emby's 3.5.2 release and ported to the .NET Core framework to enable full cross-platform support. How to Install Jellyfin on Docker with Portainer

plex We can access the plex server via http://IP:32400/web. Note that in the first server setup, we need to add a 'Library' by choosing the new library name (e.g. Other Videos) shown on plex & the data source (e.g. /data) so our own media can be found. After we add new media files we can rescan by clicking the vertical 3-dots icon and selecting 'scan library files'. The Pi3b+ is still a little weak since I can see all threads are busy when I play an mp4 file.

mkdir -p /mnt/usb/plex/{config,data}
cp FILENAme.mp4 /mnt/usb/plex/data
docker run \
  -d \
  --name plex \
  --net host \
  -p 32400:32400 \
  --restart always \
  --volume /mnt/usb/plex/config:/config \
  --volume /mnt/usb/plex/data:/data \
  greensheep/plex-server-docker-rpi:latest

WARNING: The requested image's platform (linux/arm) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested

Nextcloud.

sudo mkdir -p /srv/dev-disk-by-label-Files/Databases/NextCloud
sudo mkdir -p /srv/dev-disk-by-label-Files/Config/Nextcloud

After that, copy and paste the stack into portainer. Wait for a few minutes on RPi3. The port number is 8080. Now we can create the admin username/password such as nextcloud/nextcloud. Click the little triangle next to "Storage and Database". Change to MySQL. In the next part we enter nextcloud/nextcloud/nextcloud/db (note the "db" replaces localhost b/c we use "db" as the service name). Again, wait for a few minutes.
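
The stack itself is not reproduced here; as a rough shell equivalent (passwords and image tags are placeholders, and the container named "db" is what the setup wizard's database host refers to):

docker network create nextcloud-net
docker run -d --name db --network nextcloud-net \
  -v /srv/dev-disk-by-label-Files/Databases/NextCloud:/var/lib/mysql \
  -e MYSQL_ROOT_PASSWORD=nextcloud -e MYSQL_DATABASE=nextcloud \
  -e MYSQL_USER=nextcloud -e MYSQL_PASSWORD=nextcloud \
  mariadb
docker run -d --name nextcloud --network nextcloud-net -p 8080:80 \
  -v /srv/dev-disk-by-label-Files/Config/Nextcloud:/var/www/html \
  nextcloud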

Heimdall (Dashboard for web apps). I keep the PUID (1000) and PGID (1000). The instructions say these come from the admin user account, but I don't find an admin account. Change the volume to /srv/dev-disk-by-label-Files/Config/Heimdall (use sudo mkdir to create the directory in a terminal). Change the port to 83 & remove port 443. Define the endpoint from Portainer -> Endpoints -> local -> Public IP as raspberrypi.local (depending on your hostname). We need to wait a little bit. Now go to the container, find heimdall, and click the port in order to open the website correctly (instead of 0.0.0.0). I can add apps like nextcloud, portainer, pi-hole, other servers, etc. The Application Type entry has a good list of popular apps and it will pre-populate the button icon and the background color for our app.

taisun The default port is 3000

yacht. The default login is [email protected] and pass. The name shown on portainer is pedantic_hermann

docker volume create yacht
docker run -d -p 8001:8000 -v /var/run/docker.sock:/var/run/docker.sock -v yacht:/config selfhostedpro/yacht

CloudFlare DDNS - Update CloudFlare with Your Dynamic IP Address

WatchTower
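
A minimal sketch of running Watchtower (the containrrr/watchtower image needs access to the Docker socket; the check interval below is just an example):

docker run -d --name watchtower \
  -v /var/run/docker.sock:/var/run/docker.sock \
  containrrr/watchtower --cleanup --interval 86400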

bitwardenrs. Use the terminal to create a volume first. The port number is 8100. This is straightforward.

Duplicati for backup.

photoshow. It works. It has a slideshow button. PhotoShow only displays videos in WebM.

R. r-base provides an arm64 image but not a 32-bit arm image.

# 64-bit OS
docker pull r-base
docker run -it --rm r-base   # enter R directly

rocker/rstudio DOES NOT work on arm64 even though I can pull it. WARNING: The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested

I built a 32-bit armv7 image for r-base v4.0.2. This image works on either 32-bit or 64-bit arm OS (tested on 32-/64-bit Raspberry Pi and other 32-bit SBC devices).

docker pull arraytools/r402armv7
docker run -it --rm arraytools/r402armv7 R
docker pull r-base
# Using default tag: latest
# latest: Pulling from library/r-base
# no matching manifest for linux/arm/v7 in the manifest list entries

How and Why to Use A Remote Docker Host
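
For example, the docker CLI can talk to a daemon on another machine over SSH (user and hostname are placeholders):

docker -H ssh://pi@raspberrypi.local ps             # one-off command against the remote daemon
docker context create pi --docker "host=ssh://pi@raspberrypi.local"
docker context use pi                               # subsequent docker commands go to the remote host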

Backup

Usage

Basics, docs, cheatsheet, introduction

Note that sudo is needed unless you are on macOS or your user has been added to the docker group.

If docker cannot find an image, it will try to pull it from its repository.

$ sudo docker run -it ubuntu /bin/bash
Unable to find image 'ubuntu' locally
Pulling repository ubuntu
04c5d3b7b065: Download complete 
511136ea3c5a: Download complete 
c7b7c6419568: Download complete 
70c8faa62a44: Download complete 
d735006ad9c1: Download complete 
root@ec83b3ac878d:/# 
purpose                                          command
run a container                                  docker container run -d -p 80:80 httpd
list running containers                          docker container ls
view logs of a Docker container                  docker container logs cranky_cori
identify Docker container processes              docker container top cranky_cori
stop a Docker container                          docker container stop cranky_cori
list stopped or not running Docker containers    docker container ls -a
start a Docker container                         docker container start c46f2e9e4690
remove a Docker container                        docker container rm cranky_cori
list Docker images                               docker images
remove a Docker image                            docker rmi iman/touch

Restart docker daemon

When I tried Chapter 5 > Continuous integration (Jenkins) of the Docker Book, I found I could not stop/kill the container. See others' reports here. The solution is to restart the docker daemon.

sudo service docker start

After that, I can stop and rm the container.

sudo docker stop jenkins
sudo docker rm jenkins
sudo docker ps -a

images vs containers

$ sudo docker images
REPOSITORY                     TAG                 IMAGE ID            CREATED              VIRTUAL SIZE
iman                           latest              6e0f5644b2fd        About a minute ago   460.4 MB
iman/touch                     latest              77b9ac5951c2        4 minutes ago        460.4 MB
<none>                         <none>              aaa75e64ddf0        5 weeks ago          188.3 MB
ouruser/sinatra                v2                  ea8c9f407a8d        5 weeks ago          447 MB
ubuntu                         14.04               ed5a78b7b42b        5 weeks ago          188.3 MB
ubuntu                         latest              ed5a78b7b42b        5 weeks ago          188.3 MB
eddelbuettel/docker-ubuntu-r   add-r-devel-san     3c19d078c5d9        3 months ago         460.4 MB
hello-world                    latest              ef872312fe1b        4 months ago         910 B
training/sinatra               latest              f0f4ab557f95        8 months ago         447 MB

$ sudo docker ps -a
CONTAINER ID IMAGE                                          COMMAND              CREATED        STATUS                   PORTS NAMES
8fbdbcdb5126 iman/touch:latest                              "/bin/bash"          2 minutes ago  Exited (0) 2 minutes ago       thirsty_engelbart   
dc9e82f2c00a eddelbuettel/docker-ubuntu-r:add-r-devel-san   "/bin/bash"          9 minutes ago  Exited (0) 3 minutes ago       kickass_bardeen     
532a90f36aa8 eddelbuettel/docker-ubuntu-r:add-r-devel-san   "/bin/bash"          18 hours ago   Exited (0) 18 hours ago        happy_lalande       
7634024ee0bf eddelbuettel/docker-ubuntu-r:add-r-devel-san   "/bin/bash"          18 hours ago   Exited (0) 18 hours ago        insane_mclean       
14034a9720cb eddelbuettel/docker-ubuntu-r:add-r-devel-san   "/bin/bash"          18 hours ago   Exited (0) 18 hours ago        naughty_lumiere     
ca90954628db eddelbuettel/docker-ubuntu-r:add-r-devel-san   "/bin/bash"          19 hours ago   Exited (130) 18 hours ago      sick_hawking        
8bbdcb7c339f eddelbuettel/docker-ubuntu-r:add-r-devel-san   "/bin/bash"          19 hours ago   Exited (0) 19 hours ago        modest_davinci      
e8e24f80f0dd aaa75e64ddf0                                   "/bin/sh -c 'apt-get 5 weeks ago    Exited (100) 5 weeks ago       berserk_hodgkin     
d41959e0eb55 aaa75e64ddf0                                   "/bin/sh -c 'apt-get 5 weeks ago    Exited (100) 5 weeks ago       jovial_curie        
b408c0e2805b aaa75e64ddf0                                   "/bin/sh -c 'apt-get 5 weeks ago    Exited (100) 5 weeks ago       lonely_tesla        
72a551e4b492 ouruser/sinatra:v2                             "/bin/bash"          5 weeks ago    Exited (0) 5 weeks ago         jolly_meitner       
75fd6cc4658b training/sinatra:latest                        "/bin/bash"          5 weeks ago    Exited (0) 5 weeks ago         evil_yalow          
cc8886f5a02e training/sinatra:latest                        "/bin/bash"          5 weeks ago    Exited (130) 5 weeks ago       elegant_curie       
0585e4f5fecd eddelbuettel/docker-ubuntu-r:add-r-devel-san   "/bin/bash"          5 weeks ago    Exited (0) 5 weeks ago         elated_euclid       
brb@brbweb4:~/Downloads$ 

When we want to delete a container, we use the container's CONTAINER ID or NAME (the last column of the docker ps -a output). But when we want to delete an image, we use the image's REPOSITORY or IMAGE ID (the 1st or 3rd column of the docker images output).

$ sudo docker rm thirsty_engelbart  # iman/touch
$ sudo docker rm dc9e82f2c00a       # eddelbuettel/docker-ubuntu-r:add-r-devel-san
$ sudo docker ps -a   # check to see the container is gone now

$ sudo docker rmi 6e0f5644b2fd
$ sudo docker rmi iman/touch
$ sudo docker images  # check to see the images are gone now

Command line interface, CLI

https://docs.docker.com/engine/reference/commandline/cli/ Docker command line

$ docker

Usage:	docker COMMAND

A self-sufficient runtime for containers

Options:
      --config string      Location of client config files (default "/home/brb/.docker")
  -D, --debug              Enable debug mode
  -H, --host list          Daemon socket(s) to connect to
  -l, --log-level string   Set the logging level ("debug"|"info"|"warn"|"error"|"fatal") (default "info")
      --tls                Use TLS; implied by --tlsverify
      --tlscacert string   Trust certs signed only by this CA (default "/home/brb/.docker/ca.pem")
      --tlscert string     Path to TLS certificate file (default "/home/brb/.docker/cert.pem")
      --tlskey string      Path to TLS key file (default "/home/brb/.docker/key.pem")
      --tlsverify          Use TLS and verify the remote
  -v, --version            Print version information and quit

Management Commands:
  config      Manage Docker configs
  container   Manage containers
  image       Manage images
  network     Manage networks
  node        Manage Swarm nodes
  plugin      Manage plugins
  secret      Manage Docker secrets
  service     Manage services
  swarm       Manage Swarm
  system      Manage Docker
  trust       Manage trust on Docker images
  volume      Manage volumes

Commands:
  attach      Attach local standard input, output, and error streams to a running container
  build       Build an image from a Dockerfile
  commit      Create a new image from a container's changes
  cp          Copy files/folders between a container and the local filesystem
  create      Create a new container
  diff        Inspect changes to files or directories on a container's filesystem
  events      Get real time events from the server
  exec        Run a command in a running container
  export      Export a container's filesystem as a tar archive
  history     Show the history of an image
  images      List images
  import      Import the contents from a tarball to create a filesystem image
  info        Display system-wide information
  inspect     Return low-level information on Docker objects
  kill        Kill one or more running containers
  load        Load an image from a tar archive or STDIN
  login       Log in to a Docker registry
  logout      Log out from a Docker registry
  logs        Fetch the logs of a container
  pause       Pause all processes within one or more containers
  port        List port mappings or a specific mapping for the container
  ps          List containers
  pull        Pull an image or a repository from a registry
  push        Push an image or a repository to a registry
  rename      Rename a container
  restart     Restart one or more containers
  rm          Remove one or more containers
  rmi         Remove one or more images
  run         Run a command in a new container
  save        Save one or more images to a tar archive (streamed to STDOUT by default)
  search      Search the Docker Hub for images
  start       Start one or more stopped containers
  stats       Display a live stream of container(s) resource usage statistics
  stop        Stop one or more running containers
  tag         Create a tag TARGET_IMAGE that refers to SOURCE_IMAGE
  top         Display the running processes of a container
  unpause     Unpause all processes within one or more containers
  update      Update configuration of one or more containers
  version     Show the Docker version information
  wait        Block until one or more containers stop, then print their exit codes

Run 'docker COMMAND --help' for more information on a command.

Version, system information

Docker version

$ docker version
Client:
 Version:           18.06.1-ce
 API version:       1.38
 Go version:        go1.10.3
 Git commit:        e68fc7a
 Built:             Tue Aug 21 17:24:51 2018
 OS/Arch:           linux/amd64
 Experimental:      false

Server:
 Engine:
  Version:          18.06.1-ce
  API version:      1.38 (minimum version 1.12)
  Go version:       go1.10.3
  Git commit:       e68fc7a
  Built:            Tue Aug 21 17:23:15 2018
  OS/Arch:          linux/amd64
  Experimental:     false

System information.

  • what mode the Docker engine is operating in (swarm mode or not)
  • what storage driver is used for the union filesystem
  • what version of the Linux kernel we have on our host
  • et al
$ docker system info
Containers: 2
 Running: 0
 Paused: 0
 Stopped: 2
Images: 10
Server Version: 18.06.1-ce
Storage Driver: overlay2
 Backing Filesystem: extfs
 Supports d_type: true
 Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host macvlan null overlay
 Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 468a545b9edcd5932818eb9de8e72413e616e86e
runc version: 69663f0bd4b60df09991c08812a60108003fa340
init version: fec3683
Security Options:
 apparmor
 seccomp
  Profile: default
Kernel Version: 4.15.0-33-generic
Operating System: Ubuntu 18.04.1 LTS
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 7.674GiB
Name: t420s
ID: VLWB:6BN3:U7KB:L4T4:GQIB:54F3:YZKJ:PAIR:HEUM:UQIC:XLZU:3IFJ
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
 127.0.0.0/8
Live Restore Enabled: false

WARNING: No swap limit support

List resource consumption

$ docker system df
TYPE                TOTAL               ACTIVE              SIZE                RECLAIMABLE
Images              10                  2                   2.58GB              1.519GB (58%)
Containers          2                   0                   304B                304B (100%)
Local Volumes       2                   0                   314.7MB             314.7MB (100%)
Build Cache         0                   0                   0B                  0B

$ docker system df -v  # more detailed information
# We can use the information to clean up our system

A brief intro to docker virtualization

docker search --help
docker search redis
docker search -s 100 redis
docker pull --help
docker pull ubuntu # download all versions of ubuntu
docker images    # available local container images
docker pull centos:latest
docker run --help
cat /etc/issue   # look at the current distr name before running docker
docker run -it centos:latest /bin/bash
                 # create a container & execute as a sudo

cat /etc/redhat-release
yum
cd /home
touch temp.txt
ls
exit

docker ps   # current running processes
docker ps -a # show all processes including closed
docker restart c85850ed0e13
docker ps   # container c85850ed0e13 is running
docker attach c85850ed0e13 # log into the system

ls /home
exit

docker ps -a
docker rm c85850ed0e13 # delete the container

Note: Following the discussion, attach can only connect to the single shell started with the container. If we use exec, we can launch multiple shell instances.

sudo docker exec -i -t c85850ed0e13 bash #by ID
or
$ sudo docker exec -i -t loving_heisenberg bash #by Name

Rootless mode

  • Run the Docker daemon as a non-root user (Rootless mode)
    • The data dir is set to ~/.local/share/docker by default. The data dir should not be on NFS.
  • Setup on Ubuntu 22.04
    curl -fsSL https://get.docker.com | bash
    sudo apt install -y uidmap
    dockerd-rootless-setuptool.sh install
    nano ~/.bashrc
    # append the lines printed by dockerd-rootless-setuptool.sh, typically something like:
    #   export PATH=/usr/bin:$PATH
    #   export DOCKER_HOST=unix:///run/user/1000/docker.sock
    source ~/.bashrc
    
    systemctl --user start docker
    systemctl --user enable docker
    sudo loginctl enable-linger $(whoami)
    
    docker run hello-world
    docker run --rm -ti r-base:4.4.1
  • Unfortunately, Rocker/rstudio does not work. I am not able to log in using username/password. It keeps saying incorrect username/password.
  • Limitations:
    • Performance Overhead
      • OverlayFS Limitations: Rootless Docker uses fuse-overlayfs instead of OverlayFS by default, which can be slower.
      • Resource Limits: The performance might be slightly lower compared to running Docker with root privileges due to additional user namespace operations.
    • Network Restrictions
      • Network Drivers: Only the bridge and host network drivers are supported. macvlan and overlay network drivers are not supported.
      • Port Binding: Binding to ports below 1024 is not allowed. Only non-privileged ports (1024 and above) can be used.
    • File System
      • Volume Permissions: Issues with file permissions can arise when mounting volumes from the host, as the files created by rootless Docker processes will be owned by the user running Docker, not root.
      • NFS and Other Filesystems: Certain filesystems like NFS might have compatibility issues with rootless Docker due to permission and ownership constraints.
    • Compatibility
      • Certain Features: Some Docker features might not be fully supported or behave differently. For example, checkpoint/restore and cgroup v1 are not supported.
      • Security Features: Some security features like AppArmor, SELinux, and seccomp might have limited functionality or require additional configuration.
    • Configuration Complexity
    • Troubleshooting
  • What's the difference between rootless Docker, running a container as a non-root user, and Podman?
  • Docker Running In Rootless Mode
  • Going rootless with Docker and Containers
  • How to Run Rootless Docker Containers

docker pull

https://docs.docker.com/engine/reference/commandline/pull/

$ docker pull ubuntu:zesty
$ docker run -ti --rm ubuntu:zesty /bin/bash 
# lsb_release -a         
bash: lsb_release: command not found
# cat /etc/*release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=17.04
DISTRIB_CODENAME=zesty
DISTRIB_DESCRIPTION="Ubuntu 17.04"
NAME="Ubuntu"
VERSION="17.04 (Zesty Zapus)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 17.04"
VERSION_ID="17.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=zesty
UBUNTU_CODENAME=zesty

Update/upgrade images

docker compose pull && docker compose up -d
docker compose up --pull always -d

<none>:<none> images
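
These <none>:<none> entries are usually dangling images (intermediate or superseded layers). For example:

docker images -f dangling=true   # list the <none>:<none> images
docker image prune               # remove dangling images (asks for confirmation)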

Exit/detach from a container without stopping it

$ docker container run -it ubuntu:latest /bin/bash
# Ctrl+p, Ctrl+q to exit the container without terminating it
$ docker ps -a # showing the container 70c5aceb5512 is running in the background

# You can reattach your terminal to it with the "docker container exec" command
$ docker container exec -it 70c5aceb5512 bash

How to start a stopped Docker container with a different command

How to start a stopped Docker container with a different command?
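
The usual workaround is to commit the stopped container to a temporary image and start that image with the new command (names below are placeholders):

docker commit my_stopped_container tmp/debug
docker run -it --rm tmp/debug /bin/bash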

Clean shutdown DOCKER containers before reboot
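
For example, to stop every running container and give each one 30 seconds to exit cleanly before rebooting:

docker stop -t 30 $(docker ps -q)
sudo reboot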

Dockerizing Applications/Detached mode

$ sudo docker run -d --name insane_babbage ubuntu:14.04 /bin/sh -c "while true; do echo hello world; sleep 1; done"
$ sudo docker ps -l
$ sudo docker logs insane_babbage
$ sudo docker stop insane_babbage
$ sudo docker ps

The -d flag tells Docker to run the container and put it in the background, to daemonize it.

According to https://docs.docker.com/engine/reference/run/#detached-vs-foreground, containers started in detached mode exit when the root process used to run the container exits, unless you also specify the --rm option. If you use -d with --rm, the container is removed when it is stopped, exits or when the daemon exits, whichever happens first.

Automatically restart after reboot

https://stackoverflow.com/questions/18786054/how-to-auto-restart-a-docker-container-after-a-reboot-in-coreos

Add a --restart=always parameter. It will always restart a stopped container unless it has been explicitly stopped, such as via a "docker container stop" command. See the following

$ docker run -d --restart always myCustomeDocker

$ docker container run --name neverdie -it --restart always ubuntu /bin/bash
# exit
$ docker ps -a  # the container is still there
$ docker stop neverdie
$ docker ps -a

Working with Containers

$ sudo docker run -i -t ubuntu /bin/bash
$ sudo docker version
$ sudo docker
$ sudo docker attach --help

Environment variables
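
Environment variables can be passed to a container with -e or read from a file with --env-file, e.g.:

docker run --rm -e MY_VAR=hello -e TZ=America/New_York alpine env
docker run --rm --env-file ./myapp.env alpine env   # myapp.env holds KEY=value lines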

Docker container ID

  • The full container ID is a hexadecimal string of 64 characters.
  • The minimum number of characters required for a Docker ID is 4.
  • We can use a shorter ID in docker command if that ID uniquely determined the container. For example, docker exec -it 9608 bash or even docker exec -it 9 bash works.

Alpine image

apk add htop

Running a Web Application

$ sudo docker run -d -P training/webapp python app.py

Alpine linux is 6MB. It is a good OS to run a web application. See the demo here.

Viewing our Web Application Container

$ sudo docker ps -l
$ sudo docker run -d -p 5000:5000 training/webapp python app.py

Check container status (docker stats) - CPU, Memory usage
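
For example:

docker stats                 # live CPU, memory, network and block I/O usage per container
docker stats --no-stream     # print one snapshot and exit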

Container networking

Host network

  • If you use the host network driver for a container, that container’s network stack is not isolated from the Docker host. For instance, if you run a container which binds to port 80 and you use host networking, the container’s application will be available on port 80 on the host’s IP address.
  • One good example is if I want to use tailscale network from my host in Uptime Kuma container. See HERE.
  • Considerations. While host networking can be powerful, it's important to consider:
    • Security implications: Host networking reduces network isolation, potentially increasing security risks.
    • Port conflicts: Services using host networking may conflict with other applications running on the host machine.
    • Platform limitations: Host network mode only works on Linux hosts, not on Docker Desktop for Mac or Windows.
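
As a minimal sketch of the host driver described above (Linux host assumed; the web server then listens directly on the host's port 80):

docker run --rm -d --network host nginx
curl http://localhost:80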

ping, ifconfig and ip commands not found in Ubuntu container

apt update
apt install iputils-ping  # ping 
apt install net-tools     # ifconfig
apt install iproute2      # ip

Network Port Shortcut

$ sudo docker port nostalgic_morse 5000

Access Ports on the Host from a Docker Container

How to Access Ports on the Host from a Docker Container
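
One common approach (Docker 20.10+ assumed; the port is a placeholder) is to map the special name host.docker.internal to the host's gateway IP:

docker run --rm --add-host=host.docker.internal:host-gateway alpine \
  wget -qO- http://host.docker.internal:8000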

Multiple NICs

containers in docker to use public ip addresses directly

Viewing the Web Application's Logs

Clear Logs of Running Docker Containers
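
With the default json-file logging driver, the log can be emptied in place (needs root on the host; the container name is a placeholder):

sudo truncate -s 0 "$(docker inspect --format='{{.LogPath}}' my_container)"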

Looking at our Web Application Container's processes

$ sudo docker top nostalgic_morse

Inspecting our Web Application Container

$ sudo docker inspect nostalgic_morse

Obtain the container's IP address, log into a running server

PS. Portainer web interface can show the IP addresses.

$ docker inspect <container id> | grep "IPAddress"

We don't need the IP address if we just want to log into a running server,

$ docker exec -it <container id> bash

How to Secure Docker’s TCP Socket

How to Secure Docker’s TCP Socket with TLS

docker attach

Suppose I run docker run -it --user rstudio bioconductor/bioconductor_docker:devel R and I use q() to quit the container. The container is still there. To re-enter the R in the container, I use

docker start XXXXXXXX    # restart it in the background
docker attach XXXXXXXX   # reattach the terminal to a running container

If we want the latest created container, then we use

docker start `docker ps -q -l` && docker attach `docker ps -q -l`

docker exec: SSH into a running container

Run a command in a running container

  • Usage:
    docker exec [OPTIONS] CONTAINER COMMAND [ARG...]
  • Examples:
    $ docker exec -d ubuntu_bash touch /tmp/execWorks # do something in the background
    
    $ docker exec -it ubuntu_bash bash
    
    $ docker exec -it -e VAR=1 ubuntu_bash bash # set an environment variable
    
    $ docker exec -it ubuntu_bash pwd
    $ docker exec -it -w /root ubuntu_bash pwd # change the working directory
  • How to Run a Command on a Running Docker Container
  • How to Use the Docker exec Command. nginx container is used as an example.
    docker run --name docker-nginx -p 8080:80 -d nginx
    
    # method 1. Access the Running Container’s Shell
    docker exec -it ID /bin/bash
      apt-get update
      apt-get upgrade -y
      exit
    
    # method 2. Run a Command from Outside the Container
    docker exec ID apt-get update && docker exec ID apt-get upgrade -y
    
    docker exec ID cat /usr/share/nginx/html/index.html
    docker cp index.html ID:/usr/share/nginx/html/
    docker exec ID cat /usr/share/nginx/html/index.html
    

docker cp

Copy files/folders between a container and the local filesystem.
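
For example (container and path names are placeholders):

docker cp my_container:/var/log/nginx/access.log ./access.log   # container -> host
docker cp ./index.html my_container:/usr/share/nginx/html/      # host -> container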

Restart an exited Container

$ docker start nostalgic_morse
OR
$ docker restart nostalgic_morse

For an interactive container, use docker start -ai CONTAINER which is equal to run "docker start CONTAINER" and "docker attach CONTAINER".

Rename a container

docker container rename

docker container rename CONTAINER NEW_NAME

Inspect container images and their metadata
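
For example (nginx:latest is just a sample image):

docker image inspect --format '{{.Os}}/{{.Architecture}} created {{.Created}}' nginx:latest
docker history nginx:latest   # the layers that make up the image and their sizes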

Know the container size

docker ps -s

Meaning of two sizes

  • The "size" information shows the amount of data (on disk) that is used for the writable layer of each container
  • The "virtual size" is the amount of disk-space used for the read-only image data used by the container.

Removing our Web Application Container

$ sudo docker stop nostalgic_morse
$ sudo docker rm nostalgic_morse

Note: Always remember that deleting a container is final!

Dockerize an SSH service

https://docs.docker.com/engine/examples/running_ssh_service/#environment-variables

Remove old docker containers

This post on stackoverflow.com.

$ sudo docker ps -a | grep 'weeks ago' | awk '{print $1}' | xargs --no-run-if-empty sudo docker rm

Similarly to remove all exited containers

$ sudo docker ps -a | grep Exit | awk '{print $1}' | xargs sudo docker rm

To kill/stop (not delete) all running containers

$ sudo docker kill $(sudo docker ps -q)

To delete all stopped containers

$ sudo docker rm $(sudo docker ps -a -q)
OR
$ sudo docker rm `sudo docker ps -a -q`

It is also helpful to create bash aliases for these commands by editing ~/.bash_aliases file.

docker create vs docker run

https://stackoverflow.com/questions/37744961/docker-run-vs-create

docker create is similar to docker run -d except the container is never started.

Retrieve docker run command

https://stackoverflow.com/a/32774347. See the github page of runlike. So it is better to put the docker run in a stack. Then for example the Portainer has an Editor tab to show the compose file.

docker run --rm -v /var/run/docker.sock:/var/run/docker.sock \
    assaflavie/runlike -p CONTAINER_NAME

The -p option splits the output into pretty lines.

docker run -it and -d together
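
They can be combined: -d starts the container in the background while -it keeps a tty and STDIN allocated so you can attach later. For example:

docker run -dit --name sleepy ubuntu bash   # detached, but interactive once attached
docker attach sleepy                        # Ctrl+p Ctrl+q detaches again without stopping it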

How to Modify the Configuration of Running Docker Containers

How to Modify the Configuration of Running Docker Containers
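
Some settings of a running container can be changed on the fly with docker update (the container name and limits below are placeholders):

docker update --restart unless-stopped my_container
docker update --cpus 1 --memory 512m --memory-swap 1g my_container   # may require cgroup/swap-limit support on the host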

Volume

Examples of host's volume locations

/home/$USER/docker/$PROJECT/$SUB-DIRECTORY

PUID, PGID, share volume permission/owner

  • Understanding PUID and PGID (or the source)
  • You should use the -e PUID and -e PGID options when creating a container from a Docker image to map the container’s internal user to a user on the host machine. This is useful because Docker runs all of its containers under the root user domain, which means that processes running inside your containers also run as root. This kind of elevated access is not ideal for day-to-day use and can potentially give applications access to things they shouldn’t. By using PUID and PGID, you can ensure that files and directories created during the container’s lifespan are owned by a user on the host machine instead of root.
  • Please note that not all Docker images support the PUID and PGID environment variables. The Docker image must be designed to use these variables. If you’re using an image that doesn’t support these variables, you may need to create a Dockerfile to build a new image that does.
  • The following works. The --user option is a built-in Docker feature that sets the user (and optionally the group) that is used to run the container. This option works regardless of whether the Docker image uses any specific environment variables. PS. "docker" user has been defined in the r-base's Dockerfile.
    docker run --rm -ti --user docker \
      -v "$(pwd)":/workspace r-base 
    > setwd("/workspace")
    > save(iris, file="iris.rda")
    > system("ls -lt") # docker docker instead of $USER $USER
    > unlink("iris.rda")
  • Similarly, the --user option works with rocker/rstudio image and ubuntu.
    docker run --rm -ti --user rstudio \
      -v "$(pwd)":/workspace rocker/rstudio R
    > setwd("/workspace")
    > save(iris, file="iris.rda")
    > system("ls -lt")
    > unlink("iris.rda")

    Note that the prompt is $ rather than #.

    docker run --rm -it -v $(pwd):/home --user ubuntu \
       ubuntu bash
    $ id
    uid=1000(ubuntu) gid=1000(ubuntu) groups=1000(ubuntu),4(adm),20(dialout),24(cdrom),25(floppy),27(sudo),29(audio),30(dip),44(video),46(plugdev)
    $ cd /home
    $ echo "newfile" > newfile
    
    docker run --rm -it -v $(pwd):/home --user "$(id -u):$(id -g)" \
       ubuntu bash
    $ cd /home
    $ echo "newfile" > newfile
  • In the article Sharing files with host machine from the Rocker's project, users are instructed to use -e USERID variable if the host machine user has a UID other than 1000. But the generated file 'iris.rda' from the following example is still owned by root:(
    docker run --rm -ti -v "$(pwd)":/workspace -e USERID=$UID rocker/rstudio R
  • (Cont.) however, if we run the above command as a daemon and log in using the user "rstudio" , it works even we don't specify the "-e USERID" option. The lesson is we should use the user defined in the docker image.
    docker run --rm -v "$(pwd)":/workspace -p 8787:8787 -e PASSWORD=123 rocker/rstudio
    

    Notice the prompt is # rather than $ and the user id is 0.

    docker run --rm -it -v $(pwd):/home -e PUID=1000 -e PGID=1000 \
      ubuntu bash
    # id
    uid=0(root) gid=0(root) groups=0(root)
    
  • In this video How to Install Calibre on OMV and Docker, it uses the command id admin where "admin" is the portainer user to get PUID (of "admin") and PGID (of "users") to find out the two ids.

Back Up Your Docker Volumes

How to Back Up Your Docker Volumes
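
A common pattern is to archive the volume's contents from a throwaway container (volume and file names are placeholders):

docker run --rm -v my-data:/data -v "$(pwd)":/backup alpine \
  tar czf /backup/my-data.tar.gz -C /data .
# restore into a (possibly new) volume
docker run --rm -v my-data:/data -v "$(pwd)":/backup alpine \
  tar xzf /backup/my-data.tar.gz -C /data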

Two ways to achieve persistent data
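
Roughly: a named volume managed by Docker, or a bind mount of a host directory, e.g.:

docker run -d --name web1 -v webdata:/usr/share/nginx/html nginx      # named volume
docker run -d --name web2 -v /srv/www:/usr/share/nginx/html nginx     # bind mount of a host path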

Inspect the 'Mountpoint' of a volume

$ docker volume create crv
$ docker volume ls

$ docker run -d \
     --name mycloud \
     -p 81:80 \
     -v apps:/var/www/html/custom_apps \
     nextcloud

# docker inspect is not quite useful. It does not show how the volume was created
# But we can examine (ls, du, ...) the directory contents
$ docker inspect apps   
[
    {
        "CreatedAt": "2018-10-23T09:41:52-04:00",
        "Driver": "local",
        "Labels": null,
        "Mountpoint": "/var/lib/docker/volumes/apps/_data",
        "Name": "apps",
        "Options": null,
        "Scope": "local"
    }
]

Remove an unnamed volume

If you want volumes to be removed automatically when a container is removed, you can use the --rm flag when starting the container with the "docker run" command. This flag tells Docker to automatically remove the container and any anonymous volumes associated with it when the container exits. However, this flag does not affect named volumes.

If you created an unnamed volume, it can be deleted at the same time as the container with the -v flag. Note that this only works with unnamed volumes.

docker rm -v container_name

If the volume is named, it stays present. To remove a named volume, use docker volume rm volume_name .

Volumes created in docker-compose

When you use docker-compose to create and manage containers, volumes are handled slightly differently than when using the docker run command.

In a "docker-compose.yml" file, you can specify named volumes using the volumes key at the top level of the file. These volumes are created when you run docker-compose up and are not automatically removed when you stop or remove the containers using docker-compose down.

If you want to remove named volumes created by docker-compose, you can use the -v flag with the docker-compose down command. Here’s an example command that stops and removes all containers defined in a docker-compose.yml file and also removes any named volumes:

docker-compose down -v

This command stops and removes all containers defined in the docker-compose.yml file and also removes any named volumes specified in the file. All data stored in the volumes will be permanently deleted.

Anonymous volumes created by docker-compose are not removed by docker-compose down unless you also pass the -v flag; without it they are simply left behind as dangling volumes.
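
As a sketch of the above (mirroring the nextcloud example earlier in this section; the service and volume names are illustrative), a named volume is declared at the top level and referenced by the service:

version: "3"
services:
  mycloud:
    image: nextcloud
    ports:
      - 81:80
    volumes:
      - apps:/var/www/html/custom_apps
volumes:
  apps:

Running docker-compose up creates the "apps" volume if it does not exist; docker-compose down leaves it in place unless -v is given.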

Start a container with a volume

--mount vs -v

docker run -d \
  --name devtest \
  --mount source=myvol2,target=/app \
  nginx:latest

docker run -d \
  --name devtest \
  -v myvol2:/app \
  nginx:latest

Note

  • target in "--mount" can be replaced by destination or dst.
  • To use a read-only volume, add the ,readonly option in "--mount" or the :ro option in "-v".
  • We cannot use "~/" to represent a local directory under HOME. We have to specify a full path in docker run.
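
A minimal sketch of a read-only mount in both syntaxes (the volume name reuses the examples above; container names are made up):

docker run -d \
  --name devtest-ro \
  --mount source=myvol2,target=/app,readonly \
  nginx:latest

docker run -d \
  --name devtest-ro2 \
  -v myvol2:/app:ro \
  nginx:latest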

A simple example

From the book "Learn Docker - Fundamentals of Docker 18.x". Chap 5. Data Volumes and System Management > Creating and mounting data volumes.

# Create a volume
docker volume create my-data
docker volume inspect my-data
# The host folder can be found in the output under 'Mountpoint'
# In my case,
#        "Mountpoint": "/var/lib/docker/volumes/my-data/_data",

# Mount a volume into a container
docker run --name test -it -v my-data:/data alpine /bin/sh
# cd /data
# echo 'some data' > data.txt
# echo 'more data' > data2.txt
# exit
docker inspect my-data
sudo ls /var/lib/docker/volumes/my-data/_data
# We can even try to output the content of say, the second file:
sudo cat /var/lib/docker/volumes/my-data/_data/data2.txt
# We can create a new file in this folder from the host and then use the volume with another container
echo "the file is created on host" > sudo tee /var/lib/docker/volumes/my-data/_data/host-data
# Let's delete the test container and run another one
docker rm test

# This time we are mounting our volume to a different container folder
docker run --name test2 -it -v my-data:/app/data centos:7 /bin/bash
# We are able to see three files:
# ls /app/data

# Remove volumes
docker volume rm my-data # Or 
docker volume rm $(docker volume ls -q)

# Remove all containers to clean up the system
docker rm -f $(docker ps -aq)

Sharing data between containers

How to Share Data Between Docker Containers

docker run -it --name writer -v shared-data:/data alpine /bin/sh
# create a file inside it
# echo 'my sample file' > /data/sample.txt
# exit
docker run -it --name reader -v shared-data:/app/data:ro ubuntu:17.04 /bin/bash
# ls -l /app/data

Using host volumes

Use volumes that mount a specific host folder

  • It may be possible to use the "docker volume" command to mount a local directory as a volume. See the examples in the "docker volume create" documentation.
  • Specify a directory name instead of a volume name in the -v option of "docker run".
  • Since we are specifying a directory name instead of letting Docker create a new volume, "docker volume ls" will not show a new volume.
docker run -it --name test -v $(pwd)/src:/app/src alpine /bin/sh

# Make a sample to demonstrate how that works
mkdir ~/my-web; cd ~/my-web
echo "<h1>My website</h1>" > index.html

# Create 'Dockerfile'
echo -e 'FROM nginx:alpine
COPY . /usr/share/nginx/html' > Dockerfile

docker image build -t my-website:1.0 .
docker run -d -p 8080:80 --name my-site my-website:1.0

# Open http://localhost:8080. It looks good
# Now modify index.html and refresh the website. It does not refresh
# Let's stop and rm the container and rebuild using a volume
docker rm -f my-site
docker run -d -v $(pwd):/usr/share/nginx/html \
   -p 8080:80 --name my-site my-website:1.0
# Now any changes on index.html will refresh on the website

Define volumes in images

A few samples of volume definition

VOLUME /app/data
VOLUME /app/data /app/profiles /app/config
VOLUME ["/app/data", "/app/profiles", "/app/config"]

The first line defines a single volume to be mounted at /app/data.

We can use the docker image inspect command to get information about the volumes defined in the Dockerfile.

docker image pull mongo:3.7
docker image inspect --format='{{json .ContainerConfig.Volumes}}' \
       mongo:3.7 | jq
# {
#   "/data/configdb": {},
#   "/data/db": {}
# }

# now run an instance of MongoDB and inspect the volume information
docker run --name my-mongo -d mongo:3.7
docker inspect --format '{{json .Mounts}}' my-mongo | jq
# [
#  {
#    "Type": "volume",
#    "Name": "535e0138b9a32e89f71380e9e73bb0de64ce0d1cad78fcda0ec1d49e11d76d7a",
#    "Source": "/var/lib/docker/volumes/535e0138b9a32e89f71380e9e73bb0de64ce0d1.../_data",
#    "Destination": "/data/configdb",
#    "Driver": "local",
#    "Mode": "",
#    "RW": true,
#    "Propagation": ""
#  },
#  {
#    "Type": "volume",
# SKIP

Differences between VOLUME and '-v|--volume'

https://stackoverflow.com/a/25312719
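
Roughly speaking, VOLUME in a Dockerfile declares a mount point baked into the image (an anonymous volume is created at run time if nothing is mounted there), while -v/--volume is chosen by whoever runs the container and can bind any named volume or host path. A small sketch (the image name is made up):

# Dockerfile
FROM alpine:latest
VOLUME /app/data          # anonymous volume created when the container runs

# at run time, the operator decides what actually backs /app/data
docker run --rm -it -v myvol:/app/data myimage sh           # named volume
docker run --rm -it -v "$(pwd)"/data:/app/data myimage sh   # host directory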

Container Memory Limits, Setting Available CPUs, Allocating memory and CPU

# --rm     : automatically remove the container when it exits
# --memory : memory limit
# --cpus   : number of CPUs
docker run \
    --rm \
    --memory=6g \
    --cpus=1.5 \
    -v /shared/data-store:/home/rstudio/data \
    -v /shared/library-store:/usr/local/lib/R/host-site-library \
    -e PASSWORD=bioc \
    -p 8787:8787 \
    bioconductor/bioconductor_full:devel

Work with container images

List images by size or name

# by size
docker images --format "{{.ID}}\t{{.Size}}\t{{.Repository}}" | sort -k 2 -h

# by name
docker images --format "{{.ID}}\t{{.Size}}\t{{.Repository}}" | sort -k 3 

List specific columns

docker images --format 'table {{.Repository}}\t{{.Tag}}\t{{.Size}}'

Create an image interactively using commit - Example 1

The example is from the book 'Learn Docker - Fundamentals of Docker 18.x'.

docker container run -it --name sample alpine /bin/sh
# apk update && apk add iputils
# ping 127.0.0.1
# exit
docker container ls -a | grep sample
docker container diff sample

We can now use the docker container commit command to persist our modifications and create a new image from them

docker container commit sample my-alpine
docker image ls

If we want to see how our custom image has been built, we can use the history command as follows:

docker image history my-alpine
# IMAGE               CREATED              CREATED BY                                      SIZE    COMMENT
# 0f105057899b        About a minute ago   /bin/sh                                         1.55MB              
# 196d12cf6ab1        4 weeks ago          /bin/sh -c #(nop)  CMD ["/bin/sh"]              0B                  
# <missing>           4 weeks ago          /bin/sh -c #(nop) ADD file:25c10b1d1b41d46a1…   4.41MB

The first layer in the preceding list is the one we just created by adding the iputils package.

Create an image interactively using commit - Example 2

Note that it is better (or even necessary) to put the Dockerfile in an empty directory; otherwise the build can take a long time ("Sending build context to Docker daemon ... GB") because Docker sends everything in the current directory to the daemon as the build context.

sudo docker search sinatra
sudo docker pull training/sinatra
sudo docker run -t -i training/sinatra /bin/bash
sudo docker commit -m="Added json gem" -a="Kate Smith" 0b2616b0e5a8 ouruser/sinatra:v2
sudo docker images

mkdir sinatra
cd sinatra
touch Dockerfile
sudo docker build -t="ouruser/sinatra:v2" .
sudo docker push ouruser/sinatra
sudo docker rmi training/sinatra
  • I get an error when I try to launch sinatra on my 32-bit ubuntu (Docker can only be installed through apt-get on 32-bit)
$ sudo docker run -t -i training/sinatra /bin/bash
2014/12/31 02:43:26 exec format error

How to copy Docker images from one host to another without using a repository

https://stackoverflow.com/questions/23935141/how-to-copy-docker-images-from-one-host-to-another-without-using-a-repository

docker save -o out.tar <image name>
# Or better to compress the file
docker save <docker image name> | gzip > out.tar.gz

And restore

docker load -i out.tar
# Or decompress the file
docker load < out.tar.gz

Docker Image Manifest

What Is a Docker Image Manifest?

Resources allocated to a container using docker?

https://stackoverflow.com/questions/16084741/how-do-i-set-resources-allocated-to-a-container-using-docker

hub.docker.com

docker tag local-image:tagname new-repo:tagname
docker login
docker push new-repo:tagname
docker pull phusion/baseimage
docker run -ti phusion/baseimage /bin/bash
  • https://dockerfile.github.io/ which includes dockerfiles for different purposes. The ubuntu-desktop one also works well (client needs a vnc viewer in order to see the desktop).

Set up a private Docker registry

$ curl -i -H "Accept: application/json" -H "Content-Type: application/json" -X GET http://localhost:5000/v2/_catalog
# OR
$ curl -H "Accept: application/xml" -H "Content-Type: application/json" -X GET http://localhost:5000/v2/_catalog

Github registry

docker pull ghcr.io/OWNER/IMAGE_NAME:TAG

# docker pull registry-url/image-name:tag
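
Pulling a private image from ghcr.io requires logging in first; a sketch assuming CR_PAT holds a GitHub personal access token with the read:packages scope:

echo $CR_PAT | docker login ghcr.io -u USERNAME --password-stdin
docker pull ghcr.io/OWNER/IMAGE_NAME:TAG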

Google Container Registry

  • Authenticate with Google Cloud: ensure you have the Google Cloud SDK installed
    gcloud auth login
    gcloud auth configure-docker
    
  • Pull the image
    docker pull gcr.io/PROJECT_ID/IMAGE_NAME:TAG
    

Google Artifact Registry

https://cloud.google.com/artifact-registry/

  • Authenticate with Google Cloud: ensure you have the Google Cloud SDK installed
    gcloud auth login
    gcloud auth configure-docker us-central1-docker.pkg.dev
    
  • Pull the image
    docker pull us-central1-docker.pkg.dev/PROJECT_ID/REPOSITORY/IMAGE_NAME:TAG
    

Dockerfile

  • A Dockerfile does not follow the YAML syntax or the shell syntax. It is a plain text file that contains instructions for building a Docker image, using its own specific syntax and keywords.
  • Dockerfile Reference
  • Using Dockerfiles to Automate Building of Images from digitalocean.com.
  • Remember to put the Dockerfile in an empty directory.
  • What goes into a Dockerfile
  • Keywords
    • FROM. If we want to start from scratch, we can use FROM scratch.
    • RUN. The argument for RUN is any valid Linux command.
    • USER. This is useful if we want to create new files owned by a non-root user. For example, new files created under a bind-mounted directory with non-root ownership will belong to the current user on the host system. Here is an example where we use Rmarkdown to create pdf output; the generated pdf file should not be owned by root. How to add users to Docker container? Switch users.
    • COPY & ADD.
      • "COPY . /app" will copy all files and folders from the current directory recursively to the /app folder. We can use "ADD" too but "ADD" will automatically unpack tarballs. See What is the difference between the `COPY` and `ADD` commands in a Dockerfile?
      • "ADD sample.tar /app/bin" will unpack the sample.tar' file into the target folder
      • "ADD http://example.com/sample.txt /data/" will copy the remote file sample.txt into the target file
    • WORKDIR. Define the working directory or context that is used when a container is run from the image.
    • CMD & ENTRYPOINT. These two are actually definitions of what will happen when a container is started from the image.
      • Use CMD without ENTRYPOINT: "CMD command param1 param2". This form is called the shell form.
      • If we use ENTRYPOINT + CMD, ENTRYPOINT defines the command and CMD defines the parameters. The ping example in the "Examples of Dockerfile" section below will run ping 8.8.8.8 -c 3. This form is called the exec form.
  • The Docker Book

Examples of Dockerfile

FROM python:2.7
RUN mkdir -p /app
WORKDIR /app
COPY ./requirements.txt /app/
RUN pip install -r requirements.txt
CMD ["python", "main.py"]
  • Another example
FROM alpine:latest
ENTRYPOINT ["ping"]
CMD ["8.8.8.8", "-c", "3"]
  • Another example: create a non-root user (discussed in the text below)
FROM debian:testing
RUN useradd docker \
	&& mkdir /home/docker \
	&& chown docker:docker /home/docker \
	&& addgroup docker staff

We can test it with "docker build -t mydebian ." and "docker run --rm -it --user docker -v /tmp:/home/docker mydebian". We can create a new file under /home/docker and, once we quit the container, the file will be accessible and will belong to the current host user. This actually is a huge security issue.

The same technique does not work on alpine if I try to create a new file in the container.

FROM alpine:latest
# Create a group and user; not useful for creating files in host OS
RUN addgroup -S appgroup && adduser -S appuser -G appgroup \
           && chown appuser:appgroup /home/appuser

"docker build -t myalpine . " and "docker run --rm -it -v ~/Downloads/:/home/appuser:rw --user appuser myalpine". When I use the "id" command in the container, I see it returns 100 in alpine container and 1000 in debian container. The id returns 1000 on my host (Ubuntu/Pop_OS). So the solution is docker run --rm -it -v ~/Downloads/:/home/appuser --user 1000:1000 myalpine. So the local user and the created user home directory in the container are not needed. See

Rocker

FROM r-base:latest
COPY check.R .
CMD [ "Rscript", "check.R", "/unsafe.rda"]
$ git clone https://github.com/hrbrmstr/rdaradar.git
$ docker build -t rdaradar:0.1.0 -t rdaradar:latest .  
$ docker run --rm -v "$(pwd)/exploit.rda:/unsafe.rda" rdaradar 

Bioconductor

Bioconductor

Papers

How to use Dockerfile

https://docs.docker.com/engine/reference/commandline/build/

The . simply means "current working directory".

docker build -f Dockerfile -t arraytools/myimagename .

docker build -t [myname] .  
# Multiple tags
docker build -t arraytools/biospear:latest -t arraytools/biospear:3.6.0 .

In the above example, we can create the image by

docker image build -t pinger .

We can run a container from the pinger image

docker container run --rm -it pinger

Docker Build Args

How to Use Docker Build Args to Configure Image Builds
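
A minimal sketch of ARG usage (names are made up): the value declared in the Dockerfile is the default and can be overridden at build time with --build-arg.

# Dockerfile
ARG BASE_TAG=20.04
FROM ubuntu:${BASE_TAG}
ARG APP_VERSION=dev
RUN echo "building version ${APP_VERSION}"

# build with the defaults, or override them
docker build -t myapp .
docker build --build-arg BASE_TAG=22.04 --build-arg APP_VERSION=1.2.3 -t myapp .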

Clean up after failed builds

Cleanup docker images and containers after failed builds

#!/bin/bash
docker rm $(docker ps -aq)
docker rmi $(docker images | grep "^<none>" | awk '{print $3}')

ENTRYPOINT and CMD

The advantage of using ENTRYPOINT + CMD (exec form) instead of CMD alone (shell form) is that we can override the CMD part defined in the Dockerfile.

docker container run --rm -it pinger -w 5 127.0.0.1
# ping the loopback for 5 seconds

If we want to overwrite what's defined in the ENTRYPOINT in the Dockerfile, we need to use the --entrypoint parameter.

docker container run --rm -it --entrypoint /bin/sh pinger
# we'll be inside the container. Type exit to leave the container

When we use the shell form, the ENTRYPOINT has the default value of /bin/sh -c, and the value of CMD is passed as a string to that shell command.

Temporary failure resolving 'deb.debian.org' when running "docker build"

Add "--net=host" to the docker build command. See Docker build “Could not resolve 'archive.ubuntu.com'” apt-get fails to install anything

Best practices for writing Dockerfiles

Use multi-stage builds

With multi-stage builds, we have a single Dockerfile containing multiple FROM instructions. Each FROM instruction is a new build stage that can easily COPY artifacts from previous stages.

An example from the "Docker Deep Dive" book.

tag after image was built

$ docker tag <imageID> <newName>/<repoName>:<tagName>

About storage drivers

https://docs.docker.com/storage/storagedriver/#sharing-promotes-smaller-images

Privileged versus Root user in Docker

.dockerignore

Using .dockerignore files to build better Docker images

Dockerfile in One Line

FROM ubuntu

Using this simple Dockerfile, the command sudo docker build -t scooby_snacks . results in

$ sudo docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             VIRTUAL SIZE
ubuntu              15.04               2427658c75a1        42 hours ago        117.5 MB
ubuntu              vivid               2427658c75a1        42 hours ago        117.5 MB
ubuntu              vivid-20150218      2427658c75a1        42 hours ago        117.5 MB
ubuntu              utopic-20150211     78949b1e1cfd        42 hours ago        194.4 MB
ubuntu              utopic              78949b1e1cfd        42 hours ago        194.4 MB
ubuntu              14.10               78949b1e1cfd        42 hours ago        194.4 MB
ubuntu              14.04               2d24f826cb16        42 hours ago        188.3 MB
ubuntu              14.04.2             2d24f826cb16        42 hours ago        188.3 MB
ubuntu              trusty              2d24f826cb16        42 hours ago        188.3 MB
ubuntu              trusty-20150218.1   2d24f826cb16        42 hours ago        188.3 MB
ubuntu              latest              2d24f826cb16        42 hours ago        188.3 MB
scooby_snacks       latest              2d24f826cb16        42 hours ago        188.3 MB
ubuntu              precise             1f80e9ca2ac3        42 hours ago        131.5 MB
ubuntu              precise-20150212    1f80e9ca2ac3        42 hours ago        131.5 MB
ubuntu              12.04.5             1f80e9ca2ac3        42 hours ago        131.5 MB
ubuntu              12.04               1f80e9ca2ac3        42 hours ago        131.5 MB
ubuntu              14.04.1             5ba9dab47459        3 weeks ago         188.3 MB
ubuntu              12.10               c5881f11ded9        8 months ago        172.2 MB
ubuntu              quantal             c5881f11ded9        8 months ago        172.2 MB
ubuntu              13.04               463ff6be4238        8 months ago        169.4 MB
ubuntu              raring              463ff6be4238        8 months ago        169.4 MB
ubuntu              13.10               195eb90b5349        8 months ago        184.7 MB
ubuntu              saucy               195eb90b5349        8 months ago        184.7 MB
ubuntu              10.04               3db9c44f4520        10 months ago       183 MB
ubuntu              lucid               3db9c44f4520        10 months ago       183 MB

List all tags of an image

How can I list all tags for a Docker image on a remote registry?
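
One common approach, assuming the skopeo tool is installed, is:

skopeo list-tags docker://docker.io/library/ubuntu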

Tag the image with the git commit ID

$ docker build -t REPOS/IMAGE:$(git rev-parse --verify HEAD) .

Run a shell script on host

$ docker run -v /path/to/sample_script.sh:/sample_script.sh \
  --rm ubuntu bash /sample_script.sh

# GATK container example
# First we log in interactive and see where is the default location (/usr in this case)
$ docker run --rm -i -t broadinstitute/gatk3:3.8-0 bash
$ cat > tmp.sh << EOF
> pwd
> ls
> java -jar GenomeAnalysisTK.jar --version
> EOF
$ docker run --rm -v $(pwd):/usr/my broadinstitute/gatk3:3.8-0 bash my/tmp.sh
# ALTERNATIVELY, WE CAN PUT OUR SCRIPT IN THE TOP DIRECTORY (Hopefully the name is not duplicated)
$ docker run --rm -v $(pwd)/tmp.sh:/tmp.sh broadinstitute/gatk3:3.8-0 bash /tmp.sh
docker run -d --name Test -v $(pwd):/my SOMEIMAGE bash
docker exec -d Test bash /my/script.sh

Link containers together

Manage data in containers

Assign a static IP to a container
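
The usual approach (a sketch; the subnet, network name and IP are made up) is to create a user-defined network and pass --ip, since static IPs are not supported on the default bridge network:

docker network create --subnet=172.25.0.0/16 mynet
docker run --rm -it --net mynet --ip 172.25.0.10 alpine ip addr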

Running Multiple Docker Services on the Same Server

How to Run Multiple Docker Containers on Different IP Addresses

Firewall

Rstudio server not loading, taking too long to respond in browser. On Ubuntu run sudo ufw allow PORTNUMBER.

Docker DNS/internet problem

I got an error resolving the Debian package server when I was creating an image from a Dockerfile that needs to run apt update and apt install commands. See RStudio in Docker – now share your R code effortlessly!. The problem happened on my Linux Mint desktop but not on a VirtualBox VM (Ubuntu 18.04).

Fix Docker's networking DNS config

A temporary solution is to add the --dns option to the docker run command. This works well when I use the IP of either of my 2 DNS servers; it does not work, however, if I use the IP of Google DNS or OpenDNS.

A permanent solution is to create a new file /etc/docker/daemon.json and include the working DNS server IPs (these are obtained through the nmcli command or the NetworkManager GUI; see Query DNS server).

{
    "dns": ["XXX.XX.XX.XX", "YYY.YY.YY.YY"]
}

Then restart the docker service: sudo service docker restart

A quick test on the DNS problem is

docker run --rm busybox nslookup google.com

Working with Docker hub

https://docs.docker.com/userguide/dockerrepos/

Github Actions

Enabling HTTPS/Let's encrypt

Enabling HTTPS by self-sign certificates

traefik: The Cloud Native Application Proxy

Nginx proxy manager

docker: Error response from daemon: Cannot link to /site1_app_1, as it does not belong to the default network.

Running multiple web applications on a Docker host

Authentication: Authelia

Additional Self-Hosted Security with Authelia on NGINX Proxy Manager (video)

GUI apps

Firefox example

Running GUI Applications in Docker Container

FROM ubuntu:20.04
RUN apt update
RUN apt install firefox -y
RUN apt install python3-pip -y
RUN pip3 install notebook

# Only the last CMD in a Dockerfile takes effect; keep whichever one you want
CMD /usr/bin/firefox
CMD jupyter-notebook --allow-root
nano Dockerfile
docker build -t gui .
docker run --env="DISPLAY" --net=host --name=firefox gui

It works. However, I need to use docker rm -f firefox to kill it since Ctrl+c does not work.

Meld example, save a running container as an image

Running a GUI Application in a Docker Container. It works. Below is a modified version for creating the meld app. I can save files modified by meld. To use the app, I need to place the files in ~/Documents/docker (defined in -v). Note that the RAM usage is very minimal. Unfortunately, on macOS I got an error related to Gtk.

host> docker image pull ubuntu:jammy  # 22.04
 
host> docker container run --rm --net host -v /tmp/.X11-unix:/tmp/.X11-unix -it ubuntu:jammy
container# apt update
container# apt install -y meld
host> xhost +local:
container# export DISPLAY=:0

host> docker container ls  # find the ID of the running container
host> docker commit <ID> meld
container# exit

host> docker container run --rm --net host \
  -v /tmp/.X11-unix:/tmp/.X11-unix \
  -v ~/Documents/docker:/meld \
  -e DISPLAY \
  --name=meld \
  meld meld

R and httpgd package

  • httpgd Docker vignette, installation from Github.
  • It works. However, "httpgd" is currently archived on CRAN (2023/1/25), so my temporary solution is
    $ docker run --rm -it r-base:4.2.2 bash
    # apt update
    # apt install  libfontconfig1-dev
    # R
    > install.packages("remotes")
    > remotes::install_github("nx10/httpgd")
    ## note that if we try to run 'httpgd::hgd(host = "0.0.0.0", port = 8888)' here, it does not work.
    ## The reason is that we did not use the "-p" option to expose a port in the previous "docker run" command
    
    ## open another terminal and create a docker image based on the current container
    $ docker ps -a | head
    $ docker commit CONTAINER_ID httpgd:4.2.2
    $ docker run --rm -it -p 8888:8888 httpgd:4.2.2 R
    > httpgd::hgd(host = "0.0.0.0", port = 8888)
    > plot(1:5)
    
  • It works when I tested it on a remote ubuntu server (R 4.4.0 & httpgd 2.0.1) (following the instruction on Docker vignette). Either IP or hostname works but the hostname URL link given by httpgd::hgd() needs to be modified to include .local.
  • Some variation of using hgd()
    hgd(host="0.0.0.0", port = 8888) # allow connection from any one from any computer
    hgd()                # default is host=127.0.0.1, port will be random
    hgd(token="secret")  # define the token
    
    hgd_browse()
    hgd_close()
    hgd_details()
    hgd_url()
    hgd_view()
    
  • To use it with Bioconductor (the Bioconductor docker image uses p3m.dev to install binary R packages, so image creation is fast), we can do it like this:
    $ docker run --rm -it -p 8888:8888 bioconductor/bioconductor_docker:RELEASE_3_18 R 
    
    > install.packages("httpgd")
    > httpgd::hgd(host = "0.0.0.0", port = 8888)
    

    OR use, for example, "bioconductor/bioconductor_docker:RELEASE_3_18" as the base image in the Dockerfile, and follow the same instructions from the httpgd vignette to create a docker image.

    $ nano Dockerfile_httpgd
    $ docker build . -f Dockerfile_httpgd -t bioc-httpgd:RELEASE_3_18
    $ docker images
    $ docker run --rm -it --user rstudio -p 8888:8888 bioc-httpgd:RELEASE_3_18 R
    
  • Singularity. The following is a definition file that is using the bioconductor image + the httpgd package.
    Bootstrap: docker
    From: bioconductor/bioconductor_docker:RELEASE_3_18
    
    %post
        apt-get update \
        && apt-get install -y --no-install-recommends \
        libfontconfig1-dev \
        && apt-get autoremove -y && apt-get clean -y && rm -rf /var/lib/apt/lists/* \
        && install2.r --error --skipinstalled --ncpu -1 \
        httpgd \
        && rm -rf /tmp/downloaded_packages
    
    %runscript
        exec /usr/local/bin/R
    
    %environment
        export LC_ALL=C
    sudo singularity build bioc.sif bioc.def
    singularity run bioc.sif 
    
    > httpgd::hgd(host = "0.0.0.0", port = 8888)

    After we copy the URL, we need to modify the IP or hostname.

Docker-OSX

https://github.com/sickcodes/Docker-OSX

Delete/remove/prune unused resources

Prune unused Docker objects

  • Prune build cache (seems most effective)
    docker builder prune
    
    docker system df -v # Check Docker disk usage
    sudo du -sh /var/lib/docker
  • Prune containers
    docker container prune # remove all containers that are not in ''running'' status
                           # Docker will ask for confirmation before deleting the containers
    
    docker container prune -f
    docker container rm -f $(docker container ls -aq) # remove even the running containers
  • Prune dangling images: Dangling images are images that aren’t tagged and aren’t referenced by any container. Normal but unused/unreferenced images are kept and won't be deleted. See <none>:<none>images.
    docker image prune # unused image layers
  • Remove all unused images: If you want to remove all images that aren’t used by any existing containers, you can use the -a flag. It will give a warning saying: this will remove all images without at least one container associated to them. Images used by "Exited" containers (such as "hello-world") will not be deleted.
    docker image prune -a

    "Used" images are those referenced by any container shown by docker ps -a.

  • Prune volumes
    docker volume ls
    docker volume prune # remove volumes not used by at least one container
    
    docker volume prune --filter 'label=demo'
    docker volume prune --filter 'label=demo' --filter 'label=test'
  • Prune networks
    docker network prune
  • Prune everything.
    docker system prune

Plugins

How to Manage Docker Engine Plugins

Misc

LXC (raw Linux containers)

LXC vs Docker

Vagrant vs Docker

Date/Time zone

docker run --rm -t -i -v /etc/localtime:/etc/localtime:ro ubuntu date

Access the internet from the container

Run the container with the '--net=host' option

sudo docker run --net=host -it ubuntu /bin/bash

How to transfer/copy an image to another host

How to copy Docker images from one host to another without using a repository

# Step 1: save the Docker image as a tar file:
docker save -o <path for generated tar file> <image name>

# Step 2: copy your image to a new system with regular file transfer tools such as cp or scp. 

# Step 3: After that you will have to load the image into Docker:
docker load -i <path to image tar file>

The tar file size is the same as what we get from 'docker image'. If we use the 'gzip' utility, it can reduce the file size (e.g 2.7GB to 1.1GB).

Or https://stackoverflow.com/a/39716019

# Step 1:
docker save docker-image-name | gzip > my-image.tar.gz
# Step 3:
docker load < my-image.tar.gz

Where are Docker containers/images stored on the host: /var/lib/docker

The default is /var/lib/docker. The location can be changed by modifying the file /etc/default/docker. Three options if we are tight on the disk space.

1. Create a softlink for the Docker data directory (/var/lib/docker) and for /var/lib/docker/tmp, as described at miscellaneous-options. See the linked references for how to stop the Docker daemon on different OSes.

sudo service docker stop   # or sudo systemctl stop docker
sudo mv /var/lib/docker /a/new/location
sudo ln -s /a/new/location /var/lib/docker # Create a symbolic link
sudo service docker start  # or sudo systemctl start docker

2. Change the default location to another place. For example,

sudo nano /etc/default/docker
# Add a line DOCKER_OPTS="-g /home/brb/Docker"

Then, after running sudo service docker.io restart followed by a simple pull (sudo docker pull rocker/r-base) or run (sudo docker run --rm -ti rocker/r-base; the Dockerfile of r-base is available on github.com, and the --rm option automatically removes the container when it exits), we will see something like this:

$ docker run --rm -ti rocker/r-base
$ docker images
$ docker -v
Docker version 1.0.1, build 990021a

$ docker -D info | grep Root
 Root Dir: /home/brb/Docker/aufs
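
On newer installations, the same relocation is usually done with the "data-root" key in /etc/docker/daemon.json rather than DOCKER_OPTS (a sketch; pick your own path), followed by a restart of the Docker service:

{
    "data-root": "/home/brb/Docker"
}

sudo service docker restart  # or sudo systemctl restart docker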

Consuming Docker system events

# Open a new terminal
docker system events
# This command is a blocking command. 
# Thus, when you execute it in your terminal session the according session is blocked.

# Open another terminal
docker container run --rm alpine echo "Hello World"

Monitor tools

Docker Machine

Docker Machine is a tool that lets you

  • Install Docker Engine on virtual hosts. You can use Machine (a unified way) to create Docker hosts on your local Mac or Windows box, on your company network, in your data center, or on cloud providers like Azure, AWS, or Digital Ocean. See the comment on here.
  • Provision and manage multiple remote Docker hosts
  • Provision Swarm clusters

Docker machine is not installed in Linux when you install Docker. See the instruction on here to install it.

My feeling is that if we just want to play with Docker on a local Linux machine, we don't really need Docker Machine (it just makes life more complicated). But if we are working on Mac/Windows, or we want to work on clouds or test in VirtualBox, we should use Docker Machine.

Use Docker-machine to Create Docker Servers. Compare the Docker images on the local machine (server 1) and on a new host (server 2) created by docker-machine. Questions: 1. how do we tell whether we are in the host or the machine environment? 2. how do we leave the machine environment after using the eval $() command? docker-machine stop MachineName stops the machine.

$ docker-machine help
$ docker-machine create --driver=virtualbox test
# Follow its hint on the output, issue the following command
$ docker-machine env test
# Follow its hint on the output, issue the following command
$ eval $(docker-machine env test) # will configure the docker CLI to connect to this docker machine 'test'
                                 # This is equivalent to running 4 export commands on the command line
$ docker-machine ls  # Very useful
$ docker-machine stop test
$ docker-machine ip test
$ docker-machine start test
$ docker-machine rm test

Play Docker Machine on Mac with Virtualbox. Docker can be used to create a virtual machine just like Vagrant.

$ docker-machine create -d virtualbox demo
$ docker-machine ls

# first way to access a Docker host
$ docker-machine ssh demo
docker@demo:~$ docker images # empty for now

# second way to access 
$ docker-machine env demo
$ eval $(docker-machine env demo)
$ docker version

RancherOS demo video used the docker-machine command to pull and run the RancherOS.

docker-machine create -d virtualbox --virtualbox-boot2docker-url https://releases.rancher.com/os/latest/rancheros.iso demo
docker-machine ssh demo
ps
docker ps
sudo system-docker ps

sudo ros help
sudo ros console list
sudo ros console switch ubuntu
apt-get help

Package CLI Applications

How to Use Docker to Package CLI Applications
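
The basic idea is to bake the CLI into an image and make it the ENTRYPOINT, so arguments passed to docker run go straight to the tool. A sketch using jq (the image name is made up):

# Dockerfile
FROM alpine:latest
RUN apk add --no-cache jq
ENTRYPOINT ["jq"]

# build and use it
docker build -t myjq .
echo '{"a": 1}' | docker run --rm -i myjq '.a'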

Stack

Docker app

Docker App is an experimental Docker feature which lets you build and publish application stacks consisting of multiple containers. It aims to let you share Docker Compose stacks with the same ease of use as regular Docker containers.

How to Use 'Docker App' to Containerise an Entire Application Stack

Docker Swarm

Security

Moby Project

What is Docker's Moby Project?

Windows container

How can I run a docker windows container on osx?

When Not to Use Docker

When Not to Use Docker: Cases Where Containers Don’t Help

Docker Compose <docker-compose.yaml>

Docker Compose can help us out as it allows us to specify a single file in which we can define our entire environment structure and run it with a single command (much like a Vagrantfile works).

YAML validator

https://codebeautify.org/yaml-validator

Download binary

Difference of "docker compose" and "docker-compose"

  • Docker-compose is the original Python-based command-line tool that was released in 2014. Docker compose is a newer Go-based command-line tool that is integrated into the Docker CLI platform and supports the compose-spec. Docker compose is meant to be a drop-in replacement for docker-compose, but it may have some behavior differences and new features. Docker compose is currently a tech preview, but it will eventually replace docker-compose as the recommended way to use Compose.

Simple examples

Create a file docker-compose.yml and run docker-compose up after creating the file.

hello-world: 9kB

version: "3"
services:
  hello:
    image: hello-world

alpine: 7.73MB

version: "3"
services:
  server:
    image: alpine
    container_name: my_container
    command: sh -c "echo 'hello' && echo 'docker'"

Nginx: 135MB

mkdir src
echo "Hello world!" > src/index.html
version: "3"
services:
  client:
    image: nginx
    ports:
      - 8000:80
    volumes:
      - ./src:/usr/share/nginx/html

Composerize/convert a docker command into a docker compose file
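
For example, the nginx command used earlier on this page maps roughly to the following compose service (a hand-written sketch of what such a converter produces):

# docker run -d -p 8000:80 -v ./src:/usr/share/nginx/html nginx
version: "3"
services:
  client:
    image: nginx
    ports:
      - 8000:80
    volumes:
      - ./src:/usr/share/nginx/html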

An example from 'Fundamentals of Docker'

git clone https://github.com/fundamentalsofdocker/labs.git
cd labs/ch08
docker-compose up
# Open http://localhost:3000/pet

The images do not show up:( The terminal shows what happened under the hood: the problem is that the http links for the images no longer exist.

We can also run the application in the background

docker-compose up -d

To stop and clean up the application, Howto use docker-compose to Start, Stop, Remove Docker Containers

docker-compose down # Stop and remove the containers and networks
                    # defined in the docker-compose.yml file
# OR
docker-compose down -v # same as above, but also remove named volumes declared in the
                       # yml file and anonymous volumes attached to the containers
# OR
docker-compose stop && docker-compose rm -f
docker-compose rm -v

If we also want to remove the volume for the database

docker volume rm ch08_pets-data

An example from "How to Setup NGINX as Reverse Proxy Using Docker"

See here. Only nginx is used.

An example from "Docker Deep Dive" (flask + redis)

Note that on Get started with Docker Compose it mounts the current directory to /code inside the container. So after we modify app.py, we don't need to copy it to the container.

Another one Docker compose tutorial for beginners by example

$ git clone https://github.com/nigelpoulton/counter-app.git
$ cd counter-app
$ ls
app.py  docker-compose.yml  Dockerfile  README.md  requirements.txt

$ cat requirements.txt 
flask

$ cat Dockerfile
FROM python:3.4-alpine
ADD . /code
WORKDIR /code
RUN pip install -r requirements.txt
CMD ["python", "app.py"]

$ cat docker-compose.yml 
version: "3.5"
services:
  web-fe:
    build: .
    command: python app.py
    ports:
      - target: 5000
        published: 5000
    networks:
      - counter-net
    volumes:
      - type: volume
        source: counter-vol
        target: /code
  redis:
    image: "redis:alpine"
    networks:
      counter-net:

networks:
  counter-net:

volumes:
  counter-vol:

$ docker-compose up &

$ docker container ls

$ docker network ls
NETWORK ID          NAME                     DRIVER              SCOPE
2acef6dabde6        bridge                   bridge              local
a2d42bc482ff        counterapp_counter-net   bridge              local
e1e093b64282        host                     host                local
7ecd0a6a9ebd        none                     null                local

# Open the browser http://localhost:5000
$ docker-compose ps
       Name                      Command               State           Ports         
-------------------------------------------------------------------------------------
counterapp_redis_1    docker-entrypoint.sh redis ...   Up      6379/tcp              
counterapp_web-fe_1   python app.py                    Up      0.0.0.0:5000->5000/tcp

$ docker-compose stop
$ docker-compose ps
# We can see stopping a Compose app does not delete the application

$ docker container ls -a
$ docker-compose rm     # delete a stopped Compose app
                        # images, volumes and source code remain
$ docker-compose restart
                        # If you made changes to your Compose app since stopping,
                        # these changes will not appear in the restarted app.
                        # You need to re-deploy the app to get the changes.
$ docker-compose ps
$ docker-compose down   # stop and delete the app
                        # images, volumes and source code remain
$ docker-compose down --volumes # remove the data volume used by the Redis container
$ docker-compose up -d 
$ docker volume ls
$ docker-compose 

# We can make changes to files in the volume, from the host side,
# and have them reflected immediately in the app.
$ nano app.py   # do some changes
$ docker volume inspect counterapp_counter-vol | grep Mount
$ sudo cp app.py \
  /var/lib/docker/volumes/counterapp_counter-vol/_data/app.py
# Our changes should be reflected 

$ docker-compose --help

Create Compose Files From Running Docker Containers

How to Automatically Create Compose Files From Running Docker Containers

Docker-Compose persistent data MySQL

https://stackoverflow.com/questions/39175194/docker-compose-persistent-data-mysql
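
The usual pattern is to mount a named volume at MySQL's data directory (/var/lib/mysql); a minimal sketch (password and names are illustrative):

version: "3"
services:
  db:
    image: mysql:8
    environment:
      MYSQL_ROOT_PASSWORD: example
    volumes:
      - db-data:/var/lib/mysql
volumes:
  db-data: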

Connect to Docker daemon over ssh using docker-compose

#DockerTips: Connect to Docker daemon over ssh using docker-compose

Dockerfile + docker-compose

Docker Compose vs. Dockerfile - which is better?

The Compose file describes the container in its running state, leaving the details on how to build the container to Dockerfiles.

How to deploy on remote Docker hosts with docker-compose

How to deploy on remote Docker hosts with docker-compose

logs

docker-compose logs -f
# Ctrl + c

GUI/TUI interface manager

11 Must-Have Docker Tools To Simplify Your Workflow:

  • LazyDocker - Command-Line Docker Management
  • Dive - Analyze Docker Image Layers
  • Portainer – Simplify Docker Management
  • Watchtower – Automated Docker Container Updates
  • Dockly – Interactive Docker Management Tool
  • Docker Compose – Define and Run Multi-Container Apps
  • Dry – Real-time Docker Container Monitoring
  • Sliplane – Cloud-Based Docker Management Tool
  • Orbstack (closed source) - manage VM and Docker containers
  • Docker Desktop – A Graphical Interface for Docker
  • Visual Studio Code (VS Code) Docker Extension

Dry

Dry – An Interactive CLI Manager For Docker Containers. The TUI is built on top of termui, a cross-platform, easy-to-compile, and fully-customizable terminal dashboard. It is inspired by blessed-contrib, but written purely in Go.

LazyDocker (TUI)

Dockly (TUI)

Dockly – Manage Docker Containers From Terminal

DockStation

It is not open source. It works with remote Docker containers.

DockSTARTer: get started with home server apps running in Docker

Portainer* (nice)

IP address 0.0.0.0

How to setup ip address in portainer to access containers?
(Left-hand side) Administration -> Environment-related -> Environments -> local (or whatever your environment is named) -> Public IP.

Templates

Yacht

cockpit-docker

sudo apt-get -y install cockpit-docker

sudo systemctl restart cockpit

DockerUI (Deprecated, Development continues at Portainer)

https://github.com/kevana/ui-for-docker. A quick start:

  1. Run:
    docker run -d -p 9000:9000 --privileged \
        -v /var/run/docker.sock:/var/run/docker.sock uifd/ui-for-docker
    where -v means to bind mount a volume.
  2. Open your browser to http://<dockerd host ip>:9000

Note: Anyone in the local network can access the website without any authentication.

Rancher

$ sudo apt-get install ufw
$ sudo ufw allow 4500/udp
$ sudo ufw allow 500/udp
  • discoposse.com
    • Part 1 Installing Rancher and Setting Access Control
    • Part 2 Adding a Docker Host to Rancher
    • Part 3 Adding the DockerHub to our Rancher Registry
    • Part 4 Using the Catalog Example with GlusterFS

Seagull

https://youtu.be/TuT5gb8oRw8

docker run -d -p 127.0.0.1:10086:10086 -v /var/run/docker.sock:/var/run/docker.sock tobegit3hub/seagull

The only issue is that there is no username/password to prevent other people from accessing the web GUI. Binding to localhost to restrict access does not work for remote administration.

That is, the tool is suitable for home use.

Kitematic (Mac, Windows and Ubuntu)

Owned by Docker. Available for Mac OS X 10.8+ and Windows 7+ (64-bit) and Ubuntu. https://github.com/docker/kitematic/releases/

Run containers through a simple, yet powerful graphical user interface.

It cannot connect to remote docker machines.

Share your Shiny Apps with Docker and Kitematic!

Shipyard (retired)

VS Code

Applications

Docker Applications

CasaOS

Every app is based on a Docker application

Orchestrator

Kubernetes

Kubernetes vs Docker Swarm

k3s: Lightweight Kubernetes

Run Kubernetes on a Raspberry Pi with k3s

Kubeflow

Other containers

Singularity and HPC systems

  • Old URL at singularity.lbl.gov
  • Singularity enables users to have full control of their environment; Singularity containers let users run applications in a Linux environment of their choosing. No 'sudo' is needed in general unless you want to build a container from a recipe.
  • Containers are more like an executable file for you to use
  • Containers are stored under the current location. Singularity does not have a central location (like /var/lib/docker for Docker) to store images.
  • Can convert Docker containers to Singularity and run containers directly from Docker Hub
  • These bind points cannot be created unless the path already exists within the container. To ensure access to these storage spaces and remedy bind point errors, create these directories in the %post section of your Bootstrap file.
  • Singularity Hub

Ref:

Comparison of docker and singularity commands:

Pull an image
  $ docker pull ubuntu:latest
  $ docker pull broadinstitute/gatk3:3.8-0
  $ singularity pull docker://ubuntu:latest
  $ singularity pull docker://broadinstitute/gatk3:3.8-0

Build an image
  $ docker build -t myname/myapp:latest -f Dockerfile .
  $ singularity build myapp.sif myapp.def

Open a shell inside a container ("docker shell" does not exist; use "docker run -it IMAGE bash")
  $ singularity shell docker://broadinstitute/gatk3:3.8-0
  $ singularity shell gatk3-3.8-0.img
  > ls       # the default location depends on the host system
  > ls /usr  # this is from the container
  $ singularity shell --bind ~/Downloads:/mnt XXX.img
  $ singularity shell docker://ubuntu:latest   # the container is ephemeral

Run the default command
  $ docker run --name test -it ubuntu date
  $ singularity run gatk3-3.8-0.img date
  # The next docker example is similar to 'singularity exec'
  $ docker run --rm -i -t \
    -v $(pwd):/usr/my_data \
    broadinstitute/gatk3:3.8-0 \
    bash /usr/my_data/myscript.sh

Execute a command inside a container (most useful)
  $ docker run --name ubuntu_bash --rm -i -t ubuntu bash
  $ docker exec -d ubuntu_bash touch /tmp/execWorks
  $ singularity exec gatk3-3.8-0.img java -version
  $ singularity exec xxx.img cat /etc/*release
  $ singularity exec docker://rocker/tidyverse:latest R
  $ singularity exec docker://rocker/tidyverse:latest Rscript myScript.R

Cache

When we run singularity exec docker://rocker/tidyverse:latest R, it will save something in the cache in our system.

It seems to be OK after I manually delete the directory $HOME/.singularity (tested in Biowulf).
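
Newer Singularity/Apptainer releases also provide a cache subcommand, which is safer than deleting the directory by hand (a sketch; flags differ slightly between versions):

singularity cache list
singularity cache clean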

RStudio

$ singularity exec docker://rocker/tidyverse:latest R
$ singularity exec docker://rocker/tidyverse:latest Rscript myScript.R

Shifter

Conda

Anaconda

Bioconda

Using docker to install conda (https://conda.io/docs/user-guide/tutorials/index.html)

$ docker run -t -i --name test --net=host ubuntu bash
# apt-get update
# apt-get install -y wget bzip2 python
# wget https://repo.continuum.io/miniconda/Miniconda2-latest-Linux-x86_64.sh
# wget https://repo.continuum.io/archive/Anaconda2-5.1.0-Linux-x86_64.sh
# bash Miniconda2-latest-Linux-x86_64.sh
# bash Anaconda2-5.1.0-Linux-x86_64.sh
# exit

$ docker start test 
$ docker exec -i -t test bash
# conda list  # WORKS!
# conda config --add channels r
# conda config --add channels defaults
# conda config --add channels conda-forge
# conda config --add channels bioconda
# conda install bwa  (Segmentation fault. Core dumped)
# which bwa
/root/anaconda2/bin/bwa
# conda install r   (Only get 3.4.2 but the latest is 3.4.3.)
# conda install bowtie
# bowtie --version
# conda install gatk (https://bioconda.github.io/recipes/gatk/README.html)
   (Due to license restrictions, this recipe cannot distribute and install GATK directly)
   (R is downgraded to 3.2.2:( )
   (Segmentation fault. Core dumped)
# exit
$ docker stop test
$ docker rm test

Get miniconda image instead of using a Ubuntu image

$ docker pull continuumio/miniconda
$ docker run -i -t continuumio/miniconda /bin/bash
# conda install r   (get 3.4.2)
# conda config --add channels bioconda
# conda install bwa  (OK, no error)
# conda install gatk  (R was downgraded to 3.2.2, install openjdk 8.0.121)
# which gatk
/opt/conda/bin/gatk
# gatk -h
GATK jar file not found. Have you run "gatk-register"?

Issues:

  • R version is not up to date
  • So the problem is installing GATK requires an installation of R and the current R was affected.

CoreOS

Installation

We first boot a liveCD from any OS (CentOS works but Ubuntu 16.04 gave errors). In Virtualbox, we choose 'Red Hat' if we use CentOS.

Once the VM is created, we go to the settings. Create a bridged network or a host-only network first (even though we can get files from the host without creating a host-only network). Storage: choose CentOS-7.

  1. Get the install script from Github and create <coreos_install.sh> and chmod +x
  2. create <cloud-config.yaml> file which will include ssh_authorized_keys generated from another machine. It should also contain a new token for the cluster from https://discovery.etcd.io/new.
  3. ls -l /dev/sd*
  4. run sudo ./coreos_install.sh -d /dev/sda -C stable -c cloud-config.yaml. It will download the latest stable CoreOS, install to the HD
  5. Don't leave the VM or it will freeze. Issue sudo shutdown -h now once we see the word 'Success' at the last line of the output.
  6. Remove CentOS from the VM storage. Boot the coreOS VM.

The new screen shows corebm1 login with an IP. Go back to another machine and type ssh -i /tmp/CoreOSBM_rsa [email protected]. Inside CoreOS, we can type docker images.

The cloud-config.yaml file has to follow the format in https://coreos.com/os/docs/latest/cloud-config.html. Use the online validator https://coreos.com/validate/ to check it. At first I used the file from the YouTube video; no error came out when I ran the installation script, but I could not connect to CoreOS. The cloud-config.yaml file I use is shown below (pay attention to '-', double quotes and indentation):

#cloud-config
#
# set hostname
hostname: CoreBM1

# Set ssh key
ssh_authorized_keys:
  - "ssh-rsa AAAAB3 ..... brb@T3600"

coreos:
  etcd:
    discovery: "https://discovery.etcd.io/d3e95 .... "
# sudo ./installos -d /dev/sda -C stable -c cloud-config.yaml

CoreOS exploration

brb@T3600 /tmp $ ssh -i /tmp/id_rsa [email protected]
Enter passphrase for key '/tmp/id_rsa':
CoreOS stable (1010.6.0)
core@CoreBM1 ~ $
core@CoreBM1 ~ $ docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
core@CoreBM1 ~ $ df -h
Filesystem      Size  Used Avail Use% Mounted on
devtmpfs        1.4G     0  1.4G   0% /dev
tmpfs           1.4G     0  1.4G   0% /dev/shm
tmpfs           1.4G  340K  1.4G   1% /run
tmpfs           1.4G     0  1.4G   0% /sys/fs/cgroup
/dev/sda9        18G   23M   17G   1% /
/dev/sda3       985M  589M  345M  64% /usr
tmpfs           1.4G     0  1.4G   0% /media
/dev/sda1       128M   37M   92M  29% /boot
tmpfs           1.4G     0  1.4G   0% /tmp
/dev/sda6       108M   52K   99M   1% /usr/share/oem
core@CoreBM1 ~ $ free -m
             total       used       free     shared    buffers     cached
Mem:          2713        187       2525          0          9        109
-/+ buffers/cache:         68       2644
Swap:            0          0          0
core@CoreBM1 ~ $ lsb_release -a
-bash: lsb_release: command not found
core@CoreBM1 ~ $ docker pull ubuntu
Using default tag: latest
latest: Pulling from library/ubuntu
f069f1d21059: Pull complete
ecbeec5633cf: Pull complete
ea6f18256d63: Pull complete
54bde7b02897: Pull complete
Digest: sha256:bbfd93a02a8487edb60f20316ebc966ddc7aa123c2e609185450b96971020097
Status: Downloaded newer image for ubuntu:latest
core@CoreBM1 ~ $ docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
ubuntu              latest              0f192147631d        7 days ago          132.7 MB
core@CoreBM1 ~ $

CoreOS cluster discovery

https://coreos.com/os/docs/latest/cluster-discovery.html

etcd

fleet

TryGhost

https://github.com/TryGhost/Ghost

Firecracker

Firecracker: start a VM in less than a second

Self-hosting

Tools and Resources for Self-Hosting

Linux in browser

Podman

  • Podman Installation Instructions
    • How To Install Podman Desktop In Linux
    • Raspberry Pi OS uses the standard Debian repositories, so it is fully compatible with Debian's arm64 repository. You can simply follow the steps for Debian to install Podman.
  • Podman vs docker:
    • One of the main differences between Podman and Docker is their architecture. Docker uses a client-server architecture with a central daemon that manages containers. In contrast, Podman is daemonless and uses a fork-exec model to manage containers.
    • Podman is designed to run containers without requiring root privileges or the use of sudo. This is one of the key differences between Podman and Docker, as Docker requires root privileges to run containers.
    • Both Podman and Docker are compatible with the Open Container Initiative (OCI) container specification, which means that they can run the same container images. However, Podman is more closely aligned with Kubernetes and its native container runtime, while Docker also works with its own orchestration tool, Docker Swarm.
    • Podman provides several benefits over Docker. For example, Podman is daemonless: if the Docker daemon crashes, its containers are left in an uncertain state, a problem Podman avoids by not having a daemon at all. You can also use systemd to manage your containers with Podman, which gives you virtually unlimited configurability compared to Docker. Hooking Podman into systemd allows you to update running containers with minimal downtime and recover from bad updates (see the sketch after this list).
  • Podman is a project from Red Hat
  • Getting Started With Podman Desktop, an Open Source Docker Desktop Alternative
  • Podman Compose - Managing Containers
pip3 install podman-compose
But compatibility seems to be an issue, even when I tried a small example based on the alpine image.
  • Nginx example (works)
podman run -it --rm -d -p 8080:80 \
  --name web \
  -v /mnt/Podman/site-content:/usr/share/nginx/html \
  docker.io/library/nginx
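
For the systemd integration mentioned in the list above, a rough sketch (older podman versions; newer releases prefer Quadlet) is to generate a unit file from a running container such as the "web" one and let systemd manage it:

podman generate systemd --new --name web > ~/.config/systemd/user/container-web.service
systemctl --user daemon-reload
systemctl --user enable --now container-web.service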

Resource

Internet

Books

Blogs

Tips/trouble shooting

Play with Docker (PWD)

  • Some applications I've tested.
    • webtop (OK)
    • r-base:3.6.3, r-base:4.1.0, r-base:4.1.1 (OK)
    • r-base:4.1.2, r-base:4.2.0 (ERROR: R_HOME ('/usr/lib/R') not found). Maybe the docker version there is too old.

Alternatives

The 9 Best Docker Alternatives for Container Management

Serverless computing