/home/$USER/docker/$PROJECT/$SUB-DIRECTORY
</pre>
=== PUID, PGID, share volume permission/owner ===
<ul>
<li>[https://docs.linuxserver.io/general/understanding-puid-and-pgid Understanding PUID and PGID] (or the [https://github.com/linuxserver/docker-documentation/blob/master/general/understanding-puid-and-pgid.md source])
<li>You should use the -e PUID and -e PGID options when creating a container from a Docker image to map the container’s internal user to a user on the host machine. This is useful because Docker runs all of its containers under the '''root''' user domain, which means that processes running inside your containers also run as '''root'''. '''This kind of elevated access is not ideal for day-to-day use and can potentially give applications access to things they shouldn’t.''' By using PUID and PGID, you can ensure that files and directories created during the container’s lifespan are owned by a user on the host machine instead of root.
<li>'''Please note that not all Docker images support the PUID and PGID environment variables. The Docker image must be designed to use these variables.''' If you’re using an image that doesn’t support these variables, you may need to create a Dockerfile to build a new image that does.
<li>The following works. The '''--user''' option is a built-in Docker feature that sets the user (and optionally the group) that is used to run the container. This option works regardless of whether the Docker image uses any specific environment variables. PS. "docker" user has been defined in the r-base's [https://github.com/rocker-org/rocker/blob/master/r-base/4.4.0/Dockerfile Dockerfile].
<syntaxhighlight lang='sh'>
docker run --rm -ti --user docker \
  -v "$(pwd)":/workspace r-base
> setwd("/workspace")
> save(iris, file="iris.rda")
> system("ls -lt")
> unlink("iris.rda")
</syntaxhighlight>
<li>Similarly, the '''--user''' option works with the rocker/rstudio and ubuntu images.
<syntaxhighlight lang='sh'>
docker run --rm -ti --user rstudio \
  -v "$(pwd)":/workspace rocker/rstudio R
> setwd("/workspace")
> save(iris, file="iris.rda")
> system("ls -lt")
> unlink("iris.rda")
</syntaxhighlight>
Note that the prompt is '''$''' rather than '''#'''.
{{Pre}}
docker run --rm -it -v $(pwd):/home --user ubuntu \
  ubuntu bash
$ id
uid=1000(ubuntu) gid=1000(ubuntu) groups=1000(ubuntu),4(adm),20(dialout),24(cdrom),25(floppy),27(sudo),29(audio),30(dip),44(video),46(plugdev)
$ cd /home
$ echo "newfile" > newfile
</pre>
<syntaxhighlight lang='sh'>
docker run --rm -it -v $(pwd):/home --user "$(id -u):$(id -g)" \
  ubuntu bash
$ cd /home
$ echo "newfile" > newfile
</syntaxhighlight>
<li>In the article [https://github.com/rocker-org/rocker/wiki/Sharing-files-with-host-machine Sharing files with host machine] from the Rocker project, users are instructed to use the '''-e USERID''' variable if the host machine user has a UID other than 1000. But the generated file 'iris.rda' from the following example is still owned by root :(
<syntaxhighlight lang='sh'>
docker run --rm -ti -v "$(pwd)":/workspace -e USERID=$UID rocker/rstudio R
</syntaxhighlight>
<li>(Cont.) However, if we run the above command as a daemon and '''log in as the user "rstudio"''', it works even if we don't specify the "-e USERID" option. The lesson is to use the user defined in the Docker image.
<pre>
docker run --rm -d -p 8787:8787 -v "$(pwd)":/workspace -e PASSWORD=123 rocker/rstudio
</pre>
Notice the prompt is '''#''' rather than '''$''' and the user id is 0.
<pre>
docker run --rm -it -v $(pwd):/home -e PUID=1000 -e PGID=1000 \
  ubuntu bash
# id
uid=0(root) gid=0(root) groups=0(root)
</pre>
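For contrast, a quick sketch with an image that does implement PUID/PGID (the linuxserver.io code-server image used later on this page); the container name and volume path are just examples:
<syntaxhighlight lang='sh'>
# linuxserver.io images map their internal user to PUID/PGID at startup,
# so files created under /config end up owned by the invoking host user
docker run --rm -d --name code-server-test \
  -e PUID=$(id -u) -e PGID=$(id -g) \
  -v "$(pwd)/config":/config \
  -p 8443:8443 ghcr.io/linuxserver/code-server
ls -ld config   # after the container finishes initializing: owned by you, not root
</syntaxhighlight>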
<li>In this video [https://youtu.be/oHC6J_aN4eQ?t=137 How to Install Calibre on OMV and Docker], the command '''id admin''' (where "admin" is the Portainer user) is used to find the PUID of "admin" and the PGID of the "users" group.
</ul>


=== Back Up Your Docker Volumes ===
*** If we use ENTRYPOINT + CMD, ENTRYPOINT defines the command and CMD defines parameters. The example above will run ''ping 8.8.8.8 -c 3''. This form is called the '''exec''' form.
* [https://github.com/jamtur01/dockerbook-code The Docker Book]


=== Examples of Dockerfile ===
<li> [https://stackoverflow.com/a/45673309 How to give non-root user in Docker container access to a volume mounted on the host] </li>
</ul></ul>
==== Rocker ====
* [https://github.com/rocker-org/rocker rocker (R and RStudio)]
* [https://github.com/hrbrmstr/rdaradar rdaradar (RDA Radar)]
<pre>
FROM r-base:latest
COPY check.R .
CMD [ "Rscript", "check.R", "/unsafe.rda"]
</pre>
<pre>
$ git clone https://github.com/hrbrmstr/rdaradar.git
$ docker build -t rdaradar:0.1.0 -t rdaradar:latest . 
$ docker run --rm -v "$(pwd)/exploit.rda:/unsafe.rda" rdaradar
</pre>
==== Bioconductor ====
[https://github.com/Bioconductor/bioconductor_docker Bioconductor]
==== Papers ====
* [https://github.com/amyfrancis97/DrivR-Base DrivR-Base], [https://f1000research.com/posters/12-1521 Poster], [https://academic.oup.com/bioinformatics/article/40/4/btae197/7644281 Paper] 2024


=== How to use Dockerfile ===


=== R and httpgd package ===
<ul>
<li>[https://nx10.github.io/httpgd/articles/b03_docker.html httpgd Docker vignette], [https://nx10.github.io/httpgd/ installation] from Github.
<li>It works. However, "httpgd" is currently archived on CRAN (2023/1/25), so my temporary solution is
<pre>
$ docker run --rm -it r-base:4.2.2 bash
> plot(1:5)
</pre>
<li>It works when I tested it on a '''remote ubuntu server''' (R 4.4.0 & httpgd 2.0.1), following the instructions in the [https://cran.r-project.org/web/packages/httpgd/vignettes/b03_docker.html Docker vignette]. Either the IP or the hostname works, but the hostname URL given by httpgd::hgd() needs to be modified to include '''.local'''.
<li>Some variations of using hgd()
<pre>
hgd(host="0.0.0.0", port = 8888) # allow connections from anyone on any computer
hgd()                # default is host=127.0.0.1; the port will be random
hgd(token="secret")  # define the token
hgd_browse()
hgd_close()
hgd_details()
hgd_url()
hgd_view()
</pre>
<li>To use it with [https://github.com/Bioconductor/bioconductor_docker?tab=readme-ov-file Bioconductor] (the Bioconductor docker image uses p3m.dev to install binary R packages, so creating images is fast), we can do the following:
<pre>
$ docker run --rm -it -p 8888:8888 bioconductor/bioconductor_docker:RELEASE_3_18 R
> install.packages("httpgd")
> httpgd::hgd(host = "0.0.0.0", port = 8888)
</pre>
Or use, for example, "bioconductor/bioconductor_docker:RELEASE_3_18" as the base image in the Dockerfile, and follow the same instructions from the httpgd vignette to create a docker image.
<pre>
$ nano Dockerfile_httpgd
$ docker build . -f Dockerfile_httpgd -t bioc-httpgd:RELEASE_3_18
$ docker images
$ docker run --rm -it --user rstudio -p 8888:8888 bioc-httpgd:RELEASE_3_18 R
</pre>
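For reference, a minimal sketch of what Dockerfile_httpgd could contain (an assumption, modeled on the Singularity definition below; install2.r ships with the rocker/Bioconductor images):
<syntaxhighlight lang='sh'>
# write a hypothetical Dockerfile_httpgd, then build it as shown above
cat > Dockerfile_httpgd <<'EOF'
FROM bioconductor/bioconductor_docker:RELEASE_3_18
RUN apt-get update \
    && apt-get install -y --no-install-recommends libfontconfig1-dev \
    && rm -rf /var/lib/apt/lists/* \
    && install2.r --error --skipinstalled httpgd \
    && rm -rf /tmp/downloaded_packages
EOF
docker build . -f Dockerfile_httpgd -t bioc-httpgd:RELEASE_3_18
</syntaxhighlight>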
<li>[[Biowulf#Singularity|Singularity]]. The following is a definition file that is using the bioconductor image + the '''httpgd''' package.
<syntaxhighlight lang='sh'>
Bootstrap: docker
From: bioconductor/bioconductor_docker:RELEASE_3_18
 
%post
    apt-get update \
    && apt-get install -y --no-install-recommends \
    libfontconfig1-dev \
    && apt-get autoremove -y && apt-get clean -y && rm -rf /var/lib/apt/lists/* \
    && install2.r --error --skipinstalled --ncpu -1 \
    httpgd \
    && rm -rf /tmp/downloaded_packages
 
%runscript
    exec /usr/local/bin/R
 
%environment
    export LC_ALL=C
</syntaxhighlight>
<syntaxhighlight lang='sh'>
sudo singularity build bioc.sif bioc.def
singularity run bioc.sif
 
> httpgd::hgd(host = "0.0.0.0", port = 8888)
</syntaxhighlight>
After we copy the URL, we need to modify the IP or hostname.
</ul>
 
=== Docker-OSX ===
https://github.com/sickcodes/Docker-OSX
 
== Delete/remove/'''prune''' unused resources ==
[https://docs.docker.com/config/pruning/ Prune unused Docker objects]
 
<ul>
<li>Prune containers
<syntaxhighlight lang='bash'>
docker container prune # remove all containers that are not in ''running'' status
                       # Docker will ask for confirmation before deleting the containers
docker container rm -f $(docker container ls -aq) # remove even the running containers
</syntaxhighlight>
<li>Prune dangling images: Dangling images are images that aren’t tagged and aren’t referenced by any container.
<syntaxhighlight lang='bash'>
docker image prune   # remove dangling image layers
</syntaxhighlight>
<li>Remove all unused images: If you want to remove all images that aren’t used by any existing containers, you can use the -a flag
<syntaxhighlight lang='bash'>
docker image prune -a
</syntaxhighlight>

<li>Prune volumes
<syntaxhighlight lang='bash'>
docker volume prune # remove volumes that are not used by at least one container
docker volume prune --filter 'label=demo' --filter 'label=test'
</syntaxhighlight>
<li>Prune networks
<syntaxhighlight lang='bash'>
docker network prune
</syntaxhighlight>

<li>Prune everything.
<syntaxhighlight lang='bash'>
docker system prune
</syntaxhighlight>
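By default '''docker system prune''' leaves volumes and non-dangling images alone; the following flags widen the scope, so double-check before running them (a sketch using standard CLI flags):
<syntaxhighlight lang='bash'>
docker system prune -a --volumes              # also remove all unused images and unused volumes
docker image prune -a --filter "until=24h"    # only prune images created more than 24 hours ago
</syntaxhighlight>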
</ul>


== Plugins ==
</syntaxhighlight>


=== Where are Docker containers/images stored on the host: /var/lib/docker ===
* http://blog.thoward37.me/articles/where-are-docker-images-stored/
* http://stackoverflow.com/questions/19234831/where-are-docker-images-stored-on-the-host-machine
* https://docs.docker.com/engine/reference/commandline/dockerd/#daemon-storage-driver
</pre>
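To confirm where your own daemon stores its data, the storage root can be read from '''docker info''' (the Go-template field name below is from the docker info output):
<syntaxhighlight lang='sh'>
docker info --format '{{ .DockerRootDir }}'   # typically prints /var/lib/docker
</syntaxhighlight>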


== Package CLI Applications ==
[https://www.cloudsavvyit.com/15713/how-to-use-docker-to-package-cli-applications/ How to Use Docker to Package CLI Applications]
 
== Stack ==
* https://www.composerize.com/
* [https://youtu.be/-ttZjGBkLL8 Export Docker Container Settings as Docker Compose Stack], [https://github.com/Red5d/docker-autocompose docker-autocompose] (only x86)
 
== Docker app ==
Docker App is an experimental Docker feature which lets you build and publish application stacks consisting of multiple containers. It aims to let you share '''Docker Compose''' stacks with the same ease of use as regular Docker containers.
 
[https://www.cloudsavvyit.com/10673/how-to-use-docker-app-to-containerise-an-entire-application-stack/ How to Use 'Docker App' to Containerise an Entire Application Stack]
 
== Docker Swarm ==
* https://www.linux.com/learn/how-use-docker-machine-create-swarm-cluster
* [https://www.howtoforge.com/tutorial/ubuntu-docker-swarm-cluster/ How Setup and Configure Docker Swarm Cluster on Ubuntu]
* [https://www.cloudsavvyit.com/13049/what-is-docker-swarm-mode-and-when-should-you-use-it/ What is Docker Swarm Mode and When Should You Use It?]
 
== Security ==
* [https://cloudberry.engineering/article/dockerfile-security-best-practices/ Docker Security Best Practices from the Dockerfile]
* [https://www.cloudsavvyit.com/12631/how-to-secure-sensitive-data-with-docker-compose-secrets/ How to Secure Sensitive Data With Docker Compose Secrets]
 
== [https://mobyproject.org/ Moby Project] ==
[https://www.infoworld.com/article/3193904/containers/what-is-dockers-moby-project.html What is Docker's Moby Project?]
 
== Windows container ==
[https://stackoverflow.com/questions/45380972/how-can-i-run-a-docker-windows-container-on-osx How can I run a docker windows container on osx?]
 
== When Not to Use Docker ==
[https://www.cloudsavvyit.com/15446/when-not-to-use-docker-cases-where-containers-dont-help/ When Not to Use Docker: Cases Where Containers Don’t Help]
 
= Docker Compose <docker-compose.yaml> =
Docker Compose can help us out as it allows us to specify a single file in which we can define our entire environment structure and run it with a single command (much like a Vagrantfile works).


* Tabs are not allowed in a Docker Compose YAML file. You should use spaces for indentation instead.
* https://docs.docker.com/compose/ (the example will give an error when "RUN pip install -r requirements.txt")
*# app.py
** Running [https://github.com/nextcloud/docker nextcloud], [https://blog.ouseful.info/2017/06/16/rolling-your-own-jupyter-and-rstudio-data-analysis-environment-around-apache-drill-using-docker-compose/ Jupyter and RStudio]
** [https://github.com/dceoy/docker-rstudio-server Rstudio]
* [https://readmedium.com/docker-compose-for-beginners-working-with-multiple-containers-ee0727aab687 Docker Compose For Beginners: Working With Multiple Containers]
** image, container_name
** image, container_name, environment
** image, container_name, environment, volumes, ports


== YAML validator ==
https://codebeautify.org/yaml-validator
 
== Download binary ==
<ul>
<li>https://github.com/docker/compose/releases for macOS (x86/arm), Linux (aarch64 or armv6 or armv7).
</ul>


== Difference of "docker compose" and "docker-compose" ==
* Docker-compose is the original '''Python-based''' command-line tool that was released in 2014. Docker compose is a newer '''Go-based''' command-line tool that is integrated into the Docker CLI platform and supports the compose-spec. Docker compose is meant to be a drop-in replacement for docker-compose, but it may have some behavior differences and new features. Docker compose is currently a tech preview, but it will eventually replace docker-compose as the recommended way to use Compose.


* [https://stackoverflow.com/a/66516826 Difference between "docker compose" and "docker-compose"]


== Simple examples ==
Create a file '''docker-compose.yml''' and run '''docker-compose up''' after creating the file.


</pre>
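If you just want something quick to try, here is a minimal sketch (an nginx service adapted from the Raspberry Pi stack later on this page; the host port is arbitrary):
<syntaxhighlight lang='sh'>
# write a tiny compose file, bring the stack up, then tear it down
cat > docker-compose.yml <<'EOF'
version: '2'
services:
  web:
    image: nginx:stable-alpine
    ports:
      - '8086:80'
EOF
docker-compose up -d    # or: docker compose up -d
docker-compose ps
docker-compose down
</syntaxhighlight>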


== Composerize/convert a docker command into a docker compose file ==
* Copilot/ChatGPT/...
* https://www.composerize.com/
* [https://ostechnix.com/convert-docker-run-commands-into-docker-compose-files/ Convert Docker Run Commands Into Docker-Compose Files]
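If you prefer the command line, the composerize npm package can reportedly be run directly with npx (an assumption; the web UI at composerize.com gives the same result):
<syntaxhighlight lang='sh'>
# paste a docker run command after "composerize" and it prints the equivalent compose file
npx composerize docker run -d -p 8086:80 --restart always --name rpi-nginx nginx:stable-alpine
</syntaxhighlight>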


== An example from 'Fundamentals of Docker' ==
<syntaxhighlight lang='bash'>
git clone https://github.com/fundamentalsofdocker/labs.git
</syntaxhighlight>


=== An example from "How to Setup NGINX as Reverse Proxy Using Docker" ===
== An example from "How to Setup NGINX as Reverse Proxy Using Docker" ==
See [[#Nginx_reverse_proxy|here]]. Only nginx is used.
See [[#Nginx_reverse_proxy|here]]. Only nginx is used.


=== An example from "Docker Deep Dive" (flask + redis) ===
== An example from "Docker Deep Dive" (flask + redis) ==
'''Note''' that on [https://docs.docker.com/compose/gettingstarted/#step-7-update-the-application Get started with Docker Compose] it mounts the current directory to ''/code'' inside the container. So after we modify ''app.py'', we don't need to copy it to the container.
'''Note''' that on [https://docs.docker.com/compose/gettingstarted/#step-7-update-the-application Get started with Docker Compose] it mounts the current directory to ''/code'' inside the container. So after we modify ''app.py'', we don't need to copy it to the container.


</syntaxhighlight>


 
== Create Compose Files From Running Docker Containers ==
[https://www.makeuseof.com/create-docker-compose-files-from-running-docker-containers/ How to Automatically Create Compose Files From Running Docker Containers]
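A sketch with the docker-autocompose project mentioned in the Stack section above (the image path and container name are assumptions; check the project README):
<syntaxhighlight lang='sh'>
# print a compose file generated from an already-running container
docker run --rm -v /var/run/docker.sock:/var/run/docker.sock \
  ghcr.io/red5d/docker-autocompose rpi-nginx
</syntaxhighlight>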


== Docker-Compose persistent data MySQL ==
https://stackoverflow.com/questions/39175194/docker-compose-persistent-data-mysql


== Connect to Docker daemon over ssh using docker-compose ==
[https://medium.com/@sujaypillai/dockertips-connect-to-docker-daemon-over-ssh-using-docker-compose-f4b189dd8951 #DockerTips: Connect to Docker daemon over ssh using docker-compose]
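A minimal sketch of the idea (the user and host names are placeholders; it assumes key-based ssh access to a machine that already runs Docker):
<syntaxhighlight lang='sh'>
export DOCKER_HOST=ssh://user@remote-host
docker compose up -d   # the compose file is read locally, but the containers run on remote-host
</syntaxhighlight>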


== Dockerfile + docker-compose ==
[https://stackoverflow.com/a/29487120 Docker Compose vs. Dockerfile - which is better?]


The Compose file describes the container in its running state, leaving the details on how to build the container to Dockerfiles.
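A minimal sketch of the combination (file names are the conventional defaults; the service name and port are arbitrary):
<syntaxhighlight lang='sh'>
# the Dockerfile handles the build, docker-compose.yml handles the run-time settings
cat > docker-compose.yml <<'EOF'
version: '2'
services:
  web:
    build: .          # use the Dockerfile in this directory
    ports:
      - '5000:5000'
EOF
docker compose up -d --build
</syntaxhighlight>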


== How to deploy on remote Docker hosts with docker-compose ==
[https://www.docker.com/blog/how-to-deploy-on-remote-docker-hosts-with-docker-compose/ How to deploy on remote Docker hosts with docker-compose]


== logs ==
<pre>
docker-compose logs -f
# Ctrl + c
</pre>
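To follow just one service and limit the scrollback (the service name is a placeholder; --tail is a standard Compose flag):
<pre>
docker compose logs -f --tail 100 web
</pre>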


= GUI/TUI interface manager =
* https://casaos.io/
** https://wiki.casaos.io/en/get-started. It also supports arm64, armv7.
** http://casaos.local
** https://docs.zimaboard.com/docs/index.html Default login casaos/casaos. For a new user, the password has to be at least 5 characters.
* [https://youtu.be/FwJByjTdKks Revisiting CasaOS After A Few Months] 2022-6-14
* [https://youtu.be/w44CypRO5l4 Home Servers Have NEVER Been This Easy: CasaOS + ZimaBoard] 4/23/2023
| $ docker pull ubuntu:latest <br/>$ docker pull broadinstitute/gatk3:3.8-0 || $ singularity pull docker://ubuntu:latest<br/>$ singularity pull docker://broadinstitute/gatk3:3.8-0
|-
| $ docker build -t myname/myapp:latest -f Dockerfile || $ singularity build myapp.sif myapp.def
|-
| $ docker shell (not exist) || $ singularity shell docker://broadinstitute/gatk3-3.8-0<br/>  $ singularity shell gatk3-3.8-0.img<br/> > ls  # the default location depends on the host system<br/>
= Podman =
* [https://podman.io/docs/installation Podman Installation Instructions]
** [https://ostechnix.com/install-podman-desktop-in-linux/ How To Install Podman Desktop In Linux]
** Raspberry Pi OS uses the standard Debian repositories, so it is fully compatible with Debian's arm64 repository. You can simply follow the steps for Debian to install Podman.
* [https://linuxhandbook.com/docker-vs-podman/ Podman vs docker]:
* It is based on Alpine Linux. To install htop, do '''apk add htop'''. But the '''htop''' command shows the resources of the host, not of the user's instance.
* '''ctrl + insert''' to copy and '''shift + insert''' to paste
* [https://github.com/play-with-docker/play-with-docker/issues/238 connect to a play-with-docker instance]. Answer: You just need to create a random private key. [https://kostislab.blogspot.com/2019/03/play-with-play-with-docker-form-your.html Play with "Play with Docker" from your terminal!].
* Some applications I've tested.
** webtop (OK)


Official web page http://docker.io.

Docker is both a client and a server: the server is a daemon that runs on Linux. The usual approach is to run the docker client on the same host as the daemon, but it is also possible to connect the docker client to a remote docker daemon.

Installation

Which OS to install?

Containers vs virtual machines

KubeVirt

OS containers vs application containers

Differences:

  • OS containers: LXC, OpenVZ, Linux VServer, BSD Jails and Solaris zones. The container acts as a VPS.
  • App containers: Docker, Rocket. The container acts as an application.

Current release version

Ubuntu x86 and Mint

One-line script

https://github.com/docker/docker-install, https://docs.docker.com/engine/install/ubuntu/, https://twitter.com/portainerio/status/1650171336864550912

Note that 1) piping an install script straight from the internet is a security risk, and 2) you still have to add the current user to the docker group and then log out and log back in.

$ curl -fsSL https://get.docker.com | bash
# Executing docker install script, commit: e5543d473431b782227f8908005543bb4389b8de
+ sudo -E sh -c 'apt-get update -qq >/dev/null'
[sudo] password for brb: 
+ sudo -E sh -c 'DEBIAN_FRONTEND=noninteractive apt-get install -y -qq apt-transport-https ca-certificates curl >/dev/null'
+ sudo -E sh -c 'install -m 0755 -d /etc/apt/keyrings'
+ sudo -E sh -c 'curl -fsSL "https://download.docker.com/linux/debian/gpg" | gpg --dearmor --yes -o /etc/apt/keyrings/docker.gpg'
gpg: WARNING: unsafe ownership on homedir '/home/brb/.gnupg'
+ sudo -E sh -c 'chmod a+r /etc/apt/keyrings/docker.gpg'
+ sudo -E sh -c 'echo "deb [arch=amd64 signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/debian bullseye stable" > /etc/apt/sources.list.d/docker.list'
+ sudo -E sh -c 'apt-get update -qq >/dev/null'
+ sudo -E sh -c 'DEBIAN_FRONTEND=noninteractive apt-get install -y -qq docker-ce docker-ce-cli containerd.io docker-compose-plugin docker-ce-rootless-extras docker-buildx-plugin >/dev/null'
+ sudo -E sh -c 'docker version'
Client: Docker Engine - Community
 Version:           24.0.7
 API version:       1.43
 Go version:        go1.20.10
 Git commit:        afdd53b
 Built:             Thu Oct 26 09:08:17 2023
 OS/Arch:           linux/amd64
 Context:           default

Server: Docker Engine - Community
 Engine:
  Version:          24.0.7
  API version:      1.43 (minimum version 1.12)
  Go version:       go1.20.10
  Git commit:       311b9ff
  Built:            Thu Oct 26 09:08:17 2023
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.6.26
  GitCommit:        3dd1e886e55dd695541fdcd67420c2888645a495
 runc:
  Version:          1.1.10
  GitCommit:        v1.1.10-0-g18a0cb0
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0


To run Docker as a non-privileged user, consider setting up the
Docker daemon in rootless mode for your user:

    dockerd-rootless-setuptool.sh install

Visit https://docs.docker.com/go/rootless/ to learn about rootless mode.

To run the Docker daemon as a fully privileged service, but granting non-root
users access, refer to https://docs.docker.com/go/daemon-access/

WARNING: Access to the remote API on a privileged Docker daemon is equivalent
         to root access on the host. Refer to the 'Docker daemon attack surface'
         documentation for details: https://docs.docker.com/go/attack-surface/

$ sudo groupadd docker
$ sudo usermod -aG docker $USER
$ newgrp docker
$ docker run hello-world

This can install docker but you still need "sudo" to run it. See Linux post-installation steps for Docker Engine: 1) Manage Docker as a non-root user, and 2) Configure Docker to start on boot with systemd.
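For the "start on boot" part, the systemd units shipped with the Docker packages can be enabled like this:

sudo systemctl enable docker.service
sudo systemctl enable containerd.service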

Docker Desktop

Without sudo, Post-installation

To use docker without sudo, follow the instructions in the official guide.

# Add the docker group if it doesn't already exist.
# sudo groupadd docker

# Add your user to the docker group.
sudo usermod -aG docker $USER

# Log out and log in

After running this command, you need to log out and log back in for the changes to take effect. This is because group membership is determined at login time. When you log in, the system reads the group membership information and assigns the appropriate permissions to your user account.

Upgrade Docker Desktop

It seems it does not affect running containers (e.g. RStudio on Mac).

Is it fine to upgrade Docker-ce while a container is running?

Doesn't matter. Your system will stop the container if you update docker.

Is there a way to hibernate a docker container

Live restore
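A sketch of enabling live restore, which keeps containers running while the daemon itself restarts (the daemon.json key is from the Docker docs; merge it into any existing /etc/docker/daemon.json rather than overwriting it):

echo '{ "live-restore": true }' | sudo tee /etc/docker/daemon.json
sudo systemctl reload docker    # dockerd re-reads daemon.json on SIGHUP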

Rate limits for GitHub Apps

Rate limits for GitHub Apps

After several attempts of docker build, I finally got this message:

Downloading GitHub repo XXX/XXXXX@HEAD
Error: Failed to install 'unknown package' from GitHub:
  HTTP error 403.
  API rate limit exceeded for XXX.XX.XXX.X. (But here's the good news: Authenticated requests get a higher rate limit. Check out the documentation for more details.)

  Rate limit remaining: 0/60
  Rate limit reset at: 2021-04-12 20:32:28 UTC

  To increase your GitHub API rate limit
  - Use `usethis::browse_github_pat()` to create a Personal Access Token.
  - Use `usethis::edit_r_environ()` and add the token as `GITHUB_PAT`.
Execution halted
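One workaround is to pass a personal access token into the build; a sketch that assumes the Dockerfile declares ARG GITHUB_PAT and exports it before the R install step:

export GITHUB_PAT=xxxxxxxx          # token created via usethis::browse_github_pat()
docker build --build-arg GITHUB_PAT=$GITHUB_PAT -t myimage .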

CentOS

https://docs.docker.com/engine/installation/linux/docker-ce/centos/

Boot2Docker

For Windows and OS X operating systems, we have to use Boot2Docker. Boot2Docker is a local virtual machine with its own network interface and IP address. To find the Boot2Docker IP address you can check the value of the DOCKER_HOST environment variable. You'll be prompted to set this variable when you start or install Boot2Docker the first time. You can find the IP address by running the boot2docker ip command.

Note that since Windows and OS X do not share a file system with the Docker VM the way Linux does, the 'docker run' command with the '-v' flag to mount a local directory into a Docker container will not work with Boot2Docker releases prior to 1.3. Support for volumes is now available for OS X but not yet for Windows as of Boot2Docker 1.3.

Windows

Note that much of the information here has not been updated.

Docker can be run on Windows 10 Pro as a native application; see

The information below is based on running Docker on Windows 7.1 and 8. Your processor needs to support hardware virtualization.

  • Windows Installer includes msys-git, Virtualbox, Boot2Docker-cli management tool and Boot2Docker ISO.
  • Installation instructions for Windows OS. It will install the Boot2Docker management tool with the boot2docker iso (based on Tiny Core Linux), VirtualBox and MSYS-git UNIX tools.
  • Docker needs Admin rights to be installed. However, VirtualBox can be installed from a user's account.
  • If the installer detects a version of VirtualBox installed, the VirtualBox checkbox will not be checked by default (Windows OS). VirtualBox could not be used anymore after I updated it from 4.3.18 to 4.3.20. The error may be related to a Windows update according to the Virtualbox forum.
Error in supR3HardenedWinReSpawn
  • Note that boot2docker cannot be installed/run inside a Windows guest machine. See this post and my Virtualbox wiki here. If we try to launch boot2docker-vm from Virtualbox, we will see a message "This kernel requires an x86-64 CPU, but only detected an i686 CPU."
  • After I switched back to an older version of VirtualBox, everything worked again. I could even install Docker successfully.
    • The Boot2Docker Start icon cannot be run directly because Notepad++ will automatically open it. A possible solution is to go to Control Panel and change the default program for .sh files from Notepad++ to C:\Program Files (x86)\Git\bin\bash.exe.
    • The above step does not work well since a terminal appears and disappears quickly.
    • A working approach is to open Git Bash from the Start menu and run /c/Program Files/Boot2DockerforWindows/start.sh (or boot2docker start or boot2docker init).
    • A new VM called 'boot2docker-vm' will be created (we can open VirtualBox Manager to check). But I got an error: error in run: Failed to start machine "boot2docker-vm" (run again with -v for details). The VM had an error on Network > Adapter 2 > VirtualBox Host-Only Ethernet Adapter #2. So I opened the settings of boot2docker-vm, went to Network > Adapter 2 and changed the Name dropdown from VirtualBox Host-Only Ethernet Adapter #2 to VirtualBox Host-Only Ethernet Adapter.
    • Now it works either I directly click boot2docker-vm VM from VB Manager or use the command start.sh from Git Bash.

Boot2docker-vm.png

$ # boot2docker is in the PATH variable, so there is not need to cd to the folder.
$ boot2docker start
initializing...
Virtual machine boot2docker-vm already exists

starting...
Waiting for VM and Docker daemon to start...
........o
Started.
Writing c:\Users\brb\.boot2docker\certs\boot2docker-vm\ca.pem
Writing c:\Users\brb\.boot2docker\certs\boot2docker-vm\cert.pem
Writing c:\Users\brb\.boot2docker\certs\boot2docker-vm\key.pem
Docker client does not run on Windows for now. Please use
    "c:\Program files\Boot2Docker for Windows\boot2docker.exe" ssh
to SSH into the VM instead.


192.168.56.101
connecting...
                        ##        .
                  ## ## ##       ==
               ## ## ## ##      ===
           /""""""""""""""""\___/ ===
      ~~~ {~~ ~~~~ ~~~ ~~~~ ~~ ~ /  ===- ~~~
           \______ o          __/
             \    \        __/
              \____\______/
 _                 _   ____     _            _
| |__   ___   ___ | |_|___ \ __| | ___   ___| | _____ _ __
| '_ \ / _ \ / _ \| __| __) / _` |/ _ \ / __| |/ / _ \ '__|
| |_) | (_) | (_) | |_ / __/ (_| | (_) | (__|   <  __/ |
|_.__/ \___/ \___/ \__|_____\__,_|\___/ \___|_|\_\___|_|
Boot2Docker version 1.4.1, build master : 86f7ec8 - Tue Dec 16 23:11:29 UTC 2014

Docker version 1.4.1, build 5bc2ff8
docker@boot2docker:~$ docker
Usage: docker [OPTIONS] COMMAND [arg...]

A self-sufficient runtime for linux containers.

Options:
  --api-enable-cors=false                      Enable CORS headers in the remote
 API
  -b, --bridge=""                              Attach containers to a pre-existi
ng network bridge
...
Run 'docker COMMAND --help' for more information on a command.
docker@boot2docker:~$ docker run hello-world
Unable to find image 'hello-world:latest' locally
hello-world:latest: The image you are pulling has been verified
511136ea3c5a: Pull complete
31cbccb51277: Pull complete
e45a5af57b00: Pull complete
Status: Downloaded newer image for hello-world:latest
Hello from Docker.
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
 1. The Docker client contacted the Docker daemon.
 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
    (Assuming it was not already locally available.)
 3. The Docker daemon created a new container from that image which runs the
    executable that produces the output you are currently reading.
 4. The Docker daemon streamed that output to the Docker client, which sent it
    to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
 $ docker run -it ubuntu bash

For more examples and ideas, visit:
 http://docs.docker.com/userguide/

docker@boot2docker:~$ ls
boot2docker, please format-me
docker@boot2docker:~$ pwd
/home/docker
docker@boot2docker:~$ ls /
bin/     dev/     home/    lib/     mnt/     proc/    run/     sys/     usr/
c/       etc/     init     linuxrc  opt/     root/    sbin/    tmp      var/

docker@boot2docker:~$ docker run hello-world
Hello from Docker.
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
 1. The Docker client contacted the Docker daemon.
 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
    (Assuming it was not already locally available.)
 3. The Docker daemon created a new container from that image which runs the
    executable that produces the output you are currently reading.
 4. The Docker daemon streamed that output to the Docker client, which sent it
    to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
 $ docker run -it ubuntu bash

For more examples and ideas, visit:
 http://docs.docker.com/userguide/
docker@boot2docker:~$
docker@boot2docker:~$
docker@boot2docker:~$
docker@boot2docker:~$ docker run -it ubuntu bash
Unable to find image 'ubuntu:latest' locally
ubuntu:latest: The image you are pulling has been verified
53f858aaaf03: Pull complete
837339b91538: Pull complete
615c102e2290: Pull complete
b39b81afc8ca: Pull complete
511136ea3c5a: Already exists
Status: Downloaded newer image for ubuntu:latest


root@ea7e3289a01a:/# pwd
/
root@ea7e3289a01a:/# df -h
Filesystem      Size  Used Avail Use% Mounted on
rootfs           19G  269M   17G   2% /
none             19G  269M   17G   2% /
tmpfs          1005M     0 1005M   0% /dev
shm              64M     0   64M   0% /dev/shm
/dev/sda1        19G  269M   17G   2% /etc/hosts
tmpfs          1005M     0 1005M   0% /proc/kcore
root@ea7e3289a01a:/# ls
bin   dev  home  lib64  mnt  proc  run   srv  tmp  var
boot  etc  lib   media  opt  root  sbin  sys  usr
root@ea7e3289a01a:/# exit
exit


docker@boot2docker:~$ pwd
/home/docker
docker@boot2docker:~$ ls
boot2docker, please format-me
docker@boot2docker:~$ exit
[Press any key to exit]

brb@NCI-01825357 /c/Program files/Boot2Docker for Windows
$ boot2docker down

brb@NCI-01825357 /c/Program files/Boot2Docker for Windows
$
$ boot2docker --help
Usage: c:\Program files\Boot2Docker for Windows\boot2docker.exe [<options>] <command> [<args>]

Boot2Docker management utility.

Commands:
   init                Create a new Boot2Docker VM.
   up|start|boot       Start VM from any states.
   ssh [ssh-command]   Login to VM via SSH.
   save|suspend        Suspend VM and save state to disk.
   down|stop|halt      Gracefully shutdown the VM.
   restart             Gracefully reboot the VM.
   poweroff            Forcefully power off the VM (may corrupt disk image).
   reset               Forcefully power cycle the VM (may corrupt disk image).
   delete|destroy      Delete Boot2Docker VM and its disk image.
   config|cfg          Show selected profile file settings.
   info                Display detailed information of VM.
   ip                  Display the IP address of the VM's Host-only network.
   shellinit           Display the shell commands to set up the Docker client.
   status              Display current state of VM.
   download            Download Boot2Docker ISO image.
   upgrade             Upgrade the Boot2Docker ISO image (restart if running).
   version             Display version information.

Options:
      --basevmdk="": Path to VMDK to use as base for persistent partition
      --clobber=false: overwrite Docker client binary on boot2docker upgrade
      --dhcp=true: enable VirtualBox host-only network DHCP.
      --dhcpip=192.168.59.99: VirtualBox host-only network DHCP server address.
....
  -v, --verbose=false: display verbose command invocations.
      --vm="boot2docker-vm": virtual machine name.
      --waittime=300: Time in milliseconds to wait between port knocking retries during 'start'
error in run: config error: pflag: help requested

brb@NCI-01825357 /c/Program files/Boot2Docker for Windows

The big picture


                           start.sh                      docker run -it ubuntu bash
Git Bash                  ---------->  boot2docker-vm       ------------->   ubuntu
                                   docker@boot2docker:
   <-------               <----------                       <------------- 
   boot2docker down           exit                                 exit
   (shutdown boot2docker) (boot2docker-vm is still on)
    |
    |
    |  boot2docker up (start boot2docker)
    |
    |  boot2docker ssh (log into docker acct)
    |
    v
   boot2docker-vm
   docker@boot2docker

Increase boot2docker vmdk space

https://docs.docker.com/articles/b2d_volume_resize/

Install utilities in Boot2docker VM

http://blog.tutum.co/2014/11/05/how-to-use-docker-on-windows/

For example, to install cifs-utils,

wget http://distro.ibiblio.org/tinycorelinux/5.x/x86/tcz/cifs-utils.tcz
tce-load -i cifs-utils.tcz

WSL2

Mac

  • https://docs.docker.com/desktop/mac/
  • Alternatives to Docker Desktop for Mac? Rancher is recommended. 2022-06-08
  • Vagrant method. If you have Mac, you don't have to use boot2docker (iso & its management tool). You can use other Linux which comes with docker pre-installed. See this post.

Raspberry Pi

ARM architecture from hub.docker.com

curl -sSL https://get.docker.com | sh
  • UDOO Quad running Armbian 20.04
    • The instructions on the official Docker website do not work
    • The curl command method above does not work
    • sudo apt-get install -y docker.io works (docker -v shows it is 19.03.8). After that, run sudo usermod -aG docker $USER and log out/in.
  • See Odroid magazine 2015 January and 2015 February. Note that the current versions of Docker and Docker Hub are not aware of the architecture for which the image has been built. All standard images are intended for the x86 architecture, and the autobuild feature offered by the Docker registry is only available for x86.
  • NVIDIA Jetson Nano Developer Kit - Introduction, Redis running inside Docker container on NVIDIA Jetson Nano
sudo apt install curl
curl -sSL https://get.docker.com/ | sh

docker-compose

Some examples

Note I use the arm64 image on my Pi3b+.

Images from https://www.linuxserver.io/. Some indices include number of pulls and stars.

List of tz database time zones

Portainer. The port number is 9000. Note the stack will be deployed using the equivalent of docker-compose. Only Compose file format version 2 is supported at the moment.

Samba. Tested on iOS, Ubuntu & Windows 10.

mkdir -p /mnt/usb/share/{data,backups}
mkdir /mnt/usb/share/data/{alice,bob,documents}
touch /mnt/usb/share/backups/backupsfile
touch /mnt/usb/share/data/bob/bobfile
touch /mnt/usb/share/data/documents/documentfile

docker run -d -p 445:445 \
  -v /mnt/usb/share/data:/share/data \
  -v /mnt/usb/share/backups:/share/backups \
  --name rpi-samba trnape/rpi-samba \
  -u "alice:abc123" \
  -u "bob:secret" \
  -u "guest:guest" \
  -s "Backup directory:/share/backups:rw:alice,bob" \
  -s "Bob (private):/share/data/bob:rw:bob" \
  -s "Documents (readonly):/share/data/documents:ro:guest,alice,bob" 

On Windows, 1) right click on 'This PC' and choose 'Add a network location'. 2) type \\192.168.1.249\ and the dropdown list will populate all available folders. 3) choose the one (e.g. Bob) and then enter the credential. Done. On Ubuntu, just type smb://192.168.1.249/. It will then populate the available folders.
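From a Linux command line the same share can also be mounted with cifs-utils (the mount point is a placeholder; you will be prompted for bob's password):

sudo mkdir -p /mnt/bob
sudo mount -t cifs -o username=bob "//192.168.1.249/Bob (private)" /mnt/bob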

Nginx

mkdir -p /mnt/usb/docker-nginx/html
echo "hello world" >> /mnt/usb/docker-nginx/html/index.html
nano /mnt/usb/docker-nginx/html/sharefile
docker run --name rpi-nginx -p 8086:80 \
  --restart always \
  -v /mnt/usb/docker-nginx/html:/usr/share/nginx/html \
  -d nginx:stable-alpine

# Or a stack file
version: '2'
services:
    nginx:
        container_name: rpi-nginx
        ports:
            - '8086:80'
        restart: always
        volumes:
            - '/mnt/usb/docker-nginx/html:/usr/share/nginx/html'
        image: nginx:stable-alpine
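Either way, a quick check that nginx serves the mounted directory (port as published above):

curl http://localhost:8086/    # prints "hello world"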

Note: consider using a samba share folder (see above) as the nginx document root.

cp /mnt/usb/docker-nginx/html/* /mnt/usb/share/data/bob/
rm -rf /mnt/usb/docker-nginx/html
ln -s /mnt/usb/share/data/bob/ /mnt/usb/docker-nginx/html

Rpi-monitor. I need to change /dev/vcsm to /dev/vcsm-cma. But the temperature part is not working. I am using 64-bit Raspberry Pi OS and it does not show attached USB disks. The port number is 8888.

code-server

---
version: "2.1"
services:
  code-server:
    image: ghcr.io/linuxserver/code-server
    container_name: code-server
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=America/New_York
      - PASSWORD=password #optional
      - SUDO_PASSWORD=password #optional
    volumes:
      - /mnt/usb/code-server:/config
    ports:
      - 8443:8443
    restart: unless-stopped

mstream Music streaming. Works great.

emby does not work on arm64. It works on x86 though. Even when I copy a mp4 file to the movies directory, the movie does not show up :(

version: '2.1'
services:
    embyserver:
        container_name: emby
        network_mode: bridge
        restart: always
        environment:
            - VERSION=latest
            - UID=1000
            - GID=1000
            - TZ=America/Denver
        volumes:
            - /media/crucial/emby/config:/config
            - /media/crucial/emby/tv:/mnt/tv
            - /media/crucial/emby/movies:/mnt/movies
        ports:
            - 8096:8096            
        image: 'emby/embyserver:latest'

jellyfin Jellyfin is descended from Emby's 3.5.2 release and ported to the .NET Core framework to enable full cross-platform support. How to Install Jellyfin on Docker with Portainer

plex We can access the plex server via http://IP:32400/web. Note that in the first server setup, we need to add a 'Library' by choosing the new library name (e.g. Other Videos) shown on plex & the data source (e.g. /data) so our own media can be found. After we add new media files, we can rescan by clicking the vertical 3-dots icon and selecting 'Scan Library Files'. The Pi3b+ is still a little weak since I can see all threads are busy when I play a mp4 file.

mkdir -p /mnt/usb/plex/{config,data}
cp FILENAme.mp4 /mnt/usb/plex/data
docker run \
  -d \
  --name plex \
  --net host \
  -p 32400:32400 \
  --restart always \
  --volume /mnt/usb/plex/config:/config \
  --volume /mnt/usb/plex/data:/data \
  greensheep/plex-server-docker-rpi:latest

WARNING: The requested image's platform (linux/arm) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
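If the warning bothers you, the platform can be requested explicitly (a sketch; it assumes the image really only publishes a linux/arm build, as the warning suggests):

docker run --platform linux/arm -d --name plex --net host -p 32400:32400 \
  --restart always \
  --volume /mnt/usb/plex/config:/config \
  --volume /mnt/usb/plex/data:/data \
  greensheep/plex-server-docker-rpi:latest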

Nextcloud.

sudo mkdir -p /srv/dev-disk-by-label-Files/Databases/NextCloud
sudo mkdir -p /srv/dev-disk-by-label-Files/Config/Nextcloud

After that, copy and paste the stack into portainer. Wait for a few minutes on RPi3. The port number is 8080. Now we can create the admin username/password such as nextcloud/nextcloud. Click the little triangle next to "Storage and Database". Change to MySQL. In the next part we enter nextcloud/nextcloud/nextcloud/db (note the "db" replaces localhost b/c we use "db" as the service name). Again, wait for a few minutes.

Heimdall (Dashboard for web apps). I keep the PUID (1000) and PGID (1000). The instructions say they are from the admin user account, but I don't find an admin account. Change the volume to /srv/dev-disk-by-label-Files/Config/Heimdall (use sudo mkdir to create the directory on the terminal). Change the port to 83 & remove port 443. Define the endpoint from Portainer -> Endpoints -> local -> Public IP as raspberrypi.local (depending on your hostname). We need to wait a little bit. Now go to the containers, find heimdall and click the port in order to open the website correctly (instead of 0.0.0.0). I can add apps like nextcloud, portainer, pi-hole, other servers, etc. The Application Type entry has a good list of popular apps and it will pre-populate the button icon and the background color for our app.

taisun The default port is 3000

yacht. The default login is [email protected] and pass. The name shown on portainer is pedantic_hermann

docker volume create yacht
docker run -d -p 8001:8000 -v /var/run/docker.sock:/var/run/docker.sock -v yacht:/config selfhostedpro/yacht

CloudFlare DDNS - Update CloudFlare with Your Dynamic IP Address

WatchTower

bitwardenrs. Use the terminal to create a volume first. The port number is 8100. This is straightforward.

Duplicati for backup.

photoshow. It works. It has a slideshow button. PhotoShow only displays videos in WebM.

R. r-base provides an arm64 image but not a 32-bit arm image.

# 64-bit OS
docker pull r-base
docker run -it --rm r-base   # enter R directly

rocker/rstudio DOES NOT work on arm64 even though I can pull it. WARNING: The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested

I built a 32-bit armv7 image for r-base v4.0.2. This image works on either a 32-bit or 64-bit arm OS (tested on 32-/64-bit Raspberry Pi and other 32-bit SBC devices).

docker pull arraytools/r402armv7
docker run -it --rm arraytools/r402armv7 R
docker pull r-base
# Using default tag: latest
# latest: Pulling from library/r-base
# no matching manifest for linux/arm/v7 in the manifest list entries
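To check which architectures an image on Docker Hub provides before pulling (docker manifest is built into recent Docker CLIs; older ones may need DOCKER_CLI_EXPERIMENTAL=enabled):

docker manifest inspect r-base | grep architecture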

How and Why to Use A Remote Docker Host
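A sketch of the idea with a named context (user and host are placeholders; it requires ssh access to a machine that already runs Docker):

docker context create remote --docker "host=ssh://user@remote-host"
docker --context remote ps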

Backup

Usage

Basics, docs, cheatsheet, introduction

Note that sudo is needed unless it is on macOS.

If docker cannot find an image, it will try to pull it from its repository.

$ sudo docker run -it ubuntu /bin/bash
Unable to find image 'ubuntu' locally
Pulling repository ubuntu
04c5d3b7b065: Download complete 
511136ea3c5a: Download complete 
c7b7c6419568: Download complete 
70c8faa62a44: Download complete 
d735006ad9c1: Download complete 
root@ec83b3ac878d:/# 
purpose                                          command
run a container                                  docker container run -d -p 80:80 httpd
list running containers                          docker container ls
view logs of a Docker container                  docker container logs cranky_cori
identify Docker container processes              docker container top cranky_cori
stop a Docker container                          docker container stop cranky_cori
list stopped or not running Docker containers    docker container ls -a
start a Docker container                         docker container start c46f2e9e4690
remove a Docker container                        docker container rm cranky_cori
list Docker images                               docker images
remove a Docker image                            docker rmi iman/touch

Restart docker daemon

When I tried the Chap5 > Continuous integration (Jenkins) example from the Docker Book, I found I could not stop/kill the container. See others' reports here. The solution is to restart the docker daemon.

sudo service docker start

After that, I can stop and rm the container.

sudo docker stop jenkins
sudo docker rm jenkins
sudo docker ps -a

images vs containers

$ sudo docker images
REPOSITORY                     TAG                 IMAGE ID            CREATED              VIRTUAL SIZE
iman                           latest              6e0f5644b2fd        About a minute ago   460.4 MB
iman/touch                     latest              77b9ac5951c2        4 minutes ago        460.4 MB
<none>                         <none>              aaa75e64ddf0        5 weeks ago          188.3 MB
ouruser/sinatra                v2                  ea8c9f407a8d        5 weeks ago          447 MB
ubuntu                         14.04               ed5a78b7b42b        5 weeks ago          188.3 MB
ubuntu                         latest              ed5a78b7b42b        5 weeks ago          188.3 MB
eddelbuettel/docker-ubuntu-r   add-r-devel-san     3c19d078c5d9        3 months ago         460.4 MB
hello-world                    latest              ef872312fe1b        4 months ago         910 B
training/sinatra               latest              f0f4ab557f95        8 months ago         447 MB

$ sudo docker ps -a
CONTAINER ID IMAGE                                          COMMAND              CREATED        STATUS                   PORTS NAMES
8fbdbcdb5126 iman/touch:latest                              "/bin/bash"          2 minutes ago  Exited (0) 2 minutes ago       thirsty_engelbart   
dc9e82f2c00a eddelbuettel/docker-ubuntu-r:add-r-devel-san   "/bin/bash"          9 minutes ago  Exited (0) 3 minutes ago       kickass_bardeen     
532a90f36aa8 eddelbuettel/docker-ubuntu-r:add-r-devel-san   "/bin/bash"          18 hours ago   Exited (0) 18 hours ago        happy_lalande       
7634024ee0bf eddelbuettel/docker-ubuntu-r:add-r-devel-san   "/bin/bash"          18 hours ago   Exited (0) 18 hours ago        insane_mclean       
14034a9720cb eddelbuettel/docker-ubuntu-r:add-r-devel-san   "/bin/bash"          18 hours ago   Exited (0) 18 hours ago        naughty_lumiere     
ca90954628db eddelbuettel/docker-ubuntu-r:add-r-devel-san   "/bin/bash"          19 hours ago   Exited (130) 18 hours ago      sick_hawking        
8bbdcb7c339f eddelbuettel/docker-ubuntu-r:add-r-devel-san   "/bin/bash"          19 hours ago   Exited (0) 19 hours ago        modest_davinci      
e8e24f80f0dd aaa75e64ddf0                                   "/bin/sh -c 'apt-get 5 weeks ago    Exited (100) 5 weeks ago       berserk_hodgkin     
d41959e0eb55 aaa75e64ddf0                                   "/bin/sh -c 'apt-get 5 weeks ago    Exited (100) 5 weeks ago       jovial_curie        
b408c0e2805b aaa75e64ddf0                                   "/bin/sh -c 'apt-get 5 weeks ago    Exited (100) 5 weeks ago       lonely_tesla        
72a551e4b492 ouruser/sinatra:v2                             "/bin/bash"          5 weeks ago    Exited (0) 5 weeks ago         jolly_meitner       
75fd6cc4658b training/sinatra:latest                        "/bin/bash"          5 weeks ago    Exited (0) 5 weeks ago         evil_yalow          
cc8886f5a02e training/sinatra:latest                        "/bin/bash"          5 weeks ago    Exited (130) 5 weeks ago       elegant_curie       
0585e4f5fecd eddelbuettel/docker-ubuntu-r:add-r-devel-san   "/bin/bash"          5 weeks ago    Exited (0) 5 weeks ago         elated_euclid       
brb@brbweb4:~/Downloads$ 

When we want to delete a container, we use the container's CONTAINER ID or NAME (the first or last column of the docker ps -a output). But when we want to delete an image, we use the image's REPOSITORY or IMAGE ID (the first or third column of the docker images output).

$ sudo docker rm thirsty_engelbart  # iman/touch
$ sudo docker rm dc9e82f2c00a       # eddelbuettel/docker-ubuntu-r:add-r-devel-san
$ sudo docker ps -a   # check to see the container is gone now

$ sudo docker rmi 6e0f5644b2fd
$ sudo docker rmi iman/touch
$ sudo docker images  # check to see the images are gone now

Command line interface, CLI

https://docs.docker.com/engine/reference/commandline/cli/ Docker command line

$ docker

Usage:	docker COMMAND

A self-sufficient runtime for containers

Options:
      --config string      Location of client config files (default "/home/brb/.docker")
  -D, --debug              Enable debug mode
  -H, --host list          Daemon socket(s) to connect to
  -l, --log-level string   Set the logging level ("debug"|"info"|"warn"|"error"|"fatal") (default "info")
      --tls                Use TLS; implied by --tlsverify
      --tlscacert string   Trust certs signed only by this CA (default "/home/brb/.docker/ca.pem")
      --tlscert string     Path to TLS certificate file (default "/home/brb/.docker/cert.pem")
      --tlskey string      Path to TLS key file (default "/home/brb/.docker/key.pem")
      --tlsverify          Use TLS and verify the remote
  -v, --version            Print version information and quit

Management Commands:
  config      Manage Docker configs
  container   Manage containers
  image       Manage images
  network     Manage networks
  node        Manage Swarm nodes
  plugin      Manage plugins
  secret      Manage Docker secrets
  service     Manage services
  swarm       Manage Swarm
  system      Manage Docker
  trust       Manage trust on Docker images
  volume      Manage volumes

Commands:
  attach      Attach local standard input, output, and error streams to a running container
  build       Build an image from a Dockerfile
  commit      Create a new image from a container's changes
  cp          Copy files/folders between a container and the local filesystem
  create      Create a new container
  diff        Inspect changes to files or directories on a container's filesystem
  events      Get real time events from the server
  exec        Run a command in a running container
  export      Export a container's filesystem as a tar archive
  history     Show the history of an image
  images      List images
  import      Import the contents from a tarball to create a filesystem image
  info        Display system-wide information
  inspect     Return low-level information on Docker objects
  kill        Kill one or more running containers
  load        Load an image from a tar archive or STDIN
  login       Log in to a Docker registry
  logout      Log out from a Docker registry
  logs        Fetch the logs of a container
  pause       Pause all processes within one or more containers
  port        List port mappings or a specific mapping for the container
  ps          List containers
  pull        Pull an image or a repository from a registry
  push        Push an image or a repository to a registry
  rename      Rename a container
  restart     Restart one or more containers
  rm          Remove one or more containers
  rmi         Remove one or more images
  run         Run a command in a new container
  save        Save one or more images to a tar archive (streamed to STDOUT by default)
  search      Search the Docker Hub for images
  start       Start one or more stopped containers
  stats       Display a live stream of container(s) resource usage statistics
  stop        Stop one or more running containers
  tag         Create a tag TARGET_IMAGE that refers to SOURCE_IMAGE
  top         Display the running processes of a container
  unpause     Unpause all processes within one or more containers
  update      Update configuration of one or more containers
  version     Show the Docker version information
  wait        Block until one or more containers stop, then print their exit codes

Run 'docker COMMAND --help' for more information on a command.

Version, system information

Docker version

$ docker version
Client:
 Version:           18.06.1-ce
 API version:       1.38
 Go version:        go1.10.3
 Git commit:        e68fc7a
 Built:             Tue Aug 21 17:24:51 2018
 OS/Arch:           linux/amd64
 Experimental:      false

Server:
 Engine:
  Version:          18.06.1-ce
  API version:      1.38 (minimum version 1.12)
  Go version:       go1.10.3
  Git commit:       e68fc7a
  Built:            Tue Aug 21 17:23:15 2018
  OS/Arch:          linux/amd64
  Experimental:     false

System information.

  • what mode the Docker engine is operating in (swarm mode or not)
  • what storage driver is used for the union filesystem
  • what version of the Linux kernel we have on our host
  • etc.
$ docker system info
Containers: 2
 Running: 0
 Paused: 0
 Stopped: 2
Images: 10
Server Version: 18.06.1-ce
Storage Driver: overlay2
 Backing Filesystem: extfs
 Supports d_type: true
 Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host macvlan null overlay
 Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 468a545b9edcd5932818eb9de8e72413e616e86e
runc version: 69663f0bd4b60df09991c08812a60108003fa340
init version: fec3683
Security Options:
 apparmor
 seccomp
  Profile: default
Kernel Version: 4.15.0-33-generic
Operating System: Ubuntu 18.04.1 LTS
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 7.674GiB
Name: t420s
ID: VLWB:6BN3:U7KB:L4T4:GQIB:54F3:YZKJ:PAIR:HEUM:UQIC:XLZU:3IFJ
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
 127.0.0.0/8
Live Restore Enabled: false

WARNING: No swap limit support

List resource consumption

$ docker system df
TYPE                TOTAL               ACTIVE              SIZE                RECLAIMABLE
Images              10                  2                   2.58GB              1.519GB (58%)
Containers          2                   0                   304B                304B (100%)
Local Volumes       2                   0                   314.7MB             314.7MB (100%)
Build Cache         0                   0                   0B                  0B

$ docker system df -v  # more detailed information
# We can use the information to clean up our system

A brief intro to docker virtualization

docker search --help
docker search redis
docker search -s 100 redis
docker pull --help
docker pull ubuntu # download the latest ubuntu image (use --all-tags to download all versions)
docker images    # available local container images
docker pull centos:latest
docker run --help
cat /etc/issue   # look at the current distro name before running docker
docker run -it centos:latest /bin/bash
                 # create a container and run a shell inside it as root

cat /etc/redhat-release
yum
cd /home
touch temp.txt
ls
exit

docker ps   # current running processes
docker ps -a # show all processes including closed
docker restart c85850ed0e13
docker ps   # container c85850ed0e13 is running
docker attach c85850ed0e13 # log into the system

ls /home
exit

docker ps -a
docker rm c85850ed0e13 # delete the container

Note: Following the discussion, attach connects to the container's main shell, so only one shell instance is available. If we use exec, we can launch multiple shell instances.

sudo docker exec -i -t c85850ed0e13 bash #by ID
or
$ sudo docker exec -i -t loving_heisenberg bash #by Name

Rootless mode

docker pull

https://docs.docker.com/engine/reference/commandline/pull/

$ docker pull ubuntu:zesty
$ docker run -ti --rm ubuntu:zesty /bin/bash 
# lsb_release -a         
bash: lsb_release: command not found
# cat /etc/*release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=17.04
DISTRIB_CODENAME=zesty
DISTRIB_DESCRIPTION="Ubuntu 17.04"
NAME="Ubuntu"
VERSION="17.04 (Zesty Zapus)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 17.04"
VERSION_ID="17.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=zesty
UBUNTU_CODENAME=zesty

Update/upgrade images

docker compose pull && docker compose up -d
docker compose up --pull always -d

<none>:<none> images

Exit/detach from a container without stopping it

$ docker container run -it ubuntu:latest /bin/bash
# Ctrl+p, Ctrl+q to exit the container without terminating it
$ docker ps -a # showing the container 70c5aceb5512 is running in the background

# You can open another shell in it with the "docker container exec" command (or reattach with "docker attach")
$ docker container exec -it 70c5aceb5512 bash

How to start a stopped Docker container with a different command

How to start a stopped Docker container with a different command?

Clean shutdown DOCKER containers before reboot

Dockerizing Applications/Detached mode

$ sudo docker run -d --name insane_babbage ubuntu:14.04 /bin/sh -c "while true; do echo hello world; sleep 1; done"
$ sudo docker ps -l
$ sudo docker logs insane_babbage
$ sudo docker stop insane_babbage
$ sudo docker ps

The -d flag tells Docker to run the container and put it in the background, to daemonize it.

According to https://docs.docker.com/engine/reference/run/#detached-vs-foreground, containers started in detached mode exit when the root process used to run the container exits, unless you also specify the --rm option. If you use -d with --rm, the container is removed when it is stopped, exits or when the daemon exits, whichever happens first.
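
As a quick sketch of this behavior (the alpine image and names are arbitrary), combining -d with --rm means the container is removed automatically as soon as it stops:

docker run -d --rm --name tempjob alpine sleep 300
docker ps                  # tempjob is running
docker stop tempjob        # stopping it also removes it because of --rm
docker ps -a               # tempjob no longer appears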

Automatically restart after reboot

https://stackoverflow.com/questions/18786054/how-to-auto-restart-a-docker-container-after-a-reboot-in-coreos

Add a --restart=always parameter. It will always restart a stopped container unless it has been explicitly stopped, such as via a "docker container stop" command. See the following

$ docker run -d --restart always myCustomeDocker

$ docker container run --name neverdie -it --restart always ubuntu /bin/bash
# exit
$ docker ps -a  # the container is still there
$ docker stop neverdie
$ docker ps -a

Working with Containers

$ sudo docker run -i -t ubuntu /bin/bash
$ sudo docker version
$ sudo docker
$ sudo docker attach --help

Environment variables

Docker container ID

  • The full container ID is a hexadecimal string of 64 characters; docker ps shows a shortened 12-character form.
  • We can use an even shorter prefix in docker commands as long as it uniquely identifies the container. For example, docker exec -it 9608 bash or even docker exec -it 9 bash works.

Alpine image

apk add htop

Running a Web Application

$ sudo docker run -d -P training/webapp python app.py

Alpine Linux is about 6 MB. It is a good OS for running a web application. See the demo here.

Viewing our Web Application Container

$ sudo docker ps -l
$ sudo docker run -d -p 5000:5000 training/webapp python app.py

Check container status (docker stats) - CPU, memory usage
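
A minimal sketch of the command (the container name is a placeholder):

docker stats                  # live stream of CPU, memory, network and block I/O for running containers
docker stats --no-stream      # print one snapshot and exit
docker stats my_container     # restrict the output to one container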

Container networking

Host network

If you use the host network driver for a container, that container’s network stack is not isolated from the Docker host. For instance, if you run a container which binds to port 80 and you use host networking, the container’s application will be available on port 80 on the host’s IP address.
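
A small sketch, assuming the nginx image and that port 80 is free on the (Linux) host:

docker run -d --rm --net host --name web nginx
curl http://localhost:80      # served directly on the host's port 80, no -p mapping needed
docker stop web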

ping, ifconfig and ip commands not found in Ubuntu container

apt update
apt install iputils-ping  # ping 
apt install net-tools     # ifconfig
apt install iproute2      # ip

Network Port Shortcut

$ sudo docker port nostalgic_morse 5000

Access Ports on the Host from a Docker Container

How to Access Ports on the Host from a Docker Container

Multiple NICs

containers in docker to use public ip addresses directly

Viewing the Web Application's Logs

$ sudo docker logs -f nostalgic_morse

Clear Logs of Running Docker Containers

How to Clear Logs of Running Docker Containers
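
One common approach (a sketch, assuming the default json-file logging driver and root access on the host) is to truncate the container's log file:

sudo truncate -s 0 $(docker inspect --format='{{.LogPath}}' CONTAINER_NAME)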

Looking at our Web Application Container's processes

$ sudo docker top nostalgic_morse

Inspecting our Web Application Container

$ sudo docker inspect nostalgic_morse

Obtain the container's IP address, log into a running server

PS. Portainer web interface can show the IP addresses.

$ docker inspect <container id> | grep "IPAddress"

We don't need the IP address if we just want to log into a running server,

$ docker exec -it <container id> bash

How to Secure Docker’s TCP Socket

How to Secure Docker’s TCP Socket with TLS

docker attach

Suppose I run docker run -it --user rstudio bioconductor/bioconductor_docker:devel R and use q() to quit the container. The container is still there. To re-enter R in the container, I use

docker start XXXXXXXX    # restart it in the background
docker attach XXXXXXXX   # reattach the terminal & stdin

If we want the latest created container, then we use

docker start `docker ps -q -l` && docker attach `docker ps -q -l`

docker exec: SSH into a running container

Run a command in a running container

  • Usage:
    docker exec [OPTIONS] CONTAINER COMMAND [ARG...]
  • Examples:
    $ docker exec -d ubuntu_bash touch /tmp/execWorks # do something in the background
    
    $ docker exec -it ubuntu_bash bash
    
    $ docker exec -it -e VAR=1 ubuntu_bash bash # set an environment variable
    
    $ docker exec -it ubuntu_bash pwd
    $ docker exec -it -w /root ubuntu_bash pwd # change the working directory
  • How to Run a Command on a Running Docker Container
  • How to Use the Docker exec Command. nginx container is used as an example.
    docker run --name docker-nginx -p 8080:80 -d nginx
    
    # method 1. Access the Running Container’s Shell
    docker exec -it ID /bin/bash
      apt-get update
      apt-get upgrade -y
      exit
    
    # method 2. Run a Command from Outside the Container
    docker exec ID sh -c "apt-get update && apt-get upgrade -y"  # wrap in sh -c so both commands run inside the container
    
    docker exec ID cat /usr/share/nginx/html/index.html
    docker cp index.html ID:/usr/share/nginx/html/
    docker exec ID cat /usr/share/nginx/html/index.html
    

docker cp

Copy files/folders between a container and the local filesystem.
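
A couple of short examples ("mycontainer" is a placeholder name):

docker cp mycontainer:/etc/hostname ./hostname   # copy from the container to the host
docker cp ./notes.txt mycontainer:/tmp/          # copy from the host into the container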

Restart an exited Container

$ docker start nostalgic_morse
OR
$ docker restart nostalgic_morse

For an interactive container, use docker start -ai CONTAINER, which is equivalent to running "docker start CONTAINER" followed by "docker attach CONTAINER".

Rename a container

docker container rename

docker container rename CONTAINER NEW_NAME

Inspect container images and their metadata

Know the container size

docker ps -s

Meaning of two sizes

  • The "size" information shows the amount of data (on disk) that is used for the writable layer of each container
  • The "virtual size" is the amount of disk-space used for the read-only image data used by the container.

Removing our Web Application Container

$ sudo docker stop nostalgic_morse
$ sudo docker rm nostalgic_morse

Note: Always remember that deleting a container is final!

Dockerize an SSH service

https://docs.docker.com/engine/examples/running_ssh_service/#environment-variables

Remove old docker containers

This post on stackoverflow.com.

$ sudo docker ps -a | grep 'weeks ago' | awk '{print $1}' | xargs --no-run-if-empty sudo docker rm

Similarly to remove all exited containers

$ sudo docker ps -a | grep Exit | awk '{print $1}' | xargs sudo docker rm

To kill/stop (not delete) all running containers

$ sudo docker kill $(sudo docker ps -q)

To delete all stopped containers

$ sudo docker rm $(sudo docker ps -a -q)
OR
$ sudo docker rm `sudo docker ps -a -q`

It is also helpful to create bash aliases for these commands by editing ~/.bash_aliases file.

docker create vs docker run

https://stackoverflow.com/questions/37744961/docker-run-vs-create

docker create is similar to docker run -d except the container is never started.
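
A quick sketch with nginx (the name and ports are arbitrary):

docker create --name web -p 8080:80 nginx   # the container is created but not started
docker ps -a                                # STATUS shows "Created"
docker start web                            # now it is running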

Retrieve docker run command

https://stackoverflow.com/a/32774347. See the GitHub page of runlike. Alternatively, it is better to put the docker run command in a stack; Portainer, for example, has an Editor tab that shows the compose file.

docker run --rm -v /var/run/docker.sock:/var/run/docker.sock \
    assaflavie/runlike -p CONTAINER_NAME

The -p option splits the output into pretty lines.

docker run -it and -d together

How to Modify the Configuration of Running Docker Containers

How to Modify the Configuration of Running Docker Containers

Volume

Examples of host's volume locations

/home/$USER/docker/$PROJECT/$SUB-DIRECTORY

PUID, PGID, share volume permission/owner

  • Understanding PUID and PGID (or the source)
  • You should use the -e PUID and -e PGID options when creating a container from a Docker image to map the container’s internal user to a user on the host machine. This is useful because Docker runs all of its containers under the root user domain, which means that processes running inside your containers also run as root. This kind of elevated access is not ideal for day-to-day use and can potentially give applications access to things they shouldn’t. By using PUID and PGID, you can ensure that files and directories created during the container’s lifespan are owned by a user on the host machine instead of root.
  • Please note that not all Docker images support the PUID and PGID environment variables. The Docker image must be designed to use these variables. If you’re using an image that doesn’t support these variables, you may need to create a Dockerfile to build a new image that does.
  • The following works. The --user option is a built-in Docker feature that sets the user (and optionally the group) that is used to run the container. This option works regardless of whether the Docker image uses any specific environment variables. PS. "docker" user has been defined in the r-base's Dockerfile.
    docker run --rm -ti --user docker \
      -v "$(pwd)":/workspace r-base 
    > setwd("/workspace")
    > save(iris, file="iris.rda")
    > system("ls -lt")
    > unlink("iris.rda")
  • Similarly, the --user option works with rocker/rstudio image and ubuntu.
    docker run --rm -ti --user rstudio \
      -v "$(pwd)":/workspace rocker/rstudio R
    > setwd("/workspace")
    > save(iris, file="iris.rda")
    > system("ls -lt")
    > unlink("iris.rda")

    Note that the prompt is $ rather than #.

    docker run --rm -it -v $(pwd):/home --user ubuntu \
       ubuntu bash
    $ id
    uid=1000(ubuntu) gid=1000(ubuntu) groups=1000(ubuntu),4(adm),20(dialout),24(cdrom),25(floppy),27(sudo),29(audio),30(dip),44(video),46(plugdev)
    $ cd /home
    $ echo "newfile" > newfile
    
    docker run --rm -it -v $(pwd):/home --user "$(id -u):$(id -g)" \
       ubuntu bash
    $ cd /home
    $ echo "newfile" > newfile
  • In the article Sharing files with host machine from the Rocker's project, users are instructed to use -e USERID variable if the host machine user has a UID other than 1000. But the generated file 'iris.rda' from the following example is still owned by root:(
    docker run --rm -ti -v "$(pwd)":/workspace -e USERID=$UID rocker/rstudio R
  • (Cont.) however, if we run the above command as a daemon and log in using the user "rstudio" , it works even we don't specify the "-e USERID" option. The lesson is we should use the user defined in the docker image.
    docker run --rm -v "$(pwd)":/workspace -p PASSWORD=123 rocker/rstudio
    

    Notice the prompt is # rather than $ and the user id is 0.

    docker run --rm -it -v $(pwd):/home -e PUID=1000 -e PGID=1000 \
      ubuntu bash
    # id
    uid=0(root) gid=0(root) groups=0(root)
    
  • The video How to Install Calibre on OMV and Docker uses the command id admin (where "admin" is the Portainer user) to find out the two ids: the PUID of "admin" and the PGID of "users".

Back Up Your Docker Volumes

How to Back Up Your Docker Volumes

Two ways to achieve persistent data
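
The two ways are presumably a Docker-managed named volume versus a bind mount of a host directory; a minimal sketch of each:

# 1. a named volume managed by Docker (stored under /var/lib/docker/volumes)
docker run --rm -v mydata:/data alpine sh -c 'echo persisted > /data/f.txt'
# 2. a bind mount of a host directory
docker run --rm -v "$(pwd)/data":/data alpine sh -c 'echo persisted > /data/f.txt'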

Inspect the 'Mountpoint' of a volume

$ docker volume create crv
$ docker volume ls

$ docker run -d \
     --name mycloud \
     -p 81:80 \
     -v apps:/var/www/html/custom_apps \
     nextcloud

# docker inspect is not quite useful. It does not show how the volume was created
# But we can examine (ls, du, ...) the directory contents
$ docker inspect apps   
[
    {
        "CreatedAt": "2018-10-23T09:41:52-04:00",
        "Driver": "local",
        "Labels": null,
        "Mountpoint": "/var/lib/docker/volumes/apps/_data",
        "Name": "apps",
        "Options": null,
        "Scope": "local"
    }
]

Remove an unnamed volume

If you want to automatically remove volumes when a container is removed, you can use the --rm flag when starting the container with the "docker run" command. This flag tells Docker to automatically remove the container and any anonymous volumes associated with it when the container exits. However, this flag does not affect named volumes.

If you created an unnamed volume, it can be deleted at the same time as the container with the -v flag. Note that this only works with unnamed volumes.

docker rm -v container_name

If the volume is named, it stays present. To remove a named volume, use docker volume rm volume_name .
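
A short sketch of the anonymous-volume case (the image and names are arbitrary):

docker run --name tmp -v /data alpine true   # -v /data creates an anonymous volume
docker volume ls                             # an unnamed (hash-named) volume appears
docker rm -v tmp                             # removing the container with -v removes that volume too
docker volume ls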

Volumes created in docker-compose

When you use docker-compose to create and manage containers, volumes are handled slightly differently than when using the docker run command.

In a "docker-compose.yml" file, you can specify named volumes using the volumes key at the top level of the file. These volumes are created when you run docker-compose up and are not automatically removed when you stop or remove the containers using docker-compose down.

If you want to remove named volumes created by docker-compose, you can use the -v flag with the docker-compose down command. Here’s an example command that stops and removes all containers defined in a docker-compose.yml file and also removes any named volumes:

docker-compose down -v

This command stops and removes all containers defined in the docker-compose.yml file and also removes any named volumes specified in the file. All data stored in the volumes will be permanently deleted.

Anonymous volumes created by docker-compose are automatically removed when you stop and remove the containers using docker-compose down, even if you don’t use the -v flag.
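
A minimal compose sketch with a named volume (the service and volume names are hypothetical):

cat > docker-compose.yml <<'EOF'
version: "3"
services:
  db:
    image: redis:alpine
    volumes:
      - db-data:/data
volumes:
  db-data:
EOF
docker-compose up -d     # creates the db-data volume
docker-compose down      # removes the container and network; db-data remains
docker-compose down -v   # use this instead to remove db-data as well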

Start a container with a volume

# Using --mount
docker run -d \
  --name devtest \
  --mount source=myvol2,target=/app \
  nginx:latest

# The equivalent with -v
docker run -d \
  --name devtest \
  -v myvol2:/app \
  nginx:latest

Note

  • target in "--mount" can be replaced by destination or dst.
  • To use a read-only volume, add the ,readonly option in "--mount" or the :ro option in "-v".
  • We cannot use "~/" to represent a local directory under HOME. We have to specify a full path in docker run.

A simple example

From the book "Learn Docker - Fundamentals of Docker 18.x", Chap 5. Data Volumes and System Management > Creating and mounting data volumes.

# Create a volume
docker volume create my-data
docker volume inspect my-data
# The host folder can be found in the output under 'Mountpoint'
# In my case,
#        "Mountpoint": "/var/lib/docker/volumes/my-data/_data",

# Mount a volume into a container
docker run --name test -it -v my-data:/data alpine /bin/sh
# cd /data
# echo 'some data' > data.txt
# echo 'more data' > data2.txt
# exit
docker inspect my-data
sudo ls /var/lib/docker/volumes/my-data/_data
# We can even try to output the content of say, the second file:
sudo cat /var/lib/docker/volumes/my-data/_data/data2.txt
# We can create a new file in this folder from the host and then use the volume with another container
echo "the file is created on host" > sudo tee /var/lib/docker/volumes/my-data/_data/host-data
# Let's delete the test container and run another one
docker rm test

# This time we are mounting our volume to a different container folder
docker run --name test2 -it -v my-data:/app/data centos:7 /bin/bash
# We are able to see three files:
# ls /app/data

# Remove volumes
docker volume rm my-data # Or 
docker volume rm $(docker volume ls -q)

# Remove all running containers to clean up the system,
docker rm -f $(docker ps -aq)

Sharing data between containers

How to Share Data Between Docker Containers

docker run -it --name writer -v shared-data:/data alpine /bin/sh
# create a file inside it
# echo 'my sample file' > /data/sample.txt
# exit
docker run -it --name reader -v shared-data:/app/data:ro ubuntu:17.04 /bin/bash
# ls -l /app/data

Using host volumes

Use volumes that mount a specific host folder

  • It may be possible to bind a local directory to a named volume with the "docker volume create" command (via the local driver's options). See examples in the "docker volume create" documentation and the sketch after the example below.
  • Specifying a directory name instead of giving a volume name in the "docker run" command's -v option
  • Since we are specifying a directory name instead of letting Docker create a new volume, "docker volume ls" will not show a new volume
docker run -it --name test -v $(pwd)/src:/app/src alpine /bin/sh
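
A hedged sketch of the first bullet above: bind an existing host directory to a named volume through the local driver's options (the volume name is arbitrary and $(pwd)/src must already exist):

docker volume create --driver local \
  --opt type=none --opt o=bind --opt device="$(pwd)/src" my-bind-vol
docker run --rm -it -v my-bind-vol:/app/src alpine ls /app/src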

# Make a sample to demonstrate how that works
mkdir ~/my-web; cd ~/my-web
echo "<h1>My website</h1>" > index.html

# Create 'Dockerfile'
echo -e 'FROM nginx:alpine
COPY . /usr/share/nginx/html' > Dockerfile

docker image build -t my-website:1.0 .
docker run -d -p 8080:80 --name my-site my-website:1.0

# Open http://localhost:8080. It looks good
# Now modify index.html and refresh the website. It does not refresh
# Let's stop and rm the container and rebuild using a volume
docker rm -f my-site
docker run -d -v $(pwd):/usr/share/nginx/html \
   -p 8080:80 --name my-site my-website:1.0
# Now any changes on index.html will refresh on the website

Define volumes in images

A few samples of volume definition

VOLUME /app/data
VOLUME /app/data, /app/profiles, /app/config
VOLUME {"/app/data", "/app/profiles", "/app/config"]

The first line defines a single volume to be mounted at /app/data.

We can use the docker image inspect command to get information about the volumes defined in the Dockerfile.

docker image pull mongo:3.7
docker image inspect --format='{{json .ContainerConfig.Volumes}}' \
       mongo:3.7 | jq
# {
#   "/data/configdb": {},
#   "/data/db": {}
# }

# now run an instance of MongoDB and inspect the volume information
docker run --name my-mongo -d mongo:3.7
docker inspect --format '{{json .Mounts}}' my-mongo | jq
# [
#  {
#    "Type": "volume",
#    "Name": "535e0138b9a32e89f71380e9e73bb0de64ce0d1cad78fcda0ec1d49e11d76d7a",
#    "Source": "/var/lib/docker/volumes/535e0138b9a32e89f71380e9e73bb0de64ce0d1.../_data",
#    "Destination": "/data/configdb",
#    "Driver": "local",
#    "Mode": "",
#    "RW": true,
#    "Propagation": ""
#  },
#  {
#    "Type": "volume",
# SKIP

Differences between VOLUME and '-v|--volume'

https://stackoverflow.com/a/25312719

Container Memory Limits, Setting Available CPUs, Allocating memory and CPU

docker run \
    --rm \
    --memory=6g \
    --cpus=1.5 \
    -v /shared/data-store:/home/rstudio/data \
    -v /shared/library-store:/usr/local/lib/R/host-site-library \
    -e PASSWORD=bioc \
    -p 8787:8787 \
         bioconductor/bioconductor_full:devel
# --rm: automatically remove the container when it exits
# --memory=6g: memory limit
# --cpus=1.5: number of CPUs

Work with container images

List images by size or name

# by size
docker images --format "{{.ID}}\t{{.Size}}\t{{.Repository}}" | sort -k 2 -h

# by name
docker images --format "{{.ID}}\t{{.Size}}\t{{.Repository}}" | sort -k 3 

List specific columns

docker images --format 'table {{.Repository}}\t{{.Tag}}\t{{.Size}}'

Create an image interactively using commit - Example 1

The example is from the book 'Learn Docker - Fundamentals of Docker 18.x'.

docker container run -it --name sample alpine /bin/sh
# apk update && apk add iputils
# ping 127.0.0.1
# exit
docker container ls -a | grep sample
docker container diff sample

We can now use the docker container commit command to persist our modifications and create a new image from them

docker container commit sample my-alpine
docker image ls

If we want to see how our custom image has been built, we can use the history command as follows:

docker image history my-alpine
# IMAGE               CREATED              CREATED BY                                      SIZE    COMMENT
# 0f105057899b        About a minute ago   /bin/sh                                         1.55MB              
# 196d12cf6ab1        4 weeks ago          /bin/sh -c #(nop)  CMD ["/bin/sh"]              0B                  
# <missing>           4 weeks ago          /bin/sh -c #(nop) ADD file:25c10b1d1b41d46a1…   4.41MB

The first layer in the preceding list is the one we just created by adding the iputils package.

Create an image interactively using commit - Example 2

Note that it is better (and sometimes necessary) to put the Dockerfile in an empty directory; otherwise the build can take a long time ("sending build context to Docker daemon ...GB") because Docker sends everything in the current directory as the build context.

sudo docker search sinatra
sudo docker pull training/sinatra
sudo docker run -t -i training/sinatra /bin/bash
sudo docker commit -m="Added json gem" -a="Kate Smith" 0b2616b0e5a8 ouruser/sinatra:v2
sudo docker images

mkdir sinatra
cd sinatra
touch Dockerfile
sudo docker build -t="ouruser/sinatra:v2" .
sudo docker push ouruser/sinatra
sudo docker rmi training/sinatra
  • I get an error when I try to launch sinatra on my 32-bit ubuntu (Docker can only be installed through apt-get on 32-bit)
$ sudo docker run -t -i training/sinatra /bin/bash
2014/12/31 02:43:26 exec format error

How to copy Docker images from one host to another without using a repository

https://stackoverflow.com/questions/23935141/how-to-copy-docker-images-from-one-host-to-another-without-using-a-repository

docker save -o out.tar <image name>
# Or better to compress the file
docker save <docker image name> | gzip > out.tar.gz

And restore

docker load -i out.tar
# Or decompress the file
docker load < out.tar.gz

Docker Image Manifest

What Is a Docker Image Manifest?

Resources allocated to a container using docker?

https://stackoverflow.com/questions/16084741/how-do-i-set-resources-allocated-to-a-container-using-docker

hub.docker.com

docker tag local-image:tagname new-repo:tagname
docker login
docker push new-repo:tagname
docker pull phusion/baseimage
docker run -ti phusion/baseimage /bin/bash
  • https://dockerfile.github.io/ which includes dockerfiles for different purposes. The ubuntu-desktop one also works well (client needs a vnc viewer in order to see the desktop).

Set up a private Docker registry

$ curl -i -H "Accept: application/json" -H "Content-Type: application/json" -X GET http://localhost:5000/v2/_catalog
OR
$ curl -H "Accept: application/xml" -H "Content-Type: application/json" -X GET http://localhost:5000/v2/_catalog

Github registry

docker pull ghcr.io/OWNER/IMAGE_NAME:TAG

# docker pull registry-url/image-name:tag
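
To push to ghcr.io, log in first with a personal access token; a sketch (CR_PAT, USERNAME, OWNER, IMAGE_NAME and TAG are placeholders):

echo $CR_PAT | docker login ghcr.io -u USERNAME --password-stdin
docker tag my-image ghcr.io/OWNER/IMAGE_NAME:TAG
docker push ghcr.io/OWNER/IMAGE_NAME:TAG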

Google cloud registry

Using google cloud registry for private docker images

Dockerfile

  • Dockerfile Reference
  • Using Dockerfiles to Automate Building of Images from digitalocean.com.
  • Remember to put the Dockerfile in an empty directory.
  • What goes into a Dockerfile
  • Keywords
    • FROM. If we want to start from scratch, we can use FROM scratch.
    • RUN. The argument for RUN is any valid Linux command.
    • USER. This is useful if we want to create new files with non-root ownership. For example, new files created under a bind-mounted directory by a non-root user will belong to the current user on the host system. Here is an example where we use Rmarkdown to create pdf output; the generated pdf file should not be owned by root. How to add users to Docker container? Switch users.
    • COPY & ADD.
      • "COPY . /app" will copy all files and folders from the current directory recursively to the /app folder. We can use "ADD" too but "ADD" will automatically unpack tarballs. See What is the difference between the `COPY` and `ADD` commands in a Dockerfile?
      • "ADD sample.tar /app/bin" will unpack the sample.tar' file into the target folder
      • "ADD http://example.com/sample.txt /data/" will copy the remote file sample.txt into the target file
    • WORKDIR. Define the working directory or context that is used when a container is run from the image.
    • CMD & ENTRYPOINT. These two are actually definitions of what will happen when a container is started from the image.
      • Use CMD without ENTRYPOINT: "CMD command param1 param2". This form is called the shell form.
      • If we use ENTRYPOINT + CMD, ENTRYPOINT defines the command and CMD defines the parameters. The alpine example below will run ping 8.8.8.8 -c 3. This form is called the exec form.
  • The Docker Book

Examples of Dockerfile

FROM python:2.7
RUN mkdir -p /app
WORKDIR /app
COPY ./requirements.txt /app/
RUN pip install -r requirements.txt
CMD ["python", "main.py"]
  • Another example
FROM alpine:latest
ENTRYPOINT ["ping"]
CMD ["8.8.8.8", "-c", "3"]
  • A Dockerfile that creates a non-root user
FROM debian:testing
RUN useradd docker \
	&& mkdir /home/docker \
	&& chown docker:docker /home/docker \
	&& addgroup docker staff

We can test it with "docker build -t mydebian . " and "docker run --rm -it --user docker -v /tmp:/home/docker mydebian". We can create a new file under /home/docker, and the file will be accessible and belong to the current host user once we quit the container. This actually is a huge security issue.

The same technique does not work on alpine if I try to create a new file in the container.

FROM alpine:latest
# Create a group and user; not useful for creating files in host OS
RUN addgroup -S appgroup && adduser -S appuser -G appgroup \
           && chown appuser:appgroup /home/appuser

"docker build -t myalpine . " and "docker run --rm -it -v ~/Downloads/:/home/appuser:rw --user appuser myalpine". When I use the "id" command in the container, I see it returns 100 in alpine container and 1000 in debian container. The id returns 1000 on my host (Ubuntu/Pop_OS). So the solution is docker run --rm -it -v ~/Downloads/:/home/appuser --user 1000:1000 myalpine. So the local user and the created user home directory in the container are not needed. See

Rocker

FROM r-base:latest
COPY check.R .
CMD [ "Rscript", "check.R", "/unsafe.rda"]
$ git clone https://github.com/hrbrmstr/rdaradar.git
$ docker build -t rdaradar:0.1.0 -t rdaradar:latest .  
$ docker run --rm -v "$(pwd)/exploit.rda:/unsafe.rda" rdaradar 

Bioconductor

Bioconductor

Papers

How to use Dockerfile

https://docs.docker.com/engine/reference/commandline/build/

The . simply means "current working directory".

docker build -f Dockerfile -t arraytools/myimagename .

docker build -t [myname] .  
# Multiple tags
docker build -t arraytools/biospear:latest -t arraytools/biospear:3.6.0 .

In the above example, we can create the image by

docker image build -t pinger .

We can run a container from the pinger image

docker container run --rm -it pinger

Docker Build Args

How to Use Docker Build Args to Configure Image Builds
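
A minimal sketch of a build argument (ARG), assuming an empty build directory; the variable name and tag are arbitrary:

cat > Dockerfile <<'EOF'
FROM alpine:latest
ARG APP_VERSION=unknown
RUN echo "built version $APP_VERSION" > /version.txt
EOF
docker build --build-arg APP_VERSION=1.2.3 -t argdemo .
docker run --rm argdemo cat /version.txt   # built version 1.2.3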

Clean up after failed builds

Cleanup docker images and containers after failed builds

#!/bin/bash
docker rm $(docker ps -aq)
docker rmi $(docker images | grep "^<none>" | awk '{print $3}')

ENTRYPOINT and CMD

The advantage of using ENTRYPOINT + CMD (exec form) instead of CMD alone (shell form) is that we can override, at run time, the CMD part defined in the Dockerfile.

docker container run --rm -it pinger -w 5 127.0.0.1
# ping the loopback for 5 seconds

If we want to overwrite what's defined in the ENTRYPOINT in the Dockerfile, we need to use the --entrypoint parameter.

docker container run --rm -it --entrypoint /bin/sh pinger
# we'll be inside the container. Type exit to leave the container

When we use the shell form, the ENTRYPOINT has the default value of /bin/sh -c, and whatever the value of CMD is will be passed as a string to that shell command.
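
A small sketch of the shell form (the image name is arbitrary); because the command is wrapped in /bin/sh -c, shell variables such as $HOSTNAME are expanded at run time:

cat > Dockerfile <<'EOF'
FROM alpine:latest
CMD echo "running on $HOSTNAME"
EOF
docker build -t shellform .
docker run --rm shellform
docker image inspect --format '{{json .Config.Cmd}}' shellform   # shows the /bin/sh -c wrapper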

Temporary failure resolving 'deb.debian.org' when running "docker build"

Add "--net=host" to the docker build command. See Docker build “Could not resolve 'archive.ubuntu.com'” apt-get fails to install anything

Best practices for writing Dockerfiles

Use multi-stage builds

With multi-stage builds, we have a single Dockerfile containing multiple FROM instructions. Each FROM instruction is a new build stage that can easily COPY artifacts from previous stages.

An example from the "Docker Deep Dive" book.
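
Not the book's example, but a generic multi-stage sketch: compile in a full golang image and copy only the binary into a small final image (a main.go is assumed to exist in the build context):

cat > Dockerfile <<'EOF'
FROM golang:1.21 AS builder
WORKDIR /src
COPY main.go .
RUN CGO_ENABLED=0 go build -o /app main.go

FROM alpine:latest
COPY --from=builder /app /app
ENTRYPOINT ["/app"]
EOF
docker build -t multistage-demo .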

tag after image was built

$ docker tag <imageID> <newName>/<repoName>:<tagName>

About storage drivers

https://docs.docker.com/storage/storagedriver/#sharing-promotes-smaller-images

Privileged versus Root user in Docker

.dockerignore

Using .dockerignore files to build better Docker images
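
A small sketch of a .dockerignore file; the listed paths are just common examples and are excluded from the build context:

cat > .dockerignore <<'EOF'
.git
node_modules
*.log
EOF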

Dockerfile in One Line

FROM ubuntu

Using this simple Dockerfile, the command sudo docker build -t scooby_snacks . will result in

$ sudo docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             VIRTUAL SIZE
ubuntu              15.04               2427658c75a1        42 hours ago        117.5 MB
ubuntu              vivid               2427658c75a1        42 hours ago        117.5 MB
ubuntu              vivid-20150218      2427658c75a1        42 hours ago        117.5 MB
ubuntu              utopic-20150211     78949b1e1cfd        42 hours ago        194.4 MB
ubuntu              utopic              78949b1e1cfd        42 hours ago        194.4 MB
ubuntu              14.10               78949b1e1cfd        42 hours ago        194.4 MB
ubuntu              14.04               2d24f826cb16        42 hours ago        188.3 MB
ubuntu              14.04.2             2d24f826cb16        42 hours ago        188.3 MB
ubuntu              trusty              2d24f826cb16        42 hours ago        188.3 MB
ubuntu              trusty-20150218.1   2d24f826cb16        42 hours ago        188.3 MB
ubuntu              latest              2d24f826cb16        42 hours ago        188.3 MB
scooby_snacks       latest              2d24f826cb16        42 hours ago        188.3 MB
ubuntu              precise             1f80e9ca2ac3        42 hours ago        131.5 MB
ubuntu              precise-20150212    1f80e9ca2ac3        42 hours ago        131.5 MB
ubuntu              12.04.5             1f80e9ca2ac3        42 hours ago        131.5 MB
ubuntu              12.04               1f80e9ca2ac3        42 hours ago        131.5 MB
ubuntu              14.04.1             5ba9dab47459        3 weeks ago         188.3 MB
ubuntu              12.10               c5881f11ded9        8 months ago        172.2 MB
ubuntu              quantal             c5881f11ded9        8 months ago        172.2 MB
ubuntu              13.04               463ff6be4238        8 months ago        169.4 MB
ubuntu              raring              463ff6be4238        8 months ago        169.4 MB
ubuntu              13.10               195eb90b5349        8 months ago        184.7 MB
ubuntu              saucy               195eb90b5349        8 months ago        184.7 MB
ubuntu              10.04               3db9c44f4520        10 months ago       183 MB
ubuntu              lucid               3db9c44f4520        10 months ago       183 MB

List all tags of an image

How can I list all tags for a Docker image on a remote registry?

Tag the image with the git commit ID

$ docker build -t REPOS/IMAGE:$(git rev-parse --verify HEAD) .

Run a shell script on host

$ docker run -v /path/to/sample_script.sh:/sample_script.sh \
  --rm ubuntu bash sample_script.sh

# GATK container example
# First we log in interactive and see where is the default location (/usr in this case)
$ docker run --rm -i -t broadinstitute/gatk3:3.8-0 bash
$ cat > tmp.sh << EOF
> pwd
> ls
> java -jar GenomeAnalysisTK.jar --version
> EOF
$ docker run --rm -v $(pwd):/usr/my broadinstitute/gatk3:3.8-0 bash my/tmp.sh
# ALTERNATIVELY, WE CAN PUT OUR SCRIPT IN THE TOP DIRECTORY (Hopefully the name is not duplicated)
$ docker run --rm -v $(pwd)/tmp.sh:/tmp.sh broadinstitute/gatk3:3.8-0 bash /tmp.sh
docker run -dit --name Test -v $(pwd):/my SOMEIMAGE bash
docker exec -d Test bash /my/script.sh

Link containers together

Manage data in containers

Assign a static IP to a container

Running Multiple Docker Services on the Same Server

How to Run Multiple Docker Containers on Different IP Addresses

Firewall

Rstudio server not loading, taking too long to respond in browser. On Ubuntu run sudo ufw allow PORTNUMBER.

Docker DNS/internet problem

I got an error on resolving the debian server when I was creating an image from a Dockerfile that needs to run apt update and apt install commands. See RStudio in Docker – now share your R code effortlessly!. The problem happened on my Linux Mint Desktop but not on a VirtualBox VM (Ubuntu 18.04).

Fix Docker's networking DNS config

A temporary solution is to add the --dns option to docker run command. This works well when I use the IP from any one of my 2 DNS servers. It does not work however if I use the IP from google DNS or OpenDNS.
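
A sketch of the temporary fix (the DNS IP is a placeholder for one of my own DNS servers):

docker run --rm --dns 192.168.1.1 busybox nslookup google.com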

A permanent solution is to create a new file /etc/docker/daemon.json and include the working DNS server IPs (these are obtained through the nmcli command or the NetworkManager GUI; see Query DNS server).

{
    "dns": ["XXX.XX.XX.XX", "YYY.YY.YY.YY"]
}

Then restart the docker service: sudo service docker restart

A quick test on the DNS problem is

docker run --rm busybox nslookup google.com

Working with Docker hub

https://docs.docker.com/userguide/dockerrepos/

Github Actions

Enabling HTTPS/Let's encrypt

Enabling HTTPS by self-sign certificates

traefik: The Cloud Native Application Proxy

Nginx proxy manager

docker: Error response from daemon: Cannot link to /site1_app_1, as it does not belong to the default network.

Running multiple web applications on a Docker host

Authentication: Authelia

Additional Self-Hosted Security with Authelia on NGINX Proxy Manager (video)

GUI apps

Firefox example

Running GUI Applications in Docker Container

FROM ubuntu:20.04
RUN apt update
RUN apt install firefox -y
RUN apt install python3-pip -y
RUN pip3 install  notebook

# only the last CMD in a Dockerfile takes effect, so keep just the one you want
CMD /usr/bin/firefox
# CMD jupyter-notebook --allow-root
nano Dockerfile
docker build -t gui .
docker run --env="DISPLAY" --net=host --name=firefox gui

It works. However, I need to use docker rm -f firefox to kill it since Ctrl+c does not work.

Meld example, save a running container as an image

Running a GUI Application in a Docker Container. It works. Below is a modified version for creating the meld app. I can save files modified by meld. To use the app, I need to place files in ~/Documents/docker (defined in -v). Note that the RAM usage is very minimal. Unfortunately, on macOS I got an error related to Gtk.

host> docker image pull ubuntu:jammy  # 22.04
 
host> docker container run --rm --net host -v /tmp/.X11-unix:/tmp/.X11-unix -it ubuntu:jammy
container# apt update
container# apt install -y meld
host> xhost +local:
container# export DISPLAY=:0

host> docker container ls  # find the ID of the running container
host> docker commit <ID> meld
container# exit

host> docker container run --rm --net host \
  -v /tmp/.X11-unix:/tmp/.X11-unix \
  -v ~/Documents/docker:/meld \
  -e DISPLAY \
  --name=meld \
  meld meld

R and httpgd package

  • httpgd Docker vignette, installation from Github.
  • It works. However, currently "httpgd" is archived on CRAN (2023/1/25). So my temporary solution is
    $ docker run --rm -it r-base:4.2.2 bash
    # apt update
    # apt install  libfontconfig1-dev
    # R
    > install.packages("remotes")
    > remotes::install_github("nx10/httpgd")
    ## note if we try to run 'httpgd::hgd(host = "0.0.0.0", port = 8888)', it does not work.
    ## The reason is we have not used the "-p" option to publish a port in the previous "docker run" command
    
    ## open another terminal and create a docker image based on the current container
    $ docker ps -a | head
    $ docker commit CONTAINER_ID httpgd:4.2.2
    $ docker run --rm -it -p 8888:8888 httpgd:4.2.2 R
    > httpgd::hgd(host = "0.0.0.0", port = 8888)
    > plot(1:5)
    
  • It works when I tested it on a remote ubuntu server (R 4.4.0 & httpgd 2.0.1) (following the instruction on Docker vignette). Either IP or hostname works but the hostname URL link given by httpgd::hgd() needs to be modified to include .local.
  • Some variation of using hgd()
    hgd(host="0.0.0.0", port = 8888) # allow connection from any one from any computer
    hgd()                # default is host=127.0.0.1, port will be random
    hgd(token="secret")  # define the token
    
    hgd_browse()
    hgd_close()
    hgd_details()
    hgd_url()
    hgd_view()
    
  • To use it with Bioconductor (the Bioconductor docker image will use p3m.dev to install binary R packages so it is fast to create images), we can do like this
    $ docker run --rm -it -p 8888:8888 bioconductor/bioconductor_docker:RELEASE_3_18 R 
    
    > install.packages("httpgd")
    > httpgd::hgd(host = "0.0.0.0", port = 8888)
    

    OR use, for example, "bioconductor/bioconductor_docker:RELEASE_3_18" as the base image in the Dockerfile, and follow the same instruction from httpgd vignette to create a docker image.

    $ nano Dockerfile_httpgd
    $ docker build . -f Dockerfile_httpgd -t bioc-httpgd:RELEASE_3_18
    $ docker images
    $ docker run --rm -it --user rstudio -p 8888:8888 bioc-httpgd:RELEASE_3_18 R
    
  • Singularity. The following is a definition file that is using the bioconductor image + the httpgd package.
    Bootstrap: docker
    From: bioconductor/bioconductor_docker:RELEASE_3_18
    
    %post
        apt-get update \
        && apt-get install -y --no-install-recommends \
        libfontconfig1-dev \
        && apt-get autoremove -y && apt-get clean -y && rm -rf /var/lib/apt/lists/* \
        && install2.r --error --skipinstalled --ncpu -1 \
        httpgd \
        && rm -rf /tmp/downloaded_packages
    
    %runscript
        exec /usr/local/bin/R
    
    %environment
        export LC_ALL=C
    sudo singularity build bioc.sif bioc.def
    singularity run bioc.sif 
    
    > httpgd::hgd(host = "0.0.0.0", port = 8888)

    After we copy the URL, we need to modify the IP or hostname.

Docker-OSX

https://github.com/sickcodes/Docker-OSX

Delete/remove/prune unused resources

Prune unused Docker objects

  • Prune containers
    docker container prune # remove all containers that are not in "running" status
                           # Docker will ask for confirmation before deleting the containers
    
    docker container prune -f
    docker container rm -f $(docker container ls -aq) # remove even the running containers
  • Prune dangling images: Dangling images are images that aren’t tagged and aren’t referenced by any container.
    docker image prune # remove dangling image layers
  • Remove all unused images: If you want to remove all images that aren’t used by any existing containers, you can use the -a flag
    docker image prune -a
  • Prune volumes
    docker volume prune # remove volumes not used by at least one container
    
    docker volume prune --filter 'label=demo'
    docker volume prune --filter 'label=demo' --filter 'label=test'
  • Prune networks
    docker network prune
  • Prune everything.
    docker system prune

Plugins

How to Manage Docker Engine Plugins

Misc

LXC (raw Linux containers)

LXC vs Docker

Vagrant vs Docker

Date/Time zone

docker run --rm -t -i -v /etc/localtime:/etc/localtime:ro ubuntu date

Access the internet from the container

Run the container with the '--net=host' option

sudo docker run --net=host -it ubuntu /bin/bash

How to transfer/copy an image to another host

How to copy Docker images from one host to another without using a repository

# Step 1: save the Docker image as a tar file:
docker save -o <path for generated tar file> <image name>

# Step 2: copy your image to a new system with regular file transfer tools such as cp or scp. 

# Step 3: After that you will have to load the image into Docker:
docker load -i <path to image tar file>

The tar file size is the same as what we get from 'docker images'. If we use the 'gzip' utility, it can reduce the file size (e.g. 2.7GB to 1.1GB).

Or https://stackoverflow.com/a/39716019

# Step 1:
docker save docker-image-name | gzip > my-image.tar.gz
# Step 3:
docker load < my-image.tar.gz

Where are Docker containers/images stored on the host: /var/lib/docker

The default is /var/lib/docker. The location can be changed by modifying the file /etc/default/docker. Below are a couple of options if we are tight on disk space.

1. Create softlinks for the Docker data directory (/var/lib/docker) and for /var/lib/docker/tmp as described at miscellaneous-options. See this. See here for how to stop the docker daemon on different OSes.

sudo service docker stop   # or sudo systemctl stop docker
sudo mv /var/lib/docker /a/new/location
sudo ln -s /a/new/location /var/lib/docker # Create a symbolic link
sudo service docker start  # or sudo systemctl start docker

2. Change the default location to another place. For example,

sudo nano /etc/default/docker
# Add a line DOCKER_OPTS="-g /home/brb/Docker"

Then, after running sudo service docker.io restart followed by a simple pull sudo docker pull rocker/r-base or sudo docker run --rm -ti rocker/r-base (the Dockerfile of r-base is available on github.com; the --rm option automatically removes the container when it exits), we will see something like this:

$ docker run --rm -ti rocker/r-base
$ docker images
$ docker -v
Docker version 1.0.1, build 990021a

$ docker -D info | grep Root
 Root Dir: /home/brb/Docker/aufs

Consuming Docker system events

# Open a new terminal
docker system events
# This command is a blocking command. 
# Thus, when you execute it in your terminal session the according session is blocked.

# Open another terminal
docker container run --rm alpine echo "Hello World"

Monitor tools

Docker Machine

Docker Machine is a tool that lets you

  • Install Docker Engine on virtual hosts. You can use Machine (a unified way) to create Docker hosts on your local Mac or Windows box, on your company network, in your data center, or on cloud providers like Azure, AWS, or Digital Ocean. See the comment on here.
  • Provision and manage multiple remote Docker hosts
  • Provision Swarm clusters

Docker machine is not installed in Linux when you install Docker. See the instruction on here to install it.

My feeling is if we just want to play Docker on a local Linux machine, we don't really need to use Docker Machine (it just make life more complicated). But if we are working on Mac/Windows or we want to work on clouds or test on VirtualBox, we shall use Docker machines.

Use Docker-machine to Create Docker Servers. Compare the Docker images on the local machine (server 1) and a new host (server 2) created by docker-machine. Questions: 1. How do we tell whether we are in the host or the machine environment? 2. How do we exit the machine environment after using the eval $() command? (docker-machine stop MachineName)

$ docker-machine help
$ docker-machine create --driver=virtualbox test
# Follow its hint on the output, issue the following command
$ docker-machine env test
# Follow its hint on the output, issue the following command
$ eval $(docker-machine env test) # will configure the docker CLI to connect to this docker machine 'test'
                                 # This is equivalent to running 4 export commands on the command line
$ docker-machine ls  # Very useful
$ docker-machine stop test
$ docker-machine ip test
$ docker-machine start test
$ docker-machine rm test

Play Docker Machine on Mac with Virtualbox. Docker can be used to create a virtual machine just like Vagrant.

$ docker-machine create -d virtualbox demo
$ docker-machine ls

# first way to access a Docker host
$ docker-machine ssh demo
docker@demo:~$ docker images # empty for now

# second way to access 
$ docker-machine env demo
$ eval $(docker-machine env demo)
$ docker version

RancherOS demo video used the docker-machine command to pull and run the RancherOS.

docker-machine create -d virtualbox --virtualbox-boot2docker-url https://releases.rancher.com/os/latest/rancheros.iso demo
docker-machine ssh demo
ps
docker ps
sudo system-docker ps

sudo ros help
sudo ros console list
sudo ros console switch ubuntu
apt-get help

Package CLI Applications

How to Use Docker to Package CLI Applications

Stack

Docker app

Docker App is an experimental Docker feature which lets you build and publish application stacks consisting of multiple containers. It aims to let you share Docker Compose stacks with the same ease of use as regular Docker containers.

How to Use 'Docker App' to Containerise an Entire Application Stack

Docker Swarm

Security

Moby Project

What is Docker's Moby Project?

Windows container

How can I run a docker windows container on osx?

When Not to Use Docker

When Not to Use Docker: Cases Where Containers Don’t Help

Docker Compose <docker-compose.yaml>

Docker Compose can help us out as it allows us to specify a single file in which we can define our entire environment structure and run it with a single command (much like a Vagrantfile works).

YAML validator

https://codebeautify.org/yaml-validator

Download binary

Difference between "docker compose" and "docker-compose"

  • Docker-compose is the original Python-based command-line tool that was released in 2014. Docker compose is a newer Go-based command-line tool that is integrated into the Docker CLI platform and supports the compose-spec. Docker compose is meant to be a drop-in replacement for docker-compose, but it may have some behavior differences and new features. Docker compose is currently a tech preview, but it will eventually replace docker-compose as the recommended way to use Compose.

Simple examples

Create a file docker-compose.yml and run docker-compose up after creating the file.

hello-world: 9kB

version: "3"
services:
  hello:
    image: hello-world

alpine: 7.73MB

version: "3"
services:
  server:
    image: alpine
    container_name: my_container
    command: sh -c "echo 'hello' && echo 'docker'"

Nginx: 135MB

mkdir src
echo "Hello world!" > src/index.html
version: "3"
services:
  client:
    image: nginx
    ports:
      - 8000:80
    volumes:
      - ./src:/usr/share/nginx/html

Composerize/convert a docker command into a docker compose file

An example from 'Fundamentals of Docker'

git clone https://github.com/fundamentalsofdocker/labs.git
cd labs/ch08
docker-compose up
# Open http://localhost:3000/pet

The images do not show up :( The terminal output shows what has happened under the hood: the HTTP links for the images do not exist.

We can also run the application in the background

docker-compose up -d

To stop and clean up the application, Howto use docker-compose to Start, Stop, Remove Docker Containers

docker-compose down # Stop and remove containers, networks, images, and unnamed volumes
                    # defined in the docker-compose.yml file
# OR
docker-compose down -v # similar to above but remove named volumes defined in yml file
# OR
docker-compose stop && docker-compose rm -f
docker-compose rm -v

If we also want to remove the volume for the database

docker volume rm ch08_pets-data

An example from "How to Setup NGINX as Reverse Proxy Using Docker"

See here. Only nginx is used.

An example from "Docker Deep Dive" (flask + redis)

Note that on Get started with Docker Compose it mounts the current directory to /code inside the container. So after we modify app.py, we don't need to copy it to the container.

Another one Docker compose tutorial for beginners by example

$ git clone https://github.com/nigelpoulton/counter-app.git
$ cd counter-app
$ ls
app.py  docker-compose.yml  Dockerfile  README.md  requirements.txt

$ cat requirements.txt 
flask

$ cat Dockerfile
FROM python:3.4-alpine
ADD . /code
WORKDIR /code
RUN pip install -r requirements.txt
CMD ["python", "app.py"]

$ cat docker-compose.yml 
version: "3.5"
services:
  web-fe:
    build: .
    command: python app.py
    ports:
      - target: 5000
        published: 5000
    networks:
      - counter-net
    volumes:
      - type: volume
        source: counter-vol
        target: /code
  redis:
    image: "redis:alpine"
    networks:
      counter-net:

networks:
  counter-net:

volumes:
  counter-vol:

$ docker-compose up &

$ docker container ls

$ docker network ls
NETWORK ID          NAME                     DRIVER              SCOPE
2acef6dabde6        bridge                   bridge              local
a2d42bc482ff        counterapp_counter-net   bridge              local
e1e093b64282        host                     host                local
7ecd0a6a9ebd        none                     null                local

# Open the browser http://localhost:5000
$ docker-compose ps
       Name                      Command               State           Ports         
-------------------------------------------------------------------------------------
counterapp_redis_1    docker-entrypoint.sh redis ...   Up      6379/tcp              
counterapp_web-fe_1   python app.py                    Up      0.0.0.0:5000->5000/tcp

$ docker-compose stop
$ docker-compose ps
# We can see stopping a Compose app does not delete the application

$ docker container ls -a
$ docker-compose rm     # delete a stopped Compose app
                        # images, volumes and source code remain
$ docker-compose restart
                        # If you made changes to your Compose app since stopping,
                        # these changes will not appear in the restarted app.
                        # You need to re-deploy the app to get the changes.
$ docker-compose ps
$ docker-compose down   # stop and delete the app
                        # images, volumes and source code remain
$ docker-compose down --volumes # remove the data volume used by the Redis container
$ docker-compose up -d 
$ docker volume ls
$ docker-compose 

# We can make changes to files in the volume, from the host side,
# and have them reflected immediately in the app.
$ nano app.py   # do some changes
$ docker volume inspect counterapp_counter-vol | grep Mount
$ sudo cp app.py \
  /var/lib/docker/volumes/counterapp_counter-vol/_data/app.py
# Our changes should be reflected 

$ docker-compose --help

Create Compose Files From Running Docker Containers

How to Automatically Create Compose Files From Running Docker Containers
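One commonly used tool for this is Red5d/docker-autocompose (an assumption about which tool the linked article uses); it inspects a running container through the Docker socket and prints a compose definition:

# Generate a compose definition for an existing container named "my_container"
docker run --rm -v /var/run/docker.sock:/var/run/docker.sock \
  ghcr.io/red5d/docker-autocompose my_container > docker-compose.yml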

Docker-Compose persistent data MySQL

https://stackoverflow.com/questions/39175194/docker-compose-persistent-data-mysql

Connect to Docker daemon over ssh using docker-compose

#DockerTips: Connect to Docker daemon over ssh using docker-compose
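A minimal sketch, assuming a docker-compose version new enough to understand ssh:// in DOCKER_HOST and key-based SSH access to the remote host:

# Point the local docker-compose at a remote daemon over SSH
export DOCKER_HOST="ssh://user@remote-host"
docker-compose up -d        # containers are created on remote-host, not locally
docker-compose ps
unset DOCKER_HOST           # go back to the local daemon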

Dockerfile + docker-compose

Docker Compose vs. Dockerfile - which is better?

The Compose file describes the container in its running state, leaving the details on how to build the container to Dockerfiles.
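A hedged sketch of how the two fit together (file and service names are assumptions): the Dockerfile says how to build the image, while the Compose file says how the resulting container runs.

cat > docker-compose.yml <<'EOF'
version: "3"
services:
  app:
    build: .                     # build the image from ./Dockerfile
    image: myname/myapp:latest   # tag to give the built image
    ports:
      - "8080:8080"
    restart: unless-stopped      # runtime behaviour lives here, not in the Dockerfile
EOF
docker-compose up -d --build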

How to deploy on remote Docker hosts with docker-compose

How to deploy on remote Docker hosts with docker-compose
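A sketch using Docker contexts (names are assumptions), which is one way to express the linked article's idea; the ssh:// DOCKER_HOST shown earlier works as well:

# Register the remote host once, then deploy the local compose file there
docker context create myremote --docker "host=ssh://user@remote-host"
docker context use myremote
docker compose up -d          # Compose V2 runs against the selected context
docker context use default    # switch back to the local daemon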

logs

docker-compose logs -f
# Ctrl + c

GUI/TUI interface manager

Dry

Dry – An Interactive CLI Manager For Docker Containers. The TUI is built on top of termui, a cross-platform, easy-to-compile, and fully-customizable terminal dashboard inspired by blessed-contrib but written purely in Go.

LazyDocker (TUI)

Dockly (TUI)

Dockly – Manage Docker Containers From Terminal

DockStation

It is not open source. It works with remote Docker containers.

DockSTARTer: get started with home server apps running in Docker

Portainer* (nice)

IP address 0.0.0.0

How to setup ip address in portainer to access containers? Go to Environments > local (or whatever your environment is named) and set your public IP there.
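Portainer itself typically runs as a container with the Docker socket mounted. A minimal sketch of starting the Community Edition (image tag, port, and volume name follow the common instructions and are treated as assumptions here):

docker volume create portainer_data
docker run -d --name portainer --restart=always \
  -p 9000:9000 \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v portainer_data:/data \
  portainer/portainer-ce
# Then open http://<docker host ip>:9000 and create the admin account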

Templates

Yacht

cockpit-docker

sudo apt-get -y install cockpit-docker

sudo systemctl restart cockpit

DockerUI (Deprecated, Development continues at Portainer)

https://github.com/kevana/ui-for-docker. A quick start:

  1. Run:
    docker run -d -p 9000:9000 --privileged \
        -v /var/run/docker.sock:/var/run/docker.sock uifd/ui-for-docker
    where -v means to bind mount a volume.
  2. Open your browser to http://<dockerd host ip>:9000

Note: Anyone in the local network can access the website without any authentication.

Rancher

$ sudo apt-get install ufw
$ sudo ufw allow 4500/udp
$ sudo ufw allow 500/udp
  • discoposse.com
    • Part 1 Installing Rancher and Setting Access Control
    • Part 2 Adding a Docker Host to Rancher
    • Part 3 Adding the DockerHub to our Rancher Registry
    • Part 4 Using the Catalog Example with GlusterFS

Seagull

https://youtu.be/TuT5gb8oRw8

docker run -d -p 127.0.0.1:10086:10086 -v /var/run/docker.sock:/var/run/docker.sock tobegit3hub/seagull

The only issue is that there is no username/password to prevent other people from accessing the web GUI. Binding to localhost restricts access, but that does not work for remote administration.

That is, the tool is suitable for home use.

Kitematic (Mac, Windows and Ubuntu)

Owned by Docker. Available for Mac OS X 10.8+ and Windows 7+ (64-bit) and Ubuntu. https://github.com/docker/kitematic/releases/

Run containers through a simple, yet powerful graphical user interface.

It cannot connect to remote docker machines.

Share your Shiny Apps with Docker and Kitematic!

Shipyard (retired)

VS Code

Applications

Docker Applications

CasaOS

Every app runs as a Docker container.
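For reference, the project advertises a one-line installer (URL per the CasaOS site; review the script before piping it to a shell):

curl -fsSL https://get.casaos.io | sudo bash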

Orchestrator

Kubernetes

Kubernetes vs Docker Swarm

k3s: Lightweight Kubernetes

Run Kubernetes on a Raspberry Pi with k3s
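A minimal sketch of the documented one-line install (run on the Pi itself; URL per the k3s project):

curl -sfL https://get.k3s.io | sh -     # installs and starts the k3s service
sudo k3s kubectl get nodes              # the node should report Ready after a minute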

Other containers

Singularity and HPC systems

  • Old URL at singularity.lbl.gov
  • Singularity enables users to have full control of their environment; Singularity containers let users run applications in a Linux environment of their choosing. No 'sudo' is needed in general unless you want to build a container from a recipe.
  • Containers are more like an executable file for you to use
  • Containers are stored under the current location. There is no central location (like /var/lib/docker when we use Docker) where images are kept.
  • Can convert Docker containers to Singularity and run containers directly from Docker Hub
  • These bind points cannot be created unless the path already exists within the container. To ensure access to these storage spaces and avoid bind-point errors, create the directories in the %post section of your definition (Bootstrap) file; see the sketch after this list.
  • Singularity Hub
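A minimal sketch of a definition file that pre-creates bind points in %post (base image, file names, and host paths are assumptions):

# mybind.def -- create the mount points at build time so --bind works later
cat > mybind.def <<'EOF'
Bootstrap: docker
From: ubuntu:20.04

%post
    mkdir -p /data /scratch      # directories that will be used as bind points
EOF
sudo singularity build mybind.sif mybind.def
singularity exec --bind /host/data:/data mybind.sif ls /data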

Ref:

Comparison of docker and singularity commands:

Pull an image
  docker:       $ docker pull ubuntu:latest
                $ docker pull broadinstitute/gatk3:3.8-0
  singularity:  $ singularity pull docker://ubuntu:latest
                $ singularity pull docker://broadinstitute/gatk3:3.8-0

Build an image
  docker:       $ docker build -t myname/myapp:latest -f Dockerfile .
  singularity:  $ singularity build myapp.sif myapp.def

Interactive shell
  docker:       (no "docker shell" command exists)
  singularity:  $ singularity shell docker://broadinstitute/gatk3:3.8-0
                $ singularity shell gatk3-3.8-0.img
                > ls       # the default location depends on the host system
                > ls /usr  # this is from the container
                $ singularity shell --bind ~/Downloads:/mnt XXX.img
                $ singularity shell docker://ubuntu:latest   # container is ephemeral

Run a container
  docker:       $ docker run --name test -it ubuntu date
                # The next example is similar to 'singularity exec'
                $ docker run --rm -i -t \
                    -v $(pwd):/usr/my_data \
                    broadinstitute/gatk3:3.8-0 \
                    bash /usr/my_data/myscript.sh
  singularity:  $ singularity run gatk3-3.8-0.img date

Execute a command in a container (most useful)
  docker:       $ docker run --name ubuntu_bash --rm -i -t ubuntu bash
                $ docker exec -d ubuntu_bash touch /tmp/execWorks
  singularity:  $ singularity exec gatk3-3.8-0.img java -version
                $ singularity exec xxx.img cat /etc/*release
                $ singularity exec docker://rocker/tidyverse:latest R
                $ singularity exec docker://rocker/tidyverse:latest Rscript myScript.R

Cache

When we run singularity exec docker://rocker/tidyverse:latest R, Singularity saves the downloaded layers and the converted image in a cache on our system.

It seems to be OK to manually delete the directory $HOME/.singularity (tested on Biowulf).
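Newer Singularity/Apptainer releases also provide cache subcommands, which are a cleaner alternative to deleting the directory by hand (availability depends on the installed version):

singularity cache list        # show what is cached and how much space it uses
singularity cache clean       # remove cached docker/OCI layers and SIF images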

RStudio

$ singularity exec docker://rocker/tidyverse:latest R
$ singularity exec docker://rocker/tidyverse:latest Rscript myScript.R

Shifter

Conda

Anaconda

Bioconda

Using docker to install conda (https://conda.io/docs/user-guide/tutorials/index.html)

$ docker run -t -i --name test --net=host ubuntu bash
# apt-get update
# apt-get install -y wget bzip2 python
# wget https://repo.continuum.io/miniconda/Miniconda2-latest-Linux-x86_64.sh
# wget https://repo.continuum.io/archive/Anaconda2-5.1.0-Linux-x86_64.sh
# bash Miniconda2-latest-Linux-x86_64.sh
# bash Anaconda2-5.1.0-Linux-x86_64.sh
# exit

$ docker start test 
$ docker exec -i -t test bash
# conda list  # WORKS!
# conda config --add channels r
# conda config --add channels defaults
# conda config --add channels conda-forge
# conda config --add channels bioconda
# conda install bwa  (Segmentation fault. Core dumped)
# which bwa
/root/anaconda2/bin/bwa
# conda install r   (Only get 3.4.2 but the latest is 3.4.3.)
# conda install bowtie
# bowtie --version
# conda install gatk (https://bioconda.github.io/recipes/gatk/README.html)
   (Due to license restrictions, this recipe cannot distribute and install GATK directly)
   (R is downgraded to 3.2.2:( )
   (Segmentation fault. Core dumped)
# exit
$ docker stop test
$ docker rm test

Get miniconda image instead of using a Ubuntu image

$ docker pull continuumio/miniconda
$ docker run -i -t continuumio/miniconda /bin/bash
# conda install r   (get 3.4.2)
# conda config --add channels bioconda
# conda install bwa  (OK, no error)
# conda install gatk  (R was downgraded to 3.2.2, install openjdk 8.0.121)
# which gatk
/opt/conda/bin/gatk
# gatk -h
GATK jar file not found. Have you run "gatk-register"?

Issues:

  • The R version is not up to date.
  • The problem is that installing GATK requires an R installation, and the existing R was downgraded in the process.

CoreOS

Installation

We first boot a live CD from any OS (CentOS works, but Ubuntu 16.04 gave errors). In VirtualBox, choose 'Red Hat' as the type if we use CentOS.

Once the VM is created, go to its settings. Create a bridged or host-only network first (although we can get files from the host even without a host-only network). Under Storage, choose the CentOS-7 ISO.

  1. Get the install script from GitHub, save it as <coreos_install.sh>, and chmod +x it.
  2. Create a <cloud-config.yaml> file that includes ssh_authorized_keys generated from another machine. It should also contain a new discovery token for the cluster obtained from https://discovery.etcd.io/new (see the sketch after this list).
  3. ls -l /dev/sd*
  4. Run sudo ./coreos_install.sh -d /dev/sda -C stable -c cloud-config.yaml. It will download the latest stable CoreOS and install it to the hard disk.
  5. Don't leave the VM or it will freeze. Issue sudo shutdown -h now once we see the word 'Success' on the last line of the output.
  6. Remove the CentOS ISO from the VM storage. Boot the CoreOS VM.
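The discovery token in step 2 was obtained from the public etcd discovery service, roughly like this (the service belongs to the now-retired CoreOS Container Linux era, so treat it as historical):

# Request a new discovery URL for a cluster of the given size
curl -w "\n" 'https://discovery.etcd.io/new?size=1'
# Paste the returned URL into the coreos/etcd/discovery field of cloud-config.yaml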

The new screen shows the corebm1 login prompt with an IP. Go back to another machine and type ssh -i /tmp/CoreOSBM_rsa core@<IP>. Inside CoreOS, we can type docker images.

The cloud-config.yaml file has to follow the format in https://coreos.com/os/docs/latest/cloud-config.html. Use the online validator https://coreos.com/validate/ to check it. At first I used the file from the YouTube video; no error came out when I ran the installation script, but I could not connect to CoreOS. The cloud-config.yaml file I use is shown below (pay attention to the '-', the double quotes, and the indentation characters):

#cloud-config
#
# set hostname
hostname: CoreBM1

# Set ssh key
ssh_authorized_keys:
  - "ssh-rsa AAAAB3 ..... brb@T3600"

coreos:
  etcd:
    discovery: "https://discovery.etcd.io/d3e95 .... "
# sudo ./installos -d /dev/sda -C stable -c cloud-config.yaml

CoreOS exploration

brb@T3600 /tmp $ ssh -i /tmp/id_rsa core@<IP>
Enter passphrase for key '/tmp/id_rsa':
CoreOS stable (1010.6.0)
core@CoreBM1 ~ $
core@CoreBM1 ~ $ docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
core@CoreBM1 ~ $ df -h
Filesystem      Size  Used Avail Use% Mounted on
devtmpfs        1.4G     0  1.4G   0% /dev
tmpfs           1.4G     0  1.4G   0% /dev/shm
tmpfs           1.4G  340K  1.4G   1% /run
tmpfs           1.4G     0  1.4G   0% /sys/fs/cgroup
/dev/sda9        18G   23M   17G   1% /
/dev/sda3       985M  589M  345M  64% /usr
tmpfs           1.4G     0  1.4G   0% /media
/dev/sda1       128M   37M   92M  29% /boot
tmpfs           1.4G     0  1.4G   0% /tmp
/dev/sda6       108M   52K   99M   1% /usr/share/oem
core@CoreBM1 ~ $ free -m
             total       used       free     shared    buffers     cached
Mem:          2713        187       2525          0          9        109
-/+ buffers/cache:         68       2644
Swap:            0          0          0
core@CoreBM1 ~ $ lsb_release -a
-bash: lsb_release: command not found
core@CoreBM1 ~ $ docker pull ubuntu
Using default tag: latest
latest: Pulling from library/ubuntu
f069f1d21059: Pull complete
ecbeec5633cf: Pull complete
ea6f18256d63: Pull complete
54bde7b02897: Pull complete
Digest: sha256:bbfd93a02a8487edb60f20316ebc966ddc7aa123c2e609185450b96971020097
Status: Downloaded newer image for ubuntu:latest
core@CoreBM1 ~ $ docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
ubuntu              latest              0f192147631d        7 days ago          132.7 MB
core@CoreBM1 ~ $

CoreOS cluster discovery

https://coreos.com/os/docs/latest/cluster-discovery.html

etcd

fleet

TryGhost

https://github.com/TryGhost/Ghost

Firecracker

Firecracker: start a VM in less than a second

Self-hosting

Tools and Resources for Self-Hosting

Linux in browser

Podman

  • Podman Installation Instructions
    • How To Install Podman Desktop In Linux
    • Raspberry Pi OS uses the standard Debian repositories, so it is fully compatible with Debian's arm64 repository. You can simply follow the steps for Debian to install Podman.
  • Podman vs docker:
    • One of the main differences between Podman and Docker is their architecture. Docker uses a client-server architecture with a central daemon that manages containers. In contrast, Podman is daemonless and uses a fork-exec model to manage containers.
    • Podman is designed to run containers without requiring root privileges or the use of sudo. This is one of the key differences between Podman and Docker, as Docker requires root privileges to run containers.
    • Both Podman and Docker are compatible with the Open Container Initiative (OCI) container specification, which means that they can run the same container images. However, Podman is more closely aligned with Kubernetes and its native container runtime, while Docker also works with its own orchestration tool, Docker Swarm.
    • Podman provides several benefits over Docker. With Docker, if the daemon crashes, the containers are left in an uncertain state; Podman avoids this problem by being daemonless. You can also use systemd to manage your containers with Podman, which gives you virtually unlimited configurability compared to Docker. Hooking Podman up to systemd lets you update running containers with minimal downtime and recover from bad updates (see the sketch after this list).
  • Podman is a project from Red Hat
  • Getting Started With Podman Desktop, an Open Source Docker Desktop Alternative
  • Podman Compose - Managing Containers
pip3 install podman-compose
But compatibility seems to be an issue, even for a small example based on the alpine image.
  • Nginx example (works)
podman run -it --rm -d -p 8080:80 \
  --name web \
  -v /mnt/Podman/site-content:/usr/share/nginx/html \
  docker.io/library/nginx
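As a sketch of the systemd integration mentioned above (unit and container names are assumptions; recent Podman versions also offer Quadlet as a newer alternative), Podman can generate a user-level unit for an existing container:

# Generate a user-level systemd unit for the container named "web" started above
podman generate systemd --new --files --name web   # writes ./container-web.service
mkdir -p ~/.config/systemd/user
mv container-web.service ~/.config/systemd/user/
systemctl --user daemon-reload
systemctl --user enable --now container-web.service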

Resource

Internet

Books

Blogs

Tips/trouble shooting

Play with Docker (PWD)

  • Some applications I've tested.
    • webtop (OK)
    • r-base:3.6.3, r-base:4.1.0, r-base:4.1.1 (OK)
    • r-base:4.1.2, r-base:4.2.0 (ERROR: R_HOME ('/usr/lib/R') not found). Maybe the docker version there is too old.

Alternatives

The 9 Best Docker Alternatives for Container Management

Serverless computing