Docker
Full Title or Meme
Docker is a system for building, deploying, and running container images that package a program together with its runtime environment.
Context
- With the rise of cloud computing, the need arose to give users an easy way to create a run-time package that could be sent to any cloud Platform as a Service (PaaS) provider with complete interoperability.
Documentation
Docker was released in 2013 and solved many of the problems that developers had running containers end-to-end. It provided all of the following:
- A container image format
- A method for building container images (Dockerfile/docker build)
- A way to manage container images (docker images, docker rmi, etc.)
- A way to manage instances of containers (docker ps, docker rm, etc.)
- A way to share container images (docker push/pull)
- A way to run containers (docker run)
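As a quick illustration of those commands, here is a minimal sketch of the build-run-share cycle; the image name myaccount/myapp is an assumption, not something from this page:
docker build -t myaccount/myapp:1.0 .              # build an image from the Dockerfile in the current directory
docker images                                      # list local images
docker run -d --name myapp myaccount/myapp:1.0     # start a container from the image
docker ps                                          # list running containers
docker push myaccount/myapp:1.0                    # share the image through a registry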
The Open Container Initiative addresses some of the features needed to deploy a complex Docker container.
- Open Container Initiative (OCI)
- OCI runtime specification.
- runc, a tool and library for running containers, donated by Docker as the reference implementation of the runtime specification
When you run a Docker container, these are the steps Docker actually goes through:
- Download the image
- Unpack the image into a "bundle". This flattens the layers into a single filesystem.
- Run the container from the bundle
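Roughly the same steps can be run as separate commands; a sketch using the public hello-world image:
docker pull hello-world                # download the image
docker create --name hw hello-world   # prepare a container (its filesystem is assembled from the image layers)
docker start -a hw                     # run the container and attach to its output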
Windows Subsystem for Linux
- WSL 2 is available on Windows 10 version 2004 and later
- Wiki page on Docker Container with Visual Studio Code
- A good guide for using WSL.
- To migrate from WSL to WSL 2, key in (from an elevated PowerShell prompt): Enable-WindowsOptionalFeature -Online -FeatureName VirtualMachinePlatform
- Successful output shows Path : (blank), Online : True, RestartNeeded : False. The remaining migration steps are sketched below.
- For Ubuntu the hard disk image can be found at a location similar to: C:\Users\rp_to\AppData\Local\Packages\CanonicalGroupLimited.UbuntuonWindows_79rhkp1fndgsc
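A sketch of the remaining WSL 2 migration steps from PowerShell; the distro name Ubuntu is just an example:
wsl --list --verbose          # show installed distros and which WSL version each uses
wsl --set-version Ubuntu 2    # convert an existing distro to WSL 2
wsl --set-default-version 2   # make WSL 2 the default for newly installed distros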
Networking
Outbound networking is always enabled. If you append -P (or --publish-all=true) to docker run, Docker identifies every port the Dockerfile exposes (you can see which ones by looking at the EXPOSE lines). Docker also finds ports you expose with --expose 8080 (assuming you want to expose port 8080). Docker maps all of these ports to a host port within a given ephemeral port range. You can find the configuration for that range (usually 32768 to 61000) in /proc/sys/net/ipv4/ip_local_port_range. This is the method that Visual Studio uses when debugging in Docker. A short example follows the list below.
- To see the mapped ports, key in: docker port {container #}
- They are also displayed for all containers with: docker ps
- To list all networks key: docker network ls
- To see details about a network key: docker network inspect {network number from above list}
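A short example of publishing and inspecting ports; the nginx image and the container name web are only for illustration:
docker run -d -P --name web nginx   # publish every EXPOSEd port to a random host port
docker port web                      # show the container-to-host port mappings
docker ps                            # the same mappings appear in the PORTS column
docker network inspect bridge        # details of the default bridge network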
Container
Lifecycle
- Containers start as images; images are built using the instructions in the Dockerfile.
- Run a container to start it, then exec commands on containers that are already running (commonly just exec bash on a container to get a prompt inside its environment to poke around).
- docker-compose is a tool to manage containers, or groups of containers. To run containers, use docker-compose run just as you would docker run an image, but most of the parameters you would pass to docker run are specified in the docker-compose.yml file.
- docker-compose run will only start a single container from a compose file, so docker-compose up is more often used to start all the containers in the compose file. docker-compose up will also build any images it needs to start the containers it has been asked to start.
- docker-compose exec <container> allows commands on the already running containers, and lets docker-compose address containers by their service names without having to figure out the automatic name each was given.
- A good pattern is to use docker-compose commands throughout; there is not much you need to do with docker itself this way. docker-compose is just a syntax tool to help run docker commands. A minimal compose file and the corresponding commands are sketched below.
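A minimal sketch of a docker-compose.yml and the matching commands; the service name web and its port mapping are assumptions for illustration:
version: "3.8"
services:
  web:
    build: .
    ports:
      - "8080:80"
With that file in the current directory:
docker-compose up -d           # build (if needed) and start all services
docker-compose exec web bash   # get a prompt inside the running web service
docker-compose down            # stop and remove the containers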
Orchestration
- Orchestration is optional for single Docker images; it is most important for deploying multiple apps to a single server or server farm.
- The portability and reproducibility of a containerized process provide an opportunity to move and scale containerized applications across clouds and server farms. Containers effectively guarantee that those applications run the same way anywhere, taking advantage of any server environment. There is tooling to help automate the maintenance of those applications, replace failed containers automatically, and manage the rollout of updates and reconfigurations of those containers during their lifecycle. Tools to manage, scale, and maintain containerized applications are called orchestrators; the most common examples are Kubernetes and Docker Swarm. Development-environment deployments of both of these orchestrators are provided by Docker Desktop.
- An Overview of Docker Compose describes a tool for defining and running multi-container Docker applications. With Compose, you use a YAML file to configure your application’s services. Then, with a single command, you create and start all the services from your configuration. The ordinary interface for the user is the docker-compose command line interface (CLI).
Services and Swarms
- Docker-compose introduces the concept of services.
- What is the difference between Docker Service and Docker Container?
Docker services can be used when the manager node is configured with Docker Swarm, so that containers run in a distributed environment and can be easily managed.
- Docker run (used to create a standalone container): The docker run command first creates a writeable container layer over the specified image, and then starts it using the specified command. That is, docker run is equivalent to the API /containers/create then /containers/(id)/start. (Source: https://docs.docker.com/engine/reference/commandline/run/#parent-command)
- Docker service: A Docker service will be the image for a microservice within the context of some larger application. Examples of services might include an HTTP server, a database, or any other type of executable program that you wish to run in a distributed environment. When you create a service, you specify which container image to use and which commands to execute inside running containers. (Source: https://docs.docker.com/engine/swarm/how-swarm-mode-works/services/#services-tasks-and-containers) You also define options for the service, including:
- the port where the swarm will make the service available outside the swarm
- an overlay network for the service to connect to other services in the swarm
- CPU and memory limits and reservations
- a rolling update policy
- the number of replicas of the image to run in the swarm
- docker service is the new docker run
- Docker service create: is used to create instances (called tasks) of that service running in a cluster (called a swarm) of computers (called nodes). Those tasks are containers of course, but not standalone containers. In a sense, a service acts as a template when instantiating tasks.
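A sketch of the swarm-mode equivalents; the nginx image and the service name web are examples only:
docker swarm init                                                 # make this node a swarm manager
docker service create --name web --replicas 3 -p 8080:80 nginx   # run three tasks of the service across the swarm
docker service ls                                                 # list services
docker service ps web                                             # list the tasks (containers) behind the service
docker service update --image nginx:1.25 web                      # rolling update to a new image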
File Storage
- How to run ASP.NET Core 3.1 over HTTPS in Docker using Linux Containers
- Docker Docs - use volumes. Volumes are the preferred mechanism for persisting data generated by and used by Docker containers. While bind mounts are dependent on the directory structure and OS of the host machine, volumes are completely managed by Docker. Unless data needs to also be managed by the underlying OS (as is the case when debugging in an IDE), it is recommended to use volumes over bind mounts.
- Use rsync commands to move files between the dev machine and a running Linux container instance.
- Docker bind mounts for files on the core OS (see the sketch below).
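A sketch contrasting a named volume, a bind mount, and copying a single file into a running container; the names appdata, dev, and myimage are assumptions:
docker volume create appdata                                   # a Docker-managed volume
docker run -d --name db -v appdata:/var/lib/data myimage       # mount the volume into a container
docker run -d --name dev --mount type=bind,source="$(pwd)"/src,target=/app myimage   # bind mount a host directory
docker cp ./settings.json dev:/app/settings.json               # copy one file into a running container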
Support for TLS
- Most of the difficulty is in getting the file mounts correct (see the sketch below).
- Use of HTTPS with Docker images in ASP.NET.
- mac verify error = Certificate Exception
- See wiki page on Let's Encrypt for instructions on renewing certificate
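A hedged sketch of running an ASP.NET Core image over HTTPS with a mounted certificate; the image name, certificate file name, and password are placeholders, not values from this page:
docker run -d -p 8080:80 -p 8443:443 \
  -e ASPNETCORE_URLS="https://+:443;http://+:80" \
  -e ASPNETCORE_Kestrel__Certificates__Default__Path=/https/mycert.pfx \
  -e ASPNETCORE_Kestrel__Certificates__Default__Password=changeit \
  -v $HOME/.aspnet/https:/https:ro \
  myaspnetapp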
Troubleshoot in Docker on Linux
- How to stop a running Container Cannot kill -- permission denied (suggestion that apparmor is blocking).
- Troubleshooting an ASP.NET Core App Running in Docker deals with configuration environments in Development and Production. One place where errors like "dbPath is null" can occur is when the appsettings.{env}.json file name is not spelled correctly. Note that the file naming convention in Linux is case sensitive and in Windows it is not. This means that if env = "development" and the file name is appsettings.Development.json, the configuration will be correctly retrieved in Windows, but not in Linux.
Practices
This section is specific to Visual Studio 2019 and later.
- Start the project as a Docker project, or go to the startup project and Add->Docker Support.
- Add orchestration in the same project by Add->Container Orchestrator Support. This will build a new folder under the VS solution folder called "docker-compose".
- That will enable solutions that have more than one startup project.
- If the application uses User Secrets, there might be a problem deploying it to Production (see the compose sketch after this list).
- Ditto with root/.aspnet/https/---.pfx
- Deploying Dockerized .NET Apps Without Being a DevOps Guru 2019-08-13 Docker blog
- How To Use Rsync to Sync Local and Remote Directories 2020-11-18 DigitalOcean
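For the User Secrets and https .pfx issues above, the compose override that Visual Studio generates typically bind-mounts the host folders into the container; a sketch of what docker-compose.override.yml can contain on a Windows host (the service name myapp is an assumption):
services:
  myapp:
    environment:
      - ASPNETCORE_ENVIRONMENT=Development
    volumes:
      - ${APPDATA}/Microsoft/UserSecrets:/root/.microsoft/usersecrets:ro
      - ${APPDATA}/ASP.NET/Https:/root/.aspnet/https:ro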
Fetching changes from Github
Download a repo from Github (cloning)
- Go to the GitHub repo, click on the green "Code" button, and copy the link that displays. For example:
https://github.com/TomCJones/RegistryDemo.git
- Open Terminal.
- Change the current working directory to the location where you want the cloned directory.
- Type git clone, and then paste the URL you copied earlier.
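For example, using the repo URL above:
cd ~/projects   # or wherever the clone should live (the path is just an example)
git clone https://github.com/TomCJones/RegistryDemo.git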
Update a repo from GitHub
git fetch origin
git reset --hard origin/master
git clean -n -f to see what files will be removed (without actually removing them)
git clean -f to remove local files that might have been added but are no longer needed
Handling Data Files
- This section is about files that need to be loaded into a service that should not be included with the program for some reason.
If the files to be loaded are on a GitHub account, the easiest way is to use curl. The token can be obtained by signing into GitHub and viewing the raw file; then cut and paste the URL that results in returning the data. Note that single quotes are placed around the URL because of the "?"; the quoting is only needed on some versions of zsh.
curl -H 'UserAgent:Mozilla' -H 'Accept: application/vnd.github.v3.raw' -L 'https://raw.githubusercontent.com/TomCJones/tcdata/main/trorgpw.txt?token=ACWGVVTTTFA3S4PSADLBCXS76TVLG' -o trorgpw.txt
curl -H 'UserAgent:Mozilla' -H 'Accept: application/vnd.github.v3.raw' -L 'https://raw.githubusercontent.com/TomCJones/tcdata/main/trorg-210307.pfx?token=ACWGVVTTTFA3S4PSADLBCXS76TVLG' -o trorg-210307.pfx
or just
curl -L 'https://raw.githubusercontent.com/TomCJones/tcdata/main/trorg-210307.pfx?token=ACWGVVV7TPKPZKZVCXBYYMK76X5XG' -o Trustregistry.pfx
Note that tokens are only good for a day!
- Download single files from GitHub from Stack Overflow.
Troubleshooting on Windows
- How to stop a running Container Cannot kill -- permission denied (one suggestion is that apparmor from Crispin is blocking).
Containers
- Configuration of the Containers window is reached by using the search box in the IDE (press Ctrl+Q to use it), typing in container, and choosing the Containers window from the list.
- Container Tools launch settings in msft docs (a sketch of a Docker profile appears below).
- Visual Studio Find does not search this file, so you need to know about its contents.
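A hedged sketch of a Docker profile in launchSettings.json; the property values are illustrative, see the Container Tools launch settings page above for the authoritative list:
"Docker": {
  "commandName": "Docker",
  "launchBrowser": true,
  "launchUrl": "{Scheme}://{ServiceHost}:{ServicePort}",
  "publishAllPorts": true,
  "useSSL": true
}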
Nav (left) panel
- Solution Containers
- List of containers, most with weird made-up names. (BTW, the current container name will be in the page footer if a terminal is open.)
- Other Containers
Nav (top) bar
- Environment
- Ports - contains the mapping, for example, from container port (80) to host port (32775) and type (TCP); also 443 -> 32774 (see Networking above)
- Logs
- Files
References
- Docker docs, but note that the Docker team only deals with the low level formats.
- Command line interface reference for doctl, the DigitalOcean controller.
- Docker CLI Cheat Sheet 2020-11-22
- Docker for Windows on GitHub with issues.
- a four-part series on Container Runtimes describes low-level versus high-level runtimes well.
- Getting Started with Docker in Visual Studio 2019
- Docker Container Tools on docs.msft
- GitHub Docker for Windows Issues