From MgmtWiki
Revision as of 17:26, 18 June 2021 by Tom (talk | contribs) (Services and Swarms)


Full Title or Meme

Docker is a system for building, deploying and running images that package a program together with its runtime.


  • With the rise of cloud computing, the need arose to give users an easy way to create a run-time package that could be sent to any cloud Platform as a Service (PaaS) provider with complete interoperability.


Docker was released in 2013 and solved many of the problems that developers had running containers end-to-end. It provided all of these things:

  1. A container image format
  2. A method for building container images (Dockerfile/docker build)
  3. A way to manage container images (docker images, docker rmi, etc.)
  4. A way to manage instances of containers (docker ps, docker rm, etc.)
  5. A way to share container images (docker push/pull)
  6. A way to run containers (docker run)
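These capabilities map onto the CLI roughly as follows; the image and registry names below are illustrative, and the commands assume a running Docker daemon:

```shell
docker build -t myuser/myapp:1.0 .          # 2. build an image from a Dockerfile
docker images                               # 3. list local images
docker rmi myuser/myapp:1.0                 # 3. remove an image
docker run -d --name app myuser/myapp:1.0   # 6. run a container
docker ps                                   # 4. list running containers
docker rm -f app                            # 4. remove a container instance
docker push myuser/myapp:1.0                # 5. share the image via a registry
docker pull myuser/myapp:1.0                # 5. fetch it somewhere else
```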

The Open Container Initiative (OCI) addresses some of the features needed to deploy a complex Docker container.

When you run a Docker container, these are the steps Docker actually goes through:

  1. Download the image
  2. Unpack the image into a "bundle". This flattens the layers into a single filesystem.
  3. Run the container from the bundle
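Equivalently, the three steps can be driven explicitly instead of through a single docker run; the image name here is illustrative and a running Docker daemon is assumed:

```shell
docker pull nginx:alpine                # 1. download the image
docker create --name web nginx:alpine   # 2. unpack it into a container bundle
docker start web                        # 3. run the container from the bundle
```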

Windows Subsystem for Linux

  • WSL is available on Windows 10 version 2004
  • A good guide for using WSL.
  • To migrate from WSL to WSL 2, key: Enable-WindowsOptionalFeature -Online -FeatureName VirtualMachinePlatform
 Path          :
 Online        : True
 RestartNeeded : False
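Once that feature is enabled (a reboot may be required), existing distributions can be converted with the wsl command from a PowerShell or cmd prompt; the distribution name below is only an example:

```shell
wsl --set-default-version 2        # make WSL 2 the default for new distros
wsl --list --verbose               # show each distro and its current WSL version
wsl --set-version Ubuntu-20.04 2   # convert an existing distro to WSL 2
```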


Outbound network ports are always enabled. If you append -P (or --publish-all=true) to docker run, Docker identifies every port the Dockerfile exposes (you can see which ones by looking at the EXPOSE lines). Docker also finds ports you expose with --expose 8080 (assuming you want to expose port 8080). Docker maps all of these ports to host ports within a given ephemeral port range. You can find the configuration for this range (usually 32768 to 61000) in /proc/sys/net/ipv4/ip_local_port_range. This is the method Visual Studio uses when debugging in Docker.

  • To see ports, key in: docker port {container id}
  • They are also displayed for all containers with: docker ps
  • To list all networks, key: docker network ls
  • To see details about a network, key: docker network inspect {network id from the above list}
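A sketch of the port-publishing commands above, assuming a hypothetical image named myimage and a running Docker daemon:

```shell
docker run -d --name web -P myimage   # publish every EXPOSEd port

docker port web        # show the host port each container port maps to
docker ps              # the PORTS column shows mappings for all containers
docker network ls      # list Docker networks
docker network inspect bridge   # details of the default bridge network

# The ephemeral range the host draws the mapped ports from:
cat /proc/sys/net/ipv4/ip_local_port_range
```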



  1. Containers start as images; images are built using the instructions in a Dockerfile.
  2. Run a container to start it, then exec commands on containers that are already running (commonly just exec bash on a container to get a prompt inside its environment to poke around).
  3. docker-compose is a tool to manage containers, or groups of containers. To run containers, use docker-compose run just as you would docker run an image, but most of the parameters you would pass to docker run are specified in the docker-compose.yml file.
  4. docker-compose run will only start a single container in a compose file, so docker-compose up is more often used to start all the containers in the compose file. docker-compose up will also build any images it needs to start the containers it has been asked to start.
  5. docker-compose exec <container> runs commands on the already running containers, and lets docker-compose address containers by their service names without having to figure out the automatic name each was given.
  6. A good pattern is to use docker-compose commands throughout; there is not a whole lot to do with docker itself this way. docker-compose is just a syntax tool to help run docker commands.
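A minimal sketch of that pattern, assuming hypothetical services named web and db and a running Docker daemon:

```shell
mkdir -p /tmp/compose-demo && cd /tmp/compose-demo

cat > docker-compose.yml <<'EOF'
version: "3.8"
services:
  web:
    image: nginx:alpine
    ports:
      - "8080:80"
  db:
    image: postgres:13
    environment:
      POSTGRES_PASSWORD: example
EOF

docker-compose up -d             # start (and build/pull) all services
docker-compose exec web sh       # run a shell inside the running web container
docker-compose run web nginx -v  # run a one-off command in a new container
docker-compose down              # stop and remove everything
```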


  • Optional for single docker images. Most important for deploying multiple apps to a single server or server farm.
  • The portability and reproducibility of a containerized process provide an opportunity to move and scale containerized applications across clouds and server farms. Containers effectively guarantee that those applications run the same way anywhere, taking advantage of any server environment. There is tooling to help automate the maintenance of those applications, the ability to replace failed containers automatically, and to manage the rollout of updates and reconfigurations of those containers during their lifecycle. Tools to manage, scale, and maintain containerized applications are called orchestrators, and the most common examples of these are Kubernetes and Docker Swarm. Development environment deployments of both of these orchestrators are provided by Docker Desktop.
  • An Overview of Docker Compose describes a tool for defining and running multi-container Docker applications. With Compose, you use a YAML file to configure your application’s services. Then, with a single command, you create and start all the services from your configuration. The ordinary interface for the user is the docker-compose command line interface (CLI).

Services and Swarms

  • Docker-compose introduces the concept of services.
  • What is the difference between Docker Service and Docker Container?
    Docker services can be used when the master node is configured with Docker Swarm, so that Docker containers run in a distributed environment and can be easily managed.
  • docker run (used to create a standalone container): The docker run command first creates a writeable container layer over the specified image, and then starts it using the specified command. That is, docker run is equivalent to the API calls /containers/create then /containers/(id)/start.
  • Docker service: a Docker service will be the image for a microservice within the context of some larger application. Examples of services might include an HTTP server, a database, or any other type of executable program that you wish to run in a distributed environment. When you create a service, you specify which container image to use and which commands to execute inside running containers. You also define options for the service, including:
    • the port where the swarm will make the service available outside the swarm
    • an overlay network for the service to connect to other services in the swarm
    • CPU and memory limits and reservations
    • a rolling update policy
    • the number of replicas of the image to run in the swarm
  • docker service is the new docker run
  • docker service create: used to create instances (called tasks) of a service running in a cluster (called a swarm) of computers (called nodes). Those tasks are containers, of course, but not standalone containers. In a sense, a service acts as a template when instantiating tasks.
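A sketch of creating such a service on a single-node swarm; the image, service name, and limits are illustrative, and a running Docker daemon is assumed:

```shell
docker swarm init                          # make this node a swarm manager

docker service create \
  --name web \
  --replicas 3 \
  --publish published=8080,target=80 \
  --limit-memory 256m \
  nginx:alpine

docker service ls          # list services in the swarm
docker service ps web      # list the tasks (containers) backing the service
docker service update --image nginx:1.21-alpine web   # rolling update
```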

File Storage

  • How to run ASP.NET Core 3.1 over HTTPS in Docker using Linux Containers
  • Docker Docs - use volumes. Volumes are the preferred mechanism for persisting data generated by and used by Docker containers. While bind mounts are dependent on the directory structure and OS of the host machine, volumes are completely managed by Docker. Unless the data also needs to be managed by the underlying OS (as is the case for debugging in an IDE), it is recommended to use volumes over bind mounts.
  • Use rsync commands to move files between the dev machine and a running Linux container instance.
  • Docker bind mounts for files on the core OS.
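A short sketch contrasting the two mechanisms; the names and paths are illustrative, and a running Docker daemon is assumed:

```shell
docker volume create appdata        # a named volume, fully managed by Docker

# Volume mount: survives container removal, lives under /var/lib/docker
docker run -d --name db -v appdata:/var/lib/postgresql/data postgres:13

# Bind mount: ties the container to a host directory (handy for IDE debugging)
docker run -d --name web -v "$PWD/site":/usr/share/nginx/html nginx:alpine

docker volume ls
docker volume inspect appdata
```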

Support for TLS

Troubleshoot in Docker on Linux


This section is specific to Visual Studio 2019 and later.

  1. Start the project as a Docker project, or go to the startup project and Add->Docker support.
  2. Add orchestration in the same project by Add->Container Orchestrator Support. This will build a new folder under the VS solution folder called "docker-compose".
    1. That will enable containers that have more than one startup project.
  3. If the application uses User Secrets, there might be a problem deploying it to Production.
  4. Ditto with root/.aspnet/https/---.pfx

Fetching changes from Github

Download a repo from Github (cloning)

  1. Go to the GitHub repo, click on the green button "Code", and copy the link that displays. For example:
  2. Open Terminal.
  3. Change the current working directory to the location where you want the cloned directory.
  4. Type git clone, and then paste the URL you copied earlier.
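The steps can be exercised end-to-end against a throw-away local repository standing in for GitHub; all paths and names here are illustrative:

```shell
set -e
rm -rf /tmp/clone-origin /tmp/clone-dest

# A local repository standing in for the GitHub repo whose link was copied.
git init -q /tmp/clone-origin
cd /tmp/clone-origin
echo "hello" > README.md
git add README.md
git -c user.name=demo -c user.email=demo@example.com commit -qm "initial"

# Steps 2-4: move to where the clone should live, then git clone the URL.
cd /tmp
git clone -q /tmp/clone-origin clone-dest
```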

Update a repo from GitHub

 git fetch origin
 git reset --hard origin/master

To see what files will be removed (without actually removing them):

 git clean -n -f

To remove local files that might have been added but are no longer needed:

 git clean -f
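The recipe can be tried safely with two throw-away local repositories in place of a real GitHub remote; the paths are illustrative:

```shell
set -e
rm -rf /tmp/demo-origin /tmp/demo-clone

# Stand-in for the GitHub repository.
git init -q /tmp/demo-origin
cd /tmp/demo-origin
echo "v1" > app.txt
git add app.txt
git -c user.name=demo -c user.email=demo@example.com commit -qm "initial"
git branch -M master                 # ensure the branch is named master

# Clone it, then make local changes we want to throw away.
git clone -q /tmp/demo-origin /tmp/demo-clone
cd /tmp/demo-clone
echo "local edit" >> app.txt         # uncommitted modification
echo "scratch" > untracked.txt       # untracked file

# The update recipe from this section:
git fetch -q origin
git reset -q --hard origin/master    # discard local commits and edits
git clean -f                         # remove untracked files
```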

Handling Data Files

  • This section is about files that need to be loaded into a service that should not be included with the program for some reason.

If the files to be loaded are on a GitHub account, the easiest way is to use curl. The token can be generated by signing into GitHub and using the shortened URL; then cut and paste the URL that results in returning the data. Note that single quotes are placed around the URL because of the "?", which would otherwise be interpreted by the shell; this is only needed on some versions of zsh.

 curl -H 'User-Agent: Mozilla' -H 'Accept: application/vnd.github.v3.raw' -L ''  -o trorgpw.txt
 curl -H 'User-Agent: Mozilla' -H 'Accept: application/vnd.github.v3.raw' -L ''  -o trorg-210307.pfx
or just
 curl -L '' -o Trustregistry.pfx

Note that tokens are only good for a day!

Troubleshooting on Windows


  • The Containers window is reached by using the search box in the IDE (press Ctrl+Q to use it), typing in container, and choosing the Containers window from the list.
  • Container Tools launch settings in msft docs.
  • Visual Studio Find does not search this file so you need to know about its contents.

Nav (left) panel

  1. Solution Containers
    1. List of containers, most with weird made-up names. (BTW, the current container name will be in the page footer if a terminal is open.)
  2. Other Containers

Nav (top) bar

  1. Environment
  2. Ports - contains the linkage, for example, from container port (80) to host port (32775) and type (TCP), e.g. 443 -> 32774 (see networking above)
  3. Logs
  4. Files