Docker
Docker is a system for building, deploying, and running containers: packaged images of a program together with its runtime.
- With the rise of cloud computing, users needed an easy way to create a runtime package that could be sent to any cloud Platform as a Service (PaaS) provider with complete interoperability.
Docker was released in 2013 and solved many of the problems developers had running containers end-to-end. It provided:
- A container image format
- A method for building container images (Dockerfile/docker build)
- A way to manage container images (docker images, docker rmi, etc.)
- A way to manage instances of containers (docker ps, docker rm, etc.)
- A way to share container images (docker push/pull)
- A way to run containers (docker run)
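Most of these pieces meet in the Dockerfile. A minimal sketch for a tiny image (the base image tag and file names here are illustrative, not from any particular project):

```dockerfile
# Start from an official minimal base image
FROM alpine:3.19
# Copy a hypothetical application script into the image
COPY hello.sh /usr/local/bin/hello.sh
RUN chmod +x /usr/local/bin/hello.sh
# Command run when a container starts from this image
CMD ["hello.sh"]
```

Typical commands against it would then be `docker build -t myrepo/hello .` to build, `docker run myrepo/hello` to run, and `docker push myrepo/hello` to share.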
The Open Container Initiative addresses standardization of the formats needed to run Docker containers with any compliant tool:
- Open Container Initiative (OCI)
- OCI runtime specification
- runc, a tool and library for running containers, donated by Docker as the reference implementation of the runtime spec
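Under the runtime spec, an OCI "bundle" is just a directory containing a root filesystem plus a config.json. A trimmed sketch of that file (a real one, e.g. from `runc spec`, has many more fields):

```json
{
  "ociVersion": "1.0.2",
  "root": { "path": "rootfs" },
  "process": {
    "args": ["sh"],
    "cwd": "/"
  }
}
```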
When you run a Docker container, these are the steps Docker actually goes through:
- Download the image
- Unpack the image into a "bundle". This flattens the layers into a single filesystem.
- Run the container from the bundle
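The same three steps can be performed with separate commands instead of a single `docker run`. A sketch, assuming a running Docker daemon; `alpine` is just an illustrative image:

```shell
# Step 1: download the image (docker run would do this implicitly)
docker pull alpine:latest
# Step 2: create the container -- this unpacks the image layers into a bundle
docker create --name demo alpine:latest echo "hello from the bundle"
# Step 3: run the container from the prepared bundle
docker start --attach demo
```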
The portability and reproducibility of a containerized process give us the opportunity to move and scale our containerized applications across clouds and datacenters. Containers effectively guarantee that those applications run the same way anywhere, allowing us to take advantage of all these environments quickly and easily. Furthermore, as we scale our applications up, we'll want tooling that automates their maintenance: replacing failed containers automatically and managing the rollout of updates and reconfigurations of those containers during their lifecycle.
Tools to manage, scale, and maintain containerized applications are called orchestrators, and the most common examples of these are Kubernetes and Docker Swarm. Development environment deployments of both of these orchestrators are provided by Docker Desktop.
- Docker Docs - use volumes. Volumes are the preferred mechanism for persisting data generated by and used by Docker containers. While bind mounts depend on the directory structure and OS of the host machine, volumes are completely managed by Docker. Unless the data also needs to be managed by the underlying OS (as is the case when debugging in an IDE), prefer volumes over bind mounts.
- Use rsync to move files between the dev machine and a running Linux container instance.
- Use Docker bind mounts for files on the core OS.
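The volume-versus-bind-mount distinction is easiest to see in a compose file. A sketch, with hypothetical service, image, volume, and path names:

```yaml
services:
  app:
    image: myapp:latest            # hypothetical image
    volumes:
      - appdata:/var/lib/appdata   # named volume: managed entirely by Docker
      - ./src:/app/src             # bind mount: tied to the host's directory layout
volumes:
  appdata:
```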
Support for TLS
This section is specific to Visual Studio 2019 and later.
- Start the project as a Docker project, or go to the startup project and choose Add -> Docker Support.
- Add orchestration in the same project via Add -> Container Orchestrator Support. This creates a new folder under the VS solution folder called "docker-compose".
- That enables running more than one startup project in containers.
- If the application uses User Secrets, there might be a problem deploying it to Production.
- Ditto with root/.aspnet/https/---.pfx
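Container Orchestrator Support generates a docker-compose.yml with one service per project. A sketch with hypothetical project names (the `${DOCKER_REGISTRY-}` image prefix is the pattern Visual Studio emits):

```yaml
services:
  webfrontend:
    image: ${DOCKER_REGISTRY-}webfrontend
    build:
      context: .
      dockerfile: WebFrontend/Dockerfile
  webapi:
    image: ${DOCKER_REGISTRY-}webapi
    build:
      context: .
      dockerfile: WebApi/Dockerfile
```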
- Deploying Dockerized .NET Apps Without Being a DevOps Guru (Docker blog, 2019-08-13)
- How To Use Rsync to Sync Local and Remote Directories (DigitalOcean, 2020-11-18)
- Configuration of containers is reached by
- Visual Studio Find does not search this file, so you need to know its contents.
Nav (left) panel
- Solution Containers
- Other Containers
Nav (top) bar
- Ports - shows the mapping from container port (80) to host port (32775) and the protocol (TCP)
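That 80 -> 32775 mapping can also be declared explicitly in a compose file (a sketch; in practice Visual Studio usually assigns the host port dynamically):

```yaml
services:
  webapp:
    ports:
      - "32775:80"   # host port 32775 -> container port 80, TCP by default
```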
- Docker docs, but note that the Docker team only deals with the low-level formats.
- Command line interface reference for doctl, the DigitalOcean CLI.
- Docker for Windows on GitHub with issues.
- A four-part series on Container Runtimes that describes low-level versus high-level runtimes well.
- Getting Started with Docker in Visual Studio 2019
- Docker Container Tools on docs.msft