One of the most important skills for a developer these days is knowing how Docker works. When I first started coding, I never imagined you could ship your development environment to production (the memes will never die, though). Let's talk about Docker and how it changed the landscape of software development.

Docker is a platform designed to make it easier to create, deploy, and run applications by using containers. Containers allow a developer to package up an application with all the parts it needs, such as libraries and other dependencies, and ship it all out as one package.

Docker containers are lightweight and efficient. They share the host system's kernel while isolating the application's processes, reducing overhead compared to traditional virtual machines. This efficiency translates to faster startup times, better resource utilization, and the ability to run many more containers on the same hardware.
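
If you want to see that speed for yourself, a quick experiment (assuming Docker is installed and you don't mind pulling the tiny alpine image) is to time a throwaway container:

# Time a full create-run-remove container lifecycle
time docker run --rm alpine echo "hello from a container"

Once the image is cached locally, this typically finishes in well under a second, while booting a comparable virtual machine takes orders of magnitude longer.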

Another key feature of Docker is its integration with container orchestration tools like Kubernetes. These tools provide automated management of containerized applications, handling tasks such as scaling, load balancing, and self-healing. This orchestration capability is crucial for modern microservices architectures, where applications are composed of multiple interconnected services that need to be managed cohesively.

The widespread adoption of orchestration platforms like Kubernetes makes containerization skills all the more valuable, since enterprises rely on them to handle complex, distributed systems. I will talk about Kubernetes in an upcoming blog post, so stay tuned!

Why Docker?

  • Consistency: Docker ensures that your application works seamlessly in different environments by packaging it in containers. This solves the "it works on my machine" problem.
  • Isolation: Containers allow you to run multiple applications on the same host without interfering with each other.
  • Microservices: Docker is ideal for building microservice architectures, allowing each service to be deployed and scaled independently.
  • Portability: Containers can be easily moved from your local development machine to production servers, across cloud providers, and more.


Docker Images: The Blueprints

In OOP, classes serve as blueprints for creating objects. They define the data and behaviours the objects created from the class can have. Similarly, a Docker image serves as a blueprint for creating containers. An image includes everything needed to run an application: the code or binary, runtime, libraries, environment variables, and configuration files.

Once created, a Docker image is immutable. Think of an image as a snapshot of your application at a specific point in time. When you want to update your application, you create a new image with the changes. This parallels how you might update a class in your codebase to reflect these changes.
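
In CLI terms, that workflow looks roughly like this (the image name and tags here are just examples):

# Build an immutable snapshot of the application
docker build -t myapp:1.0 .

# After changing the code, build a new image rather than mutating the old one
docker build -t myapp:1.1 .

# Both snapshots now coexist side by side
docker images myapp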

Docker Containers: The Instances

Continuing our analogy, if a Docker image is like a class, then a Docker container is like an instance of that class. An instance in OOP is an actual object created from a class; it's where you can set specific values to the properties defined in the class and where methods can be executed.

When you run a Docker image, a container is created from that image. This container is a running instance of the image and includes its own filesystem, set of processes, and network interface isolated from the host system and other containers. You can start, stop, move, and delete a container without affecting the image it was created from, just as you can manipulate an instance of a class without changing the class itself.
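
To make the class/instance analogy concrete, here is a quick sketch using the public nginx image (the container names are arbitrary):

# Create two "instances" of the same "class"
docker run -d --name instance-one nginx
docker run -d --name instance-two nginx

# Each container runs with its own isolated processes and filesystem
docker ps

# Removing an instance leaves the image (the class) untouched
docker stop instance-one && docker rm instance-one
docker images nginx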

Understanding Dockerfiles

In our OOP analogy, if Docker images are like classes, then a Dockerfile is the class definition itself. A Dockerfile is a text document containing all the commands a user could call on the command line to assemble an image. Writing a Dockerfile is like defining the properties and methods of a class. It includes the base image (the parent class), additional packages to install (dependencies), environment variables (class properties), and the command to run when the container starts (the main method). Each of these maps to a concrete instruction, listed below and followed by a small sketch.

Key Elements of a Dockerfile

  • Base Image (FROM): Specifies the starting point, similar to extending a class in OOP.
  • Commands (RUN): Executes shell commands, akin to setting up methods in your class that prepare the object.
  • Copying Files (COPY, ADD): Brings in external files or directories, similar to including external libraries or resources.
  • Environment Variables (ENV): Sets variables that can be used by the running application, like setting default values for class properties.
  • Execution Command (CMD): Defines what command to run when the container starts, akin to the main method in your class that gets executed.
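
Putting these elements together, here is a minimal illustrative Dockerfile (the file names and values are placeholders, not from a real project):

# FROM: the "parent class" our image extends
FROM alpine:3.19

# RUN: executed at build time to prepare the image, like setup methods
RUN apk add --no-cache curl

# COPY: bring external files into the image (entrypoint.sh is a placeholder)
COPY entrypoint.sh /entrypoint.sh

# ENV: default values, like class properties
ENV GREETING="hello"

# CMD: the "main method" that runs when a container starts
CMD ["/bin/sh", "/entrypoint.sh"]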

Docker Compose: Orchestrating Multiple Instances

Expanding our analogy, Docker Compose serves as the environment where multiple instances of different classes interact. If Docker containers are instances, Docker Compose is like an application that uses various objects (containers) to accomplish a larger goal. It allows you to define and run multi-container Docker applications, managing the lifecycle of those containers through simple commands.

With Docker Compose, you can:

  • Define your application's environment with a docker-compose.yml file, specifying which containers to run, their configurations, how they communicate, and shared volumes, akin to defining how instances of classes interact in an application.
  • Start, stop, and rebuild services collectively with commands like docker-compose up and docker-compose down, similar to initializing or tearing down your application environment.
  • Isolate environments on a single host, ensuring that different instances (containers) don't interfere with each other, similar to having separate instances of an application with their own set of objects.

This is extremely useful when you need to debug multiple Dockerized applications at the same time.
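
As a sketch, here is what a docker-compose.yml for a web API plus a database could look like (the service names, ports, and credentials are illustrative placeholders):

version: "3.8"   # optional on recent Compose versions

services:
  api:
    build: .                  # build the image from the local Dockerfile
    ports:
      - "8080:8080"           # map host port 8080 to the container
    depends_on:
      - db                    # start the database first
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: example   # placeholder only, never hardcode real secrets
    volumes:
      - db-data:/var/lib/postgresql/data   # persist data across restarts

volumes:
  db-data:

A single docker-compose up -d then starts both services on a shared network, where the api container can reach the database simply by the service name db.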

A practical .NET example

Open a terminal and create a new .NET Web API application by running:

dotnet new webapi -n DockerDemo

Now move into the new DockerDemo directory, create a file named Dockerfile, and add the following content:

# Runtime stage: the lean ASP.NET image that will actually run the app
FROM mcr.microsoft.com/dotnet/aspnet:8.0 AS base
WORKDIR /app
# .NET 8 images listen on port 8080 by default
EXPOSE 8080

# Build stage: the full SDK image used to compile the app
FROM mcr.microsoft.com/dotnet/sdk:8.0 AS build
WORKDIR /src
# Copy the project file first so restored packages are cached between builds
COPY ["DockerDemo.csproj", "./"]
RUN dotnet restore "DockerDemo.csproj"
COPY . .
RUN dotnet build "DockerDemo.csproj" -c Release -o /app/build

# Publish stage: produce the final deployable output
FROM build AS publish
RUN dotnet publish "DockerDemo.csproj" -c Release -o /app/publish

# Final stage: copy only the published output into the runtime image
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "DockerDemo.dll"]

With the Dockerfile in place, run the command below to build an image:

docker build -t dockerdemo .

To start a container from the image we just built, run the command below:

docker run -d -p 8080:8080 --name myapp dockerdemo

This command runs the Docker image dockerdemo as a container named myapp and maps port 8080 on your host to port 8080 in the container, which is the port .NET 8 apps listen on by default.
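
To confirm the app is responding, you can follow the logs and hit the sample endpoint the webapi template scaffolds (assuming you kept the default weatherforecast endpoint):

# Check the container logs to confirm the app started
docker logs myapp

# Call the sample endpoint through the mapped port
curl http://localhost:8080/weatherforecast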

Congratulations, you have run a .NET app using Docker.

Conclusion

Docker is a powerful tool that simplifies building, deploying, and running applications. It ensures that applications run in a consistent environment, making deployments more predictable and less prone to errors. Are you using Docker in your company? Let me know, and as always, keep coding!

Next steps

If you're interested in learning more about Docker, you can always check our "From Zero to Hero: Docker for Developers" course on Dometrain.


If you want to go a step further and master Kubernetes, you can check out our "From Zero to Hero: Kubernetes for Developers" course on Dometrain.
