Docker Engine is the client-server application at the core of building, shipping, and running containerized applications. It consists of a long-running daemon (dockerd) that handles all container operations, a REST API that exposes the daemon's functionality programmatically, and a command-line client (CLI) that users interact with directly [citation:3][citation:4][citation:8]. Internally, Docker Engine has evolved from a monolithic daemon into a modular system that leverages containerd and runC alongside several other specialized components, which together manage the complete container lifecycle [citation:2][citation:4].
At the heart of Docker Engine is the dockerd daemon, which runs continuously in the background. It listens for API requests, manages Docker objects like images, containers, networks, and volumes, and orchestrates the lower-level components that actually run containers [citation:4][citation:7]. When you issue a command like docker run, the Docker CLI sends a REST API request to dockerd, which then coordinates the necessary steps: pulling the image if not present locally, creating the container, and starting its execution [citation:7].
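That request flow can be sketched in a few lines. The endpoint paths below are real Docker Engine API routes, but the `plan()` helper is purely illustrative, not actual CLI source:

```python
# Rough sketch of how `docker run nginx` decomposes into Engine API calls.
# The endpoint paths match the Docker Engine REST API; plan() is hypothetical.
def plan(image):
    """Return the sequence of API calls behind a `docker run`."""
    return [
        # Pull the image if it is not present locally.
        ("POST", f"/images/create?fromImage={image}&tag=latest", None),
        # Create a container from the image.
        ("POST", "/containers/create", {"Image": f"{image}:latest"}),
        # Start the created container (the real call uses the returned id).
        ("POST", "/containers/{id}/start", None),
    ]

for method, path, body in plan("nginx"):
    print(method, path)
```

In practice the CLI sends these requests to dockerd over a Unix socket (typically /var/run/docker.sock) or a TCP endpoint.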
Docker Daemon (dockerd) – The persistent background service that manages all Docker resources. It listens for API requests and handles image management, container lifecycle operations, networking, and volumes [citation:4][citation:7].
REST API – Provides programmatic access to the daemon's functionality. Developers and tools can interact with Docker using HTTP requests, enabling integration with CI/CD pipelines, monitoring systems, and orchestration platforms [citation:4][citation:8].
Docker CLI – The command-line interface users interact with directly. It translates commands like docker run, docker build, and docker pull into API calls sent to the daemon [citation:4][citation:7].
containerd – A core component that manages the complete container lifecycle, including image transfer and storage, container execution, and supervision. Since Docker Engine v29, containerd also handles image storage by default, replacing the older graph driver system [citation:1][citation:2].
containerd-shim – A lightweight process that sits between containerd and runC. Each running container has its own shim process, which allows runC to exit after starting the container and enables live updates of the Docker daemon without stopping containers [citation:2].
runC – The OCI-compliant runtime that actually creates and runs containers. It interfaces directly with Linux kernel features like namespaces and cgroups to provide isolation and resource control [citation:2][citation:10].
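The hand-off between containerd and runC happens through an OCI runtime specification. The sketch below builds a drastically simplified config.json; the field names follow the OCI runtime spec, but a real spec generated by containerd carries many more settings:

```python
import json

# Minimal, illustrative version of the OCI runtime spec (config.json)
# that containerd hands to runC. Real specs are far more detailed.
def minimal_oci_spec(rootfs="rootfs", cmd=("sh",)):
    return {
        "ociVersion": "1.0.2",
        "process": {"args": list(cmd), "cwd": "/"},
        "root": {"path": rootfs, "readonly": True},
        "linux": {
            # Each namespace type gives the container its own isolated
            # view of that resource (processes, network, mounts, ...).
            "namespaces": [
                {"type": "pid"},
                {"type": "network"},
                {"type": "mount"},
                {"type": "ipc"},
                {"type": "uts"},
            ],
        },
    }

print(json.dumps(minimal_oci_spec(), indent=2))
```

runC reads this document, sets up the requested namespaces and mounts, and then executes `process.args` inside the prepared environment.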
The modular architecture emerged from Docker's efforts to align with Open Container Initiative (OCI) standards. Before version 1.11, Docker used a monolithic daemon that handled everything from image management to container execution. The shift to separate components like containerd and runC brought several benefits: the ability to update the daemon without restarting containers, better alignment with industry standards, and improved reliability through fault isolation [citation:2][citation:10].
Step 1: User executes docker run nginx in the CLI, which sends an API request to dockerd.
Step 2: dockerd checks if the nginx image exists locally; if not, it pulls layers from the registry via containerd's image store [citation:1].
Step 3: containerd prepares the container bundle by unpacking image layers and creating an OCI runtime specification (config.json).
Step 4: containerd starts a shim process for the new container, then invokes runC to create and start the container using Linux namespaces and cgroups [citation:2].
Step 5: runC sets up isolation, forks the container process, and then exits, leaving the shim as the parent process to report exit status and manage stdio [citation:2].
Step 6: dockerd receives confirmation and returns success to the CLI; the container is now running with complete isolation.
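The detach-and-re-parent trick in Step 5 can be demonstrated without Docker at all (Linux only). Here an intermediate process plays runC's role: it launches the workload and exits immediately, so the workload is orphaned away from its creator. In Docker the shim becomes the adoptive parent; in this toy version the orphan falls to init or the nearest subreaper:

```python
import os, time

# Toy re-parenting demo (Linux only): the intermediate process stands in
# for runC, the sleeping child stands in for the container process.
r, w = os.pipe()
pid = os.fork()
if pid == 0:                       # intermediate process ("runC")
    child = os.fork()
    if child == 0:                 # workload ("the container")
        time.sleep(1)
        os._exit(0)
    os.write(w, str(child).encode())
    os._exit(0)                    # exit right away, as runC does
os.close(w)
workload = int(os.read(r, 32))
os.waitpid(pid, 0)                 # reap the intermediate process
time.sleep(0.2)                    # let the kernel re-parent the orphan
with open(f"/proc/{workload}/stat") as f:
    # Field after "(comm) state" is the parent pid.
    ppid = int(f.read().rsplit(")", 1)[1].split()[1])
print("re-parented away from creator:", ppid != pid)
```

The workload keeps running even though the process that started it is gone, which is exactly why dockerd and containerd can be restarted without stopping containers.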
Recent developments in Docker Engine v29 have further streamlined the architecture. The containerd image store is now the default for new installations, replacing the legacy graph driver system. This change simplifies the architecture by having containerd manage both image storage and container execution, enabling new capabilities like lazy pulling of image content and support for snapshotter innovations. The shift also improves ecosystem alignment with Kubernetes and other containerd-based platforms [citation:1].

Namespaces – Create isolated views of system resources. Each container gets its own namespace for processes, network interfaces, mount points, and more, preventing containers from seeing or affecting processes outside their namespace [citation:4][citation:8].
Cgroups (control groups) – Limit and account for resource usage. Cgroups ensure containers don't exceed their allocated CPU, memory, or disk I/O quotas [citation:4][citation:8].
Capabilities – Docker drops unnecessary Linux capabilities from containers by default, reducing the attack surface if a container is compromised.
Seccomp profiles – Restrict the system calls containers can make, providing an additional layer of defense against kernel exploits.
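These kernel features surface in the container-create request as HostConfig fields. The field names below (Memory, NanoCpus, CapDrop, SecurityOpt) match the Docker Engine API, but `build_host_config` and `parse_size` are illustrative helpers, not Docker source code:

```python
# Hypothetical helpers mapping user-facing limits onto the HostConfig
# fields Docker sends when creating a container. Field names are real
# Engine API fields; the functions themselves are a sketch.
UNITS = {"b": 1, "k": 1024, "m": 1024**2, "g": 1024**3}

def parse_size(value):
    """Convert a docker-style size string like '512m' into bytes."""
    if value[-1].lower() in UNITS:
        return int(value[:-1]) * UNITS[value[-1].lower()]
    return int(value)

def build_host_config(memory="512m", cpus=1.5):
    return {
        "Memory": parse_size(memory),          # enforced via the memory cgroup
        "NanoCpus": int(cpus * 1e9),           # CPU quota, also via cgroups
        "CapDrop": ["ALL"],                    # shrink the capability set
        "SecurityOpt": ["no-new-privileges"],  # on top of the default seccomp profile
    }

print(build_host_config())
```

The same limits are what `docker run --memory 512m --cpus 1.5 --cap-drop ALL` expresses on the command line.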
Docker Engine also supports alternative runtimes through containerd's shim API. You can configure runtimes like youki (written in Rust for better performance and lower memory usage) or Wasmtime for running WebAssembly containers. This flexibility allows Docker to adapt to specialized workloads while maintaining the same user experience through the Docker CLI [citation:6].
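Registering an alternative runtime is a daemon.json change. The runtime name and binary path below are assumptions for illustration; the `runtimes` key itself is the documented daemon configuration option:

```json
{
  "runtimes": {
    "youki": {
      "path": "/usr/local/bin/youki"
    }
  }
}
```

After restarting dockerd, individual containers can opt in with `docker run --runtime youki …` while runC remains the default for everything else.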