Jenkins architecture is a distributed, controller-agent model in which a central controller manages scheduling and configuration, while lightweight agents execute build and test tasks across diverse environments.
Jenkins is built on a distributed architecture that separates the responsibilities of task management from task execution. This design, often called controller-agent (formerly master-slave), allows Jenkins to scale horizontally, run jobs across multiple platforms simultaneously, and isolate resource-intensive builds from the core system. The architecture consists of several key components: the Jenkins controller, nodes, agents, and executors.
The Jenkins controller (previously called master) is the central component that serves as the web server, configuration manager, and scheduler for the entire Jenkins installation. It's responsible for storing all configurations, managing plugins, authenticating users, and orchestrating build jobs. Importantly, the controller is designed to be lightweight and should not execute builds itself. The official Jenkins documentation strongly recommends setting the number of executors on the controller to 0 to prevent it from running builds, which would degrade performance and reduce scalability[citation:2][citation:8]. The controller makes decisions about when and where to run tasks, then delegates the actual work to agents.
A node is any machine that can run Jenkins builds, including the controller itself. However, the practical unit of build execution is the agent: a small Java client process (around 170KB) that connects to the controller and executes tasks on its behalf. Agents can run on any operating system that supports Java, enabling cross-platform builds where compilation happens on Linux, testing on Windows, and packaging on macOS, all within the same pipeline. Agents connect to the controller in several ways: via SSH, or via inbound connections over the Jenkins remoting protocol (historically known as JNLP and launched through Java Web Start) or WebSocket. Once connected, they register their availability and wait for work assignments.
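The cross-platform flow described above can be sketched as a Declarative Pipeline in which each stage requests a differently labeled agent. This is a sketch, not a canonical example: the labels `linux`, `windows`, and `macos` and the `make` targets are assumptions that must match labels and tooling configured on your own nodes.

```groovy
// Sketch: assumes agents exist with the labels 'linux', 'windows', and 'macos'.
pipeline {
    agent none  // no default agent; each stage picks its own

    stages {
        stage('Compile') {
            agent { label 'linux' }
            steps {
                sh 'make build'        // compilation happens on a Linux agent
            }
        }
        stage('Test') {
            agent { label 'windows' }
            steps {
                bat 'run-tests.cmd'    // tests run on a Windows agent
            }
        }
        stage('Package') {
            agent { label 'macos' }
            steps {
                sh 'make package'      // packaging happens on a macOS agent
            }
        }
    }
}
```

Note that workspaces are not shared between agents: in practice, `stash`/`unstash` steps or an artifact repository are used to move build outputs from one stage's agent to the next.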
Permanent Agents: Dedicated physical or virtual machines that run continuously. They have pre-installed tools and remain online even when idle[citation:5]. Best for stable, predictable workloads requiring consistent environments.
Docker Agents: Ephemeral containers spawned on demand for each build. They provide fresh, isolated environments every time and automatically clean up after job completion[citation:5]. Ideal for reproducibility and avoiding dependency conflicts.
Cloud-Based Agents: On-demand instances provisioned from cloud providers (AWS, Azure, Kubernetes) that scale dynamically with workload[citation:5]. Perfect for handling variable traffic peaks while minimizing idle resource costs.
Label-Based Agents: Nodes tagged with labels like 'linux', 'docker', or 'high-memory' that pipelines can target declaratively, decoupling build logic from physical infrastructure[citation:2][citation:5].
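As an illustration of the Docker agent type above, a Declarative Pipeline can request an ephemeral container per build. This sketch assumes the Docker Pipeline plugin is installed and that some node is tagged `docker`; the Maven image name is only an example.

```groovy
pipeline {
    // Spin up a fresh container for this build; it is torn down when the job ends.
    agent {
        docker {
            image 'maven:3.9-eclipse-temurin-17'  // example image; pick your toolchain
            label 'docker'                        // only schedule on nodes tagged 'docker'
        }
    }
    stages {
        stage('Build') {
            steps {
                sh 'mvn -B clean verify'
            }
        }
    }
}
```

Because the container is created per build and discarded afterwards, every run starts from the same image, which is what gives Docker agents their reproducibility advantage.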
An executor is a thread within an agent that runs build tasks. The number of executors on a node determines how many concurrent jobs that node can handle[citation:8]. Configuring executors correctly is critical: too few leaves resources idle; too many causes contention and performance degradation. The safest configuration is one executor per node, though one executor per CPU core can work for small tasks with careful monitoring of CPU, memory, and I/O usage[citation:8]. When multiple executors run on the same node, Jenkins can execute multiple Pipeline stages simultaneously, dramatically increasing throughput.
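A sketch of how executors translate into concurrency: with at least two executors free (on one node or spread across several), the `parallel` block below lets Jenkins run both test stages at the same time. The `linux` label and Gradle tasks are assumptions for illustration.

```groovy
pipeline {
    agent none
    stages {
        stage('Tests') {
            parallel {
                // Each parallel branch consumes one executor; with enough
                // executors free, both branches run simultaneously.
                stage('Unit') {
                    agent { label 'linux' }
                    steps { sh './gradlew test' }
                }
                stage('Integration') {
                    agent { label 'linux' }
                    steps { sh './gradlew integrationTest' }
                }
            }
        }
    }
}
```

If only one matching executor is available, the branches still complete correctly, but they queue and run one after another rather than in parallel.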
Agents communicate with the controller through persistent connections, using either WebSocket or the Jenkins agent protocol[citation:1]. For security, production deployments should enable SSL/TLS encryption for all controller-agent communication[citation:1]. SSH-based agents require proper key management—preferably using SSH keys with passphrases rather than passwords, stored securely in Jenkins credentials[citation:2]. Container-based agents can further isolate workloads by providing fresh environments per build, reducing the risk of cross-contamination.
Labels are a powerful abstraction in Jenkins architecture that group agents based on capabilities like operating system, hardware architecture (e.g., 'aarch64'), or installed toolchains[citation:2]. Pipelines can then specify required labels in their agent blocks, and Jenkins automatically matches the job with an appropriate agent. This decoupling means pipelines don't need to know specific machine names—they just declare requirements like 'linux && docker' and let the controller find suitable agents[citation:2]. This dynamic matching enables flexible resource allocation and simplifies infrastructure changes.
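The label matching described above looks like this in a pipeline's agent block. The expression `linux && docker` is evaluated against each node's label set, so any agent carrying both tags qualifies; label expressions also support operators such as `||` and `!`.

```groovy
pipeline {
    // Jenkins resolves this expression against every node's labels and
    // queues the build on any agent tagged with BOTH 'linux' and 'docker'.
    agent { label 'linux && docker' }
    stages {
        stage('Build') {
            steps {
                sh 'docker build -t myapp .'   // 'myapp' is a placeholder image name
            }
        }
    }
}
```

Because the pipeline names capabilities rather than machines, agents can be replaced or re-provisioned without touching any Jenkinsfile, as long as the new nodes carry the same labels.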
For mission-critical environments, Jenkins can be configured for high availability by deploying multiple controllers behind a load balancer (HAProxy or Nginx) and sharing a common JENKINS_HOME directory via NFS or distributed filesystems[citation:9]. Container orchestration platforms like Kubernetes can automatically replace failed controller or agent pods. Regular backups of the controller's home directory and build artifacts ensure recoverability[citation:9]. However, true active-active clustering remains challenging due to Jenkins' filesystem-based storage model, so many organizations implement warm standby architectures instead.
Jenkins includes a special built-in node that runs within the controller process itself. While technically possible to execute builds here, this practice is strongly discouraged for security, performance, and scalability reasons[citation:8]. Running builds on the controller can lead to resource exhaustion, longer response times, and potential vulnerabilities if build scripts compromise the core system. Best practice is to configure the built-in node with 0 executors and route all build work to dedicated agents[citation:2][citation:8]. This separation ensures the controller remains responsive for scheduling and user interface tasks regardless of build load.
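Setting the built-in node to 0 executors can be done in the UI (Manage Jenkins → Nodes → Built-In Node) or, as a sketch, from the Script Console using the core Jenkins API:

```groovy
// Run from Manage Jenkins → Script Console (requires administrator rights).
import jenkins.model.Jenkins

def jenkins = Jenkins.get()
jenkins.setNumExecutors(0)   // built-in node no longer accepts builds
jenkins.save()               // persist the change to the controller's config.xml
```

With zero executors, any job that does not match an agent label simply waits in the queue instead of silently running on the controller, which makes misconfigured pipelines easy to spot.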