Adopt a tools-enabled Jenkins agent image for HybridOps.Studio CI
Status
Accepted — Standardise on a single, versioned “tools” agent image for Jenkins (Docker bootstrap agents now; Kubernetes/RKE2 pod agents later) to provide repeatable tooling and evidence-friendly runs.
1. Context
HybridOps.Studio pipelines require a consistent set of CLI tools (for example Terraform, Terragrunt, Packer, kubectl, Helm, Ansible) to execute platform automation and generate evidence artefacts.
Early bootstrap agents (Docker inbound agents on the controller’s Docker network) successfully executed jobs but lacked required tooling, producing “command not found” for key CLIs. Installing tools per-job increases runtime, creates drift between runs, and weakens evidence reproducibility.
Constraints and requirements:
- Runs must be repeatable across environments and time.
- The image must support Docker socket mounting for “Docker-from-agent” workflows without embedding a Docker daemon.
- Tooling versions must be pinned and auditable.
- The same image should be usable for both:
- Docker inbound agents (bootstrap / control-node execution), and
- Kubernetes-based agents (RKE2 / Kubernetes plugin) later.
- The image definition must live in the platform repo for traceability, and be consumable by the public Ansible collection role without shipping large binaries.
This ADR builds on the Jenkins controller and agents architecture in ADR-0603.
2. Decision
Adopt a versioned container image, hybridops/ci-agent-tools, as the standard Jenkins agent runtime for HybridOps.Studio CI.
- The image extends `jenkins/inbound-agent:latest-jdk21`.
- Required tools are installed at build time with pinned versions.
- The Docker CLI is provided as a static client plus the Compose v2 plugin; the image does not run a Docker daemon.
- Ansible is installed into a dedicated virtual environment (`/opt/ansible-venv`) and exposed via `/usr/local/bin`.
- The image is published to Docker Hub and referenced by the Ansible role `hybridops.app.jenkins_agents_docker`.
- The same image is intended to be reused by the future Kubernetes/RKE2 Jenkins agent templates, unless a later ADR introduces hardened or segmented images.
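As an illustrative sketch only (tool versions, download URLs, and layer layout here are assumptions, not the authoritative Dockerfile in the platform repo), the image definition could look like:

```dockerfile
# Sketch: tools-enabled Jenkins inbound agent. Versions are illustrative pins.
FROM jenkins/inbound-agent:latest-jdk21

USER root

# Illustrative version pins; the real Dockerfile is the source of truth.
ARG TERRAFORM_VERSION=1.9.8
ARG DOCKER_VERSION=27.3.1

RUN apt-get update \
 && apt-get install -y --no-install-recommends curl unzip python3-venv \
 && rm -rf /var/lib/apt/lists/*

# Terraform: pinned zip from HashiCorp releases.
RUN curl -fsSLo /tmp/terraform.zip \
      "https://releases.hashicorp.com/terraform/${TERRAFORM_VERSION}/terraform_${TERRAFORM_VERSION}_linux_amd64.zip" \
 && unzip /tmp/terraform.zip -d /usr/local/bin \
 && rm /tmp/terraform.zip

# Static Docker client only -- no daemon in the image. Jobs that need Docker
# mount the host socket explicitly. (Compose v2 plugin installed similarly.)
RUN curl -fsSL "https://download.docker.com/linux/static/stable/x86_64/docker-${DOCKER_VERSION}.tgz" \
  | tar -xz --strip-components=1 -C /usr/local/bin docker/docker

# Ansible in a dedicated venv, exposed on PATH.
RUN python3 -m venv /opt/ansible-venv \
 && /opt/ansible-venv/bin/pip install --no-cache-dir "ansible-core==2.17.*" \
 && ln -s /opt/ansible-venv/bin/ansible* /usr/local/bin/

USER jenkins
```

Keeping `USER jenkins` as the final user preserves the inbound-agent entrypoint behaviour while root is used only for build-time installs.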
3. Rationale
This decision:
- Improves repeatability by pinning tool versions at image build time rather than installing per pipeline step.
- Strengthens evidence quality: job logs show consistent tool versions and behaviour, supporting “evidence-first” runs.
- Simplifies the Jenkins agent roles: the agent definition becomes “run this image with these mounts” rather than “install a toolchain”.
- Enables an incremental path from Docker bootstrap agents to Kubernetes/RKE2 pod agents without changing the tooling surface area.
- Keeps the “Docker capability” explicit: the image only carries the Docker client and requires an explicit socket mount.
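The explicit socket mount can be sketched as a Compose fragment; the service name, network name, agent name, and tag below are assumptions, though the `JENKINS_URL`/`JENKINS_SECRET`/`JENKINS_AGENT_NAME` variables are the standard inbound-agent settings:

```yaml
# Sketch: bootstrap agent with the Docker capability explicitly enabled.
services:
  ci-agent:
    image: hybridops/ci-agent-tools:0.1.0
    environment:
      JENKINS_URL: "http://jenkins:8080"
      JENKINS_AGENT_NAME: "bootstrap-tools-1"
      JENKINS_SECRET: "${AGENT_SECRET}"
    volumes:
      # Explicit opt-in: this grants the agent control of the host Docker daemon.
      - /var/run/docker.sock:/var/run/docker.sock
    networks: [jenkins]
networks:
  jenkins:
    external: true
```

Omitting the `volumes` entry yields the same image without the Docker capability, which keeps the capability a per-agent decision rather than a property of the image.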
4. Consequences
4.1 Positive consequences
- Consistent CI runtime: Terraform/Packer/Kubernetes tooling is available immediately on agents.
- Faster pipelines (no repeated tool installation steps).
- Tool versions are captured in Git and in the image tag, improving auditability.
- Docker-based bootstrap agents can run Docker/Compose against the host daemon via a socket mount.
- Reuse across agent backends (Docker now; RKE2 later) reduces duplication.
4.2 Negative consequences / risks
- Image size increases (multiple CLIs).
- Requires an image publishing workflow and versioning discipline.
- Docker socket mounting increases blast radius if misused; it must remain explicit and constrained to trusted jobs.
- “latest” tags can reduce determinism if overused; pipelines should prefer pinned tags for evidence runs.
5. Alternatives considered
- Install tools on every agent at runtime (pipeline steps) — rejected due to drift, slower runs, and weaker evidence reproducibility.
- Bake tools into the Jenkins controller image — rejected because tooling belongs with agents and should not expand controller responsibilities.
- Maintain separate images per toolchain (Terraform-only, Kubernetes-only, etc.) — deferred; current scale benefits from a single “bootstrap tools” image. Segmentation remains a future option if security or size constraints tighten.
6. Implementation notes
Image source of truth:
- Platform repo: `hybridops-platform/control/images/ci-agent-tools/`
  - `Dockerfile` defines the toolchain and version pins.
  - `build.sh` builds, tags, optionally tests, and pushes.
  - `.dockerignore` reduces build context noise.
Consumption points:
- Ansible collection role:
  - Repo: `ansible-collection-app`
  - Role: `hybridops.app.jenkins_agents_docker`
  - Default image: `hybridops/ci-agent-tools:<tag>`
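A minimal consumption sketch, assuming the role exposes an image variable (the variable name and host group below are assumptions, not the role's documented interface):

```yaml
# Sketch: pinning the agent image when applying the role.
- hosts: control_node
  roles:
    - role: hybridops.app.jenkins_agents_docker
      vars:
        # Pin a semantic tag for evidence-grade runs; avoid "latest" here.
        jenkins_agents_docker_image: "hybridops/ci-agent-tools:0.1.0"
```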
Versioning expectations:
- Use semantic tags (for example `0.1.0`) for evidence-grade runs.
- Maintain `latest` for convenience in interactive development, but do not rely on it for validated artefacts.
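The dual-tag convention can be sketched as a small helper of the kind `build.sh` might use (the function name and exact tagging logic are illustrative, not the script's actual contents):

```shell
# Sketch: emit the tags applied for a given version. The semantic tag is the
# evidence-grade reference; "latest" is a convenience alias only.
image_tags() {
  version="$1"
  image="hybridops/ci-agent-tools"
  printf '%s:%s\n%s:latest\n' "$image" "$version" "$image"
}

image_tags "0.1.0"
# A build script would then do, roughly:
#   docker build -t "$(image_tags 0.1.0 | head -n 1)" .
#   and push both tags after optional smoke tests.
```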
7. Operational impact and validation
Operational impacts:
- CI jobs run on a controlled runtime with preinstalled CLIs, reducing incidental failure modes.
- Agent logs and job output include tool versions, improving traceability.
Validation approach:
- Jenkins pipeline “Proof” stage asserts the presence and versions of core tools.
- The Jenkins agent role ensures the agent can connect, accept workloads, and execute basic commands.
- Evidence is captured under:
- CI artefacts
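The Proof stage's tool assertion can be sketched as a small shell helper (the helper name is illustrative; the actual pipeline step may differ):

```shell
# Sketch: fail fast if any required CLI is missing from PATH, otherwise
# confirm the set so the check appears in the job log as evidence.
require_tools() {
  for tool in "$@"; do
    if ! command -v "$tool" >/dev/null 2>&1; then
      echo "missing: $tool" >&2
      return 1
    fi
  done
  echo "all tools present: $*"
}

# In the real pipeline the list would be the pinned toolchain, e.g.:
#   require_tools terraform terragrunt packer kubectl helm ansible
```

A version report (`terraform version`, `kubectl version --client`, and so on) printed in the same stage ties the run to the image's pinned toolchain.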
Runbooks and how-tos:
8. References
- ADR-0603 – Run Jenkins controller on control node, agents on RKE2
- HybridOps Platform repository
- HybridOps Ansible collection (apps)
- Jenkins inbound agent image
- Jenkins Kubernetes plugin documentation
- Docker Compose releases
- HashiCorp Terraform releases
- HashiCorp Packer releases
Maintainer: HybridOps.Studio
License: MIT-0 for code, CC-BY-4.0 for documentation unless otherwise stated.