# CI/CD – Jenkins Controller Build and Smoke Tests

## Purpose
Build, deploy, and validate the Jenkins controller running on ctrl-01 as defined in ADR-0603.
In scope:

- Build a custom controller image from `jenkins/jenkins:lts` with a pinned plugin set.
- Deploy the controller as a Docker Compose stack on the control node.
- Execute a minimal smoke test against the HTTP endpoint.
- Emit evidence artefacts under `output/` for audit and review.
Out of scope:
- Agent provisioning and scaling (Docker, LXC, RKE2).
- Application pipelines (Packer, Terraform, Ansible, RKE2 deployments).
- Backup/restore and DR procedures (covered by runbooks).
## Pipeline stages

### 1. Pre-checks
- Confirm target host reachability.
- Validate Docker Engine and Docker Compose v2 availability.
- Validate base OS prerequisites used by the controller role.
Success criteria:

- `docker version` and `docker compose version` succeed on the target.
- Required paths can be created with expected ownership and modes.
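The pre-check probes above can be wrapped in a small helper that prints a PASS/FAIL line per check and stops on the first failure. The `require` function is illustrative, not part of the pipeline code; only the probe commands (`docker version`, `docker compose version`) come from this document.

```shell
#!/bin/sh
# Minimal pre-check sketch. `require` is a hypothetical wrapper:
# it runs a probe command, reports PASS/FAIL, and propagates the
# exit status so callers can abort the pipeline early.
set -u

require() {
  desc="$1"; shift
  if "$@" >/dev/null 2>&1; then
    echo "PASS: $desc"
  else
    echo "FAIL: $desc"
    return 1
  fi
}

# Probes used by the pre-check stage (run on the target host):
#   require "Docker Engine"     docker version
#   require "Docker Compose v2" docker compose version
```

The same wrapper can cover path checks, e.g. `require "JENKINS_HOME writable" test -w /var/jenkins_home` (path assumed for illustration).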
### 2. Build

- Render controller build context:
    - `Dockerfile`
    - Plugin manifest
    - Baseline JCasC configuration
- Build image `hybridops/jenkins-controller:<tag>` (tag aligned to pipeline configuration).
Success criteria:
- Image build completes without plugin installation failures.
- Rendered JCasC files are present in the expected location.
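A common cause of plugin installation failures is an unpinned entry in the manifest. A sketch of a pre-build lint, assuming a `plugins.txt`-style manifest with one `<plugin>:<version>` entry per line (the file name and format are assumptions about the rendered build context):

```shell
#!/bin/sh
# Hypothetical manifest lint: fail the build early if any plugin
# entry is missing an explicit version pin.
lint_plugins() {
  manifest="$1"
  bad=0
  while IFS= read -r line; do
    case "$line" in
      ''|'#'*) continue ;;                  # skip blanks and comments
      *:*) ;;                               # pinned entry, OK
      *) echo "unpinned plugin: $line"; bad=1 ;;
    esac
  done < "$manifest"
  return "$bad"
}
```

Running this before `docker build` turns a slow image-build failure into a fast, explicit pre-check failure.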
### 3. Deploy

- Render `docker-compose.yml`.
- Ensure Docker network exists.
- Start or converge stack via Docker Compose v2.
Success criteria:
- Controller container is running.
- Container does not enter a restart loop.
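The rendered compose file is not reproduced in this document; a minimal sketch of its shape, where the service name, published port, volume, and network names are all assumptions rather than the actual template:

```yaml
# Illustrative docker-compose.yml sketch (names and port are assumptions).
services:
  jenkins-controller:
    image: hybridops/jenkins-controller:${TAG:-lts}
    restart: unless-stopped
    ports:
      - "8080:8080"
    environment:
      JENKINS_ADMIN_PASSWORD: ${JENKINS_ADMIN_PASSWORD:?required}
    volumes:
      - jenkins_home:/var/jenkins_home
    networks:
      - jenkins

volumes:
  jenkins_home:

networks:
  jenkins:
```

`restart: unless-stopped` matches the "no restart loop" success criterion: Docker restarts a crashed container, so a crash-loop shows up in `docker ps` restart counts rather than as a silently absent service.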
### 4. Smoke test

- Poll the Jenkins HTTP endpoint until ready:
    - Default target: `http://localhost:<port>/login`
- Record HTTP status and timing into evidence outputs.
Success criteria:
- Endpoint responds with expected HTTP status (for example 200/302) within the configured timeout.
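The readiness poll can be sketched as a small `curl` loop. The function name, poll interval, and default timeout are illustrative; the accepted statuses (200/302) and the timing record come from the success criteria above.

```shell
#!/bin/sh
# Readiness poll sketch: retry until the endpoint returns an accepted
# HTTP status or the timeout elapses, reporting status and elapsed time.
smoke() {
  url="$1"; timeout="${2:-120}"; start=$(date +%s)
  while :; do
    code=$(curl -s -o /dev/null -w '%{http_code}' "$url" 2>/dev/null) || code=000
    case "$code" in
      200|302)
        echo "ready: HTTP $code after $(( $(date +%s) - start ))s"
        return 0 ;;
    esac
    if [ $(( $(date +%s) - start )) -ge "$timeout" ]; then
      echo "timeout after ${timeout}s: last HTTP status $code"
      return 1
    fi
    sleep 5
  done
}

# Example: smoke "http://localhost:8080/login" 120
```

The loop treats any connection failure as HTTP status `000`, so a controller that is still starting and a controller that is absent both surface as retries until the timeout.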
### 5. Post-actions
Environment-dependent:
- Persistent control node: leave controller running and capture current image/tag metadata.
- Ephemeral runners: optional teardown (stack down, image cleanup) when configured.
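The environment split above can be expressed as a guarded teardown step. The `TEARDOWN` flag and function name are hypothetical; the behaviour (persistent nodes keep the controller running, ephemeral runners optionally tear down) is from this section.

```shell
#!/bin/sh
# Conditional teardown sketch. TEARDOWN is a hypothetical pipeline
# flag; when unset or false, the controller is left running.
maybe_teardown() {
  if [ "${TEARDOWN:-false}" != "true" ]; then
    echo "persistent node: leaving controller running"
    return 0
  fi
  # Ephemeral runner path: stop the stack; named volumes are kept
  # deliberately so JENKINS_HOME survives an accidental teardown.
  docker compose -f docker-compose.yml down --remove-orphans
}
```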
## Inputs and dependencies

### Code and roles

- `hybridops.app.jenkins_controller` (build and deploy)
- `hybridops.common.docker_engine` (baseline prerequisite when used)
### Required runtime inputs

- `JENKINS_ADMIN_PASSWORD` (required)
- Additional credentials are optional and injected via the platform secrets strategy.
### Execution requirements
- Ansible-capable runner with SSH access to the control node.
- Docker Engine and Docker Compose v2 on the control node.
## Outputs

Generated outputs may contain credentials; treat them as sensitive and never commit them. Proof artefacts should be redacted by default; logs remain the primary diagnostic record.
- Logs: `output/logs/ci/jenkins-controller/...`
- Evidence artefacts: `output/artifacts/ci/jenkins-controller/...`
Typical artefacts:
- Pre-check summaries (Docker versions, host metadata)
- Build metadata (image tag, plugin manifest hash)
- Deploy state snapshot (container ID, compose converge result)
- Smoke test results (HTTP status, readiness timing)
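A sketch of how a smoke-test evidence record might be written under the artefact tree named above; the file name and JSON field names are assumptions, not the pipeline's actual schema.

```shell
#!/bin/sh
# Hypothetical evidence writer: records one stage result as a small
# JSON file under the artefact path used in this document.
write_evidence() {
  stage="$1"; http_status="$2"; elapsed_s="$3"
  dir="output/artifacts/ci/jenkins-controller"
  mkdir -p "$dir"
  printf '{"stage":"%s","http_status":"%s","elapsed_s":%s}\n' \
    "$stage" "$http_status" "$elapsed_s" > "$dir/${stage}.json"
}

# Example: write_evidence smoke 200 12
```

Keeping evidence as plain JSON files makes redaction a simple filter step before artefacts leave the runner.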
## Failure modes and recovery

- Docker/Compose unavailable:
    - Run baseline Docker role and re-run the pipeline.
- `JENKINS_HOME` permission or ownership issues:
    - Fix UID/GID alignment and path ownership, then redeploy.
- First build latency:
    - Increase build and readiness timeouts for first-run environments.
- Restart loop:
    - Inspect container logs, confirm volume permissions, and validate JCasC rendering.
## References
- ADRs:
- ADR-0603 – Run Jenkins controller on control node, agents on RKE2
- ADR-0608 – Docker Engine baseline
- ADR-0202 – Adopt RKE2 as primary runtime for platform and applications
- Runbooks:
- Runbook – Jenkins controller and agents
- Evidence:
    - `output/artifacts/ci/jenkins-controller/`
    - `output/logs/ci/jenkins-controller/`
Maintainer: HybridOps.Studio

License: MIT-0 for code, CC-BY-4.0 for documentation unless otherwise stated.