PostgreSQL Runs in LXC (State on Host-Mounted Storage; Backups First-Class)

Status

Superseded — Early pattern where PostgreSQL runs in an LXC container with host-mounted storage.
Retained for lab and teaching scenarios only. The production-standard pattern is now recorded in ADR-0501 – PostgreSQL Runs on Dedicated VM with Host-Managed Storage and DR Replication.


1. Context

During initial design, PostgreSQL was deployed directly on the ctrl-01 VM.
While functional, this tightly coupled storage and compute layers, complicating DR and snapshot testing.

Running PostgreSQL in a dedicated LXC container offered several advantages:

  • Lightweight isolation without full VM overhead.
  • Controlled resource boundaries (CPU, RAM, IO).
  • Quick restart and replication capabilities.
  • Clean separation from Jenkins, Terraform, or RKE2 workloads.

However:

  • Container storage alone is not resilient.
  • Kernel and cgroup behaviour inside LXCs can diverge from full VMs.
  • Some enterprise assessors are less familiar with LXC as the home for a central shared database.

Therefore the container’s /var/lib/postgresql directory is mounted from a host-managed dataset or filesystem (for example ZFS or ext4), so that snapshots, rsync jobs, and WAL-G backups are consistent and host-driven.
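As a sketch, the host-side bind mount can be wired up with Proxmox’s pct tool. The container ID (201) and dataset names below are illustrative, not the actual lab values:

```shell
# Create the host dataset first (ZFS example; pool/dataset names illustrative)
zfs create -o mountpoint=/srv/db01-data rpool/db01-data

# Bind the host path into the container as mount point 0
# (201 = example container ID for db-01; adjust to the real CTID)
pct set 201 -mp0 /srv/db01-data,mp=/var/lib/postgresql

# Restart the container so the mount takes effect
pct stop 201 && pct start 201
```

Because the mount is defined on the host, Proxmox snapshot and backup tooling sees the data directory without any in-container cooperation.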

This ADR now documents the prototype and lab pattern; ADR-0501 documents the production-standard VM pattern.


2. Decision (historical)

Original decision (now superseded):

  • Run PostgreSQL in a dedicated LXC container (db-01).
  • Mount persistent storage from the Proxmox host.
  • Manage backup and promotion pipelines externally via Jenkins and DR tooling.

Design Summary (historical)

  • Container type: Unprivileged LXC (db-01) with host volume bind mount.
  • Storage: /srv/db01-data (for example a ZFS dataset) bound to /var/lib/postgresql/14/main.
  • Networking: Static IP via vmbr6, DNS entry db01.lab.local.
  • Backups: WAL-G to cloud object storage (Azure Blob Storage or Google Cloud Storage).
  • Promotion: DR read-replica in cloud, promoted manually or by decision service.
  • Monitoring: postgres_exporter integrated into Prometheus.

In the current blueprint, this pattern is used only for labs and Academy demonstrations, not as the primary production database tier.
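For lab use, the WAL-G wiring from the design summary might look like the following. The storage prefix, credentials, and PostgreSQL version are placeholders; real deployments also need provider credentials (for example AZURE_STORAGE_ACCOUNT) exported alongside:

```shell
# Example WAL-G environment (values illustrative)
export WALG_AZ_PREFIX="azure://db01-backups/main"   # or WALG_GS_PREFIX for GCS
export PGDATA=/var/lib/postgresql/14/main

# postgresql.conf entries to ship WAL segments via WAL-G:
#   archive_mode = on
#   archive_command = 'wal-g wal-push %p'

# Take a base backup, typically driven from a Jenkins job
wal-g backup-push "$PGDATA"
```

The DR read-replica is then seeded from the same object store with `wal-g backup-fetch`, which is what keeps the promotion story identical between this lab pattern and the VM pattern in ADR-0501.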


3. Rationale (historical)

Reasons this pattern was initially attractive:

  • Reduced VM footprint compared to a dedicated full VM for PostgreSQL.
  • Easy to snapshot and back up host-mounted storage using Proxmox tooling.
  • Clean conceptual separation of PostgreSQL from other control-plane processes on ctrl-01.

Reasons it was later superseded:

  • For the central shared database tier, enterprise expectations and tooling are strongly oriented to full VMs.
  • Slightly reduced isolation and more nuanced operational behaviours inside LXC make it less ideal as the reference “production-like” pattern.
  • ADR-0501 replaces this pattern with a VM-based design that remains compatible with the same DR and cloud replication story.

4. Consequences

4.1 Positive consequences (for lab use)

  • Lightweight environment for PostgreSQL suitable for Academy lab scenarios.
  • Demonstrates the trade-offs between LXCs and full VMs in a controlled setting.
  • Reuses host-backed storage and backup tooling consistent with VM-based designs.

4.2 Negative consequences / risks (why it is no longer the primary pattern)

  • Reduced isolation compared to a full VM, which is less attractive for a central production database.
  • Requires careful UID/GID alignment between host and container.
  • Kernel, cgroup and security model differences can surprise teams expecting “full VM” semantics.
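The UID/GID alignment risk above is concrete: in an unprivileged LXC, container IDs are shifted by the host’s subuid/subgid offset (100000 by default on Proxmox), so the bind-mounted dataset must be owned by the shifted IDs. The in-container IDs below are illustrative:

```shell
# Suppose the postgres user inside the container is UID 104, GID 110 (example
# Debian values). With the default 100000 offset, the host must own the
# dataset as 100000 + 104 / 100000 + 110:
chown -R 100104:100110 /srv/db01-data
```

Getting this wrong produces permission errors that look like PostgreSQL bugs but are purely an ID-mapping artefact, which is one reason the full-VM pattern in ADR-0501 is simpler to operate.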

5. Alternatives considered

  • PostgreSQL directly on ctrl-01 VM
      • Rejected as it couples the database lifecycle and the control-node lifecycle too tightly and makes DR drills harder to reason about.

  • PostgreSQL in K8s (RKE2) with in-cluster storage
      • Rejected for the primary shared database tier; state should live outside Kubernetes as per ADR-0202 / ADR-0204 so that clusters remain largely stateless and replaceable.

  • PostgreSQL on dedicated VM (current choice)
      • Adopted and documented in ADR-0501; provides clearer isolation, portability, and alignment with enterprise expectations.
      • Acts as the shared state tier across on-prem and cloud clusters, enabling realistic stateful DR scenarios (read-replica promotion, failover and failback) while keeping both the hypervisor and Kubernetes layers largely stateless.

6. Implementation notes (lab use)

In current HybridOps.Studio:

  • This LXC pattern may still be used for lab-only PostgreSQL instances.
  • Provisioning and configuration follow the same high-level pattern:
      • Terraform + Ansible for LXC lifecycle and configuration.
      • Host-mounted storage for /var/lib/postgresql.
      • WAL-G integration for backup where appropriate.
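Under those assumptions, a lab provisioning run might chain the tools as follows. The working directory, inventory, and playbook names are hypothetical:

```shell
# Create or update the LXC definition (Terraform code for the container
# lives elsewhere; directory name illustrative)
terraform -chdir=infra/lxc-db01 apply -auto-approve

# Configure PostgreSQL, the bind-mounted data directory, and WAL-G
# inside the container (inventory and playbook names illustrative)
ansible-playbook -i inventory/lab.ini playbooks/postgres-lxc.yml
```

Keeping the Terraform/Ansible split identical to the VM pattern means lab exercises built on this ADR transfer directly to the ADR-0501 workflow.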

Production-like PostgreSQL deployments should follow ADR-0501 instead.


7. Operational impact and validation

Operationally:

  • This pattern is not used for the primary shared PostgreSQL tier.
  • Runbooks and evidence for the production pattern reference ADR-0501 instead.
  • LXC-based instances may appear in Academy material as a comparative pattern.

8. References


Maintainer: HybridOps.Studio
License: MIT-0 for code, CC-BY-4.0 for documentation unless otherwise stated.