NCC primary hub with routed Azure spoke connectivity

Decision

HybridOps.Studio uses Google Network Connectivity Center (NCC) as the primary hybrid networking hub.

  • On‑prem → GCP attaches using route‑based IPsec (VTI) + BGP to GCP HA VPN + Cloud Router.
  • Azure → GCP attaches as a routed spoke using Azure VPN Gateway (BGP) to GCP Cloud Router.
  • NCC provides the hub-and-spoke fabric for:
      • steady‑state operations,
      • controlled DR/burst steering,
      • consistent route governance and evidence capture.

Multi‑WAN on‑prem is treated as an additive resilience layer: it introduces additional tunnels/peers but does not change the routing pattern or contract.

Context

HybridOps.Studio requires an enterprise‑style hybrid backbone that supports:

  • repeatable deployment via IaC (Terraform/Terragrunt),
  • explicit routing boundaries (allowed prefixes, no leaks),
  • auditable change and recovery drills,
  • future growth (additional on‑prem sites, multiple clouds, multiple edges).

A single, central hub is preferred to reduce operational complexity while still enabling DR/burst and policy experiments.

Considered options

  1. NCC as primary hub; Azure as routed spoke (chosen)
  2. Azure vWAN / hub VNet as primary hub; NCC as secondary federation
  3. Full mesh (on‑prem ↔ Azure ↔ GCP) with no central hub
  4. Overlay mesh (for example WireGuard) as a neutral backbone

Why this decision

  • NCC provides a clean hub-and-spoke control plane with strong operational visibility in GCP.
  • Cloud Router BGP policy offers a clear place for prefix filtering, safety controls, and governance.
  • Azure connects cleanly as a routed spoke while keeping the hub-and-spoke baseline stable.
  • DR/burst steering is easier to demonstrate when the hub is explicit and policies are centralized.

Implementation summary

GCP hub

  • Hub VPC for shared services and routing boundaries
  • HA VPN + Cloud Router (BGP) for on‑prem and Azure attachments
  • NCC hub for spoke attachment and route propagation
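As a minimal Terraform sketch of the hub pieces above (resource names, region, ASN, and the referenced tunnel are illustrative placeholders, not HybridOps.Studio's actual values):

```hcl
# Hypothetical sketch of the GCP hub: hub VPC, HA VPN + Cloud Router,
# and an NCC hub with a VPN spoke. All names/values are placeholders.

resource "google_compute_network" "hub" {
  name                    = "hub-vpc"
  auto_create_subnetworks = false
}

resource "google_compute_ha_vpn_gateway" "hub" {
  name    = "hub-ha-vpn"
  region  = "europe-west1"
  network = google_compute_network.hub.id
}

resource "google_compute_router" "hub" {
  name    = "hub-cloud-router"
  region  = "europe-west1"
  network = google_compute_network.hub.id

  bgp {
    asn = 64514 # GCP-side private ASN (placeholder)
  }
}

resource "google_network_connectivity_hub" "ncc" {
  name = "hybrid-hub"
}

# Attach HA VPN tunnels (defined elsewhere) as an NCC spoke.
resource "google_network_connectivity_spoke" "onprem" {
  name     = "onprem-spoke"
  location = "europe-west1"
  hub      = google_network_connectivity_hub.ncc.id

  linked_vpn_tunnels {
    uris                       = [google_compute_vpn_tunnel.onprem0.self_link]
    site_to_site_data_transfer = true
  }
}
```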

On‑prem attachments

  • Route‑based IPsec (VTI) tunnels to HA VPN
  • BGP sessions over VTIs with explicit prefix lists
  • Linux edge implementation: ADR-0115
  • Optional multi‑WAN adds additional tunnel pairs with the same contract
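On the Linux edge, the BGP-over-VTI contract with explicit prefix lists might look like the following FRR fragment (ASNs, peer addresses, and prefixes are illustrative placeholders; the actual edge implementation is covered in ADR-0115):

```
! Hypothetical FRR sketch of one BGP session over a VTI.
! ASNs, addresses and prefixes are placeholders.
router bgp 64520
 neighbor 169.254.10.1 remote-as 64514
 neighbor 169.254.10.1 update-source vti0
 !
 address-family ipv4 unicast
  network 10.10.0.0/16
  neighbor 169.254.10.1 prefix-list ONPREM-OUT out
  neighbor 169.254.10.1 prefix-list GCP-IN in
 exit-address-family
!
! Advertise only on-prem ranges; accept only expected GCP ranges.
ip prefix-list ONPREM-OUT seq 5 permit 10.10.0.0/16
ip prefix-list ONPREM-OUT seq 100 deny 0.0.0.0/0 le 32
ip prefix-list GCP-IN seq 5 permit 10.20.0.0/16 le 24
ip prefix-list GCP-IN seq 100 deny 0.0.0.0/0 le 32
```

An optional multi‑WAN tunnel pair would repeat the same neighbor stanza and prefix lists for a second VTI, keeping the contract identical.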

Azure attachment

  • Azure VPN Gateway with BGP peering to GCP Cloud Router
  • Route filters and safety controls on both sides
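On the GCP side, per-peer prefix control can be expressed with Cloud Router custom advertisements; a hedged Terraform sketch (the `google_compute_router_peer` attributes are real provider fields, but the names, addresses, and ranges are placeholders):

```hcl
# Hypothetical sketch of the Cloud Router BGP peer toward Azure with
# custom route advertisements; all values are placeholders.
resource "google_compute_router_peer" "azure" {
  name            = "azure-peer"
  router          = "hub-cloud-router"
  region          = "europe-west1"
  interface       = "azure-ifc"
  peer_ip_address = "169.254.21.2"
  peer_asn        = 65515 # default ASN of an Azure VPN Gateway

  advertise_mode    = "CUSTOM"
  advertised_groups = [] # do not re-advertise all subnets implicitly

  advertised_ip_ranges {
    range = "10.20.0.0/16" # only the ranges Azure should learn
  }
}
```

The Azure side would apply the mirror-image controls on its VPN Gateway / route tables so that neither side becomes an unintended transit path.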

Consequences

Positive

  • Centralized route governance and visibility at the hub
  • Stable pattern as the platform grows (additional sites/tunnels are additive)
  • Clear evidence story for drills (route withdraw/restore, tunnel failover, convergence time)

Negative

  • Hub availability and policy correctness become critical operational dependencies
  • Requires disciplined prefix control to avoid unintended transitive reachability

Neutral

  • A dual-hub design (Azure vWAN + NCC) remains possible later, but is not required to achieve DR/burst goals.

References