
Access GCP Linux VMs via IAP SSH Tunnel

  • Purpose: Reach private GCP Linux VMs through Google Identity-Aware Proxy without a public IP or open SSH port.

  • Owner: Platform operations

  • Trigger: Shell access to any HybridOps-provisioned GCP Linux VM (EVE-NG host, ops runner, or standalone lab VM)

  • Impact: Read-only verification or interactive access to the target VM

  • Severity: P3

  • Pre-reqs: gcloud SDK installed, target VM provisioned, IAP API enabled, operator has roles/iap.tunnelResourceAccessor.

Access model

All GCP Linux VMs provisioned via platform/gcp/platform-vm are private by default: no public IP. Inbound SSH is routed through GCP Identity-Aware Proxy.

The VM must carry the allow-iap-ssh network tag. Traffic from the IAP source range (35.235.240.0/20) on TCP 22 must be permitted by a firewall rule on the VPC. Two paths exist:

Path                                 When to use
org/gcp/wan-hub-network applied      Full platform stack: the firewall rule is created automatically for the VPC
platform/gcp/vm-firewall-rules       Standalone VM (default VPC) without shared network infrastructure

Verify which path is in effect before troubleshooting access failures.
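If neither path has created the rule yet, the standalone rule can be sketched as follows. This mirrors what the IAP path requires rather than reproducing platform/gcp/vm-firewall-rules exactly; the rule name and the default network are assumptions, so adjust both to your environment:

```shell
# Hypothetical standalone rule: allow IAP's source range to reach SSH
# on VMs tagged allow-iap-ssh (rule name and network are assumptions).
gcloud compute firewall-rules create allow-iap-ssh \
  --project <PROJECT_ID> \
  --network default \
  --direction INGRESS \
  --action ALLOW \
  --rules tcp:22 \
  --source-ranges 35.235.240.0/20 \
  --target-tags allow-iap-ssh
```

Prefer the module-managed paths above where they apply, so the rule stays under configuration management.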


Prerequisites

1. gcloud SDK

gcloud version

Install from cloud.google.com/sdk if absent.

2. IAP API enabled

gcloud services enable iap.googleapis.com --project <PROJECT_ID>

This is a one-time project-level step.
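To check whether the API is already enabled before (re-)enabling it, a read-only query like this should suffice:

```shell
# Lists iap.googleapis.com only if it is already enabled on the project
gcloud services list --enabled \
  --project <PROJECT_ID> \
  --filter="config.name:iap.googleapis.com"
```

Empty output means the API still needs enabling.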

3. IAM: tunnelResourceAccessor

The operator identity needs roles/iap.tunnelResourceAccessor on the project or the specific VM instance:

# Project-wide (common for platform operators)
gcloud projects add-iam-policy-binding <PROJECT_ID> \
  --member="user:<your-email>" \
  --role="roles/iap.tunnelResourceAccessor"

# Instance-scoped (tighter)
gcloud compute instances add-iam-policy-binding <VM_NAME> \
  --project <PROJECT_ID> \
  --zone <ZONE> \
  --member="user:<your-email>" \
  --role="roles/iap.tunnelResourceAccessor"

4. VM tag confirmed

gcloud compute instances describe <VM_NAME> \
  --project <PROJECT_ID> \
  --zone <ZONE> \
  --format="get(tags.items)"

Expected output includes allow-iap-ssh.

5. Firewall rule present

gcloud compute firewall-rules list \
  --project <PROJECT_ID> \
  --filter="name~iap-ssh" \
  --format="table(name,network,direction,sourceRanges,targetTags)"

Expected: a rule with source range 35.235.240.0/20, protocol TCP 22, targeting allow-iap-ssh.


SSH access

Basic shell:

gcloud compute ssh <VM_NAME> \
  --project <PROJECT_ID> \
  --zone <ZONE> \
  --tunnel-through-iap

With explicit user (default user in HybridOps blueprints is opsadmin):

gcloud compute ssh opsadmin@<VM_NAME> \
  --project <PROJECT_ID> \
  --zone <ZONE> \
  --tunnel-through-iap

With a specific key file:

gcloud compute ssh opsadmin@<VM_NAME> \
  --project <PROJECT_ID> \
  --zone <ZONE> \
  --tunnel-through-iap \
  --ssh-key-file ~/.ssh/id_ed25519

Port forwarding

Forward a remote port to localhost. This is useful for web UIs (EVE-NG, dashboards) or for tools that connect over TCP rather than SSH:

gcloud compute ssh <VM_NAME> \
  --project <PROJECT_ID> \
  --zone <ZONE> \
  --tunnel-through-iap \
  -- -L <LOCAL_PORT>:localhost:<REMOTE_PORT> -N

Example: EVE-NG web UI on port 80 accessible at http://localhost:8080:

gcloud compute ssh opsadmin@eve-ng-01 \
  --project my-project \
  --zone europe-west2-b \
  --tunnel-through-iap \
  -- -L 8080:localhost:80 -N
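With the tunnel from the command above still running, the forward can be sanity-checked from a second terminal; a minimal probe might look like:

```shell
# Prints only the HTTP status code of the forwarded service
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8080/
# A 200 (or a redirect such as 302) indicates the forward is working
```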

File transfer

Copy a file from the VM:

gcloud compute scp \
  --tunnel-through-iap \
  --project <PROJECT_ID> \
  --zone <ZONE> \
  <VM_NAME>:/remote/path ./local-dir/

Copy a file to the VM:

gcloud compute scp \
  --tunnel-through-iap \
  --project <PROJECT_ID> \
  --zone <ZONE> \
  ./local-file <VM_NAME>:/remote/path/

Ansible over IAP

HybridOps Ansible modules use ssh_access_mode: gcp-iap in blueprint inputs to route through IAP automatically. For manual Ansible runs, use an SSH ProxyCommand:

ansible all -i "<VM_IP>," \
  -u opsadmin \
  --private-key ~/.ssh/id_ed25519 \
  --ssh-extra-args="-o ProxyCommand='gcloud compute start-iap-tunnel <VM_NAME> 22 --listen-on-stdin --project <PROJECT_ID> --zone <ZONE>'" \
  -m ping
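For tools that speak SSH but cannot invoke gcloud themselves, an alternative is to hold a tunnel open on a fixed local port in one terminal and point a plain SSH client at it from another. The local port 2222 here is an arbitrary choice:

```shell
# Terminal 1: keep an IAP tunnel open, listening on localhost:2222
gcloud compute start-iap-tunnel <VM_NAME> 22 \
  --project <PROJECT_ID> \
  --zone <ZONE> \
  --local-host-port=localhost:2222

# Terminal 2: any SSH client can now reach the VM via the local port
ssh -p 2222 -i ~/.ssh/id_ed25519 opsadmin@localhost
```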

Verification

Confirm the VM is reachable via IAP before running blueprints or Ansible:

gcloud compute ssh opsadmin@<VM_NAME> \
  --project <PROJECT_ID> \
  --zone <ZONE> \
  --tunnel-through-iap \
  --command "hostname && uptime"

Troubleshooting

Error 4033: Access denied

Cause: operator identity lacks roles/iap.tunnelResourceAccessor.

Fix: grant the role as shown in Prerequisites §3.

Connection refused or tunnel timeout

Causes (check in order):

  1. VM does not have the allow-iap-ssh tag: check with gcloud compute instances describe
  2. Firewall rule missing: check with gcloud compute firewall-rules list; if using wan-hub-network, confirm its state is ok; if using vm-firewall-rules, confirm the module has been applied
  3. SSH service not yet running: the VM may still be booting; wait 60–90 seconds and retry

Unable to connect to the service: [Errno 111] Connection refused

Cause: IAP API not enabled on the project.

Fix:

gcloud services enable iap.googleapis.com --project <PROJECT_ID>

Permission denied (publickey)

Cause: SSH key mismatch. The key on the VM was set during hyops init gcp (ssh_keys_from_init: true). The key presented by gcloud must match.

Fix: confirm which key was used at init time:

jq -r ".context.ssh_public_key" ~/.hybridops/envs/dev/meta/gcp.ready.json

Then ensure your local ~/.ssh/ has the matching private key, or pass --ssh-key-file explicitly.
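In practice the check is to compare the jq output above against ssh-keygen -y -f ~/.ssh/id_ed25519, which derives the public key from a private key. The idea is illustrated below with a throwaway key so the snippet is self-contained; substitute your real key path and the recorded public key:

```shell
# Demonstrate the key-match check with a throwaway keypair
tmpdir=$(mktemp -d)
ssh-keygen -t ed25519 -N '' -f "$tmpdir/id_ed25519" -q

# Derive the public key from the private key (type + base64 blob only)
derived=$(ssh-keygen -y -f "$tmpdir/id_ed25519" | cut -d' ' -f1-2)

# The recorded public key (in the runbook: the jq output from gcp.ready.json)
recorded=$(cut -d' ' -f1-2 "$tmpdir/id_ed25519.pub")

[ "$derived" = "$recorded" ] && echo MATCH || echo MISMATCH
rm -rf "$tmpdir"
```

A MISMATCH against the recorded key means gcloud is presenting the wrong private key, and --ssh-key-file should be used to point at the right one.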


References