Setting up GCP Infrastructure

Dependencies


Terraform

Create a GCP project and provision the infrastructure using the instructions in the shared terraform repo.


If you already have a Terraform environment set up for your account, add the run-anywhere-terraform module to your configuration and follow the steps listed in its README, skipping the steps involving state creation. 

If you are new to Terraform, follow the run-anywhere-terraform module’s README in its entirety: it will instruct you how to set up a Terraform state in your GCP account, instantiate the module, and run the subsequent Terraform apply.


Alternatively, you may use the Terraform module as documentation and deploy the same set of resources following the policies required by your organization. While the Terraform module exposes most configuration to the user, it is by no means “one size fits all”: some organizations may need to tweak configuration details beyond what the module allows.


DNS

(Not required if you manage your Domain with Google’s solution.)


Refer to the GCP Console, Network Services ⇒ Cloud DNS ⇒ default zone to retrieve the list of nameservers for your GCP DNS configuration.


In your base domain DNS provider's records, add an NS record to delegate the resolution of the subdomain to the Cloud DNS nameservers.


For example, an NS record for smallstep.basedomain.company.com may contain the following:

ns-cloud-c1.googledomains.com.
ns-cloud-c2.googledomains.com.
ns-cloud-c3.googledomains.com.
ns-cloud-c4.googledomains.com.
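Once the NS record is in place, you can check the delegation from your workstation. The subdomain below is a placeholder from the example above; substitute your own:

```shell
# Query public DNS for the delegated subdomain's nameservers.
# smallstep.basedomain.company.com is a placeholder; use your real subdomain.
dig NS smallstep.basedomain.company.com +short
```

Once propagation completes, the output should list the Cloud DNS nameservers shown in your default zone.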

Platform

GCP kubectl context

To interact with the cluster, you'll need to configure a local kubectl context. The gcloud CLI can do this for you. In the GCP console, visit Kubernetes Engine ⇒ Clusters, then click Connect in the menu for your primary cluster.
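The Connect dialog produces a gcloud command along these lines; the cluster name, region, and project ID below are placeholders for your own values:

```shell
# Fetch cluster credentials and write a kubectl context entry.
# <cluster-name>, <region>, and <project-id> are placeholders.
gcloud container clusters get-credentials <cluster-name> \
  --region <region> \
  --project <project-id>

# Confirm the new context is active.
kubectl config current-context
```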



Linkerd

Smallstep services currently use Linkerd for some internal load balancing needs. Install it manually with a long-lived certificate: Linkerd's default certificate has a lifetime of 1 year, and we don't want our CA to become useless after a year. (We may eventually remove the dependency on Linkerd.)


First, create a root CA cert and key:

step certificate create root.linkerd.cluster.local ca.crt ca.key \
  --profile root-ca --no-password --insecure --not-after=87600h

Use the CA to issue an identity certificate for Linkerd:

step certificate create identity.linkerd.cluster.local issuer.crt issuer.key \
  --profile intermediate-ca --not-after 87600h --no-password --insecure \
  --ca ca.crt --ca-key ca.key
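If you'd like to sanity-check the issuer certificate before installing, step can print its details:

```shell
# Inspect the intermediate's subject, issuer, and validity window.
step certificate inspect issuer.crt --short
```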

Install Linkerd, providing the files from the previous commands:

kubectl config use-context <your context>
linkerd install \
  --identity-trust-anchors-file ca.crt \
  --identity-issuer-certificate-file issuer.crt \
  --identity-issuer-key-file issuer.key \
  | kubectl apply -f -
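After the apply completes, you can verify that the control plane came up healthy using Linkerd's built-in checks:

```shell
# Runs Linkerd's post-install health checks against the current context.
linkerd check
```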

Shred the key material:

shred -uv ca.key issuer.key

If you do not have the shred command and don’t wish to install it, it is also okay to rm the files instead.

Secrets

This project uses Kubernetes secrets internal to the GKE cluster to manage the passwords for Run Anywhere. You don’t have to do anything to set these up, as Terraform has already done so for you.
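If you'd like to confirm the secrets exist, you can list them with kubectl. The namespace name below is an assumption; use whichever namespace your Terraform configuration created:

```shell
# List the secrets Terraform created; "smallstep" namespace is a placeholder.
kubectl get secrets --namespace smallstep
```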


Next Steps

Now that your cloud infrastructure is in place, the K8s cluster is running and in a good state, DNS has propagated, and the default Linkerd certificates have been replaced with long-lived ones:


Continue to SSH Professional Setup

or

Continue to Certificate Manager Setup