Setting up AWS Infrastructure



All steps involving the setup of your AWS infrastructure use the AWS CLI at some point. Make sure you are logged in on the command line and that your CLI configuration points to the correct account.
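To confirm which account the CLI is targeting before you begin, you can print the active caller identity (the profile name below is a hypothetical example):

```shell
# Print the account ID and IAM identity the CLI is currently using.
aws sts get-caller-identity

# If you use named profiles, point subsequent commands at the right one.
export AWS_PROFILE=run-anywhere-admin   # hypothetical profile name
```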


If you already have a Terraform environment set up for your account, add the run-anywhere-terraform module to your configuration and follow the steps listed in its README, skipping the state creation steps. If you are new to Terraform, follow the run-anywhere-terraform module's README in its entirety: it will walk you through setting up a Terraform state in your AWS account, instantiating the module, and running the subsequent terraform apply.
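Once the module is referenced in your configuration, the standard Terraform workflow applies (follow the module's README for the exact source and variable values):

```shell
terraform init    # download providers and the run-anywhere-terraform module
terraform plan    # review the resources that will be created
terraform apply   # create the infrastructure
```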

Alternatively, you may use the Terraform module as documentation and deploy the same set of resources following the policies required by your organization. While the module lets you specify most configuration options, it is by no means “one size fits all”: some organization-specific configuration details may still need to be adjusted beyond what the module offers.

In short, the module will instantiate and configure:

  • 1 PostgreSQL Aurora cluster
    • Hosting 10 databases (set up by Lambda)
  • 1 Elasticache Redis instance
  • 1 EKS cluster and Node Pool
  • 1 KMS Key for the project resources
  • 7-8 SecretsManager Secrets
  • 8 Alias Route53 records
  • 1 NLB (stood up by the cluster)
  • 1 Target Group (stood up by the cluster)
    • Listeners/rules to route to services internal to the cluster
  • All associated IAM permissions, SG rules, and tags to allow the project to run with the minimum level of permissions required


DNS delegation

(Not required if you manage your domain with Route53.)

Refer to the AWS Console, Route53 ⇒ Your Hosted Zone to retrieve the list of nameservers for your AWS DNS configuration.
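If you prefer the CLI to the console, the nameservers can also be read from the hosted zone directly (the zone ID below is a placeholder):

```shell
# List hosted zones to find your zone ID.
aws route53 list-hosted-zones --query 'HostedZones[].{Name:Name,Id:Id}'

# Print the nameservers for the zone (replace the placeholder ID).
aws route53 get-hosted-zone --id Z0000000000000 \
  --query 'DelegationSet.NameServers'
```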

In your base domain's DNS provider, add an NS record delegating resolution of the subdomain to the AWS nameservers retrieved above.

For example, an NS record for your subdomain would list the nameserver values retrieved from your hosted zone.
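After adding the record, you can verify that the delegation has taken effect (the subdomain below is a placeholder):

```shell
# Should return the AWS nameservers once the NS record has propagated.
dig NS run-anywhere.example.com +short
```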


AWS kubectl context

To interact with the EKS cluster, you'll need to configure a local kubectl context. The aws CLI can do this for you, and AWS provides a comprehensive guide on how to gain kubectl access to the cluster.
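In short, the aws CLI can generate the context for you (the region and cluster name below are placeholders — use the values from your Terraform configuration):

```shell
# Add or refresh a kubectl context for the EKS cluster.
aws eks update-kubeconfig --region us-east-1 --name run-anywhere-cluster

# Confirm the context works.
kubectl get nodes
```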

Your AWS profile will initially be the only IAM entity with access to the cluster, and it has the highest level of access by default. To add more users and map their permissions internal to the EKS cluster, follow the AWS guide on updating the aws-auth ConfigMap with your desired settings.
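For reference, the ConfigMap in question lives in the kube-system namespace and can be edited in place:

```shell
# Open the ConfigMap that maps IAM identities to Kubernetes groups.
kubectl edit -n kube-system configmap/aws-auth

# Afterwards, verify the mappings took effect.
kubectl describe -n kube-system configmap/aws-auth
```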


Linkerd

Smallstep services currently use Linkerd for some internal load balancing needs. Install it manually with a long-lived certificate: Linkerd's default certificate has a lifetime of only 1 year, and we don't want our CA to become useless after a year. We may eventually remove the dependency on Linkerd.

First, create a root CA cert and key:

step certificate create root.linkerd.cluster.local ca.crt ca.key --profile root-ca --no-password --insecure --not-after=87600h

Use the CA to issue an identity certificate for Linkerd:

step certificate create identity.linkerd.cluster.local issuer.crt issuer.key --profile intermediate-ca --not-after 87600h --no-password --insecure --ca ca.crt --ca-key ca.key

Install the new Linkerd certificate you’ve just created to the Kubernetes cluster, providing the files from the previous commands:

kubectl config use-context <your context>

linkerd install --crds --identity-trust-anchors-file ca.crt --identity-issuer-certificate-file issuer.crt --identity-issuer-key-file issuer.key | kubectl apply -f -
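After the install, you can verify the control plane and the certificate chain you provided:

```shell
# Waits for the control plane to come up and validates the installation,
# including the trust anchor and issuer certificate supplied above.
linkerd check
```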

Shred the key material:

shred -uv ca.key issuer.key

If you do not have the shred command and don't wish to install it, it is also okay to rm the files instead.


This project uses Kubernetes secrets internal to the EKS cluster to manage the passwords for Run Anywhere. You don’t have to do anything to set these up, as Terraform has already done so for you.
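If you'd like to confirm the secrets exist, you can list them in the cluster (the namespace below is an assumption — use the one your Terraform run created):

```shell
# List the Run Anywhere secrets managed inside the cluster.
kubectl get secrets -n smallstep   # hypothetical namespace
```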

Next Steps

Now that your cloud infrastructure is in place, the K8s cluster is running and in a good state, DNS has propagated, and the Linkerd key material has been shredded:

Continue to SSH Professional Setup


Continue to Certificate Manager Setup