EKS#

EKS was the first environment built. Many off-the-shelf solutions were considered and tested; in the end, EKS proved the best compromise, as it is native to our preferred cloud environment.

This decision eventually helped inform our on-prem EKS-Anywhere purchase, allowing us to keep even more aspects of the clusters identical.

Each EKS environment is contained within its own AWS account. Note that the Preproduction environment was taken offline to save money.

Pre-cluster Environment#

The AWS clusters are bootstrapped by placing a GitLab runner with adequate permissions (AWS IAM roles) in the account and letting it build the rest of the environment.
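
As a rough illustration, the runner's role could be provisioned along these lines. This is a hypothetical sketch, not taken from the actual projects: the resource names, the EC2 trust relationship, and the broad AdministratorAccess attachment are all assumptions.

```hcl
# Hypothetical sketch of an IAM role for the GitLab runner.
# Assumes the runner is an EC2 instance; the real projects may
# scope permissions far more tightly than AdministratorAccess.
resource "aws_iam_role" "gitlab_runner" {
  name = "gitlab-runner" # illustrative name

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Action    = "sts:AssumeRole"
      Principal = { Service = "ec2.amazonaws.com" }
    }]
  })
}

# Broad permissions so the runner can build the rest of the environment.
resource "aws_iam_role_policy_attachment" "gitlab_runner_admin" {
  role       = aws_iam_role.gitlab_runner.name
  policy_arn = "arn:aws:iam::aws:policy/AdministratorAccess"
}

# Instance profile so the EC2 instance can assume the role.
resource "aws_iam_instance_profile" "gitlab_runner" {
  name = "gitlab-runner"
  role = aws_iam_role.gitlab_runner.name
}
```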

Manual Stages#

Shared Resources#

The first step in standing up any of the AWS environments from scratch is to provision the shared resources, which is done with Terraform run by hand.

This group of resources consists of the S3 buckets that hold the Terraform state files and the DNS configuration.
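
For orientation, pointing Terraform at such a state bucket uses the standard S3 backend block. The bucket, key, and region below are placeholders, not the real values:

```hcl
terraform {
  backend "s3" {
    bucket = "example-terraform-state" # placeholder bucket name
    key    = "shared/terraform.tfstate"
    region = "us-east-1"
  }
}
```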

Per Environment Pre-Cluster Resources#

In addition to the shared resources, each environment requires a small amount of infrastructure to set up the runner, plus some housekeeping that facilitates upcoming parts of the cluster. This is done in the infra and infra-ccs projects.

Two projects are needed because of differences in provisioning: the infra project is considered legacy, owing to changes in how accounts are now provisioned by CCS.

This Terraform is run by hand the first time; once this project has put the GitLab runner in place, a pipeline assists with further changes.
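
The initial by-hand run is the standard Terraform workflow, along these lines:

```sh
# First-time, by-hand provisioning, run from a checkout of the project.
terraform init   # configure the backend and download providers
terraform plan   # review the proposed changes
terraform apply  # create the runner and housekeeping resources
```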

Code and Runner#

Mirroring the EKS Anywhere provisioning, the EKS clusters have a separate body of code to stand up the cluster. Because EKS was the first cluster, that code currently lives in the same folder as the shared code.

Instead of running eksctl, the runner uses Terraform to create the cluster declaratively.
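
A minimal sketch of that declarative setup, using the EKS module and self-managed node group submodule linked under Documentation below. All names, versions, sizes, and network IDs are placeholders:

```hcl
module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "~> 19.0" # pin the module version; see Upgrade Concerns

  cluster_name    = "example-cluster" # placeholder
  cluster_version = "1.27"            # illustrative Kubernetes version

  vpc_id     = var.vpc_id
  subnet_ids = var.subnet_ids

  # Worker nodes via the self-managed node group submodule.
  self_managed_node_groups = {
    default = {
      instance_type = "m5.large" # placeholder
      min_size      = 1
      max_size      = 3
      desired_size  = 2
    }
  }
}
```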

It is worth noting that the AWS GitLab runner derives its post-cluster-creation credentials from Amazon, leveraging its IAM role to fetch an up-to-date kubeconfig with an aws eks command.
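
Concretely, that is the standard AWS CLI call; the cluster name and region here are placeholders:

```sh
# Writes/updates the kubeconfig using the runner's IAM role credentials.
aws eks update-kubeconfig --name example-cluster --region us-east-1
```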

Documentation:

  • General Terraform Documentation: https://developer.hashicorp.com/terraform/language
  • Upgrade Terraform Documentation: https://developer.hashicorp.com/terraform/language/v1.5.x/upgrade-guides
  • EKS Terraform Module: https://registry.terraform.io/modules/terraform-aws-modules/eks/aws/latest
  • Self Managed Node Group Module: https://registry.terraform.io/modules/terraform-aws-modules/eks/aws/latest/submodules/self-managed-node-group
  • tfswitch Project: https://github.com/warrensbox/terraform-switcher

Terraform Documentation Version

Be sure to select the documentation version that matches the Terraform version in use, via the version selector to the right of the page title.

  • Implementation:

    • Kubernetes Cluster: Main cluster implementation (https://code.vt.edu/it-common-platform/infrastructure/eks-cluster)

  • Upgrade Concerns:

    • Plugins: Although Terraform itself is generally compatible between versions, the underlying plugins (providers) often are not; they carry their own requirements, dependencies, and changes in default behavior. Pinning versions, as sketched below, keeps upgrades deliberate.
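
A generic example of such pinning; the provider and version constraints are illustrative:

```hcl
terraform {
  required_version = "~> 1.5" # Terraform 1.5 or newer within the 1.x series

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0" # allow minor/patch updates within 5.x only
    }
  }
}
```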

EKS Kubernetes Upgrade#

Unlike EKS Anywhere, there is no lifecycle-management software to worry about. Once any outstanding caveats have been resolved, incrementing the Kubernetes version takes only a config edit and a pipeline run.
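
That edit is typically a one-line bump of the module's cluster_version input. The values are illustrative; note that EKS upgrades the control plane one minor version at a time:

```hcl
module "eks" {
  # ... other settings unchanged ...
  cluster_version = "1.28" # was "1.27"; one minor version step per upgrade
}
```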

Update Concerns#

Critical Concerns#

General Concerns#

  • EKS does not have a separate versioning paradigm and simply updates with Kubernetes; see the shared code.