Setting Up an Amazon EKS Cluster on AWS
Are you looking to harness the power of Kubernetes for managing your containerized applications? Amazon EKS (Elastic Kubernetes Service) is a managed service that makes it easy to run Kubernetes on AWS without operating your own control plane, so you can deploy, manage, and scale containerized applications with less overhead. In this guide, we'll walk through the step-by-step process of creating an Amazon EKS cluster using eksctl and the AWS CLI.
Step 1: Creating the EKS Management Host
To get started, follow these instructions to create an EKS management host:
Launch a new Ubuntu VM using AWS EC2. The t2.micro instance type is sufficient for this management host.
Connect to the VM and install kubectl (the example below pins an older release; in practice, use a kubectl version within one minor version of your cluster's Kubernetes version):
$ curl -o kubectl https://amazon-eks.s3.us-west-2.amazonaws.com/1.19.6/2021-01-05/bin/linux/amd64/kubectl
$ chmod +x ./kubectl
$ sudo mv ./kubectl /usr/local/bin
$ kubectl version --short --client
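Optionally, you can verify the downloaded binary before installing it. Amazon publishes a SHA-256 checksum next to each kubectl build; this sketch assumes the checksum file at that path follows the standard `<hash>  <filename>` format that `sha256sum -c` expects:

```shell
# Download the checksum published alongside the kubectl binary
curl -o kubectl.sha256 https://amazon-eks.s3.us-west-2.amazonaws.com/1.19.6/2021-01-05/bin/linux/amd64/kubectl.sha256

# Compare it against the downloaded binary; prints "kubectl: OK" on success
sha256sum -c kubectl.sha256
```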
Install the latest version of AWS CLI:
$ sudo apt install unzip
$ cd
$ curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
$ unzip awscliv2.zip
$ sudo ./aws/install
$ aws --version
Install eksctl:
$ curl --silent --location "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp
$ sudo mv /tmp/eksctl /usr/local/bin
$ eksctl version
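With all three installs done, a quick sanity check that every tool landed on the PATH can save debugging time later. This is a small helper sketch; `check_tool` is a name invented here, not part of any AWS tooling:

```shell
#!/usr/bin/env bash
# Report whether each required CLI tool is available on the PATH.
check_tool() {
  if command -v "$1" >/dev/null 2>&1; then
    echo "$1: OK"
  else
    echo "$1: MISSING"
  fi
}

for tool in kubectl aws eksctl; do
  check_tool "$tool"
done
```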
Step 2: Creating an IAM Role and Attaching it to the EKS Management Host
Create a new IAM role (select the "EC2" use case) and grant it the following permissions:
IAM full access
VPC full access
EC2 full access
CloudFormation full access
Administrator access (note that administrator access already covers the policies above; refine these permissions per security best practices before using this setup outside of practice)
Name the role (e.g., eksroleec2).
Attach the created role to the EKS management host: in the EC2 dashboard, select the instance and modify its IAM role to use the role you just created.
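If you prefer the CLI to the console, the same attachment can be sketched with the AWS CLI. The profile name below is invented for illustration, the role name `eksroleec2` comes from this guide, and the instance ID is a placeholder you must replace; this assumes the role was already created with an EC2 trust policy:

```shell
# Wrap the role in an instance profile (EC2 attaches profiles, not roles)
aws iam create-instance-profile --instance-profile-name eksroleec2-profile
aws iam add-role-to-instance-profile \
    --instance-profile-name eksroleec2-profile \
    --role-name eksroleec2

# Attach the instance profile to the management host
# (replace i-0123456789abcdef0 with your instance ID)
aws ec2 associate-iam-instance-profile \
    --instance-id i-0123456789abcdef0 \
    --iam-instance-profile Name=eksroleec2-profile
```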
Step 3: Creating the EKS Cluster
Now, it's time to create your EKS cluster:
Using the eksctl command, create the cluster (replace placeholders with actual values):
$ eksctl create cluster --name <cluster-name> \
      --region <region-name> \
      --node-type <instance-type> \
      --nodes-min 2 \
      --nodes-max 2 \
      --zones <availability-zone(s)>
For example:
$ eksctl create cluster --name test-cluster --region us-east-1 --node-type t2.micro --nodes-min 2 --nodes-max 2 --zones us-east-1a,us-east-1b
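As the flag list grows, eksctl's config-file mode keeps the cluster definition reproducible and reviewable. A minimal sketch of the same cluster as a ClusterConfig file (field names follow the eksctl.io/v1alpha5 schema; the node group name ng-1 is an arbitrary choice):

```shell
# Write the cluster definition to a file, then create from it
cat > cluster.yaml <<'EOF'
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: test-cluster
  region: us-east-1

availabilityZones: ["us-east-1a", "us-east-1b"]

managedNodeGroups:
  - name: ng-1
    instanceType: t2.micro
    minSize: 2
    maxSize: 2
EOF

eksctl create cluster -f cluster.yaml
```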
Allow the cluster creation process to complete (the sample run below took roughly 17 minutes, so budget 15 to 20). Sample output:
ubuntu@ip-172-31-34-83:~$ eksctl create cluster --name test-cluster --region us-east-1 --node-type t2.micro --zones us-east-1a,us-east-1b
2023-08-29 07:08:19 [ℹ] eksctl version 0.154.0
2023-08-29 07:08:19 [ℹ] using region us-east-1
2023-08-29 07:08:19 [ℹ] subnets for us-east-1a - public:192.168.0.0/19 private:192.168.64.0/19
2023-08-29 07:08:19 [ℹ] subnets for us-east-1b - public:192.168.32.0/19 private:192.168.96.0/19
2023-08-29 07:08:19 [ℹ] nodegroup "ng-4cc58d71" will use "" [AmazonLinux2/1.25]
2023-08-29 07:08:19 [ℹ] using Kubernetes version 1.25
2023-08-29 07:08:19 [ℹ] creating EKS cluster "test-cluster" in "us-east-1" region with managed nodes
2023-08-29 07:08:19 [ℹ] will create 2 separate CloudFormation stacks for cluster itself and the initial managed nodegroup
2023-08-29 07:08:19 [ℹ] if you encounter any issues, check CloudFormation console or try 'eksctl utils describe-stacks --region=us-east-1 --cluster=test-cluster'
2023-08-29 07:08:19 [ℹ] Kubernetes API endpoint access will use default of {publicAccess=true, privateAccess=false} for cluster "test-cluster" in "us-east-1"
2023-08-29 07:08:19 [ℹ] CloudWatch logging will not be enabled for cluster "test-cluster" in "us-east-1"
2023-08-29 07:08:19 [ℹ] you can enable it with 'eksctl utils update-cluster-logging --enable-types={SPECIFY-YOUR-LOG-TYPES-HERE (e.g. all)} --region=us-east-1 --cluster=test-cluster'
2023-08-29 07:08:19 [ℹ]
2 sequential tasks: { create cluster control plane "test-cluster",
2 sequential sub-tasks: {
wait for control plane to become ready,
create managed nodegroup "ng-4cc58d71",
}
}
2023-08-29 07:08:19 [ℹ] building cluster stack "eksctl-test-cluster-cluster"
2023-08-29 07:08:20 [ℹ] deploying stack "eksctl-test-cluster-cluster"
2023-08-29 07:08:50 [ℹ] waiting for CloudFormation stack "eksctl-test-cluster-cluster"
2023-08-29 07:09:20 [ℹ] waiting for CloudFormation stack "eksctl-test-cluster-cluster"
2023-08-29 07:10:20 [ℹ] waiting for CloudFormation stack "eksctl-test-cluster-cluster"
2023-08-29 07:11:20 [ℹ] waiting for CloudFormation stack "eksctl-test-cluster-cluster"
2023-08-29 07:12:20 [ℹ] waiting for CloudFormation stack "eksctl-test-cluster-cluster"
2023-08-29 07:13:20 [ℹ] waiting for CloudFormation stack "eksctl-test-cluster-cluster"
2023-08-29 07:14:20 [ℹ] waiting for CloudFormation stack "eksctl-test-cluster-cluster"
2023-08-29 07:15:20 [ℹ] waiting for CloudFormation stack "eksctl-test-cluster-cluster"
2023-08-29 07:16:20 [ℹ] waiting for CloudFormation stack "eksctl-test-cluster-cluster"
2023-08-29 07:17:20 [ℹ] waiting for CloudFormation stack "eksctl-test-cluster-cluster"
2023-08-29 07:18:20 [ℹ] waiting for CloudFormation stack "eksctl-test-cluster-cluster"
2023-08-29 07:19:20 [ℹ] waiting for CloudFormation stack "eksctl-test-cluster-cluster"
2023-08-29 07:20:20 [ℹ] waiting for CloudFormation stack "eksctl-test-cluster-cluster"
2023-08-29 07:22:21 [ℹ] building managed nodegroup stack "eksctl-test-cluster-nodegroup-ng-4cc58d71"
2023-08-29 07:22:21 [ℹ] deploying stack "eksctl-test-cluster-nodegroup-ng-4cc58d71"
2023-08-29 07:22:21 [ℹ] waiting for CloudFormation stack "eksctl-test-cluster-nodegroup-ng-4cc58d71"
2023-08-29 07:22:51 [ℹ] waiting for CloudFormation stack "eksctl-test-cluster-nodegroup-ng-4cc58d71"
2023-08-29 07:23:36 [ℹ] waiting for CloudFormation stack "eksctl-test-cluster-nodegroup-ng-4cc58d71"
2023-08-29 07:24:19 [ℹ] waiting for CloudFormation stack "eksctl-test-cluster-nodegroup-ng-4cc58d71"
2023-08-29 07:24:58 [ℹ] waiting for CloudFormation stack "eksctl-test-cluster-nodegroup-ng-4cc58d71"
2023-08-29 07:24:58 [ℹ] waiting for the control plane to become ready
2023-08-29 07:24:59 [✔] saved kubeconfig as "/home/ubuntu/.kube/config"
2023-08-29 07:24:59 [ℹ] no tasks
2023-08-29 07:24:59 [✔] all EKS cluster resources for "test-cluster" have been created
2023-08-29 07:24:59 [ℹ] nodegroup "ng-4cc58d71" has 2 node(s)
2023-08-29 07:24:59 [ℹ] node "ip-192-168-19-104.ec2.internal" is ready
2023-08-29 07:24:59 [ℹ] node "ip-192-168-47-196.ec2.internal" is ready
2023-08-29 07:24:59 [ℹ] waiting for at least 2 node(s) to become ready in "ng-4cc58d71"
2023-08-29 07:24:59 [ℹ] nodegroup "ng-4cc58d71" has 2 node(s)
2023-08-29 07:24:59 [ℹ] node "ip-192-168-19-104.ec2.internal" is ready
2023-08-29 07:24:59 [ℹ] node "ip-192-168-47-196.ec2.internal" is ready
2023-08-29 07:25:01 [ℹ] kubectl command should work with "/home/ubuntu/.kube/config", try 'kubectl get nodes'
2023-08-29 07:25:01 [✔] EKS cluster "test-cluster" in "us-east-1" region is ready
Once the cluster is ready, verify the worker nodes:
$ kubectl get nodes
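eksctl writes the kubeconfig for you (as the log above shows), but from any other machine you can point kubectl at the cluster with the AWS CLI and then confirm the core add-ons came up. A sketch, assuming the same cluster name and region as the example:

```shell
# Regenerate the kubeconfig for the cluster (useful from a different host)
aws eks update-kubeconfig --region us-east-1 --name test-cluster

# Worker nodes should report STATUS "Ready"
kubectl get nodes -o wide

# Core components (CoreDNS, kube-proxy, aws-node) run in kube-system
kubectl get pods -n kube-system
```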
Step 4: Cleaning Up Resources
Remember to clean up your resources after practice to avoid unnecessary billing:
Delete the EKS cluster and associated resources:
$ eksctl delete cluster --name test-cluster --region us-east-1
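Deletion runs through CloudFormation as well, so it is worth confirming nothing was left behind (stuck stacks keep billing). A quick check, assuming the us-east-1 region used above:

```shell
# Should no longer list the deleted cluster
eksctl get cluster --region us-east-1

# Look for any leftover eksctl-* stacks that failed to delete
aws cloudformation list-stacks \
    --stack-status-filter DELETE_FAILED \
    --query "StackSummaries[?starts_with(StackName, 'eksctl-')].StackName"
```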
By following these steps, you've successfully set up an Amazon EKS cluster on AWS. This infrastructure lets you deploy and manage containerized applications efficiently, while the managed nature of EKS handles the underlying complexity so you can focus on building and scaling your applications.