# Build your first EKS Cluster

## Introduction

The official CLI to launch a cluster is `eksctl`, a tool developed by Weaveworks in conjunction with the AWS team. Its goal is to make building an `EKS cluster` much easier. Consider everything involved in building a cluster by hand: create a VPC, provision multiple subnets, set up routing in your VPC, and only then go to the console and launch the control plane. All of that infrastructure could be defined with `CloudFormation`, but it is still a lot of work.

We can also build the cluster from the `EKS Cluster console`; every choice exposed there can be made with `eksctl` as well.
## How do I check the IAM role on the workspace?

```bash
aws sts get-caller-identity
```
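If you only need the caller's ARN rather than the full JSON document, the AWS CLI's `--query` option can filter the response (the actual value printed depends on your credentials):

```shell
# print just the ARN of the current IAM identity
aws sts get-caller-identity --query Arn --output text
```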

## Create EC2 Key

```bash
$ aws ec2 create-key-pair --key-name EksKeyPair --query 'KeyMaterial' --output text > EksKeyPair.pem
```

Modify the permissions on the private key to avoid future warnings:

```bash
$ chmod 400 EksKeyPair.pem
```

From this private key we can derive a public key; that is the key that will be uploaded to the node (EC2 instance). If the node has the public key and we hold the private one, we can connect to the remote instance.

```bash
$ ssh-keygen -y -f EksKeyPair.pem > eks_key.pub
```

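As a quick sanity check, `ssh-keygen -l` prints the fingerprint of a public key. The sketch below uses a throwaway key under a hypothetical `/tmp/demo_key` path so it is self-contained; with the real key you would run the last command against `eks_key.pub`:

```shell
# start clean; /tmp/demo_key is a throwaway path used only for illustration
rm -f /tmp/demo_key /tmp/demo_key.pub
# create a throwaway RSA private key with no passphrase
ssh-keygen -t rsa -b 2048 -f /tmp/demo_key -N "" -q
# derive the public half, exactly as done above for EksKeyPair.pem
ssh-keygen -y -f /tmp/demo_key > /tmp/demo_key.pub
# show the fingerprint of the derived public key
ssh-keygen -l -f /tmp/demo_key.pub
```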
## Create definition YAML

```yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: lc-cluster
  region: eu-west-3
  version: "1.18"

iam:
  withOIDC: true

managedNodeGroups:
  - name: lc-nodes
    instanceType: t2.small
    desiredCapacity: 3
    minSize: 1
    maxSize: 4
    ssh:
      allow: true
      publicKeyPath: "./eks_key.pub"
```

The same cluster can be created imperatively, passing everything as flags:

```bash
eksctl create cluster \
--name lc-cluster \
--version 1.18 \
--region eu-west-3 \
--nodegroup-name lc-nodes \
--node-type t2.small \
--nodes 3 \
--nodes-min 1 \
--nodes-max 4 \
--with-oidc \
--ssh-access=true \
--ssh-public-key=eks_key.pub \
--managed \
--managed
```

Both forms create exactly the same cluster, but to get the full power of `eksctl` we should use the declarative YAML form.

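If you are unsure how a set of flags maps onto the YAML schema, recent `eksctl` releases can print the equivalent `ClusterConfig` without creating anything (the `--dry-run` flag may not exist in older versions):

```shell
# emit the ClusterConfig these flags would produce, without touching AWS
eksctl create cluster --name lc-cluster --region eu-west-3 --dry-run
```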
## Understanding the eksctl file

`eksctl` is going to build our cluster using this file.

```yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: lc-cluster            # [1]
  region: eu-west-3           # [2]
  version: "1.18"             # [3]

iam:
  withOIDC: true              # [4]

managedNodeGroups:            # [5]
  - name: lc-nodes            # [6]
    instanceType: t2.small    # [7]
    desiredCapacity: 3        # [8]
    minSize: 1                # [9]
    maxSize: 4                # [10]
    ssh:                      # [11]
      allow: true             # will use ~/.ssh/id_rsa.pub as the default ssh key
      publicKeyPath: "./eks_key.pub" # Add path to key
```

1. The cluster name, in our case `lc-cluster`.
2. The AWS region where the cluster will be deployed.
3. The Kubernetes version to use; if left empty, the latest version supported by `AWS` is used.
4. Enables the IAM OIDC provider, as well as IRSA for the Amazon VPC CNI plugin.
5. `managedNodeGroups` are a way for the `eks service` to provision your data plane on your behalf. Traditionally a container orchestrator just orchestrates containers on compute that you bring and maintain yourself: patching it, updating it, rolling in new versions, and all the other day-to-day work. Managed node groups expand the orchestrator's role so that AWS handles this instead: AWS provides the AMI and provisions the instances into your account for you.
6. The name of the group of nodes.
7. The instance type the nodes run on; here a small, inexpensive type.
8. The desired number of nodes in the node group.
9. The minimum number of instances the node group can scale down to when the cluster infrastructure is updated.
10. The maximum number of instances the node group can scale up to when the cluster infrastructure is updated.
11. The `ssh` key to connect to our EC2 instances.

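Because `minSize` and `maxSize` bound the node group, it can later be resized within those bounds. As a sketch, using the cluster and node-group names from this document:

```shell
# scale the managed node group down to 2 nodes (must stay within minSize/maxSize)
eksctl scale nodegroup --cluster lc-cluster --name lc-nodes --nodes 2
```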

## Launching the Cluster

Now we're ready to launch the cluster from the definition file (here saved as `demos.yml`):

```bash
$ eksctl create cluster -f demos.yml
```

## Test the cluster

Now we can test that our cluster is up and running.

```bash
$ kubectl get nodes
```
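`eksctl` writes the cluster credentials into your kubeconfig automatically. If `kubectl` cannot find the cluster (for example, from another machine), the kubeconfig can be regenerated with the AWS CLI, using the names from this document:

```shell
# (re)populate ~/.kube/config with credentials for the lc-cluster control plane
aws eks update-kubeconfig --region eu-west-3 --name lc-cluster
# then verify the worker nodes are Ready
kubectl get nodes -o wide
```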