Migrate your Kubernetes (from AKS to EKS). How?

Nitin
4 min read · Sep 2, 2021

So your containers ended up on Azure Kubernetes Service (AKS). Then the scope widened, you need more managed services to scale out, or whatever your reason is for moving to AWS.

Let’s Deploy!
Get me a pupawccino!

How do you go about migrating your services to AWS Elastic Kubernetes Service (EKS)? What are some of the things to consider when setting up your AWS accounts?

What’s covered in this post?

  • AWS account structure
  • Networking for k8s
  • AWS integration with k8s
  • Data migration
  • Security

AWS Account Structure

It’s important to get the account structure right to avoid future problems (security included). Imagine you build or acquire a new company: how would you integrate it into your AWS landscape? How do you pay for all the business units from one account while accounting for their individual spend? How do you separate environments across all your business units? These are some of the questions to ask while designing your AWS Organisation and account structure.

Go multi-account!

Separate your environments (dev, test, prod) into individual accounts, and use a “tooling/shared services” account to deploy to them. The tooling account can house the CI/CD components.

This sounds like a lot of work and it used to be until AWS Control Tower came along. Check it out!
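As a rough sketch of what that looks like with the AWS Organizations CLI (the root ID, OU name, account name, and email below are placeholders, not values from this setup):

```shell
# Create an OU for an environment under the organisation root
aws organizations create-organizational-unit \
  --parent-id r-examplerootid \
  --name Workloads-Prod

# Create a member account for that environment
# (each account needs a unique email address)
aws organizations create-account \
  --email aws-prod@example.com \
  --account-name "prod"
```

Control Tower automates this pattern (plus guardrails and baselines), which is why it is worth checking out before scripting it yourself.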

Networking for k8s

  • Have a dedicated VPC for your cluster.
    This will allow scaling of your cluster and services within reasons.
  • How do you want to access the EKS Control Plane? Do you want to expose it publicly or privately? Deploy accordingly. Limit the source IPs when exposing the endpoint publicly.
  • Select a VPC CIDR that’s big enough to accommodate growth.
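For the control-plane access point, a sketch of restricting the public endpoint to known source IPs while keeping private access on (the cluster name and CIDR are placeholders):

```shell
# Keep the private endpoint enabled and limit public access
# to a trusted office CIDR
aws eks update-cluster-config \
  --name deveks \
  --resources-vpc-config \
  endpointPublicAccess=true,endpointPrivateAccess=true,publicAccessCidrs="203.0.113.0/24"
```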

AWS integration with k8s

There are various AWS services that integrate with EKS, e.g. ALB, EBS, CloudWatch, S3, etc.

I will cover ALB and EBS here.

Install EBS CSI driver

If your k8s services require access to persistent storage, you can use AWS EFS to mount a file system inside a container, or use EBS volumes as described below.

aws iam create-policy \
  --policy-name AmazonEKS_EBS_CSI_Driver_Policy \
  --policy-document file://csi-iam-policy.json

# Returns the policy ARN:
# arn:aws:iam::<1234567890>:policy/AmazonEKS_EBS_CSI_Driver_Policy

aws eks describe-cluster \
  --name deveks \
  --query "cluster.identity.oidc.issuer" \
  --output text

# Returns the cluster's OIDC issuer URL:
# oidc.eks.ap-southeast-2.amazonaws.com/id/xxxxxxxxxxxxxxxx

aws iam create-role \
  --role-name AmazonEKS_EBS_CSI_DriverRole \
  --assume-role-policy-document file://csi-trust-policy.json

aws iam attach-role-policy \
  --policy-arn arn:aws:iam::<1234567890>:policy/AmazonEKS_EBS_CSI_Driver_Policy \
  --role-name AmazonEKS_EBS_CSI_DriverRole

helm repo add aws-ebs-csi-driver https://kubernetes-sigs.github.io/aws-ebs-csi-driver
helm repo update

helm upgrade --install aws-ebs-csi-driver aws-ebs-csi-driver/aws-ebs-csi-driver \
  --namespace kube-system \
  --set image.repository=602401143452.dkr.ecr.us-west-2.amazonaws.com/eks/aws-ebs-csi-driver \
  --set enableVolumeResizing=true \
  --set enableVolumeSnapshot=true \
  --set serviceAccount.controller.create=true \
  --set serviceAccount.controller.name=ebs-csi-controller-sa
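To check the driver works end to end, one option is to create a StorageClass backed by it and a small test claim (the class name, volume type, and claim below are illustrative, not part of the original setup):

```shell
# Create a gp3 StorageClass served by the EBS CSI driver,
# plus a 1Gi PVC that uses it
cat <<EOF | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs-gp3
provisioner: ebs.csi.aws.com
volumeBindingMode: WaitForFirstConsumer
parameters:
  type: gp3
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: ebs-gp3
  resources:
    requests:
      storage: 1Gi
EOF
```

With `WaitForFirstConsumer`, the volume is only provisioned (in the right AZ) once a pod actually mounts the claim.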

ALB ingress controller

Using the ALB controller, you can deploy an Application Load Balancer in Public or Private mode.

$ curl -o iam_policy.json https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.2.0/docs/install/iam_policy.json
$ aws iam create-policy \
    --policy-name AWSLoadBalancerControllerIAMPolicy \
    --policy-document file://iam_policy.json
$ eksctl create iamserviceaccount \
    --cluster=deveks \
    --namespace=kube-system \
    --name=aws-load-balancer-controller \
    --attach-policy-arn=arn:aws:iam::<1234567890>:policy/AWSLoadBalancerControllerIAMPolicy \
    --override-existing-serviceaccounts \
    --approve
$ kubectl apply -k "github.com/aws/eks-charts/stable/aws-load-balancer-controller/crds?ref=master"
$ helm repo add eks https://aws.github.io/eks-charts
$ helm repo update
$ helm upgrade -i aws-load-balancer-controller eks/aws-load-balancer-controller \
    --set clusterName=deveks \
    --set serviceAccount.create=false \
    --set serviceAccount.name=aws-load-balancer-controller \
    -n kube-system
$ kubectl get deployment -n kube-system aws-load-balancer-controller

Annotate

Annotate your Ingress resources so the controller knows which scheme to deploy:

kubernetes.io/ingress.class: alb
alb.ingress.kubernetes.io/scheme: internal
or
alb.ingress.kubernetes.io/scheme: internet-facing
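Put together, a minimal Ingress using these annotations might look like this (the Ingress name, backend service, and port are placeholders; `target-type: ip` is one common choice, not something mandated above):

```shell
cat <<EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-ingress
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: demo-service
                port:
                  number: 80
EOF
```

Applying this makes the controller provision an ALB and wire its target group to the pods behind `demo-service`.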

You can utilise other services like S3 as a datastore. Using IAM roles for service accounts, you can securely allow k8s services to access such AWS services without having to manage access keys.
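As a sketch of that pattern, granting a pod's service account read-only S3 access (the namespace and service-account name are placeholders; `AmazonS3ReadOnlyAccess` is an AWS managed policy):

```shell
# Create a k8s service account bound to an IAM role
# that carries S3 read-only permissions
eksctl create iamserviceaccount \
  --cluster=deveks \
  --namespace=default \
  --name=s3-reader \
  --attach-policy-arn=arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess \
  --approve
```

Any pod that runs under the `s3-reader` service account then gets temporary AWS credentials injected automatically, with no access keys stored in the cluster.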

Data migration

Now that you have the foundation created for your k8s workloads to be migrated over, how do you go about migrating the data to AWS? You may have data that resides in Azure Blob Storage accounts or in Azure Database for PostgreSQL.

I will cover migrating the data from Azure Blob Storage to AWS S3 using Rclone. During my test it took about 2 hours to copy 33 GB (Sydney region).

  • Install and configure Rclone on a bastion host in AWS (or a host in a private subnet, as long as it can connect to both Azure Blob Storage and S3).

Configuration file

Config file location on Windows:
C:\Users\Administrator\AppData\Roaming\rclone
Or for Mac users:
~/.config/rclone/rclone.conf

[Azure]
type = azureblob
account = <myblobstore>
key = Jhvcjshdvb………………………..brkjbv==
[S3]
type = s3
provider = AWS
access_key_id = <IAM User Accesskey here>
secret_access_key = <IAM User Secret Accesskey here>
region = ap-southeast-2
env_auth = false
location_constraint = ap-southeast-2

Command

rclone --verbose copy Azure: S3:tstbpdocs

Note: rclone uses rsync-style source/destination syntax. The path after the remote name refers to a container, not the storage account (the account is already set in the config), so the source “Azure:myblobstore” with the blob account name didn’t work, but plain “Azure:” did.
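Before kicking off the full copy, it's worth a quick sanity check of both remotes; rclone's `--dry-run` flag also lets you preview the transfer:

```shell
# List containers/buckets on each remote to confirm credentials work
rclone lsd Azure:
rclone lsd S3:

# Preview what would be copied without transferring anything
rclone --dry-run copy Azure: S3:tstbpdocs
```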

If you have data in Azure databases, there are a number of options to migrate it depending on the use case and data volume:

  • Simply dump and restore if it’s a small database and you can afford downtime.
  • Use AWS Database Migration Service (DMS) over a VPN if you need to keep the databases in sync between the two locations.
  • Dump and copy the data to AWS S3, then restore from S3 to RDS.
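For the dump-and-restore option with PostgreSQL, a minimal sketch (the hostnames, database, and user below are placeholders, not values from this migration):

```shell
# Dump from Azure Database for PostgreSQL in custom format
pg_dump --host=myserver.postgres.database.azure.com \
        --username=admin@myserver \
        --dbname=appdb \
        --format=custom \
        --file=appdb.dump

# Restore into the RDS instance
pg_restore --host=mydb.xxxx.ap-southeast-2.rds.amazonaws.com \
           --username=admin \
           --dbname=appdb \
           --no-owner \
           appdb.dump
```

The custom format (`--format=custom`) compresses the dump and lets `pg_restore` restore tables selectively or in parallel.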

Security

Where does security fit in all of this? Well, everywhere!

Multi-Account — Limiting blast radius and access to data.

VPC — Network separation and creating isolated networking zones.

K8s integration with AWS services — Least privilege model, only allow actions that are required by the application and only on specific resources (IAM roles and policies).

And finally: implement security at every step.

Design for tomorrow, build for today!

Nitin

An AWS APN Ambassador and AWS Cloud Architect by profession. I love yoga, hiking, and camping. Ask me about my dog.