Migrate Amazon EKS and Amazon RDS to Google Kubernetes Engine (GKE) and Cloud SQL¶
This guide walks you through replatforming a containerized application from Amazon EKS and Amazon RDS to Google Kubernetes Engine (GKE) Autopilot and Cloud SQL. You will perform a "lift-and-shift" migration for the application layer (redeploying stateless workloads) and a data replication migration for the database layer using Database Migration Service (DMS). You will begin by assessing the source environment with the Kubernetes Cluster Discovery Tool to build a comprehensive inventory of your Kubernetes objects.
Requirements¶
To deploy this demo, you need:
- An AWS environment
- A Google Cloud project
- Google Cloud SDK
- Terraform >= 1.11.1
- AWS CLI
- eksctl
- Gemini CLI
- Python 3.9+ & pip: Required to run the k8s discovery tool
> [!CAUTION]
> Use a local environment. Cloud Shell's 5 GB disk limit is insufficient, and its ephemeral sessions will clear your environment variables upon timeout.
AWS Permissions¶
Your AWS credentials need permissions to create and manage the following resources:
- IAM Roles and Policies: For the EKS control plane and nodes.
- VPC and Networking: VPC, subnets, security groups, and route tables.
- EKS Cluster: The Kubernetes control plane and worker nodes.
- RDS Database: A PostgreSQL database instance.
- ECR Repository: To store container images.
For demonstration purposes, the AdministratorAccess managed policy is
sufficient. In a production setting, always adhere to the principle of least
privilege.
Google Cloud Permissions¶
Ensure your user account or service account has the following IAM roles in your Google Cloud project:
- Kubernetes Engine Admin (`roles/container.admin`): To create and manage the GKE cluster.
- Cloud SQL Admin (`roles/cloudsql.admin`): To provision and manage the Cloud SQL instance.
- Compute Network Admin (`roles/compute.networkAdmin`): To manage VPC networks and firewall rules.
- Artifact Registry Admin (`roles/artifactregistry.admin`): To create a repository for container images.
- Database Migration Admin (`roles/datamigration.admin`): To create and manage the DMS job.
- Service Account Admin (`roles/iam.serviceAccountAdmin`): To create service accounts for GKE nodes and other resources.
- Project IAM Admin (`roles/resourcemanager.projectIamAdmin`): To grant IAM roles to service accounts.
For demonstration purposes, the basic Owner role (roles/owner) is
sufficient. In a production setting, always adhere to the principle of least
privilege.
Deploy AWS Infrastructure¶
Set up your local environment and authenticate with both cloud providers.
- Open your terminal and configure the AWS CLI (see the command sketch after this list). Enter your AWS Access Key ID and Secret Access Key, and set the default region name to your desired AWS region (for example, `us-west-1`). Ensure this matches the `aws_region` variable if you plan to modify the default in `aws/terraform/variables.tf`.
- Clone the repository and set the working directory.
- Set the deployment variables and deploy the AWS infrastructure. Example output:

  ```text
  --------------------------------
  EKS Cluster Name: <AWS_PREFIX>-cymbalbank
  EKS Cluster Config: aws eks update-kubeconfig --region us-west-1 --name <AWS_PREFIX>-cymbalbank
  RDS Endpoint: <AWS_PREFIX>-cymbalbankdb.cyhrzhhyuigb.us-west-1.rds.amazonaws.com
  Frontend URL: http://k8s-default-frontend-05547d09c1-255394655.us-west-1.elb.amazonaws.com
  --------------------------------
  ```
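A minimal sketch of these steps follows; the repository URL is a placeholder, and the `AWS_PREFIX` and `AWS_REGION` variable names are assumptions rather than values confirmed by this guide:

```bash
# Hypothetical sketch; the repository URL is a placeholder and the variable
# names AWS_PREFIX / AWS_REGION are assumptions, not confirmed by this guide.
aws configure                          # enter access key, secret key, and default region (e.g. us-west-1)

git clone <REPOSITORY_URL>
cd <REPOSITORY_DIRECTORY>

export AWS_PREFIX="demo"               # assumed prefix applied to AWS resource names
export AWS_REGION="us-west-1"          # should match aws_region in aws/terraform/variables.tf

cd aws/terraform
terraform init
terraform apply
```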
Open the Frontend URL in your Chrome browser to confirm the application is up and running.
Assess the Amazon EKS environment¶
In this section, you will use the Kubernetes Cluster Discovery Tool to generate a detailed inventory of your Amazon EKS cluster. This helps verify which workloads are stateless and ready for direct redeployment.
Discover Amazon EKS Resources¶
Use the Discovery Tool to scan your AWS EKS cluster and collect inventory data.
- In your terminal, clone the repository (see the command sketch after this list).
- Set up a Python virtual environment.
- Install the dependencies.
- Execute the Python application to scan your Amazon EKS cluster. Wait for the tool to fetch the inventory; it connects to the EKS API and retrieves details on Nodes, Workloads, Networking, and Storage.
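A minimal sketch of the discovery steps above; the discovery tool's repository URL, directory, and entry point are placeholders, not confirmed names:

```bash
# Hypothetical sketch; the discovery tool's repository URL and entry point are
# placeholders, not confirmed names.
git clone <DISCOVERY_TOOL_REPOSITORY_URL>
cd <DISCOVERY_TOOL_DIRECTORY>

python3 -m venv .venv                  # requires Python 3.9+
source .venv/bin/activate
pip install -r requirements.txt        # assumes the tool ships a requirements.txt

python3 <discovery_tool>.py            # placeholder entry point; scans the EKS cluster in your current kubeconfig context
```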
Review the Inventory Report¶
Verify the generated inventory files and deactivate the Python virtual environment.
- Verify the files generated (see the sketch after this list).
- In your local terminal, deactivate the Python environment.
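A minimal sketch of these two steps; the output location and file names produced by the discovery tool are placeholders:

```bash
# Hypothetical sketch; the output directory and inventory file names are placeholders.
ls -l <OUTPUT_DIRECTORY>     # confirm the cluster and object inventory files exist

deactivate                   # exit the Python virtual environment
```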
With the inventory files generated, you can now use Gemini to create a migration plan.
Generate migration recommendations using Gemini¶
Utilize the Gemini CLI to analyze the EKS cluster inventory and generate a GKE migration plan.
- In your terminal, create the `.gemini/commands` directory and copy the `aws-container-migration-analysis.toml` file into it (see the command sketch after this list).
- Start the Gemini CLI from the current directory.
- In the Gemini CLI, run the custom command to generate the recommendations.
- Review the migration report at `eks-migration-assessments/eks-migration-plan.md`.
- After the report is generated, exit the Gemini CLI by pressing `Ctrl+C` twice.
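A minimal sketch of these Gemini CLI steps; the source path of the `.toml` file and the slash-command name are assumptions derived from the file name:

```bash
# Hypothetical sketch; the source path of the .toml file and the slash-command
# name are assumptions derived from the file name.
mkdir -p .gemini/commands
cp <PATH_TO>/aws-container-migration-analysis.toml .gemini/commands/

gemini                                  # start the Gemini CLI from the current directory

# Inside the Gemini CLI, run the custom command, for example:
#   /aws-container-migration-analysis
```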
This command analyzes the cluster and object inventory files from your Amazon EKS workloads and generates a Google Kubernetes Engine (GKE) migration plan. For more information about the migration tool, see Gemini-powered migrations.
Provision the Google Cloud infrastructure¶
You will now provision the target environment.
- Create a Google Cloud project (see the command sketch after this list). Replace `GOOGLE_CLOUD_PROJECT_ID` with your Google Cloud project ID.
- Set the default Google Cloud project and start the deployment to Google Cloud.
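A minimal sketch of these steps; `GOOGLE_CLOUD_PROJECT_ID` is your own project ID, and the `gcp/terraform` path is an assumption about the repository layout:

```bash
# Hypothetical sketch; GOOGLE_CLOUD_PROJECT_ID is your own project ID and the
# gcp/terraform path is an assumption about the repository layout.
gcloud projects create GOOGLE_CLOUD_PROJECT_ID
gcloud config set project GOOGLE_CLOUD_PROJECT_ID

cd gcp/terraform
terraform init
terraform apply
```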
Configure AWS Security Group for DMS¶
The AWS RDS instance is not accessible from the internet. The following steps will configure the AWS security group to allow inbound connections from the Database Migration Service.
- Retrieve the outbound IP address that DMS will use to connect to RDS (see the sketch after this list).
- Run `terraform apply`, passing the `DB_IP` variable. The variable is formatted as a list of strings with a `/32` CIDR suffix to match the `dms_source_ip_cidrs` type constraint.
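A minimal sketch of these steps; the Cloud SQL instance name is a placeholder, and reading the instance's `OUTGOING` IP address is one assumed way to determine the DMS outbound address:

```bash
# Hypothetical sketch; the Cloud SQL instance name is a placeholder, and reading
# the instance's OUTGOING IP address is an assumed way to obtain the DMS outbound IP.
gcloud sql instances describe <CLOUD_SQL_INSTANCE_NAME> \
    --format="json(ipAddresses)"        # note the address whose type is OUTGOING

export DB_IP=<OUTGOING_IP_ADDRESS>

cd aws/terraform
terraform apply -var="dms_source_ip_cidrs=[\"${DB_IP}/32\"]"
```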
Migrate the database to Cloud SQL¶
The Terraform script has already created the Connection Profile and Migration Job. You simply need to start the job to begin the data replication.
- Navigate to the Database Migration Service console.
- Select the Demo migration job from the list.
- Click Edit to enter the configuration view.
- Click Test job to verify connectivity between the source and destination.
- If the test returns a success status, click Save and start.
- Wait for the job status to transition to Running.
- Monitor the job as it completes the Full dump phase and enters the Change Data Capture (CDC) state.
  Note: In the CDC state, changes are continuously propagated from the source Amazon RDS instance to the target Cloud SQL instance.
- Change the password for the `postgres` user (a command sketch appears after the notes below).
- Click Promote in the DMS console.
- Wait for the migration job's status to say Completed before proceeding.
Note: CRITICAL PRODUCTION WARNING: Before clicking Promote, ensure that no new data is being written to the RDS instance.
- In a production scenario, you must stop the application to prevent data loss.
- Once the DMS promotion is complete, you must update the application configuration to point to the new Cloud SQL instance before restarting the application.
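A minimal sketch of the password change referenced above; the Cloud SQL instance name is a placeholder, and the password shown is the demo value used later in this guide:

```bash
# Hypothetical sketch; the Cloud SQL instance name is a placeholder and the
# password is the demo value used later in this guide.
gcloud sql users set-password postgres \
    --instance=<CLOUD_SQL_INSTANCE_NAME> \
    --password='Chiapet22!'
```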
Connect to the Cloud SQL instance¶
In this section, you will connect to the migrated Cloud SQL instance to verify the data replication.
- Open Cloud Shell and connect to the Cloud SQL instance (see the command sketch after this list). When prompted for the postgres password, enter `Chiapet22!`.
- Connect to the `accounts` database. When prompted for the postgres password, enter `Chiapet22!`.
- List the records in the `contacts` table. Example output:

  ```text
   id | username |     label     | account_num | routing_num | is_external
  ----+----------+---------------+-------------+-------------+-------------
    1 | testuser | Alice         |  1033623433 |   883745000 | f
    2 | testuser | Bob           |  1055757655 |   883745000 | f
    3 | testuser | Eve           |  1077441377 |   883745000 | f
    4 | alice    | Testuser      |  1011226111 |   883745000 | f
    5 | alice    | Bob           |  1055757655 |   883745000 | f
    6 | alice    | Eve           |  1077441377 |   883745000 | f
    7 | bob      | Testuser      |  1011226111 |   883745000 | f
    8 | bob      | Alice         |  1033623433 |   883745000 | f
    9 | bob      | Eve           |  1077441377 |   883745000 | f
   10 | eve      | Testuser      |  1011226111 |   883745000 | f
   11 | eve      | Alice         |  1033623433 |   883745000 | f
   12 | eve      | Bob           |  1055757655 |   883745000 | f
   13 | testuser | External Bank |  9099791699 |   808889588 | t
   14 | alice    | External Bank |  9099791699 |   808889588 | t
   15 | bob      | External Bank |  9099791699 |   808889588 | t
   16 | eve      | External Bank |  9099791699 |   808889588 | t
  (16 rows)
  ```

- Exit the database.
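A minimal sketch of these verification steps; the Cloud SQL instance name is a placeholder:

```bash
# Hypothetical sketch; the Cloud SQL instance name is a placeholder. Run this in Cloud Shell.
gcloud sql connect <CLOUD_SQL_INSTANCE_NAME> --user=postgres

# Then, at the psql prompt:
#   \c accounts              -- connect to the accounts database
#   SELECT * FROM contacts;  -- list the migrated records
#   \q                       -- exit the database
```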
Deploy the Application to GKE¶
Once the database migration is running and in the CDC state, deploy the CymbalBank application to your GKE cluster. This involves building the container images, pushing them to Artifact Registry, and deploying the Kubernetes manifests.
- Open your local terminal where the source code was cloned and set the environment variables (see the sketch after this list).
- Run the `gcp/build_and_deploy.sh` script to build the application images, push them to Artifact Registry, and deploy the application to the GKE cluster.
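A minimal sketch of these steps; the environment variable names are assumptions about what the build script expects (only `REPOSITORY` is referenced later in this guide):

```bash
# Hypothetical sketch; the variable names and values are assumptions about what
# gcp/build_and_deploy.sh expects.
export PROJECT_ID="GOOGLE_CLOUD_PROJECT_ID"
export REGION="us-central1"             # assumed Google Cloud region
export REPOSITORY="cymbalbank"          # Artifact Registry repository name

./gcp/build_and_deploy.sh
```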
Note: The build_and_deploy.sh script is intended solely for this tutorial to streamline the setup. For production workloads, we recommend replacing manual scripts with a managed continuous delivery pipeline using Google Cloud Deploy.
Verify the Pod startup¶
After the build finishes, confirm that the Kubernetes pods are successfully running on your cluster.
- Configure `kubectl` to communicate with your cluster (see the command sketch after this list).
- List the running pods and services. You should see output indicating the pods are `Running` and that the frontend service has an `EXTERNAL-IP` assigned:

  ```text
  NAME                                      READY   STATUS    RESTARTS   AGE
  pod/balancereader-68b5b4f547-6mbcv        2/2     Running   0          33m
  pod/contacts-5459db85f-t9nsx              2/2     Running   0          33m
  pod/frontend-6cbfcf57fd-jzd9c             1/1     Running   0          33m
  pod/ledgerwriter-ffdbb8977-q988d          2/2     Running   0          32m
  pod/loadgenerator-b66884cc9-vxdc5         1/1     Running   0          33m
  pod/transactionhistory-6c99596fc5-t2ghj   2/2     Running   0          33m
  pod/userservice-8447c96568-tm4n5          2/2     Running   0          32m

  NAME                         TYPE           CLUSTER-IP       EXTERNAL-IP      PORT(S)        AGE
  service/balancereader        ClusterIP      34.118.227.70    <none>           8080/TCP       68m
  service/contacts             ClusterIP      34.118.229.170   <none>           8080/TCP       68m
  service/frontend             LoadBalancer   34.118.226.140   104.154.43.102   80:31822/TCP   68m
  service/kubernetes           ClusterIP      34.118.224.1     <none>           443/TCP        114m
  service/ledgerwriter         ClusterIP      34.118.225.150   <none>           8080/TCP       68m
  service/transactionhistory   ClusterIP      34.118.232.99    <none>           8080/TCP       68m
  service/userservice          ClusterIP      34.118.229.173   <none>           8080/TCP       68m
  ```
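A minimal sketch of these steps; the GKE cluster name and region are placeholders:

```bash
# Hypothetical sketch; the GKE cluster name and region are placeholders.
gcloud container clusters get-credentials <GKE_CLUSTER_NAME> --region <REGION>

kubectl get pods,services
```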
Troubleshooting:

- If you see `ImagePullBackOff`, check that your `REPOSITORY` name matches your Artifact Registry.
- If you see `CrashLoopBackOff`, the app may be failing to connect to the database. You will verify this connection in the next section.

Now that the application workload is running on GKE, the final step is to confirm that it is correctly communicating with the migrated Cloud SQL instance.
Verify the migration¶
Now that the application is running, you will verify that the frontend is accessible and that it is successfully pulling customer data from the migrated Cloud SQL database.
- Retrieve the external IP of the GKE load balancer and open it in your browser:

  ```bash
  export FRONTEND_URL="http://$(kubectl get service frontend -o jsonpath='{.status.loadBalancer.ingress[0].ip}')"
  echo $FRONTEND_URL
  ```

  Note: It may take a few minutes for the load balancer to provision an IP address. If the command returns nothing or `pending`, wait a minute and try again.
- Copy the IP address output from the previous step and paste it into your web browser (for example, `http://34.x.x.x`).
- Verify that you can log in, and create and view transactions.
Clean up¶
To avoid incurring charges, destroy the infrastructure when finished.
Destroy AWS infrastructure¶
- Destroy the demo infrastructure on AWS.
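A minimal sketch, assuming the AWS resources were created from the `aws/terraform` directory:

```bash
# Hypothetical sketch; assumes the AWS resources were created from aws/terraform.
cd aws/terraform
terraform destroy
```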
Destroy Google Cloud infrastructure¶
- Destroy the GCP infrastructure.
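A minimal sketch, assuming the teardown script sits at the repository root after deployment:

```bash
# Hypothetical sketch; the script's location is assumed. It is cloned into the
# repo as part of the deployment steps (see the note below).
./google-cloud-infra-teardown.sh
```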
Note: The google-cloud-infra-teardown.sh file is cloned into this repo as part
of the deployment steps.