
GKE cross-region capacity chasing with SkyPilot

Due to the limited availability of accelerator resources, customers face significant challenges in securing sufficient capacity to run their AI/ML workloads. They often require:

  • Preferences for VM families and accelerators, with the ability to automatically fail over to alternative configurations if their preferred resources are unavailable.
  • Automatic capacity acquisition across regions to address scenarios where a specific region lacks sufficient resources.

In this tutorial, we will demonstrate how to leverage the open-source software SkyPilot to help GKE customers efficiently obtain accelerators across regions, ensuring workload continuity and optimized resource utilization.

SkyPilot is a framework for running AI and batch workloads on any infrastructure, offering unified execution, high cost savings, and high GPU availability. By combining SkyPilot with GKE features such as Kueue with Dynamic Workload Scheduler, custom compute classes, and GCS FUSE, users can effectively address capacity challenges while optimizing costs.

Overview

In this tutorial, our persona is an ML scientist planning to run a batch workload for hyperparameter tuning. This workload involves two experiments, with each experiment requiring 4 GPUs to execute.

We have two GKE clusters in different regions: one in us-central1 with 4 A100 GPUs and another in us-west1 with 4 L4 GPUs.

By the end of this tutorial, our goal is to have one experiment running in the us-central1 cluster and the other in the us-west1 cluster, demonstrating efficient resource distribution across regions.

SkyPilot supports GKE's cluster autoscaling for dynamic resource management. However, to keep this tutorial straightforward, we will demonstrate the use of a static node pool instead.

Before you begin

  1. Ensure you have a Google Cloud project with billing enabled and the GKE API enabled.

  2. Ensure you have the following tools installed on your workstation:

  • gcloud CLI
  • kubectl (a quick install path is shown after this list)
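
If kubectl (or the GKE auth plugin it requires) is not installed yet, one way to get both, assuming the gcloud CLI is already present, is through the gcloud components manager:

gcloud components install kubectl
gcloud components install gke-gcloud-auth-plugin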

Set up your GKE Cluster

Create the two clusters; you can create them in parallel to save time.

  1. Set the default environment variables:

export PROJECT_ID=$(gcloud config get project)
  2. Create a GKE cluster in us-central1-c with 4 A100 GPUs:
gcloud container clusters create demo-us-central1 \
    --location=us-central1-c \
    --project=$PROJECT_ID 
gcloud container node-pools create gpu-node-pool \
    --accelerator type=nvidia-tesla-a100,count=4 \
    --machine-type=a2-highgpu-4g \
    --location=us-central1-c \
    --cluster=demo-us-central1 \
    --num-nodes=1
gcloud container clusters get-credentials demo-us-central1 \
    --location=us-central1-c \
    --project=$PROJECT_ID
  3. Create a GKE cluster in us-west1-c with 4 L4 GPUs:
gcloud container clusters create demo-us-west1 \
    --location=us-west1-c \
    --project=$PROJECT_ID 
gcloud container node-pools create gpu-node-pool \
    --accelerator type=nvidia-l4,count=4 \
    --machine-type=g2-standard-48 \
    --location=us-west1-c \
    --cluster=demo-us-west1 \
    --num-nodes=1
gcloud container clusters get-credentials demo-us-west1 \
    --location=us-west1-c \
    --project=$PROJECT_ID
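
Optionally, verify that the GPU nodes registered their accelerators. kubectl talks to whichever cluster you fetched credentials for last; the custom-columns expression below is just one way to surface the allocatable GPU count per node:

kubectl get nodes -o custom-columns='NAME:.metadata.name,GPU:.status.allocatable.nvidia\.com/gpu'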

Install SkyPilot

  1. Clone the example repository and create a virtual environment.
cd ~
git clone https://github.com/GoogleCloudPlatform/ai-on-gke.git
cd ai-on-gke/tutorials-and-examples/skypilot
# Create the venv inside the example directory, then activate it.
python3 -m venv ~/ai-on-gke/tutorials-and-examples/skypilot
source bin/activate
  2. Install SkyPilot and verify the setup.
pip install -U "skypilot[kubernetes,gcp]"
# Confirm that the kubernetes and gcp backends are enabled:
sky check

# List the GPU types SkyPilot can see:
sky show-gpus
  3. Find the context names.
kubectl config get-contexts

# Find the context names, for example:
gke_${PROJECT_ID}_us-central1-c_demo-us-central1
gke_${PROJECT_ID}_us-west1-c_demo-us-west1
  4. Copy the following YAML to ~/.sky/config.yaml, with the context names replaced. SkyPilot evaluates the contexts in the order specified until it finds a cluster that provides enough capacity to deploy the workload.
allowed_clouds:
  - gcp
  - kubernetes
kubernetes:
  # Use the contexts' names, in failover order.
  allowed_contexts:
    - gke_${PROJECT_ID}_us-central1-c_demo-us-central1
    - gke_${PROJECT_ID}_us-west1-c_demo-us-west1
  # Seconds to wait for a context before failing over to the next one.
  provision_timeout: 30
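
With the config in place, you can re-run the checks from step 2 to confirm that both contexts are enabled; the exact output varies by SkyPilot version:

sky check kubernetes
sky show-gpus --cloud kubernetes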

Launch the jobs

Under ~/ai-on-gke/tutorials-and-examples/skypilot, you’ll find a file named train.yaml, which uses SkyPilot's syntax to define a job. The job asks for 4 A100 GPUs first; if no capacity is found, it fails over to 4 L4 GPUs.

resources:
  cloud: kubernetes
  # The list is ordered by preference.
  accelerators: [ A100:4, L4:4 ]
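
For orientation, a complete task definition built around this resource spec might look like the sketch below. The workdir, envs, setup, and run fields here are illustrative assumptions; check the actual train.yaml in the repo.

# Illustrative sketch -- the repo's train.yaml may differ.
resources:
  cloud: kubernetes
  accelerators: [ A100:4, L4:4 ]  # preference order

workdir: .        # synced to ~/sky_workdir on the pod

envs:
  LR: "0.01"      # hyperparameter injected by the launcher (assumed name)

setup: |
  pip install torch

run: |
  python train.py --lr $LR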

launch.py is a Python program that initiates a hyperparameter tuning run with two candidate values for the learning rate (LR). In production environments, such experiments are typically tracked with open-source frameworks like MLflow.
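
As a rough sketch of how such a launcher can drive the two experiments through SkyPilot's Python API (the LR values, env variable name, and cluster names are assumptions; the repo's launch.py may be structured differently):

# Illustrative launcher -- the repo's launch.py may differ.
import sky

LEARNING_RATES = [0.01, 0.001]  # two candidate values (assumed)

for i, lr in enumerate(LEARNING_RATES, start=1):
    task = sky.Task.from_yaml('train.yaml')
    task.update_envs({'LR': str(lr)})
    # Each experiment gets its own SkyPilot cluster; SkyPilot walks the
    # allowed contexts in order and falls back across the accelerator list.
    sky.launch(task, cluster_name=f'train-cluster{i}')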

Start the training:

python launch.py

SkyPilot will first select the demo-us-central1 cluster, which has 4 A100 GPUs available. For the second job, it will launch in the demo-us-west1 cluster using L4 GPUs, as no additional clusters with 4 A100 GPUs were available.
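
Since PROJECT_ID is still exported from the setup steps, a quick way to confirm the placement against each cluster directly is:

kubectl get pods --context gke_${PROJECT_ID}_us-central1-c_demo-us-central1
kubectl get pods --context gke_${PROJECT_ID}_us-west1-c_demo-us-west1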

You can also check SkyPilot's status using:

sky status

You can SSH into a pod in GKE using its SkyPilot cluster name. Once inside, you'll find your local source code synced to the pod under ~/sky_workdir. This setup makes it convenient for developers to debug and iterate on their AI/ML code efficiently.

ssh train-cluster1

Clean up
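
First, tear down the SkyPilot clusters that were launched:

sky down -a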

Then delete the GKE clusters:

gcloud container clusters delete demo-us-central1 \
    --location=us-central1-c \
    --project=$PROJECT_ID
gcloud container clusters delete demo-us-west1 \
    --location=us-west1-c \
    --project=$PROJECT_ID