Provision and configure a Google Kubernetes Engine platform for NVIDIA Omniverse applications¶
This document describes how to provision and configure a Google Kubernetes Engine (GKE) cluster to deploy NVIDIA Omniverse applications on Google Cloud.
The provisioning and configuration process follows GKE security and operational best practices.
Provision and configure a GKE cluster¶
To provision and configure a GKE cluster that implements best practices, you deploy an instance of the Core GKE Accelerated Platform.
To deploy an instance of the Core GKE Accelerated Platform, you do the following:
- Clone the Accelerated Platforms repository and set the repository directory environment variable, as sketched below.
- Configure the Core GKE Accelerated Platform by setting the following Terraform configuration variables in `platforms/gke/base/_shared_config/initialize.auto.tfvars`:

  ```hcl
  # Disable initializing node pools without GPUs
  initialize_container_node_pools_cpu = false

  # Disable initializing node pools with GPUs that don't offer NVIDIA RTX cores
  initialize_container_node_pools_gpu_without_rtx = false

  # Disable initializing node pools with Google TPUs because they are not needed for this workload
  initialize_container_node_pools_tpu = false
  ```
- Deploy an instance of the Core GKE Accelerated Platform, as sketched below.
Note: The instructions in this section are a specialization of the Core GKE Accelerated Platform deployment instructions, tailored to this use case.
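The commands for the clone step are not shown above. The following is a minimal sketch, assuming the repository is GoogleCloudPlatform/accelerated-platforms on GitHub and that ACP_REPO_DIR is the repository directory environment variable; confirm both against the Core GKE Accelerated Platform deployment instructions.

```bash
# Clone the Accelerated Platforms repository.
# The repository URL and the ACP_REPO_DIR variable name are assumptions; confirm
# them in the Core GKE Accelerated Platform deployment instructions.
git clone https://github.com/GoogleCloudPlatform/accelerated-platforms.git
cd accelerated-platforms

# Set the repository directory environment variable that later commands reference.
export ACP_REPO_DIR="$(pwd)"
```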
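The command for the deploy step is also not shown above. A minimal sketch, assuming the repository provides a deploy.sh helper for the Core GKE Accelerated Platform under platforms/gke/base/core (an assumption; use the exact command from the Core GKE Accelerated Platform deployment instructions):

```bash
# Deploy an instance of the Core GKE Accelerated Platform, including the GKE
# cluster and its node pools, using the configuration set in the previous step.
# The deploy.sh path is an assumption; use the exact command from the
# Core GKE Accelerated Platform deployment instructions.
"${ACP_REPO_DIR}/platforms/gke/base/core/deploy.sh"
```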
Destroy the GKE cluster¶
To destroy the instance of the Core GKE Accelerated Platform you created, you do the following:
- Destroy the instance of the Core GKE Accelerated Platform, as sketched below.
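The teardown command is not shown above. A minimal sketch, assuming a teardown.sh helper alongside the deploy script (an assumption; use the exact command from the Core GKE Accelerated Platform deployment instructions):

```bash
# Destroy the Core GKE Accelerated Platform instance, including the GKE cluster.
# The teardown.sh path is an assumption; use the exact command from the
# Core GKE Accelerated Platform deployment instructions.
"${ACP_REPO_DIR}/platforms/gke/base/core/teardown.sh"
```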
What's next¶
- Develop NVIDIA Omniverse applications on NVIDIA Omniverse development workstations on Google Cloud.
- Deploy an NVIDIA Omniverse Kit Streaming application on GKE.