Deploy to a local kind Kubernetes cluster

Follow the instructions below to set up a local kind Kubernetes cluster for building and running the controller.

Costs

In this document, you use billable components of Google Cloud, including Compute Engine.

To generate a cost estimate based on your projected usage, use the pricing calculator. New Google Cloud users might be eligible for a free trial.

When you finish the tasks that are described in this document, you can avoid continued billing by deleting the resources that you created. For more information, see Clean up.

Before you begin

  1. Install the Google Cloud SDK.

  2. Configure authorization and a base set of properties for the gcloud command line tool. Choose a project that has billing enabled.

  3. Set the Google Cloud project you want to use:

    gcloud config set project PROJECT_ID
    

    Replace PROJECT_ID with the project ID of the Google Cloud project you want to use.

  4. Define and export environment variables that you use to configure resources required to run the controller:

    export PROJECT_ID="$(gcloud config get project 2> /dev/null)"
    export PROJECT_NUMBER="$(gcloud projects describe $PROJECT_ID --format 'value(projectNumber)')"
    
  5. Enable the Compute Engine, Identity and Access Management (IAM), and Security Token Service APIs on your Google Cloud project:

    gcloud services enable \
      compute.googleapis.com \
      iam.googleapis.com \
      sts.googleapis.com
    
  6. Install the kubectl Kubernetes client command-line tool:

    gcloud components install kubectl
    
  7. Install kind.

    To create kind Kubernetes clusters, you need one of the following container runtimes installed: Docker, Podman, or nerdctl.

  8. Install the jq command-line tool.

  9. To build the binary and the container image for the controller, install all of the following:

  10. Clone the Git repository and navigate to the directory projects/k8s-hybrid-neg-controller.
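
Before you continue, you can confirm that the command-line tools used in this guide are available on your PATH. This is an optional sanity check, and the version output varies by installation:

    gcloud version
    kubectl version --client
    kind version
    jq --version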

Create the local kind Kubernetes cluster

  1. Create the cluster:

    make kind-create
    

    This command creates a multi-node Kubernetes cluster with fake topology.kubernetes.io/zone labels to simulate a Kubernetes cluster with Nodes spread across multiple zones.
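
    To inspect the simulated topology, you can list the Nodes together with their zone labels. This is an optional check; the zone values shown depend on the kind cluster configuration in this repository:

    kubectl get nodes --label-columns topology.kubernetes.io/zone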

Configure Workload Identity Federation with Kubernetes

Allow the controller manager to authenticate to Google Cloud APIs using Kubernetes ServiceAccount tokens by configuring Workload Identity Federation with Kubernetes.

  1. Get the cluster's OpenID Connect issuer URL and save it as an environment variable:

    ISSUER_URL="$(kubectl get --raw /.well-known/openid-configuration | jq -r .issuer)"
    
  2. Download the cluster's JSON Web Key Set (JWKS):

    kubectl get --raw /openid/v1/jwks > cluster-jwks.json
    
  3. Create a workload identity pool:

    export WORKLOAD_IDENTITY_POOL=hybrid-neg
    
    gcloud iam workload-identity-pools create ${WORKLOAD_IDENTITY_POOL} \
      --description "For Hybrid NEG Controllers running in Kubernetes clusters" \
      --display-name "Hybrid NEG Controller Manager" \
      --location global
    
  4. Add the Kubernetes cluster OIDC issuer as a provider to the workload pool:

    export WORKLOAD_IDENTITY_PROVIDER=hybrid-neg-provider
    
    gcloud iam workload-identity-pools providers create-oidc $WORKLOAD_IDENTITY_PROVIDER \
      --attribute-mapping "google.subject=assertion.sub,attribute.namespace=assertion['kubernetes.io']['namespace'],attribute.service_account_name=assertion['kubernetes.io']['serviceaccount']['name']" \
      --issuer-uri "$ISSUER_URL" \
      --jwk-json-path cluster-jwks.json \
      --location global \
      --workload-identity-pool $WORKLOAD_IDENTITY_POOL
    
  5. Create a custom IAM role with permissions to manage zonal network endpoint groups (NEGs):

    gcloud iam roles create compute.networkEndpointGroupAdmin \
      --description "Full control of zonal Network Endpoint Groups (NEGs)" \
      --permissions "compute.instances.use,compute.networkEndpointGroups.attachNetworkEndpoints,compute.networkEndpointGroups.create,compute.networkEndpointGroups.createTagBinding,compute.networkEndpointGroups.delete,compute.networkEndpointGroups.deleteTagBinding,compute.networkEndpointGroups.detachNetworkEndpoints,compute.networkEndpointGroups.get,compute.networkEndpointGroups.list,compute.networkEndpointGroups.listEffectiveTags,compute.networkEndpointGroups.listTagBindings,compute.networkEndpointGroups.use,compute.zones.list" \
      --project $PROJECT_ID \
      --stage GA \
      --title "Zonal Network Endpoint Groups Admin"
    

    This custom role provides permissions to manage zonal network endpoint groups using the Compute Engine API.

    You can create the custom role at the organization level instead of at the project level by replacing the --project flag with the --organization flag and your organization resource ID.

    You can use predefined roles, such as the Kubernetes Engine Service Agent role (container.serviceAgent), instead of creating a custom role. However, the predefined roles typically provide additional permissions that aren’t needed to manage zonal NEGs.

  6. Grant the custom IAM role on the Google Cloud project to the federated identity representing the Kubernetes service account of the controller:

    gcloud projects add-iam-policy-binding $PROJECT_ID \
      --member "principal://iam.googleapis.com/projects/${PROJECT_NUMBER}/locations/global/workloadIdentityPools/${WORKLOAD_IDENTITY_POOL}/subject/system:serviceaccount:hybrid-neg-system:hybrid-neg-controller-manager" \
      --role projects/$PROJECT_ID/roles/compute.networkEndpointGroupAdmin
    
  7. Create a credential configuration file:

    gcloud iam workload-identity-pools create-cred-config \
      projects/${PROJECT_NUMBER}/locations/global/workloadIdentityPools/${WORKLOAD_IDENTITY_POOL}/providers/${WORKLOAD_IDENTITY_PROVIDER} \
      --credential-source-file /var/run/secrets/iam.googleapis.com/token \
      --credential-source-type text \
      --output-file k8s/components/secure-token-service/credential-configuration.json
    
  8. Create a patch that adds a serviceAccountToken projected volume to the controller Pod spec:

    echo PROJECT_NUMBER=$PROJECT_NUMBER
    echo WORKLOAD_IDENTITY_POOL=$WORKLOAD_IDENTITY_POOL
    echo WORKLOAD_IDENTITY_PROVIDER=$WORKLOAD_IDENTITY_PROVIDER
    
    eval "echo \"$(cat k8s/components/secure-token-service/patch-google-sts-token-volume.yaml.template)\"" \
      > k8s/components/secure-token-service/patch-google-sts-token-volume.yaml
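
After completing the steps above, you can optionally verify that the workload identity pool, the provider, and the IAM policy binding exist. The following is one way to check, assuming the environment variables defined earlier are still set in your shell:

    # Confirm that the pool and the provider are in the ACTIVE state.
    gcloud iam workload-identity-pools describe $WORKLOAD_IDENTITY_POOL \
      --location global \
      --format 'value(state)'

    gcloud iam workload-identity-pools providers describe $WORKLOAD_IDENTITY_PROVIDER \
      --location global \
      --workload-identity-pool $WORKLOAD_IDENTITY_POOL \
      --format 'value(state)'

    # Show the members bound to the custom role in the project IAM policy.
    gcloud projects get-iam-policy $PROJECT_ID \
      --flatten 'bindings[].members' \
      --filter "bindings.role=projects/$PROJECT_ID/roles/compute.networkEndpointGroupAdmin" \
      --format 'value(bindings.members)'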
    

Configure the controller

  1. Create a patch that sets the Google Cloud project ID as an environment variable in the controller Pod spec:

    echo PROJECT_ID=$PROJECT_ID
    
    eval "echo \"$(cat k8s/components/google-cloud-project-id/patch-google-cloud-project-id.yaml.template)\"" \
      > k8s/components/google-cloud-project-id/patch-google-cloud-project-id.yaml
    
  2. Create a patch that sets the name of your VPC network on Google Cloud as an environment variable in the controller manager Pod spec:

    export NETWORK=VPC_NETWORK
    
    eval "echo \"$(cat k8s/components/google-cloud-vpc-network/patch-google-cloud-vpc-network.yaml.template)\"" \
      > k8s/components/google-cloud-vpc-network/patch-google-cloud-vpc-network.yaml
    

    Replace VPC_NETWORK with the name of the VPC network you want the controller to use.

    You can list the VPC networks in your project with this command:

    gcloud compute networks list --project $PROJECT_ID
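
    Optionally, confirm that the network name you exported exists in the project before building. A minimal check, assuming the NETWORK and PROJECT_ID variables are still set:

    gcloud compute networks describe $NETWORK \
      --format 'value(name)' \
      --project $PROJECT_ID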
    

Build and deploy the controller

  1. Build the controller manager container image, render the manifests, deploy to the Kubernetes cluster, and tail the controller manager logs:

    make run
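
    While the logs are tailing, you can inspect the controller resources from another terminal. This sketch assumes that the controller is deployed to the hybrid-neg-system namespace, matching the ServiceAccount referenced in the IAM policy binding above:

    kubectl get deployments,pods --namespace hybrid-neg-system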
    

Verify that the controller can create hybrid NEGs

  1. Create a Kubernetes Deployment resource with Pods running nginx, and expose them using a Kubernetes Service that has the solutions.cloud.google.com/hybrid-neg annotation:

    kubectl apply --namespace=default \
      --filename=./hack/nginx-service.yaml,./hack/nginx-deployment-docker-hub.yaml
    
  2. Verify that the controller created one hybrid NEG in each of the Compute Engine zones us-west1-{a,b,c}:

    gcloud compute network-endpoint-groups list \
      --filter 'name=nginx-80 AND networkEndpointType:NON_GCP_PRIVATE_IP_PORT'
    

    The output looks similar to the following:

    NAME      LOCATION    ENDPOINT_TYPE            SIZE
    nginx-80  us-west1-a  NON_GCP_PRIVATE_IP_PORT  1
    nginx-80  us-west1-b  NON_GCP_PRIVATE_IP_PORT  1
    nginx-80  us-west1-c  NON_GCP_PRIVATE_IP_PORT  0
    
  3. Verify that the hybrid NEGs in zones us-west1-{a,b} have one endpoint each:

    for zone in us-west1-a us-west1-b ; do
      gcloud compute network-endpoint-groups list-network-endpoints nginx-80 \
        --format yaml \
        --zone $zone
    done
    

    The output looks similar to the following:

    ---
    networkEndpoint:
      instance: ''
      ipAddress: 10.30.1.5
      port: 80
    ---
    networkEndpoint:
      instance: ''
      ipAddress: 10.30.2.5
      port: 80
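
    The endpoint count in each NEG tracks the number of ready Pods that back the Service in that zone. As an optional experiment, you can scale the Deployment and list the NEGs again to watch the sizes change. This assumes that the Deployment created from hack/nginx-deployment-docker-hub.yaml is named nginx; adjust the name if it differs:

    kubectl scale deployment/nginx --namespace=default --replicas=3

    gcloud compute network-endpoint-groups list \
      --filter 'name=nginx-80 AND networkEndpointType:NON_GCP_PRIVATE_IP_PORT'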
    

Verify that the controller can delete hybrid NEGs

  1. Remove the solutions.cloud.google.com/hybrid-neg annotation from the nginx Kubernetes Service:

    kubectl annotate service/nginx --namespace=default \
      solutions.cloud.google.com/hybrid-neg-
    
  2. Verify that the controller deleted the hybrid NEGs:

    gcloud compute network-endpoint-groups list \
      --filter 'name=nginx-80 AND networkEndpointType:NON_GCP_PRIVATE_IP_PORT'
    

    The output matches the following:

    Listed 0 items.
    

    It may take a few seconds for the controller to delete the hybrid NEGs.
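
    Because deletion is asynchronous, you can poll until the NEGs are gone instead of re-running the command by hand. A small sketch using the same filter as above:

    while gcloud compute network-endpoint-groups list \
        --filter 'name=nginx-80 AND networkEndpointType:NON_GCP_PRIVATE_IP_PORT' \
        --format 'value(name)' | grep -q nginx-80 ; do
      sleep 5
    done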

Troubleshoot

If you run into problems, see the troubleshooting guide.
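
The controller manager logs are usually the first place to look. The following assumes that the Deployment is named hybrid-neg-controller-manager and runs in the hybrid-neg-system namespace, matching the ServiceAccount referenced in the IAM policy binding above; adjust the names if your deployment differs:

    # The Deployment name is assumed to match the ServiceAccount name.
    kubectl logs --namespace hybrid-neg-system \
      deployment/hybrid-neg-controller-manager --tail 100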

Clean up

  1. Set up environment variables to use in the following steps:

    PROJECT_ID="$(gcloud config get project 2> /dev/null)"
    PROJECT_NUMBER="$(gcloud projects describe $PROJECT_ID --format 'value(projectNumber)')"
    WORKLOAD_IDENTITY_POOL=hybrid-neg
    WORKLOAD_IDENTITY_PROVIDER=hybrid-neg-provider
    
  2. Undeploy k8s-hybrid-neg-controller from the kind Kubernetes cluster:

    make delete
    
  3. Delete the kind cluster:

    make kind-delete
    
  4. Delete the workload provider:

    gcloud iam workload-identity-pools providers delete $WORKLOAD_IDENTITY_PROVIDER \
      --location global \
      --quiet \
      --workload-identity-pool $WORKLOAD_IDENTITY_POOL
    
  5. Delete the workload pool:

    gcloud iam workload-identity-pools delete $WORKLOAD_IDENTITY_POOL \
      --location global \
      --quiet
    
  6. Delete the custom IAM role:

    gcloud iam roles delete compute.networkEndpointGroupAdmin \
      --project $PROJECT_ID
    
  7. Verify that the controller deleted the hybrid NEGs:

    gcloud compute network-endpoint-groups list \
      --filter 'networkEndpointType:NON_GCP_PRIVATE_IP_PORT'
    

    The output matches the following:

    Listed 0 items.
    

    To delete NEGs manually, use the command gcloud compute network-endpoint-groups delete.
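
    For example, to remove any leftover nginx-80 NEGs in the three zones used earlier in this guide (adjust the name and zones to match the output of the list command; zones without a matching NEG report a not-found error that you can ignore):

    for zone in us-west1-a us-west1-b us-west1-c ; do
      gcloud compute network-endpoint-groups delete nginx-80 \
        --quiet \
        --zone $zone
    done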