Using the solution

Streaming inference results to BigQuery

The VIAI Edge solution deploys an example GCP backend for storing the ML inference results and running analytics on them. You can, for example, compare ML model performance across versions and over time.

The example backend consists of the following pre-configured services:

  • Cloud Pub/Sub topic for receiving ML inference results. The payload contains the JSON results from the VIAI model container, plus the following additional metadata:
    • Camera device ID
    • Timestamp of the ML inference execution in RFC3339 format
    • File ID of the inference. You can use this ID to match an ML inference results row in BigQuery with the image filenames generated by the camera client, if the client was configured to write or upload images as files
  • Cloud Function that subscribes to the Pub/Sub messages and writes them to BigQuery
  • BigQuery dataset containing the destination table for the ML inference results
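Once results have been ingested, you can verify the rows directly in BigQuery. The following is a minimal sketch only: the dataset and table names (viai_edge.inference_results) and the column names are assumptions here, so substitute the names your deployment actually created:

# Show the ten most recent inference results (dataset, table and
# column names are assumptions - adjust to match your deployment)
bq query --use_legacy_sql=false \
    'SELECT device_id, timestamp, file_id
     FROM viai_edge.inference_results
     ORDER BY timestamp DESC
     LIMIT 10'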

To use the inference results cloud backend, add the following flags to the camera integration app:

  • --pubsub results enables streaming the ML results JSON to Pub/Sub
  • --project_id sets the GCP project ID where the backend was deployed
  • --credentials points to the service account JSON key that is authorized to publish messages to Pub/Sub

Note: You need to manually create a service account in that project with permissions to publish to Pub/Sub, as sketched below. The exported JSON key must be accessible to the viai-camera-integration-0 container; you can store it in the host OS path that is mapped to /var/lib/viai/camera-config, for example.
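A minimal sketch using gcloud follows; the service account name viai-pubsub-publisher is an assumption, and any name works:

# Create a service account for publishing inference results
gcloud iam service-accounts create viai-pubsub-publisher --project=${PROJECT_ID}

# Allow it to publish to Pub/Sub topics in the project
gcloud projects add-iam-policy-binding ${PROJECT_ID} \
    --member="serviceAccount:viai-pubsub-publisher@${PROJECT_ID}.iam.gserviceaccount.com" \
    --role="roles/pubsub.publisher"

# Export a JSON key, then copy it to the host OS path that is
# mapped to /var/lib/viai/camera-config so the container can read it
gcloud iam service-accounts keys create viai-sa-key.json \
    --iam-account=viai-pubsub-publisher@${PROJECT_ID}.iam.gserviceaccount.com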

To run the camera client with the Pub/Sub streaming option, open a shell in the camera integration container:

Run on Edge Server

(NAMESPACE is the namespace where your camera container is running, by default viai-edge)

kubectl exec -it viai-camera-integration-0 -n $NAMESPACE -- /bin/bash

Then, from the container, run the following (example for GenICam):

export ML_HOST=<your inference model name>
export ML_PORT=<your inference service port, 8602 by default>
export PROJECT_ID=<your gcp project id>

python3 camera_client.py --protocol genicam --gentl /var/lib/viai/camera-config/FLIR_GenTL_Ubuntu_20_04_x86_64.cti \
    --cfg_write --cfg_write_file ./flir-ax5-recommended.cfg --device_id ax5  --mode continuous --ml --ml_host ${ML_HOST} \
    --ml_port ${ML_PORT} --pubsub results --project_id ${PROJECT_ID} --count 1

Where:

  • ML_HOST is the service name you used when you deployed the model in the edge server.
  • ML_PORT is the port of your service, 8602 by default.

Or (example for USB):

export ML_HOST=<your inference model name>
export ML_PORT=<your inference service port, 8602 by default>
export PROJECT_ID=<your gcp project id>
export ADDRESS=<your camera address, usually /dev/video0>
export DEVICE_ID=<your camera device ID, usually cam1>

python3 camera_client.py --protocol usb --address ${ADDRESS}  \
    --device_id ${DEVICE_ID}  --mode continuous --ml --ml_host ${ML_HOST} \
    --ml_port ${ML_PORT} --pubsub results --project_id ${PROJECT_ID} --count 1

Where:

  • ML_HOST is the service name you used when you deployed the model in the edge server.
  • ML_PORT is the port of your service, 8602 by default.
  • ADDRESS is the address of the camera, usually /dev/video0. See the sketch after this list for a way to discover the address.
  • DEVICE_ID is the device ID that you assigned to the camera, usually cam1.
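If you are unsure of the camera address, you can list the video devices on the edge server first. A quick sketch (v4l2-ctl is provided by the v4l-utils package; a plain ls works as a fallback):

# List connected cameras and their /dev/video* device nodes
v4l2-ctl --list-devices

# Or simply list the video device nodes
ls /dev/video*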

The command output should now contain the following lines, showing the inference results being transmitted to Pub/Sub:

Passing camera images to the ML model container
{'predictionResult': … 'annotationSpecDisplayName': 'defect'}]}]}, 'predictionLatency': '4.417044194s'}
Transmitting data to Cloud Pub/Sub
Closing camera connection and exiting

You can observe the payloads by monitoring the Pub/Sub topic, the Cloud Function logs, or the destination BigQuery table.
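For example, you can pull messages from a Pub/Sub subscription attached to the results topic. A minimal sketch: the subscription name viai-results-debug is an assumption, and you may want a separate debugging subscription so you do not consume messages intended for the Cloud Function:

# Pull and acknowledge a few recent messages from the results topic
gcloud pubsub subscriptions pull viai-results-debug --auto-ack --limit=5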