9. Kubernetes User Guide

9.1. Introduction

Kubernetes provides an efficient and manageable way to orchestrate deep learning workloads at scale. This document describes the steps required to set up a generic Kubernetes solution for an on-premises deployment or as a baseline in a larger cloud configuration.

Habana provides the following components needed for deploying a generic Kubernetes solution:

  • Device Plugin for Kubernetes

  • Helm chart

  • MPI Operator

The above components are available for download and use in the Habana Vault.

9.2. Habana Device Plugin for Kubernetes

This is a Kubernetes device plugin implementation that enables the registration of the Habana device in a container cluster for compute workload. With the appropriate hardware and this plugin deployed in your Kubernetes cluster, you will be able to run jobs that require a Habana device.

The Habana device plugin for Kubernetes is a Daemonset that allows you to automatically:

  • Enable the registration of Habana devices in your Kubernetes cluster.

  • Keep track of the health of your device.

9.2.1. Prerequisites

The following prerequisites are needed to run the Habana device plugin:

  • SynapseAI® Software drivers loaded on the system

  • 1.10 <= Kubernetes version < 1.22
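The supported Kubernetes version range above can be checked programmatically. A minimal sketch in Python; the `supports_kubernetes` helper is illustrative only, not part of the Habana tooling:

```python
def supports_kubernetes(version: str) -> bool:
    """Return True if a Kubernetes version string falls in [1.10, 1.22).

    `version` is expected as "major.minor" or "major.minor.patch",
    optionally prefixed with "v" (as reported by `kubectl version`).
    """
    major, minor = (int(part) for part in version.lstrip("v").split(".")[:2])
    return (1, 10) <= (major, minor) < (1, 22)

print(supports_kubernetes("v1.21.3"))  # True
print(supports_kubernetes("1.22.0"))   # False
print(supports_kubernetes("1.9.11"))   # False
```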

9.2.2. Deployment

This section describes how to enable Habana® Gaudi® device resource support in Kubernetes.

Run the device plugin on all nodes equipped with Habana devices by deploying the following Daemonset using the kubectl create command (see the command below).


Note: kubectl requires access to a Kubernetes cluster to run these commands.

To deploy the device plugin, apply the associated .yaml file. The manifest name below is a placeholder; use the file provided in the Habana Vault:

$ kubectl create -f <habana-device-plugin>.yaml

Check the device plugin deployment status by running the following command:

$ kubectl get pods -n habana-system

9.2.3. Running Gaudi Jobs Example

You can create a Kubernetes Pod that acquires a Gaudi device by using the resources.limits field. The following example uses Habana’s TensorFlow container image:

$ cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: habanalabs-gaudi-demo
spec:
  containers:
    - name: habana-ai-base-container
      image: <tensorflow-container-image>  # placeholder; use Habana's TensorFlow image
      workingDir: /home/user1
      securityContext:
        capabilities:
          add: ["SYS_RAWIO"]
      command: ["sleep"]
      args: ["10000"]
      resources:
        limits:
          habana.ai/gaudi: 1
EOF

Check the pod status by running the following command:

$ kubectl get pods
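The essential part of the Pod spec above is the habana.ai/gaudi entry under resources.limits, which makes the scheduler place the Pod on a Gaudi-equipped node. As a hedged sketch, the same manifest can be built as a Python dict and printed as JSON (Kubernetes accepts JSON manifests as well as YAML); the `gaudi_pod_manifest` helper and the image name are illustrative placeholders:

```python
import json

def gaudi_pod_manifest(name: str, image: str, gaudis: int = 1) -> dict:
    """Build a Pod manifest that requests `gaudis` Gaudi devices."""
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": name},
        "spec": {
            "containers": [
                {
                    "name": "habana-ai-base-container",
                    "image": image,
                    "command": ["sleep"],
                    "args": ["10000"],
                    "securityContext": {"capabilities": {"add": ["SYS_RAWIO"]}},
                    # The device plugin advertises Gaudi devices under this
                    # extended-resource name; requesting it in limits makes
                    # the scheduler reserve the devices for this Pod.
                    "resources": {"limits": {"habana.ai/gaudi": gaudis}},
                }
            ]
        },
    }

manifest = gaudi_pod_manifest("habanalabs-gaudi-demo", "<tensorflow-image>")
print(json.dumps(manifest, indent=2))
```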

9.3. Habana MPI Operator and Helm Chart for Kubernetes

The Habana MPI Operator is a slight modification of the Kubeflow MPI Operator that enables running MPI allreduce-style workloads on Kubernetes, leveraging Gaudi accelerators. Combined with Habana’s hardware and software, it enables large-scale distributed training with a simple Kubernetes job distribution model.

9.3.1. Prerequisites

The following prerequisites are needed to run the Habana MPI Operator on Habana hardware:

  • SynapseAI® Software drivers loaded on the system

  • 1.10 <= Kubernetes version < 1.22

  • Helm 3.x

9.3.2. Deployment

The Habana MPI Operator is deployed using Helm. Install Helm, add the Habana Helm repo, and install the MPI Operator using the following commands:

$ helm repo add habanalabs https://vault.habana.ai/gaudi-helm

$ helm install -n <your_namespace> mpi-operator habanalabs/mpi-operator


If you have already added the Habana Helm repo using the above command, run helm repo update to fetch the latest charts.

Deployment of the MPI Operator can be verified using the following command:

$ kubectl get pods -n <your_namespace>

9.3.3. Running Multi-Gaudi Workloads

For more details on how to deploy and run workloads at scale leveraging the MPI Operator, refer to the MPI operator documentation.
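As a rough orientation, an MPIJob pairs one launcher replica (which runs mpirun) with N worker replicas, and only the workers request Gaudi devices. A minimal sketch of that structure as a Python dict; the field names follow the Kubeflow MPI Operator v1 API, but treat the exact schema as an assumption and verify it against the MPI operator documentation:

```python
def gaudi_mpijob_skeleton(name: str, workers: int, gaudis_per_worker: int = 8) -> dict:
    """Sketch the launcher/worker layout of an MPIJob for Gaudi training."""
    return {
        "apiVersion": "kubeflow.org/v1",
        "kind": "MPIJob",
        "metadata": {"name": name},
        "spec": {
            "mpiReplicaSpecs": {
                # The launcher coordinates the job and needs no Gaudi devices.
                "Launcher": {"replicas": 1},
                # Each worker Pod requests its share of Gaudi devices.
                "Worker": {
                    "replicas": workers,
                    "template": {
                        "spec": {
                            "containers": [
                                {
                                    "name": "worker",
                                    "resources": {
                                        "limits": {"habana.ai/gaudi": gaudis_per_worker}
                                    },
                                }
                            ]
                        }
                    },
                },
            }
        },
    }

job = gaudi_mpijob_skeleton("gaudi-allreduce-demo", workers=2)
print(job["spec"]["mpiReplicaSpecs"]["Worker"]["replicas"])  # 2
```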

The environment variable HCL_CONFIG_PATH must also be set to /etc/hcl/worker_config.json on the worker pods, where the HCL configuration file is automatically generated by the mpi-operator. Set this in the workload MPIJob YAML by copying the following snippet under the worker container's image field:

env:
  - name: HCL_CONFIG_PATH
    value: "/etc/hcl/worker_config.json"
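If the worker container spec is manipulated programmatically rather than edited by hand, the same environment variable can be injected as sketched below. The container dict shape mirrors the Pod examples in this guide, and `add_hcl_config_env` is an illustrative helper, not part of the operator:

```python
def add_hcl_config_env(container: dict,
                       path: str = "/etc/hcl/worker_config.json") -> dict:
    """Append the HCL_CONFIG_PATH environment variable to a container spec."""
    env = container.setdefault("env", [])
    # Avoid duplicating the entry if it is already present.
    if not any(item.get("name") == "HCL_CONFIG_PATH" for item in env):
        env.append({"name": "HCL_CONFIG_PATH", "value": path})
    return container

worker = {"name": "worker", "image": "<tensorflow-image>"}
add_hcl_config_env(worker)
print(worker["env"])  # [{'name': 'HCL_CONFIG_PATH', 'value': '/etc/hcl/worker_config.json'}]
```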