# 9. Kubernetes User Guide

## 9.1. Introduction

Kubernetes provides an efficient and manageable way to orchestrate deep learning workloads at scale. This document describes the steps required to set up a generic Kubernetes solution for an on-premise deployment, or as a baseline in a larger cloud configuration.

Habana provides the following components needed for deploying a generic Kubernetes solution:

• Device Plugin for Kubernetes

• Helm chart

• MPI Operator

The above components are available for download and use in the Habana Vault.

## 9.2. Habana Device Plugin for Kubernetes

This is a Kubernetes device plugin implementation that registers the Habana device in a container cluster for compute workloads. With the appropriate hardware and this plugin deployed in your Kubernetes cluster, you can run jobs that require a Habana device.

The Habana device plugin for Kubernetes is a DaemonSet that automatically:

• Enables the registration of Habana devices in your Kubernetes cluster.

• Keeps track of the health of your devices.
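Once the plugin is running on a node, the Gaudi devices appear as a schedulable extended resource in that node's status. As a sketch, `kubectl describe node <node-name>` on an eight-device node might include output along these lines (the device count shown is illustrative, not a guaranteed value):

```
Capacity:
  habana.ai/gaudi:  8
Allocatable:
  habana.ai/gaudi:  8
```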

### 9.2.1. Prerequisites

The following prerequisites are required for running the Habana device plugin:

• SynapseAI SW drivers loaded on the system

• 1.10 <= Kubernetes version < 1.22
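The Kubernetes version constraint above can be checked mechanically. The helper below is a hypothetical convenience (not part of any Habana tooling) that validates a version string such as the one reported by `kubectl version`:

```python
def k8s_version_supported(version: str) -> bool:
    """Return True if a version like 'v1.21.3' satisfies 1.10 <= version < 1.22."""
    # Strip an optional leading 'v' and keep only the major.minor components.
    major, minor = (int(part) for part in version.lstrip("v").split(".")[:2])
    # Tuple comparison mirrors the documented range check.
    return (1, 10) <= (major, minor) < (1, 22)
```

For example, `k8s_version_supported("v1.21.3")` returns `True`, while `"v1.22.0"` falls outside the supported range.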

### 9.2.2. Deployment

The device plugin enables Habana device resource support in Kubernetes. You must run it on all nodes that are equipped with a Habana device by deploying the following DaemonSet using the `kubectl create` command (see the command below).

Note

For deployment of the device plugin, the associated .yaml file can be used to set up the environment:

```
$ kubectl create -f https://vault.habana.ai/docker-k8s-device-plugin/0.15.0/habanalabs/k8s-device-plugin/release/habana-k8s-device-plugin.yaml
```

Check the device plugin deployment status by running the following command:

```
$ kubectl get pods -n habana-system
```


### 9.2.3. Running Gaudi Jobs Example

You can create a Kubernetes Pod that acquires a Gaudi device by using the `resources.limits` field. The following example uses Habana’s TensorFlow container image:

```
$ cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: habanalabs-gaudi-demo
spec:
  containers:
    - name: habana-ai-base-container
      image: vault.habana.ai/gaudi-docker/0.15.0/ubuntu20.04/habanalabs/tensorflow-installer-tf-cpu-2.5.0:0.15.0-547
      workingDir: /home/user1
      securityContext:
        capabilities:
          add: ["SYS_RAWIO"]
      command: ["sleep"]
      args: ["10000"]
      resources:
        limits:
          habana.ai/gaudi: 1
EOF
```

Check the pod status by running the following command:

```
$ kubectl get pods
```


## 9.3. Habana MPI Operator and Helm Chart for Kubernetes

The Habana MPI Operator is a slight modification of the Kubeflow MPI Operator that enables running MPI all-reduce style workloads in Kubernetes, leveraging Gaudi accelerators. In combination with Habana’s hardware and software, it enables large-scale distributed training with a simple Kubernetes job distribution model.

### 9.3.1. Prerequisites

The following prerequisites are required for running the Habana MPI Operator on Habana hardware:

• SynapseAI SW drivers loaded on the system

• 1.10 <= Kubernetes version < 1.22

• Helm 3.x

### 9.3.2. Deployment

The Habana MPI Operator is deployed using Helm. Install Helm, add the Habana Helm repo, and install the MPI Operator using the following commands:

```
$ helm repo add habanalabs https://vault.habana.ai/gaudi-helm
$ helm install -n <your_namespace> mpi-operator habanalabs/mpi-operator
```


Deployment of the MPI Operator can be verified using the following command:

```
$ kubectl get pods -n <your_namespace>
```


The environment variable HCL_CONFIG_PATH also needs to be set to /etc/hcl/worker_config.json on the worker pods, where the HCL configuration file is automatically generated by the mpi-operator. This must be set in the workload MPIJob YAML. Copy-paste this code snippet under the worker container image field:

```
env:
  - name: HCL_CONFIG_PATH
    value: /etc/hcl/worker_config.json
```
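For orientation, the fragment below is a minimal sketch of where this variable sits inside the Worker section of an MPIJob spec. The job name, replica count, device count, and exact field layout are illustrative assumptions (the schema depends on your MPI Operator release), not a definitive template:

```
Worker:
  replicas: 2
  template:
    spec:
      containers:
        - name: worker
          image: vault.habana.ai/gaudi-docker/0.15.0/ubuntu20.04/habanalabs/tensorflow-installer-tf-cpu-2.5.0:0.15.0-547
          env:
            - name: HCL_CONFIG_PATH
              value: /etc/hcl/worker_config.json
          resources:
            limits:
              habana.ai/gaudi: 8
```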