9. Kubernetes User Guide¶
Kubernetes provides an efficient and manageable way to orchestrate deep learning workloads at scale. This document describes the steps required to set up a generic Kubernetes solution for an on-premise deployment or as a baseline in a larger cloud configuration.
Habana provides the following components needed for deploying a generic Kubernetes solution:
Device Plugin for Kubernetes
The above components are available for download and use in the Habana Vault.
9.2. Habana Device Plugin for Kubernetes¶
This is a Kubernetes device plugin implementation that enables the registration of the Habana device in a container cluster for compute workload. With the appropriate hardware and this plugin deployed in your Kubernetes cluster, you will be able to run jobs that require a Habana device.
The Habana device plugin for Kubernetes is a DaemonSet that automatically:
Registers Habana devices in your Kubernetes cluster.
Tracks the health of your devices.
The following prerequisites are needed for running the Habana device plugin:
SynapseAI® software drivers loaded on the system
1.10 <= Kubernetes version < 1.22
To enable Habana® Gaudi® device resource support in Kubernetes, run the device plugin on every node equipped with a Habana device by deploying the following DaemonSet using the kubectl create command.
kubectl needs access to a Kubernetes cluster to run these commands.
To deploy the device plugin, use the associated .yaml file to set up the environment:
$ kubectl create -f https://vault.habana.ai/artifactory/docker-k8s-device-plugin/habana-k8s-device-plugin.yaml
Check the device plugin deployment status by running the following command:
$ kubectl get pods -n habana-system
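Once the plugin pods are running, each Gaudi-equipped node should advertise an allocatable habana.ai/gaudi resource. As a quick sanity check (the node name below is a placeholder):

```
$ kubectl describe node <node-name> | grep habana.ai/gaudi
```

The resource should appear under both Capacity and Allocatable with a non-zero count.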
9.2.3. Running Gaudi Jobs Example¶
You can create a Kubernetes Pod that acquires a Gaudi device by using the resources.limits field. The following example uses Habana's TensorFlow container image:
$ cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: habanalabs-gaudi-demo
spec:
  containers:
    - name: habana-ai-base-container
      image: vault.habana.ai/gaudi-docker/1.1.0/ubuntu20.04/habanalabs/tensorflow-installer-tf-cpu-2.6.0:1.1.0-614
      workingDir: /home/user1
      securityContext:
        capabilities:
          add: ["SYS_RAWIO"]
      command: ["sleep"]
      args: ["10000"]
      resources:
        limits:
          habana.ai/gaudi: 1
EOF
Check the pod status by running the following command:
$ kubectl get pods
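If the pod reaches the Running state, you can optionally confirm that the container sees the allocated device, for example with the hl-smi utility (assuming it is available inside the image; the pod name matches the example above):

```
$ kubectl exec habanalabs-gaudi-demo -- hl-smi
```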
9.3. Habana MPI Operator and Helm Chart for Kubernetes¶
The Habana MPI Operator is a slight modification of the Kubeflow MPI Operator that enables running MPI allreduce-style workloads in Kubernetes on Gaudi accelerators. Combined with Habana's hardware and software, it enables large-scale distributed training with a simple Kubernetes job distribution model.
The following prerequisites are needed for running the Habana MPI Operator on Habana hardware:
SynapseAI software drivers loaded on the system
1.10 <= Kubernetes version < 1.22
The Habana MPI Operator is deployed using Helm. Install Helm, add the Habana Helm repo, and install the MPI Operator using the following commands:
$ helm repo add habanalabs https://vault.habana.ai/gaudi-helm
$ helm install -n <your_namespace> mpi-operator habanalabs/mpi-operator
If you have already added the Habana Helm repo, run helm repo update to fetch the latest charts before installing.
Deployment of the MPI Operator can be verified using the following command:
$ kubectl get pods -n <your_namespace>
9.3.3. Running Multi-Gaudi Workloads¶
For more details on how to deploy and run workloads at scale using the MPI Operator, refer to the MPI Operator documentation.
The environment variable HCL_CONFIG_PATH must also be set to /etc/hcl/worker_config.json on the worker pods, where the HCL configuration file is automatically generated by the mpi-operator. Set this in the workload MPIJob YAML by copying the following snippet under the worker container image field:
env:
  - name: HCL_CONFIG_PATH
    value: "/etc/hcl/worker_config.json"
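For context, the following is a minimal sketch of where this env block sits inside an MPIJob worker spec. The job name, image, replica count, and device count are illustrative placeholders, the launcher spec is omitted for brevity, and the apiVersion depends on your mpi-operator release:

```yaml
apiVersion: kubeflow.org/v1alpha2   # depends on the mpi-operator version
kind: MPIJob
metadata:
  name: example-mpijob              # illustrative name
spec:
  mpiReplicaSpecs:
    # Launcher spec omitted for brevity
    Worker:
      replicas: 2                   # illustrative count
      template:
        spec:
          containers:
            - name: worker
              image: <habana-tensorflow-image>   # placeholder image
              env:
                - name: HCL_CONFIG_PATH
                  value: "/etc/hcl/worker_config.json"
              resources:
                limits:
                  habana.ai/gaudi: 8   # illustrative device count
```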