Release Notes v1.9.0

New Features and Enhancements - 1.9.0

The following documentation and packages correspond to the latest software release version from Habana: 1.9.0-580. We recommend using the latest release where possible to stay aligned with performance improvements and updated model coverage. Please refer to the Installation Guide for further details.

General Features

  • Added RHEL8 support for Gaudi2.

  • Removed Ubuntu 18.04 support.

  • Added OFI “verbs” provider support for Host NIC-based scale-out, utilizing the Host NICs’ RDMA capabilities.

PyTorch

  • This release includes performance improvements on many models. See Model Performance Data on the Developer Website for more information.

  • Enabled native PyTorch autocast on Gaudi for some models. Additional models will be supported in the next release. See PyTorch Mixed Precision Training on Gaudi; a usage sketch appears at the end of this section.

  • Enabled ZeRO-Offload support with ZeRO-1 or ZeRO-2, and enabled ZeRO-3 DeepSpeed configurations for training. See DeepSpeed Validated Configurations; an illustrative configuration is sketched at the end of this section.

  • DeepSpeed activation checkpointing validated with activation partitioning and contiguous memory optimization flags. See DeepSpeed Validated Configurations.

  • Upgraded Habana’s DeepSpeed fork to version 0.7.7.

  • Habana now provides a GPU Migration toolkit which simplifies migrating PyTorch models that contain Python API calls with dependencies on GPU libraries (e.g. torch.cuda calls). See GPU Migration Toolkit; a minimal usage sketch appears at the end of this section.

  • Added a new Dockerfile supporting Triton Inference Server to help manage PyTorch model deployment and execution.

  • The Habana Lightning plugins suite is now available at https://github.com/Lightning-AI/lightning-Habana. The suite is an extension of the Lightning framework, supporting Habana DataLoader and Profiler features.

  • This release of SynapseAI was validated with PyTorch Lightning v1.9.4.

  • Added a Metrics feature allowing users to obtain various performance-related metrics, either through Python APIs or by saving metrics to a file configured using Runtime Environment Variables; see the example at the end of this section.

  • Enabled monitoring of HPU memory during training in TensorBoard. See Profiling with PyTorch.

  • Added new Checkpoints for PyTorch UNet3D to the Habana Catalog.

  • Added a new API to control how the Habana PyTorch bridge and Graph Compiler manage Dynamic Shapes in model scripts. Dynamic Shapes support is disabled by default. See Dynamic Shape APIs; a minimal sketch appears at the end of this section.

  • Removed the BERT Fine-tuning Hugging Face model from the Model References GitHub repository. Users are encouraged to run it directly from Hugging Face Optimum Habana.

  • Published a HabanaAI GitHub fork of fairseq version 0.12.3. See HabanaAI fairseq GitHub repository.

  • Moved the Wav2Vec2 training model from the Model References GitHub repository to the HabanaAI fairseq GitHub repository.

  • Enabled the following for Gaudi2. See Model References GitHub repository:

    • New variant of BERT Fine-tuning for training and inference

    • Stable Diffusion on up to 64 cards for training

    • HuBERT on 8 cards for training

    • Bloom-176B with Beam Search for inference

  • Enabled the following for first-gen Gaudi and Gaudi2. See Model References GitHub repository:

    • Stable Diffusion 2.1 on 1 card for inference

    • UNet2D on 1 card for inference

    • UNet3D on 1 card for inference

    • ResNext101 on 1 card for inference
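
The following is a minimal sketch of native autocast usage on Gaudi, referenced in the autocast note above. The tiny model, shapes, and loss are illustrative placeholders; only torch.autocast with device_type="hpu" and the habana_frameworks import reflect the documented flow.

    import torch
    import habana_frameworks.torch.core as htcore  # registers the "hpu" device

    device = torch.device("hpu")
    model = torch.nn.Linear(16, 4).to(device)      # placeholder model
    criterion = torch.nn.CrossEntropyLoss()

    inputs = torch.randn(8, 16, device=device)
    targets = torch.randint(0, 4, (8,), device=device)

    # Native PyTorch autocast on HPU; bf16 is the low-precision dtype on Gaudi
    with torch.autocast(device_type="hpu", dtype=torch.bfloat16):
        loss = criterion(model(inputs), targets)

    loss.backward()
    htcore.mark_step()  # flushes queued ops in lazy execution mode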
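
An illustrative DeepSpeed configuration for the ZeRO notes above, shown as a Python dict. Field names follow the standard DeepSpeed JSON schema; the values are placeholders, not tuned recommendations. See DeepSpeed Validated Configurations for supported combinations.

    import torch
    import deepspeed

    model = torch.nn.Linear(16, 4)  # placeholder model

    ds_config = {
        "train_batch_size": 64,
        "bf16": {"enabled": True},
        "zero_optimization": {
            "stage": 2,                              # ZeRO-1 or ZeRO-2 per this release
            "offload_optimizer": {"device": "cpu"},  # ZeRO-Offload of optimizer state
        },
    }

    model_engine, optimizer, _, _ = deepspeed.initialize(
        model=model, model_parameters=model.parameters(), config=ds_config)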
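
A minimal sketch of the GPU Migration toolkit flow described above; the key point is that the toolkit is imported before any torch.cuda usage. Consult GPU Migration Toolkit for the supported API surface.

    # Importing the toolkit first maps subsequent torch.cuda.* calls onto HPU.
    import habana_frameworks.torch.gpu_migration
    import torch

    # Existing CUDA-targeted code runs unmodified; the toolkit redirects it to HPU.
    x = torch.ones(4, device="cuda")
    y = (x * 2).sum()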
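
A sketch of retrieving a metric programmatically, assuming the metric_global entry point and the "graph_compilation" metric name from the Metric APIs documentation; verify both against that page.

    import torch
    import habana_frameworks.torch.hpu.metrics as metrics  # assumption: per Metric APIs docs

    gc_metric = metrics.metric_global("graph_compilation")

    x = torch.randn(4, 4, device="hpu")
    y = torch.relu(x)

    print(gc_metric.stats())  # e.g. compilation count and total compile time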
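
A minimal sketch of the Dynamic Shapes toggle described above; the exact entry-point names are an assumption based on Dynamic Shape APIs and should be checked against that page.

    import habana_frameworks.torch.hpu as ht  # assumption: toggle lives in this module

    ht.enable_dynamic_shape()    # opt in; dynamic shape handling is off by default
    # ... run the dynamic-shape portions of the model script ...
    ht.disable_dynamic_shape()   # return to the default static-shape handling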

TensorFlow

Known Issues and Limitations - 1.9.0

PyTorch

  • Support for dynamic shapes is limited. Guidance on how to work with dynamic shapes is included in Handling Dynamic Shapes.

  • Graphs displayed in TensorBoard have some minor limitations, e.g. an operator’s assigned device is displayed as “unknown device” when it is scheduled on HPU.

  • HPU tensor strides might not match those of CPU tensors, as tensor storage is managed differently. Code that references tensor storage directly (such as torch.as_strided) should take the input tensor’s strides into account explicitly. It is recommended to use other view functions instead of torch.as_strided. For further details, see Tensor Views and TORCH.AS_STRIDED; a short sketch follows this list.

  • Weights sharing:

    • Weights can be shared among two or more layers using PyTorch with Gaudi only if they are created inside the module. For more details, refer to Weight Sharing; a tied-weights sketch follows this list.

    • Weights are not shared with operators outside of the PyTorch library (e.g. PyBind11 functions).

  • User-defined attributes in HPU torch.nn.Parameter are not preserved after torch.nn.Parameter is assigned with a CPU tensor.

  • EFA installation on Habana’s containers includes OpenMPI 4.1.2, which does not recognize CPU cores and threads properly in a KVM virtualized environment. To identify the CPU/thread configuration correctly, replace mpirun with mpirun --bind-to hwthread --map-by hwthread:PE=3. This limitation does not apply to AWS DL1 instances.

  • Python API habana_frameworks.torch.hpu.current_device() returns 0 regardless of the actual device being used.

  • For a torch.nn.Parameter that is not created inside torch.nn.Module:

    • When two torch.nn.Parameter objects on CPU storage reference the same parameter, the connection between them is lost if one of them is moved to HPU.

    • Assigning a CPU tensor to an HPU torch.nn.Parameter is not supported.

  • Training/Inference using HPU Graphs: HPU Graphs offer the best performance with minimal host overhead. However, their functionality is currently limited (a minimal inference sketch follows this list):

    • Only models that run completely on HPU have been tested. Models that contain CPU ops are not supported. If an op is not supported during HPU Graphs capturing, the following message appears: “… is not supported during HPU Graph capturing”.

    • HPU Graphs can be used only to capture and replay static graphs. Dynamic shapes are not supported.

    • Data-dependent dynamic control flow is not supported with HPU Graphs.

    • Capturing HPU Graphs on models containing in-place view updates is not supported.

  • Saving metrics to a file configured using Runtime Environment Variables is not supported for workloads spawned via torch.multiprocessing.
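
A short sketch for the stride note above, contrasting torch.as_strided with equivalent view functions; the tensor contents are arbitrary.

    import torch
    import habana_frameworks.torch.core  # registers the "hpu" device

    t = torch.arange(12, device="hpu").reshape(3, 4)

    # Risky on HPU: raw strides assume a CPU-style storage layout.
    # cols = torch.as_strided(t, size=(3, 2), stride=(4, 1))

    # Preferred: express the same selection through view functions.
    cols = t[:, :2]        # slicing instead of stride arithmetic
    flat = t.reshape(-1)   # reshape instead of manual stride bookkeeping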
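
A tied-weights sketch for the weight-sharing notes above: the shared parameter is created inside the module, so the sharing survives the move to HPU. TiedLinear is an illustrative name, not a Habana API.

    import torch

    class TiedLinear(torch.nn.Module):
        def __init__(self, dim):
            super().__init__()
            self.encoder = torch.nn.Linear(dim, dim, bias=False)
            self.decoder = torch.nn.Linear(dim, dim, bias=False)
            self.decoder.weight = self.encoder.weight  # tied inside the module

    model = TiedLinear(8).to("hpu")
    # Sharing is preserved because the tie was made inside the module.
    assert model.decoder.weight is model.encoder.weight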
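
An inference-only sketch for the HPU Graphs notes above, assuming the wrap_in_hpu_graph helper from the HPU Graphs documentation; the model is a placeholder with HPU-only ops, and input shapes must stay static across calls.

    import torch
    import habana_frameworks.torch as ht

    model = torch.nn.Linear(16, 4).to("hpu").eval()   # placeholder model
    model = ht.hpu.wrap_in_hpu_graph(model)           # capture once, replay afterwards

    x = torch.randn(8, 16, device="hpu")              # static shape across replays
    with torch.no_grad():
        y = model(x)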

Habana Communication Library

Single Process Multiple Device Support in HCCL: Since multiple processes are required for multi-node (cross-chassis) scaling, HCCL supports only one device per process, so users do not need to differentiate between inter-node and intra-node use cases. A minimal initialization sketch follows.
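
A minimal initialization sketch for the one-device-per-process model, assuming the documented hccl backend registration in habana_frameworks; rank and world size are supplied by the launcher (e.g. mpirun).

    import torch.distributed as dist
    import habana_frameworks.torch.distributed.hccl  # registers the "hccl" backend

    # Each spawned process drives exactly one Gaudi device.
    dist.init_process_group(backend="hccl")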

Qualification Tool Library

Before running the following plugin tests, make sure to set the __python_cmd environment variable (export __python_cmd=python3):

  • ResNet-50 Training Stress Test

  • Memory Bandwidth Test

  • PCI Bandwidth Test

TensorFlow

  • When using the TF dataset cache feature with a large dataset, setting hugepages for host memory may be required. Refer to the SSD_ResNet34 Model Reference for instructions on setting hugepages.

  • Models currently based on TensorFlow 1 need to be converted to TensorFlow 2. The TF XLA compiler option is currently not supported.

  • Control flow ops such as tf.cond and tf.while_loop are currently not supported on Gaudi and will fall back to CPU for execution.

  • The Eager mode feature in TensorFlow 2 is not supported and must be disabled to run TensorFlow models on Gaudi. To disable Eager mode, see Creating a TensorFlow Example; the sketch following this list shows this together with HPUStrategy.

  • Distributed training with tf.distribute is enabled only with HPUStrategy. Other TensorFlow built-in distribution strategies, such as MirroredStrategy, MultiWorkerMirroredStrategy, CentralStorageStrategy, and ParameterServerStrategy, are not supported.

  • EFA installation on Habana’s containers includes OpenMPI 4.1.2, which does not recognize CPU cores and threads properly in a KVM virtualized environment. To identify the CPU/thread configuration correctly, replace mpirun with mpirun --bind-to hwthread --map-by hwthread:PE=3. This limitation does not apply to AWS DL1 instances.
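
A combined sketch for the Eager mode and tf.distribute notes above. The habana_frameworks.tensorflow module paths are assumptions based on Habana’s TensorFlow integration docs; the Keras model is a placeholder.

    import tensorflow as tf
    from habana_frameworks.tensorflow import load_habana_module
    from habana_frameworks.tensorflow.distribute import HPUStrategy

    tf.compat.v1.disable_eager_execution()  # Eager mode must be disabled on Gaudi
    load_habana_module()                    # registers the HPU device with TensorFlow

    strategy = HPUStrategy()                # the only supported tf.distribute strategy
    with strategy.scope():
        model = tf.keras.Sequential([tf.keras.layers.Dense(4, input_shape=(16,))])
        model.compile(optimizer="sgd", loss="mse")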