Release Notes

Note

For previous versions of the Release Notes, please refer to Previous Release Notes.

New Features and Enhancements - 1.20.0

The following documentation and packages correspond to the latest software release version from Intel® Gaudi®: 1.20.0-543. We recommend using the latest release where possible to stay aligned with performance improvements and updated model coverage. Please refer to the Installation Guide for further details.

General

  • Updates to the Intel Gaudi RDMA PerfTest Tool:

    • All operating systems are now supported.

    • Added support for the Latency Communication test.

    • Added support for internal ports testing.

    • Removed the HL-Thunk code.

Firmware

  • For Gaudi 3:

    • Enabled ANLT (auto-negotiation and link training) on scale-up ports.

    • Added support for OOB IPMI.

    • Added new telemetry data groups, including errors, performance, and system information.

  • For Gaudi 3 and Gaudi 2, added the Host NIC driver version report to the hl-smi output.

PyTorch

  • The v1.20.0 release has been validated with the Hugging Face Optimum for Intel Gaudi library and models, version v1.15.0. Future releases of the Optimum for Intel Gaudi library may be validated with this release. See the Support Matrix for a full list of version support.

  • The Hugging Face Optimum for Intel Gaudi library now supports the updated TGI-gaudi version 2.3.1. See https://github.com/huggingface/tgi-gaudi.

  • Validated the Intel Gaudi 1.20.0 software release with vLLM v0.6.6.post1. See the Support Matrix for a full list of version support.

  • Added support for the following features to the Intel Gaudi vLLM fork (see the usage sketch after this list):

    • Multiprocessing backend

    • Speculative decoding
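
    A minimal usage sketch, assuming the Gaudi fork keeps the upstream vLLM v0.6.x interface; the model names are placeholders, and the speculative_model, num_speculative_tokens, and distributed_executor_backend parameters are assumptions to be verified against the fork's documentation:

      from vllm import LLM, SamplingParams

      # Placeholder model names; any compatible target/draft pair can be used.
      llm = LLM(
          model="meta-llama/Llama-3.1-8B-Instruct",
          speculative_model="meta-llama/Llama-3.2-1B-Instruct",  # draft model (assumed parameter)
          num_speculative_tokens=5,                               # draft tokens per step (assumed)
          distributed_executor_backend="mp",                      # multiprocessing backend (assumed)
      )

      outputs = llm.generate(
          ["Intel Gaudi accelerators are"],
          SamplingParams(temperature=0.0, max_tokens=64),
      )
      print(outputs[0].outputs[0].text)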

  • Upgraded to PyTorch version 2.6.0.

  • Validated the Intel Gaudi 1.20.0 software release on PyTorch Lightning version 2.5.0.post0. See https://lightning.ai/docs/pytorch/stable/integrations/hpu/.

  • Upgraded to Intel Neural Compressor (INC) v3.3.

  • Added an option in INC to limit Gaudi 3 FP8 scales to Gaudi 2 FP8 scales, which reduces compilation time by reducing the number of distinct graphs generated. See Supported JSON Config File Options.

  • The mixture_of_experts (MoE) custom op has been extended to support static quantization for inference. Support is now available for both FP8_143 and FP8_152 precisions. This flavor is used by vLLM/Optimum Mixtral, which employs FP8_143 precision through the INC framework using a scalar scale per tensor. Additionally, the custom op now supports amax measurement for the intermediate hidden layer of the downscale linear layer. See the Mixture of Experts Forward (MoE) section.

  • To reduce warmup time on Gaudi with FP8 quantization, you can now modify the model execution script to disable constant folding optimizations for FP8 scale tensors. See HPU Inference APIs.

  • Deprecated Intel Gaudi Megatron-DeepSpeed and replaced it with Megatron-LM.

  • Intel Gaudi offers a wide range of models using Eager mode and torch.compile. Lazy mode will remain available in the v1.21.0 release; however, any model requiring Lazy mode will need to explicitly set the PT_HPU_LAZY_MODE=1 flag, as the default will change to 0.
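
    A minimal sketch of explicitly selecting Lazy mode, assuming the flag must be set before the Habana PyTorch bridge initializes the device:

      import os

      # Keep Lazy mode enabled; per the note above, the default changes to 0
      # (Eager/torch.compile) in the v1.21.0 release.
      os.environ["PT_HPU_LAZY_MODE"] = "1"

      import torch
      import habana_frameworks.torch.core as htcore

      x = torch.ones(2, 2, device="hpu")
      y = x + x
      htcore.mark_step()  # in Lazy mode, mark_step() triggers execution of the accumulated graph
      print(y.cpu())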

Known Issues and Limitations - 1.20.0

General

  • Running the functional test in high power mode results in a performance failure with an 8.5% degradation, whereas the functional test in extreme power mode is fully operational.

  • Intel Gaudi Media Loader is not supported on RHEL 9.4 with Python 3.11.

PyTorch

  • In certain models, there is performance degradation when using HPU graphs with Lazy collectives. To mitigate this, set the PT_HPU_LAZY_COLLECTIVES_HOLD_TENSORS=1 flag, though this may increase memory consumption on the device side. This will be fixed in a subsequent release.

  • Sporadic numerical instability may occur when training with FP8 precision.

  • Starting from PyTorch 2.5.1, torch.nn.functional.scaled_dot_product_attention automatically casts inputs to FP32, which can lead to performance degradation or GC failures. To avoid this, call torch._C._set_math_sdp_allow_fp16_bf16_reduction(True) to allow torch.nn.functional.scaled_dot_product_attention to operate on BF16. Without this setting, the inputs will always be cast to FP32.
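
    A minimal sketch of the workaround, assuming BF16 inputs already resident on HPU:

      import torch
      import habana_frameworks.torch.core as htcore

      # Allow scaled_dot_product_attention to reduce in BF16 instead of upcasting to FP32.
      torch._C._set_math_sdp_allow_fp16_bf16_reduction(True)

      q = torch.randn(1, 8, 128, 64, dtype=torch.bfloat16, device="hpu")
      k = torch.randn(1, 8, 128, 64, dtype=torch.bfloat16, device="hpu")
      v = torch.randn(1, 8, 128, 64, dtype=torch.bfloat16, device="hpu")

      out = torch.nn.functional.scaled_dot_product_attention(q, k, v)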

  • DeepSpeed is not compatible with PyTorch Lightning in SW releases greater than 1.17.X. However, it is supported in older releases that include lightning-habana 1.6.

  • To bypass a performance issue in Linux kernel version >= 5.9 (e.g. Ubuntu 22.04), the intel_idle driver must be disabled by adding intel_idle.max_cstate=0 to the kernel command line.

  • Support for torch.compile is at an early stage. Models may not work (due to missing op implementations) or performance may be affected.

  • Support for Eager mode is at an early stage. Models may not work (due to missing op implementations) or performance may be affected. The functionality of Eager mode as a subset of Lazy mode can be emulated by using the PT_HPU_MAX_COMPOUND_OP_SIZE environment variable to limit cluster sizes to 1. See Eager Mode.
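
    A minimal sketch of this emulation, assuming the variable must be set before the Habana PyTorch bridge is imported:

      import os

      # Limit Lazy-mode compound-op cluster sizes to 1 to emulate Eager-mode behavior.
      os.environ["PT_HPU_MAX_COMPOUND_OP_SIZE"] = "1"

      import torch
      import habana_frameworks.torch.core as htcore

      x = torch.ones(4, device="hpu") * 2
      htcore.mark_step()  # each cluster now contains a single op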

  • Model checkpointing for ResNet50 in torch.compile mode is broken. This will be fixed in the next release.

  • Timing events with enable_timing=True may not provide accurate timing information.

  • Handling of dynamic shapes can be enabled by setting the PT_HPU_ENABLE_REFINE_DYNAMIC_SHAPES flag. The flag is disabled by default but is enabled selectively for several models. For best performance, follow the guidance on working with dynamic shapes in the Handling Dynamic Shapes document.

  • Graphs displayed in TensorBoard have some minor limitations, e.g., an operator’s assigned device is displayed as “unknown device” when it is scheduled on HPU.

  • HPU tensor strides might not match those of CPU tensors, as tensor storage is managed differently. References to tensor storage (such as torch.as_strided) should take the input tensor strides into account explicitly. It is recommended to use other view functions instead of torch.as_strided. For further details, see Tensor Views and TORCH.AS_STRIDED.
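
    A minimal sketch contrasting the recommended view functions with torch.as_strided, assuming a tensor already resident on HPU:

      import torch
      import habana_frameworks.torch.core as htcore

      t = torch.arange(12, device="hpu").reshape(3, 4)

      # Preferred: view functions that derive strides from the input tensor.
      rows = t[1:]              # slicing view
      flat = t.reshape(-1)      # reshape view
      cols = t.transpose(0, 1)  # transposed view

      # If torch.as_strided is unavoidable, take the input tensor's strides into
      # account explicitly instead of assuming CPU-contiguous strides.
      window = torch.as_strided(t, size=(2, 4), stride=t.stride())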

  • Weights sharing:

    • Weights can be shared among two or more layers using PyTorch with Gaudi only if they are created inside the module, as in the sketch after this list. For more details, refer to Weight Sharing.

    • Weights are not shared with operators outside of the PyTorch library (i.e. PyBind11 functions).
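
    A minimal sketch of sharing a weight between two layers inside the module, before the module is moved to HPU:

      import torch
      import habana_frameworks.torch.core as htcore

      class TiedLinear(torch.nn.Module):
          def __init__(self):
              super().__init__()
              self.fc1 = torch.nn.Linear(8, 8, bias=False)
              self.fc2 = torch.nn.Linear(8, 8, bias=False)
              # Share the weight inside the module, before moving it to HPU.
              self.fc2.weight = self.fc1.weight

          def forward(self, x):
              return self.fc2(self.fc1(x))

      model = TiedLinear().to("hpu")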

  • User-defined attributes in HPU torch.nn.Parameter are not preserved after torch.nn.Parameter is assigned with a CPU tensor.

  • Python API habana_frameworks.torch.hpu.current_device() returns 0 regardless of the actual device being used.

  • For torch.nn.Parameter which is not created inside torch.nn.Module:

    • When two torch.nn.Parameter objects are on CPU storage and reference the same parameter, the connection is lost if one of them is moved to HPU.

    • Assigning a CPU tensor to HPU torch.nn.Parameter is not supported.

  • Saving metrics to a file configured using Runtime Environment Variables is not supported for workloads spawned via torch.multiprocessing.

  • Using torch.device(hpu:x) (for example, with model.to), where x is a rank > 0, may lead to memory leaks. Instead, always use torch.device(hpu) to access the current rank.
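
    A minimal sketch of the recommended device addressing:

      import torch
      import habana_frameworks.torch.core as htcore

      model = torch.nn.Linear(16, 16)

      # Recommended: address the current rank's device without an explicit index.
      model = model.to(torch.device("hpu"))

      # Avoid on ranks > 0: an explicit index such as "hpu:1" may lead to memory leaks.
      # model = model.to(torch.device("hpu:1"))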

  • Added the capability to serialize constant tensors, enabling recipe caching to disk for inference scenarios. However, due to a technical limitation, sharing recipes between cards on a single server is not possible. Recipes from each card are stored in separate directories, leading to increased usage of disk space.

  • Performing view-related operations on tensors with INT64 data type (torch.long) in Lazy mode can lead to incorrect results. If this data type is not required, the script should work with INT32 tensors (torch.int). By default, PyTorch creates integer tensors with torch.long data type, so make sure to explicitly create INT32 tensors. This limitation does not apply to Eager + torch.compile mode (PT_HPU_LAZY_MODE=0).
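
    A minimal sketch of explicitly creating INT32 tensors for Lazy-mode view operations:

      import torch
      import habana_frameworks.torch.core as htcore

      # torch.tensor([...]) defaults to torch.long (INT64); request torch.int (INT32)
      # explicitly when the 64-bit range is not needed.
      idx = torch.tensor([0, 2, 4], dtype=torch.int32, device="hpu")

      # View-related operations on INT32 tensors avoid the Lazy-mode INT64 limitation.
      reshaped = idx.reshape(3, 1)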

  • Launching ops with tensor inputs from mixed devices (hpu and cpu) is not supported in Eager + torch.compile mode (PT_HPU_LAZY_MODE=0); all tensors must reside on hpu. Launching ops with tensor inputs from mixed devices is supported in Lazy mode, in which internal transfers to hpu are performed.