Release Notes v1.11.0
New Features and Enhancements - 1.11.0
The following documentation and packages correspond to the latest software release version from Habana: 1.11.0-587. We recommend using the latest release where possible to stay aligned with performance improvements and updated model coverage. Please refer to the Installation Guide for further details.
General Features
Upgraded Open MPI to version 4.1.5.
PyTorch
DeepSpeed:
Upgraded Habana’s DeepSpeed fork to version 0.9.4.
DeepSpeed now enforces deterministic behavior upon initialization. See the DeepSpeed User Guide for Training.
Enabled DeepSpeed Chat for training on 8 Gaudi2 cards. See Model References GitHub repository.
Validated this release of SynapseAI on PyTorch Lightning v2.0.4 and added support for DeepSpeed on HPU. See https://lightning.ai/docs/pytorch/stable/integrations/hpu/advanced.html.
Some Habana reference models now use the PT_HPU_ENABLE_REFINE_DYNAMIC_SHAPES environment variable, which allows the Habana PyTorch bridge and Graph Compiler to automatically manage dynamic shapes. See Runtime Environment Variables and Handling Dynamic Shapes.
Enabled FusedSDPA on HPU - a fused implementation of the nn.functional.scaled_dot_product_attention() API. See hpex/kernels/FusedSDPA.
Users can now retrieve metrics for cpu_fallback, memory_defragmentation, and recipe_cache using the Metric APIs.
Habana Mixed Precision (HMP) will be removed in the next release. Users should now switch to Autocast for mixed precision support.
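As an illustration, the standard PyTorch API that FusedSDPA implements can be exercised on any device; the snippet below runs on CPU with stock PyTorch (tensor names and sizes are arbitrary), while on Gaudi the fused hpex/kernels/FusedSDPA kernel provides the same semantics:

```python
import torch
import torch.nn.functional as F

# Illustrative shapes: (batch, heads, seq_len, head_dim).
q = torch.randn(2, 4, 16, 8)
k = torch.randn(2, 4, 16, 8)
v = torch.randn(2, 4, 16, 8)

# Scaled dot-product attention; FusedSDPA on HPU is a fused
# implementation of this same call.
out = F.scaled_dot_product_attention(q, k, v)
print(out.shape)  # torch.Size([2, 4, 16, 8])
```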
Published the scripts to reproduce MLPerf 3.0 results with GPT3, UNet3D, ResNet and BERT Benchmarks on Gaudi2 with the latest SynapseAI release. See Model References GitHub repository.
TensorFlow
Upgraded to TensorFlow v2.12.1.
Known Issues and Limitations - 1.11.0
PyTorch
Using torch.compile is not supported. An error message will appear in the error logs.
Support for dynamic shapes is limited. Guidance on how to work with dynamic shapes is included in Handling Dynamic Shapes.
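One common way to limit the number of distinct shapes the graph compiler sees (in the spirit of the guidance in Handling Dynamic Shapes) is to bucket variable input sizes; this is a minimal sketch in plain Python, with hypothetical bucket boundaries:

```python
def bucket_length(n, buckets=(128, 256, 512, 1024)):
    """Round a dynamic sequence length up to a fixed bucket so that
    only a few distinct shapes are ever compiled. The bucket sizes
    here are illustrative; tune them to your workload."""
    for b in buckets:
        if n <= b:
            return b
    raise ValueError(f"length {n} exceeds largest bucket {buckets[-1]}")

print(bucket_length(100))  # 128
```

Inputs are then padded to the returned bucket size before being fed to the model, so each bucket compiles exactly once.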
Graphs displayed in TensorBoard have some minor limitations, e.g., an operator's assigned device is displayed as "unknown device" when it is scheduled to HPU.
HPU tensor strides might not match those of CPU, as tensor storage is managed differently. References to tensor storage (such as torch.as_strided) should take the input tensor strides into account explicitly. It is recommended to use other view functions instead of torch.as_strided. For further details, see Tensor Views and TORCH.AS_STRIDED.
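For example, the stride-based access patterns people often build with torch.as_strided can usually be expressed with explicit view functions, which stay correct regardless of how the backing storage is laid out (a small sketch with arbitrary data):

```python
import torch

t = torch.arange(12)

# Explicit view functions instead of torch.as_strided:
rows = t.reshape(3, 4)   # view the flat tensor as a 3x4 matrix
col = rows[:, 1]         # a column view via indexing
win = t.unfold(0, 4, 2)  # sliding windows of size 4 with step 2

print(win.shape)  # torch.Size([5, 4])
```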
Weights sharing:
Weights can be shared among two or more layers using PyTorch with Gaudi only if they are created inside the module. For more details, refer to Weight Sharing.
Weights are not shared with operators outside of the PyTorch library (i.e. PyBind11 functions).
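The supported weight-sharing pattern can be sketched as follows (module and attribute names are illustrative): both layers are created inside one nn.Module and are tied to a single Parameter.

```python
import torch
import torch.nn as nn

class SharedBlock(nn.Module):
    """Minimal sketch of the supported pattern: the shared Parameter
    is created inside the module, so the tie is preserved."""
    def __init__(self, dim=8):
        super().__init__()
        self.fc1 = nn.Linear(dim, dim, bias=False)
        self.fc2 = nn.Linear(dim, dim, bias=False)
        self.fc2.weight = self.fc1.weight  # one Parameter, two layers

    def forward(self, x):
        return self.fc2(self.fc1(x))

block = SharedBlock()
print(block.fc1.weight is block.fc2.weight)  # True
```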
User-defined attributes in an HPU torch.nn.Parameter are not preserved after the torch.nn.Parameter is assigned with a CPU tensor.
EFA installation on Habana's containers includes OpenMPI 4.1.2, which does not recognize the CPU cores and threads properly in a KVM virtualized environment. To enable identifying the CPU/thread configuration, replace mpirun with mpirun --bind-to hwthread --map-by hwthread:PE=3. This limitation is not applicable to AWS DL1 instances.
The Python API habana_frameworks.torch.hpu.current_device() returns 0 regardless of the actual device being used.
For a torch.nn.Parameter which is not created inside torch.nn.Module:
When two torch.nn.Parameter objects are on CPU storage and reference the same parameter, the connection will be lost if one of them is moved to HPU.
Assigning a CPU tensor to an HPU torch.nn.Parameter is not supported.
Training/Inference using HPU Graphs: HPU Graphs offer the best performance with minimal host overhead. However, their functionality is currently limited:
Only models that run completely on HPU have been tested. Models that contain CPU Ops are not supported. During HPU Graphs capturing, if an Op is not supported, the following message will appear: "… is not supported during HPU Graph capturing".
HPU Graphs can be used only to capture and replay static graphs. Dynamic shapes are not supported.
Data Dependent dynamic flow is not supported with HPU Graphs.
Capturing HPU Graphs on models containing in-place view updates is not supported.
Saving metrics to a file configured using Runtime Environment Variables is not supported for workloads spawned via torch.multiprocessing.
Habana Communication Library
Single Process Multiple Device support in HCCL: Since multiple processes are required for multi-node (cross-chassis) scaling, HCCL supports only one device per process, so that users do not need to differentiate between the inter-node and intra-node usage cases.
When running more than 8 HLS2 nodes (64 Gaudi2 devices) where the number of HLS2 boxes is not a power of 2 (e.g., 15 HLS2 boxes, 120 Gaudi2 cards), the hcclAllReduce operation may yield wrong results. This may happen in an extreme corner case, depending on the hcclRedOp and specific timing conditions.
Qualification Tool Library
Before running the following plugin tests, make sure to set the __python_cmd environment variable (export __python_cmd=python3):
ResNet-50 Training Stress Test
Memory Bandwidth Test
PCI Bandwidth Test
Gaudi2 only: When running the BER test, use the switch setting -waitTime200.
TensorFlow
When using the TF dataset cache feature with a large dataset, setting hugepages for host memory may be required. Refer to the SSD_ResNet34 Model Reference for instructions on setting hugepages.
Models currently based on TensorFlow V1 need to be converted to TensorFlow2. The TF XLA compiler option is currently not supported.
Control flow ops such as tf.cond and tf.while_loop are currently not supported on Gaudi and will fall back on CPU for execution.
Eager Mode feature in TensorFlow2 is not supported and must be disabled to run TensorFlow models on Gaudi. To disable Eager mode, see Creating a TensorFlow Example.
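Disabling Eager mode uses the stock TensorFlow compat API (as described in Creating a TensorFlow Example); a minimal sketch:

```python
import tensorflow as tf

# Eager mode must be disabled before building and running TF2 models
# on Gaudi; this is the standard TensorFlow 2 compat call.
tf.compat.v1.disable_eager_execution()
print(tf.executing_eagerly())  # False
```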
Distributed training with tf.distribute is enabled only with HPUStrategy. Other TensorFlow built-in distribution strategies such as MirroredStrategy, MultiWorkerMirroredStrategy, CentralStorageStrategy, ParameterServerStrategy are not supported.
EFA installation on Habana's containers includes OpenMPI 4.1.2, which does not recognize the CPU cores and threads properly in a KVM virtualized environment. To enable identifying the CPU/thread configuration, replace mpirun with mpirun --bind-to hwthread --map-by hwthread:PE=3. This limitation is not applicable to AWS DL1 instances.
(Gaudi2) In rare cases, when a hardware-accelerated media loader is used, a segmentation fault occurs when closing TensorFlow after training is completed. This may happen due to an error in the order in which the Python interpreter unloads the modules. This issue does not affect training results.