PyTorch Gaudi Theory of Operations¶
The PyTorch Habana bridge interfaces between the framework and SynapseAI software stack to drive the execution of deep learning models on the Habana Gaudi device.
The installation package provided by Habana contains modifications on top of the standard PyTorch release; this customized framework must be used to integrate PyTorch with the Habana bridge.
PyTorch deep learning model training scripts need to load the PyTorch Habana plugin library and import the habana_frameworks.torch.core module to integrate with the Habana bridge.
Further integration details can be found at Importing Habana Torch Library.
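For instance, a minimal integration sketch could look as follows. The htcore alias, the model, and the tensor shapes are illustrative only, and the bridge is assumed to expose the Gaudi device to PyTorch under the "hpu" device string:
import torch
import habana_frameworks.torch.core as htcore  # loads the PyTorch Habana plugin library

# Move an illustrative model and input to the Gaudi device
model = torch.nn.Linear(10, 10).to("hpu")
output = model(torch.randn(4, 10).to("hpu"))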
The Habana bridge supports various modes of execution for a PyTorch model. The following modes are supported:
Eager mode - op-by-op execution as defined in standard PyTorch eager mode scripts.
Lazy mode - deferred execution of graphs comprising ops delivered from the script op by op, similar to Eager mode. It provides the Eager mode programming experience with improved performance on Gaudi.
Eager Mode¶
During eager mode execution, the framework executes one op at a time from Python. The Habana bridge registers these ops for the Habana device and drives their execution on Gaudi. For any op that is not supported by the Habana device, the bridge falls back to the CPU to execute that op, and then continues execution of subsequent supported ops on the device.
Habana’s integration with PyTorch supports Eager execution mode. Although the default execution mode is Lazy, Eager mode can be enabled from within a Python script by setting an environment variable as shown below. Alternatively, you can set the environment variable on the command line.
The software stack reads this environment variable once at the time of initialization.
import os
os.environ["PT_HPU_LAZY_MODE"] = "2"  # enable Eager mode
Lazy Mode¶
With this mode, users retain the flexibility and benefits that come with the PyTorch define-by-run approach of the Eager mode. The Habana bridge internally accumulates these ops in a graph. The execution of the ops in the accumulated graph is triggered in a lazy manner, only when a tensor value is required by the user. This allows the bridge to construct a SynapseAI graph with multiple ops, which provides the SynapseAI graph compiler the opportunity to optimize the device execution for these ops. The figure below shows the architectural diagram for the PyTorch Habana full stack.
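As a rough sketch of this behavior (the tensor names, shapes, and ops are arbitrary), the ops below are only accumulated into a graph, and device execution is triggered once a tensor value is actually needed. htcore.mark_step(), provided by the bridge, can additionally be called to mark an explicit graph boundary:
import torch
import habana_frameworks.torch.core as htcore

a = torch.randn(64, 64).to("hpu")
b = torch.randn(64, 64).to("hpu")
c = torch.matmul(a, b)      # accumulated in the lazy graph, not executed yet
d = torch.relu(c).sum()     # still only accumulated
print(d.item())             # tensor value is required: graph is compiled and executed
htcore.mark_step()          # optional explicit boundary, e.g. once per training iteration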

Figure: PyTorch Habana Full Stack Architecture
PyTorch Habana Bridge¶
This section describes the major components of the PyTorch Habana Bridge.
SynapseAI Lowering Module¶
The SynapseAI Lowering module converts the framework-provided op or graph to SynapseAI. The PyTorch framework dispatches execution to the registered methods in the Habana bridge when an op is invoked on Habana tensors. In lazy evaluation mode, the Habana bridge internally builds a graph with the accumulated ops. Once a tensor is required to be evaluated, the graph that needs to be executed to produce that tensor is identified. Various optimization passes are applied to the graph, such as:
fusion of ops that are beneficial for Gaudi,
optimal placement of permutations for channel last memory format of tensors,
identification of persistent and non-persistent tensors, and of tensors with duplicate or aliased memory.
PyTorch Kernels¶
The PyTorch Kernel module within the Habana bridge provides the functionality to convert a PyTorch op into the appropriate SynapseAI ops. A PyTorch op may be implemented with a single TPC/MME op or with multiple ops. The PyTorch Kernel module adds this set of SynapseAI ops to the SynapseAI graph and converts the PyTorch tensors to SynapseAI tensors for building the SynapseAI graph.
Execution Module¶
The Execution Module in the Habana bridge provides the functionality to compile a SynapseAI graph and launch the resulting recipe asynchronously. The recipes are cached by the Habana bridge to avoid recompilation of the same graph. This caching is done at the eager op level as well as at the JIT graph level. During training, graph compilation is only required for the initial iteration; thereafter, the same compiled recipe is re-executed every iteration (with new inputs) unless there is a change in the ops being executed.
Memory Manager¶
The Habana bridge has a memory manager that optimally serves allocation and free requests from the device memory. The Habana bridge additionally provides the capability to create tensors with pinned memory, which reduces the time required for a DMA by avoiding a copy on the host side. The pinned memory feature can be expressed on a tensor with the existing flags provided by the PyTorch framework.
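For example, a host tensor can be pinned with the standard PyTorch flag before being copied to the device. This is a sketch; the shape and the "hpu" device string are illustrative:
import torch

# Pin the host-side buffer so the copy to the device can be done via DMA
# without an extra staging copy on the host.
host_batch = torch.randn(256, 3, 224, 224, pin_memory=True)
device_batch = host_batch.to("hpu", non_blocking=True)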
Mixed Precision Support¶
Habana PyTorch supports mixed precision execution using the Habana Mixed Precision (HMP) package. You can execute the ops in FP32 or BF16 precision. The HMP package modifies the Python operators to add the appropriate cast operations for the arguments before execution. Refer to PyTorch Mixed Precision Training on Gaudi for further details.
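A minimal sketch of enabling HMP is shown below; it assumes the hmp module is exposed under habana_frameworks.torch.hpex, as in Habana's reference models, and the model is illustrative. Refer to the linked mixed precision guide for the authoritative API and options:
import torch
# Assumption: HMP is available as habana_frameworks.torch.hpex.hmp
from habana_frameworks.torch.hpex import hmp

# Enable automatic FP32/BF16 casting for subsequent ops
hmp.convert()

model = torch.nn.Linear(128, 128).to("hpu")
out = model(torch.randn(32, 128).to("hpu"))  # ops on the BF16 list are cast automatically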
Distributed Training¶
Habana PyTorch implements the HCCL communication backend to support scale-up and scale-out. See Distributed Training with PyTorch.
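A minimal sketch of initializing the HCCL backend follows. The habana_frameworks.torch.distributed.hccl import path follows the pattern used in Habana's reference models and is an assumption here; rank, world size, and rendezvous settings are expected to come from the launcher:
import os
import torch
import torch.distributed as dist
# Assumption: importing this module registers the HCCL backend with torch.distributed
import habana_frameworks.torch.distributed.hccl

# Rank and world size are typically provided by the launcher (e.g. mpirun or torchrun)
dist.init_process_group(backend="hccl",
                        rank=int(os.environ.get("RANK", 0)),
                        world_size=int(os.environ.get("WORLD_SIZE", 1)))

t = torch.ones(4).to("hpu")
dist.all_reduce(t)  # collective communication over HCCL (scale-up/scale-out)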
Habana Media Loader¶
Habana Media Loader is an accelerated dataloader that can operate in different modes. The optimal one is selected based on the underlying hardware:
In Gaudi2, the dataloader uses hardware-based decoders for acceleration, lowering the load on the host CPU.
In first-gen Gaudi, it uses either the framework default dataloader or an AEON-based dataloader, depending on the use case. In both cases, the work is done on the host CPU.
For further details on Habana Media Loader setup and usage, refer to Habana Media Loader.
The Habana Media Loader inherits from the native torch.utils.data.DataLoader and maintains the same interface from the user's perspective. Internally, the Habana data loader falls back to the native torch data loader if the provided parameters are not supported.
The dataloader is imported and used similarly to the torch DataLoader. For example:
import habana_dataloader

loader = habana_dataloader.HabanaDataLoader(
    dataset, batch_size=args.batch_size, sampler=train_sampler,
    num_workers=args.workers, pin_memory=True, drop_last=True)
The following are full examples of models using Habana Media Loader with PyTorch: