Enabling Mixed Precision
To run mixed precision training on HPU without extensive modifications to existing FP32 model scripts, Habana provides two options: native PyTorch autocast support and the Habana Mixed Precision (HMP) package.
Note
Using autocast is recommended, as it is part of the native PyTorch distribution.
Using Native PyTorch Autocast on HPU
Autocast is a native PyTorch module that enables mixed precision training. It executes operations registered to autocast in a lower precision floating-point datatype. The module is provided by the torch.amp package.
To use autocast on HPU, wrap the forward pass (model + loss) of the training in torch.autocast:
with torch.autocast(device_type="hpu", dtype=torch.bfloat16):
    # Forward pass and loss computation run under autocast
    output = model(input)
    loss = loss_fn(output, target)
# The backward pass runs outside the autocast context
loss.backward()
For further information, such as the full default list of registered ops or instructions on creating a custom ops list, see Native PyTorch Autocast.
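As a point of reference, the following is a minimal, self-contained sketch of a single training step using autocast on HPU. The model, loss function, optimizer, and input data here are placeholders chosen for illustration, and the habana_frameworks import together with htcore.mark_step() reflects the typical lazy-mode execution flow; adapt these to your own script.

import torch
import habana_frameworks.torch.core as htcore  # loads the HPU backend

# Placeholder model, loss, and optimizer for illustration
model = torch.nn.Linear(10, 10).to("hpu")
loss_fn = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

input = torch.randn(8, 10).to("hpu")
target = torch.randint(0, 10, (8,)).to("hpu")

optimizer.zero_grad()

# Forward pass and loss computation run in bf16 where registered ops allow it
with torch.autocast(device_type="hpu", dtype=torch.bfloat16):
    output = model(input)
    loss = loss_fn(output, target)

# Backward pass and optimizer step run outside the autocast context
loss.backward()
htcore.mark_step()  # triggers graph execution in lazy mode
optimizer.step()
htcore.mark_step()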
Habana Mixed Precision
The Habana Mixed Precision (HMP) package is a legacy capability. You can add mixed precision training support to the model script by adding the following lines anywhere in the script before the start of the training loop:
from habana_frameworks.torch.hpex import hmp
hmp.convert()  # enables mixed precision with the default ops lists
For further information, such as the full default list of ops or instructions on creating a custom ops list, see Habana Mixed Precision.
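For context, the sketch below shows where hmp.convert() fits in a complete script. Only the import and the hmp.convert() call are HMP-specific; the model, data, and optimizer are placeholders chosen for illustration, and htcore.mark_step() reflects the typical lazy-mode execution flow.

import torch
import habana_frameworks.torch.core as htcore  # loads the HPU backend
from habana_frameworks.torch.hpex import hmp

# Enable HMP with the default ops lists before the training loop starts
hmp.convert()

# Placeholder model, loss, and optimizer for illustration
model = torch.nn.Linear(10, 10).to("hpu")
loss_fn = torch.nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

for _ in range(3):
    input = torch.randn(8, 10).to("hpu")
    target = torch.randn(8, 10).to("hpu")
    optimizer.zero_grad()
    output = model(input)
    loss = loss_fn(output, target)
    loss.backward()
    optimizer.step()
    htcore.mark_step()  # triggers graph execution in lazy mode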