Custom Habana Ops for PyTorch

Habana provides its own implementations of some complex PyTorch ops, customized for Habana devices. Replacing these ops in a given model with their custom Habana versions gives better performance.

Custom Optimizers

The following is a list of custom optimizers currently supported on Habana devices:

Note

FusedAdamW uses a default epsilon value of 1e-6, instead of the 1e-8 default used by torch.optim.AdamW.
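
The snippet below is a minimal sketch of the note above: it passes eps explicitly to match the torch.optim.AdamW default. It assumes FusedAdamW mirrors the torch.optim.AdamW constructor arguments (lr, eps, weight_decay); the model and hyperparameter values are placeholders.

import torch
import habana_frameworks.torch.core as htcore  # loads the HPU backend
from habana_frameworks.torch.hpex.optimizers import FusedAdamW

model = torch.nn.Linear(128, 128).to("hpu")

# FusedAdamW defaults to eps=1e-6; pass eps explicitly if the
# torch.optim.AdamW default of 1e-8 is required.
optimizer = FusedAdamW(model.parameters(), lr=1e-3, eps=1e-8, weight_decay=0.01)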

The following code snippet demonstrates the usage of a custom optimizer:

try:
   from habana_frameworks.torch.hpex.optimizers import FusedLamb
except ImportError:
   raise ImportError("Please install habana_torch package")

# Use the fused LAMB implementation in place of a standard LAMB optimizer
optimizer = FusedLamb(model.parameters(), lr=args.learning_rate)

Note

For models using Lazy mode execution, mark_step() must be added right after loss.backward() and optimizer.step().
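
A sketch of the placement described in the note is shown below. The training-loop names (model, dataloader, loss_fn, optimizer) are placeholders; mark_step() comes from habana_frameworks.torch.core.

import habana_frameworks.torch.core as htcore

for data, target in dataloader:
   data, target = data.to("hpu"), target.to("hpu")
   optimizer.zero_grad()
   output = model(data)
   loss = loss_fn(output, target)
   loss.backward()
   htcore.mark_step()  # trigger graph execution after the backward pass
   optimizer.step()
   htcore.mark_step()  # trigger graph execution after the optimizer step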

Other Custom Ops

The following is a list of other custom ops currently supported on Habana devices:

The following code snippet demonstrates the usage of FusedClipNorm:

try:
   from habana_frameworks.torch.hpex.normalization import FusedClipNorm
except ImportError:
   raise ImportError("Please install habana_torch package")

FusedNorm = FusedClipNorm(model.parameters(), args.max_grad_norm)

# Clip the gradients of all model parameters to max_grad_norm
FusedNorm.clip_norm(model.parameters())
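
As a hedged placement sketch, assuming a Lazy mode training step as in the note above, clip_norm is applied to the gradients after the backward pass and before the optimizer step; the surrounding loop and variable names are illustrative.

loss.backward()
htcore.mark_step()
FusedNorm.clip_norm(model.parameters())  # clip gradients before updating weights
optimizer.step()
htcore.mark_step()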