Runtime Environment Variables

The following runtime flags can be set in the environment to change behavior and to enable or disable certain features.

Each entry below lists the flag name, its default value, a description, and the component that consumes it.

PT_HPU_LAZY_MODE
Default: 1
Description: Controls the execution mode.
  • 0x1 - Lazy mode
  • 0x2 - Eager mode
Consumer: Habana PyTorch Bridge Modules
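
A minimal sketch of selecting the execution mode from Python. Environment flags like this are typically read when the bridge initializes, so they should be set before the Habana modules are imported; the value shown here is just one of the modes listed above.

```python
import os

# Select Eager mode (0x2). Set this before importing torch / Habana
# modules, since the bridge reads the flag at initialization.
os.environ["PT_HPU_LAZY_MODE"] = "2"
```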

PT_HPU_LOG_MOD_MASK
Default: All modules
Description: A bitmask specifying the Habana PyTorch Bridge modules for which logging is enabled.
  • 0x1 - Device logs
  • 0x2 - PT kernel/ops logs
  • 0x4 - Bridge logs
  • 0x8 - SynapseAI helper logs
  • 0x10 - Distributed module logs
  • 0x20 - Lazy mode logs
  • 0x80 - CPU fallback logs
Consumer: Habana PyTorch Bridge Modules
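
A sketch of composing the module mask by OR-ing the bit values listed above, here enabling bridge logs (0x4) and lazy mode logs (0x20). It is an assumption that the flag is parsed as a decimal integer string; adjust the formatting if your version expects hex notation.

```python
import os

# Bit values from the PT_HPU_LOG_MOD_MASK table above.
BRIDGE_LOGS = 0x4
LAZY_MODE_LOGS = 0x20

mask = BRIDGE_LOGS | LAZY_MODE_LOGS  # 0x24 == 36
os.environ["PT_HPU_LOG_MOD_MASK"] = str(mask)
```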

PT_HPU_LOG_TYPE_MASK
Default: 1
Description: A bitmask specifying the logging levels enabled for Habana PyTorch Bridge logs from SynapseAI and perf_lib.
  • 0x1 - WARNING
  • 0x2 - TRACE
  • 0x4 - DEBUG
Consumer: Habana PyTorch Bridge

ENABLE_CONSOLE
Default: False
Description: If set to true, prints SynapseAI logs to the console.
Consumer: SynapseAI

LOG_LEVEL_ALL
Default: 5
Description: Logging level for SynapseAI and perf_lib.
  • 0 - verbose
  • 6 - no logs
By default, logs are written under ~/.habana_logs/; if ENABLE_CONSOLE=true, they are printed to the console instead.
Consumer: SynapseAI
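
The two SynapseAI flags above are often combined when debugging: a minimal sketch that routes logs to the console at maximum verbosity, assuming the flags are read when the library initializes.

```python
import os

# Print SynapseAI logs to the console instead of ~/.habana_logs/ ...
os.environ["ENABLE_CONSOLE"] = "true"
# ... and at the most verbose level (0 = verbose, 6 = no logs).
os.environ["LOG_LEVEL_ALL"] = "0"
```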

GRAPH_VISUALIZATION
Default: False
Description: Creates graph visualization files. The output graph dumps are placed in the ./.graph_dumps folder.
Consumer: SynapseAI

PT_RECIPE_CACHE_PATH
Default: Unset
Description: Directory path where compiled graph recipes are stored to accelerate a scale-up scenario. Only one process compiles a given recipe; the other processes read it from disk.
If unset (default), compiled graph recipes are not stored on disk (recipe disk caching is disabled).
Note: The recipe cache directory is cleared automatically to prevent unintended disk space consumption. If automatic deletion fails, for example after an abrupt session termination or when a different path is used, clear the cache directory manually.
Note: If the recipe cache is shared among several processes (scale up), it must be stored on a local physical disk. Avoid remote drives (such as NFS) where file locks are not supported, as this may lead to instability and unpredictable behavior.
Consumer: Habana PyTorch Bridge
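
A sketch of enabling recipe disk caching; the path is hypothetical and should point at a local physical disk (not NFS), per the note above.

```python
import os

# Hypothetical cache location on local disk; avoid remote drives such as NFS.
cache_dir = "/tmp/hpu_recipe_cache"
os.makedirs(cache_dir, exist_ok=True)
os.environ["PT_RECIPE_CACHE_PATH"] = cache_dir
```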

PT_HPU_MAX_COMPOUND_OP_SIZE
Default: INT64_MAX
Description: Limits the internal graph size to the specified number of ops, reducing the lazy mode memory overhead. This will be improved in future releases.
Note: This may affect performance.
Consumer: Habana PyTorch Bridge