Runtime Environment Variables
The following table describes runtime flags that are set in the environment to change runtime behavior and to enable or disable certain features.
| Flag | Default | Description | Consumer |
|---|---|---|---|
| | 1 | Controls the execution mode. | Habana PyTorch Bridge Modules |
| | False | Creates graph visualization files. The dumped graphs are placed in the `./.graph_dumps` folder. | SynapseAI |
| | Unset | Holds the recipe cache configuration, encoded as a comma-separated list. Note: If a recipe cache is shared among several processes (scale-up), it must be stored on a local physical disk. Avoid remote drives (such as NFS) where file locks are not supported, as this may lead to instability and unpredictable behavior. | Habana PyTorch Bridge Modules |
| | INT64_MAX | Limits the internal graph size to the specified number of ops, reducing lazy mode memory overhead. This will be improved in future releases. Note: This may affect performance. | Habana PyTorch Bridge Modules |
| | False | The dynamic shapes feature is disabled by default. If a model experiences excessive recompilations due to dynamic data or ops, this variable can be set to let the Habana PyTorch Bridge and Graph Compiler automatically manage dynamic shapes in model scripts. Graphs are automatically bucketed and padded into ranges to achieve a common size, reducing recompilations and improving performance for dynamic workloads. | Habana PyTorch Bridge |
| | 30000 | If cache evictions cause performance degradation, increasing the cache size improves performance. The default value is 30000. Note: The performance boost may increase host memory consumption. | Habana PyTorch Bridge Modules |
| | 1 | Turns on host time optimization of lazy ops accumulation. It offloads ops accumulation to a separate thread, reducing computation time on the main thread. | Habana PyTorch Bridge Modules |
| | Unset | Path (file) where the collected metrics are stored. Metrics are stored in a file only when this variable is set. | Habana PyTorch Bridge Modules |
| | process_exit | Determines when metrics are dumped. Multiple triggers can be enabled together by separating them with a comma. | Habana PyTorch Bridge Modules |
| | json | Metrics file format. Both JSON and TEXT formats are supported. | Habana PyTorch Bridge Modules |
| | True | Enables generic streams [2], which allow a user to submit different types of operations in the same user stream [1]. For a usage example of this flag, see the Wav2Vec2 inference script. | Habana PyTorch Bridge Modules |
| | false | Temporary performance improvement option for torch.compile mode only. Because torch.compile is in an early adoption phase, many operations are still executed eagerly outside of graphs. Eager execution is less performant, and this option reduces its negative performance impact. | Habana PyTorch Bridge Modules |
| | false | Accelerates Eager mode by enabling a multithreaded pipeline in operations processing. | Habana PyTorch Bridge Modules |

[1] User stream: a queue of device work. The host (user) places work in this queue and continues immediately. The device schedules the work in this queue when resources are free.

[2] Generic stream: a user stream to which all operations can be pushed irrespective of the type of operation (copy, compute, or collective).
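Because these flags are plain environment variables, they must be set in the process environment before the Habana PyTorch Bridge initializes. The sketch below shows the typical consumption pattern for such flags (unset means "use the default"; booleans and integers are parsed from the shell string); the helper and flag name here are illustrative, not part of the Habana API:

```python
import os

def read_flag(name, default, cast=str):
    """Read a runtime flag from the environment, falling back to a default.

    Mirrors how flags like the ones in the table above are consumed:
    an unset variable means 'use the default', and boolean or integer
    values are parsed from the string provided by the shell.
    """
    raw = os.environ.get(name)
    if raw is None:
        return default
    if cast is bool:
        return raw.strip().lower() in ("1", "true", "yes")
    return cast(raw)

# Example: set a flag the way a launch script would, then read it back.
os.environ["EXAMPLE_RUNTIME_FLAG"] = "30000"   # hypothetical flag name
cache_size = read_flag("EXAMPLE_RUNTIME_FLAG", 30000, cast=int)
```

In a real run, the `export VAR=value` would happen in the launch script or shell, before Python starts, so the bridge sees the value at initialization.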
The following table describes runtime flags that are set in the environment to obtain SynapseAI and Habana PyTorch Bridge logs.
| Flag | Default | Description | Consumer |
|---|---|---|---|
| | 0 | A bitmask specifying the components inside the Habana PyTorch Bridge module that are allowed to use profilers. Note that certain profilers may require additional environment variables to be set. | Habana PyTorch Bridge Modules |
| | False | If set to | SynapseAI and Habana PyTorch Bridge |
| | 5 | Logging level for SynapseAI, perf_lib, and the Habana PyTorch Bridge. By default, logs are placed either in the console or in a log file, depending on configuration. | SynapseAI and Habana PyTorch Bridge |
| | 5 | Logging level for the Habana PyTorch Bridge. | Habana PyTorch Bridge |
| | 5 | Logging level for | Habana PyTorch Bridge |
| | 0 | Logging level for | Habana PyTorch Bridge |
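The log-level flags take a small integer, where lower values are more verbose and 5 (the common default above) keeps output mostly quiet. A minimal sketch of raising verbosity before launch; the flag name is illustrative, not an actual Habana variable, so substitute the per-component variable from the table above:

```python
import os

# Illustrative flag name; lower value = more verbose logging.
# setdefault keeps any value already exported by the shell.
os.environ.setdefault("EXAMPLE_LOG_LEVEL_FLAG", "3")

verbosity = int(os.environ["EXAMPLE_LOG_LEVEL_FLAG"])
print(f"requested log level: {verbosity}")
```

Like the runtime flags above, the level must be in the environment before the bridge or SynapseAI initializes its loggers; changing it mid-run has no effect.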