Getting Started with SynapseAI Profiler
This guide describes how to profile SynapseAI applications without making any changes to the application code.
Configure the SynapseAI Profiling Subsystem using the hl-prof-config tool.
$ hl-prof-config -e off -phase=multi-enq -g 1-20 -s my_profiling_session
Set an environment variable to enable the profiling subsystem.
$ export HABANA_PROFILE=1
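Because the variable must be present in the environment of the shell that launches the application, a quick sanity check before running is straightforward (a minimal sketch; nothing here is Habana-specific):

```shell
# Enable the SynapseAI profiling subsystem for processes launched
# from this shell, then confirm the variable is visible.
export HABANA_PROFILE=1
echo "HABANA_PROFILE=${HABANA_PROFILE}"
```

Any application started from this shell afterwards inherits the variable.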
Execute any SynapseAI-based application (such as PyTorch, TensorFlow, or a direct SynapseAI API application). When the run completes, a file with the .hltv suffix appears in your working directory.
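If several profiled runs have accumulated, the newest trace is usually the one of interest. A small sketch for locating it follows; the directory and file names are dummies created only to make the example self-contained:

```shell
# Sketch: locate the newest .hltv trace in a directory after a profiled
# run. The directory and trace names below are illustrative placeholders.
workdir=$(mktemp -d)
touch "$workdir/old_run.hltv"
sleep 1                                # ensure distinct modification times
touch "$workdir/my_profiling_session.hltv"
# Sort by modification time, newest first, and take the top entry.
newest=$(ls -t "$workdir"/*.hltv | head -n 1)
echo "newest trace: $newest"
rm -rf "$workdir"
```

In a real session you would run the same `ls -t` over your actual working directory instead of a temporary one.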
Upload the file to https://hltv.habana.ai to view SynapseAI API calls and hardware trace events.
The hl-prof-config command shown above configures the profiler to capture the model execution from the first graph launch through the 20th graph launch.
If it appears that a significant part of the model is missing from the trace, you can modify the -g flag of hl-prof-config to widen the range of captured graph launches.
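For example, widening the capture window to the first 50 graph launches could look like the following; the range value is illustrative, and the remaining flags repeat the command from the beginning of this guide:

```shell
# Reconfigure the profiler with a wider graph-launch window.
# "1-50" is an illustrative range; all other flags are unchanged from
# the original command above.
hl-prof-config -e off -phase=multi-enq -g 1-50 -s my_profiling_session
```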