Getting Started with SynapseAI Profiler

This guide describes how to profile SynapseAI applications in a few simple steps, without making any changes to the application code.

  1. Modify a configuration file in the SynapseAI Profiling Subsystem.

$ hl-prof-config -e off -phase=multi-enq -g 1-20 -s my_profiling_session

  2. Set an environment variable to enable the profiling subsystem.

$ export HABANA_PROFILE=1

  3. Execute any SynapseAI-based application (for example, a PyTorch or TensorFlow workload, or a program that uses the SynapseAI API directly). Once the run completes, a file whose name starts with the my_profiling_session prefix and ends with the .hltv suffix is written to your working directory.
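
For example, assuming a hypothetical PyTorch training script named my_model.py stands in for your own workload, a profiled run and the resulting trace file could look like this:

$ python my_model.py
$ ls my_profiling_session*.hltv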

  4. Upload the file to https://hltv.habana.ai and view SynapseAI API calls and hardware trace events.

[Figure: hltv viewer main screen]

Note

The hl-prof-config command above is configured to capture model execution from the first graph launch through the 20th (-g 1-20). If a significant part of the model appears to be missing from the trace, modify the value passed to the -g flag to cover a wider range of graph launches.
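
For example, re-running the configuration command from step 1 with a wider range (1-40 here is an arbitrary illustrative value) captures more graph launches:

$ hl-prof-config -e off -phase=multi-enq -g 1-40 -s my_profiling_session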