Optimizing Training Platform

The following optimization methods should be applied in your on-premises environment.

Ensure the MPI Command Binds the Correct Number of Cores per Socket

You can use this method to ensure CPU affinity and core allocation are set correctly for your processes. In the example below, six cores are allocated to one process, which controls one Gaudi device on the same socket as those six cores:

--bind-to core --map-by socket:PE=6
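
For illustration, a complete launch command might look like the following. This is a minimal sketch that assumes an eight-card server and a hypothetical training script named train.py; adjust the process count and PE value to match your platform:

mpirun -n 8 --bind-to core --map-by socket:PE=6 --report-bindings python3 train.py

The --report-bindings flag prints the resulting CPU binding for each rank, which is a quick way to verify the affinity is what you expect.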

Note

This optimization method also applies to AWS DL1 instances.

CPU affinity can improve performance not only for distributed training across multiple Gaudi devices in one server, but also for training on a single Gaudi device.

This number can be calculated from the number of CPUs, the number of threads per core (both reported by the `lscpu` command), and the number of Gaudi devices. For example, the HLS-1 platform contains eight devices, while other servers may have four. In the example below, there are 96 CPUs and 2 threads per core, so 96 / 2 / 8 = 6 cores per Gaudi.

The sample code below calculates the physical CPU core count and the HPU count to generate the appropriate PE value, shown as MPI_PE. It can be incorporated into any model:

# Count physical CPU cores (unique CORE,SOCKET pairs reported by lscpu)
export PHY_CPU_COUNT=$(lscpu --all --parse=CORE,SOCKET | grep -Ev "^#" | sort -u | wc -l)
# Count Gaudi (HPU) devices exposed as /dev/hl<N>
export PHY_HPU_COUNT=$(ls /dev/hl? | wc -l)
# Physical cores available per HPU
export MPI_PE=$(($PHY_CPU_COUNT/$PHY_HPU_COUNT))

The PE value in the Model-References examples may be set to a common number to ensure functionality, but for optimal performance it should be derived from the host CPU as described above.
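
As a sketch of how the computed value might be consumed, the exported variables can be substituted directly into the launch command (the training script name here is a placeholder):

mpirun -n $PHY_HPU_COUNT --bind-to core --map-by socket:PE=$MPI_PE --report-bindings python3 train.py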

Set the CPU Scaling Governor to Performance

The following is an example of setting the CPU scaling governor to performance on Ubuntu:

# Get the current scaling governor for each CPU:
cat /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor

# Set the scaling governor to performance:
echo performance | sudo tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor
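
To confirm the change took effect on every core, the governor values can be checked in aggregate. This is a small sketch assuming the cpufreq sysfs interface is available on your system:

# Count governors by value; all cores should report "performance"
cat /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor | sort | uniq -c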

Ensure System Components Are Updated

Refer to the Installation Guide for the latest Docker run commands, FW and SW packages, and BMC and CPLD versions, and make sure they are installed properly.
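
As a quick spot check of what is currently installed, driver and firmware information can be queried from the command line. This is a sketch assuming an Ubuntu host with the Habana tools on the PATH; package names and output format can vary between releases:

# Show device, driver, and firmware information reported by the Habana tools
hl-smi

# List installed Habana packages (Debian/Ubuntu example)
dpkg -l | grep -i habana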

Ensure Dataset/Model Code/Output Utilize High-Performance NVMe/SSD Storage

For best performance, store all datasets, model code, and output locations on NVMe (or SSD) storage when running training.
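
To verify which device actually backs a given path, standard Linux tools can be used. This is a minimal sketch; /data/dataset is a placeholder path:

# Show the filesystem and mount point backing the dataset directory
df -h /data/dataset

# ROTA=0 indicates non-rotational storage (SSD/NVMe); TRAN shows the transport (e.g. nvme)
lsblk -o NAME,TRAN,ROTA,SIZE,MOUNTPOINT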