Support Matrix
Note
For previous versions of the Support Matrix, please refer to Previous Support Matrix.
Supported Configurations and Components
The tables below detail the configurations and versions supported for first-gen Intel® Gaudi® AI accelerator, Intel® Gaudi® 2 AI accelerator, and Intel® Gaudi® 3 AI accelerator.
Intel Gaudi 3 AI Accelerator

| Intel Gaudi Software | 1.19.0 | | | | |
|---|---|---|---|---|---|
| Operating System | Ubuntu | Ubuntu | RHEL9 | SUSE | TencentOS |
| Version | 22.04 | 24.04 | 9.4 | 15.5 | 3.1 |
| Kernel | 5.4.0 and above | 6.8.0 | 5.14 and above | 5.14.21-150500.53-default | 5.4.0 and above |
| Python | 3.10, 3.11 [1] | 3.12 | 3.11 | 3.11 | 3.10 |
| OpenShift | 4.16 | | | | |
| Kubernetes | 1.28, 1.29, 1.30 | | | | |
| KubeVirt | 0.36 and above | | | | |
| Slurm | Forked from 24.11.0-0rc1 of the official Slurm | | | | |
| Docker | 25.0.1 | | | | |
| PyTorch | 2.5.1 - For a list of supported features, see PyTorch Support Matrix. | | | | |
| PyTorch Lightning or Lightning | 2.3.3 | | | | |
| lightning-habana | | | | | |
| Ray | 2.32.0 | | | | |
| DeepSpeed | Forked from 0.14.4 of the official DeepSpeed. For a list of supported features, see DeepSpeed. | | | | |
| Megatron-DeepSpeed | Forked from the official Megatron-DeepSpeed, PR #374 | | | | |
| Megatron-LM | Forked from the official Megatron-LM, core_r0.8.0 | | | | |
| Intel Neural Compressor (INC) | v3.2 | | | | |
| Open MPI | 4.1.6 | | | | |
| Libfabric | | | | | |
| Optimum for Intel Gaudi [2] | | | | | |
| Text Generation Inference | 2.0.6 with Optimum for Intel Gaudi 1.15.0 | | | | |
| Transformers | 4.45.2 with Optimum for Intel Gaudi 1.15.0 | | | | |
| vLLM | Forked from v0.6.4.post2 of the official vLLM | | | | |

[1] Ubuntu 22.04 with Python version 3.11 is functional on a specific PyTorch Docker image.

[2] Refer to the Optimum for Intel Gaudi release notes for more details on the release.
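To compare a host against the table above, the OS release, kernel, Python, and Docker versions can be read directly from the system. The short Python sketch below is illustrative only (it is not part of the Intel Gaudi software); it simply prints those values so they can be checked against the supported versions by hand:

```python
# Illustrative helper, not part of the Intel Gaudi software: print the host
# properties that the configuration table above lists so they can be compared
# with the supported versions manually.
import platform
import subprocess


def docker_version() -> str:
    # "docker --version" prints e.g. "Docker version 25.0.1, build ...".
    try:
        result = subprocess.run(["docker", "--version"],
                                capture_output=True, text=True, check=True)
        return result.stdout.strip()
    except (FileNotFoundError, subprocess.CalledProcessError):
        return "Docker CLI not found"


if __name__ == "__main__":
    os_release = platform.freedesktop_os_release()   # available in Python 3.10+
    print("OS:     ", os_release.get("PRETTY_NAME", "unknown"))
    print("Kernel: ", platform.release())            # e.g. 5.15.0-122-generic
    print("Python: ", platform.python_version())     # e.g. 3.10.12
    print("Docker: ", docker_version())
```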
| Components | | Version |
|---|---|---|
| Intel Gaudi Software | Build Number | 1.19.0-561 |
| SVN | SVN Version | sec-2 |
| HL-325L | SPI Firmware | 1.19.0-fw-57.1.0 |
| | FIT | 1.19.0-fw-57.1.0 |
| | CPLD | |
| | eROM | 1.19.0-fw-57.1.0 |
| HL-338A | SPI Firmware | 1.19.0-fw-57.1.0 |
| | FIT | 1.19.0-fw-57.1.0 |
| | CPLD | 00_TS67221E5E |
| | eROM | 1.19.0-fw-57.1.0 |
| HLB-325 | PCIe Retimer Firmware | 2.12.10 |
| | SerDes Retimer Firmware | D0_06 |
| | CPLD | |
Intel Gaudi 2 AI Accelerator

| Intel Gaudi Software | 1.19.0 | | | | |
|---|---|---|---|---|---|
| Operating Systems | Ubuntu | RHEL8 | RHEL9 | RHEL9 | TencentOS |
| Version | 22.04 | 8.6 | 9.2 | 9.4 | 3.1 |
| Kernel | 5.4.0 and above | 4.18.0 | 5.14 and above | 5.14 and above | 5.4.0 and above |
| Python | 3.10 | 3.11 | 3.10 | 3.11 | 3.10 |
| OpenShift | 4.14 | 4.16 | | | |
| Kubernetes | 1.28, 1.29, 1.30 | | | | |
| KubeVirt | 0.36 and above | | | | |
| Slurm | Forked from 24.11.0-0rc1 of the official Slurm | | | | |
| Docker | 25.0.1 | | | | |
| PyTorch | 2.5.1 - For a list of supported features, see PyTorch Support Matrix. | | | | |
| PyTorch Lightning or Lightning | 2.3.3 | | | | |
| lightning-habana | | | | | |
| Ray | 2.32.0 | | | | |
| DeepSpeed | Forked from 0.14.4 of the official DeepSpeed. For a list of supported features, see DeepSpeed. | | | | |
| Megatron-DeepSpeed | Forked from the official Megatron-DeepSpeed, PR #374 | | | | |
| Megatron-LM | Forked from the official Megatron-LM, core_r0.8.0 | | | | |
| Intel Neural Compressor (INC) | v3.2 | | | | |
| Open MPI | 4.1.6 | | | | |
| Libfabric | | | | | |
| Optimum for Intel Gaudi [3] | | | | | |
| Text Generation Inference | 2.0.6 with Optimum for Intel Gaudi 1.15.0 | | | | |
| Transformers | 4.45.2 with Optimum for Intel Gaudi 1.15.0 | | | | |
| vLLM | Forked from v0.6.4.post2 of the official vLLM | | | | |

[3] Refer to the Optimum for Intel Gaudi release notes for more details on the release.
| Components | | Version |
|---|---|---|
| Intel Gaudi Software | Build Number | 1.19.0-561 |
| SVN | SVN Version | sec-9 |
| HL-225H | SPI Firmware | 1.19.0-fw-56.1.0 |
| | FIT | 1.19.0-fw-56.1.0 |
| | CPLD | 0X10 |
| | eROM | 1.12.1-fw-46.0.5 and above |
| HLBA-225 | PCIe Retimer Firmware | 2.2 |
| | SerDes Retimer Firmware | 0xD00A |
| | CPLD | |
| HLS-Gaudi 2 | JC BMC | 4.0 |
| | HIB BMC | 3.04 |
| | PCIe Switch | 002104 |
First-Gen Intel Gaudi AI Accelerator

| Intel Gaudi Software | 1.19.0 |
|---|---|
| Gaudi Firmware | 1.2.3 |
| Gaudi SPI Firmware | 1.1.0 |
| Operating Systems | Ubuntu |
| Version | 22.04 |
| Kernel | 5.15 and above |
| Python | 3.10 |
| Kubernetes | 1.28, 1.29, 1.30 |
| KubeVirt | 0.36 and above |
| Slurm | Forked from 24.11.0-0rc1 of the official Slurm |
| Docker | 25.0.1 |
| PyTorch | 2.5.1 - For a list of supported features, see PyTorch Support Matrix. |
| Lightning | 2.3.3 |
| lightning-habana | |
| DeepSpeed | Forked from 0.14.4 of the official DeepSpeed. For a list of supported features, see DeepSpeed. |
| Open MPI | 4.1.6 |
| Libfabric | 1.16.1 and above |
| Optimum for Intel Gaudi [4] | |
| Transformers | 4.45.2 with Optimum for Intel Gaudi 1.15.0 |

[4] Refer to the Optimum for Intel Gaudi release notes for more details on the release.
Backward/Forward Compatibility
The following is validated only on Ubuntu 22.04 with the Intel® Gaudi® 2 AI accelerator.
Note
The driver version must be equal to or higher than the version of the Docker image; a minimal version-check sketch is shown after the table below.
| Docker Image Version | Driver | SPI Firmware |
|---|---|---|
| 1.19.0 | 1.19.0 | 1.19.0 |
| 1.19.0 | 1.19.0 | 1.18.0 |
| 1.19.0 | 1.19.0 | 1.17.x |
| 1.18.0 | 1.18.0, 1.19.0 | 1.19.0 |
| 1.18.0 | 1.18.0, 1.19.0 | 1.18.0 |
| 1.18.0 | 1.18.0, 1.19.0 | 1.17.x |
| 1.18.0 | 1.18.0, 1.19.0 | 1.16.0 |
| 1.18.0 | 1.18.0, 1.19.0 | 1.15.2 |
| 1.17.1 | 1.19.0 | 1.18.0 |
| 1.17.1 | 1.18.0, 1.19.0 | 1.17.x |
| 1.17.1 | 1.18.0, 1.17.1 | 1.15.2 |
| 1.16.2 | 1.18.0, 1.17.1, 1.16.2 | 1.18.0 |
| 1.16.2 | 1.18.0, 1.17.1, 1.16.2 | 1.17.x |
| 1.16.2 | 1.16.2 | 1.15.2 |
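As a minimal sketch of the rule stated in the note above (the installed driver must be at least as new as the Docker image), the following illustrative Python snippet compares two version strings. The helper names are hypothetical and not part of any Intel Gaudi tool; the version values are examples taken from the table:

```python
# Minimal, illustrative sketch: encode the rule from the note above, i.e. the
# installed driver version must be equal to or higher than the version of the
# Intel Gaudi Docker image being run.
def parse_version(version: str) -> tuple[int, ...]:
    # "1.19.0" -> (1, 19, 0); a wildcard such as "1.17.x" compares as (1, 17).
    return tuple(int(part) for part in version.split(".") if part.isdigit())


def driver_supports_image(driver: str, docker_image: str) -> bool:
    return parse_version(driver) >= parse_version(docker_image)


# Example combinations from the table above.
assert driver_supports_image("1.19.0", "1.18.0")      # newer driver, older image: supported
assert not driver_supports_image("1.17.1", "1.19.0")  # older driver, newer image: not supported
```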