DeepSpeed User Guide for Training

The purpose of this document is to guide data scientists in running PyTorch models on the Intel® Gaudi® AI accelerator using DeepSpeed.

DeepSpeed Runtime Environment Variables

The following table describes the runtime environment variables that must be set to handle Out of Memory (OOM) issues in large models.

PT_HPU_MAX_COMPOUND_OP_SIZE=1000
    Limits the internal graph size to 1000 ops and reduces Lazy mode memory overhead. This will be improved in future releases. Note: This may affect performance.

PT_HPU_POOL_MEM_ACQUIRE_PERC=100
    Sets the memory pool to consume the entire HBM memory.
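
These flags can be exported in the shell before launching training; a minimal sketch:

```shell
# Limit the internal graph size to reduce Lazy mode memory overhead
# (may affect performance)
export PT_HPU_MAX_COMPOUND_OP_SIZE=1000
# Let the memory pool consume the entire HBM memory
export PT_HPU_POOL_MEM_ACQUIRE_PERC=100
```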

DeepSpeed Validated Configurations

The following DeepSpeed configurations have been validated to be fully functioning with Gaudi:

Distributed Data Parallel (multi-card)
    Trains the same model across multiple ranks by splitting the dataset between the workers to achieve better performance compared to a single card. Example: N/A

ZeRO-1
    Partitions the optimizer states across the ranks so that each process updates its own partition. For further details, refer to the Using ZeRO section. Example: Bert

ZeRO-2
    On top of ZeRO-1, each process retains only the gradients corresponding to its portion of the optimizer states. Example: Bert

ZeRO-3
    The full model state is partitioned across the processes (including 16-bit weights). ZeRO-3 automatically collects and partitions the parameters during the forward and backward passes. Make sure to use only optimizers that have been tested with DeepSpeed ZeRO. For further details, refer to the Using ZeRO section. Example: Flan_T5_XXL

ZeRO++ hpZ
    ZeRO++ is a set of optimization methods that extend ZeRO capabilities and enhance large model training efficiency. It can only be used with ZeRO-3. Hierarchical partitioning ZeRO (hpZ) is one of the three ZeRO++ communication optimizations; support for the other two will be added in future releases. Unlike ZeRO, hpZ keeps a complete model copy on each machine. Although this increases memory usage, it replaces the costly cross-machine all-gather/broadcast of weights with an intra-machine alternative, which is faster due to the high intra-machine communication bandwidth. Example: DeepSpeed ZeRO++ Tutorial

ZeRO-Offload
    Offloads the optimizer's memory and computation from the HPU to the host CPU. DeepSpeedCPUAdam provides a more efficient CPU implementation of Adam. Example: offload_optimizer_to_cpu

ZeRO-Infinity
    Extends ZeRO-3 by allowing the offload of both the model and optimizer parameters to CPU memory. Example: offload_optimizer_param_to_cpu

Model Pipeline Parallelism
    Splits the model layers between several workers so that each executes the forward and backward passes of its own layers. To optimize the evaluation process during training, refer to the Optimizing Pipeline Parallelism section. Note: Pipeline parallelism is not supported on first-gen Gaudi. Example: LLaMa

Model Tensor Parallelism (using Megatron-DeepSpeed)
    Splits the model tensors into chunks so that each chunk resides on its designated HPU. Megatron introduces an approach to Model Tensor Parallelism that can be used for Transformer-based models. Example: LLaMa

Model Sequence Parallelism (using Megatron-DeepSpeed)
    Splits the input along the sequence axis into smaller sequences that are processed in parallel by each HPU. For further details, refer to Using Sequence Parallelism. Example: LLaMa

BF16 Precision
    Reduces model memory consumption and improves performance by training in BF16 precision. Example: Bert

BF16 Optimizer
    Allows BF16 precision training with pipeline parallelism. An optimizer that implements ZeRO-1 for BF16, with gradient accumulation in FP32. Example: Bert

Activation Checkpointing
    Recomputes forward pass activations during the backward pass to save memory. For further details, refer to the Using Activation Checkpointing section. Example: Bert
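
As an illustration of the offload configurations above, ZeRO-Offload is enabled through the offload_optimizer section of the zero_optimization block in the DeepSpeed JSON configuration (the stage value here is illustrative; offloading the optimizer requires ZeRO stage 2 or higher):

```json
"zero_optimization": {
    "stage": 2,
    "offload_optimizer": {
        "device": "cpu"
    }
}
```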

Note

  • For further details on using DeepSpeed and Megatron 3D Parallelism configurations with large language models, see Optimizing Large Language Models.

  • Model Pipeline Parallelism, Tensor Parallelism (via Megatron-DeepSpeed), and Sequence Parallelism (via Megatron-DeepSpeed) are currently supported on Gaudi 2 only.

  • DeepSpeed’s multi-server training uses pdsh for invoking the processes on remote hosts. Make sure it is installed on your machine before using it.

  • Upon initialization, Intel Gaudi DeepSpeed enforces deterministic behavior by setting habana_frameworks.torch.hpu.setDeterministic(True).

  • Further information on DeepSpeed configurations can be found in the DeepSpeed documentation.

Installing DeepSpeed Library

Intel Gaudi provides a DeepSpeed fork that adds support for the Intel Gaudi software. To use DeepSpeed with Gaudi, you must install this fork, which is based on DeepSpeed v0.14.0:

pip install git+https://github.com/HabanaAI/DeepSpeed.git@1.16.0

Integrating DeepSpeed with Gaudi

To run DeepSpeed on Gaudi:

  1. Prepare your PyTorch model to run on Gaudi by following the steps detailed in the PyTorch Model Porting section. If you have an existing training script that runs on Gaudi, migrating your model is not required.

  2. Follow the instructions in https://www.deepspeed.ai/getting-started/ with the following modifications:

    1. Replace loss.backward() and optimizer.step() with model_engine.backward(loss) and model_engine.step().

    2. Replace all usages of the model object with the new model_engine object returned by the deepspeed.initialize() call.

    3. Remove from torch.nn.parallel import DistributedDataParallel as DDP and remove the DDP call for the model.

  3. In deepspeed.init_distributed(), make sure that dist_backend is set to HCCL:

    deepspeed.init_distributed(dist_backend='hccl', init_method=<init_method>)
    
  4. For the current release, the following steps are required in this specific order before calling deepspeed.initialize():

    1. Move your model to the HPU and cast it to BF16 if required:

      model.to("hpu", torch.bfloat16)
      
    2. If your model uses weight sharing, make sure these weights are created inside the module. Refer to Weight Sharing.

    3. Initialize the optimizer.

  5. Update the DeepSpeed run command to include the --use_hpu flag:

    deepspeed model.py --deepspeed --deepspeed_config <JSON file path> --use_hpu
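
The steps above can be sketched end to end. The following is a minimal illustration, not the exact example code: the model, data loader, and JSON config path are placeholders, and actually running it requires a Gaudi machine with the Intel Gaudi DeepSpeed fork installed.

```python
def train_on_gaudi(model, data_loader, ds_config_path):
    """Sketch of the DeepSpeed integration steps above (requires Gaudi hardware)."""
    import torch
    import deepspeed

    # Step 3: initialize the distributed backend with HCCL.
    deepspeed.init_distributed(dist_backend="hccl")

    # Step 4a: move the model to HPU and cast to BF16 if required.
    model.to("hpu", torch.bfloat16)

    # Step 4c: initialize the optimizer before calling deepspeed.initialize().
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

    # Step 2b: use the returned model_engine in place of the raw model.
    model_engine, optimizer, _, _ = deepspeed.initialize(
        model=model, optimizer=optimizer, config=ds_config_path)

    for inputs, labels in data_loader:
        loss = model_engine(inputs.to("hpu"), labels.to("hpu"))
        # Step 2a: replace loss.backward() and optimizer.step().
        model_engine.backward(loss)
        model_engine.step()
```

A script like this would then be launched with the run command from step 5, for example: deepspeed model.py --deepspeed --deepspeed_config <JSON file path> --use_hpu.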
    

Note

It is highly recommended to review our DeepSpeed-BERT pre-training example.

Using ZeRO

  • ZeRO-1 - For optimal performance of ZeRO-1, it is recommended to set the contiguous_gradients=false parameter in the DeepSpeed ZeRO settings. The following shows a usage example:

    "zero_optimization": {
        "stage": 1,
        ...
    
        "contiguous_gradients": false
    }
    
    
  • ZeRO-3 - For optimal performance of ZeRO-3, it is recommended to configure the following parameters in the DeepSpeed ZeRO settings:

    • overlap_comm=false

    • contiguous_gradients=true

    • reduce_scatter=false

    The following shows a usage example:

    "zero_optimization": {
        "stage": 3,
        "overlap_comm": false,
        ...
    
        "contiguous_gradients": true,
        "reduce_scatter": false
    }
    

For further information on how to configure ZeRO, refer to ZeRO Configuration section.
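
Putting these recommendations together, a minimal complete JSON configuration for ZeRO-3 on Gaudi might look as follows (the batch size is a placeholder; add optimizer and scheduler sections as needed):

```json
{
    "train_batch_size": 32,
    "bf16": {
        "enabled": true
    },
    "zero_optimization": {
        "stage": 3,
        "overlap_comm": false,
        "contiguous_gradients": true,
        "reduce_scatter": false
    }
}
```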

Using Activation Checkpointing

To use activation checkpointing with Gaudi, integrate the deepspeed.runtime.activation_checkpointing.checkpointing.checkpoint wrapper from Intel Gaudi's DeepSpeed fork into your model according to the instructions in the torch.utils.checkpoint guide. The example below is taken from the DeepSpeed-BERT script/modeling.py:

class BertEncoder(nn.Module):
    def __init__(self, config):
        super(BertEncoder, self).__init__()
        self.layer = nn.ModuleList([BertLayer(config) for _ in range(config.num_hidden_layers)])
        self.output_all_encoded_layers = config.output_all_encoded_layers
        self._checkpoint_activations = False
        self._checkpoint_activations_interval = 1

        ...

    def forward(self, hidden_states, attention_mask):
        all_encoder_layers = []

        layer_norm_input = 0
        if self._checkpoint_activations:
            hidden_states, layer_norm_input = self.checkpointed_forward(
                hidden_states, layer_norm_input, attention_mask, self._checkpoint_activations_interval)
        else:
            for i, layer_module in enumerate(self.layer):
                hidden_states, layer_norm_input = layer_module(hidden_states, layer_norm_input, attention_mask)

The values of the following parameters have been validated to be fully functioning on Gaudi:

  • “partition_activations”: true/false

  • “cpu_checkpointing”: true/false

  • “contiguous_memory_optimization”: true/false - As per DeepSpeed documentation, contiguous_memory_optimization can be set to true only when partition_activations=true.

  • “synchronize_checkpoint_boundary”: true/false

  • “profile”: false
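
These parameters live under the activation_checkpointing section of the DeepSpeed JSON configuration; a sketch with one validated combination of values (the true/false choices are illustrative):

```json
"activation_checkpointing": {
    "partition_activations": true,
    "cpu_checkpointing": false,
    "contiguous_memory_optimization": false,
    "synchronize_checkpoint_boundary": false,
    "profile": false
}
```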

For further details, refer to Configuring Activation Checkpointing section.

Optimizing Pipeline Parallelism

During model training with pipeline parallelism, communication redundancy among ranks can be eliminated to optimize the evaluation process. This is achieved by setting the bcast_loss flag to False. Consequently, the return value of non-zero ranks within each pipeline group changes, and only rank 0 of each group returns the actual evaluation loss obtained from the eval_batch call. To maintain the original behavior of DeepSpeed, the default value of bcast_loss is kept as True:

def eval_batch(self,
               data_iter,
               return_logits=False,
               compute_loss=True,
               bcast_loss=True,
               reduce_output='avg')

The below example is taken from the Megatron-DeepSpeed LLaMA training script:

if args.deepspeed and args.ds_pipeline_enabled:
    # DeepSpeed uses eval_batch() and already aggregates losses.
    assert isinstance(model, list) and len(model) == 1
    loss = model[0].eval_batch(data_iterator, bcast_loss=False, eval_micro_batches=num_eval_microbatches)
    loss_dicts = [{'lm loss' : loss}] * num_eval_microbatches
else:
    assert args.micro_batch_size == args.eval_micro_batch_size, \
           "Unsupported for split micro batch"
    loss_dicts = forward_backward_func(
        forward_step_func, data_iterator, model, optimizer=None,
        timers=None, forward_only=True)

Note

Pipeline parallelism is not supported on first-gen Gaudi.

Using Sequence Parallelism

Sequence Parallelism is used for training together with Tensor Parallelism. This approach splits the Layer-Norm and Dropout operations along the sequence dimension. These operations occur after the attention and MLP blocks and are otherwise replicated across the Tensor Parallel group, so splitting them reduces a significant amount of activation memory.

To configure DeepSpeed pipeline module to support Sequence Parallelism:

  1. Mark the sequence parallel weights with an attribute. Each weight that is split under the sequence parallel region should be given a sequence_parallel attribute, set to True, during its initialization:

    if sequence_parallel:
      # set sequence parallelism flag on weight parameter
      setattr(self.weight, 'sequence_parallel', True)
    
  2. Configure DeepSpeed pipeline engine to disable partitioning of activations and gradients. Set pipe_partitioned and grad_partitioned attributes to False under the “pipeline” section in DeepSpeed JSON configuration file:

    "pipeline": {
        "pipe_partitioned": false,
        "grad_partitioned": false
    }
    

You can find usage code in the Megatron-DeepSpeed LLaMA model.