habana_frameworks.mediapipe.fn.Concat

Class:
  • habana_frameworks.mediapipe.fn.Concat(**kwargs)

Define graph call:
  • __call__(input1, input2)

Parameters:
  • input1 - First input tensor to operator. Supported dimensions: minimum = 1, maximum = 5. Supported data types: INT8, UINT8, BFLOAT16, FLOAT32.

  • input2 - Second input tensor to operator with the same dimensionality as input1 tensor. Supported dimensions: minimum = 1, maximum = 5. Supported data types: INT8, UINT8, BFLOAT16, FLOAT32.

Description:

Concatenates tensors along a given axis. A maximum of 10 tensors can be given as input.
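
At the graph level, the operator is constructed once with its keyword arguments and then called on the input tensors inside the pipeline's definegraph method. The following is a minimal sketch distilled from the full example later in this section, with the reader setup and pipeline boilerplate omitted:

from habana_frameworks.mediapipe import fn
from habana_frameworks.mediapipe.media_types import dtype as dt

# Construct the node: concatenate along operator axis 2 and produce
# FLOAT32 output on the HPU device.
concat = fn.Concat(axis=2, dtype=dt.FLOAT32, device="hpu")

# Inside definegraph(), the node is invoked on the tensors to be joined:
#     out = concat(inp0, inp1)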

Supported backend:
  • HPU

Keyword Arguments:

  • axis - Axis along which tensors are to be concatenated.

      • Type: int

      • Default: 0

      • Optional: no

  • dtype - Output data type.

      • Type: habana_frameworks.mediapipe.media_types.dtype

      • Default: UINT8

      • Optional: yes

      • Supported data types: INT8, UINT8, BFLOAT16, FLOAT32
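
Since dtype is optional, the output defaults to UINT8 when it is not specified; passing dtype explicitly overrides the output data type. A minimal sketch (as in the full example below, such calls would normally appear inside the pipeline's __init__):

from habana_frameworks.mediapipe import fn
from habana_frameworks.mediapipe.media_types import dtype as dt

# Default behavior: output tensor dtype is UINT8 when dtype is omitted.
concat_u8 = fn.Concat(axis=0, device="hpu")

# Explicit override: produce BFLOAT16 output instead.
concat_bf16 = fn.Concat(axis=0, dtype=dt.BFLOAT16, device="hpu")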

Note

  1. All input and output tensors are expected to have the same number of dimensions and the same shape, except on the given concatenation dimension (illustrated in the snippet after this note).

  2. Currently, a maximum of 10 inputs is supported.
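
The shape rule in note 1 matches plain numpy concatenation semantics; the shapes below are arbitrary and for illustration only:

import numpy as np

# All dimensions match except the concatenation dimension (axis 1).
a = np.zeros((1, 3, 2, 3), dtype=np.float32)
b = np.zeros((1, 5, 2, 3), dtype=np.float32)

# Every other dimension is preserved and the sizes along the
# concatenation axis are added: (1, 3, 2, 3) + (1, 5, 2, 3) -> (1, 8, 2, 3).
print(np.concatenate((a, b), axis=1).shape)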

Example: Concat Operator

The following code snippet shows the usage of the Concat operator:

from habana_frameworks.mediapipe import fn
from habana_frameworks.mediapipe.mediapipe import MediaPipe
from habana_frameworks.mediapipe.media_types import dtype as dt
import numpy as np
import os

# Create MediaPipe derived class
class myMediaPipe(MediaPipe):
    def __init__(self, device, queue_depth, batch_size, num_threads, op_device, dir):
        super(myMediaPipe, self).__init__(
            device, queue_depth, batch_size, num_threads,
            self.__class__.__name__)

        self.input0 = fn.ReadNumpyDatasetFromDir(num_outputs=1,
                                                shuffle=False,
                                                dir=dir,
                                                pattern="inp_x_*.npy",
                                                dense=True,
                                                dtype=dt.FLOAT32,
                                                device="cpu")

        # Concatenate along operator axis 2 (numpy axis 1 for these 4D
        # tensors; operator dims are indexed in reverse of the numpy shape)
        self.concat = fn.Concat(axis=2, dtype=dt.FLOAT32, device=op_device)

    def definegraph(self):
        inp0 = self.input0()
        inp1 = self.input0()
        out = self.concat(inp0, inp1)
        return out, inp0

def run(device, op_device):
    batch_size = 1
    queue_depth = 2
    num_threads = 1
    base_dir = os.environ['DATASET_DIR']
    dir = base_dir+"/npy_data/fp32/"

    # Create MediaPipe object
    pipe = myMediaPipe(device, queue_depth, batch_size,
                       num_threads, op_device, dir)

    # Build MediaPipe
    pipe.build()

    # Initialize MediaPipe iterator
    pipe.iter_init()

    # Run MediaPipe
    out, inp = pipe.run()

    def as_cpu(tensor):
        if callable(getattr(tensor, "as_cpu", None)):
            tensor = tensor.as_cpu()
        return tensor

    # Copy data to host from device as numpy array
    out = as_cpu(out).as_nparray()
    inp = as_cpu(inp).as_nparray()

    del pipe

    print("\ninp tensor shape:", inp.shape)
    print("inp tensor dtype:", inp.dtype)
    print("inp tensor data:\n", inp)

    print("\nout tensor shape:", out.shape)
    print("out tensor dtype:", out.dtype)
    print("out tensor data:\n", out)

    return inp, out

def compare_ref(inp, out):
    # Operator axis 2 corresponds to numpy axis 1 for these 4D tensors,
    # so the reference is built with np.concatenate along axis 1.
    ref = np.concatenate((inp, inp), axis=1)
    if not np.array_equal(ref, out):
        raise ValueError("Mismatch w.r.t ref")

if __name__ == "__main__":
    dev_opdev = {'mixed': ['hpu'],
                'legacy': ['hpu']}
    for dev in dev_opdev.keys():
        for op_dev in dev_opdev[dev]:
            inp, out = run(dev, op_dev)
            compare_ref(inp, out)

The following is the output of the Concat operator:

inp tensor shape: (1, 3, 2, 3)
inp tensor dtype: float32
inp tensor data:
[[[[182. 227. 113.]
   [175. 128. 253.]]

  [[ 58. 140. 136.]
   [ 86.  80. 111.]]

  [[175. 196. 178.]
   [ 20. 163. 108.]]]]

out tensor shape: (1, 6, 2, 3)
out tensor dtype: float32
out tensor data:
[[[[182. 227. 113.]
   [175. 128. 253.]]

  [[ 58. 140. 136.]
   [ 86.  80. 111.]]

  [[175. 196. 178.]
   [ 20. 163. 108.]]

  [[182. 227. 113.]
   [175. 128. 253.]]

  [[ 58. 140. 136.]
   [ 86.  80. 111.]]

  [[175. 196. 178.]
   [ 20. 163. 108.]]]]