habana_frameworks.mediapipe.fn.Reshape

Class:
  • habana_frameworks.mediapipe.fn.Reshape(**kwargs)

Define graph call:
  • __call__(input)

Parameter:
  • input - Input tensor to the operator. Supported dimensions: minimum = 1, maximum = 5. Supported data types: INT8, UINT8, BFLOAT16, FLOAT32.

Description:

This operation returns a new tensor that has the same values as the input tensor, in the same order, but with a new shape given by the size and tensorDim arguments.
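
The operation is semantically equivalent to reshaping a NumPy array: the flattened element order is preserved and only the shape changes. The following is a minimal NumPy sketch of that behavior (illustrative only, not the operator implementation):

import numpy as np

# Values and their flat ordering are preserved; only the shape changes.
x = np.arange(36, dtype=np.float32).reshape(2, 3, 2, 3)
y = x.reshape(2, 18)

assert np.array_equal(x.flatten(), y.flatten())  # same values, same order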

Supported backend:
  • HPU

Keyword Arguments

kwargs

Description

size

Shape of output tensor, max length = MAX_DIMENSIONS_NUM.

  • Type: list[int]

  • Default: 0

  • Optional: no

tensorDim

Number of dimensions of the reshaped output tensor.

  • Type: int

  • Default: 5

  • Optional: no

layout

Output tensor layout.

  • Type: str

  • Default: ‘’

  • Optional: yes

  • Supported layouts:

    • NA = ‘’ : interleaved

    • NHWC = ‘CWHN’: planar

    • NCHW = ‘WHCN’: planar

    • FHWC = ‘CWHF’: video

dtype

Output data type.

  • Type: habana_frameworks.mediapipe.media_types.dtype

  • Default: UINT8

  • Optional: yes

  • Note: The user should manually set dtype to match the input data type; otherwise the output tensor’s data type is changed to the default UINT8.

  • Supported data types:

    • INT8

    • UINT8

    • BFLOAT16

    • FLOAT32

Note

The number of elements in the input and output tensor must be equal.
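
For example, this constraint can be checked on the host before building the pipe; the helper below is a hypothetical convenience function, not part of the MediaPipe API:

import numpy as np


def check_reshape_size(input_shape, size):
    # The product of the requested output dimensions must equal the
    # number of elements in the input tensor.
    if np.prod(input_shape) != np.prod(size):
        raise ValueError(
            "cannot reshape {} ({} elements) to size {} ({} elements)".format(
                input_shape, np.prod(input_shape), size, np.prod(size)))


check_reshape_size((2, 3, 2, 3), [18, 2])  # OK: 36 == 36 elements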

Example: Reshape Operator

The following code snippet shows the usage of the Reshape operator:

from habana_frameworks.mediapipe import fn
from habana_frameworks.mediapipe.mediapipe import MediaPipe
from habana_frameworks.mediapipe.media_types import dtype as dt
import numpy as np
import os
import glob

# Create MediaPipe derived class


class myMediaPipe(MediaPipe):
    def __init__(self, device, queue_depth, batch_size, num_threads,
                 op_device, dir, pattern, reshape_to):
        super(
            myMediaPipe,
            self).__init__(
            device,
            queue_depth,
            batch_size,
            num_threads,
            self.__class__.__name__)

        self.inp = fn.ReadNumpyDatasetFromDir(num_outputs=1,
                                              shuffle=False,
                                              dir=dir,
                                              pattern=pattern,
                                              dense=True,
                                              dtype=dt.FLOAT32,
                                              device="cpu")

        self.reshape = fn.Reshape(size=reshape_to,
                                  tensorDim=2,
                                  layout='',
                                  dtype=dt.FLOAT32,
                                  device=op_device)

    def definegraph(self):
        inp = self.inp()
        out = self.reshape(inp)
        return out, inp


def run(device, op_device):
    batch_size = 2
    queue_depth = 2
    num_threads = 1
    base_dir = os.environ['DATASET_DIR']
    dir = base_dir+"/npy_data/fp32/"
    pattern = "*x*.npy"

    npy_x = sorted(glob.glob(dir + "/{}".format(pattern)))
    reshape_val = np.load(npy_x[0]).size
    reshape_to = [reshape_val, batch_size]

    # Create MediaPipe object
    pipe = myMediaPipe(device, queue_depth, batch_size,
                       num_threads, op_device, dir, pattern, reshape_to)

    # Build MediaPipe
    pipe.build()

    # Initialize MediaPipe iterator
    pipe.iter_init()

    # Run MediaPipe
    out, inp = pipe.run()

    def as_cpu(tensor):
        if callable(getattr(tensor, "as_cpu", None)):
            tensor = tensor.as_cpu()
        return tensor

    # Copy data to host from device as numpy array
    out = as_cpu(out).as_nparray()
    inp = as_cpu(inp).as_nparray()

    del pipe

    # Display shape
    print('input shape', inp.shape)
    print('output shape', out.shape)
    return inp, out


def compare_ref(inp, out):
    reshape_val = inp.size//inp.shape[0]
    ref = np.reshape(inp, [inp.shape[0], reshape_val])
    if not np.array_equal(ref, out):
        raise ValueError("Mismatch w.r.t. reference output")


if __name__ == "__main__":
    dev_opdev = {'mixed': ['hpu'],
                 'legacy': ['hpu']}
    for dev in dev_opdev.keys():
        for op_dev in dev_opdev[dev]:
            inp, out = run(dev, op_dev)
            compare_ref(inp, out)

The following are the input and the reshaped output tensor shapes:

input shape (2, 3, 2, 3)
output shape (2, 18)