habana_frameworks.mediapipe.fn.Reshape

Class:
  • habana_frameworks.mediapipe.fn.Reshape(**kwargs)

Define graph call:
  • __call__(input)

Parameter:
  • input - Input tensor to operator. Supported dimensions: minimum = 1, maximum = 5. Supported data types: INT8, UINT8, BFLOAT16, FLOAT16.

Description:

This operation returns a new tensor that has the same values as the input tensor, in the same order, but with a new shape given by the size and tensorDim arguments.
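
Conceptually, this matches a host-side NumPy reshape of the same data. The sketch below is only an illustration of the semantics and does not run on the device:

import numpy as np

# Illustration only: a reshape keeps the element order and applies a new shape.
x = np.arange(24).reshape(2, 3, 4)   # 24 elements, shape (2, 3, 4)
y = x.reshape(6, 4)                  # same 24 elements, shape (6, 4)
assert (x.ravel() == y.ravel()).all()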

Supported backend:
  • HPU

Keyword Arguments:

size

Shape of output tensor, max length = MAX_DIMENSIONS_NUM.

  • Type: list[int]

  • Default: 0

  • Optional: no

tensorDim

Dimension of reshaped output tensor.

  • Type: int

  • Default: 5

  • Optional: no

layout

Output tensor layout.

  • Type: str

  • Default: ''

  • Optional: yes

  • Supported layouts:

    • NA = '': interleaved

    • NHWC = 'CWHN': planar

    • NCHW = 'WHCN': planar

    • FHWC = 'CWHF': video

dtype

Output data type.

  • Type: habana_frameworks.mediapipe.media_types.dtype

  • Default: UINT8

  • Optional: yes

  • Note: Set dtype to match the input data type; otherwise the output tensor's data type is changed to the default UINT8 (see the sketch after this list).

  • Supported data types:

    • INT8

    • UINT8

    • BFLOAT16

    • FLOAT32
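
A minimal sketch of matching dtype to a FLOAT32 producer. The shape values are illustrative, and the fragment belongs inside a MediaPipe-derived class' __init__, as in the full example below:

from habana_frameworks.mediapipe import fn
from habana_frameworks.mediapipe.media_types import dtype as dt

# Assumption for this sketch: the upstream operator emits FLOAT32 data,
# so dtype is set to match it; omitting dtype would change the output
# tensor to the default UINT8.
reshape = fn.Reshape(size=[224, 224, 3, 8],
                     tensorDim=4,
                     layout='',
                     dtype=dt.FLOAT32)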

Note

The number of elements in the input and output tensor must be equal.
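
For example, reshaping the decoded WHCN tensor of shape [200, 200, 3, 6] in the snippet below to [200, 200, 18] is valid because both contain 200 * 200 * 3 * 6 = 200 * 200 * 18 = 720,000 elements.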

Example: Reshape Operator

The following code snippet shows the usage of the Reshape operator:

from habana_frameworks.mediapipe import fn
from habana_frameworks.mediapipe.mediapipe import MediaPipe
from habana_frameworks.mediapipe.media_types import imgtype as it
from habana_frameworks.mediapipe.media_types import dtype as dt

# Create media pipeline derived class
class myMediaPipe(MediaPipe):
    def __init__(self, device, dir, queue_depth, batch_size, channels, img_h, img_w):
        super(myMediaPipe, self).__init__(
            device, queue_depth, batch_size, self.__class__.__name__)

        self.input = fn.ReadImageDatasetFromDir(shuffle=False,
                                                dir=dir,
                                                format="jpg")

        # WHCN
        self.decode = fn.ImageDecoder(device="hpu",
                                      output_format=it.RGB_P,
                                      resize=[img_w, img_h])

        # WH(C*N)
        self.reshape = fn.Reshape(size=[img_w, img_h, channels*batch_size],
                                  tensorDim=3,
                                  layout='')

    def definegraph(self):
        images, labels = self.input()
        images = self.decode(images)
        images = self.reshape(images)
        return images, labels

def main():
    batch_size = 6
    img_width = 200
    img_height = 200
    img_channel = 3
    img_dir = "/path/to/images"
    queue_depth = 2

    # Create media pipeline object
    pipe = myMediaPipe('hpu', img_dir, queue_depth, batch_size,
                       img_channel, img_height, img_width)

    # Build media pipeline
    pipe.build()

    # Initialize media pipeline iterator
    pipe.iter_init()

    # Run media pipeline
    images, labels = pipe.run()

    # Copy data to host from device as numpy array
    images = images.as_cpu().as_nparray()
    labels = labels.as_cpu().as_nparray()

    # Display shape
    print('images shape', images.shape)


if __name__ == "__main__":
    main()

The reshaped output tensor shape is as follows:

images shape (18, 200, 200)
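
Note that the host-side NumPy array reports the dimensions in reverse order relative to the size argument: the reshape size [200, 200, 18] (W, H, C*N) appears as (18, 200, 200).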