habana_frameworks.mediapipe.fn.ExtHpuOp

Class:
  • habana_frameworks.mediapipe.fn.ExtHpuOp(**kwargs)

Define graph call:
  • __call__(input)

Parameter:
  • input - Input tensor to the operator. Tensor dimensions and data types depend on the operator used. Supported inputs: minimum = 1, maximum = 10.

Description:

This operator allows the user to add existing TPC operators (which are not part of the media operators list) to MediaPipe.

Supported backend:
  • HPU

Keyword Arguments

min_inputs

Minimum number of inputs to the operator.

  • Type: int

  • Default: 1

  • Optional: yes

max_inputs

Maximum number of inputs to the operator.

  • Type: int

  • Default: 10

  • Optional: yes

num_outputs

Maximum number of outputs from the operator.

  • Type: int

  • Default: 1

  • Optional: yes

guid

GUID of the TPC operator.

  • Type: str

  • Default: ‘’

  • Optional: yes

params

Parameters passed to the operator.

  • Type: dict

  • Default: None

  • Optional: yes

params_type

Data types (ctypes) of the parameters provided in the params field; the keys must match those in params (see the sketch after the note below).

  • Type: dict

  • Default: None

  • Optional: yes

shape

Output tensor shape.

  • Type: list[int]

  • Default: [0, 0, 0, 0, 0]

  • Optional: no

dtype

Output data type.

  • Type: habana_frameworks.mediapipe.media_types.dtype

  • Default: None

  • Optional: yes

  • Supported data types:

    • INT8

    • UINT8

    • INT16

    • UINT16

    • INT32

    • UINT32

    • FLOAT16

    • FLOAT32

    • BFLOAT16

Note

The maximum number of output tensors is defined by num_outputs.
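
For quick reference, the sketch below shows how these keyword arguments fit together when wrapping an external TPC kernel: params carries the kernel's parameter values, params_type maps each key to its ctypes type, and shape and dtype describe the output tensor. This is a minimal sketch only; the GUID "crop_u8" and its parameter names are taken from the full example below and are assumptions about that particular kernel's interface, not part of the operator API. The operator is instantiated here as fn.MediaExtHpuOp, matching the example below, and would normally be created inside a MediaPipe subclass __init__ and applied in definegraph().

from habana_frameworks.mediapipe import fn
from habana_frameworks.mediapipe.media_types import dtype as dt
import ctypes as ct

# Minimal sketch: "crop_u8" and its parameters are assumed from the example
# below; substitute the GUID and parameters of your own TPC kernel.
crop_op = fn.MediaExtHpuOp(
    guid="crop_u8",                  # TPC kernel to invoke
    num_outputs=1,                   # number of output tensors
    params={'crop_w': 100, 'crop_h': 100, 'crop_d': 0,
            'crop_pos_x': 0.0, 'crop_pos_y': 0.0, 'crop_pos_z': 0.0,
            'pad_val': 0},           # kernel parameter values
    params_type={'crop_w': ct.c_int, 'crop_h': ct.c_int, 'crop_d': ct.c_int,
                 'crop_pos_x': ct.c_float, 'crop_pos_y': ct.c_float,
                 'crop_pos_z': ct.c_float, 'pad_val': ct.c_int},
    shape=[100, 100, 3, 6],          # output tensor shape (WHCN)
    dtype=dt.UINT8,                  # output data type
    device="hpu")

# In definegraph(), the operator is then applied like any other media op:
#     images = crop_op(images)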

Example: ExtHpuOp Operator

The following code snippet shows the usage of the ExtHpuOp operator. The example assumes that crop is not available as a media operator but is available as a TPC operator. The user must provide the correct parameters and their ctypes.

from habana_frameworks.mediapipe import fn
from habana_frameworks.mediapipe.mediapipe import MediaPipe
from habana_frameworks.mediapipe.media_types import imgtype as it
from habana_frameworks.mediapipe.media_types import dtype as dt
import matplotlib.pyplot as plt
import ctypes as ct
import os

g_display_timeout = float(os.getenv("DISPLAY_TIMEOUT", 5))

# Create MediaPipe derived class
class myMediaPipe(MediaPipe):
    def __init__(self, device, queue_depth, batch_size,
                num_threads, op_device, dir,
                img_h, img_w):
        super(
            myMediaPipe,
            self).__init__(
            device,
            queue_depth,
            batch_size,
            num_threads,
            self.__class__.__name__)

        self.input = fn.ReadImageDatasetFromDir(shuffle=False,
                                                dir=dir,
                                                format="jpg")

        # WHCN
        self.decode = fn.ImageDecoder(device="hpu",
                                      output_format=it.RGB_P,
                                      resize=[img_w, img_h])

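        # Parameter values expected by the external crop TPC kernel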
        crop_params = {
            'crop_w': 100,
            'crop_h': 100,
            'crop_d': 0,
            'crop_pos_x': 0.0,
            'crop_pos_y': 0.0,
            'crop_pos_z': 0.0,
            'pad_val': 0,
        }

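        # ctypes data type for each parameter defined above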
        crop_params_type = {
            'crop_w': ct.c_int,
            'crop_h': ct.c_int,
            'crop_d': ct.c_int,
            'crop_pos_x': ct.c_float,
            'crop_pos_y': ct.c_float,
            'crop_pos_z': ct.c_float,
            'pad_val': ct.c_int
        }

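        # Add the crop_u8 TPC kernel to the pipeline as an external HPU op;
        # shape is the expected output shape (WHCN) and dtype its data type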
        self.crop_op = fn.MediaExtHpuOp(num_outputs=1, guid="crop_u8",
                                        params=crop_params,
                                        params_type=crop_params_type,
                                        shape=[100, 100, 3, 6],
                                        dtype=dt.UINT8, device=op_device)

        # WHCN -> CWHN
        self.transpose = fn.Transpose(permutation=[2, 0, 1, 3],
                                      tensorDim=4,
                                      dtype=dt.UINT8,
                                      device=op_device)

    def definegraph(self):
        images, data = self.input()
        images = self.decode(images)
        images = self.crop_op(images)
        images = self.transpose(images)
        return images, data

def display_images(images, batch_size, cols):
    rows = (batch_size + cols - 1) // cols  # ceiling division to fit all images
    plt.figure(figsize=(10, 10))
    for i in range(batch_size):
        ax = plt.subplot(rows, cols, i + 1)
        plt.imshow(images[i])
        plt.axis("off")
    plt.show(block=False)
    plt.pause(g_display_timeout)
    plt.close()

def run(device, op_device):
    batch_size = 6
    img_width = 200
    img_height = 200
    queue_depth = 2
    columns = 3
    num_threads = 1

    base_dir = os.environ['DATASET_DIR']
    dir = base_dir + "/img_data/"

    # Create MediaPipe object
    pipe = myMediaPipe(device, queue_depth, batch_size, num_threads,
                      op_device, dir, img_height, img_width)

    # Build MediaPipe
    pipe.build()

    # Initialize MediaPipe iterator
    pipe.iter_init()

    # Run MediaPipe
    images, labels = pipe.run()

    def as_cpu(tensor):
        if (callable(getattr(tensor, "as_cpu", None))):
            tensor = tensor.as_cpu()
        return tensor

    # Copy data to host from device as numpy array
    images = as_cpu(images).as_nparray()
    labels = as_cpu(labels).as_nparray()

    del pipe

    # Display images
    display_images(images, batch_size, columns)

if __name__ == "__main__":
    dev_opdev = {'legacy': ['hpu']}

    for dev in dev_opdev.keys():
        for op_dev in dev_opdev[dev]:
            run(dev, op_dev)
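
To run this example, set the DATASET_DIR environment variable to a directory that contains an img_data folder with JPEG images (read by ReadImageDatasetFromDir with format="jpg"); the optional DISPLAY_TIMEOUT environment variable controls how long the resulting images stay on screen.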

Images from ExtHpu Operation [1]

(Six output images from the ext_hpu example pipeline.)

[1] Licensed under a CC BY SA 4.0 license. The images used here are taken from https://data.caltech.edu/records/mzrjq-6wc02.