habana_frameworks.mediapipe.fn.Flip

Class:
  • habana_frameworks.mediapipe.fn.Flip(**kwargs)

Define graph call:
  • __call__(input)

Parameter:
  • input - Input tensor to the operator. Supported dimensions: minimum = 4, maximum = 4. Supported data types: UINT8, UINT16.

Description:

This operator flips the selected dimensions of the input tensor (horizontal, vertical, or depthwise).

Supported backend:
  • HPU

Keyword Arguments
horizontal

Flip horizontal dimension. 0 = Do not flip horizontal axis. 1 = Flip horizontal axis.

  • Type: int

  • Default: 0

  • Optional: yes

vertical

Flip vertical dimension. 0 = Do not flip vertical axis. 1 = Flip vertical axis.

  • Type: int

  • Default: 0

  • Optional: yes

depthwise

Flip depthwise dimension. 0 = Do not flip depth axis. 1 = Flip depth axis.

  • Type: int

  • Default: 0

  • Optional: yes

dtype

Output data type.

  • Type: habana_frameworks.mediapipe.media_types.dtype

  • Default: UINT8

  • Optional: yes

  • Supported data type:

    • UINT8

    • UINT16

Note

  1. The input and output tensors should be in WHCN layout, with W being the fastest-changing dimension.

  2. At least one of the flip parameters (horizontal, vertical, depthwise) should be set to 1.
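
The flip semantics can be illustrated with a small NumPy sketch. This is illustration only, not the HPU implementation, and the array here uses conventional slowest-first (H, W, C) indexing rather than the device's fastest-first WHCN listing:

```python
import numpy as np

# Illustration only: each Flip parameter reverses one image axis.
# The image is held as an (H, W, C) NumPy array.
img = np.arange(12, dtype=np.uint8).reshape(2, 3, 2)  # H=2, W=3, C=2

h_flip = np.flip(img, axis=1)        # horizontal=1: reverse the W axis
v_flip = np.flip(img, axis=0)        # vertical=1: reverse the H axis
hv_flip = np.flip(img, axis=(0, 1))  # horizontal=1 and vertical=1

# The first column of the flipped image is the last column of the input.
assert np.array_equal(h_flip[:, 0, :], img[:, -1, :])
assert np.array_equal(v_flip[0], img[-1])
```

Flipping is an involution, so applying the same flip twice recovers the original image.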

Example: Flip Operator

The following code snippet shows the usage of the Flip operator:

from habana_frameworks.mediapipe import fn
from habana_frameworks.mediapipe.mediapipe import MediaPipe
from habana_frameworks.mediapipe.media_types import imgtype as it
from habana_frameworks.mediapipe.media_types import dtype as dt
import matplotlib.pyplot as plt

# Create media pipeline derived class
class myMediaPipe(MediaPipe):
    def __init__(self, device, dir, queue_depth, batch_size, img_h, img_w):
        super().__init__(device,
                         queue_depth,
                         batch_size,
                         self.__class__.__name__)

        self.input = fn.ReadImageDatasetFromDir(shuffle=False,
                                                dir=dir,
                                                format="jpg")

        # WHCN
        self.decode = fn.ImageDecoder(device="hpu",
                                      output_format=it.RGB_P,
                                      resize=[img_w, img_h])

        self.flip = fn.Flip(horizontal=1,
                            vertical=0,
                            depthwise=0,
                            dtype=dt.UINT8)

        # WHCN -> CWHN
        self.transpose = fn.Transpose(permutation=[2, 0, 1, 3],
                                      tensorDim=4,
                                      dtype=dt.UINT8)

    def definegraph(self):
        images, labels = self.input()
        images = self.decode(images)
        images = self.flip(images)
        images = self.transpose(images)
        return images, labels

def display_images(images, batch_size, cols):
    rows = (batch_size + cols - 1) // cols  # ceiling division
    plt.figure(figsize=(10, 10))
    for i in range(batch_size):
        ax = plt.subplot(rows, cols, i + 1)
        plt.imshow(images[i])
        plt.axis("off")
    plt.show()

def main():
    batch_size = 6
    img_width = 200
    img_height = 200
    img_dir = "/path/to/images"
    queue_depth = 2
    columns = 3

    # Create media pipeline object
    pipe = myMediaPipe('hpu', img_dir, queue_depth, batch_size,
                        img_height, img_width)

    # Build media pipeline
    pipe.build()

    # Initialize media pipeline iterator
    pipe.iter_init()

    # Run media pipeline
    images, labels = pipe.run()

    # Copy data to host from device as numpy array
    images = images.as_cpu().as_nparray()
    labels = labels.as_cpu().as_nparray()

    # Display images
    display_images(images, batch_size, columns)


if __name__ == "__main__":
    main()
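
A side note on the Transpose step above: since tensor dimensions are listed fastest-changing first, a WHCN device tensor corresponds to a host NumPy array of shape (N, C, H, W), and the CWHN result of permutation [2, 0, 1, 3] corresponds to (N, H, W, C), giving the per-image (H, W, C) layout that plt.imshow expects. The same reordering can be sketched with NumPy (illustration only; the shapes assume the batch and image sizes from main()):

```python
import numpy as np

# Host view of a WHCN tensor (W fastest-changing) is (N, C, H, W):
# slowest-changing dimension first.
batch = np.zeros((6, 3, 200, 200), dtype=np.uint8)  # N=6, C=3, H=200, W=200

# CWHN (C fastest-changing) corresponds to (N, H, W, C) on the host --
# the channels-last layout that plt.imshow accepts per image.
interleaved = np.transpose(batch, (0, 2, 3, 1))
print(interleaved.shape)  # (6, 200, 200, 3)
```

Without this step, each images[i] would be a planar (C, H, W) array, which plt.imshow cannot display directly.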

Horizontally Flipped Images

(six horizontally flipped output images produced by the pipeline)

Licensed under a CC BY SA 4.0 license. The images used here are taken from https://data.caltech.edu/records/mzrjq-6wc02.