habana_frameworks.mediapipe.fn.Normalize

Class:
  • habana_frameworks.mediapipe.fn.Normalize(**kwargs)

Define graph call:
  • __call__(input)

Parameter:
  • input - Input tensor to operator. Supported dimensions: minimum = 4, maximum = 4. Supported data types: UINT8, UINT16.

Description:

Normalizes the input tensor by subtracting the mean and dividing by the standard deviation. The mean and standard deviation are calculated internally. Normalization is done using the formula given below; an epsilon (1e-05) is added to the variance in order to avoid division by zero.

out = (scale * (in - mean)) / sqrt(variance + epsilon) + shift
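As a rough illustration, the formula above can be mirrored in NumPy. This is only a stand-in for the HPU operator, with the mean and variance computed over the whole array:

```python
import numpy as np

# NumPy sketch of the Normalize formula (not the HPU kernel):
# out = (scale * (in - mean)) / sqrt(variance + epsilon) + shift
def normalize(arr, scale=1.0, shift=0.0, eps=1e-5, axis=None):
    mean = arr.mean(axis=axis, keepdims=True)
    var = arr.var(axis=axis, keepdims=True)
    return (scale * (arr - mean)) / np.sqrt(var + eps) + shift

x = np.arange(12, dtype=np.float32).reshape(3, 4)
out = normalize(x, scale=60.0, shift=120.0)
```

After normalization the output mean lands at `shift` and the output standard deviation at approximately `scale`.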

Supported backend:
  • HPU

Keyword Arguments

kwargs

Description

scale

Scaling factor to be applied to output.

  • Type: float

  • Default: 0.0

  • Optional: yes

shift

The value to which the mean maps in the output.

  • Type: float

  • Default: 0.0

  • Optional: yes

axis

An optional bitmap for CHW data layout. Set the respective bit for each dimension to be normalized.

  • Type: int

  • Default: 0

  • Optional: yes

  • Supported values: 0, 1, 2, 3, 4, 5, 7
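The bitmap semantics can be sketched as follows. Note this is an assumption for illustration only: the helper `dims_from_axis` is hypothetical, and the bit-to-dimension mapping should be verified against the operator's actual output:

```python
# Sketch of how the 'axis' bitmap might select dimensions for a CHW layout.
# ASSUMPTION: bit i corresponds to dimension i in (C, H, W) order; this
# mapping is illustrative, not confirmed by the operator documentation.
def dims_from_axis(axis, layout=("C", "H", "W")):
    return [dim for i, dim in enumerate(layout) if axis & (1 << i)]

print(dims_from_axis(1))  # a single bit set: one dimension selected
print(dims_from_axis(7))  # 0b111: all three dimensions selected
```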

batch

If set to True, the mean and standard deviation are calculated across all tensors in the batch. This argument is not yet supported by the TPC kernel.

  • Type: bool

  • Default: False

  • Optional: yes

dtype

Output data type.

  • Type: habana_frameworks.mediapipe.media_types.dtype

  • Default: UINT8

  • Optional: yes

  • Supported data types:

    • UINT8

    • UINT16

Note

  1. When the number of input tensors is 1, i.e., only the input feature map (IFM) is provided, and axis is also set to its default value of 0, normalization is done along CHW.

  2. If a mean or inverse standard deviation tensor is given as an input and axis is also specified, the shape of the mean/inverse standard deviation tensor must match the input tensor shape reduced according to axis. Currently, the axis attribute can take the values 1, 2, 3, 4, 5, and 7.
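The expected shape of a user-supplied mean or inverse standard deviation tensor can be sketched as below. The helper `reduced_shape` is hypothetical, and the assumption that each set bit reduces the corresponding dimension to size 1 (in the same bit order as the sketch) should be checked against the operator:

```python
# Hypothetical helper: the shape a user-supplied mean / inverse-std tensor
# would need, assuming each bit set in the 'axis' bitmap reduces the
# corresponding input dimension to size 1. The bit-to-dimension mapping
# here is an assumption, not taken from the documentation.
def reduced_shape(shape, axis_bitmap):
    return tuple(1 if axis_bitmap & (1 << i) else s for i, s in enumerate(shape))

print(reduced_shape((3, 200, 200), 7))  # all dimensions reduced
print(reduced_shape((3, 200, 200), 1))  # only the first dimension reduced
```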

Example: Normalize Operator

The following code snippet shows usage of Normalize operator:

from habana_frameworks.mediapipe import fn
from habana_frameworks.mediapipe.mediapipe import MediaPipe
from habana_frameworks.mediapipe.media_types import imgtype as it
from habana_frameworks.mediapipe.media_types import dtype as dt
import matplotlib.pyplot as plt
import os

g_display_timeout = int(os.getenv("DISPLAY_TIMEOUT", "5"))

# Create MediaPipe derived class
class myMediaPipe(MediaPipe):
    def __init__(self, device, queue_depth, batch_size, num_threads, op_device, dir, img_h, img_w):
        super(
            myMediaPipe,
            self).__init__(
            device,
            queue_depth,
            batch_size,
            num_threads,
            self.__class__.__name__)

        self.input = fn.ReadImageDatasetFromDir(shuffle=False,
                                                dir=dir,
                                                format="jpg",
                                                device="cpu")

        # WHCN
        self.decode = fn.ImageDecoder(device="hpu",
                                      output_format=it.RGB_P,
                                      resize=[img_w, img_h])

        self.normalize = fn.Normalize(scale=60.0,
                                      shift=120.0,
                                      axis=1,
                                      device=op_device)

        # WHCN -> CWHN
        self.transpose = fn.Transpose(permutation=[2, 0, 1, 3],
                                      tensorDim=4,
                                      dtype=dt.UINT8,
                                      device=op_device)

    def definegraph(self):
        images, labels = self.input()
        images = self.decode(images)
        images = self.normalize(images)
        images = self.transpose(images)
        return images, labels

def display_images(images, batch_size, cols):
    rows = (batch_size + cols - 1) // cols  # ceiling division
    plt.figure(figsize=(10, 10))
    for i in range(batch_size):
        ax = plt.subplot(rows, cols, i + 1)
        plt.imshow(images[i])
        plt.axis("off")
    plt.show(block=False)
    plt.pause(g_display_timeout)
    plt.close()

def run(device, op_device):
    batch_size = 6
    queue_depth = 2
    num_threads = 1
    img_width = 200
    img_height = 200
    base_dir = os.environ['DATASET_DIR']
    dir = base_dir + "/img_data/"
    columns = 3

    # Create MediaPipe object
    pipe = myMediaPipe(device, queue_depth, batch_size,
                      num_threads, op_device, dir,
                      img_height, img_width)

    # Build MediaPipe
    pipe.build()

    # Initialize MediaPipe iterator
    pipe.iter_init()

    # Run MediaPipe
    images, labels = pipe.run()

    def as_cpu(tensor):
        if callable(getattr(tensor, "as_cpu", None)):
            tensor = tensor.as_cpu()
        return tensor

    # Copy data to host from device as numpy array
    images = as_cpu(images).as_nparray()
    labels = as_cpu(labels).as_nparray()
    del pipe

    # Display images
    display_images(images, batch_size, columns)



if __name__ == "__main__":
    dev_opdev = {'mixed': ['hpu'],
                'legacy': ['hpu']}

    for dev in dev_opdev.keys():
        for op_dev in dev_opdev[dev]:
            run(dev, op_dev)

Normalized Images [1]

(Six output images of the Normalize operator.)

[1] Licensed under a CC BY-SA 4.0 license. The images used here are taken from https://data.caltech.edu/records/mzrjq-6wc02.