habana_frameworks.mediapipe.fn.Saturation
- Class:
habana_frameworks.mediapipe.fn.Saturation(**kwargs)
- Define graph call:
__call__(input, saturation_level_tensor)
- Parameters:
input - Input tensor to operator. Supported dimensions: minimum = 4, maximum = 4. Supported data types: UINT8, UINT16.
(optional) saturation_level_tensor - Tensor containing saturation level for each image in the batch. Supported dimensions: minimum = 1, maximum = 1. Supported data types: FLOAT32.
- Description:
The Saturation operator modifies the saturation level of an image by a specified change factor. It converts each RGB value to HSV space, scales the saturation component, and converts the result back to RGB.
- Supported backend:
HPU
Keyword Arguments

kwargs | Description
---|---
saturation_level | Change factor by which the saturation level of an image is modified.
dtype | Output data type.
Note
- Input and output tensors must be of the same data type.
- Input1 and the output tensor should be in NCHW layout. Input2 is a one-dimensional tensor of size N.
- The saturation level can be provided either as a scalar applied to all images in the batch or as a 1D tensor with one value per image.
- The number of channels in input and output must be 3.
- saturation_level must be >= 0:
  - 0: completely desaturated image
  - 1: no change to the image's saturation
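The HSV round-trip described above can be sketched with NumPy and Matplotlib. This is an illustrative approximation for a single HWC image, not the HPU kernel; `adjust_saturation` is a hypothetical helper name:

```python
import numpy as np
from matplotlib.colors import rgb_to_hsv, hsv_to_rgb

def adjust_saturation(rgb_u8, saturation_level):
    """Scale the HSV saturation channel of an RGB uint8 image (sketch only)."""
    # Convert RGB [0, 255] -> HSV with all channels in [0, 1]
    hsv = rgb_to_hsv(rgb_u8.astype(np.float32) / 255.0)
    # Scale the S channel by the change factor and keep it in range
    hsv[..., 1] = np.clip(hsv[..., 1] * saturation_level, 0.0, 1.0)
    # Convert back to RGB uint8
    return (hsv_to_rgb(hsv) * 255.0).round().astype(np.uint8)

img = np.random.randint(0, 256, size=(4, 4, 3), dtype=np.uint8)
desat = adjust_saturation(img, 0.0)  # fully desaturated: R == G == B per pixel
```

With a factor of 0 every pixel collapses to its value channel (grayscale), and with a factor of 1 the image is left unchanged up to rounding, matching the semantics listed in the note above.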
Example: Saturation Operator
The following code snippet shows the usage of the Saturation operator:
from habana_frameworks.mediapipe import fn
from habana_frameworks.mediapipe.mediapipe import MediaPipe
from habana_frameworks.mediapipe.media_types import imgtype as it
from habana_frameworks.mediapipe.media_types import dtype as dt
import matplotlib.pyplot as plt
import os

g_display_timeout = int(os.getenv("DISPLAY_TIMEOUT", 5))

# Create MediaPipe derived class
class myMediaPipe(MediaPipe):
    def __init__(self, device, queue_depth, batch_size, num_threads,
                 op_device, dir, img_h, img_w):
        super(
            myMediaPipe,
            self).__init__(
            device,
            queue_depth,
            batch_size,
            num_threads,
            self.__class__.__name__)

        self.input = fn.ReadImageDatasetFromDir(shuffle=False,
                                                dir=dir,
                                                format="jpg",
                                                device="cpu")

        # WHCN
        self.decode = fn.ImageDecoder(device="hpu",
                                      output_format=it.RGB_P,
                                      resize=[img_w, img_h])

        self.saturation = fn.Saturation(saturation_level=0.3,
                                        dtype=dt.UINT8,
                                        device=op_device)

        # WHCN -> CWHN
        self.transpose = fn.Transpose(permutation=[2, 0, 1, 3],
                                      tensorDim=4,
                                      dtype=dt.UINT8,
                                      device=op_device)

    def definegraph(self):
        images, labels = self.input()
        images = self.decode(images)
        images = self.saturation(images)
        images = self.transpose(images)
        return images, labels


def display_images(images, batch_size, cols):
    rows = (batch_size + 1) // cols
    plt.figure(figsize=(10, 10))
    for i in range(batch_size):
        ax = plt.subplot(rows, cols, i + 1)
        plt.imshow(images[i])
        plt.axis("off")
    plt.show(block=False)
    plt.pause(g_display_timeout)
    plt.close()


def run(device, op_device):
    batch_size = 6
    queue_depth = 2
    num_threads = 1
    img_width = 200
    img_height = 200
    base_dir = os.environ['DATASET_DIR']
    dir = base_dir + "/img_data/"
    columns = 3

    # Create MediaPipe object
    pipe = myMediaPipe(device, queue_depth, batch_size,
                       num_threads, op_device, dir,
                       img_height, img_width)

    # Build MediaPipe
    pipe.build()

    # Initialize MediaPipe iterator
    pipe.iter_init()

    # Run MediaPipe
    images, labels = pipe.run()

    def as_cpu(tensor):
        if callable(getattr(tensor, "as_cpu", None)):
            tensor = tensor.as_cpu()
        return tensor

    # Copy data to host from device as numpy array
    images = as_cpu(images).as_nparray()
    labels = as_cpu(labels).as_nparray()
    del pipe

    # Display images
    display_images(images, batch_size, columns)


if __name__ == "__main__":
    dev_opdev = {'mixed': ['hpu'],
                 'legacy': ['hpu']}
    for dev in dev_opdev.keys():
        for op_dev in dev_opdev[dev]:
            run(dev, op_dev)
Images with Changed Saturation [1]

[1] Licensed under a CC BY SA 4.0 license. The images used here are taken from https://data.caltech.edu/records/mzrjq-6wc02.