habana_frameworks.mediapipe.fn.ReduceMin
- Class:
habana_frameworks.mediapipe.fn.ReduceMin(**kwargs)
- Define graph call:
__call__(input)
- Parameter:
input - Input tensor to the operator. Supported dimensions: minimum = 1, maximum = 5. Supported data types: INT32, FLOAT16, BFLOAT16, FLOAT32.
Description:
Computes the minimum of the input tensor's elements along the provided dimension. In addition to the reduced output tensor, an index tensor (Retained-Index) is produced, which stores the index of the minimum element along the reduced dimension.
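As a rough analogy (not the MediaPipe API itself), the value/index output pair behaves like NumPy's `min`/`argmin` with `keepdims=True` along the chosen axis:

```python
import numpy as np

# Illustrative stand-in for ReduceMin semantics: reduce along axis 1
x = np.array([[3.0, 1.0],
              [2.0, 5.0]], dtype=np.float32)

vals = np.min(x, axis=1, keepdims=True)     # reduced minimum values
idxs = np.argmin(x, axis=1, keepdims=True)  # retained indices of the minima

print(vals.ravel())  # [1. 2.]
print(idxs.ravel())  # [1 0]
```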
- Supported backend:
HPU
Keyword Arguments

kwargs | Description
---|---
reductionDimension | The dimension along which to perform the reduction operation.
dtype | Output data type.
Note
- All input/output tensors must be of the same data type.
- All tensors must have the same dimensionality.
- The size of the reduction dimension in the output tensor must be 1.
- The size of the index tensor must be equal to the size of the output tensor.
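The shape constraints in the note above can be illustrated with NumPy (an analogy, not the HPU kernel): reducing with `keepdims=True` leaves the reduced dimension with size 1 while preserving rank, and the index tensor has the same shape as the reduced output.

```python
import numpy as np

x = np.zeros((4, 3, 5), dtype=np.float32)   # arbitrary 3-D input

out = np.min(x, axis=0, keepdims=True)      # reduction dim kept with size 1
idx = np.argmin(x, axis=0, keepdims=True)   # index tensor

print(out.shape)  # (1, 3, 5) -> same rank, reduced dim has size 1
print(idx.shape)  # matches the output tensor's shape
```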
Example: ReduceMin Operator
The following code snippet shows the usage of the ReduceMin operator:
from habana_frameworks.mediapipe import fn
from habana_frameworks.mediapipe.mediapipe import MediaPipe
from habana_frameworks.mediapipe.media_types import imgtype as it
from habana_frameworks.mediapipe.media_types import dtype as dt

# Create media pipeline derived class
class myMediaPipe(MediaPipe):
    def __init__(self, device, dir, queue_depth, batch_size, img_h, img_w):
        super(myMediaPipe, self).__init__(device,
                                          queue_depth,
                                          batch_size,
                                          self.__class__.__name__)
        self.input = fn.ReadImageDatasetFromDir(shuffle=False,
                                                dir=dir,
                                                format="jpg")

        # WHCN
        self.decode = fn.ImageDecoder(device="hpu",
                                      output_format=it.RGB_P,
                                      resize=[img_w, img_h])

        self.pre_cast = fn.Cast(dtype=dt.FLOAT32)

        self.min = fn.ReduceMin(reductionDimension=0,
                                dtype=dt.FLOAT32)

    def definegraph(self):
        images, label = self.input()
        images = self.decode(images)
        images = self.pre_cast(images)
        images, images_i = self.min(images)
        return images, images_i
def main():
    batch_size = 6
    img_width = 200
    img_height = 200
    img_dir = "/path/to/images"
    queue_depth = 2

    # Create media pipeline object
    pipe = myMediaPipe('hpu', img_dir, queue_depth, batch_size,
                       img_height, img_width)

    # Build media pipeline
    pipe.build()

    # Initialize media pipeline iterator
    pipe.iter_init()

    # Run media pipeline
    images, labels = pipe.run()

    # Copy data to host from device as numpy array
    images = images.as_cpu().as_nparray()
    labels = labels.as_cpu().as_nparray()

    # Display shape and data
    print('shape:', images.shape)
    print('data:\n', images)

if __name__ == "__main__":
    main()
The following is the output of the ReduceMin operator:
shape: (6, 3, 200, 1)
data:
[[[[47.]
[47.]
[48.]
...
[31.]
[33.]
[40.]]]]