habana_frameworks.mediapipe.fn.Clamp
- Class:
habana_frameworks.mediapipe.fn.Clamp(**kwargs)
- Define graph call:
__call__(input, lower_bound, upper_bound)
- Parameters:
input - Input tensor to the operator. Supported dimensions: minimum = 1, maximum = 5. Supported data types: FLOAT16, FLOAT32, BFLOAT16.
(optional) lower_bound - 1D tensor of size 1 containing the lower bound value. Supported dimensions: minimum = 1, maximum = 1. Supported data types: FLOAT16, FLOAT32, BFLOAT16.
(optional) upper_bound - 1D tensor of size 1 containing the upper bound value. Supported dimensions: minimum = 1, maximum = 1. Supported data types: FLOAT16, FLOAT32, BFLOAT16.
- Description:
Clamps all elements in input into the range [lowerBound, upperBound].
- Supported backend:
HPU
Keyword Arguments

kwargs | Description
---|---
upperBound | The maximum value allowed for the input. Values greater than upperBound are clipped to upperBound.
lowerBound | The minimum value allowed for the input. Values lower than lowerBound are clipped to lowerBound.
dtype | Output data type.
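For reference, the clamp computation is an element-wise min/max, equivalent to NumPy's clip. The sketch below is illustrative only (plain NumPy, not the HPU operator); the helper name clamp_reference and the sample values are hypothetical:

import numpy as np

def clamp_reference(x, lower_bound, upper_bound):
    # Element-wise: min(max(x, lowerBound), upperBound)
    return np.minimum(np.maximum(x, lower_bound), upper_bound)

x = np.array([30.0, 75.0, 200.0], dtype=np.float32)
print(clamp_reference(x, 50.0, 120.0))  # [ 50.  75. 120.]
print(np.clip(x, 50.0, 120.0))          # same result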
Note
- All input/output tensors must be of the same data type and must have the same dimensionality.
- This operator is agnostic to the data layout.
- lower_bound and upper_bound are optional tensors carrying the lowerBound and upperBound values, respectively. When the lower_bound and upper_bound input tensors are not provided, the scalar keyword arguments lowerBound and upperBound are used instead.
- If the data type is BFLOAT16, the lowerBound and upperBound float values are converted from float to BFLOAT16 inside the operator.
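For instance, when no bound tensors are passed in the graph call, the operator can be configured purely through scalar keyword arguments. A minimal sketch, assuming illustrative bound values and an HPU placement (the float-to-BFLOAT16 conversion mentioned above happens inside the operator):

from habana_frameworks.mediapipe import fn
from habana_frameworks.mediapipe.media_types import dtype as dt

# Scalar bounds only; no lower_bound/upper_bound tensors are passed in definegraph.
# With dtype=dt.BFLOAT16, the float lowerBound/upperBound values are converted
# to BFLOAT16 inside the operator.
clamp_bf16 = fn.Clamp(lowerBound=0.0,
                      upperBound=1.0,
                      dtype=dt.BFLOAT16,
                      device="hpu")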
Example: Clamp Operator
The following code snippet shows usage of the Clamp operator:
from habana_frameworks.mediapipe import fn
from habana_frameworks.mediapipe.mediapipe import MediaPipe
from habana_frameworks.mediapipe.media_types import dtype as dt
import os
import numpy as np

g_upper_bound = 120.0
g_lower_bound = 50.0


class myMediaPipe(MediaPipe):
    def __init__(self, device, queue_depth, batch_size, num_threads, op_device, dir):
        super(myMediaPipe, self).__init__(device,
                                          queue_depth,
                                          batch_size,
                                          num_threads,
                                          self.__class__.__name__)

        # Read dense FP32 numpy files from the dataset directory
        self.inp = fn.ReadNumpyDatasetFromDir(num_outputs=1,
                                              shuffle=False,
                                              dir=dir,
                                              pattern="inp_x_*.npy",
                                              dense=True,
                                              dtype=dt.FLOAT32,
                                              device="cpu")

        # Clamp input values to the range [g_lower_bound, g_upper_bound]
        self.clamp = fn.Clamp(upperBound=g_upper_bound,
                              lowerBound=g_lower_bound,
                              dtype=dt.FLOAT32,
                              device=op_device)

    def definegraph(self):
        inp = self.inp()
        out = self.clamp(inp)
        return out, inp


def run(device, op_device):
    batch_size = 2
    queue_depth = 2
    num_threads = 1
    base_dir = os.environ['DATASET_DIR']
    dir = base_dir + "/npy_data/fp32/"

    # Create MediaPipe object
    pipe = myMediaPipe(device, queue_depth, batch_size,
                       num_threads, op_device, dir)

    # Build MediaPipe
    pipe.build()

    # Initialize MediaPipe iterator
    pipe.iter_init()

    # Run MediaPipe
    out, inp = pipe.run()

    def as_cpu(tensor):
        if callable(getattr(tensor, "as_cpu", None)):
            tensor = tensor.as_cpu()
        return tensor

    # Copy outputs to host and convert to numpy arrays
    out = as_cpu(out).as_nparray()
    inp = as_cpu(inp).as_nparray()
    del pipe

    print("\nout tensor shape:", out.shape)
    print("out tensor dtype:", out.dtype)
    print("out tensor data:\n", out)

    # Verify output against the numpy reference
    compare_ref(inp, out)


def compare_ref(inp, out):
    ref = np.clip(inp, g_lower_bound, g_upper_bound)
    if not np.array_equal(ref, out):
        raise ValueError("Mismatch w.r.t ref")


if __name__ == "__main__":
    dev_opdev = {'mixed': ['hpu'],
                 'legacy': ['hpu']}

    for dev in dev_opdev.keys():
        for op_dev in dev_opdev[dev]:
            run(dev, op_dev)
The following is the output of the Clamp operator:
out tensor shape: (2, 3, 2, 3)
out tensor dtype: float32
out tensor data:
[[[[120. 120. 113.]
[120. 120. 120.]]
[[ 58. 120. 120.]
[ 86. 80. 111.]]
[[120. 120. 120.]
[ 50. 120. 108.]]]
[[[120. 120. 96.]
[120. 64. 120.]]
[[120. 50. 117.]
[120. 50. 111.]]
[[ 77. 50. 120.]
[120. 102. 120.]]]]