habana_frameworks.mediapipe.fn.Mult

Class:
  • habana_frameworks.mediapipe.fn.Mult(**kwargs)

Define graph call:
  • __call__(input1, input2)

Parameters:
  • input1 - First input tensor to the operator. Supported dimensions: minimum = 1, maximum = 5. Supported data types: INT16, INT32, FLOAT16, BFLOAT16, FLOAT32.

  • input2 - Second input tensor to the operator. Supported dimensions: minimum = 1, maximum = 5. Supported data types: INT16, INT32, FLOAT16, BFLOAT16, FLOAT32.

Description:

This operator computes the element-wise product of its two operands, output = (input1 * input2), and supports broadcasting between the inputs.
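
The semantics can be illustrated with a small host-side sketch. This is illustrative only: numpy stands in for the operator and the values are made up.

import numpy as np

# Illustrative only: numpy stands in for the Mult kernel.
input1 = np.array([[1., 2., 3.],
                   [4., 5., 6.]], dtype=np.float32)
input2 = np.array([[10., 20., 30.],
                   [40., 50., 60.]], dtype=np.float32)

output = input1 * input2   # element-wise product, same shape as the inputs
print(output)
# [[ 10.  40.  90.]
#  [160. 250. 360.]]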

Supported backend:
  • HPU, CPU

Keyword Arguments:

  • dtype - Output data type. A short dtype-selection sketch follows this list.

    • Type: habana_frameworks.mediapipe.media_types.dtype

    • Default: FLOAT32

    • Optional: yes

    • Supported data types: INT32, BFLOAT16, FLOAT32
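
The snippet below is a minimal sketch of selecting a non-default output type. In practice the operator is constructed inside the pipeline's __init__, as in the full example further down; the device string used here is an assumption.

from habana_frameworks.mediapipe import fn
from habana_frameworks.mediapipe.media_types import dtype as dt

device = 'hpu'   # assumed target device

# Produce a BFLOAT16 output instead of the default FLOAT32.
mul_bf16 = fn.Mult(dtype=dt.BFLOAT16, device=device)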

Note

  1. All input and output tensors must be of the same data type and must have the same dimensionality, except when broadcasting is used, in which case the inputs may have different dimensionalities (see the sketch after this list).

  2. This operator is agnostic to the data layout.
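
The broadcast case can be sketched on the host as follows, assuming numpy-style broadcasting rules; numpy again stands in for the operator and the shapes are chosen for illustration only.

import numpy as np

# Illustrative only: a full-rank tensor multiplied by a lower-rank tensor.
a = np.full((2, 3, 2, 3), 2.0, dtype=np.float32)   # shape (2, 3, 2, 3)
b = np.array([1., 10., 100.], dtype=np.float32)    # shape (3,), broadcast over the last axis

out = a * b
print(out.shape)    # (2, 3, 2, 3)
print(out[0, 0])
# [[  2.  20. 200.]
#  [  2.  20. 200.]]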

Example: Mult Operator

The following code snippet shows the usage of the Mult operator:

from habana_frameworks.mediapipe import fn
from habana_frameworks.mediapipe.mediapipe import MediaPipe
from habana_frameworks.mediapipe.media_types import dtype as dt

# Create media pipeline derived class
class myMediaPipe(MediaPipe):
    def __init__(self, device, queue_depth, batch_size, num_threads, dir):
        super(myMediaPipe, self).__init__(
            device,
            queue_depth,
            batch_size,
            num_threads,
            self.__class__.__name__)

        self.inp1 = fn.ReadNumpyDatasetFromDir(num_outputs=1,
                                              shuffle=False,
                                              dir=dir,
                                              pattern="inp_x_*.npy",
                                              dense=True,
                                              dtype=dt.FLOAT32,
                                              device=device)

        self.inp2 = fn.ReadNumpyDatasetFromDir(num_outputs=1,
                                              shuffle=False,
                                              dir=dir,
                                              pattern="inp_y_*.npy",
                                              dense=True,
                                              dtype=dt.FLOAT32,
                                              device=device)

        self.mul = fn.Mult(dtype=dt.FLOAT32,
                          device=device)


    def definegraph(self):
        inp1 = self.inp1()
        inp2 = self.inp2()
        out = self.mul(inp1, inp2)
        return out, inp1, inp2


def main():
    batch_size = 2
    queue_depth = 2
    num_threads = 1
    dir = '/path/to/numpy/files'
    device = 'hpu'

    # Create media pipeline object
    pipe = myMediaPipe(device, queue_depth, batch_size, num_threads, dir)

    # Build media pipeline
    pipe.build()

    # Initialize media pipeline iterator
    pipe.iter_init()

    # Run media pipeline
    out, inp1, inp2 = pipe.run()

    if device == 'cpu':
        # Copy data as numpy array
        out = out.as_nparray()
        inp1 = inp1.as_nparray()
        inp2 = inp2.as_nparray()
    else:
        # Copy data to host from device as numpy array
        out = out.as_cpu().as_nparray()
        inp1 = inp1.as_cpu().as_nparray()
        inp2 = inp2.as_cpu().as_nparray()

    print("\ninp1 tensor shape:", inp1.shape)
    print("inp1 tensor dtype:", inp1.dtype)
    print("inp1 tensor data:\n", inp1)

    print("\ninp2 tensor shape:", inp2.shape)
    print("inp2 tensor dtype:", inp2.dtype)
    print("inp2 tensor data:\n", inp2)

    print("\nout tensor shape:", out.shape)
    print("out tensor dtype:", out.dtype)
    print("out tensor data:\n", out)

    pipe.del_iter()

if __name__ == "__main__":
    main()

The following is the output of the Mult operator:

inp1 tensor shape: (2, 3, 2, 3)
inp1 tensor dtype: float32
inp1 tensor data:
[[[[182. 227. 113.]
  [175. 128. 253.]]

  [[ 58. 140. 136.]
  [ 86.  80. 111.]]

  [[175. 196. 178.]
  [ 20. 163. 108.]]]


[[[186. 254.  96.]
  [180.  64. 132.]]

  [[149.  50. 117.]
  [213.   6. 111.]]

  [[ 77.  11. 160.]
  [129. 102. 154.]]]]

inp2 tensor shape: (2, 3, 2, 3)
inp2 tensor dtype: float32
inp2 tensor data:
[[[[ 56. 168.  82.]
  [157.  42. 155.]]

  [[ 62. 235. 238.]
  [ 94. 125. 192.]]

  [[125. 162.   1.]
  [206.  77. 123.]]]


[[[138. 196. 246.]
  [137. 203.   7.]]

  [[217. 194.  11.]
  [167. 218. 226.]]

  [[ 68. 160. 254.]
  [243.  93.  70.]]]]

out tensor shape: (2, 3, 2, 3)
out tensor dtype: float32
out tensor data:
[[[[10192. 38136.  9266.]
  [27475.  5376. 39215.]]

  [[ 3596. 32900. 32368.]
  [ 8084. 10000. 21312.]]

  [[21875. 31752.   178.]
  [ 4120. 12551. 13284.]]]


[[[25668. 49784. 23616.]
  [24660. 12992.   924.]]

  [[32333.  9700.  1287.]
  [35571.  1308. 25086.]]

  [[ 5236.  1760. 40640.]
  [31347.  9486. 10780.]]]]
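
As a quick sanity check, the host-side arrays returned above can be compared against a reference product computed with numpy. This assumes out, inp1 and inp2 are the numpy arrays obtained in the example.

import numpy as np

# 'out', 'inp1' and 'inp2' are the host-side numpy arrays from the example above.
reference = inp1 * inp2
assert np.allclose(out, reference), "Mult output does not match the host-side reference"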