habana_frameworks.mediapipe.fn.ReadImageDatasetFromDir¶
- Class:
habana_frameworks.mediapipe.fn.ReadImageDatasetFromDir(**kwargs)
- Define graph call:
__call__()
- Parameter:
None
Description:
This reader is designed for image classification tasks. There are two ways to provide input to ReadImageDatasetFromDir:
- By specifying an input directory path. Each sub-directory name is treated as the class_label for all images inside it.
- By providing file_list, class_list, and file_classes to the reader.
The reader returns batches of image paths and ground truth labels. The output ground truth labels are a list of integers representing the class labels of the images.
- Supported backend:
CPU
Keyword Arguments

kwargs | Description
---|---
shuffle | If set to True, the reader shuffles the entire dataset before each epoch. In multi-card training, it first creates a random slice/chunk of files for every card and then shuffles it.
seed | Seed for randomization. If not provided, it is generated internally. It is used for shuffling the dataset and for randomly selecting the images that pad the last partial batch when pad_remainder is False.
max_file | Full path of the biggest input file. This is used for pre-allocating buffers. If not provided, the reader will find it.
drop_remainder | If True, the reader drops the last partial batch of the epoch. If False, the padding mechanism can be controlled by pad_remainder.
pad_remainder | If True, the reader replicates the last image of the partial batch. If False, the partial batch is filled with images randomly selected from the dataset.
label_dtype | Data type of the output ground truth labels. The reader returns a batch of image file paths and ground truth labels. The output ground truth labels are a list of integers, each of which is the index of the respective image's class label in the sorted list of all class labels. label_dtype specifies the data type of these integers.
num_slices | Number of cards in multi-card training. Before the first epoch, the input data is divided into num_slices chunks, i.e. one chunk for every card. For the entire training run, the same chunk is used for that particular card to create batches in every epoch. The default value is 1, which indicates single-card training.
slice_index | Index of the card in multi-card training.
dir | Input image directory path. The reader reads images from all sub-directories of dir. The name of each sub-directory is treated as the class label for all images inside it.
format | Format or extension of the image file names. It is used to list all the images in the sub-directories of dir.
file_list | Instead of providing dir (the input image directory path), the user can provide a list of files to the reader.
class_list | List of unique class labels; it must be provided along with file_list. It is used as a look-up table to generate the output ground truth labels.
file_sizes | Optional dictionary of file path and file size for every file in file_list. If provided, it is used to compute max_file_size; otherwise the reader finds max_file_size itself. max_file_size is used for pre-allocating decoder buffers.
file_classes | List of class names, one for every file in file_list. It is used to generate the output ground truth labels. If not provided, the last sub-directory name in each file_list entry is used to generate file_classes.
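The interaction of drop_remainder, pad_remainder, and seed described above can be sketched in plain Python. This is a minimal illustration of the documented behavior, not the reader's actual implementation; make_batches is a hypothetical helper:

```python
import random

def make_batches(files, batch_size, drop_remainder, pad_remainder, seed=0):
    """Sketch of last-batch handling: drop it, replicate its last file,
    or pad it with randomly selected files from the dataset."""
    batches = [files[i:i + batch_size] for i in range(0, len(files), batch_size)]
    if batches and len(batches[-1]) < batch_size:
        if drop_remainder:
            batches.pop()                                # discard the partial batch
        else:
            short = batch_size - len(batches[-1])
            if pad_remainder:
                batches[-1] += [batches[-1][-1]] * short  # replicate last image
            else:
                rng = random.Random(seed)                 # seed controls random padding
                batches[-1] += [rng.choice(files) for _ in range(short)]
    return batches

files = [f"img_{i}.jpg" for i in range(7)]
print(make_batches(files, 3, drop_remainder=True, pad_remainder=False))
# → two full batches; the partial third batch is dropped
print(make_batches(files, 3, drop_remainder=False, pad_remainder=True))
# → last batch is ['img_6.jpg', 'img_6.jpg', 'img_6.jpg']
```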
Example #1: Use ReadImageDatasetFromDir by Providing Input Directory¶
The following code snippet shows the use of ReadImageDatasetFromDir by providing the input image directory path. The input jpg images are present in sub-directories of "/path/to/dataset/", for example:
"/path/to/dataset/class_1/img_c1_0.jpg"
"/path/to/dataset/class_1/img_c1_1.jpg"
"/path/to/dataset/class_1/img_c1_2.jpg"
"/path/to/dataset/class_2/img_c2_0.jpg"
"/path/to/dataset/class_2/img_c2_1.jpg"
"/path/to/dataset/class_2/img_c2_2.jpg"
fn.ReadImageDatasetFromDir(shuffle=False, dir="/path/to/dataset/", format="jpg")
Since format="jpg", the reader processes all "jpg" files in the sub-directories of dir. The name of each sub-directory is treated as the class_label for all images inside it.
The reader internally creates class_list, which is a sorted list of unique class_labels (i.e. a sorted list of unique sub-directory names). class_list is used as a look-up table to generate the output ground truth labels: the label of every image is the index in class_list of that image's class_label (i.e. the index in class_list of the sub-directory in which that image is present).
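This mapping can be illustrated with a few lines of plain Python. The file names are hypothetical and this is only a sketch of the documented label derivation, not the reader's internal code:

```python
import os

# Hypothetical files as the reader might discover them under dir/<class>/<image>.jpg
file_list = [
    "/path/to/dataset/class_2/img_c2_0.jpg",
    "/path/to/dataset/class_1/img_c1_0.jpg",
    "/path/to/dataset/class_1/img_c1_1.jpg",
]

# The sub-directory name is the class label; class_list is the sorted unique labels.
labels = [os.path.basename(os.path.dirname(f)) for f in file_list]
class_list = sorted(set(labels))
ground_truth = [class_list.index(label) for label in labels]

print(class_list)    # → ['class_1', 'class_2']
print(ground_truth)  # → [1, 0, 0]
```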
In the example below, the reader returns a ground truth label for every image, which is displayed as the title of that image.
import matplotlib.pyplot as plt
from habana_frameworks.mediapipe import fn
from habana_frameworks.mediapipe.mediapipe import MediaPipe
from habana_frameworks.mediapipe.media_types import imgtype as it
from habana_frameworks.mediapipe.media_types import dtype as dt
import os

g_display_timeout = int(os.getenv("DISPLAY_TIMEOUT") or 5)

# Create MediaPipe derived class
class myMediaPipe(MediaPipe):
    def __init__(self, device, queue_depth, batch_size, num_threads, op_device, dir, img_h, img_w):
        super(
            myMediaPipe,
            self).__init__(
            device,
            queue_depth,
            batch_size,
            num_threads,
            self.__class__.__name__)

        self.input = fn.ReadImageDatasetFromDir(shuffle=False,
                                                dir=dir,
                                                format="jpg",
                                                device=op_device)

        self.decode = fn.ImageDecoder(device="hpu",
                                      output_format=it.RGB_I,
                                      resize=[img_w, img_h],
                                      dtype=dt.UINT8)

    def definegraph(self):
        images, labels = self.input()
        images = self.decode(images)
        return images, labels

def display_images(images, labels, batch_size, cols):
    rows = (batch_size + 1) // cols
    plt.figure(figsize=(10, 10))
    for i in range(batch_size):
        ax = plt.subplot(rows, cols, i + 1)
        plt.imshow(images[i])
        plt.title("label:" + str(labels[i]))
        plt.axis("off")
    plt.show(block=False)
    plt.pause(g_display_timeout)
    plt.close()

def run(device, op_device):
    batch_size = 6
    queue_depth = 2
    num_threads = 1
    img_width = 200
    img_height = 200
    base_dir = os.environ['DATASET_DIR']
    dir = base_dir + "/img_data/"
    columns = 3

    # Create MediaPipe object
    pipe = myMediaPipe(device, queue_depth, batch_size,
                       num_threads, op_device, dir,
                       img_height, img_width)
    pipe.build()
    pipe.iter_init()

    try:
        images, labels = pipe.run()
    except StopIteration:
        pass

    def as_cpu(tensor):
        if callable(getattr(tensor, "as_cpu", None)):
            tensor = tensor.as_cpu()
        return tensor

    # Copy data to host from device as numpy array
    images = as_cpu(images).as_nparray()
    labels = as_cpu(labels).as_nparray()
    del pipe

    display_images(images, labels, batch_size, columns)

if __name__ == "__main__":
    dev_opdev = {'mixed': ['cpu'],
                 'legacy': ['cpu']}
    for dev in dev_opdev.keys():
        for op_dev in dev_opdev[dev]:
            run(dev, op_dev)
Example #1: Output Images 1
- 1
Licensed under a CC BY SA 4.0 license. The images used here are taken from https://data.caltech.edu/records/mzrjq-6wc02.
Example #2: Use ReadImageDatasetFromDir by Providing file_list¶
The following code snippet shows an alternative method of using ReadImageDatasetFromDir by providing file_list and class_list:
- file_list specifies the list of files to process.
- class_list must be provided along with file_list. It is a list of unique class labels.
- file_classes is a list of class labels, one for every image.
The output ground truth label of each image is the index of the image's class label in class_list, which is shown as the title of the output images.
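The label mapping for file_list mode can be sketched in plain Python using the same lists as the example below. This is only an illustration of the documented behavior (labels as indices into class_list), not the reader's internal code:

```python
class_list = ["cougar_body", "butterfly"]
file_classes = ["cougar_body", "cougar_body", "cougar_body",
                "butterfly", "butterfly", "butterfly"]

# Each output label is the index of the image's class in class_list.
labels = [class_list.index(c) for c in file_classes]
print(labels)  # → [0, 0, 0, 1, 1, 1]
```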
import matplotlib.pyplot as plt
from habana_frameworks.mediapipe import fn
from habana_frameworks.mediapipe.mediapipe import MediaPipe
from habana_frameworks.mediapipe.media_types import imgtype as it
from habana_frameworks.mediapipe.media_types import dtype as dt
import os

g_display_timeout = int(os.getenv("DISPLAY_TIMEOUT") or 5)

class myMediaPipe(MediaPipe):
    def __init__(self, device, queue_depth, batch_size, num_threads, op_device, dir, img_h, img_w):
        super(
            myMediaPipe,
            self).__init__(
            device,
            queue_depth,
            batch_size,
            num_threads,
            self.__class__.__name__)

        file_list = [dir + "/cougar_body/image_0002.jpg",
                     dir + "/cougar_body/image_0039.jpg",
                     dir + "/cougar_body/image_0024.jpg",
                     dir + "/butterfly/image_0015.jpg",
                     dir + "/butterfly/image_0067.jpg",
                     dir + "/butterfly/image_0090.jpg"]

        class_list = ["cougar_body", "butterfly"]

        file_classes = ["cougar_body",
                        "cougar_body",
                        "cougar_body",
                        "butterfly",
                        "butterfly",
                        "butterfly"]

        self.input = fn.ReadImageDatasetFromDir(shuffle=False,
                                                file_list=file_list,
                                                class_list=class_list,
                                                file_classes=file_classes,
                                                format="jpg",
                                                device="cpu")

        self.decode = fn.ImageDecoder(device="hpu",
                                      output_format=it.RGB_I,
                                      resize=[img_w, img_h],
                                      dtype=dt.UINT8)

    def definegraph(self):
        images, labels = self.input()
        images = self.decode(images)
        return images, labels

def display_images(images, labels, batch_size, cols):
    rows = (batch_size + 1) // cols
    plt.figure(figsize=(10, 10))
    for i in range(batch_size):
        ax = plt.subplot(rows, cols, i + 1)
        plt.imshow(images[i])
        plt.title("label:" + str(labels[i]))
        plt.axis("off")
    plt.show(block=False)
    plt.pause(g_display_timeout)
    plt.close()

def run(device, op_device):
    batch_size = 6
    queue_depth = 2
    num_threads = 1
    img_width = 200
    img_height = 200
    base_dir = os.environ['DATASET_DIR']
    dir = base_dir + "/img_data/"
    columns = 3

    # Create MediaPipe object
    pipe = myMediaPipe(device, queue_depth, batch_size,
                       num_threads, op_device, dir,
                       img_height, img_width)
    pipe.build()
    pipe.iter_init()

    try:
        images, labels = pipe.run()
    except StopIteration:
        pass

    def as_cpu(tensor):
        if callable(getattr(tensor, "as_cpu", None)):
            tensor = tensor.as_cpu()
        return tensor

    # Copy data to host from device as numpy array
    images = as_cpu(images).as_nparray()
    labels = as_cpu(labels).as_nparray()
    del pipe

    display_images(images, labels, batch_size, columns)

if __name__ == "__main__":
    dev_opdev = {'mixed': ['cpu'],
                 'legacy': ['cpu']}
    for dev in dev_opdev.keys():
        for op_dev in dev_opdev[dev]:
            run(dev, op_dev)
Example #2: Output Images 2
- 2
Licensed under a CC BY SA 4.0 license. The images used here are taken from https://data.caltech.edu/records/mzrjq-6wc02.