DatasetFolder¶
- class paddle.vision.datasets.DatasetFolder(root, loader=None, extensions=None, transform=None, is_valid_file=None) [source] ¶
A generic data loader. The data must be arranged in the following layout:
root/class_a/1.ext
root/class_a/2.ext
root/class_a/3.ext
root/class_b/123.ext
root/class_b/456.ext
root/class_b/789.ext
Parameters¶
root (str) - Path to the dataset root directory.
loader (Callable, optional) - A function that loads a sample given its path. If not set, a default image loader is used (the example below shows it returning PIL images). Default: None.
extensions (list[str]|tuple[str], optional) - Allowed file extensions. extensions and is_valid_file cannot both be set. If not set, defaults to ('.jpg', '.jpeg', '.png', '.ppm', '.bmp', '.pgm', '.tif', '.tiff', '.webp'). Default: None.
transform (Callable, optional) - Transform applied to each image. If None, no preprocessing is done. Default: None.
is_valid_file (Callable, optional) - A function that takes the path of a sample and decides whether it is a valid sample; a minimal sketch follows this list. extensions and is_valid_file cannot both be set. Default: None.
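A minimal sketch of filtering with is_valid_file, assuming a hypothetical root directory data_dir laid out as shown above (the predicate and the size check are illustrative, not part of the API):

>>> import os
>>> from paddle.vision.datasets import DatasetFolder
>>> def non_empty_jpg(path: str) -> bool:
...     # Accept a sample only if it is a non-empty .jpg file.
...     return path.endswith(".jpg") and os.path.getsize(path) > 0
>>> # data_dir is a hypothetical path; leave extensions unset, since
>>> # extensions and is_valid_file cannot both be given.
>>> # data_folder = DatasetFolder(data_dir, is_valid_file=non_empty_jpg)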
Attributes¶
classes (list[str]) - A list of all class names.
class_to_idx (dict[str, int]) - A dict mapping each class name to its class index.
samples (list[tuple[str, int]]) - A list in which each item is a (sample path, class index) tuple.
targets (list[int]) - A list of the class index of every image in the dataset.
Code Example¶
>>> import shutil
>>> import tempfile
>>> import cv2
>>> import numpy as np
>>> import paddle.vision.transforms as T
>>> from pathlib import Path
>>> from paddle.vision.datasets import DatasetFolder
>>> def make_fake_file(img_path: str):
...     if img_path.endswith((".jpg", ".png", ".jpeg")):
...         fake_img = np.random.randint(0, 256, (32, 32, 3), dtype=np.uint8)
...         cv2.imwrite(img_path, fake_img)
...     elif img_path.endswith(".txt"):
...         with open(img_path, "w") as f:
...             f.write("This is a fake file.")
>>> def make_directory(root, directory_hierarchy, file_maker=make_fake_file):
...     root = Path(root)
...     root.mkdir(parents=True, exist_ok=True)
...     for subpath in directory_hierarchy:
...         if isinstance(subpath, str):
...             filepath = root / subpath
...             file_maker(str(filepath))
...         else:
...             dirname = list(subpath.keys())[0]
...             make_directory(root / dirname, subpath[dirname])
>>> directory_hierarchy = [
...     {"class_0": [
...         "abc.jpg",
...         "def.png"]},
...     {"class_1": [
...         "ghi.jpeg",
...         "jkl.png",
...         {"mno": [
...             "pqr.jpeg",
...             "stu.jpg"]}]},
...     "this_will_be_ignored.txt",
... ]
>>> # You can replace this with any directory to explore the structure
>>> # of generated data. e.g. fake_data_dir = "./temp_dir"
>>> fake_data_dir = tempfile.mkdtemp()
>>> make_directory(fake_data_dir, directory_hierarchy)
>>> data_folder_1 = DatasetFolder(fake_data_dir)
>>> print(data_folder_1.classes)
['class_0', 'class_1']
>>> print(data_folder_1.class_to_idx)
{'class_0': 0, 'class_1': 1}
>>> print(data_folder_1.samples)
[('./temp_dir/class_0/abc.jpg', 0), ('./temp_dir/class_0/def.png', 0),
('./temp_dir/class_1/ghi.jpeg', 1), ('./temp_dir/class_1/jkl.png', 1),
('./temp_dir/class_1/mno/pqr.jpeg', 1), ('./temp_dir/class_1/mno/stu.jpg', 1)]
>>> print(data_folder_1.targets)
[0, 0, 1, 1, 1, 1]
>>> print(len(data_folder_1))
6
>>> for i in range(len(data_folder_1)):
...     img, label = data_folder_1[i]
...     # do something with img and label
...     print(type(img), img.size, label)
...     # <class 'PIL.Image.Image'> (32, 32) 0
>>> transform = T.Compose(
...     [
...         T.Resize(64),
...         T.ToTensor(),
...         T.Normalize(
...             mean=[0.5, 0.5, 0.5],
...             std=[0.5, 0.5, 0.5],
...             to_rgb=True,
...         ),
...     ]
... )
>>> data_folder_2 = DatasetFolder(
...     fake_data_dir,
...     loader=lambda x: cv2.imread(x),  # load image with OpenCV
...     extensions=(".jpg",),  # only load *.jpg files
...     transform=transform,  # apply transform to every image
... )
>>> print([img_path for img_path, label in data_folder_2.samples])
['./temp_dir/class_0/abc.jpg', './temp_dir/class_1/mno/stu.jpg']
>>> print(len(data_folder_2))
2
>>> for img, label in iter(data_folder_2):
...     # do something with img and label
...     print(type(img), img.shape, label)
...     # <class 'paddle.Tensor'> [3, 64, 64] 0
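>>> # A minimal sketch (batch size of 2 is only illustrative): the dataset
>>> # can also be wrapped in paddle.io.DataLoader for batched iteration.
>>> from paddle.io import DataLoader
>>> batched_loader = DataLoader(data_folder_2, batch_size=2, shuffle=False)
>>> for batch_imgs, batch_labels in batched_loader:
...     print(batch_imgs.shape)
...     # e.g. [2, 3, 64, 64]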
>>> shutil.rmtree(fake_data_dir)