Note
Go to the end to download the full example code.
Loading datasets with tensorclasses
In this tutorial we demonstrate how tensorclasses can be used to load and manage data efficiently and transparently inside a training pipeline. The tutorial borrows heavily from the PyTorch Quickstart tutorial, but has been modified to demonstrate the use of tensorclass. See the related tutorial that does the same with TensorDict.
import torch
import torch.nn as nn
from tensordict import MemoryMappedTensor, tensorclass
from torch.utils.data import DataLoader
from torchvision import datasets
from torchvision.transforms import ToTensor
device = "cuda" if torch.cuda.is_available() else "cpu"
print(f"Using device: {device}")
Using device: cpu
The torchvision.datasets module contains a number of convenient pre-prepared datasets. In this tutorial we use the relatively simple FashionMNIST dataset. Each image is an item of clothing; the goal is to classify the type of clothing in the image (for example, "Bag", "Sneaker", etc.).
training_data = datasets.FashionMNIST(
root="data",
train=True,
download=True,
transform=ToTensor(),
)
test_data = datasets.FashionMNIST(
root="data",
train=False,
download=True,
transform=ToTensor(),
)
100%|██████████| 26.4M/26.4M [00:01<00:00, 19.4MB/s]
100%|██████████| 29.5k/29.5k [00:00<00:00, 326kB/s]
100%|██████████| 4.42M/4.42M [00:00<00:00, 6.08MB/s]
100%|██████████| 5.15k/5.15k [00:00<00:00, 59.8MB/s]
Tensorclasses are dataclasses that, like TensorDict, expose dedicated tensor methods over their contents. They are a good choice when the structure of the data you want to store is fixed and predictable.

As well as specifying the contents, we can encapsulate related logic as custom methods when defining the class. Here we write a from_dataset classmethod that takes a dataset as input and creates a tensorclass containing the dataset's data. We create memory-mapped tensors to hold the data, which lets us efficiently load batches of transformed data from disk rather than repeatedly loading and transforming individual images.
@tensorclass
class FashionMNISTData:
images: torch.Tensor
targets: torch.Tensor
@classmethod
def from_dataset(cls, dataset, device=None):
data = cls(
images=MemoryMappedTensor.empty(
(len(dataset), *dataset[0][0].squeeze().shape), dtype=torch.float32
),
targets=MemoryMappedTensor.empty((len(dataset),), dtype=torch.int64),
batch_size=[len(dataset)],
device=device,
)
for i, (image, target) in enumerate(dataset):
data[i] = cls(images=image, targets=torch.tensor(target), batch_size=[])
return data
We will create two tensorclasses, one each for the training and test data. Note that we incur some overhead here, since we loop over the entire dataset, transforming and saving it to disk, as in the construction step below.
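The two instances used in the rest of the tutorial, training_data_tc and test_data_tc, can be built with the from_dataset classmethod defined above:

training_data_tc = FashionMNISTData.from_dataset(training_data, device=device)
test_data_tc = FashionMNISTData.from_dataset(test_data, device=device)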
DataLoaders
We'll create DataLoaders from the torchvision-provided Datasets, as well as from our memory-mapped tensorclasses.

Since TensorDict implements __len__ and __getitem__ (as well as __getitems__), we can use it like a map-style Dataset and create a DataLoader directly from it. Note that because TensorDict already handles batched indices, no collation is needed, so we pass the identity function as collate_fn.
batch_size = 64
train_dataloader = DataLoader(training_data, batch_size=batch_size) # noqa: TOR401
test_dataloader = DataLoader(test_data, batch_size=batch_size) # noqa: TOR401
train_dataloader_tc = DataLoader( # noqa: TOR401
training_data_tc, batch_size=batch_size, collate_fn=lambda x: x
)
test_dataloader_tc = DataLoader( # noqa: TOR401
test_data_tc, batch_size=batch_size, collate_fn=lambda x: x
)
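As a quick sanity check (a minimal sketch, assuming training_data_tc was built as shown earlier), indexing the tensorclass with a slice already returns a batched FashionMNISTData, which is why the identity collate_fn is sufficient:

# Indexing a tensorclass with a slice returns another FashionMNISTData whose
# batch_size reflects the selected indices, so no per-sample collation is needed.
sample_batch = training_data_tc[:4]
print(sample_batch.images.shape)   # expected: torch.Size([4, 28, 28])
print(sample_batch.targets.shape)  # expected: torch.Size([4])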
Model
We use the same model as in the Quickstart tutorial.
class Net(nn.Module):
def __init__(self):
super().__init__()
self.flatten = nn.Flatten()
self.linear_relu_stack = nn.Sequential(
nn.Linear(28 * 28, 512),
nn.ReLU(),
nn.Linear(512, 512),
nn.ReLU(),
nn.Linear(512, 10),
)
def forward(self, x):
x = self.flatten(x)
logits = self.linear_relu_stack(x)
return logits
model = Net().to(device)
model_tc = Net().to(device)
model, model_tc
(Net(
(flatten): Flatten(start_dim=1, end_dim=-1)
(linear_relu_stack): Sequential(
(0): Linear(in_features=784, out_features=512, bias=True)
(1): ReLU()
(2): Linear(in_features=512, out_features=512, bias=True)
(3): ReLU()
(4): Linear(in_features=512, out_features=10, bias=True)
)
), Net(
(flatten): Flatten(start_dim=1, end_dim=-1)
(linear_relu_stack): Sequential(
(0): Linear(in_features=784, out_features=512, bias=True)
(1): ReLU()
(2): Linear(in_features=512, out_features=512, bias=True)
(3): ReLU()
(4): Linear(in_features=512, out_features=10, bias=True)
)
))
Optimizing Parameters
We will optimize the model's parameters using stochastic gradient descent and cross-entropy loss.
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
optimizer_tc = torch.optim.SGD(model_tc.parameters(), lr=1e-3)
def train(dataloader, model, loss_fn, optimizer):
size = len(dataloader.dataset)
model.train()
for batch, (X, y) in enumerate(dataloader):
X, y = X.to(device), y.to(device)
pred = model(X)
loss = loss_fn(pred, y)
optimizer.zero_grad()
loss.backward()
optimizer.step()
if batch % 100 == 0:
loss, current = loss.item(), batch * len(X)
print(f"loss: {loss:>7f} [{current:>5d}/{size:>5d}]")
The training loop for our tensorclass-based DataLoader is very similar; we just adjust how the data is unpacked, using the more explicit attribute-based retrieval offered by the tensorclass. The .contiguous() method loads the data stored in the memmap tensor into memory.
def train_tc(dataloader, model, loss_fn, optimizer):
size = len(dataloader.dataset)
model.train()
for batch, data in enumerate(dataloader):
X, y = data.images.contiguous(), data.targets.contiguous()
pred = model(X)
loss = loss_fn(pred, y)
optimizer.zero_grad()
loss.backward()
optimizer.step()
if batch % 100 == 0:
loss, current = loss.item(), batch * len(X)
print(f"loss: {loss:>7f} [{current:>5d}/{size:>5d}]")
def test(dataloader, model, loss_fn):
size = len(dataloader.dataset)
num_batches = len(dataloader)
model.eval()
test_loss, correct = 0, 0
with torch.no_grad():
for X, y in dataloader:
X, y = X.to(device), y.to(device)
pred = model(X)
test_loss += loss_fn(pred, y).item()
correct += (pred.argmax(1) == y).type(torch.float).sum().item()
test_loss /= num_batches
correct /= size
print(
f"Test Error: \n Accuracy: {(100 * correct):>0.1f}%, Avg loss: {test_loss:>8f} \n"
)
def test_tc(dataloader, model, loss_fn):
size = len(dataloader.dataset)
num_batches = len(dataloader)
model.eval()
test_loss, correct = 0, 0
with torch.no_grad():
for batch in dataloader:
X, y = batch.images.contiguous(), batch.targets.contiguous()
pred = model(X)
test_loss += loss_fn(pred, y).item()
correct += (pred.argmax(1) == y).type(torch.float).sum().item()
test_loss /= num_batches
correct /= size
print(
f"Test Error: \n Accuracy: {(100 * correct):>0.1f}%, Avg loss: {test_loss:>8f} \n"
)
for d in train_dataloader_tc:
print(d)
break
import time
t0 = time.time()
epochs = 5
for t in range(epochs):
print(f"Epoch {t + 1}\n-------------------------")
train_tc(train_dataloader_tc, model_tc, loss_fn, optimizer_tc)
test_tc(test_dataloader_tc, model_tc, loss_fn)
print(f"Tensorclass training done! time: {time.time() - t0: 4.4f} s")
t0 = time.time()
epochs = 5
for t in range(epochs):
print(f"Epoch {t + 1}\n-------------------------")
train(train_dataloader, model, loss_fn, optimizer)
test(test_dataloader, model, loss_fn)
print(f"Training done! time: {time.time() - t0: 4.4f} s")
FashionMNISTData(
images=Tensor(shape=torch.Size([64, 28, 28]), device=cpu, dtype=torch.float32, is_shared=False),
targets=Tensor(shape=torch.Size([64]), device=cpu, dtype=torch.int64, is_shared=False),
batch_size=torch.Size([64]),
device=cpu,
is_shared=False)
Epoch 1
-------------------------
loss: 2.319664 [ 0/60000]
loss: 2.303494 [ 6400/60000]
loss: 2.282967 [12800/60000]
loss: 2.270802 [19200/60000]
loss: 2.254053 [25600/60000]
loss: 2.228572 [32000/60000]
loss: 2.237088 [38400/60000]
loss: 2.206554 [44800/60000]
loss: 2.196055 [51200/60000]
loss: 2.173318 [57600/60000]
Test Error:
Accuracy: 44.4%, Avg loss: 2.165061
Epoch 2
-------------------------
loss: 2.176356 [ 0/60000]
loss: 2.166144 [ 6400/60000]
loss: 2.111832 [12800/60000]
loss: 2.126604 [19200/60000]
loss: 2.076978 [25600/60000]
loss: 2.019964 [32000/60000]
loss: 2.044961 [38400/60000]
loss: 1.974381 [44800/60000]
loss: 1.967282 [51200/60000]
loss: 1.902837 [57600/60000]
Test Error:
Accuracy: 57.9%, Avg loss: 1.903539
Epoch 3
-------------------------
loss: 1.934818 [ 0/60000]
loss: 1.904432 [ 6400/60000]
loss: 1.797030 [12800/60000]
loss: 1.834745 [19200/60000]
loss: 1.721624 [25600/60000]
loss: 1.675833 [32000/60000]
loss: 1.687455 [38400/60000]
loss: 1.597601 [44800/60000]
loss: 1.613010 [51200/60000]
loss: 1.502748 [57600/60000]
Test Error:
Accuracy: 59.6%, Avg loss: 1.531052
Epoch 4
-------------------------
loss: 1.599049 [ 0/60000]
loss: 1.560516 [ 6400/60000]
loss: 1.420832 [12800/60000]
loss: 1.487164 [19200/60000]
loss: 1.363405 [25600/60000]
loss: 1.365580 [32000/60000]
loss: 1.366614 [38400/60000]
loss: 1.299678 [44800/60000]
loss: 1.330469 [51200/60000]
loss: 1.224487 [57600/60000]
Test Error:
Accuracy: 61.7%, Avg loss: 1.259897
Epoch 5
-------------------------
loss: 1.340469 [ 0/60000]
loss: 1.316975 [ 6400/60000]
loss: 1.159445 [12800/60000]
loss: 1.261730 [19200/60000]
loss: 1.131677 [25600/60000]
loss: 1.167898 [32000/60000]
loss: 1.174967 [38400/60000]
loss: 1.118451 [44800/60000]
loss: 1.154713 [51200/60000]
loss: 1.065566 [57600/60000]
Test Error:
Accuracy: 63.9%, Avg loss: 1.093563
Tensorclass training done! time: 8.5377 s
Epoch 1
-------------------------
loss: 2.299644 [ 0/60000]
loss: 2.293140 [ 6400/60000]
loss: 2.271977 [12800/60000]
loss: 2.273217 [19200/60000]
loss: 2.250980 [25600/60000]
loss: 2.225316 [32000/60000]
loss: 2.230843 [38400/60000]
loss: 2.195421 [44800/60000]
loss: 2.187299 [51200/60000]
loss: 2.160407 [57600/60000]
Test Error:
Accuracy: 44.1%, Avg loss: 2.152778
Epoch 2
-------------------------
loss: 2.156122 [ 0/60000]
loss: 2.149637 [ 6400/60000]
loss: 2.088117 [12800/60000]
loss: 2.110354 [19200/60000]
loss: 2.059621 [25600/60000]
loss: 2.002847 [32000/60000]
loss: 2.023556 [38400/60000]
loss: 1.943443 [44800/60000]
loss: 1.946714 [51200/60000]
loss: 1.874386 [57600/60000]
Test Error:
Accuracy: 54.3%, Avg loss: 1.872510
Epoch 3
-------------------------
loss: 1.902230 [ 0/60000]
loss: 1.873018 [ 6400/60000]
loss: 1.749329 [12800/60000]
loss: 1.796782 [19200/60000]
loss: 1.693456 [25600/60000]
loss: 1.648093 [32000/60000]
loss: 1.665045 [38400/60000]
loss: 1.566518 [44800/60000]
loss: 1.593781 [51200/60000]
loss: 1.490483 [57600/60000]
Test Error:
Accuracy: 60.5%, Avg loss: 1.506298
Epoch 4
-------------------------
loss: 1.570804 [ 0/60000]
loss: 1.537479 [ 6400/60000]
loss: 1.380056 [12800/60000]
loss: 1.460477 [19200/60000]
loss: 1.355020 [25600/60000]
loss: 1.349743 [32000/60000]
loss: 1.361715 [38400/60000]
loss: 1.282459 [44800/60000]
loss: 1.317743 [51200/60000]
loss: 1.225052 [57600/60000]
Test Error:
Accuracy: 63.6%, Avg loss: 1.246684
Epoch 5
-------------------------
loss: 1.317353 [ 0/60000]
loss: 1.304559 [ 6400/60000]
loss: 1.129192 [12800/60000]
loss: 1.247038 [19200/60000]
loss: 1.133444 [25600/60000]
loss: 1.153610 [32000/60000]
loss: 1.174721 [38400/60000]
loss: 1.104254 [44800/60000]
loss: 1.141913 [51200/60000]
loss: 1.068175 [57600/60000]
Test Error:
Accuracy: 64.9%, Avg loss: 1.084284
Training done! time: 35.1946 s
Total running time of the script: (1 minute 2.380 seconds)