Note
Go to the end to download the full example code.
Training a Classifier
Created On: Mar 24, 2017 | Last Updated: Dec 20, 2024 | Last Verified: Not Verified
This is it. You have seen how to define neural networks, compute loss and make updates to the weights of the network.
Now you might be thinking,
What about data?
Generally, when you have to deal with image, text, audio or video data, you can use standard Python packages that load data into a NumPy array. Then you can convert this array into a torch.*Tensor (a short sketch of this pipeline follows the list below).
For images, packages such as Pillow and OpenCV are useful.
For audio, packages such as scipy and librosa are useful.
For text, either raw Python or Cython based loading, or NLTK and SpaCy, are useful.
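As a concrete illustration of that pipeline, here is a minimal sketch, assuming Pillow is installed and using a hypothetical file name example.jpg:

import numpy as np
import torch
from PIL import Image

# Load an image into a NumPy array of shape (H, W, C), dtype uint8.
img = np.asarray(Image.open('example.jpg'))  # 'example.jpg' is a hypothetical path

# Convert to a float tensor, reorder to channels-first (C, H, W), and scale to [0, 1].
tensor = torch.from_numpy(img).permute(2, 0, 1).float() / 255.0
print(tensor.shape)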
Specifically for vision, we have created a package called torchvision, which has data loaders for common datasets such as ImageNet, CIFAR10, MNIST, etc. and data transformers for images, viz., torchvision.datasets and torch.utils.data.DataLoader.
This provides a huge convenience and avoids writing boilerplate code.
For this tutorial, we will use the CIFAR10 dataset. It has the classes: 'airplane', 'automobile', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck'. The images in CIFAR-10 are of size 3x32x32, i.e. 3-channel color images of 32x32 pixels in size.

[image: cifar10]
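As a quick sanity check of those dimensions, a minimal sketch that inspects a single sample (assuming the dataset is downloaded to ./data, the same root used below) could look like this:

import torchvision
import torchvision.transforms as transforms

# ToTensor() yields a (C, H, W) float tensor with values in [0, 1].
dataset = torchvision.datasets.CIFAR10(root='./data', train=True,
                                       download=True,
                                       transform=transforms.ToTensor())
image, label = dataset[0]
print(image.shape)  # torch.Size([3, 32, 32]): 3 channels, 32x32 pixels
print(label)        # an integer class index in [0, 9]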
Training an image classifier
We will do the following steps in order:
1. Load and normalize the CIFAR10 training and test datasets using torchvision
2. Define a Convolutional Neural Network
3. Define a loss function
4. Train the network on the training data
5. Test the network on the test data
1. Load and normalize CIFAR10
Using torchvision, it's extremely easy to load CIFAR10.
import torch
import torchvision
import torchvision.transforms as transforms
The output of torchvision datasets are PILImage images of range [0, 1]. We transform them to Tensors of normalized range [-1, 1].
Note
If running on Windows and you get a BrokenPipeError, try setting the num_workers of torch.utils.data.DataLoader() to 0.
transform = transforms.Compose(
    [transforms.ToTensor(),
     transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])

batch_size = 4

trainset = torchvision.datasets.CIFAR10(root='./data', train=True,
                                        download=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=batch_size,
                                          shuffle=True, num_workers=2)

testset = torchvision.datasets.CIFAR10(root='./data', train=False,
                                       download=True, transform=transform)
testloader = torch.utils.data.DataLoader(testset, batch_size=batch_size,
                                         shuffle=False, num_workers=2)

classes = ('plane', 'car', 'bird', 'cat',
           'deer', 'dog', 'frog', 'horse', 'ship', 'truck')
100%|██████████| 170M/170M [00:06<00:00, 27.0MB/s]
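To see what the Normalize transform does numerically: with mean 0.5 and std 0.5 per channel, each pixel value x is mapped to (x - 0.5) / 0.5, so the range [0, 1] becomes [-1, 1]. A minimal check sketch:

# Pull one normalized batch from the loader; values should now lie in [-1, 1].
images, _ = next(iter(trainloader))
print(images.min().item(), images.max().item())  # approximately -1.0 and 1.0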
Let us show some of the training images, for fun.
import matplotlib.pyplot as plt
import numpy as np

# functions to show an image

def imshow(img):
    img = img / 2 + 0.5     # unnormalize
    npimg = img.numpy()
    plt.imshow(np.transpose(npimg, (1, 2, 0)))
    plt.show()

# get some random training images
dataiter = iter(trainloader)
images, labels = next(dataiter)

# show images
imshow(torchvision.utils.make_grid(images))
# print labels
print(' '.join(f'{classes[labels[j]]:5s}' for j in range(batch_size)))

horse cat plane cat
2. Define a Convolutional Neural Network
Copy the neural network from the Neural Networks section before and modify it to take 3-channel images (instead of 1-channel images as it was defined).
import torch.nn as nn
import torch.nn.functional as F


class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 6, 5)
        self.pool = nn.MaxPool2d(2, 2)
        self.conv2 = nn.Conv2d(6, 16, 5)
        self.fc1 = nn.Linear(16 * 5 * 5, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)

    def forward(self, x):
        x = self.pool(F.relu(self.conv1(x)))
        x = self.pool(F.relu(self.conv2(x)))
        x = torch.flatten(x, 1) # flatten all dimensions except batch
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x


net = Net()
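As a quick shape check, a sketch to explain the 16 * 5 * 5 input size of fc1: each 5x5 convolution shrinks the spatial size (32 -> 28, then 14 -> 10), and each pooling halves it (28 -> 14, 10 -> 5), leaving 16 feature maps of 5x5. A dummy forward pass confirms the output shape:

# Pass a fake 3x32x32 image through the network to confirm the shapes.
dummy = torch.randn(1, 3, 32, 32)
print(net(dummy).shape)  # torch.Size([1, 10]): one score per class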
3. Define a Loss function and optimizer
Let's use a Classification Cross-Entropy loss and SGD with momentum.
import torch.optim as optim
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)
4. Train the network
This is when things start to get interesting. We simply have to loop over our data iterator, feed the inputs to the network, and optimize.
for epoch in range(2):  # loop over the dataset multiple times

    running_loss = 0.0
    for i, data in enumerate(trainloader, 0):
        # get the inputs; data is a list of [inputs, labels]
        inputs, labels = data

        # zero the parameter gradients
        optimizer.zero_grad()

        # forward + backward + optimize
        outputs = net(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()

        # print statistics
        running_loss += loss.item()
        if i % 2000 == 1999:    # print every 2000 mini-batches
            print(f'[{epoch + 1}, {i + 1:5d}] loss: {running_loss / 2000:.3f}')
            running_loss = 0.0

print('Finished Training')
[1, 2000] loss: 2.196
[1, 4000] loss: 1.852
[1, 6000] loss: 1.662
[1, 8000] loss: 1.581
[1, 10000] loss: 1.521
[1, 12000] loss: 1.477
[2, 2000] loss: 1.391
[2, 4000] loss: 1.349
[2, 6000] loss: 1.327
[2, 8000] loss: 1.287
[2, 10000] loss: 1.243
[2, 12000] loss: 1.275
Finished Training
Let's quickly save our trained model:
PATH = './cifar_net.pth'
torch.save(net.state_dict(), PATH)
See here for more details on saving PyTorch models.
5. Test the network on the test data
We have trained the network for 2 passes over the training dataset. But we need to check if the network has learnt anything at all.
We will check this by predicting the class label that the neural network outputs, and checking it against the ground-truth. If the prediction is correct, we add the sample to the list of correct predictions.
Okay, first step. Let us display an image from the test set to get familiar.
dataiter = iter(testloader)
images, labels = next(dataiter)
# print images
imshow(torchvision.utils.make_grid(images))
print('GroundTruth: ', ' '.join(f'{classes[labels[j]]:5s}' for j in range(4)))

GroundTruth: cat ship ship plane
Next, let's load back in our saved model (note: saving and re-loading the model wasn't necessary here, we only did it to illustrate how to do so):
net = Net()
net.load_state_dict(torch.load(PATH, weights_only=True))
<All keys matched successfully>
Okay, now let us see what the neural network thinks these examples above are:
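outputs = net(images)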
The outputs are energies for the 10 classes. The higher the energy for a class, the more the network thinks that the image is of the particular class. So, let's get the index of the highest energy:
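_, predicted = torch.max(outputs, 1)

print('Predicted: ', ' '.join(f'{classes[predicted[j]]:5s}'
                              for j in range(4)))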
Predicted: cat ship ship ship
The results seem pretty good.
Let us look at how the network performs on the whole dataset.
correct = 0
total = 0
# since we're not training, we don't need to calculate the gradients for our outputs
with torch.no_grad():
    for data in testloader:
        images, labels = data
        # calculate outputs by running images through the network
        outputs = net(images)
        # the class with the highest energy is what we choose as prediction
        _, predicted = torch.max(outputs, 1)
        total += labels.size(0)
        correct += (predicted == labels).sum().item()

print(f'Accuracy of the network on the 10000 test images: {100 * correct // total} %')
Accuracy of the network on the 10000 test images: 53 %
That looks way better than chance, which is 10% accuracy (randomly picking a class out of 10 classes). Seems like the network learnt something.
Hmm, what are the classes that performed well, and the classes that did not perform well:
# prepare to count predictions for each class
correct_pred = {classname: 0 for classname in classes}
total_pred = {classname: 0 for classname in classes}

# again no gradients needed
with torch.no_grad():
    for data in testloader:
        images, labels = data
        outputs = net(images)
        _, predictions = torch.max(outputs, 1)
        # collect the correct predictions for each class
        for label, prediction in zip(labels, predictions):
            if label == prediction:
                correct_pred[classes[label]] += 1
            total_pred[classes[label]] += 1

# print accuracy for each class
for classname, correct_count in correct_pred.items():
    accuracy = 100 * float(correct_count) / total_pred[classname]
    print(f'Accuracy for class: {classname:5s} is {accuracy:.1f} %')
Accuracy for class: plane is 37.2 %
Accuracy for class: car is 41.3 %
Accuracy for class: bird is 58.9 %
Accuracy for class: cat is 32.2 %
Accuracy for class: deer is 25.7 %
Accuracy for class: dog is 62.3 %
Accuracy for class: frog is 62.2 %
Accuracy for class: horse is 67.3 %
Accuracy for class: ship is 74.0 %
Accuracy for class: truck is 73.8 %
Okay, so what next?
How do we run these neural networks on the GPU?
Training on GPU
Just like how you transfer a Tensor onto the GPU, you transfer the neural net onto the GPU.
Let's first define our device as the first visible cuda device if we have CUDA available:
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
# Assuming that we are on a CUDA machine, this should print a CUDA device:
print(device)
cuda:0
The rest of this section assumes that device is a CUDA device.
Then these methods will recursively go over all modules and convert their parameters and buffers to CUDA tensors:
net.to(device)
Remember that you will have to send the inputs and targets at every step to the GPU too:
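inputs, labels = data[0].to(device), data[1].to(device)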
Why don't I notice MASSIVE speedup compared to CPU? Because your network is really small.
Exercise: Try increasing the width of your network (argument 2 of the first nn.Conv2d, and argument 1 of the second nn.Conv2d – they need to be the same number), and see what kind of speedup you get.
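For instance, a minimal sketch of one such widened network, assuming an arbitrary width of 32 (the name WiderNet is only for illustration):

class WiderNet(nn.Module):
    # Same architecture as Net, but with 32 intermediate channels instead of 6.
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 32, 5)   # argument 2 widened: 6 -> 32
        self.pool = nn.MaxPool2d(2, 2)
        self.conv2 = nn.Conv2d(32, 16, 5)  # argument 1 must match: 6 -> 32
        self.fc1 = nn.Linear(16 * 5 * 5, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)

    def forward(self, x):
        x = self.pool(F.relu(self.conv1(x)))
        x = self.pool(F.relu(self.conv2(x)))
        x = torch.flatten(x, 1)  # flatten all dimensions except batch
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        return self.fc3(x)

net = WiderNet().to(device)  # remember to move the wider model to the device too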
Goals achieved:
Understanding PyTorch's Tensor library and neural networks at a high level.
Train a small neural network to classify images.
Training on multiple GPUs
If you want to see even more MASSIVE speedup using all of your GPUs, please check out Optional: Data Parallelism.
Where do I go next?
del dataiter
Total running time of the script: (1 minutes 28.924 seconds)