
PyTorch: nn

Created On: Dec 03, 2020 | Last Updated: Jun 14, 2022 | Last Verified: Nov 05, 2024

A third order polynomial, trained to predict \(y=\sin(x)\) from \(-\pi\) to \(\pi\) by minimizing squared Euclidean distance.

This implementation uses PyTorch's nn package to build the network. PyTorch autograd makes it easy to define computational graphs and take gradients, but raw autograd can be a bit too low-level for defining complex neural networks; this is where the nn package can help. The nn package defines a set of Modules, which you can think of as a neural network layer that produces output from input and may hold some trainable weights.
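
As a minimal sketch (separate from the tutorial's full script below), a single Module such as torch.nn.Linear maps an input Tensor to an output Tensor and holds its own trainable parameters:

import torch

# A single Linear Module: it maps 3 input features to 1 output feature and
# holds a (1, 3) weight Tensor and a (1,) bias Tensor as trainable parameters.
layer = torch.nn.Linear(3, 1)
out = layer(torch.randn(5, 3))
# out.shape is torch.Size([5, 1]); list(layer.parameters()) yields the weight
# and bias Tensors.

The printed losses and the fitted polynomial below are the output of the full script that follows.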

99 875.3278198242188
199 582.22216796875
299 388.28216552734375
399 259.9512939453125
499 175.03016662597656
599 118.83214569091797
699 81.64002990722656
799 57.02470779418945
899 40.732177734375
999 29.947608947753906
1099 22.808589935302734
1199 18.082326889038086
1299 14.953231811523438
1399 12.881332397460938
1499 11.50935173034668
1599 10.600706100463867
1699 9.998918533325195
1799 9.600260734558105
1899 9.336179733276367
1999 9.161201477050781
Result: y = 0.0037914831191301346 + 0.8390498161315918 x + -0.0006540938629768789 x^2 + -0.09081399440765381 x^3

import torch
import math


# Create Tensors to hold input and outputs.
x = torch.linspace(-math.pi, math.pi, 2000)
y = torch.sin(x)

# For this example, the output y is a linear function of (x, x^2, x^3), so
# we can consider it as a linear layer neural network. Let's prepare the
# tensor (x, x^2, x^3).
p = torch.tensor([1, 2, 3])
xx = x.unsqueeze(-1).pow(p)

# In the above code, x.unsqueeze(-1) has shape (2000, 1), and p has shape
# (3,), for this case, broadcasting semantics will apply to obtain a tensor
# of shape (2000, 3)
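# Concretely, xx[:, 0] == x, xx[:, 1] == x ** 2 and xx[:, 2] == x ** 3.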

# Use the nn package to define our model as a sequence of layers. nn.Sequential
# is a Module which contains other Modules, and applies them in sequence to
# produce its output. The Linear Module computes output from input using a
# linear function, and holds internal Tensors for its weight and bias.
# The Flatten layer flattens the output of the linear layer to a 1D tensor,
# to match the shape of `y`.
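# Flatten(0, 1) collapses dimensions 0 and 1, turning the (2000, 1) output of
# the Linear layer into a tensor of shape (2000,).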
model = torch.nn.Sequential(
    torch.nn.Linear(3, 1),
    torch.nn.Flatten(0, 1)
)

# The nn package also contains definitions of popular loss functions; in this
# case we will use Mean Squared Error (MSE) as our loss function.
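# With reduction='sum' the loss is the sum of squared errors over all 2000
# samples rather than their mean; e.g. predictions [1., 2.] against targets
# [0., 0.] give a loss of 1^2 + 2^2 = 5.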
loss_fn = torch.nn.MSELoss(reduction='sum')

learning_rate = 1e-6
for t in range(2000):

    # Forward pass: compute predicted y by passing x to the model. Module objects
    # override the __call__ operator so you can call them like functions. When
    # doing so you pass a Tensor of input data to the Module and it produces
    # a Tensor of output data.
    y_pred = model(xx)

    # Compute and print loss. We pass Tensors containing the predicted and true
    # values of y, and the loss function returns a Tensor containing the
    # loss.
    loss = loss_fn(y_pred, y)
    if t % 100 == 99:
        print(t, loss.item())

    # Zero the gradients before running the backward pass.
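    # Gradients accumulate in each parameter's .grad attribute across calls to
    # backward(), so they must be cleared at every iteration.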
    model.zero_grad()

    # Backward pass: compute gradient of the loss with respect to all the learnable
    # parameters of the model. Internally, the parameters of each Module are stored
    # in Tensors with requires_grad=True, so this call will compute gradients for
    # all learnable parameters in the model.
    loss.backward()

    # Update the weights using gradient descent. Each parameter is a Tensor, so
    # we can access its gradients like we did before.
    with torch.no_grad():
        for param in model.parameters():
            param -= learning_rate * param.grad
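
# The same update could instead be written with the optim package: construct
# optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate) once, then
# call optimizer.zero_grad(), loss.backward() and optimizer.step() in the loop.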

# You can access the first layer of `model` like accessing the first item of a list
linear_layer = model[0]

# For a linear layer, its parameters are stored as `weight` and `bias`.
print(f'Result: y = {linear_layer.bias.item()} + {linear_layer.weight[:, 0].item()} x + {linear_layer.weight[:, 1].item()} x^2 + {linear_layer.weight[:, 2].item()} x^3')

Total running time of the script: (0 minutes 0.554 seconds)