Forward-mode Automatic Differentiation (Beta)
Created On: Dec 07, 2021 | Last Updated: Apr 18, 2023 | Last Verified: Nov 05, 2024
This tutorial demonstrates how to use forward-mode AD to compute directional derivatives (or equivalently, Jacobian-vector products).
The tutorial below uses some APIs only available in versions >= 1.11 (or nightly builds).
Also note that forward-mode AD is currently in beta. The API is subject to change and operator coverage is still incomplete.
Basic Usage
Unlike reverse-mode AD, forward-mode AD computes gradients eagerly alongside the forward pass. We can compute a directional derivative by performing the forward pass as before, except that we first associate our input with another tensor representing the direction of the directional derivative (or equivalently, the v in a Jacobian-vector product). When an input, which we call the "primal", is associated with a "direction" tensor, which we call the "tangent", the resulting new tensor object is called a "dual tensor" for its connection to dual numbers[0].
As the forward pass is performed, if any input tensors are dual tensors, extra computation is performed to propagate this "sensitivity" of the function.
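A quick refresher on the dual-number connection (this aside is not part of the original tutorial): a dual number has the form a + b·ε with ε² = 0, so evaluating a function on it gives f(a + b·ε) = f(a) + f′(a)·b·ε. Carrying the tangent b alongside the primal a through the computation therefore produces the directional derivative f′(a)·b in the ε coefficient; for a multivariate function this coefficient is exactly the Jacobian-vector product J(x)·v.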
import torch
import torch.autograd.forward_ad as fwAD
primal = torch.randn(10, 10)
tangent = torch.randn(10, 10)
def fn(x, y):
    return x ** 2 + y ** 2
# All forward AD computation must be performed in the context of
# a ``dual_level`` context. All dual tensors created in such a context
# will have their tangents destroyed upon exit. This is to ensure that
# if the output or intermediate results of this computation are reused
# in a future forward AD computation, their tangents (which are associated
# with this computation) won't be confused with tangents from the later
# computation.
with fwAD.dual_level():
    # To create a dual tensor we associate a tensor, which we call the
    # primal, with another tensor of the same size, which we call the tangent.
    # If the layout of the tangent is different from that of the primal,
    # the values of the tangent are copied into a new tensor with the same
    # metadata as the primal. Otherwise, the tangent itself is used as-is.
    #
    # It is also important to note that the dual tensor created by
    # ``make_dual`` is a view of the primal.
    dual_input = fwAD.make_dual(primal, tangent)
    assert fwAD.unpack_dual(dual_input).tangent is tangent

    # To demonstrate the case where the copy of the tangent happens,
    # we pass in a tangent with a layout different from that of the primal
    dual_input_alt = fwAD.make_dual(primal, tangent.T)
    assert fwAD.unpack_dual(dual_input_alt).tangent is not tangent

    # Tensors that do not have an associated tangent are automatically
    # considered to have a zero-filled tangent of the same shape.
    plain_tensor = torch.randn(10, 10)
    dual_output = fn(dual_input, plain_tensor)

    # Unpacking the dual returns a ``namedtuple`` with ``primal`` and ``tangent``
    # as attributes
    jvp = fwAD.unpack_dual(dual_output).tangent

# Outside of the ``dual_level`` context, the tangent associated with the
# output has been destroyed.
assert fwAD.unpack_dual(dual_output).tangent is None
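For this particular ``fn`` the result can also be checked by hand: since ``plain_tensor`` carries an implicit zero tangent, the tangent of the output is simply ``2 * primal * tangent``. A minimal sanity check (not part of the original tutorial; it assumes the variables from the block above are still in scope):

# d(x**2 + y**2) = 2*x*dx + 2*y*dy; here dy is the implicit zero tangent,
# so the JVP reduces to 2 * primal * tangent.
expected_jvp = 2 * primal * tangent
assert torch.allclose(jvp, expected_jvp)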
Usage with Modules
To use nn.Module with forward AD, replace the parameters of your model with dual tensors before performing the forward pass. At the time of writing, it is not possible to create dual tensor `nn.Parameter`s. As a workaround, one must register the dual tensors as non-parameter attributes of the module.
import torch.nn as nn
model = nn.Linear(5, 5)
input = torch.randn(16, 5)
params = {name: p for name, p in model.named_parameters()}
tangents = {name: torch.rand_like(p) for name, p in params.items()}
with fwAD.dual_level():
    for name, p in params.items():
        delattr(model, name)
        setattr(model, name, fwAD.make_dual(p, tangents[name]))

    out = model(input)
    jvp = fwAD.unpack_dual(out).tangent
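Because ``model`` is a single linear layer, this parameter JVP has a simple closed form: perturbing the weight by ``dW`` and the bias by ``db`` changes the output by ``input @ dW.T + db``. A small sanity check (not part of the original tutorial; it reuses the variables defined above):

# For out = input @ W.T + b, the parameter JVP is input @ dW.T + db.
expected = input @ tangents["weight"].T + tangents["bias"]
assert torch.allclose(jvp, expected)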
Using the functional Module API (beta)
Another way to use nn.Module with forward AD is to utilize the functional Module API (also known as the stateless Module API).
from torch.func import functional_call
# We need a fresh module because the functional call requires the
# model to have parameters registered.
model = nn.Linear(5, 5)
dual_params = {}
with fwAD.dual_level():
    for name, p in params.items():
        # Using the same ``tangents`` from the above section
        dual_params[name] = fwAD.make_dual(p, tangents[name])
    out = functional_call(model, dual_params, input)
    jvp2 = fwAD.unpack_dual(out).tangent

# Check our results
assert torch.allclose(jvp, jvp2)
Custom autograd Function
Custom Functions also support forward-mode AD. To create a custom Function that supports forward-mode AD, register the jvp() static method. It is possible, but not mandatory, for custom Functions to support both forward and backward AD. See the documentation for more information.
class Fn(torch.autograd.Function):
    @staticmethod
    def forward(ctx, foo):
        result = torch.exp(foo)
        # Tensors stored in ``ctx`` can be used in the subsequent forward grad
        # computation.
        ctx.result = result
        return result

    @staticmethod
    def jvp(ctx, gI):
        gO = gI * ctx.result
        # If the tensor stored in ``ctx`` will not also be used in the backward pass,
        # one can manually free it using ``del``
        del ctx.result
        return gO
fn = Fn.apply
primal = torch.randn(10, 10, dtype=torch.double, requires_grad=True)
tangent = torch.randn(10, 10)
with fwAD.dual_level():
    dual_input = fwAD.make_dual(primal, tangent)
    dual_output = fn(dual_input)
    jvp = fwAD.unpack_dual(dual_output).tangent
# It is important to use ``autograd.gradcheck`` to verify that your
# custom autograd Function computes the gradients correctly. By default,
# ``gradcheck`` only checks the backward-mode (reverse-mode) AD gradients. Specify
# ``check_forward_ad=True`` to also check forward grads. If you did not
# implement the backward formula for your function, you can also tell ``gradcheck``
# to skip the tests that require backward-mode AD by specifying
# ``check_backward_ad=False``, ``check_undefined_grad=False``, and
# ``check_batched_grad=False``.
torch.autograd.gradcheck(Fn.apply, (primal,), check_forward_ad=True,
                         check_backward_ad=False, check_undefined_grad=False,
                         check_batched_grad=False)
True
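As noted above, a single custom Function can also implement both modes by defining both ``backward`` and ``jvp``. A minimal sketch (the ``ExpBoth`` class below is a hypothetical illustration, not part of the original tutorial):

# Hypothetical custom Function supporting both reverse- and forward-mode AD.
class ExpBoth(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        result = torch.exp(x)
        # Saved for the reverse-mode formula below.
        ctx.save_for_backward(result)
        # Stored as an attribute for the forward-mode (jvp) formula.
        ctx.result = result
        return result

    @staticmethod
    def backward(ctx, gO):
        result, = ctx.saved_tensors
        return gO * result

    @staticmethod
    def jvp(ctx, gI):
        gO = gI * ctx.result
        del ctx.result
        return gO

# A single ``gradcheck`` call can now exercise both modes.
torch.autograd.gradcheck(
    ExpBoth.apply,
    (torch.randn(5, 5, dtype=torch.double, requires_grad=True),),
    check_forward_ad=True)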
Functional API (beta)
We also offer a higher-level functional API in functorch for computing Jacobian-vector products that you may find simpler to use depending on your use case.
The advantage of the functional API is that there is no need to understand or use the lower-level dual tensor API, and that you can compose it with other functorch transforms (like vmap); the disadvantage is that it offers you less control. A short sketch of such a composition appears after the basic example below.
Note that the remainder of this tutorial requires functorch (pytorch/functorch) to run. Please find installation instructions at the specified link.
import functorch as ft
primal0 = torch.randn(10, 10)
tangent0 = torch.randn(10, 10)
primal1 = torch.randn(10, 10)
tangent1 = torch.randn(10, 10)
def fn(x, y):
    return x ** 2 + y ** 2
# Here is a basic example to compute the JVP of the above function.
# The ``jvp(func, primals, tangents)`` returns ``func(*primals)`` as well as the
# computed Jacobian-vector product (JVP). Each primal must be associated with a tangent of the same shape.
primal_out, tangent_out = ft.jvp(fn, (primal0, primal1), (tangent0, tangent1))
# ``functorch.jvp`` requires every primal to be associated with a tangent.
# If we only want to associate certain inputs to `fn` with tangents,
# then we'll need to create a new function that captures inputs without tangents:
primal = torch.randn(10, 10)
tangent = torch.randn(10, 10)
y = torch.randn(10, 10)
import functools
new_fn = functools.partial(fn, y=y)
primal_out, tangent_out = ft.jvp(new_fn, (primal,), (tangent,))
/var/lib/workspace/intermediate_source/forward_ad_usage.py:203: FutureWarning:
We've integrated functorch into PyTorch. As the final step of the integration, `functorch.jvp` is deprecated as of PyTorch 2.0 and will be deleted in a future version of PyTorch >= 2.3. Please use `torch.func.jvp` instead; see the PyTorch 2.0 release notes and/or the `torch.func` migration guide for more details https://pytorch.ac.cn/docs/stable/func.migrating.html
/var/lib/workspace/intermediate_source/forward_ad_usage.py:214: FutureWarning:
We've integrated functorch into PyTorch. As the final step of the integration, `functorch.jvp` is deprecated as of PyTorch 2.0 and will be deleted in a future version of PyTorch >= 2.3. Please use `torch.func.jvp` instead; see the PyTorch 2.0 release notes and/or the `torch.func` migration guide for more details https://pytorch.ac.cn/docs/stable/func.migrating.html
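Because the functional API composes with other transforms, the same ``jvp`` call can, for example, be wrapped in ``vmap`` to push several tangent directions through a function at once. A small sketch of this composition (not part of the original tutorial; it uses the ``torch.func`` entry points recommended by the warnings above):

from torch.func import jvp, vmap

primal = torch.randn(10, 10)
batched_tangents = torch.randn(4, 10, 10)  # four directions at once

def single_jvp(tangent):
    # The primal is fixed; only the tangent varies across the batch.
    return jvp(lambda x: x ** 2, (primal,), (tangent,))[1]

# ``vmap`` maps ``single_jvp`` over the leading dimension of ``batched_tangents``,
# yielding a (4, 10, 10) tensor of Jacobian-vector products.
tangent_outs = vmap(single_jvp)(batched_tangents)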
Using the functional API with Modules
To use nn.Module with functorch.jvp to compute Jacobian-vector products with respect to the model parameters, we need to reformulate the nn.Module as a function that accepts both the model parameters and inputs to the module.
model = nn.Linear(5, 5)
input = torch.randn(16, 5)
tangents = tuple([torch.rand_like(p) for p in model.parameters()])
# Given a ``torch.nn.Module``, ``ft.make_functional_with_buffers`` extracts the state
# (``params`` and buffers) and returns a functional version of the model that
# can be invoked like a function.
# That is, the returned ``func`` can be invoked like
# ``func(params, buffers, input)``.
# ``ft.make_functional_with_buffers`` is analogous to the ``nn.Modules`` stateless API
# that you saw previously and we're working on consolidating the two.
func, params, buffers = ft.make_functional_with_buffers(model)
# Because ``jvp`` requires every input to be associated with a tangent, we need to
# create a new function that, when given the parameters, produces the output
def func_params_only(params):
    return func(params, buffers, input)

model_output, jvp_out = ft.jvp(func_params_only, (params,), (tangents,))
/var/lib/workspace/intermediate_source/forward_ad_usage.py:235: FutureWarning:
We've integrated functorch into PyTorch. As the final step of the integration, `functorch.make_functional_with_buffers` is deprecated as of PyTorch 2.0 and will be deleted in a future version of PyTorch >= 2.3. Please use `torch.func.functional_call` instead; see the PyTorch 2.0 release notes and/or the `torch.func` migration guide for more details https://pytorch.ac.cn/docs/stable/func.migrating.html
/var/lib/workspace/intermediate_source/forward_ad_usage.py:242: FutureWarning:
We've integrated functorch into PyTorch. As the final step of the integration, `functorch.jvp` is deprecated as of PyTorch 2.0 and will be deleted in a future version of PyTorch >= 2.3. Please use `torch.func.jvp` instead; see the PyTorch 2.0 release notes and/or the `torch.func` migration guide for more details https://pytorch.ac.cn/docs/stable/func.migrating.html
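The warnings above point to ``torch.func`` as the replacement for the deprecated ``functorch`` entry points. A sketch of the same computation written against ``torch.func`` (not part of the original tutorial; it reuses ``model`` and ``input`` from the block above, and assumes dicts of parameters work as jvp pytree inputs):

from torch.func import functional_call, jvp

named_params = dict(model.named_parameters())
named_tangents = {name: torch.rand_like(p) for name, p in named_params.items()}

def func_params_only(params):
    # ``functional_call`` runs ``model`` with the given parameter dict
    # instead of the parameters registered on the module.
    return functional_call(model, params, (input,))

model_output, jvp_out = jvp(func_params_only, (named_params,), (named_tangents,))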
[0] https://en.wikipedia.org/wiki/Dual_number
Total running time of the script: (0 minutes 0.122 seconds)