
(Beta) Implementing High-Performance Transformers with Scaled Dot Product Attention (SDPA)#

Created On: Mar 15, 2023 | Last Updated: Oct 09, 2024 | Last Verified: Nov 05, 2024

Author: Driss Guessous

Summary#

In this tutorial, we want to highlight a new torch.nn.functional function that can be helpful in implementing transformer architectures: torch.nn.functional.scaled_dot_product_attention. For a detailed description of the function, see the PyTorch documentation. This function has already been incorporated into torch.nn.MultiheadAttention and torch.nn.TransformerEncoderLayer.
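
For instance, those higher-level modules can be used directly and will route through SDPA on supported configurations. The snippet below is a minimal sketch, not part of the original tutorial; the shapes are arbitrary, and whether a fused SDPA kernel is actually used depends on the configuration (for example, passing need_weights=False).

import torch
import torch.nn as nn

mha = nn.MultiheadAttention(embed_dim=64, num_heads=8, batch_first=True)
x = torch.randn(2, 16, 64)                 # (batch, sequence, embedding)
out, _ = mha(x, x, x, need_weights=False)  # self-attention; SDPA is used internally when eligible
print(out.shape)                           # torch.Size([2, 16, 64])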

Overview#

At a high level, this PyTorch function calculates the scaled dot product attention (SDPA) between query, key, and value according to the definition found in the paper Attention is all you need. While this function can be written in PyTorch using existing functions, a fused implementation can provide large performance benefits over a naive implementation.
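
For reference, the computation itself is softmax(Q·K^T / sqrt(d)) · V. Below is a minimal, non-fused sketch of what the function computes when no mask or dropout is used (naive_sdpa is an illustrative helper, not part of the PyTorch API); the fused kernels produce the same result without materializing the full attention matrix.

import math
import torch

def naive_sdpa(query, key, value):
    # Materializes the full attention-weight matrix -- exactly the memory
    # traffic that the fused implementations avoid.
    scale = 1.0 / math.sqrt(query.size(-1))
    attn_weight = torch.softmax(query @ key.transpose(-2, -1) * scale, dim=-1)
    return attn_weight @ value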

Fused implementations#

For CUDA tensor inputs, the function will dispatch into one of the following implementations (these correspond to the FLASH_ATTENTION, EFFICIENT_ATTENTION, and MATH backends used later in this tutorial):

- FlashAttention
- Memory-Efficient Attention
- A PyTorch implementation defined in C++

Note

This tutorial requires PyTorch 2.0.0 or later.

import torch
import torch.nn as nn
import torch.nn.functional as F
device = "cuda" if torch.cuda.is_available() else "cpu"

# Example Usage:
query, key, value = torch.randn(2, 3, 8, device=device), torch.randn(2, 3, 8, device=device), torch.randn(2, 3, 8, device=device)
F.scaled_dot_product_attention(query, key, value)
tensor([[[ 2.0776, -0.6983,  0.7035, -1.6485, -0.7951, -0.3311,  0.1492,
          -0.8578],
         [ 0.4932,  1.2065,  0.4041,  0.0968, -0.2332,  0.1752, -0.4782,
          -0.8079],
         [ 0.7532,  1.0582,  0.5769, -0.0881, -0.3975,  0.1011, -0.3719,
          -1.0104]],

        [[-0.3089,  0.5800, -0.2403, -0.4013,  1.6163,  0.3114, -0.4385,
           1.3604],
         [-0.7240,  0.5346, -0.1608, -0.6489,  1.3062,  0.3425, -0.1411,
           0.9240],
         [-0.6395,  0.5294, -0.1754, -0.6034,  1.3402,  0.3328, -0.1917,
           1.0248]]], device='cuda:0')

Explicit Dispatcher Control#

While the function will implicitly dispatch to one of the three implementations, the user can also explicitly control the dispatch via the use of a context manager. This context manager allows users to explicitly disable certain implementations. If a user wants to make sure the function is indeed using the fastest implementation for their specific inputs, the context manager can be used to sweep through and measure performance.

# Let's define a helpful benchmarking function:
import torch.utils.benchmark as benchmark
def benchmark_torch_function_in_microseconds(f, *args, **kwargs):
    t0 = benchmark.Timer(
        stmt="f(*args, **kwargs)", globals={"args": args, "kwargs": kwargs, "f": f}
    )
    return t0.blocked_autorange().mean * 1e6

# Let's define the hyper-parameters of our input
batch_size = 32
max_sequence_len = 1024
num_heads = 32
embed_dimension = 32

dtype = torch.float16

query = torch.rand(batch_size, num_heads, max_sequence_len, embed_dimension, device=device, dtype=dtype)
key = torch.rand(batch_size, num_heads, max_sequence_len, embed_dimension, device=device, dtype=dtype)
value = torch.rand(batch_size, num_heads, max_sequence_len, embed_dimension, device=device, dtype=dtype)

print(f"The default implementation runs in {benchmark_torch_function_in_microseconds(F.scaled_dot_product_attention, query, key, value):.3f} microseconds")

# Let's explore the speed of each of the 3 implementations
from torch.nn.attention import SDPBackend, sdpa_kernel


with sdpa_kernel(SDPBackend.MATH):
    math_time=benchmark_torch_function_in_microseconds(F.scaled_dot_product_attention, query, key, value)
    print(f"The math implementation runs in {math_time:.3f} microseconds")

with sdpa_kernel(SDPBackend.FLASH_ATTENTION):
    try:
        flash_time=benchmark_torch_function_in_microseconds(F.scaled_dot_product_attention, query, key, value)
        print(f"The flash attention implementation runs in {flash_time:.3f} microseconds")
    except RuntimeError:
        print("FlashAttention is not supported. See warnings for reasons.")

with sdpa_kernel(SDPBackend.EFFICIENT_ATTENTION):
    try:
        efficient_time=benchmark_torch_function_in_microseconds(F.scaled_dot_product_attention, query, key, value)
        print(f"The memory efficient implementation runs in {efficient_time:.3f} microseconds")
    except RuntimeError:
        print("EfficientAttention is not supported. See warnings for reasons.")
The default implementation runs in 2272.459 microseconds
The math implementation runs in 87384.204 microseconds
The flash attention implementation runs in 2277.315 microseconds
The memory efficient implementation runs in 4341.564 microseconds
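
The sdpa_kernel context manager also accepts a list of backends. As a small sketch reusing the query, key, and value tensors defined above, the following allows either fused kernel while ruling out the math fallback, letting SDPA pick whichever supported fused backend it prefers:

# Allow either fused backend, but not the math fallback
with sdpa_kernel([SDPBackend.FLASH_ATTENTION, SDPBackend.EFFICIENT_ATTENTION]):
    fused_time = benchmark_torch_function_in_microseconds(F.scaled_dot_product_attention, query, key, value)
    print(f"The fastest available fused implementation runs in {fused_time:.3f} microseconds")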

Hardware dependence#

Depending on what machine you ran the above cells on and what hardware is available, your results might be different.

- If you don't have a GPU and are running on CPU, then with FP32 the context manager will have no effect and all three runs should return similar timings.
- Depending on what compute capability your graphics card supports, FlashAttention or memory-efficient attention might have failed; a rough capability check is sketched below.
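
If you want an up-front idea of whether FlashAttention is even plausible on your GPU, you can inspect its compute capability. This is only a heuristic sketch (exact requirements vary across PyTorch versions, and the authoritative signal is whether the FLASH_ATTENTION run above raised a RuntimeError); FlashAttention generally expects an Ampere-class GPU (compute capability 8.0 or newer) and fp16/bf16 inputs.

if device == "cuda":
    major, minor = torch.cuda.get_device_capability()
    print(f"Compute capability: {major}.{minor}")
    if major < 8:
        # Heuristic only: expect the memory-efficient or math implementation instead.
        print("FlashAttention is likely unavailable on this GPU.")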

Causal Self Attention#

Below is an example implementation of a multi-headed causal self attention block inspired by Andrej Karpathy's NanoGPT repository.

class CausalSelfAttention(nn.Module):

    def __init__(self, num_heads: int, embed_dimension: int, bias: bool=False, is_causal: bool=False, dropout:float=0.0):
        super().__init__()
        assert embed_dimension % num_heads == 0
        # key, query, value projections for all heads, but in a batch
        self.c_attn = nn.Linear(embed_dimension, 3 * embed_dimension, bias=bias)
        # output projection
        self.c_proj = nn.Linear(embed_dimension, embed_dimension, bias=bias)
        # regularization
        self.dropout = dropout
        self.resid_dropout = nn.Dropout(dropout)
        self.num_heads = num_heads
        self.embed_dimension = embed_dimension
        # Perform causal masking
        self.is_causal = is_causal

    def forward(self, x):
        # calculate query, key, values for all heads in batch and move head forward to be the batch dim
        query_projected = self.c_attn(x)

        batch_size = query_projected.size(0)
        embed_dim = query_projected.size(2)  # equals 3 * embed_dimension after the fused qkv projection
        head_dim = embed_dim // (self.num_heads * 3)

        query, key, value = query_projected.chunk(3, -1)
        query = query.view(batch_size, -1, self.num_heads, head_dim).transpose(1, 2)
        key = key.view(batch_size, -1, self.num_heads, head_dim).transpose(1, 2)
        value = value.view(batch_size, -1, self.num_heads, head_dim).transpose(1, 2)

        if self.training:
            dropout = self.dropout
            is_causal = self.is_causal
        else:
            dropout = 0.0
            is_causal = False

        y = F.scaled_dot_product_attention(query, key, value, attn_mask=None, dropout_p=dropout, is_causal=is_causal)
        y = y.transpose(1, 2).view(batch_size, -1, self.num_heads * head_dim)

        y = self.resid_dropout(self.c_proj(y))
        return y


num_heads = 8
heads_per_dim = 64
embed_dimension = num_heads * heads_per_dim
dtype = torch.float16
model = CausalSelfAttention(num_heads=num_heads, embed_dimension=embed_dimension, bias=False, is_causal=True, dropout=0.1).to("cuda").to(dtype).eval()
print(model)
CausalSelfAttention(
  (c_attn): Linear(in_features=512, out_features=1536, bias=False)
  (c_proj): Linear(in_features=512, out_features=512, bias=False)
  (resid_dropout): Dropout(p=0.1, inplace=False)
)

NestedTensor and Dense tensor support#

SDPA supports both NestedTensor and Dense tensor inputs. NestedTensors handle the case where the input is a batch of variable length sequences without needing to pad every sequence in the batch to the maximum length. For more information about NestedTensors, see torch.nested and the NestedTensors tutorial.
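
As a quick, standalone illustration (a minimal sketch, separate from the helper defined below), a nested tensor can be built directly from a list of sequences of different lengths; layout=torch.jagged is the layout recommended by the warning shown later in this section.

# Two sequences of different lengths, no padding required
seqs = [torch.randn(5, 8, device=device), torch.randn(3, 8, device=device)]
nt = torch.nested.nested_tensor(seqs, layout=torch.jagged)
print(nt.is_nested)                    # True
print([t.shape for t in nt.unbind()])  # per-sequence shapes are preserved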

import random
def generate_rand_batch(
    batch_size,
    max_sequence_len,
    embed_dimension,
    pad_percentage=None,
    dtype=torch.float16,
    device="cuda",
):
    if not pad_percentage:
        return (
            torch.randn(
                batch_size,
                max_sequence_len,
                embed_dimension,
                dtype=dtype,
                device=device,
            ),
            None,
        )
    # Random sequence lengths
    seq_len_list = [
        int(max_sequence_len * (1 - random.gauss(pad_percentage, 0.01)))
        for _ in range(batch_size)
    ]
    # Make random entry in the batch have max sequence length
    seq_len_list[random.randint(0, batch_size - 1)] = max_sequence_len
    return (
        torch.nested.nested_tensor(
            [
                torch.randn(seq_len, embed_dimension,
                            dtype=dtype, device=device)
                for seq_len in seq_len_list
            ]
        ),
        seq_len_list,
    )

random_nt, _ = generate_rand_batch(32, 512, embed_dimension, pad_percentage=0.5, dtype=dtype, device=device)
random_dense, _ = generate_rand_batch(32, 512, embed_dimension, pad_percentage=None, dtype=dtype, device=device)

# Currently the fused implementations don't support ``NestedTensor`` for training
model.eval()

with sdpa_kernel(SDPBackend.FLASH_ATTENTION):
    try:
        print(f"Random NT runs in {benchmark_torch_function_in_microseconds(model, random_nt):.3f} microseconds")
        print(f"Random Dense runs in {benchmark_torch_function_in_microseconds(model, random_dense):.3f} microseconds")
    except RuntimeError:
        print("FlashAttention is not supported. See warnings for reasons.")
/usr/local/lib/python3.10/dist-packages/torch/nested/__init__.py:250: UserWarning:

The PyTorch API of nested tensors is in prototype stage and will change in the near future. We recommend specifying layout=torch.jagged when constructing a nested tensor, as this layout receives active development, has better operator coverage, and works with torch.compile. (Triggered internally at /pytorch/aten/src/ATen/NestedTensorImpl.cpp:178.)

Random NT runs in 611.695 microseconds
Random Dense runs in 948.388 microseconds

Using SDPA with torch.compile#

With the release of PyTorch 2.0, a new feature called torch.compile() was introduced, which can provide significant performance improvements over eager mode. Scaled dot product attention is fully composable with torch.compile(). To demonstrate this, let's compile the CausalSelfAttention module using torch.compile() and observe the resulting performance improvements.

batch_size = 32
max_sequence_len = 256
x = torch.rand(batch_size, max_sequence_len,
               embed_dimension, device=device, dtype=dtype)
print(
    f"The non compiled module runs in  {benchmark_torch_function_in_microseconds(model, x):.3f} microseconds")


compiled_model = torch.compile(model)
# Let's compile it
compiled_model(x)
print(
    f"The compiled module runs in  {benchmark_torch_function_in_microseconds(compiled_model, x):.3f} microseconds")
The non compiled module runs in  424.367 microseconds
The compiled module runs in  525.241 microseconds

The exact execution time is dependent on the machine; however, the results for mine: the non-compiled module ran in 166.616 microseconds and the compiled module ran in 166.726 microseconds. That is not what we were expecting. Let's dig a little deeper. PyTorch comes with an amazing built-in profiler that you can use to inspect the performance characteristics of your code.

from torch.profiler import profile, record_function, ProfilerActivity
activities = [ProfilerActivity.CPU]
if device == 'cuda':
    activities.append(ProfilerActivity.CUDA)

with profile(activities=activities, record_shapes=False) as prof:
    with record_function(" Non-Compilied Causal Attention"):
        for _ in range(25):
            model(x)
print(prof.key_averages().table(sort_by="cuda_time_total", row_limit=10))


with profile(activities=activities, record_shapes=False) as prof:
    with record_function("Compiled Causal Attention"):
        for _ in range(25):
            compiled_model(x)
print(prof.key_averages().table(sort_by="cuda_time_total", row_limit=10))

# For even more insights, you can export the trace and use ``chrome://tracing`` to view the results:
#
#    prof.export_chrome_trace("compiled_causal_attention_trace.json")
-------------------------------------------------------  ------------  ------------  ------------  ------------  ------------  ------------  ------------  ------------  ------------  ------------
                                                   Name    Self CPU %      Self CPU   CPU total %     CPU total  CPU time avg     Self CUDA   Self CUDA %    CUDA total  CUDA time avg    # of Calls
-------------------------------------------------------  ------------  ------------  ------------  ------------  ------------  ------------  ------------  ------------  ------------  ------------
                         Non-Compilied Causal Attention        16.42%       2.083ms        71.91%       9.126ms       9.126ms       0.000us         0.00%      10.846ms      10.846ms             1
                         Non-Compilied Causal Attention         0.00%       0.000us         0.00%       0.000us       0.000us      10.731ms       101.01%      10.731ms      10.731ms             1
                                           aten::linear         0.93%     117.695us        34.04%       4.319ms      86.390us       0.000us         0.00%       8.028ms     160.566us            50
                                           aten::matmul         1.79%     227.338us        30.80%       3.909ms      78.186us       0.000us         0.00%       8.028ms     160.566us            50
                                               aten::mm         9.57%       1.214ms        26.96%       3.421ms      68.427us       7.806ms        73.48%       8.028ms     160.566us            50
         ampere_fp16_s1688gemm_fp16_128x128_ldg8_f2f_tn         0.00%       0.000us         0.00%       0.000us       0.000us       5.582ms        52.54%       5.582ms     223.279us            25
                     aten::scaled_dot_product_attention         1.64%     207.687us        14.12%       1.792ms      71.684us       0.000us         0.00%       2.818ms     112.714us            25
              aten::_scaled_dot_product_flash_attention         2.29%     290.827us        12.48%       1.584ms      63.376us       0.000us         0.00%       2.818ms     112.714us            25
                         aten::_flash_attention_forward         2.36%     299.166us         9.00%       1.142ms      45.680us       2.818ms        26.52%       2.818ms     112.714us            25
void pytorch_flash::flash_fwd_kernel<Flash_fwd_kerne...         0.00%       0.000us         0.00%       0.000us       0.000us       2.818ms        26.52%       2.818ms     112.714us            25
-------------------------------------------------------  ------------  ------------  ------------  ------------  ------------  ------------  ------------  ------------  ------------  ------------
Self CPU time total: 12.691ms
Self CUDA time total: 10.624ms

-------------------------------------------------------  ------------  ------------  ------------  ------------  ------------  ------------  ------------  ------------  ------------  ------------
                                                   Name    Self CPU %      Self CPU   CPU total %     CPU total  CPU time avg     Self CUDA   Self CUDA %    CUDA total  CUDA time avg    # of Calls
-------------------------------------------------------  ------------  ------------  ------------  ------------  ------------  ------------  ------------  ------------  ------------  ------------
                              Compiled Causal Attention         0.00%       0.000us         0.00%       0.000us       0.000us      10.666ms       100.56%      10.666ms      10.666ms             1
                              Compiled Causal Attention         7.19%     919.680us        75.56%       9.662ms       9.662ms       0.000us         0.00%      10.607ms      10.607ms             1
                             Torch-Compiled Region: 0/0         6.75%     862.663us        65.73%       8.405ms     336.195us       0.000us         0.00%      10.607ms     424.272us            25
                                       CompiledFunction        20.78%       2.657ms        56.63%       7.242ms     289.675us       0.000us         0.00%      10.607ms     424.272us            25
                                               aten::mm         6.88%     879.324us        10.78%       1.379ms      27.581us       7.809ms        73.62%       7.809ms     156.174us            50
         ampere_fp16_s1688gemm_fp16_128x128_ldg8_f2f_tn         0.00%       0.000us         0.00%       0.000us       0.000us       5.585ms        52.65%       5.585ms     223.396us            25
              aten::_scaled_dot_product_flash_attention         1.64%     209.534us        11.62%       1.486ms      59.456us       0.000us         0.00%       2.798ms     111.923us            25
                         aten::_flash_attention_forward         2.34%     298.938us         8.48%       1.085ms      43.383us       2.798ms        26.38%       2.798ms     111.923us            25
void pytorch_flash::flash_fwd_kernel<Flash_fwd_kerne...         0.00%       0.000us         0.00%       0.000us       0.000us       2.798ms        26.38%       2.798ms     111.923us            25
ampere_fp16_s1688gemm_fp16_128x128_ldg8_f2f_stages_3...         0.00%       0.000us         0.00%       0.000us       0.000us       2.224ms        20.97%       2.224ms      88.952us            25
-------------------------------------------------------  ------------  ------------  ------------  ------------  ------------  ------------  ------------  ------------  ------------  ------------
Self CPU time total: 12.788ms
Self CUDA time total: 10.607ms

The previous code snippet generates a report of the top 10 PyTorch functions that consumed the most GPU execution time, for both the compiled and non-compiled module. The analysis shows that the majority of the time spent on the GPU is concentrated on the same set of functions for both modules. The reason for this is that torch.compile is very good at removing the framework overhead associated with PyTorch. If your model is launching large, efficient CUDA kernels, which in this case CausalSelfAttention is, then the overhead of PyTorch can be hidden.
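
One hypothetical way to make that overhead visible (not part of the original run, and the outcome will depend on your machine) is to shrink the workload so that kernel time no longer dominates; the compiled module's lower CPU-side overhead may then show up more clearly.

small_x = torch.rand(2, 32, embed_dimension, device=device, dtype=dtype)
print(f"Eager, small input:    {benchmark_torch_function_in_microseconds(model, small_x):.3f} microseconds")
print(f"Compiled, small input: {benchmark_torch_function_in_microseconds(compiled_model, small_x):.3f} microseconds")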

In reality, your module does not normally consist of a single CausalSelfAttention block. When experimenting with Andrej Karpathy's NanoGPT repository, compiling the module took the time per train step from 6090.49ms to 3273.17ms! This was done on commit ae3a8d5 of NanoGPT, training on the Shakespeare dataset.

Using SDPA with attn_bias subclasses#

# As of PyTorch 2.3, we have added a new submodule that contains tensor subclasses
# designed to be used with ``torch.nn.functional.scaled_dot_product_attention``.
# The module is named ``torch.nn.attention.bias`` and contains the following two
# utilities for generating causal attention variants:
#
# - ``torch.nn.attention.bias.causal_upper_left``
# - ``torch.nn.attention.bias.causal_lower_right``
#
# Note: the current argument ``is_causal`` in ``torch.nn.functional.scaled_dot_product_attention``
# is the same as using ``torch.nn.attention.bias.causal_upper_left``.
#

from torch.nn.attention.bias import causal_lower_right, causal_upper_left

batch_size = 32
sequence_length_q = 2
sequence_length_kv = 10
num_heads = 16
embed_dimension = 32

dtype = torch.float16

query = torch.rand(batch_size, num_heads, sequence_length_q, embed_dimension, device=device, dtype=dtype)
key = torch.rand(batch_size, num_heads, sequence_length_kv, embed_dimension, device=device, dtype=dtype)
value = torch.rand(batch_size, num_heads, sequence_length_kv, embed_dimension, device=device, dtype=dtype)

upper_left_bias = causal_upper_left(sequence_length_q, sequence_length_kv)
lower_right_bias = causal_lower_right(sequence_length_q, sequence_length_kv)

print(type(upper_left_bias))
print(type(lower_right_bias))

assert type(upper_left_bias) == type(lower_right_bias)
assert issubclass(type(upper_left_bias), torch.Tensor)

# As you can see from the previous output, both biases are of the same type,
# ``torch.nn.attention.bias.CausalBias``, and both subclass ``torch.Tensor``.

# Let's see what these tensors look like
print(upper_left_bias)
print(lower_right_bias)

# Upper Left Bias aligns the causal attention mask to the upper left corner of the attention scores matrix.
# This only has an impact when the attention scores matrix is not square, which is common for decoding use cases.
# Another way of thinking about this concept: assuming the attention score matrix is two dimensional,
# ``attn_score[0][0]`` is the attention score between the 0th token in the query and the 0th token in the key.
# For upper left bias, the 0th token in the query is aligned to the 0th token in the key, while for lower right bias,
# the sequence of q is aligned so that the last token in q is aligned to the last token in k
# (for example, ``attn_score[-1][-1]`` is all True, since the last token in q is at the same position as the last token in k
# even if the sequence lengths of q and k are different).

# These objects are intended to be used with sdpa
out_upper_left = F.scaled_dot_product_attention(query, key, value, upper_left_bias)
out_lower_right = F.scaled_dot_product_attention(query, key, value, lower_right_bias)
out_is_causal = F.scaled_dot_product_attention(query, key, value, is_causal=True)

assert torch.allclose(out_upper_left, out_is_causal)
assert not torch.allclose(out_upper_left, out_lower_right)

# These attention biases should also be compatible with torch.compile
compiled_sdpa = torch.compile(F.scaled_dot_product_attention, fullgraph=True)
out_upper_left = compiled_sdpa(query, key, value, upper_left_bias)
<class 'torch.nn.attention.bias.CausalBias'>
<class 'torch.nn.attention.bias.CausalBias'>
tensor([[ True, False, False, False, False, False, False, False, False, False],
        [ True,  True, False, False, False, False, False, False, False, False]])
tensor([[ True,  True,  True,  True,  True,  True,  True,  True,  True, False],
        [ True,  True,  True,  True,  True,  True,  True,  True,  True,  True]])
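
For intuition, the two masks printed above correspond to ordinary boolean lower-triangular masks (manual_upper_left and manual_lower_right below are hypothetical helpers, not part of the API). Such a mask can also be passed directly as attn_mask, though the CausalBias subclasses let SDPA keep using its specialized causal kernel paths instead of materializing the mask.

# Hand-built equivalents of the masks printed above
manual_upper_left = torch.ones(sequence_length_q, sequence_length_kv, dtype=torch.bool, device=device).tril()
manual_lower_right = torch.ones(sequence_length_q, sequence_length_kv, dtype=torch.bool, device=device).tril(
    diagonal=sequence_length_kv - sequence_length_q
)
# Either mask can be passed as an explicit ``attn_mask`` (True means "attend")
out_manual = F.scaled_dot_product_attention(query, key, value, attn_mask=manual_lower_right)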

Conclusion#

In this tutorial, we have demonstrated the basic usage of torch.nn.functional.scaled_dot_product_attention. We have shown how the sdpa_kernel context manager can be used to assert that a certain implementation is used on GPU. In addition, we built a simple CausalSelfAttention module that works with NestedTensor and is torch compilable. Along the way, we have shown how the profiling tools can be used to explore the performance characteristics of a user-defined module.

Total running time of the script: (0 minutes 7.522 seconds)