
Getting Started on Intel GPU#

Created On: Jun 14, 2024 | Last Updated On: Sep 01, 2025

Hardware Prerequisites#

For Intel Data Center GPU

Supported OS: Red Hat* Enterprise Linux* 9.2, SUSE Linux Enterprise Server* 15 SP5, Ubuntu* Server 22.04 (>= 5.15 LTS kernel)

Validated Hardware: Intel® Data Center GPU Max Series (CodeName: Ponte Vecchio)

For Intel Client GPU

Supported OS: Windows 11 & Ubuntu 24.04/25.04

Validated Hardware:

Intel® Arc A-Series Graphics (CodeName: Alchemist)
Intel® Arc B-Series Graphics (CodeName: Battlemage)
Intel® Core™ Ultra Processors with Intel® Arc™ Graphics (CodeName: Meteor Lake-H)
Intel® Core™ Ultra Desktop Processors (Series 2) with Intel® Arc™ Graphics (CodeName: Lunar Lake)
Intel® Core™ Ultra Mobile Processors (Series 2) with Intel® Arc™ Graphics (CodeName: Arrow Lake-H)

Starting from PyTorch* 2.5, Intel GPU support (Prototype) is available for both Intel® Client GPUs and the Intel® Data Center GPU Max Series on Linux and Windows. It brings Intel GPUs and the SYCL* software stack into the official PyTorch stack, providing a consistent user experience for a wider range of AI applications.

Software Prerequisites#

To use PyTorch on Intel GPUs, you need to install the Intel GPU driver first. For installation instructions, refer to Intel GPU Driver Installation.

If you are installing from binaries, skip the Intel® Deep Learning Essentials installation section. If you are building from source, refer to PyTorch Installation Prerequisites for Intel GPUs, which covers installation of both the Intel GPU driver and Intel® Deep Learning Essentials.

Installation#

Binaries#

Once the Intel GPU driver is installed, use the following commands to install torch, torchvision, and torchaudio:

For release wheels:

pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/xpu

For nightly wheels:

pip3 install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/xpu

Build from Source#

Once the Intel GPU driver and Intel® Deep Learning Essentials are installed, follow the guides below to build torch, torchvision, and torchaudio from source.

To build torch from source, refer to PyTorch Installation: Build from Source.

To build torchvision from source, refer to Torchvision Installation: Build from Source.

To build torchaudio from source, refer to Torchaudio Installation: Build from Source.

Check Availability for Intel GPU#

To check whether your Intel GPU is available, you would typically use the following code:

import torch
print(torch.xpu.is_available())  # torch.xpu is the API for Intel GPU support

If the output is False, double-check the driver installation for your Intel GPU.
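Beyond the boolean check, the torch.xpu module exposes a few query helpers that can help narrow down whether the problem is on the driver side or the PyTorch side. A minimal diagnostic sketch (assuming at least one XPU device is expected to be present):

import torch

# How many XPU devices the runtime can see; 0 usually points to a driver
# or environment problem rather than a PyTorch problem.
print(torch.xpu.device_count())

if torch.xpu.is_available():
    # Name and properties of the default device, useful to confirm that
    # the expected GPU was picked up.
    print(torch.xpu.get_device_name(0))
    print(torch.xpu.get_device_properties(0))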

Minimum Code Change#

If you are migrating code written for cuda, change references from cuda to xpu. For example:

# CUDA CODE
tensor = torch.tensor([1.0, 2.0]).to("cuda")

# CODE for Intel GPU
tensor = torch.tensor([1.0, 2.0]).to("xpu")
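The same substitution applies to other common call sites. A small illustrative sketch (not an exhaustive migration list):

import torch

# Device objects and tensor factories accept "xpu" wherever "cuda" was used.
device = torch.device("xpu")
x = torch.randn(2, 2, device=device)

# Runtime utilities move from the torch.cuda namespace to torch.xpu.
torch.xpu.synchronize()   # instead of torch.cuda.synchronize()
torch.xpu.empty_cache()   # instead of torch.cuda.empty_cache()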

The following points outline the support and limitations of PyTorch on Intel GPUs:

  1. Both training and inference workflows are supported.

  2. Both eager mode and torch.compile are supported. Starting from PyTorch* 2.7, torch.compile on Windows also supports Intel GPUs; refer to How to Use torch.compile on Windows CPU/XPU.

  3. Data types such as FP32, BF16, and FP16, as well as Automatic Mixed Precision (AMP), are supported (see the sketch after this list).
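As a quick illustration of the dtype support listed in point 3 (a minimal sketch, assuming an XPU device is available):

import torch

# Tensors can be created directly on the XPU device in the supported dtypes.
x_fp32 = torch.ones(4, 4, device="xpu", dtype=torch.float32)
x_bf16 = torch.ones(4, 4, device="xpu", dtype=torch.bfloat16)
x_fp16 = torch.ones(4, 4, device="xpu", dtype=torch.float16)
print(x_fp32.dtype, x_bf16.dtype, x_fp16.dtype)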

Examples#

This section contains usage examples for both inference and training workflows.

Inference Examples#

Here are a few examples of inference workflows.

Inference with FP32#

import torch
import torchvision.models as models

model = models.resnet50(weights="ResNet50_Weights.DEFAULT")
model.eval()
data = torch.rand(1, 3, 224, 224)

model = model.to("xpu")
data = data.to("xpu")

with torch.no_grad():
    model(data)

print("Execution finished")

Inference with AMP#

import torch
import torchvision.models as models

model = models.resnet50(weights="ResNet50_Weights.DEFAULT")
model.eval()
data = torch.rand(1, 3, 224, 224)

model = model.to("xpu")
data = data.to("xpu")

with torch.no_grad():
    # set dtype=torch.bfloat16 for BF16
    with torch.autocast(device_type="xpu", dtype=torch.float16, enabled=True):
        model(data)

print("Execution finished")

Inference with torch.compile#

import torch
import torchvision.models as models
import time

model = models.resnet50(weights="ResNet50_Weights.DEFAULT")
model.eval()
data = torch.rand(1, 3, 224, 224)
ITERS = 10

model = model.to("xpu")
data = data.to("xpu")

for i in range(ITERS):
    start = time.time()
    with torch.no_grad():
        model(data)
        torch.xpu.synchronize()
    end = time.time()
    print(f"Inference time before torch.compile for iteration {i}: {(end-start)*1000} ms")

model = torch.compile(model)
for i in range(ITERS):
    start = time.time()
    with torch.no_grad():
        model(data)
        torch.xpu.synchronize()
    end = time.time()
    print(f"Inference time after torch.compile for iteration {i}: {(end-start)*1000} ms")

print("Execution finished")

Training Examples#

Here are a few examples of training workflows.

Train with FP32#

import torch
import torchvision

LR = 0.001
DOWNLOAD = True
DATA = "datasets/cifar10/"

transform = torchvision.transforms.Compose(
    [
        torchvision.transforms.Resize((224, 224)),
        torchvision.transforms.ToTensor(),
        torchvision.transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),
    ]
)
train_dataset = torchvision.datasets.CIFAR10(
    root=DATA,
    train=True,
    transform=transform,
    download=DOWNLOAD,
)
train_loader = torch.utils.data.DataLoader(dataset=train_dataset, batch_size=128)
train_len = len(train_loader)

model = torchvision.models.resnet50()
criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=LR, momentum=0.9)
model.train()
model = model.to("xpu")
criterion = criterion.to("xpu")

print(f"Initiating training")
for batch_idx, (data, target) in enumerate(train_loader):
    data = data.to("xpu")
    target = target.to("xpu")
    optimizer.zero_grad()
    output = model(data)
    loss = criterion(output, target)
    loss.backward()
    optimizer.step()
    if (batch_idx + 1) % 10 == 0:
         iteration_loss = loss.item()
         print(f"Iteration [{batch_idx+1}/{train_len}], Loss: {iteration_loss:.4f}")
torch.save(
    {
        "model_state_dict": model.state_dict(),
        "optimizer_state_dict": optimizer.state_dict(),
    },
    "checkpoint.pth",
)

print("Execution finished")

Train with AMP#

Note: Training with GradScaler requires hardware support for FP64. FP64 is not natively supported by Intel® Arc™ A-Series Graphics. If you run your workloads on Intel® Arc™ A-Series Graphics, disable GradScaler.
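One way to follow this advice without hard-coding the flag is to gate the scaler on the device capability. A minimal sketch, assuming torch.xpu.get_device_properties exposes a has_fp64 field in your PyTorch build:

import torch

# Enable GradScaler only when the device reports native FP64 support;
# on Intel® Arc™ A-Series Graphics this is expected to be False.
has_fp64 = torch.xpu.get_device_properties(0).has_fp64
scaler = torch.amp.GradScaler(device="xpu", enabled=has_fp64)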

import torch
import torchvision

LR = 0.001
DOWNLOAD = True
DATA = "datasets/cifar10/"

use_amp=True

transform = torchvision.transforms.Compose(
    [
        torchvision.transforms.Resize((224, 224)),
        torchvision.transforms.ToTensor(),
        torchvision.transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),
    ]
)
train_dataset = torchvision.datasets.CIFAR10(
    root=DATA,
    train=True,
    transform=transform,
    download=DOWNLOAD,
)
train_loader = torch.utils.data.DataLoader(dataset=train_dataset, batch_size=128)
train_len = len(train_loader)

model = torchvision.models.resnet50()
criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=LR, momentum=0.9)
scaler = torch.amp.GradScaler(device="xpu", enabled=use_amp)

model.train()
model = model.to("xpu")
criterion = criterion.to("xpu")

print(f"Initiating training")
for batch_idx, (data, target) in enumerate(train_loader):
    data = data.to("xpu")
    target = target.to("xpu")
    # set dtype=torch.bfloat16 for BF16
    with torch.autocast(device_type="xpu", dtype=torch.float16, enabled=use_amp):
        output = model(data)
        loss = criterion(output, target)
    scaler.scale(loss).backward()
    scaler.step(optimizer)
    scaler.update()
    optimizer.zero_grad()
    if (batch_idx + 1) % 10 == 0:
         iteration_loss = loss.item()
         print(f"Iteration [{batch_idx+1}/{train_len}], Loss: {iteration_loss:.4f}")

torch.save(
    {
        "model_state_dict": model.state_dict(),
        "optimizer_state_dict": optimizer.state_dict(),
    },
    "checkpoint.pth",
)

print("Execution finished")

Train with torch.compile#

import torch
import torchvision

LR = 0.001
DOWNLOAD = True
DATA = "datasets/cifar10/"

transform = torchvision.transforms.Compose(
    [
        torchvision.transforms.Resize((224, 224)),
        torchvision.transforms.ToTensor(),
        torchvision.transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),
    ]
)
train_dataset = torchvision.datasets.CIFAR10(
    root=DATA,
    train=True,
    transform=transform,
    download=DOWNLOAD,
)
train_loader = torch.utils.data.DataLoader(dataset=train_dataset, batch_size=128)
train_len = len(train_loader)

model = torchvision.models.resnet50()
criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=LR, momentum=0.9)
model.train()
model = model.to("xpu")
criterion = criterion.to("xpu")
model = torch.compile(model)

print(f"Initiating training with torch compile")
for batch_idx, (data, target) in enumerate(train_loader):
    data = data.to("xpu")
    target = target.to("xpu")
    optimizer.zero_grad()
    output = model(data)
    loss = criterion(output, target)
    loss.backward()
    optimizer.step()
    if (batch_idx + 1) % 10 == 0:
         iteration_loss = loss.item()
         print(f"Iteration [{batch_idx+1}/{train_len}], Loss: {iteration_loss:.4f}")
torch.save(
    {
        "model_state_dict": model.state_dict(),
        "optimizer_state_dict": optimizer.state_dict(),
    },
    "checkpoint.pth",
)

print("Execution finished")