
Reinforcement Learning (DQN) Tutorial#

Created On: Mar 24, 2017 | Last Updated: Jun 16, 2025 | Last Verified: Nov 05, 2024

Author: Adam Paszke

Mark Towers

This tutorial shows how to use PyTorch to train a Deep Q Learning (DQN) agent on the CartPole-v1 task from Gymnasium.

You might find it helpful to read the original Deep Q Learning (DQN) paper.

Task

The agent has to decide whether to move the cart left or right, so that the pole attached to it stays upright. You can find more information about this environment, and other more challenging environments, on Gymnasium's website.

CartPole

CartPole#

As the agent observes the current state of the environment and chooses an action, the environment transitions to a new state and also returns a reward that indicates the consequences of the action. In this task, the reward is +1 for every incremental timestep, and the environment terminates if the pole falls over too far or the cart moves more than 2.4 units away from the center. This means that better-performing scenarios run for a longer duration and accumulate a larger return.

The CartPole task is designed so that the inputs to the agent are 4 real values representing the environment state (position, velocity, etc.). We pass these 4 inputs directly to a small fully-connected network with 2 outputs, one for each action. The network is trained to predict the expected value of each action given the input state. The action with the highest expected value is then chosen.

Packages

First, let's import the needed packages. We need gymnasium for the environment, installed using pip. Gymnasium is a fork of the original OpenAI Gym project and has been maintained by the same team since Gym v0.19. If you are running this in Google Colab, run:

%%bash
pip3 install gymnasium[classic_control]

We'll also use the following from PyTorch:

  • neural networks (torch.nn)

  • optimization (torch.optim)

  • automatic differentiation (torch.autograd)

import gymnasium as gym
import math
import random
import matplotlib
import matplotlib.pyplot as plt
from collections import namedtuple, deque
from itertools import count

import torch
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F

env = gym.make("CartPole-v1")

# set up matplotlib
is_ipython = 'inline' in matplotlib.get_backend()
if is_ipython:
    from IPython import display

plt.ion()

# if GPU is to be used
device = torch.device(
    "cuda" if torch.cuda.is_available() else
    "mps" if torch.backends.mps.is_available() else
    "cpu"
)


# To ensure reproducibility during training, you can fix the random seeds
# by uncommenting the lines below. This makes the results consistent across
# runs, which is helpful for debugging or comparing different approaches.
#
# That said, allowing randomness can be beneficial in practice, as it lets
# the model explore different training trajectories.


# seed = 42
# random.seed(seed)
# torch.manual_seed(seed)
# env.reset(seed=seed)
# env.action_space.seed(seed)
# env.observation_space.seed(seed)
# if torch.cuda.is_available():
#     torch.cuda.manual_seed(seed)

Replay Memory#

We'll be using experience replay memory for training our DQN. It stores the transitions that the agent observes, allowing us to reuse this data later. By sampling from it randomly, the transitions that build up a batch are decorrelated. It has been shown that this greatly stabilizes and improves the DQN training procedure.

For this, we're going to need two classes:

  • Transition - a named tuple representing a single transition in our environment. It essentially maps (state, action) pairs to their (next_state, reward) result, with the state being the environment observation described above.

  • ReplayMemory - a cyclic buffer of bounded size that holds the recently observed transitions. It also implements a .sample() method for selecting a random batch of transitions for training.

Transition = namedtuple('Transition',
                        ('state', 'action', 'next_state', 'reward'))


class ReplayMemory(object):

    def __init__(self, capacity):
        self.memory = deque([], maxlen=capacity)

    def push(self, *args):
        """Save a transition"""
        self.memory.append(Transition(*args))

    def sample(self, batch_size):
        return random.sample(self.memory, batch_size)

    def __len__(self):
        return len(self.memory)
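
As a quick illustration of how this buffer behaves, the short sketch below (illustrative only, not part of the training code; the dummy tensors are made up) pushes more transitions than the capacity allows and then draws a batch - the oldest entries are evicted and sample returns a random, decorrelated selection.

# Illustrative usage sketch: fill a tiny buffer with dummy transitions and
# draw a random batch. The tensor shapes mirror how the training loop below
# stores CartPole transitions (an extra batch dimension of size 1).
demo_memory = ReplayMemory(capacity=5)
for i in range(8):  # pushing past capacity silently evicts the oldest entries
    dummy_state = torch.zeros(1, 4)
    dummy_action = torch.tensor([[i % 2]], dtype=torch.long)
    dummy_next_state = torch.zeros(1, 4)
    dummy_reward = torch.tensor([1.0])
    demo_memory.push(dummy_state, dummy_action, dummy_next_state, dummy_reward)

print(len(demo_memory))        # 5 - bounded by the capacity
batch = demo_memory.sample(3)  # a list of 3 random Transition tuples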

Now, let's define our model. But first, let's quickly recap what a DQN is.

DQN Algorithm#

Our environment is deterministic, so all equations presented here are also formulated deterministically for the sake of simplicity. In the reinforcement learning literature, they would also contain expectations over stochastic transitions in the environment.

Our aim will be to train a policy that tries to maximize the discounted, cumulative reward \(R_{t_0} = \sum_{t=t_0}^{\infty} \gamma^{t - t_0} r_t\), where \(R_{t_0}\) is also known as the return. The discount \(\gamma\) should be a constant between \(0\) and \(1\) that ensures the sum converges. A lower \(\gamma\) makes rewards from the uncertain far future less important to our agent than rewards in the near future that it can be fairly confident about. It also encourages the agent to collect rewards closer in time rather than equivalent rewards that are temporally far away in the future.
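
As a concrete illustration (a small sketch with made-up reward values, not part of the tutorial's code), the return can be computed directly from this definition:

# Illustrative only: discounted return R_{t0} = sum_t gamma^(t - t0) * r_t.
# With gamma < 1, rewards further in the future contribute less.
gamma = 0.99
rewards = [1.0, 1.0, 1.0, 1.0]  # e.g. four CartPole time steps
ret = sum(gamma ** t * r for t, r in enumerate(rewards))
print(ret)  # 1 + 0.99 + 0.99**2 + 0.99**3 ≈ 3.9404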

The main idea behind Q-learning is that if we had a function \(Q^*: State \times Action \rightarrow \mathbb{R}\) that could tell us what our return would be if we were to take an action in a given state, then we could easily construct a policy that maximizes our rewards:

\[\pi^*(s) = \arg\!\max_a \ Q^*(s, a) \]

However, we don't know everything about the world, so we don't have access to \(Q^*\). But, since neural networks are universal function approximators, we can simply create one and train it to resemble \(Q^*\).
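
In code, such a greedy policy is just an argmax over the Q-network's outputs for the current state; the select_action helper defined later implements exactly this (plus exploration). The q_net and state names in this small sketch are placeholders for illustration:

# Sketch of the greedy policy pi*(s) = argmax_a Q*(s, a), assuming q_net maps
# a state of shape (1, n_observations) to Q-values of shape (1, n_actions).
def greedy_action(q_net, state):
    with torch.no_grad():                  # no gradients are needed for acting
        return q_net(state).argmax(dim=1)  # index of the highest Q-value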

For our training update rule, we'll use the fact that every \(Q\) function for some policy obeys the Bellman equation:

\[Q^{\pi}(s, a) = r + \gamma Q^{\pi}(s', \pi(s')) \]

The difference between the two sides of the equality is known as the temporal difference error, \(\delta\):

\[\delta = Q(s, a) - (r + \gamma \max_{a'} Q(s', a')) \]

To minimize this error, we will use the Huber loss. The Huber loss acts like the mean squared error when the error is small, but like the mean absolute error when the error is large - this makes it more robust to outliers when the estimates of \(Q\) are very noisy. We calculate this over a batch of transitions, \(B\), sampled from the replay memory:

\[\mathcal{L} = \frac{1}{|B|}\sum_{(s, a, s', r) \ \in \ B} \mathcal{L}(\delta)\]
\[\text{where} \quad \mathcal{L}(\delta) = \begin{cases} \frac{1}{2}{\delta^2} & \text{for } |\delta| \le 1, \\ |\delta| - \frac{1}{2} & \text{otherwise.} \end{cases}\]
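
This piecewise loss is what PyTorch's nn.SmoothL1Loss computes with its default beta=1.0, and it is the criterion used in optimize_model below. The small check here uses made-up values purely for illustration:

# Illustrative check of the Huber formula against nn.SmoothL1Loss.
huber = nn.SmoothL1Loss()
pred = torch.tensor([0.5, 3.0])    # hypothetical Q(s, a) estimates
target = torch.tensor([0.0, 0.0])  # hypothetical r + gamma * max_a' Q(s', a')
# small error 0.5 -> 0.5 * 0.5**2 = 0.125; large error 3.0 -> 3.0 - 0.5 = 2.5
print(huber(pred, target))         # mean of (0.125, 2.5) = 1.3125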

Q-network#

Our model will be a feed-forward neural network that takes in the current environment observation (the 4 values described above). It has two outputs, representing \(Q(s, \mathrm{left})\) and \(Q(s, \mathrm{right})\) (where \(s\) is the input to the network). In effect, the network is trying to predict the expected return of taking each action given the current input.

class DQN(nn.Module):

    def __init__(self, n_observations, n_actions):
        super(DQN, self).__init__()
        self.layer1 = nn.Linear(n_observations, 128)
        self.layer2 = nn.Linear(128, 128)
        self.layer3 = nn.Linear(128, n_actions)

    # Called with either one element to determine next action, or a batch
    # during optimization. Returns tensor([[left0exp,right0exp]...]).
    def forward(self, x):
        x = F.relu(self.layer1(x))
        x = F.relu(self.layer2(x))
        return self.layer3(x)

Training#

Hyperparameters and utilities#

This cell instantiates our model and its optimizer, and defines some utilities:

  • select_action - selects an action according to an epsilon-greedy policy. Simply put, we'll sometimes use our model to choose the action, and sometimes we'll just sample one uniformly at random. The probability of choosing a random action starts at EPS_START and decays exponentially towards EPS_END. EPS_DECAY controls the rate of the decay.

  • plot_durations - a helper for plotting the duration of episodes, along with an average over the last 100 episodes (the measure used in the official evaluations). The plot will appear underneath the cell containing the main training loop, and will update after every episode.

# BATCH_SIZE is the number of transitions sampled from the replay buffer
# GAMMA is the discount factor as mentioned in the previous section
# EPS_START is the starting value of epsilon
# EPS_END is the final value of epsilon
# EPS_DECAY controls the rate of exponential decay of epsilon, higher means a slower decay
# TAU is the update rate of the target network
# LR is the learning rate of the ``AdamW`` optimizer

BATCH_SIZE = 128
GAMMA = 0.99
EPS_START = 0.9
EPS_END = 0.01
EPS_DECAY = 2500
TAU = 0.005
LR = 3e-4


# Get number of actions from gym action space
n_actions = env.action_space.n
# Get the number of state observations
state, info = env.reset()
n_observations = len(state)

policy_net = DQN(n_observations, n_actions).to(device)
target_net = DQN(n_observations, n_actions).to(device)
target_net.load_state_dict(policy_net.state_dict())

optimizer = optim.AdamW(policy_net.parameters(), lr=LR, amsgrad=True)
memory = ReplayMemory(10000)


steps_done = 0


def select_action(state):
    global steps_done
    sample = random.random()
    eps_threshold = EPS_END + (EPS_START - EPS_END) * \
        math.exp(-1. * steps_done / EPS_DECAY)
    steps_done += 1
    if sample > eps_threshold:
        with torch.no_grad():
            # t.max(1) will return the largest column value of each row.
            # second column on max result is index of where max element was
            # found, so we pick action with the larger expected reward.
            return policy_net(state).max(1).indices.view(1, 1)
    else:
        return torch.tensor([[env.action_space.sample()]], device=device, dtype=torch.long)


episode_durations = []


def plot_durations(show_result=False):
    plt.figure(1)
    durations_t = torch.tensor(episode_durations, dtype=torch.float)
    if show_result:
        plt.title('Result')
    else:
        plt.clf()
        plt.title('Training...')
    plt.xlabel('Episode')
    plt.ylabel('Duration')
    plt.plot(durations_t.numpy())
    # Take 100 episode averages and plot them too
    if len(durations_t) >= 100:
        means = durations_t.unfold(0, 100, 1).mean(1).view(-1)
        means = torch.cat((torch.zeros(99), means))
        plt.plot(means.numpy())

    plt.pause(0.001)  # pause a bit so that plots are updated
    if is_ipython:
        if not show_result:
            display.display(plt.gcf())
            display.clear_output(wait=True)
        else:
            display.display(plt.gcf())

Training loop#

Finally, the code for training our model.

Here, you can find an optimize_model function that performs a single step of the optimization. It first samples a batch, concatenates all the tensors into a single one, computes \(Q(s_t, a_t)\) and \(V(s_{t+1}) = \max_a Q(s_{t+1}, a)\), and combines them into our loss. By definition we set \(V(s) = 0\) if \(s\) is a terminal state. We also use a target network to compute \(V(s_{t+1})\) for added stability. The target network is updated at every step with a soft update controlled by the previously defined hyperparameter TAU.

def optimize_model():
    if len(memory) < BATCH_SIZE:
        return
    transitions = memory.sample(BATCH_SIZE)
    # Transpose the batch (see https://stackoverflow.com/a/19343/3343043 for
    # detailed explanation). This converts batch-array of Transitions
    # to Transition of batch-arrays.
    batch = Transition(*zip(*transitions))

    # Compute a mask of non-final states and concatenate the batch elements
    # (a final state would've been the one after which simulation ended)
    non_final_mask = torch.tensor(tuple(map(lambda s: s is not None,
                                          batch.next_state)), device=device, dtype=torch.bool)
    non_final_next_states = torch.cat([s for s in batch.next_state
                                                if s is not None])
    state_batch = torch.cat(batch.state)
    action_batch = torch.cat(batch.action)
    reward_batch = torch.cat(batch.reward)

    # Compute Q(s_t, a) - the model computes Q(s_t), then we select the
    # columns of actions taken. These are the actions which would've been taken
    # for each batch state according to policy_net
    state_action_values = policy_net(state_batch).gather(1, action_batch)

    # Compute V(s_{t+1}) for all next states.
    # Expected values of actions for non_final_next_states are computed based
    # on the "older" target_net; selecting their best reward with max(1).values
    # This is merged based on the mask, such that we'll have either the expected
    # state value or 0 in case the state was final.
    next_state_values = torch.zeros(BATCH_SIZE, device=device)
    with torch.no_grad():
        next_state_values[non_final_mask] = target_net(non_final_next_states).max(1).values
    # Compute the expected Q values
    expected_state_action_values = (next_state_values * GAMMA) + reward_batch

    # Compute Huber loss
    criterion = nn.SmoothL1Loss()
    loss = criterion(state_action_values, expected_state_action_values.unsqueeze(1))

    # Optimize the model
    optimizer.zero_grad()
    loss.backward()
    # In-place gradient clipping
    torch.nn.utils.clip_grad_value_(policy_net.parameters(), 100)
    optimizer.step()

Below, you can find the main training loop. At the beginning we reset the environment and obtain the initial state tensor. Then, we sample an action, execute it, observe the next state and the reward (always 1), and optimize our model once. When the episode ends (our model fails), we restart the loop.

Below, num_episodes is set to 600 if a GPU is available; otherwise 50 episodes are scheduled so that training doesn't take too long. However, 50 episodes is not enough to observe good performance on CartPole. Within 600 training episodes you should see the model consistently reach 500 steps. Training RL agents can be a noisy process, so restarting training can produce better results if convergence is not observed.

if torch.cuda.is_available() or torch.backends.mps.is_available():
    num_episodes = 600
else:
    num_episodes = 50

for i_episode in range(num_episodes):
    # Initialize the environment and get its state
    state, info = env.reset()
    state = torch.tensor(state, dtype=torch.float32, device=device).unsqueeze(0)
    for t in count():
        action = select_action(state)
        observation, reward, terminated, truncated, _ = env.step(action.item())
        reward = torch.tensor([reward], device=device)
        done = terminated or truncated

        if terminated:
            next_state = None
        else:
            next_state = torch.tensor(observation, dtype=torch.float32, device=device).unsqueeze(0)

        # Store the transition in memory
        memory.push(state, action, next_state, reward)

        # Move to the next state
        state = next_state

        # Perform one step of the optimization (on the policy network)
        optimize_model()

        # Soft update of the target network's weights
        # θ′ ← τ θ + (1 −τ )θ′
        target_net_state_dict = target_net.state_dict()
        policy_net_state_dict = policy_net.state_dict()
        for key in policy_net_state_dict:
            target_net_state_dict[key] = policy_net_state_dict[key]*TAU + target_net_state_dict[key]*(1-TAU)
        target_net.load_state_dict(target_net_state_dict)

        if done:
            episode_durations.append(t + 1)
            plot_durations()
            break

print('Complete')
plot_durations(show_result=True)
plt.ioff()
plt.show()

Here is the diagram that illustrates the overall resulting data flow.

../_images/reinforcement_learning_diagram.jpg

Actions are chosen either randomly or based on a policy, getting the next step sample from the gym environment. We record the results in the replay memory and also run the optimization step on every iteration. Optimization picks a random batch from the replay memory to train the new policy. The "older" target_net is also used in optimization to compute the expected Q values. A soft update of its weights is performed at every step.

Total running time of the script: (2 minutes 51.673 seconds)