LogScalar¶
- class torchrl.trainers.LogScalar(key: NestedKey = ('next', 'reward'), logname: str | None = None, log_pbar: bool = False, include_std: bool = True, reduction: str = 'mean')[source]¶
Generic scalar logger hook for any tensor value in a batch.
This hook can log any scalar value from collected batch data, including rewards, action norms, done states, and any other metric. It automatically handles masking and computes the mean and standard deviation.
- Parameters:
key (NestedKey) – the key to look up the value in the input batch. Can be a string for simple keys or a tuple for nested keys. Defaults to torchrl.trainers.trainers.REWARD_KEY (= ("next", "reward")).
logname (str, optional) – the name under which the metric is logged. If None, the key is used as the log name. Defaults to None.
log_pbar (bool, optional) – if True, the value is also displayed on the progress bar. Defaults to False.
include_std (bool, optional) – if True, the standard deviation of the value is logged as well. Defaults to True.
reduction (str, optional) – the reduction to apply. One of "mean", "sum", "min", or "max". Defaults to "mean".
Examples
>>> # Log training rewards
>>> log_reward = LogScalar(("next", "reward"), "r_training", log_pbar=True)
>>> trainer.register_op("pre_steps_log", log_reward)
>>> # Log action norms
>>> log_action_norm = LogScalar("action", "action_norm", include_std=True)
>>> trainer.register_op("pre_steps_log", log_action_norm)
>>> # Log done states (as percentage)
>>> log_done = LogScalar(("next", "done"), "done_percentage", reduction="mean")
>>> trainer.register_op("pre_steps_log", log_done)
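To make the reduction and include_std behavior concrete, here is a minimal pure-Python sketch of the metrics such a hook would produce. This is an illustration only, not TorchRL's implementation: the real LogScalar operates on tensors in the collected batch and handles masking, whereas this stand-in reduces a flat list of scalars.

```python
import statistics

def log_scalar_sketch(values, logname="r_training", reduction="mean", include_std=True):
    """Illustrative stand-in for LogScalar: reduce a flat list of scalar
    values and return the dict of metrics that would be logged."""
    reducers = {
        "mean": lambda v: sum(v) / len(v),
        "sum": sum,
        "min": min,
        "max": max,
    }
    out = {logname: reducers[reduction](values)}
    if include_std and len(values) > 1:
        # sample standard deviation, logged under "<logname>_std"
        # (the exact suffix is an assumption for this sketch)
        out[f"{logname}_std"] = statistics.stdev(values)
    return out

# A batch of four per-step rewards reduced with the defaults:
print(log_scalar_sketch([1.0, 2.0, 3.0, 4.0]))
```

With reduction="max" and include_std=False, only the single reduced value is emitted, which matches how the reduction argument selects one aggregate per batch.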
- register(trainer: Trainer, name: Optional[str] = None)[source]¶
Registers the hook in the trainer at a default location.
- Parameters:
trainer (Trainer) – the trainer where the hook must be registered.
name (str) – the name of the hook.
Note
To register the hook at a location other than the default, use register_op().