torchrl.trainers.algorithms.configs.trainers.PPOTrainerConfig
- class torchrl.trainers.algorithms.configs.trainers.PPOTrainerConfig(collector: Any, total_frames: int, optim_steps_per_batch: int | None, loss_module: Any, optimizer: Any, logger: Any, save_trainer_file: Any, replay_buffer: Any, frame_skip: int = 1, clip_grad_norm: bool = True, clip_norm: float | None = None, progress_bar: bool = True, seed: int | None = None, save_trainer_interval: int = 10000, log_interval: int = 10000, create_env_fn: Any = None, actor_network: Any = None, critic_network: Any = None, num_epochs: int = 4, _target_: str = 'torchrl.trainers.algorithms.configs.trainers._make_ppo_trainer')[source]
Configuration class for the PPO (Proximal Policy Optimization) trainer.
This class defines the configuration parameters used to build a PPO trainer, including required fields and optional fields with sensible defaults.
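To illustrate which fields are required and which carry defaults, the sketch below mirrors the documented signature with a hypothetical stand-in dataclass (the real class lives in `torchrl.trainers.algorithms.configs.trainers` and is typically resolved via its `_target_` factory by a Hydra-style instantiation mechanism; the stand-in name and the placeholder `None` values for the required fields are illustrative only):

```python
# Hypothetical stand-in mirroring the documented PPOTrainerConfig signature.
# It is NOT the real torchrl class; it only demonstrates the split between
# required fields (no default) and optional fields (documented defaults).
from dataclasses import dataclass
from typing import Any, Optional


@dataclass
class PPOTrainerConfigSketch:
    # Required fields: no defaults in the documented signature.
    collector: Any
    total_frames: int
    optim_steps_per_batch: Optional[int]
    loss_module: Any
    optimizer: Any
    logger: Any
    save_trainer_file: Any
    replay_buffer: Any
    # Optional fields with the defaults shown in the signature.
    frame_skip: int = 1
    clip_grad_norm: bool = True
    clip_norm: Optional[float] = None
    progress_bar: bool = True
    seed: Optional[int] = None
    save_trainer_interval: int = 10000
    log_interval: int = 10000
    create_env_fn: Any = None
    actor_network: Any = None
    critic_network: Any = None
    num_epochs: int = 4
    _target_: str = "torchrl.trainers.algorithms.configs.trainers._make_ppo_trainer"


# Only the eight required fields must be supplied; everything else defaults.
cfg = PPOTrainerConfigSketch(
    collector=None,            # placeholder; a real config supplies a collector config
    total_frames=1_000_000,
    optim_steps_per_batch=None,
    loss_module=None,
    optimizer=None,
    logger=None,
    save_trainer_file=None,
    replay_buffer=None,
)
print(cfg.num_epochs)   # → 4
print(cfg.clip_grad_norm)  # → True
```

The `_target_` field is the conventional Hydra hook: it points the instantiation machinery at the `_make_ppo_trainer` factory, which turns the resolved config values into an actual PPO trainer.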