tensordict.nn.set_skip_existing
- class tensordict.nn.set_skip_existing(mode: bool | None = True, in_key_attr='in_keys', out_key_attr='out_keys')
Context manager for skipping existing nodes in a TensorDict graph.

When used as a context manager, it sets the value of skip_existing() to the indicated mode, allowing users to write methods that check the global value and execute their code accordingly.

When used as a method decorator, it checks the tensordict input keys: if the skip_existing() call returns True and all the output keys are already present, the method is skipped. This decorator is not suited for functions that do not follow the signature: def fun(self, tensordict, *args, **kwargs).

- Parameters:

- mode (bool, optional) – If True, existing entries in the graph will not be overwritten, unless they are only partially present, and skip_existing() will return True. If False, no check is performed. If None, the value of skip_existing() is left unchanged. This last option is intended only for decorating methods, allowing their behaviour to depend on the same class used as a context manager (see the examples below). Defaults to True.
- in_key_attr (str, optional) – the name of the input-key list attribute on the decorated module method. Defaults to in_keys.
- out_key_attr (str, optional) – the name of the output-key list attribute on the decorated module method. Defaults to out_keys.
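The combined context-manager/decorator mechanism described above can be sketched in plain Python. This is an illustrative re-implementation, not the actual tensordict source: the names `SkipExisting`, `skip_existing_flag` and `_SKIP_EXISTING` are invented for the sketch, and plain dicts stand in for TensorDicts.

```python
import functools

# Global flag playing the role of the library's skip_existing() mode.
# This is a sketch of the pattern, not tensordict's real implementation.
_SKIP_EXISTING = False


def skip_existing_flag():
    """Return the current global skip-existing mode."""
    return _SKIP_EXISTING


class SkipExisting:
    """Sketch of set_skip_existing: both a context manager and a decorator."""

    def __init__(self, mode=True, in_key_attr="in_keys", out_key_attr="out_keys"):
        self.mode = mode
        self.in_key_attr = in_key_attr
        self.out_key_attr = out_key_attr

    def __enter__(self):
        global _SKIP_EXISTING
        self._prev = _SKIP_EXISTING
        if self.mode is not None:  # mode=None leaves the global value unchanged
            _SKIP_EXISTING = self.mode
        return self

    def __exit__(self, *exc):
        global _SKIP_EXISTING
        _SKIP_EXISTING = self._prev
        return False

    def __call__(self, func):
        # Decorator branch: expects def fun(self, tensordict, *args, **kwargs).
        @functools.wraps(func)
        def wrapper(module, tensordict, *args, **kwargs):
            in_keys = getattr(module, self.in_key_attr)
            out_keys = getattr(module, self.out_key_attr)
            mode = skip_existing_flag() if self.mode is None else self.mode
            # Skip only if the mode is on, every output key already exists,
            # and no output key doubles as an input key (the deactivation
            # rule for modules that read and write the same entries).
            if (
                mode
                and all(k in tensordict for k in out_keys)
                and not any(k in in_keys for k in out_keys)
            ):
                return tensordict
            with self:
                return func(module, tensordict, *args, **kwargs)

        return wrapper
```

A method decorated with `SkipExisting(None)` defers entirely to whatever mode a surrounding context manager has set, which mirrors how mode=None is meant to be used with the real class.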
Examples

>>> with set_skip_existing():
...     if skip_existing():
...         print("True")
...     else:
...         print("False")
...
True
>>> print("calling from outside:", skip_existing())
calling from outside: False
This class can also be used as a decorator:
Examples

>>> import torch
>>> from tensordict import TensorDict
>>> from tensordict.nn import set_skip_existing, skip_existing, TensorDictModuleBase
>>> class MyModule(TensorDictModuleBase):
...     in_keys = []
...     out_keys = ["out"]
...     @set_skip_existing()
...     def forward(self, tensordict):
...         print("hello")
...         tensordict.set("out", torch.zeros(()))
...         return tensordict
...
>>> module = MyModule()
>>> module(TensorDict({"out": torch.zeros(())}, []))  # does not print anything
TensorDict(
    fields={
        out: Tensor(shape=torch.Size([]), device=cpu, dtype=torch.float32, is_shared=False)},
    batch_size=torch.Size([]),
    device=None,
    is_shared=False)
>>> module(TensorDict())  # prints hello
hello
TensorDict(
    fields={
        out: Tensor(shape=torch.Size([]), device=cpu, dtype=torch.float32, is_shared=False)},
    batch_size=torch.Size([]),
    device=None,
    is_shared=False)
Decorating a method with the mode set to None is useful whenever one wants to let the context manager handle the skipping from the outside:

Examples

>>> import torch
>>> from tensordict import TensorDict
>>> from tensordict.nn import set_skip_existing, skip_existing, TensorDictModuleBase
>>> class MyModule(TensorDictModuleBase):
...     in_keys = []
...     out_keys = ["out"]
...     @set_skip_existing(None)
...     def forward(self, tensordict):
...         print("hello")
...         tensordict.set("out", torch.zeros(()))
...         return tensordict
...
>>> module = MyModule()
>>> _ = module(TensorDict({"out": torch.zeros(())}, []))  # prints "hello"
hello
>>> with set_skip_existing(True):
...     _ = module(TensorDict({"out": torch.zeros(())}, []))  # no print
Note

To allow a module to have the same input and output keys without mistakenly ignoring subgraphs, @set_skip_existing(True) is deactivated whenever the output keys are also the input keys:

>>> class MyModule(TensorDictModuleBase):
...     in_keys = ["out"]
...     out_keys = ["out"]
...     @set_skip_existing()
...     def forward(self, tensordict):
...         print("calling the method!")
...         return tensordict
...
>>> module = MyModule()
>>> module(TensorDict({"out": torch.zeros(())}, []))  # prints "calling the method!"
calling the method!
TensorDict(
    fields={
        out: Tensor(shape=torch.Size([]), device=cpu, dtype=torch.float32, is_shared=False)},
    batch_size=torch.Size([]),
    device=None,
    is_shared=False)