PackedSequence#
- class torch.nn.utils.rnn.PackedSequence(data, batch_sizes=None, sorted_indices=None, unsorted_indices=None)[source]#
Holds the data and list of batch_sizes of a packed sequence. All RNN modules accept packed sequences as inputs.
Note
Instances of this class should never be created manually. They are meant to be instantiated by functions like pack_padded_sequence().
Batch sizes represent the number of elements at each sequence step in the batch, not the varying sequence lengths passed to pack_padded_sequence(). For instance, given data abc and x, the PackedSequence would contain data axbc with batch_sizes=[2,1,1].
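The mapping above can be reproduced directly. A minimal sketch (the integer tensors standing in for abc and x are arbitrary placeholders):

```python
import torch
from torch.nn.utils.rnn import pad_sequence, pack_padded_sequence

a = torch.tensor([1, 2, 3])  # stands in for "abc"
x = torch.tensor([4])        # stands in for "x"

# Pad to a common length, then pack with the true lengths.
padded = pad_sequence([a, x], batch_first=True)  # shape (2, 3)
packed = pack_padded_sequence(padded, lengths=[3, 1], batch_first=True)

print(packed.data)         # tensor([1, 4, 2, 3]) -- "a x b c" interleaved
print(packed.batch_sizes)  # tensor([2, 1, 1])
```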
- Variables
data (Tensor) – Tensor containing packed sequence
batch_sizes (Tensor) – Tensor of integers holding information about the batch size at each sequence step
sorted_indices (Tensor, optional) – Tensor of integers holding how this PackedSequence is constructed from sequences.
unsorted_indices (Tensor, optional) – Tensor of integers holding how to recover the original sequences with correct order (illustrated in the sketch below).
- Return type
Self
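A sketch of how the index fields are populated when sequences are packed out of length order (enforce_sorted=False); the values shown assume sorting by decreasing length:

```python
import torch
from torch.nn.utils.rnn import pad_sequence, pack_padded_sequence

seqs = [torch.tensor([1]), torch.tensor([2, 3, 4]), torch.tensor([5, 6])]  # lengths 1, 3, 2
padded = pad_sequence(seqs, batch_first=True)
packed = pack_padded_sequence(padded, lengths=[1, 3, 2], batch_first=True,
                              enforce_sorted=False)

print(packed.sorted_indices)    # tensor([1, 2, 0]) -- longest sequence first
print(packed.unsorted_indices)  # tensor([2, 0, 1]) -- inverse permutation
```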
Note
data can be on arbitrary device and of arbitrary dtype. sorted_indices and unsorted_indices must be torch.int64 tensors on the same device as data.
However, batch_sizes should always be a CPU torch.int64 tensor.
This invariant is maintained throughout the PackedSequence class, and by all functions that construct a PackedSequence in PyTorch (i.e., they only pass in tensors conforming to this constraint).
- count(value, /)#
Return number of occurrences of value.
- index(value, start=0, stop=9223372036854775807, /)#
Return first index of value.
Raises ValueError if the value is not present.
- to(dtype: dtype, non_blocking: bool = ..., copy: bool = ...) → Self [source]#
- to(device: Optional[Union[str, device, int]] = ..., dtype: Optional[dtype] = ..., non_blocking: bool = ..., copy: bool = ...) → Self
- to(other: Tensor, non_blocking: bool = ..., copy: bool = ...) → Self
Performs dtype and/or device conversion on self.data.
It has a similar signature as torch.Tensor.to(), except optional arguments like non_blocking and copy should be passed as kwargs, not args, or they will not apply to the index tensors.
Note
If the self.data Tensor already has the correct torch.dtype and torch.device, then self is returned. Otherwise, returns a copy with the desired configuration.
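A usage sketch: dtype conversion, the no-conversion case (which should return self per the note above), and a device move with non_blocking passed as a kwarg:

```python
import torch
from torch.nn.utils.rnn import pack_padded_sequence

packed = pack_padded_sequence(torch.randn(2, 3, 5), lengths=[3, 1],
                              batch_first=True)

as_double = packed.to(torch.float64)   # dtype conversion applied to self.data
print(as_double.data.dtype)            # torch.float64

same = packed.to(torch.float32)        # data already has this dtype and device,
print(same is packed)                  # so self should be returned: True

if torch.cuda.is_available():
    on_gpu = packed.to("cuda", non_blocking=True)  # non_blocking passed as a kwarg
    print(on_gpu.data.device)                      # cuda:0
```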