MemPool#

class torch.cuda.memory.MemPool(*args, **kwargs)[source]#

MemPool represents a pool of memory in a caching allocator. Currently, it’s just the ID of the pool object maintained in the CUDACachingAllocator.

Parameters

  • allocator (torch._C._cuda_CUDAAllocator, optional) – a torch._C._cuda_CUDAAllocator object that can be used to define how memory gets allocated in the pool. If allocator is None (default), memory allocation follows the default/current configuration of the CUDACachingAllocator.

  • use_on_oom (bool) – a bool that indicates if this pool can be used as a last resort when a memory allocation outside of the pool fails with Out Of Memory. This is False by default.

  • symmetric (bool) – a bool that indicates if this pool is symmetrical across ranks. This is False by default.
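A minimal sketch of constructing a pool and routing allocations to it, assuming PyTorch is installed with CUDA support and that the `torch.cuda.use_mem_pool` context manager is available; the function name `allocate_in_pool` is illustrative, not part of the API:

```python
def allocate_in_pool():
    """Create a MemPool and route one allocation to it (sketch)."""
    try:
        import torch
    except ImportError:
        return None  # PyTorch not installed
    if not torch.cuda.is_available():
        return None  # requires a CUDA-capable device

    # Default construction: no custom allocator, use_on_oom=False, symmetric=False.
    pool = torch.cuda.MemPool()

    # Allocations made inside this context are served from `pool`
    # instead of the default CUDACachingAllocator pool.
    with torch.cuda.use_mem_pool(pool):
        buf = torch.empty(1 << 20, device="cuda")  # 1 Mi elements routed to the pool

    return pool.id  # the pool's ID, a tuple of two ints


result = allocate_in_pool()
```

On a machine without a GPU the sketch simply returns None; on a CUDA machine it returns the pool's two-int ID.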

property allocator: Optional[_cuda_CUDAAllocator]#

Returns the allocator this MemPool routes allocations to.

property id: tuple[int, int]#

Returns the ID of this pool as a tuple of two ints.

property is_symmetric: bool#

Returns whether this pool is used for NCCL’s symmetric memory.

snapshot()[source]#

Return a snapshot of the CUDA memory allocator pool state across all devices.

Interpreting the output of this function requires familiarity with the memory allocator internals.

Note

For more details about GPU memory management, see Memory management.
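A hedged sketch of inspecting a pool's state with `snapshot()`, assuming a CUDA-enabled PyTorch build; the exact structure of the returned snapshot is allocator-internal, so the sketch only passes it through rather than asserting its shape:

```python
def pool_snapshot_demo():
    """Allocate into a MemPool, then take its snapshot (sketch)."""
    try:
        import torch
    except ImportError:
        return None  # PyTorch not installed
    if not torch.cuda.is_available():
        return None  # requires a CUDA-capable device

    pool = torch.cuda.MemPool()
    with torch.cuda.use_mem_pool(pool):
        torch.empty(1 << 20, device="cuda")  # give the pool something to track

    # Snapshot of the allocator pool state across all devices; its contents
    # mirror allocator internals (segments, blocks) and may vary by version.
    return pool.snapshot()


snap = pool_snapshot_demo()
```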

use_count()[source]#

Returns the reference count of this pool.

Return type

int
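A short sketch of reading the reference count, assuming a CUDA-enabled PyTorch build; the exact count is an implementation detail, but a live Python handle to the pool should keep it at one or more:

```python
def pool_refcount_demo():
    """Construct a MemPool and return its reference count (sketch)."""
    try:
        import torch
    except ImportError:
        return None  # PyTorch not installed
    if not torch.cuda.is_available():
        return None  # requires a CUDA-capable device

    pool = torch.cuda.MemPool()
    # The returned int counts references to the underlying pool object;
    # this handle alone should account for at least one.
    return pool.use_count()


count = pool_refcount_demo()
```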