conv3d
torch.ao.nn.quantized.functional.conv3d(input, weight, bias, stride=1, padding=0, dilation=1, groups=1, padding_mode='zeros', scale=1.0, zero_point=0, dtype=torch.quint8)[source]
Apply a 3D convolution over a quantized 3D input composed of several input planes.
See Conv3d for details and output shape.

Parameters
input – quantized input tensor of shape (minibatch, in_channels, iD, iH, iW)
weight – quantized filters of shape (out_channels, in_channels / groups, kD, kH, kW)
bias – non-quantized bias tensor of shape (out_channels). The tensor type must be torch.float.
stride – the stride of the convolving kernel. Can be a single number or a tuple (sD, sH, sW). Default: 1
padding – implicit paddings on both sides of the input. Can be a single number or a tuple (padD, padH, padW). Default: 0
dilation – the spacing between kernel elements. Can be a single number or a tuple (dD, dH, dW). Default: 1 (a sketch of the resulting output size follows this parameter list)
groups – split input into groups; in_channels should be divisible by the number of groups. Default: 1
padding_mode – the padding mode to use. Only "zeros" is supported for quantized convolution at the moment. Default: "zeros"
scale – quantization scale for the output. Default: 1.0
zero_point – quantization zero_point for the output. Default: 0
dtype – quantization data type to use. Default: torch.quint8
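
How stride, padding, and dilation combine to determine the output size is documented in Conv3d; the snippet below is only a minimal sketch of the standard convolution output-size arithmetic for a single spatial dimension (the helper out_size is illustrative and not part of this API):

>>> import math
>>> def out_size(in_size, kernel, stride=1, padding=0, dilation=1):
...     # standard convolution output-size formula, applied independently to D, H, and W
...     return math.floor((in_size + 2 * padding - dilation * (kernel - 1) - 1) / stride + 1)
...
>>> out_size(5, 3, padding=1)  # a 5x5x5 input with a 3x3x3 kernel and padding=1 keeps spatial size 5
5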
Examples
>>> import torch
>>> from torch.ao.nn.quantized import functional as qF
>>> filters = torch.randn(8, 4, 3, 3, 3, dtype=torch.float)
>>> inputs = torch.randn(1, 4, 5, 5, 5, dtype=torch.float)
>>> bias = torch.randn(8, dtype=torch.float)
>>>
>>> scale, zero_point = 1.0, 0
>>> dtype_inputs = torch.quint8
>>> dtype_filters = torch.qint8
>>>
>>> q_filters = torch.quantize_per_tensor(filters, scale, zero_point, dtype_filters)
>>> q_inputs = torch.quantize_per_tensor(inputs, scale, zero_point, dtype_inputs)
>>> qF.conv3d(q_inputs, q_filters, bias, padding=1, scale=scale, zero_point=zero_point)
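
The call returns a quantized tensor using the requested scale and zero_point. As a small follow-up sketch (assuming the example above has been run), the result can be inspected and dequantized back to float:

>>> out = qF.conv3d(q_inputs, q_filters, bias, padding=1, scale=scale, zero_point=zero_point)
>>> out.is_quantized, out.dtype
(True, torch.quint8)
>>> out.dequantize().shape
torch.Size([1, 8, 5, 5, 5])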