FakeQuantizedEmbedding
- class torchao.quantization.qat.FakeQuantizedEmbedding(num_embeddings: int, embedding_dim: int, padding_idx: Optional[int] = None, max_norm: Optional[float] = None, norm_type: float = 2.0, scale_grad_by_freq: bool = False, sparse: bool = False, weight_config: Optional[FakeQuantizeConfigBase] = None, *args, **kwargs)[source]
A general embedding layer with fake quantized weights.
The target dtype, granularity, quantization scheme, etc. are specified through a separate weight configuration (weight_config).
Example usage:
    import torch
    from torchao.quantization.qat import FakeQuantizedEmbedding, IntxFakeQuantizeConfig

    # Fake quantize the embedding weights to int4, symmetric, in groups of 8
    # (the group size must evenly divide the embedding dimension)
    weight_config = IntxFakeQuantizeConfig(
        dtype=torch.int4,
        group_size=8,
        is_symmetric=True,
    )
    # weight_config is passed by keyword: the third positional argument is padding_idx
    fq_embedding = FakeQuantizedEmbedding(5, 16, weight_config=weight_config)
    fq_embedding(torch.LongTensor([3]))
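
During QAT, the module is meant to stand in for torch.nn.Embedding: the lookup itself still runs in floating point, but the weights are passed through a fake quantizer (quantize then dequantize) before each forward pass. The sketch below illustrates this drop-in usage, assuming IntxFakeQuantizeConfig is importable from torchao.quantization.qat alongside this class; it copies the weights of an existing nn.Embedding and checks that the output shape is unchanged.

    import torch
    import torch.nn as nn
    from torchao.quantization.qat import FakeQuantizedEmbedding, IntxFakeQuantizeConfig

    weight_config = IntxFakeQuantizeConfig(dtype=torch.int4, group_size=8, is_symmetric=True)

    # A regular embedding and its fake-quantized counterpart with the same shape
    embedding = nn.Embedding(5, 16)
    fq_embedding = FakeQuantizedEmbedding(5, 16, weight_config=weight_config)

    # Start QAT from the pretrained floating point weights
    fq_embedding.weight.data.copy_(embedding.weight.data)

    # Same call signature and output shape as nn.Embedding; only the effective
    # weight values differ because they are fake quantized on the fly
    tokens = torch.LongTensor([[0, 1, 4]])
    out = fq_embedding(tokens)
    assert out.shape == embedding(tokens).shape == (1, 3, 16)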