
prepare

class torch.ao.quantization.prepare(model, inplace=False, allow_list=None, observer_non_leaf_module_list=None, prepare_custom_config_dict=None)[source]

Prepare a copy of the model for quantization calibration or quantization-aware training.

The quantization configuration should be assigned preemptively to individual submodules in the .qconfig attribute.

Observer or fake-quant modules will be attached to the model, and the qconfig will be propagated.

Parameters
  • model – input model to be modified in-place

  • inplace – carry out model transformations in-place; the original module is mutated

  • allow_list – list of quantizable modules

  • observer_non_leaf_module_list – list of non-leaf modules to which we want to add observers

  • prepare_custom_config_dict – customization configuration dictionary for the prepare function

# Example of prepare_custom_config_dict:
prepare_custom_config_dict = {
    # user will manually define the corresponding observed
    # module class which has a from_float class method that converts
    # float custom module to observed custom module
    "float_to_observed_custom_module_class": {CustomModule: ObservedCustomModule}
}
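
A minimal sketch of how prepare fits into the eager-mode post-training static quantization workflow. The SmallModel class, the calibration loop, and the "fbgemm" backend choice are illustrative assumptions and not part of this API's documentation.

import torch
import torch.nn as nn
from torch.ao.quantization import (
    QuantStub, DeQuantStub, get_default_qconfig, prepare, convert,
)

# Illustrative float model; QuantStub/DeQuantStub mark the region to quantize.
class SmallModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.quant = QuantStub()
        self.conv = nn.Conv2d(3, 8, 3)
        self.relu = nn.ReLU()
        self.dequant = DeQuantStub()

    def forward(self, x):
        x = self.quant(x)
        x = self.relu(self.conv(x))
        return self.dequant(x)

model_fp32 = SmallModel().eval()

# Assign the qconfig to the top-level module; prepare propagates it to submodules.
model_fp32.qconfig = get_default_qconfig("fbgemm")

# Attach observers; a prepared copy is returned because inplace defaults to False.
model_prepared = prepare(model_fp32)

# Calibrate with representative inputs so the observers record activation ranges
# (random tensors stand in for real calibration data here).
with torch.no_grad():
    for _ in range(8):
        model_prepared(torch.randn(1, 3, 32, 32))

# Convert the observed modules into quantized modules.
model_int8 = convert(model_prepared)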