QuantConfig

class paddle.quantization.QuantConfig(activation: paddle.quantization.factory.QuanterFactory, weight: paddle.quantization.factory.QuanterFactory) [source]

Configure how to quantize a model or a part of the model. It maps each layer to an instance of SingleLayerConfig according to the settings, and it provides diverse methods to set quantization strategies.

Parameters

activation (QuanterFactory) – The global quantizer used to quantize the activations.
weight (QuanterFactory) – The global quantizer used to quantize the weights.
Examples
>>> from paddle.quantization import QuantConfig
>>> from paddle.quantization.quanters import FakeQuanterWithAbsMaxObserver
>>> quanter = FakeQuanterWithAbsMaxObserver(moving_rate=0.9)
>>> q_config = QuantConfig(activation=quanter, weight=quanter)
>>> print(q_config)
Global config:
activation: FakeQuanterWithAbsMaxObserver(name=None,moving_rate=0.9,bit_length=8,dtype=float32)
weight: FakeQuanterWithAbsMaxObserver(name=None,moving_rate=0.9,bit_length=8,dtype=float32)
add_layer_config

add_layer_config(layer: Union[paddle.nn.layer.layers.Layer, list], activation: Optional[paddle.quantization.factory.QuanterFactory] = None, weight: Optional[paddle.quantization.factory.QuanterFactory] = None)

Set the quantization config by layer. It has the highest priority among all the setting methods.

Parameters

layer (Union[Layer, list]) – One or a list of layers.
activation (QuanterFactory) – Quanter used for activations.
weight (QuanterFactory) – Quanter used for weights.
Examples
>>> import paddle
>>> from paddle.nn import Linear
>>> from paddle.quantization import QuantConfig
>>> from paddle.quantization.quanters import FakeQuanterWithAbsMaxObserver

>>> class Model(paddle.nn.Layer):
...     def __init__(self):
...         super().__init__()
...         self.fc = Linear(576, 120)

>>> model = Model()
>>> quanter = FakeQuanterWithAbsMaxObserver(moving_rate=0.9)
>>> q_config = QuantConfig(activation=None, weight=None)
>>> q_config.add_layer_config([model.fc], activation=quanter, weight=quanter)
>>> print(q_config)
Global config:
None
Layer prefix config:
{'linear_0': <paddle.quantization.config.SingleLayerConfig object at 0x7fe41a680ee0>}
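Because add_layer_config takes precedence over the other setting methods, it can override a broader rule for one specific layer. The following is a minimal sketch of that interplay (the second quanter and its moving_rate value are illustrative assumptions): a type-level config covers every Linear layer, while the layer-level call wins for model.fc.

>>> import paddle
>>> from paddle.nn import Linear
>>> from paddle.quantization import QuantConfig
>>> from paddle.quantization.quanters import FakeQuanterWithAbsMaxObserver

>>> class Model(paddle.nn.Layer):
...     def __init__(self):
...         super().__init__()
...         self.fc = Linear(576, 120)

>>> model = Model()
>>> default_q = FakeQuanterWithAbsMaxObserver(moving_rate=0.9)
>>> special_q = FakeQuanterWithAbsMaxObserver(moving_rate=0.99)
>>> q_config = QuantConfig(activation=None, weight=None)
>>> # Type-level rule: applies to every Linear layer ...
>>> q_config.add_type_config([Linear], activation=default_q, weight=default_q)
>>> # ... but the layer-level rule takes priority for model.fc.
>>> q_config.add_layer_config([model.fc], activation=special_q, weight=special_q)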
add_name_config

add_name_config(layer_name: Union[str, list], activation: Optional[paddle.quantization.factory.QuanterFactory] = None, weight: Optional[paddle.quantization.factory.QuanterFactory] = None)

Set the quantization config by the full name of a layer. Its priority is lower than that of add_layer_config.

Parameters

layer_name (Union[str, list]) – One or a list of layers' full names.
activation (QuanterFactory) – Quanter used for activations.
weight (QuanterFactory) – Quanter used for weights.
Examples
>>> import paddle
>>> from paddle.nn import Linear
>>> from paddle.quantization import QuantConfig
>>> from paddle.quantization.quanters import FakeQuanterWithAbsMaxObserver

>>> class Model(paddle.nn.Layer):
...     def __init__(self):
...         super().__init__()
...         self.fc = Linear(576, 120)

>>> model = Model()
>>> quanter = FakeQuanterWithAbsMaxObserver(moving_rate=0.9)
>>> q_config = QuantConfig(activation=None, weight=None)
>>> q_config.add_name_config([model.fc.full_name()], activation=quanter, weight=quanter)
>>> print(q_config)
Global config:
None
Layer prefix config:
{'linear_0': <paddle.quantization.config.SingleLayerConfig object at 0x7fe41a680fd0>}
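When the target layers are not held as direct attributes, their full names can be collected programmatically. A minimal sketch, assuming paddle.nn.Layer's named_sublayers() iterator (not shown elsewhere on this page) to gather the names of all Linear sublayers:

>>> import paddle
>>> from paddle.nn import Linear
>>> from paddle.quantization import QuantConfig
>>> from paddle.quantization.quanters import FakeQuanterWithAbsMaxObserver

>>> class Model(paddle.nn.Layer):
...     def __init__(self):
...         super().__init__()
...         self.fc1 = Linear(576, 120)
...         self.fc2 = Linear(120, 10)

>>> model = Model()
>>> # Gather the full name of every Linear sublayer.
>>> names = [sub.full_name() for _, sub in model.named_sublayers()
...          if isinstance(sub, Linear)]
>>> quanter = FakeQuanterWithAbsMaxObserver(moving_rate=0.9)
>>> q_config = QuantConfig(activation=None, weight=None)
>>> q_config.add_name_config(names, activation=quanter, weight=quanter)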
add_type_config

add_type_config(layer_type: Union[type, list], activation: Optional[paddle.quantization.factory.QuanterFactory] = None, weight: Optional[paddle.quantization.factory.QuanterFactory] = None)

Set the quantization config by the type of layer. The layer_type should be a subclass of paddle.nn.Layer. Its priority is lower than that of add_layer_config and add_name_config.

Parameters

layer_type (Union[type, list]) – One or a list of layer types. Each should be a subclass of paddle.nn.Layer; Python's built-in function type() can be used to get the type of a layer.
activation (QuanterFactory) – Quanter used for activations.
weight (QuanterFactory) – Quanter used for weights.
Examples
>>> import paddle
>>> from paddle.nn import Linear
>>> from paddle.quantization import QuantConfig
>>> from paddle.quantization.quanters import FakeQuanterWithAbsMaxObserver

>>> class Model(paddle.nn.Layer):
...     def __init__(self):
...         super().__init__()
...         self.fc = Linear(576, 120)

>>> model = Model()
>>> quanter = FakeQuanterWithAbsMaxObserver(moving_rate=0.9)
>>> q_config = QuantConfig(activation=None, weight=None)
>>> q_config.add_type_config([Linear], activation=quanter, weight=quanter)
>>> print(q_config)
Global config:
None
Layer type config:
{<class 'paddle.nn.layer.common.Linear'>: <paddle.quantization.config.SingleLayerConfig object at 0x7fe41a680a60>}
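Because layer_type accepts a list, several layer types can share one config in a single call. A minimal sketch, assuming Conv2D layers should be quantized the same way as Linear ones:

>>> from paddle.nn import Conv2D, Linear
>>> from paddle.quantization import QuantConfig
>>> from paddle.quantization.quanters import FakeQuanterWithAbsMaxObserver

>>> quanter = FakeQuanterWithAbsMaxObserver(moving_rate=0.9)
>>> q_config = QuantConfig(activation=None, weight=None)
>>> # One call registers the same quanters for both layer types.
>>> q_config.add_type_config([Conv2D, Linear], activation=quanter, weight=quanter)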
add_qat_layer_mapping

add_qat_layer_mapping(source: type, target: type)

Add a rule for converting layers to simulated quantization layers before quantization-aware training: layers of type source will be converted to layers of type target. Both source and target should be subclasses of paddle.nn.Layer. A default mapping is provided by the property default_qat_layer_mapping.

Parameters

source (type) – The type of layers that will be converted.
target (type) – The type of layers that they will be converted to.
Examples
>>> import paddle
>>> from paddle.nn import Conv2D
>>> from paddle.quantization import QuantConfig
>>> from paddle.quantization.quanters import FakeQuanterWithAbsMaxObserver

>>> quanter = FakeQuanterWithAbsMaxObserver(moving_rate=0.9)
>>> q_config = QuantConfig(activation=None, weight=None)

>>> class CustomizedQuantedConv2D(paddle.nn.Layer):
...     def forward(self, x):
...         pass
...         # add some code for quantization simulation

>>> q_config.add_qat_layer_mapping(Conv2D, CustomizedQuantedConv2D)
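One possible flow pairs the mapping with a type config, so Conv2D layers both receive quanters and are swapped for the customized class when quantization-aware training begins. This is only a sketch: the CustomizedQuantedConv2D body below is a placeholder, not a working simulated-quantization layer.

>>> import paddle
>>> from paddle.nn import Conv2D
>>> from paddle.quantization import QuantConfig
>>> from paddle.quantization.quanters import FakeQuanterWithAbsMaxObserver

>>> class CustomizedQuantedConv2D(paddle.nn.Layer):
...     def forward(self, x):
...         # Placeholder: a real implementation would simulate
...         # quantization around the convolution.
...         return x

>>> quanter = FakeQuanterWithAbsMaxObserver(moving_rate=0.9)
>>> q_config = QuantConfig(activation=None, weight=None)
>>> # Quantize every Conv2D and replace it with the customized
>>> # layer before quantization-aware training.
>>> q_config.add_type_config([Conv2D], activation=quanter, weight=quanter)
>>> q_config.add_qat_layer_mapping(Conv2D, CustomizedQuantedConv2D)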
add_customized_leaf

add_customized_leaf(layer_type: type)

Declare a customized layer type as a leaf of the model for quantization. A leaf layer is quantized as a single layer; its sublayers will not be quantized individually.

Parameters

layer_type (type) – The type of layer to be declared as a leaf.
Examples
>>> from paddle.nn import Sequential
>>> from paddle.quantization import QuantConfig
>>> from paddle.quantization.quanters import FakeQuanterWithAbsMaxObserver
>>> q_config = QuantConfig(activation=None, weight=None)
>>> q_config.add_customized_leaf(Sequential)
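The same call works for user-defined blocks. A minimal sketch, assuming a hypothetical AttentionBlock whose internal projections should be handled as one unit:

>>> import paddle
>>> from paddle.nn import Linear
>>> from paddle.quantization import QuantConfig
>>> from paddle.quantization.quanters import FakeQuanterWithAbsMaxObserver

>>> class AttentionBlock(paddle.nn.Layer):
...     def __init__(self):
...         super().__init__()
...         self.q_proj = Linear(64, 64)
...         self.k_proj = Linear(64, 64)

>>> quanter = FakeQuanterWithAbsMaxObserver(moving_rate=0.9)
>>> q_config = QuantConfig(activation=quanter, weight=quanter)
>>> # Each AttentionBlock is quantized as one leaf; q_proj and k_proj
>>> # are not visited individually.
>>> q_config.add_customized_leaf(AttentionBlock)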
property customized_leaves

Get all the customized leaves.
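A minimal usage sketch, continuing the Sequential example above (output omitted; it is expected to contain the registered types):

>>> from paddle.nn import Sequential
>>> from paddle.quantization import QuantConfig

>>> q_config = QuantConfig(activation=None, weight=None)
>>> q_config.add_customized_leaf(Sequential)
>>> print(q_config.customized_leaves)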
details

details() → str

Get the formatted details of the current config.
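A minimal usage sketch (the exact string depends on the configured quanters, so the output is omitted):

>>> from paddle.quantization import QuantConfig
>>> from paddle.quantization.quanters import FakeQuanterWithAbsMaxObserver

>>> quanter = FakeQuanterWithAbsMaxObserver(moving_rate=0.9)
>>> q_config = QuantConfig(activation=quanter, weight=quanter)
>>> print(q_config.details())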