hard_swish
- paddle.fluid.layers.nn.hard_swish(x, threshold=6.0, scale=6.0, offset=3.0, name=None)
This operator implements the hard_swish activation function. Hard_swish was proposed in MobileNetV3 and offers better computational stability and efficiency than the swish function. For more details, please refer to: https://arxiv.org/pdf/1905.02244.pdf
The formula is as follows:
\[out = \frac{x \cdot \min(\max(0,\ x + offset),\ threshold)}{scale}\]
In the above equation, threshold and scale should be positive, while offset can be either positive or negative. It is recommended to use the default parameters.
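As a quick check of the formula, the following NumPy sketch applies hard_swish elementwise with the default parameters. It is not the Paddle kernel itself, and hard_swish_ref is a hypothetical helper name used only for illustration:

import numpy as np

def hard_swish_ref(x, threshold=6.0, scale=6.0, offset=3.0):
    # out = x * min(max(0, x + offset), threshold) / scale
    return x * np.clip(x + offset, 0.0, threshold) / scale

print(hard_swish_ref(np.array([1., 2., 3., 4.])))
# [0.66666667 1.66666667 3.         4.        ]

For example, x = 1 gives 1 * min(max(0, 1 + 3), 6) / 6 = 4/6 ≈ 0.6667, which matches the first value printed in the Examples section below.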
- Parameters
x (Variable) – Input feature, multi-dimensional Tensor. The data type should be float32 or float64.
threshold (float, optional) – The threshold of the hard_swish operator (the upper bound of the inner relu6-like clipping). Default: 6.0
scale (float, optional) – The scale factor. Default: 6.0
offset (float, optional) – The offset factor. Default: 3.0
name (str, optional) – The default value is None. Normally there is no need for the user to set this property. For more information, please refer to Name.
- Returns
The output tensor with the same shape and data type as the input.
- Return type
Variable
Examples:
import paddle
import paddle.fluid as fluid
import numpy as np

paddle.enable_static()

DATATYPE = 'float32'

# Build input data of shape [1, 1, 4] with values 1..4.
x_data = np.array([i for i in range(1, 5)]).reshape([1, 1, 4]).astype(DATATYPE)

x = fluid.data(name="x", shape=[None, 1, 4], dtype=DATATYPE)
y = fluid.layers.hard_swish(x)

place = fluid.CPUPlace()
# place = fluid.CUDAPlace(0)  # uncomment to run on GPU
exe = fluid.Executor(place)
out, = exe.run(feed={'x': x_data}, fetch_list=[y.name])
print(out)  # [[0.66666667, 1.66666667, 3., 4.]]