thresholded_relu¶
paddle.fluid.layers.ops.thresholded_relu(x, threshold=None) [source]
-
alias_main: paddle.nn.functional.thresholded_relu
alias: paddle.nn.functional.thresholded_relu, paddle.nn.functional.activation.thresholded_relu
old_api: paddle.fluid.layers.thresholded_relu
Thresholded ReLU activation operator.
Equation:

\[out = \begin{cases} x, & \text{if } x > threshold \\ 0, & \text{otherwise} \end{cases}\]
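The piecewise definition above can be sketched in plain NumPy (a minimal illustration, not the Paddle implementation; the helper name is ours):

```python
import numpy as np

def thresholded_relu(x, threshold=1.0):
    # Elementwise: keep x where x > threshold, zero elsewhere.
    return np.where(x > threshold, x, 0.0).astype(x.dtype)

x = np.array([-1.0, 0.5, 1.0, 2.0], dtype=np.float32)
print(thresholded_relu(x))        # default threshold 1.0 -> [0. 0. 0. 2.]
print(thresholded_relu(x, 0.1))   # -> [0.  0.5 1.  2. ]
```

Note that the comparison is strict (`x > threshold`), so an element exactly equal to the threshold is zeroed.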
Parameters

x (Variable) – The input of the Thresholded ReLU op; a Tensor or LoDTensor with dtype float32 or float64.
threshold (float, optional) – The threshold value. If not set, the threshold in the equation defaults to 1.0.
Returns

The output of the Thresholded ReLU op; a Tensor or LoDTensor with the same dtype and shape as the input.
Return type

Variable
Examples
# declarative mode
import numpy as np
from paddle import fluid

x = fluid.data(name="x", shape=(-1, 3), dtype="float32")
y = fluid.layers.thresholded_relu(x, threshold=0.1)

place = fluid.CPUPlace()
exe = fluid.Executor(place)
start = fluid.default_startup_program()
main = fluid.default_main_program()

data = np.random.randn(2, 3).astype("float32")
exe.run(start)
y_np, = exe.run(main, feed={"x": data}, fetch_list=[y])

data
# array([[ 0.21134382, -1.1805999 ,  0.32876605],
#        [-1.2210793 , -0.7365624 ,  1.0013918 ]], dtype=float32)
y_np
# array([[ 0.21134382, -0.        ,  0.32876605],
#        [-0.        , -0.        ,  1.0013918 ]], dtype=float32)
# imperative mode
import numpy as np
from paddle import fluid
import paddle.fluid.dygraph as dg

data = np.random.randn(2, 3).astype("float32")
place = fluid.CPUPlace()
with dg.guard(place) as g:
    x = dg.to_variable(data)
    y = fluid.layers.thresholded_relu(x, threshold=0.1)
    y_np = y.numpy()

data
# array([[ 0.21134382, -1.1805999 ,  0.32876605],
#        [-1.2210793 , -0.7365624 ,  1.0013918 ]], dtype=float32)
y_np
# array([[ 0.21134382, -0.        ,  0.32876605],
#        [-0.        , -0.        ,  1.0013918 ]], dtype=float32)