gelu
paddle.fluid.layers.ops.gelu(x, approximate=False)
GeLU Activation Operator. For more details, see [Gaussian Error Linear Units](https://arxiv.org/abs/1606.08415).
Equation

If approximate is True:

$$out = 0.5 x \left(1 + \tanh\left(\sqrt{\frac{2}{\pi}}\left(x + 0.044715 x^{3}\right)\right)\right)$$

otherwise:

$$out = 0.5 x \left(1 + \operatorname{erf}\left(\frac{x}{\sqrt{2}}\right)\right)$$
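As a quick, framework-free sanity check on the two formulas, here is a minimal NumPy sketch (not part of the Paddle API; the function names are illustrative) that evaluates both forms and shows they nearly coincide:

```python
# Standalone check of the two GELU formulas (NumPy only; not the Paddle implementation).
from math import erf

import numpy as np

def gelu_exact(x):
    # Exact form: 0.5 * x * (1 + erf(x / sqrt(2))). math.erf is scalar, so vectorize it.
    return 0.5 * x * (1 + np.vectorize(erf)(x / np.sqrt(2.0)))

def gelu_tanh(x):
    # Tanh approximation: 0.5 * x * (1 + tanh(sqrt(2/pi) * (x + 0.044715 * x**3))).
    return 0.5 * x * (1 + np.tanh(np.sqrt(2.0 / np.pi) * (x + 0.044715 * x ** 3)))

x = np.linspace(-3.0, 3.0, 7)
print(np.max(np.abs(gelu_exact(x) - gelu_tanh(x))))  # small; the tanh form tracks the exact one closely
```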
Parameters

- x (Variable) – The input of the GeLU op, a Tensor or LoDTensor with dtype float32 or float64.
- approximate (bool, optional) – Whether to use the tanh approximation above instead of the exact erf formula. Default: False.
Returns

The output of the GeLU op, a Tensor or LoDTensor with the same dtype and shape as the input.
Return type

Variable
Examples
```python
# declarative mode
import numpy as np
from paddle import fluid

x = fluid.data(name="x", shape=(-1, 3), dtype="float32")
y = fluid.layers.gelu(x)

place = fluid.CPUPlace()
exe = fluid.Executor(place)
start = fluid.default_startup_program()
main = fluid.default_main_program()

data = np.random.randn(2, 3).astype("float32")
exe.run(start)
y_np, = exe.run(main, feed={"x": data}, fetch_list=[y])

data
# array([[ 0.87165993, -1.0541513 , -0.37214822],
#        [ 0.15647964,  0.32496083,  0.33045998]], dtype=float32)
y_np
# array([[ 0.70456535, -0.15380788, -0.13207214],
#        [ 0.08796856,  0.20387867,  0.2080159 ]], dtype=float32)
```
```python
# imperative mode
import numpy as np
from paddle import fluid
import paddle.fluid.dygraph as dg

data = np.random.randn(2, 3).astype("float32")
place = fluid.CPUPlace()
with dg.guard(place):
    x = dg.to_variable(data)
    y = fluid.layers.gelu(x)
    y_np = y.numpy()

data
# array([[ 0.87165993, -1.0541513 , -0.37214822],
#        [ 0.15647964,  0.32496083,  0.33045998]], dtype=float32)
y_np
# array([[ 0.70456535, -0.15380788, -0.13207214],
#        [ 0.08796856,  0.20387867,  0.2080159 ]], dtype=float32)
```
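Both examples use the default exact (erf) formula. Per the signature above, passing approximate=True selects the tanh form instead; a minimal imperative-mode sketch (outputs will differ slightly from the erf results shown above):

```python
# imperative mode, tanh-approximate GELU
import numpy as np
from paddle import fluid
import paddle.fluid.dygraph as dg

data = np.random.randn(2, 3).astype("float32")
with dg.guard(fluid.CPUPlace()):
    x = dg.to_variable(data)
    y = fluid.layers.gelu(x, approximate=True)  # tanh-based formula from the equation above
    y_np = y.numpy()
```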