Normal¶
- class paddle.fluid.layers.distributions.Normal(loc, scale) [source]
-
The Normal distribution with location loc and scale parameters.
Mathematical details
The probability density function (pdf) is,
\[pdf(x; \mu, \sigma) = \frac{1}{Z}e^{\frac{-0.5 (x - \mu)^2}{\sigma^2}}\]
\[Z = (2 \pi \sigma^2)^{0.5}\]
In the above equation:
\(loc = \mu\): is the mean.
\(scale = \sigma\): is the std.
\(Z\): is the normalization constant.
- Parameters
-
loc (float|list|numpy.ndarray|Variable) – The mean of the normal distribution. The data type is float32.
scale (float|list|numpy.ndarray|Variable) – The std of the normal distribution. The data type is float32.
Examples
import numpy as np
from paddle.fluid import layers
from paddle.fluid.layers import Normal

# Define a single scalar Normal distribution.
dist = Normal(loc=0., scale=3.)

# Define a batch of two scalar valued Normals.
# The first has mean 1 and standard deviation 11, the second 2 and 22.
dist = Normal(loc=[1., 2.], scale=[11., 22.])

# Get 3 samples, returning a 3 x 2 tensor.
dist.sample([3])

# Define a batch of two scalar valued Normals.
# Both have mean 1, but different standard deviations.
dist = Normal(loc=1., scale=[11., 22.])

# Complete example
value_npdata = np.array([0.8], dtype="float32")
value_tensor = layers.create_tensor(dtype="float32")
layers.assign(value_npdata, value_tensor)

normal_a = Normal([0.], [1.])
normal_b = Normal([0.5], [2.])

sample = normal_a.sample([2])          # a random tensor created by the normal distribution with shape: [2, 1]
entropy = normal_a.entropy()           # [1.4189385] with shape: [1]
lp = normal_a.log_prob(value_tensor)   # [-1.2389386] with shape: [1]
kl = normal_a.kl_divergence(normal_b)  # [0.34939718] with shape: [1]
-
sample(shape, seed=0)¶
-
Generate samples of the specified shape.
- Parameters
-
shape (list) – 1D int32. Shape of the generated samples.
seed (int) – Python integer seed used for random sampling. Default is 0.
- Returns
-
A tensor whose shape is the given shape prepended to the distribution's batch dimensions. The data type is float32.
- Return type
-
Variable
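The snippet below is a minimal sketch (not part of the original docs) of fetching the sampled values under the static-graph fluid API, assuming a CPU place and the default startup/main programs:

import paddle.fluid as fluid
from paddle.fluid.layers import Normal

# Batch of two Normals; sample([3]) prepends the sample shape, giving [3, 2].
dist = Normal(loc=[1., 2.], scale=[11., 22.])
samples = dist.sample([3], seed=0)

exe = fluid.Executor(fluid.CPUPlace())
exe.run(fluid.default_startup_program())
out, = exe.run(fetch_list=[samples])
print(out.shape)  # (3, 2)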
-
entropy()¶
-
Shannon entropy in nats.
- Returns
-
Shannon entropy of the normal distribution. The data type is float32.
- Return type
-
Variable
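For reference, the Shannon entropy of a normal distribution has the standard closed form below (not stated in the original docstring); with \(\sigma = 1\) it evaluates to the 1.4189385 shown in the example above.
\[entropy(\sigma) = 0.5 \log(2 \pi e \sigma^2) = 0.5 + 0.5 \log(2 \pi \sigma^2)\]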
-
log_prob(value)¶
-
Log probability density/mass function.
- Parameters
-
value (Variable) – The input tensor.
- Returns
-
The log probability. The data type is the same as that of value.
- Return type
-
Variable
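As a worked check against the example above, the log density is just the logarithm of the pdf given earlier:
\[\log pdf(x; \mu, \sigma) = -0.5 \log(2 \pi \sigma^2) - \frac{(x - \mu)^2}{2 \sigma^2}\]
For value = 0.8 with loc = 0. and scale = 1. this gives -0.9189385 - 0.32 = -1.2389385, matching the example output up to float32 rounding.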
-
kl_divergence(other)¶
-
The KL-divergence between two normal distributions.
- Parameters
-
other (Normal) – instance of Normal.
- Returns
-
The KL-divergence between the two normal distributions. The data type is float32.
- Return type
-
Variable
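For reference, the closed-form KL-divergence between two normals (a standard result, not stated in this docstring), writing this distribution as \(p = N(\mu_p, \sigma_p)\) and other as \(q = N(\mu_q, \sigma_q)\), is:
\[KL(p \| q) = \log \frac{\sigma_q}{\sigma_p} + \frac{\sigma_p^2 + (\mu_p - \mu_q)^2}{2 \sigma_q^2} - 0.5\]
For normal_a = Normal([0.], [1.]) and normal_b = Normal([0.5], [2.]) this is \(\log 2 + 1.25 / 8 - 0.5 \approx 0.3494\), consistent with the 0.34939718 in the example above.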