Normal

class paddle.distribution.Normal(loc, scale, name=None) [source]

The Normal distribution with location parameter loc and scale parameter scale.

Mathematical details

The probability density function (pdf) is

\[pdf(x; \mu, \sigma) = \frac{1}{Z} e^{\frac{-0.5 (x - \mu)^2}{\sigma^2}}\]
\[Z = (2 \pi \sigma^2)^{0.5}\]

In the above equation:

  • \(loc = \mu\): is the mean.

  • \(scale = \sigma\): is the standard deviation (std).

  • \(Z\): is the normalization constant.
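The pdf above can be checked numerically with a minimal pure-Python sketch (no paddle required; `normal_pdf` is a hypothetical helper name, not part of the API):

```python
import math

def normal_pdf(x, mu, sigma):
    # Z = (2 * pi * sigma^2)^0.5 is the normalization constant
    z = (2 * math.pi * sigma ** 2) ** 0.5
    return math.exp(-0.5 * (x - mu) ** 2 / sigma ** 2) / z

# Standard normal (mu=0, sigma=1) evaluated at x=0.8
print(normal_pdf(0.8, 0.0, 1.0))  # ~0.28969154
```

This matches the `probs` output shown in the complete example below.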

Parameters
  • loc (int|float|list|tuple|numpy.ndarray|Tensor) – The mean of the normal distribution. The data type is int, float, list, tuple, numpy.ndarray or Tensor.

  • scale (int|float|list|tuple|numpy.ndarray|Tensor) – The std of the normal distribution. The data type is int, float, list, tuple, numpy.ndarray or Tensor.

  • name (str, optional) – Name for the operation (optional, default is None). For more information, please refer to Name.

Examples

import paddle
from paddle.distribution import Normal

# Define a single scalar Normal distribution.
dist = Normal(loc=0., scale=3.)
# Define a batch of two scalar valued Normals.
# The first has mean 1 and standard deviation 11, the second 2 and 22.
dist = Normal(loc=[1., 2.], scale=[11., 22.])
# Get 3 samples, returning a 3 x 2 tensor.
dist.sample([3])

# Define a batch of two scalar valued Normals.
# Both have mean 1, but different standard deviations.
dist = Normal(loc=1., scale=[11., 22.])

# Complete example
value_tensor = paddle.to_tensor([0.8], dtype="float32")

normal_a = Normal([0.], [1.])
normal_b = Normal([0.5], [2.])
sample = normal_a.sample([2])
# a random tensor created by normal distribution with shape: [2, 1]
entropy = normal_a.entropy()
# [1.4189385] with shape: [1]
lp = normal_a.log_prob(value_tensor)
# [-1.2389386] with shape: [1]
p = normal_a.probs(value_tensor)
# [0.28969154] with shape: [1]
kl = normal_a.kl_divergence(normal_b)
# [0.34939718] with shape: [1]
sample(shape, seed=0)

Generate samples of the specified shape.

Parameters
  • shape (list) – 1D int32. Shape of the generated samples.

  • seed (int) – Python integer number.

Returns

A tensor whose shape is shape prepended to the distribution's batch shape. The data type is float32.

Return type

Tensor
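The shape semantics can be sketched with NumPy rather than paddle (an assumption for illustration: the sampling shape is prepended to the batch shape, as the return description states):

```python
import numpy as np

rng = np.random.default_rng(0)
# Batch of two Normals: means [1, 2], stds [11, 22] -> batch shape [2]
loc = np.array([1.0, 2.0])
scale = np.array([11.0, 22.0])
# Drawing with shape [3] prepends that dimension to the batch shape
samples = rng.normal(loc=loc, scale=scale, size=(3, 2))
print(samples.shape)  # (3, 2)
```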

entropy()

Shannon entropy in nats.

The entropy is

\[entropy(\sigma) = 0.5 \log (2 \pi e \sigma^2)\]

In the above equation:

  • \(scale = \sigma\): is the std.

Returns

Shannon entropy of the normal distribution. The data type is float32.

Return type

Tensor
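The entropy formula above can be verified directly (a pure-Python sketch; `normal_entropy` is a hypothetical helper, not part of the API):

```python
import math

def normal_entropy(sigma):
    # entropy(sigma) = 0.5 * log(2 * pi * e * sigma^2), in nats
    return 0.5 * math.log(2 * math.pi * math.e * sigma ** 2)

# Standard normal (sigma=1), matching the [1.4189385] in the example above
print(normal_entropy(1.0))  # ~1.4189385
```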

log_prob(value)

Log probability density/mass function.

Parameters

value (Tensor) – The input tensor.

Returns

Log probability. The data type is the same as value.

Return type

Tensor

probs(value)

Probability density/mass function.

Parameters

value (Tensor) – The input tensor.

Returns

Probability. The data type is the same as value.

Return type

Tensor

kl_divergence(other)

The KL-divergence between two normal distributions.

The KL divergence is

\[KL\_divergence(\mu_0, \sigma_0; \mu_1, \sigma_1) = 0.5 (ratio^2 + (\frac{diff}{\sigma_1})^2 - 1 - 2 \ln {ratio})\]
\[ratio = \frac{\sigma_0}{\sigma_1}\]
\[diff = \mu_1 - \mu_0\]

In the above equation:

  • \(loc = \mu_0\): is the mean of the current Normal distribution.

  • \(scale = \sigma_0\): is the std of the current Normal distribution.

  • \(loc = \mu_1\): is the mean of the other Normal distribution.

  • \(scale = \sigma_1\): is the std of the other Normal distribution.

  • \(ratio\): is the ratio of scales.

  • \(diff\): is the difference between means.

Parameters

other (Normal) – An instance of Normal.

Returns

The KL divergence between the two normal distributions. The data type is float32.

Return type

Tensor
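The closed-form KL formula above can be checked with a pure-Python sketch (`normal_kl` is a hypothetical helper, not part of the API):

```python
import math

def normal_kl(mu0, sigma0, mu1, sigma1):
    # KL(N(mu0, sigma0) || N(mu1, sigma1))
    ratio = sigma0 / sigma1
    diff = mu1 - mu0
    return 0.5 * (ratio ** 2 + (diff / sigma1) ** 2 - 1 - 2 * math.log(ratio))

# KL between Normal([0.], [1.]) and Normal([0.5], [2.]) from the complete example
print(normal_kl(0.0, 1.0, 0.5, 2.0))  # ~0.3493972
```

This agrees with the [0.34939718] output shown in the complete example above.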