MultivariateNormal
- class paddle.distribution.MultivariateNormal(loc, covariance_matrix=None, precision_matrix=None, scale_tril=None)
The Multivariate Normal distribution is a multivariate continuous distribution defined on the real set, with parameter loc and any one of the following parameters characterizing the covariance: covariance_matrix, precision_matrix, scale_tril.
Mathematical details
The probability density function (pdf) is
\[p(X; \mu, \Sigma) = \frac{1}{\sqrt{(2\pi)^k |\Sigma|}} \exp\left(-\frac{1}{2}(X - \mu)^{\intercal} \Sigma^{-1} (X - \mu)\right)\]
In the above equation:
\(X\): is a k-dim random vector.
\(loc = \mu\): is the k-dim mean vector.
\(covariance_matrix = \Sigma\): is the k-by-k covariance matrix.
- Parameters
loc (int|float|Tensor) – The mean of the Multivariate Normal distribution. If the input data type is int or float, loc will be converted to a 1-D Tensor with Paddle's global default dtype.
covariance_matrix (Tensor) – The covariance matrix of the Multivariate Normal distribution. The data type of covariance_matrix will be converted to match the type of loc.
precision_matrix (Tensor) – The inverse of the covariance matrix. The data type of precision_matrix will be converted to match the type of loc.
scale_tril (Tensor) – The Cholesky decomposition (lower triangular matrix) of the covariance matrix. The data type of scale_tril will be converted to match the type of loc.
Examples
>>> import paddle
>>> from paddle.distribution import MultivariateNormal

>>> paddle.set_device("cpu")
>>> paddle.seed(100)
>>> rv = MultivariateNormal(loc=paddle.to_tensor([2., 5.]), covariance_matrix=paddle.to_tensor([[2., 1.], [1., 2.]]))

>>> print(rv.sample([3, 2]))
Tensor(shape=[3, 2, 2], dtype=float32, place=Place(cpu), stop_gradient=True,
       [[[-0.00339603,  4.31556797],
         [ 2.01385283,  4.63553190]],
        [[ 0.10132277,  3.11323833],
         [ 2.37435842,  3.56635118]],
        [[ 2.89701366,  5.10602522],
         [-0.46329355,  3.14768648]]])

>>> print(rv.mean)
Tensor(shape=[2], dtype=float32, place=Place(cpu), stop_gradient=True,
       [2., 5.])

>>> print(rv.variance)
Tensor(shape=[2], dtype=float32, place=Place(cpu), stop_gradient=True,
       [1.99999988, 2.        ])

>>> print(rv.entropy())
Tensor(shape=[], dtype=float32, place=Place(cpu), stop_gradient=True,
       3.38718319)

>>> rv1 = MultivariateNormal(loc=paddle.to_tensor([2., 5.]), covariance_matrix=paddle.to_tensor([[2., 1.], [1., 2.]]))
>>> rv2 = MultivariateNormal(loc=paddle.to_tensor([-1., 3.]), covariance_matrix=paddle.to_tensor([[3., 2.], [2., 3.]]))
>>> print(rv1.kl_divergence(rv2))
Tensor(shape=[], dtype=float32, place=Place(cpu), stop_gradient=True,
       1.55541301)
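As a further sketch (not part of the original examples), the same distribution can be constructed from any one of the three covariance parameterizations; the Cholesky factor and matrix inverse below are derived with standard paddle.linalg operations purely for illustration:

>>> import paddle
>>> from paddle.distribution import MultivariateNormal
>>> loc = paddle.to_tensor([2., 5.])
>>> cov = paddle.to_tensor([[2., 1.], [1., 2.]])
>>> # Pass exactly one of the three matrices; all three describe the same distribution.
>>> rv_cov = MultivariateNormal(loc=loc, covariance_matrix=cov)
>>> rv_tril = MultivariateNormal(loc=loc, scale_tril=paddle.linalg.cholesky(cov))
>>> rv_prec = MultivariateNormal(loc=loc, precision_matrix=paddle.linalg.inv(cov))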
- property mean
Mean of the Multivariate Normal distribution.
- Returns
mean value.
- Return type
Tensor
- property variance
Variance of the Multivariate Normal distribution.
- Returns
variance value.
- Return type
Tensor
sample(shape=())
Generate Multivariate Normal samples of the specified shape. The final shape would be sample_shape + batch_shape + event_shape.
- Parameters
shape (Sequence[int], optional) – Prepended shape of the generated samples.
- Returns
Tensor, sampled data with shape sample_shape + batch_shape + event_shape. The data type is the same as self.loc.
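A short sketch of the shape rule, reusing the example distribution above (empty batch_shape and an event_shape of 2):

>>> import paddle
>>> from paddle.distribution import MultivariateNormal
>>> rv = MultivariateNormal(loc=paddle.to_tensor([2., 5.]),
...                         covariance_matrix=paddle.to_tensor([[2., 1.], [1., 2.]]))
>>> s = rv.sample([3])   # sample_shape [3] + batch_shape [] + event_shape [2] -> samples of shape [3, 2]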
rsample(shape=())
Generate Multivariate Normal samples of the specified shape. The final shape would be sample_shape + batch_shape + event_shape.
- Parameters
shape (Sequence[int], optional) – Prepended shape of the generated samples.
- Returns
Tensor, sampled data with shape sample_shape + batch_shape + event_shape. The data type is the same as self.loc.
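rsample is conventionally the reparameterized sampler, so gradients should be able to flow from the samples back to the distribution parameters; the sketch below assumes that behavior and uses the usual Paddle autograd workflow, which is not spelled out in this reference:

>>> import paddle
>>> from paddle.distribution import MultivariateNormal
>>> loc = paddle.to_tensor([2., 5.], stop_gradient=False)
>>> rv = MultivariateNormal(loc=loc, covariance_matrix=paddle.to_tensor([[2., 1.], [1., 2.]]))
>>> x = rv.rsample([4])          # samples remain a differentiable function of loc (assumed)
>>> loss = (x ** 2).sum()
>>> loss.backward()              # gradients should propagate to loc through the samples
>>> print(loc.grad is not None)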
- property batch_shape
Returns batch shape of the distribution.
- Returns
batch shape
- Return type
Sequence[int]
- property event_shape
Returns event shape of the distribution.
- Returns
event shape
- Return type
Sequence[int]
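A brief sketch, reusing the example distribution: a single 2-D Multivariate Normal has an empty batch shape and an event shape covering its 2 dimensions.

>>> import paddle
>>> from paddle.distribution import MultivariateNormal
>>> rv = MultivariateNormal(loc=paddle.to_tensor([2., 5.]),
...                         covariance_matrix=paddle.to_tensor([[2., 1.], [1., 2.]]))
>>> print(rv.batch_shape, rv.event_shape)   # expected: empty batch shape, event shape of length 2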
log_prob(value)
Log probability density function.
- Parameters
value (Tensor) – The input tensor.
- Returns
log probability. The data type is the same as self.loc.
- Return type
Tensor
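A minimal usage sketch, reusing the example distribution and evaluating the log density at its mean:

>>> import paddle
>>> from paddle.distribution import MultivariateNormal
>>> rv = MultivariateNormal(loc=paddle.to_tensor([2., 5.]),
...                         covariance_matrix=paddle.to_tensor([[2., 1.], [1., 2.]]))
>>> value = paddle.to_tensor([2., 5.])
>>> lp = rv.log_prob(value)      # log p(value); the last dimension is the event dimension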
probs(value)
Probability density/mass function.
Note
This method will be deprecated in the future; please use prob instead.
prob(value)
Probability density function.
- Parameters
value (Tensor) – The input tensor.
- Returns
probability. The data type is the same as self.loc.
- Return type
Tensor
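A small sketch relating prob to log_prob; mathematically, prob(value) should equal exp(log_prob(value)):

>>> import paddle
>>> from paddle.distribution import MultivariateNormal
>>> rv = MultivariateNormal(loc=paddle.to_tensor([2., 5.]),
...                         covariance_matrix=paddle.to_tensor([[2., 1.], [1., 2.]]))
>>> value = paddle.to_tensor([1., 4.])
>>> print(paddle.allclose(rv.prob(value), paddle.exp(rv.log_prob(value))))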
entropy()
Shannon entropy in nats.
The entropy is
\[\mathcal{H}(X) = \frac{n}{2} \log(2\pi) + \log \det A + \frac{n}{2}\]
In the above equation:
\(n\): is the dimension of the distribution (the event size).
\(A\): is the lower triangular Cholesky factor of the covariance matrix, so that \(\log \det A = \frac{1}{2} \log |\Sigma|\).
- Returns
Tensor, Shannon entropy of the Multivariate Normal distribution. The data type is the same as self.loc.
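As an illustrative check (not part of the API), the closed form above can be evaluated directly with n = 2 and A the Cholesky factor of the example covariance matrix, and compared against entropy():

>>> import math
>>> import paddle
>>> from paddle.distribution import MultivariateNormal
>>> cov = paddle.to_tensor([[2., 1.], [1., 2.]])
>>> rv = MultivariateNormal(loc=paddle.to_tensor([2., 5.]), covariance_matrix=cov)
>>> n = 2
>>> A = paddle.linalg.cholesky(cov)                 # lower triangular factor, cov = A @ A.T
>>> manual = n / 2 * math.log(2 * math.pi) + paddle.log(paddle.linalg.det(A)) + n / 2
>>> print(paddle.allclose(rv.entropy(), manual))    # should agree with the value 3.38718319 above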
kl_divergence(other)
The KL-divergence between two Multivariate Normal distributions with the same batch_shape and event_shape.
The KL-divergence is
\[KL\_divergence(p_1, p_2) = \log(\det A_2) - \log(\det A_1) - \frac{n}{2} + \frac{1}{2}\left[\operatorname{tr}\left(\Sigma_2^{-1} \Sigma_1\right) + (\mu_1 - \mu_2)^{\intercal} \Sigma_2^{-1} (\mu_1 - \mu_2)\right]\]
where \(p_i\) is the Multivariate Normal distribution with mean \(\mu_i\), covariance \(\Sigma_i\), and Cholesky factor \(A_i\).
- Parameters
other (MultivariateNormal) – instance of Multivariate Normal.
- Returns
Tensor, KL-divergence between two Multivariate Normal distributions. The data type is the same as self.loc.
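As an illustrative check (not part of the API), the closed form above can be evaluated with standard linear-algebra ops for the two distributions from the example and compared against kl_divergence():

>>> import paddle
>>> from paddle.distribution import MultivariateNormal
>>> mu1, cov1 = paddle.to_tensor([2., 5.]), paddle.to_tensor([[2., 1.], [1., 2.]])
>>> mu2, cov2 = paddle.to_tensor([-1., 3.]), paddle.to_tensor([[3., 2.], [2., 3.]])
>>> rv1 = MultivariateNormal(loc=mu1, covariance_matrix=cov1)
>>> rv2 = MultivariateNormal(loc=mu2, covariance_matrix=cov2)
>>> n = 2
>>> A1, A2 = paddle.linalg.cholesky(cov1), paddle.linalg.cholesky(cov2)
>>> inv2 = paddle.linalg.inv(cov2)
>>> diff = mu1 - mu2
>>> manual = (paddle.log(paddle.linalg.det(A2)) - paddle.log(paddle.linalg.det(A1)) - n / 2
...           + 0.5 * (paddle.trace(inv2 @ cov1) + diff @ inv2 @ diff))
>>> print(paddle.allclose(rv1.kl_divergence(rv2), manual))   # should agree with the value 1.55541301 above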