Laplace¶
- class paddle.distribution.Laplace(loc, scale) [source]
-
Creates a Laplace distribution parameterized by loc and scale.
Mathematical details
The probability density function (pdf) is
\[pdf(x; \mu, \sigma) = \frac{1}{2 * \sigma} * e^{\frac{-|x - \mu|}{\sigma}}\]
In the above equation:
\(loc = \mu\): is the location parameter.
\(scale = \sigma\): is the scale parameter.
- Parameters
-
loc (scalar|Tensor) – The mean of the distribution.
scale (scalar|Tensor) – The scale of the distribution.
Examples
>>> import paddle
>>> paddle.seed(2023)
>>> m = paddle.distribution.Laplace(paddle.to_tensor(0.0), paddle.to_tensor(1.0))
>>> m.sample()  # Laplace distributed with loc=0, scale=1
Tensor(shape=[], dtype=float32, place=Place(cpu), stop_gradient=True,
       1.31554604)
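When loc and scale are tensors, a single Laplace object represents a batch of components. A minimal sketch, assuming element-wise batching of the two parameters:
>>> import paddle
>>> m = paddle.distribution.Laplace(paddle.to_tensor([0.0, 1.0]), paddle.to_tensor([1.0, 2.0]))
>>> m.sample()       # one draw per component; expected shape [2]
>>> m.sample((3,))   # three draws per component; expected shape [3, 2]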
- property mean
-
Mean of distribution.
- Returns
-
The mean value.
- Return type
-
Tensor
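For a Laplace distribution the mean equals loc. A minimal sketch of reading the property:
>>> import paddle
>>> m = paddle.distribution.Laplace(paddle.to_tensor(0.0), paddle.to_tensor(1.0))
>>> m.mean   # equals loc, i.e. 0.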
- property stddev
-
Standard deviation.
The stddev is
\[stddev = \sqrt{2} * \sigma\]
In the above equation:
\(scale = \sigma\): is the scale parameter.
- Returns
-
The standard deviation value.
- Return type
-
Tensor
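A minimal sketch checking the property against the formula above (sqrt(2) * scale):
>>> import paddle
>>> m = paddle.distribution.Laplace(paddle.to_tensor(0.0), paddle.to_tensor(2.0))
>>> m.stddev                                  # sqrt(2) * 2 ~ 2.82842712
>>> paddle.sqrt(paddle.to_tensor(2.0)) * 2.0  # manual check of the same value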
- property variance
-
Variance of distribution.
The variance is
\[variance = 2 * \sigma^2\]
In the above equation:
\(scale = \sigma\): is the scale parameter.
- Returns
-
The variance value.
- Return type
-
Tensor
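A minimal sketch checking the property against the formula above (2 * scale^2):
>>> import paddle
>>> m = paddle.distribution.Laplace(paddle.to_tensor(0.0), paddle.to_tensor(2.0))
>>> m.variance   # 2 * 2**2, i.e. 8.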
- log_prob(value)¶
-
Log probability density/mass function.
The log_prob is
\[log\_prob(value) = -log(2 * \sigma) - \frac{|value - \mu|}{\sigma}\]
In the above equation:
\(loc = \mu\): is the location parameter.
\(scale = \sigma\): is the scale parameter.
- Parameters
-
value (Tensor|Scalar) – The input value, which can be a scalar or a tensor.
- Returns
-
The log probability, whose data type is the same as that of value.
- Return type
-
Tensor
Examples
>>> import paddle
>>> m = paddle.distribution.Laplace(paddle.to_tensor(0.0), paddle.to_tensor(1.0))
>>> value = paddle.to_tensor(0.1)
>>> m.log_prob(value)
Tensor(shape=[], dtype=float32, place=Place(cpu), stop_gradient=True,
       -0.79314721)
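The value above can be reproduced by hand from the formula; a minimal sketch:
>>> import paddle
>>> loc, scale, value = paddle.to_tensor(0.0), paddle.to_tensor(1.0), paddle.to_tensor(0.1)
>>> -paddle.log(2 * scale) - paddle.abs(value - loc) / scale   # ~ -0.79314721, matches log_prob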
- entropy()¶
-
Entropy of Laplace distribution.
The entropy is:
\[entropy() = 1 + log(2 * \sigma)\]
In the above equation:
\(scale = \sigma\): is the scale parameter.
- Returns
-
The entropy of distribution.
Examples
>>> import paddle
>>> m = paddle.distribution.Laplace(paddle.to_tensor(0.0), paddle.to_tensor(1.0))
>>> m.entropy()
Tensor(shape=[], dtype=float32, place=Place(cpu), stop_gradient=True,
       1.69314718)
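The entropy above can be reproduced by hand from 1 + log(2 * scale); a minimal sketch:
>>> import paddle
>>> scale = paddle.to_tensor(1.0)
>>> 1 + paddle.log(2 * scale)   # ~ 1.69314718, matches entropy()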
- cdf(value)¶
-
Cumulative distribution function.
The cdf is
\[cdf(value) = 0.5 - 0.5 * sign(value - \mu) * (e^{\frac{-|value - \mu|}{\sigma}} - 1)\]
In the above equation:
\(loc = \mu\): is the location parameter.
\(scale = \sigma\): is the scale parameter.
- Parameters
-
value (Tensor) – The value to be evaluated.
- Returns
-
The cumulative probability of value.
- Return type
-
Tensor
Examples
>>> import paddle
>>> m = paddle.distribution.Laplace(paddle.to_tensor(0.0), paddle.to_tensor(1.0))
>>> value = paddle.to_tensor(0.1)
>>> m.cdf(value)
Tensor(shape=[], dtype=float32, place=Place(cpu), stop_gradient=True,
       0.54758132)
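The result above can be reproduced by hand from the cdf formula; a minimal sketch:
>>> import paddle
>>> loc, scale, value = paddle.to_tensor(0.0), paddle.to_tensor(1.0), paddle.to_tensor(0.1)
>>> manual = 0.5 - 0.5 * paddle.sign(value - loc) * (paddle.exp(-paddle.abs(value - loc) / scale) - 1)
>>> manual   # ~ 0.54758132, matches m.cdf(value)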
- icdf(value)¶
-
Inverse cumulative distribution function.
The icdf is
\[cdf^{-1}(value) = \mu - \sigma * sign(value - 0.5) * ln(1 - 2 * |value - 0.5|)\]
In the above equation:
\(loc = \mu\): is the location parameter.
\(scale = \sigma\): is the scale parameter.
- Parameters
-
value (Tensor) – The value to be evaluated.
- Returns
-
The quantile corresponding to the given cumulative probability value.
- Return type
-
Tensor
Examples
>>> import paddle
>>> m = paddle.distribution.Laplace(paddle.to_tensor(0.0), paddle.to_tensor(1.0))
>>> value = paddle.to_tensor(0.1)
>>> m.icdf(value)
Tensor(shape=[], dtype=float32, place=Place(cpu), stop_gradient=True,
       -1.60943794)
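Since icdf inverts cdf, composing the two should recover the input probability; a minimal sketch:
>>> import paddle
>>> m = paddle.distribution.Laplace(paddle.to_tensor(0.0), paddle.to_tensor(1.0))
>>> p = paddle.to_tensor(0.1)
>>> m.cdf(m.icdf(p))   # round trip; expected ~ 0.1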
- sample(shape=())¶
-
Generate samples of the specified shape.
- Parameters
-
shape (tuple[int]) – The shape of generated samples.
- Returns
-
A sample tensor that fits the Laplace distribution.
- Return type
-
Tensor
Examples
>>> import paddle
>>> paddle.seed(2023)
>>> m = paddle.distribution.Laplace(paddle.to_tensor(0.0), paddle.to_tensor(1.0))
>>> m.sample()  # Laplace distributed with loc=0, scale=1
Tensor(shape=[], dtype=float32, place=Place(cpu), stop_gradient=True,
       1.31554604)
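Passing a non-empty shape draws several samples at once; a minimal sketch, assuming the requested shape is prepended to the (empty) batch shape of scalar parameters:
>>> import paddle
>>> paddle.seed(2023)
>>> m = paddle.distribution.Laplace(paddle.to_tensor(0.0), paddle.to_tensor(1.0))
>>> m.sample((2, 3))   # expected shape [2, 3]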
- rsample(shape)¶
-
Reparameterized sample.
- Parameters
-
shape (tuple[int]) – The shape of generated samples.
- Returns
-
A sample tensor that fits the Laplace distribution.
- Return type
-
Tensor
Examples
>>> import paddle
>>> paddle.seed(2023)
>>> m = paddle.distribution.Laplace(paddle.to_tensor([0.0]), paddle.to_tensor([1.0]))
>>> m.rsample((1,))  # Laplace distributed with loc=0, scale=1
Tensor(shape=[1, 1], dtype=float32, place=Place(cpu), stop_gradient=True,
       [[1.31554604]])
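Unlike sample, rsample keeps the draw differentiable with respect to loc and scale; a minimal sketch of gradient flow, assuming dynamic-graph (imperative) mode:
>>> import paddle
>>> loc = paddle.to_tensor(0.0, stop_gradient=False)
>>> scale = paddle.to_tensor(1.0, stop_gradient=False)
>>> m = paddle.distribution.Laplace(loc, scale)
>>> s = m.rsample((100,))
>>> s.mean().backward()   # gradients propagate through the reparameterized samples
>>> loc.grad              # expected to be populated, since s depends differentiably on loc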
- property batch_shape
-
Returns the batch shape of the distribution.
- Returns
-
batch shape
- Return type
-
Sequence[int]
- property event_shape
-
Returns the event shape of the distribution.
- Returns
-
event shape
- Return type
-
Sequence[int]
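A minimal sketch of the two shape properties, assuming element-wise batching of loc and scale:
>>> import paddle
>>> m = paddle.distribution.Laplace(paddle.to_tensor([0.0, 1.0]), paddle.to_tensor([1.0, 2.0]))
>>> m.batch_shape   # two parameter pairs, so a batch shape of length 2
>>> m.event_shape   # each draw is a scalar, so the event shape is empty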
- prob(value)¶
-
Probability density/mass function evaluated at value.
- Parameters
-
value (Tensor) – The value to be evaluated.
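prob is the exponential of log_prob; a minimal sketch of the relation, using log_prob (which Laplace implements):
>>> import paddle
>>> m = paddle.distribution.Laplace(paddle.to_tensor(0.0), paddle.to_tensor(1.0))
>>> value = paddle.to_tensor(0.1)
>>> paddle.exp(m.log_prob(value))   # the density at 0.1; m.prob(value) is expected to agree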
- probs(value)¶
-
Probability density/mass function.
Note
This method will be deprecated in the future, please use prob instead.
- kl_divergence(other) [source]¶
-
Calculate the KL divergence KL(self || other) between two Laplace instances.
The kl_divergence between two Laplace distributions is
\[KL\_divergence(\mu_0, \sigma_0; \mu_1, \sigma_1) = \ln{ratio} + \frac{\sigma_0 * e^{\frac{-diff}{\sigma_0}} + diff}{\sigma_1} - 1\]
\[ratio = \frac{\sigma_1}{\sigma_0}\]
\[diff = |\mu_1 - \mu_0|\]
In the above equation:
\(loc = \mu_0\): is the location parameter of self.
\(scale = \sigma_0\): is the scale parameter of self.
\(loc = \mu_1\): is the location parameter of the reference Laplace distribution.
\(scale = \sigma_1\): is the scale parameter of the reference Laplace distribution.
\(ratio\): is the ratio between the scale parameters of the two distributions.
\(diff\): is the absolute difference between the location parameters of the two distributions.
- Parameters
-
other (Laplace) – An instance of Laplace.
- Returns
-
The KL divergence between two Laplace distributions.
- Return type
-
Tensor
Examples
>>> import paddle
>>> m1 = paddle.distribution.Laplace(paddle.to_tensor([0.0]), paddle.to_tensor([1.0]))
>>> m2 = paddle.distribution.Laplace(paddle.to_tensor([1.0]), paddle.to_tensor([0.5]))
>>> m1.kl_divergence(m2)
Tensor(shape=[1], dtype=float32, place=Place(cpu), stop_gradient=True,
       [1.04261160])
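The result above can be checked by hand against the closed-form expression; a minimal sketch:
>>> import paddle
>>> l1, s1 = paddle.to_tensor([0.0]), paddle.to_tensor([1.0])
>>> l2, s2 = paddle.to_tensor([1.0]), paddle.to_tensor([0.5])
>>> diff = paddle.abs(l1 - l2)
>>> paddle.log(s2 / s1) + (s1 * paddle.exp(-diff / s1) + diff) / s2 - 1   # ~ [1.04261160]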