ContinuousBernoulli
- class paddle.distribution.ContinuousBernoulli(probs, lims=(0.499, 0.501)) [source]
The Continuous Bernoulli distribution with parameter: probs characterizing the shape of the density function. The Continuous Bernoulli distribution is defined on [0, 1], and it can be viewed as a continuous version of the Bernoulli distribution.
Reference: The continuous Bernoulli: fixing a pervasive error in variational autoencoders (Loaiza-Ganem and Cunningham, 2019).
Mathematical details
The probability density function (pdf) is
\[p(x;\lambda) = C(\lambda)\lambda^x (1-\lambda)^{1-x}\]
In the above equation:
\(x\): a continuous value between 0 and 1
\(probs = \lambda\): the parameter characterizing the shape of the density
\(C(\lambda)\): the normalizing constant
\[\begin{split}C(\lambda) = \left\{ \begin{aligned} &2 & \text{ if $\lambda = \frac{1}{2}$} \\ &\frac{2\tanh^{-1}(1-2\lambda)}{1 - 2\lambda} & \text{ otherwise} \end{aligned} \right.\end{split}\]
- Parameters
probs (int|float|Tensor) – The probability parameter of the Continuous Bernoulli distribution, in [0, 1], which characterizes the shape of the pdf. If the input data type is int or float, probs will be converted to a 1-D Tensor with the paddle global default dtype.
lims (tuple) – Specifies the numerically unstable region around 0.5, inside which the calculation is approximated by a Taylor expansion. The default value is (0.499, 0.501).
Examples
>>> import paddle
>>> from paddle.distribution import ContinuousBernoulli
>>> paddle.set_device("cpu")
>>> paddle.seed(100)
>>> rv = ContinuousBernoulli(paddle.to_tensor([0.2, 0.5]))

>>> print(rv.sample([2]))
Tensor(shape=[2, 2], dtype=float32, place=Place(cpu), stop_gradient=True,
       [[0.38694882, 0.20714243],
        [0.00631948, 0.51577556]])

>>> print(rv.mean)
Tensor(shape=[2], dtype=float32, place=Place(cpu), stop_gradient=True,
       [0.38801414, 0.50000000])

>>> print(rv.variance)
Tensor(shape=[2], dtype=float32, place=Place(cpu), stop_gradient=True,
       [0.07589778, 0.08333334])

>>> print(rv.entropy())
Tensor(shape=[2], dtype=float32, place=Place(cpu), stop_gradient=True,
       [-0.07641457,  0.        ])

>>> print(rv.cdf(paddle.to_tensor(0.1)))
Tensor(shape=[2], dtype=float32, place=Place(cpu), stop_gradient=True,
       [0.17259926, 0.10000000])

>>> print(rv.icdf(paddle.to_tensor(0.1)))
Tensor(shape=[2], dtype=float32, place=Place(cpu), stop_gradient=True,
       [0.05623737, 0.10000000])

>>> rv1 = ContinuousBernoulli(paddle.to_tensor([0.2, 0.8]))
>>> rv2 = ContinuousBernoulli(paddle.to_tensor([0.7, 0.5]))
>>> print(rv1.kl_divergence(rv2))
Tensor(shape=[2], dtype=float32, place=Place(cpu), stop_gradient=True,
       [0.20103608, 0.07641447])
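As an additional, informal cross-check of the density formula above (an illustrative sketch, not part of the official example), the normalizing constant \(C(\lambda)\) can be recomputed with paddle.atanh and the resulting density compared against prob; the parameter values are chosen away from the unstable region near 0.5:
>>> import paddle
>>> from paddle.distribution import ContinuousBernoulli
>>> lam = paddle.to_tensor([0.2, 0.8])   # shape parameters away from the unstable region near 0.5
>>> x = paddle.to_tensor([0.3, 0.6])     # evaluation points in [0, 1]
>>> C = 2.0 * paddle.atanh(1.0 - 2.0 * lam) / (1.0 - 2.0 * lam)   # normalizing constant C(lambda)
>>> manual_pdf = C * paddle.pow(lam, x) * paddle.pow(1.0 - lam, 1.0 - x)
>>> rv = ContinuousBernoulli(lam)
>>> print(manual_pdf)       # output omitted; expected to be close to the line below
>>> print(rv.prob(x))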
- probs(value)
Probability density/mass function.
Note: This method will be deprecated in the future; please use prob instead.
- property mean
Mean of Continuous Bernoulli distribution.
- Returns
mean value.
- Return type
Tensor
- property variance
Variance of Continuous Bernoulli distribution.
- Returns
variance value.
- Return type
Tensor
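As an informal sanity check, the closed-form mean and variance can be compared against Monte Carlo estimates built from sample. This is an illustrative sketch rather than part of the reference:
>>> import paddle
>>> from paddle.distribution import ContinuousBernoulli
>>> paddle.seed(2024)
>>> rv = ContinuousBernoulli(paddle.to_tensor([0.2, 0.8]))
>>> draws = rv.sample([200000])               # shape [200000, 2]
>>> print(rv.mean)                            # analytic mean (output omitted)
>>> print(paddle.mean(draws, axis=0))         # empirical mean, expected to be close
>>> print(rv.variance)                        # analytic variance
>>> print(paddle.var(draws, axis=0))          # empirical variance, expected to be close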
- sample(shape=())
Generate Continuous Bernoulli samples of the specified shape. The final shape would be sample_shape + batch_shape.
- Parameters
shape (Sequence[int], optional) – Prepended shape of the generated samples.
- Returns
Tensor, Sampled data with shape sample_shape + batch_shape.
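To illustrate the shape rule (a sketch, not part of the reference), a batch of three parameters combined with a prepended sample shape of [4, 2] yields samples of shape [4, 2, 3]:
>>> import paddle
>>> from paddle.distribution import ContinuousBernoulli
>>> rv = ContinuousBernoulli(paddle.to_tensor([0.2, 0.5, 0.9]))   # batch_shape is [3]
>>> x = rv.sample([4, 2])                                         # sample_shape is [4, 2]
>>> print(x.shape)                                                # [4, 2, 3], i.e. sample_shape + batch_shape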
- rsample(shape=())
Generate Continuous Bernoulli samples of the specified shape. The final shape would be sample_shape + batch_shape.
- Parameters
shape (Sequence[int], optional) – Prepended shape of the generated samples.
- Returns
Tensor, Sampled data with shape sample_shape + batch_shape.
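rsample is normally the reparameterized (pathwise differentiable) counterpart of sample, so gradients with respect to probs are expected to flow through the draws. The following sketch illustrates that usage under this assumption; it is not an excerpt from the reference:
>>> import paddle
>>> from paddle.distribution import ContinuousBernoulli
>>> probs = paddle.to_tensor([0.2, 0.7], stop_gradient=False)   # track gradients w.r.t. the parameter
>>> rv = ContinuousBernoulli(probs)
>>> x = rv.rsample([64])        # draws of shape [64, 2]
>>> loss = x.mean()
>>> loss.backward()
>>> print(probs.grad)           # expected to hold a gradient if rsample is reparameterized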
- log_prob(value)
Log probability density function.
- Parameters
value (Tensor) – The input tensor.
- Returns
log probability. The data type is the same as self.probs.
- Return type
Tensor
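As in the class-level example above, a scalar value broadcasts against the batch of parameters; an illustrative sketch (not from the reference):
>>> import paddle
>>> from paddle.distribution import ContinuousBernoulli
>>> rv = ContinuousBernoulli(paddle.to_tensor([0.2, 0.5]))
>>> print(rv.log_prob(paddle.to_tensor(0.3)))          # scalar value broadcasts: one log-density per batch entry
>>> print(rv.log_prob(paddle.to_tensor([0.3, 0.9])))   # element-wise evaluation, shape [2]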
- prob(value)
Probability density function.
- Parameters
value (Tensor) – The input tensor.
- Returns
probability. The data type is the same as self.probs.
- Return type
Tensor
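prob and log_prob are expected to agree up to exponentiation; a quick illustrative check (not part of the reference):
>>> import paddle
>>> from paddle.distribution import ContinuousBernoulli
>>> rv = ContinuousBernoulli(paddle.to_tensor([0.2, 0.8]))
>>> x = paddle.to_tensor([0.1, 0.6])
>>> print(rv.prob(x))                      # density values (output omitted)
>>> print(paddle.exp(rv.log_prob(x)))      # expected to match rv.prob(x) numerically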
- entropy()
Shannon entropy in nats.
The entropy is
\[\mathcal{H}(X) = -\log C + \left[ \log (1 - \lambda) -\log \lambda \right] \mathbb{E}(X) - \log(1 - \lambda)\]
where \(C\) and \(\lambda\) are as defined above.
- Returns
Tensor, Shannon entropy of Continuous Bernoulli distribution.
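As a rough cross-check (an illustrative sketch, not from the reference), the analytic entropy can be compared with a Monte Carlo estimate of the negative mean log-density:
>>> import paddle
>>> from paddle.distribution import ContinuousBernoulli
>>> paddle.seed(7)
>>> rv = ContinuousBernoulli(paddle.to_tensor([0.2, 0.8]))
>>> x = rv.sample([100000])
>>> print(rv.entropy())                            # analytic entropy in nats
>>> print(-paddle.mean(rv.log_prob(x), axis=0))    # Monte Carlo estimate, expected to be close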
- cdf(value)
Cumulative distribution function
\[\begin{split}{ P(X \le t; \lambda) = F(t;\lambda) = \left\{ \begin{aligned} &t & \text{ if $\lambda = \frac{1}{2}$} \\ &\frac{\lambda^t (1 - \lambda)^{1 - t} + \lambda - 1}{2\lambda - 1} & \text{ otherwise} \end{aligned} \right. }\end{split}\]
- Parameters
value (Tensor) – The input tensor.
- Returns
cumulative probability of value. The data type is the same as self.probs.
- Return type
Tensor
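An illustrative cross-check (not from the reference): the analytic CDF can be compared with the empirical fraction of samples falling at or below a threshold:
>>> import paddle
>>> from paddle.distribution import ContinuousBernoulli
>>> paddle.seed(0)
>>> rv = ContinuousBernoulli(paddle.to_tensor([0.2, 0.8]))
>>> t = paddle.to_tensor(0.4)
>>> x = rv.sample([100000])                                  # shape [100000, 2]
>>> print(rv.cdf(t))                                         # analytic P(X <= 0.4) per batch entry
>>> print(paddle.mean((x <= t).astype("float32"), axis=0))   # empirical estimate, expected to be close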
- icdf(value)
Inverse cumulative distribution function
\[\begin{split}{ F^{-1}(x;\lambda) = \left\{ \begin{aligned} &x & \text{ if $\lambda = \frac{1}{2}$} \\ &\frac{\log(1+(\frac{2\lambda - 1}{1 - \lambda})x)}{\log(\frac{\lambda}{1-\lambda})} & \text{ otherwise} \end{aligned} \right. }\end{split}\]
- Parameters
value (Tensor) – The input tensor of quantile levels (probabilities in [0, 1]).
- Returns
the value of the r.v. corresponding to the quantile. The data type is the same as self.probs.
- Return type
Tensor
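An illustrative round-trip sketch (not from the reference): away from the unstable region around \(\lambda = 0.5\), icdf is expected to invert cdf:
>>> import paddle
>>> from paddle.distribution import ContinuousBernoulli
>>> rv = ContinuousBernoulli(paddle.to_tensor([0.2, 0.8]))
>>> x = paddle.to_tensor([0.25, 0.90])
>>> u = rv.cdf(x)           # cumulative probabilities in [0, 1]
>>> print(rv.icdf(u))       # expected to recover x up to numerical error
>>> print(x)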
- property batch_shape
Returns batch shape of distribution.
- Returns
batch shape
- Return type
Sequence[int]
- property event_shape
Returns event shape of distribution.
- Returns
event shape
- Return type
Sequence[int]
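For a batched parameterization, an illustrative sketch of these two properties (assuming a scalar event, so the event shape is expected to be empty):
>>> import paddle
>>> from paddle.distribution import ContinuousBernoulli
>>> rv = ContinuousBernoulli(paddle.to_tensor([0.2, 0.5, 0.9]))
>>> print(rv.batch_shape)   # three independent distributions in the batch
>>> print(rv.event_shape)   # scalar event, so this is expected to be empty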
- kl_divergence(other) [source]
The KL-divergence between two Continuous Bernoulli distributions with the same batch_shape.
The KL-divergence is
\[KL\_divergence(\lambda_1, \lambda_2) = - H - \{\log C_2 + [\log \lambda_2 - \log (1-\lambda_2)] \mathbb{E}_1(X) + \log (1-\lambda_2) \}\]
where \(H\) is the entropy of the first distribution and \(\mathbb{E}_1\) denotes expectation under it.
- Parameters
other (ContinuousBernoulli) – instance of Continuous Bernoulli.
- Returns
Tensor, KL-divergence between two Continuous Bernoulli distributions.
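An illustrative Monte Carlo cross-check (a sketch, not from the reference): the analytic KL-divergence can be compared with the sample average of the log-density ratio under the first distribution:
>>> import paddle
>>> from paddle.distribution import ContinuousBernoulli
>>> paddle.seed(1)
>>> rv1 = ContinuousBernoulli(paddle.to_tensor([0.2, 0.8]))
>>> rv2 = ContinuousBernoulli(paddle.to_tensor([0.7, 0.3]))
>>> x = rv1.sample([100000])
>>> print(rv1.kl_divergence(rv2))                                  # analytic KL per batch entry
>>> print(paddle.mean(rv1.log_prob(x) - rv2.log_prob(x), axis=0))  # Monte Carlo estimate, expected to be close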