Categorical

class paddle.fluid.layers.distributions.Categorical(logits) [source]

Categorical distribution is a discrete probability distribution that describes the possible results of a random variable that can take on one of K possible categories, with the probability of each category separately specified.

The probability mass function (pmf) is:

\[pmf(k; p_i) = \prod_{i=1}^{K} p_i^{[x=i]}\]

In the above equation:

  • \([x=i]\) : the Iverson bracket; it evaluates to 1 if \(x = i\), and to 0 otherwise.
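
Because the Iverson bracket is 1 only for the observed category, the product above collapses to the probability of that category. A minimal numpy sketch of this (the helper name is hypothetical, not part of the Paddle API; indices are 0-based here, whereas the formula is 1-based):

```python
import numpy as np

def categorical_pmf(x, probs):
    """Evaluate prod_i probs[i]**[x == i], which equals probs[x]."""
    K = len(probs)
    # (x == i) is the Iverson bracket: True (1) only when i == x,
    # so every other factor is probs[i]**0 == 1.
    return float(np.prod([probs[i] ** (x == i) for i in range(K)]))

probs = np.array([0.2, 0.5, 0.3])
print(categorical_pmf(1, probs))  # 0.5 — the product picks out p_1
```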

Parameters

logits (list|numpy.ndarray|Variable) – The logits of the categorical distribution. The data type is float32.

Examples

import numpy as np
from paddle.fluid import layers
from paddle.fluid.layers import Categorical

a_logits_npdata = np.array([-0.602,-0.602], dtype="float32")
a_logits_tensor = layers.create_tensor(dtype="float32")
layers.assign(a_logits_npdata, a_logits_tensor)

b_logits_npdata = np.array([-0.102,-0.112], dtype="float32")
b_logits_tensor = layers.create_tensor(dtype="float32")
layers.assign(b_logits_npdata, b_logits_tensor)

a = Categorical(a_logits_tensor)
b = Categorical(b_logits_tensor)

a.entropy()
# [0.6931472] with shape: [1]

b.entropy()
# [0.6931347] with shape: [1]

a.kl_divergence(b)
# [1.2516975e-05] with shape: [1]
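
The values in the comments above can be cross-checked with plain numpy: a Categorical's probabilities are the softmax of its logits, and entropy and KL-divergence follow from their standard definitions. This sketch does not use Paddle at all:

```python
import numpy as np

def softmax(logits):
    e = np.exp(logits - np.max(logits))  # subtract max for numerical stability
    return e / e.sum()

a = softmax(np.array([-0.602, -0.602]))  # equal logits -> [0.5, 0.5]
b = softmax(np.array([-0.102, -0.112]))

entropy_a = -np.sum(a * np.log(a))  # ~0.6931472, i.e. ln 2
entropy_b = -np.sum(b * np.log(b))  # ~0.6931347
kl_ab = np.sum(a * np.log(a / b))   # ~1.25e-05

print(entropy_a, entropy_b, kl_ab)
```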

kl_divergence(other)

The KL-divergence between two Categorical distributions.

Parameters

other (Categorical) – The instance of Categorical to compare against. The data type is float32.

Returns

The KL-divergence between the two Categorical distributions.

Return type

Variable

entropy()

Shannon entropy in nats.

Returns

Shannon entropy of Categorical distribution. The data type is float32.

Return type

Variable

log_prob(value)

The log probability mass function of the distribution, evaluated at value.
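
Conceptually, this gathers the log-softmax of the logits at each index in value. A hypothetical numpy sketch of the math (not the Paddle implementation):

```python
import numpy as np

def log_prob(logits, value):
    # log-softmax: log p_i = logit_i - logsumexp(logits)
    logits = np.asarray(logits, dtype=np.float64)
    log_p = logits - np.log(np.sum(np.exp(logits)))
    # Gather the log-probability at each requested category index.
    return log_p[np.asarray(value)]

print(log_prob([-0.602, -0.602], [0, 1]))  # both entries are log(0.5)
```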

sample()

Draw samples from the distribution.
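
Sampling a categorical amounts to drawing category indices with probabilities softmax(logits). A minimal numpy sketch of that semantics (an assumption about the behavior, not the Paddle implementation):

```python
import numpy as np

def sample(logits, n, rng=None):
    rng = rng or np.random.default_rng(0)  # seeded for reproducibility
    e = np.exp(logits - np.max(logits))
    probs = e / e.sum()
    # Draw n category indices with the given probabilities.
    return rng.choice(len(probs), size=n, p=probs)

draws = sample(np.array([-0.602, -0.602]), 10000)
print(draws[:5], draws.mean())  # roughly half 0s, half 1s
```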