dynamic_lstm

paddle.fluid.layers.dynamic_lstm(input, size, h_0=None, c_0=None, param_attr=None, bias_attr=None, use_peepholes=True, is_reverse=False, gate_activation='sigmoid', cell_activation='tanh', candidate_activation='tanh', dtype='float32', name=None) [source]
api_attr: Static Graph

Note:
  1. This OP only supports LoDTensor as inputs. If you need to deal with Tensor, please use fluid.layers.lstm.

  2. In order to improve efficiency, users must first map the input of dimension [T, hidden_size] to an input of dimension [T, 4 * hidden_size] (for example, with a fully connected layer, as in the Examples below), and then pass it to this OP.

The implementation of this OP includes diagonal/peephole connections. Please refer to Gers, F. A., & Schmidhuber, J. (2000). If you do not need peephole connections, please set use_peepholes to False.

This OP computes each timestep as follows:

i_t = \sigma(W_{ix} x_t + W_{ih} h_{t-1} + b_{xi} + b_{hi})
f_t = \sigma(W_{fx} x_t + W_{fh} h_{t-1} + b_{xf} + b_{hf})
o_t = \sigma(W_{ox} x_t + W_{oh} h_{t-1} + b_{xo} + b_{ho})
\tilde{c}_t = \tanh(W_{cx} x_t + W_{ch} h_{t-1} + b_{xc} + b_{hc})
c_t = f_t \odot c_{t-1} + i_t \odot \tilde{c}_t
h_t = o_t \odot \tanh(c_t)

The symbolic meanings in the formula are as follows:

  • x_t represents the input at timestep t

  • h_t represents the hidden state at timestep t

  • h_{t-1}, c_{t-1} represent the hidden state and cell state at timestep t-1, respectively

  • \tilde{c}_t represents the candidate cell state

  • i_t, f_t and o_t represent the input gate, forget gate and output gate, respectively

  • W represents a weight (e.g., W_{ix} is the weight of the linear transformation of the input x_t when calculating the input gate i_t)

  • b represents a bias (e.g., b_i is the bias of the input gate)

  • \sigma represents the nonlinear activation function for the gates, sigmoid by default

  • \odot represents the Hadamard product of two matrices, i.e. multiplying the elements at the same positions of two matrices with the same dimensions to get another matrix with the same dimensions (see the numerical sketch below)
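
To make the recurrence concrete, here is a minimal NumPy sketch of a single timestep without peephole connections. It is purely illustrative and independent of the OP's actual kernel; the gate block order, the folding of both bias terms into the projected input, and all array names are assumptions made for the example.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

hidden_size = 4
rng = np.random.RandomState(0)

# Projected input for one timestep, x_proj = x_t W_x + b_x, already of width
# 4 * hidden_size, matching what the OP expects to receive.
x_proj = rng.randn(4 * hidden_size)
# Recurrent weight of shape [hidden_size, 4 * hidden_size] and previous states.
W_h = rng.randn(hidden_size, 4 * hidden_size)
h_prev = np.zeros(hidden_size)
c_prev = np.zeros(hidden_size)

gates = x_proj + h_prev.dot(W_h)       # shape [4 * hidden_size]
i, f, c_hat, o = np.split(gates, 4)    # slice into the four gate blocks
i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
c_hat = np.tanh(c_hat)

c = f * c_prev + i * c_hat             # Hadamard products from the formulas
h = o * np.tanh(c)
print(h.shape, c.shape)                # (4,) (4,)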

Parameters
  • input (Variable) – LSTM input tensor, a multi-dimensional LoDTensor of shape [T, 4*hidden_size]. Data type is float32 or float64.

  • size (int) – Must be equal to 4 * hidden_size, where hidden_size is the size of the LSTM hidden state.

  • h_0 (Variable, optional) – The initial hidden state of the LSTM, a multi-dimensional Tensor of shape [batch_size, hidden_size]. Data type is float32 or float64. If set to None, it is a tensor of all zeros. Default: None.

  • c_0 (Variable, optional) – The initial cell state of the LSTM, a multi-dimensional Tensor of shape [batch_size, hidden_size]. Data type is float32 or float64. If set to None, it is a tensor of all zeros. h_0 and c_0 must either both be None or both be provided. Default: None.

  • param_attr (ParamAttr, optional) –

    Parameter attribute of the weight. If it is None, the default weight parameter attribute is used. Please refer to ParamAttr. If the user sets this parameter, the weight dimension must be [hidden_size, 4*hidden_size]. Default: None. A usage sketch follows the parameter list.

    • Weights = {W_{cr}, W_{ir}, W_{fr}, W_{or}}, the shape is [hidden_size, 4*hidden_size].

  • bias_attr (ParamAttr, optional) –

    The bias attribute for the learnable bias weights, which contains two parts: the input-hidden bias weights and, if use_peepholes is set to True, the peephole connection weights. Please refer to ParamAttr. Default: None.

    1. use_peepholes = False: Biases = {b_c, b_i, b_f, b_o}; the shape is [1, 4*hidden_size].

    2. use_peepholes = True: Biases = {b_c, b_i, b_f, b_o, W_{ic}, W_{fc}, W_{oc}}; the shape is [1, 7*hidden_size].

  • use_peepholes (bool, optional) – Whether to use peephole connection or not. Default: True.

  • is_reverse (bool, optional) – Whether to calculate reverse LSTM. Default: False.

  • gate_activation (str, optional) – The activation for input gate, forget gate and output gate. Default: “sigmoid”.

  • cell_activation (str, optional) – The activation for cell output. Default: “tanh”.

  • candidate_activation (str, optional) – The activation for candidate hidden state. Default: “tanh”.

  • dtype (str, optional) – Data type, can be “float32” or “float64”. Default: “float32”.

  • name (str, optional) – A name for this layer. Please refer to Name . Default: None.
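
As a usage sketch for param_attr and bias_attr, the snippet below passes named attributes with explicit initializers. It assumes the input has already been projected to 4 * hidden_dim; the parameter names and initializers are illustrative choices, not requirements of the OP.

import paddle.fluid as fluid

hidden_dim = 64
# Hypothetical LoD-level-1 input already projected to 4 * hidden_dim.
proj = fluid.data(name='proj', shape=[None, hidden_dim * 4],
                  dtype='float32', lod_level=1)

# Recurrent weight, shape [hidden_dim, 4 * hidden_dim].
w_attr = fluid.ParamAttr(name='lstm_w',
                         initializer=fluid.initializer.Xavier())
# Bias, shape [1, 4 * hidden_dim] because use_peepholes=False below.
b_attr = fluid.ParamAttr(name='lstm_b',
                         initializer=fluid.initializer.Constant(0.0))

hidden, cell = fluid.layers.dynamic_lstm(
    input=proj, size=hidden_dim * 4,
    param_attr=w_attr, bias_attr=b_attr, use_peepholes=False)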

Returns

The hidden state and cell state of LSTM

  • hidden: LoDTensor with shape [T, hidden_size]; its LoD and dtype are the same as those of the input.

  • cell: LoDTensor with shape [T, hidden_size]; its LoD and dtype are the same as those of the input.

Return type

tuple (Variable, Variable)

Examples

import paddle.fluid as fluid
emb_dim = 256
vocab_size = 10000
hidden_dim = 512

# Variable-length sequence of token ids, LoD level 1.
data = fluid.data(name='x', shape=[None], dtype='int64', lod_level=1)
emb = fluid.embedding(input=data, size=[vocab_size, emb_dim], is_sparse=True)

# Project the embedding to 4 * hidden_dim, as required by dynamic_lstm.
forward_proj = fluid.layers.fc(input=emb, size=hidden_dim * 4,
                               bias_attr=False)

# size must equal 4 * hidden_dim; the OP returns the hidden state and cell state.
forward, cell = fluid.layers.dynamic_lstm(
    input=forward_proj, size=hidden_dim * 4, use_peepholes=False)
forward.shape  # (-1, 512)
cell.shape  # (-1, 512)
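
As a possible continuation of the example (not part of the original snippet), the last-step hidden state of each sequence can be taken for downstream use; the classification head below is purely hypothetical.

# Hidden state at the last timestep of each sequence; shape (-1, 512).
last = fluid.layers.sequence_last_step(input=forward)
# A hypothetical classification head on top of the last hidden state.
logits = fluid.layers.fc(input=last, size=2, act='softmax')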