rank_attention

paddle.fluid.contrib.layers.nn.rank_attention(input, rank_offset, rank_param_shape, rank_param_attr, max_rank=3, max_size=0) [source]

Rank Attention layer. This Op computes rank attention between input and rank_param, where rank_param describes the organization of the data. Note: this Op currently only supports running on GPU devices. This Op lives in contrib, which means it is not exposed as part of the public API.

Parameters

    input – Tensor with data type float32 or float64.
    rank_offset – Tensor with data type int32.
    rank_param_shape – The shape of rank_param.
    rank_param_attr – Attribute initializer of rank_param.
    max_rank – The max rank of input's ranks.

Returns

A Tensor with the same data type as input.

Return type

Variable

Examples


.. code-block:: python

   import paddle.fluid as fluid
   import numpy as np

   # Input features and the rank offsets describing each instance's ranks.
   input = fluid.data(name="input", shape=[None, 2], dtype="float32")
   rank_offset = fluid.data(name="rank_offset", shape=[None, 7], dtype="int32")

   # Attribute of the learnable rank parameter.
   rank_param_attr = fluid.ParamAttr(
       learning_rate=1.0,
       name="ubm_rank_param.w_0",
       initializer=fluid.initializer.Xavier(uniform=False))

   out = fluid.contrib.layers.rank_attention(input=input,
                                             rank_offset=rank_offset,
                                             rank_param_shape=[18, 3],
                                             rank_param_attr=rank_param_attr,
                                             max_rank=3,
                                             max_size=0)
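
The shapes in the example are consistent with rank_offset having 1 + 2*max_rank columns (7 for max_rank=3) and rank_param having max_rank * max_rank * input_dim rows (3 * 3 * 2 = 18). The sketch below is an illustration, not part of the original docs: it continues the example above, assumes a GPU is available (the op is GPU-only), and feeds placeholder offset values, since the exact rank_offset encoding is an assumption here.

.. code-block:: python

   # Minimal run sketch, continuing the example above. Assumes CUDA is
   # available, since rank_attention only supports GPU devices.
   place = fluid.CUDAPlace(0)
   exe = fluid.Executor(place)
   exe.run(fluid.default_startup_program())

   batch = 4
   input_np = np.random.random((batch, 2)).astype("float32")
   # Placeholder values only: a real rank_offset row is expected to encode
   # an instance's rank plus (rank, index) pairs for up to max_rank ranks;
   # consult the op's kernel for the exact encoding.
   rank_offset_np = np.zeros((batch, 7), dtype="int32")

   out_np, = exe.run(feed={"input": input_np, "rank_offset": rank_offset_np},
                     fetch_list=[out])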