sequence_pool
paddle.static.nn.sequence_pool(input, pool_type, is_test=False, pad_value=0.0)
Note
Only receives a Tensor carrying LoD (sequence) information as input. If your input is a plain Tensor without LoD information, please use a pool2d Op instead (static.nn.avg_pool2d or max_pool2d).
This operator only supports Tensor as input. It applies the specified pooling operation to the input Tensor, pooling the features of all time-steps of each sequence at the last lod_level using the pool_type given in the parameters, such as sum, average, sqrt, etc. Six pool_type values are supported:
average: \(Out[i] = \frac{\sum_j X_{ij}}{N_i}\)
sum: \(Out[i] = \sum_j X_{ij}\)
sqrt: \(Out[i] = \frac{\sum_j X_{ij}}{\sqrt{N_i}}\)
max: \(Out[i] = \max_j(X_{ij})\)
last: \(Out[i] = X_{i,N_i}\)
first: \(Out[i] = X_{i,1}\)
where \(N_i\) is the length of the i-th input sequence and \(X_{ij}\) is its j-th time-step.
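For intuition, the arithmetic behind these six formulas can be checked with plain NumPy on a single sequence; this is only an illustrative sketch, not part of the Paddle API:

>>> import numpy as np
>>> seq = np.array([2., 4., 6.])            # one sequence X_i with N_i = 3 time-steps
>>> avg   = seq.sum() / len(seq)            # average: 4.0
>>> total = seq.sum()                       # sum:     12.0
>>> sqrt_ = seq.sum() / np.sqrt(len(seq))   # sqrt:    ~6.93
>>> max_  = seq.max()                       # max:     6.0
>>> last  = seq[-1]                         # last:    6.0
>>> first = seq[0]                          # first:   2.0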
Case 1:

input is a 1-level Tensor and pad_value = 0.0:
    input.lod = [[0, 2, 5, 7, 7]]
    input.data = [[1.], [3.], [2.], [4.], [6.], [5.], [1.]]
    input.shape = [7, 1]

output is Tensor:
    out.shape = [4, 1]
    with condition out.shape[0] == len(x.lod[-1]) - 1 == 4

for different pool_type:
    average: out.data = [[2.], [4.], [3.], [0.0]], where 2.=(1. + 3.)/2, 4.=(2. + 4. + 6.)/3, 3.=(5. + 1.)/2
    sum    : out.data = [[4.], [12.], [6.], [0.0]], where 4.=1. + 3., 12.=2. + 4. + 6., 6.=5. + 1.
    sqrt   : out.data = [[2.82], [6.93], [4.24], [0.0]], where 2.82=(1. + 3.)/sqrt(2), 6.93=(2. + 4. + 6.)/sqrt(3), 4.24=(5. + 1.)/sqrt(2)
    max    : out.data = [[3.], [6.], [5.], [0.0]], where 3.=max(1., 3.), 6.=max(2., 4., 6.), 5.=max(5., 1.)
    last   : out.data = [[3.], [6.], [1.], [0.0]], where 3.=last(1., 3.), 6.=last(2., 4., 6.), 1.=last(5., 1.)
    first  : out.data = [[1.], [2.], [5.], [0.0]], where 1.=first(1., 3.), 2.=first(2., 4., 6.), 5.=first(5., 1.)

    The trailing [0.0] in each out.data above is padding data for the empty fourth sequence.

Case 2:

input is a 2-level Tensor containing 3 sequences with length info [2, 0, 3], where 0 means an empty sequence.
The first sequence contains 2 subsequences with length info [1, 2]; the last sequence contains 3 subsequences with length info [1, 0, 3].
    input.lod = [[0, 2, 2, 5], [0, 1, 3, 4, 4, 7]]
    input.data = [[1.], [3.], [2.], [4.], [6.], [5.], [1.]]
    input.shape = [7, 1]

If pool_type = sum, pooling is applied on the last lod_level [0, 1, 3, 4, 4, 7], with pad_value = 0.0.

output is Tensor:
    out.shape = [5, 1]
    out.lod = [[0, 2, 2, 5]]
    where out.shape[0] == len(x.lod[-1]) - 1 == 5

    sum: out.data = [[1.], [5.], [4.], [0.0], [12.]]
    where 1.=1., 5.=3. + 2., 4.=4., 0.0=pad_value, 12.=6. + 5. + 1.
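The Case 1 results can be reproduced with a few lines of NumPy, which also shows how the lod offsets carve the flat data into sequences and how pad_value fills the empty one; this is only an illustrative sketch (shown for sum pooling), not Paddle code:

>>> import numpy as np
>>> data = np.array([1., 3., 2., 4., 6., 5., 1.])
>>> lod = [0, 2, 5, 7, 7]                    # offsets of the 4 sequences (the last one is empty)
>>> pad_value = 0.0
>>> out = [float(data[s:e].sum()) if e > s else pad_value
...        for s, e in zip(lod[:-1], lod[1:])]
>>> out                                      # matches the 'sum' row of Case 1
[4.0, 12.0, 6.0, 0.0]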
Parameters
input (Tensor) – Tensor with lod_level no more than 2. The data type should be float32 or float64.
pool_type (str) – The pooling type that supports average, sum, sqrt, max, last or first.
is_test (bool) – Only works when pool_type is max. If set to False, a temporary Tensor maxIndex is created to record the indices of the maximum values, which is used for backward gradient calculation in the training phase. Default: False.
pad_value (float) – Used to pad the pooling result for empty input sequences. Default: 0.0.
Returns
Tensor after pooling with data type float32 or float64.
Return type
Tensor
Examples
>>> import paddle
>>> paddle.enable_static()
>>> x = paddle.static.data(name='x', shape=[None, 10], dtype='float32', lod_level=1)
>>> avg_x = paddle.static.nn.sequence_pool(input=x, pool_type='average')
>>> sum_x = paddle.static.nn.sequence_pool(input=x, pool_type='sum')
>>> sqrt_x = paddle.static.nn.sequence_pool(input=x, pool_type='sqrt')
>>> max_x = paddle.static.nn.sequence_pool(input=x, pool_type='max')
>>> last_x = paddle.static.nn.sequence_pool(input=x, pool_type='last')
>>> first_x = paddle.static.nn.sequence_pool(input=x, pool_type='first')
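The example above only builds the static graph. A possible way to execute it and feed variable-length data is sketched below; the use of paddle.fluid.create_lod_tensor and the concrete sequence lengths are assumptions for illustration and may need adapting to your Paddle version:

>>> import numpy as np
>>> import paddle.fluid as fluid             # assumption: the legacy fluid LoD helper is available
>>> place = paddle.CPUPlace()
>>> exe = paddle.static.Executor(place)
>>> exe.run(paddle.static.default_startup_program())
>>> # two sequences of lengths 3 and 2, each time-step with 10 features
>>> np_data = np.random.rand(5, 10).astype('float32')
>>> x_lod = fluid.create_lod_tensor(np_data, [[3, 2]], place)
>>> out, = exe.run(paddle.static.default_main_program(),
...                feed={'x': x_lod}, fetch_list=[sum_x])
>>> # out is a NumPy array with one pooled row per sequence, i.e. shape (2, 10)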