Tensor

class paddle.Tensor
abs ( name=None ) [source]

abs

Abs Operator.

This operator performs elementwise abs on the input $X$: \(out = |x|\)

Parameters
  • x (Tensor) – The input tensor of the abs operator.

  • with_quant_attr (BOOLEAN) – Whether the operator has attributes used by quantization.

  • name (str, optional) – Name for the operation (optional, default is None). For more information, please refer to Name.

Returns

The output tensor of the abs operator.

Return type

out (Tensor)

Examples

import paddle

x = paddle.to_tensor([-0.4, -0.2, 0.1, 0.3])
out = paddle.abs(x)
print(out)
# [0.4 0.2 0.1 0.3]
acos ( name=None ) [source]

acos

Arccosine Operator.

\(out = \cos^{-1}(x)\)

Parameters
  • x (Tensor) – Input of acos operator

  • with_quant_attr (BOOLEAN) – Whether the operator has attributes used by quantization.

  • name (str, optional) – Name for the operation (optional, default is None). For more information, please refer to Name.

Returns

Output of acos operator

Return type

out (Tensor)

Examples

import paddle

x = paddle.to_tensor([-0.4, -0.2, 0.1, 0.3])
out = paddle.acos(x)
print(out)
# [1.98231317 1.77215425 1.47062891 1.26610367]
add ( y, name=None ) [source]

add

Elementwise Add Operator.

Add two tensors element-wise

The equation is:

\(Out = X + Y\)

  • $X$: a tensor of any dimension.

  • $Y$: a tensor whose dimensions must be less than or equal to the dimensions of $X$.

There are two cases for this operator:

  1. The shape of $Y$ is the same as that of $X$.

  2. The shape of $Y$ is a continuous subsequence of the shape of $X$.

For case 2:

  1. Broadcast $Y$ to match the shape of $X$, where $axis$ is the start dimension index for broadcasting $Y$ onto $X$.

  2. If $axis$ is -1 (default), $axis = rank(X) - rank(Y)$.

  3. The trailing dimensions of size 1 for $Y$ will be ignored for the consideration of subsequence, such as shape(Y) = (2, 1) => (2).

For example:

shape(X) = (2, 3, 4, 5), shape(Y) = (,)
shape(X) = (2, 3, 4, 5), shape(Y) = (5,)
shape(X) = (2, 3, 4, 5), shape(Y) = (4, 5), with axis=-1(default) or axis=2
shape(X) = (2, 3, 4, 5), shape(Y) = (3, 4), with axis=1
shape(X) = (2, 3, 4, 5), shape(Y) = (2), with axis=0
shape(X) = (2, 3, 4, 5), shape(Y) = (2, 1), with axis=0
Parameters
  • x (Tensor) – Tensor or LoDTensor of any dimensions. Its dtype should be int32, int64, float32, float64.

  • y (Tensor) – Tensor or LoDTensor of any dimensions. Its dtype should be int32, int64, float32, float64.

  • with_quant_attr (BOOLEAN) – Whether the operator has attributes used by quantization.

  • name (string, optional) – Name of the output. Default is None. It’s used to print debug info for developers. Details: Name

Returns

N-dimensional tensor. A location into which the result is stored. Its dimensions equal those of x.

Return type

out (Tensor)

Examples:

import paddle
x = paddle.to_tensor([2, 3, 4], 'float64')
y = paddle.to_tensor([1, 5, 2], 'float64')
z = paddle.add(x, y)
print(z)  # [3., 8., 6.]
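
A short sketch of case 2 broadcasting described above (shapes taken from the list of shape examples; the printed shape is the expected result):

import paddle

x = paddle.ones([2, 3, 4, 5], dtype='float32')
y = paddle.ones([4, 5], dtype='float32')   # shape(Y) is a trailing subsequence of shape(X)
z = paddle.add(x, y)                        # Y is broadcast over the leading dimensions of X
print(z.shape)  # [2, 3, 4, 5]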

add_ ( y, name=None )

add_

Inplace version of the add API; the result is stored in the input Tensor x. Please refer to api_tensor_add.
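
A minimal sketch of the in-place behaviour, reusing the values from the add example above:

import paddle

x = paddle.to_tensor([2, 3, 4], 'float64')
y = paddle.to_tensor([1, 5, 2], 'float64')
x.add_(y)       # the result is written into x
print(x)        # [3., 8., 6.]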

add_n ( name=None ) [source]

add_n

This OP sums one or more of the input Tensors.

For example:

Case 1:

    Input:
        input.shape = [2, 3]
        input = [[1, 2, 3],
                 [4, 5, 6]]

    Output:
        output.shape = [2, 3]
        output = [[1, 2, 3],
                  [4, 5, 6]]

Case 2:

    Input:
        First input:
            input1.shape = [2, 3]
            Input1 = [[1, 2, 3],
                      [4, 5, 6]]

        The second input:
            input2.shape = [2, 3]
            input2 = [[7, 8, 9],
                      [10, 11, 12]]

        Output:
            output.shape = [2, 3]
            output = [[8, 10, 12],
                      [14, 16, 18]]
Parameters
  • inputs (Tensor|list[Tensor]|tuple[Tensor]) – A Tensor or a list/tuple of Tensors. The shape and data type of the list/tuple elements should be consistent. Input can be multi-dimensional Tensor, and data types can be: float32, float64, int32, int64.

  • name (str, optional) – The default value is None. Normally there is no need for user to set this property. For more information, please refer to Name

Returns

Tensor, the sum of the input \(inputs\); its shape and data type are consistent with \(inputs\).

Examples

import paddle

input0 = paddle.to_tensor([[1, 2, 3], [4, 5, 6]], dtype='float32')
input1 = paddle.to_tensor([[7, 8, 9], [10, 11, 12]], dtype='float32')
output = paddle.add_n([input0, input1])
# [[8., 10., 12.],
#  [14., 16., 18.]]
addmm ( x, y, beta=1.0, alpha=1.0, name=None ) [source]

addmm

This operator is used to perform matrix multiplication for input $x$ and $y$. $input$ is added to the final result. The equation is:

\[Out = alpha * x * y + beta * input\]

$Input$, $x$ and $y$ can carry the LoD (Level of Details) information, or not. But the output only shares the LoD information with input $input$.

Parameters
  • input (Tensor) – The input Tensor to be added to the final result.

  • x (Tensor) – The first input Tensor for matrix multiplication.

  • y (Tensor) – The second input Tensor for matrix multiplication.

  • beta (float) – Coefficient of $input$.

  • alpha (float) – Coefficient of $x*y$.

  • name (str, optional) – Name of the output. Normally there is no need for user to set this property. For more information, please refer to Name. Default is None.

Returns

The output Tensor of addmm op.

Return type

Tensor

Examples

import paddle

x = paddle.ones([2,2])
y = paddle.ones([2,2])
input = paddle.ones([2,2])

out = paddle.addmm( input=input, x=x, y=y, beta=0.5, alpha=5.0 )

print(out)
# [[10.5 10.5]
# [10.5 10.5]]
all ( axis=None, keepdim=False, name=None ) [source]

all

Computes the logical and of tensor elements over the given dimension.

Parameters
  • x (Tensor) – An N-D Tensor, the input data type should be bool.

  • axis (int|list|tuple, optional) – The dimensions along which the logical and is computed. If None, the logical and is computed over all elements of x and a Tensor with a single element is returned; otherwise it must be in the range \([-rank(x), rank(x))\). If \(axis[i] < 0\), the dimension to reduce is \(rank + axis[i]\).

  • keepdim (bool, optional) – Whether to reserve the reduced dimension in the output Tensor. The result Tensor will have one fewer dimension than the x unless keepdim is true, default value is False.

  • name (str, optional) – The default value is None. Normally there is no need for user to set this property. For more information, please refer to Name

Returns

Results of the logical and on the specified axis of the input Tensor x; its data type is bool.

Return type

Tensor

Raises
  • ValueError – If the data type of x is not bool.

  • TypeError – The type of axis must be int, list or tuple.

Examples

import paddle
import numpy as np

# x is a bool Tensor with following elements:
#    [[True, False]
#     [True, True]]
x = paddle.assign(np.array([[1, 0], [1, 1]], dtype='int32'))
print(x)
x = paddle.cast(x, 'bool')

# out1 should be [False]
out1 = paddle.all(x)  # [False]
print(out1)

# out2 should be [True, False]
out2 = paddle.all(x, axis=0)  # [True, False]
print(out2)

# keepdim=False, out3 should be [False, True], out3.shape should be (2,)
out3 = paddle.all(x, axis=-1)  # [False, True]
print(out3)

# keepdim=True, out4 should be [[False], [True]], out4.shape should be (2, 1)
out4 = paddle.all(x, axis=1, keepdim=True)
out4 = paddle.cast(out4, 'int32')  # [[0], [1]]
print(out4)
allclose ( y, rtol=1e-05, atol=1e-08, equal_nan=False, name=None ) [source]

allclose

This operator checks whether all elements of \(x\) and \(y\) satisfy the condition:

\[\left| x - y \right| \leq atol + rtol \times \left| y \right|\]

elementwise, for all elements of \(x\) and \(y\). The behaviour of this operator is analogous to \(numpy.allclose\), namely that it returns \(True\) if two tensors are elementwise equal within a tolerance.

Parameters
  • x (Tensor) – The input tensor, it’s data type should be float32, float64.

  • y (Tensor) – The input tensor, it’s data type should be float32, float64.

  • rtol (float, optional) – The relative tolerance. Default: \(1e-5\).

  • atol (float, optional) – The absolute tolerance. Default: \(1e-8\).

  • equal_nan (bool, optional) – If \(True\), then two \(NaNs\) will be compared as equal. Default: \(False\).

  • name (str, optional) – Name for the operation. For more information, please refer to Name. Default: None.

Returns

The output tensor; its data type is bool.

Return type

Tensor

Raises
  • TypeError – The data type of x must be one of float32, float64.

  • TypeError – The data type of y must be one of float32, float64.

  • TypeError – The type of rtol must be float.

  • TypeError – The type of atol must be float.

  • TypeError – The type of equal_nan must be bool.

Examples

import paddle

x = paddle.to_tensor([10000., 1e-07])
y = paddle.to_tensor([10000.1, 1e-08])
result1 = paddle.allclose(x, y, rtol=1e-05, atol=1e-08,
                        equal_nan=False, name="ignore_nan")
np_result1 = result1.numpy()
# [False]
result2 = paddle.allclose(x, y, rtol=1e-05, atol=1e-08,
                            equal_nan=True, name="equal_nan")
np_result2 = result2.numpy()
# [False]

x = paddle.to_tensor([1.0, float('nan')])
y = paddle.to_tensor([1.0, float('nan')])
result1 = paddle.allclose(x, y, rtol=1e-05, atol=1e-08,
                        equal_nan=False, name="ignore_nan")
np_result1 = result1.numpy()
# [False]
result2 = paddle.allclose(x, y, rtol=1e-05, atol=1e-08,
                            equal_nan=True, name="equal_nan")
np_result2 = result2.numpy()
# [True]
any ( axis=None, keepdim=False, name=None ) [source]

any

Computes the logical or of tensor elements over the given dimension.

Parameters
  • x (Tensor) – An N-D Tensor, the input data type should be bool.

  • axis (int|list|tuple, optional) – The dimensions along which the logical or is computed. If None, the logical or is computed over all elements of x and a Tensor with a single element is returned; otherwise it must be in the range \([-rank(x), rank(x))\). If \(axis[i] < 0\), the dimension to reduce is \(rank + axis[i]\).

  • keepdim (bool, optional) – Whether to reserve the reduced dimension in the output Tensor. The result Tensor will have one fewer dimension than the x unless keepdim is true, default value is False.

  • name (str, optional) – The default value is None. Normally there is no need for user to set this property. For more information, please refer to Name

Returns

Results of the logical or on the specified axis of the input Tensor x; its data type is bool.

Return type

Tensor

Raises
  • ValueError – If the data type of x is not bool.

  • TypeError – The type of axis must be int, list or tuple.

Examples

import paddle
import numpy as np

# x is a bool Tensor with following elements:
#    [[True, False]
#     [True, True]]
x = paddle.assign(np.array([[1, 0], [1, 1]], dtype='int32'))
print(x)
x = paddle.cast(x, 'bool')

# out1 should be [True]
out1 = paddle.any(x)  # [True]
print(out1)

# out2 should be [True, True]
out2 = paddle.any(x, axis=0)  # [True, True]
print(out2)

# keepdim=False, out3 should be [True, True], out3.shape should be (2,)
out3 = paddle.any(x, axis=-1)  # [True, True]
print(out3)

# keepdim=True, result should be [[True], [True]], out4.shape should be (2, 1)
out4 = paddle.any(x, axis=1, keepdim=True)
out4 = paddle.cast(out4, 'int32')  # [[1], [1]]
print(out4)
argmax ( axis=None, keepdim=False, dtype='int64', name=None ) [source]

argmax

This OP computes the indices of the maximum elements of the input tensor along the provided axis.

Parameters
  • x (Tensor) – An input N-D Tensor with type float32, float64, int16, int32, int64, uint8.

  • axis (int, optional) – Axis to compute indices along. The effective range is [-R, R), where R is x.ndim. When axis < 0, it works the same way as axis + R. Default is None; in that case the input x is flattened into a 1-D Tensor and the index of the maximum value is returned.

  • keepdim (bool, optional) – Whether to keep the reduced axis. The default value is False.

  • dtype (str|np.dtype, optional) – Data type of the output tensor which can be int32, int64. The default value is ‘int64’, and it will return the int64 indices.

  • name (str, optional) – The default value is None. Normally there is no need for user to set this property. For more information, please refer to Name.

Returns

Tensor of indices; its dtype is int32 if dtype is set to int32, otherwise int64.

Examples

import paddle

x = paddle.to_tensor([[5, 8, 9, 5],
                      [0, 0, 1, 7],
                      [6, 9, 2, 4]])
out1 = paddle.argmax(x)
print(out1) # 2
out2 = paddle.argmax(x, axis=1)
print(out2)
# [2 3 1]
out3 = paddle.argmax(x, axis=-1)
print(out3)
# [2 3 1]
argmin ( axis=None, keepdim=False, dtype='int64', name=None ) [source]

argmin

This OP computes the indices of the minimum elements of the input tensor along the provided axis.

Parameters
  • x (Tensor) – An input N-D Tensor with type float32, float64, int16, int32, int64, uint8.

  • axis (int, optional) – Axis to compute indices along. The effective range is [-R, R), where R is x.ndim. When axis < 0, it works the same way as axis + R. Default is None; in that case the input x is flattened into a 1-D Tensor and the index of the minimum value is returned.

  • keepdim (bool, optional) – Whether to keep the reduced axis. The default value is False.

  • dtype (str) – Data type of the output tensor which can be int32, int64. The default value is ‘int64’, and it will return the int64 indices.

  • name (str, optional) – The default value is None. Normally there is no need for user to set this property. For more information, please refer to Name.

Returns

Tensor of indices; its dtype is int32 if dtype is set to int32, otherwise int64.

Examples

import paddle

x = paddle.to_tensor([[5, 8, 9, 5],
                      [0, 0, 1, 7],
                      [6, 9, 2, 4]])
out1 = paddle.argmin(x)
print(out1) # 4
out2 = paddle.argmin(x, axis=1)
print(out2)
# [0 0 2]
out3 = paddle.argmin(x, axis=-1)
print(out3)
# [0 0 2]
argsort ( axis=-1, descending=False, name=None ) [source]

argsort

This OP sorts the input along the given axis and returns the corresponding index tensor for the sorted output values. The default sort order is ascending; if you want to sort in descending order, set descending to True.

Parameters
  • x (Tensor) – An input N-D Tensor with type float32, float64, int16, int32, int64, uint8.

  • axis (int, optional) – Axis to compute indices along. The effective range is [-R, R), where R is Rank(x). When axis < 0, it works the same way as axis + R. Default is -1.

  • descending (bool, optional) – A flag; if set to True, the algorithm sorts in descending order, otherwise it sorts in ascending order. Default is False (a descending sketch follows the examples below).

  • name (str, optional) – The default value is None. Normally there is no need for user to set this property. For more information, please refer to Name.

Returns

Sorted indices (with the same shape as x and with data type int64).

Return type

Tensor

Examples

import paddle

x = paddle.to_tensor([[[5,8,9,5],
                       [0,0,1,7],
                       [6,9,2,4]],
                      [[5,2,4,2],
                       [4,7,7,9],
                       [1,7,0,6]]],
                    dtype='float32')
out1 = paddle.argsort(x=x, axis=-1)
out2 = paddle.argsort(x=x, axis=0)
out3 = paddle.argsort(x=x, axis=1)
print(out1)
#[[[0 3 1 2]
#  [0 1 2 3]
#  [2 3 0 1]]
# [[1 3 2 0]
#  [0 1 2 3]
#  [2 0 3 1]]]
print(out2)
#[[[0 1 1 1]
#  [0 0 0 0]
#  [1 1 1 0]]
# [[1 0 0 0]
#  [1 1 1 1]
#  [0 0 0 1]]]
print(out3)
#[[[1 1 1 2]
#  [0 0 2 0]
#  [2 2 0 1]]
# [[2 0 2 0]
#  [1 1 0 2]
#  [0 2 1 1]]]
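
A short sketch of the descending flag described above (the result is the indices of x sorted from largest to smallest):

import paddle

x = paddle.to_tensor([1., 3., 2.])
ids = paddle.argsort(x, descending=True)
print(ids)
# [1, 2, 0]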
asin ( name=None ) [source]

asin

Arcsine Operator.

\(out = \sin^{-1}(x)\)

Parameters
  • x (Tensor) – Input of asin operator, an N-D Tensor, with data type float32, float64 or float16.

  • with_quant_attr (BOOLEAN) – Whether the operator has attributes used by quantization.

  • name (str, optional) – Name for the operation (optional, default is None). For more information, please refer to Name.

Returns

Output of asin operator

Return type

out (Tensor)

Examples

import paddle

x = paddle.to_tensor([-0.4, -0.2, 0.1, 0.3])
out = paddle.asin(x)
print(out)
# [-0.41151685 -0.20135792  0.10016742  0.30469265]
astype ( dtype )

astype

Cast a Tensor to a specified data type.

Parameters

dtype – The target data type.

Returns

a new Tensor with target dtype

Return type

Tensor

Examples

import paddle

original_tensor = paddle.ones([2, 2])
print("original tensor's dtype is: {}".format(original_tensor.dtype))
new_tensor = original_tensor.astype('int64')
print("new tensor's dtype is: {}".format(new_tensor.dtype))
atan ( name=None ) [source]

atan

Arctangent Operator.

\(out = \tan^{-1}(x)\)

Parameters
  • x (Tensor) – Input of atan operator, an N-D Tensor, with data type float32, float64 or float16.

  • with_quant_attr (BOOLEAN) – Whether the operator has attributes used by quantization.

  • name (str, optional) – Name for the operation (optional, default is None). For more information, please refer to Name.

Returns

Output of atan operator

Return type

out (Tensor)

Examples

import paddle

x = paddle.to_tensor([-0.4, -0.2, 0.1, 0.3])
out = paddle.atan(x)
print(out)
# [-0.38050638 -0.19739556  0.09966865  0.29145679]
backward ( grad_tensor=None, retain_graph=False )

backward

Run backward of current Graph which starts from current Tensor.

The new gradient will accumulate onto the previous gradient.

You can clear gradient by Tensor.clear_grad() .

Parameters
  • grad_tensor (Tensor, optional) – initial gradient values of the current Tensor. If grad_tensor is None, the initial gradient values of the current Tensor would be a Tensor filled with 1.0; if grad_tensor is not None, it must have the same length as the current Tensor. The default value is None.

  • retain_graph (bool, optional) – If False, the graph used to compute grads will be freed. If you would like to add more ops to the built graph after calling this method( backward ), set the parameter retain_graph to True, then the grads will be retained. Thus, setting it to False is much more memory-efficient. Defaults to False.

Returns

None

Return type

NoneType

Examples

import paddle
x = paddle.to_tensor(5., stop_gradient=False)
for i in range(5):
    y = paddle.pow(x, 4.0)
    y.backward()
    print("{}: {}".format(i, x.grad))
# 0: [500.]
# 1: [1000.]
# 2: [1500.]
# 3: [2000.]
# 4: [2500.]

x.clear_grad()
print("{}".format(x.grad))
# 0.

grad_tensor=paddle.to_tensor(2.)
for i in range(5):
    y = paddle.pow(x, 4.0)
    y.backward(grad_tensor)
    print("{}: {}".format(i, x.grad))
# 0: [1000.]
# 1: [2000.]
# 2: [3000.]
# 3: [4000.]
# 4: [5000.]
bincount ( weights=None, minlength=0, name=None ) [source]

bincount

Computes frequency of each value in the input tensor.

Parameters
  • x (Tensor) – A Tensor of non-negative integers. Should be a 1-D Tensor.

  • weights (Tensor, optional) – Weight for each value in the input tensor. Should have the same shape as input. Default is None.

  • minlength (int, optional) – Minimum number of bins. Should be non-negative integer. Default is 0.

  • name (str, optional) – The default value is None. Normally there is no need for user to set this property. For more information, please refer to Name.

Returns

The tensor of frequency.

Return type

Tensor

Examples

import paddle

x = paddle.to_tensor([1, 2, 1, 4, 5])
result1 = paddle.bincount(x)
print(result1) # [0, 2, 1, 0, 1, 1]

w = paddle.to_tensor([2.1, 0.4, 0.1, 0.5, 0.5])
result2 = paddle.bincount(x, weights=w)
print(result2) # [0., 2.19999981, 0.40000001, 0., 0.50000000, 0.50000000]
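
A short sketch of the minlength parameter described above (result3 is an illustrative name; bins beyond the largest value in x are padded with zeros):

import paddle

x = paddle.to_tensor([1, 2, 1, 4, 5])
result3 = paddle.bincount(x, minlength=8)
print(result3) # [0, 2, 1, 0, 1, 1, 0, 0]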
bitwise_and ( y, out=None, name=None ) [source]

bitwise_and

It operates bitwise_and on Tensor X and Y .

\[Out = X \& Y\]

Note

paddle.bitwise_and supports broadcasting. If you want to know more about broadcasting, please refer to Broadcasting.

Parameters
  • x (Tensor) – Input Tensor of bitwise_and . It is a N-D Tensor of bool, uint8, int8, int16, int32, int64

  • y (Tensor) – Input Tensor of bitwise_and . It is a N-D Tensor of bool, uint8, int8, int16, int32, int64

  • out (Tensor) – Result of bitwise_and . It is a N-D Tensor with the same data type of input Tensor

Returns

Result of bitwise_and . It is a N-D Tensor with the same data type of input Tensor

Return type

Tensor

Examples

import paddle
x = paddle.to_tensor([-5, -1, 1])
y = paddle.to_tensor([4,  2, -3])
res = paddle.bitwise_and(x, y)
print(res)  # [0, 2, 1]
bitwise_not ( out=None, name=None ) [source]

bitwise_not

It operates bitwise_not on Tensor X .

\[Out = \sim X\]
Parameters
  • x (Tensor) – Input Tensor of bitwise_not . It is a N-D Tensor of bool, uint8, int8, int16, int32, int64

  • out (Tensor) – Result of bitwise_not . It is a N-D Tensor with the same data type of input Tensor

Returns

Result of bitwise_not . It is a N-D Tensor with the same data type of input Tensor

Return type

Tensor

Examples

import paddle
x = paddle.to_tensor([-5, -1, 1])
res = paddle.bitwise_not(x)
print(res) # [4, 0, -2]
bitwise_or ( y, out=None, name=None ) [source]

bitwise_or

It operates bitwise_or on Tensor X and Y .

\[Out = X | Y\]

Note

paddle.bitwise_or supports broadcasting. If you want to know more about broadcasting, please refer to Broadcasting.

Parameters
  • x (Tensor) – Input Tensor of bitwise_or . It is a N-D Tensor of bool, uint8, int8, int16, int32, int64

  • y (Tensor) – Input Tensor of bitwise_or . It is a N-D Tensor of bool, uint8, int8, int16, int32, int64

  • out (Tensor) – Result of bitwise_or . It is a N-D Tensor with the same data type of input Tensor

Returns

Result of bitwise_or . It is a N-D Tensor with the same data type of input Tensor

Return type

Tensor

Examples

import paddle
x = paddle.to_tensor([-5, -1, 1])
y = paddle.to_tensor([4,  2, -3])
res = paddle.bitwise_or(x, y)
print(res)  # [-1, -1, -3]
bitwise_xor ( y, out=None, name=None ) [source]

bitwise_xor

It operates bitwise_xor on Tensor X and Y .

\[Out = X \oplus Y\]

Note

paddle.bitwise_xor supports broadcasting. If you want to know more about broadcasting, please refer to Broadcasting.

Parameters
  • x (Tensor) – Input Tensor of bitwise_xor . It is a N-D Tensor of bool, uint8, int8, int16, int32, int64

  • y (Tensor) – Input Tensor of bitwise_xor . It is a N-D Tensor of bool, uint8, int8, int16, int32, int64

  • out (Tensor) – Result of bitwise_xor . It is a N-D Tensor with the same data type of input Tensor

Returns

Result of bitwise_xor . It is a N-D Tensor with the same data type of input Tensor

Return type

Tensor

Examples

import paddle
x = paddle.to_tensor([-5, -1, 1])
y = paddle.to_tensor([4,  2, -3])
res = paddle.bitwise_xor(x, y)
print(res) # [-1, -3, -4]
bmm ( y, name=None ) [source]

bmm

Applies batched matrix multiplication to two tensors.

Both input tensors must be three-dimensional and share the same batch size.

If x is a (b, m, k) tensor and y is a (b, k, n) tensor, the output will be a (b, m, n) tensor.

Parameters
  • x (Tensor) – The input Tensor.

  • y (Tensor) – The input Tensor.

  • name (str|None) – A name for this layer(optional). If set None, the layer will be named automatically.

Returns

The product Tensor.

Return type

Tensor

Examples

import paddle

# In imperative mode:
# size x: (2, 2, 3) and y: (2, 3, 2)
x = paddle.to_tensor([[[1.0, 1.0, 1.0],
                    [2.0, 2.0, 2.0]],
                    [[3.0, 3.0, 3.0],
                    [4.0, 4.0, 4.0]]])
y = paddle.to_tensor([[[1.0, 1.0],[2.0, 2.0],[3.0, 3.0]],
                    [[4.0, 4.0],[5.0, 5.0],[6.0, 6.0]]])
out = paddle.bmm(x, y)
#output size: (2, 2, 2)
#output value:
#[[[6.0, 6.0],[12.0, 12.0]],[[45.0, 45.0],[60.0, 60.0]]]
out_np = out.numpy()
broadcast_shape ( y_shape ) [source]

broadcast_shape

The function returns the result shape of broadcasting two tensors with shapes x_shape and y_shape; please refer to Broadcasting for more details.

Parameters
  • x_shape (list[int]|tuple[int]) – A shape of tensor.

  • y_shape (list[int]|tuple[int]) – A shape of tensor.

Returns

list[int], the result shape.

Examples

import paddle

shape = paddle.broadcast_shape([2, 1, 3], [1, 3, 1])
# [2, 3, 3]

# shape = paddle.broadcast_shape([2, 1, 3], [3, 3, 1])
# ValueError (terminated with error message).
broadcast_tensors ( name=None ) [source]

broadcast_tensors

This OP broadcasts a list of tensors following broadcast semantics.

Note

If you want to know more about broadcasting, please refer to Broadcasting.

Parameters
  • input (list|tuple) – input is a Tensor list or Tensor tuple which is with data type bool, float16, float32, float64, int32, int64. All the Tensors in input must have same data type. Currently we only support tensors with rank no greater than 5.

  • name (str, optional) – The default value is None. Normally there is no need for user to set this property. For more information, please refer to Name.

Returns

The list of broadcasted tensors following the same order as input.

Return type

list(Tensor)

Examples

import paddle
x1 = paddle.rand([1, 2, 3, 4]).astype('float32')
x2 = paddle.rand([1, 2, 1, 4]).astype('float32')
x3 = paddle.rand([1, 1, 3, 1]).astype('float32')
out1, out2, out3 = paddle.broadcast_tensors(input=[x1, x2, x3])
# out1, out2, out3: tensors broadcasted from x1, x2, x3 with shape [1,2,3,4]
broadcast_to ( shape, name=None ) [source]

broadcast_to

Broadcast the input tensor to a given shape.

Both the number of dimensions of x and the number of elements in shape should be less than or equal to 6. The dimension to broadcast to must have a value 1.

Parameters
  • x (Tensor) – The input tensor, its data type is bool, float32, float64, int32 or int64.

  • shape (list|tuple|Tensor) – The result shape after broadcasting. The data type is int32. If shape is a list or tuple, all its elements should be integers or 1-D Tensors with the data type int32. If shape is a Tensor, it should be an 1-D Tensor with the data type int32. The value -1 in shape means keeping the corresponding dimension unchanged.

  • name (str, optional) – The default value is None. Normally there is no need for user to set this property. For more information, please refer to Name .

Returns

A Tensor with the given shape. The data type is the same as x.

Return type

N-D Tensor

Examples

import paddle

data = paddle.to_tensor([1, 2, 3], dtype='int32')
out = paddle.broadcast_to(data, shape=[2, 3])
print(out)
# [[1, 2, 3], [1, 2, 3]]
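
A short sketch of the -1 behaviour described above, assuming (as stated) that -1 keeps the corresponding dimension unchanged:

import paddle

x = paddle.to_tensor([[1, 2, 3]], dtype='int32')   # shape [1, 3]
out = paddle.broadcast_to(x, shape=[2, -1])        # -1 keeps the size-3 dimension
print(out)
# [[1, 2, 3], [1, 2, 3]]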
cast ( dtype ) [source]

cast

This OP takes in the Tensor x with x.dtype and casts it to the output with dtype. It’s meaningless if the output dtype equals the input dtype, but it’s fine if you do so.

Parameters
  • x (Tensor) – An input N-D Tensor with data type bool, float16, float32, float64, int32, int64, uint8.

  • dtype (np.dtype|core.VarDesc.VarType|str) – Data type of the output: bool, float16, float32, float64, int8, int32, int64, uint8.

Returns

A Tensor with the same shape as input’s.

Return type

Tensor

Examples

import paddle

x = paddle.to_tensor([2, 3, 4], 'float64')
y = paddle.cast(x, 'uint8')
ceil ( name=None ) [source]

ceil

Ceil Operator. Computes ceil of x element-wise.

\(out = \lceil x \rceil\)

Parameters
  • x (Tensor) – Input of Ceil operator, an N-D Tensor, with data type float32, float64 or float16.

  • with_quant_attr (BOOLEAN) – Whether the operator has attributes used by quantization.

  • name (str, optional) – Name for the operation (optional, default is None). For more information, please refer to Name.

Returns

Output of Ceil operator, a Tensor with shape same as input.

Return type

out (Tensor)

Examples

import paddle

x = paddle.to_tensor([-0.4, -0.2, 0.1, 0.3])
out = paddle.ceil(x)
print(out)
# [-0. -0.  1.  1.]
ceil_ ( name=None )

ceil_

Inplace version of the ceil API; the result is stored in the input Tensor x. Please refer to api_fluid_layers_ceil.
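
A minimal sketch of the in-place behaviour, reusing the values from the ceil example above:

import paddle

x = paddle.to_tensor([-0.4, -0.2, 0.1, 0.3])
x.ceil_()       # the result is written into x
print(x)
# [-0. -0.  1.  1.]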

cholesky ( upper=False, name=None ) [source]

cholesky

Computes the Cholesky decomposition of one symmetric positive-definite matrix or batches of symmetric positive-definite matrices.

If upper is True, the decomposition has the form \(A = U^{T}U\) , and the returned matrix \(U\) is upper-triangular. Otherwise, the decomposition has the form \(A = LL^{T}\) , and the returned matrix \(L\) is lower-triangular.

Parameters
  • x (Tensor) – The input tensor. Its shape should be [*, M, M], where * is zero or more batch dimensions, and matrices on the inner-most 2 dimensions all should be symmetric positive-definite. Its data type should be float32 or float64.

  • upper (bool) – The flag indicating whether to return upper or lower triangular matrices. Default: False.

Returns

A Tensor with same shape and data type as x. It represents

triangular matrices generated by Cholesky decomposition.

Return type

Tensor

Examples

import paddle
import numpy as np

a = np.random.rand(3, 3)
a_t = np.transpose(a, [1, 0])
x_data = np.matmul(a, a_t) + 1e-03
x = paddle.to_tensor(x_data)
out = paddle.cholesky(x, upper=False)
print(out)
# [[1.190523   0.         0.        ]
#  [0.9906703  0.27676893 0.        ]
#  [1.25450498 0.05600871 0.06400121]]
chunk ( chunks, axis=0, name=None ) [source]

chunk

Split the input tensor into multiple sub-Tensors.

Parameters
  • x (Tensor) – A N-D Tensor. The data type is bool, float16, float32, float64, int32 or int64.

  • chunks (int) – The number of tensors into which to split along the given axis.

  • axis (int|Tensor, optional) – The axis along which to split; it can be a scalar with type int or a Tensor with shape [1] and data type int32 or int64. If \(axis < 0\), the axis to split along is \(rank(x) + axis\). Default is 0.

  • name (str, optional) – The default value is None. Normally there is no need for user to set this property. For more information, please refer to Name .

Returns

The list of segmented Tensors.

Return type

list(Tensor)

Example

import numpy as np
import paddle

# x is a Tensor which shape is [3, 9, 5]
x_np = np.random.random([3, 9, 5]).astype("int32")
x = paddle.to_tensor(x_np)

out0, out1, out2 = paddle.chunk(x, chunks=3, axis=1)
# out0.shape [3, 3, 5]
# out1.shape [3, 3, 5]
# out2.shape [3, 3, 5]


# axis is negative, the real axis is (rank(x) + axis) which real
# value is 1.
out0, out1, out2 = paddle.chunk(x, chunks=3, axis=-2)
# out0.shape [3, 3, 5]
# out1.shape [3, 3, 5]
# out2.shape [3, 3, 5]
clear_grad ( )

clear_grad

The alias of clear_gradient().

clear_gradient ( self: paddle.fluid.core_avx.VarBase ) → None

clear_gradient

Only for a Tensor that has a gradient; normally we use this for Parameters, since other temporary Tensors don't have gradients.

The gradient of the current Tensor will be set to 0.

Returns: None

Examples

import paddle
input = paddle.uniform([10, 2])
linear = paddle.nn.Linear(2, 3)
out = linear(input)
out.backward()
print("Before clear_gradient, linear.weight.grad: {}".format(linear.weight.grad))
linear.weight.clear_gradient()
print("After clear_gradient, linear.weight.grad: {}".format(linear.weight.grad))
clip ( min=None, max=None, name=None ) [source]

clip

This operator clips all elements in the input into the range [ min, max ] and returns a resulting tensor as the following equation:

\[Out = MIN(MAX(x, min), max)\]
Parameters
  • x (Tensor) – An N-D Tensor with data type float32, float64, int32 or int64.

  • min (float|int|Tensor) – The lower bound with type float , int or a Tensor with shape [1] and type int32, float32, float64.

  • max (float|int|Tensor) – The upper bound with type float, int or a Tensor with shape [1] and type int32, float32, float64.

  • name (str, optional) – The default value is None. Normally there is no need for user to set this property. For more information, please refer to Name.

Returns

A Tensor with the same data type and data shape as input.

Return type

Tensor

Examples

import paddle

x1 = paddle.to_tensor([[1.2, 3.5], [4.5, 6.4]], 'float32')
out1 = paddle.clip(x1, min=3.5, max=5.0)
out2 = paddle.clip(x1, min=2.5)
print(out1)
# [[3.5, 3.5]
# [4.5, 5.0]]
print(out2)
# [[2.5, 3.5]
#  [4.5, 6.4]]
clip_ ( min=None, max=None, name=None )

clip_

Inplace version of the clip API; the result is stored in the input Tensor x. Please refer to api_tensor_clip.
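
A minimal sketch of the in-place behaviour, reusing the values from the clip example above:

import paddle

x1 = paddle.to_tensor([[1.2, 3.5], [4.5, 6.4]], 'float32')
x1.clip_(min=3.5, max=5.0)   # the result is written into x1
print(x1)
# [[3.5, 3.5]
#  [4.5, 5.0]]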

clone ( self: paddle.fluid.core_avx.VarBase ) → paddle.fluid.core_avx.VarBase

clone

Returns a new Tensor, which is a clone of the origin Tensor, and it remains in the current graph. It will always have a Tensor copy. In addition, the cloned Tensor provides gradient propagation.

Returns: The cloned Tensor.

Examples

import paddle

x = paddle.to_tensor(1.0, stop_gradient=False)
clone_x = x.clone()
y = clone_x**2
y.backward()
print(clone_x.stop_gradient) # False
print(clone_x.grad)          # [2.0], support gradient propagation
print(x.stop_gradient)       # False
print(x.grad)                # [2.0], clone_x support gradient propagation for x

x = paddle.to_tensor(1.0)
clone_x = x.clone()
clone_x.stop_gradient = False
z = clone_x**3
z.backward()
print(clone_x.stop_gradient) # False
print(clone_x.grad)          # [3.0], support gradient propagation
print(x.stop_gradient) # True
print(x.grad)          # None
concat ( axis=0, name=None ) [source]

concat

This OP concatenates the input along the axis.

Parameters
  • x (list|tuple) – x is a Tensor list or Tensor tuple which is with data type bool, float16, float32, float64, int32, int64, uint8. All the Tensors in x must have same data type.

  • axis (int|Tensor, optional) – Specify the axis to operate on the input Tensors. It’s a scalar with data type int or a Tensor with shape [1] and data type int32 or int64. The effective range is [-R, R), where R is Rank(x). When axis < 0, it works the same way as axis+R. Default is 0.

  • name (str, optional) – The default value is None. Normally there is no need for user to set this property. For more information, please refer to Name.

Returns

A Tensor with the same data type as x.

Return type

Tensor

Examples

import paddle

x1 = paddle.to_tensor([[1, 2, 3],
                       [4, 5, 6]])
x2 = paddle.to_tensor([[11, 12, 13],
                       [14, 15, 16]])
x3 = paddle.to_tensor([[21, 22],
                       [23, 24]])
zero = paddle.full(shape=[1], dtype='int32', fill_value=0)
# When the axis is negative, the real axis is (axis + Rank(x))
# As follow, axis is -1, Rank(x) is 2, the real axis is 1
out1 = paddle.concat(x=[x1, x2, x3], axis=-1)
out2 = paddle.concat(x=[x1, x2], axis=0)
out3 = paddle.concat(x=[x1, x2], axis=zero)
# out1
# [[ 1  2  3 11 12 13 21 22]
#  [ 4  5  6 14 15 16 23 24]]
# out2 out3
# [[ 1  2  3]
#  [ 4  5  6]
#  [11 12 13]
#  [14 15 16]]
cond ( p=None, name=None )

cond

Computes the condition number of a matrix or batches of matrices with respect to a matrix norm p.

Parameters
  • x (Tensor) – The input tensor could be tensor of shape (*, m, n) where * is zero or more batch dimensions for p in (2, -2), or of shape (*, n, n) where every matrix is invertible for any supported p. And the input data type could be float32 or float64.

  • p (float|string, optional) – Order of the norm. Supported values are fro, nuc, 1, -1, 2, -2, inf, -inf. Default value is None, meaning that the order of the norm is 2.

  • name (str, optional) – The default value is None. Normally there is no need for user to set this property. For more information, please refer to Name.

Returns

The computed condition number; its data type is the same as that of the input Tensor x.

Return type

Tensor

Examples

import paddle
import numpy as np

x = paddle.to_tensor([[1., 0, -1], [0, 1, 0], [1, 0, 1]])

# compute conditional number when p is None
out = paddle.linalg.cond(x)
# out.numpy() [1.4142135]

# compute conditional number when order of the norm is 'fro'
out_fro = paddle.linalg.cond(x, p='fro')
# out_fro.numpy() [3.1622777]

# compute conditional number when order of the norm is 'nuc'
out_nuc = paddle.linalg.cond(x, p='nuc')
# out_nuc.numpy() [9.2426405]

# compute conditional number when order of the norm is 1
out_1 = paddle.linalg.cond(x, p=1)
# out_1.numpy() [2.]

# compute conditional number when order of the norm is -1
out_minus_1 = paddle.linalg.cond(x, p=-1)
# out_minus_1.numpy() [1.]

# compute conditional number when order of the norm is 2
out_2 = paddle.linalg.cond(x, p=2)
# out_2.numpy() [1.4142135]

# compute conditional number when order of the norm is -2
out_minus_2 = paddle.linalg.cond(x, p=-2)
# out_minus_2.numpy() [0.70710677]

# compute conditional number when order of the norm is inf
out_inf = paddle.linalg.cond(x, p=np.inf)
# out_inf.numpy() [2.]

# compute conditional number when order of the norm is -inf
out_minus_inf = paddle.linalg.cond(x, p=-np.inf)
# out_minus_inf.numpy() [1.]

a = paddle.to_tensor(np.random.randn(2, 4, 4).astype('float32'))
# a.numpy()
# [[[ 0.14063153 -0.996288    0.7996131  -0.02571543]
#   [-0.16303636  1.5534962  -0.49919784 -0.04402903]
#   [-1.1341571  -0.6022629   0.5445269   0.29154757]
#   [-0.16816919 -0.30972657  1.7521842  -0.5402487 ]]
#  [[-0.58081484  0.12402827  0.7229862  -0.55046535]
#   [-0.15178485 -1.1604939   0.75810957  0.30971205]
#   [-0.9669573   1.0940945  -0.27363303 -0.35416734]
#   [-1.216529    2.0018666  -0.7773689  -0.17556527]]]
a_cond_fro = paddle.linalg.cond(a, p='fro')
# a_cond_fro.numpy()  [31.572273 28.120834]

b = paddle.to_tensor(np.random.randn(2, 3, 4).astype('float64'))
# b.numpy()
# [[[ 1.61707487  0.46829144  0.38130416  0.82546736]
#   [-1.72710298  0.08866375 -0.62518804  0.16128892]
#   [-0.02822879 -1.67764516  0.11141444  0.3220113 ]]
#  [[ 0.22524372  0.62474921 -0.85503233 -1.03960523]
#   [-0.76620689  0.56673047  0.85064753 -0.45158196]
#   [ 1.47595418  2.23646462  1.5701758   0.10497519]]]
b_cond_2 = paddle.linalg.cond(b, p=2)
# b_cond_2.numpy()  [3.30064451 2.51976252]
conj ( name=None ) [source]

conj

This function computes the conjugate of the Tensor element-wise.

Parameters
  • x (Tensor) – The input tensor which holds the complex numbers. Optional data types are: complex64, complex128, float32, float64, int32 or int64.

  • name (str, optional) – The default value is None. Normally there is no need for user to set this property. For more information, please refer to Name

Returns

The conjugate of input. The shape and data type is the same with input.

If the elements of the tensor are of a real type such as float32, float64, int32 or int64, the output is the same as the input.

Return type

out (Tensor)

Examples

import paddle
data=paddle.to_tensor([[1+1j, 2+2j, 3+3j], [4+4j, 5+5j, 6+6j]])
#Tensor(shape=[2, 3], dtype=complex64, place=CUDAPlace(0), stop_gradient=True,
#       [[(1+1j), (2+2j), (3+3j)],
#        [(4+4j), (5+5j), (6+6j)]])

conj_data=paddle.conj(data)
#Tensor(shape=[2, 3], dtype=complex64, place=CUDAPlace(0), stop_gradient=True,
#       [[(1-1j), (2-2j), (3-3j)],
#        [(4-4j), (5-5j), (6-6j)]])
copy_ ( self: paddle.fluid.core_avx.VarBase, arg0: paddle.fluid.core_avx.VarBase, arg1: bool ) → None

copy_
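
A minimal, hedged sketch, assuming copy_(src, blocking) copies the data of src into this Tensor in place (the meaning of the second argument is an assumption here, taken to be a blocking flag):

import paddle

src = paddle.to_tensor([1.0, 2.0, 3.0])
dst = paddle.zeros([3])
dst.copy_(src, True)   # assumed: copies src's data into dst in place; True assumed to mean a blocking copy
print(dst)
# [1., 2., 3.]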

cos ( name=None ) [source]

cos

Cosine Operator. Computes cosine of x element-wise.

Input range is (-inf, inf) and output range is [-1,1].

\(out = cos(x)\)

Parameters
  • x (Tensor) – Input of Cos operator, an N-D Tensor, with data type float32, float64 or float16.

  • with_quant_attr (BOOLEAN) – Whether the operator has attributes used by quantization.

  • name (str, optional) – Name for the operation (optional, default is None). For more information, please refer to Name.

Returns

Output of Cos operator, a Tensor with shape same as input.

Return type

out (Tensor)

Examples

import paddle

x = paddle.to_tensor([-0.4, -0.2, 0.1, 0.3])
out = paddle.cos(x)
print(out)
# [0.92106099 0.98006658 0.99500417 0.95533649]
cosh ( name=None ) [source]

cosh

Cosh Activation Operator.

\(out = cosh(x)\)

Parameters
  • x (Tensor) – Input of Cosh operator, an N-D Tensor, with data type float32, float64 or float16.

  • with_quant_attr (BOOLEAN) – Whether the operator has attributes used by quantization.

  • name (str, optional) – Name for the operation (optional, default is None). For more information, please refer to Name.

Returns

Output of Cosh operator, a Tensor with shape same as input.

Return type

out (Tensor)

Examples

import paddle

x = paddle.to_tensor([-0.4, -0.2, 0.1, 0.3])
out = paddle.cosh(x)
print(out)
# [1.08107237 1.02006676 1.00500417 1.04533851]
cpu ( self: paddle.fluid.core_avx.VarBase ) → paddle.fluid.core_avx.VarBase

cpu

Returns a copy of this Tensor in CPU memory.

If this Tensor is already in CPU memory, then no copy is performed and the original Tensor is returned.

Examples

import paddle
x = paddle.to_tensor(1.0, place=paddle.CUDAPlace(0))
print(x.place)    # CUDAPlace(0)

y = x.cpu()
print(y.place)    # CPUPlace
cross ( y, axis=None, name=None ) [source]

cross

Computes the cross product between two tensors along an axis.

Inputs must have the same shape, and the length of their axes should be equal to 3. If axis is not given, it defaults to the first axis found with the length 3.

Parameters
  • x (Tensor) – The first input tensor.

  • y (Tensor) – The second input tensor.

  • axis (int, optional) – The axis along which to compute the cross product. It defaults to the first axis found with the length 3.

  • name (str, optional) – Name for the operation (optional, default is None). For more information, please refer to Name.

Returns

Tensor. A Tensor with same data type as x.

Examples

import paddle

x = paddle.to_tensor([[1.0, 1.0, 1.0],
                      [2.0, 2.0, 2.0],
                      [3.0, 3.0, 3.0]])
y = paddle.to_tensor([[1.0, 1.0, 1.0],
                      [1.0, 1.0, 1.0],
                      [1.0, 1.0, 1.0]])

z1 = paddle.cross(x, y)
# [[-1. -1. -1.]
#  [ 2.  2.  2.]
#  [-1. -1. -1.]]

z2 = paddle.cross(x, y, axis=1)
# [[0. 0. 0.]
#  [0. 0. 0.]
#  [0. 0. 0.]]
cuda ( self: paddle.fluid.core_avx.VarBase, device_id: handle = None, blocking: bool = True ) → paddle.fluid.core_avx.VarBase

cuda

Returns a copy of this Tensor in GPU memory.

If this Tensor is already in GPU memory and device_id is default, then no copy is performed and the original Tensor is returned.

Parameters
  • device_id (int, optional) – The destination GPU device id. Default: None, means current device.

  • blocking (bool, optional) – If False and the source is in pinned memory, the copy will be asynchronous with respect to the host. Otherwise, the argument has no effect. Default: True.

Examples

# required: gpu
import paddle
x = paddle.to_tensor(1.0, place=paddle.CPUPlace())
print(x.place)        # CPUPlace

y = x.cuda()
print(y.place)        # CUDAPlace(0)

y = x.cuda(None)
print(y.place)        # CUDAPlace(0)

y = x.cuda(1)
print(y.place)        # CUDAPlace(1)
cumprod ( dim=None, dtype=None, name=None ) [source]

cumprod

Compute the cumulative product of the input tensor x along a given dimension dim.

Note: The first element of the result is the same as the first element of the input.

Parameters
  • x (Tensor) – the input tensor need to be cumproded.

  • dim (int) – the dimension along which the input tensor will be accumulated. It needs to be in the range of [-x.rank, x.rank), where x.rank means the dimensions of the input tensor x and -1 means the last dimension.

  • dtype (str, optional) – The data type of the output tensor, can be float32, float64, int32, int64, complex64, complex128. If specified, the input tensor is casted to dtype before the operation is performed. This is useful for preventing data type overflows. The default value is None.

  • name (str, optional) – Name for the operation (optional, default is None). For more information, please refer to Name.

Returns

Tensor, the result of cumprod operator.

Examples

import paddle

data = paddle.arange(12)
data = paddle.reshape(data, (3, 4))
# [[ 0  1  2  3 ]
#  [ 4  5  6  7 ]
#  [ 8  9  10 11]]

y = paddle.cumprod(data, dim=0)
# [[ 0  1   2   3]
#  [ 0  5  12  21]
#  [ 0 45 120 231]]

y = paddle.cumprod(data, dim=-1)
# [[ 0   0   0    0]
#  [ 4  20 120  840]
#  [ 8  72 720 7920]]

y = paddle.cumprod(data, dim=1, dtype='float64')
# [[ 0.   0.   0.    0.]
#  [ 4.  20. 120.  840.]
#  [ 8.  72. 720. 7920.]]

print(y.dtype)
# paddle.float64
cumsum ( axis=None, dtype=None, name=None ) [source]

cumsum

The cumulative sum of the elements along a given axis.

Note: The first element of the result is the same as the first element of the input.

Parameters
  • x (Tensor) – The input tensor to be cumulatively summed.

  • axis (int, optional) – The dimension to accumulate along. -1 means the last dimension. The default (None) is to compute the cumsum over the flattened array.

  • dtype (str, optional) – The data type of the output tensor, can be float32, float64, int32, int64. If specified, the input tensor is casted to dtype before the operation is performed. This is useful for preventing data type overflows. The default value is None.

  • name (str, optional) – Name for the operation (optional, default is None). For more information, please refer to Name.

Returns

Tensor, the result of cumsum operator.

Examples

import paddle

data = paddle.arange(12)
data = paddle.reshape(data, (3, 4))

y = paddle.cumsum(data)
# [ 0  1  3  6 10 15 21 28 36 45 55 66]

y = paddle.cumsum(data, axis=0)
# [[ 0  1  2  3]
#  [ 4  6  8 10]
#  [12 15 18 21]]

y = paddle.cumsum(data, axis=-1)
# [[ 0  1  3  6]
#  [ 4  9 15 22]
#  [ 8 17 27 38]]

y = paddle.cumsum(data, dtype='float64')
print(y.dtype)
# VarType.FP64
detach ( self: paddle.fluid.core_avx.VarBase ) → paddle.fluid.core_avx.VarBase

detach

Returns a new Tensor, detached from the current graph. It shares data with the origin Tensor and never makes a Tensor copy. In addition, the detached Tensor doesn't provide gradient propagation.

Returns: The detached Tensor.

Examples

import paddle

x = paddle.to_tensor(1.0, stop_gradient=False)
detach_x = x.detach()
detach_x[:] = 10.0
print(x)  # Tensor(shape=[1], dtype=float32, place=CPUPlace, stop_gradient=False,
          #        [10.])
y = x**2
y.backward()
print(x.grad)         # [20.0]
print(detach_x.grad)  # None, 'stop_gradient=True' by default

detach_x.stop_gradient = False # Set stop_gradient to be False, supported auto-grad
z = detach_x**3
z.backward()

print(x.grad)         # [20.0], detach_x is detached from x's graph, not affect each other
print(detach_x.grad)  # [300.0], detach_x has its own graph

# Due to sharing of data with origin Tensor, There are some unsafe operations:
y = 2 * x
detach_x[:] = 5.0
y.backward()
# It will raise Error:
#   one of the variables needed for gradient computation has been modified by an inplace operation.
diagonal ( offset=0, axis1=0, axis2=1, name=None ) [source]

diagonal

This OP computes the diagonals of the input tensor x.

If x is 2D, returns the diagonal. If x has more dimensions, diagonals are taken from the 2D planes specified by axis1 and axis2. By default, the 2D planes are formed by the first and second axes of the input tensor x.

The argument offset determines where diagonals are taken from input tensor x:

  • If offset = 0, it is the main diagonal.

  • If offset > 0, it is above the main diagonal.

  • If offset < 0, it is below the main diagonal.

Parameters
  • x (Tensor) – The input tensor x. Must be at least 2-dimensional. The input data type should be bool, int32, int64, float16, float32, float64.

  • offset (int, optional) – Which diagonals in input tensor x will be taken. Default: 0 (main diagonals).

  • axis1 (int, optional) – The first axis with respect to take diagonal. Default: 0.

  • axis2 (int, optional) – The second axis with respect to take diagonal. Default: 1.

  • name (str, optional) – Normally there is no need for user to set this property. For more information, please refer to Name. Default: None.

Returns

A partial view of the input tensor in the specified two dimensions; the output data type is the same as the input data type.

Return type

Tensor

Examples

import paddle

x = paddle.rand([2,2,3],'float32')
print(x)
# Tensor(shape=[2, 2, 3], dtype=float32, place=CUDAPlace(0), stop_gradient=True,
#        [[[0.45661032, 0.03751532, 0.90191704],
#          [0.43760979, 0.86177313, 0.65221709]],

#         [[0.17020577, 0.00259554, 0.28954273],
#          [0.51795638, 0.27325270, 0.18117726]]])

out1 = paddle.diagonal(x)
print(out1)
#Tensor(shape=[3, 2], dtype=float32, place=CUDAPlace(0), stop_gradient=True,
#       [[0.45661032, 0.51795638],
#        [0.03751532, 0.27325270],
#        [0.90191704, 0.18117726]])

out2 = paddle.diagonal(x, offset=0, axis1=2, axis2=1)
print(out2)
#Tensor(shape=[2, 2], dtype=float32, place=CUDAPlace(0), stop_gradient=True,
#       [[0.45661032, 0.86177313],
#        [0.17020577, 0.27325270]])

out3 = paddle.diagonal(x, offset=1, axis1=0, axis2=1)
print(out3)
#Tensor(shape=[3, 1], dtype=float32, place=CUDAPlace(0), stop_gradient=True,
#       [[0.43760979],
#        [0.86177313],
#        [0.65221709]])

out4 = paddle.diagonal(x, offset=0, axis1=1, axis2=2)
print(out4)
#Tensor(shape=[2, 2], dtype=float32, place=CUDAPlace(0), stop_gradient=True,
#       [[0.45661032, 0.86177313],
#        [0.17020577, 0.27325270]])
digamma ( name=None ) [source]

digamma

Calculates the digamma of the given input tensor, element-wise.

\[Out = \Psi(x) = \frac{ \Gamma^{'}(x) }{ \Gamma(x) }\]
Parameters
  • x (Tensor) – Input Tensor. Must be one of the following types: float32, float64.

  • name (str, optional) – The default value is None. Normally there is no need for user to set this property. For more information, please refer to Name

Returns

Tensor, the digamma of the input Tensor, the shape and data type is the same with input.

Examples

import paddle

data = paddle.to_tensor([[1, 1.5], [0, -2.2]], dtype='float32')
res = paddle.digamma(data)
print(res)
# Tensor(shape=[2, 2], dtype=float32, place=CUDAPlace(0), stop_gradient=True,
#       [[-0.57721591,  0.03648996],
#        [ nan       ,  5.32286835]])
dist ( y, p=2, name=None ) [source]

dist

This OP returns the p-norm of (x - y). It is not a norm in a strict sense, only a measure of distance. The shapes of x and y must be broadcastable. The definition is as follows; for details, please refer to the numpy's broadcasting:

  • Each input has at least one dimension.

  • Match the two input dimensions from back to front, the dimension sizes must either be equal, one of them is 1, or one of them does not exist.

Where, z = x - y, the shapes of x and y are broadcastable, then the shape of z can be obtained as follows:

1. If the number of dimensions of x and y are not equal, prepend 1 to the dimensions of the tensor with fewer dimensions.

For example, The shape of x is [8, 1, 6, 1], the shape of y is [7, 1, 5], prepend 1 to the dimension of y.

x (4-D Tensor): 8 x 1 x 6 x 1

y (4-D Tensor): 1 x 7 x 1 x 5

2. Determine the size of each dimension of the output z: choose the maximum value from the two input dimensions.

z (4-D Tensor): 8 x 7 x 6 x 5

If the number of dimensions of the two inputs are the same, the size of the output can be directly determined in step 2. When p takes different values, the norm formula is as follows:

When p = 0, defining $0^0=0$, the zero-norm of z is simply the number of non-zero elements of z.

\[||z||_{0}=\lim_{p \rightarrow 0}\sum_{i=1}^{m}|z_i|^{p}\]

When p = inf, the inf-norm of z is the maximum element of z.

\[||z||_\infty=\max_i |z_i|\]

When p = -inf, the negative-inf-norm of z is the minimum element of z.

\[||z||_{-\infty}=\min_i |z_i|\]

Otherwise, the p-norm of z follows the formula,

\[||z||_{p}=\left(\sum_{i=1}^{m}|z_i|^p\right)^{\frac{1}{p}}\]
Parameters
  • x (Tensor) – 1-D to 6-D Tensor, its data type is float32 or float64.

  • y (Tensor) – 1-D to 6-D Tensor, its data type is float32 or float64.

  • p (float, optional) – The norm to be computed, its data type is float32 or float64. Default: 2.

Returns

Tensor that is the p-norm of (x - y).

Return type

Tensor

Examples

import paddle
import numpy as np

x = paddle.to_tensor(np.array([[3, 3],[3, 3]]), "float32")
y = paddle.to_tensor(np.array([[3, 3],[3, 1]]), "float32")
out = paddle.dist(x, y, 0)
print(out) # out = [1.]

out = paddle.dist(x, y, 2)
print(out) # out = [2.]

out = paddle.dist(x, y, float("inf"))
print(out) # out = [2.]

out = paddle.dist(x, y, float("-inf"))
print(out) # out = [0.]
divide ( y, name=None ) [source]

divide

Divide two tensors element-wise. The equation is:

\[out = x / y\]

Note: paddle.divide supports broadcasting. If you want to know more about broadcasting, please refer to Broadcasting.

Parameters
  • x (Tensor) – the input tensor, it’s data type should be float32, float64, int32, int64.

  • y (Tensor) – the input tensor, it’s data type should be float32, float64, int32, int64.

  • name (str, optional) – Name for the operation (optional, default is None). For more information, please refer to Name.

Returns

N-D Tensor. A location into which the result is stored. If x, y have different shapes and are “broadcastable”, the resulting tensor shape is the shape of x and y after broadcasting. If x, y have the same shape, its shape is the same as x and y.

Examples

import paddle

x = paddle.to_tensor([2, 3, 4], dtype='float64')
y = paddle.to_tensor([1, 5, 2], dtype='float64')
z = paddle.divide(x, y)
print(z)  # [2., 0.6, 2.]
dot ( y, name=None ) [source]

dot

This operator calculates inner product for vectors.

Note

Supports 1-D and 2-D Tensors. When the inputs are 2-D, the first dimension of the matrix is the batch dimension, which means that the vectors of multiple batches are dotted.

Parameters
  • x (Tensor) – 1-D or 2-D Tensor. Its dtype should be float32, float64, int32, int64

  • y (Tensor) – 1-D or 2-D Tensor. Its dtype should be float32, float64, int32, int64

  • name (str, optional) – Name of the output. Default is None. It’s used to print debug info for developers. Details: Name

Returns

the calculated result Tensor.

Return type

Tensor

Examples:

import paddle
import numpy as np

x_data = np.random.uniform(0.1, 1, [10]).astype(np.float32)
y_data = np.random.uniform(1, 3, [10]).astype(np.float32)
x = paddle.to_tensor(x_data)
y = paddle.to_tensor(y_data)
z = paddle.dot(x, y)
print(z)
eig ( name=None )

eig

This API performs the eigenvalue decomposition of a square matrix or a batch of square matrices.

Note

If the matrix is a Hermitian or a real symmetric matrix, please use paddle.linalg.eigh instead, which is much faster. If only eigenvalues are needed, please use paddle.linalg.eigvals instead. If the matrix is of any shape, please use paddle.linalg.svd. This API is only supported on CPU devices. The output datatype is always complex for both real and complex input.

Parameters
  • x (Tensor) – A tensor with shape \([*, N, N]\). The data type of x should be one of float32, float64, complex64 or complex128.

  • name (str, optional) – The default value is None. Normally there is no need for user to set this property. For more information, please refer to Name.

Returns

A tensor with shape \([*, N]\) containing the eigenvalues, and a tensor with shape \([*, N, N]\) containing the eigenvectors.

Return type

Eigenvalues (Tensor), Eigenvectors (Tensor)

Examples

import paddle
import numpy as np

paddle.device.set_device("cpu")

x_data = np.array([[1.6707249, 7.2249975, 6.5045543],
                   [9.956216,  8.749598,  6.066444 ],
                   [4.4251957, 1.7983172, 0.370647 ]]).astype("float32")
x = paddle.to_tensor(x_data)
w, v = paddle.linalg.eig(x)
print(w)
# Tensor(shape=[3], dtype=complex128, place=CPUPlace, stop_gradient=False,
#       [ (16.50471283351188+0j)  , (-5.5034820550763515+0j) ,
#         (-0.21026087843552282+0j)])

print(v)
# Tensor(shape=[3, 3], dtype=complex128, place=CPUPlace, stop_gradient=False,
#       [[(-0.5061363550800655+0j) , (-0.7971760990842826+0j) ,
#         (0.18518077798279986+0j)],
#        [(-0.8308237755993192+0j) ,  (0.3463813401919749+0j) ,
#         (-0.6837005269141947+0j) ],
#        [(-0.23142567697893396+0j),  (0.4944999840400175+0j) ,
#         (0.7058765252952796+0j) ]])
eigvals ( name=None )

eigvals

Compute the eigenvalues of one or more general matrices.

Warning

The gradient kernel of this operator has not yet been developed. If you need back propagation through this operator, please replace it with paddle.linalg.eig.

Parameters
  • x (Tensor) – A square matrix or a batch of square matrices whose eigenvalues will be computed. Its shape should be [*, M, M], where * is zero or more batch dimensions. Its data type should be float32, float64, complex64, or complex128.

  • name (str, optional) – Name for the operation (optional, default is None). For more information, please refer to Name.

Returns

A tensor containing the unsorted eigenvalues which has the same batch dimensions with x.

The eigenvalues are complex-valued even when x is real.

Return type

Tensor

Examples

import paddle

paddle.set_device("cpu")
paddle.seed(1234)

x = paddle.rand(shape=[3, 3], dtype='float64')
# [[0.02773777, 0.93004224, 0.06911496],
#  [0.24831591, 0.45733623, 0.07717843],
#  [0.48016702, 0.14235102, 0.42620817]])

print(paddle.linalg.eigvals(x))
# [(-0.27078833542132674+0j), (0.29962280156230725+0j), (0.8824477020120244+0j)] #complex128
eigvalsh ( UPLO='L', name=None ) [source]

eigvalsh

Computes the eigenvalues of a complex Hermitian (conjugate symmetric) or a real symmetric matrix.

Parameters
  • x (Tensor) – A tensor with shape \([*, M, M]\). The data type of the input Tensor x should be one of float32, float64, complex64, complex128.

  • UPLO (str, optional) – Lower triangular part of a (‘L’, default) or the upper triangular part (‘U’).

  • name (str, optional) – The default value is None. Normally there is no need for user to set this property. For more information, please refer to Name.

Returns

The tensor eigenvalues in ascending order.

Return type

Tensor

Examples

import numpy as np
import paddle

x_data = np.array([[1, -2j], [2j, 5]])
x = paddle.to_tensor(x_data)
out_value = paddle.eigvalsh(x, UPLO='L')
print(out_value)
#[0.17157288, 5.82842712]
equal ( y, name=None ) [source]

equal

This layer returns the truth value of \(x == y\) elementwise.

NOTICE: The output of this OP has no gradient.

Parameters
  • x (Tensor) – Tensor, data type is bool, float32, float64, int32, int64.

  • y (Tensor) – Tensor, data type is bool, float32, float64, int32, int64.

  • name (str, optional) – The default value is None. Normally there is no need for user to set this property. For more information, please refer to Name.

Returns

output Tensor, its shape is the same as the input Tensor, and the data type is bool. The result of this op is stop_gradient.

Return type

Tensor

Examples

import paddle

x = paddle.to_tensor([1, 2, 3])
y = paddle.to_tensor([1, 3, 2])
result1 = paddle.equal(x, y)
print(result1)  # result1 = [True False False]
equal_all ( y, name=None ) [source]

equal_all

This OP returns the truth value of \(x == y\). True if two inputs have the same elements, False otherwise.

NOTICE: The output of this OP has no gradient.

Parameters
  • x (Tensor) – Tensor, data type is bool, float32, float64, int32, int64.

  • y (Tensor) – Tensor, data type is bool, float32, float64, int32, int64.

  • name (str, optional) – The default value is None. Normally there is no need for user to set this property. For more information, please refer to Name.

Returns

output Tensor, data type is bool, value is [False] or [True].

Return type

Tensor

Examples

import paddle

x = paddle.to_tensor([1, 2, 3])
y = paddle.to_tensor([1, 2, 3])
z = paddle.to_tensor([1, 4, 3])
result1 = paddle.equal_all(x, y)
print(result1) # result1 = [True ]
result2 = paddle.equal_all(x, z)
print(result2) # result2 = [False ]
erf ( name=None ) [source]

erf

Erf Operator. For more details, see [Error function](https://en.wikipedia.org/wiki/Error_function).

Equation:
\[out = \frac{2}{\sqrt{\pi}} \int_{0}^{x} e^{-\eta^{2}} d\eta\]
Parameters

x (Tensor) – The input tensor, it’s data type should be float32, float64.

Returns

The output of Erf op, dtype: float32 or float64, the same as the input, shape: the same as the input.

Return type

Tensor

Examples

import paddle
x = paddle.to_tensor([-0.4, -0.2, 0.1, 0.3])
out = paddle.erf(x)
print(out)
# [-0.42839236 -0.22270259  0.11246292  0.32862676]
exp ( name=None ) [source]

exp

Exp Operator. Computes exp of x element-wise with a natural number \(e\) as the base.

\(out = e^x\)

Parameters
  • x (Tensor) – Input of Exp operator, an N-D Tensor, with data type float32, float64 or float16.

  • with_quant_attr (BOOLEAN) – Whether the operator has attributes used by quantization.

  • name (str, optional) – Name for the operation (optional, default is None). For more information, please refer to Name.

Returns

Output of Exp operator, a Tensor with shape same as input.

Return type

out (Tensor)

Examples

import paddle

x = paddle.to_tensor([-0.4, -0.2, 0.1, 0.3])
out = paddle.exp(x)
print(out)
# [0.67032005 0.81873075 1.10517092 1.34985881]
exp_ ( name=None )

exp_

Inplace version of exp API, the output Tensor will be inplaced with input x. Please refer to api_fluid_layers_exp.

expand ( shape, name=None ) [source]

expand

Expand the input tensor to a given shape.

Both the number of dimensions of x and the number of elements in shape should be less than or equal to 6. The dimension to expand must have a value 1.

Parameters
  • x (Tensor) – The input tensor, its data type is bool, float32, float64, int32 or int64.

  • shape (list|tuple|Tensor) – The result shape after expanding. The data type is int32. If shape is a list or tuple, all its elements should be integers or 1-D Tensors with the data type int32. If shape is a Tensor, it should be an 1-D Tensor with the data type int32. The value -1 in shape means keeping the corresponding dimension unchanged.

  • name (str, optional) – The default value is None. Normally there is no need for user to set this property. For more information, please refer to Name .

Returns

A Tensor with the given shape. The data type is the same as x.

Return type

N-D Tensor

Examples

import paddle

data = paddle.to_tensor([1, 2, 3], dtype='int32')
out = paddle.expand(data, shape=[2, 3])
print(out)
# [[1, 2, 3], [1, 2, 3]]
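
The shape argument above allows -1 to keep a dimension unchanged; a small sketch of that behavior:

import paddle

data = paddle.to_tensor([[1, 2, 3]], dtype='int32')   # shape [1, 3]
out = paddle.expand(data, shape=[2, -1])              # -1 keeps the last dimension at 3
print(out)
# expected: [[1, 2, 3], [1, 2, 3]]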
expand_as ( y, name=None ) [source]

expand_as

Expand the input tensor x to the same shape as the input tensor y.

Both the number of dimensions of x and y must be less than or equal to 6, and the number of dimensions of y must be greater than or equal to that of x. The dimension to expand must have a value of 1.

Parameters
  • x (Tensor) – The input tensor, its data type is bool, float32, float64, int32 or int64.

  • y (Tensor) – The input tensor that gives the shape to expand to.

  • name (str, optional) – The default value is None. Normally there is no need for user to set this property. For more information, please refer to Name.

Returns

A Tensor with the same shape as y. The data type is the same as x.

Return type

N-D Tensor

Examples

import paddle

data_x = paddle.to_tensor([1, 2, 3], 'int32')
data_y = paddle.to_tensor([[1, 2, 3], [4, 5, 6]], 'int32')
out = paddle.expand_as(data_x, data_y)
np_out = out.numpy()
# [[1, 2, 3], [1, 2, 3]]
fill_ ( value )

fill_

Notes:

This API is ONLY available in Dygraph mode

This function fills the Tensor with value in place.

Parameters
  • x (Tensor) – x is the Tensor to be filled in place

  • value (Scalar) – value is the value to be filled into x

Returns

Tensor x filled with value inplace

Return type

x(Tensor)

Examples

import paddle

tensor = paddle.to_tensor([0, 1, 2, 3, 4])

tensor.fill_(0)
print(tensor.tolist())   #[0, 0, 0, 0, 0]
fill_diagonal_ ( value, offset=0, wrap=False, name=None )

fill_diagonal_

Notes:

This API is ONLY available in Dygraph mode


This function fills the value into the diagonal of the x Tensor in place.

Parameters
  • x (Tensor) – x is the original Tensor

  • value (Scalar) – value is the value to be filled in x

  • offset (int, optional) – the offset to the main diagonal. Default: 0 (main diagonal).

  • wrap (bool, optional) – the diagonal ‘wrapped’ after N columns for tall matrices. Default: False.

  • name (str, optional) – Name for the operation (optional, default is None)

Returns

Tensor with diagonal filled with value.

Return type

Tensor

Returns type:

dtype is same as x Tensor


Examples

import paddle

x = paddle.ones((4, 3)) * 2
x.fill_diagonal_(1.0)
print(x.tolist())   #[[1.0, 2.0, 2.0], [2.0, 1.0, 2.0], [2.0, 2.0, 1.0], [2.0, 2.0, 2.0]]
fill_diagonal_tensor ( y, offset=0, dim1=0, dim2=1, name=None )

fill_diagonal_tensor

This function fills the source Tensor y into the x Tensor’s diagonal.

Parameters
  • x (Tensor) – x is the original Tensor

  • y (Tensor) – y is the Tensor to be filled into x

  • dim1 (int,optional) – first dimension with respect to which to fill diagonal. Default: 0.

  • dim2 (int,optional) – second dimension with respect to which to fill diagonal. Default: 1.

  • offset (int,optional) – the offset to the main diagonal. Default: 0 (main diagonal).

  • name (str,optional) – Name for the operation (optional, default is None)

Returns

Tensor with diagonal filled with y.

Return type

Tensor

Returns type:

list: dtype is same as x Tensor

Examples

import paddle

x = paddle.ones((4, 3)) * 2
y = paddle.ones((3,))
nx = x.fill_diagonal_tensor(y)
print(nx.tolist())   #[[1.0, 2.0, 2.0], [2.0, 1.0, 2.0], [2.0, 2.0, 1.0], [2.0, 2.0, 2.0]]
fill_diagonal_tensor_ ( y, offset=0, dim1=0, dim2=1, name=None )

fill_diagonal_tensor_

Notes:

This API is ONLY available in Dygraph mode

This function fills the source Tensor y into the x Tensor’s diagonal in place.

Parameters
  • x (Tensor) – x is the original Tensor

  • y (Tensor) – y is the Tensor to be filled into x

  • dim1 (int,optional) – first dimension with respect to which to fill diagonal. Default: 0.

  • dim2 (int,optional) – second dimension with respect to which to fill diagonal. Default: 1.

  • offset (int,optional) – the offset to the main diagonal. Default: 0 (main diagonal).

  • name (str,optional) – Name for the operation (optional, default is None)

Returns

Tensor with diagonal filled with y.

Return type

Tensor

Returns type:

list: dtype is same as x Tensor

Examples

import paddle

x = paddle.ones((4, 3)) * 2
y = paddle.ones((3,))
x.fill_diagonal_tensor_(y)
print(x.tolist())   #[[1.0, 2.0, 2.0], [2.0, 1.0, 2.0], [2.0, 2.0, 1.0], [2.0, 2.0, 2.0]]
flatten ( start_axis=0, stop_axis=- 1, name=None ) [source]

flatten

Flatten op

Flattens a contiguous range of axes in a tensor according to start_axis and stop_axis.

Note that the output Tensor will share data with origin Tensor and doesn’t have a Tensor copy in dygraph mode. If you want to use the Tensor copy version, please use Tensor.clone like flatten_clone_x = x.flatten().clone().

For Example:

Case 1:

  Given
    X.shape = (3, 100, 100, 4)

  and
    start_axis = 1
    stop_axis = 2

  We get:
    Out.shape = (3, 100 * 100, 4)

Case 2:

  Given
    X.shape = (3, 100, 100, 4)

  and
    start_axis = 0
    stop_axis = -1

  We get:
    Out.shape = (3 * 100 * 100 * 4, )
Parameters
  • x (Tensor) – A tensor whose number of dimensions is >= start_axis. A tensor with data type float32, float64, int8, int32, int64, uint8.

  • start_axis (int) – the start axis to flatten

  • stop_axis (int) – the stop axis to flatten

  • name (str, Optional) – For details, please refer to Name. Generally, no setting is required. Default: None.

Returns

A tensor with the contents of the input tensor, with the input axes flattened between the indicated start axis and stop axis. Its data type is the same as the input x.

Return type

Tensor

Raises
  • ValueError – If x is not a Tensor.

  • ValueError – If start_axis or stop_axis is illegal.

Examples

import paddle

image_shape=(2, 3, 4, 4)

x = paddle.arange(end=image_shape[0] * image_shape[1] * image_shape[2] * image_shape[3])
img = paddle.reshape(x, image_shape)

out = paddle.flatten(img, start_axis=1, stop_axis=2)
# out shape is [2, 12, 4]

# out shares data with img in dygraph mode
img[0, 0, 0, 0] = -1
print(out[0, 0, 0]) # [-1]
flatten_ ( start_axis=0, stop_axis=- 1, name=None )

flatten_

Inplace version of flatten API, the output Tensor will be inplaced with input x. Please refer to api_tensor_flatten.

flip ( axis, name=None ) [source]

flip

Reverse the order of an n-D tensor along the given axis (or axes) specified by axis.

Parameters
  • x (Tensor) – A Tensor(or LoDTensor) with shape \([N_1, N_2,..., N_k]\) . The data type of the input Tensor x should be float32, float64, int32, int64, bool.

  • axis (list|tuple|int) – The axis(axes) to flip on. Negative indices for indexing from the end are accepted.

  • name (str, optional) – The default value is None. Normally there is no need for user to set this property. For more information, please refer to Name .

Returns

Tensor or LoDTensor calculated by flip layer. The data type is same with input x.

Return type

Tensor

Examples

import paddle
import numpy as np

image_shape=(3, 2, 2)
x = np.arange(image_shape[0] * image_shape[1] * image_shape[2]).reshape(image_shape)
x = x.astype('float32')
img = paddle.to_tensor(x)
tmp = paddle.flip(img, [0,1])
print(tmp) # [[[10,11],[8, 9]], [[6, 7],[4, 5]], [[2, 3],[0, 1]]]

out = paddle.flip(tmp,-1)
print(out) # [[[11,10],[9, 8]], [[7, 6],[5, 4]], [[3, 2],[1, 0]]]
floor ( name=None ) [source]

floor

Floor Activation Operator. Computes floor of x element-wise.

\(out = \lfloor x \rfloor\)

Parameters
  • x (Tensor) – Input of Floor operator, an N-D Tensor, with data type float32, float64 or float16.

  • with_quant_attr (BOOLEAN) – Whether the operator has attributes used by quantization.

  • name (str, optional) – Name for the operation (optional, default is None). For more information, please refer to Name.

Returns

Output of Floor operator, a Tensor with shape same as input.

Return type

out (Tensor)

Examples

import paddle

x = paddle.to_tensor([-0.4, -0.2, 0.1, 0.3])
out = paddle.floor(x)
print(out)
# [-1. -1.  0.  0.]
floor_ ( name=None )

floor_

Inplace version of floor API, the output Tensor will be inplaced with input x. Please refer to api_fluid_layers_floor.

floor_divide ( y, name=None ) [source]

floor_divide

Floor divide two tensors element-wise. The equation is:

\[out = x // y\]

Note: paddle.floor_divide supports broadcasting. If you want to know more about broadcasting, please refer to Broadcasting .

Parameters
  • x (Tensor) – the input tensor, it’s data type should be int32, int64.

  • y (Tensor) – the input tensor, it’s data type should be int32, int64.

  • name (str, optional) – Name for the operation (optional, default is None). For more information, please refer to Name.

Returns

N-D Tensor. A location into which the result is stored. Its dimensions are the same as those of x.

Examples

import paddle

x = paddle.to_tensor([2, 3, 8, 7])
y = paddle.to_tensor([1, 5, 3, 3])
z = paddle.floor_divide(x, y)
print(z)  # [2, 0, 2, 2]
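
Since the note above says floor_divide supports broadcasting, here is a minimal sketch in which y is broadcast across the rows of x:

import paddle

x = paddle.to_tensor([[2, 3, 8, 7], [4, 9, 12, 5]])
y = paddle.to_tensor([1, 5, 3, 3])
z = paddle.floor_divide(x, y)
print(z)  # expected: [[2, 0, 2, 2], [4, 1, 4, 1]]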
floor_mod ( y, name=None ) [source]

floor_mod

Mod two tensors element-wise. The equation is:

\[out = x \% y\]

Note: paddle.remainder supports broadcasting. If you want to know more about broadcasting, please refer to Broadcasting .

Parameters
  • x (Tensor) – the input tensor, it’s data type should be float32, float64, int32, int64.

  • y (Tensor) – the input tensor, it’s data type should be float32, float64, int32, int64.

  • name (str, optional) – Name for the operation (optional, default is None). For more information, please refer to Name.

Returns

N-D Tensor. A location into which the result is stored. If x, y have different shapes and are “broadcastable”, the resulting tensor shape is the shape of x and y after broadcasting. If x, y have the same shape, its shape is the same as x and y.

Examples

import paddle

x = paddle.to_tensor([2, 3, 8, 7])
y = paddle.to_tensor([1, 5, 3, 3])
z = paddle.remainder(x, y)
print(z)  # [0, 3, 2, 1]
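
As with floor_divide, the broadcasting mentioned in the note above can be sketched as follows, with y broadcast across the rows of x:

import paddle

x = paddle.to_tensor([[2, 3, 8, 7], [9, 10, 11, 12]])
y = paddle.to_tensor([1, 5, 3, 3])
z = paddle.remainder(x, y)
print(z)  # expected: [[0, 3, 2, 1], [0, 0, 2, 0]]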
gather ( index, axis=None, name=None ) [source]

gather

Output is obtained by gathering entries along the axis dimension of x indexed by index and concatenating them together.

Given:

x = [[1, 2],
     [3, 4],
     [5, 6]]

index = [1, 2]
axis=[0]

Then:

out = [[3, 4],
       [5, 6]]
Parameters
  • x (Tensor) – The source input tensor with rank>=1. Supported data type is int32, int64, float32, float64 and uint8 (only for CPU), float16 (only for GPU).

  • index (Tensor) – The index input tensor with rank=1. Data type is int32 or int64.

  • axis (Tensor|int, optional) – The axis of input to be gathered. It can be an int or a Tensor with data type int32 or int64. The default value is None; if None, the axis is 0.

  • name (str, optional) – The default value is None. Normally there is no need for user to set this property. For more information, please refer to Name .

Returns

The output is a tensor with the same rank as x.

Return type

output (Tensor)

Examples

import paddle

input = paddle.to_tensor([[1,2],[3,4],[5,6]])
index = paddle.to_tensor([0,1])
output = paddle.gather(input, index, axis=0)
# expected output: [[1,2],[3,4]]
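
The axis argument described above defaults to 0; a minimal sketch of gathering along axis=1 (columns) instead:

import paddle

input = paddle.to_tensor([[1, 2], [3, 4], [5, 6]])
index = paddle.to_tensor([1])
output = paddle.gather(input, index, axis=1)
# expected output: [[2], [4], [6]]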
gather_nd ( index, name=None ) [source]

gather_nd

This function is actually a high-dimensional extension of gather and supports simultaneous indexing by multiple axes. index is a K-dimensional integer tensor, which is regarded as a (K-1)-dimensional tensor of indices into input, where each element defines a slice of params:

\[output[(i_0, ..., i_{K-2})] = input[index[(i_0, ..., i_{K-2})]]\]

Obviously, index.shape[-1] <= input.rank . And, the output tensor has shape index.shape[:-1] + input.shape[index.shape[-1]:] .

Given:
    x =  [[[ 0,  1,  2,  3],
           [ 4,  5,  6,  7],
           [ 8,  9, 10, 11]],
          [[12, 13, 14, 15],
           [16, 17, 18, 19],
           [20, 21, 22, 23]]]
    x.shape = (2, 3, 4)

* Case 1:
    index = [[1]]

    gather_nd(x, index)
             = [x[1, :, :]]
             = [[12, 13, 14, 15],
                [16, 17, 18, 19],
                [20, 21, 22, 23]]

* Case 2:
    index = [[0,2]]

    gather_nd(x, index)
             = [x[0, 2, :]]
             = [8, 9, 10, 11]

* Case 3:
    index = [[1, 2, 3]]

    gather_nd(x, index)
             = [x[1, 2, 3]]
             = [23]
Parameters
  • x (Tensor) – The input Tensor whose data type should be bool, float32, float64, int32, int64.

  • index (Tensor) – The index input with rank > 1, index.shape[-1] <= input.rank. Its dtype should be int32, int64.

  • name (str, optional) – The default value is None. Normally there is no need for user to set this property. For more information, please refer to Name .

Returns

A tensor with the shape index.shape[:-1] + input.shape[index.shape[-1]:]

Return type

output (Tensor)

Examples

import paddle

x = paddle.to_tensor([[[1, 2], [3, 4], [5, 6]],
                      [[7, 8], [9, 10], [11, 12]]])
index = paddle.to_tensor([[0, 1]])

output = paddle.gather_nd(x, index) #[[3, 4]]
property grad [source]

Warning

This API will return the tensor value of the gradient. If you want to get the numpy value of the gradient, you can use x.grad.numpy().

Get the Gradient of Current Tensor.

Returns

the gradient of current Tensor

Return type

Tensor

Examples

import paddle

x = paddle.to_tensor(5., stop_gradient=False)
y = paddle.pow(x, 4.0)
y.backward()
print("grad of x: {}".format(x.grad))
# Tensor(shape=[1], dtype=float32, place=CUDAPlace(0), stop_gradient=False, [500.])
gradient ( )

gradient

Warning

This API will be deprecated in the future, it is recommended to use x.grad which returns the tensor value of the gradient.

Get the Gradient of Current Tensor.

Returns

Numpy value of the gradient of current Tensor

Return type

ndarray

Examples

import paddle

x = paddle.to_tensor(5., stop_gradient=False)
y = paddle.pow(x, 4.0)
y.backward()
print("grad of x: {}".format(x.gradient()))
# [500.]
greater_equal ( y, name=None ) [source]

greater_equal

This OP returns the truth value of \(x >= y\) elementwise, which is equivalent to the overloaded operator >=.

NOTICE: The output of this OP has no gradient.

Parameters
  • x (Tensor) – First input to compare which is N-D tensor. The input data type should be bool, float32, float64, int32, int64.

  • y (Tensor) – Second input to compare which is N-D tensor. The input data type should be bool, float32, float64, int32, int64.

  • name (str, optional) – The default value is None. Normally there is no need for user to set this property. For more information, please refer to Name.

Returns

The tensor storing the output, the output shape is same as input x.

Return type

Tensor, the output data type is bool

Examples

import paddle

x = paddle.to_tensor([1, 2, 3])
y = paddle.to_tensor([1, 3, 2])
result1 = paddle.greater_equal(x, y)
print(result1)  # result1 = [True False True]
greater_than ( y, name=None ) [source]

greater_than

This OP returns the truth value of \(x > y\) elementwise, which is equivalent to the overloaded operator >.

NOTICE: The output of this OP has no gradient.

Parameters
  • x (Tensor) – First input to compare which is N-D tensor. The input data type should be bool, float32, float64, int32, int64.

  • y (Tensor) – Second input to compare which is N-D tensor. The input data type should be bool, float32, float64, int32, int64.

  • name (str, optional) – The default value is None. Normally there is no need for user to set this property. For more information, please refer to Name.

Returns

The tensor storing the output, the output shape is same as input x .

Return type

Tensor, the output data type is bool

Examples

import paddle

x = paddle.to_tensor([1, 2, 3])
y = paddle.to_tensor([1, 3, 2])
result1 = paddle.greater_than(x, y)
print(result1)  # result1 = [False False True]
histogram ( bins=100, min=0, max=0, name=None ) [source]

histogram

Computes the histogram of a tensor. The elements are sorted into equal width bins between min and max. If min and max are both zero, the minimum and maximum values of the data are used.

Parameters
  • input (Tensor) – A Tensor(or LoDTensor) with shape \([N_1, N_2,..., N_k]\) . The data type of the input Tensor should be float32, float64, int32, int64.

  • bins (int) – number of histogram bins

  • min (int) – lower end of the range (inclusive)

  • max (int) – upper end of the range (inclusive)

Returns

data type is int64, shape is (nbins,).

Return type

Tensor

Examples

import paddle

inputs = paddle.to_tensor([1, 2, 1])
result = paddle.histogram(inputs, bins=4, min=0, max=3)
print(result) # [0, 2, 1, 0]
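
The description above states that when min and max are both zero the data's own minimum and maximum are used; a small sketch of that default behavior (values hedged as expected output):

import paddle

inputs = paddle.to_tensor([1., 2., 1., 4.])
result = paddle.histogram(inputs, bins=3)   # range is taken from the data: [1., 4.]
print(result)  # expected: [2, 1, 1]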
imag ( name=None ) [source]

imag

Returns a new tensor containing imaginary values of input tensor.

Parameters
  • x (Tensor) – the input tensor, its data type could be complex64 or complex128.

  • name (str, optional) – The default value is None. Normally there is no need for user to set this property. For more information, please refer to Name .

Returns

a tensor containing imaginary values of the input tensor.

Return type

Tensor

Examples

import paddle

x = paddle.to_tensor(
    [[1 + 6j, 2 + 5j, 3 + 4j], [4 + 3j, 5 + 2j, 6 + 1j]])
# Tensor(shape=[2, 3], dtype=complex64, place=CUDAPlace(0), stop_gradient=True,
#        [[(1+6j), (2+5j), (3+4j)],
#         [(4+3j), (5+2j), (6+1j)]])

imag_res = paddle.imag(x)
# Tensor(shape=[2, 3], dtype=float32, place=CUDAPlace(0), stop_gradient=True,
#        [[6., 5., 4.],
#         [3., 2., 1.]])

imag_t = x.imag()
# Tensor(shape=[2, 3], dtype=float32, place=CUDAPlace(0), stop_gradient=True,
#        [[6., 5., 4.],
#         [3., 2., 1.]])
increment ( value=1.0, name=None ) [source]

increment

The OP is usually used for control flow to increment the data of x by an amount value. Notice that the number of elements in x must be equal to 1.

Parameters
  • x (Tensor) – A tensor that must always contain only one element, its data type supports float32, float64, int32 and int64.

  • value (float, optional) – The amount to increment the data of x. Default: 1.0.

  • name (str, optional) – Name for the operation (optional, default is None). For more information, please refer to Name.

Returns

Tensor, the elementwise-incremented tensor with the same shape and data type as x.

Examples

import paddle

data = paddle.zeros(shape=[1], dtype='float32')
counter = paddle.increment(data)
# [1.]
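
A small sketch of the value parameter described above, incrementing by a custom amount:

import paddle

data = paddle.zeros(shape=[1], dtype='float32')
counter = paddle.increment(data, value=2.5)
# expected: [2.5]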
index_sample ( index ) [source]

index_sample

IndexSample Layer

IndexSample OP returns the element of the specified location of X, and the location is specified by Index.

Given:

X = [[1, 2, 3, 4, 5],
     [6, 7, 8, 9, 10]]

Index = [[0, 1, 3],
         [0, 2, 4]]

Then:

Out = [[1, 2, 4],
       [6, 8, 10]]
Parameters
  • x (Tensor) – The source input tensor with 2-D shape. Supported data type is int32, int64, float32, float64.

  • index (Tensor) – The index input tensor with 2-D shape, first dimension should be same with X. Data type is int32 or int64.

Returns

The output is a tensor with the same shape as index.

Return type

output (Tensor)

Examples

import paddle

x = paddle.to_tensor([[1.0, 2.0, 3.0, 4.0],
                      [5.0, 6.0, 7.0, 8.0],
                      [9.0, 10.0, 11.0, 12.0]], dtype='float32')
index = paddle.to_tensor([[0, 1, 2],
                          [1, 2, 3],
                          [0, 0, 0]], dtype='int32')
target = paddle.to_tensor([[100, 200, 300, 400],
                           [500, 600, 700, 800],
                           [900, 1000, 1100, 1200]], dtype='int32')
out_z1 = paddle.index_sample(x, index)
print(out_z1)
#[[1. 2. 3.]
# [6. 7. 8.]
# [9. 9. 9.]]

# Use the index of the maximum value by topk op
# get the value of the element of the corresponding index in other tensors
top_value, top_index = paddle.topk(x, k=2)
out_z2 = paddle.index_sample(target, top_index)
print(top_value)
#[[ 4.  3.]
# [ 8.  7.]
# [12. 11.]]

print(top_index)
#[[3 2]
# [3 2]
# [3 2]]

print(out_z2)
#[[ 400  300]
# [ 800  700]
# [1200 1100]]
index_select ( index, axis=0, name=None ) [source]

index_select

Returns a new tensor which indexes the input tensor along dimension axis using the entries in index which is a Tensor. The returned tensor has the same number of dimensions as the original x tensor. The dim-th dimension has the same size as the length of index; other dimensions have the same size as in the x tensor.

Parameters
  • x (Tensor) – The input Tensor to be operated. The data of x can be one of float32, float64, int32, int64.

  • index (Tensor) – The 1-D Tensor containing the indices to index. The data type of index must be int32 or int64.

  • axis (int, optional) – The dimension in which we index. Default: if None, the axis is 0.

  • name (str, optional) – The default value is None. Normally there is no need for user to set this property. For more information, please refer to Name.

Returns

A Tensor with same data type as x.

Return type

Tensor

Examples

import paddle

x = paddle.to_tensor([[1.0, 2.0, 3.0, 4.0],
                      [5.0, 6.0, 7.0, 8.0],
                      [9.0, 10.0, 11.0, 12.0]])
index = paddle.to_tensor([0, 1, 1], dtype='int32')
out_z1 = paddle.index_select(x=x, index=index)
#[[1. 2. 3. 4.]
# [5. 6. 7. 8.]
# [5. 6. 7. 8.]]
out_z2 = paddle.index_select(x=x, index=index, axis=1)
#[[ 1.  2.  2.]
# [ 5.  6.  6.]
# [ 9. 10. 10.]]
property inplace_version

The inplace version of current Tensor. The version number is incremented whenever the current Tensor is modified through an inplace operation.

Notes: This is a read-only property

Examples

import paddle
var = paddle.ones(shape=[4, 2, 3], dtype="float32")
print(var.inplace_version)  # 0

var[1] = 2.2
print(var.inplace_version)  # 1
inverse ( name=None ) [source]

inverse

Takes the inverse of the square matrix. A square matrix is a matrix with the same number of rows and columns. The input can be a square matrix (2-D Tensor) or batches of square matrices.

Parameters
  • x (Tensor) – The input tensor. The last two dimensions should be equal. When the number of dimensions is greater than 2, it is treated as batches of square matrix. The data type can be float32 and float64.

  • name (str, optional) – The default value is None. Normally there is no need for user to set this property. For more information, please refer to Name

Returns

A Tensor holding the inverse of x. Its shape and data type are the same as x.

Return type

Tensor

Examples

import paddle

mat = paddle.to_tensor([[2, 0], [0, 2]], dtype='float32')
inv = paddle.inverse(mat)
print(inv) # [[0.5, 0], [0, 0.5]]
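
The description above also covers batches of square matrices; a minimal sketch of the batched case, where leading dimensions are treated as batch dimensions:

import paddle

mats = paddle.to_tensor([[[2., 0.], [0., 2.]],
                         [[1., 0.], [0., 4.]]])
invs = paddle.inverse(mats)
print(invs)
# expected: [[[0.5, 0.], [0., 0.5]],
#            [[1. , 0.], [0., 0.25]]]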
is_empty ( name=None ) [source]

is_empty

Test whether a Tensor is empty.

Parameters
  • x (Tensor) – The Tensor to be tested.

  • name (str, optional) – The default value is None . Normally users don’t have to set this parameter. For more information, please refer to Name .

Returns

A bool scalar Tensor. True if ‘x’ is an empty Tensor.

Return type

Tensor

Examples

import paddle

input = paddle.rand(shape=[4, 32, 32], dtype='float32')
res = paddle.is_empty(x=input)
print("res:", res)
# ('res:', Tensor: eager_tmp_1
#    - place: CPUPlace
#    - shape: [1]
#    - layout: NCHW
#    - dtype: bool
#    - data: [0])
property is_leaf

Whether a Tensor is leaf Tensor.

For the Tensor whose stop_gradient is True , it will be leaf Tensor.

For the Tensor whose stop_gradient is False , it will be leaf Tensor too if it is created by user.

Returns

Whether a Tensor is leaf Tensor.

Return type

bool

Examples

import paddle

x = paddle.to_tensor(1.)
print(x.is_leaf) # True

x = paddle.to_tensor(1., stop_gradient=True)
y = x + 1
print(x.is_leaf) # True
print(y.is_leaf) # True

x = paddle.to_tensor(1., stop_gradient=False)
y = x + 1
print(x.is_leaf) # True
print(y.is_leaf) # False
is_tensor ( ) [source]

is_tensor

This function tests whether input object is a paddle.Tensor.

Parameters

x (object) – Object to test.

Returns

A boolean value. True if ‘x’ is a paddle.Tensor, otherwise False.

Examples

import paddle

input1 = paddle.rand(shape=[2, 3, 5], dtype='float32')
check = paddle.is_tensor(input1)
print(check)  #True

input3 = [1, 4]
check = paddle.is_tensor(input3)
print(check)  #False
isfinite ( name=None ) [source]

isfinite

Return whether every element of the input tensor is a finite number or not.

Parameters
  • x (Tensor) – The input tensor, it’s data type should be float16, float32, float64, int32, int64.

  • name (str, optional) – Name for the operation (optional, default is None). For more information, please refer to Name.

Returns

Tensor, the bool result which shows whether each element of x is a finite number or not.

Examples

import paddle

x = paddle.to_tensor([float('-inf'), -2, 3.6, float('inf'), 0, float('-nan'), float('nan')])
out = paddle.tensor.isfinite(x)
print(out)  # [False  True  True False  True False False]
isinf ( name=None ) [source]

isinf

Return whether every element of input tensor is +/-INF or not.

Parameters
  • x (Tensor) – The input tensor, it’s data type should be float16, float32, float64, int32, int64.

  • name (str, optional) – Name for the operation (optional, default is None). For more information, please refer to Name.

Returns

Tensor, the bool result which shows whether each element of x is +/-INF or not.

Examples

import paddle
x = paddle.to_tensor([float('-inf'), -2, 3.6, float('inf'), 0, float('-nan'), float('nan')])
out = paddle.tensor.isinf(x)
print(out)  # [ True False False  True False False False]
isnan ( name=None ) [source]

isnan

Return whether every element of input tensor is NaN or not.

Parameters
  • x (Tensor) – The input tensor, it’s data type should be float16, float32, float64, int32, int64.

  • name (str, optional) – Name for the operation (optional, default is None). For more information, please refer to Name.

Returns

Tensor, the bool result which shows whether each element of x is NaN or not.

Examples

import paddle
x = paddle.to_tensor([float('-inf'), -2, 3.6, float('inf'), 0, float('-nan'), float('nan')])
out = paddle.tensor.isnan(x)
print(out)  # [False False False False False  True  True]
item ( *args )

item

Convert element at specific position in Tensor into Python scalars. If the position is not specified, the Tensor must be a single-element Tensor.

Parameters

*args (int) – The input coordinates. If it’s single int, the data in the corresponding order of flattened Tensor will be returned. Default: None, and it must be in the case where Tensor has only one element.

Returns (Python scalar): A Python scalar whose dtype corresponds to the dtype of the Tensor.

Raises

ValueError – If the Tensor has more than one element, there must be coordinates.

Examples

import paddle

x = paddle.to_tensor(1)
print(x.item())             #1
print(type(x.item()))       #<class 'int'>

x = paddle.to_tensor(1.0)
print(x.item())             #1.0
print(type(x.item()))       #<class 'float'>

x = paddle.to_tensor(True)
print(x.item())             #True
print(type(x.item()))       #<class 'bool'>

x = paddle.to_tensor(1+1j)
print(x.item())             #(1+1j)
print(type(x.item()))       #<class 'complex'>

x = paddle.to_tensor([[1.1, 2.2, 3.3]])
print(x.item(2))            #3.3
print(x.item(0, 2))         #3.3
kron ( y, name=None ) [source]

kron

Kron Operator.

This operator computes the Kronecker product of two tensors, a composite tensor made of blocks of the second tensor scaled by the first.

This operator assumes that the rank of the two tensors, $X$ and $Y$ are the same, if necessary prepending the smallest with ones. If the shape of $X$ is [$r_0$, $r_1$, …, $r_N$] and the shape of $Y$ is [$s_0$, $s_1$, …, $s_N$], then the shape of the output tensor is [$r_{0}s_{0}$, $r_{1}s_{1}$, …, $r_{N}s_{N}$]. The elements are products of elements from $X$ and $Y$.

The equation is: $$ output[k_{0}, k_{1}, …, k_{N}] = X[i_{0}, i_{1}, …, i_{N}] * Y[j_{0}, j_{1}, …, j_{N}] $$

where $$ k_{t} = i_{t} * s_{t} + j_{t}, t = 0, 1, …, N $$

Parameters
  • x (Tensor) – the first operand of kron op, data type: float16, float32, float64, int32 or int64.

  • y (Tensor) – the second operand of kron op, data type: float16, float32, float64, int32 or int64. Its data type should be the same with x.

  • name (str, optional) – The default value is None. Normally there is no need for user to set this property. For more information, please refer to Name.

Returns

The output of kron op, data type: float16, float32, float64, int32 or int64. Its data type is the same with x.

Return type

Tensor

Examples

import paddle
x = paddle.to_tensor([[1, 2], [3, 4]], dtype='int64')
y = paddle.to_tensor([[1, 2, 3], [4, 5, 6], [7, 8, 9]], dtype='int64')
out = paddle.kron(x, y)
print(out)
#        [[1, 2, 3, 2, 4, 6],
#         [ 4,  5,  6,  8, 10, 12],
#         [ 7,  8,  9, 14, 16, 18],
#         [ 3,  6,  9,  4,  8, 12],
#         [12, 15, 18, 16, 20, 24],
#         [21, 24, 27, 28, 32, 36]])
less_equal ( y, name=None ) [source]

less_equal

This OP returns the truth value of \(x <= y\) elementwise, which is equivalent to the overloaded operator <=.

NOTICE: The output of this OP has no gradient.

Parameters
  • x (Tensor) – First input to compare which is N-D tensor. The input data type should be bool, float32, float64, int32, int64.

  • y (Tensor) – Second input to compare which is N-D tensor. The input data type should be bool, float32, float64, int32, int64.

  • name (str, optional) – The default value is None. Normally there is no need for user to set this property. For more information, please refer to Name.

Returns

The tensor storing the output, the output shape is same as input x.

Return type

Tensor, the output data type is bool

Examples

import paddle

x = paddle.to_tensor([1, 2, 3])
y = paddle.to_tensor([1, 3, 2])
result1 = paddle.less_equal(x, y)
print(result1)  # result1 = [True True False]
less_than ( y, name=None ) [source]

less_than

This OP returns the truth value of \(x < y\) elementwise, which is equivalent to the overloaded operator <.

NOTICE: The output of this OP has no gradient.

Parameters
  • x (Tensor) – First input to compare which is N-D tensor. The input data type should be bool, float32, float64, int32, int64.

  • y (Tensor) – Second input to compare which is N-D tensor. The input data type should be bool, float32, float64, int32, int64.

  • name (str, optional) – The default value is None. Normally there is no need for user to set this property. For more information, please refer to Name.

Returns

The tensor storing the output, the output shape is same as input x.

Return type

Tensor, the output data type is bool

Examples

import paddle

x = paddle.to_tensor([1, 2, 3])
y = paddle.to_tensor([1, 3, 2])
result1 = paddle.less_than(x, y)
print(result1)  # result1 = [False True False]
lgamma ( name=None ) [source]

lgamma

Lgamma Operator.

This operator performs elementwise lgamma for input $X$. \(out = \log\Gamma(x)\)

Parameters
  • x (Tensor) – (Tensor), The input tensor of lgamma op.

  • with_quant_attr (BOOLEAN) – Whether the operator has attributes used by quantization.

  • name (str, optional) – Name for the operation (optional, default is None). For more information, please refer to Name.

Returns

(Tensor), The output tensor of lgamma op.

Return type

out (Tensor)

Examples

import paddle

x = paddle.to_tensor([-0.4, -0.2, 0.1, 0.3])
out = paddle.lgamma(x)
print(out)
# [1.31452441, 1.76149750, 2.25271273, 1.09579802]
log ( name=None ) [source]

log

Calculates the natural log of the given input tensor, element-wise.

\[Out = \ln(x)\]
Parameters
  • x (Tensor) – Input Tensor. Must be one of the following types: float32, float64.

  • name (str|None) – The default value is None. Normally there is no need for user to set this property. For more information, please refer to Name

Returns

The natural log of the input Tensor computed element-wise.

Return type

Tensor

Examples

import paddle

x = [[2,3,4], [7,8,9]]
x = paddle.to_tensor(x, dtype='float32')
res = paddle.log(x)
# [[0.693147, 1.09861, 1.38629], [1.94591, 2.07944, 2.19722]]
log10 ( name=None ) [source]

log10

Calculates the log to the base 10 of the given input tensor, element-wise.

\[Out = \log_{10} x\]
Parameters
  • x (Tensor) – Input tensor must be one of the following types: float32, float64.

  • name (str|None) – The default value is None. Normally there is no need for user to set this property. For more information, please refer to Name

Returns

The log to the base 10 of the input Tensor computed element-wise.

Return type

Tensor

Examples

import paddle

# example 1: x is a float
x_i = paddle.to_tensor([[1.0], [10.0]])
res = paddle.log10(x_i) # [[0.], [1.0]]

# example 2: x is float32
x_i = paddle.full(shape=[1], fill_value=10, dtype='float32')
paddle.to_tensor(x_i)
res = paddle.log10(x_i)
print(res) # [1.0]

# example 3: x is float64
x_i = paddle.full(shape=[1], fill_value=10, dtype='float64')
paddle.to_tensor(x_i)
res = paddle.log10(x_i)
print(res) # [1.0]
log1p ( name=None ) [source]

log1p

Calculates the natural log of one plus the given input tensor, element-wise.

\[Out = \ln(x+1)\]
Parameters
  • x (Tensor) – Input Tensor. Must be one of the following types: float32, float64.

  • name (str, optional) – The default value is None. Normally there is no need for user to set this property. For more information, please refer to Name

Returns

Tensor, the natural log of the input Tensor computed element-wise.

Examples

import paddle

data = paddle.to_tensor([[0], [1]], dtype='float32')
res = paddle.log1p(data)
# [[0.], [0.6931472]]
log2 ( name=None ) [source]

log2

Calculates the log to the base 2 of the given input tensor, element-wise.

\[Out = \log_{2} x\]
Parameters
  • x (Tensor) – Input tensor must be one of the following types: float32, float64.

  • name (str|None) – The default value is None. Normally there is no need for user to set this property. For more information, please refer to Name

Returns

The log to the base 2 of the input Tensor computed element-wise.

Return type

Tensor

Examples

import paddle

# example 1: x is a float
x_i = paddle.to_tensor([[1.0], [2.0]])
res = paddle.log2(x_i) # [[0.], [1.0]]

# example 2: x is float32
x_i = paddle.full(shape=[1], fill_value=2, dtype='float32')
paddle.to_tensor(x_i)
res = paddle.log2(x_i)
print(res) # [1.0]

# example 3: x is float64
x_i = paddle.full(shape=[1], fill_value=2, dtype='float64')
paddle.to_tensor(x_i)
res = paddle.log2(x_i)
print(res) # [1.0]
logical_and ( y, out=None, name=None ) [source]

logical_and

logical_and operator computes element-wise logical AND on x and y, and returns out. out is N-dim boolean Tensor. Each element of out is calculated by

\[out = x \&\& y\]

Note

paddle.logical_and supports broadcasting. If you want to know more about broadcasting, please refer to Broadcasting.

Parameters
  • x (Tensor) – the input tensor, its data type should be one of bool, int8, int16, int32, int64, float32, float64.

  • y (Tensor) – the input tensor, its data type should be one of bool, int8, int16, int32, int64, float32, float64.

  • out (Tensor) – The Tensor that specifies the output of the operator, which can be any Tensor that has been created in the program. The default value is None, and a new Tensor will be created to save the output.

  • name (str, optional) – Name for the operation (optional, default is None). For more information, please refer to Name.

Returns

N-D Tensor. A location into which the result is stored. Its dimensions are the same as those of x.

Examples

import paddle

x = paddle.to_tensor([True])
y = paddle.to_tensor([True, False, True, False])
res = paddle.logical_and(x, y)
print(res) # [True False True False]
logical_not ( out=None, name=None ) [source]

logical_not

logical_not operator computes element-wise logical NOT on x, and returns out. out is N-dim boolean Variable. Each element of out is calculated by

\[out = !x\]
Parameters
  • x (Tensor) – Operand of logical_not operator. Must be a Tensor of type bool, int8, int16, int32, int64, float32, or float64.

  • out (Tensor) – The Tensor that specifies the output of the operator, which can be any Tensor that has been created in the program. The default value is None, and a new Tensor will be created to save the output.

  • name (str|None) – The default value is None. Normally there is no need for users to set this property. For more information, please refer to Name.

Returns

n-dim bool LoDTensor or Tensor

Return type

Tensor

Examples

import paddle

x = paddle.to_tensor([True, False, True, False])
res = paddle.logical_not(x)
print(res) # [False  True False  True]
logical_or ( y, out=None, name=None ) [source]

logical_or

logical_or operator computes element-wise logical OR on x and y, and returns out. out is N-dim boolean Tensor. Each element of out is calculated by

\[out = x || y\]

Note

paddle.logical_or supports broadcasting. If you want to know more about broadcasting, please refer to Broadcasting.

Parameters
  • x (Tensor) – the input tensor, its data type should be one of bool, int8, int16, int32, int64, float32, float64.

  • y (Tensor) – the input tensor, its data type should be one of bool, int8, int16, int32, int64, float32, float64.

  • out (Tensor) – The Variable that specifies the output of the operator, which can be any Tensor that has been created in the program. The default value is None, and a new Tensor will be created to save the output.

  • name (str, optional) – Name for the operation (optional, default is None). For more information, please refer to Name.

Returns

N-D Tensor. A location into which the result is stored. Its dimensions are the same as those of x.

Examples

import paddle
import numpy as np

x_data = np.array([True, False], dtype=bool).reshape(2, 1)
y_data = np.array([True, False, True, False], dtype=bool).reshape(2, 2)
x = paddle.to_tensor(x_data)
y = paddle.to_tensor(y_data)
res = paddle.logical_or(x, y)
print(res) # [[ True  True] [ True False]]
logical_xor ( y, out=None, name=None ) [source]

logical_xor

logical_xor operator computes element-wise logical XOR on x and y, and returns out. out is N-dim boolean Tensor. Each element of out is calculated by

\[out = (x || y) \&\& !(x \&\& y)\]

Note

paddle.logical_xor supports broadcasting. If you want to know more about broadcasting, please refer to Broadcasting.

Parameters
  • x (Tensor) – the input tensor, its data type should be one of bool, int8, int16, int32, int64, float32, float64.

  • y (Tensor) – the input tensor, its data type should be one of bool, int8, int16, int32, int64, float32, float64.

  • out (Tensor) – The Tensor that specifies the output of the operator, which can be any Tensor that has been created in the program. The default value is None, and a new Tensor will be created to save the output.

  • name (str, optional) – Name for the operation (optional, default is None). For more information, please refer to Name.

Returns

N-D Tensor. A location into which the result is stored. Its dimensions are the same as those of x.

Examples

import paddle
import numpy as np

x_data = np.array([True, False], dtype=bool).reshape([2, 1])
y_data = np.array([True, False, True, False], dtype=bool).reshape([2, 2])
x = paddle.to_tensor(x_data)
y = paddle.to_tensor(y_data)
res = paddle.logical_xor(x, y)
print(res) # [[False,  True], [ True, False]]
logsumexp ( axis=None, keepdim=False, name=None ) [source]

logsumexp

This OP calculates the log of the sum of exponentials of x along axis .

\[logsumexp(x) = \log \sum \exp(x)\]
Parameters
  • x (Tensor) – The input Tensor with data type float32, float64.

  • axis (int|list|tuple, optional) – The axis along which to perform logsumexp calculations. axis should be int, list(int) or tuple(int). If axis is a list/tuple of dimension(s), logsumexp is calculated along all element(s) of axis . axis or element(s) of axis should be in range [-D, D), where D is the dimensions of x . If axis or element(s) of axis is less than 0, it works the same way as \(axis + D\) . If axis is None, logsumexp is calculated along all elements of x. Default is None.

  • keepdim (bool, optional) – Whether to reserve the reduced dimension(s) in the output Tensor. If keep_dim is True, the dimensions of the output Tensor is the same as x except in the reduced dimensions(it is of size 1 in this case). Otherwise, the shape of the output Tensor is squeezed in axis . Default is False.

  • name (str, optional) – Name for the operation (optional, default is None). For more information, please refer to Name.

Returns

Tensor, results of logsumexp along axis of x, with the same data type as x.

Examples:

import paddle

x = paddle.to_tensor([[-1.5, 0., 2.], [3., 1.2, -2.4]])
out1 = paddle.logsumexp(x) # [3.4691226]
out2 = paddle.logsumexp(x, 1) # [2.15317821, 3.15684602]
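
A small sketch of the keepdim parameter described above, which keeps the reduced axis with size 1:

import paddle

x = paddle.to_tensor([[-1.5, 0., 2.], [3., 1.2, -2.4]])
out3 = paddle.logsumexp(x, axis=1, keepdim=True)
print(out3.shape)  # expected: [2, 1]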
masked_select ( mask, name=None ) [source]

masked_select

This OP returns a new 1-D tensor which indexes the input tensor according to the mask, which is a tensor with data type bool.

Parameters
  • x (Tensor) – The input Tensor, the data type can be int32, int64, float32, float64.

  • mask (Tensor) – The Tensor containing the binary mask to index with, it’s data type is bool.

  • name (str, optional) – The default value is None. Normally there is no need for user to set this property. For more information, please refer to Name.

Returns: A 1-D Tensor which is the same data type as x.

Examples

import paddle

x = paddle.to_tensor([[1.0, 2.0, 3.0, 4.0],
                      [5.0, 6.0, 7.0, 8.0],
                      [9.0, 10.0, 11.0, 12.0]])
mask = paddle.to_tensor([[True, False, False, False],
                         [True, True, False, False],
                         [True, False, False, False]])
out = paddle.masked_select(x, mask)
#[1.0 5.0 6.0 9.0]
matmul ( y, transpose_x=False, transpose_y=False, name=None ) [source]

matmul

Applies matrix multiplication to two tensors. matmul follows the complete broadcast rules, and its behavior is consistent with np.matmul.

Currently, the input tensors can have any number of dimensions, and matmul can be used to perform dot products, matrix-matrix multiplication, and batched matrix multiplication.

The actual behavior depends on the shapes of \(x\), \(y\) and the flag values of transpose_x, transpose_y. Specifically:

  • If a transpose flag is specified, the last two dimensions of the tensor are transposed. If the tensor is ndim-1 of shape, the transpose is invalid. If the tensor is ndim-1 of shape \([D]\), then for \(x\) it is treated as \([1, D]\), whereas for \(y\) it is the opposite: It is treated as \([D, 1]\).

The multiplication behavior depends on the dimensions of x and y. Specifically:

  • If both tensors are 1-dimensional, the dot product result is obtained.

  • If both tensors are 2-dimensional, the matrix-matrix product is obtained.

  • If the x is 1-dimensional and the y is 2-dimensional, a 1 is prepended to its dimension in order to conduct the matrix multiply. After the matrix multiply, the prepended dimension is removed.

  • If the x is 2-dimensional and y is 1-dimensional, the matrix-vector product is obtained.

  • If both arguments are at least 1-dimensional and at least one argument is N-dimensional (where N > 2), then a batched matrix multiply is obtained. If the first argument is 1-dimensional, a 1 is prepended to its dimension in order to conduct the batched matrix multiply and removed after. If the second argument is 1-dimensional, a 1 is appended to its dimension for the purpose of the batched matrix multiply and removed after. The non-matrix dimensions (i.e. all but the last two) are broadcast according to the broadcast rule. For example, if input is a (j, 1, n, m) tensor and the other is a (k, m, p) tensor, out will be a (j, k, n, p) tensor.

Parameters
  • x (Tensor) – The input tensor which is a Tensor.

  • y (Tensor) – The input tensor which is a Tensor.

  • transpose_x (bool) – Whether to transpose \(x\) before multiplication.

  • transpose_y (bool) – Whether to transpose \(y\) before multiplication.

  • name (str|None) – A name for this layer(optional). If set None, the layer will be named automatically.

Returns

The output Tensor.

Return type

Tensor

Examples:

import paddle
import numpy as np

# vector * vector
x_data = np.random.random([10]).astype(np.float32)
y_data = np.random.random([10]).astype(np.float32)
x = paddle.to_tensor(x_data)
y = paddle.to_tensor(y_data)
z = paddle.matmul(x, y)
print(z.numpy().shape)
# [1]

# matrix * vector
x_data = np.random.random([10, 5]).astype(np.float32)
y_data = np.random.random([5]).astype(np.float32)
x = paddle.to_tensor(x_data)
y = paddle.to_tensor(y_data)
z = paddle.matmul(x, y)
print(z.numpy().shape)
# [10]

# batched matrix * broadcasted vector
x_data = np.random.random([10, 5, 2]).astype(np.float32)
y_data = np.random.random([2]).astype(np.float32)
x = paddle.to_tensor(x_data)
y = paddle.to_tensor(y_data)
z = paddle.matmul(x, y)
print(z.numpy().shape)
# [10, 5]

# batched matrix * batched matrix
x_data = np.random.random([10, 5, 2]).astype(np.float32)
y_data = np.random.random([10, 2, 5]).astype(np.float32)
x = paddle.to_tensor(x_data)
y = paddle.to_tensor(y_data)
z = paddle.matmul(x, y)
print(z.numpy().shape)
# [10, 5, 5]

# batched matrix * broadcasted matrix
x_data = np.random.random([10, 1, 5, 2]).astype(np.float32)
y_data = np.random.random([1, 3, 2, 5]).astype(np.float32)
x = paddle.to_tensor(x_data)
y = paddle.to_tensor(y_data)
z = paddle.matmul(x, y)
print(z.numpy().shape)
# [10, 3, 5, 5]
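
The transpose flags described above are not shown in the example; a minimal sketch using transpose_y, which transposes the last two dimensions of y before the multiplication:

import paddle

x = paddle.rand([3, 4])
y = paddle.rand([5, 4])
z = paddle.matmul(x, y, transpose_y=True)   # y is treated as shape [4, 5]
print(z.shape)  # expected: [3, 5]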
matrix_power ( n, name=None )

matrix_power

Computes the n-th power of a square matrix or a batch of square matrices.

Let \(X\) be a square matrix or a batch of square matrices, and \(n\) be an exponent; the equation is:

\[Out = X ^ {n}\]

Specifically,

  • If n > 0, it returns the matrix or a batch of matrices raised to the power of n.

  • If n = 0, it returns the identity matrix or a batch of identity matrices.

  • If n < 0, it returns the inverse of each matrix (if invertible) raised to the power of abs(n).

Parameters
  • x (Tensor) – A square matrix or a batch of square matrices to be raised to power n. Its shape should be [*, M, M], where * is zero or more batch dimensions. Its data type should be float32 or float64.

  • n (int) – The exponent. It can be any positive, negative integer or zero.

  • name (str, optional) – Name for the operation (optional, default is None). For more information, please refer to Name.

Returns

The n-th power of the matrix (or the batch of matrices) x. Its data type should be the same as that of x.

Return type

Tensor

Examples

import paddle

x = paddle.to_tensor([[1, 2, 3],
                      [1, 4, 9],
                      [1, 8, 27]], dtype='float64')
print(paddle.linalg.matrix_power(x, 2))
# [[6.  , 34. , 102.],
#  [14. , 90. , 282.],
#  [36. , 250., 804.]]

print(paddle.linalg.matrix_power(x, 0))
# [[1., 0., 0.],
#  [0., 1., 0.],
#  [0., 0., 1.]]

print(paddle.linalg.matrix_power(x, -2))
# [[ 12.91666667, -12.75000000,  2.83333333 ],
#  [-7.66666667 ,  8.         , -1.83333333 ],
#  [ 1.80555556 , -1.91666667 ,  0.44444444 ]]
max ( axis=None, keepdim=False, name=None ) [source]

max

Computes the maximum of tensor elements over the given axis.

Parameters
  • x (Tensor) – A tensor, the data type is float32, float64, int32, int64.

  • axis (int|list|tuple, optional) – The axis along which the maximum is computed. If None, compute the maximum over all elements of x and return a Tensor with a single element, otherwise it must be in the range \([-x.ndim, x.ndim)\). If \(axis[i] < 0\), the axis to reduce is \(x.ndim + axis[i]\).

  • keepdim (bool, optional) – Whether to reserve the reduced dimension in the output Tensor. The result tensor will have one fewer dimension than the x unless keepdim is true, default value is False.

  • name (str, optional) – The default value is None. Normally there is no need for user to set this property. For more information, please refer to Name

Returns

Tensor, results of maximum on the specified axis of input tensor, it’s data type is the same as x.

Examples

import paddle

# data_x is a Tensor with shape [2, 4]
# the axis is a int element

x = paddle.to_tensor([[0.2, 0.3, 0.5, 0.9],
                      [0.1, 0.2, 0.6, 0.7]])
result1 = paddle.max(x)
print(result1)
#[0.9]
result2 = paddle.max(x, axis=0)
print(result2)
#[0.2 0.3 0.6 0.9]
result3 = paddle.max(x, axis=-1)
print(result3)
#[0.9 0.7]
result4 = paddle.max(x, axis=1, keepdim=True)
print(result4)
#[[0.9]
# [0.7]]

# data_y is a Tensor with shape [2, 2, 2]
# the axis is list

y = paddle.to_tensor([[[1.0, 2.0], [3.0, 4.0]],
                      [[5.0, 6.0], [7.0, 8.0]]])
result5 = paddle.max(y, axis=[1, 2])
print(result5)
#[4. 8.]
result6 = paddle.max(y, axis=[0, 1])
print(result6)
#[7. 8.]
maximum ( y, name=None ) [source]

maximum

Compare two tensors and returns a new tensor containing the element-wise maxima. The equation is:

\[out = max(x, y)\]

Note: paddle.maximum supports broadcasting. If you want to know more about broadcasting, please refer to Broadcasting .

Parameters
  • x (Tensor) – the input tensor, it’s data type should be float32, float64, int32, int64.

  • y (Tensor) – the input tensor, it’s data type should be float32, float64, int32, int64.

  • name (str, optional) – Name for the operation (optional, default is None). For more information, please refer to Name.

Returns

N-D Tensor. A location into which the result is stored. If x, y have different shapes and are “broadcastable”, the resulting tensor shape is the shape of x and y after broadcasting. If x, y have the same shape, its shape is the same as x and y.

Examples

import numpy as np
import paddle

x = paddle.to_tensor([[1, 2], [7, 8]])
y = paddle.to_tensor([[3, 4], [5, 6]])
res = paddle.maximum(x, y)
print(res)
#    [[3, 4],
#     [7, 8]]

x = paddle.to_tensor([[1, 2, 3], [1, 2, 3]])
y = paddle.to_tensor([3, 0, 4])
res = paddle.maximum(x, y)
print(res)
#    [[3, 2, 4],
#     [3, 2, 4]]

x = paddle.to_tensor([2, 3, 5], dtype='float32')
y = paddle.to_tensor([1, np.nan, np.nan], dtype='float32')
res = paddle.maximum(x, y)
print(res)
#    [ 2., nan, nan]

x = paddle.to_tensor([5, 3, np.inf], dtype='float32')
y = paddle.to_tensor([1, -np.inf, 5], dtype='float32')
res = paddle.maximum(x, y)
print(res)
#    [  5.,   3., inf.]
mean ( axis=None, keepdim=False, name=None ) [source]

mean

Computes the mean of the input tensor’s elements along axis.

Parameters
  • x (Tensor) – The input Tensor with data type float32, float64.

  • axis (int|list|tuple, optional) – The axis along which to perform mean calculations. axis should be int, list(int) or tuple(int). If axis is a list/tuple of dimension(s), mean is calculated along all element(s) of axis . axis or element(s) of axis should be in range [-D, D), where D is the dimensions of x . If axis or element(s) of axis is less than 0, it works the same way as \(axis + D\) . If axis is None, mean is calculated over all elements of x. Default is None.

  • keepdim (bool, optional) – Whether to reserve the reduced dimension(s) in the output Tensor. If keepdim is True, the dimensions of the output Tensor is the same as x except in the reduced dimensions(it is of size 1 in this case). Otherwise, the shape of the output Tensor is squeezed in axis . Default is False.

  • name (str, optional) – Name for the operation (optional, default is None). For more information, please refer to Name.

Returns

Tensor, results of average along axis of x, with the same data type as x.

Examples

import paddle

x = paddle.to_tensor([[[1., 2., 3., 4.],
                       [5., 6., 7., 8.],
                       [9., 10., 11., 12.]],
                      [[13., 14., 15., 16.],
                       [17., 18., 19., 20.],
                       [21., 22., 23., 24.]]])
out1 = paddle.mean(x)
# [12.5]
out2 = paddle.mean(x, axis=-1)
# [[ 2.5  6.5 10.5]
#  [14.5 18.5 22.5]]
out3 = paddle.mean(x, axis=-1, keepdim=True)
# [[[ 2.5]
#   [ 6.5]
#   [10.5]]
#  [[14.5]
#   [18.5]
#   [22.5]]]
out4 = paddle.mean(x, axis=[0, 2])
# [ 8.5 12.5 16.5]
median ( axis=None, keepdim=False, name=None ) [source]

median

Compute the median along the specified axis.

Parameters
  • x (Tensor) – The input Tensor, its data type can be bool, float16, float32, float64, int32, int64.

  • axis (int, optional) – The axis along which to perform median calculations axis should be int. axis should be in range [-D, D), where D is the dimensions of x . If axis is less than 0, it works the same way as \(axis + D\). If axis is None, median is calculated over all elements of x. Default is None.

  • keepdim (bool, optional) – Whether to reserve the reduced dimension(s) in the output Tensor. If keepdim is True, the dimensions of the output Tensor is the same as x except in the reduced dimensions(it is of size 1 in this case). Otherwise, the shape of the output Tensor is squeezed in axis . Default is False.

  • name (str, optional) – Name for the operation (optional, default is None). For more information, please refer to Name.

Returns

Tensor, results of median along axis of x. If data type of x is float64, data type of results will be float64, otherwise data type will be float32.

Examples

import paddle

x = paddle.arange(12).reshape([3, 4])
# x is [[0 , 1 , 2 , 3 ],
#       [4 , 5 , 6 , 7 ],
#       [8 , 9 , 10, 11]]

y1 = paddle.median(x)
# y1 is [5.5]

y2 = paddle.median(x, axis=0)
# y2 is [4., 5., 6., 7.]

y3 = paddle.median(x, axis=1)
# y3 is [1.5, 5.5, 9.5]

y4 = paddle.median(x, axis=0, keepdim=True)
# y4 is [[4., 5., 6., 7.]]
min ( axis=None, keepdim=False, name=None ) [source]

min

Computes the minimum of tensor elements over the given axis.

Parameters
  • x (Tensor) – A tensor, the data type is float32, float64, int32, int64.

  • axis (int|list|tuple, optional) – The axis along which the minimum is computed. If None, compute the minimum over all elements of x and return a Tensor with a single element, otherwise must be in the range \([-x.ndim, x.ndim)\). If \(axis[i] < 0\), the axis to reduce is \(x.ndim + axis[i]\).

  • keepdim (bool, optional) – Whether to reserve the reduced dimension in the output Tensor. The result tensor will have one fewer dimension than x unless keepdim is True. Default value is False.

  • name (str, optional) – The default value is None. Normally there is no need for user to set this property. For more information, please refer to Name

Returns

Tensor, results of minimum on the specified axis of input tensor, its data type is the same as that of the input Tensor.

Examples

import paddle

# x is a tensor with shape [2, 4]
# the axis is an int element
x = paddle.to_tensor([[0.2, 0.3, 0.5, 0.9],
                      [0.1, 0.2, 0.6, 0.7]])
result1 = paddle.min(x)
print(result1)
#[0.1]
result2 = paddle.min(x, axis=0)
print(result2)
#[0.1 0.2 0.5 0.7]
result3 = paddle.min(x, axis=-1)
print(result3)
#[0.2 0.1]
result4 = paddle.min(x, axis=1, keepdim=True)
print(result4)
#[[0.2]
# [0.1]]

# y is a Tensor with shape [2, 2, 2]
# the axis is a list
y = paddle.to_tensor([[[1.0, 2.0], [3.0, 4.0]],
                      [[5.0, 6.0], [7.0, 8.0]]])
result5 = paddle.min(y, axis=[1, 2])
print(result5)
#[1. 5.]
result6 = paddle.min(y, axis=[0, 1])
print(result6)
#[1. 2.]
minimum ( y, name=None ) [source]

minimum

Compares two tensors and returns a new tensor containing the element-wise minima. The equation is:

\[out = min(x, y)\]

Note: paddle.minimum supports broadcasting. If you want to know more about broadcasting, please refer to Broadcasting .

Parameters
  • x (Tensor) – the input tensor, its data type should be float32, float64, int32, int64.

  • y (Tensor) – the input tensor, its data type should be float32, float64, int32, int64.

  • name (str, optional) – Name for the operation (optional, default is None). For more information, please refer to Name.

Returns

N-D Tensor. A location into which the result is stored. If x, y have different shapes and are “broadcastable”, the resulting tensor shape is the shape of x and y after broadcasting. If x, y have the same shape, its shape is the same as x and y.

Examples

import numpy as np
import paddle

x = paddle.to_tensor([[1, 2], [7, 8]])
y = paddle.to_tensor([[3, 4], [5, 6]])
res = paddle.minimum(x, y)
print(res)
#       [[1, 2],
#        [5, 6]]

x = paddle.to_tensor([[[1, 2, 3], [1, 2, 3]]])
y = paddle.to_tensor([3, 0, 4])
res = paddle.minimum(x, y)
print(res)
#       [[[1, 0, 3],
#         [1, 0, 3]]]

x = paddle.to_tensor([2, 3, 5], dtype='float32')
y = paddle.to_tensor([1, np.nan, np.nan], dtype='float32')
res = paddle.minimum(x, y)
print(res)
#       [ 1., nan, nan]

x = paddle.to_tensor([5, 3, np.inf], dtype='float64')
y = paddle.to_tensor([1, -np.inf, 5], dtype='float64')
res = paddle.minimum(x, y)
print(res)
#       [   1., -inf.,    5.]
mm ( mat2, name=None ) [source]

mm

Applies matrix multiplication to two tensors.

Currently, the input tensors can have any rank, but when the rank of either input is larger than 3, the two inputs must have equal rank.

Also note that if the raw tensor \(x\) or \(mat2\) is rank-1 and nontransposed, the prepended or appended dimension \(1\) will be removed after matrix multiplication.

Parameters
  • input (Tensor) – The input tensor which is a Tensor.

  • mat2 (Tensor) – The input tensor which is a Tensor.

  • name (str, optional) – The default value is None. Normally there is no need for user to set this property. For more information, please refer to Name

Returns

The product Tensor.

Return type

Tensor

  • example 1: input: [B, …, M, K], mat2: [B, …, K, N], out: [B, …, M, N]

  • example 2: input: [B, M, K], mat2: [B, K, N], out: [B, M, N]

  • example 3: input: [B, M, K], mat2: [K, N], out: [B, M, N]

  • example 4: input: [M, K], mat2: [K, N], out: [M, N]

  • example 5: input: [B, M, K], mat2: [K], out: [B, M]

  • example 6: input: [K], mat2: [K], out: [1]

Examples

import paddle
input = paddle.arange(1, 7).reshape((3, 2)).astype('float32')
mat2 = paddle.arange(1, 9).reshape((2, 4)).astype('float32')
out = paddle.mm(input, mat2)
print(out)
#        [[11., 14., 17., 20.],
#         [23., 30., 37., 44.],
#         [35., 46., 57., 68.]])
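The batched case from example 2 above can be sketched as follows; the concrete shapes [2, 3, 4] and [2, 4, 5] are assumptions chosen only for illustration, and only the output shape is checked:

import paddle

x = paddle.rand([2, 3, 4])      # [B, M, K]
mat2 = paddle.rand([2, 4, 5])   # [B, K, N]
out = paddle.mm(x, mat2)
print(out.shape)                # [2, 3, 5], i.e. [B, M, N]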
mod ( y, name=None ) [source]

mod

Computes the element-wise modulus (remainder) of the two tensors. The equation is:

\[out = x \% y\]

Note: paddle.remainder supports broadcasting. If you want to know more about broadcasting, please refer to Broadcasting .

Parameters
  • x (Tensor) – the input tensor, its data type should be float32, float64, int32, int64.

  • y (Tensor) – the input tensor, its data type should be float32, float64, int32, int64.

  • name (str, optional) – Name for the operation (optional, default is None). For more information, please refer to Name.

Returns

N-D Tensor. A location into which the result is stored. If x, y have different shapes and are “broadcastable”, the resulting tensor shape is the shape of x and y after broadcasting. If x, y have the same shape, its shape is the same as x and y.

Examples

import paddle

x = paddle.to_tensor([2, 3, 8, 7])
y = paddle.to_tensor([1, 5, 3, 3])
z = paddle.remainder(x, y)
print(z)  # [0, 3, 2, 1]
multi_dot ( name=None )

multi_dot

Multi_dot is an operator that calculates multiple matrix multiplications.

Supports inputs of float16 (GPU only), float32 and float64 dtypes. This function does not support batched inputs.

Every tensor in the input list [x] must be 2-D, except that the first and the last may be 1-D. If the first tensor is a 1-D vector of shape (n,), it is treated as a row vector of shape (1, n); similarly, if the last tensor is a 1-D vector of shape (n,), it is treated as a column vector of shape (n, 1).

If the first and last tensors are 2-D matrices, then the output is also a 2-D matrix; otherwise the output is a 1-D vector.

Multi_dot will select the lowest-cost multiplication order for the calculation. The cost of multiplying two matrices with shapes (a, b) and (b, c) is a * b * c. Given matrices A, B, C with shapes (20, 5), (5, 100), (100, 10) respectively, we can calculate the cost of the different multiplication orders as follows:

  • Cost((AB)C) = 20x5x100 + 20x100x10 = 30000

  • Cost(A(BC)) = 5x100x10 + 20x5x10 = 6000

In this case, multiplying B and C first and then multiplying by A is 5 times cheaper than computing the product sequentially from the left, as the sketch below illustrates.
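A minimal sketch of this cost argument, reusing the shapes from the paragraph above (the data is random, so only the shapes and the cost arithmetic matter):

import paddle

A = paddle.rand([20, 5])
B = paddle.rand([5, 100])
C = paddle.rand([100, 10])

# Cost((AB)C) = 20*5*100 + 20*100*10 = 30000
# Cost(A(BC)) = 5*100*10 + 20*5*10  = 6000   <- multi_dot chooses this order
out = paddle.linalg.multi_dot([A, B, C])
print(out.shape)  # [20, 10]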

Parameters
  • x ([Tensor]) – The input tensors, given as a list of Tensors.

  • name (str|None) – A name for this layer (optional). If set to None, the layer will be named automatically.

Returns

The output Tensor.

Return type

Tensor

Examples:

import paddle
import numpy as np

# A * B
A_data = np.random.random([3, 4]).astype(np.float32)
B_data = np.random.random([4, 5]).astype(np.float32)
A = paddle.to_tensor(A_data)
B = paddle.to_tensor(B_data)
out = paddle.linalg.multi_dot([A, B])
print(out.numpy().shape)
# [3, 5]

# A * B * C
A_data = np.random.random([10, 5]).astype(np.float32)
B_data = np.random.random([5, 8]).astype(np.float32)
C_data = np.random.random([8, 7]).astype(np.float32)
A = paddle.to_tensor(A_data)
B = paddle.to_tensor(B_data)
C = paddle.to_tensor(C_data)
out = paddle.linalg.multi_dot([A, B, C])
print(out.numpy().shape)
# [10, 7]
multiplex ( index, name=None ) [source]

multiplex

Based on the given index parameter, the OP selects a specific row from each input Tensor to construct the output Tensor.

If the input of this OP contains \(m\) Tensors, \(I_{i}\) denotes the i-th input Tensor, with \(i\) in \([0, m)\) .

And \(O\) means the output, where \(O[i]\) means the i-th row of the output, then the output satisfies that \(O[i] = I_{index[i]}[i]\) .

For Example:

Given:

inputs = [[[0,0,3,4], [0,1,3,4], [0,2,4,4], [0,3,3,4]],
          [[1,0,3,4], [1,1,7,8], [1,2,4,2], [1,3,3,4]],
          [[2,0,3,4], [2,1,7,8], [2,2,4,2], [2,3,3,4]],
          [[3,0,3,4], [3,1,7,8], [3,2,4,2], [3,3,3,4]]]

index = [[3],[0],[1],[2]]

out = [[3,0,3,4],    # out[0] = inputs[index[0]][0] = inputs[3][0] = [3,0,3,4]
       [0,1,3,4],    # out[1] = inputs[index[1]][1] = inputs[0][1] = [0,1,3,4]
       [1,2,4,2],    # out[2] = inputs[index[2]][2] = inputs[1][2] = [1,2,4,2]
       [2,3,3,4]]    # out[3] = inputs[index[3]][3] = inputs[2][3] = [2,3,3,4]
Parameters
  • inputs (list) – The input Tensor list. The list elements are N-D Tensors of data types float32, float64, int32, int64. All input Tensor shapes should be the same and rank must be at least 2.

  • index (Tensor) – Used to select some rows in the input Tensor to construct an index of the output Tensor. It is a 2-D Tensor with data type int32 or int64 and shape [M, 1], where M is the number of input Tensors.

  • name (str, optional) – The default value is None. Normally there is no need for user to set this property. For more information, please refer to Name.

Returns

Output of multiplex OP, with data type being float32, float64, int32, int64.

Return type

Tensor

Examples

import paddle
import numpy as np
img1 = np.array([[1, 2], [3, 4]]).astype(np.float32)
img2 = np.array([[5, 6], [7, 8]]).astype(np.float32)
inputs = [paddle.to_tensor(img1), paddle.to_tensor(img2)]
index = paddle.to_tensor(np.array([[1], [0]]).astype(np.int32))
res = paddle.multiplex(inputs, index)
print(res) # [array([[5., 6.], [3., 4.]], dtype=float32)]
multiply ( y, name=None ) [source]

multiply

Elementwise Mul Operator.

Multiply two tensors element-wise

The equation is:

\(Out = X \odot Y\)

  • $X$: a tensor of any dimension.

  • $Y$: a tensor whose dimensions must be less than or equal to the dimensions of $X$.

There are two cases for this operator:

  1. The shape of $Y$ is the same with $X$.

  2. The shape of $Y$ is a continuous subsequence of $X$.

For case 2:

  1. Broadcast $Y$ to match the shape of $X$, where $axis$ is the start dimension index for broadcasting $Y$ onto $X$.

  2. If $axis$ is -1 (default), $axis = rank(X) - rank(Y)$.

  3. The trailing dimensions of size 1 for $Y$ will be ignored for the consideration of subsequence, such as shape(Y) = (2, 1) => (2).

For example:

shape(X) = (2, 3, 4, 5), shape(Y) = (,)
shape(X) = (2, 3, 4, 5), shape(Y) = (5,)
shape(X) = (2, 3, 4, 5), shape(Y) = (4, 5), with axis=-1(default) or axis=2
shape(X) = (2, 3, 4, 5), shape(Y) = (3, 4), with axis=1
shape(X) = (2, 3, 4, 5), shape(Y) = (2), with axis=0
shape(X) = (2, 3, 4, 5), shape(Y) = (2, 1), with axis=0
Parameters
  • x (Tensor) – (Variable), Tensor or LoDTensor of any dimensions. Its dtype should be int32, int64, float32, float64.

  • y (Tensor) – (Variable), Tensor or LoDTensor of any dimensions. Its dtype should be int32, int64, float32, float64.

  • with_quant_attr (BOOLEAN) – Whether the operator has attributes used by quantization.

  • name (string, optional) – Name of the output. Default is None. It’s used to print debug info for developers. Details: Name

Returns

N-dimension tensor. A location into which the result is stored. Its dimension equals that of x.

Multiply two tensors element-wise. The equation is:

\[out = x * y\]

Note: paddle.multiply supports broadcasting. If you would like to know more about broadcasting, please refer to Broadcasting .

Parameters
  • x (Tensor) – the input tensor, its data type should be one of float32, float64, int32, int64, bool.

  • y (Tensor) – the input tensor, its data type should be one of float32, float64, int32, int64, bool.

  • name (str, optional) – Name for the operation (optional, default is None). For more information, please refer to Name.

Returns

N-D Tensor. A location into which the result is stored. If x, y have different shapes and are “broadcastable”, the resulting tensor shape is the shape of x and y after broadcasting. If x, y have the same shape, its shape is the same as x and y.

Examples

import paddle

x = paddle.to_tensor([[1, 2], [3, 4]])
y = paddle.to_tensor([[5, 6], [7, 8]])
res = paddle.multiply(x, y)
print(res) # [[5, 12], [21, 32]]

x = paddle.to_tensor([[[1, 2, 3], [1, 2, 3]]])
y = paddle.to_tensor([2])
res = paddle.multiply(x, y)
print(res) # [[[2, 4, 6], [2, 4, 6]]]
Return type

out (Tensor)

mv ( vec, name=None ) [source]

mv

Performs a matrix-vector product of the matrix x and the vector vec.

Parameters
  • x (Tensor) – A tensor with shape \([M, N]\). The data type of the input Tensor x should be one of float32, float64.

  • vec (Tensor) – A tensor with shape \([N]\). The data type of the input Tensor vec should be one of float32, float64.

  • name (str, optional) – The default value is None. Normally there is no need for user to set this property. For more information, please refer to Name.

Returns

The tensor which is the product of x and vec.

Return type

Tensor

Examples

# x: [M, N], vec: [N]
# paddle.mv(x, vec)  # out: [M]

import numpy as np
import paddle

x_data = np.array([[2, 1, 3], [3, 0, 1]]).astype("float64")
x = paddle.to_tensor(x_data)
vec_data = np.array([3, 5, 1])
vec = paddle.to_tensor(vec_data).astype("float64")
out = paddle.mv(x, vec)
neg ( name=None ) [source]

neg

This function computes the negative of the Tensor element-wise.

Parameters
  • x (Tensor) – Input of neg operator, an N-D Tensor, with data type float32, float64, int8, int16, int32, or int64.

  • name (str, optional) – Name for the operation (optional, default is None). For more information, please refer to Name.

Returns

The negative of input Tensor. The shape and data type are the same with input Tensor.

Return type

out (Tensor)

Examples

import paddle

x = paddle.to_tensor([-0.4, -0.2, 0.1, 0.3])
out = paddle.neg(x)
print(out)
# [0.4 0.2 -0.1 -0.3]
nonzero ( as_tuple=False ) [source]

nonzero

Return a tensor containing the indices of all non-zero elements of the input tensor. If as_tuple is True, return a tuple of 1-D tensors, one for each dimension in input, each containing the indices (in that dimension) of all non-zero elements of input. Given an n-dimensional input tensor with shape [x_1, x_2, …, x_n]: if as_tuple is False, the output is a tensor with shape [z, n], where z is the number of all non-zero elements in the input tensor; if as_tuple is True, the output is a tuple of n 1-D tensors, and the shape of each 1-D tensor is [z, 1].

Parameters
  • x (Tensor) – The input tensor variable.

  • as_tuple (bool) – Return type, Tensor or tuple of Tensor.

Returns

Tensor. The data type is int64.

Examples

import paddle

x1 = paddle.to_tensor([[1.0, 0.0, 0.0],
                       [0.0, 2.0, 0.0],
                       [0.0, 0.0, 3.0]])
x2 = paddle.to_tensor([0.0, 1.0, 0.0, 3.0])
out_z1 = paddle.nonzero(x1)
print(out_z1)
#[[0 0]
# [1 1]
# [2 2]]
out_z1_tuple = paddle.nonzero(x1, as_tuple=True)
for out in out_z1_tuple:
    print(out)
#[[0]
# [1]
# [2]]
#[[0]
# [1]
# [2]]
out_z2 = paddle.nonzero(x2)
print(out_z2)
#[[1]
# [3]]
out_z2_tuple = paddle.nonzero(x2, as_tuple=True)
for out in out_z2_tuple:
    print(out)
#[[1]
# [3]]
norm ( p='fro', axis=None, keepdim=False, name=None ) [source]

norm

Returns the matrix norm (Frobenius) or vector norm (the 1-norm, the Euclidean or 2-norm, and in general the p-norm for p > 0) of a given tensor.

Note

This norm API is different from numpy.linalg.norm. This API supports high-order input tensors (rank >= 3), and in that case the axis along which to compute the norm must be specified. numpy.linalg.norm, by contrast, only supports a 1-D vector or a 2-D matrix as the input tensor. For a p-order matrix norm, this API actually treats the matrix as a flattened vector to calculate the vector norm, NOT the real matrix norm.

Parameters
  • x (Tensor) – The input tensor could be N-D tensor, and the input data type could be float32 or float64.

  • p (float|string, optional) – Order of the norm. Supported values are fro, 0, 1, 2, inf, -inf and any positive real number yielding the corresponding p-norm. Not supported: ord < 0 and nuclear norm. Default value is fro.

  • axis (int|list|tuple, optional) – The axis on which to apply the norm operation. If axis is an int or a list(int)/tuple(int) with only one element, the vector norm is computed over that axis. If axis < 0, the dimension to apply the norm operation over is rank(input) + axis. If axis is a list(int)/tuple(int) with two elements, the matrix norm is computed over those axes. Default value is None.

  • keepdim (bool, optional) – Whether to reserve the reduced dimension in the output Tensor. The result tensor will have fewer dimensions than the input unless keepdim is True. Default value is False.

  • name (str, optional) – The default value is None. Normally there is no need for user to set this property. For more information, please refer to Name.

Returns

results of the norm operation on the specified axis of the input tensor; its data type is the same as that of the input Tensor.

Return type

Tensor

Examples

import paddle
import numpy as np
shape=[2, 3, 4]
np_input = np.arange(24).astype('float32') - 12
np_input = np_input.reshape(shape)
x = paddle.to_tensor(np_input)
#[[[-12. -11. -10.  -9.] [ -8.  -7.  -6.  -5.] [ -4.  -3.  -2.  -1.]]
# [[  0.   1.   2.   3.] [  4.   5.   6.   7.] [  8.   9.  10.  11.]]]

# compute frobenius norm along last two dimensions.
out_fro = paddle.norm(x, p='fro', axis=[0,1])
# out_fro.numpy() [17.435596 16.911535 16.7332   16.911535]

# compute 2-order vector norm along last dimension.
out_pnorm = paddle.norm(x, p=2, axis=-1)
#out_pnorm.numpy(): [[21.118711  13.190906   5.477226]
#                    [ 3.7416575 11.224972  19.131126]]

# compute 2-order  norm along [0,1] dimension.
out_pnorm = paddle.norm(x, p=2, axis=[0,1])
#out_pnorm.numpy(): [17.435596 16.911535 16.7332   16.911535]

# compute inf-order  norm
out_pnorm = paddle.norm(x, p=np.inf)
#out_pnorm.numpy()  = [12.]
out_pnorm = paddle.norm(x, p=np.inf, axis=0)
#out_pnorm.numpy(): [[12. 11. 10. 9.] [8. 7. 6. 7.] [8. 9. 10. 11.]]

# compute -inf-order  norm
out_pnorm = paddle.norm(x, p=-np.inf)
#out_pnorm.numpy(): [0.]
out_pnorm = paddle.norm(x, p=-np.inf, axis=0)
#out_pnorm.numpy(): [[0. 1. 2. 3.] [4. 5. 6. 5.] [4. 3. 2. 1.]]
not_equal ( y, name=None ) [source]

not_equal

This OP returns the truth value of \(x != y\) elementwise, which is equivalent to the overloaded operator !=.

NOTICE: The output of this OP has no gradient.

Parameters
  • x (Tensor) – First input to compare which is N-D tensor. The input data type should be bool, float32, float64, int32, int64.

  • y (Tensor) – Second input to compare which is N-D tensor. The input data type should be bool, float32, float64, int32, int64.

  • name (str, optional) – The default value is None. Normally there is no need for user to set this property. For more information, please refer to Name.

Returns

The tensor storing the output; the output shape is the same as that of input x.

Return type

Tensor, the output data type is bool

Examples

import paddle

x = paddle.to_tensor([1, 2, 3])
y = paddle.to_tensor([1, 3, 2])
result1 = paddle.not_equal(x, y)
print(result1)  # result1 = [False True True]
numel ( name=None ) [source]

numel

Returns the number of elements for a tensor, which is an int64 Tensor with shape [1] in static mode or a scalar value in imperative mode.

Parameters

x (Tensor) – The input Tensor, its data type can be bool, float16, float32, float64, int32, int64.

Returns

The number of elements for the input Tensor.

Return type

Tensor

Examples

import paddle

x = paddle.full(shape=[4, 5, 7], fill_value=0, dtype='int32')
numel = paddle.numel(x) # 140
numpy ( self: paddle.fluid.core_avx.VarBase ) array

numpy

Returns a numpy array that shows the value of the current Tensor.

Returns

The numpy value of current Tensor.

Return type

ndarray, whose dtype is the same as the current Tensor

Examples

import paddle
import numpy as np
data = np.random.uniform(-1, 1, [30, 10, 32]).astype('float32')
linear = paddle.nn.Linear(32, 64)
data = paddle.to_tensor(data)
x = linear(data)
print(x.numpy())
pin_memory ( self: paddle.fluid.core_avx.VarBase ) paddle.fluid.core_avx.VarBase

pin_memory

Returns a copy of this Tensor in pin memory.

If this Tensor is already in pin memory, then no copy is performed and the original Tensor is returned.

Examples

import paddle
x = paddle.to_tensor(1.0, place=paddle.CUDAPlace(0))
print(x.place)      # CUDAPlace(0)

y = x.pin_memory()
print(y.place)      # CUDAPinnedPlace
pow ( y, name=None ) [source]

pow

Compute the power of tensor elements. The equation is:

\[out = x^{y}\]

Note: paddle.pow supports broadcasting. If you want to know more about broadcasting, please refer to Broadcasting .

Parameters
  • x (Tensor) – An N-D Tensor, the data type is float32, float64, int32 or int64.

  • y (float|int|Tensor) – If it is an N-D Tensor, its data type should be the same as x.

  • name (str, optional) – Name for the operation (optional, default is None). For more information, please refer to Name.

Returns

N-D Tensor. A location into which the result is stored. Its dimension and data type are the same as x.

Examples

import paddle

x = paddle.to_tensor([1, 2, 3], dtype='float32')

# example 1: y is a float or int
res = paddle.pow(x, 2)
print(res)
# Tensor(shape=[3], dtype=float32, place=CUDAPlace(0), stop_gradient=True,
#        [1., 4., 9.])
res = paddle.pow(x, 2.5)
print(res)
# Tensor(shape=[3], dtype=float32, place=CUDAPlace(0), stop_gradient=True,
#        [1.         , 5.65685415 , 15.58845711])

# example 2: y is a Tensor
y = paddle.to_tensor([2], dtype='float32')
res = paddle.pow(x, y)
print(res)
# Tensor(shape=[3], dtype=float32, place=CUDAPlace(0), stop_gradient=True,
#        [1., 4., 9.])
prod ( axis=None, keepdim=False, dtype=None, name=None ) [source]

prod

Compute the product of tensor elements over the given axis.

Parameters
  • x (Tensor) – The input tensor, its data type should be float32, float64, int32, int64.

  • axis (int|list|tuple, optional) – The axis along which the product is computed. If None, multiply all elements of x and return a Tensor with a single element, otherwise must be in the range \([-x.ndim, x.ndim)\). If \(axis[i]<0\), the axis to reduce is \(x.ndim + axis[i]\). Default is None.

  • dtype (str|np.dtype, optional) – The desired data type of the returned tensor, which can be float32, float64, int32, int64. If specified, the input tensor is cast to dtype before the operator is performed. This is very useful for avoiding data type overflows. The default value is None; the dtype of the output is the same as the input Tensor x.

  • keepdim (bool, optional) – Whether to reserve the reduced dimension in the output Tensor. The result tensor will have one fewer dimension than the input unless keepdim is true. Default is False.

  • name (string, optional) – The default value is None. Normally there is no need for user to set this property. For more information, please refer to Name .

Returns

Tensor, result of product on the specified dim of input tensor.

Raises
  • ValueError – The dtype must be float32, float64, int32 or int64.

  • TypeError – The type of axis must be int, list or tuple.

Examples

import paddle

# the axis is an int element
x = paddle.to_tensor([[0.2, 0.3, 0.5, 0.9],
                      [0.1, 0.2, 0.6, 0.7]])
out1 = paddle.prod(x)
# [0.0002268]

out2 = paddle.prod(x, -1)
# [0.027  0.0084]

out3 = paddle.prod(x, 0)
# [0.02 0.06 0.3  0.63]

out4 = paddle.prod(x, 0, keepdim=True)
# [[0.02 0.06 0.3  0.63]]

out5 = paddle.prod(x, 0, dtype='int64')
# [0 0 0 0]

# the axis is a list
y = paddle.to_tensor([[[1.0, 2.0], [3.0, 4.0]],
                      [[5.0, 6.0], [7.0, 8.0]]])
out6 = paddle.prod(y, [0, 1])
# [105. 384.]

out7 = paddle.prod(y, (1, 2))
# [  24. 1680.]
qr ( mode='reduced', name=None )

qr

Computes the QR decomposition of one matrix or a batch of matrices (backward is unsupported now).

Parameters
  • x (Tensor) – The input tensor. Its shape should be […, M, N], where … is zero or more batch dimensions. M and N can be arbitrary positive number. The data type of x should be float32 or float64.

  • mode (str, optional) – A flag to control the behavior of qr, the default is “reduced”. Suppose x’s shape is […, M, N] and denoting K = min(M, N): If mode = “reduced”, qr op will return reduced Q and R matrices, which means Q’s shape is […, M, K] and R’s shape is […, K, N]. If mode = “complete”, qr op will return complete Q and R matrices, which means Q’s shape is […, M, M] and R’s shape is […, M, N]. If mode = “r”, qr op will only return reduced R matrix, which means R’s shape is […, K, N].

  • name (str, optional) – Name for the operation (optional, default is None). For more information, please refer to Name.

Returns

If mode = “reduced” or mode = “complete”, qr will return a two tensor-tuple, which represents Q and R. If mode = “r”, qr will return a tensor which represents R.

Examples

import paddle

x = paddle.to_tensor([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]).astype('float64')
q, r = paddle.linalg.qr(x)
print (q)
print (r)

# Q = [[-0.16903085,  0.89708523],
#      [-0.50709255,  0.27602622],
#      [-0.84515425, -0.34503278]])

# R = [[-5.91607978, -7.43735744],
#      [ 0.        ,  0.82807867]])

# one can verify : X = Q * R ;
rank ( ) [source]

rank

The OP returns the number of dimensions for a tensor, which is a 0-D int32 Tensor.

Parameters

input (Tensor) – The input N-D tensor with shape of \([N_1, N_2, ..., N_k]\), the data type is arbitrary.

Returns

The 0-D tensor with the dimensions of the input Tensor.

Return type

Tensor, the output data type is int32.

Examples

import paddle

input = paddle.rand((3, 100, 100))
rank = paddle.rank(input)
print(rank)
# 3
real ( name=None ) [source]

real

Returns a new tensor containing real values of the input tensor.

Parameters
  • x (Tensor) – the input tensor, its data type could be complex64 or complex128.

  • name (str, optional) – The default value is None. Normally there is no need for user to set this property. For more information, please refer to Name .

Returns

a tensor containing real values of the input tensor.

Return type

Tensor

Examples

import paddle

x = paddle.to_tensor(
    [[1 + 6j, 2 + 5j, 3 + 4j], [4 + 3j, 5 + 2j, 6 + 1j]])
# Tensor(shape=[2, 3], dtype=complex64, place=CUDAPlace(0), stop_gradient=True,
#        [[(1+6j), (2+5j), (3+4j)],
#         [(4+3j), (5+2j), (6+1j)]])

real_res = paddle.real(x)
# Tensor(shape=[2, 3], dtype=float32, place=CUDAPlace(0), stop_gradient=True,
#        [[1., 2., 3.],
#         [4., 5., 6.]])

real_t = x.real()
# Tensor(shape=[2, 3], dtype=float32, place=CUDAPlace(0), stop_gradient=True,
#        [[1., 2., 3.],
#         [4., 5., 6.]])
reciprocal ( name=None ) [source]

reciprocal

Reciprocal Activation Operator.

\(out = \frac{1}{x}\)

Parameters
  • x (Tensor) – Input of Reciprocal operator, an N-D Tensor, with data type float32, float64 or float16.

  • with_quant_attr (BOOLEAN) – Whether the operator has attributes used by quantization.

  • name (str, optional) – Name for the operation (optional, default is None). For more information, please refer to Name.

Returns

Output of Reciprocal operator, a Tensor with shape same as input.

Return type

out (Tensor)

Examples

import paddle

x = paddle.to_tensor([-0.4, -0.2, 0.1, 0.3])
out = paddle.reciprocal(x)
print(out)
# [-2.5        -5.         10.          3.33333333]
reciprocal_ ( name=None )

reciprocal_

Inplace version of reciprocal API, the output Tensor will be inplaced with input x. Please refer to api_fluid_layers_reciprocal.
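A minimal usage sketch, reusing the input from the reciprocal example above; the Tensor is modified in place:

import paddle

x = paddle.to_tensor([-0.4, -0.2, 0.1, 0.3])
x.reciprocal_()   # x now holds the element-wise reciprocals
print(x)
# [-2.5        -5.         10.          3.33333333]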

register_hook ( hook )

register_hook

Registers a backward hook for current Tensor.

The hook will be called every time the gradient Tensor of current Tensor is computed.

The hook should not modify the input gradient Tensor, but it can optionally return a new gradient Tensor which will be used in place of current Tensor’s gradient.

The hook should have the following signature:

hook(grad) -> Tensor or None

Parameters

hook (function) – A backward hook to be registered for Tensor.grad

Returns

A helper object that can be used to remove the registered hook by calling remove() method.

Return type

TensorHookRemoveHelper

Examples

import paddle

# hook function return None
def print_hook_fn(grad):
    print(grad)

# hook function return Tensor
def double_hook_fn(grad):
    grad = grad * 2
    return grad

x = paddle.to_tensor([0., 1., 2., 3.], stop_gradient=False)
y = paddle.to_tensor([4., 5., 6., 7.], stop_gradient=False)
z = paddle.to_tensor([1., 2., 3., 4.])

# one Tensor can register multiple hooks
h = x.register_hook(print_hook_fn)
x.register_hook(double_hook_fn)

w = x + y
# register hook by lambda function
w.register_hook(lambda grad: grad * 2)

o = z.matmul(w)
o.backward()
# print_hook_fn print content in backward
# Tensor(shape=[4], dtype=float32, place=CUDAPlace(0), stop_gradient=False,
#        [2., 4., 6., 8.])

print("w.grad:", w.grad) # w.grad: [1. 2. 3. 4.]
print("x.grad:", x.grad) # x.grad: [ 4.  8. 12. 16.]
print("y.grad:", y.grad) # y.grad: [2. 4. 6. 8.]

# remove hook
h.remove()
remainder ( y, name=None ) [source]

remainder

Computes the element-wise modulus (remainder) of the two tensors. The equation is:

\[out = x \% y\]

Note: paddle.remainder supports broadcasting. If you want to know more about broadcasting, please refer to Broadcasting .

Parameters
  • x (Tensor) – the input tensor, its data type should be float32, float64, int32, int64.

  • y (Tensor) – the input tensor, its data type should be float32, float64, int32, int64.

  • name (str, optional) – Name for the operation (optional, default is None). For more information, please refer to Name.

Returns

N-D Tensor. A location into which the result is stored. If x, y have different shapes and are “broadcastable”, the resulting tensor shape is the shape of x and y after broadcasting. If x, y have the same shape, its shape is the same as x and y.

Examples

import paddle

x = paddle.to_tensor([2, 3, 8, 7])
y = paddle.to_tensor([1, 5, 3, 3])
z = paddle.remainder(x, y)
print(z)  # [0, 3, 2, 1]
reshape ( shape, name=None ) [source]

reshape

This operator changes the shape of x without changing its data.

Note that the output Tensor will share data with the origin Tensor and does not have a Tensor copy in dygraph mode. If you want to use the Tensor copy version, please use Tensor.clone, e.g. reshape_clone_x = x.reshape([-1]).clone().

Some tricks exist when specifying the target shape.

1. -1 means the value of this dimension is inferred from the total element number of x and the remaining dimensions. Thus one and only one dimension can be set to -1.

2. 0 means the actual dimension value is going to be copied from the corresponding dimension of x. The index of a 0 in shape cannot exceed the number of dimensions of x.

Here are some examples to explain it.

1. Given a 3-D tensor x with a shape [2, 4, 6], and the target shape is [6, 8], the reshape operator will transform x into a 2-D tensor with shape [6, 8] and leaving x’s data unchanged.

2. Given a 3-D tensor x with a shape [2, 4, 6], and the target shape specified is [2, 3, -1, 2], the reshape operator will transform x into a 4-D tensor with shape [2, 3, 4, 2] and leaving x’s data unchanged. In this case, one dimension of the target shape is set to -1, the value of this dimension is inferred from the total element number of x and remaining dimensions.

3. Given a 3-D tensor x with a shape [2, 4, 6], and the target shape is [-1, 0, 3, 2], the reshape operator will transform x into a 4-D tensor with shape [2, 4, 3, 2] and leaving x’s data unchanged. In this case, besides -1, 0 means the actual dimension value is going to be copied from the corresponding dimension of x.

Parameters
  • x (Tensor) – An N-D Tensor. The data type is float32, float64, int32, int64 or bool

  • shape (list|tuple|Tensor) – Define the target shape. At most one dimension of the target shape can be -1. The data type is int32 . If shape is a list or tuple, its elements should be integers or Tensors with shape [1]. If shape is a Tensor, it should be a 1-D Tensor .

  • name (str, optional) – The default value is None. Normally there is no need for user to set this property. For more information, please refer to Name .

Returns

A reshaped Tensor with the same data type as x.

Return type

Tensor

Examples

import numpy as np
import paddle

x = paddle.rand([2, 4, 6], dtype="float32")
positive_four = paddle.full([1], 4, "int32")

out = paddle.reshape(x, [-1, 0, 3, 2])
print(out)
# the shape is [2,4,3,2].

out = paddle.reshape(x, shape=[positive_four, 12])
print(out)
# the shape is [4, 12].

shape_tensor = paddle.to_tensor(np.array([8, 6]).astype("int32"))
out = paddle.reshape(x, shape=shape_tensor)
print(out)
# the shape is [8, 6].
# out shares data with x in dygraph mode
x[0, 0, 0] = 10.
print(out[0, 0])
# the value is [10.]
reshape_ ( shape, name=None ) [source]

reshape_

Inplace version of reshape API, the output Tensor will be inplaced with input x. Please refer to api_paddle_tensor_reshape.
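A minimal usage sketch, reusing the shapes from the reshape example above; x is reshaped in place and no copy is made:

import paddle

x = paddle.rand([2, 4, 6], dtype="float32")
x.reshape_([-1, 0, 3, 2])   # in place; -1 and 0 follow the same rules as reshape
print(x.shape)
# [2, 4, 3, 2]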

reverse ( axis, name=None ) [source]

reverse

Reverse the order of an n-D tensor along the given axis (or axes) in axis.

Parameters
  • x (Tensor) – A Tensor(or LoDTensor) with shape \([N_1, N_2,..., N_k]\) . The data type of the input Tensor x should be float32, float64, int32, int64, bool.

  • axis (list|tuple|int) – The axis(axes) to flip on. Negative indices for indexing from the end are accepted.

  • name (str, optional) – The default value is None. Normally there is no need for user to set this property. For more information, please refer to Name .

Returns

Tensor or LoDTensor calculated by the flip layer. The data type is the same as the input x.

Return type

Tensor

Examples

import paddle
import numpy as np

image_shape=(3, 2, 2)
x = np.arange(image_shape[0] * image_shape[1] * image_shape[2]).reshape(image_shape)
x = x.astype('float32')
img = paddle.to_tensor(x)
tmp = paddle.flip(img, [0,1])
print(tmp) # [[[10,11],[8, 9]], [[6, 7],[4, 5]], [[2, 3],[0, 1]]]

out = paddle.flip(tmp,-1)
print(out) # [[[11,10],[9, 8]], [[7, 6],[5, 4]], [[3, 2],[1, 0]]]
roll ( shifts, axis=None, name=None ) [source]

roll

Roll the x tensor along the given axis (or axes). Elements that roll beyond the last position are re-introduced at the first position according to shifts. If an axis is not specified, the tensor will be flattened before rolling and then restored to its original shape.

Parameters
  • x (Tensor) – The x tensor as input.

  • shifts (int|list|tuple) – The number of places by which the elements of the x tensor are shifted.

  • axis (int|list|tuple|None) – axis(axes) along which to roll.

Returns

A Tensor with same data type as x.

Return type

Tensor

Examples

import paddle

x = paddle.to_tensor([[1.0, 2.0, 3.0],
                      [4.0, 5.0, 6.0],
                      [7.0, 8.0, 9.0]])
out_z1 = paddle.roll(x, shifts=1)
print(out_z1)
#[[9. 1. 2.]
# [3. 4. 5.]
# [6. 7. 8.]]
out_z2 = paddle.roll(x, shifts=1, axis=0)
print(out_z2)
#[[7. 8. 9.]
# [1. 2. 3.]
# [4. 5. 6.]]
round ( name=None ) [source]

round

The OP rounds the values in the input to the nearest integer value.

input:
  x.shape = [4]
  x.data = [1.2, -0.9, 3.4, 0.9]

output:
  out.shape = [4]
  out.data = [1., -1., 3., 1.]
Parameters
  • x (Tensor) – Input of Round operator, an N-D Tensor, with data type float32, float64 or float16.

  • with_quant_attr (BOOLEAN) – Whether the operator has attributes used by quantization.

  • name (str, optional) – Name for the operation (optional, default is None). For more information, please refer to Name.

Returns

Output of Round operator, a Tensor with shape same as input.

Return type

out (Tensor)

Examples

import paddle

x = paddle.to_tensor([-0.5, -0.2, 0.6, 1.5])
out = paddle.round(x)
print(out)
# [-1. -0.  1.  2.]
round_ ( name=None )

round_

Inplace version of round API, the output Tensor will be inplaced with input x. Please refer to api_fluid_layers_round.
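A minimal usage sketch, reusing the input from the round example above; x is rounded in place:

import paddle

x = paddle.to_tensor([-0.5, -0.2, 0.6, 1.5])
x.round_()
print(x)
# [-1. -0.  1.  2.]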

rsqrt ( name=None ) [source]

rsqrt

Rsqrt Activation Operator.

Please make sure the input is valid to avoid numeric errors.

\(out = \frac{1}{\sqrt{x}}\)

Parameters
  • x (Tensor) – Input of Rsqrt operator, an N-D Tensor, with data type float32, float64 or float16.

  • with_quant_attr (BOOLEAN) – Whether the operator has attributes used by quantization.

  • name (str, optional) – Name for the operation (optional, default is None). For more information, please refer to Name.

Returns

Output of Rsqrt operator, a Tensor with shape same as input.

Return type

out (Tensor)

Examples

import paddle

x = paddle.to_tensor([0.1, 0.2, 0.3, 0.4])
out = paddle.rsqrt(x)
print(out)
# [3.16227766 2.23606798 1.82574186 1.58113883]
rsqrt_ ( name=None )

rsqrt_

Inplace version of rsqrt API, the output Tensor will be inplaced with input x. Please refer to api_fluid_layers_rsqrt.
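A minimal usage sketch, reusing the input from the rsqrt example above; x is overwritten with the result:

import paddle

x = paddle.to_tensor([0.1, 0.2, 0.3, 0.4])
x.rsqrt_()
print(x)
# [3.16227766 2.23606798 1.82574186 1.58113883]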

scale ( scale=1.0, bias=0.0, bias_after_scale=True, act=None, name=None ) [source]

scale

Scale operator.

Apply scale and bias to the input Tensor as follows:

bias_after_scale is True:

\[Out=scale*X+bias\]

bias_after_scale is False:

\[Out=scale*(X+bias)\]
Parameters
  • x (Tensor) – Input N-D Tensor of scale operator. Data type can be float32, float64, int8, int16, int32, int64, uint8.

  • scale (float|Tensor) – The scale factor of the input, it should be a float number or a Tensor with shape [1] and data type as float32.

  • bias (float) – The bias to be put on the input.

  • bias_after_scale (bool) – Apply bias addition after or before scaling. It is useful for numeric stability in some circumstances.

  • act (str, optional) – Activation applied to the output such as tanh, softmax, sigmoid, relu.

  • name (str, optional) – The default value is None. Normally there is no need for user to set this property. For more information, please refer to Name

Returns

Output tensor of scale operator, with shape and data type same as input.

Return type

Tensor

Examples

# scale as a float32 number
import paddle

data = paddle.randn(shape=[2,3], dtype='float32')
res = paddle.scale(data, scale=2.0, bias=1.0)
# scale with parameter scale as a Tensor
import paddle

data = paddle.randn(shape=[2, 3], dtype='float32')
factor = paddle.to_tensor([2], dtype='float32')
res = paddle.scale(data, scale=factor, bias=1.0)
scale_ ( scale=1.0, bias=0.0, bias_after_scale=True, act=None, name=None )

scale_

Inplace version of scale API, the output Tensor will be inplaced with input x. Please refer to api_tensor_scale.
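A minimal usage sketch, mirroring the scale example above; data is updated in place:

import paddle

data = paddle.randn(shape=[2, 3], dtype='float32')
data.scale_(scale=2.0, bias=1.0)   # in place: data = data * 2.0 + 1.0
print(data)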

scatter ( index, updates, overwrite=True, name=None ) [source]

scatter

Scatter Layer

Output is obtained by updating the input on selected indices based on updates.

import numpy as np
#input:
x = np.array([[1, 1], [2, 2], [3, 3]])
index = np.array([2, 1, 0, 1])
# shape of updates should be the same as x
# shape of updates with dim > 1 should be the same as input
updates = np.array([[1, 1], [2, 2], [3, 3], [4, 4]])
overwrite = False
# calculation:
if not overwrite:
    for i in range(len(index)):
        x[index[i]] = np.zeros((2))
for i in range(len(index)):
    if (overwrite):
        x[index[i]] = updates[i]
    else:
        x[index[i]] += updates[i]
# output:
out = np.array([[3, 3], [6, 6], [1, 1]])
out.shape # [3, 2]

NOTICE: The order in which updates are applied is nondeterministic, so the output will be nondeterministic if index contains duplicates.

Parameters
  • x (Tensor) – The input N-D Tensor with ndim>=1. Data type can be float32, float64.

  • index (Tensor) – The index 1-D Tensor. Data type can be int32, int64. The length of index cannot exceed updates’s length, and the value in index cannot exceed input’s length.

  • updates (Tensor) – The values used to update the input, selected according to index. For dimensions greater than 1, its shape should be the same as the input; its first dimension should equal the length of index.

  • overwrite (bool) – The mode used to update the output when there are duplicate indices. If True, use the overwrite mode to update the output of the same index; if False, use the accumulate mode to update the output of the same index. Default value is True.

  • name (str, optional) – The default value is None. Normally there is no need for user to set this property. For more information, please refer to Name .

Returns

The output is a Tensor with the same shape as x.

Return type

Tensor

Examples

import paddle

x = paddle.to_tensor([[1, 1], [2, 2], [3, 3]], dtype='float32')
index = paddle.to_tensor([2, 1, 0, 1], dtype='int64')
updates = paddle.to_tensor([[1, 1], [2, 2], [3, 3], [4, 4]], dtype='float32')

output1 = paddle.scatter(x, index, updates, overwrite=False)
# [[3., 3.],
#  [6., 6.],
#  [1., 1.]]

output2 = paddle.scatter(x, index, updates, overwrite=True)
# CPU device:
# [[3., 3.],
#  [4., 4.],
#  [1., 1.]]
# GPU device maybe have two results because of the repeated numbers in index
# result 1:
# [[3., 3.],
#  [4., 4.],
#  [1., 1.]]
# result 2:
# [[3., 3.],
#  [2., 2.],
#  [1., 1.]]
scatter_ ( index, updates, overwrite=True, name=None ) [source]

scatter_

Inplace version of scatter API, the output Tensor will be inplaced with input x. Please refer to api_paddle_tensor_scatter.
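A minimal usage sketch, reusing the inputs from the scatter example above; x is updated in place using the accumulate mode:

import paddle

x = paddle.to_tensor([[1, 1], [2, 2], [3, 3]], dtype='float32')
index = paddle.to_tensor([2, 1, 0, 1], dtype='int64')
updates = paddle.to_tensor([[1, 1], [2, 2], [3, 3], [4, 4]], dtype='float32')

x.scatter_(index, updates, overwrite=False)
print(x)
# [[3., 3.],
#  [6., 6.],
#  [1., 1.]]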

scatter_nd ( updates, shape, name=None ) [source]

scatter_nd

Scatter_nd Layer

Output is obtained by scattering the updates in a new tensor according to index . This op is similar to scatter_nd_add, except that the tensor of shape shape is zero-initialized. Correspondingly, scatter_nd(index, updates, shape) is equal to scatter_nd_add(paddle.zeros(shape, updates.dtype), index, updates) . If index has repeated elements, then the corresponding updates are accumulated. Because of numerical approximation issues, a different order of the repeated elements in index may cause different results. The specific calculation method can be seen in scatter_nd_add . This op is the inverse of the gather_nd op.

Parameters
  • index (Tensor) – The index input with ndim > 1 and index.shape[-1] <= len(shape). Its dtype should be int32 or int64 as it is used as indexes.

  • updates (Tensor) – The updated value of scatter_nd op. Its dtype should be float32, float64. It must have the shape index.shape[:-1] + shape[index.shape[-1]:]

  • shape (tuple|list) – Shape of output tensor.

  • name (str|None) – The output Tensor name. If set None, the layer will be named automatically.

Returns

The output is a tensor with the same type as updates .

Return type

output (Tensor)

Examples

import paddle
import numpy as np

index_data = np.array([[1, 1],
                       [0, 1],
                       [1, 3]]).astype(np.int64)
index = paddle.to_tensor(index_data)
updates = paddle.rand(shape=[3, 9, 10], dtype='float32')
shape = [3, 5, 9, 10]

output = paddle.scatter_nd(index, updates, shape)
scatter_nd_add ( index, updates, name=None ) [source]

scatter_nd_add

Scatter_nd_add Layer

Output is obtained by applying sparse addition to a single value or slice in a Tensor.

x is a Tensor with ndim \(R\) and index is a Tensor with ndim \(K\) . Thus, index has shape \([i_0, i_1, ..., i_{K-2}, Q]\) where \(Q \leq R\) . updates is a Tensor with ndim \(K - 1 + R - Q\) and its shape is \(index.shape[:-1] + x.shape[index.shape[-1]:]\) .

According to the \([i_0, i_1, ..., i_{K-2}]\) of index , add the corresponding updates slice to the x slice which is obtained by the last one dimension of index .

Given:

* Case 1:
    x = [0, 1, 2, 3, 4, 5]
    index = [[1], [2], [3], [1]]
    updates = [9, 10, 11, 12]

  we get:

    output = [0, 22, 12, 14, 4, 5]

* Case 2:
    x = [[65, 17], [-14, -25]]
    index = [[], []]
    updates = [[[-1, -2], [1, 2]],
               [[3, 4], [-3, -4]]]
    x.shape = (2, 2)
    index.shape = (2, 0)
    updates.shape = (2, 2, 2)

  we get:

    output = [[67, 19], [-16, -27]]
Parameters
  • x (Tensor) – The x input. Its dtype should be int32, int64, float32, float64.

  • index (Tensor) – The index input with ndim > 1 and index.shape[-1] <= x.ndim. Its dtype should be int32 or int64 as it is used as indexes.

  • updates (Tensor) – The updated value of scatter_nd_add op, and it must have the same dtype as x. It must have the shape index.shape[:-1] + x.shape[index.shape[-1]:].

  • name (str|None) – The output tensor name. If set None, the layer will be named automatically.

Returns

The output is a tensor with the same shape and dtype as x.

Return type

output (Tensor)

Examples

import paddle
import numpy as np

x = paddle.rand(shape=[3, 5, 9, 10], dtype='float32')
updates = paddle.rand(shape=[3, 9, 10], dtype='float32')
index_data = np.array([[1, 1],
                       [0, 1],
                       [1, 3]]).astype(np.int64)
index = paddle.to_tensor(index_data)
output = paddle.scatter_nd_add(x, index, updates)
set_value ( value )

set_value

Notes:

This API is ONLY available in Dygraph mode

Set a new value for this Variable.

Parameters

value (Variable|np.ndarray) – the new value.

Examples

import paddle.fluid as fluid
from paddle.fluid.dygraph.base import to_variable
from paddle.fluid.dygraph import Linear
import numpy as np

data = np.ones([3, 1024], dtype='float32')
with fluid.dygraph.guard():
    linear = fluid.dygraph.Linear(1024, 4)
    t = to_variable(data)
    linear(t)  # call with default weight
    custom_weight = np.random.randn(1024, 4).astype("float32")
    linear.weight.set_value(custom_weight)  # change existing weight
    out = linear(t)  # call with different weight
shard_index ( index_num, nshards, shard_id, ignore_value=-1 ) [source]

shard_index

Reset the values of input according to the shard they belong to. Every value in input must be a non-negative integer, and the parameter index_num represents the integer just above the maximum value of input. Thus, all values in input must be in the range [0, index_num) and each value can be regarded as the offset to the beginning of the range. The range is further split into multiple shards. Specifically, we first compute shard_size according to the following formula, which represents the number of integers each shard can hold. So the i’th shard can hold values in the range [i*shard_size, (i+1)*shard_size).

shard_size = (index_num + nshards - 1) // nshards

For each value v in input, we reset it to a new value according to the following formula:

v = v - shard_id * shard_size if shard_id * shard_size <= v < (shard_id+1) * shard_size else ignore_value

That is, the value v is set to the new offset within the range represented by the shard shard_id if it is in that range. Otherwise, we reset it to ignore_value.
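A short trace of the two formulas above, using index_num=20, nshards=2, shard_id=0 and ignore_value=-1 (the same numbers as the example below; purely illustrative arithmetic):

# shard_size = (index_num + nshards - 1) // nshards
shard_size = (20 + 2 - 1) // 2   # 10, so shard 0 holds [0, 10) and shard 1 holds [10, 20)

# v = 16: not in [0, 10) -> reset to ignore_value, i.e. -1
# v = 1 : in [0, 10)     -> reset to 1 - 0 * shard_size = 1
print(shard_size)  # 10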

Parameters
  • input (Tensor) – Input tensor with data type int64 or int32. It’s last dimension must be 1.

  • index_num (int) – An integer greater than the maximum value of input.

  • nshards (int) – The number of shards.

  • shard_id (int) – The index of the current shard.

  • ignore_value (int) – An integer value out of sharded index range.

Returns

Tensor.

Examples

import paddle
label = paddle.to_tensor([[16], [1]], "int64")
shard_label = paddle.shard_index(input=label,
                                 index_num=20,
                                 nshards=2,
                                 shard_id=0)
print(shard_label)
# [[-1], [1]]
sign ( name=None ) [source]

sign

This OP returns sign of every element in x: 1 for positive, -1 for negative and 0 for zero.

Parameters
  • x (Tensor) – The input tensor. The data type can be float16, float32 or float64.

  • name (str, optional) – The default value is None. Normally there is no need for user to set this property. For more information, please refer to Name

Returns

The output sign tensor with identical shape and data type to the input x.

Return type

Tensor

Examples

import paddle

x = paddle.to_tensor([3.0, 0.0, -2.0, 1.7], dtype='float32')
out = paddle.sign(x=x)
print(out)  # [1.0, 0.0, -1.0, 1.0]
sin ( name=None ) [source]

sin

Sine Activation Operator.

\(out = sin(x)\)

Parameters
  • x (Tensor) – Input of Sin operator, an N-D Tensor, with data type float32, float64 or float16.

  • with_quant_attr (BOOLEAN) – Whether the operator has attributes used by quantization.

  • name (str, optional) – Name for the operation (optional, default is None). For more information, please refer to Name.

Returns

Output of Sin operator, a Tensor with shape same as input.

Return type

out (Tensor)

Examples

import paddle

x = paddle.to_tensor([-0.4, -0.2, 0.1, 0.3])
out = paddle.sin(x)
print(out)
# [-0.38941834 -0.19866933  0.09983342  0.29552021]
sinh ( name=None ) [source]

sinh

Sinh Activation Operator.

\(out = sinh(x)\)

Parameters
  • x (Tensor) – Input of Sinh operator, an N-D Tensor, with data type float32, float64 or float16.

  • with_quant_attr (BOOLEAN) – Whether the operator has attributes used by quantization.

  • name (str, optional) – Name for the operation (optional, default is None). For more information, please refer to Name.

Returns

Output of Sinh operator, a Tensor with shape same as input.

Return type

out (Tensor)

Examples

import paddle

x = paddle.to_tensor([-0.4, -0.2, 0.1, 0.3])
out = paddle.sinh(x)
print(out)
# [-0.41075233 -0.201336    0.10016675  0.30452029]
slice ( axes, starts, ends ) [source]

slice

This operator produces a slice of input along multiple axes. It is similar to numpy indexing: https://docs.scipy.org/doc/numpy/reference/arrays.indexing.html. Slice uses the axes, starts and ends attributes to specify the start and end dimension for each axis in the list of axes, and uses this information to slice the input data tensor. If a negative value such as \(-i\) is passed to starts or ends, it represents the reverse position of the axis \(i-1\) (here 0 is the initial position). If the value passed to starts or ends is greater than n (the number of elements in this dimension), it represents n. For slicing to the end of a dimension with unknown size, it is recommended to pass in INT_MAX. The size of axes must be equal to that of starts and ends. The following examples explain how slice works:

Case1:
    Given:
        data = [ [1, 2, 3, 4], [5, 6, 7, 8], ]
        axes = [0, 1]
        starts = [1, 0]
        ends = [2, 3]
    Then:
        result = [ [5, 6, 7], ]

Case2:
    Given:
        data = [ [1, 2, 3, 4], [5, 6, 7, 8], ]
        axes = [0, 1]
        starts = [0, 1]
        ends = [-1, 1000]       # -1 denotes the reverse 0th position of dimension 0.
    Then:
        result = [ [2, 3, 4], ] # result = data[0:1, 1:4]
Parameters
  • input (Tensor) – A Tensor . The data type is float16, float32, float64, int32 or int64.

  • axes (list|tuple) – The data type is int32 . Axes that starts and ends apply to .

  • starts (list|tuple|Tensor) – The data type is int32 . If starts is a list or tuple, its elements should be integers or Tensors with shape [1]. If starts is a Tensor, it should be a 1-D Tensor. It represents the starting indices of the corresponding axes in axes.

  • ends (list|tuple|Tensor) – The data type is int32 . If ends is a list or tuple, its elements should be integers or Tensors with shape [1]. If ends is a Tensor, it should be a 1-D Tensor . It represents the ending indices of the corresponding axes in axes.

Returns

A Tensor. The data type is same as input.

Return type

Tensor

Raises
  • TypeError – The type of starts must be list, tuple or Tensor.

  • TypeError – The type of ends must be list, tuple or Tensor.

Examples

import paddle

input = paddle.rand(shape=[4, 5, 6], dtype='float32')
# example 1:
# attr starts is a list which doesn't contain tensor.
axes = [0, 1, 2]
starts = [-3, 0, 2]
ends = [3, 2, 4]
sliced_1 = paddle.slice(input, axes=axes, starts=starts, ends=ends)
# sliced_1 is input[0:3, 0:2, 2:4].

# example 2:
# attr starts is a list which contain tensor.
minus_3 = paddle.full([1], -3, "int32")
sliced_2 = paddle.slice(input, axes=axes, starts=[minus_3, 0, 2], ends=ends)
# sliced_2 is input[0:3, 0:2, 2:4].
solve ( y, name=None )

solve

Computes the solution of a square system of linear equations with a unique solution for input ‘X’ and ‘Y’. Let \(X\) be a square matrix or a batch of square matrices and \(Y\) be a vector/matrix or a batch of vectors/matrices; the equation is:

\[Out = X^{-1} * Y\]

Specifically, this system of linear equations has one solution if and only if input ‘X’ is invertible.

Parameters
  • x (Tensor) – A square matrix or a batch of square matrices. Its shape should be [*, M, M], where * is zero or more batch dimensions. Its data type should be float32 or float64.

  • y (Tensor) – A vector/matrix or a batch of vectors/matrices. Its shape should be [*, M, K], where * is zero or more batch dimensions. Its data type should be float32 or float64.

  • name (str, optional) – Name for the operation (optional, default is None). For more information, please refer to Name.

Returns

The solution of a square system of linear equations with a unique solution for input ‘x’ and ‘y’. Its data type should be the same as that of x.

Return type

Tensor

Examples

# a square system of linear equations:
# 3*X0 + X1 = 9
# X0 + 2*X1 = 8

import paddle
import numpy as np

np_x = np.array([[3, 1], [1, 2]])
np_y = np.array([9, 8])
x = paddle.to_tensor(np_x, dtype="float64")
y = paddle.to_tensor(np_y, dtype="float64")
out = paddle.linalg.solve(x, y)

print(out)
# [2., 3.]

sort ( axis=-1, descending=False, name=None ) [source]

sort

This OP sorts the input along the given axis and returns the sorted output tensor. The default sort order is ascending; to sort in descending order, set descending to True.

Parameters
  • x (Tensor) – An input N-D Tensor with type float32, float64, int16, int32, int64, uint8.

  • axis (int, optional) – Axis to sort along. The effective range is [-R, R), where R is Rank(x). When axis < 0, it works the same way as axis + R. Default is -1.

  • descending (bool, optional) – If set to True, the algorithm sorts in descending order; otherwise it sorts in ascending order. Default is False.

  • name (str, optional) – The default value is None. Normally there is no need for user to set this property. For more information, please refer to Name.

Returns

Sorted tensor (with the same shape and data type as x).

Return type

Tensor

Examples

import paddle

x = paddle.to_tensor([[[5,8,9,5],
                       [0,0,1,7],
                       [6,9,2,4]],
                      [[5,2,4,2],
                       [4,7,7,9],
                       [1,7,0,6]]],
                     dtype='float32')
out1 = paddle.sort(x=x, axis=-1)
out2 = paddle.sort(x=x, axis=0)
out3 = paddle.sort(x=x, axis=1)
print(out1)
#[[[5. 5. 8. 9.]
#  [0. 0. 1. 7.]
#  [2. 4. 6. 9.]]
# [[2. 2. 4. 5.]
#  [4. 7. 7. 9.]
#  [0. 1. 6. 7.]]]
print(out2)
#[[[5. 2. 4. 2.]
#  [0. 0. 1. 7.]
#  [1. 7. 0. 4.]]
# [[5. 8. 9. 5.]
#  [4. 7. 7. 9.]
#  [6. 9. 2. 6.]]]
print(out3)
#[[[0. 0. 1. 4.]
#  [5. 8. 2. 5.]
#  [6. 9. 9. 7.]]
# [[1. 2. 0. 2.]
#  [4. 7. 4. 6.]
#  [5. 7. 7. 9.]]]
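
A minimal sketch of sorting in descending order, reusing x from the example above (the variable name out4 is illustrative):

out4 = paddle.sort(x=x, axis=-1, descending=True)
print(out4)
#[[[9. 8. 5. 5.]
#  [7. 1. 0. 0.]
#  [9. 6. 4. 2.]]
# [[5. 4. 2. 2.]
#  [9. 7. 7. 4.]
#  [7. 6. 1. 0.]]]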
split ( num_or_sections, axis=0, name=None ) [source]

split

Split the input tensor into multiple sub-Tensors.

Parameters
  • x (Tensor) – A N-D Tensor. The data type is bool, float16, float32, float64, int32 or int64.

  • num_or_sections (int|list|tuple) – If num_or_sections is an int, it indicates the number of equal-sized sub-Tensors that x will be divided into. If num_or_sections is a list or tuple, its length indicates the number of sub-Tensors and its elements indicate the sizes of the sub-Tensors along axis, in order. The length of the list must not be larger than the size of x along the specified axis. An element may be -1, in which case that sub-Tensor’s size is inferred from the remaining size along axis (see the third example below).

  • axis (int|Tensor, optional) – The axis along which to split; it can be a scalar of type int or a Tensor with shape [1] and data type int32 or int64. If \(axis < 0\), the axis to split along is \(rank(x) + axis\). Default is 0.

  • name (str, optional) – The default value is None. Normally there is no need for user to set this property. For more information, please refer to Name .

Returns

The list of segmented Tensors.

Return type

list(Tensor)

Example

import paddle

# x is a Tensor of shape [3, 9, 5]
x = paddle.rand([3, 9, 5])

out0, out1, out2 = paddle.split(x, num_or_sections=3, axis=1)
print(out0.shape)  # [3, 3, 5]
print(out1.shape)  # [3, 3, 5]
print(out2.shape)  # [3, 3, 5]

out0, out1, out2 = paddle.split(x, num_or_sections=[2, 3, 4], axis=1)
print(out0.shape)  # [3, 2, 5]
print(out1.shape)  # [3, 3, 5]
print(out2.shape)  # [3, 4, 5]

out0, out1, out2 = paddle.split(x, num_or_sections=[2, 3, -1], axis=1)
print(out0.shape)  # [3, 2, 5]
print(out1.shape)  # [3, 3, 5]
print(out2.shape)  # [3, 4, 5]

# axis is negative, the real axis is (rank(x) + axis)=1
out0, out1, out2 = paddle.split(x, num_or_sections=3, axis=-2)
print(out0.shape)  # [3, 3, 5]
print(out1.shape)  # [3, 3, 5]
print(out2.shape)  # [3, 3, 5]
sqrt ( name=None ) [source]

sqrt

Sqrt Activation Operator.

\(out = \sqrt{x} = x^{1/2}\)

Note:

input value must be greater than or equal to zero.

Parameters
  • x (Tensor) – Input of Sqrt operator, an N-D Tensor, with data type float32, float64 or float16.

  • with_quant_attr (BOOLEAN) – Whether the operator has attributes used by quantization.

  • name (str, optional) – Name for the operation (optional, default is None). For more information, please refer to Name.

Returns

Output of Sqrt operator, a Tensor with shape same as input.

Return type

out (Tensor)

Examples

import paddle

x = paddle.to_tensor([0.1, 0.2, 0.3, 0.4])
out = paddle.sqrt(x)
print(out)
# [0.31622777 0.4472136  0.54772256 0.63245553]
sqrt_ ( name=None )

sqrt_

Inplace version of sqrt API, the output Tensor will be inplaced with input x. Please refer to api_fluid_layers_sqrt.
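
A minimal usage sketch, assuming dygraph mode and a non-negative float input (the values mirror the sqrt example above):

import paddle

x = paddle.to_tensor([0.1, 0.2, 0.3, 0.4])
x.sqrt_()  # x is modified in place
print(x)
# [0.31622777 0.4472136  0.54772256 0.63245553]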

square ( name=None ) [source]

square

This OP squares each element of the input.

\(out = x^2\)

Parameters
  • x (Tensor) – Input of Square operator, an N-D Tensor, with data type float32, float64 or float16.

  • with_quant_attr (BOOLEAN) – Whether the operator has attributes used by quantization.

  • name (str, optional) – Name for the operation (optional, default is None). For more information, please refer to Name.

Returns

Output of Square operator, a Tensor with shape same as input.

Return type

out (Tensor)

Examples

import paddle

x = paddle.to_tensor([-0.4, -0.2, 0.1, 0.3])
out = paddle.square(x)
print(out)
# [0.16 0.04 0.01 0.09]
squeeze ( axis=None, name=None ) [source]

squeeze

This OP squeezes the dimension(s) of size 1 out of the input tensor x’s shape.

Note that the output Tensor shares data with the original Tensor and doesn’t make a Tensor copy in dygraph mode. If you want the Tensor copy version, please use Tensor.clone, e.g. squeeze_clone_x = x.squeeze().clone().

If axis is provided, only the dimensions of size 1 at the given axes are removed. If a given axis does not have size 1, that dimension remains unchanged. If axis is not provided, all dimensions of size 1 are removed.

Case1:

  Input:
    x.shape = [1, 3, 1, 5]  # If axis is not provided, all dims of size 1 will be removed.
    axis = None
  Output:
    out.shape = [3, 5]

Case2:

  Input:
    x.shape = [1, 3, 1, 5]  # If axis is provided, only the size-1 dimensions at the given axes are removed.
    axis = 0
  Output:
    out.shape = [3, 1, 5]

Case3:

  Input:
    x.shape = [1, 3, 1, 5]  # If a given axis (here axis 3) does not have size 1, that dimension remains unchanged.
    axis = [0, 2, 3]
  Output:
    out.shape = [3, 5]

Case4:

  Input:
    x.shape = [1, 3, 1, 5]  # If axis is negative, axis = axis + ndim (number of dimensions in x).
    axis = [-2]
  Output:
    out.shape = [1, 3, 5]
Parameters
  • x (Tensor) – The input Tensor. Supported data type: float32, float64, bool, int8, int32, int64.

  • axis (int|list|tuple, optional) – An integer or list/tuple of integers, indicating the dimensions to be squeezed. Default is None. The range of axis is \([-ndim(x), ndim(x))\). If axis is negative, \(axis = axis + ndim(x)\). If axis is None, all the dimensions of x of size 1 will be removed.

  • name (str, optional) – Please refer to Name, Default None.

Returns

Squeezed Tensor with the same data type as input Tensor.

Return type

Tensor

Examples

import paddle

x = paddle.rand([5, 1, 10])
output = paddle.squeeze(x, axis=1)

print(x.shape)  # [5, 1, 10]
print(output.shape)  # [5, 10]

# output shares data with x in dygraph mode
x[0, 0, 0] = 10.
print(output[0, 0]) # [10.]
squeeze_ ( axis=None, name=None ) [source]

squeeze_

Inplace version of squeeze API, the output Tensor will be inplaced with input x. Please refer to api_paddle_tensor_squeeze.
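
A minimal usage sketch, assuming dygraph mode; the shape of x itself is changed:

import paddle

x = paddle.rand([5, 1, 10])
x.squeeze_(axis=1)  # x is modified in place
print(x.shape)  # [5, 10]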

stack ( axis=0, name=None ) [source]

stack

This OP stacks all the input tensors x along the axis dimension. All tensors must be of the same shape and same dtype.

For example, given N tensors of shape [A, B], if axis == 0, the shape of stacked tensor is [N, A, B]; if axis == 1, the shape of stacked tensor is [A, N, B], etc.

Case 1:

  Input:
    x[0].shape = [1, 2]
    x[0].data = [ [1.0 , 2.0 ] ]
    x[1].shape = [1, 2]
    x[1].data = [ [3.0 , 4.0 ] ]
    x[2].shape = [1, 2]
    x[2].data = [ [5.0 , 6.0 ] ]

  Attrs:
    axis = 0

  Output:
    Out.dims = [3, 1, 2]
    Out.data =[ [ [1.0, 2.0] ],
                [ [3.0, 4.0] ],
                [ [5.0, 6.0] ] ]


Case 2:

  Input:
    x[0].shape = [1, 2]
    x[0].data = [ [1.0 , 2.0 ] ]
    x[1].shape = [1, 2]
    x[1].data = [ [3.0 , 4.0 ] ]
    x[2].shape = [1, 2]
    x[2].data = [ [5.0 , 6.0 ] ]


  Attrs:
    axis = 1 or axis = -2  # If axis = -2, axis = axis+ndim(x[0])+1 = -2+2+1 = 1.

  Output:
    Out.shape = [1, 3, 2]
    Out.data =[ [ [1.0, 2.0]
                  [3.0, 4.0]
                  [5.0, 6.0] ] ]
Parameters
  • x (list[Tensor]|tuple[Tensor]) – Input x can be a list or tuple of tensors, the Tensors in x must be of the same shape and dtype. Supported data types: float32, float64, int32, int64.

  • axis (int, optional) – The axis along which all inputs are stacked. axis range is [-(R+1), R+1), where R is the number of dimensions of the first input tensor x[0]. If axis < 0, axis = axis+R+1. The default value of axis is 0.

  • name (str, optional) – Please refer to Name, Default None.

Returns

The stacked tensor with same data type as input.

Return type

Tensor

Example

import paddle

x1 = paddle.to_tensor([[1.0, 2.0]])
x2 = paddle.to_tensor([[3.0, 4.0]])
x3 = paddle.to_tensor([[5.0, 6.0]])
out = paddle.stack([x1, x2, x3], axis=0)
print(out.shape)  # [3, 1, 2]
print(out)
# [[[1., 2.]],
#  [[3., 4.]],
#  [[5., 6.]]]
stanh ( scale_a=0.67, scale_b=1.7159, name=None ) [source]

stanh

stanh activation.

\[out = b * \frac{e^{a * x} - e^{-a * x}}{e^{a * x} + e^{-a * x}}\]
Parameters
  • x (Tensor) – The input Tensor with data type float32, float64.

  • scale_a (float, optional) – The scale factor a of the input. Default is 0.67.

  • scale_b (float, optional) – The scale factor b of the output. Default is 1.7159.

  • name (str, optional) – Name for the operation (optional, default is None). For more information, please refer to Name.

Returns

A Tensor with the same data type and shape as x .

Examples

import paddle

x = paddle.to_tensor([1.0, 2.0, 3.0, 4.0])
out = paddle.stanh(x, scale_a=0.67, scale_b=1.72) # [1.00616539, 1.49927628, 1.65933108, 1.70390463]
std ( axis=None, unbiased=True, keepdim=False, name=None ) [source]

std

Computes the standard-deviation of x along axis .

Parameters
  • x (Tensor) – The input Tensor with data type float32, float64.

  • axis (int|list|tuple, optional) – The axis along which to perform standard-deviation calculations. axis should be int, list(int) or tuple(int). If axis is a list/tuple of dimension(s), standard-deviation is calculated along all element(s) of axis . axis or element(s) of axis should be in range [-D, D), where D is the dimensions of x . If axis or element(s) of axis is less than 0, it works the same way as \(axis + D\) . If axis is None, standard-deviation is calculated over all elements of x. Default is None.

  • unbiased (bool, optional) – Whether to use the unbiased estimation. If unbiased is True, the divisor used in the computation is \(N - 1\), where \(N\) represents the number of elements along axis; otherwise the divisor is \(N\). Default is True.

  • keepdim (bool, optional) – Whether to reserve the reduced dimension(s) in the output Tensor. If keepdim is True, the dimensions of the output Tensor is the same as x except in the reduced dimensions(it is of size 1 in this case). Otherwise, the shape of the output Tensor is squeezed in axis . Default is False.

  • name (str, optional) – Name for the operation (optional, default is None). For more information, please refer to Name.

Returns

Tensor, results of standard-deviation along axis of x, with the same data type as x.

Examples

import paddle

x = paddle.to_tensor([[1.0, 2.0, 3.0], [1.0, 4.0, 5.0]])
out1 = paddle.std(x)
# [1.63299316]
out2 = paddle.std(x, axis=1)
# [1.       2.081666]
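
A minimal sketch of the biased (population) estimator via unbiased=False, reusing x from above; out3 is an illustrative name and the value is approximate:

out3 = paddle.std(x, unbiased=False)
# [1.49071198]  (the divisor is N instead of N - 1)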
strided_slice ( axes, starts, ends, strides, name=None ) [source]

strided_slice

This operator produces a slice of x along multiple axes, similar to numpy basic indexing: https://docs.scipy.org/doc/numpy/reference/arrays.indexing.html . It uses the axes, starts and ends attributes to specify the start and end positions for each axis in the list axes, and slices the input tensor accordingly. If a negative value such as \(-i\) is passed to starts or ends, it represents the \(i\)-th position counted from the end of that axis (0 is the initial position). strides represents the step of slicing, and if strides is negative, the slice is taken in the opposite direction. If the value passed to starts or ends is greater than n (the number of elements in this dimension), it is treated as n. For slicing to the end of a dimension of unknown size, it is recommended to pass in INT_MAX. The sizes of axes, starts, ends and strides must be equal. The following examples explain how strided_slice works:

Case1:
    Given:
        data = [ [1, 2, 3, 4], [5, 6, 7, 8], ]
        axes = [0, 1]
        starts = [1, 0]
        ends = [2, 3]
        strides = [1, 1]
    Then:
        result = [ [5, 6, 7], ]

Case2:
    Given:
        data = [ [1, 2, 3, 4], [5, 6, 7, 8], ]
        axes = [0, 1]
        starts = [0, 1]
        ends = [2, 0]
        strides = [1, -1]
    Then:
        result = [ [8, 7, 6], ]
Case3:
    Given:
        data = [ [1, 2, 3, 4], [5, 6, 7, 8], ]
        axes = [0, 1]
        starts = [0, 1]
        ends = [-1, 1000]
        strides = [1, 3]
    Then:
        result = [ [2], ]
Parameters
  • x (Tensor) – An N-D Tensor. The data type is float32, float64, int32 or int64.

  • axes (list|tuple) – The data type is int32. Axes that starts and ends apply to. It’s optional; if not provided, it will be treated as \([0,1,...,len(starts)-1]\).

  • starts (list|tuple|Tensor) – The data type is int32. If starts is a list or tuple, its elements should be integers or Tensors with shape [1]. If starts is a Tensor, it should be a 1-D Tensor. It represents the starting indices of the corresponding axes in axes.

  • ends (list|tuple|Tensor) – The data type is int32. If ends is a list or tuple, its elements should be integers or Tensors with shape [1]. If ends is a Tensor, it should be a 1-D Tensor. It represents the ending indices of the corresponding axes in axes.

  • strides (list|tuple|Tensor) – The data type is int32. If strides is a list or tuple, its elements should be integers or Tensors with shape [1]. If strides is a Tensor, it should be a 1-D Tensor. It represents the slice step of the corresponding axes in axes.

  • name (str, optional) – The default value is None. Normally there is no need for user to set this property. For more information, please refer to Name .

Returns

A Tensor with the same dimension as x. The data type is same as x.

Return type

Tensor

Examples

import paddle
x = paddle.zeros(shape=[3,4,5,6], dtype="float32")
# example 1:
# attr starts is a list which doesn't contain Tensor.
axes = [1, 2, 3]
starts = [-3, 0, 2]
ends = [3, 2, 4]
strides_1 = [1, 1, 1]
strides_2 = [1, 1, 2]
sliced_1 = paddle.strided_slice(x, axes=axes, starts=starts, ends=ends, strides=strides_1)
# sliced_1 is x[:, 1:3:1, 0:2:1, 2:4:1].
# example 2:
# attr starts is a list which contains a Tensor.
minus_3 = paddle.full(shape=[1], fill_value=-3, dtype='int32')
sliced_2 = paddle.strided_slice(x, axes=axes, starts=[minus_3, 0, 2], ends=ends, strides=strides_2)
# sliced_2 is x[:, 1:3:1, 0:2:1, 2:4:2].
subtract ( y, name=None ) [source]

subtract

Subtract two tensors element-wise. The equation is:

\[out = x - y\]

Note: paddle.subtract supports broadcasting. If you want know more about broadcasting, please refer to Broadcasting .

Parameters
  • x (Tensor) – The input tensor; its data type should be float32, float64, int32, int64.

  • y (Tensor) – The input tensor; its data type should be float32, float64, int32, int64.

  • name (str, optional) – Name for the operation (optional, default is None). For more information, please refer to Name.

Returns

N-D Tensor. A location into which the result is stored. If x, y have different shapes and are “broadcastable”, the resulting tensor shape is the shape of x and y after broadcasting. If x, y have the same shape, its shape is the same as x and y.

Examples

import numpy as np
import paddle

x = paddle.to_tensor([[1, 2], [7, 8]])
y = paddle.to_tensor([[5, 6], [3, 4]])
res = paddle.subtract(x, y)
print(res)
#       [[-4, -4],
#        [4, 4]]

x = paddle.to_tensor([[[1, 2, 3], [1, 2, 3]]])
y = paddle.to_tensor([1, 0, 4])
res = paddle.subtract(x, y)
print(res)
#       [[[ 0,  2, -1],
#         [ 0,  2, -1]]]

x = paddle.to_tensor([2, np.nan, 5], dtype='float32')
y = paddle.to_tensor([1, 4, np.nan], dtype='float32')
res = paddle.subtract(x, y)
print(res)
#       [ 1., nan, nan]

x = paddle.to_tensor([5, np.inf, -np.inf], dtype='float64')
y = paddle.to_tensor([1, 4, 5], dtype='float64')
res = paddle.subtract(x, y)
print(res)
#       [   4.,  inf., -inf.]
subtract_ ( y, name=None )

subtract_

Inplace version of subtract API, the output Tensor will be inplaced with input x. Please refer to api_tensor_subtract.
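
A minimal usage sketch, assuming dygraph mode and inputs of the same shape; the result is written back into x:

import paddle

x = paddle.to_tensor([[1., 2.], [7., 8.]])
y = paddle.to_tensor([[5., 6.], [3., 4.]])
x.subtract_(y)  # x is modified in place
print(x)
# [[-4., -4.],
#  [ 4.,  4.]]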

sum ( axis=None, dtype=None, keepdim=False, name=None ) [source]

sum

Computes the sum of tensor elements over the given dimension.

Parameters
  • x (Tensor) – An N-D Tensor, the data type is bool, float16, float32, float64, int32 or int64.

  • axis (int|list|tuple, optional) – The dimensions along which the sum is performed. If None, sum all elements of x and return a Tensor with a single element, otherwise must be in the range \([-rank(x), rank(x))\). If \(axis[i] < 0\), the dimension to reduce is \(rank + axis[i]\).

  • dtype (str, optional) – The dtype of output Tensor. The default value is None, the dtype of output is the same as input Tensor x.

  • keepdim (bool, optional) – Whether to reserve the reduced dimension in the output Tensor. The result Tensor will have one fewer dimension than the x unless keepdim is true, default value is False.

  • name (str, optional) – The default value is None. Normally there is no need for user to set this property. For more information, please refer to Name

Returns

Results of the summation over the specified axis of input Tensor x. If x.dtype is 'bool' or 'int32', the output data type is 'int64'; otherwise the output data type is the same as x.

Return type

Tensor

Raises

TypeError – The type of axis must be int, list or tuple.

Examples

import paddle

# x is a Tensor with following elements:
#    [[0.2, 0.3, 0.5, 0.9]
#     [0.1, 0.2, 0.6, 0.7]]
# Each example is followed by the corresponding output tensor.
x = paddle.to_tensor([[0.2, 0.3, 0.5, 0.9],
                      [0.1, 0.2, 0.6, 0.7]])
out1 = paddle.sum(x)  # [3.5]
out2 = paddle.sum(x, axis=0)  # [0.3, 0.5, 1.1, 1.6]
out3 = paddle.sum(x, axis=-1)  # [1.9, 1.6]
out4 = paddle.sum(x, axis=1, keepdim=True)  # [[1.9], [1.6]]

# y is a Tensor with shape [2, 2, 2] and elements as below:
#      [[[1, 2], [3, 4]],
#      [[5, 6], [7, 8]]]
# Each example is followed by the corresponding output tensor.
y = paddle.to_tensor([[[1, 2], [3, 4]],
                      [[5, 6], [7, 8]]])
out5 = paddle.sum(y, axis=[1, 2]) # [10, 26]
out6 = paddle.sum(y, axis=[0, 1]) # [16, 20]

# x is a Tensor with following elements:
#    [[True, True, True, True]
#     [False, False, False, False]]
# Each example is followed by the corresponding output tensor.
x = paddle.to_tensor([[True, True, True, True],
                      [False, False, False, False]])
out7 = paddle.sum(x)  # [4]
out8 = paddle.sum(x, axis=0)  # [1, 1, 1, 1]
out9 = paddle.sum(x, axis=1)  # [4, 0]
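
A minimal sketch of the dtype argument, which sets the output data type; z and out10 are illustrative names:

z = paddle.to_tensor([[0.2, 0.3, 0.5, 0.9],
                      [0.1, 0.2, 0.6, 0.7]])
out10 = paddle.sum(z, axis=0, dtype='float64')
print(out10.dtype)  # paddle.float64
# out10: [0.3, 0.5, 1.1, 1.6]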
t ( name=None ) [source]

t

Transpose a tensor with at most 2 dimensions. 0-D and 1-D tensors are returned as-is; for a 2-D tensor this is equivalent to paddle.transpose with perm set to [1, 0].

Parameters
  • input (Tensor) – The input Tensor. It is an N-D (N<=2) Tensor of data types float16, float32, float64, int32.

  • name (str, optional) – The default value is None. Normally there is no need for user to set this property. For more information, please refer to Name

Returns

A transposed n-D Tensor, with data type being float16, float32, float64, int32, int64.

Return type

Tensor

For Example:

# Example 1 (0-D tensor)
x = tensor([0.79])
paddle.t(x) = tensor([0.79])

# Example 2 (1-D tensor)
x = tensor([0.79, 0.84, 0.32])
paddle.t(x) = tensor([0.79, 0.84, 0.32])

# Example 3 (2-D tensor)
x = tensor([[0.79, 0.84, 0.32],
            [0.64, 0.14, 0.57]])
paddle.t(x) = tensor([[0.79, 0.64],
                      [0.84, 0.14],
                      [0.32, 0.57]])

Examples:

import paddle
x = paddle.ones(shape=[2, 3], dtype='int32')
x_transposed = paddle.t(x)
print(x_transposed.shape)
# [3, 2]
tanh ( name=None ) [source]

tanh

Tanh Activation Operator.

\[out = \frac{e^{x} - e^{-x}}{e^{x} + e^{-x}}\]
Parameters
  • x (Tensor) – Input of Tanh operator, an N-D Tensor, with data type float32, float64 or float16.

  • name (str, optional) – Name for the operation (optional, default is None). For more information, please refer to Name.

Returns

Output of Tanh operator, a Tensor with same data type and shape as input.

Examples

import paddle

x = paddle.to_tensor([-0.4, -0.2, 0.1, 0.3])
out = paddle.tanh(x)
print(out)
# [-0.37994896 -0.19737532  0.09966799  0.29131261]
tanh_ ( name=None ) [source]

tanh_

Inplace version of tanh API, the output Tensor will be inplaced with input x. Please refer to api_tensor_tanh.
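
A minimal usage sketch, assuming dygraph mode (the values mirror the tanh example above):

import paddle

x = paddle.to_tensor([-0.4, -0.2, 0.1, 0.3])
x.tanh_()  # x is modified in place
print(x)
# [-0.37994896 -0.19737532  0.09966799  0.29131261]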

tensordot ( y, axes=2, name=None ) [source]

tensordot

This function computes a contraction, which sums the products of elements from two tensors along the given axes.

Parameters
  • x (Tensor) – The left tensor for contraction with data type float32 or float64.

  • y (Tensor) – The right tensor for contraction with the same data type as x.

  • axes (int|tuple|list|Tensor, optional) –

    The axes to contract for x and y, defaulted to integer 2.

    1. It could be a non-negative integer n, in which the function will sum over the last n axes of x and the first n axes of y in order.

    2. It could be a 1-d tuple or list with data type int, in which x and y will be contracted along the same given axes. For example, axes =[0, 1] applies contraction along the first two axes for x and the first two axes for y.

    3. It could be a tuple or list containing one or two 1-d tuple|list|Tensor with data type int. When containing one tuple|list|Tensor, the data in tuple|list|Tensor specified the same axes for x and y to contract. When containing two tuple|list|Tensor, the first will be applied to x and the second to y. When containing more than two tuple|list|Tensor, only the first two axis sequences will be used while the others will be ignored.

    4. It could be a Tensor, in which case it will be translated to a python list and the rules described above are applied to determine the contraction axes. Note that axes of Tensor type is ONLY available in Dygraph mode.

  • name (str, optional) – The default value is None. Normally there is no need for user to set this property. For more information, please refer to Name .

Returns

The contraction result with the same data type as x and y. In general, \(output.ndim = x.ndim + y.ndim - 2 \times n_{axes}\), where \(n_{axes}\) denotes the number of axes to be contracted.

Return type

Output (Tensor)

Notes

  1. This function supports tensor broadcasting: the sizes of the corresponding dimensions of x and y should be equal or follow the broadcast rules.

  2. This function also supports axes expansion, when the two given axis sequences for x and y are of different lengths, the shorter sequence will expand the same axes as the longer one at the end. For example, if axes =[[0, 1, 2, 3], [1, 0]], the axis sequence for x is [0, 1, 2, 3], while the corresponding axis sequences for y will be expanded from [1, 0] to [1, 0, 2, 3].

Examples

import paddle

data_type = 'float64'

# For two 2-d tensor x and y, the case axes=0 is equivalent to outer product.
# Note that tensordot supports empty axis sequence, so all the axes=0, axes=[], axes=[[]], and axes=[[],[]] are equivalent cases.
x = paddle.arange(4, dtype=data_type).reshape([2, 2])
y = paddle.arange(4, dtype=data_type).reshape([2, 2])
z = paddle.tensordot(x, y, axes=0)
# z = [[[[0., 0.],
#        [0., 0.]],
#
#       [[0., 1.],
#        [2., 3.]]],
#
#
#      [[[0., 2.],
#        [4., 6.]],
#
#       [[0., 3.],
#        [6., 9.]]]]


# For two 1-d tensor x and y, the case axes=1 is equivalent to inner product.
x = paddle.arange(10, dtype=data_type)
y = paddle.arange(10, dtype=data_type)
z1 = paddle.tensordot(x, y, axes=1)
z2 = paddle.dot(x, y)
# z1 = z2 = [285.]


# For two 2-d tensor x and y, the case axes=1 is equivalent to matrix multiplication.
x = paddle.arange(6, dtype=data_type).reshape([2, 3])
y = paddle.arange(12, dtype=data_type).reshape([3, 4])
z1 = paddle.tensordot(x, y, axes=1)
z2 = paddle.matmul(x, y)
# z1 = z2 =  [[20., 23., 26., 29.],
#             [56., 68., 80., 92.]]


# When axes is a 1-d int list, x and y will be contracted along the same given axes.
# Note that axes=[1, 2] is equivalent to axes=[[1, 2]], axes=[[1, 2], []], axes=[[1, 2], [1]], and axes=[[1, 2], [1, 2]].
x = paddle.arange(24, dtype=data_type).reshape([2, 3, 4])
y = paddle.arange(36, dtype=data_type).reshape([3, 3, 4])
z = paddle.tensordot(x, y, axes=[1, 2])
# z =  [[506. , 1298., 2090.],
#       [1298., 3818., 6338.]]


# When axes is a list containing two 1-d int list, the first will be applied to x and the second to y.
x = paddle.arange(60, dtype=data_type).reshape([3, 4, 5])
y = paddle.arange(24, dtype=data_type).reshape([4, 3, 2])
z = paddle.tensordot(x, y, axes=([1, 0], [0, 1]))
# z =  [[4400., 4730.],
#       [4532., 4874.],
#       [4664., 5018.],
#       [4796., 5162.],
#       [4928., 5306.]]


# Thanks to the support of axes expansion, axes=[[0, 1, 3, 4], [1, 0, 3, 4]] can be abbreviated as axes= [[0, 1, 3, 4], [1, 0]].
x = paddle.arange(720, dtype=data_type).reshape([2, 3, 4, 5, 6])
y = paddle.arange(720, dtype=data_type).reshape([3, 2, 4, 5, 6])
z = paddle.tensordot(x, y, axes=[[0, 1, 3, 4], [1, 0]])
# z = [[23217330., 24915630., 26613930., 28312230.],
#      [24915630., 26775930., 28636230., 30496530.],
#      [26613930., 28636230., 30658530., 32680830.],
#      [28312230., 30496530., 32680830., 34865130.]]
tile ( repeat_times, name=None ) [source]

tile

Construct a new Tensor by repeating x the number of times given by repeat_times. After tiling, the value of the i’th dimension of the output is equal to x.shape[i]*repeat_times[i].

Both the number of dimensions of x and the number of elements in repeat_times should be less than or equal to 6.

Parameters
  • x (Tensor) – The input tensor, its data type should be bool, float32, float64, int32 or int64.

  • repeat_times (Tensor|tuple|list) – The number of repeating times. If repeat_times is a list or tuple, all its elements should be integers or 1-D Tensors with data type int32. If repeat_times is a Tensor, it should be a 1-D Tensor with data type int32.

  • name (str, optional) – Name for the operation (optional, default is None). For more information, please refer to Name.

Returns

N-D Tensor. The data type is the same as x.

Examples

import paddle

data = paddle.to_tensor([1, 2, 3], dtype='int32')
out = paddle.tile(data, repeat_times=[2, 1])
np_out = out.numpy()
# [[1, 2, 3], [1, 2, 3]]

out = paddle.tile(data, repeat_times=[2, 2])
np_out = out.numpy()
# [[1, 2, 3, 1, 2, 3], [1, 2, 3, 1, 2, 3]]

repeat_times = paddle.to_tensor([2, 1], dtype='int32')
out = paddle.tile(data, repeat_times=repeat_times)
np_out = out.numpy()
# [[1, 2, 3], [1, 2, 3]]
tolist ( ) [source]

tolist

Notes:

This API is ONLY available in Dygraph mode

This function translates the paddle.Tensor into a python list.

Parameters

x (Tensor) – x is the Tensor to be translated into a python list.

Returns

A list that contains the same values as the current Tensor; the element dtype is the same as the current Tensor.

Return type

list

Examples

import paddle

t = paddle.to_tensor([0,1,2,3,4])
expectlist = t.tolist()
print(expectlist)   #[0, 1, 2, 3, 4]

expectlist = paddle.tolist(t)
print(expectlist)   #[0, 1, 2, 3, 4]
topk ( k, axis=None, largest=True, sorted=True, name=None ) [source]

topk

This OP is used to find the values and indices of the k largest or smallest elements along the optional axis. If the input is a 1-D Tensor, it finds the k largest or smallest values and indices. If the input is a Tensor with higher rank, this operator computes the top k values and indices along the axis.

Parameters
  • x (Tensor) – Tensor, an input N-D Tensor with type float32, float64, int32, int64.

  • k (int, Tensor) – The number of top elements to look for along the axis.

  • axis (int, optional) – Axis to compute indices along. The effective range is [-R, R), where R is x.ndim. When axis < 0, it works the same way as axis + R. Default is -1.

  • largest (bool, optional) – largest is a flag, if set to true, algorithm will sort by descending order, otherwise sort by ascending order. Default is True.

  • sorted (bool, optional) – Controls whether to return the elements in sorted order. Default is True. On GPU devices, it always returns the sorted values.

  • name (str, optional) – Name for the operation (optional, default is None). For more information, please refer to Name.

Returns

tuple(Tensor), the values and indices. The value data type is the same as the input x. The indices data type is int64.

Examples

import paddle

tensor_1 = paddle.to_tensor([1, 4, 5, 7])
value_1, indices_1 = paddle.topk(tensor_1, k=1)
print(value_1)
# [7]
print(indices_1)
# [3]
tensor_2 = paddle.to_tensor([[1, 4, 5, 7], [2, 6, 2, 5]])
value_2, indices_2 = paddle.topk(tensor_2, k=1)
print(value_2)
# [[7]
#  [6]]
print(indices_2)
# [[3]
#  [1]]
value_3, indices_3 = paddle.topk(tensor_2, k=1, axis=-1)
print(value_3)
# [[7]
#  [6]]
print(indices_3)
# [[3]
#  [1]]
value_4, indices_4 = paddle.topk(tensor_2, k=1, axis=0)
print(value_4)
# [[2 6 5 7]]
print(indices_4)
# [[1 1 0 0]]
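
A minimal sketch of largest=False, reusing tensor_1 from above; value_5 and indices_5 are illustrative names:

value_5, indices_5 = paddle.topk(tensor_1, k=2, largest=False)
print(value_5)
# [1, 4]
print(indices_5)
# [0, 1]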
trace ( offset=0, axis1=0, axis2=1, name=None ) [source]

trace

This OP computes the sum along diagonals of the input tensor x.

If x is 2-D, returns the sum of its diagonal.

If x has more than two dimensions, returns a tensor of diagonal sums, where the diagonals are taken from the 2-D planes specified by axis1 and axis2. By default, the 2-D planes are formed by the first and second axes of the input tensor x.

The argument offset determines where diagonals are taken from input tensor x:

  • If offset = 0, it is the main diagonal.

  • If offset > 0, it is above the main diagonal.

  • If offset < 0, it is below the main diagonal.

  • Note that if offset is out of input’s shape indicated by axis1 and axis2, 0 will be returned.

Parameters
  • x (Tensor) – The input tensor x. Must be at least 2-dimensional. The input data type should be float32, float64, int32, int64.

  • offset (int, optional) – Which diagonals in input tensor x will be taken. Default: 0 (main diagonals).

  • axis1 (int, optional) – The first axis along which the diagonals are taken. Default: 0.

  • axis2 (int, optional) – The second axis along which the diagonals are taken. Default: 1.

  • name (str, optional) – Normally there is no need for user to set this property. For more information, please refer to Name. Default: None.

Returns

the output data type is the same as input data type.

Return type

Tensor

Examples

import paddle

case1 = paddle.randn([2, 3])
case2 = paddle.randn([3, 10, 10])
case3 = paddle.randn([3, 10, 5, 10])
data1 = paddle.trace(case1) # data1.shape = [1]
data2 = paddle.trace(case2, offset=1, axis1=1, axis2=2) # data2.shape = [3]
data3 = paddle.trace(case3, offset=-3, axis1=1, axis2=-1) # data3.shape = [3, 5]
transpose ( perm, name=None ) [source]

transpose

Permute the data dimensions of input according to perm.

The i-th dimension of the returned tensor will correspond to the perm[i]-th dimension of input.

Parameters
  • x (Tensor) – The input Tensor. It is a N-D Tensor of data types bool, float32, float64, int32.

  • perm (list|tuple) – The permutation of dimensions. The i-th dimension of the output corresponds to dimension perm[i] of the input.

  • name (str) – The name of this layer. It is optional.

Returns

A transposed n-D Tensor, with data type being bool, float32, float64, int32, int64.

Return type

Tensor

For Example:

x = [[[ 1  2  3  4] [ 5  6  7  8] [ 9 10 11 12]]
    [[13 14 15 16] [17 18 19 20] [21 22 23 24]]]
shape(x) =  [2,3,4]

# Example 1
perm0 = [1,0,2]
y_perm0 = [[[ 1  2  3  4] [13 14 15 16]]
          [[ 5  6  7  8]  [17 18 19 20]]
          [[ 9 10 11 12]  [21 22 23 24]]]
shape(y_perm0) = [3,2,4]

# Example 2
perm1 = [2,1,0]
y_perm1 = [[[ 1 13] [ 5 17] [ 9 21]]
          [[ 2 14] [ 6 18] [10 22]]
          [[ 3 15]  [ 7 19]  [11 23]]
          [[ 4 16]  [ 8 20]  [12 24]]]
shape(y_perm1) = [4,3,2]

Examples

import paddle

x = paddle.randn([2, 3, 4])
x_transposed = paddle.transpose(x, perm=[1, 0, 2])
print(x_transposed.shape)
# [3, 2, 4]
trunc ( name=None ) [source]

trunc

This API returns a new tensor with the truncated integer values of the input.

Parameters
  • input (Tensor) – The input tensor; its data type should be int32, int64, float32, float64.

  • name (str, optional) – Name for the operation (optional, default is None). For more information, please refer to Name.

Returns

The output Tensor of trunc.

Return type

Tensor

Examples

import paddle

input = paddle.rand([2,2],'float32')
print(input)
# Tensor(shape=[2, 2], dtype=float32, place=CUDAPlace(0), stop_gradient=True,
#         [[0.02331470, 0.42374918],
#         [0.79647720, 0.74970269]])

output = paddle.trunc(input)
print(output)
# Tensor(shape=[2, 2], dtype=float32, place=CUDAPlace(0), stop_gradient=True,
#         [[0., 0.],
#         [0., 0.]])
unbind ( axis=0 ) [source]

unbind

Removes a tensor dimension, then splits the input tensor into multiple sub-Tensors.

Parameters
  • input (Tensor) – The input variable which is an N-D Tensor, data type being float32, float64, int32 or int64.

  • axis (int32|int64, optional) – A scalar with type int32|int64 shape [1]. The dimension along which to unbind. If \(axis < 0\), the dimension to unbind along is \(rank(input) + axis\). Default is 0.

Returns

The list of segmented Tensor variables.

Return type

list(Tensor)

Example

import paddle
import numpy as np
# input is a variable which shape is [3, 4, 5]
np_input = np.random.rand(3, 4, 5).astype('float32')
input = paddle.to_tensor(np_input)
[x0, x1, x2] = paddle.unbind(input, axis=0)
# x0.shape [4, 5]
# x1.shape [4, 5]
# x2.shape [4, 5]
[x0, x1, x2, x3] = paddle.unbind(input, axis=1)
# x0.shape [3, 5]
# x1.shape [3, 5]
# x2.shape [3, 5]
# x3.shape [3, 5]
uniform_ ( min=-1.0, max=1.0, seed=0, name=None )

uniform_

This is the inplace version of OP uniform, which returns a Tensor filled with random values sampled from a uniform distribution. The output Tensor will be inplaced with input x. Please refer to api_tensor_uniform.

Parameters
  • x (Tensor) – The input tensor to be filled with random values.

  • min (float|int, optional) – The lower bound on the range of random values to generate, min is included in the range. Default is -1.0.

  • max (float|int, optional) – The upper bound on the range of random values to generate, max is excluded in the range. Default is 1.0.

  • seed (int, optional) – Random seed used for generating samples. If seed is 0, it will use the seed of the global default generator (which can be set by paddle.seed). Note that if seed is not 0, this operator will always generate the same random numbers every time. Default is 0.

  • name (str, optional) – The default value is None. Normally there is no need for user to set this property. For more information, please refer to Name.

Returns

The input tensor x filled with random values sampled from a uniform distribution in the range [min, max).

Return type

Tensor

Examples

import paddle
# example:
x = paddle.ones(shape=[3, 4])
x.uniform_()
print(x)
# [[ 0.84524226,  0.6921872,   0.56528175,  0.71690357], # random
#  [-0.34646994, -0.45116323, -0.09902662, -0.11397249], # random
#  [ 0.433519,    0.39483607, -0.8660099,   0.83664286]] # random
unique ( return_index=False, return_inverse=False, return_counts=False, axis=None, dtype='int64', name=None ) [source]

unique

Returns the unique elements of x in ascending order.

Parameters
  • x (Tensor) – The input tensor; its data type should be float32, float64, int32, int64.

  • return_index (bool, optional) – If True, also return the indices of the input tensor that result in the unique Tensor.

  • return_inverse (bool, optional) – If True, also return the indices for where elements in the original input ended up in the returned unique tensor.

  • return_counts (bool, optional) – If True, also return the counts for each unique element.

  • axis (int, optional) – The axis to apply unique. If None, the input will be flattened. Default: None.

  • dtype (np.dtype|str, optional) – The data type of the indices or inverse tensor: int32 or int64. Default: int64.

  • name (str, optional) – Name for the operation. For more information, please refer to Name. Default: None.

Returns

(out, indices, inverse, counts). out is the unique tensor for x. indices is provided only if return_index is True. inverse is provided only if return_inverse is True. counts is provided only if return_counts is True.

Return type

tuple

Examples

import paddle

x = paddle.to_tensor([2, 3, 3, 1, 5, 3])
unique = paddle.unique(x)
np_unique = unique.numpy() # [1 2 3 5]
_, indices, inverse, counts = paddle.unique(x, return_index=True, return_inverse=True, return_counts=True)
np_indices = indices.numpy() # [3 0 1 4]
np_inverse = inverse.numpy() # [1 2 2 0 3 2]
np_counts = counts.numpy() # [1 1 3 1]

x = paddle.to_tensor([[2, 1, 3], [3, 0, 1], [2, 1, 3]])
unique = paddle.unique(x)
np_unique = unique.numpy() # [0 1 2 3]

unique = paddle.unique(x, axis=0)
np_unique = unique.numpy()
# [[2 1 3]
#  [3 0 1]]
unique_consecutive ( return_inverse=False, return_counts=False, axis=None, dtype='int64', name=None ) [source]

unique_consecutive

Eliminates all but the first element from every consecutive group of equivalent elements.

Note

This function is different from paddle.unique() in the sense that this function only eliminates consecutive duplicate values. This semantics is similar to std::unique in C++.

Parameters
  • x (Tensor) – The input tensor; its data type should be float32, float64, int32, int64.

  • return_inverse (bool, optional) – If True, also return the indices for where elements in the original input ended up in the returned unique consecutive tensor. Default is False.

  • return_counts (bool, optional) – If True, also return the counts for each unique consecutive element. Default is False.

  • axis (int, optional) – The axis to apply unique consecutive. If None, the input will be flattened. Default is None.

  • dtype (np.dtype|str, optional) – The data type of the inverse tensor: int32 or int64. Default: int64.

  • name (str, optional) – Name for the operation. For more information, please refer to Name. Default is None.

Returns

(out, inverse, counts). out is the unique consecutive tensor for x. inverse is provided only if return_inverse is True. counts is provided only if return_counts is True.

Return type

tuple

Example

import paddle

x = paddle.to_tensor([1, 1, 2, 2, 3, 1, 1, 2])
output = paddle.unique_consecutive(x)
np_output = output.numpy() # [1 2 3 1 2]
_, inverse, counts = paddle.unique_consecutive(x, return_inverse=True, return_counts=True)
np_inverse = inverse.numpy() # [0 0 1 1 2 3 3 4]
np_counts = counts.numpy() # [2 2 1 2 1]

x = paddle.to_tensor([[2, 1, 3], [3, 0, 1], [2, 1, 3], [2, 1, 3]])
output = paddle.unique_consecutive(x, axis=0)
np_output = output.numpy()
# [[2 1 3]
#  [3 0 1]
#  [2 1 3]]
unsqueeze ( axis, name=None ) [source]

unsqueeze

Insert single-dimensional entries to the shape of input Tensor x. Takes one required argument axis, a dimension or list of dimensions that will be inserted. Dimension indices in axis are as seen in the output tensor.

Note that the output Tensor shares data with the original Tensor and doesn’t make a Tensor copy in dygraph mode. If you want the Tensor copy version, please use Tensor.clone, e.g. unsqueeze_clone_x = x.unsqueeze(-1).clone().

Parameters
  • x (Tensor) – The input Tensor to be unsqueezed. Supported data type: float32, float64, bool, int8, int32, int64.

  • axis (int|list|tuple|Tensor) – Indicates the dimensions to be inserted. The data type is int32. If axis is a list or tuple, its elements should be integers or Tensors with shape [1]. If axis is a Tensor, it should be a 1-D Tensor. If axis is negative, axis = axis + ndim(x) + 1.

  • name (str|None) – Name for this layer. Please refer to Name, Default None.

Returns

Unsqueezed Tensor with the same data type as input Tensor.

Return type

Tensor

Examples

import paddle

x = paddle.rand([5, 10])
print(x.shape)  # [5, 10]

out1 = paddle.unsqueeze(x, axis=0)
print(out1.shape)  # [1, 5, 10]

out2 = paddle.unsqueeze(x, axis=[0, 2])
print(out2.shape)  # [1, 5, 1, 10]

axis = paddle.to_tensor([0, 1, 2])
out3 = paddle.unsqueeze(x, axis=axis)
print(out3.shape)  # [1, 1, 1, 5, 10]

# out1, out2, out3 share data with x in dygraph mode
x[0, 0] = 10.
print(out1[0, 0, 0]) # [10.]
print(out2[0, 0, 0, 0]) # [10.]
print(out3[0, 0, 0, 0, 0]) # [10.]
unsqueeze_ ( axis, name=None ) [source]

unsqueeze_

Inplace version of unsqueeze API, the output Tensor will be inplaced with input x. Please refer to api_paddle_tensor_unsqueeze.
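
A minimal usage sketch, assuming dygraph mode; the shape of x itself is changed:

import paddle

x = paddle.rand([5, 10])
x.unsqueeze_(axis=0)  # x is modified in place
print(x.shape)  # [1, 5, 10]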

unstack ( axis=0, num=None ) [source]

unstack

UnStack Layer

This layer unstacks input Tensor x into several Tensors along axis.

If axis < 0, it would be replaced with axis+rank(x). If num is None, it would be inferred from x.shape[axis], and if x.shape[axis] <= 0 or is unknown, ValueError is raised.

Parameters
  • x (Tensor) – Input Tensor. It is an N-D Tensor of data types float32, float64, int32, int64.

  • axis (int) – The axis along which the input is unstacked.

  • num (int|None) – The number of output variables.

Returns

The unstacked Tensors list. The list elements are N-D Tensors of data types float32, float64, int32, int64.

Return type

list(Tensor)

Raises

ValueError – If x.shape[axis] <= 0 or axis is not in range [-D, D).

Examples

import paddle
x = paddle.ones(name='x', shape=[2, 3, 5], dtype='float32')  # create a tensor with shape=[2, 3, 5]
y = paddle.unstack(x, axis=1)  # unstack along the second axis, which results in 3 tensors with shape=[2, 5]
value ( self: paddle.fluid.core_avx.VarBase ) → paddle::framework::Variable

value

var ( axis=None, unbiased=True, keepdim=False, name=None ) [source]

var

Computes the variance of x along axis .

Parameters
  • x (Tensor) – The input Tensor with data type float32, float64.

  • axis (int|list|tuple, optional) – The axis along which to perform variance calculations. axis should be int, list(int) or tuple(int). If axis is a list/tuple of dimension(s), variance is calculated along all element(s) of axis . axis or element(s) of axis should be in range [-D, D), where D is the dimensions of x . If axis or element(s) of axis is less than 0, it works the same way as \(axis + D\) . If axis is None, variance is calculated over all elements of x. Default is None.

  • unbiased (bool, optional) – Whether to use the unbiased estimation. If unbiased is True, the divisor used in the computation is \(N - 1\), where \(N\) represents the number of elements along axis , otherwise the divisor is \(N\). Default is True.

  • keepdim (bool, optional) – Whether to reserve the reduced dimension(s) in the output Tensor. If keepdim is True, the dimensions of the output Tensor is the same as x except in the reduced dimensions(it is of size 1 in this case). Otherwise, the shape of the output Tensor is squeezed in axis . Default is False.

  • name (str, optional) – Name for the operation (optional, default is None). For more information, please refer to Name.

Returns

Tensor, results of variance along axis of x, with the same data type as x.

Examples

import paddle

x = paddle.to_tensor([[1.0, 2.0, 3.0], [1.0, 4.0, 5.0]])
out1 = paddle.var(x)
# [2.66666667]
out2 = paddle.var(x, axis=1)
# [1.         4.33333333]
where ( x, y, name=None ) [source]

where

Return a tensor of elements selected from either $x$ or $y$, depending on $condition$.

\[out_i = \begin{cases} x_i, & \text{if } condition_i \text{ is True} \\ y_i, & \text{if } condition_i \text{ is False} \end{cases}\]
Parameters
  • condition (Tensor) – The condition to choose x or y.

  • x (Tensor) – x is a Tensor with data type float32, float64, int32, int64.

  • y (Tensor) – y is a Tensor with data type float32, float64, int32, int64.

  • name (str, optional) – The default value is None. Normally there is no need for user to set this property. For more information, please refer to Name.

Returns

A Tensor with the same data type as x.

Return type

Tensor

Examples

import paddle

x = paddle.to_tensor([0.9383, 0.1983, 3.2, 1.2])
y = paddle.to_tensor([1.0, 1.0, 1.0, 1.0])
out = paddle.where(x>1, x, y)

print(out)
#out: [1.0, 1.0, 3.2, 1.2]
zero_ ( )

zero_

Notes:

This API is ONLY available in Dygraph mode

This function fills the Tensor with zeros in place.

Parameters

x (Tensor) – x is the Tensor to be filled with zeros in place.

Returns

Tensor x filled with zeros in place

Return type

x(Tensor)

Examples

import paddle

tensor = paddle.to_tensor([0, 1, 2, 3, 4])

tensor.zero_()
print(tensor.tolist())   #[0, 0, 0, 0, 0]