svd

paddle.linalg.svd(x, full_matrices=False, name=None) [source]

Computes the singular value decomposition of one matrix or a batch of matrices.

Let \(X\) be the input matrix or a batch of input matrices; the outputs satisfy:

\[X = U \operatorname{diag}(S) V^{H}\]
Parameters
  • x (Tensor) – The input tensor. Its shape should be […, N, M], where … denotes zero or more batch dimensions. N and M can be arbitrary positive integers. Note that if x contains singular matrices, the gradient is numerically unstable. The data type of x should be float32 or float64.

  • full_matrices (bool, optional) – A flag that controls the behavior of svd. Let K = min(M, N). If full_matrices = True, the svd op computes full U and V matrices, so the shape of U is […, N, N] and the shape of V is […, M, M]. If full_matrices = False, the svd op uses an economical method to store U and V, so the shape of U is […, N, K] and the shape of V is […, M, K]. Default value is False. A shape sketch is given after this parameter list.

  • name (str, optional) – Name for the operation (optional, default is None). For more information, please refer to Name.
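
The full_matrices flag only changes the shapes of U and V. A minimal shape sketch, assuming a random 5 x 3 input (so K = min(5, 3) = 3):

import paddle

# A single 5 x 3 matrix: N = 5, M = 3, so K = min(N, M) = 3.
x = paddle.randn([5, 3], dtype='float64')

# Economical SVD (default): U is [5, 3], S is [3], VH is [3, 3].
u, s, vh = paddle.linalg.svd(x)
print(u.shape, s.shape, vh.shape)

# Full SVD: U becomes [5, 5]; S and VH keep the same shapes here since M == K.
uf, sf, vhf = paddle.linalg.svd(x, full_matrices=True)
print(uf.shape, sf.shape, vhf.shape)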

Returns

  • U (Tensor), the left singular vectors of the decomposition, with shape […, N, K] (or […, N, N] when full_matrices = True).

  • S (Tensor), the singular values of the decomposition, a vector with shape […, K].

  • VH (Tensor), the conjugate transpose of V, where V holds the right singular vectors of the decomposition; its shape is […, K, M] (or […, M, M] when full_matrices = True).

Tuple of 3 tensors (U, S, VH).
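
Batch dimensions are carried through unchanged to the returned factors. A minimal sketch, assuming a random batch of two 4 x 3 matrices:

import paddle

# A batch of two 4 x 3 matrices; the leading batch dimension is preserved.
x = paddle.randn([2, 4, 3], dtype='float64')
u, s, vh = paddle.linalg.svd(x)

print(u.shape)   # [2, 4, 3]
print(s.shape)   # [2, 3]
print(vh.shape)  # [2, 3, 3]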

Examples

import paddle

x = paddle.to_tensor([[1.0, 2.0], [1.0, 3.0], [4.0, 6.0]], dtype='float64')
u, s, vh = paddle.linalg.svd(x)
print(u)
#U = [[ 0.27364809, -0.21695147  ],
#      [ 0.37892198, -0.87112408 ],
#      [ 0.8840446 ,  0.44053933 ]]

print(s)
#S = [8.14753743, 0.78589688]
print(vh)
#VH = [[ 0.51411221,  0.85772294],
#     [ 0.85772294, -0.51411221]]

# one can verify : U @ diag(S) @ VH == X
#                  UH @ U == I
#                  VH @ V == I
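
The three identities in the comments above can be checked numerically. A minimal sketch reusing the same input matrix (paddle.allclose is used to compare up to floating-point error):

import paddle

x = paddle.to_tensor([[1.0, 2.0], [1.0, 3.0], [4.0, 6.0]], dtype='float64')
u, s, vh = paddle.linalg.svd(x)

# X == U @ diag(S) @ VH
x_rec = paddle.matmul(u, paddle.matmul(paddle.diag(s), vh))
print(paddle.allclose(x_rec, x))  # True

# UH @ U == I and VH @ V == I (U and V have orthonormal columns)
uh = paddle.transpose(u, perm=[1, 0])
v = paddle.transpose(vh, perm=[1, 0])
eye = paddle.eye(2, dtype='float64')
print(paddle.allclose(paddle.matmul(uh, u), eye))  # True
print(paddle.allclose(paddle.matmul(vh, v), eye))  # True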