relu_

paddle.nn.functional.relu_(x, name=None) [source]
In-place version of the relu API. The operation is applied to the input x in place, so x is overwritten with the result.
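
A minimal usage sketch, assuming a standard Paddle installation; the commented values show the expected behavior rather than exact printed formatting.

import paddle
import paddle.nn.functional as F

x = paddle.to_tensor([-2.0, 0.0, 1.5])
out = F.relu_(x)   # clamps negative entries to 0.0 and overwrites x in place

print(out)         # values: [0.0, 0.0, 1.5]
print(x)           # same values as out, because the update happened in place

Because the input is overwritten, relu_ is only appropriate when the original values of x are no longer needed; otherwise use paddle.nn.functional.relu.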