PipelineTrainer

class paddle.fluid.trainer_desc.PipelineTrainer

Implementation of PipelineTrainer, the trainer used for pipeline-parallel training.