fastdeploy::vision::classification::PaddleClasPreprocessor Class Reference

Preprocessor object for PaddleClas series models. More...

#include <preprocessor.h>

Inheritance diagram for fastdeploy::vision::classification::PaddleClasPreprocessor (inherits fastdeploy::vision::ProcessorManager).
Collaboration diagram for fastdeploy::vision::classification::PaddleClasPreprocessor.

Public Member Functions

 PaddleClasPreprocessor (const std::string &config_file)
 Create a preprocessor instance for PaddleClas series models. More...
 
virtual bool Apply (FDMatBatch *image_batch, std::vector< FDTensor > *outputs)
 Implements the virtual function of ProcessorManager. Apply() is the body of Run(): it contains the main preprocessing logic, while Run() is the entry point users call to execute preprocessing. More...
 
void DisableNormalize ()
 Disable the normalize operation in the preprocessing step.
 
void DisablePermute ()
 Disable the hwc2chw (HWC-to-CHW permute) operation in the preprocessing step.
 
void InitialResizeOnCpu (bool v)
 When the first operator is Resize and the input image is large, it can be faster to run that resize on the CPU, because the host-to-device memcpy of the full-size image is time-consuming. Set this to true to run the initial resize on the CPU. More...
 
- Public Member Functions inherited from fastdeploy::vision::ProcessorManager
void UseCuda (bool enable_cv_cuda=false, int gpu_id=-1)
 Use CUDA to boost the performance of processors. More...
 
bool Run (std::vector< FDMat > *images, std::vector< FDTensor > *outputs)
 Process the input images and prepare the input tensors for the runtime. More...
 

Detailed Description

Preprocessor object for PaddleClas series models.

Constructor & Destructor Documentation

◆ PaddleClasPreprocessor()

fastdeploy::vision::classification::PaddleClasPreprocessor::PaddleClasPreprocessor(const std::string& config_file)    [explicit]

Create a preprocessor instance for PaddleClas series models.

Parameters
[in] config_file  Path of the configuration file for deployment, e.g., resnet/infer_cfg.yml
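
A minimal construction sketch, assuming the umbrella header fastdeploy/vision.h and reusing the example config path from the parameter description above:

#include <string>
#include "fastdeploy/vision.h"

int main() {
  // Path of the deployment configuration file exported with the model,
  // e.g. resnet/infer_cfg.yml as in the parameter description above.
  std::string config_file = "resnet/infer_cfg.yml";

  // The constructor reads the config and assembles the preprocessing pipeline.
  fastdeploy::vision::classification::PaddleClasPreprocessor preprocessor(config_file);
  return 0;
}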

Member Function Documentation

◆ Apply()

bool fastdeploy::vision::classification::PaddleClasPreprocessor::Apply(FDMatBatch* image_batch, std::vector<FDTensor>* outputs)    [virtual]

Implements the virtual function of ProcessorManager. Apply() is the body of Run(): it contains the main preprocessing logic, while Run() is the entry point users call to execute preprocessing.

Parameters
[in] image_batch  The input image batch
[in] outputs  The output tensors, which will be fed to the runtime
Returns
true if the preprocessing succeeded, otherwise false

Implements fastdeploy::vision::ProcessorManager.
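
Apply() is usually not called directly; the inherited Run() listed above is the user-facing entry point and dispatches to Apply() internally. A hedged usage sketch, assuming the WrapMat helper for converting a cv::Mat into an FDMat and a hypothetical image file test.jpg:

#include <iostream>
#include <vector>
#include <opencv2/opencv.hpp>
#include "fastdeploy/vision.h"

int main() {
  fastdeploy::vision::classification::PaddleClasPreprocessor preprocessor("resnet/infer_cfg.yml");

  // Read an image with OpenCV and wrap it as an FDMat (hypothetical file name).
  cv::Mat image = cv::imread("test.jpg");
  std::vector<fastdeploy::vision::FDMat> mats;
  mats.push_back(fastdeploy::vision::WrapMat(image));

  // Run() assembles the FDMatBatch internally and forwards it to Apply().
  std::vector<fastdeploy::FDTensor> tensors;
  if (!preprocessor.Run(&mats, &tensors)) {
    std::cerr << "Preprocessing failed." << std::endl;
    return -1;
  }
  // tensors[0] now holds the batched input tensor to feed to the runtime.
  return 0;
}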

◆ InitialResizeOnCpu()

void fastdeploy::vision::classification::PaddleClasPreprocessor::InitialResizeOnCpu(bool v)    [inline]

When the first operator is Resize and the input image is large, it can be faster to run that resize on the CPU, because the host-to-device memcpy of the full-size image is time-consuming. Set this to true to run the initial resize on the CPU.

Parameters
[in] v  true to run the initial resize on the CPU, otherwise false
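
A sketch of how this switch might be combined with the other toggles on this page (UseCuda comes from the inherited ProcessorManager interface; whether it helps depends on image sizes and hardware):

#include "fastdeploy/vision.h"

int main() {
  fastdeploy::vision::classification::PaddleClasPreprocessor preprocessor("resnet/infer_cfg.yml");

  // Run the processors on GPU 0 (the first argument enables CV-CUDA if built with it).
  preprocessor.UseCuda(false, 0);

  // Keep the initial Resize on the CPU so the host-to-device copy moves the
  // already-downscaled image rather than the full-resolution one.
  preprocessor.InitialResizeOnCpu(true);

  // Optional: skip normalize / HWC-to-CHW if the deployed graph already performs them.
  // preprocessor.DisableNormalize();
  // preprocessor.DisablePermute();
  return 0;
}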

The documentation for this class was generated from the following files: