Preprocessor object for the PaddleOCR series of models.
#include <rec_preprocessor.h>
Public Member Functions

- bool Run (std::vector< FDMat > *images, std::vector< FDTensor > *outputs, size_t start_index, size_t end_index, const std::vector< int > &indices)
  Process the input images and prepare input tensors for runtime.
- virtual bool Apply (FDMatBatch *image_batch, std::vector< FDTensor > *outputs)
  Implement the virtual function of ProcessorManager. Apply() is the body of Run(): it contains the main preprocessing logic, while Run() is the entry point users call.
- void SetStaticShapeInfer (bool static_shape_infer)
  Set static_shape_infer for the recognition preprocess.
- bool GetStaticShapeInfer () const
  Get static_shape_infer of the recognition preprocess.
- void SetNormalize (const std::vector< float > &mean, const std::vector< float > &std, bool is_scale)
  Set the normalization parameters for the recognition preprocess.
- void SetRecImageShape (const std::vector< int > &rec_image_shape)
  Set rec_image_shape for the recognition preprocess.
- std::vector< int > GetRecImageShape ()
  Get rec_image_shape for the recognition preprocess.
- void DisableNormalize ()
  Disable normalization in the preprocessing step.
- void DisablePermute ()
  Disable the HWC-to-CHW permute in the preprocessing step.
- void UseCuda (bool enable_cv_cuda=false, int gpu_id=-1)
  Use CUDA to boost the performance of the processors.
- bool Run (std::vector< FDMat > *images, std::vector< FDTensor > *outputs)
  Process the input images and prepare input tensors for runtime.
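For orientation, here is a minimal configuration sketch in C++. It assumes the usual FastDeploy umbrella header and a default-constructible RecognizerPreprocessor; the shape values and the commented-out calls are illustrative, not defaults taken from this page.

    #include "fastdeploy/vision.h"

    int main() {
      fastdeploy::vision::ocr::RecognizerPreprocessor preprocessor;

      // Illustrative recognition input shape {channels, height, width},
      // matching the shape commonly used by PP-OCR recognition models.
      preprocessor.SetRecImageShape({3, 48, 320});

      // Optionally skip normalization / the HWC-to-CHW permute when the
      // model or a fused runtime pass already performs them.
      // preprocessor.DisableNormalize();
      // preprocessor.DisablePermute();

      // Optionally run the processors with CUDA on GPU 0.
      // preprocessor.UseCuda(/*enable_cv_cuda=*/false, /*gpu_id=*/0);
      return 0;
    }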
◆ Apply()
virtual bool fastdeploy::vision::ocr::RecognizerPreprocessor::Apply (FDMatBatch *image_batch, std::vector< FDTensor > *outputs)

Implement the virtual function of ProcessorManager. Apply() is the body of Run(): it contains the main preprocessing logic, while Run() is called by users to execute the preprocessing.

- Parameters
  - [in] image_batch  The input image batch
  - [in] outputs  The output tensors which will be fed into the runtime
- Returns
  - true if the preprocess succeeded, otherwise false
Implements fastdeploy::vision::ProcessorManager.
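To illustrate the relationship described above, the sketch below derives a hypothetical preprocessor from ProcessorManager and overrides Apply(); the class name is an assumption for illustration, the umbrella header is assumed to expose the needed types, and the body is a stub rather than real preprocessing logic.

    #include <vector>
    #include "fastdeploy/vision.h"  // assumed to expose ProcessorManager, FDMatBatch, FDTensor

    // Hypothetical subclass: ProcessorManager supplies Run(), the subclass
    // supplies the per-batch preprocessing logic in Apply().
    class MyPreprocessor : public fastdeploy::vision::ProcessorManager {
     public:
      bool Apply(fastdeploy::vision::FDMatBatch* image_batch,
                 std::vector<fastdeploy::FDTensor>* outputs) override {
        // A real Apply() would resize/normalize/permute the mats in
        // image_batch and pack the results into outputs.
        return true;
      }
    };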
◆ Run()
bool fastdeploy::vision::ocr::RecognizerPreprocessor::Run (std::vector< FDMat > *images, std::vector< FDTensor > *outputs, size_t start_index, size_t end_index, const std::vector< int > &indices)

Process the input images and prepare input tensors for runtime.

- Parameters
  - [in] images  The input data list, all the elements are FDMat
  - [in] outputs  The output tensors which will be fed into the runtime
- Returns
  - true if the preprocess succeeded, otherwise false
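A hedged usage sketch of this overload follows. It assumes OpenCV images can be wrapped into FDMat via fastdeploy::vision::WrapMat (the exact helper may differ across FastDeploy versions), and the index values plus the meaning of an empty indices vector are assumptions chosen for illustration.

    #include <opencv2/opencv.hpp>
    #include "fastdeploy/vision.h"

    bool PreprocessFirstHalf(const std::vector<cv::Mat>& cv_images) {
      namespace vis = fastdeploy::vision;
      vis::ocr::RecognizerPreprocessor preprocessor;

      // Wrap the OpenCV images as FDMat (assumed helper: vis::WrapMat).
      std::vector<vis::FDMat> mats;
      for (const auto& img : cv_images) mats.push_back(vis::WrapMat(img));

      std::vector<fastdeploy::FDTensor> outputs;
      size_t start_index = 0;
      size_t end_index = mats.size() / 2;   // illustrative sub-range
      std::vector<int> indices;             // assumption: empty = keep original order

      return preprocessor.Run(&mats, &outputs, start_index, end_index, indices);
    }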
◆ SetNormalize()
void fastdeploy::vision::ocr::RecognizerPreprocessor::SetNormalize (const std::vector< float > &mean, const std::vector< float > &std, bool is_scale)

inline

Set the preprocess normalization parameters. Call this API to customize the normalization parameters; otherwise the default normalization parameters will be used.
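As a hedged example, the call below uses the per-channel mean/std of 0.5 with 1/255 scaling often seen in PP-OCR recognition pipelines; these specific values are an assumption, not defaults stated on this page.

    fastdeploy::vision::ocr::RecognizerPreprocessor preprocessor;
    // Scale pixels by 1/255 (is_scale=true), then normalize each channel
    // with (x - mean) / std, mapping inputs roughly to [-1, 1].
    preprocessor.SetNormalize(/*mean=*/{0.5f, 0.5f, 0.5f},
                              /*std=*/{0.5f, 0.5f, 0.5f},
                              /*is_scale=*/true);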
◆ SetStaticShapeInfer()
void fastdeploy::vision::ocr::RecognizerPreprocessor::SetStaticShapeInfer (bool static_shape_infer)

inline

Set whether static_shape_infer is enabled. When deploying PP-OCR on hardware that does not support dynamic input shapes well, such as Huawei Ascend, static_shape_infer needs to be true.
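A short sketch of enabling this flag for such a deployment (the surrounding setup is omitted and assumed):

    fastdeploy::vision::ocr::RecognizerPreprocessor preprocessor;
    // On hardware that handles dynamic shapes poorly (e.g. Huawei Ascend),
    // force preprocessing to produce a fixed input shape.
    preprocessor.SetStaticShapeInfer(true);
    bool enabled = preprocessor.GetStaticShapeInfer();  // now returns true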
The documentation for this class was generated from the following files:
- /fastdeploy/my_work/FastDeploy/fastdeploy/vision/ocr/ppocr/rec_preprocessor.h
- /fastdeploy/my_work/FastDeploy/fastdeploy/vision/ocr/ppocr/rec_preprocessor.cc