PaddleClas series model object, used to load a classification model exported by the PaddleClas repository.
More...
|
| PaddleClasModel (const std::string &model_file, const std::string ¶ms_file, const std::string &config_file, const RuntimeOption &custom_option=RuntimeOption(), const ModelFormat &model_format=ModelFormat::PADDLE) |
| Set the paths of the model file, parameters file and configuration file, as well as the runtime configuration. More...
|
|
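A minimal construction sketch, assuming the FastDeploy C++ API; the ResNet50_vd file paths are placeholders for any model exported by the PaddleClas repository, and RuntimeOption::UseGpu() can be omitted to run on CPU.

```cpp
#include <iostream>
#include <string>
#include "fastdeploy/vision.h"

int main() {
  // Placeholder paths to a model exported by the PaddleClas repository.
  std::string model_file = "ResNet50_vd_infer/inference.pdmodel";
  std::string params_file = "ResNet50_vd_infer/inference.pdiparams";
  std::string config_file = "ResNet50_vd_infer/inference_cls.yaml";

  fastdeploy::RuntimeOption option;
  option.UseGpu();  // optional; comment out to stay on CPU

  fastdeploy::vision::classification::PaddleClasModel model(
      model_file, params_file, config_file, option);
  if (!model.Initialized()) {
    std::cerr << "Failed to initialize PaddleClasModel." << std::endl;
    return -1;
  }
  return 0;
}
```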
virtual std::unique_ptr< PaddleClasModel > | Clone () const |
| Clone a new PaddleClasModel to reduce memory usage when multiple instances of the same model are created. More...
|
|
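A hedged sketch of Clone(): the cloned instance shares weights with the original, which keeps memory usage lower than constructing the model a second time. `model` and `im` reuse the names from the construction example above.

```cpp
// `model` is an initialized PaddleClasModel, `im` a cv::Mat input image.
auto cloned = model.Clone();  // std::unique_ptr<PaddleClasModel>

fastdeploy::vision::ClassifyResult result;
cloned->Predict(im, &result);  // the clone predicts independently of `model`
```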
virtual std::string | ModelName () const |
| Get model's name.
|
|
virtual bool | Predict (cv::Mat *im, ClassifyResult *result, int topk=1) |
| DEPRECATED. Predict the classification result for an input image; this overload will be removed in version 1.0. More...
|
|
virtual bool | Predict (const cv::Mat &img, ClassifyResult *result) |
| Predict the classification result for an input image. More...
|
|
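A minimal single-image sketch, assuming `model` from the construction example and a placeholder image path; ClassifyResult::Str() is used here only for quick inspection of the result.

```cpp
#include <opencv2/opencv.hpp>

cv::Mat im = cv::imread("test.jpg");  // placeholder input image
fastdeploy::vision::ClassifyResult result;
if (!model.Predict(im, &result)) {
  std::cerr << "Prediction failed." << std::endl;
} else {
  std::cout << result.Str() << std::endl;  // top-k label_ids and scores
}
```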
virtual bool | BatchPredict (const std::vector< cv::Mat > &imgs, std::vector< ClassifyResult > *results) |
| Predict the classification results for a batch of input images. More...
|
|
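A batch variant of the sketch above; the image paths are placeholders, and the results come back in the same order as the input images.

```cpp
std::vector<cv::Mat> imgs = {cv::imread("a.jpg"), cv::imread("b.jpg")};
std::vector<fastdeploy::vision::ClassifyResult> results;
if (model.BatchPredict(imgs, &results)) {
  for (size_t i = 0; i < results.size(); ++i) {
    std::cout << "image " << i << ": " << results[i].Str() << std::endl;
  }
}
```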
virtual bool | Predict (const FDMat &mat, ClassifyResult *result) |
| Predict the classification result for an input image. More...
|
|
virtual bool | BatchPredict (const std::vector< FDMat > &mats, std::vector< ClassifyResult > *results) |
| Predict the classification results for a batch of input images. More...
|
|
virtual PaddleClasPreprocessor & | GetPreprocessor () |
| Get preprocessor reference of PaddleClasModel.
|
|
virtual PaddleClasPostprocessor & | GetPostprocessor () |
| Get postprocessor reference of PaddleClasModel.
|
|
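A sketch of tuning the bundled processors through these accessors; SetTopk() on the postprocessor is an assumption about the PaddleClasPostprocessor interface, so check the header of your FastDeploy version.

```cpp
// Assumption: PaddleClasPostprocessor exposes a SetTopk() setter.
model.GetPostprocessor().SetTopk(5);           // keep the 5 highest-scoring classes
auto& preprocessor = model.GetPreprocessor();  // adjust preprocessing here if needed
```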
virtual bool | Infer (std::vector< FDTensor > &input_tensors, std::vector< FDTensor > *output_tensors) |
| Run inference with the runtime. This interface is already called inside Predict(), so in most situations there is no need to call Infer() directly.
|
|
virtual bool | Infer () |
| Run inference with the runtime. This overload reads its inputs from the class member reused_input_tensors_ and writes the results to reused_output_tensors_.
|
|
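A hedged sketch of driving Infer() by hand; the Run() signatures of the pre/postprocessor and WrapMat() are assumptions based on other FastDeploy vision modules, and Predict() already performs all of these steps internally.

```cpp
// Manual pipeline (normally Predict() does this for you).
std::vector<fastdeploy::vision::FDMat> mats = {fastdeploy::vision::WrapMat(im)};
std::vector<fastdeploy::FDTensor> inputs, outputs;
model.GetPreprocessor().Run(&mats, &inputs);          // assumed signature
inputs[0].name = model.InputInfoOfRuntime(0).name;    // bind the tensor to the runtime input
model.Infer(inputs, &outputs);
std::vector<fastdeploy::vision::ClassifyResult> results;
model.GetPostprocessor().Run(outputs, &results);      // assumed signature
```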
virtual int | NumInputsOfRuntime () |
| Get number of inputs for this model.
|
|
virtual int | NumOutputsOfRuntime () |
| Get number of outputs for this model.
|
|
virtual TensorInfo | InputInfoOfRuntime (int index) |
| Get the information of the input tensor at the given index.
|
|
virtual TensorInfo | OutputInfoOfRuntime (int index) |
| Get the information of the output tensor at the given index.
|
|
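A quick inspection sketch using these queries; TensorInfo is assumed to expose name, shape and dtype members.

```cpp
for (int i = 0; i < model.NumInputsOfRuntime(); ++i) {
  fastdeploy::TensorInfo info = model.InputInfoOfRuntime(i);
  std::cout << "input " << i << ": " << info.name << std::endl;  // shape/dtype also available
}
for (int i = 0; i < model.NumOutputsOfRuntime(); ++i) {
  std::cout << "output " << i << ": " << model.OutputInfoOfRuntime(i).name << std::endl;
}
```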
virtual bool | Initialized () const |
| Check if the model is initialized successfully.
|
|
virtual void | EnableRecordTimeOfRuntime () |
| This is a debug interface used to record the time spent in the runtime (backend inference + h2d + d2h). More...
|
|
virtual void | DisableRecordTimeOfRuntime () |
| Disable recording the time of the runtime; see EnableRecordTimeOfRuntime() for more detail.
|
|
virtual std::map< std::string, float > | PrintStatisInfoOfRuntime () |
| Print the runtime's timing statistics to the console; see EnableRecordTimeOfRuntime() for more detail.
|
|
virtual bool | EnabledRecordTimeOfRuntime () |
| Check whether recording of the runtime time (EnableRecordTimeOfRuntime()) is enabled.
|
|
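A debug-timing sketch combining the four record-time interfaces; the loop count is arbitrary and `im` reuses the image from the Predict example.

```cpp
model.EnableRecordTimeOfRuntime();
for (int i = 0; i < 100; ++i) {  // arbitrary number of timed runs
  fastdeploy::vision::ClassifyResult result;
  model.Predict(im, &result);
}
if (model.EnabledRecordTimeOfRuntime()) {
  auto stats = model.PrintStatisInfoOfRuntime();  // prints and returns the timing statistics
}
model.DisableRecordTimeOfRuntime();
```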
virtual double | GetProfileTime () |
| Get the profiled time of the Runtime after profiling has finished.
|
|
virtual void | ReleaseReusedBuffer () |
| Release reused input/output buffers.
|
|