MODNet model object, used to load a MODNet model exported by the MODNet official code.
#include <modnet.h>
Public Member Functions

MODNet (const std::string &model_file, const std::string &params_file="", const RuntimeOption &custom_option=RuntimeOption(), const ModelFormat &model_format=ModelFormat::ONNX)
    Set the path of the model file and the runtime configuration.

std::string ModelName () const
    Get the model's name.

bool Predict (cv::Mat *im, MattingResult *result)
    Predict the matting result for an input image.

virtual bool Infer (std::vector< FDTensor > &input_tensors, std::vector< FDTensor > *output_tensors)
    Run inference on the model with the runtime. This interface is called inside the Predict() function, so Infer() rarely needs to be called directly.

virtual bool Infer ()
    Run inference on the model with the runtime, reading inputs from the class member reused_input_tensors_ and writing results to reused_output_tensors_.
virtual int NumInputsOfRuntime ()
    Get number of inputs for this model.

virtual int NumOutputsOfRuntime ()
    Get number of outputs for this model.

virtual TensorInfo InputInfoOfRuntime (int index)
    Get input information for this model.

virtual TensorInfo OutputInfoOfRuntime (int index)
    Get output information for this model.

virtual bool Initialized () const
    Check if the model is initialized successfully.
virtual void EnableRecordTimeOfRuntime ()
    Debug interface used to record the runtime cost (backend + h2d + d2h).

virtual void DisableRecordTimeOfRuntime ()
    Stop recording the runtime cost; see EnableRecordTimeOfRuntime() for more detail.

virtual std::map< std::string, float > PrintStatisInfoOfRuntime ()
    Print the runtime statistics to the console; see EnableRecordTimeOfRuntime() for more detail.

virtual bool EnabledRecordTimeOfRuntime ()
    Check whether recording of the runtime cost is enabled.

virtual double GetProfileTime ()
    Get the profiled time of the runtime after profiling is done.

virtual void ReleaseReusedBuffer ()
    Release reused input/output buffers.
Public Attributes

std::vector< int > size
    Argument for the image preprocessing step: a tuple of (width, height) that decides the target size after resizing, default (256, 256).

std::vector< float > alpha
    Argument for the image preprocessing step: normalization parameters whose size should be the same as the number of channels, default alpha = {1.f / 127.5f, 1.f / 127.5f, 1.f / 127.5f}.

std::vector< float > beta
    Argument for the image preprocessing step: normalization parameters whose size should be the same as the number of channels, default beta = {-1.f, -1.f, -1.f}.

bool swap_rb
    Argument for the image preprocessing step: whether to swap the R and B channels, e.g. BGR->RGB, default true.
std::vector< Backend > valid_cpu_backends = {Backend::ORT}
    The model's valid CPU backends. This member defines all the CPU backends that have been successfully tested for the model.

std::vector< Backend > valid_gpu_backends = {Backend::ORT}

std::vector< Backend > valid_ipu_backends = {}

std::vector< Backend > valid_timvx_backends = {}

std::vector< Backend > valid_directml_backends = {}

std::vector< Backend > valid_ascend_backends = {}

std::vector< Backend > valid_kunlunxin_backends = {}

std::vector< Backend > valid_rknpu_backends = {}

std::vector< Backend > valid_sophgonpu_backends = {}
Detailed Description

MODNet model object, used to load a MODNet model exported by the MODNet official code.
◆ MODNet()
fastdeploy::vision::matting::MODNet::MODNet (const std::string &model_file,
                                             const std::string &params_file = "",
                                             const RuntimeOption &custom_option = RuntimeOption(),
                                             const ModelFormat &model_format = ModelFormat::ONNX)
Set the path of the model file and the runtime configuration.

- Parameters
    [in] model_file     Path of the model file, e.g. ./modnet.onnx
    [in] params_file    Path of the parameter file, e.g. ppyoloe/model.pdiparams; ignored when the model format is ONNX
    [in] custom_option  RuntimeOption for inference; the default runs on CPU with the backend defined in valid_cpu_backends
    [in] model_format   Format of the loaded model, default is ONNX
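As a usage sketch of this constructor (the model path is illustrative, and building it assumes the FastDeploy SDK is installed and an exported modnet.onnx is available):

```cpp
#include <iostream>

#include "fastdeploy/vision.h"

int main() {
  // Illustrative option: explicitly select CPU. With the defaults, the model
  // runs on CPU using a backend from valid_cpu_backends (ORT).
  fastdeploy::RuntimeOption option;
  option.UseCpu();

  // params_file is ignored for the ONNX format, so it can stay empty.
  fastdeploy::vision::matting::MODNet model(
      "./modnet.onnx", /*params_file=*/"", option, fastdeploy::ModelFormat::ONNX);
  if (!model.Initialized()) {
    std::cerr << "Failed to initialize " << model.ModelName() << std::endl;
    return -1;
  }
  return 0;
}
```

Initialized() should be checked before calling Predict(), since construction does not throw on a bad model path.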
◆ Predict()
bool fastdeploy::vision::matting::MODNet::Predict (cv::Mat *im, MattingResult *result)
Predict the matting result for an input image.
- Parameters
    [in]  im      The input image data, from cv::imread(): a 3-D array with HWC layout, BGR format
    [out] result  The output matting result, written to this structure
- Returns
    true if the prediction succeeded, otherwise false
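Putting the pieces together, an end-to-end sketch of Predict() (file names are illustrative; compiling it assumes the FastDeploy SDK and OpenCV are available):

```cpp
#include <iostream>

#include "fastdeploy/vision.h"
#include "opencv2/opencv.hpp"

int main() {
  fastdeploy::vision::matting::MODNet model("./modnet.onnx");
  if (!model.Initialized()) {
    std::cerr << "Failed to initialize the model." << std::endl;
    return -1;
  }

  // The input comes from cv::imread(): HWC layout, BGR channel order.
  cv::Mat im = cv::imread("./input.jpg");
  if (im.empty()) {
    std::cerr << "Failed to read the input image." << std::endl;
    return -1;
  }

  fastdeploy::vision::MattingResult result;
  if (!model.Predict(&im, &result)) {  // returns false on failure
    std::cerr << "Prediction failed." << std::endl;
    return -1;
  }
  std::cout << result.Str() << std::endl;  // text summary of the matting result
  return 0;
}
```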
The documentation for this class was generated from the following files:
- /fastdeploy/my_work/FastDeploy/fastdeploy/vision/matting/contrib/modnet.h
- /fastdeploy/my_work/FastDeploy/fastdeploy/vision/matting/contrib/modnet.cc