FastDeploy
Torchvision ResNet series model.
#include <resnet.h>
Public Member Functions
ResNet (const std::string &model_file, const std::string &params_file="", const RuntimeOption &custom_option=RuntimeOption(), const ModelFormat &model_format=ModelFormat::ONNX)
Set the path of the model file and the runtime configuration.
virtual std::string | ModelName () const
Get the model's name.
virtual bool | Predict (cv::Mat *im, ClassifyResult *result, int topk=1)
Predict for the input "im"; the result will be saved in "result".
virtual bool | Infer (std::vector< FDTensor > &input_tensors, std::vector< FDTensor > *output_tensors)
Run inference with the runtime. This interface is called inside Predict(), so in most situations Infer() does not need to be called directly.
virtual bool | Infer ()
Run inference with the runtime, reading inputs from the class member reused_input_tensors_ and writing results to reused_output_tensors_.
virtual int | NumInputsOfRuntime ()
Get the number of inputs for this model.
virtual int | NumOutputsOfRuntime ()
Get the number of outputs for this model.
virtual TensorInfo | InputInfoOfRuntime (int index)
Get input information for this model.
virtual TensorInfo | OutputInfoOfRuntime (int index)
Get output information for this model.
virtual bool | Initialized () const
Check if the model is initialized successfully.
virtual void | EnableRecordTimeOfRuntime ()
A debug interface used to record the runtime cost (backend + h2d + d2h).
virtual void | DisableRecordTimeOfRuntime ()
Stop recording the runtime cost; see EnableRecordTimeOfRuntime() for details.
virtual std::map< std::string, float > | PrintStatisInfoOfRuntime ()
Print the runtime statistics to the console; see EnableRecordTimeOfRuntime() for details.
virtual bool | EnabledRecordTimeOfRuntime ()
Check whether runtime time recording is currently enabled.
virtual double | GetProfileTime ()
Get the profile time of the Runtime after the profiling process is done.
virtual void | ReleaseReusedBuffer ()
Release reused input/output buffers.
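The time-recording interface above can be used to profile repeated predictions. The following is a minimal sketch, assuming the FastDeploy C++ SDK is installed; the model path, image path, and the "avg_time" statistics key are placeholders/assumptions, not confirmed by this page:

```cpp
#include <iostream>
#include <map>
#include <string>

#include <fastdeploy/vision.h>   // FastDeploy C++ SDK (assumed installed)
#include <opencv2/opencv.hpp>

int main() {
  namespace cls = fastdeploy::vision::classification;
  cls::ResNet model("resnet50.onnx");  // placeholder model path
  if (!model.Initialized()) return -1;

  cv::Mat im = cv::imread("test.jpg");  // placeholder image path

  // Record backend + h2d + d2h time over repeated runs.
  model.EnableRecordTimeOfRuntime();
  for (int i = 0; i < 100; ++i) {
    fastdeploy::vision::ClassifyResult result;
    model.Predict(&im, &result);
  }
  // Prints the collected statistics to the console and returns them as a map.
  std::map<std::string, float> stats = model.PrintStatisInfoOfRuntime();
  model.DisableRecordTimeOfRuntime();

  // "avg_time" is an assumed key name for the mean runtime per inference.
  std::cout << "avg_time: " << stats["avg_time"] << std::endl;
  return 0;
}
```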
Public Attributes
std::vector< int > | size
Argument for the image preprocessing step: a pair of {width, height} that decides the target size after resizing. Default size = {224, 224}.
std::vector< float > | mean_vals
Mean parameters for normalization; the size should be the same as the number of channels. Default mean_vals = {0.485f, 0.456f, 0.406f}.
std::vector< float > | std_vals
Std parameters for normalization; the size should be the same as the number of channels. Default std_vals = {0.229f, 0.224f, 0.225f}.
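Because these attributes are public, they can be overridden after construction and before Predict(), for example to match a model trained at a different input resolution. A minimal sketch (the model path is a placeholder, and the {256, 256} resolution is purely illustrative):

```cpp
#include <fastdeploy/vision.h>  // FastDeploy C++ SDK (assumed installed)

int main() {
  fastdeploy::vision::classification::ResNet model("resnet50.onnx");  // placeholder path

  // Resize target as {width, height}; the default is {224, 224}.
  model.size = {256, 256};

  // Per-channel normalization parameters; they must match the values used
  // during training (the ImageNet defaults are shown here).
  model.mean_vals = {0.485f, 0.456f, 0.406f};
  model.std_vals = {0.229f, 0.224f, 0.225f};
  return 0;
}
```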
std::vector< Backend > | valid_cpu_backends = {Backend::ORT}
The model's valid CPU backends. This member lists all the CPU backends that have been successfully tested for the model.
std::vector< Backend > | valid_gpu_backends = {Backend::ORT} |
std::vector< Backend > | valid_ipu_backends = {} |
std::vector< Backend > | valid_timvx_backends = {} |
std::vector< Backend > | valid_directml_backends = {} |
std::vector< Backend > | valid_ascend_backends = {} |
std::vector< Backend > | valid_kunlunxin_backends = {} |
std::vector< Backend > | valid_rknpu_backends = {} |
std::vector< Backend > | valid_sophgonpu_backends = {} |
Torchvision ResNet series model.
fastdeploy::vision::classification::ResNet::ResNet (
    const std::string & model_file,
    const std::string & params_file = "",
    const RuntimeOption & custom_option = RuntimeOption(),
    const ModelFormat & model_format = ModelFormat::ONNX
)
Set the path of the model file and the runtime configuration.
[in] | model_file | Path of the model file, e.g. ./resnet50.onnx
[in] | params_file | Path of the parameter file, e.g. ppyoloe/model.pdiparams; if the model format is ONNX, this parameter is ignored
[in] | custom_option | RuntimeOption for inference; the default uses the CPU and chooses the backend defined in "valid_cpu_backends"
[in] | model_format | Model format of the loaded model; the default is the ONNX format
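For instance, to run on GPU instead of the default CPU path, a RuntimeOption can be passed as the third argument. A sketch, assuming RuntimeOption::UseGpu() from the FastDeploy runtime API and a placeholder model path:

```cpp
#include <fastdeploy/vision.h>  // FastDeploy C++ SDK (assumed installed)

int main() {
  // Select GPU device 0 instead of the default CPU backend.
  fastdeploy::RuntimeOption option;
  option.UseGpu(0);

  // params_file stays empty: ONNX models carry their weights in one file.
  fastdeploy::vision::classification::ResNet model(
      "resnet50.onnx", /*params_file=*/"", option);  // placeholder model path
  return 0;
}
```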
virtual bool fastdeploy::vision::classification::ResNet::Predict (cv::Mat * im, ClassifyResult * result, int topk = 1)
Predict for the input "im"; the result will be saved in "result".
[in] | im | The input image data, as read by cv::imread(): a 3-D array in HWC layout, BGR format
[out] | result | The inference result is saved here.
[in] | topk | The number of returned values; e.g., if topk == 2, the result will include the 2 most probable class labels for the input image.
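Putting the pieces together, a minimal end-to-end sketch of classification with this class. It assumes the FastDeploy C++ SDK is installed and a ResNet ONNX model is available; the model path, image path, and the ClassifyResult::Str() formatting helper are assumptions from the wider FastDeploy API, not confirmed by this page:

```cpp
#include <iostream>

#include <fastdeploy/vision.h>   // FastDeploy C++ SDK (assumed installed)
#include <opencv2/opencv.hpp>

int main() {
  namespace cls = fastdeploy::vision::classification;

  // Placeholder path: e.g. a ResNet-50 exported from torchvision to ONNX.
  cls::ResNet model("resnet50.onnx");
  if (!model.Initialized()) {
    std::cerr << "Failed to initialize model." << std::endl;
    return -1;
  }

  cv::Mat im = cv::imread("test.jpg");  // 3-channel BGR image, HWC layout
  fastdeploy::vision::ClassifyResult result;
  if (!model.Predict(&im, &result, /*topk=*/5)) {
    std::cerr << "Prediction failed." << std::endl;
    return -1;
  }

  // Str() (assumed) formats the top-k label ids and scores for printing.
  std::cout << result.Str() << std::endl;
  return 0;
}
```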