fastdeploy::OrtBackendOption Struct Reference
Option object to configure ONNX Runtime backend.
#include <option.h>
Public Attributes

int graph_optimization_level = -1
    Level of graph optimization, -1: default (enable all optimizations).

int intra_op_num_threads = -1
    Number of threads used to execute an operator, -1: default.

int inter_op_num_threads = -1
    Number of threads used to execute the graph, -1: default; only effective in parallel execution mode.

int execution_mode = -1
    Execution mode for the graph, -1: default (sequential mode).

Device device = Device::CPU
    Inference device; OrtBackend supports CPU/GPU.

int device_id = 0
    Inference device id.

bool enable_fp16 = false
    Whether to use FP16 for inference.
Detailed Description

Option object to configure ONNX Runtime backend.
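Below is a minimal sketch of setting these options through FastDeploy's RuntimeOption. It assumes RuntimeOption exposes this struct as an `ort_option` member (true in recent FastDeploy releases, but verify against your version); "model.onnx" is a placeholder path.

    #include "fastdeploy/runtime.h"

    int main() {
      fastdeploy::RuntimeOption option;
      option.UseOrtBackend();  // select ONNX Runtime as the inference backend

      // Assumption: backend-specific options live under `option.ort_option`.
      option.ort_option.graph_optimization_level = 99;    // enable all optimizations
      option.ort_option.intra_op_num_threads = 4;         // threads per operator
      option.ort_option.device = fastdeploy::Device::CPU; // OrtBackend supports CPU/GPU
      option.ort_option.enable_fp16 = false;              // whether to run inference in FP16

      option.SetModelPath("model.onnx", "", fastdeploy::ModelFormat::ONNX);

      fastdeploy::Runtime runtime;
      if (!runtime.Init(option)) return -1;  // build the backend with these options
      return 0;
    }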
Member Data Documentation

int fastdeploy::OrtBackendOption::execution_mode = -1

Execution mode for the graph:
  -1: default (sequential mode)
   0: sequential mode, execute the operators in the graph one by one
   1: parallel mode, execute the operators in the graph in parallel
int fastdeploy::OrtBackendOption::graph_optimization_level = -1

Level of graph optimization:
  -1: default (enable all optimization strategies)
   0: disable all optimization strategies
   1: enable basic strategies
   2: enable extended strategies
  99: enable all strategies
int fastdeploy::OrtBackendOption::inter_op_num_threads = -1

Number of threads used to execute the graph, -1: default. This parameter only takes effect when OrtBackendOption::execution_mode is set to 1 (parallel mode); see the sketch below.
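A minimal sketch of the interaction between these two fields, again assuming the `ort_option` member on RuntimeOption; the thread counts are illustrative, not recommendations.

    #include "fastdeploy/runtime.h"

    int main() {
      fastdeploy::RuntimeOption option;
      option.UseOrtBackend();

      // Parallel mode: independent operators in the graph may run concurrently.
      option.ort_option.execution_mode = 1;        // 1 = parallel mode
      option.ort_option.inter_op_num_threads = 2;  // ignored unless execution_mode == 1
      option.ort_option.intra_op_num_threads = 4;  // per-operator threads, independent of mode

      // ... set the model path and initialize fastdeploy::Runtime as usual.
      return 0;
    }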