
AidLite C++ API

💡 Note

To use the AidLite-SDK C++ for development, please note the following:

  • During compilation, include the header file located at /usr/local/include/aidlux/aidlite/aidlite.hpp.
  • During linking, specify the library file located at /usr/local/lib/libaidlite.so.

Model Data Type enum DataType

The AidLite SDK processes models from various frameworks, each with its own input and output data types. This enumeration is used when setting a model’s input and output data types in the usage flow described earlier.

| Member Variable | Type | Value | Description |
| --- | --- | --- | --- |
| TYPE_DEFAULT | uint8_t | 0 | Invalid DataType |
| TYPE_UINT8 | uint8_t | 1 | Unsigned byte data |
| TYPE_INT8 | uint8_t | 2 | Byte data |
| TYPE_UINT32 | uint8_t | 3 | Unsigned int32 integer |
| TYPE_FLOAT32 | uint8_t | 4 | Float data |
| TYPE_INT32 | uint8_t | 5 | Int32 data |
| TYPE_INT64 | uint8_t | 6 | Int64 data |

Inference Implementation Type enum ImplementType (Deprecated)

| Member Variable | Type | Value | Description |
| --- | --- | --- | --- |
| TYPE_DEFAULT | uint8_t | 0 | Invalid ImplementType |
| TYPE_MMKV | uint8_t | 1 | Implemented via MMKV (not currently implemented, may be removed later) |
| TYPE_FAST | uint8_t | 2 | Implemented via IPC backend |
| TYPE_LOCAL | uint8_t | 3 | Implemented via local backend |

💡 Note

Important: Starting from AidLite-SDK C++ version V2.0.4, ImplementType is deprecated.

Model Framework Type enum FrameworkType

As mentioned earlier, the AidLite SDK integrates multiple deep learning inference frameworks. Therefore, in the usage flow, it is necessary to specify which framework’s model is being used, requiring this framework type enumeration.

| Member Variable | Type | Value | Description |
| --- | --- | --- | --- |
| TYPE_DEFAULT | uint8_t | 0 | Invalid FrameworkType |
| TYPE_SNPE | uint8_t | 1 | SNPE (DLC) model type |
| TYPE_TFLITE | uint8_t | 2 | TFLite model type |
| TYPE_RKNN | uint8_t | 3 | RKNN model type |
| TYPE_QNN | uint8_t | 4 | QNN model type |
| TYPE_SNPE2 | uint8_t | 5 | SNPE2 (DLC) model type |
| TYPE_NCNN | uint8_t | 6 | NCNN model type |
| TYPE_MNN | uint8_t | 7 | MNN model type |
| TYPE_TNN | uint8_t | 8 | TNN model type |
| TYPE_PADDLE | uint8_t | 9 | Paddle model type |

Acceleration Hardware Type enum AccelerateType

Each deep learning inference framework may support running on different acceleration devices (e.g., SNPE models on DSP devices, RKNN models on NPU devices). Thus, in the usage flow, it is necessary to specify which device the model is expected to run on, requiring this acceleration hardware enumeration.

| Member Variable | Type | Value | Description |
| --- | --- | --- | --- |
| TYPE_DEFAULT | uint8_t | 0 | Invalid AccelerateType |
| TYPE_CPU | uint8_t | 1 | CPU acceleration |
| TYPE_GPU | uint8_t | 2 | GPU acceleration |
| TYPE_DSP | uint8_t | 3 | DSP acceleration |
| TYPE_NPU | uint8_t | 4 | NPU acceleration |

Log Level enum LogLevel

The AidLite SDK provides log-related interfaces (introduced later) that require a log level to be specified, which is expressed with this enumeration.

| Member Variable | Type | Value | Description |
| --- | --- | --- | --- |
| INFO | uint8_t | 0 | Information |
| WARNING | uint8_t | 1 | Warning |
| ERROR | uint8_t | 2 | Error |
| FATAL | uint8_t | 3 | Fatal error |

Model Class class Model

As mentioned earlier, before creating an inference interpreter, detailed parameters related to the specific model need to be set. The Model class is primarily used to record the model’s file information, structural information, and related content during model execution.

Create Instance create_instance()

To set detailed model information, a model instance object is required first. This function is used to create a Model instance object. If the model framework (e.g., SNPE, TFLite) requires only one model file, this interface is used.

API: create_instance
Description: Constructs a Model type instance object by passing the model file’s path and name.
Parameters:
  model_path: Path and name of the model file.
Return Value: If nullptr, the function failed to construct the object; otherwise, it is a pointer to the Model object.

```cpp
// Create a model object using the inceptionv3_float32.dlc file in the current path; report an error if the return value is null
Model* model = Model::create_instance("./inceptionv3_float32.dlc");
if(model == nullptr){
    printf("Create model failed !\n");
    return EXIT_FAILURE;
}
```

Create Instance create_instance()

To set detailed model information, a model instance object is required first. This function is used to create a Model instance object. If the model framework (e.g., NCNN, TNN) involves two model files, this interface is used.

API: create_instance
Description: Constructs a Model type instance object by passing the paths and names of the model files.
Parameters:
  model_struct_path: Path and name of the model structure file.
  model_weight_path: Path and name of the model weight parameters file.
Return Value: If nullptr, the function failed to construct the object; otherwise, it is a pointer to the Model object.

```cpp
// Create a model object using two NCNN model files in the current path; report an error if the return value is null
Model* model = Model::create_instance("./mobilenet_ssd_voc_ncnn.param",
    "./mobilenet_ssd_voc_ncnn.bin");
if(model == nullptr){
    printf("Create model failed !\n");
    return EXIT_FAILURE;
}
```

Set Model Properties set_model_properties()

After successfully creating the model instance object, it is necessary to set the input and output data types and the shape information of the input and output tensors.

API: set_model_properties
Description: Sets the model’s properties, including input and output data shapes and data types.
Parameters:
  input_shapes: Array of input tensor shapes, in a 2D array structure.
  input_data_type: Input tensor data type, DataType enumeration type.
  output_shapes: Array of output tensor shapes, in a 2D array structure.
  output_data_type: Output tensor data type, DataType enumeration type.
Return Value: void

```cpp
// Use the previously defined DataType enumeration; input and output shapes are 2D arrays
std::vector<std::vector<uint32_t>> input_shapes = {{1,299,299,3}};
std::vector<std::vector<uint32_t>> output_shapes = {{1,1001}};
model->set_model_properties(input_shapes, DataType::TYPE_FLOAT32,
    output_shapes, DataType::TYPE_FLOAT32);
```

Get Model Path get_model_absolute_path()

If the model framework (e.g., SNPE, TFLite) involves only one model file, and the Model object is constructed without exceptions, the provided model file parameter will be converted to an absolute path.

API: get_model_absolute_path
Description: Retrieves the existing path of the model file.
Parameters: void
Return Value: The model file path string corresponding to the Model object.
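
For illustration, a minimal sketch of reading back the resolved path, assuming the model object from the earlier create_instance() example and that the path is returned as a std::string (adjust if the SDK uses a different string type):

```cpp
// Query the absolute path that the Model object resolved for the single model file
// (the std::string return type is an assumption based on the description above)
std::string model_path = model->get_model_absolute_path();
printf("Model file path : %s\n", model_path.c_str());
```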

Get Model Structure Path get_model_struct_absolute_path()

If the model framework (e.g., NCNN, TNN) involves two model files, and the Model object is constructed without exceptions, the provided model structure file parameter will be converted to an absolute path.

API: get_model_struct_absolute_path
Description: Retrieves the existing path of the model structure file.
Parameters: void
Return Value: The model structure file path string corresponding to the Model object.

Get Model Weight Path get_model_weight_absolute_path()

If the model framework (e.g., NCNN, TNN) involves two model files, and the Model object is constructed without exceptions, the provided model weight parameters file parameter will be converted to an absolute path.

API: get_model_weight_absolute_path
Description: Retrieves the existing path of the model weight parameters file.
Parameters: void
Return Value: The model weight parameters file path string corresponding to the Model object.
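
As a combined sketch covering this getter and the structure-path getter above, assuming the dual-file model object from the NCNN example and std::string return values:

```cpp
// Query the resolved paths of the structure and weight files of a dual-file model
// (the std::string return type is an assumption based on the descriptions above)
std::string struct_path = model->get_model_struct_absolute_path();
std::string weight_path = model->get_model_weight_absolute_path();
printf("Model struct file : %s\n", struct_path.c_str());
printf("Model weight file : %s\n", weight_path.c_str());
```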

Configuration Class class Config

As mentioned earlier, before creating an inference interpreter, in addition to setting specific Model information, some configuration information for inference is also required. The Config class is used to record configuration options that need to be preset and will be used during runtime.

Create Instance create_instance()

To set runtime configuration information, a configuration instance object is required first. This function is used to create a Config instance object.

API: create_instance
Description: Constructs a Config class instance object.
Parameters: void
Return Value: If nullptr, the function failed to construct the object; otherwise, it is a pointer to the Config object.

```cpp
// Create a config instance object; report an error if the return value is null
Config* config = Config::create_instance();
if(config == nullptr){
    printf("Create config failed !\n");
    return EXIT_FAILURE;
}
```

Member Variables List

The Config object is used to set runtime configuration information, including the following parameters:

| Member Variable | Type | Default Value | Description |
| --- | --- | --- | --- |
| accelerate_type | enum AccelerateType | AccelerateType::TYPE_CPU | Type of acceleration hardware. |
| implement_type (Deprecated) | enum ImplementType | ImplementType::TYPE_DEFAULT | Distinguishes the underlying implementation method. |

💡 Note

Important: Starting from AidLite-SDK C++ version V2.0.4, ImplementType is deprecated.

| Member Variable | Type | Default Value | Description |
| --- | --- | --- | --- |
| framework_type | enum FrameworkType | FrameworkType::TYPE_DEFAULT | Type of underlying inference framework. |
| number_of_threads | int8_t | -1 | Number of threads; effective if greater than 0. |
| snpe_out_names | String array | Empty | List of model output node names (FAST implementation). |
| is_quantify_model | int32_t | 0 | Whether the model is quantized; 1 indicates a quantized model (FAST implementation). |
| fast_timeout | int32_t | -1 | Interface timeout in milliseconds; effective if greater than 0 (FAST implementation). |

```cpp
config->framework_type = FrameworkType::TYPE_SNPE;
config->accelerate_type = AccelerateType::TYPE_CPU;
config->is_quantify_model = 0;

config->snpe_out_names.push_back("InceptionV3/Softmax");
```

Context Class class Context

Used to store the runtime context involved in the execution process, including the Model object and Config object. Additional runtime data may be extended in the future.

Constructor Context()

API: Context
Description: Constructs a Context instance object.
Parameters:
  model: Pointer to the Model instance object.
  config: Pointer to the Config instance object.
Return Value: A Context instance object on normal construction.

💡 Note

Important: The memory space pointed to by the model and config pointers must be allocated on the heap. After successfully constructing the Context, their lifecycles will be managed by the Context (when the Context is released, it will automatically release the model and config objects). Therefore, developers should not manually release their memory, as this will cause errors due to double freeing.
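
A minimal sketch of constructing a Context from the heap-allocated model and config objects created in the earlier examples; allocating the Context itself with new is an assumption made here so that its pointer can be passed to Interpreter::create_instance() below:

```cpp
// Wrap the heap-allocated Model and Config objects in a Context.
// From this point on the Context owns them; do not delete model or config manually.
Context* context = new Context(model, config);
```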

Get Model Member Variable get_model()

API: get_model
Description: Retrieves the Model object pointer managed by the context.
Parameters: void
Return Value: Pointer to the Model object.

Get Config Member Variable get_config()

API: get_config
Description: Retrieves the Config object pointer managed by the context.
Parameters: void
Return Value: Pointer to the Config object.
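
For illustration, a short sketch that reads the managed objects back from the context constructed above; the returned pointers remain owned by the Context:

```cpp
// Retrieve the Model and Config objects managed by the context (do not delete them manually)
Model* managed_model = context->get_model();
Config* managed_config = context->get_config();
if(managed_model == nullptr || managed_config == nullptr){
    printf("Context does not hold valid model/config objects !\n");
}
```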

Interpreter Class class Interpreter

The Interpreter type object instance is the main entity for performing inference operations, used to execute the specific inference process. As mentioned in the previous sections, after creating the interpreter object, all operations are performed based on the interpreter object, making it the absolute core of the AidLite SDK.

Create Instance create_instance()

To perform inference-related operations, an inference interpreter is essential. This function is used to construct an instance of the inference interpreter.

API: create_instance
Description: Constructs an Interpreter type object using the data managed by the Context object.
Parameters:
  context: Pointer to the Context instance object.
Return Value: If nullptr, the function failed to construct the object; otherwise, it is a pointer to the Interpreter object.

```cpp
// Create an interpreter object using the Context object pointer; report an error if the return value is null
Interpreter* current_interpreter = Interpreter::create_instance(context);
if(current_interpreter == nullptr){
    printf("Create Interpreter failed !\n");
    return EXIT_FAILURE;
}
```

Initialize init()

After creating the interpreter object, some initialization operations (e.g., environment checks, resource construction) need to be performed.

API: init
Description: Performs the initialization operations required for inference.
Parameters: void
Return Value: 0 indicates successful initialization; non-zero indicates failure.

```cpp
// Initialize the interpreter; report an error if the return value is non-zero
int result = fast_interpreter->init();
if(result != EXIT_SUCCESS){
    printf("sample : interpreter->init() failed !\n");
    return EXIT_FAILURE;
}
```

Load Model load_model()

After the interpreter object completes initialization, the required model file can be loaded for the interpreter, completing the model loading process. Subsequent inference processes will use the loaded model resources.

API: load_model
Description: Completes model loading operations. Since the model file path is already set in the Model object, the model loading operation can be executed directly.
Parameters: void
Return Value: 0 indicates successful model loading; non-zero indicates failure.

```cpp
// Load the model for the interpreter; report an error if the return value is non-zero
int result = fast_interpreter->load_model();
if(result != EXIT_SUCCESS){
    printf("sample : interpreter->load_model() failed !\n");
    return EXIT_FAILURE;
}
```

Set Input Data set_input_tensor()

As mentioned in the flow introduction, before setting input data, different preprocessing operations are required for different models to adapt to the specific model.

API: set_input_tensor
Description: Sets the input tensor data required for inference.
Parameters:
  in_tensor_idx: Index value of the input tensor.
  input_data: Pointer to the storage address of the input tensor data.
Return Value: 0 indicates successful setting of the input tensor; non-zero indicates failure.

```cpp
// Set the input data for inference; report an error if the return value is non-zero
int result = fast_interpreter->set_input_tensor(0, input_tensor_data);
if(result != EXIT_SUCCESS){
    printf("interpreter->set_input_tensor() failed !\n");
    return EXIT_FAILURE;
}
```

Execute Inference invoke()

As mentioned in the flow introduction, after setting the input data, the next step is to execute the inference process on the input data.

API: invoke
Description: Executes the inference computation process.
Parameters: void
Return Value: 0 indicates successful inference; non-zero indicates failure.

```cpp
// Execute the inference operation; report an error if the return value is non-zero
int result = fast_interpreter->invoke();
if(result != EXIT_SUCCESS){
    printf("sample : interpreter->invoke() failed !\n");
    return EXIT_FAILURE;
}
```

Get Output Data get_output_tensor()

After inference is complete, the resulting data needs to be retrieved. As mentioned in the flow introduction, after retrieving the result data, it can be processed to determine if the results are correct.

API: get_output_tensor
Description: Retrieves the inference result data after successful inference.
Parameters:
  out_tensor_idx: Index value of the output tensor.
  output_data: Address of a pointer variable; the function points this pointer at the output tensor data.
  output_length: Data size (in bytes) of the output tensor.
Return Value: 0 indicates successful retrieval of the output tensor; non-zero indicates failure.

```cpp
// Retrieve the inference result data; report an error if the return value is non-zero
float* out_data = nullptr;
uint32_t output_tensor_length = 0;
int result = fast_interpreter->get_output_tensor(0,
    (void**)&out_data, &output_tensor_length);
if(result != EXIT_SUCCESS){
    printf("interpreter->get_output_tensor() failed !\n");
    return EXIT_FAILURE;
}
```

Resource Release destroy()

As mentioned earlier, the interpreter object requires initialization (init) and model loading operations. Correspondingly, the interpreter also needs to perform release operations to destroy previously created resources.

API: destroy
Description: Performs necessary release operations.
Parameters: void
Return Value: 0 indicates successful release; non-zero indicates failure.

```cpp
// Execute the interpreter release process; report an error if the return value is non-zero
int result = fast_interpreter->destroy();
if(result != EXIT_SUCCESS){
    printf("sample : interpreter->destroy() failed !\n");
    return EXIT_FAILURE;
}
```

Interpreter Builder Class class InterpreterBuilder

A unified factory for Interpreter objects; the required interpreter objects are created through this class.

Build Interpreter build_interpreter_from_path()

Builds an inference interpreter object, allowing different parameters to be provided. The simplest approach is to provide only the model file’s path and name. If the model framework (e.g., SNPE, TFLite) requires only one model file, this interface is used.

API: build_interpreter_from_path
Description: Directly creates the corresponding interpreter object using the model file’s path and name.
Parameters:
  path: Path and name of the model file.
Return Value: Returns a unique_ptr pointer to the Interpreter type. If nullptr, the function failed to construct the object; otherwise, it is a pointer to the Interpreter object.
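
A hedged sketch of the single-file overload, reusing the inceptionv3_float32.dlc file name from the earlier Model example; all Model and Config related parameters are left at their defaults:

```cpp
// Build an interpreter directly from a single model file; report an error if the return value is null
std::unique_ptr<Interpreter> dlc_interpreter =
    InterpreterBuilder::build_interpreter_from_path("./inceptionv3_float32.dlc");
if(dlc_interpreter == nullptr){
    printf("build_interpreter_from_path failed !\n");
    return EXIT_FAILURE;
}
```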

Build Interpreter build_interpreter_from_path()

Builds an inference interpreter object by providing only the model files’ paths and names. If the model framework (e.g., NCNN, TNN) involves two model files, this interface is used.

API: build_interpreter_from_path
Description: Directly creates the corresponding interpreter object using the model files’ paths and names.
Parameters:
  model_struct_path: Path and name of the model structure file.
  model_weight_path: Path and name of the model weight parameters file.
Return Value: Returns a unique_ptr pointer to the Interpreter type. If nullptr, the function failed to construct the object; otherwise, it is a pointer to the Interpreter object.
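
A similar sketch for the two-file overload, reusing the NCNN file names from the earlier Model example:

```cpp
// Build an interpreter from a structure file and a weight file; report an error if the return value is null
std::unique_ptr<Interpreter> ncnn_interpreter =
    InterpreterBuilder::build_interpreter_from_path("./mobilenet_ssd_voc_ncnn.param",
        "./mobilenet_ssd_voc_ncnn.bin");
if(ncnn_interpreter == nullptr){
    printf("build_interpreter_from_path failed !\n");
    return EXIT_FAILURE;
}
```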

Build Interpreter build_interpreter_from_model()

Builds an inference interpreter object. In addition to providing the model file path, a Model object can also be provided, allowing not only the model file path but also the input and output data types and shape information to be set.

API: build_interpreter_from_model
Description: Creates the corresponding interpreter object by passing a Model object. All Config-related parameters use default values.
Parameters:
  model: Model type object containing model-related data.
Return Value: Returns a unique_ptr pointer to the Interpreter type. If nullptr, the function failed to construct the object; otherwise, it is a pointer to the Interpreter object.

💡 Note

Important: The memory space pointed to by the model pointer must be allocated on the heap. After successfully constructing the Interpreter, its lifecycle will be managed by the Interpreter (when the Interpreter is released, it will automatically release the model object). Therefore, developers should not manually release its memory, as this will cause errors due to double freeing.
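
For illustration, a sketch assuming the heap-allocated model object configured with set_model_properties() in the earlier example; the interpreter takes ownership of it, and all Config-related parameters fall back to their defaults:

```cpp
// Build an interpreter from a Model object only; report an error if the return value is null
std::unique_ptr<Interpreter> model_interpreter =
    InterpreterBuilder::build_interpreter_from_model(model);
if(model_interpreter == nullptr){
    printf("build_interpreter_from_model failed !\n");
    return EXIT_FAILURE;
}
```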

Build Interpreter build_interpreter_from_model_and_config()

Builds an inference interpreter object. In addition to the methods mentioned above, both a Model object and a Config object can be provided, allowing not only model-related information but also additional runtime configuration parameters to be specified.

API: build_interpreter_from_model_and_config
Description: Creates the corresponding interpreter object by passing Model and Config objects.
Parameters:
  model: Model type object containing model-related data.
  config: Config type object containing configuration parameters.
Return Value: Returns a unique_ptr pointer to the Interpreter type. If nullptr, the function failed to construct the object; otherwise, it is a pointer to the Interpreter object.

💡 Note

Important: The memory space pointed to by the model and config pointers must be allocated on the heap. After successfully constructing the Interpreter, their lifecycles will be managed by the Interpreter (when the Interpreter is released, it will automatically release the model and config objects). Therefore, developers should not manually release their memory, as this will cause errors due to double freeing.

```cpp
// Build an interpreter by passing Model and Config objects; report an error if the return value is null
std::unique_ptr<Interpreter> fast_interpreter =
    InterpreterBuilder::build_interpreter_from_model_and_config(model, config);
if(fast_interpreter == nullptr){
    printf("build_interpreter failed !\n");
    return EXIT_FAILURE;
}
```

Other Methods

In addition to the inference-related interfaces described above, the AidLite-SDK also provides the following auxiliary interfaces.

Get SDK Version Information get_library_version()

API: get_library_version
Description: Retrieves version-related information for the current AidLite-SDK.
Parameters: void
Return Value: Version information string for the current AidLite-SDK.
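
A minimal sketch, assuming the interface can be called as shown after including aidlite.hpp (adjust the namespace qualification to match the actual declaration) and that the version information is returned as a std::string:

```cpp
// Print the AidLite-SDK version information
// (the std::string return type and the unqualified call are assumptions)
std::string version_msg = get_library_version();
printf("AidLite-SDK version : %s\n", version_msg.c_str());
```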

Set Log Level set_log_level()

API: set_log_level
Description: Sets the minimum log level, outputting log data greater than or equal to this level. By default, logs at WARNING and above are printed.
Parameters:
  log_level: Value of the LogLevel enumeration type.
Return Value: Default return value is 0.

Log Output to Standard Terminal log_to_stderr()

API: log_to_stderr
Description: Sets log information to be output to the standard error terminal.
Parameters: void
Return Value: Default return value is 0.

Log Output to Text File log_to_file()

API: log_to_file
Description: Sets log information to be output to a specified text file.
Parameters:
  path_and_prefix: Path and prefix name for the log file.
  also_to_stderr: Indicates whether to also output logs to the stderr terminal; default is false.
Return Value: 0 indicates successful execution; non-zero indicates failure.

Get Latest Log Information last_log_msg()

API: last_log_msg
Description: Retrieves the latest log information for a specific log level, typically the latest error information.
Parameters:
  log_level: Value of the LogLevel enumeration type.
Return Value: Pointer to the storage address of the latest log information.
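
The sketch below strings the logging interfaces together. It assumes they can be called as shown after including aidlite.hpp (adjust the namespace qualification to match the actual declarations); the int and const char* return types and the ./aidlite_log prefix are illustrative assumptions:

```cpp
// Print INFO-level logs and above instead of the default WARNING threshold
set_log_level(LogLevel::INFO);

// Send log output to the standard error terminal
log_to_stderr();

// Additionally write logs to files prefixed with ./aidlite_log, while still printing to stderr
int ret = log_to_file("./aidlite_log", true);
if(ret != 0){
    printf("log_to_file failed !\n");
}

// Fetch the most recent ERROR-level message, e.g. after an interface call fails
const char* last_error = last_log_msg(LogLevel::ERROR);
if(last_error != nullptr){
    printf("last error log : %s\n", last_error);
}
```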