AidLite Python API

Model Data Type class DataType

The AidLite SDK processes models from various frameworks, each with its own input and output data types. In the usage flow described earlier, this type is used when setting a model's input and output data types.

| Member Variable | Type | Value | Description |
| --- | --- | --- | --- |
| TYPE_DEFAULT | int | 0 | Invalid DataType |
| TYPE_UINT8 | int | 1 | Unsigned byte data |
| TYPE_INT8 | int | 2 | Byte data |
| TYPE_UINT32 | int | 3 | Unsigned int32 data |
| TYPE_FLOAT32 | int | 4 | Float data |
| TYPE_INT32 | int | 5 | Int32 data |
| TYPE_INT64 | int | 6 | Int64 data |
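
As a minimal illustration (the full usage appears with set_model_properties() below), a data type is referenced directly from the enumeration:

```python
import aidlite

# Data type describing a float32 model's input and output tensors;
# a uint8-quantized model would use TYPE_UINT8 instead
tensor_type = aidlite.DataType.TYPE_FLOAT32
```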

Inference Implementation Type class ImplementType (Deprecated)

💡 Note

Important: Starting from V2.0.7, this type is deprecated.

| Member Variable | Type | Value | Description |
| --- | --- | --- | --- |
| TYPE_DEFAULT | int | 0 | Invalid ImplementType |
| TYPE_MMKV | int | 1 | Implemented via MMKV |
| TYPE_FAST | int | 2 | Implemented via IPC backend |
| TYPE_LOCAL | int | 3 | Implemented via local backend |

Model Framework Type class FrameworkType

As mentioned earlier, the AidLite SDK integrates multiple deep learning inference frameworks, so the usage flow needs to specify, via this type, which framework the model being used belongs to.

| Member Variable | Type | Value | Description |
| --- | --- | --- | --- |
| TYPE_DEFAULT | int | 0 | Invalid FrameworkType |
| TYPE_SNPE | int | 1 | SNPE 1.x (DLC) model type |
| TYPE_TFLite | int | 2 | TFLite model type |
| TYPE_RKNN | int | 3 | RKNN model type |
| TYPE_QNN | int | 4 | QNN model type |
| TYPE_SNPE2 | int | 5 | SNPE 2.x (DLC) model type |
| TYPE_NCNN | int | 6 | NCNN model type |
| TYPE_MNN | int | 7 | MNN model type |
| TYPE_TNN | int | 8 | TNN model type |
| TYPE_PADDLE | int | 9 | Paddle model type |

Inference Acceleration Hardware Type class AccelerateType

Each deep learning inference framework may support running on different acceleration hardware units (e.g., SNPE models running on Qualcomm DSP units, RKNN models running on Rockchip NPU units). Thus, in the usage flow, this type is used to specify which computing unit the model is expected to run on.

| Member Variable | Type | Value | Description |
| --- | --- | --- | --- |
| TYPE_DEFAULT | int | 0 | Invalid AccelerateType |
| TYPE_CPU | int | 1 | CPU general acceleration unit |
| TYPE_GPU | int | 2 | GPU general acceleration unit |
| TYPE_DSP | int | 3 | Qualcomm DSP acceleration unit |
| TYPE_NPU | int | 4 | NPU general acceleration unit |

Log Level class LogLevel

The AidLite SDK provides an interface for configuring logging (introduced later), which requires specifying the log level to use; that level is expressed with this type.

| Member Variable | Type | Value | Description |
| --- | --- | --- | --- |
| INFO | int | 0 | Information |
| WARNING | int | 1 | Warning |
| ERROR | int | 2 | Error |
| FATAL | int | 3 | Fatal error |

Model Class class Model

As mentioned earlier, before creating an inference interpreter, detailed parameters related to the specific model need to be set. The Model class is primarily used to record the model’s file information, structural information, and related content during model execution.

Create Model Instance create_instance()

To set detailed model information, a model instance object is required first. This function is used to create a Model instance object.

| API | create_instance |
| --- | --- |
| Description | Constructs a Model instance object from the model file's path and name. |
| Parameters | `model_path`: Path and name of the model file. |
| Return Value | Normal: Model instance object. Exception: None. |

```python
# Create a model object from the inceptionv3.dlc file in the current path; report an error if the return value is None
model = aidlite.Model.create_instance(model_path=r"./inceptionv3.dlc")
if model is None:
    print("Create model failed !")
    return False
```

Set Model Properties set_model_properties()

After successfully creating the model instance object, it is necessary to set the input and output data types and the shape information of the input and output tensors.

| API | set_model_properties |
| --- | --- |
| Description | Sets the model's properties, including input and output data shapes and data types. |
| Parameters | `input_shapes`: Array of input tensor shapes, as a 2D array.<br>`input_data_type`: Input tensor data type, a DataType enumeration value.<br>`output_shapes`: Array of output tensor shapes, as a 2D array.<br>`output_data_type`: Output tensor data type, a DataType enumeration value. |
| Return Value | None. |
```python
# Use the previously defined DataType; input and output shapes are 2D arrays
input_shapes = [[1, 640, 640, 3]]
output_shapes = [[1, 10, 10, 255], [1, 20, 20, 255], [1, 40, 40, 255]]
model.set_model_properties(input_shapes=input_shapes, input_data_type=aidlite.DataType.TYPE_FLOAT32,
                           output_shapes=output_shapes, output_data_type=aidlite.DataType.TYPE_FLOAT32)
```

Get Model Path get_model_absolute_path()

| API | get_model_absolute_path |
| --- | --- |
| Description | Retrieves the path of the model file. |
| Parameters | None |
| Return Value | The model file path string corresponding to the Model object. |

```python
model_path = model.get_model_absolute_path()
```

Get Model Type get_model_type()

| API | get_model_type |
| --- | --- |
| Description | Retrieves the model type identifier, such as DLC, RKNN, etc. |
| Parameters | None |
| Return Value | The model type string corresponding to the Model object. |

```python
model_type = model.get_model_type()
```

Configuration Class class Config

As mentioned earlier, before creating an inference interpreter, in addition to setting specific Model information, some configuration information for inference is also required. The Config class is used to record configuration options that need to be preset and will be used during runtime.

Create Config Instance create_instance()

To set runtime configuration information, a configuration instance object is required first. This function is used to create a Config instance object.

| API | create_instance |
| --- | --- |
| Description | Constructs a Config instance object. |
| Parameters | `snpe_out_names`: List of model output node names, for FAST (optional).<br>`number_of_threads`: Number of threads; effective if greater than 0 (optional).<br>`is_quantify_model`: Whether the model is quantized; 1 means quantized, for FAST (optional).<br>`fast_timeout`: Interface timeout in milliseconds; effective if greater than 0, for FAST (optional).<br>`accelerate_type`: Type of acceleration hardware (optional).<br>`framework_type`: Type of underlying deep learning framework (optional). |
| Return Value | Normal: Config instance object. Exception: None. |

```python
# Create a config instance object; report an error if the return value is None
config = aidlite.Config.create_instance()
if config is None:
    print("Create config failed !")
    return False
```

Member Variables List

The Config object is used to set runtime configuration information, including the following parameters:

| Member Variable | accelerate_type |
| --- | --- |
| Type | class AccelerateType |
| Default Value | AccelerateType.TYPE_CPU |
| Description | Type of acceleration hardware. |

💡 Note

Important: Starting from V2.0.7, this type is deprecated.

| Member Variable | implement_type (Deprecated) |
| --- | --- |
| Type | class ImplementType |
| Default Value | ImplementType.TYPE_LOCAL |
| Description | Distinguishes the underlying implementation method. |

| Member Variable | framework_type |
| --- | --- |
| Type | class FrameworkType |
| Default Value | FrameworkType.TYPE_DEFAULT |
| Description | Type of underlying inference framework. |

| Member Variable | number_of_threads |
| --- | --- |
| Type | int |
| Default Value | -1 |
| Description | Number of threads; effective if greater than 0. |

| Member Variable | SNPE_out_names |
| --- | --- |
| Type | list |
| Default Value | None |
| Description | List of model output node names, for FAST. |

| Member Variable | is_quantify_model |
| --- | --- |
| Type | int |
| Default Value | 0 |
| Description | Whether the model is quantized; 1 means quantized, for FAST. |

| Member Variable | fast_timeout |
| --- | --- |
| Type | int |
| Default Value | -1 |
| Description | Interface timeout in milliseconds; effective if greater than 0, for FAST. |

```python
config.framework_type = aidlite.FrameworkType.TYPE_SNPE
config.accelerate_type = aidlite.AccelerateType.TYPE_DSP
config.is_quantify_model = 1
config.SNPE_out_names = ["InceptionV3/Softmax"]
config.fast_timeout = 1000
```

Context Class class Context

Stores the runtime context used during execution, including the Model object and Config object; it may be extended with additional runtime data in the future.

Constructor Context()

| API | Context |
| --- | --- |
| Description | Constructs a Context instance object. |
| Parameters | `model`: Model instance object.<br>`config`: Config instance object. |
| Return Value | Normal: Context instance object. Exception: None. |

```python
context = aidlite.Context(model=model, config=config)
```

Get Model Member Variable get_model()

| API | get_model |
| --- | --- |
| Description | Retrieves the Model object managed by the context. |
| Parameters | None |
| Return Value | Model object. |

```python
model = context.get_model()
```

Get Config Member Variable get_config()

| API | get_config |
| --- | --- |
| Description | Retrieves the Config object managed by the context. |
| Parameters | None |
| Return Value | Config object. |

```python
config = context.get_config()
```

Interpreter Class class Interpreter

The Interpreter type object instance is the main entity for performing inference operations, used to execute the specific inference process. As mentioned in the inference flow, after creating the interpreter object, all operations are performed based on the interpreter object, making it the absolute core of the AidLite SDK.

Create Interpreter Instance create_instance()

To perform inference-related operations, an inference interpreter is essential. This function is used to construct an instance of the inference interpreter.

| API | create_instance |
| --- | --- |
| Description | Constructs an Interpreter instance object; the Context data is supplied later via init(). |
| Parameters | None |
| Return Value | Normal: Interpreter instance object. Exception: None. |

```python
# Create an interpreter object; report an error if the return value is None
interpreter = aidlite.Interpreter.create_instance()
if interpreter is None:
    print("Create Interpreter failed !")
    return False
```

Initialize init()

After creating the interpreter object, some initialization operations (e.g., environment checks, resource construction) need to be performed.

| API | init |
| --- | --- |
| Description | Performs the interpreter's initialization operations using the data managed by the Context object. |
| Parameters | `context`: Context object instance, which manages the Model and Config objects containing model data, configuration data, etc. |
| Return Value | Normal: 0. Exception: Non-zero. |

```python
# Initialize the interpreter; report an error if the return value is non-zero
result = interpreter.init(context)
if result != 0:
    print("sample : interpreter->init() failed !")
    return False
```

Load Model load_model()

After the interpreter object completes initialization, the required model file can be loaded for the interpreter, completing the model loading process. Subsequent inference processes will use the loaded model resources.

| API | load_model |
| --- | --- |
| Description | Completes model loading operations. Since the model file path is already set in the Model object, the model loading operation can be executed directly. |
| Parameters | None |
| Return Value | Normal: 0. Exception: Non-zero. |

```python
# Load the model for the interpreter; report an error if the return value is non-zero
result = interpreter.load_model()
if result != 0:
    print("sample : interpreter->load_model() failed !")
    return False
```

Set Input Data set_input_tensor()

As mentioned in the flow introduction, before setting input data, different preprocessing operations are required for different models to adapt to the specific model.

| API | set_input_tensor |
| --- | --- |
| Description | Sets the input data for inference. |
| Parameters | `in_tensor_idx`: Index value of the input tensor, type int.<br>`input_data`: Binary data of the input tensor; the type depends on the actual case. |
| Return Value | Normal: 0. Exception: Non-zero. |

```python
# Set the input data for inference (obj holds the preprocessed input data); report an error if the return value is non-zero
result = interpreter.set_input_tensor(in_tensor_idx=1, input_data=obj)
if result != 0:
    print("interpreter->set_input_tensor() failed !")
    return False
```

Execute Inference invoke()

As mentioned in the flow introduction, after setting the input data, the next step is to execute the inference process on the input data.

| API | invoke |
| --- | --- |
| Description | Executes the inference computation process. |
| Parameters | None |
| Return Value | Normal: 0. Exception: Non-zero. |

```python
# Execute the inference operation; report an error if the return value is non-zero
result = interpreter.invoke()
if result != 0:
    print("sample : interpreter->invoke() failed !")
    return False
```

Get Output Data get_output_tensor()

After inference is complete, the resulting data needs to be retrieved. As mentioned in the flow introduction, after retrieving the result data, it can be processed to determine if the results are correct.

| API | get_output_tensor |
| --- | --- |
| Description | Retrieves the inference result data after successful inference. |
| Parameters | `out_tensor_idx`: Index value of the output tensor.<br>`output_type`: Output data type; optional, defaults to aidlite.DataType.TYPE_FLOAT32. |
| Return Value | Normal: result data of the requested type (float data by default). Exception: None. |

```python
# Retrieve the inference result data; report an error if the return value is None
out_data = interpreter.get_output_tensor(out_tensor_idx=1, output_type=aidlite.DataType.TYPE_INT32)
if out_data is None:
    print("interpreter->get_output_tensor() failed !")
    return False
```

Resource Release destroy()

As mentioned earlier, the interpreter object requires initialization (init) and model loading operations. Correspondingly, the interpreter also needs to perform release operations to destroy previously created resources.

| API | destroy |
| --- | --- |
| Description | Performs necessary release operations. |
| Parameters | None |
| Return Value | Normal: 0. Exception: Non-zero. |

```python
# Execute the interpreter release process; report an error if the return value is non-zero
result = interpreter.destroy()
if result != 0:
    print("sample : interpreter->destroy() failed !")
    return False
```
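
To tie the above together, here is a minimal end-to-end sketch that chains the documented calls into a single inference pass. It is an illustration, not a verbatim sample: the model file name, tensor shapes, zero-filled input, and tensor indices are placeholders, and init() is passed the Context as described above.

```python
import numpy as np
import aidlite

def run_inference():
    # 1. Describe the model file and its tensor layout (file name and shapes are placeholders)
    model = aidlite.Model.create_instance(model_path=r"./inceptionv3.dlc")
    if model is None:
        print("Create model failed !")
        return False
    model.set_model_properties(input_shapes=[[1, 299, 299, 3]],
                               input_data_type=aidlite.DataType.TYPE_FLOAT32,
                               output_shapes=[[1, 1001]],
                               output_data_type=aidlite.DataType.TYPE_FLOAT32)

    # 2. Choose the inference framework and acceleration hardware
    config = aidlite.Config.create_instance()
    if config is None:
        print("Create config failed !")
        return False
    config.framework_type = aidlite.FrameworkType.TYPE_SNPE
    config.accelerate_type = aidlite.AccelerateType.TYPE_DSP

    # 3. Create the interpreter, initialize it with the context, and load the model
    context = aidlite.Context(model=model, config=config)
    interpreter = aidlite.Interpreter.create_instance()
    if interpreter is None:
        print("Create Interpreter failed !")
        return False
    if interpreter.init(context) != 0:
        print("interpreter->init() failed !")
        return False
    if interpreter.load_model() != 0:
        print("interpreter->load_model() failed !")
        return False

    # 4. Set the (placeholder) input, run inference, and fetch the float32 output
    input_data = np.zeros((1, 299, 299, 3), dtype=np.float32)
    if interpreter.set_input_tensor(in_tensor_idx=0, input_data=input_data) != 0:
        print("interpreter->set_input_tensor() failed !")
        return False
    if interpreter.invoke() != 0:
        print("interpreter->invoke() failed !")
        return False
    out_data = interpreter.get_output_tensor(out_tensor_idx=0)
    if out_data is None:
        print("interpreter->get_output_tensor() failed !")
        return False

    # 5. Release the interpreter's resources
    interpreter.destroy()
    return True
```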

Interpreter Builder Class class InterpreterBuilder

Provides unified factory methods for Interpreter objects; the required interpreter objects are created through this class.

Build Interpreter build_interpreter_from_path()

Builds an inference interpreter object, allowing different parameters to be provided. The simplest approach is to provide only the model file’s path and name.

| API | build_interpreter_from_path |
| --- | --- |
| Description | Creates the corresponding interpreter object directly from the model file's path and name. All related parameters use default values. |
| Parameters | `path`: Path and name of the model file. |
| Return Value | Normal: Interpreter object instance. Exception: None. |

```python
# Build an interpreter by passing the model file path; report an error if the return value is None
interpreter = aidlite.InterpreterBuilder.build_interpreter_from_path(path=r"./640.dlc")
if interpreter is None:
    print("Create Interpreter failed !")
    return False
```

Build Interpreter build_interpreter_from_model()

Builds an inference interpreter object. Instead of only a model file path, a Model object can be provided, which carries not only the model file path but also the input and output data types and shape information.

| API | build_interpreter_from_model |
| --- | --- |
| Description | Creates the corresponding interpreter object by passing a Model object. All Config-related parameters use default values. |
| Parameters | `model`: Model type object containing model-related data. |
| Return Value | Normal: Interpreter object instance. Exception: None. |

```python
# Build an interpreter by passing a Model object; report an error if the return value is None
interpreter = aidlite.InterpreterBuilder.build_interpreter_from_model(model=model)
if interpreter is None:
    print("Create Interpreter failed !")
    return False
```

Build Interpreter build_interpreter_from_model_and_config()

Builds an inference interpreter object. In addition to the methods mentioned above, both a Model object and a Config object can be provided, allowing not only model-related information but also additional runtime configuration parameters to be specified.

| API | build_interpreter_from_model_and_config |
| --- | --- |
| Description | Creates the corresponding interpreter object by passing Model and Config objects. |
| Parameters | `model`: Model type object containing model-related data.<br>`config`: Config type object containing configuration parameters. |
| Return Value | Normal: Interpreter object instance. Exception: None. |

```python
# Build an interpreter by passing Model and Config objects; report an error if the return value is None
interpreter = aidlite.InterpreterBuilder.build_interpreter_from_model_and_config(model=model, config=config)
if interpreter is None:
    print("Create Interpreter failed !")
    return False
```

Other Methods

Get SDK Version Information get_library_version()

| API | get_library_version |
| --- | --- |
| Description | Retrieves version-related information for the current AidLite-SDK. |
| Parameters | None |
| Return Value | Version information string for the current AidLite-SDK. |
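
A minimal usage sketch, assuming the function is exposed at the aidlite module level:

```python
# Query and print the AidLite-SDK version string
version = aidlite.get_library_version()
print(f"AidLite-SDK version : {version}")
```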

Get Python SDK Version Information get_py_library_version()

| API | get_py_library_version |
| --- | --- |
| Description | Retrieves version-related information for the current Py-Aidlite-SDK. |
| Parameters | None |
| Return Value | Version information string for the current Py-Aidlite-SDK. |
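
Usage follows the same pattern (again assuming a module-level function):

```python
# Query and print the Py-Aidlite-SDK version string
py_version = aidlite.get_py_library_version()
print(f"Py-Aidlite-SDK version : {py_version}")
```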

Set Log Level set_log_level()

| API | set_log_level |
| --- | --- |
| Description | Sets the minimum log level; log data at or above this level is output. By default, logs at WARNING and above are printed. |
| Parameters | `log_level`: Value of type LogLevel. |
| Return Value | Default return value is 0. |
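
For example, to also print INFO logs (a sketch assuming a module-level function):

```python
# Lower the threshold so logs at INFO level and above are printed
result = aidlite.set_log_level(log_level=aidlite.LogLevel.INFO)
```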

Log Output to Standard Terminal log_to_stderr()

| API | log_to_stderr |
| --- | --- |
| Description | Sets log information to be output to the standard error terminal. |
| Parameters | None |
| Return Value | Default return value is 0. |
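
A minimal sketch, assuming a module-level function:

```python
# Route subsequent log output to the standard error terminal
result = aidlite.log_to_stderr()
```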

Log Output to Text File log_to_file()

| API | log_to_file |
| --- | --- |
| Description | Sets log information to be output to a specified text file. |
| Parameters | `path_and_prefix`: Path and prefix name for the log file.<br>`also_to_stderr`: Whether to also output logs to the stderr terminal; default is False. |
| Return Value | Normal: 0. Exception: Non-zero. |
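
A minimal sketch, assuming a module-level function; the path and prefix are placeholders:

```python
# Write log files under the current directory with the prefix "aidlite_",
# and also echo the logs to stderr; report an error if the return value is non-zero
result = aidlite.log_to_file(path_and_prefix=r"./aidlite_", also_to_stderr=True)
if result != 0:
    print("aidlite.log_to_file() failed !")
```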

Get Latest Log Information last_log_msg()

| API | last_log_msg |
| --- | --- |
| Description | Retrieves the latest log information for a specific log level, typically the latest error information. |
| Parameters | `log_level`: Value of type LogLevel. |
| Return Value | Latest log information. |
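
For example, to fetch the most recent error message (a sketch assuming a module-level function):

```python
# Retrieve the most recent ERROR-level log message
err_msg = aidlite.last_log_msg(log_level=aidlite.LogLevel.ERROR)
print(err_msg)
```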