AidLite Python API
Model Data Type class DataType
The AidLite SDK processes models from different frameworks, each with its own input and output data types. In the usage flow described earlier, this type is used when setting the model’s input and output data types.
Member Variable | Type | Value | Description |
---|---|---|---|
TYPE_DEFAULT | int | 0 | Invalid DataType |
TYPE_UINT8 | int | 1 | Unsigned byte data |
TYPE_INT8 | int | 2 | Byte data |
TYPE_UINT32 | int | 3 | Unsigned int32 data |
TYPE_FLOAT32 | int | 4 | Float data |
TYPE_INT32 | int | 5 | Int32 data |
TYPE_INT64 | int | 6 | Int64 data |
Inference Implementation Type class ImplementType (Deprecated)
💡 Note
Important: Starting from V2.0.7, this type is deprecated.
Member Variable | Type | Value | Description |
---|---|---|---|
TYPE_DEFAULT | int | 0 | Invalid ImplementType |
TYPE_MMKV | int | 1 | Implemented via MMKV |
TYPE_FAST | int | 2 | Implemented via IPC backend |
TYPE_LOCAL | int | 3 | Implemented via local backend |
Model Framework Type class FrameworkType
As mentioned earlier, the AidLite SDK integrates multiple deep learning inference frameworks, so the usage flow must specify which framework’s model is being used; this type identifies that framework.
Member Variable | Type | Value | Description |
---|---|---|---|
TYPE_DEFAULT | int | 0 | Invalid FrameworkType |
TYPE_SNPE | int | 1 | SNPE 1.x (DLC) model type |
TYPE_TFLite | int | 2 | TFLite model type |
TYPE_RKNN | int | 3 | RKNN model type |
TYPE_QNN | int | 4 | QNN model type |
TYPE_SNPE2 | int | 5 | SNPE 2.x (DLC) model type |
TYPE_NCNN | int | 6 | NCNN model type |
TYPE_MNN | int | 7 | MNN model type |
TYPE_TNN | int | 8 | TNN model type |
TYPE_PADDLE | int | 9 | Paddle model type |
Inference Acceleration Hardware Type class AccelerateType
Each deep learning inference framework may support running on different acceleration hardware units (e.g., SNPE models on Qualcomm DSP units, RKNN models on Rockchip NPU units). The usage flow therefore must specify which computing unit the model is expected to run on; this type identifies that acceleration hardware.
Member Variable | Type | Value | Description |
---|---|---|---|
TYPE_DEFAULT | int | 0 | Invalid AccelerateType |
TYPE_CPU | int | 1 | CPU general acceleration unit |
TYPE_GPU | int | 2 | GPU general acceleration unit |
TYPE_DSP | int | 3 | Qualcomm DSP acceleration unit |
TYPE_NPU | int | 4 | NPU general acceleration unit |
Log Level class LogLevel
The AidLite SDK provides an interface for configuring logging (introduced later), which requires specifying the log level to use; this type enumerates those levels.
Member Variable | Type | Value | Description |
---|---|---|---|
INFO | int | 0 | Information |
WARNING | int | 1 | Warning |
ERROR | int | 2 | Error |
FATAL | int | 3 | Fatal error |
Model Class class Model
As mentioned earlier, before creating an inference interpreter, detailed parameters related to the specific model need to be set. The Model class is primarily used to record the model’s file information, structural information, and related content during model execution.
Create Model Instance create_instance()
To set detailed model information, a model instance object is required first. This function is used to create a Model instance object.
API | create_instance |
---|---|
Description | Constructs a Model instance object by passing the model file’s path and name. |
Parameters | model_path : Path and name of the model file. |
Return Value | Normal: Model instance object. |
| | Exception: None. |
# Create a model object using the inceptionv3.dlc file in the current path; report an error if the return value is None
model = aidlite.Model.create_instance(model_path=r"./inceptionv3.dlc")
if model is None:
print("Create model failed !")
return False
Set Model Properties set_model_properties()
After successfully creating the model instance object, it is necessary to set the input and output data types and the shape information of the input and output tensors.
API | set_model_properties |
---|---|
Description | Sets the model’s properties, including input and output data shapes and data types. |
Parameters | input_shapes : Array of input tensor shapes, in a 2D array structure. |
| | input_data_type : Input tensor data type, an enumeration of DataType. |
| | output_shapes : Array of output tensor shapes, in a 2D array structure. |
| | output_data_type : Output tensor data type, an enumeration of DataType. |
Return Value | None. |
# Use the previously defined DataType; input and output shapes are 2D arrays
input_shapes=[[1,640,640,3]]
output_shapes=[[1,10,10,255], [1,20,20,255], [1,40,40,255]]
model.set_model_properties(input_shapes=input_shapes,
                           input_data_type=aidlite.DataType.TYPE_FLOAT32,
                           output_shapes=output_shapes,
                           output_data_type=aidlite.DataType.TYPE_FLOAT32)
Get Model Path get_model_absolute_path()
API | get_model_absolute_path |
---|---|
Description | Retrieves the absolute path of the model file. |
Parameters | None |
Return Value | The model file path string corresponding to the Model object. |
model_path = model.get_model_absolute_path()
Get Model Type get_model_type()
API | get_model_type |
---|---|
Description | Retrieves the model type identifier, such as DLC, RKNN, etc. |
Parameters | None |
Return Value | The model type string (e.g., DLC, RKNN) corresponding to the Model object. |
model_type = model.get_model_type()
Configuration Class class Config
As mentioned earlier, before creating an inference interpreter, in addition to setting specific Model information, some configuration information for inference is also required. The Config class is used to record configuration options that need to be preset and will be used during runtime.
Create Config Instance create_instance()
To set runtime configuration information, a configuration instance object is required first. This function is used to create a Config instance object.
API | create_instance |
---|---|
Description | Constructs a Config class instance object. |
Parameters | snpe_out_names : List of model output node names, for FAST (optional). |
| | number_of_threads : Number of threads; effective if greater than 0 (optional). |
| | is_quantify_model : Whether the model is quantized; set 1 for a quantized model, for FAST (optional). |
| | fast_timeout : Interface timeout in milliseconds; effective if greater than 0, for FAST (optional). |
| | accelerate_type : Type of acceleration hardware (optional). |
| | framework_type : Type of underlying deep learning framework (optional). |
Return Value | Normal: Config instance object. |
| | Exception: None. |
# Create a config instance object; report an error if the return value is None
config = aidlite.Config.create_instance()
if config is None:
print("Create config failed !")
return False
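Alternatively, the options can be supplied at creation time instead of being assigned afterwards. A minimal sketch, assuming the optional parameters listed above are accepted as keyword arguments:
# Pass configuration options directly when creating the instance
config = aidlite.Config.create_instance(framework_type=aidlite.FrameworkType.TYPE_SNPE,
                                        accelerate_type=aidlite.AccelerateType.TYPE_CPU,
                                        number_of_threads=4)
if config is None:
    print("Create config failed !")
    return False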
Member Variables List
The Config object is used to set runtime configuration information, including the following parameters:
Member Variable | accelerate_type |
---|---|
Type | class AccelerateType |
Default Value | AccelerateType.TYPE_CPU |
Description | Type of acceleration hardware. |
💡 Note
Important: Starting from V2.0.7, implement_type is deprecated.
Member Variable | implement_type (Deprecated) |
---|---|
Type | class ImplementType |
Default Value | ImplementType.TYPE_LOCAL |
Description | Distinguishes the underlying implementation method. |
Member Variable | framework_type |
---|---|
Type | class FrameworkType |
Default Value | FrameworkType.TYPE_DEFAULT |
Description | Type of underlying inference framework. |
Member Variable | number_of_threads |
---|---|
Type | int |
Default Value | -1 |
Description | Number of threads, effective if greater than 0. |
Member Variable | SNPE_out_names |
---|---|
Type | list |
Default Value | None |
Description | List of model output node names for FAST. |
Member Variable | is_quantify_model |
---|---|
Type | int |
Default Value | 0 |
Description | Whether it is a quantized model, 1 for quantized model for FAST. |
Member Variable | fast_timeout |
---|---|
Type | int |
Default Value | -1 |
Description | Interface timeout in milliseconds, effective if greater than 0 for FAST. |
config.framework_type = aidlite.FrameworkType.TYPE_SNPE
config.accelerate_type = aidlite.AccelerateType.TYPE_DSP
config.is_quantify_model = 1
config.SNPE_out_names = ["InceptionV3/Softmax"]
config.fast_timeout = 1000
Context Class class Context
Used to store the runtime context involved in the execution process, including the Model object and Config object. Additional runtime data may be extended in the future.
Constructor Context()
API | Context |
---|---|
Description | Constructs a Context instance object. |
Parameters | model : Model instance object. |
| | config : Config instance object. |
Return Value | Normal: Context instance object. |
| | Exception: None. |
context = aidlite.Context(model=model, config=config)
Get Model Member Variable get_model()
API | get_model |
---|---|
Description | Retrieves the Model object managed by the context. |
Parameters | None |
Return Value | Model object. |
model = context.get_model()
Get Config Member Variable get_config()
API | get_config |
---|---|
Description | Retrieves the Config object managed by the context. |
Parameters | None |
Return Value | Config object. |
config = context.get_config()
Interpreter Class class Interpreter
The Interpreter type object instance is the main entity for performing inference operations, used to execute the specific inference process. As mentioned in the inference flow, after creating the interpreter object, all operations are performed on it, making it the absolute core of the AidLite SDK.
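For orientation, the following condensed sketch chains together the steps detailed in the rest of this section. It reuses the model path, shapes, and configuration values from the earlier examples and omits error checks for brevity; input_data stands for the preprocessed input described under set_input_tensor():
# Condensed end-to-end inference flow using the APIs described below
model = aidlite.Model.create_instance(model_path=r"./inceptionv3.dlc")
model.set_model_properties(input_shapes=[[1,640,640,3]],
                           input_data_type=aidlite.DataType.TYPE_FLOAT32,
                           output_shapes=[[1,10,10,255], [1,20,20,255], [1,40,40,255]],
                           output_data_type=aidlite.DataType.TYPE_FLOAT32)
config = aidlite.Config.create_instance()
config.framework_type = aidlite.FrameworkType.TYPE_SNPE
config.accelerate_type = aidlite.AccelerateType.TYPE_DSP
context = aidlite.Context(model=model, config=config)
interpreter = aidlite.Interpreter.create_instance()
interpreter.init(context=context)   # initialize with the runtime context
interpreter.load_model()            # load the model file set on the Model object
interpreter.set_input_tensor(in_tensor_idx=0, input_data=input_data)
interpreter.invoke()                # run inference
out_data = interpreter.get_output_tensor(out_tensor_idx=0)
interpreter.destroy()               # release interpreter resources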
Create Interpreter Instance create_instance()
To perform inference-related operations, an inference interpreter is essential. This function is used to construct an instance of the inference interpreter.
API | create_instance |
---|---|
Description | Constructs an Interpreter instance object. |
Parameters | None |
Return Value | Normal: Interpreter instance object. |
| | Exception: None. |
# Create an interpreter object; report an error if the return value is None
interpreter = aidlite.Interpreter.create_instance()
if interpreter is None:
print("Create Interpreter failed !")
return False
Initialize init()
After creating the interpreter object, some initialization operations (e.g., environment checks, resource construction) need to be performed.
API | init |
---|---|
Description | Initializes the interpreter (e.g., environment checks, resource construction) using the data managed by the Context object. |
Parameters | context : Context object instance, which manages Model and Config objects containing model data, configuration data, etc. |
Return Value | Normal: 0. |
| | Exception: Non-zero. |
# Initialize the interpreter; report an error if the return value is non-zero
result = interpreter.init(context=context)
if result != 0:
print("sample : interpreter->init() failed !")
return False
Load Model load_model()
After the interpreter object completes initialization, the required model file can be loaded for the interpreter, completing the model loading process. Subsequent inference processes will use the loaded model resources.
API | load_model |
---|---|
Description | Completes model loading operations. Since the model file path is already set in the Model object, the model loading operation can be executed directly. |
Parameters | None |
Return Value | Normal: 0. |
| | Exception: Non-zero. |
# Load the model for the interpreter; report an error if the return value is non-zero
result = interpreter.load_model()
if result != 0:
print("sample : interpreter->load_model() failed !")
return False
Set Input Data set_input_tensor()
As mentioned in the flow introduction, before setting input data, different preprocessing operations are required for different models to adapt to the specific model.
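For illustration, a minimal preprocessing sketch, assuming an OpenCV/NumPy pipeline, the 1x640x640x3 float32 input from the earlier examples, and a hypothetical image file test.jpg; the exact steps depend on your model:
import cv2
import numpy as np

# Read the image, resize it to the model input resolution, convert BGR to RGB,
# and scale pixel values to [0, 1] as float32
frame = cv2.imread("./test.jpg")
img = cv2.resize(frame, (640, 640))
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
input_data = img.astype(np.float32) / 255.0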
API | set_input_tensor |
---|---|
Description | Sets the input data for inference. |
Parameters | in_tensor_idx : Index value of the input tensor, type int . |
| | input_data : Binary data of the input tensor; the concrete type depends on the actual case. |
Return Value | Normal: 0. |
| | Exception: Non-zero. |
# Set the input data for inference; report an error if the return value is non-zero
result = interpreter.set_input_tensor(in_tensor_idx=0, input_data=input_data)
if result != 0:
print("interpreter->set_input_tensor() failed !")
return False
Execute Inference invoke()
As mentioned in the flow introduction, after setting the input data, the next step is to execute the inference process on the input data.
API | invoke |
---|---|
Description | Executes the inference computation process. |
Parameters | None |
Return Value | Normal: 0. |
| | Exception: Non-zero. |
# Execute the inference operation; report an error if the return value is non-zero
result = interpreter.invoke()
if result != 0:
print("sample : interpreter->invoke() failed !")
return False
Get Output Data get_output_tensor()
After inference is complete, the resulting data needs to be retrieved. As mentioned in the flow introduction, after retrieving the result data, it can be processed to determine if the results are correct.
API | get_output_tensor |
---|---|
Description | Retrieves the inference result data after successful inference. |
Parameters | out_tensor_idx : Index value of the output tensor. |
| | output_type : Output data type; optional, defaults to aidlite.DataType.TYPE_FLOAT32. |
Return Value | Normal: Result data of the requested type. |
| | Exception: None. |
# Retrieve the inference result data; report an error if the return value is None
out_data = interpreter.get_output_tensor(out_tensor_idx=1, output_type=aidlite.DataType.TYPE_INT32)
if out_data is None:
print("interpreter->get_output_tensor() failed !")
return False
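The result is returned as a flat buffer. A short post-processing sketch, assuming the output corresponds to the first output shape from the earlier set_model_properties example:
import numpy as np

# Reshape the flat result buffer to the known output tensor shape
pred = np.asarray(out_data).reshape(1, 10, 10, 255)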
Resource Release destroy()
As mentioned earlier, the interpreter object requires initialization (init) and model loading operations. Correspondingly, the interpreter also needs to perform release operations to destroy previously created resources.
API | destroy |
---|---|
Description | Performs necessary release operations. |
Parameters | None |
Return Value | Normal: 0. |
| | Exception: Non-zero. |
# Execute the interpreter release process; report an error if the return value is non-zero
result = interpreter.destroy()
if result != 0:
print("sample : interpreter->destroy() failed !")
return False
Interpreter Builder Class class InterpreterBuilder
A unified creation entry for Interpreter objects; all required interpreter objects are created through this class.
Build Interpreter build_interpreter_from_path()
Builds an inference interpreter object, allowing different parameters to be provided. The simplest approach is to provide only the model file’s path and name.
API | build_interpreter_from_path |
---|---|
Description | Directly creates the corresponding interpreter object using the model file’s path and name. All related parameters use default values. |
Parameters | path : Path and name of the model file. |
Return Value | Normal: Interpreter object instance. |
| | Exception: None. |
# Build an interpreter by passing the model file path; report an error if the return value is None
interpreter = aidlite.InterpreterBuilder.build_interpreter_from_path(path=r"./640.dlc")
if interpreter is None:
print("Create Interpreter failed !")
return False
Build Interpreter build_interpreter_from_model()
Builds an inference interpreter object. Instead of just the model file path, a Model object can be provided, which carries not only the model file path but also the input and output data types and shape information.
API | build_interpreter_from_model |
---|---|
Description | Creates the corresponding interpreter object by passing a Model object. All Config -related parameters use default values. |
Parameters | model : Model type object containing model-related data. |
Return Value | Normal: Interpreter object instance. |
| | Exception: None. |
# Build an interpreter by passing a Model object; report an error if the return value is None
interpreter = aidlite.InterpreterBuilder.build_interpreter_from_model(model=model)
if interpreter is None:
print("Create Interpreter failed !")
return False
Build Interpreter build_interpreter_from_model_and_config()
Builds an inference interpreter object. In addition to the methods mentioned above, both a Model object and a Config object can be provided, allowing not only model-related information but also additional runtime configuration parameters to be specified.
API | build_interpreter_from_model_and_config |
---|---|
Description | Creates the corresponding interpreter object by passing Model and Config objects. |
Parameters | model : Model type object containing model-related data. |
| | config : Config type object containing configuration parameters. |
Return Value | Normal: Interpreter object instance. |
| | Exception: None. |
# Build an interpreter by passing Model and Config objects; report an error if the return value is None
interpreter = aidlite.InterpreterBuilder.build_interpreter_from_model_and_config(model=model, config=config)
if interpreter is None:
print("Create Interpreter failed !")
return False
Other Methods
Get SDK Version Information get_library_version()
API | get_library_version |
---|---|
Description | Retrieves version-related information for the current AidLite-SDK. |
Parameters | None |
Return Value | Version information string for the current AidLite-SDK. |
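A minimal usage sketch, assuming the module-level call style used in the samples above:
# Print the native AidLite SDK version string
print("AidLite SDK version : ", aidlite.get_library_version())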
Get Python SDK Version Information get_py_library_version()
API | get_py_library_version |
---|---|
Description | Retrieves version-related information for the current Py-Aidlite-SDK. |
Parameters | None |
Return Value | Version information string for the current Py-Aidlite-SDK. |
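Likewise, a sketch for the Python binding version:
# Print the Py-Aidlite SDK version string
print("Py-Aidlite SDK version : ", aidlite.get_py_library_version())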
Set Log Level set_log_level()
API | set_log_level |
---|---|
Description | Sets the minimum log level, outputting log data greater than or equal to this level. By default, logs at WARNING and above are printed. |
Parameters | log_level : Value of type LogLevel. |
Return Value | Default return value is 0. |
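For example, to also see INFO-level output (a sketch, assuming the module-level call style used above):
# Lower the threshold so INFO logs and above are emitted
aidlite.set_log_level(aidlite.LogLevel.INFO)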
Log Output to Standard Terminal log_to_stderr()
API | log_to_stderr |
---|---|
Description | Sets log information to be output to the standard error terminal. |
Parameters | None |
Return Value | Default return value is 0. |
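A one-line usage sketch:
# Route subsequent log output to the standard error stream
aidlite.log_to_stderr()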
Log Output to Text File log_to_file()
API | log_to_file |
---|---|
Description | Sets log information to be output to a specified text file. |
Parameters | path_and_prefix : Path and prefix name for the log file. |
| | also_to_stderr : Whether to also output logs to the stderr terminal; default is False. |
Return Value | Normal: 0. |
| | Exception: Non-zero. |
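A usage sketch; the path and prefix below are illustrative only:
# Write logs to files named with the given path/prefix, and also echo to stderr;
# report an error if the return value is non-zero
result = aidlite.log_to_file(path_and_prefix="./aidlite_log_", also_to_stderr=True)
if result != 0:
    print("log_to_file() failed !")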
Get Latest Log Information last_log_msg()
API | last_log_msg |
---|---|
Description | Retrieves the latest log information for a specific log level, typically the latest error information. |
Parameters | log_level : Value of type LogLevel. |
Return Value | Latest log information. |
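A usage sketch, e.g. after a failed call:
# Fetch the most recent ERROR-level log message
err_msg = aidlite.last_log_msg(log_level=aidlite.LogLevel.ERROR)
print(err_msg)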