FAQ

Q&A

Q: When viewing the model structure, the input shape contains question marks (?) or symbolic names such as batchsize. What should I fill in for the input shape?

A: When the input shape contains a ?, the automatically filled dimension becomes 0, which can cause model conversion to fail. You can manually replace the ? with 1. If errors still occur, you will need to modify the original model's input shape itself. For example, if the ONNX model's input shape is [?,3,224,224], use Python with the onnx package to reload the model, change the input shape to [1,3,224,224], save it as a new ONNX model, and then convert that model with AIMO.
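As a reference, here is a minimal sketch of that fix using the onnx Python package; the file names are placeholders for your own model:

```python
import onnx

# Load the model whose input shape is [?, 3, 224, 224].
model = onnx.load("model.onnx")

# The '?' appears as a symbolic dim_param (or a dim_value of 0);
# setting dim_value fixes the batch dimension and clears the dim_param.
batch_dim = model.graph.input[0].type.tensor_type.shape.dim[0]
batch_dim.dim_value = 1

# Verify the edited graph is still valid, then save it for AIMO.
onnx.checker.check_model(model)
onnx.save(model, "model_fixed.onnx")
```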

Q: During model optimization, the process remains in "converting" for a long time. Has it frozen or crashed?

A: In the INT8 quantization options, the chosen quantization method and the model size can significantly affect conversion time. Methods such as ada_aimet, cle_aimet, or bc_snpe can take a very long time to finish (possibly days), so be patient if you selected one of them. For faster results, choose cle_snpe, enable_htp, or enable_hta as the quantization method instead.

Error Log

| No. | Error | Cause / Solution |
| --- | --- | --- |
| 1 | ERROR : [AIMET] Optimization is failed. Please consider disable ada or cle. | AIMET does not support the selected method (cle_aimet or ada_aimet). Try a different method. |
| 2 | ERROR : ValueError: After pruning disconnected nodes, this model is empty. | The model is empty and contains no valid nodes. Check the model and make sure it was uploaded correctly. |
| 3 | ERROR : [DLC-QUAN] [ERROR] Invalid model for HBA quantization algorithm. Ensure there were batchnorms present prior to initial conversion, or reconvert. | Caused by HTP quantization being incompatible with some model nodes. Avoid combining HTP with incompatible methods; try HTP alone or combinations such as HTP+CLE or CLE+BC. |
| 4 | ERROR : Node XXXX: 'Graph has no buffer XXX, referred to as input for XXX' | A node referenced in the model graph was not found. |
| 5 | ERROR : Model conversion failed due to non-existent or rigid pid | The AIMO service has not been started. |
| 6 | ERROR : ValueError: Unsupported dynamic weights on tensor permute_0_0 | The model has dynamic weights or dynamic inputs, which are unsupported. Consider truncating the model (see the truncation sketch below). |
| 7 | ERROR : Cutoff model is failed, please check nodes name | The input/output node names or input shape values are incorrect. Check the model structure and fill in the correct values. |
| 8 | Conversion FAILED: ERROR_TF_NODE_NOT_FOUND_IN_GRAPH: Node not found in graph. Node name | The input/output node names or input shape values are incorrect. Check the model structure and fill in the correct values. |
| 9 | status == QNN_BACKEND_NO_ERROR was not true | The selected chip does not support certain operators in the model. Try a different chip or a generic conversion strategy. |
| 10 | AttributeError: 'UDOFactory' object has no attribute 'xxx' | The model includes unsupported operators. Locate them in the model; if they are at the beginning or end, use input/output node truncation to skip them (see the truncation sketch below). You can also manually modify the unconverted part or report the missing operators to the backend team for support. |
| 11 | [ERROR] 909 HTP FP16 not supported for this SoC | The quantization algorithm does not support this data format. Try disabling the CLE method. |
| 12 | While converting to RKNN: onnx.onnx_cpp2py_export.checker.ValidationError: Your model ir_version is higher than the checker's | The ONNX model's ir_version is too high. When exporting to ONNX, try setting the opset to 12 (see the export sketch below). |
| 13 | When converting ONNX to DLC, even with correct output nodes: failed, please check nodes name:xxx input_name:xxx output_name:xxx | The ONNX model version is too high. When exporting to ONNX, try setting the opset to 12. |
| 14 | During optimization: "SNPE HTP Offline Prepare: Could not assign any layer to the HTP" or "HTP FP16 not supported for this SoC" | Uncheck the CLE or CLE+BC methods and use only the basic SNPE quantization method with HTP. |
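For errors 12 and 13, here is a minimal sketch of re-exporting a PyTorch model to ONNX with opset 12; the model class and file names are placeholder assumptions, so substitute your own network:

```python
import torch
import torch.nn as nn

# Placeholder model standing in for your own network.
class MyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 16, kernel_size=3, padding=1)

    def forward(self, x):
        return self.conv(x)

model = MyModel().eval()
dummy_input = torch.randn(1, 3, 224, 224)  # fixed batch size of 1

torch.onnx.export(
    model,
    dummy_input,
    "model_opset12.onnx",
    opset_version=12,  # keep the opset at 12 to avoid the ir_version error
    input_names=["input"],
    output_names=["output"],
)
```

For errors 6, 7, and 10, one way to truncate an ONNX model is onnx.utils.extract_model, which cuts out the subgraph between the given tensor names. The tensor names below are hypothetical; read the real ones from the model structure view:

```python
import onnx.utils

# Extract the convertible subgraph, skipping unsupported operators
# at the head or tail of the model. "input_tensor" and "output_tensor"
# are hypothetical names taken from the model structure.
onnx.utils.extract_model(
    "model.onnx",            # source model
    "model_truncated.onnx",  # truncated model to convert with AIMO
    input_names=["input_tensor"],
    output_names=["output_tensor"],
)
```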