
PyTorch .pth format to ONNX format in detail

Background

Models trained in PyTorch need to be deployed on the Jetson Nano. Since Jetson provides native TensorRT support, a better approach is to first convert the model to ONNX format, and then convert from ONNX to TensorRT.

Installation of dependent libraries

First you need to install ONNX; the exact version to install depends on the protobuf and Python versions in your environment. My Python version is 3.6.9.

pip install onnx==1.11.0
pip install onnx-simplifier
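
After installing, a quick sanity check confirms that the package imports cleanly (a minimal sketch; the expected version matches the pip command above):

import onnx

print(onnx.__version__)  # expect 1.11.0, matching the version installed above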

Installing ONNX goes fine, but onnx-simplifier simply won't install on the Jetson Xavier, failing with various errors!

However, when I switched to Windows and Ubuntu servers, it installed without a problem!

Anyone who knows why the install failed on Jetson can private message me or let me know in the comments section, thanks~!

Conversion to ONNX format

import torch

# model: the trained PyTorch model, already loaded and set to eval mode
# example: a dummy input tensor with the shape the model expects
with torch.no_grad():
    torch.onnx.export(
        model,
        example,
        "model.onnx",  # output path (example filename)
        opset_version=11,
        input_names=['input'],
        output_names=['output'])
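
For reference, the model and example objects in the snippet above would typically be prepared along these lines (a minimal sketch; the Net class, the model.pth checkpoint name, and the 1x3x224x224 input shape are illustrative assumptions, not from the original setup):

import torch

model = Net()  # your own model class (hypothetical name)
model.load_state_dict(torch.load("model.pth", map_location="cpu"))
model.eval()  # switch to inference mode before exporting
example = torch.randn(1, 3, 224, 224)  # dummy input matching the model's expected shape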

This exports the model correctly.
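
To verify the exported file, the ONNX package installed earlier can check it (a minimal sketch, reusing the model.onnx filename from the export example):

import onnx

onnx_model = onnx.load("model.onnx")  # parse the exported file
onnx.checker.check_model(onnx_model)  # raises an exception if the model is malformed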

The following error may be reported when running on the Jetson Xavier:

Illegal instruction (core dumped)

Just run the following command first:

 export OPENBLAS_CORETYPE=ARMV8
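
To make the setting persist across sessions, it can be appended to your shell profile (a common approach on Jetson boards; adjust if you use a shell other than bash):

echo 'export OPENBLAS_CORETYPE=ARMV8' >> ~/.bashrc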

Conversion of ONNX to TensorRT format

On the Jetson, the conversion is done with the trtexec tool; fill in the path to your ONNX model and the path where the engine should be saved:

trtexec --onnx=<model.onnx> --saveEngine=<model.engine> --explicitBatch

The following error was reported during conversion:

Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.

So you first need to simplify the model with onnx-simplifier; the command takes the input model and the output path for the simplified model:

python -m onnxsim init.onnx init_sim.onnx
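
After simplifying, rerun trtexec on the simplified model (the filenames here follow the onnxsim example above; the engine name is just an illustration):

trtexec --onnx=init_sim.onnx --saveEngine=init_sim.engine --explicitBatch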

With this, the conversion succeeds, and you can then run inference with TensorRT.
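
As a starting point for that, the saved engine can be deserialized with the TensorRT Python API (a minimal sketch; the engine filename follows the example above, and a complete inference loop would also need input/output buffers, e.g. via pycuda):

import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
runtime = trt.Runtime(logger)
with open("init_sim.engine", "rb") as f:
    engine = runtime.deserialize_cuda_engine(f.read())  # rebuild the engine from the serialized file
context = engine.create_execution_context()  # execution context used to run inference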

That concludes this walkthrough of converting PyTorch .pth models to ONNX format and on to TensorRT; I hope it helps!