AI Model Inferencing with ONNX on GPU (CUDA)
Recently I've been working a lot with different AI models for a variety of use cases, ranging from simple object classification to large crowd-counting models. While working with these models, I noticed that implementing a model in custom code is always difficult: the architecture is never included in the weights themselves, so every model has to be wired up differently.
Enabling CUDA with PyTorch on Nvidia Jetson and Python 3.9
For one use case I wanted to use the Nvidia Jetson for edge inference. A bottleneck here was that my software required a Python version greater than 3.6, while the official Nvidia Jetson packages for PyTorch were only built for Python 3.6.
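Before building anything, it is worth verifying which interpreter you are actually running and whether the installed PyTorch can see the GPU. A small check along these lines (the `torch` import is guarded so the snippet also runs on a machine without PyTorch) can save a lot of debugging:

```python
import sys

# The stock Jetson PyTorch wheels at the time targeted Python 3.6 only,
# so first confirm which interpreter version is in use.
version_ok = sys.version_info >= (3, 7)
print(f"Python {sys.version.split()[0]} - newer than 3.6: {version_ok}")

# Once a matching PyTorch build is installed, CUDA support can be
# confirmed with torch.cuda.is_available(); guarded here so the snippet
# still runs in an environment without torch.
try:
    import torch
    print("CUDA available:", torch.cuda.is_available())
except ImportError:
    print("PyTorch is not installed in this environment")
```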