Applications

End-to-end deep learning pipelines combining preprocessing, inference, and post-processing.

Overview

The application samples demonstrate complete workflows for common computer vision tasks:

  • Hello World - Introduction to CV-CUDA with GPU-only image processing

  • Image Classification - ResNet50 classification with TensorRT inference

  • Object Detection - RetinaNet detection with bounding box visualization

  • Semantic Segmentation - FCN-ResNet101 with artistic background effects

All applications showcase:

  • GPU-accelerated preprocessing with CV-CUDA

  • TensorRT inference integration

  • Post-processing and visualization

  • Model export from PyTorch to ONNX to TensorRT

Common Patterns

Model Export

All applications follow this pattern for model preparation:

  1. Export PyTorch model to ONNX (first run only)

  2. Build TensorRT engine from ONNX (first run only, cached)

  3. Load cached engine (subsequent runs)
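The cache-or-build decision behind steps 1-3 can be sketched as a small helper. This is illustrative only: `plan_model_prep` and the step names are not part of the samples, and the real export/build calls (torch.onnx.export, TensorRT engine building) are elided.

```python
from pathlib import Path

def plan_model_prep(onnx_path: str, engine_path: str) -> list:
    """Decide which preparation steps are needed, mirroring the
    export -> build -> load caching pattern described above."""
    steps = []
    if not Path(onnx_path).exists():
        steps.append("export_onnx")      # first run only
    if not Path(engine_path).exists():
        steps.append("build_engine")     # first run only; result is cached
    steps.append("load_engine")          # every run
    return steps
```

On a first run all three steps fire; once the ONNX file and engine exist on disk, only the engine load remains.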

Preprocessing Pipeline

Standard preprocessing steps:

  1. Load image with read_image()

  2. Add batch dimension with cvcuda.stack()

  3. Resize to model input size with cvcuda.resize()

  4. Convert to float32 with cvcuda.convertto()

  5. Normalize (if needed) with cvcuda.normalize()

  6. Reformat to NCHW with cvcuda.reformat()
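Steps 4-6 are pure tensor math, so a CPU reference in NumPy can help validate the GPU pipeline. This sketch mirrors what cvcuda.convertto, cvcuda.normalize, and cvcuda.reformat compute on an NHWC uint8 batch; the mean/std values are the common ImageNet ones and may differ per model.

```python
import numpy as np

def preprocess_reference(batch_u8: np.ndarray,
                         mean=(0.485, 0.456, 0.406),
                         std=(0.229, 0.224, 0.225)) -> np.ndarray:
    """CPU reference for steps 4-6 on an NHWC uint8 batch."""
    x = batch_u8.astype(np.float32) / 255.0              # step 4: convert to float32 with scale
    x = (x - np.asarray(mean, np.float32)) / np.asarray(std, np.float32)  # step 5: normalize
    return np.ascontiguousarray(x.transpose(0, 3, 1, 2))  # step 6: NHWC -> NCHW
```

Running both paths on the same input and comparing with a small tolerance is a quick sanity check for the CV-CUDA pipeline.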

Inference

Using the TRT wrapper:

# Load the cached TensorRT engine via the samples' TRT helper class
model = TRT(engine_path)
# Inputs and outputs are passed as lists of tensors
outputs = model([input_tensor])
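For the classification sample, post-processing the engine's raw output typically reduces to softmax plus top-k. A NumPy sketch, assuming the output has shape [batch, num_classes] and contains raw logits (the function name is illustrative):

```python
import numpy as np

def top_k(logits: np.ndarray, k: int = 5):
    """Softmax over class logits, then the top-k (index, probability) pairs."""
    z = logits - logits.max(axis=-1, keepdims=True)      # stabilize the exponentials
    probs = np.exp(z) / np.exp(z).sum(axis=-1, keepdims=True)
    idx = np.argsort(probs, axis=-1)[..., ::-1][..., :k]  # highest probability first
    return idx, np.take_along_axis(probs, idx, axis=-1)
```

The returned indices map to class labels (e.g. the ImageNet label list for ResNet50).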

See Also