Applications
End-to-end deep learning pipelines combining preprocessing, inference, and post-processing.
Overview
The application samples demonstrate complete workflows for common computer vision tasks:
Hello World - Introduction to CV-CUDA with GPU-only image processing
Image Classification - ResNet50 classification with TensorRT inference
Object Detection - RetinaNet detection with bounding box visualization
Semantic Segmentation - FCN-ResNet101 with artistic background effects
All applications showcase:
GPU-accelerated preprocessing with CV-CUDA
TensorRT inference integration
Post-processing and visualization
Model export from PyTorch to ONNX to TensorRT
Common Patterns
Model Export
All applications follow this pattern for model preparation (sketched after the list):
Export PyTorch model to ONNX (first run only)
Build TensorRT engine from ONNX (first run only, cached)
Load cached engine (subsequent runs)
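A minimal sketch of this export-and-cache flow, assuming a torchvision ResNet50 and the TensorRT Python builder API; the file names, input shape, and builder settings are illustrative, and the actual samples wrap these steps in their own helpers:

import os
import torch
import torchvision

ONNX_PATH = "resnet50.onnx"       # illustrative file names
ENGINE_PATH = "resnet50.engine"

# First run only: export the PyTorch model to ONNX.
if not os.path.exists(ONNX_PATH):
    model = torchvision.models.resnet50(weights="IMAGENET1K_V1").eval()
    dummy = torch.randn(1, 3, 224, 224)
    torch.onnx.export(model, dummy, ONNX_PATH,
                      input_names=["input"], output_names=["output"])

# First run only: build a TensorRT engine from the ONNX file and cache it on disk.
if not os.path.exists(ENGINE_PATH):
    import tensorrt as trt

    logger = trt.Logger(trt.Logger.WARNING)
    builder = trt.Builder(logger)
    network = builder.create_network(
        1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
    parser = trt.OnnxParser(network, logger)
    with open(ONNX_PATH, "rb") as f:
        parser.parse(f.read())
    engine_bytes = builder.build_serialized_network(
        network, builder.create_builder_config())
    with open(ENGINE_PATH, "wb") as f:
        f.write(engine_bytes)

# Subsequent runs: both steps are skipped and the cached engine is loaded directly.

Caching the serialized engine avoids repeating the relatively slow build step on every run.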
Preprocessing Pipeline
Standard preprocessing steps (combined in the sketch after the list):
Load image with read_image()
Add batch dimension with cvcuda.stack()
Resize to model input size with cvcuda.resize()
Convert to float32 with cvcuda.convertto()
Normalize (if needed) with cvcuda.normalize()
Reformat to NCHW with cvcuda.reformat()
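Put together, those steps might look like the following sketch. It assumes read_image() is the helper from the samples' common utilities and returns an HWC uint8 tensor already on the GPU, that the model expects 224x224 input, and that ImageNet normalization constants apply; adjust these for the model at hand.

import numpy as np
import torch
import cvcuda

image = read_image("input.jpg")    # HWC uint8 tensor on the GPU (samples' helper assumed)

# Add a batch dimension: HWC -> NHWC.
batch = cvcuda.stack([image])

# Resize to the model input size (224x224 assumed here).
resized = cvcuda.resize(batch, (1, 224, 224, 3), cvcuda.Interp.LINEAR)

# Convert to float32 in [0, 1].
scaled = cvcuda.convertto(resized, np.float32, scale=1 / 255)

# Normalize with per-channel mean and stddev (ImageNet values assumed).
mean = cvcuda.as_tensor(torch.tensor([0.485, 0.456, 0.406]).reshape(1, 1, 1, 3).cuda(), "NHWC")
stddev = cvcuda.as_tensor(torch.tensor([0.229, 0.224, 0.225]).reshape(1, 1, 1, 3).cuda(), "NHWC")
normalized = cvcuda.normalize(scaled, base=mean, scale=stddev,
                              flags=cvcuda.NormalizeFlags.SCALE_IS_STDDEV)

# Reformat NHWC -> NCHW for TensorRT.
input_tensor = cvcuda.reformat(normalized, "NCHW")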
Inference
Using the TRT wrapper:
model = TRT(engine_path)           # load the cached TensorRT engine
outputs = model([input_tensor])    # run inference on a list of input tensors
See Also
Operators - Individual CV-CUDA operators
Common Utilities - Helper functions
Hello World - Simple introduction