NVIDIA DRIVE™ platforms have two GPUs on which NVIDIA® DriveWorks can run NVIDIA® CUDA® workloads: the integrated GPU (iGPU) and the discrete GPU (dGPU).
The dGPU is faster than the iGPU but is limited to being a CUDA coprocessor. It cannot run graphics code (OpenGL/OpenGL ES).
There are two methods for ensuring that the CUDA workload executes on the iGPU.
Use the DriveWorks context method dwStatus dwContext_selectGPUDevice() to select the GPU on which to run. If the chosen GPU is the dGPU and rendering is needed, use an image streamer to copy the results to the iGPU for visualization. For details on how this is done, see the DriveWorks samples; a minimal sketch of the selection call follows.
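The sketch below illustrates method 1 under stated assumptions: the header path, the dwInitialize/dwRelease signatures, and the parameter order of dwContext_selectGPUDevice() vary between DriveWorks releases, so check the API reference for your version. It also assumes the iGPU enumerates as device 1.

```cpp
// Hedged sketch: select the iGPU for CUDA work through the DriveWorks context.
// Assumptions (verify against your DriveWorks API reference): the header name,
// the parameter order of dwContext_selectGPUDevice(), and that the iGPU
// enumerates as device 1 (device 0 being the dGPU, if present).
#include <dw/core/Context.h>
#include <cstdio>

int main()
{
    dwContextHandle_t ctx = DW_NULL_HANDLE;
    dwContextParameters params{};

    dwStatus status = dwInitialize(&ctx, DW_VERSION, &params);
    if (status != DW_SUCCESS)
    {
        std::fprintf(stderr, "dwInitialize failed: %d\n", static_cast<int>(status));
        return 1;
    }

    // Assumed signature: (device index, context handle).
    status = dwContext_selectGPUDevice(1, ctx);
    if (status != DW_SUCCESS)
    {
        std::fprintf(stderr, "dwContext_selectGPUDevice failed: %d\n", static_cast<int>(status));
    }

    // ... launch CUDA workloads here; they now execute on the selected GPU ...

    dwRelease(ctx); // older releases take &ctx instead
    return 0;
}
```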
Set the environment variable CUDA_VISIBLE_DEVICES=1. This setting limits CUDA applications to discovering (enumerating) only the iGPU; a sketch of setting it programmatically appears below.
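As an alternative to exporting the variable in the shell, it can be set from inside the application with the POSIX setenv() call, provided this happens before the first CUDA or DriveWorks call initializes the CUDA runtime. This snippet is illustrative, not part of the DriveWorks API.

```cpp
// Hedged sketch: restrict CUDA to the iGPU by setting CUDA_VISIBLE_DEVICES=1
// before any CUDA runtime call. Equivalent to launching the application with
// CUDA_VISIBLE_DEVICES=1 set in the shell.
#include <cstdlib>
#include <cstdio>
#include <cuda_runtime.h>

int main()
{
    // Must be set before the CUDA runtime is initialized, otherwise it has no effect.
    setenv("CUDA_VISIBLE_DEVICES", "1", /*overwrite=*/1);

    int count = 0;
    cudaGetDeviceCount(&count);
    std::printf("Visible CUDA devices: %d\n", count); // expected: 1 (the iGPU)
    return 0;
}
```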
If CUDA_VISIBLE_DEVICES is unset, the GPUs enumerate as follows: device 0 is the dGPU (if present) and device 1 is the iGPU. DriveWorks runs the CUDA workload on the first enumerated GPU, which is the dGPU (if present).
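To confirm the enumeration order on a given platform, the CUDA runtime can be queried directly: cudaDeviceProp::integrated is non-zero for the iGPU and zero for the dGPU. This check is plain CUDA, not part of the DriveWorks API.

```cpp
// Enumerate the visible CUDA devices and report which one is the iGPU.
#include <cuda_runtime.h>
#include <cstdio>

int main()
{
    int count = 0;
    if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0)
    {
        std::fprintf(stderr, "No CUDA devices visible\n");
        return 1;
    }

    for (int i = 0; i < count; ++i)
    {
        cudaDeviceProp prop{};
        cudaGetDeviceProperties(&prop, i);
        std::printf("Device %d: %s (%s)\n", i, prop.name,
                    prop.integrated ? "iGPU" : "dGPU");
    }
    return 0;
}
```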
For more information, see Appendix J, CUDA Environment Variables, in the CUDA C Programming Guide.