
DriveWorks SDK Reference
    5.10.90 Release
    For Test and Development only

    System and Platform Information

    GPU Enumeration and Selection

    On NVIDIA DRIVE systems, multiple GPUs are present and available for the SDK.

    • The integrated GPU (iGPU) shares the same die as the ARM CPU.
    • The discrete GPU (dGPU) is a separate unit connected to the main CPU over a PCIe bus.

    At runtime, user applications can enumerate and select these GPUs. Most of the NVIDIA® DriveWorks modules use the currently selected GPU.

    ...
    // enumerate all available GPUs in the system and select the first available dGPU
    int32_t numGPUs = 0;
    dwContext_getGPUCount(&numGPUs, sdkContext);
    for (int32_t i = 0; i < numGPUs; i++)
    {
        dwGPUDeviceType type;
        dwContext_getGPUDeviceType(&type, i, sdkContext);
        switch (type)
        {
        case DW_GPU_DEVICE_DISCRETE:
            printf("GPU %d is a dGPU, set it as current GPU\n", i);
            dwContext_selectGPUDevice(i, sdkContext);
            break;
        case DW_GPU_DEVICE_INTEGRATED:
            printf("GPU %d is an iGPU\n", i);
            break;
        default:
            break;
        }
    }
    NVIDIA DriveWorks API: Core Methods
    DW_API_PUBLIC dwStatus dwContext_getGPUCount(int32_t *const count, dwContextHandle_t const context)
        Get the available GPU devices count.
    DW_API_PUBLIC dwStatus dwContext_getGPUDeviceType(dwGPUDeviceType *const deviceType, int32_t const deviceNum, dwContextHandle_t const context)
        Returns the device type of the input GPU number.
    DW_API_PUBLIC dwStatus dwContext_selectGPUDevice(int32_t const deviceNumber, dwContextHandle_t const context)
        Selects a GPU device, if available.
    dwGPUDeviceType (Definition: Types.h:148)
        GPU device type definitions. Only applicable on DRIVE platforms.
    DW_GPU_DEVICE_DISCRETE (Definition: Types.h:149), DW_GPU_DEVICE_INTEGRATED (Definition: Types.h:150)
        Enumerators for the discrete and integrated GPU device types.
    Note
    For applications that require access to GPU memory, ensure that they run on the correct GPU.
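
    As a minimal sketch of the note above (not part of the original sample), the following re-runs the enumeration while checking the dwStatus return codes and only performs dGPU-only work once a discrete GPU has been selected. It uses only the core methods listed above plus DW_SUCCESS; the flag and the messages are illustrative assumptions.

    // sketch: select a dGPU with status checking before doing dGPU-only allocations
    int32_t numGPUs = 0;
    if (dwContext_getGPUCount(&numGPUs, sdkContext) != DW_SUCCESS)
    {
        printf("Failed to query GPU count\n");
        return -1;
    }
    bool dGPUSelected = false;
    for (int32_t i = 0; i < numGPUs && !dGPUSelected; i++)
    {
        dwGPUDeviceType type;
        if (dwContext_getGPUDeviceType(&type, i, sdkContext) != DW_SUCCESS)
            continue;
        if (type == DW_GPU_DEVICE_DISCRETE &&
            dwContext_selectGPUDevice(i, sdkContext) == DW_SUCCESS)
        {
            dGPUSelected = true;
        }
    }
    if (!dGPUSelected)
    {
        printf("No dGPU available; GPU memory will be allocated on the currently selected GPU\n");
    }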

    DLA Engine Selection (Xavier SoC)

    NVIDIA DRIVE platforms with the NVIDIA Xavier SoC provide a hardware-based Deep Learning Accelerator (DLA). The DLA accelerates inference of deep neural networks, freeing NVIDIA® CUDA® cores to perform other tasks.

    The following snippet shows how to activate a specific DLA engine for DriveNet inference. For a full sample, see DriveNet Sample.

    // check if current platform supports DLA engine
    int32_t numDLAs = 0;
    dwContext_getDLAEngineCount(&numDLAs, sdkContext);
    if (numDLAs == 0)
    {
        printf("The platform does not support DLA engine\n");
        return 0;
    }
    // setup DriveNet to use DLA engine for inferencing
    dwDriveNetParams driveNetParams = {};
    ...
    driveNetParams.processorType = DW_PROCESSOR_TYPE_DLA_0;
    // DLA supports only FP16 precision
    driveNetParams.networkPrecision = DW_PRECISION_FP16;
    dwDriveNet_initialize(&driveNet, ..., &driveNetParams, sdkContext);
    NVIDIA DriveWorks API: Core Methods
    DW_API_PUBLIC dwStatus dwContext_getDLAEngineCount(int32_t *const count, dwContextHandle_t const context)
        Get the available DLA engines count.
    DW_PROCESSOR_TYPE_DLA_0 (Definition: Types.h:158)
        Enumerator for the first DLA engine.
    DW_PRECISION_FP16 (Definition: Types.h:137)
        FP16 precision.
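
    The Xavier SoC exposes two DLA engines, so a second network can be dispatched to the other engine when the platform reports more than one. The sketch below is an illustrative assumption, not part of the DriveNet sample: it assumes DW_PROCESSOR_TYPE_DLA_1 names the second engine and reuses the numDLAs count from the snippet above; the second-network parameters are hypothetical.

    // sketch: place a hypothetical second network on the second DLA engine, if present
    dwDriveNetParams secondNetParams = {};
    // ... fill the remaining DriveNet parameters as in the snippet above ...
    if (numDLAs >= 2)
    {
        // DW_PROCESSOR_TYPE_DLA_1 is assumed to denote the second DLA engine
        secondNetParams.processorType = DW_PROCESSOR_TYPE_DLA_1;
    }
    else
    {
        // only one engine reported: share DLA 0 between both networks
        secondNetParams.processorType = DW_PROCESSOR_TYPE_DLA_0;
    }
    // DLA supports only FP16 precision
    secondNetParams.networkPrecision = DW_PRECISION_FP16;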