Deep Learning Accelerator Programming Interface (nvm_dlaSample)

The NvMedia nvm_dlaSample sample application demonstrates how to use the NvMedia Deep Learning Accelerator (DLA) APIs to perform deep learning inference operations. The sample uses the NVIDIA® SoC DLA hardware engine.

The nvm_dlaSample application has four testing modes:

  1. Runtime mode.
    • Single thread for running DLA.
    • Demonstrates how to create and initialize DLA instances.
    • Demonstrates how to run DLA with provided input data and network (see the first sketch following this list).
  2. SciSync mode.
    • Single thread for running DLA. Two supporting threads for synchronization (signaler and waiter).
    • Demonstrates how to create and initialize DLA instances.
    • Demonstrates how to run DLA with provided input data and network.
    • Demonstrates how to synchronize DLA task submission with a CPU signaler and CPU waiter (see the second sketch following this list).
  3. Multithreaded mode.
    • Multiple threads (4) for running DLA.
    • Demonstrates how to create and initialize DLA instances.
    • Demonstrates how to run DLA with provided input data and network.
  4. Ping mode.
    • Pings the specified DLA instance to verify that it exists and responds (also shown in the first sketch following this list).
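
The following C sketch outlines the runtime-mode flow (create and initialize a DLA instance, load a network, run a task) together with the ping-mode check. It is a minimal sketch only: the NvMedia DLA calls shown (NvMediaDlaCreate, NvMediaDlaInit, NvMediaDlaPingById, NvMediaDlaDestroy) are assumed from the public NvMedia DLA API and their exact signatures may differ between DriveOS releases; loadable handling, tensor registration, and task submission are reduced to comments, and this is not the sample's source code.

    /* Illustrative sketch only: NvMedia DLA function names and signatures are
     * assumed from the public API and may differ across DriveOS releases.    */
    #include <stdint.h>
    #include "nvmedia_core.h"
    #include "nvmedia_dla.h"

    /* Runtime mode: create a DLA handle, bind it to a DLA engine, run one task. */
    static NvMediaStatus RunDlaOnce(uint32_t dlaId, uint32_t numTasks)
    {
        NvMediaDla *dla = NvMediaDlaCreate();          /* create the DLA instance   */
        if (dla == NULL) {
            return NVMEDIA_STATUS_ERROR;
        }

        /* Initialize the instance on the requested engine with numTasks slots. */
        NvMediaStatus status = NvMediaDlaInit(dla, dlaId, numTasks);
        if (status == NVMEDIA_STATUS_OK) {
            /* Load the compiled network (loadable), register input/output tensors,
             * fill the input data, and submit the inference task. These steps use
             * the loadable and submit APIs and are omitted from this sketch.      */
        }

        NvMediaDlaDestroy(dla);                        /* release the DLA instance  */
        return status;
    }

    /* Ping mode: check whether the specified DLA instance exists and responds. */
    static NvMediaStatus PingDla(uint32_t dlaId)
    {
        return NvMediaDlaPingById(dlaId);
    }

Multithreaded mode runs the same per-instance flow from four threads; ping mode reduces to the single check shown above.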
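
SciSync mode pairs the single DLA-running thread with two supporting CPU threads: a signaler that signals the pre-fence gating the DLA task, and a waiter that blocks on the DLA end-of-frame (EOF) fence. The sketch below shows only those two thread bodies using NvSciSync CPU signal/wait calls; fence setup (attribute-list reconciliation, registering the sync objects with the DLA handle, and obtaining the EOF fence from the submission thread) and error handling are assumed to happen elsewhere and are summarized in comments.

    /* Illustrative sketch only: shows the CPU signaler and CPU waiter roles.
     * Fence plumbing on the DLA side (registering sync objects and obtaining
     * the EOF fence for a submitted task) is assumed to be done by the
     * DLA-running thread and is not shown here.                             */
    #include "nvscisync.h"

    /* CPU signaler thread body: unblocks the DLA task that waits on preSyncObj. */
    static void CpuSignaler(NvSciSyncObj preSyncObj)
    {
        /* ... finish preparing the input tensor data ... */
        (void)NvSciSyncObjSignal(preSyncObj);   /* signal the pre-fence */
    }

    /* CPU waiter thread body: blocks until the DLA EOF fence is signaled. */
    static void CpuWaiter(const NvSciSyncFence *eofFence, NvSciSyncCpuWaitContext waitCtx)
    {
        /* A timeout of -1 requests an indefinite wait in this sketch. */
        (void)NvSciSyncFenceWait(eofFence, waitCtx, -1);
        /* ... the DLA task has completed; consume the output tensors ... */
    }

As listed above, the signaler and waiter are supporting threads; all DLA task submission stays on the single DLA-running thread.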
Note: The tegrastats utility can be used to monitor and report resource utilization. For additional information, refer to DLA on GitHub.