Build and Run Sample Applications for DRIVE OS 6.x Linux
Alternatively, if the container was started with a mounted ${WORKSPACE}, you may copy or move the compiled binaries to the mounted ${WORKSPACE} before exiting the container so that they remain available on the host.
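As a sketch, the copy-out step can be wrapped in a small helper; the function name and both example paths are illustrative, not fixed by the SDK:

```shell
# Hypothetical helper: stage compiled binaries into the mounted workspace
# before exiting the container, so they remain on the host afterwards.
stage_binaries() {
    src_dir="$1"    # e.g. /opt/nvidia/drive-linux/samples/opengles2/bubble/x11
    dest_dir="$2"   # e.g. "${WORKSPACE}/bubble"
    mkdir -p "$dest_dir"
    cp -r "$src_dir"/. "$dest_dir"/
}

# Usage (inside the container):
#   stage_binaries /opt/nvidia/drive-linux/samples/opengles2/bubble/x11 "${WORKSPACE}/bubble"
```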
Graphics Applications
For boards that are pre-flashed or flashed with the custom GDM desktop file system (default for SDK Manager and Docker installations), the Bubble sample application is available in the /opt/nvidia/drive-linux/samples/ folder.
To run a basic X11 sample, use the following commands:
$ cd /opt/nvidia/drive-linux/samples/opengles2/bubble/x11
$ ./bubble -fps
To run other graphics samples for X11 and the supported window systems, see Building and Running Samples and Window Systems.
CUDA
Documentation is available online: CUDA Toolkit Documentation v11.4.4.
- The CUDA Host x86 and Linux AArch64 installations are in the /usr/local/cuda-11.4/ folder.
- The source code for all CUDA samples is in the /usr/local/cuda-11.4/samples folder.
How to Build the CUDA Samples for the Linux Target
Host cross-compile is supported for DRIVE OS releases only. After you finish installing CUDA x86 and cross-compile packages, perform the following steps:
Install the CUDA sample sources to a directory on your x86 development host where you do not need root privileges to write, such as the $HOME directory, as shown in the following example:
$ cd ~/
Use the following command to run the cuda-install-samples-11.4.sh script from the CUDA installation in the x86 host file system:
$ <$NV_WORKSPACE>/drive-linux/filesystem/targetfs/usr/local/cuda-11.4/bin/cuda-install-samples-11.4.sh .
Where $NV_WORKSPACE is:
- Docker: /drive
- A successful run shows the following output:
Copying samples to ./NVIDIA_CUDA-11.4_Samples now...
Finished copying samples.
Samples will be installed to the ~/NVIDIA_CUDA-11.4_Samples directory.
Build the samples:
$ cd ~/NVIDIA_CUDA-11.4_Samples
$ sudo make SMS=87 TARGET_ARCH=aarch64 TARGET_OS=linux TARGET_FS=$NV_WORKSPACE/drive-linux_src/filesystem/targetfs
The command above builds the samples for GPUs with compute capability 8.7 (SMS=87). To build for other GPUs, replace the SMS value with the target compute version.
How to Run the CUDA Samples
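The build step can be sketched as a small helper that composes the make invocation for a chosen compute capability; the helper name is hypothetical, and SMS=87 corresponds to Orin (compute capability 8.7):

```shell
# Sketch: compose the cross-build make invocation for a given GPU compute
# capability. SMS=87 targets Orin; other values are substitutions for
# other target GPUs.
cuda_build_cmd() {
    sms="$1"       # e.g. 87 for Orin
    targetfs="$2"  # path to the mounted target file system
    echo "make SMS=${sms} TARGET_ARCH=aarch64 TARGET_OS=linux TARGET_FS=${targetfs}"
}

# Usage:
#   cd ~/NVIDIA_CUDA-11.4_Samples
#   eval "sudo $(cuda_build_cmd 87 "$NV_WORKSPACE/drive-linux_src/filesystem/targetfs")"
```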
To run a CUDA sample application:
- Copy the sample files of your choice to the target.
- From the target, run the sample application.
For example, from the host:
$ cd ~/
$ rcp -r NVIDIA_CUDA-11.4_Samples <username>@<target ip address>:/home/<username>
From the target:
$ cd ~/NVIDIA_CUDA-11.4_Samples/bin/aarch64/linux/release/
$ ./deviceQueryDrv
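As a sketch, a deviceQueryDrv run can be checked for the "Result = PASS" line that the CUDA samples conventionally print on success; the helper name is hypothetical:

```shell
# Sketch: run a CUDA sample binary and check for the "Result = PASS"
# marker that deviceQueryDrv prints when the GPU is detected correctly.
check_device_query() {
    "$1" 2>&1 | grep -q 'Result = PASS'
}

# Usage (on the target):
#   check_device_query ./deviceQueryDrv && echo "CUDA device OK"
```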
TensorRT
After you finish installing the TensorRT packages, the /usr/src/tensorrt folder is created on the development host.
For more information about:
- TensorRT Developer Guide and API Reference, see NVIDIA TensorRT Documentation.
- How to cross-compile TensorRT samples, see Sample Support Guide in NVIDIA TensorRT Documentation.
How to Build the TensorRT Samples
On the development host:
$ cd /usr/src/tensorrt/samples
$ sudo make TARGET=aarch64
How to Run the TensorRT Samples on the Target
To run a TensorRT sample application:
- Copy the sample files of your choice to the target.
- From the target, run the sample application.
For example, from the host:
$ scp -r /usr/src/tensorrt <username>@<target ip address>:/home/nvidia/tensorrt
From the target:
$ cd /home/nvidia/tensorrt
$ ./bin/sample_algorithm_selector
For further information, the TensorRT Developer Guide and API Reference are available at https://docs.nvidia.com/deeplearning/tensorrt/index.html, including sample cross-compile information at https://docs.nvidia.com/deeplearning/tensorrt/sample-support-guide/index.html#cross-compiling-linux.