Build and Run Sample Applications for DRIVE OS 6.x Linux
Alternatively, if the container was started with a mounted ${WORKSPACE}, you may copy or move the compiled binaries to the mounted ${WORKSPACE} before exiting the container so that the compiled binaries are available on the host.
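For example, a minimal sketch of that copy step inside the container, assuming the workspace is mounted at ${WORKSPACE}; the compiled-samples directory name is arbitrary, and the placeholder should be replaced with your actual build output location:
$ mkdir -p ${WORKSPACE}/compiled-samples
$ cp -r <path to compiled binaries> ${WORKSPACE}/compiled-samples/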
Graphics Applications
For the pre-flashed boards and boards flashed with the DRIVE OS Docker container, samples are in /opt/nvidia/. To run a basic X11 sample, do the following:
$ cd /opt/nvidia/drive-linux/samples/opengles2/bubble/x11/
$ ./bubble -fps
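If the sample cannot open a window, an X server may not be running on the target or DISPLAY may not be set in your shell; a hedged sketch, assuming the display is :0:
$ export DISPLAY=:0
$ ./bubble -fps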
CUDA
Documentation is available online: CUDA Toolkit Documentation v11.4.1.
CUDA for Host x86 and Linux aarch64 is installed on the development host in /usr/local/cuda-11.4/.
All CUDA samples are available in source code on the development host in /usr/local/cuda-11.4/samples.
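As a quick sanity check before cross-compiling, you can confirm that the host toolchain and sample sources are present (optional, not a required step):
$ /usr/local/cuda-11.4/bin/nvcc --version
$ ls /usr/local/cuda-11.4/samples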
How to Build the CUDA Samples for the Linux Target
For DRIVE OS releases, only host cross-compilation is supported. Perform the following steps:
On the development host, cd to the samples directory.
$ cd /usr/local/cuda-11.4/samples
Build the samples.
$ make TARGET_ARCH=aarch64 TARGET_OS=linux SMS=87
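To rebuild a single sample rather than the whole tree, the same make invocation can typically be run from that sample's subdirectory; the deviceQueryDrv path below reflects the CUDA 11.4 samples layout and may differ in other releases:
$ cd /usr/local/cuda-11.4/samples/1_Utilities/deviceQueryDrv
$ make TARGET_ARCH=aarch64 TARGET_OS=linux SMS=87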
The SMS=87 value builds the samples for GPUs with compute capability 8.7. To build for other GPUs, replace the SMS value with the target compute version.
How to Run the CUDA Samples
From the host:
$ rcp -r /usr/local/cuda-11.4/samples/bin/aarch64/linux/release/ <username>@<target ip address>:/home/nvidia/cuda_samples
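If rcp is not available on the development host, scp is a reasonable alternative, assuming SSH access to the target is configured:
$ scp -r /usr/local/cuda-11.4/samples/bin/aarch64/linux/release/ <username>@<target ip address>:/home/nvidia/cuda_samples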
From the target:
$ cd cuda_samples
$ ./deviceQueryDrv
TensorRT
The /usr/src/tensorrt folder is created in the Build Docker.
How to Build the TensorRT Samples
On the development host:
$ cd /usr/src/tensorrt/samples
$ sudo make TARGET=aarch64
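To rebuild only one sample, the same make invocation can typically be run from that sample's subdirectory; the sampleGoogleNet directory name below is an assumption based on the usual TensorRT samples layout:
$ cd /usr/src/tensorrt/samples/sampleGoogleNet
$ sudo make TARGET=aarch64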
How to Run the TensorRT Samples on the Target
Copy the files to the target to make the TensorRT sample apps available to run.
From the host:
$ scp -r /usr/src/tensorrt <username>@<target ip address>:/home/nvidia/tensorrt
From the target:
$ cd ~/tensorrt/bin
$ ./sample_googlenet
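If sample_googlenet reports that it cannot locate its model files, most TensorRT samples accept a data directory argument; the --datadir flag and googlenet path below are assumptions based on the standard TensorRT sample argument parser:
$ ./sample_googlenet --datadir=/home/nvidia/tensorrt/data/googlenet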
For further information, the TensorRT Developer guide and API reference documents are available at https://docs.nvidia.com/deeplearning/tensorrt/index.html, including sample cross-compile information at https://docs.nvidia.com/deeplearning/tensorrt/sample-support-guide/index.html#cross-compiling-linux.