Rebuilding the File System from ubuntu-base and Local Mirror
- Canonical ubuntu-base tarball:
nv-driveos-linux-ubuntu-20.04-base-*_amd64.deb
- Canonical arm64 Debian packages:
nv-driveos-linux-ubuntu-20.04-arm64-debians-*_amd64.deb
- NVIDIA CUDA arm64 Debian packages:
cuda-repo-ubuntu2004-11-4-local*arm64.deb
- NVIDIA cuDNN arm64 Debian packages:
cudnn-prune-87-repo-ubuntu2004-8-2-local*arm64.deb
- NVIDIA TensorRT arm64 Debian packages:
nv-tensorrt-repo-ubuntu2004-cuda11.4-trt*arm64.deb
- NVIDIA Mellanox and Docker arm64 Debian packages:
nv-driveos-linux-mlnx-docker-arm64-debians-*_amd64.deb
- NVIDIA DriveWorks arm64 Debian packages:
driveworks-cgf*linux*arm64.deb
driveworks-cgf-cross*linux*amd64.deb
driveworks-cgf-samples*linux*arm64.deb
driveworks-cross*linux*amd64.deb
driveworks-data*all.deb
driveworks-stm*linux*arm64.deb
driveworks-stm-cross*linux*_amd64.deb
driveworks-stm-sample*.deb
- NVIDIA DRIVE OS Core arm64 Debian packages:
nv-driveos-linux-aurix_*_arm64.deb
nv-driveos-linux-firmware_*_arm64.deb
nv-driveos-linux-headers_*_arm64.deb
nv-driveos-linux-kernel-modules_*_arm64.deb
nv-driveos-linux-libraries_*_arm64.deb
nv-driveos-linux-samples_*_arm64.deb
nv-driveos-linux-security_*_arm64.deb
nv-driveos-linux-tools_*_arm64.deb
nv-driveos-linux-core_*_arm64.deb
nv-driveos-linux-oobe_*_arm64.deb
- Ensure that the DRIVE OS Linux SDK is installed as described on the Getting Started page.
- Install the Build-FS and CopyTarget Debian packages to use these tools.
- Ensure the NV_WORKSPACE shell variable is set and points to the top-level directory where the DRIVE OS Linux SDK is installed.
- Keep the DRIVE OS Linux SDK Debian packages in $NV_WORKSPACE and switch to that directory:
$ cd $NV_WORKSPACE
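As a sketch, with a hypothetical install location (substitute the directory where your SDK actually lives), this setup looks like:

```shell
# Hypothetical install location -- replace with your actual SDK top directory.
export NV_WORKSPACE="${NV_WORKSPACE:-$HOME/nvidia/driveos}"
echo "NV_WORKSPACE=$NV_WORKSPACE"
# cd "$NV_WORKSPACE"   # run this once the path exists on your host
```

Using `export` (rather than a plain assignment) ensures the variable is visible to the build tools invoked later with `sudo -E`.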
Steps to rebuild a filesystem from ubuntu-base
Rebuilding a filesystem requires a local mirror built from the target-specific components listed above. Execute the steps below to set up the local mirror.
- Install the driveos-oobe-desktop-rfs SDK package to install its manifest file driveos-${FS_VARIANT}*MANIFEST.json:
$ sudo -E dpkg -i ./nv-driveos-linux-driveos-oobe-desktop-ubuntu-20.04-rfs-*_amd64.deb
Here, ${FS_VARIANT} is the name of the filesystem variant, e.g. oobe-desktop.
For an example that rebuilds driveos-oobe-desktop-rfs, refer to Example: Steps to rebuild driveos-oobe-desktop-rfs filesystem from ubuntu-base below.
- Install the Canonical ubuntu-base and arm64 Debian SDK packages:
$ sudo -E dpkg -i ./nv-driveos-linux-ubuntu-20.04-arm64-debians-<release>-<GCID>_<release>-<GCID>_amd64.deb ./nv-driveos-linux-ubuntu-20.04-base-<release>-<GCID>_<release>-<GCID>_amd64.deb
- Install the NVIDIA Mellanox and Docker arm64 Debian packages:
$ sudo -E dpkg -i ./nv-driveos-linux-mlnx-docker-arm64-debians-<release>-<GCID>_<release>-<GCID>_amd64.deb
- Copy the NVIDIA CUDA, cuDNN, TensorRT and DriveWorks arm64 Debian packages to $NV_WORKSPACE/drive-linux/filesystem/contents/debians/nvidia/.
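A possible way to script this copy step is sketched below. The globs are assumptions based on the package names listed at the top of this page, and the .deb files are assumed to sit in the current directory; adjust both to your setup.

```shell
# Stage the NVIDIA arm64 .deb files where Build-FS expects to find them.
DEST="${NV_WORKSPACE:-/tmp/nv_workspace}/drive-linux/filesystem/contents/debians/nvidia"
mkdir -p "$DEST"
for deb in ./cuda-repo-*arm64.deb ./cudnn-*arm64.deb \
           ./nv-tensorrt-repo-*arm64.deb ./driveworks-*.deb; do
  # Copy only globs that actually matched a file.
  if [ -e "$deb" ]; then cp "$deb" "$DEST/"; fi
done
ls "$DEST"
```

Depending on the ownership of $NV_WORKSPACE, the `mkdir` and `cp` may need to run under sudo.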
- Import the CUDA version variables exported by the SDK by sourcing versions.conf:
$ source ${NV_WORKSPACE}/drive-linux/filesystem/contents/debians/versions.conf
- Build the final filesystem starting from ubuntu-base:
$ sudo -E /opt/nvidia/driveos/common/filesystems/build-fs/17/bin/build_fs.py -w ${NV_WORKSPACE}/ -i ${NV_WORKSPACE}/drive-linux/filesystem/targetfs-images/driveos-${FS_VARIANT}-ubuntu-20.04-rfs.MANIFEST.json -o $PWD/output/
Example: Steps to rebuild driveos-oobe-desktop-rfs filesystem from ubuntu-base
- Install the driveos-oobe-desktop-rfs SDK package to install its manifest file driveos-oobe-desktop*MANIFEST.json:
$ sudo -E dpkg -i ./nv-driveos-linux-driveos-oobe-desktop-ubuntu-20.04-rfs-*_amd64.deb
- Install the Canonical ubuntu-base and arm64 Debian SDK packages:
$ sudo -E dpkg -i ./nv-driveos-linux-ubuntu-20.04-arm64-debians-<release>-<GCID>_<release>-<GCID>_amd64.deb ./nv-driveos-linux-ubuntu-20.04-base-<release>-<GCID>_<release>-<GCID>_amd64.deb
- Install the NVIDIA Mellanox and Docker arm64 Debian packages:
$ sudo -E dpkg -i ./nv-driveos-linux-mlnx-docker-arm64-debians-<release>-<GCID>_<release>-<GCID>_amd64.deb
- Copy the NVIDIA CUDA, cuDNN, TensorRT and DriveWorks arm64 Debian packages to $NV_WORKSPACE/drive-linux/filesystem/contents/debians/nvidia/.
- Import the CUDA version variables exported by the SDK by sourcing versions.conf:
$ source ${NV_WORKSPACE}/drive-linux/filesystem/contents/debians/versions.conf
- Build the final filesystem starting from ubuntu-base:
$ sudo -E /opt/nvidia/driveos/common/filesystems/build-fs/17/bin/build_fs.py -w ${NV_WORKSPACE}/ -i ${NV_WORKSPACE}/drive-linux/filesystem/targetfs-images/driveos-oobe-desktop-ubuntu-20.04-rfs.MANIFEST.json -o $PWD/output/
Similarly, to install any other filesystem manifest, install the corresponding filesystem SDK package.
Rebuilding a Filesystem from ubuntu-base Using NVIDIA DRIVE OS Core CopyTarget YAML Manifests
This section provides instructions to rebuild the Linux RFS using the DRIVE OS Core CopyTarget YAML manifests from ${NV_WORKSPACE}/drive-linux/filesystem/copytarget/manifest/*.yaml to copy the Core files instead of obtaining them from the NVIDIA DRIVE OS Core Debian packages.
This is useful when you have modified the Core files on your host and want to copy them to the filesystem without rebuilding the NVIDIA DRIVE OS Core Debian packages.
Note: If you instead wish to add additional files to an already available filesystem, follow the instructions in How to Add a Single Debian Package and a Single File to the Linux Filesystem.
- Install the rfs SDK package to install its manifest file driveos-${FS_VARIANT}*MANIFEST.json:
$ sudo -E dpkg -i ./nv-driveos-linux-driveos-${FS_VARIANT}-ubuntu-20.04-rfs-*_amd64.deb
Here, ${FS_VARIANT} is the name of the filesystem variant, e.g. oobe-desktop. For an example that rebuilds driveos-oobe-desktop-rfs, refer to Example: Rebuilding driveos-oobe-desktop-rfs Filesystem from ubuntu-base Using NVIDIA DRIVE OS Core CopyTarget YAML Manifests below.
- Install the Canonical ubuntu-base and arm64 Debian SDK packages:
$ sudo -E dpkg -i ./nv-driveos-linux-ubuntu-20.04-arm64-debians-*_amd64.deb ./nv-driveos-linux-ubuntu-20.04-base-*_amd64.deb
- Install the NVIDIA Mellanox and Docker arm64 Debian packages:
$ sudo -E dpkg -i ./nv-driveos-linux-mlnx-docker-arm64-debians-*_amd64.deb
- Copy the NVIDIA CUDA, cuDNN, TensorRT and DriveWorks arm64 Debian packages to $NV_WORKSPACE/drive-linux/filesystem/contents/debians/nvidia/.
- Import the CUDA version variables exported by the SDK by sourcing versions.conf:
$ set -a
$ source ${NV_WORKSPACE}/drive-linux/filesystem/contents/debians/versions.conf
$ set +a
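The set -a / set +a pair matters here: it enables the shell's allexport mode, so every variable assigned while versions.conf is sourced is exported to child processes such as build_fs.py. A minimal stand-alone demo with a stand-in file (DEMO_CUDA_VERSION is a made-up name, not a real versions.conf variable):

```shell
# Stand-in for versions.conf -- the real file ships with the SDK.
printf 'DEMO_CUDA_VERSION=11.4\n' > /tmp/versions_demo.conf

set -a                      # from here on, every assigned variable is exported
. /tmp/versions_demo.conf
set +a                      # restore normal assignment semantics

# A child process can now see the variable:
sh -c 'echo "child sees DEMO_CUDA_VERSION=$DEMO_CUDA_VERSION"'
# prints: child sees DEMO_CUDA_VERSION=11.4
```

Without the `set -a`, the sourced assignments would exist only in the current shell and build_fs.py would not see them.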
- Update ${NV_WORKSPACE}/drive-linux/filesystem/targetfs-images/driveos-${FS_VARIANT}-ubuntu-20.04-rfs.MANIFEST.json to remove the DRIVE OS Core Debian packages:
$ sed -i '/nv-driveos-linux-\(aurix\|core\|firmware\|headers\|kernel-modules\|libraries\|oobe\|samples\|security\|tools\).*/d' ${NV_WORKSPACE}/drive-linux/filesystem/targetfs-images/driveos-${FS_VARIANT}-ubuntu-20.04-rfs.MANIFEST.json
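Because sed -i rewrites the manifest in place, it can be worth keeping a backup; GNU sed does this itself when a suffix is passed to -i. The demo below runs on a throwaway file so it is safe to paste anywhere; on your host you would point it at the real Build-FS manifest instead:

```shell
# Throwaway stand-in for the manifest; the second line mimics a Core package entry.
printf 'keep-this-line\nnv-driveos-linux-core_9.9.9_arm64.deb\n' > /tmp/manifest_demo.txt

# -i.bak edits in place AND leaves the original as /tmp/manifest_demo.txt.bak
sed -i.bak '/nv-driveos-linux-\(aurix\|core\|firmware\).*/d' /tmp/manifest_demo.txt

cat /tmp/manifest_demo.txt       # core entry removed
cat /tmp/manifest_demo.txt.bak   # backup still contains it
```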
- Update ${NV_WORKSPACE}/drive-linux/filesystem/targetfs-images/driveos-${FS_VARIANT}-ubuntu-20.04-rfs.MANIFEST.json to use the DRIVE OS Core CopyTarget YAML manifests. You may update the "CopyTargets" key in the Build-FS manifest manually to include the YAML manifests from ${NV_WORKSPACE}/drive-linux/filesystem/copytarget/manifest/*.yaml, or use the following single command (it can be pasted as one block into your bash terminal):
python3 -B - << END
import json
from collections import OrderedDict

manifest = "${NV_WORKSPACE}/drive-linux/filesystem/targetfs-images/driveos-${FS_VARIANT}-ubuntu-20.04-rfs.MANIFEST.json"
with open(manifest) as f:
    bkConfig = json.loads(f.read(), object_pairs_hook=OrderedDict)
bkConfig["CopyTargets"] = [
    "\${COPYTARGETYAML_DIR}/copytarget-configs.yaml",
    "\${COPYTARGETYAML_DIR}/copytarget-aurix.yaml",
    "\${COPYTARGETYAML_DIR}/copytarget-firmware.yaml",
    "\${COPYTARGETYAML_DIR}/copytarget-headers.yaml",
    "\${COPYTARGETYAML_DIR}/copytarget-kernel-modules.yaml",
    "\${COPYTARGETYAML_DIR}/copytarget-libraries.yaml",
    "\${COPYTARGETYAML_DIR}/copytarget-samples.yaml",
    "\${COPYTARGETYAML_DIR}/copytarget-security.yaml",
    "\${COPYTARGETYAML_DIR}/copytarget-tools.yaml"
]
with open(manifest, 'w') as f:
    f.write(json.dumps(bkConfig, indent=4, sort_keys=False))
END
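To verify that the edit took effect, the CopyTargets key can be printed back out of the manifest. The snippet below demonstrates this on a small stand-in JSON file; on your host, point MANIFEST at the real Build-FS manifest path used above:

```shell
# Stand-in manifest with a one-entry CopyTargets list.
MANIFEST=/tmp/copytargets_demo.json
printf '{"CopyTargets": ["${COPYTARGETYAML_DIR}/copytarget-configs.yaml"]}\n' > "$MANIFEST"

# Print each CopyTargets entry on its own line.
python3 -c 'import json, sys; print("\n".join(json.load(open(sys.argv[1]))["CopyTargets"]))' "$MANIFEST"
```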
- Build the final filesystem starting from ubuntu-base:
$ sudo -E /opt/nvidia/driveos/common/filesystems/build-fs/17/bin/build_fs.py -w ${NV_WORKSPACE}/ -i ${NV_WORKSPACE}/drive-linux/filesystem/targetfs-images/driveos-${FS_VARIANT}-ubuntu-20.04-rfs.MANIFEST.json -o $PWD/output/
Example: Rebuilding driveos-oobe-desktop-rfs Filesystem from ubuntu-base Using NVIDIA DRIVE OS Core CopyTarget YAML Manifests
This section provides an example to rebuild the Linux OOBE Desktop RFS using the DRIVE OS Core CopyTarget YAML manifests from ${NV_WORKSPACE}/drive-linux/filesystem/copytarget/manifest/*.yaml to copy the Core files instead of obtaining them from the NVIDIA DRIVE OS Core Debian packages.
This is useful when you have modified the Core files on your host and want to copy them to the filesystem without rebuilding the NVIDIA DRIVE OS Core Debian packages.
- Install the driveos-oobe-desktop-rfs SDK package to install its manifest file driveos-oobe-desktop*MANIFEST.json:
$ sudo -E dpkg -i ./nv-driveos-linux-driveos-oobe-desktop-ubuntu-20.04-rfs-*_amd64.deb
- Install the Canonical ubuntu-base and arm64 Debian SDK packages:
$ sudo -E dpkg -i ./nv-driveos-linux-ubuntu-20.04-arm64-debians-*_amd64.deb ./nv-driveos-linux-ubuntu-20.04-base-*_amd64.deb
- Install the NVIDIA Mellanox and Docker arm64 Debian packages:
$ sudo -E dpkg -i ./nv-driveos-linux-mlnx-docker-arm64-debians-*_amd64.deb
- Copy the NVIDIA CUDA, cuDNN, TensorRT and DriveWorks arm64 Debian packages to $NV_WORKSPACE/drive-linux/filesystem/contents/debians/nvidia/.
- Import the CUDA version variables exported by the SDK by sourcing versions.conf:
$ set -a
$ source ${NV_WORKSPACE}/drive-linux/filesystem/contents/debians/versions.conf
$ set +a
- Update ${NV_WORKSPACE}/drive-linux/filesystem/targetfs-images/driveos-oobe-desktop-ubuntu-20.04-rfs.MANIFEST.json to remove the DRIVE OS Core Debian packages:
$ sed -i '/nv-driveos-linux-\(aurix\|core\|firmware\|headers\|kernel-modules\|libraries\|oobe\|samples\|security\|tools\).*/d' ${NV_WORKSPACE}/drive-linux/filesystem/targetfs-images/driveos-oobe-desktop-ubuntu-20.04-rfs.MANIFEST.json
- Update ${NV_WORKSPACE}/drive-linux/filesystem/targetfs-images/driveos-oobe-desktop-ubuntu-20.04-rfs.MANIFEST.json to use the DRIVE OS Core CopyTarget YAML manifests. You may update the "CopyTargets" key in the Build-FS manifest manually to include the YAML manifests from ${NV_WORKSPACE}/drive-linux/filesystem/copytarget/manifest/*.yaml, or use the following single command (it can be pasted as one block into your bash terminal):
python3 -B - << END
import json
from collections import OrderedDict

manifest = "${NV_WORKSPACE}/drive-linux/filesystem/targetfs-images/driveos-oobe-desktop-ubuntu-20.04-rfs.MANIFEST.json"
with open(manifest) as f:
    bkConfig = json.loads(f.read(), object_pairs_hook=OrderedDict)
bkConfig["CopyTargets"] = [
    "\${COPYTARGETYAML_DIR}/copytarget-configs.yaml",
    "\${COPYTARGETYAML_DIR}/copytarget-aurix.yaml",
    "\${COPYTARGETYAML_DIR}/copytarget-firmware.yaml",
    "\${COPYTARGETYAML_DIR}/copytarget-headers.yaml",
    "\${COPYTARGETYAML_DIR}/copytarget-kernel-modules.yaml",
    "\${COPYTARGETYAML_DIR}/copytarget-libraries.yaml",
    "\${COPYTARGETYAML_DIR}/copytarget-samples.yaml",
    "\${COPYTARGETYAML_DIR}/copytarget-security.yaml",
    "\${COPYTARGETYAML_DIR}/copytarget-tools.yaml"
]
with open(manifest, 'w') as f:
    f.write(json.dumps(bkConfig, indent=4, sort_keys=False))
END
- Build the final filesystem starting from ubuntu-base:
$ sudo -E /opt/nvidia/driveos/common/filesystems/build-fs/17/bin/build_fs.py -w ${NV_WORKSPACE}/ -i ${NV_WORKSPACE}/drive-linux/filesystem/targetfs-images/driveos-oobe-desktop-ubuntu-20.04-rfs.MANIFEST.json -o $PWD/output/