AV PCT Configuration
Autonomous Vehicle Virtual Machine Configuration
The NVIDIA DRIVE AGX™ system Autonomous Vehicle (AV) Partition Configuration Table (PCT) consists of server VMs, service VMs, and the NVIDIA DRIVE™ AV Guest-OS (GOS) VM configurations.
The PCT is divided according to the composition of the NVIDIA DRIVE AV GOS VMs:
Single Linux GOS VM
Single QNX GOS VM
Dual QNX GOS VMs
Profile Makefile
The profile makefile defines the PCT configuration. Each PCT has its own profile makefiles.
Standard SDK/PDK Package
The default profile makefile (profile.mk) is used for the Standard Package.
PCT name | PCT | Profile Makefile for Standard build |
---|---|---|
Single Linux GOS VM | linux | profile.mk |
Single QNX GOS VM | qnx | profile.mk |
Dual QNX GOS VMs | dual-qnx | profile.mk |
The Profile Makefile is located at:
Single Linux GOS VM (linux PCT):
<top>/<NV_SDK_NAME_FOUNDATION>/platform-config/hardware/nvidia/platform/t23x/automotive/pct/drive_av/linux/profile.mk
Single QNX GOS VM (qnx PCT):
<top>/<NV_SDK_NAME_FOUNDATION>/platform-config/hardware/nvidia/platform/t23x/automotive/pct/drive_av/qnx/profile.mk
Dual QNX GOS VMs (dual-qnx PCT):
<top>/<NV_SDK_NAME_FOUNDATION>/platform-config/hardware/nvidia/platform/t23x/automotive/pct/drive_av/dual-qnx/profile.mk
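A profile makefile sets the build knobs that later sections of this document toggle. An illustrative excerpt (the flag values shown here vary by PCT, package, and release; check your profile.mk rather than relying on these defaults):
ENABLE_FSI := y
ENABLE_CHAIN_C_BOOTCHAIN := y
ENABLE_DRAM_ECC := n
ENABLE_TEST_GPT_L3PT := y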
Safety SDK/PDK Package
The Safety build package needs an additional option (the PCT variant option, -p) at bind time to select the profile makefile for the Safety Package.
The Safety package has three types of profile configurations (PCT variants): prod, prod_debug, and prod_debug_extra.
The prod_debug and prod_debug_extra PCT variants provide a debug environment based on the prod PCT variant.
PCT name | PCT | PCT Variant | Profile Makefile for Safety Build | Comment |
---|---|---|---|---|
Single Linux GOS VM | linux | N/A | N/A | Linux PCT is not supported in the Safety build. |
Single QNX GOS VM | qnx | prod | profile_prod.mk | |
| | prod_debug | profile_prod_debug.mk | Supports communication to the target over SSH/DHCP in the GOS VM. |
| | prod_debug_extra | profile_prod_debug_extra.mk | Combined UART is enabled. Server/VM logs are available. Supports SSH/DHCP/NFS in the GOS VM. |
Dual QNX GOS VMs | dual-qnx | prod | profile_prod.mk | |
| | prod_debug | profile_prod_debug.mk | Supports communication to the target over SSH/DHCP in the GOS VMs. |
| | prod_debug_extra | profile_prod_debug_extra.mk | Combined UART is enabled. Server/VM logs are available. Supports SSH/DHCP/NFS in the first GOS VM. |
These profile makefiles are located at:
Single QNX GOS VM (qnx PCT):
<top>/<NV_SDK_NAME_FOUNDATION>/platform-config/hardware/nvidia/platform/t23x/automotive/pct/drive_av/qnx/profile_prod.mk
<top>/<NV_SDK_NAME_FOUNDATION>/platform-config/hardware/nvidia/platform/t23x/automotive/pct/drive_av/qnx/profile_prod_debug.mk
<top>/<NV_SDK_NAME_FOUNDATION>/platform-config/hardware/nvidia/platform/t23x/automotive/pct/drive_av/qnx/profile_prod_debug_extra.mk
Dual QNX GOS VMs (dual-qnx PCT):
<top>/<NV_SDK_NAME_FOUNDATION>/platform-config/hardware/nvidia/platform/t23x/automotive/pct/drive_av/dual-qnx/profile_prod.mk
<top>/<NV_SDK_NAME_FOUNDATION>/platform-config/hardware/nvidia/platform/t23x/automotive/pct/drive_av/dual-qnx/profile_prod_debug.mk
<top>/<NV_SDK_NAME_FOUNDATION>/platform-config/hardware/nvidia/platform/t23x/automotive/pct/drive_av/dual-qnx/profile_prod_debug_extra.mk
Note:
The prod_debug and prod_debug_extra PCT variants are for testing/debugging purposes. These variants use a different file package that must not be used as part of the software stack in a running car.
Supported Platform and CPU Allocation
The following tables list the supported combinations of PCT, platform, and SDK/PDK package.
The CPUs column indicates the number of CPUs assigned to the guest VM and to the server VMs, respectively.
Supported Platform and Board Name
Official Name | Platform | Board Name | 940/694-BOARD-SKU-REV | Comment |
---|---|---|---|---|
NVIDIA DRIVE Orin™ | p3663 | | | |
| | | | eMMC size increased to 64 GB compared to 32 GB on p3663-a01 |
| | | | MAX96981B display serializer with DSC on top of p3663-01-a02 boards |
NVIDIA DRIVE AGX Orin™ DevKit | p3710 | | | |
| | | | GMSL out interconnect board delta compared to the DisplayPort interconnect board on p3710-10-a03 |
| | | | GMSL out interconnect board delta compared to the DisplayPort interconnect board on p3710-10-a04 |
| | | | GMSL out interconnect board delta compared to the DisplayPort interconnect board on p3710-10-s05 |
DRIVE Recorder | p4024 | | | |
Note:
Board names with the suffix -f1, -ct02, -ct03, or -ct04 have eight CPU cores; board names with the suffix -f1 are for NVIDIA internal use only. Otherwise, NVIDIA Orin has 12 CPU cores.
Standard SDK/PDK Package
Orin type | PCT | CPU assignment |
---|---|---|
12 CPU cores | qnx / dual-qnx / linux | 12 for GOS0 + 1 CPU shared with the HOST1X server + 1 CPU shared with the other servers (and GOS1) |
8 CPU cores | qnx / linux | 8 for GOS0 + 1 CPU shared with the HOST1X server + 1 CPU shared with the other servers |
Safety SDK/PDK Package
Orin type | PCT | PCT Variant | CPU assignment |
---|---|---|---|
12 CPU cores | qnx / dual-qnx | prod / prod_debug / prod_debug_extra | No difference from the standard package |
8 CPU cores | qnx | prod / prod_debug / prod_debug_extra | No difference from the standard package |
The example guest_config.h shows the mapping between the guest OS and services as well as their allocations.
Use Cases of NDAS/ECO, eMMC/UFS Secondary Boot Device
Depending on the use case, the storage configuration and peripheral assignment differ.
The following table lists the supported use cases with platforms.
Use cases | First boot device +Secondary boot device | Platforms |
Driving ECU (NDAS) | QSPI+eMMC | p3663 / p3710 |
EcoSystem/General (ECO) | QSPI+eMMC | p3663 / p3710 |
QSPI+UFS | p3710 | |
Recorder (REC) | QSPI+eMMC | p4024 |
ECID Read Access on Guest VMs
Read access to the ECID can be provided to guest VMs by setting the following in guest_config.h for a VM in the PCT settings:
can_read_ecid_hash = 1
The ECID gives the customer's application a way to obtain a unique ID for the platform.
Note:
ECID read access is disabled for Drive AV PCTs for all VMs by default.
Bind Options
A bind process creates a hypervisor image that combines the DTB/IFS/kernel of the server VMs, the hypervisor kernel, and the PCT.
Syntax:
# for standard package
$ cd drive-foundation/
# for safety package
$ cd drive-foundation-safety/
$ ./make/bind_partitions [-b <board_name>] <pct> [-u <use_case>] [-r <storage_config>] [-p <pct_variant>] [options]
Example of Standard Linux PCT + ECO(QSPI+UFS) use case on DRIVE AGX Orin Devkit:
bind_partitions -b p3710-10-a01 linux
Note:
The default configuration of p3710-1* is Standard ECO QSPI+UFS. If you want ECO QSPI+eMMC boot, use the -u eco option.
Example of Standard QNX PCT + NDAS use case on NVIDIA DRIVE Orin™:
bind_partitions -b p3663-a01 qnx -u ndas
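Example of Safety QNX PCT with the prod_debug PCT variant (a representative invocation; confirm the board name against Supported Platform and Board Name):
bind_partitions -b p3710-10-a01 qnx -p prod_debug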
Supported Bind Options
The following tables list the supported bind options.
Bind Options for Standard Package
Note:
Get the full <board_name> from Supported Platform and Board Name.
Use cases | bind cmd | -b <board_name> | <pct> | -u <use_case> (default) | -r <storage_config> (default) | Comment |
---|---|---|---|---|---|---|
ECO | bind_partitions | | | | | |
| | | | | | The -r no_ufs option disables UFS storage, which is used as the secondary boot device in the default P3710 ECO use case. Since UFS is disabled by this option, eMMC must be the secondary boot device. To make eMMC the secondary boot device of P3710 in this case, the -u eco option must be used together with the -r no_ufs option. |
NDAS | bind_partitions | | | | | With the BASE storage config option (-r base), the UFS device is disabled. |
| | | | | | dual-qnx supports only the NDAS storage config. |
REC | bind_partitions | | | N/A | N/A | |
Bind Options for Safety Package
Use cases | bind cmd | -b <board_name> | <pct> | -u <use_case> (default) | -r <storage_config> (default) | -p <pct_variant> | Comment |
---|---|---|---|---|---|---|---|
ECO | bind_partitions | | | | | | Refer to the Comment column of Bind Options for Standard Package. |
| | | | | | | |
NDAS | bind_partitions | | | | | | |
| | | | | | | |
Bind Options for Power Profile
Platforms | Standard or Safety | Power Profile (default) |
---|---|---|
p3663 | Standard and Safety | |
p3710 | Standard | |
| Safety | |
p3710-10-a01-ct02 | QNX Safety and QNX Standard | |
p3710-10-a01-ct02 | Linux Standard | |
p3710-10-a01-ct03 | Standard and Safety | |
p3710-10-a01-ct04 | Safety | |
p3710-10-a01-ct04 | Standard | |
p4024 | Standard | |
Bind Options for SoC ID for C2C in GOS-DT
Platforms | SOC ID for C2C in GOS-DT | Comments |
---|---|---|
p3663 / p3710 / p4024 | -s <SOC_ID:1~4294967295> | Specify the SOC ID in the GOS-DT and rebuild the GOS device tree |
When the <SOC_ID> argument is specified with the -s option, SOC_IDENTIFICATION_VALUE is defined, and the <SOC_ID> value is set for the soc_id property in the GOS-DT.
#ifdef SOC_IDENTIFICATION_VALUE
soc_id = <SOC_IDENTIFICATION_VALUE>;
#else
soc_id = <1>;
#endif
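For example, a bind invocation that sets the SOC ID to 3 (the SOC ID value and board name here are illustrative):
bind_partitions -b p3710-10-a01 qnx -s 3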
Enable DRAM ECC for DRIVE AV PCT
The DRAM ECC feature can be enabled for the DRIVE AV PCT by modifying the profile makefiles.
The following flag needs to be set to y in the profile makefiles (with included .mk files) to enable DRAM ECC.
ENABLE_DRAM_ECC := y
Refer to Profile Makefile to determine which profile makefile needs to be modified.
Alternatively, you can pass ENABLE_DRAM_ECC=y in the bind_partitions command to override the default setting instead of modifying the flag directly, as shown below.
bind_partitions -b p3663-a01 qnx ENABLE_DRAM_ECC=y
Refer to Bind Options for the bind_partitions command.
Note:
The DRAM ECC feature is enabled for QNX safety PCTs by default.
Disable FSI for DRIVE AV Linux PCTs
FSI can be disabled for the DRIVE AV Linux PCTs by modifying the profile makefiles.
The following flag needs to be changed from y to n in the profile makefiles (with included .mk files) to disable FSI.
ENABLE_FSI := n
Refer to Profile Makefile to determine which profile makefile needs to be modified.
Alternatively, you can pass ENABLE_FSI=n in the bind_partitions command to override the default setting instead of modifying the flag directly, as shown below.
bind_partitions -b p3663-a01 linux ENABLE_FSI=n
Refer to Bind Options for the bind_partitions command.
Note:
FSI is enabled for the DRIVE AV Linux PCTs by default.
Enable DCLS for DRIVE AV PCT
Dual Core Lock Step (DCLS) can be enabled for the main CPU complex (referred to as the CCPLEX) by modifying the profile makefiles.
Enabling CCPLEX DCLS reduces the effective number of CPU cores by half, so it is only supported in the Single GOS VM PCTs with either 8 or 12 CPU cores.
The following flag needs to be changed from n to y in the profile makefiles (with included .mk files) to enable DCLS.
ENABLE_CCPLEX_DCLS := y
Refer to Profile Makefile to determine which profile makefile needs to be modified.
Alternatively, you can pass ENABLE_CCPLEX_DCLS=y in the bind_partitions command to override the default setting instead of modifying the flag directly, as shown below.
bind_partitions -b p3663-a01 qnx ENABLE_CCPLEX_DCLS=y
Refer to Bind Options for the bind_partitions command.
Note:
DCLS is disabled for the DRIVE AV PCT by default.
DM-VERITY and Root Filesystem Permission for DRIVE AV Linux PCTs
To enable dm-verity, the root filesystem partition permission should be read-only.
Therefore, dm-verity and a read-write root filesystem cannot be enabled at the same time. If both are enabled, the bind_partitions command fails.
The following table shows the default values and the possible combinations of the OS_ARGS_ENABLE_DM_VERITY and OS_ARGS_ROOT_MOUNT_PER values.
Platform and Use Cases | Extra Option of OS_ARGS_ENABLE_DM_VERITY | Extra Option of OS_ARGS_ROOT_MOUNT_PER | OS_ARGS_ENABLE_DM_VERITY Value | OS_ARGS_ROOT_MOUNT_PER Value |
---|---|---|---|---|
P3663/P3710 AV+L ECO | Default (or OS_ARGS_ENABLE_DM_VERITY=0) | Default (or OS_ARGS_ROOT_MOUNT_PER=rw) | 0 | rw |
| | OS_ARGS_ROOT_MOUNT_PER=ro | 0 | ro |
| OS_ARGS_ENABLE_DM_VERITY=1 | Default (or OS_ARGS_ROOT_MOUNT_PER=rw) | Bind error: dm-verity is enabled while the root filesystem has rw permission | |
| | OS_ARGS_ROOT_MOUNT_PER=ro | 1 | ro |
P3663/P3710 AV+L NDAS (-u ndas), P4024 AV+L-REC (-b p4024-*) | Default (or OS_ARGS_ENABLE_DM_VERITY=0) | Default (or OS_ARGS_ROOT_MOUNT_PER=ro) | 0 | ro |
| | OS_ARGS_ROOT_MOUNT_PER=rw | 0 | rw |
| OS_ARGS_ENABLE_DM_VERITY=1 | Default (or OS_ARGS_ROOT_MOUNT_PER=ro) | 1 | ro |
| | OS_ARGS_ROOT_MOUNT_PER=rw | Bind error: dm-verity is enabled while the root filesystem has rw permission | |
For example, to enable dm-verity, you can pass OS_ARGS_ENABLE_DM_VERITY=1 OS_ARGS_ROOT_MOUNT_PER=ro in the bind_partitions command regardless of the default values.
bind_partitions -b p3663-a01 linux OS_ARGS_ENABLE_DM_VERITY=1 OS_ARGS_ROOT_MOUNT_PER=ro
Change Chain-C L2PT Partitions to a Dummy Partition
The second-level Chain-C partitions can be removed, by changing them to a dummy partition, by modifying the profile makefiles.
The following flag needs to be changed from y to n in the profile makefiles (with included .mk files) to disable Chain-C.
ENABLE_CHAIN_C_BOOTCHAIN := n
Refer to Profile Makefile to figure out which profile makefile needs to be modified.
Alternatively, you can pass ENABLE_CHAIN_C_BOOTCHAIN=n in the bind_partitions command to override the default setting instead of modifying the flag directly, as shown below.
bind_partitions -b p3663-a01 qnx ENABLE_CHAIN_C_BOOTCHAIN=n
Refer to Bind Options for the bind_partitions command.
AV PCT Input/Output Resource Assignment
The ownership of input/output peripherals is divided between the guest OS and the service VMs.
The following table details the type of access the guest OS or service VMs have for each I/O peripheral.
QNX and Linux PCT Standard Package Profile
Resources | Resources Shared? | Update VM | Guest OS |
---|---|---|---|
DRAM | Yes | 512 MB | ~30 GB (32 GB RAM) / ~13 GB (16 GB RAM) |
iGPU | Yes | N/A | Virtual |
DLA | No | N/A | DLA0, DLA1 |
PVA | No | N/A | PVA0, PVA1 |
NvEnc / OFA | No | N/A | Assigned |
VIC | No | N/A | Assigned |
Display | No | N/A | Assigned |
QSPI | Yes | Virtual | N/A |
eMMC0 (32/64 GB) | Yes | Virtual | Virtual |
UFS (256 GB) | Yes | Virtual | Virtual |
1G Ethernet | No | N/A | Assigned |
SPI Master | No | N/A | Assigned |
SPI Slave | No | N/A | Assigned (Only for linux PCT) |
RCE (ISP, VI, MIPICAL, NVCSI, CSI lanes) | No | N/A | Assigned |
I2C Master | No | N/A | Assigned |
GPIO | No | N/A | Assigned |
NVIDIA Tegra CAN | No | N/A | Assigned |
NVDEC | No | N/A | Assigned |
NVJPG | No | N/A | Assigned |
10G Ethernet | No | N/A | Assigned |
PCIe Controller [5,6] EP+RP for C2C | No | N/A | Assigned (Only for P3710) |
PCIe Controller [2], UART[2] for Wifi/BT | No | N/A | Assigned |
SE Engine | Yes | Virtual | Virtual |
I2S, A2B/codec driver | No | N/A | Assigned |
dGPU | No | N/A | Assigned |
Dual-QNX PCT Standard Package Profile
Resources | Resources Shared? | Update VM | Guest OS | Guest OS1 |
---|---|---|---|---|
DRAM | Yes | 512 MB | ~29 GB (32 GB RAM) / ~12 GB (16 GB RAM) | 512 MB |
iGPU | Yes | N/A | Virtual | N/A |
DLA | No | N/A | DLA0, DLA1 | N/A |
PVA | No | N/A | PVA0, PVA1 | N/A |
NvEnc / OFA | No | N/A | Assigned | N/A |
VIC | No | N/A | Assigned | N/A |
Display | No | N/A | Assigned | N/A |
QSPI | Yes | Virtual | N/A | N/A |
eMMC0 (32/64 GB) | Yes | Virtual | Virtual | Virtual |
UFS (256 GB) | Yes | Virtual | Virtual | N/A |
1G Ethernet | No | N/A | Assigned | N/A |
SPI Master | No | N/A | Assigned | N/A |
SPI Slave | No | N/A | Assigned (Only for linux PCT) | N/A |
RCE (ISP, VI, MIPICAL, NVCSI, CSI lanes) | No | N/A | Assigned | N/A |
I2C Master | No | N/A | Assigned | N/A |
GPIO | No | N/A | Assigned | N/A |
NVIDIA Tegra CAN | No | N/A | Assigned | N/A |
NVDEC | No | N/A | Assigned | N/A |
NVJPG | No | N/A | Assigned | N/A |
10G Ethernet | No | N/A | Assigned | N/A |
PCIe Controller [5,6] EP+RP for C2C | No | N/A | Assigned (Only for P3710) | N/A |
PCIe Controller [2], UART[2] for Wifi/BT | No | N/A | N/A | Assigned |
SE Engine | Yes | Virtual | Virtual | Virtual |
I2S, A2B/codec driver | No | N/A | Assigned | N/A |
dGPU | No | N/A | Assigned | N/A |
Note:
Regarding SDRAM usage, the amount of physical memory available for each virtual machine for virtual RAM is determined by a fixed (base) amount plus a dynamic (growth) amount. The value for the fixed amount is read directly from the PCT. The value for the dynamic amount is based on the amount of memory remaining after the fixed allocation and subsequently assigned to each virtual machine based on a per-VM growth factor. Higher growth factors result in a higher proportion of memory allocated to a virtual machine.
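One way to read this, assuming the growth amount is distributed proportionally to the growth factors (an interpretation of the note above, not a formula quoted from the PCT sources):
vm_memory(i) = base(i) + (growth_factor(i) / sum_of_growth_factors) * (total_memory - sum_of_base_amounts)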
IVC Mapping
Inter-virtual Machine Communication (IVC) facilitates data exchange between two virtual machines over shared memory.
The platform_config.h shows IVC queues between VMs, services, and servers in the AV PCT.
Mempool Mapping
The platform_config.h shows mempool mapping between VMs, services, and servers in the AV PCT.
Note:
Each VM/service Guest ID (GID) used in the IVC/mempool mapping is defined in the following code snippet:
<top>/<NV_SDK_NAME_FOUNDATION>/virtualization/make/t23x/server-partitions.mk
ifeq (,$(NR_VM))
NR_VM := 0
VM_GID = $(shell echo $$(($(NR_VM))))
VM_GID_MASK = $(shell echo $$((1 << ($(NR_VM)))))
INC_NR_VM = $(shell echo $$(($(NR_VM)+1)))
ifeq (y,$(ENABLE_GUEST0_VM))
LOCAL_FLAGS += -DGID_GUEST0_VM=$(VM_GID)
LOCAL_FLAGS += -DGOS0_VM=$(VM_GID_MASK)
GID_GUEST0_VM := $(VM_GID)
NR_VM := $(INC_NR_VM)
endif
ifeq (y,$(ENABLE_GUEST1_VM))
LOCAL_FLAGS += -DGID_GUEST1_VM=$(VM_GID)
LOCAL_FLAGS += -DGOS1_VM=$(VM_GID_MASK)
GID_GUEST1_VM := $(VM_GID)
NR_VM := $(INC_NR_VM)
endif
ifeq (y,$(ENABLE_UPDATE_VM))
LOCAL_FLAGS += -DGID_UPDATE=$(VM_GID)
LOCAL_FLAGS += -DUPDATE_VM=$(VM_GID_MASK)
GID_UPDATE := $(VM_GID)
NR_VM := $(INC_NR_VM)
endif
endif
ID := 0
cont-files :=
GID = $(shell echo $$(($(ID)+$(NR_VM))))
GID_MASK = $(shell echo $$((1 << ($(ID)+$(NR_VM)))))
INC_ID = $(shell echo $$(($(ID)+1)))
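As a worked example of the snippet above, assuming ENABLE_GUEST0_VM=y, ENABLE_GUEST1_VM=n, and ENABLE_UPDATE_VM=y:
# GID_GUEST0_VM = 0, GOS0_VM   = 1 << 0 = 0x1
# GID_UPDATE    = 1, UPDATE_VM = 1 << 1 = 0x2
# NR_VM = 2, so the server partitions are assigned GID = ID + 2 onward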
GPIO Ownership
The guest_gpio_ownership.h shows GPIO ownership to a guest OS.
I2C Ownership
The guest_i2c_ownership.h shows I2C ownership to a guest OS.
NVIDIA DRIVE AV Storage Configuration
In the PCT, there are three configuration files that implement the three-level partitioning concept:
global_storage.cfg
This is the “root” of the definition hierarchy.
It defines the top-level partitioning and contains partitions that are present regardless of which boot chain is active.
It is referred to as the first-level (level-1) partition table storage config file.
boot_chain_storage.cfg, the file named after “sub_cfg_file=” in level-1 partitions.
It contains partitions that are unique to each of the boot chains.
It is referred to as the second-level (level-2) partition table storage config file.
qnx_gos0_storage.cfg, the file named after “sub_cfg_file=” in level-2 partitions.
It contains partitions that are unique to each of the level-2 container partitions.
It is referred to as the third-level (level-3) partition table storage config file.
Three-Level Partitioning
The following diagram gives an example of a three-level partition layout and the three-level partition tables.
[Diagram: Example three-level partition layout and three-level partition tables]
The eMMC A chain: It consists of all eMMC partitions assigned to all virtual machines in the Partition Configuration Table (PCT). All content in this chain can be overwritten by the OTA application.
The eMMC B chain: This is the recovery chain. The target boots in this chain when the OTA application is about to update other boot chains.
The QSPI C chain: This is the recovery chain to boot with only QSPI. The target boots in this chain when the OTA application is about to update other boot chains.
Persistent partitions for all virtual machines: The data partitions that are NOT updated by the OTA application. Data on these partitions remains consistent across multiple OTA cycles.
Inspecting global_storage.cfg (level-1), you will notice that it refers to boot_chain_storage.cfg as a “sub_cfg_file”. This elevates boot_chain_storage.cfg to level-2.
...
[partition]
name=A_qspi_chain
...
sub_cfg_file=boot_chain_storage.cfg
[partition]
name=B_qspi_chain
...
sub_cfg_file=boot_chain_storage.cfg
[partition]
name=C_qspi_chain
...
sub_cfg_file=boot_chain_c_storage.cfg
Similarly, inspecting boot_chain_storage.cfg (level-2), you will notice that it refers to the OS storage config file. This elevates the OS storage config file to level-3.
[partition]
name=qnx-gos0
...
sub_cfg_file=qnx_gos0_storage.cfg
Since level-3 is derived from level-2, its content is duplicated in each of the boot chains (A and B).
Level-1 Storage Configuration Structure
The general structure of this file is:
device 1 information
partition 1.1 information
....
partition 1.n information
device 2 information
partition 2.1 information
....
partition 2.n information
and so on.
The device information highlighted by this application note is:
linux_name=/dev/block/3270000.spi - name of the peripheral device
size=0x940000 - total size of the storage device
Note:
For level-2 and level-3 configurations, the size is not the total size of the device; it is the allowed size of the storage device for that level, as defined in the previous level.
The partition information highlighted by this application note is:
name=bct - logical name of the partition
size=0x80000 - size of the partition
Partitions that use “sub_cfg_file” are container partitions. These partitions share the same space with the partitions in their “sub_cfg_file”. The size attribute specifies the allowed space the next level can use on the device.
For QSPI storage, the allowed space for level-2 and level-3 is specified by the size attribute:
[partition]
name=A_qspi_chain
size=0x940000
[partition]
name=B_qspi_chain
size=0x940000
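As a cross-check, the level-1 QSPI partitions of the ECO layout (see First-Level Partition Table (L1PT) Layout below) sum exactly to the 0x4000000 QSPI device size:
0x80000 (bct) + 0x80000 (pt) + 0x2C00000 (C_qspi_chain) + 0x80000 (bad-page) + 0x940000 (A_qspi_chain) + 0x940000 (B_qspi_chain) = 0x4000000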
Since level-2 and level-3 also allocate storage on eMMC/UFS, the limit on how much eMMC/UFS storage can be allocated is defined here:
[partition]
name=A_emmc_chain
size=EMMC_BOOTCHAIN_SIZE
[partition]
name=B_emmc_chain
size=EMMC_BOOTCHAIN_SIZE
[partition]
name=A_ufs_chain
size=UFS_BOOTCHAIN_SIZE
[partition]
name=B_ufs_chain
size=UFS_BOOTCHAIN_SIZE
Level-1 Partition Table
The information about all partitions in level-1, as allocated on both QSPI and eMMC/UFS, is stored in a partition table file:
[partition]
name=pt
size=0x80000
This partition CANNOT be updated once the DU process is done. This table is common to Chain A and Chain B, so both chains' future updates must preserve the content of the level-1 partitions; otherwise, the DU process will fail.
Level-2 & Level-3 Storage Configuration Structure
These follow the same rules as level-1; the general structure of the file is:
device 1 information
partition 1.1 information
....
partition 1.n information
device 2 information
partition 2.1 information
....
partition 2.m information
Each level can specify partitions on QSPI and eMMC/UFS storage devices. The “device” record size is the maximum allowed storage space for that level, not the full device size. Both levels affect the content of Chain A and Chain B.
Storage Device Passthrough Access
Storage device passthrough access is disabled by default for the DRIVE AV QNX prod and prod_debug PCT variants. For internal testing, passthrough access is selectively enabled for the following:
PCT (variant) Type | Passthrough Enabled Partition Name |
---|---|
DRIVE AV Linux PCT | GOS ROOTFS |
DRIVE AV QNX Standard PCT | GOS0-EFS3 |
DRIVE AV QNX Prod_debug_extra PCT Variant | GOS0-EFS3 |
Note:
DRIVE OS users are recommended to enable storage passthrough access (if required) for only one virtual storage partition of a physical storage device.
See the Virtual Partitions section for details regarding how to enable passthrough access using a 32-bit virtual_storage_ivc_ch value for a virtual partition.
Customizing the Drive AV Storage Configuration
Configuration Files
The default storage layouts for QSPI and eMMC/UFS are shown in their respective tables below (refer to Storage Layout). Users can customize the storage layout for mass storage partitions on the target by modifying the configuration files. After the configuration files are updated, the target must be flashed again.
For Single Linux GOS VM PCT:
Storage configuration files
<top>/<NV_SDK_NAME_FOUNDATION>/platform-config/hardware/nvidia/platform/t23x/automotive/pct/drive_av/linux/
GOS0 DTB for virtual storage nodes
<top>/<NV_SDK_NAME_LINUX>/kernel/source/hardware/nvidia/platform/t23x/automotive/kernel-dts/common/linux/storage/tegra234-common-storage-linux-gos.dtsi
For Single QNX GOS VM PCT:
Storage configuration files
<top>/<NV_SDK_NAME_FOUNDATION>/platform-config/hardware/nvidia/platform/t23x/automotive/pct/drive_av/qnx/
GOS0 DTB for virtual storage nodes
<top>/<NV_SDK_NAME_QNX>/bsp/device-tree/hardware/nvidia/platform/t23x/automotive/kernel-dts/common/qnx/storage/tegra234-common-driveav-storage-gos0.dtsi
For Dual QNX GOS VMs PCT:
Storage configuration files:
<top>/<NV_SDK_NAME_FOUNDATION>/platform-config/hardware/nvidia/platform/t23x/automotive/pct/drive_av/dual-qnx
GOS0 DTB for virtual storage nodes
<top>/<NV_SDK_NAME_QNX>/bsp/device-tree/hardware/nvidia/platform/t23x/automotive/kernel-dts/common/qnx/storage/tegra234-common-driveav-storage-gos0.dtsi
GOS1 DTB for virtual storage nodes
<top>/<NV_SDK_NAME_QNX>/bsp/device-tree/hardware/nvidia/platform/t23x/automotive/kernel-dts/common/qnx/storage/tegra234-common-driveav-storage-gos1.dtsi
Virtual Partitions
Partitions that are allocated to the QNX/Linux virtual machine (VM) and physically managed by the Virtual Storage Controller (VSC) server VM are defined as virtual partitions.
For more information, see the Storage Server / Virtualized Storage topic in the Virtualization Guide chapter.
These partitions are accessible to the VM(s) via inter-VM communication channels: the Inter-VM Channel (IVC), a bidirectional, short-command-oriented interface, and the memory pool (mempool), a shared memory buffer. These are only accessible by the GOS (QNX or Linux)/Update VM and the VSC.
Add the IVC/mempool number defines to give notice that those numbers will be reserved:
<top>/<NV_SDK_NAME_FOUNDATION>/platform-config/hardware/nvidia/platform/t23x/automotive/pct/drive_av/platform_ivc_config.h
#define GOS0_VSC_Q_[Ivc Queue Num] [Ivc Queue Num]
NOTE: Allocate an unused IVC number and define a new constant, editing [Ivc Queue Num]. IVC numbers can be added up to 255.
....
#define GOS0_VSC_M_[Mempool Num] [Mempool Num]
NOTE: Allocate an unused memory pool ID and define a new constant, editing [Mempool Num].
Add the IVC/mempool to the platform_conf structure (just a placeholder for the HVRTOS VSC server, to let the user know these are in use):
<top>/<NV_SDK_NAME_FOUNDATION>/platform-config/hardware/nvidia/platform/t23x/automotive/pct/drive_av/platform_config.h
#if defined(ENABLE_HVRTOS_VSC_EMMC_VIRT) || defined(ENABLE_HVRTOS_VSC_UFS_VIRT)
//[GOS0_VSC_Q_[Ivc Queue Num]] /* <partition name> */
#endif
....
#if defined(ENABLE_HVRTOS_VSC_EMMC_VIRT) || defined(ENABLE_HVRTOS_VSC_UFS_VIRT)
//[GOS0_VSC_M_[Mempool Num]] /* <partition name> */
#endif
NOTE: The non-HVRTOS VSC server is deprecated. There is no need to add the arrays to the platform_conf structure.
Note:
Adding the IVC/mempool arrays in platform_config.h was only for the non-HVRTOS VSC server. If ENABLE_HVRTOS_VSC_* is set to y in the profile makefile, the HVRTOS VSC server uses the IVC/mempool IDs parsed from virtual_storage_ivc_ch in the storage config files.
The new IVC number added in the previous step as [Ivc Queue Num], and the new mempool number added in the previous step as [Mempool Num], must be added in two configuration files:
In the storage configuration file where the new partition was added, the following new attributes need to be added.
The last four hex digits of virtual_storage_ivc_ch are the hexadecimal representation of the memory pool and IVC channel.
partition_attribute=<GID_GUEST0_VM+1>
virtual_storage_ivc_ch=0x8<GID_VSC_SERVER>20[Hex value of Mempool Num][Hex value of Ivc Queue Num]
The 32-bit virtual_storage_ivc_ch value can be broken down as follows:
Bit | Description |
[31] | Is Virtual Storage Flag [virt = 1, non-virt = 0] |
[30:24] | Storage Server ID [int value from 0-0x7F] |
[23] | Shared Partition Flag [shared = 1, exclusive = 0] |
[22] | RESERVED |
[21:18] | Partition Priority. [Values from 1 through 5 where 1 is highest priority.] |
[17] | Disable pass-through [1 = Disable, 0 = Enable] |
[16] | Read only flag [RO =1, RW = 0] |
[15:8] | Mempool ID [int value from 0-0xFF] |
[7:0] | IVC Queue ID [int value from 0-0xFF] |
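As a worked decode, take the value 0x8<GID_VSC_SERVER>964631 used in the GPT L3 example later in this chapter:
[31]    = 1                -> virtual storage
[30:24] = GID_VSC_SERVER   -> storage server ID
[23:16] = 0x96 (10010110b) -> shared = 1, priority = 5, pass-through disabled = 1, read/write
[15:8]  = 0x46 = 70        -> mempool ID
[7:0]   = 0x31 = 49        -> IVC queue ID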
For QNX GOS0 VM, the presence of the partitions to be used by the QNX VM is defined in its device tree file:
<top>/<NV_SDK_NAME_QNX>/bsp/device-tree/hardware/nvidia/platform/t23x/automotive/kernel-dts/common/qnx/storage/tegra234-common-driveav-storage-gos0.dtsi
tegra_virt_storage[Number] { <<== required name, number is incrementing
compatible = "nvidia,tegra-hv-storage"; <<== required field, as is
status = "okay"; <<== required field, as is
instance = <[Instance ID]>; <<== [Instance ID] 0..n for each device type. This will determine node name in GOS File System
ivc = <&tegra_hv [Ivc Queue Num]>; <<== ivc number used [Ivc Queue Num] in decimal
mempool = <[Mempool Num]>; <<== mempool number used [Mempool Num] in decimal
device-type = "[vblk_type]"; <<== [vblk_type] is one of "vblk_mnand", "vblk_sifs", "vblk_ufs"
partition-name = "<partition name>"; <<== partition name as defined earlier in storage configuration files.
read-only; <<== optional token for read-only partition. omit if read/write
iommus = <&smmu_niso0 TEGRA_SID_NISO0_UFS_1>; <<== required field for UFS device node, to mapping smmu.
dma-coherent; <<== required field for UFS device node. only for QNX
memory-phandles = < &{/smmu-static-mapping/vscd0-smmu-static-mapping} >; <<== required field for UFS device node, to mapping smmu. only for QNX
};
If a new UFS virtual partition needs to be added in the QNX PCT, edit NUM_UFS_PARTITIONS to set the total number of partitions. The NUM_UFS_PARTITIONS number is used for the offset/size of the UFS SMMU.
<top>/<NV_SDK_NAME_FOUNDATION>/platform-config/hardware/nvidia/platform/t23x/automotive/kernel-dts/common/qnx/storage/tegra234-common-driveav-storage-gos1.dts
#ifndef DISABLE_UFS
/*
* NUM_UFS_PARTITIONS should match the number of UFS partitions.
* A UFS partition node has device-type=vblk_ufs.
* Exceptionally, a UFS partition could have device-type=vblk_sifs in case the Secondary IFS
* exists in the UFS device.
*/
#ifdef NDAS_STORAGE_CONFIG
#define NUM_UFS_PARTITIONS 5
#else
#ifdef ENABLE_ECO_UFS_SEC_BOOT
#ifdef ENABLE_GOS0_SIFS_1
/* ECO QSPI+UFS case, secondary IFS exist in UFS */
#define NUM_UFS_PARTITIONS 9
#else
#define NUM_UFS_PARTITIONS 8
#endif
#else
#define NUM_UFS_PARTITIONS 2
#endif
#endif
#endif
For the qnx GOS0 VM, nvhvblk_[Ivc Queue Num] must be added to the nvsciipc table entry in the following file:
<top>/<NV_SDK_NAME_FOUNDATION>/platform-config/hardware/nvidia/platform/t23x/automotive/kernel-dts/common/qnx/tegra234-nvsciipc-gos0.dtsi
"INTER_VM", "nvhvblk_[Ivc Queue Num]", "[Ivc Queue Num]", "0", /* nvhvblk_guest */
Note:
For the linux GOS VM in the linux PCT, this step is not needed.
For the second qnx GOS VM in the dual-qnx PCT, refer to /<NV_SDK_NAME_FOUNDATION>/platform-config/hardware/nvidia/platform/t23x/automotive/kernel-dts/common/qnx/tegra234-nvsciipc-gos1.dtsi
For the qnx GOS0 VM, nvhvblk_[Ivc Queue Num] must be added to the list after nvhvblk_orin_gos: in the following file:
<top>/<NV_SDK_NAME_QNX>/nvidia-bsp/aarch64le/sbin/nvsciipc_secpolicy.cfg
nvhvblk_orin_gos:nvhvblk_[Ivc Queue Num],<already existing nvhvblk_* list>
Note:
For linux GOS VM in linux PCT, this step is not needed. For the second qnx GOS VM in dual-qnx PCT, see "nvhvblk_orin_gos1:" in the same nvsciipc_secpolicy.cfg file.
Constraints
QNX OS expects the partition layout in a specific order. Refer to the Mass Storage Partitions Configuration chapter.
Partition sizes should be 256 KB aligned.
IVC queue number should be smaller than 256.
Drive AV Storage Layout
First-Level Partition Table (L1PT) Layout
The global_storage.cfg file defines the total size of the storage devices, the boot chain container partitions, and the persistent partitions of the guest VMs.
The first-level configuration has the following layout for the ECO QSPI+eMMC use case:
Device | Partition | Size (Hex bytes) | IVC Value (Depends on VSC Server GID) | Mandatory | Customizable | Purpose |
QSPI - 0 size (0x4000000) | bct (BR-BCT) | 0x80000 | 0x8 <GID_VSC_SERVER> 162560 | Yes | No | Holds Board configuration information. |
pt (PT_1) | 0x80000 | 0x8 <GID_VSC_SERVER> 162661 | Yes | No | L1 Partition Table | |
C_qspi_chain | 0x2C00000 | 0x8 <GID_VSC_SERVER> 962C66 | No | Yes (Only related with GOS partitions) | L1 container partition for holding C chain on qspi | |
bad-page (PBL) | 0x80000 | N/A | Yes | No | For DRAM ECC | |
A_qspi_chain | 0x940000 | 0x8 <GID_VSC_SERVER> 962762 | Yes | Yes (Only related with GOS partitions) | L1 container partition for holding A chain on qspi | |
B_qspi_chain | 0x940000 | 0x8 <GID_VSC_SERVER> 962863 | Yes | Yes (Only related with GOS partitions) | L1 container partition for holding B chain on qspi | |
EMMC - 3 size (0xE8FC00000 for 64GB) (0x747C00000 for 32GB) | A_emmc_chain | 0x72D080000 for 64 GB 0x38D0C0000 for 32 GB | 0x8 <GID_VSC_SERVER> 962964 | Yes | Yes (Only related with GOS partitions) | L1 container partition for holding A chain on eMMC |
B_emmc_chain | 0x72D080000 for 64GB 0x38D0C0000 for 32GB | 0x8 <GID_VSC_SERVER> 962A65 | Yes | Yes (Only related with GOS partitions) | L1 container partition for holding B chain on eMMC | |
gos0-shared-pers | 0x10000000 | 0x8 <GID_VSC_SERVER> 1636ED | No | Yes | Persistent shared user partition of GOS0 | |
pers-ota | 0x10000000 | 0x8 <GID_VSC_SERVER> 16245F | Yes | No | Persistent Storage for Update VM | |
UFS - 0 size (0x3A00000000) | A_ufs_chain | 0xD00000000 | 0x8 <GID_VSC_SERVER> 962D70 | No | Yes (Only related with GOS partitions) | L1 container partition for holding A chain on ufs |
B_ufs_chain | 0xD00000000 | 0x8 <GID_VSC_SERVER> 962E71 | No | Yes (Only related with GOS partitions) | L1 container partition for holding B chain on ufs | |
gos0-ufs | 0x1D00000000 | 0x8 <GID_VSC_SERVER> 163FF7 | No | Yes | User persistent data of GOS0 | |
gos0-demo-ufs (only for linux) | 0x280000000 | 0x8 <GID_VSC_SERVER> 1640F8 | No | Yes | To demonstrate Encrypted File System feature |
The first-level configuration has the following layout for the ECO QSPI+UFS use case:
Device | Partition | Size (Hex bytes) | IVC Value (Depends on VSC Server GID) | Mandatory | Customizable | Purpose |
QSPI - 0 size (0x4000000) | bct (BR-BCT) | 0x80000 | 0x8 <GID_VSC_SERVER> 162560 | Yes | No | Holds Board configuration information. |
pt (PT_1) | 0x80000 | 0x8 <GID_VSC_SERVER> 162661 | Yes | No | L1 Partition Table | |
C_qspi_chain | 0x2C00000 | 0x8 <GID_VSC_SERVER> 962C66 | No | Yes (Only related with GOS partitions) | L1 container partition for holding C chain on qspi | |
bad-page (PBL) | 0x80000 | N/A | Yes | No | For DRAM ECC | |
A_qspi_chain | 0x940000 | 0x8 <GID_VSC_SERVER> 962762 | Yes | Yes (Only related with GOS partitions) | L1 container partition for holding A chain on qspi | |
B_qspi_chain | 0x940000 | 0x8 <GID_VSC_SERVER> 962863 | Yes | Yes (Only related with GOS partitions) | L1 container partition for holding B chain on qspi | |
EMMC - 3 size (0xE8FC00000 for 64 GB) (0x747C00000 for 32GB) | A_emmc_chain | 0x72D080000 for 64 GB 0x38D0C0000 for 32GB | 0x8 <GID_VSC_SERVER> 962964 | Yes (For IST partitions) | Yes | L1 container partition for holding A chain on eMMC |
B_emmc_chain | 0x72D080000 for 64GB 0x38D0C0000 for 32GB | 0x8 <GID_VSC_SERVER> 962A65 | Yes (For IST partitions) | Yes | L1 container partition for holding B chain on eMMC | |
UFS - 0 size (0x3A00000000) | A_ufs_chain | 0xD00000000 | 0x8 <GID_VSC_SERVER> 962D70 | Yes | Yes (Only related with GOS partitions) | L1 container partition for holding A chain on ufs |
B_ufs_chain | 0xD00000000 | 0x8 <GID_VSC_SERVER> 962E71 | Yes | Yes (Only related with GOS partitions) | L1 container partition for holding B chain on ufs | |
gos0-shared-pers | 0x10000000 | 0x8 <GID_VSC_SERVER> 1636ED | No | Yes | Persistent shared user partition of GOS0 | |
pers-ota | 0x10000000 | 0x8 <GID_VSC_SERVER> 16245F | Yes | No | Persistent Storage for Update VM | |
gos0-ufs | 0x1D00000000 | 0x8 <GID_VSC_SERVER> 163FF7 | No | Yes | User persistent data of GOS0 | |
gos0-demo-ufs (only for linux) | 0x280000000 | 0x8 <GID_VSC_SERVER> 1640F8 | No | Yes | To demonstrate Encrypted File System feature |
The first-level configuration has the following layout for the NDAS use case:
Device | Partition | Size (Hex bytes) | IVC Value (Depends on VSC Server GID) | Mandatory | Customizable | Purpose |
QSPI - 0 size (0x4000000) | bct (BR-BCT) | 0x80000 | 0x8 <GID_VSC_SERVER> 162560 | Yes | No | Holds Board configuration information. |
pt (PT_1) | 0x80000 | 0x8 <GID_VSC_SERVER> 162661 | Yes | No | L1 Partition Table | |
C_qspi_chain | 0x2C00000 | 0x8 <GID_VSC_SERVER> 962C66 | No | Yes (Only related with GOS partitions) | L1 container partition for holding C chain on qspi | |
bad-page (PBL) | 0x80000 | N/A | No | No | For DRAM ECC | |
A_qspi_chain | 0x940000 | 0x8 <GID_VSC_SERVER> 962762 | Yes | Yes (Only related with GOS partitions) | L1 container partition for holding A chain on qspi | |
B_qspi_chain | 0x940000 | 0x8 <GID_VSC_SERVER> 962863 | Yes | Yes (Only related with GOS partitions) | L1 container partition for holding B chain on qspi | |
EMMC - 3 size (0x747C00000) | A_emmc_chain | 0x25BEC0000 | 0x8 <GID_VSC_SERVER> 962964 | Yes | Yes (Only related with GOS partitions) | L1 container partition for holding A chain on eMMC |
B_emmc_chain | 0x25BEC0000 | 0x8 <GID_VSC_SERVER> 962A65 | Yes | Yes (Only related with GOS partitions) | L1 container partition for holding B chain on eMMC | |
gos0-misc-pers | 0x6600000 | 0x8 <GID_VSC_SERVER> 1235EC | Yes | Yes | ||
gos0-ota-pers | 0x10000000 | 0x8 <GID_VSC_SERVER> 1636ED | Yes | No | ||
guest0-shadow-pers | 0x66600000 | 0x8 <GID_VSC_SERVER> 9737EE | Yes | Yes | ||
gos0-m-cache-pers | 0x100000000 | 0x8 <GID_VSC_SERVER> 1238EF | Yes | Yes | ||
gos0-m-stream-pers | 0x40000000 | 0x8 <GID_VSC_SERVER> 1239F0 | Yes | Yes | ||
gos0-p-map-pers | 0x40000000 | 0x8 <GID_VSC_SERVER> 123AF1 | Yes | Yes | ||
gos0-s-logger-pers | 0x60000000 | 0x8 <GID_VSC_SERVER> 163BF2 | Yes | Yes | ||
gos0-sar-pers | 0x13300000 | 0x8 <GID_VSC_SERVER> 0E3CF3 | Yes | Yes | ||
gos0-dlb-pers | 0x4000000 | 0x8 <GID_VSC_SERVER> 063DF4 | Yes | Yes | ||
gos1-config-pers | 0xA00000 (Only for dual-qnx) | 0x8 <GID_VSC_SERVER> 962F72 | No | Yes | ||
pers-ota | 0x10000000 | 0x8 <GID_VSC_SERVER> 16245F | Yes | No | Persistent Storage for Update VM | |
UFS - 0 UFS device is only for HIGH storage config size (0x1D00000000) | A_ufs_chain | 0xC80000000 | 0x8 <GID_VSC_SERVER> 962D70 | No | Yes | L1 container partition for holding A chain on ufs |
B_ufs_chain | 0xC80000000 | 0x8 <GID_VSC_SERVER> 962E71 | No | Yes | L1 container partition for holding B chain on ufs | |
gos0-m-cache-ufs | 0x200000000 | 0x8 <GID_VSC_SERVER> 1240F8 | No | Yes | ||
gos0-sar-ufs | 0xC0000000 | 0x8 <GID_VSC_SERVER> 0E41F9 | No | Yes | ||
gos0-edr-ufs | 0xC0000000 | 0x8 <GID_VSC_SERVER> 0A42FA | No | Yes |
Note:
For mandatory partitions, only the size of the partition is customizable. For non-mandatory partitions, all partition attributes are customizable, or the partition can be removed/split.
Second-Level Partition Table (L2PT) Layout
QSPI Chain-A/B Layout
QSPI partitions include bct files, bootloaders, and firmware binaries.
The storage layout for QSPI is as follows:
QSPI Partition Name | Size (in KBytes) | Mandatory | Customizable | Purpose |
---|---|---|---|---|
PT_2(pt) | 256 | Yes | No | L2 Partition Table |
mb1-bootloader | 256 | Yes | No | Primary Copy of MB1 Bootloader |
PSC-BL1 (psc-bl) | 256 | Yes | No | PSC firmware |
MB1-BCT (mb1-bct) | 256 | Yes | No | BCT for MB1 |
MemBct (mem-bct) | 256 | Yes | No | BCT for memory configuration |
IST_UCode(ccplex-ist-ucode) | 256 | Yes | No | IST ucode |
MB2+MB2-BCT (mb2-bootloader) | 512 | Yes | No | MB2 |
SPE_FW (spe-fw) (Only for standard/safety prod_debug_extra PCT variant) | 512 | Yes | No | Sensor Processing Engine firmware |
TSEC_FW (tsec-fw) | 256 | Yes | No | TSEC firmware |
PSC_FW (psc-fw) | 768 | Yes | No | Firmware for PSC |
MCE (mts-mce) | 256 | Yes | No | Firmware for cpu cores |
BPMP_FW(bpmp-fw) | 1536 | Yes | No | Firmware for BPMP |
SC7-RF | 256 | Yes | No | BPMP SC7 resume firmware |
PSC-RF | 256 | Yes | No | PSC resume firmware |
MB2-RF | 256 | Yes | No | CPU resume firmware |
BPMP_FW_DTB (bpmp-fw-dtb) | 512 | Yes | No | DT for BPMP |
RCE_FW (rce-fw) | 1024 | Yes | No | RCE firmware image |
nvdec-fw | 512 | Yes | No | NVDEC firmware |
Key IST uCode (ist-ucode) | 256 | Yes | No | IST key |
IST_BPMP (bpmp-ist) | 256 | Yes | No | IST bpmp |
IST_ICT (ist-config) | 256 | Yes | No | IST ICT |
tsec-fw (tsec-fw) | 256 | Yes | No | TSEC FW |
Note:
Chain B (same as Chain A) partitions are not included in the above table.
QSPI Chain-C Layout
QSPI Partition Name | Size (in KBytes) | Mandatory | Customizable | Purpose |
---|---|---|---|---|
PT_2(pt) | 256 | Yes | No | L2 Partition Table |
mb1-bootloader | 256 | Yes | No | Primary Copy of MB1 Bootloader |
PSC-BL1 (psc-bl) | 256 | Yes | No | PSC firmware |
MB1-BCT (mb1-bct) | 256 | Yes | No | BCT for MB1 |
MemBct (mem-bct) | 256 | Yes | No | BCT for memory configuration |
MB2+MB2-BCT (mb2-bootloader) | 512 | Yes | No | MB2 |
SPE_FW (spe-fw) | 512 | Yes | No | Sensor Processing Engine firmware |
PSC_FW (psc-fw) | 768 | Yes | No | Firmware for PSC |
MCE (mts-mce) | 256 | Yes | No | Firmware for CPU cores |
BPMP_FW (bpmp-fw) | 1536 | Yes | No | Firmware for BPMP |
BPMP_FW_DTB (bpmp-fw-dtb) | 512 | Yes | No | DT for BPMP |
CPU-bootloader | 512 | Yes | No | QB |
secure-os | 4096 | Yes | No | TOS |
pvit | 256 | Yes | No | Partitions Version Info Table |
fsi-fw | 6144 | Yes | No | FSI FW |
kernel (HV image) | 6656 | Yes | No | HV kernel + server VMs + PCT |
guest-linux-chain_c (Linux GOS VM 3LPT) | 13568 | Yes | Yes | Linux kernel/initramfs/DT/GPT/rootf |
qnx-update-chain_c (IFS/DT of Drive Update VM L3PT) | 6144 | Yes | No | Update VM QNX primary IFS image and DT |
eMMC and UFS Partitions
The eMMC/UFS device connected to NVIDIA Orin is partitioned logically to enable each VM to have its own root file system or to store user data. Each VM is assigned a dedicated user partition on eMMC/UFS and might have secondary user partitions for storing additional data. The eMMC/UFS device on board is shared between the VMs.
The eMMC/UFS Storage Chain A/B for QNX PCT ECO use case:
Partition Name | Size (in MBytes) | Mandatory | Customizable | Purpose | Partition exist in |
---|---|---|---|---|---|
ist-testimg (For safety, Optional for standard) | 1280 | Yes | No | IST test image | eMMC |
ist-runtimeinfo (For safety, Optional for standard) | 0.25 | Yes | No | IST runtime information | eMMC |
ist-resultdata (For safety, Optional for standard) | 200 | Yes | No | IST result data | eMMC |
gr-ist (For safety, Optional for standard) | 0.25 | Yes | No | gr blob support | eMMC |
oist-tst-vtr | 512 | Yes | No | Runtime IST test vector | eMMC (QSPI+eMMC boot) UFS (QSPI+UFS boot) |
CPU-bootloader | 0.5 | Yes | No | QB | eMMC (QSPI+eMMC boot) UFS (QSPI+UFS boot) |
secure-os | 4 | Yes | No | TOS | eMMC (QSPI+eMMC boot) UFS (QSPI+UFS boot) |
adsp-fw (only for standard) | 2 | Yes | No | For Audio | eMMC (QSPI+eMMC boot) UFS (QSPI+UFS boot) |
fsi-fw (For safety, Optional for standard) | 6 | Yes | No | FSI FW | eMMC (QSPI+eMMC boot) UFS (QSPI+UFS boot) |
xusb-fw (Only for standard) | 0.25 | Yes | No | XUSB FW | eMMC (QSPI+eMMC boot) UFS (QSPI+UFS boot) |
pvit | 0.25 | Yes | No | Partitions Version Info Table | eMMC (QSPI+eMMC boot) UFS (QSPI+UFS boot) |
pva-fw | 2.5 | Yes | No | PVA FW | eMMC (QSPI+eMMC boot) UFS (QSPI+UFS boot) |
kernel (HV image) | 10 | Yes | No | HV kernel + server VMs + PCT | eMMC (QSPI+eMMC boot) UFS (QSPI+UFS boot) |
gos0-debug_overlay (For safety prod_debug*) | 128 | Yes | No | Debug overlay for safety images only | eMMC (QSPI+eMMC boot) UFS (QSPI+UFS boot) |
qnx-gos0 (IFS/DT for GOS0 L3PT) | 30 | Yes | Yes | GOS0 QNX primary IFS image and DT | eMMC (QSPI+eMMC boot) UFS (QSPI+UFS boot) |
gos0-ifs2 (Secondary IFS) | 500 | Yes | Yes | QNX secondary IFS image | eMMC (QSPI+eMMC boot) UFS (QSPI+UFS boot) |
usr-fs (GOS EFS1) | 2560 (for standard) 2030 (for safety) | Yes | Yes | Guest OS rootfs | eMMC (QSPI+eMMC boot) UFS (QSPI+UFS boot) |
gos0-efs2 (GOS EFS2) | 7680 | No | Yes | Guest OS extended file system #2 | eMMC (QSPI+eMMC boot) UFS (QSPI+UFS boot) |
gos0-efs3 (GOS EFS3) | 1536 (for 32 GB eMMC) 16384 (for 64GB eMMC or QSPI+UFS) | No | Yes | Guest OS extended file system #3 | eMMC (QSPI+eMMC boot) UFS (QSPI+UFS boot) |
qnx-update (IFS/DT of Drive Update VM L3PT) | 24 | Yes | No | Update VM QNX primary IFS image and DT | eMMC (QSPI+eMMC boot) UFS (QSPI+UFS boot) |
qnx-update-fs | 128 | Yes | No | Filesystem of Update VM QNX | eMMC (QSPI+eMMC boot) UFS (QSPI+UFS boot) |
gos0-compute-bits-ufs (compute-bits) | 4096 | No | Yes | For CuDNN TRT | UFS |
Where:
/dev/vblk_* denotes the enumeration for a given partition from the guest OS or service VM. The partitions must be formatted before use.
IFS: early boot partition with a minimal file system that contains the kernel.
EFS: QNX root file system, additional file system demonstration bits, sample applications, and data.
All partitions present in boot_chain_storage.cfg are part of the OTA update.
All sub-configuration files referenced in boot_chain_storage.cfg are also part of the OTA update.
The eMMC/UFS Storage Chain A/B for Linux PCT ECO use case:
Partition Name | Size (in MBytes) | Mandatory | Customizable | Purpose | Partition exists in |
---|---|---|---|---|---|
ist-testimg (For safety, Optional for standard) | 1280 | Yes | No | IST test image | eMMC |
ist-runtimeinfo (For safety, Optional for standard) | 0.25 | Yes | No | IST runtime information | eMMC |
ist-resultdata (For safety, Optional for standard) | 200 | Yes | No | IST result data | eMMC |
gr-ist (For safety, Optional for standard) | 0.25 | Yes | No | gr blob support | eMMC |
gos0-crashlogs | 1 | No | Yes | To store Oops logs | eMMC |
CPU-bootloader | 0.5 | Yes | No | QB | eMMC (QSPI+eMMC boot) UFS (QSPI+UFS boot) |
secure-os | 4 | Yes | No | TOS | eMMC (QSPI+eMMC boot) UFS (QSPI+UFS boot) |
adsp-fw | 2 | Yes | No | For Audio | eMMC (QSPI+eMMC boot) UFS (QSPI+UFS boot) |
fsi-fw | 6 | No | No | FSI FW | eMMC (QSPI+eMMC boot) UFS (QSPI+UFS boot) |
xusb-fw | 0.25 | Yes | No | XUSB FW | eMMC (QSPI+eMMC boot) UFS (QSPI+UFS boot) |
dce-fw | 9 | Yes | No | DCE FW | eMMC (QSPI+eMMC boot) UFS (QSPI+UFS boot) |
pvit | 0.25 | Yes | No | Partitions Version Info Table | eMMC (QSPI+eMMC boot) UFS (QSPI+UFS boot) |
pva-fw | 2.5 | Yes | No | PVA FW | eMMC (QSPI+eMMC boot) UFS (QSPI+UFS boot) |
kernel (HV image) | 10 | Yes | No | HV kernel + server VMs + PCT | eMMC (QSPI+eMMC boot) UFS (QSPI+UFS boot) |
guest-linux (Linux GOS VM 3LPT) | 12818 (for 32GB eMMC) 27666 (for 64 GB eMMC or QSPI+UFS) | Yes | Yes | Linux kernel/initramfs/DT/GPT/rootfs | eMMC (QSPI+eMMC boot) UFS (QSPI+UFS boot) |
qnx-update (IFS/DT of Drive Update VM L3PT) | 24 | Yes | No | Update VM QNX primary IFS image and DT | eMMC (QSPI+eMMC boot) UFS (QSPI+UFS boot) |
qnx-update-fs | 128 | Yes | No | Filesystem of Update VM QNX | eMMC (QSPI+eMMC boot) UFS (QSPI+UFS boot) |
gos0-compute-bits-ufs (compute-bits) | 4096 | No | Yes | For CuDNN TRT | UFS |
The eMMC/UFS Storage Chain A/B for QNX (qnx/dual-qnx) PCT NDAS use case:
Partition Name | Size (in MBytes) | Mandatory | Customizable | Purpose | Partition exist in |
---|---|---|---|---|---|
ist-testimg (For safety, Optional for standard) | 1280 | Yes | No | IST test image | eMMC |
ist-runtimeinfo (For safety, Optional for standard) | 0.25 | Yes | No | IST runtime information | eMMC |
ist-resultdata (For safety, Optional for standard) | 200 | Yes | No | IST result data | eMMC |
gr-ist (For safety, Optional for standard) | 0.25 | Yes | No | gr blob support | eMMC |
oist-tst-vtr | 512 | Yes | No | Runtime IST test vector | eMMC |
CPU-bootloader | 0.5 | Yes | No | QB | eMMC |
secure-os | 4 | Yes | No | TOS | eMMC |
adsp-fw (only for standard) | 2 | Yes | No | For Audio | eMMC |
fsi-fw (For safety, Optional for standard) | 6 | Yes | No | FSI FW | eMMC |
xusb-fw (only for standard) | 0.25 | Yes | No | XUSB FW | eMMC |
pvit | 0.25 | Yes | No | Partitions Version Info Table | eMMC |
pva-fw | 2.5 | Yes | No | PVA FW | eMMC |
kernel (HV image) | 10 | Yes | No | HV kernel + server VMs + PCT | eMMC |
gos0-debug_overlay (For safety prod_debug*) | 128 | Yes | No | Debug overlay for safety images only | eMMC |
qnx-gos0 (IFS/DT for GOS0 L3PT) | 30 | Yes | Yes | GOS0 QNX primary IFS image | eMMC |
gos0-ifs2 (GOS0 Secondary IFS) | 250 | Yes | Yes | GOS0 QNX secondary IFS image | eMMC |
gos0-ifs2_1 (GOS0 2nd Secondary IFS) | 250 | Yes | Yes | GOS0 QNX 2nd secondary IFS image | eMMC |
usr-fs (GOS0 Root-FS) | 2030 | Yes | Yes | GOS0 root file system | eMMC |
gos0-av-rootfs (GOS0 AV Root-FS) | 4096 | Yes | No | Automotive applications rootfs | eMMC |
gos1-debug_overlay (For safety prod_debug*) (*) | 128 | Yes | No | Debug overlay for safety images only | eMMC |
qnx-gos1 (IFS/DT for GOS1 L3PT) (*) | 30 | Yes | Yes | GOS1 QNX primary IFS image | eMMC |
gos1-ifs2 (GOS1 Secondary IFS) (*) | 98 | Yes | Yes | GOS1 QNX secondary IFS image | eMMC |
gos1-av (GOS1 AV-FS) (*) | 256 | Yes | Yes | Guest OS1 av file system | eMMC |
qnx-update (IFS/DT of Drive Update VM L3PT) | 24 | Yes | No | Update VM QNX primary IFS image and DT | eMMC |
qnx-update-fs | 128 | Yes | No | Filesystem of Update VM QNX | eMMC |
gos0-compute-bits-ufs (compute-bits) (#) | 4096 | Yes | Yes | For CuDNN TRT | UFS |
gos0-usr-data-ufs (UserData UFS) (#) | 8192 | Yes | Yes | GOS0 user data | UFS |
Note:
(*) marks partitions that exist only in the dual-qnx PCT, and (#) marks partitions that exist only in the HIGH storage configuration.
The eMMC/UFS Storage Chain A/B for Linux PCT NDAS use case:
Partition Name | Size (in MBytes) | Mandatory | Customizable | Purpose | Partition exist in |
---|---|---|---|---|---|
ist-testimg (For safety, Optional for standard) | 1280 | Yes | No | IST test image | eMMC |
ist-runtimeinfo (For safety, Optional for standard) | 0.25 | Yes | No | IST runtime information | eMMC |
ist-resultdata (For safety, Optional for standard) | 200 | Yes | No | IST result data | eMMC |
gr-ist (For safety, Optional for standard) | 0.25 | Yes | No | gr blob support | eMMC |
gos0-crashlogs | 1 | No | Yes | To store Oops logs | eMMC |
CPU-bootloader | 0.5 | Yes | No | QB | eMMC |
secure-os | 4 | Yes | No | TOS | eMMC |
adsp-fw | 2 | Yes | No | For Audio | eMMC |
fsi-fw | 6 | No | No | FSI FW | eMMC |
xusb-fw | 0.25 | Yes | No | XUSB FW | eMMC |
dce-fw | 9 | Yes | No | DCE FW | eMMC |
pvit | 0.25 | Yes | No | Partitions Version Info Table | eMMC |
pva-fw | 2.5 | Yes | No | PVA FW | eMMC |
kernel (HV image) | 10 | Yes | No | HV kernel + server VMs + PCT | eMMC |
guest-linux (Linux GOS VM 3LPT) | 3456 | Yes | Yes | Linux kernel/initramfs/DT/GPT/rootfs | eMMC |
gos0-av-rootfs (GOS0 AV Root-FS) | 4096 | Yes | No | Automotive applications rootfs | eMMC |
qnx-update (IFS/DT of Drive Update VM L3PT) | 24 | Yes | No | Update VM QNX primary IFS image and DT | eMMC |
qnx-update-fs | 128 | Yes | No | Filesystem of Update VM QNX | eMMC |
gos0-compute-bits-ufs (compute-bits) (#) | 4096 | Yes | Yes | For CuDNN TRT | UFS |
gos0-usr-data-ufs (UserData UFS) (#) | 8192 | Yes | Yes | GOS0 user data | UFS |
Note:
(#) is a partition only for HIGH storage configuration.
Note:
For mandatory partitions, only the size of the partition is customizable. For non-mandatory partitions, all partition attributes are customizable, or the partition can be removed/split.
GPT L3 Support
GUID Partition Table (GPT) support is added for the GOS at the L3 level. With this feature, the L3 partitions under GPT, including the partition table, can be updated independently.
To add a GPT partition at the third level (L3), a container partition needs to be added at the second level (L2). The level-2 container partition holds the GPT primary and backup partition layouts along with the actual partition contents.
The diagram below shows the organization of the L2 container partition for GPT.
[Diagram: Organization of the L2 container partition for GPT]
Multiple GPT partitions can be added for the same guest. The GPT primary and secondary are generated using the container partition name as a prefix.
Example Changes for Both Linux and QNX GOS VM
Configuration files are in the following folder:
<top>/<NV_SDK_NAME_FOUNDATION>/platform-config/hardware/nvidia/platform/t23x/automotive/pct/drive_av/<PCT>
NOTE: <PCT> is qnx, dual-qnx, or linux.
The "ENABLE_TEST_GPT_L3PT :=" flag in the profile.mk file is the knob for adding a GPT custom partition. ENABLE_TEST_GPT_L3PT is defined by default in the standard package and in the safety package prod_debug_extra PCT variant as a sample of adding GPT partitions.
The following example snippet shows changes to add custom partition in L2:
<top>/<NV_SDK_NAME_FOUNDATION>/platform-config/hardware/nvidia/platform/t23x/automotive/pct/drive_av/<PCT>/boot_chain_storage.cfg
[device]
type=sdmmc
instance=3
linux_name=/dev/block/3460000.sdhci
size=EMMC_BOOTCHAIN_SIZE
...
#ifdef ENABLE_TEST_GPT_L3PT
[partition]
name=custom
allocation_policy=sequential
size=0x380000
partition_attribute=0x1000000<GID_GUEST0_VM+1>
sub_cfg_file=custom_storage_emmc.cfg
virtual_storage_ivc_ch=0x8<GID_VSC_SERVER>964631
#endif
Note:
The IVC channel (0x31 = 49) and mempool (0x46 = 70) used in virtual_storage_ivc_ch are subject to availability.
Note:
For information about partition_attribute, see Mass Storage Partition Configuration > Customizing the Configuration File > Setting Attributes > Partition Attributes Table in the NVIDIA DRIVE OS 6.0 Linux SDK Developer Guide, or see Storage > Mass Storage Partition Configuration > Customizing the Configuration File > Setting Attributes > Partition Attributes Table in the NVIDIA DRIVE OS 6.0 QNX SDK Developer Guide.
Add the IVC/mempool for the custom partition (just a placeholder for the HVRTOS VSC server):
<top>/<NV_SDK_NAME_FOUNDATION>/platform-config/hardware/nvidia/platform/t23x/automotive/pct/drive_av/platform_ivc_config.h
#define GOS0_VSC_Q_49 49
....
#define GOS0_VSC_M_70 70
<top>/<NV_SDK_NAME_FOUNDATION>/platform-config/hardware/nvidia/platform/t23x/automotive/pct/drive_av/platform_config.h
#ifdef ENABLE_TEST_GPT_L3PT
#ifdef ENABLE_HVRTOS_VSC_EMMC_VIRT
//[GOS0_VSC_Q_49] /* custom */
#elif defined(ENABLE_VSC_EMMC_VIRT)
/* custom */
[GOS0_VSC_Q_49] = { .peers = {GID_GUEST0_VM, GID_VSC_SERVER}, .nframes = 16, .frame_size = 128 },
#endif
#endif
#if defined(ENABLE_TEST_GPT_L3PT)
#ifdef ENABLE_HVRTOS_VSC_EMMC_VIRT
// [GOS0_VSC_M_70] /* custom */
#elif defined(ENABLE_VSC_EMMC_VIRT)
/* custom */
[GOS0_VSC_M_70] = { .peers = {GID_GUEST0_VM, GID_VSC_SERVER}, .size = (SZ_1MB * 8), .align = (SZ_1MB * 2) },
#endif
#endif
NOTE: The non-HVRTOS VSC server is deprecated. The code inside the ENABLE_VSC_EMMC_VIRT define is no longer used.
More custom partitions can be added with the same guest ID in the partition_attribute. The container partition name is used to differentiate between multiple GPT partition tables for the same guest.
In the example above, the GPT is defined within the sub config file "custom_storage_emmc.cfg".
The following shows an example of how to define GPT L3 partitions.
<top>/<NV_SDK_NAME_FOUNDATION>/platform-config/hardware/nvidia/platform/t23x/automotive/pct/drive_av/<PCT>/custom_storage_emmc.cfg
[meta]
version=2
[device]
type=sdmmc
instance=3
linux_name=/dev/block/3460000.sdhci
size=0x380000
[partition]
name=sample-gp1
type=GP1
allocation_policy=sequential
filesystem_type=basic
size=0x40000
partition_attribute=0
[partition]
name=samplep1
allocation_policy=sequential
filesystem_type=basic
size=0x100000
partition_attribute=1
[partition]
name=samplep2
allocation_policy=sequential
filesystem_type=basic
size=0x100000
partition_attribute=1
[partition]
name=samplep3
allocation_policy=sequential
filesystem_type=basic
size=0x100000
partition_attribute=1
[partition]
name=sample-gpt
type=GPT
allocation_policy=sequential
filesystem_type=basic
size=0x40000
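As a consistency check, the level-3 partition sizes above sum exactly to the 0x380000 container size declared at L2: 0x40000 (sample-gp1) + 3 x 0x100000 (samplep1..samplep3) + 0x40000 (sample-gpt) = 0x380000.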
Example Changes for QNX GOS VM
Add a device tree entry to let QNX enumerate each of the partitions inside the container as virtual block devices, as in the following example:
<top>/<NV_SDK_NAME_FOUNDATION>/platform-config/hardware/nvidia/platform/t23x/automotive/kernel-dts/common/qnx/storage/tegra234-common-driveav-storage-gos0.dtsi
#ifdef ENABLE_TEST_GPT_L3PT
/* custom */
tegra_virt_storage50 {
compatible = "nvidia,tegra-hv-storage";
status = "okay";
instance = <15>;
ivc = <&tegra_hv 49>;
mempool = <70>;
icache = <10485760>;
device-type = "vblk_mnand";
partition-name = "custom";
};
#endif
NOTE: Ensure the tegra_virt_storage50 node name is unique in the above-mentioned dtsi file. Also, pick a different instance number in the newly added node (tegra_virt_storage50 in the example above) if that instance number is already in use in another node with the same device-type.
The nvsciipc table entry in GOS-DTB:
<top>/<NV_SDK_NAME_FOUNDATION>/platform-config/hardware/nvidia/platform/t23x/automotive/kernel-dts/common/qnx/tegra234-nvsciipc-gos0.dtsi
#ifdef ENABLE_TEST_GPT_L3PT
/* GPT container partition */
"INTER_VM", "nvhvblk_49", "49", "0", /* nvhvblk_guest */
#endif
The nvhvblk node (nvhvblk_49 in this example) is added to the nvhvblk_orin_gos entry for this feature:
<top>/<NV_SDK_NAME_QNX>/nvidia-bsp/aarch64le/sbin/nvsciipc_secpolicy.cfg
nvhvblk_orin_gos:....,nvhvblk_49,
If nvsciipc_secpolicy.cfg is modified, the QNX-IFS of the Guest OS must be rebuilt.
Example Changes for Linux GOS VM
Add a device tree entry to let Linux enumerate each of the partitions inside the container as virtual block devices, as in the following example:
<top>/<NV_SDK_NAME_LINUX>/kernel/source/hardware/nvidia/platform/t23x/automotive/kernel-dts/common/linux/storage/tegra234-common-storage-linux-gos.dtsi
#ifdef ENABLE_TEST_GPT_L3PT
/* custom */
tegra_virt_storage50 {
compatible = "nvidia,tegra-hv-storage";
status = "okay";
instance = <15>; /* must be unique among nodes with the same device-type */
ivc = <&tegra_hv 49>; /* IVC queue GOS0_VSC_Q_49 defined above */
mempool = <70>; /* mempool GOS0_VSC_M_70 defined above */
};
#endif
NOTE: Ensure that the tegra_virt_storage50 node name is unique in the dtsi file mentioned above.
Also pick a different instance number for the newly added node (tegra_virt_storage50 in the example above)
if that instance number is already used by another node with the same device-type.
Bind and bootburn steps are required after the above changes.
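A minimal sketch of the re-bind and re-flash flow (the board name p3710-10-a01 and all options are illustrative; consult the flashing documentation for the exact commands for your platform):
# Re-bind the PCT so the updated storage configuration is picked up.
cd <top>/drive-foundation
./make/bind_partitions -b p3710-10-a01 qnx
# Reflash the target with bootburn.
tools/flashtools/bootburn/bootburn.py -b p3710-10-a01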
Caution and Verification
Verify that the size of the device at L3 matches the size of the extended custom partition at L2.
The custom container has three partitions of 1 MB each. Note that no filename is specified above. If a filename is not specified, the flash tools do not create images for these partitions, but the space is allocated and is formatted if a filesystem is specified. If an image is required, update the partition in the custom cfg file to add the filename field and point it to the file you want to flash.
Once booted with the above changes, the GPT partitions are enumerated as virtual block devices in each Guest OS (QNX or Linux).
For QNX, for example, if the container block device is enumerated as /dev/vblk_mnandf0, then the individual partitions are /dev/vblk_mnandf0.ms.0, /dev/vblk_mnandf0.ms.1, and /dev/vblk_mnandf0.ms.2.
For Linux, if the container block device is enumerated as /dev/vblkdev15, then the individual partitions are /dev/vblkdev15p1, /dev/vblkdev15p2, and /dev/vblkdev15p3.
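For a quick check after boot, list the enumerated devices (the device names below are the examples from this section and may differ on your system):
# QNX guest: container and per-partition nodes.
ls /dev/vblk_mnandf0*
# Linux guest: container and per-partition nodes.
ls /dev/vblkdev15*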
Creating Custom QNX Filesystem Image
The GPT partitions added above do not have an associated file image, so create_bsp_images does not generate images for them.
Follow these steps to create a dummy qnx6 filesystem image with some contents:
1. Create an empty folder: mkdir custom_fs.
2. Create some text files using touch, and add some content to the custom_fs folder.
3. Create a build file with the following contents and save it as qnx_rootfs_build:
[num_sectors=2048]
[-followlink]
[type=link]/tmp=/dev/shmem
4. Make sure that the QNX_TARGET and QNX_HOST paths are set correctly, then run:
$QNX_HOST/usr/bin/mkxfs -t qnx6fsimg qnx_rootfs_build custom_qnxfs.img
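Putting the steps together (a sketch; file names and contents are illustrative):
# Steps 1-2: create the folder and add some sample content.
mkdir custom_fs
echo "sample content" > custom_fs/readme.txt
# Step 3: save the build file shown above as qnx_rootfs_build.
# Step 4: verify the QNX environment, then build the qnx6 filesystem image.
echo "QNX_HOST=$QNX_HOST QNX_TARGET=$QNX_TARGET"
$QNX_HOST/usr/bin/mkxfs -t qnx6fsimg qnx_rootfs_build custom_qnxfs.img
The resulting custom_qnxfs.img can then be referenced from the filename field of a partition in the custom cfg file so that the flash tools write it to the target.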
Multiple Secondary Image Filesystem (SIFS) Support
The QNX (qnx/dual-qnx) PCT NDAS use case provides one additional Secondary Image Filesystem (SIFS) by default. With this feature, a SIFS containing latency-critical binaries can be loaded first, and another SIFS containing a large number of binaries can be loaded afterward.
Example Code for Adding a Second SIFS
The "ENABLE_GOS0_SIFS_1 :=" entry in the profile makefile is the knob that adds another Secondary Image Filesystem (SIFS). "ENABLE_GOS0_SIFS_1" is defined by default in the QNX (qnx/dual-qnx) PCT NDAS use case.
<top>/<NV_SDK_NAME_FOUNDATION>/platform-config/hardware/nvidia/platform/t23x/automotive/pct/drive_av/<PCT>/common_profile.mk
ifeq ($(NDAS_STORAGE_CONFIG),y)
# Add SIFS_1 by default for NDAS use case
ENABLE_GOS0_SIFS_1 := y
The following example snippet shows the changes required to divide the 500 MiB "gos0-ifs2" (SIFS0) partition into a 250 MiB "gos0-ifs2" (SIFS0) and a 250 MiB "gos0-ifs2_1" (SIFS1) when "ENABLE_GOS0_SIFS_1" is defined:
<top>/<NV_SDK_NAME_FOUNDATION>/platform-config/hardware/nvidia/platform/t23x/automotive/pct/drive_av/<PCT>/boot_chain_storage.cfg
#ifdef NDAS_STORAGE_CONFIG
....
[partition]
name=gos0-ifs2
allocation_policy=sequential
filesystem_type=basic
#ifdef ENABLE_GOS0_SIFS_1
size=0xFA00000
#else
size=0x1F400000
#endif
partition_attribute=<GID_GUEST0_VM+1>
type=sec_ifs
decompression_algorithm=lz4
stream_validation=yes
#ifdef ENABLE_SAFETY_HV
filename=<PDK_TOP>/<NV_SDK_NAME_QNX>/bsp/images/ifs-nvidia-t18x-vcm31t186-guest_vm_safety_secondary.bin
#else
filename=<PDK_TOP>/<NV_SDK_NAME_QNX>/bsp/images/ifs-nvidia-t18x-vcm31t186-guest_vm_secondary.bin
#endif
virtual_storage_ivc_ch=0x809732E9
#ifdef ENABLE_GOS0_SIFS_1
[partition]
name=gos0-ifs2_1
allocation_policy=sequential
filesystem_type=basic
size=0xFA00000
partition_attribute=<GID_GUEST0_VM+1>
type=sec_ifs
decompression_algorithm=lz4
stream_validation=yes
#ifdef ENABLE_SAFETY_HV
filename=<PDK_TOP>/<NV_SDK_NAME_QNX>/bsp/images/ifs-nvidia-t18x-vcm31t186-guest_vm_safety_secondary.bin
#else
filename=<PDK_TOP>/<NV_SDK_NAME_QNX>/bsp/images/ifs-nvidia-t18x-vcm31t186-guest_vm_secondary.bin
#endif
virtual_storage_ivc_ch=0x80974732
#endif
....
#else /* NDAS_STORAGE_CONFIG */
....
NOTE: The same code exists here for the ECO use case.
....
#endif /* NDAS_STORAGE_CONFIG */
NOTE: The current code uses the same "filename" for gos0-ifs2 (SIFS0) and gos0-ifs2_1 (SIFS1) because this is example code only.
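The sizes in the snippet are consistent: two 0xFA00000 (250 MiB) partitions replace the single 0x1F400000 (500 MiB) partition. A quick shell check:
# Two 250 MiB SIFS partitions equal the original 500 MiB SIFS0.
printf '0x%X\n' $(( 2 * 0xFA00000 ))   # prints 0x1F400000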
The following example snippet shows the changes required to add the storage node in GOS-DTB:
<top>/<NV_SDK_NAME_FOUNDATION>/platform-config/hardware/nvidia/platform/t23x/automotive/kernel-dts/common/qnx/storage/tegra234-common-driveav-storage-gos0.dtsi
#ifdef NDAS_STORAGE_CONFIG
....
#ifdef ENABLE_GOS0_SIFS_1
tegra_virt_storage60 {
compatible = "nvidia,tegra-hv-storage";
status = "okay";
instance = <1>;
ivc = <&tegra_hv 50>;
mempool = <71>;
icache = <10485760>;
device-type = "vblk_sifs";
partition-name = "gos0-ifs2_1";
no_vblk_enumeration;
read-only;
};
#endif
....
#elif defined(ENABLE_ECO_UFS_SEC_BOOT)
....
NOTE: Similar code exists here for the ECO QSPI+UFS use case, but QSPI+UFS adds the SIFS1 partition in UFS storage.
....
#else
NOTE: The same code exists here for the ECO QSPI+eMMC use case.
#endif /* NDAS_STORAGE_CONFIG */
The nvsciipc table entry in GOS-DTB:
<top>/<NV_SDK_NAME_FOUNDATION>/platform-config/hardware/nvidia/platform/t23x/automotive/kernel-dts/common/qnx/tegra234-nvsciipc-gos0.dtsi
#ifdef ENABLE_GOS0_SIFS_1
"INTER_VM", "nvhvblk_50", "50", "0", /* nvhvblk_guest */
#endif
The nvhvblk node (nvhvblk_50 in this example) is added to the nvhvblk_orin_gos entry for this feature:
<top>/<NV_SDK_NAME_QNX>/nvidia-bsp/aarch64le/sbin/nvsciipc_secpolicy.cfg
nvhvblk_orin_gos:....,nvhvblk_50,
The QNX GOS0 startup script in GOS-DTB mounts SIFS1 after boot via automount:
For the Standard package:
<top>/<NV_SDK_NAME_FOUNDATION>/platform-config/hardware/nvidia/platform/t23x/automotive/kernel-dts/common/qnx/tegra234-startupcmds-gos0.dtsi
#ifdef ENABLE_GOS0_SIFS_1
// Mount SIFS1 after SIFS0 mounting is done. "automount -s" must not run in parallel.
<&automount_sifs1>,
<&nop_sifs1_done>,
#endif
....
#ifdef ENABLE_GOS0_SIFS_1
// Mount secondary IFS1
automount_sifs1: automount_sifs1 {
cmd = "iolauncher automount -s 1 -U 2410:2410";
sc7 = "nop";
critical_process = "no";
heartbeat = "no";
oneshot = "yes";
};
nop_sifs1_done: nop_sifs1_done {
cmd = "iolauncher --wait --prewait /tmp/secondary_ifs1_mount.done -i nop";
sc7 = "nop";
critical_process = "no";
heartbeat = "no";
oneshot = "yes";
};
#endif
For the Safety package:
<top>/<NV_SDK_NAME_FOUNDATION>/platform-config/hardware/nvidia/platform/t23x/automotive/kernel-dts/common/qnx/tegra234-startupcmds-gos0-safety.dtsi
#ifdef ENABLE_GOS0_SIFS_1
// Mount SIFS1 after SIFS0 mounting is done. "automount -s" must not run in parallel.
// Add the SIFS1 mount here to avoid impacting the boot KPI.
<&automount_sifs1>,
<&nop_sifs1_done>,
#endif
....
#ifdef ENABLE_GOS0_SIFS_1
// Mount secondary IFS1
automount_sifs1: automount_sifs1 {
cmd = "iolauncher --secpol-type automount_sifs_t automount -s 1 -U 2410:2410";
sc7 = "nop";
critical_process = "no";
heartbeat = "no";
oneshot = "yes";
};
nop_sifs1_done: nop_sifs1_done {
cmd = "iolauncher --wait --prewait /tmp/secondary_ifs1_mount.done -i nop";
sc7 = "nop";
critical_process = "no";
heartbeat = "no";
oneshot = "yes";
};
#endif
NOTE: The number "1" in the "-s" option selects the SIFS device node that is passed as "secondary_ifs1" through GOS-DTB "/chosen/bootargs/".
Pass the argument to the GOS VM through GOS-DTB "/chosen/bootargs/" so that automount can parse the device node and mount it on /:
<top>/<NV_SDK_NAME_FOUNDATION>/platform-config/hardware/nvidia/platform/t23x/automotive/pct/drive_av/<PCT>/common_profile.mk
ifeq ($(ENABLE_GOS0_SIFS_1),y)
OS_ARGS_QNX_GOS0_SIFS_1 :=secondary_ifs1=/dev/vblk_sifs10:/
else
OS_ARGS_QNX_GOS0_SIFS_1 :=
endif
NOTE: The number "1" in "secondary_ifs1" must match the number passed with "-s" in the startup script.
<top>/<NV_SDK_NAME_FOUNDATION>/platform-config/hardware/nvidia/platform/t23x/automotive/pct/drive_av/<PCT>/qnx_gos0_storage.cfg
#define GOS0_OS_ARG "gpt gpt_sector=0x13001 <OS_ARGS_QNX_ROOTFS> secondary_ifs0=/dev/vblk_sifs00:/ <OS_ARGS_QNX_USER_FS> <OS_ARGS_QNX_UFS_FS> <DBG_OVERLAY_OS_ARGS> <OS_ARGS_QNX_GOS0_SIFS_1>"
NOTE: <OS_ARGS_QNX_GOS0_SIFS_1> is replaced with the value of "OS_ARGS_QNX_GOS0_SIFS_1" from common_profile.mk (here, secondary_ifs1=/dev/vblk_sifs10:/) after bind_partitions.
Verifying the Two SIFSs
# df -hP | grep ifs
ifs 17M 17M 0 100% /
ifs 17M 17M 0 100% /
ifs 53M 53M 0 100% /
NOTE: Three "ifs" entries exist: one is the primary IFS, and the other two are secondary IFSs.
Note that the displayed size is not the partition size but the actual size of the included contents.
# mount
ifs on / type ifs
ifs on / type ifs
/dev/vblk_ufs00 on / type qnx6
/dev/vblk_mnand00 on / type qnx6
ifs on / type ifs
# ls /dev/vblk_sifs*
/dev/vblk_sifs00 /dev/vblk_sifs10
NOTE: vblk_sifs00 is SIFS0, which has "instance = <0>;" in its storage node in GOS-DTB.
vblk_sifs10 is SIFS1, which has "instance = <1>;" in its storage node in GOS-DTB.
# ls /tmp/secondary_ifs*.done
/tmp/secondary_ifs0_mount.done /tmp/secondary_ifs1_mount.done
NOTE: /tmp/secondary_ifs0_mount.done is created after SIFS0 is mounted by automount.
/tmp/secondary_ifs1_mount.done is created after SIFS1 is mounted by automount.
Glossary
PCT: Partition Configuration Table.
AV: Autonomous Vehicle.
VM: Virtual Machine.
IVC: Inter-Virtual Machine Channel.
mempool: Shared buffer implementation between two VMs.
VSC: Virtual Storage Controller.
IFS: QNX Image Filesystem.