Adding a New INTER_VM Channel

The INTER_VM channel relies on the Hypervisor to set up the shared-memory area between two VMs. At present, this is done via IVC queues described in the PCT (Partition Configuration Table). To add a new INTER_VM channel:

  1. Add a new IVC queue between the two VMs to the PCT file (platform_config.h) of the corresponding platform. The VM partition IDs are defined in the server-partitions.mk makefile. The frame_size value must be a multiple of 64 bytes. The maximum number of IVC queue entries is 512 (a limit imposed by the NVIDIA DRIVE OS Hypervisor kernel). The locations of the configuration file and makefile are as follows:
    • drive-foundation/platform-config/hardware/nvidia/platform/t23x/automotive/pct/drive_av/platform_config.h
    • drive-foundation/virtualization/pct/make/t23x/server-partitions.mk
  2. When INTER_VM IVC notification latency is critical between different PCPUs, you can choose MSI-based (Message Signaled Interrupt) IVC notification by adding the use_msi = 1 option to the IVC queue table. Contact NVIDIA before using MSI-based IVC notification, because the total number of MSI-based IVC channels in the NVIDIA DRIVE OS system is limited. If the use_msi flag is not specified, TRAP-based IVC notification is used by default.
  3. If the INTER_VM channel is defined in the configuration data of the cfg file but its IVC queue ID is NOT available in the PCT, that channel is ignored.
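
The constraints in step 1 (frame_size a multiple of 64 bytes, at most 512 queue entries) can be sketched as a small validation helper. The struct and function names here are illustrative only, not part of the PCT or Hypervisor API; the field names simply mirror the IVC queue table format:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Illustrative mirror of one IVC queue entry; field names follow the
 * PCT table format, but the struct itself is hypothetical. */
struct ivc_queue_cfg {
    uint32_t peers[2];   /* VM partition IDs from server-partitions.mk */
    uint32_t nframes;    /* number of queue entries, maximum 512 */
    uint32_t frame_size; /* bytes, must be a multiple of 64 */
    uint32_t use_msi;    /* 1 = MSI-based notification, 0 = TRAP-based */
};

#define IVC_MAX_QUEUE_ENTRIES 512u /* limit imposed by the Hypervisor kernel */
#define IVC_FRAME_ALIGN       64u  /* frame_size granularity in bytes */

static bool ivc_queue_cfg_valid(const struct ivc_queue_cfg *q)
{
    if (q->nframes == 0 || q->nframes > IVC_MAX_QUEUE_ENTRIES)
        return false;
    if (q->frame_size == 0 || q->frame_size % IVC_FRAME_ALIGN != 0)
        return false;
    if (q->peers[0] == q->peers[1]) /* a queue links two distinct VMs */
        return false;
    return q->use_msi <= 1;
}
```

In practice these checks happen at build time when the PCT is compiled; the sketch only makes the documented limits explicit.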

Example: IVC queue table format of PCT

.ivc = {
    .queue = {
        ... skipped ...
        [queue id] = { .peers = {VM1_ID, VM2_ID}, .nframes = ##, .frame_size = ##, .use_msi = # },
        ... skipped ...
    },
    ... skipped ...
}

/* example */
[255] = { .peers = {GID_GUEST_VM, GID_UPDATE}, .nframes = 64, .frame_size = 1536 },

or

[255] = { .peers = {GID_GUEST_VM, GID_UPDATE}, .nframes = 64, .frame_size = 1536, .use_msi = 1 },