The CGF Channel Standalone Sample demonstrates usage of CGF Channel outside the context of a CGF graphlet or application.
The command line for the sample is:
./sample_cgf_dwchannel --prod=[0|1] --downstreams=[0,4] --cons=[0,4] --ip=[IP Address] --port=[Socket Port or ID] --mode=[mailbox|reuse|[N]] --type=[SOCKET|SHMEM_LOCAL|NVSCI]
where
--prod=[0|1] Whether this process hosts the producer. Ignored if type=NVSCI. Default value: 1
--downstreams=[0,4] Number of downstream consumers of the producer. Ignored if type=NVSCI. Default value: 1
--cons=[0,4] Number of consumers in this process. Ignored if type=NVSCI. Default value: 1
--ip=[STR] IP address of the source port. Ignored if type=NVSCI. Default value: 127.0.0.1
--port=[INT] SOCKET port number, or port ID under SHMEM_LOCAL. Ignored if type=NVSCI. Default value: 40002
--mode=[mailbox|reuse|[N]] Channel mode. A numeric value N sets the size of a FIFO channel. mailbox holds only one packet, which is overwritten when a new packet arrives. reuse is based on mailbox, but the channel always keeps the latest packet available. Default value: 10
--type=[SOCKET|SHMEM_LOCAL|NVSCI] Socket channel, local shared-memory channel, or NvSci channel. Default value: SOCKET
--prod-reaches=[STR] Colon-separated list of producer reaches (process|chip). For NVSCI mode only. Default value: ""
--prod-timeout=[INT] Connection timeout for the producer, in milliseconds. For SOCKET mode only. Default value: 1000 (ms)
--prod-stream-names=[STR] Colon-separated list of producer nvsciipc endpoints. For NVSCI mode only. Default value: ""
--cons-reaches=[STR] Colon-separated list of consumer reaches (process|chip). For NVSCI mode only. Default value: ""
--cons-timeout=[INT] Connection timeout for the consumer, in milliseconds. For SOCKET mode only. Default value: 1000 (ms)
--cons-stream-names=[STR] Colon-separated list of consumer nvsciipc endpoints. For NVSCI mode only. Default value: ""
--dataType=[int|dwImage|custom] The type of data to be transferred. Default value: "dwImage"
--frames=N The number of frames to run the sample. Default value: 128
--sync-mode=[none|p2c|c2p|both] The synchronization mode for exchanging buffers. none: all data buffers are exchanged synchronously. p2c: data buffers are written asynchronously from producer send. c2p: data buffers are read asynchronously from consumer read. both: both p2c and c2p synchronization. For NVSCI mode only
--interactive=[0|1] Run in interactive mode. Default value: 0
--num-local-consumers=[INT] Number of local NVSCI consumers. For NVSCI mode only. Default value: 0
--late-attach-locations=[STR] Colon-separated list of SOC-ID.VM-ID tuples, for example '0.0:1.0'. For NVSCI mode only. Default value: ""
--late-attach=[STR] Colon-separated list of producer nvsciipc endpoints to attach late. For NVSCI mode only. Default value: ""
--greedy-reattach=[0|1] Greedily reattach any disconnected consumer nvsciipc endpoints. For NVSCI mode only. Default value: 0
--loglevel=[STR] The log level. Default value: "DW_LOG_WARN"
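The three --mode semantics can be sketched with a tiny shell simulation. This is only an illustration (the "channel" is a plain string, and real channel policy decides whether a full FIFO blocks or drops), not the sample's implementation:

```shell
# Illustrative only: what a consumer can see under each --mode.
# mailbox: capacity 1; a newer packet overwrites an unread one.
mailbox=""
for pkt in 1 2 3; do
  mailbox="$pkt"                      # each send replaces the unread packet
done
echo "mailbox holds: $mailbox"        # only the latest packet survives

# fifo of size 2: packets queue until the channel is full.
fifo=""; count=0
for pkt in 1 2 3; do
  if [ "$count" -lt 2 ]; then
    fifo="${fifo:+$fifo }$pkt"
    count=$((count+1))
  fi                                  # a full FIFO back-pressures the producer
done
echo "fifo(2) holds: $fifo"
# reuse behaves like mailbox, but the latest packet stays readable
# even after the consumer has already read it.
```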
To run the default (inter-process peer-to-peer over socket):
./sample_cgf_dwchannel
To run intra-process socket broadcast:
./sample_cgf_dwchannel --prod=1 --downstreams=2 --cons=2
To run intra-process nvscistream broadcast:
./sample_cgf_dwchannel --type=NVSCI --num-local-consumers=2
To run inter-process socket:
./sample_cgf_dwchannel --cons=1 --prod=0 --downstreams=0
./sample_cgf_dwchannel --prod=1 --downstreams=1 --cons=0
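A minimal launcher sketch for the two processes above. The binary path is assumed to be the build output `./sample_cgf_dwchannel`; here the commands are only assembled and printed, with the real launch shown as a comment:

```shell
# Assemble the consumer-side and producer-side command lines.
BIN=./sample_cgf_dwchannel
CONSUMER_CMD="$BIN --cons=1 --prod=0 --downstreams=0"
PRODUCER_CMD="$BIN --prod=1 --downstreams=1 --cons=0"
echo "$CONSUMER_CMD"
echo "$PRODUCER_CMD"
# On a real target, launch both concurrently; note that --prod-timeout
# bounds how long the producer waits for the connection:
#   $CONSUMER_CMD &  $PRODUCER_CMD &  wait
```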
To run inter-process socket with custom data type:
./sample_cgf_dwchannel --cons=1 --prod=0 --downstreams=0 --dataType=custom
./sample_cgf_dwchannel --prod=1 --downstreams=1 --cons=0 --dataType=custom
To run inter-process nvscistream:
./sample_cgf_dwchannel --type=NVSCI --prod-stream-names=nvscisync_a_0 --prod-reaches=process
./sample_cgf_dwchannel --type=NVSCI --cons-stream-names=nvscisync_a_1 --cons-reaches=process
To run inter-process nvscistream with custom data type:
./sample_cgf_dwchannel --type=NVSCI --prod-stream-names=nvscisync_a_0 --prod-reaches=process --dataType=custom
./sample_cgf_dwchannel --type=NVSCI --cons-stream-names=nvscisync_a_1 --cons-reaches=process --dataType=custom
To run inter-process nvscistream with asynchronous writes:
./sample_cgf_dwchannel --type=NVSCI --prod-stream-names=nvscisync_a_0 --prod-reaches=process --sync-mode=p2c
./sample_cgf_dwchannel --type=NVSCI --cons-stream-names=nvscisync_a_1 --cons-reaches=process --sync-mode=p2c
To run intra-inter-process socket broadcast:
./sample_cgf_dwchannel --cons=1 --prod=0 --downstreams=0
./sample_cgf_dwchannel --prod=1 --downstreams=2 --cons=1
To run intra-inter-process nvscistream broadcast:
./sample_cgf_dwchannel --type=NVSCI --prod-stream-names=nvscisync_a_0 --prod-reaches=process --num-local-consumers=1
./sample_cgf_dwchannel --type=NVSCI --cons-stream-names=nvscisync_a_1 --cons-reaches=process
To run C2C socket peer-to-peer:
./sample_cgf_dwchannel --cons=1 --prod=0 --downstreams=0 --ip=${IP} --port=${ID}
./sample_cgf_dwchannel --cons=0 --ip=${IP} --port=${ID}
To run C2C nvscistream PCIE peer-to-peer:
sudo ./sample_cgf_dwchannel --type=NVSCI --prod-stream-names=nvscic2c_pcie_s0_c6_1 --prod-reaches=chip
sudo ./sample_cgf_dwchannel --type=NVSCI --cons-stream-names=nvscic2c_pcie_s0_c5_1 --cons-reaches=chip
To run C2C nvscistream PCIE peer-to-peer with asynchronous writes:
sudo ./sample_cgf_dwchannel --type=NVSCI --prod-stream-names=nvscic2c_pcie_s0_c6_1 --prod-reaches=chip --sync-mode=p2c
sudo ./sample_cgf_dwchannel --type=NVSCI --cons-stream-names=nvscic2c_pcie_s0_c5_1 --cons-reaches=chip --sync-mode=p2c
To run C2C socket broadcast:
./sample_cgf_dwchannel --cons=1 --prod=0 --downstreams=0 --ip=${IP} --port=${ID}
./sample_cgf_dwchannel --prod=1 --downstreams=2 --cons=1 --ip=${IP} --port=${ID}
To run C2C nvscistream PCIE broadcast:
sudo ./sample_cgf_dwchannel --type=NVSCI --prod-stream-names=nvscic2c_pcie_s0_c6_1:nvscic2c_pcie_s0_c6_2 --prod-reaches=chip:chip
sudo ./sample_cgf_dwchannel --type=NVSCI --cons-stream-names=nvscic2c_pcie_s0_c5_1:nvscic2c_pcie_s0_c5_2 --cons-reaches=chip:chip
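The colon-separated lists pair up entry by entry: entry i of --prod-stream-names uses the reach at entry i of --prod-reaches (as in the hybrid example below, where nvscisync_a_0:nvscic2c_pcie_s0_c6_1 pairs with process:chip). A sketch that builds the two-consumer producer arguments used above:

```shell
# Build matching endpoint and reach lists for a 2-consumer C2C broadcast.
names=""; reaches=""
for i in 1 2; do
  names="${names:+$names:}nvscic2c_pcie_s0_c6_$i"   # endpoint for consumer i
  reaches="${reaches:+$reaches:}chip"               # its reach (process|chip)
done
echo "--prod-stream-names=$names --prod-reaches=$reaches"
```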
To run hybrid inter-process, inter-chip nvscistream:
sudo ./sample_cgf_dwchannel --type=NVSCI --prod-stream-names=nvscisync_a_0:nvscic2c_pcie_s0_c6_1 --prod-reaches=process:chip
sudo ./sample_cgf_dwchannel --type=NVSCI --cons-stream-names=nvscisync_a_1 --cons-reaches=process
sudo ./sample_cgf_dwchannel --type=NVSCI --cons-stream-names=nvscic2c_pcie_s0_c5_1 --cons-reaches=chip
To run late-attach nvscistream:
./sample_cgf_dwchannel --type=NVSCI --num-local-consumers=1 --prod-reaches=process --late-attach=nvscisync_a_0 --frames=5000
Wait until the producer and consumer start sending and receiving data, then run:
./sample_cgf_dwchannel --type=NVSCI --cons-stream-names=nvscisync_a_1 --cons-reaches=process --frames=5000
The late consumer should come online and start receiving data. The same functionality should work with C2C endpoints.
To run reattach inter-process nvscistream:
./sample_cgf_dwchannel --type=NVSCI --num-local-consumers=1 --prod-stream-names=nvscisync_a_0 --prod-reaches=process --frames=5000 --greedy-reattach=1
./sample_cgf_dwchannel --type=NVSCI --cons-stream-names=nvscisync_a_1 --cons-reaches=process --frames=5000
Kill one of the consumers; the other consumer should continue without service disruption. Then relaunch the killed consumer; it should resume receiving data, again without disrupting the other consumer's service. The same functionality should work with C2C endpoints, provided --late-attach-locations is passed to the producer appropriately.
To run late-attach/reattach intra-inter-c2c nvscistream, with p2c sync:
On Dual Firespray Tegra B:
sudo ./sample_cgf_dwchannel --type=NVSCI --cons-reaches=chip --cons-stream-names=nvscic2c_pcie_s0_c5_1 --frames=5000 --sync-mode=p2c
The producer will come online and start streaming to the local consumer; the inter-process and inter-SoC consumers will then come online together later and start streaming. Kill either or both of these consumers; the producer and any remaining consumer should continue without service disruption. Then relaunch the consumer(s); each should resume receiving data without disrupting the other consumer's service.
Note: the restrictions of the command-line arguments mean all late-attaching consumers must connect together. Run with --interactive=1 to issue connect and disconnect commands manually via the command line.
To raise the kernel socket buffer limits used by these transfers, append the following settings and reload:
sudo sed -i '$ a net.core.wmem_max = 65011712' /etc/sysctl.conf
sudo sed -i '$ a net.core.rmem_max = 65011712' /etc/sysctl.conf
sudo sed -i '$ a net.core.rmem_default = 16777216' /etc/sysctl.conf
sudo sed -i '$ a net.ipv4.tcp_wmem = 65011712 65011712 65011712' /etc/sysctl.conf
sudo sed -i '$ a net.ipv4.tcp_rmem = 65011712 65011712 65011712' /etc/sysctl.conf
sudo sysctl -p
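The sed commands above append unconditionally, so re-running them duplicates entries in /etc/sysctl.conf. A sketch of an idempotent variant, shown against a temporary file (on a real target, point CONF at /etc/sysctl.conf and run with sudo):

```shell
CONF=$(mktemp)          # stand-in for /etc/sysctl.conf in this sketch
# Append a line only if it is not already present verbatim.
append_once() { grep -qxF "$1" "$CONF" || echo "$1" >> "$CONF"; }
for pass in 1 2; do     # the second pass demonstrates idempotence
  append_once 'net.core.wmem_max = 65011712'
  append_once 'net.core.rmem_max = 65011712'
  append_once 'net.core.rmem_default = 16777216'
  append_once 'net.ipv4.tcp_wmem = 65011712 65011712 65011712'
  append_once 'net.ipv4.tcp_rmem = 65011712 65011712 65011712'
done
wc -l < "$CONF"         # 5 lines, not 10: duplicates were skipped
```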
On Linux, the nvsciipc endpoints must be listed in /etc/nvsciipc.cfg to be visible. See the /etc/nvsciipc.cfg file to understand the format of endpoints. If the file is modified, the system must be rebooted before the changes take effect. For C2C use cases, the nvsciipc endpoints are configured in the device tree and are associated with a specific peer SoC on the platform. Please see the DRIVE OS release documentation for further information about how to configure the C2C endpoints.
Each endpoint can only be used by the Tegra that owns the corresponding PCIe port for the endpoint. In the default P3710 (Firespray) dual configuration, the nvsciipc endpoints are listed as follows:
INTER_CHIP nvscic2c_pcie_s0_c[5-6]_[1-12) 0000
Endpoints with c5 sit on the root-port (RP) side of the PCIe bus; endpoints with c6 sit on the endpoint (EP) side. The RP and EP names identify the HW node only and do not signify the direction of allowed data flows: each pair of connected endpoints may transfer data in either direction. c6 is connected to Tegra A, while c5 is connected to Tegra B.