
    Accelerating AI Modules for ROS and ROS 2 on NVIDIA Jetson Platform

    NVIDIA Jetson developer kits are a go-to platform for roboticists because of their ease of use, system support, and comprehensive acceleration of AI workloads. In this post, we showcase support for the open-source robotics frameworks ROS and ROS 2 on NVIDIA Jetson developer kits.

    Diagram shows flow of work to get started with ROS and ROS 2 on the Jetson platform.
    Figure 1. ROS and ROS 2 with AI acceleration on NVIDIA Jetson platform.

    This post covers the following resources for accelerating AI with ROS and ROS 2 on Jetson.

    ROS and ROS 2 Docker containers

    We offer different Docker images for ROS and ROS 2 with machine learning libraries. We also provide Dockerfiles for you to build your own Docker images according to your custom requirements.

    ROS and ROS 2 Docker images

    We provide support for ROS 2 Foxy Fitzroy, ROS 2 Eloquent Elusor, and ROS Noetic with AI frameworks such as PyTorch, NVIDIA TensorRT, and the DeepStream SDK, as well as machine learning (ML) libraries including scikit-learn, NumPy, and Pillow. The containers are packaged with ROS 2 AI packages accelerated with TensorRT.

    Table 1 shows the pull commands for the ROS 2 Foxy, ROS 2 Eloquent, and ROS Noetic Docker images with PyTorch, TensorRT, and the DeepStream SDK.

    • ROS 2 Foxy with PyTorch and TensorRT: $ docker pull nvidiajetson/l4t-ros2-foxy-pytorch:r32.5
    • ROS 2 Foxy with DeepStream SDK: $ docker pull nvidiajetson/deepstream-ros2-foxy:5.0.1
    • ROS 2 Eloquent with PyTorch and TensorRT: $ docker pull nvidiajetson/l4t-ros2-eloquent-pytorch:r32.5
    • ROS 2 Eloquent with DeepStream SDK: $ docker pull nvidiajetson/deepstream-ros2-eloquent:5.0.1
    • ROS Noetic with PyTorch and TensorRT: $ docker pull nvidiajetson/l4t-ros-noetic-pytorch:r32.5
    Table 1. Pull commands for ROS and ROS 2 Docker images.

    ROS and ROS 2 Dockerfiles

    To enable you to easily run different versions of ROS 2 on Jetson, we released various Dockerfiles and build scripts for ROS 2 Eloquent, ROS 2 Foxy, ROS Melodic, and ROS Noetic. These containers provide an automated and reliable way to install ROS and ROS 2 on Jetson and build your own ROS-based applications.

    Because Eloquent and Melodic already provide prebuilt packages for Ubuntu 18.04, the Dockerfiles install these versions of ROS into the containers. In contrast, Foxy and Noetic are built from source inside the container, as those versions come prebuilt only for Ubuntu 20.04. With the containers, using these versions of ROS and ROS 2 is the same regardless of the underlying OS distribution.

    To build the containers, clone the repo on your Jetson device running NVIDIA JetPack 4.4 or newer, and run the ROS build script:

    $ git clone https://github.com/dusty-nv/jetson-containers
    $ cd jetson-containers
    $ ./scripts/docker_build_ros.sh all       # build all: melodic, noetic, eloquent, foxy
    $ ./scripts/docker_build_ros.sh melodic   # build only melodic
    $ ./scripts/docker_build_ros.sh noetic    # build only noetic
    $ ./scripts/docker_build_ros.sh eloquent  # build only eloquent
    $ ./scripts/docker_build_ros.sh foxy      # build only foxy 

    Accelerated AI ROS and ROS 2 packages

    GitHub: NVIDIA-AI-IOT/ros2_torch_trt

    We’ve put together bundled packages with all the materials needed to run various GPU-accelerated AI applications with ROS and ROS 2 packages. There are applications for object detection, human pose estimation, gesture classification, semantic segmentation, and NVAprilTags.

    The repository provides four different packages for classification and object detection using PyTorch and TensorRT. This repository serves as a starting point for AI integration with ROS 2. The main features of the packages are as follows:

    • For classification, select from various ImageNet pretrained models, including ResNet18, AlexNet, SqueezeNet, and ResNet50.
    • For detection, MobileNetV1-based SSD is currently supported, trained on the COCO dataset.
    • The TensorRT packages run inference significantly faster than the corresponding PyTorch models running directly on the GPU.
    • The inference results are published in the form of vision_msgs.
    • On running the node, a window is also shown with the inference results visualized.
    • A Jetson-based Docker image and launch file are provided for ease of use.
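    As a rough, ROS-free sketch of what consuming those vision_msgs results looks like, the following uses plain Python stand-ins for the message types. The real classes come from the vision_msgs package; the simplified field names here are illustrative assumptions, not the exact message definitions.

    ```python
    from dataclasses import dataclass, field
    from typing import List

    # Plain-Python stand-ins for the vision_msgs types the packages publish.
    @dataclass
    class ObjectHypothesis:
        class_id: str
        score: float

    @dataclass
    class BoundingBox2D:
        cx: float
        cy: float
        w: float
        h: float

    @dataclass
    class Detection2D:
        results: List[ObjectHypothesis] = field(default_factory=list)
        bbox: BoundingBox2D = None

    def best_hypothesis(det):
        """Pick the highest-scoring class hypothesis for a detection."""
        return max(det.results, key=lambda r: r.score)

    det = Detection2D(
        results=[ObjectHypothesis("person", 0.91), ObjectHypothesis("dog", 0.07)],
        bbox=BoundingBox2D(cx=320, cy=240, w=80, h=160),
    )
    print(best_hypothesis(det).class_id)  # person
    ```

    A real subscriber would receive such a message in a callback and read the same fields from the actual vision_msgs classes.
    
    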

    For more information, see Implementing Robotics Applications with ROS 2 and AI on the NVIDIA Jetson Platform.

    ROS and ROS 2 packages for accelerated deep learning nodes

    GitHub: dusty-nv/ros_deep_learning

    This repo contains TensorRT-accelerated deep learning inference nodes and camera/video streaming nodes for ROS and ROS 2, with support for Jetson Nano, TX1, TX2, Xavier NX, and NVIDIA AGX Xavier.

    The nodes use the image recognition, object detection, and semantic segmentation DNNs from the jetson-inference library and NVIDIA Hello AI World tutorial. Both come with several built-in pretrained networks for classification, detection, and segmentation and the ability to load customized user-trained models.

    The camera/video streaming nodes support the following I/O interfaces:

    • MIPI CSI cameras
    • V4L2 cameras
    • RTP / RTSP
    • Videos and images
    • Image sequences
    • OpenGL windows

    ROS Melodic and ROS 2 Eloquent are supported. We recommend the latest version of NVIDIA JetPack.

    ROS 2 package for human pose estimation

    GitHub: NVIDIA-AI-IOT/ros2_trt_pose

    In this repository, we accelerate human pose estimation using TensorRT. We use the widely adopted NVIDIA-AI-IOT/trt_pose repository. The pretrained models infer 17 body parts, following the categories of the COCO dataset. Here are the key features of the ros2_trt_pose package:

    • Publishes pose_msgs, such as the person count and person_id. For each person_id, it publishes 17 body parts.
    • Provides a launch file for easy usage and visualizations on Rviz2:
      • Image messages
      • Visual markers: body_joints, body_skeleton
    • Contains a Jetson-based Docker image for easy install and usage.
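    To make the 17-body-part output concrete, here is a small pure-Python sketch of the standard COCO keypoint categories and how per-person keypoints might be turned into line segments for body_skeleton markers. The edge list is a hypothetical visualization topology; the package's own marker connections may differ.

    ```python
    # The 17 COCO keypoint categories, in the standard COCO ordering.
    COCO_KEYPOINTS = [
        "nose", "left_eye", "right_eye", "left_ear", "right_ear",
        "left_shoulder", "right_shoulder", "left_elbow", "right_elbow",
        "left_wrist", "right_wrist", "left_hip", "right_hip",
        "left_knee", "right_knee", "left_ankle", "right_ankle",
    ]

    # Hypothetical limb connections (index pairs into COCO_KEYPOINTS)
    # used to draw skeleton line markers.
    SKELETON_EDGES = [
        (5, 7), (7, 9),      # left arm
        (6, 8), (8, 10),     # right arm
        (5, 6), (11, 12),    # shoulders, hips
        (5, 11), (6, 12),    # torso
        (11, 13), (13, 15),  # left leg
        (12, 14), (14, 16),  # right leg
    ]

    def limb_segments(keypoints):
        """Turn one person's 17 keypoint coordinates into line segments."""
        return [(keypoints[a], keypoints[b]) for a, b in SKELETON_EDGES]

    print(len(COCO_KEYPOINTS))  # 17
    ```

    Each segment pair would then become the endpoints of one line in an RViz2 visual marker.
    
    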

    For more information, see Implementing Robotics Applications with ROS 2 and AI on the NVIDIA Jetson Platform.

    ROS 2 package for accelerated NVAprilTags

    GitHub: NVIDIA-AI-IOT/ros2-nvapriltags

    This ROS 2 node uses the NVIDIA GPU-accelerated AprilTags library to detect AprilTags in images and publish the poses, IDs, and additional metadata. This has been tested on ROS 2 (Foxy) and should run on x86_64 and aarch64 (Jetson hardware). It is modeled after and comparable to the ROS 2 node for CPU AprilTags detection.

    For more information about the NVIDIA Isaac GEM on which this node is based, see April Tags in the NVIDIA Isaac SDK 2020.2 documentation. For more information, see AprilTags Visual Fiducial System.

    ROS 2 package for hand pose estimation and gesture classification

    GitHub: NVIDIA-AI-IOT/ros2_trt_pose_hand

    The ROS 2 package takes advantage of the recently released NVIDIA-AI-IOT/trt_pose_hand repo for real-time hand pose estimation and gesture classification using TensorRT. It provides the following key features:

    • Hand pose message with 21 key points
    • Hand pose detection image message
    • std_msgs for gesture classification with six classes:
      • fist
      • pan
      • stop
      • fine
      • peace
      • no hand
    • Visualization markers
    • Launch file for RViz2

    ROS 2 package for text detection and monocular depth estimation

    GitHub: NVIDIA-AI-IOT/ros2_torch2trt_examples

    In this repository, we demonstrate the use of torch2trt, an easy-to-use PyTorch-to-TensorRT converter, for two different applications: text detection and monocular depth estimation.

    For easy integration and development, the ROS 2 package performs the following steps:

    1. Subscribes to the image_tools cam2image image message.
    2. Optimizes the model to TensorRT.
    3. Publishes the image message.
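    The three steps above amount to a subscribe, optimize-once, publish loop. Here is a ROS-free structural sketch of that flow; the `_optimize` stub stands in for the torch2trt conversion, which in the real package runs once on the Jetson GPU and is then reused for every frame.

    ```python
    class TrtPipelineSketch:
        """Structural sketch of the node: optimize the model lazily on the
        first frame, then reuse the optimized model for later frames."""

        def __init__(self, model):
            self.model = model
            self.optimized = None
            self.optimize_calls = 0

        def _optimize(self, model):
            # Stand-in for the torch2trt conversion; returns a callable "engine".
            self.optimize_calls += 1
            return model

        def on_image(self, image):
            # Step 1: in the real node, this is the image_tools cam2image
            # subscription callback.
            if self.optimized is None:
                # Step 2: convert the model to TensorRT only once.
                self.optimized = self._optimize(self.model)
            # Step 3: run inference; the result is what gets published.
            return self.optimized(image)

    node = TrtPipelineSketch(model=lambda img: {"result": img})
    outputs = [node.on_image(i) for i in range(3)]
    print(node.optimize_calls)  # 1
    ```

    Optimizing lazily on the first frame keeps node startup fast while still paying the conversion cost only once.
    
    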

    ROS and ROS 2 package for Jetson stats

    GitHub:  NVIDIA-AI-IOT/ros2_jetson_stats

    The jetson-stats package monitors and controls your NVIDIA Jetson [Xavier NX, Nano, NVIDIA AGX Xavier, TX1, or TX2]. In this repository, we provide a ROS 2 package for jetson_stats so that you can monitor system status in deployment. The ROS package, developed by Jetson Champion Raffaello Bonghi, PhD, can be found at rbonghi/ros_jetson_stats.

    The ros2_jetson_stats package features the following ROS 2 diagnostic messages:

    • GPU/CPU usage percentage
    • EMC/SWAP/memory status (% usage)
    • Power and temperature of the SoC

    You can now control the following through the ROS 2 command line:

    • Fan (mode and speed)
    • Power model (nvpmodel)
    • jetson_clocks

    ROS 2 packages for the DeepStream SDK

    The DeepStream SDK delivers a complete streaming analytics toolkit to build full AI-based solutions using multisensor processing, video, and image understanding. It offers support for popular state-of-the-art object detection and segmentation models such as SSD, YOLO, Faster R-CNN, and Mask R-CNN.

    In this repository, we provide ROS 2 nodes based on the NVIDIA-AI-IOT/deepstream_python_apps repo to perform two inference tasks:

    • Object detection: Four classes of objects are detected: Vehicle, Person, RoadSign, and TwoWheeler.
    • Attribute classification: Three types of attributes are classified for objects of class Vehicle: Color, Make, and Type.

    We also provide sample ROS 2 subscriber nodes that subscribe to these topics and display results in the vision_msgs format. Each inference task also spawns a visualization window with bounding boxes and labels around detected objects.
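    A subscriber interpreting those results mostly maps class IDs to the labels above and attaches vehicle attributes when present. The following pure-Python sketch shows that mapping; the numeric class IDs and attribute dictionary shape are illustrative assumptions, not DeepStream's actual message encoding.

    ```python
    # The four detection classes from the list above; ID order is assumed.
    DETECTION_CLASSES = ["Vehicle", "TwoWheeler", "Person", "RoadSign"]

    # Attribute classifiers apply only to Vehicle detections.
    VEHICLE_ATTRIBUTES = ("Color", "Make", "Type")

    def describe(class_id, attributes=None):
        """Render a detection plus any vehicle attributes as a label string."""
        label = DETECTION_CLASSES[class_id]
        if label == "Vehicle" and attributes:
            extras = ", ".join(f"{k}={v}" for k, v in attributes.items()
                               if k in VEHICLE_ATTRIBUTES)
            return f"{label} ({extras})"
        return label

    print(describe(0, {"Color": "silver", "Make": "Acme", "Type": "sedan"}))
    ```

    In the real nodes, the same information arrives as vision_msgs and is drawn onto the visualization window as bounding-box labels.
    
    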

    For more information, see Implementing Robotics Applications with ROS 2 and AI on the NVIDIA Jetson Platform.

    ROS-based Chameleon project: Understanding semantic obstacles with deep learning

    GitHub:

    This promising work looks at the potential to use the power of robotics and deep learning together. We use FCN-AlexNet, a segmentation network, to perform several real-world applications such as detecting stairs, potholes, or other hazards to robots in unstructured environments.

    CUDA-accelerated Point Cloud Library

    GitHub: NVIDIA-AI-IOT/cuda-pcl

    Many Jetson users choose lidars as their major sensors for localization and perception in autonomous solutions. CUDA-PCL 1.0 includes three CUDA-accelerated PCL libraries:

    • CUDA-ICP
    • CUDA-Segmentation
    • CUDA-Filter
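    To illustrate the math that CUDA-ICP parallelizes, here is a minimal pure-Python version of one ICP alignment step in 2D, assuming point correspondences are already known. The real library operates on 3D clouds and accelerates both the nearest-neighbor search and this alignment step on the GPU.

    ```python
    import math

    def align_2d(src, dst):
        """Best-fit rotation + translation mapping src points onto dst
        (the 2D Procrustes/Kabsch solve at the core of each ICP iteration)."""
        n = len(src)
        # Centroids of both point sets.
        scx = sum(p[0] for p in src) / n
        scy = sum(p[1] for p in src) / n
        dcx = sum(q[0] for q in dst) / n
        dcy = sum(q[1] for q in dst) / n
        # Accumulate cross/dot terms over centered point pairs;
        # the optimal rotation angle is atan2(cross, dot).
        cross = dot = 0.0
        for (px, py), (qx, qy) in zip(src, dst):
            px, py, qx, qy = px - scx, py - scy, qx - dcx, qy - dcy
            cross += px * qy - py * qx
            dot += px * qx + py * qy
        theta = math.atan2(cross, dot)
        c, s = math.cos(theta), math.sin(theta)

        def transform(p):
            x, y = p[0] - scx, p[1] - scy
            return (c * x - s * y + dcx, s * x + c * y + dcy)

        return [transform(p) for p in src]

    # Sanity check: recover a known 30-degree rotation plus translation.
    src = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
    a = math.radians(30)
    dst = [(math.cos(a) * x - math.sin(a) * y + 2.0,
            math.sin(a) * x + math.cos(a) * y - 1.0) for x, y in src]
    aligned = align_2d(src, dst)
    err = max(math.hypot(ax - dx, ay - dy)
              for (ax, ay), (dx, dy) in zip(aligned, dst))
    print(err < 1e-9)  # True
    ```

    Real ICP repeats this solve after re-estimating correspondences each iteration, which is why the per-point work maps so well onto CUDA.
    
    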

    For more information, see Accelerating Lidar for Robotics with NVIDIA CUDA-based PCL.

    NVIDIA Isaac Sim for robotics applications

    JetBot modeled in NVIDIA Isaac Sim.
    Figure 2. Waveshare JetBot in NVIDIA Isaac Sim.

    For more information, see Training Your NVIDIA JetBot to Avoid Collisions Using NVIDIA Isaac Sim.

    Building

    Here are sample projects that leverage the NVIDIA Jetson platform both for the open-source developer community, such as building an autonomous model-scale car, and for enterprises, such as implementing human pose estimation for robot arm solutions. All are enabled by ROS, ROS 2, and NVIDIA Jetson.

    ROS 2-based NanoSaur

    NanoSaur is an open-source project designed and made by Raffaello Bonghi. It’s a fully 3D-printable robot, made to work on your desk, that uses a simple camera and two OLED displays as eyes. It measures 10 x 12 x 6 cm and weighs only 500 g. With a simple power bank, it can wander your desktop autonomously. It’s a little robot for robotics and AI education.

    For more information, see About NanoSaur.

    ROS and ROS 2 integration with Comau North America

    This package demonstrates controlling the e.DO from ROS 2 by bridging messages to ROS1, where the e.DO core package resides.

    Video 1. Using NVIDIA Jetson and GPU accelerated gesture classification AI package with the Comau e.DO robot arm.

    To test the Human Hand Pose Estimation package, the team used a Gazebo simulation of the Comau e.DO from Stefan Profanter’s open source repository. This enabled control of the e.DO in simulation with the help of MoveIt Motion Planning software. A ROS 2 node in the hand pose package publishes the hand pose classification message.

    Because MoveIt 1.0 works only with ROS1, a software bridge was used to subscribe to the message from ROS1. Based on the hand pose detected and classified, a message with robot pose data is published to a listener, which sends the movement command to MoveIt. The resulting change in the e.DO robot pose can be seen in Gazebo.

    ROS-based Yahboom DOFBOT

    DOFBOT is the best partner for AI beginners, programming enthusiasts, and Jetson Nano fans. It is designed around Jetson Nano and contains six HQ servos, an HD camera, and a multifunction expansion board. The whole body is made of green anodized aluminum alloy, which is beautiful and durable. Through the ROS robot system, motion control of the serial bus servos is simplified.

    For more information, see Yahboom DOFBOT AI Vision Robotic Arm with ROS Python programming for Jetson Nano 4GB B01.

    ROS package for JetBot

    GitHub: dusty-nv/jetbot_ros

    JetBot is an open-source robot based on NVIDIA Jetson Nano:

    • Affordable: Less than $150 as an add-on to Jetson Nano.
    • Educational: Includes tutorials from basic motion to AI-based collision avoidance.
    • Fun: Interactively programmed from your web browser.

    Building and using JetBot gives you practical experience for creating entirely new AI projects. To get started, read the JetBot documentation.

    Summary

    Keep yourself updated with ROS and ROS 2 support on NVIDIA Jetson.
