Autonomous vehicle (AV) development requires massive amounts of sensor data to build perception systems. Developers typically get this data from two sources: replay streams of real-world drives or simulation. However, real-world datasets offer limited flexibility, as the data is fixed to only the objects, events, and view angles captured by the physical sensors. It is also difficult to simulate…
Data labeling and model training are consistently ranked as the most significant challenges teams face when building an AI/ML infrastructure. Both are essential steps in the ML application development process, and if not done correctly, they can lead to inaccurate results and decreased performance. See the AI Infrastructure Ecosystem of 2022 report from the AI Infrastructure Alliance for more…
Robots are increasing in complexity, with a higher degree of autonomy, a greater number and diversity of sensors, and more sensor fusion-based algorithms. Hardware acceleration is essential to run these increasingly complex workloads, enabling robotics applications that can run larger workloads with more speed and power efficiency. The mission of NVIDIA Isaac ROS has always been to empower…
The 65th annual Daytona 500 will take place on February 19, 2023 and for many this elite NASCAR event is the pinnacle of the car racing world. For now, there are no plans to see an autonomous vehicle racing against cars with drivers, but it's not too hard to imagine that scenario at a future race. At CES in early January, there was a competition to test the best autonomous racing vehicles.
Accurate, fast object detection is an important task in robotic navigation and collision avoidance. Autonomous agents need a clear map of their surroundings to navigate to their destination while avoiding collisions. For example, in warehouses that use autonomous mobile robots (AMRs) to transport objects, avoiding hazardous machines that could potentially damage robots has become a challenging…
A point cloud is a data set of points in a coordinate system. Points contain a wealth of information, including three-dimensional coordinates X, Y, Z; color; classification value; intensity value; and time. Point clouds mostly come from lidars that are commonly used in various NVIDIA Jetson use cases, such as autonomous machines, perception modules, and 3D modeling. One of the key…
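As a rough illustration of that record layout, a point cloud can be modeled as a structured array holding exactly the fields named above. The field names and dtypes below are assumptions for the sketch, not a Jetson or DriveWorks storage format:

```python
import numpy as np

# Illustrative point record: 3D coordinates, color, classification,
# intensity, and time. Names and dtypes are hypothetical.
point_dtype = np.dtype([
    ("x", np.float32),             # X coordinate (meters)
    ("y", np.float32),             # Y coordinate (meters)
    ("z", np.float32),             # Z coordinate (meters)
    ("rgb", np.uint8, (3,)),       # color
    ("classification", np.uint8),  # class label (e.g., ground, vehicle)
    ("intensity", np.float32),     # return intensity
    ("t", np.float64),             # timestamp (seconds)
])

# A point cloud is then simply an array of such records.
cloud = np.zeros(100_000, dtype=point_dtype)
cloud["z"][:10] = 1.5  # example: set the height of the first ten points
```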
With NVIDIA DriveWorks SDK, autonomous vehicles can bring their understanding of the world to a new dimension. The SDK enables autonomous vehicle developers to easily process three-dimensional lidar data and apply it to specific tasks, such as perception or localization. You can learn how to implement this critical toolkit in our expert-led webinar, Point Cloud Processing on DriveWorks, Aug. 25.
Many Jetson users choose lidars as their major sensors for localization and perception in autonomous solutions. Lidars describe the spatial environment around the vehicle as a collection of three-dimensional points known as a point cloud. Point clouds sample the surfaces of surrounding objects at long range and with high precision, making them well-suited for use in higher-level obstacle perception…
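For intuition on how those 3D points arise, each lidar return is a range measured along a known beam direction, so turning raw returns into a point cloud is a standard spherical-to-Cartesian conversion. The sketch below assumes per-return range, azimuth, and elevation arrays; it is plain geometry, not a DriveWorks API call:

```python
import numpy as np

def returns_to_points(ranges, azimuth, elevation):
    """Convert lidar returns (range, azimuth, elevation) into Cartesian
    XYZ points in the sensor frame. Angles in radians, ranges in meters."""
    cos_el = np.cos(elevation)
    x = ranges * cos_el * np.cos(azimuth)
    y = ranges * cos_el * np.sin(azimuth)
    z = ranges * np.sin(elevation)
    return np.stack([x, y, z], axis=-1)  # shape (N, 3)

# Example: one return 20 m away, 30 degrees to the left, level with the sensor.
pts = returns_to_points(np.array([20.0]), np.radians([30.0]), np.radians([0.0]))
```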
Our radar perception pipeline delivers 360-degree surround perception around the vehicle, using production-grade radar sensors operating at the 77GHz automotive microwave band. Signals transmitted and received at microwave wavelengths are resistant to impairment from poor weather (such as rain, snow, and fog), and active ranging sensors do not suffer reduced performance during night time…
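The weather resistance follows directly from the wavelength. A quick back-of-the-envelope check (plain physics, not pipeline code):

```python
# A 77 GHz carrier has a millimeter-scale wavelength, which passes through
# fog droplets and rain far more readily than lidar's micrometer-scale light.
c = 299_792_458.0   # speed of light (m/s)
f = 77e9            # automotive radar carrier frequency (Hz)
wavelength = c / f  # ~3.9e-3 m
print(f"77 GHz wavelength: {wavelength * 1e3:.1f} mm")  # -> 3.9 mm
```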
The NVIDIA DriveWorks SDK contains a collection of CUDA-based, low-level point cloud processing modules optimized for NVIDIA DRIVE AGX platforms. The DriveWorks Point Cloud Processing modules include common algorithms that any AV developer working with point cloud representations would need, such as accumulation and registration. Figure 1 shows NVIDIA test vehicles outfitted with lidar.
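To make "registration" and "accumulation" concrete, the sketch below estimates the rigid transform between two matched point sets with the Kabsch algorithm and then stitches sweeps into a common frame. It illustrates the idea only; a production pipeline such as DriveWorks' must also establish correspondences (e.g., via ICP), and none of these functions are DriveWorks APIs:

```python
import numpy as np

def register_rigid(src, dst):
    """Estimate the rigid transform (R, t) mapping matched Nx3 point
    sets src -> dst via the Kabsch algorithm (correspondences assumed known)."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)       # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t

def accumulate(clouds, poses):
    """Accumulate several sweeps into one cloud by mapping each sweep
    into a common frame with its estimated (R, t) pose."""
    return np.vstack([pts @ R.T + t for pts, (R, t) in zip(clouds, poses)])

# Example: recover a known 10-degree rotation plus translation.
rng = np.random.default_rng(0)
src = rng.normal(size=(500, 3))
th = np.radians(10.0)
R_true = np.array([[np.cos(th), -np.sin(th), 0.0],
                   [np.sin(th),  np.cos(th), 0.0],
                   [0.0,         0.0,        1.0]])
R_est, t_est = register_rigid(src, src @ R_true.T + np.array([1.0, 0.0, 0.5]))
```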