Announced at COMPUTEX 2025, the NVIDIA Omniverse Blueprint for AI factory digital twins has expanded to support OpenUSD schemas. The blueprint features new tools to simulate more aspects of data center design across power, cooling, and networking infrastructure. Engineering teams can now design and test entire AI factories in a realistic virtual world, helping to catch issues early so they can…
Universal Scene Description (OpenUSD) offers a powerful, open, and extensible ecosystem for describing, composing, simulating, and collaborating within complex 3D worlds. From handling massive datasets and automating workflows for digital twins to enabling real-time rendering for games and streamlining industrial operations in manufacturing and energy, it is transforming how industries work with…
Industrial enterprises are embracing physical AI and autonomous systems to transform their operations. This involves deploying heterogeneous robot fleets that include mobile robots, humanoid assistants, intelligent cameras, and AI agents throughout factories and warehouses. To harness the full potential of these physical AI-enabled systems, companies rely on digital twins of their facilities…
Kit SDK 107.0 is a major release focused primarily on updates for robotics development.
The world of robotics is undergoing a significant transformation, driven by rapid advancements in physical AI. This evolution is accelerating time to market for new robotic solutions, enhancing confidence in their safety capabilities, and helping power physical AI in factories and warehouses. Announced at GTC, Newton is an open-source, extensible physics engine developed…
Recently announced at MWC Barcelona, developers can now stream augmented reality (AR) experiences built with NVIDIA Omniverse to the Apple iPad. Omniverse, a platform for real-time collaboration and simulation, enables developers to create and stream detailed datasets with high visual quality. Built on Universal Scene Description (OpenUSD), Omniverse enables seamless compatibility across 3D tools…
Learn how to adopt and evolve OpenUSD for the world's physical and industrial AI data pipelines and workflows.
Explore the future of extended reality and learn how spatial computing is reshaping immersive development and industry workflows.
Universal Scene Description (OpenUSD) is an open, extensible framework and ecosystem with APIs for composing, editing, querying, rendering, collaborating, and simulating within 3D virtual worlds. This post explains how you can start using OpenUSD today with your existing assets and tools and what steps you can take to iteratively up-level your USD workflows. For an interactive…
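To make the scene-description idea concrete, here is a minimal sketch of a layer in USD's human-readable `.usda` ASCII format — a single transform prim containing a sphere. The prim names are illustrative, not drawn from any particular post above:

```usda
#usda 1.0
(
    defaultPrim = "World"
)

def Xform "World"
{
    def Sphere "Ball"
    {
        double radius = 2.0
    }
}
```

Layers like this can be opened, composed, and queried through the USD APIs, or referenced into larger stages — the composition mechanism that underpins the collaborative workflows described above.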
Take the three self-paced courses at no cost through the NVIDIA Deep Learning Institute (DLI).
Spatial computing experiences are transforming how we interact with data, connecting the physical and digital worlds through technologies like extended reality (XR) and digital twins. These advancements are enabling more intuitive and immersive ways to analyze and understand complex datasets. This post explains how developers can now engage with Universal Scene Description (OpenUSD)-based…
Training physical AI models used to power autonomous machines, such as robots and autonomous vehicles, requires huge amounts of data. Acquiring large sets of diverse training data can be difficult, time-consuming, and expensive. Data is often limited due to privacy restrictions or concerns, or simply may not exist for novel use cases. In addition, the available data may not apply to the full range…
As enterprises increasingly integrate AI into their industrial operations to deliver more automated and autonomous facilities, more operations teams are becoming centralized in remote operations centers. From these centers, these teams monitor, operate, and provide expert guidance to distributed production sites. A new generation of 3D remote monitoring solutions, powered by advancements in…
Programming robots for real-world success requires a training process that accounts for unpredictable conditions, different surfaces, and variations in object size, shape, texture, and more. Consequently, physically accurate simulations are vital for training AI-enabled robots before deployment. Crafting physically accurate simulations requires advanced programming skills to fine-tune algorithms…
Humanoid robots present a multifaceted challenge at the intersection of mechatronics, control theory, and AI. The dynamics and control of humanoid robots are complex, requiring advanced tools, techniques, and algorithms to maintain balance during locomotion and manipulation tasks. Collecting robot data and integrating sensors also pose significant challenges, as humanoid robots require a fusion of…
Physical AI-powered robots need to autonomously sense, plan, and perform complex tasks in the physical world. These include transporting and manipulating objects safely and efficiently in dynamic and unpredictable environments. Robot simulation enables developers to train, simulate, and validate these advanced systems through virtual robot learning and testing. It all happens in physics…
The integration of robotic surgical assistants (RSAs) in operating rooms offers substantial advantages for both surgeons and patient outcomes. Currently operated through teleoperation by trained surgeons at a console, these surgical robot platforms provide augmented dexterity that has the potential to streamline surgical workflows and alleviate surgeon workloads. Exploring visual behavior cloning…
Producing commercials is resource-intensive, requiring physical locations and various props and setups to display products in different settings and environments for more accurate consumer targeting. This traditional process is not only expensive and time-consuming but can also be destructive to the physical environment, and it leaves no way to capture a new angle after you return home.
Accelerate your OpenUSD workflows with this free curriculum for developers and 3D practitioners.
Generative physical AI models can understand and execute actions with fine or gross motor skills within the physical world. Understanding and navigating the 3D space of the physical world requires spatial intelligence. Achieving spatial intelligence in physical AI involves converting the real world into AI-ready virtual representations that the model can understand.
Originally published on July 29, 2024, this post was updated on October 8, 2024. Robots need to be adaptable, readily learning new skills and adjusting to their surroundings. Yet traditional training methods can limit a robot's ability to apply learned skills in new situations. This is often due to the gap between perception and action, as well as the challenges in transferring skills across…
NVIDIA announced new USD-based generative AI and NVIDIA-accelerated development tools built on NVIDIA Omniverse at SIGGRAPH 2024. These advancements will expand adoption of Universal Scene Description (OpenUSD) to robotics, industrial design, and engineering, so developers can quickly build highly accurate virtual worlds for the next evolution of AI. OpenUSD is an open-source framework and…
Developers from advertising agencies to software vendors are empowering global brands to deliver hyperpersonalization for digital experiences and visual storytelling with product configurator solutions. Integrating NVIDIA Omniverse with OpenUSD and generative AI into product configurators enables solution providers and software developers to deliver interactive, ray-traced…
Complimentary training on OpenUSD, digital humans, LLMs, and more, with hands-on labs for Full Conference and Experience attendees.
SyncTwin GmbH, a company that builds software to optimize production, intralogistics, and assembly, is on a mission to unlock industrial digital twins for small and medium-sized businesses (SMBs). While SyncTwin has helped major global companies like BMW minimize costs and downtime in their factories with digital twins, they are now shifting their focus to enable manufacturing businesses…
NVIDIA Omniverse is a platform that enables you to build applications for complex 3D and industrial digitalization workflows based on Universal Scene Description (OpenUSD). The platform's modular architecture breaks down into core technologies and services, which you can directly integrate into tools and applications, customizing as needed. This approach simplifies integration…
With the growing emphasis on environmental, social, and governance (ESG) investments and initiatives, manufacturers are looking for new ways to increase energy efficiency and sustainability across their operations. One area of opportunity in electronics manufacturing is the performance of run-in test rooms, which are essential for ensuring the reliability, quality, and safety of the world's…
Manufacturers face increased pressures to shorten production cycles, enhance productivity, and improve quality, all while reducing costs. To address these challenges, they're investing in industrial digitalization and AI-enabled digital twins to unlock new possibilities from planning to operations. Developers at Pegatron, an electronics manufacturer based in Taiwan, used NVIDIA AI…
Missed GTC or want to replay your favorite training labs? Find them on demand in the NVIDIA GTC Training Labs playlist.
With automotive consumers increasingly seeking more seamless, connected driving experiences, the industry has increased its focus on connectivity, advanced camera systems, and the in-vehicle experience. Continental, a leading German automotive technology company and innovator for automotive display solutions, is developing AI-powered virtual factory solutions to address these shifts and…
With NVIDIA AI, NVIDIA Omniverse, and the Universal Scene Description (OpenUSD) ecosystem, industrial developers are building virtual factory solutions that accelerate time-to-market, maximize production capacity, and cut costs through optimized processes for both brownfield and greenfield developments. Companies such as Delta Electronics, FoxConn, Pegatron, and Wistron have developed…
Today, NVIDIA and the Alliance for OpenUSD (AOUSD) announced the AOUSD Materials Working Group, an initiative for standardizing the interchange of materials in Universal Scene Description, known as OpenUSD. As an extensible framework and ecosystem for describing, composing, simulating, and collaborating within 3D worlds, OpenUSD enables developers to build interoperable 3D workflows…
We are so excited to be back in person at GTC this year at the San Jose Convention Center. With thousands of developers, industry leaders, researchers, and partners in attendance, GTC gives you a unique opportunity to network with legends in technology and AI and to experience NVIDIA CEO Jensen Huang's keynote live on stage at the SAP Center. Past GTC alumni? Get 40%
Learn how synthetic data is supercharging 3D simulation and computer vision workflows, from visual inspection to autonomous machines.
Gain a foundational understanding of USD, the open and extensible framework for creating, editing, querying, rendering, collaborating, and simulating within 3D worlds.
Developers and enterprises can now deploy lifelike virtual and mixed reality experiences with Varjo's latest XR-4 series headsets, which are integrated with NVIDIA technologies. These XR headsets match the resolution that the human eye can see, providing users with realistic visual fidelity and performance. The latest XR-4 series headsets support NVIDIA Omniverse and are powered by NVIDIA…
HOMEE AI, an NVIDIA Inception member based in Taiwan, has developed an "AI-as-a-service" spatial planning solution to disrupt the $650B global home decor market. They're helping furniture makers and home designers find new business opportunities in the era of industrial digitalization. Using NVIDIA Omniverse, the HOMEE AI engineering team developed an enterprise-ready service to deliver…
Discover why OpenUSD is central to the future of 3D development with Aaron Luk, a founding developer of Universal Scene Description.
On March 19, learn how to build generative AI-enabled 3D pipelines and tools using Universal Scene Description for industrial digitalization.