Training physical AI models used to power autonomous machines, such as robots and autonomous vehicles, requires huge amounts of data. Acquiring large sets of diverse training data can be difficult, time-consuming, and expensive. Data is often limited due to privacy restrictions or concerns, or simply may not exist for novel use cases. In addition, the available data may not apply to the full range…
Generative physical AI models can understand and execute actions with fine or gross motor skills within the physical world. Understanding and navigating the 3D space of the physical world requires spatial intelligence. Achieving spatial intelligence in physical AI involves converting the real world into AI-ready virtual representations that the model can understand.
Originally published on July 29, 2024, this post was updated on October 8, 2024. Robots need to be adaptable, readily learning new skills and adjusting to their surroundings. Yet traditional training methods can limit a robot's ability to apply learned skills in new situations. This is often due to the gap between perception and action, as well as the challenges in transferring skills across…
NVIDIA announced new USD-based generative AI and NVIDIA-accelerated development tools built on NVIDIA Omniverse at SIGGRAPH 2024. These advancements will expand adoption of Universal Scene Description (OpenUSD) to robotics, industrial design, and engineering, so developers can quickly build highly accurate virtual worlds for the next evolution of AI. OpenUSD is an open-source framework and…
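For readers new to OpenUSD, here is a minimal sketch of authoring a stage with the pxr Python API; the file and prim names are hypothetical, and a real Omniverse scene would carry far more structure.

```python
# Minimal OpenUSD authoring sketch (hypothetical names throughout).
from pxr import Usd, UsdGeom

# Create a new stage backed by a human-readable .usda file.
stage = Usd.Stage.CreateNew("factory_cell.usda")

# Define a transformable root prim and a simple cube beneath it.
world = UsdGeom.Xform.Define(stage, "/World")
bin_prim = UsdGeom.Cube.Define(stage, "/World/Bin")
bin_prim.GetSizeAttr().Set(0.5)  # cube edge length, in stage units

# Mark /World as the default prim so downstream tools know where to start.
stage.SetDefaultPrim(world.GetPrim())
stage.GetRootLayer().Save()
```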
Recent advancements in generative AI and multi-view reconstruction have introduced new ways to rapidly generate 3D content. However, to be useful for downstream applications like robotics, design, AR/VR, and games, it must be possible to manipulate these 3D models in a physically plausible way. This poses a major challenge to traditional physics simulation algorithms, which were designed to…
NVIDIA researchers took the stage at the SIGGRAPH Asia Real-Time Live event in Sydney to showcase generative AI integrated into an interactive texture painting workflow, enabling artists to paint complex, non-repeating textures directly on the surface of 3D objects. Rather than generating complete results with only high-level user guidance, this prototype shows how AI can function as a brush in…
Differentiable Slang easily integrates with existing codebases, from Python, PyTorch, and CUDA to HLSL, to aid multiple computer graphics tasks and enable novel data-driven and neural research. In this post, we introduce several code examples using differentiable Slang to demonstrate the potential use across different rendering applications and the ease of integration. This is part of a series…
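The post's own examples are in Slang; as a rough sketch of the PyTorch side of the integration, the snippet below follows the loadModule/launchRaw pattern from the slangtorch package's public docs. Treat the kernel file square.slang and its contents (shown in the comment) as assumptions of this sketch.

```python
# Sketch: invoking a Slang kernel from PyTorch via slangtorch (assumed
# installed). Assumes a file square.slang next to this script containing
# roughly:
#
#   [AutoPyBindCUDA]
#   [CUDAKernel]
#   void square(TensorView<float> input, TensorView<float> output)
#   {
#       uint i = cudaThreadIdx().x;
#       if (i < input.size(0))
#           output[i] = input[i] * input[i];
#   }
#
import torch
import slangtorch

m = slangtorch.loadModule("square.slang")  # JIT-compiles the Slang source

x = torch.tensor([1.0, 2.0, 3.0], device="cuda")
y = torch.zeros_like(x)

# One thread block, one thread per element.
m.square(input=x, output=y).launchRaw(blockSize=(32, 1, 1), gridSize=(1, 1, 1))
print(y)  # tensor([1., 4., 9.], device='cuda:0')
```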
NVIDIA just released a SIGGRAPH Asia 2023 research paper, SLANG.D: Fast, Modular and Differentiable Shader Programming. The paper shows how a single language can serve as a unified platform for real-time, inverse, and differentiable rendering. The work is a collaboration between MIT, UCSD, UW, and NVIDIA researchers. This is part of a series on Differentiable Slang. For more information about…
NVIDIA is providing developers with an advanced platform to create scalable, branded, custom extended reality (XR) products with the new NVIDIA CloudXR Suite. Built on a new architecture, NVIDIA CloudXR Suite is a major step forward in scaling the XR ecosystem. It provides a platform for developers, professionals, and enterprise teams to flexibly orchestrate and scale XR workloads across…
Developing extended reality (XR) applications can be extremely challenging. Users typically start with a template project and adhere to pre-existing packaging templates for deploying an app to a headset. This approach creates a distinct bottleneck in the asset iteration pipeline. Updating assets inside an XR experience becomes completely dependent on how fast the developer can build, package…
The latest release of NVIDIA Omniverse delivers an exciting collection of new features based on Omniverse Kit 105, making it easier than ever for developers to get started building 3D simulation tools and workflows. Built on Universal Scene Description, known as OpenUSD, and NVIDIA RTX and AI technologies, Omniverse enables you to create advanced, real-time 3D simulation applications for…
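To give a sense of what building on Kit looks like, below is a minimal extension skeleton following the omni.ext.IExt lifecycle interface from the Kit extension template; the extension name here is hypothetical.

```python
# Minimal Omniverse Kit extension skeleton (hypothetical extension name).
# Kit instantiates the IExt subclass and calls these hooks when the
# extension is enabled or disabled.
import omni.ext


class ExampleExtension(omni.ext.IExt):
    def on_startup(self, ext_id):
        # Called when the extension is enabled; set up UI, subscriptions, etc.
        print(f"[example.extension] startup: {ext_id}")

    def on_shutdown(self):
        # Called when the extension is disabled; release resources here.
        print("[example.extension] shutdown")
```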
This post was updated January 16, 2024. Recent years have witnessed a massive increase in the volume of 3D geospatial data being generated. This data provides rich real-world environmental and contextual information, spatial relationships, and real-time monitoring capabilities for industrial applications. It can enhance the realism, accuracy, and effectiveness of simulations across various…
Smart cities are the future of urban living. Yet they can present various challenges for city planners, most notably in the realm of transportation. To be successful, various aspects of the city, from environment and infrastructure to business and education, must be functionally integrated. This can be difficult, as managing traffic flow alone is a complex problem full of challenges such as…
NVIDIA will present 19 research papers at SIGGRAPH, the year's most important computer graphics conference.
A dozen tools and programs, including the new releases NeuralVDB and Kaolin Wisp, make 3D content creation easy and fast for millions of designers and creators.
Graphics professionals and researchers have come together at SIGGRAPH 2022 to share their expertise and learn about recent innovations in the computer graphics industry. NVIDIA Developer Tools is excited to be a part of this year's event, hosting the hands-on lab Using Nsight to Optimize Ray-Tracing Applications, and announcing new releases for NVIDIA Nsight Systems and NVIDIA Nsight…
NVIDIA at SIGGRAPH 2022 announced the full open sourcing of Material Definition Language (MDL), including the MDL Distiller and GLSL backend technologies, to further expand the MDL ecosystem. Building the world's most accurate and scalable models for material and rendering simulation is a continuous effort, requiring flexibility and adaptability. MDL is NVIDIA's vision for renderer algorithm…
Join us at SIGGRAPH Aug. 8-11 to explore how NVIDIA technology is driving innovations in simulation, collaboration, and design across industries.
It's one thing to hear about something new, amazing, or downright mind-blowing. It's a completely different experience when you can see those breakthroughs visualized and demonstrated. At SIGGRAPH 2021, NVIDIA introduced new and stunning demos showcasing how the latest technologies are transforming workflows across industries. From award-winning research demos to photorealistic graphics…
NVIDIA announced at SIGGRAPH several additions to its Deep Learning Institute (DLI) curriculum, including an introductory course on Pixar's Universal Scene Description (USD) and a teaching kit for educators looking to incorporate hands-on technical training into graphics, architectural design, and digital media production coursework. The teaching kit will be based on NVIDIA Omniverse…
The NVIDIA Developer Program is now bringing NVIDIA Omniverse to over 2.5 million developers around the world. At SIGGRAPH, we're introducing exclusive events, sessions, and other resources to unveil Omniverse as our newest platform for developers. NVIDIA is delivering a suite of Omniverse apps and tools to enhance developer pipelines. Developers can plug into any layer of the platform stack…
In the SIGGRAPH Special Address, NVIDIA revealed that the upcoming release of Blender 3.0 includes USD support. The USD support in the new release was developed by NVIDIA in close collaboration with the Blender Foundation to bring the open standard to Blender artists. In addition, NVIDIA announced a Blender 3.0 alpha USD branch with additional features permitting integration with Omniverse.
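For Blender users, the USD support surfaces as the wm.usd_export operator; a minimal sketch for Blender's Python console is below (the output path is hypothetical, and available options vary by Blender version).

```python
# Sketch: exporting the current Blender scene to USD (Blender 3.0+).
import bpy

# Writes the scene to a binary .usdc file; run inside Blender's Python console.
bpy.ops.wm.usd_export(filepath="/tmp/scene.usdc")
```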
After decades of research, NVIDIA has unearthed the holy grail of video game graphics: real-time ray tracing! This series of videos will explain why you need to add ray tracing to your pipeline now. The idea isn't to use ray tracing as the only rendering technique, but to combine it with traditional rasterization to generate the best possible blend of performance and image quality.
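To make the hybrid idea concrete, here is a toy NumPy sketch of just the compositing step: a rasterized shading buffer blended with a ray-traced reflection buffer by per-pixel reflectivity. All buffers are placeholder constants; a real pipeline produces them on the GPU.

```python
# Toy sketch of hybrid rendering's compositing step (placeholder buffers).
import numpy as np

H, W = 4, 4
raster_color = np.full((H, W, 3), 0.6)    # shading from the rasterization pass
rt_reflect = np.full((H, W, 3), 0.2)      # radiance from the ray-traced pass
reflectivity = np.full((H, W, 1), 0.3)    # per-pixel material reflectivity

# Mostly rasterized shading, plus ray-traced reflections where surfaces are shiny.
final = raster_color * (1.0 - reflectivity) + rt_reflect * reflectivity
print(final[0, 0])  # [0.48 0.48 0.48]
```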