Network traffic continues to increase, with the number of Internet users across the globe reaching 5 billion in 2022. As the number of users expands, so does the number of connected devices, which is expected to grow into the trillions. The ever-increasing number of connected users and devices leads to an overwhelming amount of data generated across the network. According to IDC…
Announced at GTC 2022, the next generation of NVIDIA GPUs (the NVIDIA GeForce RTX 40 Series, NVIDIA RTX 6000 Ada Generation, and NVIDIA L40 for data center) is built with the new NVIDIA Ada Architecture. The NVIDIA Ada Architecture features third-generation ray tracing cores, fourth-generation Tensor Cores, multiple video encoders, and a new optical flow accelerator. To enable you to…
With the Jetson Orin Nano announcement this week at GTC, the entire Jetson Orin module lineup is now revealed. With up to 40 TOPS of AI performance, Orin Nano modules set the new standard for entry-level AI, just as Jetson AGX Orin is already redefining robotics and other autonomous edge use cases with 275 TOPS of server-class compute. All Jetson Orin modules and the Jetson AGX Orin Developer…
When examining an intricate speech AI robotic system, it's easy for developers to feel intimidated by its complexity. Arthur C. Clarke claimed, "Any sufficiently advanced technology is indistinguishable from magic." From accepting natural-language commands to safely interacting in real time with its environment and the humans around it, today's speech AI robotics systems can perform tasks to…
As of March 21, 2023, QODA is now CUDA Quantum. For up-to-date information, see the CUDA Quantum page. Quantum circuit simulation is critical for developing applications and algorithms for quantum computers. Because of the disruptive nature of known quantum computing algorithms and use cases, quantum algorithm researchers in government, enterprise, and academia are developing and…
The latest version of NVIDIA PhysicsNeMo, an AI framework that enables users to create customizable training pipelines for digital twins, climate models, and physics-based modeling and simulation, is now available for download. This release of the physics-ML framework, NVIDIA PhysicsNeMo v22.09, includes key enhancements to increase composition flexibility for neural operator architectures…
Deploying AI models in production to meet the performance and scalability requirements of an AI-driven application while keeping infrastructure costs low is a daunting task. Join the NVIDIA Triton and NVIDIA TensorRT community to stay current on the latest product updates, bug fixes, content, best practices, and more. This post provides you with a high-level overview of AI…
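To make the serving workflow concrete, here is a minimal client-side sketch that sends an inference request to a running Triton server using the Python HTTP client. The server URL, model name, and tensor names are placeholders for illustration, not values from the post; query your own model's configuration for the real ones.

```python
import numpy as np
import tritonclient.http as httpclient

# Connect to a Triton server; localhost:8000 assumes a default local deployment.
client = httpclient.InferenceServerClient(url="localhost:8000")

# Hypothetical model and tensor names, shown only to illustrate the request shape.
batch = np.random.rand(1, 3, 224, 224).astype(np.float32)
infer_input = httpclient.InferInput("input__0", list(batch.shape), "FP32")
infer_input.set_data_from_numpy(batch)

response = client.infer(model_name="resnet50", inputs=[infer_input])
print(response.as_numpy("output__0").shape)
```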
In the operating room, the latency and reliability of surgical video streams can make all the difference for patient outcomes. Ultra-high-speed frame rates from sensor inputs that enable next-generation AI applications can provide surgeons with new levels of real-time awareness and control. To build real-time AI capabilities into medical devices for use cases like surgical navigation…
Explore the AI technology powering Violet, the interactive avatar showcased this week in the NVIDIA GTC 2022 keynote. Learn new details about NVIDIA Omniverse Avatar Cloud Engine (ACE), a collection of cloud-native AI microservices for faster, easier deployment of interactive avatars, and NVIDIA Tokkio, a domain-specific AI reference application that leverages Omniverse ACE for creating fully…
The cellular industry spends over $50 billion on radio access networks (RAN) annually, according to a recent GSMA report on the mobile economy. Dedicated and overprovisioned hardware is primarily used to provide capacity for peak demand. As a result, most RAN sites have an average utilization below 25%. This has been the industry reality for years as technology evolved from 2G to 4G.
NVIDIA revealed major updates to its suite of AI software for developers, including JAX, NVIDIA CV-CUDA, and NVIDIA RAPIDS. To learn about the latest SDK advancements from NVIDIA, watch the keynote from CEO Jensen Huang. Just today at GTC 2022, NVIDIA introduced JAX on NVIDIA AI, the newest addition to its GPU-accelerated deep learning frameworks. JAX is a rapidly growing…
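For readers new to it, JAX pairs a NumPy-style API with composable function transformations such as jit and grad, which is what makes GPU acceleration of numerical code so convenient. The toy function and data below are illustrative only, not part of the announcement:

```python
import jax
import jax.numpy as jnp

# A toy mean-squared-error loss; jit compiles it with XLA so it can run on a GPU.
@jax.jit
def loss(w, x, y):
    pred = jnp.dot(x, w)
    return jnp.mean((pred - y) ** 2)

# grad transforms the same Python function into one that returns d(loss)/dw.
grad_loss = jax.jit(jax.grad(loss))

key = jax.random.PRNGKey(0)
x = jax.random.normal(key, (1024, 16))
w = jnp.zeros(16)
y = jnp.ones(1024)

print(loss(w, x, y), grad_loss(w, x, y).shape)
```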
At GTC 2022, NVIDIA introduced enhancements to AI frameworks for building real-time speech AI applications, designing high-performing recommenders at scale, applying AI to cybersecurity challenges, creating AI-powered medical devices, and more. The real-world, end-to-end AI frameworks showcased highlighted the customers and partners leading the way in their industries and domains.
Recent advances in large language models (LLMs) have fueled state-of-the-art performance for NLP applications such as virtual scribes in healthcare, interactive virtual assistants, and many more. To simplify access to LLMs, NVIDIA has announced two services: NeMo LLM for customizing and using LLMs, and BioNeMo, which expands scientific applications of LLMs for the pharmaceutical and…
At GTC 2022, NVIDIA announced the Jetson Orin Nano series of system-on-modules (SOMs). They deliver up to 80X the AI performance of NVIDIA Jetson Nano and set the new standard for entry-level edge AI and robotics applications. For the first time, the Jetson family now includes NVIDIA Orin-based modules that span from the entry-level Jetson Orin Nano to the highest-performance Jetson AGX Orin.
NVIDIA recently announced Ada Lovelace, the next generation of GPUs. Named the NVIDIA GeForce RTX 40 Series, these are the world's most advanced graphics cards. Featuring third-generation Ray Tracing Cores and fourth-generation Tensor Cores, they accelerate games that take advantage of the latest neural graphics and ray tracing technology. Since the introduction of the GeForce RTX 20 Series…
Developers, creators, and enterprises around the world are using NVIDIA Omniverse to build virtual worlds and push the boundaries of the metaverse. Based on Universal Scene Description (USD), an extensible, common language for virtual worlds, Omniverse is a scalable computing platform for full-design-fidelity 3D simulation workflows that developers across global industries are using to build out…
As announced at GTC, technical artists, software developers, and ML engineers can now build custom, physically accurate synthetic data generation pipelines in the cloud with NVIDIA Omniverse Replicator. Omniverse Replicator is a highly extensible framework built on the NVIDIA Omniverse platform that enables physically accurate 3D synthetic data generation to accelerate the training and accuracy…
Developing for the medical imaging AI lifecycle is a time-consuming and resource-intensive process that typically includes data acquisition, compute, and training time, plus a team of experts who are knowledgeable in creating models suited to your specific challenge. Project MONAI, the Medical Open Network for AI, is continuing to expand its capabilities to help make each of these hurdles easier no…
The field of computational biology relies on bioinformatics tools that are fast, accurate, and easy to use. As next-generation sequencing (NGS) is becoming faster and less costly, a data deluge is emerging, and there is an ever-growing need for accessible, high-throughput, industry-standard analysis. At GTC 2022, we announced the release of NVIDIA Clara Parabricks v4.0…
Supercomputers are used to model and simulate the most complex processes in scientific computing, often for insight into new discoveries that otherwise would be impractical or impossible to demonstrate physically. The NVIDIA BlueField data processing unit (DPU) is transforming high-performance computing (HPC) resources into more efficient systems, while accelerating problem solving across a…
The latest NVIDIA HPC SDK update expands portability and now supports the Arm-based AWS Graviton3 processor. In this post, you learn how to enable Scalable Vector Extension (SVE) auto-vectorization with the NVIDIA compilers to maximize the performance of HPC applications running on the AWS Graviton3 CPU. The NVIDIA HPC SDK includes the proven compilers, libraries…
An AI model card is a document that details how a machine learning (ML) model works. Model cards provide detailed information about the ML model's metadata, including the datasets it is based on, the performance measures it was evaluated with, and the deep learning training methodology itself. This post walks you through the current practice for AI model cards and how NVIDIA is planning to advance…
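One way to picture a model card is as structured metadata attached to a model. The fields and values below are a hypothetical sketch for illustration, not NVIDIA's model card schema or a real model's numbers:

```python
# Hypothetical model card contents, expressed as a plain Python dictionary.
model_card = {
    "model_name": "example-image-classifier",  # illustrative name, not a real NVIDIA model
    "version": "1.0",
    "intended_use": "Research demo of image classification on natural photos.",
    "training_data": ["ImageNet-1k (ILSVRC 2012)"],
    "evaluation_metrics": {"top1_accuracy": 0.76, "top5_accuracy": 0.93},  # placeholder numbers
    "training_methodology": "Supervised training, SGD with momentum, 90 epochs.",
    "limitations": "Not validated for medical, safety-critical, or surveillance use.",
}

for field, value in model_card.items():
    print(f"{field}: {value}")
```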
Our weekly roundup covers the most recent software updates, learning resources, events, and notable news. This week we have several software releases. The NVIDIA HPC SDK is a comprehensive suite of compilers, libraries, and tools for developing accelerated HPC applications. With a breadth of flexible support options, users can create applications with a…
This week at GDC, NVIDIA announced a number of new tools for game developers to help save time, more easily integrate NVIDIA RTX, and streamline the creation of virtual worlds. Watch this overview of three exciting new tools now available. Since NVIDIA Deep Learning Super Sampling (DLSS) launched in 2019, a variety of super-resolution technologies have shipped from both hardware…
Developers and early access users can now accurately capture and replay VR sessions for performance testing, scene troubleshooting, and more with NVIDIA Virtual Reality Capture and Replay (VCR). The potential of virtual worlds is limitless, but working with VR content poses challenges, especially when it comes to recording or recreating a virtual experience. Unlike the real world…
Instance segmentation is a core visual recognition problem for detecting and segmenting objects. In the past several years, this area has been one of the holy grails of the computer vision community, with wide applications spanning autonomous vehicles (AVs), robotics, video analysis, smart homes, digital humans, and healthcare. Annotation, the process of classifying every object in an image…
NVIDIA GPUs have enormous compute power and typically must be fed data at high speed to deploy that power. That is possible, in principle, because GPUs also have high memory bandwidth, but sometimes they need your help to saturate that bandwidth. In this post, we examine one specific method to accomplish that: prefetching. We explain the circumstances under which prefetching can be expected…
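The post's examples are written in CUDA C++. As a rough illustration of the same idea in Python, the Numba CUDA kernel below fetches the element a thread will need on its next loop iteration into a local variable before processing the current one, so the long-latency load can overlap with useful work. The kernel and array names are made up for this sketch; it shows the general software-prefetch pattern, not the post's code.

```python
import numpy as np
from numba import cuda

@cuda.jit
def scale_with_prefetch(src, dst, factor):
    # Grid-stride loop: each thread handles indices i, i + stride, i + 2*stride, ...
    i = cuda.grid(1)
    stride = cuda.gridsize(1)
    if i < src.size:
        nxt = src[i]                  # fetch the first element this thread owns
        while i < src.size:
            cur = nxt                 # value requested on the previous iteration
            j = i + stride
            if j < src.size:
                nxt = src[j]          # prefetch the next element; its load overlaps the math below
            dst[i] = cur * factor
            i = j

# Usage sketch
n = 1 << 20
src = cuda.to_device(np.random.rand(n).astype(np.float32))
dst = cuda.device_array_like(src)
scale_with_prefetch[256, 256](src, dst, np.float32(2.0))
print(dst.copy_to_host()[:4])
```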
Cybercrime worldwide is costing as much as the gross domestic product of countries like Mexico or Spain, hitting more than $1 trillion annually. And global trends point to it only getting worse. Data centers face staggering increases in users, data, devices, and apps, which expand the threat surface amid ever more sophisticated attack vectors. NVIDIA Morpheus enables cybersecurity…
Typically, real-time physics simulation code is written in low-level CUDA C++ for maximum performance. In this post, we introduce NVIDIA Warp, a new Python framework that makes it easy to write differentiable graphics and simulation GPU code in Python. Warp provides the building blocks needed to write high-performance simulation code, but with the productivity of working in an interpreted language…
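As a taste of what that looks like, the short sketch below defines a Warp kernel in Python and launches it on the GPU. The kernel itself (a simple explicit Euler step under gravity) is a made-up example rather than one from the post, and it assumes a CUDA-capable device is available.

```python
import numpy as np
import warp as wp

wp.init()

@wp.kernel
def integrate(x: wp.array(dtype=wp.vec3),
              v: wp.array(dtype=wp.vec3),
              dt: float):
    # One thread per particle: update velocity with gravity, then position.
    tid = wp.tid()
    v[tid] = v[tid] + wp.vec3(0.0, -9.8, 0.0) * dt
    x[tid] = x[tid] + v[tid] * dt

n = 1024
x = wp.array(np.zeros((n, 3), dtype=np.float32), dtype=wp.vec3, device="cuda")
v = wp.array(np.zeros((n, 3), dtype=np.float32), dtype=wp.vec3, device="cuda")

wp.launch(integrate, dim=n, inputs=[x, v, 1.0 / 60.0], device="cuda")
print(x.numpy()[0])
```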
The current distribution of extended reality (XR) experiences is limited to desktop setups and local workstations, which contain the high-end GPUs necessary to meet computing requirements. For XR solutions to scale past their currently limited user base and support higher-end functionality such as AI services integration and on-demand collaboration, we need a purpose-built platform.
Today during the 2022 NVIDIA GTC Keynote address, NVIDIA CEO Jensen Huang introduced the new NVIDIA H100 Tensor Core GPU based on the new NVIDIA Hopper GPU architecture. This post gives you a look inside the new H100 GPU and describes important new features of NVIDIA Hopper architecture GPUs. The NVIDIA H100 Tensor Core GPU is our ninth-generation data center GPU designed to deliver an…
Fast and cost-effective whole genome sequencing and analysis can bring answers quickly to critically ill patients suffering from rare or undiagnosed diseases. Recent advances in accelerated clinical sequencing, such as the world-record-setting DNA sequencing technique for rapid diagnosis, are bringing us one step closer to same-day, whole-genome genetic diagnosis in a clinical setting.
At GTC 2022, NVIDIA announced major updates to its suite of NVIDIA AI software for developers to build real-time speech AI applications, create high-performing recommenders at scale, optimize inference in every application, and more. Watch the keynote from CEO Jensen Huang to learn about the latest advancements from NVIDIA. Today, NVIDIA announced Riva 2.0…
Availability of the NVIDIA Jetson AGX Orin Developer Kit was announced today at NVIDIA GTC. The platform is the world's most powerful, compact, and energy-efficient AI supercomputer for advanced robotics, autonomous machines, and next-generation embedded and edge computing. Jetson AGX Orin delivers up to 275 trillion operations per second (TOPS). It gives customers more than 8X the…
Developers, creators, and enterprises around the world are using NVIDIA Omniverse, the real-time collaboration and simulation platform for 3D design, to enhance complex workflows and develop for 3D worlds faster. At NVIDIA GTC, we showcased how the platform's ecosystem is expanding, from new Omniverse Connectors and asset libraries to updated Omniverse apps and features.
New advances in computation make it possible for medical devices to automatically detect, measure, predict, simulate, map, and guide clinical care teams. NVIDIA Clara Holoscan, the full-stack AI computing platform for medical devices, has added new sensor front-end partners for video capture, ultrasound research, data acquisition, and connection to legacy medical devices.
Even while 5G wireless networks are being installed and used worldwide, researchers in academia and industry have already started defining visions and critical technologies for 6G. Although nobody knows what 6G will be, a recurring vision is that 6G must enable the creation of digital twins and distributed machine learning (ML) applications at an unprecedented scale. 6G research requires new tools.
This post was written to enable the beginner developer community, especially those new to computer vision and computer science. NVIDIA recognizes that solving the world's visual computing challenges through computer vision and artificial intelligence requires all of us. NVIDIA is excited to partner with and dedicate this post to the Black Women in Artificial Intelligence.