GTC Digital 2022 – NVIDIA Technical Blog
News and tutorials for developers, data scientists, and IT admins
Feed: http://www.open-lab.net/blog/feed/ (last built 2025-03-21)

Nicola Sessions | Detecting Threats Faster with AI-Based Cybersecurity | http://www.open-lab.net/blog/?p=55006 | published 2022-09-22, updated 2023-07-11

Network traffic continues to increase, with the number of Internet users across the globe reaching 5 billion in 2022. As the number of users expands, so does the number of connected devices, which is expected to grow into the trillions. The ever-increasing number of connected users and devices leads to an overwhelming amount of data generated across the network. According to IDC…

Rohit Naskulwar | AV1 Encoding and Optical Flow: Video Performance Boosts and Higher Fidelity on the NVIDIA Ada Architecture | http://www.open-lab.net/blog/?p=54929 | published 2022-09-22, updated 2023-06-12

Announced at GTC 2022, the next generation of NVIDIA GPUs (the NVIDIA GeForce RTX 40 series, NVIDIA RTX 6000 Ada Generation, and NVIDIA L40 for data center) is built with the new NVIDIA Ada Architecture. The NVIDIA Ada Architecture features third-generation ray tracing cores, fourth-generation Tensor Cores, multiple video encoders, and a new optical flow accelerator. To enable you to…

Suhas Hariharapura Sheshadri (https://www.linkedin.com/in/suhassheshadri/) | Develop for All Six NVIDIA Jetson Orin Modules with the Power of One Developer Kit | http://www.open-lab.net/blog/?p=55019 | published 2022-09-22, updated 2023-07-19

With the Jetson Orin Nano announcement this week at GTC, the entire Jetson Orin module lineup is now revealed. With up to 40 TOPS of AI performance, Orin Nano modules set the new standard for entry-level AI, just as Jetson AGX Orin is already redefining robotics and other autonomous edge use cases with 275 TOPS of server-class compute. All Jetson Orin modules and the Jetson AGX Orin Developer…

Dave Niewinski | Low-Code Building Blocks for Speech AI Robotics | http://www.open-lab.net/blog/?p=55065 | published 2022-09-22, updated 2023-11-03

When examining an intricate speech AI robotic system, it's easy for developers to feel intimidated by its complexity. Arthur C. Clarke claimed, "Any sufficiently advanced technology is indistinguishable from magic." From accepting natural-language commands to safely interacting in real time with its environment and the humans around it, today's speech AI robotics systems can perform tasks to…

Tom Lubowe | Achieving Supercomputing-Scale Quantum Circuit Simulation with the NVIDIA cuQuantum Appliance | http://www.open-lab.net/blog/?p=54840 | published 2022-09-22, updated 2023-07-11

As of March 21, 2023, QODA is now CUDA Quantum. For up-to-date information, see the CUDA Quantum page. Quantum circuit simulation is critical for developing applications and algorithms for quantum computers. Because of the disruptive nature of known quantum computing algorithms and use cases, quantum algorithms researchers in government, enterprise, and academia are developing and…

Bhoomi Gadhia | Enhancing Digital Twin Models and Simulations with NVIDIA PhysicsNeMo v22.09 | http://www.open-lab.net/blog/?p=54871 | published 2022-09-22, updated 2025-03-19

The latest version of NVIDIA PhysicsNeMo, an AI framework that enables users to create customizable training pipelines for digital twins, climate models, and physics-based modeling and simulation, is now available for download. This release of the physics-ML framework, NVIDIA PhysicsNeMo v22.09, includes key enhancements to increase composition flexibility for neural operator architectures…

Shankar Chandrasekaran | Solving AI Inference Challenges with NVIDIA Triton | http://www.open-lab.net/blog/?p=54906 | published 2022-09-21, updated 2023-03-22

Deploying AI models in production to meet the performance and scalability requirements of the AI-driven application while keeping the infrastructure costs low is a daunting task. Join the NVIDIA Triton and NVIDIA TensorRT community to stay current on the latest product updates, bug fixes, content, best practices, and more. This post provides you with a high-level overview of AI…

Julien Jomier | Powering Ultra-High Speed Frame Rates in AI Medical Devices with the NVIDIA Clara Holoscan SDK | http://www.open-lab.net/blog/?p=55002 | published 2022-09-21, updated 2023-07-20

In the operating room, the latency and reliability of surgical video streams can make all the difference for patient outcomes. Ultra-high-speed frame rates from sensor inputs that enable next-generation AI applications can provide surgeons with new levels of real-time awareness and control. To build real-time AI capabilities into medical devices for use cases like surgical navigation…

Stephanie Rubenstein | Building Cloud-Native, AI-Powered Avatars with NVIDIA Omniverse ACE | http://www.open-lab.net/blog/?p=55365 | published 2022-09-21, updated 2023-10-25

Explore the AI technology powering Violet, the interactive avatar showcased this week in the NVIDIA GTC 2022 keynote. Learn new details about NVIDIA Omniverse Avatar Cloud Engine (ACE), a collection of cloud-native AI microservices for faster, easier deployment of interactive avatars, and NVIDIA Tokkio, a domain-specific AI reference application that leverages Omniverse ACE for creating fully…

Soma Velayutham | Unlocking New Opportunities with AI Cloud Infrastructure for 5G vRAN | http://www.open-lab.net/blog/?p=55076 | published 2022-09-21, updated 2024-03-13

The cellular industry spends over $50 billion on radio access networks (RAN) annually, according to a recent GSMA report on the mobile economy. Dedicated and overprovisioned hardware is primarily used to provide capacity for peak demand. As a result, most RAN sites have an average utilization below 25%. This has been the industry reality for years as technology evolved from 2G to 4G.

Siddharth Sharma | New SDKs Accelerating AI Research, Computer Vision, Data Science, and More | http://www.open-lab.net/blog/?p=54866 | published 2022-09-21, updated 2024-05-07

NVIDIA revealed major updates to its suite of AI software for developers, including JAX, NVIDIA CV-CUDA, and NVIDIA RAPIDS. To learn about the latest SDK advancements from NVIDIA, watch the keynote from CEO Jensen Huang. Just today at GTC 2022, NVIDIA introduced JAX on NVIDIA AI, the newest addition to its GPU-accelerated deep learning frameworks. JAX is a rapidly growing…

Erik Pounds | New Languages, Enhanced Cybersecurity, and Medical AI Frameworks Unveiled at GTC | http://www.open-lab.net/blog/?p=54868 | published 2022-09-21, updated 2023-05-23

At GTC 2022, NVIDIA introduced enhancements to AI frameworks for building real-time speech AI applications, designing high-performing recommenders at scale, applying AI to cybersecurity challenges, creating AI-powered medical devices, and more. Showcases of real-world, end-to-end AI frameworks highlighted the customers and partners leading the way in their industries and domains.

Annamalai Chockalingam | Simplifying Access to Large Language Models with the NVIDIA NeMo Framework and Services | http://www.open-lab.net/blog/?p=55008 | published 2022-09-21, updated 2023-06-12

Recent advances in large language models (LLMs) have fueled state-of-the-art performance for NLP applications such as virtual scribes in healthcare, interactive virtual assistants, and many more. To simplify access to LLMs, NVIDIA has announced two services: NeMo LLM for customizing and using LLMs, and BioNeMo, which expands scientific applications of LLMs for the pharmaceutical and…

Leela Subramaniam Karumbunathan | Solving Entry-Level Edge AI Challenges with NVIDIA Jetson Orin Nano | http://www.open-lab.net/blog/?p=55058 | published 2022-09-21, updated 2023-03-22

At GTC 2022, NVIDIA announced the Jetson Orin Nano series of system-on-modules (SOMs). They deliver up to 80X the AI performance of NVIDIA Jetson Nano and set the new standard for entry-level edge AI and robotics applications. For the first time, the Jetson family now includes NVIDIA Orin-based modules that span from the entry-level Jetson Orin Nano to the highest-performance Jetson AGX Orin.

Ike Nnoli | Accelerating Ultra-Realistic Game Development with NVIDIA DLSS 3 and NVIDIA RTX Path Tracing | http://www.open-lab.net/blog/?p=54987 | published 2022-09-21, updated 2024-08-28

NVIDIA recently announced Ada Lovelace, the next generation of GPUs. Named the NVIDIA GeForce RTX 40 Series, these are the world's most advanced graphics cards. Featuring third-generation Ray Tracing Cores and fourth-generation Tensor Cores, they accelerate games that take advantage of the latest neural graphics and ray tracing technology. Since the introduction of the GeForce RTX 20 Series…

Frank DeLise | New Cloud Applications, SimReady Assets, and Tools for NVIDIA Omniverse Developers Announced at GTC | http://www.open-lab.net/blog/?p=55182 | published 2022-09-21, updated 2023-05-23

Developers, creators, and enterprises around the world are using NVIDIA Omniverse to build virtual worlds and push the boundaries of the metaverse. Based on Universal Scene Description (USD), an extensible, common language for virtual worlds, Omniverse is a scalable computing platform for full-design-fidelity 3D simulation workflows that developers across global industries are using to build out…

Nyla Worker | Accelerate AI Training Faster Than Ever with New NVIDIA Omniverse Replicator Capabilities | http://www.open-lab.net/blog/?p=54990 | published 2022-09-21, updated 2023-04-18

Announced at GTC, technical artists, software developers, and ML engineers can now build custom, physically accurate, synthetic data generation pipelines in the cloud with NVIDIA Omniverse Replicator. Omniverse Replicator is a highly extensible framework built on the NVIDIA Omniverse platform that enables physically accurate 3D synthetic data generation to accelerate the training and accuracy…

Michael Zephyr | Open-Source Healthcare AI Innovation Continues to Expand with MONAI v1.0 | http://www.open-lab.net/blog/?p=55055 | published 2022-09-21, updated 2023-06-12

Developing for the medical imaging AI lifecycle is a time-consuming and resource-intensive process that typically includes data acquisition, compute, and training time, and a team of experts who are knowledgeable in creating models suited to your specific challenge. Project MONAI, the Medical Open Network for AI, is continuing to expand its capabilities to help make each of these hurdles easier no…

Harry Clifford | Democratizing and Accelerating Genome Sequencing Analysis with NVIDIA Clara Parabricks v4.0 | http://www.open-lab.net/blog/?p=54944 | published 2022-09-21, updated 2023-06-12

The field of computational biology relies on bioinformatics tools that are fast, accurate, and easy to use. As next-generation sequencing (NGS) is becoming faster and less costly, a data deluge is emerging, and there is an ever-growing need for accessible, high-throughput, industry-standard analysis. At GTC 2022, we announced the release of NVIDIA Clara Parabricks v4.0…

Scot Schultz | Ushering In a New Era of HPC and Supercomputing Performance with DPUs | http://www.open-lab.net/blog/?p=54899 | published 2022-09-20, updated 2023-10-25

Supercomputers are used to model and simulate the most complex processes in scientific computing, often for insight into new discoveries that otherwise would be impractical or impossible to demonstrate physically. The NVIDIA BlueField data processing unit (DPU) is transforming high-performance computing (HPC) resources into more efficient systems, while accelerating problem solving across a…

John Linford | Accelerating NVIDIA HPC Software with SVE on AWS Graviton3 | http://www.open-lab.net/blog/?p=54622 | published 2022-09-19, updated 2023-03-22

The latest NVIDIA HPC SDK update expands portability and now supports the Arm-based AWS Graviton3 processor. In this post, you learn how to enable Scalable Vector Extension (SVE) auto-vectorization with the NVIDIA compilers to maximize the performance of HPC applications running on the AWS Graviton3 CPU. The NVIDIA HPC SDK includes the proven compilers, libraries…

Michael Boone | Enhancing AI Transparency and Ethical Considerations with Model Card++ | http://www.open-lab.net/blog/?p=54689 | published 2022-09-19, updated 2024-02-23

An AI model card is a document that details how machine learning (ML) models work. Model cards provide detailed information about the ML model's metadata, including the datasets that it is based on, the performance measures that it was trained on, and the deep learning training methodology itself. This post walks you through the current practice for AI model cards and how NVIDIA is planning to advance…

Michelle Horton | Latest Releases and Resources: NVIDIA GTC 2022 | http://www.open-lab.net/blog/?p=45772 | published 2022-03-24, updated 2025-02-25

Our weekly roundup covers the most recent software updates, learning resources, events, and notable news. This week we have several software releases. Software releases: The NVIDIA HPC SDK is a comprehensive suite of compilers, libraries, and tools for developing accelerated HPC applications. With a breadth of flexible support options, users can create applications with a…

Ike Nnoli | New Ray-Tracing, AI, Cloud, and Virtual World Tools Simplify Game Development at GDC 2022 | http://www.open-lab.net/blog/?p=45449 | published 2022-03-24, updated 2024-12-09

This week at GDC, NVIDIA announced a number of new tools for game developers to help save time, more easily integrate NVIDIA RTX, and streamline the creation of virtual worlds. Watch this overview of three exciting new tools now available. Since NVIDIA Deep Learning Super Sampling (DLSS) launched in 2019, a variety of super-resolution technologies have shipped from both hardware…

Ingo Esser | Record, Edit, and Rewind in Virtual Reality with NVIDIA VR Capture and Replay | http://www.open-lab.net/blog/?p=45290 | published 2022-03-24, updated 2023-06-12

Developers and early access users can now accurately capture and replay VR sessions for performance testing, scene troubleshooting, and more with NVIDIA Virtual Reality Capture and Replay (VCR). The potential of virtual worlds is limitless, but working with VR content poses challenges, especially when it comes to recording or recreating a virtual experience. Unlike the real world…

Zhiding Yu (https://chrisding.github.io/) | Segment Objects without Masks and Reduce Annotation Effort Using the DiscoBox DL Framework | http://www.open-lab.net/blog/?p=45220 | published 2022-03-24, updated 2023-10-20
High-quality instance segmentation done fast and efficiently with DiscoBox.

Instance segmentation is a core visual recognition problem for detecting and segmenting objects. In the past several years, this area has been one of the holy grails in the computer vision community, with wide applications across autonomous vehicles (AVs), robotics, video analysis, smart homes, digital humans, and healthcare. Annotation, the process of classifying every object in an image…

Rob Van der Wijngaart | Boosting Application Performance with GPU Memory Prefetching | http://www.open-lab.net/blog/?p=45713 | published 2022-03-23, updated 2023-06-12

NVIDIA GPUs have enormous compute power and typically must be fed data at high speed to deploy that power. That is possible, in principle, because GPUs also have high memory bandwidth, but sometimes they need your help to saturate that bandwidth. In this post, we examine one specific method to accomplish that: prefetching. We explain the circumstances under which prefetching can be expected…
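The core idea of prefetching generalizes beyond CUDA: request the data block needed by iteration i+1 while iteration i is still computing, so fetch latency hides behind useful work. As a rough, framework-free sketch of that overlap (plain Python, with a producer thread standing in for the memory system; function and parameter names here are illustrative, not from the post):

```python
import threading
from queue import Queue

def process(chunk):
    # Stand-in for the per-block compute a GPU kernel would do.
    return sum(chunk)

def pipelined_sum(chunks, depth=2):
    """Overlap fetch with compute: a producer thread stages up to
    `depth` chunks ahead while the consumer processes the current one,
    mirroring the prefetch-ahead pattern the post describes for GPUs."""
    staged = Queue(maxsize=depth)  # bounded: limits the prefetch distance

    def prefetcher():
        for chunk in chunks:
            staged.put(list(chunk))  # simulate fetching into fast storage
        staged.put(None)             # sentinel: end of stream

    threading.Thread(target=prefetcher, daemon=True).start()

    results = []
    while (chunk := staged.get()) is not None:
        results.append(process(chunk))
    return results
```

Running `pipelined_sum([[1, 2], [3, 4], [5]])` returns `[3, 7, 5]`; the bounded queue plays the role of the in-flight prefetch buffers, and `depth` corresponds to how far ahead the prefetcher is allowed to run.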

Nicola Sessions | Supercharging AI-Accelerated Cybersecurity Threat Detection | http://www.open-lab.net/blog/?p=45716 | published 2022-03-23, updated 2023-12-05

Cybercrime worldwide is costing as much as the gross domestic product of countries like Mexico or Spain, hitting more than $1 trillion annually. And global trends point to it only getting worse. Data centers face staggering increases in users, data, devices, and apps, increasing the threat surface amid ever more sophisticated attack vectors. NVIDIA Morpheus enables cybersecurity…

Miles Macklin | Creating Differentiable Graphics and Physics Simulation in Python with NVIDIA Warp | http://www.open-lab.net/blog/?p=45298 | published 2022-03-23, updated 2023-03-22

Typically, real-time physics simulation code is written in low-level CUDA C++ for maximum performance. In this post, we introduce NVIDIA Warp, a new Python framework that makes it easy to write differentiable graphics and simulation GPU code in Python. Warp provides the building blocks needed to write high-performance simulation code, but with the productivity of working in an interpreted language…
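To see what "differentiable" buys you in a simulation context, here is a deliberately tiny sketch in plain Python (not Warp's actual API, which expresses kernels with decorators and records operations on a tape for automatic differentiation): an explicit-Euler spring integrator whose sensitivity to a physical parameter is estimated by finite differences, the same quantity a differentiable simulator computes analytically by backpropagating through the time-stepping loop.

```python
def simulate(x0, v0, stiffness, steps=10, dt=0.01):
    """Explicit-Euler mass-spring integration (unit mass): returns the
    final position. This time-stepping loop is the kind of code a
    differentiable simulator runs as a GPU kernel."""
    x, v = x0, v0
    for _ in range(steps):
        v -= stiffness * x * dt  # spring force accelerates toward origin
        x += v * dt
    return x

def dfinal_dstiffness(x0, v0, stiffness, eps=1e-6):
    """Central finite difference of the final position w.r.t. stiffness,
    the sensitivity a differentiable simulator obtains analytically."""
    hi = simulate(x0, v0, stiffness + eps)
    lo = simulate(x0, v0, stiffness - eps)
    return (hi - lo) / (2 * eps)
```

This gradient is what lets an optimizer tune physical parameters (stiffness, initial conditions) directly against a simulation loss, which is the workflow the post builds up with Warp.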

René Peters | Deploying XR Applications in Private Networks on a Server Platform | http://www.open-lab.net/blog/?p=45537 | published 2022-03-23, updated 2023-06-12

The current distribution of extended reality (XR) experiences is limited to desktop setups and local workstations, which contain the high-end GPUs necessary to meet computing requirements. For XR solutions to scale past their currently limited user base and support higher-end functionality such as AI services integration and on-demand collaboration, we need a purpose-built platform.

Michael Andersch | NVIDIA Hopper Architecture In-Depth | http://www.open-lab.net/blog/?p=45555 | published 2022-03-22, updated 2023-10-25

Today during the 2022 NVIDIA GTC Keynote address, NVIDIA CEO Jensen Huang introduced the new NVIDIA H100 Tensor Core GPU based on the new NVIDIA Hopper GPU architecture. This post gives you a look inside the new H100 GPU and describes important new features of NVIDIA Hopper architecture GPUs. The NVIDIA H100 Tensor Core GPU is our ninth-generation data center GPU designed to deliver an…

Harry Clifford | Boosting Ultrarapid Nanopore Sequencing Analysis on NVIDIA DGX A100 | http://www.open-lab.net/blog/?p=45197 | published 2022-03-22, updated 2023-06-02

Fast and cost-effective whole genome sequencing and analysis can bring answers quickly to critically ill patients suffering from rare or undiagnosed diseases. Recent advances in accelerated clinical sequencing, such as the world-record-setting DNA sequencing technique for rapid diagnosis, are bringing us one step closer to same-day, whole-genome genetic diagnosis in a clinical setting.

Siddharth Sharma | Major Updates to NVIDIA AI Software Advancing Speech, Recommenders, Inference, and More Announced at NVIDIA GTC 2022 | http://www.open-lab.net/blog/?p=45446 | published 2022-03-22, updated 2023-03-22

At GTC 2022, NVIDIA announced major updates to its suite of NVIDIA AI software for developers to build real-time speech AI applications, create high-performing recommenders at scale, optimize inference in every application, and more. Watch the keynote from CEO Jensen Huang to learn about the latest advancements from NVIDIA. Today, NVIDIA announced Riva 2.0…

Jason Black | Supercharge AI-Powered Robotics Prototyping and Edge AI Applications with the Jetson AGX Orin Developer Kit | http://www.open-lab.net/blog/?p=45321 | published 2022-03-22, updated 2023-03-22
A rendering of the now available NVIDIA Jetson AGX Orin Developer Kit.

Availability of the NVIDIA Jetson AGX Orin Developer Kit was announced today at NVIDIA GTC. The platform is the world's most powerful, compact, and energy-efficient AI supercomputer for advanced robotics, autonomous machines, and next-generation embedded and edge computing. Jetson AGX Orin delivers up to 275 trillion operations per second (TOPS). It gives customers more than 8X the…

Frank DeLise | Create 3D Virtual Worlds with New Releases, Expansions, and Toolkits from NVIDIA Omniverse | http://www.open-lab.net/blog/?p=45328 | published 2022-03-22, updated 2023-10-25

Developers, creators, and enterprises around the world are using NVIDIA Omniverse, the real-time collaboration and simulation platform for 3D design, to enhance complex workflows and develop for 3D worlds faster. At NVIDIA GTC, we showcased how the platform's ecosystem is expanding, from new Omniverse Connectors and asset libraries to updated Omniverse apps and features.

Yaniv Lazimy | New Sensor Partners Expand Surgical, Ultrasound, and Data Acquisition Capabilities in the NVIDIA Clara Holoscan Platform | http://www.open-lab.net/blog/?p=45194 | published 2022-03-22, updated 2023-03-22

New advances in computation make it possible for medical devices to automatically detect, measure, predict, simulate, map, and guide clinical care teams. NVIDIA Clara Holoscan, the full-stack AI computing platform for medical devices, has added new sensor front-end partners for video capture, ultrasound research, data acquisition, and connection to legacy medical devices.

Nathan Horrocks | Jumpstarting Link-Level Simulations with NVIDIA Sionna | http://www.open-lab.net/blog/?p=44804 | published 2022-03-22, updated 2024-05-07
Sionna simulates physical layer and link-level research.

Even while 5G wireless networks are being installed and used worldwide, researchers in academia and industry have already started defining visions and critical technologies for 6G. Although nobody knows what 6G will be, a recurring vision is that 6G must enable the creation of digital twins and distributed machine learning (ML) applications at an unprecedented scale. 6G research requires new tools.

Michael Boone | Guide to Computer Vision: Why It Matters and How It Helps Solve Problems | http://www.open-lab.net/blog/?p=45313 | published 2022-03-21, updated 2023-03-22

This post was written to enable the beginner developer community, especially those new to computer vision and computer science. NVIDIA recognizes that solving and benefiting the world's visual computing challenges through computer vision and artificial intelligence requires all of us. NVIDIA is excited to partner and dedicate this post to the Black Women in Artificial Intelligence.
