With AI introducing an unprecedented pace of technological innovation, staying ahead means keeping your skills up to date. The NVIDIA Developer Program gives you the tools, training, and resources you need to succeed with the latest advancements across industries. We're excited to announce the following five new technical courses from NVIDIA. Join the Developer Program now to get hands-on…
From cities and airports to Olympic Stadiums, AI is transforming public spaces into safer, smarter, and more sustainable environments.
For over a decade, traditional industrial process modeling and simulation approaches have struggled to fully leverage multicore CPUs or acceleration devices to run simulation and optimization calculations in parallel. Multicore linear solvers used in process modeling and simulation have not achieved expected improvements, and in certain cases have underperformed optimized single-core solvers.
Hear from ExxonMobil, Honeywell, Siemens Energy, and more as they explore AI and HPC innovation in oil, gas, power, and utilities.
Connect with industry leaders, learn from technical experts, and collaborate with peers at NVIDIA GTC 2024 Developer Days.
Join experts from Stanford, Cornell, Meta, and more to learn about the latest in AI for academia and what's next in cutting-edge research.
This week's model release features NVIDIA cuOpt, a world-record-breaking accelerated optimization engine that helps teams solve complex routing problems and deliver new capabilities. It enables organizations to reimagine logistics, operations research, transportation, and supply chain optimization. NVIDIA cuOpt facilitates many logistics optimization use cases, including: Ultimately…
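To make the routing problem concrete, here is a minimal greedy nearest-neighbor sketch of the kind of vehicle-routing task cuOpt accelerates. This is an illustrative CPU baseline, not the cuOpt API; the function name and coordinates are invented for the example.

```python
# Greedy nearest-neighbor route construction: an illustrative baseline for the
# kind of routing problem that GPU-accelerated solvers tackle at far larger scale.
import math

def nearest_neighbor_route(depot, stops):
    """Build a route by repeatedly visiting the closest unvisited stop."""
    route = [depot]
    remaining = list(stops)
    current = depot
    while remaining:
        # Pick the unvisited stop nearest to the current position.
        nxt = min(remaining, key=lambda p: math.dist(current, p))
        remaining.remove(nxt)
        route.append(nxt)
        current = nxt
    route.append(depot)  # return to the depot at the end
    return route

route = nearest_neighbor_route((0, 0), [(2, 0), (1, 0), (3, 0)])
print(route)  # [(0, 0), (1, 0), (2, 0), (3, 0), (0, 0)]
```

Real solvers improve on this greedy baseline with metaheuristics and can handle constraints such as vehicle capacity and time windows.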
Advances in AI are rapidly transforming every industry. Join us in person or virtually to learn about the latest technologies, from retrieval-augmented generation to OpenUSD.
With the GTC session catalog now live, it's time to start building your personalized agenda for the conference. For those of you who will be joining us in San Jose, this post covers the technical training opportunities that you won't want to miss. If you can't attend GTC in person, please take advantage of the 15 virtual workshops scheduled in EMEA, India, and China time zones.
Railroad simulation is important in modern transportation and logistics, providing a virtual testing ground for the intricate interplay of tracks, switches, and rolling stock. It serves as a crucial tool for engineers and developers to fine-tune and optimize railway systems, ensuring efficiency, safety, and cost-effectiveness. Physically realistic simulations enable comprehensive scenario…
Register for expert-led technical workshops at NVIDIA GTC and save with early bird pricing through February 7, 2024.
NVIDIA TAO Toolkit provides a low-code AI framework to accelerate vision AI model development suitable for all skill levels, from novices to expert data scientists. With the TAO Toolkit, developers can use the power and efficiency of transfer learning to achieve state-of-the-art accuracy and production-class throughput in record time with adaptation and optimization.
Get the latest best practices about how to accelerate your data science projects with RAPIDS.
As the GPU launches threads, dispatches kernels, and loads from memory, the CPU feeds it data asynchronously, accesses network communications, manages system resources, and more. This is just a snippet of the hardware activity needed to run an application: an orchestra of different components operating in perfect parallelism. As a developer, you are the conductor of an orchestra of hardware…
NVIDIA AI inference software consists of NVIDIA Triton Inference Server, open-source inference serving software, and NVIDIA TensorRT, an SDK for high-performance deep learning inference that includes a deep learning inference optimizer and runtime. They deliver accelerated inference for all AI deep learning use cases. NVIDIA Triton also supports traditional machine learning (ML) models and…
Project Mellon is a lightweight Python package capable of harnessing the heavyweight power of speech AI (NVIDIA Riva) and large language models (LLMs) (NVIDIA NeMo service) to simplify user interactions in immersive environments. NVIDIA announced at NVIDIA GTC 2023 that developers can start testing Project Mellon to explore creating hands-free extended reality (XR) experiences controlled by…
Real-time remote communication has become the new normal, yet many office workers still experience poor video and audio quality, which impacts collaboration and interpersonal engagement. NVIDIA Maxine was developed specifically to address these challenges through the use of state-of-the-art AI models that greatly improve the clarity of video conferencing calls. These capabilities have been largely…
The business applications of GPU-accelerated computing are set to expand greatly in the coming years. One of the fastest-growing trends is the use of generative AI for creating human-like text and all types of images. Driving the explosion of market interest in generative AI are technologies such as transformer models that bring AI to everyday applications, from conversational text to…
Generative AI is primed to transform the world's industries and to solve today's most important challenges. To enable enterprises to take advantage of the possibilities with generative AI, NVIDIA has launched NVIDIA AI Foundations and the NVIDIA NeMo framework, powered by NVIDIA DGX Cloud. NVIDIA AI Foundations are a family of cloud services that provide enterprises with a simplified…
LumenRT for NVIDIA Omniverse is the first engineering software application in the market built on NVIDIA Omniverse. The integration of NVIDIA Omniverse and the Bentley iTwin Platform enables real-time, immersive 3D and 4D experiences to enhance the visualization and simulation of infrastructure digital twins. The Bentley iTwin Platform is an open, scalable cloud platform for creating…
NVIDIA BlueField-3 data processing units (DPUs) are now in full production, and have been selected by Oracle Cloud Infrastructure (OCI) to achieve higher performance, better efficiency, and stronger security, as announced at NVIDIA GTC 2023. As a 400 Gb/s infrastructure compute platform, BlueField-3 enables organizations to deploy and operate data centers at massive scale.
Using generative AI and the NVIDIA Morpheus cybersecurity AI framework, developers can build solutions that detect spear phishing attempts more effectively and with extremely short training times. In fact, using NVIDIA Morpheus and a generative AI training technique, we were able to detect 90% of targeted spear phishing emails, a 20% improvement compared to a typical phishing detection solution…
Imagine you are about to embark on the mountain biking adventure of a lifetime. You have done all of the planning and training. Now, all you need is the perfect bike. You need the best shocks, tires, brakes, frame, handlebars, and seat. Imagine that all of these parts would come together in one package, preassembled, saving you time and money. NVIDIA IGX Orin offers a similar package to edge…
The NVIDIA Jetson Orin Nano Developer Kit sets a new standard for creating entry-level AI-powered robots, smart drones, and intelligent vision systems, as NVIDIA announced at NVIDIA GTC 2023. It also simplifies getting started with the NVIDIA Jetson Orin Nano series. Compact design, numerous connectors, and up to 40 TOPS of AI performance make this developer kit ideal for transforming your…
NVIDIA Base Command Platform provides the capabilities to confidently develop complex software that meets the performance standards required by scientific computing workflows. The platform enables both cloud-hosted and on-premises solutions for AI development by providing developers with the tools needed to efficiently configure and manage AI workflows. Integrated data and user management simplify…
Generative AI has captured the attention and imagination of the public over the past couple of years. From a given natural language prompt, these generative models are able to generate human-quality results, from well-articulated children's stories to product prototype visualizations. Large language models (LLMs) are at the center of this revolution. LLMs are universal language comprehenders…
A retailer's supply chain includes sourcing raw materials or finished goods from suppliers, storing them in warehouses or distribution centers, transporting them to stores or customers, and managing sales. Retailers also collect, store, and analyze data to optimize supply chain performance, with teams responsible for managing each stage of the supply chain…
Learn how AI is boosting creative applications for creators during NVIDIA GTC 2023, March 20-23.
If you asked a group of cybersecurity professionals how they got into the field, you might be surprised by the answers that you receive. Their backgrounds are varied: military officers, program managers, technical writers, and IT practitioners. There is no single path into a cybersecurity career, let alone one that incorporates both cybersecurity and AI. I've always been…
This post is part of a series on accelerated data analytics. Digital advancements in climate modeling, healthcare, finance, and retail are generating unprecedented volumes and types of data. IDC says that by 2025, there will be 180 ZB of data compared to 64 ZB in 2020, scaling up the need for data analytics to turn all that data into insights. NVIDIA provides the RAPIDS suite of…
This post is part of a series on accelerated data analytics. Update: The blog below describes how to use GPU-only RAPIDS cuDF, which requires code changes. RAPIDS cuDF now has a CPU/GPU interoperability layer (cudf.pandas) that speeds up pandas code by up to 150x with zero code changes. At GTC 2024, NVIDIA announced that the cudf.pandas library is now GA. At Google I/O…
Last August, I wrote a post about GTC that asked, "What if you could spend 8 hours with an AI legend while getting hands-on experience using some of the most advanced GPU and DPU technology available?" My point still stands: This is exactly why you should attend training at GTC. The virtual conference offers hands-on workshops and training labs to deepen your skills in the areas of AI, HPC…
Detecting drivable free space is a critical component of advanced driver assistance systems (ADAS) and autonomous vehicle (AV) perception. Obstacle detection is usually performed to detect a set of specific dynamic obstacles, such as vehicles and pedestrians. In contrast, free space detection is a more generalized approach for obstacle detection. It enables autonomous vehicles to navigate…
As of 3/18/25, NVIDIA Triton Inference Server is now NVIDIA Dynamo. In many production-level machine learning (ML) applications, inference is not limited to running a forward pass on a single ML model. Instead, a pipeline of ML models often needs to be executed. Take, for example, a conversational AI pipeline that consists of three modules: an automatic speech recognition (ASR) module to…
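A multi-module pipeline like the conversational AI example above can be sketched as a simple chain of stages. The stub functions below are illustrative stand-ins, not the Triton or Dynamo API; in a real deployment each stage would be a served model.

```python
# Illustrative three-stage conversational AI pipeline with stub implementations.
# Each stage's output feeds the next, mirroring an ensemble of ML models.

def asr(audio: bytes) -> str:
    """Automatic speech recognition stub: audio in, transcript out."""
    return "what is the weather today"

def nlu(text: str) -> str:
    """Language-understanding stub: transcript in, response text out."""
    return f"You asked: '{text}'. It is sunny."

def tts(text: str) -> bytes:
    """Text-to-speech stub: response text in, audio bytes out."""
    return text.encode("utf-8")

def pipeline(audio: bytes) -> bytes:
    # Chain the modules; an inference server would schedule these stages
    # and move intermediate tensors between them for you.
    return tts(nlu(asr(audio)))

print(pipeline(b"\x00\x01").decode("utf-8"))
```

The point of serving such a chain as one pipeline, rather than three independent services, is to avoid shuttling intermediate results back and forth through the client.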
When it comes to new malware written in esoteric programming languages, Blue Team defenders have very little chance to ensure that all endpoints in their organization are able to detect and/or mitigate this malware. Security professionals have quickly recognized this issue and have built an effective pipeline to identify new releases of unique malware and develop detections for them.
Learn about the latest path tracing technologies and how they're accelerating game development.
Learn how AI is enabling safer, more sustainable cities and improving operational efficiency in public spaces for our communities.
At NVIDIA GTC 2023, join robotics, edge AI, and computer vision experts for a deep dive into building next-generation AI-powered applications and autonomous machines.
Explore professional visualization developer tools including NVIDIA NeuralVDB, NVIDIA OptiX, and NVIDIA Video Codec.
AI is impacting every industry, from improving customer service and streamlining supply chains to accelerating cancer research. As enterprises invest in AI to stay ahead of the competition, they often struggle with finding the strategy and infrastructure for success. Many AI projects are rapidly evolving, which makes production at scale especially challenging. We believe in developing…
In the last few years, the roles of AI and machine learning (ML) in mainstream enterprises have changed. Once research or advanced-development activities, they now provide an important foundation for production systems. As more enterprises seek to transform their businesses with AI and ML, more and more people are talking about MLOps. If you have been listening to these conversations…
From climate modeling to quantum computing and large language models to molecular dynamics, see how HPC is transforming the world.
Accurately annotated datasets are crucial for camera-based deep learning algorithms to perform autonomous vehicle perception. However, manually labeling data is a time-consuming and cost-intensive process. We have developed an automated labeling pipeline as a part of the Tata Consultancy Services (TCS) artificial intelligence (AI)-based autonomous vehicle platform. This pipeline uses NVIDIA…
Explore the latest tools, optimizations, and best practices for deep learning training and inference.
Vision AI-powered applications are exploding in terms of value and adoption across industries. They're being developed both by sophisticated AI developers and those totally new to AI. Both types of developers are being challenged with more complex solution requirements and faster time to market. Building these vision AI solutions requires a scalable, distributed architecture and tools that…
Decades of computer science history have been devoted to devising solutions for efficient storage and retrieval of information. Hash maps (or hash tables) are a popular data structure for information storage given their amortized, constant-time guarantees for the insertion and retrieval of elements. However, despite their prevalence, hash maps are seldom discussed in the context of GPU…
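As a refresher on the data structure itself, here is a minimal separate-chaining hash map in Python. GPU hash maps rely on massively parallel probing schemes rather than per-bucket lists, but the amortized constant-time insert and lookup idea is the same; the class below is a teaching sketch, not a GPU implementation.

```python
class ChainedHashMap:
    """Minimal separate-chaining hash map: amortized O(1) insert and lookup."""

    def __init__(self, capacity=8):
        # Each bucket holds a list of (key, value) pairs that collided there.
        self.buckets = [[] for _ in range(capacity)]
        self.size = 0

    def _bucket(self, key):
        # Map the key's hash onto a bucket index.
        return self.buckets[hash(key) % len(self.buckets)]

    def insert(self, key, value):
        bucket = self._bucket(key)
        for i, (k, _) in enumerate(bucket):
            if k == key:                  # key already present: overwrite
                bucket[i] = (key, value)
                return
        bucket.append((key, value))
        self.size += 1

    def get(self, key, default=None):
        for k, v in self._bucket(key):
            if k == key:
                return v
        return default

m = ChainedHashMap()
m.insert("gpu", "H100")
m.insert("gpu", "A100")   # overwrites the previous value
print(m.get("gpu"))       # A100
print(m.size)             # 1
```

A production version would also grow the bucket array as the load factor rises, which is what keeps the constant-time guarantee amortized.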
Learn about advancements in video conferencing that have transformed how we communicate.
Get training, insights, and access to experts for the latest in recommender systems.
Learn how AI is improving your cybersecurity to detect threats faster.
Current topology-based modeling software produces 3D objects at a single level of detail, making them unusable across the multiple platforms of the metaverse. In addition, due to the topology creation process, 3D modeling is time-consuming and has a high barrier to entry for content creation. NVIDIA Inception Program member and New York-based startup Shapeyard is solving the metaverse 3D content…
Learn about the latest tools, trends, and technologies for building and deploying conversational AI.
Explore the latest software and developer tools to build, deploy, and scale vision AI and IoT apps.
Learn about the latest AI and data science breakthroughs from leading data science teams at NVIDIA GTC 2023.
Discover how to build a robust MLOps practice for continuous delivery and automated deployment of AI workloads at scale.
NVIDIA DLSS has revolutionized AI-powered graphics. The latest DLSS Unreal Engine update further enhances experiences for developers and gamers on Unreal Engine 4.27 and higher. Developers can now automatically update to the latest DLSS AI networks. You can patch your games over the air (OTA) using the NVIDIA Game Ready Drivers with the latest DLSS improvements. Gamers can also automatically…
Autonomous vehicles (AVs) must be able to safely handle any type of traffic scenario that could be encountered in the real world. This includes hazardous near-accidents, where an unexpected maneuver by other road users in traffic could lead to collision. However, developing and testing AVs in these types of scenarios is challenging. Real-world collision data is sparse…
Learn how accelerated computing can reduce your total carbon footprint and support your organization's energy efficiency efforts.
NVIDIA AI Enterprise is an end-to-end, secure, cloud-native suite of AI software. The recent release of NVIDIA AI Enterprise 3.0 introduces new features to help optimize the performance and efficiency of production AI. This post provides details about the new features listed below and how they work. New AI workflows in the 3.0 release of NVIDIA AI Enterprise help reduce the…
Get to know the NVIDIA technologies and software development tools powering the latest in robotics and edge AI.
Many developers use tox as a solution to standardize and automate testing in Python. However, using the tool only for test automation severely limits its power and the full scope of what you could achieve. For example, tox is also a great solution for the "it works on my machine" problem. There are several reasons for this, such as: In addition, and most importantly…
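As a sketch of what going beyond bare test automation can look like, a tox.ini can define separate, reproducible environments for testing and linting, so collaborators run the same commands in the same isolated virtualenvs. The environment names and tool choices below are illustrative assumptions, not a recommendation from the post:

```ini
[tox]
envlist = py311, lint

[testenv]
; Isolated environment for the test suite
deps = pytest
commands = pytest tests/

[testenv:lint]
; Separate environment for static checks
deps = flake8
commands = flake8 src/
```

Because tox builds each environment from scratch, a passing run is evidence the project works outside any one developer's machine.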
See how recent breakthroughs in generative AI are transforming media, content creation, personalized experiences, and more.
Learn how developers are building metaverse applications, extensions, and microservices.
Join us for the latest on NVIDIA RTX and neural rendering technologies, and learn how they are accelerating game development.
When it comes to creating immersive virtual environments, users want the experience to look as realistic and lifelike as possible. And while all-in-one (AIO) headsets provide mobility and freedom for VR users, they don't always have enough power to render photorealistic scenes with accurate physics and lighting. Using the cloud and professional GPUs, you can generate realistic graphics for…
Explore the latest advances in accurate and customizable automatic speech recognition, multi-language translation, and text-to-speech.
How can you tell if your Jupyter instance is secure? The NVIDIA AI Red Team has developed a JupyterLab extension to automatically assess the security of Jupyter environments. jupysec is a tool that evaluates the user's environment against almost 100 rules that detect configurations and artifacts that have been identified by the AI Red Team as potential vulnerabilities, attack vectors…
5G deployments have been accelerating around the globe. Many telecom operators have already rolled out 5G services and are expanding rapidly. In addition to the telecom operators, there is significant interest among enterprises in using 5G to set up private networks leveraging higher bandwidth, lower latency, network slicing, mmWave, and CBRS spectrum. The 5G ramp comes at an interesting…
Discover how 3D synthetic data generation is accelerating AI and simulation workflows.
Check out this NVIDIA GTC 2023 playlist to see all the sessions on accelerated networking, sustainable data centers, Ethernet for HPC, and more.
Many companies are embracing digital twins to improve their products and services. Digital twins can be used for complex simulations of factories and warehouses or to understand how products will look and behave in the real world. However, many businesses don't know how to begin making their existing 3D art assets valuable within a simulation environment. The existing universe of 3D assets is…
3D modeling workflows have a high degree of complexity driven by a wide variety of specialized tools. With the rise of digital twins, these 3D workflows are creating new possibilities in applications for robotics, autonomous vehicles, scientific visualization, and architectural engineering. In architecture, engineering, construction, and operations (AECO) specifically, the potential of digital…
Network traffic continues to increase, with the number of Internet users across the globe reaching 5 billion in 2022. As the number of users expands, so does the number of connected devices, which is expected to grow into the trillions. The ever-increasing number of connected users and devices leads to an overwhelming amount of data generated across the network. According to IDC…
Announced at GTC 2022, the next generation of NVIDIA GPUs (the NVIDIA GeForce RTX 40 series, NVIDIA RTX 6000 Ada Generation, and NVIDIA L40 for data center) are built with the new NVIDIA Ada Architecture. The NVIDIA Ada Architecture features third-generation ray tracing cores, fourth-generation Tensor Cores, multiple video encoders, and a new optical flow accelerator. To enable you to…
With the Jetson Orin Nano announcement this week at GTC, the entire Jetson Orin module lineup is now revealed. With up to 40 TOPS of AI performance, Orin Nano modules set the new standard for entry-level AI, just as Jetson AGX Orin is already redefining robotics and other autonomous edge use cases with 275 TOPS of server class compute. All Jetson Orin modules and the Jetson AGX Orin Developer…
When examining an intricate speech AI robotic system, it's easy for developers to feel intimidated by its complexity. Arthur C. Clarke claimed, "Any sufficiently advanced technology is indistinguishable from magic." From accepting natural-language commands to safely interacting in real-time with its environment and the humans around it, today's speech AI robotics systems can perform tasks to…
As of March 21, 2023, QODA is now CUDA Quantum. For up-to-date information, see the CUDA Quantum page. Quantum circuit simulation is critical for developing applications and algorithms for quantum computers. Because of the disruptive nature of known quantum computing algorithms and use cases, quantum algorithms researchers in government, enterprise, and academia are developing and…
The latest version of NVIDIA PhysicsNeMo, an AI framework that enables users to create customizable training pipelines for digital twins, climate models, and physics-based modeling and simulation, is now available for download. This release of the physics-ML framework, NVIDIA PhysicsNeMo v22.09, includes key enhancements to increase composition flexibility for neural operator architectures…
Deploying AI models in production to meet the performance and scalability requirements of the AI-driven application while keeping the infrastructure costs low is a daunting task. Join the NVIDIA Triton and NVIDIA TensorRT community to stay current on the latest product updates, bug fixes, content, best practices, and more. This post provides you with a high-level overview of AI…
In the operating room, the latency and reliability of surgical video streams can make all the difference for patient outcomes. Ultra-high-speed frame rates from sensor inputs that enable next-generation AI applications can provide surgeons with new levels of real-time awareness and control. To build real-time AI capabilities into medical devices for use cases like surgical navigation…
Explore the AI technology powering Violet, the interactive avatar showcased this week in the NVIDIA GTC 2022 keynote. Learn new details about NVIDIA Omniverse Avatar Cloud Engine (ACE), a collection of cloud-native AI microservices for faster, easier deployment of interactive avatars, and NVIDIA Tokkio, a domain-specific AI reference application that leverages Omniverse ACE for creating fully…
The cellular industry spends over $50 billion on radio access networks (RAN) annually, according to a recent GSMA report on the mobile economy. Dedicated and overprovisioned hardware is primarily used to provide capacity for peak demand. As a result, most RAN sites have an average utilization below 25%. This has been the industry reality for years as technology evolved from 2G to 4G.
NVIDIA revealed major updates to its suite of AI software for developers including JAX, NVIDIA CV-CUDA, and NVIDIA RAPIDS. To learn about the latest SDK advancements from NVIDIA, watch the keynote from CEO Jensen Huang. Just today at GTC 2022, NVIDIA introduced JAX on NVIDIA AI, the newest addition to its GPU-accelerated deep learning frameworks. JAX is a rapidly growing…
At GTC 2022, NVIDIA introduced enhancements to AI frameworks for building real-time speech AI applications, designing high-performing recommenders at scale, applying AI to cybersecurity challenges, creating AI-powered medical devices, and more. Showcased real-world, end-to-end AI frameworks highlighted the customers and partners leading the way in their industries and domains.
Recent advances in large language models (LLMs) have fueled state-of-the-art performance for NLP applications such as virtual scribes in healthcare, interactive virtual assistants, and many more. To simplify access to LLMs, NVIDIA has announced two services: NeMo LLM for customizing and using LLMs, and BioNeMo, which expands scientific applications of LLMs for the pharmaceutical and…
At GTC 2022, NVIDIA announced the Jetson Orin Nano series of system-on-modules (SOMs). They deliver up to 80X the AI performance of NVIDIA Jetson Nano and set the new standard for entry-level edge AI and robotics applications. For the first time, the Jetson family now includes NVIDIA Orin-based modules that span from the entry-level Jetson Orin Nano to the highest-performance Jetson AGX Orin.
NVIDIA recently announced Ada Lovelace, the next generation of GPUs. Named the NVIDIA GeForce RTX 40 Series, these are the world's most advanced graphics cards. Featuring third-generation Ray Tracing Cores and fourth-generation Tensor Cores, they accelerate games that take advantage of the latest neural graphics and ray tracing technology. Since the introduction of the GeForce RTX 20 Series…
Developers, creators, and enterprises around the world are using NVIDIA Omniverse to build virtual worlds and push the boundaries of the metaverse. Based on Universal Scene Description (USD), an extensible, common language for virtual worlds, Omniverse is a scalable computing platform for full-design-fidelity 3D simulation workflows that developers across global industries are using to build out…
Announced at GTC, technical artists, software developers, and ML engineers can now build custom, physically accurate, synthetic data generation pipelines in the cloud with NVIDIA Omniverse Replicator. Omniverse Replicator is a highly extensible framework built on the NVIDIA Omniverse platform that enables physically accurate 3D synthetic data generation to accelerate the training and accuracy…
Developing for the medical imaging AI lifecycle is a time-consuming and resource-intensive process that typically includes data acquisition, compute, and training time, and a team of experts who are knowledgeable in creating models suited to your specific challenge. Project MONAI, the medical open network for AI, is continuing to expand its capabilities to help make each of these hurdles easier no…
The field of computational biology relies on bioinformatics tools that are fast, accurate, and easy to use. As next-generation sequencing (NGS) is becoming faster and less costly, a data deluge is emerging, and there is an ever-growing need for accessible, high-throughput, industry-standard analysis. At GTC 2022, we announced the release of NVIDIA Clara Parabricks v4.0…
Supercomputers are used to model and simulate the most complex processes in scientific computing, often for insight into new discoveries that otherwise would be impractical or impossible to demonstrate physically. The NVIDIA BlueField data processing unit (DPU) is transforming high-performance computing (HPC) resources into more efficient systems, while accelerating problem solving across a…
An AI model card is a document that details how machine learning (ML) models work. Model cards provide detailed information about the ML model's metadata including the datasets that it is based on, performance measures that it was trained on, and the deep learning training methodology itself. This post walks you through the current practice for AI model cards and how NVIDIA is planning to advance…
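To make the idea of a model card concrete, a minimal one might record fields like the following. The field names and values are a hypothetical sketch, not a fixed standard or an NVIDIA format:

```yaml
model_name: example-image-classifier   # illustrative model, not a real release
version: 1.0
intended_use: Research demo for natural-image classification
training_data: Publicly available labeled image dataset
metrics:
  top1_accuracy: 0.87                  # measured on a held-out test split
limitations: Not evaluated on medical or satellite imagery
license: Apache-2.0
```

The value of the card is less in any single field than in keeping provenance, evaluation, and limitations next to the model itself.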
Learn about new CUDA features, digital twins for weather and climate, quantum circuit simulations, and much more with these GTC 2022 sessions.
Join us for these GTC 2022 sessions to learn about optimizing PyTorch models, accelerating graph neural networks, improving GPU performance, and more.
Discover the latest innovations in manufacturing and aerospace with GTC sessions from leaders at Siemens, Boeing, BMW, and more.
Join us for sessions from AT&T, Verizon, T-Mobile, Ericsson, and more to discover the latest innovations in telecom.
Learn how to develop and distribute custom applications for the metaverse with the NVIDIA Omniverse platform at GTC.
Learn about transformer-powered personalized online advertising, cross-framework model evaluation, the NVIDIA Merlin ecosystem, and more with these featured GTC 2022 sessions.
Join our deep learning sessions at GTC 2022 to learn about real-world use cases, new tools, and best practices for training and inference.
Join us for manufacturing sessions at GTC 2022, including an expert-led workshop on Computer Vision for Industrial Inspection.
]]>