In an effort to rein in illicit fishing, researchers have unveiled a new open-source AI model that can accurately identify what virtually all of the world��s seafaring vessels are doing, including whether a boat is potentially fishing illegally. Seattle-based Ai2 (the Allen Institute for AI) recently released a lightweight model named Atlantes to analyze more than five billion GPS signals a��
]]>The new release introduces Python support in Service Maker to accelerate real-time multimedia and AI inference applications with a powerful GStreamer abstraction layer.
]]>From humanoids to policy, explore the work NVIDIA is bringing to the robotics community.
]]>NVIDIA has built three computers and accelerated development platforms to enable developers to create physical AI.
]]>Modern cyber threats have grown increasingly sophisticated, posing significant risks to federal agencies and critical infrastructure. According to Deloitte, cybersecurity is the top priority for governments and public sectors, highlighting the need to adapt to an increasingly digital world for efficiency and speed. Threat examples include insider threats, supply chain vulnerabilities��
]]>Immerse yourself in NVIDIA technology with our full-day, hands-on technical workshops at our AI Summit in Washington D.C. on October 7, 2024.
]]>The latest release of NVIDIA cuBLAS library, version 12.5, continues to deliver functionality and performance to deep learning (DL) and high-performance computing (HPC) workloads. This post provides an overview of the following updates on cuBLAS matrix multiplications (matmuls) since version 12.0, and a walkthrough: Grouped GEMM APIs can be viewed as a generalization of the batched��
]]>As cyberattacks become more sophisticated, organizations must constantly adapt with cutting-edge solutions to protect their critical assets. One such solution is Cisco Secure Workload, a comprehensive security solution designed to safeguard application workloads across diverse infrastructures, locations, and form factors. Cisco recently announced version 3.9 of the Cisco Secure Workload��
]]>The latest state-of-the-art foundation large language models (LLMs) have billions of parameters and are pretrained on trillions of tokens of input text. They often achieve striking results on a wide variety of use cases without any need for customization. Despite this, studies have shown that the best accuracy on downstream tasks can be achieved by adapting LLMs with high-quality��
]]>NVIDIA SDKs have been instrumental in accelerating AI applications across a spectrum of use cases spanning smart cities, medical, and robotics. However, achieving a production-grade AI solution that can deployed at the edge to support human and machine collaboration safely and securely needs both high-quality hardware and software tailored for enterprise needs. NVIDIA is again accelerating��
]]>Join us on March 20 for Cybersecurity Developer Day at GTC to gain insights on leveraging generative AI for cyber defense.
]]>Join experts from NVIDIA and the public sector industry to learn how cybersecurity, generative AI, digital twins, and more are impacting the way that government agencies operate.
]]>The past few decades have witnessed a surge in rates of waste generation, closely linked to economic development and urbanization. This escalation in waste production poses substantial challenges for governments worldwide in terms of efficient processing and management. Despite the implementation of waste classification systems in developed countries, a significant portion of waste still ends up��
]]>Discover how generative AI is powering cybersecurity solutions with enhanced speed, accuracy, and scalability.
]]>Large language models (LLMs) have revolutionized the field of AI, creating entirely new ways of interacting with the digital world. While they provide a good generalized solution, they often must be tuned to support specific domains and tasks. AI coding assistants, or code LLMs, have emerged as one domain to help accomplish this. By 2025, 80% of the product development lifecycle will make��
]]>Learn how generative AI can help defend against spear phishing in this January 30 webinar.
]]>NVIDIA today unveiled major upgrades to the NVIDIA Avatar Cloud Engine (ACE) suite of technologies, bringing enhanced realism and accessibility to AI-powered avatars and digital humans. These latest animation and speech capabilities enable more natural conversations and emotional expressions. Developers can now easily implement and scale intelligent avatars across applications using new��
]]>Identity-based attacks are on the rise, with phishing remaining the most common and second-most expensive attack vector. Some attackers are using AI to craft more convincing phishing messages and deploying bots to get around automated defenses designed to spot suspicious behavior. At the same time, a continued increase in enterprise applications introduces challenges for IT teams who must��
]]>Stacking transformer layers to create large models results in better accuracies, few-shot learning capabilities, and even near-human emergent abilities on a wide range of language tasks. These foundation models are expensive to train, and they can be memory- and compute-intensive during inference (a recurring cost). The most popular large language models (LLMs) today can reach tens to hundreds of��
]]>Spark RAPIDS ML is an open-source Python package enabling NVIDIA GPU acceleration of PySpark MLlib. It offers PySpark MLlib DataFrame API compatibility and speedups when training with the supported algorithms. See New GPU Library Lowers Compute Costs for Apache Spark ML for more details. PySpark MLlib DataFrame API compatibility means easier incorporation into existing PySpark ML applications��
]]>Explore generative AI concepts and applications, along with challenges and opportunities in this self-paced course.
]]>Spear phishing is the largest and most costly form of cyber threat, with an estimated 300,000 reported victims in 2021 representing $44 million in reported losses in the United States alone. Business e-mail compromises led to $2.4 billion in costs in 2021, according to the FBI Internet Crime Report. In the period from June 2016 to December 2021, costs related to phishing and spear phishing totaled��
]]>On Sept. 27, join us to learn recommender systems best practices for building, training, and deploying at any scale.
]]>Learn key techniques and tools required to train a deep learning model in this virtual hands-on workshop.
]]>Ransomware attacks have become increasingly popular, more sophisticated, and harder to detect. For example, in 2022, a destructive ransomware attack took 233 days to identify and 91 days to contain, for a total lifecycle of 324 days. Going undetected for this amount of time can cause irreversible damage. Faster and smarter detection capabilities are critical to addressing these attacks.
]]>Delve into how TMA Solutions is accelerating original ML and AI workflows with RAPIDS.
]]>Prompt injection attacks are a hot topic in the new world of large language model (LLM) application security. These attacks are unique due to how ?malicious text is stored in the system. An LLM is provided with prompt text, and it responds based on all the data it has been trained on and has access to. To supplement the prompt with useful context, some AI applications capture the input from��
]]>Prompt injection is a new attack technique specific to large language models (LLMs) that enables attackers to manipulate the output of the LLM. This attack is made more dangerous by the way that LLMs are increasingly being equipped with ��plug-ins�� for better responding to user requests by accessing up-to-date information, performing complex calculations, and calling on external services through��
]]>This post was updated January 16, 2024. Recent years have witnessed a massive increase in the volume of 3D geospatial data being generated. This data provides rich real-world environmental and contextual information, spatial relationships, and real-time monitoring capabilities for industrial applications. It can enhance the realism, accuracy, and effectiveness of simulations across various��
]]>On Aug. 8, Jensen Huang features new NVIDIA technologies and award-winning research for content creation.
]]>Modeling time series data can be challenging (and fascinating) due to its inherent complexity and unpredictability. Long-term trends in time series can change drastically due to certain events, for example. Recall the beginning of the global pandemic, when businesses such as airlines or brick-and-mortar shops saw a quick decline in the number of customers and sales. In contrast��
]]>We were stuck. Really stuck. With a hard delivery deadline looming, our team needed to figure out how to process a complex extract-transform-load (ETL) job on trillions of point-of-sale transaction records in a few hours. The results of this job would feed a series of downstream machine learning (ML) models that would make critical retail assortment allocation decisions for a global retailer.
]]>Gain insights from advanced AI use cases powered by the NVIDIA Jetson Orin in ruggedized environments.
]]>Machine learning has the promise to improve our world, and in many ways it already has. However, research and lived experiences continue to show this technology has risks. Capabilities that used to be restricted to science fiction and academia are increasingly available to the public. The responsible use and development of AI requires categorizing, assessing, and mitigating enumerated risks where��
]]>Voice-enabled technology is becoming ubiquitous. But many are being left behind by an anglocentric and demographically biased algorithmic world. Mozilla Common Voice (MCV) and NVIDIA are collaborating to change that by partnering on a public crowdsourced multilingual speech corpus��now the largest of its kind in the world��and open-source pretrained models. It is now easier than ever before to��
]]>Rapid digital transformation has led to an explosion of sensitive data being generated across the enterprise. That data has to be stored and processed in data centers on-premises, in the cloud, or at the edge. Examples of activities that generate sensitive and personally identifiable information (PII) include credit card transactions, medical imaging or other diagnostic tests, insurance claims��
]]>At COMPUTEX 2023, NVIDIA announced the NVIDIA DGX GH200, which marks another breakthrough in GPU-accelerated computing to power the most demanding giant AI workloads. In addition to describing critical aspects of the NVIDIA DGX GH200 architecture, this post discusses how NVIDIA Base Command enables rapid deployment, accelerates the onboarding of users, and simplifies system management.
]]>The pace of 5G investment and adoption is accelerating. According to the GSMA Mobile Economy 2023 report, nearly $1.4 trillion will be spent on 5G CapEx, between 2023 and 2030. Radio access network (RAN) may account for over 60% of the spend. Increasingly, the CapEx spend is moving from the traditional approach with proprietary hardware, to virtualized RAN (vRAN) and Open RAN architectures��
]]>Embedded edge AI is transforming industrial environments by introducing intelligence and real-time processing to even the most challenging settings. Edge AI is increasingly being used in agriculture, construction, energy, aerospace, satellites, the public sector, and more. With the NVIDIA Jetson edge AI and robotics platform, you can deploy AI and compute for sensor fusion in these complex��
]]>Learn how NVIDIA NVUE API automates data center network operations with sample code for Curl commands, Python Code, and NVUE CLI.
]]>Recent years have seen a proliferation of large language models (LLMs) that extend beyond traditional language tasks to generative AI. This includes models like ChatGPT and Stable Diffusion. As this generative AI focus continues to grow, there is a rising need for a modern machine learning (ML) infrastructure that makes scalability accessible to the everyday practitioner.
]]>Real-time cloud-scale applications that involve AI-based computer vision are growing rapidly. The use cases include image understanding, content creation, content moderation, mapping, recommender systems, and video conferencing. However, the compute cost of these workloads is growing too, driven by demand for increased sophistication in the processing. The shift from still images to video is��
]]>Central and Eastern Europe (CEE) is quickly gaining recognition as one of the world��s most important rising technology ecosystems. A highly skilled workforce, government support, proximity to key markets, and a history of entrepreneurship are all factors that have led to a significant increase in funding to the region over the past several years. In turn, the increase in funding has led to dozens��
]]>Most drone inspections still require a human to manually inspect the video for defects. Computer vision can help automate and accelerate this inspection process. However, training a computer vision model to automate inspection is difficult without a large pool of labeled data for every possible defect. In a recent session at NVIDIA GTC, we shared how Exelon is using synthetic data generation��
]]>Physics-informed machine learning (physics-ML) is transforming high-performance computing (HPC) simulation workflows across disciplines, including computational fluid dynamics, structural mechanics, and computational chemistry. Because of its broad applications, physics-ML is well suited for modeling physical systems and deploying digital twins across industries ranging from manufacturing to��
]]>Using generative AI and the NVIDIA Morpheus cybersecurity AI framework, developers can build solutions that detect spear phishing attempts more effectively and with extremely short training times. In fact, using NVIDIA Morpheus and a generative AI training technique, we were able to detect 90% of targeted spear phishing emails��a 20% improvement compared to a typical phishing detection solution��
]]>NVIDIA recently announced Morpheus, an AI application framework that provides cybersecurity developers with a highly optimized AI pipeline and pre-trained AI capabilities. Morpheus allows developers for the first time to instantaneously inspect all IP network communications through their data center fabric. Attacks are becoming more and more frequent and dangerous despite the advancements in��
]]>Imagine a future where ultra-high-fidelity simulation and training applications are deployed over any network topology from a centralized secure cloud or on-premises infrastructure. Imagine that you can stream graphical training content from the datacenter to remote end devices ranging from a single flat-screen or synchronized displays to AR/VR/MR head-mounted displays.
]]>NVIDIA today announced the software release of DeepStream SDK 3.0 for Tesla GPUs. Developers can now focus on building core deep learning networks rather than designing end-to-end applications from scratch given its modular framework and hardware-accelerated building blocks. The SDK��s latest features make it easy for you to create scalable solutions for the most complex Intelligent Video��
]]>The detection of malicious software (malware) is an increasingly important cyber security problem for all of society. Single incidences of malware can cause millions of dollars in damage. The current generation of anti-virus and malware detection products typically use a signature-based approach, where a set of manually crafted rules attempt to identify different groups of known malware types.
]]>Israeli startup UVeye developed an AI-based recognition system that scans the underside of moving vehicles to identify potential hidden threats. ��UVeye is changing the way people approach security when traveling by vehicle with a fast, accurate and automatic machine learning inspection system that can detect threatening objects or unlawful substances, for example, bombs��
]]>A six-person startup from Seattle developed augmented telerobotics software that gives humans better control of remotely operated robots which can be useful for exploring Mars or other planets. BluHaptics specializes in robotic control for underwater environments, but with a recently awarded grant funded by NASA, they are now applying their software to control robotic operations in space �C by��
]]>Vivek Venugopalan, a staff research scientist at the United Technologies Research Center (UTRC) shares how they are using deep learning and GPUs to understand the life of an aircraft engine and predictive maintenance for elevators in high-rise buildings. ��GPUs have helped us arrive at solutions quickly for computationally intensive challenges across all UTRC platforms, especially in this era of��
]]>James Parr, co-director of the NASA Frontier Development Lab (FDL) shares how NVIDIA GPUs and deep learning can help detect, characterize and deflect asteroids. The FDL hosted 12 standout graduate students for an internship to take on the White House��s Asteroid Grand Challenge, an ongoing program that aims to get researchers to ��find all asteroid threats to human populations and know what to do��
]]>Late last year, the NVIDIA Inception Program hosted a ��Cool Demo Contest�� for GPU-accelerated startups that are applying deep learning to their innovations. A variety of companies from around the world submitted their demos, ranging from defense to healthcare applications. Below are highlights from three of the 14 winners who each won a Pascal TITAN X GPU. Eating Smart Just Got Smart Now��
]]>Once it classifies the object, the Jetson-powered Airspace drone fires a tethered net to capture the other craft from the sky and safely returns it to its landing pad. Airspace is the only drone security solution capable of identifying, tracking, and autonomously removing rogue drones from the sky. The start-up is using GeForce GTX 1080 GPUs and DIGITS to train their deep learning model to��
]]>Devin White, Senior Researcher at Oak Ridge National Laboratory shares how they are using GPUs to improve the geolocation accuracy of imagery collected by a satellite, manned aircraft, or an unmanned aerial system. Using Tesla K80 GPUs and CUDA, the researchers in the Geographic Information Science and Technology Group at ORNL developed a sensor-agnostic, plugin-based framework to support��
]]>Just in time for the International Supercomputing show (ISC 2016) and International Conference on Machine Learning (ICML 2016), NVIDIA announced three new deep learning software tools for data scientists and developers to make the most of the vast opportunities in deep learning. NVIDIA DIGITS 4 A new workflow for training object detection neural networks to find instances of faces��
]]>Leo Meyerovich, CEO of Graphistry Inc., shares how GPUs and machine learning are protecting the largest companies and organizations in the world by visually alerting them of attacks and big outages. Using NVIDIA GPUs and CUDA, the graph analysis cloud platform is able to help the company��s response and hunting team sift through 100M+ alerts a day. ��We��ve built one of the world��s fastest��
]]>Joshua Patterson, principal data scientist of Accenture Labs shares how his team is using NVIDIA GPUs and GPU-accelerated libraries to quickly detect security threats by analyzing anomalies in large-scale network graphs. ��When we can move 4 billion node graphs onto a GPU and have the shared memory of all the other GPUs and have that connected processing power�� it��s really going to cut-out months��
]]>The Handford site in southeastern Washington is the largest radioactive waste site in the United Sates and is still awaiting cleanup after more than 70 years. Cleaning up radioactive waste is extremely complicated since some elements stay radioactive for thousands of years. Scientists from Lawrence Berkeley National Laboratory and six universities: The State University of New York at Buffalo��
]]>Javier Rodriguez Saeta, CEO of Herta Security, shares how they��re using NVIDIA GPUs to train deep neural networks for pattern recognition to help with security at airports, stadiums and train stations. Herta Security��s high performance video-surveillance solution for facial recognition is designed to identify people in crowded and changing environments in real-time. Their technology makes it��
]]>Share your creative, cutting-edge virtual reality innovations with the world. The VR Showcase, taking place at the GPU Technology Conference April 4-7, 2016 in San Jose, CA, is an opportunity for 8 teams to present their innovative work using Virtual Reality. The winning team will win $30,000 USD in cash and prizes. Each team will pitch their idea on stage for 8 minutes, 5 minutes of presentation��
]]>In what could one day help find missing people in forests, a team of researchers used deep learning to train an autonomous drone to navigate a previously-unseen trail in a densely wooded forest completely on its own. The researchers from Dalle Molle Institute for Artificial Intelligence, the University of Zurich, and NCCR Robotics, mounted three GoPro cameras to a headset to train their deep��
]]>To reveal deeper insights into important activities taking place around the world, DigitalGlobe��s advanced satellite constellation collects nearly 4 million km2 of high-resolution earth imagery each day. The company announced they will now rely on NVIDIA GPUs and deep learning to automatically identify objects such as airplanes, vehicles, and gray elephants, as well as to detect patterns from��
]]>Every second, millions of videos are generated and consumed every second. From the projected 859 petabytes of footage from surveillance cameras to the over two billion images and videos uploaded daily to social platforms, visual content is exploding. However, huge gaps exist between simply storing lots of data and the intelligent, insightful, and actionable understanding of this visual media.
]]>The U.S. Army introduced its newest supercomputer, Excalibur, which will help to ensure Soldiers have the technological advantage on the battlefield. Increased computational capabilities will allow researchers bring improved communications, data and intelligence to Soldiers in the field, said Maj. Gen. John F. Wharton, commanding general of the U.S. Army Research��
]]>Spurred by the need for neural networks capable of tackling vast wells of high-res satellite data, a team from the NASA Advanced Supercomputing Division at NASA Ames and Louisiana State University have sought a new blend of deep learning techniques that can build on existing neural nets to create something robust enough for satellite datasets. Deep belief networks are on offshoot of the larger��
]]>