Risk and uncertainty in energy exploration stem from unknown geological parameters, variations in fluid and rock properties, boundary conditions, and noisy observational data. Rigorous calibration of uncertainty for key reservoir engineering tasks and field optimization requires running a large number of forward simulations. Use cases range from history matching and probabilistic…
The evolution of modern application development has led to a significant shift toward microservice-based architectures. This approach offers great flexibility and scalability, but it also introduces new complexities, particularly in the realm of security. In the past, engineering teams were responsible for a handful of security aspects in their monolithic applications. Now, with microservices…
Antibodies have become the most prevalent class of therapeutics, primarily due to their ability to target specific antigens, enabling them to treat a wide range of diseases, from cancer to autoimmune disorders. Their specificity reduces the likelihood of off-target effects, making them safer and often more effective than small-molecule drugs for complex conditions. As a result…
Last November, AWS integrated open-source inference serving software, NVIDIA Triton Inference Server, in Amazon SageMaker. Machine learning (ML) teams can use Amazon SageMaker as a fully managed service to build and deploy ML models at scale. With this integration, data scientists and ML engineers can easily use the NVIDIA Triton multi-framework, high-performance inference serving with the…
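As a rough sketch of what that integration looks like in practice, the snippet below deploys a Triton serving container as a SageMaker real-time endpoint using the SageMaker Python SDK; the container image URI, model archive location, and instance type are placeholder assumptions, not values from the announcement.

```python
import sagemaker
from sagemaker.model import Model

# Placeholder values -- the Triton image URI, model archive location, and IAM role
# are assumptions for illustration; actual Triton image URIs are region-specific.
triton_image_uri = "<account>.dkr.ecr.<region>.amazonaws.com/sagemaker-tritonserver:<tag>"
model_data = "s3://my-bucket/triton-models/model.tar.gz"
role = sagemaker.get_execution_role()

# Wrap the Triton serving container and model repository as a SageMaker Model,
# then deploy it behind a real-time GPU-backed endpoint.
model = Model(image_uri=triton_image_uri, model_data=model_data, role=role)
predictor = model.deploy(initial_instance_count=1, instance_type="ml.g4dn.xlarge")
```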
The latest NVIDIA HPC SDK update expands portability and now supports the Arm-based AWS Graviton3 processor. In this post, you learn how to enable Scalable Vector Extension (SVE) auto-vectorization with the NVIDIA compilers to maximize the performance of HPC applications running on the AWS Graviton3 CPU. The NVIDIA HPC SDK includes the proven compilers, libraries…
Speech AI can assist human agents in contact centers, power virtual assistants and digital avatars, generate live captioning in video conferencing, and much more. Under the hood, these voice-based technologies orchestrate a network of automatic speech recognition (ASR) and text-to-speech (TTS) pipelines to deliver intelligent, real-time responses.
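The turn-level orchestration described here can be sketched as a simple speech-in, speech-out loop; the three service functions below are hypothetical stand-ins rather than any specific SDK, and in practice they would wrap an ASR service, a dialog or NLU model, and a TTS service.

```python
# Conceptual sketch of the ASR -> dialog -> TTS loop described above.
# All three functions are hypothetical placeholders, not a real API.

def asr_transcribe(audio_chunk: bytes) -> str:
    """Hypothetical: send audio to an ASR service and return the transcript."""
    raise NotImplementedError

def generate_response(transcript: str) -> str:
    """Hypothetical: produce the assistant's reply from the transcript."""
    raise NotImplementedError

def tts_synthesize(text: str) -> bytes:
    """Hypothetical: convert the reply text back into speech audio."""
    raise NotImplementedError

def handle_turn(audio_chunk: bytes) -> bytes:
    # One conversational turn: speech in, speech out.
    transcript = asr_transcribe(audio_chunk)
    reply = generate_response(transcript)
    return tts_synthesize(reply)
```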
Step-A, Step-B ("Go get a cup of coffee…"), Step-C. How often have you seen "Go get a coffee" in the instructions? As a developer, I found early on that this pesky quip is the bane of my life. Context switches, no matter the duration, are a high cost to pay in the application development cycle. Of all the steps that require you to step away, waiting for an application to compile…
Deploying AI-powered services like voice-based assistants, e-commerce product recommendations, and contact-center automation into production at scale is challenging. Delivering the best end-user experience while reducing operational costs requires accounting for multiple factors. These include composition and performance of underlying infrastructure, flexibility to scale resources based on user…
Creating immersive applications with high-fidelity 3D graphics has never been more accessible thanks to recent advances in extended reality (XR) hardware and software. Despite this growth, developing augmented reality (AR) and virtual reality (VR) applications still comes with challenges. By using NVIDIA CloudXR alongside Amazon NICE DCV streaming protocols, you can use on-demand compute…
Today at AWS re:Invent 2021, AWS announced the general availability of Amazon EC2 G5g instances, bringing the first NVIDIA GPU-accelerated Arm-based instance to the AWS cloud. The new EC2 G5g instance features AWS Graviton2 processors, based on 64-bit Arm Neoverse cores, and NVIDIA T4G Tensor Core GPUs, enhanced for graphics-intensive applications. This powerful combination creates an…
See the latest innovations spanning from the cloud to the edge at AWS re:Invent. Plus, learn more about the NVIDIA NGC catalog, a comprehensive collection of GPU-optimized software. Working closely together, NVIDIA and AWS developed a session and workshop focused on learning more about NVIDIA GPUs and providing hands-on training on NVIDIA Jetson modules. Register now for the virtual AWS…
Today, AWS announced the general availability of the new Amazon EC2 G5 instances, powered by NVIDIA A10G Tensor Core GPUs. These instances are designed for the most demanding graphics-intensive applications, as well as machine learning inference and training of simple to moderately complex machine learning models on the AWS cloud. The new EC2 G5 instances feature up to eight NVIDIA A10G Tensor…
The NGC team is hosting a webinar with live Q&A to dive into how to build AI models using PyTorch Lightning, an AI framework built on top of PyTorch, from the NGC catalog. Simplify and Accelerate AI Model Development with PyTorch Lightning, NGC, and AWS: September 2 at 10 a.m. PT. Organizations across industries are using AI to help build better products, streamline operations…
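For readers who want a feel for the framework before the session, here is a minimal LightningModule sketch for a toy classification task; the layer sizes, optimizer, and Trainer settings are illustrative assumptions, not material from the webinar.

```python
import torch
from torch import nn
import pytorch_lightning as pl

# Minimal sketch of a LightningModule for a simple classification task;
# all hyperparameters below are illustrative assumptions.
class SimpleClassifier(pl.LightningModule):
    def __init__(self, num_features: int = 32, num_classes: int = 10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_features, 64), nn.ReLU(), nn.Linear(64, num_classes)
        )
        self.loss_fn = nn.CrossEntropyLoss()

    def forward(self, x):
        return self.net(x)

    def training_step(self, batch, batch_idx):
        x, y = batch
        loss = self.loss_fn(self(x), y)
        self.log("train_loss", loss)
        return loss

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=1e-3)

# Lightning's Trainer owns the device placement and training loop, e.g.:
# trainer = pl.Trainer(max_epochs=5, accelerator="gpu", devices=1)
# trainer.fit(SimpleClassifier(), train_dataloaders=train_loader)
```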
With the growing interest in deep learning (DL), more and more users are using DL in production environments. Because DL requires intensive computational power, developers are leveraging GPUs to do their training and inference jobs. Recently, as part of a major Apache Spark initiative to better unify DL and data processing on Spark, GPUs became a schedulable resource in Apache Spark 3.
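Below is a minimal sketch of what requesting GPUs as a schedulable resource looks like in Spark 3, assuming one GPU per executor and a standard resource discovery script; the script path and amounts are illustrative.

```python
from pyspark.sql import SparkSession

# Request GPUs as a schedulable resource in Spark 3; values are illustrative.
spark = (
    SparkSession.builder
    .appName("gpu-scheduling-example")
    .config("spark.executor.resource.gpu.amount", "1")      # GPUs requested per executor
    .config("spark.task.resource.gpu.amount", "1")          # GPUs required per task
    .config("spark.executor.resource.gpu.discoveryScript",  # script reporting GPU addresses
            "/opt/spark/scripts/getGpusResources.sh")        # assumed path
    .getOrCreate()
)

# On the executors, tasks can look up their assigned GPU via
# TaskContext.get().resources()["gpu"].addresses.
```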
Whole genome sequencing has become an important and foundational part of genomic research, enabling researchers to identify genetic signatures associated with diseases, differentiate sequencing errors from biological signals, and better characterize the genomes of various organisms. With the ongoing COVID-19 pandemic threatening the globe, characterizing and understanding genomes is now more…
AI is going mainstream and is quickly becoming pervasive in every industry, from autonomous vehicles to drug discovery. However, developing and deploying AI applications is a challenging endeavor. The process requires building a scalable infrastructure by combining hardware, software, and intricate workflows, which can be time-consuming as well as error-prone. To accelerate the end-to-end AI…
Cloud computing is all about making resources available on demand, and its availability, flexibility, and lower cost have helped it take commercial computing by storm. At the Microsoft Build 2015 conference in San Francisco, Microsoft revealed that its Azure cloud computing platform is averaging over 90 thousand new customers per month and contains more than 1.4 million SQL databases being used by…