On Sept. 12, learn about the connection between MATLAB and NVIDIA Isaac Sim through ROS.
Traditional healthcare systems have large amounts of patient data in the form of physiological signals, medical records, provider notes, and comments. The biggest challenges in developing digital health applications are analyzing the vast amounts of available data, deriving actionable insights, and developing solutions that can run on embedded devices. Engineers and data scientists…
This post discusses how an application developer can use MATLAB to prototype and deploy deep learning algorithms on hardware like the NVIDIA Jetson Nano Developer Kit. In previous posts, we explored how you can design and train deep learning networks in MATLAB and how you can generate optimized CUDA code from your deep learning algorithms. In our experience working with deep learning engineers…
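To make the deployment step concrete, here is a minimal sketch of generating CUDA code for a Jetson board with GPU Coder. It assumes GPU Coder and the MATLAB Coder Support Package for NVIDIA Jetson and NVIDIA DRIVE Platforms are installed; the entry-point function classify_image, the board address, and the login credentials are hypothetical placeholders.

    % Configure code generation for an executable targeting a Jetson board.
    cfg = coder.gpuConfig('exe');
    cfg.Hardware = coder.hardware('NVIDIA Jetson');
    cfg.Hardware.DeviceAddress = '192.168.1.15';   % placeholder board IP
    cfg.Hardware.Username      = 'ubuntu';         % placeholder credentials
    cfg.Hardware.Password      = 'ubuntu';
    cfg.Hardware.BuildDir      = '~/remoteBuildDir';
    cfg.GenerateExampleMain    = 'GenerateCodeAndCompile';

    % classify_image is a hypothetical entry-point function that loads a
    % trained network and classifies a 224x224x3 uint8 image.
    codegen -config cfg classify_image -args {ones(224,224,3,'uint8')}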
Gone are the days of using a single GPU to train a deep learning model. With computationally intensive algorithms such as semantic segmentation, a single GPU can take days to optimize a model. But multi-GPU hardware is expensive, you say. Not any longer: NVIDIA multi-GPU hardware on cloud instances such as AWS P3 allows you to pay only for what you use. Cloud instances allow you to take…
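As a rough illustration of how small the multi-GPU switch is in MATLAB, the sketch below assumes Deep Learning Toolbox and Parallel Computing Toolbox; the datastore folder, network layers, and hyperparameters are placeholders.

    % Labeled training images organized in subfolders (placeholder path).
    imds = imageDatastore('trainingImages', ...
        'IncludeSubfolders', true, 'LabelSource', 'foldernames');

    % A deliberately tiny placeholder network.
    layers = [
        imageInputLayer([224 224 3])
        convolution2dLayer(3, 64, 'Padding', 'same')
        reluLayer
        fullyConnectedLayer(10)
        softmaxLayer
        classificationLayer];

    % 'multi-gpu' spreads each mini-batch across all local GPUs, e.g. the
    % eight V100s on an AWS p3.16xlarge instance.
    opts = trainingOptions('sgdm', ...
        'ExecutionEnvironment', 'multi-gpu', ...
        'MiniBatchSize', 256, ...
        'MaxEpochs', 10);

    net = trainNetwork(imds, layers, opts);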
You've probably seen headlines about innovation in automated driving now that there are several cars available on the market that have some level of self-driving capability. I often get questions from colleagues on how automated driving systems perceive their environment and make "human-like" decisions. The autonomous car in Figure 1 must locate and classify all the relevant objects on the road…
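As one illustrative slice of that perception task, the sketch below runs a pretrained vehicle detector on a single camera frame. It assumes Automated Driving Toolbox and Computer Vision Toolbox are available; the image file name is a placeholder.

    % Pretrained ACF vehicle detector shipped with Automated Driving Toolbox.
    detector = vehicleDetectorACF();

    I = imread('highway_scene.jpg');              % placeholder camera frame
    [bboxes, scores] = detect(detector, I);       % locate candidate vehicles

    % Overlay the detections and their confidence scores on the image.
    annotated = insertObjectAnnotation(I, 'rectangle', bboxes, scores);
    imshow(annotated)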
Researchers at MIT's Computer Science and Artificial Intelligence Lab have developed software that uses variations in Wi-Fi signals to recognize human silhouettes through walls. The researchers presented RF-Capture, which tracks the 3D positions of a person's limbs and body parts even when the person is fully occluded from its sensor, and does so without placing any markers on the subject's body.
Deep learning is becoming ubiquitous. With recent advancements in deep learning algorithms and GPU technology, we are able to solve problems once considered impossible in fields such as computer vision, natural language processing, and robotics. Deep learning uses deep neural networks, which have been around for a few decades; what's changed in recent years is the availability of large…
In this post, I will discuss techniques you can use to maximize the performance of your GPU-accelerated MATLAB® code. First I explain how to write MATLAB code that is inherently parallelizable. This technique, known as vectorization, benefits all of your code, whether or not it uses the GPU. Then I present a family of function wrappers (bsxfun, pagefun, and arrayfun) that take advantage of GPU hardware…
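A minimal sketch of those two ideas, assuming Parallel Computing Toolbox and a supported NVIDIA GPU:

    % Vectorization: one matrix-vector product instead of an explicit loop.
    A = gpuArray.rand(4096);          % data created directly on the GPU
    x = gpuArray.rand(4096, 1);
    y = A * x;

    % arrayfun fuses an element-wise function into a single GPU kernel.
    sigmoid = @(v) 1 ./ (1 + exp(-v));
    s = arrayfun(sigmoid, y);

    % pagefun applies a function to each page of a 3-D array, e.g. many
    % small matrix products in one call.
    P = gpuArray.rand(64, 64, 500);
    Q = gpuArray.rand(64, 64, 500);
    R = pagefun(@mtimes, P, Q);

    result = gather(s);               % copy the result back to host memory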
I stumbled upon the above tweet by Leon Palafox, a Postdoctoral Fellow at the University of Arizona Lunar and Planetary Laboratory, and reached out to him to discuss his success with GPUs and share it with other developers interested in using deep learning for image processing. We are working on developing a tool that can automatically identify various geological processes on the…
In an earlier post we showed how MATLAB® can support CUDA kernel prototyping and development by providing an environment for quick evaluation and visualization using the CUDAKernel object. In this post I will show you how to use MEX to integrate an existing library of both host and device code implemented in C++ or another CUDA-accelerated language. With MEX you can extend and customize MATLAB…
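For a feel of that workflow, here is a hedged sketch that compiles a CUDA source file into a MEX function with mexcuda and then calls it from MATLAB. The file name saxpy_mex.cu and the function's signature are hypothetical; the source would contain a mexFunction entry point that launches the library's kernels.

    % Compile the CUDA/C++ source into a MEX binary (requires a CUDA toolkit
    % compatible with the installed MATLAB release).
    mexcuda saxpy_mex.cu

    % Call the resulting MEX function like any other MATLAB function. It can
    % accept gpuArray inputs if it uses the mxGPUArray MEX API internally.
    x = gpuArray.rand(1e6, 1, 'single');
    y = gpuArray.rand(1e6, 1, 'single');
    z = saxpy_mex(single(2), x, y);   % hypothetical signature: z = a*x + y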
This week's Spotlight is on Dr. Adam Gazzaley of UC San Francisco, where he is the founding director of the Neuroscience Imaging Center and an Associate Professor in Neurology, Physiology and Psychiatry. His work was featured in Nature in September 2013. NVIDIA: Adam, how are you using GPU computing in your research? Adam: We are working with a distributed team (UCSF, Stanford…
NVIDIA GPUs are becoming increasingly popular for large-scale computations in image processing, financial modeling, signal processing, and other applications, largely due to their highly parallel architecture and high computational throughput. The CUDA programming model lets programmers exploit the full power of this architecture by providing fine-grained control over how computations are divided…
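One way MATLAB exposes that fine-grained control is the CUDAKernel object, which launches a kernel compiled to PTX directly on gpuArray data. The sketch below assumes Parallel Computing Toolbox; the file names and the addVectors kernel are placeholders.

    % Kernel source in add_vectors.cu (compiled separately to add_vectors.ptx):
    %   __global__ void addVectors(double *c, const double *a,
    %                              const double *b, int n) { ... }
    k = parallel.gpu.CUDAKernel('add_vectors.ptx', 'add_vectors.cu');

    n = 1e6;
    k.ThreadBlockSize = [256, 1, 1];          % threads per block
    k.GridSize        = [ceil(n/256), 1, 1];  % enough blocks to cover n elements

    a = gpuArray.rand(n, 1);
    b = gpuArray.rand(n, 1);
    c = gpuArray.zeros(n, 1);
    c = feval(k, c, a, b, int32(n));          % launch the kernel on the GPU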