With nearly 1.5 billion monthly visitors and 346,000 pictures of tattoos, Tattoodo is taking advantage of deep learning to help categorize the growing number of uploaded images. "At Tattoodo, we spend a lot of time and effort on classifying the tattoo pictures that are uploaded," said Goran Vuksic, a developer at Tattoodo. "A community member is able to provide a textual description and tag..."
By Grace Lam, Mokshith Voodarla, Nicholas Liu. How long does it take to program an office delivery robot? Apparently, less than seven weeks. This summer, three NVIDIA high school interns, Team Electron, built a completely autonomous indoor delivery robot with a Turtlebot base and Jetson TX2. Simply message the robot to deliver anything from pens to pizza and it'll bring it to you. A Team with a...
In part 1 of this series I introduced Generative Adversarial Networks (GANs) and showed how to generate images of handwritten digits using a GAN. In this post I will do something much more exciting: use Generative Adversarial Networks to generate images of celebrity faces. I am going to use CelebA [1], a dataset of 200,000 aligned and cropped 178 x 218-pixel RGB images of celebrities.
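As a hedged illustration (not the post's actual code), GAN pipelines for CelebA typically center-crop the 178 x 218 images to a square and downscale them before training; the celeba/ path, 64 x 64 target size, and batch limit below are all assumptions:

```python
# Hypothetical preprocessing sketch for CelebA: center-crop each 178 x 218
# JPEG to a 178 x 178 square, downscale it, and rescale pixels to [-1, 1]
# (the usual range for a tanh-output generator).
import glob

import numpy as np
from PIL import Image

def load_celeba_batch(pattern="celeba/*.jpg", size=64, limit=128):
    images = []
    for path in glob.glob(pattern)[:limit]:
        img = Image.open(path).convert("RGB")           # 178 x 218 source image
        top = (218 - 178) // 2                          # vertical offset of the square crop
        img = img.crop((0, top, 178, top + 178))        # 178 x 178 center crop
        img = img.resize((size, size), Image.BILINEAR)  # downscale for training
        images.append(np.asarray(img, dtype=np.float32) / 127.5 - 1.0)
    return np.stack(images)                             # shape: (limit, size, size, 3)
```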
In machine learning, a generative model learns to generate samples that have a high probability of being real samples, like those from the training dataset. Generative Adversarial Networks (GANs) are a very hot topic in Machine Learning. A typical GAN comprises two agents, a generator G and a discriminator D, with competing goals (hence the term "adversarial" in Generative Adversarial Networks): D must learn to...
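For reference (this is the standard formulation from Goodfellow et al.'s original GAN paper, not text from this excerpt), the competition can be written as a minimax game in which D maximizes its ability to separate real samples x from generated samples G(z), while G minimizes it:

```latex
\min_G \max_D \; V(D, G) =
  \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\big[\log D(x)\big]
  + \mathbb{E}_{z \sim p_z(z)}\big[\log\big(1 - D(G(z))\big)\big]
```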
You heard it from the Deep Learning guru: Generative Adversarial Networks [2] are a very hot topic in Machine Learning. In this post I will explore various ways of using a GAN to create previously unseen images. I provide source code in TensorFlow and a modified version of DIGITS that you are free to use if you wish to try it out yourself. Figure 1 gives a preview of what you will learn to do in...
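The post's own TensorFlow code and modified DIGITS are the authoritative version; purely to give a flavor of what a GAN generator looks like, here is a minimal, hypothetical sketch in Keras (the layer sizes and the 28 x 28 grayscale output, matching part 1's handwritten digits, are assumptions):

```python
# Hypothetical generator sketch (not the post's code): maps a latent noise
# vector to a 28 x 28 grayscale image via transposed convolutions.
import tensorflow as tf

def make_generator(latent_dim=100):
    return tf.keras.Sequential([
        tf.keras.layers.Dense(7 * 7 * 128, activation="relu",
                              input_shape=(latent_dim,)),
        tf.keras.layers.Reshape((7, 7, 128)),
        tf.keras.layers.Conv2DTranspose(64, kernel_size=4, strides=2,
                                        padding="same", activation="relu"),
        tf.keras.layers.Conv2DTranspose(1, kernel_size=4, strides=2,
                                        padding="same", activation="tanh"),
    ])

# Feed random noise through the generator to sample 16 fake images.
noise = tf.random.normal((16, 100))
fake_images = make_generator()(noise)  # shape (16, 28, 28, 1), values in [-1, 1]
```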
Today we are announcing the production release of NVIDIA DIGITS 5 and NVIDIA TensorRT. DIGITS is an interactive deep neural network training application for developers to rapidly train highly accurate neural networks for image classification, segmentation and object detection. Trained models can be deployed to the cloud, PC, embedded or automotive GPU platforms, with TensorRT inference engine...
Pattern recognition and classification in medical image analysis have been of interest to scientists for many years. Machine learning techniques have enabled researchers to develop and utilize complicated models to classify or predict various abnormalities or diseases. Recently, the successful applications of state-of-the-art deep learning architectures have rapidly expanded in medical imaging.
Each November for the last decade, Tsukuba City in Japan has run a 2,000-meter race unlike just about any other in the world. What's unusual is not the terrain (spanning parks, city streets, even indoor shopping malls) or the speed (the pace is a leisurely stroll). It's the participants: each is an autonomous robot. The event, called the Tsukuba Challenge, gives robots the difficult task of...
Today we're excited to announce NVIDIA DIGITS 5. DIGITS 5 comes with a number of new features, two of which are of particular interest here. In this post I will explore the subject of image segmentation: I'll use DIGITS 5 to teach a neural network to recognize and locate cars, pedestrians, road signs and a variety of other urban objects in synthetic images from the SYNTHIA dataset.
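As a tiny illustration of what segmentation produces (not DIGITS code; the 4 x 4 size and the three class names are made up), a segmentation network outputs per-pixel class scores, and the predicted label map is an argmax over the class axis:

```python
# Illustration only: per-pixel class scores of shape (H, W, num_classes)
# become a label map by taking the argmax over the class axis.
import numpy as np

rng = np.random.default_rng(0)
scores = rng.random((4, 4, 3))      # toy scores: 4x4 image, 3 classes (e.g. car, pedestrian, sign)
label_map = scores.argmax(axis=-1)  # (4, 4) array of per-pixel class indices
print(label_map)
```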
The Barcelona-based startup RestB developed a deep learning algorithm to determine in real time what is in a particular image. RestB is commercializing its high-precision models by charging customers an initial training fee to build a custom model for their needs and charging per API call thereafter. "If we take a picture of a city, (other companies' technology) would describe it by indicating that it..."
BabbyCam is a new deep learning baby monitor that recognizes your baby, monitors their emotions, and will alert you if their face is covered. As a new parent himself, the developer of the camera was in search of a solution that could identify whether the infant was on its stomach, one of the leading risk factors for Sudden Infant Death Syndrome (SIDS). With no such monitors in existence, Benjamin Lui...
The diet app Lose It! released a new deep learning feature called Snap It that lets users take photos of their food and automatically logs the calorie count and nutritional information. Using the NVIDIA DIGITS deep learning training system on four TITAN X GPUs, the company trained its network on a vast database of 230,000 food images and more than 4 billion foods logged by Lose It!
DigitalGlobe, CosmiQ Works and NVIDIA recently announced the launch of the SpaceNet online satellite imagery repository. This public dataset of high-resolution satellite imagery contains a wealth of geospatial information relevant to many downstream use cases such as infrastructure mapping, land usage classification and human geography estimation. The SpaceNet release is unprecedented: it's the...
The NVIDIA Deep Learning GPU Training System (DIGITS) puts the power of deep learning in the hands of data scientists and researchers. Using DIGITS you can perform common deep learning tasks such as managing data, defining networks, training several models in parallel, monitoring training performance in real time, and choosing the best model from the results browser.
Today we're excited to announce the availability of NVIDIA DIGITS 4. DIGITS 4 introduces a new object detection workflow and DetectNet, a new deep neural network for object detection that enables data scientists and researchers to train models that can detect instances of faces, pedestrians, traffic signs, vehicles and other objects in images. Object detection is one of the most challenging...
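One standard ingredient of any object detection workflow is scoring a predicted bounding box against a ground-truth box with intersection-over-union (IoU); the sketch below is a generic illustration of that metric, not DetectNet's implementation:

```python
# Generic IoU illustration (not DetectNet code). Boxes are (x1, y1, x2, y2)
# corner coordinates with x1 < x2 and y1 < y2.
def iou(a, b):
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])   # intersection top-left
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])   # intersection bottom-right
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # 25 / 175, about 0.14
```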
Just in time for the International Supercomputing show (ISC 2016) and the International Conference on Machine Learning (ICML 2016), NVIDIA announced three new deep learning software tools for data scientists and developers to make the most of the vast opportunities in deep learning. NVIDIA DIGITS 4: a new workflow for training object detection neural networks to find instances of faces...
The NVIDIA Deep Learning SDK brings high-performance GPU acceleration to widely used deep learning frameworks such as Caffe, TensorFlow, Theano, and Torch. This powerful suite of tools and libraries lets data scientists design and deploy deep learning applications. Following the Beta release a few months ago, the production release is now available. Download now >>
We love seeing all of the social media posts from developers using NVIDIA GPUs; here are a few highlights from the week. On Twitter? Follow @GPUComputing and @mention us and/or use hashtags so we're able to keep track of what you're up to: #CUDA, #cuDNN, #OpenACC.
Containers wrap applications into an isolated virtual environment to simplify data center deployment. By including all application dependencies (binaries and libraries), application containers can run seamlessly in any data center environment. Docker, the leading container platform, can now be used to containerize GPU-accelerated applications. To make it easier to deploy GPU-accelerated...
A generation of startups are now putting artificial intelligence into the hands of millions. NVIDIA GPUs and deep learning software power much of this work. A billboard ad that guesses your age. A photo app that recognizes your face. A digital math aide that solves quadratic equations.
Companies across nearly all industries are exploring how to use GPU-powered deep learning to extract insights from big data. From self-driving cars to disease-detecting mirrors, the use cases for deep learning are expanding by the day. Since computer scientist Geoff Hinton started using GPUs to train his neural networks, researchers have been applying the technology to tough modeling problems in the real...
DIGITS is an interactive deep learning development tool for data scientists and researchers, designed for rapid development and deployment of an optimized deep neural network. NVIDIA introduced DIGITS in March 2015, and today we are excited to announce the release of DIGITS 2, which includes automatic multi-GPU scaling. Whether you are developing an optimized neural network for a single data set...
The hottest area in machine learning today is Deep Learning, which uses Deep Neural Networks (DNNs) to teach computers to detect recognizable concepts in data. Researchers and industry practitioners are using DNNs in image and video classification, computer vision, speech recognition, natural language processing, and audio recognition, among other applications. The success of DNNs has been...
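As a minimal sketch of what a DNN classifier computes (a toy example, not from the post: stacked affine layers with nonlinearities, ending in a softmax over classes; all sizes here are made up):

```python
# Toy forward pass of a two-layer fully connected classifier.
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def softmax(x):
    e = np.exp(x - x.max())  # subtract max for numerical stability
    return e / e.sum()

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(256, 64)), np.zeros(64)  # 256 inputs -> 64 hidden units
W2, b2 = rng.normal(size=(64, 10)), np.zeros(10)   # 64 hidden -> 10 classes

x = rng.normal(size=256)  # e.g. a flattened 16 x 16 input image
probs = softmax(relu(x @ W1 + b1) @ W2 + b2)
print(probs.argmax(), probs.sum())  # predicted class; probabilities sum to 1
```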