To improve the speed of video recognition applications on edge devices such as NVIDIA's Jetson Nano and Jetson TX2, MIT researchers developed a new deep learning model that outperforms previous state-of-the-art models on video recognition tasks. Trained using 1,536 NVIDIA V100 GPUs on Oak Ridge National Laboratory's Summit supercomputer, the model earned the top spot in the Something-Something…