New MIT Video Recognition Model Dramatically Improves Latency on Edge Devices
NVIDIA Technical Blog | Nefi Alarcon | 2019-10-17

To improve the speed of video recognition applications on edge devices such as NVIDIA's Jetson Nano and Jetson TX2, MIT researchers developed a new deep learning model that outperforms previous state-of-the-art models on video recognition tasks. Trained using 1,536 NVIDIA V100 GPUs on Oak Ridge National Laboratory's Summit supercomputer, the model earned the top spot on the Something-Something video dataset leaderboard.
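
The post doesn't name the architecture, but the model it describes matches MIT's Temporal Shift Module (TSM; Lin et al., ICCV 2019): the core idea is to shift a small fraction of feature-map channels along the time axis so an ordinary 2D CNN gains temporal context at almost no extra compute. Below is a minimal PyTorch sketch of that shift operation, offered as an illustration rather than the blog's or the authors' code; the function name `temporal_shift` and the `shift_div` fraction follow the paper's convention.

```python
import torch

def temporal_shift(x: torch.Tensor, shift_div: int = 8) -> torch.Tensor:
    """Shift a fraction of channels along the time axis (TSM-style).

    x: activations shaped (batch, time, channels, height, width).
    shift_div: 1/shift_div of channels shift each way; 8 follows the paper.
    """
    n, t, c, h, w = x.size()
    fold = c // shift_div
    out = torch.zeros_like(x)
    # First 1/shift_div of channels: shift left (pull features from the next frame).
    out[:, :-1, :fold] = x[:, 1:, :fold]
    # Next 1/shift_div: shift right (pull features from the previous frame).
    out[:, 1:, fold:2 * fold] = x[:, :-1, fold:2 * fold]
    # Remaining channels stay in place.
    out[:, :, 2 * fold:] = x[:, :, 2 * fold:]
    return out

# Usage: a batch of 2 clips, 8 frames of 64-channel 56x56 feature maps.
x = torch.randn(2, 8, 64, 56, 56)
y = temporal_shift(x)  # same shape; channels now carry cross-frame information
```

Because the shift is pure memory movement, it adds essentially zero FLOPs on top of the 2D backbone, which is what makes this style of model fast enough for Jetson-class edge devices.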

Source
