New techniques efficiently accelerate sparse tensors for massive AI models

Researchers from MIT and NVIDIA have developed two techniques that accelerate the processing of sparse tensors, a type of data structure used in high-performance computing. The complementary techniques could significantly improve the performance and energy efficiency of systems such as the massive machine-learning models that drive generative artificial intelligence.
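To illustrate the idea behind sparse-tensor acceleration, here is a minimal, hypothetical sketch (not the researchers' actual method): a tensor dominated by zeros can be stored as just its nonzero values and their indices, so that computation skips the zeros entirely. The function names and COO-style layout below are illustrative assumptions.

```python
import numpy as np

def to_sparse(dense):
    # Illustrative COO-style compression: keep only nonzero
    # values and the positions where they occur.
    idx = np.flatnonzero(dense)
    return idx, dense[idx]

def sparse_dot(idx, vals, vec):
    # Dot product that touches only the stored nonzeros,
    # skipping all the zero entries of the original tensor.
    return float(np.dot(vals, vec[idx]))

dense = np.array([0.0, 2.0, 0.0, 0.0, 3.0, 0.0])
vec = np.array([1.0, 1.0, 1.0, 1.0, 2.0, 1.0])
idx, vals = to_sparse(dense)
print(sparse_dot(idx, vals, vec))  # equals np.dot(dense, vec)
```

The savings grow with sparsity: if only a small fraction of entries are nonzero, both memory traffic and arithmetic shrink proportionally, which is the effect hardware accelerators for sparse workloads aim to exploit.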

from Tech Xplore - electronic gadgets, technology advances and research news https://ift.tt/eM0lyod
