Intel Updates Its PyTorch Build With More Large Language Model Optimizations

Written by Michael Larabel in Intel on 10 May 2024 at 06:19 AM EDT.
Intel has released the Intel Extension for PyTorch v2.3, succeeding the earlier extension derived from PyTorch 2.1. With this updated extension targeting PyTorch 2.3, Intel is rolling out more optimizations around Large Language Models (LLMs).

The Intel Extension for PyTorch continues to be Intel's optimized downstream software for maximizing the performance of the PyTorch framework on Intel hardware. The extension ships with additional AVX-512 VNNI optimizations, Intel AMX support, Intel XMX support for Intel discrete GPUs, and other improvements to get the most out of PyTorch on Intel CPUs and GPUs.
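
For readers unfamiliar with how the extension is used, here is a minimal sketch of the general-purpose ipex.optimize() front-end being applied to an eager-mode model so inference can pick up the BF16/AMX code paths; the torchvision ResNet-50 model is just a placeholder for illustration, not something named in the release notes:

```python
import torch
import torchvision.models as models
import intel_extension_for_pytorch as ipex  # the extension discussed above

# Placeholder model for illustration; any eager-mode nn.Module works.
model = models.resnet50(weights="IMAGENET1K_V1")
model.eval()

# ipex.optimize() rewrites the module with optimized/fused operators and,
# on capable Xeon CPUs, lets BF16 matmuls and convolutions use AMX kernels.
model = ipex.optimize(model, dtype=torch.bfloat16)

with torch.no_grad(), torch.cpu.amp.autocast():
    output = model(torch.randn(1, 3, 224, 224))
print(output.shape)
```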

The v2.3 extension brings new Large Language Model optimizations through a new feature, the LLM Optimization API, which applies module-level optimizations to commonly used LLMs. The release also updates the bundled Intel oneDNN neural network library, adds TorchServe CPU examples, includes other LLM performance optimizations, and improves warnings and logging information.
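
The exact surface of the new LLM Optimization API is not spelled out in the announcement, but based on the module-level description a usage sketch might look roughly like the following; the ipex.llm.optimize() entry point and the Hugging Face model name are assumptions made for illustration:

```python
import torch
import intel_extension_for_pytorch as ipex
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical example checkpoint; substitute any commonly used LLM.
model_id = "meta-llama/Llama-2-7b-hf"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)
model.eval()

# Assumed entry point for the LLM Optimization API described above:
# applies module-level optimizations to the transformer blocks of
# commonly used LLM architectures.
model = ipex.llm.optimize(model, dtype=torch.bfloat16)

inputs = tokenizer("What is new in Intel Extension for PyTorch v2.3?",
                   return_tensors="pt")
with torch.inference_mode():
    generated = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(generated[0], skip_special_tokens=True))
```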

Those making use of PyTorch on Intel platforms can find the updated open-source extension on GitHub.
