Intel Showcases AI Optimizations for Language Models on Arc Alchemist GPUs Using PyTorch


Intel has released its PyTorch extension (Intel Extension for PyTorch), which allows large language models (LLMs) such as Llama 2 to run on Arc Alchemist GPUs. The extension is designed to take advantage of the XMX (Xe Matrix Extensions) cores inside Arc GPUs and optimizes FP16 performance.
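In practice, this means a Llama 2 checkpoint can be loaded through PyTorch, moved to the Arc GPU ("xpu" device), and passed through the extension's optimization pass. The sketch below is a minimal, hypothetical example of that flow; the model ID and exact API surface (here `ipex.optimize`) are assumptions for illustration, not code from Intel's demo.

```python
# Hypothetical sketch: loading Llama 2 in FP16 on an Arc GPU ("xpu" device)
# with Intel Extension for PyTorch. The checkpoint name and API details are
# assumptions, not taken from Intel's demo code.
import torch
import intel_extension_for_pytorch as ipex
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-chat-hf"  # assumed checkpoint for illustration

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16)

# Move the model to the Arc GPU and apply the extension's FP16 optimizations,
# which are intended to exploit the XMX units on Alchemist.
model = model.eval().to("xpu")
model = ipex.optimize(model, dtype=torch.float16)
```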

The demo shows the Llama 2 and Llama 2-Chat LLMs running on an Arc A770 16GB, with the models being asked questions such as "can deep learning have such generalization ability like humans do?" and generating responses.
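Continuing the sketch above, a generation call for the question shown in the demo might look like the following; this is an assumed usage example, not Intel's demo script.

```python
# Hypothetical usage: asking the question from Intel's demo and generating a reply.
prompt = "Can deep learning have such generalization ability like humans do?"
inputs = tokenizer(prompt, return_tensors="pt").to("xpu")

with torch.no_grad():
    output_ids = model.generate(inputs.input_ids, max_new_tokens=256)

print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```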

Read more: Intel demonstrates PyTorch AI optimizations for accelerating large language models on its Arc Alchemist GPUs