Intel has released its open-source NPU Acceleration Library, which lets developers run lightweight LLMs such as TinyLlama on the neural processing unit in Meteor Lake AI PCs. The library is available on GitHub and supports both Windows and Linux.
Although it is aimed primarily at developers, ordinary users with some coding experience could use it to run an AI chatbot locally on Meteor Lake hardware. Doing so is not plug-and-play, however: it requires a working Python environment and a decent understanding of the language.
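To give a sense of what that Python work looks like, the sketch below follows the usage pattern shown in the project's GitHub README: load TinyLlama with Hugging Face Transformers, then compile it for the NPU. The package name, the `compile()` call, and the int8 dtype option are taken from the README at the time of writing and may change between releases; running it also assumes a Meteor Lake machine with the NPU driver installed.

```python
# Sketch based on the project's GitHub README; assumes the library is
# installed (pip install intel-npu-acceleration-library) and that the
# machine has a Meteor Lake NPU with its driver set up.
import torch
import intel_npu_acceleration_library
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer

model_id = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"

# Load the model on the CPU first, then compile it for the NPU,
# quantizing the weights to int8 to fit the lightweight-LLM use case.
model = AutoModelForCausalLM.from_pretrained(model_id, use_cache=True).eval()
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = intel_npu_acceleration_library.compile(model, dtype=torch.int8)

# Stream a short completion from the NPU-compiled model.
prompt = "What is an AI PC?"
inputs = tokenizer(prompt, return_tensors="pt")
streamer = TextStreamer(tokenizer, skip_special_tokens=True)
model.generate(**inputs, max_new_tokens=128, streamer=streamer)
```

Even this minimal flow involves installing PyTorch and Transformers, downloading model weights, and handling tokenization yourself, which is why the library remains a developer tool rather than a consumer chatbot app.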