Themata.AI



AMD Ryzen AI NPUs Are Finally Useful Under Linux For Running LLMs

phoronix.com

March 11, 2026

4 min read


Summary

AMD has developed the AMDXDNA accelerator driver for the mainline Linux kernel to support Ryzen AI NPUs, but user-space software for effectively utilizing these NPUs on Linux has been limited, with most applications relying on iGPU support instead.

Key Takeaways

  • AMD has developed the AMDXDNA accelerator driver for supporting Ryzen AI NPUs in the mainline Linux kernel.
  • The Lemonade 10.0 server now includes Linux NPU support for large language models and Whisper, alongside native integration with Claude Code.
  • FastFlowLM 0.9.35 has been released, providing official native Linux support for current-gen Ryzen AI NPUs with context lengths up to 256k tokens.
  • Ryzen AI NPU support on Linux is compatible with all current AMD Ryzen AI 300/400 series SoCs and requires the Linux 7.0 kernel or AMDXDNA driver back-ports.
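Since NPU support hinges on either a Linux 7.0 kernel or AMDXDNA back-ports, a quick local check can tell you which situation you are in. This is a minimal sketch, assuming the kernel module is named `amdxdna` (matching the driver name in the article) and that 7.0 is the mainline cutoff; it is not an official AMD diagnostic.

```shell
#!/bin/sh
# Compare the running kernel against the 7.0 requirement mentioned above.
required="7.0"
current="$(uname -r | cut -d- -f1)"

# sort -V orders version strings; if the lowest of the pair is $required,
# the running kernel is >= 7.0.
if [ "$(printf '%s\n%s\n' "$required" "$current" | sort -V | head -n1)" = "$required" ]; then
    echo "kernel $current: mainline AMDXDNA expected"
else
    echo "kernel $current: AMDXDNA back-port likely needed"
fi

# Check whether the amdxdna accelerator driver is actually loaded.
if grep -qw amdxdna /proc/modules 2>/dev/null; then
    echo "amdxdna driver loaded"
else
    echo "amdxdna driver not loaded"
fi
```

On an older kernel without back-ports, both checks will report the driver as missing, which usually explains why tools like Lemonade fall back to iGPU execution.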

Related Articles

AMD's GAIA Now Allows Building Custom AI Agents Via Chat, Becomes "True Desktop App"

Apr 11, 2026

AMD's Local, Open-Source AI Can Now Easily Interact With Your Gmail

May 8, 2026

AMD Engineer Leverages AI To Help Make A Pure-Python AMD GPU User-Space Driver

Mar 5, 2026

Lemonade by AMD: a fast and open source local LLM server using GPU and NPU

Apr 2, 2026