
FastFlowLM: Run LLMs on AMD Ryzen AI NPUs

FastFlowLM enables efficient execution of large language models on AMD Ryzen AI NPUs, optimizing performance without GPU dependency.

flux
Tech Stack
GitHub · GitHub Pages · Docker · GitHub Actions · Bash · JavaScript · CSS · C++ · Python · C · Objective-C
Summary

FastFlowLM (FLM) is a specialized runtime for running large language models efficiently on AMD Ryzen™ AI NPUs. It executes models without requiring a GPU, delivering fast inference while being over 10× more power-efficient. The tool supports context lengths of up to 256k tokens and is lightweight, with an installation size of only 17 MB.

Key features:

  • Fast and low power - Operates fully on AMD Ryzen™ AI NPU without burdening GPU or CPU resources.
  • Simple CLI and API - Provides a straightforward command-line interface and REST/OpenAI API for ease of use.
  • Private and offline - Ensures full privacy as it runs locally without internet access.
  • Lightweight runtime - Installs in under 20 seconds, making it easy to integrate into existing workflows.
  • No low-level tuning required - Users can focus on application development without needing to adjust model parameters.

FastFlowLM is particularly beneficial for developers looking to leverage local AI capabilities efficiently and effectively.
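Because FLM exposes a REST/OpenAI-compatible API, existing OpenAI-style client code can simply point at the local server. A minimal sketch in Python, using only the standard library — the endpoint URL and model name below are illustrative assumptions, not confirmed by this page:

```python
import json
from urllib import request

# Hypothetical local endpoint: FLM serves an OpenAI-style REST API,
# but the exact host/port here is an assumption, not from the source.
FLM_URL = "http://localhost:11434/v1/chat/completions"

def build_chat_request(model: str, prompt: str) -> dict:
    """Build an OpenAI-style chat-completion payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    }

def ask(prompt: str, model: str = "llama3.2:1b") -> str:
    """Send the prompt to the local FLM server and return the reply text."""
    payload = json.dumps(build_chat_request(model, prompt)).encode()
    req = request.Request(
        FLM_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        body = json.load(resp)
    # OpenAI-compatible servers return choices[0].message.content
    return body["choices"][0]["message"]["content"]
```

With the server running locally, `ask("Summarize NPUs in one sentence.")` would return the model's reply — entirely offline, with no data leaving the machine.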

Tags
  • amd
  • cpp
  • deepseek
  • llama
  • llm
  • npu
  • ollama