
Running Local LLMs

Link parkin’: 50+ Open-Source Options for Running LLMs Locally

Vince Lam put together a comprehensive resource for running LLMs on your own hardware:

There are many open-source tools for hosting open-weight LLMs locally for inference, ranging from command-line (CLI) tools to full GUI desktop applications. Here, I’ll outline some popular options and provide my own recommendations. I have split this post into the following sections:

  1. All-in-one desktop solutions for accessibility
  2. LLM inference via the CLI and backend API servers
  3. Front-end UIs for connecting to LLM backends

GitHub repo, plus a helpful Google Sheet.
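Many of the backends in the second category expose an OpenAI-compatible HTTP API, which is what the front-end UIs in the third category connect to. Here is a minimal sketch of that pattern, assuming a local server (for example, a llama.cpp server or Ollama) is listening on localhost:8080 and accepts a placeholder model name; the URL, port, and model identifier are assumptions to adapt for your own setup:

```python
# Minimal sketch: query a local OpenAI-compatible backend over HTTP.
# The endpoint, port, and model name are placeholders, not a specific
# tool's defaults -- adjust them to match whichever server you run.
import requests

BASE_URL = "http://localhost:8080/v1"  # hypothetical local endpoint

payload = {
    "model": "local-model",  # placeholder model identifier
    "messages": [
        {"role": "user", "content": "Why run LLM inference locally?"}
    ],
}

# POST to the standard chat completions route and print the reply text.
resp = requests.post(f"{BASE_URL}/chat/completions", json=payload, timeout=60)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```

Because the request shape is the same across these servers, swapping one backend for another is usually just a matter of changing the base URL and model name.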
