llama.cpp
Inference / Orchestration · infrastructure · open · #9 of 944 · Rising
90.6
Excellent
High confidence
Project Details
- License
- MIT
- Topics
- llm, inference, cpp, ggml, quantization, llama
- Install
brew install llama.cpp
C/C++ inference engine for LLaMA-family language models. It supports CPU, GPU, and Apple Silicon backends, and provides quantization that lets large models run on consumer hardware.
C/C++ LLM inference engine with broad quantization support
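The quantization idea mentioned above can be sketched in a few lines: store each block of weights as small integers plus one shared scale, trading a little precision for a large reduction in memory. This is an illustrative sketch of the scale-and-round concept only, not ggml's actual Q4 block layout.

```python
# Illustrative sketch of block quantization, the core idea behind
# llama.cpp's quantized formats. NOT the real ggml/GGUF Q4 layout.

def quantize_q4(block):
    """Quantize a block of floats to 4-bit ints in [-8, 7] plus one scale."""
    scale = max(abs(x) for x in block) / 7.0 or 1.0  # avoid div-by-zero
    qs = [max(-8, min(7, round(x / scale))) for x in block]
    return scale, qs

def dequantize_q4(scale, qs):
    """Recover approximate float weights from the ints and the scale."""
    return [q * scale for q in qs]

weights = [0.12, -0.56, 0.33, 0.70, -0.21, 0.05, -0.64, 0.49]
scale, qs = quantize_q4(weights)
restored = dequantize_q4(scale, qs)

# Each weight now costs 4 bits plus a shared scale instead of 32 bits,
# at the price of a small rounding error bounded by scale / 2:
max_err = max(abs(a - b) for a, b in zip(weights, restored))
```

Shrinking weights from 32 bits to roughly 4 bits per value is what lets multi-billion-parameter models fit in consumer RAM.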
Repo Score Comparison
Strong in Both
Tool Score
90.6
Rank #9
Repo Score
90.4
Repo Rank #5
Score Gap +0.3
Both the tool and its underlying repo score 75+, indicating a well-rounded, healthy project.
Pillar Breakdown
- Adoption (35%): 96.2
- Maintenance (30%): 96.4
- Friction (20%): 99.9
- Ecosystem (15%): 78.0
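For illustration, the four pillar weights above (which sum to 100%) can be combined as a plain weighted mean. Note that this simple combination yields about 94.3, not the 90.6 headline score, so the site evidently applies further adjustments that are not published here; the code below is a hypothetical sketch, not the site's actual formula.

```python
# Hypothetical weighted-mean combination of the listed pillar scores.
# Weights and scores are taken from the breakdown above; the real
# scoring formula is not published and clearly differs from this.

pillars = {
    "Adoption":    (0.35, 96.2),
    "Maintenance": (0.30, 96.4),
    "Friction":    (0.20, 99.9),
    "Ecosystem":   (0.15, 78.0),
}

weighted = sum(w * s for w, s in pillars.values())  # about 94.27
```

The gap between this naive mean and the published 90.6 suggests additional factors (e.g. momentum or confidence penalties) feed into the final number.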
Momentum
0.52 · Rising
7d change +0.64
High confidence
In Inference / Orchestration
Ranked #1 of 77