llm-awq
[MLSys 2024 Best Paper Award] AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration
Model Optimization · infrastructure · open · #893 of 944 (-138) · Rising
Score: 55.9 · Low · High confidence
Pillar Breakdown
Pillar       Weight  Score
Adoption     35%     46.0
Maintenance  30%     55.2
Friction     20%     99.9
Ecosystem    15%     36.6
Momentum: 0.52 (Rising) · 7d change: +1.19
In Model Optimization: ranked #15 of 16
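The four pillar weights (35% + 30% + 20% + 15% = 100%) suggest the overall score is a weighted combination of the pillar scores. A minimal sketch of that assumption in Python — note the naive weighted average comes out to 58.1, not the displayed 55.9, so the site's actual (undocumented) formula presumably applies further adjustments:

```python
# Hypothetical reconstruction of the composite score as a weighted
# average of the pillar scores shown on the card. The real scoring
# formula is not documented here; this is only an illustration.
pillars = {
    "Adoption":    (0.35, 46.0),
    "Maintenance": (0.30, 55.2),
    "Friction":    (0.20, 99.9),
    "Ecosystem":   (0.15, 36.6),
}

composite = sum(weight * score for weight, score in pillars.values())
print(round(composite, 1))  # 58.1 under this assumption, vs. 55.9 displayed
```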