NC (neural-compressor)
Overall score: 70.7 (Moderate; high confidence)
SOTA low-bit LLM quantization (INT8/FP8/MXFP8/INT4/MXFP4/NVFP4) and sparsity; leading model compression techniques on PyTorch, TensorFlow, and ONNX Runtime.

Pillar breakdown (weight → score):
- Adoption: 35% → 60.2
- Maintenance: 30% → 81.7
- Friction: 20% → 98.8
- Ecosystem: 15% → 54.4

Momentum: 0.50 (Rising); 7-day change: 0.00
Ranked #8 of 16 in Model Optimization
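The pillar breakdown above suggests a weighted composite; the sketch below shows how such a weighting could be combined. Note this is an assumption: the dashboard's actual aggregation formula is not given here, and a plain weighted sum of the listed pillars comes to roughly 73.5 rather than the reported 70.7, so additional factors (momentum, for instance) presumably enter the real score.

```python
# Hypothetical sketch: combine pillar scores with their listed weights.
# The dashboard's real formula is not published; a plain weighted sum
# yields ~73.5, not the reported 70.7, so other inputs likely apply.

pillars = {
    # name: (weight, score) as listed in the breakdown above
    "Adoption":    (0.35, 60.2),
    "Maintenance": (0.30, 81.7),
    "Friction":    (0.20, 98.8),
    "Ecosystem":   (0.15, 54.4),
}

def weighted_composite(pillars: dict[str, tuple[float, float]]) -> float:
    """Weighted sum of pillar scores (weights assumed to sum to 1.0)."""
    return sum(weight * score for weight, score in pillars.values())

print(round(weighted_composite(pillars), 1))  # → 73.5
```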