
lit-llama

Model Optimization · infrastructure · open · #598 of 944 · +234 · Surging

Score: 67.3 · Low · High confidence

Implementation of the LLaMA language model based on nanoGPT. Supports flash attention, Int8 and GPTQ 4-bit quantization, LoRA and LLaMA-Adapter fine-tuning, and pre-training. Apache 2.0 licensed.
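The LoRA fine-tuning mentioned above works by freezing the base weight matrix and training only a small low-rank correction. A minimal NumPy sketch of the idea (illustrative only; this is not lit-llama's API, and all names here are hypothetical):

```python
import numpy as np

class LoRALinear:
    """Sketch of a LoRA-adapted linear layer.

    The frozen weight W is augmented with a trainable low-rank update
    B @ A scaled by alpha / r, so only r * (d_in + d_out) parameters
    are trained instead of d_in * d_out.
    """

    def __init__(self, d_in, d_out, r=4, alpha=8, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.standard_normal((d_out, d_in)) / np.sqrt(d_in)  # frozen base weight
        self.A = rng.standard_normal((r, d_in)) * 0.01               # trainable down-projection
        self.B = np.zeros((d_out, r))                                # trainable up-projection, zero-init
        self.scale = alpha / r

    def __call__(self, x):
        # Base projection plus low-rank correction: x (W + scale * B A)^T
        return x @ self.W.T + self.scale * (x @ self.A.T) @ self.B.T

layer = LoRALinear(16, 8)
x = np.ones((2, 16))
y = layer(x)
# B is zero-initialized, so before training the LoRA path adds nothing:
assert np.allclose(y, x @ layer.W.T)
```

Because `B` starts at zero, the adapted layer initially matches the frozen model exactly; fine-tuning then moves only `A` and `B`.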

Pillar Breakdown

Pillar        Weight   Score
Adoption      35%      74.3
Maintenance   30%      63.3
Friction      20%      97.5
Ecosystem     15%      40.7

Momentum

Momentum: 0.79 (Surging)
7d change: +0.84
High confidence

In Model Optimization: ranked #9 of 16
