LG

LLM Guard

AI Safety & Guardrails·application·open·#392 of 944·+13·applications·Rising

73.6

Moderate

High confidence

Security toolkit for sanitizing LLM inputs and outputs against prompt injection.

Pillar Breakdown

Adoption

35%

75.2

Maintenance

30%

69.7

Friction

20%

99.7

Ecosystem

15%

58.2

Momentum

0.70Rising
7d change +0.44
High confidence

In AI Safety & Guardrails

Ranked #9 of 18

Similar Tools