Helicone
Open-source LLM observability platform for monitoring costs and latency
Open Source · Freemium · Trending
About
Helicone is an open-source observability platform purpose-built for LLM applications. It provides a one-line proxy integration to log every LLM request, offering real-time dashboards for monitoring costs, latency, error rates, and usage patterns. Helicone supports caching to reduce costs, rate limiting, user tracking, and prompt versioning, making it easy to optimize LLM spend in production.
Details
| Field | Value |
| --- | --- |
| Type | llm-observability |
| Integrations | OpenAI, Anthropic, Google AI, Azure OpenAI, LangChain, LlamaIndex, Vercel AI SDK |
| Cloud Support | Cloud-hosted, Self-hosted, AWS, GCP, Azure |
Tags
observability, cost-monitoring, latency, caching, proxy, open-source, llm-gateway
Quick Info
- Organization: Helicone
- Pricing: Free (100K requests/month) / $25/month (Pro)
- Free Tier: Yes
- Popularity: 73/100
- Stars: 5.1K
- MAU: 30K+
- Updated: Feb 19, 2026
Also in DevOps & MLOps
Weights & Biases
The AI developer platform for experiment tracking and model management
Commercial · Freemium
Weights & Biases · 10.0K stars · Free for individuals / Custom enterprise pricing
MLflow
Open-source platform for the complete ML lifecycle
OSS · Free
Databricks · 20.0K stars · Free (open-source) / Managed on Databricks
LangSmith
Observability and evaluation platform for LLM applications
Commercial · Freemium · Trending
LangChain · Free (1 seat, 5K traces/month) / $39/seat/month (Plus)