Helicone

Open-source LLM observability platform for monitoring costs and latency

Open Source · Freemium · Trending

About

Helicone is an open-source observability platform purpose-built for LLM applications. A one-line proxy integration logs every LLM request, and real-time dashboards surface costs, latency, error rates, and usage patterns. Helicone also supports caching to reduce costs, rate limiting, user tracking, and prompt versioning, making it easier to monitor and optimize LLM spend in production.
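A minimal sketch of the proxy-style integration, assuming the OpenAI Python SDK (v1+) and the proxy base URL and Helicone-Auth header described in Helicone's documentation; the Helicone-User-Id header for per-user tracking is included as an optional, assumed extra. Check the current docs for exact header names and endpoints.

```python
# Sketch: route OpenAI calls through Helicone's proxy so each request is logged.
# Assumes the oai.helicone.ai base URL and Helicone-Auth header from Helicone's docs.
import os

from openai import OpenAI

client = OpenAI(
    api_key=os.environ["OPENAI_API_KEY"],
    # Point the SDK at Helicone's proxy instead of api.openai.com.
    base_url="https://oai.helicone.ai/v1",
    default_headers={
        # Helicone API key: identifies your project in the Helicone dashboard.
        "Helicone-Auth": f"Bearer {os.environ['HELICONE_API_KEY']}",
        # Optional (assumed header): attribute requests to an end user
        # for per-user cost and usage breakdowns.
        "Helicone-User-Id": "user-123",
    },
)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```

Because the integration is a drop-in base URL change, application code stays unchanged; features such as response caching or rate limiting are enabled through additional request headers per Helicone's documentation.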

Details

Type: llm-observability
Integrations: OpenAI, Anthropic, Google AI, Azure OpenAI, LangChain, LlamaIndex, Vercel AI SDK
Cloud Support: Cloud-hosted, Self-hosted, AWS, GCP, Azure

Tags

observability, cost-monitoring, latency, caching, proxy, open-source, llm-gateway