Hey HN! I'm Caleb, one of the contributors to Opik, a new open source framework for LLM evaluations.
Over the last few months, my colleagues and I have been working to solve what we see as the most painful parts of writing evals for an LLM application. For this initial release, we've focused on the core features we think are most essential:
- Simplifying the implementation of more complex LLM-based evaluation metrics, like Hallucination and Moderation (first sketch after this list).
- Enabling step-by-step tracking, so you can test and debug each individual component of your LLM application, even in more complex multi-agent architectures (second sketch below).
- Exposing an API for "model unit tests" (built on Pytest), so you can run evals as part of your CI/CD pipelines (third sketch below).
- Providing an easy-to-use UI for scoring, annotating, and versioning your logged LLM data, for further evaluation or training.
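To give a flavor of the first point, a hallucination check looks roughly like this (a simplified sketch; the example strings are placeholders, and you should check the docs for exact signatures):

    from opik.evaluation.metrics import Hallucination

    metric = Hallucination()
    result = metric.score(
        input="What is the capital of France?",
        output="The capital of France is Berlin.",
        context=["France's capital city is Paris."],
    )
    # The metric is itself LLM-based, so you get a score plus a reason string
    print(result.value, result.reason)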
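For step-by-step tracking, the core primitive is a tracing decorator. A rough sketch, with placeholder functions standing in for your own pipeline:

    import opik

    @opik.track
    def retrieve_context(query: str) -> list[str]:
        # Your retrieval step; each decorated call is logged as a span
        return ["France's capital city is Paris."]

    @opik.track
    def answer_question(query: str) -> str:
        context = retrieve_context(query)
        # Call your LLM with `context` here; nested decorated calls show up
        # as child spans, so each step can be inspected and scored separately
        return "Paris"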
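And for CI, a model unit test is at its core just a Pytest test that asserts on a metric score. A simplified sketch (my_app is a stand-in for your application's entry point, and the 0.5 threshold is arbitrary):

    # test_my_app.py -- runs in CI/CD like any other Pytest suite
    from opik.evaluation.metrics import Hallucination

    def my_app(question: str) -> str:
        # Stand-in for your real application entry point
        return "The capital of France is Paris."

    def test_answer_is_grounded():
        question = "What is the capital of France?"
        result = Hallucination().score(
            input=question,
            output=my_app(question),
            context=["France's capital city is Paris."],
        )
        # Fail the build if the answer is judged to hallucinate past the threshold
        assert result.value < 0.5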
It's often hard to feel like you can trust an LLM application in production, not just because of the stochastic nature of the model, but because of the opaqueness of the application itself. Our belief is that with better tooling for evaluations, we can meaningfully improve this situation, and unlock a new wave of LLM applications.
You can run Opik locally, or with a free API key via our cloud platform. You can use it with any model server or hosted model; we currently have a built-in integration with the OpenAI Python library, which means it automatically works not just with OpenAI models but with any model served via an OpenAI-compatible server (Ollama, vLLM, etc.); a rough sketch is below. Opik also currently has out-of-the-box integrations with LangChain, LlamaIndex, Ragas, and a few other popular tools.
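Here's roughly what the OpenAI integration looks like pointed at a local Ollama server (the base URL, key, and model name are just example Ollama defaults; adjust for your setup):

    from openai import OpenAI
    from opik.integrations.openai import track_openai

    # Any OpenAI-compatible endpoint works; here, a local Ollama server
    client = track_openai(OpenAI(
        base_url="http://localhost:11434/v1",
        api_key="ollama",  # Ollama ignores the key, but the client requires one
    ))

    response = client.chat.completions.create(
        model="llama3.1",  # example; use whichever model you have pulled
        messages=[{"role": "user", "content": "What is the capital of France?"}],
    )
    # Every call through the wrapped client is logged as a trace automatically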
This is our initial release of Opik, so if you have any feedback or questions, I'd love to hear them!
I'm using Arize Phoenix and trying to see the difference. Can you highlight how Opik compares?