
Congrats on the launch. Cool to see a RAG-specific tracing tool; excited to try it out. Full disclosure: I am the cofounder and core maintainer of Langtrace (https://github.com/Scale3-Labs/langtrace), which is also an open source tool for tracing and observing your LLM stack, and our SDKs are OTEL based. In my experience, the biggest challenge right now specifically for RAG pipelines is the lack of flexibility in the current crop of tracing tools: not just visualizing the entire retrieval flow across all the components of the stack (the framework calls, vectorDB retrievals, re-ranker i/o if any, and the final LLM inference), but also being able to run experiments by freezing a setup, iterating on it, and measuring performance end to end so you know clearly how your changes map to that performance. This is what we think about most while building Langtrace as well.


Promptfoo is pretty good




There is a GenAI standard spec from OpenTelemetry for tracing LLM based applications. Currently there are 3 library implementations of this spec - Langtrace, OpenLLMetry and OpenLit. Microsoft has an implementation for .NET as well. OpenInference, though OpenTelemetry-compatible, does not adhere to the standard spec.


The way it works:

- I give it the content as a PDF or text
- it generates the script and records the audio


So why should I listen to a podcast that nobody bothered to record?


Langtrace core maintainer here. Congrats on the launch! We are building OTEL support for a wide range of LLMs, vectorDBs and frameworks - CrewAI, DSPy, LangChain etc. Would love to see if Langtrace's tracing library can be integrated with Laminar. Also, feel free to join the OTEL GenAI semantic conventions working group.


Thank you! If Langtrace sends OTEL spans over HTTP or gRPC, we can ingest it! How would one join the OTEL GenAI committee?
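For reference, pointing any OTEL SDK at an OTLP ingest endpoint usually just takes the standard exporter environment variables (the endpoint URL and token below are hypothetical placeholders):

```shell
# Standard OTLP exporter config; protocol is "grpc" or "http/protobuf"
export OTEL_EXPORTER_OTLP_PROTOCOL="http/protobuf"
export OTEL_EXPORTER_OTLP_TRACES_ENDPOINT="https://ingest.example.com/v1/traces"
export OTEL_EXPORTER_OTLP_HEADERS="authorization=Bearer <token>"
```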


Check out Langtrace. It's also OTEL based and integrates with Datadog.

https://github.com/Scale3-Labs/langtrace


If you are looking for automatic instrumentation that generates OpenTelemetry-standard traces, check out Langtrace: https://github.com/Scale3-Labs/langtrace. It has support for all popular LLMs, frameworks and vectorDBs, with high-cardinality attributes.


This still has the same issue: the docs say I need 2 more databases instead of making use of my existing Grafana LGTM stack.

https://github.com/Scale3-Labs/langtrace?tab=readme-ov-file#...

Another blocker for me is that it appears to need an API key for trying it out locally or self-hosting

---

I'm comparing your project to OpenLit on this front

- https://docs.openlit.io/latest/installation#kubernetes

- https://docs.openlit.io/latest/connections/grafanacloud


1. You do not need 2 databases if you are only looking to export the traces to your Grafana stack. If you are using Prometheus as the TSDB, you can send the traces there directly without needing any of the databases to be set up locally.

2. Also, you do not need the API key. You only need to install the Python or TypeScript SDK and use the custom exporter option - https://github.com/Scale3-Labs/langtrace-python-sdk?tab=read... . If you already have an OTEL exporter running, you don't even need that. Just initialize the SDK after installing and it will do its thing.

I am one of the core maintainers of the project. Do let me know if you have any questions.


Cool. Have you considered adopting the official OpenTelemetry standards? https://opentelemetry.io/docs/specs/semconv/attributes-regis...


Oh, we missed this new standard. Our first version must have been implemented before its release. We should definitely consider it now.

