LangChain abstracts too much, so you can't really see what's going on or control the flow of data with real precision. They are fixing that, though, and you now have much better visibility into what's being inferred. I still think LangChain is pretty useful, especially if you want to integrate with something quickly.
I think this is the reasonable take. LangChain gets a lot of derision, and rightly so, but it does have its uses for prototyping. It's also a good resource for learning the landscape, specifically because of its integrations. I haven't used it in a while, so I'm not familiar with the most recent updates.
Indeed, it has become the LLM equivalent of IBM, as in -- "No one ever got fired for choosing LangChain". A certain well-known ML person even runs a course on LangChain, as if it's a "fundamental" thing to know about LLMs. I was also surprised/disappointed to see that the otherwise excellent "Hands-on Large Language Models" book from O'Reilly has extensive examples using this library.
In April 2023 we (CMU/UW-Madison researchers) looked into this lib to build a simple RAG workflow that differed slightly from the canned "chains" like ConversationalRetrievalChain, and ended up wasting time hunting through docs and notebooks scattered all over the place, and going up and down class hierarchies to figure out what exactly was going on. We decided it shouldn't have to be this hard, and started building Langroid as an agent-oriented LLM programming framework.
In Langroid you set up a ChatAgent class, which encapsulates an LLM interface plus any state you'd like to maintain. There's a Task class that wraps a ChatAgent and handles inter-agent communication and tool-handling. Devs have found our framework easy to understand and extend for their purposes, and some companies are using it in production (a few have endorsed us publicly). A quick tour gives a flavor of Langroid: https://langroid.github.io/langroid/tutorials/langroid-tour/
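Roughly, the agent/task pattern described above looks like this (pseudocode sketch using only the class names mentioned here; the actual Langroid signatures and import paths may differ, so check the linked tour for real, runnable examples):

```
# an agent encapsulates an LLM interface plus any state you want to keep
agent = ChatAgent(config)

# a Task wraps an agent, handling the conversation loop and any tools
task = Task(agent, system_message="You answer questions over the docs")

# inter-agent communication: delegate sub-questions to another agent's task
sub_task = Task(other_agent, system_message="You retrieve relevant passages")
task.add_sub_task(sub_task)

result = task.run("What changed in the Q3 report?")
```

The point of the split is that the ChatAgent stays a plain, testable object, while orchestration concerns (turn-taking, tool dispatch, delegation) live in the Task layer.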