We're actually planning on migrating to LangChain very soon (primarily to allow for memory / tool usage + automatic integrations with llamacpp / other open source model serving frameworks). We didn't start with it initially since we didn't want to restrict our usage patterns too much while we were (even more) unsure of what exactly we were going to build.
As far as using other data connector frameworks, we found that they either (1) weren't very good and/or (2) didn't support automatic syncing effectively. For larger enterprises, it's not feasible to do a complete sync every X minutes. We need to be able to pull a time-bounded subset of changes (or have the source push updates to us), which is something LangChain, LlamaIndex, etc. don't support natively.
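To make the "time-bounded subset" point concrete, here's a rough sketch of the shape of sync we need. The `source` and `index` interfaces below are hypothetical placeholders (not real LangChain/LlamaIndex APIs): the idea is just to pull only what changed since the last run and upsert/delete that delta, rather than re-ingesting the whole corpus on a schedule.

```python
# Minimal sketch of an incremental ("time-bounded") sync, assuming a
# hypothetical source client with list_documents(updated_since=...) and a
# hypothetical index with upsert()/delete() -- not any framework's real API.
from datetime import datetime, timezone


def incremental_sync(source, index, last_sync: datetime) -> datetime:
    """Pull only documents changed since `last_sync` and apply them,
    instead of doing a full re-sync of the corpus every X minutes."""
    now = datetime.now(timezone.utc)

    # Time-bounded subset: only documents modified since the previous run.
    for doc in source.list_documents(updated_since=last_sync):
        if doc.deleted:
            # Deletions matter too, or the index keeps serving stale docs.
            index.delete(doc.id)
        else:
            index.upsert(doc.id, doc.text, metadata={"updated_at": doc.updated_at})

    # Persist this as the high-water mark for the next run. A push-based
    # variant would replace the polling loop with a webhook handler that
    # applies the same upsert/delete logic per event.
    return now
```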
Can you elaborate on the difficulties and nuances surrounding syncing? I'm not sure exactly what you mean. Do you mean keeping indexes up to date when new documents are provided? Or something else?