OpenLLM, in comparison, focuses more on building LLM apps for production. For example, its integration with LangChain + BentoML makes it easy to run multiple LLMs in parallel across multiple GPUs/nodes, chain LLMs with other types of AI/ML models, and deploy the entire pipeline on Kubernetes (via Yatai or BentoCloud).
At a glance, FastChat appears to pursue many of the same goals (running LLMs behind interoperable APIs):
https://github.com/lm-sys/FastChat