Hi, the llm-graph-builder uses a sophisticated pipeline, and I am wondering whether any benchmarking was done, e.g. to measure the value of the added complexity, tune hyperparameters, or compare against other RAG approaches? Thanks!
We have started running evaluations with RAGAS for a set of documents and the different retrievers (though not across hyperparameter sets), and we will also expose a first set of metrics in the UI in the next release.
But not comprehensively yet; that's still on the to-do list.
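For anyone curious what that looks like, here is a minimal sketch of scoring one retriever's outputs with RAGAS. The sample question/answer/contexts are invented for illustration, not output from this project, and RAGAS by default calls OpenAI as the judge, so `OPENAI_API_KEY` must be set:

```python
# Minimal RAGAS sketch, not the project's actual eval harness.
# Assumes: pip install ragas datasets, and OPENAI_API_KEY in the environment
# (RAGAS uses an OpenAI model by default to judge the metrics).
from datasets import Dataset
from ragas import evaluate
from ragas.metrics import faithfulness, answer_relevancy

# Placeholder data standing in for one retriever's real question/answer/context triples.
samples = Dataset.from_dict({
    "question": ["What does the ingestion pipeline produce?"],
    "answer": ["It extracts entities and relationships into a knowledge graph."],
    "contexts": [["The pipeline chunks documents and uses an LLM to extract a graph."]],
})

# Score this retriever's outputs; running the same evaluation per retriever
# makes the metric scores directly comparable across retrieval strategies.
result = evaluate(samples, metrics=[faithfulness, answer_relevancy])
print(result)
```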
On a less related point, it would be nice to have an easy dev setup where changes to the frontend code are reflected immediately and restarting the backend is automatic or simple. Maybe a devcontainer; the ability to run a project in a Codespace straight from GitHub makes it very accessible.
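For example, a hypothetical `.devcontainer/devcontainer.json` along these lines; the compose file path, the `backend` service name, and the `main:app` module are assumptions for illustration, not this repo's actual layout:

```jsonc
// Hypothetical sketch only: "docker-compose.yml", the "backend" service, and
// "main:app" are placeholders, not this repo's real configuration.
{
  "name": "llm-graph-builder-dev",
  "dockerComposeFile": "../docker-compose.yml",
  "service": "backend",
  "workspaceFolder": "/workspace",
  "forwardPorts": [8000, 5173],
  // --reload restarts the backend automatically whenever source files change;
  // a Vite/webpack dev server on the frontend would give instant reloads there.
  "postStartCommand": "uvicorn main:app --reload --host 0.0.0.0 --port 8000"
}
```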