What is the suggested best way to filter prompt responses by LLM? #160
-
I want to do a chain of prompts A->B->C, but if I do A(1,2)->B(1,2)->C(1,2) with two LLMs at each node, each step compounds the number of requests: 2, 4, 8. Is the best way to have two node paths, each path with a single LLM?
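To make the compounding concrete, here is a minimal sketch of the fan-out arithmetic, assuming each node sends every incoming response to every LLM attached to it:

```python
# Fan-out arithmetic: with 2 LLMs per node, every response from one node
# becomes 2 requests at the next node, so the count doubles each step.
def requests_per_step(num_llms: int, num_steps: int) -> list[int]:
    counts = []
    responses = 1  # one initial prompt feeds node A
    for _ in range(num_steps):
        requests = responses * num_llms  # every response goes to every LLM
        counts.append(requests)
        responses = requests  # each request yields one response for the next node
    return counts

print(requests_per_step(2, 3))       # [2, 4, 8]
print(sum(requests_per_step(2, 3)))  # 14 total requests for A -> B -> C
```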
Replies: 1 comment 2 replies
-
You're right, this is hard to do and I've been meaning to implement this. Chat Turns lets you do something like this, but it doesn't work at the moment for Prompt Nodes (the "continue using the same LLM" toggle in Chat Turns).
Currently, you can work around this by having parallel chains, with one model on each chain. Then, compare at the end. A sketch of the idea is below.
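Outside the ChainForge UI, the parallel-chain workaround looks roughly like this. Note that `query()` is a hypothetical stand-in for whatever client you use to call each model, and the model names and prompts are just examples, not anything built into the tool:

```python
# Sketch of the parallel-chain workaround: one sequential A -> B -> C chain
# per model, so requests grow linearly (3 per model, 6 total for two models)
# instead of doubling at each step (2 + 4 + 8 = 14).

def query(model: str, prompt: str) -> str:
    # Hypothetical stand-in: replace with a real call to your LLM provider.
    return f"[{model}] response to: {prompt}"

PROMPTS = [
    "A: summarize the input: {x}",
    "B: critique the summary: {x}",
    "C: rewrite based on the critique: {x}",
]

def run_chain(model: str, initial_input: str) -> str:
    x = initial_input
    for template in PROMPTS:
        x = query(model, template.format(x=x))  # feed each output forward
    return x

# One chain per model; compare the final outputs at the end.
results = {m: run_chain(m, "some input") for m in ["gpt-4", "claude-2"]}
for model, output in results.items():
    print(model, "->", output)
```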
As long as the input(s) are the same, the prompt variables should work similarly in inspectors and evaluators.
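For instance, if every chain was fed the same inputs, the final responses can be lined up by that shared input for side-by-side comparison. This is a generic sketch of that grouping, not ChainForge's internal data model:

```python
# Group final responses by their shared input so each model's answer to the
# same input can be inspected or scored side by side.
from collections import defaultdict

responses = [
    {"input": "question 1", "model": "gpt-4",    "text": "first answer"},
    {"input": "question 1", "model": "claude-2", "text": "second answer"},
    {"input": "question 2", "model": "gpt-4",    "text": "third answer"},
    {"input": "question 2", "model": "claude-2", "text": "fourth answer"},
]

by_input = defaultdict(dict)
for r in responses:
    by_input[r["input"]][r["model"]] = r["text"]

for inp, per_model in by_input.items():
    print(inp, per_model)  # compare or score the models' outputs per input
```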