v0.1.3: Model Settings (+ more models) #58
ianarawjo announced in Announcements
Proud to announce we now have model settings in ChainForge. 🥳
You can now compare across different versions of the same model, in addition to nicknaming models and choosing more specific models.
Full changelog below.
More supported models 🤖
Along with model settings, we now have support for all OpenAI, Anthropic, Google PaLM (chat and text), and Dalai-hosted models. For instance, you can now compare Llama.65B to PaLM text completions, if you were so inclined. For the full list, see models.py.
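In spirit, the supported-model list boils down to a registry mapping each provider to its selectable base models. Here is a hypothetical sketch of that idea (the structure and helper below are illustrative, not ChainForge's actual models.py):

```python
# Hypothetical registry of providers -> selectable base models.
# Model identifiers are examples drawn from this release's supported set.
SUPPORTED_MODELS = {
    "OpenAI": ["gpt-3.5-turbo"],
    "Anthropic": ["claude-v1"],
    "Google PaLM": ["text-bison-001", "chat-bison-001"],
    "Dalai": ["llama.7B", "llama.13B", "llama.30B", "llama.65B"],
}

def find_provider(model_id: str) -> str:
    """Return the provider hosting `model_id`, or raise if it is unsupported."""
    for provider, models in SUPPORTED_MODELS.items():
        if model_id in models:
            return provider
    raise ValueError(f"Unsupported model: {model_id}")
```

A lookup like `find_provider("llama.65B")` would then route the query to the Dalai backend, while `find_provider("text-bison-001")` routes to Google PaLM.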
Here is a comparison of Google PaLM's `text-bison` and `chat-bison` on the same prompt:
Customizable model settings (and emojis! 😑)
Once you add a model to a `PromptNode`, you can tap the 'settings' icon on the `PromptNode` to bring up a form with all settings for that base model. You can adjust the exact model used (for instance, `text-bison-001` in PaLM, or Dalai-hosted `llama.30B`).

Temperature appears next to model names by default. For ease of reference, it is displayed on a sliding color scale from cyan (`#00ffff`, coldest) through violet (`#ff00ff`, lukewarm) to red (`#ff0000`, hottest). The percentage respects the min and max temperature settings of individual models.

You can now also nickname models in
`PromptNode`s. Names must be unique, and each nickname will appear elsewhere in ChainForge (e.g., in plots). You can also set the emoji used. For instance, here is a comparison between two ChatGPT models at different temperatures, which I've renamed `hotgpt` and `coldgpt`, with the emojis 🔥 and 🥶:

Note about importing previous flows
Unfortunately, this code rewrite involved a breaking change to how flows are imported and exported (the `.cforge` file format). You may still be able to import old flows, but you will need to re-populate each model list and re-query LLMs. I had hoped to avoid this, but here it was necessary in order to store model settings information and rework how the backend caches responses.

Note about Dalai-hosted models
Currently, you cannot query multiple Dalai models or settings at once, since a locally run model can only handle one request at a time. We're working on fixing this for the next minor release; for now, choose one model at a time. If you want more than one, add it to the list and re-query the prompt node (it will reuse the cached responses from the first Dalai model).
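Conceptually, the eventual fix amounts to serializing requests to the single local model rather than issuing them in parallel. A minimal sketch of that idea, using an asyncio lock (the names and structure below are illustrative, not ChainForge's actual code):

```python
import asyncio

class LocalModelClient:
    """Wraps a locally hosted model that can serve only one request at a time."""

    def __init__(self, name: str):
        self.name = name
        self._lock = asyncio.Lock()
        self._in_flight = 0  # requests currently inside the critical section

    async def query(self, prompt: str) -> str:
        async with self._lock:  # serialize: at most one request at a time
            self._in_flight += 1
            assert self._in_flight == 1, "concurrent access to local model"
            await asyncio.sleep(0)  # stand-in for the actual generation call
            self._in_flight -= 1
            return f"{self.name} response to: {prompt}"

async def main():
    client = LocalModelClient("llama.30B")
    # Fire several prompts "concurrently"; the lock serializes them safely.
    return await asyncio.gather(*(client.query(p) for p in ["a", "b", "c"]))

results = asyncio.run(main())
```

The callers can still be written as if queries ran in parallel; the lock simply queues them behind one another so the local model never sees overlapping requests.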
Encounter any bugs?
Open an Issue or respond to the Discussion about this release! 👍
This discussion was created from the release v0.1.3: Model Settings (+ more models).