diff --git a/README.md b/README.md
index 99b6f15..0af26bd 100644
--- a/README.md
+++ b/README.md
@@ -116,13 +116,21 @@ The CLI will prompt you to input instructions interactively:
 
 You can configure the demo by specifying the following parameters:
 
-- `--aggregator`: The primary model used for final response generation.
-- `--reference_models`: List of models used as references.
+- `--model`: The primary model used for final response generation.
+- `--reference-models`: Models used as references.
 - `--temperature`: Controls the randomness of the response generation.
-- `--max_tokens`: Maximum number of tokens in the response.
+- `--max-tokens`: Maximum number of tokens in the response.
 - `--rounds`: Number of rounds to process the input for refinement. (num rounds == num of MoA layers - 1)
-- `--num_proc`: Number of processes to run in parallel for faster execution.
-- `--multi_turn`: Boolean to toggle multi-turn interaction capability.
+- `--num-proc`: Number of processes to run in parallel for faster execution.
+- `--multi-turn`: Boolean to toggle multi-turn interaction capability.
+
+Specify `--reference-models` multiple times to use multiple models as references. For example:
+
+```bash
+# Specify multiple reference models
+python bot.py --reference-models "mistralai/Mixtral-8x22B-Instruct-v0.1" --reference-models "Qwen/Qwen2-72B-Instruct"
+```
+
 ## Evaluation
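
For context, a sketch of a fuller `bot.py` invocation combining the renamed flags from this hunk. The flag names and model identifiers come from the diff above; the numeric values are illustrative assumptions, not documented defaults, and `--multi-turn` is omitted because the diff does not show how its boolean value is passed.

```bash
# Illustrative invocation using the renamed flags (values are assumptions, not defaults)
python bot.py \
  --model "Qwen/Qwen2-72B-Instruct" \
  --reference-models "mistralai/Mixtral-8x22B-Instruct-v0.1" \
  --reference-models "Qwen/Qwen2-72B-Instruct" \
  --temperature 0.7 \
  --max-tokens 512 \
  --rounds 1 \
  --num-proc 6
```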