# 007-Extra Chapter: What is the Model Configured in AutoCoder Really Used For?

The simplest configuration for AutoCoder is as follows:

```yaml
source_dir: /tmp/t-py
target_file: /home/winubuntu/projects/ByzerRawCopilot/output.txt

project_type: py

query: >
  Modify server.py: after the line app = FastAPI(),
  add the initialization code for the ray connection.
```

These four settings specify the project directory, the file where AutoCoder writes the prompt it generates from your query, the project type, and your requirement itself.

At this point, AutoCoder does not require a large model at all. Now let's look at the parameters that do need one.

## urls

You can specify urls to let AutoCoder consult some documents. For this parameter the model is optional: if a model is configured, it is used to clean the crawled pages, which gives better results; if not, AutoCoder simply strips the HTML and keeps the text, which may leave a lot of noise.
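A minimal sketch of such a configuration (the URL below is purely illustrative):

```yaml
urls: https://example.com/docs/some-library
model: qianwen_chat   # optional here: used to clean the crawled pages
```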

## skip_build_index

If indexing is enabled, a model is mandatory: building the index requires the model to analyze the project files, and answering a query requires it to filter the index for the relevant ones.
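A sketch of an index-enabled configuration (assuming, as the parameter name suggests, that setting `skip_build_index` to `false` turns indexing on):

```yaml
skip_build_index: false   # build and use the index
model: qianwen_chat       # required: analyzes files and filters index hits
```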

## search_engine, search_engine_token

For search-engine support the model is also essential: the documents returned by the search must be filtered and scored, so that results that are too numerous, too long, or simply irrelevant do not degrade the final answer.
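A hedged example (the `bing` value and the token are placeholders; check which search engines your AutoCoder version actually supports):

```yaml
search_engine: bing              # placeholder engine name
search_engine_token: YOUR_TOKEN  # placeholder credential
model: qianwen_chat              # required: filters and scores the results
```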

## human_as_model

With this parameter, the configured model only performs the basic functions, while the code-generation part is handed over to a Web-based chat model.
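For example:

```yaml
human_as_model: true   # code generation goes through a Web chat model
model: qianwen_chat    # still needed for the basic tasks
```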

## execute

With this parameter, AutoCoder has the model execute the prompt directly and writes the result into target_file, so here, too, a model is required.
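For example:

```yaml
execute: true         # have the model run the generated prompt
model: qianwen_chat   # required: produces the result written to target_file
```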

Here is a typical model configuration:

```yaml
model: qianwen_chat
model_max_length: 2000
model_max_input_length: 100000
anti_quota_limit: 5
```

First, configure the instance name of the model, i.e. the model you deployed through Byzer-LLM. Then, according to your model's limits, configure the maximum number of tokens it may generate (model_max_length) and the maximum number of input tokens it accepts (model_max_input_length).

In addition, because many models are rate-limited, anti_quota_limit lets you set how many seconds to wait after each call before making the next one.

## Conclusion

Configuring a model makes AutoCoder much smarter. If your project is already large, you must use a model so that indexing can be enabled.

As for model requirements: make sure the model's context window is larger than 32K tokens to guarantee a good experience in production.