Update README.md

Bo Pan 2024-04-07 22:49:32 +08:00 committed by GitHub
parent 4116c993b0
commit 315102b03b


@@ -20,7 +20,7 @@ git clone https://github.com/AgentCoord/AgentCoord.git
cd AgentCoord
```
-Step 2: Config LLM (see [LLM configuration (use docker)](README.md#llm-configuration-if-use-docker)):
+Step 2: Config LLM (see [LLM configuration (use docker)](README.md#llm-configuration-use-docker)):
Step 3: Start the servers
```bash
@@ -50,7 +50,7 @@ Step 4: Open http://localhost:8080/ to use the system.
You can set the configuration (i.e., API base, API key, model name) for the default LLM in ./docker-compose.yml. Currently, we only support OpenAI's LLMs as the default model. We recommend using gpt-4-turbo-preview as the default model (WARNING: the execution process of multiple agents may consume a significant number of tokens). You can switch to a fast mode that uses the Mixtral 8×7B model, hardware-accelerated by [Groq](https://groq.com/), for initial strategy generation, which strikes a balance between response quality and efficiency. To enable it, set the FAST_DESIGN_MODE field in the yaml file to True and fill the GROQ_API_KEY field with your [Groq](https://wow.groq.com/) API key.
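A minimal sketch of what the relevant section of ./docker-compose.yml might look like. FAST_DESIGN_MODE and GROQ_API_KEY are the field names the README gives; the service name and the other variable names here are illustrative assumptions, so check the shipped file for the actual keys:

```yaml
# Hypothetical excerpt of ./docker-compose.yml.
# Only FAST_DESIGN_MODE and GROQ_API_KEY are confirmed field names;
# the rest are illustrative placeholders.
services:
  backend:
    environment:
      API_BASE: "https://api.openai.com/v1"    # API base for the default LLM
      API_KEY: "sk-..."                        # your OpenAI API key
      MODEL_NAME: "gpt-4-turbo-preview"        # recommended default model
      FAST_DESIGN_MODE: "True"                 # enable the Groq-backed fast mode
      GROQ_API_KEY: "gsk_..."                  # required when FAST_DESIGN_MODE is True
```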
### LLM configuration (install on your machine)
-You can set the configuration in ./backend/config/config.yaml. See [LLM configuration (use docker)](#llm-configuration-if-use-docker) for configuration explanations.
+You can set the configuration in ./backend/config/config.yaml. See [LLM configuration (use docker)](#llm-configuration-use-docker) for configuration explanations.
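For a local install, the same settings live in ./backend/config/config.yaml. A minimal sketch, assuming key names that mirror the docker-compose fields above (the actual schema may differ; consult the file shipped in the repository):

```yaml
# Hypothetical ./backend/config/config.yaml -- key names are assumptions
# mirroring the docker-compose fields above.
api_base: "https://api.openai.com/v1"
api_key: "sk-..."
model_name: "gpt-4-turbo-preview"
fast_design_mode: false   # set true to use the Groq-backed fast mode
groq_api_key: ""          # required when fast_design_mode is true
```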
### Agent configuration
Currently, we support configuring agents via [role-prompting](https://arxiv.org/abs/2305.14688). You can customize your agents by editing the role prompts in AgentRepo\agentBoard_v1.json. We plan to support more ways to customize agents (e.g., RAG support, or a unified wrapper for customized agents) in the future.
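To illustrate the idea, a hypothetical entry in AgentRepo\agentBoard_v1.json might look like the following; the field names here are assumptions for illustration, not the file's confirmed schema:

```json
{
  "name": "DataAnalyst",
  "rolePrompt": "You are a meticulous data analyst. Given a task, you propose analyses, flag data-quality issues, and summarize findings in plain language."
}
```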