Update README.md

Bo Pan 2024-04-07 22:47:11 +08:00 committed by GitHub
parent 6e504e4545
commit 269c318625


Step 1: Clone the project:
```bash
git clone https://github.com/AgentCoord/AgentCoord.git
cd AgentCoord
```
Step 2: Config LLM (see [LLM configuration (use docker)](README.md#llm-configuration-use-docker)):
Step 3: Start the servers:
```bash
docker-compose up
```
Step 4: Open http://localhost:8080/ to use the system.
### Install on your machine
Step 1: Clone the project:
```bash
git clone https://github.com/AgentCoord/AgentCoord.git
cd AgentCoord
```
Step 2: Config LLM (see [LLM configuration (install on your machine)](README.md#llm-configuration-install-on-your-machine)):
Step 3: Install required packages, then run the backend and frontend servers separately (see the README files for the [frontend](frontend/README.md#installation) and the [backend](backend/README.md#installation)).
Step 4: Open http://localhost:8080/ to use the system.
## Configuration
### LLM configuration (use docker)
You can set the configuration (i.e., API base, API key, and model name) for the default LLM in ./docker-compose.yml. Currently, we only support OpenAI's LLMs as the default model. We recommend gpt-4-turbo-preview as the default model (WARNING: the execution process of multiple agents may consume a significant number of tokens). To balance response quality and efficiency, you can switch to a fast mode that uses the Mixtral 8x7B model with hardware acceleration by [Groq](https://groq.com/) for the first round of strategy generation. To enable it, set the FAST_DESIGN_MODE field in the yaml file to True and fill the GROQ_API_KEY field with your [Groq](https://wow.groq.com/) API key.
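As a rough sketch of the fields described above (the service name and every key except FAST_DESIGN_MODE and GROQ_API_KEY are assumptions here; check the actual ./docker-compose.yml for the real names), the relevant part of the compose file might look like:

```yaml
# Sketch only -- key names other than FAST_DESIGN_MODE and GROQ_API_KEY
# are assumptions; consult ./docker-compose.yml for the real fields.
services:
  backend:
    environment:
      - OPENAI_API_KEY=sk-...                        # your OpenAI API key
      - OPENAI_API_BASE=https://api.openai.com/v1    # assumed key name
      - MODEL_NAME=gpt-4-turbo-preview               # assumed key name
      - FAST_DESIGN_MODE=True                        # enable the Groq-accelerated fast mode
      - GROQ_API_KEY=gsk_...                         # required when FAST_DESIGN_MODE is True
```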
### LLM configuration (install on your machine)
You can set the configuration in ./backend/config/config.yaml. See [LLM configuration (use docker)](#llm-configuration-use-docker) for explanations of the configuration fields.
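A minimal sketch of what ./backend/config/config.yaml might contain (the key names below are assumptions; consult the file shipped in the repository for the exact fields):

```yaml
# Hypothetical key names -- check ./backend/config/config.yaml for the real ones
OPENAI_API_BASE: "https://api.openai.com/v1"
OPENAI_API_KEY: "sk-..."
MODEL_NAME: "gpt-4-turbo-preview"
```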
### Agent configuration
Currently, we support configuring agents by [role-prompting](https://arxiv.org/abs/2305.14688). You can customize your agents by changing the role prompts in AgentRepo/agentBoard_v1.json. We plan to support more ways to customize agents (e.g., RAG support, or a unified wrapper for customized agents) in the future.
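The actual schema of AgentRepo/agentBoard_v1.json is defined by the repository; purely as a hypothetical illustration (field names are not the real schema), a role-prompted agent entry might look something like:

```json
{
  "name": "DataAnalyst",
  "rolePrompt": "You are a meticulous data analyst. Given raw data, you extract key trends and report them concisely."
}
```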