Merge branch 'main' of https://github.com/AgentCoord/AgentCoord
commit 631aa6dcdc
@ -56,7 +56,7 @@ You can set the configuration in ./backend/config/config.yaml. See [LLM configur
Currently, we support configuring agents via [role-prompting](https://arxiv.org/abs/2305.14688). You can customize your agents by editing the role prompts in `AgentRepo/agentBoard_v1.json`. We plan to support more ways to customize agents (e.g., RAG support, or a unified wrapper for customized agents) in the future.
## More Papers & Projects for LLM-based Multi-Agent Collaboration
If you’re interested in LLM-based multi-agent collaboration and want more papers & projects for reference, you may check out the [corpus](https://docs.google.com/spreadsheets/d/1HSl4AqIVXhUjZh0pRhz-brzfA7evh20Q/edit?usp=sharing&ouid=112400145401551512954&rtpof=true&sd=true) collected by us. Any contribution to the corpus is also welcome.
@ -1,38 +1,21 @@
# AgentCoord: Visually Exploring Coordination Strategy for LLM-based Multi-Agent Collaboration
<p align="center">
<a ><img src="https://github.com/bopan3/AgentCoord_Backend/assets/21981916/bbad2f66-368f-488a-af36-72e79fdb6805" alt=" AgentCoord: Visually Exploring Coordination Strategy for LLM-based Multi-Agent Collaboration" width="200px"></a>
</p>
AgentCoord is an experimental open-source system that helps general users design coordination strategies for multiple LLM-based agents (research paper forthcoming).
# AgentCoord Backend
This is the backend for the AgentCoord project. The root project is located [here](https://github.com/AgentCoord/AgentCoord). See [LLM configuration](README.md#llm-configuration) before you launch the backend server.
## System Usage
<a href="https://youtu.be/s56rHJx-eqY" target="_blank"><img src="https://github.com/bopan3/AgentCoord_Backend/assets/21981916/0d907e64-2a25-4bdf-977d-e90197ab1aab" alt="System Usage Video" width="800" border="5" /></a>
## Launch
### Installation
You can launch the backend by simply using `docker-compose` in the root directory of the project.

Or, you can launch the backend manually by following the steps below. First, install the dependencies:
```bash
git clone https://github.com/AgentCoord/AgentCoord.git
cd AgentCoord
pip install -r requirements.txt
```
### Configuration
#### LLM configuration
You can set the configuration (i.e., API base, API key, model name, max tokens, responses per minute) for the default LLM in `config/config.yaml`. Currently, we only support OpenAI’s LLMs as the default model. We recommend using `gpt-4-0125-preview` as the default model (WARNING: the execution process of multiple agents may consume a significant number of tokens).

You can switch to a fast mode that uses the Mistral 8×7B model with hardware acceleration by [Groq](https://groq.com/) for the first pass of strategy generation, striking a balance between response quality and efficiency. To enable it, set the FAST_DESIGN_MODE field in the YAML file to True and fill the GROQ_API_KEY field with your [Groq](https://wow.groq.com/) API key.
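Putting the two paragraphs above together, `config/config.yaml` might look like the sketch below. Only `FAST_DESIGN_MODE` and `GROQ_API_KEY` are named in this README; the other key names are illustrative placeholders, so check the shipped config file for the exact schema.

```yaml
# Illustrative sketch of config/config.yaml -- key names other than
# FAST_DESIGN_MODE and GROQ_API_KEY are placeholders, not the real schema.
API_BASE: "https://api.openai.com/v1"    # OpenAI-compatible API base
API_KEY: "sk-..."                        # your OpenAI API key
MODEL: "gpt-4-0125-preview"              # recommended default model
MAX_TOKENS: 4096
RPM: 60                                  # responses per minute

# Optional fast mode: Mistral 8x7B served by Groq for the first pass
# of strategy generation.
FAST_DESIGN_MODE: True
GROQ_API_KEY: "gsk_..."                  # required when FAST_DESIGN_MODE is True
```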
#### Agent configuration
Currently, we support configuring agents via [role-prompting](https://arxiv.org/abs/2305.14688). You can customize your agents by editing the role prompts in `AgentRepo/agentBoard_v1.json`. We plan to support more ways to customize agents (e.g., RAG support, or a unified wrapper for customized agents) in the future.
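For illustration, a role-prompt entry in `AgentRepo/agentBoard_v1.json` might look like the following. The field names here are hypothetical — the README only says the file holds role prompts, so consult the actual file for its schema.

```json
[
  {
    "name": "Data Analyst",
    "rolePrompt": "You are a meticulous data analyst. Given raw data, you extract key patterns and present them clearly."
  },
  {
    "name": "Science Writer",
    "rolePrompt": "You are a science writer. You turn technical findings into accessible prose for a general audience."
  }
]
```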
### Launch
Execute the following command to launch the backend server (the default port is 8017; use `--port` to change it):

```bash
python server.py
```
## More Papers & Projects for LLM-based Multi-Agent Collaboration
If you’re interested in LLM-based multi-agent collaboration and want more papers & projects for reference, you may check out the [corpus](https://docs.google.com/spreadsheets/d/1HSl4AqIVXhUjZh0pRhz-brzfA7evh20Q/edit?usp=sharing&ouid=112400145401551512954&rtpof=true&sd=true) collected by us.