AgentCoord: Visually Exploring Coordination Strategy for LLM-based Multi-Agent Collaboration

AgentCoord is an experimental open-source system that helps general users design coordination strategies for multiple LLM-based agents (research paper forthcoming).

System Usage

System Usage Video

Installation

Run with Docker

If you have Docker and Docker Compose installed on your machine, we recommend running AgentCoord in Docker:

Step 1: Clone our project and start the servers:

git clone https://github.com/AgentCoord/AgentCoord.git 
cd AgentCoord
docker-compose up

Step 2: Open http://localhost:8080/ to use the system.

Install on your machine

If you want to install and run AgentCoord on your machine without using Docker:

Step 1: Clone the project

git clone https://github.com/AgentCoord/AgentCoord.git 
cd AgentCoord

Step 2: Install the required packages for the backend and frontend servers (see readme.md in the ./frontend and ./backend folders).

Step 3: Run the backend and frontend servers separately (see readme.md in the ./frontend and ./backend folders); a rough sketch of the typical flow is shown below.
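
As a rough illustration only: the authoritative commands are in the respective readme.md files, and the entry point and npm scripts below are assumptions, not verified commands.

# Backend (Python). The entry point is an assumption; check ./backend/readme.md.
cd backend
pip install -r requirements.txt
python main.py

# Frontend (TypeScript/Vue). The npm scripts are assumptions; check ./frontend/readme.md.
cd ../frontend
npm install
npm run dev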

Step 4: Open http://localhost:8080/ to use the system.

Configuration

LLM configuration

You can set the configuration (i.e., API base, API key, model name, max tokens, responses per minute) for the default LLM in config/config.yaml. Currently, we only support OpenAI's LLMs as the default model. We recommend using gpt-4-0125-preview as the default model (WARNING: the execution process of multiple agents may consume a significant number of tokens).
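
For orientation, here is a minimal sketch of what such a configuration might look like; the field names and values are illustrative assumptions, and the authoritative schema is the config/config.yaml shipped with the repository.

# Illustrative only; field names are assumptions, not the actual schema
OPENAI_API_BASE: "https://api.openai.com/v1"   # API base
OPENAI_API_KEY: "sk-..."                       # API key (placeholder value)
MODEL_NAME: "gpt-4-0125-preview"               # default model name
MAX_TOKENS: 4096                               # max tokens per response
RPM: 10                                        # responses per minute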

You can also enable a fast mode that uses the Mixtral 8×7B model, served with hardware acceleration by Groq, for the initial round of strategy generation, balancing response quality and efficiency. To enable it, set the FAST_DESIGN_MODE field in the YAML file to True and fill the GROQ_API_KEY field with your Groq API key.
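
The corresponding fast-mode entries, again as an illustrative sketch (only the two field names come from the description above):

FAST_DESIGN_MODE: True     # use the Groq-hosted Mixtral 8x7B model for initial strategy generation
GROQ_API_KEY: "gsk_..."    # your Groq API key (placeholder value)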

Agent configuration

Currently, we support configuring agents via role prompting. You can customize your agents by changing the role prompts in AgentRepo/agentBoard_v1.json. We plan to support more ways to customize agents (e.g., RAG support, or a unified wrapper for custom agents).
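
As a purely illustrative sketch of a role-prompt entry (the real schema of AgentRepo/agentBoard_v1.json may differ; the field names below are assumptions):

[
  {
    "Name": "Science Writer",
    "RolePrompt": "You are a science writer who explains technical findings in clear, engaging prose."
  }
]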

More Papers & Projects for LLM-based Multi-Agent Collaboration

If you're interested in LLM-based multi-agent collaboration and want more papers & projects for reference, you may check out the corpus we have collected:
