AgentCoord: Visually Exploring Coordination Strategy for LLM-based Multi-Agent Collaboration
System Usage
Installation
Install with Docker (Recommended)
If you have Docker and Docker Compose installed on your machine, we recommend running AgentCoord in Docker:
Step 1: Clone the project:
git clone https://github.com/AgentCoord/AgentCoord.git
cd AgentCoord
Step 2: Configure the LLM (see LLM configuration (use docker)):
Step 3: Start the servers
docker-compose up
Step 4: Open http://localhost:8080/ to use the system.
Install on your machine
If you want to install and run AgentCoord on your machine without using Docker:
Step 1: Clone the project
git clone https://github.com/AgentCoord/AgentCoord.git
cd AgentCoord
Step 2: Configure the LLM (see LLM configuration (install on your machine)):
Step 3: Install the required packages, then run the backend and frontend servers separately (see the readme.md files for the frontend and backend).
Step 4: Open http://localhost:8080/ to use the system.
Configuration
LLM configuration (use docker)
You can set the configuration for the default LLM (i.e., API base, API key, and model name) in ./docker-compose.yml. Currently, we only support OpenAI's LLMs as the default model. We recommend using gpt-4-turbo-preview as the default model (WARNING: the execution process of multiple agents may consume a significant number of tokens). To balance response quality and efficiency, you can switch on a fast mode that uses the Mixtral 8×7B model, hardware-accelerated by Groq, for the initial strategy generation. To enable it, set the FAST_DESIGN_MODE field in the YAML file to True and fill the GROQ_API_KEY field with your Groq API key.
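As a rough illustration, the relevant environment section of ./docker-compose.yml might look like the sketch below. The key names (OPENAI_API_BASE, OPENAI_API_KEY, MODEL_NAME) are assumptions for illustration; only FAST_DESIGN_MODE and GROQ_API_KEY are named in this document, so check the file shipped with the repository for the exact fields:

```yaml
# Hypothetical excerpt of ./docker-compose.yml -- key names other than
# FAST_DESIGN_MODE and GROQ_API_KEY are illustrative assumptions.
services:
  backend:
    environment:
      OPENAI_API_BASE: "https://api.openai.com/v1"   # API base
      OPENAI_API_KEY: "sk-your-key-here"             # your OpenAI API key
      MODEL_NAME: "gpt-4-turbo-preview"              # recommended default model
      FAST_DESIGN_MODE: "True"   # optional: fast mode (Mixtral 8x7B via Groq)
      GROQ_API_KEY: "your-groq-key"  # required when FAST_DESIGN_MODE is True
```

After editing the file, restart the containers with `docker-compose up` so the new values take effect.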
LLM configuration (install on your machine)
You can set the configuration in ./backend/config/config.yaml. See LLM configuration (use docker) for configuration explanations.
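For the non-Docker setup, the same settings live in ./backend/config/config.yaml. A minimal sketch, assuming the key names mirror the Docker fields (only FAST_DESIGN_MODE and GROQ_API_KEY are confirmed by this document; the rest are assumptions):

```yaml
# Hypothetical excerpt of ./backend/config/config.yaml -- key names
# other than FAST_DESIGN_MODE and GROQ_API_KEY are assumptions.
OPENAI_API_BASE: "https://api.openai.com/v1"
OPENAI_API_KEY: "sk-your-key-here"
MODEL_NAME: "gpt-4-turbo-preview"
FAST_DESIGN_MODE: false      # set to true to enable the Groq-backed fast mode
GROQ_API_KEY: ""             # fill in when FAST_DESIGN_MODE is true
```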
Agent configuration
Currently, we support configuring agents via role prompting. You can customize your agents by changing the role prompts in AgentRepo\agentBoard_v1.json. We plan to support more customization methods (e.g., RAG, or a unified wrapper for custom agents) in the future.
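To make role prompting concrete, an agent entry could look roughly like the following. This is a hypothetical sketch: the field names ("name", "rolePrompt") are not the actual schema of agentBoard_v1.json, which you should inspect before editing:

```json
{
  "name": "DataAnalyst",
  "rolePrompt": "You are a meticulous data analyst. Break each task into verifiable steps, cite the data you rely on, and state your uncertainty explicitly."
}
```

The idea of role prompting is simply that each agent's behavior is steered by a natural-language description of its role, so customizing an agent means rewriting that description.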
More Papers & Projects for LLM-based Multi-Agent Collaboration
If you're interested in LLM-based multi-agent collaboration and want more papers and projects for reference, you may check out the corpus we have collected. Contributions to the corpus are also welcome.
