AgentCoord: Visually Exploring Coordination Strategy for LLM-based Multi-Agent Collaboration
System Usage
Installation
Install with Docker (Recommended)
If you have Docker and Docker Compose installed on your machine, we recommend running AgentCoord in Docker:
Step 1: Clone the project:
git clone https://github.com/AgentCoord/AgentCoord.git
cd AgentCoord
Step 2: Configure the LLM (see LLM configuration (use docker)).
Step 3: Start the servers
docker-compose up
Step 4: Open http://localhost:8080/ to use the system.
Install on your machine
If you want to install and run AgentCoord on your machine without Docker:
Step 1: Clone the project
git clone https://github.com/AgentCoord/AgentCoord.git
cd AgentCoord
Step 2: Configure the LLM (see LLM configuration (install on your machine)).
Step 3: Install the required packages, then run the backend and frontend servers separately (see the readme.md files for the frontend and backend).
Step 4: Open http://localhost:8080/ to use the system.
Configuration
LLM configuration (use docker)
You can set the configuration (i.e., API base, API key, and model name) for the default LLM in ./docker-compose.yml. Currently, we only support OpenAI's LLMs as the default model. We recommend using gpt-4-turbo-preview as the default model (WARNING: the execution process of multiple agents may consume a significant number of tokens). To balance response quality and efficiency, you can enable a fast mode that uses the Mixtral 8×7B model, served with hardware acceleration by Groq, for the initial pass of strategy generation. To do this, set the FAST_DESIGN_MODE field in the YAML file to True and fill the GROQ_API_KEY field with your Groq API key.
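As a rough illustration, the relevant part of ./docker-compose.yml might look like the sketch below. Only FAST_DESIGN_MODE and GROQ_API_KEY are named in this document; the service name and the other key names (OPENAI_API_BASE, OPENAI_API_KEY, MODEL_NAME) are assumptions, so check the shipped file for the exact keys.

```yaml
# Hypothetical sketch, not the actual shipped docker-compose.yml.
services:
  backend:
    environment:
      OPENAI_API_BASE: "https://api.openai.com/v1"  # API base (assumed key name)
      OPENAI_API_KEY: "sk-..."                      # API key (assumed key name)
      MODEL_NAME: "gpt-4-turbo-preview"             # default model (assumed key name)
      FAST_DESIGN_MODE: "True"   # enable the Groq-backed fast mode
      GROQ_API_KEY: "gsk-..."    # required when FAST_DESIGN_MODE is True
```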
LLM configuration (install on your machine)
You can set the configuration in ./backend/config/config.yaml. See LLM configuration (use docker) for configuration explanations.
Agent configuration
Currently, we support configuring agents via role prompting. You can customize your agents by editing the role prompts in AgentRepo/agentBoard_v1.json. We plan to support more ways to customize agents (e.g., RAG support, or a unified wrapper for custom agents) in the future.
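The schema of agentBoard_v1.json is not specified here; as an illustrative assumption only, a role-prompted agent entry might look something like the following (both field names are hypothetical):

```json
{
  "name": "DataAnalyst",
  "rolePrompt": "You are a meticulous data analyst. When assigned a step in a collaboration plan, reason carefully over the provided inputs and report your findings concisely."
}
```

To add or adjust an agent, you would edit the corresponding entry's role prompt and restart the backend so the repository is reloaded.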
More Papers & Projects for LLM-based Multi-Agent Collaboration
If you're interested in LLM-based multi-agent collaboration and want more papers & projects for reference, check out the corpus we have collected. Contributions to the corpus are also welcome.
