🌐 Multilingual Versions: This project maintains an English-only branch, `i18n-en`, for international users. Switch via: `git checkout i18n-en`
https://github.com/Avalon-467/Teamclaw
OpenAI-compatible AI Agent with a built-in programmable multi-expert orchestration engine and one-click public deployment.
Skill Mode: This repository is designed to run and be documented in a Skill-oriented workflow (see SKILL.md).
TeamClaw exposes a standard /v1/chat/completions endpoint that any OpenAI-compatible client can call directly. Internally it integrates the OASIS orchestration engine — using YAML schedule definitions to flexibly compose expert roles, speaking orders, and collaboration patterns, breaking complex problems into multi-perspective debates, voting consensus, and automated summaries.
```bash
curl http://127.0.0.1:51200/v1/chat/completions \
  -H "Authorization: Bearer <user>:<password>" \
  -H "Content-Type: application/json" \
  -d '{"model":"mini-timebot","messages":[{"role":"user","content":"Hello"}],"stream":true}'
```
This is the core design of the entire project.
Traditional multi-agent systems are either fully parallel or fixed pipelines, unable to adapt to different scenarios. The OASIS engine uses a concise YAML schedule definition that lets users (or the AI Agent itself) precisely orchestrate every step of expert collaboration:
```yaml
# Example: Creative and Critical experts clash first, then everyone summarizes
version: 1
repeat: true
plan:
  - expert: "Creative Expert"   # Single expert speaks in sequence
  - expert: "Critical Expert"   # Immediately rebuts
  - parallel:                   # Multiple experts speak in parallel
      - "Economist"
      - "Legal Expert"
  - all_experts: true           # All participants speak simultaneously
```
| Dimension | Control | Description |
|---|---|---|
| Who participates | `expert_tags` | Select from 10+ built-in experts plus a user-defined custom expert pool |
| How they discuss | `schedule_yaml` | 4 step types freely combined (sequential / parallel / all / manual injection) |
| How deep | `max_rounds` + `use_bot_session` | Control round depth; choose stateful (memory + tools) or stateless (lightweight and fast) |
| Step Type | Format | Effect |
|---|---|---|
| `expert` | `- expert: "Name"` | Single expert speaks in sequence |
| `parallel` | `- parallel: ["A", "B"]` | Multiple experts speak simultaneously |
| `all_experts` | `- all_experts: true` | All selected experts speak at once |
| `manual` | `- manual: {author: "Host", content: "..."}` | Inject fixed content (bypasses the LLM) |
Set `repeat: true` to loop the plan each round; `repeat: false` executes the plan steps once and then ends.
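For a one-shot pipeline, the same format applies with `repeat: false`; a minimal sketch (the expert names are illustrative):

```yaml
# Runs each step exactly once, then ends
version: 1
repeat: false
plan:
  - expert: "Data Analyst"
  - all_experts: true
```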
10 Built-in Public Experts:
| Expert | Tag | Temp | Role |
|---|---|---|---|
| 🎨 Creative Expert | `creative` | 0.9 | Finds opportunities, proposes visionary ideas |
| 🔍 Critical Expert | `critical` | 0.3 | Spots risks, flaws, and logical fallacies |
| 📊 Data Analyst | `data` | 0.5 | Data-driven, speaks with facts |
| 🎯 Synthesis Advisor | `synthesis` | 0.5 | Integrates perspectives, proposes pragmatic plans |
| 📈 Economist | `economist` | 0.5 | Macro/micro economic perspective |
| ⚖️ Legal Expert | `lawyer` | 0.3 | Compliance and legal risk analysis |
| 💰 Cost Controller | `cost_controller` | 0.4 | Budget-sensitive, focused on cost reduction |
| 📊 Revenue Planner | `revenue_planner` | 0.6 | Revenue maximization strategy |
| 🚀 Entrepreneur | `entrepreneur` | 0.8 | Hands-on 0-to-1 perspective |
| 🧑 Common Person | `common_person` | 0.7 | Down-to-earth common-sense feedback |
User-Defined Custom Experts: Each user can create private experts (name, tag, persona, temperature) through the Agent, mixed with public experts, isolated per user.
Each expert per round:
Engine auto-executes:
| Mode | `use_bot_session` | Features |
|---|---|---|
| Stateless (default) | `False` | Lightweight and fast; independent LLM call per round; no memory, no tools |
| Stateful | `True` | Each expert gets a persistent session with memory, can invoke search/file/code tools; sessions visible in the frontend |
TeamClaw integrates with popular messaging platforms, allowing users to interact with the Agent through Telegram or QQ:
Features:
Setup:
1. Set `TELEGRAM_BOT_TOKEN` in `config/.env`
2. Run `python chatbot/telegrambot.py`

User commands:
Features:
Setup:
1. Set `QQ_APP_ID` and `QQ_BOT_SECRET` in `config/.env`
2. Run `python chatbot/QQbot.py`

TeamClaw provides sophisticated user-Agent interaction features:
Each user can maintain a personalized profile that the Agent references:
`data/user_files/{username}/user_profile.txt`
Tell Agent: "Remember that I'm a Python developer interested in AI" → Profile saved and injected into future conversations.
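Conceptually, the injection step amounts to prepending the saved profile to the system prompt before each conversation. A minimal sketch of that idea (the function name and prompt wording are illustrative, not TeamClaw's actual code):

```python
from pathlib import Path

def inject_profile(system_prompt: str, username: str,
                   root: str = "data/user_files") -> str:
    """Append the user's saved profile (if any) to the system prompt."""
    profile_path = Path(root) / username / "user_profile.txt"
    if profile_path.exists():
        profile = profile_path.read_text(encoding="utf-8").strip()
        if profile:
            return f"{system_prompt}\n\n[User profile]\n{profile}"
    # No profile on disk: return the prompt unchanged
    return system_prompt
```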
Users can define custom skills (reusable prompt templates):
`data/user_files/{username}/skills_manifest.json`:

```json
[
  {
    "name": "Code Reviewer",
    "description": "Review code for best practices",
    "file": "code_reviewer.md"
  }
]
```
Agent shows available skills in each session and can execute them on demand.
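Reading the manifest is straightforward; a hedged sketch of what skill discovery could look like (the helper name is illustrative, not TeamClaw's actual code):

```python
import json
from pathlib import Path

def list_skills(username: str, root: str = "data/user_files") -> list:
    """Return the user's skill entries from skills_manifest.json
    (an empty list if the manifest is absent)."""
    manifest = Path(root) / username / "skills_manifest.json"
    if not manifest.exists():
        return []
    return json.loads(manifest.read_text(encoding="utf-8"))
```

Each entry's `file` field points at the prompt template (e.g. `code_reviewer.md`) that the Agent loads when the skill is invoked.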
External systems can inject custom tools via OpenAI-compatible API:
```python
# Caller sends tool definitions
response = client.chat.completions.create(
    model="mini-timebot",
    messages=[...],
    tools=[{
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get current weather",
            "parameters": {...}
        }
    }]
)

# Agent may call the tool → returns tool_calls to the caller
# Caller executes the tool and sends the result back
```
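This round trip follows the standard OpenAI tool-calling protocol: when a response contains `tool_calls`, the caller executes each tool locally, then appends a `role: "tool"` message per call before re-invoking the API. A sketch of the caller side (the `get_weather` implementation is a stand-in):

```python
import json

def run_tool_calls(assistant_message: dict, tool_impls: dict) -> list:
    """Execute each requested tool locally and build the follow-up
    messages to append before the next /v1/chat/completions call."""
    follow_ups = [assistant_message]  # the assistant turn is echoed back first
    for call in assistant_message.get("tool_calls", []):
        fn = call["function"]
        result = tool_impls[fn["name"]](**json.loads(fn["arguments"]))
        follow_ups.append({
            "role": "tool",
            "tool_call_id": call["id"],
            "content": json.dumps(result),
        })
    return follow_ups

# Stand-in local tool implementation
def get_weather(city: str) -> dict:
    return {"city": city, "forecast": "sunny"}
```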
Run a single command to expose the entire service to the internet, with zero configuration and no account needed:

```bash
python scripts/tunnel.py
```

- Provides a free `*.trycloudflare.com` domain
- Auto-downloads `cloudflared` if missing → starts tunnels → captures public URLs → writes them to `.env`
- Can also be triggered from `run.sh` ("Deploy to public network? y/N")

The Agent has both "convene" and "participate" capabilities:
| | 🏠 Internal OASIS (Convene) | 🌐 External OASIS (Participate) |
|---|---|---|
| Initiator | Agent calls `post_to_oasis` | External system sends a message via the OpenAI-compatible API |
| Participants | Local expert pool | Multiple independent Agent nodes |
| Trigger | User question → Agent decides | External request via `/v1/chat/completions` |
| Result | Conclusion returned to the user | Agent opinion returned in standard OpenAI response format |
```text
Browser (Chat UI + Login + OASIS Panel)
        │ HTTP :51209
        ▼
front.py (Flask + Session) ── Frontend proxy, login/chat pages, session management
        │ HTTP :51200
        ▼
mainagent.py (FastAPI + LangGraph) ── OpenAI-compatible API + Core Agent
        │ stdio (MCP)                 (External OASIS also via OpenAI API)
        ├── mcp_scheduler.py   ── Alarm/scheduled task management
        │        │ HTTP :51201
        │        ▼
        ├── time.py (APScheduler) ── Scheduling center
        ├── mcp_search.py      ── DuckDuckGo web search
        ├── mcp_filemanager.py ── User file management (sandboxed)
        ├── mcp_oasis.py       ── OASIS discussion + expert management
        │        │ HTTP :51202
        │        ▼
        │   oasis/server.py    ── OASIS forum service (engine + expert pool)
        ├── mcp_bark.py        ── Bark mobile push notifications
        ├── mcp_telegram.py    ── Telegram push + whitelist sync
        └── mcp_commander.py   ── Sandboxed command/code execution

┌─────────────────────────────────────────────────────────────────┐
│                     External Bot Services                       │
├─────────────────────────────────────────────────────────────────┤
│  telegrambot.py ── Telegram Bot (text/image/voice)              │
│  QQbot.py       ── QQ Bot (C2C/Group, SILK transcoding)         │
│                                                                 │
│  Both bots call mainagent.py via OpenAI-compatible API          │
│  with user-isolated sessions (INTERNAL_TOKEN:user:BOT)          │
└─────────────────────────────────────────────────────────────────┘
```
| Service | Port | Description |
|---|---|---|
| `front.py` | 51209 | Web UI (login + chat + OASIS panel) |
| `mainagent.py` | 51200 | OpenAI-compatible API + Agent core |
| `time.py` | 51201 | Scheduling center |
| `oasis/server.py` | 51202 | OASIS forum service |

Ports are configurable in `config/.env`.
Seven tool services are integrated via the MCP protocol. All `username` parameters are auto-injected, and data is fully isolated between users:
| Tool Service | Capability |
|---|---|
| Search | DuckDuckGo web search |
| Scheduler | Natural language alarms/reminders, Cron expressions |
| File Manager | User file CRUD, path traversal protection |
| Commander | Shell commands and Python code in secure sandbox |
| OASIS Forum | Start discussions, check progress, manage custom experts |
| Bark Push | Push notifications to iOS/macOS devices |
| Telegram | Push messages to Telegram, whitelist management |
```bash
# Linux / macOS
chmod +x run.sh
./run.sh

# Windows
run.bat
```
The script handles everything: environment setup → API key config → user creation → starting all services.
The manual steps below can be skipped if using `run.sh`/`run.bat`.
1. Environment
```bash
# Auto (recommended)
scripts/setup_env.sh   # Linux/macOS
scripts\setup_env.bat  # Windows

# Manual
uv venv .venv --python 3.11
source .venv/bin/activate
uv pip install -r config/requirements.txt
```
2. API Key
Set in `config/.env`:

```bash
DEEPSEEK_API_KEY=your_deepseek_api_key_here
```
3. Create User
```bash
scripts/adduser.sh   # Linux/macOS
scripts\adduser.bat  # Windows
```
4. Start Services
```bash
# One-click
scripts/start.sh     # Linux/macOS
scripts\start.bat    # Windows

# Manual (3 terminals)
python src/time.py       # Scheduler
python src/mainagent.py  # Agent + MCP tools
python src/front.py      # Web UI
```
Visit http://127.0.0.1:51209 after startup.
One-click exposure via Cloudflare Tunnel (see Highlight #3 for details):
```bash
python scripts/tunnel.py
# Or interactively via run.sh — prompts "Deploy to public network? (y/N)"
```
Auto-downloads cloudflared, starts tunnels for Web UI + Bark push, captures public URLs, and writes them to .env. No account or DNS setup required.
| Endpoint | Method | Description |
|---|---|---|
| `/v1/chat/completions` | POST | Chat completions (streaming/non-streaming), fully OpenAI-compatible |
| `/login` | POST | User login authentication |
| `/sessions` | POST | List user sessions |
| `/session_history` | POST | Get session history |
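Because the API is OpenAI-compatible, any HTTP client can call it; the request only needs the `Bearer <user>:<password>` header shown in the curl example earlier. A minimal sketch that assembles (but does not send) such a request:

```python
def build_chat_request(user: str, password: str, prompt: str,
                       base_url: str = "http://127.0.0.1:51200"):
    """Assemble URL, headers, and JSON body for /v1/chat/completions."""
    url = f"{base_url}/v1/chat/completions"
    headers = {
        "Authorization": f"Bearer {user}:{password}",
        "Content-Type": "application/json",
    }
    body = {
        "model": "mini-timebot",
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    }
    return url, headers, body

# To send: requests.post(url, headers=headers, json=body, timeout=600)
```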
| Endpoint | Method | Description |
|---|---|---|
| `/topics` | POST | Create discussion topic |
| `/topics` | GET | List all topics |
| `/topics/{id}` | GET | Get topic details |
| `/topics/{id}/stream` | GET | SSE real-time update stream |
| `/topics/{id}/conclusion` | GET | Block until the conclusion is ready |
| `/experts` | GET | List experts (public + user custom) |
| `/experts/user` | POST/PUT/DELETE | User custom expert CRUD |
| `/sessions/oasis` | GET | List OASIS-managed expert sessions (sessions containing `#oasis#`) |
| `/workflows` | POST | Save a YAML workflow (optionally generating a layout) |
| `/workflows` | GET | List the user's saved workflows |
| `/layouts/from-yaml` | POST | Generate and save layout JSON from YAML |
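The `/topics/{id}/stream` endpoint emits standard Server-Sent Events. A hedged sketch of the client-side parsing (the JSON shape of each event is an assumption; this only demonstrates splitting `data:` frames):

```python
import json

def iter_sse_events(lines):
    """Yield one JSON object per SSE event from raw 'data: ...' lines.
    Multi-line data fields are joined; a blank line terminates an event."""
    buffer = []
    for line in lines:
        line = line.rstrip("\n")
        if line.startswith("data:"):
            buffer.append(line[5:].lstrip())
        elif line == "" and buffer:
            yield json.loads("\n".join(buffer))
            buffer = []
    if buffer:  # stream ended without a trailing blank line
        yield json.loads("\n".join(buffer))

# With requests: iter_sse_events(resp.iter_lines(decode_unicode=True))
```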
Security notes:

- Login credentials live in `sessionStorage` and expire when the tab closes; `/ask` re-verifies the password
- Inter-service calls are authenticated with `INTERNAL_TOKEN` (an auto-generated 64-char hex string)
- All data is isolated per `user_id`

```text
TeamClaw/
├── run.sh / run.bat            # One-click run
├── scripts/                    # Env setup, start, tunnel, user management
├── packaging/                  # Windows exe / macOS DMG packaging
├── chatbot/                    # External bot services
│   ├── telegrambot.py          # Telegram Bot (text/image/voice)
│   ├── QQbot.py                # QQ Bot (C2C/Group, SILK transcoding)
│   └── setup.py                # Bot configuration helper
├── config/
│   ├── .env                    # API keys and env vars
│   ├── requirements.txt        # Python dependencies
│   └── users.json              # Username-password hash
├── data/
│   ├── agent_memory.db         # Conversation memory (SQLite)
│   ├── telegram_whitelist.json # Telegram bot whitelist
│   ├── prompts/                # System prompts + expert configs
│   │   ├── oasis_experts.json  # 10 public expert definitions
│   │   ├── oasis_expert_discuss.txt # Expert discussion prompt template
│   │   └── oasis_summary.txt   # Conclusion generation prompt template
│   ├── schedules/              # YAML schedule examples
│   ├── oasis_user_experts/     # User custom experts (per-user JSON)
│   ├── timeset/                # Scheduled task persistence
│   └── user_files/             # User files (isolated per user)
│       └── {username}/
│           ├── user_profile.txt      # User profile
│           ├── skills_manifest.json  # User skills
│           └── tg_chat_id.txt        # Telegram chat ID
├── src/
│   ├── mainagent.py            # OpenAI-compatible API + Agent core
│   ├── agent.py                # LangGraph workflow + tool orchestration
│   ├── front.py                # Flask Web UI
│   ├── time.py                 # Scheduling center
│   └── mcp_*.py                # 7 MCP tool services
├── oasis/
│   ├── server.py               # OASIS FastAPI service
│   ├── engine.py               # Discussion engine (rounds + consensus + conclusion)
│   ├── experts.py              # Expert definitions + user expert storage
│   ├── scheduler.py            # YAML schedule parsing & execution
│   ├── forum.py                # Forum data structures
│   └── models.py               # Pydantic models
├── tools/
│   └── gen_password.py         # Password hash generator
└── test/
    ├── chat.py                 # CLI test client
    └── view_history.py         # View chat history
```
| Layer | Technology |
|---|---|
| LLM | DeepSeek (deepseek-chat) |
| Agent Framework | LangGraph + LangChain |
| Tool Protocol | MCP (Model Context Protocol) |
| Backend | FastAPI + Flask |
| Auth | SHA-256 Hash + Flask Session |
| Scheduling | APScheduler |
| Persistence | SQLite (aiosqlite) |
| Frontend | Tailwind CSS + Marked.js + Highlight.js |
MIT License
In addition to serving the MCP tools, the OASIS server (port 51202) can be called directly with curl, which is convenient for external scripts and debugging. All endpoints take a `user_id` parameter for per-user isolation.
```bash
# List all experts (public + user custom)
curl 'http://127.0.0.1:51202/experts?user_id=xinyuan'

# Create a custom expert
curl -X POST 'http://127.0.0.1:51202/experts/user' \
  -H 'Content-Type: application/json' \
  -d '{"user_id":"xinyuan","name":"Product Manager","tag":"pm","persona":"You are an experienced product manager, skilled at requirement analysis and product planning","temperature":0.7}'

# Update a custom expert
curl -X PUT 'http://127.0.0.1:51202/experts/user/pm' \
  -H 'Content-Type: application/json' \
  -d '{"user_id":"xinyuan","persona":"Updated expert description"}'

# Delete a custom expert
curl -X DELETE 'http://127.0.0.1:51202/experts/user/pm?user_id=xinyuan'

# List OASIS-managed expert sessions (sessions containing #oasis#)
curl 'http://127.0.0.1:51202/sessions/oasis?user_id=xinyuan'

# List the user's saved workflows
curl 'http://127.0.0.1:51202/workflows?user_id=xinyuan'

# Save a workflow (auto-generates a layout)
curl -X POST 'http://127.0.0.1:51202/workflows' \
  -H 'Content-Type: application/json' \
  -d '{"user_id":"xinyuan","name":"trio_discussion","schedule_yaml":"version:1\nplan:\n - expert: \"creative#temp#1\"","description":"three-expert discussion","save_layout":true}'

# Generate a layout from YAML
curl -X POST 'http://127.0.0.1:51202/layouts/from-yaml' \
  -H 'Content-Type: application/json' \
  -d '{"user_id":"xinyuan","yaml_source":"version:1\nplan:\n - expert: \"creative#temp#1\"","layout_name":"trio_layout"}'

# Create a discussion topic (synchronous, waits for the conclusion)
curl -X POST 'http://127.0.0.1:51202/topics' \
  -H 'Content-Type: application/json' \
  -d '{"user_id":"xinyuan","question":"Discussion topic","max_rounds":3,"schedule_yaml":"version:1\nplan:\n - expert: \"creative#temp#1\"","discussion":true}'

# Create a discussion topic (asynchronous, returns topic_id)
curl -X POST 'http://127.0.0.1:51202/topics' \
  -H 'Content-Type: application/json' \
  -d '{"user_id":"xinyuan","question":"Discussion topic","max_rounds":3,"schedule_yaml":"version:1\nplan:\n - expert: \"creative#temp#1\"","discussion":true,"callback_url":"http://127.0.0.1:51200/system_trigger","callback_session_id":"my-session"}'

# Get topic details
curl 'http://127.0.0.1:51202/topics/{topic_id}?user_id=xinyuan'

# Get the conclusion (blocks until ready)
curl 'http://127.0.0.1:51202/topics/{topic_id}/conclusion?user_id=xinyuan&timeout=300'

# Cancel a discussion
curl -X DELETE 'http://127.0.0.1:51202/topics/{topic_id}?user_id=xinyuan'

# List all discussion topics
curl 'http://127.0.0.1:51202/topics?user_id=xinyuan'

# SSE real-time update stream (discussion mode)
curl 'http://127.0.0.1:51202/topics/{topic_id}/stream?user_id=xinyuan'
```
Storage locations:

- Workflows: `data/user_files/{user}/oasis/yaml/{file}.yaml` (canvas layouts are converted from the YAML on the fly; layout JSON is no longer stored separately)
- Custom experts: `data/oasis_user_experts/{user}.json`
- Topics: `data/oasis_topics/{user}/{topic_id}.json`

Note: these endpoints share the same backend implementation as the MCP tools `list_oasis_experts`, `add_oasis_expert`, `update_oasis_expert`, `delete_oasis_expert`, `list_oasis_sessions`, `set_oasis_workflow`, `list_oasis_workflows`, `yaml_to_layout`, `post_to_oasis`, `check_oasis_discussion`, `cancel_oasis_discussion`, and `list_oasis_topics`, ensuring consistent behavior.
This repository features an active evolution branch: `agent-evolution-xinyuan`.
Managed autonomously by the resident Agent (TeamClaw), this branch is used for:
Human-AI Collaboration in progress.