Compare commits (8 commits: f9b1039970 ... 79d3c2e97e)

SHAs: 79d3c2e97e, f65c1d038b, 82c321ef0a, 4ab8822848, 779b2f6d52, 6f8f12314f, 70324d8542, 8228b3fa37

.claude/commands/audit.md (new file, 104 lines)
@@ -0,0 +1,104 @@
# /audit — Change Audit

Review all file changes you made in this session, combine them with the automatic pre-scan results, and write the audit to disk.

## Steps

### Step 1: Run the pre-scan script (Python, zero tokens)

Run:

```bash
python scripts/audit/prescan.py
```

The script automatically:
- Collects all changed files from git status
- Classifies high-risk files and generates risk_tags
- Runs compliance checks: code-to-doc mapping, migration SQL detection, DDL baseline check

Read the JSON it outputs. If `audit_required: false`, tell the user "no audit needed" and stop.

**Alternative**: if git status contains many historical changes from outside this session, use the `--files` argument to pass only this session's files:

```bash
python scripts/audit/prescan.py --files "file1.py,file2.sql,..."
```

Extract the file list from your conversation memory (this session's Edit/Write tool calls).
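Step 1's run-and-decide flow can be sketched in Python (a minimal sketch: `run_prescan` assumes only the command line shown above, and `needs_audit` relies only on the `audit_required` field the document mentions):

```python
import json
import subprocess

def run_prescan(files=None):
    """Run the pre-scan script and return its parsed JSON output."""
    cmd = ["python", "scripts/audit/prescan.py"]
    if files:
        # Optional --files argument limits the scan to this session's files
        cmd += ["--files", ",".join(files)]
    out = subprocess.run(cmd, capture_output=True, text=True, check=True)
    return json.loads(out.stdout)

def needs_audit(result):
    """Return True when the pre-scan says an audit is required."""
    return bool(result.get("audit_required", False))
```

If `needs_audit(run_prescan())` is false, the command reports "no audit needed" and exits.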
### Step 2: Add semantic context

The pre-scan script can tell you "which files changed, whether they are high risk, whether docs are missing", but it does not know **why** they changed.

Supplement from conversation memory:
- The reason each file was modified (what the user asked for)
- The technical approach and design decisions behind the change
- Related impact on other modules

Merge the pre-scan JSON with this semantic context as the input to Step 3.
### Step 3: Delegate audit-record writing to a sub-agent

Start a sub-agent with the Agent tool, passing:
1. The full pre-scan JSON result
2. The reason and content summary for each change (the semantic context you added)

The sub-agent's task instructions:

> Create an audit record file under `docs/audit/changes/`, named `<YYYY-MM-DD>__<short-english-slug>.md`.
>
> Use the following format (audit records are written in Simplified Chinese, so the template stays in Chinese):
>
> ```markdown
> # 变更审计记录:<中文标题>
>
> | 字段 | 值 |
> |------|-----|
> | 日期 | YYYY-MM-DD HH:MM:SS |
>
> ## 操作摘要
> <1-3 段,说清楚做了什么、为什么做>
>
> ## 变更文件
> 按新增/修改/删除分组,每个文件一行,简要说明改动内容。
>
> ## 改动注解
> 对每个变更文件写注解:
> - 高风险文件(ETL 任务/后端路由/数据库迁移/金额相关):写详细注解(变更类型、原因、思路、结果)
> - 普通文件:一行简要说明
> - 删除的文件:只记录删除原因
>
> ## 数据库变更(如有)
> 列出新建/修改/删除的表、字段、约束、索引。标注迁移执行状态。
>
> ## 风险与回滚
> - 风险点(标注高/中/低)
> - 回滚要点
>
> ## 验证
> - 至少 1 条可执行的验证方式(测试命令 / SQL / 联调步骤)
>
> ## 合规检查
> - 列出文档同步状态(已同步 / 待补齐 / 不适用)
> ```
>
> Get the current Beijing time with `python -c "from datetime import datetime, timezone, timedelta; print(datetime.now(timezone(timedelta(hours=8))).strftime('%Y-%m-%d %H:%M:%S'))"`.
>
> Write the audit record in Simplified Chinese.
>
> When done, run `python scripts/audit/gen_audit_dashboard.py` to refresh the audit dashboard.
>
> Return only: done / files_written / next_step.
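The file-naming rule in Step 3 can be sketched as a small helper (a minimal sketch; the slug value in the usage note is a hypothetical example):

```python
from datetime import datetime, timezone, timedelta

def audit_record_path(slug: str) -> str:
    """Build docs/audit/changes/<YYYY-MM-DD>__<slug>.md using Beijing time (UTC+8)."""
    beijing_now = datetime.now(timezone(timedelta(hours=8)))
    return f"docs/audit/changes/{beijing_now.strftime('%Y-%m-%d')}__{slug}.md"
```

For example, `audit_record_path("fix-salary-rounding")` yields a path like `docs/audit/changes/2026-03-27__fix-salary-rounding.md`.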
### Step 4: Fill in missing doc sync

For each non-compliant item listed under `code_without_docs` in the pre-scan JSON:
- Read the current content of the corresponding code file
- Update the corresponding document

If the backlog is large (>3 documents), delegate to a sub-agent.
### Step 5: Report to the user

A short receipt:
- Path of the audit record file
- Compliance check result (all passed / N items fixed / N items pending user action)
- Suggested next step (e.g. "commit when ready")
.claude/commands/db-docs.md (new file, 63 lines)
@@ -0,0 +1,63 @@
# /db-docs — Database Doc Sync

When the PostgreSQL schema/table structure changes, record the change in an audit-friendly way under `docs/database/`.

## Triggers

- Migration script/DDL changes (add/drop/alter tables, columns, types, defaults, NOT NULL, constraints, indexes, foreign keys)
- DDL executed by hand

## Steps

### Step 1: Identify structural changes

From this session's changes, list the added/modified/dropped objects:
- schema / table / column / index / constraint / foreign key
- Spell out the before/after difference

### Step 2: Update table docs

For each affected table, update the matching document under `docs/database/`:
- If the document exists: update the column list, constraints, indexes, etc.
- If it does not exist: create it from the template below (table docs are written in Simplified Chinese, so the template stays in Chinese)

Template:

```markdown
# <schema>.<table_name>

## 概述
<表的用途说明>

## 字段

| 字段名 | 类型 | 可空 | 默认值 | 说明 |
|--------|------|------|--------|------|
| ... | ... | ... | ... | ... |

## 约束与索引
- PRIMARY KEY: ...
- UNIQUE: ...
- INDEX: ...

## 关联
- 上游:<数据来源>
- 下游:<被哪些模块/表消费>
```

Pay special attention to monetary columns: note precision, currency, and rounding rules.

### Step 3: Rollback and verification

Write audit-friendly rollback and verification info:
- DDL rollback path (provide reverse migration SQL when necessary)
- At least 3 verification SQL statements (covering constraints/indexes/key columns)
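Step 3's "at least 3 verification SQL statements" could be generated generically from the catalog views (a minimal sketch; the `app.orders` names in the usage note are hypothetical, and real checks would add table-specific assertions):

```python
def verification_sql(schema: str, table: str) -> list[str]:
    """Build three generic verification queries for a table: columns, constraints, indexes."""
    return [
        # 1. Column list with types and nullability
        f"SELECT column_name, data_type, is_nullable FROM information_schema.columns "
        f"WHERE table_schema = '{schema}' AND table_name = '{table}';",
        # 2. Constraints (PK/UNIQUE/FK/CHECK)
        f"SELECT constraint_name, constraint_type FROM information_schema.table_constraints "
        f"WHERE table_schema = '{schema}' AND table_name = '{table}';",
        # 3. Index definitions
        f"SELECT indexname, indexdef FROM pg_indexes "
        f"WHERE schemaname = '{schema}' AND tablename = '{table}';",
    ]
```

For example, `verification_sql("app", "orders")` yields three queries that can be pasted into the doc's verification section.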
### Step 4: DDL baseline check

Check whether the baseline files under `docs/database/ddl/` need to be merged/updated. If so, update the baseline.

### Step 5: Output a summary

- Which documents were updated/created
- Migration script status (executed / pending)
- DDL baseline status (merged / pending)
.claude/commands/doc-sync.md (new file, 55 lines)
@@ -0,0 +1,55 @@
# /doc-sync — Doc Sync After Logic Changes

Check whether the logic changes in this session require document updates, and perform the sync.

## Triggers

Run this after modifying any of the following:
- Business rules / calculation definitions / money handling (precision, rounding, thresholds)
- ETL/SQL cleaning, aggregation, or mapping logic
- API behavior (response structure, error codes, authentication/authorization)
- Key mini-program interaction flows
- Database table structure

## Steps

### Step 1: Classify

Decide whether this session's changes count as "logic changes". If they are pure formatting/typo fixes/comment tweaks, tell the user "no logic change, no doc sync needed" and stop.

### Step 2: Evaluate which documents need updates

Based on the modules the change touches, evaluate whether these documents need updates:

**READMEs at each level** (update only the ones related to this change):
- `README.md` (repo root): project overview, quick start, environment variables, architecture overview
- `apps/backend/README.md`: backend API routes, configuration, how to run
- `apps/etl/connectors/feiqiu/README.md`: ETL task list, development conventions
- `apps/miniprogram/README.md`: mini-program page structure
- `apps/admin-web/README.md`: admin console features
- `apps/tenant-admin/README.md`: tenant admin console features
- `packages/shared/README.md`: shared package notes
- `db/README.md`: schema conventions, migration rules

Rule: update whenever it "helps a reader understand system behavior". If a README does not exist yet but the change touches that module, create it.

### Step 3: Apply the updates

For each document that needs an update:
1. Read its current content
2. Update the relevant sections based on this change
3. Write the updated content back

If the backlog is large (>3 documents), delegate to a sub-agent.
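Step 3's read-update-write loop is, in spirit, just the following (a minimal sketch; real section replacement would be change-specific rather than a plain string substitution):

```python
from pathlib import Path

def update_doc(path: str, old_text: str, new_text: str) -> None:
    """Replace one outdated passage in a doc; create the file if it does not exist."""
    p = Path(path)
    content = p.read_text(encoding="utf-8") if p.exists() else ""
    if old_text and old_text in content:
        # 2. Update the relevant section in place
        content = content.replace(old_text, new_text)
    else:
        # Append when the passage is new (also covers freshly created files)
        content = (content + "\n" + new_text).lstrip("\n")
    p.write_text(content, encoding="utf-8")
```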
### Step 4: Cross-checks

- If a DB schema change is involved: remind the user to run `/db-docs`
- If an API change is involved: check that `apps/backend/docs/API-REFERENCE.md` has been updated

### Step 5: Output a summary

- Changed: which documents were changed
- Why: root cause + direct cause
- Risk: risk points and regression scope
- Verify: suggested verification steps
.claude/commands/pre-change.md (new file, 65 lines)
@@ -0,0 +1,65 @@
# /pre-change — Pre-Change Research for Logic Changes

Thoroughly research the module about to be modified and output a context summary for the user to confirm before any code is written.

## When to use

Any logic change (ETL / business rules / API / data model / frontend interaction) — run before writing code.

## Steps

### Step 1: Identify the scope of the change

Extract from the user's request:
- The modules and files to modify
- The tables/APIs/pages involved
- The expected behavior change

### Step 2: Delegate research to an Explore sub-agent

Start an Explore sub-agent (thoroughness: very thorough) to research:

1. **Target module files**: read the files to be modified and their direct dependencies
2. **Audit history**: search `docs/audit/changes/` for past changes to the related modules
3. **Related docs**: README, PRD (`docs/prd/`), BD manuals (`docs/database/`), API reference
4. **Call graph**: callers and callees of the files being modified
5. **Data flow**: upstream (where data comes from) → this module → downstream (where data goes)
6. **Blast radius**: which modules/pages/tasks may be affected

### Step 3: Output a "pre-change context summary"

Format (the summary is delivered in Simplified Chinese, so the template stays in Chinese):

```
## 改动前上下文摘要

### 模块职责
<模块做什么,在系统中的角色>

### 历史变更
<近期审计记录中的相关改动,特别是踩坑记录>

### 数据流向
上游: <数据来源>
当前: <本模块处理>
下游: <消费方>

### 影响范围
- <受影响的模块/页面/任务列表>

### 风险点
- <可能的副作用、边界条件、兼容性问题>

### 建议方案
<基于调研结果的实施建议>
```

### Step 4: Wait for user confirmation

After outputting the summary, wait for the user to confirm or adjust direction; only then move on to implementation.

## Exceptions (skip this flow)

- Pure formatting changes, text-only comment/doc edits
- The user explicitly says "just change it / skip the research"
- New files that do not touch existing logic
.claude/commands/spec-close.md (new file, 63 lines)
@@ -0,0 +1,63 @@
# /spec-close — Spec Wrap-Up Checklist

When a feature spec is done, run this wrap-up checklist to close the quality loop.

## Steps

### Step 1: Final test checkpoint (required)

- Run the monorepo property tests: `cd /c/NeoZQYY && pytest tests/ -v`
- Run the module unit tests: `cd <module path> && pytest tests/ -v`
- Make sure all tests pass; ask the user about any failures

### Step 2: Frontend/backend integration check (required when API + frontend are involved)

- Start the backend against the test database and verify the full request-response path of each endpoint
- Verify the JSON response structure matches the schema definitions (camelCase serialization)
- Verify authorization and data isolation (`SET LOCAL app.current_site_id`) take effect in real requests
- Frontend integration: confirm frontend pages call the API correctly and render the data
- Verify the frontend does not crash in empty-data/degraded scenarios

### Step 3: Database change audit and DDL merge (required when DB changes are involved)

- Audit every database change in this implementation (new tables, columns, indexes, FDW mapping changes, etc.)
- **Migration SQL must actually be executed via the pg MCP tools** (never mark it done without executing)
- After executing, verify with queries that tables/columns/indexes were created correctly
- RLS dual-schema views: the backend queries `app.v_*` views, so any new DWS RLS view must be created in both the original schema and the `app` schema
- Merge into the main DDL baseline files (ETL → `docs/database/ddl/etl_feiqiu__<schema>.sql`, business → `docs/database/ddl/zqyy_app__<schema>.sql`)
- Write a rollback script (DROP/ALTER in reverse order)

### Step 4: BD manual update (required when DB changes are involved)

- Business DB → `docs/database/BD_manual_*.md`
- ETL DB → `apps/etl/connectors/feiqiu/docs/database/<layer>/main/BD_manual_*.md`
- FDW → `docs/database/BD_manual_fdw*.md`
- Every manual must include: column details, constraints and indexes, verification SQL (≥3), compatibility impact, rollback strategy

### Step 5: Project doc sync (trim to what the change touches)

Choose the documents to update based on the change type:

| Document | Update when |
|------|----------|
| Module README | module-internal structure changes |
| `apps/backend/docs/API-REFERENCE.md` | backend routes are added/modified |
| `docs/contracts/openapi/backend-api.json` | API endpoints are added/modified |
| `docs/DOCUMENTATION-MAP.md` | any new doc entry is added |

### Step 6: Change audit close-out (required for high-risk paths)

Run the `/audit` command to complete the audit flow.

### Step 7: Service cleanup (required when runtime services were started)

- Close browser instances, stop backend and frontend services, clean up resources

## Trimming by spec type

| Type | Required steps |
|------|---------|
| ETL (ODS/DWD/DWS) | 1, 3, 4, 5, 6 |
| Backend API | 1, 2, 5, 6 |
| Full-stack (frontend + backend + DB) | 1, 2, 3, 4, 5, 6 |
| Refactor | 1, 5, 6 |
.claude/hooks/post_edit_audit_reminder.py (new file, 33 lines)
@@ -0,0 +1,33 @@
#!/usr/bin/env python3
"""PostToolUse hook: remind about auditing after a high-risk file is edited."""
import json, re, sys

try:
    data = json.load(sys.stdin)
except Exception:
    sys.exit(0)

fp = (data.get("tool_input") or {}).get("file_path", "")
if not fp:
    sys.exit(0)

# Convert to a repo-relative path
rel = re.sub(r"^.*?NeoZQYY[/\\]", "", fp.replace("\\", "/"))

HIGH_RISK = [
    r"^apps/etl/connectors/feiqiu/(tasks|loaders|scd|orchestration|config|database|models|quality)/",
    r"^apps/backend/app/(routers|services|auth|schemas)/",
    r"^db/.*/migrations/.*\.sql$",
    r"^db/.*/schemas/.*\.sql$",
    r"^packages/shared/",
]

for p in HIGH_RISK:
    if re.search(p, rel):
        print(json.dumps({
            "hookSpecificOutput": {
                "hookEventName": "PostToolUse",
                "additionalContext": f"[audit-reminder] High-risk file edited: {rel} — run /audit after finishing this round of changes"
            }
        }))
        break
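The hook's path classification can be exercised in isolation (a sketch that reuses the same patterns; the sample paths in the usage note are hypothetical):

```python
import re

HIGH_RISK = [
    r"^apps/etl/connectors/feiqiu/(tasks|loaders|scd|orchestration|config|database|models|quality)/",
    r"^apps/backend/app/(routers|services|auth|schemas)/",
    r"^db/.*/migrations/.*\.sql$",
    r"^db/.*/schemas/.*\.sql$",
    r"^packages/shared/",
]

def is_high_risk(rel_path: str) -> bool:
    """True when a repo-relative path matches any high-risk pattern."""
    return any(re.search(p, rel_path) for p in HIGH_RISK)
```

For example, `apps/backend/app/routers/orders.py` or `db/zqyy_app/migrations/0007_add_col.sql` would trigger the reminder, while a file under `docs/` would not.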
.claude/hooks/post_edit_db_doc_sync.py (new file, 26 lines)
@@ -0,0 +1,26 @@
#!/usr/bin/env python3
"""PostToolUse hook: after editing a SQL file under db/, remind to sync docs/database/."""
import json, re, sys

try:
    data = json.load(sys.stdin)
except Exception:
    sys.exit(0)

fp = (data.get("tool_input") or {}).get("file_path", "")
if not fp:
    sys.exit(0)

rel = re.sub(r"^.*?NeoZQYY[/\\]", "", fp.replace("\\", "/"))

# Match SQL files under db/ (schemas, migrations, scripts, etc.)
if re.search(r"^db/.*\.sql$", rel):
    print(json.dumps({
        "hookSpecificOutput": {
            "hookEventName": "PostToolUse",
            "additionalContext": (
                f"[db-doc-sync] Database file edited: {rel} — "
                "per the schema-change rules, docs/database/ must be updated afterwards "
                "(change notes, compatibility, rollback strategy, verification SQL)"
            )
        }
    }))
.claude/hooks/post_edit_rls_dual_schema.py (new file, 37 lines)
@@ -0,0 +1,37 @@
#!/usr/bin/env python3
"""PostToolUse hook: when an edited SQL file contains CREATE VIEW, remind about the dual-schema rule."""
import json, re, sys

try:
    data = json.load(sys.stdin)
except Exception:
    sys.exit(0)

fp = (data.get("tool_input") or {}).get("file_path", "")
if not fp:
    sys.exit(0)

rel = re.sub(r"^.*?NeoZQYY[/\\]", "", fp.replace("\\", "/"))

# Only check SQL files under db/
if not re.search(r"^db/.*\.sql$", rel):
    sys.exit(0)

# Read the file to check for CREATE VIEW
try:
    with open(fp, "r", encoding="utf-8") as f:
        content = f.read()
except Exception:
    sys.exit(0)

# Detect CREATE [OR REPLACE] VIEW statements
if re.search(r"CREATE\s+(OR\s+REPLACE\s+)?VIEW\s+(dws|dwd|core)\.", content, re.IGNORECASE):
    print(json.dumps({
        "hookSpecificOutput": {
            "hookEventName": "PostToolUse",
            "additionalContext": (
                f"[rls-dual-schema] {rel} contains a VIEW definition in the dws/dwd/core schema — "
                "per the RLS dual-schema rule, a matching view must also be created in the app schema, "
                "otherwise backend queries will fail."
            )
        }
    }))
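The CREATE VIEW detection above can be checked on its own (the SQL snippets in the usage note are hypothetical examples):

```python
import re

# Same pattern as the hook: CREATE [OR REPLACE] VIEW in dws/dwd/core
VIEW_RE = re.compile(r"CREATE\s+(OR\s+REPLACE\s+)?VIEW\s+(dws|dwd|core)\.", re.IGNORECASE)

def needs_dual_schema(sql: str) -> bool:
    """True when the SQL creates a view in dws/dwd/core and thus needs an app-schema twin."""
    return VIEW_RE.search(sql) is not None
```

For example, `CREATE OR REPLACE VIEW dws.v_salary AS ...` triggers the reminder, while a view created directly in `app.` does not.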
.claude/hooks/pre_demo_protect.py (new file, 24 lines)
@@ -0,0 +1,24 @@
#!/usr/bin/env python3
"""PreToolUse hook: protect the demo-miniprogram directory from deletion or moves into _DEL/."""
import json, re, sys

try:
    data = json.load(sys.stdin)
except Exception:
    sys.exit(0)

tool = data.get("tool_name", "")
tool_input = data.get("tool_input") or {}

DEMO_DIR = "demo-miniprogram"

if tool == "Bash":
    cmd = tool_input.get("command", "")
    # Does the command delete or move demo-miniprogram?
    if DEMO_DIR in cmd and re.search(r"\b(rm|rmdir|del|move|mv)\b", cmd, re.IGNORECASE):
        # Allow mv to a non-_DEL destination (e.g. a normal rename), but block deletes and moves into _DEL
        if re.search(r"\brm\b|\brmdir\b|\bdel\b", cmd, re.IGNORECASE) or "_DEL" in cmd:
            print(json.dumps({
                "decision": "block",
                "reason": f"[demo-protect] apps/{DEMO_DIR}/ must not be deleted or moved into _DEL/. This directory is the UI style calibration baseline."
            }))
.claude/hooks/pre_read_archived_block.py (new file, 28 lines)
@@ -0,0 +1,28 @@
#!/usr/bin/env python3
"""PreToolUse hook: block reads of files under _archived/."""
import json, re, sys

try:
    data = json.load(sys.stdin)
except Exception:
    sys.exit(0)

tool = data.get("tool_name", "")
tool_input = data.get("tool_input") or {}

# Extract the path from the different tools
path = ""
if tool in ("Read", "Edit", "Write"):
    path = tool_input.get("file_path", "")
elif tool == "Glob":
    path = tool_input.get("path", "")

path = path.replace("\\", "/")

if re.search(r"/_archived/|/_archived$|^_archived/", path) or re.search(
    r"[/\\]_archived[/\\]", tool_input.get("file_path", "")
):
    print(json.dumps({
        "decision": "block",
        "reason": "[archived-block] _archived/ content is deprecated; do not read or reference it. Use the current version of the files."
    }))
.claude/hooks/session_start_context.py (new file, 36 lines)
@@ -0,0 +1,36 @@
#!/usr/bin/env python3
"""SessionStart hook: load project-state context when a session starts."""
import json, subprocess, sys, os

project_dir = os.environ.get("CLAUDE_PROJECT_DIR", os.getcwd())
script = os.path.join(project_dir, "scripts", "audit", "prescan.py")

if not os.path.isfile(script):
    sys.exit(0)

try:
    r = subprocess.run(
        [sys.executable, script],
        capture_output=True, text=True, timeout=10, cwd=project_dir,
    )
    if r.returncode != 0:
        sys.exit(0)
    result = json.loads(r.stdout)
except Exception:
    sys.exit(0)

audit_required = result.get("audit_required", False)
total = result.get("total_files", 0)
tags = ", ".join(result.get("risk_tags", []))

if audit_required:
    ctx = f"[session-context] The workspace has {total} uncommitted changed files, with high-risk tags: {tags}. If these changes come from a previous session and are unaudited, consider running /audit first."
else:
    ctx = "[session-context] Workspace state is normal; no high-risk unaudited changes."

print(json.dumps({
    "hookSpecificOutput": {
        "hookEventName": "SessionStart",
        "additionalContext": ctx
    }
}))
.claude/hooks/stop_audit_check.py (new file, 26 lines)
@@ -0,0 +1,26 @@
#!/usr/bin/env python3
"""Stop hook: when Claude finishes a reply, check for unaudited high-risk changes."""
import json, subprocess, sys, os

project_dir = os.environ.get("CLAUDE_PROJECT_DIR", os.getcwd())
script = os.path.join(project_dir, "scripts", "audit", "prescan.py")

if not os.path.isfile(script):
    sys.exit(0)

try:
    r = subprocess.run(
        [sys.executable, script],
        capture_output=True, text=True, timeout=10, cwd=project_dir,
    )
    if r.returncode != 0:
        sys.exit(0)
    result = json.loads(r.stdout)
except Exception:
    sys.exit(0)

high_risk = result.get("high_risk_files", [])
if result.get("audit_required", False) and len(high_risk) > 0:
    print(json.dumps({
        "systemMessage": f"[audit-check] {len(high_risk)} high-risk file changes are currently unaudited. Consider running /audit."
    }))
.claude/hooks/stop_verify_check.py (new file, 81 lines)
@@ -0,0 +1,81 @@
#!/usr/bin/env python3
"""Stop hook: check for unverified logic changes (no tests run) and DDL changes without migrations."""
import json, re, subprocess, sys, os

project_dir = os.environ.get("CLAUDE_PROJECT_DIR", os.getcwd())

try:
    r = subprocess.run(
        ["git", "diff", "--name-only"],
        capture_output=True, text=True, timeout=10, cwd=project_dir,
    )
    staged = subprocess.run(
        ["git", "diff", "--name-only", "--cached"],
        capture_output=True, text=True, timeout=10, cwd=project_dir,
    )
    changed = set((r.stdout + "\n" + staged.stdout).strip().splitlines())
except Exception:
    sys.exit(0)

if not changed:
    sys.exit(0)

warnings = []

# --- 1. Logic changes without verification ---
LOGIC_PATTERNS = [
    r"^apps/etl/connectors/feiqiu/(tasks|loaders|scd|orchestration|config|database|models|quality)/",
    r"^apps/backend/app/(routers|services|auth|schemas)/",
    r"^packages/shared/",
]

logic_files = [
    f for f in changed
    if any(re.search(p, f) for p in LOGIC_PATTERNS)
]

if logic_files:
    # Check whether any test-result files changed (an indirect signal that tests were run).
    # The more reliable signal: did the change set include test files at all?
    test_files = [f for f in changed if re.search(r"tests?/", f)]
    if not test_files:
        count = len(logic_files)
        warnings.append(
            f"This session modified {count} logic files but no test file changes were found; "
            "consider running the relevant verification (unit/integration/lint)"
        )

# --- 2. DDL changes without a migration ---
schema_files = [f for f in changed if re.search(r"^db/[^/]+/schemas/.*\.sql$", f)]
migration_files = [f for f in changed if re.search(r"^db/[^/]+/migrations/.*\.sql$", f)]

if schema_files and not migration_files:
    # Also check untracked migration files
    try:
        untracked = subprocess.run(
            ["git", "ls-files", "--others", "--exclude-standard", "db/"],
            capture_output=True, text=True, timeout=10, cwd=project_dir,
        )
        new_migrations = [
            f for f in untracked.stdout.strip().splitlines()
            if re.search(r"^db/[^/]+/migrations/.*\.sql$", f)
        ]
    except Exception:
        new_migrations = []

    if not new_migrations:
        dbs = set()
        for f in schema_files:
            m = re.match(r"^db/([^/]+)/", f)
            if m:
                dbs.add(m.group(1))
        db_list = ", ".join(sorted(dbs))
        warnings.append(
            f"schemas/ DDL changed for {db_list} but no matching migration script was found; "
            "consider creating an incremental migration under db/*/migrations/"
        )

# --- Output ---
if warnings:
    msg = "[verify-check] " + " | ".join(warnings)
    print(json.dumps({"systemMessage": msg}))
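The schema-vs-migration pairing check in this hook can be exercised on sample file lists (a minimal sketch using the same regexes; the paths in the usage note are hypothetical):

```python
import re

def missing_migrations(changed: set[str]) -> list[str]:
    """Return the db names whose schemas/ DDL changed with no migrations/ change at all."""
    schema_files = [f for f in changed if re.search(r"^db/[^/]+/schemas/.*\.sql$", f)]
    migration_files = [f for f in changed if re.search(r"^db/[^/]+/migrations/.*\.sql$", f)]
    if not schema_files or migration_files:
        return []  # nothing to flag: no DDL change, or a migration is present
    dbs = {m.group(1) for f in schema_files if (m := re.match(r"^db/([^/]+)/", f))}
    return sorted(dbs)
```

For example, a change set containing only `db/zqyy_app/schemas/app.sql` would flag `zqyy_app`; adding any `db/zqyy_app/migrations/*.sql` file clears the warning.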
.claude/settings.json (new file, 133 lines)
@@ -0,0 +1,133 @@
{
  "permissions": {
    "allow": [
      "Bash(du -sh /c/NeoZQYY/*)",
      "Bash(du -sh /c/NeoZQYY/.*)",
      "Bash(wc -l /c/NeoZQYY/scripts/ops/*.py)",
      "Read(//c/Users/Administrator/.kiro//**)",
      "Bash(ls -lS /c/NeoZQYY/tmp/*.md)",
      "Bash(xargs -I {} basename {})",
      "Bash(sed 's/_[a-z].*//')",
      "Bash(ls tests/*.py)",
      "Bash(sed 's/test_//')",
      "Bash(ls test_property_*.py)",
      "Bash(ls test_p*.py)",
      "Bash(ls test_rns*.py)",
      "Bash(ls test_tenant_*.py)",
      "Bash(ls test_trace_*.py)",
      "Bash(mv .kiro/steering _DEL/.kiro/steering)",
      "Bash(mv .kiroignore _DEL/.kiroignore)",
      "Bash(mv .specstory _DEL/.specstory)",
      "Bash(mv .cursorindexingignore _DEL/.cursorindexingignore)",
      "Bash(mv AI_CHANGELOG.md _DEL/AI_CHANGELOG.md)",
      "Bash(mv _tmp_replace2.py _DEL/)",
      "Bash(mv backend_test_results.txt _DEL/)",
      "Bash(mv test_results.txt _DEL/)",
      "Bash(mv dev-trace-coverage-working.png _DEL/)",
      "Bash(mv dev-trace-page.png _DEL/)",
      "Bash(mv export/pytest_result.txt _DEL/export/)",
      "Bash(mv export/test_auth_results.txt _DEL/export/)",
      "Bash(mv export/p13_test_result.txt _DEL/export/)",
      "Bash(mv export/p13_result.txt _DEL/export/)",
      "Bash(cp -r tmp _DEL/tmp_backup)",
      "Bash(*)",
      "Bash(touch tmp/.gitkeep)",
      "Bash(ls -la c:/NeoZQYY/docs/audit/session_logs/_session_index*.json)",
      "mcp__pg-etl-test__execute_sql",
      "mcp__pg-app-test__execute_sql",
      "mcp__pg-app-test__list_schemas"
    ],
    "additionalDirectories": [
      "C:\\Users\\Administrator\\.claude",
      "c:\\NeoZQYY\\.git"
    ]
  },
  "hooks": {
    "SessionStart": [
      {
        "hooks": [
          {
            "type": "command",
            "command": "python \"$CLAUDE_PROJECT_DIR/.claude/hooks/session_start_context.py\"",
            "timeout": 15,
            "statusMessage": "Loading project state..."
          }
        ]
      }
    ],
    "PreToolUse": [
      {
        "matcher": "Read|Edit|Write|Glob",
        "hooks": [
          {
            "type": "command",
            "command": "python \"$CLAUDE_PROJECT_DIR/.claude/hooks/pre_read_archived_block.py\"",
            "timeout": 5
          }
        ]
      },
      {
        "matcher": "Bash|Edit|Write",
        "hooks": [
          {
            "type": "command",
            "command": "python \"$CLAUDE_PROJECT_DIR/.claude/hooks/pre_demo_protect.py\"",
            "timeout": 5
          }
        ]
      }
    ],
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          {
            "type": "command",
            "command": "python \"$CLAUDE_PROJECT_DIR/.claude/hooks/post_edit_audit_reminder.py\"",
            "timeout": 5
          }
        ]
      },
      {
        "matcher": "Edit|Write",
        "hooks": [
          {
            "type": "command",
            "command": "python \"$CLAUDE_PROJECT_DIR/.claude/hooks/post_edit_db_doc_sync.py\"",
            "timeout": 5
          }
        ]
      },
      {
        "matcher": "Edit|Write",
        "hooks": [
          {
            "type": "command",
            "command": "python \"$CLAUDE_PROJECT_DIR/.claude/hooks/post_edit_rls_dual_schema.py\"",
            "timeout": 10
          }
        ]
      }
    ],
    "Stop": [
      {
        "hooks": [
          {
            "type": "command",
            "command": "python \"$CLAUDE_PROJECT_DIR/.claude/hooks/stop_audit_check.py\"",
            "timeout": 15
          }
        ]
      },
      {
        "hooks": [
          {
            "type": "command",
            "command": "python \"$CLAUDE_PROJECT_DIR/.claude/hooks/stop_verify_check.py\"",
            "timeout": 15
          }
        ]
      }
    ]
  }
}
.env (55 lines changed)
@@ -104,25 +104,37 @@ SYSTEM_LOG_ROOT=C:/NeoZQYY/export/SYSTEM/LOGS
# ------------------------------------------------------------------------------
# Backend structured-log directory
BACKEND_LOG_ROOT=C:/NeoZQYY/export/BACKEND/LOGS
# User avatar storage directory
AVATAR_EXPORT_PATH=C:/NeoZQYY/export/BACKEND/avatars

# ------------------------------------------------------------------------------
# Aliyun Bailian AI configuration
# DashScope AI configuration (Bailian Application API)
# CHANGE 2026-02-23 | Moved from the PRD docs into .env; plaintext secrets are banned in docs
# CHANGE P14 | BAILIAN_* → DASHSCOPE_*; removed BASE_URL/MODEL (not needed by the Application API)
# ------------------------------------------------------------------------------
BAILIAN_API_KEY=sk-6def29cab3474cc797e52b82a46a5dba
BAILIAN_MODEL=qwen-plus
BAILIAN_BASE_URL=https://dashscope.aliyuncs.com/compatible-mode/v1
BAILIAN_TEST_APP_ID=541edb3d5fcd4c18b13cbad81bb5fb9d
DASHSCOPE_API_KEY=sk-6def29cab3474cc797e52b82a46a5dba
DASHSCOPE_WORKSPACE_ID=

# CHANGE 2026-03-05 | 8 Bailian AI application IDs (from the Bailian platform, updated 2026-03-05)
BAILIAN_APP_ID_1_CHAT=979dabe6f22a43989632b8c662cac97c
BAILIAN_APP_ID_2_FINANCE=1dcdb5f39c3040b6af8ef79215b9b051
BAILIAN_APP_ID_3_CLUE=708bf45439cd48c7ab9a514d03482890
BAILIAN_APP_ID_4_ANALYSIS=ea7b1c374f574b9a925a2fb5789a9b90
BAILIAN_APP_ID_5_TACTICS=46f54e6053df4bb0b83be29366025cf6
BAILIAN_APP_ID_6_NOTE=025bb344146b4e4e8be30c444adab3b4
BAILIAN_APP_ID_7_CUSTOMER=df35e06991b24d49971c03c6428a9c87
BAILIAN_APP_ID_8_CONSOLIDATE=407dfb89283b4196934eec5fefe3ebc2
# 8 Bailian AI application IDs (from the Bailian platform; select the app via app_id)
# App 1: general chat | App 2: finance insight | App 3: customer-data retention-lead analysis
# App 4: relationship analysis/task suggestions | App 5: talking points | App 6: note analysis
# App 7: customer analysis | App 8: retention-lead consolidation
DASHSCOPE_APP_ID_1_CHAT=979dabe6f22a43989632b8c662cac97c
DASHSCOPE_APP_ID_2_FINANCE=1dcdb5f39c3040b6af8ef79215b9b051
DASHSCOPE_APP_ID_3_CLUE=708bf45439cd48c7ab9a514d03482890
DASHSCOPE_APP_ID_4_ANALYSIS=ea7b1c374f574b9a925a2fb5789a9b90
DASHSCOPE_APP_ID_5_TACTICS=46f54e6053df4bb0b83be29366025cf6
DASHSCOPE_APP_ID_6_NOTE=025bb344146b4e4e8be30c444adab3b4
DASHSCOPE_APP_ID_7_CUSTOMER=df35e06991b24d49971c03c6428a9c87
DASHSCOPE_APP_ID_8_CONSOLIDATE=407dfb89283b4196934eec5fefe3ebc2
# App 9: session-log summary generation (used by Kiro agent_on_stop + batch_generate_summaries)
DASHSCOPE_APP_ID_SUMMARY=e0cf8913b1ee4a4eb9464cc1ee0bf300

# Internal API auth token (used by ETL and other internal services calling /api/internal/* endpoints)
INTERNAL_API_TOKEN=C4Rs45fEoMC3u2PR4-jvakl8SBYpU9kV7JFiTj-TJAc

# Backend API base URL (used when ETL triggers AI events, e.g. http://localhost:8000)
BACKEND_API_URL=http://localhost:8000

# ------------------------------------------------------------------------------
# WeChat mini-program
@@ -147,3 +159,18 @@ PIPELINE_RATE_MAX=2.0
OPS_SERVER_BASE=C:/NeoZQYY
ETL_PROJECT_PATH=C:/NeoZQYY/apps/etl/connectors/feiqiu
ETL_PYTHON_EXECUTABLE=C:/NeoZQYY/.venv/Scripts/python.exe

# === Dev Trace Log ===
# Full request-chain trace logging (dev/test only; disabled in production)
DEV_TRACE_ENABLED=true
DEV_TRACE_LOG_DIR=export/dev-trace-logs
DEV_TRACE_LOG_RETENTION_DAYS=7
DEV_TRACE_LOG_SQL=true
DEV_TRACE_LOG_PARAMS=true

# ------------------------------------------------------------------------------
# DWS salary-calculation configuration
# CHANGE 2026-03-27 | Allow the salary task to run outside the month-start settlement window (temporary switch)
# Normal scheduling only runs last month's salary on days 1-5; this switch allows a mid-month manual run for the current month
# ------------------------------------------------------------------------------
DWS_SALARY_ALLOW_OUT_OF_CYCLE=true
@@ -101,27 +101,35 @@ SYSTEM_LOG_ROOT=C:/NeoZQYY/export/SYSTEM/LOGS
# Backend output paths
# ------------------------------------------------------------------------------
BACKEND_LOG_ROOT=C:/NeoZQYY/export/BACKEND/LOGS
# User avatar storage directory (chooseAvatar uploads are saved here as {user_id}.jpg)
AVATAR_EXPORT_PATH=C:/NeoZQYY/export/BACKEND/avatars

# ------------------------------------------------------------------------------
# Aliyun Bailian AI configuration
# DashScope AI configuration (Bailian Application API)
# ------------------------------------------------------------------------------
BAILIAN_API_KEY=
BAILIAN_MODEL=qwen-plus
BAILIAN_BASE_URL=https://dashscope.aliyuncs.com/compatible-mode/v1
BAILIAN_TEST_APP_ID=
DASHSCOPE_API_KEY=
DASHSCOPE_WORKSPACE_ID=

# 8 Bailian AI application IDs (from the Bailian platform)
# 8 Bailian AI application IDs (from the Bailian platform; select the app via app_id)
# App 1: general chat | App 2: finance insight | App 3: customer-data retention-lead analysis
# App 4: relationship analysis/task suggestions | App 5: talking points | App 6: note analysis
# App 7: customer analysis | App 8: retention-lead consolidation
BAILIAN_APP_ID_1_CHAT=
BAILIAN_APP_ID_2_FINANCE=
BAILIAN_APP_ID_3_CLUE=
BAILIAN_APP_ID_4_ANALYSIS=
BAILIAN_APP_ID_5_TACTICS=
BAILIAN_APP_ID_6_NOTE=
BAILIAN_APP_ID_7_CUSTOMER=
BAILIAN_APP_ID_8_CONSOLIDATE=
DASHSCOPE_APP_ID_1_CHAT=
DASHSCOPE_APP_ID_2_FINANCE=
DASHSCOPE_APP_ID_3_CLUE=
DASHSCOPE_APP_ID_4_ANALYSIS=
DASHSCOPE_APP_ID_5_TACTICS=
DASHSCOPE_APP_ID_6_NOTE=
DASHSCOPE_APP_ID_7_CUSTOMER=
DASHSCOPE_APP_ID_8_CONSOLIDATE=
# App 9: session-log summary generation (used by Kiro agent_on_stop + batch_generate_summaries)
DASHSCOPE_APP_ID_SUMMARY=

# Internal API auth token (used by ETL and other internal services calling /api/internal/* endpoints)
INTERNAL_API_TOKEN=

# Backend API base URL (used when ETL triggers AI events, e.g. http://localhost:8000)
BACKEND_API_URL=

# ------------------------------------------------------------------------------
# Pipeline rate limiting (RateLimiter request interval, seconds)
@@ -267,7 +275,7 @@ DWD_FACT_UPSERT=true
RUN_TASKS=PRODUCTS,TABLES,MEMBERS,ASSISTANTS,PACKAGES_DEF,ORDERS,PAYMENTS,REFUNDS,COUPON_USAGE,INVENTORY_CHANGE,TOPUPS,TABLE_DISCOUNT,LEDGER
# RUN_DWS_TASKS=
# RUN_INDEX_TASKS=
INDEX_LOOKBACK_DAYS=60
INDEX_LOOKBACK_DAYS=90

# ------------------------------------------------------------------------------
# DWS monthly/salary configuration
@@ -340,4 +348,12 @@ ETL_PYTHON_EXECUTABLE=C:/NeoZQYY/.venv/Scripts/python.exe
# Ops dashboard server root
# CHANGE 2026-03-06 | Must be set explicitly to avoid __file__-based path inference
# ------------------------------------------------------------------------------
OPS_SERVER_BASE=C:/NeoZQYY
OPS_SERVER_BASE=C:/NeoZQYY

# === Dev Trace Log ===
# Full request-chain trace logging (dev/test only; disabled in production)
DEV_TRACE_ENABLED=true
DEV_TRACE_LOG_DIR=export/dev-trace-logs
DEV_TRACE_LOG_RETENTION_DAYS=7
DEV_TRACE_LOG_SQL=true
DEV_TRACE_LOG_PARAMS=true
.gitignore (vendored, 20 lines changed)
@@ -68,27 +68,23 @@ infra/**/*.secret
# ===== IDE =====
.idea/
.vscode/
.vite/
*.swp
*.swo
*~
.specstory/
.cursorindexingignore

# ===== Claude Code local settings =====
.claude/settings.local.json

# ===== Windows misc =====
*.lnk
.Deleted/

# ===== Kiro runtime state =====
.kiro/.audit_state.json
.kiro/.last_prompt_id.json
.kiro/.git_snapshot.json
.kiro/.file_baseline.json
.kiro/.compliance_state.json
.kiro/.audit_context.json
# ===== Archive directory (user clears it manually, periodically) =====
_DEL/

# ===== Ops-script runtime state =====
scripts/ops/.monitor_token

# ===== Kiro Powers (contains sensitive DSNs) =====
powers/
# ===== Mini-program build artifacts =====
apps/*.zip
@@ -1,186 +0,0 @@
|
||||
---
name: audit-writer
description: Run post-change audit + docs sync for NeoZQYY Monorepo; write audit artifacts; return a very short receipt only.
tools: ["read", "write", "shell"]
---

You are a dedicated "audit wrap-up / post-processing writer" sub-agent.

## Core principle: work from the pre-built context; never scan the whole repo

Your sole input is `.kiro/state/.audit_context.json` (pre-built by `build_audit_context.py`).
That file already contains everything you need:

| Field | Source | Content |
|------|------|------|
| `changed_files` | audit-flagger | Full list of changed files |
| `high_risk_files` | audit-flagger | High-risk subset of the files |
| `reasons` | audit-flagger | Risk classification tags |
| `high_risk_diff` | git diff | Diff of the high-risk files (truncated) |
| `diff_stat` | git diff --stat | Summary of change statistics |
| `compliance.code_without_docs` | compliance-prescan | Code files missing doc sync, with the docs they should update |
| `compliance.new_migration_sql` | compliance-prescan | List of new migration SQL files |
| `compliance.has_bd_manual` | compliance-prescan | Whether a BD_Manual doc already exists |
| `compliance.has_ddl_baseline` | compliance-prescan | Whether the DDL baseline was updated |
| `compliance.api_changed` | compliance-prescan | Whether API-related files changed |
| `compliance.openapi_spec_stale` | compliance-prescan | Whether the OpenAPI spec needs re-export |
| `session_diff` | agent-on-stop (file baseline) | Exact changes during this conversation: `added`/`modified`/`deleted` |
| `prompt_id` / `latest_prompt_log` | prompt-audit-log | Prompt-ID and original prompt text (for traceability) |
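The agent's first action — load the pre-built context and decide whether an audit is even needed — can be sketched as follows. This is a minimal illustrative sketch, not the project's actual loader; it assumes only the path and `audit_required` field documented above:

```python
import json
from pathlib import Path


def load_audit_context(path=".kiro/state/.audit_context.json"):
    """Load the pre-built audit context; return (context, audit_required)."""
    p = Path(path)
    if not p.is_file():
        # No context file: nothing to audit.
        return {}, False
    ctx = json.loads(p.read_text(encoding="utf-8"))
    return ctx, bool(ctx.get("audit_required"))
```

When `audit_required` comes back false, the agent reports "no audit needed" and exits without touching the repository.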
**Forbidden**:
- ❌ Running `git status --porcelain` (`changed_files` is already provided)
- ❌ Running a full `git diff` (`high_risk_diff` + `diff_stat` are already provided)
- ❌ Walking directories to find changed files (a classified list is already provided)
- ❌ Running `change_compliance_prescan.py` (the `compliance` data is already provided)

**Allowed**:
- ✅ Reading specific file contents (e.g. the current content of a README you need to update)
- ✅ Running `git diff HEAD -- <file>` for a single file (only when its diff was truncated in the context)
- ✅ Connecting to the test database to verify migration status (only when `new_migration_sql` is non-empty)

## Audit artifact paths (single root)
- Change audit records: `docs/audit/changes/<YYYY-MM-DD>__<slug>.md`
- Audit dashboard: `docs/audit/audit_dashboard.md` (auto-generated; do not edit by hand)
- Prompt logs: `docs/audit/prompt_logs/`
- Dashboard refresh command: `python scripts/audit/gen_audit_dashboard.py`
- All audit artifacts go under the repo-root `docs/audit/`; never write into submodules

## When heavy post-processing is needed
Decide from `audit_required` and `reasons` in `audit_context.json`:
- `audit_required: true` → run the full audit flow
- `audit_required: false` → report "no audit needed", clear the flag, and exit
## Execution strategy (context-driven; no redundant scanning)

### Step 1: read the context
Read `.kiro/state/.audit_context.json` and extract the key fields.

### Step 1b: read the session index
Read `docs/audit/session_logs/_session_index.json` and, by `startTime`, find the entry closest to `prompt_at` in `audit_context.json` (the main conversation, not `is_sub`). Extract:
- `description`: use as the audit record's operation summary (more accurate and complete than inferring one from the diff)
- `summary.files_modified` / `summary.files_created`: cross-check against `session_diff`
- The first 8 chars of the executionId: write into the audit record as `session_id` to establish a bidirectional link
- `summary.sub_agents`: record which sub-agents this conversation invoked
- `summary.errors`: note any errors during execution

If the index does not exist or no entry matches, skip this step; it does not block the rest of the flow.
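The nearest-by-`startTime` matching in step 1b can be sketched as below. This is a hedged illustration: it assumes the index is a flat JSON array of entries with ISO-8601 `startTime` fields and an optional `is_sub` flag (the real index layout may differ), and it assumes `prompt_at` and `startTime` share the same timezone convention:

```python
import json
from datetime import datetime
from pathlib import Path


def nearest_session_entry(index_path, prompt_at):
    """Return the main-conversation entry whose startTime is closest to prompt_at, or None."""
    p = Path(index_path)
    if not p.is_file():
        return None  # index missing: step 1b is skipped
    entries = json.loads(p.read_text(encoding="utf-8"))
    target = datetime.fromisoformat(prompt_at)
    best, best_delta = None, None
    for e in entries:
        if e.get("is_sub"):  # skip sub-agent sessions, keep the main conversation
            continue
        try:
            delta = abs(datetime.fromisoformat(e["startTime"]) - target)
        except (KeyError, ValueError):
            continue  # malformed entry: ignore
        if best_delta is None or delta < best_delta:
            best, best_delta = e, delta
    return best
```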
### Step 2: write audit artifacts (invoke skills as needed)
Decide which skills are needed from `reasons`:
- Contains `dir:backend` / `dir:etl` / `dir:shared` etc. → invoke `steering-readme-maintainer`
- Contains any high-risk tag → invoke `change-annotation-audit` (writes docs/audit/changes/ + AI_CHANGELOG + CHANGE comments)
- Contains `db-schema-change` → invoke `bd-manual-db-docs` and run the full DB-docs reconciliation (see step 2b)

Every date-time field in audit records must be precise to the second (format `YYYY-MM-DD HH:MM:SS`, timezone Asia/Shanghai). This includes, among others: the "Date" field in the record header, AI_CHANGELOG timestamps, and the dates in CHANGE marker comments.

If `session_diff` contains `added` or `deleted` files, add a "Files changed this conversation" section to the audit record, listing additions and deletions separately.

If step 1b succeeded, add to the record's header metadata:
- `session_id`: the first 8 chars of the executionId (e.g. `f29acdea`)
- Operation summary: the `description` from the session index (the LLM-generated summary)
- `session_path`: the relative path of the session log file (the `output_dir` field value)

Audit record header template:

```markdown
# Change audit record: <title>

| Field | Value |
|------|-----|
| Date | YYYY-MM-DD HH:MM:SS |
| Prompt-ID | <from audit_context> |
| Session-ID | <first 8 chars of executionId> |
| Session path | <output_dir relative path> |

## Operation summary
<description from the session index, or a summary inferred from the diff>
```
### Step 2b: full DB-docs reconciliation (when reasons includes db-schema-change)
When `reasons` contains `db-schema-change`, besides invoking the `bd-manual-db-docs` skill for this change, you must also reconcile everything:

1. Connect to the test database (via the pg power's `pg-etl-test` / `pg-app-test`) and query `information_schema.tables` and `information_schema.columns` for the actual structure of all tables and columns
2. Scan the existing docs under `docs/database/` and compare table by table:
   - Table missing from the docs → create a new table-structure doc
   - Documented columns inconsistent with reality (type, nullable, default, etc.) → update the doc
   - Table documented but already dropped from the database → mark it deprecated in the doc
3. Write a reconciliation summary into the audit record, listing: docs created, docs updated, deprecation marks added
4. Output all docs to `docs/database/`, following the existing directory structure and templates

Note: the full reconciliation uses the test database (TEST_DB_DSN); never connect to production.
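The table-level comparison in step 2b can be sketched as a pure function. Fetching the live schema (e.g. by querying `information_schema.columns`) is left to the caller; the shape of the two dicts here is an assumption made for illustration:

```python
def reconcile_tables(live_tables, documented_tables):
    """Compare live DB tables against documented ones.

    Both arguments map table name -> {column name: column type}.
    Returns (missing_docs, stale_docs, deprecated_docs).
    """
    missing = sorted(set(live_tables) - set(documented_tables))     # need new docs
    deprecated = sorted(set(documented_tables) - set(live_tables))  # mark deprecated
    stale = sorted(
        t for t in set(live_tables) & set(documented_tables)
        if live_tables[t] != documented_tables[t]                   # column drift
    )
    return missing, stale, deprecated
```

The three result lists map directly onto the reconciliation summary the audit record asks for: docs created, docs updated, deprecation marks.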
### Step 3: doc proofreading and back-fill
Iterate over `compliance.code_without_docs`; for each missing item:
- Read the current content of the code file (no diff needed; read the file directly)
- Update the matching docs:

| Code path prefix | Docs to update in sync |
|---|---|
| `apps/backend/app/routers/` | `apps/backend/docs/API-REFERENCE.md` + `docs/contracts/openapi/backend-api.json` |
| `apps/backend/app/services/` | `apps/backend/docs/API-REFERENCE.md` + `apps/backend/README.md` |
| `apps/backend/app/auth/` | `apps/backend/docs/API-REFERENCE.md` + `apps/backend/README.md` + `docs/contracts/openapi/backend-api.json` |
| `apps/backend/app/schemas/` | `docs/contracts/openapi/backend-api.json` |
| `apps/etl/connectors/feiqiu/tasks/` | `apps/etl/connectors/feiqiu/docs/etl_tasks/` |
| `apps/etl/connectors/feiqiu/loaders/` | `apps/etl/connectors/feiqiu/docs/etl_tasks/` |
| `apps/etl/connectors/feiqiu/scd/` | `apps/etl/connectors/feiqiu/docs/business-rules/scd2_rules.md` |
| `apps/etl/connectors/feiqiu/orchestration/` | `apps/etl/connectors/feiqiu/docs/architecture/` |
| `apps/admin-web/src/` | `apps/admin-web/README.md` |
| `apps/miniprogram/` | `apps/miniprogram/README.md` |
| `packages/shared/` | `packages/shared/README.md` |
| `db/*/migrations/*.sql` | `docs/database/BD_Manual_*.md` + `apps/etl/connectors/feiqiu/docs/database/` + `docs/database/ddl/` |
### Step 4: DDL/migration check
- If `compliance.new_migration_sql` is non-empty:
  - Connect to the test database and verify the migrations were applied
  - Note the execution status in the audit record
- If `compliance.new_migration_sql` is non-empty and `compliance.has_ddl_baseline` is false:
  - Flag ⚠️ DDL baseline pending merge in the audit record

### Step 4b: OpenAPI spec sync check
- If `compliance.api_changed` is true and `compliance.openapi_spec_stale` is true:
  - Flag ⚠️ API code changed but the OpenAPI spec was not re-exported in the audit record
  - Run `python scripts/ops/_export_openapi.py` to re-export the spec (requires the backend to be importable)
  - If the export fails (backend not running, etc.), flag it in the audit record as pending manual export
  - On success, remind the user to reconnect the OpenAPI Power's MCP server to load the new spec
- If `compliance.api_changed` is true and `compliance.openapi_spec_stale` is false:
  - The spec was already updated in sync; nothing more to do
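The "run the export, and flag rather than fail on error" behavior of step 4b can be sketched with a small wrapper. `try_export_openapi` is an illustrative helper, not part of the project; the real flow simply shells out to `scripts/ops/_export_openapi.py`:

```python
import subprocess
import sys


def try_export_openapi(cmd):
    """Run an export command; return (ok, message) instead of raising."""
    try:
        r = subprocess.run(cmd, capture_output=True, text=True, timeout=120)
    except (OSError, subprocess.TimeoutExpired) as exc:
        return False, f"export failed to start: {exc}"
    if r.returncode != 0:
        # Non-zero exit (e.g. backend not importable): caller flags "manual export pending".
        return False, f"export exited {r.returncode}: {r.stderr.strip()}"
    return True, r.stdout.strip()
```

On `ok == False` the agent writes the warning into the audit record instead of aborting the whole audit.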
### Step 5: change annotations

For every changed file covered by this audit, write a per-file annotation section in the audit record (`docs/audit/changes/<YYYY-MM-DD>__<slug>.md`).

Each annotation includes:
- File path
- Change type (added / modified / deleted)
- Original reason: why the change was made (infer the user's intent from `latest_prompt_log` and the diff context)
- Approach: the technical approach and design decisions (infer from the diff content and code structure)
- Result: the effect of the change and its scope of impact

Format template (the "Change annotations" section of the audit record):

```markdown
## Change annotations

### `<file path>`
- Change type: added / modified / deleted
- Original reason: <motivation inferred from the prompt log and diff>
- Approach: <technical approach, design decisions, why this implementation was chosen>
- Result: <effect after the change, scope of impact, relation to other modules>
```

Execution rules:
- Write detailed annotations only for files in `high_risk_files` and `session_diff.added`
- A one-line annotation is enough for non-high-risk `session_diff.modified` files
- For `session_diff.deleted` files, record only the reason for deletion
- Infer annotation content from `high_risk_diff`, `latest_prompt_log`, and the file contents; never fabricate
- If a file's diff was truncated, you may run `git diff HEAD -- <file>` on that single file to get the full diff
- Write annotations in Simplified Chinese
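The tiering rules above can be sketched as a small classifier; the tier names are invented here for illustration, and `session_diff` follows the `added`/`modified`/`deleted` shape defined earlier:

```python
def annotation_tier(path, high_risk_files, session_diff):
    """Decide how detailed the annotation for a changed file should be."""
    if path in session_diff.get("deleted", []):
        return "deletion-reason-only"   # deleted files: record only why
    if path in high_risk_files or path in session_diff.get("added", []):
        return "detailed"               # high-risk or newly added: full annotation
    return "one-line"                   # ordinary modification: brief note
```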
### Step 6: wrap-up
- Set `audit_required` to false in `.kiro/state/.audit_state.json` and clear `reasons`/`changed_files`/`last_reminded_at`
- Run `python scripts/audit/gen_audit_dashboard.py` to refresh the audit dashboard

## Output (a very short receipt is mandatory)
You may output only these 3 items at the end:
- done: yes/no
- files_written: <relative paths, one per line>
- next_step: <1-2 items on failure; on success write "commit when ready">
@@ -1,16 +0,0 @@
{
  "enabled": true,
  "name": "Agent On Stop (Merged)",
  "description": "Merged hook: on conversation end, detect changes (including external non-Kiro changes), record the session log, run the compliance prescan, build the audit context, and emit the audit reminder. Skips when nothing changed. Pure shell, zero tokens.",
  "version": "1",
  "when": {
    "type": "agentStop"
  },
  "then": {
    "type": "runCommand",
    "command": "python C:/NeoZQYY/.kiro/scripts/agent_on_stop.py",
    "timeout": 360
  },
  "workspaceFolderName": "NeoZQYY",
  "shortName": "agent-on-stop"
}
@@ -1,16 +0,0 @@
{
  "enabled": true,
  "name": "CWD Guard for Shell",
  "description": "Before the AI runs a shell command, validate cwd, command syntax, and Python invocation safety to avoid common Windows/PowerShell pitfalls.",
  "version": "2",
  "when": {
    "type": "preToolUse",
    "toolTypes": [
      "shell"
    ]
  },
  "then": {
    "type": "askAgent",
    "prompt": "Apply the following checks to the shell command about to run; fix any issue before executing, otherwise pass it through:\n\n1. **cwd check**: if the command runs a Python script under scripts/ops/, .kiro/scripts/, or apps/etl/connectors/feiqiu/scripts/, cwd must be the repo root C:\\NeoZQYY. ETL module commands should use cwd apps/etl/connectors/feiqiu/, backend commands apps/backend/, frontend commands apps/admin-web/.\n2. **Bare Python/Node interception**: if the command contains `python`, `node`, or `ipython` without a following `-c`, `-m`, or script path, it must be fixed (it would hijack the shell with a REPL).\n3. **Command separators**: replace `&&` with `;` (PowerShell syntax).\n4. **Env var syntax**: replace `$VAR_NAME` reads with `$env:VAR_NAME` (PowerShell syntax).\n\nPass through any command that has none of these issues."
  }
}
@@ -1,15 +0,0 @@
{
  "enabled": true,
  "name": "Daily Revenue Report",
  "description": "Manually triggered: runs daily_revenue_report.py to compute daily business metrics from March 1 through today (net revenue, top-ups, group-buy settlements, store visits, new members, top-up headcount, etc.), output to docs/reports/daily-revenue-latest.md",
  "version": "1",
  "when": {
    "type": "userTriggered"
  },
  "then": {
    "type": "askAgent",
    "prompt": "Run python C:\\NeoZQYY\\scripts\\ops\\daily_revenue_report.py"
  },
  "workspaceFolderName": "NeoZQYY",
  "shortName": "daily-revenue-report"
}
@@ -1,15 +0,0 @@
{
  "enabled": true,
  "name": "ETL FULL TEST",
  "description": "One-click ETL full-stack integration run: start services → submit the task via the Playwright browser → live monitoring → performance report → black-box consistency test → service cleanup. See .kiro/specs/[ETL]-fullstack-integration/tasks.md for the detailed steps",
  "version": "1.1.0",
  "when": {
    "type": "userTriggered"
  },
  "then": {
    "type": "askAgent",
    "prompt": "Run the ETL full-stack integration ops task. First read `.kiro/specs/[ETL]-fullstack-integration/tasks.md` for the full step details, then execute the following 6 major steps strictly in order. Use the Playwright browser to simulate a real user throughout; do not call APIs directly.\n\n## Step 1: service startup and health check\n- Start the backend with controlPwshProcess: uvicorn app.main:app --host 0.0.0.0 --port 8000, cwd=apps/backend/\n- Start the frontend with controlPwshProcess: pnpm dev, cwd=apps/admin-web/\n- Wait until the services are ready; verify http://localhost:8000/docs and http://localhost:5173 are reachable\n- Open http://localhost:5173 in Playwright and log in (username admin, password admin123)\n- Verify login redirects to the task-config page and the sidebar menu renders correctly\n\n## Step 2: browser operations - task configuration and submission\n- On the task-config page (/), in order:\n  - Select Flow api_full (API → ODS → DWD → DWS → INDEX)\n  - Select processing mode full_window\n  - Set the time-window mode to [custom], start 2025-7-01, end = current time\n  - Window split [by day], split size 30 days\n  - Check force_full (forced full load)\n  - In the task picker select all common tasks with is_common=True (41 in total)\n- Confirm the CLI command preview shows the complete parameters\n- Click [Run now] (SendOutlined icon) to trigger POST /api/execution/run\n- Confirm the success toast and record the execution_id\n\n## Step 3: execution monitoring and DEBUG\n- Navigate to the [Task Manager] page (/task-manager)\n- In the [Queue] tab confirm the task status is running\n- Click the running task row to open the WebSocket live-log drawer\n- Check the page at an elastic interval of 30 seconds to 20 minutes as needed\n- Watch the logs for the keywords ERROR / CRITICAL / Traceback / Exception / WARNING\n- Report a timeout warning after 20 consecutive minutes with no new log output\n- Stop monitoring when the task finishes (success/failed/cancelled)\n- Collect all ERROR and WARNING log lines with context and classify the error types\n- If the task failed, switch to the [History] tab for the full execution details\n\n## Step 4: performance timing and report generation\n- In the [History] tab click the finished task to view its execution details\n- Fetch the full logs via GET /api/execution/{id}/logs\n- Extract start/end times of each 30-day window slice from the logs and compute durations\n- Identify the ODS / DWD / DWS / INDEX stage durations and flag the Top-5 bottlenecks\n- Generate the combined integration report at {SYSTEM_LOG_ROOT}/{date}__etl_integration_report.md\n- Include: execution overview, performance report (per-slice duration comparison, Top-5), DEBUG report\n\n## Step 5: black-box data consistency test\n- Run the end-to-end checker: uv run python scripts/ops/etl_consistency_check.py (cwd=C:\\\\NeoZQYY)\n  - The script automatically finds the latest ETL logs under LOG_ROOT and reads the API JSON from FETCH_ROOT\n  - It connects to the database (PG_DSN) and compares table by table, column by column: API vs ODS, ODS vs DWD, DWD vs DWS\n  - Whitelist: ETL_META_COLS and SCD2_COLS excluded; API empty string vs DB None treated as equal\n  - The report goes to ETL_REPORT_ROOT\n- Check FlowRunner's built-in consistency report (auto-generated under ETL_REPORT_ROOT)\n- Compare whether the conclusions of the two reports agree\n- Append the black-box summary to the step-4 combined report (pass/fail counts, whitelist differences, list of failing tables)\n\n## Step 6: service cleanup\n- Close the Playwright browser instance\n- Stop the uvicorn backend process (controlPwshProcess stop)\n- Stop the pnpm dev frontend process (controlPwshProcess stop)\n- Report the integration run's completion status\n\n## Environment and conventions\n- Load env vars from the root .env (load_dotenv); a missing var must raise an error, never fall back silently\n- Use the test database (PG_DSN points at test_etl_feiqiu)\n- Report paths follow the export-paths convention, read from env vars\n- Required env vars: PG_DSN, FETCH_ROOT, LOG_ROOT, ETL_REPORT_ROOT, SYSTEM_LOG_ROOT"
  },
  "workspaceFolderName": "NeoZQYY",
  "shortName": "etl-fullstack-integration"
}
@@ -1,15 +0,0 @@
{
  "enabled": true,
  "name": "ETL Unified Analysis",
  "description": "Manually triggered unified ETL analysis: merges data-flow structure analysis and data consistency checking into one flow. Supports --mode structure|consistency|full (default full) and --source api|etl-log (default api, actively collecting the last 60 days).",
  "version": "1.0.0",
  "when": {
    "type": "userTriggered"
  },
  "then": {
    "type": "askAgent",
    "prompt": "Run the unified ETL analysis with the steps below. If it appears already done or leftover artifacts from a previous run exist, clear them and re-run:\n\nRun `python scripts/ops/etl_unified_analysis.py`\n\nDefault behavior (full mode):\n1. Phase 1: data-flow structure analysis\n   - Run analyze_dataflow.py to collect API JSON, DB table structures, three-layer field mappings, and BD_manual business descriptions (last 60 days by default)\n   - Run gen_dataflow_report.py to generate the structure analysis report\n2. Phase 2: ETL data consistency check\n   - Run etl_consistency_check.py to compare API→ODS→DWD→DWS table by table, column by column\n   - Show each table's data cutoff date (MAX of create_time/createtime/fetched_at)\n3. Phase 3: report merge\n   - Merge the two reports into one unified report under ETL_REPORT_ROOT\n\nOptional flags:\n- `--mode structure` run structure analysis only\n- `--mode consistency` run the consistency check only\n- `--source etl-log` read the ETL on-disk JSON instead of actively calling the API\n- `--date-from YYYY-MM-DD` start date\n- `--date-to YYYY-MM-DD` end date\n- `--limit N` max records per endpoint\n- `--tables t1,t2` restrict the analyzed tables\n\nWhitelist rules (inherited from v5):\n- ETL metadata columns (source_file, source_endpoint, fetched_at, payload, content_hash)\n- DWD dimension-table SCD2 management columns (valid_from, valid_to, is_current, etl_loaded_at, etl_batch_id)\n- API siteProfile nested object fields\n- Time-format equivalence: different representations of the same instant count as identical content\n- Whitelisted fields still participate in checks and statistics; they are only collapsed in the report with a note explaining why\n\nNotes:\n- Only the feiqiu connector is analyzed for now\n- Use the test database (TEST_DB_DSN), read-only mode"
  },
  "workspaceFolderName": "NeoZQYY",
  "shortName": "etl-unified-analysis"
}
@@ -1,14 +0,0 @@
{
  "enabled": true,
  "name": "Field Disappearance Scan",
  "description": "Manually triggered DWD field-disappearance scan: detects columns whose values suddenly become all empty from some day onward (≥3 days and ≥20 consecutive empty records). Outputs a terminal report + CSV.",
  "version": "1",
  "when": {
    "type": "userTriggered"
  },
  "then": {
    "type": "runCommand",
    "command": "python scripts/ops/field_disappearance_scan.py",
    "timeout": 300
  }
}
@@ -1,13 +0,0 @@
{
  "enabled": true,
  "name": "H5 Prototype Screenshots",
  "description": "Manually triggered: start an HTTP server → run screenshot_h5_pages.py to batch-capture every H5 prototype page under docs/h5_ui/pages/ (iPhone 15 Pro Max, 430×932, DPR:3), output to docs/h5_ui/screenshots/. Shut the server down when done.",
  "version": "1",
  "when": {
    "type": "userTriggered"
  },
  "then": {
    "type": "askAgent",
    "prompt": "Run the H5 prototype batch screenshot flow:\n1. Start an HTTP server: `python -m http.server 8765 --directory docs/h5_ui/pages` (in the background via controlPwshProcess, cwd C:\\NeoZQYY)\n2. Wait 2 seconds to confirm the server is ready\n3. Run the screenshot script: `python C:\\NeoZQYY\\scripts\\ops\\screenshot_h5_pages.py` (cwd C:\\NeoZQYY, timeout 180s)\n4. Check the output: list the names and sizes of docs/h5_ui/screenshots/*.png; confirm the count and that key interaction-state screenshots have plausible sizes\n5. Stop the HTTP server (controlPwshProcess stop)\n6. Report briefly: total screenshots, pixel-size verification (should be 1290×N), and any anomalous files"
  }
}
@@ -1,16 +0,0 @@
{
  "enabled": false,
  "name": "Pre-Change Research Guard",
  "description": "Before a write executes, check whether the mandatory pre-change research for logic changes (audit history, doc reading, context summary) has been completed. If not, block the write until the research flow is done.",
  "version": "1",
  "when": {
    "type": "preToolUse",
    "toolTypes": [
      "write"
    ]
  },
  "then": {
    "type": "askAgent",
    "prompt": "You are about to perform a write operation. Confirm:\n\n1. Does this write involve a logic change (ETL/business rules/API/data model/frontend interaction)?\n2. If it does, have you completed the pre-change research via the context-gatherer sub-agent, output a context summary to the user, and received confirmation?\n\nIf an exception applies (pure formatting/comments/doc-only text/config files/.kiro directory/user explicitly skipped/new file touching no existing logic), proceed directly.\nIf the research is not done, stop the write, complete the research flow with the context-gatherer sub-agent first, then continue."
  }
}
@@ -1,15 +0,0 @@
{
  "enabled": true,
  "name": "Prompt On Submit (Merged)",
  "description": "Merged hook: on every prompt submission, run risk flagging + prompt logging + git snapshot. Pure shell, zero tokens.",
  "version": "1",
  "when": {
    "type": "promptSubmit"
  },
  "then": {
    "type": "runCommand",
    "command": "python C:/NeoZQYY/.kiro/scripts/prompt_on_submit.py"
  },
  "workspaceFolderName": "NeoZQYY",
  "shortName": "prompt-on-submit"
}
@@ -1,16 +0,0 @@
{
  "enabled": true,
  "name": "REPL Hijack Detection and Recovery",
  "description": "After a shell command runs, inspect the output; if REPL-hijack symptoms appear (exit code 0 with no output, or a >>> prompt), first try an exit command to recover, otherwise ask the user to kill the process manually.",
  "version": "1",
  "when": {
    "type": "postToolUse",
    "toolTypes": [
      "shell"
    ]
  },
  "then": {
    "type": "askAgent",
    "prompt": "Inspect the output of the shell command that just ran for REPL-hijack symptoms:\n1. Exit code 0 but no output at all (for a command expected to produce output)\n2. Python REPL prompts such as `>>>` or `...` in the output\n3. Node REPL prompts such as `>` in the output\n\nIf symptoms are detected:\n- Step 1: immediately run `exit` to try to leave the REPL\n- Step 2: run a probe command (e.g. `echo \"shell_ok\"`) to confirm the shell recovered\n- If recovery succeeded: re-run the original command\n- If still stuck: stop retrying and ask the user to run `Get-Process python* | Stop-Process -Force` in an external terminal, then continue after the user confirms\n\nIf there are no symptoms, pass through and do nothing."
  }
}
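The symptom checks this hook describes can be sketched as a small heuristic; the function and its parameters are illustrative, not part of the hook itself:

```python
import re

# REPL prompt markers at the start of a line: Python ">>>"/"...", Node ">"
REPL_PROMPTS = re.compile(r"^(>>>|\.\.\.|>)\s", re.MULTILINE)


def looks_like_repl_hijack(exit_code, output, expects_output=True):
    """Heuristic for the REPL-hijack symptoms listed above."""
    if exit_code == 0 and expects_output and not output.strip():
        return True  # symptom 1: exit 0 but silent
    return bool(REPL_PROMPTS.search(output))  # symptoms 2-3: prompt markers
```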
@@ -1,15 +0,0 @@
{
  "enabled": true,
  "name": "Manual: Run /audit (via audit-writer subagent)",
  "description": "On demand: read the audit context pre-built by agent-on-stop plus the session index, then launch the audit-writer sub-agent to write the audit artifacts + proofread docs + full DB-docs reconciliation + session linking. Rebuilds the context automatically when it is stale.",
  "version": "11",
  "when": {
    "type": "userTriggered"
  },
  "then": {
    "type": "askAgent",
    "prompt": "Run the /audit flow:\n\n**Step zero: get the current time**: run `python -c \"from datetime import datetime, timezone, timedelta; print(datetime.now(timezone(timedelta(hours=8))).isoformat())\"` to get the current Beijing time, call it `now`. Base every later \"older than 30 minutes\" check on this `now`.\n\n**Pre-check**: read `.kiro/state/.audit_context.json` and check the `built_at` timestamp. If the file is missing or `built_at` is more than 30 minutes before `now`, first run `python .kiro/scripts/agent_on_stop.py --force-rebuild` to rebuild the context, then re-read it.\n\n**Session index**: read `docs/audit/session_logs/_session_index.json`, find the entry closest in time to this conversation (match by `startTime`), and extract its `description` (LLM operation summary) and `summary` (structured summary). Use them to:\n- Source the operation summary in the audit record header (more accurate than inferring from the diff)\n- Cross-check the session_diff in audit_context.json (files_modified/created)\n- Record the session executionId linked to this audit, creating a bidirectional link\n\n**Main flow**: launch the sub-agent named audit-writer with this instruction:\n\n> Read `.kiro/state/.audit_context.json` as your primary input, plus the matching session-index entry. Do not run git status/diff or scan files yourself. audit_context.json already contains: the changed-file list, high-risk file diffs, the compliance checklist (missing docs/migration status/DDL baseline/API changes/OpenAPI spec status), this conversation's exact changes (session_diff: added/modified/deleted), and the Prompt-ID for traceability. Complete the audit write-out + doc back-fill per the execution strategy defined in audit-writer.md.\n\nConstraints:\n- The sub-agent must not re-run git status --porcelain or full git diff scans; everything it needs is pre-built in .audit_context.json.\n- When it needs file contents (e.g. to update docs), it may read those files directly, but must not walk the whole repo.\n- The sub-agent must invoke skills as needed: steering-readme-maintainer, change-annotation-audit, bd-manual-db-docs (only when their trigger conditions hold).\n- It must automatically back-fill missing doc sync based on compliance.code_without_docs.\n- When reasons contains db-schema-change, it must run the full DB-docs reconciliation: connect to the test database (TEST_DB_DSN), query information_schema, compare against all existing docs under docs/database/, and fill in or update every missing/stale table description (not just the tables touched by this change), outputting a reconciliation summary.\n- It should use the added/modified/deleted lists in session_diff to pin down this conversation's exact change scope.\n- **Session linking**: add a `session_id` field (first 8 chars of the executionId) to the audit record header (docs/audit/changes/*.md) and write the session index's description into the record as the operation summary. This establishes a bidirectional link between audit records and session logs.\n- It must generate change annotations for all changed files (step 5) in the record's change-annotation section, including: change type, original reason, approach, and result. Detailed annotations for high-risk files, one line for ordinary modifications, reason only for deleted files.\n- If compliance.api_changed=true and compliance.openapi_spec_stale=true, run `python scripts/ops/_export_openapi.py` to re-export the OpenAPI spec; on failure, flag it in the audit record as pending manual export; on success, remind the user to reconnect the OpenAPI Power MCP server.\n- All audit artifacts go under docs/audit/; never inside submodules.\n- When done, set audit_required to false in .kiro/state/.audit_state.json.\n- Run `python scripts/audit/gen_audit_dashboard.py` to refresh the audit dashboard.\n- **Doc map update**: after the audit completes, automatically update `docs/DOCUMENTATION-MAP.md`:\n  - Identify the doc changes covered by this audit (from the audit record)\n  - Scan docs/ and each module's internal docs for changes (added, modified, deleted)\n  - Pay special attention to new BD_Manual files under the database docs (docs/database/)\n  - Update the map's entries based on the doc changes found\n  - Keep the map structurally complete, with every important doc recorded\n- The final reply must be a very short receipt: done/files_written/next_step."
  },
  "workspaceFolderName": "NeoZQYY",
  "shortName": "audit"
}
@@ -1,15 +0,0 @@
{
  "enabled": true,
  "name": "Session description maker",
  "description": "Manually triggered: for session logs missing a description, call the Bailian Qwen API to generate a summary and write it into the dual index. askAgent mode shows live output.",
  "version": "1",
  "when": {
    "type": "userTriggered"
  },
  "then": {
    "type": "askAgent",
    "prompt": "Run the following command in the background and show its live output: python -B C:/NeoZQYY/scripts/ops/batch_generate_summaries.py"
  },
  "workspaceFolderName": "NeoZQYY",
  "shortName": "session-summary"
}
@@ -1,39 +0,0 @@
# -*- coding: utf-8 -*-
"""cwd validation helper, shared by every script under .kiro/scripts/.

Usage:
    from _ensure_root import ensure_repo_root
    ensure_repo_root()

Delegates to neozqyy_shared.repo_root (the shared package); falls back when it is not installed.
"""
from __future__ import annotations

import os
import warnings
from pathlib import Path


def ensure_repo_root() -> Path:
    """Verify cwd is the repo root; switch to it automatically if not."""
    try:
        from neozqyy_shared.repo_root import ensure_repo_root as _shared
        return _shared()
    except ImportError:
        pass
    # fallback
    cwd = Path.cwd()
    if (cwd / "pyproject.toml").is_file() and (cwd / ".kiro").is_dir():
        return cwd
    root = Path(__file__).resolve().parents[2]
    if (root / "pyproject.toml").is_file() and (root / ".kiro").is_dir():
        os.chdir(root)
        warnings.warn(
            f"cwd is not the repo root; switched automatically: {cwd} → {root}",
            stacklevel=2,
        )
        return root
    raise RuntimeError(
        f"Cannot locate the repo root. Current cwd={cwd}, inferred root={root}. "
        f"Run the script from the repo root."
    )
@@ -1,650 +0,0 @@
#!/usr/bin/env python3
"""agent_on_stop — merged agentStop hook script (v3: with LLM summary generation).

Merges the former audit_reminder + change_compliance_prescan + build_audit_context + session_extract:
1. Full session-log extraction → docs/audit/session_logs/ (whether or not code changed)
2. Call the Bailian API to generate a description for the freshly extracted session → write the dual index
3. Scan the workspace → compare against the promptSubmit baseline → detect this conversation's exact changes
4. If nothing changed at all → skip the review and exit silently
5. Compliance prescan → .kiro/state/.compliance_state.json
6. Build the audit context → .kiro/state/.audit_context.json
7. Audit reminder (rate-limited to 15 minutes) → stderr

Change detection compares a file mtime+size baseline; it does not rely on git commit history.
Every functional block is isolated in try/except so one failure does not affect the others.
"""

import hashlib
import json
import os
import re
import subprocess
import sys
from datetime import datetime, timezone, timedelta

# Import the file-baseline module from this directory + cwd validation
sys.path.insert(0, os.path.dirname(os.path.abspath(__file__)))
from file_baseline import scan_workspace, load_baseline, diff_baselines, total_changes
from _ensure_root import ensure_repo_root
TZ_TAIPEI = timezone(timedelta(hours=8))
MIN_INTERVAL = timedelta(minutes=15)

# Path constants
STATE_PATH = os.path.join(".kiro", "state", ".audit_state.json")
COMPLIANCE_PATH = os.path.join(".kiro", "state", ".compliance_state.json")
CONTEXT_PATH = os.path.join(".kiro", "state", ".audit_context.json")
PROMPT_ID_PATH = os.path.join(".kiro", "state", ".last_prompt_id.json")
# Noise paths (to filter non-business files out of the change list)
NOISE_PATTERNS = [
    re.compile(r"^docs/audit/"),
    re.compile(r"^\.kiro/"),
    re.compile(r"^\.hypothesis/"),
    re.compile(r"^tmp/"),
    re.compile(r"\.png$"),
    re.compile(r"\.jpg$"),
]

# High-risk paths
HIGH_RISK_PATTERNS = [
    re.compile(r"^apps/etl/connectors/feiqiu/(api|cli|config|database|loaders|models|orchestration|scd|tasks|utils|quality)/"),
    re.compile(r"^apps/backend/app/"),
    re.compile(r"^apps/admin-web/src/"),
    re.compile(r"^apps/miniprogram/"),
    re.compile(r"^packages/shared/"),
    re.compile(r"^db/"),
]

# Doc mapping (for the compliance check)
DOC_MAP = {
    "apps/backend/app/routers/": ["apps/backend/docs/API-REFERENCE.md", "docs/contracts/openapi/backend-api.json"],
    "apps/backend/app/services/": ["apps/backend/docs/API-REFERENCE.md", "apps/backend/README.md"],
    "apps/backend/app/auth/": ["apps/backend/docs/API-REFERENCE.md", "apps/backend/README.md", "docs/contracts/openapi/backend-api.json"],
    "apps/backend/app/schemas/": ["docs/contracts/openapi/backend-api.json"],
    "apps/backend/app/main.py": ["docs/contracts/openapi/backend-api.json"],
    "apps/etl/connectors/feiqiu/tasks/": ["apps/etl/connectors/feiqiu/docs/etl_tasks/"],
    "apps/etl/connectors/feiqiu/loaders/": ["apps/etl/connectors/feiqiu/docs/etl_tasks/"],
    "apps/etl/connectors/feiqiu/scd/": ["apps/etl/connectors/feiqiu/docs/business-rules/scd2_rules.md"],
    "apps/etl/connectors/feiqiu/orchestration/": ["apps/etl/connectors/feiqiu/docs/architecture/"],
    "apps/admin-web/src/": ["apps/admin-web/README.md"],
    "apps/miniprogram/": ["apps/miniprogram/README.md"],
    "packages/shared/": ["packages/shared/README.md"],
}

# API-change detection patterns (routers / auth / schemas / main.py)
API_CHANGE_PATTERNS = [
    re.compile(r"^apps/backend/app/routers/"),
    re.compile(r"^apps/backend/app/auth/"),
    re.compile(r"^apps/backend/app/schemas/"),
    re.compile(r"^apps/backend/app/main\.py$"),
]

MIGRATION_PATTERNS = [
    re.compile(r"^db/etl_feiqiu/migrations/.*\.sql$"),
    re.compile(r"^db/zqyy_app/migrations/.*\.sql$"),
    re.compile(r"^db/fdw/.*\.sql$"),
]

BD_MANUAL_PATTERN = re.compile(r"^docs/database/BD_Manual_.*\.md$")
DDL_BASELINE_DIR = "docs/database/ddl/"
AUDIT_CHANGES_DIR = "docs/audit/changes/"
def now_taipei():
    return datetime.now(TZ_TAIPEI)


def sha1hex(s: str) -> str:
    return hashlib.sha1(s.encode("utf-8")).hexdigest()


def is_noise(f: str) -> bool:
    return any(p.search(f) for p in NOISE_PATTERNS)


def safe_read_json(path):
    if not os.path.isfile(path):
        return {}
    try:
        with open(path, "r", encoding="utf-8") as f:
            return json.load(f)
    except Exception:
        return {}


def write_json(path, data):
    os.makedirs(os.path.dirname(path) or os.path.join(".kiro", "state"), exist_ok=True)
    with open(path, "w", encoding="utf-8") as f:
        json.dump(data, f, indent=2, ensure_ascii=False)


def git_diff_stat():
    try:
        r = subprocess.run(
            ["git", "diff", "--stat", "HEAD"],
            capture_output=True, text=True, encoding="utf-8", errors="replace", timeout=15
        )
        return r.stdout.strip() if r.returncode == 0 else ""
    except Exception:
        return ""
def git_diff_files(files, max_total=30000, max_per_file=15000):
    """Fetch actual diff content: git diff HEAD for tracked files, raw content for new files."""
    if not files:
        return ""
    all_diff = []
    total_len = 0
    for f in files:
        if total_len >= max_total:
            all_diff.append(f"\n[TRUNCATED: diff exceeds {max_total // 1000}KB]")
            break
        try:
            # try git diff HEAD first
            r = subprocess.run(
                ["git", "diff", "HEAD", "--", f],
                capture_output=True, text=True, encoding="utf-8", errors="replace", timeout=10
            )
            chunk = ""
            if r.returncode == 0 and r.stdout.strip():
                chunk = r.stdout.strip()
            elif os.path.isfile(f):
                # untracked new file: read its content directly as the diff
                try:
                    with open(f, "r", encoding="utf-8", errors="replace") as fh:
                        file_content = fh.read(max_per_file + 100)
                    chunk = f"--- /dev/null\n+++ b/{f}\n@@ -0,0 +1 @@\n" + file_content
                except Exception:
                    continue

            if chunk:
                if len(chunk) > max_per_file:
                    chunk = chunk[:max_per_file] + f"\n[TRUNCATED: {f} diff too long]"
                all_diff.append(chunk)
                total_len += len(chunk)
        except Exception:
            continue
    return "\n".join(all_diff)


def get_latest_prompt_log():
    log_dir = os.path.join("docs", "audit", "prompt_logs")
    if not os.path.isdir(log_dir):
        return ""
    try:
        files = sorted(
            [f for f in os.listdir(log_dir) if f.startswith("prompt_log_")],
            reverse=True
        )
        if not files:
            return ""
        with open(os.path.join(log_dir, files[0]), "r", encoding="utf-8") as f:
            content = f.read()
        return content[:3000] + "\n[TRUNCATED]" if len(content) > 3000 else content
    except Exception:
        return ""
# ── Step 1: detect changes via the file baseline ──
def detect_changes_via_baseline():
    """Scan the current workspace, diff against the promptSubmit baseline, and return the exact change list.

    Returns (all_changed_files, external_files, diff_result, no_change)
    - all_changed_files: every file changed during this conversation (added + modified)
    - external_files: currently always empty (see the comment below)
    - diff_result: the full diff result {added, modified, deleted}
    - no_change: whether nothing changed at all
    """
    before = load_baseline()
    after = scan_workspace(".")

    if not before:
        # no baseline (first run or baseline lost): nothing to compare against, treat as no change
        return [], [], {"added": [], "modified": [], "deleted": []}, True

    diff = diff_baselines(before, after)
    count = total_changes(diff)

    if count == 0:
        return [], [], diff, True

    # all changed files = added + modified (deleted files no longer exist, so they skip risk classification)
    all_changed = sorted(set(diff["added"] + diff["modified"]))

    # filter noise
    real_files = [f for f in all_changed if not is_noise(f)]

    if not real_files:
        return [], [], diff, True

    # External changes: for now every baseline-detected change would qualify,
    # because Kiro's own writes also touch mtime, so "external" here would mean
    # "every change during this conversation", Kiro's and non-Kiro's alike.
    # Precise separation needs the list of files Kiro wrote, which the runtime
    # does not yet provide.
    external_files = []  # no longer mis-report external changes

    return real_files, external_files, diff, False
# ── Step 3: compliance prescan ──
def do_compliance_prescan(all_files):
    result = {
        "new_migration_sql": [],
        "new_or_modified_sql": [],
        "code_without_docs": [],
        "new_files": [],
        "has_bd_manual": False,
        "has_audit_record": False,
        "has_ddl_baseline": False,
        "api_changed": False,
        "openapi_spec_stale": False,
    }

    code_files = []
    doc_files = set()

    for f in all_files:
        if is_noise(f):
            continue
        for mp in MIGRATION_PATTERNS:
            if mp.search(f):
                result["new_migration_sql"].append(f)
                break
        if f.endswith(".sql"):
            result["new_or_modified_sql"].append(f)
        if BD_MANUAL_PATTERN.search(f):
            result["has_bd_manual"] = True
        if f.startswith(AUDIT_CHANGES_DIR):
            result["has_audit_record"] = True
        if f.startswith(DDL_BASELINE_DIR):
            result["has_ddl_baseline"] = True
        if f.endswith(".md") or "/docs/" in f:
            doc_files.add(f)
        if f.endswith((".py", ".ts", ".tsx", ".js", ".jsx")):
            code_files.append(f)
        # detect API-related file changes
        for ap in API_CHANGE_PATTERNS:
            if ap.search(f):
                result["api_changed"] = True
                break

    # API changed but the openapi spec was not updated in sync → mark it stale
    if result["api_changed"] and "docs/contracts/openapi/backend-api.json" not in all_files:
        result["openapi_spec_stale"] = True

    for cf in code_files:
        expected_docs = []
        for prefix, docs in DOC_MAP.items():
            if cf.startswith(prefix):
                expected_docs.extend(docs)
        if expected_docs:
            has_doc = False
            for ed in expected_docs:
                if ed in doc_files:
                    has_doc = True
                    break
                if ed.endswith("/") and any(d.startswith(ed) for d in doc_files):
                    has_doc = True
                    break
            if not has_doc:
                result["code_without_docs"].append({
                    "file": cf,
                    "expected_docs": expected_docs,
                })

    needs_check = bool(
        result["new_migration_sql"]
        or result["code_without_docs"]
        or result["openapi_spec_stale"]
    )

    now = now_taipei()
    write_json(COMPLIANCE_PATH, {
        "needs_check": needs_check,
        "scanned_at": now.isoformat(),
        **result,
    })
    return result
# ── Step 4: build the audit context ──
def do_build_audit_context(all_files, diff_result, compliance):
    now = now_taipei()
    audit_state = safe_read_json(STATE_PATH)
    prompt_info = safe_read_json(PROMPT_ID_PATH)

    # Merge changed_files from audit_state (risk files from git status)
    # with this conversation's baseline diff
    git_changed = audit_state.get("changed_files", [])
    session_changed = all_files  # files changed during this conversation

    # merge the two sources and dedupe
    all_changed = sorted(set(git_changed + session_changed))

    high_risk_files = [
        f for f in all_changed
        if any(p.search(f) for p in HIGH_RISK_PATTERNS)
    ]

    diff_stat = git_diff_stat()
    high_risk_diff = git_diff_files(high_risk_files)
    prompt_log = get_latest_prompt_log()

    context = {
        "built_at": now.isoformat(),
        "prompt_id": prompt_info.get("prompt_id", "unknown"),
        "prompt_at": prompt_info.get("at", ""),
        "audit_required": audit_state.get("audit_required", False),
        "db_docs_required": audit_state.get("db_docs_required", False),
        "reasons": audit_state.get("reasons", []),
        "changed_files": all_changed[:100],
        "high_risk_files": high_risk_files,
        "session_diff": {
            "added": diff_result.get("added", [])[:50],
            "modified": diff_result.get("modified", [])[:50],
            "deleted": diff_result.get("deleted", [])[:50],
        },
        "compliance": {
            "code_without_docs": compliance.get("code_without_docs", []),
            "new_migration_sql": compliance.get("new_migration_sql", []),
            "has_bd_manual": compliance.get("has_bd_manual", False),
            "has_audit_record": compliance.get("has_audit_record", False),
            "has_ddl_baseline": compliance.get("has_ddl_baseline", False),
            "api_changed": compliance.get("api_changed", False),
            "openapi_spec_stale": compliance.get("openapi_spec_stale", False),
        },
        "diff_stat": diff_stat,
        "high_risk_diff": high_risk_diff,
        "latest_prompt_log": prompt_log,
    }

    write_json(CONTEXT_PATH, context)
# ── 步骤 5:审计提醒(15 分钟限频) ──
|
||||
def do_audit_reminder(real_files):
|
||||
state = safe_read_json(STATE_PATH)
|
||||
if not state.get("audit_required"):
|
||||
return
|
||||
|
||||
# 无变更时不提醒
|
||||
if not real_files:
|
||||
return
|
||||
|
||||
now = now_taipei()
|
||||
last_str = state.get("last_reminded_at")
|
||||
if last_str:
|
||||
try:
|
||||
last = datetime.fromisoformat(last_str)
|
||||
if (now - last) < MIN_INTERVAL:
|
||||
return
|
||||
except Exception:
|
||||
pass
|
||||
|
||||
state["last_reminded_at"] = now.isoformat()
|
||||
write_json(STATE_PATH, state)
|
||||
|
||||
reasons = state.get("reasons", [])
|
||||
reason_text = ", ".join(reasons) if reasons else "high-risk paths changed"
|
||||
|
||||
# 仅信息性提醒,exit(0) 避免 agent 将其视为错误并自行执行审计
|
||||
# 审计留痕统一由用户手动触发 /audit 完成
|
||||
sys.stderr.write(
|
||||
f"[AUDIT REMINDER] Pending audit ({reason_text}), "
|
||||
f"{len(real_files)} files changed this session. "
|
||||
f"Run /audit to sync. (15min rate limit)\n"
|
||||
)
|
||||
sys.exit(0)
|
||||
|
||||
|
||||
# ── 步骤 6:全量会话记录提取 ──
|
||||
def do_full_session_extract():
|
||||
"""从 Kiro globalStorage 提取当前 execution 的全量对话记录。
|
||||
调用 scripts/ops/extract_kiro_session.py 的核心逻辑。
|
||||
仅提取最新一条未索引的 execution,避免重复。
|
||||
"""
|
||||
# 动态导入提取器(避免启动时 import 开销)
|
||||
scripts_ops = os.path.join(os.path.dirname(os.path.abspath(__file__)), "..", "..", "scripts", "ops")
|
||||
scripts_ops = os.path.normpath(scripts_ops)
|
||||
if scripts_ops not in sys.path:
|
||||
sys.path.insert(0, scripts_ops)
|
||||
|
||||
try:
|
||||
from extract_kiro_session import extract_latest
|
||||
except ImportError:
|
||||
return # 提取器不存在则静默跳过
|
||||
|
||||
# globalStorage 路径:从环境变量或默认位置
|
||||
global_storage = os.environ.get(
|
||||
"KIRO_GLOBAL_STORAGE",
|
||||
os.path.join(os.environ.get("APPDATA", ""), "Kiro", "User", "globalStorage")
|
||||
)
|
||||
workspace_path = os.getcwd()
|
||||
|
||||
extract_latest(global_storage, workspace_path)
|
||||
|
||||
|
||||
def _extract_summary_content(md_content: str) -> str:
|
||||
"""从 session log markdown 中提取适合生成摘要的内容。
|
||||
|
||||
策略:如果"用户输入"包含 CONTEXT TRANSFER(跨轮续接),
|
||||
则替换为简短标注,避免历史背景干扰本轮摘要生成。
|
||||
"""
|
||||
import re
|
||||
# 检测用户输入中是否包含 context transfer
|
||||
ct_pattern = re.compile(r"## 2\. 用户输入\s*\n```\s*\n.*?CONTEXT TRANSFER", re.DOTALL)
|
||||
if ct_pattern.search(md_content):
|
||||
# 替换"用户输入"section 为简短标注
|
||||
# 匹配从 "## 2. 用户输入" 到下一个 "## 3." 之间的内容
|
||||
md_content = re.sub(
|
||||
r"(## 2\. 用户输入)\s*\n```[\s\S]*?```\s*\n(?=## 3\.)",
|
||||
r"\1\n\n[本轮为 Context Transfer 续接,用户输入为历史多轮摘要,已省略。请基于执行摘要和对话记录中的实际工具调用判断本轮工作。]\n\n",
|
||||
md_content,
|
||||
)
|
||||
return md_content
|
||||
|
||||
|
||||
# ── 步骤 7:为最新 session 生成 LLM 摘要 ──
|
||||
_SUMMARY_SYSTEM_PROMPT = """你是一个专业的技术对话分析师。你的任务是为 AI 编程助手的一轮执行(execution)生成简洁的中文摘要。
|
||||
|
||||
背景:一个对话(chatSession)包含多轮执行(execution)。每轮执行 = 用户发一条消息 → AI 完成响应。你收到的是单轮执行的完整记录。
|
||||
|
||||
摘要规则:
|
||||
1. 只描述本轮执行实际完成的工作,不要描述历史背景
|
||||
2. 列出完成的功能点/任务(一轮可能完成多个)
|
||||
3. 包含关键技术细节:文件路径、模块名、数据库表、API 端点等
|
||||
4. bug 修复要说明原因和方案
|
||||
5. 不写过程性描述("用户说..."),只写结果
|
||||
6. 内容太短或无实质内容的,写"无实质内容"
|
||||
7. 不限字数,信息完整优先,避免截断失真
|
||||
|
||||
重要:
|
||||
- "执行摘要"(📋)是最可靠的信息源,优先基于它判断本轮做了什么
|
||||
- 如果"用户输入"包含 CONTEXT TRANSFER,那是之前多轮的历史摘要,不是本轮工作
|
||||
- 对话记录中的实际工具调用和文件变更才是本轮的真实操作
|
||||
|
||||
请直接输出摘要,不要添加任何前缀或解释。"""
|
||||
|
||||
|
||||
def do_generate_description():
|
||||
"""为缺少 description 的主对话 entry 调用百炼 API 生成摘要,写入双索引。"""
|
||||
from dotenv import load_dotenv
|
||||
load_dotenv()
|
||||
|
||||
api_key = os.environ.get("BAILIAN_API_KEY", "")
|
||||
if not api_key:
|
||||
return
|
||||
|
||||
model = os.environ.get("BAILIAN_MODEL", "qwen-plus")
|
||||
base_url = os.environ.get("BAILIAN_BASE_URL", "https://dashscope.aliyuncs.com/compatible-mode/v1")
|
||||
|
||||
scripts_ops = os.path.join(os.path.dirname(os.path.abspath(__file__)), "..", "..", "scripts", "ops")
|
||||
scripts_ops = os.path.normpath(scripts_ops)
|
||||
if scripts_ops not in sys.path:
|
||||
sys.path.insert(0, scripts_ops)
|
||||
|
||||
try:
|
||||
from extract_kiro_session import load_index, save_index, load_full_index, save_full_index
|
||||
except ImportError:
|
||||
return
|
||||
|
||||
index = load_index()
|
||||
entries = index.get("entries", {})
|
||||
if not entries:
|
||||
return
|
||||
|
||||
# 收集所有缺少 description 的主对话 entry
|
||||
targets = []
|
||||
for eid, ent in entries.items():
|
||||
if ent.get("is_sub"):
|
||||
continue
|
||||
if not ent.get("description"):
|
||||
targets.append((eid, ent))
|
||||
|
||||
if not targets:
|
||||
return
|
||||
|
||||
# agent_on_stop 场景下限制处理数量,避免超时
|
||||
# 批量处理积压用独立脚本 batch_generate_summaries.py
|
||||
MAX_PER_RUN = 10
|
||||
if len(targets) > MAX_PER_RUN:
|
||||
# 优先处理最新的(按 startTime 降序)
|
||||
targets.sort(key=lambda t: t[1].get("startTime", ""), reverse=True)
|
||||
targets = targets[:MAX_PER_RUN]
|
||||
|
||||
try:
|
||||
from openai import OpenAI
|
||||
client = OpenAI(api_key=api_key, base_url=base_url)
|
||||
except Exception:
|
||||
return
|
||||
|
||||
full_index = load_full_index()
|
||||
full_entries = full_index.get("entries", {})
|
||||
generated = 0
|
||||
|
||||
for target_eid, target_entry in targets:
|
||||
out_dir = target_entry.get("output_dir", "")
|
||||
if not out_dir or not os.path.isdir(out_dir):
|
||||
continue
|
||||
|
||||
# 找到该 entry 对应的 main_*.md 文件
|
||||
main_files = sorted(
|
||||
f for f in os.listdir(out_dir)
|
||||
if f.startswith("main_") and f.endswith(".md")
|
||||
and target_eid[:8] in f # 按 executionId 短码匹配
|
||||
)
|
||||
if not main_files:
|
||||
# 回退:取目录下所有 main 文件
|
||||
main_files = sorted(
|
||||
f for f in os.listdir(out_dir)
|
||||
if f.startswith("main_") and f.endswith(".md")
|
||||
)
|
||||
if not main_files:
|
||||
continue
|
||||
|
||||
content_parts = []
|
||||
for mf in main_files:
|
||||
try:
|
||||
with open(os.path.join(out_dir, mf), "r", encoding="utf-8") as fh:
|
||||
content_parts.append(fh.read())
|
||||
except Exception:
|
||||
continue
|
||||
if not content_parts:
|
||||
continue
|
||||
|
||||
content = "\n\n---\n\n".join(content_parts)
|
||||
content = _extract_summary_content(content)
|
||||
if len(content) > 60000:
|
||||
content = content[:60000] + "\n\n[TRUNCATED]"
|
||||
|
||||
try:
|
||||
resp = client.chat.completions.create(
|
||||
model=model,
|
||||
messages=[
|
||||
{"role": "system", "content": _SUMMARY_SYSTEM_PROMPT},
|
||||
{"role": "user", "content": f"请为以下单轮执行记录生成摘要:\n\n{content}"},
|
||||
],
|
||||
max_tokens=4096,
|
||||
)
|
||||
description = resp.choices[0].message.content.strip()
|
||||
except Exception:
|
||||
continue # 单条失败不影响其他
|
||||
|
||||
if not description:
|
||||
continue
|
||||
|
||||
# 写入双索引(内存中)
|
||||
entries[target_eid]["description"] = description
|
||||
if target_eid in full_entries:
|
||||
full_entries[target_eid]["description"] = description
|
||||
generated += 1
|
||||
|
||||
# 批量保存
|
||||
if generated > 0:
|
||||
save_index(index)
|
||||
save_full_index(full_index)
|
||||
|
||||
|
||||
def main():
|
||||
ensure_repo_root()
|
||||
now = now_taipei()
|
||||
force_rebuild = "--force-rebuild" in sys.argv
|
||||
|
||||
# 全量会话记录提取(无论是否有文件变更,每次对话都要记录)
|
||||
try:
|
||||
do_full_session_extract()
|
||||
except Exception:
|
||||
pass
|
||||
|
||||
# 步骤 1:基于文件基线检测变更
|
||||
real_files, external_files, diff_result, no_change = detect_changes_via_baseline()
|
||||
|
||||
# 无任何文件变更 → 跳过所有审查(除非 --force-rebuild)
|
||||
if no_change and not force_rebuild:
|
||||
return
|
||||
|
||||
# --force-rebuild 且无变更时,仍需基于 git status 重建 context
|
||||
if no_change and force_rebuild:
|
||||
try:
|
||||
compliance = do_compliance_prescan(real_files or [])
|
||||
except Exception:
|
||||
compliance = {}
|
||||
try:
|
||||
do_build_audit_context(real_files or [], diff_result, compliance)
|
||||
except Exception:
|
||||
pass
|
||||
return
|
||||
|
||||
# 步骤 2:合规预扫描(基于本次对话变更的文件)
|
||||
compliance = {}
|
||||
try:
|
||||
compliance = do_compliance_prescan(real_files)
|
||||
except Exception:
|
||||
pass
|
||||
|
||||
# 步骤 4:构建审计上下文
|
||||
try:
|
||||
do_build_audit_context(real_files, diff_result, compliance)
|
||||
except Exception:
|
||||
pass
|
||||
|
||||
# 步骤 7:审计提醒(信息性,exit(0),不触发 agent 自行审计)
|
||||
try:
|
||||
do_audit_reminder(real_files)
|
||||
except SystemExit:
|
||||
pass # exit(0) 信息性退出,不需要 re-raise
|
||||
except Exception:
|
||||
pass
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
try:
|
||||
main()
|
||||
except SystemExit as e:
|
||||
sys.exit(e.code)
|
||||
except Exception:
|
||||
pass
|
||||
@@ -1,165 +0,0 @@
#!/usr/bin/env python3
"""audit_flagger — decide whether the git worktree contains high-risk changes;
writes .kiro/state/.audit_state.json

Replaces the original PowerShell version, to avoid a Windows PowerShell 5.1 parser bug.
"""

import hashlib
import json
import os
import re
import subprocess
import sys
from datetime import datetime, timezone, timedelta

TZ_TAIPEI = timezone(timedelta(hours=8))

RISK_RULES = [
    (re.compile(r"^apps/etl/connectors/feiqiu/(api|cli|config|database|loaders|models|orchestration|scd|tasks|utils|quality)/"), "etl"),
    (re.compile(r"^apps/backend/app/"), "backend"),
    (re.compile(r"^apps/admin-web/src/"), "admin-web"),
    (re.compile(r"^apps/miniprogram/(miniapp|miniprogram)/"), "miniprogram"),
    (re.compile(r"^packages/shared/"), "shared"),
    (re.compile(r"^db/"), "db"),
]

NOISE_PATTERNS = [
    re.compile(r"^docs/audit/"),
    re.compile(r"^\.kiro/"),  # .kiro config changes do not trigger a business audit
    re.compile(r"^tmp/"),
    re.compile(r"^\.hypothesis/"),
]

DB_PATTERNS = [
    re.compile(r"^db/"),
    re.compile(r"/migrations/"),
    re.compile(r"\.sql$"),
    re.compile(r"\.prisma$"),
]

STATE_PATH = os.path.join(".kiro", "state", ".audit_state.json")


def now_taipei():
    return datetime.now(TZ_TAIPEI).isoformat()


def sha1hex(s: str) -> str:
    return hashlib.sha1(s.encode("utf-8")).hexdigest()


def get_changed_files() -> list[str]:
    """Extract changed file paths from git status --porcelain"""
    try:
        result = subprocess.run(
            ["git", "status", "--porcelain"],
            capture_output=True, text=True, timeout=10
        )
        if result.returncode != 0:
            return []
    except Exception:
        return []

    files = []
    for line in result.stdout.splitlines():
        if len(line) < 4:
            continue
        path = line[3:].strip()
        if " -> " in path:
            path = path.split(" -> ")[-1]
        path = path.strip().strip('"').replace("\\", "/")
        if path:
            files.append(path)
    return files


def is_noise(f: str) -> bool:
    return any(p.search(f) for p in NOISE_PATTERNS)


def write_state(state: dict):
    os.makedirs(os.path.join(".kiro", "state"), exist_ok=True)
    with open(STATE_PATH, "w", encoding="utf-8") as fh:
        json.dump(state, fh, indent=2, ensure_ascii=False)


def main():
    # Exit immediately when not inside a git repository
    try:
        r = subprocess.run(
            ["git", "rev-parse", "--is-inside-work-tree"],
            capture_output=True, text=True, timeout=5
        )
        if r.returncode != 0:
            return
    except Exception:
        return

    all_files = get_changed_files()
    files = sorted(set(f for f in all_files if not is_noise(f)))
    now = now_taipei()

    if not files:
        write_state({
            "audit_required": False,
            "db_docs_required": False,
            "reasons": [],
            "changed_files": [],
            "change_fingerprint": "",
            "marked_at": now,
            "last_reminded_at": None,
        })
        return

    reasons = []
    audit_required = False
    db_docs_required = False

    for f in files:
        for pattern, label in RISK_RULES:
            if pattern.search(f):
                audit_required = True
                tag = f"dir:{label}"
                if tag not in reasons:
                    reasons.append(tag)
        # Loose files in the repo root
        if "/" not in f:
            audit_required = True
            if "root-file" not in reasons:
                reasons.append("root-file")
        # DB documentation trigger
        if any(p.search(f) for p in DB_PATTERNS):
            db_docs_required = True
            if "db-schema-change" not in reasons:
                reasons.append("db-schema-change")

    fp = sha1hex("\n".join(files))

    # Preserve last_reminded_at from the existing state
    last_reminded = None
    if os.path.isfile(STATE_PATH):
        try:
            with open(STATE_PATH, "r", encoding="utf-8") as fh:
                existing = json.load(fh)
            if existing.get("change_fingerprint") == fp:
                last_reminded = existing.get("last_reminded_at")
        except Exception:
            pass

    write_state({
        "audit_required": audit_required,
        "db_docs_required": db_docs_required,
        "reasons": reasons,
        "changed_files": files[:50],
        "change_fingerprint": fp,
        "marked_at": now,
        "last_reminded_at": last_reminded,
    })


if __name__ == "__main__":
    try:
        main()
    except Exception:
        # Never block prompt submission
        pass
@@ -1,107 +0,0 @@
#!/usr/bin/env python3
"""audit_reminder — at agent stop, check for pending-audit changes; reminds at most once per 15 minutes.

Replaces the original PowerShell version, to avoid a Windows PowerShell 5.1 parser bug.
"""

import json
import os
import subprocess
import sys
from datetime import datetime, timezone, timedelta

TZ_TAIPEI = timezone(timedelta(hours=8))
STATE_PATH = os.path.join(".kiro", "state", ".audit_state.json")
MIN_INTERVAL = timedelta(minutes=15)


def now_taipei():
    return datetime.now(TZ_TAIPEI)


def load_state():
    if not os.path.isfile(STATE_PATH):
        return None
    try:
        with open(STATE_PATH, "r", encoding="utf-8") as f:
            return json.load(f)
    except Exception:
        return None


def save_state(state):
    os.makedirs(os.path.join(".kiro", "state"), exist_ok=True)
    with open(STATE_PATH, "w", encoding="utf-8") as f:
        json.dump(state, f, indent=2, ensure_ascii=False)


def get_real_changes():
    """Get the changed files with noise excluded"""
    try:
        r = subprocess.run(["git", "status", "--porcelain"], capture_output=True, text=True, timeout=10)
        if r.returncode != 0:
            return []
    except Exception:
        return []
    files = []
    for line in r.stdout.splitlines():
        if len(line) < 4:
            continue
        path = line[3:].strip().strip('"').replace("\\", "/")
        if " -> " in path:
            path = path.split(" -> ")[-1]
        # Exclude audit artifacts, .kiro config, and temp files
        if path and not path.startswith("docs/audit/") and not path.startswith(".kiro/") and not path.startswith("tmp/") and not path.startswith(".hypothesis/"):
            files.append(path)
    return sorted(set(files))


def main():
    state = load_state()
    if not state:
        sys.exit(0)

    if not state.get("audit_required"):
        sys.exit(0)

    # Clear the audit state when the worktree is clean
    real_files = get_real_changes()
    if not real_files:
        state["audit_required"] = False
        state["reasons"] = []
        state["changed_files"] = []
        state["last_reminded_at"] = None
        save_state(state)
        sys.exit(0)

    now = now_taipei()

    # 15-minute rate limit
    last_str = state.get("last_reminded_at")
    if last_str:
        try:
            last = datetime.fromisoformat(last_str)
            if (now - last) < MIN_INTERVAL:
                sys.exit(0)
        except Exception:
            pass

    # Update the reminder timestamp
    state["last_reminded_at"] = now.isoformat()
    save_state(state)

    reasons = state.get("reasons", [])
    reason_text = ", ".join(reasons) if reasons else "high-risk paths changed"
    sys.stderr.write(
        f"[AUDIT REMINDER] Pending audit detected ({reason_text}). "
        f"Run /audit (Manual: Run /audit hook) to sync docs & write audit artifacts. "
        f"(rate limit: 15min)\n"
    )
    sys.exit(1)


if __name__ == "__main__":
    try:
        main()
    except Exception:
        sys.exit(0)
@@ -1,174 +0,0 @@
#!/usr/bin/env python3
"""build_audit_context — merge the outputs of all upstream hooks into a unified audit context snapshot.

Reads:
- .kiro/state/.audit_state.json (from audit-flagger: risk verdict, changed file list)
- .kiro/state/.compliance_state.json (from change-compliance: missing docs, migration status)
- .kiro/state/.last_prompt_id.json (from prompt-audit-log: prompt ID provenance)
- git diff --stat HEAD (change summary)
- git diff HEAD (diff of high-risk files only, truncated to a reasonable length)

Output: .kiro/state/.audit_context.json (the sole input of the audit-writer subagent)
"""

import json
import os
import re
import subprocess
import sys
from datetime import datetime, timezone, timedelta

TZ_TAIPEI = timezone(timedelta(hours=8))
CONTEXT_PATH = os.path.join(".kiro", "state", ".audit_context.json")

# High-risk paths (diffs are taken only for these files, to keep the diff small)
HIGH_RISK_PATTERNS = [
    re.compile(r"^apps/etl/connectors/feiqiu/(api|cli|config|database|loaders|models|orchestration|scd|tasks|utils|quality)/"),
    re.compile(r"^apps/backend/app/"),
    re.compile(r"^apps/admin-web/src/"),
    re.compile(r"^apps/miniprogram/"),
    re.compile(r"^packages/shared/"),
    re.compile(r"^db/"),
]


def safe_read_json(path):
    if not os.path.isfile(path):
        return {}
    try:
        with open(path, "r", encoding="utf-8") as f:
            return json.load(f)
    except Exception:
        return {}


def git_diff_stat():
    try:
        r = subprocess.run(
            ["git", "diff", "--stat", "HEAD"],
            capture_output=True, text=True, encoding="utf-8", errors="replace", timeout=15
        )
        return r.stdout.strip() if r.returncode == 0 else ""
    except Exception:
        return ""


def git_diff_files(files, max_total=30000):
    """Get the git diff of the given files, truncated to max_total characters"""
    if not files:
        return ""
    # Take diffs file by file, to avoid an overlong command line
    all_diff = []
    total_len = 0
    for f in files:
        if total_len >= max_total:
            all_diff.append(f"\n[TRUNCATED: diff exceeds {max_total // 1000}KB limit]")
            break
        try:
            r = subprocess.run(
                ["git", "diff", "HEAD", "--", f],
                capture_output=True, text=True, encoding="utf-8", errors="replace", timeout=10
            )
            if r.returncode == 0 and r.stdout.strip():
                chunk = r.stdout.strip()
                # Per-file diff truncation
                if len(chunk) > 5000:
                    chunk = chunk[:5000] + f"\n[TRUNCATED: {f} diff too long]"
                all_diff.append(chunk)
                total_len += len(chunk)
        except Exception:
            continue
    return "\n".join(all_diff)


def get_latest_prompt_log():
    """Read the latest prompt log file (for provenance)"""
    log_dir = os.path.join("docs", "audit", "prompt_logs")
    if not os.path.isdir(log_dir):
        return ""
    try:
        files = sorted(
            [f for f in os.listdir(log_dir) if f.startswith("prompt_log_")],
            reverse=True
        )
        if not files:
            return ""
        latest = os.path.join(log_dir, files[0])
        with open(latest, "r", encoding="utf-8") as f:
            content = f.read()
        # Truncate overlong content
        if len(content) > 3000:
            content = content[:3000] + "\n[TRUNCATED]"
        return content
    except Exception:
        return ""


def main():
    now = datetime.now(TZ_TAIPEI)

    # Read the outputs of the upstream hooks
    audit_state = safe_read_json(os.path.join(".kiro", "state", ".audit_state.json"))
    compliance = safe_read_json(os.path.join(".kiro", "state", ".compliance_state.json"))
    prompt_id_info = safe_read_json(os.path.join(".kiro", "state", ".last_prompt_id.json"))

    # Extract the high-risk files from audit_state
    changed_files = audit_state.get("changed_files", [])
    high_risk_files = [
        f for f in changed_files
        if any(p.search(f) for p in HIGH_RISK_PATTERNS)
    ]

    # Take diffs (high-risk files only)
    diff_stat = git_diff_stat()
    high_risk_diff = git_diff_files(high_risk_files)

    # Read the latest prompt log
    prompt_log = get_latest_prompt_log()

    # Build the unified context
    context = {
        "built_at": now.isoformat(),
        "prompt_id": prompt_id_info.get("prompt_id", "unknown"),
        "prompt_at": prompt_id_info.get("at", ""),

        # from audit-flagger
        "audit_required": audit_state.get("audit_required", False),
        "db_docs_required": audit_state.get("db_docs_required", False),
        "reasons": audit_state.get("reasons", []),
        "changed_files": changed_files,
        "high_risk_files": high_risk_files,

        # from change-compliance-prescan
        "compliance": {
            "code_without_docs": compliance.get("code_without_docs", []),
            "new_migration_sql": compliance.get("new_migration_sql", []),
            "has_bd_manual": compliance.get("has_bd_manual", False),
            "has_audit_record": compliance.get("has_audit_record", False),
            "has_ddl_baseline": compliance.get("has_ddl_baseline", False),
        },

        # git summary
        "diff_stat": diff_stat,
        "high_risk_diff": high_risk_diff,

        # prompt provenance
        "latest_prompt_log": prompt_log,
    }

    os.makedirs(os.path.join(".kiro", "state"), exist_ok=True)
    with open(CONTEXT_PATH, "w", encoding="utf-8") as f:
        json.dump(context, f, indent=2, ensure_ascii=False)

    # Print a summary to stdout
    print(f"audit_context built: {len(changed_files)} files, "
          f"{len(high_risk_files)} high-risk, "
          f"{len(compliance.get('code_without_docs', []))} docs missing")


if __name__ == "__main__":
    try:
        main()
    except Exception as e:
        sys.stderr.write(f"build_audit_context failed: {e}\n")
        sys.exit(1)
@@ -1,243 +0,0 @@
#!/usr/bin/env python3
"""change_compliance_prescan — prescan the changed files and emit the items that need a compliance review.

Invoked by the askAgent hook at agentStop; gives the LLM a condensed review
checklist so it does not waste tokens scanning the files itself.

Output to stdout (read by askAgent):
- When no review is needed: prints "NO_CHECK_NEEDED"
- When a review is needed: prints a structured JSON checklist
"""

import json
import os
import re
import subprocess
import sys
from datetime import datetime, timezone, timedelta

TZ_TAIPEI = timezone(timedelta(hours=8))
STATE_PATH = os.path.join(".kiro", "state", ".audit_state.json")

# Document mapping defined in doc-map
DOC_MAP = {
    # code path prefix -> docs that should be updated in sync
    "apps/backend/app/routers/": ["apps/backend/docs/API-REFERENCE.md"],
    "apps/backend/app/services/": ["apps/backend/docs/API-REFERENCE.md", "apps/backend/README.md"],
    "apps/backend/app/auth/": ["apps/backend/docs/API-REFERENCE.md", "apps/backend/README.md"],
    "apps/etl/connectors/feiqiu/tasks/": ["apps/etl/connectors/feiqiu/docs/etl_tasks/"],
    "apps/etl/connectors/feiqiu/loaders/": ["apps/etl/connectors/feiqiu/docs/etl_tasks/"],
    "apps/etl/connectors/feiqiu/scd/": ["apps/etl/connectors/feiqiu/docs/business-rules/scd2_rules.md"],
    "apps/etl/connectors/feiqiu/orchestration/": ["apps/etl/connectors/feiqiu/docs/architecture/"],
    "apps/admin-web/src/": ["apps/admin-web/README.md"],
    "apps/miniprogram/": ["apps/miniprogram/README.md"],
    "packages/shared/": ["packages/shared/README.md"],
}

# DDL baseline files (defined in doc-map)
DDL_BASELINE_DIR = "docs/database/ddl/"

# Migration script paths
MIGRATION_PATTERNS = [
    re.compile(r"^db/etl_feiqiu/migrations/.*\.sql$"),
    re.compile(r"^db/zqyy_app/migrations/.*\.sql$"),
    re.compile(r"^db/fdw/.*\.sql$"),
]

# DB documentation path
BD_MANUAL_PATTERN = re.compile(r"^docs/database/BD_Manual_.*\.md$")

# Audit record path
AUDIT_CHANGES_DIR = "docs/audit/changes/"

# Noise paths (excluded from compliance checks)
NOISE = [
    re.compile(r"^docs/audit/"),
    re.compile(r"^\.kiro/"),
    re.compile(r"^\.hypothesis/"),
    re.compile(r"^tmp/"),
    re.compile(r"\.png$"),
    re.compile(r"\.jpg$"),
]


def safe_read_json(path):
    if not os.path.isfile(path):
        return {}
    try:
        with open(path, "r", encoding="utf-8") as f:
            return json.load(f)
    except Exception:
        return {}


def get_changed_files():
    """Get the changed files from audit_state, or fall back to git status"""
    state = safe_read_json(STATE_PATH)
    files = state.get("changed_files", [])
    if files:
        return files
    # Fall back to git status
    try:
        r = subprocess.run(
            ["git", "status", "--porcelain"],
            capture_output=True, text=True, timeout=10
        )
        if r.returncode != 0:
            return []
        result = []
        for line in r.stdout.splitlines():
            if len(line) < 4:
                continue
            path = line[3:].strip().strip('"').replace("\\", "/")
            if " -> " in path:
                path = path.split(" -> ")[-1]
            if path:
                result.append(path)
        return sorted(set(result))
    except Exception:
        return []


def is_noise(f):
    return any(p.search(f) for p in NOISE)


def classify_files(files):
    """Classify the changed files and build the review checklist"""
    result = {
        "new_migration_sql": [],    # newly added migration SQL
        "new_or_modified_sql": [],  # all SQL changes
        "code_without_docs": [],    # code changes missing matching doc changes
        "new_files": [],            # new files (to be checked against directory conventions)
        "has_bd_manual": False,     # whether a BD_Manual doc changed
        "has_audit_record": False,  # whether an audit record changed
        "has_ddl_baseline": False,  # whether the DDL baseline changed
    }

    code_files = []
    doc_files = set()

    for f in files:
        if is_noise(f):
            continue

        # Migration SQL
        for mp in MIGRATION_PATTERNS:
            if mp.search(f):
                result["new_migration_sql"].append(f)
                break

        # SQL files
        if f.endswith(".sql"):
            result["new_or_modified_sql"].append(f)

        # BD_Manual
        if BD_MANUAL_PATTERN.search(f):
            result["has_bd_manual"] = True

        # Audit records
        if f.startswith(AUDIT_CHANGES_DIR):
            result["has_audit_record"] = True

        # DDL baseline
        if f.startswith(DDL_BASELINE_DIR):
            result["has_ddl_baseline"] = True

        # Documentation files
        if f.endswith(".md") or "/docs/" in f:
            doc_files.add(f)

        # Code files (not docs, not config)
        if f.endswith((".py", ".ts", ".tsx", ".js", ".jsx")):
            code_files.append(f)

    # Check whether each code file has a matching doc change
    for cf in code_files:
        expected_docs = []
        for prefix, docs in DOC_MAP.items():
            if cf.startswith(prefix):
                expected_docs.extend(docs)
        if expected_docs:
            # Check whether any expected doc appears in the change list
            has_doc = False
            for ed in expected_docs:
                if ed in doc_files:
                    has_doc = True
                    break
                # Directory-level match
                if ed.endswith("/"):
                    if any(d.startswith(ed) for d in doc_files):
                        has_doc = True
                        break
            if not has_doc:
                result["code_without_docs"].append({
                    "file": cf,
                    "expected_docs": expected_docs,
                })

    return result


COMPLIANCE_STATE_PATH = os.path.join(".kiro", "state", ".compliance_state.json")


def save_compliance_state(result, needs_check):
    """Persist the compliance check result for the audit-writer subagent"""
    os.makedirs(os.path.join(".kiro", "state"), exist_ok=True)
    now = datetime.now(TZ_TAIPEI)
    state = {
        "needs_check": needs_check,
        "scanned_at": now.isoformat(),
        **result,
    }
    with open(COMPLIANCE_STATE_PATH, "w", encoding="utf-8") as f:
        json.dump(state, f, indent=2, ensure_ascii=False)


def main():
    files = get_changed_files()
    if not files:
        save_compliance_state({"new_migration_sql": [], "new_or_modified_sql": [],
                               "code_without_docs": [], "new_files": [],
                               "has_bd_manual": False, "has_audit_record": False,
                               "has_ddl_baseline": False}, False)
        print("NO_CHECK_NEEDED")
        return

    # Filter out noise
    real_files = [f for f in files if not is_noise(f)]
    if not real_files:
        save_compliance_state({"new_migration_sql": [], "new_or_modified_sql": [],
                               "code_without_docs": [], "new_files": [],
                               "has_bd_manual": False, "has_audit_record": False,
                               "has_ddl_baseline": False}, False)
        print("NO_CHECK_NEEDED")
        return

    result = classify_files(files)

    # Decide whether a review is needed
    needs_check = (
        result["new_migration_sql"]
        or result["code_without_docs"]
        or (result["new_migration_sql"] and not result["has_ddl_baseline"])
    )

    # Always persist the result
    save_compliance_state(result, needs_check)

    if not needs_check:
        print("NO_CHECK_NEEDED")
        return

    # Print the condensed JSON checklist for LLM review
    print(json.dumps(result, indent=2, ensure_ascii=False))


if __name__ == "__main__":
    try:
        main()
    except Exception:
        # On error, do not block; report that no check is needed
        print("NO_CHECK_NEEDED")
@@ -1,170 +0,0 @@
#!/usr/bin/env python3
"""file_baseline — standalone baseline snapshot system based on file mtime+size.

Does not depend on git commit history. By scanning the (mtime, size)
fingerprints of workspace files, it precisely detects the file changes made
"during this conversation", between promptSubmit and agentStop.

Usage:
    from file_baseline import scan_workspace, diff_baselines, save_baseline, load_baseline
"""

import json
import os
from typing import TypedDict

BASELINE_PATH = os.path.join(".kiro", "state", ".file_baseline.json")

# Directories excluded from scanning (aligned with .gitignore, plus extras)
EXCLUDE_DIRS = {
    ".git", ".venv", "venv", "ENV", "env",
    "node_modules", "__pycache__", ".hypothesis", ".pytest_cache",
    ".idea", ".vscode", ".specstory",
    "build", "dist", "eggs", ".eggs",
    "export", "reports", "tmp",
    "htmlcov", ".coverage",
    # Kiro runtime state does not participate in business-change detection
    ".kiro",
}

# File suffixes excluded from scanning
EXCLUDE_SUFFIXES = {
    ".pyc", ".pyo", ".pyd", ".so", ".egg", ".whl",
    ".log", ".jsonl", ".lnk",
    ".swp", ".swo",
}

# File names excluded from scanning
EXCLUDE_NAMES = {
    ".DS_Store", "Thumbs.db", "desktop.ini",
}

# Whitelist of business directories (only these top-level dirs plus loose root files are scanned)
# This avoids descending into deep cache directories such as .vite/deps
SCAN_ROOTS = [
    "apps",
    "packages",
    "db",
    "docs",
    "scripts",
    "tests",
]


class FileEntry(TypedDict):
    mtime: float
    size: int


class DiffResult(TypedDict):
    added: list[str]
    modified: list[str]
    deleted: list[str]


def _should_exclude_dir(dirname: str) -> bool:
    """Return True if the directory should be excluded."""
    return dirname in EXCLUDE_DIRS or dirname.startswith(".")


def _should_exclude_file(filename: str) -> bool:
    """Return True if the file should be excluded."""
    if filename in EXCLUDE_NAMES:
        return True
    _, ext = os.path.splitext(filename)
    if ext.lower() in EXCLUDE_SUFFIXES:
        return True
    return False


def scan_workspace(root: str = ".") -> dict[str, FileEntry]:
    """Scan the workspace and return a {relative_path: {mtime, size}} dict.

    Only scans the directories listed in SCAN_ROOTS plus loose files in the
    root, skipping EXCLUDE_DIRS / EXCLUDE_SUFFIXES / EXCLUDE_NAMES.
    """
    result: dict[str, FileEntry] = {}

    # 1. Loose files in the root directory (pyproject.toml, .env, etc.)
    try:
        for entry in os.scandir(root):
            if entry.is_file(follow_symlinks=False):
                if _should_exclude_file(entry.name):
                    continue
                try:
                    st = entry.stat(follow_symlinks=False)
                    rel = entry.name.replace("\\", "/")
                    result[rel] = {"mtime": st.st_mtime, "size": st.st_size}
                except OSError:
                    continue
    except OSError:
        pass

    # 2. Recursive scan of the business directories
    for scan_root in SCAN_ROOTS:
        top = os.path.join(root, scan_root)
        if not os.path.isdir(top):
            continue
        for dirpath, dirnames, filenames in os.walk(top):
            # Modify dirnames in place so os.walk skips excluded directories
            dirnames[:] = [
                d for d in dirnames
                if not _should_exclude_dir(d)
            ]
            for fname in filenames:
                if _should_exclude_file(fname):
                    continue
                full = os.path.join(dirpath, fname)
                try:
                    st = os.stat(full)
                    rel = os.path.relpath(full, root).replace("\\", "/")
                    result[rel] = {"mtime": st.st_mtime, "size": st.st_size}
                except OSError:
                    continue

    return result


def diff_baselines(
    before: dict[str, FileEntry],
    after: dict[str, FileEntry],
) -> DiffResult:
    """Compare two snapshots and return added/modified/deleted lists."""
    before_keys = set(before.keys())
    after_keys = set(after.keys())

    added = sorted(after_keys - before_keys)
    deleted = sorted(before_keys - after_keys)

    modified = []
    for path in sorted(before_keys & after_keys):
        b = before[path]
        a = after[path]
        # A change in either mtime or size counts as a modification
        if b["mtime"] != a["mtime"] or b["size"] != a["size"]:
            modified.append(path)

    return {"added": added, "modified": modified, "deleted": deleted}


def save_baseline(data: dict[str, FileEntry], path: str = BASELINE_PATH):
    """Save a baseline snapshot to a JSON file."""
    os.makedirs(os.path.dirname(path) or ".kiro", exist_ok=True)
    with open(path, "w", encoding="utf-8") as f:
        json.dump(data, f, ensure_ascii=False)


def load_baseline(path: str = BASELINE_PATH) -> dict[str, FileEntry]:
    """Load a baseline snapshot; return an empty dict if the file does not exist."""
    if not os.path.isfile(path):
        return {}
    try:
        with open(path, "r", encoding="utf-8") as f:
            return json.load(f)
    except Exception:
        return {}


def total_changes(diff: DiffResult) -> int:
    """Total number of changed files."""
    return len(diff["added"]) + len(diff["modified"]) + len(diff["deleted"])
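The promptSubmit/agentStop round trip described in the module docstring boils down to diffing two fingerprint snapshots. A minimal self-contained sketch of that diff (the sample paths and timestamps are hypothetical; the real implementation is `diff_baselines` above):

```python
def diff_fingerprints(before: dict, after: dict) -> dict:
    """Return added/modified/deleted paths between two {path: {mtime, size}} snapshots."""
    before_keys, after_keys = set(before), set(after)
    modified = [
        p for p in sorted(before_keys & after_keys)
        # a change in either mtime or size counts as a modification
        if before[p] != after[p]
    ]
    return {
        "added": sorted(after_keys - before_keys),
        "modified": modified,
        "deleted": sorted(before_keys - after_keys),
    }


before = {
    "apps/a.py": {"mtime": 1.0, "size": 10},
    "db/old.sql": {"mtime": 1.0, "size": 5},
}
after = {
    "apps/a.py": {"mtime": 2.0, "size": 10},   # touched during the session
    "docs/new.md": {"mtime": 2.0, "size": 7},  # created during the session
}
diff = diff_fingerprints(before, after)
print(diff)  # {'added': ['docs/new.md'], 'modified': ['apps/a.py'], 'deleted': ['db/old.sql']}
```

Because the fingerprint is only (mtime, size), an edit that preserves both (e.g. a same-length in-place write with a restored timestamp) would go undetected; for hook purposes that trade-off buys a fast, git-free scan.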
@@ -1,60 +0,0 @@
#!/usr/bin/env python3
"""prompt_audit_log — generate a standalone log file each time a prompt is submitted.

Replaces the original PowerShell version, avoiding a Windows PowerShell 5.1 parser bug.
"""

import json
import os
from datetime import datetime, timezone, timedelta

TZ_TAIPEI = timezone(timedelta(hours=8))


def main():
    now = datetime.now(TZ_TAIPEI)
    prompt_id = f"P{now.strftime('%Y%m%d-%H%M%S')}"
    prompt_raw = os.environ.get("USER_PROMPT", "")

    # Truncate overly long prompts (avoid expanded #context eating too much space)
    if len(prompt_raw) > 20000:
        prompt_raw = prompt_raw[:5000] + "\n[TRUNCATED: prompt too long; possible expanded #context]"

    summary = " ".join(prompt_raw.split()).strip()
    if len(summary) > 120:
        summary = summary[:120] + "…"
    if not summary:
        summary = "(empty prompt)"

    # Write a standalone log file
    log_dir = os.path.join("docs", "audit", "prompt_logs")
    os.makedirs(log_dir, exist_ok=True)

    filename = f"prompt_log_{now.strftime('%Y%m%d_%H%M%S')}.md"
    target = os.path.join(log_dir, filename)

    timestamp = now.strftime("%Y-%m-%d %H:%M:%S %z")
    entry = f"""- [{prompt_id}] {timestamp}
  - summary: {summary}
  - prompt:
    ```text
    {prompt_raw}
    ```
"""
    with open(target, "w", encoding="utf-8") as f:
        f.write(entry)

    # Save the last prompt id so /audit can trace back to it
    os.makedirs(os.path.join(".kiro", "state"), exist_ok=True)
    last_prompt = {"prompt_id": prompt_id, "at": now.isoformat()}
    with open(os.path.join(".kiro", "state", ".last_prompt_id.json"), "w", encoding="utf-8") as f:
        json.dump(last_prompt, f, indent=2, ensure_ascii=False)


if __name__ == "__main__":
    try:
        main()
    except Exception:
        # Never block prompt submission
        pass
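The summary rule in `main()` (collapse whitespace, cap at 120 characters with an ellipsis, fall back for empty prompts) can be isolated as a small sketch; the helper name `summarize_prompt` is mine, not part of the script:

```python
def summarize_prompt(prompt_raw: str) -> str:
    # Collapse all runs of whitespace (including newlines) into single spaces
    summary = " ".join(prompt_raw.split()).strip()
    # Cap at 120 characters, appending a single ellipsis character
    if len(summary) > 120:
        summary = summary[:120] + "…"
    return summary or "(empty prompt)"


print(summarize_prompt("  fix   the\n ETL job "))  # fix the ETL job
print(summarize_prompt(""))                        # (empty prompt)
print(len(summarize_prompt("x" * 500)))            # 121
```

Note the capped summary is 121 characters, not 120: the slice keeps 120 and the ellipsis is appended on top.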
@@ -1,231 +0,0 @@
#!/usr/bin/env python3
"""prompt_on_submit — merged promptSubmit hook script (v2: file-baseline mode).

Combines the original audit_flagger + prompt_audit_log:
1. Scan workspace files → save a baseline snapshot → .kiro/state/.file_baseline.json
2. Run risk flagging on the changed-file list → .kiro/state/.audit_state.json
3. Record the prompt log → docs/audit/prompt_logs/

Change detection no longer relies on git status (fixing the false positives
caused by infrequent commits). Risk flagging still uses git status, because it
needs to know which files have changed relative to the last commit.
Each functional block is isolated with try/except; one failure does not affect the others.
"""

import hashlib
import json
import os
import re
import subprocess
import sys
from datetime import datetime, timezone, timedelta

# Import the file-baseline module from the same directory + cwd check
sys.path.insert(0, os.path.dirname(os.path.abspath(__file__)))
from file_baseline import scan_workspace, save_baseline
from _ensure_root import ensure_repo_root

TZ_TAIPEI = timezone(timedelta(hours=8))

# ── Risk rules (from audit_flagger) ──
RISK_RULES = [
    (re.compile(r"^apps/etl/connectors/feiqiu/(api|cli|config|database|loaders|models|orchestration|scd|tasks|utils|quality)/"), "etl"),
    (re.compile(r"^apps/backend/app/"), "backend"),
    (re.compile(r"^apps/admin-web/src/"), "admin-web"),
    (re.compile(r"^apps/miniprogram/(miniapp|miniprogram)/"), "miniprogram"),
    (re.compile(r"^packages/shared/"), "shared"),
    (re.compile(r"^db/"), "db"),
]

NOISE_PATTERNS = [
    re.compile(r"^docs/audit/"),
    re.compile(r"^\.kiro/"),
    re.compile(r"^tmp/"),
    re.compile(r"^\.hypothesis/"),
]

DB_PATTERNS = [
    re.compile(r"^db/"),
    re.compile(r"/migrations/"),
    re.compile(r"\.sql$"),
    re.compile(r"\.prisma$"),
]

STATE_PATH = os.path.join(".kiro", "state", ".audit_state.json")
PROMPT_ID_PATH = os.path.join(".kiro", "state", ".last_prompt_id.json")


def now_taipei():
    return datetime.now(TZ_TAIPEI)


def sha1hex(s: str) -> str:
    return hashlib.sha1(s.encode("utf-8")).hexdigest()


def get_git_changed_files() -> list[str]:
    """Get changed files via git status (used only for risk flagging, not change detection)."""
    try:
        r = subprocess.run(
            ["git", "status", "--porcelain"],
            capture_output=True, text=True, encoding="utf-8", errors="replace", timeout=10
        )
        if r.returncode != 0:
            return []
    except Exception:
        return []
    files = []
    for line in r.stdout.splitlines():
        if len(line) < 4:
            continue
        path = line[3:].strip()
        if " -> " in path:
            path = path.split(" -> ")[-1]
        path = path.strip().strip('"').replace("\\", "/")
        if path:
            files.append(path)
    return files


def is_noise(f: str) -> bool:
    return any(p.search(f) for p in NOISE_PATTERNS)


def safe_read_json(path):
    if not os.path.isfile(path):
        return {}
    try:
        with open(path, "r", encoding="utf-8") as f:
            return json.load(f)
    except Exception:
        return {}


def write_json(path, data):
    os.makedirs(os.path.dirname(path) or os.path.join(".kiro", "state"), exist_ok=True)
    with open(path, "w", encoding="utf-8") as f:
        json.dump(data, f, indent=2, ensure_ascii=False)


# ── Block 1: risk flagging (based on git status; decides which files need auditing) ──
def do_audit_flag(git_files, now):
    files = sorted(set(f for f in git_files if not is_noise(f)))

    if not files:
        write_json(STATE_PATH, {
            "audit_required": False,
            "db_docs_required": False,
            "reasons": [],
            "changed_files": [],
            "change_fingerprint": "",
            "marked_at": now.isoformat(),
            "last_reminded_at": None,
        })
        return

    reasons = []
    audit_required = False
    db_docs_required = False

    for f in files:
        for pattern, label in RISK_RULES:
            if pattern.search(f):
                audit_required = True
                tag = f"dir:{label}"
                if tag not in reasons:
                    reasons.append(tag)
        if "/" not in f:
            audit_required = True
            if "root-file" not in reasons:
                reasons.append("root-file")
        if any(p.search(f) for p in DB_PATTERNS):
            db_docs_required = True
            if "db-schema-change" not in reasons:
                reasons.append("db-schema-change")

    fp = sha1hex("\n".join(files))

    # Preserve an existing last_reminded_at
    last_reminded = None
    existing = safe_read_json(STATE_PATH)
    if existing.get("change_fingerprint") == fp:
        last_reminded = existing.get("last_reminded_at")

    write_json(STATE_PATH, {
        "audit_required": audit_required,
        "db_docs_required": db_docs_required,
        "reasons": reasons,
        "changed_files": files[:50],
        "change_fingerprint": fp,
        "marked_at": now.isoformat(),
        "last_reminded_at": last_reminded,
    })


# ── Block 2: prompt log ──
def do_prompt_log(now):
    prompt_id = f"P{now.strftime('%Y%m%d-%H%M%S')}"
    prompt_raw = os.environ.get("USER_PROMPT", "")

    if len(prompt_raw) > 20000:
        prompt_raw = prompt_raw[:5000] + "\n[TRUNCATED: prompt too long]"

    summary = " ".join(prompt_raw.split()).strip()
    if len(summary) > 120:
        summary = summary[:120] + "…"
    if not summary:
        summary = "(empty prompt)"

    log_dir = os.path.join("docs", "audit", "prompt_logs")
    os.makedirs(log_dir, exist_ok=True)
    filename = f"prompt_log_{now.strftime('%Y%m%d_%H%M%S')}.md"
    entry = f"""- [{prompt_id}] {now.strftime('%Y-%m-%d %H:%M:%S %z')}
  - summary: {summary}
  - prompt:
    ```text
    {prompt_raw}
    ```
"""
    with open(os.path.join(log_dir, filename), "w", encoding="utf-8") as f:
        f.write(entry)

    write_json(PROMPT_ID_PATH, {"prompt_id": prompt_id, "at": now.isoformat()})


# ── Block 3: file baseline snapshot (replaces the git snapshot) ──
def do_file_baseline():
    """Scan workspace file mtime+size and save it as the baseline snapshot.
    At agentStop, scanning again and diffing against this baseline detects
    exactly the changes made during this conversation.
    """
    baseline = scan_workspace(".")
    save_baseline(baseline)


def main():
    ensure_repo_root()
    now = now_taipei()

    # Block 3: file baseline snapshot (runs first, recording file state at conversation start)
    try:
        do_file_baseline()
    except Exception:
        pass

    # Block 1: risk flagging (still uses git status, since uncommitted changes are what matters)
    try:
        git_files = get_git_changed_files()
        do_audit_flag(git_files, now)
    except Exception:
        pass

    # Block 2: prompt log
    try:
        do_prompt_log(now)
    except Exception:
        pass


if __name__ == "__main__":
    try:
        main()
    except Exception:
        pass
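The flagging loop in `do_audit_flag` can be exercised in isolation. This sketch uses a trimmed-down rule set and folds `db_docs_required` into the reasons list, so it is an illustration of the pattern-matching logic, not the script's exact behavior:

```python
import re

# Trimmed-down rules; the full RISK_RULES / DB_PATTERNS lists live in prompt_on_submit.py
RISK_RULES = [
    (re.compile(r"^apps/backend/app/"), "backend"),
    (re.compile(r"^db/"), "db"),
]
DB_PATTERNS = [re.compile(r"^db/"), re.compile(r"\.sql$")]


def flag(files: list[str]) -> dict:
    reasons: list[str] = []
    for f in files:
        for pattern, label in RISK_RULES:
            if pattern.search(f) and f"dir:{label}" not in reasons:
                reasons.append(f"dir:{label}")
        if "/" not in f and "root-file" not in reasons:
            reasons.append("root-file")  # loose files in the repo root are always flagged
        if any(p.search(f) for p in DB_PATTERNS) and "db-schema-change" not in reasons:
            reasons.append("db-schema-change")
    return {"audit_required": bool(reasons), "reasons": reasons}


out = flag(["apps/backend/app/api.py", "db/migrations/001.sql", "pyproject.toml"])
print(out["reasons"])  # ['dir:backend', 'dir:db', 'db-schema-change', 'root-file']
```

Reasons are deduplicated but keep first-hit order, which is why the persisted `reasons` array in `.audit_state.json` reads in file-scan order rather than alphabetically.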
@@ -1,139 +0,0 @@
#!/usr/bin/env python3
"""session_log — record the full log of this conversation at agentStop.

Sources:
- AGENT_OUTPUT env var (agent output injected by Kiro)
- USER_PROMPT env var (the most recent user input)
- .kiro/state/.last_prompt_id.json (Prompt-ID traceability)
- .kiro/state/.audit_state.json (changed-file list)
- git diff --stat (change statistics)

Output: docs/audit/session_logs/session_<timestamp>.md
"""

import json
import os
import subprocess
import sys
from datetime import datetime, timezone, timedelta

# cwd check
sys.path.insert(0, os.path.dirname(os.path.abspath(__file__)))
from _ensure_root import ensure_repo_root

TZ_TAIPEI = timezone(timedelta(hours=8))
LOG_DIR = os.path.join("docs", "audit", "session_logs")
STATE_PATH = os.path.join(".kiro", "state", ".audit_state.json")
PROMPT_ID_PATH = os.path.join(".kiro", "state", ".last_prompt_id.json")


def now_taipei():
    return datetime.now(TZ_TAIPEI)


def safe_read_json(path):
    if not os.path.isfile(path):
        return {}
    try:
        with open(path, "r", encoding="utf-8") as f:
            return json.load(f)
    except Exception:
        return {}


def git_diff_stat():
    try:
        r = subprocess.run(
            ["git", "diff", "--stat", "HEAD"],
            capture_output=True, text=True, timeout=10
        )
        return r.stdout.strip() if r.returncode == 0 else "(git diff failed)"
    except Exception:
        return "(git not available)"


def git_status_short():
    try:
        r = subprocess.run(
            ["git", "status", "--short"],
            capture_output=True, text=True, timeout=10
        )
        return r.stdout.strip() if r.returncode == 0 else ""
    except Exception:
        return ""


def main():
    ensure_repo_root()
    now = now_taipei()
    ts = now.strftime("%Y%m%d_%H%M%S")
    timestamp_display = now.strftime("%Y-%m-%d %H:%M:%S %z")

    # Collect data
    agent_output = os.environ.get("AGENT_OUTPUT", "")
    user_prompt = os.environ.get("USER_PROMPT", "")
    prompt_info = safe_read_json(PROMPT_ID_PATH)
    audit_state = safe_read_json(STATE_PATH)
    prompt_id = prompt_info.get("prompt_id", "unknown")

    # Truncate oversized content to keep log files small
    max_len = 50000
    if len(agent_output) > max_len:
        agent_output = agent_output[:max_len] + "\n\n[TRUNCATED: output exceeds 50KB]"
    if len(user_prompt) > 10000:
        user_prompt = user_prompt[:10000] + "\n\n[TRUNCATED: prompt exceeds 10KB]"

    diff_stat = git_diff_stat()
    status_short = git_status_short()
    changed_files = audit_state.get("changed_files", [])

    os.makedirs(LOG_DIR, exist_ok=True)
    filename = f"session_{ts}.md"
    filepath = os.path.join(LOG_DIR, filename)

    content = f"""# Session Log — {timestamp_display}

- Prompt-ID: `{prompt_id}`
- Audit Required: `{audit_state.get('audit_required', 'N/A')}`
- Reasons: {', '.join(audit_state.get('reasons', [])) or 'none'}

## User Input

```text
{user_prompt or '(not captured)'}
```

## Agent Output

```text
{agent_output or '(not captured)'}
```

## Changed Files ({len(changed_files)})

```
{chr(10).join(changed_files[:80]) if changed_files else '(none)'}
```

## Git Diff Stat

```
{diff_stat}
```

## Git Status

```
{status_short or '(clean)'}
```
"""

    with open(filepath, "w", encoding="utf-8") as f:
        f.write(content)


if __name__ == "__main__":
    try:
        main()
    except Exception:
        pass
@@ -1,90 +0,0 @@
{
  "mcpServers": {
    "weixin-devtools-mcp": {
      "command": "npx",
      "args": ["-y", "weixin-devtools-mcp", "--tools-profile=full", "--ws-endpoint=ws://127.0.0.1:9420"],
      "env": {
        "WECHAT_DEVTOOLS_CLI": "C:\\dev\\WechatDevtools\\cli.bat",
        "WECHAT_DEVTOOLS_PROJECT": "C:\\NeoZQYY\\apps\\miniprogram"
      },
      "disabled": true,
      "autoApprove": ["*"]
    },
    "git": {
      "command": "uvx",
      "args": [
        "mcp-server-git@2025.12.18",
        "--repository",
        "C:\\NeoZQYY"
      ],
      "disabled": true,
      "autoApprove": [
        "all",
        "*"
      ]
    },
    "postgres": {
      "disabled": true
    },
    "pg-etl": {
      "command": "uvx",
      "args": [
        "postgres-mcp",
        "--access-mode=unrestricted"
      ],
      "env": {
        "DATABASE_URI": "postgresql://local-Python:Neo-local-1991125@100.64.0.4:5432/etl_feiqiu"
      },
      "disabled": true,
      "autoApprove": [
        "all",
        "*"
      ]
    },
    "pg-etl-test": {
      "command": "uvx",
      "args": [
        "postgres-mcp",
        "--access-mode=unrestricted"
      ],
      "env": {
        "DATABASE_URI": "postgresql://local-Python:Neo-local-1991125@100.64.0.4:5432/test_etl_feiqiu"
      },
      "disabled": true,
      "autoApprove": [
        "all",
        "*"
      ]
    },
    "pg-app": {
      "command": "uvx",
      "args": [
        "postgres-mcp",
        "--access-mode=unrestricted"
      ],
      "env": {
        "DATABASE_URI": "postgresql://local-Python:Neo-local-1991125@100.64.0.4:5432/zqyy_app"
      },
      "disabled": true,
      "autoApprove": [
        "all",
        "*"
      ]
    },
    "pg-app-test": {
      "command": "uvx",
      "args": [
        "postgres-mcp",
        "--access-mode=unrestricted"
      ],
      "env": {
        "DATABASE_URI": "postgresql://local-Python:Neo-local-1991125@100.64.0.4:5432/test_zqyy_app"
      },
      "disabled": true,
      "autoApprove": [
        "all",
        "*"
      ]
    }
  }
}
@@ -1,41 +0,0 @@
---
name: bd-manual-db-docs
description: When the PostgreSQL schema/table structure changes, use this skill to persist the change to docs/database/ in an audit-friendly way (including the reason for the change, its impact, rollback notes, and verification SQL).
---

# Purpose
Ensure database structure changes are traceable, auditable, and reversible, and keep them consistent with the ETL/backend/miniprogram field mappings.

# Triggers
- Migration script / DDL changes (add/drop/alter tables, columns, types, defaults, NOT NULL, constraints, indexes, foreign keys)
- ORM/schema definition changes that alter the actual DB structure
- Manually executed DDL (use the manualTrigger hook or this skill to backfill the documentation)

# Required output (all items mandatory)
All output must be persisted to: `docs/database/`

At minimum:
1) Schema Change Log (a change-log entry)
2) Table Structure Doc (updated structure docs for the affected tables)
3) Rollback & Verification (rollback notes + at least 3 verification SQL statements)
4) Traceability: date + Prompt-ID/prompt excerpt + direct cause (necessity + brief solution outline)

# Workflow
## 1) Identify structural changes
- List the added/modified/dropped objects: schema/table/column/index/constraint/fk
- Spell out the before/after difference

## 2) Update the Schema Change Log
- Append a change record under the corresponding schema directory (template: assets/schema-changelog-template.md)

## 3) Update the Table Structure Doc
- Update every affected table (template: assets/table-structure-template.md)
- Sync field meanings/semantics, especially for monetary fields: precision, currency, rounding

## 4) Rollback and verification
- Document the DDL rollback path (provide a reverse migration when necessary)
- Write at least 3 verification SQL statements (covering constraints/indexes/key fields)

# Templates
- `assets/schema-changelog-template.md`
- `assets/table-structure-template.md`
@@ -1,27 +0,0 @@
# Schema Change Log

- Date (Asia/Shanghai, YYYY-MM-DD HH:MM:SS, to the second):
- Prompt-ID:
- Original reason (prompt excerpt/verbatim):
- Direct cause (necessity + brief solution outline):
- Affected schema:
- Change summary (one sentence):

## Change details
- Added:
- Modified:
- Dropped:

## Impact
- ETL:
- Backend API:
- Miniprogram:

## Rollback notes
- DDL rollback:
- Data backfill/migration caveats:

## Verification SQL (at least 3)
1)
2)
3)
@@ -1,22 +0,0 @@
# <schema>.<table>

## Purpose
- What business object/process this table represents

## Columns
| Column | Type | Nullable | Default | Constraints/Keys | Description (incl. semantics) |
|---|---|---:|---|---|---|

> Monetary columns must state: currency, precision, rounding/truncation rules, and whether negative values are allowed.

## Indexes
- Index name / columns / unique? / notes

## Constraints & FKs
- Constraint name / definition / notes

## Invariants
- E.g. state-machine enum range, uniqueness, cross-column consistency constraints (if any)

## Change History
- YYYY-MM-DD HH:MM:SS | Prompt-ID | Direct cause | Change summary
@@ -1,37 +0,0 @@
---
name: change-annotation-audit
description: Force an audit record (docs/audit/changes/...) for every modification, write an AI_CHANGELOG entry in every changed file, and place CHANGE marker comments at each logic change (including the date, prompt, and direct cause).
---

# Purpose
Capture "why it was changed, how it was changed, and how to verify it" in auditable artifacts, meeting the rigor required for money-related projects.

# Triggers
- Any substantive change to code or docs (not pure formatting)
- Especially: logic changes, monetary-semantics changes, interface-contract changes, DB structure changes

# Required artifacts (all mandatory)
1) `docs/audit/changes/<YYYY-MM-DD>__<slug>.md`
2) An `AI_CHANGELOG` entry inside every modified file
3) A `CHANGE` marker comment near every logic change

# Workflow
## 1) Prompt traceability
- Confirm this change has a Prompt-ID (from prompt_log.md)
- If not, backfill the Prompt-ID first, then continue

## 2) Write the audit record (per change)
Use the template: `assets/audit-record-template.md`
- Must include: original reason (prompt), direct cause, brief solution outline, file list, risk/rollback/verification

## 3) Write the in-file AI_CHANGELOG (per file)
- Append one AI_CHANGELOG entry to each modified file
- Pick a comment style suited to the language/file type (templates: assets/file-changelog-templates.md)

## 4) Write CHANGE markers (block level)
- Every logic change must have a CHANGE marker nearby
- Must include: intent, assumptions, edge conditions (amounts/rounding/precision), verification hints

# Templates
- `assets/audit-record-template.md`
- `assets/file-changelog-templates.md`
@@ -1,19 +0,0 @@
# Change Audit Record

- Date/time (Asia/Shanghai, to the second, format YYYY-MM-DD HH:MM:SS):
- Prompt-ID:
- Original reason (prompt verbatim or an excerpt of ≤5 lines):
- Direct cause (necessity + brief solution outline):

## Changed
- Modules/interfaces/tables/key files:

## Risk & Rollback
- Risks:
- Rollback notes:

## Verification
- At least 1 executable verification method (test/SQL/integration):

## Files changed
- ...
@@ -1,50 +0,0 @@
# In-file AI_CHANGELOG and CHANGE marker templates

> All timestamps are to the second, format `YYYY-MM-DD HH:MM:SS`, timezone Asia/Shanghai.

## Generic AI_CHANGELOG (suggested at the top of the file or in a "Change History" section)
- 2026-02-13 10:15:30 | Prompt: P20260213-101530 (excerpt: ...) | Direct cause: ... | Summary: ... | Verify: ...

---

## Markdown / docs (at the end of the document or in a "Change History" section)
### AI_CHANGELOG
- YYYY-MM-DD HH:MM:SS | Prompt: P... (excerpt: ...) | Direct cause: ... | Summary: ... | Verify: ...

---

## JS/TS (block comment)
/*
AI_CHANGELOG
- YYYY-MM-DD HH:MM:SS | Prompt: P... (excerpt: ...) | Direct cause: ... | Summary: ... | Verify: ...
*/

// [CHANGE P...] intent: ...
// assumptions: ...
// edge cases / money semantics: ...
// verify: ...

---

## Python (docstring/block comment)
"""
AI_CHANGELOG
- YYYY-MM-DD HH:MM:SS | Prompt: P... (excerpt: ...) | Direct cause: ... | Summary: ... | Verify: ...
"""

# [CHANGE P...] intent: ...
# assumptions: ...
# edge cases / money semantics: ...
# verify: ...

---

## SQL (block comment + line comment)
/*
AI_CHANGELOG
- YYYY-MM-DD HH:MM:SS | Prompt: P... (excerpt: ...) | Direct cause: ... | Summary: ... | Verify: ...
*/
-- [CHANGE P...] intent: ...
-- assumptions: ...
-- money semantics: precision/rounding/currency ...
-- verify: ...
@@ -1,51 +0,0 @@
---
name: steering-readme-maintainer
description: When business/ETL/API/auth/miniprogram-interaction logic changes, use this skill to run a change-impact review and synchronize product/tech/structure, the README files at every level, and the audit records.
---

# Purpose
Standardize the "logic change → doc sync → audit trail" flow to reduce missed updates and semantic drift (for money-related scenarios, traceability and recomputability come first).

# Triggers (when to invoke this skill)
- Changed business rules / calculation semantics / money handling (precision, rounding, thresholds, etc.)
- Changed ETL/SQL cleansing, aggregation, or mapping logic
- Changed API behavior (response shape, error codes, auth/permissions)
- Changed a key miniprogram interaction flow (validation, state machine, key fields)

# Workflow (must be executed in order)
## 1) Classify: is this a "logic change"?
- If not: state "no logic change" and explain why (e.g. formatting only / typo fix / comment tweak).
- If yes: proceed to the next step.

## 2) Sync steering and README files (evaluate each item)

### 2a) Steering files
- `.kiro/steering/product.md`: did business definitions/semantics/money rules change?
- `.kiro/steering/tech.md`: did the tech stack/runtime/dependencies/deployment assumptions change?
- `.kiro/steering/structure-lite.md` (summary) / `.kiro/steering/structure.md` (only when the directory tree/boundaries change): did directories/module boundaries/responsibilities change?

### 2b) README.md at each level (evaluate each module the change touches)
- `README.md` (root): project overview, quick start, environment variables, architecture overview
- `apps/backend/README.md`: backend API routes, configuration, how to run, interface contracts
- `apps/etl/connectors/feiqiu/README.md`: ETL task list, development conventions, registration flow
- `apps/miniprogram/README.md`: miniprogram page structure, build and deployment
- `apps/admin-web/README.md`: admin console feature descriptions
- `packages/shared/README.md`: shared-package module descriptions and usage
- `db/README.md`: schema conventions, migration rules, seed-data notes
- `scripts/README.md`: purpose of each subdirectory, common scripts
- `tests/README.md`: how to run tests, FakeDB/FakeAPI usage
- `docs/README.md`: documentation index

> Rule: only update the READMEs relevant to this change; if an update would help a reader understand system behavior, make it; do not refuse to sync just to "touch fewer docs". If a README does not exist yet but the change touches that module, create it.

## 3) Output an audit-friendly summary (needed in both the reply and the audit record)
- Changed: which modules/interfaces/tables/key files
- Why: original reason (Prompt-ID + excerpt) and direct cause (necessity + brief solution outline)
- Risk: risk points and regression scope
- Verify: suggested verification steps (tests/SQL/integration)

## 4) Linked hard-rule check
- If the DB schema/table structure changed: `docs/database/` must be updated in sync (see skill `bd-manual-db-docs`).

# Assets (copyable templates/checklists)
See: `assets/steering-update-checklist.md`
@@ -1,23 +0,0 @@
# Steering & README sync checklist (mandatory for logic changes)

## product.md (product/semantics)
- Did business definitions / metric semantics / field meanings change?
- Did precision/rounding/threshold rules for monetary values change?
- Did the role/permission model change?

## tech.md (tech/runtime)
- New or changed dependencies (frameworks, libraries, drivers)?
- Did config options / environment variables / ports / service startup change?
- Did data-access boundaries (ETL DB vs. business DB) change?
- Did performance/consistency/idempotency/retry strategies change?

## structure.md (structure/responsibilities)
- New directories/modules?
- Were module responsibilities or boundaries redrawn?
- New integration points (queues, scheduled jobs, external systems)?

## README.md (usage/integration)
- Did local startup steps change?
- New or changed config options (.env etc.)?
- Did API contracts change (paths, parameters, responses, error codes)?
- Did miniprogram integration steps change?
@@ -1 +0,0 @@
{"generationMode": "requirements-first"}
@@ -1 +0,0 @@
{"generationMode": "requirements-first"}
@@ -1 +0,0 @@
{"generationMode": "requirements-first"}
@@ -1 +0,0 @@
{"specId": "27029642-a405-4932-8c22-5bc54fad5173", "workflowType": "requirements-first", "specType": "feature"}
@@ -1 +0,0 @@
{"specId": "cf5c24d6-ec72-4c49-8650-264ef414e10e", "workflowType": "requirements-first", "specType": "feature"}
@@ -1 +0,0 @@
{"generationMode": "requirements-first"}
@@ -1 +0,0 @@
{"generationMode": "requirements-first"}
@@ -1 +0,0 @@
{"generationMode": "requirements-first"}
@@ -1 +0,0 @@
{"specId": "98a585de-82d9-4bbd-bed8-179208c12f8b", "workflowType": "requirements-first", "specType": "feature"}
@@ -1 +0,0 @@
{"generationMode": "requirements-first"}
@@ -1 +0,0 @@
{"generationMode": "requirements-first"}
@@ -1 +0,0 @@
{"specId": "7e1dc63d-3dbd-4462-a43c-9ecaa9b1dd07", "workflowType": "requirements-first", "specType": "feature"}
@@ -1 +0,0 @@
{"generationMode": "requirements-first"}
@@ -1 +0,0 @@
{"specId": "cd79656c-9c23-4470-a147-d402b5f4b50b", "workflowType": "requirements-first", "specType": "feature"}
@@ -1 +0,0 @@
{"generationMode": "requirements-first"}
@@ -1 +0,0 @@
{"generationMode": "requirements-first"}
@@ -1 +0,0 @@
{"generationMode": "requirements-first"}
@@ -1 +0,0 @@
{"specId": "a277a91a-b35c-4d48-b4a2-09df0e47b71b", "workflowType": "requirements-first", "specType": "feature"}
@@ -1 +0,0 @@
{"specId": "4b6736e7-40fc-40a9-82f7-809f80253fe2", "workflowType": "requirements-first", "specType": "feature"}
@@ -1 +0,0 @@
{"specId": "cd30e87b-ce7a-4ff5-8587-f5ae75013e58", "workflowType": "requirements-first", "specType": "feature"}
@@ -1 +0,0 @@
{"specId": "cd30e87b-ce7a-4ff5-8587-f5ae75013e58", "workflowType": "requirements-first", "specType": "feature"}
@@ -1 +0,0 @@
{"generationMode": "requirements-first"}
@@ -1 +0,0 @@
{"specId": "a7c3e1f2-9b84-4d6e-b5a1-3f8c2d7e9a04", "workflowType": "requirements-first", "specType": "feature"}
@@ -1 +0,0 @@
{"specId": "a3f7c2d1-8e4b-4f6a-9c5d-2b1e8f3a7d9c", "workflowType": "requirements-first", "specType": "feature"}
@@ -1 +0,0 @@
{"specId": "9eccc890-b6c3-41a3-8ba1-bb2f0e09f653", "workflowType": "requirements-first", "specType": "feature"}
@@ -1 +0,0 @@
{"specId": "b2f4e8a1-3c7d-4f9b-a6e2-8d5c1b3f7a9e", "workflowType": "requirements-first", "specType": "feature"}
@@ -1 +0,0 @@
{"specId": "13cfd0bc-b6d6-408e-b943-aa11fb515478", "workflowType": "requirements-first", "specType": "feature"}
@@ -1 +0,0 @@
{"specId": "a7e3c1d4-8f2b-4e6a-b5d9-3c1f7a2e8b4d", "workflowType": "requirements-first", "specType": "feature"}
@@ -1 +0,0 @@
{"generationMode": "requirements-first"}
@@ -1 +0,0 @@
{"specId": "9eccc890-b6c3-41a3-8ba1-bb2f0e09f653", "workflowType": "requirements-first", "specType": "feature"}
File diff suppressed because one or more lines are too long
@@ -1,66 +0,0 @@
|
||||
{
  "audit_required": true,
  "db_docs_required": true,
  "reasons": [
    "dir:backend",
    "dir:miniprogram",
    "dir:db",
    "db-schema-change",
    "root-file"
  ],
  "changed_files": [
    "apps/DEMO-miniprogram/.gitignore",
    "apps/DEMO-miniprogram/.gitkeep",
    "apps/DEMO-miniprogram/README.md",
    "apps/DEMO-miniprogram/doc/ABANDON_MODAL_COMPONENT.md",
    "apps/DEMO-miniprogram/doc/KEYBOARD_INTERACTION_FIX.md",
    "apps/DEMO-miniprogram/doc/TASK_ABANDON_IMPROVEMENTS.md",
    "apps/DEMO-miniprogram/doc/TASK_ABANDON_QUICK_REFERENCE.md",
    "apps/DEMO-miniprogram/doc/progress-bar-animation.md",
    "apps/DEMO-miniprogram/doc/useless/ABANDON_MODAL_COMPONENT.md",
    "apps/DEMO-miniprogram/doc/useless/KEYBOARD_INTERACTION_FIX.md",
    "apps/DEMO-miniprogram/doc/useless/TASK_ABANDON_IMPROVEMENTS.md",
    "apps/DEMO-miniprogram/doc/useless/TASK_ABANDON_QUICK_REFERENCE.md",
    "apps/DEMO-miniprogram/doc/useless/progress-bar-animation.md",
    "apps/DEMO-miniprogram/i18n/base.json",
    "apps/DEMO-miniprogram/jest.config.js",
    "apps/DEMO-miniprogram/miniprogram/app.json",
    "apps/DEMO-miniprogram/miniprogram/app.miniapp.json",
    "apps/DEMO-miniprogram/miniprogram/app.ts",
    "apps/DEMO-miniprogram/miniprogram/app.wxss",
    "apps/DEMO-miniprogram/miniprogram/assets/icons/_archived/ai-robot-sm.svg",
    "apps/DEMO-miniprogram/miniprogram/assets/icons/_archived/ai-robot-title.svg",
    "apps/DEMO-miniprogram/miniprogram/assets/icons/_archived/arrow-left.svg",
    "apps/DEMO-miniprogram/miniprogram/assets/icons/_archived/chart.svg",
    "apps/DEMO-miniprogram/miniprogram/assets/icons/_archived/chat-gray.svg",
    "apps/DEMO-miniprogram/miniprogram/assets/icons/_archived/chat.svg",
    "apps/DEMO-miniprogram/miniprogram/assets/icons/_archived/check-bold.svg",
    "apps/DEMO-miniprogram/miniprogram/assets/icons/_archived/check-circle.svg",
    "apps/DEMO-miniprogram/miniprogram/assets/icons/_archived/clock.svg",
    "apps/DEMO-miniprogram/miniprogram/assets/icons/_archived/forbidden.svg",
    "apps/DEMO-miniprogram/miniprogram/assets/icons/_archived/help-circle.svg",
    "apps/DEMO-miniprogram/miniprogram/assets/icons/_archived/icon-ai-float.png",
    "apps/DEMO-miniprogram/miniprogram/assets/icons/_archived/icon-ai-inline.png",
    "apps/DEMO-miniprogram/miniprogram/assets/icons/_archived/info-circle.svg",
    "apps/DEMO-miniprogram/miniprogram/assets/icons/_archived/info-error.svg",
    "apps/DEMO-miniprogram/miniprogram/assets/icons/_archived/info-warning.svg",
    "apps/DEMO-miniprogram/miniprogram/assets/icons/_archived/logout.svg",
    "apps/DEMO-miniprogram/miniprogram/assets/icons/_archived/tab-board-active.png",
    "apps/DEMO-miniprogram/miniprogram/assets/icons/_archived/tab-board.png",
    "apps/DEMO-miniprogram/miniprogram/assets/icons/_archived/tab-my-active.png",
    "apps/DEMO-miniprogram/miniprogram/assets/icons/_archived/tab-my.png",
    "apps/DEMO-miniprogram/miniprogram/assets/icons/_archived/tab-task-active.png",
    "apps/DEMO-miniprogram/miniprogram/assets/icons/_archived/tab-task.png",
    "apps/DEMO-miniprogram/miniprogram/assets/icons/_archived/task.svg",
    "apps/DEMO-miniprogram/miniprogram/assets/icons/_archived/wechat.svg",
    "apps/DEMO-miniprogram/miniprogram/assets/icons/ai-robot-badge.svg",
    "apps/DEMO-miniprogram/miniprogram/assets/icons/ai-robot-inline.svg",
    "apps/DEMO-miniprogram/miniprogram/assets/icons/ai-robot.svg",
    "apps/DEMO-miniprogram/miniprogram/assets/icons/ball-black.svg",
    "apps/DEMO-miniprogram/miniprogram/assets/icons/ball-gray.svg",
    "apps/DEMO-miniprogram/miniprogram/assets/icons/feature-ai.svg"
  ],
  "change_fingerprint": "c347e0fb24548a8427f63a65a48dbf7df0b4a734",
  "marked_at": "2026-03-20T09:01:30.178895+08:00",
  "last_reminded_at": null
}
@@ -1,13 +0,0 @@
{
  "needs_check": false,
  "scanned_at": "2026-03-20T08:32:06.937993+08:00",
  "new_migration_sql": [],
  "new_or_modified_sql": [],
  "code_without_docs": [],
  "new_files": [],
  "has_bd_manual": false,
  "has_audit_record": false,
  "has_ddl_baseline": false,
  "api_changed": false,
  "openapi_spec_stale": false
}
File diff suppressed because one or more lines are too long
@@ -1,7 +0,0 @@
{
  "files": [
    ".kiro/.audit_context.json"
  ],
  "fingerprint": "4b767a035cfcbdd76756bbc0488e28e10b0f2fa1",
  "taken_at": "2026-02-26T08:04:31.572231+08:00"
}
@@ -1,4 +0,0 @@
{
  "prompt_id": "P20260320-090130",
  "at": "2026-03-20T09:01:30.178895+08:00"
}
@@ -1,17 +0,0 @@
---
inclusion: always
---
# AI Execution Behavior Constraints

## Context Protection
Goal: prevent large volumes of file content or command output from flooding the main conversation and blowing up the context.

### Scenarios to delegate to a sub-agent
- Bulk file reads (≥ 3 files) or wide-ranging code searches
- Exploring unfamiliar modules/directory structures
- CLI commands with large output, or multi-step shell operations

### Scenarios the main flow handles directly
- Reading a single known file (explicit path, predictable content)
- Simple single commands (e.g. `uv sync`, a single pytest file)
- Small, precise searches (known keywords and file scope)
@@ -1,38 +0,0 @@
---
inclusion: always
---
# CLI Environment Conventions (Windows PowerShell)

This project runs on Windows + PowerShell. The following is required background for constructing commands.

## PowerShell Syntax Essentials
- Environment variables: `$env:VAR_NAME` (not `$VAR_NAME`)
- Command chaining: `;` (not `&&`)
- `where` is an alias for `Where-Object`; use `Get-Command <name>` to locate executables
- Deleting files/directories: `Remove-Item` (not `rm -rf`)
- The path separator is `\`, but Python/Node tooling also accepts `/`

## Invoking Python
- Project virtual environment: `uv run python` or `.venv\Scripts\python.exe`
- Installing dependencies: `uv sync` (not `pip install`)
- Running a module: `uv run python -m <module>`
- System Python: `python` (from miniconda3)

## Shell Isolation and REPL Protection (mandatory)

> Background: Kiro's `executePwsh` reuses a single shell session. If the session accidentally enters a Python REPL, every subsequent command is swallowed, showing up as "no output + exit code 0".

### Core Principles
- Never invoke bare `python`/`node`/`ipython`; always pass `-c`, `-m`, or a script path
- Prefer writing a script file and executing it over `-c` inline quoting hell
- Never pipe code into python (e.g. `echo "code" | python`)

### Long-Running Commands
- Warn the user in advance for commands estimated to take > 30s
- For pytest with hypothesis, set `timeout` to 3x the estimated runtime
- After a timeout, do not retry immediately; first confirm the state of the previous process

### REPL Hijack
Symptoms: exit code 0 with no output, consecutive commands producing no output, or a `>>>` prompt appearing. Detection and automatic recovery are handled after command execution by the `repl-hijack-guard` hook.

> cwd validation and command syntax checks are intercepted before execution by the `cwd-guard-shell` hook and are not repeated here.
@@ -1,22 +0,0 @@
---
inclusion: fileMatch
fileMatchPattern:
  - "**/migrations/**/*.*"
  - "**/*.sql"
  - "**/*schema*.*"
  - "**/*ddl*.*"
  - "**/*.prisma"
---

# Database Schema Documentation Rules

Whenever you change anything that may affect the PostgreSQL schema/table structures (migration scripts / DDL / table definitions / ORM models):

1) You must update the BD manual directory in sync:
   docs/database

2) Minimum documentation requirements:
- Change description: tables, columns, constraints, and indexes added/modified/removed
- Compatibility: impact on ETL, backend APIs, and miniprogram field mappings
- Rollback strategy: how to undo (DDL rollback / data backfill)
- Verification steps: at least 3 validation SQL statements
@@ -1,36 +0,0 @@
---
inclusion: manual
name: doc-map
description: Project documentation map index. Load manually when you need to locate docs, understand the project structure, or find conventions.
---

# Documentation Map Index

Full documentation map: `#[[file:docs/DOCUMENTATION-MAP.md]]`

## Quick Lookup

| What you need | Where to find it |
|---------|---------|
| DB change audit (business DB) | `docs/database/BD_Manual_*.md` |
| DB change audit (ETL DB) | `apps/etl/connectors/feiqiu/docs/database/` |
| API endpoint reference | `apps/backend/docs/API-REFERENCE.md` |
| ETL task descriptions | `apps/etl/connectors/feiqiu/docs/etl_tasks/` |
| ETL business rules | `apps/etl/connectors/feiqiu/docs/business-rules/` |
| Output path conventions | `docs/deployment/EXPORT-PATHS.md` |
| Launch checklist | `docs/deployment/LAUNCH-CHECKLIST.md` |
| Change audit records | `docs/audit/changes/` |
| PRD / spec breakdown | `docs/prd/specs/` |
| Miniprogram UI prototypes | `docs/h5_ui/pages/` |
| Migration scripts | `db/etl_feiqiu/migrations/` + `db/zqyy_app/migrations/` |
| DDL baselines | `docs/database/ddl/` |
| Module READMEs | each `apps/*/README.md` + `packages/shared/README.md` + `db/README.md` |
| Kiro specs | `.kiro/specs/<spec-name>/` (requirements + design + tasks) |

## Behavioral Reminders

- New DB table/column → must write a `BD_Manual_*.md` (see the `db-docs.md` steering)
- New output path → add a `.env` variable first, then update `EXPORT-PATHS.md` (see the `export-paths.md` steering)
- Logic changes → audits are detected and flagged automatically by hooks; trigger `/audit` as needed
- New/changed API → update `API-REFERENCE.md` in sync
- New ETL task → update the corresponding doc under `docs/etl_tasks/` in sync
@@ -1,44 +0,0 @@
---
inclusion: fileMatch
fileMatchPattern: "**/tasks/**,**/loaders/**,**/scd/**,**/dws/**,**/dwd/**,**/quality/**,**/business-rules/**,**/schemas/**,**/routers/**,**/financial*,**/settlement*,**/consume*,**/accounting*,**/salary*,**/assistant*,**/member*,**/index*,**/winback*,**/newconv*,**/relation_index*,**/spending*,**/stock*,**/finance_*,**/income_*,**/discount_*,**/order_contribution*,**/cfg_*,**/orchestration/**,**/config/**"
name: dwd-doc-authority
description: Mandatory DWD-DOC authority rules. Auto-loaded for files related to ETL tasks / finance / settlement / consumption / assistants / members / indexes / statistics / configuration.
---

# DWD-DOC Authority Documents (authoritative data source, mandatory first reference)

`docs/reports/DWD-DOC/` is this project's authoritative reference for the business model and financial data.
All work touching amount semantics, payment channels, the consumption chain, accounting formulas, or field meanings must treat this directory as the primary reference.

## Document Inventory

| File | Contents | Key rules |
|------|------|----------|
| `README.md` | Overview + GAP closure status | Documentation index entry point |
| `01-business-panorama.md` | Consumption chain + discount mechanisms + consumption scenarios | settle_type enum, assistant fee split, three-tier group-buy voucher pricing |
| `02-accounting-panorama.md` | Payment channels + reconciliation formulas + consume_money semantics | Payment-channel identity, F2 three-period formula |
| `03-financial-panorama.md` | Revenue composition + stored-value card cash flow + reconciliation matrix | Platform settlement mutual exclusion |
| `04-dimension-panorama.md` | Dimension tables and master data panorama | SCD2 dimension lookup rules |
| `05-f2-balance-audit.md` | F2 income/expense balance formula deep dive | Three-period formula + root cause of the 139 failing records |
| `06-calibration-checklist.md` | Calibration checklist + verification SQL | All verification formulas in one place |
| `consume/consume-money-caliber.md` | Timeline of consume_money semantics changes | Definitions of the three calibers (A/B/C) and their switchover dates |

## Mandatory Rules (in effect for all sessions)

1. **consume_money must never be used directly in calculations**: three historical calibers (A/B/C) are mixed; the DWS layer and everything downstream uses `items_sum = table_charge_money + goods_money + assistant_pd_money + assistant_cx_money + electricity_money`
2. **Assistant fees must be split**: use `assistant_pd_money` (accompany-play) and `assistant_cx_money` (overtime-rest); never use the catch-all `service_fee` / `ASSISTANT_BASE` / `ASSISTANT_BONUS` (`service_fee` means "platform service fee" only in platform settlement tables, a different meaning)
3. **Payment-channel identity**: `balance_amount = recharge_card_amount + gift_card_amount` (holds 100% of the time); the three must never be double-counted
4. **settle_type filter**: positive transactions use `IN (1, 3)`; this table has no `is_delete` column
5. **Electricity fees are unused**: `electricity_money` is always 0; `gross_amount` excluding electricity is correct
6. **Discount mutual exclusion**: `discount_manual` (key-account discount) and `discount_other` are mutually exclusive; their sum = `adjust_amount`
7. **Cash-flow mutual exclusion**: within `cash_inflow_total`, `platform_settlement_amount` and `groupbuy_pay_amount` are mutually exclusive
8. **Voided-order detection**: use `dwd_assistant_service_log_ex.is_trash`; `dwd_assistant_trash_event` is deprecated (DROPped 2026-02-22)
9. **Stored-value card field naming**: the DWS layer uses `balance_pay` (total), `recharge_card_pay` (cash recharge card), `gift_card_pay` (gift card); the finance daily report uses `recharge_card_consume`
10. **Member field outage (DQ-6)**: `settlement_head.member_phone/member_name` have been entirely NULL since 2025-12. When member info is needed, LEFT JOIN `dwd.dim_member` via `member_id` (with `scd2_is_current=1`)
11. **Member card field outage (DQ-7)**: `settlement_head.member_card_type_name` has been entirely NULL since 2025-07-21. When the card type is needed, LEFT JOIN `dwd.dim_member_card_account` via `member_id` (with `scd2_is_current=1`). General rule: all member-related denormalized fields on settlement documents are unreliable; always resolve them by joining dimension tables on IDs

## Priority Relative to Other Documents

When the BD manual, ETL task docs, business rule docs, SPEC docs, or DDL comments conflict with DWD-DOC, DWD-DOC wins.

> The authority documents are based on validation against actual data in the test_etl_feiqiu database as of 2026-03-06; the formulas and ratio relationships are authoritative.
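Rule 1 above can be sketched in code. This is a minimal illustration of recomputing an order amount from its split components instead of trusting `consume_money`; the field names follow the rule's formula, but the row dictionary and helper function are assumptions for illustration, not the project's actual implementation.

```python
from decimal import Decimal

def items_sum(row: dict) -> Decimal:
    """Recompute the order amount from split fields (DWD-DOC rule 1)."""
    parts = [
        "table_charge_money",
        "goods_money",
        "assistant_pd_money",   # accompany-play assistant fee (rule 2)
        "assistant_cx_money",   # overtime-rest assistant fee (rule 2)
        "electricity_money",    # currently always 0 (rule 5)
    ]
    # Decimal(str(...)) avoids binary-float artifacts for money values
    return sum((Decimal(str(row.get(p, 0))) for p in parts), Decimal("0.00"))

row = {"table_charge_money": "120.00", "goods_money": "35.50",
       "assistant_pd_money": "98.00", "assistant_cx_money": "0.00",
       "electricity_money": "0.00"}
print(items_sum(row))  # 253.50
```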
@@ -1,279 +0,0 @@
---
inclusion: fileMatch
fileMatchPattern: "**/tasks/**,**/loaders/**,**/scd/**,**/dws/**,**/dwd/**,**/quality/**,**/business-rules/**,**/schemas/**,**/routers/**,**/financial*,**/settlement*,**/consume*,**/accounting*,**/salary*,**/assistant*,**/member*,**/index*,**/winback*,**/newconv*,**/relation_index*,**/spending*,**/stock*,**/finance_*,**/income_*,**/discount_*,**/order_contribution*,**/cfg_*,**/orchestration/**,**/config/**"
name: dws-doc-authority
description: Authoritative DWS-layer conventions. Auto-loaded for files related to ETL tasks / finance / settlement / consumption / assistants / members / indexes / statistics / configuration.
---

# DWS-Layer Authoritative Conventions (mandatory first reference)

The DWS (Data Warehouse Summary) layer aggregates from the DWD detail layer along business dimensions and produces summary statistics tables, serving assistant performance, member analytics, financial statistics, index algorithms, and other business scenarios.

> The mandatory rules in DWD-DOC (`docs/reports/DWD-DOC/`) apply equally at the DWS layer and are not repeated in this document. Where the two conflict, DWD-DOC wins.

## 1. Task Catalog (19 registered tasks)

### 1.1 Assistant performance domain (6)

| Task code | Target table | Grain | Core metrics |
|----------|--------|------|----------|
| `DWS_ASSISTANT_DAILY` | `dws_assistant_daily_detail` | date + assistant | service count/duration/amount, distinct customers, void statistics, penalty detection |
| `DWS_ASSISTANT_MONTHLY` | `dws_assistant_monthly_summary` | month + assistant | monthly totals, effective performance, tier matching, ranking (tie-aware) |
| `DWS_ASSISTANT_CUSTOMER` | `dws_assistant_customer_stats` | date + assistant + member | lifetime totals, 6 rolling windows (7/10/15/30/60/90 days), activity |
| `DWS_ASSISTANT_SALARY` | `dws_assistant_salary_calc` | month + assistant | course-hour income, bonus breakdown, gross salary, leave |
| `DWS_ASSISTANT_FINANCE` | `dws_assistant_finance_analysis` | date + assistant | daily income, average daily cost, gross profit, gross margin |
| `DWS_ASSISTANT_ORDER_CONTRIBUTION` | `dws_assistant_order_contribution` | date + assistant | gross order turnover, net turnover, time-weighted contribution, net time-weighted contribution |

### 1.2 Member analytics domain (2)

| Task code | Target table | Grain | Core metrics |
|----------|--------|------|----------|
| `DWS_MEMBER_CONSUMPTION` | `dws_member_consumption_summary` | date + member | lifetime consumption, 6 rolling windows, card balance, activity, customer segmentation |
| `DWS_MEMBER_VISIT` | `dws_member_visit_detail` | date + member + settlement | consumption amount split, payment-method split, table duration, assistant service detail (JSON) |

### 1.3 Financial statistics domain (4)

| Task code | Target table | Grain | Core metrics |
|----------|--------|------|----------|
| `DWS_FINANCE_DAILY` | `dws_finance_daily_summary` | date | gross amount, total discounts, recognized revenue, cash inflow/outflow/net change, card consumption, recharge statistics |
| `DWS_FINANCE_RECHARGE` | `dws_finance_recharge_summary` | date | recharge count/total, first/repeat recharge split, distinct members, store-wide card-balance snapshot, gift cards split by card type (drinks card / table-fee card / voucher × balance + additions) |
| `DWS_FINANCE_INCOME_STRUCTURE` | `dws_finance_income_structure` | date + income type | analysis by income type (table fee / goods / assistant base course / add-on course) and area |
| `DWS_FINANCE_DISCOUNT_DETAIL` | `dws_finance_discount_detail` | date + discount type | discount-type split (GROUPBUY/VIP/ROUNDING/GIFT_CARD_*/BIG_CUSTOMER/OTHER) |

### 1.4 Inventory summary domain (3)

| Task code | Target table | Grain | Update strategy |
|----------|--------|------|----------|
| `DWS_GOODS_STOCK_DAILY` | `dws_goods_stock_daily_summary` | date + goods | upsert |
| `DWS_GOODS_STOCK_WEEKLY` | `dws_goods_stock_weekly_summary` | ISO week + goods | upsert |
| `DWS_GOODS_STOCK_MONTHLY` | `dws_goods_stock_monthly_summary` | month + goods | upsert |

### 1.5 Operations tasks (2)

| Task code | Description |
|----------|------|
| `DWS_BUILD_ORDER_SUMMARY` | Builds the order-summary intermediate table `dws_order_summary` |
| `DWS_MAINTENANCE` | Unified maintenance: materialized-view refresh + historical data cleanup |

## 2. Mandatory Rules (in effect for all sessions)

### 2.1 Idempotent update strategy
1. **Summary tables default to delete-before-insert**: delete by date range + `site_id` before inserting, guaranteeing idempotency
2. **Inventory tables use upsert**: `ON CONFLICT DO UPDATE`, because stock snapshots must keep the latest values
3. **TRUNCATE is forbidden**: DWS tables are large and TRUNCATE takes a full-table lock

### 2.2 Course types and pricing
4. **Course types map through `cfg_skill_type`**: `skill_id` → `course_type_code` (BASE/BONUS/ROOM); hardcoding skill_id checks to determine course type is forbidden
5. **Pricing comes from `cfg_assistant_level_price`**: as-of join on the SCD2 validity period; hardcoded prices are forbidden
6. **Uniform private-room course price**: `dws.salary.room_course_price = 138` (CNY/hour), read from config

### 2.3 Performance tiers and salary
7. **Performance tiers come from `cfg_performance_tier`**: match effective performance hours against the `[min_hours, max_hours)` interval
8. **New-hire proration**: a hire date after the 1st of the month counts as a new hire, tiered by daily average performance × 30; a hire date after the 25th caps the tier at T2
9. **Bonus rules come from `cfg_bonus_rules`**: SPRINT bonuses do not stack (take the highest tier); TOP_RANK bonuses pay by rank (1st: 1000 CNY, 2nd: 600 CNY, 3rd: 400 CNY)
10. **Ranking is tie-aware**: use `calculate_rank_with_ties()`; equal performance hours share the same rank

### 2.4 Members and walk-ins
11. **Walk-in detection**: `member_id ≤ 0` is a walk-in guest, excluded from member statistics (but included in assistant performance)
12. **Customer segmentation**: high-value (≥ 3 visits and ≥ 1000 CNY within 90 days) → medium (spent within 30 days) → low-activity (spent within 90 days but not 30) → churned
13. **Member info is always resolved via ID joins**: all member-related denormalized fields on settlement documents are unreliable (DQ-6/DQ-7); LEFT JOIN `dwd.dim_member` via `member_id` (`scd2_is_current=1`)

### 2.5 Time windows and scheduling
14. **Standard rolling-window set**: 7/10/15/30/60/90 days, computed uniformly via `calculate_rolling_stats()`
15. **Monthly task grace period**: during the first `dws.monthly.prev_month_grace_days` (default 5) days of a month, the previous month's data may still be processed
16. **Salary calculation cycle**: runs within the first `dws.salary.run_days` (default 5) days of the month; out-of-cycle runs require `dws.salary.allow_out_of_cycle = true`

### 2.6 SCD2 dimension lookups
17. **Assistant level as-of lookup**: salary calculation uses the historical version valid for the month; daily statistics use the version valid on `stat_date`
18. **Member card balance as-of lookup**: take the snapshot by date via `get_member_card_balance_asof()`

### 2.7 Table area classification
19. **`cfg_area_category` uses exact match + fallback only**: no LIKE matching after the 2026-03-07 revision; categories are BILLIARD/SNOOKER/OTHER; `BILLIARD_VIP` is deprecated

## 3. Index Algorithm System

### 3.1 Overview

| Index | Full name | Output table | Purpose |
|------|------|--------|------|
| WBI | Winback Index | `dws_member_winback_index` | Win-back priority for lapsed customers |
| NCI | Newconv Index | `dws_member_newconv_index` | Conversion priority for new customers |
| RS | Relation Index | `dws_member_assistant_relation_index` | Assistant-member relationship strength |
| OS | Ownership Index | — | Ownership index |
| MS | Maintenance Score | — | Maintenance score |
| ML | Manual Ledger | `dws_ml_manual_order_alloc` | Manual ledger (single source of truth) |
| SPI | Spending Power Index | `dws_member_spending_power_index` | Spending power index |

### 3.2 WBI (win-back index) mandatory rules
20. **Component scores**: Overdue (overdue score, weighted empirical CDF) + Drop (frequency-drop score, delta over the last 14 days) + Recharge (recharge pressure, decay score) + Value (value score, log compression)
21. **Raw score formula**: `WBI_raw = w_over × overdue + w_drop × drop + w_re × recharge + w_value × value`
22. **Recency suppression**: suppression = 0 for visits < 14 days ago (hard floor); sigmoid decay for 14-17 days
23. **Routing rules**: STOP (≥ 60 days since last visit, optional high-balance exception) → NEW (≤ 2 visits, or first visit ≤ 30 days ago, or recharged without follow-up) → OLD (everyone else)

### 3.3 NCI (new-customer conversion index) mandatory rules
24. **Component scores**: Welcome (welcome score, triggered within 3 days of a first/single visit) + Need (conversion urgency) + Salvage (recoverability, linear decay over 30-60 days) + Recharge/Value (same as WBI)
25. **Activity suppression**: a new customer with ≥ 2 visits in the last 14 days and recent activity has the conversion-recall score suppressed by a factor of 0.2

### 3.4 Index parameter configuration
26. **Parameters load from `cfg_index_parameters`**: grouped by `index_type`, with EWMA smoothing support; hardcoded weights/thresholds are forbidden

## 4. Configuration Table System

### 4.1 Performance tiers (`dws.cfg_performance_tier`)

| Tier | Hour range | Commission (CNY/hour) | Tip commission | Leave |
|------|----------|-----------------|----------|------|
| T0 (tier 0) | 0-120 | 28 | 50% | 3 days |
| T1 (tier 1) | 120-150 | 18 | 40% | 4 days |
| T2 (tier 2) | 150-180 | 13 | 35% | 5 days |
| T3 (tier 3) | 180-210 | 10 | 30% | 6 days |
| T4 (tier 4) | 210+ | 8 | 25% | flexible leave |

> The above is the version effective from 2026-03-01; historical versions are managed via `effective_from/effective_to` SCD2.

### 4.2 Assistant level pricing (`dws.cfg_assistant_level_price`)

| Level | Base course (CNY/hour) | Add-on course (CNY/hour) |
|------|-------------------|-------------------|
| 8 (assistant management) | 98 | 190 |
| 10 (junior) | 98 | 190 |
| 20 (intermediate) | 108 | 190 |
| 30 (senior) | 118 | 190 |
| 40 (star) | 138 | 190 |

### 4.3 Bonus rules (`dws.cfg_bonus_rules`)

| Rule type | Validity | Description |
|----------|--------|------|
| SPRINT (sprint bonus) | ≤ 2026-02-28 | Non-stacking; take the highest tier |
| TOP_RANK (ranking bonus) | ≥ 2026-03-01 | 1st: 1000 CNY, 2nd: 600 CNY, 3rd: 400 CNY |

### 4.4 Skill → course-type mapping (`dws.cfg_skill_type`)

| Course type code | Name | Pricing rule |
|-------------|------|----------|
| BASE | Base course (accompany-play/PD) | Priced by level, 98-138 CNY/hour |
| BONUS | Add-on course (overtime-rest/CX) | Fixed 190 CNY/hour |
| ROOM | Private-room course | Uniform 138 CNY/hour (`dws.salary.room_course_price`) |

### 4.5 Table area categories (`dws.cfg_area_category`)

| Category code | Description | Notes |
|----------|------|------|
| BILLIARD | Billiards (includes former V1-V4) | 2026-03-07 revision |
| SNOOKER | Snooker (includes former V5) | 2026-03-07 revision |
| OTHER | Fallback | Used when nothing matches |

> `BILLIARD_VIP` is deprecated (2026-03-07); referencing it is forbidden.

### 4.6 Index parameters (`dws.cfg_index_parameters`)

Loaded grouped by `index_type` (WBI/NCI/RS/OS/MS/ML/SPI), with EWMA smoothing support. All weights and thresholds are read from this table; hardcoding is forbidden.

## 5. BaseDwsTask Shared Mechanisms

### 5.1 Time layering (TimeLayer)

| Enum value | Range | Purpose |
|--------|------|------|
| LAST_2_DAYS | last 2 days | daily increments |
| LAST_1_MONTH | last 30 days | monthly summaries |
| LAST_3_MONTHS | last 90 days | quarterly analysis |
| LAST_6_MONTHS | last 6 months (excluding the current month) | half-year trends |
| ALL | from 2000-01-01 | full recomputation |

### 5.2 Config cache (ConfigCache)
- Shared at the class level, 5-minute TTL
- Contains: performance tiers, level pricing, bonus rules, area categories, skill types
- Supports SCD2 validity-period filtering

### 5.3 Data read/write methods
- `iter_dwd_rows()`: iterate DWD data in batches (default 1000 rows/batch)
- `query_dwd()`: execute arbitrary SQL directly
- `delete_existing_data()`: delete by date range + site_id
- `bulk_insert()`: batch insert
- `upsert()`: ON CONFLICT DO UPDATE

### 5.4 Helper calculations
- `calculate_rolling_stats()`: rolling-window statistics
- `calculate_rank_with_ties()`: tie-aware ranking
- `is_new_hire_in_month()`: new-hire detection
- `is_guest()`: walk-in detection (member_id ≤ 0)
- `safe_decimal()` / `safe_int()`: safe type conversion
- `seconds_to_hours()` / `hours_to_seconds()`: time-unit conversion
- `get_assistant_level_asof()`: SCD2 assistant level
- `get_member_card_balance_asof()`: SCD2 member card balance

## 6. Field Naming Conventions

### 6.1 Amount fields
- Uniformly `NUMERIC(12,2)`, currency CNY
- Stored-value cards: the DWS layer uses `balance_pay` (total), `recharge_card_pay` (cash recharge card), `gift_card_pay` (gift card)
- Finance daily report: uses `recharge_card_consume`
- Assistant fees: `assistant_pd_money` (accompany-play), `assistant_cx_money` (overtime-rest); `service_fee` is forbidden

### 6.2 Time fields
- `stat_date`: statistics date (DATE)
- `stat_month`: statistics month (CHAR(7), format YYYY-MM)
- `created_at` / `updated_at`: TIMESTAMPTZ

### 6.3 Identifier fields
- `site_id`: site ID (multi-site isolation, RLS)
- `tenant_id`: tenant ID
- `member_id`: member ID (≤ 0 means walk-in)
- `assistant_id`: assistant ID

## 7. Scheduling and Flow Types

| Flow type | Stages included | Description |
|-----------|----------|------|
| `dwd_dws` | DWS summaries only | routine increments |
| `dwd_dws_index` | DWS summaries + index computation | includes index updates |
| `api_full` | ODS → DWD → DWS → INDEX | full pipeline |

Processing modes: `increment_only` (default), `verify_only` (verification/repair only), `increment_verify` (increment first, then verify)

## 8. Complete DWS-Layer Table Inventory

### Summary tables
`dws_assistant_daily_detail`, `dws_assistant_monthly_summary`, `dws_assistant_customer_stats`, `dws_assistant_salary_calc`, `dws_assistant_finance_analysis`, `dws_assistant_order_contribution`, `dws_member_consumption_summary`, `dws_member_visit_detail`, `dws_finance_daily_summary`, `dws_finance_recharge_summary`, `dws_finance_income_structure`, `dws_finance_discount_detail`, `dws_goods_stock_daily_summary`, `dws_goods_stock_weekly_summary`, `dws_goods_stock_monthly_summary`, `dws_order_summary`

### Index tables
`dws_member_winback_index`, `dws_member_newconv_index`, `dws_member_assistant_relation_index`, `dws_member_assistant_intimacy`, `dws_member_spending_power_index`, `dws_index_percentile_history`

### Other tables
`dws_platform_settlement`, `dws_ml_manual_order_source`, `dws_ml_manual_order_alloc`, `dws_assistant_recharge_commission`, `dws_assistant_project_tag`, `dws_member_project_tag`

### Views
`v_member_recall_priority`

### Config tables
`cfg_performance_tier`, `cfg_assistant_level_price`, `cfg_bonus_rules`, `cfg_skill_type`, `cfg_area_category`, `cfg_index_parameters`

## 9. Deprecated Objects (do not reference)

| Object | Removed | Replacement |
|------|----------|----------|
| `BILLIARD_VIP` category code | 2026-03-07 | V1-V4 fold into BILLIARD, V5 into SNOOKER |
| `dwd_assistant_trash_event` | 2026-02-22 | `dwd_assistant_service_log_ex.is_trash` |
| `RecallIndexTask` / `IntimacyIndexTask` | 2026-02-13 | WBI + NCI + RelationIndexTask |
| SPRINT bonus rule | ended 2026-02-28 | TOP_RANK ranking bonus (from 2026-03-01) |

## 10. Key Document Index

| Document | Path |
|------|------|
| DWS task deep dive | `apps/etl/connectors/feiqiu/docs/etl_tasks/dws_tasks.md` |
| DWS metric definitions | `apps/etl/connectors/feiqiu/docs/business-rules/dws_metrics.md` |
| Index algorithm description | `apps/etl/connectors/feiqiu/docs/business-rules/index_algorithm_cn.md` |
| BaseDwsTask mechanisms | `apps/etl/connectors/feiqiu/docs/etl_tasks/base_task_mechanism.md` |
| BD manual (DWS tables) | `apps/etl/connectors/feiqiu/docs/database/DWS/main/` |
| DWD-DOC authority rules | `.kiro/steering/dwd-doc-authority.md` |

## Priority Relative to Other Documents

Reference priority for DWS-layer development: DWD-DOC > this document > BD manual > ETL task docs > business rule docs > DDL comments.

> This document is based on a comprehensive survey of the project code, config tables, BD manuals, and audit records as of 2026-03-19.
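The tie-aware ranking rule above (section 2.3, rule 10) can be sketched as competition ranking: assistants with equal performance hours share a rank, and the next distinct value skips the tied positions. The function name mirrors `calculate_rank_with_ties()` from BaseDwsTask, but this standalone version is an assumption for illustration, not the project's actual implementation.

```python
def calculate_rank_with_ties(hours_by_assistant: dict) -> dict:
    """Competition ranking: equal values share a rank; later ranks skip ties."""
    ranked = sorted(hours_by_assistant.items(), key=lambda kv: kv[1], reverse=True)
    ranks = {}
    prev_value, prev_rank = None, 0
    for position, (assistant, value) in enumerate(ranked, start=1):
        if value == prev_value:
            ranks[assistant] = prev_rank  # tie: share the previous rank
        else:
            ranks[assistant] = prev_rank = position
            prev_value = value
    return ranks

print(calculate_rank_with_ties({"a": 180.0, "b": 180.0, "c": 150.0}))
# {'a': 1, 'b': 1, 'c': 3}
```

Note that after the two-way tie at rank 1, the next assistant lands at rank 3, not 2, which is what "ranking considers ties" implies for the TOP_RANK bonus rule.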
@@ -1,48 +0,0 @@
---
inclusion: fileMatch
fileMatchPattern: "**/.env*,**/scripts/**,**/export/**,**/EXPORT-PATHS*"
name: export-paths-full
description: Full output-path conventions (directory structure, environment-variable mapping, checklist). Auto-loaded when .env / scripts / export files are read.
---

# Full Output Path Conventions

## Directory Structure and Environment Variables

```
export/
├── ETL-Connectors/feiqiu/
│   ├── JSON/    — EXPORT_ROOT / FETCH_ROOT
│   ├── LOGS/    — LOG_ROOT
│   └── REPORTS/ — ETL_REPORT_ROOT
├── SYSTEM/
│   ├── LOGS/    — SYSTEM_LOG_ROOT
│   ├── REPORTS/
│   │   ├── dataflow_analysis/ — SYSTEM_ANALYZE_ROOT
│   │   ├── field_audit/       — FIELD_AUDIT_ROOT
│   │   └── full_dataflow_doc/ — FULL_DATAFLOW_DOC_ROOT
│   └── CACHE/
│       └── api_samples/ — API_SAMPLE_CACHE_ROOT
└── BACKEND/
    └── LOGS/ — BACKEND_LOG_ROOT
```

## Path Reading Details
- `scripts/ops/` scripts: read via `_env_paths.get_output_path("VAR_NAME")` (which runs `load_dotenv` internally)
- ETL core modules: read via `env_parser.py` → the `io.*` config section of `AppConfig`
- Standalone ETL scripts: read via `os.environ.get("ETL_REPORT_ROOT")`, raising an error when missing
- Backend: read via `os.environ.get("BACKEND_LOG_ROOT")`

## Checklist for New Output Scenarios

Whenever an operation needs to write files, confirm in this order:
1. Does the output already have a matching environment variable? → use it directly
2. Does it belong to an existing directory category (ETL/SYSTEM/BACKEND)? → use the corresponding parent-directory variable + a sub-path
3. Neither? → create a sensible subdirectory under `export/`, add an environment variable, and update `.env` / `.env.template` / `EXPORT-PATHS.md`

## Shared Utilities
- `scripts/ops/_env_paths.py`: provides `get_output_path(env_var)`, which auto-runs `load_dotenv`, reads the variable, creates the directory, and raises if it is missing

## Reference Docs
- Full directory description: `docs/deployment/EXPORT-PATHS.md`
- Environment variable definitions: the "unified output path configuration" section of the root `.env`
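The standalone-script pattern described above (read the root from the environment, fail loudly, create the directory) can be sketched as follows. This mirrors the documented behavior of `get_output_path` in `scripts/ops/_env_paths.py` but omits its `load_dotenv` call; the function body here is an assumption, not the project's actual source.

```python
import os
from pathlib import Path

def get_output_path(env_var: str) -> Path:
    """Resolve an output root from an environment variable, failing loudly."""
    value = os.environ.get(env_var)
    if not value:
        # Per the rules above: a missing variable must raise, never fall back
        raise RuntimeError(f"environment variable {env_var} is not set; "
                           "refusing to fall back silently")
    path = Path(value)
    path.mkdir(parents=True, exist_ok=True)  # ensure the directory exists
    return path

os.environ["ETL_REPORT_ROOT"] = "/tmp/export/ETL-Connectors/feiqiu/REPORTS"
print(get_output_path("ETL_REPORT_ROOT"))
```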
@@ -1,19 +0,0 @@
---
inclusion: always
---
# Output Artifact Path Conventions (mandatory)

## 1. Program output → the `export/` directory
Paths are read from `.env` environment variables. Hardcoded paths are forbidden, as is creating output directories outside `export/`.
- A missing environment variable must raise an error; silent fallbacks are forbidden
- Read access: `scripts/ops/` → `_env_paths.get_output_path()`; ETL → `AppConfig.io.*`; standalone scripts → `os.environ.get()` + an explicit error
- New output types: add the variable to `.env` + `.env.template` first, then update `docs/deployment/EXPORT-PATHS.md`

> See `export-paths-full.md` for the full directory structure and mapping table (auto-loaded via fileMatch).

## 2. Document output → the matching `docs/` subdirectory
Scattering files in the `docs/` root is forbidden (`README.md` and `DOCUMENTATION-MAP.md` excepted).

Common destinations: analysis reports → `docs/reports/`, architecture → `docs/architecture/`, BD manuals → `docs/database/` (business DB) or `apps/etl/.../docs/database/` (ETL), audits → `docs/audit/changes/`, PRDs → `docs/prd/specs/`, deployment → `docs/deployment/`.

> See `doc-map.md` (manual load) or `docs/DOCUMENTATION-MAP.md` for the full archiving rule table.
@@ -1,24 +0,0 @@
---
inclusion: always
---
# Feiqiu Data Conventions (entry index)

When working on finance, settlement, assistants, members, statistics, indexes, salary, task scheduling, or DWD/DWS-layer development, you must consult these authoritative documents (auto-loaded via fileMatch, or referenced manually):

- `dwd-doc-authority.md` — the 11 mandatory DWD-layer rules (consume_money calibers, the payment identity, member field outages, etc.)
- `dws-doc-authority.md` — the 26 mandatory DWS-layer rules (idempotency strategy, course pricing, performance tiers, index algorithms, the config-table system, etc.)
- `docs/database/BD_Manual_fdw_reverse_retention_clue.md` — the FDW reverse-mapping manual (the ETL database reads the business database's `member_retention_clue` retention-lead table read-only via postgres_fdw)

## Highest-Frequency Hard Rules (see the documents above for the full set)

1. `consume_money` must never be used directly in calculations → use the `items_sum` split fields
2. Assistant fees must be split: `assistant_pd_money` (accompany-play) + `assistant_cx_money` (overtime-rest)
3. Payment identity: `balance_amount = recharge_card_amount + gift_card_amount`
4. Member info is always resolved by JOINing dimension tables on `member_id` (`scd2_is_current=1`); denormalized fields on settlement documents are unreliable
5. Walk-in guests: `member_id ≤ 0`
6. Course types / pricing / performance tiers / bonuses / index weights → all read from config tables; hardcoding is forbidden
7. DWS summary tables default to delete-before-insert; inventory tables use upsert

## Reference Priority

DWD-DOC > DWS authority conventions > BD manual > ETL task docs > business rule docs > DDL comments
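Hard rule 3 above is easy to enforce mechanically. A minimal sketch of a row-level check, assuming the DWD-DOC field names and using made-up values for illustration:

```python
from decimal import Decimal

def check_payment_identity(row: dict) -> bool:
    """balance_amount must equal recharge_card_amount + gift_card_amount."""
    return (Decimal(row["balance_amount"])
            == Decimal(row["recharge_card_amount"]) + Decimal(row["gift_card_amount"]))

print(check_payment_identity({"balance_amount": "150.00",
                              "recharge_card_amount": "100.00",
                              "gift_card_amount": "50.00"}))  # True
```

A check like this fits naturally in a data-quality step, since the identity is documented as holding 100% of the time.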
@@ -1,7 +0,0 @@
---
inclusion: always
---
# Language Conventions
- All explanatory text in Simplified Chinese (conversation, docs, comments, change notes); code identifiers and third-party CLI output stay in their original English
- Docs and code change together; comments explain only "why / boundaries / assumptions"
- The whole repo is UTF-8 without BOM; mixing in GBK/Big5 is forbidden
@@ -1,62 +0,0 @@
---
inclusion: always
---
# Pre-Coding Requirements Interrogation (mandatory)

AI starts hallucinating exactly where the user's clarity ends. Therefore, before writing a single line of code, extend the user's clarity through sustained questioning, find the gaps in their thinking, and avoid building on a broken foundation.

## Trigger Conditions
Enter "interrogation mode" when the user's request involves any of:
- Building a new feature/module/page/endpoint
- Refactoring or redesigning existing logic
- Changes spanning multiple interacting modules
- Any requirement description containing vagueness, implicit assumptions, or undefined boundaries

## Mandatory Workflow

### 1. Enter planning mode
On receiving the requirement, do not start working; enter a questioning loop. Ask 3-5 targeted questions per round until every dimension has a clear answer.

### 2. Mandatory question list (minimum)
The following must be confirmed one by one; answers must not be assumed:

| Dimension | Mandatory question |
|------|----------|
| Users | Who is this for? (role/audience) |
| Core action | What is the core operation the user performs? |
| Outcome | What happens after the operation completes? (navigation/toast/state change) |
| Data writes | What data must be saved? Saved where? |
| Data display | What data must be shown? From what source? |
| Error handling | What happens on failure? What does the user see? |
| Success feedback | What happens on success? What does the user see? |
| Auth | Is login/authorization required? At what permission level? |
| Storage | Is a database needed? Which one? A new table or an existing one? |
| Device support | Must it work on mobile? Responsive requirements? |
| Boundary conditions | Concurrency / idempotency / data-volume caps / timeouts? |

### 3. Follow-up rules
- If an answer surfaces new undefined items, keep asking
- Do not accept "use your judgment" as a final answer; confirm at least the key dimensions
- Each follow-up round focuses on the gaps exposed by the previous round's answers
- End the interrogation only when every mandatory dimension has a clear answer and no new assumptions are surfacing

### 4. Output a requirements confirmation summary
After the interrogation, output a concise "requirements confirmation summary" containing:
- Target users and scenario
- Core feature description (one sentence)
- Data flow (input → processing → output/storage)
- Key constraints and boundary conditions
- Explicit exclusions (what will not be done)

Only after the user confirms the summary may implementation begin.

## Relationship to Pre-Change Research
- This rule executes before `pre-change-research.md` (pre-change research)
- Sequence: requirements interrogation → user confirmation → pre-change research → user confirmation → implementation
- If the interrogation reveals the requirement itself does not hold, stop immediately without entering research

## Exceptions
- The user explicitly says "just change it", "skip the interrogation", or "no questions needed"
- Bug fixes where the user has given clear reproduction steps and expected behavior
- Pure formatting/doc/comment adjustments
- The user has supplied a complete spec document covering all dimensions
@@ -1,40 +0,0 @@
---
inclusion: always
---
# Pre-Change Research for Logic Changes (mandatory)

For any task involving a logic change (ETL flows, business rules, API endpoints, data models, frontend interaction logic, etc.), the following research steps must be completed before writing the first line of code:

## Mandatory Workflow

### 1. Delegate the research to a sub-agent (to save main-flow context)
Use the `context-gatherer` sub-agent for the research, passing these instruction points:
- The module/file paths to be changed
- Search `docs/audit/changes/` for related historical audit records
- **Query the session index**: read `docs/audit/session_logs/_session_index.json`, filter historical sessions touching the target module by `summary.files_modified`, and extract `description` (operation summary) and `startTime` to understand the recent context and reasons for changes to that module (details in `docs/audit/SESSION-LOG-GUIDE.md`)
- Read the README and PRD specs (`docs/prd/specs/`) of the affected modules
- Database-related: BD manuals (`docs/database/BD_Manual_*.md` + `apps/etl/connectors/feiqiu/docs/database/`)
- ETL-related: the product description and data-flow reports
- API-related: the OpenAPI spec and interface docs
- Read the files to be modified and their direct dependencies (callers and callees)
- Confirm the data flow: upstream input → current processing → downstream consumers
- Identify the potential blast radius (which modules/tables/APIs are affected)

The sub-agent returns a distilled summary; the main flow does not read large volumes of files directly, keeping its context clean.

> **Session logs as a research source**: the session index (`_session_index.json`) records a structured summary of each round of AI operations (file changes, sub-agent calls, errors, an LLM-generated operation description) and is the most efficient source for "what happened to this file/module recently". Compared with opening audit records one by one, an index query costs zero tokens and has higher information density.

### 2. Output a context summary
Based on the sub-agent's findings, give the user a brief "pre-change context summary" covering:
- The module's current responsibilities and key logic
- Highlights of historical changes (if any)
- An impact-scope assessment for this change
- Risk points or boundary conditions to watch

Start implementation only after the user confirms.

## Exceptions
- Pure formatting changes (indentation, blank lines, import ordering)
- Pure wording changes to comments/docs (no change to the described logic)
- The user explicitly says "just change it" or "skip the research"
- Creating a new file that does not modify existing logic
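The session-index query in step 1 can be sketched as a simple filter. The index schema below (a `sessions` array with `summary.files_modified`, `description`, and `startTime`) is inferred from the field names mentioned above and is an assumption; the real file layout is defined in `docs/audit/SESSION-LOG-GUIDE.md`.

```python
import json
from pathlib import Path

def sessions_touching(index_path: Path, module_prefix: str) -> list:
    """Return when/why historical sessions modified files under a module."""
    index = json.loads(index_path.read_text(encoding="utf-8"))
    hits = []
    for session in index.get("sessions", []):
        files = session.get("summary", {}).get("files_modified", [])
        if any(f.startswith(module_prefix) for f in files):
            hits.append({"startTime": session.get("startTime"),
                         "description": session.get("description")})
    return hits

# Demo with a fabricated index file (schema is hypothetical)
sample = {"sessions": [
    {"startTime": "2026-03-19T10:00:00", "description": "tweak salary calc",
     "summary": {"files_modified": ["apps/etl/connectors/feiqiu/tasks/salary.py"]}},
    {"startTime": "2026-03-18T09:00:00", "description": "frontend fix",
     "summary": {"files_modified": ["apps/admin-web/src/App.tsx"]}},
]}
p = Path("_session_index.json")
p.write_text(json.dumps(sample), encoding="utf-8")
print(sessions_touching(p, "apps/etl/"))
# [{'startTime': '2026-03-19T10:00:00', 'description': 'tweak salary calc'}]
```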
@@ -1,18 +0,0 @@
---
inclusion: fileMatch
fileMatchPattern: "**/tasks/**,**/models/**,**/loaders/**,**/scd/**,**/quality/**,**/business-rules/**"
name: product-full
description: Detailed product description (ETL features, index algorithms, online/offline modes). Auto-loaded for ETL task/model/business-rule files.
---

# Detailed Product Description

## ETL Features
- Extracts operational data (orders, payments, members, assistants, inventory, etc.) from the upstream SaaS API
- Raw data lands in ODS, preserving the source payload for traceability
- Cleansed and loaded into DWD; dimensions use SCD2, facts load incrementally by time
- Aggregated into DWS: assistant performance, finance daily reports, member analytics, salary calculation, custom index algorithms (WBI/NCI/RS/OS/MS/ML/SPI)
- Supports two modes: online (API fetch) and offline (JSON replay)

## Main Entry Points
See the common-commands section of `tech.md`.
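The SCD2 dimension handling mentioned above relies on "as-of" lookups: each dimension row carries a validity range, and a fact row joins to the version valid at its date. A minimal sketch, where the row layout (`effective_from`/`effective_to`, open-ended current version) is an assumption for illustration:

```python
import datetime as dt

def asof(rows, key, as_of):
    """Return the SCD2 version of `key` valid on date `as_of`, or None."""
    for row in rows:
        if (row["id"] == key
                and row["effective_from"] <= as_of
                and (row["effective_to"] is None or as_of < row["effective_to"])):
            return row
    return None

# Two historical versions of assistant 7: level 10, then level 20 from 2026-03-01
history = [
    {"id": 7, "level": 10, "effective_from": dt.date(2026, 1, 1), "effective_to": dt.date(2026, 3, 1)},
    {"id": 7, "level": 20, "effective_from": dt.date(2026, 3, 1), "effective_to": None},
]
print(asof(history, 7, dt.date(2026, 2, 15))["level"])  # 10
print(asof(history, 7, dt.date(2026, 3, 15))["level"])  # 20
```

The half-open interval `[effective_from, effective_to)` avoids double-matching on the switchover date itself.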
@@ -1,48 +0,0 @@
---
inclusion: always
---
# Project Overview

NeoZQYY Monorepo — a full-stack data platform for billiards-hall operations. Multi-site isolation (`site_id` + RLS); domain language Chinese; currency CNY; amounts numeric with 2 decimal places.

## Subsystems and Directories
| Directory | Description |
|------|------|
| `apps/etl/connectors/feiqiu/` | Feiqiu connector: upstream SaaS API → ODS → DWD → DWS |
| `apps/backend/` | FastAPI backend |
| `apps/miniprogram/` | WeChat miniprogram (consumer-facing) |
| `apps/admin-web/` | Admin console (React + Vite + Ant Design) |
| `apps/mcp-server/` | MCP server (AI tool integration) |
| `packages/shared/` | Cross-project shared package (enums, amount precision, time utilities) |
| `db/` | DDL / migrations / seeds (`etl_feiqiu/`, `zqyy_app/`, `fdw/`) |
| `docs/` | Project-level docs + `audit/` (the single audit landing zone) |
| `tests/` | Monorepo-level property tests (hypothesis) |
| `scripts/` | Project-level ops scripts (`ops/`, `audit/`, `migrate/`, `server/`) |

## High-Risk Paths (changes require auditing)
- ETL: `api/`, `cli/`, `config/`, `database/`, `loaders/`, `models/`, `orchestration/`, `scd/`, `tasks/`, `utils/`, `quality/`
- `apps/backend/app/`, `apps/admin-web/src/`, `apps/miniprogram/miniprogram/`
- `packages/shared/`, `db/`, loose root files (`.env*`, `pyproject.toml`)

## File Placement Rules
- Module-specific docs/tests/scripts → inside the module
- Project-level / cross-module → the corresponding root folder
- Audit artifacts always go to `docs/audit/`; writing them into submodules is forbidden
- Encoding and naming: UTF-8, plain SQL, date-prefixed migration scripts, UPPER_SNAKE task names

## Deprecated Object Blacklist (frequently mis-referenced)

| Object | Type | Removed | Replacement |
|------|------|----------|----------|
| `dwd.dwd_assistant_trash_event` / `_ex` | DWD table | 2026-02-22 | `dwd_assistant_service_log_ex.is_trash` |
| `ods.assistant_cancellation_records` | ODS table | 2026-02-22 | Standalone pipeline no longer needed |
| `ODS_ASSISTANT_ABOLISH` / `ASSISTANT_ABOLISH` | ETL/scheduler task | 2026-02-22 | None |
| `BILLIARD_VIP` | cfg_area_category category code | 2026-03-07 | V1-V4 fold into `BILLIARD`, V5 into `SNOOKER` |
| `dws_member_recall_index` / `v_dws_member_recall_index` | DWS table + RLS view | 2026-03-20 | `dws_member_winback_index` (WBI) + `dws_member_newconv_index` (NCI) |

All `_archived/` directories hold deprecated content; never read or reference them unless the user explicitly asks.

## Governance
Every logic change must be traceable, verifiable, and revertible. Audit detection and reminders are executed automatically by hooks.

> See `structure.md` for the detailed directory tree (auto-loaded via fileMatch); ETL feature details are in `product-full.md` (auto-loaded via fileMatch).
@@ -1,17 +0,0 @@
---
inclusion: manual
---

# Change Impact Review and Documentation Sync (manual reference)

Note: this file is a quick reference for on-demand loading (usable as a /slash command); for the detailed workflow, prefer the skill:
- steering-readme-maintainer

## When to Use
- On any "logic change" touching business rules / money semantics / ETL / APIs / auth / miniprogram interactions

## Quick Checklist
- Does project-overview.md / tech.md / structure.md / README.md (and the README.md of each subdirectory) need updating?
- Does an audit record need to be added at docs/audit/changes/<date>__<slug>.md?
- Does each modified file need an AI_CHANGELOG entry?
- Do the changed logic sites need CHANGE marker comments?
Some files were not shown because too many files have changed in this diff