chore: migrate IDE environment from Kiro to Claude Code
- Add CLAUDE.md (root + ETL subdirectory + db subdirectory) consolidating all Kiro steering docs
- Add .mcp.json migrated from .kiro/settings/mcp.json (test DBs enabled, prod disabled)
- Add .claude/commands/ (audit, doc-sync, db-docs) replacing Kiro skills
- Add .claude/hooks/ (session_start, post_edit_audit, stop_audit_check) replacing Kiro hooks
- Add .claude/settings.json registering all hooks
- Add scripts/audit/prescan.py merging Kiro's audit_flagger + compliance_prescan
- Remove .kiro/agents, hooks, scripts, settings, skills, state (migrated or obsolete)
- Update .gitignore for Claude Code

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
104 .claude/commands/audit.md Normal file
@@ -0,0 +1,104 @@
# /audit — Change Audit

Review all file changes you made in this session, combine them with the automated pre-scan results, and write the audit to disk.

## Steps

### Step 1: Run the pre-scan script (Python, zero tokens)

Run:

```bash
python scripts/audit/prescan.py
```

The script automatically:

- Collects all changed files from git status
- Classifies high-risk files and generates risk_tags
- Runs compliance checks: code-to-doc mapping, migration SQL detection, DDL baseline check

Read the JSON it outputs. If `audit_required: false`, tell the user "no audit needed" and stop.
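Checking the pre-scan output can be sketched as follows. This is a hedged illustration: the field names (`audit_required`, `total_files`, `risk_tags`, `high_risk_files`) come from this document's description of the script, but the exact output schema and the sample values are assumptions.

```python
import json

# Hypothetical sample: the exact prescan.py output schema is an assumption;
# the field names come from this document, the values are made up.
sample_output = """
{"audit_required": true, "total_files": 3,
 "risk_tags": ["db-schema-change", "dir:backend"],
 "high_risk_files": ["db/app/migrations/0042_add_refund.sql"]}
"""

result = json.loads(sample_output)
if not result["audit_required"]:
    print("no audit needed")
else:
    tags = ", ".join(result["risk_tags"])
    print(f"{result['total_files']} changed files, tags: {tags}")
# → 3 changed files, tags: db-schema-change, dir:backend
```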

**Fallback**: if git status includes many historical changes not from this session, use the `--files` argument to pass only this session's files:

```bash
python scripts/audit/prescan.py --files "file1.py,file2.sql,..."
```

Extract the file list from your conversation memory (this session's Edit/Write tool calls).

### Step 2: Add semantic context

The pre-scan script can tell you "which files changed, whether they are high-risk, and whether docs are missing", but it does not know **why** they changed.

Fill in from conversation memory:

- The reason each file was modified (what the user asked for)
- The technical approach and design decisions behind the change
- Related impact on other modules

Merge the pre-scan JSON with this semantic context as the input for Step 3.

### Step 3: Delegate the audit record to a sub-agent

Use the Agent tool to launch a sub-agent, passing in:

1. The full pre-scan JSON result
2. The reason and summary of each change (the semantic context you added)

The sub-agent's task instructions:
> Create an audit record file under `docs/audit/changes/`, named `<YYYY-MM-DD>__<short-english-slug>.md`.
>
> Use the following format:
>
> ```markdown
> # Change Audit Record: <Chinese title>
>
> | Field | Value |
> |-------|-------|
> | Date  | YYYY-MM-DD HH:MM:SS |
>
> ## Summary
> <1–3 paragraphs explaining what was done and why>
>
> ## Changed Files
> Group by added/modified/deleted; one line per file with a brief description of the change.
>
> ## Change Annotations
> Write an annotation for each changed file:
> - High-risk files (ETL tasks / backend routes / database migrations / money-related): write a detailed annotation (change type, reason, approach, result)
> - Ordinary files: one brief line
> - Deleted files: record the deletion reason only
>
> ## Database Changes (if any)
> List created/modified/deleted tables, columns, constraints, and indexes. Note the migration execution status.
>
> ## Risks & Rollback
> - Risk points (marked high/medium/low)
> - Rollback essentials
>
> ## Verification
> - At least one executable verification method (test command / SQL / integration steps)
>
> ## Compliance Check
> - List the doc sync status (synced / pending / not applicable)
> ```
>
> Get the current Beijing time via `python -c "from datetime import datetime, timezone, timedelta; print(datetime.now(timezone(timedelta(hours=8))).strftime('%Y-%m-%d %H:%M:%S'))"`.
>
> Write the audit record in Simplified Chinese.
>
> When done, run `python scripts/audit/gen_audit_dashboard.py` to refresh the audit dashboard.
>
> Return only: done / files_written / next_step.
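
The filename and timestamp conventions above can be sketched like this; the slug value is hypothetical, while the Beijing-time approach is the same one-liner given in the instructions.

```python
from datetime import datetime, timezone, timedelta

# Beijing time, as in the command quoted above
beijing = timezone(timedelta(hours=8))
now = datetime.now(beijing)

# Hypothetical slug; a real slug should briefly describe the change in English
slug = "migrate-claude-code"
filename = f"{now:%Y-%m-%d}__{slug}.md"
header_date = now.strftime("%Y-%m-%d %H:%M:%S")

print(filename)     # e.g. 2026-01-15__migrate-claude-code.md
print(header_date)  # e.g. 2026-01-15 09:30:00
```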

### Step 4: Fill in missing doc syncs

Go through each non-compliant item listed under `code_without_docs` in the pre-scan JSON:

- Read the current content of the corresponding code file
- Update the corresponding document

If the workload is large (>3 documents), delegate to a sub-agent.

### Step 5: Report to the user

A short receipt:

- Path of the audit record file
- Compliance check result (all passed / N items filled in / N items pending user action)
- Next-step suggestion (e.g. "commit when ready")
63 .claude/commands/db-docs.md Normal file
@@ -0,0 +1,63 @@
# /db-docs — Database Doc Sync

When the PostgreSQL schema/table structure changes, write the changes to `docs/database/` in an audit-friendly way.

## Trigger Conditions

- Migration script / DDL changes (add/drop/alter tables, columns, types, defaults, NOT NULL, constraints, indexes, foreign keys)
- DDL executed manually

## Steps

### Step 1: Identify structural changes

From this session's changes, list the added/modified/deleted objects:

- schema / table / column / index / constraint / foreign key
- State the before/after difference explicitly

### Step 2: Update table docs

For each affected table, update the corresponding document under `docs/database/`:

- If the doc exists: update the column list, constraints, indexes, etc.
- If the doc does not exist: create it from the template below

Template:

```markdown
# <schema>.<table_name>

## Overview
<purpose of the table>

## Columns

| Column | Type | Nullable | Default | Description |
|--------|------|----------|---------|-------------|
| ...    | ...  | ...      | ...     | ...         |

## Constraints & Indexes
- PRIMARY KEY: ...
- UNIQUE: ...
- INDEX: ...

## Relationships
- Upstream: <data source>
- Downstream: <modules/tables that consume it>
```

Pay special attention to money columns: note precision, currency, and rounding rules.
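
To make the precision/rounding note concrete, here is an illustrative sketch only: a 2-decimal amount with round-half-up is an assumption for demonstration, not this project's documented rule; the actual precision and rounding must come from the table doc itself.

```python
from decimal import Decimal, ROUND_HALF_UP

# Assumption for illustration: 2 decimal places, round half up.
# Real money columns must follow the rules recorded in docs/database/.
def to_money(value: str) -> Decimal:
    """Quantize a raw value to 2 decimal places, rounding half up."""
    return Decimal(value).quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)

print(to_money("19.995"))  # 20.00
print(to_money("19.994"))  # 19.99
```

Documenting which rounding mode applies matters because banker's rounding (ROUND_HALF_EVEN) would give a different result for the first example.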

### Step 3: Rollback & verification

Write audit-friendly rollback and verification info:

- DDL rollback path (provide reverse-migration SQL where necessary)
- At least 3 verification SQL statements (covering constraints/indexes/key columns)

### Step 4: DDL baseline check

Check whether the baseline files under `docs/database/ddl/` need a merged update. If so, update the baseline.

### Step 5: Output summary

- Which docs were updated/created
- Migration script execution status (executed / pending)
- DDL baseline status (merged / pending)
55 .claude/commands/doc-sync.md Normal file
@@ -0,0 +1,55 @@
# /doc-sync — Doc Sync After Logic Changes

Check whether the logic changes in this session require doc updates, and perform the sync.

## Trigger Conditions

Run this when any of the following was modified:

- Business rules / calculation definitions / money handling (precision, rounding, thresholds)
- ETL/SQL cleaning, aggregation, or mapping logic
- API behavior (response structure, error codes, auth/permissions)
- Key mini-program interaction flows
- Database table structure

## Steps

### Step 1: Classify

Decide whether this session's changes count as "logic changes". If they are pure formatting / typo fixes / comment tweaks, tell the user "no logic changes, no doc sync needed" and stop.

### Step 2: Evaluate which docs need updating

Based on the modules involved, evaluate whether the following docs need updates:

**README.md at each level** (update only those related to this change):

- `README.md` (root): project overview, quick start, environment variables, architecture overview
- `apps/backend/README.md`: backend API routes, configuration, how to run
- `apps/etl/connectors/feiqiu/README.md`: ETL task list, development conventions
- `apps/miniprogram/README.md`: mini-program page structure
- `apps/admin-web/README.md`: admin console features
- `apps/tenant-admin/README.md`: tenant admin console features
- `packages/shared/README.md`: shared package description
- `db/README.md`: schema conventions, migration standards

Rule: update whenever it "helps a reader understand system behavior". If a README does not exist yet but the change touches that module, create it.

### Step 3: Apply the updates

For each doc that needs updating:

1. Read the current content
2. Update the relevant sections based on this change
3. Write the updated content back

If the workload is large (>3 documents), delegate to a sub-agent.

### Step 4: Cross-checks

- If a DB schema change is involved: remind the user to run `/db-docs`
- If an API change is involved: check whether `apps/backend/docs/API-REFERENCE.md` has been updated

### Step 5: Output summary

- Changed: which docs were modified
- Why: root cause + immediate cause
- Risk: risk points and regression scope
- Verify: suggested verification steps
33 .claude/hooks/post_edit_audit_reminder.py Normal file
@@ -0,0 +1,33 @@
#!/usr/bin/env python3
"""PostToolUse hook: remind to run an audit after editing a high-risk file."""
import json, re, sys

try:
    data = json.load(sys.stdin)
except Exception:
    sys.exit(0)

fp = (data.get("tool_input") or {}).get("file_path", "")
if not fp:
    sys.exit(0)

# Convert to a repo-relative path
rel = re.sub(r"^.*?NeoZQYY[/\\]", "", fp.replace("\\", "/"))

HIGH_RISK = [
    r"^apps/etl/connectors/feiqiu/(tasks|loaders|scd|orchestration|config|database|models|quality)/",
    r"^apps/backend/app/(routers|services|auth|schemas)/",
    r"^db/.*/migrations/.*\.sql$",
    r"^db/.*/schemas/.*\.sql$",
    r"^packages/shared/",
]

for p in HIGH_RISK:
    if re.search(p, rel):
        print(json.dumps({
            "hookSpecificOutput": {
                "hookEventName": "PostToolUse",
                "additionalContext": f"[audit-reminder] High-risk file edited: {rel} — run /audit after finishing this round of changes"
            }
        }))
        break
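The hook's path-matching logic above can be exercised in isolation. This is a standalone re-implementation sketch for illustration (trimmed to two of the patterns); the sample file paths are hypothetical.

```python
import re

# Subset of the hook's HIGH_RISK patterns, for illustration
HIGH_RISK = [
    r"^apps/backend/app/(routers|services|auth|schemas)/",
    r"^db/.*/migrations/.*\.sql$",
]

def is_high_risk(file_path: str) -> bool:
    # Same normalization as the hook: forward slashes, strip up to repo root
    rel = re.sub(r"^.*?NeoZQYY[/\\]", "", file_path.replace("\\", "/"))
    return any(re.search(p, rel) for p in HIGH_RISK)

# Hypothetical paths
assert is_high_risk(r"C:\NeoZQYY\apps\backend\app\routers\orders.py")
assert not is_high_risk(r"C:\NeoZQYY\docs\README.md")
```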
36 .claude/hooks/session_start_context.py Normal file
@@ -0,0 +1,36 @@
#!/usr/bin/env python3
"""SessionStart hook: load project-state context when a session starts."""
import json, subprocess, sys, os

project_dir = os.environ.get("CLAUDE_PROJECT_DIR", os.getcwd())
script = os.path.join(project_dir, "scripts", "audit", "prescan.py")

if not os.path.isfile(script):
    sys.exit(0)

try:
    r = subprocess.run(
        [sys.executable, script],
        capture_output=True, text=True, timeout=10, cwd=project_dir,
    )
    if r.returncode != 0:
        sys.exit(0)
    result = json.loads(r.stdout)
except Exception:
    sys.exit(0)

audit_required = result.get("audit_required", False)
total = result.get("total_files", 0)
tags = ", ".join(result.get("risk_tags", []))

if audit_required:
    ctx = f"[session-context] The workspace has {total} uncommitted changed files with high-risk tags: {tags}. If these changes are from a previous, unaudited session, consider running /audit first."
else:
    ctx = "[session-context] Workspace state is normal; no high-risk unaudited changes."

print(json.dumps({
    "hookSpecificOutput": {
        "hookEventName": "SessionStart",
        "additionalContext": ctx
    }
}))
29 .claude/hooks/stop_audit_check.py Normal file
@@ -0,0 +1,29 @@
#!/usr/bin/env python3
"""Stop hook: when Claude finishes a reply, check for unaudited high-risk changes."""
import json, subprocess, sys, os

project_dir = os.environ.get("CLAUDE_PROJECT_DIR", os.getcwd())
script = os.path.join(project_dir, "scripts", "audit", "prescan.py")

if not os.path.isfile(script):
    sys.exit(0)

try:
    r = subprocess.run(
        [sys.executable, script],
        capture_output=True, text=True, timeout=10, cwd=project_dir,
    )
    if r.returncode != 0:
        sys.exit(0)
    result = json.loads(r.stdout)
except Exception:
    sys.exit(0)

high_risk = result.get("high_risk_files", [])
if result.get("audit_required", False) and len(high_risk) > 0:
    print(json.dumps({
        "hookSpecificOutput": {
            "hookEventName": "Stop",
            "additionalContext": f"[audit-check] {len(high_risk)} high-risk file changes are unaudited. Consider running /audit."
        }
    }))
45 .claude/settings.json Normal file
@@ -0,0 +1,45 @@
{
  "permissions": {
    "additionalDirectories": [
      "C:\\Users\\Administrator\\.claude",
      "c:\\NeoZQYY\\.git"
    ]
  },
  "hooks": {
    "SessionStart": [
      {
        "hooks": [
          {
            "type": "command",
            "command": "python \"$CLAUDE_PROJECT_DIR/.claude/hooks/session_start_context.py\"",
            "timeout": 15,
            "statusMessage": "Loading project state..."
          }
        ]
      }
    ],
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          {
            "type": "command",
            "command": "python \"$CLAUDE_PROJECT_DIR/.claude/hooks/post_edit_audit_reminder.py\"",
            "timeout": 5
          }
        ]
      }
    ],
    "Stop": [
      {
        "hooks": [
          {
            "type": "command",
            "command": "python \"$CLAUDE_PROJECT_DIR/.claude/hooks/stop_audit_check.py\"",
            "timeout": 15
          }
        ]
      }
    ]
  }
}
10 .gitignore vendored
@@ -74,17 +74,15 @@ infra/**/*.secret
.specstory/
.cursorindexingignore

# ===== Claude Code local config =====
.claude/settings.local.json

# ===== Windows misc =====
*.lnk
.Deleted/

# ===== Kiro runtime state =====
.kiro/.audit_state.json
.kiro/.last_prompt_id.json
.kiro/.git_snapshot.json
.kiro/.file_baseline.json
.kiro/.compliance_state.json
.kiro/.audit_context.json
.kiro/state/

# ===== Ops script runtime state =====
scripts/ops/.monitor_token
@@ -1,186 +0,0 @@
---
name: audit-writer
description: Run post-change audit + docs sync for NeoZQYY Monorepo; write audit artifacts; return a very short receipt only.
tools: ["read", "write", "shell"]
---

You are a dedicated "audit wrap-up / post-processing writer" sub-agent.

## Core principle: work from the pre-built context, no full scans

Your sole input is `.kiro/state/.audit_context.json` (pre-built by `build_audit_context.py`).
It already contains everything you need:

| Field | Source | Content |
|-------|--------|---------|
| `changed_files` | audit-flagger | Full list of changed files |
| `high_risk_files` | audit-flagger | High-risk subset |
| `reasons` | audit-flagger | Risk classification tags |
| `high_risk_diff` | git diff | Diff of high-risk files (truncated) |
| `diff_stat` | git diff --stat | Change statistics summary |
| `compliance.code_without_docs` | compliance-prescan | Code files missing doc sync and the docs they should update |
| `compliance.new_migration_sql` | compliance-prescan | List of new migration SQL files |
| `compliance.has_bd_manual` | compliance-prescan | Whether a BD_Manual doc exists |
| `compliance.has_ddl_baseline` | compliance-prescan | Whether the DDL baseline was updated |
| `compliance.api_changed` | compliance-prescan | Whether API-related files changed |
| `compliance.openapi_spec_stale` | compliance-prescan | Whether the OpenAPI spec needs re-export |
| `session_diff` | agent-on-stop (file baseline) | Exact changes during this conversation: `added`/`modified`/`deleted` |
| `prompt_id` / `latest_prompt_log` | prompt-audit-log | Prompt-ID and original text (for traceability) |

**Forbidden**:

- ❌ Running `git status --porcelain` (you already have `changed_files`)
- ❌ Running a full `git diff` (you already have `high_risk_diff` + `diff_stat`)
- ❌ Walking directories to find changed files (you already have classified lists)
- ❌ Running `change_compliance_prescan.py` (you already have `compliance` data)

**Allowed**:

- ✅ Reading specific file contents (e.g. a README's current content before updating it)
- ✅ Running `git diff HEAD -- <file>` on a single file (only when its diff in the context is truncated)
- ✅ Connecting to the test DB to verify migration status (only when `new_migration_sql` is non-empty)

## Audit artifact paths (single root)

- Change audit records: `docs/audit/changes/<YYYY-MM-DD>__<slug>.md`
- Audit dashboard: `docs/audit/audit_dashboard.md` (auto-generated, do not edit by hand)
- Prompt logs: `docs/audit/prompt_logs/`
- Dashboard refresh command: `python scripts/audit/gen_audit_dashboard.py`
- All audit artifacts go under the repo-root `docs/audit/`; never write into submodules

## When heavy post-processing is needed

Decide from `audit_required` and `reasons` in `audit_context.json`:

- `audit_required: true` → run the full audit flow
- `audit_required: false` → output "no audit needed", clear the flag, exit

## Execution strategy (context-driven, no redundant scans)

### Step 1: Read the context

Read `.kiro/state/.audit_context.json` and extract the key fields.

### Step 1b: Read the session index

Read `docs/audit/session_logs/_session_index.json` and, by `startTime`, find the entry closest to `prompt_at` in `audit_context.json` (the main conversation, not `is_sub`). Extract:

- `description`: use as the audit record's "Summary" (more accurate and complete than inferring from the diff)
- `summary.files_modified` / `summary.files_created`: cross-check against `session_diff`
- First 8 characters of the executionId: write into the audit record as `session_id` to create a bidirectional link
- `summary.sub_agents`: record which sub-agents this conversation invoked
- `summary.errors`: flag any execution errors

If the index is missing or has no matching entry, skip this step; it does not block the rest of the flow.

### Step 2: Write the audit to disk (invoke skills as needed)

Decide from `reasons` which skills are needed:

- Contains `dir:backend` / `dir:etl` / `dir:shared` etc. → invoke `steering-readme-maintainer`
- Contains any high-risk tag → invoke `change-annotation-audit` (writes docs/audit/changes/ + AI_CHANGELOG + CHANGE comments)
- Contains `db-schema-change` → invoke `bd-manual-db-docs` and run the full DB doc reconciliation (see Step 2b)

Every date-time field in audit records must be precise to the second (format `YYYY-MM-DD HH:MM:SS`, timezone Asia/Shanghai). This includes, among others: the "Date" in the audit record header, AI_CHANGELOG entry timestamps, and dates in CHANGE marker comments.

If `session_diff` contains `added` or `deleted` files, add a "Files changed in this conversation" section to the audit record, listing added and deleted files separately.

If Step 1b succeeded, add to the audit record's header metadata:

- `session_id`: first 8 characters of the executionId (e.g. `f29acdea`)
- Summary: the `description` from the session index (the LLM-generated operation summary)
- `session_path`: relative path of the session log file (the `output_dir` field value)

Audit record header template:

```markdown
# Change Audit Record: <title>

| Field | Value |
|-------|-------|
| Date | YYYY-MM-DD HH:MM:SS |
| Prompt-ID | <from audit_context> |
| Session-ID | <first 8 chars of executionId> |
| Session path | <output_dir relative path> |

## Summary
<description from the session index, or a summary inferred from the diff>
```

### Step 2b: Full DB doc reconciliation (when reasons contains db-schema-change)

When `reasons` contains `db-schema-change`, in addition to invoking the `bd-manual-db-docs` skill for this change, you must run a full reconciliation:

1. Connect to the test DBs (pg power's `pg-etl-test` / `pg-app-test`) and query `information_schema.tables` and `information_schema.columns` for the actual structure of all tables and columns
2. Scan the existing docs under `docs/database/` and compare table by table:
   - Tables missing from the docs → create a new table doc
   - Documented columns that differ from reality (type, nullable, default, etc.) → update the doc
   - Tables documented but dropped from the database → mark as deprecated in the doc
3. Write a reconciliation summary into the audit record: number of new docs, updated docs, and deprecation marks
4. Output all docs to `docs/database/`, following the existing directory structure and template format

Note: use the test DB (TEST_DB_DSN) for the full reconciliation; never connect to production.
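
The table-level part of the reconciliation is a set comparison. In this sketch the `information_schema` query is standard PostgreSQL, but connection handling is omitted and the sample sets stand in for real query and doc-scan results, which are assumptions.

```python
# Sketch of the Step 2b table-level reconciliation.
QUERY = """
SELECT table_schema, table_name
FROM information_schema.tables
WHERE table_type = 'BASE TABLE'
  AND table_schema NOT IN ('pg_catalog', 'information_schema');
"""

# Hypothetical results: from running QUERY, and from scanning docs/database/
db_tables = {"ods.orders", "dwd.orders", "dws.revenue_daily"}
doc_tables = {"ods.orders", "dwd.orders", "dwd.members_legacy"}

missing_docs = db_tables - doc_tables   # need new table docs
deprecated = doc_tables - db_tables     # mark as deprecated in docs

print(sorted(missing_docs))  # ['dws.revenue_daily']
print(sorted(deprecated))    # ['dwd.members_legacy']
```

Column-level comparison works the same way against `information_schema.columns`, keyed by (table, column).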

### Step 3: Doc proofreading and gap-filling

Iterate over `compliance.code_without_docs`; for each missing item:

- Read the current content of the code file (no diff needed, read the file directly)
- Update the corresponding docs:

| Code path prefix | Docs to sync |
|---|---|
| `apps/backend/app/routers/` | `apps/backend/docs/API-REFERENCE.md` + `docs/contracts/openapi/backend-api.json` |
| `apps/backend/app/services/` | `apps/backend/docs/API-REFERENCE.md` + `apps/backend/README.md` |
| `apps/backend/app/auth/` | `apps/backend/docs/API-REFERENCE.md` + `apps/backend/README.md` + `docs/contracts/openapi/backend-api.json` |
| `apps/backend/app/schemas/` | `docs/contracts/openapi/backend-api.json` |
| `apps/etl/connectors/feiqiu/tasks/` | `apps/etl/connectors/feiqiu/docs/etl_tasks/` |
| `apps/etl/connectors/feiqiu/loaders/` | `apps/etl/connectors/feiqiu/docs/etl_tasks/` |
| `apps/etl/connectors/feiqiu/scd/` | `apps/etl/connectors/feiqiu/docs/business-rules/scd2_rules.md` |
| `apps/etl/connectors/feiqiu/orchestration/` | `apps/etl/connectors/feiqiu/docs/architecture/` |
| `apps/admin-web/src/` | `apps/admin-web/README.md` |
| `apps/miniprogram/` | `apps/miniprogram/README.md` |
| `packages/shared/` | `packages/shared/README.md` |
| `db/*/migrations/*.sql` | `docs/database/BD_Manual_*.md` + `apps/etl/connectors/feiqiu/docs/database/` + `docs/database/ddl/` |

### Step 4: DDL/migration check

- If `compliance.new_migration_sql` is non-empty:
  - Connect to the test DB and verify whether the migrations were executed
  - Note the execution status in the audit record
- If `compliance.new_migration_sql` is non-empty and `compliance.has_ddl_baseline` is false:
  - Flag ⚠️ DDL baseline pending merge in the audit record

### Step 4b: OpenAPI spec sync check

- If `compliance.api_changed` is true and `compliance.openapi_spec_stale` is true:
  - Flag ⚠️ API code changed but OpenAPI spec not synced in the audit record
  - Run `python scripts/ops/_export_openapi.py` to re-export the spec (requires the backend to be importable)
  - If the export fails (backend not running, etc.), flag it in the audit record as pending manual export
  - On success, remind the user to reconnect the OpenAPI Power MCP server to load the new spec
- If `compliance.api_changed` is true and `compliance.openapi_spec_stale` is false:
  - The spec is already in sync; nothing to do

### Step 5: Change annotations

For every changed file covered by this audit, generate a per-file annotation section in the audit record (`docs/audit/changes/<YYYY-MM-DD>__<slug>.md`).

Each annotation includes:

- File path
- Change type (added / modified / deleted)
- Root cause: why the change was made (infer the user's intent from `latest_prompt_log` and the diff context)
- Approach: the technical approach and design decisions (inferred from the diff and code structure)
- Result: the effect of the change and its impact scope

Format template (goes into the record's `## Change Annotations` section):

```markdown
## Change Annotations

### `<file path>`
- Change type: added / modified / deleted
- Root cause: <motivation inferred from the prompt log and diff>
- Approach: <technical approach, design decisions, why this implementation>
- Result: <effect of the change, impact scope, links to other modules>
```

Execution rules:

- Write detailed annotations only for files in `high_risk_files` and `session_diff.added`
- For non-high-risk `session_diff.modified` files, one brief line is enough
- For `session_diff.deleted` files, record the deletion reason only
- Derive annotations from `high_risk_diff`, `latest_prompt_log`, and file contents; do not fabricate
- If a file's diff was truncated, you may run `git diff HEAD -- <file>` on that single file to get the full diff
- Write annotations in Simplified Chinese

### Step 6: Wrap up

- Set `audit_required` in `.kiro/state/.audit_state.json` to false and clear `reasons`/`changed_files`/`last_reminded_at`
- Run `python scripts/audit/gen_audit_dashboard.py` to refresh the audit dashboard

## Output (mandatory minimal receipt)

You may output only these 3 items:

- done: yes/no
- files_written: <relative paths, one per line>
- next_step: <1–2 items on failure; on success write "commit when ready">
@@ -1,16 +0,0 @@
{
  "enabled": true,
  "name": "Agent On Stop (Merged)",
  "description": "Merged hook: on conversation end, detect changes (including external non-Kiro changes), record the session log, run the compliance pre-scan, build the audit context, and issue audit reminders. Skips when there are no changes. Pure shell, zero tokens.",
  "version": "1",
  "when": {
    "type": "agentStop"
  },
  "then": {
    "type": "runCommand",
    "command": "python C:/NeoZQYY/.kiro/scripts/agent_on_stop.py",
    "timeout": 360
  },
  "workspaceFolderName": "NeoZQYY",
  "shortName": "agent-on-stop"
}
@@ -1,16 +0,0 @@
{
  "enabled": true,
  "name": "CWD Guard for Shell",
  "description": "Before the AI runs a shell command, validate the cwd, command syntax, and Python invocation safety to avoid common Windows/PowerShell pitfalls.",
  "version": "2",
  "when": {
    "type": "preToolUse",
    "toolTypes": [
      "shell"
    ]
  },
  "then": {
    "type": "askAgent",
    "prompt": "Run the following checks on the shell command about to execute; fix any issue before running, and let it through if all pass:\n\n1. **cwd check**: if the command runs Python scripts under scripts/ops/, .kiro/scripts/, or apps/etl/connectors/feiqiu/scripts/, the cwd must be the repo root C:\\NeoZQYY. ETL module commands should use cwd apps/etl/connectors/feiqiu/, backend commands apps/backend/, frontend commands apps/admin-web/.\n2. **Bare Python/Node interception**: if the command contains `python`, `node`, or `ipython` without `-c`, `-m`, or a script-path argument, it must be fixed (a REPL would hijack the shell).\n3. **Command separators**: if `&&` is used, replace it with `;` (PowerShell syntax).\n4. **Environment variable syntax**: if `$VAR_NAME` is used to read an environment variable, replace it with `$env:VAR_NAME` (PowerShell syntax).\n\nCommands without these issues pass through directly."
  }
}
@@ -1,15 +0,0 @@
{
  "enabled": true,
  "name": "Daily Revenue Report",
  "description": "Manually triggered: runs daily_revenue_report.py to compile daily business metrics from March 1 through today (net revenue, top-ups, group-buy settlements, store visits, new members, top-up customer counts, etc.), output to docs/reports/daily-revenue-latest.md",
  "version": "1",
  "when": {
    "type": "userTriggered"
  },
  "then": {
    "type": "askAgent",
    "prompt": "Run python C:\\NeoZQYY\\scripts\\ops\\daily_revenue_report.py"
  },
  "workspaceFolderName": "NeoZQYY",
  "shortName": "daily-revenue-report"
}
@@ -1,15 +0,0 @@
{
  "enabled": true,
  "name": "ETL FULL TEST",
  "description": "One-click end-to-end ETL frontend/backend integration: start services → submit the task via a Playwright browser → live monitoring → performance report → black-box consistency test → service cleanup. See .kiro/specs/[ETL]-fullstack-integration/tasks.md for details.",
  "version": "1.1.0",
  "when": {
    "type": "userTriggered"
  },
  "then": {
    "type": "askAgent",
    "prompt": "Run the ETL full-stack integration ops task. First read `.kiro/specs/[ETL]-fullstack-integration/tasks.md` for full step details, then execute the 6 major steps below in order. Use the Playwright browser to simulate real user actions throughout; do not call APIs directly.\n\n## Step 1: Service startup & health check\n- Start the backend with controlPwshProcess: uvicorn app.main:app --host 0.0.0.0 --port 8000, cwd=apps/backend/\n- Start the frontend with controlPwshProcess: pnpm dev, cwd=apps/admin-web/\n- Wait for readiness; verify http://localhost:8000/docs and http://localhost:5173 are reachable\n- Open http://localhost:5173 with Playwright and log in (username admin, password admin123)\n- Verify the redirect to the task configuration page after login and that the sidebar menu renders\n\n## Step 2: Browser actions — task configuration & submission\n- On the task configuration page (/):\n  - Select flow api_full (API → ODS → DWD → DWS → INDEX)\n  - Select processing mode full_window\n  - Set the time-window mode to [custom], start 2025-7-01, end at the current time\n  - Split windows [by day], 30 days per slice\n  - Check force_full (forced full load)\n  - In the task selection area, select all common tasks with is_common=True (41 in total)\n- Confirm the CLI command preview shows the full parameters\n- Click the [Run directly] button (SendOutlined icon) to trigger POST /api/execution/run\n- Confirm the success notice and record the execution_id\n\n## Step 3: Execution monitoring & DEBUG\n- Navigate to the [Task Manager] page (/task-manager)\n- Confirm the task status is running in the [Queue] tab\n- Click the running task row to open the WebSocket live log drawer\n- Check the page state at a flexible interval of 30 seconds to 20 minutes\n- Watch the logs for ERROR / CRITICAL / Traceback / Exception / WARNING keywords\n- Report a timeout warning after 20 consecutive minutes with no new log output\n- Stop monitoring when the task finishes (success/failed/cancelled)\n- Collect all ERROR and WARNING lines with context and classify the errors\n- If the task failed, switch to the [History] tab for full execution details\n\n## Step 4: Performance timing & report generation\n- In the [History] tab, open the finished task's execution details\n- Fetch the full log via GET /api/execution/{id}/logs\n- Extract each 30-day window slice's start/end time from the log and compute durations\n- Break down ODS / DWD / DWS / INDEX phase durations and mark the top-5 bottlenecks\n- Generate the combined integration report at {SYSTEM_LOG_ROOT}/{date}__etl_integration_report.md\n- Include: execution summary, performance report (per-slice duration comparison, top-5), DEBUG report\n\n## Step 5: Black-box data consistency test\n- Run the end-to-end checker: uv run python scripts/ops/etl_consistency_check.py (cwd=C:\\\\NeoZQYY)\n  - The script finds the latest ETL log under LOG_ROOT and reads API JSON from FETCH_ROOT\n  - Connects to the database (PG_DSN) and compares table by table, field by field: API vs ODS, ODS vs DWD, DWD vs DWS\n  - Whitelist: ETL_META_COLS and SCD2_COLS are excluded; API empty string vs DB None counts as equal\n  - Report goes to ETL_REPORT_ROOT\n- Check FlowRunner's built-in consistency report (auto-generated under ETL_REPORT_ROOT)\n- Compare whether the two reports' conclusions agree\n- Append a black-box summary (pass/fail counts, whitelist differences, failing-table list) to the Step 4 report\n\n## Step 6: Service cleanup\n- Close the Playwright browser instance\n- Stop the uvicorn backend process (controlPwshProcess stop)\n- Stop the pnpm dev frontend process (controlPwshProcess stop)\n- Report the integration completion status\n\n## Environment & conventions\n- Load environment variables from the root .env (load_dotenv); missing variables must raise an error, never fall back silently\n- Use the test database (PG_DSN pointing at test_etl_feiqiu)\n- Report paths follow the export-paths convention, read from environment variables\n- Required variables: PG_DSN, FETCH_ROOT, LOG_ROOT, ETL_REPORT_ROOT, SYSTEM_LOG_ROOT"
  },
  "workspaceFolderName": "NeoZQYY",
  "shortName": "etl-fullstack-integration"
}
@@ -1,15 +0,0 @@
{
  "enabled": true,
  "name": "ETL Unified Analysis",
  "description": "Manually triggered unified ETL analysis: merges data-flow structure analysis and data consistency checking into one flow. Supports --mode structure|consistency|full (default full) and --source api|etl-log (default api, actively collecting the last 60 days).",
  "version": "1.0.0",
  "when": {
    "type": "userTriggered"
  },
  "then": {
    "type": "askAgent",
    "prompt": "Run the unified ETL analysis with the following steps. If it appears already done or traces of a previous run exist, clear them and rerun:\n\nRun `python scripts/ops/etl_unified_analysis.py`\n\nDefault behavior (full mode):\n1. Phase 1: data-flow structure analysis\n   - Run analyze_dataflow.py to collect API JSON, DB table structures, three-layer field mappings, and BD_manual business descriptions (last 60 days by default)\n   - Run gen_dataflow_report.py to generate the structure analysis report\n2. Phase 2: ETL data consistency check\n   - Run etl_consistency_check.py to compare API→ODS→DWD→DWS table by table, field by field\n   - Show each table's data cutoff date (MAX of create_time/createtime/fetched_at)\n3. Phase 3: report merge\n   - Merge the two reports into one unified report under ETL_REPORT_ROOT\n\nOptional arguments:\n- `--mode structure` structure analysis only\n- `--mode consistency` consistency check only\n- `--source etl-log` read ETL on-disk JSON instead of actively calling the API\n- `--date-from YYYY-MM-DD` start date\n- `--date-to YYYY-MM-DD` end date\n- `--limit N` max records per endpoint\n- `--tables t1,t2` tables to analyze\n\nWhitelist rules (inherited from v5):\n- ETL metadata columns (source_file, source_endpoint, fetched_at, payload, content_hash)\n- DWD dimension-table SCD2 management columns (valid_from, valid_to, is_current, etl_loaded_at, etl_batch_id)\n- API siteProfile nested-object fields\n- Time-format equivalence: different representations of the same instant count as identical content\n- Whitelisted fields still participate in checks and statistics; they are only collapsed in the report with the reason noted\n\nNotes:\n- Only the feiqiu connector is analyzed for now\n- Use the test database (TEST_DB_DSN), read-only"
  },
  "workspaceFolderName": "NeoZQYY",
  "shortName": "etl-unified-analysis"
}
@@ -1,14 +0,0 @@
{
  "enabled": true,
  "name": "Field Disappearance Scan",
  "description": "Manually triggered scan of DWD tables for fields whose values suddenly all become empty from a given day (≥3 days and ≥20 consecutive empty records). Outputs a terminal report + CSV.",
  "version": "1",
  "when": {
    "type": "userTriggered"
  },
  "then": {
    "type": "runCommand",
    "command": "python scripts/ops/field_disappearance_scan.py",
    "timeout": 300
  }
}
@@ -1,13 +0,0 @@
{
  "enabled": true,
  "name": "H5 Prototype Screenshots",
  "description": "Manually triggered: start an HTTP server → run screenshot_h5_pages.py to capture every H5 prototype page under docs/h5_ui/pages/ (iPhone 15 Pro Max, 430×932, DPR:3), output to docs/h5_ui/screenshots/. Shut down the server when done.",
  "version": "1",
  "when": {
    "type": "userTriggered"
  },
  "then": {
    "type": "askAgent",
    "prompt": "Run the H5 prototype batch-screenshot flow:\n1. Start an HTTP server: `python -m http.server 8765 --directory docs/h5_ui/pages` (background via controlPwshProcess, cwd C:\\NeoZQYY)\n2. Wait 2 seconds to confirm the server is ready\n3. Run the screenshot script: `python C:\\NeoZQYY\\scripts\\ops\\screenshot_h5_pages.py` (cwd C:\\NeoZQYY, timeout 180s)\n4. Check the output: list filenames and sizes of docs/h5_ui/screenshots/*.png, confirm the count and that key interaction-state screenshots have reasonable sizes\n5. Stop the HTTP server (controlPwshProcess stop)\n6. Briefly report: total screenshots, pixel-size validation (should be 1290×N), anomalous files (if any)"
  }
}
@@ -1,16 +0,0 @@
{
  "enabled": false,
  "name": "Pre-Change Research Guard",
  "description": "Before a write operation, check whether the pre-change research for logic changes (audit history, doc reading, context summary) is complete. If not, block the write and finish the research flow first.",
  "version": "1",
  "when": {
    "type": "preToolUse",
    "toolTypes": [
      "write"
    ]
  },
  "then": {
    "type": "askAgent",
    "prompt": "You are about to perform a write operation. Confirm:\n\n1. Does this write involve a logic change (ETL / business rules / API / data model / frontend interaction)?\n2. If so, have you completed the pre-change research via the context-gatherer sub-agent, output a context summary to the user, and received confirmation?\n\nExceptions (pure formatting / comments / doc-only text / config files / the .kiro directory / user explicitly skipped / new files not touching existing logic) may proceed directly.\nIf the research is incomplete, stop the write first, complete the research flow with the context-gatherer sub-agent, then continue."
  }
}
@@ -1,15 +0,0 @@
{
  "enabled": true,
  "name": "Prompt On Submit (Merged)",
  "description": "Merged hook: on every prompt submission, run risk flagging + prompt logging + git snapshot. Pure shell, zero tokens.",
  "version": "1",
  "when": {
    "type": "promptSubmit"
  },
  "then": {
    "type": "runCommand",
    "command": "python C:/NeoZQYY/.kiro/scripts/prompt_on_submit.py"
  },
  "workspaceFolderName": "NeoZQYY",
  "shortName": "prompt-on-submit"
}
@@ -1,16 +0,0 @@
{
  "enabled": true,
  "name": "REPL Hijack Detection & Recovery",
  "description": "After a shell command runs, inspect the output; if REPL-hijack symptoms appear (exit code 0 with no output, a >>> prompt), first try an exit command to self-recover, and if that fails, ask the user to kill the process manually.",
  "version": "1",
  "when": {
    "type": "postToolUse",
    "toolTypes": [
      "shell"
    ]
  },
  "then": {
    "type": "askAgent",
    "prompt": "Inspect the output of the shell command just executed for REPL-hijack symptoms:\n1. Exit code 0 but no output at all (for commands expected to produce output)\n2. Python REPL prompts such as `>>>` or `...` in the output\n3. Node REPL prompts such as `>` in the output\n\nIf symptoms are detected:\n- First: immediately run `exit` to try leaving the REPL\n- Second: run a verification command (e.g. `echo \"shell_ok\"`) to confirm the shell has recovered\n- If recovered: rerun the original command\n- If still stuck: stop retrying, ask the user to run `Get-Process python* | Stop-Process -Force` in an external terminal, and wait for confirmation before continuing\n\nIf there are no symptoms, let it through without any action."
  }
}
@@ -1,15 +0,0 @@
{
  "enabled": true,
  "name": "Manual: Run /audit (via audit-writer subagent)",
  "description": "On-demand trigger: read the audit context pre-built by agent-on-stop + the session index, launch the audit-writer sub-agent to write the audit, proofread docs, fully reconcile DB docs, and link the session. Rebuilds the context automatically when stale.",
  "version": "11",
  "when": {
    "type": "userTriggered"
  },
  "then": {
    "type": "askAgent",
    "prompt": "Run the /audit flow:\n\n**Step zero: get the current time**: run `python -c \"from datetime import datetime, timezone, timedelta; print(datetime.now(timezone(timedelta(hours=8))).isoformat())\"` to get the current Beijing time, recorded as `now`. All later \"older than 30 minutes\" checks use this `now` as the baseline.\n\n**Precheck**: read `.kiro/state/.audit_context.json` and check the `built_at` timestamp. If the file is missing or `built_at` is more than 30 minutes before `now`, first run `python .kiro/scripts/agent_on_stop.py --force-rebuild` to rebuild the context, then re-read it.\n\n**Session index read**: read `docs/audit/session_logs/_session_index.json`, find the entry closest in time to this conversation (matched by `startTime`), and extract its `description` (LLM operation summary) and `summary` (structured summary). Use these to:\n- Source the audit record header's \"Summary\" (more accurate than inferring from the diff)\n- Cross-check session_diff (files_modified/created) in audit_context.json\n- Record the session executionId linked to this audit, creating a bidirectional link\n\n**Main flow**: launch a sub-agent named audit-writer with these instructions:\n\n> Read `.kiro/state/.audit_context.json` as your main input, together with the matching session-index entry. Do not run git status/diff or scan files yourself. audit_context.json already contains: the changed-file list, high-risk file diffs, the compliance checklist (missing docs / migration status / DDL baseline / API changes / OpenAPI spec status), this conversation's exact changes (session_diff: added/modified/deleted), and the Prompt-ID for traceability. Follow the execution strategy defined in audit-writer.md to write the audit and fill in missing doc syncs.\n\nConstraints:\n- The sub-agent must not rerun git status --porcelain or full git diff scans; all information is prepared in .audit_context.json.\n- When the sub-agent needs specific file contents (e.g. to update a doc), it may read those files directly, but must not walk the whole repo.\n- The sub-agent must invoke skills as needed: steering-readme-maintainer, change-annotation-audit, bd-manual-db-docs (only when their trigger conditions hold).\n- The sub-agent must auto-fill missing doc syncs per compliance.code_without_docs.\n- When reasons contains db-schema-change, the sub-agent must run the full DB doc reconciliation: connect to the test DB (TEST_DB_DSN), query information_schema, compare fully against the docs under docs/database/, fill in or update all missing/stale table docs (not just tables touched by this change), and output a reconciliation summary.\n- The sub-agent should use the added/modified/deleted lists in session_diff to pin down this conversation's change scope precisely.\n- **Session linking**: add a `session_id` field (first 8 chars of the executionId) to the audit record (docs/audit/changes/*.md) header, and write the session index's description into the record as its \"Summary\". This creates a bidirectional link between audit records and session logs.\n- The sub-agent must generate change annotations for all changed files (Step 5) in the record's \"Change Annotations\" section, covering: change type, root cause, approach, result. Detailed annotations for high-risk files, one brief line for ordinary edits, reason only for deleted files.\n- If compliance.api_changed=true and compliance.openapi_spec_stale=true, run `python scripts/ops/_export_openapi.py` to re-export the OpenAPI spec; on failure flag it in the audit record as pending manual export; on success remind the user to reconnect the OpenAPI Power MCP server.\n- All audit artifacts go under docs/audit/, never inside submodules.\n- When done, set audit_required to false in .kiro/state/.audit_state.json.\n- Run `python scripts/audit/gen_audit_dashboard.py` to refresh the audit dashboard.\n- **Doc map update**: after the audit completes, automatically update `docs/DOCUMENTATION-MAP.md`:\n  - Identify the doc changes covered by this audit (from the audit record)\n  - Scan docs/ and module-internal docs for additions, modifications, deletions\n  - Pay special attention to new BD_Manual files under docs/database/\n  - Update the doc map entries according to the changes found\n  - Keep the doc map structurally complete, with every important doc recorded\n- The final reply must be a minimal receipt: done/files_written/next_step."
  },
  "workspaceFolderName": "NeoZQYY",
  "shortName": "audit"
}
@@ -1,15 +0,0 @@
{
  "enabled": true,
  "name": "Session description maker",
  "description": "Manually triggered: for session logs missing a description, call the Bailian Qwen API to generate a summary and write it into both indexes. askAgent mode shows live output.",
  "version": "1",
  "when": {
    "type": "userTriggered"
  },
  "then": {
    "type": "askAgent",
    "prompt": "Run the following command in the background and show live output: python -B C:/NeoZQYY/scripts/ops/batch_generate_summaries.py"
  },
  "workspaceFolderName": "NeoZQYY",
  "shortName": "session-summary"
}
@@ -1,39 +0,0 @@
# -*- coding: utf-8 -*-
"""cwd validation helper — shared by all scripts under .kiro/scripts/.

Usage:
    from _ensure_root import ensure_repo_root
    ensure_repo_root()

Delegates to neozqyy_shared.repo_root (shared package), with a fallback when it is not installed.
"""
from __future__ import annotations

import os
import warnings
from pathlib import Path


def ensure_repo_root() -> Path:
    """Verify that cwd is the repo root; if not, switch to it automatically."""
    try:
        from neozqyy_shared.repo_root import ensure_repo_root as _shared
        return _shared()
    except ImportError:
        pass
    # Fallback: locate the repo root relative to this file.
    cwd = Path.cwd()
    if (cwd / "pyproject.toml").is_file() and (cwd / ".kiro").is_dir():
        return cwd
    root = Path(__file__).resolve().parents[2]
    if (root / "pyproject.toml").is_file() and (root / ".kiro").is_dir():
        os.chdir(root)
        warnings.warn(
            f"cwd is not the repo root; switched automatically: {cwd} → {root}",
            stacklevel=2,
        )
        return root
    raise RuntimeError(
        f"Cannot locate the repo root. Current cwd={cwd}, inferred root={root}. "
        f"Please run the script from the repo root."
    )
@@ -1,650 +0,0 @@
#!/usr/bin/env python3
"""agent_on_stop — merged agentStop hook script (v3: includes LLM summary generation).

Merges the former audit_reminder + change_compliance_prescan + build_audit_context + session_extract:
1. Full session-log extraction → docs/audit/session_logs/ (regardless of whether code changed)
2. Call the Bailian API to generate a description for the just-extracted session → write to both indexes
3. Scan the workspace → compare against the promptSubmit baseline → precisely detect this conversation's changes
4. If no files changed at all → skip the review and exit silently
5. Compliance prescan → .kiro/state/.compliance_state.json
6. Build the audit context → .kiro/state/.audit_context.json
7. Audit reminder (rate-limited to 15 minutes) → stderr

Change detection compares a file mtime+size baseline and does not rely on git commit history.
Every functional block is isolated with try/except; a single failure does not affect the others.
"""

import hashlib
import json
import os
import re
import subprocess
import sys
from datetime import datetime, timezone, timedelta

# Import the file-baseline module and cwd validation from the same directory
sys.path.insert(0, os.path.dirname(os.path.abspath(__file__)))
from file_baseline import scan_workspace, load_baseline, diff_baselines, total_changes
from _ensure_root import ensure_repo_root

TZ_TAIPEI = timezone(timedelta(hours=8))
MIN_INTERVAL = timedelta(minutes=15)

# Path constants
STATE_PATH = os.path.join(".kiro", "state", ".audit_state.json")
COMPLIANCE_PATH = os.path.join(".kiro", "state", ".compliance_state.json")
CONTEXT_PATH = os.path.join(".kiro", "state", ".audit_context.json")
PROMPT_ID_PATH = os.path.join(".kiro", "state", ".last_prompt_id.json")
# Noise paths (used to filter non-business files out of the change list)
NOISE_PATTERNS = [
    re.compile(r"^docs/audit/"),
    re.compile(r"^\.kiro/"),
    re.compile(r"^\.hypothesis/"),
    re.compile(r"^tmp/"),
    re.compile(r"\.png$"),
    re.compile(r"\.jpg$"),
]

# High-risk paths
HIGH_RISK_PATTERNS = [
    re.compile(r"^apps/etl/connectors/feiqiu/(api|cli|config|database|loaders|models|orchestration|scd|tasks|utils|quality)/"),
    re.compile(r"^apps/backend/app/"),
    re.compile(r"^apps/admin-web/src/"),
    re.compile(r"^apps/miniprogram/"),
    re.compile(r"^packages/shared/"),
    re.compile(r"^db/"),
]

# Code-to-doc mapping (used by the compliance check)
DOC_MAP = {
    "apps/backend/app/routers/": ["apps/backend/docs/API-REFERENCE.md", "docs/contracts/openapi/backend-api.json"],
    "apps/backend/app/services/": ["apps/backend/docs/API-REFERENCE.md", "apps/backend/README.md"],
    "apps/backend/app/auth/": ["apps/backend/docs/API-REFERENCE.md", "apps/backend/README.md", "docs/contracts/openapi/backend-api.json"],
    "apps/backend/app/schemas/": ["docs/contracts/openapi/backend-api.json"],
    "apps/backend/app/main.py": ["docs/contracts/openapi/backend-api.json"],
    "apps/etl/connectors/feiqiu/tasks/": ["apps/etl/connectors/feiqiu/docs/etl_tasks/"],
    "apps/etl/connectors/feiqiu/loaders/": ["apps/etl/connectors/feiqiu/docs/etl_tasks/"],
    "apps/etl/connectors/feiqiu/scd/": ["apps/etl/connectors/feiqiu/docs/business-rules/scd2_rules.md"],
    "apps/etl/connectors/feiqiu/orchestration/": ["apps/etl/connectors/feiqiu/docs/architecture/"],
    "apps/admin-web/src/": ["apps/admin-web/README.md"],
    "apps/miniprogram/": ["apps/miniprogram/README.md"],
    "packages/shared/": ["packages/shared/README.md"],
}

# API change detection patterns (routers / auth / schemas / main.py)
API_CHANGE_PATTERNS = [
    re.compile(r"^apps/backend/app/routers/"),
    re.compile(r"^apps/backend/app/auth/"),
    re.compile(r"^apps/backend/app/schemas/"),
    re.compile(r"^apps/backend/app/main\.py$"),
]

MIGRATION_PATTERNS = [
    re.compile(r"^db/etl_feiqiu/migrations/.*\.sql$"),
    re.compile(r"^db/zqyy_app/migrations/.*\.sql$"),
    re.compile(r"^db/fdw/.*\.sql$"),
]

BD_MANUAL_PATTERN = re.compile(r"^docs/database/BD_Manual_.*\.md$")
DDL_BASELINE_DIR = "docs/database/ddl/"
AUDIT_CHANGES_DIR = "docs/audit/changes/"


def now_taipei():
    return datetime.now(TZ_TAIPEI)


def sha1hex(s: str) -> str:
    return hashlib.sha1(s.encode("utf-8")).hexdigest()


def is_noise(f: str) -> bool:
    return any(p.search(f) for p in NOISE_PATTERNS)


def safe_read_json(path):
    if not os.path.isfile(path):
        return {}
    try:
        with open(path, "r", encoding="utf-8") as f:
            return json.load(f)
    except Exception:
        return {}


def write_json(path, data):
    os.makedirs(os.path.dirname(path) or os.path.join(".kiro", "state"), exist_ok=True)
    with open(path, "w", encoding="utf-8") as f:
        json.dump(data, f, indent=2, ensure_ascii=False)


def git_diff_stat():
    try:
        r = subprocess.run(
            ["git", "diff", "--stat", "HEAD"],
            capture_output=True, text=True, encoding="utf-8", errors="replace", timeout=15
        )
        return r.stdout.strip() if r.returncode == 0 else ""
    except Exception:
        return ""


def git_diff_files(files, max_total=30000, max_per_file=15000):
    """Get the actual diff content for files. Tracked files use git diff HEAD; new files are read directly."""
    if not files:
        return ""
    all_diff = []
    total_len = 0
    for f in files:
        if total_len >= max_total:
            all_diff.append(f"\n[TRUNCATED: diff exceeds {max_total // 1000}KB]")
            break
        try:
            # Try git diff HEAD first
            r = subprocess.run(
                ["git", "diff", "HEAD", "--", f],
                capture_output=True, text=True, encoding="utf-8", errors="replace", timeout=10
            )
            chunk = ""
            if r.returncode == 0 and r.stdout.strip():
                chunk = r.stdout.strip()
            elif os.path.isfile(f):
                # Untracked new file: read its content directly as the diff
                try:
                    with open(f, "r", encoding="utf-8", errors="replace") as fh:
                        file_content = fh.read(max_per_file + 100)
                    chunk = f"--- /dev/null\n+++ b/{f}\n@@ -0,0 +1 @@\n" + file_content
                except Exception:
                    continue

            if chunk:
                if len(chunk) > max_per_file:
                    chunk = chunk[:max_per_file] + f"\n[TRUNCATED: {f} diff too long]"
                all_diff.append(chunk)
                total_len += len(chunk)
        except Exception:
            continue
    return "\n".join(all_diff)


def get_latest_prompt_log():
    log_dir = os.path.join("docs", "audit", "prompt_logs")
    if not os.path.isdir(log_dir):
        return ""
    try:
        files = sorted(
            [f for f in os.listdir(log_dir) if f.startswith("prompt_log_")],
            reverse=True
        )
        if not files:
            return ""
        with open(os.path.join(log_dir, files[0]), "r", encoding="utf-8") as f:
            content = f.read()
        return content[:3000] + "\n[TRUNCATED]" if len(content) > 3000 else content
    except Exception:
        return ""


# ── Step 1: detect changes via the file baseline ──
def detect_changes_via_baseline():
    """Scan the current workspace, compare against the promptSubmit baseline, and return the precise change list.

    Returns (all_changed_files, external_files, diff_result, no_change)
    - all_changed_files: all files changed during this conversation (added + modified)
    - external_files: currently equal to all_changed_files (can be refined later via Kiro write logs)
    - diff_result: the full diff result {added, modified, deleted}
    - no_change: whether nothing changed at all
    """
    before = load_baseline()
    after = scan_workspace(".")

    if not before:
        # No baseline (first run or baseline lost); cannot compare, fall back to all files
        return [], [], {"added": [], "modified": [], "deleted": []}, True

    diff = diff_baselines(before, after)
    count = total_changes(diff)

    if count == 0:
        return [], [], diff, True

    # All changed files = added + modified (deleted files no longer exist and do not enter risk evaluation)
    all_changed = sorted(set(diff["added"] + diff["modified"]))

    # Filter out noise
    real_files = [f for f in all_changed if not is_noise(f)]

    if not real_files:
        return [], [], diff, True

    # External changes: for now every baseline-detected change is recorded,
    # because Kiro's own writes also update mtime, so "external" here means
    # "all changes during this conversation", whether made by Kiro or not.
    # Precise separation would need the Kiro runtime to provide a write list, which is not available yet.
    external_files = []  # no longer misreport external changes

    return real_files, external_files, diff, False


# ── Step 3: compliance prescan ──
def do_compliance_prescan(all_files):
    result = {
        "new_migration_sql": [],
        "new_or_modified_sql": [],
        "code_without_docs": [],
        "new_files": [],
        "has_bd_manual": False,
        "has_audit_record": False,
        "has_ddl_baseline": False,
        "api_changed": False,
        "openapi_spec_stale": False,
    }

    code_files = []
    doc_files = set()

    for f in all_files:
        if is_noise(f):
            continue
        for mp in MIGRATION_PATTERNS:
            if mp.search(f):
                result["new_migration_sql"].append(f)
                break
        if f.endswith(".sql"):
            result["new_or_modified_sql"].append(f)
        if BD_MANUAL_PATTERN.search(f):
            result["has_bd_manual"] = True
        if f.startswith(AUDIT_CHANGES_DIR):
            result["has_audit_record"] = True
        if f.startswith(DDL_BASELINE_DIR):
            result["has_ddl_baseline"] = True
        if f.endswith(".md") or "/docs/" in f:
            doc_files.add(f)
        if f.endswith((".py", ".ts", ".tsx", ".js", ".jsx")):
            code_files.append(f)
        # Detect changes to API-related files
        for ap in API_CHANGE_PATTERNS:
            if ap.search(f):
                result["api_changed"] = True
                break

    # API changed but the OpenAPI spec was not updated in sync → mark it stale
    if result["api_changed"] and "docs/contracts/openapi/backend-api.json" not in all_files:
        result["openapi_spec_stale"] = True

    for cf in code_files:
        expected_docs = []
        for prefix, docs in DOC_MAP.items():
            if cf.startswith(prefix):
                expected_docs.extend(docs)
        if expected_docs:
            has_doc = False
            for ed in expected_docs:
                if ed in doc_files:
                    has_doc = True
                    break
                if ed.endswith("/") and any(d.startswith(ed) for d in doc_files):
                    has_doc = True
                    break
            if not has_doc:
                result["code_without_docs"].append({
                    "file": cf,
                    "expected_docs": expected_docs,
                })

    needs_check = bool(
        result["new_migration_sql"]
        or result["code_without_docs"]
        or result["openapi_spec_stale"]
    )

    now = now_taipei()
    write_json(COMPLIANCE_PATH, {
        "needs_check": needs_check,
        "scanned_at": now.isoformat(),
        **result,
    })
    return result


# ── Step 4: build the audit context ──
def do_build_audit_context(all_files, diff_result, compliance):
    now = now_taipei()
    audit_state = safe_read_json(STATE_PATH)
    prompt_info = safe_read_json(PROMPT_ID_PATH)

    # Use changed_files from audit_state (risk files from git status)
    # merged with this conversation's baseline diff
    git_changed = audit_state.get("changed_files", [])
    session_changed = all_files  # files changed during this conversation

    # Merge both sources and deduplicate
    all_changed = sorted(set(git_changed + session_changed))

    high_risk_files = [
        f for f in all_changed
        if any(p.search(f) for p in HIGH_RISK_PATTERNS)
    ]

    diff_stat = git_diff_stat()
    high_risk_diff = git_diff_files(high_risk_files)
    prompt_log = get_latest_prompt_log()

    context = {
        "built_at": now.isoformat(),
        "prompt_id": prompt_info.get("prompt_id", "unknown"),
        "prompt_at": prompt_info.get("at", ""),
        "audit_required": audit_state.get("audit_required", False),
        "db_docs_required": audit_state.get("db_docs_required", False),
        "reasons": audit_state.get("reasons", []),
        "changed_files": all_changed[:100],
        "high_risk_files": high_risk_files,
        "session_diff": {
            "added": diff_result.get("added", [])[:50],
            "modified": diff_result.get("modified", [])[:50],
            "deleted": diff_result.get("deleted", [])[:50],
        },
        "compliance": {
            "code_without_docs": compliance.get("code_without_docs", []),
            "new_migration_sql": compliance.get("new_migration_sql", []),
            "has_bd_manual": compliance.get("has_bd_manual", False),
            "has_audit_record": compliance.get("has_audit_record", False),
            "has_ddl_baseline": compliance.get("has_ddl_baseline", False),
            "api_changed": compliance.get("api_changed", False),
            "openapi_spec_stale": compliance.get("openapi_spec_stale", False),
        },
        "diff_stat": diff_stat,
        "high_risk_diff": high_risk_diff,
        "latest_prompt_log": prompt_log,
    }

    write_json(CONTEXT_PATH, context)


# ── Step 5: audit reminder (15-minute rate limit) ──
def do_audit_reminder(real_files):
    state = safe_read_json(STATE_PATH)
    if not state.get("audit_required"):
        return

    # Do not remind when nothing changed
    if not real_files:
        return

    now = now_taipei()
    last_str = state.get("last_reminded_at")
    if last_str:
        try:
            last = datetime.fromisoformat(last_str)
            if (now - last) < MIN_INTERVAL:
                return
        except Exception:
            pass

    state["last_reminded_at"] = now.isoformat()
    write_json(STATE_PATH, state)

    reasons = state.get("reasons", [])
    reason_text = ", ".join(reasons) if reasons else "high-risk paths changed"

    # Informational reminder only; exit(0) so the agent does not treat it as an error and run the audit itself
    # Audit records are always produced by the user manually triggering /audit
    sys.stderr.write(
        f"[AUDIT REMINDER] Pending audit ({reason_text}), "
        f"{len(real_files)} files changed this session. "
        f"Run /audit to sync. (15min rate limit)\n"
    )
    sys.exit(0)


# ── Step 6: full session-log extraction ──
def do_full_session_extract():
    """Extract the full conversation record of the current execution from Kiro globalStorage.

    Calls the core logic of scripts/ops/extract_kiro_session.py.
    Only extracts the latest unindexed execution to avoid duplicates.
    """
    # Import the extractor lazily (avoid import cost at startup)
    scripts_ops = os.path.join(os.path.dirname(os.path.abspath(__file__)), "..", "..", "scripts", "ops")
    scripts_ops = os.path.normpath(scripts_ops)
    if scripts_ops not in sys.path:
        sys.path.insert(0, scripts_ops)

    try:
        from extract_kiro_session import extract_latest
    except ImportError:
        return  # silently skip when the extractor does not exist

    # globalStorage path: from the environment variable or the default location
    global_storage = os.environ.get(
        "KIRO_GLOBAL_STORAGE",
        os.path.join(os.environ.get("APPDATA", ""), "Kiro", "User", "globalStorage")
    )
    workspace_path = os.getcwd()

    extract_latest(global_storage, workspace_path)


def _extract_summary_content(md_content: str) -> str:
    """Extract the part of a session-log markdown file suitable for summary generation.

    Strategy: if the "user input" section contains CONTEXT TRANSFER (cross-turn continuation),
    replace it with a short note so historical background does not distort this turn's summary.
    (The regex below matches the Chinese section headers emitted by the session extractor,
    so those literals must stay as-is.)
    """
    import re
    # Detect whether the user input contains a context transfer
    ct_pattern = re.compile(r"## 2\. 用户输入\s*\n```\s*\n.*?CONTEXT TRANSFER", re.DOTALL)
    if ct_pattern.search(md_content):
        # Replace the "user input" section with a short note
        # (matches everything between "## 2. 用户输入" and the next "## 3.")
        md_content = re.sub(
            r"(## 2\. 用户输入)\s*\n```[\s\S]*?```\s*\n(?=## 3\.)",
            r"\1\n\n[本轮为 Context Transfer 续接,用户输入为历史多轮摘要,已省略。请基于执行摘要和对话记录中的实际工具调用判断本轮工作。]\n\n",
            md_content,
        )
    return md_content


# ── Step 7: generate an LLM summary for the latest session ──
_SUMMARY_SYSTEM_PROMPT = """You are a professional technical-conversation analyst. Your task is to produce a concise Chinese summary for one execution of an AI coding assistant.

Background: a conversation (chatSession) contains multiple executions. One execution = the user sends one message → the AI completes its response. You receive the full record of a single execution.

Summary rules:
1. Describe only the work actually completed in this execution; do not describe historical background
2. List the features/tasks completed (one execution may complete several)
3. Include key technical details: file paths, module names, database tables, API endpoints, etc.
4. For bug fixes, state the cause and the solution
5. Do not write process narration ("the user said..."); write results only
6. If the content is too short or has no substance, write "无实质内容"
7. No length limit; completeness of information comes first, avoid truncation loss

Important:
- The "execution summary" (📋) is the most reliable source; judge what this execution did primarily from it
- If the "user input" contains CONTEXT TRANSFER, that is a historical summary of earlier executions, not this execution's work
- The actual tool calls and file changes in the conversation record are this execution's real operations

Output the summary directly, with no prefix or explanation."""


def do_generate_description():
    """For main-conversation entries missing a description, call the Bailian API to generate a summary and write it to both indexes."""
    from dotenv import load_dotenv
    load_dotenv()

    api_key = os.environ.get("BAILIAN_API_KEY", "")
    if not api_key:
        return

    model = os.environ.get("BAILIAN_MODEL", "qwen-plus")
    base_url = os.environ.get("BAILIAN_BASE_URL", "https://dashscope.aliyuncs.com/compatible-mode/v1")

    scripts_ops = os.path.join(os.path.dirname(os.path.abspath(__file__)), "..", "..", "scripts", "ops")
    scripts_ops = os.path.normpath(scripts_ops)
    if scripts_ops not in sys.path:
        sys.path.insert(0, scripts_ops)

    try:
        from extract_kiro_session import load_index, save_index, load_full_index, save_full_index
    except ImportError:
        return

    index = load_index()
    entries = index.get("entries", {})
    if not entries:
        return

    # Collect every main-conversation entry that is missing a description
    targets = []
    for eid, ent in entries.items():
        if ent.get("is_sub"):
            continue
        if not ent.get("description"):
            targets.append((eid, ent))

    if not targets:
        return

    # Limit the batch size in the agent_on_stop scenario to avoid timeouts;
    # backlog processing uses the standalone script batch_generate_summaries.py
    MAX_PER_RUN = 10
    if len(targets) > MAX_PER_RUN:
        # Prefer the newest entries (descending startTime)
        targets.sort(key=lambda t: t[1].get("startTime", ""), reverse=True)
        targets = targets[:MAX_PER_RUN]

    try:
        from openai import OpenAI
        client = OpenAI(api_key=api_key, base_url=base_url)
    except Exception:
        return

    full_index = load_full_index()
    full_entries = full_index.get("entries", {})
    generated = 0

    for target_eid, target_entry in targets:
        out_dir = target_entry.get("output_dir", "")
        if not out_dir or not os.path.isdir(out_dir):
            continue

        # Find the main_*.md files belonging to this entry
        main_files = sorted(
            f for f in os.listdir(out_dir)
            if f.startswith("main_") and f.endswith(".md")
            and target_eid[:8] in f  # match by executionId short code
        )
        if not main_files:
            # Fallback: take all main files in the directory
            main_files = sorted(
                f for f in os.listdir(out_dir)
                if f.startswith("main_") and f.endswith(".md")
            )
        if not main_files:
            continue

        content_parts = []
        for mf in main_files:
            try:
                with open(os.path.join(out_dir, mf), "r", encoding="utf-8") as fh:
                    content_parts.append(fh.read())
            except Exception:
                continue
        if not content_parts:
            continue

        content = "\n\n---\n\n".join(content_parts)
        content = _extract_summary_content(content)
        if len(content) > 60000:
            content = content[:60000] + "\n\n[TRUNCATED]"

        try:
            resp = client.chat.completions.create(
                model=model,
                messages=[
                    {"role": "system", "content": _SUMMARY_SYSTEM_PROMPT},
                    {"role": "user", "content": f"Please generate a summary for the following single-execution record:\n\n{content}"},
                ],
                max_tokens=4096,
            )
            description = resp.choices[0].message.content.strip()
        except Exception:
            continue  # one failure does not affect the others

        if not description:
            continue

        # Write to both indexes (in memory)
        entries[target_eid]["description"] = description
        if target_eid in full_entries:
            full_entries[target_eid]["description"] = description
        generated += 1

    # Save both indexes in one batch
    if generated > 0:
        save_index(index)
        save_full_index(full_index)


def main():
    ensure_repo_root()
    now = now_taipei()
    force_rebuild = "--force-rebuild" in sys.argv

    # Full session-log extraction (every conversation is recorded, whether or not files changed)
    try:
        do_full_session_extract()
    except Exception:
        pass

    # Step 1: detect changes via the file baseline
    real_files, external_files, diff_result, no_change = detect_changes_via_baseline()

    # No file changes at all → skip all reviews (unless --force-rebuild)
    if no_change and not force_rebuild:
        return

    # With --force-rebuild and no changes, still rebuild the context from git status
    if no_change and force_rebuild:
        try:
            compliance = do_compliance_prescan(real_files or [])
        except Exception:
            compliance = {}
        try:
            do_build_audit_context(real_files or [], diff_result, compliance)
        except Exception:
            pass
        return

    # Step 2: compliance prescan (based on the files changed in this conversation)
    compliance = {}
    try:
        compliance = do_compliance_prescan(real_files)
    except Exception:
        pass

    # Step 4: build the audit context
    try:
        do_build_audit_context(real_files, diff_result, compliance)
    except Exception:
        pass

    # Step 7: audit reminder (informational, exit(0), does not trigger the agent to audit on its own)
    try:
        do_audit_reminder(real_files)
    except SystemExit:
        pass  # exit(0) is an informational exit; no need to re-raise
    except Exception:
        pass


if __name__ == "__main__":
    try:
        main()
    except SystemExit as e:
        sys.exit(e.code)
    except Exception:
        pass
@@ -1,165 +0,0 @@
#!/usr/bin/env python3
"""audit_flagger — decide whether the git worktree contains high-risk changes; write .kiro/state/.audit_state.json

Replaces the original PowerShell version to avoid a Windows PowerShell 5.1 parser bug.
"""

import hashlib
import json
import os
import re
import subprocess
import sys
from datetime import datetime, timezone, timedelta

TZ_TAIPEI = timezone(timedelta(hours=8))

RISK_RULES = [
    (re.compile(r"^apps/etl/connectors/feiqiu/(api|cli|config|database|loaders|models|orchestration|scd|tasks|utils|quality)/"), "etl"),
    (re.compile(r"^apps/backend/app/"), "backend"),
    (re.compile(r"^apps/admin-web/src/"), "admin-web"),
    (re.compile(r"^apps/miniprogram/(miniapp|miniprogram)/"), "miniprogram"),
    (re.compile(r"^packages/shared/"), "shared"),
    (re.compile(r"^db/"), "db"),
]

NOISE_PATTERNS = [
    re.compile(r"^docs/audit/"),
    re.compile(r"^\.kiro/"),  # .kiro config changes do not trigger a business audit
    re.compile(r"^tmp/"),
    re.compile(r"^\.hypothesis/"),
]

DB_PATTERNS = [
    re.compile(r"^db/"),
    re.compile(r"/migrations/"),
    re.compile(r"\.sql$"),
    re.compile(r"\.prisma$"),
]

STATE_PATH = os.path.join(".kiro", "state", ".audit_state.json")


def now_taipei():
    return datetime.now(TZ_TAIPEI).isoformat()


def sha1hex(s: str) -> str:
    return hashlib.sha1(s.encode("utf-8")).hexdigest()


def get_changed_files() -> list[str]:
    """Extract changed file paths from git status --porcelain"""
    try:
        result = subprocess.run(
            ["git", "status", "--porcelain"],
            capture_output=True, text=True, timeout=10
        )
        if result.returncode != 0:
            return []
    except Exception:
        return []

    files = []
    for line in result.stdout.splitlines():
        if len(line) < 4:
            continue
        path = line[3:].strip()
        if " -> " in path:
            path = path.split(" -> ")[-1]
        path = path.strip().strip('"').replace("\\", "/")
        if path:
            files.append(path)
    return files


def is_noise(f: str) -> bool:
    return any(p.search(f) for p in NOISE_PATTERNS)


def write_state(state: dict):
    os.makedirs(os.path.join(".kiro", "state"), exist_ok=True)
    with open(STATE_PATH, "w", encoding="utf-8") as fh:
        json.dump(state, fh, indent=2, ensure_ascii=False)


def main():
    # Exit immediately when not inside a git repo
    try:
        r = subprocess.run(
            ["git", "rev-parse", "--is-inside-work-tree"],
            capture_output=True, text=True, timeout=5
        )
        if r.returncode != 0:
            return
    except Exception:
        return

    all_files = get_changed_files()
    files = sorted(set(f for f in all_files if not is_noise(f)))
    now = now_taipei()

    if not files:
        write_state({
            "audit_required": False,
            "db_docs_required": False,
            "reasons": [],
            "changed_files": [],
            "change_fingerprint": "",
            "marked_at": now,
            "last_reminded_at": None,
        })
        return

    reasons = []
    audit_required = False
    db_docs_required = False

    for f in files:
        for pattern, label in RISK_RULES:
            if pattern.search(f):
                audit_required = True
                tag = f"dir:{label}"
                if tag not in reasons:
                    reasons.append(tag)
        # Loose files in the repo root
        if "/" not in f:
            audit_required = True
            if "root-file" not in reasons:
                reasons.append("root-file")
        # DB documentation trigger
        if any(p.search(f) for p in DB_PATTERNS):
            db_docs_required = True
            if "db-schema-change" not in reasons:
                reasons.append("db-schema-change")

    fp = sha1hex("\n".join(files))

    # Preserve last_reminded_at from the existing state
    last_reminded = None
    if os.path.isfile(STATE_PATH):
        try:
            with open(STATE_PATH, "r", encoding="utf-8") as fh:
                existing = json.load(fh)
            if existing.get("change_fingerprint") == fp:
                last_reminded = existing.get("last_reminded_at")
        except Exception:
            pass

    write_state({
        "audit_required": audit_required,
        "db_docs_required": db_docs_required,
        "reasons": reasons,
        "changed_files": files[:50],
        "change_fingerprint": fp,
        "marked_at": now,
        "last_reminded_at": last_reminded,
    })


if __name__ == "__main__":
    try:
        main()
    except Exception:
        # Never block prompt submission
        pass
@@ -1,107 +0,0 @@
#!/usr/bin/env python3
"""audit_reminder — when the agent stops, check for pending auditable changes; remind at most every 15 minutes.

Replaces the original PowerShell version to avoid a Windows PowerShell 5.1 parser bug.
"""

import json
import os
import subprocess
import sys
from datetime import datetime, timezone, timedelta

TZ_TAIPEI = timezone(timedelta(hours=8))
STATE_PATH = os.path.join(".kiro", "state", ".audit_state.json")
MIN_INTERVAL = timedelta(minutes=15)


def now_taipei():
    return datetime.now(TZ_TAIPEI)


def load_state():
    if not os.path.isfile(STATE_PATH):
        return None
    try:
        with open(STATE_PATH, "r", encoding="utf-8") as f:
            return json.load(f)
    except Exception:
        return None


def save_state(state):
    os.makedirs(os.path.join(".kiro", "state"), exist_ok=True)
    with open(STATE_PATH, "w", encoding="utf-8") as f:
        json.dump(state, f, indent=2, ensure_ascii=False)


def get_real_changes():
    """Get the changed files after filtering out noise"""
    try:
        r = subprocess.run(["git", "status", "--porcelain"], capture_output=True, text=True, timeout=10)
        if r.returncode != 0:
            return []
    except Exception:
        return []
    files = []
    for line in r.stdout.splitlines():
        if len(line) < 4:
            continue
        path = line[3:].strip().strip('"').replace("\\", "/")
        if " -> " in path:
            path = path.split(" -> ")[-1]
        # Exclude audit artifacts, .kiro config, and temp files
        if path and not path.startswith("docs/audit/") and not path.startswith(".kiro/") and not path.startswith("tmp/") and not path.startswith(".hypothesis/"):
            files.append(path)
    return sorted(set(files))


def main():
    state = load_state()
    if not state:
        sys.exit(0)

    if not state.get("audit_required"):
        sys.exit(0)

    # Clear the audit state when the worktree is clean
    real_files = get_real_changes()
    if not real_files:
        state["audit_required"] = False
        state["reasons"] = []
        state["changed_files"] = []
        state["last_reminded_at"] = None
        save_state(state)
        sys.exit(0)

    now = now_taipei()

    # 15-minute rate limit
    last_str = state.get("last_reminded_at")
    if last_str:
        try:
            last = datetime.fromisoformat(last_str)
            if (now - last) < MIN_INTERVAL:
                sys.exit(0)
        except Exception:
            pass

    # Update the reminder timestamp
    state["last_reminded_at"] = now.isoformat()
    save_state(state)

    reasons = state.get("reasons", [])
    reason_text = ", ".join(reasons) if reasons else "high-risk paths changed"
    sys.stderr.write(
        f"[AUDIT REMINDER] Pending audit detected ({reason_text}). "
        f"Run /audit (Manual: Run /audit hook) to sync docs & write audit artifacts. "
        f"(rate limit: 15min)\n"
    )
    sys.exit(1)


if __name__ == "__main__":
    try:
        main()
    except Exception:
        sys.exit(0)
@@ -1,174 +0,0 @@
#!/usr/bin/env python3
"""build_audit_context — merge all upstream hook outputs into one unified audit context snapshot.

Reads:
- .kiro/state/.audit_state.json (from audit-flagger: risk verdict, changed-file list)
- .kiro/state/.compliance_state.json (from change-compliance: missing docs, migration status)
- .kiro/state/.last_prompt_id.json (from prompt-audit-log: Prompt ID provenance)
- git diff --stat HEAD (change summary)
- git diff HEAD (diff of high-risk files only, truncated to a reasonable length)

Output: .kiro/state/.audit_context.json (the sole input of the audit-writer subagent)
"""

import json
import os
import re
import subprocess
import sys
from datetime import datetime, timezone, timedelta

TZ_TAIPEI = timezone(timedelta(hours=8))
CONTEXT_PATH = os.path.join(".kiro", "state", ".audit_context.json")

# High-risk paths (only these files are diffed, to keep the diff from growing too large)
HIGH_RISK_PATTERNS = [
    re.compile(r"^apps/etl/connectors/feiqiu/(api|cli|config|database|loaders|models|orchestration|scd|tasks|utils|quality)/"),
    re.compile(r"^apps/backend/app/"),
    re.compile(r"^apps/admin-web/src/"),
    re.compile(r"^apps/miniprogram/"),
    re.compile(r"^packages/shared/"),
    re.compile(r"^db/"),
]


def safe_read_json(path):
    if not os.path.isfile(path):
        return {}
    try:
        with open(path, "r", encoding="utf-8") as f:
            return json.load(f)
    except Exception:
        return {}


def git_diff_stat():
    try:
        r = subprocess.run(
            ["git", "diff", "--stat", "HEAD"],
            capture_output=True, text=True, encoding="utf-8", errors="replace", timeout=15
        )
        return r.stdout.strip() if r.returncode == 0 else ""
    except Exception:
        return ""


def git_diff_files(files, max_total=30000):
    """Fetch the git diff of the given files, truncated to max_total characters."""
    if not files:
        return ""
    # Diff one file at a time to avoid overly long command lines
    all_diff = []
    total_len = 0
    for f in files:
        if total_len >= max_total:
            all_diff.append(f"\n[TRUNCATED: diff exceeds {max_total // 1000}KB limit]")
            break
        try:
            r = subprocess.run(
                ["git", "diff", "HEAD", "--", f],
                capture_output=True, text=True, encoding="utf-8", errors="replace", timeout=10
            )
            if r.returncode == 0 and r.stdout.strip():
                chunk = r.stdout.strip()
                # Per-file diff truncation
                if len(chunk) > 5000:
                    chunk = chunk[:5000] + f"\n[TRUNCATED: {f} diff too long]"
                all_diff.append(chunk)
                total_len += len(chunk)
        except Exception:
            continue
    return "\n".join(all_diff)


def get_latest_prompt_log():
    """Return the contents of the newest prompt log file (for provenance)."""
    log_dir = os.path.join("docs", "audit", "prompt_logs")
    if not os.path.isdir(log_dir):
        return ""
    try:
        files = sorted(
            [f for f in os.listdir(log_dir) if f.startswith("prompt_log_")],
            reverse=True
        )
        if not files:
            return ""
        latest = os.path.join(log_dir, files[0])
        with open(latest, "r", encoding="utf-8") as f:
            content = f.read()
        # Truncate overly long content
        if len(content) > 3000:
            content = content[:3000] + "\n[TRUNCATED]"
        return content
    except Exception:
        return ""


def main():
    now = datetime.now(TZ_TAIPEI)

    # Read the upstream hook outputs
    audit_state = safe_read_json(os.path.join(".kiro", "state", ".audit_state.json"))
    compliance = safe_read_json(os.path.join(".kiro", "state", ".compliance_state.json"))
    prompt_id_info = safe_read_json(os.path.join(".kiro", "state", ".last_prompt_id.json"))

    # Extract the high-risk files from audit_state
    changed_files = audit_state.get("changed_files", [])
    high_risk_files = [
        f for f in changed_files
        if any(p.search(f) for p in HIGH_RISK_PATTERNS)
    ]

    # Collect diffs (high-risk files only)
    diff_stat = git_diff_stat()
    high_risk_diff = git_diff_files(high_risk_files)

    # Fetch the latest prompt log
    prompt_log = get_latest_prompt_log()

    # Build the unified context
    context = {
        "built_at": now.isoformat(),
        "prompt_id": prompt_id_info.get("prompt_id", "unknown"),
        "prompt_at": prompt_id_info.get("at", ""),

        # From audit-flagger
        "audit_required": audit_state.get("audit_required", False),
        "db_docs_required": audit_state.get("db_docs_required", False),
        "reasons": audit_state.get("reasons", []),
        "changed_files": changed_files,
        "high_risk_files": high_risk_files,

        # From change-compliance-prescan
        "compliance": {
            "code_without_docs": compliance.get("code_without_docs", []),
            "new_migration_sql": compliance.get("new_migration_sql", []),
            "has_bd_manual": compliance.get("has_bd_manual", False),
            "has_audit_record": compliance.get("has_audit_record", False),
            "has_ddl_baseline": compliance.get("has_ddl_baseline", False),
        },

        # git summary
        "diff_stat": diff_stat,
        "high_risk_diff": high_risk_diff,

        # Prompt provenance
        "latest_prompt_log": prompt_log,
    }

    os.makedirs(os.path.join(".kiro", "state"), exist_ok=True)
    with open(CONTEXT_PATH, "w", encoding="utf-8") as f:
        json.dump(context, f, indent=2, ensure_ascii=False)

    # Print a summary to stdout
    print(f"audit_context built: {len(changed_files)} files, "
          f"{len(high_risk_files)} high-risk, "
          f"{len(compliance.get('code_without_docs', []))} docs missing")


if __name__ == "__main__":
    try:
        main()
    except Exception as e:
        sys.stderr.write(f"build_audit_context failed: {e}\n")
        sys.exit(1)
@@ -1,243 +0,0 @@
#!/usr/bin/env python3
"""change_compliance_prescan — pre-scan changed files and list the items needing compliance review.

Invoked by the askAgent hook at agentStop; hands the LLM a condensed review
checklist so the LLM does not burn tokens scanning files itself.

Prints to stdout (read by askAgent):
- "NO_CHECK_NEEDED" when nothing needs review
- a structured JSON checklist otherwise
"""

import json
import os
import re
import subprocess
from datetime import datetime, timezone, timedelta

TZ_TAIPEI = timezone(timedelta(hours=8))
STATE_PATH = os.path.join(".kiro", "state", ".audit_state.json")

# Doc correspondences defined in doc-map
DOC_MAP = {
    # code path prefix → docs that should be updated alongside it
    "apps/backend/app/routers/": ["apps/backend/docs/API-REFERENCE.md"],
    "apps/backend/app/services/": ["apps/backend/docs/API-REFERENCE.md", "apps/backend/README.md"],
    "apps/backend/app/auth/": ["apps/backend/docs/API-REFERENCE.md", "apps/backend/README.md"],
    "apps/etl/connectors/feiqiu/tasks/": ["apps/etl/connectors/feiqiu/docs/etl_tasks/"],
    "apps/etl/connectors/feiqiu/loaders/": ["apps/etl/connectors/feiqiu/docs/etl_tasks/"],
    "apps/etl/connectors/feiqiu/scd/": ["apps/etl/connectors/feiqiu/docs/business-rules/scd2_rules.md"],
    "apps/etl/connectors/feiqiu/orchestration/": ["apps/etl/connectors/feiqiu/docs/architecture/"],
    "apps/admin-web/src/": ["apps/admin-web/README.md"],
    "apps/miniprogram/": ["apps/miniprogram/README.md"],
    "packages/shared/": ["packages/shared/README.md"],
}

# DDL baseline files (defined in doc-map)
DDL_BASELINE_DIR = "docs/database/ddl/"

# Migration script paths
MIGRATION_PATTERNS = [
    re.compile(r"^db/etl_feiqiu/migrations/.*\.sql$"),
    re.compile(r"^db/zqyy_app/migrations/.*\.sql$"),
    re.compile(r"^db/fdw/.*\.sql$"),
]

# DB documentation path
BD_MANUAL_PATTERN = re.compile(r"^docs/database/BD_Manual_.*\.md$")

# Audit record path
AUDIT_CHANGES_DIR = "docs/audit/changes/"

# Noise paths (excluded from compliance checks)
NOISE = [
    re.compile(r"^docs/audit/"),
    re.compile(r"^\.kiro/"),
    re.compile(r"^\.hypothesis/"),
    re.compile(r"^tmp/"),
    re.compile(r"\.png$"),
    re.compile(r"\.jpg$"),
]


def safe_read_json(path):
    if not os.path.isfile(path):
        return {}
    try:
        with open(path, "r", encoding="utf-8") as f:
            return json.load(f)
    except Exception:
        return {}


def get_changed_files():
    """Get changed files from audit_state, falling back to git status."""
    state = safe_read_json(STATE_PATH)
    files = state.get("changed_files", [])
    if files:
        return files
    # Fall back to git status
    try:
        r = subprocess.run(
            ["git", "status", "--porcelain"],
            capture_output=True, text=True, timeout=10
        )
        if r.returncode != 0:
            return []
        result = []
        for line in r.stdout.splitlines():
            if len(line) < 4:
                continue
            path = line[3:].strip().strip('"').replace("\\", "/")
            if " -> " in path:
                path = path.split(" -> ")[-1]
            if path:
                result.append(path)
        return sorted(set(result))
    except Exception:
        return []


def is_noise(f):
    return any(p.search(f) for p in NOISE)


def classify_files(files):
    """Classify the changed files into a review checklist."""
    result = {
        "new_migration_sql": [],    # newly added migration SQL
        "new_or_modified_sql": [],  # all SQL changes
        "code_without_docs": [],    # code changed without a matching doc change
        "new_files": [],            # new files (check directory conventions)
        "has_bd_manual": False,     # any BD_Manual doc change?
        "has_audit_record": False,  # any audit record change?
        "has_ddl_baseline": False,  # any DDL baseline change?
    }

    code_files = []
    doc_files = set()

    for f in files:
        if is_noise(f):
            continue

        # Migration SQL
        for mp in MIGRATION_PATTERNS:
            if mp.search(f):
                result["new_migration_sql"].append(f)
                break

        # SQL files
        if f.endswith(".sql"):
            result["new_or_modified_sql"].append(f)

        # BD_Manual
        if BD_MANUAL_PATTERN.search(f):
            result["has_bd_manual"] = True

        # Audit records
        if f.startswith(AUDIT_CHANGES_DIR):
            result["has_audit_record"] = True

        # DDL baseline
        if f.startswith(DDL_BASELINE_DIR):
            result["has_ddl_baseline"] = True

        # Documentation files
        if f.endswith(".md") or "/docs/" in f:
            doc_files.add(f)

        # Code files (not docs/config)
        if f.endswith((".py", ".ts", ".tsx", ".js", ".jsx")):
            code_files.append(f)

    # Check whether each code file has a matching doc change
    for cf in code_files:
        expected_docs = []
        for prefix, docs in DOC_MAP.items():
            if cf.startswith(prefix):
                expected_docs.extend(docs)
        if expected_docs:
            # Any one of the mapped docs appearing in the change list counts
            has_doc = False
            for ed in expected_docs:
                if ed in doc_files:
                    has_doc = True
                    break
                # Directory-level match
                if ed.endswith("/"):
                    if any(d.startswith(ed) for d in doc_files):
                        has_doc = True
                        break
            if not has_doc:
                result["code_without_docs"].append({
                    "file": cf,
                    "expected_docs": expected_docs,
                })

    return result


COMPLIANCE_STATE_PATH = os.path.join(".kiro", "state", ".compliance_state.json")


def save_compliance_state(result, needs_check):
    """Persist the compliance result for the audit-writer subagent to read."""
    os.makedirs(os.path.join(".kiro", "state"), exist_ok=True)
    now = datetime.now(TZ_TAIPEI)
    state = {
        "needs_check": needs_check,
        "scanned_at": now.isoformat(),
        **result,
    }
    with open(COMPLIANCE_STATE_PATH, "w", encoding="utf-8") as f:
        json.dump(state, f, indent=2, ensure_ascii=False)


def main():
    files = get_changed_files()
    if not files:
        save_compliance_state({"new_migration_sql": [], "new_or_modified_sql": [],
                               "code_without_docs": [], "new_files": [],
                               "has_bd_manual": False, "has_audit_record": False,
                               "has_ddl_baseline": False}, False)
        print("NO_CHECK_NEEDED")
        return

    # Filter out noise
    real_files = [f for f in files if not is_noise(f)]
    if not real_files:
        save_compliance_state({"new_migration_sql": [], "new_or_modified_sql": [],
                               "code_without_docs": [], "new_files": [],
                               "has_bd_manual": False, "has_audit_record": False,
                               "has_ddl_baseline": False}, False)
        print("NO_CHECK_NEEDED")
        return

    result = classify_files(files)

    # Decide whether review is needed
    needs_check = (
        result["new_migration_sql"]
        or result["code_without_docs"]
        or (result["new_migration_sql"] and not result["has_ddl_baseline"])
    )

    # Always persist the result
    save_compliance_state(result, needs_check)

    if not needs_check:
        print("NO_CHECK_NEEDED")
        return

    # Print the condensed JSON for LLM review
    print(json.dumps(result, indent=2, ensure_ascii=False))


if __name__ == "__main__":
    try:
        main()
    except Exception:
        # On error, do not block; report nothing to check
        print("NO_CHECK_NEEDED")
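The DOC_MAP check above treats a code file as documented only when one of its mapped doc paths also changed, matching either an exact file or a directory prefix (entries ending in "/"). A minimal standalone sketch of that rule, using two mappings taken from the DOC_MAP in this script and a hypothetical changed-file set:

```python
# Sketch of the code→doc coverage check; not the full prescan logic.
DOC_MAP = {
    "apps/backend/app/routers/": ["apps/backend/docs/API-REFERENCE.md"],
    "apps/etl/connectors/feiqiu/tasks/": ["apps/etl/connectors/feiqiu/docs/etl_tasks/"],
}

def missing_docs(code_file, changed_docs):
    """Return the mapped docs that should have changed alongside code_file but did not."""
    expected = []
    for prefix, docs in DOC_MAP.items():
        if code_file.startswith(prefix):
            expected.extend(docs)
    if not expected:
        return []  # file is not covered by the map
    for ed in expected:
        # exact-file match, or directory-prefix match for entries ending in "/"
        if ed in changed_docs or (ed.endswith("/") and any(d.startswith(ed) for d in changed_docs)):
            return []
    return expected

print(missing_docs("apps/backend/app/routers/users.py", set()))
# → ['apps/backend/docs/API-REFERENCE.md']
```

Any one matching doc clears the whole file, which is why `code_without_docs` lists the full `expected_docs` set when nothing matched.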
@@ -1,170 +0,0 @@
#!/usr/bin/env python3
"""file_baseline — standalone baseline snapshot system keyed on file mtime+size.

Independent of git commit history: by fingerprinting workspace files as
(mtime, size), it detects exactly which files changed "during this
conversation", i.e. between promptSubmit and agentStop.

Usage:
    from file_baseline import scan_workspace, diff_baselines, save_baseline, load_baseline
"""

import json
import os
from typing import TypedDict

BASELINE_PATH = os.path.join(".kiro", "state", ".file_baseline.json")

# Directories excluded from the scan (aligned with .gitignore, plus extras)
EXCLUDE_DIRS = {
    ".git", ".venv", "venv", "ENV", "env",
    "node_modules", "__pycache__", ".hypothesis", ".pytest_cache",
    ".idea", ".vscode", ".specstory",
    "build", "dist", "eggs", ".eggs",
    "export", "reports", "tmp",
    "htmlcov", ".coverage",
    # Kiro runtime state does not count as a business change
    ".kiro",
}

# File suffixes excluded from the scan
EXCLUDE_SUFFIXES = {
    ".pyc", ".pyo", ".pyd", ".so", ".egg", ".whl",
    ".log", ".jsonl", ".lnk",
    ".swp", ".swo",
}

# File names excluded from the scan
EXCLUDE_NAMES = {
    ".DS_Store", "Thumbs.db", "desktop.ini",
}

# Whitelist of business directories (only these top-level dirs plus loose root files are scanned),
# which avoids walking deep cache directories such as .vite/deps
SCAN_ROOTS = [
    "apps",
    "packages",
    "db",
    "docs",
    "scripts",
    "tests",
]


class FileEntry(TypedDict):
    mtime: float
    size: int


class DiffResult(TypedDict):
    added: list[str]
    modified: list[str]
    deleted: list[str]


def _should_exclude_dir(dirname: str) -> bool:
    """Should this directory be excluded?"""
    return dirname in EXCLUDE_DIRS or dirname.startswith(".")


def _should_exclude_file(filename: str) -> bool:
    """Should this file be excluded?"""
    if filename in EXCLUDE_NAMES:
        return True
    _, ext = os.path.splitext(filename)
    if ext.lower() in EXCLUDE_SUFFIXES:
        return True
    return False


def scan_workspace(root: str = ".") -> dict[str, FileEntry]:
    """Scan the workspace and return a {relative path: {mtime, size}} dict.

    Only SCAN_ROOTS directories plus loose files in the repo root are scanned;
    EXCLUDE_DIRS / EXCLUDE_SUFFIXES / EXCLUDE_NAMES are skipped.
    """
    result: dict[str, FileEntry] = {}

    # 1. Loose files in the repo root (pyproject.toml, .env, etc.)
    try:
        for entry in os.scandir(root):
            if entry.is_file(follow_symlinks=False):
                if _should_exclude_file(entry.name):
                    continue
                try:
                    st = entry.stat(follow_symlinks=False)
                    rel = entry.name.replace("\\", "/")
                    result[rel] = {"mtime": st.st_mtime, "size": st.st_size}
                except OSError:
                    continue
    except OSError:
        pass

    # 2. Recursive scan of the business directories
    for scan_root in SCAN_ROOTS:
        top = os.path.join(root, scan_root)
        if not os.path.isdir(top):
            continue
        for dirpath, dirnames, filenames in os.walk(top):
            # Modify dirnames in place so os.walk prunes excluded directories
            dirnames[:] = [
                d for d in dirnames
                if not _should_exclude_dir(d)
            ]
            for fname in filenames:
                if _should_exclude_file(fname):
                    continue
                full = os.path.join(dirpath, fname)
                try:
                    st = os.stat(full)
                    rel = os.path.relpath(full, root).replace("\\", "/")
                    result[rel] = {"mtime": st.st_mtime, "size": st.st_size}
                except OSError:
                    continue

    return result


def diff_baselines(
    before: dict[str, FileEntry],
    after: dict[str, FileEntry],
) -> DiffResult:
    """Compare two snapshots; return added/modified/deleted lists."""
    before_keys = set(before.keys())
    after_keys = set(after.keys())

    added = sorted(after_keys - before_keys)
    deleted = sorted(before_keys - after_keys)

    modified = []
    for path in sorted(before_keys & after_keys):
        b = before[path]
        a = after[path]
        # Any change in mtime or size counts as a modification
        if b["mtime"] != a["mtime"] or b["size"] != a["size"]:
            modified.append(path)

    return {"added": added, "modified": modified, "deleted": deleted}


def save_baseline(data: dict[str, FileEntry], path: str = BASELINE_PATH):
    """Save a baseline snapshot to a JSON file."""
    os.makedirs(os.path.dirname(path) or ".kiro", exist_ok=True)
    with open(path, "w", encoding="utf-8") as f:
        json.dump(data, f, ensure_ascii=False)


def load_baseline(path: str = BASELINE_PATH) -> dict[str, FileEntry]:
    """Load a baseline snapshot; returns an empty dict if the file does not exist."""
    if not os.path.isfile(path):
        return {}
    try:
        with open(path, "r", encoding="utf-8") as f:
            return json.load(f)
    except Exception:
        return {}


def total_changes(diff: DiffResult) -> int:
    """Total number of changed files"""
    return len(diff["added"]) + len(diff["modified"]) + len(diff["deleted"])
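The (mtime, size) comparison at the heart of diff_baselines can be exercised in isolation; the two snapshots below are hand-made stand-ins for real os.stat results, not actual scans:

```python
# Minimal sketch of the fingerprint diff: a path is "modified" when its
# (mtime, size) entry differs between the before and after snapshots.
def diff_baselines(before, after):
    added = sorted(set(after) - set(before))
    deleted = sorted(set(before) - set(after))
    modified = [
        p for p in sorted(set(before) & set(after))
        if before[p] != after[p]
    ]
    return {"added": added, "modified": modified, "deleted": deleted}

before = {"a.py": {"mtime": 1.0, "size": 10}, "b.py": {"mtime": 2.0, "size": 20}}
after = {"b.py": {"mtime": 2.5, "size": 20}, "c.py": {"mtime": 3.0, "size": 5}}
print(diff_baselines(before, after))
# → {'added': ['c.py'], 'modified': ['b.py'], 'deleted': ['a.py']}
```

Note the trade-off this scheme accepts: a file rewritten with identical size and a restored mtime would go undetected, which is cheap enough for a reminder hook but weaker than content hashing.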
@@ -1,60 +0,0 @@
#!/usr/bin/env python3
"""prompt_audit_log — write a standalone log file every time a prompt is submitted.

Replaces the original PowerShell version, avoiding a Windows PowerShell 5.1 parser bug.
"""

import json
import os
from datetime import datetime, timezone, timedelta

TZ_TAIPEI = timezone(timedelta(hours=8))


def main():
    now = datetime.now(TZ_TAIPEI)
    prompt_id = f"P{now.strftime('%Y%m%d-%H%M%S')}"
    prompt_raw = os.environ.get("USER_PROMPT", "")

    # Truncate overly long prompts (an expanded #context can take excessive space)
    if len(prompt_raw) > 20000:
        prompt_raw = prompt_raw[:5000] + "\n[TRUNCATED: prompt too long; possible expanded #context]"

    summary = " ".join(prompt_raw.split()).strip()
    if len(summary) > 120:
        summary = summary[:120] + "…"
    if not summary:
        summary = "(empty prompt)"

    # Write the standalone log file
    log_dir = os.path.join("docs", "audit", "prompt_logs")
    os.makedirs(log_dir, exist_ok=True)

    filename = f"prompt_log_{now.strftime('%Y%m%d_%H%M%S')}.md"
    target = os.path.join(log_dir, filename)

    timestamp = now.strftime("%Y-%m-%d %H:%M:%S %z")
    entry = f"""- [{prompt_id}] {timestamp}
  - summary: {summary}
  - prompt:

```text
{prompt_raw}
```
"""
    with open(target, "w", encoding="utf-8") as f:
        f.write(entry)

    # Save the last prompt id so /audit can trace provenance
    os.makedirs(os.path.join(".kiro", "state"), exist_ok=True)
    last_prompt = {"prompt_id": prompt_id, "at": now.isoformat()}
    with open(os.path.join(".kiro", "state", ".last_prompt_id.json"), "w", encoding="utf-8") as f:
        json.dump(last_prompt, f, indent=2, ensure_ascii=False)


if __name__ == "__main__":
    try:
        main()
    except Exception:
        # Never block prompt submission
        pass
@@ -1,231 +0,0 @@
#!/usr/bin/env python3
"""prompt_on_submit — combined promptSubmit hook script (v2: file-baseline mode).

Merges the original audit_flagger + prompt_audit_log:
1. Scan workspace files → save a baseline snapshot → .kiro/state/.file_baseline.json
2. Risk-flag the changed-file list → .kiro/state/.audit_state.json
3. Record the prompt log → docs/audit/prompt_logs/

Change detection no longer relies on git status (which caused false positives
when commits were infrequent). Risk flagging still uses git status, because it
needs to know which files changed relative to the last commit. Every
functional block is isolated with try/except, so one failure does not affect
the others.
"""

import hashlib
import json
import os
import re
import subprocess
import sys
from datetime import datetime, timezone, timedelta

# Import the file-baseline module from this directory, plus the cwd guard
sys.path.insert(0, os.path.dirname(os.path.abspath(__file__)))
from file_baseline import scan_workspace, save_baseline
from _ensure_root import ensure_repo_root

TZ_TAIPEI = timezone(timedelta(hours=8))

# ── Risk rules (from audit_flagger) ──
RISK_RULES = [
    (re.compile(r"^apps/etl/connectors/feiqiu/(api|cli|config|database|loaders|models|orchestration|scd|tasks|utils|quality)/"), "etl"),
    (re.compile(r"^apps/backend/app/"), "backend"),
    (re.compile(r"^apps/admin-web/src/"), "admin-web"),
    (re.compile(r"^apps/miniprogram/(miniapp|miniprogram)/"), "miniprogram"),
    (re.compile(r"^packages/shared/"), "shared"),
    (re.compile(r"^db/"), "db"),
]

NOISE_PATTERNS = [
    re.compile(r"^docs/audit/"),
    re.compile(r"^\.kiro/"),
    re.compile(r"^tmp/"),
    re.compile(r"^\.hypothesis/"),
]

DB_PATTERNS = [
    re.compile(r"^db/"),
    re.compile(r"/migrations/"),
    re.compile(r"\.sql$"),
    re.compile(r"\.prisma$"),
]

STATE_PATH = os.path.join(".kiro", "state", ".audit_state.json")
PROMPT_ID_PATH = os.path.join(".kiro", "state", ".last_prompt_id.json")


def now_taipei():
    return datetime.now(TZ_TAIPEI)


def sha1hex(s: str) -> str:
    return hashlib.sha1(s.encode("utf-8")).hexdigest()


def get_git_changed_files() -> list[str]:
    """Get changed files via git status (used only for risk flagging, not change detection)."""
    try:
        r = subprocess.run(
            ["git", "status", "--porcelain"],
            capture_output=True, text=True, encoding="utf-8", errors="replace", timeout=10
        )
        if r.returncode != 0:
            return []
    except Exception:
        return []
    files = []
    for line in r.stdout.splitlines():
        if len(line) < 4:
            continue
        path = line[3:].strip()
        if " -> " in path:
            path = path.split(" -> ")[-1]
        path = path.strip().strip('"').replace("\\", "/")
        if path:
            files.append(path)
    return files


def is_noise(f: str) -> bool:
    return any(p.search(f) for p in NOISE_PATTERNS)


def safe_read_json(path):
    if not os.path.isfile(path):
        return {}
    try:
        with open(path, "r", encoding="utf-8") as f:
            return json.load(f)
    except Exception:
        return {}


def write_json(path, data):
    os.makedirs(os.path.dirname(path) or os.path.join(".kiro", "state"), exist_ok=True)
    with open(path, "w", encoding="utf-8") as f:
        json.dump(data, f, indent=2, ensure_ascii=False)


# ── Block 1: risk flagging (git status based; decides which files need auditing) ──
def do_audit_flag(git_files, now):
    files = sorted(set(f for f in git_files if not is_noise(f)))

    if not files:
        write_json(STATE_PATH, {
            "audit_required": False,
            "db_docs_required": False,
            "reasons": [],
            "changed_files": [],
            "change_fingerprint": "",
            "marked_at": now.isoformat(),
            "last_reminded_at": None,
        })
        return

    reasons = []
    audit_required = False
    db_docs_required = False

    for f in files:
        for pattern, label in RISK_RULES:
            if pattern.search(f):
                audit_required = True
                tag = f"dir:{label}"
                if tag not in reasons:
                    reasons.append(tag)
        if "/" not in f:
            audit_required = True
            if "root-file" not in reasons:
                reasons.append("root-file")
        if any(p.search(f) for p in DB_PATTERNS):
            db_docs_required = True
            if "db-schema-change" not in reasons:
                reasons.append("db-schema-change")

    fp = sha1hex("\n".join(files))

    # Preserve any existing last_reminded_at
    last_reminded = None
    existing = safe_read_json(STATE_PATH)
    if existing.get("change_fingerprint") == fp:
        last_reminded = existing.get("last_reminded_at")

    write_json(STATE_PATH, {
        "audit_required": audit_required,
        "db_docs_required": db_docs_required,
        "reasons": reasons,
        "changed_files": files[:50],
        "change_fingerprint": fp,
        "marked_at": now.isoformat(),
        "last_reminded_at": last_reminded,
    })


# ── Block 2: prompt log ──
def do_prompt_log(now):
    prompt_id = f"P{now.strftime('%Y%m%d-%H%M%S')}"
    prompt_raw = os.environ.get("USER_PROMPT", "")

    if len(prompt_raw) > 20000:
        prompt_raw = prompt_raw[:5000] + "\n[TRUNCATED: prompt too long]"

    summary = " ".join(prompt_raw.split()).strip()
    if len(summary) > 120:
        summary = summary[:120] + "…"
    if not summary:
        summary = "(empty prompt)"

    log_dir = os.path.join("docs", "audit", "prompt_logs")
    os.makedirs(log_dir, exist_ok=True)
    filename = f"prompt_log_{now.strftime('%Y%m%d_%H%M%S')}.md"
    entry = f"""- [{prompt_id}] {now.strftime('%Y-%m-%d %H:%M:%S %z')}
  - summary: {summary}
  - prompt:

```text
{prompt_raw}
```
"""
    with open(os.path.join(log_dir, filename), "w", encoding="utf-8") as f:
        f.write(entry)

    write_json(PROMPT_ID_PATH, {"prompt_id": prompt_id, "at": now.isoformat()})


# ── Block 3: file baseline snapshot (replaces the git snapshot) ──
def do_file_baseline():
    """Scan workspace file mtime+size and save it as the baseline snapshot.
    Scanning again at agentStop and comparing pinpoints exactly what changed during this conversation.
    """
    baseline = scan_workspace(".")
    save_baseline(baseline)


def main():
    ensure_repo_root()
    now = now_taipei()

    # Block 3: file baseline snapshot (runs first, recording file state at conversation start)
    try:
        do_file_baseline()
    except Exception:
        pass

    # Block 1: risk flagging (still git status, since it needs the uncommitted changes)
    try:
        git_files = get_git_changed_files()
        do_audit_flag(git_files, now)
    except Exception:
        pass

    # Block 2: prompt log
    try:
        do_prompt_log(now)
    except Exception:
        pass


if __name__ == "__main__":
    try:
        main()
    except Exception:
        pass
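The fingerprint step in do_audit_flag above exists so the stop hook can rate-limit reminders: hashing the sorted, noise-filtered file list yields a stable ID for the pending change set, and last_reminded_at is carried over only while that ID stays the same. The core reduces to a few lines (sketch only; the real hook also filters noise and caps the stored list at 50 files):

```python
import hashlib

def change_fingerprint(files):
    # De-duplicate and sort so the fingerprint is order-independent,
    # then SHA-1 the newline-joined list.
    normalized = sorted(set(files))
    return hashlib.sha1("\n".join(normalized).encode("utf-8")).hexdigest()

# The same file set always yields the same fingerprint, regardless of order
# or duplicates, so a repeated prompt with no new changes is recognized.
assert change_fingerprint(["b.py", "a.py"]) == change_fingerprint(["a.py", "b.py", "a.py"])
```

SHA-1 is fine here because the hash only deduplicates reminder state; nothing security-sensitive depends on it.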
@@ -1,139 +0,0 @@
#!/usr/bin/env python3
"""session_log — record a full log of the conversation at agentStop.

Sources:
- env var AGENT_OUTPUT (agent output injected by Kiro)
- env var USER_PROMPT (the most recent user input)
- .kiro/state/.last_prompt_id.json (Prompt ID provenance)
- .kiro/state/.audit_state.json (changed-file list)
- git diff --stat (change summary)

Output: docs/audit/session_logs/session_<timestamp>.md
"""

import json
import os
import subprocess
import sys
from datetime import datetime, timezone, timedelta

# cwd guard
sys.path.insert(0, os.path.dirname(os.path.abspath(__file__)))
from _ensure_root import ensure_repo_root

TZ_TAIPEI = timezone(timedelta(hours=8))
LOG_DIR = os.path.join("docs", "audit", "session_logs")
STATE_PATH = os.path.join(".kiro", "state", ".audit_state.json")
PROMPT_ID_PATH = os.path.join(".kiro", "state", ".last_prompt_id.json")


def now_taipei():
    return datetime.now(TZ_TAIPEI)


def safe_read_json(path):
    if not os.path.isfile(path):
        return {}
    try:
        with open(path, "r", encoding="utf-8") as f:
            return json.load(f)
    except Exception:
        return {}


def git_diff_stat():
    try:
        r = subprocess.run(
            ["git", "diff", "--stat", "HEAD"],
            capture_output=True, text=True, timeout=10
        )
        return r.stdout.strip() if r.returncode == 0 else "(git diff failed)"
    except Exception:
        return "(git not available)"


def git_status_short():
    try:
        r = subprocess.run(
            ["git", "status", "--short"],
            capture_output=True, text=True, timeout=10
        )
        return r.stdout.strip() if r.returncode == 0 else ""
    except Exception:
        return ""


def main():
    ensure_repo_root()
    now = now_taipei()
    ts = now.strftime("%Y%m%d_%H%M%S")
    timestamp_display = now.strftime("%Y-%m-%d %H:%M:%S %z")

    # Collect the data
    agent_output = os.environ.get("AGENT_OUTPUT", "")
    user_prompt = os.environ.get("USER_PROMPT", "")
    prompt_info = safe_read_json(PROMPT_ID_PATH)
    audit_state = safe_read_json(STATE_PATH)
    prompt_id = prompt_info.get("prompt_id", "unknown")

    # Truncate overly long content to keep the log file small
    max_len = 50000
    if len(agent_output) > max_len:
        agent_output = agent_output[:max_len] + "\n\n[TRUNCATED: output exceeds 50KB]"
    if len(user_prompt) > 10000:
        user_prompt = user_prompt[:10000] + "\n\n[TRUNCATED: prompt exceeds 10KB]"

    diff_stat = git_diff_stat()
    status_short = git_status_short()
    changed_files = audit_state.get("changed_files", [])

    os.makedirs(LOG_DIR, exist_ok=True)
    filename = f"session_{ts}.md"
    filepath = os.path.join(LOG_DIR, filename)

    content = f"""# Session Log — {timestamp_display}

- Prompt-ID: `{prompt_id}`
- Audit Required: `{audit_state.get('audit_required', 'N/A')}`
- Reasons: {', '.join(audit_state.get('reasons', [])) or 'none'}

## User Input

```text
{user_prompt or '(not captured)'}
```

## Agent Output

```text
{agent_output or '(not captured)'}
```

## Changed Files ({len(changed_files)})

```
{chr(10).join(changed_files[:80]) if changed_files else '(none)'}
```

## Git Diff Stat

```
{diff_stat}
```

## Git Status

```
{status_short or '(clean)'}
```
"""

    with open(filepath, "w", encoding="utf-8") as f:
        f.write(content)


if __name__ == "__main__":
    try:
        main()
    except Exception:
        pass
@@ -1,41 +0,0 @@
---
name: bd-manual-db-docs
description: When the PostgreSQL schema/table structure changes, persist the change to docs/database/ in an audit-friendly form (including the reason, impact, rollback, and verification SQL).
---

# Purpose
Keep database structure changes traceable, auditable, and reversible, and keep field mappings consistent across ETL / backend / miniprogram.

# Triggers
- Migration script / DDL edits (add/drop/alter tables, columns, types, defaults, NOT NULL, constraints, indexes, foreign keys)
- ORM/schema definition changes that alter the actual DB structure
- Manually executed DDL (backfill the docs via the manualTrigger hook or this skill)

# Required output (all items mandatory)
All output must be persisted under: `docs/database/`

At minimum:
1) Schema Change Log (a changelog entry)
2) Table Structure Doc (updated structure docs for the affected tables)
3) Rollback & Verification (rollback notes + at least 3 verification SQL statements)
4) Provenance: date + Prompt-ID/prompt excerpt + direct cause (necessity + brief solution)

# Workflow
## 1) Identify structural changes
- List the added/modified/dropped objects: schema/table/column/index/constraint/fk
- Spell out the before/after difference

## 2) Update the Schema Change Log
- Append a change record under the corresponding schema directory (template: assets/schema-changelog-template.md)

## 3) Update the Table Structure Doc
- Update every affected table (template: assets/table-structure-template.md)
- Sync the field semantics, especially for monetary fields: precision, currency, rounding

## 4) Rollback & verification
- Document the DDL rollback path (provide a reverse migration when necessary)
- Write at least 3 verification SQL statements (covering constraints/indexes/key fields)

# Templates
- `assets/schema-changelog-template.md`
- `assets/table-structure-template.md`
@@ -1,27 +0,0 @@
# Schema Change Log

- Date (Asia/Shanghai, YYYY-MM-DD HH:MM:SS, to the second):
- Prompt-ID:
- Original reason (prompt excerpt/verbatim):
- Direct cause (necessity + brief solution):
- Affected schema:
- Change summary (one sentence):

## Change details
- Added:
- Modified:
- Dropped:

## Impact
- ETL:
- Backend API:
- Miniprogram:

## Rollback notes
- DDL rollback:
- Data backfill/migration caveats:

## Verification SQL (at least 3)
1)
2)
3)
@@ -1,22 +0,0 @@
# <schema>.<table>

## Purpose
- What business object/process this table represents

## Columns
| Column | Type | Nullable | Default | Constraints/Keys | Description (incl. semantics) |
|---|---|---:|---|---|---|

> Monetary columns must state: currency, precision, rounding/truncation rules, and whether negatives are allowed.

## Indexes
- index name / columns / unique? / notes

## Constraints & FKs
- constraint name / definition / notes

## Invariants
- e.g. state-machine enum ranges, uniqueness, cross-column consistency constraints (if any)

## Change History
- YYYY-MM-DD HH:MM:SS | Prompt-ID | direct cause | change summary
@@ -1,37 +0,0 @@
---
name: change-annotation-audit
description: Force an audit record (docs/audit/changes/...) for every modification, plus an AI_CHANGELOG entry in each edited file and a CHANGE marker comment at each logic change (including date, prompt, and direct cause).
---

# Purpose
Bake "why it changed, how it changed, how to verify it" into auditable artifacts, meeting the rigor a funds-related project requires.

# Triggers
- Any substantive change to code or docs (not pure formatting)
- Especially: logic changes, monetary-semantics changes, API contract changes, DB structure changes

# Required artifacts (all mandatory)
1) `docs/audit/changes/<YYYY-MM-DD>__<slug>.md`
2) An `AI_CHANGELOG` entry inside every modified file
3) A `CHANGE` marker comment near every logic change

# Workflow
## 1) Prompt provenance
- Confirm this change has a Prompt-ID (from prompt_log.md)
- If not, write the Prompt-ID first, then continue

## 2) Write the audit record (per change)
Use the template: `assets/audit-record-template.md`
- Must include: the original reason (prompt), direct cause, brief solution, file list, risk/rollback/verification

## 3) Write the in-file AI_CHANGELOG (per file)
- Append one AI_CHANGELOG entry to every modified file
- Pick a comment style that fits the language/file type (templates: assets/file-changelog-templates.md)

## 4) Write CHANGE markers (block level)
- Every logic change must carry a nearby CHANGE marker
- Must include: intent, assumptions, boundary conditions (amounts/rounding/precision), verification hints

# Templates
- `assets/audit-record-template.md`
- `assets/file-changelog-templates.md`
@@ -1,19 +0,0 @@
# Change Audit Record

- Date/time (Asia/Shanghai, to the second, format YYYY-MM-DD HH:MM:SS):
- Prompt-ID:
- Original reason (prompt verbatim or ≤5-line excerpt):
- Direct cause (necessity + solution summary):

## Changed

- Modules/APIs/tables/key files:

## Risk & Rollback

- Risks:
- Rollback notes:

## Verification

- At least 1 executable verification (test/SQL/integration):

## Files changed

- ...
@@ -1,50 +0,0 @@
# In-file AI_CHANGELOG and CHANGE Marker Templates

> All timestamps to the second, format `YYYY-MM-DD HH:MM:SS`, timezone Asia/Shanghai.

## Generic AI_CHANGELOG (place at the top of the file or in a "Change History" section)
- 2026-02-13 10:15:30 | Prompt: P20260213-101530 (excerpt: ...) | Direct cause: ... | Summary: ... | Verify: ...

---

## Markdown / docs (place at the end of the document or in a "Change History" section)
### AI_CHANGELOG
- YYYY-MM-DD HH:MM:SS | Prompt: P... (excerpt: ...) | Direct cause: ... | Summary: ... | Verify: ...

---

## JS/TS (block comment)
/*
AI_CHANGELOG
- YYYY-MM-DD HH:MM:SS | Prompt: P... (excerpt: ...) | Direct cause: ... | Summary: ... | Verify: ...
*/

// [CHANGE P...] intent: ...
// assumptions: ...
// edge cases / money semantics: ...
// verify: ...

---

## Python (docstring/block comment)
"""
AI_CHANGELOG
- YYYY-MM-DD HH:MM:SS | Prompt: P... (excerpt: ...) | Direct cause: ... | Summary: ... | Verify: ...
"""

# [CHANGE P...] intent: ...
# assumptions: ...
# edge cases / money semantics: ...
# verify: ...

---

## SQL (block + line comments)
/*
AI_CHANGELOG
- YYYY-MM-DD HH:MM:SS | Prompt: P... (excerpt: ...) | Direct cause: ... | Summary: ... | Verify: ...
*/
-- [CHANGE P...] intent: ...
-- assumptions: ...
-- money semantics: precision/rounding/currency ...
-- verify: ...
@@ -1,51 +0,0 @@
---
name: steering-readme-maintainer
description: When business/ETL/API/auth/miniprogram interaction logic changes, run a change-impact review and sync product/tech/structure docs, per-level READMEs, and the audit record.
---

# Purpose

Standardize the "logic change → doc sync → audit trail" flow, reducing missed updates and semantic drift (money-related scenarios prioritize traceability and recomputability).

# Triggers (when to invoke this skill)

- Changed business rules / calculation semantics / money handling (precision, rounding, thresholds, etc.)
- Changed ETL/SQL cleaning, aggregation, or mapping logic
- Changed API behavior (response shape, error codes, auth/permissions)
- Changed key miniprogram interaction flows (validation, state machine, key fields)

# Workflow (must run in order)

## 1) Classify: is this a "logic change"?

- If not: state "no logic change" and explain why (e.g. formatting only / typo fix / comment tweak).
- If yes: continue.

## 2) Sync steering files and READMEs (assess each item)

### 2a) Steering files

- `.kiro/steering/product.md`: did business definitions/semantics/money rules change?
- `.kiro/steering/tech.md`: did the stack/runtime/dependencies/deployment assumptions change?
- `.kiro/steering/structure-lite.md` (summary) / `.kiro/steering/structure.md` (only when the tree/boundaries change): did directories/module boundaries/responsibilities change?

### 2b) Per-level README.md (assess each module the change touches)

- `README.md` (root): project overview, quick start, env vars, architecture overview
- `apps/backend/README.md`: backend API routes, config, how to run, API contracts
- `apps/etl/connectors/feiqiu/README.md`: ETL task list, dev conventions, registration flow
- `apps/miniprogram/README.md`: miniprogram page structure, build & deploy
- `apps/admin-web/README.md`: admin console features
- `packages/shared/README.md`: shared-package modules and usage
- `db/README.md`: schema conventions, migration rules, seed data
- `scripts/README.md`: subdirectory purposes, common scripts
- `tests/README.md`: how to run tests, FakeDB/FakeAPI usage
- `docs/README.md`: docs index

> Rule: only update READMEs related to this change; if an update helps a reader understand system behavior, make it — do not refuse to sync just to touch fewer docs. If a README does not yet exist but the change touches that module, create it.

## 3) Produce an audit-friendly summary (needed in both the reply and the audit record)

- Changed: which modules/APIs/tables/key files
- Why: original reason (Prompt-ID + excerpt) and direct cause (necessity + solution summary)
- Risk: risks and regression scope
- Verify: suggested verification steps (tests/SQL/integration)

## 4) Linked hard-rule check

- If the DB schema/table structure changed: `docs/database/` must be updated in sync (see skill `bd-manual-db-docs`).

# Assets (copyable templates/checklists)

See: `assets/steering-update-checklist.md`
@@ -1,23 +0,0 @@
# Steering & README Sync Checklist (mandatory for logic changes)

## product.md (product/semantics)
- Did business definitions / metric semantics / field meanings change?
- Did money precision/rounding/threshold rules change?
- Did the role/permission model change?

## tech.md (tech/runtime)
- Added/changed dependencies (frameworks, libraries, drivers)?
- Did config keys / env vars / ports / service startup change?
- Did the data-access boundary (ETL DB vs business DB) change?
- Did performance/consistency/idempotency/retry policies change?

## structure.md (structure/responsibilities)
- New directories/modules?
- Were module responsibilities or boundaries redrawn?
- New integration points (queues, scheduled tasks, external systems)?

## README.md (usage/integration)
- Did local startup steps change?
- Added/changed config (.env etc.)?
- Did the API contract change (paths, params, responses, error codes)?
- Did miniprogram integration steps change?
File diff suppressed because one or more lines are too long
@@ -1,66 +0,0 @@
{
  "audit_required": true,
  "db_docs_required": true,
  "reasons": [
    "dir:backend",
    "dir:miniprogram",
    "dir:db",
    "db-schema-change",
    "root-file"
  ],
  "changed_files": [
    "apps/DEMO-miniprogram/.gitignore",
    "apps/DEMO-miniprogram/.gitkeep",
    "apps/DEMO-miniprogram/README.md",
    "apps/DEMO-miniprogram/doc/ABANDON_MODAL_COMPONENT.md",
    "apps/DEMO-miniprogram/doc/KEYBOARD_INTERACTION_FIX.md",
    "apps/DEMO-miniprogram/doc/TASK_ABANDON_IMPROVEMENTS.md",
    "apps/DEMO-miniprogram/doc/TASK_ABANDON_QUICK_REFERENCE.md",
    "apps/DEMO-miniprogram/doc/progress-bar-animation.md",
    "apps/DEMO-miniprogram/doc/useless/ABANDON_MODAL_COMPONENT.md",
    "apps/DEMO-miniprogram/doc/useless/KEYBOARD_INTERACTION_FIX.md",
    "apps/DEMO-miniprogram/doc/useless/TASK_ABANDON_IMPROVEMENTS.md",
    "apps/DEMO-miniprogram/doc/useless/TASK_ABANDON_QUICK_REFERENCE.md",
    "apps/DEMO-miniprogram/doc/useless/progress-bar-animation.md",
    "apps/DEMO-miniprogram/i18n/base.json",
    "apps/DEMO-miniprogram/jest.config.js",
    "apps/DEMO-miniprogram/miniprogram/app.json",
    "apps/DEMO-miniprogram/miniprogram/app.miniapp.json",
    "apps/DEMO-miniprogram/miniprogram/app.ts",
    "apps/DEMO-miniprogram/miniprogram/app.wxss",
    "apps/DEMO-miniprogram/miniprogram/assets/icons/_archived/ai-robot-sm.svg",
    "apps/DEMO-miniprogram/miniprogram/assets/icons/_archived/ai-robot-title.svg",
    "apps/DEMO-miniprogram/miniprogram/assets/icons/_archived/arrow-left.svg",
    "apps/DEMO-miniprogram/miniprogram/assets/icons/_archived/chart.svg",
    "apps/DEMO-miniprogram/miniprogram/assets/icons/_archived/chat-gray.svg",
    "apps/DEMO-miniprogram/miniprogram/assets/icons/_archived/chat.svg",
    "apps/DEMO-miniprogram/miniprogram/assets/icons/_archived/check-bold.svg",
    "apps/DEMO-miniprogram/miniprogram/assets/icons/_archived/check-circle.svg",
    "apps/DEMO-miniprogram/miniprogram/assets/icons/_archived/clock.svg",
    "apps/DEMO-miniprogram/miniprogram/assets/icons/_archived/forbidden.svg",
    "apps/DEMO-miniprogram/miniprogram/assets/icons/_archived/help-circle.svg",
    "apps/DEMO-miniprogram/miniprogram/assets/icons/_archived/icon-ai-float.png",
    "apps/DEMO-miniprogram/miniprogram/assets/icons/_archived/icon-ai-inline.png",
    "apps/DEMO-miniprogram/miniprogram/assets/icons/_archived/info-circle.svg",
    "apps/DEMO-miniprogram/miniprogram/assets/icons/_archived/info-error.svg",
    "apps/DEMO-miniprogram/miniprogram/assets/icons/_archived/info-warning.svg",
    "apps/DEMO-miniprogram/miniprogram/assets/icons/_archived/logout.svg",
    "apps/DEMO-miniprogram/miniprogram/assets/icons/_archived/tab-board-active.png",
    "apps/DEMO-miniprogram/miniprogram/assets/icons/_archived/tab-board.png",
    "apps/DEMO-miniprogram/miniprogram/assets/icons/_archived/tab-my-active.png",
    "apps/DEMO-miniprogram/miniprogram/assets/icons/_archived/tab-my.png",
    "apps/DEMO-miniprogram/miniprogram/assets/icons/_archived/tab-task-active.png",
    "apps/DEMO-miniprogram/miniprogram/assets/icons/_archived/tab-task.png",
    "apps/DEMO-miniprogram/miniprogram/assets/icons/_archived/task.svg",
    "apps/DEMO-miniprogram/miniprogram/assets/icons/_archived/wechat.svg",
    "apps/DEMO-miniprogram/miniprogram/assets/icons/ai-robot-badge.svg",
    "apps/DEMO-miniprogram/miniprogram/assets/icons/ai-robot-inline.svg",
    "apps/DEMO-miniprogram/miniprogram/assets/icons/ai-robot.svg",
    "apps/DEMO-miniprogram/miniprogram/assets/icons/ball-black.svg",
    "apps/DEMO-miniprogram/miniprogram/assets/icons/ball-gray.svg",
    "apps/DEMO-miniprogram/miniprogram/assets/icons/feature-ai.svg"
  ],
  "change_fingerprint": "c347e0fb24548a8427f63a65a48dbf7df0b4a734",
  "marked_at": "2026-03-20T09:01:30.178895+08:00",
  "last_reminded_at": null
}
@@ -1,13 +0,0 @@
{
  "needs_check": false,
  "scanned_at": "2026-03-20T08:32:06.937993+08:00",
  "new_migration_sql": [],
  "new_or_modified_sql": [],
  "code_without_docs": [],
  "new_files": [],
  "has_bd_manual": false,
  "has_audit_record": false,
  "has_ddl_baseline": false,
  "api_changed": false,
  "openapi_spec_stale": false
}
File diff suppressed because one or more lines are too long
@@ -1,7 +0,0 @@
{
  "files": [
    ".kiro/.audit_context.json"
  ],
  "fingerprint": "4b767a035cfcbdd76756bbc0488e28e10b0f2fa1",
  "taken_at": "2026-02-26T08:04:31.572231+08:00"
}
@@ -1,4 +0,0 @@
{
  "prompt_id": "P20260320-090130",
  "at": "2026-03-20T09:01:30.178895+08:00"
}
@@ -1,5 +1,37 @@
{
  "mcpServers": {
    "pg-etl": {
      "command": "uvx",
      "args": ["postgres-mcp", "--access-mode=unrestricted"],
      "env": {
        "DATABASE_URI": "postgresql://local-Python:Neo-local-1991125@100.64.0.4:5432/etl_feiqiu"
      },
      "disabled": true
    },
    "pg-etl-test": {
      "command": "uvx",
      "args": ["postgres-mcp", "--access-mode=unrestricted"],
      "env": {
        "DATABASE_URI": "postgresql://local-Python:Neo-local-1991125@100.64.0.4:5432/test_etl_feiqiu"
      },
      "disabled": false
    },
    "pg-app": {
      "command": "uvx",
      "args": ["postgres-mcp", "--access-mode=unrestricted"],
      "env": {
        "DATABASE_URI": "postgresql://local-Python:Neo-local-1991125@100.64.0.4:5432/zqyy_app"
      },
      "disabled": true
    },
    "pg-app-test": {
      "command": "uvx",
      "args": ["postgres-mcp", "--access-mode=unrestricted"],
      "env": {
        "DATABASE_URI": "postgresql://local-Python:Neo-local-1991125@100.64.0.4:5432/test_zqyy_app"
      },
      "disabled": false
    },
    "weixin-devtools-mcp": {
      "command": "npx",
      "args": ["-y", "weixin-devtools-mcp", "--tools-profile=full", "--ws-endpoint=ws://127.0.0.1:9420"],
@@ -7,84 +39,7 @@
        "WECHAT_DEVTOOLS_CLI": "C:\\dev\\WechatDevtools\\cli.bat",
        "WECHAT_DEVTOOLS_PROJECT": "C:\\NeoZQYY\\apps\\miniprogram"
      },
      "disabled": true,
      "autoApprove": ["*"]
    },
    "git": {
      "command": "uvx",
      "args": [
        "mcp-server-git@2025.12.18",
        "--repository",
        "C:\\NeoZQYY"
      ],
      "disabled": true,
      "autoApprove": [
        "all",
        "*"
      ]
    },
    "postgres": {
      "disabled": true
    },
    "pg-etl": {
      "command": "uvx",
      "args": [
        "postgres-mcp",
        "--access-mode=unrestricted"
      ],
      "env": {
        "DATABASE_URI": "postgresql://local-Python:Neo-local-1991125@100.64.0.4:5432/etl_feiqiu"
      },
      "disabled": true,
      "autoApprove": [
        "all",
        "*"
      ]
    },
    "pg-etl-test": {
      "command": "uvx",
      "args": [
        "postgres-mcp",
        "--access-mode=unrestricted"
      ],
      "env": {
        "DATABASE_URI": "postgresql://local-Python:Neo-local-1991125@100.64.0.4:5432/test_etl_feiqiu"
      },
      "disabled": true,
      "autoApprove": [
        "all",
        "*"
      ]
    },
    "pg-app": {
      "command": "uvx",
      "args": [
        "postgres-mcp",
        "--access-mode=unrestricted"
      ],
      "env": {
        "DATABASE_URI": "postgresql://local-Python:Neo-local-1991125@100.64.0.4:5432/zqyy_app"
      },
      "disabled": true,
      "autoApprove": [
        "all",
        "*"
      ]
    },
    "pg-app-test": {
      "command": "uvx",
      "args": [
        "postgres-mcp",
        "--access-mode=unrestricted"
      ],
      "env": {
        "DATABASE_URI": "postgresql://local-Python:Neo-local-1991125@100.64.0.4:5432/test_zqyy_app"
      },
      "disabled": true,
      "autoApprove": [
        "all",
        "*"
      ]
    }
  }
}
}
}
208 CLAUDE.md Normal file
@@ -0,0 +1,208 @@
# CLAUDE.md

This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.

## Project Overview

NeoZQYY Monorepo — a full-stack data platform for billiards-hall store operations. Multi-site isolation (`site_id` + RLS), Chinese domain language, currency CNY, money columns stored as `numeric` with a 2-decimal scale.

### Subsystems

| Directory | Description |
|------|------|
| `apps/etl/connectors/feiqiu/` | Feiqiu connector: API → ODS → DWD → DWS |
| `apps/backend/` | FastAPI backend (dual JWT auth, WebSocket, AI integration) |
| `apps/miniprogram/` | WeChat miniprogram (consumer side, Donut + TDesign) |
| `apps/admin-web/` | System admin console (React+Vite+AntD) — dev/ops view, operates on the ETL DB |
| `apps/tenant-admin/` | Tenant admin console (React+Vite+AntD) — store-manager view, operates on the business DB |
| `apps/mcp-server/` | MCP server (PostgreSQL read-only, AI tool integration) |
| `packages/shared/` | Cross-project shared package (enums, money, datetime_utils) |
| `db/` | DDL / migrations / seed data |

### The two admin consoles

- `admin-web`: for dev/ops — ETL config, data quality, system monitoring; global view
- `tenant-admin`: for store admins (`site_admin`/`tenant_admin`) — user review/management, Excel upload, customer-retention leads; site-isolated view

## Tech Stack

- Python 3.10+, uv workspace (4 members: etl/connectors/feiqiu, backend, mcp-server, shared)
- Frontend: React + Vite + Ant Design, each app with its own pnpm setup
- Four PostgreSQL databases: `etl_feiqiu` / `test_etl_feiqiu` (ETL, six-layer schema), `zqyy_app` / `test_zqyy_app` (business)
- DSNs: `PG_DSN` (ETL), `APP_DB_DSN` (business), defined in the root `.env`
- Config layering: `.env` < `.env.local` < environment variables < CLI args
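The config-layering order above can be sketched as a precedence merge, later layers winning. This is a minimal illustration, not the repo's actual loader; the function name and the sample keys/values are made up for the example:

```python
def resolve_config(env_file: dict, env_local: dict, environ: dict, cli_args: dict) -> dict:
    """Merge config layers with increasing precedence:
    .env < .env.local < environment variables < CLI arguments."""
    merged: dict = {}
    for layer in (env_file, env_local, environ, cli_args):
        # later layers overwrite earlier ones; None means "not set in this layer"
        merged.update({k: v for k, v in layer.items() if v is not None})
    return merged

cfg = resolve_config(
    {"PG_DSN": "postgresql://localhost/etl_feiqiu", "LOG_LEVEL": "INFO"},  # .env
    {"LOG_LEVEL": "DEBUG"},    # .env.local overrides .env
    {},                        # no env-var override in this example
    {"LOG_LEVEL": "WARNING"},  # CLI args win over everything
)
print(cfg["LOG_LEVEL"])  # WARNING
```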
## Common Commands

```bash
# Install dependencies
uv sync                              # full Python install
cd apps/admin-web && pnpm install    # frontend (admin-web / tenant-admin each standalone)

# Dev servers
cd apps/backend && uvicorn app.main:app --host 127.0.0.1 --port 8000 --reload
cd apps/admin-web && pnpm dev        # port 5173
cd apps/tenant-admin && pnpm dev

# ETL
cd apps/etl/connectors/feiqiu
python -m cli.main --dry-run --tasks DWD_LOAD_FROM_ODS
python -m cli.main --pg-dsn "$PG_DSN" --store-id "$STORE_ID" --api-token "$API_TOKEN"

# Tests
cd apps/etl/connectors/feiqiu && pytest tests/unit                   # ETL unit tests
cd apps/etl/connectors/feiqiu && pytest tests/integration --with-db  # ETL integration tests
cd apps/backend && pytest tests/                                     # backend tests
cd /c/NeoZQYY && pytest tests/ -v                                    # monorepo property tests (hypothesis)
cd apps/admin-web && pnpm test                                       # frontend Vitest
cd apps/admin-web && pnpm e2e                                        # Playwright e2e
cd apps/admin-web && pnpm lint                                       # TypeScript checks
cd apps/miniprogram && npm test                                      # miniprogram Jest
```
## Architecture Patterns

### Database

- Six-layer schema: `meta` → `ods` (raw) → `dwd` (normalized) → `core` (aggregated) → `dws` (business summaries) → `app` (RLS views)
- Cross-database access: `zqyy_app` maps the `etl_feiqiu.app` schema read-only via FDW
- Multi-site isolation: `site_id` + RLS; the `app` schema view layer filters on the `app.current_site_id` session variable

### RLS view dual-schema rule (lesson learned 2026-03-29)

RLS views for new DWS/DWD tables must be created in both the source schema (e.g. `dws`) and the `app` schema. The backend reads through `app.v_*`; creating the view only in `dws` makes backend queries fail. Migration template: first `CREATE VIEW dws.v_xxx`, then `CREATE VIEW app.v_xxx` with the same WHERE clause. Rollback must DROP them in reverse order.

### ETL

- Task pattern: subclass `BaseTask` (Extract → Transform → Load), register in `orchestration/task_registry.py`
- Loader pattern: one loader per target table, `upsert()` + conflict handling
- SCD2 handling: the `scd/` module
- Flows: selected via `--pipeline` (e.g. `api_full`)

### Backend

- Global response wrapping: `ResponseWrapperMiddleware` wraps every 2xx response as `{ "code": 0, "data": <payload> }`
- `CamelModel` base class: automatic snake_case → camelCase conversion (for the miniprogram API)
- Dual JWT auth: username/password (admin) + WeChat code (miniprogram); pending-review users get a limited token
- AI integration: 8 Qianwen apps via the DashScope SDK (chat/finance/clue/analysis/tactics/note/customer/consolidate), with circuit breaking, rate limiting, and budget tracking
- Background services (lifespan): `TaskQueue` (consumed per site_id), `Scheduler` (reads scheduled_tasks and enqueues automatically), 4 triggers
## File Placement Rules

| Criterion | Location |
|----------|----------|
| Docs only this module's developers need | module-local `docs/` |
| Cross-module or global-view docs | root `docs/` |
| Tests that only verify this module's logic | module-local `tests/` |
| Tests guarding monorepo structure/conventions | root `tests/` |
| Scripts that only touch this module's data | module-local `scripts/` |
| Ops/global utility scripts | root `scripts/` |
| Audit records (changes in any module) | root `docs/audit/` — never inside submodules |
| Database docs (global schema view) | root `docs/database/` |

Audit artifact paths:
- Change records: `docs/audit/changes/<YYYY-MM-DD>__<slug>.md`
- Audit dashboard: `docs/audit/audit_dashboard.md` (generated by `scripts/audit/gen_audit_dashboard.py`; never edit by hand)
## Feiqiu Data Rules

Authoritative docs: `docs/reports/DWD-DOC/` (the 12 DWD rules) plus the DWS authoritative spec in the same directory. When the BD manual, ETL docs, or DDL comments conflict with it, DWD-DOC wins.

### Hard rules, quick reference

1. `consume_money` must never be used in calculations directly → use `items_sum = table_charge_money + goods_money + assistant_pd_money + assistant_cx_money + electricity_money`
2. Assistant fees must be split: `assistant_pd_money` (accompanying play) + `assistant_cx_money` (overtime rest); never use `service_fee` / `ASSISTANT_BASE` / `ASSISTANT_BONUS`
3. Payment identity: `balance_amount = recharge_card_amount + gift_card_amount`; never double-count the three
4. `settle_type` filter: positive transactions are `IN (1, 3)`; this table has no `is_delete` column
5. Member info joins the dimension table via `member_id` (`scd2_is_current=1`); the redundant member fields on the settlement head are unreliable (DQ-6/DQ-7)
6. The payment-method split comes from the `dwd_payment` table (DQ-8): `payment_method=2` cash, `payment_method=4` scan-to-pay; `dwd_settlement_head_ex.cash_amount/online_amount` are unreliable
7. Walk-in customers: `member_id ≤ 0`; add `member_id > 0` at every member-filtering entry point in the pipeline
8. Course types / pricing / performance tiers / bonuses / index weights → read from config tables; never hard-code
9. DWS summary tables use delete-before-insert; inventory tables use upsert
10. Discount exclusivity: `discount_manual` + `discount_other` = `adjust_amount`
11. Cash-flow exclusivity: `platform_settlement_amount` and `groupbuy_pay_amount` are mutually exclusive
12. Void-order check: `dwd_assistant_service_log_ex.is_trash`

### Data-source priority

DWS > DWD (detail + dimension tables) > never ODS (API snapshot tables; one id can have 100+ duplicate rows, so joins explode row counts)

### Reference priority

DWD-DOC > DWS authoritative spec > BD manual > ETL task docs > business-rules docs > DDL comments
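Rules 1 and 3 above are plain arithmetic identities, so they can be spot-checked in a few lines. A minimal sketch using `Decimal` for exact money arithmetic; the row values are invented for illustration and the helper names are not repo code:

```python
from decimal import Decimal

def items_sum(row: dict) -> Decimal:
    # Rule 1: never use consume_money directly; recompute from components.
    return (row["table_charge_money"] + row["goods_money"]
            + row["assistant_pd_money"] + row["assistant_cx_money"]
            + row["electricity_money"])

def check_balance_identity(row: dict) -> bool:
    # Rule 3: balance_amount = recharge_card_amount + gift_card_amount
    return row["balance_amount"] == row["recharge_card_amount"] + row["gift_card_amount"]

row = {
    "table_charge_money": Decimal("120.00"),
    "goods_money": Decimal("35.50"),
    "assistant_pd_money": Decimal("80.00"),
    "assistant_cx_money": Decimal("0.00"),
    "electricity_money": Decimal("0.00"),  # electricity billing is currently unused
    "balance_amount": Decimal("100.00"),
    "recharge_card_amount": Decimal("70.00"),
    "gift_card_amount": Decimal("30.00"),
}
print(items_sum(row))               # 235.50
print(check_balance_identity(row))  # True
```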
## Deprecated Objects Blacklist (never reference)

- `dwd_assistant_trash_event` / `_ex` (dropped 2026-02-22) → use `dwd_assistant_service_log_ex.is_trash`
- `ods.assistant_cancellation_records` (2026-02-22) → no standalone pipeline needed
- `ODS_ASSISTANT_ABOLISH` / `ASSISTANT_ABOLISH` (2026-02-22) → none
- `BILLIARD_VIP` (2026-03-07) → V1-V4 map to `BILLIARD`, V5 to `SNOOKER`
- `dws_member_recall_index` / `v_dws_member_recall_index` (2026-03-20) → WBI + NCI
- Every `_archived/` directory: never read or reference

## Pre-coding Requirements Interrogation (mandatory)

For new features/modules/pages/APIs, refactors, multi-module changes, or whenever the requirement is vague or carries hidden assumptions:
1. Do not start immediately; run a question loop first (3-5 questions per round)
2. Mandatory dimensions: user role, core action, consequence of completion, data write/display/source, error/success feedback, auth & permissions, storage (which DB / new table?), device support, edge cases (concurrency/idempotency/timeouts)
3. Produce a "requirements confirmation summary"; only implement after the user confirms

Exceptions: the user says "just do it / skip the interrogation / no questions", bug fixes with clear repro steps, pure formatting/doc tweaks, a complete spec already exists

## Pre-change Research for Logic Changes (mandatory)

Before writing code for any logic change (ETL / business rules / API / data model / frontend interaction):
1. Research with an Explore subagent: target module files, `docs/audit/changes/` history, related README/PRD/BD manual, the files to modify plus their callers and callees, data flow (upstream → current → downstream), impact scope
2. Produce a "pre-change context summary" (module responsibilities, change history, impact scope, risks); start implementing after the user confirms

Flow: requirements interrogation → user confirmation → pre-change research → user confirmation → implementation

Exceptions: pure formatting, comment/doc-only text edits, the user says "just do it / skip the research", new files that touch no existing logic

## Database Schema Change Rules

When modifying anything that affects a PostgreSQL schema (migrations/DDL/table definitions/ORM models), `docs/database/` must be updated in the same change:
- Change description: added/modified/dropped tables, columns, constraints, indexes
- Compatibility: impact on ETL, the backend API, and miniprogram field mappings
- Rollback strategy: how to undo
- Verification steps: at least 3 verification SQL statements

## Test & Verification Environment Rules

1. Must `load_dotenv` the root `.env`; fail immediately when a required variable is missing — never silently fall back to an empty string
2. cwd must match production: ETL → `apps/etl/connectors/feiqiu/`; backend → `apps/backend/`
3. Config goes through the normal `AppConfig.load()` path; never build a simplified config just for tests
4. Databases use the test DBs (`TEST_DB_DSN`); never connect to production
5. Run property tests in batches: one test function at a time or via `-k`; never all at once; hypothesis default `max_examples=100`

Exceptions: the user requests a simplified environment, pure unit tests with FakeDB/FakeAPI, `--dry-run` CLI verification

## Script Rules

- Prefer Python scripts for complex operations; avoid convoluted shell logic
- One-off ops scripts → `scripts/ops/`; module-specific → the module's own `scripts/`
- Scripts under `scripts/ops/` are outside the uv workspace; importing ETL pure functions requires `importlib.util` + stubs (see `scripts/ops/backfill_finance_area_daily.py`, `tests/conftest.py`)

## Subagent Usage Principles

- Delegate to subagents: bulk file reads (≥3 files), wide searches, exploring unfamiliar modules, multi-step shell operations
- Handle in the main flow: a single known file, one simple command, a small precise search

## Audit

Every logic change must be traceable, verifiable, and reversible.

After finishing a round of changes, run the `/audit` command. Flow:
1. Run the `python scripts/audit/prescan.py` pre-scan (identifies changed files, classifies risk, checks compliance; zero tokens)
2. Add semantic context (pull each change's reason and approach from conversation memory)
3. Delegate writing the audit record under `docs/audit/changes/` to a subagent
4. Backfill any missing doc sync
5. Run `python scripts/audit/gen_audit_dashboard.py` to refresh the audit dashboard

The pre-scan script supports `--files`: when git status carries many historical uncommitted changes, pass only the files touched in this session.

Deprecated (no longer used):
- Prompt-ID tracing, `docs/audit/prompt_logs/` — covered by Claude Code's native session logs
- In-file AI_CHANGELOG markers — replaced by git blame
- Block-level CHANGE code markers — replaced by git blame
- Session logs (`docs/audit/session_logs/`) — covered by Claude Code's native session storage
74 apps/etl/connectors/feiqiu/CLAUDE.md Normal file
@@ -0,0 +1,74 @@
# CLAUDE.md — ETL Feiqiu Connector

Loaded automatically when entering this directory. Contains the mandatory business rules for the DWD and DWS layers.

## DWD-DOC Benchmark Docs (authoritative data source)

`docs/reports/DWD-DOC/` is the authoritative benchmark for the business model and financial data. Any work touching money semantics, payment channels, the consumption chain, accounting formulas, or field meanings must treat this directory as the first reference.

### Document list

| File | Contents | Key rules |
|------|------|----------|
| `01-business-panorama.md` | Consumption chain + discount mechanics + scenarios | settle_type enum, assistant fee split, three-tier group-buy pricing |
| `02-accounting-panorama.md` | Payment channels + reconciliation formulas + consume_money semantics | payment-channel identity, F2 three-period formula |
| `03-financial-panorama.md` | Revenue composition + stored-value card cash flow + reconciliation matrix | platform-settlement exclusivity |
| `04-dimension-panorama.md` | Dimension tables and master data | SCD2 dimension lookup rules |
| `05-f2-balance-audit.md` | F2 balance formula deep dive | three-period formula + root cause of 139 failures |
| `06-calibration-checklist.md` | Calibration checklist + verification SQL | all verification formulas in one place |
| `consume/consume-money-caliber.md` | Timeline of consume_money semantics | definitions of the three semantics (A/B/C) and switchover dates |

### The 12 mandatory DWD rules

1. **consume_money must never be used in calculations directly**: three historical semantics (A/B/C) are mixed; DWS and downstream uniformly use `items_sum = table_charge_money + goods_money + assistant_pd_money + assistant_cx_money + electricity_money`
2. **Assistant fees must be split**: use `assistant_pd_money` (accompanying play) and `assistant_cx_money` (overtime rest); never `service_fee` / `ASSISTANT_BASE` / `ASSISTANT_BONUS` (`service_fee` means "platform service fee" only in the platform settlement table — a different meaning)
3. **Payment-channel identity**: `balance_amount = recharge_card_amount + gift_card_amount` (holds 100% of the time); never double-count the three
4. **settle_type filter**: positive transactions are `IN (1, 3)`; this table has no `is_delete` column
5. **Electricity billing is unused**: `electricity_money` is always 0; `gross_amount` excluding electricity is correct
6. **Discount exclusivity**: `discount_manual` (key-account discount) and `discount_other` are mutually exclusive; their sum = `adjust_amount`
7. **Cash-flow exclusivity**: within `cash_inflow_total`, `platform_settlement_amount` and `groupbuy_pay_amount` are mutually exclusive
8. **Void-order check**: use `dwd_assistant_service_log_ex.is_trash`; `dwd_assistant_trash_event` is deprecated
9. **Stored-value card field naming**: the DWS layer uses `balance_pay`/`recharge_card_pay`/`gift_card_pay`; the financial daily report uses `recharge_card_consume`
10. **Member field gap (DQ-6)**: `settlement_head.member_phone/member_name` are all NULL since 2025-12 → LEFT JOIN `dwd.dim_member` via `member_id` (`scd2_is_current=1`)
11. **Member-card field gap (DQ-7)**: `settlement_head.member_card_type_name` is all NULL since 2025-07-21 → LEFT JOIN `dwd.dim_member_card_account` via `member_id` (`scd2_is_current=1`). General rule: every redundant member field on the settlement head is unreliable
12. **Payment-method split (DQ-8)**: `dwd_settlement_head_ex.cash_amount`/`online_amount` are unreliable. The correct source is the `dwd_payment` table: `payment_method=2` cash, `payment_method=4` scan-to-pay. Link to the settlement head via `relate_type=2` + `relate_id`
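Rule 12's join-and-split logic can be sketched in plain Python over in-memory rows. This is an illustrative check only, not repo code; the sample rows and the helper name are invented, while the `relate_type`/`payment_method` codes follow the rule above:

```python
from decimal import Decimal

CASH, SCAN = 2, 4  # payment_method codes per DQ-8

def split_payment(payments: list[dict], settlement_id: int) -> dict:
    """Derive cash/online totals for one settlement from dwd_payment-style rows,
    keeping only rows linked via relate_type=2 + relate_id."""
    linked = [p for p in payments
              if p["relate_type"] == 2 and p["relate_id"] == settlement_id]
    return {
        "cash_amount": sum((p["amount"] for p in linked if p["payment_method"] == CASH), Decimal("0")),
        "online_amount": sum((p["amount"] for p in linked if p["payment_method"] == SCAN), Decimal("0")),
    }

payments = [
    {"relate_type": 2, "relate_id": 1001, "payment_method": 2, "amount": Decimal("50.00")},
    {"relate_type": 2, "relate_id": 1001, "payment_method": 4, "amount": Decimal("185.50")},
    {"relate_type": 1, "relate_id": 1001, "payment_method": 4, "amount": Decimal("9.99")},  # other relation: ignored
]
result = split_payment(payments, 1001)
print(result["cash_amount"])    # 50.00
print(result["online_amount"])  # 185.50
```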
## DWS Authoritative Spec

> The 12 DWD rules also apply at the DWS layer. On conflict, DWD-DOC wins.

### Idempotent update strategy
- Summary tables default to delete-before-insert (delete by date range + site_id, then insert)
- Inventory tables use upsert (`ON CONFLICT DO UPDATE`)
- TRUNCATE is forbidden

### Course types and pricing
- Course types map via `cfg_skill_type` (`skill_id` → `course_type_code`: BASE/BONUS/ROOM); never hard-code
- Pricing uses `cfg_assistant_level_price` with an SCD2 as-of join on the effective period; never hard-code prices
- Room courses are a flat 138 CNY/hour (`dws.salary.room_course_price`)

### Performance tiers and salary
- Performance tiers match effective performance hours against `cfg_performance_tier` `[min_hours, max_hours)` intervals
- New-hire proration: hired after the 1st of the month → tier by daily average × 30; hired after the 25th → at most T2
- Bonuses via `cfg_bonus_rules`: SPRINT takes the highest tier without stacking; TOP_RANK pays by rank (1000/600/400 CNY)
- Ranking uses `calculate_rank_with_ties()`; equal performance ties
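The tie-aware ranking above can be sketched as standard competition ranking — equal values share a rank and the next distinct value skips ahead. This is a hypothetical equivalent of `calculate_rank_with_ties()`, not the repo's implementation, and the competition-style (1, 1, 3) semantics is an assumption:

```python
def rank_with_ties(values: list[float]) -> list[int]:
    """Competition-style ranking: equal values share a rank,
    the next distinct value skips ahead (1, 1, 3, ...)."""
    order = sorted(values, reverse=True)
    # index() finds the first (best) position of each value, so ties share a rank
    return [order.index(v) + 1 for v in values]

print(rank_with_ties([120.0, 95.5, 120.0, 80.0]))  # [1, 3, 1, 4]
```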
### Members and walk-ins
- Walk-ins: `member_id ≤ 0`; excluded from member statistics (but counted toward assistant performance)
- Customer tiers: high-value (≥3 visits and ≥1000 CNY within 90 days) → medium → low-activity → churned
- Member info always joins dimension tables by ID

### Time windows and scheduling
- Standard rolling windows: 7/10/15/30/60/90 days
- Monthly-task grace period: the first 5 days of a month may process the previous month's data
- Salary calculation runs within the first 5 days of the month

### Index parameters
- All weights and thresholds load from `cfg_index_parameters` by `index_type` (WBI/NCI/RS/OS/MS/ML/SPI); never hard-code

### Table-area classification
- `cfg_area_category` uses exact match + fallback only: BILLIARD/SNOOKER/OTHER. `BILLIARD_VIP` is deprecated

### Reference priority

DWD-DOC > DWS authoritative spec > BD manual > ETL task docs > business-rules docs > DDL comments
45 db/CLAUDE.md Normal file
@@ -0,0 +1,45 @@
# CLAUDE.md — Database (DDL / Migrations / Seeds)

Loaded automatically when entering this directory.

## Schema Change Rules

When modifying anything that affects a PostgreSQL schema (migration scripts/DDL/table definitions), `docs/database/` must be updated in the same change:

1. **Change description**: added/modified/dropped tables, columns, constraints, indexes
2. **Compatibility**: impact on ETL, the backend API, and miniprogram field mappings
3. **Rollback strategy**: how to undo (DDL rollback / data backfill)
4. **Verification steps**: at least 3 verification SQL statements

## RLS View Dual-Schema Rule

RLS views for new DWS/DWD tables must be created in both the source schema (e.g. `dws`) and the `app` schema:

```sql
-- 1. source schema
CREATE VIEW dws.v_xxx AS SELECT ... WHERE site_id = current_setting('app.current_site_id')::int;

-- 2. app schema (the backend reads through this path)
CREATE VIEW app.v_xxx AS SELECT ... WHERE site_id = current_setting('app.current_site_id')::int;
```

Rollback must DROP the views in both schemas in reverse order. Creating the view only in the source schema makes backend queries fail.
## Directory Structure

```
db/
├── etl_feiqiu/
│   ├── schemas/      # six-layer schema DDL (meta/ods/dwd/core/dws/app)
│   ├── migrations/   # migration scripts (date-prefixed: YYYY-MM-DD__slug.sql)
│   ├── seeds/        # seed data
│   └── scripts/      # test-database scripts
├── zqyy_app/
│   └── schemas/      # business-database DDL
└── fdw/              # FDW cross-database read-only mappings
```
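The `YYYY-MM-DD__slug.sql` naming convention above can be enforced with a one-line regex. A minimal sketch — the function name is illustrative and the allowed slug character set (lowercase letters, digits, `_`, `-`) is an assumption, not spelled out by the convention:

```python
import re

# Date prefix, double underscore separator, slug, .sql extension
MIGRATION_NAME = re.compile(r"^\d{4}-\d{2}-\d{2}__[a-z0-9_-]+\.sql$")

def is_valid_migration_name(filename: str) -> bool:
    return bool(MIGRATION_NAME.match(filename))

print(is_valid_migration_name("2026-03-29__add_dws_rls_views.sql"))  # True
print(is_valid_migration_name("add_views.sql"))                      # False
```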
## Test Rules

- Database operations use the test DBs (`TEST_DB_DSN` / `TEST_APP_DB_DSN`); never connect to production
- After running a migration on the test DB, verify the resulting table structure
214 scripts/audit/prescan.py Normal file
@@ -0,0 +1,214 @@
#!/usr/bin/env python3
"""Audit pre-scan — identify changed files, classify risk, check compliance.

Merged from .kiro/scripts/audit_flagger.py + change_compliance_prescan.py,
with the .kiro/state dependency removed; prints JSON to stdout for the
/audit command to read.

Usage:
    python scripts/audit/prescan.py
    python scripts/audit/prescan.py --files "apps/backend/app/routers/foo.py,db/etl_feiqiu/migrations/xxx.sql"

Without --files, the change list comes from git status.
With --files, the given comma-separated list is used and git is skipped.
"""

import argparse
import json
import re
import subprocess
import sys
from datetime import datetime, timezone, timedelta

TZ_SHANGHAI = timezone(timedelta(hours=8))

# ── High-risk path rules ──

RISK_RULES = [
    (re.compile(r"^apps/etl/connectors/feiqiu/(api|cli|config|database|loaders|models|orchestration|scd|tasks|utils|quality)/"), "etl"),
    (re.compile(r"^apps/backend/app/"), "backend"),
    (re.compile(r"^apps/admin-web/src/"), "admin-web"),
    (re.compile(r"^apps/tenant-admin/src/"), "tenant-admin"),
    (re.compile(r"^apps/miniprogram/(miniapp|miniprogram)/"), "miniprogram"),
    (re.compile(r"^packages/shared/"), "shared"),
    (re.compile(r"^db/"), "db"),
]

NOISE_PATTERNS = [
    re.compile(r"^docs/audit/"),
    re.compile(r"^\.kiro/"),
    re.compile(r"^\.claude/"),
    re.compile(r"^tmp/"),
    re.compile(r"^\.hypothesis/"),
    re.compile(r"\.png$"),
    re.compile(r"\.jpg$"),
]

# ── Code → doc mapping ──

DOC_MAP = {
    "apps/backend/app/routers/": ["apps/backend/docs/API-REFERENCE.md"],
    "apps/backend/app/services/": ["apps/backend/docs/API-REFERENCE.md", "apps/backend/README.md"],
    "apps/backend/app/auth/": ["apps/backend/docs/API-REFERENCE.md", "apps/backend/README.md"],
    "apps/backend/app/schemas/": ["apps/backend/docs/API-REFERENCE.md"],
    "apps/etl/connectors/feiqiu/tasks/": ["apps/etl/connectors/feiqiu/docs/etl_tasks/"],
    "apps/etl/connectors/feiqiu/loaders/": ["apps/etl/connectors/feiqiu/docs/etl_tasks/"],
    "apps/etl/connectors/feiqiu/scd/": ["apps/etl/connectors/feiqiu/docs/business-rules/scd2_rules.md"],
    "apps/etl/connectors/feiqiu/orchestration/": ["apps/etl/connectors/feiqiu/docs/architecture/"],
    "apps/admin-web/src/": ["apps/admin-web/README.md"],
    "apps/tenant-admin/src/": ["apps/tenant-admin/README.md"],
    "apps/miniprogram/": ["apps/miniprogram/README.md"],
    "packages/shared/": ["packages/shared/README.md"],
    "db/etl_feiqiu/migrations/": ["docs/database/"],
    "db/zqyy_app/migrations/": ["docs/database/"],
}

MIGRATION_PATTERNS = [
    re.compile(r"^db/etl_feiqiu/migrations/.*\.sql$"),
    re.compile(r"^db/zqyy_app/migrations/.*\.sql$"),
    re.compile(r"^db/fdw/.*\.sql$"),
]

DDL_BASELINE_DIR = "docs/database/ddl/"
BD_MANUAL_PATTERN = re.compile(r"^docs/database/BD_Manual_.*\.md$")


def get_changed_files_from_git() -> list[str]:
    """Extract changed file paths from `git status --porcelain`."""
    try:
        result = subprocess.run(
            ["git", "status", "--porcelain"],
            capture_output=True, text=True, timeout=10,
        )
        if result.returncode != 0:
            return []
    except Exception:
        return []

    files = []
    for line in result.stdout.splitlines():
        if len(line) < 4:
            continue
        path = line[3:].strip()
        if " -> " in path:
            # rename entries: keep the destination path
            path = path.split(" -> ")[-1]
        path = path.strip().strip('"').replace("\\", "/")
        if path:
            files.append(path)
    return sorted(set(files))


def is_noise(f: str) -> bool:
    return any(p.search(f) for p in NOISE_PATTERNS)


def classify(files: list[str]) -> dict:
    """Classify changed files into a structured review checklist."""
    real_files = [f for f in files if not is_noise(f)]

    risk_tags = []
    high_risk_files = []
    new_migration_sql = []
    code_without_docs = []
    has_bd_manual = False
    has_ddl_baseline = False

    code_files = []
    doc_files = set()

    for f in real_files:
        # high-risk classification
        for pattern, label in RISK_RULES:
            if pattern.search(f):
                high_risk_files.append(f)
                tag = f"dir:{label}"
                if tag not in risk_tags:
                    risk_tags.append(tag)
                break

        # loose files at the repo root
        if "/" not in f and "root-file" not in risk_tags:
            risk_tags.append("root-file")

        # migration SQL
        for mp in MIGRATION_PATTERNS:
            if mp.search(f):
                new_migration_sql.append(f)
                if "db-schema-change" not in risk_tags:
                    risk_tags.append("db-schema-change")
                break

        # BD manual / DDL baseline
        if BD_MANUAL_PATTERN.search(f):
            has_bd_manual = True
        if f.startswith(DDL_BASELINE_DIR):
            has_ddl_baseline = True

        # bucketing
        if f.endswith(".md") or "/docs/" in f:
            doc_files.add(f)
        if f.endswith((".py", ".ts", ".tsx", ".js", ".jsx", ".sql")):
            code_files.append(f)

    # code → doc mapping check
    for cf in code_files:
        expected_docs = []
        for prefix, docs in DOC_MAP.items():
            if cf.startswith(prefix):
                expected_docs.extend(docs)
        if not expected_docs:
            continue
        has_doc = False
        for ed in expected_docs:
            if ed in doc_files:
                has_doc = True
                break
            if ed.endswith("/") and any(d.startswith(ed) for d in doc_files):
                has_doc = True
                break
        if not has_doc:
            code_without_docs.append({
                "file": cf,
                "expected_docs": expected_docs,
            })

    return {
        "scanned_at": datetime.now(TZ_SHANGHAI).strftime("%Y-%m-%d %H:%M:%S"),
        "total_files": len(real_files),
        "all_files": real_files,
        "high_risk_files": sorted(set(high_risk_files)),
        "risk_tags": risk_tags,
        "new_migration_sql": new_migration_sql,
        "code_without_docs": code_without_docs,
        "has_bd_manual": has_bd_manual,
        "has_ddl_baseline": has_ddl_baseline,
        "audit_required": len(risk_tags) > 0,
    }


def main():
    parser = argparse.ArgumentParser()
    parser.add_argument(
        "--files",
        help="comma-separated file list (skips git status)",
        default=None,
    )
    args = parser.parse_args()
||||
if args.files:
|
||||
files = [f.strip() for f in args.files.split(",") if f.strip()]
|
||||
else:
|
||||
files = get_changed_files_from_git()
|
||||
|
||||
if not files:
|
||||
print(json.dumps({"audit_required": False, "total_files": 0}, ensure_ascii=False))
|
||||
return
|
||||
|
||||
result = classify(files)
|
||||
print(json.dumps(result, indent=2, ensure_ascii=False))
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
try:
|
||||
main()
|
||||
except Exception as e:
|
||||
print(json.dumps({"error": str(e), "audit_required": False}, ensure_ascii=False))
|
||||
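The porcelain parsing in `get_changed_files_from_git` handles renames (`old -> new`) and quoted paths containing spaces. That extraction step can be sketched in isolation; the sample porcelain lines below are hypothetical inputs, not output captured from this repository:

```python
def extract_path(line: str) -> str:
    # git status --porcelain: a 2-char status code, a space, then the path.
    # Renames appear as "old -> new"; paths with spaces are double-quoted.
    path = line[3:].strip()
    if " -> " in path:
        path = path.split(" -> ")[-1]
    return path.strip().strip('"').replace("\\", "/")

print(extract_path(" M scripts/audit/prescan.py"))        # scripts/audit/prescan.py
print(extract_path('R  "old name.md" -> "new name.md"'))  # new name.md
```

Taking the last segment after `" -> "` keeps only the post-rename path, which is the one that exists on disk and is what the classifier should audit.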