chore: migrate project path C:\NeoZQYY → C:\Project\NeoZQYY

Development environment migrated from the old VM (DESKTOP-KGB0K5G) to the new machine (DESKTOP-D676QDA); the project directory changed from C:\NeoZQYY to C:\Project\NeoZQYY, and absolute path references in 126 files were batch-replaced.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
@@ -16,7 +16,7 @@ CONTEXT TRANSFER: We are continuing a conversation that had gotten too long. Her
### Phase 1: Data Collection — DONE
- Ran `python scripts/ops/analyze_dataflow.py --limit 200` successfully
- 23 tables collected, all succeeded, 3405 total records
-- Output to `C:\NeoZQYY\export\dataflow_analysis\` with subdirs: `json_trees/`, `db_schemas/`, `collection_manifest.json`
+- Output to `C:\Project\NeoZQYY\export\dataflow_analysis\` with subdirs: `json_trees/`, `db_schemas/`, `collection_manifest.json`
- DWD tables all returned 0 columns (DWD table names don't match ODS table names — DWD uses dimension/fact table names like `dim_member`, `dim_assistant`, not the ODS raw table names). This is expected behavior.

### Phase 2: Semantic Analysis — IN PROGRESS (data reading complete, analysis not started)
@@ -33,7 +33,7 @@ CONTEXT TRANSFER: We are continuing a conversation that had gotten too long. Her
- JSON→ODS mapping (matched, payload-only, ignored fields)
- ODS→DWD mapping (direct, ETL-derived, SCD2 version control columns)
- Field coverage stats, type distribution, upstream/downstream mapping coverage
-- Save to `SYSTEM_ANALYZE_ROOT` (`C:\NeoZQYY\export\dataflow_analysis\`) as `dataflow_YYYY-MM-DD_HHMMSS.md`
+- Save to `SYSTEM_ANALYZE_ROOT` (`C:\Project\NeoZQYY\export\dataflow_analysis\`) as `dataflow_YYYY-MM-DD_HHMMSS.md`

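The SCD2 version-control columns mentioned above typically bound each row version with a validity window. A hypothetical in-memory sketch; the column names `valid_from`, `valid_to`, and `is_current` are assumptions for illustration, not taken from the project's DDL:

```python
from datetime import datetime

def scd2_apply(history: list[dict], member_id: str, new_row: dict, now: datetime) -> list[dict]:
    """Close the currently open version of a dimension row and append the new one."""
    for row in history:
        if row["member_id"] == member_id and row["is_current"]:
            row["valid_to"] = now        # close the open validity window
            row["is_current"] = False
    opened = {**new_row, "valid_from": now, "valid_to": None, "is_current": True}
    return history + [opened]

# Invented example: two successive versions of one member row
h = scd2_apply([], "M1", {"member_id": "M1", "name": "Alice"}, datetime(2026, 1, 1))
h = scd2_apply(h, "M1", {"member_id": "M1", "name": "Alicia"}, datetime(2026, 2, 16))
print([r["is_current"] for r in h])  # [False, True]
```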
### Key Data Summary from collection_manifest.json:
| Table | Records | ODS Cols | DWD Cols |
@@ -69,7 +69,7 @@ CONTEXT TRANSFER: We are continuing a conversation that had gotten too long. Her
- DDL COMMENTs follow the pattern: `【说明】...【示例】...【JSON字段】...` (description / example / source JSON field)
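The COMMENT pattern above can be split with a small regular expression. A minimal sketch, assuming the three labels always appear in this order; the function name and sample comment are invented for illustration:

```python
import re

# Assumed label order, from the observed convention: 【说明】…【示例】…【JSON字段】…
COMMENT_RE = re.compile(r"【说明】(.*?)【示例】(.*?)【JSON字段】(.*)")

def parse_ddl_comment(comment: str) -> dict:
    """Split a DDL COMMENT into description / example / source-JSON-field parts."""
    m = COMMENT_RE.match(comment)
    if not m:
        # Fall back to treating the whole comment as a plain description
        return {"description": comment, "example": "", "json_field": ""}
    description, example, json_field = (part.strip() for part in m.groups())
    return {"description": description, "example": example, "json_field": json_field}

# Invented sample comment
print(parse_ddl_comment("【说明】会员ID【示例】10086【JSON字段】member.id")["json_field"])  # member.id
```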
**NEXT STEPS**:
-1. Read remaining ODS schema files (18 more) from `C:\NeoZQYY\export\dataflow_analysis\db_schemas\ods_*.json`
+1. Read remaining ODS schema files (18 more) from `C:\Project\NeoZQYY\export\dataflow_analysis\db_schemas\ods_*.json`
2. Read ETL source code for data flow understanding:
   - `apps/etl/pipelines/feiqiu/loaders/ods/generic.py` (ODS loader)
   - `apps/etl/pipelines/feiqiu/loaders/base_loader.py`
@@ -80,15 +80,15 @@ CONTEXT TRANSFER: We are continuing a conversation that had gotten too long. Her
   - Field purpose inference using DDL COMMENT + JSON samples + ETL code
   - ODS→DWD mapping (requires reading DWD DDL files from `db/etl_feiqiu/schemas/` since runtime query returned empty)
4. Generate per-table statistics: field coverage rate, type distribution, mapping coverage
-5. Assemble final Markdown report and save to `C:\NeoZQYY\export\dataflow_analysis\dataflow_2026-02-16_HHMMSS.md`
+5. Assemble final Markdown report and save to `C:\Project\NeoZQYY\export\dataflow_analysis\dataflow_2026-02-16_HHMMSS.md`

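The field coverage rate in step 4 could be computed as the share of ODS columns that have a matched upstream JSON field. A minimal sketch with invented column sets (names are illustrative, not taken from the manifest):

```python
def field_coverage(json_fields: set[str], ods_columns: set[str]) -> float:
    """Fraction of ODS columns that have a matching upstream JSON field."""
    if not ods_columns:
        # Mirrors the DWD runtime queries that returned 0 columns
        return 0.0
    return len(json_fields & ods_columns) / len(ods_columns)

# Invented example: 2 of 4 ODS columns are fed directly from JSON
rate = field_coverage({"member_id", "name"}, {"member_id", "name", "etl_time", "src_file"})
print(f"{rate:.0%}")  # 50%
```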
**FILEPATHS**:
- `scripts/ops/analyze_dataflow.py` — CLI entry point
- `scripts/ops/dataflow_analyzer.py` — core collection module with ODS_SPECS
-- `C:\NeoZQYY\export\dataflow_analysis\collection_manifest.json` — collection results
-- `C:\NeoZQYY\export\dataflow_analysis\json_trees\*.json` — 23 JSON tree files (all read)
-- `C:\NeoZQYY\export\dataflow_analysis\db_schemas\ods_*.json` — 23 ODS schema files (5 read)
-- `C:\NeoZQYY\export\dataflow_analysis\db_schemas\dwd_*.json` — 23 DWD schema files (all empty/0 cols)
+- `C:\Project\NeoZQYY\export\dataflow_analysis\collection_manifest.json` — collection results
+- `C:\Project\NeoZQYY\export\dataflow_analysis\json_trees\*.json` — 23 JSON tree files (all read)
+- `C:\Project\NeoZQYY\export\dataflow_analysis\db_schemas\ods_*.json` — 23 ODS schema files (5 read)
+- `C:\Project\NeoZQYY\export\dataflow_analysis\db_schemas\dwd_*.json` — 23 DWD schema files (all empty/0 cols)
- `apps/etl/pipelines/feiqiu/loaders/` — ETL loader code
- `apps/etl/pipelines/feiqiu/docs/architecture/data_flow.md` — architecture doc (read)
- `.kiro/specs/dataflow-structure-audit/tasks.md` — spec tasks (all completed)
@@ -96,14 +96,14 @@ CONTEXT TRANSFER: We are continuing a conversation that had gotten too long. Her
**USER CORRECTIONS AND INSTRUCTIONS**:
- Analyze only the feiqiu connector for now
- Write the report in Chinese
-- Output to `SYSTEM_ANALYZE_ROOT` = `C:\NeoZQYY\export\dataflow_analysis`
+- Output to `SYSTEM_ANALYZE_ROOT` = `C:\Project\NeoZQYY\export\dataflow_analysis`
- Filename format: `dataflow_YYYY-MM-DD_HHMMSS.md`
- DWD layer uses different table names (dim_member, dim_assistant, fact_* etc.) not the ODS raw table names — need to look at DDL files or ETL code to find the actual DWD table mappings
- The workspace steering rules require: Chinese output, UTF-8, audit for high-risk changes, Python scripts for complex ops
- This is a Kiro Hook-triggered analysis workflow (hook at `.kiro/hooks/dataflow-analyze.kiro.hook`)

## Files to read
-- `C:\NeoZQYY\export\dataflow_analysis\collection_manifest.json`
+- `C:\Project\NeoZQYY\export\dataflow_analysis\collection_manifest.json`
- `apps/etl/pipelines/feiqiu/loaders/ods/generic.py`
- `apps/etl/pipelines/feiqiu/loaders/base_loader.py`
- `apps/etl/pipelines/feiqiu/docs/architecture/data_flow.md`