Compare commits

...

24 Commits

Author SHA1 Message Date
1850334bb7 Merge pull request 'docs: add live-operation issue tracking design and plan (#80)' (#395) from task/80-implementation-audit-closeout into base/ouroboros-workflow-20260302
Reviewed-on: #395
2026-03-04 00:45:06 +09:00
8a4cf31c4d Merge pull request 'docs: seed peak probability model plan (#86)' (#394) from task/86-peak-probability-plan into base/ouroboros-workflow-20260302
Reviewed-on: #394
2026-03-04 00:44:02 +09:00
agentson
c4eaf780a0 docs: sync requirements registry for #394 policy-doc update
All checks were successful
Gitea CI / test (pull_request) Successful in 32s
2026-03-04 00:38:12 +09:00
agentson
728b3667ab chore: retrigger CI after PR governance body update
All checks were successful
Gitea CI / test (pull_request) Successful in 32s
2026-03-04 00:36:56 +09:00
agentson
337c84c592 chore: retrigger CI after PR governance body update
Some checks failed
Gitea CI / test (pull_request) Failing after 5s
2026-03-04 00:36:41 +09:00
agentson
65ec0a2684 docs: add live-operation issue tracking implementation plan
Some checks failed
Gitea CI / test (pull_request) Failing after 5s
2026-03-02 23:07:10 +09:00
agentson
fdaa1aa33c docs: add live-operation issue tracking design 2026-03-02 23:06:40 +09:00
agentson
16515080dc docs: seed peak probability model plan (#86)
Some checks failed
Gitea CI / test (pull_request) Failing after 5s
2026-03-02 22:59:31 +09:00
agentson
acbf89bc90 docs: seed revised dual-track workflow plan 2026-03-02 22:59:17 +09:00
agentson
b708e8b4ed process: add mandatory PR body post-check step (#392) 2026-03-02 18:19:42 +09:00
7cd818f1e2 Merge pull request 'process: enforce issue-status consistency for completion marks (#390)' (#391) from feature/issue-390-validate-completion-consistency into feature/v3-session-policy-stream
All checks were successful
Gitea CI / test (push) Successful in 32s
Reviewed-on: #391
2026-03-02 10:38:21 +09:00
agentson
7c17535c3d test: narrow pending keyword and add pending-only guard (#390)
All checks were successful
Gitea CI / test (pull_request) Successful in 33s
Gitea CI / test (push) Successful in 32s
2026-03-02 10:33:58 +09:00
agentson
453d67b91c docs: sync requirements registry for governance gate (#390)
All checks were successful
Gitea CI / test (push) Successful in 32s
Gitea CI / test (pull_request) Successful in 32s
2026-03-02 10:03:38 +09:00
agentson
ade5971387 process: enforce issue-status consistency in audit doc validation (#390)
Some checks failed
Gitea CI / test (push) Successful in 32s
Gitea CI / test (pull_request) Failing after 5s
2026-03-02 09:59:40 +09:00
87683a88b4 Merge pull request 'risk: define and implement kill-switch refresh retry policy (#377)' (#389) from feature/issue-377-kill-switch-refresh-retry into feature/v3-session-policy-stream
All checks were successful
Gitea CI / test (push) Successful in 32s
Reviewed-on: #389
2026-03-02 09:47:56 +09:00
agentson
b34937ea9d risk: polish retry coverage and refresh failure summary
All checks were successful
Gitea CI / test (push) Successful in 32s
Gitea CI / test (pull_request) Successful in 31s
2026-03-02 09:44:24 +09:00
agentson
ba2370e40e risk: add kill-switch refresh retry policy and tests (#377)
All checks were successful
Gitea CI / test (push) Successful in 33s
Gitea CI / test (pull_request) Successful in 34s
2026-03-02 09:38:39 +09:00
1c41379815 Merge pull request 'strategy: align model exit signal policy with v2 spec (#369)' (#388) from feature/issue-369-model-exit-signal-spec-sync into feature/v3-session-policy-stream
All checks were successful
Gitea CI / test (push) Successful in 32s
Reviewed-on: #388
2026-03-02 09:35:23 +09:00
agentson
5e4c94bfeb strategy: implement model assist be-lock path and clarify audit note
All checks were successful
Gitea CI / test (push) Successful in 32s
Gitea CI / test (pull_request) Successful in 33s
2026-03-02 09:31:38 +09:00
agentson
2332ba868f strategy: align model exit signal as assist-only trigger (#369)
All checks were successful
Gitea CI / test (push) Successful in 32s
Gitea CI / test (pull_request) Successful in 32s
2026-03-02 09:25:03 +09:00
f6e4cc7ea9 Merge pull request 'analysis: reflect cost/execution in v2 backtest pipeline (#368)' (#387) from feature/issue-368-backtest-cost-execution into feature/v3-session-policy-stream
All checks were successful
Gitea CI / test (push) Successful in 33s
Reviewed-on: #387
2026-03-02 09:21:06 +09:00
agentson
2776a074b5 analysis: remove dead init and split execution seeds in fold
All checks were successful
Gitea CI / test (push) Successful in 33s
Gitea CI / test (pull_request) Successful in 33s
2026-03-02 09:17:22 +09:00
agentson
0fb56a4a1a ci: retrigger after PR traceability update
All checks were successful
Gitea CI / test (push) Successful in 32s
Gitea CI / test (pull_request) Successful in 34s
2026-03-02 09:08:00 +09:00
agentson
7e9738d5df docs: bump requirements registry version for policy change sync
Some checks failed
Gitea CI / test (push) Successful in 33s
Gitea CI / test (pull_request) Failing after 5s
2026-03-02 04:01:26 +09:00
25 changed files with 1344 additions and 35 deletions

View File

@@ -59,6 +59,18 @@ scripts/tea_comment.sh 374 /tmp/comment.md
- `scripts/tea_comment.sh` accepts stdin with `-` as body source.
- The helper fails fast when body looks like escaped-newline text only.
#### PR Body Post-Check (Mandatory)
Immediately after creating a PR, always verify that the body has not been corrupted into literal `\n` strings.
```bash
python3 scripts/validate_pr_body.py --pr <PR_NUMBER>
```
If validation fails:
- Fix the PR body immediately via an API patch or a file-based body
- Proceed to review/merge only after re-validation passes with the same command
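The corruption mode this check targets is a body whose newlines arrived as two-character `\n` sequences instead of real line breaks. A minimal sketch of that kind of detection (the helper below is hypothetical and is not the actual `scripts/validate_pr_body.py`):

```python
def body_looks_escaped(body: str) -> bool:
    """Return True when a PR body appears corrupted: it contains
    literal backslash-n sequences but no real newline characters."""
    return "\\n" in body and "\n" not in body

# Corrupted: newlines arrived as two-character "\n" sequences.
assert body_looks_escaped("## Summary\\n- item one\\n- item two")
# Healthy multi-line body passes.
assert not body_looks_escaped("## Summary\n- item one\n- item two")
```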
#### ❌ TTY Error - Interactive Confirmation Fails
```bash
~/bin/tea issues create --repo X --title "Y" --description "Z"

View File

@@ -1,6 +1,6 @@
<!--
Doc-ID: DOC-REQ-001
-Version: 1.0.7
+Version: 1.0.12
Status: active
Owner: strategy
Updated: 2026-03-02
@@ -19,7 +19,7 @@ Updated: 2026-03-02
- `REQ-V2-005`: Labeling must use the Triple Barrier (Upper/Lower/Time) method.
- `REQ-V2-006`: Validation enforces Walk-forward + Purge/Embargo.
- `REQ-V2-007`: A backtest that does not reflect costs/slippage/fill failures cannot be adopted.
-- `REQ-V2-008`: Kill Switch order is block new orders -> cancel open orders -> refresh -> reduce risk -> snapshot.
+- `REQ-V2-008`: Kill Switch order is block new orders -> cancel open orders -> refresh (on failure, retry up to 3 times with 1s/2s backoff; stop immediately on success) -> reduce risk -> snapshot.
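The refresh retry shape described in `REQ-V2-008` (at most 3 attempts, 1s then 2s backoff, stop on first success) can be sketched as follows. This is an illustrative model only, not the actual `src/core/kill_switch.py` code:

```python
import time

def refresh_with_retry(refresh, delays=(1, 2), max_attempts=3, sleep=time.sleep):
    """Call refresh() up to max_attempts times; on failure wait 1s, then 2s,
    and stop immediately on the first success."""
    for attempt in range(max_attempts):
        if refresh():
            return True
        if attempt < max_attempts - 1:
            sleep(delays[min(attempt, len(delays) - 1)])
    return False

# Second attempt succeeds: one 1s backoff, then an immediate stop.
attempts = iter([False, True])
assert refresh_with_retry(lambda: next(attempts), sleep=lambda s: None) is True
```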
## v3 Core Requirements
@@ -38,3 +38,9 @@ Updated: 2026-03-02
- `REQ-OPS-002`: Numeric policies in documents are changed only in the ledger.
- `REQ-OPS-003`: Implementation tasks must be accompanied by test tasks.
- `REQ-OPS-004`: The source plan documents (`v2`, `v3`) use the `docs/ouroboros/source/` path as the single canonical location.
## Change History
- 2026-03-02: Performed a policy-document sync check for the `v1.0.12` document-validation gate hardening (#390). (`REQ-OPS-002`)
- 2026-03-04: Performed a policy-document sync check following the addition of the 86_peak_probability_model_plan.md draft. (REQ-OPS-002)

View File

@@ -9,7 +9,7 @@ Updated: 2026-03-02
# v2/v3 Implementation Audit and Return Analysis Report
Created: 2026-02-28
-Last updated: 2026-03-02 (reflects #373 status-table reconciliation)
+Last updated: 2026-03-02 (reflects #377 kill-switch refresh retry policy)
Target period: 2026-02-25 ~ 2026-02-28 (live trading)
Analysis branch: `feature/v3-session-policy-stream`
@@ -32,11 +32,11 @@ Updated: 2026-03-02
| REQ-V2-001 | 4-state sell state machine (HOLDING→BE_LOCK→ARMED→EXITED) | `src/strategy/position_state_machine.py` | ✅ Done |
| REQ-V2-002 | Immediate promotion to the top state (gap handling) | `position_state_machine.py:51-70` | ✅ Done |
| REQ-V2-003 | EXITED evaluated first | `position_state_machine.py:38-48` | ✅ Done |
-| REQ-V2-004 | Quadruple exit logic (Hard/BE/ATR Trailing/Model) | `src/strategy/exit_rules.py` | ⚠️ Partial (`#369`) |
+| REQ-V2-004 | Quadruple exit logic (Hard/BE/ATR Trailing/Model assist-only, no direct EXIT trigger) | `src/strategy/exit_rules.py` | ✅ Done |
| REQ-V2-005 | Triple Barrier labeling | `src/analysis/triple_barrier.py` | ✅ Done |
| REQ-V2-006 | Walk-Forward + Purge/Embargo validation | `src/analysis/walk_forward_split.py` | ✅ Done |
| REQ-V2-007 | Cost/slippage/fill-failure model required | `src/analysis/backtest_cost_guard.py`, `src/analysis/backtest_pipeline.py` | ✅ Done |
-| REQ-V2-008 | Kill Switch execution order (Block→Cancel→Refresh→Reduce→Snapshot) | `src/core/kill_switch.py` | ⚠️ Partial (`#377`) |
+| REQ-V2-008 | Kill Switch execution order (Block→Cancel→Refresh(retry)→Reduce→Snapshot) | `src/core/kill_switch.py` | ✅ Done |
### 1.3 v3 Implementation Status: Partially Complete (as of 2026-03-02)
@@ -45,7 +45,7 @@ Updated: 2026-03-02
| REQ-V3-001 | `session_id` included in all signals/orders/logs | ⚠️ Partial | `session_id` missing from queue intents (`#375`) |
| REQ-V3-002 | Session-transition hook + risk-parameter reload | ✅ Done | Session-boundary E2E regression (override apply/release + reload-failure fallback) reinforced (`#376`) |
| REQ-V3-003 | Blackout window policy | ✅ Done | `src/core/blackout_manager.py` |
-| REQ-V3-004 | Blackout queue + revalidation on recovery | ⚠️ Partial | Aligned to queue-saturation oldest-drop policy (`#371`); revalidation hardening tracked in `#328` |
+| REQ-V3-004 | Blackout queue + revalidation on recovery | ✅ Done | DB logging (`#324`), revalidation hardening (`#328`), queue-saturation oldest-drop (`#371`) reflected |
| REQ-V3-005 | Market orders banned in low-liquidity sessions | ✅ Done | `src/core/order_policy.py` |
| REQ-V3-006 | Conservative backtest fills (adverse direction) | ✅ Done | `src/analysis/backtest_execution_model.py` |
| REQ-V3-007 | FX PnL separation (strategy PnL vs FX PnL) | ⚠️ Partial | Runtime split calculation/propagation applied (`#370`); `fx_pnl=0` fallback when buy-side `fx_rate` is unobserved |
@@ -89,13 +89,13 @@ Updated: 2026-03-02
- **Resolved**: Added session-boundary E2E regression tests verifying override apply/release and fallback retention on reload failure (`#376`)
- **Requirement**: REQ-V3-002
-### GAP-4: Blackout recovery DB logging + revalidation → ⚠️ Partially resolved (#324, #328, #371)
+### GAP-4: Blackout recovery DB logging + revalidation → Resolved (#324, #328, #371)
- **Location**: `src/core/blackout_manager.py`, `src/main.py`
- **Current status**:
-- An implementation path exists for the #324 tracked scope (DB logging)
-- The #328 scope (price/session revalidation hardening) remains an open tracking issue
+- #324: recovery-order DB logging implemented and tested
+- #328: price/session revalidation hardening implemented and merged
- #371: queue-saturation policy specified/implemented as oldest-drop, removing the latest-intent loss path
- **Requirement**: REQ-V3-004
### GAP-5: Time barrier fixed at bar count → ✅ Resolved (#329)
@@ -328,7 +328,7 @@ Updated: 2026-03-02
| Add `log_trade()` for blackout recovery orders (GAP-4) | #324 | ✅ Merged |
| Dynamic risk-parameter reload on session transition (GAP-3) | #327 | ✅ Merged |
| Explicit `session_id` propagation to trade/decision logs (GAP-1, GAP-2) | #326 | ✅ Merged |
-| Blackout recovery price/session revalidation hardening (GAP-4 remainder) | #328 | ✅ Merged |
+| Blackout recovery price/session revalidation hardening (GAP-4) | #328 | ✅ Merged |
**Remaining improvement items:**
@@ -337,7 +337,6 @@ Updated: 2026-03-02
| P1 | Complete the US market ATR supply path (ROOT-5 remainder) | Medium |
| P1 | Enable FX PnL in operation (REQ-V3-007) | Low |
| P2 | Replace `pred_down_prob` with an ML model (ROOT-5 remainder) | High |
-| P2 | Reinforce session-boundary E2E integration tests (GAP-3 remainder) | Low |
### 5.3 Recommended Execution Order

View File

@@ -0,0 +1,168 @@
<!--
Doc-ID: DOC-PLAN-086
Version: 0.1.0
Status: draft
Owner: strategy
Updated: 2026-03-02
-->
# Peak Probability Model Introduction Plan (Draft)
Created: 2026-03-02
Purpose: Redefine the "model-assisted signal" from the existing plan (v2) into a simple form that is viable in live operation.
---
## 1. One-Line Summary
- The goal is to predict "the probability that now is the peak (or near the peak)" to assist take-profit, position-reduction, and overnight-risk decisions.
- Phase one focuses on regular sessions; extended sessions such as NXT/US_PRE are handled separately afterward.
---
## 2. Why Change
- The earlier discussion was broad in scope, making implementation/validation costly.
- We need a target that can start quickly with the current log structure.
- By the standard of live-trading utility, "near-peak detection" connects directly to exit-quality improvement.
---
## 3. Key Decisions in This Plan
### 3.1 Prediction Targets (2)
- `p_top_d0`: probability that the current moment is "today's regular-session peak (near-peak included)"
- `p_top_d1`: probability that the current moment is "the peak through tomorrow's regular-session close (near-peak included)"
### 3.2 Model Role
- Used as an assist signal, not a standalone sell trigger.
- The precedence of Hard stop / ATR trailing / Kill Switch is preserved.
### 3.3 Rollout Order
1. Build an MVP targeting KR/US regular sessions only
2. After performance/operational stabilization, extend separately to NXT, US_PRE, US_AFTER
---
## 4. Data/Label Definitions (MVP)
### 4.1 Sample Unit
- Unit: one observation at time t
- Target: order-eligible moments that pass the policy gates
### 4.2 Label Definitions
- `y_d0 = 1` if `max_high(t ~ today's regular-session close) <= price_t * (1 + eps_d0)` else `0`
- `y_d1 = 1` if `max_high(t ~ tomorrow's regular-session close) <= price_t * (1 + eps_d1)` else `0`
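The label rule above can be sketched directly from its definition. This is an illustrative sketch of the comparison, with example `eps` values only (the real defaults are still `TBD` per section 8):

```python
def label_top(price_t: float, window_max_high: float, eps: float) -> int:
    """y = 1 when no high inside the label window exceeds price_t
    by more than the eps near-peak band."""
    return 1 if window_max_high <= price_t * (1 + eps) else 0

# With an example eps_d0 = 0.5%: a window high of 100.3 from price 100 still counts as a top.
assert label_top(100.0, 100.3, 0.005) == 1
# A window high of 101.0 breaks the band, so t was not the peak.
assert label_top(100.0, 101.0, 0.005) == 0
```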
### 4.3 Required Features (Initial)
- Price/returns: `current_price`, `price_change_pct`
- Micro indicators: `rsi`, `volume_ratio`, `signal`, `score`
- Portfolio: `portfolio_pnl_pct`, `total_cash` (optional)
- State: `market`, `session_id`, `minutes_to_close`, `is_tradeable_now`
### 4.4 Data Exclusion Rules (Initial)
- Exclude `session_id in ('', 'UNKNOWN')`
- Exclude `price <= 0` or `quantity <= 0`
- Exclude `rationale LIKE '[startup-sync]%'`
- Exclude samples with an incomplete label window (market holiday / missing data)
---
## 5. Inference/Execution Rules (Draft)
### 5.1 Inference Output
```json
{
"p_top_d0": 0.00,
"p_top_d1": 0.00,
"model_version": "TBD"
}
```
### 5.2 Execution Reflection Principles
- When `p_top_d0` is high: tighten same-day take-profit/trailing
- When `p_top_d1` is high: prioritize reducing/closing overnight holdings
- Extended sessions (NXT/US_PRE) initially get conservative thresholds or execution restrictions
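The three reflection principles above can be sketched as a single decision helper. The thresholds, action names, and session identifiers below are all placeholders (the real thresholds are `TBD` per section 8), so this is only an illustration of the precedence:

```python
def exit_assist_action(p_top_d0, p_top_d1, session_id,
                       thr_d0=0.7, thr_d1=0.7,
                       restricted_sessions=("NXT", "US_PRE")):
    """Map assist probabilities to an action; thresholds are TBD placeholders."""
    if session_id in restricted_sessions:
        return "RESTRICTED"        # extended sessions stay conservative at first
    if p_top_d1 >= thr_d1:
        return "REDUCE_OVERNIGHT"  # prefer trimming/closing overnight holdings
    if p_top_d0 >= thr_d0:
        return "TIGHTEN_TRAILING"  # tighten same-day take-profit/trailing
    return "NO_CHANGE"

assert exit_assist_action(0.8, 0.2, "KR_REGULAR") == "TIGHTEN_TRAILING"
assert exit_assist_action(0.1, 0.9, "US_REGULAR") == "REDUCE_OVERNIGHT"
assert exit_assist_action(0.9, 0.9, "NXT") == "RESTRICTED"
```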
---
## 6. Session-Issue Handling Principles
### 6.1 Tradeable-Session Basis
- Build labels/inference on "the sessions in which the instrument is actually tradeable," not on calendar time.
### 6.2 Regular vs Extended Session Separation
- MVP: regular-session-only labels/model
- Extension: consider per-session heads or separate per-session models
### 6.3 Separate Peak Definitions
- `reg_high`: regular-session peak (primary label)
- `ext_high`: peak including extended sessions (auxiliary analysis)
---
## 7. Phased Implementation Plan
### Phase A: Freeze Contracts
- Finalize the canonical training-dataset schema
- Document the as-of alignment rules
- Add label-generation functions/validation tests
### Phase B: Offline Validation
- Walk-forward + Purge/Embargo evaluation
- Separate `d0` and `d1` reports
- Confirm improvement over B1 (no model)
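The evaluation scheme Phase B names can be sketched as a simple embargoed walk-forward splitter. This is a minimal illustration of the idea, not the repo's `src/analysis/walk_forward_split.py`, and the fold sizes are invented for the example:

```python
def walk_forward_splits(n, train_size, test_size, embargo):
    """Yield (train, test) index lists; `embargo` samples after each train
    window are dropped before the test window starts."""
    start = 0
    while start + train_size + embargo + test_size <= n:
        train = list(range(start, start + train_size))
        test_start = start + train_size + embargo
        test = list(range(test_start, test_start + test_size))
        yield train, test
        start += test_size

splits = list(walk_forward_splits(20, train_size=8, test_size=4, embargo=2))
# First fold trains on samples 0-7, embargoes 8-9, tests on 10-13.
assert splits[0] == (list(range(8)), [10, 11, 12, 13])
assert len(splits) == 2
```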
### Phase C: Runtime Integration
- Inject inference results as `pred_top_d0/pred_top_d1`
- Wire up sell/reduction assist rules
- Observe in shadow mode, then enable with restrictions
### Phase D: Session Extension
- Separate policies for NXT/US_PRE/US_AFTER
- Tune execution gates/thresholds per session
---
## 8. Minimum Items to Decide Now (TBD)
- [ ] Default values for `eps_d0`, `eps_d1` (`TBD`)
- [ ] Execution thresholds for `p_top_d0`, `p_top_d1` (`TBD`)
- [ ] Market scope for regular/extended sessions (`TBD`)
- [ ] Minimum training-sample count (`TBD`)
- [ ] Shadow observation period (`TBD`)
---
## 9. Risks and Mitigations
- Label imbalance: class weights, threshold tuning
- Time-of-day bias (near market close): add time features + report performance per interval
- Data integrity: enforce exclusion rules + CI validation
- Growing complexity: lock down the regular-session MVP first; split extended sessions into follow-up work
---
## 10. Follow-up Document Links (Planned)
- requirements registry: add new REQ/TASK/TEST mappings
- code-level work orders: break down implementation work
- acceptance/test plan: add `p_top_d0/d1` validation items
- implementation audit: update the existing "model-assisted signal" item

View File

@@ -0,0 +1,211 @@
# Dual-Track Workflow Execution Plan (Revised)
> **For Claude:** REQUIRED SUB-SKILL: Use superpowers:executing-plans to implement this plan task-by-task.
**Goal:** Re-seed workflow planning from a clean base branch (not the temporary branch), then execute isolated `86` and `80` tracks with real-operation-confirmed closeout and related Gitea issue closure.
**Architecture:** Start from `feature/v3-session-policy-stream` and create `base/ouroboros-workflow-20260302`. Bring only two workflow plan docs as one squashed seed commit. Branch `task/86...` and `task/80...` from that base with strict file-scope separation. For `80`, update audit only after real-operation confirmation, then close related Gitea issues (children before parent). Final promotion happens by merging the validated stream to `main`.
**Tech Stack:** Git CLI, Gitea (`tea`) issue/PR workflow, Markdown docs (`docs/ouroboros`, `docs/plans`)
---
### Task 1: Freeze temporary branch and identify export scope
**Files:**
- Modify: none
- Create: none
- Test: none
**Step 1: Inspect temporary branch state**
Run: `git status --short --branch`
Expected: current temporary branch with local changes listed.
**Step 2: Preserve changes using stash (recommended)**
Run: `git stash push -u -m "tmp-before-workflow-reseed-2026-03-02"`
Expected: stash created with tracked/untracked changes.
**Step 3: Define transferable artifacts**
Run: keep only these for reseed:
- `docs/plans/2026-03-02-ouroboros-dual-track-workflow-design.md`
- `docs/plans/2026-03-02-dual-track-workflow-execution-plan.md`
Expected: transfer scope fixed to two files.
**Step 4: Verify clean working tree**
Run: `git status --short --branch`
Expected: no local changes.
**Step 5: Commit**
```bash
# no commit in this task (stash-only preservation)
```
### Task 2: Create clean official base and seed one commit
**Files:**
- Modify: `docs/plans/2026-03-02-ouroboros-dual-track-workflow-design.md`
- Modify: `docs/plans/2026-03-02-dual-track-workflow-execution-plan.md`
- Test: none
**Step 1: Switch to canonical base**
Run: `git checkout feature/v3-session-policy-stream`
Expected: branch switch successful.
**Step 2: Update local base**
Run: `git pull --ff-only origin feature/v3-session-policy-stream`
Expected: up-to-date or fast-forward update.
**Step 3: Create staging base branch**
Run: `git checkout -b base/ouroboros-workflow-20260302`
Expected: new branch created from updated feature base.
**Step 4: Restore only two plan docs from temporary branch commits**
Run (example): `git checkout <temp-branch> -- docs/plans/2026-03-02-ouroboros-dual-track-workflow-design.md docs/plans/2026-03-02-dual-track-workflow-execution-plan.md`
Expected: only two files restored in working tree.
**Step 5: Create one squashed seed commit**
Run: `git add docs/plans/2026-03-02-ouroboros-dual-track-workflow-design.md docs/plans/2026-03-02-dual-track-workflow-execution-plan.md && git commit -m "docs: seed revised dual-track workflow plan"`
Expected: single commit with two files only.
**Step 6: Verify seed scope**
Run: `git show --name-only --oneline HEAD`
Expected: exactly the two plan docs listed.
**Step 7: Push official base**
Run: `git push -u origin base/ouroboros-workflow-20260302`
Expected: remote tracking set.
### Task 3: Open Track A (`86`) branch and initial PR
**Files:**
- Modify: `docs/ouroboros/86_peak_probability_model_plan.md`
- Test: none
**Step 1: Create Track A branch**
Run: `git checkout -b task/86-peak-probability-plan`
Expected: branch created from official base.
**Step 2: Stage only `86` document**
Run: `git add docs/ouroboros/86_peak_probability_model_plan.md`
Expected: only `86` doc staged.
**Step 3: Commit initial PR seed**
Run: `git commit -m "docs: seed peak probability model plan (#86)"`
Expected: one-document commit.
**Step 4: Push branch**
Run: `git push -u origin task/86-peak-probability-plan`
Expected: remote branch created.
**Step 5: Open PR-A**
Run: `tea pr create --base base/ouroboros-workflow-20260302 --head task/86-peak-probability-plan --title "docs: seed peak probability model plan (#86)" --description "Initial planning PR for iterative concretization."`
Expected: PR URL returned.
### Task 4: Open Track B (`80`) branch for operation-confirmed closeout
**Files:**
- Modify: `docs/ouroboros/80_implementation_audit.md`
- Test: operation observation evidence (outside unit-test scope)
**Step 1: Return to staging base**
Run: `git checkout base/ouroboros-workflow-20260302`
Expected: branch switch successful.
**Step 2: Create Track B branch**
Run: `git checkout -b task/80-implementation-audit-closeout`
Expected: branch created.
**Step 3: Apply closeout edits only after real-operation confirmation**
Run: edit `docs/ouroboros/80_implementation_audit.md`
Expected: status/evidence sections updated with observed behavior, confirmation timestamp, and operational proof summary.
**Step 4: Commit closeout update**
Run: `git add docs/ouroboros/80_implementation_audit.md && git commit -m "docs: close out implementation audit items after operation confirmation (#80)"`
Expected: single-scope commit.
**Step 5: Push and open PR-B**
Run: `git push -u origin task/80-implementation-audit-closeout`
Expected: remote branch created.
Run: `tea pr create --base base/ouroboros-workflow-20260302 --head task/80-implementation-audit-closeout --title "docs: close out implementation audit items (#80)" --description "Closeout based on real-operation confirmation."`
Expected: PR URL returned.
### Task 5: Close related Gitea issues after `80` confirmation
**Files:**
- Modify: `docs/ouroboros/80_implementation_audit.md` (reference linkage only)
- Test: issue parent-child completeness check
**Step 1: Build related issue set from updated `80` audit**
Run: parse issue references from `80` updated sections.
Expected: target open issue list prepared.
**Step 2: Close child issues first**
Run: `tea issues close <child-issue-number>` for each confirmed child.
Expected: all qualifying child issues closed.
**Step 3: Close parent issues only when all children are closed**
Run: `tea issues close <parent-issue-number>`
Expected: parent closes only after dependency check passes.
**Step 4: Record closure summary in PR comment**
Run: add PR comment with closed issue IDs and parent-child resolution note.
Expected: auditable trace for closeout.
### Task 6: Validate branch hygiene and merge path
**Files:**
- Modify: none
- Test: git diff/log checks
**Step 1: Verify PR-A scope**
Run: `git diff --name-only base/ouroboros-workflow-20260302...task/86-peak-probability-plan`
Expected: only `docs/ouroboros/86_peak_probability_model_plan.md` (and intentional related plan files if any).
**Step 2: Verify PR-B scope**
Run: `git diff --name-only base/ouroboros-workflow-20260302...task/80-implementation-audit-closeout`
Expected: only `docs/ouroboros/80_implementation_audit.md` (and intentional related plan files if any).
**Step 3: Merge both PRs into staging base**
Run: Gitea merge flow for PR-A and PR-B
Expected: staging base includes both tracks without cross-file contamination.
**Step 4: Promote validated stream to `main`**
Run: open PR from `feature/v3-session-policy-stream` (or its validated promotion branch) to `main`
Expected: merge-based history preservation with no cherry-pick/rebase surgery.
**Step 5: Commit**
```bash
# no additional commit required in this task
```

View File

@@ -0,0 +1,100 @@
# Live Operation Issue Tracking Design
**Date:** 2026-03-02
**Scope:** Real-operation confirmation tracking for all open Gitea issues, tied to `task/80-implementation-audit-closeout`.
---
## 1. Goal
Define an issue-by-issue tracking method for real-operation confirmation so that:
- every open issue is tracked with explicit lifecycle states,
- parent-child close order is enforced,
- and final closeout is reflected in `80` audit work without loss.
---
## 2. Target Set
- Coverage target: all current open Gitea issues (`318~381`, and future open issues during this closeout window).
- Tracking starts from open set snapshot and stays synced until closeout completion.
---
## 3. Tracking Architecture
- Primary log: direct comments on each issue ticket.
- Aggregate log: single summary comment on the `80` PR (`task/80-implementation-audit-closeout`).
- No separate tracking doc as source of truth during execution.
---
## 4. State Model and Criteria
State machine:
- `NOT_STARTED`
- `OBSERVING`
- `CONFIRMED`
- `CLOSED`
Rules:
- `CONFIRMED` requires one real-operation observation evidence item.
- `CLOSED` requires close conditions satisfied and ticket close action executed.
- Parent issue cannot become `CLOSED` until all child issues are `CLOSED`.
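The state machine and the three rules above can be sketched as a small transition check. This is an illustrative sketch only; the function and its names are hypothetical, not an existing module:

```python
# Forward-only lifecycle: NOT_STARTED -> OBSERVING -> CONFIRMED -> CLOSED.
NEXT = {"NOT_STARTED": "OBSERVING", "OBSERVING": "CONFIRMED", "CONFIRMED": "CLOSED"}

def can_transition(state, target, evidence=None, children_states=()):
    """Allow only forward lifecycle moves; CONFIRMED requires evidence, and
    a parent may reach CLOSED only when every child is already CLOSED."""
    if NEXT.get(state) != target:
        return False
    if target == "CONFIRMED" and not evidence:
        return False
    if target == "CLOSED" and any(s != "CLOSED" for s in children_states):
        return False
    return True

assert can_transition("OBSERVING", "CONFIRMED", evidence="order-log:2026-03-02")
# A parent with one open child cannot close yet.
assert not can_transition("CONFIRMED", "CLOSED", children_states=["CLOSED", "OBSERVING"])
assert can_transition("CONFIRMED", "CLOSED", children_states=["CLOSED", "CLOSED"])
```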
---
## 5. Issue Comment Template
Each issue update comment uses:
- `Status: <STATE>`
- `Observed At: <KST>, <UTC>`
- `Environment: live`
- `Evidence: <log/query/order-execution identifier>`
- `Next: <single next action>`
This template is reused for every transition for consistent auditability.
---
## 6. Parent-Child Policy
- Parent issue comment must list child issue IDs.
- Child issues transition independently.
- Parent close gate:
- if any child is not `CLOSED`, parent remains at most `CONFIRMED`;
- when all children are `CLOSED`, parent can transition to `CLOSED` and be closed.
---
## 7. Operational Loop
1. Initialize all open issues with `NOT_STARTED` comment.
2. Move active issues to `OBSERVING` when live confirmation begins.
3. Move to `CONFIRMED` after one real-operation evidence capture.
4. Close child issues first, then close parent issue(s).
5. Update the single `80` PR summary comment whenever issue state changes.
6. Reflect final confirmed/closed outcomes in `docs/ouroboros/80_implementation_audit.md`.
---
## 8. Evidence and Time Rules
- Evidence must be replayable/referenceable by identifier (not vague narrative only).
- Every update includes both KST and UTC timestamps.
- Use absolute dates (example: `2026-03-02`) to avoid ambiguity in session handoff.
---
## 9. Completion Conditions
Tracking design is complete when:
- all open issues are represented in issue comments with valid lifecycle state,
- parent-child closure constraints are respected,
- `80` PR summary comment reflects current global status,
- and final `80` audit document update is ready for closeout commit.

View File

@@ -0,0 +1,203 @@
# Live Operation Issue Tracking Implementation Plan
> **For Claude:** REQUIRED SUB-SKILL: Use superpowers:executing-plans to implement this plan task-by-task.
**Goal:** Track real-operation confirmation for all open Gitea issues with per-issue comments and a synchronized summary on the `80` PR, then close issues in dependency-safe order.
**Architecture:** Use issue tickets as the primary state log with a strict four-state lifecycle. Maintain one aggregate status comment on `task/80-implementation-audit-closeout` PR. Enforce child-first closure and parent close gating.
**Tech Stack:** Git CLI, Gitea CLI (`tea`), Markdown (`docs/ouroboros/80_implementation_audit.md`)
---
### Task 1: Snapshot open issues and classify parent-child graph
**Files:**
- Create: none
- Modify: none
- Test: `tea issues ls` output inspection
**Step 1: Capture open issue list**
Run: `tea issues ls --state open --limit 200`
Expected: list of all open issues (including `318~381`).
**Step 2: Fetch details for each issue**
Run: `for n in $(tea issues ls --state open --limit 200 --output json | jq -r '.[].number'); do tea issue view "$n"; done`
Expected: issue bodies/comments available for relation parsing.
**Step 3: Extract parent-child references**
Run: parse issue text for relation keywords (`parent`, `child`, `depends on`, `blocks`).
Expected: dependency map ready.
**Step 4: Validate unresolved dependency edges**
Run: verify each parent references only existing issue numbers.
Expected: no dangling child references.
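Steps 3-4 can be sketched with a small keyword parser. The relation keywords and phrasing below are assumptions about how issue bodies are written, so treat this as a starting sketch to adjust against the real tickets:

```python
import re

# Assumed relation keywords; adjust to the phrasing actually used in issue bodies.
RELATION = re.compile(r"(?i)\b(parent|child|depends on|blocks)\b[^\d]*#?(\d+)")

def extract_relations(body: str):
    """Return (keyword, referenced_issue_number) pairs found in an issue body."""
    return [(kw.lower(), int(num)) for kw, num in RELATION.findall(body)]

body = "Child of #380. Depends on #375; blocks #390."
assert extract_relations(body) == [("child", 380), ("depends on", 375), ("blocks", 390)]
```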
**Step 5: Commit**
```bash
# no commit (discovery task)
```
### Task 2: Initialize per-issue tracking comments (`NOT_STARTED`)
**Files:**
- Create: none
- Modify: none
- Test: issue comment presence check
**Step 1: Prepare standard template text**
```text
Status: NOT_STARTED
Observed At: <KST>, <UTC>
Environment: live
Evidence: N/A
Next: Begin live observation for this issue.
```
**Step 2: Post initialization comment to each open issue**
Run: `tea issues comment <issue-number> --message "<template>"`
Expected: each open issue has one initialization comment.
**Step 3: Verify comment coverage**
Run: sample-check each issue (`tea issue view <issue-number>`) for latest status comment.
Expected: 100% coverage across open issues.
**Step 4: Commit**
```bash
# no commit (ticket operations)
```
### Task 3: Create and maintain `80` PR aggregate status comment
**Files:**
- Modify: `docs/ouroboros/80_implementation_audit.md` (final sync only, not this task)
- Test: PR comment contains full issue table
**Step 1: Identify PR number for `task/80-implementation-audit-closeout`**
Run: `tea pr ls --state open`
Expected: PR number for `task/80...` identified.
**Step 2: Post initial summary comment**
Run: `tea pr comment <pr-number> --message "<issue-state-summary-table>"`
Expected: one aggregate comment listing `issue | state | last observed at | parent/child`.
**Step 3: Define update protocol**
Run: update same summary comment on every issue state change.
Expected: summary remains single-source aggregate view.
**Step 4: Commit**
```bash
# no commit (ticket/PR operations)
```
### Task 4: Execute live observation loop and mark `OBSERVING` -> `CONFIRMED`
**Files:**
- Create: none
- Modify: none
- Test: evidence-backed state transitions present in issue comments
**Step 1: Mark active issue as `OBSERVING`**
Run: `tea issues comment <issue-number> --message "Status: OBSERVING ..."`
Expected: issue history shows observation started.
**Step 2: Capture one real-operation evidence item**
Run: collect log/query/order identifier tied to issue behavior.
Expected: one concrete evidence reference captured.
**Step 3: Mark issue `CONFIRMED`**
Run: `tea issues comment <issue-number> --message "Status: CONFIRMED ... Evidence: <id> ..."`
Expected: issue has explicit confirmed state.
**Step 4: Sync PR aggregate summary**
Run: update `80` PR summary comment row for that issue.
Expected: PR summary and issue status aligned.
**Step 5: Commit**
```bash
# no commit (ticket operations)
```
### Task 5: Close issues with child-first dependency enforcement
**Files:**
- Create: none
- Modify: none
- Test: parent close attempted only after all children closed
**Step 1: Close confirmed child issues first**
Run: `tea issues close <child-issue-number>`
Expected: child issue state becomes closed.
**Step 2: Verify all children for each parent**
Run: evaluate dependency map for remaining non-closed children.
Expected: parent close gate decision available.
**Step 3: Close eligible parent issues**
Run: `tea issues close <parent-issue-number>`
Expected: parent is closed only when all children are closed.
**Step 4: Post closure comment and sync PR summary**
Run: comment `Status: CLOSED` on closed issue and update PR summary row.
Expected: closure trace exists in both issue and PR aggregate.
**Step 5: Commit**
```bash
# no commit (ticket operations)
```
### Task 6: Reflect final tracked outcomes in `80` audit and commit
**Files:**
- Modify: `docs/ouroboros/80_implementation_audit.md`
- Test: markdown consistency + traceability check
**Step 1: Update `80` with final live-confirmed statuses**
Run: edit status lines and evidence summary sections.
Expected: document reflects final issue states and confirmation notes.
**Step 2: Add issue closure summary block**
Run: include closed child/parent sequence summary.
Expected: audit reader can verify closure logic quickly.
**Step 3: Validate references**
Run: ensure issue numbers in doc match actual closed/open states.
Expected: no mismatch between doc and Gitea.
**Step 4: Commit closeout doc update**
```bash
git add docs/ouroboros/80_implementation_audit.md
git commit -m "docs: finalize 80 audit with live-operation issue confirmations"
```
**Step 5: Push branch**
Run: `git push`
Expected: closeout commit available on `task/80-implementation-audit-closeout`.

View File

@@ -0,0 +1,99 @@
# Ouroboros Dual-Track Workflow Design (Revised)
**Date:** 2026-03-02
**Scope:** `docs/ouroboros/86_peak_probability_model_plan.md` and `docs/ouroboros/80_implementation_audit.md`
---
## 1. Goal
Prevent work loss and history contamination while running two parallel tracks, and correct an invalid starting condition (design commits created on a disposable temporary branch):
- Track A: open and iterate the `86` plan as a dedicated PR.
- Track B: finalize `80` only after real-operation behavior is confirmed, then reflect completion and close related Gitea open issues.
---
## 2. Constraints and Current Context
- Current temporary branch is disposable and must not remain the authoritative base.
- Canonical base line for current work is `feature/v3-session-policy-stream`.
- Final integration strategy is to merge the full `feature/v3-session-policy-stream` line into `main` after operation-level validation is complete.
---
## 3. Approaches Considered
1. Keep using temporary branch and continue from existing design commits.
2. Create a clean base from `feature/v3-session-policy-stream` and move only plan artifacts as one commit (recommended).
3. Split into two fresh bases (`86` and `80` separately) immediately.
Recommendation rationale:
- Approach 2 removes temporary-branch contamination risk.
- It preserves clean lineage with a single "official starting point" commit.
- It keeps downstream split for `86` and `80` straightforward.
---
## 4. Approved Branch and PR Topology
- Long-lived source baseline: `feature/v3-session-policy-stream`
- New official base: `base/ouroboros-workflow-20260302` (from `feature/v3-session-policy-stream`)
- Official base contains only two workflow plan docs squashed into one commit.
- Track A branch: `task/86-peak-probability-plan` (from official base)
- Track B branch: `task/80-implementation-audit-closeout` (from official base)
PR direction:
- PR-A (`86`): `task/86-peak-probability-plan -> base/ouroboros-workflow-20260302`
- PR-B (`80`): `task/80-implementation-audit-closeout -> base/ouroboros-workflow-20260302`
- Final integration: `feature/v3-session-policy-stream` line merged into `main`
---
## 5. Completion Criteria per Track
### Track A (`86`)
- Open initial PR early with the planning document.
- Continue iterative concretization in follow-up commits on the same branch.
### Track B (`80`)
- Do not close based on test-command artifacts alone.
- Close only when modified behavior is confirmed in real operation.
- Update `80_implementation_audit.md` with:
- observed behavior changes,
- confirmation timing,
- operation-side evidence summary.
- After `80` real-operation confirmation and doc update, close related Gitea open issues immediately (same phase).
- Parent issues close only when all child issues are confirmed closed.
---
## 6. Loss-Prevention Rules
- Preserve temporary-branch unmerged changes first (stash or archival commit).
- Exclude `workflow/session-handover.md` and `docs/ouroboros/86_peak_probability_model_plan.md` from official base-seeding commit.
- Enforce file-scope separation:
- PR-A should avoid `80` edits.
- PR-B should avoid `86` edits.
- Keep commit messages scoped to one track per commit.
---
## 7. Merge Strategy Decision
- For this workflow, prefer merge-based lineage preservation.
- Because the final promotion unit is the full `feature/v3-session-policy-stream` stream, avoid unnecessary cherry-pick/rebase-surgery unless a contamination incident occurs.
---
## 8. Immediate Next Actions
1. Seed `base/ouroboros-workflow-20260302` from `feature/v3-session-policy-stream` with one squashed workflow-doc commit.
2. Verify seed commit file scope contains only two plan docs.
3. Create and open PR-A for `86`.
4. Complete real-operation confirmation for `80`, update audit doc, then close related Gitea issues (children first, then parent).
5. Merge validated `feature/v3-session-policy-stream` line into `main`.
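The seeding flow in steps 1-2 is plain git; the sketch below replays it in a throwaway repository so the file-scope check in step 2 is visible. File names and contents are placeholders, since the plan does not fix the doc filenames:

```shell
set -e
# Replay the base-seeding flow in a scratch repo; contents/names are placeholders.
tmp=$(mktemp -d)
cd "$tmp"
git init -q .
git checkout -q -b feature/v3-session-policy-stream
git -c user.email=demo@example.com -c user.name=demo commit -q --allow-empty -m "baseline"

# New official base from the long-lived stream branch.
git checkout -q -b base/ouroboros-workflow-20260302
mkdir -p docs/ouroboros
echo "design placeholder" > docs/ouroboros/workflow_design.md
echo "plan placeholder" > docs/ouroboros/workflow_plan.md
git add docs/ouroboros
git -c user.email=demo@example.com -c user.name=demo commit -q -m "docs: seed dual-track workflow docs"

# Step 2: verify the seed commit touches only the two plan docs.
git show --name-only --pretty=format: HEAD | sed '/^$/d'
```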

View File

@@ -128,6 +128,16 @@ tea pr create \
  --description "$PR_BODY"
```
PR 생성 직후 본문 무결성 검증(필수) (verify PR-body integrity immediately after creation; mandatory):
```bash
python3 scripts/validate_pr_body.py --pr <PR_NUMBER>
```
Enforcement rules:
- While validation fails (literal `\n`, unbalanced code fences, missing headers/lists), review and merge are prohibited
- After fixing the body, re-validation must pass with the same command
Prohibited pattern:
- `-d "line1\nline2"` (the `\n` can show up verbatim in the web UI)

View File

@@ -92,6 +92,25 @@ def validate_testing_doc_has_dynamic_count_guidance(errors: list[str]) -> None:
    )
def validate_pr_body_postcheck_guidance(errors: list[str]) -> None:
    required_tokens = {
        "commands": (
            "PR Body Post-Check (Mandatory)",
            "python3 scripts/validate_pr_body.py --pr <PR_NUMBER>",
        ),
        "workflow": (
            "PR 생성 직후 본문 무결성 검증(필수)",
            "python3 scripts/validate_pr_body.py --pr <PR_NUMBER>",
        ),
    }
    for key, tokens in required_tokens.items():
        path = REQUIRED_FILES[key]
        text = _read(path)
        for token in tokens:
            if token not in text:
                errors.append(f"{path}: missing PR body post-check guidance token -> {token}")
def main() -> int:
    errors: list[str] = []
@@ -117,6 +136,7 @@ def main() -> int:
    validate_summary_docs_reference_core_docs(errors)
    validate_commands_endpoint_duplicates(errors)
    validate_testing_doc_has_dynamic_count_guidance(errors)
    validate_pr_body_postcheck_guidance(errors)

    if errors:
        print("[FAIL] docs sync validation failed")
@@ -128,6 +148,7 @@ def main() -> int:
    print("[OK] summary docs link to core docs and links resolve")
    print("[OK] commands endpoint rows have no duplicates")
    print("[OK] testing doc includes dynamic count guidance")
    print("[OK] PR body post-check guidance exists in commands/workflow docs")
    return 0

View File

@@ -33,6 +33,9 @@ ALLOWED_PLAN_TARGETS = {
    "2": (DOC_DIR / "source" / "ouroboros_plan_v2.txt").resolve(),
    "3": (DOC_DIR / "source" / "ouroboros_plan_v3.txt").resolve(),
}
ISSUE_REF_PATTERN = re.compile(r"#(?P<issue>\d+)")
ISSUE_DONE_PATTERN = re.compile(r"(?:✅|머지|해소|완료)")
ISSUE_PENDING_PATTERN = re.compile(r"(?:잔여|오픈 상태|추적 이슈)")
def iter_docs() -> list[Path]:
@@ -119,6 +122,38 @@ def collect_req_traceability(
        req_to_test.setdefault(req_id, set()).add(item_id)
def validate_issue_status_consistency(path: Path, text: str, errors: list[str]) -> None:
    issue_done_lines: dict[str, list[int]] = {}
    issue_pending_lines: dict[str, list[int]] = {}
    for line_no, raw_line in enumerate(text.splitlines(), start=1):
        line = raw_line.strip()
        if not line:
            continue
        issue_ids = [m.group("issue") for m in ISSUE_REF_PATTERN.finditer(line)]
        if not issue_ids:
            continue
        is_pending = bool(ISSUE_PENDING_PATTERN.search(line))
        is_done = bool(ISSUE_DONE_PATTERN.search(line)) and not is_pending
        if not is_pending and not is_done:
            continue
        for issue_id in issue_ids:
            if is_done:
                issue_done_lines.setdefault(issue_id, []).append(line_no)
            if is_pending:
                issue_pending_lines.setdefault(issue_id, []).append(line_no)
    conflicted_issues = sorted(set(issue_done_lines) & set(issue_pending_lines))
    for issue_id in conflicted_issues:
        errors.append(
            f"{path}: conflicting status for issue #{issue_id} "
            f"(done at lines {issue_done_lines[issue_id]}, "
            f"pending at lines {issue_pending_lines[issue_id]})"
        )
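The consistency rule can be exercised in isolation; the sketch below re-implements the same three patterns (issue refs plus the Korean done/pending markers) outside the validator, as a standalone check:

```python
import re

# Same patterns as the validator above (done = ✅/머지/해소/완료, pending = 잔여/오픈 상태/추적 이슈).
ISSUE_REF = re.compile(r"#(?P<issue>\d+)")
DONE = re.compile(r"(?:✅|머지|해소|완료)")
PENDING = re.compile(r"(?:잔여|오픈 상태|추적 이슈)")

def conflicting_issues(text: str) -> set[str]:
    # Collect issue ids seen as done and as pending; conflicts are the overlap.
    done: set[str] = set()
    pending: set[str] = set()
    for line in text.splitlines():
        ids = [m.group("issue") for m in ISSUE_REF.finditer(line)]
        if not ids:
            continue
        is_pending = bool(PENDING.search(line))
        is_done = bool(DONE.search(line)) and not is_pending
        for issue_id in ids:
            if is_done:
                done.add(issue_id)
            if is_pending:
                pending.add(issue_id)
    return done & pending

sample = "`#328` 잔여\n✅ 완료 `#328` 머지\n✅ 완료 `#371`"
print(conflicting_issues(sample))  # {'328'}
```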
def main() -> int:
    if not DOC_DIR.exists():
        print(f"ERROR: missing directory {DOC_DIR}")
@@ -140,6 +175,8 @@ def main() -> int:
        text = path.read_text(encoding="utf-8")
        validate_metadata(path, text, errors, doc_ids)
        validate_links(path, text, errors)
        if path.name == "80_implementation_audit.md":
            validate_issue_status_consistency(path, text, errors)
        collect_ids(path, text, defs, refs)
        collect_req_traceability(text, req_to_task, req_to_test)

View File

@@ -0,0 +1,94 @@
#!/usr/bin/env python3
"""Validate PR body formatting to prevent escaped-newline artifacts."""
from __future__ import annotations
import argparse
import json
import re
import subprocess
import sys
from pathlib import Path
HEADER_PATTERN = re.compile(r"^##\s+\S+", re.MULTILINE)
LIST_ITEM_PATTERN = re.compile(r"^\s*(?:-|\*|\d+\.)\s+\S+", re.MULTILINE)
def validate_pr_body_text(text: str) -> list[str]:
    errors: list[str] = []
    if "\\n" in text and "\n" not in text:
        errors.append("body contains escaped newline sequence (\\n)")
    if text.count("```") % 2 != 0:
        errors.append("body has unbalanced fenced code blocks (``` count is odd)")
    if not HEADER_PATTERN.search(text):
        errors.append("body is missing markdown section headers (e.g. '## Summary')")
    if not LIST_ITEM_PATTERN.search(text):
        errors.append("body is missing markdown list items")
    return errors
def fetch_pr_body(pr_number: int) -> str:
    try:
        completed = subprocess.run(
            [
                "tea",
                "api",
                "-R",
                "origin",
                f"repos/{{owner}}/{{repo}}/pulls/{pr_number}",
            ],
            check=True,
            capture_output=True,
            text=True,
        )
    except (subprocess.CalledProcessError, FileNotFoundError) as exc:
        raise RuntimeError(f"failed to fetch PR #{pr_number}: {exc}") from exc
    try:
        payload = json.loads(completed.stdout)
    except json.JSONDecodeError as exc:
        raise RuntimeError(f"failed to parse PR payload for #{pr_number}: {exc}") from exc
    body = payload.get("body", "")
    if not isinstance(body, str):
        raise RuntimeError(f"unexpected PR body type for #{pr_number}: {type(body).__name__}")
    return body
def parse_args() -> argparse.Namespace:
    parser = argparse.ArgumentParser(
        description="Validate PR body markdown formatting and escaped-newline artifacts."
    )
    group = parser.add_mutually_exclusive_group(required=True)
    group.add_argument("--pr", type=int, help="PR number to fetch via `tea api`")
    group.add_argument("--body-file", type=Path, help="Path to markdown body file")
    return parser.parse_args()


def main() -> int:
    args = parse_args()
    if args.body_file is not None:
        if not args.body_file.exists():
            print(f"[FAIL] body file not found: {args.body_file}")
            return 1
        body = args.body_file.read_text(encoding="utf-8")
        source = f"file:{args.body_file}"
    else:
        body = fetch_pr_body(args.pr)
        source = f"pr:{args.pr}"
    errors = validate_pr_body_text(body)
    if errors:
        print("[FAIL] PR body validation failed")
        print(f"- source: {source}")
        for err in errors:
            print(f"- {err}")
        return 1
    print("[OK] PR body validation passed")
    print(f"- source: {source}")
    return 0
if __name__ == "__main__":
    sys.exit(main())
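As a self-contained illustration of the checks `validate_pr_body_text` performs, the sketch below applies equivalent regexes to a body carrying the escaped-newline artifact; it mirrors the logic rather than importing the script:

```python
import re

HEADER = re.compile(r"^##\s+\S+", re.MULTILINE)
LIST_ITEM = re.compile(r"^\s*(?:-|\*|\d+\.)\s+\S+", re.MULTILINE)

def body_errors(text: str) -> list[str]:
    # Mirror of the script's checks: escaped newlines, fence balance, structure.
    errors = []
    if "\\n" in text and "\n" not in text:
        errors.append("escaped newline")
    if text.count("```") % 2 != 0:
        errors.append("unbalanced fence")
    if not HEADER.search(text):
        errors.append("no header")
    if not LIST_ITEM.search(text):
        errors.append("no list item")
    return errors

bad = "## Summary\\n- item"   # one line with literal \n artifacts, so no real list line
good = "## Summary\n- item\n"
print(body_errors(bad))   # ['escaped newline', 'no list item']
print(body_errors(good))  # []
```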

View File

@@ -94,14 +94,6 @@ def run_v2_backtest_pipeline(
        else sorted({bar.session_id for bar in bars})
    )
    validate_backtest_cost_model(model=cost_model, required_sessions=resolved_sessions)
    execution_model = BacktestExecutionModel(
        ExecutionAssumptions(
            slippage_bps_by_session=cost_model.slippage_bps_by_session or {},
            failure_rate_by_session=cost_model.failure_rate_by_session or {},
            partial_fill_rate_by_session=cost_model.partial_fill_rate_by_session or {},
            seed=0,
        )
    )
    highs = [float(bar.high) for bar in bars]
    lows = [float(bar.low) for bar in bars]
@@ -156,7 +148,7 @@ def run_v2_backtest_pipeline(
        execution_model = _build_execution_model(cost_model=cost_model, fold_seed=fold_idx)
        execution_return_model = _build_execution_model(
            cost_model=cost_model,
            fold_seed=fold_idx + 1000,
        )
        b0_pred = _baseline_b0_pred(train_labels)
        m1_pred = _m1_pred(train_labels)

View File

@@ -3,13 +3,14 @@
Order is fixed:
1) block new orders
2) cancel pending orders
3) refresh order state (retry up to 3 attempts with exponential backoff)
4) reduce risk
5) snapshot and notify
"""
from __future__ import annotations

import asyncio
import inspect
from collections.abc import Awaitable, Callable
from dataclasses import dataclass, field
@@ -34,16 +35,55 @@ class KillSwitchOrchestrator:
        report: KillSwitchReport,
        name: str,
        fn: StepCallable | None,
    ) -> bool:
        report.steps.append(name)
        if fn is None:
            return True
        try:
            result = fn()
            if inspect.isawaitable(result):
                await result
            if result is False:
                raise RuntimeError("step returned False")
            return True
        except Exception as exc:  # pragma: no cover - intentionally resilient
            report.errors.append(f"{name}: {exc}")
            return False
    async def _run_refresh_with_retry(
        self,
        report: KillSwitchReport,
        fn: StepCallable | None,
        *,
        max_attempts: int,
        base_delay_sec: float,
    ) -> None:
        report.steps.append("refresh_order_state")
        if fn is None:
            return
        attempts = max(1, max_attempts)
        delay = max(0.0, base_delay_sec)
        last_exc: Exception | None = None
        for attempt in range(1, attempts + 1):
            try:
                result = fn()
                if inspect.isawaitable(result):
                    await result
                if result is False:
                    raise RuntimeError("step returned False")
                return
            except Exception as exc:
                last_exc = exc
                if attempt >= attempts:
                    break
                if delay > 0:
                    await asyncio.sleep(delay * (2 ** (attempt - 1)))
        if last_exc is not None:
            report.errors.append(
                "refresh_order_state: failed after "
                f"{attempts} attempts ({last_exc})"
            )
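The sleep schedule implied by `delay * (2 ** (attempt - 1))` is easy to tabulate; a small sketch, with a function name introduced here only for illustration:

```python
def refresh_backoff_schedule(max_attempts: int, base_delay_sec: float) -> list[float]:
    # One sleep between consecutive attempts, so attempts - 1 delays in total.
    attempts = max(1, max_attempts)
    delay = max(0.0, base_delay_sec)
    return [delay * (2 ** (attempt - 1)) for attempt in range(1, attempts)]

print(refresh_backoff_schedule(3, 1.0))  # [1.0, 2.0]
```

With the defaults from `trigger` (3 attempts, 1.0s base), the orchestrator sleeps 1s after the first failure and 2s after the second.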
    async def trigger(
        self,
@@ -54,6 +94,8 @@ class KillSwitchOrchestrator:
        reduce_risk: StepCallable | None = None,
        snapshot_state: StepCallable | None = None,
        notify: StepCallable | None = None,
        refresh_retry_attempts: int = 3,
        refresh_retry_base_delay_sec: float = 1.0,
    ) -> KillSwitchReport:
        report = KillSwitchReport(reason=reason)
@@ -61,7 +103,12 @@ class KillSwitchOrchestrator:
        report.steps.append("block_new_orders")
        await self._run_step(report, "cancel_pending_orders", cancel_pending_orders)
        await self._run_refresh_with_retry(
            report,
            refresh_order_state,
            max_attempts=refresh_retry_attempts,
            base_delay_sec=refresh_retry_base_delay_sec,
        )
        await self._run_step(report, "reduce_risk", reduce_risk)
        await self._run_step(report, "snapshot_state", snapshot_state)
        await self._run_step(report, "notify", notify)

View File

@@ -1375,7 +1375,10 @@ async def _cancel_pending_orders_for_kill_switch(
    )
    if failures:
        summary = "; ".join(failures[:3])
        if len(failures) > 3:
            summary = f"{summary} (+{len(failures) - 3} more)"
        raise RuntimeError(summary)
async def _refresh_order_state_for_kill_switch(
@@ -1384,6 +1387,7 @@ async def _refresh_order_state_for_kill_switch(
    overseas_broker: OverseasBroker,
    markets: list[MarketInfo],
) -> None:
    failures: list[str] = []
    seen_overseas: set[str] = set()
    for market in markets:
        try:
@@ -1399,6 +1403,12 @@ async def _refresh_order_state_for_kill_switch(
                market.exchange_code,
                exc,
            )
            failures.append(f"{market.code}/{market.exchange_code}: {exc}")
    if failures:
        summary = "; ".join(failures[:3])
        if len(failures) > 3:
            summary = f"{summary} (+{len(failures) - 3} more)"
        raise RuntimeError(summary)
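Both kill-switch helpers now share the same truncation pattern; factored out as a sketch (`summarize_failures` is a name introduced here, not in the codebase):

```python
def summarize_failures(failures: list[str], limit: int = 3) -> str:
    # Join at most `limit` entries; note how many were dropped.
    summary = "; ".join(failures[:limit])
    if len(failures) > limit:
        summary = f"{summary} (+{len(failures) - limit} more)"
    return summary

print(summarize_failures(["a: down", "b: down", "c: down", "d: down"]))
# a: down; b: down; c: down (+1 more)
```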
def _reduce_risk_for_kill_switch() -> None:

View File

@@ -85,8 +85,8 @@ def evaluate_exit(
        reason = "atr_trailing_stop"
    elif be_lock_threat:
        reason = "be_lock_threat"
    elif model_exit_signal and next_state == PositionState.BE_LOCK:
        reason = "model_assist_be_lock"
    elif take_profit_hit:
        # Backward-compatible immediate profit-taking path.
        reason = "arm_take_profit"

View File

@@ -40,7 +40,8 @@ def evaluate_exit_first(inp: StateTransitionInput) -> bool:
    EXITED must be evaluated before any promotion.
    """
    # model_exit_signal is assist-only and must not trigger EXIT directly.
    return inp.hard_stop_hit or inp.trailing_stop_hit or inp.be_lock_threat
def promote_state(current: PositionState, inp: StateTransitionInput) -> PositionState:
@@ -61,5 +62,8 @@ def promote_state(current: PositionState, inp: StateTransitionInput) -> Position
        target = PositionState.ARMED
    elif inp.unrealized_pnl_pct >= inp.be_arm_pct:
        target = PositionState.BE_LOCK
    elif inp.model_exit_signal:
        # Model signal assists risk posture by tightening to BE_LOCK.
        target = PositionState.BE_LOCK
    return target if _STATE_RANK[target] > _STATE_RANK[current] else current
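The final rank guard means a model signal can only tighten the posture, never demote it; a minimal sketch with an illustrative rank table (names mirror the module, values are assumptions):

```python
from enum import Enum

class PositionState(Enum):
    HOLDING = "holding"
    BE_LOCK = "be_lock"
    ARMED = "armed"
    EXITED = "exited"

# Illustrative ordering: promotion may only move to a higher rank.
_STATE_RANK = {
    PositionState.HOLDING: 0,
    PositionState.BE_LOCK: 1,
    PositionState.ARMED: 2,
    PositionState.EXITED: 3,
}

def promote(current: PositionState, target: PositionState) -> PositionState:
    # Same guard as promote_state: keep the current state unless target outranks it.
    return target if _STATE_RANK[target] > _STATE_RANK[current] else current

print(promote(PositionState.HOLDING, PositionState.BE_LOCK).name)  # BE_LOCK
print(promote(PositionState.ARMED, PositionState.BE_LOCK).name)    # ARMED
```

This matches the new tests below: a model signal promotes HOLDING to BE_LOCK, but leaves an already-ARMED position untouched.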

View File

@@ -53,3 +53,52 @@ async def test_kill_switch_collects_step_errors() -> None:
    report = await ks.trigger(reason="test", cancel_pending_orders=_boom)
    assert any(err.startswith("cancel_pending_orders:") for err in report.errors)
@pytest.mark.asyncio
async def test_kill_switch_refresh_retries_then_succeeds() -> None:
    ks = KillSwitchOrchestrator()
    refresh_calls = {"count": 0}

    def _flaky_refresh() -> None:
        refresh_calls["count"] += 1
        if refresh_calls["count"] < 3:
            raise RuntimeError("temporary refresh failure")

    report = await ks.trigger(
        reason="test",
        refresh_order_state=_flaky_refresh,
        refresh_retry_attempts=3,
        refresh_retry_base_delay_sec=0.0,
    )
    assert refresh_calls["count"] == 3
    assert report.errors == []


@pytest.mark.asyncio
async def test_kill_switch_refresh_retry_exhausted_records_error_and_continues() -> None:
    ks = KillSwitchOrchestrator()
    calls: list[str] = []

    def _refresh_fail() -> None:
        raise RuntimeError("persistent refresh failure")

    def _reduce() -> None:
        calls.append("reduce")

    def _snapshot() -> None:
        calls.append("snapshot")

    report = await ks.trigger(
        reason="test",
        refresh_order_state=_refresh_fail,
        reduce_risk=_reduce,
        snapshot_state=_snapshot,
        refresh_retry_attempts=2,
        refresh_retry_base_delay_sec=0.0,
    )
    assert any(
        err.startswith("refresh_order_state: failed after 2 attempts")
        for err in report.errors
    )
    assert calls == ["reduce", "snapshot"]

View File

@@ -7154,3 +7154,27 @@ async def test_trigger_emergency_kill_switch_records_cancel_failure() -> None:
    )
    assert any(err.startswith("cancel_pending_orders:") for err in report.errors)
@pytest.mark.asyncio
async def test_refresh_order_state_failure_summary_includes_more_count() -> None:
    broker = MagicMock()
    broker.get_balance = AsyncMock(side_effect=RuntimeError("domestic down"))
    overseas_broker = MagicMock()
    overseas_broker.get_overseas_balance = AsyncMock(side_effect=RuntimeError("overseas down"))
    markets = []
    for code, exchange in [("KR", "KRX"), ("US_PRE", "NASD"), ("US_DAY", "NYSE"), ("JP", "TKSE")]:
        market = MagicMock()
        market.code = code
        market.exchange_code = exchange
        market.is_domestic = code == "KR"
        markets.append(market)
    with pytest.raises(RuntimeError, match=r"\(\+1 more\)$") as exc_info:
        await main_module._refresh_order_state_for_kill_switch(
            broker=broker,
            overseas_broker=overseas_broker,
            markets=markets,
        )
    assert "KR/KRX" in str(exc_info.value)

View File

@@ -22,17 +22,18 @@ def test_take_profit_exit_for_backward_compatibility() -> None:
    assert out.reason == "arm_take_profit"
def test_model_assist_signal_promotes_be_lock_without_direct_exit() -> None:
    out = evaluate_exit(
        current_state=PositionState.HOLDING,
        config=ExitRuleConfig(model_prob_threshold=0.62, be_arm_pct=1.2, arm_pct=10.0),
        inp=ExitRuleInput(
            current_price=100.5,
            entry_price=100.0,
            peak_price=105.0,
            pred_down_prob=0.8,
            liquidity_weak=True,
        ),
    )
    assert out.should_exit is False
    assert out.state == PositionState.BE_LOCK
    assert out.reason == "model_assist_be_lock"

View File

@@ -28,3 +28,29 @@ def test_exited_has_priority_over_promotion() -> None:
        ),
    )
    assert state == PositionState.EXITED
def test_model_signal_promotes_be_lock_as_assist() -> None:
    state = promote_state(
        PositionState.HOLDING,
        StateTransitionInput(
            unrealized_pnl_pct=0.5,
            be_arm_pct=1.2,
            arm_pct=2.8,
            model_exit_signal=True,
        ),
    )
    assert state == PositionState.BE_LOCK


def test_model_signal_does_not_force_exit_directly() -> None:
    state = promote_state(
        PositionState.ARMED,
        StateTransitionInput(
            unrealized_pnl_pct=1.0,
            be_arm_pct=1.2,
            arm_pct=2.8,
            model_exit_signal=True,
        ),
    )
    assert state == PositionState.ARMED

View File

@@ -121,3 +121,44 @@ def test_validate_testing_doc_has_dynamic_count_guidance(monkeypatch) -> None:
    monkeypatch.setattr(module, "_read", fake_read)
    module.validate_testing_doc_has_dynamic_count_guidance(errors)
    assert errors == []
def test_validate_pr_body_postcheck_guidance_passes(monkeypatch) -> None:
    module = _load_module()
    errors: list[str] = []
    fake_docs = {
        str(module.REQUIRED_FILES["commands"]): (
            "PR Body Post-Check (Mandatory)\n"
            "python3 scripts/validate_pr_body.py --pr <PR_NUMBER>\n"
        ),
        str(module.REQUIRED_FILES["workflow"]): (
            "PR 생성 직후 본문 무결성 검증(필수)\n"
            "python3 scripts/validate_pr_body.py --pr <PR_NUMBER>\n"
        ),
    }

    def fake_read(path: Path) -> str:
        return fake_docs[str(path)]

    monkeypatch.setattr(module, "_read", fake_read)
    module.validate_pr_body_postcheck_guidance(errors)
    assert errors == []


def test_validate_pr_body_postcheck_guidance_reports_missing_tokens(
    monkeypatch,
) -> None:
    module = _load_module()
    errors: list[str] = []
    fake_docs = {
        str(module.REQUIRED_FILES["commands"]): "PR Body Post-Check (Mandatory)\n",
        str(module.REQUIRED_FILES["workflow"]): "PR Body Post-Check\n",
    }

    def fake_read(path: Path) -> str:
        return fake_docs[str(path)]

    monkeypatch.setattr(module, "_read", fake_read)
    module.validate_pr_body_postcheck_guidance(errors)
    assert any("commands.md" in err for err in errors)
    assert any("workflow.md" in err for err in errors)

View File

@@ -79,3 +79,42 @@ def test_validate_links_avoids_duplicate_error_for_invalid_plan_link(tmp_path) -
    assert len(errors) == 1
    assert "invalid plan link path" in errors[0]
def test_validate_issue_status_consistency_reports_conflicts() -> None:
    module = _load_module()
    errors: list[str] = []
    path = Path("docs/ouroboros/80_implementation_audit.md").resolve()
    text = "\n".join(
        [
            "| REQ-V3-004 | 상태 | 부분 | `#328` 잔여 |",
            "| 항목 | 상태 | ✅ 완료 | `#328` 머지 |",
        ]
    )
    module.validate_issue_status_consistency(path, text, errors)
    assert len(errors) == 1
    assert "conflicting status for issue #328" in errors[0]


def test_validate_issue_status_consistency_allows_done_only() -> None:
    module = _load_module()
    errors: list[str] = []
    path = Path("docs/ouroboros/80_implementation_audit.md").resolve()
    text = "| 항목 | 상태 | ✅ 완료 | `#371` 머지 |"
    module.validate_issue_status_consistency(path, text, errors)
    assert errors == []


def test_validate_issue_status_consistency_allows_pending_only() -> None:
    module = _load_module()
    errors: list[str] = []
    path = Path("docs/ouroboros/80_implementation_audit.md").resolve()
    text = "| 항목 | 상태 | 부분 | `#390` 추적 이슈 |"
    module.validate_issue_status_consistency(path, text, errors)
    assert errors == []

View File

@@ -0,0 +1,84 @@
from __future__ import annotations
import importlib.util
import json
from pathlib import Path
from types import SimpleNamespace
import pytest
def _load_module():
    script_path = Path(__file__).resolve().parents[1] / "scripts" / "validate_pr_body.py"
    spec = importlib.util.spec_from_file_location("validate_pr_body", script_path)
    assert spec is not None
    assert spec.loader is not None
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)
    return module
def test_validate_pr_body_text_detects_escaped_newline() -> None:
    module = _load_module()
    errors = module.validate_pr_body_text("## Summary\\n- item")
    assert any("escaped newline" in err for err in errors)


def test_validate_pr_body_text_allows_literal_sequence_when_multiline() -> None:
    module = _load_module()
    text = "## Summary\n- escaped sequence example: \\\\n"
    assert module.validate_pr_body_text(text) == []


def test_validate_pr_body_text_detects_unbalanced_code_fence() -> None:
    module = _load_module()
    errors = module.validate_pr_body_text("## Summary\n- item\n```bash\necho hi\n")
    assert any("unbalanced fenced code blocks" in err for err in errors)


def test_validate_pr_body_text_detects_missing_structure() -> None:
    module = _load_module()
    errors = module.validate_pr_body_text("plain text only")
    assert any("missing markdown section headers" in err for err in errors)
    assert any("missing markdown list items" in err for err in errors)


def test_validate_pr_body_text_passes_with_valid_markdown() -> None:
    module = _load_module()
    text = "\n".join(
        [
            "## Summary",
            "- item",
            "",
            "## Validation",
            "```bash",
            "pytest -q",
            "```",
        ]
    )
    assert module.validate_pr_body_text(text) == []
def test_fetch_pr_body_reads_body_from_tea_api(monkeypatch) -> None:
    module = _load_module()

    def fake_run(cmd, check, capture_output, text):  # noqa: ANN001
        assert "tea" in cmd[0]
        assert check is True
        assert capture_output is True
        assert text is True
        return SimpleNamespace(stdout=json.dumps({"body": "## Summary\n- item"}))

    monkeypatch.setattr(module.subprocess, "run", fake_run)
    assert module.fetch_pr_body(391) == "## Summary\n- item"


def test_fetch_pr_body_rejects_non_string_body(monkeypatch) -> None:
    module = _load_module()

    def fake_run(cmd, check, capture_output, text):  # noqa: ANN001
        return SimpleNamespace(stdout=json.dumps({"body": 123}))

    monkeypatch.setattr(module.subprocess, "run", fake_run)
    with pytest.raises(RuntimeError):
        module.fetch_pr_body(391)

View File

@@ -105,3 +105,35 @@
- next_ticket: #368
- process_gate_checked: process_ticket=#306,#308 merged_to_feature_branch=yes
- risks_or_notes: To close the TASK-V2-012 implementation gap, land the cost guard, the execution-adjusted fold metric, and regression tests together.
### 2026-03-02 | session=codex-v3-stream-next-ticket-369
- branch: feature/v3-session-policy-stream
- docs_checked: docs/workflow.md, docs/commands.md, docs/agent-constraints.md
- open_issues_reviewed: #369, #370, #371, #374, #375, #376, #377, #381
- next_ticket: #369
- process_gate_checked: process_ticket=#306,#308 merged_to_feature_branch=yes
- risks_or_notes: Implementation tickets proceed on the default principle of updating code, tests, and docs (requirements ledger / implementation audit / PR traceability) together.
### 2026-03-02 | session=codex-issue369-start
- branch: feature/issue-369-model-exit-signal-spec-sync
- docs_checked: docs/workflow.md, docs/commands.md, docs/agent-constraints.md
- open_issues_reviewed: #369
- next_ticket: #369
- process_gate_checked: process_ticket=#306,#308 merged_to_feature_branch=yes
- risks_or_notes: Per the v2 spec, realign model_exit_signal as an assist trigger rather than a direct exit trigger, and sync tests/docs accordingly.
### 2026-03-02 | session=codex-v3-stream-next-ticket-377
- branch: feature/v3-session-policy-stream
- docs_checked: docs/workflow.md, docs/commands.md, docs/agent-constraints.md
- open_issues_reviewed: #377, #370, #371, #375, #376, #381
- next_ticket: #377
- process_gate_checked: process_ticket=#306,#308 merged_to_feature_branch=yes
- risks_or_notes: Reflect the kill switch refresh retry policy (attempt count / interval / stop condition) in code, tests, the requirements ledger, and the audit doc at the same time.
### 2026-03-02 | session=codex-issue377-start
- branch: feature/issue-377-kill-switch-refresh-retry
- docs_checked: docs/workflow.md, docs/commands.md, docs/agent-constraints.md
- open_issues_reviewed: #377
- next_ticket: #377
- process_gate_checked: process_ticket=#306,#308 merged_to_feature_branch=yes
- risks_or_notes: Retry the refresh step up to 3 attempts (initial + 2 retries) with exponential backoff on failure; stop immediately on success, and on exhaustion record the error and continue with the remaining steps.