42 Commits

Author SHA1 Message Date
1850334bb7 Merge pull request 'docs: add live-operation issue tracking design and plan (#80)' (#395) from task/80-implementation-audit-closeout into base/ouroboros-workflow-20260302
Reviewed-on: #395
2026-03-04 00:45:06 +09:00
8a4cf31c4d Merge pull request 'docs: seed peak probability model plan (#86)' (#394) from task/86-peak-probability-plan into base/ouroboros-workflow-20260302
Reviewed-on: #394
2026-03-04 00:44:02 +09:00
agentson
c4eaf780a0 docs: sync requirements registry for #394 policy-doc update
All checks were successful
Gitea CI / test (pull_request) Successful in 32s
2026-03-04 00:38:12 +09:00
agentson
728b3667ab chore: retrigger CI after PR governance body update
All checks were successful
Gitea CI / test (pull_request) Successful in 32s
2026-03-04 00:36:56 +09:00
agentson
337c84c592 chore: retrigger CI after PR governance body update
Some checks failed
Gitea CI / test (pull_request) Failing after 5s
2026-03-04 00:36:41 +09:00
agentson
65ec0a2684 docs: add live-operation issue tracking implementation plan
Some checks failed
Gitea CI / test (pull_request) Failing after 5s
2026-03-02 23:07:10 +09:00
agentson
fdaa1aa33c docs: add live-operation issue tracking design 2026-03-02 23:06:40 +09:00
agentson
16515080dc docs: seed peak probability model plan (#86)
Some checks failed
Gitea CI / test (pull_request) Failing after 5s
2026-03-02 22:59:31 +09:00
agentson
acbf89bc90 docs: seed revised dual-track workflow plan 2026-03-02 22:59:17 +09:00
agentson
b708e8b4ed process: add mandatory PR body post-check step (#392) 2026-03-02 18:19:42 +09:00
7cd818f1e2 Merge pull request 'process: enforce issue-status consistency for completion marks (#390)' (#391) from feature/issue-390-validate-completion-consistency into feature/v3-session-policy-stream
All checks were successful
Gitea CI / test (push) Successful in 32s
Reviewed-on: #391
2026-03-02 10:38:21 +09:00
agentson
7c17535c3d test: narrow pending keyword and add pending-only guard (#390)
All checks were successful
Gitea CI / test (pull_request) Successful in 33s
Gitea CI / test (push) Successful in 32s
2026-03-02 10:33:58 +09:00
agentson
453d67b91c docs: sync requirements registry for governance gate (#390)
All checks were successful
Gitea CI / test (push) Successful in 32s
Gitea CI / test (pull_request) Successful in 32s
2026-03-02 10:03:38 +09:00
agentson
ade5971387 process: enforce issue-status consistency in audit doc validation (#390)
Some checks failed
Gitea CI / test (push) Successful in 32s
Gitea CI / test (pull_request) Failing after 5s
2026-03-02 09:59:40 +09:00
87683a88b4 Merge pull request 'risk: define and implement kill-switch refresh retry policy (#377)' (#389) from feature/issue-377-kill-switch-refresh-retry into feature/v3-session-policy-stream
All checks were successful
Gitea CI / test (push) Successful in 32s
Reviewed-on: #389
2026-03-02 09:47:56 +09:00
agentson
b34937ea9d risk: polish retry coverage and refresh failure summary
All checks were successful
Gitea CI / test (push) Successful in 32s
Gitea CI / test (pull_request) Successful in 31s
2026-03-02 09:44:24 +09:00
agentson
ba2370e40e risk: add kill-switch refresh retry policy and tests (#377)
All checks were successful
Gitea CI / test (push) Successful in 33s
Gitea CI / test (pull_request) Successful in 34s
2026-03-02 09:38:39 +09:00
1c41379815 Merge pull request 'strategy: align model exit signal policy with v2 spec (#369)' (#388) from feature/issue-369-model-exit-signal-spec-sync into feature/v3-session-policy-stream
All checks were successful
Gitea CI / test (push) Successful in 32s
Reviewed-on: #388
2026-03-02 09:35:23 +09:00
agentson
5e4c94bfeb strategy: implement model assist be-lock path and clarify audit note
All checks were successful
Gitea CI / test (push) Successful in 32s
Gitea CI / test (pull_request) Successful in 33s
2026-03-02 09:31:38 +09:00
agentson
2332ba868f strategy: align model exit signal as assist-only trigger (#369)
All checks were successful
Gitea CI / test (push) Successful in 32s
Gitea CI / test (pull_request) Successful in 32s
2026-03-02 09:25:03 +09:00
f6e4cc7ea9 Merge pull request 'analysis: reflect cost/execution in v2 backtest pipeline (#368)' (#387) from feature/issue-368-backtest-cost-execution into feature/v3-session-policy-stream
All checks were successful
Gitea CI / test (push) Successful in 33s
Reviewed-on: #387
2026-03-02 09:21:06 +09:00
agentson
2776a074b5 analysis: remove dead init and split execution seeds in fold
All checks were successful
Gitea CI / test (push) Successful in 33s
Gitea CI / test (pull_request) Successful in 33s
2026-03-02 09:17:22 +09:00
agentson
0fb56a4a1a ci: retrigger after PR traceability update
All checks were successful
Gitea CI / test (push) Successful in 32s
Gitea CI / test (pull_request) Successful in 34s
2026-03-02 09:08:00 +09:00
agentson
7e9738d5df docs: bump requirements registry version for policy change sync
Some checks failed
Gitea CI / test (push) Successful in 33s
Gitea CI / test (pull_request) Failing after 5s
2026-03-02 04:01:26 +09:00
agentson
96d2c97fe7 analysis: apply execution-adjusted cost model in v2 backtest pipeline (#368)
Some checks failed
Gitea CI / test (push) Failing after 5s
Gitea CI / test (pull_request) Failing after 5s
2026-03-02 03:39:57 +09:00
4710aa2d66 Merge pull request 'test: add session-boundary risk reload e2e regressions (#376)' (#386) from feature/issue-376-session-boundary-e2e into feature/v3-session-policy-stream
All checks were successful
Gitea CI / test (push) Successful in 32s
Reviewed-on: #386
2026-03-02 03:33:19 +09:00
agentson
ca9e1ad0e2 test: harden session-risk global reset isolation
All checks were successful
Gitea CI / test (push) Successful in 33s
Gitea CI / test (pull_request) Successful in 33s
2026-03-02 03:30:46 +09:00
agentson
928e60877c test: add session-boundary risk reload e2e regressions (#376)
All checks were successful
Gitea CI / test (push) Successful in 32s
Gitea CI / test (pull_request) Successful in 33s
2026-03-02 03:23:58 +09:00
16ddc22d14 Merge pull request 'blackout: persist session_id across queued intent lifecycle (#375)' (#385) from feature/issue-375-queued-intent-session-id into feature/v3-session-policy-stream
All checks were successful
Gitea CI / test (push) Successful in 33s
Reviewed-on: #385
2026-03-02 03:20:18 +09:00
agentson
4f21117eca blackout: simplify recovery session_id binding to queued value
All checks were successful
Gitea CI / test (push) Successful in 33s
Gitea CI / test (pull_request) Successful in 35s
2026-03-02 03:17:28 +09:00
agentson
8e02b1ea4f blackout: persist session_id across queued intent lifecycle (#375)
All checks were successful
Gitea CI / test (push) Successful in 32s
Gitea CI / test (pull_request) Successful in 33s
2026-03-02 03:09:33 +09:00
ccceb38483 Merge pull request 'blackout: enforce bounded oldest-drop queue policy on overflow (#371)' (#384) from feature/issue-371-blackout-queue-overflow into feature/v3-session-policy-stream
All checks were successful
Gitea CI / test (push) Successful in 32s
Reviewed-on: #384
2026-03-02 03:07:12 +09:00
agentson
96e5de7c5d test: align blackout queue mocks with overflow counter contract
All checks were successful
Gitea CI / test (push) Successful in 32s
Gitea CI / test (pull_request) Successful in 32s
2026-03-02 03:03:35 +09:00
agentson
7959b749c7 blackout: enforce bounded oldest-drop queue policy on overflow (#371)
All checks were successful
Gitea CI / test (push) Successful in 33s
Gitea CI / test (pull_request) Successful in 33s
2026-03-02 02:57:08 +09:00
f7e242d147 Merge pull request 'trade: apply runtime strategy/fx pnl split on sell paths (#370)' (#383) from feature/issue-370-fx-pnl-runtime-split into feature/v3-session-policy-stream
All checks were successful
Gitea CI / test (push) Successful in 33s
Reviewed-on: #383
2026-03-02 02:53:04 +09:00
agentson
589cc42e00 docs: bump requirements registry metadata for push governance sync
All checks were successful
Gitea CI / test (push) Successful in 32s
Gitea CI / test (pull_request) Successful in 32s
2026-03-02 02:50:08 +09:00
agentson
920630e30e docs/main: clarify fx context behavior and rate-key provenance
Some checks failed
Gitea CI / test (push) Failing after 5s
Gitea CI / test (pull_request) Successful in 34s
2026-03-02 02:44:49 +09:00
agentson
d4f37ee392 trade: apply runtime strategy/fx pnl split on sell paths (#370)
All checks were successful
Gitea CI / test (push) Successful in 32s
Gitea CI / test (pull_request) Successful in 32s
2026-03-02 02:35:54 +09:00
3914f24872 Merge pull request 'backtest: reflect cost/execution effects in fold scoring (#368)' (#382) from feature/issue-368-backtest-cost-exec into feature/v3-session-policy-stream
All checks were successful
Gitea CI / test (push) Successful in 33s
Reviewed-on: #382
2026-03-02 02:30:45 +09:00
agentson
ed713fdf40 style: wrap long helper signature in backtest pipeline
All checks were successful
Gitea CI / test (push) Successful in 32s
Gitea CI / test (pull_request) Successful in 33s
2026-03-02 02:24:01 +09:00
agentson
c27decb6b1 backtest: reflect cost/execution effects in fold scoring (#368)
Some checks failed
Gitea CI / test (push) Failing after 6s
Gitea CI / test (pull_request) Failing after 5s
2026-03-02 02:10:08 +09:00
8ac7436953 Merge pull request 'docs: resync implementation audit status with actual code gaps (#373)' (#380) from feature/issue-373-audit-sync into feature/v3-session-policy-stream
All checks were successful
Gitea CI / test (push) Successful in 36s
Reviewed-on: #380
2026-03-02 02:06:40 +09:00
31 changed files with 2181 additions and 59 deletions

View File

@@ -59,6 +59,18 @@ scripts/tea_comment.sh 374 /tmp/comment.md
 - `scripts/tea_comment.sh` accepts stdin with `-` as body source.
 - The helper fails fast when body looks like escaped-newline text only.
+#### PR Body Post-Check (Mandatory)
+Immediately after creating a PR, verify that the body has not been corrupted into literal `\n` strings.
+```bash
+python3 scripts/validate_pr_body.py --pr <PR_NUMBER>
+```
+If validation fails:
+- Fix the PR body immediately via an API patch or a file-based body
+- Proceed to review/merge only after re-validation with the same command passes
 #### ❌ TTY Error - Interactive Confirmation Fails
 ```bash
 ~/bin/tea issues create --repo X --title "Y" --description "Z"

View File

@@ -1,6 +1,6 @@
 <!--
 Doc-ID: DOC-REQ-001
-Version: 1.0.3
+Version: 1.0.12
 Status: active
 Owner: strategy
 Updated: 2026-03-02
@@ -19,14 +19,14 @@ Updated: 2026-03-02
 - `REQ-V2-005`: Labeling must use the Triple Barrier (Upper/Lower/Time) method.
 - `REQ-V2-006`: Validation enforces Walk-forward + Purge/Embargo.
 - `REQ-V2-007`: A backtest that does not reflect costs/slippage/fill failures cannot be adopted.
-- `REQ-V2-008`: The Kill Switch order is block new orders -> cancel open orders -> re-query -> reduce risk -> snapshot.
+- `REQ-V2-008`: The Kill Switch order is block new orders -> cancel open orders -> re-query (on failure, retry up to 3 times with 1s/2s backoff; stop immediately on success) -> reduce risk -> snapshot.
 ## v3 Core Requirements
 - `REQ-V3-001`: All signals/orders/logs must include a `session_id`.
 - `REQ-V3-002`: Risk parameters must be reloaded on session transitions.
 - `REQ-V3-003`: New orders are prohibited during broker blackout windows.
-- `REQ-V3-004`: Signals during blackout are queued and revalidated after recovery.
+- `REQ-V3-004`: Signals during blackout are stored in a bounded queue; on saturation, an oldest-drop policy preserves the newest intents, and validity is rechecked after recovery.
 - `REQ-V3-005`: Low-liquidity sessions (`NXT_AFTER`, `US_PRE`, `US_DAY`, `US_AFTER`) prohibit market orders.
 - `REQ-V3-006`: Backtest fill prices assume adverse-direction fills by default.
 - `REQ-V3-007`: US operation must include separate FX P&L tracking and a currency-buffer policy.
@@ -38,3 +38,9 @@ Updated: 2026-03-02
 - `REQ-OPS-002`: Numeric policies in documents are changed only in the ledger.
 - `REQ-OPS-003`: Implementation tasks must always be accompanied by test tasks.
 - `REQ-OPS-004`: The original planning documents (`v2`, `v3`) use the `docs/ouroboros/source/` path as the single source of truth.
+## Change History
+- 2026-03-02: Performed the policy-document sync check for the `v1.0.12` documentation validation gate hardening (#390). (`REQ-OPS-002`)
+- 2026-03-04: Performed the policy-document sync check for the addition of the 86_peak_probability_model_plan.md draft. (REQ-OPS-002)

View File

@@ -9,7 +9,7 @@ Updated: 2026-03-02
 # v2/v3 Implementation Audit and Return Analysis Report
 Written: 2026-02-28
-Last updated: 2026-03-02 (reflects #373 status-table reconciliation)
+Last updated: 2026-03-02 (reflects #377 kill-switch refresh retry policy)
 Target period: 2026-02-25 ~ 2026-02-28 (live trading)
 Analysis branch: `feature/v3-session-policy-stream`
@@ -32,23 +32,23 @@ Updated: 2026-03-02
 | REQ-V2-001 | 4-state sell state machine (HOLDING→BE_LOCK→ARMED→EXITED) | `src/strategy/position_state_machine.py` | ✅ Done |
 | REQ-V2-002 | Immediate promotion to the highest state (gap handling) | `position_state_machine.py:51-70` | ✅ Done |
 | REQ-V2-003 | EXITED evaluated first | `position_state_machine.py:38-48` | ✅ Done |
-| REQ-V2-004 | Quadruple exit logic (Hard/BE/ATR Trailing/Model) | `src/strategy/exit_rules.py` | ⚠️ Partial (`#369`) |
+| REQ-V2-004 | Quadruple exit logic (Hard/BE/ATR Trailing/Model assist-only, no direct EXIT trigger) | `src/strategy/exit_rules.py` | ✅ Done |
 | REQ-V2-005 | Triple Barrier labeling | `src/analysis/triple_barrier.py` | ✅ Done |
 | REQ-V2-006 | Walk-Forward + Purge/Embargo validation | `src/analysis/walk_forward_split.py` | ✅ Done |
-| REQ-V2-007 | Cost/slippage/fill-failure model required | `src/analysis/backtest_cost_guard.py` | ⚠️ Partial (`#368`) |
-| REQ-V2-008 | Kill Switch execution order (Block→Cancel→Refresh→Reduce→Snapshot) | `src/core/kill_switch.py` | ⚠️ Partial (`#377`) |
+| REQ-V2-007 | Cost/slippage/fill-failure model required | `src/analysis/backtest_cost_guard.py`, `src/analysis/backtest_pipeline.py` | ✅ Done |
+| REQ-V2-008 | Kill Switch execution order (Block→Cancel→Refresh(retry)→Reduce→Snapshot) | `src/core/kill_switch.py` | ✅ Done |
 ### 1.3 v3 Implementation Status: Partially Complete (as of 2026-03-02)
 | REQ-ID | Requirement | Status | Notes |
 |--------|----------|------|------|
 | REQ-V3-001 | session_id included in all signals/orders/logs | ⚠️ Partial | queued intents missing `session_id` (`#375`) |
-| REQ-V3-002 | Session-transition hook + risk parameter reload | ⚠️ Partial | implemented; session-boundary E2E regression coverage needed (`#376`) |
+| REQ-V3-002 | Session-transition hook + risk parameter reload | ✅ Done | session-boundary E2E regressions (override apply/release + reload-failure fallback) added (`#376`) |
 | REQ-V3-003 | Blackout window policy | ✅ Done | `src/core/blackout_manager.py` |
-| REQ-V3-004 | Blackout queue + revalidation on recovery | ⚠️ Partial | intent-loss path exists on queue saturation (`#371`); revalidation hardening tracked in `#328` |
+| REQ-V3-004 | Blackout queue + revalidation on recovery | ✅ Done | DB logging (`#324`), revalidation hardening (`#328`), and queue-saturation oldest-drop (`#371`) reflected |
 | REQ-V3-005 | Market orders banned in low-liquidity sessions | ✅ Done | `src/core/order_policy.py` |
 | REQ-V3-006 | Conservative backtest fills (adverse direction) | ✅ Done | `src/analysis/backtest_execution_model.py` |
-| REQ-V3-007 | FX P&L separation (strategy PnL vs FX PnL) | ⚠️ Partial | schema exists; runtime split calculation/propagation to be applied (`#370`) |
+| REQ-V3-007 | FX P&L separation (strategy PnL vs FX PnL) | ⚠️ Partial | runtime split calculation/propagation applied (`#370`); `fx_pnl=0` fallback when buy-side `fx_rate` is unobserved |
 | REQ-V3-008 | Overnight exception vs Kill Switch priority | ✅ Done | `_should_force_exit_for_overnight()` and `_apply_staged_exit_override_for_hold()` in `src/main.py` |
 ### 1.4 Operational Governance: Partially Complete (re-evaluated 2026-03-02)
@@ -80,22 +80,22 @@ Updated: 2026-03-02
 - **Resolved**: #326 merged — runtime `session_id` passed explicitly on `log_trade()` calls
 - **Requirement**: REQ-V3-001
-### GAP-3: No risk-parameter reload on session transition → ⚠️ Partially resolved (#327)
+### GAP-3: No risk-parameter reload on session transition → Resolved (#327, #376)
 - **Location**: `src/main.py`, `src/config.py`
 - **Resolution details**: #327 merged — session-specific parameter reload mechanism based on `SESSION_RISK_PROFILES_JSON`
 - Parameters reload at session boundaries when `SESSION_RISK_RELOAD_ENABLED=true`
 - On reload failure, existing parameters are kept (safe fallback)
-- **Remaining gap**: session-boundary live-transition E2E integration tests needed (unit tests for config override/fallback exist in `test_main.py`)
+- **Resolved**: added session-boundary E2E regression tests verifying override apply/release and fallback retention on reload failure (`#376`)
 - **Requirement**: REQ-V3-002
-### GAP-4: Blackout recovery DB logging + revalidation → ⚠️ Partially resolved (#324, #328, #371)
+### GAP-4: Blackout recovery DB logging + revalidation → Resolved (#324, #328, #371)
 - **Location**: `src/core/blackout_manager.py`, `src/main.py`
 - **Current status**:
-  - #324 scope (DB logging) has an implementation path
-  - #328 scope (price/session revalidation hardening) remains an open tracking issue
-  - #371: an intent-loss path remains on queue saturation, so `REQ-V3-004` cannot be considered complete
+  - #324: recovery-order DB logging implemented with test coverage
+  - #328: price/session revalidation hardening implemented and merged
+  - #371: queue-saturation policy specified and implemented as oldest-drop, removing the newest-intent loss path
 - **Requirement**: REQ-V3-004
 ### GAP-5: Time barrier fixed to a bar count → ✅ Resolved (#329)
@@ -107,10 +107,12 @@ Updated: 2026-03-02
 - `max_holding_bars` deprecation warning kept (backward compatibility)
 - **Requirement**: REQ-V2-005 / v3 extension
-### GAP-6 (new): FX PnL separation incomplete (MEDIUM — partially implemented)
+### GAP-6 (new): FX PnL separation partially resolved (MEDIUM)
 - **Location**: `src/db.py` (`fx_pnl`, `strategy_pnl` columns exist)
-- **Problem**: the schema and functions exist, but the runtime path does not pass split `strategy_pnl`/`fx_pnl` calculations (`#370`)
+- **Current status**: runtime SELL paths now apply split `strategy_pnl`/`fx_pnl` calculation and propagation (`#370`).
+- **Operational note**: `trading_cycle` adds `fx_rate` to the scanner-based `selection_context`, while `run_daily_session` records only an `fx_rate` snapshot without scanner context.
+- **Remaining**: historical BUY records without `fx_rate` fall back to `fx_pnl=0` even for overseas positions.
 - **Impact**: FX P&L and strategy P&L are not separated for USD trades, making performance analysis inaccurate
 - **Requirement**: REQ-V3-007
@@ -326,7 +328,7 @@ Updated: 2026-03-02
 | Blackout recovery orders logged via `log_trade()` (GAP-4) | #324 | ✅ Merged |
 | Dynamic session-transition risk-parameter reload (GAP-3) | #327 | ✅ Merged |
 | Explicit session_id passed in trade/decision logs (GAP-1, GAP-2) | #326 | ✅ Merged |
-| Blackout recovery price/session revalidation hardening (GAP-4 remainder) | #328 | ✅ Merged |
+| Blackout recovery price/session revalidation hardening (GAP-4) | #328 | ✅ Merged |
 **Remaining improvement items:**
@@ -335,7 +337,6 @@ Updated: 2026-03-02
 | P1 | Complete the US-market ATR supply path (ROOT-5 remainder) | Medium |
 | P1 | Activate FX PnL in operation (REQ-V3-007) | Low |
 | P2 | Replace pred_down_prob with an ML model (ROOT-5 remainder) | High |
-| P2 | Session-boundary E2E integration-test hardening (GAP-3 remainder) | Low |
 ### 5.3 Recommended Execution Order
@@ -392,8 +393,7 @@ Phase 3 (mid-term): v3 session optimization
 ### Missing Tests (Remaining)
-- Session-transition hook callbacks (GAP-3 remainder)
-- ❌ Session-boundary risk-parameter reload unit tests (GAP-3 remainder)
+- Session-transition hook callbacks / session-boundary risk-reload E2E regressions (`#376`)
 - ❌ Live-trading path ↔ v2 state-machine integration tests (including feature supply)
 - ❌ FX PnL operational activation verification (GAP-6)

View File

@@ -0,0 +1,168 @@
<!--
Doc-ID: DOC-PLAN-086
Version: 0.1.0
Status: draft
Owner: strategy
Updated: 2026-03-02
-->
# Peak Probability Model Introduction Plan (Draft)
Written: 2026-03-02
Purpose: redefine the "model assist signal" from the existing v2 plan in a simple, operationally viable form.
---
## 1. One-Line Summary
- The goal is to predict "the probability that now is a peak (or near a peak)" to assist take-profit, position-reduction, and overnight-risk decisions.
- Phase 1 focuses on regular sessions; extended sessions such as NXT/US_PRE are handled separately afterward.
---
## 2. Why Change
- Earlier discussions were broad in scope, making implementation/validation costly.
- We need a target that can start quickly with the current log structure.
- Measured by live-trading utility, "near-peak detection" connects directly to improving exit quality.
---
## 3. Key Decisions in This Plan
### 3.1 Prediction Targets (2)
- `p_top_d0`: probability that the current moment is "today's regular-session peak (near-peak included)"
- `p_top_d1`: probability that the current moment is "the peak through tomorrow's regular-session close (near-peak included)"
### 3.2 Model Role
- Used as an assist signal, not a standalone sell trigger.
- The Hard stop / ATR trailing / Kill Switch precedence principle is maintained.
### 3.3 Rollout Order
1. Build an MVP targeting KR/US regular sessions only
2. After performance/operational stabilization, extend separately to NXT, US_PRE, and US_AFTER
---
## 4. Data/Label Definitions (MVP)
### 4.1 Sample Unit
- Unit: one observation at time t
- Scope: order-eligible moments that pass the policy gates
### 4.2 Label Definitions
- `y_d0 = 1` if `max_high(t ~ today's regular close) <= price_t * (1 + eps_d0)` else `0`
- `y_d1 = 1` if `max_high(t ~ tomorrow's regular close) <= price_t * (1 + eps_d1)` else `0`
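The label rules above can be sketched as a small function; this is a minimal sketch assuming the per-window high lists are already extracted, and the `eps` defaults are placeholders for values the plan still leaves TBD.

```python
def peak_labels(price_t, highs_to_d0_close, highs_to_d1_close,
                eps_d0=0.005, eps_d1=0.005):
    """Label one observation at time t.

    y = 1 means no later high in the window exceeds price_t by more
    than eps, i.e. t is at (or near) the window's peak.
    The eps defaults are illustrative placeholders (TBD in the plan).
    """
    y_d0 = 1 if max(highs_to_d0_close) <= price_t * (1 + eps_d0) else 0
    y_d1 = 1 if max(highs_to_d1_close) <= price_t * (1 + eps_d1) else 0
    return y_d0, y_d1

# At price 100.0: no high through today's close exceeds 100.5, so t is a
# d0 peak; tomorrow prints 101.5, so t was not the d1 peak.
print(peak_labels(100.0, [100.2, 99.5], [100.2, 101.5, 98.0]))
```

Note that a single observation can be a d0 peak but not a d1 peak, which is exactly why the two targets are kept separate.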
### 4.3 Required Features (Initial)
- Price/returns: `current_price`, `price_change_pct`
- Micro indicators: `rsi`, `volume_ratio`, `signal`, `score`
- Portfolio: `portfolio_pnl_pct`, `total_cash` (optional)
- State: `market`, `session_id`, `minutes_to_close`, `is_tradeable_now`
### 4.4 Data Exclusion Rules (Initial)
- Exclude `session_id in ('', 'UNKNOWN')`
- Exclude `price <= 0` or `quantity <= 0`
- Exclude `rationale LIKE '[startup-sync]%'`
- Exclude samples with incomplete label windows (holidays/missing data)
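A minimal sketch of the exclusion rules as a row filter; the dict field names and the `label_window_complete` flag are assumptions about the log-row shape, not the project's actual schema.

```python
def is_trainable(row):
    """Apply the initial exclusion rules to one log row (dict sketch)."""
    if row.get("session_id") in (None, "", "UNKNOWN"):
        return False
    if row.get("price", 0) <= 0 or row.get("quantity", 0) <= 0:
        return False
    if str(row.get("rationale", "")).startswith("[startup-sync]"):
        return False
    # Holiday / missing-data label windows are dropped entirely.
    if not row.get("label_window_complete", False):
        return False
    return True
```

Enforcing this filter in one place keeps the training set and any CI integrity check in agreement.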
---
## 5. Inference/Execution Rules (Draft)
### 5.1 Inference Output
```json
{
"p_top_d0": 0.00,
"p_top_d1": 0.00,
"model_version": "TBD"
}
```
### 5.2 Execution Principles
- High `p_top_d0`: strengthen same-day take-profit/trailing stops
- High `p_top_d1`: prefer reducing/closing overnight holdings
- Extended sessions (NXT/US_PRE): conservative thresholds or execution restrictions initially
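These principles reduce to a simple gate. The sketch below is hedged: the 0.7/0.6 thresholds, action names, and extended-session list are illustrative only, since the real thresholds are left TBD in section 8.

```python
def exit_assist_actions(p_top_d0, p_top_d1, session_id,
                        thr_d0=0.7, thr_d1=0.6,
                        extended_sessions=("NXT_AFTER", "US_PRE")):
    """Map peak probabilities to assist actions (hypothetical sketch)."""
    # Extended sessions start with execution restricted, per the draft.
    if session_id in extended_sessions:
        return ["restrict_execution"]
    actions = []
    if p_top_d0 >= thr_d0:
        actions.append("tighten_trailing")  # same-day exit assist
    if p_top_d1 >= thr_d1:
        actions.append("reduce_overnight")  # overnight-risk assist
    return actions
```

The output feeds the existing exit stack as hints only; Hard stop / ATR trailing / Kill Switch still take precedence.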
---
## 6. Session-Issue Handling Principles
### 6.1 Tradeable-Session Basis
- Build labels/inference on "the sessions in which the symbol actually trades", not calendar time.
### 6.2 Separating Regular vs Extended Sessions
- MVP: regular-session-only labels/model
- Extension: consider per-session heads or separate per-session models
### 6.3 Separate Peak Definitions
- `reg_high`: regular-session peak (primary label)
- `ext_high`: peak including extended sessions (auxiliary analysis)
---
## 7. Phased Implementation Plan
### Phase A: Freeze Contracts
- Finalize the canonical training-dataset schema
- Document as-of (point-in-time) alignment rules
- Add label-generation functions and validation tests
### Phase B: Offline Validation
- Walk-forward + Purge/Embargo evaluation
- Separate reports for `d0` and `d1`
- Confirm improvement over B1 (no model)
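The evaluation step can be illustrated with a minimal purged walk-forward splitter. This is a sketch under illustrative window sizes, not the project's actual `walk_forward_split.py`: the purge gap keeps samples whose forward-looking label windows overlap the test period out of training.

```python
def walk_forward_splits(n_samples, train_size, test_size, purge=5):
    """Yield (train_idx, test_idx) index pairs with a purge gap between
    each training window's end and its test window's start, so samples
    whose forward-looking label windows overlap the test period are
    excluded from training. Window sizes here are illustrative."""
    splits = []
    start = 0
    while start + train_size + purge + test_size <= n_samples:
        train_idx = list(range(start, start + train_size))
        test_idx = list(range(start + train_size + purge,
                              start + train_size + purge + test_size))
        splits.append((train_idx, test_idx))
        start += test_size  # roll the window forward by one test block
    return splits
```

Because `y_d1` looks through tomorrow's close, the purge must be at least as long as the longest label window.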
### Phase C: Runtime Integration
- Inject inference results as `pred_top_d0/pred_top_d1`
- Wire in sell/reduction assist rules
- Observe in shadow mode, then enable with restrictions
### Phase D: Session Extension
- Separate policies for NXT/US_PRE/US_AFTER
- Tune execution gates/thresholds per session
---
## 8. Minimum Items to Decide Now (TBD)
- [ ] Default values for `eps_d0`, `eps_d1` (`TBD`)
- [ ] Execution thresholds for `p_top_d0`, `p_top_d1` (`TBD`)
- [ ] Market scope for regular/extended sessions (`TBD`)
- [ ] Minimum training-sample count (`TBD`)
- [ ] Shadow observation period (`TBD`)
---
## 9. Risks and Mitigations
- Label imbalance: class weights, threshold tuning
- Time-of-day bias (near the close): add time features + report performance per interval
- Data integrity: enforce exclusion rules + CI validation
- Growing complexity: lock the regular-session MVP first; split extended sessions into follow-up work
---
## 10. Follow-Up Document Links (Planned)
- requirements registry: add new REQ/TASK/TEST mappings
- code-level work orders: break down implementation work
- acceptance/test plan: add `p_top_d0/d1` verification items
- implementation audit: update the existing "model assist signal" item

View File

@@ -0,0 +1,211 @@
# Dual-Track Workflow Execution Plan (Revised)
> **For Claude:** REQUIRED SUB-SKILL: Use superpowers:executing-plans to implement this plan task-by-task.
**Goal:** Re-seed workflow planning from a clean base branch (not the temporary branch), then execute isolated `86` and `80` tracks with real-operation-confirmed closeout and related Gitea issue closure.
**Architecture:** Start from `feature/v3-session-policy-stream` and create `base/ouroboros-workflow-20260302`. Bring only two workflow plan docs as one squashed seed commit. Branch `task/86...` and `task/80...` from that base with strict file-scope separation. For `80`, update audit only after real-operation confirmation, then close related Gitea issues (children before parent). Final promotion happens by merging the validated stream to `main`.
**Tech Stack:** Git CLI, Gitea (`tea`) issue/PR workflow, Markdown docs (`docs/ouroboros`, `docs/plans`)
---
### Task 1: Freeze temporary branch and identify export scope
**Files:**
- Modify: none
- Create: none
- Test: none
**Step 1: Inspect temporary branch state**
Run: `git status --short --branch`
Expected: current temporary branch with local changes listed.
**Step 2: Preserve changes using stash (recommended)**
Run: `git stash push -u -m "tmp-before-workflow-reseed-2026-03-02"`
Expected: stash created with tracked/untracked changes.
**Step 3: Define transferable artifacts**
Run: keep only the following two files for the reseed:
- `docs/plans/2026-03-02-ouroboros-dual-track-workflow-design.md`
- `docs/plans/2026-03-02-dual-track-workflow-execution-plan.md`
Expected: transfer scope fixed to two files.
**Step 4: Verify clean working tree**
Run: `git status --short --branch`
Expected: no local changes.
**Step 5: Commit**
```bash
# no commit in this task (stash-only preservation)
```
### Task 2: Create clean official base and seed one commit
**Files:**
- Modify: `docs/plans/2026-03-02-ouroboros-dual-track-workflow-design.md`
- Modify: `docs/plans/2026-03-02-dual-track-workflow-execution-plan.md`
- Test: none
**Step 1: Switch to canonical base**
Run: `git checkout feature/v3-session-policy-stream`
Expected: branch switch successful.
**Step 2: Update local base**
Run: `git pull --ff-only origin feature/v3-session-policy-stream`
Expected: up-to-date or fast-forward update.
**Step 3: Create staging base branch**
Run: `git checkout -b base/ouroboros-workflow-20260302`
Expected: new branch created from updated feature base.
**Step 4: Restore only two plan docs from temporary branch commits**
Run (example): `git checkout <temp-branch> -- docs/plans/2026-03-02-ouroboros-dual-track-workflow-design.md docs/plans/2026-03-02-dual-track-workflow-execution-plan.md`
Expected: only two files restored in working tree.
**Step 5: Create one squashed seed commit**
Run: `git add docs/plans/2026-03-02-ouroboros-dual-track-workflow-design.md docs/plans/2026-03-02-dual-track-workflow-execution-plan.md && git commit -m "docs: seed revised dual-track workflow plan"`
Expected: single commit with two files only.
**Step 6: Verify seed scope**
Run: `git show --name-only --oneline HEAD`
Expected: exactly the two plan docs listed.
**Step 7: Push official base**
Run: `git push -u origin base/ouroboros-workflow-20260302`
Expected: remote tracking set.
### Task 3: Open Track A (`86`) branch and initial PR
**Files:**
- Modify: `docs/ouroboros/86_peak_probability_model_plan.md`
- Test: none
**Step 1: Create Track A branch**
Run: `git checkout -b task/86-peak-probability-plan`
Expected: branch created from official base.
**Step 2: Stage only `86` document**
Run: `git add docs/ouroboros/86_peak_probability_model_plan.md`
Expected: only `86` doc staged.
**Step 3: Commit initial PR seed**
Run: `git commit -m "docs: seed peak probability model plan (#86)"`
Expected: one-document commit.
**Step 4: Push branch**
Run: `git push -u origin task/86-peak-probability-plan`
Expected: remote branch created.
**Step 5: Open PR-A**
Run: `tea pr create --base base/ouroboros-workflow-20260302 --head task/86-peak-probability-plan --title "docs: seed peak probability model plan (#86)" --description "Initial planning PR for iterative concretization."`
Expected: PR URL returned.
### Task 4: Open Track B (`80`) branch for operation-confirmed closeout
**Files:**
- Modify: `docs/ouroboros/80_implementation_audit.md`
- Test: operation observation evidence (outside unit-test scope)
**Step 1: Return to staging base**
Run: `git checkout base/ouroboros-workflow-20260302`
Expected: branch switch successful.
**Step 2: Create Track B branch**
Run: `git checkout -b task/80-implementation-audit-closeout`
Expected: branch created.
**Step 3: Apply closeout edits only after real-operation confirmation**
Run: edit `docs/ouroboros/80_implementation_audit.md`
Expected: status/evidence sections updated with observed behavior, confirmation timestamp, and operational proof summary.
**Step 4: Commit closeout update**
Run: `git add docs/ouroboros/80_implementation_audit.md && git commit -m "docs: close out implementation audit items after operation confirmation (#80)"`
Expected: single-scope commit.
**Step 5: Push and open PR-B**
Run: `git push -u origin task/80-implementation-audit-closeout`
Expected: remote branch created.
Run: `tea pr create --base base/ouroboros-workflow-20260302 --head task/80-implementation-audit-closeout --title "docs: close out implementation audit items (#80)" --description "Closeout based on real-operation confirmation."`
Expected: PR URL returned.
### Task 5: Close related Gitea issues after `80` confirmation
**Files:**
- Modify: `docs/ouroboros/80_implementation_audit.md` (reference linkage only)
- Test: issue parent-child completeness check
**Step 1: Build related issue set from updated `80` audit**
Run: parse issue references from `80` updated sections.
Expected: target open issue list prepared.
**Step 2: Close child issues first**
Run: `tea issues close <child-issue-number>` for each confirmed child.
Expected: all qualifying child issues closed.
**Step 3: Close parent issues only when all children are closed**
Run: `tea issues close <parent-issue-number>`
Expected: parent closes only after dependency check passes.
**Step 4: Record closure summary in PR comment**
Run: add PR comment with closed issue IDs and parent-child resolution note.
Expected: auditable trace for closeout.
### Task 6: Validate branch hygiene and merge path
**Files:**
- Modify: none
- Test: git diff/log checks
**Step 1: Verify PR-A scope**
Run: `git diff --name-only base/ouroboros-workflow-20260302...task/86-peak-probability-plan`
Expected: only `docs/ouroboros/86_peak_probability_model_plan.md` (and intentional related plan files if any).
**Step 2: Verify PR-B scope**
Run: `git diff --name-only base/ouroboros-workflow-20260302...task/80-implementation-audit-closeout`
Expected: only `docs/ouroboros/80_implementation_audit.md` (and intentional related plan files if any).
**Step 3: Merge both PRs into staging base**
Run: Gitea merge flow for PR-A and PR-B
Expected: staging base includes both tracks without cross-file contamination.
**Step 4: Promote validated stream to `main`**
Run: open PR from `feature/v3-session-policy-stream` (or its validated promotion branch) to `main`
Expected: merge-based history preservation with no cherry-pick/rebase surgery.
**Step 5: Commit**
```bash
# no additional commit required in this task
```

View File

@@ -0,0 +1,100 @@
# Live Operation Issue Tracking Design
**Date:** 2026-03-02
**Scope:** Real-operation confirmation tracking for all open Gitea issues, tied to `task/80-implementation-audit-closeout`.
---
## 1. Goal
Define an issue-by-issue tracking method for real-operation confirmation so that:
- every open issue is tracked with explicit lifecycle states,
- parent-child close order is enforced,
- and final closeout is reflected in `80` audit work without loss.
---
## 2. Target Set
- Coverage target: all current open Gitea issues (`318~381`, and future open issues during this closeout window).
- Tracking starts from open set snapshot and stays synced until closeout completion.
---
## 3. Tracking Architecture
- Primary log: direct comments on each issue ticket.
- Aggregate log: single summary comment on the `80` PR (`task/80-implementation-audit-closeout`).
- No separate tracking doc as source of truth during execution.
---
## 4. State Model and Criteria
State machine:
- `NOT_STARTED`
- `OBSERVING`
- `CONFIRMED`
- `CLOSED`
Rules:
- `CONFIRMED` requires one real-operation observation evidence item.
- `CLOSED` requires close conditions satisfied and ticket close action executed.
- Parent issue cannot become `CLOSED` until all child issues are `CLOSED`.
---
## 5. Issue Comment Template
Each issue update comment uses:
- `Status: <STATE>`
- `Observed At: <KST>, <UTC>`
- `Environment: live`
- `Evidence: <log/query/order-execution identifier>`
- `Next: <single next action>`
This template is reused for every transition for consistent auditability.
---
## 6. Parent-Child Policy
- Parent issue comment must list child issue IDs.
- Child issues transition independently.
- Parent close gate:
- if any child is not `CLOSED`, parent remains at most `CONFIRMED`;
- when all children are `CLOSED`, parent can transition to `CLOSED` and be closed.
---
## 7. Operational Loop
1. Initialize all open issues with `NOT_STARTED` comment.
2. Move active issues to `OBSERVING` when live confirmation begins.
3. Move to `CONFIRMED` after one real-operation evidence capture.
4. Close child issues first, then close parent issue(s).
5. Update the single `80` PR summary comment whenever issue state changes.
6. Reflect final confirmed/closed outcomes in `docs/ouroboros/80_implementation_audit.md`.
---
## 8. Evidence and Time Rules
- Evidence must be replayable/referenceable by identifier (not vague narrative only).
- Every update includes both KST and UTC timestamps.
- Use absolute dates (example: `2026-03-02`) to avoid ambiguity in session handoff.
---
## 9. Completion Conditions
Tracking design is complete when:
- all open issues are represented in issue comments with valid lifecycle state,
- parent-child closure constraints are respected,
- `80` PR summary comment reflects current global status,
- and final `80` audit document update is ready for closeout commit.


@@ -0,0 +1,203 @@
# Live Operation Issue Tracking Implementation Plan
> **For Claude:** REQUIRED SUB-SKILL: Use superpowers:executing-plans to implement this plan task-by-task.
**Goal:** Track real-operation confirmation for all open Gitea issues with per-issue comments and a synchronized summary on the `80` PR, then close issues in dependency-safe order.
**Architecture:** Use issue tickets as the primary state log with a strict four-state lifecycle. Maintain one aggregate status comment on `task/80-implementation-audit-closeout` PR. Enforce child-first closure and parent close gating.
**Tech Stack:** Git CLI, Gitea CLI (`tea`), Markdown (`docs/ouroboros/80_implementation_audit.md`)
---
### Task 1: Snapshot open issues and classify parent-child graph
**Files:**
- Create: none
- Modify: none
- Test: `tea issues ls` output inspection
**Step 1: Capture open issue list**
Run: `tea issues ls --state open --limit 200`
Expected: list of all open issues (including `#318`-`#381`).
**Step 2: Fetch details for each issue**
Run: `for n in $(tea issues ls --state open --limit 200 --output json | jq -r '.[].number'); do tea issue view "$n"; done`
Expected: issue bodies/comments available for relation parsing.
**Step 3: Extract parent-child references**
Run: parse issue text for relation keywords (`parent`, `child`, `depends on`, `blocks`).
Expected: dependency map ready.
**Step 4: Validate unresolved dependency edges**
Run: verify each parent references only existing issue numbers.
Expected: no dangling child references.
**Step 5: Commit**
```bash
# no commit (discovery task)
```
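Step 4's dangling-reference check can be expressed as a short helper. This is a sketch; the dependency-map shape (parent number to child numbers) is an assumption:

```python
def find_dangling_children(parent_to_children: dict[int, list[int]],
                           open_issues: set[int]) -> dict[int, list[int]]:
    """Return parent -> child IDs that are missing from the open-issue snapshot."""
    return {
        parent: missing
        for parent, children in parent_to_children.items()
        if (missing := [c for c in children if c not in open_issues])
    }
```

An empty result means every parent references only existing issue numbers.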
### Task 2: Initialize per-issue tracking comments (`NOT_STARTED`)
**Files:**
- Create: none
- Modify: none
- Test: issue comment presence check
**Step 1: Prepare standard template text**
```text
Status: NOT_STARTED
Observed At: <KST>, <UTC>
Environment: live
Evidence: N/A
Next: Begin live observation for this issue.
```
**Step 2: Post initialization comment to each open issue**
Run: `tea issues comment <issue-number> --message "<template>"`
Expected: each open issue has one initialization comment.
**Step 3: Verify comment coverage**
Run: sample-check each issue (`tea issue view <issue-number>`) for latest status comment.
Expected: 100% coverage across open issues.
**Step 4: Commit**
```bash
# no commit (ticket operations)
```
### Task 3: Create and maintain `80` PR aggregate status comment
**Files:**
- Modify: `docs/ouroboros/80_implementation_audit.md` (final sync only, not this task)
- Test: PR comment contains full issue table
**Step 1: Identify PR number for `task/80-implementation-audit-closeout`**
Run: `tea pr ls --state open`
Expected: PR number for `task/80...` identified.
**Step 2: Post initial summary comment**
Run: `tea pr comment <pr-number> --message "<issue-state-summary-table>"`
Expected: one aggregate comment listing `issue | state | last observed at | parent/child`.
**Step 3: Define update protocol**
Run: update same summary comment on every issue state change.
Expected: summary remains single-source aggregate view.
**Step 4: Commit**
```bash
# no commit (ticket/PR operations)
```
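The aggregate comment's table can be regenerated from tracked rows on every state change; a minimal sketch (the column keys are illustrative):

```python
def render_summary_table(rows: list[dict[str, str]]) -> str:
    """Build the markdown 'issue | state | last observed at | parent/child' table."""
    header = "| issue | state | last observed at | parent/child |"
    sep = "| --- | --- | --- | --- |"
    body = [
        f"| #{r['issue']} | {r['state']} | {r['observed_at']} | {r['relation']} |"
        for r in rows
    ]
    return "\n".join([header, sep, *body])
```

Rendering the whole table from state, rather than editing rows by hand, keeps the single aggregate comment internally consistent.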
### Task 4: Execute live observation loop and mark `OBSERVING` -> `CONFIRMED`
**Files:**
- Create: none
- Modify: none
- Test: evidence-backed state transitions present in issue comments
**Step 1: Mark active issue as `OBSERVING`**
Run: `tea issues comment <issue-number> --message "Status: OBSERVING ..."`
Expected: issue history shows observation started.
**Step 2: Capture one real-operation evidence item**
Run: collect log/query/order identifier tied to issue behavior.
Expected: one concrete evidence reference captured.
**Step 3: Mark issue `CONFIRMED`**
Run: `tea issues comment <issue-number> --message "Status: CONFIRMED ... Evidence: <id> ..."`
Expected: issue has explicit confirmed state.
**Step 4: Sync PR aggregate summary**
Run: update `80` PR summary comment row for that issue.
Expected: PR summary and issue status aligned.
**Step 5: Commit**
```bash
# no commit (ticket operations)
```
### Task 5: Close issues with child-first dependency enforcement
**Files:**
- Create: none
- Modify: none
- Test: parent close attempted only after all children closed
**Step 1: Close confirmed child issues first**
Run: `tea issues close <child-issue-number>`
Expected: child issue state becomes closed.
**Step 2: Verify all children for each parent**
Run: evaluate dependency map for remaining non-closed children.
Expected: parent close gate decision available.
**Step 3: Close eligible parent issues**
Run: `tea issues close <parent-issue-number>`
Expected: parent is closed only when all children are closed.
**Step 4: Post closure comment and sync PR summary**
Run: comment `Status: CLOSED` on closed issue and update PR summary row.
Expected: closure trace exists in both issue and PR aggregate.
**Step 5: Commit**
```bash
# no commit (ticket operations)
```
### Task 6: Reflect final tracked outcomes in `80` audit and commit
**Files:**
- Modify: `docs/ouroboros/80_implementation_audit.md`
- Test: markdown consistency + traceability check
**Step 1: Update `80` with final live-confirmed statuses**
Run: edit status lines and evidence summary sections.
Expected: document reflects final issue states and confirmation notes.
**Step 2: Add issue closure summary block**
Run: include closed child/parent sequence summary.
Expected: audit reader can verify closure logic quickly.
**Step 3: Validate references**
Run: ensure issue numbers in doc match actual closed/open states.
Expected: no mismatch between doc and Gitea.
**Step 4: Commit closeout doc update**
```bash
git add docs/ouroboros/80_implementation_audit.md
git commit -m "docs: finalize 80 audit with live-operation issue confirmations"
```
**Step 5: Push branch**
Run: `git push`
Expected: closeout commit available on `task/80-implementation-audit-closeout`.


@@ -0,0 +1,99 @@
# Ouroboros Dual-Track Workflow Design (Revised)
**Date:** 2026-03-02
**Scope:** `docs/ouroboros/86_peak_probability_model_plan.md` and `docs/ouroboros/80_implementation_audit.md`
---
## 1. Goal
Prevent work loss and history contamination while running two parallel tracks, and correct an invalid starting condition (design commits created on a disposable temporary branch):
- Track A: open and iterate the `86` plan as a dedicated PR.
- Track B: finalize `80` only after real-operation behavior is confirmed, then reflect completion and close related Gitea open issues.
---
## 2. Constraints and Current Context
- Current temporary branch is disposable and must not remain the authoritative base.
- Canonical base line for current work is `feature/v3-session-policy-stream`.
- Final integration strategy is to merge the full `feature/v3-session-policy-stream` line into `main` after operation-level validation is complete.
---
## 3. Approaches Considered
1. Keep using temporary branch and continue from existing design commits.
2. Create a clean base from `feature/v3-session-policy-stream` and move only plan artifacts as one commit (recommended).
3. Split into two fresh bases (`86` and `80` separately) immediately.
Recommendation rationale:
- Approach 2 removes temporary-branch contamination risk.
- It preserves clean lineage with a single "official starting point" commit.
- It keeps downstream split for `86` and `80` straightforward.
---
## 4. Approved Branch and PR Topology
- Long-lived source baseline: `feature/v3-session-policy-stream`
- New official base: `base/ouroboros-workflow-20260302` (from `feature/v3-session-policy-stream`)
- Official base contains only two workflow plan docs squashed into one commit.
- Track A branch: `task/86-peak-probability-plan` (from official base)
- Track B branch: `task/80-implementation-audit-closeout` (from official base)
PR direction:
- PR-A (`86`): `task/86-peak-probability-plan -> base/ouroboros-workflow-20260302`
- PR-B (`80`): `task/80-implementation-audit-closeout -> base/ouroboros-workflow-20260302`
- Final integration: `feature/v3-session-policy-stream` line merged into `main`
---
## 5. Completion Criteria per Track
### Track A (`86`)
- Open initial PR early with the planning document.
- Continue iterative concretization in follow-up commits on the same branch.
### Track B (`80`)
- Do not close based on test-command artifacts alone.
- Close only when modified behavior is confirmed in real operation.
- Update `80_implementation_audit.md` with:
- observed behavior changes,
- confirmation timing,
- operation-side evidence summary.
- After `80` real-operation confirmation and doc update, close related Gitea open issues immediately (same phase).
- Parent issues close only when all child issues are confirmed closed.
---
## 6. Loss-Prevention Rules
- Preserve temporary-branch unmerged changes first (stash or archival commit).
- Exclude `workflow/session-handover.md` and `docs/ouroboros/86_peak_probability_model_plan.md` from official base-seeding commit.
- Enforce file-scope separation:
- PR-A should avoid `80` edits.
- PR-B should avoid `86` edits.
- Keep commit messages scoped to one track per commit.
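The file-scope separation rule can be checked mechanically before pushing; a sketch in which the forbidden path prefixes are assumptions inferred from the doc names above:

```python
# Hypothetical mapping: each track must not touch the other track's docs.
FORBIDDEN_BY_TRACK = {
    "PR-A": ("docs/ouroboros/80_",),  # 86 track must avoid 80 edits
    "PR-B": ("docs/ouroboros/86_",),  # 80 track must avoid 86 edits
}

def scope_violations(track: str, changed_files: list[str]) -> list[str]:
    """Return changed paths that violate the track's file-scope rule."""
    prefixes = FORBIDDEN_BY_TRACK.get(track, ())
    return [f for f in changed_files if f.startswith(prefixes)]
```

A non-empty result means the commit mixes tracks and should be split before review.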
---
## 7. Merge Strategy Decision
- For this workflow, prefer merge-based lineage preservation.
- Because the final promotion unit is the full `feature/v3-session-policy-stream` stream, avoid cherry-pick/rebase surgery unless a contamination incident occurs.
---
## 8. Immediate Next Actions
1. Seed `base/ouroboros-workflow-20260302` from `feature/v3-session-policy-stream` with one squashed workflow-doc commit.
2. Verify seed commit file scope contains only two plan docs.
3. Create and open PR-A for `86`.
4. Complete real-operation confirmation for `80`, update audit doc, then close related Gitea issues (children first, then parent).
5. Merge validated `feature/v3-session-policy-stream` line into `main`.


@@ -128,6 +128,16 @@ tea pr create \
--description "$PR_BODY"
```
Verify PR body integrity immediately after PR creation (mandatory):
```bash
python3 scripts/validate_pr_body.py --pr <PR_NUMBER>
```
Enforcement rules:
- While validation fails (literal `\n`, unbalanced code fences, missing headers/lists), review and merge are prohibited
- After editing the body, re-run the same command and it must pass
Forbidden patterns:
- `-d "line1\nline2"` (the `\n` may appear verbatim in the web UI)


@@ -92,6 +92,25 @@ def validate_testing_doc_has_dynamic_count_guidance(errors: list[str]) -> None:
    )


def validate_pr_body_postcheck_guidance(errors: list[str]) -> None:
    required_tokens = {
        "commands": (
            "PR Body Post-Check (Mandatory)",
            "python3 scripts/validate_pr_body.py --pr <PR_NUMBER>",
        ),
        "workflow": (
            "PR 생성 직후 본문 무결성 검증(필수)",
            "python3 scripts/validate_pr_body.py --pr <PR_NUMBER>",
        ),
    }
    for key, tokens in required_tokens.items():
        path = REQUIRED_FILES[key]
        text = _read(path)
        for token in tokens:
            if token not in text:
                errors.append(f"{path}: missing PR body post-check guidance token -> {token}")
def main() -> int:
    errors: list[str] = []
@@ -117,6 +136,7 @@ def main() -> int:
    validate_summary_docs_reference_core_docs(errors)
    validate_commands_endpoint_duplicates(errors)
    validate_testing_doc_has_dynamic_count_guidance(errors)
    validate_pr_body_postcheck_guidance(errors)

    if errors:
        print("[FAIL] docs sync validation failed")
@@ -128,6 +148,7 @@ def main() -> int:
    print("[OK] summary docs link to core docs and links resolve")
    print("[OK] commands endpoint rows have no duplicates")
    print("[OK] testing doc includes dynamic count guidance")
    print("[OK] PR body post-check guidance exists in commands/workflow docs")
    return 0


@@ -33,6 +33,9 @@ ALLOWED_PLAN_TARGETS = {
"2": (DOC_DIR / "source" / "ouroboros_plan_v2.txt").resolve(), "2": (DOC_DIR / "source" / "ouroboros_plan_v2.txt").resolve(),
"3": (DOC_DIR / "source" / "ouroboros_plan_v3.txt").resolve(), "3": (DOC_DIR / "source" / "ouroboros_plan_v3.txt").resolve(),
} }
ISSUE_REF_PATTERN = re.compile(r"#(?P<issue>\d+)")
ISSUE_DONE_PATTERN = re.compile(r"(?:✅|머지|해소|완료)")
ISSUE_PENDING_PATTERN = re.compile(r"(?:잔여|오픈 상태|추적 이슈)")
def iter_docs() -> list[Path]: def iter_docs() -> list[Path]:
@@ -119,6 +122,38 @@ def collect_req_traceability(
req_to_test.setdefault(req_id, set()).add(item_id) req_to_test.setdefault(req_id, set()).add(item_id)
def validate_issue_status_consistency(path: Path, text: str, errors: list[str]) -> None:
issue_done_lines: dict[str, list[int]] = {}
issue_pending_lines: dict[str, list[int]] = {}
for line_no, raw_line in enumerate(text.splitlines(), start=1):
line = raw_line.strip()
if not line:
continue
issue_ids = [m.group("issue") for m in ISSUE_REF_PATTERN.finditer(line)]
if not issue_ids:
continue
is_pending = bool(ISSUE_PENDING_PATTERN.search(line))
is_done = bool(ISSUE_DONE_PATTERN.search(line)) and not is_pending
if not is_pending and not is_done:
continue
for issue_id in issue_ids:
if is_done:
issue_done_lines.setdefault(issue_id, []).append(line_no)
if is_pending:
issue_pending_lines.setdefault(issue_id, []).append(line_no)
conflicted_issues = sorted(set(issue_done_lines) & set(issue_pending_lines))
for issue_id in conflicted_issues:
errors.append(
f"{path}: conflicting status for issue #{issue_id} "
f"(done at lines {issue_done_lines[issue_id]}, "
f"pending at lines {issue_pending_lines[issue_id]})"
)
def main() -> int: def main() -> int:
if not DOC_DIR.exists(): if not DOC_DIR.exists():
print(f"ERROR: missing directory {DOC_DIR}") print(f"ERROR: missing directory {DOC_DIR}")
@@ -140,6 +175,8 @@ def main() -> int:
text = path.read_text(encoding="utf-8") text = path.read_text(encoding="utf-8")
validate_metadata(path, text, errors, doc_ids) validate_metadata(path, text, errors, doc_ids)
validate_links(path, text, errors) validate_links(path, text, errors)
if path.name == "80_implementation_audit.md":
validate_issue_status_consistency(path, text, errors)
collect_ids(path, text, defs, refs) collect_ids(path, text, defs, refs)
collect_req_traceability(text, req_to_task, req_to_test) collect_req_traceability(text, req_to_task, req_to_test)


@@ -0,0 +1,94 @@
#!/usr/bin/env python3
"""Validate PR body formatting to prevent escaped-newline artifacts."""
from __future__ import annotations

import argparse
import json
import re
import subprocess
import sys
from pathlib import Path

HEADER_PATTERN = re.compile(r"^##\s+\S+", re.MULTILINE)
LIST_ITEM_PATTERN = re.compile(r"^\s*(?:-|\*|\d+\.)\s+\S+", re.MULTILINE)


def validate_pr_body_text(text: str) -> list[str]:
    errors: list[str] = []
    if "\\n" in text and "\n" not in text:
        errors.append("body contains escaped newline sequence (\\n)")
    if text.count("```") % 2 != 0:
        errors.append("body has unbalanced fenced code blocks (``` count is odd)")
    if not HEADER_PATTERN.search(text):
        errors.append("body is missing markdown section headers (e.g. '## Summary')")
    if not LIST_ITEM_PATTERN.search(text):
        errors.append("body is missing markdown list items")
    return errors


def fetch_pr_body(pr_number: int) -> str:
    try:
        completed = subprocess.run(
            [
                "tea",
                "api",
                "-R",
                "origin",
                f"repos/{{owner}}/{{repo}}/pulls/{pr_number}",
            ],
            check=True,
            capture_output=True,
            text=True,
        )
    except (subprocess.CalledProcessError, FileNotFoundError) as exc:
        raise RuntimeError(f"failed to fetch PR #{pr_number}: {exc}") from exc
    try:
        payload = json.loads(completed.stdout)
    except json.JSONDecodeError as exc:
        raise RuntimeError(f"failed to parse PR payload for #{pr_number}: {exc}") from exc
    body = payload.get("body", "")
    if not isinstance(body, str):
        raise RuntimeError(f"unexpected PR body type for #{pr_number}: {type(body).__name__}")
    return body


def parse_args() -> argparse.Namespace:
    parser = argparse.ArgumentParser(
        description="Validate PR body markdown formatting and escaped-newline artifacts."
    )
    group = parser.add_mutually_exclusive_group(required=True)
    group.add_argument("--pr", type=int, help="PR number to fetch via `tea api`")
    group.add_argument("--body-file", type=Path, help="Path to markdown body file")
    return parser.parse_args()


def main() -> int:
    args = parse_args()
    if args.body_file is not None:
        if not args.body_file.exists():
            print(f"[FAIL] body file not found: {args.body_file}")
            return 1
        body = args.body_file.read_text(encoding="utf-8")
        source = f"file:{args.body_file}"
    else:
        body = fetch_pr_body(args.pr)
        source = f"pr:{args.pr}"
    errors = validate_pr_body_text(body)
    if errors:
        print("[FAIL] PR body validation failed")
        print(f"- source: {source}")
        for err in errors:
            print(f"- {err}")
        return 1
    print("[OK] PR body validation passed")
    print(f"- source: {source}")
    return 0


if __name__ == "__main__":
    sys.exit(main())


@@ -11,6 +11,7 @@ class BacktestCostModel:
    commission_bps: float | None = None
    slippage_bps_by_session: dict[str, float] | None = None
    failure_rate_by_session: dict[str, float] | None = None
    partial_fill_rate_by_session: dict[str, float] | None = None
    unfavorable_fill_required: bool = True
@@ -31,6 +32,7 @@ def validate_backtest_cost_model(
    slippage = model.slippage_bps_by_session or {}
    failure = model.failure_rate_by_session or {}
    partial_fill = model.partial_fill_rate_by_session or {}

    missing_slippage = [s for s in required_sessions if s not in slippage]
    if missing_slippage:
@@ -44,9 +46,19 @@ def validate_backtest_cost_model(
f"missing failure_rate_by_session for sessions: {', '.join(missing_failure)}" f"missing failure_rate_by_session for sessions: {', '.join(missing_failure)}"
) )
missing_partial_fill = [s for s in required_sessions if s not in partial_fill]
if missing_partial_fill:
raise ValueError(
"missing partial_fill_rate_by_session for sessions: "
f"{', '.join(missing_partial_fill)}"
)
for sess, bps in slippage.items(): for sess, bps in slippage.items():
if not math.isfinite(bps) or bps < 0: if not math.isfinite(bps) or bps < 0:
raise ValueError(f"slippage bps must be >= 0 for session={sess}") raise ValueError(f"slippage bps must be >= 0 for session={sess}")
for sess, rate in failure.items(): for sess, rate in failure.items():
if not math.isfinite(rate) or rate < 0 or rate > 1: if not math.isfinite(rate) or rate < 0 or rate > 1:
raise ValueError(f"failure rate must be within [0,1] for session={sess}") raise ValueError(f"failure rate must be within [0,1] for session={sess}")
for sess, rate in partial_fill.items():
if not math.isfinite(rate) or rate < 0 or rate > 1:
raise ValueError(f"partial fill rate must be within [0,1] for session={sess}")


@@ -13,6 +13,11 @@ from statistics import mean
from typing import Literal, cast

from src.analysis.backtest_cost_guard import BacktestCostModel, validate_backtest_cost_model
from src.analysis.backtest_execution_model import (
    BacktestExecutionModel,
    ExecutionAssumptions,
    ExecutionRequest,
)
from src.analysis.triple_barrier import TripleBarrierSpec, label_with_triple_barrier
from src.analysis.walk_forward_split import WalkForwardFold, generate_walk_forward_splits
@@ -40,6 +45,7 @@ class WalkForwardConfig:
class BaselineScore:
    name: Literal["B0", "B1", "M1"]
    accuracy: float
    cost_adjusted_accuracy: float


@dataclass(frozen=True)
@@ -50,6 +56,10 @@ class BacktestFoldResult:
    train_label_distribution: dict[int, int]
    test_label_distribution: dict[int, int]
    baseline_scores: list[BaselineScore]
    execution_adjusted_avg_return_bps: float
    execution_adjusted_trade_count: int
    execution_rejected_count: int
    execution_partial_count: int


@dataclass(frozen=True)
@@ -115,6 +125,8 @@ def run_v2_backtest_pipeline(
    ).label
    ordered_labels = [labels_by_bar_index[idx] for idx in normalized_entries]
    ordered_sessions = [bars[idx].session_id for idx in normalized_entries]
    ordered_prices = [bars[idx].close for idx in normalized_entries]

    folds = generate_walk_forward_splits(
        n_samples=len(normalized_entries),
        train_size=walk_forward.train_size,
@@ -129,8 +141,37 @@ def run_v2_backtest_pipeline(
    for fold_idx, fold in enumerate(folds):
        train_labels = [ordered_labels[i] for i in fold.train_indices]
        test_labels = [ordered_labels[i] for i in fold.test_indices]
        test_sessions = [ordered_sessions[i] for i in fold.test_indices]
        test_prices = [ordered_prices[i] for i in fold.test_indices]
        if not test_labels:
            continue
        execution_model = _build_execution_model(cost_model=cost_model, fold_seed=fold_idx)
        execution_return_model = _build_execution_model(
            cost_model=cost_model,
            fold_seed=fold_idx + 1000,
        )
        b0_pred = _baseline_b0_pred(train_labels)
        m1_pred = _m1_pred(train_labels)
        execution_returns_bps: list[float] = []
        execution_rejected = 0
        execution_partial = 0
        for rel_idx in fold.test_indices:
            entry_bar_index = normalized_entries[rel_idx]
            bar = bars[entry_bar_index]
            trade = _simulate_execution_adjusted_return_bps(
                execution_model=execution_return_model,
                bar=bar,
                label=ordered_labels[rel_idx],
                side=side,
                spec=triple_barrier_spec,
                commission_bps=float(cost_model.commission_bps or 0.0),
            )
            if trade["status"] == "REJECTED":
                execution_rejected += 1
                continue
            execution_returns_bps.append(float(trade["return_bps"]))
            if trade["status"] == "PARTIAL":
                execution_partial += 1
        fold_results.append(
            BacktestFoldResult(
                fold_index=fold_idx,
@@ -139,13 +180,49 @@ def run_v2_backtest_pipeline(
                train_label_distribution=_label_dist(train_labels),
                test_label_distribution=_label_dist(test_labels),
                baseline_scores=[
                    BaselineScore(
                        name="B0",
                        accuracy=_score_constant(b0_pred, test_labels),
                        cost_adjusted_accuracy=_score_with_execution(
                            prediction=b0_pred,
                            actual=test_labels,
                            sessions=test_sessions,
                            reference_prices=test_prices,
                            execution_model=execution_model,
                            commission_bps=float(cost_model.commission_bps or 0.0),
                        ),
                    ),
                    BaselineScore(
                        name="B1",
                        accuracy=_score_constant(1, test_labels),
                        cost_adjusted_accuracy=_score_with_execution(
                            prediction=1,
                            actual=test_labels,
                            sessions=test_sessions,
                            reference_prices=test_prices,
                            execution_model=execution_model,
                            commission_bps=float(cost_model.commission_bps or 0.0),
                        ),
                    ),
                    BaselineScore(
                        name="M1",
                        accuracy=_score_constant(m1_pred, test_labels),
                        cost_adjusted_accuracy=_score_with_execution(
                            prediction=m1_pred,
                            actual=test_labels,
                            sessions=test_sessions,
                            reference_prices=test_prices,
                            execution_model=execution_model,
                            commission_bps=float(cost_model.commission_bps or 0.0),
                        ),
                    ),
                ],
                execution_adjusted_avg_return_bps=(
                    mean(execution_returns_bps) if execution_returns_bps else 0.0
                ),
                execution_adjusted_trade_count=len(execution_returns_bps),
                execution_rejected_count=execution_rejected,
                execution_partial_count=execution_partial,
            )
        )
@@ -176,12 +253,15 @@ def _score_constant(pred: int, actual: Sequence[int]) -> float:
def _baseline_b0(train_labels: Sequence[int], test_labels: Sequence[int]) -> float:
    return _score_constant(_baseline_b0_pred(train_labels), test_labels)


def _baseline_b0_pred(train_labels: Sequence[int]) -> int:
    if not train_labels:
        return 0
    # Majority-class baseline from training fold.
    choices = (-1, 0, 1)
    return max(choices, key=lambda c: train_labels.count(c))


def _m1_pred(train_labels: Sequence[int]) -> int:
@@ -190,6 +270,56 @@ def _m1_pred(train_labels: Sequence[int]) -> int:
    return train_labels[-1]
def _build_execution_model(
    *,
    cost_model: BacktestCostModel,
    fold_seed: int,
) -> BacktestExecutionModel:
    return BacktestExecutionModel(
        ExecutionAssumptions(
            slippage_bps_by_session=dict(cost_model.slippage_bps_by_session or {}),
            failure_rate_by_session=dict(cost_model.failure_rate_by_session or {}),
            partial_fill_rate_by_session=dict(cost_model.partial_fill_rate_by_session or {}),
            seed=fold_seed,
        )
    )


def _score_with_execution(
    *,
    prediction: int,
    actual: Sequence[int],
    sessions: Sequence[str],
    reference_prices: Sequence[float],
    execution_model: BacktestExecutionModel,
    commission_bps: float,
) -> float:
    if not actual:
        return 0.0
    contributions: list[float] = []
    for label, session_id, reference_price in zip(actual, sessions, reference_prices, strict=True):
        if prediction == 0:
            contributions.append(1.0 if label == 0 else 0.0)
            continue
        side = "BUY" if prediction > 0 else "SELL"
        execution = execution_model.simulate(
            ExecutionRequest(
                side=side,
                session_id=session_id,
                qty=100,
                reference_price=reference_price,
            )
        )
        if execution.status == "REJECTED":
            contributions.append(0.0)
            continue
        fill_ratio = execution.filled_qty / 100.0
        cost_penalty = min(0.99, (commission_bps + execution.slippage_bps) / 10000.0)
        correctness = 1.0 if prediction == label else 0.0
        contributions.append(correctness * fill_ratio * (1.0 - cost_penalty))
    return mean(contributions)
def _build_run_id(*, n_entries: int, n_folds: int, sessions: Sequence[str]) -> str:
    sess_key = "_".join(sessions)
    return f"v2p-e{n_entries}-f{n_folds}-s{sess_key}"
@@ -198,3 +328,58 @@ def _build_run_id(*, n_entries: int, n_folds: int, sessions: Sequence[str]) -> s
def fold_has_leakage(fold: WalkForwardFold) -> bool:
    """Utility for tests/verification: True when train/test overlap exists."""
    return bool(set(fold.train_indices).intersection(fold.test_indices))
def _simulate_execution_adjusted_return_bps(
    *,
    execution_model: BacktestExecutionModel,
    bar: BacktestBar,
    label: int,
    side: int,
    spec: TripleBarrierSpec,
    commission_bps: float,
) -> dict[str, float | str]:
    qty = 100
    entry_req = ExecutionRequest(
        side="BUY" if side == 1 else "SELL",
        session_id=bar.session_id,
        qty=qty,
        reference_price=float(bar.close),
    )
    entry_fill = execution_model.simulate(entry_req)
    if entry_fill.status == "REJECTED":
        return {"status": "REJECTED", "return_bps": 0.0}
    exit_qty = entry_fill.filled_qty
    if label == 1:
        gross_return_bps = spec.take_profit_pct * 10000.0
    elif label == -1:
        gross_return_bps = -spec.stop_loss_pct * 10000.0
    else:
        gross_return_bps = 0.0
    if side == 1:
        exit_price = float(bar.close) * (1.0 + gross_return_bps / 10000.0)
    else:
        exit_price = float(bar.close) * (1.0 - gross_return_bps / 10000.0)
    exit_req = ExecutionRequest(
        side="SELL" if side == 1 else "BUY",
        session_id=bar.session_id,
        qty=exit_qty,
        reference_price=max(0.01, exit_price),
    )
    exit_fill = execution_model.simulate(exit_req)
    if exit_fill.status == "REJECTED":
        return {"status": "REJECTED", "return_bps": 0.0}
    fill_ratio = min(entry_fill.filled_qty, exit_fill.filled_qty) / qty
    cost_bps = (
        float(entry_fill.slippage_bps)
        + float(exit_fill.slippage_bps)
        + (2.0 * float(commission_bps))
    )
    net_return_bps = (gross_return_bps * fill_ratio) - cost_bps
    is_partial = entry_fill.status == "PARTIAL" or exit_fill.status == "PARTIAL"
    status = "PARTIAL" if is_partial else "FILLED"
    return {"status": status, "return_bps": net_return_bps}


@@ -23,6 +23,7 @@ class BlackoutWindow:
class QueuedOrderIntent:
    market_code: str
    exchange_code: str
    session_id: str
    stock_code: str
    order_type: str
    quantity: int
@@ -68,11 +69,16 @@ class BlackoutOrderManager:
        self._queue: deque[QueuedOrderIntent] = deque()
        self._was_blackout = False
        self._max_queue_size = max_queue_size
        self._overflow_drop_count = 0

    @property
    def pending_count(self) -> int:
        return len(self._queue)

    @property
    def overflow_drop_count(self) -> int:
        return self._overflow_drop_count
    def in_blackout(self, now: datetime | None = None) -> bool:
        if not self.enabled or not self._windows:
            return False
@@ -81,8 +87,11 @@ class BlackoutOrderManager:
        return any(window.contains(kst_now) for window in self._windows)

    def enqueue(self, intent: QueuedOrderIntent) -> bool:
        if self._max_queue_size <= 0:
            return False
        if len(self._queue) >= self._max_queue_size:
            self._queue.popleft()
            self._overflow_drop_count += 1
        self._queue.append(intent)
        return True


@@ -3,13 +3,14 @@
Order is fixed:
1) block new orders
2) cancel pending orders
3) refresh order state (retry up to 3 attempts with exponential backoff)
4) reduce risk
5) snapshot and notify
"""
from __future__ import annotations from __future__ import annotations
import asyncio
import inspect import inspect
from collections.abc import Awaitable, Callable from collections.abc import Awaitable, Callable
from dataclasses import dataclass, field from dataclasses import dataclass, field
@@ -34,16 +35,55 @@ class KillSwitchOrchestrator:
report: KillSwitchReport, report: KillSwitchReport,
name: str, name: str,
fn: StepCallable | None, fn: StepCallable | None,
) -> None: ) -> bool:
report.steps.append(name) report.steps.append(name)
if fn is None: if fn is None:
return return True
try: try:
result = fn() result = fn()
if inspect.isawaitable(result): if inspect.isawaitable(result):
await result await result
if result is False:
raise RuntimeError("step returned False")
return True
except Exception as exc: # pragma: no cover - intentionally resilient except Exception as exc: # pragma: no cover - intentionally resilient
report.errors.append(f"{name}: {exc}") report.errors.append(f"{name}: {exc}")
return False
async def _run_refresh_with_retry(
self,
report: KillSwitchReport,
fn: StepCallable | None,
*,
max_attempts: int,
base_delay_sec: float,
) -> None:
report.steps.append("refresh_order_state")
if fn is None:
return
attempts = max(1, max_attempts)
delay = max(0.0, base_delay_sec)
last_exc: Exception | None = None
for attempt in range(1, attempts + 1):
try:
result = fn()
if inspect.isawaitable(result):
await result
if result is False:
raise RuntimeError("step returned False")
return
except Exception as exc:
last_exc = exc
if attempt >= attempts:
break
if delay > 0:
await asyncio.sleep(delay * (2 ** (attempt - 1)))
if last_exc is not None:
report.errors.append(
"refresh_order_state: failed after "
f"{attempts} attempts ({last_exc})"
)
async def trigger( async def trigger(
self, self,
@@ -54,6 +94,8 @@ class KillSwitchOrchestrator:
reduce_risk: StepCallable | None = None, reduce_risk: StepCallable | None = None,
snapshot_state: StepCallable | None = None, snapshot_state: StepCallable | None = None,
notify: StepCallable | None = None, notify: StepCallable | None = None,
refresh_retry_attempts: int = 3,
refresh_retry_base_delay_sec: float = 1.0,
) -> KillSwitchReport: ) -> KillSwitchReport:
report = KillSwitchReport(reason=reason) report = KillSwitchReport(reason=reason)
@@ -61,7 +103,12 @@ class KillSwitchOrchestrator:
report.steps.append("block_new_orders") report.steps.append("block_new_orders")
await self._run_step(report, "cancel_pending_orders", cancel_pending_orders) await self._run_step(report, "cancel_pending_orders", cancel_pending_orders)
await self._run_step(report, "refresh_order_state", refresh_order_state) await self._run_refresh_with_retry(
report,
refresh_order_state,
max_attempts=refresh_retry_attempts,
base_delay_sec=refresh_retry_base_delay_sec,
)
await self._run_step(report, "reduce_risk", reduce_risk) await self._run_step(report, "reduce_risk", reduce_risk)
await self._run_step(report, "snapshot_state", snapshot_state) await self._run_step(report, "snapshot_state", snapshot_state)
await self._run_step(report, "notify", notify) await self._run_step(report, "notify", notify)
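A minimal standalone sketch of the retry pattern used by `_run_refresh_with_retry`, assuming the same backoff schedule `base * 2**(attempt-1)` between attempts; names here are illustrative and the final failure is returned rather than raised, mirroring the orchestrator's "record and continue" posture:

```python
import asyncio


async def run_with_retry(fn, *, max_attempts: int = 3, base_delay_sec: float = 1.0):
    """Call fn up to max_attempts times with exponential backoff between
    failures; returns (result, None) on success or (None, last_exc)."""
    attempts = max(1, max_attempts)
    delay = max(0.0, base_delay_sec)
    last_exc = None
    for attempt in range(1, attempts + 1):
        try:
            result = fn()
            if asyncio.iscoroutine(result):
                result = await result
            return result, None
        except Exception as exc:
            last_exc = exc
            if attempt < attempts and delay > 0:
                # Delays: base, 2*base, 4*base, ... before each retry.
                await asyncio.sleep(delay * (2 ** (attempt - 1)))
    return None, last_exc


calls = {"n": 0}


def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient")
    return "ok"


print(asyncio.run(run_with_retry(flaky, max_attempts=3, base_delay_sec=0.0)))
# ('ok', None)
```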

@@ -318,7 +318,7 @@ def get_latest_buy_trade(
     if exchange_code:
         cursor = conn.execute(
             """
-            SELECT decision_id, price, quantity
+            SELECT decision_id, price, quantity, selection_context
             FROM trades
             WHERE stock_code = ?
               AND market = ?
@@ -339,7 +339,7 @@
     else:
         cursor = conn.execute(
             """
-            SELECT decision_id, price, quantity
+            SELECT decision_id, price, quantity, selection_context
             FROM trades
             WHERE stock_code = ?
               AND market = ?

@@ -128,6 +128,84 @@ def _resolve_sell_qty_for_pnl(*, sell_qty: int | None, buy_qty: int | None) -> i
     return max(0, int(buy_qty or 0))


+def _extract_fx_rate_from_sources(*sources: dict[str, Any] | None) -> float | None:
+    """Best-effort FX rate extraction from broker payloads."""
+    # KIS overseas payloads expose exchange-rate fields with varying key names
+    # across endpoints/responses (price, balance, buying power). Keep this list
+    # centralised so schema drifts can be patched in one place.
+    rate_keys = (
+        "frst_bltn_exrt",
+        "bass_exrt",
+        "ovrs_exrt",
+        "aply_xchg_rt",
+        "xchg_rt",
+        "exchange_rate",
+        "fx_rate",
+    )
+    for source in sources:
+        if not isinstance(source, dict):
+            continue
+        for key in rate_keys:
+            rate = safe_float(source.get(key), 0.0)
+            if rate > 0:
+                return rate
+    return None
+
+
+def _split_trade_pnl_components(
+    *,
+    market: MarketInfo,
+    trade_pnl: float,
+    buy_price: float,
+    sell_price: float,
+    quantity: int,
+    buy_fx_rate: float | None = None,
+    sell_fx_rate: float | None = None,
+) -> tuple[float, float]:
+    """Split total trade pnl into strategy/fx components.
+
+    For overseas symbols, use buy/sell FX rates when both are available.
+    Otherwise preserve backward-compatible behaviour (all strategy pnl).
+    """
+    if trade_pnl == 0.0:
+        return 0.0, 0.0
+    if market.is_domestic:
+        return trade_pnl, 0.0
+    if (
+        buy_fx_rate is not None
+        and sell_fx_rate is not None
+        and buy_fx_rate > 0
+        and sell_fx_rate > 0
+        and quantity > 0
+        and buy_price > 0
+        and sell_price > 0
+    ):
+        buy_notional = buy_price * quantity
+        fx_return = (sell_fx_rate - buy_fx_rate) / buy_fx_rate
+        fx_pnl = buy_notional * fx_return
+        strategy_pnl = trade_pnl - fx_pnl
+        return strategy_pnl, fx_pnl
+    return trade_pnl, 0.0
+
+
+def _extract_buy_fx_rate(buy_trade: dict[str, Any] | None) -> float | None:
+    if not buy_trade:
+        return None
+    raw_ctx = buy_trade.get("selection_context")
+    if not isinstance(raw_ctx, str) or not raw_ctx.strip():
+        return None
+    try:
+        decoded = json.loads(raw_ctx)
+    except (TypeError, ValueError):
+        return None
+    if not isinstance(decoded, dict):
+        return None
+    rate = safe_float(decoded.get("fx_rate"), 0.0)
+    return rate if rate > 0 else None
+
+
 def _compute_kr_dynamic_stop_loss_pct(
     *,
     market: MarketInfo | None = None,
@@ -926,6 +1004,7 @@ async def build_overseas_symbol_universe(
 def _build_queued_order_intent(
     *,
     market: MarketInfo,
+    session_id: str,
     stock_code: str,
     order_type: str,
     quantity: int,
@@ -935,6 +1014,7 @@ def _build_queued_order_intent(
     return QueuedOrderIntent(
         market_code=market.code,
         exchange_code=market.exchange_code,
+        session_id=session_id,
         stock_code=stock_code,
         order_type=order_type,
         quantity=quantity,
@@ -947,6 +1027,7 @@ def _build_queued_order_intent(
 def _maybe_queue_order_intent(
     *,
     market: MarketInfo,
+    session_id: str,
     stock_code: str,
     order_type: str,
     quantity: int,
@@ -956,9 +1037,11 @@ def _maybe_queue_order_intent(
     if not BLACKOUT_ORDER_MANAGER.in_blackout():
         return False

+    before_overflow_drops = BLACKOUT_ORDER_MANAGER.overflow_drop_count
     queued = BLACKOUT_ORDER_MANAGER.enqueue(
         _build_queued_order_intent(
             market=market,
+            session_id=session_id,
             stock_code=stock_code,
             order_type=order_type,
             quantity=quantity,
@@ -967,6 +1050,7 @@ def _maybe_queue_order_intent(
         )
     )
     if queued:
+        after_overflow_drops = BLACKOUT_ORDER_MANAGER.overflow_drop_count
         logger.warning(
             (
                 "Blackout active: queued order intent %s %s (%s) "
@@ -980,9 +1064,22 @@ def _maybe_queue_order_intent(
             source,
             BLACKOUT_ORDER_MANAGER.pending_count,
         )
+        if after_overflow_drops > before_overflow_drops:
+            logger.error(
+                (
+                    "Blackout queue overflow policy applied: evicted oldest intent "
+                    "to keep latest %s %s (%s) source=%s pending=%d total_evicted=%d"
+                ),
+                order_type,
+                stock_code,
+                market.code,
+                source,
+                BLACKOUT_ORDER_MANAGER.pending_count,
+                after_overflow_drops,
+            )
     else:
         logger.error(
-            "Blackout queue full: dropped order intent %s %s (%s) qty=%d source=%s",
+            "Blackout queue unavailable: could not queue order intent %s %s (%s) qty=%d source=%s",
             order_type,
             stock_code,
             market.code,
@@ -1115,7 +1212,6 @@ async def process_blackout_recovery_orders(
             accepted = result.get("rt_cd", "0") == "0"
             if accepted:
-                runtime_session_id = get_session_info(market).session_id
                 log_trade(
                     conn=db_conn,
                     stock_code=intent.stock_code,
@@ -1127,7 +1223,7 @@ async def process_blackout_recovery_orders(
                     pnl=0.0,
                     market=market.code,
                     exchange_code=market.exchange_code,
-                    session_id=runtime_session_id,
+                    session_id=intent.session_id,
                 )
                 logger.info(
                     "Recovered queued order executed: %s %s (%s) qty=%d price=%.4f source=%s",
@@ -1279,7 +1375,10 @@ async def _cancel_pending_orders_for_kill_switch(
     )
     if failures:
-        raise RuntimeError("; ".join(failures[:3]))
+        summary = "; ".join(failures[:3])
+        if len(failures) > 3:
+            summary = f"{summary} (+{len(failures) - 3} more)"
+        raise RuntimeError(summary)


 async def _refresh_order_state_for_kill_switch(
@@ -1288,6 +1387,7 @@ async def _refresh_order_state_for_kill_switch(
     overseas_broker: OverseasBroker,
     markets: list[MarketInfo],
 ) -> None:
+    failures: list[str] = []
     seen_overseas: set[str] = set()
     for market in markets:
         try:
@@ -1303,6 +1403,12 @@ async def _refresh_order_state_for_kill_switch(
                 market.exchange_code,
                 exc,
             )
+            failures.append(f"{market.code}/{market.exchange_code}: {exc}")
+    if failures:
+        summary = "; ".join(failures[:3])
+        if len(failures) > 3:
+            summary = f"{summary} (+{len(failures) - 3} more)"
+        raise RuntimeError(summary)


 def _reduce_risk_for_kill_switch() -> None:
@@ -1372,6 +1478,7 @@ async def trading_cycle(
     _session_risk_overrides(market=market, settings=settings)

     # 1. Fetch market data
+    balance_info: dict[str, Any] = {}
    price_output: dict[str, Any] = {}  # Populated for overseas markets; used for fallback metrics
     if market.is_domestic:
         current_price, price_change_pct, foreigner_net = await broker.get_current_price(stock_code)
@@ -1394,8 +1501,6 @@ async def trading_cycle(
                 balance_info = output2[0]
             elif isinstance(output2, dict):
                 balance_info = output2
-            else:
-                balance_info = {}

         total_eval = safe_float(balance_info.get("frcr_evlu_tota", "0") or "0")
         purchase_total = safe_float(balance_info.get("frcr_buy_amt_smtl", "0") or "0")
@@ -1815,6 +1920,9 @@ async def trading_cycle(
     quantity = 0
     trade_price = current_price
     trade_pnl = 0.0
+    buy_trade: dict[str, Any] | None = None
+    buy_price = 0.0
+    sell_qty = 0
     if decision.action in ("BUY", "SELL"):
         if KILL_SWITCH.new_orders_blocked and decision.action == "BUY":
             logger.critical(
@@ -1962,6 +2070,7 @@ async def trading_cycle(
                 return
             if _maybe_queue_order_intent(
                 market=market,
+                session_id=runtime_session_id,
                 stock_code=stock_code,
                 order_type=decision.action,
                 quantity=quantity,
@@ -2009,6 +2118,7 @@ async def trading_cycle(
                 return
             if _maybe_queue_order_intent(
                 market=market,
+                session_id=runtime_session_id,
                 stock_code=stock_code,
                 order_type=decision.action,
                 quantity=quantity,
@@ -2129,6 +2239,26 @@ async def trading_cycle(
             "signal": candidate.signal,
             "score": candidate.score,
         }
+    sell_fx_rate = _extract_fx_rate_from_sources(price_output, balance_info)
+    if sell_fx_rate is not None and not market.is_domestic:
+        if selection_context is None:
+            selection_context = {"fx_rate": sell_fx_rate}
+        else:
+            selection_context["fx_rate"] = sell_fx_rate
+
+    strategy_pnl: float | None = None
+    fx_pnl: float | None = None
+    if decision.action == "SELL" and order_succeeded:
+        buy_fx_rate = _extract_buy_fx_rate(buy_trade)
+        strategy_pnl, fx_pnl = _split_trade_pnl_components(
+            market=market,
+            trade_pnl=trade_pnl,
+            buy_price=buy_price,
+            sell_price=trade_price,
+            quantity=sell_qty or quantity,
+            buy_fx_rate=buy_fx_rate,
+            sell_fx_rate=sell_fx_rate,
+        )

     log_trade(
         conn=db_conn,
@@ -2139,6 +2269,8 @@ async def trading_cycle(
         quantity=quantity,
         price=trade_price,
         pnl=trade_pnl,
+        strategy_pnl=strategy_pnl,
+        fx_pnl=fx_pnl,
         market=market.code,
         exchange_code=market.exchange_code,
         session_id=runtime_session_id,
@@ -2737,6 +2869,7 @@ async def run_daily_session(
                 )
                 continue

+            balance_info: dict[str, Any] = {}
             if market.is_domestic:
                 output2 = balance_data.get("output2", [{}])
                 total_eval = safe_float(output2[0].get("tot_evlu_amt", "0")) if output2 else 0
@@ -2991,6 +3124,9 @@ async def run_daily_session(
                 quantity = 0
                 trade_price = stock_data["current_price"]
                 trade_pnl = 0.0
+                buy_trade: dict[str, Any] | None = None
+                buy_price = 0.0
+                sell_qty = 0
                 order_succeeded = True
                 if decision.action in ("BUY", "SELL"):
                     if KILL_SWITCH.new_orders_blocked and decision.action == "BUY":
@@ -3143,6 +3279,7 @@ async def run_daily_session(
                         continue
                     if _maybe_queue_order_intent(
                         market=market,
+                        session_id=runtime_session_id,
                         stock_code=stock_code,
                         order_type=decision.action,
                         quantity=quantity,
@@ -3180,6 +3317,7 @@ async def run_daily_session(
                         continue
                     if _maybe_queue_order_intent(
                         market=market,
+                        session_id=runtime_session_id,
                         stock_code=stock_code,
                         order_type=decision.action,
                         quantity=quantity,
@@ -3273,6 +3411,30 @@ async def run_daily_session(
                 # Log trade (skip if order was rejected by API)
                 if decision.action in ("BUY", "SELL") and not order_succeeded:
                     continue
+                strategy_pnl: float | None = None
+                fx_pnl: float | None = None
+                selection_context: dict[str, Any] | None = None
+                if decision.action == "SELL" and order_succeeded:
+                    buy_fx_rate = _extract_buy_fx_rate(buy_trade)
+                    sell_fx_rate = _extract_fx_rate_from_sources(balance_info, stock_data)
+                    strategy_pnl, fx_pnl = _split_trade_pnl_components(
+                        market=market,
+                        trade_pnl=trade_pnl,
+                        buy_price=buy_price,
+                        sell_price=trade_price,
+                        quantity=sell_qty or quantity,
+                        buy_fx_rate=buy_fx_rate,
+                        sell_fx_rate=sell_fx_rate,
+                    )
+                    if sell_fx_rate is not None and not market.is_domestic:
+                        # Daily path does not carry scanner candidate metrics, so this
+                        # context intentionally stores FX snapshot only.
+                        selection_context = {"fx_rate": sell_fx_rate}
+                elif not market.is_domestic:
+                    snapshot_fx_rate = _extract_fx_rate_from_sources(balance_info, stock_data)
+                    if snapshot_fx_rate is not None:
+                        # BUY/HOLD in daily path: persist FX snapshot for later SELL split.
+                        selection_context = {"fx_rate": snapshot_fx_rate}
                 log_trade(
                     conn=db_conn,
                     stock_code=stock_code,
@@ -3282,9 +3444,12 @@ async def run_daily_session(
                     quantity=quantity,
                     price=trade_price,
                     pnl=trade_pnl,
+                    strategy_pnl=strategy_pnl,
+                    fx_pnl=fx_pnl,
                     market=market.code,
                     exchange_code=market.exchange_code,
                     session_id=runtime_session_id,
+                    selection_context=selection_context,
                     decision_id=decision_id,
                     mode=settings.MODE,
                 )
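The strategy/FX split used by `_split_trade_pnl_components` reduces to a short piece of arithmetic. A simplified sketch with hypothetical numbers (guard clauses from the diff omitted; an overseas position of 10 shares bought at 100 with the FX rate moving 1300 to 1350 over the holding period):

```python
def split_trade_pnl(
    trade_pnl: float,
    buy_price: float,
    quantity: int,
    buy_fx_rate: float,
    sell_fx_rate: float,
) -> tuple[float, float]:
    # FX pnl: revalue the buy-side notional at the FX move over the holding
    # period; whatever remains of the total pnl is the strategy component.
    buy_notional = buy_price * quantity
    fx_return = (sell_fx_rate - buy_fx_rate) / buy_fx_rate
    fx_pnl = buy_notional * fx_return
    return trade_pnl - fx_pnl, fx_pnl


# Total trade pnl 50.0; FX alone contributes 1000 * 50/1300 ≈ 38.46 of it.
strategy_pnl, fx_pnl = split_trade_pnl(50.0, 100.0, 10, 1300.0, 1350.0)
```

By construction the two components always sum back to the original `trade_pnl`, which is why the diff can persist them alongside `pnl` without changing totals.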

@@ -85,8 +85,8 @@ def evaluate_exit(
         reason = "atr_trailing_stop"
     elif be_lock_threat:
         reason = "be_lock_threat"
-    elif model_exit_signal:
-        reason = "model_liquidity_exit"
+    elif model_exit_signal and next_state == PositionState.BE_LOCK:
+        reason = "model_assist_be_lock"
     elif take_profit_hit:
         # Backward-compatible immediate profit-taking path.
         reason = "arm_take_profit"

@@ -40,7 +40,8 @@ def evaluate_exit_first(inp: StateTransitionInput) -> bool:
     EXITED must be evaluated before any promotion.
     """
-    return inp.hard_stop_hit or inp.trailing_stop_hit or inp.model_exit_signal or inp.be_lock_threat
+    # model_exit_signal is assist-only and must not trigger EXIT directly.
+    return inp.hard_stop_hit or inp.trailing_stop_hit or inp.be_lock_threat


 def promote_state(current: PositionState, inp: StateTransitionInput) -> PositionState:
@@ -61,5 +62,8 @@ def promote_state(current: PositionState, inp: StateTransitionInput) -> Position
         target = PositionState.ARMED
     elif inp.unrealized_pnl_pct >= inp.be_arm_pct:
         target = PositionState.BE_LOCK
+    elif inp.model_exit_signal:
+        # Model signal assists risk posture by tightening to BE_LOCK.
+        target = PositionState.BE_LOCK
     return target if _STATE_RANK[target] > _STATE_RANK[current] else current
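The rank-gated promotion at the end of `promote_state` can be illustrated in isolation. The state names match the diff, but the rank ordering below is an assumption for the sketch; the real module defines its own `_STATE_RANK`:

```python
from enum import IntEnum


class State(IntEnum):
    # Hypothetical ranks for illustration only.
    ENTERED = 0
    BE_LOCK = 1
    ARMED = 2


def promote(current: State, target: State) -> State:
    # Rank-gated promotion: model_exit_signal may raise posture to BE_LOCK,
    # but a lower-ranked target can never demote an already-higher state.
    return target if target > current else current


assert promote(State.ENTERED, State.BE_LOCK) is State.BE_LOCK
assert promote(State.ARMED, State.BE_LOCK) is State.ARMED  # no demotion
```

This monotonicity is what makes the assist-only model signal safe: it can only tighten risk posture, never loosen it or force an exit.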

@@ -10,6 +10,7 @@ def test_valid_backtest_cost_model_passes() -> None:
         commission_bps=5.0,
         slippage_bps_by_session={"KRX_REG": 10.0, "US_PRE": 50.0},
         failure_rate_by_session={"KRX_REG": 0.01, "US_PRE": 0.08},
+        partial_fill_rate_by_session={"KRX_REG": 0.1, "US_PRE": 0.2},
         unfavorable_fill_required=True,
     )
     validate_backtest_cost_model(model=model, required_sessions=["KRX_REG", "US_PRE"])
@@ -20,6 +21,7 @@ def test_missing_required_slippage_session_raises() -> None:
         commission_bps=5.0,
         slippage_bps_by_session={"KRX_REG": 10.0},
         failure_rate_by_session={"KRX_REG": 0.01, "US_PRE": 0.08},
+        partial_fill_rate_by_session={"KRX_REG": 0.1, "US_PRE": 0.2},
         unfavorable_fill_required=True,
     )
     with pytest.raises(ValueError, match="missing slippage_bps_by_session.*US_PRE"):
@@ -31,6 +33,7 @@ def test_missing_required_failure_rate_session_raises() -> None:
         commission_bps=5.0,
         slippage_bps_by_session={"KRX_REG": 10.0, "US_PRE": 50.0},
         failure_rate_by_session={"KRX_REG": 0.01},
+        partial_fill_rate_by_session={"KRX_REG": 0.1, "US_PRE": 0.2},
         unfavorable_fill_required=True,
     )
     with pytest.raises(ValueError, match="missing failure_rate_by_session.*US_PRE"):
@@ -42,6 +45,7 @@ def test_invalid_failure_rate_range_raises() -> None:
         commission_bps=5.0,
         slippage_bps_by_session={"KRX_REG": 10.0},
         failure_rate_by_session={"KRX_REG": 1.2},
+        partial_fill_rate_by_session={"KRX_REG": 0.2},
         unfavorable_fill_required=True,
     )
     with pytest.raises(ValueError, match="failure rate must be within"):
@@ -53,6 +57,7 @@ def test_unfavorable_fill_requirement_cannot_be_disabled() -> None:
         commission_bps=5.0,
         slippage_bps_by_session={"KRX_REG": 10.0},
         failure_rate_by_session={"KRX_REG": 0.02},
+        partial_fill_rate_by_session={"KRX_REG": 0.2},
         unfavorable_fill_required=False,
     )
     with pytest.raises(ValueError, match="unfavorable_fill_required must be True"):
@@ -65,6 +70,7 @@ def test_non_finite_commission_rejected(bad_commission: float) -> None:
         commission_bps=bad_commission,
         slippage_bps_by_session={"KRX_REG": 10.0},
         failure_rate_by_session={"KRX_REG": 0.02},
+        partial_fill_rate_by_session={"KRX_REG": 0.2},
         unfavorable_fill_required=True,
     )
     with pytest.raises(ValueError, match="commission_bps"):
@@ -77,7 +83,33 @@ def test_non_finite_slippage_rejected(bad_slippage: float) -> None:
         commission_bps=5.0,
         slippage_bps_by_session={"KRX_REG": bad_slippage},
         failure_rate_by_session={"KRX_REG": 0.02},
+        partial_fill_rate_by_session={"KRX_REG": 0.2},
         unfavorable_fill_required=True,
     )
     with pytest.raises(ValueError, match="slippage bps"):
         validate_backtest_cost_model(model=model, required_sessions=["KRX_REG"])
+
+
+def test_missing_required_partial_fill_session_raises() -> None:
+    model = BacktestCostModel(
+        commission_bps=5.0,
+        slippage_bps_by_session={"KRX_REG": 10.0, "US_PRE": 50.0},
+        failure_rate_by_session={"KRX_REG": 0.01, "US_PRE": 0.08},
+        partial_fill_rate_by_session={"KRX_REG": 0.1},
+        unfavorable_fill_required=True,
+    )
+    with pytest.raises(ValueError, match="missing partial_fill_rate_by_session.*US_PRE"):
+        validate_backtest_cost_model(model=model, required_sessions=["KRX_REG", "US_PRE"])
+
+
+@pytest.mark.parametrize("bad_partial_fill", [float("nan"), float("inf"), float("-inf"), -0.1, 1.1])
+def test_invalid_partial_fill_rate_rejected(bad_partial_fill: float) -> None:
+    model = BacktestCostModel(
+        commission_bps=5.0,
+        slippage_bps_by_session={"KRX_REG": 10.0},
+        failure_rate_by_session={"KRX_REG": 0.02},
+        partial_fill_rate_by_session={"KRX_REG": bad_partial_fill},
+        unfavorable_fill_required=True,
+    )
+    with pytest.raises(ValueError, match="partial fill rate must be within"):
+        validate_backtest_cost_model(model=model, required_sessions=["KRX_REG"])
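The partial-fill checks exercised above imply a validation of roughly this shape. This is a sketch, not the real `validate_backtest_cost_model`; only the error messages mirror the test regexes, and the function name is illustrative:

```python
import math


def validate_rates_by_session(
    rates: dict[str, float],
    required_sessions: list[str],
    *,
    field: str = "partial_fill_rate_by_session",
) -> None:
    """Every required session must be present, and every rate must be a
    finite probability in [0, 1]."""
    for session in required_sessions:
        if session not in rates:
            raise ValueError(f"missing {field} for session {session}")
    for session, rate in rates.items():
        # isfinite rejects nan/inf before the range comparison runs.
        if not math.isfinite(rate) or not 0.0 <= rate <= 1.0:
            raise ValueError(
                f"partial fill rate must be within [0, 1]: {session}={rate}"
            )
```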

@@ -35,6 +35,7 @@ def _cost_model() -> BacktestCostModel:
         commission_bps=3.0,
         slippage_bps_by_session={"KRX_REG": 10.0, "US_PRE": 50.0},
         failure_rate_by_session={"KRX_REG": 0.01, "US_PRE": 0.08},
+        partial_fill_rate_by_session={"KRX_REG": 0.05, "US_PRE": 0.2},
         unfavorable_fill_required=True,
     )
@@ -71,6 +72,10 @@ def test_pipeline_happy_path_returns_fold_and_artifact_contract() -> None:
     assert names == {"B0", "B1", "M1"}
     for score in fold.baseline_scores:
         assert 0.0 <= score.accuracy <= 1.0
+        assert 0.0 <= score.cost_adjusted_accuracy <= 1.0
+    assert fold.execution_adjusted_trade_count >= 0
+    assert fold.execution_rejected_count >= 0
+    assert fold.execution_partial_count >= 0


 def test_pipeline_cost_guard_fail_fast() -> None:
@@ -78,6 +83,7 @@ def test_pipeline_cost_guard_fail_fast() -> None:
         commission_bps=3.0,
         slippage_bps_by_session={"KRX_REG": 10.0},
         failure_rate_by_session={"KRX_REG": 0.01},
+        partial_fill_rate_by_session={"KRX_REG": 0.05},
         unfavorable_fill_required=True,
     )
     try:
@@ -166,3 +172,49 @@ def test_pipeline_rejects_minutes_spec_when_timestamp_missing() -> None:
         assert "BacktestBar.timestamp is required" in str(exc)
     else:
         raise AssertionError("expected timestamp validation error")
+
+
+def test_pipeline_fold_scores_reflect_cost_and_execution_effects() -> None:
+    cfg = dict(
+        bars=_bars(),
+        entry_indices=[0, 1, 2, 3, 4, 5, 6, 7],
+        side=1,
+        triple_barrier_spec=TripleBarrierSpec(
+            take_profit_pct=0.02,
+            stop_loss_pct=0.01,
+            max_holding_minutes=3,
+        ),
+        walk_forward=WalkForwardConfig(
+            train_size=4,
+            test_size=2,
+            step_size=2,
+            purge_size=1,
+            embargo_size=1,
+            min_train_size=3,
+        ),
+    )
+    optimistic = BacktestCostModel(
+        commission_bps=0.0,
+        slippage_bps_by_session={"KRX_REG": 0.0, "US_PRE": 0.0},
+        failure_rate_by_session={"KRX_REG": 0.0, "US_PRE": 0.0},
+        partial_fill_rate_by_session={"KRX_REG": 0.0, "US_PRE": 0.0},
+        unfavorable_fill_required=True,
+    )
+    conservative = BacktestCostModel(
+        commission_bps=10.0,
+        slippage_bps_by_session={"KRX_REG": 30.0, "US_PRE": 80.0},
+        failure_rate_by_session={"KRX_REG": 0.2, "US_PRE": 0.4},
+        partial_fill_rate_by_session={"KRX_REG": 0.5, "US_PRE": 0.7},
+        unfavorable_fill_required=True,
+    )
+    optimistic_out = run_v2_backtest_pipeline(cost_model=optimistic, **cfg)
+    conservative_out = run_v2_backtest_pipeline(cost_model=conservative, **cfg)
+    assert optimistic_out.folds and conservative_out.folds
+    optimistic_score = optimistic_out.folds[0].baseline_scores[1].cost_adjusted_accuracy
+    conservative_score = conservative_out.folds[0].baseline_scores[1].cost_adjusted_accuracy
+    assert conservative_score < optimistic_score
+    optimistic_avg_return = optimistic_out.folds[0].execution_adjusted_avg_return_bps
+    conservative_avg_return = conservative_out.folds[0].execution_adjusted_avg_return_bps
+    assert conservative_avg_return < optimistic_avg_return

@@ -35,6 +35,7 @@ def test_recovery_batch_only_after_blackout_exit() -> None:
     intent = QueuedOrderIntent(
         market_code="KR",
         exchange_code="KRX",
+        session_id="KRX_REG",
         stock_code="005930",
         order_type="BUY",
         quantity=1,
@@ -64,6 +65,7 @@ def test_requeued_intent_is_processed_next_non_blackout_cycle() -> None:
     intent = QueuedOrderIntent(
         market_code="KR",
         exchange_code="KRX",
+        session_id="KRX_REG",
         stock_code="005930",
         order_type="BUY",
         quantity=1,
@@ -79,3 +81,54 @@ def test_requeued_intent_is_processed_next_non_blackout_cycle() -> None:
     manager.requeue(first_batch[0])
     second_batch = manager.pop_recovery_batch(outside_blackout)
     assert len(second_batch) == 1
+
+
+def test_queue_overflow_drops_oldest_and_keeps_latest() -> None:
+    manager = BlackoutOrderManager(
+        enabled=True,
+        windows=parse_blackout_windows_kst("23:30-00:10"),
+        max_queue_size=2,
+    )
+    first = QueuedOrderIntent(
+        market_code="KR",
+        exchange_code="KRX",
+        session_id="KRX_REG",
+        stock_code="000001",
+        order_type="BUY",
+        quantity=1,
+        price=100.0,
+        source="first",
+        queued_at=datetime.now(UTC),
+    )
+    second = QueuedOrderIntent(
+        market_code="KR",
+        exchange_code="KRX",
+        session_id="KRX_REG",
+        stock_code="000002",
+        order_type="BUY",
+        quantity=1,
+        price=101.0,
+        source="second",
+        queued_at=datetime.now(UTC),
+    )
+    third = QueuedOrderIntent(
+        market_code="KR",
+        exchange_code="KRX",
+        session_id="KRX_REG",
+        stock_code="000003",
+        order_type="SELL",
+        quantity=2,
+        price=102.0,
+        source="third",
+        queued_at=datetime.now(UTC),
+    )
+    assert manager.enqueue(first)
+    assert manager.enqueue(second)
+    assert manager.enqueue(third)
+    assert manager.pending_count == 2
+    assert manager.overflow_drop_count == 1
+    outside_blackout = datetime(2026, 1, 1, 15, 20, tzinfo=UTC)
+    batch = manager.pop_recovery_batch(outside_blackout)
+    assert [intent.stock_code for intent in batch] == ["000002", "000003"]

@@ -53,3 +53,52 @@ async def test_kill_switch_collects_step_errors() -> None:
     report = await ks.trigger(reason="test", cancel_pending_orders=_boom)
     assert any(err.startswith("cancel_pending_orders:") for err in report.errors)
+
+
+@pytest.mark.asyncio
+async def test_kill_switch_refresh_retries_then_succeeds() -> None:
+    ks = KillSwitchOrchestrator()
+    refresh_calls = {"count": 0}
+
+    def _flaky_refresh() -> None:
+        refresh_calls["count"] += 1
+        if refresh_calls["count"] < 3:
+            raise RuntimeError("temporary refresh failure")
+
+    report = await ks.trigger(
+        reason="test",
+        refresh_order_state=_flaky_refresh,
+        refresh_retry_attempts=3,
+        refresh_retry_base_delay_sec=0.0,
+    )
+    assert refresh_calls["count"] == 3
+    assert report.errors == []
+
+
+@pytest.mark.asyncio
+async def test_kill_switch_refresh_retry_exhausted_records_error_and_continues() -> None:
+    ks = KillSwitchOrchestrator()
+    calls: list[str] = []
+
+    def _refresh_fail() -> None:
+        raise RuntimeError("persistent refresh failure")
+
+    def _reduce() -> None:
+        calls.append("reduce")
+
+    def _snapshot() -> None:
+        calls.append("snapshot")
+
+    report = await ks.trigger(
+        reason="test",
+        refresh_order_state=_refresh_fail,
+        reduce_risk=_reduce,
+        snapshot_state=_snapshot,
+        refresh_retry_attempts=2,
+        refresh_retry_base_delay_sec=0.0,
+    )
+    assert any(
+        err.startswith("refresh_order_state: failed after 2 attempts")
+        for err in report.errors
+    )
+    assert calls == ["reduce", "snapshot"]

@@ -1,6 +1,7 @@
"""Tests for main trading loop integration."""
from datetime import UTC, date, datetime
from typing import Any
from unittest.mock import ANY, AsyncMock, MagicMock, patch
import pytest
@@ -9,6 +10,7 @@ import src.main as main_module
from src.config import Settings
from src.context.layer import ContextLayer
from src.context.scheduler import ScheduleResult
from src.core.blackout_manager import BlackoutOrderManager
from src.core.order_policy import OrderPolicyRejected, get_session_info
from src.core.risk_manager import CircuitBreakerTripped, FatFingerRejected
from src.db import init_db, log_trade
@@ -33,6 +35,7 @@ from src.main import (
_extract_held_qty_from_balance,
_handle_market_close,
_inject_staged_exit_features,
_maybe_queue_order_intent,
_resolve_market_setting,
_resolve_sell_qty_for_pnl,
_retry_connection,
@@ -40,6 +43,7 @@ from src.main import (
_run_evolution_loop,
_should_block_overseas_buy_for_fx_buffer,
_should_force_exit_for_overnight,
_split_trade_pnl_components,
_start_dashboard_server,
_stoploss_cooldown_minutes,
_trigger_emergency_kill_switch,
@@ -102,22 +106,22 @@ def _make_sell_match(stock_code: str = "005930") -> ScenarioMatch:
@pytest.fixture(autouse=True)
def _reset_kill_switch_state() -> None:
"""Prevent cross-test leakage from global kill-switch state."""
def _reset_session_risk_globals() -> None:
_SESSION_RISK_LAST_BY_MARKET.clear()
_SESSION_RISK_OVERRIDES_BY_MARKET.clear()
_SESSION_RISK_PROFILES_MAP.clear()
main_module._SESSION_RISK_PROFILES_RAW = "{}"
KILL_SWITCH.clear_block()
_RUNTIME_EXIT_STATES.clear()
_RUNTIME_EXIT_PEAKS.clear()
_reset_session_risk_globals()
_STOPLOSS_REENTRY_COOLDOWN_UNTIL.clear()
yield
KILL_SWITCH.clear_block()
_RUNTIME_EXIT_STATES.clear()
_RUNTIME_EXIT_PEAKS.clear()
_reset_session_risk_globals()
_STOPLOSS_REENTRY_COOLDOWN_UNTIL.clear()
@@ -3181,6 +3185,13 @@ async def test_sell_order_uses_broker_balance_qty_not_db() -> None:
updated_buy = decision_logger.get_decision_by_id(buy_decision_id)
assert updated_buy is not None
assert updated_buy.outcome_pnl == -25.0
sell_row = db_conn.execute(
"SELECT pnl, strategy_pnl, fx_pnl FROM trades WHERE action='SELL' ORDER BY id DESC LIMIT 1"
).fetchone()
assert sell_row is not None
assert sell_row[0] == -25.0
assert sell_row[1] == -25.0
assert sell_row[2] == 0.0
@pytest.mark.asyncio
@@ -4598,6 +4609,23 @@ def test_fx_buffer_guard_applies_only_to_us_and_respects_boundary() -> None:
assert required_jp == 0.0
def test_split_trade_pnl_components_overseas_fx_split_preserves_total() -> None:
market = MagicMock()
market.is_domestic = False
strategy_pnl, fx_pnl = _split_trade_pnl_components(
market=market,
trade_pnl=20.0,
buy_price=100.0,
sell_price=110.0,
quantity=2,
buy_fx_rate=1200.0,
sell_fx_rate=1260.0,
)
assert strategy_pnl == 10.0
assert fx_pnl == 10.0
assert strategy_pnl + fx_pnl == pytest.approx(20.0)
# run_daily_session — daily CB baseline (daily_start_eval) tests (issue #207)
# ---------------------------------------------------------------------------
@@ -6351,6 +6379,225 @@ async def test_us_min_price_filter_not_applied_to_kr_market() -> None:
broker.send_order.assert_called_once()
@pytest.mark.asyncio
async def test_session_boundary_reloads_us_min_price_override_in_trading_cycle() -> None:
db_conn = init_db(":memory:")
decision_logger = DecisionLogger(db_conn)
broker = MagicMock()
broker.get_balance = AsyncMock(return_value={"output1": [], "output2": [{}]})
overseas_broker = MagicMock()
overseas_broker.get_overseas_price = AsyncMock(
return_value={"output": {"last": "7.0", "rate": "0.0"}}
)
overseas_broker.get_overseas_balance = AsyncMock(
return_value={
"output1": [],
"output2": [{"frcr_evlu_tota": "10000", "frcr_buy_amt_smtl": "0"}],
}
)
overseas_broker.get_overseas_buying_power = AsyncMock(
return_value={"output": {"ovrs_ord_psbl_amt": "10000"}}
)
overseas_broker.send_overseas_order = AsyncMock(return_value={"rt_cd": "0", "msg1": "OK"})
market = MagicMock()
market.name = "NASDAQ"
market.code = "US_NASDAQ"
market.exchange_code = "NASD"
market.is_domestic = False
telegram = MagicMock()
telegram.notify_trade_execution = AsyncMock()
telegram.notify_fat_finger = AsyncMock()
telegram.notify_circuit_breaker = AsyncMock()
telegram.notify_scenario_matched = AsyncMock()
settings = Settings(
KIS_APP_KEY="k",
KIS_APP_SECRET="s",
KIS_ACCOUNT_NO="12345678-01",
GEMINI_API_KEY="g",
MODE="paper",
PAPER_OVERSEAS_CASH=50000.0,
US_MIN_PRICE=5.0,
USD_BUFFER_MIN=1000.0,
SESSION_RISK_RELOAD_ENABLED=True,
SESSION_RISK_PROFILES_JSON=(
'{"US_PRE": {"US_MIN_PRICE": 8.0}, "US_DAY": {"US_MIN_PRICE": 5.0}}'
),
)
current_session = {"id": "US_PRE"}
def _session_info(_: Any) -> MagicMock:
return MagicMock(session_id=current_session["id"])
with (
patch("src.main.get_open_position", return_value=None),
patch("src.main.get_session_info", side_effect=_session_info),
):
await trading_cycle(
broker=broker,
overseas_broker=overseas_broker,
scenario_engine=MagicMock(evaluate=MagicMock(return_value=_make_buy_match("AAPL"))),
playbook=_make_playbook("US_NASDAQ"),
risk=MagicMock(validate_order=MagicMock(), check_circuit_breaker=MagicMock()),
db_conn=db_conn,
decision_logger=decision_logger,
context_store=MagicMock(
get_latest_timeframe=MagicMock(return_value=None),
set_context=MagicMock(),
),
criticality_assessor=MagicMock(
assess_market_conditions=MagicMock(return_value=MagicMock(value="NORMAL")),
get_timeout=MagicMock(return_value=5.0),
),
telegram=telegram,
market=market,
stock_code="AAPL",
scan_candidates={},
settings=settings,
)
assert overseas_broker.send_overseas_order.call_count == 0
current_session["id"] = "US_DAY"
await trading_cycle(
broker=broker,
overseas_broker=overseas_broker,
scenario_engine=MagicMock(evaluate=MagicMock(return_value=_make_buy_match("AAPL"))),
playbook=_make_playbook("US_NASDAQ"),
risk=MagicMock(validate_order=MagicMock(), check_circuit_breaker=MagicMock()),
db_conn=db_conn,
decision_logger=decision_logger,
context_store=MagicMock(
get_latest_timeframe=MagicMock(return_value=None),
set_context=MagicMock(),
),
criticality_assessor=MagicMock(
assess_market_conditions=MagicMock(return_value=MagicMock(value="NORMAL")),
get_timeout=MagicMock(return_value=5.0),
),
telegram=telegram,
market=market,
stock_code="AAPL",
scan_candidates={},
settings=settings,
)
assert overseas_broker.send_overseas_order.call_count == 1
@pytest.mark.asyncio
async def test_session_boundary_falls_back_when_profile_reload_fails() -> None:
db_conn = init_db(":memory:")
decision_logger = DecisionLogger(db_conn)
broker = MagicMock()
broker.get_balance = AsyncMock(return_value={"output1": [], "output2": [{}]})
overseas_broker = MagicMock()
overseas_broker.get_overseas_price = AsyncMock(
return_value={"output": {"last": "7.0", "rate": "0.0"}}
)
overseas_broker.get_overseas_balance = AsyncMock(
return_value={
"output1": [],
"output2": [{"frcr_evlu_tota": "10000", "frcr_buy_amt_smtl": "0"}],
}
)
overseas_broker.get_overseas_buying_power = AsyncMock(
return_value={"output": {"ovrs_ord_psbl_amt": "10000"}}
)
overseas_broker.send_overseas_order = AsyncMock(return_value={"rt_cd": "0", "msg1": "OK"})
market = MagicMock()
market.name = "NASDAQ"
market.code = "US_NASDAQ"
market.exchange_code = "NASD"
market.is_domestic = False
telegram = MagicMock()
telegram.notify_trade_execution = AsyncMock()
telegram.notify_fat_finger = AsyncMock()
telegram.notify_circuit_breaker = AsyncMock()
telegram.notify_scenario_matched = AsyncMock()
settings = Settings(
KIS_APP_KEY="k",
KIS_APP_SECRET="s",
KIS_ACCOUNT_NO="12345678-01",
GEMINI_API_KEY="g",
MODE="paper",
PAPER_OVERSEAS_CASH=50000.0,
US_MIN_PRICE=5.0,
USD_BUFFER_MIN=1000.0,
SESSION_RISK_RELOAD_ENABLED=True,
SESSION_RISK_PROFILES_JSON='{"US_PRE": {"US_MIN_PRICE": 8.0}}',
)
current_session = {"id": "US_PRE"}
def _session_info(_: Any) -> MagicMock:
return MagicMock(session_id=current_session["id"])
with (
patch("src.main.get_open_position", return_value=None),
patch("src.main.get_session_info", side_effect=_session_info),
):
await trading_cycle(
broker=broker,
overseas_broker=overseas_broker,
scenario_engine=MagicMock(evaluate=MagicMock(return_value=_make_buy_match("AAPL"))),
playbook=_make_playbook("US_NASDAQ"),
risk=MagicMock(validate_order=MagicMock(), check_circuit_breaker=MagicMock()),
db_conn=db_conn,
decision_logger=decision_logger,
context_store=MagicMock(
get_latest_timeframe=MagicMock(return_value=None),
set_context=MagicMock(),
),
criticality_assessor=MagicMock(
assess_market_conditions=MagicMock(return_value=MagicMock(value="NORMAL")),
get_timeout=MagicMock(return_value=5.0),
),
telegram=telegram,
market=market,
stock_code="AAPL",
scan_candidates={},
settings=settings,
)
assert overseas_broker.send_overseas_order.call_count == 0
settings.SESSION_RISK_PROFILES_JSON = "{invalid-json"
current_session["id"] = "US_DAY"
await trading_cycle(
broker=broker,
overseas_broker=overseas_broker,
scenario_engine=MagicMock(evaluate=MagicMock(return_value=_make_buy_match("AAPL"))),
playbook=_make_playbook("US_NASDAQ"),
risk=MagicMock(validate_order=MagicMock(), check_circuit_breaker=MagicMock()),
db_conn=db_conn,
decision_logger=decision_logger,
context_store=MagicMock(
get_latest_timeframe=MagicMock(return_value=None),
set_context=MagicMock(),
),
criticality_assessor=MagicMock(
assess_market_conditions=MagicMock(return_value=MagicMock(value="NORMAL")),
get_timeout=MagicMock(return_value=5.0),
),
telegram=telegram,
market=market,
stock_code="AAPL",
scan_candidates={},
settings=settings,
)
assert overseas_broker.send_overseas_order.call_count == 1
def test_overnight_policy_prioritizes_killswitch_over_exception() -> None:
market = MagicMock()
with patch("src.main.get_session_info", return_value=MagicMock(session_id="US_AFTER")):
@@ -6475,6 +6722,7 @@ async def test_blackout_queues_order_and_skips_submission() -> None:
blackout_manager.in_blackout.return_value = True
blackout_manager.enqueue.return_value = True
blackout_manager.pending_count = 1
blackout_manager.overflow_drop_count = 0
with patch("src.main.BLACKOUT_ORDER_MANAGER", blackout_manager):
await trading_cycle(
@@ -6504,6 +6752,43 @@ async def test_blackout_queues_order_and_skips_submission() -> None:
blackout_manager.enqueue.assert_called_once()
def test_blackout_queue_overflow_keeps_latest_intent() -> None:
manager = BlackoutOrderManager(enabled=True, windows=[], max_queue_size=1)
manager.in_blackout = lambda now=None: True # type: ignore[method-assign]
market = MagicMock()
market.code = "KR"
market.exchange_code = "KRX"
with patch("src.main.BLACKOUT_ORDER_MANAGER", manager):
assert _maybe_queue_order_intent(
market=market,
session_id="KRX_REG",
stock_code="005930",
order_type="BUY",
quantity=1,
price=100.0,
source="test-first",
)
assert _maybe_queue_order_intent(
market=market,
session_id="KRX_REG",
stock_code="000660",
order_type="BUY",
quantity=2,
price=200.0,
source="test-second",
)
assert manager.pending_count == 1
assert manager.overflow_drop_count == 1
manager.in_blackout = lambda now=None: False # type: ignore[method-assign]
batch = manager.pop_recovery_batch()
assert len(batch) == 1
assert batch[0].stock_code == "000660"
assert batch[0].session_id == "KRX_REG"
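The overflow semantics asserted above (bounded queue, drop the oldest intent, count the drop, recover the latest) can be mimicked with a small standalone queue. The drop-oldest policy and attribute names are assumptions read off the assertions, not the repo's BlackoutOrderManager:

```python
from collections import deque
from dataclasses import dataclass

@dataclass
class OrderIntent:
    stock_code: str
    session_id: str

class BoundedIntentQueue:
    """Hold at most `max_size` intents; enqueue drops the oldest on overflow."""

    def __init__(self, max_size: int) -> None:
        self._queue: deque[OrderIntent] = deque()
        self._max_size = max_size
        self.overflow_drop_count = 0

    @property
    def pending_count(self) -> int:
        return len(self._queue)

    def enqueue(self, intent: OrderIntent) -> bool:
        if len(self._queue) >= self._max_size:
            self._queue.popleft()  # keep the latest intent, drop the oldest
            self.overflow_drop_count += 1
        self._queue.append(intent)
        return True

    def pop_recovery_batch(self) -> list[OrderIntent]:
        """Drain all queued intents for recovery processing."""
        batch = list(self._queue)
        self._queue.clear()
        return batch

# Same scenario as the test: capacity 1, two enqueues, latest survives.
q = BoundedIntentQueue(max_size=1)
q.enqueue(OrderIntent("005930", "KRX_REG"))
q.enqueue(OrderIntent("000660", "KRX_REG"))
batch = q.pop_recovery_batch()
```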
@pytest.mark.asyncio
async def test_process_blackout_recovery_executes_valid_intents() -> None:
"""Recovery must execute queued intents that pass revalidation."""
@@ -6525,6 +6810,7 @@ async def test_process_blackout_recovery_executes_valid_intents() -> None:
intent.quantity = 1
intent.price = 100.0
intent.source = "test"
intent.session_id = "NXT_AFTER"
intent.attempts = 0
blackout_manager = MagicMock()
@@ -6555,7 +6841,7 @@ async def test_process_blackout_recovery_executes_valid_intents() -> None:
assert row is not None
assert row[0] == "BUY"
assert row[1] == 1
assert row[2] == "NXT_AFTER"
assert row[3].startswith("[blackout-recovery]")
@@ -6580,6 +6866,7 @@ async def test_process_blackout_recovery_drops_policy_rejected_intent() -> None:
intent.quantity = 1
intent.price = 100.0
intent.source = "test"
intent.session_id = "KRX_REG"
intent.attempts = 0
blackout_manager = MagicMock()
@@ -6629,6 +6916,7 @@ async def test_process_blackout_recovery_drops_intent_on_excessive_price_drift()
intent.quantity = 1
intent.price = 100.0
intent.source = "test"
intent.session_id = "US_PRE"
intent.attempts = 0
blackout_manager = MagicMock()
@@ -6679,6 +6967,7 @@ async def test_process_blackout_recovery_drops_overseas_intent_on_excessive_pric
intent.quantity = 1
intent.price = 100.0
intent.source = "test"
intent.session_id = "KRX_REG"
intent.attempts = 0
blackout_manager = MagicMock()
@@ -6728,6 +7017,7 @@ async def test_process_blackout_recovery_requeues_intent_when_price_lookup_fails
intent.quantity = 1
intent.price = 100.0
intent.source = "test"
intent.session_id = "KRX_REG"
intent.attempts = 0
blackout_manager = MagicMock()
@@ -6864,3 +7154,27 @@ async def test_trigger_emergency_kill_switch_records_cancel_failure() -> None:
)
assert any(err.startswith("cancel_pending_orders:") for err in report.errors)
@pytest.mark.asyncio
async def test_refresh_order_state_failure_summary_includes_more_count() -> None:
broker = MagicMock()
broker.get_balance = AsyncMock(side_effect=RuntimeError("domestic down"))
overseas_broker = MagicMock()
overseas_broker.get_overseas_balance = AsyncMock(side_effect=RuntimeError("overseas down"))
markets = []
for code, exchange in [("KR", "KRX"), ("US_PRE", "NASD"), ("US_DAY", "NYSE"), ("JP", "TKSE")]:
market = MagicMock()
market.code = code
market.exchange_code = exchange
market.is_domestic = code == "KR"
markets.append(market)
with pytest.raises(RuntimeError, match=r"\(\+1 more\)$") as exc_info:
await main_module._refresh_order_state_for_kill_switch(
broker=broker,
overseas_broker=overseas_broker,
markets=markets,
)
assert "KR/KRX" in str(exc_info.value)
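The `(+1 more)` suffix this test matches suggests the refresh error aggregates per-market failure labels and truncates the tail. A plausible formatter, where the function name and the cap of three shown entries are assumptions consistent with four failing markets producing one hidden entry:

```python
def summarize_failures(failures: list[str], max_shown: int = 3) -> str:
    """Join per-market failure labels, truncating the tail to '(+N more)'."""
    shown = failures[:max_shown]
    summary = ", ".join(shown)
    hidden = len(failures) - len(shown)
    if hidden > 0:
        summary += f" (+{hidden} more)"
    return summary

# Four failing markets, as in the test: KR plus three overseas exchanges.
msg = summarize_failures(["KR/KRX", "US_PRE/NASD", "US_DAY/NYSE", "JP/TKSE"])
```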

View File

@@ -22,17 +22,18 @@ def test_take_profit_exit_for_backward_compatibility() -> None:
assert out.reason == "arm_take_profit"
def test_model_assist_signal_promotes_be_lock_without_direct_exit() -> None:
out = evaluate_exit(
current_state=PositionState.HOLDING,
config=ExitRuleConfig(model_prob_threshold=0.62, be_arm_pct=1.2, arm_pct=10.0),
inp=ExitRuleInput(
current_price=100.5,
entry_price=100.0,
peak_price=105.0,
pred_down_prob=0.8,
liquidity_weak=True,
),
)
assert out.should_exit is False
assert out.state == PositionState.BE_LOCK
assert out.reason == "model_assist_be_lock"

View File

@@ -28,3 +28,29 @@ def test_exited_has_priority_over_promotion() -> None:
),
)
assert state == PositionState.EXITED
def test_model_signal_promotes_be_lock_as_assist() -> None:
state = promote_state(
PositionState.HOLDING,
StateTransitionInput(
unrealized_pnl_pct=0.5,
be_arm_pct=1.2,
arm_pct=2.8,
model_exit_signal=True,
),
)
assert state == PositionState.BE_LOCK
def test_model_signal_does_not_force_exit_directly() -> None:
state = promote_state(
PositionState.ARMED,
StateTransitionInput(
unrealized_pnl_pct=1.0,
be_arm_pct=1.2,
arm_pct=2.8,
model_exit_signal=True,
),
)
assert state == PositionState.ARMED
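Both tests encode the same rule: a model exit signal acts only as a break-even assist, promoting HOLDING to BE_LOCK, never promoting past it or forcing an exit. A minimal sketch under that assumption; the enum members and input fields are copied from the tests, but the ordering logic and thresholds-as-promotion design are hypothetical:

```python
from dataclasses import dataclass
from enum import IntEnum

class PositionState(IntEnum):
    HOLDING = 0
    BE_LOCK = 1
    ARMED = 2
    EXITED = 3

@dataclass
class StateTransitionInput:
    unrealized_pnl_pct: float
    be_arm_pct: float
    arm_pct: float
    model_exit_signal: bool = False

def promote_state(current: PositionState, inp: StateTransitionInput) -> PositionState:
    """Promote monotonically; the model signal only assists up to BE_LOCK."""
    if current == PositionState.EXITED:
        return current  # exits always have priority over promotion
    target = current
    if inp.unrealized_pnl_pct >= inp.arm_pct:
        target = max(target, PositionState.ARMED)
    elif inp.unrealized_pnl_pct >= inp.be_arm_pct:
        target = max(target, PositionState.BE_LOCK)
    if inp.model_exit_signal:
        # assist trigger: never promotes beyond break-even lock
        target = max(target, PositionState.BE_LOCK)
    return PositionState(target)
```

Because promotion is a `max` over an ordered enum, a stale model signal can never demote an ARMED position, matching the second test.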

View File

@@ -121,3 +121,44 @@ def test_validate_testing_doc_has_dynamic_count_guidance(monkeypatch) -> None:
monkeypatch.setattr(module, "_read", fake_read)
module.validate_testing_doc_has_dynamic_count_guidance(errors)
assert errors == []
def test_validate_pr_body_postcheck_guidance_passes(monkeypatch) -> None:
module = _load_module()
errors: list[str] = []
fake_docs = {
str(module.REQUIRED_FILES["commands"]): (
"PR Body Post-Check (Mandatory)\n"
"python3 scripts/validate_pr_body.py --pr <PR_NUMBER>\n"
),
str(module.REQUIRED_FILES["workflow"]): (
"PR 생성 직후 본문 무결성 검증(필수)\n"
"python3 scripts/validate_pr_body.py --pr <PR_NUMBER>\n"
),
}
def fake_read(path: Path) -> str:
return fake_docs[str(path)]
monkeypatch.setattr(module, "_read", fake_read)
module.validate_pr_body_postcheck_guidance(errors)
assert errors == []
def test_validate_pr_body_postcheck_guidance_reports_missing_tokens(
monkeypatch,
) -> None:
module = _load_module()
errors: list[str] = []
fake_docs = {
str(module.REQUIRED_FILES["commands"]): "PR Body Post-Check (Mandatory)\n",
str(module.REQUIRED_FILES["workflow"]): "PR Body Post-Check\n",
}
def fake_read(path: Path) -> str:
return fake_docs[str(path)]
monkeypatch.setattr(module, "_read", fake_read)
module.validate_pr_body_postcheck_guidance(errors)
assert any("commands.md" in err for err in errors)
assert any("workflow.md" in err for err in errors)

View File

@@ -79,3 +79,42 @@ def test_validate_links_avoids_duplicate_error_for_invalid_plan_link(tmp_path) -
assert len(errors) == 1
assert "invalid plan link path" in errors[0]
def test_validate_issue_status_consistency_reports_conflicts() -> None:
module = _load_module()
errors: list[str] = []
path = Path("docs/ouroboros/80_implementation_audit.md").resolve()
text = "\n".join(
[
"| REQ-V3-004 | 상태 | 부분 | `#328` 잔여 |",
"| 항목 | 상태 | ✅ 완료 | `#328` 머지 |",
]
)
module.validate_issue_status_consistency(path, text, errors)
assert len(errors) == 1
assert "conflicting status for issue #328" in errors[0]
def test_validate_issue_status_consistency_allows_done_only() -> None:
module = _load_module()
errors: list[str] = []
path = Path("docs/ouroboros/80_implementation_audit.md").resolve()
text = "| 항목 | 상태 | ✅ 완료 | `#371` 머지 |"
module.validate_issue_status_consistency(path, text, errors)
assert errors == []
def test_validate_issue_status_consistency_allows_pending_only() -> None:
module = _load_module()
errors: list[str] = []
path = Path("docs/ouroboros/80_implementation_audit.md").resolve()
text = "| 항목 | 상태 | 부분 | `#390` 추적 이슈 |"
module.validate_issue_status_consistency(path, text, errors)
assert errors == []

View File

@@ -0,0 +1,84 @@
from __future__ import annotations
import importlib.util
import json
from pathlib import Path
from types import SimpleNamespace
import pytest
def _load_module():
script_path = Path(__file__).resolve().parents[1] / "scripts" / "validate_pr_body.py"
spec = importlib.util.spec_from_file_location("validate_pr_body", script_path)
assert spec is not None
assert spec.loader is not None
module = importlib.util.module_from_spec(spec)
spec.loader.exec_module(module)
return module
def test_validate_pr_body_text_detects_escaped_newline() -> None:
module = _load_module()
errors = module.validate_pr_body_text("## Summary\\n- item")
assert any("escaped newline" in err for err in errors)
def test_validate_pr_body_text_allows_literal_sequence_when_multiline() -> None:
module = _load_module()
text = "## Summary\n- escaped sequence example: \\\\n"
assert module.validate_pr_body_text(text) == []
def test_validate_pr_body_text_detects_unbalanced_code_fence() -> None:
module = _load_module()
errors = module.validate_pr_body_text("## Summary\n- item\n```bash\necho hi\n")
assert any("unbalanced fenced code blocks" in err for err in errors)
def test_validate_pr_body_text_detects_missing_structure() -> None:
module = _load_module()
errors = module.validate_pr_body_text("plain text only")
assert any("missing markdown section headers" in err for err in errors)
assert any("missing markdown list items" in err for err in errors)
def test_validate_pr_body_text_passes_with_valid_markdown() -> None:
module = _load_module()
text = "\n".join(
[
"## Summary",
"- item",
"",
"## Validation",
"```bash",
"pytest -q",
"```",
]
)
assert module.validate_pr_body_text(text) == []
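The five text-validation tests above pin down four checks: a literal `\n` body with no real newlines, unbalanced code fences, missing headers, and missing list items. A standalone sketch that satisfies those assertions; the real `validate_pr_body_text` in `scripts/validate_pr_body.py` may implement them differently:

```python
def validate_pr_body_text(text: str) -> list[str]:
    """Lint a PR body for common formatting mistakes; return error strings."""
    errors: list[str] = []
    # A body whose only "newlines" are literal backslash-n was escaped by
    # mistake (e.g. passed through a shell without interpretation).
    if "\\n" in text and "\n" not in text:
        errors.append("escaped newline sequence found; use real newlines")
    # Fences must open and close in pairs.
    fence_count = sum(1 for line in text.splitlines() if line.strip().startswith("```"))
    if fence_count % 2 != 0:
        errors.append("unbalanced fenced code blocks")
    lines = text.splitlines()
    if not any(line.startswith("#") for line in lines):
        errors.append("missing markdown section headers")
    if not any(line.lstrip().startswith(("-", "*")) for line in lines):
        errors.append("missing markdown list items")
    return errors
```

Note the escaped-newline check only fires on single-line bodies, so a multiline body that merely mentions a `\\n` sequence passes, as the second test requires.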
def test_fetch_pr_body_reads_body_from_tea_api(monkeypatch) -> None:
module = _load_module()
def fake_run(cmd, check, capture_output, text): # noqa: ANN001
assert "tea" in cmd[0]
assert check is True
assert capture_output is True
assert text is True
return SimpleNamespace(stdout=json.dumps({"body": "## Summary\n- item"}))
monkeypatch.setattr(module.subprocess, "run", fake_run)
assert module.fetch_pr_body(391) == "## Summary\n- item"
def test_fetch_pr_body_rejects_non_string_body(monkeypatch) -> None:
module = _load_module()
def fake_run(cmd, check, capture_output, text): # noqa: ANN001
return SimpleNamespace(stdout=json.dumps({"body": 123}))
monkeypatch.setattr(module.subprocess, "run", fake_run)
with pytest.raises(RuntimeError):
module.fetch_pr_body(391)

View File

@@ -89,3 +89,51 @@
- next_ticket: #316
- process_gate_checked: process_ticket=#306,#308 merged_to_feature_branch=yes
- risks_or_notes: shift monitor judgment from liveness-centric checks to policy-invariant (FORBIDDEN) centric checks
### 2026-03-01 | session=codex-v3-stream-next-ticket
- branch: feature/v3-session-policy-stream
- docs_checked: docs/workflow.md, docs/commands.md, docs/agent-constraints.md
- open_issues_reviewed: #368, #369, #370, #371, #374, #375, #376, #377, #381
- next_ticket: #368
- process_gate_checked: process_ticket=#306,#308 merged_to_feature_branch=yes
- risks_or_notes: apply non-blocking review findings when justified on correctness/stability/maintainability grounds; when a finding is not applied, leave the rationale as a comment.
### 2026-03-01 | session=codex-issue368-start
- branch: feature/issue-368-backtest-cost-execution
- docs_checked: docs/workflow.md, docs/commands.md, docs/agent-constraints.md
- open_issues_reviewed: #368
- next_ticket: #368
- process_gate_checked: process_ticket=#306,#308 merged_to_feature_branch=yes
- risks_or_notes: to close the TASK-V2-012 implementation gap, land the cost guard, the execution-adjusted fold metric, and regression tests together.
### 2026-03-02 | session=codex-v3-stream-next-ticket-369
- branch: feature/v3-session-policy-stream
- docs_checked: docs/workflow.md, docs/commands.md, docs/agent-constraints.md
- open_issues_reviewed: #369, #370, #371, #374, #375, #376, #377, #381
- next_ticket: #369
- process_gate_checked: process_ticket=#306,#308 merged_to_feature_branch=yes
- risks_or_notes: implementation tickets proceed on the default principle of landing code, tests, and docs (requirements ledger / implementation audit / PR traceability) together.
### 2026-03-02 | session=codex-issue369-start
- branch: feature/issue-369-model-exit-signal-spec-sync
- docs_checked: docs/workflow.md, docs/commands.md, docs/agent-constraints.md
- open_issues_reviewed: #369
- next_ticket: #369
- process_gate_checked: process_ticket=#306,#308 merged_to_feature_branch=yes
- risks_or_notes: per the v2 spec, align model_exit_signal as an assist trigger rather than a direct exit trigger, and keep tests/docs in sync.
### 2026-03-02 | session=codex-v3-stream-next-ticket-377
- branch: feature/v3-session-policy-stream
- docs_checked: docs/workflow.md, docs/commands.md, docs/agent-constraints.md
- open_issues_reviewed: #377, #370, #371, #375, #376, #381
- next_ticket: #377
- process_gate_checked: process_ticket=#306,#308 merged_to_feature_branch=yes
- risks_or_notes: reflect the kill-switch refresh retry policy (attempt count / interval / stop conditions) in code, tests, the requirements ledger, and the audit doc at the same time.
### 2026-03-02 | session=codex-issue377-start
- branch: feature/issue-377-kill-switch-refresh-retry
- docs_checked: docs/workflow.md, docs/commands.md, docs/agent-constraints.md
- open_issues_reviewed: #377
- next_ticket: #377
- process_gate_checked: process_ticket=#306,#308 merged_to_feature_branch=yes
- risks_or_notes: retry the refresh step up to 3 times total (initial + 2 retries) with exponential backoff on failure; stop immediately on success, and on exhaustion record the error and continue with the remaining steps.