176 Commits

Author SHA1 Message Date
agentson
7e9738d5df docs: bump requirements registry version for policy change sync
Some checks failed
Gitea CI / test (push) Successful in 33s
Gitea CI / test (pull_request) Failing after 5s
2026-03-02 04:01:26 +09:00
agentson
96d2c97fe7 analysis: apply execution-adjusted cost model in v2 backtest pipeline (#368)
Some checks failed
Gitea CI / test (push) Failing after 5s
Gitea CI / test (pull_request) Failing after 5s
2026-03-02 03:39:57 +09:00
4710aa2d66 Merge pull request 'test: add session-boundary risk reload e2e regressions (#376)' (#386) from feature/issue-376-session-boundary-e2e into feature/v3-session-policy-stream
All checks were successful
Gitea CI / test (push) Successful in 32s
Reviewed-on: #386
2026-03-02 03:33:19 +09:00
agentson
ca9e1ad0e2 test: harden session-risk global reset isolation
All checks were successful
Gitea CI / test (push) Successful in 33s
Gitea CI / test (pull_request) Successful in 33s
2026-03-02 03:30:46 +09:00
agentson
928e60877c test: add session-boundary risk reload e2e regressions (#376)
All checks were successful
Gitea CI / test (push) Successful in 32s
Gitea CI / test (pull_request) Successful in 33s
2026-03-02 03:23:58 +09:00
16ddc22d14 Merge pull request 'blackout: persist session_id across queued intent lifecycle (#375)' (#385) from feature/issue-375-queued-intent-session-id into feature/v3-session-policy-stream
All checks were successful
Gitea CI / test (push) Successful in 33s
Reviewed-on: #385
2026-03-02 03:20:18 +09:00
agentson
4f21117eca blackout: simplify recovery session_id binding to queued value
All checks were successful
Gitea CI / test (push) Successful in 33s
Gitea CI / test (pull_request) Successful in 35s
2026-03-02 03:17:28 +09:00
agentson
8e02b1ea4f blackout: persist session_id across queued intent lifecycle (#375)
All checks were successful
Gitea CI / test (push) Successful in 32s
Gitea CI / test (pull_request) Successful in 33s
2026-03-02 03:09:33 +09:00
ccceb38483 Merge pull request 'blackout: enforce bounded oldest-drop queue policy on overflow (#371)' (#384) from feature/issue-371-blackout-queue-overflow into feature/v3-session-policy-stream
All checks were successful
Gitea CI / test (push) Successful in 32s
Reviewed-on: #384
2026-03-02 03:07:12 +09:00
agentson
96e5de7c5d test: align blackout queue mocks with overflow counter contract
All checks were successful
Gitea CI / test (push) Successful in 32s
Gitea CI / test (pull_request) Successful in 32s
2026-03-02 03:03:35 +09:00
agentson
7959b749c7 blackout: enforce bounded oldest-drop queue policy on overflow (#371)
All checks were successful
Gitea CI / test (push) Successful in 33s
Gitea CI / test (pull_request) Successful in 33s
2026-03-02 02:57:08 +09:00
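A minimal Python sketch of the bounded oldest-drop overflow policy described in 7959b749c7, with the overflow counter contract the follow-up test commit mentions; class and attribute names here are illustrative assumptions, not the repo's API:

```python
from collections import deque


class BlackoutQueue:
    """Bounded intent queue: on overflow, drop the oldest queued intent
    and count the drop. Names are illustrative, not the repo's API."""

    def __init__(self, max_size: int = 3):
        self._items = deque()
        self.max_size = max_size
        self.dropped_count = 0  # overflow counter contract

    def enqueue(self, intent) -> None:
        if len(self._items) >= self.max_size:
            self._items.popleft()  # oldest-drop policy
            self.dropped_count += 1
        self._items.append(intent)

    def drain(self):
        # recovery consumes queued intents in arrival order
        while self._items:
            yield self._items.popleft()


q = BlackoutQueue(max_size=2)
for i in range(4):
    q.enqueue(f"intent-{i}")
assert list(q.drain()) == ["intent-2", "intent-3"]
assert q.dropped_count == 2
```

Keeping the counter on the queue (rather than logging drops) is what lets a mock-based test assert the contract directly.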
f7e242d147 Merge pull request 'trade: apply runtime strategy/fx pnl split on sell paths (#370)' (#383) from feature/issue-370-fx-pnl-runtime-split into feature/v3-session-policy-stream
All checks were successful
Gitea CI / test (push) Successful in 33s
Reviewed-on: #383
2026-03-02 02:53:04 +09:00
agentson
589cc42e00 docs: bump requirements registry metadata for push governance sync
All checks were successful
Gitea CI / test (push) Successful in 32s
Gitea CI / test (pull_request) Successful in 32s
2026-03-02 02:50:08 +09:00
agentson
920630e30e docs/main: clarify fx context behavior and rate-key provenance
Some checks failed
Gitea CI / test (push) Failing after 5s
Gitea CI / test (pull_request) Successful in 34s
2026-03-02 02:44:49 +09:00
agentson
d4f37ee392 trade: apply runtime strategy/fx pnl split on sell paths (#370)
All checks were successful
Gitea CI / test (push) Successful in 32s
Gitea CI / test (pull_request) Successful in 32s
2026-03-02 02:35:54 +09:00
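One plausible decomposition for the strategy/FX PnL split applied in d4f37ee392: value the local-currency price move at the exit rate, and attribute the rate move on the cost basis to FX. This is a hedged sketch with assumed field names, not the repo's actual formula:

```python
def split_sell_pnl(qty: float, buy_price: float, sell_price: float,
                   fx_at_buy: float, fx_at_sell: float) -> dict:
    """Split realized PnL (in home currency, e.g. KRW) into a strategy
    leg and an FX leg. Field names and the exact attribution convention
    are illustrative assumptions."""
    strategy_pnl = (sell_price - buy_price) * qty * fx_at_sell
    fx_pnl = buy_price * qty * (fx_at_sell - fx_at_buy)
    total = sell_price * qty * fx_at_sell - buy_price * qty * fx_at_buy
    # the two legs always sum to the total realized PnL
    assert abs((strategy_pnl + fx_pnl) - total) < 1e-9
    return {"strategy_pnl": strategy_pnl, "fx_pnl": fx_pnl, "total_pnl": total}


r = split_sell_pnl(qty=10, buy_price=100.0, sell_price=110.0,
                   fx_at_buy=1300.0, fx_at_sell=1350.0)
```

Whichever convention is chosen, the invariant worth testing is that the two legs reconcile exactly to total PnL.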
3914f24872 Merge pull request 'backtest: reflect cost/execution effects in fold scoring (#368)' (#382) from feature/issue-368-backtest-cost-exec into feature/v3-session-policy-stream
All checks were successful
Gitea CI / test (push) Successful in 33s
Reviewed-on: #382
2026-03-02 02:30:45 +09:00
agentson
ed713fdf40 style: wrap long helper signature in backtest pipeline
All checks were successful
Gitea CI / test (push) Successful in 32s
Gitea CI / test (pull_request) Successful in 33s
2026-03-02 02:24:01 +09:00
agentson
c27decb6b1 backtest: reflect cost/execution effects in fold scoring (#368)
Some checks failed
Gitea CI / test (push) Failing after 6s
Gitea CI / test (pull_request) Failing after 5s
2026-03-02 02:10:08 +09:00
8ac7436953 Merge pull request 'docs: resync implementation audit status with actual code gaps (#373)' (#380) from feature/issue-373-audit-sync into feature/v3-session-policy-stream
All checks were successful
Gitea CI / test (push) Successful in 36s
Reviewed-on: #380
2026-03-02 02:06:40 +09:00
agentson
1d404975ea docs: resync implementation audit status with actual code gaps (#373)
All checks were successful
Gitea CI / test (push) Successful in 33s
Gitea CI / test (pull_request) Successful in 33s
2026-03-02 01:54:27 +09:00
0b64206c01 Merge pull request 'governance: harden fail-fast checks for traceability and task-test pairing (#372)' (#379) from feature/issue-372-validator-hardening into feature/v3-session-policy-stream
All checks were successful
Gitea CI / test (push) Successful in 33s
Reviewed-on: #379
2026-03-02 01:52:49 +09:00
agentson
c849e60199 ci: retrigger after PR body update
All checks were successful
Gitea CI / test (push) Successful in 33s
Gitea CI / test (pull_request) Successful in 32s
2026-03-02 01:51:28 +09:00
agentson
53a6ef2968 governance: enforce fail-fast ops traceability and task-test pairing (#372)
Some checks failed
Gitea CI / test (push) Successful in 32s
Gitea CI / test (pull_request) Failing after 5s
2026-03-02 01:41:06 +09:00
1f05f7c566 Merge pull request 'process: prevent newline-escaped Gitea comments via helper + guard (#372)' (#378) from feature/issue-372-comment-newline-guard into feature/v3-session-policy-stream
All checks were successful
Gitea CI / test (push) Successful in 31s
Reviewed-on: #378
2026-03-02 01:35:57 +09:00
agentson
d469002be7 test: add unhappy-path coverage for newline guard tokens
All checks were successful
Gitea CI / test (push) Successful in 32s
Gitea CI / test (pull_request) Successful in 32s
2026-03-02 01:33:39 +09:00
agentson
3712a7a30b test: cover governance newline-helper required tokens
All checks were successful
Gitea CI / test (push) Successful in 32s
Gitea CI / test (pull_request) Successful in 33s
2026-03-02 01:30:02 +09:00
agentson
746f873650 process: add newline-safe tea comment helper and governance guard (#372)
All checks were successful
Gitea CI / test (push) Successful in 35s
Gitea CI / test (pull_request) Successful in 34s
2026-03-02 01:21:13 +09:00
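The guard half of 746f873650 can be sketched as a check that a comment body contains real newlines, not shell-escaped `\n` sequences; the function name and error text are assumptions:

```python
def guard_comment_body(body: str) -> str:
    """Governance guard: reject comment bodies containing a literal
    backslash-n sequence, a telltale sign that a shell caller escaped
    newlines instead of passing real ones. Illustrative sketch."""
    if "\\n" in body:
        raise ValueError("comment body contains literal \\n; pass real newlines")
    return body


# real newlines pass through untouched
assert guard_comment_body("line one\nline two") == "line one\nline two"
```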
15e45fd099 Merge pull request 'docs: add top-level documentation hub index (#362)' (#367) from feature/issue-362-docs-index into feature/v3-session-policy-stream
All checks were successful
Gitea CI / test (push) Successful in 35s
Reviewed-on: #367
2026-03-01 23:29:48 +09:00
agentson
a36e85b708 docs: include agent constraints and skills in docs hub
All checks were successful
Gitea CI / test (push) Successful in 35s
Gitea CI / test (pull_request) Successful in 33s
2026-03-01 23:26:27 +09:00
agentson
d2ac0dae53 docs: add top-level documentation hub index (#362)
All checks were successful
Gitea CI / test (push) Successful in 39s
Gitea CI / test (pull_request) Successful in 35s
2026-03-01 23:23:38 +09:00
461fdc755b Merge pull request 'docs: align template/commands with docs sync gate (#364)' (#366) from feature/issue-364-docs-sync-integration into feature/v3-session-policy-stream
All checks were successful
Gitea CI / test (push) Successful in 34s
Reviewed-on: #366
2026-03-01 23:22:38 +09:00
agentson
243469cd40 docs: address PR #366 review on docs sync gate
All checks were successful
Gitea CI / test (push) Successful in 33s
Gitea CI / test (pull_request) Successful in 32s
2026-03-01 23:18:42 +09:00
agentson
8e819e5939 docs: align template/commands with docs sync gate (#364)
All checks were successful
Gitea CI / test (push) Successful in 39s
Gitea CI / test (pull_request) Successful in 33s
2026-03-01 23:13:51 +09:00
774ce8e94f Merge pull request 'docs validator: add docs sync invariants with tests (#363)' (#365) from feature/issue-363-validate-docs-sync into feature/v3-session-policy-stream
All checks were successful
Gitea CI / test (push) Successful in 36s
Reviewed-on: #365
2026-03-01 23:12:17 +09:00
agentson
6656adc2b7 ci/docs: wire docs sync validator into workflows and tighten tests
All checks were successful
Gitea CI / test (push) Successful in 33s
Gitea CI / test (pull_request) Successful in 35s
2026-03-01 23:09:20 +09:00
agentson
51fd6b7a72 docs validator: add validate_docs_sync with unit tests (#363)
All checks were successful
Gitea CI / test (push) Successful in 32s
Gitea CI / test (pull_request) Successful in 32s
2026-03-01 23:03:52 +09:00
18931c8b58 Merge pull request 'governance: require approval evidence for READ-ONLY file changes (#356)' (#361) from feature/issue-356-readonly-approval into feature/v3-session-policy-stream
All checks were successful
Gitea CI / test (push) Successful in 33s
Reviewed-on: #361
2026-03-01 22:46:53 +09:00
agentson
c431d82c0d test: cover no-readonly-change early return in governance validator
All checks were successful
Gitea CI / test (push) Successful in 33s
Gitea CI / test (pull_request) Successful in 33s
2026-03-01 22:44:02 +09:00
agentson
6be78d73ff governance: enforce READ-ONLY approval evidence for protected file changes (#356)
All checks were successful
Gitea CI / test (push) Successful in 32s
Gitea CI / test (pull_request) Successful in 33s
2026-03-01 22:09:48 +09:00
f50833941c Merge pull request 'docs validator: enforce source path policy for ouroboros plan links (#357)' (#360) from feature/issue-357-docs-source-path-validator into feature/v3-session-policy-stream
All checks were successful
Gitea CI / test (push) Successful in 33s
Reviewed-on: #360
2026-03-01 21:32:48 +09:00
agentson
d1ef79f385 docs validator: handle plan link fragments and avoid duplicate link errors
All checks were successful
Gitea CI / test (push) Successful in 33s
Gitea CI / test (pull_request) Successful in 32s
2026-03-01 21:20:06 +09:00
agentson
117657d13f docs: enforce source path policy for ouroboros plan links (#357)
All checks were successful
Gitea CI / test (push) Successful in 32s
Gitea CI / test (pull_request) Successful in 32s
2026-03-01 21:11:34 +09:00
e430d6f28a Merge pull request 'test: add non-ci strict #TBD regression coverage for handover gate (#358)' (#359) from feature/issue-358-handover-nonci-tbd into feature/v3-session-policy-stream
All checks were successful
Gitea CI / test (push) Successful in 33s
Reviewed-on: #359
2026-03-01 21:10:05 +09:00
agentson
999091e003 test: cover non-ci strict #TBD guard in handover check (#358)
All checks were successful
Gitea CI / test (push) Successful in 33s
Gitea CI / test (pull_request) Successful in 32s
2026-03-01 21:02:51 +09:00
07cb6b4ad8 Merge pull request 'workflow: split session handover gate by execution-environment mode (#353)' (#354) from feature/issue-353-ci-handover-mode-v2 into feature/v3-session-policy-stream
All checks were successful
Gitea CI / test (push) Successful in 37s
Reviewed-on: #354
2026-03-01 21:00:51 +09:00
agentson
8f2c08e2b7 test: add ci-mode coverage for session handover gate
All checks were successful
Gitea CI / test (push) Successful in 34s
Gitea CI / test (pull_request) Successful in 32s
2026-03-01 20:43:06 +09:00
agentson
940a7e094b workflow: skip main/master branch guard in --ci mode
All checks were successful
Gitea CI / test (push) Successful in 37s
Gitea CI / test (pull_request) Successful in 34s
2026-03-01 20:35:22 +09:00
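The `--ci` mode change in 940a7e094b amounts to making the branch guard conditional; a minimal sketch, assuming the gate's internals look roughly like this:

```python
def branch_guard(current_branch: str, ci_mode: bool) -> None:
    """Session handover gate branch guard: refuse to run on main/master
    locally, but skip the check under --ci, where main checkouts and
    detached HEADs are routine. Names are illustrative assumptions."""
    if ci_mode:
        return  # CI runs the gate on whatever ref triggered the workflow
    if current_branch in ("main", "master"):
        raise SystemExit("session handover gate must not run on main/master")


branch_guard("main", ci_mode=True)        # allowed in --ci mode
branch_guard("feature/x", ci_mode=False)  # allowed locally off main
```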
agentson
05be112085 docs: move v2/v3 source plans under docs/ouroboros/source
All checks were successful
Gitea CI / test (push) Successful in 32s
Gitea CI / test (pull_request) Successful in 33s
2026-03-01 20:25:39 +09:00
agentson
2c6e9802be docs: sync requirements registry metadata for policy doc changes
All checks were successful
Gitea CI / test (push) Successful in 34s
Gitea CI / test (pull_request) Successful in 32s
2026-03-01 20:23:34 +09:00
agentson
4c0b55d67c docs: replace absolute plan links with repo-relative paths
Some checks failed
Gitea CI / test (push) Failing after 8s
Gitea CI / test (pull_request) Failing after 6s
2026-03-01 20:22:13 +09:00
agentson
5730f0db2a ci: fix lint baseline and stabilize failing main tests
Some checks failed
Gitea CI / test (push) Failing after 5s
Gitea CI / test (pull_request) Failing after 5s
2026-03-01 20:17:13 +09:00
agentson
6f047a6daf ci: add --ci mode for session handover gate in workflows (#353)
Some checks failed
Gitea CI / test (push) Failing after 5s
Gitea CI / test (pull_request) Failing after 6s
2026-03-01 20:03:00 +09:00
841f8eaa8d Merge pull request 'docs: incorporate all v2/v3 implementation audit doc feedback (#349)' (#351) from feature/issue-349-doc-audit-feedback into feature/v3-session-policy-stream
Some checks failed
Gitea CI / test (push) Failing after 1m5s
Reviewed-on: #351
2026-03-01 17:10:07 +09:00
agentson
6b34367656 docs: incorporate all v2/v3 implementation audit doc feedback (#349)
Some checks failed
Gitea CI / test (push) Failing after 3s
Gitea CI / test (pull_request) Failing after 3s
Incorporate every [comment] left across the 11 review cycles into the document bodies and remove the comment blocks.

Changed documents:
- docs/architecture.md: SmartScanner operating mode (both), 10 dashboard APIs,
  DB schema (session_id/fx_pnl/mode), refreshed config variables
- docs/commands.md: add /api/pnl/history and /api/positions endpoints
- docs/testing.md: drop hard-coded test counts, update SmartScanner fallback,
  reflect the 10 dashboard API routes
- README.md: drop hard-coded figures, note Gitea CI, mark per-file figures
  as 'varies with CI'
- CLAUDE.md: rename the SmartScanner section, drop hard-coded figures
- docs/requirements-log.md: add implementation entries #318-#331
- docs/ouroboros/80_implementation_audit.md: split ROOT-5/6/7, list function
  names alongside REQ-V3-008, update to v3 ~85% / governance ~60%
- docs/ouroboros/85_loss_recovery_action_plan.md: list function names
  alongside ACT-07, update test counts, improve accuracy of 6.1/6.2
- docs/ouroboros/60_repo_enforcement_checklist.md: add a CI job/step
  distinction table
- docs/ouroboros/README.md: add (A)/(B) auxiliary labels to the 50_* docs

Closes #349
2026-03-01 17:06:56 +09:00
e65a0e3585 Merge pull request 'test: pin explicit session_id propagation regressions (#326)' (#348) from feature/issue-326-session-id-explicit-propagation into feature/v3-session-policy-stream
Some checks failed
Gitea CI / test (push) Has been cancelled
Reviewed-on: #348
2026-03-01 13:42:17 +09:00
agentson
e3a3aada83 test: set rt_cd success response in session-id daily regression test (#326)
Some checks failed
Gitea CI / test (push) Failing after 3s
Gitea CI / test (pull_request) Failing after 3s
2026-03-01 13:38:57 +09:00
agentson
db316c539b test: align daily session mock order response with rt_cd success path (#326)
Some checks failed
Gitea CI / test (push) Has been cancelled
Gitea CI / test (pull_request) Failing after 3s
2026-03-01 13:37:04 +09:00
agentson
2df787757a test: enforce explicit runtime session_id propagation in realtime/daily logs (#326)
Some checks failed
Gitea CI / test (push) Has been cancelled
Gitea CI / test (pull_request) Failing after 4s
2026-03-01 10:14:59 +09:00
5f079206c6 Merge pull request 'infra: strengthen automated CI validation (policy registry + TASK-REQ mapping) (#330)' (#347) from feature/issue-330-governance-ci-guard into feature/v3-session-policy-stream
Some checks failed
Gitea CI / test (push) Has been cancelled
Reviewed-on: #347
2026-03-01 10:11:18 +09:00
agentson
e9de950bec ci: wire governance traceability env in gitea workflow and guard zero SHA (#330)
Some checks failed
Gitea CI / test (push) Failing after 3s
Gitea CI / test (pull_request) Failing after 3s
2026-03-01 10:06:25 +09:00
agentson
c31ee37f13 infra: enforce governance sync and TASK-REQ mapping in CI (#330)
Some checks failed
Gitea CI / test (push) Has been cancelled
Gitea CI / test (pull_request) Failing after 3s
2026-03-01 09:58:45 +09:00
2ba1d1ad4d Merge pull request 'feat: switch the Triple Barrier time barrier to calendar minutes (#329)' (#346) from feature/issue-329-triple-barrier-calendar-minutes into feature/v3-session-policy-stream
Some checks failed
Gitea CI / test (push) Has been cancelled
Reviewed-on: #346
2026-03-01 09:57:01 +09:00
agentson
273a3c182a refactor: simplify timestamp normalization after non-null validation (#329)
Some checks failed
Gitea CI / test (push) Failing after 3s
Gitea CI / test (pull_request) Failing after 4s
2026-03-01 09:50:45 +09:00
agentson
701350fb65 feat: switch backtest triple barrier to calendar-minute horizon (#329)
Some checks failed
Gitea CI / test (push) Has been cancelled
Gitea CI / test (pull_request) Failing after 3s
2026-03-01 09:44:24 +09:00
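The calendar-minute change in 701350fb65 can be sketched as placing the vertical barrier at a wall-clock deadline rather than counting bars, so session gaps and halts no longer stretch the effective holding period. A hedged sketch with assumed names:

```python
from datetime import datetime, timedelta


def vertical_barrier_ts(entry_ts: datetime, horizon_minutes: int) -> datetime:
    """Calendar-minute horizon: the vertical barrier is a wall-clock
    deadline, not a bar count. Function name is an assumption."""
    return entry_ts + timedelta(minutes=horizon_minutes)


entry = datetime(2026, 3, 1, 9, 30)
deadline = vertical_barrier_ts(entry, horizon_minutes=90)

# 30-minute bars: the first bar at or past the deadline closes the window
bars = [entry + timedelta(minutes=30 * i) for i in range(6)]
expiry_idx = next(i for i, ts in enumerate(bars) if ts >= deadline)
assert expiry_idx == 3  # the 11:00 bar, 90 calendar minutes after entry
```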
35d81fb73d Merge pull request 'feat: strengthen price/session revalidation on blackout recovery (#328)' (#345) from feature/issue-328-blackout-revalidation into feature/v3-session-policy-stream
Some checks failed
Gitea CI / test (push) Has been cancelled
Reviewed-on: #345
2026-03-01 09:43:09 +09:00
agentson
5fae9765e7 test: add blackout recovery overseas/failure revalidation coverage (#328)
Some checks failed
Gitea CI / test (push) Failing after 3s
Gitea CI / test (pull_request) Failing after 3s
2026-03-01 09:40:00 +09:00
agentson
0ceb2dfdc9 feat: revalidate blackout recovery orders by price/session context (#328)
Some checks failed
Gitea CI / test (push) Has been cancelled
Gitea CI / test (pull_request) Failing after 3s
2026-03-01 09:33:28 +09:00
89347ee525 Merge pull request 'feat: dynamically reload risk parameters on session transitions (#327)' (#344) from feature/issue-327-session-risk-reload into feature/v3-session-policy-stream
Some checks failed
Gitea CI / test (push) Has been cancelled
Reviewed-on: #344
2026-03-01 09:31:14 +09:00
agentson
42c06929ea test: add session-risk reload edge-case coverage (#327)
Some checks failed
Gitea CI / test (push) Failing after 3s
Gitea CI / test (pull_request) Failing after 3s
2026-02-28 22:20:59 +09:00
agentson
5facd22ef9 feat: reload session risk profile on session transitions (#327)
Some checks failed
Gitea CI / test (push) Has been cancelled
Gitea CI / test (pull_request) Failing after 4s
2026-02-28 21:04:06 +09:00
3af62ce598 Merge pull request 'feat: feed real features (ATR, pred_down_prob) to the v2 staged exit (#325)' (#343) from feature/issue-325-staged-exit-real-features into feature/v3-session-policy-stream
Some checks failed
Gitea CI / test (push) Has been cancelled
Reviewed-on: #343
2026-02-28 20:59:38 +09:00
agentson
62cd8a81a4 feat: feed staged-exit with ATR/RSI runtime features (#325)
Some checks failed
Gitea CI / test (push) Failing after 3s
Gitea CI / test (pull_request) Failing after 3s
2026-02-28 20:58:23 +09:00
dd8549b912 Merge pull request 'feat: KR ATR-based dynamic hard-stop threshold (#318)' (#342) from feature/issue-318-kr-atr-dynamic-stoploss into feature/v3-session-policy-stream
Some checks failed
Gitea CI / test (push) Has been cancelled
Reviewed-on: #342
2026-02-28 20:56:18 +09:00
agentson
8bba85da1e feat: add KR ATR-based dynamic hard-stop threshold (#318)
Some checks failed
Gitea CI / test (push) Failing after 4s
Gitea CI / test (pull_request) Failing after 3s
2026-02-28 18:30:52 +09:00
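A common shape for the ATR-based dynamic hard stop added in 8bba85da1e is an ATR-scaled stop clamped to a floor and a cap; the multiplier, bounds, and function name below are illustrative assumptions, not the repo's config:

```python
def dynamic_hard_stop_pct(price: float, atr: float,
                          base_pct: float = 0.03,
                          atr_mult: float = 2.0,
                          cap_pct: float = 0.08) -> float:
    """Volatility-aware hard stop: widen the stop with ATR but keep it
    inside [base_pct, cap_pct]. Defaults are illustrative assumptions."""
    atr_pct = atr_mult * atr / price
    return min(max(base_pct, atr_pct), cap_pct)


# calm stock: ATR contribution is small, stop stays at the 3% floor
assert dynamic_hard_stop_pct(price=70000, atr=500) == 0.03
# mid-volatility stock: 2*ATR/price lands inside the bounds
assert abs(dynamic_hard_stop_pct(price=10000, atr=250) - 0.05) < 1e-12
```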
fc6083bd2a Merge pull request 'feat: stop-loss reentry cooldown guard (#319)' (#341) from feature/issue-319-stoploss-reentry-cooldown into feature/v3-session-policy-stream
Some checks failed
Gitea CI / test (push) Has been cancelled
Reviewed-on: #341
2026-02-28 18:27:12 +09:00
agentson
5f53b02da8 test: add stop-loss reentry cooldown behavioral coverage (#319)
Some checks failed
Gitea CI / test (pull_request) Failing after 4s
Gitea CI / test (push) Failing after 4s
2026-02-28 18:24:28 +09:00
agentson
82808a8493 feat: enforce stop-loss reentry cooldown window (#319) 2026-02-28 18:24:28 +09:00
9456d66de4 Merge pull request 'feat: US minimum price entry filter (#320)' (#340) from feature/issue-320-us-min-price-filter into feature/v3-session-policy-stream
Some checks failed
Gitea CI / test (push) Has been cancelled
Reviewed-on: #340
2026-02-28 18:22:28 +09:00
33b97f21ac Merge pull request 'fix: log blackout recovery executions to DB (#324)' (#339) from feature/issue-324-blackout-recovery-trade-log into feature/v3-session-policy-stream
Some checks failed
Gitea CI / test (push) Has been cancelled
Reviewed-on: #339
2026-02-28 18:22:11 +09:00
3b135c3080 Merge pull request 'fix: SELL outcome PnL uses sell quantity (#322)' (#337) from feature/issue-322-sell-pnl-sell-qty into feature/v3-session-policy-stream
Some checks failed
Gitea CI / test (push) Has been cancelled
Reviewed-on: #337
2026-02-28 18:21:34 +09:00
1b0d5568d3 Merge pull request 'infra: governance registry sync gate in CI (#330)' (#335) from feature/issue-330-governance-ci-guard into feature/v3-session-policy-stream
Some checks failed
Gitea CI / test (push) Has been cancelled
Reviewed-on: #335
2026-02-28 18:21:10 +09:00
agentson
2406a80782 test: add governance validator unit coverage (#330)
Some checks failed
Gitea CI / test (push) Has been cancelled
Gitea CI / test (pull_request) Failing after 53s
2026-02-28 17:40:51 +09:00
b8569d9de1 Merge pull request 'fix: exchange-aware latest BUY matching (#323)' (#338) from feature/issue-323-buy-match-exchange-code into feature/v3-session-policy-stream
Some checks failed
Gitea CI / test (push) Has been cancelled
Reviewed-on: #338
2026-02-28 17:37:43 +09:00
agentson
92261da414 fix: include exchange_code in latest BUY matching key (#323)
Some checks failed
Gitea CI / test (pull_request) Has been cancelled
Gitea CI / test (push) Has been cancelled
2026-02-28 17:17:21 +09:00
agentson
9267f1fb77 test: add US minimum price boundary and KR-scope coverage (#320)
Some checks failed
Gitea CI / test (push) Has been cancelled
Gitea CI / test (pull_request) Has been cancelled
2026-02-28 17:15:10 +09:00
agentson
fd0246769a test: add sell qty fallback guard and quantity-basis coverage (#322)
Some checks failed
Gitea CI / test (push) Has been cancelled
Gitea CI / test (pull_request) Has been cancelled
2026-02-28 17:13:56 +09:00
ea7260d574 Merge pull request 'feat: explicit session_id propagation in logs (#326)' (#336) from feature/issue-326-session-id-explicit-propagation into feature/v3-session-policy-stream
Some checks failed
Gitea CI / test (push) Has been cancelled
Reviewed-on: #336
2026-02-28 17:07:34 +09:00
a2855e286e Merge pull request 'feat: minute-based triple barrier horizon (#329)' (#334) from feature/issue-329-triple-barrier-minutes into feature/v3-session-policy-stream
Some checks failed
Gitea CI / test (push) Has been cancelled
Reviewed-on: #334
2026-02-28 17:06:31 +09:00
28ded34441 Merge pull request 'fix: evolved strategy syntax guard (#321)' (#333) from feature/issue-321-evolution-syntax-guard into feature/v3-session-policy-stream
Some checks failed
Gitea CI / test (push) Has been cancelled
Reviewed-on: #333
2026-02-28 17:06:04 +09:00
agentson
08607eaa56 feat: block US BUY entries below minimum price threshold (#320)
Some checks failed
Gitea CI / test (push) Has been cancelled
Gitea CI / test (pull_request) Has been cancelled
2026-02-28 14:40:19 +09:00
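The minimum-price entry filter in 08607eaa56 reduces to a scoped guard; the $5 threshold and inclusive boundary below are assumptions chosen to illustrate the boundary/KR-scope coverage the test commit mentions:

```python
def allow_us_buy(market: str, price: float, min_price_usd: float = 5.0) -> bool:
    """Reject US BUY entries below a minimum price (penny-stock filter);
    KR orders are out of scope. Threshold value is an assumption."""
    if market != "US":
        return True
    return price >= min_price_usd


assert allow_us_buy("US", 4.99) is False
assert allow_us_buy("US", 5.00) is True   # boundary is inclusive here
assert allow_us_buy("KR", 100.0) is True  # KR unaffected by the filter
```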
agentson
5c107d2435 fix: persist blackout recovery executions to trades log (#324)
Some checks failed
Gitea CI / test (push) Has been cancelled
Gitea CI / test (pull_request) Has been cancelled
2026-02-28 14:39:30 +09:00
agentson
6d7e6557d2 fix: compute SELL decision outcome using sell quantity (#322)
Some checks failed
Gitea CI / test (push) Has been cancelled
Gitea CI / test (pull_request) Has been cancelled
2026-02-28 14:38:10 +09:00
agentson
11b9ad126f feat: propagate runtime session_id across decision and trade logs (#326)
Some checks failed
Gitea CI / test (push) Has been cancelled
Gitea CI / test (pull_request) Has been cancelled
2026-02-28 14:37:32 +09:00
agentson
2e394cd17c infra: enforce governance registry sync checks in CI (#330)
Some checks failed
Gitea CI / test (push) Has been cancelled
Gitea CI / test (pull_request) Has been cancelled
2026-02-28 14:36:05 +09:00
agentson
c641097fe7 feat: support minute-based triple barrier horizon (#329)
Some checks failed
Gitea CI / test (push) Has been cancelled
Gitea CI / test (pull_request) Has been cancelled
2026-02-28 14:35:55 +09:00
agentson
2f3b2149d5 fix: add syntax guard for evolved strategy generation (#321)
Some checks failed
Gitea CI / test (push) Has been cancelled
Gitea CI / test (pull_request) Has been cancelled
2026-02-28 14:35:35 +09:00
13a6d6612a Merge pull request 'docs: restructure audit docs + loss recovery action plan (#331)' (#332) from feature/issue-331-doc-restructure into feature/v3-session-policy-stream
Some checks failed
Gitea CI / test (push) Has been cancelled
Reviewed-on: #332
2026-02-28 14:18:46 +09:00
agentson
ca5fa73769 docs: restructure audit docs and create loss recovery action plan (#331)
Some checks failed
Gitea CI / test (push) Has been cancelled
Gitea CI / test (pull_request) Has been cancelled
- Clean up 80_implementation_audit.md: remove review history (6.1/6.2),
  extract SQL queries, condense data quality section
- Create 85_loss_recovery_action_plan.md with 13 action items across
  3 phases (Phase 1: stop bleeding, Phase 2: data integrity + v2,
  Phase 3: v3 session optimization)
- Extract standard audit SQL queries to scripts/audit_queries.sql
- Update docs/ouroboros/README.md with 85_ link
- Create Gitea issues #318-#330 for all 13 action items

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-28 13:21:15 +09:00
agentson
ab9ea56efa docs: consolidate implementation audit updates and add restructure plan
Some checks failed
Gitea CI / test (push) Has been cancelled
2026-02-28 13:04:15 +09:00
8dc9f95032 Merge pull request 'process: enforce forbidden runtime invariants in monitor (#316)' (#317) from feature/issue-316-weekend-forbidden-monitor into feature/v3-session-policy-stream
Some checks failed
Gitea CI / test (push) Has been cancelled
2026-02-28 09:37:41 +09:00
agentson
dd51ffb6ac process: enforce forbidden runtime invariants in monitor (#316)
Some checks failed
Gitea CI / test (push) Has been cancelled
Gitea CI / test (pull_request) Has been cancelled
2026-02-28 09:37:16 +09:00
0542e78f90 Merge pull request 'process: automate backtest gate for PR/push/schedule (#314)' (#315) from feature/issue-314-backtest-gate-automation into feature/v3-session-policy-stream
Some checks failed
Gitea CI / test (push) Has been cancelled
2026-02-28 03:25:45 +09:00
agentson
8396dc1606 process: automate backtest gate for PR/push/schedule (#314)
Some checks failed
Gitea CI / test (push) Has been cancelled
Gitea CI / test (pull_request) Has been cancelled
2026-02-28 03:25:00 +09:00
343631a935 Merge pull request 'feat: integrate v2 backtest validation pipeline (#305)' (#313) from feature/issue-305-backtest-pipeline-integration into feature/v3-session-policy-stream
Some checks failed
Gitea CI / test (push) Has been cancelled
2026-02-27 23:59:34 +09:00
agentson
c00525eb4d feat: integrate v2 backtest pipeline for triple barrier and walk-forward (#305)
Some checks failed
Gitea CI / test (push) Has been cancelled
Gitea CI / test (pull_request) Has been cancelled
2026-02-27 23:58:52 +09:00
1ae12f92f6 Merge pull request 'fix: runtime staged exit semantics in trading_cycle and run_daily_session (#304)' (#312) from feature/issue-304-runtime-staged-exit-semantics into feature/v3-session-policy-stream
Some checks failed
Gitea CI / test (push) Has been cancelled
2026-02-27 23:49:59 +09:00
agentson
98dab2e06e fix: apply staged exit semantics in runtime paths (#304)
Some checks failed
Gitea CI / test (push) Has been cancelled
Gitea CI / test (pull_request) Has been cancelled
2026-02-27 23:48:52 +09:00
a63d23fab9 Merge pull request 'process: harden implementation-start gate before coding (#310)' (#311) from feature/issue-310-implementation-start-gate into feature/v3-session-policy-stream
Some checks failed
Gitea CI / test (push) Has been cancelled
2026-02-27 23:24:40 +09:00
agentson
85a59542f8 process: harden implementation-start gate before coding
Some checks failed
Gitea CI / test (push) Has been cancelled
Gitea CI / test (pull_request) Has been cancelled
2026-02-27 23:21:54 +09:00
5830791355 Merge pull request 'process: enforce session handover gate across sessions (#308)' (#309) from feature/issue-308-session-handover-gate into feature/v3-session-policy-stream
Some checks failed
Gitea CI / test (push) Has been cancelled
2026-02-27 23:09:04 +09:00
agentson
b1610f14c5 process: enforce session handover gate across sessions (#308)
Some checks failed
Gitea CI / test (pull_request) Has been cancelled
Gitea CI / test (push) Has been cancelled
2026-02-27 23:08:29 +09:00
1984065499 Merge pull request 'process: enforce process-change-first and staged acceptance gates (#306)' (#307) from feature/issue-306-process-change-first into feature/v3-session-policy-stream 2026-02-27 22:46:33 +09:00
agentson
d912471d0e process: enforce process-change-first and staged ticket maturity (#306) 2026-02-27 22:46:18 +09:00
5f337e2ebc Merge pull request 'fix: realtime include extended KR/US sessions (#301)' (#303) from feature/issue-301-extended-session-schedule into feature/v3-session-policy-stream 2026-02-27 22:30:26 +09:00
agentson
4a404875a9 fix: include extended KR/US sessions in realtime market scheduling (#301) 2026-02-27 22:30:13 +09:00
cdd3814781 Merge pull request 'governance: enforce runtime NOT_OBSERVED recovery gates (#301)' (#302) from feature/issue-301-runtime-verify-recovery into feature/v3-session-policy-stream 2026-02-27 22:14:03 +09:00
agentson
dbf57b5068 governance: enforce runtime verification coverage gates (#301) 2026-02-27 22:13:11 +09:00
7efc254ab5 Merge pull request '[RISK-EMERGENCY] TKT-P1-008 overnight exception vs kill switch priority' (#300) from feature/issue-tkt-p1-008-overnight-killswitch-priority into feature/v3-session-policy-stream 2026-02-27 08:57:25 +09:00
agentson
2742628b78 feat: prioritize kill-switch over overnight exception policy (TASK-CODE-012) 2026-02-27 08:55:24 +09:00
d60fd8947b Merge pull request '[EXEC-POLICY] TKT-P1-007 enforce session_id in the log ledger' (#298) from feature/issue-tkt-p1-007-session-id-ledger into feature/v3-session-policy-stream 2026-02-27 08:51:27 +09:00
agentson
694d73b212 fix: lazy session resolver and one-time session_id backfill 2026-02-27 08:51:00 +09:00
agentson
b2b02b6f57 feat: enforce session_id persistence in trade ledger (TASK-CODE-007) 2026-02-27 08:49:04 +09:00
2dbe98615d Merge pull request '[FX-ACCOUNTING] TKT-P1-006 separate strategy/FX PnL accounting' (#296) from feature/issue-tkt-p1-006-fx-pnl-separation into feature/v3-session-policy-stream 2026-02-27 08:46:56 +09:00
agentson
34cf081c96 fix: backfill split pnl migration and harden partial pnl inputs 2026-02-27 08:46:22 +09:00
agentson
7bc4e88335 feat: separate strategy and fx pnl fields in trade logs (TASK-CODE-011) 2026-02-27 08:44:05 +09:00
386e039ff6 Merge pull request '[BACKTEST-MODEL] TKT-P1-005 implement a conservative fill model' (#294) from feature/issue-tkt-p1-005-conservative-fill-model into feature/v3-session-policy-stream 2026-02-27 08:42:22 +09:00
agentson
13ba9e8081 fix: validate execution assumption ranges in backtest model 2026-02-27 08:41:56 +09:00
agentson
5b52f593a8 feat: add conservative backtest execution simulator (TASK-CODE-010) 2026-02-27 08:40:23 +09:00
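A conservative execution simulator like the one in 5b52f593a8 typically assumes fills always move against you, then layers fees on top, and rejects nonsensical assumption values (cf. the range-validation fix in 13ba9e8081). The defaults below are assumptions:

```python
def conservative_fill(side: str, ref_price: float,
                      slippage_bps: float = 10.0,
                      fee_bps: float = 5.0) -> float:
    """Conservative fill model: buys fill above the reference price,
    sells below, with fees charged on the filled notional. Basis-point
    defaults are illustrative assumptions."""
    if slippage_bps < 0 or fee_bps < 0:
        raise ValueError("execution assumptions must be non-negative")
    slip = ref_price * slippage_bps / 10_000
    if side == "BUY":
        return (ref_price + slip) * (1 + fee_bps / 10_000)
    return (ref_price - slip) * (1 - fee_bps / 10_000)


buy_cost = conservative_fill("BUY", 100.0)    # adverse fill plus fee
sell_net = conservative_fill("SELL", 100.0)
assert buy_cost > 100.0 > sell_net            # never flattered vs. reference
```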
2798558bf3 Merge pull request '[BACKTEST-MODEL] TKT-P1-002 make backtest cost/slippage options mandatory' (#292) from feature/issue-tkt-p1-002-backtest-cost-mandatory into feature/v3-session-policy-stream 2026-02-27 08:37:15 +09:00
agentson
2331d80915 fix: reject non-finite backtest cost assumptions 2026-02-27 08:36:38 +09:00
agentson
7d72669cb8 feat: enforce mandatory backtest cost assumptions (TASK-CODE-006) 2026-02-27 08:34:44 +09:00
74a4784b7a Merge pull request '[BACKTEST-MODEL] TKT-P1-004 walk-forward + purge/embargo split utility' (#290) from feature/issue-tkt-p1-004-walkforward-purge-embargo into feature/v3-session-policy-stream 2026-02-27 08:33:01 +09:00
agentson
dc70311aed fix: keep embargo tied to accepted folds and enforce PR-comment decision logs 2026-02-27 08:32:09 +09:00
agentson
e56819e9e2 feat: add walk-forward splitter with purge and embargo controls (TASK-CODE-005) 2026-02-27 08:28:11 +09:00
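The splitter from e56819e9e2 can be sketched as forward-walking windows where `purge` drops the tail of each train window (labels whose horizons overlap the test window would leak) and `embargo` skips bars after each test window. Index-based sketch with assumed parameter names:

```python
def walk_forward_splits(n: int, train_size: int, test_size: int,
                        purge: int = 0, embargo: int = 0):
    """Return [(train_idx, test_idx), ...] walking forward in time.
    `purge` trims the train tail against label leakage; `embargo`
    separates consecutive folds. Illustrative sketch."""
    splits = []
    start = 0
    while start + train_size + test_size <= n:
        train_end = start + train_size - purge     # purged train tail
        test_start = start + train_size
        splits.append((list(range(start, train_end)),
                       list(range(test_start, test_start + test_size))))
        start = test_start + test_size + embargo   # embargo gap
    return splits


folds = walk_forward_splits(n=20, train_size=6, test_size=2, purge=1, embargo=2)
assert len(folds) == 2
assert folds[0][0][-1] == 4 and folds[0][1] == [6, 7]  # purge gap at index 5
```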
cfd5351b58 Merge pull request '[FX-ACCOUNTING] TKT-P1-001 USD/KRW buffer entry restriction' (#288) from feature/issue-tkt-p1-001-fx-buffer-guard into feature/v3-session-policy-stream 2026-02-27 00:53:21 +09:00
agentson
b206c23fc9 fix: scope USD buffer guard to US markets and add boundary tests 2026-02-27 00:52:44 +09:00
agentson
4d9f3e2cfc feat: enforce overseas buy guard with USD buffer threshold (TASK-V3-014) 2026-02-27 00:50:12 +09:00
a93a5c616b Merge pull request '[BACKTEST-MODEL] TKT-P1-003 implement Triple Barrier labeler' (#286) from feature/issue-tkt-p1-003-triple-barrier-labeler into feature/v3-session-policy-stream 2026-02-27 00:47:37 +09:00
agentson
9f64c9944a fix: correct short-side tie-break semantics in triple barrier 2026-02-27 00:47:09 +09:00
agentson
bb391d502c feat: add triple barrier labeler with first-touch logic (TASK-CODE-004) 2026-02-27 00:45:18 +09:00
b0100fde10 Merge pull request '[RISK-EMERGENCY][SCN-FAIL-003] TKT-P0-002 automate kill-switch ordering enforcement checks' (#284) from feature/issue-tkt-p0-002-killswitch-ordering into feature/v3-session-policy-stream 2026-02-27 00:42:16 +09:00
agentson
0a4e69d40c fix: record kill switch cancel failures and add failure-path tests 2026-02-27 00:41:13 +09:00
agentson
25401ac132 feat: enforce operational kill switch callbacks in runtime flow (TASK-CODE-003) 2026-02-27 00:38:26 +09:00
1381b140ab Merge pull request '[EXEC-POLICY][SCN-FAIL-001] TKT-P0-001 blackout blocking/queue/recovery revalidation' (#282) from feature/issue-tkt-p0-001-blackout-queue-revalidate into feature/v3-session-policy-stream 2026-02-27 00:32:59 +09:00
agentson
356d085ab0 feat: implement blackout queue and recovery revalidation (TASK-CODE-008) 2026-02-27 00:31:29 +09:00
54d6cc3d7c Merge pull request 'docs: apply feature-branch team operating rules and monitoring validation gate (#279)' (#280) from feature/issue-279-session-order-policy-guard into feature/v3-session-policy-stream 2026-02-27 00:19:55 +09:00
agentson
3ffad58d57 docs: allow ticket->feature merges without user approval; keep main gated (#279) 2026-02-27 00:19:51 +09:00
agentson
df6baee7f1 feat: add session-aware order policy guard for low-liquidity market-order rejection (#279) 2026-02-27 00:13:47 +09:00
agentson
c31a6a569d docs: enforce feature-branch team flow and mandatory runtime monitoring validation (#279) 2026-02-27 00:05:01 +09:00
990f9696ab Merge pull request 'docs: apply TPM ticket priority / main-agent ideation / no-merge-session rules (#277)' (#278) from feature/issue-277-tpm-priority-main-ideation-no-merge-session into main
Some checks failed
CI / test (push) Has been cancelled
2026-02-26 23:58:03 +09:00
agentson
9bf72c63ec docs: clarify no-merge-by-default server reflection rule (#277)
Some checks failed
CI / test (pull_request) Has been cancelled
2026-02-26 23:57:58 +09:00
agentson
1399fa4d09 docs: enforce TPM ticket ownership and add main-agent ideation backlog (#277)
Some checks failed
CI / test (pull_request) Has been cancelled
2026-02-26 23:56:25 +09:00
f63fb53289 Merge pull request 'feat: implement phase1 state machine / exit engine / kill-switch (#275)' (#276) from feature/issue-275-phase1-state-exit-killswitch into main
Some checks failed
CI / test (push) Has been cancelled
2026-02-26 23:46:11 +09:00
agentson
5050a4cf84 fix: address reviewer feedback for kill-switch enforcement and observability (#275)
Some checks failed
CI / test (pull_request) Has been cancelled
2026-02-26 23:46:02 +09:00
agentson
4987b6393a feat: implement phase1 state machine, composite exits, and kill-switch orchestration (#275)
Some checks failed
CI / test (pull_request) Has been cancelled
2026-02-26 23:22:58 +09:00
8faf974522 Merge pull request 'docs: apply multi-agent governance operating framework (#273)' (#274) from feature/issue-273-multi-agent-governance-docs into main
Some checks failed
CI / test (push) Has been cancelled
Reviewed-on: #274
2026-02-26 23:19:11 +09:00
agentson
d524159ad0 docs: add runtime verifier role and replan escalation protocol (#273)
Some checks failed
CI / test (pull_request) Has been cancelled
2026-02-26 23:16:44 +09:00
agentson
c7c740f446 docs: add repository enforcement checklist for strict governance (#273)
Some checks failed
CI / test (pull_request) Has been cancelled
2026-02-26 23:08:16 +09:00
agentson
1333c65455 docs: add PM/TPM governance artifacts for multi-agent control (#273)
Some checks failed
CI / test (pull_request) Has been cancelled
2026-02-26 23:06:51 +09:00
9db7f903f8 Merge pull request 'docs: apply ouroboros execution directives and validation system (#271)' (#272) from feature/issue-271-docs-routing-validation into main
Some checks failed
CI / test (push) Has been cancelled
Reviewed-on: #272
2026-02-26 22:56:17 +09:00
agentson
4660310ee4 docs: add tea newline troubleshooting and runlog (#271)
Some checks failed
CI / test (pull_request) Has been cancelled
2026-02-26 22:52:56 +09:00
agentson
c383a411ff docs: add ouroboros execution routing and validation system (#271)
Some checks failed
CI / test (pull_request) Has been cancelled
2026-02-26 22:49:21 +09:00
7b3ba27ef7 Merge pull request 'fix: replace overseas buying-power field ord_psbl_frcr_amt → ovrs_ord_psbl_amt (#269)' (#270) from feature/issue-269-overseas-cash-ovrs-ord-psbl-amt into main
Some checks failed
CI / test (push) Has been cancelled
Reviewed-on: #270
2026-02-26 02:01:57 +09:00
agentson
6ff887c047 fix: replace overseas buying-power field ord_psbl_frcr_amt → ovrs_ord_psbl_amt (#269)
Some checks failed
CI / test (pull_request) Has been cancelled
Replace ord_psbl_frcr_amt, which reflects only the FX cash deposit, with
ovrs_ord_psbl_amt, the consolidated orderable amount (matching the app's
'FX' figure) that also includes unsettled sell proceeds (sll_ruse_psbl_amt).

Verified against the live API response:
  ord_psbl_frcr_amt  = $139.25  (FX cash deposit only)
  sll_ruse_psbl_amt  = $7292.70 (unsettled sell proceeds)
  ovrs_ord_psbl_amt  = $7391.30 (sum, KRW excluded)

The KRW-converted field (frcr_ord_psbl_amt1) is intentionally unused per requirements.
Source: KIS official docs (20260221), 'overseas stock buying power inquiry' sheet.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-02-26 02:01:13 +09:00
219eef6388 Merge pull request 'fix: resolve two log WARNINGs - misleading scanner message and missing rsi for held symbols (#267)' (#268) from feature/issue-267-fix-log-warnings into main
Some checks failed
CI / test (push) Has been cancelled
Reviewed-on: #268
2026-02-26 01:46:43 +09:00
agentson
9d7ca12275 fix: compute held-symbol volume_ratio from real price API high/low data (#267)
Some checks failed
CI / test (pull_request) Has been cancelled
For overseas held symbols without a scanner candidate (e.g. NVDA), reuse the
high/low from the already-fetched get_overseas_price response to compute
volume_ratio the same way the scanner does:

  intraday_range_pct = (high - low) / price * 100
  volume_ratio = max(1.0, volatility_pct / 2.0)

When high/low are unavailable (domestic symbols, no API response), keep the
existing default of 1.0. implied_rsi is already derived from the real-API
price_change_pct (rate field).

tests/test_main.py: add a test covering held overseas symbol volume_ratio

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-02-26 01:45:22 +09:00
agentson
ccb00ee77d fix: resolve two log WARNINGs - misleading scanner message and missing rsi for held symbols (#267)
Some checks failed
CI / test (pull_request) Has been cancelled
1. WARNING → DEBUG: the scanner works fine via the overseas ranking API even
   without fallback_stocks, so the misleading WARNING level was lowered to DEBUG (2 places)

2. Enrich market_data for held symbols: symbols that did not pass through the
   scanner (NVDA etc.) now get an implied_rsi based on price_change_pct and a
   volume_ratio=1.0 default, completing scenario_engine condition evaluation

3. test_main.py: update the 2 related tests for the new behavior

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-02-26 01:39:45 +09:00
b1b728f62e Merge pull request 'fix: overseas cash=0.00 and get_open_position HOLD filtering (#264, #265)' (#266) from feature/issue-264-265-overseas-cash-and-open-position into main
Some checks failed
CI / test (push) Has been cancelled
Reviewed-on: #266
2026-02-26 01:30:37 +09:00
agentson
df12be1305 fix: overseas cash=0.00 and get_open_position HOLD filtering (#264, #265)
Some checks failed
CI / test (pull_request) Has been cancelled
## Changes

### #264 — Replace overseas buying-power API (frcr_dncl_amt_2 → inquire-psamount)
- TTTS3012R (overseas stock balance) output2 has no frcr_dncl_amt_2 field,
  so total available cash was always computed as 0.00; fixed
- Add get_overseas_buying_power() to OverseasBroker
  (TR_ID: live TTTS3007R / paper VTTS3007R, returns ord_psbl_frcr_amt)
- Updated both trading_cycle() and the daily cycle in main.py
- Source: KIS Open API full docs (20260221) — 'overseas stock buying power inquiry' sheet

### #265 — Filter HOLD records out of get_open_position()
- HOLD decisions are also stored in the trades table, so after a BUY a newer
  HOLD row made get_open_position return None; fixed
- Add an AND action IN ('BUY', 'SELL') filter to the query
- Only the last BUY/SELL record is considered; HOLD records are skipped

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-02-26 01:29:46 +09:00
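The #265 fix can be sketched as a minimal query helper. The table layout and function shape here are simplified assumptions for illustration; the project's real `get_open_position()` and schema differ.

```python
import sqlite3

def get_open_position(conn: sqlite3.Connection, symbol: str):
    """Return the open position for a symbol, skipping HOLD records."""
    row = conn.execute(
        """
        SELECT action, price FROM trades
        WHERE symbol = ?
          AND action IN ('BUY', 'SELL')   -- HOLD rows must not mask the position
        ORDER BY id DESC LIMIT 1
        """,
        (symbol,),
    ).fetchone()
    # An open position exists only if the latest non-HOLD record is a BUY.
    if row and row[0] == "BUY":
        return {"symbol": symbol, "entry_price": row[1]}
    return None
```

Without the `IN ('BUY', 'SELL')` filter, a HOLD row logged after a BUY would be the newest record and the function would wrongly report no open position.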
6a6d3bd631 Merge pull request 'fix: add unrealized_pnl_pct/holding_days to market_data to restore SELL scenarios (#259)' (#263) from feature/issue-259-market-data-pnl-holding-days into main
Some checks failed
CI / test (push) Has been cancelled
Reviewed-on: #263
2026-02-26 00:23:55 +09:00
agentson
7aa5fedc12 fix: add unrealized_pnl_pct/holding_days to market_data to restore SELL scenarios (#259)
Some checks failed
CI / test (pull_request) Has been cancelled
market_data built in trading_cycle() carried no open-position info, producing
"Condition requires 'unrealized_pnl_pct' but key missing from market_data" warnings.
Take-profit/stop-loss scenarios for held symbols (NVDA etc.) could not be
evaluated, leaving them stuck at HOLD (confidence=0).

- Add a timestamp column to get_open_position()
- When building market_data, look up the open position and add:
  - unrealized_pnl_pct: (current_price - entry_price) / entry_price * 100
  - holding_days: days elapsed since the buy

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-02-26 00:23:28 +09:00
agentson
3e777a5ab8 fix: set MODE='paper' in mock_settings to fix paper-mode test failures (#261)
With MODE unset in the mock_settings fixture, MODE=live from .env applied,
failing the 3 tests that assert paper TR_IDs.

- test_buy_market_order: expected VTTT1002U, got TTTT1002U
- test_sell_limit_order: expected VTTT1001U, got TTTT1006U
- test_us_paper_uses_vttt1004u: expected VTTT1004U, got TTTT1004U

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-02-26 00:23:01 +09:00
6f93258983 Merge pull request 'fix: set MODE='paper' in mock_settings to fix paper-mode test failures (#261)' (#262) from feature/issue-261-fix-mock-settings-mode into main
Some checks failed
CI / test (push) Has been cancelled
Reviewed-on: #262
2026-02-26 00:22:02 +09:00
agentson
82167c5b8a fix: set MODE='paper' in mock_settings to fix paper-mode test failures (#261)
Some checks failed
CI / test (pull_request) Has been cancelled
With MODE unset in the mock_settings fixture, MODE=live from .env applied,
failing the 3 tests that assert paper TR_IDs.

- test_buy_market_order: expected VTTT1002U, got TTTT1002U
- test_sell_limit_order: expected VTTT1001U, got TTTT1006U
- test_us_paper_uses_vttt1004u: expected VTTT1004U, got TTTT1004U

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-02-26 00:21:39 +09:00
f87c4dc2f0 Merge pull request 'fix: add required ranking API parameter KEYB and correct GUBN value (#258)' (#260) from feature/issue-258-ranking-api-keyb-param into main
Some checks failed
CI / test (push) Has been cancelled
Reviewed-on: #260
2026-02-26 00:20:58 +09:00
agentson
8af5f564c3 fix: add required ranking API parameter KEYB and correct GUBN value (#258)
Some checks failed
CI / test (pull_request) Has been cancelled
Per the official KIS docs (20260221), KEYB (NEXT KEY BUFF) is Required=Y but
was missing, so every call failed with rt_cd=2 and only the fallback path ran.

- Add KEYB: '' to both the fluctuation and volume params
- Fix the GUBN comment: 0=decline rate, 1=advance rate (per docs)
- Change GUBN 0→1: scan volatility-spiking symbols by advance rate

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-02-26 00:19:55 +09:00
120 changed files with 12667 additions and 1466 deletions


@@ -0,0 +1,41 @@
---
name: Runtime Verification Incident
about: Register anomalies observed while verifying live/staging runtime behavior
title: "[RUNTIME-VERIFY][SCN-XXX] "
labels: runtime, verification
---
## Summary
- Symptom:
- First observed at (UTC):
## Reproduction / Observation
- Run mode (`live`/`paper`):
- Session (`NXT`, `US_PRE`, `US_DAY`, `US_AFTER`, ...):
- Run command:
- Log path:
## Expected vs Actual
- Expected:
- Actual:
## Requirement Mapping
- REQ:
- TASK:
- TEST:
## Temporary Mitigation
- Immediate mitigation:
## Close Criteria
- [ ] Dev fix merged
- [ ] Verifier re-verification PASS
- [ ] Runtime Verifier re-observation PASS
- [ ] `NOT_OBSERVED = 0`


@@ -0,0 +1,64 @@
## Linked Issue
- Closes #N
## Scope
- REQ: `REQ-...`
- TASK: `TASK-...`
- TEST: `TEST-...`
## Ticket Stage
- Current stage: `Implemented` / `Integrated` / `Observed` / `Accepted`
- Previous stage evidence link:
## Main -> Verifier Directive Contract
- Scope: target requirements / code / log paths
- Method: run command + observation points
- PASS criteria:
- FAIL criteria:
- NOT_OBSERVED criteria:
- Evidence format: PR comment `Coverage Matrix`
## Verifier Coverage Matrix (Required)
| Item | Evidence | Status (PASS/FAIL/NOT_OBSERVED) |
|---|---|---|
| REQ-... | link/log | PASS |
Approval and merge are forbidden if even one item is `NOT_OBSERVED`.
## Gitea Preflight
- [ ] Pre-checked the troubleshooting sections of `docs/commands.md` and `docs/workflow.md`
- [ ] Use `tea` (do not use `gh`)
## Session Handover Gate
- [ ] `python3 scripts/session_handover_check.py --strict` 통과
- [ ] Latest `workflow/session-handover.md` entry refreshed for the current branch and current day (UTC)
- Latest handover entry heading:
## Docs Sync Gate (required whenever docs files change)
- [ ] `python3 scripts/validate_docs_sync.py` passes (write N/A for PRs that do not touch `docs`)
## Runtime Evidence
- Actual system run command:
- Monitoring log path:
- Anomaly / issue links:
## READ-ONLY Approval (Required when touching READ-ONLY files)
- Touched READ-ONLY files:
- Human approval:
- Test suite 1:
- Test suite 2:
## Approval Gate
- [ ] Static Verifier approval comment linked
- [ ] Runtime Verifier approval comment linked

.gitea/workflows/ci.yml Normal file

@@ -0,0 +1,57 @@
name: Gitea CI
on:
  pull_request:
  push:
    branches:
      - main
      - feature/**
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v4
        with:
          fetch-depth: 0
      - name: Set up Python
        uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      - name: Install dependencies
        run: pip install ".[dev]"
      - name: Session handover gate
        run: python3 scripts/session_handover_check.py --strict --ci
      - name: Validate governance assets
        env:
          GOVERNANCE_PR_TITLE: ${{ github.event.pull_request.title }}
          GOVERNANCE_PR_BODY: ${{ github.event.pull_request.body }}
        run: |
          RANGE=""
          if [ "${{ github.event_name }}" = "pull_request" ] && [ -n "${{ github.event.pull_request.base.sha }}" ]; then
            RANGE="${{ github.event.pull_request.base.sha }}...${{ github.sha }}"
          elif [ -n "${{ github.event.before }}" ] && [ "${{ github.event.before }}" != "0000000000000000000000000000000000000000" ]; then
            RANGE="${{ github.event.before }}...${{ github.sha }}"
          fi
          if [ -n "$RANGE" ]; then
            python3 scripts/validate_governance_assets.py "$RANGE"
          else
            python3 scripts/validate_governance_assets.py
          fi
      - name: Validate Ouroboros docs
        run: python3 scripts/validate_ouroboros_docs.py
      - name: Validate docs sync
        run: python3 scripts/validate_docs_sync.py
      - name: Lint
        run: ruff check src/ tests/
      - name: Run tests with coverage
        run: pytest -v --cov=src --cov-report=term-missing --cov-fail-under=80

.github/workflows/backtest-gate.yml vendored Normal file

@@ -0,0 +1,66 @@
name: Backtest Gate
on:
  pull_request:
    branches: ["**"]
  push:
    branches:
      - "feature/**"
  schedule:
    # Daily scheduled gate (KST 01:20)
    - cron: "20 16 * * *"
  workflow_dispatch:
    inputs:
      mode:
        description: "backtest mode (auto|smoke|full)"
        required: false
        default: "auto"
      base_ref:
        description: "git base ref for changed-file diff"
        required: false
        default: "origin/main"
jobs:
  backtest-gate:
    runs-on: ubuntu-latest
    concurrency:
      group: backtest-gate-${{ github.ref }}
      cancel-in-progress: true
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0
      - name: Set up Python 3.11
        uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      - name: Install dependencies
        run: pip install ".[dev]"
      - name: Resolve base ref
        id: base
        run: |
          if [ "${{ github.event_name }}" = "pull_request" ]; then
            echo "ref=origin/${{ github.base_ref }}" >> "$GITHUB_OUTPUT"
          elif [ "${{ github.event_name }}" = "workflow_dispatch" ] && [ -n "${{ github.event.inputs.base_ref }}" ]; then
            echo "ref=${{ github.event.inputs.base_ref }}" >> "$GITHUB_OUTPUT"
          else
            echo "ref=origin/main" >> "$GITHUB_OUTPUT"
          fi
      - name: Run backtest gate
        env:
          BASE_REF: ${{ steps.base.outputs.ref }}
          BACKTEST_MODE: ${{ github.event_name == 'workflow_dispatch' && github.event.inputs.mode || 'auto' }}
          FORCE_FULL_BACKTEST: ${{ github.event_name == 'schedule' && 'true' || 'false' }}
        run: bash scripts/backtest_gate.sh
      - name: Upload backtest logs
        if: always()
        uses: actions/upload-artifact@v4
        with:
          name: backtest-gate-logs
          path: data/backtest-gate/*.log


@@ -21,6 +21,32 @@ jobs:
       - name: Install dependencies
         run: pip install ".[dev]"
+      - name: Session handover gate
+        run: python3 scripts/session_handover_check.py --strict --ci
+      - name: Validate governance assets
+        env:
+          GOVERNANCE_PR_TITLE: ${{ github.event.pull_request.title }}
+          GOVERNANCE_PR_BODY: ${{ github.event.pull_request.body }}
+        run: |
+          RANGE=""
+          if [ "${{ github.event_name }}" = "pull_request" ]; then
+            RANGE="${{ github.event.pull_request.base.sha }}...${{ github.sha }}"
+          elif [ "${{ github.event_name }}" = "push" ] && [ "${{ github.event.before }}" != "0000000000000000000000000000000000000000" ]; then
+            RANGE="${{ github.event.before }}...${{ github.sha }}"
+          fi
+          if [ -n "$RANGE" ]; then
+            python3 scripts/validate_governance_assets.py "$RANGE"
+          else
+            python3 scripts/validate_governance_assets.py
+          fi
+      - name: Validate Ouroboros docs
+        run: python3 scripts/validate_ouroboros_docs.py
+      - name: Validate docs sync
+        run: python3 scripts/validate_docs_sync.py
       - name: Lint
         run: ruff check src/ tests/


@@ -81,12 +81,13 @@ SCANNER_TOP_N=3 # Max candidates per scan
 - **Evolution-ready** — Selection context logged for strategy optimization
 - **Fault-tolerant** — Falls back to static watchlist on API failure
-### Realtime Mode Only
+### Trading Mode Integration
-Smart Scanner runs in `TRADE_MODE=realtime` only. Daily mode uses static watchlists for batch efficiency.
+Smart Scanner runs in both `TRADE_MODE=realtime` and `daily` paths. On API failure, domestic stocks fall back to a static watchlist; overseas stocks fall back to a dynamic universe (active positions, recent holdings).
 ## Documentation
+- **[Documentation Hub](docs/README.md)** — Top-level doc routing and reading order
 - **[Workflow Guide](docs/workflow.md)** — Git workflow policy and agent-based development
 - **[Command Reference](docs/commands.md)** — Common failures, build commands, troubleshooting
 - **[Architecture](docs/architecture.md)** — System design, components, data flow
@@ -122,7 +123,7 @@ src/
 ├── broker/ # KIS API client (domestic + overseas)
 ├── context/ # L1-L7 hierarchical memory system
 ├── core/ # Risk manager (READ-ONLY)
-├── dashboard/ # FastAPI read-only monitoring (8 API endpoints)
+├── dashboard/ # FastAPI read-only monitoring (10 API endpoints)
 ├── data/ # External data integration (news, market data, calendar)
 ├── evolution/ # Self-improvement (optimizer, daily review, scorecard)
 ├── logging/ # Decision logger (audit trail)
@@ -133,7 +134,7 @@ src/
 ├── main.py # Trading loop orchestrator
 └── config.py # Settings (from .env)
-tests/ # 551 tests across 25 files
+tests/ # 998 tests across 41 files
 docs/ # Extended documentation
 ```


@@ -39,7 +39,7 @@ Trades via the KIS (Korea Investment & Securities) API, decides with Google Gemini, and ...
 | Context | `src/context/` | L1-L7 hierarchical memory system |
 | Analysis | `src/analysis/` | RSI, ATR, Smart Volatility Scanner |
 | Notifications | `src/notifications/` | Two-way Telegram (alerts + 9 commands) |
-| Dashboard | `src/dashboard/` | FastAPI read-only monitoring (8 APIs) |
+| Dashboard | `src/dashboard/` | FastAPI read-only monitoring (10 APIs) |
 | Evolution | `src/evolution/` | Strategy evolution + Daily Review + Scorecard |
 | Decision logs | `src/logging/` | Audit trail of all trading decisions |
 | Data | `src/data/` | News, market data, economic calendar integration |
@@ -153,19 +153,16 @@ docker compose up -d ouroboros
 ## Tests
-551 tests implemented across 25 files. Minimum coverage 80%.
+998 tests implemented across 41 files. Minimum coverage 80%.
 ```
-tests/test_scenario_engine.py — scenario matching (44)
-tests/test_data_integration.py — external data integration (38)
-tests/test_pre_market_planner.py — playbook generation (37)
-tests/test_main.py — trading loop integration (37)
-tests/test_token_efficiency.py — token optimization (34)
-tests/test_strategy_models.py — strategy model validation (33)
-tests/test_telegram_commands.py — Telegram commands (31)
-tests/test_latency_control.py — latency control (30)
-tests/test_telegram.py — Telegram alerts (25)
-... plus 16 more files
+tests/test_main.py — trading loop integration
+tests/test_scenario_engine.py — scenario matching
+tests/test_pre_market_planner.py — playbook generation
+tests/test_overseas_broker.py — overseas broker
+tests/test_telegram_commands.py — Telegram commands
+tests/test_telegram.py — Telegram alerts
+... plus 35 more files (per-file counts may vary per CI run)
 ```
 **Details**: [docs/testing.md](docs/testing.md)
@@ -177,8 +174,8 @@ tests/test_telegram.py — Telegram alerts (25)
 - **AI**: Google Gemini Pro
 - **DB**: SQLite (5 tables: trades, contexts, decision_logs, playbooks, context_metadata)
 - **Dashboard**: FastAPI + uvicorn
-- **Verification**: pytest + coverage (551 tests)
+- **Verification**: pytest + coverage (998 tests)
-- **CI/CD**: GitHub Actions
+- **CI/CD**: Gitea CI (`.gitea/workflows/ci.yml`)
 - **Deployment**: Docker + Docker Compose
 ## Project Structure
@@ -212,7 +209,7 @@ The-Ouroboros/
 │ ├── config.py # Pydantic settings
 │ ├── db.py # SQLite database
 │ └── main.py # Async trading loop
-├── tests/ # 551 tests (25 files)
+├── tests/ # 998 tests (41 files)
 ├── Dockerfile # Multi-stage build
 ├── docker-compose.yml # Service orchestration
 └── pyproject.toml # Dependencies and tool config
@@ -220,6 +217,7 @@ The-Ouroboros/
 ## Docs
+- **[Documentation Hub](docs/README.md)** — Full doc routing, priorities, and reading order
 - **[Architecture](docs/architecture.md)** — System design, components, data flow
 - **[Testing](docs/testing.md)** — Test structure, coverage, writing guide
 - **[Commands](docs/commands.md)** — CLI, Dashboard, Telegram commands

docs/README.md Normal file

@@ -0,0 +1,48 @@
# Documentation Hub
This document is the top-level routing hub for all documentation in the repository.
Before diving into a specific document, use the priorities and reading order below.
## Priority (SSOT)
1. Execution/collaboration rules SSOT: [workflow.md](./workflow.md)
2. Command/incident response SSOT: [commands.md](./commands.md)
3. Test/verification SSOT: [testing.md](./testing.md)
4. Agent constraints SSOT: [agents.md](./agents.md)
5. Requirements tracking SSOT: [requirements-log.md](./requirements-log.md)
6. Ouroboros execution doc hub: [ouroboros/README.md](./ouroboros/README.md)
## Recommended Reading Order
1. [workflow.md](./workflow.md)
2. [commands.md](./commands.md)
3. [testing.md](./testing.md)
4. [agents.md](./agents.md)
5. [architecture.md](./architecture.md)
6. [context-tree.md](./context-tree.md)
7. [disaster_recovery.md](./disaster_recovery.md)
8. [live-trading-checklist.md](./live-trading-checklist.md)
9. [ouroboros/README.md](./ouroboros/README.md)
## Document Map
- Core
  - [workflow.md](./workflow.md): branch/PR/review/session handover policy
  - [commands.md](./commands.md): run commands, failure cases, troubleshooting
  - [testing.md](./testing.md): test structure, writing rules, verification commands
  - [agents.md](./agents.md): agent work constraints and prohibited actions
  - [agent-constraints.md](./agent-constraints.md): persistent constraints and operational invariants (complements agents.md)
  - [skills.md](./skills.md): installed/available skills and usage guide
- Design and Operations
  - [architecture.md](./architecture.md): system structure and component responsibilities
  - [context-tree.md](./context-tree.md): L1-L7 context hierarchy design
  - [disaster_recovery.md](./disaster_recovery.md): backup/recovery procedures
  - [live-trading-checklist.md](./live-trading-checklist.md): go-live checklist
- Governance and Planning
  - [requirements-log.md](./requirements-log.md): requirements/feedback history
  - [ouroboros/README.md](./ouroboros/README.md): v2/v3 execution doc routing
## Change Rule
- When documents are added, moved, or heavily reorganized, update this file's links and classification together.
- Use relative paths only for links.


@@ -12,6 +12,8 @@ It is distinct from `docs/requirements-log.md`, which records **project/product
 1. **Workflow enforcement**
 - Follow `docs/workflow.md` for all changes.
+- Before any Gitea issue/PR/comment operation, read `docs/commands.md` and `docs/workflow.md` troubleshooting section.
+- Use `tea` for Gitea operations; do not use GitHub CLI (`gh`) in this repository workflow.
 - Create a Gitea issue before any code or documentation change.
 - Work on a feature branch `feature/issue-{N}-{short-description}` and open a PR.
 - Never commit directly to `main`.
@@ -30,6 +32,16 @@ It is distinct from `docs/requirements-log.md`, which records **project/product
 (or in a dedicated policy doc) and reference it when working.
 - Keep entries short and concrete, with dates.
+5. **Session start handover gate**
+- Before implementation/verification work, run `python3 scripts/session_handover_check.py --strict`.
+- Keep `workflow/session-handover.md` updated with a same-day entry for the active branch.
+- If the check fails, stop and fix handover artifacts first.
+6. **Process-change-first execution gate**
+- If process/governance change is required, merge the process ticket to the feature branch first.
+- Do not start code/test edits for implementation tickets until process merge evidence is confirmed.
+- Subagents must be constrained to read-only exploration until the process gate is satisfied.
 ## Change Control
 - Changes to this file follow the same workflow as code changes.
@@ -43,3 +55,15 @@ It is distinct from `docs/requirements-log.md`, which records **project/product
 - When work requires guidance, consult the relevant `docs/` policies first.
 - Any code change must be accompanied by relevant documentation updates.
 - Persist user constraints across sessions by recording them in this document.
+### 2026-02-27
+- All agents must pre-read `docs/commands.md` and `docs/workflow.md` troubleshooting before running Gitea issue/PR/comment commands.
+- `gh` CLI is prohibited for repository ticket/PR operations; use `tea` (or documented Gitea API fallback only).
+- Session start must pass `python3 scripts/session_handover_check.py --strict`, with branch-matched entry in `workflow/session-handover.md`.
+### 2026-02-27
+- Apply process-change-first as an execution gate: process ticket must be merged before implementation ticket coding.
+- Handover entry must record concrete `next_ticket` and `process_gate_checked`; placeholders are not allowed in strict gate.
+- Before process merge confirmation, all subagent tasks must remain read-only (analysis only).


@@ -84,6 +84,37 @@ High-frequency trading with individual stock analysis:
- Momentum scoring (0-100 scale)
- Breakout/breakdown pattern detection
**TripleBarrierLabeler** (`triple_barrier.py`) — Financial time-series labeling (v2)
- Triple Barrier method: upper (take-profit), lower (stop-loss), time barrier
- First-touch labeling: labels confirmed by whichever barrier is breached first
- `max_holding_minutes` (calendar-minute) time barrier — session-aware, bar-period independent
- Tie-break mode: `"stop_first"` (conservative) or `"take_first"`
- Feature-label strict separation to prevent look-ahead bias
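A minimal long-side sketch of the first-touch rule and tie-break described above. The function name is hypothetical, and the time barrier is represented simply by the end of the supplied bar window rather than a `max_holding_minutes` clock.

```python
def triple_barrier_label(prices, entry, take_pct, stop_pct, tie_break="stop_first"):
    """First-touch triple-barrier label: +1 take-profit, -1 stop-loss, 0 time barrier."""
    upper = entry * (1 + take_pct / 100)
    lower = entry * (1 - stop_pct / 100)
    for high, low in prices:  # one (high, low) pair per bar, in time order
        hit_up, hit_down = high >= upper, low <= lower
        if hit_up and hit_down:  # both barriers inside one bar: apply tie-break
            return -1 if tie_break == "stop_first" else 1
        if hit_up:
            return 1
        if hit_down:
            return -1
    return 0  # time barrier: neither price barrier touched before the window ended
```

The `"stop_first"` tie-break is the conservative choice: when a single bar straddles both barriers, the label assumes the stop was hit first.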
**BacktestPipeline** (`backtest_pipeline.py`) — End-to-end validation pipeline (v2)
- `run_v2_backtest_pipeline()`: cost guard → triple barrier labeling → walk-forward splits → fold scoring
- `BacktestPipelineResult`: artifact contract for reproducible output
- `fold_has_leakage()`: leakage detection utility
**WalkForwardSplit** (`walk_forward_split.py`) — Time-series validation (v2)
- Fold-based walk-forward splits (no random shuffling)
- Purge/Embargo: excludes N bars before/after fold boundaries to prevent data leakage
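The purge/embargo idea can be sketched as an anchored walk-forward splitter. This is a simplified illustration, not the project's `walk_forward_split.py` API.

```python
def walk_forward_splits(n_bars, train_size, test_size, purge=0, embargo=0):
    """Yield (train, test) index folds in strict time order; no shuffling."""
    folds = []
    embargoed = set()                     # bars excluded from all future train sets
    test_start = train_size
    while test_start + test_size <= n_bars:
        # Purge: drop the last `purge` bars before the test fold from training.
        train = [i for i in range(test_start - purge) if i not in embargoed]
        test = list(range(test_start, test_start + test_size))
        folds.append((train, test))
        # Embargo: bars right after this test fold never enter later train sets.
        embargoed.update(range(test_start + test_size,
                               min(n_bars, test_start + test_size + embargo)))
        test_start += test_size
    return folds
```

Training indices always precede the test window they pair with, which is the property a leakage check like `fold_has_leakage()` would assert.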
**BacktestExecutionModel** (`backtest_execution_model.py`) — Conservative fill simulation (v2/v3)
- Session-aware slippage: KRX_REG 5bps, NXT_AFTER 15bps, US_REG 3bps, US_PRE/DAY 30-50bps
- Order failure rate simulation per session
- Partial fill rate simulation with min/max ratio bounds
- Unfavorable-direction fill assumption (no simple close-price fill)
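The unfavorable-direction fill assumption can be sketched as below. The bps table mirrors the figures listed above, but the exact keys, the worst-case default, and the function name are assumptions for illustration.

```python
# Hypothetical per-session slippage table (bps), echoing the ranges above.
SESSION_SLIPPAGE_BPS = {"KRX_REG": 5, "NXT_AFTER": 15, "US_REG": 3, "US_PRE": 50}

def conservative_fill_price(side: str, ref_price: float, session_id: str) -> float:
    """Assume the fill moves against us: buys fill higher, sells fill lower."""
    bps = SESSION_SLIPPAGE_BPS.get(session_id, 50)  # unknown session: worst case
    drift = ref_price * bps / 10_000
    return ref_price + drift if side == "BUY" else ref_price - drift
```

Applying slippage directionally, rather than filling at the close, is what makes the simulated backtest conservative.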
**BacktestCostGuard** (`backtest_cost_guard.py`) — Cost model validator (v2)
- `validate_backtest_cost_model()`: fail-fast check that session cost assumptions are present
- Enforces realistic cost assumptions before any backtest run proceeds
**SmartVolatilityScanner** (`smart_scanner.py`) — Python-first filtering pipeline
- **Domestic (KR)**:
@@ -98,7 +129,7 @@ High-frequency trading with individual stock analysis:
 - **Step 4**: Return top N candidates (default 3)
 - **Fallback (overseas only)**: If ranking API is unavailable, uses dynamic universe
   from runtime active symbols + recent traded symbols + current holdings (no static watchlist)
-- **Realtime mode only**: Daily mode uses batch processing for API efficiency
+- **Both modes**: Primarily realtime, but also used for candidate selection in the daily path (`run_daily_session()`)
 **Benefits:**
@@ -124,9 +155,9 @@ High-frequency trading with individual stock analysis:
 - Selects appropriate context layers for current market conditions
-### 4. Risk Manager (`src/core/risk_manager.py`)
+### 4. Risk Manager & Session Policy (`src/core/`)
-**RiskManager** — Safety circuit breaker and order validation
+**RiskManager** (`risk_manager.py`) — Safety circuit breaker and order validation
 > **READ-ONLY by policy** (see [`docs/agents.md`](./agents.md))
@@ -136,8 +167,59 @@ High-frequency trading with individual stock analysis:
- **Fat-Finger Protection**: Rejects orders exceeding 30% of available cash
- Must always be enforced, cannot be disabled
**OrderPolicy** (`order_policy.py`) — Session classification and order type enforcement (v3)
- `classify_session_id()`: Classifies current KR/US session from KST clock
- KR: `NXT_PRE` (08:00-08:50), `KRX_REG` (09:00-15:30), `NXT_AFTER` (15:30-20:00)
- US: `US_DAY` (10:00-18:00), `US_PRE` (18:00-23:30), `US_REG` (23:30-06:00), `US_AFTER` (06:00-07:00)
- Low-liquidity session detection: `NXT_AFTER`, `US_PRE`, `US_DAY`, `US_AFTER`
- Market order forbidden in low-liquidity sessions (`OrderPolicyRejected` raised)
- Limit/IOC/FOK orders always allowed
**KillSwitch** (`kill_switch.py`) — Emergency trading halt orchestration (v2)
- Fixed 5-step atomic sequence:
1. Block new orders (`new_orders_blocked = True`)
2. Cancel all unfilled orders
3. Refresh order state (query final status)
4. Reduce risk (force-close or reduce positions)
5. Snapshot state + send Telegram alert
- Async, injectable step callables — each step individually testable
- Highest priority: overrides overnight exception and all other rules
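The fixed 5-step sequence with injectable steps could look like the sketch below. Class and step names are illustrative assumptions, not the real `kill_switch.py` interface; the point is that step 1 flips the block flag unconditionally and steps 2-5 run in a fixed order.

```python
import asyncio

class KillSwitch:
    """Sketch: runs injected async steps in a fixed, testable order."""

    def __init__(self, steps):
        self.steps = steps              # ordered async callables for steps 2-5
        self.new_orders_blocked = False

    async def trigger(self):
        self.new_orders_blocked = True  # step 1: block new orders first
        executed = ["block_new_orders"]
        for step in self.steps:         # steps 2-5, never reordered
            await step()
            executed.append(step.__name__)
        return executed

# Stub steps; real implementations would call the broker/Telegram.
async def cancel_open_orders(): pass
async def refresh_order_state(): pass
async def reduce_risk(): pass
async def snapshot_and_alert(): pass

order = asyncio.run(KillSwitch(
    [cancel_open_orders, refresh_order_state, reduce_risk, snapshot_and_alert]
).trigger())
print(order)
```

Because each step is an injected callable, a test can pass stubs that record invocation order and assert the sequence never varies.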
**BlackoutManager** (`blackout_manager.py`) — KIS maintenance window handling (v3)
- Configurable blackout windows (e.g., `23:30-00:10 KST`)
- `queue_order()`: Queues order intent during blackout, enforces max queue size
- `pop_recovery_batch()`: Returns queued intents after recovery
- Recovery revalidation path (in `src/main.py`):
- Stale BUY drop (position already exists)
- Stale SELL drop (position absent)
- `validate_order_policy()` rechecked
- Price drift check (>5% → drop, configurable via `BLACKOUT_RECOVERY_MAX_PRICE_DRIFT_PCT`)
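The recovery revalidation checks listed above can be condensed into one function, sketched below under assumed names (the real path lives in `src/main.py` and may differ in shape):

```python
MAX_PRICE_DRIFT_PCT = 5.0  # mirrors BLACKOUT_RECOVERY_MAX_PRICE_DRIFT_PCT

def revalidate_intent(intent: dict, positions: set, current_price: float):
    """Return (keep, reason) for a queued intent after blackout recovery."""
    # Stale BUY: the position was opened some other way during the blackout.
    if intent["side"] == "BUY" and intent["code"] in positions:
        return False, "stale_buy_position_exists"
    # Stale SELL: nothing left to sell.
    if intent["side"] == "SELL" and intent["code"] not in positions:
        return False, "stale_sell_position_absent"
    # Price drift: drop if the market moved too far from the queued price.
    drift = abs(current_price - intent["price"]) / intent["price"] * 100
    if drift > MAX_PRICE_DRIFT_PCT:
        return False, f"price_drift_{drift:.1f}pct"
    return True, "ok"

keep, reason = revalidate_intent(
    {"side": "BUY", "code": "005930", "price": 70000}, set(), 74000)
print(keep, reason)  # False price_drift_5.7pct
```

An intent that survives all three checks would still go through `validate_order_policy()` before being sent.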
### 5. Strategy (`src/strategy/`)
**PositionStateMachine** (`position_state_machine.py`) — 4-state sell state machine (v2)
- States: `HOLDING` → `BE_LOCK` → `ARMED` → `EXITED`
- `HOLDING`: Normal holding
- `BE_LOCK`: Profit ≥ `be_arm_pct` — stop-loss elevated to break-even
- `ARMED`: Profit ≥ `arm_pct` — peak-tracking trailing stop active
- `EXITED`: Position closed
- `promote_state()`: Immediately elevates to highest admissible state (handles gaps/skips)
- `evaluate_exit_first()`: EXITED conditions checked before state promotion
- Monotonic: states only move up, never down
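A minimal sketch of the monotonic promotion rule, using the default thresholds from the env block later in this document (`be_arm_pct=1.2`, `arm_pct=3.0`); the real `PositionStateMachine` API may differ:

```python
# State order defines "highest admissible state" for promotion.
ORDER = ["HOLDING", "BE_LOCK", "ARMED", "EXITED"]

def promote_state(current: str, profit_pct: float,
                  be_arm_pct: float = 1.2, arm_pct: float = 3.0) -> str:
    # Pick the highest state the current profit admits; a gap that jumps
    # straight past be_arm_pct lands in ARMED in a single step.
    if profit_pct >= arm_pct:
        target = "ARMED"
    elif profit_pct >= be_arm_pct:
        target = "BE_LOCK"
    else:
        target = "HOLDING"
    # Monotonic: never demote below the current state.
    return max(current, target, key=ORDER.index)

print(promote_state("HOLDING", 3.5))  # ARMED (skips BE_LOCK in one step)
print(promote_state("ARMED", 0.5))    # ARMED (no demotion on a pullback)
```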
**ExitRules** (`exit_rules.py`) — 4-layer composite exit logic (v2)
- **Hard Stop**: `unrealized <= hard_stop_pct` (always enforced, ATR-adaptive for KR)
- **Break-Even Lock**: Once in BE_LOCK/ARMED, exit if price falls to entry price
- **ATR Trailing Stop**: `trailing_stop_price = peak_price - (atr_multiplier_k × ATR)`
- **Model Signal**: Exit if `pred_down_prob >= model_prob_threshold AND liquidity_weak`
- `evaluate_exit()`: Returns `ExitEvaluation` with next state, exit flag, reason, trailing price
- `ExitRuleConfig`: Frozen dataclass with all tunable parameters
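The layering above (hard stop checked first, then break-even lock, then the ATR trail) can be sketched as a single evaluation function. Parameter names loosely follow `ExitRuleConfig` but are assumptions; the model-signal layer is omitted for brevity:

```python
def evaluate_exit(state, price, entry, peak, atr,
                  hard_stop_pct=-5.0, atr_k=2.0):
    """Return (should_exit, reason) for the first triggered exit layer."""
    unrealized = (price - entry) / entry * 100
    # Layer 1 — hard stop: always enforced, checked before everything else.
    if unrealized <= hard_stop_pct:
        return True, "hard_stop"
    # Layer 2 — break-even lock: once BE_LOCK/ARMED, never give back gains.
    if state in ("BE_LOCK", "ARMED") and price <= entry:
        return True, "break_even_lock"
    # Layer 3 — ATR trailing stop off the peak, active only when ARMED.
    trailing = peak - atr_k * atr
    if state == "ARMED" and price <= trailing:
        return True, "atr_trailing"
    return False, None

print(evaluate_exit("ARMED", 10350, 10000, 10800, 200))  # (True, 'atr_trailing')
```

Note the trailing price here is `peak - 2.0 × ATR = 10800 - 400 = 10400`, matching the formula above.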
**Pre-Market Planner** (`pre_market_planner.py`) — AI playbook generation
- Runs before market open (configurable `PRE_MARKET_MINUTES`, default 30)
@@ -195,7 +277,7 @@ High-frequency trading with individual stock analysis:
- Configurable host/port (`DASHBOARD_HOST`, `DASHBOARD_PORT`, default `127.0.0.1:8080`)
- Serves static HTML frontend
**10 API Endpoints:**
| Endpoint | Method | Description |
|----------|--------|-------------|
@@ -207,6 +289,8 @@ High-frequency trading with individual stock analysis:
| `/api/context/{layer}` | GET | Query context by layer (L1-L7) |
| `/api/decisions` | GET | Decision log entries with outcomes |
| `/api/scenarios/active` | GET | Today's matched scenarios |
| `/api/pnl/history` | GET | P&L history time series |
| `/api/positions` | GET | Current open positions |
### 8. Notifications (`src/notifications/telegram_client.py`)
@@ -448,8 +532,12 @@ CREATE TABLE trades (
pnl REAL DEFAULT 0.0,
market TEXT DEFAULT 'KR',
exchange_code TEXT DEFAULT 'KRX',
session_id TEXT DEFAULT 'UNKNOWN', -- v3: KRX_REG | NXT_AFTER | US_REG | US_PRE | ...
selection_context TEXT, -- JSON: {rsi, volume_ratio, signal, score}
decision_id TEXT, -- Links to decision_logs
strategy_pnl REAL, -- v3: Core strategy P&L (separated from FX)
fx_pnl REAL DEFAULT 0.0, -- v3: FX gain/loss for USD trades (schema ready, activation pending)
mode TEXT -- paper | live
);
```
@@ -475,13 +563,14 @@ CREATE TABLE decision_logs (
stock_code TEXT,
market TEXT,
exchange_code TEXT,
session_id TEXT DEFAULT 'UNKNOWN', -- v3: session when decision was made
action TEXT,
confidence INTEGER,
rationale TEXT,
context_snapshot TEXT, -- JSON: full context at decision time
input_data TEXT, -- JSON: market data used
outcome_pnl REAL,
outcome_accuracy INTEGER,
reviewed INTEGER DEFAULT 0,
review_notes TEXT
);
@@ -494,7 +583,7 @@ CREATE TABLE playbooks (
id INTEGER PRIMARY KEY AUTOINCREMENT,
date TEXT NOT NULL,
market TEXT NOT NULL,
status TEXT NOT NULL DEFAULT 'pending', -- pending → generated → active → expired
playbook_json TEXT NOT NULL, -- Full playbook with scenarios
generated_at TEXT NOT NULL,
token_count INTEGER,
@@ -552,6 +641,29 @@ PLANNER_TIMEOUT_SECONDS=60 # Timeout for playbook generation
DEFENSIVE_PLAYBOOK_ON_FAILURE=true # Fallback on AI failure
RESCAN_INTERVAL_SECONDS=300 # Scenario rescan interval during trading
# Optional — v2 Exit Rules (State Machine)
STAGED_EXIT_BE_ARM_PCT=1.2 # Break-even lock threshold (%)
STAGED_EXIT_ARM_PCT=3.0 # Armed state threshold (%)
KR_ATR_STOP_MULTIPLIER_K=2.0 # ATR multiplier for KR dynamic hard stop
KR_ATR_STOP_MIN_PCT=-2.0 # Tightest KR hard stop bound (%, negative)
KR_ATR_STOP_MAX_PCT=-7.0 # Loosest KR hard stop bound (%, negative)
# Optional — v2 Trade Filters
STOP_LOSS_COOLDOWN_MINUTES=120 # Cooldown after stop-loss before re-entry (same ticker)
US_MIN_PRICE=5.0 # Minimum US stock price for BUY ($)
# Optional — v3 Session Risk Management
SESSION_RISK_RELOAD_ENABLED=true # Reload risk params at session boundaries
SESSION_RISK_PROFILES_JSON="{}" # Per-session overrides JSON: {"KRX_REG": {"be_arm_pct": 1.0}}
OVERNIGHT_EXCEPTION_ENABLED=true # Allow holding through session close (conditions apply)
# Optional — v3 Blackout (KIS maintenance windows)
ORDER_BLACKOUT_ENABLED=true
ORDER_BLACKOUT_WINDOWS_KST=23:30-00:10 # Comma-separated: "HH:MM-HH:MM"
ORDER_BLACKOUT_QUEUE_MAX=500 # Max queued orders during blackout
BLACKOUT_RECOVERY_PRICE_REVALIDATION_ENABLED=true
BLACKOUT_RECOVERY_MAX_PRICE_DRIFT_PCT=5.0 # Drop recovery order if price drifted >5%
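One way `ORDER_BLACKOUT_WINDOWS_KST` could be parsed is sketched below; a window that wraps past midnight (like the default `23:30-00:10`) is split into two same-day ranges. Function names are illustrative, not the actual config loader:

```python
from datetime import time

def parse_windows(spec: str):
    """Parse comma-separated "HH:MM-HH:MM" windows; split midnight wraps."""
    windows = []
    for part in spec.split(","):
        start_s, end_s = part.strip().split("-")
        start = time(*map(int, start_s.split(":")))
        end = time(*map(int, end_s.split(":")))
        if start <= end:
            windows.append((start, end))
        else:  # wraps midnight: e.g. 23:30-00:10 -> 23:30-23:59 and 00:00-00:10
            windows.append((start, time(23, 59)))
            windows.append((time(0, 0), end))
    return windows

def in_blackout(now: time, windows) -> bool:
    return any(start <= now <= end for start, end in windows)

wins = parse_windows("23:30-00:10")
print(in_blackout(time(23, 45), wins), in_blackout(time(9, 0), wins))  # True False
```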
# Optional — Smart Scanner (realtime mode only)
RSI_OVERSOLD_THRESHOLD=30 # 0-50, oversold threshold
RSI_MOMENTUM_THRESHOLD=70 # 50-100, momentum threshold
@@ -4,8 +4,61 @@
**Critical: Learn from failures. Never repeat the same failed command without modification.**
## Repository VCS Rule (Mandatory)
- All ticket/PR/comment work for this repository is performed against Gitea.
- Use of the `gh` (GitHub CLI) command is forbidden.
- The default tool is `tea`; use the Gitea API as a fallback only for cases `tea` does not support.
- Before executing, always check `Gitea CLI Formatting Troubleshooting` in `docs/workflow.md`.
## Session Handover Preflight (Mandatory)
- Immediately after a session starts (before any code changes), run the command below first.
```bash
python3 scripts/session_handover_check.py --strict
```
- On failure, update the latest entry in `workflow/session-handover.md`, then rerun.
## Docs Sync Validator (Mandatory for docs changes)
- In documentation-change PRs, run the sync validation first:
```bash
python3 scripts/validate_docs_sync.py
```
- On validation failure, fix immediately according to the message:
- `absolute link is forbidden`: a doc link uses an absolute path (`/...`)
- `broken link`: the target file/anchor of a relative link is missing
- `missing core doc link reference`: a core link is missing from `README.md`/`CLAUDE.md`
- `duplicated API endpoint row`: duplicate row in the `docs/commands.md` API endpoint table
- `missing dynamic test count guidance`: `docs/testing.md` is missing the `pytest --collect-only -q` guidance
### tea CLI (Gitea Command Line Tool)
#### ❌ Comment Newline Escaping (`\n` rendered literally)
```bash
YES="" ~/bin/tea comment 374 "line1\nline2"
# Web UI shows "\n" as text instead of line breaks
```
**💡 Reason:** Inline string escaping is interpreted literally before comment submission.
**✅ Solution:** Use file-based helper to preserve multiline text
```bash
cat > /tmp/comment.md <<'EOF'
line1
line2
EOF
scripts/tea_comment.sh 374 /tmp/comment.md
```
**📝 Notes:**
- `scripts/tea_comment.sh` accepts stdin with `-` as body source.
- The helper fails fast when body looks like escaped-newline text only.
#### ❌ TTY Error - Interactive Confirmation Fails
```bash
~/bin/tea issues create --repo X --title "Y" --description "Z"
@@ -119,7 +172,7 @@ No decorator needed for async tests.
# Install all dependencies (production + dev)
pip install -e ".[dev]"
# Run full test suite with coverage (998 tests across 41 files)
pytest -v --cov=src --cov-report=term-missing
# Run a single test file
@@ -140,6 +193,18 @@ python -m src.main --mode=paper
# Run with dashboard enabled
python -m src.main --mode=paper --dashboard
# Runtime verification monitor (coverage + forbidden invariants)
bash scripts/runtime_verify_monitor.sh
# Runtime monitor with explicit policy timezone (example: KST)
POLICY_TZ=Asia/Seoul bash scripts/runtime_verify_monitor.sh
# Session handover gate (must pass before implementation)
python3 scripts/session_handover_check.py --strict
# Follow runtime verification log
tail -f data/overnight/runtime_verify_*.log
# Docker
docker compose up -d ouroboros # Run agent
docker compose --profile test up test # Run tests in container
@@ -173,6 +238,8 @@ Dashboard runs as a daemon thread on `DASHBOARD_HOST:DASHBOARD_PORT` (default: `
| `GET /api/context/{layer}` | Context data by layer L1-L7 (query: `timeframe`) |
| `GET /api/decisions` | Decision log entries (query: `limit`, `market`) |
| `GET /api/scenarios/active` | Today's matched scenarios |
| `GET /api/pnl/history` | P&L history over time |
| `GET /api/positions` | Current open positions |
## Telegram Commands
@@ -0,0 +1,56 @@
<!--
Doc-ID: DOC-VAL-001
Version: 1.0.0
Status: active
Owner: strategy
Updated: 2026-02-26
-->
# Documentation Validation System
This document defines the validation rules that proactively eliminate false statements, conflicting figures, and unimplementable instructions across documents.
## Validation Goals
- Guarantee numeric/rule consistency of every work order against a single source of truth
- Guarantee traceability between design statements and code work orders
- Forbid starting implementation while tests are undefined
## Inconsistency Types
- `RULE-DOC-001`: Use of an undefined requirement ID
- `RULE-DOC-002`: Conflicting values (e.g., slippage figures) stated for the same requirement ID
- `RULE-DOC-003`: Missing timezone notation, or mixed KST/UTC instructions
- `RULE-DOC-004`: Conflict between order policy and risk policy (e.g., allowing market orders in a low-liquidity session)
- `RULE-DOC-005`: Implementation task without a linked test ID
- `RULE-DOC-006`: Broken document routing link
## Validation Pipeline
1. Static checks (automated)
   - Target: `docs/ouroboros/*.md`
   - Checks: metadata, link validity, ID definition/reference consistency, REQ traceability mapping
   - Tool: `scripts/validate_ouroboros_docs.py`
2. Traceability checks (automated + manual)
   - Automated: verify every `REQ-*` is linked to at least one `TASK-*` and one `TEST-*`
   - Manual: review policy-conflict candidates via the PR checklist
3. Domain integrity checks (manual)
   - Verify that KIS maintenance-window avoidance, order-type enforcement, Kill Switch ordering, and the FX policy all coexist
   - Verify that backtest fill prices use conservative assumptions
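The automated traceability check can be sketched as below. This is a toy version of the rule only; the real `scripts/validate_ouroboros_docs.py` is more thorough, and the regex and function name here are assumptions:

```python
import re

def find_orphan_reqs(doc_text: str):
    """Return REQ-* IDs that never co-occur with both a TASK-* and a TEST-*."""
    orphans = []
    for req in sorted(set(re.findall(r"REQ-[A-Z0-9]+-\d+", doc_text))):
        # A REQ counts as covered if some line mentions it alongside
        # a TASK-* reference and a TEST-* reference.
        lines = [line for line in doc_text.splitlines() if req in line]
        has_task = any("TASK-" in line for line in lines)
        has_test = any("TEST-" in line for line in lines)
        if not (has_task and has_test):
            orphans.append(req)
    return orphans

doc = "TASK-CODE-001 (REQ-V2-001, TEST-CODE-001)\nREQ-V3-009: new rule\n"
print(find_orphan_reqs(doc))  # ['REQ-V3-009']
```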
## Change Control Rules
- When adding/modifying a `REQ-*`, always change `01_requirements_registry.md` first
- When modifying a `TASK-*`, always update the corresponding test in `40_acceptance_and_test_plan.md` at the same time
- On conflict, priority order is: `requirements_registry > phase execution > code work order`
Applied rulesets:
- `RULE-DOC-001` `RULE-DOC-002` `RULE-DOC-003` `RULE-DOC-004` `RULE-DOC-005` `RULE-DOC-006`
## PR Gate
- `python3 scripts/validate_ouroboros_docs.py` passes
- New/changed `REQ-*` are linked to test criteria (`TEST-*`)
- No contradiction with the original plans (v2/v3)
@@ -0,0 +1,40 @@
<!--
Doc-ID: DOC-REQ-001
Version: 1.0.8
Status: active
Owner: strategy
Updated: 2026-03-02
-->
# Requirements Registry (Single Source of Truth)
The IDs in this document are the only set of requirements referenced across all planning, implementation, and test documents.
## v2 Core Requirements
- `REQ-V2-001`: States must be exactly the four stages `HOLDING`, `BE_LOCK`, `ARMED`, `EXITED`.
- `REQ-V2-002`: On every tick/bar evaluation, the state must be promoted immediately to the highest admissible state.
- `REQ-V2-003`: `EXITED` conditions must be evaluated before all other states.
- `REQ-V2-004`: Exit logic must include the Hard Stop, BE Lock, ATR Trailing, and model-probability auxiliary triggers.
- `REQ-V2-005`: Labeling must use the Triple Barrier (Upper/Lower/Time) method.
- `REQ-V2-006`: Validation enforces Walk-forward + Purge/Embargo.
- `REQ-V2-007`: A backtest that does not model costs, slippage, and fill failures cannot be accepted.
- `REQ-V2-008`: The Kill Switch sequence is: block new orders -> cancel unfilled orders -> re-query -> reduce risk -> snapshot.
## v3 Core Requirements
- `REQ-V3-001`: Every signal/order/log must include a `session_id`.
- `REQ-V3-002`: Risk parameters must be reloaded on session transitions.
- `REQ-V3-003`: New orders are forbidden during broker blackout windows.
- `REQ-V3-004`: During a blackout, signals are placed in a bounded queue; on saturation, the oldest-drop policy preserves the newest intents, and queued intents are revalidated after recovery.
- `REQ-V3-005`: Market orders are forbidden in low-liquidity sessions (`NXT_AFTER`, `US_PRE`, `US_DAY`, `US_AFTER`).
- `REQ-V3-006`: Backtest fill prices assume adverse-direction fills by default.
- `REQ-V3-007`: US operation must include separate FX P&L tracking and a currency buffer policy.
- `REQ-V3-008`: Close/overnight rules must integrate with the Kill Switch without conflict.
## Common Operational Requirements
- `REQ-OPS-001`: The timezone (KST/UTC) must be explicit on every time field.
- `REQ-OPS-002`: Numeric policies in documents are changed only in this registry.
- `REQ-OPS-003`: Every implementation task must be accompanied by a test task.
- `REQ-OPS-004`: The original plan documents (`v2`, `v3`) use the `docs/ouroboros/source/` path as the single reference.
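The bounded oldest-drop queue required by `REQ-V3-004` can be sketched in a few lines; `collections.deque` with `maxlen` evicts the oldest entry on append for free. Class and method names are illustrative, not the actual `BlackoutManager` API:

```python
from collections import deque

class BlackoutQueue:
    """Sketch of REQ-V3-004: bounded queue that keeps the newest intents."""

    def __init__(self, max_size: int):
        self._q = deque(maxlen=max_size)
        self.dropped = 0  # count of oldest intents evicted on saturation

    def queue_order(self, intent):
        if len(self._q) == self._q.maxlen:
            self.dropped += 1  # append() below evicts the oldest entry
        self._q.append(intent)

    def pop_recovery_batch(self):
        # After recovery, hand back everything for revalidation and reset.
        batch, self._q = list(self._q), deque(maxlen=self._q.maxlen)
        return batch

q = BlackoutQueue(max_size=2)
for intent in ["a", "b", "c"]:
    q.queue_order(intent)
print(q.pop_recovery_batch(), q.dropped)  # ['b', 'c'] 1
```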
@@ -0,0 +1,63 @@
<!--
Doc-ID: DOC-PHASE-V2-001
Version: 1.0.0
Status: active
Owner: strategy
Updated: 2026-02-26
-->
# v2 Execution Work Order (Design -> Code)
Referenced requirements: `REQ-V2-001` `REQ-V2-002` `REQ-V2-003` `REQ-V2-004` `REQ-V2-005` `REQ-V2-006` `REQ-V2-007` `REQ-V2-008` `REQ-OPS-001` `REQ-OPS-002` `REQ-OPS-003`
## Step 1: Finalize the Domain Model
- `TASK-V2-001`: Design the state-machine enum, transition events, and transition-reason schema
- `TASK-V2-002`: Define the `position_state` snapshot structure (current state, peak, stops, last_reason)
- `TASK-V2-003`: Define the exit-decision input DTO (price, ATR, pred_prob, liquidity_signal)
Completion criteria:
- States and transition reasons are reproducible from logs/DB
- `REQ-V2-001`~`003` are enforced at the code-type level
## Step 2: Implement the Exit Engine
- `TASK-V2-004`: Implement the priority-based transition function (`evaluate_exit_first` -> `promote_state`)
- `TASK-V2-005`: Implement the combined Hard Stop / BE Lock / ATR Trailing logic
- `TASK-V2-006`: Combine the model-probability signal as an auxiliary trigger (never an exit on its own)
Completion criteria:
- In a gap where multiple conditions are met at once, the state transitions to the highest state in a single step
- `REQ-V2-004` satisfied
## Step 3: Labeling / Training Data Pipeline
- `TASK-V2-007`: Implement the Triple Barrier labeler (first-touched barrier wins)
- `TASK-V2-008`: Implement a utility that verifies feature-window / label-window separation
- `TASK-V2-009`: Record label-generation logs (entry time, touched barrier, expiry barrier)
Completion criteria:
- Evidence logs proving look-ahead is blocked
- `REQ-V2-005` satisfied
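A toy first-touch Triple Barrier labeler, illustrating the rule the step above implements (+1 if the upper barrier is touched first, -1 for the lower, 0 on time expiry). This is a sketch only, not the repository's labeler:

```python
def triple_barrier_label(prices, entry_idx, upper_pct, lower_pct, horizon):
    """First-touch triple barrier: +1 upper, -1 lower, 0 on time expiry."""
    entry = prices[entry_idx]
    upper = entry * (1 + upper_pct / 100)
    lower = entry * (1 + lower_pct / 100)
    # Scan forward bar by bar; whichever barrier is touched first wins.
    for price in prices[entry_idx + 1: entry_idx + 1 + horizon]:
        if price >= upper:
            return 1
        if price <= lower:
            return -1
    return 0  # time barrier: horizon expired without touching either side

prices = [100, 101, 99, 103, 95]
print(triple_barrier_label(prices, 0, upper_pct=2.5, lower_pct=-3.0, horizon=4))  # 1
```

Here the upper barrier (102.5) is touched at price 103 before the lower barrier (97.0) or the horizon, so the label is +1 even though the path later reaches 95.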
## Step 4: Validation Framework
- `TASK-V2-010`: Implement the Walk-forward split + Purge/Embargo splitter
- `TASK-V2-011`: Implement the baseline (`B0`,`B1`,`M1`) comparison report format
- `TASK-V2-012`: Make the cost/slippage/fill-failure backtest options mandatory
Completion criteria:
- `REQ-V2-006`, `REQ-V2-007` satisfied
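A minimal walk-forward splitter with purge/embargo gaps, sketching what the splitter above enforces: each fold trains on indices `[0, t)` and tests on `[t + purge + embargo, …)`, so no label window overlaps the training window. Names and defaults are assumptions:

```python
def walk_forward_splits(n, train_min, test_size, purge=2, embargo=1):
    """Expanding-window walk-forward splits with a purge+embargo gap."""
    splits, t = [], train_min
    gap = purge + embargo
    while t + gap + test_size <= n:
        train = list(range(0, t))                       # expanding train window
        test = list(range(t + gap, t + gap + test_size))  # gap blocks leakage
        splits.append((train, test))
        t += test_size
    return splits

for train, test in walk_forward_splits(n=20, train_min=8, test_size=4):
    print(len(train), test[0], test[-1])
```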
## Step 5: Kill Switch Integration
- `TASK-V2-013`: Implement the sequential Kill Switch orchestrator (no modification of `src/core/risk_manager.py`)
- `TASK-V2-014`: Implement the order-block flag, unfilled-order cancellation, and re-query retry logic
- `TASK-V2-015`: Implement the snapshot/alert/recovery entry procedure
Completion criteria:
- Sequence matches `REQ-V2-008`
Routing:
- Code-level detail: [30_code_level_work_orders.md](./30_code_level_work_orders.md)
- Test detail: [40_acceptance_and_test_plan.md](./40_acceptance_and_test_plan.md)
@@ -0,0 +1,60 @@
<!--
Doc-ID: DOC-PHASE-V3-001
Version: 1.0.0
Status: active
Owner: strategy
Updated: 2026-02-26
-->
# v3 Execution Work Order (Session Expansion)
Referenced requirements: `REQ-V3-001` `REQ-V3-002` `REQ-V3-003` `REQ-V3-004` `REQ-V3-005` `REQ-V3-006` `REQ-V3-007` `REQ-V3-008` `REQ-OPS-001` `REQ-OPS-002` `REQ-OPS-003`
## Step 1: Session Engine
- `TASK-V3-001`: Implement the `session_id` classifier (KR/US extended sessions)
- `TASK-V3-002`: Implement risk-parameter reloading in the session-transition hook
- `TASK-V3-003`: Enforce the `session_id` field in log/DB schemas
Completion criteria:
- `REQ-V3-001`, `REQ-V3-002` satisfied
## Step 2: Blackout / Recovery Control
- `TASK-V3-004`: Implement the blackout-window policy loader (config-driven)
- `TASK-V3-005`: Implement blocking of new orders during blackout + intent queueing
- `TASK-V3-006`: Implement queue revalidation after recovery synchronization (balances/open orders/fills)
Completion criteria:
- `REQ-V3-003`, `REQ-V3-004` satisfied
## Step 3: Order Policy Hardening
- `TASK-V3-007`: Implement the per-session order-type matrix
- `TASK-V3-008`: Hard-block market orders in low-liquidity sessions
- `TASK-V3-009`: Implement requote interval/count limits and order-withdrawal conditions
Completion criteria:
- `REQ-V3-005` satisfied
## Step 4: Cost / Fill Model Refinement
- `TASK-V3-010`: Apply the per-session slippage/cost tables in the engine
- `TASK-V3-011`: Implement the adverse-fill-assumption (opposite quote side) fill simulator
- `TASK-V3-012`: Model fill failures / partial fills per scenario
Completion criteria:
- `REQ-V3-006` satisfied
## Step 5: FX / Overnight / Kill Switch Integration
- `TASK-V3-013`: Implement separate accounting of strategy PnL and FX PnL
- `TASK-V3-014`: Restrict new entries when the USD/KRW buffer rule is violated
- `TASK-V3-015`: Integrate the overnight exception with Kill Switch priority
Completion criteria:
- `REQ-V3-007`, `REQ-V3-008` satisfied
Routing:
- Code-level detail: [30_code_level_work_orders.md](./30_code_level_work_orders.md)
- Test detail: [40_acceptance_and_test_plan.md](./40_acceptance_and_test_plan.md)
@@ -0,0 +1,60 @@
<!--
Doc-ID: DOC-CODE-001
Version: 1.0.0
Status: active
Owner: strategy
Updated: 2026-02-26
-->
# Code-Level Work Orders
This document is a file-level implementation work order. Every task must include requirement IDs and test IDs.
Constraints:
- Treat `src/core/risk_manager.py` as READ-ONLY and do not modify it.
- Add the Kill Switch as a separate module (e.g., `src/core/kill_switch.py`) and wire it into the top-level execution loop.
## Implementation Unit A: State Machine / Exits
- `TASK-CODE-001` (`REQ-V2-001`,`REQ-V2-002`,`REQ-V2-003`,`TEST-CODE-001`,`TEST-CODE-002`): Add the state-machine module under `src/strategy/`
- `TASK-CODE-002` (`REQ-V2-004`,`TEST-ACC-011`): Add the combined ATR/BE/Hard Stop exit function
- `TASK-CODE-003` (`REQ-V2-008`,`TEST-ACC-002`): Add the Kill Switch orchestrator in `src/core/kill_switch.py`
- `TEST-CODE-001`: Highest-state promotion test on gap jumps
- `TEST-CODE-002`: EXIT priority test
## Implementation Unit B: Labeling / Validation
- `TASK-CODE-004` (`REQ-V2-005`,`TEST-CODE-003`,`TEST-ACC-012`): Add the Triple Barrier labeler module (`src/analysis/` or `src/strategy/`)
- `TASK-CODE-005` (`REQ-V2-006`,`TEST-CODE-004`,`TEST-ACC-013`): Add the Walk-forward + Purge/Embargo split utility
- `TASK-CODE-006` (`REQ-V2-007`,`TEST-ACC-014`): Make the cost/slippage options mandatory in the backtest runner
- `TEST-CODE-003`: Label first-touch priority test
- `TEST-CODE-004`: Leakage-prevention test
## Implementation Unit C: Session / Order Policy
- `TASK-CODE-007` (`REQ-V3-001`,`REQ-V3-002`,`TEST-ACC-015`,`TEST-ACC-016`): Wire session classification/transition hooks into `src/markets/schedule.py`
- `TASK-CODE-008` (`REQ-V3-003`,`REQ-V3-004`,`TEST-CODE-005`,`TEST-ACC-017`): Add the blackout queue handler under `src/broker/`
- `TASK-CODE-009` (`REQ-V3-005`,`TEST-CODE-006`,`TEST-ACC-004`): Add the per-session order-type validator
- `TEST-CODE-005`: Blackout new-order blocking test
- `TEST-CODE-006`: Low-liquidity session market-order rejection test
## Implementation Unit D: Fills / FX / Overnight
- `TASK-CODE-010` (`REQ-V3-006`,`TEST-CODE-007`,`TEST-ACC-005`): Implement the adverse fill-price model in the backtest fill simulator
- `TASK-CODE-011` (`REQ-V3-007`,`TEST-CODE-008`,`TEST-ACC-006`): Add the FX PnL separate-accounting table/columns
- `TASK-CODE-012` (`REQ-V3-008`,`TEST-ACC-018`): Implement conflict resolution between the overnight exception and the Kill Switch
- `TEST-CODE-007`: Adverse fill-price model test
- `TEST-CODE-008`: New-entry restriction on FX buffer violation test
## Implementation Unit E: Operations / Docs Governance
- `TASK-OPS-001` (`REQ-OPS-001`,`TEST-ACC-007`): Implement the rule enforcing timezone (KST/UTC) notation on time fields and log schemas
- `TASK-OPS-002` (`REQ-OPS-002`,`TEST-ACC-008`): Add a CI check requiring `01_requirements_registry.md` to be updated first on policy-number changes
- `TASK-OPS-003` (`REQ-OPS-003`,`TEST-ACC-009`): Maintain the docs validation gate blocking any `REQ-*` without a `TASK-*` or without a `TEST-*`
- `TASK-OPS-004` (`REQ-OPS-004`,`TEST-ACC-019`): Standardize the v2/v3 original plan document location to `docs/ouroboros/source/` and validate link consistency
## Commit Rules
- Include the `TASK-*` in commit messages
- Include the `REQ-*`/`TEST-*` mapping table in the PR body
- Link at least one test per changed file
@@ -0,0 +1,64 @@
<!--
Doc-ID: DOC-TEST-001
Version: 1.0.0
Status: active
Owner: strategy
Updated: 2026-02-26
-->
# Acceptance Criteria and Test Plan
## Acceptance Criteria
- `TEST-ACC-000` (`REQ-V2-001`): The state enum allows only the four states (`HOLDING`,`BE_LOCK`,`ARMED`,`EXITED`).
- `TEST-ACC-001` (`REQ-V2-002`): State transitions work by priority promotion, not by sequential if-else.
- `TEST-ACC-010` (`REQ-V2-003`): `EXITED` conditions are evaluated before any other state.
- `TEST-ACC-011` (`REQ-V2-004`): The exit decision includes all four elements: Hard Stop / BE Lock / ATR / model auxiliary.
- `TEST-ACC-012` (`REQ-V2-005`): Triple Barrier labels are decided by the first-touch rule.
- `TEST-ACC-013` (`REQ-V2-006`): Train/validation splits apply Walk-forward + Purge/Embargo.
- `TEST-ACC-014` (`REQ-V2-007`): Backtest execution is refused when the cost/slippage/fill-failure options are disabled.
- `TEST-ACC-002` (`REQ-V2-008`): The Kill Switch execution order never violates the fixed sequence.
- `TEST-ACC-015` (`REQ-V3-001`): A `session_id` is stored on every order/log record.
- `TEST-ACC-016` (`REQ-V3-002`): Risk parameters are reloaded on session-transition events.
- `TEST-ACC-003` (`REQ-V3-003`): No new-order API calls occur during a blackout.
- `TEST-ACC-017` (`REQ-V3-004`): The blackout queue executes only orders that pass revalidation after recovery.
- `TEST-ACC-004` (`REQ-V3-005`): Market orders in low-liquidity sessions are always rejected.
- `TEST-ACC-005` (`REQ-V3-006`): Backtest fill prices yield more conservative P&L than naive close-price fills.
- `TEST-ACC-006` (`REQ-V3-007`): Strategy P&L and FX P&L are aggregated separately.
- `TEST-ACC-018` (`REQ-V3-008`): Kill Switch priority is preserved even in the overnight-exception state.
- `TEST-ACC-007` (`REQ-OPS-001`): Time-related fields fail validation when the timezone (KST/UTC) is missing.
- `TEST-ACC-008` (`REQ-OPS-002`): A policy-number change not reflected in the registry fails validation.
- `TEST-ACC-009` (`REQ-OPS-003`): A `REQ-*` existing without a `TASK-*`/`TEST-*` mapping fails validation.
- `TEST-ACC-019` (`REQ-OPS-004`): v2/v3 original plan document links pass only when based on the `docs/ouroboros/source/` path.
## Test Layers
1. Unit tests
   - State transitions, order-type validation, queue recovery logic, fill-price model
2. Integration tests
   - Session transition -> order policy -> risk engine integration
   - Blackout start/end event integration
3. Regression tests
   - Run the full existing `tests/` suite
   - Compare new feature flags ON/OFF
4. Runtime / monitoring verification (mandatory)
   - After development, actually run the system and observe the critical paths
   - Mandatory observations: order-blocking policy, Kill Switch behavior, alert/exception logs, session-transition logs
   - Attach evidence (executed commands / summary logs) in a Runtime Verifier comment
## Execution Commands
```bash
pytest -q
python3 scripts/validate_ouroboros_docs.py
```
## Failure Handling Rules
- If docs validation fails, implementation PRs must not be merged
- If the test mapping is missing after a `REQ-*` change, merging is forbidden
- On regression failure, isolate the offending module and revalidate
- Without runtime/monitoring evidence, verification approval is forbidden
@@ -0,0 +1,68 @@
<!--
Doc-ID: DOC-PM-001
Version: 1.0.0
Status: active
Owner: strategy
Updated: 2026-02-26
-->
# Production Scenario Matrix + Issue Taxonomy
Goal: map Happy Path / Failure Path / Ops Incident scenarios onto the traceable ID scheme (`REQ-*`, `TASK-*`, `TEST-*`) in a form directly usable in operations.
## 1) Scenario Matrix
| Scenario ID | Type | Trigger | Expected System Behavior | Primary IDs (REQ/TASK/TEST) | Ticket Priority |
|---|---|---|---|---|---|
| `SCN-HAPPY-001` | Happy Path | Entry signal in the KR regular session, not in a blackout | Store `session_id` on the order/log, then send a policy-conforming order | `REQ-V3-001`, `TASK-V3-001`, `TASK-V3-003`, `TEST-ACC-015` | P1 |
| `SCN-HAPPY-002` | Happy Path | Held position sequentially reaches BE/ATR/Hard Stop conditions | State promotes immediately to the higher stage; `EXITED` is evaluated first | `REQ-V2-002`, `REQ-V2-003`, `TASK-V2-004`, `TEST-ACC-001`, `TEST-ACC-010` | P0 |
| `SCN-HAPPY-003` | Happy Path | Session-transition (KR->US) event | Risk parameters reload automatically; the new session policy applies immediately | `REQ-V3-002`, `TASK-V3-002`, `TEST-ACC-016` | P0 |
| `SCN-HAPPY-004` | Happy Path | Backtest run requested | Refuse execution when cost/slippage/fill-failure options are missing; run when included | `REQ-V2-007`, `TASK-V2-012`, `TEST-ACC-014` | P1 |
| `SCN-FAIL-001` | Failure Path | New-order signal during a blackout | Block new orders + enqueue the order intent; no direct API calls | `REQ-V3-003`, `REQ-V3-004`, `TASK-V3-005`, `TEST-ACC-003`, `TEST-ACC-017` | P0 |
| `SCN-FAIL-002` | Failure Path | Market order requested in a low-liquidity session | Hard-reject the market order; substitute a limit order or cancel | `REQ-V3-005`, `TASK-V3-007`, `TASK-V3-008`, `TEST-ACC-004` | P0 |
| `SCN-FAIL-003` | Failure Path | Kill Switch triggered (loss/connection/risk limit) | Enforce the sequence: block new orders -> cancel unfilled -> re-query -> reduce risk -> snapshot | `REQ-V2-008`, `TASK-V2-013`, `TEST-ACC-002` | P0 |
| `SCN-FAIL-004` | Failure Path | US entry signal while the FX buffer is short | Keep strategy PnL / FX PnL aggregated separately; restrict new entries | `REQ-V3-007`, `TASK-V3-013`, `TASK-V3-014`, `TEST-ACC-006` | P1 |
| `SCN-OPS-001` | Ops Incident | Right after broker maintenance/blackout ends | Synchronize balances/open orders/fills, then execute only queued orders that pass revalidation | `REQ-V3-004`, `TASK-V3-006`, `TEST-ACC-017` | P0 |
| `SCN-OPS-002` | Ops Incident | Policy figure changed only in code, registry not updated | Docs validation fails; PR merge blocked | `REQ-OPS-002`, `TASK-OPS-002`, `TEST-ACC-008` | P0 |
| `SCN-OPS-003` | Ops Incident | Logs/schedule data arrive without a timezone | Records without KST/UTC notation fail validation | `REQ-OPS-001`, `TASK-OPS-001`, `TEST-ACC-007` | P1 |
| `SCN-OPS-004` | Ops Incident | New REQ added without TASK/TEST | Traceability gate fails; implementation PR merge blocked | `REQ-OPS-003`, `TASK-OPS-003`, `TEST-ACC-009` | P0 |
| `SCN-OPS-005` | Ops Incident | Post-deploy runtime anomaly detected (order errors / state-transition errors / policy violations) | Runtime Verifier files an issue immediately; closed after the Dev fix and re-observation | `REQ-V2-008`, `REQ-V3-003`, `REQ-V3-005`, `TEST-ACC-002`, `TEST-ACC-003`, `TEST-ACC-004` | P0 |
## 2) Issue Taxonomy
| Taxonomy | Definition | Typical Symptoms | Default Owner | Mapping Baseline |
|---|---|---|---|---|
| `EXEC-STATE` | State-machine / exit-priority violations | Broken EXIT priority, state regression, failed gap handling | Strategy | `REQ-V2-001`~`REQ-V2-004`, `TASK-V2-004`~`TASK-V2-006`, `TEST-ACC-000`,`001`,`010`,`011` |
| `EXEC-POLICY` | Session / order-policy violations | Orders sent during blackout, market orders allowed in low liquidity | Broker/Execution | `REQ-V3-003`~`REQ-V3-005`, `TASK-V3-004`~`TASK-V3-009`, `TEST-ACC-003`,`004`,`017` |
| `BACKTEST-MODEL` | Backtest realism / validation-integrity violations | Runs with cost options off, over-optimistic fill prices | Research | `REQ-V2-006`,`REQ-V2-007`,`REQ-V3-006`, `TASK-V2-010`~`012`, `TASK-V3-010`~`012`, `TEST-ACC-013`,`014`,`005` |
| `RISK-EMERGENCY` | Kill Switch / emergency risk-response failures | Sequence violations, missed blocking, missing recovery procedure | Risk | `REQ-V2-008`,`REQ-V3-008`, `TASK-V2-013`~`015`, `TASK-V3-015`, `TEST-ACC-002`,`018` |
| `FX-ACCOUNTING` | FX / currency-buffer policy violations | Mixed strategy/FX P&L aggregation, buffer not applied | Risk + Data | `REQ-V3-007`, `TASK-V3-013`,`014`, `TEST-ACC-006` |
| `OPS-GOVERNANCE` | Docs / traceability / timezone governance violations | Registry not updated, missing TEST, missing timezone | PM + QA | `REQ-OPS-001`~`003`, `TASK-OPS-001`~`003`, `TEST-ACC-007`~`009` |
| `RUNTIME-VERIFY` | Live-behavior monitoring verification | Post-deploy anomalies, intermittent errors, regressions missed by tests | Runtime Verifier + TPM | Related `REQ/TASK/TEST` plus runtime log evidence required |
## 3) Ticket Creation Rules (Implementable)
1. Every issue title includes `taxonomy + scenario_id`.
   Example: `[EXEC-POLICY][SCN-FAIL-001] blackout order blocking missed`
2. Mandatory body sections: reproduction steps, expected result, actual result, impact scope, rollback/mitigation.
3. The body names at least one `REQ-*`, one `TASK-*`, and one `TEST-*`.
4. Priority criteria:
   - P0: live-order risk, Kill Switch, blackout/market-order policy, traceability-gate failure
   - P1: potential P&L distortion (fills/FX/timezone), increased operational risk
   - P2: reporting/observability quality issues (no trading-safety impact)
5. `RUNTIME-VERIFY` issues filed by the Runtime Verifier must not be closed before Main Agent confirmation.
## 4) Recommended Initial Backlog Tickets
- `TKT-P0-001`: `[EXEC-POLICY][SCN-FAIL-001]` e2e check of blackout blocking + queueing + recovery revalidation (`REQ-V3-003`,`REQ-V3-004`)
- `TKT-P0-002`: `[RISK-EMERGENCY][SCN-FAIL-003]` automated verification of Kill Switch sequence enforcement (`REQ-V2-008`)
- `TKT-P0-003`: `[OPS-GOVERNANCE][SCN-OPS-004]` continuous check of the PR-blocking gate on missing REQ/TASK/TEST (`REQ-OPS-003`)
- `TKT-P1-001`: `[FX-ACCOUNTING][SCN-FAIL-004]` strengthen regression cases for entry restriction on FX buffer violations (`REQ-V3-007`)
- `TKT-P1-002`: `[BACKTEST-MODEL][SCN-HAPPY-004]` clarify the UX for refusing backtests without cost/slippage settings (`REQ-V2-007`)
- `TKT-P0-004`: `[RUNTIME-VERIFY][SCN-OPS-005]` automate post-deploy runtime anomaly detection/reproduction/close adjudication
## 5) Operational Checkpoints
- Sprint planning starts only once 100% of `P0` scenario tests pass.
- Deployment approval checks the `TEST-ACC-*` linked to `SCN-FAIL-*` and `SCN-OPS-*` first.
- Policy-change PRs must update the registry (`01_requirements_registry.md`) first.
@@ -0,0 +1,223 @@
<!--
Doc-ID: DOC-TPM-001
Version: 1.0.0
Status: active
Owner: tpm
Updated: 2026-02-26
-->
# TPM Control Protocol (Main <-> PM <-> TPM <-> Dev <-> Verifier <-> Runtime Verifier)
Purpose:
- Force PM scenarios to be decomposed into implementable units, and close development/verification under the same ID scheme (`REQ-*`, `TASK-*`, `TEST-*`).
- Each phase must pass its Entry/Exit gates before moving to the next.
- Main Agent approval/opinion is mandatory at every major decision point.
## Team Roles
- Main Agent: owner of final consolidation, prioritization, and approval gates
- PM Agent: manages scenarios, requirements, and tickets
- TPM Agent: controls feasibility and completion rates across PM-Dev-verification; owner of ticket registration and implementation prioritization
- Dev Agent: performs implementation; requests replanning when blockers occur
- Verifier Agent: verifies docs/code/test deliverables
- Runtime Verifier Agent: monitors real behavior, files anomaly issues, adjudicates issue closure after fixes
Main Agent ideation responsibility:
- New implementation ideas that arise mid-stream are accumulated in a separate document.
- Location: [70_main_agent_ideation.md](./70_main_agent_ideation.md)
- Each entry must include an `IDEA-*` identifier, background, expected benefit, risks, and candidate follow-up tickets.
## Main Decision Checkpoints (Mandatory)
- DCP-01 Scope lock: Main Agent approval required before Phase 0 closes
- DCP-02 Requirements lock: Main Agent approval required before Phase 1 closes
- DCP-03 Implementation start: Main Agent approval required before Phase 2 closes
- DCP-04 Release approval: Main Agent final approval required after Phase 4
Main/Verifier incident-prevention rules:
- The Main Agent must not start delegated verification unless the `Directive Contract` is satisfied
- The Verifier Agent declares `BLOCKED` immediately on missing/ambiguous directives and requests clarification
- The Verifier Agent may not report `NOT_OBSERVED` items as PASS
- In runtime verification, missing required session evidence is treated as an `unverified anomaly` issue, not as "normal"
## Phase Control Gates
### Phase 0: Scenario Intake and Scope Lock
Entry criteria:
- PM scenarios submitted with user value, failure modes, and priority
- Impact scope (modules / sessions / KR-US markets) specified
Exit criteria:
- Scenarios mapped 1:1 or 1:N to `REQ-*` candidates
- Vague wording ("improve", "optimize") replaced with measurable conditions
- Out-of-scope items listed
Control checks:
- PM/TPM agreement complete
- Main Agent approval (DCP-01)
- Deliverables: scenario cards, initial mapping memo
### Phase 1: Requirement Registry Gate
Entry criteria:
- Phase 0 deliverables approved
- Target requirement documents identified
Exit criteria:
- `REQ-*` definitions/changes reflected in [01_requirements_registry.md](./01_requirements_registry.md)
- Each `REQ-*` is in a state where it can be linked to at least one `TASK-*` and one `TEST-*`
- Time/policy figures fixed with the registry as the single source (`REQ-OPS-001`,`REQ-OPS-002`)
Control checks:
- `python3 scripts/validate_ouroboros_docs.py` passes
- Main Agent approval (DCP-02)
- Deliverable: updated requirements registry
### Phase 2: Design and Work-Order Gate
Entry criteria:
- Requirements registry updated
- Impacted-module analysis complete (state machine, order policy, backtest, sessions)
Exit criteria:
- Work decomposed in [10_phase_v2_execution.md](./10_phase_v2_execution.md), [20_phase_v3_execution.md](./20_phase_v3_execution.md), and [30_code_level_work_orders.md](./30_code_level_work_orders.md)
- Each task has an implementation location, constraints, and completion conditions
- Risky work (Kill Switch, blackout, session transition) includes a dedicated rollback procedure
Control checks:
- TPM reviews for missing `REQ -> TASK` mappings
- Main Agent approval (DCP-03)
- Deliverable: approved Work Order set
### Phase 3: Implementation Gate
Entry criteria:
- Approved `TASK-*` split into branch-level work units
- Per-scope test plans linked in the PR body
Exit criteria:
- Code changes are traceable to their `TASK-*`
- Constraints verified (no direct modification of `src/core/risk_manager.py`, etc.)
- At least one test added or extended per new piece of logic
Control checks:
- `REQ-*`/`TASK-*`/`TEST-*` mapping confirmed in the PR template
- Deliverable: reviewable PR
### Phase 4: Verification and Acceptance Gate
Entry criteria:
- Implementation PR in ready state
- Test cases/fixtures prepared
Exit criteria:
- All relevant `TEST-ACC-*` in [40_acceptance_and_test_plan.md](./40_acceptance_and_test_plan.md) pass
- Regression tests pass (`pytest -q`)
- Docs validation passes (`python3 scripts/validate_ouroboros_docs.py`)
Control checks:
- Verifier attaches test evidence (logs / reports / executed commands)
- Verifier attaches a `Coverage Matrix` (`REQ/TASK/TEST` x `PASS/FAIL/NOT_OBSERVED`)
- Confirm the `NOT_OBSERVED` count is 0 (nonzero fails the gate)
- Runtime Verifier approves the staging/production monitoring plan
- Both approvals confirmed: static Verifier + Runtime Verifier
- Deliverable: acceptance approval record
### Phase 5: Release and Post-Release Control
Entry criteria:
- Phase 4 approved
- Operational checklist ready (session transitions, blackout, Kill Switch)
Exit criteria:
- No critical alerts during the initial post-deploy observation window
- New scenarios/regression issues fed back into the next cycle's Phase 0
- Requirement/test document versions synchronized
Control checks:
- Three-way closure confirmation by PM/TPM/Dev
- Runtime Verifier reports operational monitoring issue status (new / in progress / resolved)
- Main Agent final approval (DCP-04)
- Deliverables: release notes + follow-up action list
## Replan Protocol (Dev -> TPM)
- Triggers:
  - Implementation impossible (technical constraints / external API constraints)
  - Development resources far above estimate (effort / headcount / dependency surge)
- Procedure:
  1) Dev Agent files a `REPLAN-REQUEST` (including impacted REQ/TASK, cause, alternatives, added effort)
  2) TPM Agent performs the first review (scope reduction / phase split / requirement-adjustment proposal)
  3) After gathering Verifier/PM input, Main Agent approval finalizes the replan
- Rules:
  - No replan may be executed without Main Agent approval
  - Applying a replan requires simultaneous updates to the documents (`REQ/TASK/TEST`)
TPM ticket operating rules:
- TPM registers agreed changes as issues and assigns a priority (`P0/P1/P2`).
- PR bodies must reflect the TPM-assigned priority and scope as-is.
- Priority changes are allowed only via TPM proposal + Main Agent approval.
- PM/TPM/Dev/Reviewer/Verifier/Runtime Verifier leave PR comments at every major decision point so the decision rationale stays traceable.
- Before manipulating issues/PRs/comments, PM/TPM/Dev/Reviewer/Verifier/Runtime Verifier must first consult the Gitea troubleshooting sections of `docs/commands.md` and `docs/workflow.md`.
- GitHub CLI (`gh`) use is forbidden in repository collaboration; for Gitea work only `tea` (plus the documented API fallback when needed) is allowed.
- Once a prevention/operations rule change is agreed, the process ticket must be created and merged before feature implementation.
- If an implementation ticket proceeds while the process ticket is unapplied, TPM immediately marks it `BLOCKED`.
Ticket maturity stages (Mandatory):
- `Implemented`: code/doc changes complete
- `Integrated`: call-path/pipeline wiring confirmed
- `Observed`: runtime/execution evidence collected
- `Accepted`: Verifier + Runtime Verifier approval complete
- Stages may only advance sequentially; stage jumps are not allowed.
Branch operating rules:
- For each ticket, TPM designates a `ticket temp branch -> program feature branch` PR path.
- Ticket merges always target the program feature branch; `main` is used only at the final integration stage.
## Runtime Verification Protocol
- The Runtime Verifier monitors real behavior (staging/production) after tests pass.
- On anomalous behavior, file an issue immediately:
  - Title rule: `[RUNTIME-VERIFY][SCN-*] ...`
  - Required body: reproduction conditions, observed logs, impact scope, interim mitigation, related `REQ/TASK/TEST`
- Issue-closing rules:
  - Dev fix complete + Verifier re-verification passed + Runtime Verifier re-observation normal
  - The final close approver is the Main Agent
- Mandatory development-completion procedure:
  - Actually run the system (staging / local live mode)
  - Execute the monitoring checklist (key alerts / order paths / exception logs)
  - Work is not considered complete unless the results are attached as evidence in ticket/PR comments
  - Required per-session observation points (`NXT`, `US_PRE`, `US_DAY`, `US_AFTER`, etc.) that go unobserved are recorded as `NOT_OBSERVED`
  - Any `NOT_OBSERVED` blocks approval and triggers a runtime issue
## Server Reflection Rule
- `ticket temp branch -> program feature branch` merges may proceed (automatically or manually) after verification approval.
- `program feature branch -> main` merges are allowed only with explicit user approval.
- On merging to `main`, the Main Agent records the approval rationale in a PR comment.
## Acceptance Matrix (PM Scenario -> Dev Tasks -> Verifier Checks)
| PM Scenario | Requirement Coverage | Dev Tasks (Primary) | Verifier Checks (Must Pass) |
|---|---|---|---|
| Liquidation-first handling required on gap crashes/spikes | `REQ-V2-001`,`REQ-V2-002`,`REQ-V2-003` | `TASK-V2-004`,`TASK-CODE-001` | `TEST-ACC-000`,`TEST-ACC-001`,`TEST-ACC-010`,`TEST-CODE-001`,`TEST-CODE-002` |
| Unify hard stop + BE lock + ATR + model assist into one engine | `REQ-V2-004` | `TASK-V2-005`,`TASK-V2-006`,`TASK-CODE-002` | `TEST-ACC-011` |
| Generate training data free of label leakage | `REQ-V2-005` | `TASK-V2-007`,`TASK-CODE-004` | `TEST-ACC-012`,`TEST-CODE-003` |
| Enforce a leakage-proof time-series validation framework | `REQ-V2-006` | `TASK-V2-010`,`TASK-CODE-005` | `TEST-ACC-013`,`TEST-CODE-004` |
| Prevent over-optimistic backtests (force costs/slippage/failures) | `REQ-V2-007` | `TASK-V2-012`,`TASK-CODE-006` | `TEST-ACC-014` |
| Fix the Kill Switch execution order on failure | `REQ-V2-008` | `TASK-V2-013`,`TASK-V2-014`,`TASK-V2-015`,`TASK-CODE-003` | `TEST-ACC-002`,`TEST-ACC-018` |
| Consistent per-session risk/log traceability across session transitions | `REQ-V3-001`,`REQ-V3-002` | `TASK-V3-001`,`TASK-V3-002`,`TASK-V3-003`,`TASK-CODE-007` | `TEST-ACC-015`,`TEST-ACC-016` |
| Block orders during blackout + re-validate after recovery | `REQ-V3-003`,`REQ-V3-004` | `TASK-V3-004`,`TASK-V3-005`,`TASK-V3-006`,`TASK-CODE-008` | `TEST-ACC-003`,`TEST-ACC-017`,`TEST-CODE-005` |
| Ban market orders in low-liquidity sessions | `REQ-V3-005` | `TASK-V3-007`,`TASK-V3-008`,`TASK-CODE-009` | `TEST-ACC-004`,`TEST-CODE-006` |
| Make the conservative fill model the backtest default | `REQ-V3-006` | `TASK-V3-010`,`TASK-V3-011`,`TASK-V3-012`,`TASK-CODE-010` | `TEST-ACC-005`,`TEST-CODE-007` |
| Separate strategy PnL from FX PnL + currency buffer controls | `REQ-V3-007` | `TASK-V3-013`,`TASK-V3-014`,`TASK-CODE-011` | `TEST-ACC-006`,`TEST-CODE-008` |
| Prevent conflicts between overnight rules and the Kill Switch | `REQ-V3-008` | `TASK-V3-015`,`TASK-CODE-012` | `TEST-ACC-018` |
| Timezone/policy-change/traceability document governance | `REQ-OPS-001`,`REQ-OPS-002`,`REQ-OPS-003` | `TASK-OPS-001`,`TASK-OPS-002`,`TASK-OPS-003` | `TEST-ACC-007`,`TEST-ACC-008`,`TEST-ACC-009` |
## Operating Discipline (TPM Enforcement Rules)
- No PM scenario may enter implementation without a `REQ-*`.
- No `REQ-*` may be approved without `TASK-*` and `TEST-*`.
- The Verifier cannot approve on "code review passed" alone; `TEST-ACC-*` evidence is mandatory.
- The release approver must hold the release whenever the Phase 4 checklist is unmet.

<!--
Doc-ID: DOC-OPS-002
Version: 1.0.0
Status: active
Owner: tpm
Updated: 2026-02-27
-->
# Repository Enforcement Checklist
Goal: enforce "strict verification operations" through repository settings, not just documents.
## 1) main branch protection (required)
Items to apply:
- No direct pushes
- No force pushes
- No branch deletion
- Merges allowed only via PRs
Verification:
- Confirm that a direct `git push origin main` against `main` is rejected
## 2) Required status checks (required)
Required CI items:
| Reference level | Name | Description |
|-----------|------|------|
| **job level** (used when configuring branch protection) | `test` | Full CI job (includes doc validation + tests) |
| **step level** (referenced when reading logs) | `validate_ouroboros_docs` | Step that runs `python3 scripts/validate_ouroboros_docs.py` |
| **step level** | `run_tests` | Step that runs `pytest -q` |
> **Note**: Gitea branch protection's Required Status Checks are configured by **job name** (`test`). Step names are for navigating UI logs and are not entered into protection rules.
Configuration criteria:
- No merge unless the `test` job is `success`
- Skipped/neutral check states are not allowed
## 3) Required reviewer rules (recommended -> required)
Role-based approvals:
- One Verifier approval required
- One TPM or PM approval required
- Runtime Verifier approval required for changes with runtime impact (stated in the PR body)
Configuration criteria:
- Minimum approvals: 2
- No author self-approval
- Re-review of existing approvals required when new commits are pushed
## 4) Workflow gates
Pre-merge checklist:
- Issue link (`Closes #N`) present
- PR body contains a `REQ-*`, `TASK-*`, `TEST-*` mapping table
- Main -> Verifier Directive Contract (scope/method/pass/fail/not-observed/evidence format) stated
- If subject to process-change-first, the process ticket PR is merged first
- No changes to `src/core/risk_manager.py`
- Main Agent confirmation recorded for the applicable decision checkpoints (DCP-01~04)
- Agent PR comments exist for major decisions (review findings / agreed fixes / verification approval)
- Confirm the ticket PR's base is the program feature branch, not `main`
Automated checks:
- Document validation script passes
- Tests pass
- `python3 scripts/session_handover_check.py --strict` passes
- System run/monitoring evidence comment exists at development completion
- Comment confirming the `docs/commands.md` and `docs/workflow.md` troubleshooting sections were consulted before issue/PR manipulation
- Evidence of `tea` usage and no `gh` CLI usage
- Verifier `Coverage Matrix` attached (PASS/FAIL/NOT_OBSERVED)
- Confirm zero `NOT_OBSERVED` items (merge prohibited otherwise)
- Ticket stage record (`Implemented` -> `Integrated` -> `Observed` -> `Accepted`) exists
- Both approvals confirmed: static Verifier + Runtime Verifier
## 5) Audit trail
Required retained evidence:
- CI run log links
- Verification failure/recovery records
- Merge approval comments (Verifier/TPM)
Quarterly review:
- Drift in branch protection rules
- Renamed/missing required CI names
## 6) Application order (operating procedure)
1. Enable branch protection
2. Wire up required CI checks
3. Apply reviewer rules
4. Test rejection scenarios with a sample PR
5. Test the normal merge scenario
## 7) Actions on failure
- Branch protection found unapplied: halt releases immediately
- Required CI bypass found: audit admin privileges and file an audit issue
- Review rules found disabled: restore the rules, then enforce the re-merge policy
- Attempted close of an unresolved runtime anomaly issue: reopen immediately + halt releases
## 8) Replan (Dev Replan) operating rules
- When Dev issues a `REPLAN-REQUEST`, no scope/schedule changes without TPM review
- A `REPLAN-REQUEST` stays in "proposal" state until Main Agent approval
- An approved replan is valid only with simultaneous updates to the `REQ/TASK/TEST` documents
## 9) Server reflection rules
- Ticket PRs (`feature/issue-* -> feature/{stream}`) may merge after verification approval.
- For the final integration PR (`feature/{stream} -> main`), do not run `tea pulls merge` before explicit user approval.
- Approval-rationale comment required when merging to main.
## 10) Final main merge conditions
- All tickets merged into the program feature branch
- Runtime Verifier's run/monitoring verification complete
- `feature -> main` PR merge allowed only after confirming the user's final approval comment
<!--
Doc-ID: DOC-IDEA-001
Version: 1.0.0
Status: active
Owner: main-agent
Updated: 2026-02-26
-->
# Main Agent Ideation Backlog
Purpose:
- Temporarily store new implementation ideas that surface mid-implementation, before they enter planning.
- This document is a review queue for deciding, after user review, whether to include each idea in the next plan.
Operating rules:
- Each idea uses an `IDEA-*` identifier.
- Required fields: background, expected benefit, risks, candidate follow-up tickets.
- Status is one of `proposed`, `under-review`, `accepted`, `rejected`.
## Idea list
- `IDEA-001` (status: proposed)
- Title: Split the kill-switch global state from a single process-wide global into per-market/per-session state
- Background: the current global block-flag design can limit per-market control
- Expected benefit: when running KR/US in parallel, reduces the risk that a failure in one market needlessly blocks orders in the other
- Risks: increased state-synchronization complexity, expanded test cases needed
- Candidate follow-up ticket: `TKT-P1-KS-SCOPE-SPLIT`
- `IDEA-002` (status: proposed)
- Title: Pin a standard DTO for the Exit Engine input contract (ATR/peak/model_prob/liquidity) in the data pipeline
- Background: some ATR/model-probability inputs are currently fallback-based, weakening operational consistency
- Expected benefit: stronger backtest-live input isomorphism, easier regression analysis
- Risks: more integration work for the existing scanner/scenario engines
- Candidate follow-up ticket: `TKT-P1-EXIT-CONTRACT`
- `IDEA-003` (status: proposed)
- Title: Runtime Verifier automatic issue generator (log pattern -> issue template automation)
- Background: runtime anomaly reports are mostly hand-written, so omissions are possible
- Expected benefit: shorter lead time from anomaly detection to issue filing, standardized evidence
- Risks: possible flood of false-positive issues; filtering rules needed
- Candidate follow-up ticket: `TKT-P1-RUNTIME-AUTO-ISSUE`
- `IDEA-004` (status: proposed)
- Title: Automated PR-comment workflow checks (block skipped reviewer -> dev discussion -> verification approval steps)
- Background: the current procedure is strong, but compliance checking is manual
- Expected benefit: prevents skipped steps, automates the audit trail
- Risks: CLI/API integration maintenance cost
- Candidate follow-up ticket: `TKT-P0-WORKFLOW-GUARD`
<!--
Doc-ID: DOC-AUDIT-001
Version: 1.2.0
Status: active
Owner: strategy
Updated: 2026-03-02
-->
# v2/v3 Implementation Audit and Return Analysis Report
Written: 2026-02-28
Last updated: 2026-03-02 (status-table reconciliation from #373 applied)
Period covered: 2026-02-25 ~ 2026-02-28 (live trading)
Branch analyzed: `feature/v3-session-policy-stream`
---
## 1. Implementation Audit Against the Plan
### 1.1 Definition of Done
Mark an item `✅ Done` only when all three of the following hold.
1. Code path exists: an execution path that fulfills the requirement exists in the code.
2. Effect verified: tests/runtime evidence verifying the requirement's effect exists.
3. Traceability consistent: the requirement's status does not contradict open gap issues.
### 1.2 v2 implementation status: partially complete (core gaps remain)
| REQ-ID | Requirement | Implementation file | Status |
|--------|----------|-----------|------|
| REQ-V2-001 | 4-state sell state machine (HOLDING→BE_LOCK→ARMED→EXITED) | `src/strategy/position_state_machine.py` | ✅ Done |
| REQ-V2-002 | Immediate promotion to the highest state (gap handling) | `position_state_machine.py:51-70` | ✅ Done |
| REQ-V2-003 | EXITED evaluated first | `position_state_machine.py:38-48` | ✅ Done |
| REQ-V2-004 | Quadruple exit logic (Hard/BE/ATR Trailing/Model) | `src/strategy/exit_rules.py` | ⚠️ Partial (`#369`) |
| REQ-V2-005 | Triple Barrier labeling | `src/analysis/triple_barrier.py` | ✅ Done |
| REQ-V2-006 | Walk-Forward + Purge/Embargo validation | `src/analysis/walk_forward_split.py` | ✅ Done |
| REQ-V2-007 | Mandatory cost/slippage/fill-failure model | `src/analysis/backtest_cost_guard.py`, `src/analysis/backtest_pipeline.py` | ✅ Done |
| REQ-V2-008 | Kill Switch execution order (Block→Cancel→Refresh→Reduce→Snapshot) | `src/core/kill_switch.py` | ⚠️ Partial (`#377`) |
### 1.3 v3 implementation status: partially complete (as of 2026-03-02)
| REQ-ID | Requirement | Status | Notes |
|--------|----------|------|------|
| REQ-V3-001 | session_id in all signals/orders/logs | ⚠️ Partial | queued intents missing `session_id` (`#375`) |
| REQ-V3-002 | Session-transition hook + risk-parameter reload | ✅ Done | session-boundary E2E regressions (override apply/release + reload-failure fallback) reinforced (`#376`) |
| REQ-V3-003 | Blackout window policy | ✅ Done | `src/core/blackout_manager.py` |
| REQ-V3-004 | Blackout queue + re-validation on recovery | ⚠️ Partial | queue saturation reconciled to the oldest-drop policy (`#371`); stronger re-validation tracked in `#328` |
| REQ-V3-005 | Market orders banned in low-liquidity sessions | ✅ Done | `src/core/order_policy.py` |
| REQ-V3-006 | Conservative backtest fills (adverse direction) | ✅ Done | `src/analysis/backtest_execution_model.py` |
| REQ-V3-007 | FX PnL separation (strategy PnL vs FX PnL) | ⚠️ Partial | runtime split calculation/propagation applied (`#370`); `fx_pnl=0` fallback when the buy-side `fx_rate` is unobserved |
| REQ-V3-008 | Overnight exception vs Kill Switch priority | ✅ Done | `_should_force_exit_for_overnight()` and `_apply_staged_exit_override_for_hold()` in `src/main.py` |
### 1.4 Operational governance: partially complete (re-evaluated 2026-03-02)
| REQ-ID | Requirement | Status | Notes |
|--------|----------|------|------|
| REQ-OPS-001 | Timezone annotation (KST/UTC) | ⚠️ Partial | doc-token fail-fast added; field-level validation remains in `#372` |
| REQ-OPS-002 | Registry update forced on policy changes | ⚠️ Partial | file-level enforcement implemented; per-policy-value precision validation remains in `#372` |
| REQ-OPS-003 | TASK-REQ mapping enforced | ⚠️ Partial | TASK-REQ/TASK-TEST enforcement implemented; additional bypass-case checks remain in `#372` |
| REQ-OPS-004 | Canonical source path validation | ✅ Done | canonical source path validation in `scripts/validate_ouroboros_docs.py` |
---
## 2. Implementation Gap Details
> **2026-03-02 update**: previously marked resolutions were re-verified and statuses reclassified against the open gap issues.
### GAP-1: session_id missing from DecisionLogger → ✅ Resolved (#326)
- **Location**: `src/logging/decision_logger.py`
- ~~Problem: `log_decision()` had no `session_id` parameter~~
- **Resolution**: #326 merged — `session_id` added to the `log_decision()` parameters, including the DB write
- **Requirement**: REQ-V3-001
### GAP-2: session_id not passed in src/main.py trade logs → ✅ Resolved (#326)
- **Location**: `src/main.py`
- ~~Problem: `log_trade()` calls did not pass a `session_id` parameter~~
- **Resolution**: #326 merged — `log_trade()` calls pass the runtime `session_id` explicitly
- **Requirement**: REQ-V3-001
### GAP-3: no risk-parameter reload on session transitions → ✅ Resolved (#327, #376)
- **Location**: `src/main.py`, `src/config.py`
- **Resolution details**: #327 merged — per-session parameter reload mechanism based on `SESSION_RISK_PROFILES_JSON`
- With `SESSION_RISK_RELOAD_ENABLED=true`, parameters reload at session boundaries
- On reload failure, existing parameters are kept (safe fallback)
- **Resolution**: session-boundary E2E regression tests added, verifying override apply/release and fallback retention on reload failure (`#376`)
- **Requirement**: REQ-V3-002
### GAP-4: blackout-recovery DB logging + re-validation → ⚠️ Partially resolved (#324, #328, #371)
- **Location**: `src/core/blackout_manager.py`, `src/main.py`
- **Current state**:
- The #324 scope (DB logging) has an implementation path
- The #328 scope (stronger price/session re-validation) remains an open tracking issue
- #371: the queue-saturation policy was specified and implemented as oldest-drop, removing the path where the newest intent could be lost
- **Requirement**: REQ-V3-004
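The oldest-drop saturation policy above can be sketched in a few lines. This is an illustrative model only, assuming a bounded queue; the class and method names do not mirror `blackout_manager.py`:

```python
from collections import deque


class BlackoutIntentQueue:
    """Bounded intent queue that evicts the OLDEST entry on overflow,
    so the newest intent is never silently lost (the #371 policy).
    Illustrative sketch; not the blackout_manager.py API."""

    def __init__(self, maxlen: int = 100):
        # a deque with maxlen evicts from the left (oldest) end on append
        self._q = deque(maxlen=maxlen)

    def enqueue(self, intent: dict) -> None:
        # appending to a full deque(maxlen=N) discards the oldest item
        self._q.append(intent)

    def drain(self) -> list:
        # return queued intents in arrival order and clear the queue
        items = list(self._q)
        self._q.clear()
        return items
```

`deque(maxlen=N)` gives the oldest-drop behavior for free; a plain list would require an explicit `pop(0)` on overflow.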
### GAP-5: time barrier fixed to bar counts → ✅ Resolved (#329)
- **Location**: `src/analysis/triple_barrier.py`
- ~~Problem: used `max_holding_bars` (a fixed bar count)~~
- **Resolution**: #329 merged — switched the time barrier to `max_holding_minutes` (calendar minutes)
- The barrier is reached after a fixed elapsed time regardless of bar period
- `max_holding_bars` kept with a deprecation warning (backward compatible)
- **Requirement**: REQ-V2-005 / v3 extension
### GAP-6 (new): FX PnL separation partially resolved (MEDIUM)
- **Location**: `src/db.py` (`fx_pnl`, `strategy_pnl` columns exist)
- **Current state**: split `strategy_pnl`/`fx_pnl` calculation and propagation applied on the runtime SELL path (`#370`).
- **Operational note**: `trading_cycle` adds `fx_rate` via the scanner-based `selection_context`, while `run_daily_session` records only an `fx_rate` snapshot without scanner context.
- **Remaining**: when a historical BUY record lacks `fx_rate`, even foreign-currency segments are recorded with the `fx_pnl=0` fallback.
- **Impact**: for the affected USD trades, FX PnL and strategy PnL remain unseparated, making performance analysis inaccurate
- **Requirement**: REQ-V3-007
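One possible strategy/FX decomposition, including the documented `fx_pnl=0` fallback, can be sketched as below. This is an assumption-laden illustration — the actual #370 formula in `src/main.py` may differ (e.g. which rate prices the strategy leg):

```python
def split_pnl(buy_price, sell_price, qty, buy_fx, sell_fx):
    """Illustrative strategy/FX PnL split (NOT the confirmed #370 formula).
    strategy leg: price move valued at the sell-time rate;
    fx leg: rate move applied to the cost basis.
    The two legs sum to the total KRW PnL."""
    if buy_fx is None:
        # historical BUY without fx_rate -> documented fx_pnl=0 fallback
        return (sell_price - buy_price) * qty * sell_fx, 0.0
    strategy_pnl = (sell_price - buy_price) * qty * sell_fx  # price move
    fx_pnl = buy_price * qty * (sell_fx - buy_fx)            # rate move on cost basis
    return strategy_pnl, fx_pnl
```

With this decomposition, `strategy_pnl + fx_pnl == qty * (sell_price * sell_fx - buy_price * buy_fx)`, so the split is lossless.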
---
## 3. Live Trading Return Analysis
### 3.1 Overall results
| Metric | Value |
|------|-----|
| Total realized PnL | **-52,481** (KRW + USD mixed; see 3.4 for the per-currency breakdown) |
| Total trade records | 19,130 (BUY 121, SELL 46, HOLD 18,963) |
| Aggregation basis | UTC `2026-02-25T00:00:00` ~ `2026-02-28T00:00:00`, 45 SELLs (1 out-of-period SELL excluded) |
| Win rate | **39.1%** (18 wins / 46 sells, zero-PnL included) |
| Avg winning trade | +6,107 |
| Avg losing trade | -7,382 |
| Largest winning trade | +46,350 KRW (452260 KR) |
| Largest losing trade | -26,400 KRW (000370 KR) |
| Operating mode | LIVE (real account) |
### 3.2 Daily PnL
| Date | Sells | W | L | Daily PnL |
|------|---------|----|----|-----------|
| 02-25 | 9 | 8 | 1 | +63.21 (USD, marginal gain) |
| 02-26 | 14 | 5 | 5 | **-32,083.40** (large KR losses) |
| 02-27 | 22 | 5 | 16 | **-20,461.11** (high-frequency trading, mostly losses) |
> For exact reproduction, see `scripts/audit_queries.sql`.
### 3.3 PnL by market
| Market | Sells | Win rate | Total PnL |
|------|---------|------|---------|
| **KR** | 17 | 38.5% (zero-PnL excluded, 5/13) | **-56,735 KRW** |
| US_AMEX | 12 | 75% | +4,476 USD |
| US_NASDAQ | 4 | 0% | -177 USD |
| US_NYSE | 13 | 30.8% | -45 USD |
**The KR market is the main source of losses.** Excluding AMEX, US is mostly flat or negative.
### 3.4 Recalculation note applied (currency split)
> Methodology note: the `-52,481` total in the table above naively sums KRW and USD and cannot be interpreted for accounting.
> Below is the same period (2026-02-25~2026-02-27, 45 SELLs) split by currency.
| Currency | Sells | W/L | Realized PnL |
|------|---------|-------|-----------|
| KRW | 17 | 5 W / 8 L (4 zero-PnL) | **-56,735 KRW** |
| USD | 28 | 13 W / 14 L (1 zero-PnL) | **+4,253.70 USD** |
### 3.5 Recalculation note applied (liquidations of pre-existing holdings separated)
> Split criterion: a SELL whose preceding BUY has `rationale LIKE '[startup-sync]%'` is classified as
> `liquidation of a pre-existing (startup-synced) position`.
| Category | Currency | Sells | PnL |
|------|------|---------|------|
| Pre-existing liquidations | KRW | 10 | **+12,230 KRW** |
| Pre-existing liquidations | USD | 2 | **+21.03 USD** |
| New/strategy entries only | KRW | 7 | **-68,965 KRW** |
| New/strategy entries only | USD | 26 | **+4,232.67 USD** |
Additionally, per the request's intent ("exclude gains from clearing pre-existing winners"), excluding **only the profitable (+PnL) pre-existing liquidations** gives:
- KRW: `-56,735` → **-113,885 KRW** (removing +57,150 KRW of pre-existing gains)
- USD: `+4,253.70` → **+4,232.67 USD** (removing +21.03 USD of pre-existing gains)
In other words, the original performance table counts pre-existing liquidation gains (notably KR 452260, +46,350 KRW) as strategy performance,
overstating the strategy's own PnL.
### 3.6 Data integrity check (paper-trading contamination + cause of the USD over-profit)
- `mode` check: `live` 19,130 rows, `paper` 0 rows
→ **no paper-trading contamination found**.
- However, the USD PnL contains **one fill-matching anomaly**:
- `CRCA` SELL (15 shares, $35.14, +4,612.15 USD) vs. preceding BUY (146 shares, $3.5499)
- PnL was computed despite the BUY/SELL quantity mismatch (146→15); a reverse-split/sync issue is likely.
Conservative re-aggregation (2026-02-25~2026-02-27, 28 USD SELLs):
| Basis | USD PnL | KRW equivalent (reference) | Combined KRW reference |
|-----------|----------|-----------------|-----------------|
| Original aggregation | **+4,253.70 USD** | +6,167,865 | -56,735 + 6,167,865 = **+6,111,130** |
| Excluding pre-existing (startup-sync) | **+4,232.67 USD** | +6,137,372 | -68,965 + 6,137,372 = **+6,068,407** |
| Quantity-matched fills only | **-358.45 USD** | -519,753 | -56,735 + (-519,753) = **-576,488** |
| Excl. pre-existing + quantity-matched only | **-379.48 USD** | -550,246 | -68,965 + (-550,246) = **-619,211** |
> Assumed rate: **1 USD = 1,450 KRW** (reference rate as of 2026-02-28).
> The KRW equivalents and combined values are comparison aids and must be interpreted separately from accounting/settlement figures.
In conclusion, the positive USD performance is dominated by the single `CRCA` anomaly;
excluding that trade with the integrity filter flips the USD result into a loss.
### 3.7 Data quality issue summary
- **startup-sync duplicates**: 76 repeatedly synced BUYs, 38 with price=0 → possible PnL-matching distortion. See 3.5 for the split aggregation.
- **Ticker-exchange drift**: the same ticker recorded across multiple exchanges → see ROOT-7.
- **FX PnL inactive**: schema exists, but all operational values are 0 → see REQ-V3-007.
### 3.8 Standard aggregation SQL (for reproduction)
The baseline queries for reproducing the performance tables are split out into [`scripts/audit_queries.sql`](../../scripts/audit_queries.sql).
- **Base**: period + LIVE + SELL + preceding-BUY metadata matching
- **Q1**: per-currency PnL (no KRW/USD mixing)
- **Q2**: performance excluding pre-existing (startup-sync) positions
- **Q3**: quantity-matched fills only (integrity filter)
- **Q4**: anomaly list (quantity mismatches)
---
## 4. Root-Cause Analysis of the Poor Returns
### ROOT-1: default hard_stop_pct (-2%) too tight for KR small-cap volatility
- **Current setting**: `stop_loss_threshold = -2.0` (`src/main.py:511`), passed as the staged exit's `hard_stop_pct`
- **v2 plan**: ATR-based dynamic trailing stop (ExitPrice = PeakPrice - k × ATR)
- **Actual behavior**: the staged exit is invoked, but features such as `atr_value`/`pred_down_prob` are supplied as 0.0, biasing it toward hard-stop-only triggering (see ROOT-5)
- **Evidence**:
- 000370: bought at 8,040 → stopped out at -2.74% after 24 minutes
- 033340: bought at 2,080 → stopped out at -3.13% after 18 minutes
- 229000: repeated stop-outs at -3.7%, -3.25%, -3.2%
### ROOT-2: repeated trading of the same symbol (no re-entry cooldown)
- **Problem**: immediate rebuy of the same symbol after a stop-out → re-entry at a higher price → repeated stop-outs
- **Worst case**: symbol 229000
| Buy price | Sell price | PnL | Held |
|--------|--------|------|-----------|
| 5,670 | 5,460 | -24,780 | 0.5h |
| 5,540 | 5,360 | -21,780 | 0.7h |
| 5,310 | 5,580 | +34,020 (win) | 0.8h |
| 5,620 | 5,440 | -21,420 | 1.5h |
- **Net loss**: **-33,960 KRW** from one symbol in a single day
### ROOT-3: indiscriminate entries into US penny/micro-cap stocks
- **Problem**: entered sub-$2 symbols at confidence 85~90, followed by overnight crashes
- **Examples**:
| Symbol | Loss | Held |
|------|--------|----------|
| ALBT | -27.7% | ~23h |
| SMJF | -15.9% | ~23h |
| KAPA | -18.2% | ~23h |
| CURX | -10.6% | ~23h |
| CELT | -8.3% | ~23h |
### ROOT-4: syntax errors from the evolved-strategy code generator
- **Location**: `src/strategies/v20260227_*_evolved.py`
- **Problem**: nested `def evaluate` definitions (indentation error)
- **Impact**: runtime failure → fallback to the base strategy → the evolution system is effectively void
### ROOT-5: v2 exit logic partially integrated but ineffective → ⚠️ Partially resolved (#325)
**Initial diagnosis (2026-02-28 audit):**
- A fixed `-2.0` default flows into `hard_stop_pct`, diverging from the v2 plan's ATR-adaptive intent
- `be_arm_pct`/`arm_pct` are mechanically derived from the playbook's `take_profit_pct` (`* 0.4`), preventing the independent parameter tuning the v2 plan called for
- Runtime features such as `atr_value` and `pred_down_prob` are supplied as 0.0, so in practice only the hard stop fires
**Current state (after merging #325):**
- `STAGED_EXIT_BE_ARM_PCT` and `STAGED_EXIT_ARM_PCT` environment variables allow independent parameter settings
- `_inject_staged_exit_features()`: injects real-time KR-market ATR and supplies an RSI-based `pred_down_prob`
- The KR ATR dynamic hard stop (#318) removes the fixed `-2.0` problem
**Remaining risks:**
- `atr_value` supply paths outside KR (e.g. US) are incomplete — hard-stop bias may persist
- `pred_down_prob` is only an RSI proxy — replacement with a real ML model is recommended
### ROOT-6: SELL PnL fragile to partial liquidation/quantity mismatch (CRITICAL) → ✅ Resolved (#322)
> **Current state**: resolved by merging #322. Below is the diagnosis as recorded at discovery (2026-02-28).
- **Location**: `src/main.py:1658-1663`, `src/main.py:2755-2760`
- **Problem**: PnL used the preceding BUY's `buy_qty` instead of the actual sell quantity (`sell_qty`)
- `trade_pnl = (trade_price - buy_price) * buy_qty`
- **Impact**: PnL over/understated on partial liquidations, reverse/forward splits, and post-startup-sync quantity drift
- **Corroboration**: the CRCA anomaly (BUY 146 shares → SELL 15 shares with PnL +4,612 USD) is consistent with this bug
### ROOT-7: exchange_code missing from the BUY matching key — latent mismatch risk (HIGH) → ✅ Resolved (#323)
> **Current state**: resolved by merging #323. Below is the diagnosis as recorded at discovery (2026-02-28).
- **Location**: `src/db.py:292-313`
- **Problem**: `get_latest_buy_trade()` matched on `(stock_code, market)` only, without `exchange_code`
- **Nature**: not a confirmed immediately firing bug, but a structural risk amplified when the same ticker is recorded on multiple exchanges
- **Impact**: under data-drift conditions (e.g. CCUP/CRCA recorded on multiple exchanges), mismatches → possible PnL distortion
---
## 5. Return Improvement Plan
### 5.1 Immediately applicable (parameter/logic changes)
| Priority | Measure | Expected effect | Difficulty |
|----------|------|-----------|--------|
| P0 | Widen the KR stop: -2% → -4~5% or ATR-based | sharply fewer noise stop-outs | low |
| P0 | Re-entry cooldown: block rebuying a stopped-out symbol for 1~2 hours | removes the churn & burn pattern | low |
| P1 | US minimum-price filter: block entries at $5 or below | prevents penny-stock crashes | low |
| P1 | Add syntax validation to evolved-strategy code generation | restores the evolution system | low |
### 5.2 Structural improvement status (as of 2026-03-01)
**Completed items (monitoring stage):**
| Item | Issue | Status |
|------|------|------|
| Fix SELL PnL to use sell_qty (ROOT-6) | #322 | ✅ merged |
| v2 staged-exit feature supply + independent parameters (ROOT-5) | #325 | ✅ merged |
| Add exchange_code to the BUY matching key (ROOT-7) | #323 | ✅ merged |
| Add `log_trade()` for blackout-recovery orders (GAP-4) | #324 | ✅ merged |
| Dynamic session-transition risk-parameter reload (GAP-3) | #327 | ✅ merged |
| Explicit session_id in trade/decision logs (GAP-1, GAP-2) | #326 | ✅ merged |
| Stronger blackout-recovery price/session re-validation (GAP-4 remainder) | #328 | ✅ merged |
**Remaining improvements:**
| Priority | Measure | Difficulty |
|----------|------|--------|
| P1 | Complete the US-market ATR supply path (ROOT-5 remainder) | medium |
| P1 | Activate FX PnL in operations (REQ-V3-007) | low |
| P2 | Replace pred_down_prob with an ML model (ROOT-5 remainder) | high |
| P2 | Reinforce session-boundary E2E integration tests (GAP-3 remainder) | low |
### 5.3 Recommended execution order
```
Phase 1 (immediate): parameter tuning
→ widen KR stops + re-entry cooldown + US price filter
→ expected: immediately removes the two largest loss patterns (noise stop-outs, repeated trading)
Phase 2 (short term): data integrity + making v2 effective
→ fix SELL PnL to use sell_qty
→ add exchange_code to the BUY matching key
→ add DB logging for blackout-recovery orders
→ supply real features (ATR, pred_down_prob) to the v2 staged exit + independent parameters
→ pass session_id explicitly
→ expected: accurate PnL + effective profit-protection mechanisms
Phase 3 (mid term): v3 session optimization
→ session-transition hook + parameter reload
→ blackout re-validation
→ operational-governance CI automation
```
---
## 6. Test Coverage Status
### Tests present (passing)
- ✅ State-machine promotion (`test_strategy_state_machine.py`)
- ✅ Quadruple exit rules (`test_strategy_exit_rules.py`)
- ✅ Triple Barrier labeling (`test_triple_barrier.py`)
- ✅ Walk-Forward + Purge/Embargo (`test_walk_forward_split.py`)
- ✅ Backtest cost validation (`test_backtest_cost_guard.py`)
- ✅ Kill Switch ordering (`test_kill_switch.py`)
- ✅ Blackout management (`test_blackout_manager.py`)
- ✅ Order-policy low-liquidity rejection (`test_order_policy.py`)
- ✅ FX PnL separation (`test_db.py`)
- ✅ Valid intent executed after blackout recovery (`tests/test_main.py:5811`)
- ✅ Policy-rejected intent dropped after blackout recovery (`tests/test_main.py:5851`)
### Tests added (Phases 1~3, 2026-03-01)
- ✅ KR ATR-based dynamic hard stop (`test_main.py`, #318)
- ✅ Re-entry cooldown (block rebuys after a stop-out) (`test_main.py`, #319)
- ✅ US minimum-price filter (block at $5 or below) (`test_main.py`, #320)
- ✅ Evolved-strategy syntax validation (`test_evolution.py`, #321)
- ✅ SELL PnL computed from sell_qty (`test_main.py`, #322)
- ✅ BUY matching key includes exchange_code (`test_db.py`, #323)
- ✅ Blackout-recovery order DB logging (`test_main.py`, #324)
- ✅ Real ATR/RSI features supplied to the staged exit (`test_main.py`, #325)
- ✅ Explicit session_id in trade/decision logs (`test_main.py`, `test_decision_logger.py`, #326)
### Tests missing (remaining)
- ✅ Session-transition hook callbacks / session-boundary risk-reload E2E regressions (`#376`)
- ❌ Live-path ↔ v2 state-machine integration test (including feature supply)
- ❌ FX PnL operational-activation verification (GAP-6)
---
## 7. Follow-up Documents
- **Execution plan**: [85_loss_recovery_action_plan.md](./85_loss_recovery_action_plan.md) — phase-by-phase work breakdown and Gitea issue links for resolving the ROOT/GAP items
- **Standard aggregation SQL**: [scripts/audit_queries.sql](../../scripts/audit_queries.sql)
---
*End.*

<!--
Doc-ID: DOC-PLAN-082
Version: 1.0.0
Status: draft
Owner: strategy
Updated: 2026-02-28
-->
# Document Restructuring Plan: Audit → Execution Pipeline
## Context
80_implementation_audit.md carried out the v2/v3 implementation audit and return analysis, but after several review rounds it mixes review history, data-quality discussion, and SQL queries, leaving it **hard to use as an execution document**.
Goal: build a document structure that carries the audit results consistently through **ticket creation → dev design → implementation/review → verification → live-environment testing**.
## Changes
### 1. Clean up 80_implementation_audit.md (audit record)
**Role**: factual record of the current state, focused solely on "what is wrong".
Cleanup:
- Section 3: condense the P&L analysis to core figures
- Keep 3.1 (overall), 3.3 (by market), 3.4 (currency split), 3.5 (strategy-entry split), 3.6 (integrity conclusion)
- 3.2 daily PnL: drop the caveat wording, fold into the body
- 3.7 data quality: keep only core conclusions, drop the detail items
- 3.8 SQL: split into a separate file (`scripts/audit_queries.sql`), reference only
- Sections 6.1, 6.2 review history: remove entirely (traceable via git history)
- Section 6 tests: fold "confirmed by re-check" items into "tests present"
- New Section 7: follow-up document links (reference to 85_)
### 2. Write 85_loss_recovery_action_plan.md (new execution-plan document)
**Role**: "how to fix it" — the execution blueprint from ticket creation through live verification.
Structure:
```
## 1. Summary
- Goal: resolve 7 ROOTs / 5 GAPs to exit the loss regime
- Success criteria (quantitative)
## 2. Phase-by-phase work breakdown
### Phase 1: immediate parameter/logic fixes (stop the bleeding)
For each item:
- ROOT/GAP reference
- Gitea issue title/description template
- Target files + current behavior + target behavior
- Acceptance criteria
- Test plan
- Dependency/blocking relationships
### Phase 2: data integrity + making v2 effective
(same format)
### Phase 3: v3 session optimization
(same format)
## 3. Verification plan
- Unit-test criteria
- Integration-test scenarios (using the backtest pipeline)
- Live verification: direct verification via small-size live trading
(paper trading excluded — it diverges too far from production for reliable verification)
- Per-phase go-live criteria:
unit/integration tests pass → small-size live → monitoring → confirm normal → full operation
## 4. Dependency graph
- Cross-phase blocking relationships
- Work ordering within phases
## 5. Rollback plan
- Rollback procedure for each failed phase
```
### 3. Update README.md
- Add the 85_ document link
## Work Order
1. Clean up 80_ (remove noise, split out SQL, delete review history)
2. Write `scripts/audit_queries.sql` (the SQL split out of 80_)
3. Write 85_ (execution plan)
4. Update README.md
## Out of Scope
- Updating 30_code_level_work_orders.md and 40_acceptance_and_test_plan.md: update at implementation time based on 85_ (only the execution plan is in scope now)
- 01_requirements_registry.md: new REQs derived from ROOT/GAP are registered when implementation starts
- Gitea issue creation: done separately after 85_ is finalized
## Verification
- 80_: confirm only audit facts remain and review history is removed
- 85_: confirm each phase item is ready to convert directly into a Gitea issue
- Confirm each 85_ item includes acceptance criteria and a test plan

<!--
Doc-ID: DOC-ACTION-085
Version: 1.1.0
Status: active
Owner: strategy
Updated: 2026-03-01
-->
# Loss Recovery Execution Plan
Written: 2026-02-28
Last updated: 2026-03-01 (Phase 1~3 completion status applied)
Base document: [80_implementation_audit.md](./80_implementation_audit.md) (7 ROOTs + 5 GAPs)
> **Status as of 2026-03-01**: Phase 1 ✅ complete, Phase 2 ✅ complete, Phase 3 ✅ essentially complete (ACT-13 hardening remains)
---
## 1. Summary
### 1.1 Goal
Resolve the 7 root causes (ROOT-1~7) and 5 implementation gaps (GAP-1~5) identified in 80_implementation_audit.md and exit the live-trading loss regime.
### 1.2 Success criteria (quantitative)
| Metric | Current | Target |
|------|------|------|
| KR-market win rate | 38.5% | >= 50% |
| Repeated trades per symbol (daily) | up to 4 | <= 2 |
| US penny-stock (<= $5) entries | unlimited | 0 |
| SELL PnL quantity-mismatch rows | present | 0 |
| Blackout-recovery orders missing from the DB | present | 0 |
| Trade logs missing session_id | many | 0 |
| Evolved-strategy syntax error rate | 100% (all 3 confirmed files) | 0% |
---
## 2. Phase-by-Phase Work Breakdown
### Phase 1: Immediate — stop the bleeding ✅ complete
Immediately remove the largest loss patterns (noise stop-outs, repeated trading, penny stocks).
---
#### ACT-01: ATR-based dynamic widening of the KR stop ✅ merged
- **ROOT reference**: ROOT-1 (hard_stop_pct -2% too tight for KR small-cap volatility)
- **Gitea issue**: feat: ATR-based dynamic widening of the KR stop (-2% → ATR-adaptive)
- **Gitea issue number**: #318
- **Target files**: `src/main.py`, `src/strategy/exit_rules.py`, `src/config.py`
- **Current behavior**: fixed `hard_stop_pct = -2.0` applied identically to every market
- **Target behavior**: the KR market uses an ATR(14)-based dynamic stop. Minimum -2%, maximum -7%, base value `k * ATR / entry_price * 100` (k=2.0)
- **Acceptance criteria**:
- a dynamic stop is computed when an ATR value exists
- fallback to the existing -2% when ATR is unavailable
- non-KR markets keep their existing behavior
- **Test plan**:
- unit: ATR-based stop calculation (boundary values: ATR=0, extreme ATR)
- integration: compare KR stop-out frequency in the backtest pipeline
- **Dependencies**: none
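The spec above (base value clamped between -7% and -2%, with a -2% fallback) can be sketched as follows. This is a minimal illustration of the rule as written, not the merged #318 code; the function name is ours:

```python
def kr_dynamic_stop_pct(atr, entry_price, k=2.0,
                        floor_pct=-7.0, ceiling_pct=-2.0):
    """ATR(14)-adaptive KR hard-stop percentage per the ACT-01 spec:
    base = -(k * ATR / entry_price * 100), clamped to [-7%, -2%],
    falling back to -2% when ATR is missing or zero.
    Sketch only; the #318 implementation may differ in detail."""
    if not atr or atr <= 0 or entry_price <= 0:
        return ceiling_pct  # fallback: legacy fixed -2%
    base = -(k * atr / entry_price * 100.0)  # wider stop for more volatile names
    return max(floor_pct, min(ceiling_pct, base))
```

For the 000370 example above (entry 8,040), an ATR of 150 would give roughly a -3.7% stop instead of -2%, avoiding the noise stop-out the audit describes.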
---
#### ACT-02: re-entry cooldown for a stopped-out symbol ✅ merged
- **ROOT reference**: ROOT-2 (repeated trading of the same symbol)
- **Gitea issue**: feat: re-entry cooldown for a stopped-out symbol (1~2 hours)
- **Gitea issue number**: #319
- **Target files**: `src/main.py`, `src/config.py`
- **Current behavior**: a symbol can be rebought immediately after a stop-out
- **Target behavior**: after a stop-out (SELL with pnl < 0), block BUYs of the same symbol for `COOLDOWN_MINUTES` (default 120 minutes)
- **Acceptance criteria**:
- a BUY attempt within the cooldown window for a stopped-out symbol is rejected
- normal entry allowed after the cooldown elapses
- no cooldown for profitable exits (pnl >= 0)
- **Test plan**:
- unit: BUY attempts inside/outside the cooldown window
- integration: backtest scenario mirroring the 229000 pattern
- **Dependencies**: none
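The cooldown rule can be modeled as a small amount of per-symbol state. A hedged sketch under the acceptance criteria above (class and method names are illustrative, not the #319 API):

```python
from datetime import datetime, timedelta


class ReentryCooldown:
    """ACT-02 sketch: block rebuys for COOLDOWN_MINUTES after a losing SELL;
    profitable exits (pnl >= 0) arm no cooldown. Not the merged #319 code."""

    def __init__(self, cooldown_minutes=120):
        self.cooldown = timedelta(minutes=cooldown_minutes)
        self._last_stop = {}  # symbol -> time of the most recent losing SELL

    def record_sell(self, symbol, pnl, at):
        if pnl < 0:  # only stop-losses arm the cooldown
            self._last_stop[symbol] = at

    def can_buy(self, symbol, at):
        stopped = self._last_stop.get(symbol)
        return stopped is None or at - stopped >= self.cooldown
```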
---
#### ACT-03: block US entries at $5 or below ✅ merged
- **ROOT reference**: ROOT-3 (indiscriminate US penny-stock entries)
- **Gitea issue**: feat: US entry-block filter for symbols at $5 or below
- **Gitea issue number**: #320
- **Target files**: `src/main.py`, `src/config.py`
- **Current behavior**: any US symbol can be entered with no price limit
- **Target behavior**: block US BUYs when the current price is $5 or below. The threshold is configurable via the `US_MIN_PRICE` environment variable
- **Acceptance criteria**:
- BUY attempts at $5 or below are rejected + logged
- symbols above $5 keep existing behavior
- not applied to other markets such as KR
- **Test plan**:
- unit: filter behavior by price (boundary values: $4.99, $5.00, $5.01)
- **Dependencies**: none
---
#### ACT-04: add syntax validation to evolved-strategy code generation ✅ merged
- **ROOT reference**: ROOT-4 (evolved-strategy syntax errors)
- **Gitea issue**: fix: add syntax validation to evolved-strategy code generation
- **Gitea issue number**: #321
- **Target file**: `src/evolution/optimizer.py`
- **Current behavior**: generated Python code is saved to file without validation
- **Target behavior**: validate syntax with `ast.parse()` + `compile()` and save only code that passes. On failure, log a warning + keep the existing strategy
- **Acceptance criteria**:
- code with syntax errors is never saved
- fallback to the existing strategy on validation failure
- validation failures are logged
- **Test plan**:
- unit: validation of valid and invalid code
- regression test against the existing `v20260227_*_evolved.py` files
- **Dependencies**: none
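The validation half of this change can be sketched directly from the target behavior: `ast.parse()` plus `compile()` as a guard before persisting. The function name is ours; the caller would save the file only on `True`:

```python
import ast
import logging


def is_valid_strategy_code(code, filename="<evolved>"):
    """ACT-04 sketch: accept generated strategy code only if it parses and
    compiles; on failure, log a warning so the caller keeps the existing
    strategy. Illustrative, not the merged #321 code."""
    try:
        ast.parse(code)                  # catches indentation/nesting errors
        compile(code, filename, "exec")  # extra guard over plain parsing
        return True
    except SyntaxError as exc:
        logging.warning("evolved strategy rejected: %s", exc)
        return False
```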
---
### Phase 2: Short term — data integrity + making v2 effective ✅ complete
Secure PnL accuracy and make the v2 exit logic effective.
---
#### ACT-05: compute SELL PnL from sell_qty ✅ merged
- **ROOT reference**: ROOT-6 (CRITICAL — PnL computed from buy_qty)
- **Gitea issue**: fix(critical): compute SELL PnL from sell_qty
- **Gitea issue number**: #322
- **Target file**: `src/main.py` (lines 1658-1663, 2755-2760)
- **Current behavior**: `trade_pnl = (trade_price - buy_price) * buy_qty` — uses the preceding BUY's quantity
- **Target behavior**: `trade_pnl = (trade_price - buy_price) * sell_qty` — uses the actual sell quantity
- **Acceptance criteria**:
- partial liquidations compute PnL from the quantity actually sold
- full-liquidation cases (buy_qty == sell_qty) yield identical results
- no recurrence of CRCA-like anomalies
- **Test plan**:
- unit: PnL checks for full, partial, and quantity-mismatch cases
- DB: Q4 query (`scripts/audit_queries.sql`) confirms zero anomalies
- **Dependencies**: none
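The fix is a one-factor change, but the CRCA figures from the audit make the impact concrete. A minimal worked example (the helper name is ours; the formula is the one quoted above):

```python
def sell_pnl(trade_price, buy_price, qty):
    # the quoted formula; the bug vs. fix is which quantity flows in
    return (trade_price - buy_price) * qty


# the CRCA anomaly from the audit: BUY 146 @ $3.5499, then SELL 15 @ $35.14
buggy = sell_pnl(35.14, 3.5499, 146)  # old code used buy_qty
fixed = sell_pnl(35.14, 3.5499, 15)   # #322 uses sell_qty
```

The buggy variant reproduces the +4,612.15 USD anomaly reported in the audit; the fixed variant scales PnL to the 15 shares actually sold.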
---
#### ACT-06: add exchange_code to the BUY matching key ✅ merged
- **ROOT reference**: ROOT-7 (exchange_code missing from the BUY matching key)
- **Gitea issue**: fix: add exchange_code to the BUY matching key
- **Gitea issue number**: #323
- **Target file**: `src/db.py` (lines 292-313)
- **Current behavior**: `get_latest_buy_trade()` matches on `(stock_code, market)` only
- **Target behavior**: include `exchange_code` in the matching key when present. When NULL, keep existing behavior (backward compatible)
- **Acceptance criteria**:
- accurate BUY matching when the same ticker is recorded on multiple exchanges
- correct behavior on legacy data with NULL exchange_code
- **Test plan**:
- unit: matching the same ticker across multiple exchanges
- unit: backward compatibility with NULL exchange_code
- **Dependencies**: none
---
#### ACT-07: add log_trade() for blackout-recovery orders ✅ merged
- **ROOT reference**: GAP-4 (blackout-recovery orders not recorded in the DB)
- **Gitea issue**: fix: add log_trade() for blackout-recovery orders
- **Gitea issue number**: #324
- **Target**: the recovery-order execution path inside `process_blackout_recovery_orders()` in `src/main.py`
- **Current behavior**: blackout-recovery orders execute but never call `log_trade()` → not recorded in the DB
- **Target behavior**: call `log_trade()` after executing a recovery order and add a `[blackout-recovery]` prefix to the rationale
- **Acceptance criteria**:
- blackout-recovery orders appear in the trades table
- recovery orders are identifiable via the rationale
- recovery orders are included in performance reports
- **Test plan**:
- unit: DB record exists after a recovery order executes
- integration: end-to-end blackout scenario
- **Dependencies**: none
---
#### ACT-08: supply real features to the v2 staged exit ✅ merged
- **ROOT reference**: ROOT-5 (v2 exit logic ineffective)
- **Gitea issue**: feat: supply real features (ATR, pred_down_prob) to the v2 staged exit
- **Gitea issue number**: #325
- **Target files**: `src/main.py` (lines 500-583), `src/strategy/exit_rules.py`, `src/analysis/technical.py`
- **Current behavior**: `atr_value=0.0` and `pred_down_prob=0.0` supplied → only the hard stop fires
- **Target behavior**:
- `atr_value`: compute ATR(14) for held symbols in real time
- `pred_down_prob`: supply at least an RSI-based downside-probability estimate (replaceable by an ML model later)
- `be_arm_pct`/`arm_pct`: independently configurable (remove the mechanical take_profit_pct * 0.4 derivation)
- **Acceptance criteria**:
- atr_value > 0 when `evaluate_exit()` is called (for symbols where ATR is computable)
- the ATR trailing stop can actually fire
- be_arm_pct/arm_pct independently configurable
- **Test plan**:
- unit: value checks per feature-supply path
- integration: state-machine transition scenarios (HOLDING→BE_LOCK→ARMED→EXITED)
- **Dependencies**: ACT-01 (shared ATR calculation infrastructure)
---
#### ACT-09: pass session_id explicitly to trade/decision logs ✅ merged
- **ROOT reference**: GAP-1 (session_id missing from DecisionLogger), GAP-2 (session_id not passed to log_trade)
- **Gitea issue**: feat: pass session_id explicitly to trade/decision logs
- **Gitea issue number**: #326
- **Target files**: `src/logging/decision_logger.py`, `src/main.py` (lines 1625, 1682, 2769), `src/db.py`
- **Current behavior**:
- `log_decision()`: no session_id parameter
- `log_trade()`: session_id not passed; relies on market-code-based inference
- **Target behavior**:
- `log_decision()`: add a session_id parameter and record it
- `log_trade()` calls pass the runtime session_id explicitly
- **Acceptance criteria**:
- a session_id field on every SELL/BUY log
- a session_id field on decision logs
- session_id matches the actual runtime session
- **Test plan**:
- unit: log_decision() session_id capture
- unit: log_trade() session_id passing
- **Dependencies**: none
---
### Phase 3: Mid term — v3 session optimization ✅ essentially complete (ACT-13 hardening remains)
Strengthen session-boundary handling and operational governance.
---
#### ACT-10: dynamic risk-parameter reload on session transitions ✅ merged
- **ROOT reference**: GAP-3 (no risk-parameter reload on session transitions)
- **Gitea issue**: feat: dynamic risk-parameter reload on session transitions
- **Gitea issue number**: #327
- **Target files**: `src/main.py`, `src/config.py`
- **Current behavior**: risk parameters are loaded once at startup
- **Target behavior**: reload that session's risk parameters on session-boundary change events, with per-session profile support
- **Acceptance criteria**:
- parameters reload on the NXT_AFTER → KRX_REG transition
- a reload event is logged
- existing parameters are kept on reload failure (safe fallback)
- **Tests**: unit tests for config override/reload/fallback included in `test_main.py`. **Remaining**: stronger E2E coverage of real-time session-boundary transitions
- **Dependencies**: ACT-09 (session_id infrastructure)
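The reload-with-safe-fallback behavior can be sketched as below. Names and the JSON shape are assumptions for illustration — the merged #327 code keys off `SESSION_RISK_PROFILES_JSON`, but its exact schema and API are not specified here:

```python
import json
import logging


def reload_session_risk(profiles_json, session_id, current):
    """ACT-10 sketch: at a session boundary, merge that session's profile
    over the current parameters; ANY failure (bad JSON, missing session)
    keeps the current parameters unchanged. Hypothetical API, not #327."""
    try:
        profiles = json.loads(profiles_json)
        override = profiles[session_id]  # KeyError if no profile for session
        merged = {**current, **override}
        logging.info("risk params reloaded for session %s", session_id)
        return merged
    except (ValueError, KeyError, TypeError) as exc:
        logging.warning("risk reload failed (%s); keeping current params", exc)
        return current
```

Returning the untouched `current` dict on every failure path is what makes the fallback "safe": a malformed profile can never leave the engine without risk limits.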
---
#### ACT-11: stronger price/session re-validation on blackout recovery ✅ merged
- **ROOT reference**: GAP-4 remainder (price validity and session-change re-application unimplemented)
- **Gitea issue**: feat: stronger price/session re-validation on blackout recovery
- **Gitea issue number**: #328
- **Target files**: `src/main.py` (lines 694-791), `src/core/blackout_manager.py`
- **Current behavior**: only stale BUY/SELL dropping + order_policy validation
- **Target behavior**:
- on recovery, fetch the current quote and validate the price (drop on sharp moves vs. the entry price)
- on a session change, re-validate against the new session's parameters
- **Acceptance criteria**:
- drop the order when the pre/post-blackout price move exceeds the threshold (e.g. 5%)
- re-evaluate with the new session's parameters on a session change
- **Test plan**:
- unit: drop/execute per price-move scenario
- integration: combined blackout + session-transition scenario
- **Dependencies**: ACT-07 (recovery-order DB logging), ACT-10 (session-parameter reload)
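The price-validity half of the re-validation reduces to a symmetric percentage-move check. A sketch under the acceptance criterion above, with the 5% threshold taken as the example value (the function name is ours, not the #328 API):

```python
def should_drop_recovered_intent(entry_price, current_price, max_move_pct=5.0):
    """ACT-11 sketch: drop a queued intent after blackout recovery when the
    price moved more than max_move_pct in EITHER direction relative to the
    intent's reference price. The 5% default mirrors the example threshold."""
    if entry_price <= 0:
        return True  # unpriceable intent: drop conservatively
    move_pct = abs(current_price - entry_price) / entry_price * 100.0
    return move_pct > max_move_pct
```

Using `abs()` means both a gap up (a BUY chasing a spike) and a gap down (a SELL into a crash) trigger the drop, after which the intent would fall back to normal re-entry logic.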
---
#### ACT-12: switch the Triple Barrier time barrier to calendar minutes ✅ merged
- **ROOT reference**: GAP-5 (time barrier fixed to bar counts)
- **Gitea issue**: feat: switch the Triple Barrier time barrier to calendar minutes
- **Gitea issue number**: #329
- **Target file**: `src/analysis/triple_barrier.py`
- **Current behavior**: uses `max_holding_bars` (a fixed bar count)
- **Target behavior**: switch to `max_holding_minutes` (calendar time); the barrier is reached after a fixed elapsed time regardless of bar period
- **Acceptance criteria**:
- the minute-based barrier behaves consistently across bar-period changes
- backward compatible with max_holding_bars (deprecation warning)
- **Test plan**:
- unit: barrier consistency across bar periods (1, 5, 15 minutes)
- regression check of the existing triple_barrier tests
- **Dependencies**: none
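The bar-period independence argument is easiest to see on timestamps: the check compares wall-clock elapsed time, so it is identical whether the series ticks in 1-, 5-, or 15-minute bars. A minimal sketch (the function name is illustrative):

```python
from datetime import datetime, timedelta


def time_barrier_hit(entry_time, now, max_holding_minutes):
    """ACT-12 sketch: a calendar-time barrier compares wall-clock elapsed
    time, so the same limit applies regardless of the bar period."""
    return now - entry_time >= timedelta(minutes=max_holding_minutes)
```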
---
#### ACT-13: CI auto-validation (policy registry + TASK-REQ mapping) ✅ baseline implemented, hardening remains
- **Requirement reference**: REQ-OPS-002 (registry update forced on policy changes), REQ-OPS-003 (TASK-REQ mapping enforced)
- **Gitea issue**: infra: CI auto-validation (policy registry + TASK-REQ mapping)
- **Gitea issue number**: #330
- **Current behavior**: `.gitea/workflows/ci.yml` automatically runs `scripts/validate_governance_assets.py` + `scripts/validate_ouroboros_docs.py`
- **Remaining hardening**: raise the enforcement level for PR-body REQ/TASK/TEST, and tighten the CI-failure criterion when policy files go un-updated
- **Dependencies**: none
---
## 3. Verification Plan
### 3.1 Unit tests
- Individual tests written for every ACT item
- Keep coverage >= 80%
- Confirm the full suite passes on current CI (998 tests collected as of 2026-03-01)
### 3.2 Integration tests
- Backtest pipeline: compare KR stop-out frequency, repeated-trade counts, and win rate before/after Phase 1
- State-machine integration: end-to-end quadruple-exit scenarios after the Phase 2 feature supply
- Blackout combined: Phase 3 session-transition + blackout-recovery scenarios
### 3.3 Live verification
- Paper trading diverges too far from production for reliable verification → verify via **small-size live trading**
- Per-phase go-live criteria: unit/integration tests pass → small-size live (1~2 days) → monitoring → confirm normal → full operation
---
## 4. Dependency Graph
```
Phase 1 (can run in parallel)
ACT-01 #318 ─┐
ACT-02 #319  │ (all independent)
ACT-03 #320  │
ACT-04 #321 ─┘
Phase 2
ACT-05 #322 ─┐
ACT-06 #323  │ (mostly independent)
ACT-07 #324  │
ACT-09 #326 ─┘
ACT-08 #325 ←── ACT-01 #318 (shared ATR infrastructure)
Phase 3
ACT-10 #327 ←── ACT-09 #326 (session_id infrastructure)
ACT-11 #328 ←── ACT-07 #324, ACT-10 #327
ACT-12 #329 (independent)
ACT-13 #330 (independent)
```
### Cross-phase relationships
- Phase 1 → Phase 2: Phase 1 completion is not a hard prerequisite for Phase 2, but stopping the bleeding with Phase 1 before starting Phase 2 is recommended
- Phase 2 → Phase 3: ACT-09 (session_id) is a prerequisite for ACT-10 (session reload); ACT-07 + ACT-10 are prerequisites for ACT-11
---
## 5. 롤백 계획
### Phase 1 롤백
- 각 ACT는 독립적이므로 개별 revert 가능
- 손절선(ACT-01): 기존 -2% 고정값으로 복원
- 쿨다운(ACT-02): 쿨다운 체크 제거
- 가격 필터(ACT-03): 필터 조건 제거
- syntax 검증(ACT-04): 검증 스킵, 기존 저장 로직 복원
### Phase 2 롤백
- PnL 수정(ACT-05): buy_qty 기준으로 복원 (단, 데이터 정합성 후퇴 감수)
- exchange_code(ACT-06): 매칭 키에서 제거
- 블랙아웃 DB(ACT-07): log_trade() 호출 제거
- 피처 공급(ACT-08): 0.0 공급으로 복원
- session_id(ACT-09): 파라미터 제거, 자동 추론 복원
### Phase 3 롤백
- 세션 재로딩(ACT-10): 시작 시 1회 로딩으로 복원
- 블랙아웃 재검증(ACT-11): 기존 stale 드롭만 유지
- 시간장벽(ACT-12): max_holding_bars로 복원
- CI(ACT-13): CI 워크플로우 제거
### 롤백 절차
1. 해당 ACT의 PR branch에서 `git revert` 수행
2. 기존 테스트 전체 통과 확인
3. 실환경 투입 전 소액 live 검증
---
## 6. 미진 사항 (2026-03-01 기준)
Phase 1~3 구현 완료 후에도 다음 항목이 운영상 미완료 상태이다.
### 6.1 운영 검증 필요
| 항목 | 설명 | 우선순위 |
|------|------|----------|
| FX PnL 운영 활성화 | `fx_pnl`/`strategy_pnl` 컬럼 존재하나 모든 운영 데이터 값이 0 | P1 |
| 세션 경계 E2E 통합 테스트 보강 | `test_main.py`에 단위 테스트 존재; 세션 경계 실시간 전환 E2E 미작성 | P2 |
| v2 상태기계 통합 end-to-end | 실거래 경로에서 HOLDING→BE_LOCK→ARMED→EXITED 전체 시나리오 테스트 미작성 | P2 |
### 6.2 아키텍처 수준 잔여 갭
| 항목 | 설명 | 배경 문서 |
|------|------|-----------|
| CI 자동 검증 고도화 (#330) | 기본 구현 완료(`validate_governance_assets.py` CI 연동); 규칙/강제수준 고도화 필요 | REQ-OPS-002, REQ-OPS-003 |
| pred_down_prob ML 모델 대체 | 현재 RSI 프록시 사용 — 추후 실제 GBDT/ML 모델로 대체 권장 | ROOT-5, ouroboros_plan_v2.txt §3.D |
| KR/US 파라미터 민감도 분석 | v2 계획의 be_arm_pct/arm_pct/atr_k 최적값 탐색 미수행 | ouroboros_plan_v2.txt §8 |
### 6.3 v3 실험 매트릭스 미착수
ouroboros_plan_v3.txt §9에 정의된 3개 실험이 아직 시작되지 않았다.
| 실험 ID | 시장 | 포커스 | 상태 |
|---------|------|--------|------|
| EXP-KR-01 | KR | NXT 야간 특화 (p_thresh 0.65) | ❌ 미착수 |
| EXP-US-01 | US | 21h 준연속 운용 (atr_k 2.5) | ❌ 미착수 |
| EXP-HYB-01 | Global | KR 낮 + US 밤 연계 레짐 자산배분 | ❌ 미착수 |
---
*끝.*

docs/ouroboros/README.md Normal file

@@ -0,0 +1,44 @@
<!--
Doc-ID: DOC-ROOT-001
Version: 1.0.1
Status: active
Owner: strategy
Updated: 2026-03-01
-->
# The Ouroboros 실행 문서 허브
이 폴더는 `source/ouroboros_plan_v2.txt`, `source/ouroboros_plan_v3.txt`를 구현 가능한 작업 지시서 수준으로 분해한 문서 허브다.
## 읽기 순서 (Routing)
1. 검증 체계부터 확정: [00_validation_system.md](./00_validation_system.md)
2. 단일 진실원장(요구사항): [01_requirements_registry.md](./01_requirements_registry.md)
3. v2 실행 지시서: [10_phase_v2_execution.md](./10_phase_v2_execution.md)
4. v3 실행 지시서: [20_phase_v3_execution.md](./20_phase_v3_execution.md)
5. 코드 레벨 작업 지시: [30_code_level_work_orders.md](./30_code_level_work_orders.md)
6. 수용 기준/테스트 계획: [40_acceptance_and_test_plan.md](./40_acceptance_and_test_plan.md)
7. PM 시나리오/이슈 분류 **(A)**: [50_scenario_matrix_and_issue_taxonomy.md](./50_scenario_matrix_and_issue_taxonomy.md)
8. TPM 제어 프로토콜/수용 매트릭스 **(B)**: [50_tpm_control_protocol.md](./50_tpm_control_protocol.md)
9. 저장소 강제 설정 체크리스트: [60_repo_enforcement_checklist.md](./60_repo_enforcement_checklist.md)
10. 메인 에이전트 아이디에이션 백로그: [70_main_agent_ideation.md](./70_main_agent_ideation.md)
11. v2/v3 구현 감사 및 수익률 분석: [80_implementation_audit.md](./80_implementation_audit.md)
12. 손실 복구 실행 계획: [85_loss_recovery_action_plan.md](./85_loss_recovery_action_plan.md)
> **참고**: 7번·8번은 `50_` 프리픽스를 공유합니다. (A) = 시나리오/이슈 분류, (B) = TPM 제어 프로토콜.
## 운영 규칙
- 계획 변경은 반드시 `01_requirements_registry.md`의 ID 정의부터 수정한다.
- 구현 문서는 원장 ID만 참조하고 자체 숫자/정책을 새로 만들지 않는다.
- 문서 품질 룰셋(`RULE-DOC-001` `RULE-DOC-002` `RULE-DOC-003` `RULE-DOC-004` `RULE-DOC-005` `RULE-DOC-006`)은 [00_validation_system.md](./00_validation_system.md)를 기준으로 적용한다.
- 문서 병합 전 아래 검증을 통과해야 한다.
```bash
python3 scripts/validate_ouroboros_docs.py
```
## 원본 계획 문서
- [v2](./source/ouroboros_plan_v2.txt)
- [v3](./source/ouroboros_plan_v3.txt)


@@ -0,0 +1,165 @@
[The Ouroboros] 운영/전략 계획서 v2
작성일: 2026-02-26
상태: 코드 구현 전 설계안(전략/검증 중심)
==================================================
0) 목적
==================================================
고정 익절(+3%) 중심 로직에서 벗어나, 다음을 만족하는 실전형 청산 체계로 전환한다.
- 수익 구간 보호 (손익 역전 방지)
- 변동성 적응형 청산
- 예측 모델의 확률 신호를 보조적으로 결합
- 과적합 방지를 최우선으로 한 검증 프레임워크
==================================================
1) 핵심 설계 원칙
==================================================
1. 예측 성능과 전략 성능을 분리 평가
- 예측 성능: PR-AUC, Brier, Calibration
- 전략 성능: Net PnL, Sharpe, MDD, Profit Factor, Turnover
2. 시계열 검증 규율 강제
- Walk-forward 분할
- Purge/Embargo 적용
- Random split 금지
3. 실거래 리얼리즘 우선
- 거래비용/슬리피지/체결실패 반영 없는 백테스트 결과는 채택 금지
==================================================
2) 매도 상태기계 (State Machine)
==================================================
상태:
- HOLDING
- BE_LOCK
- ARMED
- EXITED
정의:
- HOLDING: 일반 보유 상태
- BE_LOCK: 일정 수익권 진입 시 손절선을 본전(또는 비용 반영 본전)으로 상향
- ARMED: 추세 추적(피크 추적) 기반 청산 준비 상태
- EXITED: 청산 완료
전이 규칙(개념):
- HOLDING -> BE_LOCK: unrealized_pnl_pct >= be_arm_pct
- BE_LOCK -> ARMED: unrealized_pnl_pct >= arm_pct
- ARMED -> EXITED: 아래 조건 중 하나 충족
1) hard stop 도달
2) trailing stop 도달 (peak 대비 하락)
3) 모델 하락확률 + 유동성 약화 조건 충족
상태 전이 구현 규칙(필수):
- 매 틱/바 평가 시 "현재 조건이 허용하는 최상위 상태"로 즉시 승격
- 순차 if-else로 인한 전이 누락 금지 (예: 갭으로 BE_LOCK/ARMED 동시 충족)
- EXITED 조건은 모든 상태보다 우선 평가
- 상태 전이 로그에 이전/이후 상태, 전이 사유, 기준 가격/수익률 기록
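위 전이 규칙 중 "현재 조건이 허용하는 최상위 상태로 즉시 승격"은 아래와 같은 최소 스케치로 표현할 수 있다. 임계값은 본 문서 8절의 KR 초기안을 예시로 쓴 가정값이다.

```python
from enum import IntEnum

class PosState(IntEnum):
    HOLDING = 0
    BE_LOCK = 1
    ARMED = 2
    EXITED = 3

def next_state(current: PosState, pnl_pct: float,
               be_arm_pct: float = 1.2, arm_pct: float = 2.8,
               exit_hit: bool = False) -> PosState:
    # EXITED 조건은 모든 상태보다 우선 평가
    if exit_hit:
        return PosState.EXITED
    # 순차 if-else가 아니라 조건이 허용하는 최상위 상태를 계산해 즉시 승격
    target = PosState.HOLDING
    if pnl_pct >= be_arm_pct:
        target = PosState.BE_LOCK
    if pnl_pct >= arm_pct:
        target = PosState.ARMED
    return max(current, target)  # 하향 전이 금지 (상태 단조성)

# 갭으로 BE_LOCK/ARMED 조건이 동시 충족되면 한 번에 ARMED로 승격
assert next_state(PosState.HOLDING, 3.0) == PosState.ARMED
```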
==================================================
3) 청산 로직 구성 (4중 안전장치)
==================================================
A. Hard Stop
- 계좌/포지션 보호용 절대 하한
- 항상 활성화
B. Dynamic Stop (Break-even Lock)
- BE_LOCK 진입 시 손절선을 본전 이상으로 상향
- "수익 포지션이 손실로 반전"되는 구조적 리스크 차단
C. ATR 기반 Trailing Stop
- 고정 trail_pct 대신 변동성 적응형 사용
- 예시: ExitPrice = PeakPrice - (k * ATR)
D. 모델 확률 신호
- 하락전환 확률(pred_prob)이 임계값 이상일 때 청산 가중
- 단독 트리거가 아닌 trailing/리스크 룰 보조 트리거로 사용
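C의 ATR 트레일링 예시(ExitPrice = PeakPrice - k * ATR)를 그대로 코드화하면 다음과 같다. 변동성(ATR)이 커지면 청산선이 자동으로 멀어지는 변동성 적응형 구조다.

```python
def trailing_exit_price(peak_price: float, atr: float, k: float = 2.2) -> float:
    # ExitPrice = PeakPrice - (k * ATR)
    return peak_price - k * atr

# ATR이 클수록(변동성↑) 청산선이 피크에서 더 멀어진다
assert trailing_exit_price(10_000.0, 100.0, k=2.0) == 9_800.0
assert trailing_exit_price(10_000.0, 200.0, k=2.0) == 9_600.0
```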
==================================================
4) 라벨링 체계 (Triple Barrier)
==================================================
목표:
고정 H-window 라벨 편향을 줄이고, 금융 시계열의 경로 의존성을 반영한다.
라벨 정의:
- Upper barrier (익절)
- Lower barrier (손절)
- Time barrier (만기)
규칙:
- 세 장벽 중 "먼저 터치한 장벽"으로 라벨 확정
- 라벨은 entry 시점 이후 데이터만 사용해 생성
- 피처 생성 구간과 라벨 구간을 엄격 분리해 look-ahead bias 방지
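"먼저 터치한 장벽으로 라벨 확정" 규칙은 아래 최소 스케치로 나타낼 수 있다. 장벽 수치와 함수명은 설명용 가정값이다.

```python
def triple_barrier_label(prices: list[float], entry_price: float,
                         up_pct: float, dn_pct: float, max_bars: int) -> int:
    """entry 이후 가격 경로에서 먼저 터치한 장벽으로 라벨 확정.
    반환: +1 (upper/익절), -1 (lower/손절), 0 (time barrier/만기)."""
    upper = entry_price * (1 + up_pct / 100)
    lower = entry_price * (1 - dn_pct / 100)
    for p in prices[:max_bars]:  # entry 시점 이후 데이터만 사용
        if p >= upper:
            return 1
        if p <= lower:
            return -1
    return 0

assert triple_barrier_label([101, 103, 99], 100, 2.0, 2.0, 3) == 1   # upper 선터치
assert triple_barrier_label([100, 100.5], 100, 2.0, 2.0, 26) == 0    # 만기
```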
==================================================
5) 검증 프레임워크
==================================================
5.1 분할 방식
- Fold 단위 Walk-forward
- Purge/Embargo로 인접 샘플 누수 차단
5.2 비교군(Baseline) 구조
- B0: 기존 고정 손절/익절
- B1: 모델 없는 trailing only
- M1: trailing + 모델 확률 결합
5.3 채택 기준
- M1이 B0/B1 대비 OOS(Out-of-sample)에서 일관된 우위
- 단일 구간 성과가 아닌 fold 분포 기준으로 판단
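5.1의 Walk-forward + Purge/Embargo 분할은 아래처럼 스케치할 수 있다. 간격 수치는 설명용 가정값이며, 실제 분할 로직은 별도 모듈(`generate_walk_forward_splits`)을 따른다.

```python
def walk_forward_splits(n: int, train_size: int, test_size: int,
                        purge: int = 5, embargo: int = 5):
    """시간 순서를 유지하고 purge/embargo로 인접 샘플 누수를 차단하는 fold 생성."""
    splits = []
    start = 0
    while start + train_size + purge + test_size <= n:
        train = range(start, start + train_size)
        test_start = start + train_size + purge   # train-test 사이 purge 간격
        test = range(test_start, test_start + test_size)
        splits.append((train, test))
        start = test_start + test_size + embargo  # 다음 fold 전 embargo 간격
    return splits

splits = walk_forward_splits(100, train_size=30, test_size=10)
for tr, te in splits:
    assert max(tr) + 5 < min(te)  # purge로 train/test 인접 누수 차단 확인
```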
==================================================
6) 실행 아키텍처 원칙
==================================================
1. 저지연 실행 경로
- 실시간 청산 판단은 경량 엔진(룰/GBDT) 담당
- LLM은 레짐 판단/비중 조절/상위 의사결정 보조
2. 체결 현실 반영
- 세션 유동성에 따른 슬리피지 페널티 차등 적용
- 미체결/재호가/재접수 시나리오를 백테스트에 반영
==================================================
7) 운영 리스크 관리
==================================================
승격 단계:
- Offline backtest -> Paper shadow -> Small-capital live
중단(Kill Switch):
- rolling Sharpe 악화
- MDD 한도 초과
- 체결 실패율/슬리피지 급등
Kill Switch 실행 순서(원자적):
1) 모든 신규 주문 차단 플래그 ON
2) 모든 미체결 주문 취소 요청
3) 취소 결과 재조회(실패 건 재시도)
4) 포지션 리스크 재계산 후 강제 축소/청산 판단
5) 상태/로그 스냅샷 저장 및 운영 경보 발송
원칙:
- 모델이 실패해도 hard stop 기반 보수 모드로 즉시 디그레이드 가능해야 함
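위 Kill Switch 실행 순서를 코드 골격으로 옮기면 다음과 같다. `broker`/`risk`/`alert`는 설명을 위한 가상의 인터페이스이며 실제 구현 객체가 아니다.

```python
def run_kill_switch(broker, risk, alert):
    """Kill Switch 실행 순서 스케치 (broker/risk/alert는 가상의 인터페이스)."""
    broker.block_new_orders()           # 1) 모든 신규 주문 차단 플래그 ON
    failed = broker.cancel_all_open()   # 2) 미체결 전량 취소 요청, 실패 건 반환
    for order_id in failed:
        broker.cancel(order_id)         # 3) 취소 실패 건 재시도
    risk.recompute_and_reduce()         # 4) 포지션 리스크 재계산 후 축소/청산 판단
    alert.snapshot_and_notify()         # 5) 상태/로그 스냅샷 저장 + 운영 경보
```

각 단계가 앞 단계 완료를 전제로 순차 실행되어야 "신규 주문 유입 중 취소"와 같은 경쟁 상태를 피할 수 있다.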
==================================================
8) 고정 파라미터(초기안)
==================================================
(15분봉 단기 스윙 기준 제안)
- KR: be_arm_pct=1.2, arm_pct=2.8, atr_period=14, atr_multiplier_k=2.2,
time_barrier_bars=26, p_thresh=0.62
- US: be_arm_pct=1.0, arm_pct=2.4, atr_period=14, atr_multiplier_k=2.0,
time_barrier_bars=32, p_thresh=0.60
민감도 범위(초기 탐색):
- be_arm_pct: KR 0.9~1.8 / US 0.7~1.5
- arm_pct: KR 2.2~3.8 / US 1.8~3.2
- atr_multiplier_k: KR 1.8~2.8 / US 1.6~2.4
- time_barrier_bars: KR 20~36 / US 24~48
- p_thresh: 0.55~0.70
==================================================
9) 구현 전 체크리스트
==================================================
- 파라미터 튜닝 시 nested leakage 방지
- 수수료/세금/슬리피지 전부 반영 여부 확인
- 세션/타임존/DST 처리 일관성 확인
- 모델 버전/설정 해시/실험 로그 재현성 확보
끝.


@@ -0,0 +1,185 @@
[The Ouroboros] 운영확장 v3
작성일: 2026-02-26
상태: v2 확장판 / 야간·프리마켓 포함 글로벌 세션 운영 설계안
==================================================
0) 목적
==================================================
"24시간 무중단 자산 증식" 비전을 위해 거래 세션 범위를 KR 정규장 중심에서
NXT/미국 확장 세션까지 확대한다. 핵심은 다음 3가지다.
- 세션 인지형 의사결정
- 세션별 리스크/비용 차등 적용
- 시간장벽의 현실적 재정의
==================================================
1) 세션 모델 (Session-aware Engine)
==================================================
KR 세션:
- NXT_PRE : 08:00 ~ 08:50 (KST)
- KRX_REG : 09:00 ~ 15:30 (KST)
- NXT_AFTER : 15:30 ~ 20:00 (KST)
US 세션(KST 관점 운영):
- US_DAY : 10:00 ~ 18:00
- US_PRE : 18:00 ~ 23:30
- US_REG : 23:30 ~ 06:00
- US_AFTER : 06:00 ~ 07:00
원칙:
- 모든 피처/신호/주문/로그에 session_id를 명시적으로 포함
- 세션 전환 시 상태 업데이트 및 리스크 파라미터 재로딩
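KR 세션 경계를 그대로 테이블로 옮기면 session_id 판정은 아래처럼 스케치할 수 있다. 경계값은 본문 수치를 사용했고, 세션 외 시간대(예: 08:50~09:00 갭)는 None 처리한다.

```python
from datetime import time

# 본문 KR 세션 경계를 그대로 옮긴 예시 테이블 (end는 배타적)
KR_SESSIONS = [
    ("NXT_PRE",   time(8, 0),   time(8, 50)),
    ("KRX_REG",   time(9, 0),   time(15, 30)),
    ("NXT_AFTER", time(15, 30), time(20, 0)),
]

def kr_session_id(now_kst: time):
    for sid, start, end in KR_SESSIONS:
        if start <= now_kst < end:
            return sid
    return None  # 세션 외 시간대: 신규 주문 금지 구간

assert kr_session_id(time(9, 30)) == "KRX_REG"
assert kr_session_id(time(8, 55)) is None  # NXT_PRE 종료~정규장 개장 사이 갭
```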
==================================================
2) 캘린더/휴장/DST 고정 소스
==================================================
KR:
- 기본: pykrx 또는 FinanceDataReader (KRX 기준)
- 예외: 연휴/임시 휴장/NXT 특이 운영은 KIS 공지 기반 보완
US:
- pandas_market_calendars (NYSE 기준)
- 2026 DST:
- 시작: 2026-03-08
- 종료: 2026-11-01
정합성 규칙:
- 스케줄 충돌 시 "거래소 캘린더 > 로컬 추정" 우선
- 시장 상태(open/close/half-day)는 주문 엔진 진입 전 최종 검증
KIS 점검시간 회피 정책(필수):
- 브로커 점검/장애 블랙아웃 윈도우는 운영 설정으로 별도 관리
- 블랙아웃 구간에는 신규 주문 전송 금지, 취소/정정도 정책적으로 제한
- 신호는 유지하되 주문 의도는 Queue에 적재, 복구 후 유효성 재검증 뒤 실행
- 복구 직후에는 잔고/미체결/체결내역을 우선 동기화한 뒤 주문 엔진 재가동
==================================================
3) 시간장벽 재정의
==================================================
v2의 time_barrier_bars 고정값을 v3에서 다음으로 확장:
- max_holding_minutes (시장별 기본 만기)
- 봉 개수는 세션 길이/간격으로 동적 계산
기본값:
- KR: max_holding_minutes = 2160 (약 3거래일, NXT 포함 관점)
- US: max_holding_minutes = 4320 (약 72시간)
운영 주의:
- 고정 "일중 청산"보다 "포지션 유지 시간" 기준 만기 적용
- 세션 종료 강제청산 규칙과 충돌 시 우선순위 명시 필요
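"봉 개수는 세션 길이/간격으로 동적 계산" 규칙을 수식으로 옮기면 올림 나눗셈 하나로 충분하다.

```python
import math

def bars_from_minutes(max_holding_minutes: int, bar_interval_minutes: int) -> int:
    """캘린더 분 만기를 현재 봉 주기의 봉 개수로 동적 환산 (올림 처리)."""
    return math.ceil(max_holding_minutes / bar_interval_minutes)

# 동일한 2160분(KR 기본값) 만기가 봉 주기와 무관하게 일관 적용된다
assert bars_from_minutes(2160, 15) == 144
assert bars_from_minutes(2160, 5) == 432
```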
==================================================
4) 세션별 비용/슬리피지 모델 (보수적)
==================================================
KRX_REG:
- 슬리피지: 2~3틱 (약 0.05%)
- 수수료+세금: 0.20% ~ 0.23%
NXT_AFTER:
- 슬리피지: 5~8틱 (약 0.15%)
- 수수료+세금: 0.20% ~ 0.23%
US_REG:
- 슬리피지: 2~3틱 (약 0.03%)
- 수수료+기타 비용: 0.07% ~ 0.15%
US_PRE / US_DAY:
- 슬리피지: 10틱+ (약 0.3% ~ 0.5%)
- 수수료+기타 비용: 0.07% ~ 0.15%
원칙:
- 백테스트 체결가는 세션별 보수 가정 적용
- 저유동 세션은 자동 보수 모드(p_thresh 상향, atr_k 상향) 권장
- 백테스트 체결가 기본은 "불리한 방향 체결" 가정 (단순 close 체결 금지)
세션별 주문 유형 강제(필수):
- KRX_REG / US_REG: 지정가 우선, 시장가 제한적 허용
- NXT_AFTER / US_PRE / US_DAY / US_AFTER: 시장가 금지
- 저유동 세션은 최우선 지정가 또는 IOC/FOK(가격 보호 한도 포함)만 허용
- 주문 실패 시 재호가 간격/횟수 상한을 두고, 초과 시 주문 철회
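위 세션별 주문 유형 강제 규칙을 허용 테이블로 코드화하면 다음과 같다. 테이블 내용은 본문 규칙을 그대로 옮긴 예시이며, 실제 허용 범위는 운영 설정을 따른다는 가정이다.

```python
# 본문 규칙을 옮긴 예시 테이블: 저유동 세션은 시장가 금지
ALLOWED_ORDER_TYPES = {
    "KRX_REG":   {"LIMIT", "MARKET"},      # 지정가 우선, 시장가 제한적 허용
    "US_REG":    {"LIMIT", "MARKET"},
    "NXT_AFTER": {"LIMIT", "IOC", "FOK"},
    "US_PRE":    {"LIMIT", "IOC", "FOK"},
    "US_DAY":    {"LIMIT", "IOC", "FOK"},
    "US_AFTER":  {"LIMIT", "IOC", "FOK"},
}

def order_type_allowed(session_id: str, order_type: str) -> bool:
    # 미등록 세션은 보수적으로 전부 불허
    return order_type in ALLOWED_ORDER_TYPES.get(session_id, set())

assert not order_type_allowed("NXT_AFTER", "MARKET")  # 저유동 세션 시장가 금지
assert order_type_allowed("KRX_REG", "LIMIT")
```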
==================================================
5) 포지션/잔고 통합 규칙 (KIS 특성 반영)
==================================================
문제:
- KRX/NXT 잔고 조회가 venue 단위로 분리되거나 반영 지연 가능
규칙:
- 종목 식별은 동일 종목코드(또는 ISIN) 기준 통합 포지션으로 관리
- 다만 주문 가능 수량은 venue별 API 응답을 최종 기준으로 사용
- 매도 가능 수량 검증은 주문 직전 재조회로 확정
==================================================
6) 마감 강제청산/오버나잇 예외 규칙
==================================================
기본 원칙:
- 모든 포지션에 대해 세션 종료 10분 전 REDUCE_ALL 검토
오버나잇 예외 허용 (모두 충족 시):
1) ARMED 상태 (예: +2.8% 이상)
2) 모델 하락확률 < 0.30
3) 포트폴리오 현금 비중 >= 50%
갭 리스크 통제:
- 다음 개장 시 hard stop을 시가 기준으로 재산정
- 조건 위반 시 즉시 청산 우선
Kill Switch 연동:
- MDD/실패율 임계치 초과 시 "미체결 전량 취소 -> 신규 주문 차단 -> 리스크 축소" 순서 강제
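오버나잇 예외의 "모두 충족 시" 조건은 아래처럼 하나의 술어 함수로 스케치할 수 있다. 임계값은 본문 수치를 그대로 사용했다.

```python
def overnight_allowed(state: str, pnl_pct: float,
                      down_prob: float, cash_ratio: float) -> bool:
    """오버나잇 예외: 세 조건을 모두 충족할 때만 허용."""
    return (
        state == "ARMED" and pnl_pct >= 2.8  # 1) ARMED 상태 (+2.8% 이상)
        and down_prob < 0.30                  # 2) 모델 하락확률 < 0.30
        and cash_ratio >= 0.50                # 3) 현금 비중 >= 50%
    )

assert overnight_allowed("ARMED", 3.1, 0.20, 0.60)
assert not overnight_allowed("ARMED", 3.1, 0.35, 0.60)  # 하락확률 조건 위반
```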
==================================================
7) 데이터 저장/용량 정책
==================================================
핵심 테이블(계획):
- feature_snapshots
- position_states
- model_predictions
저장 규칙:
- feature_hash 기반 중복 제거
- 가격 변화가 작아도 session_id 변경 시 강제 스냅샷
- 월 단위 DB 로테이션 권장 (예: trading_YYYY_MM.db)
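feature_hash 중복 제거와 session_id 강제 스냅샷 규칙을 함께 표현하면 아래와 같다. 해시 방식(sha256 + 키 정렬 JSON)은 결정성을 보장하기 위한 예시 가정이다.

```python
import hashlib
import json

def feature_hash(features: dict) -> str:
    """feature_hash 기반 중복 제거용 해시 (키 정렬로 결정성 확보)."""
    payload = json.dumps(features, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def should_snapshot(prev_hash: str, prev_session: str,
                    features: dict, session_id: str) -> bool:
    # 피처가 동일해도 session_id가 바뀌면 강제 스냅샷
    return feature_hash(features) != prev_hash or session_id != prev_session

h = feature_hash({"rsi": 55.0, "atr": 120.0})
assert not should_snapshot(h, "KRX_REG", {"rsi": 55.0, "atr": 120.0}, "KRX_REG")
assert should_snapshot(h, "KRX_REG", {"rsi": 55.0, "atr": 120.0}, "NXT_AFTER")
```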
==================================================
8) 환율/정산 리스크 정책 (US 필수)
==================================================
원칙:
- USD 노출은 전략 손익과 별도로 환율 손익을 분리 추적
- 원화 주문 서비스 사용 시 가환율 체결/익일 정산 리스크를 예수금 규칙에 반영
운영 규칙:
- 환전 시점 정책(사전 환전/수시 환전)을 고정하고 로그에 기록
- 최소 USD 버퍼와 KRW 버퍼를 각각 설정해 주문 가능금 부족 리스크 완화
- 환율 급변 구간에는 포지션 한도 축소 또는 신규 진입 제한
==================================================
9) v3 실험 매트릭스 (우선 3선)
==================================================
EXP-KR-01:
- 시장: KR
- 포커스: NXT 야간 특화
- 제안: time barrier 확장(예: 48 bars 상당), p_thresh 상향(0.65)
EXP-US-01:
- 시장: US
- 포커스: 21h 준연속 운용
- 제안: time barrier 확장(예: 80 bars 상당), atr_k 상향(2.5)
EXP-HYB-01:
- 시장: Global
- 포커스: KR 낮 + US 밤 연계
- 제안: 레짐 기반 자산배분 자동조절
==================================================
10) 코드 착수 전 최종 확정 체크
==================================================
1) 세션별 공식 캘린더 소스/우선순위
2) 세션별 슬리피지/비용 테이블 수치
3) 시장별 max_holding_minutes
4) 마감 강제청산 예외 조건 임계값
5) 블랙아웃(점검/장애) 시간대와 주문 큐 처리 규칙
6) 세션별 허용 주문 유형(시장가 허용 범위 포함)
7) 환전/정산 정책 및 통화 버퍼 임계값
모든 항목 확정 후 Step 1 구현(코드)로 이동.
끝.


@@ -87,7 +87,7 @@
- 선정 기준 추적 → Evolution 시스템 최적화 가능
- API 장애 시 정적 watchlist로 자동 전환
**참고 (당시 구현 기준):** Realtime 모드 전용으로 설계되었으나, 이후 Daily 경로에서도 스캐너를 사용하도록 변경됨. 해외 fallback도 정적 watchlist → 동적 유니버스(active/recent/holdings)로 전환 (2026-02-16 참조).
**이슈/PR:** #76, #77
@@ -355,3 +355,159 @@ Order result: 모의투자 매수주문이 완료 되었습니다. ✓
- `TestOverseasGhostPositionClose` 2개: ghost-close 로그 확인, 일반 오류 무시
**이슈/PR:** #235, PR #236
---
## 2026-02-27
### v2 백테스트 파이프라인 통합 (#305)
**배경:**
- `TripleBarrier`, `WalkForward`, `BacktestCostGuard`는 개별 모듈로 존재했으나,
하나의 실행 경로로 연결된 파이프라인이 없어 통합 검증이 불가능했다.
**구현 내용:**
1. `src/analysis/backtest_pipeline.py`
- `run_v2_backtest_pipeline()` 추가:
- `validate_backtest_cost_model()` 선검증(fail-fast)
- `label_with_triple_barrier()`로 entry 라벨 생성
- `generate_walk_forward_splits()`로 fold 생성
- fold별 baseline(`B0`, `B1`, `M1`) score 산출
- 결과 아티팩트 계약 구조(`BacktestPipelineResult`) 정의
- leakage 검사 유틸 `fold_has_leakage()` 제공
2. `tests/test_backtest_pipeline_integration.py` 신규
- happy path 통합 검증
- cost guard 실패 fail-fast 검증
- purge/embargo 기반 누수 방지 검증
- 동일 입력 재실행 결정성 검증
**검증:**
- `pytest -q tests/test_backtest_pipeline_integration.py tests/test_triple_barrier.py tests/test_walk_forward_split.py tests/test_backtest_cost_guard.py tests/test_backtest_execution_model.py`
- `ruff check src/analysis/backtest_pipeline.py tests/test_backtest_pipeline_integration.py`
**이슈/PR:** #305
---
## 2026-02-28 ~ 2026-03-01
### v2/v3 손실 복구 실행 계획 — Phase 1 완료 (#318~#321)
**배경:**
- `docs/ouroboros/80_implementation_audit.md` 감사 결과 식별된 7개 근본 원인(ROOT) 및 5개 구현 갭(GAP) 중
가장 큰 손실 패턴 4개를 Phase 1로 즉시 제거.
**구현 내용:**
1. **ACT-01: KR 손절선 ATR 기반 동적 확대** (#318)
- `src/main.py`, `src/config.py`
- KR 시장: ATR(14) 기반 동적 hard stop (`k=2.0`, 범위 -2%~-7%)
- ATR 미제공 시 기존 -2% 폴백
- ROOT-1 (hard_stop_pct 고정값 과소) 해소
2. **ACT-02: 손절 후 동일 종목 재진입 쿨다운** (#319)
- `src/main.py`, `src/config.py`
- 손절(pnl<0) 후 동일 종목 `COOLDOWN_MINUTES`(기본 120분) 동안 BUY 차단
- 익절에는 미적용
- ROOT-2 (동일 종목 반복 매매) 해소
3. **ACT-03: US $5 이하 종목 진입 차단 필터** (#320)
- `src/main.py`, `src/config.py`
- US 시장 BUY 시 현재가 `US_MIN_PRICE`(기본 $5) 이하 차단
- ROOT-3 (미국 페니스탁 무분별 진입) 해소
4. **ACT-04: 진화 전략 코드 syntax 검증** (#321)
- `src/evolution/optimizer.py`
- `ast.parse()` + `compile()` 선검증 후 통과한 코드만 저장
- ROOT-4 (진화 전략 문법 오류) 해소
**이슈/PR:** #318, #319, #320, #321
---
### v2/v3 손실 복구 실행 계획 — Phase 2 완료 (#322~#326)
**배경:**
- 손익 계산 정확도 확보 및 v2 청산 로직 실효화.
**구현 내용:**
1. **ACT-05: SELL PnL 계산을 sell_qty 기준으로 수정** (#322)
- `src/main.py` (line 1658-1663, 2755-2760)
- `trade_pnl = (trade_price - buy_price) * sell_qty`로 변경
- ROOT-6 (PnL 계산 buy_qty 사용 CRITICAL) 해소
2. **ACT-06: BUY 매칭 키에 exchange_code 추가** (#323)
- `src/db.py`
- `get_latest_buy_trade()``(stock_code, market, exchange_code)` 기준 매칭
- exchange_code NULL인 레거시 데이터 하위 호환 유지
- ROOT-7 (오매칭 리스크) 해소
3. **ACT-07: 블랙아웃 복구 주문에 log_trade() 추가** (#324)
- `src/main.py` (블랙아웃 복구 실행 경로)
- 복구 주문 실행 후 `log_trade()` 호출, rationale에 `[blackout-recovery]` prefix
- GAP-4 (블랙아웃 복구 주문 DB 미기록) 해소
4. **ACT-08: v2 staged exit에 실제 피처 공급** (#325)
- `src/main.py`, `src/strategy/exit_rules.py`
- `atr_value`: ATR(14) 실시간 계산 공급
- `pred_down_prob`: RSI 기반 하락 확률 추정값 공급 (ML 모델 대체 가능)
- `be_arm_pct`/`arm_pct` 독립 파라미터 설정 가능 (take_profit_pct * 0.4 파생 제거)
- ROOT-5 (v2 청산 로직 실효성 부족) 해소
5. **ACT-09: session_id를 거래/의사결정 로그에 명시적 전달** (#326)
- `src/logging/decision_logger.py`, `src/main.py`, `src/db.py`
- `log_decision()`: session_id 파라미터 추가
- `log_trade()`: 런타임 session_id 명시적 전달
- GAP-1, GAP-2 (session_id 미포함) 부분 해소
**이슈/PR:** #322, #323, #324, #325, #326
---
### v2/v3 손실 복구 실행 계획 — Phase 3 부분 완료 (#327~#329)
**배경:**
- 세션 경계 처리 및 시간장벽 캘린더 기반 전환.
**구현 내용:**
1. **ACT-10: 세션 전환 시 리스크 파라미터 동적 재로딩** (#327)
- `src/main.py`, `src/config.py`
- 세션 경계 변경 이벤트 시 `SESSION_RISK_PROFILES_JSON` 기반 재로딩
- 재로딩 실패 시 기존 파라미터 유지 (안전 폴백)
- GAP-3 (세션 전환 시 파라미터 재로딩 없음) 부분 해소
2. **ACT-11: 블랙아웃 복구 시 가격/세션 재검증 강화** (#328)
- `src/main.py`, `src/core/blackout_manager.py`
- 복구 시 현재 시세 조회하여 가격 유효성 검증 (진입가 대비 급등/급락 시 드롭)
- 세션 변경 시 새 세션의 파라미터로 재검증
- GAP-4 잔여 (가격/세션 재검증) 부분 해소
3. **ACT-12: Triple Barrier 시간장벽을 캘린더 시간(분) 기반으로 전환** (#329)
- `src/analysis/triple_barrier.py`
- `max_holding_minutes` (캘린더 분) 기반 전환, 봉 주기 무관 일관 동작
- 기존 `max_holding_bars` deprecated 경고 유지 (하위 호환)
- GAP-5 (시간장벽 봉 개수 고정) 해소
**미완료 (ACT-13):**
- **#330: CI 자동 검증 (정책 레지스트리 + TASK-REQ 매핑)** — 문서 구조화 작업으로 대체 진행 중
**이슈/PR:** #327, #328, #329
---
### v2/v3 문서 구조화 및 감사 문서 작성 (#331)
**배경:**
- Phase 1~3 구현 완료 후 감사 결과와 실행 계획을 문서화
- 기존 감사 문서가 산발적으로 관리되어 통합 정리 필요
**구현 내용:**
- `docs/ouroboros/80_implementation_audit.md` 신규 작성: v2/v3 구현 감사 + 실거래 수익률 분석
- `docs/ouroboros/85_loss_recovery_action_plan.md` 신규 작성: ROOT/GAP 해소 Phase별 실행 계획
- `scripts/audit_queries.sql` 신규 작성: 성과 재현용 표준 집계 SQL
**이슈/PR:** #331


@@ -2,7 +2,7 @@
## Test Structure
**998 tests** across **41 files**. `asyncio_mode = "auto"` in pyproject.toml — async tests need no special decorator.
The `settings` fixture in `conftest.py` provides safe defaults with test credentials and in-memory DB.
@@ -23,6 +23,8 @@ The `settings` fixture in `conftest.py` provides safe defaults with test credent
- Network error handling
- SSL context configuration
> **Note**: 아래 파일별 테스트 수는 릴리즈 시점 스냅샷이며 실제 수치와 다를 수 있습니다. 현재 정확한 수치는 `pytest --collect-only -q`로 확인하세요.
##### `tests/test_brain.py` (24 tests)
- Valid JSON parsing and markdown-wrapped JSON handling
- Malformed JSON fallback
@@ -90,7 +92,7 @@ The `settings` fixture in `conftest.py` provides safe defaults with test credent
- Python-first filtering pipeline
- RSI and volume ratio filter logic
- Candidate scoring and ranking
- Fallback to static watchlist (domestic) or dynamic universe (overseas)
#### Context & Memory
@@ -138,8 +140,8 @@ The `settings` fixture in `conftest.py` provides safe defaults with test credent
#### Dashboard
##### `tests/test_dashboard.py` (14 tests)
- FastAPI endpoint responses (10 API routes)
- Status, playbook, scorecard, performance, context, decisions, scenarios, pnl/history, positions
- Query parameter handling (market, date, limit)
#### Performance & Quality
@@ -181,6 +183,29 @@ pytest -v --cov=src --cov-report=term-missing
**Note:** `main.py` has lower coverage as it contains the main loop which is tested via integration/manual testing.
## Backtest Automation Gate
백테스트 관련 검증은 `scripts/backtest_gate.sh``.github/workflows/backtest-gate.yml`로 자동 실행된다.
- PR: 변경 파일 기준 `auto` 모드
- `feature/**` push: 변경 파일 기준 `auto` 모드
- Daily schedule: `full` 강제 실행
- Manual dispatch: `mode`(`auto|smoke|full`) 지정 가능
실행 기준:
- `src/analysis/`, `src/strategy/`, `src/strategies/`, `src/main.py`, `src/markets/`, `src/broker/`
- 백테스트 핵심 테스트 파일 변경
- `docs/ouroboros/` 변경
`auto` 모드에서 백테스트 민감 영역 변경이 없으면 게이트는 `skip` 처리되며 실패로 간주하지 않는다.
로컬 수동 실행:
```bash
bash scripts/backtest_gate.sh
BACKTEST_MODE=full bash scripts/backtest_gate.sh
BASE_REF=origin/feature/v3-session-policy-stream BACKTEST_MODE=auto bash scripts/backtest_gate.sh
```
## Test Configuration
### `pyproject.toml`


@@ -5,14 +5,134 @@
**CRITICAL: All code changes MUST follow this workflow. Direct pushes to `main` are ABSOLUTELY PROHIBITED.**
1. **Create Gitea Issue First** — All features, bug fixes, and policy changes require a Gitea issue before any code is written
2. **Create Program Feature Branch** — Branch from `main` for the whole development stream
- Format: `feature/{epic-or-stream-name}`
3. **Create Ticket Temp Branch** — Branch from the program feature branch per ticket
- Format: `feature/issue-{N}-{short-description}`
4. **Implement Per Ticket** — Write code, tests, and documentation on the ticket temp branch
5. **Create Pull Request to Program Feature Branch**`feature/issue-N-* -> feature/{stream}`
6. **Review/Verify and Merge into Program Feature Branch** — user approval not required
7. **Final Integration PR to main** — Only after all ticket stages complete and explicit user approval
**Never commit directly to `main`.** This policy applies to all changes, no exceptions.
## Agent Gitea Preflight (Mandatory)
Gitea 이슈/PR/코멘트 작업 전에 모든 에이전트는 아래를 먼저 확인해야 한다.
1. `docs/commands.md``tea CLI` 실패 사례/해결 패턴 확인
2. 본 문서의 `Gitea CLI Formatting Troubleshooting` 확인
3. 명령 실행 전 `gh`(GitHub CLI) 사용 금지 확인
강제 규칙:
- 이 저장소 협업 명령은 `tea`를 기본으로 사용한다.
- `gh issue`, `gh pr` 등 GitHub CLI 명령은 사용 금지다.
- `tea` 실패 시 동일 명령 재시도 전에 원인/수정사항을 PR 코멘트에 남긴다.
- 필요한 경우에만 Gitea API(`localhost:3000`)를 fallback으로 사용한다.
## Session Handover Gate (Mandatory)
새 세션에서 구현/검증을 시작하기 전에 아래를 선행해야 한다.
1. `docs/workflow.md`, `docs/commands.md`, `docs/agent-constraints.md` 재확인
2. `workflow/session-handover.md`에 최신 세션 엔트리 추가
3. `python3 scripts/session_handover_check.py --strict` 통과 확인
강제 규칙:
- handover check 실패 상태에서 코드 수정/이슈 상태 전이/PR 생성 금지
- 최신 handover 엔트리는 현재 작업 브랜치를 명시해야 한다
- 최신 handover 엔트리는 당일(UTC) 날짜를 포함해야 한다
## Branch Strategy (Mandatory)
- Team operation default branch is the **program feature branch**, not `main`.
- Ticket-level development happens only on **ticket temp branches** cut from the program feature branch.
- Ticket PR merges into program feature branch are allowed after verifier approval.
- Until final user sign-off, `main` merge is prohibited.
- 각 에이전트는 주요 의사결정(리뷰 지적, 수정 방향, 검증 승인)마다 PR 코멘트를 적극 작성해 의사결정 과정을 남긴다.
## Backtest Gate Policy (Mandatory)
사람 의존도를 줄이기 위해 백테스트 검증은 자동 게이트를 기본으로 한다.
- 워크플로우: `.github/workflows/backtest-gate.yml`
- 실행 스크립트: `scripts/backtest_gate.sh`
- 기본 모드: `auto` (변경 파일 기반 실행/skip 판정)
- 정기 스케줄: daily `full` 강제 실행
- 수동 재실행: workflow dispatch + `mode` 지정
강제 규칙:
- 백테스트 민감 변경(PR/feature push)에서 게이트 실패 시 머지 금지
- 스케줄 게이트 실패 시 이슈 등록 후 원인/복구 계획 기록
- `python` 대신 `python3` 기준으로 실행한다
## Gitea CLI Formatting Troubleshooting
Issue/PR 본문 작성 시 줄바꿈(`\n`)이 문자열 그대로 저장되는 문제가 반복될 수 있다. 원인은 `-d "...\n..."` 형태에서 쉘/CLI가 이스케이프를 실제 개행으로 해석하지 않기 때문이다.
코멘트도 동일한 문제가 자주 발생하므로, 코멘트는 파일 기반 래퍼를 표준으로 사용한다.
```bash
# 권장: 파일/STDIN 기반 코멘트 등록 (줄바꿈 보존)
cat > /tmp/review.md <<'EOF'
리뷰 반영 완료했습니다.
- 항목 1
- 항목 2
EOF
scripts/tea_comment.sh 374 /tmp/review.md
# 또는
cat /tmp/review.md | scripts/tea_comment.sh 374 -
```
권장 패턴:
```bash
ISSUE_BODY=$(cat <<'EOF'
## Summary
- 변경 내용 1
- 변경 내용 2
## Why
- 배경 1
- 배경 2
## Scope
- 포함 범위
- 제외 범위
EOF
)
tea issues create \
-t "docs: 제목" \
-d "$ISSUE_BODY"
```
PR도 동일하게 적용:
```bash
PR_BODY=$(cat <<'EOF'
## Summary
- ...
## Validation
- python3 scripts/validate_ouroboros_docs.py
EOF
)
tea pr create \
--base main \
--head feature/issue-N-something \
--title "docs: ... (#N)" \
--description "$PR_BODY"
```
금지 패턴:
- `-d "line1\nline2"` (웹 UI에 `\n` 문자 그대로 노출될 수 있음)
- 본문에 백틱/괄호를 인라인으로 넣고 적절한 quoting 없이 즉시 실행
## Agent Workflow
**Modern AI development leverages specialized agents for concurrent, efficient task execution.**
@@ -75,6 +195,62 @@ task_tool(
Use `run_in_background=True` for independent tasks that don't block subsequent work.
### Main -> Verifier Directive Contract (Mandatory)
메인 에이전트가 검증 에이전트에 작업을 위임할 때, 아래 6개를 누락하면 지시가 무효다.
1. 검증 대상 범위: `REQ-*`, `TASK-*`, 코드/로그 경로
2. 검증 방법: 실행 커맨드와 관측 포인트(예: 세션별 로그 키워드)
3. 합격 기준: PASS 조건을 수치/문구로 명시
4. 실패 기준: FAIL 조건을 수치/문구로 명시
5. 미관측 기준: `NOT_OBSERVED` 조건과 즉시 에스컬레이션 규칙
6. 증적 형식: PR 코멘트에 `Coverage Matrix` 표로 제출
`NOT_OBSERVED` 처리 규칙:
- 요구사항 항목이 관측되지 않았으면 PASS로 간주 금지
- `NOT_OBSERVED`는 운영상 `FAIL`과 동일하게 처리
- `NOT_OBSERVED`가 하나라도 있으면 승인/머지 금지
`FORBIDDEN` 처리 규칙:
- 정책 위반 신호(예: 주말 `session=KRX_REG`)는 `FORBIDDEN=HIT`으로 별도 기록한다
- `FORBIDDEN=HIT`은 즉시 `P0 FAIL`로 간주하고 모니터링 승인 불가
- 실시간 모니터는 `alive`만으로 정상 판정하지 않는다(정책 불변식 통과가 필수)
### Process-Change-First Rule (Mandatory)
재발 방지/운영 규칙 변경이 결정되면, 기능 구현 티켓보다 먼저 서버(feature branch)에 반영해야 한다.
- 순서: `process ticket merge` -> `implementation ticket start`
- process ticket 미반영 상태에서 기능 티켓 코딩/머지 금지
- 세션 전환 시에도 동일 규칙 유지
### Implementation Start Gate (Mandatory)
구현 티켓을 시작하기 전에 아래 3개를 모두 만족해야 한다.
1. `process ticket merge` 증적 확인 (feature branch 반영 커밋/PR)
2. `workflow/session-handover.md` 최신 엔트리에 `next_ticket``process_gate_checked` 기록
3. `python3 scripts/session_handover_check.py --strict` 통과
강제 규칙:
- 위 3개 중 하나라도 불충족이면 코드/테스트 수정 금지
- 서브에이전트 지시도 동일하게 제한한다 (`process merged 확인 전 read-only 탐색만 허용`)
- 성급 착수 발견 시 구현 작업을 즉시 중단하고 handover/process gate부터 복구한다
### Ticket Maturity Stages (Mandatory)
모든 티켓은 아래 4단계를 순서대로 통과해야 한다.
1. `Implemented`: 코드/문서 변경 완료
2. `Integrated`: 호출 경로/파이프라인 연결 완료
3. `Observed`: 런타임/실행 증적 확보 완료
4. `Accepted`: 정적 Verifier + Runtime Verifier 승인 완료
강제 규칙:
- 단계 점프 금지 (예: Implemented -> Accepted 금지)
- `Observed` 전에는 완료 선언 금지
- `Accepted` 전에는 머지 금지
## Code Review Checklist
**CRITICAL: Every PR review MUST verify plan-implementation consistency.**
@@ -108,3 +284,10 @@ Before approving any PR, the reviewer (human or agent) must check ALL of the fol
- [ ] PR references the Gitea issue number
- [ ] Feature branch follows naming convention (`feature/issue-N-description`)
- [ ] Commit messages are clear and descriptive
- [ ] 이슈/PR 작업 전에 `docs/commands.md`와 본 문서 트러블슈팅 섹션을 확인했다
- [ ] `gh` 명령을 사용하지 않고 `tea`(또는 허용된 Gitea API fallback)만 사용했다
- [ ] Main -> Verifier 지시가 Directive Contract 6개 항목을 모두 포함한다
- [ ] Verifier 결과에 `Coverage Matrix`(PASS/FAIL/NOT_OBSERVED)가 있고, `NOT_OBSERVED=0`이다
- [ ] Process-change-first 대상이면 해당 process PR이 먼저 머지되었다
- [ ] 티켓 단계가 `Implemented -> Integrated -> Observed -> Accepted` 순서로 기록되었다
- [ ] 정적 Verifier와 Runtime Verifier 승인 코멘트가 모두 존재한다

scripts/audit_queries.sql Normal file

@@ -0,0 +1,184 @@
-- audit_queries.sql
-- 용도: 80_implementation_audit.md 성과표 재현을 위한 표준 집계 SQL
-- 대상 DB: trading.db (SQLite)
-- 기간: 2026-02-25 ~ 2026-02-28 (UTC)
-- 참조: docs/ouroboros/80_implementation_audit.md Section 3
------------------------------------------------------------------------
-- Base: 기간 + LIVE + SELL + 직전 BUY 메타 매칭
------------------------------------------------------------------------
-- 모든 후속 쿼리의 기반이 되는 CTE.
-- prev_buy_rationale: 직전 BUY의 rationale (startup-sync 분류용)
-- prev_buy_qty: 직전 BUY 수량 (수량 일치 무결성 필터용)
------------------------------------------------------------------------
WITH base AS (
SELECT *
FROM trades
WHERE mode='live'
AND action='SELL'
AND timestamp >= '2026-02-25T00:00:00+00:00'
AND timestamp < '2026-02-28T00:00:00+00:00'
),
labeled AS (
SELECT
s.id,
s.timestamp,
s.stock_code,
s.market,
s.exchange_code,
s.quantity AS sell_qty,
s.price AS sell_price,
s.pnl,
COALESCE((
SELECT b.rationale
FROM trades b
WHERE b.mode='live'
AND b.action='BUY'
AND b.stock_code=s.stock_code
AND b.market=s.market
AND b.timestamp < s.timestamp
ORDER BY b.timestamp DESC, b.id DESC
LIMIT 1
), '') AS prev_buy_rationale,
(
SELECT b.quantity
FROM trades b
WHERE b.mode='live'
AND b.action='BUY'
AND b.stock_code=s.stock_code
AND b.market=s.market
AND b.timestamp < s.timestamp
ORDER BY b.timestamp DESC, b.id DESC
LIMIT 1
) AS prev_buy_qty
FROM base s
)
SELECT * FROM labeled;
------------------------------------------------------------------------
-- Q1) 통화 분리 손익 (KRW/USD 혼합 금지)
------------------------------------------------------------------------
WITH base AS (
SELECT * FROM trades
WHERE mode='live' AND action='SELL'
AND timestamp >= '2026-02-25T00:00:00+00:00'
AND timestamp < '2026-02-28T00:00:00+00:00'
),
labeled AS (
SELECT s.*,
s.quantity AS sell_qty,
COALESCE((SELECT b.rationale FROM trades b
WHERE b.mode='live' AND b.action='BUY'
AND b.stock_code=s.stock_code AND b.market=s.market
AND b.timestamp < s.timestamp
ORDER BY b.timestamp DESC, b.id DESC LIMIT 1), '') AS prev_buy_rationale,
(SELECT b.quantity FROM trades b
WHERE b.mode='live' AND b.action='BUY'
AND b.stock_code=s.stock_code AND b.market=s.market
AND b.timestamp < s.timestamp
ORDER BY b.timestamp DESC, b.id DESC LIMIT 1) AS prev_buy_qty
FROM base s
)
SELECT
CASE WHEN market='KR' THEN 'KRW' ELSE 'USD' END AS ccy,
COUNT(*) AS sells,
ROUND(SUM(pnl),2) AS pnl_sum
FROM labeled
GROUP BY ccy
ORDER BY ccy;
------------------------------------------------------------------------
-- Q2) 기존 보유(startup-sync) 제외 성과
------------------------------------------------------------------------
WITH base AS (
SELECT * FROM trades
WHERE mode='live' AND action='SELL'
AND timestamp >= '2026-02-25T00:00:00+00:00'
AND timestamp < '2026-02-28T00:00:00+00:00'
),
labeled AS (
SELECT s.*,
s.quantity AS sell_qty,
COALESCE((SELECT b.rationale FROM trades b
WHERE b.mode='live' AND b.action='BUY'
AND b.stock_code=s.stock_code AND b.market=s.market
AND b.timestamp < s.timestamp
ORDER BY b.timestamp DESC, b.id DESC LIMIT 1), '') AS prev_buy_rationale,
(SELECT b.quantity FROM trades b
WHERE b.mode='live' AND b.action='BUY'
AND b.stock_code=s.stock_code AND b.market=s.market
AND b.timestamp < s.timestamp
ORDER BY b.timestamp DESC, b.id DESC LIMIT 1) AS prev_buy_qty
FROM base s
)
SELECT
CASE WHEN market='KR' THEN 'KRW' ELSE 'USD' END AS ccy,
COUNT(*) AS sells,
ROUND(SUM(pnl),2) AS pnl_sum
FROM labeled
WHERE prev_buy_rationale NOT LIKE '[startup-sync]%'
GROUP BY ccy
ORDER BY ccy;
------------------------------------------------------------------------
-- Q3) 수량 일치 체결만 포함 (무결성 필터)
------------------------------------------------------------------------
WITH base AS (
SELECT * FROM trades
WHERE mode='live' AND action='SELL'
AND timestamp >= '2026-02-25T00:00:00+00:00'
AND timestamp < '2026-02-28T00:00:00+00:00'
),
labeled AS (
SELECT s.*,
s.quantity AS sell_qty,
COALESCE((SELECT b.rationale FROM trades b
WHERE b.mode='live' AND b.action='BUY'
AND b.stock_code=s.stock_code AND b.market=s.market
AND b.timestamp < s.timestamp
ORDER BY b.timestamp DESC, b.id DESC LIMIT 1), '') AS prev_buy_rationale,
(SELECT b.quantity FROM trades b
WHERE b.mode='live' AND b.action='BUY'
AND b.stock_code=s.stock_code AND b.market=s.market
AND b.timestamp < s.timestamp
ORDER BY b.timestamp DESC, b.id DESC LIMIT 1) AS prev_buy_qty
FROM base s
)
SELECT
CASE WHEN market='KR' THEN 'KRW' ELSE 'USD' END AS ccy,
COUNT(*) AS sells,
ROUND(SUM(pnl),2) AS pnl_sum
FROM labeled
WHERE prev_buy_qty = sell_qty
GROUP BY ccy
ORDER BY ccy;
------------------------------------------------------------------------
-- Q4) Anomaly list (quantity mismatch)
------------------------------------------------------------------------
WITH base AS (
SELECT * FROM trades
WHERE mode='live' AND action='SELL'
AND timestamp >= '2026-02-25T00:00:00+00:00'
AND timestamp < '2026-02-28T00:00:00+00:00'
),
labeled AS (
SELECT s.id, s.timestamp, s.stock_code, s.market, s.quantity AS sell_qty, s.pnl,
(SELECT b.quantity FROM trades b
WHERE b.mode='live' AND b.action='BUY'
AND b.stock_code=s.stock_code AND b.market=s.market
AND b.timestamp < s.timestamp
ORDER BY b.timestamp DESC, b.id DESC LIMIT 1) AS prev_buy_qty
FROM base s
)
SELECT
id, timestamp, stock_code, market, sell_qty, prev_buy_qty, ROUND(pnl,2) AS pnl
FROM labeled
WHERE prev_buy_qty IS NOT NULL
AND prev_buy_qty != sell_qty
ORDER BY ABS(pnl) DESC;

scripts/backtest_gate.sh Executable file

@@ -0,0 +1,106 @@
#!/usr/bin/env bash
# Backtest gate for PR/push/scheduled verification.
set -euo pipefail
MODE="${BACKTEST_MODE:-auto}" # auto | smoke | full
BASE_REF="${BASE_REF:-origin/main}" # used when MODE=auto
FORCE_FULL="${FORCE_FULL_BACKTEST:-false}"
LOG_DIR="${LOG_DIR:-data/backtest-gate}"
mkdir -p "$LOG_DIR"
STAMP="$(date -u +%Y%m%d_%H%M%S)"
LOG_FILE="$LOG_DIR/backtest_gate_${STAMP}.log"
log() {
printf '%s %s\n' "$(date -u +%Y-%m-%dT%H:%M:%SZ)" "$1" | tee -a "$LOG_FILE"
}
run_cmd() {
log "[RUN] $*"
"$@" 2>&1 | tee -a "$LOG_FILE"
}
resolve_mode_from_changes() {
# NOTE: this function's stdout is captured via command substitution by the
# caller, and log() tees to stdout; log output must therefore go to stderr
# here or it would be mixed into the returned mode string.
if [ "$FORCE_FULL" = "true" ]; then
echo "full"
return
fi
if ! git rev-parse --verify "$BASE_REF" >/dev/null 2>&1; then
log "[WARN] BASE_REF not found: $BASE_REF; fallback to full" >&2
echo "full"
return
fi
changed_files="$(git diff --name-only "$BASE_REF"...HEAD || true)"
if [ -z "$changed_files" ]; then
log "[INFO] no changed files between $BASE_REF...HEAD; skip backtest gate" >&2
echo "skip"
return
fi
log "[INFO] changed files from $BASE_REF...HEAD:" >&2
while IFS= read -r line; do
[ -n "$line" ] && log " - $line" >&2
done <<< "$changed_files"
# Backtest-sensitive areas: analysis/strategy/runtime execution semantics.
if printf '%s\n' "$changed_files" | rg -q \
'^(src/analysis/|src/strategy/|src/strategies/|src/main.py|src/markets/|src/broker/|tests/test_backtest_|tests/test_triple_barrier.py|tests/test_walk_forward_split.py|tests/test_main.py|docs/ouroboros/)'
then
echo "full"
else
echo "skip"
fi
}
SMOKE_TESTS=(
tests/test_backtest_pipeline_integration.py
tests/test_triple_barrier.py
tests/test_walk_forward_split.py
tests/test_backtest_cost_guard.py
tests/test_backtest_execution_model.py
)
FULL_TESTS=(
tests/test_backtest_pipeline_integration.py
tests/test_triple_barrier.py
tests/test_walk_forward_split.py
tests/test_backtest_cost_guard.py
tests/test_backtest_execution_model.py
tests/test_main.py
)
main() {
log "[INFO] backtest gate started mode=$MODE base_ref=$BASE_REF force_full=$FORCE_FULL"
selected_mode="$MODE"
if [ "$MODE" = "auto" ]; then
selected_mode="$(resolve_mode_from_changes)"
fi
case "$selected_mode" in
skip)
log "[PASS] backtest gate skipped (no backtest-sensitive changes)"
exit 0
;;
smoke)
run_cmd python3 -m pytest -q "${SMOKE_TESTS[@]}"
log "[PASS] smoke backtest gate passed"
;;
full)
run_cmd python3 -m pytest -q "${FULL_TESTS[@]}"
# Runtime semantics tied to v2 staged-exit must remain covered in full gate.
run_cmd python3 -m pytest -q tests/test_main.py -k \
"staged_exit_override or runtime_exit_cache_cleared or run_daily_session_applies_staged_exit_override_on_hold"
log "[PASS] full backtest gate passed"
;;
*)
log "[FAIL] invalid BACKTEST_MODE=$selected_mode (expected auto|smoke|full)"
exit 2
;;
esac
}
main "$@"
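The auto-mode decision reduces to a path-prefix match over the changed-file list. A minimal sketch of that decision, using `grep -E` as a portable stand-in for `rg` and a hypothetical file list (the real input comes from `git diff --name-only`):

```shell
# Hypothetical changed-file list; only a subset of the gate's prefixes shown.
changed_files='src/analysis/model.py
docs/README.md'
# A backtest-sensitive prefix triggers the full gate; anything else skips.
if printf '%s\n' "$changed_files" | grep -Eq '^(src/analysis/|src/strategy/)'; then
  echo "full"
else
  echo "skip"
fi
```

Because `src/analysis/model.py` matches a sensitive prefix, this prints `full`.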

scripts/runtime_verify_monitor.sh Executable file

@@ -0,0 +1,114 @@
#!/usr/bin/env bash
# Runtime verification monitor with coverage + forbidden invariant checks.
set -euo pipefail
ROOT_DIR="${ROOT_DIR:-/home/agentson/repos/The-Ouroboros}"
LOG_DIR="${LOG_DIR:-$ROOT_DIR/data/overnight}"
INTERVAL_SEC="${INTERVAL_SEC:-60}"
MAX_HOURS="${MAX_HOURS:-24}"
POLICY_TZ="${POLICY_TZ:-Asia/Seoul}"
cd "$ROOT_DIR"
OUT_LOG="$LOG_DIR/runtime_verify_$(date +%Y%m%d_%H%M%S).log"
END_TS=$(( $(date +%s) + MAX_HOURS*3600 ))
log() {
printf '%s %s\n' "$(date -u +%Y-%m-%dT%H:%M:%SZ)" "$1" | tee -a "$OUT_LOG" >/dev/null
}
check_signal() {
local name="$1"
local pattern="$2"
local run_log="$3"
if rg -q "$pattern" "$run_log"; then
log "[COVERAGE] ${name}=PASS pattern=${pattern}"
return 0
fi
log "[COVERAGE] ${name}=NOT_OBSERVED pattern=${pattern}"
return 1
}
check_forbidden() {
local name="$1"
local pattern="$2"
local run_log="$3"
if rg -q "$pattern" "$run_log"; then
log "[FORBIDDEN] ${name}=HIT pattern=${pattern}"
return 1
fi
log "[FORBIDDEN] ${name}=CLEAR pattern=${pattern}"
return 0
}
log "[INFO] runtime verify monitor started interval=${INTERVAL_SEC}s max_hours=${MAX_HOURS} policy_tz=${POLICY_TZ}"
while true; do
now=$(date +%s)
if [ "$now" -ge "$END_TS" ]; then
log "[INFO] monitor completed (time window reached)"
exit 0
fi
latest_run="$(ls -t "$LOG_DIR"/run_*.log 2>/dev/null | head -n1 || true)"
if [ -z "$latest_run" ]; then
log "[ANOMALY] no run log found"
sleep "$INTERVAL_SEC"
continue
fi
# Basic liveness hints.
app_pid="$(cat "$LOG_DIR/app.pid" 2>/dev/null || true)"
wd_pid="$(cat "$LOG_DIR/watchdog.pid" 2>/dev/null || true)"
app_alive=0
wd_alive=0
port_alive=0
[ -n "$app_pid" ] && kill -0 "$app_pid" 2>/dev/null && app_alive=1
[ -n "$wd_pid" ] && kill -0 "$wd_pid" 2>/dev/null && wd_alive=1
ss -ltnp 2>/dev/null | rg -q ':8080' && port_alive=1
log "[HEARTBEAT] run_log=$latest_run app_alive=$app_alive watchdog_alive=$wd_alive port8080=$port_alive"
# Coverage matrix rows (session paths and policy gate evidence).
not_observed=0
check_signal "LIVE_MODE" "Mode: live" "$latest_run" || not_observed=$((not_observed+1))
check_signal "KR_LOOP" "Processing market: Korea Exchange" "$latest_run" || not_observed=$((not_observed+1))
check_signal "NXT_PATH" "NXT_PRE|NXT_AFTER|session=NXT_" "$latest_run" || not_observed=$((not_observed+1))
check_signal "US_PRE_PATH" "US_PRE|session=US_PRE" "$latest_run" || not_observed=$((not_observed+1))
check_signal "US_DAY_PATH" "US_DAY|session=US_DAY|Processing market: .*NASDAQ|Processing market: .*NYSE|Processing market: .*AMEX" "$latest_run" || not_observed=$((not_observed+1))
check_signal "US_AFTER_PATH" "US_AFTER|session=US_AFTER" "$latest_run" || not_observed=$((not_observed+1))
check_signal "ORDER_POLICY_SESSION" "Order policy rejected .*\\[session=" "$latest_run" || not_observed=$((not_observed+1))
if [ "$not_observed" -gt 0 ]; then
log "[ANOMALY] coverage_not_observed=$not_observed (treat as FAIL)"
else
log "[OK] coverage complete (NOT_OBSERVED=0)"
fi
# Forbidden invariants: must never happen under given policy context.
forbidden_hits=0
policy_dow="$(TZ="$POLICY_TZ" date +%u)" # 1..7 (Mon..Sun)
is_weekend=0
if [ "$policy_dow" -ge 6 ]; then
is_weekend=1
fi
if [ "$is_weekend" -eq 1 ]; then
# Weekend policy: KR regular session loop must never appear.
check_forbidden "WEEKEND_KR_SESSION_ACTIVE" \
"Market session active: KR|session=KRX_REG|Processing market: Korea Exchange" \
"$latest_run" || forbidden_hits=$((forbidden_hits+1))
else
log "[FORBIDDEN] WEEKEND_KR_SESSION_ACTIVE=SKIP reason=weekday"
fi
if [ "$forbidden_hits" -gt 0 ]; then
log "[P0] forbidden_invariant_hits=$forbidden_hits (treat as immediate FAIL)"
else
log "[OK] forbidden invariants clear"
fi
sleep "$INTERVAL_SEC"
done
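The coverage matrix above is effectively a list of (name, pattern) pairs counted against the run log, with NOT_OBSERVED as the number of patterns that never appear. A minimal Python sketch of the same accounting (the rows and log lines are an illustrative subset of the ones in the monitor):

```python
import re

# Subset of the monitor's coverage rows: name -> regex pattern.
coverage_rows = {
    "LIVE_MODE": r"Mode: live",
    "KR_LOOP": r"Processing market: Korea Exchange",
    "US_DAY_PATH": r"US_DAY|session=US_DAY",
}
run_log = "2026-03-02 Mode: live\n2026-03-02 session=US_DAY open\n"

# NOT_OBSERVED counts rows whose pattern never matches anywhere in the log.
not_observed = sum(
    1 for pattern in coverage_rows.values() if not re.search(pattern, run_log)
)
print(not_observed)  # only KR_LOOP is missing from this sample log
```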

scripts/session_handover_check.py Executable file

@@ -0,0 +1,162 @@
#!/usr/bin/env python3
"""Session handover preflight gate.
This script enforces a minimal handover record per working branch so that
new sessions cannot start implementation without reading the required docs
and recording current intent.
"""
from __future__ import annotations
import argparse
import subprocess
import sys
from datetime import UTC, datetime
from pathlib import Path
REQUIRED_DOCS = (
Path("docs/workflow.md"),
Path("docs/commands.md"),
Path("docs/agent-constraints.md"),
)
HANDOVER_LOG = Path("workflow/session-handover.md")
def _run_git(*args: str) -> str:
try:
return (
subprocess.check_output(["git", *args], stderr=subprocess.DEVNULL)
.decode("utf-8")
.strip()
)
except Exception:
return ""
def _current_branch() -> str:
branch = _run_git("branch", "--show-current")
if branch:
return branch
return _run_git("rev-parse", "--abbrev-ref", "HEAD")
def _latest_entry(text: str) -> str:
chunks = text.split("\n### ")
if not chunks:
return ""
if chunks[0].startswith("### "):
chunks[0] = chunks[0][4:]
latest = chunks[-1].strip()
if not latest:
return ""
if not latest.startswith("### "):
latest = f"### {latest}"
return latest
def _check_required_files(errors: list[str]) -> None:
for path in REQUIRED_DOCS:
if not path.exists():
errors.append(f"missing required document: {path}")
if not HANDOVER_LOG.exists():
errors.append(f"missing handover log: {HANDOVER_LOG}")
def _check_handover_entry(
*,
branch: str,
strict: bool,
ci_mode: bool,
errors: list[str],
) -> None:
if not HANDOVER_LOG.exists():
return
text = HANDOVER_LOG.read_text(encoding="utf-8")
latest = _latest_entry(text)
if not latest:
errors.append("handover log has no session entry")
return
required_tokens = (
"- branch:",
"- docs_checked:",
"- open_issues_reviewed:",
"- next_ticket:",
"- process_gate_checked:",
)
for token in required_tokens:
if token not in latest:
errors.append(f"latest handover entry missing token: {token}")
if strict:
if "- next_ticket: #TBD" in latest:
errors.append("latest handover entry must not use placeholder next_ticket (#TBD)")
if strict and not ci_mode:
today_utc = datetime.now(UTC).date().isoformat()
if today_utc not in latest:
errors.append(
f"latest handover entry must contain today's UTC date ({today_utc})"
)
branch_token = f"- branch: {branch}"
if branch_token not in latest:
errors.append(
"latest handover entry must target current branch "
f"({branch_token})"
)
if "merged_to_feature_branch=no" in latest:
errors.append(
"process gate indicates not merged; implementation must stay blocked "
"(merged_to_feature_branch=no)"
)
def main() -> int:
parser = argparse.ArgumentParser(
description="Validate session handover gate requirements."
)
parser.add_argument(
"--strict",
action="store_true",
help="Enforce today-date and current-branch match on latest handover entry.",
)
parser.add_argument(
"--ci",
action="store_true",
help=(
"CI mode: keep structural/token checks and placeholder guard, "
"but skip strict today-date/current-branch/merge-gate checks."
),
)
args = parser.parse_args()
errors: list[str] = []
_check_required_files(errors)
branch = _current_branch()
if not branch:
errors.append("cannot resolve current git branch")
elif not args.ci and branch in {"main", "master"}:
errors.append(f"working branch must not be {branch}")
_check_handover_entry(
branch=branch,
strict=args.strict,
ci_mode=args.ci,
errors=errors,
)
if errors:
print("[FAIL] session handover check failed")
for err in errors:
print(f"- {err}")
return 1
print("[OK] session handover check passed")
print(f"[OK] branch={branch}")
print(f"[OK] handover_log={HANDOVER_LOG}")
return 0
if __name__ == "__main__":
sys.exit(main())
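Concretely, a latest entry that satisfies the token checks above might look like this (branch name, date, and ticket number are all hypothetical):

```markdown
### 2026-03-02 session start
- branch: feature/issue-NNN-example-topic
- docs_checked: docs/workflow.md, docs/commands.md, docs/agent-constraints.md
- open_issues_reviewed: yes
- next_ticket: #377
- process_gate_checked: merged_to_feature_branch=yes
```

Under `--strict` (outside CI mode) the entry must additionally contain today's UTC date and the current branch name, and must not use the `#TBD` placeholder for `next_ticket`.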

scripts/tea_comment.sh Executable file

@@ -0,0 +1,49 @@
#!/usr/bin/env bash
# Safe helper for posting multiline Gitea comments without escaped-newline artifacts.
set -euo pipefail
if [ "${1:-}" = "-h" ] || [ "${1:-}" = "--help" ] || [ "$#" -lt 2 ]; then
cat <<'EOF'
Usage:
scripts/tea_comment.sh <issue_or_pr_index> <body_file|-> [repo]
Examples:
scripts/tea_comment.sh 374 /tmp/comment.md
cat /tmp/comment.md | scripts/tea_comment.sh 374 - jihoson/The-Ouroboros
Notes:
- Use file/stdin input to preserve real newlines.
- Passing inline strings with "\n" is intentionally avoided by this helper.
EOF
exit 1
fi
INDEX="$1"
BODY_SOURCE="$2"
REPO="${3:-jihoson/The-Ouroboros}"
if [ "$BODY_SOURCE" = "-" ]; then
BODY="$(cat)"
else
if [ ! -f "$BODY_SOURCE" ]; then
echo "[FAIL] body file not found: $BODY_SOURCE" >&2
exit 1
fi
BODY="$(cat "$BODY_SOURCE")"
fi
if [ -z "$BODY" ]; then
echo "[FAIL] empty comment body" >&2
exit 1
fi
# Guard against the common escaped-newline mistake.
if [[ "$BODY" == *"\\n"* ]] && [[ "$BODY" != *$'\n'* ]]; then
echo "[FAIL] body appears to contain escaped newlines (\\n) instead of real line breaks" >&2
echo "Use a multiline file/heredoc and pass that file to scripts/tea_comment.sh" >&2
exit 1
fi
YES="" ~/bin/tea comment "$INDEX" --repo "$REPO" "$BODY"


@@ -0,0 +1,135 @@
#!/usr/bin/env python3
"""Validate top-level docs synchronization invariants."""
from __future__ import annotations
import re
import sys
from pathlib import Path
REPO_ROOT = Path(".")
REQUIRED_FILES = {
"README.md": REPO_ROOT / "README.md",
"CLAUDE.md": REPO_ROOT / "CLAUDE.md",
"commands": REPO_ROOT / "docs" / "commands.md",
"testing": REPO_ROOT / "docs" / "testing.md",
"workflow": REPO_ROOT / "docs" / "workflow.md",
}
LINK_PATTERN = re.compile(r"\[[^\]]+\]\((?P<link>[^)]+)\)")
ENDPOINT_ROW_PATTERN = re.compile(
r"^\|\s*`(?P<endpoint>(?:GET|POST|PUT|PATCH|DELETE)\s+/[^`]*)`\s*\|"
)
def _read(path: Path) -> str:
return path.read_text(encoding="utf-8")
def validate_required_files_exist(errors: list[str]) -> None:
for name, path in REQUIRED_FILES.items():
if not path.exists():
errors.append(f"missing required doc file ({name}): {path}")
def validate_links_resolve(doc_path: Path, text: str, errors: list[str]) -> None:
for match in LINK_PATTERN.finditer(text):
raw_link = match.group("link").strip()
if not raw_link or raw_link.startswith("#") or raw_link.startswith("http"):
continue
link_path = raw_link.split("#", 1)[0].strip()
if not link_path:
continue
if link_path.startswith("/"):
errors.append(f"{doc_path}: absolute link is forbidden -> {raw_link}")
continue
target = (doc_path.parent / link_path).resolve()
if not target.exists():
errors.append(f"{doc_path}: broken link -> {raw_link}")
def validate_summary_docs_reference_core_docs(errors: list[str]) -> None:
required_links = {
"README.md": ("docs/workflow.md", "docs/commands.md", "docs/testing.md"),
"CLAUDE.md": ("docs/workflow.md", "docs/commands.md"),
}
for file_name, links in required_links.items():
doc_path = REQUIRED_FILES[file_name]
text = _read(doc_path)
for link in links:
if link not in text:
errors.append(f"{doc_path}: missing core doc link reference -> {link}")
def collect_command_endpoints(text: str) -> list[str]:
endpoints: list[str] = []
for line in text.splitlines():
match = ENDPOINT_ROW_PATTERN.match(line.strip())
if match:
endpoints.append(match.group("endpoint"))
return endpoints
def validate_commands_endpoint_duplicates(errors: list[str]) -> None:
text = _read(REQUIRED_FILES["commands"])
endpoints = collect_command_endpoints(text)
seen: set[str] = set()
duplicates: set[str] = set()
for endpoint in endpoints:
if endpoint in seen:
duplicates.add(endpoint)
seen.add(endpoint)
for endpoint in sorted(duplicates):
errors.append(f"docs/commands.md: duplicated API endpoint row -> {endpoint}")
def validate_testing_doc_has_dynamic_count_guidance(errors: list[str]) -> None:
text = _read(REQUIRED_FILES["testing"])
if "pytest --collect-only -q" not in text:
errors.append(
"docs/testing.md: missing dynamic test count guidance "
"(pytest --collect-only -q)"
)
def main() -> int:
errors: list[str] = []
validate_required_files_exist(errors)
if errors:
print("[FAIL] docs sync validation failed")
for err in errors:
print(f"- {err}")
return 1
readme_text = _read(REQUIRED_FILES["README.md"])
claude_text = _read(REQUIRED_FILES["CLAUDE.md"])
validate_links_resolve(REQUIRED_FILES["README.md"], readme_text, errors)
validate_links_resolve(REQUIRED_FILES["CLAUDE.md"], claude_text, errors)
validate_links_resolve(
REQUIRED_FILES["commands"], _read(REQUIRED_FILES["commands"]), errors
)
validate_links_resolve(REQUIRED_FILES["testing"], _read(REQUIRED_FILES["testing"]), errors)
validate_links_resolve(
REQUIRED_FILES["workflow"], _read(REQUIRED_FILES["workflow"]), errors
)
validate_summary_docs_reference_core_docs(errors)
validate_commands_endpoint_duplicates(errors)
validate_testing_doc_has_dynamic_count_guidance(errors)
if errors:
print("[FAIL] docs sync validation failed")
for err in errors:
print(f"- {err}")
return 1
print("[OK] docs sync validated")
print("[OK] summary docs link to core docs and links resolve")
print("[OK] commands endpoint rows have no duplicates")
print("[OK] testing doc includes dynamic count guidance")
return 0
if __name__ == "__main__":
sys.exit(main())
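The endpoint rows this validator deduplicates are parsed out of markdown table lines with `ENDPOINT_ROW_PATTERN`; the extraction can be sanity-checked standalone (the table rows below are hypothetical, not taken from docs/commands.md):

```python
import re

# Same row pattern as the validator: a backticked "METHOD /path" table cell.
ENDPOINT_ROW_PATTERN = re.compile(
    r"^\|\s*`(?P<endpoint>(?:GET|POST|PUT|PATCH|DELETE)\s+/[^`]*)`\s*\|"
)
lines = [
    "| `GET /health` | liveness probe |",
    "| `POST /orders` | submit order |",
    "| plain text row |",
]
endpoints = [
    m.group("endpoint")
    for line in lines
    if (m := ENDPOINT_ROW_PATTERN.match(line.strip()))
]
print(endpoints)  # the non-endpoint row is ignored
```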


@@ -0,0 +1,305 @@
#!/usr/bin/env python3
"""Validate persistent governance assets for agent workflow safety."""
from __future__ import annotations
import os
import re
import subprocess
import sys
from pathlib import Path
REQUIREMENTS_REGISTRY = "docs/ouroboros/01_requirements_registry.md"
TASK_WORK_ORDERS_DOC = "docs/ouroboros/30_code_level_work_orders.md"
TASK_DEF_LINE = re.compile(r"^-\s+`(?P<task_id>TASK-[A-Z0-9-]+-\d{3})`(?P<body>.*)$")
REQ_ID_IN_LINE = re.compile(r"\bREQ-[A-Z0-9-]+-\d{3}\b")
TASK_ID_IN_TEXT = re.compile(r"\bTASK-[A-Z0-9-]+-\d{3}\b")
TEST_ID_IN_TEXT = re.compile(r"\bTEST-[A-Z0-9-]+-\d{3}\b")
READ_ONLY_FILES = {"src/core/risk_manager.py"}
PLACEHOLDER_VALUES = {"", "tbd", "n/a", "na", "none", "<link>", "<required>"}
TIMEZONE_TOKEN_PATTERN = re.compile(r"\b(?:KST|UTC)\b")
def must_contain(path: Path, required: list[str], errors: list[str]) -> None:
if not path.exists():
errors.append(f"missing file: {path}")
return
text = path.read_text(encoding="utf-8")
for token in required:
if token not in text:
errors.append(f"{path}: missing required token -> {token}")
def normalize_changed_path(path: str) -> str:
normalized = path.strip().replace("\\", "/")
if normalized.startswith("./"):
normalized = normalized[2:]
return normalized
def is_policy_file(path: str) -> bool:
normalized = normalize_changed_path(path)
if not normalized.endswith(".md"):
return False
if not normalized.startswith("docs/ouroboros/"):
return False
return normalized != REQUIREMENTS_REGISTRY
def load_changed_files(args: list[str], errors: list[str]) -> list[str]:
if not args:
return []
# Single range input (e.g. BASE..HEAD or BASE...HEAD)
if len(args) == 1 and ".." in args[0]:
range_spec = args[0]
try:
completed = subprocess.run(
["git", "diff", "--name-only", range_spec],
check=True,
capture_output=True,
text=True,
)
except (subprocess.CalledProcessError, FileNotFoundError) as exc:
errors.append(f"failed to load changed files from range '{range_spec}': {exc}")
return []
return [
normalize_changed_path(line)
for line in completed.stdout.splitlines()
if line.strip()
]
return [normalize_changed_path(path) for path in args if path.strip()]
def validate_registry_sync(changed_files: list[str], errors: list[str]) -> None:
if not changed_files:
return
changed_set = set(changed_files)
policy_changed = any(is_policy_file(path) for path in changed_set)
registry_changed = REQUIREMENTS_REGISTRY in changed_set
if policy_changed and not registry_changed:
errors.append(
"policy file changed without updating docs/ouroboros/01_requirements_registry.md"
)
def validate_task_req_mapping(errors: list[str], *, task_doc: Path | None = None) -> None:
path = task_doc or Path(TASK_WORK_ORDERS_DOC)
if not path.exists():
errors.append(f"missing file: {path}")
return
text = path.read_text(encoding="utf-8")
found_task = False
for line in text.splitlines():
m = TASK_DEF_LINE.match(line.strip())
if not m:
continue
found_task = True
if not REQ_ID_IN_LINE.search(m.group("body")):
errors.append(
f"{path}: TASK without REQ mapping -> {m.group('task_id')}"
)
if not found_task:
errors.append(f"{path}: no TASK definitions found")
def validate_task_test_pairing(errors: list[str], *, task_doc: Path | None = None) -> None:
"""Fail when TASK definitions are not linked to at least one TEST id."""
path = task_doc or Path(TASK_WORK_ORDERS_DOC)
if not path.exists():
errors.append(f"missing file: {path}")
return
text = path.read_text(encoding="utf-8")
found_task = False
for line in text.splitlines():
m = TASK_DEF_LINE.match(line.strip())
if not m:
continue
found_task = True
if not TEST_ID_IN_TEXT.search(m.group("body")):
errors.append(f"{path}: TASK without TEST mapping -> {m.group('task_id')}")
if not found_task:
errors.append(f"{path}: no TASK definitions found")
def validate_timezone_policy_tokens(errors: list[str]) -> None:
"""Fail-fast check for REQ-OPS-001 governance tokens."""
required_docs = [
Path("docs/ouroboros/01_requirements_registry.md"),
Path("docs/ouroboros/30_code_level_work_orders.md"),
Path("docs/workflow.md"),
]
for path in required_docs:
if not path.exists():
errors.append(f"missing file: {path}")
continue
text = path.read_text(encoding="utf-8")
if not TIMEZONE_TOKEN_PATTERN.search(text):
errors.append(f"{path}: missing timezone policy token (KST/UTC)")
def validate_pr_traceability(errors: list[str]) -> None:
title = os.getenv("GOVERNANCE_PR_TITLE", "").strip()
body = os.getenv("GOVERNANCE_PR_BODY", "").strip()
if not title and not body:
return
text = f"{title}\n{body}"
if not REQ_ID_IN_LINE.search(text):
errors.append("PR text missing REQ-ID reference")
if not TASK_ID_IN_TEXT.search(text):
errors.append("PR text missing TASK-ID reference")
if not TEST_ID_IN_TEXT.search(text):
errors.append("PR text missing TEST-ID reference")
def _parse_pr_evidence_line(text: str, field: str) -> str | None:
pattern = re.compile(rf"^\s*-\s*{re.escape(field)}:\s*(?P<value>.+?)\s*$", re.MULTILINE)
match = pattern.search(text)
if not match:
return None
return match.group("value").strip()
def _is_placeholder(value: str | None) -> bool:
if value is None:
return True
normalized = value.strip().lower()
return normalized in PLACEHOLDER_VALUES
def validate_read_only_approval(
changed_files: list[str], errors: list[str], warnings: list[str]
) -> None:
changed_set = set(changed_files)
touched = sorted(path for path in READ_ONLY_FILES if path in changed_set)
if not touched:
return
body = os.getenv("GOVERNANCE_PR_BODY", "").strip()
if not body:
errors.append(
"READ-ONLY file changed but PR body is unavailable; approval evidence is required"
)
return
if "READ-ONLY Approval" not in body:
errors.append("READ-ONLY file changed without 'READ-ONLY Approval' section in PR body")
return
touched_field = _parse_pr_evidence_line(body, "Touched READ-ONLY files")
human_approval = _parse_pr_evidence_line(body, "Human approval")
test_suite_1 = _parse_pr_evidence_line(body, "Test suite 1")
test_suite_2 = _parse_pr_evidence_line(body, "Test suite 2")
if _is_placeholder(touched_field):
errors.append("READ-ONLY Approval section missing 'Touched READ-ONLY files' evidence")
if _is_placeholder(human_approval):
errors.append("READ-ONLY Approval section missing 'Human approval' evidence")
if _is_placeholder(test_suite_1):
errors.append("READ-ONLY Approval section missing 'Test suite 1' evidence")
if _is_placeholder(test_suite_2):
errors.append("READ-ONLY Approval section missing 'Test suite 2' evidence")
def main() -> int:
errors: list[str] = []
warnings: list[str] = []
changed_files = load_changed_files(sys.argv[1:], errors)
pr_template = Path(".gitea/PULL_REQUEST_TEMPLATE.md")
issue_template = Path(".gitea/ISSUE_TEMPLATE/runtime_verification.md")
workflow_doc = Path("docs/workflow.md")
commands_doc = Path("docs/commands.md")
handover_script = Path("scripts/session_handover_check.py")
handover_log = Path("workflow/session-handover.md")
must_contain(
pr_template,
[
"Closes #N",
"Main -> Verifier Directive Contract",
"Coverage Matrix",
"NOT_OBSERVED",
"tea",
"gh",
"Session Handover Gate",
"session_handover_check.py --strict",
"READ-ONLY Approval",
"Touched READ-ONLY files",
"Human approval",
"Test suite 1",
"Test suite 2",
],
errors,
)
must_contain(
issue_template,
[
"[RUNTIME-VERIFY][SCN-XXX]",
"Requirement Mapping",
"Close Criteria",
"NOT_OBSERVED = 0",
],
errors,
)
must_contain(
workflow_doc,
[
"Session Handover Gate (Mandatory)",
"session_handover_check.py --strict",
"scripts/tea_comment.sh",
],
errors,
)
must_contain(
commands_doc,
[
"Session Handover Preflight (Mandatory)",
"session_handover_check.py --strict",
"Comment Newline Escaping",
"scripts/tea_comment.sh",
],
errors,
)
must_contain(
handover_log,
[
"Session Handover Log",
"- branch:",
"- docs_checked:",
"- open_issues_reviewed:",
"- next_ticket:",
],
errors,
)
if not handover_script.exists():
errors.append(f"missing file: {handover_script}")
validate_registry_sync(changed_files, errors)
validate_task_req_mapping(errors)
validate_task_test_pairing(errors)
validate_timezone_policy_tokens(errors)
validate_pr_traceability(errors)
validate_read_only_approval(changed_files, errors, warnings)
if errors:
print("[FAIL] governance asset validation failed")
for err in errors:
print(f"- {err}")
return 1
print("[OK] governance assets validated")
if warnings:
print(f"[WARN] governance advisory: {len(warnings)}")
for warn in warnings:
print(f"- {warn}")
return 0
if __name__ == "__main__":
sys.exit(main())
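The registry-sync rule hinges on `is_policy_file`: markdown under `docs/ouroboros/`, excluding the registry itself. The predicate can be checked in isolation; this re-inlines the function from above with sample paths (the `20_policy.md` filename is illustrative):

```python
REQUIREMENTS_REGISTRY = "docs/ouroboros/01_requirements_registry.md"

def is_policy_file(path: str) -> bool:
    # A policy file is any markdown under docs/ouroboros/ other than
    # the requirements registry itself.
    normalized = path.strip().replace("\\", "/")
    if normalized.startswith("./"):
        normalized = normalized[2:]
    if not normalized.endswith(".md"):
        return False
    if not normalized.startswith("docs/ouroboros/"):
        return False
    return normalized != REQUIREMENTS_REGISTRY

print(is_policy_file("docs/ouroboros/20_policy.md"))  # policy doc
print(is_policy_file(REQUIREMENTS_REGISTRY))          # the registry itself
print(is_policy_file("src/main.py"))                  # not markdown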


@@ -0,0 +1,185 @@
#!/usr/bin/env python3
"""Validate Ouroboros planning docs for metadata, links, and ID consistency."""
from __future__ import annotations
import re
import sys
from pathlib import Path
DOC_DIR = Path("docs/ouroboros")
META_PATTERN = re.compile(
r"<!--\n"
r"Doc-ID: (?P<doc_id>[^\n]+)\n"
r"Version: (?P<version>[^\n]+)\n"
r"Status: (?P<status>[^\n]+)\n"
r"Owner: (?P<owner>[^\n]+)\n"
r"Updated: (?P<updated>\d{4}-\d{2}-\d{2})\n"
r"-->",
re.MULTILINE,
)
ID_PATTERN = re.compile(r"\b(?:REQ|RULE|TASK|TEST|DOC)-[A-Z0-9-]+-\d{3}\b")
DEF_PATTERN = re.compile(
r"^-\s+`(?P<id>(?:REQ|RULE|TASK|TEST|DOC)-[A-Z0-9-]+-\d{3})`",
re.MULTILINE,
)
LINK_PATTERN = re.compile(r"\[[^\]]+\]\((?P<link>[^)]+)\)")
LINE_DEF_PATTERN = re.compile(
r"^-\s+`(?P<id>(?:REQ|RULE|TASK|TEST|DOC)-[A-Z0-9-]+-\d{3})`.*$",
re.MULTILINE,
)
PLAN_LINK_PATTERN = re.compile(r"ouroboros_plan_v(?P<version>[23])\.txt$")
ALLOWED_PLAN_TARGETS = {
"2": (DOC_DIR / "source" / "ouroboros_plan_v2.txt").resolve(),
"3": (DOC_DIR / "source" / "ouroboros_plan_v3.txt").resolve(),
}
def iter_docs() -> list[Path]:
return sorted([p for p in DOC_DIR.glob("*.md") if p.is_file()])
def validate_metadata(path: Path, text: str, errors: list[str], doc_ids: dict[str, Path]) -> None:
match = META_PATTERN.search(text)
if not match:
errors.append(f"{path}: missing or malformed metadata block")
return
doc_id = match.group("doc_id").strip()
if doc_id in doc_ids:
errors.append(f"{path}: duplicate Doc-ID {doc_id} (already in {doc_ids[doc_id]})")
else:
doc_ids[doc_id] = path
def validate_plan_source_link(path: Path, link: str, errors: list[str]) -> bool:
normalized = link.strip()
# Ignore in-page anchors and parse the filesystem part for validation.
link_path = normalized.split("#", 1)[0].strip()
if not link_path:
return False
match = PLAN_LINK_PATTERN.search(link_path)
if not match:
return False
version = match.group("version")
expected_target = ALLOWED_PLAN_TARGETS[version]
if link_path.startswith("/"):
errors.append(
f"{path}: invalid plan link path -> {link} "
f"(use ./source/ouroboros_plan_v{version}.txt)"
)
return True
resolved_target = (path.parent / link_path).resolve()
if resolved_target != expected_target:
errors.append(
f"{path}: invalid plan link path -> {link} "
f"(must resolve to docs/ouroboros/source/ouroboros_plan_v{version}.txt)"
)
return True
return False
def validate_links(path: Path, text: str, errors: list[str]) -> None:
for m in LINK_PATTERN.finditer(text):
link = m.group("link").strip()
if not link or link.startswith("http") or link.startswith("#"):
continue
if validate_plan_source_link(path, link, errors):
continue
link_path = link.split("#", 1)[0].strip()
if link_path.startswith("/"):
target = Path(link_path)
else:
target = (path.parent / link_path).resolve()
if not target.exists():
errors.append(f"{path}: broken link -> {link}")
def collect_ids(path: Path, text: str, defs: dict[str, Path], refs: dict[str, set[Path]]) -> None:
for m in DEF_PATTERN.finditer(text):
defs[m.group("id")] = path
for m in ID_PATTERN.finditer(text):
idv = m.group(0)
refs.setdefault(idv, set()).add(path)
def collect_req_traceability(
text: str, req_to_task: dict[str, set[str]], req_to_test: dict[str, set[str]]
) -> None:
for m in LINE_DEF_PATTERN.finditer(text):
line = m.group(0)
item_id = m.group("id")
req_ids = [rid for rid in ID_PATTERN.findall(line) if rid.startswith("REQ-")]
if item_id.startswith("TASK-"):
for req_id in req_ids:
req_to_task.setdefault(req_id, set()).add(item_id)
if item_id.startswith("TEST-"):
for req_id in req_ids:
req_to_test.setdefault(req_id, set()).add(item_id)
def main() -> int:
if not DOC_DIR.exists():
print(f"ERROR: missing directory {DOC_DIR}")
return 1
docs = iter_docs()
if not docs:
print(f"ERROR: no markdown docs found in {DOC_DIR}")
return 1
errors: list[str] = []
doc_ids: dict[str, Path] = {}
defs: dict[str, Path] = {}
refs: dict[str, set[Path]] = {}
req_to_task: dict[str, set[str]] = {}
req_to_test: dict[str, set[str]] = {}
for path in docs:
text = path.read_text(encoding="utf-8")
validate_metadata(path, text, errors, doc_ids)
validate_links(path, text, errors)
collect_ids(path, text, defs, refs)
collect_req_traceability(text, req_to_task, req_to_test)
for idv, where_used in sorted(refs.items()):
if idv.startswith("DOC-"):
continue
if idv not in defs:
files = ", ".join(str(p) for p in sorted(where_used))
errors.append(f"undefined ID {idv}, used in: {files}")
for idv in sorted(defs):
if not idv.startswith("REQ-"):
continue
if idv not in req_to_task:
errors.append(f"REQ without TASK mapping: {idv}")
if idv not in req_to_test:
errors.append(f"REQ without TEST mapping: {idv}")
warnings: list[str] = []
for idv, where_def in sorted(defs.items()):
if len(refs.get(idv, set())) <= 1 and (idv.startswith("REQ-") or idv.startswith("RULE-")):
warnings.append(f"orphan ID {idv} defined in {where_def} (not referenced elsewhere)")
if errors:
print("[FAIL] Ouroboros docs validation failed")
for err in errors:
print(f"- {err}")
return 1
print(f"[OK] validated {len(docs)} docs in {DOC_DIR}")
print(f"[OK] unique Doc-ID: {len(doc_ids)}")
print(f"[OK] definitions: {len(defs)}, references: {len(refs)}")
print(f"[OK] req->task mappings: {len(req_to_task)}")
print(f"[OK] req->test mappings: {len(req_to_test)}")
if warnings:
print(f"[WARN] orphan IDs: {len(warnings)}")
for w in warnings:
print(f"- {w}")
return 0
if __name__ == "__main__":
sys.exit(main())
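The metadata block the validator expects can be exercised directly against `META_PATTERN`; the field values below are hypothetical, but the block shape matches the regex above:

```python
import re

# Same metadata pattern as the validator.
META_PATTERN = re.compile(
    r"<!--\n"
    r"Doc-ID: (?P<doc_id>[^\n]+)\n"
    r"Version: (?P<version>[^\n]+)\n"
    r"Status: (?P<status>[^\n]+)\n"
    r"Owner: (?P<owner>[^\n]+)\n"
    r"Updated: (?P<updated>\d{4}-\d{2}-\d{2})\n"
    r"-->"
)
sample = (
    "<!--\n"
    "Doc-ID: DOC-OUROBOROS-001\n"
    "Version: 1.0\n"
    "Status: Active\n"
    "Owner: agentson\n"
    "Updated: 2026-03-02\n"
    "-->\n"
    "# Some planning doc\n"
)
match = META_PATTERN.search(sample)
print(match.group("doc_id") if match else "missing")
```

Dropping any field, or using a non-ISO date in `Updated`, makes the search fail and the doc is reported as "missing or malformed metadata block".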


@@ -0,0 +1,64 @@
"""Backtest cost/slippage/failure validation guard."""
from __future__ import annotations
import math
from dataclasses import dataclass
@dataclass(frozen=True)
class BacktestCostModel:
commission_bps: float | None = None
slippage_bps_by_session: dict[str, float] | None = None
failure_rate_by_session: dict[str, float] | None = None
partial_fill_rate_by_session: dict[str, float] | None = None
unfavorable_fill_required: bool = True
def validate_backtest_cost_model(
*,
model: BacktestCostModel,
required_sessions: list[str],
) -> None:
"""Raise ValueError when required cost assumptions are missing/invalid."""
if (
model.commission_bps is None
or not math.isfinite(model.commission_bps)
or model.commission_bps < 0
):
raise ValueError("commission_bps must be provided and >= 0")
if not model.unfavorable_fill_required:
raise ValueError("unfavorable_fill_required must be True")
slippage = model.slippage_bps_by_session or {}
failure = model.failure_rate_by_session or {}
partial_fill = model.partial_fill_rate_by_session or {}
missing_slippage = [s for s in required_sessions if s not in slippage]
if missing_slippage:
raise ValueError(
f"missing slippage_bps_by_session for sessions: {', '.join(missing_slippage)}"
)
missing_failure = [s for s in required_sessions if s not in failure]
if missing_failure:
raise ValueError(
f"missing failure_rate_by_session for sessions: {', '.join(missing_failure)}"
)
missing_partial_fill = [s for s in required_sessions if s not in partial_fill]
if missing_partial_fill:
raise ValueError(
"missing partial_fill_rate_by_session for sessions: "
f"{', '.join(missing_partial_fill)}"
)
for sess, bps in slippage.items():
if not math.isfinite(bps) or bps < 0:
raise ValueError(f"slippage bps must be >= 0 for session={sess}")
for sess, rate in failure.items():
if not math.isfinite(rate) or rate < 0 or rate > 1:
raise ValueError(f"failure rate must be within [0,1] for session={sess}")
for sess, rate in partial_fill.items():
if not math.isfinite(rate) or rate < 0 or rate > 1:
raise ValueError(f"partial fill rate must be within [0,1] for session={sess}")
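The guard above fails fast when any required session lacks a cost assumption. A minimal sketch of that contract (the helper name `check_sessions` and the session codes are illustrative, not an import of the module above):

```python
import math

# Minimal sketch of the guard's "fail fast on missing session assumptions"
# rule: report sessions with no slippage entry, plus entries that are
# negative or non-finite.
def check_sessions(required: list[str], slippage_bps: dict[str, float]) -> list[str]:
    """Return sessions that would make the cost guard raise."""
    missing = [s for s in required if s not in slippage_bps]
    invalid = [s for s, v in slippage_bps.items() if not math.isfinite(v) or v < 0]
    return missing + invalid

# "KRX" has an assumption, "NYSE" does not -> the real guard raises for NYSE.
print(check_sessions(["KRX", "NYSE"], {"KRX": 8.0}))  # ['NYSE']
```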

View File

@@ -0,0 +1,104 @@
"""Conservative backtest execution model."""
from __future__ import annotations
import math
from dataclasses import dataclass
from random import Random
from typing import Literal
OrderSide = Literal["BUY", "SELL"]
@dataclass(frozen=True)
class ExecutionRequest:
side: OrderSide
session_id: str
qty: int
reference_price: float
@dataclass(frozen=True)
class ExecutionAssumptions:
slippage_bps_by_session: dict[str, float]
failure_rate_by_session: dict[str, float]
partial_fill_rate_by_session: dict[str, float]
partial_fill_min_ratio: float = 0.3
partial_fill_max_ratio: float = 0.8
seed: int = 0
@dataclass(frozen=True)
class ExecutionResult:
status: Literal["FILLED", "PARTIAL", "REJECTED"]
filled_qty: int
avg_price: float
slippage_bps: float
reason: str
class BacktestExecutionModel:
"""Execution simulator with conservative unfavorable fill assumptions."""
def __init__(self, assumptions: ExecutionAssumptions) -> None:
self.assumptions = assumptions
self._rng = Random(assumptions.seed)
if assumptions.partial_fill_min_ratio <= 0 or assumptions.partial_fill_max_ratio > 1:
raise ValueError("partial fill ratios must be within (0,1]")
if assumptions.partial_fill_min_ratio > assumptions.partial_fill_max_ratio:
raise ValueError("partial_fill_min_ratio must be <= partial_fill_max_ratio")
for sess, bps in assumptions.slippage_bps_by_session.items():
if not math.isfinite(bps) or bps < 0:
raise ValueError(f"slippage_bps must be finite and >= 0 for session={sess}")
for sess, rate in assumptions.failure_rate_by_session.items():
if not math.isfinite(rate) or rate < 0 or rate > 1:
raise ValueError(f"failure_rate must be in [0,1] for session={sess}")
for sess, rate in assumptions.partial_fill_rate_by_session.items():
if not math.isfinite(rate) or rate < 0 or rate > 1:
raise ValueError(f"partial_fill_rate must be in [0,1] for session={sess}")
def simulate(self, request: ExecutionRequest) -> ExecutionResult:
if request.qty <= 0:
raise ValueError("qty must be positive")
if request.reference_price <= 0:
raise ValueError("reference_price must be positive")
slippage_bps = self.assumptions.slippage_bps_by_session.get(request.session_id, 0.0)
failure_rate = self.assumptions.failure_rate_by_session.get(request.session_id, 0.0)
partial_rate = self.assumptions.partial_fill_rate_by_session.get(request.session_id, 0.0)
if self._rng.random() < failure_rate:
return ExecutionResult(
status="REJECTED",
filled_qty=0,
avg_price=0.0,
slippage_bps=slippage_bps,
reason="execution_failure",
)
slip_mult = 1.0 + (
slippage_bps / 10000.0 if request.side == "BUY" else -slippage_bps / 10000.0
)
exec_price = request.reference_price * slip_mult
if self._rng.random() < partial_rate:
ratio = self._rng.uniform(
self.assumptions.partial_fill_min_ratio,
self.assumptions.partial_fill_max_ratio,
)
filled = max(1, min(request.qty - 1, int(request.qty * ratio)))
return ExecutionResult(
status="PARTIAL",
filled_qty=filled,
avg_price=exec_price,
slippage_bps=slippage_bps,
reason="partial_fill",
)
return ExecutionResult(
status="FILLED",
filled_qty=request.qty,
avg_price=exec_price,
slippage_bps=slippage_bps,
reason="filled",
)
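The "conservative unfavorable fill" rule reduces to one sign flip: a BUY fills above the reference price, a SELL below it, so slippage always works against the trader. A standalone sketch of that multiplier (the helper name is illustrative):

```python
def unfavorable_fill_price(side: str, reference_price: float, slippage_bps: float) -> float:
    """BUY fills above reference, SELL fills below -- always against the trader."""
    sign = 1.0 if side == "BUY" else -1.0
    return reference_price * (1.0 + sign * slippage_bps / 10000.0)

# 10 bps of slippage on a 10,000 reference price moves the fill ~10 against you.
print(unfavorable_fill_price("BUY", 10_000.0, 10.0))   # ~10010.0
print(unfavorable_fill_price("SELL", 10_000.0, 10.0))  # ~9990.0
```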

View File

@@ -0,0 +1,393 @@
"""Integrated v2 backtest pipeline.
Wires TripleBarrier labeling + WalkForward split + CostGuard validation
into a single deterministic orchestration path.
"""
from __future__ import annotations
from collections.abc import Sequence
from dataclasses import dataclass
from datetime import datetime
from statistics import mean
from typing import Literal, cast
from src.analysis.backtest_cost_guard import BacktestCostModel, validate_backtest_cost_model
from src.analysis.backtest_execution_model import (
BacktestExecutionModel,
ExecutionAssumptions,
ExecutionRequest,
)
from src.analysis.triple_barrier import TripleBarrierSpec, label_with_triple_barrier
from src.analysis.walk_forward_split import WalkForwardFold, generate_walk_forward_splits
@dataclass(frozen=True)
class BacktestBar:
high: float
low: float
close: float
session_id: str
timestamp: datetime | None = None
@dataclass(frozen=True)
class WalkForwardConfig:
train_size: int
test_size: int
step_size: int | None = None
purge_size: int = 0
embargo_size: int = 0
min_train_size: int = 1
@dataclass(frozen=True)
class BaselineScore:
name: Literal["B0", "B1", "M1"]
accuracy: float
cost_adjusted_accuracy: float
@dataclass(frozen=True)
class BacktestFoldResult:
fold_index: int
train_indices: list[int]
test_indices: list[int]
train_label_distribution: dict[int, int]
test_label_distribution: dict[int, int]
baseline_scores: list[BaselineScore]
execution_adjusted_avg_return_bps: float
execution_adjusted_trade_count: int
execution_rejected_count: int
execution_partial_count: int
@dataclass(frozen=True)
class BacktestPipelineResult:
run_id: str
n_bars: int
n_entries: int
required_sessions: list[str]
label_distribution: dict[int, int]
folds: list[BacktestFoldResult]
def run_v2_backtest_pipeline(
*,
bars: Sequence[BacktestBar],
entry_indices: Sequence[int],
side: int,
triple_barrier_spec: TripleBarrierSpec,
walk_forward: WalkForwardConfig,
cost_model: BacktestCostModel,
required_sessions: list[str] | None = None,
) -> BacktestPipelineResult:
"""Run v2 integrated pipeline (cost guard -> labels -> walk-forward baselines)."""
if not bars:
raise ValueError("bars must not be empty")
if not entry_indices:
raise ValueError("entry_indices must not be empty")
resolved_sessions = (
sorted(set(required_sessions))
if required_sessions is not None
else sorted({bar.session_id for bar in bars})
)
validate_backtest_cost_model(model=cost_model, required_sessions=resolved_sessions)
highs = [float(bar.high) for bar in bars]
lows = [float(bar.low) for bar in bars]
closes = [float(bar.close) for bar in bars]
timestamps = [bar.timestamp for bar in bars]
normalized_entries = sorted(set(int(i) for i in entry_indices))
if normalized_entries[0] < 0 or normalized_entries[-1] >= len(bars):
raise IndexError("entry index out of range")
resolved_timestamps: list[datetime] | None = None
if triple_barrier_spec.max_holding_minutes is not None:
if any(ts is None for ts in timestamps):
raise ValueError(
"BacktestBar.timestamp is required for all bars when "
"triple_barrier_spec.max_holding_minutes is set"
)
resolved_timestamps = cast(list[datetime], timestamps)
labels_by_bar_index: dict[int, int] = {}
for idx in normalized_entries:
labels_by_bar_index[idx] = label_with_triple_barrier(
highs=highs,
lows=lows,
closes=closes,
timestamps=resolved_timestamps,
entry_index=idx,
side=side,
spec=triple_barrier_spec,
).label
ordered_labels = [labels_by_bar_index[idx] for idx in normalized_entries]
ordered_sessions = [bars[idx].session_id for idx in normalized_entries]
ordered_prices = [bars[idx].close for idx in normalized_entries]
folds = generate_walk_forward_splits(
n_samples=len(normalized_entries),
train_size=walk_forward.train_size,
test_size=walk_forward.test_size,
step_size=walk_forward.step_size,
purge_size=walk_forward.purge_size,
embargo_size=walk_forward.embargo_size,
min_train_size=walk_forward.min_train_size,
)
fold_results: list[BacktestFoldResult] = []
for fold_idx, fold in enumerate(folds):
train_labels = [ordered_labels[i] for i in fold.train_indices]
test_labels = [ordered_labels[i] for i in fold.test_indices]
test_sessions = [ordered_sessions[i] for i in fold.test_indices]
test_prices = [ordered_prices[i] for i in fold.test_indices]
if not test_labels:
continue
execution_model = _build_execution_model(cost_model=cost_model, fold_seed=fold_idx)
execution_return_model = _build_execution_model(
cost_model=cost_model,
fold_seed=fold_idx,
)
b0_pred = _baseline_b0_pred(train_labels)
m1_pred = _m1_pred(train_labels)
execution_returns_bps: list[float] = []
execution_rejected = 0
execution_partial = 0
for rel_idx in fold.test_indices:
entry_bar_index = normalized_entries[rel_idx]
bar = bars[entry_bar_index]
trade = _simulate_execution_adjusted_return_bps(
execution_model=execution_return_model,
bar=bar,
label=ordered_labels[rel_idx],
side=side,
spec=triple_barrier_spec,
commission_bps=float(cost_model.commission_bps or 0.0),
)
if trade["status"] == "REJECTED":
execution_rejected += 1
continue
execution_returns_bps.append(float(trade["return_bps"]))
if trade["status"] == "PARTIAL":
execution_partial += 1
fold_results.append(
BacktestFoldResult(
fold_index=fold_idx,
train_indices=fold.train_indices,
test_indices=fold.test_indices,
train_label_distribution=_label_dist(train_labels),
test_label_distribution=_label_dist(test_labels),
baseline_scores=[
BaselineScore(
name="B0",
accuracy=_score_constant(b0_pred, test_labels),
cost_adjusted_accuracy=_score_with_execution(
prediction=b0_pred,
actual=test_labels,
sessions=test_sessions,
reference_prices=test_prices,
execution_model=execution_model,
commission_bps=float(cost_model.commission_bps or 0.0),
),
),
BaselineScore(
name="B1",
accuracy=_score_constant(1, test_labels),
cost_adjusted_accuracy=_score_with_execution(
prediction=1,
actual=test_labels,
sessions=test_sessions,
reference_prices=test_prices,
execution_model=execution_model,
commission_bps=float(cost_model.commission_bps or 0.0),
),
),
BaselineScore(
name="M1",
accuracy=_score_constant(m1_pred, test_labels),
cost_adjusted_accuracy=_score_with_execution(
prediction=m1_pred,
actual=test_labels,
sessions=test_sessions,
reference_prices=test_prices,
execution_model=execution_model,
commission_bps=float(cost_model.commission_bps or 0.0),
),
),
],
execution_adjusted_avg_return_bps=(
mean(execution_returns_bps) if execution_returns_bps else 0.0
),
execution_adjusted_trade_count=len(execution_returns_bps),
execution_rejected_count=execution_rejected,
execution_partial_count=execution_partial,
)
)
return BacktestPipelineResult(
run_id=_build_run_id(
n_entries=len(normalized_entries),
n_folds=len(fold_results),
sessions=resolved_sessions,
),
n_bars=len(bars),
n_entries=len(normalized_entries),
required_sessions=resolved_sessions,
label_distribution=_label_dist(ordered_labels),
folds=fold_results,
)
def _label_dist(labels: Sequence[int]) -> dict[int, int]:
dist: dict[int, int] = {-1: 0, 0: 0, 1: 0}
for val in labels:
if val in dist:
dist[val] += 1
return dist
def _score_constant(pred: int, actual: Sequence[int]) -> float:
return mean(1.0 if pred == label else 0.0 for label in actual)
def _baseline_b0(train_labels: Sequence[int], test_labels: Sequence[int]) -> float:
return _score_constant(_baseline_b0_pred(train_labels), test_labels)
def _baseline_b0_pred(train_labels: Sequence[int]) -> int:
if not train_labels:
return 0
# Majority-class baseline from training fold.
choices = (-1, 0, 1)
return max(choices, key=lambda c: train_labels.count(c))
def _m1_pred(train_labels: Sequence[int]) -> int:
if not train_labels:
return 0
return train_labels[-1]
def _build_execution_model(
*,
cost_model: BacktestCostModel,
fold_seed: int,
) -> BacktestExecutionModel:
return BacktestExecutionModel(
ExecutionAssumptions(
slippage_bps_by_session=dict(cost_model.slippage_bps_by_session or {}),
failure_rate_by_session=dict(cost_model.failure_rate_by_session or {}),
partial_fill_rate_by_session=dict(cost_model.partial_fill_rate_by_session or {}),
seed=fold_seed,
)
)
def _score_with_execution(
*,
prediction: int,
actual: Sequence[int],
sessions: Sequence[str],
reference_prices: Sequence[float],
execution_model: BacktestExecutionModel,
commission_bps: float,
) -> float:
if not actual:
return 0.0
contributions: list[float] = []
for label, session_id, reference_price in zip(actual, sessions, reference_prices, strict=True):
if prediction == 0:
contributions.append(1.0 if label == 0 else 0.0)
continue
side = "BUY" if prediction > 0 else "SELL"
execution = execution_model.simulate(
ExecutionRequest(
side=side,
session_id=session_id,
qty=100,
reference_price=reference_price,
)
)
if execution.status == "REJECTED":
contributions.append(0.0)
continue
fill_ratio = execution.filled_qty / 100.0
cost_penalty = min(0.99, (commission_bps + execution.slippage_bps) / 10000.0)
correctness = 1.0 if prediction == label else 0.0
contributions.append(correctness * fill_ratio * (1.0 - cost_penalty))
return mean(contributions)
def _build_run_id(*, n_entries: int, n_folds: int, sessions: Sequence[str]) -> str:
sess_key = "_".join(sessions)
return f"v2p-e{n_entries}-f{n_folds}-s{sess_key}"
def fold_has_leakage(fold: WalkForwardFold) -> bool:
"""Utility for tests/verification: True when train/test overlap exists."""
return bool(set(fold.train_indices).intersection(fold.test_indices))
def _simulate_execution_adjusted_return_bps(
*,
execution_model: BacktestExecutionModel,
bar: BacktestBar,
label: int,
side: int,
spec: TripleBarrierSpec,
commission_bps: float,
) -> dict[str, float | str]:
qty = 100
entry_req = ExecutionRequest(
side="BUY" if side == 1 else "SELL",
session_id=bar.session_id,
qty=qty,
reference_price=float(bar.close),
)
entry_fill = execution_model.simulate(entry_req)
if entry_fill.status == "REJECTED":
return {"status": "REJECTED", "return_bps": 0.0}
exit_qty = entry_fill.filled_qty
if label == 1:
gross_return_bps = spec.take_profit_pct * 10000.0
elif label == -1:
gross_return_bps = -spec.stop_loss_pct * 10000.0
else:
gross_return_bps = 0.0
if side == 1:
exit_price = float(bar.close) * (1.0 + gross_return_bps / 10000.0)
else:
exit_price = float(bar.close) * (1.0 - gross_return_bps / 10000.0)
exit_req = ExecutionRequest(
side="SELL" if side == 1 else "BUY",
session_id=bar.session_id,
qty=exit_qty,
reference_price=max(0.01, exit_price),
)
exit_fill = execution_model.simulate(exit_req)
if exit_fill.status == "REJECTED":
return {"status": "REJECTED", "return_bps": 0.0}
fill_ratio = min(entry_fill.filled_qty, exit_fill.filled_qty) / qty
cost_bps = (
float(entry_fill.slippage_bps)
+ float(exit_fill.slippage_bps)
+ (2.0 * float(commission_bps))
)
net_return_bps = (gross_return_bps * fill_ratio) - cost_bps
is_partial = entry_fill.status == "PARTIAL" or exit_fill.status == "PARTIAL"
status = "PARTIAL" if is_partial else "FILLED"
return {"status": status, "return_bps": net_return_bps}
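The round-trip arithmetic in `_simulate_execution_adjusted_return_bps` can be checked by hand: gross label return scaled by the fill ratio, minus slippage on both legs and commission twice. With hypothetical numbers (a +50 bps take-profit, fully filled, 5 bps slippage per leg, 1.5 bps commission per leg):

```python
# Worked example of net_return_bps = gross * fill_ratio - cost_bps
gross_return_bps = 50.0
fill_ratio = 1.0
cost_bps = 5.0 + 5.0 + 2.0 * 1.5  # entry slip + exit slip + round-trip commission
net_return_bps = gross_return_bps * fill_ratio - cost_bps
print(net_return_bps)  # 37.0
```

A partial fill shrinks only the gross term, so costs eat a larger share of the trade.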

View File

@@ -104,6 +104,7 @@ class MarketScanner:
# Store in L7 real-time layer
from datetime import UTC, datetime

timeframe = datetime.now(UTC).isoformat()
self.context_store.set_context(
    ContextLayer.L7_REALTIME,
@@ -158,12 +159,8 @@ class MarketScanner:
top_movers = valid_metrics[: self.top_n]

# Detect breakouts and breakdowns
-breakouts = [
-    m.stock_code for m in valid_metrics if self.analyzer.is_breakout(m)
-]
-breakdowns = [
-    m.stock_code for m in valid_metrics if self.analyzer.is_breakdown(m)
-]
+breakouts = [m.stock_code for m in valid_metrics if self.analyzer.is_breakout(m)]
+breakdowns = [m.stock_code for m in valid_metrics if self.analyzer.is_breakdown(m)]

logger.info(
    "%s scan complete: %d scanned, top momentum=%.1f, %d breakouts, %d breakdowns",
@@ -228,10 +225,9 @@ class MarketScanner:
# If we removed too many, backfill from current watchlist
if len(updated) < len(current_watchlist):
-    backfill = [
-        code for code in current_watchlist
-        if code not in updated
-    ][: len(current_watchlist) - len(updated)]
+    backfill = [code for code in current_watchlist if code not in updated][
+        : len(current_watchlist) - len(updated)
+    ]
    updated.extend(backfill)

logger.info(

View File

@@ -158,7 +158,12 @@ class SmartVolatilityScanner:
price = latest_close
latest_high = _safe_float(latest.get("high"))
latest_low = _safe_float(latest.get("low"))
-if latest_close > 0 and latest_high > 0 and latest_low > 0 and latest_high >= latest_low:
+if (
+    latest_close > 0
+    and latest_high > 0
+    and latest_low > 0
+    and latest_high >= latest_low
+):
    intraday_range_pct = (latest_high - latest_low) / latest_close * 100.0
if volume <= 0:
    volume = _safe_float(latest.get("volume"))
@@ -234,9 +239,7 @@ class SmartVolatilityScanner:
    limit=50,
)
except Exception as exc:
-    logger.warning(
-        "Overseas fluctuation ranking failed for %s: %s", market.code, exc
-    )
+    logger.warning("Overseas fluctuation ranking failed for %s: %s", market.code, exc)
    fluct_rows = []

if not fluct_rows:
@@ -250,9 +253,7 @@ class SmartVolatilityScanner:
    limit=50,
)
except Exception as exc:
-    logger.warning(
-        "Overseas volume ranking failed for %s: %s", market.code, exc
-    )
+    logger.warning("Overseas volume ranking failed for %s: %s", market.code, exc)
    volume_rows = []

for idx, row in enumerate(volume_rows):
@@ -433,16 +434,10 @@ def _extract_intraday_range_pct(row: dict[str, Any], price: float) -> float:
if price <= 0:
    return 0.0
-high = _safe_float(
-    row.get("high")
-    or row.get("ovrs_hgpr")
-    or row.get("stck_hgpr")
-    or row.get("day_hgpr")
-)
-low = _safe_float(
-    row.get("low")
-    or row.get("ovrs_lwpr")
-    or row.get("stck_lwpr")
-    or row.get("day_lwpr")
-)
+high = _safe_float(
+    row.get("high") or row.get("ovrs_hgpr") or row.get("stck_hgpr") or row.get("day_hgpr")
+)
+low = _safe_float(
+    row.get("low") or row.get("ovrs_lwpr") or row.get("stck_lwpr") or row.get("day_lwpr")
+)
if high <= 0 or low <= 0 or high < low:
    return 0.0

View File

@@ -0,0 +1,142 @@
"""Triple barrier labeler utilities.
Implements first-touch labeling with upper/lower/time barriers.
"""
from __future__ import annotations
import warnings
from collections.abc import Sequence
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Literal
TieBreakMode = Literal["stop_first", "take_first"]
@dataclass(frozen=True)
class TripleBarrierSpec:
take_profit_pct: float
stop_loss_pct: float
max_holding_bars: int | None = None
max_holding_minutes: int | None = None
tie_break: TieBreakMode = "stop_first"
def __post_init__(self) -> None:
if self.max_holding_minutes is None and self.max_holding_bars is None:
raise ValueError("one of max_holding_minutes or max_holding_bars must be set")
if self.max_holding_minutes is not None and self.max_holding_minutes <= 0:
raise ValueError("max_holding_minutes must be positive")
if self.max_holding_bars is not None and self.max_holding_bars <= 0:
raise ValueError("max_holding_bars must be positive")
@dataclass(frozen=True)
class TripleBarrierLabel:
label: int # +1 take-profit first, -1 stop-loss first, 0 timeout
touched: Literal["take_profit", "stop_loss", "time"]
touch_bar: int
entry_price: float
upper_barrier: float
lower_barrier: float
def label_with_triple_barrier(
*,
highs: Sequence[float],
lows: Sequence[float],
closes: Sequence[float],
timestamps: Sequence[datetime] | None = None,
entry_index: int,
side: int,
spec: TripleBarrierSpec,
) -> TripleBarrierLabel:
"""Label one entry using triple-barrier first-touch rule.
Args:
    highs/lows/closes: OHLC components with identical lengths.
    timestamps: Per-bar timestamps; required when spec.max_holding_minutes is set.
    entry_index: Entry bar index in the sequences.
    side: +1 for long, -1 for short.
    spec: Barrier specification.
"""
if side not in {1, -1}:
raise ValueError("side must be +1 or -1")
if len(highs) != len(lows) or len(highs) != len(closes):
raise ValueError("highs, lows, closes lengths must match")
if entry_index < 0 or entry_index >= len(closes):
raise IndexError("entry_index out of range")
entry_price = float(closes[entry_index])
if entry_price <= 0:
raise ValueError("entry price must be positive")
if side == 1:
upper = entry_price * (1.0 + spec.take_profit_pct)
lower = entry_price * (1.0 - spec.stop_loss_pct)
else:
# For short side, favorable move is down.
upper = entry_price * (1.0 + spec.stop_loss_pct)
lower = entry_price * (1.0 - spec.take_profit_pct)
if spec.max_holding_minutes is not None:
if timestamps is None:
raise ValueError("timestamps are required when max_holding_minutes is set")
if len(timestamps) != len(closes):
raise ValueError("timestamps length must match OHLC lengths")
expiry_timestamp = timestamps[entry_index] + timedelta(minutes=spec.max_holding_minutes)
last_index = entry_index
for idx in range(entry_index + 1, len(closes)):
if timestamps[idx] > expiry_timestamp:
break
last_index = idx
else:
assert spec.max_holding_bars is not None
warnings.warn(
(
"TripleBarrierSpec.max_holding_bars is deprecated; "
"use max_holding_minutes with timestamps instead."
),
DeprecationWarning,
stacklevel=2,
)
last_index = min(len(closes) - 1, entry_index + spec.max_holding_bars)
for idx in range(entry_index + 1, last_index + 1):
high_price = float(highs[idx])
low_price = float(lows[idx])
up_touch = high_price >= upper
down_touch = low_price <= lower
if not up_touch and not down_touch:
continue
if up_touch and down_touch:
if spec.tie_break == "stop_first":
touched = "stop_loss"
label = -1
else:
touched = "take_profit"
label = 1
elif up_touch:
touched = "take_profit" if side == 1 else "stop_loss"
label = 1 if side == 1 else -1
else:
touched = "stop_loss" if side == 1 else "take_profit"
label = -1 if side == 1 else 1
return TripleBarrierLabel(
label=label,
touched=touched,
touch_bar=idx,
entry_price=entry_price,
upper_barrier=upper,
lower_barrier=lower,
)
return TripleBarrierLabel(
label=0,
touched="time",
touch_bar=last_index,
entry_price=entry_price,
upper_barrier=upper,
lower_barrier=lower,
)
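The long-side path of the labeler above reduces to a first-touch scan over subsequent bars. A self-contained sketch (illustration only; the real labeler also handles shorts, same-bar ties via `tie_break`, and minute-based expiry):

```python
# First-touch labeling for a long entry: +1 if the take-profit barrier is
# touched first, -1 if the stop-loss barrier is, 0 on timeout.
def first_touch(highs, lows, entry_price, tp_pct, sl_pct, start):
    upper = entry_price * (1.0 + tp_pct)
    lower = entry_price * (1.0 - sl_pct)
    for idx in range(start + 1, len(highs)):
        if highs[idx] >= upper:
            return 1, idx   # take-profit touched first
        if lows[idx] <= lower:
            return -1, idx  # stop-loss touched first
    return 0, len(highs) - 1  # time barrier

highs = [100.0, 100.5, 102.5]
lows = [99.5, 99.2, 101.0]
# Entry at 100.0 with +2% / -1% barriers: bar 2 touches 102.0 first.
print(first_touch(highs, lows, entry_price=100.0, tp_pct=0.02, sl_pct=0.01, start=0))
```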

View File

@@ -92,9 +92,7 @@ class VolatilityAnalyzer:
    recent_tr = true_ranges[-period:]
    return sum(recent_tr) / len(recent_tr)

-def calculate_price_change(
-    self, current_price: float, past_price: float
-) -> float:
+def calculate_price_change(self, current_price: float, past_price: float) -> float:
    """Calculate price change percentage.

    Args:
@@ -108,9 +106,7 @@ class VolatilityAnalyzer:
        return 0.0
    return ((current_price - past_price) / past_price) * 100

-def calculate_volume_surge(
-    self, current_volume: float, avg_volume: float
-) -> float:
+def calculate_volume_surge(self, current_volume: float, avg_volume: float) -> float:
    """Calculate volume surge ratio.

    Args:
@@ -240,11 +236,7 @@ class VolatilityAnalyzer:
        Momentum score (0-100)
    """
    # Weight recent changes more heavily
-    weighted_change = (
-        price_change_1m * 0.4 +
-        price_change_5m * 0.3 +
-        price_change_15m * 0.2
-    )
+    weighted_change = price_change_1m * 0.4 + price_change_5m * 0.3 + price_change_15m * 0.2

    # Volume contribution (normalized to 0-10 scale)
    volume_contribution = min(10.0, (volume_surge - 1.0) * 5.0)
@@ -301,17 +293,11 @@ class VolatilityAnalyzer:
    if len(close_prices) > 0:
        if len(close_prices) >= 1:
-            price_change_1m = self.calculate_price_change(
-                current_price, close_prices[-1]
-            )
+            price_change_1m = self.calculate_price_change(current_price, close_prices[-1])
        if len(close_prices) >= 5:
-            price_change_5m = self.calculate_price_change(
-                current_price, close_prices[-5]
-            )
+            price_change_5m = self.calculate_price_change(current_price, close_prices[-5])
        if len(close_prices) >= 15:
-            price_change_15m = self.calculate_price_change(
-                current_price, close_prices[-15]
-            )
+            price_change_15m = self.calculate_price_change(current_price, close_prices[-15])

    # Calculate volume surge
    avg_volume = sum(volumes) / len(volumes) if volumes else current_volume

View File

@@ -0,0 +1,74 @@
"""Walk-forward splitter with purge/embargo controls."""
from __future__ import annotations
from dataclasses import dataclass
@dataclass(frozen=True)
class WalkForwardFold:
train_indices: list[int]
test_indices: list[int]
@property
def train_size(self) -> int:
return len(self.train_indices)
@property
def test_size(self) -> int:
return len(self.test_indices)
def generate_walk_forward_splits(
*,
n_samples: int,
train_size: int,
test_size: int,
step_size: int | None = None,
purge_size: int = 0,
embargo_size: int = 0,
min_train_size: int = 1,
) -> list[WalkForwardFold]:
"""Generate chronological folds with purge/embargo leakage controls."""
if n_samples <= 0:
raise ValueError("n_samples must be positive")
if train_size <= 0 or test_size <= 0:
raise ValueError("train_size and test_size must be positive")
if purge_size < 0 or embargo_size < 0:
raise ValueError("purge_size and embargo_size must be >= 0")
if min_train_size <= 0:
raise ValueError("min_train_size must be positive")
step = step_size if step_size is not None else test_size
if step <= 0:
raise ValueError("step_size must be positive")
folds: list[WalkForwardFold] = []
prev_test_end: int | None = None
test_start = train_size + purge_size
while test_start + test_size <= n_samples:
test_end = test_start + test_size - 1
train_end = test_start - purge_size - 1
if train_end < 0:
break
train_start = max(0, train_end - train_size + 1)
train_indices = list(range(train_start, train_end + 1))
if prev_test_end is not None and embargo_size > 0:
emb_from = prev_test_end + 1
emb_to = prev_test_end + embargo_size
train_indices = [i for i in train_indices if i < emb_from or i > emb_to]
if len(train_indices) >= min_train_size:
folds.append(
WalkForwardFold(
train_indices=train_indices,
test_indices=list(range(test_start, test_end + 1)),
)
)
prev_test_end = test_end
test_start += step
return folds
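The boundary arithmetic in the splitter above is easy to verify by hand: with `purge_size` samples removed between train and test, the first fold's train window ends `purge_size + 1` indices before the test window starts. A worked example with toy sizes (values are hypothetical):

```python
# First-fold boundaries from the splitter's arithmetic.
n_samples, train_size, test_size, purge_size = 10, 4, 2, 1

test_start = train_size + purge_size      # 5
train_end = test_start - purge_size - 1   # 3 -> train covers [0, 3]
train_indices = list(range(0, train_end + 1))
test_indices = list(range(test_start, test_start + test_size))
print(train_indices, test_indices)  # [0, 1, 2, 3] [5, 6] -- index 4 is purged
```

Index 4 appears in neither list, which is exactly the leakage gap the purge is meant to create.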

View File

@@ -7,9 +7,9 @@ This module provides:
- Health monitoring and alerts
"""

-from src.backup.exporter import BackupExporter, ExportFormat
-from src.backup.scheduler import BackupScheduler, BackupPolicy
-from src.backup.cloud_storage import CloudStorage, S3Config
+from src.backup.cloud_storage import CloudStorage, S3Config
+from src.backup.exporter import BackupExporter, ExportFormat
+from src.backup.scheduler import BackupPolicy, BackupScheduler

__all__ = [
    "BackupExporter",

View File

@@ -94,7 +94,9 @@ class CloudStorage:
if metadata:
    extra_args["Metadata"] = metadata

-logger.info("Uploading %s to s3://%s/%s", file_path.name, self.config.bucket_name, object_key)
+logger.info(
+    "Uploading %s to s3://%s/%s", file_path.name, self.config.bucket_name, object_key
+)

try:
    self.client.upload_file(
View File

@@ -14,14 +14,14 @@ import json
import logging
import sqlite3
from datetime import UTC, datetime
-from enum import Enum
+from enum import StrEnum
from pathlib import Path
from typing import Any

logger = logging.getLogger(__name__)

-class ExportFormat(str, Enum):
+class ExportFormat(StrEnum):
    """Supported export formats."""

    JSON = "json"
@@ -103,15 +103,11 @@ class BackupExporter:
elif fmt == ExportFormat.CSV:
    return self._export_csv(output_dir, timestamp, compress, incremental_since)
elif fmt == ExportFormat.PARQUET:
-    return self._export_parquet(
-        output_dir, timestamp, compress, incremental_since
-    )
+    return self._export_parquet(output_dir, timestamp, compress, incremental_since)
else:
    raise ValueError(f"Unsupported format: {fmt}")

-def _get_trades(
-    self, incremental_since: datetime | None = None
-) -> list[dict[str, Any]]:
+def _get_trades(self, incremental_since: datetime | None = None) -> list[dict[str, Any]]:
    """Fetch trades from database.

    Args:
Args: Args:
@@ -164,9 +160,7 @@ class BackupExporter:
data = {
    "export_timestamp": datetime.now(UTC).isoformat(),
-    "incremental_since": (
-        incremental_since.isoformat() if incremental_since else None
-    ),
+    "incremental_since": (incremental_since.isoformat() if incremental_since else None),
    "record_count": len(trades),
    "trades": trades,
}
@@ -284,8 +278,7 @@ class BackupExporter:
    import pyarrow.parquet as pq
except ImportError:
    raise ImportError(
-        "pyarrow is required for Parquet export. "
-        "Install with: pip install pyarrow"
+        "pyarrow is required for Parquet export. Install with: pip install pyarrow"
    )

# Convert to pyarrow table

View File

@@ -14,14 +14,14 @@ import shutil
import sqlite3
from dataclasses import dataclass
from datetime import UTC, datetime, timedelta
-from enum import Enum
+from enum import StrEnum
from pathlib import Path
from typing import Any

logger = logging.getLogger(__name__)

-class HealthStatus(str, Enum):
+class HealthStatus(StrEnum):
    """Health check status."""

    HEALTHY = "healthy"
@@ -137,9 +137,13 @@ class HealthMonitor:
used_percent = (stat.used / stat.total) * 100

if stat.free < self.min_disk_space_bytes:
+    min_disk_gb = self.min_disk_space_bytes / 1024 / 1024 / 1024
    return HealthCheckResult(
        status=HealthStatus.UNHEALTHY,
-        message=f"Low disk space: {free_gb:.2f} GB free (minimum: {self.min_disk_space_bytes / 1024 / 1024 / 1024:.2f} GB)",
+        message=(
+            f"Low disk space: {free_gb:.2f} GB free "
+            f"(minimum: {min_disk_gb:.2f} GB)"
+        ),
        details={
            "free_gb": free_gb,
            "total_gb": total_gb,

View File

@@ -12,14 +12,14 @@ import logging
import shutil
from dataclasses import dataclass
from datetime import UTC, datetime, timedelta
-from enum import Enum
+from enum import StrEnum
from pathlib import Path
from typing import Any

logger = logging.getLogger(__name__)

-class BackupPolicy(str, Enum):
+class BackupPolicy(StrEnum):
    """Backup retention policies."""

    DAILY = "daily"
@@ -69,9 +69,7 @@ class BackupScheduler:
for d in [self.daily_dir, self.weekly_dir, self.monthly_dir]: for d in [self.daily_dir, self.weekly_dir, self.monthly_dir]:
d.mkdir(parents=True, exist_ok=True) d.mkdir(parents=True, exist_ok=True)
def create_backup( def create_backup(self, policy: BackupPolicy, verify: bool = True) -> BackupMetadata:
self, policy: BackupPolicy, verify: bool = True
) -> BackupMetadata:
"""Create a database backup. """Create a database backup.
Args: Args:
@@ -229,9 +227,7 @@ class BackupScheduler:
return removed return removed
def list_backups( def list_backups(self, policy: BackupPolicy | None = None) -> list[BackupMetadata]:
self, policy: BackupPolicy | None = None
) -> list[BackupMetadata]:
"""List available backups. """List available backups.
Args: Args:

View File

@@ -13,8 +13,8 @@ import hashlib
 import json
 import logging
 import time
-from dataclasses import dataclass, field
-from typing import Any, TYPE_CHECKING
+from dataclasses import dataclass
+from typing import TYPE_CHECKING, Any

 if TYPE_CHECKING:
     from src.brain.gemini_client import TradeDecision
@@ -26,7 +26,7 @@ logger = logging.getLogger(__name__)
 class CacheEntry:
     """Cached decision with metadata."""

-    decision: "TradeDecision"
+    decision: TradeDecision
     cached_at: float  # Unix timestamp
     hit_count: int = 0
     market_data_hash: str = ""
@@ -239,9 +239,7 @@ class DecisionCache:
         """
         current_time = time.time()
         expired_keys = [
-            k
-            for k, v in self._cache.items()
-            if current_time - v.cached_at > self.ttl_seconds
+            k for k, v in self._cache.items() if current_time - v.cached_at > self.ttl_seconds
         ]
         count = len(expired_keys)

View File

@@ -11,14 +11,14 @@ from __future__ import annotations
 from dataclasses import dataclass
 from datetime import UTC, datetime
-from enum import Enum
+from enum import StrEnum
 from typing import Any

 from src.context.layer import ContextLayer
 from src.context.store import ContextStore

-class DecisionType(str, Enum):
+class DecisionType(StrEnum):
     """Type of trading decision being made."""

     NORMAL = "normal"  # Regular trade decision
@@ -183,9 +183,7 @@ class ContextSelector:
             ContextLayer.L1_LEGACY,
         ]

-        scores = {
-            layer: self.score_layer_relevance(layer, decision_type) for layer in all_layers
-        }
+        scores = {layer: self.score_layer_relevance(layer, decision_type) for layer in all_layers}

         # Filter by minimum score
         selected_layers = [layer for layer, score in scores.items() if score >= min_score]

View File

@@ -25,12 +25,12 @@ from typing import Any
 from google import genai

-from src.config import Settings
-from src.data.news_api import NewsAPI, NewsSentiment
-from src.data.economic_calendar import EconomicCalendar
-from src.data.market_data import MarketData
 from src.brain.cache import DecisionCache
 from src.brain.prompt_optimizer import PromptOptimizer
+from src.config import Settings
+from src.data.economic_calendar import EconomicCalendar
+from src.data.market_data import MarketData
+from src.data.news_api import NewsAPI, NewsSentiment

 logger = logging.getLogger(__name__)
@@ -159,16 +159,12 @@ class GeminiClient:
             return ""

         # Check for upcoming high-impact events
-        upcoming = self._economic_calendar.get_upcoming_events(
-            days_ahead=7, min_impact="HIGH"
-        )
+        upcoming = self._economic_calendar.get_upcoming_events(days_ahead=7, min_impact="HIGH")

         if upcoming.high_impact_count == 0:
             return ""

-        lines = [
-            f"Upcoming High-Impact Events: {upcoming.high_impact_count} in next 7 days"
-        ]
+        lines = [f"Upcoming High-Impact Events: {upcoming.high_impact_count} in next 7 days"]

         if upcoming.next_major_event is not None:
             event = upcoming.next_major_event
@@ -180,9 +176,7 @@ class GeminiClient:
         # Check for earnings
         earnings_date = self._economic_calendar.get_earnings_date(stock_code)
         if earnings_date is not None:
-            lines.append(
-                f" Earnings: {stock_code} on {earnings_date.strftime('%Y-%m-%d')}"
-            )
+            lines.append(f" Earnings: {stock_code} on {earnings_date.strftime('%Y-%m-%d')}")

         return "\n".join(lines)
@@ -235,9 +229,7 @@ class GeminiClient:
         # Add foreigner net if non-zero
         if market_data.get("foreigner_net", 0) != 0:
-            market_info_lines.append(
-                f"Foreigner Net Buy/Sell: {market_data['foreigner_net']}"
-            )
+            market_info_lines.append(f"Foreigner Net Buy/Sell: {market_data['foreigner_net']}")

         market_info = "\n".join(market_info_lines)
@@ -249,8 +241,7 @@ class GeminiClient:
             market_info += f"\n\n{external_context}"

         json_format = (
-            '{"action": "BUY"|"SELL"|"HOLD", '
-            '"confidence": <int 0-100>, "rationale": "<string>"}'
+            '{"action": "BUY"|"SELL"|"HOLD", "confidence": <int 0-100>, "rationale": "<string>"}'
         )

         return (
             f"You are a professional {market_name} trading analyst.\n"
@@ -289,15 +280,12 @@ class GeminiClient:
         # Add foreigner net if non-zero
         if market_data.get("foreigner_net", 0) != 0:
-            market_info_lines.append(
-                f"Foreigner Net Buy/Sell: {market_data['foreigner_net']}"
-            )
+            market_info_lines.append(f"Foreigner Net Buy/Sell: {market_data['foreigner_net']}")

         market_info = "\n".join(market_info_lines)

         json_format = (
-            '{"action": "BUY"|"SELL"|"HOLD", '
-            '"confidence": <int 0-100>, "rationale": "<string>"}'
+            '{"action": "BUY"|"SELL"|"HOLD", "confidence": <int 0-100>, "rationale": "<string>"}'
         )

         return (
             f"You are a professional {market_name} trading analyst.\n"
@@ -339,25 +327,19 @@ class GeminiClient:
             data = json.loads(cleaned)
         except json.JSONDecodeError:
             logger.warning("Malformed JSON from Gemini — defaulting to HOLD")
-            return TradeDecision(
-                action="HOLD", confidence=0, rationale="Malformed JSON response"
-            )
+            return TradeDecision(action="HOLD", confidence=0, rationale="Malformed JSON response")

         # Validate required fields
         if not all(k in data for k in ("action", "confidence", "rationale")):
             logger.warning("Missing fields in Gemini response — defaulting to HOLD")
             # Preserve raw text in rationale so prompt_override callers (e.g. pre_market_planner)
             # can extract their own JSON format from decision.rationale (#245)
-            return TradeDecision(
-                action="HOLD", confidence=0, rationale=raw
-            )
+            return TradeDecision(action="HOLD", confidence=0, rationale=raw)

         action = str(data["action"]).upper()
         if action not in VALID_ACTIONS:
             logger.warning("Invalid action '%s' from Gemini — defaulting to HOLD", action)
-            return TradeDecision(
-                action="HOLD", confidence=0, rationale=f"Invalid action: {action}"
-            )
+            return TradeDecision(action="HOLD", confidence=0, rationale=f"Invalid action: {action}")

         confidence = int(data["confidence"])
         rationale = str(data["rationale"])
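The three HOLD fallbacks above share one fail-safe parsing shape: every malformed or unexpected response degrades to a harmless default rather than raising. A condensed sketch of that shape (`Decision` and `parse_response` here are stand-ins for the `TradeDecision`/parser in the diff, not its exact code):

```python
import json
from dataclasses import dataclass

VALID_ACTIONS = {"BUY", "SELL", "HOLD"}


@dataclass
class Decision:
    action: str
    confidence: int
    rationale: str


def parse_response(raw: str) -> Decision:
    """Any malformed input degrades to a safe HOLD instead of raising."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        return Decision("HOLD", 0, "Malformed JSON response")
    if not all(k in data for k in ("action", "confidence", "rationale")):
        # Preserve the raw text so callers can recover their own format from it.
        return Decision("HOLD", 0, raw)
    action = str(data["action"]).upper()
    if action not in VALID_ACTIONS:
        return Decision("HOLD", 0, f"Invalid action: {action}")
    return Decision(action, int(data["confidence"]), str(data["rationale"]))


assert parse_response("not json").action == "HOLD"
assert parse_response('{"action":"buy","confidence":80,"rationale":"x"}').action == "BUY"
```

Stashing the raw text in `rationale` on the missing-fields path is what lets `prompt_override` callers parse their own formats later, as the comment about #245 notes.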
@@ -445,9 +427,7 @@ class GeminiClient:
             # not a parsed TradeDecision. Skip parse_response to avoid spurious
             # "Missing fields" warnings and return the raw response directly. (#247)
             if "prompt_override" in market_data:
-                logger.info(
-                    "Gemini raw response received (prompt_override, tokens=%d)", token_count
-                )
+                logger.info("Gemini raw response received (prompt_override, tokens=%d)", token_count)
                 # Not a trade decision — don't inflate _total_decisions metrics
                 return TradeDecision(
                     action="HOLD", confidence=0, rationale=raw, token_count=token_count
@@ -546,9 +526,7 @@ class GeminiClient:
     # Batch Decision Making (for daily trading mode)
     # ------------------------------------------------------------------

-    async def decide_batch(
-        self, stocks_data: list[dict[str, Any]]
-    ) -> dict[str, TradeDecision]:
+    async def decide_batch(self, stocks_data: list[dict[str, Any]]) -> dict[str, TradeDecision]:
         """Make decisions for multiple stocks in a single API call.

         This is designed for daily trading mode to minimize API usage

View File

@@ -179,7 +179,8 @@ class PromptOptimizer:
             # Minimal instructions
             prompt = (
                 f"{market_name} trader. Analyze:\n{data_str}\n\n"
-                'Return JSON: {"action":"BUY"|"SELL"|"HOLD","confidence":<0-100>,"rationale":"<text>"}\n'
+                "Return JSON: "
+                '{"action":"BUY"|"SELL"|"HOLD","confidence":<0-100>,"rationale":"<text>"}\n'
                 "Rules: action=BUY/SELL/HOLD, confidence=0-100, rationale=concise. No markdown."
             )
         else:

View File

@@ -103,7 +103,8 @@ class KISBroker:
             ssl_ctx.verify_mode = ssl.CERT_NONE
             connector = aiohttp.TCPConnector(ssl=ssl_ctx)
             self._session = aiohttp.ClientSession(
-                timeout=timeout, connector=connector,
+                timeout=timeout,
+                connector=connector,
             )
         return self._session
@@ -224,16 +225,12 @@ class KISBroker:
             async with session.get(url, headers=headers, params=params) as resp:
                 if resp.status != 200:
                     text = await resp.text()
-                    raise ConnectionError(
-                        f"get_orderbook failed ({resp.status}): {text}"
-                    )
+                    raise ConnectionError(f"get_orderbook failed ({resp.status}): {text}")
                 return await resp.json()
         except (TimeoutError, aiohttp.ClientError) as exc:
             raise ConnectionError(f"Network error fetching orderbook: {exc}") from exc

-    async def get_current_price(
-        self, stock_code: str
-    ) -> tuple[float, float, float]:
+    async def get_current_price(self, stock_code: str) -> tuple[float, float, float]:
         """Fetch current price data for a domestic stock.

         Uses the ``inquire-price`` API (FHKST01010100), which works in both
@@ -265,9 +262,7 @@ class KISBroker:
             async with session.get(url, headers=headers, params=params) as resp:
                 if resp.status != 200:
                     text = await resp.text()
-                    raise ConnectionError(
-                        f"get_current_price failed ({resp.status}): {text}"
-                    )
+                    raise ConnectionError(f"get_current_price failed ({resp.status}): {text}")
                 data = await resp.json()
                 out = data.get("output", {})
                 return (
@@ -276,9 +271,7 @@ class KISBroker:
                     _f(out.get("frgn_ntby_qty")),
                 )
         except (TimeoutError, aiohttp.ClientError) as exc:
-            raise ConnectionError(
-                f"Network error fetching current price: {exc}"
-            ) from exc
+            raise ConnectionError(f"Network error fetching current price: {exc}") from exc

     async def get_balance(self) -> dict[str, Any]:
         """Fetch current account balance and holdings."""
@@ -308,9 +301,7 @@ class KISBroker:
             async with session.get(url, headers=headers, params=params) as resp:
                 if resp.status != 200:
                     text = await resp.text()
-                    raise ConnectionError(
-                        f"get_balance failed ({resp.status}): {text}"
-                    )
+                    raise ConnectionError(f"get_balance failed ({resp.status}): {text}")
                 return await resp.json()
         except (TimeoutError, aiohttp.ClientError) as exc:
             raise ConnectionError(f"Network error fetching balance: {exc}") from exc
@@ -369,9 +360,7 @@ class KISBroker:
             async with session.post(url, headers=headers, json=body) as resp:
                 if resp.status != 200:
                     text = await resp.text()
-                    raise ConnectionError(
-                        f"send_order failed ({resp.status}): {text}"
-                    )
+                    raise ConnectionError(f"send_order failed ({resp.status}): {text}")
                 data = await resp.json()
                 logger.info(
                     "Order submitted",
@@ -449,9 +438,7 @@ class KISBroker:
             async with session.get(url, headers=headers, params=params) as resp:
                 if resp.status != 200:
                     text = await resp.text()
-                    raise ConnectionError(
-                        f"fetch_market_rankings failed ({resp.status}): {text}"
-                    )
+                    raise ConnectionError(f"fetch_market_rankings failed ({resp.status}): {text}")
                 data = await resp.json()

                 # Parse response - output is a list of ranked stocks
@@ -465,14 +452,16 @@ class KISBroker:
                 rankings = []
                 for item in data.get("output", [])[:limit]:
-                    rankings.append({
+                    rankings.append(
+                        {
                         "stock_code": item.get("stck_shrn_iscd") or item.get("mksc_shrn_iscd", ""),
                         "name": item.get("hts_kor_isnm", ""),
                         "price": _safe_float(item.get("stck_prpr", "0")),
                         "volume": _safe_float(item.get("acml_vol", "0")),
                         "change_rate": _safe_float(item.get("prdy_ctrt", "0")),
                         "volume_increase_rate": _safe_float(item.get("vol_inrt", "0")),
-                    })
+                        }
+                    )

                 return rankings
         except (TimeoutError, aiohttp.ClientError) as exc:
@@ -522,9 +511,7 @@ class KISBroker:
                 data = await resp.json()
                 return data.get("output", []) or []
         except (TimeoutError, aiohttp.ClientError) as exc:
-            raise ConnectionError(
-                f"Network error fetching domestic pending orders: {exc}"
-            ) from exc
+            raise ConnectionError(f"Network error fetching domestic pending orders: {exc}") from exc

     async def cancel_domestic_order(
         self,
@@ -575,14 +562,10 @@ class KISBroker:
             async with session.post(url, headers=headers, json=body) as resp:
                 if resp.status != 200:
                     text = await resp.text()
-                    raise ConnectionError(
-                        f"cancel_domestic_order failed ({resp.status}): {text}"
-                    )
+                    raise ConnectionError(f"cancel_domestic_order failed ({resp.status}): {text}")
                 return cast(dict[str, Any], await resp.json())
         except (TimeoutError, aiohttp.ClientError) as exc:
-            raise ConnectionError(
-                f"Network error cancelling domestic order: {exc}"
-            ) from exc
+            raise ConnectionError(f"Network error cancelling domestic order: {exc}") from exc

     async def get_daily_prices(
         self,
@@ -609,6 +592,7 @@ class KISBroker:
         # Calculate date range (today and N days ago)
         from datetime import datetime, timedelta
+
         end_date = datetime.now().strftime("%Y%m%d")
         start_date = (datetime.now() - timedelta(days=days + 10)).strftime("%Y%m%d")
@@ -627,9 +611,7 @@ class KISBroker:
             async with session.get(url, headers=headers, params=params) as resp:
                 if resp.status != 200:
                     text = await resp.text()
-                    raise ConnectionError(
-                        f"get_daily_prices failed ({resp.status}): {text}"
-                    )
+                    raise ConnectionError(f"get_daily_prices failed ({resp.status}): {text}")
                 data = await resp.json()

                 # Parse response
@@ -643,14 +625,16 @@ class KISBroker:
                 prices = []
                 for item in data.get("output2", []):
-                    prices.append({
+                    prices.append(
+                        {
                         "date": item.get("stck_bsop_date", ""),
                         "open": _safe_float(item.get("stck_oprc", "0")),
                         "high": _safe_float(item.get("stck_hgpr", "0")),
                         "low": _safe_float(item.get("stck_lwpr", "0")),
                         "close": _safe_float(item.get("stck_clpr", "0")),
                         "volume": _safe_float(item.get("acml_vol", "0")),
-                    })
+                        }
+                    )

                 # Sort oldest to newest (KIS returns newest first)
                 prices.reverse()
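Every broker call above funnels both non-200 responses and transport exceptions into a single `ConnectionError`, chained with `from exc` so the original traceback survives. A transport-agnostic sketch of that pattern (`FakeTransportError` and this `get_balance` are illustrative stand-ins, not the aiohttp-based code in the diff):

```python
class FakeTransportError(Exception):
    """Stand-in for aiohttp.ClientError in this sketch."""


def get_balance(transport):
    """Surface every transport failure as ConnectionError, mirroring the broker pattern."""
    try:
        status, payload = transport()
        if status != 200:
            # ConnectionError is not a TimeoutError or FakeTransportError,
            # so this raise propagates past the except clause below.
            raise ConnectionError(f"get_balance failed ({status}): {payload}")
        return payload
    except (TimeoutError, FakeTransportError) as exc:
        # `from exc` chains the original exception for debugging.
        raise ConnectionError(f"Network error fetching balance: {exc}") from exc


ok = get_balance(lambda: (200, {"cash": 100.0}))
assert ok == {"cash": 100.0}
```

Collapsing all failure modes into one exception type gives callers a single thing to catch, at the cost of needing the message (or `__cause__`) to distinguish an API rejection from a network outage.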

View File

@@ -56,9 +56,7 @@ class OverseasBroker:
         """
         self._broker = kis_broker

-    async def get_overseas_price(
-        self, exchange_code: str, stock_code: str
-    ) -> dict[str, Any]:
+    async def get_overseas_price(self, exchange_code: str, stock_code: str) -> dict[str, Any]:
         """
         Fetch overseas stock price.
@@ -89,14 +87,10 @@ class OverseasBroker:
             async with session.get(url, headers=headers, params=params) as resp:
                 if resp.status != 200:
                     text = await resp.text()
-                    raise ConnectionError(
-                        f"get_overseas_price failed ({resp.status}): {text}"
-                    )
+                    raise ConnectionError(f"get_overseas_price failed ({resp.status}): {text}")
                 return await resp.json()
         except (TimeoutError, aiohttp.ClientError) as exc:
-            raise ConnectionError(
-                f"Network error fetching overseas price: {exc}"
-            ) from exc
+            raise ConnectionError(f"Network error fetching overseas price: {exc}") from exc

     async def fetch_overseas_rankings(
         self,
@@ -121,6 +115,7 @@ class OverseasBroker:
             tr_id = self._broker._settings.OVERSEAS_RANKING_VOLUME_TR_ID
             path = self._broker._settings.OVERSEAS_RANKING_VOLUME_PATH
             params: dict[str, str] = {
+                "KEYB": "",  # NEXT KEY BUFF - required, send blank
                 "AUTH": "",
                 "EXCD": ranking_excd,
                 "MIXN": "0",
@@ -130,10 +125,11 @@ class OverseasBroker:
             tr_id = self._broker._settings.OVERSEAS_RANKING_FLUCT_TR_ID
             path = self._broker._settings.OVERSEAS_RANKING_FLUCT_PATH
             params = {
+                "KEYB": "",  # NEXT KEY BUFF - required, send blank
                 "AUTH": "",
                 "EXCD": ranking_excd,
                 "NDAY": "0",
-                "GUBN": "0",  # 0=all (up+down), 1=up - volatility scanner needs all
+                "GUBN": "1",  # 0=down, 1=up - volatility scanner prioritizes surging stocks
                 "VOL_RANG": "0",
             }
@@ -152,9 +148,7 @@ class OverseasBroker:
                             ranking_type,
                         )
                         return []
-                    raise ConnectionError(
-                        f"fetch_overseas_rankings failed ({resp.status}): {text}"
-                    )
+                    raise ConnectionError(f"fetch_overseas_rankings failed ({resp.status}): {text}")
                 data = await resp.json()
                 rows = self._extract_ranking_rows(data)
@@ -169,9 +163,7 @@ class OverseasBroker:
                 )
                 return []
         except (TimeoutError, aiohttp.ClientError) as exc:
-            raise ConnectionError(
-                f"Network error fetching overseas rankings: {exc}"
-            ) from exc
+            raise ConnectionError(f"Network error fetching overseas rankings: {exc}") from exc

     async def get_overseas_balance(self, exchange_code: str) -> dict[str, Any]:
         """
@@ -191,9 +183,7 @@ class OverseasBroker:
         # TR_ID: live TTTS3012R, paper VTTS3012R
         # Source: KIS Open API full reference (20260221), 'overseas stock balance inquiry' sheet
-        balance_tr_id = (
-            "TTTS3012R" if self._broker._settings.MODE == "live" else "VTTS3012R"
-        )
+        balance_tr_id = "TTTS3012R" if self._broker._settings.MODE == "live" else "VTTS3012R"
         headers = await self._broker._auth_headers(balance_tr_id)
         params = {
             "CANO": self._broker._account_no,
@@ -203,22 +193,63 @@ class OverseasBroker:
             "CTX_AREA_FK200": "",
             "CTX_AREA_NK200": "",
         }
-        url = (
-            f"{self._broker._base_url}/uapi/overseas-stock/v1/trading/inquire-balance"
-        )
+        url = f"{self._broker._base_url}/uapi/overseas-stock/v1/trading/inquire-balance"

         try:
             async with session.get(url, headers=headers, params=params) as resp:
                 if resp.status != 200:
                     text = await resp.text()
-                    raise ConnectionError(
-                        f"get_overseas_balance failed ({resp.status}): {text}"
-                    )
+                    raise ConnectionError(f"get_overseas_balance failed ({resp.status}): {text}")
                 return await resp.json()
         except (TimeoutError, aiohttp.ClientError) as exc:
-            raise ConnectionError(
-                f"Network error fetching overseas balance: {exc}"
-            ) from exc
+            raise ConnectionError(f"Network error fetching overseas balance: {exc}") from exc
+
+    async def get_overseas_buying_power(
+        self,
+        exchange_code: str,
+        stock_code: str,
+        price: float,
+    ) -> dict[str, Any]:
+        """
+        Fetch overseas buying power for a specific stock and price.
+
+        Args:
+            exchange_code: Exchange code (e.g., "NASD", "NYSE")
+            stock_code: Stock ticker symbol
+            price: Current stock price (used for quantity calculation)
+
+        Returns:
+            API response; key field: output.ord_psbl_frcr_amt (orderable foreign-currency amount)
+
+        Raises:
+            ConnectionError: On network or API errors
+        """
+        await self._broker._rate_limiter.acquire()
+        session = self._broker._get_session()
+
+        # TR_ID: live TTTS3007R, paper VTTS3007R
+        # Source: KIS Open API full reference (20260221), 'overseas stock buying power inquiry' sheet
+        ps_tr_id = "TTTS3007R" if self._broker._settings.MODE == "live" else "VTTS3007R"
+        headers = await self._broker._auth_headers(ps_tr_id)
+        params = {
+            "CANO": self._broker._account_no,
+            "ACNT_PRDT_CD": self._broker._product_cd,
+            "OVRS_EXCG_CD": exchange_code,
+            "OVRS_ORD_UNPR": f"{price:.2f}",
+            "ITEM_CD": stock_code,
+        }
+        url = f"{self._broker._base_url}/uapi/overseas-stock/v1/trading/inquire-psamount"
+
+        try:
+            async with session.get(url, headers=headers, params=params) as resp:
+                if resp.status != 200:
+                    text = await resp.text()
+                    raise ConnectionError(
+                        f"get_overseas_buying_power failed ({resp.status}): {text}"
+                    )
+                return await resp.json()
+        except (TimeoutError, aiohttp.ClientError) as exc:
+            raise ConnectionError(f"Network error fetching overseas buying power: {exc}") from exc

     async def send_overseas_order(
         self,
@@ -275,9 +306,7 @@ class OverseasBroker:
             async with session.post(url, headers=headers, json=body) as resp:
                 if resp.status != 200:
                     text = await resp.text()
-                    raise ConnectionError(
-                        f"send_overseas_order failed ({resp.status}): {text}"
-                    )
+                    raise ConnectionError(f"send_overseas_order failed ({resp.status}): {text}")
                 data = await resp.json()
                 rt_cd = data.get("rt_cd", "")
                 msg1 = data.get("msg1", "")
@@ -302,13 +331,9 @@ class OverseasBroker:
                 )
                 return data
         except (TimeoutError, aiohttp.ClientError) as exc:
-            raise ConnectionError(
-                f"Network error sending overseas order: {exc}"
-            ) from exc
+            raise ConnectionError(f"Network error sending overseas order: {exc}") from exc

-    async def get_overseas_pending_orders(
-        self, exchange_code: str
-    ) -> list[dict[str, Any]]:
+    async def get_overseas_pending_orders(self, exchange_code: str) -> list[dict[str, Any]]:
         """Fetch unfilled (pending) overseas orders for a given exchange.

         Args:
@@ -324,9 +349,7 @@ class OverseasBroker:
             ConnectionError: On network or API errors (live mode only).
         """
         if self._broker._settings.MODE != "live":
-            logger.debug(
-                "Pending orders API (TTTS3018R) not supported in paper mode; returning []"
-            )
+            logger.debug("Pending orders API (TTTS3018R) not supported in paper mode; returning []")
             return []

         await self._broker._rate_limiter.acquire()
@@ -343,9 +366,7 @@ class OverseasBroker:
             "CTX_AREA_FK200": "",
             "CTX_AREA_NK200": "",
         }
-        url = (
-            f"{self._broker._base_url}/uapi/overseas-stock/v1/trading/inquire-nccs"
-        )
+        url = f"{self._broker._base_url}/uapi/overseas-stock/v1/trading/inquire-nccs"

         try:
             async with session.get(url, headers=headers, params=params) as resp:
@@ -360,9 +381,7 @@ class OverseasBroker:
                     return output
                 return []
         except (TimeoutError, aiohttp.ClientError) as exc:
-            raise ConnectionError(
-                f"Network error fetching pending orders: {exc}"
-            ) from exc
+            raise ConnectionError(f"Network error fetching pending orders: {exc}") from exc

     async def cancel_overseas_order(
         self,
@@ -414,22 +433,16 @@ class OverseasBroker:
         headers = await self._broker._auth_headers(tr_id)
         headers["hashkey"] = hash_key

-        url = (
-            f"{self._broker._base_url}/uapi/overseas-stock/v1/trading/order-rvsecncl"
-        )
+        url = f"{self._broker._base_url}/uapi/overseas-stock/v1/trading/order-rvsecncl"

         try:
             async with session.post(url, headers=headers, json=body) as resp:
                 if resp.status != 200:
                     text = await resp.text()
-                    raise ConnectionError(
-                        f"cancel_overseas_order failed ({resp.status}): {text}"
-                    )
+                    raise ConnectionError(f"cancel_overseas_order failed ({resp.status}): {text}")
                 return await resp.json()
         except (TimeoutError, aiohttp.ClientError) as exc:
-            raise ConnectionError(
-                f"Network error cancelling overseas order: {exc}"
-            ) from exc
+            raise ConnectionError(f"Network error cancelling overseas order: {exc}") from exc

     def _get_currency_code(self, exchange_code: str) -> str:
         """

View File

@@ -59,11 +59,27 @@ class Settings(BaseSettings):
# KIS VTS overseas balance API returns errors for most accounts. # KIS VTS overseas balance API returns errors for most accounts.
# This value is used as a fallback when the balance API returns 0 in paper mode. # This value is used as a fallback when the balance API returns 0 in paper mode.
PAPER_OVERSEAS_CASH: float = Field(default=50000.0, ge=0.0) PAPER_OVERSEAS_CASH: float = Field(default=50000.0, ge=0.0)
USD_BUFFER_MIN: float = Field(default=1000.0, ge=0.0)
US_MIN_PRICE: float = Field(default=5.0, ge=0.0)
STAGED_EXIT_BE_ARM_PCT: float = Field(default=1.2, gt=0.0, le=30.0)
STAGED_EXIT_ARM_PCT: float = Field(default=3.0, gt=0.0, le=100.0)
STOPLOSS_REENTRY_COOLDOWN_MINUTES: int = Field(default=120, ge=1, le=1440)
KR_ATR_STOP_MULTIPLIER_K: float = Field(default=2.0, ge=0.1, le=10.0)
KR_ATR_STOP_MIN_PCT: float = Field(default=-2.0, le=0.0)
KR_ATR_STOP_MAX_PCT: float = Field(default=-7.0, le=0.0)
OVERNIGHT_EXCEPTION_ENABLED: bool = True
SESSION_RISK_RELOAD_ENABLED: bool = True
SESSION_RISK_PROFILES_JSON: str = "{}"
# Trading frequency mode (daily = batch API calls, realtime = per-stock calls) # Trading frequency mode (daily = batch API calls, realtime = per-stock calls)
TRADE_MODE: str = Field(default="daily", pattern="^(daily|realtime)$") TRADE_MODE: str = Field(default="daily", pattern="^(daily|realtime)$")
DAILY_SESSIONS: int = Field(default=4, ge=1, le=10) DAILY_SESSIONS: int = Field(default=4, ge=1, le=10)
SESSION_INTERVAL_HOURS: int = Field(default=6, ge=1, le=24) SESSION_INTERVAL_HOURS: int = Field(default=6, ge=1, le=24)
ORDER_BLACKOUT_ENABLED: bool = True
ORDER_BLACKOUT_WINDOWS_KST: str = "23:30-00:10"
ORDER_BLACKOUT_QUEUE_MAX: int = Field(default=500, ge=10, le=5000)
BLACKOUT_RECOVERY_PRICE_REVALIDATION_ENABLED: bool = True
BLACKOUT_RECOVERY_MAX_PRICE_DRIFT_PCT: float = Field(default=5.0, ge=0.0, le=100.0)
# Pre-Market Planner # Pre-Market Planner
PRE_MARKET_MINUTES: int = Field(default=30, ge=10, le=120) PRE_MARKET_MINUTES: int = Field(default=30, ge=10, le=120)
@@ -108,12 +124,8 @@ class Settings(BaseSettings):
OVERSEAS_RANKING_ENABLED: bool = True
OVERSEAS_RANKING_FLUCT_TR_ID: str = "HHDFS76290000"
OVERSEAS_RANKING_VOLUME_TR_ID: str = "HHDFS76270000"
OVERSEAS_RANKING_FLUCT_PATH: str = "/uapi/overseas-stock/v1/ranking/updown-rate"
OVERSEAS_RANKING_VOLUME_PATH: str = "/uapi/overseas-stock/v1/ranking/volume-surge"
# Dashboard (optional)
DASHBOARD_ENABLED: bool = False


@@ -222,9 +222,7 @@ class ContextAggregator:
total_pnl = 0.0
for month in months:
    monthly_pnl = self.store.get_context(ContextLayer.L4_MONTHLY, month, "monthly_pnl")
    if monthly_pnl is not None:
        total_pnl += monthly_pnl
@@ -251,9 +249,7 @@ class ContextAggregator:
if quarterly_pnl is not None:
    total_pnl += quarterly_pnl
self.store.set_context(ContextLayer.L2_ANNUAL, year, "annual_pnl", round(total_pnl, 2))
def aggregate_legacy_from_annual(self) -> None:
    """Aggregate L1 (legacy) context from all L2 (annual) data."""
@@ -280,9 +276,7 @@ class ContextAggregator:
self.store.set_context(
    ContextLayer.L1_LEGACY, "LEGACY", "total_pnl", round(total_pnl, 2)
)
self.store.set_context(ContextLayer.L1_LEGACY, "LEGACY", "years_traded", years_traded)
self.store.set_context(
    ContextLayer.L1_LEGACY,
    "LEGACY",


@@ -3,10 +3,10 @@
from __future__ import annotations

from dataclasses import dataclass
from enum import StrEnum


class ContextLayer(StrEnum):
    """7-tier context hierarchy from real-time to generational."""

    L1_LEGACY = "L1_LEGACY"  # Cumulative/generational wisdom


@@ -9,7 +9,7 @@ This module summarizes old context data instead of including raw details:
from __future__ import annotations

from dataclasses import dataclass
from datetime import UTC, datetime
from typing import Any

from src.context.layer import ContextLayer


@@ -0,0 +1,114 @@
"""Blackout policy and queued order-intent manager."""
from __future__ import annotations

from collections import deque
from dataclasses import dataclass
from datetime import UTC, datetime, time
from zoneinfo import ZoneInfo


@dataclass(frozen=True)
class BlackoutWindow:
    start: time
    end: time

    def contains(self, kst_time: time) -> bool:
        if self.start <= self.end:
            return self.start <= kst_time < self.end
        return kst_time >= self.start or kst_time < self.end


@dataclass
class QueuedOrderIntent:
    market_code: str
    exchange_code: str
    session_id: str
    stock_code: str
    order_type: str
    quantity: int
    price: float
    source: str
    queued_at: datetime
    attempts: int = 0


def parse_blackout_windows_kst(raw: str) -> list[BlackoutWindow]:
    """Parse comma-separated KST windows like '23:30-00:10,11:20-11:30'."""
    windows: list[BlackoutWindow] = []
    for token in raw.split(","):
        span = token.strip()
        if not span or "-" not in span:
            continue
        start_raw, end_raw = [part.strip() for part in span.split("-", 1)]
        try:
            start_h, start_m = [int(v) for v in start_raw.split(":", 1)]
            end_h, end_m = [int(v) for v in end_raw.split(":", 1)]
        except (ValueError, TypeError):
            continue
        if not (0 <= start_h <= 23 and 0 <= end_h <= 23):
            continue
        if not (0 <= start_m <= 59 and 0 <= end_m <= 59):
            continue
        windows.append(BlackoutWindow(start=time(start_h, start_m), end=time(end_h, end_m)))
    return windows


class BlackoutOrderManager:
    """Tracks blackout mode and queues order intents until recovery."""

    def __init__(
        self,
        *,
        enabled: bool,
        windows: list[BlackoutWindow],
        max_queue_size: int = 500,
    ) -> None:
        self.enabled = enabled
        self._windows = windows
        self._queue: deque[QueuedOrderIntent] = deque()
        self._was_blackout = False
        self._max_queue_size = max_queue_size
        self._overflow_drop_count = 0

    @property
    def pending_count(self) -> int:
        return len(self._queue)

    @property
    def overflow_drop_count(self) -> int:
        return self._overflow_drop_count

    def in_blackout(self, now: datetime | None = None) -> bool:
        if not self.enabled or not self._windows:
            return False
        now = now or datetime.now(UTC)
        kst_now = now.astimezone(ZoneInfo("Asia/Seoul")).timetz().replace(tzinfo=None)
        return any(window.contains(kst_now) for window in self._windows)

    def enqueue(self, intent: QueuedOrderIntent) -> bool:
        if self._max_queue_size <= 0:
            return False
        if len(self._queue) >= self._max_queue_size:
            self._queue.popleft()
            self._overflow_drop_count += 1
        self._queue.append(intent)
        return True

    def pop_recovery_batch(self, now: datetime | None = None) -> list[QueuedOrderIntent]:
        in_blackout_now = self.in_blackout(now)
        batch: list[QueuedOrderIntent] = []
        if not in_blackout_now and self._queue:
            while self._queue:
                batch.append(self._queue.popleft())
        self._was_blackout = in_blackout_now
        return batch

    def requeue(self, intent: QueuedOrderIntent) -> None:
        if len(self._queue) < self._max_queue_size:
            self._queue.append(intent)

    def clear(self) -> int:
        count = len(self._queue)
        self._queue.clear()
        return count
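Two behaviors above deserve a worked example: `BlackoutWindow.contains` treats a window whose end precedes its start as wrapping past midnight (the default `23:30-00:10` window), and `enqueue` implements the bounded oldest-drop policy from #371. A self-contained sketch (`window_contains` and the sample intents are illustrative stand-ins, not the project API):

```python
from collections import deque
from datetime import time

def window_contains(start: time, end: time, t: time) -> bool:
    # Same rule as BlackoutWindow.contains: when end < start, the
    # window wraps past midnight (e.g. 23:30-00:10 KST).
    if start <= end:
        return start <= t < end
    return t >= start or t < end

# 23:30-00:10 wraps midnight: 23:45 and 00:05 are inside, 00:10 is not
# (the end bound is exclusive).
print(window_contains(time(23, 30), time(0, 10), time(23, 45)))  # True
print(window_contains(time(23, 30), time(0, 10), time(0, 5)))    # True
print(window_contains(time(23, 30), time(0, 10), time(0, 10)))   # False

# Bounded oldest-drop queue, as in BlackoutOrderManager.enqueue:
# on overflow the oldest intent is evicted so the newest is always kept.
queue: deque[str] = deque()
max_size, dropped = 3, 0
for intent in ["a", "b", "c", "d"]:
    if len(queue) >= max_size:
        queue.popleft()
        dropped += 1
    queue.append(intent)
print(list(queue), dropped)  # ['b', 'c', 'd'] 1
```

Dropping the oldest intent is the safer default here: stale pre-blackout intents are the ones most likely to fail price revalidation on recovery anyway.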

src/core/kill_switch.py

@@ -0,0 +1,72 @@
"""Kill switch orchestration for emergency risk actions.

Order is fixed:
1) block new orders
2) cancel pending orders
3) refresh order state
4) reduce risk
5) snapshot and notify
"""
from __future__ import annotations

import inspect
from collections.abc import Awaitable, Callable
from dataclasses import dataclass, field
from typing import Any

StepCallable = Callable[[], Any | Awaitable[Any]]


@dataclass
class KillSwitchReport:
    reason: str
    steps: list[str] = field(default_factory=list)
    errors: list[str] = field(default_factory=list)


class KillSwitchOrchestrator:
    def __init__(self) -> None:
        self.new_orders_blocked = False

    async def _run_step(
        self,
        report: KillSwitchReport,
        name: str,
        fn: StepCallable | None,
    ) -> None:
        report.steps.append(name)
        if fn is None:
            return
        try:
            result = fn()
            if inspect.isawaitable(result):
                await result
        except Exception as exc:  # pragma: no cover - intentionally resilient
            report.errors.append(f"{name}: {exc}")

    async def trigger(
        self,
        *,
        reason: str,
        cancel_pending_orders: StepCallable | None = None,
        refresh_order_state: StepCallable | None = None,
        reduce_risk: StepCallable | None = None,
        snapshot_state: StepCallable | None = None,
        notify: StepCallable | None = None,
    ) -> KillSwitchReport:
        report = KillSwitchReport(reason=reason)
        self.new_orders_blocked = True
        report.steps.append("block_new_orders")
        await self._run_step(report, "cancel_pending_orders", cancel_pending_orders)
        await self._run_step(report, "refresh_order_state", refresh_order_state)
        await self._run_step(report, "reduce_risk", reduce_risk)
        await self._run_step(report, "snapshot_state", snapshot_state)
        await self._run_step(report, "notify", notify)
        return report

    def clear_block(self) -> None:
        self.new_orders_blocked = False
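The resilient-step pattern in `_run_step` is the core of the orchestrator: every step is recorded, sync and async callables are both accepted (via `inspect.isawaitable`), and a failing step is converted into a report entry rather than aborting the remaining safety actions. A self-contained sketch of that pattern (`run_step`, `emergency_stop`, and the sample callables are illustrative, using a plain dict in place of `KillSwitchReport`):

```python
import asyncio
import inspect

async def run_step(report: dict, name: str, fn) -> None:
    # Record the step name, accept sync or async callables, and turn
    # failures into report entries instead of letting them propagate.
    report["steps"].append(name)
    if fn is None:
        return
    try:
        result = fn()
        if inspect.isawaitable(result):
            await result
    except Exception as exc:
        report["errors"].append(f"{name}: {exc}")

async def emergency_stop() -> dict:
    report = {"steps": [], "errors": []}

    async def cancel_orders() -> None:  # async step that succeeds
        pass

    def reduce_risk() -> None:  # sync step that fails mid-sequence
        raise RuntimeError("broker unreachable")

    await run_step(report, "cancel_pending_orders", cancel_orders)
    await run_step(report, "reduce_risk", reduce_risk)
    await run_step(report, "notify", None)  # optional hook, skipped
    return report

report = asyncio.run(emergency_stop())
print(report["steps"])   # ['cancel_pending_orders', 'reduce_risk', 'notify']
print(report["errors"])  # ['reduce_risk: broker unreachable']
```

This is why the docstring can promise a fixed order: a dead notification channel or an unreachable broker degrades the report, never the sequence.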

src/core/order_policy.py

@@ -0,0 +1,99 @@
"""Session-aware order policy guards.

Default policy:
- Low-liquidity sessions must reject market orders (price <= 0).
"""
from __future__ import annotations

from dataclasses import dataclass
from datetime import UTC, datetime, time
from zoneinfo import ZoneInfo

from src.markets.schedule import MarketInfo

_LOW_LIQUIDITY_SESSIONS = {"NXT_AFTER", "US_PRE", "US_DAY", "US_AFTER"}


class OrderPolicyRejectedError(Exception):
    """Raised when an order violates session policy."""

    def __init__(self, message: str, *, session_id: str, market_code: str) -> None:
        super().__init__(message)
        self.session_id = session_id
        self.market_code = market_code


@dataclass(frozen=True)
class SessionInfo:
    session_id: str
    is_low_liquidity: bool


def classify_session_id(market: MarketInfo, now: datetime | None = None) -> str:
    """Classify current session by KST schedule used in v3 docs."""
    now = now or datetime.now(UTC)
    # v3 session tables are explicitly defined in KST perspective.
    kst_time = now.astimezone(ZoneInfo("Asia/Seoul")).timetz().replace(tzinfo=None)
    if market.code == "KR":
        if time(8, 0) <= kst_time < time(8, 50):
            return "NXT_PRE"
        if time(9, 0) <= kst_time < time(15, 30):
            return "KRX_REG"
        if time(15, 30) <= kst_time < time(20, 0):
            return "NXT_AFTER"
        return "KR_OFF"
    if market.code.startswith("US"):
        if time(10, 0) <= kst_time < time(18, 0):
            return "US_DAY"
        if time(18, 0) <= kst_time < time(23, 30):
            return "US_PRE"
        if time(23, 30) <= kst_time or kst_time < time(6, 0):
            return "US_REG"
        if time(6, 0) <= kst_time < time(7, 0):
            return "US_AFTER"
        return "US_OFF"
    return "GENERIC_REG"


def get_session_info(market: MarketInfo, now: datetime | None = None) -> SessionInfo:
    session_id = classify_session_id(market, now)
    return SessionInfo(
        session_id=session_id, is_low_liquidity=session_id in _LOW_LIQUIDITY_SESSIONS
    )


def validate_order_policy(
    *,
    market: MarketInfo,
    order_type: str,
    price: float,
    now: datetime | None = None,
) -> SessionInfo:
    """Validate order against session policy and return resolved session info."""
    info = get_session_info(market, now)
    is_market_order = price <= 0
    if info.is_low_liquidity and is_market_order:
        raise OrderPolicyRejectedError(
            f"Market order is forbidden in low-liquidity session ({info.session_id})",
            session_id=info.session_id,
            market_code=market.code,
        )
    # Guard against accidental unsupported actions.
    if order_type not in {"BUY", "SELL"}:
        raise OrderPolicyRejectedError(
            f"Unsupported order_type={order_type}",
            session_id=info.session_id,
            market_code=market.code,
        )
    return info


# Backward compatibility alias
OrderPolicyRejected = OrderPolicyRejectedError


@@ -28,9 +28,7 @@ class PriorityTask:
# Task data not used in comparison
task_id: str = field(compare=False)
task_data: dict[str, Any] = field(compare=False, default_factory=dict)
callback: Callable[[], Coroutine[Any, Any, Any]] | None = field(compare=False, default=None)


@dataclass


@@ -25,7 +25,7 @@ class CircuitBreakerTripped(SystemExit):
)


class FatFingerRejectedError(Exception):
    """Raised when an order exceeds the maximum allowed proportion of cash."""

    def __init__(self, order_amount: float, total_cash: float, max_pct: float) -> None:
@@ -61,7 +61,7 @@ class RiskManager:
def check_fat_finger(self, order_amount: float, total_cash: float) -> None:
    """Reject orders that exceed the maximum proportion of available cash."""
    if total_cash <= 0:
        raise FatFingerRejectedError(order_amount, total_cash, self._ff_max_pct)
    ratio_pct = (order_amount / total_cash) * 100
    if ratio_pct > self._ff_max_pct:
@@ -69,7 +69,7 @@ class RiskManager:
    "Fat finger check failed",
    extra={"order_amount": order_amount},
)
raise FatFingerRejectedError(order_amount, total_cash, self._ff_max_pct)

def validate_order(
    self,
@@ -81,3 +81,7 @@ class RiskManager:
self.check_circuit_breaker(current_pnl_pct)
self.check_fat_finger(order_amount, total_cash)
logger.info("Order passed risk validation")


# Backward compatibility alias
FatFingerRejected = FatFingerRejectedError


@@ -5,7 +5,7 @@ from __future__ import annotations
import json
import os
import sqlite3
from datetime import UTC, datetime
from pathlib import Path
from typing import Any
@@ -188,10 +188,7 @@ def create_dashboard_app(db_path: str, mode: str = "paper") -> FastAPI:
return {
    "market": "all",
    "combined": combined,
    "by_market": [_row_to_performance(row) for row in by_market_rows],
}

row = conn.execute(
@@ -401,7 +398,7 @@ def create_dashboard_app(db_path: str, mode: str = "paper") -> FastAPI:
    """
).fetchall()
now = datetime.now(UTC)
positions = []
for row in rows:
    entry_time_str = row["entry_time"]


@@ -9,7 +9,6 @@ from __future__ import annotations
import logging
from dataclasses import dataclass
from datetime import datetime, timedelta

logger = logging.getLogger(__name__)

src/db.py

@@ -31,8 +31,12 @@ def init_db(db_path: str) -> sqlite3.Connection:
quantity INTEGER,
price REAL,
pnl REAL DEFAULT 0.0,
strategy_pnl REAL DEFAULT 0.0,
fx_pnl REAL DEFAULT 0.0,
market TEXT DEFAULT 'KR',
exchange_code TEXT DEFAULT 'KRX',
session_id TEXT DEFAULT 'UNKNOWN',
selection_context TEXT,
decision_id TEXT,
mode TEXT DEFAULT 'paper'
)
@@ -53,6 +57,32 @@ def init_db(db_path: str) -> sqlite3.Connection:
    conn.execute("ALTER TABLE trades ADD COLUMN decision_id TEXT")
if "mode" not in columns:
    conn.execute("ALTER TABLE trades ADD COLUMN mode TEXT DEFAULT 'paper'")
session_id_added = False
if "session_id" not in columns:
    conn.execute("ALTER TABLE trades ADD COLUMN session_id TEXT DEFAULT 'UNKNOWN'")
    session_id_added = True
if "strategy_pnl" not in columns:
    conn.execute("ALTER TABLE trades ADD COLUMN strategy_pnl REAL DEFAULT 0.0")
if "fx_pnl" not in columns:
    conn.execute("ALTER TABLE trades ADD COLUMN fx_pnl REAL DEFAULT 0.0")
# Backfill legacy rows where only pnl existed before split accounting columns.
conn.execute(
    """
    UPDATE trades
    SET strategy_pnl = pnl, fx_pnl = 0.0
    WHERE pnl != 0.0
      AND strategy_pnl = 0.0
      AND fx_pnl = 0.0
    """
)
if session_id_added:
    conn.execute(
        """
        UPDATE trades
        SET session_id = 'UNKNOWN'
        WHERE session_id IS NULL OR session_id = ''
        """
    )
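The migration above is idempotent by construction: columns are added only when `PRAGMA table_info` says they are missing, and the backfill only touches rows the split-accounting columns have never populated. A self-contained sketch of the same pattern against an in-memory database (table contents are example values):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE trades (id INTEGER PRIMARY KEY, pnl REAL DEFAULT 0.0)")
conn.execute("INSERT INTO trades (pnl) VALUES (12.5)")

# Same idempotent pattern as init_db above: inspect existing columns via
# PRAGMA table_info, ALTER only when missing, then backfill legacy rows.
columns = {row[1] for row in conn.execute("PRAGMA table_info(trades)").fetchall()}
if "strategy_pnl" not in columns:
    conn.execute("ALTER TABLE trades ADD COLUMN strategy_pnl REAL DEFAULT 0.0")
if "fx_pnl" not in columns:
    conn.execute("ALTER TABLE trades ADD COLUMN fx_pnl REAL DEFAULT 0.0")
conn.execute(
    """
    UPDATE trades
    SET strategy_pnl = pnl, fx_pnl = 0.0
    WHERE pnl != 0.0 AND strategy_pnl = 0.0 AND fx_pnl = 0.0
    """
)
print(conn.execute("SELECT pnl, strategy_pnl, fx_pnl FROM trades").fetchone())
# (12.5, 12.5, 0.0)
```

Running the block twice leaves the data unchanged, which is exactly the property a startup-time migration needs.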
# Context tree tables for multi-layered memory management
conn.execute(
@@ -79,6 +109,7 @@ def init_db(db_path: str) -> sqlite3.Connection:
stock_code TEXT NOT NULL,
market TEXT NOT NULL,
exchange_code TEXT NOT NULL,
session_id TEXT DEFAULT 'UNKNOWN',
action TEXT NOT NULL,
confidence INTEGER NOT NULL,
rationale TEXT NOT NULL,
@@ -91,6 +122,26 @@ def init_db(db_path: str) -> sqlite3.Connection:
)
"""
)
decision_columns = {
    row[1] for row in conn.execute("PRAGMA table_info(decision_logs)").fetchall()
}
if "session_id" not in decision_columns:
    conn.execute("ALTER TABLE decision_logs ADD COLUMN session_id TEXT DEFAULT 'UNKNOWN'")
    conn.execute(
        """
        UPDATE decision_logs
        SET session_id = 'UNKNOWN'
        WHERE session_id IS NULL OR session_id = ''
        """
    )
if "outcome_pnl" not in decision_columns:
    conn.execute("ALTER TABLE decision_logs ADD COLUMN outcome_pnl REAL")
if "outcome_accuracy" not in decision_columns:
    conn.execute("ALTER TABLE decision_logs ADD COLUMN outcome_accuracy INTEGER")
if "reviewed" not in decision_columns:
    conn.execute("ALTER TABLE decision_logs ADD COLUMN reviewed INTEGER DEFAULT 0")
if "review_notes" not in decision_columns:
    conn.execute("ALTER TABLE decision_logs ADD COLUMN review_notes TEXT")
conn.execute(
    """
@@ -133,9 +184,7 @@ def init_db(db_path: str) -> sqlite3.Connection:
conn.execute(
    "CREATE INDEX IF NOT EXISTS idx_decision_logs_timestamp ON decision_logs(timestamp)"
)
conn.execute("CREATE INDEX IF NOT EXISTS idx_decision_logs_reviewed ON decision_logs(reviewed)")
conn.execute(
    "CREATE INDEX IF NOT EXISTS idx_decision_logs_confidence ON decision_logs(confidence)"
)
@@ -171,8 +220,11 @@ def log_trade(
quantity: int = 0,
price: float = 0.0,
pnl: float = 0.0,
strategy_pnl: float | None = None,
fx_pnl: float | None = None,
market: str = "KR",
exchange_code: str = "KRX",
session_id: str | None = None,
selection_context: dict[str, any] | None = None,
decision_id: str | None = None,
mode: str = "paper",
@@ -187,24 +239,37 @@ def log_trade(
    rationale: AI decision rationale
    quantity: Number of shares
    price: Trade price
    pnl: Total profit/loss (backward compatibility)
    strategy_pnl: Strategy PnL component
    fx_pnl: FX PnL component
    market: Market code
    exchange_code: Exchange code
    session_id: Session identifier (if omitted, auto-derived from market)
    selection_context: Scanner selection data (RSI, volume_ratio, signal, score)
    decision_id: Unique decision identifier for audit linking
    mode: Trading mode ('paper' or 'live') for data separation
"""
# Serialize selection context to JSON
context_json = json.dumps(selection_context) if selection_context else None
resolved_session_id = _resolve_session_id(market=market, session_id=session_id)
if strategy_pnl is None and fx_pnl is None:
    strategy_pnl = pnl
    fx_pnl = 0.0
elif strategy_pnl is None:
    strategy_pnl = pnl - float(fx_pnl or 0.0) if pnl != 0.0 else 0.0
elif fx_pnl is None:
    fx_pnl = pnl - float(strategy_pnl) if pnl != 0.0 else 0.0
if pnl == 0.0 and (strategy_pnl or fx_pnl):
    pnl = float(strategy_pnl) + float(fx_pnl)
conn.execute(
    """
    INSERT INTO trades (
        timestamp, stock_code, action, confidence, rationale,
        quantity, price, pnl, strategy_pnl, fx_pnl,
        market, exchange_code, session_id, selection_context, decision_id, mode
    )
    VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
    """,
    (
        datetime.now(UTC).isoformat(),
@@ -215,8 +280,11 @@ def log_trade(
        quantity,
        price,
        pnl,
        strategy_pnl,
        fx_pnl,
        market,
        exchange_code,
        resolved_session_id,
        context_json,
        decision_id,
        mode,
@@ -225,13 +293,53 @@ def log_trade(
conn.commit()
def _resolve_session_id(*, market: str, session_id: str | None) -> str:
    if session_id:
        return session_id
    try:
        from src.core.order_policy import classify_session_id
        from src.markets.schedule import MARKETS

        market_info = MARKETS.get(market)
        if market_info is not None:
            return classify_session_id(market_info)
    except Exception:
        pass
    return "UNKNOWN"
def get_latest_buy_trade(
    conn: sqlite3.Connection,
    stock_code: str,
    market: str,
    exchange_code: str | None = None,
) -> dict[str, Any] | None:
    """Fetch the most recent BUY trade for a stock and market."""
    if exchange_code:
        cursor = conn.execute(
            """
            SELECT decision_id, price, quantity, selection_context
            FROM trades
            WHERE stock_code = ?
              AND market = ?
              AND action = 'BUY'
              AND decision_id IS NOT NULL
              AND (
                  exchange_code = ?
                  OR exchange_code IS NULL
                  OR exchange_code = ''
              )
            ORDER BY
                CASE WHEN exchange_code = ? THEN 0 ELSE 1 END,
                timestamp DESC
            LIMIT 1
            """,
            (stock_code, market, exchange_code, exchange_code),
        )
    else:
        cursor = conn.execute(
            """
            SELECT decision_id, price, quantity, selection_context
            FROM trades
            WHERE stock_code = ?
              AND market = ?
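The `ORDER BY CASE` in the exchange-aware branch is doing two jobs at once: exact exchange matches rank first, legacy rows with a NULL or empty `exchange_code` rank second, and recency breaks ties within each rank. A self-contained sketch against sample rows (table layout is reduced to the relevant columns; the data values are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE trades (stock_code TEXT, exchange_code TEXT, timestamp TEXT)")
conn.executemany(
    "INSERT INTO trades VALUES (?, ?, ?)",
    [
        ("AAPL", "NYSE", "2026-03-01T10:00:00"),
        ("AAPL", "NASD", "2026-03-01T11:00:00"),
        ("AAPL", "",     "2026-03-02T09:00:00"),  # newer legacy row
    ],
)

# Same ranking trick as get_latest_buy_trade above: exact exchange
# matches sort first (CASE ... THEN 0), legacy NULL/empty rows second,
# with timestamp DESC breaking ties inside each rank.
row = conn.execute(
    """
    SELECT exchange_code, timestamp FROM trades
    WHERE stock_code = ?
      AND (exchange_code = ? OR exchange_code IS NULL OR exchange_code = '')
    ORDER BY CASE WHEN exchange_code = ? THEN 0 ELSE 1 END, timestamp DESC
    LIMIT 1
    """,
    ("AAPL", "NYSE", "NYSE"),
).fetchone()
print(row)  # ('NYSE', '2026-03-01T10:00:00') despite the newer legacy row
```

A plain `ORDER BY timestamp DESC` would instead pick the legacy row and mis-attribute the position's exchange after the `exchange_code` column was introduced.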
@@ -254,10 +362,11 @@ def get_open_position(
"""Return open position if latest trade is BUY, else None."""
cursor = conn.execute(
    """
    SELECT action, decision_id, price, quantity, timestamp
    FROM trades
    WHERE stock_code = ?
      AND market = ?
      AND action IN ('BUY', 'SELL')
    ORDER BY timestamp DESC
    LIMIT 1
    """,
@@ -266,12 +375,10 @@ def get_open_position(
row = cursor.fetchone()
if not row or row[0] != "BUY":
    return None
return {"decision_id": row[1], "price": row[2], "quantity": row[3], "timestamp": row[4]}

def get_recent_symbols(conn: sqlite3.Connection, market: str, limit: int = 30) -> list[str]:
    """Return recent unique symbols for a market, newest first."""
    cursor = conn.execute(
        """


@@ -90,9 +90,7 @@ class ABTester:
sharpe_ratio = None
if len(pnls) > 1:
    mean_return = avg_pnl
    std_return = (sum((p - mean_return) ** 2 for p in pnls) / (len(pnls) - 1)) ** 0.5
    if std_return > 0:
        sharpe_ratio = mean_return / std_return
@@ -198,8 +196,7 @@ class ABTester:
if meets_criteria:
    logger.info(
        "Strategy '%s' meets deployment criteria: win_rate=%.2f%%, trades=%d, avg_pnl=%.2f",
        result.winner,
        winning_perf.win_rate,
        winning_perf.total_trades,


@@ -60,9 +60,7 @@ class DailyReviewer:
if isinstance(scenario_match, dict) and scenario_match:
    matched += 1
scenario_match_rate = (
    round((matched / total_decisions) * 100, 2) if total_decisions else 0.0
)

trade_stats = self._conn.execute(


@@ -9,6 +9,7 @@ This module:
from __future__ import annotations

import ast
import json
import logging
import sqlite3
@@ -28,24 +29,24 @@ from src.logging.decision_logger import DecisionLogger
logger = logging.getLogger(__name__)

STRATEGIES_DIR = Path("src/strategies")

STRATEGY_TEMPLATE = """\
\"\"\"Auto-generated strategy: {name}

Generated at: {timestamp}
Rationale: {rationale}
\"\"\"
from __future__ import annotations

from typing import Any

from src.strategies.base import BaseStrategy


class {class_name}(BaseStrategy):
    \"\"\"Strategy: {name}\"\"\"

    def evaluate(self, market_data: dict[str, Any]) -> dict[str, Any]:
{body}
"""
class EvolutionOptimizer:
@@ -79,7 +80,8 @@ class EvolutionOptimizer:
# Convert to dict format for analysis
failures = []
for decision in losing_decisions:
    failures.append(
        {
            "decision_id": decision.decision_id,
            "timestamp": decision.timestamp,
            "stock_code": decision.stock_code,
"stock_code": decision.stock_code, "stock_code": decision.stock_code,
@@ -92,13 +94,12 @@ class EvolutionOptimizer:
"outcome_accuracy": decision.outcome_accuracy, "outcome_accuracy": decision.outcome_accuracy,
"context_snapshot": decision.context_snapshot, "context_snapshot": decision.context_snapshot,
"input_data": decision.input_data, "input_data": decision.input_data,
}) }
)
return failures

def identify_failure_patterns(self, failures: list[dict[str, Any]]) -> dict[str, Any]:
    """Identify patterns in losing decisions.

    Analyzes:
@@ -142,12 +143,8 @@ class EvolutionOptimizer:
total_confidence += failure.get("confidence", 0)
total_loss += failure.get("outcome_pnl", 0.0)

patterns["avg_confidence"] = round(total_confidence / len(failures), 2) if failures else 0.0
patterns["avg_loss"] = round(total_loss / len(failures), 2) if failures else 0.0

# Convert Counters to regular dicts for JSON serialization
patterns["markets"] = dict(patterns["markets"])
@@ -196,7 +193,8 @@ class EvolutionOptimizer:
prompt = (
    "You are a quantitative trading strategy developer.\n"
    "Analyze these failed trades and their patterns, "
    "then generate an improved strategy.\n\n"
    f"Failure Patterns:\n{json.dumps(patterns, indent=2)}\n\n"
    f"Sample Failed Trades (first 5):\n"
    f"{json.dumps(failures[:5], indent=2, default=str)}\n\n"
@@ -213,7 +211,8 @@ class EvolutionOptimizer:
try:
    response = await self._client.aio.models.generate_content(
        model=self._model_name,
        contents=prompt,
    )
    body = response.text.strip()
except Exception as exc:
@@ -235,7 +234,8 @@ class EvolutionOptimizer:
file_path = STRATEGIES_DIR / file_name

# Indent the body for the class method
normalized_body = textwrap.dedent(body).strip()
indented_body = textwrap.indent(normalized_body, "        ")

# Generate rationale from patterns
rationale = f"Auto-evolved from {len(failures)} failures. "
@@ -247,9 +247,16 @@ class EvolutionOptimizer:
timestamp=datetime.now(UTC).isoformat(),
rationale=rationale,
class_name=class_name,
body=indented_body.rstrip(),
)

try:
    parsed = ast.parse(content, filename=str(file_path))
    compile(parsed, filename=str(file_path), mode="exec")
except SyntaxError as exc:
    logger.warning("Generated strategy failed syntax validation: %s", exc)
    return None

file_path.write_text(content)
logger.info("Generated strategy file: %s", file_path)
return file_path
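The `ast.parse` + `compile` gate added here rejects syntactically broken LLM output before it is ever written to `src/strategies/` or imported. The same check as a standalone helper (`validate_strategy_source` and the sample snippets are illustrative, not project API):

```python
import ast

def validate_strategy_source(source: str, filename: str = "<generated>") -> bool:
    # Same gate as generate_strategy_file above: parse and byte-compile
    # the generated module before it touches disk or the import system.
    try:
        tree = ast.parse(source, filename=filename)
        compile(tree, filename=filename, mode="exec")
    except SyntaxError:
        return False
    return True

good = "class S:\n    def evaluate(self, data):\n        return {'action': 'HOLD'}\n"
bad = "class S:\n    def evaluate(self, data)\n        return {}\n"  # missing ':'
print(validate_strategy_source(good), validate_strategy_source(bad))  # True False
```

Note this catches syntax errors only; the subprocess validation step that follows still handles runtime and behavioral failures, and deletes the file on failure.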
@@ -271,9 +278,7 @@ class EvolutionOptimizer:
    logger.info("Strategy validation PASSED")
    return True
else:
    logger.warning("Strategy validation FAILED:\n%s", result.stdout + result.stderr)
    # Clean up failing strategy
    strategy_path.unlink(missing_ok=True)
    return False


@@ -187,9 +187,7 @@ class PerformanceTracker:
    return metrics

def calculate_improvement_trend(self, metrics_history: list[StrategyMetrics]) -> dict[str, Any]:
    """Calculate improvement trend from historical metrics.

    Args:
@@ -229,9 +227,7 @@ class PerformanceTracker:
        "period_count": len(metrics_history),
    }

def generate_dashboard(self, strategy_name: str | None = None) -> PerformanceDashboard:
    """Generate a comprehensive performance dashboard.

    Args:
@@ -260,9 +256,7 @@ class PerformanceTracker:
        improvement_trend=improvement_trend,
    )

def export_dashboard_json(self, dashboard: PerformanceDashboard) -> str:
    """Export dashboard as JSON string.

    Args:
View File

@@ -19,6 +19,7 @@ class DecisionLog:
stock_code: str
market: str
exchange_code: str
session_id: str
action: str
confidence: int
rationale: str
@@ -47,6 +48,7 @@ class DecisionLogger:
rationale: str, rationale: str,
context_snapshot: dict[str, Any], context_snapshot: dict[str, Any],
input_data: dict[str, Any], input_data: dict[str, Any],
session_id: str | None = None,
) -> str: ) -> str:
"""Log a trading decision with full context. """Log a trading decision with full context.
@@ -59,20 +61,22 @@ class DecisionLogger:
rationale: Reasoning for the decision rationale: Reasoning for the decision
context_snapshot: L1-L7 context snapshot at decision time context_snapshot: L1-L7 context snapshot at decision time
input_data: Market data inputs (price, volume, orderbook, etc.) input_data: Market data inputs (price, volume, orderbook, etc.)
session_id: Runtime session identifier
Returns: Returns:
decision_id: Unique identifier for this decision decision_id: Unique identifier for this decision
""" """
decision_id = str(uuid.uuid4()) decision_id = str(uuid.uuid4())
timestamp = datetime.now(UTC).isoformat() timestamp = datetime.now(UTC).isoformat()
resolved_session = session_id or "UNKNOWN"
self.conn.execute( self.conn.execute(
""" """
INSERT INTO decision_logs ( INSERT INTO decision_logs (
decision_id, timestamp, stock_code, market, exchange_code, decision_id, timestamp, stock_code, market, exchange_code,
action, confidence, rationale, context_snapshot, input_data session_id, action, confidence, rationale, context_snapshot, input_data
) )
VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
""", """,
( (
decision_id, decision_id,
@@ -80,6 +84,7 @@ class DecisionLogger:
stock_code, stock_code,
market, market,
exchange_code, exchange_code,
resolved_session,
action, action,
confidence, confidence,
rationale, rationale,
@@ -106,7 +111,7 @@ class DecisionLogger:
query = """ query = """
SELECT SELECT
decision_id, timestamp, stock_code, market, exchange_code, decision_id, timestamp, stock_code, market, exchange_code,
action, confidence, rationale, context_snapshot, input_data, session_id, action, confidence, rationale, context_snapshot, input_data,
outcome_pnl, outcome_accuracy, reviewed, review_notes outcome_pnl, outcome_accuracy, reviewed, review_notes
FROM decision_logs FROM decision_logs
WHERE reviewed = 0 AND confidence >= ? WHERE reviewed = 0 AND confidence >= ?
@@ -135,9 +140,7 @@ class DecisionLogger:
) )
self.conn.commit() self.conn.commit()
def update_outcome( def update_outcome(self, decision_id: str, pnl: float, accuracy: int) -> None:
self, decision_id: str, pnl: float, accuracy: int
) -> None:
"""Update the outcome of a decision after trade execution. """Update the outcome of a decision after trade execution.
Args: Args:
@@ -168,7 +171,7 @@ class DecisionLogger:
""" """
SELECT SELECT
decision_id, timestamp, stock_code, market, exchange_code, decision_id, timestamp, stock_code, market, exchange_code,
action, confidence, rationale, context_snapshot, input_data, session_id, action, confidence, rationale, context_snapshot, input_data,
outcome_pnl, outcome_accuracy, reviewed, review_notes outcome_pnl, outcome_accuracy, reviewed, review_notes
FROM decision_logs FROM decision_logs
WHERE decision_id = ? WHERE decision_id = ?
@@ -196,7 +199,7 @@ class DecisionLogger:
""" """
SELECT SELECT
decision_id, timestamp, stock_code, market, exchange_code, decision_id, timestamp, stock_code, market, exchange_code,
action, confidence, rationale, context_snapshot, input_data, session_id, action, confidence, rationale, context_snapshot, input_data,
outcome_pnl, outcome_accuracy, reviewed, review_notes outcome_pnl, outcome_accuracy, reviewed, review_notes
FROM decision_logs FROM decision_logs
WHERE confidence >= ? WHERE confidence >= ?
@@ -223,13 +226,14 @@ class DecisionLogger:
stock_code=row[2], stock_code=row[2],
market=row[3], market=row[3],
exchange_code=row[4], exchange_code=row[4],
action=row[5], session_id=row[5] or "UNKNOWN",
confidence=row[6], action=row[6],
rationale=row[7], confidence=row[7],
context_snapshot=json.loads(row[8]), rationale=row[8],
input_data=json.loads(row[9]), context_snapshot=json.loads(row[9]),
outcome_pnl=row[10], input_data=json.loads(row[10]),
outcome_accuracy=row[11], outcome_pnl=row[11],
reviewed=bool(row[12]), outcome_accuracy=row[12],
review_notes=row[13], reviewed=bool(row[13]),
review_notes=row[14],
) )
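The row-unpacking change above follows directly from inserting `session_id` at position 5 of the SELECT list: every index from 5 onward shifts by one. A quick sanity sketch of the new column order (restated from the diff, not imported from the codebase):

```python
# SELECT column order after the session_id migration; indices 5+ shift by one.
columns = [
    "decision_id", "timestamp", "stock_code", "market", "exchange_code",
    "session_id", "action", "confidence", "rationale",
    "context_snapshot", "input_data",
    "outcome_pnl", "outcome_accuracy", "reviewed", "review_notes",
]
print(columns.index("session_id"))  # 5
print(columns.index("action"))      # 6 (was 5 before the migration)
print(len(columns))                 # 15; the INSERT binds the first 11 of these
```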

File diff suppressed because it is too large


@@ -1,7 +1,7 @@
 """Market schedule management with timezone support."""
 from dataclasses import dataclass
-from datetime import datetime, time, timedelta
+from datetime import UTC, datetime, time, timedelta
 from zoneinfo import ZoneInfo
@@ -181,7 +181,10 @@ def is_market_open(market: MarketInfo, now: datetime | None = None) -> bool:
 def get_open_markets(
-    enabled_markets: list[str] | None = None, now: datetime | None = None
+    enabled_markets: list[str] | None = None,
+    now: datetime | None = None,
+    *,
+    include_extended_sessions: bool = False,
 ) -> list[MarketInfo]:
     """
     Get list of currently open markets.
@@ -196,17 +199,29 @@ def get_open_markets(
     if enabled_markets is None:
         enabled_markets = list(MARKETS.keys())
+    def is_available(market: MarketInfo) -> bool:
+        if not include_extended_sessions:
+            return is_market_open(market, now)
+        if market.code == "KR" or market.code.startswith("US"):
+            # Import lazily to avoid module cycle at import-time.
+            from src.core.order_policy import classify_session_id
+            session_id = classify_session_id(market, now)
+            return session_id not in {"KR_OFF", "US_OFF"}
+        return is_market_open(market, now)
     open_markets = [
-        MARKETS[code]
-        for code in enabled_markets
-        if code in MARKETS and is_market_open(MARKETS[code], now)
+        MARKETS[code] for code in enabled_markets if code in MARKETS and is_available(MARKETS[code])
     ]
     return sorted(open_markets, key=lambda m: m.code)
 def get_next_market_open(
-    enabled_markets: list[str] | None = None, now: datetime | None = None
+    enabled_markets: list[str] | None = None,
+    now: datetime | None = None,
+    *,
+    include_extended_sessions: bool = False,
 ) -> tuple[MarketInfo, datetime]:
     """
     Find the next market that will open and when.
@@ -233,6 +248,21 @@ def get_next_market_open(
     next_open_time: datetime | None = None
     next_market: MarketInfo | None = None
+    def first_extended_open_after(market: MarketInfo, start_utc: datetime) -> datetime | None:
+        # Search minute-by-minute for KR/US session transition into active window.
+        # Bounded to 7 days to match existing behavior.
+        from src.core.order_policy import classify_session_id
+        ts = start_utc.astimezone(ZoneInfo("UTC")).replace(second=0, microsecond=0)
+        prev_active = classify_session_id(market, ts) not in {"KR_OFF", "US_OFF"}
+        for _ in range(7 * 24 * 60):
+            ts = ts + timedelta(minutes=1)
+            active = classify_session_id(market, ts) not in {"KR_OFF", "US_OFF"}
+            if active and not prev_active:
+                return ts
+            prev_active = active
+        return None
     for code in enabled_markets:
         if code not in MARKETS:
             continue
@@ -240,12 +270,17 @@
         market = MARKETS[code]
         market_now = now.astimezone(market.timezone)
+        if include_extended_sessions and (market.code == "KR" or market.code.startswith("US")):
+            ext_open = first_extended_open_after(market, now.astimezone(UTC))
+            if ext_open and (next_open_time is None or ext_open < next_open_time):
+                next_open_time = ext_open
+                next_market = market
+            continue
         # Calculate next open time for this market
         for days_ahead in range(7):  # Check next 7 days
             check_date = market_now.date() + timedelta(days=days_ahead)
-            check_datetime = datetime.combine(
-                check_date, market.open_time, tzinfo=market.timezone
-            )
+            check_datetime = datetime.combine(check_date, market.open_time, tzinfo=market.timezone)
             # Skip weekends
             if check_datetime.weekday() >= 5:


@@ -4,7 +4,7 @@ import asyncio
 import logging
 import time
 from collections.abc import Awaitable, Callable
-from dataclasses import dataclass, fields
+from dataclasses import dataclass
 from enum import Enum
 from typing import ClassVar
@@ -136,14 +136,14 @@ class TelegramClient:
         self._enabled = enabled
         self._rate_limiter = LeakyBucket(rate=rate_limit)
         self._session: aiohttp.ClientSession | None = None
-        self._filter = notification_filter if notification_filter is not None else NotificationFilter()
+        self._filter = (
+            notification_filter if notification_filter is not None else NotificationFilter()
+        )
         if not enabled:
             logger.info("Telegram notifications disabled via configuration")
         elif bot_token is None or chat_id is None:
-            logger.warning(
-                "Telegram notifications disabled (missing bot_token or chat_id)"
-            )
+            logger.warning("Telegram notifications disabled (missing bot_token or chat_id)")
             self._enabled = False
         else:
             logger.info("Telegram notifications enabled for chat_id=%s", chat_id)
@@ -209,14 +209,12 @@ class TelegramClient:
             async with session.post(url, json=payload) as resp:
                 if resp.status != 200:
                     error_text = await resp.text()
-                    logger.error(
-                        "Telegram API error (status=%d): %s", resp.status, error_text
-                    )
+                    logger.error("Telegram API error (status=%d): %s", resp.status, error_text)
                     return False
                 logger.debug("Telegram message sent: %s", text[:50])
                 return True
-        except asyncio.TimeoutError:
+        except TimeoutError:
             logger.error("Telegram message timeout")
             return False
         except aiohttp.ClientError as exc:
@@ -305,9 +303,7 @@ class TelegramClient:
             NotificationMessage(priority=NotificationPriority.LOW, message=message)
         )
-    async def notify_circuit_breaker(
-        self, pnl_pct: float, threshold: float
-    ) -> None:
+    async def notify_circuit_breaker(self, pnl_pct: float, threshold: float) -> None:
         """
         Notify circuit breaker activation.
@@ -354,9 +350,7 @@ class TelegramClient:
             NotificationMessage(priority=NotificationPriority.HIGH, message=message)
         )
-    async def notify_system_start(
-        self, mode: str, enabled_markets: list[str]
-    ) -> None:
+    async def notify_system_start(self, mode: str, enabled_markets: list[str]) -> None:
         """
         Notify system startup.
@@ -369,9 +363,7 @@ class TelegramClient:
         mode_emoji = "📝" if mode == "paper" else "💰"
         markets_str = ", ".join(enabled_markets)
         message = (
-            f"<b>{mode_emoji} System Started</b>\n"
-            f"Mode: {mode.upper()}\n"
-            f"Markets: {markets_str}"
+            f"<b>{mode_emoji} System Started</b>\nMode: {mode.upper()}\nMarkets: {markets_str}"
         )
         await self._send_notification(
             NotificationMessage(priority=NotificationPriority.MEDIUM, message=message)
@@ -445,11 +437,7 @@ class TelegramClient:
         """
         if not self._filter.playbook:
             return
-        message = (
-            f"<b>Playbook Failed</b>\n"
-            f"Market: {market}\n"
-            f"Reason: {reason[:200]}"
-        )
+        message = f"<b>Playbook Failed</b>\nMarket: {market}\nReason: {reason[:200]}"
         await self._send_notification(
             NotificationMessage(priority=NotificationPriority.HIGH, message=message)
         )
@@ -469,9 +457,7 @@ class TelegramClient:
             if "circuit breaker" in reason.lower()
             else NotificationPriority.MEDIUM
         )
-        await self._send_notification(
-            NotificationMessage(priority=priority, message=message)
-        )
+        await self._send_notification(NotificationMessage(priority=priority, message=message))
     async def notify_unfilled_order(
         self,
@@ -496,11 +482,7 @@ class TelegramClient:
             return
         # SELL resubmit is high priority — position liquidation at risk.
         # BUY cancel is medium priority — only cash is freed.
-        priority = (
-            NotificationPriority.HIGH
-            if action == "SELL"
-            else NotificationPriority.MEDIUM
-        )
+        priority = NotificationPriority.HIGH if action == "SELL" else NotificationPriority.MEDIUM
         outcome_emoji = "🔄" if outcome == "resubmitted" else ""
         outcome_label = "재주문" if outcome == "resubmitted" else "취소됨"
         action_emoji = "🔴" if action == "SELL" else "🟢"
@@ -515,9 +497,7 @@ class TelegramClient:
         message = "\n".join(lines)
         await self._send_notification(NotificationMessage(priority=priority, message=message))
-    async def notify_error(
-        self, error_type: str, error_msg: str, context: str
-    ) -> None:
+    async def notify_error(self, error_type: str, error_msg: str, context: str) -> None:
         """
         Notify system error.
@@ -541,9 +521,7 @@ class TelegramClient:
 class TelegramCommandHandler:
     """Handles incoming Telegram commands via long polling."""
-    def __init__(
-        self, client: TelegramClient, polling_interval: float = 1.0
-    ) -> None:
+    def __init__(self, client: TelegramClient, polling_interval: float = 1.0) -> None:
         """
         Initialize command handler.
@@ -559,9 +537,7 @@ class TelegramCommandHandler:
         self._polling_task: asyncio.Task[None] | None = None
         self._running = False
-    def register_command(
-        self, command: str, handler: Callable[[], Awaitable[None]]
-    ) -> None:
+    def register_command(self, command: str, handler: Callable[[], Awaitable[None]]) -> None:
         """
         Register a command handler (no arguments).
@@ -672,7 +648,7 @@ class TelegramCommandHandler:
             return updates
-        except asyncio.TimeoutError:
+        except TimeoutError:
             logger.debug("getUpdates timeout (normal)")
             return []
         except aiohttp.ClientError as exc:
@@ -697,9 +673,7 @@ class TelegramCommandHandler:
         # Verify chat_id matches configured chat
         chat_id = str(message.get("chat", {}).get("id", ""))
         if chat_id != self._client._chat_id:
-            logger.warning(
-                "Ignoring command from unauthorized chat_id: %s", chat_id
-            )
+            logger.warning("Ignoring command from unauthorized chat_id: %s", chat_id)
             return
         # Extract command text

src/strategy/exit_rules.py (new file, 104 lines)

@@ -0,0 +1,104 @@
"""Composite exit rules: hard stop, break-even lock, ATR trailing, model assist."""
from __future__ import annotations
from dataclasses import dataclass
from src.strategy.position_state_machine import PositionState, StateTransitionInput, promote_state
@dataclass(frozen=True)
class ExitRuleConfig:
hard_stop_pct: float = -2.0
be_arm_pct: float = 1.2
arm_pct: float = 3.0
atr_multiplier_k: float = 2.2
model_prob_threshold: float = 0.62
@dataclass(frozen=True)
class ExitRuleInput:
current_price: float
entry_price: float
peak_price: float
atr_value: float = 0.0
pred_down_prob: float = 0.0
liquidity_weak: bool = False
@dataclass(frozen=True)
class ExitEvaluation:
state: PositionState
should_exit: bool
reason: str
unrealized_pnl_pct: float
trailing_stop_price: float | None
def evaluate_exit(
*,
current_state: PositionState,
config: ExitRuleConfig,
inp: ExitRuleInput,
) -> ExitEvaluation:
"""Evaluate composite exit logic and return updated state."""
if inp.entry_price <= 0 or inp.current_price <= 0:
return ExitEvaluation(
state=current_state,
should_exit=False,
reason="invalid_price",
unrealized_pnl_pct=0.0,
trailing_stop_price=None,
)
unrealized = (inp.current_price - inp.entry_price) / inp.entry_price * 100.0
hard_stop_hit = unrealized <= config.hard_stop_pct
take_profit_hit = unrealized >= config.arm_pct
trailing_stop_price: float | None = None
trailing_stop_hit = False
if inp.atr_value > 0 and inp.peak_price > 0:
trailing_stop_price = inp.peak_price - (config.atr_multiplier_k * inp.atr_value)
trailing_stop_hit = inp.current_price <= trailing_stop_price
be_lock_threat = current_state in (PositionState.BE_LOCK, PositionState.ARMED) and (
inp.current_price <= inp.entry_price
)
model_exit_signal = inp.pred_down_prob >= config.model_prob_threshold and inp.liquidity_weak
next_state = promote_state(
current=current_state,
inp=StateTransitionInput(
unrealized_pnl_pct=unrealized,
be_arm_pct=config.be_arm_pct,
arm_pct=config.arm_pct,
hard_stop_hit=hard_stop_hit,
trailing_stop_hit=trailing_stop_hit,
model_exit_signal=model_exit_signal,
be_lock_threat=be_lock_threat,
),
)
if hard_stop_hit:
reason = "hard_stop"
elif trailing_stop_hit:
reason = "atr_trailing_stop"
elif be_lock_threat:
reason = "be_lock_threat"
elif model_exit_signal:
reason = "model_liquidity_exit"
elif take_profit_hit:
# Backward-compatible immediate profit-taking path.
reason = "arm_take_profit"
else:
reason = "hold"
should_exit = next_state == PositionState.EXITED or take_profit_hit
return ExitEvaluation(
state=next_state,
should_exit=should_exit,
reason=reason,
unrealized_pnl_pct=unrealized,
trailing_stop_price=trailing_stop_price,
)
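The ATR trailing branch above reduces to simple arithmetic: the stop sits `atr_multiplier_k * ATR` below the peak and trips when the price closes at or below it. A minimal sketch using the default `atr_multiplier_k` from `ExitRuleConfig` (price values are illustrative):

```python
# Trailing-stop arithmetic from evaluate_exit, with ExitRuleConfig defaults.
peak_price = 105.0
atr_value = 1.0
k = 2.2  # ExitRuleConfig.atr_multiplier_k default

trailing_stop_price = peak_price - k * atr_value
print(round(trailing_stop_price, 2))      # 102.8
print(102.5 <= trailing_stop_price)       # True -> reason "atr_trailing_stop"
```

Because the stop is anchored to `peak_price` rather than `entry_price`, it ratchets upward as the position gains and never widens on a pullback.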


@@ -8,12 +8,12 @@ Defines the data contracts for the proactive strategy system:
 from __future__ import annotations
 from datetime import UTC, date, datetime
-from enum import Enum
+from enum import StrEnum
 from pydantic import BaseModel, Field, field_validator
-class ScenarioAction(str, Enum):
+class ScenarioAction(StrEnum):
     """Actions that can be taken by scenarios."""
     BUY = "BUY"
@@ -22,7 +22,7 @@ class ScenarioAction(str, Enum):
     REDUCE_ALL = "REDUCE_ALL"
-class MarketOutlook(str, Enum):
+class MarketOutlook(StrEnum):
     """AI's assessment of market direction."""
     BULLISH = "bullish"
@@ -32,7 +32,7 @@ class MarketOutlook(str, Enum):
     BEARISH = "bearish"
-class PlaybookStatus(str, Enum):
+class PlaybookStatus(StrEnum):
     """Lifecycle status of a playbook."""
     PENDING = "pending"


@@ -6,7 +6,6 @@ Designed for the pre-market strategy system (one playbook per market per day).
 from __future__ import annotations
-import json
 import logging
 import sqlite3
 from datetime import date
@@ -53,8 +52,10 @@ class PlaybookStore:
         row_id = cursor.lastrowid or 0
         logger.info(
             "Saved playbook for %s/%s (%d stocks, %d scenarios)",
-            playbook.date, playbook.market,
-            playbook.stock_count, playbook.scenario_count,
+            playbook.date,
+            playbook.market,
+            playbook.stock_count,
+            playbook.scenario_count,
         )
         return row_id


@@ -0,0 +1,65 @@
"""Position state machine for staged exit control.
State progression is monotonic (promotion-only) except terminal EXITED.
"""
from __future__ import annotations
from dataclasses import dataclass
from enum import StrEnum
class PositionState(StrEnum):
HOLDING = "HOLDING"
BE_LOCK = "BE_LOCK"
ARMED = "ARMED"
EXITED = "EXITED"
_STATE_RANK: dict[PositionState, int] = {
PositionState.HOLDING: 0,
PositionState.BE_LOCK: 1,
PositionState.ARMED: 2,
PositionState.EXITED: 3,
}
@dataclass(frozen=True)
class StateTransitionInput:
unrealized_pnl_pct: float
be_arm_pct: float
arm_pct: float
hard_stop_hit: bool = False
trailing_stop_hit: bool = False
model_exit_signal: bool = False
be_lock_threat: bool = False
def evaluate_exit_first(inp: StateTransitionInput) -> bool:
"""Return True when terminal exit conditions are met.
EXITED must be evaluated before any promotion.
"""
return inp.hard_stop_hit or inp.trailing_stop_hit or inp.model_exit_signal or inp.be_lock_threat
def promote_state(current: PositionState, inp: StateTransitionInput) -> PositionState:
"""Promote to highest admissible state for current tick/bar.
Rules:
- EXITED has highest precedence and is terminal.
- Promotions are monotonic (no downgrade).
"""
if current == PositionState.EXITED:
return PositionState.EXITED
if evaluate_exit_first(inp):
return PositionState.EXITED
target = PositionState.HOLDING
if inp.unrealized_pnl_pct >= inp.arm_pct:
target = PositionState.ARMED
elif inp.unrealized_pnl_pct >= inp.be_arm_pct:
target = PositionState.BE_LOCK
return target if _STATE_RANK[target] > _STATE_RANK[current] else current
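As a standalone check of the monotonic-promotion rule, the logic above can be restated compactly. This is a condensed sketch with shortened names (`Inp`, `promote`), using `str, Enum` instead of `StrEnum` so it also runs on interpreters before Python 3.11:

```python
from dataclasses import dataclass
from enum import Enum

class PositionState(str, Enum):
    HOLDING = "HOLDING"
    BE_LOCK = "BE_LOCK"
    ARMED = "ARMED"
    EXITED = "EXITED"

_RANK = {PositionState.HOLDING: 0, PositionState.BE_LOCK: 1,
         PositionState.ARMED: 2, PositionState.EXITED: 3}

@dataclass(frozen=True)
class Inp:
    unrealized_pnl_pct: float
    be_arm_pct: float = 1.2   # ExitRuleConfig defaults from the diff above
    arm_pct: float = 3.0
    hard_stop_hit: bool = False

def promote(current: PositionState, inp: Inp) -> PositionState:
    # EXITED is terminal and has highest precedence.
    if current == PositionState.EXITED or inp.hard_stop_hit:
        return PositionState.EXITED
    target = PositionState.HOLDING
    if inp.unrealized_pnl_pct >= inp.arm_pct:
        target = PositionState.ARMED
    elif inp.unrealized_pnl_pct >= inp.be_arm_pct:
        target = PositionState.BE_LOCK
    # Monotonic: never downgrade below the current rank.
    return target if _RANK[target] > _RANK[current] else current

s = promote(PositionState.HOLDING, Inp(unrealized_pnl_pct=1.5))
print(s.name)  # BE_LOCK
s = promote(s, Inp(unrealized_pnl_pct=0.2))
print(s.name)  # BE_LOCK — pullback does not demote the state
print(promote(s, Inp(unrealized_pnl_pct=0.0, hard_stop_hit=True)).name)  # EXITED
```

The second call is the interesting one: unrealized P&L has fallen back below `be_arm_pct`, but the rank comparison keeps the state at BE_LOCK, which is exactly the promotion-only guarantee the module docstring claims.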


@@ -124,12 +124,14 @@ class PreMarketPlanner:
         # 4. Parse response
         playbook = self._parse_response(
-            decision.rationale, today, market, candidates, cross_market,
+            decision.rationale,
+            today,
+            market,
+            candidates,
+            cross_market,
             current_holdings=current_holdings,
         )
-        playbook_with_tokens = playbook.model_copy(
-            update={"token_count": decision.token_count}
-        )
+        playbook_with_tokens = playbook.model_copy(update={"token_count": decision.token_count})
         logger.info(
             "Generated playbook for %s: %d stocks, %d scenarios, %d tokens",
             market,
@@ -146,7 +148,9 @@ class PreMarketPlanner:
         return self._empty_playbook(today, market)
     def build_cross_market_context(
-        self, target_market: str, today: date | None = None,
+        self,
+        target_market: str,
+        today: date | None = None,
     ) -> CrossMarketContext | None:
         """Build cross-market context from the other market's L6 data.
@@ -192,7 +196,9 @@ class PreMarketPlanner:
         )
     def build_self_market_scorecard(
-        self, market: str, today: date | None = None,
+        self,
+        market: str,
+        today: date | None = None,
     ) -> dict[str, Any] | None:
         """Build previous-day scorecard for the same market."""
         if today is None:
@@ -320,18 +326,18 @@ class PreMarketPlanner:
             f"{context_text}\n"
             f"## Instructions\n"
             f"Return a JSON object with this exact structure:\n"
-            f'{{\n'
+            f"{{\n"
             f' "market_outlook": "bullish|neutral_to_bullish|neutral'
             f'|neutral_to_bearish|bearish",\n'
             f' "global_rules": [\n'
             f' {{"condition": "portfolio_pnl_pct < -2.0",'
             f' "action": "REDUCE_ALL", "rationale": "..."}}\n'
-            f' ],\n'
+            f" ],\n"
             f' "stocks": [\n'
-            f' {{\n'
+            f" {{\n"
             f' "stock_code": "...",\n'
             f' "scenarios": [\n'
-            f' {{\n'
+            f" {{\n"
             f' "condition": {{"rsi_below": 30, "volume_ratio_above": 2.0,'
             f' "unrealized_pnl_pct_above": 3.0, "holding_days_above": 5}},\n'
             f' "action": "BUY|SELL|HOLD",\n'
@@ -340,11 +346,11 @@ class PreMarketPlanner:
             f' "stop_loss_pct": -2.0,\n'
             f' "take_profit_pct": 3.0,\n'
             f' "rationale": "..."\n'
-            f' }}\n'
-            f' ]\n'
-            f' }}\n'
-            f' ]\n'
-            f'}}\n\n'
+            f" }}\n"
+            f" ]\n"
+            f" }}\n"
+            f" ]\n"
+            f"}}\n\n"
             f"Rules:\n"
             f"- Max {max_scenarios} scenarios per stock\n"
             f"- Candidates list is the primary source for BUY candidates\n"
@@ -575,8 +581,7 @@ class PreMarketPlanner:
                         stop_loss_pct=-3.0,
                         take_profit_pct=5.0,
                         rationale=(
-                            f"Rule-based BUY: oversold signal, "
-                            f"RSI={c.rsi:.0f} (fallback planner)"
+                            f"Rule-based BUY: oversold signal, RSI={c.rsi:.0f} (fallback planner)"
                         ),
                     )
                 )


@@ -107,7 +107,9 @@ class ScenarioEngine:
         # 2. Find stock playbook
         stock_pb = playbook.get_stock_playbook(stock_code)
         if stock_pb is None:
-            logger.debug("No playbook for %s — defaulting to %s", stock_code, playbook.default_action)
+            logger.debug(
+                "No playbook for %s — defaulting to %s", stock_code, playbook.default_action
+            )
             return ScenarioMatch(
                 stock_code=stock_code,
                 matched_scenario=None,
@@ -135,7 +137,9 @@
         )
         # 4. No match — default action
-        logger.debug("No scenario matched for %s — defaulting to %s", stock_code, playbook.default_action)
+        logger.debug(
+            "No scenario matched for %s — defaulting to %s", stock_code, playbook.default_action
+        )
         return ScenarioMatch(
             stock_code=stock_code,
             matched_scenario=None,
@@ -198,17 +202,27 @@ class ScenarioEngine:
             checks.append(price is not None and price < condition.price_below)
         price_change_pct = self._safe_float(market_data.get("price_change_pct"))
-        if condition.price_change_pct_above is not None or condition.price_change_pct_below is not None:
+        if (
+            condition.price_change_pct_above is not None
+            or condition.price_change_pct_below is not None
+        ):
             if "price_change_pct" not in market_data:
                 self._warn_missing_key("price_change_pct")
             if condition.price_change_pct_above is not None:
-                checks.append(price_change_pct is not None and price_change_pct > condition.price_change_pct_above)
+                checks.append(
+                    price_change_pct is not None and price_change_pct > condition.price_change_pct_above
+                )
             if condition.price_change_pct_below is not None:
-                checks.append(price_change_pct is not None and price_change_pct < condition.price_change_pct_below)
+                checks.append(
+                    price_change_pct is not None and price_change_pct < condition.price_change_pct_below
+                )
         # Position-aware conditions
         unrealized_pnl_pct = self._safe_float(market_data.get("unrealized_pnl_pct"))
-        if condition.unrealized_pnl_pct_above is not None or condition.unrealized_pnl_pct_below is not None:
+        if (
+            condition.unrealized_pnl_pct_above is not None
+            or condition.unrealized_pnl_pct_below is not None
+        ):
             if "unrealized_pnl_pct" not in market_data:
                 self._warn_missing_key("unrealized_pnl_pct")
             if condition.unrealized_pnl_pct_above is not None:
@@ -227,15 +241,9 @@ class ScenarioEngine:
             if "holding_days" not in market_data:
                 self._warn_missing_key("holding_days")
             if condition.holding_days_above is not None:
-                checks.append(
-                    holding_days is not None
-                    and holding_days > condition.holding_days_above
-                )
+                checks.append(holding_days is not None and holding_days > condition.holding_days_above)
             if condition.holding_days_below is not None:
-                checks.append(
-                    holding_days is not None
-                    and holding_days < condition.holding_days_below
-                )
+                checks.append(holding_days is not None and holding_days < condition.holding_days_below)
         return len(checks) > 0 and all(checks)
@@ -295,9 +303,15 @@ class ScenarioEngine:
             details["volume_ratio"] = self._safe_float(market_data.get("volume_ratio"))
         if condition.price_above is not None or condition.price_below is not None:
             details["current_price"] = self._safe_float(market_data.get("current_price"))
-        if condition.price_change_pct_above is not None or condition.price_change_pct_below is not None:
+        if (
+            condition.price_change_pct_above is not None
+            or condition.price_change_pct_below is not None
+        ):
             details["price_change_pct"] = self._safe_float(market_data.get("price_change_pct"))
-        if condition.unrealized_pnl_pct_above is not None or condition.unrealized_pnl_pct_below is not None:
+        if (
+            condition.unrealized_pnl_pct_above is not None
+            or condition.unrealized_pnl_pct_below is not None
+        ):
             details["unrealized_pnl_pct"] = self._safe_float(market_data.get("unrealized_pnl_pct"))
         if condition.holding_days_above is not None or condition.holding_days_below is not None:
             details["holding_days"] = self._safe_float(market_data.get("holding_days"))


@@ -0,0 +1,115 @@
from __future__ import annotations
import pytest
from src.analysis.backtest_cost_guard import BacktestCostModel, validate_backtest_cost_model
def test_valid_backtest_cost_model_passes() -> None:
model = BacktestCostModel(
commission_bps=5.0,
slippage_bps_by_session={"KRX_REG": 10.0, "US_PRE": 50.0},
failure_rate_by_session={"KRX_REG": 0.01, "US_PRE": 0.08},
partial_fill_rate_by_session={"KRX_REG": 0.1, "US_PRE": 0.2},
unfavorable_fill_required=True,
)
validate_backtest_cost_model(model=model, required_sessions=["KRX_REG", "US_PRE"])
def test_missing_required_slippage_session_raises() -> None:
model = BacktestCostModel(
commission_bps=5.0,
slippage_bps_by_session={"KRX_REG": 10.0},
failure_rate_by_session={"KRX_REG": 0.01, "US_PRE": 0.08},
partial_fill_rate_by_session={"KRX_REG": 0.1, "US_PRE": 0.2},
unfavorable_fill_required=True,
)
with pytest.raises(ValueError, match="missing slippage_bps_by_session.*US_PRE"):
validate_backtest_cost_model(model=model, required_sessions=["KRX_REG", "US_PRE"])
def test_missing_required_failure_rate_session_raises() -> None:
model = BacktestCostModel(
commission_bps=5.0,
slippage_bps_by_session={"KRX_REG": 10.0, "US_PRE": 50.0},
failure_rate_by_session={"KRX_REG": 0.01},
partial_fill_rate_by_session={"KRX_REG": 0.1, "US_PRE": 0.2},
unfavorable_fill_required=True,
)
with pytest.raises(ValueError, match="missing failure_rate_by_session.*US_PRE"):
validate_backtest_cost_model(model=model, required_sessions=["KRX_REG", "US_PRE"])
def test_invalid_failure_rate_range_raises() -> None:
model = BacktestCostModel(
commission_bps=5.0,
slippage_bps_by_session={"KRX_REG": 10.0},
failure_rate_by_session={"KRX_REG": 1.2},
partial_fill_rate_by_session={"KRX_REG": 0.2},
unfavorable_fill_required=True,
)
with pytest.raises(ValueError, match="failure rate must be within"):
validate_backtest_cost_model(model=model, required_sessions=["KRX_REG"])
def test_unfavorable_fill_requirement_cannot_be_disabled() -> None:
model = BacktestCostModel(
commission_bps=5.0,
slippage_bps_by_session={"KRX_REG": 10.0},
failure_rate_by_session={"KRX_REG": 0.02},
partial_fill_rate_by_session={"KRX_REG": 0.2},
unfavorable_fill_required=False,
)
with pytest.raises(ValueError, match="unfavorable_fill_required must be True"):
validate_backtest_cost_model(model=model, required_sessions=["KRX_REG"])
@pytest.mark.parametrize("bad_commission", [float("nan"), float("inf"), float("-inf")])
def test_non_finite_commission_rejected(bad_commission: float) -> None:
model = BacktestCostModel(
commission_bps=bad_commission,
slippage_bps_by_session={"KRX_REG": 10.0},
failure_rate_by_session={"KRX_REG": 0.02},
partial_fill_rate_by_session={"KRX_REG": 0.2},
unfavorable_fill_required=True,
)
with pytest.raises(ValueError, match="commission_bps"):
validate_backtest_cost_model(model=model, required_sessions=["KRX_REG"])
@pytest.mark.parametrize("bad_slippage", [float("nan"), float("inf"), float("-inf")])
def test_non_finite_slippage_rejected(bad_slippage: float) -> None:
model = BacktestCostModel(
commission_bps=5.0,
slippage_bps_by_session={"KRX_REG": bad_slippage},
failure_rate_by_session={"KRX_REG": 0.02},
partial_fill_rate_by_session={"KRX_REG": 0.2},
unfavorable_fill_required=True,
)
with pytest.raises(ValueError, match="slippage bps"):
validate_backtest_cost_model(model=model, required_sessions=["KRX_REG"])
def test_missing_required_partial_fill_session_raises() -> None:
model = BacktestCostModel(
commission_bps=5.0,
slippage_bps_by_session={"KRX_REG": 10.0, "US_PRE": 50.0},
failure_rate_by_session={"KRX_REG": 0.01, "US_PRE": 0.08},
partial_fill_rate_by_session={"KRX_REG": 0.1},
unfavorable_fill_required=True,
)
with pytest.raises(ValueError, match="missing partial_fill_rate_by_session.*US_PRE"):
validate_backtest_cost_model(model=model, required_sessions=["KRX_REG", "US_PRE"])
@pytest.mark.parametrize("bad_partial_fill", [float("nan"), float("inf"), float("-inf"), -0.1, 1.1])
def test_invalid_partial_fill_rate_rejected(bad_partial_fill: float) -> None:
model = BacktestCostModel(
commission_bps=5.0,
slippage_bps_by_session={"KRX_REG": 10.0},
failure_rate_by_session={"KRX_REG": 0.02},
partial_fill_rate_by_session={"KRX_REG": bad_partial_fill},
unfavorable_fill_required=True,
)
with pytest.raises(ValueError, match="partial fill rate must be within"):
validate_backtest_cost_model(model=model, required_sessions=["KRX_REG"])

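The guard these tests exercise can be sketched as follows. `CostModelSketch` and `validate_cost_model_sketch` are hypothetical stand-ins for the repo's `BacktestCostModel` / `validate_backtest_cost_model`; only the field names and error messages the tests match on are taken from the diff above, the rest is an assumption.

```python
import math
from dataclasses import dataclass


@dataclass(frozen=True)
class CostModelSketch:
    # Hypothetical stand-in for BacktestCostModel; field names mirror the tests.
    commission_bps: float
    slippage_bps_by_session: dict[str, float]
    failure_rate_by_session: dict[str, float]
    unfavorable_fill_required: bool


def validate_cost_model_sketch(model: CostModelSketch, required_sessions: list[str]) -> None:
    # Fail fast on anything that would make a backtest quietly optimistic.
    if not math.isfinite(model.commission_bps):
        raise ValueError("commission_bps must be finite")
    if not model.unfavorable_fill_required:
        raise ValueError("unfavorable_fill_required must be True")
    for session in required_sessions:
        if session not in model.slippage_bps_by_session:
            raise ValueError(f"missing slippage_bps_by_session entry: {session}")
        if session not in model.failure_rate_by_session:
            raise ValueError(f"missing failure_rate_by_session entry: {session}")
        rate = model.failure_rate_by_session[session]
        if not (math.isfinite(rate) and 0.0 <= rate <= 1.0):
            raise ValueError("failure rate must be within [0.0, 1.0]")
```

The key design point the tests pin down: a session missing from any per-session map is an error, not a silent zero-cost default.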

@@ -0,0 +1,108 @@
from __future__ import annotations
import pytest
from src.analysis.backtest_execution_model import (
BacktestExecutionModel,
ExecutionAssumptions,
ExecutionRequest,
)
def test_buy_uses_unfavorable_slippage_direction() -> None:
model = BacktestExecutionModel(
ExecutionAssumptions(
slippage_bps_by_session={"US_PRE": 50.0},
failure_rate_by_session={"US_PRE": 0.0},
partial_fill_rate_by_session={"US_PRE": 0.0},
seed=1,
)
)
out = model.simulate(
ExecutionRequest(side="BUY", session_id="US_PRE", qty=10, reference_price=100.0)
)
assert out.status == "FILLED"
assert out.avg_price == pytest.approx(100.5)
def test_sell_uses_unfavorable_slippage_direction() -> None:
model = BacktestExecutionModel(
ExecutionAssumptions(
slippage_bps_by_session={"US_PRE": 50.0},
failure_rate_by_session={"US_PRE": 0.0},
partial_fill_rate_by_session={"US_PRE": 0.0},
seed=1,
)
)
out = model.simulate(
ExecutionRequest(side="SELL", session_id="US_PRE", qty=10, reference_price=100.0)
)
assert out.status == "FILLED"
assert out.avg_price == pytest.approx(99.5)
def test_failure_rate_can_reject_order() -> None:
model = BacktestExecutionModel(
ExecutionAssumptions(
slippage_bps_by_session={"KRX_REG": 10.0},
failure_rate_by_session={"KRX_REG": 1.0},
partial_fill_rate_by_session={"KRX_REG": 0.0},
seed=42,
)
)
out = model.simulate(
ExecutionRequest(side="BUY", session_id="KRX_REG", qty=10, reference_price=100.0)
)
assert out.status == "REJECTED"
assert out.filled_qty == 0
def test_partial_fill_applies_when_rate_is_one() -> None:
model = BacktestExecutionModel(
ExecutionAssumptions(
slippage_bps_by_session={"KRX_REG": 0.0},
failure_rate_by_session={"KRX_REG": 0.0},
partial_fill_rate_by_session={"KRX_REG": 1.0},
partial_fill_min_ratio=0.4,
partial_fill_max_ratio=0.4,
seed=0,
)
)
out = model.simulate(
ExecutionRequest(side="BUY", session_id="KRX_REG", qty=10, reference_price=100.0)
)
assert out.status == "PARTIAL"
assert out.filled_qty == 4
assert out.avg_price == 100.0
@pytest.mark.parametrize("bad_slip", [-1.0, float("nan"), float("inf")])
def test_invalid_slippage_is_rejected(bad_slip: float) -> None:
with pytest.raises(ValueError, match="slippage_bps"):
BacktestExecutionModel(
ExecutionAssumptions(
slippage_bps_by_session={"US_PRE": bad_slip},
failure_rate_by_session={"US_PRE": 0.0},
partial_fill_rate_by_session={"US_PRE": 0.0},
)
)
@pytest.mark.parametrize("bad_rate", [-0.1, 1.1, float("nan")])
def test_invalid_failure_or_partial_rates_are_rejected(bad_rate: float) -> None:
with pytest.raises(ValueError, match="failure_rate"):
BacktestExecutionModel(
ExecutionAssumptions(
slippage_bps_by_session={"US_PRE": 10.0},
failure_rate_by_session={"US_PRE": bad_rate},
partial_fill_rate_by_session={"US_PRE": 0.0},
)
)
with pytest.raises(ValueError, match="partial_fill_rate"):
BacktestExecutionModel(
ExecutionAssumptions(
slippage_bps_by_session={"US_PRE": 10.0},
failure_rate_by_session={"US_PRE": 0.0},
partial_fill_rate_by_session={"US_PRE": bad_rate},
)
)

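The two direction tests above pin down a single formula: slippage is always applied against the trader. A minimal sketch of that arithmetic (the function name is illustrative, not the repo's API):

```python
def unfavorable_fill_price(side: str, reference_price: float, slippage_bps: float) -> float:
    # 1 bp = 0.01%; BUY fills higher, SELL fills lower, never in the trader's favor.
    if slippage_bps < 0:
        raise ValueError("slippage_bps must be non-negative")
    direction = 1.0 if side == "BUY" else -1.0
    return reference_price * (1.0 + direction * slippage_bps / 10_000.0)
```

At 50 bps on a 100.0 reference price this gives 100.5 for a BUY and 99.5 for a SELL, matching the expected averages asserted above.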

@@ -0,0 +1,220 @@
from __future__ import annotations
from datetime import UTC, datetime, timedelta
from src.analysis.backtest_cost_guard import BacktestCostModel
from src.analysis.backtest_pipeline import (
BacktestBar,
WalkForwardConfig,
fold_has_leakage,
run_v2_backtest_pipeline,
)
from src.analysis.triple_barrier import TripleBarrierSpec
from src.analysis.walk_forward_split import generate_walk_forward_splits
def _bars() -> list[BacktestBar]:
base_ts = datetime(2026, 2, 28, 0, 0, tzinfo=UTC)
closes = [100.0, 101.0, 102.0, 101.5, 103.0, 102.5, 104.0, 103.5, 105.0, 104.5, 106.0, 105.5]
bars: list[BacktestBar] = []
for i, close in enumerate(closes):
bars.append(
BacktestBar(
high=close + 1.0,
low=close - 1.0,
close=close,
session_id="KRX_REG" if i % 2 == 0 else "US_PRE",
timestamp=base_ts + timedelta(minutes=i),
)
)
return bars
def _cost_model() -> BacktestCostModel:
return BacktestCostModel(
commission_bps=3.0,
slippage_bps_by_session={"KRX_REG": 10.0, "US_PRE": 50.0},
failure_rate_by_session={"KRX_REG": 0.01, "US_PRE": 0.08},
partial_fill_rate_by_session={"KRX_REG": 0.05, "US_PRE": 0.2},
unfavorable_fill_required=True,
)
def test_pipeline_happy_path_returns_fold_and_artifact_contract() -> None:
out = run_v2_backtest_pipeline(
bars=_bars(),
entry_indices=[0, 1, 2, 3, 4, 5, 6, 7],
side=1,
triple_barrier_spec=TripleBarrierSpec(
take_profit_pct=0.02,
stop_loss_pct=0.01,
max_holding_minutes=3,
),
walk_forward=WalkForwardConfig(
train_size=4,
test_size=2,
step_size=2,
purge_size=1,
embargo_size=1,
min_train_size=3,
),
cost_model=_cost_model(),
)
assert out.run_id.startswith("v2p-e8-f")
assert out.n_bars == 12
assert out.n_entries == 8
assert out.required_sessions == ["KRX_REG", "US_PRE"]
assert len(out.folds) > 0
assert set(out.label_distribution) == {-1, 0, 1}
for fold in out.folds:
names = {score.name for score in fold.baseline_scores}
assert names == {"B0", "B1", "M1"}
for score in fold.baseline_scores:
assert 0.0 <= score.accuracy <= 1.0
assert 0.0 <= score.cost_adjusted_accuracy <= 1.0
assert fold.execution_adjusted_trade_count >= 0
assert fold.execution_rejected_count >= 0
assert fold.execution_partial_count >= 0
def test_pipeline_cost_guard_fail_fast() -> None:
bad = BacktestCostModel(
commission_bps=3.0,
slippage_bps_by_session={"KRX_REG": 10.0},
failure_rate_by_session={"KRX_REG": 0.01},
partial_fill_rate_by_session={"KRX_REG": 0.05},
unfavorable_fill_required=True,
)
try:
run_v2_backtest_pipeline(
bars=_bars(),
entry_indices=[0, 1, 2, 3],
side=1,
triple_barrier_spec=TripleBarrierSpec(
take_profit_pct=0.02,
stop_loss_pct=0.01,
max_holding_minutes=3,
),
walk_forward=WalkForwardConfig(train_size=2, test_size=1),
cost_model=bad,
required_sessions=["KRX_REG", "US_PRE"],
)
except ValueError as exc:
assert "missing slippage_bps_by_session" in str(exc)
else:
raise AssertionError("expected cost guard validation error")
def test_pipeline_fold_leakage_guard() -> None:
folds = generate_walk_forward_splits(
n_samples=12,
train_size=6,
test_size=2,
step_size=2,
purge_size=1,
embargo_size=1,
min_train_size=5,
)
assert folds
for fold in folds:
assert not fold_has_leakage(fold)
def test_pipeline_deterministic_seed_free_deterministic_result() -> None:
cfg = dict(
bars=_bars(),
entry_indices=[0, 1, 2, 3, 4, 5, 6, 7],
side=1,
triple_barrier_spec=TripleBarrierSpec(
take_profit_pct=0.02,
stop_loss_pct=0.01,
max_holding_minutes=3,
),
walk_forward=WalkForwardConfig(
train_size=4,
test_size=2,
step_size=2,
purge_size=1,
embargo_size=1,
min_train_size=3,
),
cost_model=_cost_model(),
)
out1 = run_v2_backtest_pipeline(**cfg)
out2 = run_v2_backtest_pipeline(**cfg)
assert out1 == out2
def test_pipeline_rejects_minutes_spec_when_timestamp_missing() -> None:
bars = _bars()
bars[2] = BacktestBar(
high=bars[2].high,
low=bars[2].low,
close=bars[2].close,
session_id=bars[2].session_id,
timestamp=None,
)
try:
run_v2_backtest_pipeline(
bars=bars,
entry_indices=[0, 1, 2, 3],
side=1,
triple_barrier_spec=TripleBarrierSpec(
take_profit_pct=0.02,
stop_loss_pct=0.01,
max_holding_minutes=3,
),
walk_forward=WalkForwardConfig(train_size=2, test_size=1),
cost_model=_cost_model(),
)
except ValueError as exc:
assert "BacktestBar.timestamp is required" in str(exc)
else:
raise AssertionError("expected timestamp validation error")
def test_pipeline_fold_scores_reflect_cost_and_execution_effects() -> None:
cfg = dict(
bars=_bars(),
entry_indices=[0, 1, 2, 3, 4, 5, 6, 7],
side=1,
triple_barrier_spec=TripleBarrierSpec(
take_profit_pct=0.02,
stop_loss_pct=0.01,
max_holding_minutes=3,
),
walk_forward=WalkForwardConfig(
train_size=4,
test_size=2,
step_size=2,
purge_size=1,
embargo_size=1,
min_train_size=3,
),
)
optimistic = BacktestCostModel(
commission_bps=0.0,
slippage_bps_by_session={"KRX_REG": 0.0, "US_PRE": 0.0},
failure_rate_by_session={"KRX_REG": 0.0, "US_PRE": 0.0},
partial_fill_rate_by_session={"KRX_REG": 0.0, "US_PRE": 0.0},
unfavorable_fill_required=True,
)
conservative = BacktestCostModel(
commission_bps=10.0,
slippage_bps_by_session={"KRX_REG": 30.0, "US_PRE": 80.0},
failure_rate_by_session={"KRX_REG": 0.2, "US_PRE": 0.4},
partial_fill_rate_by_session={"KRX_REG": 0.5, "US_PRE": 0.7},
unfavorable_fill_required=True,
)
optimistic_out = run_v2_backtest_pipeline(cost_model=optimistic, **cfg)
conservative_out = run_v2_backtest_pipeline(cost_model=conservative, **cfg)
assert optimistic_out.folds and conservative_out.folds
optimistic_score = optimistic_out.folds[0].baseline_scores[1].cost_adjusted_accuracy
conservative_score = conservative_out.folds[0].baseline_scores[1].cost_adjusted_accuracy
assert conservative_score < optimistic_score
optimistic_avg_return = optimistic_out.folds[0].execution_adjusted_avg_return_bps
conservative_avg_return = conservative_out.folds[0].execution_adjusted_avg_return_bps
assert conservative_avg_return < optimistic_avg_return

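The leakage-guard test relies on gaps between train and test windows. A simplified forward-chaining splitter, assuming the purge gap sits between train end and test start (embargo handling omitted; the names are illustrative sketches, not the repo's `generate_walk_forward_splits` / `fold_has_leakage`):

```python
from typing import NamedTuple


class Fold(NamedTuple):
    train: range
    test: range


def walk_forward_splits(
    n_samples: int,
    train_size: int,
    test_size: int,
    step_size: int = 1,
    purge_size: int = 0,
) -> list[Fold]:
    """Forward-chaining folds: train window, a purge gap, then the test window."""
    folds: list[Fold] = []
    start = 0
    while start + train_size + purge_size + test_size <= n_samples:
        test_start = start + train_size + purge_size
        folds.append(Fold(range(start, start + train_size), range(test_start, test_start + test_size)))
        start += step_size
    return folds


def fold_has_leakage(fold: Fold) -> bool:
    # With a positive purge gap, the last train index never touches the first test index.
    return max(fold.train) >= min(fold.test)
```

With `n_samples=12, train_size=4, test_size=2, step_size=2, purge_size=1` this yields three folds, each with a one-bar gap before the test window, so the leakage predicate is false for every fold.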

@@ -4,8 +4,7 @@ from __future__ import annotations
import sqlite3
import sys
from datetime import UTC, datetime
from pathlib import Path
from unittest.mock import MagicMock, patch
@@ -48,7 +47,9 @@ def temp_db(tmp_path: Path) -> Path:
cursor.executemany(
"""
INSERT INTO trades (
timestamp, stock_code, action, quantity, price, confidence, rationale, pnl
)
VALUES (?, ?, ?, ?, ?, ?, ?, ?)
""",
test_trades,
@@ -73,9 +74,7 @@ class TestBackupExporter:
exporter = BackupExporter(str(temp_db))
output_dir = tmp_path / "exports"
results = exporter.export_all(output_dir, formats=[ExportFormat.JSON], compress=False)
assert ExportFormat.JSON in results
assert results[ExportFormat.JSON].exists()
@@ -86,9 +85,7 @@ class TestBackupExporter:
exporter = BackupExporter(str(temp_db))
output_dir = tmp_path / "exports"
results = exporter.export_all(output_dir, formats=[ExportFormat.JSON], compress=True)
assert ExportFormat.JSON in results
assert results[ExportFormat.JSON].suffix == ".gz"
@@ -98,15 +95,13 @@ class TestBackupExporter:
exporter = BackupExporter(str(temp_db))
output_dir = tmp_path / "exports"
results = exporter.export_all(output_dir, formats=[ExportFormat.CSV], compress=False)
assert ExportFormat.CSV in results
assert results[ExportFormat.CSV].exists()
# Verify CSV content
with open(results[ExportFormat.CSV]) as f:
lines = f.readlines()
assert len(lines) == 4  # Header + 3 rows
@@ -146,7 +141,7 @@ class TestBackupExporter:
# Should only have 1 trade (AAPL on Jan 2)
import json
with open(results[ExportFormat.JSON]) as f:
data = json.load(f)
assert data["record_count"] == 1
assert data["trades"][0]["stock_code"] == "AAPL"
@@ -407,9 +402,7 @@ class TestBackupExporterAdditional:
assert ExportFormat.JSON in results
assert ExportFormat.CSV in results
def test_export_all_logs_error_on_failure(self, temp_db: Path, tmp_path: Path) -> None:
"""export_all must log an error and continue when one format fails."""
exporter = BackupExporter(str(temp_db))
# Patch _export_format to raise on JSON, succeed on CSV
@@ -430,9 +423,7 @@ class TestBackupExporterAdditional:
assert ExportFormat.JSON not in results
assert ExportFormat.CSV in results
def test_export_csv_empty_trades_no_compress(self, empty_db: Path, tmp_path: Path) -> None:
"""CSV export with no trades and compress=False must write header row only."""
exporter = BackupExporter(str(empty_db))
results = exporter.export_all(
@@ -446,9 +437,7 @@ class TestBackupExporterAdditional:
content = out.read_text()
assert "timestamp" in content
def test_export_csv_empty_trades_compressed(self, empty_db: Path, tmp_path: Path) -> None:
"""CSV export with no trades and compress=True must write gzipped header."""
import gzip
@@ -465,9 +454,7 @@ class TestBackupExporterAdditional:
content = f.read()
assert "timestamp" in content
def test_export_csv_with_data_compressed(self, temp_db: Path, tmp_path: Path) -> None:
"""CSV export with data and compress=True must write gzipped rows."""
import gzip
@@ -492,6 +479,7 @@ class TestBackupExporterAdditional:
with patch.dict(sys.modules, {"pyarrow": None, "pyarrow.parquet": None}):
try:
import pyarrow  # noqa: F401
pytest.skip("pyarrow is installed; cannot test ImportError path")
except ImportError:
pass
@@ -557,9 +545,7 @@ class TestCloudStorage:
importlib.reload(m)
m.CloudStorage(s3_config)
def test_upload_file_success(self, mock_boto3_module, s3_config, tmp_path: Path) -> None:
"""upload_file must call client.upload_file and return the object key."""
from src.backup.cloud_storage import CloudStorage
@@ -572,9 +558,7 @@ class TestCloudStorage:
assert key == "backups/backup.json.gz"
storage.client.upload_file.assert_called_once()
def test_upload_file_default_key(self, mock_boto3_module, s3_config, tmp_path: Path) -> None:
"""upload_file without object_key must use the filename as key."""
from src.backup.cloud_storage import CloudStorage
@@ -586,9 +570,7 @@ class TestCloudStorage:
assert key == "myfile.gz"
def test_upload_file_not_found(self, mock_boto3_module, s3_config, tmp_path: Path) -> None:
"""upload_file must raise FileNotFoundError for missing files."""
from src.backup.cloud_storage import CloudStorage
@@ -611,9 +593,7 @@ class TestCloudStorage:
with pytest.raises(RuntimeError, match="network error"):
storage.upload_file(test_file)
def test_download_file_success(self, mock_boto3_module, s3_config, tmp_path: Path) -> None:
"""download_file must call client.download_file and return local path."""
from src.backup.cloud_storage import CloudStorage
@@ -637,11 +617,8 @@ class TestCloudStorage:
with pytest.raises(RuntimeError, match="timeout"):
storage.download_file("key", tmp_path / "dest.gz")
def test_list_files_returns_objects(self, mock_boto3_module, s3_config) -> None:
"""list_files must return parsed file metadata from S3 response."""
from src.backup.cloud_storage import CloudStorage
@@ -651,7 +628,7 @@ class TestCloudStorage:
{
"Key": "backups/a.gz",
"Size": 1024,
"LastModified": datetime(2026, 1, 1, tzinfo=UTC),
"ETag": '"abc123"',
}
]
@@ -662,9 +639,7 @@ class TestCloudStorage:
assert files[0]["key"] == "backups/a.gz"
assert files[0]["size_bytes"] == 1024
def test_list_files_empty_bucket(self, mock_boto3_module, s3_config) -> None:
"""list_files must return empty list when bucket has no objects."""
from src.backup.cloud_storage import CloudStorage
@@ -674,9 +649,7 @@ class TestCloudStorage:
files = storage.list_files()
assert files == []
def test_list_files_propagates_error(self, mock_boto3_module, s3_config) -> None:
"""list_files must re-raise exceptions from the boto3 client."""
from src.backup.cloud_storage import CloudStorage
@@ -686,9 +659,7 @@ class TestCloudStorage:
with pytest.raises(RuntimeError):
storage.list_files()
def test_delete_file_success(self, mock_boto3_module, s3_config) -> None:
"""delete_file must call client.delete_object with the correct key."""
from src.backup.cloud_storage import CloudStorage
@@ -698,9 +669,7 @@ class TestCloudStorage:
Bucket="test-bucket", Key="backups/old.gz"
)
def test_delete_file_propagates_error(self, mock_boto3_module, s3_config) -> None:
"""delete_file must re-raise exceptions from the boto3 client."""
from src.backup.cloud_storage import CloudStorage
@@ -710,11 +679,8 @@ class TestCloudStorage:
with pytest.raises(RuntimeError):
storage.delete_file("backups/old.gz")
def test_get_storage_stats_success(self, mock_boto3_module, s3_config) -> None:
"""get_storage_stats must aggregate file sizes correctly."""
from src.backup.cloud_storage import CloudStorage
@@ -724,13 +690,13 @@ class TestCloudStorage:
{
"Key": "a.gz",
"Size": 1024 * 1024,
"LastModified": datetime(2026, 1, 1, tzinfo=UTC),
"ETag": '"x"',
},
{
"Key": "b.gz",
"Size": 1024 * 1024,
"LastModified": datetime(2026, 1, 2, tzinfo=UTC),
"ETag": '"y"',
},
]
@@ -741,9 +707,7 @@ class TestCloudStorage:
assert stats["total_size_bytes"] == 2 * 1024 * 1024
assert stats["total_size_mb"] == pytest.approx(2.0)
def test_get_storage_stats_on_error(self, mock_boto3_module, s3_config) -> None:
"""get_storage_stats must return error dict without raising on failure."""
from src.backup.cloud_storage import CloudStorage
@@ -754,9 +718,7 @@ class TestCloudStorage:
assert "error" in stats
assert stats["total_files"] == 0
def test_verify_connection_success(self, mock_boto3_module, s3_config) -> None:
"""verify_connection must return True when head_bucket succeeds."""
from src.backup.cloud_storage import CloudStorage
@@ -764,9 +726,7 @@ class TestCloudStorage:
result = storage.verify_connection()
assert result is True
def test_verify_connection_failure(self, mock_boto3_module, s3_config) -> None:
"""verify_connection must return False when head_bucket raises."""
from src.backup.cloud_storage import CloudStorage
@@ -776,9 +736,7 @@ class TestCloudStorage:
result = storage.verify_connection()
assert result is False
def test_enable_versioning(self, mock_boto3_module, s3_config) -> None:
"""enable_versioning must call put_bucket_versioning."""
from src.backup.cloud_storage import CloudStorage
@@ -786,9 +744,7 @@ class TestCloudStorage:
storage.enable_versioning()
storage.client.put_bucket_versioning.assert_called_once()
def test_enable_versioning_propagates_error(self, mock_boto3_module, s3_config) -> None:
"""enable_versioning must re-raise exceptions from the boto3 client."""
from src.backup.cloud_storage import CloudStorage


@@ -0,0 +1,134 @@
from __future__ import annotations
from datetime import UTC, datetime
from src.core.blackout_manager import (
BlackoutOrderManager,
QueuedOrderIntent,
parse_blackout_windows_kst,
)
def test_parse_blackout_windows_kst() -> None:
windows = parse_blackout_windows_kst("23:30-00:10,11:20-11:30,invalid")
assert len(windows) == 2
def test_blackout_manager_handles_cross_midnight_window() -> None:
manager = BlackoutOrderManager(
enabled=True,
windows=parse_blackout_windows_kst("23:30-00:10"),
max_queue_size=10,
)
# 2026-01-01 23:40 KST = 2026-01-01 14:40 UTC
assert manager.in_blackout(datetime(2026, 1, 1, 14, 40, tzinfo=UTC))
# 2026-01-02 00:20 KST = 2026-01-01 15:20 UTC
assert not manager.in_blackout(datetime(2026, 1, 1, 15, 20, tzinfo=UTC))
def test_recovery_batch_only_after_blackout_exit() -> None:
manager = BlackoutOrderManager(
enabled=True,
windows=parse_blackout_windows_kst("23:30-00:10"),
max_queue_size=10,
)
intent = QueuedOrderIntent(
market_code="KR",
exchange_code="KRX",
session_id="KRX_REG",
stock_code="005930",
order_type="BUY",
quantity=1,
price=100.0,
source="test",
queued_at=datetime.now(UTC),
)
assert manager.enqueue(intent)
# Inside blackout: no pop yet
inside_blackout = datetime(2026, 1, 1, 14, 40, tzinfo=UTC)
assert manager.pop_recovery_batch(inside_blackout) == []
# Outside blackout: pop full batch once
outside_blackout = datetime(2026, 1, 1, 15, 20, tzinfo=UTC)
batch = manager.pop_recovery_batch(outside_blackout)
assert len(batch) == 1
assert manager.pending_count == 0
def test_requeued_intent_is_processed_next_non_blackout_cycle() -> None:
manager = BlackoutOrderManager(
enabled=True,
windows=parse_blackout_windows_kst("23:30-00:10"),
max_queue_size=10,
)
intent = QueuedOrderIntent(
market_code="KR",
exchange_code="KRX",
session_id="KRX_REG",
stock_code="005930",
order_type="BUY",
quantity=1,
price=100.0,
source="test",
queued_at=datetime.now(UTC),
)
manager.enqueue(intent)
outside_blackout = datetime(2026, 1, 1, 15, 20, tzinfo=UTC)
first_batch = manager.pop_recovery_batch(outside_blackout)
assert len(first_batch) == 1
manager.requeue(first_batch[0])
second_batch = manager.pop_recovery_batch(outside_blackout)
assert len(second_batch) == 1
def test_queue_overflow_drops_oldest_and_keeps_latest() -> None:
manager = BlackoutOrderManager(
enabled=True,
windows=parse_blackout_windows_kst("23:30-00:10"),
max_queue_size=2,
)
first = QueuedOrderIntent(
market_code="KR",
exchange_code="KRX",
session_id="KRX_REG",
stock_code="000001",
order_type="BUY",
quantity=1,
price=100.0,
source="first",
queued_at=datetime.now(UTC),
)
second = QueuedOrderIntent(
market_code="KR",
exchange_code="KRX",
session_id="KRX_REG",
stock_code="000002",
order_type="BUY",
quantity=1,
price=101.0,
source="second",
queued_at=datetime.now(UTC),
)
third = QueuedOrderIntent(
market_code="KR",
exchange_code="KRX",
session_id="KRX_REG",
stock_code="000003",
order_type="SELL",
quantity=2,
price=102.0,
source="third",
queued_at=datetime.now(UTC),
)
assert manager.enqueue(first)
assert manager.enqueue(second)
assert manager.enqueue(third)
assert manager.pending_count == 2
assert manager.overflow_drop_count == 1
outside_blackout = datetime(2026, 1, 1, 15, 20, tzinfo=UTC)
batch = manager.pop_recovery_batch(outside_blackout)
assert [intent.stock_code for intent in batch] == ["000002", "000003"]

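The cross-midnight case (23:30-00:10 KST) is the subtle branch these tests cover: the window wraps past midnight, so the usual `start <= t < end` check inverts. A self-contained sketch of the membership test (the helper name is illustrative, not the repo's `in_blackout`):

```python
from datetime import datetime, time, timedelta, timezone

KST = timezone(timedelta(hours=9))


def in_blackout_window_kst(now_utc: datetime, start: time, end: time) -> bool:
    # Compare in KST wall-clock time; a wrapped window means "after start OR before end".
    t = now_utc.astimezone(KST).time()
    if start <= end:
        return start <= t < end
    return t >= start or t < end
```

2026-01-01 14:40 UTC is 23:40 KST (inside 23:30-00:10), while 15:20 UTC is 00:20 KST (outside), matching the assertions above.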

@@ -323,7 +323,8 @@ class TestPromptOverride:
# Verify the custom prompt was sent, not a built prompt
mock_generate.assert_called_once()
actual_prompt = mock_generate.call_args[1].get(
"contents",
mock_generate.call_args[0][1] if len(mock_generate.call_args[0]) > 1 else None,
)
assert actual_prompt == custom_prompt
# Raw response preserved in rationale without parse_response (#247)
@@ -385,7 +386,8 @@ class TestPromptOverride:
await client.decide(market_data) await client.decide(market_data)
actual_prompt = mock_generate.call_args[1].get( actual_prompt = mock_generate.call_args[1].get(
"contents", mock_generate.call_args[0][1] if len(mock_generate.call_args[0]) > 1 else None "contents",
mock_generate.call_args[0][1] if len(mock_generate.call_args[0]) > 1 else None,
) )
# The custom prompt must be used, not the compressed prompt # The custom prompt must be used, not the compressed prompt
assert actual_prompt == custom_prompt assert actual_prompt == custom_prompt
@@ -411,7 +413,8 @@ class TestPromptOverride:
await client.decide(market_data) await client.decide(market_data)
actual_prompt = mock_generate.call_args[1].get( actual_prompt = mock_generate.call_args[1].get(
"contents", mock_generate.call_args[0][1] if len(mock_generate.call_args[0]) > 1 else None "contents",
mock_generate.call_args[0][1] if len(mock_generate.call_args[0]) > 1 else None,
) )
# Should contain stock code from build_prompt, not be a custom override # Should contain stock code from build_prompt, not be a custom override
assert "005930" in actual_prompt assert "005930" in actual_prompt


@@ -3,7 +3,7 @@
 from __future__ import annotations

 import asyncio
-from unittest.mock import AsyncMock, MagicMock, patch
+from unittest.mock import AsyncMock, patch

 import pytest
@@ -99,7 +99,10 @@ class TestTokenManagement:
         mock_resp_403 = AsyncMock()
         mock_resp_403.status = 403
         mock_resp_403.text = AsyncMock(
-            return_value='{"error_code":"EGW00133","error_description":"접근토큰 발급 잠시 후 다시 시도하세요(1분당 1회)"}'
+            return_value=(
+                '{"error_code":"EGW00133","error_description":'
+                '"접근토큰 발급 잠시 후 다시 시도하세요(1분당 1회)"}'
+            )
         )
         mock_resp_403.__aenter__ = AsyncMock(return_value=mock_resp_403)
         mock_resp_403.__aexit__ = AsyncMock(return_value=False)
@@ -232,9 +235,7 @@ class TestRateLimiter:
         mock_order_resp.__aenter__ = AsyncMock(return_value=mock_order_resp)
         mock_order_resp.__aexit__ = AsyncMock(return_value=False)

-        with patch(
-            "aiohttp.ClientSession.post", side_effect=[mock_hash_resp, mock_order_resp]
-        ):
+        with patch("aiohttp.ClientSession.post", side_effect=[mock_hash_resp, mock_order_resp]):
             with patch.object(
                 broker._rate_limiter, "acquire", new_callable=AsyncMock
             ) as mock_acquire:
@@ -405,7 +406,7 @@ class TestFetchMarketRankings:
 # ---------------------------------------------------------------------------
-from src.broker.kis_api import kr_tick_unit, kr_round_down  # noqa: E402
+from src.broker.kis_api import kr_round_down, kr_tick_unit  # noqa: E402

 class TestKrTickUnit:
@@ -538,9 +539,7 @@ class TestSendOrderTickRounding:
         mock_order.__aenter__ = AsyncMock(return_value=mock_order)
         mock_order.__aexit__ = AsyncMock(return_value=False)

-        with patch(
-            "aiohttp.ClientSession.post", side_effect=[mock_hash, mock_order]
-        ) as mock_post:
+        with patch("aiohttp.ClientSession.post", side_effect=[mock_hash, mock_order]) as mock_post:
             await broker.send_order("005930", "BUY", 1, price=188150)

         order_call = mock_post.call_args_list[1]
@@ -563,9 +562,7 @@ class TestSendOrderTickRounding:
         mock_order.__aenter__ = AsyncMock(return_value=mock_order)
         mock_order.__aexit__ = AsyncMock(return_value=False)

-        with patch(
-            "aiohttp.ClientSession.post", side_effect=[mock_hash, mock_order]
-        ) as mock_post:
+        with patch("aiohttp.ClientSession.post", side_effect=[mock_hash, mock_order]) as mock_post:
             await broker.send_order("005930", "BUY", 1, price=50000)

         order_call = mock_post.call_args_list[1]
@@ -587,9 +584,7 @@ class TestSendOrderTickRounding:
         mock_order.__aenter__ = AsyncMock(return_value=mock_order)
         mock_order.__aexit__ = AsyncMock(return_value=False)

-        with patch(
-            "aiohttp.ClientSession.post", side_effect=[mock_hash, mock_order]
-        ) as mock_post:
+        with patch("aiohttp.ClientSession.post", side_effect=[mock_hash, mock_order]) as mock_post:
             await broker.send_order("005930", "SELL", 1, price=0)

         order_call = mock_post.call_args_list[1]
@@ -628,9 +623,7 @@ class TestTRIDBranchingDomestic:
         broker = self._make_broker(settings, "paper")
         mock_resp = AsyncMock()
         mock_resp.status = 200
-        mock_resp.json = AsyncMock(
-            return_value={"output1": [], "output2": {}}
-        )
+        mock_resp.json = AsyncMock(return_value={"output1": [], "output2": {}})
         mock_resp.__aenter__ = AsyncMock(return_value=mock_resp)
         mock_resp.__aexit__ = AsyncMock(return_value=False)
@@ -645,9 +638,7 @@ class TestTRIDBranchingDomestic:
         broker = self._make_broker(settings, "live")
         mock_resp = AsyncMock()
         mock_resp.status = 200
-        mock_resp.json = AsyncMock(
-            return_value={"output1": [], "output2": {}}
-        )
+        mock_resp.json = AsyncMock(return_value={"output1": [], "output2": {}})
         mock_resp.__aenter__ = AsyncMock(return_value=mock_resp)
         mock_resp.__aexit__ = AsyncMock(return_value=False)
@@ -672,9 +663,7 @@ class TestTRIDBranchingDomestic:
         mock_order.__aenter__ = AsyncMock(return_value=mock_order)
         mock_order.__aexit__ = AsyncMock(return_value=False)

-        with patch(
-            "aiohttp.ClientSession.post", side_effect=[mock_hash, mock_order]
-        ) as mock_post:
+        with patch("aiohttp.ClientSession.post", side_effect=[mock_hash, mock_order]) as mock_post:
             await broker.send_order("005930", "BUY", 1)

         order_headers = mock_post.call_args_list[1][1].get("headers", {})
@@ -695,9 +684,7 @@ class TestTRIDBranchingDomestic:
         mock_order.__aenter__ = AsyncMock(return_value=mock_order)
         mock_order.__aexit__ = AsyncMock(return_value=False)

-        with patch(
-            "aiohttp.ClientSession.post", side_effect=[mock_hash, mock_order]
-        ) as mock_post:
+        with patch("aiohttp.ClientSession.post", side_effect=[mock_hash, mock_order]) as mock_post:
             await broker.send_order("005930", "BUY", 1)

         order_headers = mock_post.call_args_list[1][1].get("headers", {})
@@ -718,9 +705,7 @@ class TestTRIDBranchingDomestic:
         mock_order.__aenter__ = AsyncMock(return_value=mock_order)
         mock_order.__aexit__ = AsyncMock(return_value=False)

-        with patch(
-            "aiohttp.ClientSession.post", side_effect=[mock_hash, mock_order]
-        ) as mock_post:
+        with patch("aiohttp.ClientSession.post", side_effect=[mock_hash, mock_order]) as mock_post:
             await broker.send_order("005930", "SELL", 1)

         order_headers = mock_post.call_args_list[1][1].get("headers", {})
@@ -741,9 +726,7 @@ class TestTRIDBranchingDomestic:
         mock_order.__aenter__ = AsyncMock(return_value=mock_order)
         mock_order.__aexit__ = AsyncMock(return_value=False)

-        with patch(
-            "aiohttp.ClientSession.post", side_effect=[mock_hash, mock_order]
-        ) as mock_post:
+        with patch("aiohttp.ClientSession.post", side_effect=[mock_hash, mock_order]) as mock_post:
             await broker.send_order("005930", "SELL", 1)

         order_headers = mock_post.call_args_list[1][1].get("headers", {})
@@ -788,9 +771,7 @@ class TestGetDomesticPendingOrders:
             mock_get.assert_not_called()

     @pytest.mark.asyncio
-    async def test_live_mode_calls_tttc0084r_with_correct_params(
-        self, settings
-    ) -> None:
+    async def test_live_mode_calls_tttc0084r_with_correct_params(self, settings) -> None:
         """Live mode must call TTTC0084R with INQR_DVSN_1/2 and paging params."""
         broker = self._make_broker(settings, "live")
         pending = [{"odno": "001", "pdno": "005930", "psbl_qty": "10"}]
@@ -872,9 +853,7 @@ class TestCancelDomesticOrder:
         broker = self._make_broker(settings, "live")
         mock_hash, mock_order = self._make_post_mocks({"rt_cd": "0"})

-        with patch(
-            "aiohttp.ClientSession.post", side_effect=[mock_hash, mock_order]
-        ) as mock_post:
+        with patch("aiohttp.ClientSession.post", side_effect=[mock_hash, mock_order]) as mock_post:
             await broker.cancel_domestic_order("005930", "ORD001", "BRNO01", 5)

         order_headers = mock_post.call_args_list[1][1].get("headers", {})
@@ -886,9 +865,7 @@ class TestCancelDomesticOrder:
         broker = self._make_broker(settings, "paper")
         mock_hash, mock_order = self._make_post_mocks({"rt_cd": "0"})

-        with patch(
-            "aiohttp.ClientSession.post", side_effect=[mock_hash, mock_order]
-        ) as mock_post:
+        with patch("aiohttp.ClientSession.post", side_effect=[mock_hash, mock_order]) as mock_post:
             await broker.cancel_domestic_order("005930", "ORD001", "BRNO01", 5)

         order_headers = mock_post.call_args_list[1][1].get("headers", {})
@@ -900,9 +877,7 @@ class TestCancelDomesticOrder:
         broker = self._make_broker(settings, "live")
         mock_hash, mock_order = self._make_post_mocks({"rt_cd": "0"})

-        with patch(
-            "aiohttp.ClientSession.post", side_effect=[mock_hash, mock_order]
-        ) as mock_post:
+        with patch("aiohttp.ClientSession.post", side_effect=[mock_hash, mock_order]) as mock_post:
             await broker.cancel_domestic_order("005930", "ORD001", "BRNO01", 5)

         body = mock_post.call_args_list[1][1].get("json", {})
@@ -916,9 +891,7 @@ class TestCancelDomesticOrder:
         broker = self._make_broker(settings, "live")
         mock_hash, mock_order = self._make_post_mocks({"rt_cd": "0"})

-        with patch(
-            "aiohttp.ClientSession.post", side_effect=[mock_hash, mock_order]
-        ) as mock_post:
+        with patch("aiohttp.ClientSession.post", side_effect=[mock_hash, mock_order]) as mock_post:
             await broker.cancel_domestic_order("005930", "ORD123", "BRN456", 3)

         body = mock_post.call_args_list[1][1].get("json", {})
@@ -932,9 +905,7 @@ class TestCancelDomesticOrder:
         broker = self._make_broker(settings, "live")
         mock_hash, mock_order = self._make_post_mocks({"rt_cd": "0"})

-        with patch(
-            "aiohttp.ClientSession.post", side_effect=[mock_hash, mock_order]
-        ) as mock_post:
+        with patch("aiohttp.ClientSession.post", side_effect=[mock_hash, mock_order]) as mock_post:
             await broker.cancel_domestic_order("005930", "ORD001", "BRNO01", 2)

         order_headers = mock_post.call_args_list[1][1].get("headers", {})
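The collapsed `patch("aiohttp.ClientSession.post", side_effect=[mock_hash, mock_order])` calls above all lean on one `unittest.mock` feature: an iterable `side_effect` yields one element per call, in order, which lets a single patched `post()` serve the hashkey request first and the order request second. A minimal, aiohttp-free sketch:

```python
from unittest.mock import MagicMock

# A side_effect list is consumed one element per call, in order; the KIS
# tests use this to return the hashkey response on the first post() and
# the order response on the second.
post = MagicMock(side_effect=["hash-response", "order-response"])

print(post("/hashkey"))  # hash-response
print(post("/order"))    # order-response

# Once the list is exhausted, a further call raises StopIteration.
try:
    post("/extra")
except StopIteration:
    print("side_effect exhausted")
```

The exhaustion behaviour doubles as a safety net in the tests: an unexpected third HTTP call fails loudly instead of silently reusing a stale response.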


@@ -77,9 +77,7 @@ class TestContextStore:
         # Latest by updated_at, which should be the last one set
         assert latest == "2026-02-02"

-    def test_delete_old_contexts(
-        self, store: ContextStore, db_conn: sqlite3.Connection
-    ) -> None:
+    def test_delete_old_contexts(self, store: ContextStore, db_conn: sqlite3.Connection) -> None:
         """Test deleting contexts older than a cutoff date."""
         # Insert contexts with specific old timestamps
         # (bypassing set_context which uses current time)
@@ -170,9 +168,7 @@ class TestContextAggregator:
         log_trade(db_conn, "035720", "HOLD", 75, "Wait", quantity=0, price=0, pnl=0)

         # Manually set timestamps to the target date
-        db_conn.execute(
-            f"UPDATE trades SET timestamp = '{date}T10:00:00+00:00'"
-        )
+        db_conn.execute(f"UPDATE trades SET timestamp = '{date}T10:00:00+00:00'")
         db_conn.commit()

         # Aggregate
@@ -194,18 +190,10 @@ class TestContextAggregator:
         week = "2026-W06"

         # Set daily contexts
-        aggregator.store.set_context(
-            ContextLayer.L6_DAILY, "2026-02-02", "total_pnl_KR", 100.0
-        )
-        aggregator.store.set_context(
-            ContextLayer.L6_DAILY, "2026-02-03", "total_pnl_KR", 200.0
-        )
-        aggregator.store.set_context(
-            ContextLayer.L6_DAILY, "2026-02-02", "avg_confidence_KR", 80.0
-        )
-        aggregator.store.set_context(
-            ContextLayer.L6_DAILY, "2026-02-03", "avg_confidence_KR", 85.0
-        )
+        aggregator.store.set_context(ContextLayer.L6_DAILY, "2026-02-02", "total_pnl_KR", 100.0)
+        aggregator.store.set_context(ContextLayer.L6_DAILY, "2026-02-03", "total_pnl_KR", 200.0)
+        aggregator.store.set_context(ContextLayer.L6_DAILY, "2026-02-02", "avg_confidence_KR", 80.0)
+        aggregator.store.set_context(ContextLayer.L6_DAILY, "2026-02-03", "avg_confidence_KR", 85.0)

         # Aggregate
         aggregator.aggregate_weekly_from_daily(week)
@@ -223,15 +211,9 @@ class TestContextAggregator:
         month = "2026-02"

         # Set weekly contexts
-        aggregator.store.set_context(
-            ContextLayer.L5_WEEKLY, "2026-W05", "weekly_pnl_KR", 100.0
-        )
-        aggregator.store.set_context(
-            ContextLayer.L5_WEEKLY, "2026-W06", "weekly_pnl_KR", 200.0
-        )
-        aggregator.store.set_context(
-            ContextLayer.L5_WEEKLY, "2026-W07", "weekly_pnl_KR", 150.0
-        )
+        aggregator.store.set_context(ContextLayer.L5_WEEKLY, "2026-W05", "weekly_pnl_KR", 100.0)
+        aggregator.store.set_context(ContextLayer.L5_WEEKLY, "2026-W06", "weekly_pnl_KR", 200.0)
+        aggregator.store.set_context(ContextLayer.L5_WEEKLY, "2026-W07", "weekly_pnl_KR", 150.0)

         # Aggregate
         aggregator.aggregate_monthly_from_weekly(month)
@@ -316,6 +298,7 @@ class TestContextAggregator:
         store = aggregator.store
         assert store.get_context(ContextLayer.L6_DAILY, date, "total_pnl_KR") == 1000.0
         from datetime import date as date_cls
+
         trade_date = date_cls.fromisoformat(date)
         iso_year, iso_week, _ = trade_date.isocalendar()
         trade_week = f"{iso_year}-W{iso_week:02d}"
@@ -324,7 +307,9 @@ class TestContextAggregator:
         trade_quarter = f"{trade_date.year}-Q{(trade_date.month - 1) // 3 + 1}"
         trade_year = str(trade_date.year)
         assert store.get_context(ContextLayer.L4_MONTHLY, trade_month, "monthly_pnl") == 1000.0
-        assert store.get_context(ContextLayer.L3_QUARTERLY, trade_quarter, "quarterly_pnl") == 1000.0
+        assert (
+            store.get_context(ContextLayer.L3_QUARTERLY, trade_quarter, "quarterly_pnl") == 1000.0
+        )
         assert store.get_context(ContextLayer.L2_ANNUAL, trade_year, "annual_pnl") == 1000.0
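The period keys asserted at the end of this hunk all derive from a single trade date via `isocalendar()` and f-strings. The derivation runs standalone; note that the `trade_month` format below is our assumption, since its construction sits just outside the visible hunk:

```python
from datetime import date

trade_date = date.fromisoformat("2026-02-02")

# ISO week key: isocalendar() returns (ISO year, ISO week, ISO weekday)
iso_year, iso_week, _ = trade_date.isocalendar()
trade_week = f"{iso_year}-W{iso_week:02d}"
trade_month = f"{trade_date.year}-{trade_date.month:02d}"  # assumed format
trade_quarter = f"{trade_date.year}-Q{(trade_date.month - 1) // 3 + 1}"
trade_year = str(trade_date.year)

print(trade_week, trade_month, trade_quarter, trade_year)
# 2026-W06 2026-02 2026-Q1 2026
```

The ISO year is used deliberately instead of `trade_date.year`: around New Year the two can differ (e.g. an early-January date can belong to the previous ISO year's week 52/53), so keying weeks by calendar year would misfile those trades.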
@@ -429,9 +414,7 @@ class TestContextSummarizer:
     # summarize_layer
     # ------------------------------------------------------------------
-    def test_summarize_layer_no_data(
-        self, summarizer: ContextSummarizer
-    ) -> None:
+    def test_summarize_layer_no_data(self, summarizer: ContextSummarizer) -> None:
         """summarize_layer with no data must return the 'No data' sentinel."""
         result = summarizer.summarize_layer(ContextLayer.L6_DAILY)
         assert result["count"] == 0
@@ -448,15 +431,12 @@ class TestContextSummarizer:
         result = summarizer.summarize_layer(ContextLayer.L6_DAILY)
         assert "total_entries" in result

-    def test_summarize_layer_with_dict_values(
-        self, summarizer: ContextSummarizer
-    ) -> None:
+    def test_summarize_layer_with_dict_values(self, summarizer: ContextSummarizer) -> None:
         """summarize_layer must handle dict values by extracting numeric subkeys."""
         store = summarizer.store
         # set_context serialises the value as JSON, so passing a dict works
         store.set_context(
-            ContextLayer.L6_DAILY, "2026-02-01", "metrics",
-            {"win_rate": 65.0, "label": "good"}
+            ContextLayer.L6_DAILY, "2026-02-01", "metrics", {"win_rate": 65.0, "label": "good"}
         )

         result = summarizer.summarize_layer(ContextLayer.L6_DAILY)
@@ -464,9 +444,7 @@ class TestContextSummarizer:
         # numeric subkey "win_rate" should appear as "metrics.win_rate"
         assert "metrics.win_rate" in result

-    def test_summarize_layer_with_string_values(
-        self, summarizer: ContextSummarizer
-    ) -> None:
+    def test_summarize_layer_with_string_values(self, summarizer: ContextSummarizer) -> None:
         """summarize_layer must count string values separately."""
         store = summarizer.store
         # set_context stores string values as JSON-encoded strings
@@ -480,9 +458,7 @@ class TestContextSummarizer:
     # rolling_window_summary
     # ------------------------------------------------------------------
-    def test_rolling_window_summary_basic(
-        self, summarizer: ContextSummarizer
-    ) -> None:
+    def test_rolling_window_summary_basic(self, summarizer: ContextSummarizer) -> None:
         """rolling_window_summary must return the expected structure."""
         store = summarizer.store
         store.set_context(ContextLayer.L6_DAILY, "2026-02-01", "pnl", 500.0)
@@ -492,22 +468,16 @@ class TestContextSummarizer:
         assert "recent_data" in result
         assert "historical_summary" in result

-    def test_rolling_window_summary_no_older_data(
-        self, summarizer: ContextSummarizer
-    ) -> None:
+    def test_rolling_window_summary_no_older_data(self, summarizer: ContextSummarizer) -> None:
         """rolling_window_summary with summarize_older=False skips history."""
-        result = summarizer.rolling_window_summary(
-            ContextLayer.L6_DAILY, summarize_older=False
-        )
+        result = summarizer.rolling_window_summary(ContextLayer.L6_DAILY, summarize_older=False)
         assert result["historical_summary"] == {}

     # ------------------------------------------------------------------
     # aggregate_to_higher_layer
     # ------------------------------------------------------------------
-    def test_aggregate_to_higher_layer_mean(
-        self, summarizer: ContextSummarizer
-    ) -> None:
+    def test_aggregate_to_higher_layer_mean(self, summarizer: ContextSummarizer) -> None:
         """aggregate_to_higher_layer with 'mean' via dict subkeys returns average."""
         store = summarizer.store
         # Use different outer keys but same inner metric key so get_all_contexts
@@ -520,9 +490,7 @@ class TestContextSummarizer:
         )
         assert result == pytest.approx(150.0)

-    def test_aggregate_to_higher_layer_sum(
-        self, summarizer: ContextSummarizer
-    ) -> None:
+    def test_aggregate_to_higher_layer_sum(self, summarizer: ContextSummarizer) -> None:
         """aggregate_to_higher_layer with 'sum' must return the total."""
         store = summarizer.store
         store.set_context(ContextLayer.L6_DAILY, "2026-02-01", "day1", {"pnl": 100.0})
@@ -533,9 +501,7 @@ class TestContextSummarizer:
         )
         assert result == pytest.approx(300.0)

-    def test_aggregate_to_higher_layer_max(
-        self, summarizer: ContextSummarizer
-    ) -> None:
+    def test_aggregate_to_higher_layer_max(self, summarizer: ContextSummarizer) -> None:
         """aggregate_to_higher_layer with 'max' must return the maximum."""
         store = summarizer.store
         store.set_context(ContextLayer.L6_DAILY, "2026-02-01", "day1", {"pnl": 100.0})
@@ -546,9 +512,7 @@ class TestContextSummarizer:
         )
         assert result == pytest.approx(200.0)

-    def test_aggregate_to_higher_layer_min(
-        self, summarizer: ContextSummarizer
-    ) -> None:
+    def test_aggregate_to_higher_layer_min(self, summarizer: ContextSummarizer) -> None:
         """aggregate_to_higher_layer with 'min' must return the minimum."""
         store = summarizer.store
         store.set_context(ContextLayer.L6_DAILY, "2026-02-01", "day1", {"pnl": 100.0})
@@ -559,9 +523,7 @@ class TestContextSummarizer:
         )
         assert result == pytest.approx(100.0)

-    def test_aggregate_to_higher_layer_no_data(
-        self, summarizer: ContextSummarizer
-    ) -> None:
+    def test_aggregate_to_higher_layer_no_data(self, summarizer: ContextSummarizer) -> None:
         """aggregate_to_higher_layer with no matching key must return None."""
         result = summarizer.aggregate_to_higher_layer(
             ContextLayer.L6_DAILY, ContextLayer.L5_WEEKLY, "nonexistent", "mean"
@@ -585,9 +547,7 @@ class TestContextSummarizer:
     # create_compact_summary + format_summary_for_prompt
     # ------------------------------------------------------------------
-    def test_create_compact_summary(
-        self, summarizer: ContextSummarizer
-    ) -> None:
+    def test_create_compact_summary(self, summarizer: ContextSummarizer) -> None:
         """create_compact_summary must produce a dict keyed by layer value."""
         store = summarizer.store
         store.set_context(ContextLayer.L6_DAILY, "2026-02-01", "pnl", 100.0)
@@ -615,9 +575,7 @@ class TestContextSummarizer:
         text = summarizer.format_summary_for_prompt(summary)
         assert text == ""

-    def test_format_summary_non_dict_value(
-        self, summarizer: ContextSummarizer
-    ) -> None:
+    def test_format_summary_non_dict_value(self, summarizer: ContextSummarizer) -> None:
         """format_summary_for_prompt must render non-dict values as plain text."""
         summary = {
             "daily": {


@@ -4,6 +4,7 @@ from __future__ import annotations
 import json
 import sqlite3
+from datetime import UTC, datetime
 from types import SimpleNamespace
 from unittest.mock import AsyncMock, MagicMock
@@ -16,8 +17,6 @@ from src.evolution.daily_review import DailyReviewer
 from src.evolution.scorecard import DailyScorecard
 from src.logging.decision_logger import DecisionLogger
-from datetime import UTC, datetime

 TODAY = datetime.now(UTC).strftime("%Y-%m-%d")
@@ -53,7 +52,8 @@ def _log_decision(
 def test_generate_scorecard_market_scoped(
-    db_conn: sqlite3.Connection, context_store: ContextStore,
+    db_conn: sqlite3.Connection,
+    context_store: ContextStore,
 ) -> None:
     reviewer = DailyReviewer(db_conn, context_store)
     logger = DecisionLogger(db_conn)
@@ -134,7 +134,8 @@ def test_generate_scorecard_market_scoped(
 def test_generate_scorecard_top_winners_and_losers(
-    db_conn: sqlite3.Connection, context_store: ContextStore,
+    db_conn: sqlite3.Connection,
+    context_store: ContextStore,
 ) -> None:
     reviewer = DailyReviewer(db_conn, context_store)
     logger = DecisionLogger(db_conn)
@@ -168,7 +169,8 @@ def test_generate_scorecard_top_winners_and_losers(
 def test_generate_scorecard_empty_day(
-    db_conn: sqlite3.Connection, context_store: ContextStore,
+    db_conn: sqlite3.Connection,
+    context_store: ContextStore,
 ) -> None:
     reviewer = DailyReviewer(db_conn, context_store)
     scorecard = reviewer.generate_scorecard(TODAY, "KR")
@@ -184,7 +186,8 @@ def test_generate_scorecard_empty_day(
 @pytest.mark.asyncio
 async def test_generate_lessons_without_gemini_returns_empty(
-    db_conn: sqlite3.Connection, context_store: ContextStore,
+    db_conn: sqlite3.Connection,
+    context_store: ContextStore,
 ) -> None:
     reviewer = DailyReviewer(db_conn, context_store, gemini_client=None)
     lessons = await reviewer.generate_lessons(
@@ -206,7 +209,8 @@ async def test_generate_lessons_without_gemini_returns_empty(
 @pytest.mark.asyncio
 async def test_generate_lessons_parses_json_array(
-    db_conn: sqlite3.Connection, context_store: ContextStore,
+    db_conn: sqlite3.Connection,
+    context_store: ContextStore,
 ) -> None:
     mock_gemini = MagicMock()
     mock_gemini.decide = AsyncMock(
@@ -233,7 +237,8 @@ async def test_generate_lessons_parses_json_array(
 @pytest.mark.asyncio
 async def test_generate_lessons_fallback_to_lines(
-    db_conn: sqlite3.Connection, context_store: ContextStore,
+    db_conn: sqlite3.Connection,
+    context_store: ContextStore,
 ) -> None:
     mock_gemini = MagicMock()
     mock_gemini.decide = AsyncMock(
@@ -260,7 +265,8 @@ async def test_generate_lessons_fallback_to_lines(
 @pytest.mark.asyncio
 async def test_generate_lessons_handles_gemini_error(
-    db_conn: sqlite3.Connection, context_store: ContextStore,
+    db_conn: sqlite3.Connection,
+    context_store: ContextStore,
 ) -> None:
     mock_gemini = MagicMock()
     mock_gemini.decide = AsyncMock(side_effect=RuntimeError("boom"))
@@ -284,7 +290,8 @@ async def test_generate_lessons_handles_gemini_error(
 def test_store_scorecard_in_context(
-    db_conn: sqlite3.Connection, context_store: ContextStore,
+    db_conn: sqlite3.Connection,
+    context_store: ContextStore,
 ) -> None:
     reviewer = DailyReviewer(db_conn, context_store)
     scorecard = DailyScorecard(
@@ -316,7 +323,8 @@ def test_store_scorecard_in_context(
 def test_store_scorecard_key_is_market_scoped(
-    db_conn: sqlite3.Connection, context_store: ContextStore,
+    db_conn: sqlite3.Connection,
+    context_store: ContextStore,
 ) -> None:
     reviewer = DailyReviewer(db_conn, context_store)
     kr = DailyScorecard(
@@ -357,7 +365,8 @@ def test_store_scorecard_key_is_market_scoped(
 def test_generate_scorecard_handles_invalid_context_snapshot(
-    db_conn: sqlite3.Connection, context_store: ContextStore,
+    db_conn: sqlite3.Connection,
+    context_store: ContextStore,
 ) -> None:
     reviewer = DailyReviewer(db_conn, context_store)
     db_conn.execute(

View File

@@ -355,6 +355,7 @@ def test_positions_empty_when_no_trades(tmp_path: Path) -> None:
def _seed_cb_context(conn: sqlite3.Connection, pnl_pct: float, market: str = "KR") -> None:
    import json as _json

    conn.execute(
        "INSERT OR REPLACE INTO system_metrics (key, value, updated_at) VALUES (?, ?, ?)",
        (

View File

@@ -79,7 +79,7 @@ class TestNewsAPI:
        # Mock the fetch to avoid real API call
        with patch.object(api, "_fetch_news", new_callable=AsyncMock) as mock_fetch:
            mock_fetch.return_value = None
            await api.get_news_sentiment("AAPL")
            # Should have attempted refetch since cache expired
            mock_fetch.assert_called_once_with("AAPL")
@@ -111,9 +111,7 @@ class TestNewsAPI:
                "source": "Reuters",
                "time_published": "2026-02-04T10:00:00",
                "url": "https://example.com/1",
                "ticker_sentiment": [{"ticker": "AAPL", "ticker_sentiment_score": "0.85"}],
                "overall_sentiment_score": "0.75",
            },
            {
@@ -122,9 +120,7 @@ class TestNewsAPI:
                "source": "Bloomberg",
                "time_published": "2026-02-04T09:00:00",
                "url": "https://example.com/2",
                "ticker_sentiment": [{"ticker": "AAPL", "ticker_sentiment_score": "-0.3"}],
                "overall_sentiment_score": "-0.2",
            },
        ]
@@ -661,7 +657,9 @@ class TestGeminiClientWithExternalData:
        )
        # Mock the Gemini API call
        with patch.object(
            client._client.aio.models, "generate_content", new_callable=AsyncMock
        ) as mock_gen:
            mock_response = MagicMock()
            mock_response.text = '{"action": "BUY", "confidence": 85, "rationale": "Good news"}'
            mock_gen.return_value = mock_response

View File

@@ -1,9 +1,9 @@
"""Tests for database helper functions."""

import os
import tempfile

from src.db import get_latest_buy_trade, get_open_position, init_db, log_trade


def test_get_open_position_returns_latest_buy() -> None:
@@ -155,6 +155,9 @@ def test_mode_column_exists_in_schema() -> None:
    cursor = conn.execute("PRAGMA table_info(trades)")
    columns = {row[1] for row in cursor.fetchall()}
    assert "mode" in columns
    assert "session_id" in columns
    assert "strategy_pnl" in columns
    assert "fx_pnl" in columns


def test_mode_migration_adds_column_to_existing_db() -> None:
@@ -182,6 +185,13 @@ def test_mode_migration_adds_column_to_existing_db() -> None:
                decision_id TEXT
            )"""
        )
        old_conn.execute(
            """
            INSERT INTO trades (
                timestamp, stock_code, action, confidence, rationale, quantity, price, pnl
            ) VALUES ('2026-01-01T00:00:00+00:00', 'AAPL', 'SELL', 90, 'legacy', 1, 100.0, 123.45)
            """
        )
        old_conn.commit()
        old_conn.close()
@@ -190,6 +200,217 @@ def test_mode_migration_adds_column_to_existing_db() -> None:
        cursor = conn.execute("PRAGMA table_info(trades)")
        columns = {row[1] for row in cursor.fetchall()}
        assert "mode" in columns
        assert "session_id" in columns
        assert "strategy_pnl" in columns
        assert "fx_pnl" in columns
        migrated = conn.execute(
            "SELECT pnl, strategy_pnl, fx_pnl, session_id "
            "FROM trades WHERE stock_code='AAPL' LIMIT 1"
        ).fetchone()
        assert migrated is not None
        assert migrated[0] == 123.45
        assert migrated[1] == 123.45
        assert migrated[2] == 0.0
        assert migrated[3] == "UNKNOWN"
        conn.close()
    finally:
        os.unlink(db_path)


def test_log_trade_stores_strategy_and_fx_pnl_separately() -> None:
    conn = init_db(":memory:")
    log_trade(
        conn=conn,
        stock_code="AAPL",
        action="SELL",
        confidence=90,
        rationale="fx split",
        pnl=120.0,
        strategy_pnl=100.0,
        fx_pnl=20.0,
        market="US_NASDAQ",
        exchange_code="NASD",
    )
    row = conn.execute(
        "SELECT pnl, strategy_pnl, fx_pnl FROM trades ORDER BY id DESC LIMIT 1"
    ).fetchone()
    assert row is not None
    assert row[0] == 120.0
    assert row[1] == 100.0
    assert row[2] == 20.0


def test_log_trade_backward_compat_sets_strategy_pnl_from_pnl() -> None:
    conn = init_db(":memory:")
    log_trade(
        conn=conn,
        stock_code="005930",
        action="SELL",
        confidence=80,
        rationale="legacy",
        pnl=50.0,
        market="KR",
        exchange_code="KRX",
    )
    row = conn.execute(
        "SELECT pnl, strategy_pnl, fx_pnl FROM trades ORDER BY id DESC LIMIT 1"
    ).fetchone()
    assert row is not None
    assert row[0] == 50.0
    assert row[1] == 50.0
    assert row[2] == 0.0


def test_log_trade_partial_fx_input_does_not_infer_negative_strategy_pnl() -> None:
    conn = init_db(":memory:")
    log_trade(
        conn=conn,
        stock_code="AAPL",
        action="SELL",
        confidence=70,
        rationale="fx only",
        pnl=0.0,
        fx_pnl=10.0,
        market="US_NASDAQ",
        exchange_code="NASD",
    )
    row = conn.execute(
        "SELECT pnl, strategy_pnl, fx_pnl FROM trades ORDER BY id DESC LIMIT 1"
    ).fetchone()
    assert row is not None
    assert row[0] == 10.0
    assert row[1] == 0.0
    assert row[2] == 10.0


def test_log_trade_persists_explicit_session_id() -> None:
    conn = init_db(":memory:")
    log_trade(
        conn=conn,
        stock_code="AAPL",
        action="BUY",
        confidence=70,
        rationale="session test",
        market="US_NASDAQ",
        exchange_code="NASD",
        session_id="US_PRE",
    )
    row = conn.execute("SELECT session_id FROM trades ORDER BY id DESC LIMIT 1").fetchone()
    assert row is not None
    assert row[0] == "US_PRE"


def test_log_trade_auto_derives_session_id_when_not_provided() -> None:
    conn = init_db(":memory:")
    log_trade(
        conn=conn,
        stock_code="005930",
        action="BUY",
        confidence=70,
        rationale="auto session",
        market="KR",
        exchange_code="KRX",
    )
    row = conn.execute("SELECT session_id FROM trades ORDER BY id DESC LIMIT 1").fetchone()
    assert row is not None
    assert row[0] != "UNKNOWN"


def test_log_trade_unknown_market_falls_back_to_unknown_session() -> None:
    conn = init_db(":memory:")
    log_trade(
        conn=conn,
        stock_code="X",
        action="BUY",
        confidence=70,
        rationale="unknown market",
        market="MARS",
        exchange_code="MARS",
    )
    row = conn.execute("SELECT session_id FROM trades ORDER BY id DESC LIMIT 1").fetchone()
    assert row is not None
    assert row[0] == "UNKNOWN"


def test_get_latest_buy_trade_prefers_exchange_code_match() -> None:
    conn = init_db(":memory:")
    log_trade(
        conn=conn,
        stock_code="AAPL",
        action="BUY",
        confidence=80,
        rationale="legacy",
        quantity=10,
        price=120.0,
        market="US_NASDAQ",
        exchange_code="",
        decision_id="legacy-buy",
    )
    log_trade(
        conn=conn,
        stock_code="AAPL",
        action="BUY",
        confidence=85,
        rationale="matched",
        quantity=5,
        price=125.0,
        market="US_NASDAQ",
        exchange_code="NASD",
        decision_id="matched-buy",
    )
    matched = get_latest_buy_trade(
        conn,
        stock_code="AAPL",
        market="US_NASDAQ",
        exchange_code="NASD",
    )
    assert matched is not None
    assert matched["decision_id"] == "matched-buy"


def test_decision_logs_session_id_migration_backfills_unknown() -> None:
    import sqlite3

    with tempfile.NamedTemporaryFile(suffix=".db", delete=False) as f:
        db_path = f.name
    try:
        old_conn = sqlite3.connect(db_path)
        old_conn.execute(
            """
            CREATE TABLE decision_logs (
                decision_id TEXT PRIMARY KEY,
                timestamp TEXT NOT NULL,
                stock_code TEXT NOT NULL,
                market TEXT NOT NULL,
                exchange_code TEXT NOT NULL,
                action TEXT NOT NULL,
                confidence INTEGER NOT NULL,
                rationale TEXT NOT NULL,
                context_snapshot TEXT NOT NULL,
                input_data TEXT NOT NULL
            )
            """
        )
        old_conn.execute(
            """
            INSERT INTO decision_logs (
                decision_id, timestamp, stock_code, market, exchange_code,
                action, confidence, rationale, context_snapshot, input_data
            ) VALUES (
                'd1', '2026-01-01T00:00:00+00:00', 'AAPL', 'US_NASDAQ', 'NASD',
                'BUY', 80, 'legacy row', '{}', '{}'
            )
            """
        )
        old_conn.commit()
        old_conn.close()
        conn = init_db(db_path)
        columns = {row[1] for row in conn.execute("PRAGMA table_info(decision_logs)").fetchall()}
        assert "session_id" in columns
        row = conn.execute("SELECT session_id FROM decision_logs WHERE decision_id='d1'").fetchone()
        assert row is not None
        assert row[0] == "UNKNOWN"
        conn.close()
    finally:
        os.unlink(db_path)

View File

@@ -49,7 +49,10 @@ def test_log_decision_creates_record(logger: DecisionLogger, db_conn: sqlite3.Co
    # Verify record exists in database
    cursor = db_conn.execute(
        (
            "SELECT decision_id, action, confidence, session_id "
            "FROM decision_logs WHERE decision_id = ?"
        ),
        (decision_id,),
    )
    row = cursor.fetchone()
@@ -57,6 +60,7 @@ def test_log_decision_creates_record(logger: DecisionLogger, db_conn: sqlite3.Co
    assert row[0] == decision_id
    assert row[1] == "BUY"
    assert row[2] == 85
    assert row[3] == "UNKNOWN"


def test_log_decision_stores_context_snapshot(logger: DecisionLogger) -> None:
@@ -84,6 +88,24 @@ def test_log_decision_stores_context_snapshot(logger: DecisionLogger) -> None:
    assert decision is not None
    assert decision.context_snapshot == context_snapshot
    assert decision.input_data == input_data
    assert decision.session_id == "UNKNOWN"


def test_log_decision_stores_explicit_session_id(logger: DecisionLogger) -> None:
    decision_id = logger.log_decision(
        stock_code="AAPL",
        market="US_NASDAQ",
        exchange_code="NASD",
        action="BUY",
        confidence=88,
        rationale="session check",
        context_snapshot={},
        input_data={},
        session_id="US_PRE",
    )
    decision = logger.get_decision_by_id(decision_id)
    assert decision is not None
    assert decision.session_id == "US_PRE"


def test_get_unreviewed_decisions(logger: DecisionLogger) -> None:
@@ -278,6 +300,7 @@ def test_decision_log_dataclass() -> None:
        stock_code="005930",
        market="KR",
        exchange_code="KRX",
        session_id="KRX_REG",
        action="BUY",
        confidence=85,
        rationale="Test",
@@ -286,6 +309,7 @@ def test_decision_log_dataclass() -> None:
    )
    assert log.decision_id == "test-uuid"
    assert log.session_id == "KRX_REG"
    assert log.action == "BUY"
    assert log.confidence == 85
    assert log.reviewed is False

View File

@@ -208,7 +208,9 @@ def test_identify_failure_patterns_empty(optimizer: EvolutionOptimizer) -> None:
@pytest.mark.asyncio
async def test_generate_strategy_creates_file(
    optimizer: EvolutionOptimizer, tmp_path: Path
) -> None:
    """Test that generate_strategy creates a strategy file."""
    failures = [
        {
@@ -234,7 +236,9 @@ async def test_generate_strategy_creates_file(optimizer: EvolutionOptimizer, tmp
    return {"action": "HOLD", "confidence": 50, "rationale": "Waiting"}
"""
    with patch.object(
        optimizer._client.aio.models, "generate_content", new=AsyncMock(return_value=mock_response)
    ):
        with patch("src.evolution.optimizer.STRATEGIES_DIR", tmp_path):
            strategy_path = await optimizer.generate_strategy(failures)
@@ -245,6 +249,59 @@ async def test_generate_strategy_creates_file(optimizer: EvolutionOptimizer, tmp
    assert "def evaluate" in strategy_path.read_text()


@pytest.mark.asyncio
async def test_generate_strategy_saves_valid_python_code(
    optimizer: EvolutionOptimizer,
    tmp_path: Path,
) -> None:
    """Test that syntactically valid generated code is saved."""
    failures = [{"decision_id": "1", "timestamp": "2024-01-15T09:30:00+00:00"}]
    mock_response = Mock()
    mock_response.text = (
        'price = market_data.get("current_price", 0)\n'
        "if price > 0:\n"
        '    return {"action": "BUY", "confidence": 80, "rationale": "Positive price"}\n'
        'return {"action": "HOLD", "confidence": 50, "rationale": "No signal"}\n'
    )
    with patch.object(
        optimizer._client.aio.models, "generate_content", new=AsyncMock(return_value=mock_response)
    ):
        with patch("src.evolution.optimizer.STRATEGIES_DIR", tmp_path):
            strategy_path = await optimizer.generate_strategy(failures)
    assert strategy_path is not None
    assert strategy_path.exists()


@pytest.mark.asyncio
async def test_generate_strategy_blocks_invalid_python_code(
    optimizer: EvolutionOptimizer,
    tmp_path: Path,
    caplog: pytest.LogCaptureFixture,
) -> None:
    """Test that syntactically invalid generated code is not saved."""
    failures = [{"decision_id": "1", "timestamp": "2024-01-15T09:30:00+00:00"}]
    mock_response = Mock()
    mock_response.text = (
        'if market_data.get("current_price", 0) > 0\n'
        '    return {"action": "BUY", "confidence": 80, "rationale": "broken"}\n'
    )
    with patch.object(
        optimizer._client.aio.models, "generate_content", new=AsyncMock(return_value=mock_response)
    ):
        with patch("src.evolution.optimizer.STRATEGIES_DIR", tmp_path):
            with caplog.at_level("WARNING"):
                strategy_path = await optimizer.generate_strategy(failures)
    assert strategy_path is None
    assert list(tmp_path.glob("*.py")) == []
    assert "failed syntax validation" in caplog.text


@pytest.mark.asyncio
async def test_generate_strategy_handles_api_error(optimizer: EvolutionOptimizer) -> None:
    """Test that generate_strategy handles Gemini API errors gracefully."""
@@ -264,6 +321,7 @@ def test_get_performance_summary() -> None:
    """Test getting performance summary from trades table."""
    # Create a temporary database with trades
    import tempfile

    with tempfile.NamedTemporaryFile(suffix=".db", delete=False) as tmp:
        tmp_path = tmp.name
@@ -558,7 +616,9 @@ def test_calculate_improvement_trend_declining(performance_tracker: PerformanceT
    assert trend["pnl_change"] == -250.0


def test_calculate_improvement_trend_insufficient_data(
    performance_tracker: PerformanceTracker,
) -> None:
    """Test improvement trend with insufficient data."""
    metrics = [
        StrategyMetrics(
@@ -672,7 +732,9 @@ async def test_full_evolution_pipeline(optimizer: EvolutionOptimizer, tmp_path:
    mock_response = Mock()
    mock_response.text = 'return {"action": "HOLD", "confidence": 50, "rationale": "Test"}'
    with patch.object(
        optimizer._client.aio.models, "generate_content", new=AsyncMock(return_value=mock_response)
    ):
        with patch("src.evolution.optimizer.STRATEGIES_DIR", tmp_path):
            with patch("subprocess.run") as mock_run:
                mock_run.return_value = Mock(returncode=0, stdout="", stderr="")

tests/test_kill_switch.py Normal file
View File

@@ -0,0 +1,55 @@
import pytest

from src.core.kill_switch import KillSwitchOrchestrator


@pytest.mark.asyncio
async def test_kill_switch_executes_steps_in_order() -> None:
    ks = KillSwitchOrchestrator()
    calls: list[str] = []

    async def _cancel() -> None:
        calls.append("cancel")

    def _refresh() -> None:
        calls.append("refresh")

    def _reduce() -> None:
        calls.append("reduce")

    def _snapshot() -> None:
        calls.append("snapshot")

    def _notify() -> None:
        calls.append("notify")

    report = await ks.trigger(
        reason="test",
        cancel_pending_orders=_cancel,
        refresh_order_state=_refresh,
        reduce_risk=_reduce,
        snapshot_state=_snapshot,
        notify=_notify,
    )
    assert report.steps == [
        "block_new_orders",
        "cancel_pending_orders",
        "refresh_order_state",
        "reduce_risk",
        "snapshot_state",
        "notify",
    ]
    assert calls == ["cancel", "refresh", "reduce", "snapshot", "notify"]
    assert report.errors == []


@pytest.mark.asyncio
async def test_kill_switch_collects_step_errors() -> None:
    ks = KillSwitchOrchestrator()

    def _boom() -> None:
        raise RuntimeError("boom")

    report = await ks.trigger(reason="test", cancel_pending_orders=_boom)
    assert any(err.startswith("cancel_pending_orders:") for err in report.errors)

View File

@@ -103,9 +103,7 @@ class TestSetupLogging:
        """setup_logging must attach a JSON handler to the root logger."""
        setup_logging(level=logging.DEBUG)
        root = logging.getLogger()
        json_handlers = [h for h in root.handlers if isinstance(h.formatter, JSONFormatter)]
        assert len(json_handlers) == 1
        assert root.level == logging.DEBUG

File diff suppressed because it is too large

Some files were not shown because too many files have changed in this diff