87 Commits

Author SHA1 Message Date
4710aa2d66 Merge pull request 'test: add session-boundary risk reload e2e regressions (#376)' (#386) from feature/issue-376-session-boundary-e2e into feature/v3-session-policy-stream
All checks were successful
Gitea CI / test (push) Successful in 32s
Reviewed-on: #386
2026-03-02 03:33:19 +09:00
agentson
ca9e1ad0e2 test: harden session-risk global reset isolation
All checks were successful
Gitea CI / test (push) Successful in 33s
Gitea CI / test (pull_request) Successful in 33s
2026-03-02 03:30:46 +09:00
agentson
928e60877c test: add session-boundary risk reload e2e regressions (#376)
All checks were successful
Gitea CI / test (push) Successful in 32s
Gitea CI / test (pull_request) Successful in 33s
2026-03-02 03:23:58 +09:00
16ddc22d14 Merge pull request 'blackout: persist session_id across queued intent lifecycle (#375)' (#385) from feature/issue-375-queued-intent-session-id into feature/v3-session-policy-stream
All checks were successful
Gitea CI / test (push) Successful in 33s
Reviewed-on: #385
2026-03-02 03:20:18 +09:00
agentson
4f21117eca blackout: simplify recovery session_id binding to queued value
All checks were successful
Gitea CI / test (push) Successful in 33s
Gitea CI / test (pull_request) Successful in 35s
2026-03-02 03:17:28 +09:00
agentson
8e02b1ea4f blackout: persist session_id across queued intent lifecycle (#375)
All checks were successful
Gitea CI / test (push) Successful in 32s
Gitea CI / test (pull_request) Successful in 33s
2026-03-02 03:09:33 +09:00
ccceb38483 Merge pull request 'blackout: enforce bounded oldest-drop queue policy on overflow (#371)' (#384) from feature/issue-371-blackout-queue-overflow into feature/v3-session-policy-stream
All checks were successful
Gitea CI / test (push) Successful in 32s
Reviewed-on: #384
2026-03-02 03:07:12 +09:00
agentson
96e5de7c5d test: align blackout queue mocks with overflow counter contract
All checks were successful
Gitea CI / test (push) Successful in 32s
Gitea CI / test (pull_request) Successful in 32s
2026-03-02 03:03:35 +09:00
agentson
7959b749c7 blackout: enforce bounded oldest-drop queue policy on overflow (#371)
All checks were successful
Gitea CI / test (push) Successful in 33s
Gitea CI / test (pull_request) Successful in 33s
2026-03-02 02:57:08 +09:00
f7e242d147 Merge pull request 'trade: apply runtime strategy/fx pnl split on sell paths (#370)' (#383) from feature/issue-370-fx-pnl-runtime-split into feature/v3-session-policy-stream
All checks were successful
Gitea CI / test (push) Successful in 33s
Reviewed-on: #383
2026-03-02 02:53:04 +09:00
agentson
589cc42e00 docs: bump requirements registry metadata for push governance sync
All checks were successful
Gitea CI / test (push) Successful in 32s
Gitea CI / test (pull_request) Successful in 32s
2026-03-02 02:50:08 +09:00
agentson
920630e30e docs/main: clarify fx context behavior and rate-key provenance
Some checks failed
Gitea CI / test (push) Failing after 5s
Gitea CI / test (pull_request) Successful in 34s
2026-03-02 02:44:49 +09:00
agentson
d4f37ee392 trade: apply runtime strategy/fx pnl split on sell paths (#370)
All checks were successful
Gitea CI / test (push) Successful in 32s
Gitea CI / test (pull_request) Successful in 32s
2026-03-02 02:35:54 +09:00
3914f24872 Merge pull request 'backtest: reflect cost/execution effects in fold scoring (#368)' (#382) from feature/issue-368-backtest-cost-exec into feature/v3-session-policy-stream
All checks were successful
Gitea CI / test (push) Successful in 33s
Reviewed-on: #382
2026-03-02 02:30:45 +09:00
agentson
ed713fdf40 style: wrap long helper signature in backtest pipeline
All checks were successful
Gitea CI / test (push) Successful in 32s
Gitea CI / test (pull_request) Successful in 33s
2026-03-02 02:24:01 +09:00
agentson
c27decb6b1 backtest: reflect cost/execution effects in fold scoring (#368)
Some checks failed
Gitea CI / test (push) Failing after 6s
Gitea CI / test (pull_request) Failing after 5s
2026-03-02 02:10:08 +09:00
8ac7436953 Merge pull request 'docs: resync implementation audit status with actual code gaps (#373)' (#380) from feature/issue-373-audit-sync into feature/v3-session-policy-stream
All checks were successful
Gitea CI / test (push) Successful in 36s
Reviewed-on: #380
2026-03-02 02:06:40 +09:00
agentson
1d404975ea docs: resync implementation audit status with actual code gaps (#373)
All checks were successful
Gitea CI / test (push) Successful in 33s
Gitea CI / test (pull_request) Successful in 33s
2026-03-02 01:54:27 +09:00
0b64206c01 Merge pull request 'governance: harden fail-fast checks for traceability and task-test pairing (#372)' (#379) from feature/issue-372-validator-hardening into feature/v3-session-policy-stream
All checks were successful
Gitea CI / test (push) Successful in 33s
Reviewed-on: #379
2026-03-02 01:52:49 +09:00
agentson
c849e60199 ci: retrigger after PR body update
All checks were successful
Gitea CI / test (push) Successful in 33s
Gitea CI / test (pull_request) Successful in 32s
2026-03-02 01:51:28 +09:00
agentson
53a6ef2968 governance: enforce fail-fast ops traceability and task-test pairing (#372)
Some checks failed
Gitea CI / test (push) Successful in 32s
Gitea CI / test (pull_request) Failing after 5s
2026-03-02 01:41:06 +09:00
1f05f7c566 Merge pull request 'process: prevent newline-escaped Gitea comments via helper + guard (#372)' (#378) from feature/issue-372-comment-newline-guard into feature/v3-session-policy-stream
All checks were successful
Gitea CI / test (push) Successful in 31s
Reviewed-on: #378
2026-03-02 01:35:57 +09:00
agentson
d469002be7 test: add unhappy-path coverage for newline guard tokens
All checks were successful
Gitea CI / test (push) Successful in 32s
Gitea CI / test (pull_request) Successful in 32s
2026-03-02 01:33:39 +09:00
agentson
3712a7a30b test: cover governance newline-helper required tokens
All checks were successful
Gitea CI / test (push) Successful in 32s
Gitea CI / test (pull_request) Successful in 33s
2026-03-02 01:30:02 +09:00
agentson
746f873650 process: add newline-safe tea comment helper and governance guard (#372)
All checks were successful
Gitea CI / test (push) Successful in 35s
Gitea CI / test (pull_request) Successful in 34s
2026-03-02 01:21:13 +09:00
15e45fd099 Merge pull request 'docs: add top-level documentation hub index (#362)' (#367) from feature/issue-362-docs-index into feature/v3-session-policy-stream
All checks were successful
Gitea CI / test (push) Successful in 35s
Reviewed-on: #367
2026-03-01 23:29:48 +09:00
agentson
a36e85b708 docs: include agent constraints and skills in docs hub
All checks were successful
Gitea CI / test (push) Successful in 35s
Gitea CI / test (pull_request) Successful in 33s
2026-03-01 23:26:27 +09:00
agentson
d2ac0dae53 docs: add top-level documentation hub index (#362)
All checks were successful
Gitea CI / test (push) Successful in 39s
Gitea CI / test (pull_request) Successful in 35s
2026-03-01 23:23:38 +09:00
461fdc755b Merge pull request 'docs: align template/commands with docs sync gate (#364)' (#366) from feature/issue-364-docs-sync-integration into feature/v3-session-policy-stream
All checks were successful
Gitea CI / test (push) Successful in 34s
Reviewed-on: #366
2026-03-01 23:22:38 +09:00
agentson
243469cd40 docs: address PR #366 review on docs sync gate
All checks were successful
Gitea CI / test (push) Successful in 33s
Gitea CI / test (pull_request) Successful in 32s
2026-03-01 23:18:42 +09:00
agentson
8e819e5939 docs: align template/commands with docs sync gate (#364)
All checks were successful
Gitea CI / test (push) Successful in 39s
Gitea CI / test (pull_request) Successful in 33s
2026-03-01 23:13:51 +09:00
774ce8e94f Merge pull request 'docs validator: add docs sync invariants with tests (#363)' (#365) from feature/issue-363-validate-docs-sync into feature/v3-session-policy-stream
All checks were successful
Gitea CI / test (push) Successful in 36s
Reviewed-on: #365
2026-03-01 23:12:17 +09:00
agentson
6656adc2b7 ci/docs: wire docs sync validator into workflows and tighten tests
All checks were successful
Gitea CI / test (push) Successful in 33s
Gitea CI / test (pull_request) Successful in 35s
2026-03-01 23:09:20 +09:00
agentson
51fd6b7a72 docs validator: add validate_docs_sync with unit tests (#363)
All checks were successful
Gitea CI / test (push) Successful in 32s
Gitea CI / test (pull_request) Successful in 32s
2026-03-01 23:03:52 +09:00
18931c8b58 Merge pull request 'governance: require approval evidence for READ-ONLY file changes (#356)' (#361) from feature/issue-356-readonly-approval into feature/v3-session-policy-stream
All checks were successful
Gitea CI / test (push) Successful in 33s
Reviewed-on: #361
2026-03-01 22:46:53 +09:00
agentson
c431d82c0d test: cover no-readonly-change early return in governance validator
All checks were successful
Gitea CI / test (push) Successful in 33s
Gitea CI / test (pull_request) Successful in 33s
2026-03-01 22:44:02 +09:00
agentson
6be78d73ff governance: enforce READ-ONLY approval evidence for protected file changes (#356)
All checks were successful
Gitea CI / test (push) Successful in 32s
Gitea CI / test (pull_request) Successful in 33s
2026-03-01 22:09:48 +09:00
f50833941c Merge pull request 'docs validator: enforce source path policy for ouroboros plan links (#357)' (#360) from feature/issue-357-docs-source-path-validator into feature/v3-session-policy-stream
All checks were successful
Gitea CI / test (push) Successful in 33s
Reviewed-on: #360
2026-03-01 21:32:48 +09:00
agentson
d1ef79f385 docs validator: handle plan link fragments and avoid duplicate link errors
All checks were successful
Gitea CI / test (push) Successful in 33s
Gitea CI / test (pull_request) Successful in 32s
2026-03-01 21:20:06 +09:00
agentson
117657d13f docs: enforce source path policy for ouroboros plan links (#357)
All checks were successful
Gitea CI / test (push) Successful in 32s
Gitea CI / test (pull_request) Successful in 32s
2026-03-01 21:11:34 +09:00
e430d6f28a Merge pull request 'test: add non-ci strict #TBD regression coverage for handover gate (#358)' (#359) from feature/issue-358-handover-nonci-tbd into feature/v3-session-policy-stream
All checks were successful
Gitea CI / test (push) Successful in 33s
Reviewed-on: #359
2026-03-01 21:10:05 +09:00
agentson
999091e003 test: cover non-ci strict #TBD guard in handover check (#358)
All checks were successful
Gitea CI / test (push) Successful in 33s
Gitea CI / test (pull_request) Successful in 32s
2026-03-01 21:02:51 +09:00
07cb6b4ad8 Merge pull request 'workflow: split session handover gate by execution-environment mode (#353)' (#354) from feature/issue-353-ci-handover-mode-v2 into feature/v3-session-policy-stream
All checks were successful
Gitea CI / test (push) Successful in 37s
Reviewed-on: #354
2026-03-01 21:00:51 +09:00
agentson
8f2c08e2b7 test: add ci-mode coverage for session handover gate
All checks were successful
Gitea CI / test (push) Successful in 34s
Gitea CI / test (pull_request) Successful in 32s
2026-03-01 20:43:06 +09:00
agentson
940a7e094b workflow: skip main/master branch guard in --ci mode
All checks were successful
Gitea CI / test (push) Successful in 37s
Gitea CI / test (pull_request) Successful in 34s
2026-03-01 20:35:22 +09:00
agentson
05be112085 docs: move v2/v3 source plans under docs/ouroboros/source
All checks were successful
Gitea CI / test (push) Successful in 32s
Gitea CI / test (pull_request) Successful in 33s
2026-03-01 20:25:39 +09:00
agentson
2c6e9802be docs: sync requirements registry metadata for policy doc changes
All checks were successful
Gitea CI / test (push) Successful in 34s
Gitea CI / test (pull_request) Successful in 32s
2026-03-01 20:23:34 +09:00
agentson
4c0b55d67c docs: replace absolute plan links with repo-relative paths
Some checks failed
Gitea CI / test (push) Failing after 8s
Gitea CI / test (pull_request) Failing after 6s
2026-03-01 20:22:13 +09:00
agentson
5730f0db2a ci: fix lint baseline and stabilize failing main tests
Some checks failed
Gitea CI / test (push) Failing after 5s
Gitea CI / test (pull_request) Failing after 5s
2026-03-01 20:17:13 +09:00
agentson
6f047a6daf ci: add --ci mode for session handover gate in workflows (#353)
Some checks failed
Gitea CI / test (push) Failing after 5s
Gitea CI / test (pull_request) Failing after 6s
2026-03-01 20:03:00 +09:00
841f8eaa8d Merge pull request 'docs: apply all v2/v3 implementation-audit document feedback (#349)' (#351) from feature/issue-349-doc-audit-feedback into feature/v3-session-policy-stream
Some checks failed
Gitea CI / test (push) Failing after 1m5s
Reviewed-on: #351
2026-03-01 17:10:07 +09:00
agentson
6b34367656 docs: apply all v2/v3 implementation-audit document feedback (#349)
Some checks failed
Gitea CI / test (push) Failing after 3s
Gitea CI / test (pull_request) Failing after 3s
Fold all [comment] blocks left across 11 review cycles into the document bodies and remove the blocks.

Changed documents:
- docs/architecture.md: SmartScanner operating mode (both), dashboard 10 APIs,
  DB schema (session_id/fx_pnl/mode), refreshed config variables
- docs/commands.md: add /api/pnl/history and /api/positions endpoints
- docs/testing.md: remove hard-coded test counts, refresh SmartScanner fallback,
  reflect Dashboard 10 API routes
- README.md: remove fixed counts, state Gitea CI explicitly, mark per-file counts as 'varies per CI run'
- CLAUDE.md: rename SmartScanner section, remove fixed counts
- docs/requirements-log.md: add implementation entries #318-#331
- docs/ouroboros/80_implementation_audit.md: split ROOT-5/6/7,
  list function names alongside REQ-V3-008, update to v3 ~85% / governance ~60%
- docs/ouroboros/85_loss_recovery_action_plan.md: list function names for ACT-07,
  refresh test counts, improve accuracy of 6.1/6.2
- docs/ouroboros/60_repo_enforcement_checklist.md: add CI job/step distinction table
- docs/ouroboros/README.md: add (A)/(B) auxiliary labels for the 50_* documents

Closes #349
2026-03-01 17:06:56 +09:00
e65a0e3585 Merge pull request 'test: pin explicit session_id propagation regression (#326)' (#348) from feature/issue-326-session-id-explicit-propagation into feature/v3-session-policy-stream
Some checks failed
Gitea CI / test (push) Has been cancelled
Reviewed-on: #348
2026-03-01 13:42:17 +09:00
agentson
e3a3aada83 test: set rt_cd success response in session-id daily regression test (#326)
Some checks failed
Gitea CI / test (push) Failing after 3s
Gitea CI / test (pull_request) Failing after 3s
2026-03-01 13:38:57 +09:00
agentson
db316c539b test: align daily session mock order response with rt_cd success path (#326)
Some checks failed
Gitea CI / test (push) Has been cancelled
Gitea CI / test (pull_request) Failing after 3s
2026-03-01 13:37:04 +09:00
agentson
2df787757a test: enforce explicit runtime session_id propagation in realtime/daily logs (#326)
Some checks failed
Gitea CI / test (push) Has been cancelled
Gitea CI / test (pull_request) Failing after 4s
2026-03-01 10:14:59 +09:00
5f079206c6 Merge pull request 'infra: strengthen automated CI validation (policy registry + TASK-REQ mapping) (#330)' (#347) from feature/issue-330-governance-ci-guard into feature/v3-session-policy-stream
Some checks failed
Gitea CI / test (push) Has been cancelled
Reviewed-on: #347
2026-03-01 10:11:18 +09:00
agentson
e9de950bec ci: wire governance traceability env in gitea workflow and guard zero SHA (#330)
Some checks failed
Gitea CI / test (push) Failing after 3s
Gitea CI / test (pull_request) Failing after 3s
2026-03-01 10:06:25 +09:00
agentson
c31ee37f13 infra: enforce governance sync and TASK-REQ mapping in CI (#330)
Some checks failed
Gitea CI / test (push) Has been cancelled
Gitea CI / test (pull_request) Failing after 3s
2026-03-01 09:58:45 +09:00
2ba1d1ad4d Merge pull request 'feat: switch the Triple Barrier time barrier to a calendar-minute basis (#329)' (#346) from feature/issue-329-triple-barrier-calendar-minutes into feature/v3-session-policy-stream
Some checks failed
Gitea CI / test (push) Has been cancelled
Reviewed-on: #346
2026-03-01 09:57:01 +09:00
agentson
273a3c182a refactor: simplify timestamp normalization after non-null validation (#329)
Some checks failed
Gitea CI / test (push) Failing after 3s
Gitea CI / test (pull_request) Failing after 4s
2026-03-01 09:50:45 +09:00
agentson
701350fb65 feat: switch backtest triple barrier to calendar-minute horizon (#329)
Some checks failed
Gitea CI / test (push) Has been cancelled
Gitea CI / test (pull_request) Failing after 3s
2026-03-01 09:44:24 +09:00
35d81fb73d Merge pull request 'feat: strengthen price/session revalidation on blackout recovery (#328)' (#345) from feature/issue-328-blackout-revalidation into feature/v3-session-policy-stream
Some checks failed
Gitea CI / test (push) Has been cancelled
Reviewed-on: #345
2026-03-01 09:43:09 +09:00
agentson
5fae9765e7 test: add blackout recovery overseas/failure revalidation coverage (#328)
Some checks failed
Gitea CI / test (push) Failing after 3s
Gitea CI / test (pull_request) Failing after 3s
2026-03-01 09:40:00 +09:00
agentson
0ceb2dfdc9 feat: revalidate blackout recovery orders by price/session context (#328)
Some checks failed
Gitea CI / test (push) Has been cancelled
Gitea CI / test (pull_request) Failing after 3s
2026-03-01 09:33:28 +09:00
89347ee525 Merge pull request 'feat: dynamically reload risk parameters on session transitions (#327)' (#344) from feature/issue-327-session-risk-reload into feature/v3-session-policy-stream
Some checks failed
Gitea CI / test (push) Has been cancelled
Reviewed-on: #344
2026-03-01 09:31:14 +09:00
agentson
42c06929ea test: add session-risk reload edge-case coverage (#327)
Some checks failed
Gitea CI / test (push) Failing after 3s
Gitea CI / test (pull_request) Failing after 3s
2026-02-28 22:20:59 +09:00
agentson
5facd22ef9 feat: reload session risk profile on session transitions (#327)
Some checks failed
Gitea CI / test (push) Has been cancelled
Gitea CI / test (pull_request) Failing after 4s
2026-02-28 21:04:06 +09:00
3af62ce598 Merge pull request 'feat: feed real features (ATR, pred_down_prob) into the v2 staged exit (#325)' (#343) from feature/issue-325-staged-exit-real-features into feature/v3-session-policy-stream
Some checks failed
Gitea CI / test (push) Has been cancelled
Reviewed-on: #343
2026-02-28 20:59:38 +09:00
agentson
62cd8a81a4 feat: feed staged-exit with ATR/RSI runtime features (#325)
Some checks failed
Gitea CI / test (push) Failing after 3s
Gitea CI / test (pull_request) Failing after 3s
2026-02-28 20:58:23 +09:00
dd8549b912 Merge pull request 'feat: KR ATR-based dynamic hard-stop threshold (#318)' (#342) from feature/issue-318-kr-atr-dynamic-stoploss into feature/v3-session-policy-stream
Some checks failed
Gitea CI / test (push) Has been cancelled
Reviewed-on: #342
2026-02-28 20:56:18 +09:00
agentson
8bba85da1e feat: add KR ATR-based dynamic hard-stop threshold (#318)
Some checks failed
Gitea CI / test (push) Failing after 4s
Gitea CI / test (pull_request) Failing after 3s
2026-02-28 18:30:52 +09:00
fc6083bd2a Merge pull request 'feat: stop-loss reentry cooldown guard (#319)' (#341) from feature/issue-319-stoploss-reentry-cooldown into feature/v3-session-policy-stream
Some checks failed
Gitea CI / test (push) Has been cancelled
Reviewed-on: #341
2026-02-28 18:27:12 +09:00
agentson
5f53b02da8 test: add stop-loss reentry cooldown behavioral coverage (#319)
Some checks failed
Gitea CI / test (pull_request) Failing after 4s
Gitea CI / test (push) Failing after 4s
2026-02-28 18:24:28 +09:00
agentson
82808a8493 feat: enforce stop-loss reentry cooldown window (#319)
2026-02-28 18:24:28 +09:00
9456d66de4 Merge pull request 'feat: US minimum price entry filter (#320)' (#340) from feature/issue-320-us-min-price-filter into feature/v3-session-policy-stream
Some checks failed
Gitea CI / test (push) Has been cancelled
Reviewed-on: #340
2026-02-28 18:22:28 +09:00
33b97f21ac Merge pull request 'fix: log blackout recovery executions to DB (#324)' (#339) from feature/issue-324-blackout-recovery-trade-log into feature/v3-session-policy-stream
Some checks failed
Gitea CI / test (push) Has been cancelled
Reviewed-on: #339
2026-02-28 18:22:11 +09:00
3b135c3080 Merge pull request 'fix: SELL outcome PnL uses sell quantity (#322)' (#337) from feature/issue-322-sell-pnl-sell-qty into feature/v3-session-policy-stream
Some checks failed
Gitea CI / test (push) Has been cancelled
Reviewed-on: #337
2026-02-28 18:21:34 +09:00
1b0d5568d3 Merge pull request 'infra: governance registry sync gate in CI (#330)' (#335) from feature/issue-330-governance-ci-guard into feature/v3-session-policy-stream
Some checks failed
Gitea CI / test (push) Has been cancelled
Reviewed-on: #335
2026-02-28 18:21:10 +09:00
agentson
2406a80782 test: add governance validator unit coverage (#330)
Some checks failed
Gitea CI / test (push) Has been cancelled
Gitea CI / test (pull_request) Failing after 53s
2026-02-28 17:40:51 +09:00
b8569d9de1 Merge pull request 'fix: exchange-aware latest BUY matching (#323)' (#338) from feature/issue-323-buy-match-exchange-code into feature/v3-session-policy-stream
Some checks failed
Gitea CI / test (push) Has been cancelled
Reviewed-on: #338
2026-02-28 17:37:43 +09:00
agentson
9267f1fb77 test: add US minimum price boundary and KR-scope coverage (#320)
Some checks failed
Gitea CI / test (push) Has been cancelled
Gitea CI / test (pull_request) Has been cancelled
2026-02-28 17:15:10 +09:00
agentson
fd0246769a test: add sell qty fallback guard and quantity-basis coverage (#322)
Some checks failed
Gitea CI / test (push) Has been cancelled
Gitea CI / test (pull_request) Has been cancelled
2026-02-28 17:13:56 +09:00
agentson
08607eaa56 feat: block US BUY entries below minimum price threshold (#320)
Some checks failed
Gitea CI / test (push) Has been cancelled
Gitea CI / test (pull_request) Has been cancelled
2026-02-28 14:40:19 +09:00
agentson
5c107d2435 fix: persist blackout recovery executions to trades log (#324)
Some checks failed
Gitea CI / test (push) Has been cancelled
Gitea CI / test (pull_request) Has been cancelled
2026-02-28 14:39:30 +09:00
agentson
6d7e6557d2 fix: compute SELL decision outcome using sell quantity (#322)
Some checks failed
Gitea CI / test (push) Has been cancelled
Gitea CI / test (pull_request) Has been cancelled
2026-02-28 14:38:10 +09:00
agentson
2e394cd17c infra: enforce governance registry sync checks in CI (#330)
Some checks failed
Gitea CI / test (push) Has been cancelled
Gitea CI / test (pull_request) Has been cancelled
2026-02-28 14:36:05 +09:00
96 changed files with 4883 additions and 1558 deletions


@@ -41,12 +41,23 @@
 - [ ] Latest `workflow/session-handover.md` entry updated for the current branch and current day (UTC)
 - Latest handover entry heading:
+## Docs Sync Gate (required when docs files change)
+- [ ] `python3 scripts/validate_docs_sync.py` passes (write N/A for PRs that do not touch `docs`)
+## Runtime Evidence
+- Actual system run command:
+- Monitoring log path:
+- Anomaly signs / issue links:
+## READ-ONLY Approval (Required when touching READ-ONLY files)
+- Touched READ-ONLY files:
+- Human approval:
+- Test suite 1:
+- Test suite 2:
 ## Approval Gate
 - [ ] Static Verifier approval comment linked


@@ -13,6 +13,8 @@ jobs:
     steps:
       - name: Checkout
         uses: actions/checkout@v4
+        with:
+          fetch-depth: 0
       - name: Set up Python
         uses: actions/setup-python@v5
@@ -23,14 +25,31 @@ jobs:
         run: pip install ".[dev]"
       - name: Session handover gate
-        run: python3 scripts/session_handover_check.py --strict
+        run: python3 scripts/session_handover_check.py --strict --ci
       - name: Validate governance assets
-        run: python3 scripts/validate_governance_assets.py
-        env:
-          GOVERNANCE_PR_TITLE: ${{ github.event.pull_request.title }}
-          GOVERNANCE_PR_BODY: ${{ github.event.pull_request.body }}
+        run: |
+          RANGE=""
+          if [ "${{ github.event_name }}" = "pull_request" ] && [ -n "${{ github.event.pull_request.base.sha }}" ]; then
+            RANGE="${{ github.event.pull_request.base.sha }}...${{ github.sha }}"
+          elif [ -n "${{ github.event.before }}" ] && [ "${{ github.event.before }}" != "0000000000000000000000000000000000000000" ]; then
+            RANGE="${{ github.event.before }}...${{ github.sha }}"
+          fi
+          if [ -n "$RANGE" ]; then
+            python3 scripts/validate_governance_assets.py "$RANGE"
+          else
+            python3 scripts/validate_governance_assets.py
+          fi
       - name: Validate Ouroboros docs
         run: python3 scripts/validate_ouroboros_docs.py
+      - name: Validate docs sync
+        run: python3 scripts/validate_docs_sync.py
       - name: Lint
         run: ruff check src/ tests/


@@ -22,14 +22,31 @@ jobs:
         run: pip install ".[dev]"
       - name: Session handover gate
-        run: python3 scripts/session_handover_check.py --strict
+        run: python3 scripts/session_handover_check.py --strict --ci
       - name: Validate governance assets
-        run: python3 scripts/validate_governance_assets.py
-        env:
-          GOVERNANCE_PR_TITLE: ${{ github.event.pull_request.title }}
-          GOVERNANCE_PR_BODY: ${{ github.event.pull_request.body }}
+        run: |
+          RANGE=""
+          if [ "${{ github.event_name }}" = "pull_request" ]; then
+            RANGE="${{ github.event.pull_request.base.sha }}...${{ github.sha }}"
+          elif [ "${{ github.event_name }}" = "push" ] && [ "${{ github.event.before }}" != "0000000000000000000000000000000000000000" ]; then
+            RANGE="${{ github.event.before }}...${{ github.sha }}"
+          fi
+          if [ -n "$RANGE" ]; then
+            python3 scripts/validate_governance_assets.py "$RANGE"
+          else
+            python3 scripts/validate_governance_assets.py
+          fi
       - name: Validate Ouroboros docs
         run: python3 scripts/validate_ouroboros_docs.py
+      - name: Validate docs sync
+        run: python3 scripts/validate_docs_sync.py
       - name: Lint
         run: ruff check src/ tests/


@@ -81,12 +81,13 @@ SCANNER_TOP_N=3 # Max candidates per scan
 - **Evolution-ready** — Selection context logged for strategy optimization
 - **Fault-tolerant** — Falls back to static watchlist on API failure
-### Realtime Mode Only
+### Trading Mode Integration
-Smart Scanner runs in `TRADE_MODE=realtime` only. Daily mode uses static watchlists for batch efficiency.
+Smart Scanner runs in both `TRADE_MODE=realtime` and `daily` paths. On API failure, domestic stocks fall back to a static watchlist; overseas stocks fall back to a dynamic universe (active positions, recent holdings).
 ## Documentation
+- **[Documentation Hub](docs/README.md)** — Top-level doc routing and reading order
 - **[Workflow Guide](docs/workflow.md)** — Git workflow policy and agent-based development
 - **[Command Reference](docs/commands.md)** — Common failures, build commands, troubleshooting
 - **[Architecture](docs/architecture.md)** — System design, components, data flow
@@ -122,7 +123,7 @@ src/
 ├── broker/ # KIS API client (domestic + overseas)
 ├── context/ # L1-L7 hierarchical memory system
 ├── core/ # Risk manager (READ-ONLY)
-├── dashboard/ # FastAPI read-only monitoring (8 API endpoints)
+├── dashboard/ # FastAPI read-only monitoring (10 API endpoints)
 ├── data/ # External data integration (news, market data, calendar)
 ├── evolution/ # Self-improvement (optimizer, daily review, scorecard)
 ├── logging/ # Decision logger (audit trail)
@@ -133,7 +134,7 @@ src/
 ├── main.py # Trading loop orchestrator
 └── config.py # Settings (from .env)
-tests/ # 551 tests across 25 files
+tests/ # 998 tests across 41 files
 docs/ # Extended documentation
 ```


@@ -39,7 +39,7 @@ Trades via the KIS (Korea Investment & Securities) API, decides with Google Gemini, and auto…
 | Context | `src/context/` | L1-L7 hierarchical memory system |
 | Analysis | `src/analysis/` | RSI, ATR, Smart Volatility Scanner |
 | Notifications | `src/notifications/` | Two-way Telegram (alerts + 9 commands) |
-| Dashboard | `src/dashboard/` | FastAPI read-only monitoring (8 APIs) |
+| Dashboard | `src/dashboard/` | FastAPI read-only monitoring (10 APIs) |
 | Evolution | `src/evolution/` | Strategy evolution + Daily Review + Scorecard |
 | Decision log | `src/logging/` | Full trade-decision audit trail |
 | Data | `src/data/` | News, market data, economic calendar integration |
@@ -153,19 +153,16 @@ docker compose up -d ouroboros
 ## Tests
-551 tests implemented across 25 files. Minimum coverage 80%.
+998 tests implemented across 41 files. Minimum coverage 80%.
 ```
-tests/test_scenario_engine.py — scenario matching (44)
-tests/test_data_integration.py — external data integration (38)
-tests/test_pre_market_planner.py — playbook generation (37)
-tests/test_main.py — trading loop integration (37)
-tests/test_token_efficiency.py — token optimization (34)
-tests/test_strategy_models.py — strategy model validation (33)
-tests/test_telegram_commands.py — Telegram commands (31)
-tests/test_latency_control.py — latency control (30)
-tests/test_telegram.py — Telegram alerts (25)
-... plus 16 more files
+tests/test_main.py — trading loop integration
+tests/test_scenario_engine.py — scenario matching
+tests/test_pre_market_planner.py — playbook generation
+tests/test_overseas_broker.py — overseas broker
+tests/test_telegram_commands.py — Telegram commands
+tests/test_telegram.py — Telegram alerts
+... plus 35 more files ※ per-file counts may vary per CI run
 ```
 **Details**: [docs/testing.md](docs/testing.md)
@@ -177,8 +174,8 @@ tests/test_telegram.py — Telegram alerts (25)
 - **AI**: Google Gemini Pro
 - **DB**: SQLite (5 tables: trades, contexts, decision_logs, playbooks, context_metadata)
 - **Dashboard**: FastAPI + uvicorn
-- **Verification**: pytest + coverage (551 tests)
-- **CI/CD**: GitHub Actions
+- **Verification**: pytest + coverage (998 tests)
+- **CI/CD**: Gitea CI (`.gitea/workflows/ci.yml`)
 - **Deployment**: Docker + Docker Compose
 ## Project Structure
@@ -212,7 +209,7 @@ The-Ouroboros/
 │ ├── config.py # Pydantic settings
 │ ├── db.py # SQLite database
 │ └── main.py # async trading loop
-├── tests/ # 551 tests (25 files)
+├── tests/ # 998 tests (41 files)
 ├── Dockerfile # multi-stage build
 ├── docker-compose.yml # service orchestration
 └── pyproject.toml # dependencies and tool settings
@@ -220,6 +217,7 @@ The-Ouroboros/
 ## Documentation
+- **[Documentation Hub](docs/README.md)** — full doc routing, priorities, reading order
 - **[Architecture](docs/architecture.md)** — system design, components, data flow
 - **[Testing](docs/testing.md)** — test structure, coverage, writing guide
 - **[Commands](docs/commands.md)** — CLI, Dashboard, Telegram commands

docs/README.md (new file, +48 lines)

@@ -0,0 +1,48 @@
# Documentation Hub
This document is the top-level routing hub for every document in the repository.
Before jumping into an individual document, navigate by the priorities and reading order below.
## Priority (SSOT)
1. Execution/collaboration rules SSOT: [workflow.md](./workflow.md)
2. Commands/incident response SSOT: [commands.md](./commands.md)
3. Testing/verification SSOT: [testing.md](./testing.md)
4. Agent constraints SSOT: [agents.md](./agents.md)
5. Requirements traceability SSOT: [requirements-log.md](./requirements-log.md)
6. Ouroboros execution doc hub: [ouroboros/README.md](./ouroboros/README.md)
## Recommended Reading Order
1. [workflow.md](./workflow.md)
2. [commands.md](./commands.md)
3. [testing.md](./testing.md)
4. [agents.md](./agents.md)
5. [architecture.md](./architecture.md)
6. [context-tree.md](./context-tree.md)
7. [disaster_recovery.md](./disaster_recovery.md)
8. [live-trading-checklist.md](./live-trading-checklist.md)
9. [ouroboros/README.md](./ouroboros/README.md)
## Document Map
- Core
  - [workflow.md](./workflow.md): branch/PR/review/session-handover policy
  - [commands.md](./commands.md): run commands, failure cases, troubleshooting
  - [testing.md](./testing.md): test structure, writing rules, verification commands
  - [agents.md](./agents.md): agent work constraints and prohibited actions
  - [agent-constraints.md](./agent-constraints.md): persistent constraints and operational invariants (supplements agents.md)
  - [skills.md](./skills.md): list of installed/available skills and usage guide
- Design and Operations
  - [architecture.md](./architecture.md): system structure and component responsibilities
  - [context-tree.md](./context-tree.md): L1-L7 context hierarchy design
  - [disaster_recovery.md](./disaster_recovery.md): backup/recovery procedures
  - [live-trading-checklist.md](./live-trading-checklist.md): go-live transition checklist
- Governance and Planning
  - [requirements-log.md](./requirements-log.md): requirements/feedback history
  - [ouroboros/README.md](./ouroboros/README.md): v2/v3 execution doc routing
## Change Rule
- When documents are added, moved, or heavily reorganized, update this file's links and categories together.
- Use relative paths only for links.


@@ -84,6 +84,37 @@ High-frequency trading with individual stock analysis:
- Momentum scoring (0-100 scale)
- Breakout/breakdown pattern detection
**TripleBarrierLabeler** (`triple_barrier.py`) — Financial time-series labeling (v2)
- Triple Barrier method: upper (take-profit), lower (stop-loss), time barrier
- First-touch labeling: labels confirmed by whichever barrier is breached first
- `max_holding_minutes` (calendar-minute) time barrier — session-aware, bar-period independent
- Tie-break mode: `"stop_first"` (conservative) or `"take_first"`
- Feature-label strict separation to prevent look-ahead bias
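The first-touch rule described above can be sketched in a few lines. This is an illustrative reconstruction, not the repository's actual `TripleBarrierLabeler` API: the function name, the bar tuple shape, and the percentage parameters are assumptions; only `max_holding_minutes` and the `"stop_first"`/`"take_first"` tie-break modes come from the description.

```python
from datetime import timedelta

def first_touch_label(bars, entry_price, take_pct, stop_pct,
                      max_holding_minutes, tie_break="stop_first"):
    """Label one entry by whichever barrier is touched first.

    bars: time-ordered (timestamp, high, low) tuples from the entry bar on.
    Returns +1 (take-profit), -1 (stop-loss), or 0 (time barrier / no touch).
    """
    if not bars:
        return 0
    upper = entry_price * (1 + take_pct)   # take-profit barrier
    lower = entry_price * (1 - stop_pct)   # stop-loss barrier
    # Calendar-minute deadline: wall-clock based, independent of bar period.
    deadline = bars[0][0] + timedelta(minutes=max_holding_minutes)
    for ts, high, low in bars:
        if ts > deadline:                  # time barrier wins
            return 0
        hit_take, hit_stop = high >= upper, low <= lower
        if hit_take and hit_stop:          # both touched within one bar
            return -1 if tie_break == "stop_first" else 1
        if hit_take:
            return 1
        if hit_stop:
            return -1
    return 0                               # ran out of data before any barrier
```

Labels are computed only from bars at or after entry, which is the feature-label separation the bullet above refers to.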
**BacktestPipeline** (`backtest_pipeline.py`) — End-to-end validation pipeline (v2)
- `run_v2_backtest_pipeline()`: cost guard → triple barrier labeling → walk-forward splits → fold scoring
- `BacktestPipelineResult`: artifact contract for reproducible output
- `fold_has_leakage()`: leakage detection utility
**WalkForwardSplit** (`walk_forward_split.py`) — Time-series validation (v2)
- Fold-based walk-forward splits (no random shuffling)
- Purge/Embargo: excludes N bars before/after fold boundaries to prevent data leakage
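A minimal sketch of the purge/embargo idea, assuming index-based folds; the repo's `WalkForwardSplit` almost certainly parameterizes this differently, and the gap here is simplified to one direction per boundary.

```python
def walk_forward_splits(n_bars, n_folds, train_size, test_size,
                        purge=5, embargo=5):
    """Time-ordered (train, test) index folds with no random shuffling.

    `purge` drops bars between a train window and its test window;
    `embargo` drops bars after the test window before the next fold starts.
    Both guard against leakage across fold boundaries.
    """
    splits = []
    start = 0
    for _ in range(n_folds):
        train_end = start + train_size
        test_start = train_end + purge     # purge gap before the test fold
        test_end = test_start + test_size
        if test_end > n_bars:
            break                          # not enough bars for another fold
        splits.append((list(range(start, train_end)),
                       list(range(test_start, test_end))))
        start = test_end + embargo         # embargo gap after the test fold
    return splits
```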
**BacktestExecutionModel** (`backtest_execution_model.py`) — Conservative fill simulation (v2/v3)
- Session-aware slippage: KRX_REG 5bps, NXT_AFTER 15bps, US_REG 3bps, US_PRE/DAY 30-50bps
- Order failure rate simulation per session
- Partial fill rate simulation with min/max ratio bounds
- Unfavorable-direction fill assumption (no simple close-price fill)
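The unfavorable-direction assumption reduces to a one-liner per order side. A minimal sketch with illustrative slippage values taken from the list above (`simulate_fill` and the table are assumptions, not the module's API):

```python
# Session slippage in basis points, per the figures listed above (US_PRE shown
# at an assumed 40bps inside the documented 30-50bps band).
SESSION_SLIPPAGE_BPS = {"KRX_REG": 5, "NXT_AFTER": 15, "US_REG": 3, "US_PRE": 40}

def simulate_fill(side, ref_price, session_id):
    """Fill in the unfavorable direction: BUY above the reference price,
    SELL below it -- never a simple close-price fill."""
    drift = ref_price * SESSION_SLIPPAGE_BPS[session_id] / 10_000
    return ref_price + drift if side == "BUY" else ref_price - drift
```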
**BacktestCostGuard** (`backtest_cost_guard.py`) — Cost model validator (v2)
- `validate_backtest_cost_model()`: fail-fast check that session cost assumptions are present
- Enforces realistic cost assumptions before any backtest run proceeds
**SmartVolatilityScanner** (`smart_scanner.py`) — Python-first filtering pipeline
- **Domestic (KR)**:
@@ -98,7 +129,7 @@ High-frequency trading with individual stock analysis:
- **Step 4**: Return top N candidates (default 3)
- **Fallback (overseas only)**: If ranking API is unavailable, uses dynamic universe
from runtime active symbols + recent traded symbols + current holdings (no static watchlist)
- **Realtime mode only**: Daily mode uses batch processing for API efficiency
- **Both modes**: Realtime-centric, but also used for candidate selection in the daily path (`run_daily_session()`)
**Benefits:**
- Reduces Gemini API calls from 20-30 stocks to 1-3 qualified candidates
@@ -124,9 +155,9 @@ High-frequency trading with individual stock analysis:
- Selects appropriate context layers for current market conditions
### 4. Risk Manager (`src/core/risk_manager.py`)
### 4. Risk Manager & Session Policy (`src/core/`)
**RiskManager** — Safety circuit breaker and order validation
**RiskManager** (`risk_manager.py`) — Safety circuit breaker and order validation
> **READ-ONLY by policy** (see [`docs/agents.md`](./agents.md))
@@ -136,8 +167,59 @@ High-frequency trading with individual stock analysis:
- **Fat-Finger Protection**: Rejects orders exceeding 30% of available cash
- Must always be enforced, cannot be disabled
**OrderPolicy** (`order_policy.py`) — Session classification and order type enforcement (v3)
- `classify_session_id()`: Classifies current KR/US session from KST clock
- KR: `NXT_PRE` (08:00-08:50), `KRX_REG` (09:00-15:30), `NXT_AFTER` (15:30-20:00)
- US: `US_DAY` (10:00-18:00), `US_PRE` (18:00-23:30), `US_REG` (23:30-06:00), `US_AFTER` (06:00-07:00)
- Low-liquidity session detection: `NXT_AFTER`, `US_PRE`, `US_DAY`, `US_AFTER`
- Market order forbidden in low-liquidity sessions (`OrderPolicyRejected` raised)
- Limit/IOC/FOK orders always allowed
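The classification-then-validation flow can be sketched as below. The table condenses the KR windows listed above; the function names mirror the doc but the shapes are assumptions, not the actual `order_policy.py` signatures.

```python
from datetime import time

KR_SESSIONS = [
    (time(8, 0), time(8, 50), "NXT_PRE"),
    (time(9, 0), time(15, 30), "KRX_REG"),
    (time(15, 30), time(20, 0), "NXT_AFTER"),
]
LOW_LIQUIDITY = {"NXT_AFTER", "US_PRE", "US_DAY", "US_AFTER"}

class OrderPolicyRejected(Exception):
    pass

def classify_kr_session(now_kst):
    """Classify a KST wall-clock time into a KR session id."""
    for start, end, session_id in KR_SESSIONS:
        if start <= now_kst < end:
            return session_id
    return "CLOSED"

def validate_order_type(session_id, order_type):
    """Market orders are rejected in low-liquidity sessions; limit/IOC/FOK pass."""
    if order_type == "MARKET" and session_id in LOW_LIQUIDITY:
        raise OrderPolicyRejected(f"market order forbidden in {session_id}")
```

Note the half-open intervals: 15:30 exactly falls into `NXT_AFTER`, not `KRX_REG`, matching the session boundaries above.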
**KillSwitch** (`kill_switch.py`) — Emergency trading halt orchestration (v2)
- Fixed 5-step atomic sequence:
1. Block new orders (`new_orders_blocked = True`)
2. Cancel all unfilled orders
3. Refresh order state (query final status)
4. Reduce risk (force-close or reduce positions)
5. Snapshot state + send Telegram alert
- Async, injectable step callables — each step individually testable
- Highest priority: overrides overnight exception and all other rules
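The "fixed sequence, injectable steps" contract can be sketched like this (a hypothetical shape, not the real `kill_switch.py` class):

```python
import asyncio

class KillSwitch:
    """Run the fixed 5-step halt sequence; steps 2-5 are injected async
    callables so each is individually testable."""

    def __init__(self, cancel_orders, refresh_state, reduce_risk, snapshot_and_alert):
        self.new_orders_blocked = False
        self._steps = [cancel_orders, refresh_state, reduce_risk, snapshot_and_alert]

    async def trigger(self):
        self.new_orders_blocked = True   # step 1 always happens first
        for step in self._steps:         # steps 2-5 in fixed order
            await step()

calls = []

def make_step(name):
    async def step():
        calls.append(name)
    return step

ks = KillSwitch(make_step("cancel"), make_step("refresh"),
                make_step("reduce"), make_step("snapshot"))
asyncio.run(ks.trigger())
# calls is now ["cancel", "refresh", "reduce", "snapshot"]
```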
**BlackoutManager** (`blackout_manager.py`) — KIS maintenance window handling (v3)
- Configurable blackout windows (e.g., `23:30-00:10 KST`)
- `queue_order()`: Queues order intent during blackout, enforces max queue size
- `pop_recovery_batch()`: Returns queued intents after recovery
- Recovery revalidation path (in `src/main.py`):
- Stale BUY drop (position already exists)
- Stale SELL drop (position absent)
- `validate_order_policy()` rechecked
- Price drift check (>5% → drop, configurable via `BLACKOUT_RECOVERY_MAX_PRICE_DRIFT_PCT`)
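The bounded oldest-drop queue and the recovery price-drift check can be sketched together. Class and function names here are illustrative, not the `blackout_manager.py` API:

```python
from collections import deque

class BlackoutQueue:
    """Bounded intent queue with an oldest-drop overflow policy: when the
    queue is full, the oldest intent is discarded so the newest survives."""

    def __init__(self, max_size):
        self._max = max_size
        self._q = deque(maxlen=max_size)  # deque drops the oldest on overflow
        self.dropped_oldest = 0

    def queue_order(self, intent):
        if len(self._q) == self._max:
            self.dropped_oldest += 1      # overflow counter for observability
        self._q.append(intent)

    def pop_recovery_batch(self):
        batch = list(self._q)
        self._q.clear()
        return batch

def passes_price_drift_check(queued_price, current_price, max_drift_pct=5.0):
    """Recovery revalidation: drop an intent whose price moved more than
    max_drift_pct in either direction while it sat in the queue."""
    return abs(current_price - queued_price) / queued_price * 100 <= max_drift_pct
```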
### 5. Strategy (`src/strategy/`)
**PositionStateMachine** (`position_state_machine.py`) — 4-state sell state machine (v2)
- States: `HOLDING` → `BE_LOCK` → `ARMED` → `EXITED`
- `HOLDING`: Normal holding
- `BE_LOCK`: Profit ≥ `be_arm_pct` — stop-loss elevated to break-even
- `ARMED`: Profit ≥ `arm_pct` — peak-tracking trailing stop active
- `EXITED`: Position closed
- `promote_state()`: Immediately elevates to highest admissible state (handles gaps/skips)
- `evaluate_exit_first()`: EXITED conditions checked before state promotion
- Monotonic: states only move up, never down
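The monotonic promotion rule can be sketched in a few lines. Thresholds mirror the `STAGED_EXIT_*` defaults documented below; the function shape is an illustration, not the real `promote_state()` signature:

```python
ORDER = ["HOLDING", "BE_LOCK", "ARMED", "EXITED"]

def promote_state(current, profit_pct, be_arm_pct=1.2, arm_pct=3.0):
    """Return the highest admissible state for the given profit; never demote."""
    if profit_pct >= arm_pct:
        target = "ARMED"
    elif profit_pct >= be_arm_pct:
        target = "BE_LOCK"
    else:
        target = "HOLDING"
    # Monotonic: keep whichever of current/target sits higher in ORDER.
    return max(current, target, key=ORDER.index)
```

Because the target is computed from the profit level rather than by stepping one state at a time, a price gap that jumps straight past `be_arm_pct` lands directly in `ARMED`.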
**ExitRules** (`exit_rules.py`) — 4-layer composite exit logic (v2)
- **Hard Stop**: `unrealized <= hard_stop_pct` (always enforced, ATR-adaptive for KR)
- **Break-Even Lock**: Once in BE_LOCK/ARMED, exit if price falls to entry price
- **ATR Trailing Stop**: `trailing_stop_price = peak_price - (atr_multiplier_k × ATR)`
- **Model Signal**: Exit if `pred_down_prob >= model_prob_threshold AND liquidity_weak`
- `evaluate_exit()`: Returns `ExitEvaluation` with next state, exit flag, reason, trailing price
- `ExitRuleConfig`: Frozen dataclass with all tunable parameters
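The four layers compose in priority order, which can be sketched as follows (parameter defaults are the documented env defaults; the function itself is a hypothetical condensation, not the `evaluate_exit()` API):

```python
def trailing_stop_price(peak_price, atr, atr_multiplier_k=2.0):
    """ATR trailing stop per the formula above."""
    return peak_price - atr_multiplier_k * atr

def should_exit(price, entry, state, peak, atr, hard_stop_pct=-2.0, k=2.0,
                pred_down_prob=0.0, prob_threshold=0.7, liquidity_weak=False):
    """Evaluate the four exit layers in order; return (exit?, reason)."""
    unrealized = (price - entry) / entry * 100
    if unrealized <= hard_stop_pct:                       # 1. hard stop
        return True, "hard_stop"
    if state in ("BE_LOCK", "ARMED") and price <= entry:  # 2. break-even lock
        return True, "break_even"
    if state == "ARMED" and price <= trailing_stop_price(peak, atr, k):
        return True, "atr_trailing"                       # 3. ATR trailing
    if pred_down_prob >= prob_threshold and liquidity_weak:
        return True, "model_signal"                       # 4. model signal
    return False, ""
```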
**Pre-Market Planner** (`pre_market_planner.py`) — AI playbook generation
- Runs before market open (configurable `PRE_MARKET_MINUTES`, default 30)
@@ -195,7 +277,7 @@ High-frequency trading with individual stock analysis:
- Configurable host/port (`DASHBOARD_HOST`, `DASHBOARD_PORT`, default `127.0.0.1:8080`)
- Serves static HTML frontend
**8 API Endpoints:**
**10 API Endpoints:**
| Endpoint | Method | Description |
|----------|--------|-------------|
@@ -207,6 +289,8 @@ High-frequency trading with individual stock analysis:
| `/api/context/{layer}` | GET | Query context by layer (L1-L7) |
| `/api/decisions` | GET | Decision log entries with outcomes |
| `/api/scenarios/active` | GET | Today's matched scenarios |
| `/api/pnl/history` | GET | P&L history time series |
| `/api/positions` | GET | Current open positions |
### 8. Notifications (`src/notifications/telegram_client.py`)
@@ -448,8 +532,12 @@ CREATE TABLE trades (
pnl REAL DEFAULT 0.0,
market TEXT DEFAULT 'KR',
exchange_code TEXT DEFAULT 'KRX',
session_id TEXT DEFAULT 'UNKNOWN', -- v3: KRX_REG | NXT_AFTER | US_REG | US_PRE | ...
selection_context TEXT, -- JSON: {rsi, volume_ratio, signal, score}
decision_id TEXT -- Links to decision_logs
decision_id TEXT, -- Links to decision_logs
strategy_pnl REAL, -- v3: Core strategy P&L (separated from FX)
fx_pnl REAL DEFAULT 0.0, -- v3: FX gain/loss for USD trades (schema ready, activation pending)
mode TEXT -- paper | live
);
```
@@ -475,13 +563,14 @@ CREATE TABLE decision_logs (
stock_code TEXT,
market TEXT,
exchange_code TEXT,
session_id TEXT DEFAULT 'UNKNOWN', -- v3: session when decision was made
action TEXT,
confidence INTEGER,
rationale TEXT,
context_snapshot TEXT, -- JSON: full context at decision time
input_data TEXT, -- JSON: market data used
outcome_pnl REAL,
outcome_accuracy REAL,
outcome_accuracy INTEGER,
reviewed INTEGER DEFAULT 0,
review_notes TEXT
);
@@ -494,7 +583,7 @@ CREATE TABLE playbooks (
id INTEGER PRIMARY KEY AUTOINCREMENT,
date TEXT NOT NULL,
market TEXT NOT NULL,
status TEXT DEFAULT 'generated',
status TEXT NOT NULL DEFAULT 'pending', -- pending → generated → active → expired
playbook_json TEXT NOT NULL, -- Full playbook with scenarios
generated_at TEXT NOT NULL,
token_count INTEGER,
@@ -552,6 +641,29 @@ PLANNER_TIMEOUT_SECONDS=60 # Timeout for playbook generation
DEFENSIVE_PLAYBOOK_ON_FAILURE=true # Fallback on AI failure
RESCAN_INTERVAL_SECONDS=300 # Scenario rescan interval during trading
# Optional — v2 Exit Rules (State Machine)
STAGED_EXIT_BE_ARM_PCT=1.2 # Break-even lock threshold (%)
STAGED_EXIT_ARM_PCT=3.0 # Armed state threshold (%)
KR_ATR_STOP_MULTIPLIER_K=2.0 # ATR multiplier for KR dynamic hard stop
KR_ATR_STOP_MIN_PCT=-2.0 # KR hard stop floor (must tighten, negative)
KR_ATR_STOP_MAX_PCT=-7.0 # KR hard stop ceiling (loosest, negative)
# Optional — v2 Trade Filters
STOP_LOSS_COOLDOWN_MINUTES=120 # Cooldown after stop-loss before re-entry (same ticker)
US_MIN_PRICE=5.0 # Minimum US stock price for BUY ($)
# Optional — v3 Session Risk Management
SESSION_RISK_RELOAD_ENABLED=true # Reload risk params at session boundaries
SESSION_RISK_PROFILES_JSON="{}" # Per-session overrides JSON: {"KRX_REG": {"be_arm_pct": 1.0}}
OVERNIGHT_EXCEPTION_ENABLED=true # Allow holding through session close (conditions apply)
# Optional — v3 Blackout (KIS maintenance windows)
ORDER_BLACKOUT_ENABLED=true
ORDER_BLACKOUT_WINDOWS_KST=23:30-00:10 # Comma-separated: "HH:MM-HH:MM"
ORDER_BLACKOUT_QUEUE_MAX=500 # Max queued orders during blackout
BLACKOUT_RECOVERY_PRICE_REVALIDATION_ENABLED=true
BLACKOUT_RECOVERY_MAX_PRICE_DRIFT_PCT=5.0 # Drop recovery order if price drifted >5%
# Optional — Smart Scanner (realtime mode only)
RSI_OVERSOLD_THRESHOLD=30 # 0-50, oversold threshold
RSI_MOMENTUM_THRESHOLD=70 # 50-100, momentum threshold

View File

@@ -21,8 +21,44 @@ python3 scripts/session_handover_check.py --strict
- On failure, supplement the latest entry in `workflow/session-handover.md`, then rerun.
## Docs Sync Validator (Mandatory for docs changes)
- For docs-change PRs, run the sync validation first with the command below.
```bash
python3 scripts/validate_docs_sync.py
```
- On validation failure, fix immediately according to the message:
- `absolute link is forbidden`: an absolute path (`/...`) was used in a doc link
- `broken link`: the target file/anchor of a relative link is missing
- `missing core doc link reference`: a core link is missing from `README.md`/`CLAUDE.md`
- `duplicated API endpoint row`: a duplicated row in the `docs/commands.md` API endpoint table
- `missing dynamic test count guidance`: the `pytest --collect-only -q` guidance is missing from `docs/testing.md`
### tea CLI (Gitea Command Line Tool)
#### ❌ Comment Newline Escaping (`\n` rendered literally)
```bash
YES="" ~/bin/tea comment 374 "line1\nline2"
# Web UI shows "\n" as text instead of line breaks
```
**💡 Reason:** The inline `\n` is passed through literally rather than expanded to a newline before the comment is submitted.
**✅ Solution:** Use the file-based helper to preserve multiline text
```bash
cat > /tmp/comment.md <<'EOF'
line1
line2
EOF
scripts/tea_comment.sh 374 /tmp/comment.md
```
**📝 Notes:**
- `scripts/tea_comment.sh` accepts stdin with `-` as body source.
- The helper fails fast when the body looks like escaped-newline-only text.
#### ❌ TTY Error - Interactive Confirmation Fails
```bash
~/bin/tea issues create --repo X --title "Y" --description "Z"
@@ -136,7 +172,7 @@ No decorator needed for async tests.
# Install all dependencies (production + dev)
pip install -e ".[dev]"
# Run full test suite with coverage (551 tests across 25 files)
# Run full test suite with coverage (998 tests across 41 files)
pytest -v --cov=src --cov-report=term-missing
# Run a single test file
@@ -202,6 +238,8 @@ Dashboard runs as a daemon thread on `DASHBOARD_HOST:DASHBOARD_PORT` (default: `
| `GET /api/context/{layer}` | Context data by layer L1-L7 (query: `timeframe`) |
| `GET /api/decisions` | Decision log entries (query: `limit`, `market`) |
| `GET /api/scenarios/active` | Today's matched scenarios |
| `GET /api/pnl/history` | P&L history over time |
| `GET /api/positions` | Current open positions |
## Telegram Commands

View File

@@ -1,9 +1,9 @@
<!--
Doc-ID: DOC-REQ-001
Version: 1.0.0
Version: 1.0.7
Status: active
Owner: strategy
Updated: 2026-02-26
Updated: 2026-03-02
-->
# Requirements Ledger (Single Source of Truth)
@@ -26,7 +26,7 @@ Updated: 2026-02-26
- `REQ-V3-001`: Every signal/order/log must include a `session_id`.
- `REQ-V3-002`: Risk parameters must be reloaded on session transitions.
- `REQ-V3-003`: New orders must be forbidden during broker blackout windows.
- `REQ-V3-004`: Signals during a blackout are queued and revalidated after recovery.
- `REQ-V3-004`: Signals during a blackout go into a bounded queue; on saturation, an oldest-drop policy preserves the newest intents, and validity is rechecked after recovery.
- `REQ-V3-005`: Market orders are forbidden in low-liquidity sessions (`NXT_AFTER`, `US_PRE`, `US_DAY`, `US_AFTER`).
- `REQ-V3-006`: Backtest fill prices assume unfavorable-direction fills by default.
- `REQ-V3-007`: US operation must include separated FX P&L tracking and a currency buffer policy.
@@ -37,3 +37,4 @@ Updated: 2026-02-26
- `REQ-OPS-001`: A timezone (KST/UTC) must be explicit on every time field.
- `REQ-OPS-002`: Numeric policies in documents are changed only through this ledger.
- `REQ-OPS-003`: Every implementation task must be accompanied by a test task.
- `REQ-OPS-004`: The original plan documents (`v2`, `v3`) use the `docs/ouroboros/source/` path as the single canonical location.

View File

@@ -16,41 +16,42 @@ Updated: 2026-02-26
## Implementation Unit A: State Machine / Exit
- `TASK-CODE-001` (`REQ-V2-001`,`REQ-V2-002`,`REQ-V2-003`): add the state machine module to `src/strategy/`
- `TASK-CODE-002` (`REQ-V2-004`): add the combined ATR/BE/Hard-Stop exit function
- `TASK-CODE-003` (`REQ-V2-008`): add the Kill Switch orchestrator at `src/core/kill_switch.py`
- `TASK-CODE-001` (`REQ-V2-001`,`REQ-V2-002`,`REQ-V2-003`,`TEST-CODE-001`,`TEST-CODE-002`): add the state machine module to `src/strategy/`
- `TASK-CODE-002` (`REQ-V2-004`,`TEST-ACC-011`): add the combined ATR/BE/Hard-Stop exit function
- `TASK-CODE-003` (`REQ-V2-008`,`TEST-ACC-002`): add the Kill Switch orchestrator at `src/core/kill_switch.py`
- `TEST-CODE-001`: test promotion to the highest state on gap jumps
- `TEST-CODE-002`: EXIT priority test
## Implementation Unit B: Labeling / Validation
- `TASK-CODE-004` (`REQ-V2-005`): add the Triple Barrier labeler module (`src/analysis/` or `src/strategy/`)
- `TASK-CODE-005` (`REQ-V2-006`): add walk-forward + purge/embargo split utilities
- `TASK-CODE-006` (`REQ-V2-007`): make cost/slippage options mandatory in the backtest runner
- `TASK-CODE-004` (`REQ-V2-005`,`TEST-CODE-003`,`TEST-ACC-012`): add the Triple Barrier labeler module (`src/analysis/` or `src/strategy/`)
- `TASK-CODE-005` (`REQ-V2-006`,`TEST-CODE-004`,`TEST-ACC-013`): add walk-forward + purge/embargo split utilities
- `TASK-CODE-006` (`REQ-V2-007`,`TEST-ACC-014`): make cost/slippage options mandatory in the backtest runner
- `TEST-CODE-003`: first-touch label priority test
- `TEST-CODE-004`: leakage-prevention test
## Implementation Unit C: Session / Order Policy
- `TASK-CODE-007` (`REQ-V3-001`,`REQ-V3-002`): wire session classification/transition hooks into `src/markets/schedule.py`
- `TASK-CODE-008` (`REQ-V3-003`,`REQ-V3-004`): add the blackout queue handler to `src/broker/`
- `TASK-CODE-009` (`REQ-V3-005`): add the per-session order-type validator
- `TASK-CODE-007` (`REQ-V3-001`,`REQ-V3-002`,`TEST-ACC-015`,`TEST-ACC-016`): wire session classification/transition hooks into `src/markets/schedule.py`
- `TASK-CODE-008` (`REQ-V3-003`,`REQ-V3-004`,`TEST-CODE-005`,`TEST-ACC-017`): add the blackout queue handler to `src/broker/`
- `TASK-CODE-009` (`REQ-V3-005`,`TEST-CODE-006`,`TEST-ACC-004`): add the per-session order-type validator
- `TEST-CODE-005`: blackout new-order blocking test
- `TEST-CODE-006`: low-liquidity-session market-order rejection test
## Implementation Unit D: Fills / FX / Overnight
- `TASK-CODE-010` (`REQ-V3-006`): implement the unfavorable fill-price model in the backtest execution engine
- `TASK-CODE-011` (`REQ-V3-007`): add separated FX PnL accounting tables/columns
- `TASK-CODE-012` (`REQ-V3-008`): implement conflict resolution between the overnight exception and the Kill Switch
- `TASK-CODE-010` (`REQ-V3-006`,`TEST-CODE-007`,`TEST-ACC-005`): implement the unfavorable fill-price model in the backtest execution engine
- `TASK-CODE-011` (`REQ-V3-007`,`TEST-CODE-008`,`TEST-ACC-006`): add separated FX PnL accounting tables/columns
- `TASK-CODE-012` (`REQ-V3-008`,`TEST-ACC-018`): implement conflict resolution between the overnight exception and the Kill Switch
- `TEST-CODE-007`: unfavorable fill-price model test
- `TEST-CODE-008`: test restricting new entries on FX buffer violation
## Implementation Unit E: Operations / Docs Governance
- `TASK-OPS-001` (`REQ-OPS-001`): implement rules enforcing timezone notation on time fields/log schemas
- `TASK-OPS-002` (`REQ-OPS-002`): add a CI check requiring `01_requirements_registry.md` to be updated first when policy numbers change
- `TASK-OPS-003` (`REQ-OPS-003`): keep the doc validation gate that blocks any `REQ-*` without a `TASK-*` or without a `TEST-*`
- `TASK-OPS-001` (`REQ-OPS-001`,`TEST-ACC-007`): implement rules enforcing timezone (KST/UTC) notation on time fields/log schemas
- `TASK-OPS-002` (`REQ-OPS-002`,`TEST-ACC-008`): add a CI check requiring `01_requirements_registry.md` to be updated first when policy numbers change
- `TASK-OPS-003` (`REQ-OPS-003`,`TEST-ACC-009`): keep the doc validation gate that blocks any `REQ-*` without a `TASK-*` or without a `TEST-*`
- `TASK-OPS-004` (`REQ-OPS-004`,`TEST-ACC-019`): standardize the v2/v3 original plan document location to `docs/ouroboros/source/` and verify link consistency
## Commit Rules

View File

@@ -29,6 +29,7 @@ Updated: 2026-02-26
- `TEST-ACC-007` (`REQ-OPS-001`): validation fails if a time-related field lacks a timezone (KST/UTC).
- `TEST-ACC-008` (`REQ-OPS-002`): validation fails if a policy-number change is not reflected in the ledger.
- `TEST-ACC-009` (`REQ-OPS-003`): validation fails if a `REQ-*` exists without a `TASK-*`/`TEST-*` mapping.
- `TEST-ACC-019` (`REQ-OPS-004`): v2/v3 original plan document links pass only when they use the `docs/ouroboros/source/` path.
## Test Layers

View File

@@ -24,11 +24,17 @@ Updated: 2026-02-27
## 2) Required Status Checks (Mandatory)
Required CI items:
- `validate_ouroboros_docs` (command: `python3 scripts/validate_ouroboros_docs.py`)
- `test` (command: `pytest -q`)
| Reference level | Name | Description |
|-----------|------|------|
| **Job level** (used when configuring branch protection) | `test` | Full CI job (includes doc validation + tests) |
| **Step level** (referenced when inspecting logs) | `validate_ouroboros_docs` | Step running `python3 scripts/validate_ouroboros_docs.py` |
| **Step level** | `run_tests` | Step running `pytest -q` |
> **Note**: Gitea branch-protection Required Status Checks are configured by **job name** (`test`). Step names are for UI log navigation only and are not entered into protection rules.
Configuration criteria:
- No merging unless both checks above are `success`
- No merging unless the `test` job is `success`
- Skipped/neutral check states are not allowed
## 3) Required Reviewer Rules (Recommended -> Mandatory)

View File

@@ -1,14 +1,15 @@
<!--
Doc-ID: DOC-AUDIT-001
Version: 1.0.0
Version: 1.2.0
Status: active
Owner: strategy
Updated: 2026-02-28
Updated: 2026-03-02
-->
# v2/v3 Implementation Audit and Return Analysis Report
Written: 2026-02-28
Last updated: 2026-03-02 (reflects #373 status-table reconciliation)
Covered period: 2026-02-25 – 2026-02-28 (live trading)
Analysis branch: `feature/v3-session-policy-stream`
@@ -16,82 +17,105 @@ Updated: 2026-02-28
## 1. Implementation Audit vs. Plan
### 1.1 v2 implementation status: 100% complete
### 1.1 Completion Criteria (Definition of Done)
Mark `✅ Complete` only when all three of the following hold:
1. Code path exists: an execution path implementing the requirement exists in the code.
2. Effect verified: tests or runtime evidence verifying the requirement's effect exist.
3. Traceability consistent: the requirement's status does not contradict open gap issues.
### 1.2 v2 implementation status: partially complete (core gaps remain)
| REQ-ID | Requirement | Implementation file | Status |
|--------|----------|-----------|------|
| REQ-V2-001 | 4-state sell state machine (HOLDING→BE_LOCK→ARMED→EXITED) | `src/strategy/position_state_machine.py` | ✅ Complete |
| REQ-V2-002 | Immediate promotion to the highest state (gap handling) | `position_state_machine.py:51-70` | ✅ Complete |
| REQ-V2-003 | EXITED evaluated first | `position_state_machine.py:38-48` | ✅ Complete |
| REQ-V2-004 | 4-layer exit logic (Hard/BE/ATR Trailing/Model) | `src/strategy/exit_rules.py` | ✅ Complete |
| REQ-V2-004 | 4-layer exit logic (Hard/BE/ATR Trailing/Model) | `src/strategy/exit_rules.py` | ⚠️ Partial (`#369`) |
| REQ-V2-005 | Triple Barrier labeling | `src/analysis/triple_barrier.py` | ✅ Complete |
| REQ-V2-006 | Walk-Forward + Purge/Embargo validation | `src/analysis/walk_forward_split.py` | ✅ Complete |
| REQ-V2-007 | Cost/slippage/fill-failure model mandatory | `src/analysis/backtest_cost_guard.py` | ✅ Complete |
| REQ-V2-008 | Kill Switch execution order (Block→Cancel→Refresh→Reduce→Snapshot) | `src/core/kill_switch.py` | ✅ Complete |
| REQ-V2-007 | Cost/slippage/fill-failure model mandatory | `src/analysis/backtest_cost_guard.py` | ⚠️ Partial (`#368`) |
| REQ-V2-008 | Kill Switch execution order (Block→Cancel→Refresh→Reduce→Snapshot) | `src/core/kill_switch.py` | ⚠️ Partial (`#377`) |
### 1.2 v3 implementation status: ~75% complete
### 1.3 v3 implementation status: partially complete (as of 2026-03-02)
| REQ-ID | Requirement | Status | Gap description |
|--------|----------|------|---------|
| REQ-V3-001 | session_id included in every signal/order/log | ⚠️ Partial | see GAP-1, GAP-2 below |
| REQ-V3-002 | Session transition hook + risk parameter reload | ⚠️ Partial | see GAP-3 below |
| REQ-ID | Requirement | Status | Notes |
|--------|----------|------|------|
| REQ-V3-001 | session_id included in every signal/order/log | ⚠️ Partial | queued intents lack `session_id` (`#375`) |
| REQ-V3-002 | Session transition hook + risk parameter reload | ✅ Complete | session-boundary E2E regression (override apply/clear + reload-failure fallback) reinforced (`#376`) |
| REQ-V3-003 | Blackout window policy | ✅ Complete | `src/core/blackout_manager.py` |
| REQ-V3-004 | Blackout queue + revalidation on recovery | ⚠️ Partial | see GAP-4 below (partially resolved) |
| REQ-V3-004 | Blackout queue + revalidation on recovery | ⚠️ Partial | queue saturation reconciled to the oldest-drop policy (`#371`); revalidation hardening tracked in `#328` |
| REQ-V3-005 | Market orders forbidden in low-liquidity sessions | ✅ Complete | `src/core/order_policy.py` |
| REQ-V3-006 | Conservative backtest fills (unfavorable direction) | ✅ Complete | `src/analysis/backtest_execution_model.py` |
| REQ-V3-007 | FX P&L separation (strategy PnL vs FX PnL) | ⚠️ Code complete / not active in operations | `src/db.py` schema/functions done; operational `fx_pnl` values all 0 |
| REQ-V3-008 | Overnight exception vs Kill Switch priority | ✅ Complete | `src/main.py:459-471` |
| REQ-V3-007 | FX P&L separation (strategy PnL vs FX PnL) | ⚠️ Partial | runtime separated calculation/passing applied (`#370`); `fx_pnl=0` fallback when no buy-side `fx_rate` is observed |
| REQ-V3-008 | Overnight exception vs Kill Switch priority | ✅ Complete | `_should_force_exit_for_overnight()`, `_apply_staged_exit_override_for_hold()` in `src/main.py` |
### 1.3 Operational governance: ~20% complete
### 1.4 Operational governance: partially complete (re-evaluated 2026-03-02)
| REQ-ID | Requirement | Status | Gap description |
|--------|----------|------|---------|
| REQ-OPS-001 | Explicit timezone (KST/UTC) | ⚠️ Partial | DB records in UTC, sessions in KST; some logs omit the timezone |
| REQ-OPS-002 | Registry update enforced on policy changes | ❌ Not implemented | no automated CI validation |
| REQ-OPS-003 | TASK-REQ mapping enforced | ❌ Not implemented | no automated per-PR validation |
| REQ-ID | Requirement | Status | Notes |
|--------|----------|------|------|
| REQ-OPS-001 | Explicit timezone (KST/UTC) | ⚠️ Partial | doc-token fail-fast added; field-level validation remains in `#372` |
| REQ-OPS-002 | Registry update enforced on policy changes | ⚠️ Partial | file-level enforcement implemented; precise per-policy-number validation remains in `#372` |
| REQ-OPS-003 | TASK-REQ mapping enforced | ⚠️ Partial | TASK-REQ/TASK-TEST enforcement implemented; additional bypass-case checks remain in `#372` |
| REQ-OPS-004 | Canonical source path validation | ✅ Complete | canonical source path validation in `scripts/validate_ouroboros_docs.py` |
---
## 2. Implementation Gap Details
### GAP-1: session_id missing from DecisionLogger (CRITICAL)
> **2026-03-02 update**: the earlier "resolved" markings were re-verified, and statuses were reclassified against open gap issues.
- **Location**: `src/logging/decision_logger.py:40`
- **Problem**: `log_decision()` has no `session_id` parameter
- **Impact**: impossible to trace in which session a strategic decision was made
### GAP-1: session_id missing from DecisionLogger → ✅ Resolved (#326)
- **Location**: `src/logging/decision_logger.py`
- ~~Problem: `log_decision()` has no `session_id` parameter~~
- **Resolution**: #326 merged — `session_id` added to the `log_decision()` parameters, including DB recording
- **Requirement**: REQ-V3-001
### GAP-2: session_id not passed to trade logs in src/main.py (CRITICAL)
### GAP-2: session_id not passed to trade logs in src/main.py → ✅ Resolved (#326)
- **Location**: `src/main.py` lines 1625, 1682, 2769
- **Problem**: `log_trade()` calls do not pass the `session_id` parameter
- **Symptom**: relies on market-code-based auto-inference → may disagree with the actual runtime session
- **Location**: `src/main.py`
- ~~Problem: `log_trade()` calls do not pass the `session_id` parameter~~
- **Resolution**: #326 merged — the runtime `session_id` is passed explicitly on `log_trade()` calls
- **Requirement**: REQ-V3-001
### GAP-3: no risk parameter reload on session transitions (HIGH)
### GAP-3: no risk parameter reload on session transitions → ✅ Resolved (#327, #376)
- **Location**: `src/main.py` overall
- **Problem**: risk parameters are loaded once at startup, with no reload mechanism at session-boundary changes
- **Impact**: the same parameters are used even across the NXT_AFTER (low-liquidity) → KRX_REG (regular session) transition
- **Location**: `src/main.py`, `src/config.py`
- **Resolution details**: #327 merged — per-session parameter reload mechanism based on `SESSION_RISK_PROFILES_JSON`
  - with `SESSION_RISK_RELOAD_ENABLED=true`, parameters reload at session boundaries
  - on reload failure, existing parameters are kept (safe fallback)
- **Resolution**: session-boundary E2E regression tests added, verifying override apply/clear and fallback retention on reload failure (`#376`)
- **Requirement**: REQ-V3-002
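The override-plus-safe-fallback behaviour can be sketched as follows. This is a minimal illustration assuming the documented `SESSION_RISK_PROFILES_JSON` format, not the actual `src/config.py` API; names and defaults are assumptions.

```python
import json

DEFAULTS = {"be_arm_pct": 1.2, "arm_pct": 3.0}

def load_session_risk_params(session_id, profiles_json):
    """Merge per-session overrides over the defaults; on a parse failure,
    keep the defaults (the safe fallback when a reload fails)."""
    params = dict(DEFAULTS)
    try:
        profiles = json.loads(profiles_json or "{}")
        params.update(profiles.get(session_id, {}))
    except json.JSONDecodeError:
        pass  # reload failed: keep existing parameters
    return params
```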
### GAP-4: blackout recovery revalidation partially resolved, DB recording unimplemented (HIGH)
### GAP-4: blackout recovery DB recording + revalidation → ⚠️ Partially resolved (#324, #328, #371)
- **Location**: `src/core/blackout_manager.py:89-96`, `src/main.py:694-791`
- **Status**: `pop_recovery_batch()` itself is a plain dequeue, but the execution path performs partial revalidation:
  - stale BUY drop (when the position already exists) — `src/main.py:713-720`
  - stale SELL drop (when the position is absent) — `src/main.py:721-727`
  - `validate_order_policy()` call — `src/main.py:729-734`
- **Remaining gap**: price validity (quote drift) and parameter reapplication after a session change are unimplemented
- **New finding**: blackout recovery orders execute without `log_trade()`, so they are never recorded in the trade DB → performance-report inconsistency
- **Location**: `src/core/blackout_manager.py`, `src/main.py`
- **Status**:
  - the #324 tracked scope (DB recording) has an implementation path
  - the #328 scope (price/session revalidation hardening) remains an open tracking issue
  - #371: the queue saturation policy is specified/implemented as oldest-drop, removing the newest-intent loss path
- **Requirement**: REQ-V3-004
### GAP-5: time barrier fixed to a bar count (MEDIUM)
### GAP-5: time barrier fixed to a bar count → ✅ Resolved (#329)
- **Location**: `src/analysis/triple_barrier.py:19`
- **Problem**: uses `max_holding_bars` (a fixed bar count); the v3 plan's `max_holding_minutes` (calendar time) is not reflected
- **Location**: `src/analysis/triple_barrier.py`
- ~~Problem: uses `max_holding_bars` (a fixed bar count)~~
- **Resolution**: #329 merged — time barrier switched to `max_holding_minutes` (calendar minutes)
  - the barrier is reached after a fixed elapsed time regardless of bar period
  - `max_holding_bars` deprecation warning kept (backward compatibility)
- **Requirement**: REQ-V2-005 / v3 extension
### GAP-6 (new): FX PnL separation partially resolved (MEDIUM)
- **Location**: `src/db.py` (`fx_pnl`, `strategy_pnl` columns exist)
- **Current status**: separated `strategy_pnl`/`fx_pnl` calculation and passing applied on the runtime SELL path (`#370`).
- **Operational note**: `trading_cycle` adds `fx_rate` to the scanner-based `selection_context`; `run_daily_session` records only an `fx_rate` snapshot without scanner context.
- **Remaining**: if a past BUY record has no `fx_rate`, overseas segments are also recorded with the `fx_pnl=0` fallback.
- **Impact**: for USD trades, FX gain/loss and strategy gain/loss are not separated, making performance analysis inaccurate
- **Requirement**: REQ-V3-007
---
## 3. Live-Trading Return Analysis
@@ -244,18 +268,25 @@ Updated: 2026-02-28
- **Problem**: nested `def evaluate` definition (indentation error)
- **Impact**: runtime failure → fallback to the default strategy → the evolution system is effectively void
### ROOT-5: v2 exit logic partially integrated but ineffective (HIGH)
### ROOT-5: v2 exit logic partially integrated but ineffective → ⚠️ Partially resolved (#325)
- **Current status**: the `evaluate_exit()`-based staged exit override operates in `src/main.py:500-583`
  - state machine (HOLDING→BE_LOCK→ARMED→EXITED) transitions implemented
  - 4-layer exit (hard stop, BE lock threat, ATR trailing, model/liquidity exit) evaluated
- **Effectiveness issues**:
**Initial diagnosis (as of the 2026-02-28 audit):**
- a fixed `-2.0` default for `hard_stop_pct` diverges from the v2 plan's ATR-adaptive intent
- `be_arm_pct`/`arm_pct` are mechanically derived (`* 0.4`) from the playbook's `take_profit_pct`, preventing the independent parameter tuning the v2 plan calls for
- runtime features such as `atr_value` and `pred_down_prob` mostly arrive as 0.0, so in practice only the hard stop fires
- **Conclusion**: the code is integrated, but feature supply and parameter configuration are lacking, so the v2 design value is not realized
- runtime features such as `atr_value` and `pred_down_prob` are supplied as 0.0, so in practice only the hard stop fires
### ROOT-6: SELL P&L calculation vulnerable to partial exits / quantity mismatch (CRITICAL)
**Current status (after the #325 merge):**
- independent parameters configurable via the `STAGED_EXIT_BE_ARM_PCT` and `STAGED_EXIT_ARM_PCT` environment variables
- `_inject_staged_exit_features()`: injects real-time ATR computed for the KR market and supplies an RSI-based `pred_down_prob`
- the fixed `-2.0` issue is resolved by the KR ATR dynamic hard stop (#318)
**Remaining risks:**
- the `atr_value` supply path is incomplete outside KR (e.g. US) — a hard-stop bias may remain
- `pred_down_prob` is only an RSI proxy — replacing it with a real ML model is recommended
### ROOT-6: SELL P&L calculation vulnerable to partial exits / quantity mismatch (CRITICAL) → ✅ Resolved (#322)
> **Current status**: resolved by the #322 merge. Below is the diagnostic record from the time of discovery (2026-02-28).
- **Location**: `src/main.py:1658-1663`, `src/main.py:2755-2760`
- **Problem**: the PnL calculation uses the previous BUY's `buy_qty` instead of the actual sold quantity (`sell_qty`)
@@ -263,7 +294,9 @@ Updated: 2026-02-28
- **Impact**: P&L over/understated on partial exits, reverse/forward splits, and quantity drift after startup-sync
- **Evidence**: the CRCA anomaly (BUY 146 shares → SELL 15 shares with PnL +4,612 USD) is consistent with this bug
### ROOT-7: exchange_code missing from the BUY matching key — latent mismatch risk (HIGH)
### ROOT-7: exchange_code missing from the BUY matching key — latent mismatch risk (HIGH) → ✅ Resolved (#323)
> **Current status**: resolved by the #323 merge. Below is the diagnostic record from the time of discovery (2026-02-28).
- **Location**: `src/db.py:292-313`
- **Problem**: `get_latest_buy_trade()` matches on `(stock_code, market)` only; `exchange_code` is unused
@@ -283,17 +316,28 @@ Updated: 2026-02-28
| P1 | US minimum price filter: block entries into stocks at or below $5 | prevents penny-stock crashes | Low |
| P1 | add syntax validation when generating evolved-strategy code | normalizes the evolution system | Low |
### 5.2 Structural Improvements (architecture changes)
### 5.2 Structural Improvement Status (as of 2026-03-01)
| Priority | Measure | Expected effect | Difficulty |
|----------|------|-----------|--------|
| **P0** | **Fix the SELL PnL calculation to use sell_qty (ROOT-6)** | accurate P&L accounting, anomalies removed | Low |
| **P0** | **Supply real features (atr_value, pred_down_prob) to the v2 staged exit + independent parameter configuration (ROOT-5)** | realizes v2 design value, protects profits | Medium |
| P0 | Add exchange_code to the BUY matching key (ROOT-7) | prevents mismatches | Low |
| P0 | Add `log_trade()` to blackout recovery orders (GAP-4) | DB/performance-report consistency | Low |
| P1 | Dynamic risk parameter reload on session transitions (resolves GAP-3) | optimal per-session parameters | Medium |
| P1 | Pass session_id explicitly to trade/decision logs (resolves GAP-1, 2) | enables per-session performance analysis | Low |
| P2 | Harden price/session revalidation on blackout recovery (GAP-4 remainder) | prevents invalid orders after session changes | Medium |
**Completed items (monitoring stage):**
| Item | Issue | Status |
|------|------|------|
| Fix the SELL PnL calculation to use sell_qty (ROOT-6) | #322 | ✅ Merged |
| Supply features to the v2 staged exit + independent parameter configuration (ROOT-5) | #325 | ✅ Merged |
| Add exchange_code to the BUY matching key (ROOT-7) | #323 | ✅ Merged |
| Add `log_trade()` to blackout recovery orders (GAP-4) | #324 | ✅ Merged |
| Dynamic session-transition risk parameter reload (GAP-3) | #327 | ✅ Merged |
| Explicit session_id passing to trade/decision logs (GAP-1, GAP-2) | #326 | ✅ Merged |
| Harden blackout recovery price/session revalidation (GAP-4 remainder) | #328 | ✅ Merged |
**Remaining improvement items:**
| Priority | Measure | Difficulty |
|----------|------|--------|
| P1 | Complete the US market ATR supply path (ROOT-5 remainder) | Medium |
| P1 | Activate FX PnL in operations (REQ-V3-007) | Low |
| P2 | Replace pred_down_prob with an ML model (ROOT-5 remainder) | High |
| P2 | Reinforce session-boundary E2E integration tests (GAP-3 remainder) | Low |
### 5.3 Recommended Execution Order
@@ -334,14 +378,25 @@ Phase 3 (중기): v3 세션 최적화
- ✅ Valid intent execution after blackout recovery (`tests/test_main.py:5811`)
- ✅ Policy-rejected intent drop after blackout recovery (`tests/test_main.py:5851`)
### Tests Missing
### Tests Added (Phases 1–3, 2026-03-01)
- ❌ Session transition hook callback
- ❌ Session-boundary risk parameter reload
- ❌ DecisionLogger session_id capture
- ✅ KR ATR-based dynamic hard stop (`test_main.py`, #318)
- ✅ Re-entry cooldown (blocks buying the same ticker after a stop-loss) (`test_main.py`, #319)
- ✅ US minimum price filter (blocks ≤ $5) (`test_main.py`, #320)
- ✅ Evolved-strategy syntax validation (`test_evolution.py`, #321)
- ✅ SELL PnL computed from sell_qty (`test_main.py`, #322)
- ✅ BUY matching key includes exchange_code (`test_db.py`, #323)
- ✅ Blackout recovery order DB recording (`test_main.py`, #324)
- ✅ Real ATR/RSI features supplied to the staged exit (`test_main.py`, #325)
- ✅ Explicit session_id passing to trade/decision logs (`test_main.py`, `test_decision_logger.py`, #326)
- ✅ Valid intent execution after blackout recovery (`tests/test_main.py:5811`)
- ✅ Policy-rejected intent drop after blackout recovery (`tests/test_main.py:5851`)
### Tests Missing (remaining)
- ✅ Session transition hook callback / session-boundary risk reload E2E regression (`#376`)
- ❌ Live-trading path ↔ v2 state machine integration test (including feature supply)
- Blackout recovery order DB-record verification
- ❌ SELL PnL quantity-mismatch cases
- FX PnL operational-activation verification (GAP-6)
---

View File

@@ -1,16 +1,19 @@
<!--
Doc-ID: DOC-ACTION-085
Version: 1.0.0
Version: 1.1.0
Status: active
Owner: strategy
Updated: 2026-02-28
Updated: 2026-03-01
-->
# Loss Recovery Execution Plan
Written: 2026-02-28
Last updated: 2026-03-01 (reflects Phase 1–3 completion status)
Base document: [80_implementation_audit.md](./80_implementation_audit.md) (7 ROOTs + 5 GAPs)
> **2026-03-01 status**: Phase 1 ✅ complete, Phase 2 ✅ complete, Phase 3 ✅ basically complete (ACT-13 hardening remains)
---
## 1. Summary
@@ -35,13 +38,13 @@ Updated: 2026-02-28
## 2. Work Breakdown by Phase
### Phase 1: Immediate — Stop the Loss Bleeding
### Phase 1: Immediate — Stop the Loss Bleeding ✅ Complete
Immediately removes the biggest loss patterns (noise stop-outs, repeated trading, penny stocks).
---
#### ACT-01: KR stop-loss ATR-based dynamic widening
#### ACT-01: KR stop-loss ATR-based dynamic widening ✅ Merged
- **ROOT reference**: ROOT-1 (hard_stop_pct -2% is too tight for KR small-cap volatility)
- **Gitea issue**: feat: KR stop-loss ATR-based dynamic widening (-2% → ATR-adaptive)
@@ -60,7 +63,7 @@ Updated: 2026-02-28
---
#### ACT-02: Re-entry cooldown for the same ticker after a stop-loss
#### ACT-02: Re-entry cooldown for the same ticker after a stop-loss ✅ Merged
- **ROOT reference**: ROOT-2 (repeated trading of the same ticker)
- **Gitea issue**: feat: re-entry cooldown after a stop-loss for the same ticker (1–2 hours)
@@ -79,7 +82,7 @@ Updated: 2026-02-28
---
#### ACT-03: Block entries into US stocks at or below $5
#### ACT-03: Block entries into US stocks at or below $5 ✅ Merged
- **ROOT reference**: ROOT-3 (indiscriminate entries into US penny stocks)
- **Gitea issue**: feat: US ≤ $5 stock entry-block filter
@@ -97,7 +100,7 @@ Updated: 2026-02-28
---
#### ACT-04: Add syntax validation when generating evolved-strategy code
#### ACT-04: Add syntax validation when generating evolved-strategy code ✅ Merged
- **ROOT reference**: ROOT-4 (evolved-strategy syntax errors)
- **Gitea issue**: fix: add syntax validation when generating evolved-strategy code
@@ -116,13 +119,13 @@ Updated: 2026-02-28
---
### Phase 2: Short-term — Data Integrity + v2 Effectiveness
### Phase 2: Short-term — Data Integrity + v2 Effectiveness ✅ Complete
Secures P&L calculation accuracy and makes the v2 exit logic effective.
---
#### ACT-05: Fix the SELL PnL calculation to use sell_qty
#### ACT-05: Fix the SELL PnL calculation to use sell_qty ✅ Merged
- **ROOT reference**: ROOT-6 (CRITICAL — the PnL calculation uses buy_qty)
- **Gitea issue**: fix(critical): fix the SELL PnL calculation to use sell_qty
@@ -141,7 +144,7 @@ Updated: 2026-02-28
---
#### ACT-06: Add exchange_code to the BUY matching key
#### ACT-06: Add exchange_code to the BUY matching key ✅ Merged
- **ROOT reference**: ROOT-7 (exchange_code missing from the BUY matching key)
- **Gitea issue**: fix: add exchange_code to the BUY matching key
@@ -159,12 +162,12 @@ Updated: 2026-02-28
---
#### ACT-07: Add log_trade() to blackout recovery orders
#### ACT-07: Add log_trade() to blackout recovery orders ✅ Merged
- **ROOT reference**: GAP-4 (blackout recovery orders not recorded in the DB)
- **Gitea issue**: fix: add log_trade() to blackout recovery orders
- **Gitea issue number**: #324
- **Files to change**: `src/main.py` (lines 694-791, the blackout recovery execution path)
- **Files to change**: the recovery-order execution path inside `process_blackout_recovery_orders()` in `src/main.py`
- **Current behavior**: blackout recovery orders execute, but `log_trade()` is never called → nothing recorded in the DB
- **Target behavior**: call `log_trade()` after executing a recovery order to record it in the DB; add a `[blackout-recovery]` prefix to the rationale
- **Acceptance criteria**:
@@ -178,7 +181,7 @@ Updated: 2026-02-28
---
#### ACT-08: Supply real features to the v2 staged exit
#### ACT-08: Supply real features to the v2 staged exit ✅ Merged
- **ROOT reference**: ROOT-5 (v2 exit logic ineffective)
- **Gitea issue**: feat: supply real features (ATR, pred_down_prob) to the v2 staged exit
@@ -200,7 +203,7 @@ Updated: 2026-02-28
---
#### ACT-09: Pass session_id explicitly to trade/decision logs
#### ACT-09: Pass session_id explicitly to trade/decision logs ✅ Merged
- **ROOT reference**: GAP-1 (session_id missing from DecisionLogger), GAP-2 (session_id not passed to log_trade)
- **Gitea issue**: feat: pass session_id explicitly to trade/decision logs
@@ -223,13 +226,13 @@ Updated: 2026-02-28
---
### Phase 3: Mid-term — v3 Session Optimization
### Phase 3: Mid-term — v3 Session Optimization ✅ Basically complete (ACT-13 hardening remains)
Strengthens session-boundary handling and operational governance.
---
#### ACT-10: Dynamic risk parameter reload on session transitions
#### ACT-10: Dynamic risk parameter reload on session transitions ✅ Merged
- **ROOT reference**: GAP-3 (no risk parameter reload on session transitions)
- **Gitea issue**: feat: dynamic risk parameter reload on session transitions
@@ -241,14 +244,12 @@ Updated: 2026-02-28
- confirm the parameter reload on the NXT_AFTER → KRX_REG transition
- log reload events
- on reload failure, keep existing parameters (safe fallback)
- **Test plan**:
  - unit: session transition hook callback test
  - unit: fallback-on-reload-failure test
- **Tests**: unit tests for config override/reload/fallback included in `test_main.py`. **Remaining**: reinforce the session-boundary live-transition E2E
- **Dependency**: ACT-09 (session_id infrastructure)
---
#### ACT-11: Harden price/session revalidation on blackout recovery
#### ACT-11: Harden price/session revalidation on blackout recovery ✅ Merged
- **ROOT reference**: GAP-4 remainder (price validity and session-change reapplication unimplemented)
- **Gitea issue**: feat: harden price/session revalidation on blackout recovery
@@ -268,7 +269,7 @@ Updated: 2026-02-28
---
#### ACT-12: Switch the Triple Barrier time barrier to calendar time (minutes)
#### ACT-12: Switch the Triple Barrier time barrier to calendar time (minutes) ✅ Merged
- **ROOT reference**: GAP-5 (time barrier fixed to a bar count)
- **Gitea issue**: feat: switch the Triple Barrier time barrier to calendar time (minutes)
@@ -286,21 +287,13 @@ Updated: 2026-02-28
---
#### ACT-13: CI auto-validation (policy registry + TASK-REQ mapping)
#### ACT-13: CI auto-validation (policy registry + TASK-REQ mapping) ✅ Basic implementation complete, hardening remains
- **ROOT reference**: REQ-OPS-002 (registry update enforced on policy changes), REQ-OPS-003 (TASK-REQ mapping enforced)
- **Gitea issue**: infra: CI auto-validation (policy registry + TASK-REQ mapping)
- **Gitea issue number**: #330
- **Files to change**: `.gitea/workflows/`, `scripts/validate_governance_assets.py`
- **Current behavior**: no CI auto-validation; doc validation runs manually
- **Target behavior**:
  - automatically verify on PRs whether the policy registry (`01_requirements_registry.md`) was changed
  - automatically verify that tasks/issues reference a REQ-ID
- **Acceptance criteria**:
  - CI fails when a policy file changes without a registry update
  - warn when a new issue/PR does not reference a REQ-ID
- **Test plan**:
  - test the CI pipeline itself (success/failure cases)
- **Current behavior**: `.gitea/workflows/ci.yml` automatically runs `scripts/validate_governance_assets.py` + `scripts/validate_ouroboros_docs.py`
- **Remaining hardening**: raise the enforcement level for REQ/TASK/TEST references in PR bodies; tighten the CI failure criteria when policy files change without registry updates
- **Dependency**: none
---
@@ -311,7 +304,7 @@ Updated: 2026-02-28
- 모든 ACT 항목에 대해 개별 테스트 작성
- 커버리지 >= 80% 유지
- 기존 551개 테스트 전체 통과 확인
- 현재 CI 기준 전체 테스트 통과 확인 (2026-03-01 기준 998 tests collected)
### 3.2 통합 테스트
@@ -389,4 +382,36 @@ Phase 3
---
## 6. 미진 사항 (2026-03-01 기준)
Phase 1~3 구현 완료 후에도 다음 항목이 운영상 미완료 상태이다.
### 6.1 운영 검증 필요
| 항목 | 설명 | 우선순위 |
|------|------|----------|
| FX PnL 운영 활성화 | `fx_pnl`/`strategy_pnl` 컬럼 존재하나 모든 운영 데이터 값이 0 | P1 |
| 세션 경계 E2E 통합 테스트 보강 | `test_main.py`에 단위 테스트 존재; 세션 경계 실시간 전환 E2E 미작성 | P2 |
| v2 상태기계 통합 end-to-end | 실거래 경로에서 HOLDING→BE_LOCK→ARMED→EXITED 전체 시나리오 테스트 미작성 | P2 |
### 6.2 아키텍처 수준 잔여 갭
| 항목 | 설명 | 배경 문서 |
|------|------|-----------|
| CI 자동 검증 고도화 (#330) | 기본 구현 완료(`validate_governance_assets.py` CI 연동); 규칙/강제수준 고도화 필요 | REQ-OPS-002, REQ-OPS-003 |
| pred_down_prob ML 모델 대체 | 현재 RSI 프록시 사용 — 추후 실제 GBDT/ML 모델로 대체 권장 | ROOT-5, ouroboros_plan_v2.txt §3.D |
| KR/US 파라미터 민감도 분석 | v2 계획의 be_arm_pct/arm_pct/atr_k 최적값 탐색 미수행 | ouroboros_plan_v2.txt §8 |
### 6.3 v3 실험 매트릭스 미착수
ouroboros_plan_v3.txt §9에 정의된 3개 실험이 아직 시작되지 않았다.
| 실험 ID | 시장 | 포커스 | 상태 |
|---------|------|--------|------|
| EXP-KR-01 | KR | NXT 야간 특화 (p_thresh 0.65) | ❌ 미착수 |
| EXP-US-01 | US | 21h 준연속 운용 (atr_k 2.5) | ❌ 미착수 |
| EXP-HYB-01 | Global | KR 낮 + US 밤 연계 레짐 자산배분 | ❌ 미착수 |
---
*끝.*


@@ -1,14 +1,14 @@
<!--
Doc-ID: DOC-ROOT-001
Version: 1.0.0
Version: 1.0.1
Status: active
Owner: strategy
Updated: 2026-02-26
Updated: 2026-03-01
-->
# The Ouroboros 실행 문서 허브
이 폴더는 `ouroboros_plan_v2.txt`, `ouroboros_plan_v3.txt`를 구현 가능한 작업 지시서 수준으로 분해한 문서 허브다.
이 폴더는 `source/ouroboros_plan_v2.txt`, `source/ouroboros_plan_v3.txt`를 구현 가능한 작업 지시서 수준으로 분해한 문서 허브다.
## 읽기 순서 (Routing)
@@ -18,13 +18,15 @@ Updated: 2026-02-26
4. v3 실행 지시서: [20_phase_v3_execution.md](./20_phase_v3_execution.md)
5. 코드 레벨 작업 지시: [30_code_level_work_orders.md](./30_code_level_work_orders.md)
6. 수용 기준/테스트 계획: [40_acceptance_and_test_plan.md](./40_acceptance_and_test_plan.md)
7. PM 시나리오/이슈 분류: [50_scenario_matrix_and_issue_taxonomy.md](./50_scenario_matrix_and_issue_taxonomy.md)
8. TPM 제어 프로토콜/수용 매트릭스: [50_tpm_control_protocol.md](./50_tpm_control_protocol.md)
7. PM 시나리오/이슈 분류 **(A)**: [50_scenario_matrix_and_issue_taxonomy.md](./50_scenario_matrix_and_issue_taxonomy.md)
8. TPM 제어 프로토콜/수용 매트릭스 **(B)**: [50_tpm_control_protocol.md](./50_tpm_control_protocol.md)
9. 저장소 강제 설정 체크리스트: [60_repo_enforcement_checklist.md](./60_repo_enforcement_checklist.md)
10. 메인 에이전트 아이디에이션 백로그: [70_main_agent_ideation.md](./70_main_agent_ideation.md)
11. v2/v3 구현 감사 및 수익률 분석: [80_implementation_audit.md](./80_implementation_audit.md)
12. 손실 복구 실행 계획: [85_loss_recovery_action_plan.md](./85_loss_recovery_action_plan.md)
> **참고**: 7번·8번은 `50_` 프리픽스를 공유합니다. (A) = 시나리오/이슈 분류, (B) = TPM 제어 프로토콜.
## 운영 규칙
- 계획 변경은 반드시 `01_requirements_registry.md`의 ID 정의부터 수정한다.
@@ -38,5 +40,5 @@ python3 scripts/validate_ouroboros_docs.py
## 원본 계획 문서
- [v2](/home/agentson/repos/The-Ouroboros/ouroboros_plan_v2.txt)
- [v3](/home/agentson/repos/The-Ouroboros/ouroboros_plan_v3.txt)
- [v2](./source/ouroboros_plan_v2.txt)
- [v3](./source/ouroboros_plan_v3.txt)


@@ -87,7 +87,7 @@
- 선정 기준 추적 → Evolution 시스템 최적화 가능
- API 장애 시 정적 watchlist로 자동 전환
**참고:** Realtime 모드 전용. Daily 모드는 배치 효율성을 위해 정적 watchlist 사용.
**참고 (당시 구현 기준):** Realtime 모드 전용으로 설계되었으나, 이후 Daily 경로에서도 스캐너를 사용하도록 변경됨. 해외 fallback도 정적 watchlist → 동적 유니버스(active/recent/holdings)로 전환 (2026-02-16 참조).
**이슈/PR:** #76, #77
@@ -388,3 +388,126 @@ Order result: 모의투자 매수주문이 완료 되었습니다. ✓
- `ruff check src/analysis/backtest_pipeline.py tests/test_backtest_pipeline_integration.py`
**이슈/PR:** #305
---
## 2026-02-28 ~ 2026-03-01
### v2/v3 손실 복구 실행 계획 — Phase 1 완료 (#318~#321)
**배경:**
- `docs/ouroboros/80_implementation_audit.md` 감사 결과 식별된 7개 근본 원인(ROOT) 및 5개 구현 갭(GAP) 중
가장 큰 손실 패턴 4개를 Phase 1로 즉시 제거.
**구현 내용:**
1. **ACT-01: KR 손절선 ATR 기반 동적 확대** (#318)
- `src/main.py`, `src/config.py`
- KR 시장: ATR(14) 기반 동적 hard stop (`k=2.0`, 범위 -2%~-7%)
- ATR 미제공 시 기존 -2% 폴백
- ROOT-1 (hard_stop_pct 고정값 과소) 해소
2. **ACT-02: 손절 후 동일 종목 재진입 쿨다운** (#319)
- `src/main.py`, `src/config.py`
- 손절(pnl<0) 후 동일 종목 `COOLDOWN_MINUTES`(기본 120분) 동안 BUY 차단
- 익절에는 미적용
- ROOT-2 (동일 종목 반복 매매) 해소
3. **ACT-03: US $5 이하 종목 진입 차단 필터** (#320)
- `src/main.py`, `src/config.py`
- US 시장 BUY 시 현재가 `US_MIN_PRICE`(기본 $5) 이하 차단
- ROOT-3 (미국 페니스탁 무분별 진입) 해소
4. **ACT-04: 진화 전략 코드 syntax 검증** (#321)
- `src/evolution/optimizer.py`
- `ast.parse()` + `compile()` 선검증 후 통과한 코드만 저장
- ROOT-4 (진화 전략 문법 오류) 해소
**이슈/PR:** #318, #319, #320, #321
---
### v2/v3 손실 복구 실행 계획 — Phase 2 완료 (#322~#326)
**배경:**
- 손익 계산 정확도 확보 및 v2 청산 로직 실효화.
**구현 내용:**
1. **ACT-05: SELL PnL 계산을 sell_qty 기준으로 수정** (#322)
- `src/main.py` (line 1658-1663, 2755-2760)
- `trade_pnl = (trade_price - buy_price) * sell_qty`로 변경
- ROOT-6 (PnL 계산 buy_qty 사용 CRITICAL) 해소
2. **ACT-06: BUY 매칭 키에 exchange_code 추가** (#323)
- `src/db.py`
- `get_latest_buy_trade()`: `(stock_code, market, exchange_code)` 기준 매칭
- exchange_code NULL인 레거시 데이터 하위 호환 유지
- ROOT-7 (오매칭 리스크) 해소
3. **ACT-07: 블랙아웃 복구 주문에 log_trade() 추가** (#324)
- `src/main.py` (블랙아웃 복구 실행 경로)
- 복구 주문 실행 후 `log_trade()` 호출, rationale에 `[blackout-recovery]` prefix
- GAP-4 (블랙아웃 복구 주문 DB 미기록) 해소
4. **ACT-08: v2 staged exit에 실제 피처 공급** (#325)
- `src/main.py`, `src/strategy/exit_rules.py`
- `atr_value`: ATR(14) 실시간 계산 공급
- `pred_down_prob`: RSI 기반 하락 확률 추정값 공급 (ML 모델 대체 가능)
- `be_arm_pct`/`arm_pct` 독립 파라미터 설정 가능 (take_profit_pct * 0.4 파생 제거)
- ROOT-5 (v2 청산 로직 실효성 부족) 해소
5. **ACT-09: session_id를 거래/의사결정 로그에 명시적 전달** (#326)
- `src/logging/decision_logger.py`, `src/main.py`, `src/db.py`
- `log_decision()`: session_id 파라미터 추가
- `log_trade()`: 런타임 session_id 명시적 전달
- GAP-1, GAP-2 (session_id 미포함) 부분 해소
**이슈/PR:** #322, #323, #324, #325, #326
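The ACT-05 sizing fix above is a one-line change; a minimal standalone sketch (the function name is an assumption, the formula is the one quoted in the entry):

```python
def realized_sell_pnl(buy_price: float, trade_price: float, sell_qty: int) -> float:
    """PnL for a SELL fill, sized by the quantity actually sold.

    The pre-fix code multiplied by the original buy_qty, which overstated
    PnL whenever a position was exited partially.
    """
    return (trade_price - buy_price) * sell_qty
```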
---
### v2/v3 손실 복구 실행 계획 — Phase 3 부분 완료 (#327~#329)
**배경:**
- 세션 경계 처리 및 시간장벽 캘린더 기반 전환.
**구현 내용:**
1. **ACT-10: 세션 전환 시 리스크 파라미터 동적 재로딩** (#327)
- `src/main.py`, `src/config.py`
- 세션 경계 변경 이벤트 시 `SESSION_RISK_PROFILES_JSON` 기반 재로딩
- 재로딩 실패 시 기존 파라미터 유지 (안전 폴백)
- GAP-3 (세션 전환 시 파라미터 재로딩 없음) 부분 해소
2. **ACT-11: 블랙아웃 복구 시 가격/세션 재검증 강화** (#328)
- `src/main.py`, `src/core/blackout_manager.py`
- 복구 시 현재 시세 조회하여 가격 유효성 검증 (진입가 대비 급등/급락 시 드롭)
- 세션 변경 시 새 세션의 파라미터로 재검증
- GAP-4 잔여 (가격/세션 재검증) 부분 해소
3. **ACT-12: Triple Barrier 시간장벽을 캘린더 시간(분) 기반으로 전환** (#329)
- `src/analysis/triple_barrier.py`
- `max_holding_minutes` (캘린더 분) 기반 전환, 봉 주기 무관 일관 동작
- 기존 `max_holding_bars` deprecated 경고 유지 (하위 호환)
- GAP-5 (시간장벽 봉 개수 고정) 해소
**미완료 (ACT-13):**
- **#330: CI 자동 검증 (정책 레지스트리 + TASK-REQ 매핑)** — 문서 구조화 작업으로 대체 진행 중
**이슈/PR:** #327, #328, #329
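ACT-12 replaces the bar-count horizon with wall-clock minutes; a hedged sketch of the barrier check (helper name assumed, behavior as described in the entry above):

```python
from datetime import datetime, timedelta

def time_barrier_hit(entry_ts: datetime, bar_ts: datetime, max_holding_minutes: int) -> bool:
    """Calendar-time barrier check (illustrative, not the repository code).

    Expires once max_holding_minutes of wall-clock time have elapsed since
    entry, so behavior is identical across 1m/5m/60m bar feeds; the old
    max_holding_bars horizon stretched with the bar interval.
    """
    return bar_ts - entry_ts >= timedelta(minutes=max_holding_minutes)
```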
---
### v2/v3 문서 구조화 및 감사 문서 작성 (#331)
**배경:**
- Phase 1~3 구현 완료 후 감사 결과와 실행 계획을 문서화
- 기존 감사 문서가 산발적으로 관리되어 통합 정리 필요
**구현 내용:**
- `docs/ouroboros/80_implementation_audit.md` 신규 작성: v2/v3 구현 감사 + 실거래 수익률 분석
- `docs/ouroboros/85_loss_recovery_action_plan.md` 신규 작성: ROOT/GAP 해소 Phase별 실행 계획
- `scripts/audit_queries.sql` 신규 작성: 성과 재현용 표준 집계 SQL
**이슈/PR:** #331


@@ -2,7 +2,7 @@
## Test Structure
**551 tests** across **25 files**. `asyncio_mode = "auto"` in pyproject.toml — async tests need no special decorator.
**998 tests** across **41 files**. `asyncio_mode = "auto"` in pyproject.toml — async tests need no special decorator.
The `settings` fixture in `conftest.py` provides safe defaults with test credentials and in-memory DB.
@@ -23,6 +23,8 @@ The `settings` fixture in `conftest.py` provides safe defaults with test credent
- Network error handling
- SSL context configuration
> **Note**: 아래 파일별 테스트 수는 릴리즈 시점 스냅샷이며 실제 수치와 다를 수 있습니다. 현재 정확한 수치는 `pytest --collect-only -q`로 확인하세요.
##### `tests/test_brain.py` (24 tests)
- Valid JSON parsing and markdown-wrapped JSON handling
- Malformed JSON fallback
@@ -90,7 +92,7 @@ The `settings` fixture in `conftest.py` provides safe defaults with test credent
- Python-first filtering pipeline
- RSI and volume ratio filter logic
- Candidate scoring and ranking
- Fallback to static watchlist
- Fallback to static watchlist (domestic) or dynamic universe (overseas)
#### Context & Memory
@@ -138,8 +140,8 @@ The `settings` fixture in `conftest.py` provides safe defaults with test credent
#### Dashboard
##### `tests/test_dashboard.py` (14 tests)
- FastAPI endpoint responses (8 API routes)
- Status, playbook, scorecard, performance, context, decisions, scenarios
- FastAPI endpoint responses (10 API routes)
- Status, playbook, scorecard, performance, context, decisions, scenarios, pnl/history, positions
- Query parameter handling (market, date, limit)
#### Performance & Quality


@@ -70,6 +70,22 @@ Gitea 이슈/PR/코멘트 작업 전에 모든 에이전트는 아래를 먼저
Issue/PR 본문 작성 시 줄바꿈(`\n`)이 문자열 그대로 저장되는 문제가 반복될 수 있다. 원인은 `-d "...\n..."` 형태에서 쉘/CLI가 이스케이프를 실제 개행으로 해석하지 않기 때문이다.
코멘트도 동일한 문제가 자주 발생하므로, 코멘트는 파일 기반 래퍼를 표준으로 사용한다.
```bash
# 권장: 파일/STDIN 기반 코멘트 등록 (줄바꿈 보존)
cat > /tmp/review.md <<'EOF'
리뷰 반영 완료했습니다.
- 항목 1
- 항목 2
EOF
scripts/tea_comment.sh 374 /tmp/review.md
# 또는
cat /tmp/review.md | scripts/tea_comment.sh 374 -
```

@@ -66,6 +66,7 @@ def _check_handover_entry(
*,
branch: str,
strict: bool,
ci_mode: bool,
errors: list[str],
) -> None:
if not HANDOVER_LOG.exists():
@@ -88,6 +89,10 @@ def _check_handover_entry(
errors.append(f"latest handover entry missing token: {token}")
if strict:
if "- next_ticket: #TBD" in latest:
errors.append("latest handover entry must not use placeholder next_ticket (#TBD)")
if strict and not ci_mode:
today_utc = datetime.now(UTC).date().isoformat()
if today_utc not in latest:
errors.append(
@@ -99,8 +104,6 @@ def _check_handover_entry(
"latest handover entry must target current branch "
f"({branch_token})"
)
if "- next_ticket: #TBD" in latest:
errors.append("latest handover entry must not use placeholder next_ticket (#TBD)")
if "merged_to_feature_branch=no" in latest:
errors.append(
"process gate indicates not merged; implementation must stay blocked "
@@ -117,6 +120,14 @@ def main() -> int:
action="store_true",
help="Enforce today-date and current-branch match on latest handover entry.",
)
parser.add_argument(
"--ci",
action="store_true",
help=(
"CI mode: keep structural/token checks and placeholder guard, "
"but skip strict today-date/current-branch/merge-gate checks."
),
)
args = parser.parse_args()
errors: list[str] = []
@@ -125,10 +136,15 @@ def main() -> int:
branch = _current_branch()
if not branch:
errors.append("cannot resolve current git branch")
elif branch in {"main", "master"}:
elif not args.ci and branch in {"main", "master"}:
errors.append(f"working branch must not be {branch}")
_check_handover_entry(branch=branch, strict=args.strict, errors=errors)
_check_handover_entry(
branch=branch,
strict=args.strict,
ci_mode=args.ci,
errors=errors,
)
if errors:
print("[FAIL] session handover check failed")

scripts/tea_comment.sh Executable file

@@ -0,0 +1,49 @@
#!/usr/bin/env bash
# Safe helper for posting multiline Gitea comments without escaped-newline artifacts.
set -euo pipefail
if [ "${1:-}" = "-h" ] || [ "${1:-}" = "--help" ] || [ "$#" -lt 2 ]; then
cat <<'EOF'
Usage:
scripts/tea_comment.sh <issue_or_pr_index> <body_file|-> [repo]
Examples:
scripts/tea_comment.sh 374 /tmp/comment.md
cat /tmp/comment.md | scripts/tea_comment.sh 374 - jihoson/The-Ouroboros
Notes:
- Use file/stdin input to preserve real newlines.
- Passing inline strings with "\n" is intentionally avoided by this helper.
EOF
exit 1
fi
INDEX="$1"
BODY_SOURCE="$2"
REPO="${3:-jihoson/The-Ouroboros}"
if [ "$BODY_SOURCE" = "-" ]; then
BODY="$(cat)"
else
if [ ! -f "$BODY_SOURCE" ]; then
echo "[FAIL] body file not found: $BODY_SOURCE" >&2
exit 1
fi
BODY="$(cat "$BODY_SOURCE")"
fi
if [ -z "$BODY" ]; then
echo "[FAIL] empty comment body" >&2
exit 1
fi
# Guard against the common escaped-newline mistake.
if [[ "$BODY" == *"\\n"* ]] && [[ "$BODY" != *$'\n'* ]]; then
echo "[FAIL] body appears to contain escaped newlines (\\n) instead of real line breaks" >&2
echo "Use a multiline file/heredoc and pass that file to scripts/tea_comment.sh" >&2
exit 1
fi
YES="" ~/bin/tea comment "$INDEX" --repo "$REPO" "$BODY"


@@ -0,0 +1,135 @@
#!/usr/bin/env python3
"""Validate top-level docs synchronization invariants."""
from __future__ import annotations
import re
import sys
from pathlib import Path
REPO_ROOT = Path(".")
REQUIRED_FILES = {
"README.md": REPO_ROOT / "README.md",
"CLAUDE.md": REPO_ROOT / "CLAUDE.md",
"commands": REPO_ROOT / "docs" / "commands.md",
"testing": REPO_ROOT / "docs" / "testing.md",
"workflow": REPO_ROOT / "docs" / "workflow.md",
}
LINK_PATTERN = re.compile(r"\[[^\]]+\]\((?P<link>[^)]+)\)")
ENDPOINT_ROW_PATTERN = re.compile(
r"^\|\s*`(?P<endpoint>(?:GET|POST|PUT|PATCH|DELETE)\s+/[^`]*)`\s*\|"
)
def _read(path: Path) -> str:
return path.read_text(encoding="utf-8")
def validate_required_files_exist(errors: list[str]) -> None:
for name, path in REQUIRED_FILES.items():
if not path.exists():
errors.append(f"missing required doc file ({name}): {path}")
def validate_links_resolve(doc_path: Path, text: str, errors: list[str]) -> None:
for match in LINK_PATTERN.finditer(text):
raw_link = match.group("link").strip()
if not raw_link or raw_link.startswith("#") or raw_link.startswith("http"):
continue
link_path = raw_link.split("#", 1)[0].strip()
if not link_path:
continue
if link_path.startswith("/"):
errors.append(f"{doc_path}: absolute link is forbidden -> {raw_link}")
continue
target = (doc_path.parent / link_path).resolve()
if not target.exists():
errors.append(f"{doc_path}: broken link -> {raw_link}")
def validate_summary_docs_reference_core_docs(errors: list[str]) -> None:
required_links = {
"README.md": ("docs/workflow.md", "docs/commands.md", "docs/testing.md"),
"CLAUDE.md": ("docs/workflow.md", "docs/commands.md"),
}
for file_name, links in required_links.items():
doc_path = REQUIRED_FILES[file_name]
text = _read(doc_path)
for link in links:
if link not in text:
errors.append(f"{doc_path}: missing core doc link reference -> {link}")
def collect_command_endpoints(text: str) -> list[str]:
endpoints: list[str] = []
for line in text.splitlines():
match = ENDPOINT_ROW_PATTERN.match(line.strip())
if match:
endpoints.append(match.group("endpoint"))
return endpoints
def validate_commands_endpoint_duplicates(errors: list[str]) -> None:
text = _read(REQUIRED_FILES["commands"])
endpoints = collect_command_endpoints(text)
seen: set[str] = set()
duplicates: set[str] = set()
for endpoint in endpoints:
if endpoint in seen:
duplicates.add(endpoint)
seen.add(endpoint)
for endpoint in sorted(duplicates):
errors.append(f"docs/commands.md: duplicated API endpoint row -> {endpoint}")
def validate_testing_doc_has_dynamic_count_guidance(errors: list[str]) -> None:
text = _read(REQUIRED_FILES["testing"])
if "pytest --collect-only -q" not in text:
errors.append(
"docs/testing.md: missing dynamic test count guidance "
"(pytest --collect-only -q)"
)
def main() -> int:
errors: list[str] = []
validate_required_files_exist(errors)
if errors:
print("[FAIL] docs sync validation failed")
for err in errors:
print(f"- {err}")
return 1
readme_text = _read(REQUIRED_FILES["README.md"])
claude_text = _read(REQUIRED_FILES["CLAUDE.md"])
validate_links_resolve(REQUIRED_FILES["README.md"], readme_text, errors)
validate_links_resolve(REQUIRED_FILES["CLAUDE.md"], claude_text, errors)
validate_links_resolve(
REQUIRED_FILES["commands"], _read(REQUIRED_FILES["commands"]), errors
)
validate_links_resolve(REQUIRED_FILES["testing"], _read(REQUIRED_FILES["testing"]), errors)
validate_links_resolve(
REQUIRED_FILES["workflow"], _read(REQUIRED_FILES["workflow"]), errors
)
validate_summary_docs_reference_core_docs(errors)
validate_commands_endpoint_duplicates(errors)
validate_testing_doc_has_dynamic_count_guidance(errors)
if errors:
print("[FAIL] docs sync validation failed")
for err in errors:
print(f"- {err}")
return 1
print("[OK] docs sync validated")
print("[OK] summary docs link to core docs and links resolve")
print("[OK] commands endpoint rows have no duplicates")
print("[OK] testing doc includes dynamic count guidance")
return 0
if __name__ == "__main__":
sys.exit(main())


@@ -3,9 +3,22 @@
from __future__ import annotations
import os
import re
import subprocess
import sys
from pathlib import Path
REQUIREMENTS_REGISTRY = "docs/ouroboros/01_requirements_registry.md"
TASK_WORK_ORDERS_DOC = "docs/ouroboros/30_code_level_work_orders.md"
TASK_DEF_LINE = re.compile(r"^-\s+`(?P<task_id>TASK-[A-Z0-9-]+-\d{3})`(?P<body>.*)$")
REQ_ID_IN_LINE = re.compile(r"\bREQ-[A-Z0-9-]+-\d{3}\b")
TASK_ID_IN_TEXT = re.compile(r"\bTASK-[A-Z0-9-]+-\d{3}\b")
TEST_ID_IN_TEXT = re.compile(r"\bTEST-[A-Z0-9-]+-\d{3}\b")
READ_ONLY_FILES = {"src/core/risk_manager.py"}
PLACEHOLDER_VALUES = {"", "tbd", "n/a", "na", "none", "<link>", "<required>"}
TIMEZONE_TOKEN_PATTERN = re.compile(r"\b(?:KST|UTC)\b")
def must_contain(path: Path, required: list[str], errors: list[str]) -> None:
if not path.exists():
@@ -17,8 +30,186 @@ def must_contain(path: Path, required: list[str], errors: list[str]) -> None:
errors.append(f"{path}: missing required token -> {token}")
def normalize_changed_path(path: str) -> str:
normalized = path.strip().replace("\\", "/")
if normalized.startswith("./"):
normalized = normalized[2:]
return normalized
def is_policy_file(path: str) -> bool:
normalized = normalize_changed_path(path)
if not normalized.endswith(".md"):
return False
if not normalized.startswith("docs/ouroboros/"):
return False
return normalized != REQUIREMENTS_REGISTRY
def load_changed_files(args: list[str], errors: list[str]) -> list[str]:
if not args:
return []
# Single range input (e.g. BASE..HEAD or BASE...HEAD)
if len(args) == 1 and ".." in args[0]:
range_spec = args[0]
try:
completed = subprocess.run(
["git", "diff", "--name-only", range_spec],
check=True,
capture_output=True,
text=True,
)
except (subprocess.CalledProcessError, FileNotFoundError) as exc:
errors.append(f"failed to load changed files from range '{range_spec}': {exc}")
return []
return [
normalize_changed_path(line)
for line in completed.stdout.splitlines()
if line.strip()
]
return [normalize_changed_path(path) for path in args if path.strip()]
def validate_registry_sync(changed_files: list[str], errors: list[str]) -> None:
if not changed_files:
return
changed_set = set(changed_files)
policy_changed = any(is_policy_file(path) for path in changed_set)
registry_changed = REQUIREMENTS_REGISTRY in changed_set
if policy_changed and not registry_changed:
errors.append(
"policy file changed without updating docs/ouroboros/01_requirements_registry.md"
)
def validate_task_req_mapping(errors: list[str], *, task_doc: Path | None = None) -> None:
path = task_doc or Path(TASK_WORK_ORDERS_DOC)
if not path.exists():
errors.append(f"missing file: {path}")
return
text = path.read_text(encoding="utf-8")
found_task = False
for line in text.splitlines():
m = TASK_DEF_LINE.match(line.strip())
if not m:
continue
found_task = True
if not REQ_ID_IN_LINE.search(m.group("body")):
errors.append(
f"{path}: TASK without REQ mapping -> {m.group('task_id')}"
)
if not found_task:
errors.append(f"{path}: no TASK definitions found")
def validate_task_test_pairing(errors: list[str], *, task_doc: Path | None = None) -> None:
"""Fail when TASK definitions are not linked to at least one TEST id."""
path = task_doc or Path(TASK_WORK_ORDERS_DOC)
if not path.exists():
errors.append(f"missing file: {path}")
return
text = path.read_text(encoding="utf-8")
found_task = False
for line in text.splitlines():
m = TASK_DEF_LINE.match(line.strip())
if not m:
continue
found_task = True
if not TEST_ID_IN_TEXT.search(m.group("body")):
errors.append(f"{path}: TASK without TEST mapping -> {m.group('task_id')}")
if not found_task:
errors.append(f"{path}: no TASK definitions found")
def validate_timezone_policy_tokens(errors: list[str]) -> None:
"""Fail-fast check for REQ-OPS-001 governance tokens."""
required_docs = [
Path("docs/ouroboros/01_requirements_registry.md"),
Path("docs/ouroboros/30_code_level_work_orders.md"),
Path("docs/workflow.md"),
]
for path in required_docs:
if not path.exists():
errors.append(f"missing file: {path}")
continue
text = path.read_text(encoding="utf-8")
if not TIMEZONE_TOKEN_PATTERN.search(text):
errors.append(f"{path}: missing timezone policy token (KST/UTC)")
def validate_pr_traceability(errors: list[str]) -> None:
title = os.getenv("GOVERNANCE_PR_TITLE", "").strip()
body = os.getenv("GOVERNANCE_PR_BODY", "").strip()
if not title and not body:
return
text = f"{title}\n{body}"
if not REQ_ID_IN_LINE.search(text):
errors.append("PR text missing REQ-ID reference")
if not TASK_ID_IN_TEXT.search(text):
errors.append("PR text missing TASK-ID reference")
if not TEST_ID_IN_TEXT.search(text):
errors.append("PR text missing TEST-ID reference")
def _parse_pr_evidence_line(text: str, field: str) -> str | None:
pattern = re.compile(rf"^\s*-\s*{re.escape(field)}:\s*(?P<value>.+?)\s*$", re.MULTILINE)
match = pattern.search(text)
if not match:
return None
return match.group("value").strip()
def _is_placeholder(value: str | None) -> bool:
if value is None:
return True
normalized = value.strip().lower()
return normalized in PLACEHOLDER_VALUES
def validate_read_only_approval(
changed_files: list[str], errors: list[str], warnings: list[str]
) -> None:
changed_set = set(changed_files)
touched = sorted(path for path in READ_ONLY_FILES if path in changed_set)
if not touched:
return
body = os.getenv("GOVERNANCE_PR_BODY", "").strip()
if not body:
errors.append(
"READ-ONLY file changed but PR body is unavailable; approval evidence is required"
)
return
if "READ-ONLY Approval" not in body:
errors.append("READ-ONLY file changed without 'READ-ONLY Approval' section in PR body")
return
touched_field = _parse_pr_evidence_line(body, "Touched READ-ONLY files")
human_approval = _parse_pr_evidence_line(body, "Human approval")
test_suite_1 = _parse_pr_evidence_line(body, "Test suite 1")
test_suite_2 = _parse_pr_evidence_line(body, "Test suite 2")
if _is_placeholder(touched_field):
errors.append("READ-ONLY Approval section missing 'Touched READ-ONLY files' evidence")
if _is_placeholder(human_approval):
errors.append("READ-ONLY Approval section missing 'Human approval' evidence")
if _is_placeholder(test_suite_1):
errors.append("READ-ONLY Approval section missing 'Test suite 1' evidence")
if _is_placeholder(test_suite_2):
errors.append("READ-ONLY Approval section missing 'Test suite 2' evidence")
def main() -> int:
errors: list[str] = []
warnings: list[str] = []
changed_files = load_changed_files(sys.argv[1:], errors)
pr_template = Path(".gitea/PULL_REQUEST_TEMPLATE.md")
issue_template = Path(".gitea/ISSUE_TEMPLATE/runtime_verification.md")
@@ -38,6 +229,11 @@ def main() -> int:
"gh",
"Session Handover Gate",
"session_handover_check.py --strict",
"READ-ONLY Approval",
"Touched READ-ONLY files",
"Human approval",
"Test suite 1",
"Test suite 2",
],
errors,
)
@@ -56,6 +252,7 @@ def main() -> int:
[
"Session Handover Gate (Mandatory)",
"session_handover_check.py --strict",
"scripts/tea_comment.sh",
],
errors,
)
@@ -64,6 +261,8 @@ def main() -> int:
[
"Session Handover Preflight (Mandatory)",
"session_handover_check.py --strict",
"Comment Newline Escaping",
"scripts/tea_comment.sh",
],
errors,
)
@@ -81,6 +280,13 @@ def main() -> int:
if not handover_script.exists():
errors.append(f"missing file: {handover_script}")
validate_registry_sync(changed_files, errors)
validate_task_req_mapping(errors)
validate_task_test_pairing(errors)
validate_timezone_policy_tokens(errors)
validate_pr_traceability(errors)
validate_read_only_approval(changed_files, errors, warnings)
if errors:
print("[FAIL] governance asset validation failed")
for err in errors:
@@ -88,6 +294,10 @@ def main() -> int:
return 1
print("[OK] governance assets validated")
if warnings:
print(f"[WARN] governance advisory: {len(warnings)}")
for warn in warnings:
print(f"- {warn}")
return 0


@@ -19,9 +19,20 @@ META_PATTERN = re.compile(
re.MULTILINE,
)
ID_PATTERN = re.compile(r"\b(?:REQ|RULE|TASK|TEST|DOC)-[A-Z0-9-]+-\d{3}\b")
DEF_PATTERN = re.compile(r"^-\s+`(?P<id>(?:REQ|RULE|TASK|TEST|DOC)-[A-Z0-9-]+-\d{3})`", re.MULTILINE)
DEF_PATTERN = re.compile(
r"^-\s+`(?P<id>(?:REQ|RULE|TASK|TEST|DOC)-[A-Z0-9-]+-\d{3})`",
re.MULTILINE,
)
LINK_PATTERN = re.compile(r"\[[^\]]+\]\((?P<link>[^)]+)\)")
LINE_DEF_PATTERN = re.compile(r"^-\s+`(?P<id>(?:REQ|RULE|TASK|TEST|DOC)-[A-Z0-9-]+-\d{3})`.*$", re.MULTILINE)
LINE_DEF_PATTERN = re.compile(
r"^-\s+`(?P<id>(?:REQ|RULE|TASK|TEST|DOC)-[A-Z0-9-]+-\d{3})`.*$",
re.MULTILINE,
)
PLAN_LINK_PATTERN = re.compile(r"ouroboros_plan_v(?P<version>[23])\.txt$")
ALLOWED_PLAN_TARGETS = {
"2": (DOC_DIR / "source" / "ouroboros_plan_v2.txt").resolve(),
"3": (DOC_DIR / "source" / "ouroboros_plan_v3.txt").resolve(),
}
def iter_docs() -> list[Path]:
@@ -40,15 +51,47 @@ def validate_metadata(path: Path, text: str, errors: list[str], doc_ids: dict[st
doc_ids[doc_id] = path
def validate_plan_source_link(path: Path, link: str, errors: list[str]) -> bool:
normalized = link.strip()
# Ignore in-page anchors and parse the filesystem part for validation.
link_path = normalized.split("#", 1)[0].strip()
if not link_path:
return False
match = PLAN_LINK_PATTERN.search(link_path)
if not match:
return False
version = match.group("version")
expected_target = ALLOWED_PLAN_TARGETS[version]
if link_path.startswith("/"):
errors.append(
f"{path}: invalid plan link path -> {link} "
f"(use ./source/ouroboros_plan_v{version}.txt)"
)
return True
resolved_target = (path.parent / link_path).resolve()
if resolved_target != expected_target:
errors.append(
f"{path}: invalid plan link path -> {link} "
f"(must resolve to docs/ouroboros/source/ouroboros_plan_v{version}.txt)"
)
return True
return False
def validate_links(path: Path, text: str, errors: list[str]) -> None:
for m in LINK_PATTERN.finditer(text):
link = m.group("link").strip()
if not link or link.startswith("http") or link.startswith("#"):
continue
if link.startswith("/"):
target = Path(link)
if validate_plan_source_link(path, link, errors):
continue
link_path = link.split("#", 1)[0].strip()
if link_path.startswith("/"):
target = Path(link_path)
else:
target = (path.parent / link).resolve()
target = (path.parent / link_path).resolve()
if not target.exists():
errors.append(f"{path}: broken link -> {link}")
@@ -61,7 +104,9 @@ def collect_ids(path: Path, text: str, defs: dict[str, Path], refs: dict[str, se
refs.setdefault(idv, set()).add(path)
def collect_req_traceability(text: str, req_to_task: dict[str, set[str]], req_to_test: dict[str, set[str]]) -> None:
def collect_req_traceability(
text: str, req_to_task: dict[str, set[str]], req_to_test: dict[str, set[str]]
) -> None:
for m in LINE_DEF_PATTERN.finditer(text):
line = m.group(0)
item_id = m.group("id")


@@ -2,8 +2,8 @@
from __future__ import annotations
from dataclasses import dataclass
import math
from dataclasses import dataclass
@dataclass(frozen=True)
@@ -11,6 +11,7 @@ class BacktestCostModel:
commission_bps: float | None = None
slippage_bps_by_session: dict[str, float] | None = None
failure_rate_by_session: dict[str, float] | None = None
partial_fill_rate_by_session: dict[str, float] | None = None
unfavorable_fill_required: bool = True
@@ -31,6 +32,7 @@ def validate_backtest_cost_model(
slippage = model.slippage_bps_by_session or {}
failure = model.failure_rate_by_session or {}
partial_fill = model.partial_fill_rate_by_session or {}
missing_slippage = [s for s in required_sessions if s not in slippage]
if missing_slippage:
@@ -43,6 +45,12 @@ def validate_backtest_cost_model(
raise ValueError(
f"missing failure_rate_by_session for sessions: {', '.join(missing_failure)}"
)
missing_partial_fill = [s for s in required_sessions if s not in partial_fill]
if missing_partial_fill:
raise ValueError(
"missing partial_fill_rate_by_session for sessions: "
f"{', '.join(missing_partial_fill)}"
)
for sess, bps in slippage.items():
if not math.isfinite(bps) or bps < 0:
@@ -50,3 +58,6 @@ def validate_backtest_cost_model(
for sess, rate in failure.items():
if not math.isfinite(rate) or rate < 0 or rate > 1:
raise ValueError(f"failure rate must be within [0,1] for session={sess}")
for sess, rate in partial_fill.items():
if not math.isfinite(rate) or rate < 0 or rate > 1:
raise ValueError(f"partial fill rate must be within [0,1] for session={sess}")


@@ -2,12 +2,11 @@
from __future__ import annotations
from dataclasses import dataclass
import math
from dataclasses import dataclass
from random import Random
from typing import Literal
OrderSide = Literal["BUY", "SELL"]
@@ -77,7 +76,9 @@ class BacktestExecutionModel:
reason="execution_failure",
)
slip_mult = 1.0 + (slippage_bps / 10000.0 if request.side == "BUY" else -slippage_bps / 10000.0)
slip_mult = 1.0 + (
slippage_bps / 10000.0 if request.side == "BUY" else -slippage_bps / 10000.0
)
exec_price = request.reference_price * slip_mult
if self._rng.random() < partial_rate:


@@ -8,10 +8,16 @@ from __future__ import annotations
from collections.abc import Sequence
from dataclasses import dataclass
from datetime import datetime
from statistics import mean
from typing import Literal
from typing import Literal, cast
from src.analysis.backtest_cost_guard import BacktestCostModel, validate_backtest_cost_model
from src.analysis.backtest_execution_model import (
BacktestExecutionModel,
ExecutionAssumptions,
ExecutionRequest,
)
from src.analysis.triple_barrier import TripleBarrierSpec, label_with_triple_barrier
from src.analysis.walk_forward_split import WalkForwardFold, generate_walk_forward_splits
@@ -22,6 +28,7 @@ class BacktestBar:
low: float
close: float
session_id: str
timestamp: datetime | None = None
@dataclass(frozen=True)
@@ -38,6 +45,7 @@ class WalkForwardConfig:
class BaselineScore:
name: Literal["B0", "B1", "M1"]
accuracy: float
cost_adjusted_accuracy: float
@dataclass(frozen=True)
@@ -86,22 +94,35 @@ def run_v2_backtest_pipeline(
highs = [float(bar.high) for bar in bars]
lows = [float(bar.low) for bar in bars]
closes = [float(bar.close) for bar in bars]
timestamps = [bar.timestamp for bar in bars]
normalized_entries = sorted(set(int(i) for i in entry_indices))
if normalized_entries[0] < 0 or normalized_entries[-1] >= len(bars):
raise IndexError("entry index out of range")
resolved_timestamps: list[datetime] | None = None
if triple_barrier_spec.max_holding_minutes is not None:
if any(ts is None for ts in timestamps):
raise ValueError(
"BacktestBar.timestamp is required for all bars when "
"triple_barrier_spec.max_holding_minutes is set"
)
resolved_timestamps = cast(list[datetime], timestamps)
labels_by_bar_index: dict[int, int] = {}
for idx in normalized_entries:
labels_by_bar_index[idx] = label_with_triple_barrier(
highs=highs,
lows=lows,
closes=closes,
timestamps=resolved_timestamps,
entry_index=idx,
side=side,
spec=triple_barrier_spec,
).label
ordered_labels = [labels_by_bar_index[idx] for idx in normalized_entries]
ordered_sessions = [bars[idx].session_id for idx in normalized_entries]
ordered_prices = [bars[idx].close for idx in normalized_entries]
folds = generate_walk_forward_splits(
n_samples=len(normalized_entries),
train_size=walk_forward.train_size,
@@ -116,8 +137,13 @@ def run_v2_backtest_pipeline(
for fold_idx, fold in enumerate(folds):
train_labels = [ordered_labels[i] for i in fold.train_indices]
test_labels = [ordered_labels[i] for i in fold.test_indices]
test_sessions = [ordered_sessions[i] for i in fold.test_indices]
test_prices = [ordered_prices[i] for i in fold.test_indices]
if not test_labels:
continue
execution_model = _build_execution_model(cost_model=cost_model, fold_seed=fold_idx)
b0_pred = _baseline_b0_pred(train_labels)
m1_pred = _m1_pred(train_labels)
fold_results.append(
BacktestFoldResult(
fold_index=fold_idx,
@@ -126,11 +152,41 @@ def run_v2_backtest_pipeline(
train_label_distribution=_label_dist(train_labels),
test_label_distribution=_label_dist(test_labels),
baseline_scores=[
BaselineScore(
name="B0",
accuracy=_score_constant(b0_pred, test_labels),
cost_adjusted_accuracy=_score_with_execution(
prediction=b0_pred,
actual=test_labels,
sessions=test_sessions,
reference_prices=test_prices,
execution_model=execution_model,
commission_bps=float(cost_model.commission_bps or 0.0),
),
),
BaselineScore(
name="B1",
accuracy=_score_constant(1, test_labels),
cost_adjusted_accuracy=_score_with_execution(
prediction=1,
actual=test_labels,
sessions=test_sessions,
reference_prices=test_prices,
execution_model=execution_model,
commission_bps=float(cost_model.commission_bps or 0.0),
),
),
BaselineScore(
name="M1",
accuracy=_score_constant(m1_pred, test_labels),
cost_adjusted_accuracy=_score_with_execution(
prediction=m1_pred,
actual=test_labels,
sessions=test_sessions,
reference_prices=test_prices,
execution_model=execution_model,
commission_bps=float(cost_model.commission_bps or 0.0),
),
),
],
)
@@ -163,12 +219,15 @@ def _score_constant(pred: int, actual: Sequence[int]) -> float:
def _baseline_b0(train_labels: Sequence[int], test_labels: Sequence[int]) -> float:
    return _score_constant(_baseline_b0_pred(train_labels), test_labels)

def _baseline_b0_pred(train_labels: Sequence[int]) -> int:
    if not train_labels:
        return 0
    # Majority-class baseline from training fold.
    choices = (-1, 0, 1)
    return max(choices, key=lambda c: train_labels.count(c))
def _m1_pred(train_labels: Sequence[int]) -> int:
@@ -177,6 +236,56 @@ def _m1_pred(train_labels: Sequence[int]) -> int:
return train_labels[-1]
def _build_execution_model(
*,
cost_model: BacktestCostModel,
fold_seed: int,
) -> BacktestExecutionModel:
return BacktestExecutionModel(
ExecutionAssumptions(
slippage_bps_by_session=dict(cost_model.slippage_bps_by_session or {}),
failure_rate_by_session=dict(cost_model.failure_rate_by_session or {}),
partial_fill_rate_by_session=dict(cost_model.partial_fill_rate_by_session or {}),
seed=fold_seed,
)
)
def _score_with_execution(
*,
prediction: int,
actual: Sequence[int],
sessions: Sequence[str],
reference_prices: Sequence[float],
execution_model: BacktestExecutionModel,
commission_bps: float,
) -> float:
if not actual:
return 0.0
contributions: list[float] = []
for label, session_id, reference_price in zip(actual, sessions, reference_prices, strict=True):
if prediction == 0:
contributions.append(1.0 if label == 0 else 0.0)
continue
side = "BUY" if prediction > 0 else "SELL"
execution = execution_model.simulate(
ExecutionRequest(
side=side,
session_id=session_id,
qty=100,
reference_price=reference_price,
)
)
if execution.status == "REJECTED":
contributions.append(0.0)
continue
fill_ratio = execution.filled_qty / 100.0
cost_penalty = min(0.99, (commission_bps + execution.slippage_bps) / 10000.0)
correctness = 1.0 if prediction == label else 0.0
contributions.append(correctness * fill_ratio * (1.0 - cost_penalty))
return mean(contributions)
def _build_run_id(*, n_entries: int, n_folds: int, sessions: Sequence[str]) -> str:
sess_key = "_".join(sessions)
return f"v2p-e{n_entries}-f{n_folds}-s{sess_key}"
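For intuition, each test sample's contribution in `_score_with_execution` is correctness × fill ratio × (1 - cost penalty), where the penalty converts commission plus slippage from basis points into a fraction and is capped at 0.99. A small arithmetic sketch of just that formula (the bps figures and fill quantities here are invented inputs, not values from the pipeline):

```python
def contribution(correct: bool, filled_qty: int, commission_bps: float, slippage_bps: float) -> float:
    # Total round-cost in bps becomes a fraction of notional, capped at 0.99.
    cost_penalty = min(0.99, (commission_bps + slippage_bps) / 10000.0)
    fill_ratio = filled_qty / 100.0  # orders are simulated with qty=100
    return (1.0 if correct else 0.0) * fill_ratio * (1.0 - cost_penalty)

# Correct call, 80% fill, 5 bps commission + 15 bps slippage: ~0.8 * 0.998
print(round(contribution(True, 80, 5.0, 15.0), 6))   # 0.7984
print(contribution(False, 100, 5.0, 15.0))           # wrong prediction contributes 0.0
```

The cap matters only for pathological cost inputs; it keeps a single sample from contributing a negative score.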

View File

@@ -104,6 +104,7 @@ class MarketScanner:
# Store in L7 real-time layer
from datetime import UTC, datetime
timeframe = datetime.now(UTC).isoformat()
self.context_store.set_context(
ContextLayer.L7_REALTIME,
@@ -158,12 +159,8 @@ class MarketScanner:
top_movers = valid_metrics[: self.top_n]
# Detect breakouts and breakdowns
breakouts = [m.stock_code for m in valid_metrics if self.analyzer.is_breakout(m)]
breakdowns = [m.stock_code for m in valid_metrics if self.analyzer.is_breakdown(m)]
logger.info(
"%s scan complete: %d scanned, top momentum=%.1f, %d breakouts, %d breakdowns",
@@ -228,10 +225,9 @@ class MarketScanner:
# If we removed too many, backfill from current watchlist
if len(updated) < len(current_watchlist):
backfill = [code for code in current_watchlist if code not in updated][
    : len(current_watchlist) - len(updated)
]
updated.extend(backfill)
logger.info(
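The backfill step in the hunk above keeps the watchlist size stable: codes dropped by the update are refilled from the current watchlist, truncated so the result never exceeds the original length. A toy check of that slice (list values are illustrative):

```python
current_watchlist = ["A", "B", "C", "D"]
updated = ["A", "C"]  # two codes were removed by the update

if len(updated) < len(current_watchlist):
    # Refill from codes not already kept, capped at the number removed.
    backfill = [code for code in current_watchlist if code not in updated][
        : len(current_watchlist) - len(updated)
    ]
    updated.extend(backfill)

print(updated)  # ['A', 'C', 'B', 'D'] -- size restored, order from current_watchlist
```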

View File

@@ -158,7 +158,12 @@ class SmartVolatilityScanner:
price = latest_close
latest_high = _safe_float(latest.get("high"))
latest_low = _safe_float(latest.get("low"))
if (
    latest_close > 0
    and latest_high > 0
    and latest_low > 0
    and latest_high >= latest_low
):
intraday_range_pct = (latest_high - latest_low) / latest_close * 100.0
if volume <= 0:
volume = _safe_float(latest.get("volume"))
@@ -234,9 +239,7 @@ class SmartVolatilityScanner:
limit=50,
)
except Exception as exc:
logger.warning("Overseas fluctuation ranking failed for %s: %s", market.code, exc)
fluct_rows = []
if not fluct_rows:
@@ -250,9 +253,7 @@ class SmartVolatilityScanner:
limit=50,
)
except Exception as exc:
logger.warning("Overseas volume ranking failed for %s: %s", market.code, exc)
volume_rows = []
for idx, row in enumerate(volume_rows):
@@ -433,16 +434,10 @@ def _extract_intraday_range_pct(row: dict[str, Any], price: float) -> float:
if price <= 0:
return 0.0
high = _safe_float(
row.get("high") or row.get("ovrs_hgpr") or row.get("stck_hgpr") or row.get("day_hgpr")
)
low = _safe_float(
row.get("low") or row.get("ovrs_lwpr") or row.get("stck_lwpr") or row.get("day_lwpr")
)
if high <= 0 or low <= 0 or high < low:
return 0.0
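The `or`-chained `row.get(...)` fallbacks above try vendor-specific field names in order. One caveat worth noting: `or` also skips falsy-but-present values such as `0` or `""`, which is acceptable here because a non-positive high/low is rejected anyway. A sketch with a hypothetical row and a minimal stand-in for the module's `_safe_float` helper:

```python
def _safe_float(value) -> float:
    # Minimal stand-in for the module's helper: tolerate None/str/garbage.
    try:
        return float(value)
    except (TypeError, ValueError):
        return 0.0

row = {"ovrs_hgpr": "101.5", "ovrs_lwpr": ""}  # hypothetical overseas row
high = _safe_float(
    row.get("high") or row.get("ovrs_hgpr") or row.get("stck_hgpr") or row.get("day_hgpr")
)
low = _safe_float(
    row.get("low") or row.get("ovrs_lwpr") or row.get("stck_lwpr") or row.get("day_lwpr")
)
print(high, low)  # 101.5 0.0 -- the empty string falls through every fallback
```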

View File

@@ -6,10 +6,10 @@ Implements first-touch labeling with upper/lower/time barriers.
from __future__ import annotations
import warnings
from collections.abc import Sequence
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Literal
TieBreakMode = Literal["stop_first", "take_first"]
@@ -92,7 +92,10 @@ def label_with_triple_barrier(
else:
assert spec.max_holding_bars is not None
warnings.warn(
(
    "TripleBarrierSpec.max_holding_bars is deprecated; "
    "use max_holding_minutes with timestamps instead."
),
DeprecationWarning,
stacklevel=2,
)

View File

@@ -92,9 +92,7 @@ class VolatilityAnalyzer:
recent_tr = true_ranges[-period:]
return sum(recent_tr) / len(recent_tr)
def calculate_price_change(self, current_price: float, past_price: float) -> float:
"""Calculate price change percentage.
Args:
@@ -108,9 +106,7 @@ class VolatilityAnalyzer:
return 0.0
return ((current_price - past_price) / past_price) * 100
def calculate_volume_surge(self, current_volume: float, avg_volume: float) -> float:
"""Calculate volume surge ratio.
Args:
@@ -240,11 +236,7 @@ class VolatilityAnalyzer:
Momentum score (0-100)
"""
# Weight recent changes more heavily
weighted_change = price_change_1m * 0.4 + price_change_5m * 0.3 + price_change_15m * 0.2
# Volume contribution (normalized to 0-10 scale)
volume_contribution = min(10.0, (volume_surge - 1.0) * 5.0)
@@ -301,17 +293,11 @@ class VolatilityAnalyzer:
if len(close_prices) >= 1:
price_change_1m = self.calculate_price_change(current_price, close_prices[-1])
if len(close_prices) >= 5:
price_change_5m = self.calculate_price_change(current_price, close_prices[-5])
if len(close_prices) >= 15:
price_change_15m = self.calculate_price_change(current_price, close_prices[-15])
# Calculate volume surge
avg_volume = sum(volumes) / len(volumes) if volumes else current_volume
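The momentum score above weights the 1-minute change most heavily (0.4/0.3/0.2) and adds a volume term normalized so a 3x surge already hits the 10-point cap. A worked sketch of just those two terms (the input numbers are invented):

```python
def momentum_terms(
    price_change_1m: float,
    price_change_5m: float,
    price_change_15m: float,
    volume_surge: float,
) -> tuple[float, float]:
    # Weight recent changes more heavily.
    weighted_change = price_change_1m * 0.4 + price_change_5m * 0.3 + price_change_15m * 0.2
    # (surge - 1) * 5 caps at 10.0: a 3x surge already saturates the term.
    volume_contribution = min(10.0, (volume_surge - 1.0) * 5.0)
    return weighted_change, volume_contribution

wc, vc = momentum_terms(1.0, 2.0, 3.0, 4.0)
print(round(wc, 6), vc)  # 1.6 10.0
```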

View File

@@ -7,9 +7,9 @@ This module provides:
- Health monitoring and alerts
"""
from src.backup.cloud_storage import CloudStorage, S3Config
from src.backup.exporter import BackupExporter, ExportFormat
from src.backup.scheduler import BackupPolicy, BackupScheduler
__all__ = [
"BackupExporter",

View File

@@ -94,7 +94,9 @@ class CloudStorage:
if metadata:
extra_args["Metadata"] = metadata
logger.info(
    "Uploading %s to s3://%s/%s", file_path.name, self.config.bucket_name, object_key
)
try:
self.client.upload_file(

View File

@@ -14,14 +14,14 @@ import json
import logging
import sqlite3
from datetime import UTC, datetime
from enum import StrEnum
from pathlib import Path
from typing import Any
logger = logging.getLogger(__name__)
class ExportFormat(StrEnum):
"""Supported export formats."""
JSON = "json"
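The diff migrates `class ExportFormat(str, Enum)` to `StrEnum` (Python 3.11+). The practical difference is that `str()` and f-strings of a StrEnum member yield the value rather than the member name, while equality with plain strings keeps working. A sketch, with a small shim for older interpreters (the shim is an assumption for portability, not part of the diff):

```python
from enum import Enum

try:
    from enum import StrEnum  # Python 3.11+
except ImportError:
    class StrEnum(str, Enum):
        __str__ = str.__str__  # mimic StrEnum's value-based str()

class ExportFormat(StrEnum):
    JSON = "json"
    CSV = "csv"

print(ExportFormat.JSON == "json")  # True: members still compare equal to their value
print(f"{ExportFormat.JSON}")       # json, not ExportFormat.JSON
```

This avoids the Python 3.11 behavior change that made `str()`/`format()` of plain `str, Enum` mixins render the member name.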
@@ -103,15 +103,11 @@ class BackupExporter:
elif fmt == ExportFormat.CSV:
return self._export_csv(output_dir, timestamp, compress, incremental_since)
elif fmt == ExportFormat.PARQUET:
return self._export_parquet(output_dir, timestamp, compress, incremental_since)
else:
raise ValueError(f"Unsupported format: {fmt}")
def _get_trades(self, incremental_since: datetime | None = None) -> list[dict[str, Any]]:
"""Fetch trades from database.
Args:
@@ -164,9 +160,7 @@ class BackupExporter:
data = {
"export_timestamp": datetime.now(UTC).isoformat(),
"incremental_since": (incremental_since.isoformat() if incremental_since else None),
"record_count": len(trades),
"trades": trades,
}
@@ -284,8 +278,7 @@ class BackupExporter:
import pyarrow.parquet as pq
except ImportError:
raise ImportError(
"pyarrow is required for Parquet export. Install with: pip install pyarrow"
)
# Convert to pyarrow table

View File

@@ -14,14 +14,14 @@ import shutil
import sqlite3
from dataclasses import dataclass
from datetime import UTC, datetime, timedelta
from enum import StrEnum
from pathlib import Path
from typing import Any
logger = logging.getLogger(__name__)
class HealthStatus(StrEnum):
"""Health check status."""
HEALTHY = "healthy"
@@ -137,9 +137,13 @@ class HealthMonitor:
used_percent = (stat.used / stat.total) * 100
if stat.free < self.min_disk_space_bytes:
min_disk_gb = self.min_disk_space_bytes / 1024 / 1024 / 1024
return HealthCheckResult(
status=HealthStatus.UNHEALTHY,
message=(
    f"Low disk space: {free_gb:.2f} GB free "
    f"(minimum: {min_disk_gb:.2f} GB)"
),
details={
"free_gb": free_gb,
"total_gb": total_gb,

View File

@@ -12,14 +12,14 @@ import logging
import shutil
from dataclasses import dataclass
from datetime import UTC, datetime, timedelta
from enum import StrEnum
from pathlib import Path
from typing import Any
logger = logging.getLogger(__name__)
class BackupPolicy(StrEnum):
"""Backup retention policies."""
DAILY = "daily"
@@ -69,9 +69,7 @@ class BackupScheduler:
for d in [self.daily_dir, self.weekly_dir, self.monthly_dir]:
d.mkdir(parents=True, exist_ok=True)
def create_backup(self, policy: BackupPolicy, verify: bool = True) -> BackupMetadata:
"""Create a database backup.
Args:
@@ -229,9 +227,7 @@ class BackupScheduler:
return removed
def list_backups(self, policy: BackupPolicy | None = None) -> list[BackupMetadata]:
"""List available backups.
Args:

View File

@@ -13,8 +13,8 @@ import hashlib
import json
import logging
import time
from dataclasses import dataclass
from typing import TYPE_CHECKING, Any
if TYPE_CHECKING:
from src.brain.gemini_client import TradeDecision
@@ -26,7 +26,7 @@ logger = logging.getLogger(__name__)
class CacheEntry:
"""Cached decision with metadata."""
decision: TradeDecision
cached_at: float # Unix timestamp
hit_count: int = 0
market_data_hash: str = ""
@@ -239,9 +239,7 @@ class DecisionCache:
"""
current_time = time.time()
expired_keys = [
k for k, v in self._cache.items() if current_time - v.cached_at > self.ttl_seconds
]
count = len(expired_keys)
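Cache expiry above is a single comprehension comparing each entry's age against `ttl_seconds`. The same pattern can be checked in isolation on a plain dict of timestamps (names are illustrative, not the DecisionCache API):

```python
import time

ttl_seconds = 300.0
now = time.time()

# key -> cached_at timestamp; two fresh entries and one stale one.
cache = {
    "A": now - 10.0,   # fresh
    "B": now - 299.0,  # fresh, just under the TTL
    "C": now - 301.0,  # stale
}

# Collect keys first, then delete: mutating a dict while iterating it raises.
expired_keys = [k for k, cached_at in cache.items() if now - cached_at > ttl_seconds]
for k in expired_keys:
    del cache[k]

print(sorted(cache))  # ['A', 'B']
```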

View File

@@ -11,14 +11,14 @@ from __future__ import annotations
from dataclasses import dataclass
from datetime import UTC, datetime
from enum import StrEnum
from typing import Any
from src.context.layer import ContextLayer
from src.context.store import ContextStore
class DecisionType(StrEnum):
"""Type of trading decision being made."""
NORMAL = "normal" # Regular trade decision
@@ -183,9 +183,7 @@ class ContextSelector:
ContextLayer.L1_LEGACY,
]
scores = {layer: self.score_layer_relevance(layer, decision_type) for layer in all_layers}
# Filter by minimum score
selected_layers = [layer for layer, score in scores.items() if score >= min_score]

View File

@@ -25,12 +25,12 @@ from typing import Any
from google import genai
from src.brain.cache import DecisionCache
from src.brain.prompt_optimizer import PromptOptimizer
from src.config import Settings
from src.data.economic_calendar import EconomicCalendar
from src.data.market_data import MarketData
from src.data.news_api import NewsAPI, NewsSentiment
logger = logging.getLogger(__name__)
@@ -159,16 +159,12 @@ class GeminiClient:
return ""
# Check for upcoming high-impact events
upcoming = self._economic_calendar.get_upcoming_events(days_ahead=7, min_impact="HIGH")
if upcoming.high_impact_count == 0:
return ""
lines = [f"Upcoming High-Impact Events: {upcoming.high_impact_count} in next 7 days"]
if upcoming.next_major_event is not None:
event = upcoming.next_major_event
@@ -180,9 +176,7 @@ class GeminiClient:
# Check for earnings
earnings_date = self._economic_calendar.get_earnings_date(stock_code)
if earnings_date is not None:
lines.append(f" Earnings: {stock_code} on {earnings_date.strftime('%Y-%m-%d')}")
return "\n".join(lines)
@@ -235,9 +229,7 @@ class GeminiClient:
# Add foreigner net if non-zero
if market_data.get("foreigner_net", 0) != 0:
market_info_lines.append(f"Foreigner Net Buy/Sell: {market_data['foreigner_net']}")
market_info = "\n".join(market_info_lines)
@@ -249,8 +241,7 @@ class GeminiClient:
market_info += f"\n\n{external_context}"
json_format = (
'{"action": "BUY"|"SELL"|"HOLD", "confidence": <int 0-100>, "rationale": "<string>"}'
)
return (
f"You are a professional {market_name} trading analyst.\n"
@@ -289,15 +280,12 @@ class GeminiClient:
# Add foreigner net if non-zero
if market_data.get("foreigner_net", 0) != 0:
market_info_lines.append(f"Foreigner Net Buy/Sell: {market_data['foreigner_net']}")
market_info = "\n".join(market_info_lines)
json_format = (
'{"action": "BUY"|"SELL"|"HOLD", "confidence": <int 0-100>, "rationale": "<string>"}'
)
return (
f"You are a professional {market_name} trading analyst.\n"
@@ -339,25 +327,19 @@ class GeminiClient:
data = json.loads(cleaned)
except json.JSONDecodeError:
logger.warning("Malformed JSON from Gemini — defaulting to HOLD")
return TradeDecision(action="HOLD", confidence=0, rationale="Malformed JSON response")
# Validate required fields
if not all(k in data for k in ("action", "confidence", "rationale")):
logger.warning("Missing fields in Gemini response — defaulting to HOLD")
# Preserve raw text in rationale so prompt_override callers (e.g. pre_market_planner)
# can extract their own JSON format from decision.rationale (#245)
return TradeDecision(action="HOLD", confidence=0, rationale=raw)
action = str(data["action"]).upper()
if action not in VALID_ACTIONS:
logger.warning("Invalid action '%s' from Gemini — defaulting to HOLD", action)
return TradeDecision(action="HOLD", confidence=0, rationale=f"Invalid action: {action}")
confidence = int(data["confidence"])
rationale = str(data["rationale"])
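The parsing shown above degrades to a HOLD decision whenever the model's JSON is malformed, misses fields, or names an invalid action. A compact sketch of that fallback ladder (the `TradeDecision` tuple and `VALID_ACTIONS` here are simplified stand-ins for the module's types):

```python
import json
from typing import NamedTuple

VALID_ACTIONS = {"BUY", "SELL", "HOLD"}

class TradeDecision(NamedTuple):
    action: str
    confidence: int
    rationale: str

def parse_response(raw: str) -> TradeDecision:
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        return TradeDecision("HOLD", 0, "Malformed JSON response")
    if not all(k in data for k in ("action", "confidence", "rationale")):
        # Preserve the raw text so prompt_override callers can extract
        # their own format from decision.rationale.
        return TradeDecision("HOLD", 0, raw)
    action = str(data["action"]).upper()
    if action not in VALID_ACTIONS:
        return TradeDecision("HOLD", 0, f"Invalid action: {action}")
    return TradeDecision(action, int(data["confidence"]), str(data["rationale"]))

print(parse_response("not json").action)  # HOLD
print(parse_response('{"action":"buy","confidence":70,"rationale":"x"}').action)  # BUY
```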
@@ -445,9 +427,7 @@ class GeminiClient:
# not a parsed TradeDecision. Skip parse_response to avoid spurious
# "Missing fields" warnings and return the raw response directly. (#247)
if "prompt_override" in market_data:
logger.info("Gemini raw response received (prompt_override, tokens=%d)", token_count)
# Not a trade decision — don't inflate _total_decisions metrics
return TradeDecision(
action="HOLD", confidence=0, rationale=raw, token_count=token_count
@@ -546,9 +526,7 @@ class GeminiClient:
# Batch Decision Making (for daily trading mode)
# ------------------------------------------------------------------
async def decide_batch(self, stocks_data: list[dict[str, Any]]) -> dict[str, TradeDecision]:
"""Make decisions for multiple stocks in a single API call.
This is designed for daily trading mode to minimize API usage

View File

@@ -179,7 +179,8 @@ class PromptOptimizer:
# Minimal instructions
prompt = (
f"{market_name} trader. Analyze:\n{data_str}\n\n"
"Return JSON: "
'{"action":"BUY"|"SELL"|"HOLD","confidence":<0-100>,"rationale":"<text>"}\n'
"Rules: action=BUY/SELL/HOLD, confidence=0-100, rationale=concise. No markdown."
)
else:

View File

@@ -103,7 +103,8 @@ class KISBroker:
ssl_ctx.verify_mode = ssl.CERT_NONE
connector = aiohttp.TCPConnector(ssl=ssl_ctx)
self._session = aiohttp.ClientSession(
timeout=timeout,
connector=connector,
)
return self._session
@@ -224,16 +225,12 @@ class KISBroker:
async with session.get(url, headers=headers, params=params) as resp:
if resp.status != 200:
text = await resp.text()
raise ConnectionError(f"get_orderbook failed ({resp.status}): {text}")
return await resp.json()
except (TimeoutError, aiohttp.ClientError) as exc:
raise ConnectionError(f"Network error fetching orderbook: {exc}") from exc
async def get_current_price(self, stock_code: str) -> tuple[float, float, float]:
"""Fetch current price data for a domestic stock.
Uses the ``inquire-price`` API (FHKST01010100), which works in both
@@ -265,9 +262,7 @@ class KISBroker:
async with session.get(url, headers=headers, params=params) as resp:
if resp.status != 200:
text = await resp.text()
raise ConnectionError(f"get_current_price failed ({resp.status}): {text}")
data = await resp.json()
out = data.get("output", {})
return (
@@ -276,9 +271,7 @@ class KISBroker:
_f(out.get("frgn_ntby_qty")),
)
except (TimeoutError, aiohttp.ClientError) as exc:
raise ConnectionError(f"Network error fetching current price: {exc}") from exc
async def get_balance(self) -> dict[str, Any]:
"""Fetch current account balance and holdings."""
@@ -308,9 +301,7 @@ class KISBroker:
async with session.get(url, headers=headers, params=params) as resp:
if resp.status != 200:
text = await resp.text()
raise ConnectionError(f"get_balance failed ({resp.status}): {text}")
return await resp.json()
except (TimeoutError, aiohttp.ClientError) as exc:
raise ConnectionError(f"Network error fetching balance: {exc}") from exc
@@ -369,9 +360,7 @@ class KISBroker:
async with session.post(url, headers=headers, json=body) as resp:
if resp.status != 200:
text = await resp.text()
raise ConnectionError(f"send_order failed ({resp.status}): {text}")
data = await resp.json()
logger.info(
"Order submitted",
@@ -449,9 +438,7 @@ class KISBroker:
async with session.get(url, headers=headers, params=params) as resp:
if resp.status != 200:
text = await resp.text()
raise ConnectionError(f"fetch_market_rankings failed ({resp.status}): {text}")
data = await resp.json()
# Parse response - output is a list of ranked stocks
@@ -465,14 +452,16 @@ class KISBroker:
rankings = []
for item in data.get("output", [])[:limit]:
rankings.append(
    {
        "stock_code": item.get("stck_shrn_iscd") or item.get("mksc_shrn_iscd", ""),
        "name": item.get("hts_kor_isnm", ""),
        "price": _safe_float(item.get("stck_prpr", "0")),
        "volume": _safe_float(item.get("acml_vol", "0")),
        "change_rate": _safe_float(item.get("prdy_ctrt", "0")),
        "volume_increase_rate": _safe_float(item.get("vol_inrt", "0")),
    }
)
return rankings
except (TimeoutError, aiohttp.ClientError) as exc:
@@ -522,9 +511,7 @@ class KISBroker:
data = await resp.json()
return data.get("output", []) or []
except (TimeoutError, aiohttp.ClientError) as exc:
raise ConnectionError(f"Network error fetching domestic pending orders: {exc}") from exc
async def cancel_domestic_order(
self,
@@ -575,14 +562,10 @@ class KISBroker:
async with session.post(url, headers=headers, json=body) as resp:
if resp.status != 200:
text = await resp.text()
raise ConnectionError(f"cancel_domestic_order failed ({resp.status}): {text}")
return cast(dict[str, Any], await resp.json())
except (TimeoutError, aiohttp.ClientError) as exc:
raise ConnectionError(f"Network error cancelling domestic order: {exc}") from exc
async def get_daily_prices(
self,
@@ -609,6 +592,7 @@ class KISBroker:
# Calculate date range (today and N days ago)
from datetime import datetime, timedelta
end_date = datetime.now().strftime("%Y%m%d")
start_date = (datetime.now() - timedelta(days=days + 10)).strftime("%Y%m%d")
@@ -627,9 +611,7 @@ class KISBroker:
async with session.get(url, headers=headers, params=params) as resp:
if resp.status != 200:
text = await resp.text()
raise ConnectionError(f"get_daily_prices failed ({resp.status}): {text}")
data = await resp.json()
# Parse response
@@ -643,14 +625,16 @@ class KISBroker:
prices = []
for item in data.get("output2", []):
prices.append(
    {
        "date": item.get("stck_bsop_date", ""),
        "open": _safe_float(item.get("stck_oprc", "0")),
        "high": _safe_float(item.get("stck_hgpr", "0")),
        "low": _safe_float(item.get("stck_lwpr", "0")),
        "close": _safe_float(item.get("stck_clpr", "0")),
        "volume": _safe_float(item.get("acml_vol", "0")),
    }
)
# Sort oldest to newest (KIS returns newest first)
prices.reverse()

View File

@@ -56,9 +56,7 @@ class OverseasBroker:
"""
self._broker = kis_broker
async def get_overseas_price(self, exchange_code: str, stock_code: str) -> dict[str, Any]:
"""
Fetch overseas stock price.
@@ -89,14 +87,10 @@ class OverseasBroker:
async with session.get(url, headers=headers, params=params) as resp:
if resp.status != 200:
text = await resp.text()
raise ConnectionError(f"get_overseas_price failed ({resp.status}): {text}")
return await resp.json()
except (TimeoutError, aiohttp.ClientError) as exc:
raise ConnectionError(f"Network error fetching overseas price: {exc}") from exc
async def fetch_overseas_rankings(
self,
@@ -154,9 +148,7 @@ class OverseasBroker:
ranking_type,
)
return []
raise ConnectionError(f"fetch_overseas_rankings failed ({resp.status}): {text}")
data = await resp.json()
rows = self._extract_ranking_rows(data)
@@ -171,9 +163,7 @@ class OverseasBroker:
)
return []
except (TimeoutError, aiohttp.ClientError) as exc:
raise ConnectionError(f"Network error fetching overseas rankings: {exc}") from exc
async def get_overseas_balance(self, exchange_code: str) -> dict[str, Any]:
"""
@@ -193,9 +183,7 @@ class OverseasBroker:
# TR_ID: live TTTS3012R, paper VTTS3012R
# Source: Korea Investment & Securities Open API full documentation (20260221), 'Overseas Stock Balance Inquiry' sheet
balance_tr_id = "TTTS3012R" if self._broker._settings.MODE == "live" else "VTTS3012R"
headers = await self._broker._auth_headers(balance_tr_id)
params = {
"CANO": self._broker._account_no,
@@ -205,22 +193,16 @@ class OverseasBroker:
"CTX_AREA_FK200": "",
"CTX_AREA_NK200": "",
}
url = f"{self._broker._base_url}/uapi/overseas-stock/v1/trading/inquire-balance"
try:
async with session.get(url, headers=headers, params=params) as resp:
if resp.status != 200:
text = await resp.text()
raise ConnectionError(f"get_overseas_balance failed ({resp.status}): {text}")
return await resp.json()
except (TimeoutError, aiohttp.ClientError) as exc:
raise ConnectionError(f"Network error fetching overseas balance: {exc}") from exc
async def get_overseas_buying_power(
self,
@@ -247,9 +229,7 @@ class OverseasBroker:
# TR_ID: live TTTS3007R, paper VTTS3007R
# Source: Korea Investment & Securities Open API full documentation (20260221), 'Overseas Stock Buying Power Inquiry' sheet
ps_tr_id = "TTTS3007R" if self._broker._settings.MODE == "live" else "VTTS3007R"
headers = await self._broker._auth_headers(ps_tr_id)
params = {
"CANO": self._broker._account_no,
@@ -258,9 +238,7 @@ class OverseasBroker:
"OVRS_ORD_UNPR": f"{price:.2f}",
"ITEM_CD": stock_code,
}
url = f"{self._broker._base_url}/uapi/overseas-stock/v1/trading/inquire-psamount"
try:
async with session.get(url, headers=headers, params=params) as resp:
@@ -271,9 +249,7 @@ class OverseasBroker:
)
return await resp.json()
except (TimeoutError, aiohttp.ClientError) as exc:
raise ConnectionError(f"Network error fetching overseas buying power: {exc}") from exc
async def send_overseas_order(
self,
@@ -330,9 +306,7 @@ class OverseasBroker:
async with session.post(url, headers=headers, json=body) as resp:
if resp.status != 200:
text = await resp.text()
raise ConnectionError(f"send_overseas_order failed ({resp.status}): {text}")
data = await resp.json()
rt_cd = data.get("rt_cd", "")
msg1 = data.get("msg1", "")
@@ -357,13 +331,9 @@ class OverseasBroker:
)
return data
except (TimeoutError, aiohttp.ClientError) as exc:
raise ConnectionError(f"Network error sending overseas order: {exc}") from exc
async def get_overseas_pending_orders(self, exchange_code: str) -> list[dict[str, Any]]:
"""Fetch unfilled (pending) overseas orders for a given exchange.
Args:
@@ -379,9 +349,7 @@ class OverseasBroker:
ConnectionError: On network or API errors (live mode only).
"""
if self._broker._settings.MODE != "live":
logger.debug("Pending orders API (TTTS3018R) not supported in paper mode; returning []")
return []
await self._broker._rate_limiter.acquire()
@@ -398,9 +366,7 @@ class OverseasBroker:
"CTX_AREA_FK200": "",
"CTX_AREA_NK200": "",
}
url = f"{self._broker._base_url}/uapi/overseas-stock/v1/trading/inquire-nccs"
try:
async with session.get(url, headers=headers, params=params) as resp:
@@ -415,9 +381,7 @@ class OverseasBroker:
return output
return []
except (TimeoutError, aiohttp.ClientError) as exc:
raise ConnectionError(f"Network error fetching pending orders: {exc}") from exc
async def cancel_overseas_order(
self,
@@ -469,22 +433,16 @@ class OverseasBroker:
headers = await self._broker._auth_headers(tr_id)
headers["hashkey"] = hash_key
url = f"{self._broker._base_url}/uapi/overseas-stock/v1/trading/order-rvsecncl"
try:
async with session.post(url, headers=headers, json=body) as resp:
if resp.status != 200:
text = await resp.text()
raise ConnectionError(f"cancel_overseas_order failed ({resp.status}): {text}")
return await resp.json()
except (TimeoutError, aiohttp.ClientError) as exc:
raise ConnectionError(f"Network error cancelling overseas order: {exc}") from exc
def _get_currency_code(self, exchange_code: str) -> str:
"""

View File

@@ -60,7 +60,16 @@ class Settings(BaseSettings):
# This value is used as a fallback when the balance API returns 0 in paper mode.
PAPER_OVERSEAS_CASH: float = Field(default=50000.0, ge=0.0)
USD_BUFFER_MIN: float = Field(default=1000.0, ge=0.0)
US_MIN_PRICE: float = Field(default=5.0, ge=0.0)
STAGED_EXIT_BE_ARM_PCT: float = Field(default=1.2, gt=0.0, le=30.0)
STAGED_EXIT_ARM_PCT: float = Field(default=3.0, gt=0.0, le=100.0)
STOPLOSS_REENTRY_COOLDOWN_MINUTES: int = Field(default=120, ge=1, le=1440)
KR_ATR_STOP_MULTIPLIER_K: float = Field(default=2.0, ge=0.1, le=10.0)
KR_ATR_STOP_MIN_PCT: float = Field(default=-2.0, le=0.0)
KR_ATR_STOP_MAX_PCT: float = Field(default=-7.0, le=0.0)
OVERNIGHT_EXCEPTION_ENABLED: bool = True
SESSION_RISK_RELOAD_ENABLED: bool = True
SESSION_RISK_PROFILES_JSON: str = "{}"
# Trading frequency mode (daily = batch API calls, realtime = per-stock calls)
TRADE_MODE: str = Field(default="daily", pattern="^(daily|realtime)$")
@@ -69,6 +78,8 @@ class Settings(BaseSettings):
ORDER_BLACKOUT_ENABLED: bool = True
ORDER_BLACKOUT_WINDOWS_KST: str = "23:30-00:10"
ORDER_BLACKOUT_QUEUE_MAX: int = Field(default=500, ge=10, le=5000)
BLACKOUT_RECOVERY_PRICE_REVALIDATION_ENABLED: bool = True
BLACKOUT_RECOVERY_MAX_PRICE_DRIFT_PCT: float = Field(default=5.0, ge=0.0, le=100.0)
# Pre-Market Planner
PRE_MARKET_MINUTES: int = Field(default=30, ge=10, le=120)
@@ -113,12 +124,8 @@ class Settings(BaseSettings):
OVERSEAS_RANKING_ENABLED: bool = True
OVERSEAS_RANKING_FLUCT_TR_ID: str = "HHDFS76290000"
OVERSEAS_RANKING_VOLUME_TR_ID: str = "HHDFS76270000"
OVERSEAS_RANKING_FLUCT_PATH: str = (
"/uapi/overseas-stock/v1/ranking/updown-rate"
)
OVERSEAS_RANKING_VOLUME_PATH: str = (
"/uapi/overseas-stock/v1/ranking/volume-surge"
)
OVERSEAS_RANKING_FLUCT_PATH: str = "/uapi/overseas-stock/v1/ranking/updown-rate"
OVERSEAS_RANKING_VOLUME_PATH: str = "/uapi/overseas-stock/v1/ranking/volume-surge"
# Dashboard (optional)
DASHBOARD_ENABLED: bool = False

View File

@@ -222,9 +222,7 @@ class ContextAggregator:
total_pnl = 0.0
for month in months:
monthly_pnl = self.store.get_context(
ContextLayer.L4_MONTHLY, month, "monthly_pnl"
)
monthly_pnl = self.store.get_context(ContextLayer.L4_MONTHLY, month, "monthly_pnl")
if monthly_pnl is not None:
total_pnl += monthly_pnl
@@ -251,9 +249,7 @@ class ContextAggregator:
if quarterly_pnl is not None:
total_pnl += quarterly_pnl
self.store.set_context(
ContextLayer.L2_ANNUAL, year, "annual_pnl", round(total_pnl, 2)
)
self.store.set_context(ContextLayer.L2_ANNUAL, year, "annual_pnl", round(total_pnl, 2))
def aggregate_legacy_from_annual(self) -> None:
"""Aggregate L1 (legacy) context from all L2 (annual) data."""
@@ -280,9 +276,7 @@ class ContextAggregator:
self.store.set_context(
ContextLayer.L1_LEGACY, "LEGACY", "total_pnl", round(total_pnl, 2)
)
self.store.set_context(
ContextLayer.L1_LEGACY, "LEGACY", "years_traded", years_traded
)
self.store.set_context(ContextLayer.L1_LEGACY, "LEGACY", "years_traded", years_traded)
self.store.set_context(
ContextLayer.L1_LEGACY,
"LEGACY",

View File

@@ -3,10 +3,10 @@
from __future__ import annotations
from dataclasses import dataclass
from enum import Enum
from enum import StrEnum
class ContextLayer(str, Enum):
class ContextLayer(StrEnum):
"""7-tier context hierarchy from real-time to generational."""
L1_LEGACY = "L1_LEGACY" # Cumulative/generational wisdom

View File

@@ -9,7 +9,7 @@ This module summarizes old context data instead of including raw details:
from __future__ import annotations
from dataclasses import dataclass
from datetime import UTC, datetime, timedelta
from datetime import UTC, datetime
from typing import Any
from src.context.layer import ContextLayer

View File

@@ -23,6 +23,7 @@ class BlackoutWindow:
class QueuedOrderIntent:
market_code: str
exchange_code: str
session_id: str
stock_code: str
order_type: str
quantity: int
@@ -68,11 +69,16 @@ class BlackoutOrderManager:
self._queue: deque[QueuedOrderIntent] = deque()
self._was_blackout = False
self._max_queue_size = max_queue_size
self._overflow_drop_count = 0
@property
def pending_count(self) -> int:
return len(self._queue)
@property
def overflow_drop_count(self) -> int:
return self._overflow_drop_count
def in_blackout(self, now: datetime | None = None) -> bool:
if not self.enabled or not self._windows:
return False
@@ -81,8 +87,11 @@ class BlackoutOrderManager:
return any(window.contains(kst_now) for window in self._windows)
def enqueue(self, intent: QueuedOrderIntent) -> bool:
if len(self._queue) >= self._max_queue_size:
if self._max_queue_size <= 0:
return False
if len(self._queue) >= self._max_queue_size:
self._queue.popleft()
self._overflow_drop_count += 1
self._queue.append(intent)
return True
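The `enqueue` hunk above implements a bounded oldest-drop policy: on overflow the oldest queued intent is evicted, the new one is accepted, and a counter records the drop. A standalone sketch of the same policy (names are illustrative, not the project's API):

```python
from collections import deque
from typing import Any


class BoundedDropOldest:
    """Bounded queue that evicts the oldest item on overflow and counts drops."""

    def __init__(self, max_size: int) -> None:
        self._queue: deque[Any] = deque()
        self._max = max_size
        self.drops = 0

    def enqueue(self, item: Any) -> bool:
        if self._max <= 0:  # degenerate config: nothing can be queued
            return False
        if len(self._queue) >= self._max:
            self._queue.popleft()  # evict oldest
            self.drops += 1
        self._queue.append(item)
        return True


q = BoundedDropOldest(max_size=2)
for i in range(4):
    q.enqueue(i)

# Items 0 and 1 were evicted to make room; the counter records both drops.
assert list(q._queue) == [2, 3]
assert q.drops == 2
```

Accepting the new intent and dropping the oldest (rather than rejecting the new one) keeps the queue biased toward the freshest orders, which is usually what you want when recovering from a blackout window.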

View File

@@ -11,8 +11,9 @@ Order is fixed:
from __future__ import annotations
import inspect
from collections.abc import Awaitable, Callable
from dataclasses import dataclass, field
from typing import Any, Awaitable, Callable
from typing import Any
StepCallable = Callable[[], Any | Awaitable[Any]]

View File

@@ -15,7 +15,7 @@ from src.markets.schedule import MarketInfo
_LOW_LIQUIDITY_SESSIONS = {"NXT_AFTER", "US_PRE", "US_DAY", "US_AFTER"}
class OrderPolicyRejected(Exception):
class OrderPolicyRejectedError(Exception):
"""Raised when an order violates session policy."""
def __init__(self, message: str, *, session_id: str, market_code: str) -> None:
@@ -61,7 +61,9 @@ def classify_session_id(market: MarketInfo, now: datetime | None = None) -> str:
def get_session_info(market: MarketInfo, now: datetime | None = None) -> SessionInfo:
session_id = classify_session_id(market, now)
return SessionInfo(session_id=session_id, is_low_liquidity=session_id in _LOW_LIQUIDITY_SESSIONS)
return SessionInfo(
session_id=session_id, is_low_liquidity=session_id in _LOW_LIQUIDITY_SESSIONS
)
def validate_order_policy(
@@ -76,7 +78,7 @@ def validate_order_policy(
is_market_order = price <= 0
if info.is_low_liquidity and is_market_order:
raise OrderPolicyRejected(
raise OrderPolicyRejectedError(
f"Market order is forbidden in low-liquidity session ({info.session_id})",
session_id=info.session_id,
market_code=market.code,
@@ -84,10 +86,14 @@ def validate_order_policy(
# Guard against accidental unsupported actions.
if order_type not in {"BUY", "SELL"}:
raise OrderPolicyRejected(
raise OrderPolicyRejectedError(
f"Unsupported order_type={order_type}",
session_id=info.session_id,
market_code=market.code,
)
return info
# Backward compatibility alias
OrderPolicyRejected = OrderPolicyRejectedError
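The rename to the `…Error` suffix is paired with a module-level alias, so existing `except` clauses and `isinstance` checks keep working because the alias is the same class object. A sketch with hypothetical names (not the project's):

```python
class WidgetRejectedError(Exception):
    """New PEP 8-style name (hypothetical stand-in for the renamed classes)."""


# Backward-compatibility alias: the old name binds to the same class
# object, so old `except WidgetRejected:` clauses still match.
WidgetRejected = WidgetRejectedError

try:
    raise WidgetRejectedError("rejected")
except WidgetRejected as exc:  # old spelling still catches it
    assert type(exc) is WidgetRejectedError

assert WidgetRejected is WidgetRejectedError
```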

View File

@@ -28,9 +28,7 @@ class PriorityTask:
# Task data not used in comparison
task_id: str = field(compare=False)
task_data: dict[str, Any] = field(compare=False, default_factory=dict)
callback: Callable[[], Coroutine[Any, Any, Any]] | None = field(
compare=False, default=None
)
callback: Callable[[], Coroutine[Any, Any, Any]] | None = field(compare=False, default=None)
@dataclass

View File

@@ -25,7 +25,7 @@ class CircuitBreakerTripped(SystemExit):
)
class FatFingerRejected(Exception):
class FatFingerRejectedError(Exception):
"""Raised when an order exceeds the maximum allowed proportion of cash."""
def __init__(self, order_amount: float, total_cash: float, max_pct: float) -> None:
@@ -61,7 +61,7 @@ class RiskManager:
def check_fat_finger(self, order_amount: float, total_cash: float) -> None:
"""Reject orders that exceed the maximum proportion of available cash."""
if total_cash <= 0:
raise FatFingerRejected(order_amount, total_cash, self._ff_max_pct)
raise FatFingerRejectedError(order_amount, total_cash, self._ff_max_pct)
ratio_pct = (order_amount / total_cash) * 100
if ratio_pct > self._ff_max_pct:
@@ -69,7 +69,7 @@ class RiskManager:
"Fat finger check failed",
extra={"order_amount": order_amount},
)
raise FatFingerRejected(order_amount, total_cash, self._ff_max_pct)
raise FatFingerRejectedError(order_amount, total_cash, self._ff_max_pct)
def validate_order(
self,
@@ -81,3 +81,7 @@ class RiskManager:
self.check_circuit_breaker(current_pnl_pct)
self.check_fat_finger(order_amount, total_cash)
logger.info("Order passed risk validation")
# Backward compatibility alias
FatFingerRejected = FatFingerRejectedError

View File

@@ -5,7 +5,7 @@ from __future__ import annotations
import json
import os
import sqlite3
from datetime import UTC, datetime, timezone
from datetime import UTC, datetime
from pathlib import Path
from typing import Any
@@ -188,10 +188,7 @@ def create_dashboard_app(db_path: str, mode: str = "paper") -> FastAPI:
return {
"market": "all",
"combined": combined,
"by_market": [
_row_to_performance(row)
for row in by_market_rows
],
"by_market": [_row_to_performance(row) for row in by_market_rows],
}
row = conn.execute(
@@ -401,7 +398,7 @@ def create_dashboard_app(db_path: str, mode: str = "paper") -> FastAPI:
"""
).fetchall()
now = datetime.now(timezone.utc)
now = datetime.now(UTC)
positions = []
for row in rows:
entry_time_str = row["entry_time"]

View File

@@ -9,7 +9,6 @@ from __future__ import annotations
import logging
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Any
logger = logging.getLogger(__name__)

View File

@@ -123,8 +123,7 @@ def init_db(db_path: str) -> sqlite3.Connection:
"""
)
decision_columns = {
row[1]
for row in conn.execute("PRAGMA table_info(decision_logs)").fetchall()
row[1] for row in conn.execute("PRAGMA table_info(decision_logs)").fetchall()
}
if "session_id" not in decision_columns:
conn.execute("ALTER TABLE decision_logs ADD COLUMN session_id TEXT DEFAULT 'UNKNOWN'")
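The migration hunk above uses the PRAGMA-then-ALTER pattern: inspect the table's existing columns, then add the column only if it is absent, which makes the migration idempotent. A self-contained sketch against an in-memory database:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE decision_logs (id INTEGER PRIMARY KEY)")

# PRAGMA table_info returns one row per column; index 1 is the column name.
cols = {row[1] for row in conn.execute("PRAGMA table_info(decision_logs)")}
if "session_id" not in cols:
    conn.execute(
        "ALTER TABLE decision_logs ADD COLUMN session_id TEXT DEFAULT 'UNKNOWN'"
    )

# Re-running the check is a no-op: the column now exists.
cols = {row[1] for row in conn.execute("PRAGMA table_info(decision_logs)")}
assert "session_id" in cols
```

SQLite has no `ADD COLUMN IF NOT EXISTS`, so the explicit PRAGMA check is the standard way to make additive migrations safe to run on every startup.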
@@ -185,9 +184,7 @@ def init_db(db_path: str) -> sqlite3.Connection:
conn.execute(
"CREATE INDEX IF NOT EXISTS idx_decision_logs_timestamp ON decision_logs(timestamp)"
)
conn.execute(
"CREATE INDEX IF NOT EXISTS idx_decision_logs_reviewed ON decision_logs(reviewed)"
)
conn.execute("CREATE INDEX IF NOT EXISTS idx_decision_logs_reviewed ON decision_logs(reviewed)")
conn.execute(
"CREATE INDEX IF NOT EXISTS idx_decision_logs_confidence ON decision_logs(confidence)"
)
@@ -321,7 +318,7 @@ def get_latest_buy_trade(
if exchange_code:
cursor = conn.execute(
"""
SELECT decision_id, price, quantity
SELECT decision_id, price, quantity, selection_context
FROM trades
WHERE stock_code = ?
AND market = ?
@@ -342,7 +339,7 @@ def get_latest_buy_trade(
else:
cursor = conn.execute(
"""
SELECT decision_id, price, quantity
SELECT decision_id, price, quantity, selection_context
FROM trades
WHERE stock_code = ?
AND market = ?
@@ -381,9 +378,7 @@ def get_open_position(
return {"decision_id": row[1], "price": row[2], "quantity": row[3], "timestamp": row[4]}
def get_recent_symbols(
conn: sqlite3.Connection, market: str, limit: int = 30
) -> list[str]:
def get_recent_symbols(conn: sqlite3.Connection, market: str, limit: int = 30) -> list[str]:
"""Return recent unique symbols for a market, newest first."""
cursor = conn.execute(
"""

View File

@@ -90,9 +90,7 @@ class ABTester:
sharpe_ratio = None
if len(pnls) > 1:
mean_return = avg_pnl
std_return = (
sum((p - mean_return) ** 2 for p in pnls) / (len(pnls) - 1)
) ** 0.5
std_return = (sum((p - mean_return) ** 2 for p in pnls) / (len(pnls) - 1)) ** 0.5
if std_return > 0:
sharpe_ratio = mean_return / std_return
@@ -198,8 +196,7 @@ class ABTester:
if meets_criteria:
logger.info(
"Strategy '%s' meets deployment criteria: "
"win_rate=%.2f%%, trades=%d, avg_pnl=%.2f",
"Strategy '%s' meets deployment criteria: win_rate=%.2f%%, trades=%d, avg_pnl=%.2f",
result.winner,
winning_perf.win_rate,
winning_perf.total_trades,

View File

@@ -60,9 +60,7 @@ class DailyReviewer:
if isinstance(scenario_match, dict) and scenario_match:
matched += 1
scenario_match_rate = (
round((matched / total_decisions) * 100, 2)
if total_decisions
else 0.0
round((matched / total_decisions) * 100, 2) if total_decisions else 0.0
)
trade_stats = self._conn.execute(

View File

@@ -80,7 +80,8 @@ class EvolutionOptimizer:
# Convert to dict format for analysis
failures = []
for decision in losing_decisions:
failures.append({
failures.append(
{
"decision_id": decision.decision_id,
"timestamp": decision.timestamp,
"stock_code": decision.stock_code,
@@ -93,13 +94,12 @@ class EvolutionOptimizer:
"outcome_accuracy": decision.outcome_accuracy,
"context_snapshot": decision.context_snapshot,
"input_data": decision.input_data,
})
}
)
return failures
def identify_failure_patterns(
self, failures: list[dict[str, Any]]
) -> dict[str, Any]:
def identify_failure_patterns(self, failures: list[dict[str, Any]]) -> dict[str, Any]:
"""Identify patterns in losing decisions.
Analyzes:
@@ -143,12 +143,8 @@ class EvolutionOptimizer:
total_confidence += failure.get("confidence", 0)
total_loss += failure.get("outcome_pnl", 0.0)
patterns["avg_confidence"] = (
round(total_confidence / len(failures), 2) if failures else 0.0
)
patterns["avg_loss"] = (
round(total_loss / len(failures), 2) if failures else 0.0
)
patterns["avg_confidence"] = round(total_confidence / len(failures), 2) if failures else 0.0
patterns["avg_loss"] = round(total_loss / len(failures), 2) if failures else 0.0
# Convert Counters to regular dicts for JSON serialization
patterns["markets"] = dict(patterns["markets"])
@@ -197,7 +193,8 @@ class EvolutionOptimizer:
prompt = (
"You are a quantitative trading strategy developer.\n"
"Analyze these failed trades and their patterns, then generate an improved strategy.\n\n"
"Analyze these failed trades and their patterns, "
"then generate an improved strategy.\n\n"
f"Failure Patterns:\n{json.dumps(patterns, indent=2)}\n\n"
f"Sample Failed Trades (first 5):\n"
f"{json.dumps(failures[:5], indent=2, default=str)}\n\n"
@@ -214,7 +211,8 @@ class EvolutionOptimizer:
try:
response = await self._client.aio.models.generate_content(
model=self._model_name, contents=prompt,
model=self._model_name,
contents=prompt,
)
body = response.text.strip()
except Exception as exc:
@@ -280,9 +278,7 @@ class EvolutionOptimizer:
logger.info("Strategy validation PASSED")
return True
else:
logger.warning(
"Strategy validation FAILED:\n%s", result.stdout + result.stderr
)
logger.warning("Strategy validation FAILED:\n%s", result.stdout + result.stderr)
# Clean up failing strategy
strategy_path.unlink(missing_ok=True)
return False

View File

@@ -187,9 +187,7 @@ class PerformanceTracker:
return metrics
def calculate_improvement_trend(
self, metrics_history: list[StrategyMetrics]
) -> dict[str, Any]:
def calculate_improvement_trend(self, metrics_history: list[StrategyMetrics]) -> dict[str, Any]:
"""Calculate improvement trend from historical metrics.
Args:
@@ -229,9 +227,7 @@ class PerformanceTracker:
"period_count": len(metrics_history),
}
def generate_dashboard(
self, strategy_name: str | None = None
) -> PerformanceDashboard:
def generate_dashboard(self, strategy_name: str | None = None) -> PerformanceDashboard:
"""Generate a comprehensive performance dashboard.
Args:
@@ -260,9 +256,7 @@ class PerformanceTracker:
improvement_trend=improvement_trend,
)
def export_dashboard_json(
self, dashboard: PerformanceDashboard
) -> str:
def export_dashboard_json(self, dashboard: PerformanceDashboard) -> str:
"""Export dashboard as JSON string.
Args:

View File

@@ -140,9 +140,7 @@ class DecisionLogger:
)
self.conn.commit()
def update_outcome(
self, decision_id: str, pnl: float, accuracy: int
) -> None:
def update_outcome(self, decision_id: str, pnl: float, accuracy: int) -> None:
"""Update the outcome of a decision after trade execution.
Args:

File diff suppressed because it is too large

View File

@@ -211,9 +211,7 @@ def get_open_markets(
return is_market_open(market, now)
open_markets = [
MARKETS[code]
for code in enabled_markets
if code in MARKETS and is_available(MARKETS[code])
MARKETS[code] for code in enabled_markets if code in MARKETS and is_available(MARKETS[code])
]
return sorted(open_markets, key=lambda m: m.code)
@@ -282,9 +280,7 @@ def get_next_market_open(
# Calculate next open time for this market
for days_ahead in range(7): # Check next 7 days
check_date = market_now.date() + timedelta(days=days_ahead)
check_datetime = datetime.combine(
check_date, market.open_time, tzinfo=market.timezone
)
check_datetime = datetime.combine(check_date, market.open_time, tzinfo=market.timezone)
# Skip weekends
if check_datetime.weekday() >= 5:

View File

@@ -4,7 +4,7 @@ import asyncio
import logging
import time
from collections.abc import Awaitable, Callable
from dataclasses import dataclass, fields
from dataclasses import dataclass
from enum import Enum
from typing import ClassVar
@@ -136,14 +136,14 @@ class TelegramClient:
self._enabled = enabled
self._rate_limiter = LeakyBucket(rate=rate_limit)
self._session: aiohttp.ClientSession | None = None
self._filter = notification_filter if notification_filter is not None else NotificationFilter()
self._filter = (
notification_filter if notification_filter is not None else NotificationFilter()
)
if not enabled:
logger.info("Telegram notifications disabled via configuration")
elif bot_token is None or chat_id is None:
logger.warning(
"Telegram notifications disabled (missing bot_token or chat_id)"
)
logger.warning("Telegram notifications disabled (missing bot_token or chat_id)")
self._enabled = False
else:
logger.info("Telegram notifications enabled for chat_id=%s", chat_id)
@@ -209,14 +209,12 @@ class TelegramClient:
async with session.post(url, json=payload) as resp:
if resp.status != 200:
error_text = await resp.text()
logger.error(
"Telegram API error (status=%d): %s", resp.status, error_text
)
logger.error("Telegram API error (status=%d): %s", resp.status, error_text)
return False
logger.debug("Telegram message sent: %s", text[:50])
return True
except asyncio.TimeoutError:
except TimeoutError:
logger.error("Telegram message timeout")
return False
except aiohttp.ClientError as exc:
@@ -305,9 +303,7 @@ class TelegramClient:
NotificationMessage(priority=NotificationPriority.LOW, message=message)
)
async def notify_circuit_breaker(
self, pnl_pct: float, threshold: float
) -> None:
async def notify_circuit_breaker(self, pnl_pct: float, threshold: float) -> None:
"""
Notify circuit breaker activation.
@@ -354,9 +350,7 @@ class TelegramClient:
NotificationMessage(priority=NotificationPriority.HIGH, message=message)
)
async def notify_system_start(
self, mode: str, enabled_markets: list[str]
) -> None:
async def notify_system_start(self, mode: str, enabled_markets: list[str]) -> None:
"""
Notify system startup.
@@ -369,9 +363,7 @@ class TelegramClient:
mode_emoji = "📝" if mode == "paper" else "💰"
markets_str = ", ".join(enabled_markets)
message = (
f"<b>{mode_emoji} System Started</b>\n"
f"Mode: {mode.upper()}\n"
f"Markets: {markets_str}"
f"<b>{mode_emoji} System Started</b>\nMode: {mode.upper()}\nMarkets: {markets_str}"
)
await self._send_notification(
NotificationMessage(priority=NotificationPriority.MEDIUM, message=message)
@@ -445,11 +437,7 @@ class TelegramClient:
"""
if not self._filter.playbook:
return
message = (
f"<b>Playbook Failed</b>\n"
f"Market: {market}\n"
f"Reason: {reason[:200]}"
)
message = f"<b>Playbook Failed</b>\nMarket: {market}\nReason: {reason[:200]}"
await self._send_notification(
NotificationMessage(priority=NotificationPriority.HIGH, message=message)
)
@@ -469,9 +457,7 @@ class TelegramClient:
if "circuit breaker" in reason.lower()
else NotificationPriority.MEDIUM
)
await self._send_notification(
NotificationMessage(priority=priority, message=message)
)
await self._send_notification(NotificationMessage(priority=priority, message=message))
async def notify_unfilled_order(
self,
@@ -496,11 +482,7 @@ class TelegramClient:
return
# SELL resubmit is high priority — position liquidation at risk.
# BUY cancel is medium priority — only cash is freed.
priority = (
NotificationPriority.HIGH
if action == "SELL"
else NotificationPriority.MEDIUM
)
priority = NotificationPriority.HIGH if action == "SELL" else NotificationPriority.MEDIUM
outcome_emoji = "🔄" if outcome == "resubmitted" else ""
outcome_label = "재주문" if outcome == "resubmitted" else "취소됨"
action_emoji = "🔴" if action == "SELL" else "🟢"
@@ -515,9 +497,7 @@ class TelegramClient:
message = "\n".join(lines)
await self._send_notification(NotificationMessage(priority=priority, message=message))
async def notify_error(
self, error_type: str, error_msg: str, context: str
) -> None:
async def notify_error(self, error_type: str, error_msg: str, context: str) -> None:
"""
Notify system error.
@@ -541,9 +521,7 @@ class TelegramClient:
class TelegramCommandHandler:
"""Handles incoming Telegram commands via long polling."""
def __init__(
self, client: TelegramClient, polling_interval: float = 1.0
) -> None:
def __init__(self, client: TelegramClient, polling_interval: float = 1.0) -> None:
"""
Initialize command handler.
@@ -559,9 +537,7 @@ class TelegramCommandHandler:
self._polling_task: asyncio.Task[None] | None = None
self._running = False
def register_command(
self, command: str, handler: Callable[[], Awaitable[None]]
) -> None:
def register_command(self, command: str, handler: Callable[[], Awaitable[None]]) -> None:
"""
Register a command handler (no arguments).
@@ -672,7 +648,7 @@ class TelegramCommandHandler:
return updates
except asyncio.TimeoutError:
except TimeoutError:
logger.debug("getUpdates timeout (normal)")
return []
except aiohttp.ClientError as exc:
@@ -697,9 +673,7 @@ class TelegramCommandHandler:
# Verify chat_id matches configured chat
chat_id = str(message.get("chat", {}).get("id", ""))
if chat_id != self._client._chat_id:
logger.warning(
"Ignoring command from unauthorized chat_id: %s", chat_id
)
logger.warning("Ignoring command from unauthorized chat_id: %s", chat_id)
return
# Extract command text

View File

@@ -8,12 +8,12 @@ Defines the data contracts for the proactive strategy system:
from __future__ import annotations
from datetime import UTC, date, datetime
from enum import Enum
from enum import StrEnum
from pydantic import BaseModel, Field, field_validator
class ScenarioAction(str, Enum):
class ScenarioAction(StrEnum):
"""Actions that can be taken by scenarios."""
BUY = "BUY"
@@ -22,7 +22,7 @@ class ScenarioAction(str, Enum):
REDUCE_ALL = "REDUCE_ALL"
class MarketOutlook(str, Enum):
class MarketOutlook(StrEnum):
"""AI's assessment of market direction."""
BULLISH = "bullish"
@@ -32,7 +32,7 @@ class MarketOutlook(str, Enum):
BEARISH = "bearish"
class PlaybookStatus(str, Enum):
class PlaybookStatus(StrEnum):
"""Lifecycle status of a playbook."""
PENDING = "pending"

View File

@@ -6,7 +6,6 @@ Designed for the pre-market strategy system (one playbook per market per day).
from __future__ import annotations
import json
import logging
import sqlite3
from datetime import date
@@ -53,8 +52,10 @@ class PlaybookStore:
row_id = cursor.lastrowid or 0
logger.info(
"Saved playbook for %s/%s (%d stocks, %d scenarios)",
playbook.date, playbook.market,
playbook.stock_count, playbook.scenario_count,
playbook.date,
playbook.market,
playbook.stock_count,
playbook.scenario_count,
)
return row_id

View File

@@ -6,10 +6,10 @@ State progression is monotonic (promotion-only) except terminal EXITED.
from __future__ import annotations
from dataclasses import dataclass
from enum import Enum
from enum import StrEnum
class PositionState(str, Enum):
class PositionState(StrEnum):
HOLDING = "HOLDING"
BE_LOCK = "BE_LOCK"
ARMED = "ARMED"
@@ -40,12 +40,7 @@ def evaluate_exit_first(inp: StateTransitionInput) -> bool:
EXITED must be evaluated before any promotion.
"""
return (
inp.hard_stop_hit
or inp.trailing_stop_hit
or inp.model_exit_signal
or inp.be_lock_threat
)
return inp.hard_stop_hit or inp.trailing_stop_hit or inp.model_exit_signal or inp.be_lock_threat
def promote_state(current: PositionState, inp: StateTransitionInput) -> PositionState:

View File

@@ -124,12 +124,14 @@ class PreMarketPlanner:
# 4. Parse response
playbook = self._parse_response(
decision.rationale, today, market, candidates, cross_market,
decision.rationale,
today,
market,
candidates,
cross_market,
current_holdings=current_holdings,
)
playbook_with_tokens = playbook.model_copy(
update={"token_count": decision.token_count}
)
playbook_with_tokens = playbook.model_copy(update={"token_count": decision.token_count})
logger.info(
"Generated playbook for %s: %d stocks, %d scenarios, %d tokens",
market,
@@ -146,7 +148,9 @@ class PreMarketPlanner:
return self._empty_playbook(today, market)
def build_cross_market_context(
self, target_market: str, today: date | None = None,
self,
target_market: str,
today: date | None = None,
) -> CrossMarketContext | None:
"""Build cross-market context from the other market's L6 data.
@@ -192,7 +196,9 @@ class PreMarketPlanner:
)
def build_self_market_scorecard(
self, market: str, today: date | None = None,
self,
market: str,
today: date | None = None,
) -> dict[str, Any] | None:
"""Build previous-day scorecard for the same market."""
if today is None:
@@ -320,18 +326,18 @@ class PreMarketPlanner:
f"{context_text}\n"
f"## Instructions\n"
f"Return a JSON object with this exact structure:\n"
f'{{\n'
f"{{\n"
f' "market_outlook": "bullish|neutral_to_bullish|neutral'
f'|neutral_to_bearish|bearish",\n'
f' "global_rules": [\n'
f' {{"condition": "portfolio_pnl_pct < -2.0",'
f' "action": "REDUCE_ALL", "rationale": "..."}}\n'
f' ],\n'
f" ],\n"
f' "stocks": [\n'
f' {{\n'
f" {{\n"
f' "stock_code": "...",\n'
f' "scenarios": [\n'
f' {{\n'
f" {{\n"
f' "condition": {{"rsi_below": 30, "volume_ratio_above": 2.0,'
f' "unrealized_pnl_pct_above": 3.0, "holding_days_above": 5}},\n'
f' "action": "BUY|SELL|HOLD",\n'
@@ -340,11 +346,11 @@ class PreMarketPlanner:
f' "stop_loss_pct": -2.0,\n'
f' "take_profit_pct": 3.0,\n'
f' "rationale": "..."\n'
f' }}\n'
f' ]\n'
f' }}\n'
f' ]\n'
f'}}\n\n'
f" }}\n"
f" ]\n"
f" }}\n"
f" ]\n"
f"}}\n\n"
f"Rules:\n"
f"- Max {max_scenarios} scenarios per stock\n"
f"- Candidates list is the primary source for BUY candidates\n"
@@ -575,8 +581,7 @@ class PreMarketPlanner:
stop_loss_pct=-3.0,
take_profit_pct=5.0,
rationale=(
f"Rule-based BUY: oversold signal, "
f"RSI={c.rsi:.0f} (fallback planner)"
f"Rule-based BUY: oversold signal, RSI={c.rsi:.0f} (fallback planner)"
),
)
)

View File

@@ -107,7 +107,9 @@ class ScenarioEngine:
# 2. Find stock playbook
stock_pb = playbook.get_stock_playbook(stock_code)
if stock_pb is None:
logger.debug("No playbook for %s — defaulting to %s", stock_code, playbook.default_action)
logger.debug(
"No playbook for %s — defaulting to %s", stock_code, playbook.default_action
)
return ScenarioMatch(
stock_code=stock_code,
matched_scenario=None,
@@ -135,7 +137,9 @@ class ScenarioEngine:
)
# 4. No match — default action
logger.debug("No scenario matched for %s — defaulting to %s", stock_code, playbook.default_action)
logger.debug(
"No scenario matched for %s — defaulting to %s", stock_code, playbook.default_action
)
return ScenarioMatch(
stock_code=stock_code,
matched_scenario=None,
@@ -198,17 +202,27 @@ class ScenarioEngine:
checks.append(price is not None and price < condition.price_below)
price_change_pct = self._safe_float(market_data.get("price_change_pct"))
if condition.price_change_pct_above is not None or condition.price_change_pct_below is not None:
if (
condition.price_change_pct_above is not None
or condition.price_change_pct_below is not None
):
if "price_change_pct" not in market_data:
self._warn_missing_key("price_change_pct")
if condition.price_change_pct_above is not None:
checks.append(price_change_pct is not None and price_change_pct > condition.price_change_pct_above)
checks.append(
price_change_pct is not None and price_change_pct > condition.price_change_pct_above
)
if condition.price_change_pct_below is not None:
checks.append(price_change_pct is not None and price_change_pct < condition.price_change_pct_below)
checks.append(
price_change_pct is not None and price_change_pct < condition.price_change_pct_below
)
# Position-aware conditions
unrealized_pnl_pct = self._safe_float(market_data.get("unrealized_pnl_pct"))
if condition.unrealized_pnl_pct_above is not None or condition.unrealized_pnl_pct_below is not None:
if (
condition.unrealized_pnl_pct_above is not None
or condition.unrealized_pnl_pct_below is not None
):
if "unrealized_pnl_pct" not in market_data:
self._warn_missing_key("unrealized_pnl_pct")
if condition.unrealized_pnl_pct_above is not None:
@@ -227,15 +241,9 @@ class ScenarioEngine:
if "holding_days" not in market_data:
self._warn_missing_key("holding_days")
if condition.holding_days_above is not None:
checks.append(
holding_days is not None
and holding_days > condition.holding_days_above
)
checks.append(holding_days is not None and holding_days > condition.holding_days_above)
if condition.holding_days_below is not None:
checks.append(
holding_days is not None
and holding_days < condition.holding_days_below
)
checks.append(holding_days is not None and holding_days < condition.holding_days_below)
return len(checks) > 0 and all(checks)
@@ -295,9 +303,15 @@ class ScenarioEngine:
details["volume_ratio"] = self._safe_float(market_data.get("volume_ratio"))
if condition.price_above is not None or condition.price_below is not None:
details["current_price"] = self._safe_float(market_data.get("current_price"))
if condition.price_change_pct_above is not None or condition.price_change_pct_below is not None:
if (
condition.price_change_pct_above is not None
or condition.price_change_pct_below is not None
):
details["price_change_pct"] = self._safe_float(market_data.get("price_change_pct"))
if condition.unrealized_pnl_pct_above is not None or condition.unrealized_pnl_pct_below is not None:
if (
condition.unrealized_pnl_pct_above is not None
or condition.unrealized_pnl_pct_below is not None
):
details["unrealized_pnl_pct"] = self._safe_float(market_data.get("unrealized_pnl_pct"))
if condition.holding_days_above is not None or condition.holding_days_below is not None:
details["holding_days"] = self._safe_float(market_data.get("holding_days"))

View File

@@ -10,6 +10,7 @@ def test_valid_backtest_cost_model_passes() -> None:
commission_bps=5.0,
slippage_bps_by_session={"KRX_REG": 10.0, "US_PRE": 50.0},
failure_rate_by_session={"KRX_REG": 0.01, "US_PRE": 0.08},
partial_fill_rate_by_session={"KRX_REG": 0.1, "US_PRE": 0.2},
unfavorable_fill_required=True,
)
validate_backtest_cost_model(model=model, required_sessions=["KRX_REG", "US_PRE"])
@@ -20,6 +21,7 @@ def test_missing_required_slippage_session_raises() -> None:
commission_bps=5.0,
slippage_bps_by_session={"KRX_REG": 10.0},
failure_rate_by_session={"KRX_REG": 0.01, "US_PRE": 0.08},
partial_fill_rate_by_session={"KRX_REG": 0.1, "US_PRE": 0.2},
unfavorable_fill_required=True,
)
with pytest.raises(ValueError, match="missing slippage_bps_by_session.*US_PRE"):
@@ -31,6 +33,7 @@ def test_missing_required_failure_rate_session_raises() -> None:
commission_bps=5.0,
slippage_bps_by_session={"KRX_REG": 10.0, "US_PRE": 50.0},
failure_rate_by_session={"KRX_REG": 0.01},
partial_fill_rate_by_session={"KRX_REG": 0.1, "US_PRE": 0.2},
unfavorable_fill_required=True,
)
with pytest.raises(ValueError, match="missing failure_rate_by_session.*US_PRE"):
@@ -42,6 +45,7 @@ def test_invalid_failure_rate_range_raises() -> None:
commission_bps=5.0,
slippage_bps_by_session={"KRX_REG": 10.0},
failure_rate_by_session={"KRX_REG": 1.2},
partial_fill_rate_by_session={"KRX_REG": 0.2},
unfavorable_fill_required=True,
)
with pytest.raises(ValueError, match="failure rate must be within"):
@@ -53,6 +57,7 @@ def test_unfavorable_fill_requirement_cannot_be_disabled() -> None:
commission_bps=5.0,
slippage_bps_by_session={"KRX_REG": 10.0},
failure_rate_by_session={"KRX_REG": 0.02},
partial_fill_rate_by_session={"KRX_REG": 0.2},
unfavorable_fill_required=False,
)
with pytest.raises(ValueError, match="unfavorable_fill_required must be True"):
@@ -65,6 +70,7 @@ def test_non_finite_commission_rejected(bad_commission: float) -> None:
commission_bps=bad_commission,
slippage_bps_by_session={"KRX_REG": 10.0},
failure_rate_by_session={"KRX_REG": 0.02},
partial_fill_rate_by_session={"KRX_REG": 0.2},
unfavorable_fill_required=True,
)
with pytest.raises(ValueError, match="commission_bps"):
@@ -77,7 +83,33 @@ def test_non_finite_slippage_rejected(bad_slippage: float) -> None:
commission_bps=5.0,
slippage_bps_by_session={"KRX_REG": bad_slippage},
failure_rate_by_session={"KRX_REG": 0.02},
partial_fill_rate_by_session={"KRX_REG": 0.2},
unfavorable_fill_required=True,
)
with pytest.raises(ValueError, match="slippage bps"):
validate_backtest_cost_model(model=model, required_sessions=["KRX_REG"])
def test_missing_required_partial_fill_session_raises() -> None:
model = BacktestCostModel(
commission_bps=5.0,
slippage_bps_by_session={"KRX_REG": 10.0, "US_PRE": 50.0},
failure_rate_by_session={"KRX_REG": 0.01, "US_PRE": 0.08},
partial_fill_rate_by_session={"KRX_REG": 0.1},
unfavorable_fill_required=True,
)
with pytest.raises(ValueError, match="missing partial_fill_rate_by_session.*US_PRE"):
validate_backtest_cost_model(model=model, required_sessions=["KRX_REG", "US_PRE"])
@pytest.mark.parametrize("bad_partial_fill", [float("nan"), float("inf"), float("-inf"), -0.1, 1.1])
def test_invalid_partial_fill_rate_rejected(bad_partial_fill: float) -> None:
model = BacktestCostModel(
commission_bps=5.0,
slippage_bps_by_session={"KRX_REG": 10.0},
failure_rate_by_session={"KRX_REG": 0.02},
partial_fill_rate_by_session={"KRX_REG": bad_partial_fill},
unfavorable_fill_required=True,
)
with pytest.raises(ValueError, match="partial fill rate must be within"):
validate_backtest_cost_model(model=model, required_sessions=["KRX_REG"])
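The regressions above pin down the guard's contract: every required session must appear in all three per-session maps, rates must lie in [0, 1], numeric fields must be finite, and `unfavorable_fill_required` can never be disabled. A minimal sketch of a validator satisfying those tests — field names come from the tests themselves, but the internals are assumed (the real implementation lives in `src.analysis.backtest_cost_guard`):

```python
import math
from dataclasses import dataclass


@dataclass(frozen=True)
class BacktestCostModel:
    commission_bps: float
    slippage_bps_by_session: dict[str, float]
    failure_rate_by_session: dict[str, float]
    partial_fill_rate_by_session: dict[str, float]
    unfavorable_fill_required: bool


def validate_backtest_cost_model(
    *, model: BacktestCostModel, required_sessions: list[str]
) -> None:
    # Unfavorable-fill accounting is mandatory; optimistic fills are rejected.
    if not model.unfavorable_fill_required:
        raise ValueError("unfavorable_fill_required must be True")
    if not math.isfinite(model.commission_bps) or model.commission_bps < 0:
        raise ValueError("commission_bps must be finite and non-negative")
    # Every required session needs an entry in each per-session map.
    for session in required_sessions:
        if session not in model.slippage_bps_by_session:
            raise ValueError(f"missing slippage_bps_by_session entry: {session}")
        if session not in model.failure_rate_by_session:
            raise ValueError(f"missing failure_rate_by_session entry: {session}")
        if session not in model.partial_fill_rate_by_session:
            raise ValueError(f"missing partial_fill_rate_by_session entry: {session}")
    for bps in model.slippage_bps_by_session.values():
        if not math.isfinite(bps) or bps < 0:
            raise ValueError("slippage bps must be finite and non-negative")
    for rate in model.failure_rate_by_session.values():
        if not math.isfinite(rate) or not 0.0 <= rate <= 1.0:
            raise ValueError("failure rate must be within [0, 1]")
    for rate in model.partial_fill_rate_by_session.values():
        if not math.isfinite(rate) or not 0.0 <= rate <= 1.0:
            raise ValueError("partial fill rate must be within [0, 1]")
```

The error messages are phrased so the `pytest.raises(..., match=...)` patterns in the tests above would still match.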


@@ -1,5 +1,7 @@
from __future__ import annotations
from datetime import UTC, datetime, timedelta
from src.analysis.backtest_cost_guard import BacktestCostModel
from src.analysis.backtest_pipeline import (
BacktestBar,
@@ -12,6 +14,7 @@ from src.analysis.walk_forward_split import generate_walk_forward_splits
def _bars() -> list[BacktestBar]:
base_ts = datetime(2026, 2, 28, 0, 0, tzinfo=UTC)
closes = [100.0, 101.0, 102.0, 101.5, 103.0, 102.5, 104.0, 103.5, 105.0, 104.5, 106.0, 105.5]
bars: list[BacktestBar] = []
for i, close in enumerate(closes):
@@ -21,6 +24,7 @@ def _bars() -> list[BacktestBar]:
low=close - 1.0,
close=close,
session_id="KRX_REG" if i % 2 == 0 else "US_PRE",
timestamp=base_ts + timedelta(minutes=i),
)
)
return bars
@@ -31,6 +35,7 @@ def _cost_model() -> BacktestCostModel:
commission_bps=3.0,
slippage_bps_by_session={"KRX_REG": 10.0, "US_PRE": 50.0},
failure_rate_by_session={"KRX_REG": 0.01, "US_PRE": 0.08},
partial_fill_rate_by_session={"KRX_REG": 0.05, "US_PRE": 0.2},
unfavorable_fill_required=True,
)
@@ -43,7 +48,7 @@ def test_pipeline_happy_path_returns_fold_and_artifact_contract() -> None:
triple_barrier_spec=TripleBarrierSpec(
take_profit_pct=0.02,
stop_loss_pct=0.01,
max_holding_bars=3,
max_holding_minutes=3,
),
walk_forward=WalkForwardConfig(
train_size=4,
@@ -67,6 +72,7 @@ def test_pipeline_happy_path_returns_fold_and_artifact_contract() -> None:
assert names == {"B0", "B1", "M1"}
for score in fold.baseline_scores:
assert 0.0 <= score.accuracy <= 1.0
assert 0.0 <= score.cost_adjusted_accuracy <= 1.0
def test_pipeline_cost_guard_fail_fast() -> None:
@@ -74,6 +80,7 @@ def test_pipeline_cost_guard_fail_fast() -> None:
commission_bps=3.0,
slippage_bps_by_session={"KRX_REG": 10.0},
failure_rate_by_session={"KRX_REG": 0.01},
partial_fill_rate_by_session={"KRX_REG": 0.05},
unfavorable_fill_required=True,
)
try:
@@ -84,7 +91,7 @@ def test_pipeline_cost_guard_fail_fast() -> None:
triple_barrier_spec=TripleBarrierSpec(
take_profit_pct=0.02,
stop_loss_pct=0.01,
max_holding_bars=3,
max_holding_minutes=3,
),
walk_forward=WalkForwardConfig(train_size=2, test_size=1),
cost_model=bad,
@@ -119,7 +126,7 @@ def test_pipeline_deterministic_seed_free_deterministic_result() -> None:
triple_barrier_spec=TripleBarrierSpec(
take_profit_pct=0.02,
stop_loss_pct=0.01,
max_holding_bars=3,
max_holding_minutes=3,
),
walk_forward=WalkForwardConfig(
train_size=4,
@@ -134,3 +141,73 @@ def test_pipeline_deterministic_seed_free_deterministic_result() -> None:
out1 = run_v2_backtest_pipeline(**cfg)
out2 = run_v2_backtest_pipeline(**cfg)
assert out1 == out2
def test_pipeline_rejects_minutes_spec_when_timestamp_missing() -> None:
bars = _bars()
bars[2] = BacktestBar(
high=bars[2].high,
low=bars[2].low,
close=bars[2].close,
session_id=bars[2].session_id,
timestamp=None,
)
try:
run_v2_backtest_pipeline(
bars=bars,
entry_indices=[0, 1, 2, 3],
side=1,
triple_barrier_spec=TripleBarrierSpec(
take_profit_pct=0.02,
stop_loss_pct=0.01,
max_holding_minutes=3,
),
walk_forward=WalkForwardConfig(train_size=2, test_size=1),
cost_model=_cost_model(),
)
except ValueError as exc:
assert "BacktestBar.timestamp is required" in str(exc)
else:
raise AssertionError("expected timestamp validation error")
def test_pipeline_fold_scores_reflect_cost_and_execution_effects() -> None:
cfg = dict(
bars=_bars(),
entry_indices=[0, 1, 2, 3, 4, 5, 6, 7],
side=1,
triple_barrier_spec=TripleBarrierSpec(
take_profit_pct=0.02,
stop_loss_pct=0.01,
max_holding_minutes=3,
),
walk_forward=WalkForwardConfig(
train_size=4,
test_size=2,
step_size=2,
purge_size=1,
embargo_size=1,
min_train_size=3,
),
)
optimistic = BacktestCostModel(
commission_bps=0.0,
slippage_bps_by_session={"KRX_REG": 0.0, "US_PRE": 0.0},
failure_rate_by_session={"KRX_REG": 0.0, "US_PRE": 0.0},
partial_fill_rate_by_session={"KRX_REG": 0.0, "US_PRE": 0.0},
unfavorable_fill_required=True,
)
conservative = BacktestCostModel(
commission_bps=10.0,
slippage_bps_by_session={"KRX_REG": 30.0, "US_PRE": 80.0},
failure_rate_by_session={"KRX_REG": 0.2, "US_PRE": 0.4},
partial_fill_rate_by_session={"KRX_REG": 0.5, "US_PRE": 0.7},
unfavorable_fill_required=True,
)
optimistic_out = run_v2_backtest_pipeline(cost_model=optimistic, **cfg)
conservative_out = run_v2_backtest_pipeline(cost_model=conservative, **cfg)
assert optimistic_out.folds and conservative_out.folds
optimistic_score = optimistic_out.folds[0].baseline_scores[1].cost_adjusted_accuracy
conservative_score = conservative_out.folds[0].baseline_scores[1].cost_adjusted_accuracy
assert conservative_score < optimistic_score


@@ -4,8 +4,7 @@ from __future__ import annotations
import sqlite3
import sys
import tempfile
from datetime import UTC, datetime, timedelta
from datetime import UTC, datetime
from pathlib import Path
from unittest.mock import MagicMock, patch
@@ -48,7 +47,9 @@ def temp_db(tmp_path: Path) -> Path:
cursor.executemany(
"""
INSERT INTO trades (timestamp, stock_code, action, quantity, price, confidence, rationale, pnl)
INSERT INTO trades (
timestamp, stock_code, action, quantity, price, confidence, rationale, pnl
)
VALUES (?, ?, ?, ?, ?, ?, ?, ?)
""",
test_trades,
@@ -73,9 +74,7 @@ class TestBackupExporter:
exporter = BackupExporter(str(temp_db))
output_dir = tmp_path / "exports"
results = exporter.export_all(
output_dir, formats=[ExportFormat.JSON], compress=False
)
results = exporter.export_all(output_dir, formats=[ExportFormat.JSON], compress=False)
assert ExportFormat.JSON in results
assert results[ExportFormat.JSON].exists()
@@ -86,9 +85,7 @@ class TestBackupExporter:
exporter = BackupExporter(str(temp_db))
output_dir = tmp_path / "exports"
results = exporter.export_all(
output_dir, formats=[ExportFormat.JSON], compress=True
)
results = exporter.export_all(output_dir, formats=[ExportFormat.JSON], compress=True)
assert ExportFormat.JSON in results
assert results[ExportFormat.JSON].suffix == ".gz"
@@ -98,15 +95,13 @@ class TestBackupExporter:
exporter = BackupExporter(str(temp_db))
output_dir = tmp_path / "exports"
results = exporter.export_all(
output_dir, formats=[ExportFormat.CSV], compress=False
)
results = exporter.export_all(output_dir, formats=[ExportFormat.CSV], compress=False)
assert ExportFormat.CSV in results
assert results[ExportFormat.CSV].exists()
# Verify CSV content
with open(results[ExportFormat.CSV], "r") as f:
with open(results[ExportFormat.CSV]) as f:
lines = f.readlines()
assert len(lines) == 4 # Header + 3 rows
@@ -146,7 +141,7 @@ class TestBackupExporter:
# Should only have 1 trade (AAPL on Jan 2)
import json
with open(results[ExportFormat.JSON], "r") as f:
with open(results[ExportFormat.JSON]) as f:
data = json.load(f)
assert data["record_count"] == 1
assert data["trades"][0]["stock_code"] == "AAPL"
@@ -407,9 +402,7 @@ class TestBackupExporterAdditional:
assert ExportFormat.JSON in results
assert ExportFormat.CSV in results
def test_export_all_logs_error_on_failure(
self, temp_db: Path, tmp_path: Path
) -> None:
def test_export_all_logs_error_on_failure(self, temp_db: Path, tmp_path: Path) -> None:
"""export_all must log an error and continue when one format fails."""
exporter = BackupExporter(str(temp_db))
# Patch _export_format to raise on JSON, succeed on CSV
@@ -430,9 +423,7 @@ class TestBackupExporterAdditional:
assert ExportFormat.JSON not in results
assert ExportFormat.CSV in results
def test_export_csv_empty_trades_no_compress(
self, empty_db: Path, tmp_path: Path
) -> None:
def test_export_csv_empty_trades_no_compress(self, empty_db: Path, tmp_path: Path) -> None:
"""CSV export with no trades and compress=False must write header row only."""
exporter = BackupExporter(str(empty_db))
results = exporter.export_all(
@@ -446,9 +437,7 @@ class TestBackupExporterAdditional:
content = out.read_text()
assert "timestamp" in content
def test_export_csv_empty_trades_compressed(
self, empty_db: Path, tmp_path: Path
) -> None:
def test_export_csv_empty_trades_compressed(self, empty_db: Path, tmp_path: Path) -> None:
"""CSV export with no trades and compress=True must write gzipped header."""
import gzip
@@ -465,9 +454,7 @@ class TestBackupExporterAdditional:
content = f.read()
assert "timestamp" in content
def test_export_csv_with_data_compressed(
self, temp_db: Path, tmp_path: Path
) -> None:
def test_export_csv_with_data_compressed(self, temp_db: Path, tmp_path: Path) -> None:
"""CSV export with data and compress=True must write gzipped rows."""
import gzip
@@ -492,6 +479,7 @@ class TestBackupExporterAdditional:
with patch.dict(sys.modules, {"pyarrow": None, "pyarrow.parquet": None}):
try:
import pyarrow # noqa: F401
pytest.skip("pyarrow is installed; cannot test ImportError path")
except ImportError:
pass
@@ -557,9 +545,7 @@ class TestCloudStorage:
importlib.reload(m)
m.CloudStorage(s3_config)
def test_upload_file_success(
self, mock_boto3_module, s3_config, tmp_path: Path
) -> None:
def test_upload_file_success(self, mock_boto3_module, s3_config, tmp_path: Path) -> None:
"""upload_file must call client.upload_file and return the object key."""
from src.backup.cloud_storage import CloudStorage
@@ -572,9 +558,7 @@ class TestCloudStorage:
assert key == "backups/backup.json.gz"
storage.client.upload_file.assert_called_once()
def test_upload_file_default_key(
self, mock_boto3_module, s3_config, tmp_path: Path
) -> None:
def test_upload_file_default_key(self, mock_boto3_module, s3_config, tmp_path: Path) -> None:
"""upload_file without object_key must use the filename as key."""
from src.backup.cloud_storage import CloudStorage
@@ -586,9 +570,7 @@ class TestCloudStorage:
assert key == "myfile.gz"
def test_upload_file_not_found(
self, mock_boto3_module, s3_config, tmp_path: Path
) -> None:
def test_upload_file_not_found(self, mock_boto3_module, s3_config, tmp_path: Path) -> None:
"""upload_file must raise FileNotFoundError for missing files."""
from src.backup.cloud_storage import CloudStorage
@@ -611,9 +593,7 @@ class TestCloudStorage:
with pytest.raises(RuntimeError, match="network error"):
storage.upload_file(test_file)
def test_download_file_success(
self, mock_boto3_module, s3_config, tmp_path: Path
) -> None:
def test_download_file_success(self, mock_boto3_module, s3_config, tmp_path: Path) -> None:
"""download_file must call client.download_file and return local path."""
from src.backup.cloud_storage import CloudStorage
@@ -637,11 +617,8 @@ class TestCloudStorage:
with pytest.raises(RuntimeError, match="timeout"):
storage.download_file("key", tmp_path / "dest.gz")
def test_list_files_returns_objects(
self, mock_boto3_module, s3_config
) -> None:
def test_list_files_returns_objects(self, mock_boto3_module, s3_config) -> None:
"""list_files must return parsed file metadata from S3 response."""
from datetime import timezone
from src.backup.cloud_storage import CloudStorage
@@ -651,7 +628,7 @@ class TestCloudStorage:
{
"Key": "backups/a.gz",
"Size": 1024,
"LastModified": datetime(2026, 1, 1, tzinfo=timezone.utc),
"LastModified": datetime(2026, 1, 1, tzinfo=UTC),
"ETag": '"abc123"',
}
]
@@ -662,9 +639,7 @@ class TestCloudStorage:
assert files[0]["key"] == "backups/a.gz"
assert files[0]["size_bytes"] == 1024
def test_list_files_empty_bucket(
self, mock_boto3_module, s3_config
) -> None:
def test_list_files_empty_bucket(self, mock_boto3_module, s3_config) -> None:
"""list_files must return empty list when bucket has no objects."""
from src.backup.cloud_storage import CloudStorage
@@ -674,9 +649,7 @@ class TestCloudStorage:
files = storage.list_files()
assert files == []
def test_list_files_propagates_error(
self, mock_boto3_module, s3_config
) -> None:
def test_list_files_propagates_error(self, mock_boto3_module, s3_config) -> None:
"""list_files must re-raise exceptions from the boto3 client."""
from src.backup.cloud_storage import CloudStorage
@@ -686,9 +659,7 @@ class TestCloudStorage:
with pytest.raises(RuntimeError):
storage.list_files()
def test_delete_file_success(
self, mock_boto3_module, s3_config
) -> None:
def test_delete_file_success(self, mock_boto3_module, s3_config) -> None:
"""delete_file must call client.delete_object with the correct key."""
from src.backup.cloud_storage import CloudStorage
@@ -698,9 +669,7 @@ class TestCloudStorage:
Bucket="test-bucket", Key="backups/old.gz"
)
def test_delete_file_propagates_error(
self, mock_boto3_module, s3_config
) -> None:
def test_delete_file_propagates_error(self, mock_boto3_module, s3_config) -> None:
"""delete_file must re-raise exceptions from the boto3 client."""
from src.backup.cloud_storage import CloudStorage
@@ -710,11 +679,8 @@ class TestCloudStorage:
with pytest.raises(RuntimeError):
storage.delete_file("backups/old.gz")
def test_get_storage_stats_success(
self, mock_boto3_module, s3_config
) -> None:
def test_get_storage_stats_success(self, mock_boto3_module, s3_config) -> None:
"""get_storage_stats must aggregate file sizes correctly."""
from datetime import timezone
from src.backup.cloud_storage import CloudStorage
@@ -724,13 +690,13 @@ class TestCloudStorage:
{
"Key": "a.gz",
"Size": 1024 * 1024,
"LastModified": datetime(2026, 1, 1, tzinfo=timezone.utc),
"LastModified": datetime(2026, 1, 1, tzinfo=UTC),
"ETag": '"x"',
},
{
"Key": "b.gz",
"Size": 1024 * 1024,
"LastModified": datetime(2026, 1, 2, tzinfo=timezone.utc),
"LastModified": datetime(2026, 1, 2, tzinfo=UTC),
"ETag": '"y"',
},
]
@@ -741,9 +707,7 @@ class TestCloudStorage:
assert stats["total_size_bytes"] == 2 * 1024 * 1024
assert stats["total_size_mb"] == pytest.approx(2.0)
def test_get_storage_stats_on_error(
self, mock_boto3_module, s3_config
) -> None:
def test_get_storage_stats_on_error(self, mock_boto3_module, s3_config) -> None:
"""get_storage_stats must return error dict without raising on failure."""
from src.backup.cloud_storage import CloudStorage
@@ -754,9 +718,7 @@ class TestCloudStorage:
assert "error" in stats
assert stats["total_files"] == 0
def test_verify_connection_success(
self, mock_boto3_module, s3_config
) -> None:
def test_verify_connection_success(self, mock_boto3_module, s3_config) -> None:
"""verify_connection must return True when head_bucket succeeds."""
from src.backup.cloud_storage import CloudStorage
@@ -764,9 +726,7 @@ class TestCloudStorage:
result = storage.verify_connection()
assert result is True
def test_verify_connection_failure(
self, mock_boto3_module, s3_config
) -> None:
def test_verify_connection_failure(self, mock_boto3_module, s3_config) -> None:
"""verify_connection must return False when head_bucket raises."""
from src.backup.cloud_storage import CloudStorage
@@ -776,9 +736,7 @@ class TestCloudStorage:
result = storage.verify_connection()
assert result is False
def test_enable_versioning(
self, mock_boto3_module, s3_config
) -> None:
def test_enable_versioning(self, mock_boto3_module, s3_config) -> None:
"""enable_versioning must call put_bucket_versioning."""
from src.backup.cloud_storage import CloudStorage
@@ -786,9 +744,7 @@ class TestCloudStorage:
storage.enable_versioning()
storage.client.put_bucket_versioning.assert_called_once()
def test_enable_versioning_propagates_error(
self, mock_boto3_module, s3_config
) -> None:
def test_enable_versioning_propagates_error(self, mock_boto3_module, s3_config) -> None:
"""enable_versioning must re-raise exceptions from the boto3 client."""
from src.backup.cloud_storage import CloudStorage


@@ -35,6 +35,7 @@ def test_recovery_batch_only_after_blackout_exit() -> None:
intent = QueuedOrderIntent(
market_code="KR",
exchange_code="KRX",
session_id="KRX_REG",
stock_code="005930",
order_type="BUY",
quantity=1,
@@ -64,6 +65,7 @@ def test_requeued_intent_is_processed_next_non_blackout_cycle() -> None:
intent = QueuedOrderIntent(
market_code="KR",
exchange_code="KRX",
session_id="KRX_REG",
stock_code="005930",
order_type="BUY",
quantity=1,
@@ -79,3 +81,54 @@ def test_requeued_intent_is_processed_next_non_blackout_cycle() -> None:
manager.requeue(first_batch[0])
second_batch = manager.pop_recovery_batch(outside_blackout)
assert len(second_batch) == 1
def test_queue_overflow_drops_oldest_and_keeps_latest() -> None:
manager = BlackoutOrderManager(
enabled=True,
windows=parse_blackout_windows_kst("23:30-00:10"),
max_queue_size=2,
)
first = QueuedOrderIntent(
market_code="KR",
exchange_code="KRX",
session_id="KRX_REG",
stock_code="000001",
order_type="BUY",
quantity=1,
price=100.0,
source="first",
queued_at=datetime.now(UTC),
)
second = QueuedOrderIntent(
market_code="KR",
exchange_code="KRX",
session_id="KRX_REG",
stock_code="000002",
order_type="BUY",
quantity=1,
price=101.0,
source="second",
queued_at=datetime.now(UTC),
)
third = QueuedOrderIntent(
market_code="KR",
exchange_code="KRX",
session_id="KRX_REG",
stock_code="000003",
order_type="SELL",
quantity=2,
price=102.0,
source="third",
queued_at=datetime.now(UTC),
)
assert manager.enqueue(first)
assert manager.enqueue(second)
assert manager.enqueue(third)
assert manager.pending_count == 2
assert manager.overflow_drop_count == 1
outside_blackout = datetime(2026, 1, 1, 15, 20, tzinfo=UTC)
batch = manager.pop_recovery_batch(outside_blackout)
assert [intent.stock_code for intent in batch] == ["000002", "000003"]
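The overflow contract exercised above — bounded queue, drop the oldest on overflow, count every drop — can be sketched with a plain `deque`. This is illustrative only; the real `BlackoutOrderManager` also owns blackout-window checks, requeueing, and recovery batching:

```python
from collections import deque


class BoundedIntentQueue:
    """Bounded FIFO that drops the oldest entry on overflow, counting drops."""

    def __init__(self, max_queue_size: int) -> None:
        self._queue: deque = deque()
        self._max = max_queue_size
        self.overflow_drop_count = 0

    def enqueue(self, intent) -> bool:
        if len(self._queue) >= self._max:
            self._queue.popleft()  # oldest-drop policy
            self.overflow_drop_count += 1
        self._queue.append(intent)
        return True  # enqueue always succeeds; pressure is absorbed by dropping

    @property
    def pending_count(self) -> int:
        return len(self._queue)

    def drain(self) -> list:
        """Pop everything in FIFO order, mirroring a recovery batch."""
        batch = list(self._queue)
        self._queue.clear()
        return batch
```

Under this policy a full queue always holds the most recent intents, which matches the test's expectation that `["000002", "000003"]` survive after three enqueues with `max_queue_size=2`.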


@@ -323,7 +323,8 @@ class TestPromptOverride:
# Verify the custom prompt was sent, not a built prompt
mock_generate.assert_called_once()
actual_prompt = mock_generate.call_args[1].get(
"contents", mock_generate.call_args[0][1] if len(mock_generate.call_args[0]) > 1 else None
"contents",
mock_generate.call_args[0][1] if len(mock_generate.call_args[0]) > 1 else None,
)
assert actual_prompt == custom_prompt
# Raw response preserved in rationale without parse_response (#247)
@@ -385,7 +386,8 @@ class TestPromptOverride:
await client.decide(market_data)
actual_prompt = mock_generate.call_args[1].get(
"contents", mock_generate.call_args[0][1] if len(mock_generate.call_args[0]) > 1 else None
"contents",
mock_generate.call_args[0][1] if len(mock_generate.call_args[0]) > 1 else None,
)
# The custom prompt must be used, not the compressed prompt
assert actual_prompt == custom_prompt
@@ -411,7 +413,8 @@ class TestPromptOverride:
await client.decide(market_data)
actual_prompt = mock_generate.call_args[1].get(
"contents", mock_generate.call_args[0][1] if len(mock_generate.call_args[0]) > 1 else None
"contents",
mock_generate.call_args[0][1] if len(mock_generate.call_args[0]) > 1 else None,
)
# Should contain stock code from build_prompt, not be a custom override
assert "005930" in actual_prompt


@@ -3,7 +3,7 @@
from __future__ import annotations
import asyncio
from unittest.mock import AsyncMock, MagicMock, patch
from unittest.mock import AsyncMock, patch
import pytest
@@ -99,7 +99,10 @@ class TestTokenManagement:
mock_resp_403 = AsyncMock()
mock_resp_403.status = 403
mock_resp_403.text = AsyncMock(
return_value='{"error_code":"EGW00133","error_description":"접근토큰 발급 잠시 후 다시 시도하세요(1분당 1회)"}'
return_value=(
'{"error_code":"EGW00133","error_description":'
'"접근토큰 발급 잠시 후 다시 시도하세요(1분당 1회)"}'
)
)
mock_resp_403.__aenter__ = AsyncMock(return_value=mock_resp_403)
mock_resp_403.__aexit__ = AsyncMock(return_value=False)
@@ -232,9 +235,7 @@ class TestRateLimiter:
mock_order_resp.__aenter__ = AsyncMock(return_value=mock_order_resp)
mock_order_resp.__aexit__ = AsyncMock(return_value=False)
with patch(
"aiohttp.ClientSession.post", side_effect=[mock_hash_resp, mock_order_resp]
):
with patch("aiohttp.ClientSession.post", side_effect=[mock_hash_resp, mock_order_resp]):
with patch.object(
broker._rate_limiter, "acquire", new_callable=AsyncMock
) as mock_acquire:
@@ -405,7 +406,7 @@ class TestFetchMarketRankings:
# ---------------------------------------------------------------------------
from src.broker.kis_api import kr_tick_unit, kr_round_down # noqa: E402
from src.broker.kis_api import kr_round_down, kr_tick_unit # noqa: E402
class TestKrTickUnit:
@@ -538,9 +539,7 @@ class TestSendOrderTickRounding:
mock_order.__aenter__ = AsyncMock(return_value=mock_order)
mock_order.__aexit__ = AsyncMock(return_value=False)
with patch(
"aiohttp.ClientSession.post", side_effect=[mock_hash, mock_order]
) as mock_post:
with patch("aiohttp.ClientSession.post", side_effect=[mock_hash, mock_order]) as mock_post:
await broker.send_order("005930", "BUY", 1, price=188150)
order_call = mock_post.call_args_list[1]
@@ -563,9 +562,7 @@ class TestSendOrderTickRounding:
mock_order.__aenter__ = AsyncMock(return_value=mock_order)
mock_order.__aexit__ = AsyncMock(return_value=False)
with patch(
"aiohttp.ClientSession.post", side_effect=[mock_hash, mock_order]
) as mock_post:
with patch("aiohttp.ClientSession.post", side_effect=[mock_hash, mock_order]) as mock_post:
await broker.send_order("005930", "BUY", 1, price=50000)
order_call = mock_post.call_args_list[1]
@@ -587,9 +584,7 @@ class TestSendOrderTickRounding:
mock_order.__aenter__ = AsyncMock(return_value=mock_order)
mock_order.__aexit__ = AsyncMock(return_value=False)
with patch(
"aiohttp.ClientSession.post", side_effect=[mock_hash, mock_order]
) as mock_post:
with patch("aiohttp.ClientSession.post", side_effect=[mock_hash, mock_order]) as mock_post:
await broker.send_order("005930", "SELL", 1, price=0)
order_call = mock_post.call_args_list[1]
@@ -628,9 +623,7 @@ class TestTRIDBranchingDomestic:
broker = self._make_broker(settings, "paper")
mock_resp = AsyncMock()
mock_resp.status = 200
mock_resp.json = AsyncMock(
return_value={"output1": [], "output2": {}}
)
mock_resp.json = AsyncMock(return_value={"output1": [], "output2": {}})
mock_resp.__aenter__ = AsyncMock(return_value=mock_resp)
mock_resp.__aexit__ = AsyncMock(return_value=False)
@@ -645,9 +638,7 @@ class TestTRIDBranchingDomestic:
broker = self._make_broker(settings, "live")
mock_resp = AsyncMock()
mock_resp.status = 200
mock_resp.json = AsyncMock(
return_value={"output1": [], "output2": {}}
)
mock_resp.json = AsyncMock(return_value={"output1": [], "output2": {}})
mock_resp.__aenter__ = AsyncMock(return_value=mock_resp)
mock_resp.__aexit__ = AsyncMock(return_value=False)
@@ -672,9 +663,7 @@ class TestTRIDBranchingDomestic:
mock_order.__aenter__ = AsyncMock(return_value=mock_order)
mock_order.__aexit__ = AsyncMock(return_value=False)
with patch(
"aiohttp.ClientSession.post", side_effect=[mock_hash, mock_order]
) as mock_post:
with patch("aiohttp.ClientSession.post", side_effect=[mock_hash, mock_order]) as mock_post:
await broker.send_order("005930", "BUY", 1)
order_headers = mock_post.call_args_list[1][1].get("headers", {})
@@ -695,9 +684,7 @@ class TestTRIDBranchingDomestic:
mock_order.__aenter__ = AsyncMock(return_value=mock_order)
mock_order.__aexit__ = AsyncMock(return_value=False)
with patch(
"aiohttp.ClientSession.post", side_effect=[mock_hash, mock_order]
) as mock_post:
with patch("aiohttp.ClientSession.post", side_effect=[mock_hash, mock_order]) as mock_post:
await broker.send_order("005930", "BUY", 1)
order_headers = mock_post.call_args_list[1][1].get("headers", {})
@@ -718,9 +705,7 @@ class TestTRIDBranchingDomestic:
mock_order.__aenter__ = AsyncMock(return_value=mock_order)
mock_order.__aexit__ = AsyncMock(return_value=False)
with patch(
"aiohttp.ClientSession.post", side_effect=[mock_hash, mock_order]
) as mock_post:
with patch("aiohttp.ClientSession.post", side_effect=[mock_hash, mock_order]) as mock_post:
await broker.send_order("005930", "SELL", 1)
order_headers = mock_post.call_args_list[1][1].get("headers", {})
@@ -741,9 +726,7 @@ class TestTRIDBranchingDomestic:
mock_order.__aenter__ = AsyncMock(return_value=mock_order)
mock_order.__aexit__ = AsyncMock(return_value=False)
with patch(
"aiohttp.ClientSession.post", side_effect=[mock_hash, mock_order]
) as mock_post:
with patch("aiohttp.ClientSession.post", side_effect=[mock_hash, mock_order]) as mock_post:
await broker.send_order("005930", "SELL", 1)
order_headers = mock_post.call_args_list[1][1].get("headers", {})
@@ -788,9 +771,7 @@ class TestGetDomesticPendingOrders:
mock_get.assert_not_called()
@pytest.mark.asyncio
async def test_live_mode_calls_tttc0084r_with_correct_params(
self, settings
) -> None:
async def test_live_mode_calls_tttc0084r_with_correct_params(self, settings) -> None:
"""Live mode must call TTTC0084R with INQR_DVSN_1/2 and paging params."""
broker = self._make_broker(settings, "live")
pending = [{"odno": "001", "pdno": "005930", "psbl_qty": "10"}]
@@ -872,9 +853,7 @@ class TestCancelDomesticOrder:
broker = self._make_broker(settings, "live")
mock_hash, mock_order = self._make_post_mocks({"rt_cd": "0"})
with patch(
"aiohttp.ClientSession.post", side_effect=[mock_hash, mock_order]
) as mock_post:
with patch("aiohttp.ClientSession.post", side_effect=[mock_hash, mock_order]) as mock_post:
await broker.cancel_domestic_order("005930", "ORD001", "BRNO01", 5)
order_headers = mock_post.call_args_list[1][1].get("headers", {})
@@ -886,9 +865,7 @@ class TestCancelDomesticOrder:
broker = self._make_broker(settings, "paper")
mock_hash, mock_order = self._make_post_mocks({"rt_cd": "0"})
with patch(
"aiohttp.ClientSession.post", side_effect=[mock_hash, mock_order]
) as mock_post:
with patch("aiohttp.ClientSession.post", side_effect=[mock_hash, mock_order]) as mock_post:
await broker.cancel_domestic_order("005930", "ORD001", "BRNO01", 5)
order_headers = mock_post.call_args_list[1][1].get("headers", {})
@@ -900,9 +877,7 @@ class TestCancelDomesticOrder:
broker = self._make_broker(settings, "live")
mock_hash, mock_order = self._make_post_mocks({"rt_cd": "0"})
with patch(
"aiohttp.ClientSession.post", side_effect=[mock_hash, mock_order]
) as mock_post:
with patch("aiohttp.ClientSession.post", side_effect=[mock_hash, mock_order]) as mock_post:
await broker.cancel_domestic_order("005930", "ORD001", "BRNO01", 5)
body = mock_post.call_args_list[1][1].get("json", {})
@@ -916,9 +891,7 @@ class TestCancelDomesticOrder:
broker = self._make_broker(settings, "live")
mock_hash, mock_order = self._make_post_mocks({"rt_cd": "0"})
with patch(
"aiohttp.ClientSession.post", side_effect=[mock_hash, mock_order]
) as mock_post:
with patch("aiohttp.ClientSession.post", side_effect=[mock_hash, mock_order]) as mock_post:
await broker.cancel_domestic_order("005930", "ORD123", "BRN456", 3)
body = mock_post.call_args_list[1][1].get("json", {})
@@ -932,9 +905,7 @@ class TestCancelDomesticOrder:
broker = self._make_broker(settings, "live")
mock_hash, mock_order = self._make_post_mocks({"rt_cd": "0"})
with patch(
"aiohttp.ClientSession.post", side_effect=[mock_hash, mock_order]
) as mock_post:
with patch("aiohttp.ClientSession.post", side_effect=[mock_hash, mock_order]) as mock_post:
await broker.cancel_domestic_order("005930", "ORD001", "BRNO01", 2)
order_headers = mock_post.call_args_list[1][1].get("headers", {})


@@ -77,9 +77,7 @@ class TestContextStore:
# Latest by updated_at, which should be the last one set
assert latest == "2026-02-02"
def test_delete_old_contexts(
self, store: ContextStore, db_conn: sqlite3.Connection
) -> None:
def test_delete_old_contexts(self, store: ContextStore, db_conn: sqlite3.Connection) -> None:
"""Test deleting contexts older than a cutoff date."""
# Insert contexts with specific old timestamps
# (bypassing set_context which uses current time)
@@ -170,9 +168,7 @@ class TestContextAggregator:
log_trade(db_conn, "035720", "HOLD", 75, "Wait", quantity=0, price=0, pnl=0)
# Manually set timestamps to the target date
db_conn.execute(
f"UPDATE trades SET timestamp = '{date}T10:00:00+00:00'"
)
db_conn.execute(f"UPDATE trades SET timestamp = '{date}T10:00:00+00:00'")
db_conn.commit()
# Aggregate
@@ -194,18 +190,10 @@ class TestContextAggregator:
week = "2026-W06"
# Set daily contexts
aggregator.store.set_context(
ContextLayer.L6_DAILY, "2026-02-02", "total_pnl_KR", 100.0
)
aggregator.store.set_context(
ContextLayer.L6_DAILY, "2026-02-03", "total_pnl_KR", 200.0
)
aggregator.store.set_context(
ContextLayer.L6_DAILY, "2026-02-02", "avg_confidence_KR", 80.0
)
aggregator.store.set_context(
ContextLayer.L6_DAILY, "2026-02-03", "avg_confidence_KR", 85.0
)
aggregator.store.set_context(ContextLayer.L6_DAILY, "2026-02-02", "total_pnl_KR", 100.0)
aggregator.store.set_context(ContextLayer.L6_DAILY, "2026-02-03", "total_pnl_KR", 200.0)
aggregator.store.set_context(ContextLayer.L6_DAILY, "2026-02-02", "avg_confidence_KR", 80.0)
aggregator.store.set_context(ContextLayer.L6_DAILY, "2026-02-03", "avg_confidence_KR", 85.0)
# Aggregate
aggregator.aggregate_weekly_from_daily(week)
@@ -223,15 +211,9 @@ class TestContextAggregator:
month = "2026-02"
# Set weekly contexts
aggregator.store.set_context(
ContextLayer.L5_WEEKLY, "2026-W05", "weekly_pnl_KR", 100.0
)
aggregator.store.set_context(
ContextLayer.L5_WEEKLY, "2026-W06", "weekly_pnl_KR", 200.0
)
aggregator.store.set_context(
ContextLayer.L5_WEEKLY, "2026-W07", "weekly_pnl_KR", 150.0
)
aggregator.store.set_context(ContextLayer.L5_WEEKLY, "2026-W05", "weekly_pnl_KR", 100.0)
aggregator.store.set_context(ContextLayer.L5_WEEKLY, "2026-W06", "weekly_pnl_KR", 200.0)
aggregator.store.set_context(ContextLayer.L5_WEEKLY, "2026-W07", "weekly_pnl_KR", 150.0)
# Aggregate
aggregator.aggregate_monthly_from_weekly(month)
@@ -316,6 +298,7 @@ class TestContextAggregator:
store = aggregator.store
assert store.get_context(ContextLayer.L6_DAILY, date, "total_pnl_KR") == 1000.0
from datetime import date as date_cls
trade_date = date_cls.fromisoformat(date)
iso_year, iso_week, _ = trade_date.isocalendar()
trade_week = f"{iso_year}-W{iso_week:02d}"
@@ -324,7 +307,9 @@ class TestContextAggregator:
trade_quarter = f"{trade_date.year}-Q{(trade_date.month - 1) // 3 + 1}"
trade_year = str(trade_date.year)
assert store.get_context(ContextLayer.L4_MONTHLY, trade_month, "monthly_pnl") == 1000.0
assert store.get_context(ContextLayer.L3_QUARTERLY, trade_quarter, "quarterly_pnl") == 1000.0
assert (
store.get_context(ContextLayer.L3_QUARTERLY, trade_quarter, "quarterly_pnl") == 1000.0
)
assert store.get_context(ContextLayer.L2_ANNUAL, trade_year, "annual_pnl") == 1000.0
@@ -429,9 +414,7 @@ class TestContextSummarizer:
# summarize_layer
# ------------------------------------------------------------------
def test_summarize_layer_no_data(
self, summarizer: ContextSummarizer
) -> None:
def test_summarize_layer_no_data(self, summarizer: ContextSummarizer) -> None:
"""summarize_layer with no data must return the 'No data' sentinel."""
result = summarizer.summarize_layer(ContextLayer.L6_DAILY)
assert result["count"] == 0
@@ -448,15 +431,12 @@ class TestContextSummarizer:
result = summarizer.summarize_layer(ContextLayer.L6_DAILY)
assert "total_entries" in result
def test_summarize_layer_with_dict_values(
self, summarizer: ContextSummarizer
) -> None:
def test_summarize_layer_with_dict_values(self, summarizer: ContextSummarizer) -> None:
"""summarize_layer must handle dict values by extracting numeric subkeys."""
store = summarizer.store
# set_context serialises the value as JSON, so passing a dict works
store.set_context(
ContextLayer.L6_DAILY, "2026-02-01", "metrics",
{"win_rate": 65.0, "label": "good"}
ContextLayer.L6_DAILY, "2026-02-01", "metrics", {"win_rate": 65.0, "label": "good"}
)
result = summarizer.summarize_layer(ContextLayer.L6_DAILY)
@@ -464,9 +444,7 @@ class TestContextSummarizer:
# numeric subkey "win_rate" should appear as "metrics.win_rate"
assert "metrics.win_rate" in result
def test_summarize_layer_with_string_values(
self, summarizer: ContextSummarizer
) -> None:
def test_summarize_layer_with_string_values(self, summarizer: ContextSummarizer) -> None:
"""summarize_layer must count string values separately."""
store = summarizer.store
# set_context stores string values as JSON-encoded strings
@@ -480,9 +458,7 @@ class TestContextSummarizer:
# rolling_window_summary
# ------------------------------------------------------------------
def test_rolling_window_summary_basic(
self, summarizer: ContextSummarizer
) -> None:
def test_rolling_window_summary_basic(self, summarizer: ContextSummarizer) -> None:
"""rolling_window_summary must return the expected structure."""
store = summarizer.store
store.set_context(ContextLayer.L6_DAILY, "2026-02-01", "pnl", 500.0)
@@ -492,22 +468,16 @@ class TestContextSummarizer:
assert "recent_data" in result
assert "historical_summary" in result
def test_rolling_window_summary_no_older_data(
self, summarizer: ContextSummarizer
) -> None:
def test_rolling_window_summary_no_older_data(self, summarizer: ContextSummarizer) -> None:
"""rolling_window_summary with summarize_older=False skips history."""
result = summarizer.rolling_window_summary(
ContextLayer.L6_DAILY, summarize_older=False
)
result = summarizer.rolling_window_summary(ContextLayer.L6_DAILY, summarize_older=False)
assert result["historical_summary"] == {}
# ------------------------------------------------------------------
# aggregate_to_higher_layer
# ------------------------------------------------------------------
def test_aggregate_to_higher_layer_mean(
self, summarizer: ContextSummarizer
) -> None:
def test_aggregate_to_higher_layer_mean(self, summarizer: ContextSummarizer) -> None:
"""aggregate_to_higher_layer with 'mean' via dict subkeys returns average."""
store = summarizer.store
# Use different outer keys but same inner metric key so get_all_contexts
@@ -520,9 +490,7 @@ class TestContextSummarizer:
)
assert result == pytest.approx(150.0)
def test_aggregate_to_higher_layer_sum(
self, summarizer: ContextSummarizer
) -> None:
def test_aggregate_to_higher_layer_sum(self, summarizer: ContextSummarizer) -> None:
"""aggregate_to_higher_layer with 'sum' must return the total."""
store = summarizer.store
store.set_context(ContextLayer.L6_DAILY, "2026-02-01", "day1", {"pnl": 100.0})
@@ -533,9 +501,7 @@ class TestContextSummarizer:
)
assert result == pytest.approx(300.0)
def test_aggregate_to_higher_layer_max(
self, summarizer: ContextSummarizer
) -> None:
def test_aggregate_to_higher_layer_max(self, summarizer: ContextSummarizer) -> None:
"""aggregate_to_higher_layer with 'max' must return the maximum."""
store = summarizer.store
store.set_context(ContextLayer.L6_DAILY, "2026-02-01", "day1", {"pnl": 100.0})
@@ -546,9 +512,7 @@ class TestContextSummarizer:
)
assert result == pytest.approx(200.0)
def test_aggregate_to_higher_layer_min(
self, summarizer: ContextSummarizer
) -> None:
def test_aggregate_to_higher_layer_min(self, summarizer: ContextSummarizer) -> None:
"""aggregate_to_higher_layer with 'min' must return the minimum."""
store = summarizer.store
store.set_context(ContextLayer.L6_DAILY, "2026-02-01", "day1", {"pnl": 100.0})
@@ -559,9 +523,7 @@ class TestContextSummarizer:
)
assert result == pytest.approx(100.0)
def test_aggregate_to_higher_layer_no_data(
self, summarizer: ContextSummarizer
) -> None:
def test_aggregate_to_higher_layer_no_data(self, summarizer: ContextSummarizer) -> None:
"""aggregate_to_higher_layer with no matching key must return None."""
result = summarizer.aggregate_to_higher_layer(
ContextLayer.L6_DAILY, ContextLayer.L5_WEEKLY, "nonexistent", "mean"
@@ -585,9 +547,7 @@ class TestContextSummarizer:
# create_compact_summary + format_summary_for_prompt
# ------------------------------------------------------------------
def test_create_compact_summary(
self, summarizer: ContextSummarizer
) -> None:
def test_create_compact_summary(self, summarizer: ContextSummarizer) -> None:
"""create_compact_summary must produce a dict keyed by layer value."""
store = summarizer.store
store.set_context(ContextLayer.L6_DAILY, "2026-02-01", "pnl", 100.0)
@@ -615,9 +575,7 @@ class TestContextSummarizer:
text = summarizer.format_summary_for_prompt(summary)
assert text == ""
def test_format_summary_non_dict_value(
self, summarizer: ContextSummarizer
) -> None:
def test_format_summary_non_dict_value(self, summarizer: ContextSummarizer) -> None:
"""format_summary_for_prompt must render non-dict values as plain text."""
summary = {
"daily": {


@@ -4,6 +4,7 @@ from __future__ import annotations
import json
import sqlite3
from datetime import UTC, datetime
from types import SimpleNamespace
from unittest.mock import AsyncMock, MagicMock
@@ -16,8 +17,6 @@ from src.evolution.daily_review import DailyReviewer
from src.evolution.scorecard import DailyScorecard
from src.logging.decision_logger import DecisionLogger
from datetime import UTC, datetime
TODAY = datetime.now(UTC).strftime("%Y-%m-%d")
@@ -53,7 +52,8 @@ def _log_decision(
def test_generate_scorecard_market_scoped(
db_conn: sqlite3.Connection, context_store: ContextStore,
db_conn: sqlite3.Connection,
context_store: ContextStore,
) -> None:
reviewer = DailyReviewer(db_conn, context_store)
logger = DecisionLogger(db_conn)
@@ -134,7 +134,8 @@ def test_generate_scorecard_market_scoped(
def test_generate_scorecard_top_winners_and_losers(
db_conn: sqlite3.Connection, context_store: ContextStore,
db_conn: sqlite3.Connection,
context_store: ContextStore,
) -> None:
reviewer = DailyReviewer(db_conn, context_store)
logger = DecisionLogger(db_conn)
@@ -168,7 +169,8 @@ def test_generate_scorecard_top_winners_and_losers(
def test_generate_scorecard_empty_day(
db_conn: sqlite3.Connection, context_store: ContextStore,
db_conn: sqlite3.Connection,
context_store: ContextStore,
) -> None:
reviewer = DailyReviewer(db_conn, context_store)
scorecard = reviewer.generate_scorecard(TODAY, "KR")
@@ -184,7 +186,8 @@ def test_generate_scorecard_empty_day(
@pytest.mark.asyncio
async def test_generate_lessons_without_gemini_returns_empty(
db_conn: sqlite3.Connection, context_store: ContextStore,
db_conn: sqlite3.Connection,
context_store: ContextStore,
) -> None:
reviewer = DailyReviewer(db_conn, context_store, gemini_client=None)
lessons = await reviewer.generate_lessons(
@@ -206,7 +209,8 @@ async def test_generate_lessons_without_gemini_returns_empty(
@pytest.mark.asyncio
async def test_generate_lessons_parses_json_array(
db_conn: sqlite3.Connection, context_store: ContextStore,
db_conn: sqlite3.Connection,
context_store: ContextStore,
) -> None:
mock_gemini = MagicMock()
mock_gemini.decide = AsyncMock(
@@ -233,7 +237,8 @@ async def test_generate_lessons_parses_json_array(
@pytest.mark.asyncio
async def test_generate_lessons_fallback_to_lines(
db_conn: sqlite3.Connection, context_store: ContextStore,
db_conn: sqlite3.Connection,
context_store: ContextStore,
) -> None:
mock_gemini = MagicMock()
mock_gemini.decide = AsyncMock(
@@ -260,7 +265,8 @@ async def test_generate_lessons_fallback_to_lines(
@pytest.mark.asyncio
async def test_generate_lessons_handles_gemini_error(
db_conn: sqlite3.Connection, context_store: ContextStore,
db_conn: sqlite3.Connection,
context_store: ContextStore,
) -> None:
mock_gemini = MagicMock()
mock_gemini.decide = AsyncMock(side_effect=RuntimeError("boom"))
@@ -284,7 +290,8 @@ async def test_generate_lessons_handles_gemini_error(
def test_store_scorecard_in_context(
db_conn: sqlite3.Connection, context_store: ContextStore,
db_conn: sqlite3.Connection,
context_store: ContextStore,
) -> None:
reviewer = DailyReviewer(db_conn, context_store)
scorecard = DailyScorecard(
@@ -316,7 +323,8 @@ def test_store_scorecard_in_context(
def test_store_scorecard_key_is_market_scoped(
db_conn: sqlite3.Connection, context_store: ContextStore,
db_conn: sqlite3.Connection,
context_store: ContextStore,
) -> None:
reviewer = DailyReviewer(db_conn, context_store)
kr = DailyScorecard(
@@ -357,7 +365,8 @@ def test_store_scorecard_key_is_market_scoped(
def test_generate_scorecard_handles_invalid_context_snapshot(
db_conn: sqlite3.Connection, context_store: ContextStore,
db_conn: sqlite3.Connection,
context_store: ContextStore,
) -> None:
reviewer = DailyReviewer(db_conn, context_store)
db_conn.execute(


@@ -355,6 +355,7 @@ def test_positions_empty_when_no_trades(tmp_path: Path) -> None:
def _seed_cb_context(conn: sqlite3.Connection, pnl_pct: float, market: str = "KR") -> None:
import json as _json
conn.execute(
"INSERT OR REPLACE INTO system_metrics (key, value, updated_at) VALUES (?, ?, ?)",
(


@@ -79,7 +79,7 @@ class TestNewsAPI:
# Mock the fetch to avoid real API call
with patch.object(api, "_fetch_news", new_callable=AsyncMock) as mock_fetch:
mock_fetch.return_value = None
result = await api.get_news_sentiment("AAPL")
await api.get_news_sentiment("AAPL")
# Should have attempted refetch since cache expired
mock_fetch.assert_called_once_with("AAPL")
@@ -111,9 +111,7 @@ class TestNewsAPI:
"source": "Reuters",
"time_published": "2026-02-04T10:00:00",
"url": "https://example.com/1",
"ticker_sentiment": [
{"ticker": "AAPL", "ticker_sentiment_score": "0.85"}
],
"ticker_sentiment": [{"ticker": "AAPL", "ticker_sentiment_score": "0.85"}],
"overall_sentiment_score": "0.75",
},
{
@@ -122,9 +120,7 @@ class TestNewsAPI:
"source": "Bloomberg",
"time_published": "2026-02-04T09:00:00",
"url": "https://example.com/2",
"ticker_sentiment": [
{"ticker": "AAPL", "ticker_sentiment_score": "-0.3"}
],
"ticker_sentiment": [{"ticker": "AAPL", "ticker_sentiment_score": "-0.3"}],
"overall_sentiment_score": "-0.2",
},
]
@@ -661,7 +657,9 @@ class TestGeminiClientWithExternalData:
)
# Mock the Gemini API call
with patch.object(client._client.aio.models, "generate_content", new_callable=AsyncMock) as mock_gen:
with patch.object(
client._client.aio.models, "generate_content", new_callable=AsyncMock
) as mock_gen:
mock_response = MagicMock()
mock_response.text = '{"action": "BUY", "confidence": 85, "rationale": "Good news"}'
mock_gen.return_value = mock_response
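The `patch.object(..., new_callable=AsyncMock)` reflow above is purely cosmetic; the pattern itself looks like this in isolation (the `Client` class is a stand-in, not the real Gemini client):

```python
import asyncio
from unittest.mock import AsyncMock, patch

class Client:
    async def generate(self, prompt: str) -> str:  # would hit the network
        raise ConnectionError("no network in tests")

async def demo() -> str:
    client = Client()
    # new_callable=AsyncMock swaps in an awaitable mock, so awaiting the
    # method returns the configured value instead of calling the real API
    with patch.object(client, "generate", new_callable=AsyncMock) as mock_gen:
        mock_gen.return_value = '{"action": "BUY"}'
        result = await client.generate("AAPL")
    mock_gen.assert_called_once_with("AAPL")
    return result
```

A plain `MagicMock` would fail here because the call site awaits the result; `AsyncMock` records the call and hands back `return_value` when awaited.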


@@ -1,7 +1,7 @@
"""Tests for database helper functions."""
import tempfile
import os
import tempfile
from src.db import get_latest_buy_trade, get_open_position, init_db, log_trade
@@ -204,7 +204,8 @@ def test_mode_migration_adds_column_to_existing_db() -> None:
assert "strategy_pnl" in columns
assert "fx_pnl" in columns
migrated = conn.execute(
"SELECT pnl, strategy_pnl, fx_pnl, session_id FROM trades WHERE stock_code='AAPL' LIMIT 1"
"SELECT pnl, strategy_pnl, fx_pnl, session_id "
"FROM trades WHERE stock_code='AAPL' LIMIT 1"
).fetchone()
assert migrated is not None
assert migrated[0] == 123.45
@@ -407,9 +408,7 @@ def test_decision_logs_session_id_migration_backfills_unknown() -> None:
conn = init_db(db_path)
columns = {row[1] for row in conn.execute("PRAGMA table_info(decision_logs)").fetchall()}
assert "session_id" in columns
row = conn.execute(
"SELECT session_id FROM decision_logs WHERE decision_id='d1'"
).fetchone()
row = conn.execute("SELECT session_id FROM decision_logs WHERE decision_id='d1'").fetchone()
assert row is not None
assert row[0] == "UNKNOWN"
conn.close()
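The migration assertions above rely on `PRAGMA table_info` to enumerate columns. A self-contained sketch of that check (the table, column, and default value here are illustrative):

```python
import sqlite3

def has_column(conn: sqlite3.Connection, table: str, column: str) -> bool:
    """Return True if *table* has *column*; row[1] of table_info is the name."""
    rows = conn.execute(f"PRAGMA table_info({table})").fetchall()
    return column in {row[1] for row in rows}

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE decision_logs (decision_id TEXT)")
# a migration adding a column with a backfill default, as the tests expect
conn.execute("ALTER TABLE decision_logs ADD COLUMN session_id TEXT DEFAULT 'UNKNOWN'")
```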


@@ -49,7 +49,10 @@ def test_log_decision_creates_record(logger: DecisionLogger, db_conn: sqlite3.Co
# Verify record exists in database
cursor = db_conn.execute(
"SELECT decision_id, action, confidence, session_id FROM decision_logs WHERE decision_id = ?",
(
"SELECT decision_id, action, confidence, session_id "
"FROM decision_logs WHERE decision_id = ?"
),
(decision_id,),
)
row = cursor.fetchone()


@@ -208,7 +208,9 @@ def test_identify_failure_patterns_empty(optimizer: EvolutionOptimizer) -> None:
@pytest.mark.asyncio
async def test_generate_strategy_creates_file(optimizer: EvolutionOptimizer, tmp_path: Path) -> None:
async def test_generate_strategy_creates_file(
optimizer: EvolutionOptimizer, tmp_path: Path
) -> None:
"""Test that generate_strategy creates a strategy file."""
failures = [
{
@@ -234,7 +236,9 @@ async def test_generate_strategy_creates_file(optimizer: EvolutionOptimizer, tmp
return {"action": "HOLD", "confidence": 50, "rationale": "Waiting"}
"""
with patch.object(optimizer._client.aio.models, "generate_content", new=AsyncMock(return_value=mock_response)):
with patch.object(
optimizer._client.aio.models, "generate_content", new=AsyncMock(return_value=mock_response)
):
with patch("src.evolution.optimizer.STRATEGIES_DIR", tmp_path):
strategy_path = await optimizer.generate_strategy(failures)
@@ -247,7 +251,8 @@ async def test_generate_strategy_creates_file(optimizer: EvolutionOptimizer, tmp
@pytest.mark.asyncio
async def test_generate_strategy_saves_valid_python_code(
optimizer: EvolutionOptimizer, tmp_path: Path,
optimizer: EvolutionOptimizer,
tmp_path: Path,
) -> None:
"""Test that syntactically valid generated code is saved."""
failures = [{"decision_id": "1", "timestamp": "2024-01-15T09:30:00+00:00"}]
@@ -255,12 +260,14 @@ async def test_generate_strategy_saves_valid_python_code(
mock_response = Mock()
mock_response.text = (
'price = market_data.get("current_price", 0)\n'
'if price > 0:\n'
"if price > 0:\n"
' return {"action": "BUY", "confidence": 80, "rationale": "Positive price"}\n'
'return {"action": "HOLD", "confidence": 50, "rationale": "No signal"}\n'
)
with patch.object(optimizer._client.aio.models, "generate_content", new=AsyncMock(return_value=mock_response)):
with patch.object(
optimizer._client.aio.models, "generate_content", new=AsyncMock(return_value=mock_response)
):
with patch("src.evolution.optimizer.STRATEGIES_DIR", tmp_path):
strategy_path = await optimizer.generate_strategy(failures)
@@ -270,7 +277,9 @@ async def test_generate_strategy_saves_valid_python_code(
@pytest.mark.asyncio
async def test_generate_strategy_blocks_invalid_python_code(
optimizer: EvolutionOptimizer, tmp_path: Path, caplog: pytest.LogCaptureFixture,
optimizer: EvolutionOptimizer,
tmp_path: Path,
caplog: pytest.LogCaptureFixture,
) -> None:
"""Test that syntactically invalid generated code is not saved."""
failures = [{"decision_id": "1", "timestamp": "2024-01-15T09:30:00+00:00"}]
@@ -281,7 +290,9 @@ async def test_generate_strategy_blocks_invalid_python_code(
' return {"action": "BUY", "confidence": 80, "rationale": "broken"}\n'
)
with patch.object(optimizer._client.aio.models, "generate_content", new=AsyncMock(return_value=mock_response)):
with patch.object(
optimizer._client.aio.models, "generate_content", new=AsyncMock(return_value=mock_response)
):
with patch("src.evolution.optimizer.STRATEGIES_DIR", tmp_path):
with caplog.at_level("WARNING"):
strategy_path = await optimizer.generate_strategy(failures)
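These tests imply the optimizer syntax-checks generated code before writing it to disk. One plausible shape for that guard, sketched under the assumption that snippets are function bodies (they contain bare `return` statements) wrapped in a stub `def` before parsing:

```python
import ast

def is_valid_strategy_body(body: str) -> bool:
    """Syntax-check a generated function body before saving it."""
    # indent the snippet and wrap it so top-level `return` is legal
    indented = "\n".join("    " + line for line in body.splitlines())
    wrapped = f"def _strategy(market_data):\n{indented}\n"
    try:
        ast.parse(wrapped)
        return True
    except SyntaxError:
        return False
```

`ast.parse` rejects the broken snippet without executing anything, which matters when the code comes from an LLM.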
@@ -310,6 +321,7 @@ def test_get_performance_summary() -> None:
"""Test getting performance summary from trades table."""
# Create a temporary database with trades
import tempfile
with tempfile.NamedTemporaryFile(suffix=".db", delete=False) as tmp:
tmp_path = tmp.name
@@ -604,7 +616,9 @@ def test_calculate_improvement_trend_declining(performance_tracker: PerformanceT
assert trend["pnl_change"] == -250.0
def test_calculate_improvement_trend_insufficient_data(performance_tracker: PerformanceTracker) -> None:
def test_calculate_improvement_trend_insufficient_data(
performance_tracker: PerformanceTracker,
) -> None:
"""Test improvement trend with insufficient data."""
metrics = [
StrategyMetrics(
@@ -718,7 +732,9 @@ async def test_full_evolution_pipeline(optimizer: EvolutionOptimizer, tmp_path:
mock_response = Mock()
mock_response.text = 'return {"action": "HOLD", "confidence": 50, "rationale": "Test"}'
with patch.object(optimizer._client.aio.models, "generate_content", new=AsyncMock(return_value=mock_response)):
with patch.object(
optimizer._client.aio.models, "generate_content", new=AsyncMock(return_value=mock_response)
):
with patch("src.evolution.optimizer.STRATEGIES_DIR", tmp_path):
with patch("subprocess.run") as mock_run:
mock_run.return_value = Mock(returncode=0, stdout="", stderr="")


@@ -103,9 +103,7 @@ class TestSetupLogging:
"""setup_logging must attach a JSON handler to the root logger."""
setup_logging(level=logging.DEBUG)
root = logging.getLogger()
json_handlers = [
h for h in root.handlers if isinstance(h.formatter, JSONFormatter)
]
json_handlers = [h for h in root.handlers if isinstance(h.formatter, JSONFormatter)]
assert len(json_handlers) == 1
assert root.level == logging.DEBUG

File diff suppressed because it is too large.


@@ -173,9 +173,7 @@ class TestGetNextMarketOpen:
"""Should find next Monday opening when called on weekend."""
# Saturday 2026-02-07 12:00 UTC
test_time = datetime(2026, 2, 7, 12, 0, tzinfo=ZoneInfo("UTC"))
market, open_time = get_next_market_open(
enabled_markets=["KR"], now=test_time
)
market, open_time = get_next_market_open(enabled_markets=["KR"], now=test_time)
assert market.code == "KR"
# Monday 2026-02-09 09:00 KST
expected = datetime(2026, 2, 9, 9, 0, tzinfo=ZoneInfo("Asia/Seoul"))
@@ -185,9 +183,7 @@ class TestGetNextMarketOpen:
"""Should find next day opening when called after market close."""
# Monday 2026-02-02 16:00 KST (after close)
test_time = datetime(2026, 2, 2, 16, 0, tzinfo=ZoneInfo("Asia/Seoul"))
market, open_time = get_next_market_open(
enabled_markets=["KR"], now=test_time
)
market, open_time = get_next_market_open(enabled_markets=["KR"], now=test_time)
assert market.code == "KR"
# Tuesday 2026-02-03 09:00 KST
expected = datetime(2026, 2, 3, 9, 0, tzinfo=ZoneInfo("Asia/Seoul"))
@@ -197,9 +193,7 @@ class TestGetNextMarketOpen:
"""Should find earliest opening market among multiple."""
# Saturday 2026-02-07 12:00 UTC
test_time = datetime(2026, 2, 7, 12, 0, tzinfo=ZoneInfo("UTC"))
market, open_time = get_next_market_open(
enabled_markets=["KR", "US_NASDAQ"], now=test_time
)
market, open_time = get_next_market_open(enabled_markets=["KR", "US_NASDAQ"], now=test_time)
# Monday 2026-02-09: KR opens at 09:00 KST = 00:00 UTC
# Monday 2026-02-09: US opens at 09:30 EST = 14:30 UTC
# KR opens first
@@ -214,9 +208,7 @@ class TestGetNextMarketOpen:
def test_get_next_market_open_invalid_market(self) -> None:
"""Should skip invalid market codes."""
test_time = datetime(2026, 2, 7, 12, 0, tzinfo=ZoneInfo("UTC"))
market, _ = get_next_market_open(
enabled_markets=["INVALID", "KR"], now=test_time
)
market, _ = get_next_market_open(enabled_markets=["INVALID", "KR"], now=test_time)
assert market.code == "KR"
def test_get_next_market_open_prefers_extended_session(self) -> None:


@@ -8,7 +8,7 @@ import aiohttp
import pytest
from src.broker.kis_api import KISBroker
from src.broker.overseas import OverseasBroker, _PRICE_EXCHANGE_MAP, _RANKING_EXCHANGE_MAP
from src.broker.overseas import _PRICE_EXCHANGE_MAP, _RANKING_EXCHANGE_MAP, OverseasBroker
from src.config import Settings
@@ -85,25 +85,27 @@ class TestConfigDefaults:
assert mock_settings.OVERSEAS_RANKING_VOLUME_TR_ID == "HHDFS76270000"
def test_fluct_path(self, mock_settings: Settings) -> None:
assert mock_settings.OVERSEAS_RANKING_FLUCT_PATH == "/uapi/overseas-stock/v1/ranking/updown-rate"
assert (
mock_settings.OVERSEAS_RANKING_FLUCT_PATH
== "/uapi/overseas-stock/v1/ranking/updown-rate"
)
def test_volume_path(self, mock_settings: Settings) -> None:
assert mock_settings.OVERSEAS_RANKING_VOLUME_PATH == "/uapi/overseas-stock/v1/ranking/volume-surge"
assert (
mock_settings.OVERSEAS_RANKING_VOLUME_PATH
== "/uapi/overseas-stock/v1/ranking/volume-surge"
)
class TestFetchOverseasRankings:
"""Test fetch_overseas_rankings method."""
@pytest.mark.asyncio
async def test_fluctuation_uses_correct_params(
self, overseas_broker: OverseasBroker
) -> None:
async def test_fluctuation_uses_correct_params(self, overseas_broker: OverseasBroker) -> None:
"""Fluctuation ranking should use HHDFS76290000, updown-rate path, and correct params."""
mock_resp = AsyncMock()
mock_resp.status = 200
mock_resp.json = AsyncMock(
return_value={"output": [{"symb": "AAPL", "name": "Apple"}]}
)
mock_resp.json = AsyncMock(return_value={"output": [{"symb": "AAPL", "name": "Apple"}]})
mock_session = MagicMock()
mock_session.get = MagicMock(return_value=_make_async_cm(mock_resp))
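`_make_async_cm` (defined outside these hunks) evidently wraps a mock response in an async context manager so `async with session.get(...)` works against mocks. A standalone sketch of that helper and its use, with the name and exact behavior assumed from how the tests call it:

```python
import asyncio
from unittest.mock import AsyncMock, MagicMock

def make_async_cm(value):
    """Build a mock async context manager that yields *value*."""
    cm = MagicMock()
    # MagicMock supports assigning magic methods directly on instances
    cm.__aenter__ = AsyncMock(return_value=value)
    cm.__aexit__ = AsyncMock(return_value=False)
    return cm

async def demo():
    resp = AsyncMock()
    resp.status = 200
    resp.json = AsyncMock(return_value={"output": []})
    session = MagicMock()
    # session.get itself is NOT async: aiohttp returns a context manager,
    # so a plain MagicMock wrapping the cm is the correct shape
    session.get = MagicMock(return_value=make_async_cm(resp))
    async with session.get("https://example.com") as r:
        return r.status, await r.json()
```

The design point mirrored here is that `session.get` must stay synchronous (it returns the context manager), while `__aenter__` and `json()` are the awaitables.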
@@ -132,15 +134,11 @@ class TestFetchOverseasRankings:
overseas_broker._broker._auth_headers.assert_called_with("HHDFS76290000")
@pytest.mark.asyncio
async def test_volume_uses_correct_params(
self, overseas_broker: OverseasBroker
) -> None:
async def test_volume_uses_correct_params(self, overseas_broker: OverseasBroker) -> None:
"""Volume ranking should use HHDFS76270000, volume-surge path, and correct params."""
mock_resp = AsyncMock()
mock_resp.status = 200
mock_resp.json = AsyncMock(
return_value={"output": [{"symb": "TSLA", "name": "Tesla"}]}
)
mock_resp.json = AsyncMock(return_value={"output": [{"symb": "TSLA", "name": "Tesla"}]})
mock_session = MagicMock()
mock_session.get = MagicMock(return_value=_make_async_cm(mock_resp))
@@ -169,9 +167,7 @@ class TestFetchOverseasRankings:
overseas_broker._broker._auth_headers.assert_called_with("HHDFS76270000")
@pytest.mark.asyncio
async def test_404_returns_empty_list(
self, overseas_broker: OverseasBroker
) -> None:
async def test_404_returns_empty_list(self, overseas_broker: OverseasBroker) -> None:
"""HTTP 404 should return empty list (fallback) instead of raising."""
mock_resp = AsyncMock()
mock_resp.status = 404
@@ -186,9 +182,7 @@ class TestFetchOverseasRankings:
assert result == []
@pytest.mark.asyncio
async def test_non_404_error_raises(
self, overseas_broker: OverseasBroker
) -> None:
async def test_non_404_error_raises(self, overseas_broker: OverseasBroker) -> None:
"""Non-404 HTTP errors should raise ConnectionError."""
mock_resp = AsyncMock()
mock_resp.status = 500
@@ -203,9 +197,7 @@ class TestFetchOverseasRankings:
await overseas_broker.fetch_overseas_rankings("NASD")
@pytest.mark.asyncio
async def test_empty_response_returns_empty(
self, overseas_broker: OverseasBroker
) -> None:
async def test_empty_response_returns_empty(self, overseas_broker: OverseasBroker) -> None:
"""Empty output in response should return empty list."""
mock_resp = AsyncMock()
mock_resp.status = 200
@@ -220,18 +212,14 @@ class TestFetchOverseasRankings:
assert result == []
@pytest.mark.asyncio
async def test_ranking_disabled_returns_empty(
self, overseas_broker: OverseasBroker
) -> None:
async def test_ranking_disabled_returns_empty(self, overseas_broker: OverseasBroker) -> None:
"""When OVERSEAS_RANKING_ENABLED=False, should return empty immediately."""
overseas_broker._broker._settings.OVERSEAS_RANKING_ENABLED = False
result = await overseas_broker.fetch_overseas_rankings("NASD")
assert result == []
@pytest.mark.asyncio
async def test_limit_truncates_results(
self, overseas_broker: OverseasBroker
) -> None:
async def test_limit_truncates_results(self, overseas_broker: OverseasBroker) -> None:
"""Results should be truncated to the specified limit."""
rows = [{"symb": f"SYM{i}"} for i in range(20)]
mock_resp = AsyncMock()
@@ -247,9 +235,7 @@ class TestFetchOverseasRankings:
assert len(result) == 5
@pytest.mark.asyncio
async def test_network_error_raises(
self, overseas_broker: OverseasBroker
) -> None:
async def test_network_error_raises(self, overseas_broker: OverseasBroker) -> None:
"""Network errors should raise ConnectionError."""
cm = MagicMock()
cm.__aenter__ = AsyncMock(side_effect=aiohttp.ClientError("timeout"))
@@ -264,9 +250,7 @@ class TestFetchOverseasRankings:
await overseas_broker.fetch_overseas_rankings("NASD")
@pytest.mark.asyncio
async def test_exchange_code_mapping_applied(
self, overseas_broker: OverseasBroker
) -> None:
async def test_exchange_code_mapping_applied(self, overseas_broker: OverseasBroker) -> None:
"""All major exchanges should use mapped codes in API params."""
for original, mapped in [("NASD", "NAS"), ("NYSE", "NYS"), ("AMEX", "AMS")]:
mock_resp = AsyncMock()
@@ -298,7 +282,9 @@ class TestGetOverseasPrice:
mock_session.get = MagicMock(return_value=_make_async_cm(mock_resp))
_setup_broker_mocks(overseas_broker, mock_session)
overseas_broker._broker._auth_headers = AsyncMock(return_value={"authorization": "Bearer t"})
overseas_broker._broker._auth_headers = AsyncMock(
return_value={"authorization": "Bearer t"}
)
result = await overseas_broker.get_overseas_price("NASD", "AAPL")
assert result["output"]["last"] == "150.00"
@@ -530,11 +516,14 @@ class TestPriceExchangeMap:
def test_price_map_equals_ranking_map(self) -> None:
assert _PRICE_EXCHANGE_MAP is _RANKING_EXCHANGE_MAP
@pytest.mark.parametrize("original,expected", [
@pytest.mark.parametrize(
"original,expected",
[
("NASD", "NAS"),
("NYSE", "NYS"),
("AMEX", "AMS"),
])
],
)
def test_us_exchange_code_mapping(self, original: str, expected: str) -> None:
assert _PRICE_EXCHANGE_MAP[original] == expected
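The reflowed `@pytest.mark.parametrize` above is the multi-line layout Black produces for a long decorator. In isolation the pattern looks like this (the mapping literal is an illustrative copy, not the real `_PRICE_EXCHANGE_MAP`):

```python
import pytest

EXCHANGE_MAP = {"NASD": "NAS", "NYSE": "NYS", "AMEX": "AMS"}  # illustrative

@pytest.mark.parametrize(
    "original,expected",
    [
        ("NASD", "NAS"),
        ("NYSE", "NYS"),
        ("AMEX", "AMS"),
    ],
)
def test_us_exchange_code_mapping(original: str, expected: str) -> None:
    assert EXCHANGE_MAP[original] == expected
```

Each tuple becomes its own test case, so a single wrong mapping fails individually instead of aborting a loop inside one test.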
@@ -574,9 +563,7 @@ class TestOrderRtCdCheck:
return OverseasBroker(broker)
@pytest.mark.asyncio
async def test_success_rt_cd_returns_data(
self, overseas_broker: OverseasBroker
) -> None:
async def test_success_rt_cd_returns_data(self, overseas_broker: OverseasBroker) -> None:
"""rt_cd='0' → order accepted, data returned."""
mock_resp = AsyncMock()
mock_resp.status = 200
@@ -590,9 +577,7 @@ class TestOrderRtCdCheck:
assert result["rt_cd"] == "0"
@pytest.mark.asyncio
async def test_error_rt_cd_returns_data_with_msg(
self, overseas_broker: OverseasBroker
) -> None:
async def test_error_rt_cd_returns_data_with_msg(self, overseas_broker: OverseasBroker) -> None:
"""rt_cd != '0' → order rejected, data still returned (caller checks rt_cd)."""
mock_resp = AsyncMock()
mock_resp.status = 200
@@ -623,6 +608,7 @@ class TestPaperOverseasCash:
def test_env_override(self) -> None:
import os
os.environ["PAPER_OVERSEAS_CASH"] = "25000"
settings = Settings(
KIS_APP_KEY="k",
@@ -635,6 +621,7 @@ class TestPaperOverseasCash:
def test_zero_disables_fallback(self) -> None:
import os
os.environ["PAPER_OVERSEAS_CASH"] = "0"
settings = Settings(
KIS_APP_KEY="k",
@@ -822,9 +809,7 @@ class TestGetOverseasPendingOrders:
"""Tests for get_overseas_pending_orders method."""
@pytest.mark.asyncio
async def test_paper_mode_returns_empty(
self, overseas_broker: OverseasBroker
) -> None:
async def test_paper_mode_returns_empty(self, overseas_broker: OverseasBroker) -> None:
"""Paper mode should immediately return [] without any API call."""
# Default mock_settings has MODE="paper"
overseas_broker._broker._settings = overseas_broker._broker._settings.model_copy(
@@ -855,9 +840,7 @@ class TestGetOverseasPendingOrders:
overseas_broker._broker._auth_headers = mock_auth_headers # type: ignore[method-assign]
pending_orders = [
{"odno": "001", "pdno": "AAPL", "sll_buy_dvsn_cd": "02", "nccs_qty": "5"}
]
pending_orders = [{"odno": "001", "pdno": "AAPL", "sll_buy_dvsn_cd": "02", "nccs_qty": "5"}]
mock_resp = AsyncMock()
mock_resp.status = 200
mock_resp.json = AsyncMock(return_value={"output": pending_orders})
@@ -879,9 +862,7 @@ class TestGetOverseasPendingOrders:
assert captured_params[0]["OVRS_EXCG_CD"] == "NASD"
@pytest.mark.asyncio
async def test_live_mode_connection_error(
self, overseas_broker: OverseasBroker
) -> None:
async def test_live_mode_connection_error(self, overseas_broker: OverseasBroker) -> None:
"""Network error in live mode should raise ConnectionError."""
overseas_broker._broker._settings = overseas_broker._broker._settings.model_copy(
update={"MODE": "live"}
@@ -926,55 +907,41 @@ class TestCancelOverseasOrder:
return captured_tr_ids, mock_session
@pytest.mark.asyncio
async def test_us_live_uses_tttt1004u(
self, overseas_broker: OverseasBroker
) -> None:
async def test_us_live_uses_tttt1004u(self, overseas_broker: OverseasBroker) -> None:
"""US exchange in live mode should use TTTT1004U."""
overseas_broker._broker._settings = overseas_broker._broker._settings.model_copy(
update={"MODE": "live"}
)
captured, _ = self._setup_cancel_mocks(
overseas_broker, {"rt_cd": "0", "msg1": "OK"}
)
captured, _ = self._setup_cancel_mocks(overseas_broker, {"rt_cd": "0", "msg1": "OK"})
await overseas_broker.cancel_overseas_order("NASD", "AAPL", "ORD001", 5)
assert "TTTT1004U" in captured
@pytest.mark.asyncio
async def test_us_paper_uses_vttt1004u(
self, overseas_broker: OverseasBroker
) -> None:
async def test_us_paper_uses_vttt1004u(self, overseas_broker: OverseasBroker) -> None:
"""US exchange in paper mode should use VTTT1004U."""
# Default mock_settings has MODE="paper"
captured, _ = self._setup_cancel_mocks(
overseas_broker, {"rt_cd": "0", "msg1": "OK"}
)
captured, _ = self._setup_cancel_mocks(overseas_broker, {"rt_cd": "0", "msg1": "OK"})
await overseas_broker.cancel_overseas_order("NASD", "AAPL", "ORD001", 5)
assert "VTTT1004U" in captured
@pytest.mark.asyncio
async def test_hk_live_uses_ttts1003u(
self, overseas_broker: OverseasBroker
) -> None:
async def test_hk_live_uses_ttts1003u(self, overseas_broker: OverseasBroker) -> None:
"""SEHK exchange in live mode should use TTTS1003U."""
overseas_broker._broker._settings = overseas_broker._broker._settings.model_copy(
update={"MODE": "live"}
)
captured, _ = self._setup_cancel_mocks(
overseas_broker, {"rt_cd": "0", "msg1": "OK"}
)
captured, _ = self._setup_cancel_mocks(overseas_broker, {"rt_cd": "0", "msg1": "OK"})
await overseas_broker.cancel_overseas_order("SEHK", "0700", "ORD002", 10)
assert "TTTS1003U" in captured
@pytest.mark.asyncio
async def test_cancel_sets_rvse_cncl_dvsn_cd_02(
self, overseas_broker: OverseasBroker
) -> None:
async def test_cancel_sets_rvse_cncl_dvsn_cd_02(self, overseas_broker: OverseasBroker) -> None:
"""Cancel body must include RVSE_CNCL_DVSN_CD='02' and OVRS_ORD_UNPR='0'."""
captured_body: list[dict] = []
@@ -1005,9 +972,7 @@ class TestCancelOverseasOrder:
assert captured_body[0]["ORGN_ODNO"] == "ORD003"
@pytest.mark.asyncio
async def test_cancel_sets_hashkey_header(
self, overseas_broker: OverseasBroker
) -> None:
async def test_cancel_sets_hashkey_header(self, overseas_broker: OverseasBroker) -> None:
"""hashkey must be set in the request headers."""
captured_headers: list[dict] = []
overseas_broker._broker._get_hash_key = AsyncMock(return_value="test_hash") # type: ignore[method-assign]


@@ -78,9 +78,7 @@ def _gemini_response_json(
"rationale": "Near circuit breaker",
}
]
return json.dumps(
{"market_outlook": outlook, "global_rules": global_rules, "stocks": stocks}
)
return json.dumps({"market_outlook": outlook, "global_rules": global_rules, "stocks": stocks})
def _make_planner(
@@ -564,8 +562,12 @@ class TestBuildPrompt:
def test_prompt_contains_cross_market(self) -> None:
planner = _make_planner()
cross = CrossMarketContext(
market="US", date="2026-02-07", total_pnl=1.5,
win_rate=60, index_change_pct=0.8, lessons=["Cut losses early"],
market="US",
date="2026-02-07",
total_pnl=1.5,
win_rate=60,
index_change_pct=0.8,
lessons=["Cut losses early"],
)
prompt = planner._build_prompt("KR", [_candidate()], {}, None, cross)
@@ -683,9 +685,7 @@ class TestSmartFallbackPlaybook:
)
def test_momentum_candidate_gets_buy_on_volume(self) -> None:
candidates = [
_candidate(code="CHOW", signal="momentum", volume_ratio=13.64, rsi=100.0)
]
candidates = [_candidate(code="CHOW", signal="momentum", volume_ratio=13.64, rsi=100.0)]
settings = self._make_settings()
pb = PreMarketPlanner._smart_fallback_playbook(
@@ -707,9 +707,7 @@ class TestSmartFallbackPlaybook:
assert sell_sc.condition.price_change_pct_below == -3.0
def test_oversold_candidate_gets_buy_on_rsi(self) -> None:
-candidates = [
-_candidate(code="005930", signal="oversold", rsi=22.0, volume_ratio=3.5)
-]
+candidates = [_candidate(code="005930", signal="oversold", rsi=22.0, volume_ratio=3.5)]
settings = self._make_settings()
pb = PreMarketPlanner._smart_fallback_playbook(
@@ -776,9 +774,7 @@ class TestSmartFallbackPlaybook:
def test_empty_candidates_returns_empty_playbook(self) -> None:
settings = self._make_settings()
-pb = PreMarketPlanner._smart_fallback_playbook(
-date(2026, 2, 17), "US_AMEX", [], settings
-)
+pb = PreMarketPlanner._smart_fallback_playbook(date(2026, 2, 17), "US_AMEX", [], settings)
assert pb.stock_count == 0
@@ -814,19 +810,14 @@ class TestSmartFallbackPlaybook:
planner = _make_planner()
planner._gemini.decide = AsyncMock(side_effect=ConnectionError("429 quota exceeded"))
# momentum candidate
-candidates = [
-_candidate(code="CHOW", signal="momentum", volume_ratio=13.64, rsi=100.0)
-]
+candidates = [_candidate(code="CHOW", signal="momentum", volume_ratio=13.64, rsi=100.0)]
-pb = await planner.generate_playbook(
-"US_AMEX", candidates, today=date(2026, 2, 18)
-)
+pb = await planner.generate_playbook("US_AMEX", candidates, today=date(2026, 2, 18))
# Should NOT be all-SELL defensive; should have BUY for momentum
assert pb.stock_count == 1
buy_scenarios = [
-s for s in pb.stock_playbooks[0].scenarios
-if s.action == ScenarioAction.BUY
+s for s in pb.stock_playbooks[0].scenarios if s.action == ScenarioAction.BUY
]
assert len(buy_scenarios) == 1
assert buy_scenarios[0].condition.volume_ratio_above == 2.0 # VOL_MULTIPLIER default

View File

@@ -14,7 +14,7 @@ from src.strategy.models import (
StockPlaybook,
StockScenario,
)
-from src.strategy.scenario_engine import ScenarioEngine, ScenarioMatch
+from src.strategy.scenario_engine import ScenarioEngine
@pytest.fixture
@@ -162,8 +162,10 @@ class TestEvaluateCondition:
def test_mixed_invalid_types_no_exception(self, engine: ScenarioEngine) -> None:
"""Various invalid types should not raise exceptions."""
cond = StockCondition(
-rsi_below=30.0, volume_ratio_above=2.0,
-price_above=100, price_change_pct_below=-1.0,
+rsi_below=30.0,
+volume_ratio_above=2.0,
+price_above=100,
+price_change_pct_below=-1.0,
)
data = {
"rsi": [25], # list
@@ -356,9 +358,7 @@ class TestEvaluate:
def test_match_details_populated(self, engine: ScenarioEngine) -> None:
pb = _playbook(scenarios=[_scenario(rsi_below=30.0, volume_ratio_above=2.0)])
-result = engine.evaluate(
-pb, "005930", {"rsi": 25.0, "volume_ratio": 3.0}, {}
-)
+result = engine.evaluate(pb, "005930", {"rsi": 25.0, "volume_ratio": 3.0}, {})
assert result.match_details.get("rsi") == 25.0
assert result.match_details.get("volume_ratio") == 3.0
@@ -381,7 +381,9 @@ class TestEvaluate:
),
StockPlaybook(
stock_code="MSFT",
-scenarios=[_scenario(rsi_above=75.0, action=ScenarioAction.SELL, confidence=80)],
+scenarios=[
+_scenario(rsi_above=75.0, action=ScenarioAction.SELL, confidence=80)
+],
),
],
)
@@ -450,58 +452,42 @@ class TestEvaluate:
class TestPositionAwareConditions:
"""Tests for unrealized_pnl_pct and holding_days condition fields."""
-def test_evaluate_condition_unrealized_pnl_above_matches(
-self, engine: ScenarioEngine
-) -> None:
+def test_evaluate_condition_unrealized_pnl_above_matches(self, engine: ScenarioEngine) -> None:
"""unrealized_pnl_pct_above should match when P&L exceeds threshold."""
condition = StockCondition(unrealized_pnl_pct_above=3.0)
assert engine.evaluate_condition(condition, {"unrealized_pnl_pct": 5.0}) is True
-def test_evaluate_condition_unrealized_pnl_above_no_match(
-self, engine: ScenarioEngine
-) -> None:
+def test_evaluate_condition_unrealized_pnl_above_no_match(self, engine: ScenarioEngine) -> None:
"""unrealized_pnl_pct_above should NOT match when P&L is below threshold."""
condition = StockCondition(unrealized_pnl_pct_above=3.0)
assert engine.evaluate_condition(condition, {"unrealized_pnl_pct": 2.0}) is False
-def test_evaluate_condition_unrealized_pnl_below_matches(
-self, engine: ScenarioEngine
-) -> None:
+def test_evaluate_condition_unrealized_pnl_below_matches(self, engine: ScenarioEngine) -> None:
"""unrealized_pnl_pct_below should match when P&L is under threshold."""
condition = StockCondition(unrealized_pnl_pct_below=-2.0)
assert engine.evaluate_condition(condition, {"unrealized_pnl_pct": -3.5}) is True
-def test_evaluate_condition_unrealized_pnl_below_no_match(
-self, engine: ScenarioEngine
-) -> None:
+def test_evaluate_condition_unrealized_pnl_below_no_match(self, engine: ScenarioEngine) -> None:
"""unrealized_pnl_pct_below should NOT match when P&L is above threshold."""
condition = StockCondition(unrealized_pnl_pct_below=-2.0)
assert engine.evaluate_condition(condition, {"unrealized_pnl_pct": -1.0}) is False
-def test_evaluate_condition_holding_days_above_matches(
-self, engine: ScenarioEngine
-) -> None:
+def test_evaluate_condition_holding_days_above_matches(self, engine: ScenarioEngine) -> None:
"""holding_days_above should match when position held longer than threshold."""
condition = StockCondition(holding_days_above=5)
assert engine.evaluate_condition(condition, {"holding_days": 7}) is True
-def test_evaluate_condition_holding_days_above_no_match(
-self, engine: ScenarioEngine
-) -> None:
+def test_evaluate_condition_holding_days_above_no_match(self, engine: ScenarioEngine) -> None:
"""holding_days_above should NOT match when position held shorter."""
condition = StockCondition(holding_days_above=5)
assert engine.evaluate_condition(condition, {"holding_days": 3}) is False
-def test_evaluate_condition_holding_days_below_matches(
-self, engine: ScenarioEngine
-) -> None:
+def test_evaluate_condition_holding_days_below_matches(self, engine: ScenarioEngine) -> None:
"""holding_days_below should match when position held fewer days."""
condition = StockCondition(holding_days_below=3)
assert engine.evaluate_condition(condition, {"holding_days": 1}) is True
-def test_evaluate_condition_holding_days_below_no_match(
-self, engine: ScenarioEngine
-) -> None:
+def test_evaluate_condition_holding_days_below_no_match(self, engine: ScenarioEngine) -> None:
"""holding_days_below should NOT match when held more days."""
condition = StockCondition(holding_days_below=3)
assert engine.evaluate_condition(condition, {"holding_days": 5}) is False
@@ -513,33 +499,33 @@ class TestPositionAwareConditions:
holding_days_above=5,
)
# Both met → match
-assert engine.evaluate_condition(
+assert (
+engine.evaluate_condition(
condition,
{"unrealized_pnl_pct": 4.5, "holding_days": 7},
-) is True
+)
+is True
+)
# Only pnl met → no match
-assert engine.evaluate_condition(
+assert (
+engine.evaluate_condition(
condition,
{"unrealized_pnl_pct": 4.5, "holding_days": 3},
-) is False
+)
+is False
+)
-def test_missing_unrealized_pnl_does_not_match(
-self, engine: ScenarioEngine
-) -> None:
+def test_missing_unrealized_pnl_does_not_match(self, engine: ScenarioEngine) -> None:
"""Missing unrealized_pnl_pct key should not match the condition."""
condition = StockCondition(unrealized_pnl_pct_above=3.0)
assert engine.evaluate_condition(condition, {}) is False
-def test_missing_holding_days_does_not_match(
-self, engine: ScenarioEngine
-) -> None:
+def test_missing_holding_days_does_not_match(self, engine: ScenarioEngine) -> None:
"""Missing holding_days key should not match the condition."""
condition = StockCondition(holding_days_above=5)
assert engine.evaluate_condition(condition, {}) is False
-def test_match_details_includes_position_fields(
-self, engine: ScenarioEngine
-) -> None:
+def test_match_details_includes_position_fields(self, engine: ScenarioEngine) -> None:
"""match_details should include position fields when condition specifies them."""
pb = _playbook(
scenarios=[

View File
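The position-aware condition tests above pin down a simple contract: a condition matches only when every referenced field is present, numeric, and strictly crosses its threshold; a missing key never matches. A minimal stand-alone sketch of that semantics (this `evaluate_condition` helper is invented for illustration, not the project's `ScenarioEngine`):

```python
# Hypothetical sketch of the threshold semantics the tests above assert:
# missing or non-numeric keys never match, "_above"/"_below" compare strictly.
from typing import Any


def evaluate_condition(thresholds: dict[str, float], data: dict[str, Any]) -> bool:
    for name, limit in thresholds.items():
        field, _, direction = name.rpartition("_")  # "holding_days_above" -> ("holding_days", "above")
        value = data.get(field)
        if not isinstance(value, (int, float)):
            return False  # missing key or invalid type -> no match
        if direction == "above" and not value > limit:
            return False
        if direction == "below" and not value < limit:
            return False
    return True


# Mirrors the combined pnl + holding-days test: ALL thresholds must hold.
cond = {"unrealized_pnl_pct_above": 3.0, "holding_days_above": 5}
print(evaluate_condition(cond, {"unrealized_pnl_pct": 4.5, "holding_days": 7}))  # True
print(evaluate_condition(cond, {"unrealized_pnl_pct": 4.5, "holding_days": 3}))  # False
print(evaluate_condition(cond, {}))  # False
```

The AND-combination of fields is what makes the "only pnl met" case fail in the combined test.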

@@ -0,0 +1,128 @@
from __future__ import annotations
import importlib.util
from pathlib import Path
def _load_module():
script_path = Path(__file__).resolve().parents[1] / "scripts" / "session_handover_check.py"
spec = importlib.util.spec_from_file_location("session_handover_check", script_path)
assert spec is not None
assert spec.loader is not None
module = importlib.util.module_from_spec(spec)
spec.loader.exec_module(module)
return module
def test_ci_mode_skips_date_branch_and_merge_gate(monkeypatch, tmp_path) -> None:
module = _load_module()
handover = tmp_path / "session-handover.md"
handover.write_text(
"\n".join(
[
"### 2000-01-01 | session=test",
"- branch: feature/other-branch",
"- docs_checked: docs/workflow.md, docs/commands.md, docs/agent-constraints.md",
"- open_issues_reviewed: #1",
"- next_ticket: #123",
"- process_gate_checked: process_ticket=#1 merged_to_feature_branch=no",
]
),
encoding="utf-8",
)
monkeypatch.setattr(module, "HANDOVER_LOG", handover)
errors: list[str] = []
module._check_handover_entry(
branch="feature/current-branch",
strict=True,
ci_mode=True,
errors=errors,
)
assert errors == []
def test_ci_mode_still_blocks_tbd_next_ticket(monkeypatch, tmp_path) -> None:
module = _load_module()
handover = tmp_path / "session-handover.md"
handover.write_text(
"\n".join(
[
"### 2000-01-01 | session=test",
"- branch: feature/other-branch",
"- docs_checked: docs/workflow.md, docs/commands.md, docs/agent-constraints.md",
"- open_issues_reviewed: #1",
"- next_ticket: #TBD",
"- process_gate_checked: process_ticket=#1 merged_to_feature_branch=no",
]
),
encoding="utf-8",
)
monkeypatch.setattr(module, "HANDOVER_LOG", handover)
errors: list[str] = []
module._check_handover_entry(
branch="feature/current-branch",
strict=True,
ci_mode=True,
errors=errors,
)
assert "latest handover entry must not use placeholder next_ticket (#TBD)" in errors
def test_non_ci_strict_enforces_date_branch_and_merge_gate(monkeypatch, tmp_path) -> None:
module = _load_module()
handover = tmp_path / "session-handover.md"
handover.write_text(
"\n".join(
[
"### 2000-01-01 | session=test",
"- branch: feature/other-branch",
"- docs_checked: docs/workflow.md, docs/commands.md, docs/agent-constraints.md",
"- open_issues_reviewed: #1",
"- next_ticket: #123",
"- process_gate_checked: process_ticket=#1 merged_to_feature_branch=no",
]
),
encoding="utf-8",
)
monkeypatch.setattr(module, "HANDOVER_LOG", handover)
errors: list[str] = []
module._check_handover_entry(
branch="feature/current-branch",
strict=True,
ci_mode=False,
errors=errors,
)
assert any("must contain today's UTC date" in e for e in errors)
assert any("must target current branch" in e for e in errors)
assert any("merged_to_feature_branch=no" in e for e in errors)
def test_non_ci_strict_still_blocks_tbd_next_ticket(monkeypatch, tmp_path) -> None:
module = _load_module()
handover = tmp_path / "session-handover.md"
handover.write_text(
"\n".join(
[
"### 2000-01-01 | session=test",
"- branch: feature/other-branch",
"- docs_checked: docs/workflow.md, docs/commands.md, docs/agent-constraints.md",
"- open_issues_reviewed: #1",
"- next_ticket: #TBD",
"- process_gate_checked: process_ticket=#1 merged_to_feature_branch=yes",
]
),
encoding="utf-8",
)
monkeypatch.setattr(module, "HANDOVER_LOG", handover)
errors: list[str] = []
module._check_handover_entry(
branch="feature/current-branch",
strict=True,
ci_mode=False,
errors=errors,
)
assert "latest handover entry must not use placeholder next_ticket (#TBD)" in errors
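The script tests above all share one mechanic: loading a standalone script by file path with `importlib`, since `scripts/` is not an importable package. A self-contained demo of that pattern against a throwaway file (`toy_script.py` and `ANSWER` are invented for illustration):

```python
# Demo of the importlib-by-path pattern used by _load_module above:
# load a Python file that is not on sys.path as a regular module object.
import importlib.util
import tempfile
from pathlib import Path

with tempfile.TemporaryDirectory() as tmp:
    script = Path(tmp) / "toy_script.py"  # stand-in for scripts/<name>.py
    script.write_text("ANSWER = 42\n", encoding="utf-8")

    spec = importlib.util.spec_from_file_location("toy_script", script)
    assert spec is not None and spec.loader is not None
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)  # executes the file's top-level code

print(module.ANSWER)  # 42
```

Because the loaded module is an ordinary object, tests can then `monkeypatch.setattr(module, ...)` on it, exactly as these tests do with `HANDOVER_LOG`.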

View File

@@ -2,9 +2,10 @@
from __future__ import annotations
-import pytest
from unittest.mock import AsyncMock, MagicMock
+
+import pytest
from src.analysis.smart_scanner import ScanCandidate, SmartVolatilityScanner
from src.analysis.volatility import VolatilityAnalyzer
from src.broker.kis_api import KISBroker
@@ -200,9 +201,7 @@ class TestSmartVolatilityScanner:
assert len(candidates) <= scanner.top_n
@pytest.mark.asyncio
-async def test_get_stock_codes(
-self, scanner: SmartVolatilityScanner
-) -> None:
+async def test_get_stock_codes(self, scanner: SmartVolatilityScanner) -> None:
"""Test extraction of stock codes from candidates."""
candidates = [
ScanCandidate(

View File

@@ -19,7 +19,6 @@ from src.strategy.models import (
StockScenario,
)
# ---------------------------------------------------------------------------
# StockCondition
# ---------------------------------------------------------------------------

View File

@@ -5,7 +5,11 @@ from unittest.mock import AsyncMock, patch
import aiohttp
import pytest
-from src.notifications.telegram_client import NotificationFilter, NotificationPriority, TelegramClient
+from src.notifications.telegram_client import (
+NotificationFilter,
+NotificationPriority,
+TelegramClient,
+)
class TestTelegramClientInit:
@@ -13,9 +17,7 @@ class TestTelegramClientInit:
def test_disabled_via_flag(self) -> None:
"""Client disabled via enabled=False flag."""
-client = TelegramClient(
-bot_token="123:abc", chat_id="456", enabled=False
-)
+client = TelegramClient(bot_token="123:abc", chat_id="456", enabled=False)
assert client._enabled is False
def test_disabled_missing_token(self) -> None:
@@ -30,9 +32,7 @@ class TestTelegramClientInit:
def test_enabled_with_credentials(self) -> None:
"""Client enabled when credentials provided."""
-client = TelegramClient(
-bot_token="123:abc", chat_id="456", enabled=True
-)
+client = TelegramClient(bot_token="123:abc", chat_id="456", enabled=True)
assert client._enabled is True
@@ -42,9 +42,7 @@ class TestNotificationSending:
@pytest.mark.asyncio
async def test_send_message_success(self) -> None:
"""send_message returns True on successful send."""
-client = TelegramClient(
-bot_token="123:abc", chat_id="456", enabled=True
-)
+client = TelegramClient(bot_token="123:abc", chat_id="456", enabled=True)
mock_resp = AsyncMock()
mock_resp.status = 200
@@ -76,9 +74,7 @@ class TestNotificationSending:
@pytest.mark.asyncio
async def test_send_message_api_error(self) -> None:
"""send_message returns False on API error."""
-client = TelegramClient(
-bot_token="123:abc", chat_id="456", enabled=True
-)
+client = TelegramClient(bot_token="123:abc", chat_id="456", enabled=True)
mock_resp = AsyncMock()
mock_resp.status = 400
@@ -93,9 +89,7 @@ class TestNotificationSending:
@pytest.mark.asyncio
async def test_send_message_with_markdown(self) -> None:
"""send_message supports different parse modes."""
-client = TelegramClient(
-bot_token="123:abc", chat_id="456", enabled=True
-)
+client = TelegramClient(bot_token="123:abc", chat_id="456", enabled=True)
mock_resp = AsyncMock()
mock_resp.status = 200
@@ -128,9 +122,7 @@ class TestNotificationSending:
@pytest.mark.asyncio
async def test_trade_execution_format(self) -> None:
"""Trade notification has correct format."""
-client = TelegramClient(
-bot_token="123:abc", chat_id="456", enabled=True
-)
+client = TelegramClient(bot_token="123:abc", chat_id="456", enabled=True)
mock_resp = AsyncMock()
mock_resp.status = 200
@@ -163,9 +155,7 @@ class TestNotificationSending:
@pytest.mark.asyncio
async def test_playbook_generated_format(self) -> None:
"""Playbook generated notification has expected fields."""
-client = TelegramClient(
-bot_token="123:abc", chat_id="456", enabled=True
-)
+client = TelegramClient(bot_token="123:abc", chat_id="456", enabled=True)
mock_resp = AsyncMock()
mock_resp.status = 200
@@ -190,9 +180,7 @@ class TestNotificationSending:
@pytest.mark.asyncio
async def test_scenario_matched_format(self) -> None:
"""Scenario matched notification has expected fields."""
-client = TelegramClient(
-bot_token="123:abc", chat_id="456", enabled=True
-)
+client = TelegramClient(bot_token="123:abc", chat_id="456", enabled=True)
mock_resp = AsyncMock()
mock_resp.status = 200
@@ -217,9 +205,7 @@ class TestNotificationSending:
@pytest.mark.asyncio
async def test_playbook_failed_format(self) -> None:
"""Playbook failed notification has expected fields."""
-client = TelegramClient(
-bot_token="123:abc", chat_id="456", enabled=True
-)
+client = TelegramClient(bot_token="123:abc", chat_id="456", enabled=True)
mock_resp = AsyncMock()
mock_resp.status = 200
@@ -240,9 +226,7 @@ class TestNotificationSending:
@pytest.mark.asyncio
async def test_circuit_breaker_priority(self) -> None:
"""Circuit breaker uses CRITICAL priority."""
-client = TelegramClient(
-bot_token="123:abc", chat_id="456", enabled=True
-)
+client = TelegramClient(bot_token="123:abc", chat_id="456", enabled=True)
mock_resp = AsyncMock()
mock_resp.status = 200
@@ -260,9 +244,7 @@ class TestNotificationSending:
@pytest.mark.asyncio
async def test_api_error_handling(self) -> None:
"""API errors logged but don't crash."""
-client = TelegramClient(
-bot_token="123:abc", chat_id="456", enabled=True
-)
+client = TelegramClient(bot_token="123:abc", chat_id="456", enabled=True)
mock_resp = AsyncMock()
mock_resp.status = 400
@@ -277,25 +259,19 @@ class TestNotificationSending:
@pytest.mark.asyncio
async def test_timeout_handling(self) -> None:
"""Timeouts logged but don't crash."""
-client = TelegramClient(
-bot_token="123:abc", chat_id="456", enabled=True
-)
+client = TelegramClient(bot_token="123:abc", chat_id="456", enabled=True)
with patch(
"aiohttp.ClientSession.post",
side_effect=aiohttp.ClientError("Connection timeout"),
):
# Should not raise exception
-await client.notify_error(
-error_type="Test Error", error_msg="Test", context="test"
-)
+await client.notify_error(error_type="Test Error", error_msg="Test", context="test")
@pytest.mark.asyncio
async def test_session_management(self) -> None:
"""Session created and reused correctly."""
-client = TelegramClient(
-bot_token="123:abc", chat_id="456", enabled=True
-)
+client = TelegramClient(bot_token="123:abc", chat_id="456", enabled=True)
# Session should be None initially
assert client._session is None
@@ -324,9 +300,7 @@ class TestRateLimiting:
"""Rate limiter delays rapid requests."""
import time
-client = TelegramClient(
-bot_token="123:abc", chat_id="456", enabled=True, rate_limit=2.0
-)
+client = TelegramClient(bot_token="123:abc", chat_id="456", enabled=True, rate_limit=2.0)
mock_resp = AsyncMock()
mock_resp.status = 200
@@ -353,9 +327,7 @@ class TestMessagePriorities:
@pytest.mark.asyncio
async def test_low_priority_uses_info_emoji(self) -> None:
"""LOW priority uses the info emoji."""
-client = TelegramClient(
-bot_token="123:abc", chat_id="456", enabled=True
-)
+client = TelegramClient(bot_token="123:abc", chat_id="456", enabled=True)
mock_resp = AsyncMock()
mock_resp.status = 200
@@ -371,9 +343,7 @@ class TestMessagePriorities:
@pytest.mark.asyncio
async def test_critical_priority_uses_alarm_emoji(self) -> None:
"""CRITICAL priority uses 🚨 emoji."""
-client = TelegramClient(
-bot_token="123:abc", chat_id="456", enabled=True
-)
+client = TelegramClient(bot_token="123:abc", chat_id="456", enabled=True)
mock_resp = AsyncMock()
mock_resp.status = 200
@@ -389,9 +359,7 @@ class TestMessagePriorities:
@pytest.mark.asyncio
async def test_playbook_generated_priority(self) -> None:
"""Playbook generated uses MEDIUM priority emoji."""
-client = TelegramClient(
-bot_token="123:abc", chat_id="456", enabled=True
-)
+client = TelegramClient(bot_token="123:abc", chat_id="456", enabled=True)
mock_resp = AsyncMock()
mock_resp.status = 200
@@ -412,9 +380,7 @@ class TestMessagePriorities:
@pytest.mark.asyncio
async def test_playbook_failed_priority(self) -> None:
"""Playbook failed uses HIGH priority emoji."""
-client = TelegramClient(
-bot_token="123:abc", chat_id="456", enabled=True
-)
+client = TelegramClient(bot_token="123:abc", chat_id="456", enabled=True)
mock_resp = AsyncMock()
mock_resp.status = 200
@@ -433,9 +399,7 @@ class TestMessagePriorities:
@pytest.mark.asyncio
async def test_scenario_matched_priority(self) -> None:
"""Scenario matched uses HIGH priority emoji."""
-client = TelegramClient(
-bot_token="123:abc", chat_id="456", enabled=True
-)
+client = TelegramClient(bot_token="123:abc", chat_id="456", enabled=True)
mock_resp = AsyncMock()
mock_resp.status = 200
@@ -460,9 +424,7 @@ class TestClientCleanup:
@pytest.mark.asyncio
async def test_close_closes_session(self) -> None:
"""close() closes the HTTP session."""
-client = TelegramClient(
-bot_token="123:abc", chat_id="456", enabled=True
-)
+client = TelegramClient(bot_token="123:abc", chat_id="456", enabled=True)
mock_session = AsyncMock()
mock_session.closed = False
@@ -475,9 +437,7 @@ class TestClientCleanup:
@pytest.mark.asyncio
async def test_close_handles_no_session(self) -> None:
"""close() handles None session gracefully."""
-client = TelegramClient(
-bot_token="123:abc", chat_id="456", enabled=True
-)
+client = TelegramClient(bot_token="123:abc", chat_id="456", enabled=True)
# Should not raise exception
await client.close()
@@ -535,8 +495,12 @@ class TestNotificationFilter:
)
with patch("aiohttp.ClientSession.post") as mock_post:
await client.notify_trade_execution(
-stock_code="005930", market="KR", action="BUY",
-quantity=10, price=70000.0, confidence=85.0
+stock_code="005930",
+market="KR",
+action="BUY",
+quantity=10,
+price=70000.0,
+confidence=85.0,
)
mock_post.assert_not_called()
@@ -556,8 +520,13 @@ class TestNotificationFilter:
async def test_circuit_breaker_always_sends_regardless_of_filter(self) -> None:
"""notify_circuit_breaker always sends (no filter flag)."""
nf = NotificationFilter(
-trades=False, market_open_close=False, fat_finger=False,
-system_events=False, playbook=False, scenario_match=False, errors=False,
+trades=False,
+market_open_close=False,
+fat_finger=False,
+system_events=False,
+playbook=False,
+scenario_match=False,
+errors=False,
)
client = TelegramClient(
bot_token="123:abc", chat_id="456", enabled=True, notification_filter=nf
@@ -617,7 +586,7 @@ class TestNotificationFilter:
nf = NotificationFilter()
assert nf.set_flag("unknown_key", False) is False
-def test_as_dict_keys_match_KEYS(self) -> None:
+def test_as_dict_keys_match_keys(self) -> None:
"""as_dict() returns every key defined in KEYS."""
nf = NotificationFilter()
d = nf.as_dict()
@@ -640,10 +609,17 @@ class TestNotificationFilter:
def test_set_notification_all_on(self) -> None:
"""set_notification('all', True) enables every filter flag."""
client = TelegramClient(
-bot_token="123:abc", chat_id="456", enabled=True,
+bot_token="123:abc",
+chat_id="456",
+enabled=True,
notification_filter=NotificationFilter(
-trades=False, market_open_close=False, scenario_match=False,
-fat_finger=False, system_events=False, playbook=False, errors=False,
+trades=False,
+market_open_close=False,
+scenario_match=False,
+fat_finger=False,
+system_events=False,
+playbook=False,
+errors=False,
),
)
assert client.set_notification("all", True) is True
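The client tests above all follow the same stubbing pattern: swap the async transport for an `AsyncMock` with a canned response, then assert on behavior and on the recorded call. A minimal stdlib-only sketch of that pattern (`HypotheticalClient` and `_post` are invented here; the real tests patch `aiohttp.ClientSession.post`):

```python
# Sketch of the AsyncMock stubbing pattern: replace an async network call
# with a canned status code, then assert on both outcome and call arguments.
import asyncio
from unittest.mock import AsyncMock


class HypotheticalClient:
    async def _post(self, payload: dict) -> int:
        raise RuntimeError("real network call")  # replaced by the mock in tests

    async def send_message(self, text: str) -> bool:
        status = await self._post({"text": text})
        return status == 200


client = HypotheticalClient()
client._post = AsyncMock(return_value=200)  # stub out the transport

ok = asyncio.run(client.send_message("hello"))
print(ok)  # True
client._post.assert_awaited_once_with({"text": "hello"})
```

`AsyncMock` is awaitable out of the box, which is why these tests can drop it in for coroutine methods without wrapping.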

View File

@@ -357,8 +357,7 @@ class TestTradingControlCommands:
pause_event.set()
await client.send_message(
-"<b>▶️ Trading Resumed</b>\n\n"
-"Trading operations have been restarted."
+"<b>▶️ Trading Resumed</b>\n\nTrading operations have been restarted."
)
handler.register_command("resume", mock_resume)
@@ -526,9 +525,7 @@ class TestStatusCommands:
async def mock_status_error() -> None:
"""Mock /status handler with error."""
-await client.send_message(
-"<b>⚠️ Error</b>\n\nFailed to retrieve trading status."
-)
+await client.send_message("<b>⚠️ Error</b>\n\nFailed to retrieve trading status.")
handler.register_command("status", mock_status_error)
@@ -603,10 +600,7 @@ class TestStatusCommands:
async def mock_positions_empty() -> None:
"""Mock /positions handler with no positions."""
-message = (
-"<b>💼 Account Summary</b>\n\n"
-"No balance information available."
-)
+message = "<b>💼 Account Summary</b>\n\nNo balance information available."
await client.send_message(message)
handler.register_command("positions", mock_positions_empty)
@@ -639,9 +633,7 @@ class TestStatusCommands:
async def mock_positions_error() -> None:
"""Mock /positions handler with error."""
-await client.send_message(
-"<b>⚠️ Error</b>\n\nFailed to retrieve positions."
-)
+await client.send_message("<b>⚠️ Error</b>\n\nFailed to retrieve positions.")
handler.register_command("positions", mock_positions_error)

View File

@@ -0,0 +1,123 @@
from __future__ import annotations
import importlib.util
from pathlib import Path
def _load_module():
script_path = Path(__file__).resolve().parents[1] / "scripts" / "validate_docs_sync.py"
spec = importlib.util.spec_from_file_location("validate_docs_sync", script_path)
assert spec is not None
assert spec.loader is not None
module = importlib.util.module_from_spec(spec)
spec.loader.exec_module(module)
return module
def test_collect_command_endpoints_parses_markdown_table_rows() -> None:
module = _load_module()
text = "\n".join(
[
"| Endpoint | Description |",
"|----------|-------------|",
"| `GET /api/status` | status |",
"| `POST /api/run` | run |",
"| not-a-row | ignored |",
]
)
endpoints = module.collect_command_endpoints(text)
assert endpoints == ["GET /api/status", "POST /api/run"]
def test_validate_links_resolve_detects_absolute_and_broken_links(tmp_path) -> None:
module = _load_module()
doc = tmp_path / "doc.md"
existing = tmp_path / "ok.md"
existing.write_text("# ok\n", encoding="utf-8")
doc.write_text(
"\n".join(
[
"[ok](./ok.md)",
"[abs](/tmp/nowhere.md)",
"[broken](./missing.md)",
]
),
encoding="utf-8",
)
errors: list[str] = []
module.validate_links_resolve(doc, doc.read_text(encoding="utf-8"), errors)
assert any("absolute link is forbidden" in err for err in errors)
assert any("broken link" in err for err in errors)
def test_validate_summary_docs_reference_core_docs(monkeypatch) -> None:
module = _load_module()
errors: list[str] = []
fake_docs = {
str(module.REQUIRED_FILES["README.md"]): (
"docs/workflow.md docs/commands.md docs/testing.md"
),
str(module.REQUIRED_FILES["CLAUDE.md"]): "docs/workflow.md docs/commands.md",
}
def fake_read(path: Path) -> str:
return fake_docs[str(path)]
monkeypatch.setattr(module, "_read", fake_read)
module.validate_summary_docs_reference_core_docs(errors)
assert errors == []
def test_validate_summary_docs_reference_core_docs_reports_missing_links(
monkeypatch,
) -> None:
module = _load_module()
errors: list[str] = []
fake_docs = {
str(module.REQUIRED_FILES["README.md"]): "docs/workflow.md",
str(module.REQUIRED_FILES["CLAUDE.md"]): "docs/workflow.md",
}
def fake_read(path: Path) -> str:
return fake_docs[str(path)]
monkeypatch.setattr(module, "_read", fake_read)
module.validate_summary_docs_reference_core_docs(errors)
assert any("README.md" in err and "docs/commands.md" in err for err in errors)
assert any("README.md" in err and "docs/testing.md" in err for err in errors)
assert any("CLAUDE.md" in err and "docs/commands.md" in err for err in errors)
def test_validate_commands_endpoint_duplicates_reports_duplicates(monkeypatch) -> None:
module = _load_module()
errors: list[str] = []
text = "\n".join(
[
"| `GET /api/status` | status |",
"| `GET /api/status` | duplicate |",
]
)
def fake_read(path: Path) -> str:
assert path == module.REQUIRED_FILES["commands"]
return text
monkeypatch.setattr(module, "_read", fake_read)
module.validate_commands_endpoint_duplicates(errors)
assert errors
assert "duplicated API endpoint row -> GET /api/status" in errors[0]
def test_validate_testing_doc_has_dynamic_count_guidance(monkeypatch) -> None:
module = _load_module()
errors: list[str] = []
def fake_read(path: Path) -> str:
assert path == module.REQUIRED_FILES["testing"]
return "Use pytest --collect-only -q for dynamic counts."
monkeypatch.setattr(module, "_read", fake_read)
module.validate_testing_doc_has_dynamic_count_guidance(errors)
assert errors == []
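The first test in this file asserts that `collect_command_endpoints` pulls `METHOD /path` cells out of markdown table rows while skipping headers and malformed rows. A regex-based guess at that approach (this is a sketch of the implied behavior, not the script's actual code):

```python
# Hypothetical sketch of a markdown-table endpoint scraper matching the
# behavior tested above: only rows whose first cell is a backtick-quoted
# "METHOD /path" count; header, separator, and plain rows are ignored.
import re

ROW_RE = re.compile(r"^\|\s*`((?:GET|POST|PUT|DELETE|PATCH)\s+/\S+)`\s*\|")


def collect_command_endpoints(text: str) -> list[str]:
    return [m.group(1) for line in text.splitlines() if (m := ROW_RE.match(line))]


text = "\n".join(
    [
        "| Endpoint | Description |",
        "|----------|-------------|",
        "| `GET /api/status` | status |",
        "| `POST /api/run` | run |",
        "| not-a-row | ignored |",
    ]
)
print(collect_command_endpoints(text))  # ['GET /api/status', 'POST /api/run']
```

Anchoring on the leading `|` plus the backtick is what lets the separator row and the `not-a-row` line fall through without any special-casing.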

View File

@@ -0,0 +1,337 @@
from __future__ import annotations
import importlib.util
from pathlib import Path
from types import SimpleNamespace
def _load_module():
script_path = Path(__file__).resolve().parents[1] / "scripts" / "validate_governance_assets.py"
spec = importlib.util.spec_from_file_location("validate_governance_assets", script_path)
assert spec is not None
assert spec.loader is not None
module = importlib.util.module_from_spec(spec)
spec.loader.exec_module(module)
return module
def test_is_policy_file_detects_ouroboros_policy_docs() -> None:
module = _load_module()
assert module.is_policy_file("docs/ouroboros/85_loss_recovery_action_plan.md")
assert not module.is_policy_file("docs/ouroboros/01_requirements_registry.md")
assert not module.is_policy_file("docs/workflow.md")
assert not module.is_policy_file("docs/ouroboros/notes.txt")
def test_validate_registry_sync_requires_registry_update_when_policy_changes() -> None:
module = _load_module()
errors: list[str] = []
module.validate_registry_sync(
["docs/ouroboros/85_loss_recovery_action_plan.md"],
errors,
)
assert errors
assert "policy file changed without updating" in errors[0]
def test_validate_registry_sync_passes_when_registry_included() -> None:
module = _load_module()
errors: list[str] = []
module.validate_registry_sync(
[
"docs/ouroboros/85_loss_recovery_action_plan.md",
"docs/ouroboros/01_requirements_registry.md",
],
errors,
)
assert errors == []
def test_load_changed_files_supports_explicit_paths() -> None:
module = _load_module()
errors: list[str] = []
changed = module.load_changed_files(
["./docs/ouroboros/85_loss_recovery_action_plan.md", " src/main.py "],
errors,
)
assert errors == []
assert changed == [
"docs/ouroboros/85_loss_recovery_action_plan.md",
"src/main.py",
]
def test_load_changed_files_with_range_uses_git_diff(monkeypatch) -> None:
module = _load_module()
errors: list[str] = []
def fake_run(cmd, check, capture_output, text): # noqa: ANN001
assert cmd[:3] == ["git", "diff", "--name-only"]
assert check is True
assert capture_output is True
assert text is True
return SimpleNamespace(
stdout="docs/ouroboros/85_loss_recovery_action_plan.md\nsrc/main.py\n"
)
monkeypatch.setattr(module.subprocess, "run", fake_run)
changed = module.load_changed_files(["abc...def"], errors)
assert errors == []
assert changed == [
"docs/ouroboros/85_loss_recovery_action_plan.md",
"src/main.py",
]
def test_validate_task_req_mapping_reports_missing_req_reference(tmp_path) -> None:
module = _load_module()
doc = tmp_path / "work_orders.md"
doc.write_text(
"- `TASK-OPS-999` no req mapping line\n",
encoding="utf-8",
)
errors: list[str] = []
module.validate_task_req_mapping(errors, task_doc=doc)
assert errors
assert "TASK without REQ mapping" in errors[0]
def test_validate_task_req_mapping_passes_when_req_present(tmp_path) -> None:
module = _load_module()
    doc = tmp_path / "work_orders.md"
    doc.write_text(
        "- `TASK-OPS-999` (`REQ-OPS-001`): enforce timezone labels\n",
        encoding="utf-8",
    )
    errors: list[str] = []
    module.validate_task_req_mapping(errors, task_doc=doc)
    assert errors == []


def test_validate_pr_traceability_fails_when_req_missing(monkeypatch) -> None:
    module = _load_module()
    monkeypatch.setenv("GOVERNANCE_PR_TITLE", "feat: update policy checker")
    monkeypatch.setenv("GOVERNANCE_PR_BODY", "Refs: TASK-OPS-001 TEST-ACC-007")
    errors: list[str] = []
    module.validate_pr_traceability(errors)
    assert errors
    assert "PR text missing REQ-ID reference" in errors


def test_validate_read_only_approval_requires_evidence(monkeypatch) -> None:
    module = _load_module()
    changed_files = ["src/core/risk_manager.py"]
    errors: list[str] = []
    warnings: list[str] = []
    monkeypatch.setenv(
        "GOVERNANCE_PR_BODY",
        "\n".join(
            [
                "## READ-ONLY Approval (Required when touching READ-ONLY files)",
                "- Touched READ-ONLY files: src/core/risk_manager.py",
                "- Human approval: TBD",
                "- Test suite 1: pytest -q",
                "- Test suite 2: TBD",
            ]
        ),
    )
    module.validate_read_only_approval(changed_files, errors, warnings)
    assert warnings == []
    assert any("Human approval" in err for err in errors)
    assert any("Test suite 2" in err for err in errors)


def test_validate_read_only_approval_passes_with_complete_evidence(monkeypatch) -> None:
    module = _load_module()
    changed_files = ["src/core/risk_manager.py"]
    errors: list[str] = []
    warnings: list[str] = []
    monkeypatch.setenv(
        "GOVERNANCE_PR_BODY",
        "\n".join(
            [
                "## READ-ONLY Approval (Required when touching READ-ONLY files)",
                "- Touched READ-ONLY files: src/core/risk_manager.py",
                "- Human approval: https://example.com/review/123",
                "- Test suite 1: pytest -q tests/test_risk.py",
                "- Test suite 2: pytest -q tests/test_main.py -k risk",
            ]
        ),
    )
    module.validate_read_only_approval(changed_files, errors, warnings)
    assert errors == []
    assert warnings == []


def test_validate_read_only_approval_fails_without_pr_body(monkeypatch) -> None:
    module = _load_module()
    changed_files = ["src/core/risk_manager.py"]
    errors: list[str] = []
    warnings: list[str] = []
    monkeypatch.delenv("GOVERNANCE_PR_BODY", raising=False)
    module.validate_read_only_approval(changed_files, errors, warnings)
    assert warnings == []
    assert errors
    assert "approval evidence is required" in errors[0]


def test_validate_read_only_approval_skips_when_no_readonly_file_changed() -> None:
    module = _load_module()
    changed_files = ["src/main.py"]
    errors: list[str] = []
    warnings: list[str] = []
    module.validate_read_only_approval(changed_files, errors, warnings)
    assert errors == []
    assert warnings == []
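The four tests above pin down the `validate_read_only_approval` contract: skip when no READ-ONLY file changed, fail hard without a PR body, and reject `TBD` evidence fields. A minimal sketch consistent with those tests (hypothetical reimplementation — the real helper and its READ-ONLY file registry live in the governance checker script; the registry and message wording here are assumptions):

```python
import os

# Assumed READ-ONLY registry; the real list lives in the checker script.
READ_ONLY_FILES = {"src/core/risk_manager.py"}
REQUIRED_FIELDS = ("Human approval", "Test suite 1", "Test suite 2")


def validate_read_only_approval(
    changed_files: list[str], errors: list[str], warnings: list[str]
) -> None:
    # Skip entirely when no READ-ONLY file was touched.
    if not any(f in READ_ONLY_FILES for f in changed_files):
        return
    body = os.environ.get("GOVERNANCE_PR_BODY", "")
    if not body:
        errors.append("READ-ONLY approval evidence is required in the PR body")
        return
    # Each evidence field must be present with a concrete value, not TBD.
    for field in REQUIRED_FIELDS:
        for line in body.splitlines():
            if line.strip().startswith(f"- {field}:"):
                value = line.split(":", 1)[1].strip()
                if not value or value == "TBD":
                    errors.append(f"{field} evidence is missing or TBD")
                break
        else:
            errors.append(f"{field} entry is missing from the approval section")
```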
def test_must_contain_enforces_workflow_newline_helper_tokens(tmp_path) -> None:
    module = _load_module()
    workflow_doc = tmp_path / "workflow.md"
    workflow_doc.write_text(
        "\n".join(
            [
                "Session Handover Gate (Mandatory)",
                "python3 scripts/session_handover_check.py --strict",
                "scripts/tea_comment.sh",
            ]
        ),
        encoding="utf-8",
    )
    errors: list[str] = []
    module.must_contain(
        workflow_doc,
        [
            "Session Handover Gate (Mandatory)",
            "session_handover_check.py --strict",
            "scripts/tea_comment.sh",
        ],
        errors,
    )
    assert errors == []


def test_must_contain_fails_when_workflow_missing_newline_helper_token(tmp_path) -> None:
    module = _load_module()
    workflow_doc = tmp_path / "workflow.md"
    workflow_doc.write_text(
        "\n".join(
            [
                "Session Handover Gate (Mandatory)",
                "python3 scripts/session_handover_check.py --strict",
            ]
        ),
        encoding="utf-8",
    )
    errors: list[str] = []
    module.must_contain(
        workflow_doc,
        ["scripts/tea_comment.sh"],
        errors,
    )
    assert any("scripts/tea_comment.sh" in err for err in errors)


def test_must_contain_enforces_commands_newline_section_tokens(tmp_path) -> None:
    module = _load_module()
    commands_doc = tmp_path / "commands.md"
    commands_doc.write_text(
        "\n".join(
            [
                "Session Handover Preflight (Mandatory)",
                "python3 scripts/session_handover_check.py --strict",
                "Comment Newline Escaping",
                "scripts/tea_comment.sh",
            ]
        ),
        encoding="utf-8",
    )
    errors: list[str] = []
    module.must_contain(
        commands_doc,
        [
            "Session Handover Preflight (Mandatory)",
            "session_handover_check.py --strict",
            "Comment Newline Escaping",
            "scripts/tea_comment.sh",
        ],
        errors,
    )
    assert errors == []


def test_must_contain_fails_when_commands_missing_newline_section_token(tmp_path) -> None:
    module = _load_module()
    commands_doc = tmp_path / "commands.md"
    commands_doc.write_text(
        "\n".join(
            [
                "Session Handover Preflight (Mandatory)",
                "python3 scripts/session_handover_check.py --strict",
                "scripts/tea_comment.sh",
            ]
        ),
        encoding="utf-8",
    )
    errors: list[str] = []
    module.must_contain(
        commands_doc,
        ["Comment Newline Escaping"],
        errors,
    )
    assert any("Comment Newline Escaping" in err for err in errors)
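All four `must_contain` cases reduce to a substring check per required token, with the missing token echoed in the error message. A plausible sketch of the helper (hypothetical reimplementation for illustration; the message format is assumed):

```python
from pathlib import Path


def must_contain(doc: Path, tokens: list[str], errors: list[str]) -> None:
    # Read the document once and record one error per missing token,
    # echoing the token so callers can assert on the message.
    text = doc.read_text(encoding="utf-8")
    for token in tokens:
        if token not in text:
            errors.append(f"{doc}: missing required token {token!r}")
```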
def test_validate_task_test_pairing_reports_missing_test_reference(tmp_path) -> None:
    module = _load_module()
    doc = tmp_path / "work_orders.md"
    doc.write_text(
        "- `TASK-OPS-999` (`REQ-OPS-001`): enforce timezone labels only\n",
        encoding="utf-8",
    )
    errors: list[str] = []
    module.validate_task_test_pairing(errors, task_doc=doc)
    assert errors
    assert "TASK without TEST mapping" in errors[0]


def test_validate_task_test_pairing_passes_when_test_present(tmp_path) -> None:
    module = _load_module()
    doc = tmp_path / "work_orders.md"
    doc.write_text(
        "- `TASK-OPS-999` (`REQ-OPS-001`,`TEST-ACC-007`): enforce timezone labels\n",
        encoding="utf-8",
    )
    errors: list[str] = []
    module.validate_task_test_pairing(errors, task_doc=doc)
    assert errors == []
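The pairing rule these two tests encode — every TASK bullet must cite at least one `TEST-*` ID in its parenthesized ID list — can be sketched as follows (hypothetical reimplementation; the bullet regex is an assumption inferred from the fixture lines):

```python
import re
from pathlib import Path

# Assumed bullet shape: - `TASK-XXX-NNN` (`REQ-...`,`TEST-...`): description
_TASK_LINE = re.compile(r"`(TASK-[A-Z]+-\d+)`\s*\(([^)]*)\)")


def validate_task_test_pairing(errors: list[str], *, task_doc: Path) -> None:
    # Every TASK bullet must cite at least one TEST-* ID in its ID list.
    for line in task_doc.read_text(encoding="utf-8").splitlines():
        match = _TASK_LINE.search(line)
        if match and "TEST-" not in match.group(2):
            errors.append(f"TASK without TEST mapping: {match.group(1)}")
```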
def test_validate_timezone_policy_tokens_requires_kst_or_utc(tmp_path, monkeypatch) -> None:
    module = _load_module()
    docs = tmp_path / "docs"
    ouroboros = docs / "ouroboros"
    docs.mkdir(parents=True)
    ouroboros.mkdir(parents=True)
    monkeypatch.chdir(tmp_path)
    (ouroboros / "01_requirements_registry.md").write_text("REQ-OPS-001\nUTC\n", encoding="utf-8")
    (ouroboros / "30_code_level_work_orders.md").write_text(
        "TASK-OPS-001 (`REQ-OPS-001`,`TEST-ACC-007`)\nKST\n",
        encoding="utf-8",
    )
    (docs / "workflow.md").write_text("timezone policy: KST and UTC\n", encoding="utf-8")
    errors: list[str] = []
    module.validate_timezone_policy_tokens(errors)
    assert errors == []

    (docs / "workflow.md").write_text("timezone policy missing labels\n", encoding="utf-8")
    errors = []
    module.validate_timezone_policy_tokens(errors)
    assert errors
    assert any("missing timezone policy token" in err for err in errors)
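Only part of the timezone rule is observable from this test: rewriting `docs/workflow.md` without the labels is what flips the checker from pass to fail. A deliberately narrow sketch under that assumption (hypothetical — the real checker presumably scans the other ouroboros documents too, which this sketch omits):

```python
from pathlib import Path


def validate_timezone_policy_tokens(errors: list[str]) -> None:
    # Assumed rule, inferred from the test above: the workflow doc must
    # label times with both KST and UTC. The real checker may cover more
    # documents; only the workflow.md requirement is observable here.
    workflow = Path("docs/workflow.md")
    text = workflow.read_text(encoding="utf-8")
    for token in ("KST", "UTC"):
        if token not in text:
            errors.append(f"{workflow}: missing timezone policy token {token}")
```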

View File

@@ -0,0 +1,81 @@
from __future__ import annotations

import importlib.util
from pathlib import Path


def _load_module():
    script_path = Path(__file__).resolve().parents[1] / "scripts" / "validate_ouroboros_docs.py"
    spec = importlib.util.spec_from_file_location("validate_ouroboros_docs", script_path)
    assert spec is not None
    assert spec.loader is not None
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)
    return module


def test_validate_plan_source_link_accepts_canonical_source_path() -> None:
    module = _load_module()
    errors: list[str] = []
    path = Path("docs/ouroboros/README.md").resolve()
    assert module.validate_plan_source_link(path, "./source/ouroboros_plan_v2.txt", errors) is False
    assert module.validate_plan_source_link(path, "./source/ouroboros_plan_v3.txt", errors) is False
    assert errors == []


def test_validate_plan_source_link_rejects_root_relative_path() -> None:
    module = _load_module()
    errors: list[str] = []
    path = Path("docs/ouroboros/README.md").resolve()
    handled = module.validate_plan_source_link(
        path,
        "/home/agentson/repos/The-Ouroboros/ouroboros_plan_v2.txt",
        errors,
    )
    assert handled is True
    assert errors
    assert "invalid plan link path" in errors[0]
    assert "use ./source/ouroboros_plan_v2.txt" in errors[0]


def test_validate_plan_source_link_rejects_repo_root_relative_path() -> None:
    module = _load_module()
    errors: list[str] = []
    path = Path("docs/ouroboros/README.md").resolve()
    handled = module.validate_plan_source_link(path, "../../ouroboros_plan_v2.txt", errors)
    assert handled is True
    assert errors
    assert "invalid plan link path" in errors[0]
    assert "must resolve to docs/ouroboros/source/ouroboros_plan_v2.txt" in errors[0]


def test_validate_plan_source_link_accepts_fragment_suffix() -> None:
    module = _load_module()
    errors: list[str] = []
    path = Path("docs/ouroboros/README.md").resolve()
    handled = module.validate_plan_source_link(path, "./source/ouroboros_plan_v2.txt#sec", errors)
    assert handled is False
    assert errors == []


def test_validate_links_avoids_duplicate_error_for_invalid_plan_link(tmp_path) -> None:
    module = _load_module()
    errors: list[str] = []
    doc = tmp_path / "doc.md"
    doc.write_text(
        "[v2](/home/agentson/repos/The-Ouroboros/ouroboros_plan_v2.txt)\n",
        encoding="utf-8",
    )
    module.validate_links(doc, doc.read_text(encoding="utf-8"), errors)
    assert len(errors) == 1
    assert "invalid plan link path" in errors[0]
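Taken together, these tests describe a handled-flag protocol: `validate_plan_source_link` returns `True` only when it has flagged an invalid plan link, which is how `validate_links` avoids emitting a duplicate error for the same link. A sketch consistent with that contract (hypothetical reimplementation; the exact message formats and the unused `doc_path` parameter are assumptions carried over from the test signatures):

```python
from pathlib import Path


def validate_plan_source_link(doc_path: Path, link: str, errors: list[str]) -> bool:
    # Return True only when this helper has flagged an invalid plan link,
    # so the generic link validator can skip it and avoid duplicate errors.
    target = link.split("#", 1)[0]  # tolerate fragment suffixes like #sec
    name = Path(target).name
    if not name.startswith("ouroboros_plan_"):
        return False  # not a plan link; leave it to generic validation
    if target.startswith("./source/"):
        return False  # canonical form: nothing to report
    if target.startswith("/"):
        errors.append(f"invalid plan link path {link!r}: use ./source/{name}")
    else:
        errors.append(
            f"invalid plan link path {link!r}: "
            f"must resolve to docs/ouroboros/source/{name}"
        )
    return True
```

The boolean return is the deduplication mechanism: the caller only appends its own generic broken-link error when this helper reports `False`.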

View File

@@ -80,9 +80,7 @@ class TestVolatilityAnalyzer:
         # ATR should be roughly the average true range
         assert 3.0 <= atr <= 6.0

-    def test_calculate_atr_insufficient_data(
-        self, volatility_analyzer: VolatilityAnalyzer
-    ) -> None:
+    def test_calculate_atr_insufficient_data(self, volatility_analyzer: VolatilityAnalyzer) -> None:
         """Test ATR with insufficient data returns 0."""
         high_prices = [110.0, 112.0]
         low_prices = [105.0, 107.0]
@@ -120,17 +118,13 @@ class TestVolatilityAnalyzer:
         surge = volatility_analyzer.calculate_volume_surge(1000.0, 0.0)
         assert surge == 1.0

-    def test_calculate_pv_divergence_bullish(
-        self, volatility_analyzer: VolatilityAnalyzer
-    ) -> None:
+    def test_calculate_pv_divergence_bullish(self, volatility_analyzer: VolatilityAnalyzer) -> None:
         """Test bullish price-volume divergence."""
         # Price up + Volume up = bullish
         divergence = volatility_analyzer.calculate_pv_divergence(5.0, 2.0)
         assert divergence > 0.0

-    def test_calculate_pv_divergence_bearish(
-        self, volatility_analyzer: VolatilityAnalyzer
-    ) -> None:
+    def test_calculate_pv_divergence_bearish(self, volatility_analyzer: VolatilityAnalyzer) -> None:
         """Test bearish price-volume divergence."""
         # Price up + Volume down = bearish divergence
         divergence = volatility_analyzer.calculate_pv_divergence(5.0, 0.5)
@@ -144,9 +138,7 @@ class TestVolatilityAnalyzer:
         divergence = volatility_analyzer.calculate_pv_divergence(-5.0, 2.0)
         assert divergence < 0.0

-    def test_calculate_momentum_score(
-        self, volatility_analyzer: VolatilityAnalyzer
-    ) -> None:
+    def test_calculate_momentum_score(self, volatility_analyzer: VolatilityAnalyzer) -> None:
         """Test momentum score calculation."""
         score = volatility_analyzer.calculate_momentum_score(
             price_change_1m=5.0,
@@ -500,9 +492,7 @@ class TestMarketScanner:
         # Should keep all current stocks since they're all in top movers
         assert set(updated) == set(current_watchlist)

-    def test_get_updated_watchlist_max_replacements(
-        self, scanner: MarketScanner
-    ) -> None:
+    def test_get_updated_watchlist_max_replacements(self, scanner: MarketScanner) -> None:
         """Test that max_replacements limit is respected."""
         current_watchlist = ["000660", "035420", "005490"]
@@ -556,8 +546,6 @@ class TestMarketScanner:
         active_count = 0
         peak_count = 0
         original_scan = scanner.scan_stock

         async def tracking_scan(code: str, market: Any) -> VolatilityMetrics:
             nonlocal active_count, peak_count
             active_count += 1