37 Commits

Author SHA1 Message Date
841f8eaa8d Merge pull request 'docs: apply all v2/v3 implementation audit document feedback (#349)' (#351) from feature/issue-349-doc-audit-feedback into feature/v3-session-policy-stream
Some checks failed
Gitea CI / test (push) Failing after 1m5s
Reviewed-on: #351
2026-03-01 17:10:07 +09:00
agentson
6b34367656 docs: apply all v2/v3 implementation audit document feedback (#349)
Some checks failed
Gitea CI / test (push) Failing after 3s
Gitea CI / test (pull_request) Failing after 3s
Incorporates all [comments] left across 11 review cycles into the document bodies and removes the comment blocks.

Changed documents:
- docs/architecture.md: SmartScanner operating mode (both), 10 dashboard APIs,
  DB schema (session_id/fx_pnl/mode), refreshed config variables
- docs/commands.md: added /api/pnl/history and /api/positions endpoints
- docs/testing.md: removed fixed test counts, updated SmartScanner fallback,
  reflected the 10 Dashboard API routes
- README.md: removed fixed figures, noted Gitea CI, marked per-file counts as 'varies by CI'
- CLAUDE.md: renamed the SmartScanner section, removed fixed figures
- docs/requirements-log.md: added implementation items #318~#331
- docs/ouroboros/80_implementation_audit.md: split ROOT-5/6/7,
  added function names to REQ-V3-008, updated to v3 ~85% / governance ~60%
- docs/ouroboros/85_loss_recovery_action_plan.md: added function names to ACT-07,
  updated test counts, improved accuracy of sections 6.1/6.2
- docs/ouroboros/60_repo_enforcement_checklist.md: added a CI job/step distinction table
- docs/ouroboros/README.md: added (A)/(B) auxiliary labels to the 50_* docs

Closes #349
2026-03-01 17:06:56 +09:00
e65a0e3585 Merge pull request 'test: pin regression for explicit session_id propagation (#326)' (#348) from feature/issue-326-session-id-explicit-propagation into feature/v3-session-policy-stream
Some checks failed
Gitea CI / test (push) Has been cancelled
Reviewed-on: #348
2026-03-01 13:42:17 +09:00
agentson
e3a3aada83 test: set rt_cd success response in session-id daily regression test (#326)
Some checks failed
Gitea CI / test (push) Failing after 3s
Gitea CI / test (pull_request) Failing after 3s
2026-03-01 13:38:57 +09:00
agentson
db316c539b test: align daily session mock order response with rt_cd success path (#326)
Some checks failed
Gitea CI / test (push) Has been cancelled
Gitea CI / test (pull_request) Failing after 3s
2026-03-01 13:37:04 +09:00
agentson
2df787757a test: enforce explicit runtime session_id propagation in realtime/daily logs (#326)
Some checks failed
Gitea CI / test (push) Has been cancelled
Gitea CI / test (pull_request) Failing after 4s
2026-03-01 10:14:59 +09:00
5f079206c6 Merge pull request 'infra: strengthen automated CI validation (policy registry + TASK-REQ mapping) (#330)' (#347) from feature/issue-330-governance-ci-guard into feature/v3-session-policy-stream
Some checks failed
Gitea CI / test (push) Has been cancelled
Reviewed-on: #347
2026-03-01 10:11:18 +09:00
agentson
e9de950bec ci: wire governance traceability env in gitea workflow and guard zero SHA (#330)
Some checks failed
Gitea CI / test (push) Failing after 3s
Gitea CI / test (pull_request) Failing after 3s
2026-03-01 10:06:25 +09:00
agentson
c31ee37f13 infra: enforce governance sync and TASK-REQ mapping in CI (#330)
Some checks failed
Gitea CI / test (push) Has been cancelled
Gitea CI / test (pull_request) Failing after 3s
2026-03-01 09:58:45 +09:00
2ba1d1ad4d Merge pull request 'feat: switch Triple Barrier time barrier to calendar minutes (#329)' (#346) from feature/issue-329-triple-barrier-calendar-minutes into feature/v3-session-policy-stream
Some checks failed
Gitea CI / test (push) Has been cancelled
Reviewed-on: #346
2026-03-01 09:57:01 +09:00
agentson
273a3c182a refactor: simplify timestamp normalization after non-null validation (#329)
Some checks failed
Gitea CI / test (push) Failing after 3s
Gitea CI / test (pull_request) Failing after 4s
2026-03-01 09:50:45 +09:00
agentson
701350fb65 feat: switch backtest triple barrier to calendar-minute horizon (#329)
Some checks failed
Gitea CI / test (push) Has been cancelled
Gitea CI / test (pull_request) Failing after 3s
2026-03-01 09:44:24 +09:00
35d81fb73d Merge pull request 'feat: strengthen price/session revalidation on blackout recovery (#328)' (#345) from feature/issue-328-blackout-revalidation into feature/v3-session-policy-stream
Some checks failed
Gitea CI / test (push) Has been cancelled
Reviewed-on: #345
2026-03-01 09:43:09 +09:00
agentson
5fae9765e7 test: add blackout recovery overseas/failure revalidation coverage (#328)
Some checks failed
Gitea CI / test (push) Failing after 3s
Gitea CI / test (pull_request) Failing after 3s
2026-03-01 09:40:00 +09:00
agentson
0ceb2dfdc9 feat: revalidate blackout recovery orders by price/session context (#328)
Some checks failed
Gitea CI / test (push) Has been cancelled
Gitea CI / test (pull_request) Failing after 3s
2026-03-01 09:33:28 +09:00
89347ee525 Merge pull request 'feat: dynamically reload risk parameters on session transitions (#327)' (#344) from feature/issue-327-session-risk-reload into feature/v3-session-policy-stream
Some checks failed
Gitea CI / test (push) Has been cancelled
Reviewed-on: #344
2026-03-01 09:31:14 +09:00
agentson
42c06929ea test: add session-risk reload edge-case coverage (#327)
Some checks failed
Gitea CI / test (push) Failing after 3s
Gitea CI / test (pull_request) Failing after 3s
2026-02-28 22:20:59 +09:00
agentson
5facd22ef9 feat: reload session risk profile on session transitions (#327)
Some checks failed
Gitea CI / test (push) Has been cancelled
Gitea CI / test (pull_request) Failing after 4s
2026-02-28 21:04:06 +09:00
3af62ce598 Merge pull request 'feat: supply real features (ATR, pred_down_prob) to v2 staged exit (#325)' (#343) from feature/issue-325-staged-exit-real-features into feature/v3-session-policy-stream
Some checks failed
Gitea CI / test (push) Has been cancelled
Reviewed-on: #343
2026-02-28 20:59:38 +09:00
agentson
62cd8a81a4 feat: feed staged-exit with ATR/RSI runtime features (#325)
Some checks failed
Gitea CI / test (push) Failing after 3s
Gitea CI / test (pull_request) Failing after 3s
2026-02-28 20:58:23 +09:00
dd8549b912 Merge pull request 'feat: KR ATR-based dynamic hard-stop threshold (#318)' (#342) from feature/issue-318-kr-atr-dynamic-stoploss into feature/v3-session-policy-stream
Some checks failed
Gitea CI / test (push) Has been cancelled
Reviewed-on: #342
2026-02-28 20:56:18 +09:00
agentson
8bba85da1e feat: add KR ATR-based dynamic hard-stop threshold (#318)
Some checks failed
Gitea CI / test (push) Failing after 4s
Gitea CI / test (pull_request) Failing after 3s
2026-02-28 18:30:52 +09:00
fc6083bd2a Merge pull request 'feat: stop-loss reentry cooldown guard (#319)' (#341) from feature/issue-319-stoploss-reentry-cooldown into feature/v3-session-policy-stream
Some checks failed
Gitea CI / test (push) Has been cancelled
Reviewed-on: #341
2026-02-28 18:27:12 +09:00
agentson
5f53b02da8 test: add stop-loss reentry cooldown behavioral coverage (#319)
Some checks failed
Gitea CI / test (pull_request) Failing after 4s
Gitea CI / test (push) Failing after 4s
2026-02-28 18:24:28 +09:00
agentson
82808a8493 feat: enforce stop-loss reentry cooldown window (#319)
2026-02-28 18:24:28 +09:00
9456d66de4 Merge pull request 'feat: US minimum price entry filter (#320)' (#340) from feature/issue-320-us-min-price-filter into feature/v3-session-policy-stream
Some checks failed
Gitea CI / test (push) Has been cancelled
Reviewed-on: #340
2026-02-28 18:22:28 +09:00
33b97f21ac Merge pull request 'fix: log blackout recovery executions to DB (#324)' (#339) from feature/issue-324-blackout-recovery-trade-log into feature/v3-session-policy-stream
Some checks failed
Gitea CI / test (push) Has been cancelled
Reviewed-on: #339
2026-02-28 18:22:11 +09:00
3b135c3080 Merge pull request 'fix: SELL outcome PnL uses sell quantity (#322)' (#337) from feature/issue-322-sell-pnl-sell-qty into feature/v3-session-policy-stream
Some checks failed
Gitea CI / test (push) Has been cancelled
Reviewed-on: #337
2026-02-28 18:21:34 +09:00
1b0d5568d3 Merge pull request 'infra: governance registry sync gate in CI (#330)' (#335) from feature/issue-330-governance-ci-guard into feature/v3-session-policy-stream
Some checks failed
Gitea CI / test (push) Has been cancelled
Reviewed-on: #335
2026-02-28 18:21:10 +09:00
agentson
2406a80782 test: add governance validator unit coverage (#330)
Some checks failed
Gitea CI / test (push) Has been cancelled
Gitea CI / test (pull_request) Failing after 53s
2026-02-28 17:40:51 +09:00
b8569d9de1 Merge pull request 'fix: exchange-aware latest BUY matching (#323)' (#338) from feature/issue-323-buy-match-exchange-code into feature/v3-session-policy-stream
Some checks failed
Gitea CI / test (push) Has been cancelled
Reviewed-on: #338
2026-02-28 17:37:43 +09:00
agentson
9267f1fb77 test: add US minimum price boundary and KR-scope coverage (#320)
Some checks failed
Gitea CI / test (push) Has been cancelled
Gitea CI / test (pull_request) Has been cancelled
2026-02-28 17:15:10 +09:00
agentson
fd0246769a test: add sell qty fallback guard and quantity-basis coverage (#322)
Some checks failed
Gitea CI / test (push) Has been cancelled
Gitea CI / test (pull_request) Has been cancelled
2026-02-28 17:13:56 +09:00
agentson
08607eaa56 feat: block US BUY entries below minimum price threshold (#320)
Some checks failed
Gitea CI / test (push) Has been cancelled
Gitea CI / test (pull_request) Has been cancelled
2026-02-28 14:40:19 +09:00
agentson
5c107d2435 fix: persist blackout recovery executions to trades log (#324)
Some checks failed
Gitea CI / test (push) Has been cancelled
Gitea CI / test (pull_request) Has been cancelled
2026-02-28 14:39:30 +09:00
agentson
6d7e6557d2 fix: compute SELL decision outcome using sell quantity (#322)
Some checks failed
Gitea CI / test (push) Has been cancelled
Gitea CI / test (pull_request) Has been cancelled
2026-02-28 14:38:10 +09:00
agentson
2e394cd17c infra: enforce governance registry sync checks in CI (#330)
Some checks failed
Gitea CI / test (push) Has been cancelled
Gitea CI / test (pull_request) Has been cancelled
2026-02-28 14:36:05 +09:00
19 changed files with 2097 additions and 152 deletions

View File

@@ -13,6 +13,8 @@ jobs:
     steps:
       - name: Checkout
         uses: actions/checkout@v4
+        with:
+          fetch-depth: 0
       - name: Set up Python
         uses: actions/setup-python@v5

@@ -26,7 +28,21 @@ jobs:
         run: python3 scripts/session_handover_check.py --strict
       - name: Validate governance assets
-        run: python3 scripts/validate_governance_assets.py
+        env:
+          GOVERNANCE_PR_TITLE: ${{ github.event.pull_request.title }}
+          GOVERNANCE_PR_BODY: ${{ github.event.pull_request.body }}
+        run: |
+          RANGE=""
+          if [ "${{ github.event_name }}" = "pull_request" ] && [ -n "${{ github.event.pull_request.base.sha }}" ]; then
+            RANGE="${{ github.event.pull_request.base.sha }}...${{ github.sha }}"
+          elif [ -n "${{ github.event.before }}" ] && [ "${{ github.event.before }}" != "0000000000000000000000000000000000000000" ]; then
+            RANGE="${{ github.event.before }}...${{ github.sha }}"
+          fi
+          if [ -n "$RANGE" ]; then
+            python3 scripts/validate_governance_assets.py "$RANGE"
+          else
+            python3 scripts/validate_governance_assets.py
+          fi
       - name: Validate Ouroboros docs
         run: python3 scripts/validate_ouroboros_docs.py
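The range-selection logic in the step above can be re-expressed in Python, which makes the zero-SHA guard easier to see in isolation. This is an illustrative sketch; the function name and the way event fields are passed in are assumptions, not project code.

```python
from typing import Optional

# Gitea/GitHub Actions send an all-zero "before" SHA on branch-creation pushes,
# which has no diffable parent commit.
ZERO_SHA = "0" * 40

def pick_validation_range(event_name: str, base_sha: str,
                          before: str, sha: str) -> Optional[str]:
    """Return an "<old>...<new>" range for changed-file validation,
    or None to fall back to a full-repository scan."""
    if event_name == "pull_request" and base_sha:
        return f"{base_sha}...{sha}"   # PR: diff the PR base against the head
    if before and before != ZERO_SHA:
        return f"{before}...{sha}"     # push: diff the previous tip against the head
    return None                        # branch creation / missing context: full scan
```

The shell step then either passes the range as an argument to `validate_governance_assets.py` or invokes it with no arguments for a full scan.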

View File

@@ -25,7 +25,21 @@ jobs:
         run: python3 scripts/session_handover_check.py --strict
       - name: Validate governance assets
-        run: python3 scripts/validate_governance_assets.py
+        env:
+          GOVERNANCE_PR_TITLE: ${{ github.event.pull_request.title }}
+          GOVERNANCE_PR_BODY: ${{ github.event.pull_request.body }}
+        run: |
+          RANGE=""
+          if [ "${{ github.event_name }}" = "pull_request" ]; then
+            RANGE="${{ github.event.pull_request.base.sha }}...${{ github.sha }}"
+          elif [ "${{ github.event_name }}" = "push" ] && [ "${{ github.event.before }}" != "0000000000000000000000000000000000000000" ]; then
+            RANGE="${{ github.event.before }}...${{ github.sha }}"
+          fi
+          if [ -n "$RANGE" ]; then
+            python3 scripts/validate_governance_assets.py "$RANGE"
+          else
+            python3 scripts/validate_governance_assets.py
+          fi
       - name: Validate Ouroboros docs
         run: python3 scripts/validate_ouroboros_docs.py

View File

@@ -81,9 +81,9 @@ SCANNER_TOP_N=3 # Max candidates per scan
 - **Evolution-ready** — Selection context logged for strategy optimization
 - **Fault-tolerant** — Falls back to static watchlist on API failure
 
-### Realtime Mode Only
+### Trading Mode Integration
 
-Smart Scanner runs in `TRADE_MODE=realtime` only. Daily mode uses static watchlists for batch efficiency.
+Smart Scanner runs in both `TRADE_MODE=realtime` and `daily` paths. On API failure, domestic stocks fall back to a static watchlist; overseas stocks fall back to a dynamic universe (active positions, recent holdings).
 
 ## Documentation

@@ -122,7 +122,7 @@ src/
 ├── broker/        # KIS API client (domestic + overseas)
 ├── context/       # L1-L7 hierarchical memory system
 ├── core/          # Risk manager (READ-ONLY)
-├── dashboard/     # FastAPI read-only monitoring (8 API endpoints)
+├── dashboard/     # FastAPI read-only monitoring (10 API endpoints)
 ├── data/          # External data integration (news, market data, calendar)
 ├── evolution/     # Self-improvement (optimizer, daily review, scorecard)
 ├── logging/       # Decision logger (audit trail)

@@ -133,7 +133,7 @@ src/
 ├── main.py        # Trading loop orchestrator
 └── config.py      # Settings (from .env)
 
-tests/             # 551 tests across 25 files
+tests/             # 998 tests across 41 files
 docs/              # Extended documentation
 ```
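The fallback behavior described in the diff above (static watchlist for domestic, dynamic universe for overseas) amounts to a small selection rule. A minimal sketch with illustrative names, not the project's actual API:

```python
def fallback_universe(market: str, static_watchlist: list[str],
                      active: list[str], recent: list[str],
                      holdings: list[str]) -> list[str]:
    """On ranking-API failure: KR falls back to the static watchlist;
    overseas builds a dynamic universe from runtime state."""
    if market == "KR":
        return list(static_watchlist)
    # Overseas: active positions + recently traded + current holdings,
    # deduplicated while preserving first-seen order.
    return list(dict.fromkeys(active + recent + holdings))
```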

View File

@@ -39,7 +39,7 @@ Trades via the KIS (Korea Investment & Securities) API, decides with Google Gemini, and …
 | Context | `src/context/` | L1-L7 hierarchical memory system |
 | Analysis | `src/analysis/` | RSI, ATR, Smart Volatility Scanner |
 | Notifications | `src/notifications/` | Two-way Telegram (alerts + 9 commands) |
-| Dashboard | `src/dashboard/` | FastAPI read-only monitoring (8 APIs) |
+| Dashboard | `src/dashboard/` | FastAPI read-only monitoring (10 APIs) |
 | Evolution | `src/evolution/` | Strategy evolution + Daily Review + Scorecard |
 | Decision log | `src/logging/` | Audit trail of all trading decisions |
 | Data | `src/data/` | News, market data, economic calendar integration |

@@ -153,19 +153,16 @@ docker compose up -d ouroboros
 ## Tests
 
-551 tests implemented across 25 files. Minimum coverage 80%.
+998 tests implemented across 41 files. Minimum coverage 80%.
 
 ```
-tests/test_scenario_engine.py    — scenario matching (44)
-tests/test_data_integration.py   — external data integration (38)
-tests/test_pre_market_planner.py — playbook generation (37)
-tests/test_main.py               — trading-loop integration (37)
-tests/test_token_efficiency.py   — token optimization (34)
-tests/test_strategy_models.py    — strategy model validation (33)
-tests/test_telegram_commands.py  — Telegram commands (31)
-tests/test_latency_control.py    — latency control (30)
-tests/test_telegram.py           — Telegram alerts (25)
-... and 16 more files
+tests/test_main.py               — trading-loop integration
+tests/test_scenario_engine.py    — scenario matching
+tests/test_pre_market_planner.py — playbook generation
+tests/test_overseas_broker.py    — overseas broker
+tests/test_telegram_commands.py  — Telegram commands
+tests/test_telegram.py           — Telegram alerts
+... and 35 more files  ※ per-file counts may vary by CI run
 ```
 
 **Details**: [docs/testing.md](docs/testing.md)

@@ -177,8 +174,8 @@ tests/test_telegram.py — Telegram alerts (25)
 - **AI**: Google Gemini Pro
 - **DB**: SQLite (5 tables: trades, contexts, decision_logs, playbooks, context_metadata)
 - **Dashboard**: FastAPI + uvicorn
-- **Validation**: pytest + coverage (551 tests)
+- **Validation**: pytest + coverage (998 tests)
-- **CI/CD**: GitHub Actions
+- **CI/CD**: Gitea CI (`.gitea/workflows/ci.yml`)
 - **Deployment**: Docker + Docker Compose
 
 ## Project structure

@@ -212,7 +209,7 @@ The-Ouroboros/
 │   ├── config.py          # Pydantic settings
 │   ├── db.py              # SQLite database
 │   └── main.py            # Async trading loop
-├── tests/                 # 551 tests (25 files)
+├── tests/                 # 998 tests (41 files)
 ├── Dockerfile             # Multi-stage build
 ├── docker-compose.yml     # Service orchestration
 └── pyproject.toml         # Dependencies and tool configuration

View File

@@ -84,6 +84,37 @@ High-frequency trading with individual stock analysis:
 - Momentum scoring (0-100 scale)
 - Breakout/breakdown pattern detection
 
+**TripleBarrierLabeler** (`triple_barrier.py`) — Financial time-series labeling (v2)
+- Triple Barrier method: upper (take-profit), lower (stop-loss), time barrier
+- First-touch labeling: labels confirmed by whichever barrier is breached first
+- `max_holding_minutes` (calendar-minute) time barrier — session-aware, bar-period independent
+- Tie-break mode: `"stop_first"` (conservative) or `"take_first"`
+- Feature-label strict separation to prevent look-ahead bias
+
+**BacktestPipeline** (`backtest_pipeline.py`) — End-to-end validation pipeline (v2)
+- `run_v2_backtest_pipeline()`: cost guard → triple barrier labeling → walk-forward splits → fold scoring
+- `BacktestPipelineResult`: artifact contract for reproducible output
+- `fold_has_leakage()`: leakage detection utility
+
+**WalkForwardSplit** (`walk_forward_split.py`) — Time-series validation (v2)
+- Fold-based walk-forward splits (no random shuffling)
+- Purge/Embargo: excludes N bars before/after fold boundaries to prevent data leakage
+
+**BacktestExecutionModel** (`backtest_execution_model.py`) — Conservative fill simulation (v2/v3)
+- Session-aware slippage: KRX_REG 5bps, NXT_AFTER 15bps, US_REG 3bps, US_PRE/DAY 30-50bps
+- Order failure rate simulation per session
+- Partial fill rate simulation with min/max ratio bounds
+- Unfavorable-direction fill assumption (no simple close-price fill)
+
+**BacktestCostGuard** (`backtest_cost_guard.py`) — Cost model validator (v2)
+- `validate_backtest_cost_model()`: fail-fast check that session cost assumptions are present
+- Enforces realistic cost assumptions before any backtest run proceeds
+
 **SmartVolatilityScanner** (`smart_scanner.py`) — Python-first filtering pipeline
 
 - **Domestic (KR)**:
@@ -98,7 +129,7 @@ High-frequency trading with individual stock analysis:
 - **Step 4**: Return top N candidates (default 3)
 - **Fallback (overseas only)**: If ranking API is unavailable, uses dynamic universe
   from runtime active symbols + recent traded symbols + current holdings (no static watchlist)
-- **Realtime mode only**: Daily mode uses batch processing for API efficiency
+- **Both modes**: Primarily realtime, but also used for candidate selection in the daily path (`run_daily_session()`)
 
 **Benefits:**
 - Reduces Gemini API calls from 20-30 stocks to 1-3 qualified candidates
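The first-touch rule that the TripleBarrierLabeler additions describe can be sketched compactly. Assumptions: bars are `(minute_offset, high, low)` tuples and the helper name is hypothetical; the real labeler is richer (session-aware calendar-minute horizon, strict feature-label separation).

```python
def first_touch_label(entry: float, bars: list[tuple[int, float, float]],
                      take_pct: float, stop_pct: float,
                      max_holding_minutes: int,
                      tie_break: str = "stop_first") -> int:
    """+1 if take-profit is touched first, -1 if stop-loss is touched first,
    0 if the time barrier expires before either price barrier."""
    upper = entry * (1 + take_pct)   # take-profit barrier
    lower = entry * (1 - stop_pct)   # stop-loss barrier
    for minute, high, low in bars:
        if minute > max_holding_minutes:
            break                    # time barrier reached
        hit_up, hit_down = high >= upper, low <= lower
        if hit_up and hit_down:      # both barriers inside one bar: tie-break
            return -1 if tie_break == "stop_first" else 1
        if hit_up:
            return 1
        if hit_down:
            return -1
    return 0
```

The `"stop_first"` tie-break is the conservative choice: when one bar straddles both barriers, the label assumes the loss happened first.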
@@ -124,9 +155,9 @@ High-frequency trading with individual stock analysis:
 - Selects appropriate context layers for current market conditions
 
-### 4. Risk Manager (`src/core/risk_manager.py`)
+### 4. Risk Manager & Session Policy (`src/core/`)
 
-**RiskManager** — Safety circuit breaker and order validation
+**RiskManager** (`risk_manager.py`) — Safety circuit breaker and order validation
 
 > **READ-ONLY by policy** (see [`docs/agents.md`](./agents.md))
@@ -136,8 +167,59 @@ High-frequency trading with individual stock analysis:
 - **Fat-Finger Protection**: Rejects orders exceeding 30% of available cash
 - Must always be enforced, cannot be disabled
 
+**OrderPolicy** (`order_policy.py`) — Session classification and order type enforcement (v3)
+- `classify_session_id()`: Classifies current KR/US session from KST clock
+  - KR: `NXT_PRE` (08:00-08:50), `KRX_REG` (09:00-15:30), `NXT_AFTER` (15:30-20:00)
+  - US: `US_DAY` (10:00-18:00), `US_PRE` (18:00-23:30), `US_REG` (23:30-06:00), `US_AFTER` (06:00-07:00)
+- Low-liquidity session detection: `NXT_AFTER`, `US_PRE`, `US_DAY`, `US_AFTER`
+- Market order forbidden in low-liquidity sessions (`OrderPolicyRejected` raised)
+- Limit/IOC/FOK orders always allowed
+
+**KillSwitch** (`kill_switch.py`) — Emergency trading halt orchestration (v2)
+- Fixed 5-step atomic sequence:
+  1. Block new orders (`new_orders_blocked = True`)
+  2. Cancel all unfilled orders
+  3. Refresh order state (query final status)
+  4. Reduce risk (force-close or reduce positions)
+  5. Snapshot state + send Telegram alert
+- Async, injectable step callables — each step individually testable
+- Highest priority: overrides overnight exception and all other rules
+
+**BlackoutManager** (`blackout_manager.py`) — KIS maintenance window handling (v3)
+- Configurable blackout windows (e.g., `23:30-00:10 KST`)
+- `queue_order()`: Queues order intent during blackout, enforces max queue size
+- `pop_recovery_batch()`: Returns queued intents after recovery
+- Recovery revalidation path (in `src/main.py`):
+  - Stale BUY drop (position already exists)
+  - Stale SELL drop (position absent)
+  - `validate_order_policy()` rechecked
+  - Price drift check (>5% → drop, configurable via `BLACKOUT_RECOVERY_MAX_PRICE_DRIFT_PCT`)
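The session windows listed above, including the midnight-wrapping `US_REG` session, can be sketched with half-open KST intervals. The table constants and the `classify()` helper are illustrative, not the actual `src/core/order_policy.py` code.

```python
from datetime import time

# KST session windows from the bullets above, as half-open [start, end) intervals.
KR_SESSIONS = [
    ("NXT_PRE", time(8, 0), time(8, 50)),
    ("KRX_REG", time(9, 0), time(15, 30)),
    ("NXT_AFTER", time(15, 30), time(20, 0)),
]
US_SESSIONS = [
    ("US_DAY", time(10, 0), time(18, 0)),
    ("US_PRE", time(18, 0), time(23, 30)),
    ("US_REG", time(23, 30), time(6, 0)),   # wraps past midnight KST
    ("US_AFTER", time(6, 0), time(7, 0)),
]
LOW_LIQUIDITY = {"NXT_AFTER", "US_PRE", "US_DAY", "US_AFTER"}

def classify(market: str, now_kst: time) -> str:
    """Return the session id for a KST wall-clock time, or 'UNKNOWN'."""
    for name, start, end in (KR_SESSIONS if market == "KR" else US_SESSIONS):
        if start <= end:
            if start <= now_kst < end:
                return name
        elif now_kst >= start or now_kst < end:  # window wraps midnight
            return name
    return "UNKNOWN"
```

With this shape, the market-order ban reduces to one check: reject if the order is a market order and `classify(...)` lands in `LOW_LIQUIDITY`.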
 ### 5. Strategy (`src/strategy/`)
 
+**PositionStateMachine** (`position_state_machine.py`) — 4-state sell state machine (v2)
+- States: `HOLDING` → `BE_LOCK` → `ARMED` → `EXITED`
+  - `HOLDING`: Normal holding
+  - `BE_LOCK`: Profit ≥ `be_arm_pct` — stop-loss elevated to break-even
+  - `ARMED`: Profit ≥ `arm_pct` — peak-tracking trailing stop active
+  - `EXITED`: Position closed
+- `promote_state()`: Immediately elevates to highest admissible state (handles gaps/skips)
+- `evaluate_exit_first()`: EXITED conditions checked before state promotion
+- Monotonic: states only move up, never down
+
+**ExitRules** (`exit_rules.py`) — 4-layer composite exit logic (v2)
+- **Hard Stop**: `unrealized <= hard_stop_pct` (always enforced, ATR-adaptive for KR)
+- **Break-Even Lock**: Once in BE_LOCK/ARMED, exit if price falls to entry price
+- **ATR Trailing Stop**: `trailing_stop_price = peak_price - (atr_multiplier_k × ATR)`
+- **Model Signal**: Exit if `pred_down_prob >= model_prob_threshold AND liquidity_weak`
+- `evaluate_exit()`: Returns `ExitEvaluation` with next state, exit flag, reason, trailing price
+- `ExitRuleConfig`: Frozen dataclass with all tunable parameters
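The exit layers above combine into a single decision per tick. A minimal sketch of the first three layers (hard stop, break-even lock, ATR trailing stop) using the parameter names from the bullets; the model-signal layer and the `ExitEvaluation` return type are omitted, and the helper is illustrative rather than the project's ExitRules implementation.

```python
def should_exit(state: str, entry: float, price: float, peak: float,
                atr: float, hard_stop_pct: float, atr_multiplier_k: float) -> bool:
    unrealized_pct = (price - entry) / entry * 100
    # Layer 1: hard stop — always enforced, regardless of state.
    if unrealized_pct <= hard_stop_pct:
        return True
    # Layer 2: break-even lock — once BE_LOCK/ARMED, never give back to a loss.
    if state in ("BE_LOCK", "ARMED") and price <= entry:
        return True
    # Layer 3: ATR trailing stop — only active once ARMED (peak-tracking).
    if state == "ARMED":
        trailing_stop_price = peak - atr_multiplier_k * atr
        if price <= trailing_stop_price:
            return True
    return False
```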
 **Pre-Market Planner** (`pre_market_planner.py`) — AI playbook generation
 - Runs before market open (configurable `PRE_MARKET_MINUTES`, default 30)
@@ -195,7 +277,7 @@ High-frequency trading with individual stock analysis:
 - Configurable host/port (`DASHBOARD_HOST`, `DASHBOARD_PORT`, default `127.0.0.1:8080`)
 - Serves static HTML frontend
 
-**8 API Endpoints:**
+**10 API Endpoints:**
 
 | Endpoint | Method | Description |
 |----------|--------|-------------|

@@ -207,6 +289,8 @@ High-frequency trading with individual stock analysis:
 | `/api/context/{layer}` | GET | Query context by layer (L1-L7) |
 | `/api/decisions` | GET | Decision log entries with outcomes |
 | `/api/scenarios/active` | GET | Today's matched scenarios |
+| `/api/pnl/history` | GET | P&L history time series |
+| `/api/positions` | GET | Current open positions |
 
 ### 8. Notifications (`src/notifications/telegram_client.py`)
@@ -448,8 +532,12 @@ CREATE TABLE trades (
     pnl REAL DEFAULT 0.0,
     market TEXT DEFAULT 'KR',
     exchange_code TEXT DEFAULT 'KRX',
+    session_id TEXT DEFAULT 'UNKNOWN',  -- v3: KRX_REG | NXT_AFTER | US_REG | US_PRE | ...
     selection_context TEXT,             -- JSON: {rsi, volume_ratio, signal, score}
-    decision_id TEXT                    -- Links to decision_logs
+    decision_id TEXT,                   -- Links to decision_logs
+    strategy_pnl REAL,                  -- v3: Core strategy P&L (separated from FX)
+    fx_pnl REAL DEFAULT 0.0,            -- v3: FX gain/loss for USD trades (schema ready, activation pending)
+    mode TEXT                           -- paper | live
 );
 ```
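A quick self-contained check that the v3 columns round-trip through SQLite, using a schema abbreviated to just the columns discussed above (not the full `trades` table):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Abbreviated schema: only the v3-relevant columns.
conn.execute("""CREATE TABLE trades (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    pnl REAL DEFAULT 0.0,
    session_id TEXT DEFAULT 'UNKNOWN',
    strategy_pnl REAL,
    fx_pnl REAL DEFAULT 0.0,
    mode TEXT)""")
# Total pnl split into strategy and FX components for a USD trade.
conn.execute(
    "INSERT INTO trades (pnl, session_id, strategy_pnl, fx_pnl, mode) "
    "VALUES (?, ?, ?, ?, ?)",
    (12.5, "US_REG", 10.0, 2.5, "paper"))
row = conn.execute(
    "SELECT session_id, strategy_pnl, fx_pnl, mode FROM trades").fetchone()
# row == ("US_REG", 10.0, 2.5, "paper")
```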
@@ -475,13 +563,14 @@ CREATE TABLE decision_logs (
     stock_code TEXT,
     market TEXT,
     exchange_code TEXT,
+    session_id TEXT DEFAULT 'UNKNOWN',  -- v3: session when decision was made
     action TEXT,
     confidence INTEGER,
     rationale TEXT,
     context_snapshot TEXT,              -- JSON: full context at decision time
     input_data TEXT,                    -- JSON: market data used
     outcome_pnl REAL,
-    outcome_accuracy REAL,
+    outcome_accuracy INTEGER,
     reviewed INTEGER DEFAULT 0,
     review_notes TEXT
 );
@@ -494,7 +583,7 @@ CREATE TABLE playbooks (
     id INTEGER PRIMARY KEY AUTOINCREMENT,
     date TEXT NOT NULL,
     market TEXT NOT NULL,
-    status TEXT DEFAULT 'generated',
+    status TEXT NOT NULL DEFAULT 'pending',  -- pending → generated → active → expired
     playbook_json TEXT NOT NULL,        -- Full playbook with scenarios
     generated_at TEXT NOT NULL,
     token_count INTEGER,
@@ -552,6 +641,29 @@ PLANNER_TIMEOUT_SECONDS=60 # Timeout for playbook generation
 DEFENSIVE_PLAYBOOK_ON_FAILURE=true     # Fallback on AI failure
 RESCAN_INTERVAL_SECONDS=300            # Scenario rescan interval during trading
 
+# Optional — v2 Exit Rules (State Machine)
+STAGED_EXIT_BE_ARM_PCT=1.2             # Break-even lock threshold (%)
+STAGED_EXIT_ARM_PCT=3.0                # Armed state threshold (%)
+KR_ATR_STOP_MULTIPLIER_K=2.0           # ATR multiplier for KR dynamic hard stop
+KR_ATR_STOP_MIN_PCT=-2.0               # KR hard stop floor (must tighten, negative)
+KR_ATR_STOP_MAX_PCT=-7.0               # KR hard stop ceiling (loosest, negative)
+
+# Optional — v2 Trade Filters
+STOP_LOSS_COOLDOWN_MINUTES=120         # Cooldown after stop-loss before re-entry (same ticker)
+US_MIN_PRICE=5.0                       # Minimum US stock price for BUY ($)
+
+# Optional — v3 Session Risk Management
+SESSION_RISK_RELOAD_ENABLED=true       # Reload risk params at session boundaries
+SESSION_RISK_PROFILES_JSON="{}"        # Per-session overrides JSON: {"KRX_REG": {"be_arm_pct": 1.0}}
+OVERNIGHT_EXCEPTION_ENABLED=true       # Allow holding through session close (conditions apply)
+
+# Optional — v3 Blackout (KIS maintenance windows)
+ORDER_BLACKOUT_ENABLED=true
+ORDER_BLACKOUT_WINDOWS_KST=23:30-00:10 # Comma-separated: "HH:MM-HH:MM"
+ORDER_BLACKOUT_QUEUE_MAX=500           # Max queued orders during blackout
+BLACKOUT_RECOVERY_PRICE_REVALIDATION_ENABLED=true
+BLACKOUT_RECOVERY_MAX_PRICE_DRIFT_PCT=5.0  # Drop recovery order if price drifted >5%
+
 # Optional — Smart Scanner (realtime mode only)
 RSI_OVERSOLD_THRESHOLD=30              # 0-50, oversold threshold
 RSI_MOMENTUM_THRESHOLD=70              # 50-100, momentum threshold
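The `ORDER_BLACKOUT_WINDOWS_KST` format above (comma-separated `HH:MM-HH:MM`, with windows like `23:30-00:10` wrapping past midnight) can be parsed as follows. The helper names are illustrative, not the BlackoutManager API:

```python
from datetime import time

def parse_windows(spec: str) -> list[tuple[time, time]]:
    """Parse "HH:MM-HH:MM[,HH:MM-HH:MM...]" into (start, end) time pairs."""
    windows = []
    for chunk in spec.split(","):
        start_s, end_s = chunk.strip().split("-")
        h1, m1 = map(int, start_s.split(":"))
        h2, m2 = map(int, end_s.split(":"))
        windows.append((time(h1, m1), time(h2, m2)))
    return windows

def in_blackout(now: time, windows: list[tuple[time, time]]) -> bool:
    """Half-open check; a window whose end precedes its start wraps midnight."""
    for start, end in windows:
        if start <= end:
            if start <= now < end:
                return True
        elif now >= start or now < end:  # wraps midnight, e.g. 23:30-00:10
            return True
    return False
```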

View File

@@ -136,7 +136,7 @@ No decorator needed for async tests.
 # Install all dependencies (production + dev)
 pip install -e ".[dev]"
 
-# Run full test suite with coverage (551 tests across 25 files)
+# Run full test suite with coverage (998 tests across 41 files)
 pytest -v --cov=src --cov-report=term-missing
 
 # Run a single test file

@@ -202,6 +202,8 @@ Dashboard runs as a daemon thread on `DASHBOARD_HOST:DASHBOARD_PORT` (default: `
 | `GET /api/context/{layer}` | Context data by layer L1-L7 (query: `timeframe`) |
 | `GET /api/decisions` | Decision log entries (query: `limit`, `market`) |
 | `GET /api/scenarios/active` | Today's matched scenarios |
+| `GET /api/pnl/history` | P&L history over time |
+| `GET /api/positions` | Current open positions |
 
 ## Telegram Commands
## Telegram Commands ## Telegram Commands

View File

@@ -24,11 +24,17 @@ Updated: 2026-02-27
 ## 2) Required status checks (mandatory)
 
 Required CI items:
-- `validate_ouroboros_docs` (command: `python3 scripts/validate_ouroboros_docs.py`)
-- `test` (command: `pytest -q`)
+
+| Reference level | Name | Description |
+|-----------------|------|-------------|
+| **job level** (used for branch protection settings) | `test` | Full CI job (includes doc validation + tests) |
+| **step level** (for log inspection) | `validate_ouroboros_docs` | Step that runs `python3 scripts/validate_ouroboros_docs.py` |
+| **step level** | `run_tests` | Step that runs `pytest -q` |
+
+> **Caution**: Gitea branch protection Required Status Checks are configured by **job name** (`test`). Step names are only for navigating UI logs and must not be entered directly into protection rules.
 
 Configuration baseline:
-- Merge is forbidden unless both checks above are `success`
+- Merge is forbidden unless the `test` job is `success`
 - Skipped/neutral check states are not allowed
 
 ## 3) Required reviewer rule (recommended -> mandatory)

View File

@@ -1,14 +1,15 @@
<!-- <!--
Doc-ID: DOC-AUDIT-001 Doc-ID: DOC-AUDIT-001
Version: 1.0.0 Version: 1.1.0
Status: active Status: active
Owner: strategy Owner: strategy
Updated: 2026-02-28 Updated: 2026-03-01
--> -->
# v2/v3 구현 감사 및 수익률 분석 보고서 # v2/v3 구현 감사 및 수익률 분석 보고서
작성일: 2026-02-28 작성일: 2026-02-28
최종 업데이트: 2026-03-01 (Phase 2 완료 + Phase 3 부분 완료 반영)
대상 기간: 2026-02-25 ~ 2026-02-28 (실거래) 대상 기간: 2026-02-25 ~ 2026-02-28 (실거래)
분석 브랜치: `feature/v3-session-policy-stream` 분석 브랜치: `feature/v3-session-policy-stream`
@@ -29,69 +30,80 @@ Updated: 2026-02-28
| REQ-V2-007 | Cost/slippage/fill-failure model required | `src/analysis/backtest_cost_guard.py` | ✅ Done |
| REQ-V2-008 | Kill Switch execution order (Block→Cancel→Refresh→Reduce→Snapshot) | `src/core/kill_switch.py` | ✅ Done |

### 1.2 v3 implementation status: ~85% complete (as of 2026-03-01)

| REQ-ID | Requirement | Status | Notes |
|--------|-------------|--------|-------|
| REQ-V3-001 | session_id included in all signals/orders/logs | ✅ Done | #326 merged — parameter added to `log_decision()`, passed explicitly to `log_trade()` |
| REQ-V3-002 | Session-transition hook + risk-parameter reload | ⚠️ Partial | #327 merged — reload mechanism implemented; session-hook tests not written |
| REQ-V3-003 | Blackout window policy | ✅ Done | `src/core/blackout_manager.py` |
| REQ-V3-004 | Blackout queue + revalidation on recovery | ✅ Done | #324 (DB logging) + #328 (price/session revalidation) merged |
| REQ-V3-005 | Market orders banned in low-liquidity sessions | ✅ Done | `src/core/order_policy.py` |
| REQ-V3-006 | Conservative backtest fills (adverse direction) | ✅ Done | `src/analysis/backtest_execution_model.py` |
| REQ-V3-007 | FX PnL separation (strategy PnL vs FX PnL) | ⚠️ Code done / not active in production | `src/db.py` schema/functions done; production `fx_pnl` all zero |
| REQ-V3-008 | Overnight exception vs Kill Switch priority | ✅ Done | `src/main.py` `_should_force_exit_for_overnight()`, `_apply_staged_exit_override_for_hold()` |

### 1.3 Operational governance: ~60% complete (re-evaluated 2026-03-01)

| REQ-ID | Requirement | Status | Notes |
|--------|-------------|--------|-------|
| REQ-OPS-001 | Explicit timezones (KST/UTC) | ⚠️ Partial | DB records in UTC, sessions in KST; some logs omit the timezone |
| REQ-OPS-002 | Registry update enforced on policy changes | ⚠️ Basic implementation done | `scripts/validate_governance_assets.py` wired into CI; rule hardening remains |
| REQ-OPS-003 | TASK-REQ mapping enforced | ⚠️ Basic implementation done | `scripts/validate_ouroboros_docs.py` wired into CI; stricter PR-level enforcement remains |
---

## 2. Implementation gap details

> **2026-03-01 update**: GAP-1 through GAP-5 have all been resolved, or partially resolved via merged issues.

### GAP-1: session_id missing from DecisionLogger → ✅ Resolved (#326)

- **Location**: `src/logging/decision_logger.py`
- ~~Problem: `log_decision()` had no `session_id` parameter~~
- **Resolution**: #326 merged — `session_id` added to the `log_decision()` parameters, including DB persistence
- **Requirement**: REQ-V3-001

### GAP-2: session_id not passed to trade logs in src/main.py → ✅ Resolved (#326)

- **Location**: `src/main.py`
- ~~Problem: `log_trade()` calls did not pass the `session_id` parameter~~
- **Resolution**: #326 merged — `log_trade()` calls now pass the runtime `session_id` explicitly
- **Requirement**: REQ-V3-001

### GAP-3: no risk-parameter reload on session transition → ⚠️ Partially resolved (#327)

- **Location**: `src/main.py`, `src/config.py`
- **Resolution**: #327 merged — session-specific parameter reload mechanism based on `SESSION_RISK_PROFILES_JSON`
  - With `SESSION_RISK_RELOAD_ENABLED=true`, parameters reload at session boundaries
  - On reload failure, existing parameters are kept (safe fallback)
- **Remaining gap**: E2E integration tests for live session-boundary transitions still needed (`test_main.py` has unit tests for config override/fallback)
- **Requirement**: REQ-V3-002

### GAP-4: blackout-recovery DB logging + revalidation → ✅ Resolved (#324, #328)

- **Location**: `src/core/blackout_manager.py`, `src/main.py`
- **Resolution**:
  - #324 merged — `log_trade()` called after recovery-order execution, rationale prefixed `[blackout-recovery]`
  - #328 merged — price-validity check (drop on sharp moves vs entry price) and revalidation with new parameters on session change
- **Requirement**: REQ-V3-004

### GAP-5: time barrier fixed to bar count → ✅ Resolved (#329)

- **Location**: `src/analysis/triple_barrier.py`
- ~~Problem: used `max_holding_bars` (fixed bar count)~~
- **Resolution**: #329 merged — time barrier switched to `max_holding_minutes` (calendar minutes)
  - The barrier triggers after a fixed elapsed time regardless of bar interval
  - `max_holding_bars` kept with a deprecation warning (backward compatibility)
- **Requirement**: REQ-V2-005 / v3 extension

### GAP-6 (new): FX PnL not active in production (LOW — code complete)

- **Location**: `src/db.py` (`fx_pnl`, `strategy_pnl` columns exist)
- **Problem**: schema and functions are complete, but `fx_pnl` is zero across all production data
- **Impact**: FX PnL and strategy PnL are not separated for USD trades, making performance analysis inaccurate
- **Requirement**: REQ-V3-007

---

## 3. Live trading return analysis
@@ -244,18 +256,25 @@ Updated: 2026-02-28
- **Problem**: nested `def evaluate` definition (indentation error)
- **Impact**: runtime failure → fallback to the default strategy → evolution system effectively inert

### ROOT-5: v2 exit logic partially integrated but not effective → ⚠️ Partially resolved (#325)

**Initial diagnosis (2026-02-28 audit):**
- `hard_stop_pct` defaulted to a fixed `-2.0`, diverging from the v2 plan's ATR-adaptive intent
- `be_arm_pct`/`arm_pct` were mechanically derived from the playbook's `take_profit_pct` (`* 0.4`), blocking the independent parameter tuning the v2 plan calls for
- Runtime features such as `atr_value` and `pred_down_prob` were supplied as 0.0, so in practice only the hard stop fired

**Current state (after the #325 merge):**
- Independent parameters configurable via the `STAGED_EXIT_BE_ARM_PCT` and `STAGED_EXIT_ARM_PCT` environment variables
- `_inject_staged_exit_features()`: injects real-time ATR for the KR market and supplies an RSI-based `pred_down_prob`
- The KR ATR dynamic hard stop (#318) removes the fixed `-2.0` problem

**Remaining risks:**
- The `atr_value` supply path is incomplete outside KR (e.g. US) — hard-stop bias may persist
- `pred_down_prob` is only an RSI-level proxy — replacing it with a real ML model is recommended

### ROOT-6: SELL PnL calculation fragile to partial exits / quantity mismatch (CRITICAL) → ✅ Resolved (#322)

> **Current state**: resolved by the #322 merge. Below is the diagnosis recorded at discovery time (2026-02-28).

- **Location**: `src/main.py:1658-1663`, `src/main.py:2755-2760`
- **Problem**: PnL was computed from the prior BUY's `buy_qty`, not the actually sold quantity (`sell_qty`)
@@ -263,7 +282,9 @@ Updated: 2026-02-28
- **Impact**: PnL over/under-stated on partial exits, reverse/forward splits, and quantity drift after startup-sync
- **Evidence**: the CRCA outlier (BUY 146 shares → SELL 15 shares yielding PnL +4,612 USD) is consistent with this bug

### ROOT-7: exchange_code missing from the BUY matching key — latent mismatch risk (HIGH) → ✅ Resolved (#323)

> **Current state**: resolved by the #323 merge. Below is the diagnosis recorded at discovery time (2026-02-28).

- **Location**: `src/db.py:292-313`
- **Problem**: `get_latest_buy_trade()` matched on `(stock_code, market)` only; `exchange_code` was unused
@@ -283,17 +304,28 @@ Updated: 2026-02-28
| P1 | US minimum-price filter: block entries at or below $5 | Prevents penny-stock crashes | Low |
| P1 | Add syntax validation when generating evolved strategy code | Restores the evolution system | Low |

### 5.2 Structural improvement status (as of 2026-03-01)

**Completed items (monitoring stage):**

| Item | Issue | Status |
|------|-------|--------|
| SELL PnL computed from sell_qty (ROOT-6) | #322 | ✅ Merged |
| v2 staged-exit feature supply + independent parameters (ROOT-5) | #325 | ✅ Merged |
| exchange_code added to the BUY matching key (ROOT-7) | #323 | ✅ Merged |
| `log_trade()` added for blackout-recovery orders (GAP-4) | #324 | ✅ Merged |
| Dynamic risk-parameter reload on session transition (GAP-3) | #327 | ✅ Merged |
| Explicit session_id in trade/decision logs (GAP-1, GAP-2) | #326 | ✅ Merged |
| Stronger price/session revalidation on blackout recovery (GAP-4 residual) | #328 | ✅ Merged |

**Remaining improvement items:**

| Priority | Measure | Difficulty |
|----------|---------|------------|
| P1 | Complete the US-market ATR supply path (ROOT-5 residual) | Medium |
| P1 | Activate FX PnL in production (REQ-V3-007) | Low |
| P2 | Replace pred_down_prob with an ML model (ROOT-5 residual) | High |
| P2 | Strengthen session-boundary E2E integration tests (GAP-3 residual) | Low |

### 5.3 Recommended execution order
@@ -334,14 +366,26 @@ Phase 3 (mid-term): v3 session optimization
- ✅ Valid intent executed after blackout recovery (`tests/test_main.py:5811`)
- ✅ Policy-rejected intent dropped after blackout recovery (`tests/test_main.py:5851`)

### Tests added (Phase 1~3, 2026-03-01)

- ✅ KR ATR-based dynamic hard stop (`test_main.py` — #318)
- ✅ Re-entry cooldown (blocks buying the same symbol after a stop-loss) (`test_main.py` — #319)
- ✅ US minimum-price filter (blocks at or below $5) (`test_main.py` — #320)
- ✅ Evolved-strategy syntax validation (`test_evolution.py` — #321)
- ✅ SELL PnL computed from sell_qty (`test_main.py` — #322)
- ✅ exchange_code included in the BUY matching key (`test_db.py` — #323)
- ✅ Blackout-recovery orders recorded in the DB (`test_main.py` — #324)
- ✅ Real ATR/RSI features supplied to the staged exit (`test_main.py` — #325)
- ✅ Explicit session_id propagation to trade/decision logs (`test_main.py`, `test_decision_logger.py` — #326)

### Tests still missing (residual)

- ❌ Session-transition hook callback (GAP-3 residual)
- ❌ Unit tests for session-boundary risk-parameter reload (GAP-3 residual)
- ❌ Live-trading path ↔ v2 state-machine integration test (including feature supply)
- ❌ FX PnL production-activation verification (GAP-6)

---

View File

@@ -1,16 +1,19 @@
<!--
Doc-ID: DOC-ACTION-085
Version: 1.1.0
Status: active
Owner: strategy
Updated: 2026-03-01
-->

# Loss Recovery Action Plan

Date written: 2026-02-28
Last updated: 2026-03-01 (reflects Phase 1~3 completion status)
Base document: [80_implementation_audit.md](./80_implementation_audit.md) (7 ROOTs + 5 GAPs)

> **Status as of 2026-03-01**: Phase 1 ✅ complete, Phase 2 ✅ complete, Phase 3 ✅ essentially complete (ACT-13 hardening remains)

---

## 1. Summary
@@ -35,13 +38,13 @@ Updated: 2026-02-28
## 2. Work breakdown by Phase

### Phase 1: Immediate — stop the loss bleeding ✅ Complete

Immediately eliminate the largest loss patterns (noise stop-outs, repeated trading, penny stocks).

---

#### ACT-01: KR stop-loss widened dynamically via ATR ✅ Merged

- **ROOT reference**: ROOT-1 (hard_stop_pct -2% too tight for KR small-cap volatility)
- **Gitea issue**: feat: KR 손절선 ATR 기반 동적 확대 (-2% → ATR 적응형)
@@ -60,7 +63,7 @@ Updated: 2026-02-28
---

#### ACT-02: Re-entry cooldown for the same symbol after a stop-loss ✅ Merged

- **ROOT reference**: ROOT-2 (repeated trading of the same symbol)
- **Gitea issue**: feat: 손절 후 동일 종목 재진입 쿨다운 (1~2시간)
@@ -79,7 +82,7 @@ Updated: 2026-02-28
---

#### ACT-03: Entry-block filter for US stocks at or below $5 ✅ Merged

- **ROOT reference**: ROOT-3 (indiscriminate entries into US penny stocks)
- **Gitea issue**: feat: US $5 이하 종목 진입 차단 필터
@@ -97,7 +100,7 @@ Updated: 2026-02-28
---

#### ACT-04: Syntax validation when generating evolved strategy code ✅ Merged

- **ROOT reference**: ROOT-4 (syntax errors in evolved strategies)
- **Gitea issue**: fix: 진화 전략 코드 생성 시 syntax 검증 추가
@@ -116,13 +119,13 @@ Updated: 2026-02-28
---

### Phase 2: Short-term — data integrity + making v2 effective ✅ Complete

Secure PnL calculation accuracy and make the v2 exit logic effective.

---

#### ACT-05: Compute SELL PnL from sell_qty ✅ Merged

- **ROOT reference**: ROOT-6 (CRITICAL — PnL calculation used buy_qty)
- **Gitea issue**: fix(critical): SELL PnL 계산을 sell_qty 기준으로 수정
@@ -141,7 +144,7 @@ Updated: 2026-02-28
---

#### ACT-06: Add exchange_code to the BUY matching key ✅ Merged

- **ROOT reference**: ROOT-7 (exchange_code missing from the BUY matching key)
- **Gitea issue**: fix: BUY 매칭 키에 exchange_code 추가
@@ -159,12 +162,12 @@ Updated: 2026-02-28
---

#### ACT-07: Add log_trade() for blackout-recovery orders ✅ Merged

- **ROOT reference**: GAP-4 (blackout-recovery orders not recorded in the DB)
- **Gitea issue**: fix: 블랙아웃 복구 주문에 log_trade() 추가
- **Gitea issue number**: #324
- **Files changed**: the recovery-order execution path inside `process_blackout_recovery_orders()` in `src/main.py`
- **Previous behavior**: recovery orders executed without calling `log_trade()` → never recorded in the DB
- **Target behavior**: call `log_trade()` after executing a recovery order, with the rationale prefixed `[blackout-recovery]`
- **Acceptance criteria**:
@@ -178,7 +181,7 @@ Updated: 2026-02-28
---

#### ACT-08: Supply real features to the v2 staged exit ✅ Merged

- **ROOT reference**: ROOT-5 (v2 exit logic lacking effectiveness)
- **Gitea issue**: feat: v2 staged exit에 실제 피처(ATR, pred_down_prob) 공급
@@ -200,7 +203,7 @@ Updated: 2026-02-28
---

#### ACT-09: Pass session_id explicitly to trade/decision logs ✅ Merged

- **ROOT reference**: GAP-1 (session_id missing from DecisionLogger), GAP-2 (session_id not passed to log_trade)
- **Gitea issue**: feat: session_id를 거래/의사결정 로그에 명시적 전달
@@ -223,13 +226,13 @@ Updated: 2026-02-28
---

### Phase 3: Mid-term — v3 session optimization ✅ Essentially complete (ACT-13 hardening remains)

Strengthen session-boundary handling and operational governance.

---

#### ACT-10: Dynamic risk-parameter reload on session transition ✅ Merged

- **ROOT reference**: GAP-3 (no risk-parameter reload on session transition)
- **Gitea issue**: feat: 세션 전환 시 리스크 파라미터 동적 재로딩
@@ -241,14 +244,12 @@ Updated: 2026-02-28
- Confirm parameter reload on the NXT_AFTER → KRX_REG transition
- Log the reload event
- Keep existing parameters on reload failure (safe fallback)
- **Tests**: `test_main.py` includes unit tests for config override/reload/fallback. **Remaining**: E2E coverage of live session-boundary transitions
- **Dependency**: ACT-09 (session_id infrastructure)
---

#### ACT-11: Stronger price/session revalidation on blackout recovery ✅ Merged

- **ROOT reference**: GAP-4 residual (price validity and session-change reapplication not implemented)
- **Gitea issue**: feat: 블랙아웃 복구 시 가격/세션 재검증 강화
@@ -268,7 +269,7 @@ Updated: 2026-02-28
---

#### ACT-12: Switch the Triple Barrier time barrier to calendar time (minutes) ✅ Merged

- **ROOT reference**: GAP-5 (time barrier fixed to bar count)
- **Gitea issue**: feat: Triple Barrier 시간장벽을 캘린더 시간(분) 기반으로 전환
@@ -286,21 +287,13 @@ Updated: 2026-02-28
---

#### ACT-13: Automated CI validation (policy registry + TASK-REQ mapping) ✅ Basic implementation done, hardening remains

- **ROOT reference**: REQ-OPS-002 (registry update enforced on policy changes), REQ-OPS-003 (TASK-REQ mapping enforced)
- **Gitea issue**: infra: CI 자동 검증 (정책 레지스트리 + TASK-REQ 매핑)
- **Gitea issue number**: #330
- **Current behavior**: `.gitea/workflows/ci.yml` automatically runs `scripts/validate_governance_assets.py` + `scripts/validate_ouroboros_docs.py`
- **Remaining hardening**: raise the enforcement level for REQ/TASK/TEST references in PR bodies; tighten the CI-failure criterion when policy files change without a registry update
- **Dependency**: none
---
@@ -311,7 +304,7 @@ Updated: 2026-02-28
- Write individual tests for every ACT item
- Keep coverage >= 80%
- Confirm the full test suite passes on current CI (998 tests collected as of 2026-03-01)

### 3.2 Integration tests
@@ -389,4 +382,36 @@ Phase 3
---
## 6. Outstanding items (as of 2026-03-01)

Even after completing the Phase 1~3 implementation, the following items remain operationally incomplete.

### 6.1 Needs operational verification

| Item | Description | Priority |
|------|-------------|----------|
| Activate FX PnL in production | `fx_pnl`/`strategy_pnl` columns exist, but every production value is 0 | P1 |
| Strengthen session-boundary E2E integration tests | Unit tests exist in `test_main.py`; E2E for live session-boundary transitions not written | P2 |
| v2 state-machine end-to-end integration | Full HOLDING→BE_LOCK→ARMED→EXITED scenario on the live-trading path untested | P2 |

### 6.2 Remaining architecture-level gaps

| Item | Description | Background docs |
|------|-------------|-----------------|
| Harden automated CI validation (#330) | Basic implementation done (`validate_governance_assets.py` wired into CI); rules/enforcement level need hardening | REQ-OPS-002, REQ-OPS-003 |
| Replace pred_down_prob with an ML model | Currently an RSI proxy — replacement with a real GBDT/ML model recommended | ROOT-5, ouroboros_plan_v2.txt §3.D |
| KR/US parameter sensitivity analysis | Search for optimal be_arm_pct/arm_pct/atr_k values from the v2 plan not performed | ouroboros_plan_v2.txt §8 |

### 6.3 v3 experiment matrix not started

None of the three experiments defined in ouroboros_plan_v3.txt §9 has been started.

| Experiment ID | Market | Focus | Status |
|---------------|--------|-------|--------|
| EXP-KR-01 | KR | NXT overnight specialization (p_thresh 0.65) | ❌ Not started |
| EXP-US-01 | US | 21h quasi-continuous operation (atr_k 2.5) | ❌ Not started |
| EXP-HYB-01 | Global | KR-day + US-night linked regime allocation | ❌ Not started |

---

*End.*

View File

@@ -18,13 +18,15 @@ Updated: 2026-02-26
4. v3 execution directive: [20_phase_v3_execution.md](./20_phase_v3_execution.md)
5. Code-level work orders: [30_code_level_work_orders.md](./30_code_level_work_orders.md)
6. Acceptance criteria/test plan: [40_acceptance_and_test_plan.md](./40_acceptance_and_test_plan.md)
7. PM scenario/issue taxonomy **(A)**: [50_scenario_matrix_and_issue_taxonomy.md](./50_scenario_matrix_and_issue_taxonomy.md)
8. TPM control protocol/acceptance matrix **(B)**: [50_tpm_control_protocol.md](./50_tpm_control_protocol.md)
9. Repository enforcement checklist: [60_repo_enforcement_checklist.md](./60_repo_enforcement_checklist.md)
10. Main-agent ideation backlog: [70_main_agent_ideation.md](./70_main_agent_ideation.md)
11. v2/v3 implementation audit and return analysis: [80_implementation_audit.md](./80_implementation_audit.md)
12. Loss recovery action plan: [85_loss_recovery_action_plan.md](./85_loss_recovery_action_plan.md)

> **Note**: items 7 and 8 share the `50_` prefix. (A) = scenario/issue taxonomy, (B) = TPM control protocol.

## Operating rules

- Any plan change must start by amending the ID definitions in `01_requirements_registry.md`.

View File

@@ -87,7 +87,7 @@
- Selection-criteria tracking → enables Evolution-system optimization
- Automatic fallback to the static watchlist on API failure

**Note (implementation at the time):** originally designed as Realtime-mode only; the Daily path was later changed to use the scanner as well. The overseas fallback also moved from a static watchlist to a dynamic universe (active/recent/holdings) (see 2026-02-16).

**Issue/PR:** #76, #77
@@ -388,3 +388,126 @@ Order result: 모의투자 매수주문이 완료 되었습니다. ✓
- `ruff check src/analysis/backtest_pipeline.py tests/test_backtest_pipeline_integration.py`

**Issue/PR:** #305
---
## 2026-02-28 ~ 2026-03-01

### v2/v3 loss recovery action plan — Phase 1 complete (#318~#321)

**Background:**
- Of the 7 root causes (ROOT) and 5 implementation gaps (GAP) identified by the `docs/ouroboros/80_implementation_audit.md` audit, the four largest loss patterns were eliminated immediately as Phase 1.

**Implemented:**

1. **ACT-01: KR stop-loss widened dynamically via ATR** (#318)
   - `src/main.py`, `src/config.py`
   - KR market: dynamic hard stop based on ATR(14) (`k=2.0`, range -2% to -7%)
   - Falls back to the existing -2% when ATR is unavailable
   - Resolves ROOT-1 (fixed hard_stop_pct too tight)
2. **ACT-02: Re-entry cooldown for the same symbol after a stop-loss** (#319)
   - `src/main.py`, `src/config.py`
   - After a stop-loss (pnl<0), BUY on the same symbol is blocked for `COOLDOWN_MINUTES` (default 120 minutes)
   - Not applied to profitable exits
   - Resolves ROOT-2 (repeated trading of the same symbol)
3. **ACT-03: Entry-block filter for US stocks at or below $5** (#320)
   - `src/main.py`, `src/config.py`
   - US-market BUY blocked when the current price is at or below `US_MIN_PRICE` (default $5)
   - Resolves ROOT-3 (indiscriminate US penny-stock entries)
4. **ACT-04: Syntax validation of evolved strategy code** (#321)
   - `src/evolution/optimizer.py`
   - Pre-validates with `ast.parse()` + `compile()`; only passing code is saved
   - Resolves ROOT-4 (syntax errors in evolved strategies)
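The pre-validation step amounts to parsing and compiling before persisting. A minimal sketch (the real `src/evolution/optimizer.py` integration differs; the function name is illustrative):

```python
import ast

def validate_strategy_code(source: str) -> tuple[bool, str]:
    """Return (ok, message); only syntactically valid generated code should be saved."""
    try:
        tree = ast.parse(source)              # rejects bad indentation, nested-def typos, etc.
        compile(tree, "<strategy>", "exec")   # catches a few issues ast.parse alone misses
    except SyntaxError as exc:
        return False, f"syntax error at line {exc.lineno}: {exc.msg}"
    return True, "ok"
```

The nested-`def evaluate` indentation bug from ROOT-4 raises `IndentationError` (a `SyntaxError` subclass) at parse time, so broken candidates never reach the runtime fallback path.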
**Issues/PRs:** #318, #319, #320, #321
---
### v2/v3 loss recovery action plan — Phase 2 complete (#322~#326)

**Background:**
- Secure PnL calculation accuracy and make the v2 exit logic effective.

**Implemented:**

1. **ACT-05: SELL PnL computed from sell_qty** (#322)
   - `src/main.py` (lines 1658-1663, 2755-2760)
   - Changed to `trade_pnl = (trade_price - buy_price) * sell_qty`
   - Resolves ROOT-6 (CRITICAL: PnL calculated from buy_qty)
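The fix is a one-term change in the PnL formula, sketched here with illustrative names:

```python
def sell_pnl(trade_price: float, buy_price: float, sell_qty: int) -> float:
    """PnL of a SELL fill, sized by the actually sold quantity,
    not by the quantity of the prior matching BUY."""
    return (trade_price - buy_price) * sell_qty
```

Under the CRCA pattern (BUY 146 shares, partial SELL of 15), the realized PnL is now booked on 15 shares only instead of being inflated by the full BUY quantity.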
2. **ACT-06: exchange_code added to the BUY matching key** (#323)
   - `src/db.py`
   - `get_latest_buy_trade()` now matches on `(stock_code, market, exchange_code)`
   - Backward compatible with legacy rows where exchange_code is NULL
   - Resolves ROOT-7 (mismatch risk)
3. **ACT-07: log_trade() added for blackout-recovery orders** (#324)
   - `src/main.py` (blackout-recovery execution path)
   - `log_trade()` called after recovery-order execution, rationale prefixed `[blackout-recovery]`
   - Resolves GAP-4 (recovery orders not recorded in the DB)
4. **ACT-08: real features supplied to the v2 staged exit** (#325)
   - `src/main.py`, `src/strategy/exit_rules.py`
   - `atr_value`: real-time ATR(14) supplied
   - `pred_down_prob`: RSI-based downside-probability estimate supplied (replaceable by an ML model)
   - `be_arm_pct`/`arm_pct` now independently configurable (removes the take_profit_pct * 0.4 derivation)
   - Resolves ROOT-5 (v2 exit logic lacking effectiveness)
5. **ACT-09: session_id passed explicitly to trade/decision logs** (#326)
   - `src/logging/decision_logger.py`, `src/main.py`, `src/db.py`
   - `log_decision()`: session_id parameter added
   - `log_trade()`: runtime session_id passed explicitly
   - Partially resolves GAP-1, GAP-2 (missing session_id)

**Issues/PRs:** #322, #323, #324, #325, #326
---
### v2/v3 loss recovery action plan — Phase 3 partially complete (#327~#329)

**Background:**
- Session-boundary handling and switching the time barrier to a calendar basis.

**Implemented:**

1. **ACT-10: dynamic risk-parameter reload on session transition** (#327)
   - `src/main.py`, `src/config.py`
   - Reload from `SESSION_RISK_PROFILES_JSON` on session-boundary change events
   - Existing parameters kept on reload failure (safe fallback)
   - Partially resolves GAP-3 (no parameter reload on session transition)
2. **ACT-11: stronger price/session revalidation on blackout recovery** (#328)
   - `src/main.py`, `src/core/blackout_manager.py`
   - On recovery, fetch the current quote and validate the price (drop on sharp moves vs the entry price)
   - On session change, revalidate with the new session's parameters
   - Partially resolves the GAP-4 residual (price/session revalidation)
3. **ACT-12: Triple Barrier time barrier switched to calendar time (minutes)** (#329)
   - `src/analysis/triple_barrier.py`
   - Switched to `max_holding_minutes` (calendar minutes); consistent behavior regardless of bar interval
   - Existing `max_holding_bars` kept with a deprecation warning (backward compatibility)
   - Resolves GAP-5 (time barrier fixed to bar count)
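The calendar-time barrier is a wall-clock comparison that is independent of the bar interval; a sketch with illustrative names:

```python
from datetime import datetime, timedelta

def time_barrier_hit(entry_ts: datetime, bar_ts: datetime, max_holding_minutes: int) -> bool:
    """Trigger the time barrier once elapsed calendar minutes reach the limit,
    whether the bars are 1-minute or 1-hour candles."""
    return bar_ts - entry_ts >= timedelta(minutes=max_holding_minutes)
```

With a 90-minute limit, a position entered at 09:00 hits the barrier on the first bar timestamped 10:30 or later, regardless of how many bars were emitted in between.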
**Outstanding (ACT-13):**
- **#330: automated CI validation (policy registry + TASK-REQ mapping)** — being carried out via the doc-structuring work instead

**Issues/PRs:** #327, #328, #329
---
### v2/v3 doc structuring and audit documentation (#331)

**Background:**
- Document the audit results and action plan after completing the Phase 1~3 implementation
- Existing audit notes were scattered and needed consolidation

**Implemented:**
- New `docs/ouroboros/80_implementation_audit.md`: v2/v3 implementation audit + live-trading return analysis
- New `docs/ouroboros/85_loss_recovery_action_plan.md`: phased action plan for resolving ROOTs/GAPs
- New `scripts/audit_queries.sql`: standard aggregation SQL for reproducing performance figures

**Issue/PR:** #331

View File

@@ -2,7 +2,7 @@
## Test Structure

**998 tests** across **41 files**. `asyncio_mode = "auto"` in pyproject.toml — async tests need no special decorator.

The `settings` fixture in `conftest.py` provides safe defaults with test credentials and in-memory DB.
@@ -23,6 +23,8 @@ The `settings` fixture in `conftest.py` provides safe defaults with test credent
- Network error handling
- SSL context configuration

> **Note**: the per-file test counts below are a snapshot taken at release time and may drift; run `pytest --collect-only -q` for current numbers.

##### `tests/test_brain.py` (24 tests)
- Valid JSON parsing and markdown-wrapped JSON handling
- Malformed JSON fallback
@@ -90,7 +92,7 @@ The `settings` fixture in `conftest.py` provides safe defaults with test credent
- Python-first filtering pipeline
- RSI and volume ratio filter logic
- Candidate scoring and ranking
- Fallback to static watchlist (domestic) or dynamic universe (overseas)

#### Context & Memory
@@ -138,8 +140,8 @@ The `settings` fixture in `conftest.py` provides safe defaults with test credent
#### Dashboard

##### `tests/test_dashboard.py` (14 tests)
- FastAPI endpoint responses (10 API routes)
- Status, playbook, scorecard, performance, context, decisions, scenarios, pnl/history, positions
- Query parameter handling (market, date, limit)

#### Performance & Quality

View File

@@ -3,9 +3,19 @@
from __future__ import annotations

import subprocess
import sys
import os
import re
from pathlib import Path

REQUIREMENTS_REGISTRY = "docs/ouroboros/01_requirements_registry.md"
TASK_WORK_ORDERS_DOC = "docs/ouroboros/30_code_level_work_orders.md"

TASK_DEF_LINE = re.compile(r"^-\s+`(?P<task_id>TASK-[A-Z0-9-]+-\d{3})`(?P<body>.*)$")
REQ_ID_IN_LINE = re.compile(r"\bREQ-[A-Z0-9-]+-\d{3}\b")
TASK_ID_IN_TEXT = re.compile(r"\bTASK-[A-Z0-9-]+-\d{3}\b")
TEST_ID_IN_TEXT = re.compile(r"\bTEST-[A-Z0-9-]+-\d{3}\b")

def must_contain(path: Path, required: list[str], errors: list[str]) -> None:
    if not path.exists():
@@ -17,8 +27,101 @@ def must_contain(path: Path, required: list[str], errors: list[str]) -> None:
            errors.append(f"{path}: missing required token -> {token}")
def normalize_changed_path(path: str) -> str:
    normalized = path.strip().replace("\\", "/")
    if normalized.startswith("./"):
        normalized = normalized[2:]
    return normalized

def is_policy_file(path: str) -> bool:
    normalized = normalize_changed_path(path)
    if not normalized.endswith(".md"):
        return False
    if not normalized.startswith("docs/ouroboros/"):
        return False
    return normalized != REQUIREMENTS_REGISTRY
def load_changed_files(args: list[str], errors: list[str]) -> list[str]:
    if not args:
        return []
    # Single range input (e.g. BASE..HEAD or BASE...HEAD)
    if len(args) == 1 and ".." in args[0]:
        range_spec = args[0]
        try:
            completed = subprocess.run(
                ["git", "diff", "--name-only", range_spec],
                check=True,
                capture_output=True,
                text=True,
            )
        except (subprocess.CalledProcessError, FileNotFoundError) as exc:
            errors.append(f"failed to load changed files from range '{range_spec}': {exc}")
            return []
        return [
            normalize_changed_path(line)
            for line in completed.stdout.splitlines()
            if line.strip()
        ]
    return [normalize_changed_path(path) for path in args if path.strip()]

def validate_registry_sync(changed_files: list[str], errors: list[str]) -> None:
    if not changed_files:
        return
    changed_set = set(changed_files)
    policy_changed = any(is_policy_file(path) for path in changed_set)
    registry_changed = REQUIREMENTS_REGISTRY in changed_set
    if policy_changed and not registry_changed:
        errors.append(
            "policy file changed without updating docs/ouroboros/01_requirements_registry.md"
        )
def validate_task_req_mapping(errors: list[str], *, task_doc: Path | None = None) -> None:
    path = task_doc or Path(TASK_WORK_ORDERS_DOC)
    if not path.exists():
        errors.append(f"missing file: {path}")
        return
    text = path.read_text(encoding="utf-8")
    found_task = False
    for line in text.splitlines():
        m = TASK_DEF_LINE.match(line.strip())
        if not m:
            continue
        found_task = True
        if not REQ_ID_IN_LINE.search(m.group("body")):
            errors.append(
                f"{path}: TASK without REQ mapping -> {m.group('task_id')}"
            )
    if not found_task:
        errors.append(f"{path}: no TASK definitions found")

def validate_pr_traceability(warnings: list[str]) -> None:
    title = os.getenv("GOVERNANCE_PR_TITLE", "").strip()
    body = os.getenv("GOVERNANCE_PR_BODY", "").strip()
    if not title and not body:
        return
    text = f"{title}\n{body}"
    if not REQ_ID_IN_LINE.search(text):
        warnings.append("PR text missing REQ-ID reference")
    if not TASK_ID_IN_TEXT.search(text):
        warnings.append("PR text missing TASK-ID reference")
    if not TEST_ID_IN_TEXT.search(text):
        warnings.append("PR text missing TEST-ID reference")
def main() -> int:
    errors: list[str] = []
    warnings: list[str] = []
    changed_files = load_changed_files(sys.argv[1:], errors)

    pr_template = Path(".gitea/PULL_REQUEST_TEMPLATE.md")
    issue_template = Path(".gitea/ISSUE_TEMPLATE/runtime_verification.md")
@@ -81,6 +184,10 @@ def main() -> int:
    if not handover_script.exists():
        errors.append(f"missing file: {handover_script}")

    validate_registry_sync(changed_files, errors)
    validate_task_req_mapping(errors)
    validate_pr_traceability(warnings)

    if errors:
        print("[FAIL] governance asset validation failed")
        for err in errors:
@@ -88,6 +195,10 @@ def main() -> int:
        return 1

    print("[OK] governance assets validated")
    if warnings:
        print(f"[WARN] governance advisory: {len(warnings)}")
        for warn in warnings:
            print(f"- {warn}")
    return 0

View File

@@ -8,8 +8,10 @@ from __future__ import annotations
from collections.abc import Sequence
from dataclasses import dataclass
from datetime import datetime
from statistics import mean
from typing import Literal
from typing import cast

from src.analysis.backtest_cost_guard import BacktestCostModel, validate_backtest_cost_model
from src.analysis.triple_barrier import TripleBarrierSpec, label_with_triple_barrier
@@ -22,6 +24,7 @@ class BacktestBar:
low: float
close: float
session_id: str
timestamp: datetime | None = None
@dataclass(frozen=True)
@@ -86,16 +89,27 @@ def run_v2_backtest_pipeline(
highs = [float(bar.high) for bar in bars]
lows = [float(bar.low) for bar in bars]
closes = [float(bar.close) for bar in bars]
timestamps = [bar.timestamp for bar in bars]
normalized_entries = sorted(set(int(i) for i in entry_indices))
if normalized_entries[0] < 0 or normalized_entries[-1] >= len(bars):
raise IndexError("entry index out of range")
resolved_timestamps: list[datetime] | None = None
if triple_barrier_spec.max_holding_minutes is not None:
if any(ts is None for ts in timestamps):
raise ValueError(
"BacktestBar.timestamp is required for all bars when "
"triple_barrier_spec.max_holding_minutes is set"
)
resolved_timestamps = cast(list[datetime], timestamps)
labels_by_bar_index: dict[int, int] = {}
for idx in normalized_entries:
labels_by_bar_index[idx] = label_with_triple_barrier(
highs=highs,
lows=lows,
closes=closes,
timestamps=resolved_timestamps,
entry_index=idx,
side=side,
spec=triple_barrier_spec,


@@ -60,7 +60,16 @@ class Settings(BaseSettings):
# This value is used as a fallback when the balance API returns 0 in paper mode.
PAPER_OVERSEAS_CASH: float = Field(default=50000.0, ge=0.0)
USD_BUFFER_MIN: float = Field(default=1000.0, ge=0.0)
US_MIN_PRICE: float = Field(default=5.0, ge=0.0)
STAGED_EXIT_BE_ARM_PCT: float = Field(default=1.2, gt=0.0, le=30.0)
STAGED_EXIT_ARM_PCT: float = Field(default=3.0, gt=0.0, le=100.0)
STOPLOSS_REENTRY_COOLDOWN_MINUTES: int = Field(default=120, ge=1, le=1440)
KR_ATR_STOP_MULTIPLIER_K: float = Field(default=2.0, ge=0.1, le=10.0)
KR_ATR_STOP_MIN_PCT: float = Field(default=-2.0, le=0.0)
KR_ATR_STOP_MAX_PCT: float = Field(default=-7.0, le=0.0)
OVERNIGHT_EXCEPTION_ENABLED: bool = True
SESSION_RISK_RELOAD_ENABLED: bool = True
SESSION_RISK_PROFILES_JSON: str = "{}"
# Trading frequency mode (daily = batch API calls, realtime = per-stock calls)
TRADE_MODE: str = Field(default="daily", pattern="^(daily|realtime)$")
@@ -69,6 +78,8 @@ class Settings(BaseSettings):
ORDER_BLACKOUT_ENABLED: bool = True
ORDER_BLACKOUT_WINDOWS_KST: str = "23:30-00:10"
ORDER_BLACKOUT_QUEUE_MAX: int = Field(default=500, ge=10, le=5000)
BLACKOUT_RECOVERY_PRICE_REVALIDATION_ENABLED: bool = True
BLACKOUT_RECOVERY_MAX_PRICE_DRIFT_PCT: float = Field(default=5.0, ge=0.0, le=100.0)
# Pre-Market Planner
PRE_MARKET_MINUTES: int = Field(default=30, ge=10, le=120)


@@ -70,6 +70,12 @@ BLACKOUT_ORDER_MANAGER = BlackoutOrderManager(
_SESSION_CLOSE_WINDOWS = {"NXT_AFTER", "US_AFTER"}
_RUNTIME_EXIT_STATES: dict[str, PositionState] = {}
_RUNTIME_EXIT_PEAKS: dict[str, float] = {}
_STOPLOSS_REENTRY_COOLDOWN_UNTIL: dict[str, float] = {}
_VOLATILITY_ANALYZER = VolatilityAnalyzer()
_SESSION_RISK_PROFILES_RAW = "{}"
_SESSION_RISK_PROFILES_MAP: dict[str, dict[str, Any]] = {}
_SESSION_RISK_LAST_BY_MARKET: dict[str, str] = {}
_SESSION_RISK_OVERRIDES_BY_MARKET: dict[str, dict[str, Any]] = {}
def safe_float(value: str | float | None, default: float = 0.0) -> float:
@@ -110,6 +116,258 @@ DAILY_TRADE_SESSIONS = 4 # Number of trading sessions per day
TRADE_SESSION_INTERVAL_HOURS = 6 # Hours between sessions
def _resolve_sell_qty_for_pnl(*, sell_qty: int | None, buy_qty: int | None) -> int:
"""Choose quantity basis for SELL outcome PnL with safe fallback."""
resolved_sell = int(sell_qty or 0)
if resolved_sell > 0:
return resolved_sell
return max(0, int(buy_qty or 0))
def _compute_kr_dynamic_stop_loss_pct(
*,
market: MarketInfo | None = None,
entry_price: float,
atr_value: float,
fallback_stop_loss_pct: float,
settings: Settings | None,
) -> float:
"""Compute KR dynamic hard-stop threshold in percent."""
if entry_price <= 0 or atr_value <= 0:
return fallback_stop_loss_pct
k = _resolve_market_setting(
market=market,
settings=settings,
key="KR_ATR_STOP_MULTIPLIER_K",
default=2.0,
)
min_pct = _resolve_market_setting(
market=market,
settings=settings,
key="KR_ATR_STOP_MIN_PCT",
default=-2.0,
)
max_pct = _resolve_market_setting(
market=market,
settings=settings,
key="KR_ATR_STOP_MAX_PCT",
default=-7.0,
)
if max_pct > min_pct:
min_pct, max_pct = max_pct, min_pct
dynamic_stop_pct = -((k * atr_value) / entry_price) * 100.0
return max(max_pct, min(min_pct, dynamic_stop_pct))
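The ATR stop above reduces to clamping `-(k * ATR / entry) * 100` into the configured band. A standalone sketch with the config defaults (`k=2.0`, band -2% to -7%; the `fallback` value is a hypothetical placeholder):

```python
def kr_dynamic_stop_pct(entry_price: float, atr_value: float,
                        k: float = 2.0, min_pct: float = -2.0,
                        max_pct: float = -7.0, fallback: float = -5.0) -> float:
    """Standalone sketch of the ATR-based hard-stop clamp."""
    if entry_price <= 0 or atr_value <= 0:
        return fallback
    if max_pct > min_pct:  # normalize so min_pct is the shallower bound
        min_pct, max_pct = max_pct, min_pct
    dynamic = -((k * atr_value) / entry_price) * 100.0
    # clamp into [max_pct, min_pct]; both are negative, max_pct is the deeper stop
    return max(max_pct, min(min_pct, dynamic))

print(kr_dynamic_stop_pct(10000.0, 100.0))   # -2.0  (raw -2% sits at the shallow bound)
print(kr_dynamic_stop_pct(10000.0, 1000.0))  # -7.0  (raw -20% clamped to the deep bound)
```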
def _stoploss_cooldown_key(*, market: MarketInfo, stock_code: str) -> str:
return f"{market.code}:{stock_code}"
def _parse_session_risk_profiles(settings: Settings | None) -> dict[str, dict[str, Any]]:
if settings is None:
return {}
global _SESSION_RISK_PROFILES_RAW, _SESSION_RISK_PROFILES_MAP
raw = str(getattr(settings, "SESSION_RISK_PROFILES_JSON", "{}") or "{}")
if raw == _SESSION_RISK_PROFILES_RAW:
return _SESSION_RISK_PROFILES_MAP
parsed_map: dict[str, dict[str, Any]] = {}
try:
decoded = json.loads(raw)
if isinstance(decoded, dict):
for session_id, session_values in decoded.items():
if isinstance(session_id, str) and isinstance(session_values, dict):
parsed_map[session_id] = session_values
except (ValueError, TypeError) as exc:
logger.warning("Invalid SESSION_RISK_PROFILES_JSON; using defaults: %s", exc)
parsed_map = {}
_SESSION_RISK_PROFILES_RAW = raw
_SESSION_RISK_PROFILES_MAP = parsed_map
return _SESSION_RISK_PROFILES_MAP
def _coerce_setting_value(*, value: Any, default: Any) -> Any:
if isinstance(default, bool):
if isinstance(value, bool):
return value
if isinstance(value, str):
return value.strip().lower() in {"1", "true", "yes", "on"}
if isinstance(value, (int, float)):
return value != 0
return default
if isinstance(default, int) and not isinstance(default, bool):
try:
return int(value)
except (ValueError, TypeError):
return default
if isinstance(default, float):
return safe_float(value, float(default))
if isinstance(default, str):
return str(value)
return value
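Note the coercion helper keys its behavior off the *default's* type, not the override's. A minimal sketch of the same rules (simplified: the real helper routes floats through `safe_float` and excludes bools from the int branch):

```python
def coerce(value, default):
    """Coerce an override value to the type of its default, falling back on failure."""
    if isinstance(default, bool):
        if isinstance(value, bool):
            return value
        if isinstance(value, str):
            return value.strip().lower() in {"1", "true", "yes", "on"}
        if isinstance(value, (int, float)):
            return value != 0
        return default
    if isinstance(default, int):
        try:
            return int(value)
        except (ValueError, TypeError):
            return default
    if isinstance(default, float):
        try:
            return float(value)
        except (ValueError, TypeError):
            return float(default)
    return value

print(coerce("true", False))  # True  (string truthiness against a bool default)
print(coerce("90", 120))      # 90    (string parsed against an int default)
print(coerce("oops", 120))    # 120   (unparseable -> keep the default)
```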
def _session_risk_overrides(
*,
market: MarketInfo | None,
settings: Settings | None,
) -> dict[str, Any]:
if market is None or settings is None:
return {}
if not bool(getattr(settings, "SESSION_RISK_RELOAD_ENABLED", True)):
return {}
session_id = get_session_info(market).session_id
previous_session = _SESSION_RISK_LAST_BY_MARKET.get(market.code)
if previous_session == session_id:
return _SESSION_RISK_OVERRIDES_BY_MARKET.get(market.code, {})
profile_map = _parse_session_risk_profiles(settings)
merged: dict[str, Any] = {}
default_profile = profile_map.get("default")
if isinstance(default_profile, dict):
merged.update(default_profile)
session_profile = profile_map.get(session_id)
if isinstance(session_profile, dict):
merged.update(session_profile)
_SESSION_RISK_LAST_BY_MARKET[market.code] = session_id
_SESSION_RISK_OVERRIDES_BY_MARKET[market.code] = merged
if previous_session is None:
logger.info(
"Session risk profile initialized for %s: %s (overrides=%s)",
market.code,
session_id,
",".join(sorted(merged.keys())) if merged else "none",
)
else:
logger.info(
"Session risk profile reloaded for %s: %s -> %s (overrides=%s)",
market.code,
previous_session,
session_id,
",".join(sorted(merged.keys())) if merged else "none",
)
return merged
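The override resolution merges the `default` profile first and the session-specific profile on top, so session keys win. A sketch with a hypothetical `SESSION_RISK_PROFILES_JSON` payload:

```python
import json

# Hypothetical payload in the SESSION_RISK_PROFILES_JSON format
profiles_json = (
    '{"default": {"USD_BUFFER_MIN": 800},'
    ' "US_REG": {"USD_BUFFER_MIN": 1200, "US_MIN_PRICE": 3.0}}'
)
profiles = json.loads(profiles_json)

def overrides_for(session_id: str) -> dict:
    """Merge default profile, then the session profile on top (session wins)."""
    merged: dict = {}
    merged.update(profiles.get("default", {}))
    merged.update(profiles.get(session_id, {}))
    return merged

print(overrides_for("US_REG"))   # session value 1200 overrides default 800
print(overrides_for("KRX_REG"))  # no session profile -> default only
```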
def _resolve_market_setting(
*,
market: MarketInfo | None,
settings: Settings | None,
key: str,
default: Any,
) -> Any:
if settings is None:
return default
fallback = getattr(settings, key, default)
overrides = _session_risk_overrides(market=market, settings=settings)
if key not in overrides:
return fallback
return _coerce_setting_value(value=overrides[key], default=fallback)
def _stoploss_cooldown_minutes(
settings: Settings | None,
market: MarketInfo | None = None,
) -> int:
minutes = _resolve_market_setting(
market=market,
settings=settings,
key="STOPLOSS_REENTRY_COOLDOWN_MINUTES",
default=120,
)
return max(1, int(minutes))
def _estimate_pred_down_prob_from_rsi(rsi: float | str | None) -> float:
"""Estimate downside probability from RSI using a simple linear mapping."""
if rsi is None:
return 0.5
rsi_value = max(0.0, min(100.0, safe_float(rsi, 50.0)))
return rsi_value / 100.0
async def _compute_kr_atr_value(
*,
broker: KISBroker,
stock_code: str,
period: int = 14,
) -> float:
"""Compute ATR(period) for KR stocks using daily OHLC."""
days = max(period + 1, 30)
try:
daily_prices = await _retry_connection(
broker.get_daily_prices,
stock_code,
days=days,
label=f"daily_prices:{stock_code}",
)
except ConnectionError as exc:
logger.warning("ATR source unavailable for %s: %s", stock_code, exc)
return 0.0
except Exception as exc:
logger.warning("Unexpected ATR fetch failure for %s: %s", stock_code, exc)
return 0.0
if not isinstance(daily_prices, list):
return 0.0
highs: list[float] = []
lows: list[float] = []
closes: list[float] = []
for row in daily_prices:
if not isinstance(row, dict):
continue
high = safe_float(row.get("high"), 0.0)
low = safe_float(row.get("low"), 0.0)
close = safe_float(row.get("close"), 0.0)
if high <= 0 or low <= 0 or close <= 0:
continue
highs.append(high)
lows.append(low)
closes.append(close)
if len(highs) < period + 1 or len(lows) < period + 1 or len(closes) < period + 1:
return 0.0
return max(0.0, _VOLATILITY_ANALYZER.calculate_atr(highs, lows, closes, period=period))
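`calculate_atr` belongs to the repo's `VolatilityAnalyzer`, whose smoothing is not shown in this diff. For intuition, a plain true-range ATR sketch (simple average; the analyzer may differ, e.g. Wilder smoothing):

```python
def simple_atr(highs: list[float], lows: list[float],
               closes: list[float], period: int = 14) -> float:
    """Average true range over the last `period` bars; 0.0 when data is too short."""
    trs: list[float] = []
    for i in range(1, len(closes)):
        # True range: widest of today's range and gaps vs. yesterday's close
        tr = max(highs[i] - lows[i],
                 abs(highs[i] - closes[i - 1]),
                 abs(lows[i] - closes[i - 1]))
        trs.append(tr)
    if len(trs) < period:
        return 0.0
    return sum(trs[-period:]) / period

print(simple_atr([2, 3, 4], [1, 2, 3], [1.5, 2.5, 3.5], period=2))  # 1.5
```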
async def _inject_staged_exit_features(
*,
market: MarketInfo,
stock_code: str,
open_position: dict[str, Any] | None,
market_data: dict[str, Any],
broker: KISBroker | None,
) -> None:
"""Inject ATR/pred_down_prob used by staged exit evaluation."""
if not open_position:
return
if "pred_down_prob" not in market_data:
market_data["pred_down_prob"] = _estimate_pred_down_prob_from_rsi(
market_data.get("rsi")
)
existing_atr = safe_float(market_data.get("atr_value"), 0.0)
if existing_atr > 0:
return
if market.is_domestic and broker is not None:
market_data["atr_value"] = await _compute_kr_atr_value(
broker=broker,
stock_code=stock_code,
)
return
market_data["atr_value"] = 0.0
async def _retry_connection(coro_factory: Any, *args: Any, label: str = "", **kwargs: Any) -> Any:
"""Call an async function retrying on ConnectionError with exponential backoff.
@@ -453,7 +711,14 @@ def _should_block_overseas_buy_for_fx_buffer(
):
return False, total_cash - order_amount, 0.0
remaining = total_cash - order_amount
required = float(
_resolve_market_setting(
market=market,
settings=settings,
key="USD_BUFFER_MIN",
default=1000.0,
)
)
return remaining < required, remaining, required
@@ -469,7 +734,13 @@ def _should_force_exit_for_overnight(
return True
if settings is None:
return False
overnight_enabled = _resolve_market_setting(
market=market,
settings=settings,
key="OVERNIGHT_EXCEPTION_ENABLED",
default=True,
)
return not bool(overnight_enabled)
def _build_runtime_position_key(
@@ -499,6 +770,7 @@ def _apply_staged_exit_override_for_hold(
open_position: dict[str, Any] | None,
market_data: dict[str, Any],
stock_playbook: Any | None,
settings: Settings | None = None,
) -> TradeDecision:
"""Apply v2 staged exit semantics for HOLD positions using runtime state."""
if decision.action != "HOLD" or not open_position:
@@ -514,6 +786,41 @@ def _apply_staged_exit_override_for_hold(
if stock_playbook and stock_playbook.scenarios:
stop_loss_threshold = stock_playbook.scenarios[0].stop_loss_pct
take_profit_threshold = stock_playbook.scenarios[0].take_profit_pct
atr_value = safe_float(market_data.get("atr_value"), 0.0)
if market.code == "KR":
stop_loss_threshold = _compute_kr_dynamic_stop_loss_pct(
market=market,
entry_price=entry_price,
atr_value=atr_value,
fallback_stop_loss_pct=stop_loss_threshold,
settings=settings,
)
if settings is None:
be_arm_pct = max(0.5, take_profit_threshold * 0.4)
arm_pct = take_profit_threshold
else:
be_arm_pct = max(
0.1,
float(
_resolve_market_setting(
market=market,
settings=settings,
key="STAGED_EXIT_BE_ARM_PCT",
default=1.2,
)
),
)
arm_pct = max(
be_arm_pct,
float(
_resolve_market_setting(
market=market,
settings=settings,
key="STAGED_EXIT_ARM_PCT",
default=3.0,
)
),
)
runtime_key = _build_runtime_position_key(
market_code=market.code,
@@ -532,14 +839,14 @@ def _apply_staged_exit_override_for_hold(
current_state=current_state,
config=ExitRuleConfig(
hard_stop_pct=stop_loss_threshold,
be_arm_pct=be_arm_pct,
arm_pct=arm_pct,
),
inp=ExitRuleInput(
current_price=current_price,
entry_price=entry_price,
peak_price=peak_price,
atr_value=atr_value,
pred_down_prob=safe_float(market_data.get("pred_down_prob"), 0.0),
liquidity_weak=safe_float(market_data.get("volume_ratio"), 1.0) < 1.0,
),
@@ -559,7 +866,7 @@ def _apply_staged_exit_override_for_hold(
elif exit_eval.reason == "arm_take_profit":
rationale = (
f"Take-profit triggered ({pnl_pct:.2f}% >= "
f"{arm_pct:.2f}%)"
)
elif exit_eval.reason == "atr_trailing_stop":
rationale = "ATR trailing-stop triggered"
@@ -697,6 +1004,7 @@ async def process_blackout_recovery_orders(
broker: KISBroker,
overseas_broker: OverseasBroker,
db_conn: Any,
settings: Settings | None = None,
) -> None:
intents = BLACKOUT_ORDER_MANAGER.pop_recovery_batch()
if not intents:
@@ -728,6 +1036,63 @@ async def process_blackout_recovery_orders(
continue
try:
revalidation_enabled = bool(
_resolve_market_setting(
market=market,
settings=settings,
key="BLACKOUT_RECOVERY_PRICE_REVALIDATION_ENABLED",
default=True,
)
)
if revalidation_enabled:
if market.is_domestic:
current_price, _, _ = await _retry_connection(
broker.get_current_price,
intent.stock_code,
label=f"recovery_price:{market.code}:{intent.stock_code}",
)
else:
price_data = await _retry_connection(
overseas_broker.get_overseas_price,
market.exchange_code,
intent.stock_code,
label=f"recovery_price:{market.code}:{intent.stock_code}",
)
current_price = safe_float(price_data.get("output", {}).get("last"), 0.0)
queued_price = float(intent.price)
max_drift_pct = float(
_resolve_market_setting(
market=market,
settings=settings,
key="BLACKOUT_RECOVERY_MAX_PRICE_DRIFT_PCT",
default=5.0,
)
)
if queued_price <= 0 or current_price <= 0:
logger.info(
"Drop queued intent by price revalidation (invalid price): %s %s (%s) queued=%.4f current=%.4f",
intent.order_type,
intent.stock_code,
market.code,
queued_price,
current_price,
)
continue
drift_pct = abs(current_price - queued_price) / queued_price * 100.0
if drift_pct > max_drift_pct:
logger.info(
"Drop queued intent by price revalidation: %s %s (%s) queued=%.4f current=%.4f drift=%.2f%% max=%.2f%%",
intent.order_type,
intent.stock_code,
market.code,
queued_price,
current_price,
drift_pct,
max_drift_pct,
)
continue
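The revalidation guard drops a queued intent when either price is non-positive or the relative drift exceeds the configured cap. As a standalone sketch (5% cap, matching the `BLACKOUT_RECOVERY_MAX_PRICE_DRIFT_PCT` default):

```python
def exceeds_drift(queued_price: float, current_price: float,
                  max_drift_pct: float = 5.0) -> bool:
    """True when a queued order intent should be dropped at recovery time."""
    if queued_price <= 0 or current_price <= 0:
        return True  # invalid prices are also dropped
    drift_pct = abs(current_price - queued_price) / queued_price * 100.0
    return drift_pct > max_drift_pct

print(exceeds_drift(100.0, 104.0))  # False: 4% drift, within the cap
print(exceeds_drift(100.0, 106.0))  # True: 6% drift, dropped
```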
validate_order_policy(
market=market,
order_type=intent.order_type,
@@ -751,6 +1116,20 @@ async def process_blackout_recovery_orders(
accepted = result.get("rt_cd", "0") == "0"
if accepted:
runtime_session_id = get_session_info(market).session_id
log_trade(
conn=db_conn,
stock_code=intent.stock_code,
action=intent.order_type,
confidence=0,
rationale=f"[blackout-recovery] {intent.source}",
quantity=intent.quantity,
price=float(intent.price),
pnl=0.0,
market=market.code,
exchange_code=market.exchange_code,
session_id=runtime_session_id,
)
logger.info(
"Recovered queued order executed: %s %s (%s) qty=%d price=%.4f source=%s",
intent.order_type,
@@ -991,6 +1370,7 @@ async def trading_cycle(
) -> None:
"""Execute one trading cycle for a single stock."""
cycle_start_time = asyncio.get_event_loop().time()
_session_risk_overrides(market=market, settings=settings)
# 1. Fetch market data
price_output: dict[str, Any] = {} # Populated for overseas markets; used for fallback metrics
@@ -1240,7 +1620,14 @@ async def trading_cycle(
# 2.1. Apply market_outlook-based BUY confidence threshold
if decision.action == "BUY":
base_threshold = int(
_resolve_market_setting(
market=market,
settings=settings,
key="CONFIDENCE_THRESHOLD",
default=80,
)
)
outlook = playbook.market_outlook
if outlook == MarketOutlook.BEARISH:
min_confidence = 90
@@ -1292,6 +1679,48 @@ async def trading_cycle(
stock_code,
market.name,
)
elif market.code.startswith("US"):
min_price = float(
_resolve_market_setting(
market=market,
settings=settings,
key="US_MIN_PRICE",
default=5.0,
)
)
if current_price <= min_price:
decision = TradeDecision(
action="HOLD",
confidence=decision.confidence,
rationale=(
f"US minimum price filter blocked BUY "
f"(price={current_price:.4f} <= {min_price:.4f})"
),
)
logger.info(
"BUY suppressed for %s (%s): US min price filter %.4f <= %.4f",
stock_code,
market.name,
current_price,
min_price,
)
if decision.action == "BUY":
cooldown_key = _stoploss_cooldown_key(market=market, stock_code=stock_code)
now_epoch = datetime.now(UTC).timestamp()
cooldown_until = _STOPLOSS_REENTRY_COOLDOWN_UNTIL.get(cooldown_key, 0.0)
if now_epoch < cooldown_until:
remaining = int(cooldown_until - now_epoch)
decision = TradeDecision(
action="HOLD",
confidence=decision.confidence,
rationale=f"Stop-loss reentry cooldown active ({remaining}s remaining)",
)
logger.info(
"BUY suppressed for %s (%s): stop-loss cooldown active (%ds remaining)",
stock_code,
market.name,
remaining,
)
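The cooldown gate compares the current epoch time against a per-key deadline recorded on losing exits. A minimal sketch of that bookkeeping (the `KR:005930` key is a hypothetical example of the `market.code:stock_code` format):

```python
import time

_COOLDOWN_UNTIL: dict[str, float] = {}

def set_cooldown(key: str, minutes: int) -> None:
    # Record the epoch second until which BUYs for this key stay suppressed.
    _COOLDOWN_UNTIL[key] = time.time() + minutes * 60

def cooldown_remaining(key: str) -> int:
    """Seconds left before a BUY is allowed again; 0 when no cooldown is active."""
    return max(0, int(_COOLDOWN_UNTIL.get(key, 0.0) - time.time()))

set_cooldown("KR:005930", 120)          # hypothetical market.code:stock_code key
print(cooldown_remaining("KR:005930"))  # ~7200 seconds remaining
print(cooldown_remaining("US:AAPL"))    # 0 (no cooldown recorded)
```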
if decision.action == "HOLD":
open_position = get_open_position(db_conn, stock_code, market.code)
@@ -1300,6 +1729,13 @@ async def trading_cycle(
market_code=market.code,
stock_code=stock_code,
)
await _inject_staged_exit_features(
market=market,
stock_code=stock_code,
open_position=open_position,
market_data=market_data,
broker=broker,
)
decision = _apply_staged_exit_override_for_hold(
decision=decision,
market=market,
@@ -1307,6 +1743,7 @@ async def trading_cycle(
open_position=open_position,
market_data=market_data,
stock_playbook=stock_playbook,
settings=settings,
)
if open_position and decision.action == "HOLD" and _should_force_exit_for_overnight(
market=market,
@@ -1667,13 +2104,26 @@ async def trading_cycle(
)
if buy_trade and buy_trade.get("price") is not None:
buy_price = float(buy_trade["price"])
buy_qty = int(buy_trade.get("quantity") or 0)
sell_qty = _resolve_sell_qty_for_pnl(sell_qty=quantity, buy_qty=buy_qty)
trade_pnl = (trade_price - buy_price) * sell_qty
decision_logger.update_outcome(
decision_id=buy_trade["decision_id"],
pnl=trade_pnl,
accuracy=1 if trade_pnl > 0 else 0,
)
if trade_pnl < 0:
cooldown_key = _stoploss_cooldown_key(market=market, stock_code=stock_code)
cooldown_minutes = _stoploss_cooldown_minutes(settings, market=market)
_STOPLOSS_REENTRY_COOLDOWN_UNTIL[cooldown_key] = (
datetime.now(UTC).timestamp() + cooldown_minutes * 60
)
logger.info(
"Stop-loss cooldown set for %s (%s): %d minutes",
stock_code,
market.name,
cooldown_minutes,
)
# 6. Log trade with selection context (skip if order was rejected)
if decision.action in ("BUY", "SELL") and not order_succeeded:
@@ -2116,10 +2566,12 @@ async def run_daily_session(
# Process each open market
for market in open_markets:
_session_risk_overrides(market=market, settings=settings)
await process_blackout_recovery_orders(
broker=broker,
overseas_broker=overseas_broker,
db_conn=db_conn,
settings=settings,
)
# Use market-local date for playbook keying
market_today = datetime.now(market.timezone).date()
@@ -2452,6 +2904,48 @@ async def run_daily_session(
stock_code,
market.name,
)
elif market.code.startswith("US"):
min_price = float(
_resolve_market_setting(
market=market,
settings=settings,
key="US_MIN_PRICE",
default=5.0,
)
)
if stock_data["current_price"] <= min_price:
decision = TradeDecision(
action="HOLD",
confidence=decision.confidence,
rationale=(
f"US minimum price filter blocked BUY "
f"(price={stock_data['current_price']:.4f} <= {min_price:.4f})"
),
)
logger.info(
"BUY suppressed for %s (%s): US min price filter %.4f <= %.4f",
stock_code,
market.name,
stock_data["current_price"],
min_price,
)
if decision.action == "BUY":
cooldown_key = _stoploss_cooldown_key(market=market, stock_code=stock_code)
now_epoch = datetime.now(UTC).timestamp()
cooldown_until = _STOPLOSS_REENTRY_COOLDOWN_UNTIL.get(cooldown_key, 0.0)
if now_epoch < cooldown_until:
remaining = int(cooldown_until - now_epoch)
decision = TradeDecision(
action="HOLD",
confidence=decision.confidence,
rationale=f"Stop-loss reentry cooldown active ({remaining}s remaining)",
)
logger.info(
"BUY suppressed for %s (%s): stop-loss cooldown active (%ds remaining)",
stock_code,
market.name,
remaining,
)
if decision.action == "HOLD":
daily_open = get_open_position(db_conn, stock_code, market.code)
if not daily_open:
@@ -2459,6 +2953,13 @@ async def run_daily_session(
market_code=market.code,
stock_code=stock_code,
)
await _inject_staged_exit_features(
market=market,
stock_code=stock_code,
open_position=daily_open,
market_data=stock_data,
broker=broker,
)
decision = _apply_staged_exit_override_for_hold(
decision=decision,
market=market,
@@ -2466,6 +2967,7 @@ async def run_daily_session(
open_position=daily_open,
market_data=stock_data,
stock_playbook=stock_playbook,
settings=settings,
)
if daily_open and decision.action == "HOLD" and _should_force_exit_for_overnight(
market=market,
@@ -2772,13 +3274,32 @@ async def run_daily_session(
)
if buy_trade and buy_trade.get("price") is not None:
buy_price = float(buy_trade["price"])
buy_qty = int(buy_trade.get("quantity") or 0)
sell_qty = _resolve_sell_qty_for_pnl(
sell_qty=quantity,
buy_qty=buy_qty,
)
trade_pnl = (trade_price - buy_price) * sell_qty
decision_logger.update_outcome(
decision_id=buy_trade["decision_id"],
pnl=trade_pnl,
accuracy=1 if trade_pnl > 0 else 0,
)
if trade_pnl < 0:
cooldown_key = _stoploss_cooldown_key(market=market, stock_code=stock_code)
cooldown_minutes = _stoploss_cooldown_minutes(
settings,
market=market,
)
_STOPLOSS_REENTRY_COOLDOWN_UNTIL[cooldown_key] = (
datetime.now(UTC).timestamp() + cooldown_minutes * 60
)
logger.info(
"Stop-loss cooldown set for %s (%s): %d minutes",
stock_code,
market.name,
cooldown_minutes,
)
# Log trade (skip if order was rejected by API)
if decision.action in ("BUY", "SELL") and not order_succeeded:
@@ -3577,6 +4098,7 @@ async def run(settings: Settings) -> None:
break
session_info = get_session_info(market)
_session_risk_overrides(market=market, settings=settings)
logger.info(
"Market session active: %s (%s) session=%s",
market.code,
@@ -3588,6 +4110,7 @@ async def run(settings: Settings) -> None:
broker=broker,
overseas_broker=overseas_broker,
db_conn=db_conn,
settings=settings,
)
# Notify market open if it just opened


@@ -1,5 +1,7 @@
from __future__ import annotations
from datetime import UTC, datetime, timedelta
from src.analysis.backtest_cost_guard import BacktestCostModel
from src.analysis.backtest_pipeline import (
BacktestBar,
@@ -12,6 +14,7 @@ from src.analysis.walk_forward_split import generate_walk_forward_splits
def _bars() -> list[BacktestBar]:
base_ts = datetime(2026, 2, 28, 0, 0, tzinfo=UTC)
closes = [100.0, 101.0, 102.0, 101.5, 103.0, 102.5, 104.0, 103.5, 105.0, 104.5, 106.0, 105.5]
bars: list[BacktestBar] = []
for i, close in enumerate(closes):
@@ -21,6 +24,7 @@ def _bars() -> list[BacktestBar]:
low=close - 1.0,
close=close,
session_id="KRX_REG" if i % 2 == 0 else "US_PRE",
timestamp=base_ts + timedelta(minutes=i),
)
)
return bars
@@ -43,7 +47,7 @@ def test_pipeline_happy_path_returns_fold_and_artifact_contract() -> None:
triple_barrier_spec=TripleBarrierSpec(
take_profit_pct=0.02,
stop_loss_pct=0.01,
max_holding_minutes=3,
),
walk_forward=WalkForwardConfig(
train_size=4,
@@ -84,7 +88,7 @@ def test_pipeline_cost_guard_fail_fast() -> None:
triple_barrier_spec=TripleBarrierSpec(
take_profit_pct=0.02,
stop_loss_pct=0.01,
max_holding_minutes=3,
),
walk_forward=WalkForwardConfig(train_size=2, test_size=1),
cost_model=bad,
@@ -119,7 +123,7 @@ def test_pipeline_deterministic_seed_free_deterministic_result() -> None:
triple_barrier_spec=TripleBarrierSpec(
take_profit_pct=0.02,
stop_loss_pct=0.01,
max_holding_minutes=3,
),
walk_forward=WalkForwardConfig(
train_size=4,
@@ -134,3 +138,31 @@ def test_pipeline_deterministic_seed_free_deterministic_result() -> None:
out1 = run_v2_backtest_pipeline(**cfg)
out2 = run_v2_backtest_pipeline(**cfg)
assert out1 == out2
def test_pipeline_rejects_minutes_spec_when_timestamp_missing() -> None:
bars = _bars()
bars[2] = BacktestBar(
high=bars[2].high,
low=bars[2].low,
close=bars[2].close,
session_id=bars[2].session_id,
timestamp=None,
)
try:
run_v2_backtest_pipeline(
bars=bars,
entry_indices=[0, 1, 2, 3],
side=1,
triple_barrier_spec=TripleBarrierSpec(
take_profit_pct=0.02,
stop_loss_pct=0.01,
max_holding_minutes=3,
),
walk_forward=WalkForwardConfig(train_size=2, test_size=1),
cost_model=_cost_model(),
)
except ValueError as exc:
assert "BacktestBar.timestamp is required" in str(exc)
else:
raise AssertionError("expected timestamp validation error")


@@ -4,6 +4,7 @@ from datetime import UTC, date, datetime
from unittest.mock import ANY, AsyncMock, MagicMock, patch
import pytest
import src.main as main_module
from src.config import Settings
from src.context.layer import ContextLayer
@@ -15,6 +16,14 @@ from src.evolution.scorecard import DailyScorecard
from src.logging.decision_logger import DecisionLogger
from src.main import (
    KILL_SWITCH,
    _SESSION_RISK_LAST_BY_MARKET,
    _SESSION_RISK_OVERRIDES_BY_MARKET,
    _SESSION_RISK_PROFILES_MAP,
    _STOPLOSS_REENTRY_COOLDOWN_UNTIL,
    _apply_staged_exit_override_for_hold,
    _compute_kr_atr_value,
    _estimate_pred_down_prob_from_rsi,
    _inject_staged_exit_features,
    _RUNTIME_EXIT_PEAKS,
    _RUNTIME_EXIT_STATES,
    _should_force_exit_for_overnight,
@@ -27,9 +36,13 @@ from src.main import (
    _extract_held_qty_from_balance,
    _handle_market_close,
    _retry_connection,
    _resolve_market_setting,
    _resolve_sell_qty_for_pnl,
    _run_context_scheduler,
    _run_evolution_loop,
    _start_dashboard_server,
    _stoploss_cooldown_minutes,
    _compute_kr_dynamic_stop_loss_pct,
    handle_domestic_pending_orders,
    handle_overseas_pending_orders,
    process_blackout_recovery_orders,
@@ -92,10 +105,20 @@ def _reset_kill_switch_state() -> None:
    KILL_SWITCH.clear_block()
    _RUNTIME_EXIT_STATES.clear()
    _RUNTIME_EXIT_PEAKS.clear()
    _SESSION_RISK_LAST_BY_MARKET.clear()
    _SESSION_RISK_OVERRIDES_BY_MARKET.clear()
    _SESSION_RISK_PROFILES_MAP.clear()
    main_module._SESSION_RISK_PROFILES_RAW = "__reset__"
    _STOPLOSS_REENTRY_COOLDOWN_UNTIL.clear()
    yield
    KILL_SWITCH.clear_block()
    _RUNTIME_EXIT_STATES.clear()
    _RUNTIME_EXIT_PEAKS.clear()
    _SESSION_RISK_LAST_BY_MARKET.clear()
    _SESSION_RISK_OVERRIDES_BY_MARKET.clear()
    _SESSION_RISK_PROFILES_MAP.clear()
    main_module._SESSION_RISK_PROFILES_RAW = "__reset__"
    _STOPLOSS_REENTRY_COOLDOWN_UNTIL.clear()
class TestExtractAvgPriceFromBalance:
@@ -119,6 +142,266 @@ class TestExtractAvgPriceFromBalance:
        result = _extract_avg_price_from_balance(balance, "005930", is_domestic=True)
        assert result == 0.0
def test_resolve_sell_qty_for_pnl_prefers_sell_qty() -> None:
    assert _resolve_sell_qty_for_pnl(sell_qty=30, buy_qty=100) == 30


def test_resolve_sell_qty_for_pnl_uses_buy_qty_fallback_when_sell_qty_missing() -> None:
    assert _resolve_sell_qty_for_pnl(sell_qty=None, buy_qty=12) == 12


def test_resolve_sell_qty_for_pnl_returns_zero_when_both_missing() -> None:
    assert _resolve_sell_qty_for_pnl(sell_qty=None, buy_qty=None) == 0
def test_compute_kr_dynamic_stop_loss_pct_falls_back_without_atr() -> None:
    out = _compute_kr_dynamic_stop_loss_pct(
        entry_price=100.0,
        atr_value=0.0,
        fallback_stop_loss_pct=-2.0,
        settings=None,
    )
    assert out == -2.0
def test_compute_kr_dynamic_stop_loss_pct_clamps_to_min_and_max() -> None:
    # Small ATR -> clamp to min (-2%)
    out_small = _compute_kr_dynamic_stop_loss_pct(
        entry_price=100.0,
        atr_value=0.2,
        fallback_stop_loss_pct=-2.0,
        settings=None,
    )
    assert out_small == -2.0
    # Large ATR -> clamp to max (-7%)
    out_large = _compute_kr_dynamic_stop_loss_pct(
        entry_price=100.0,
        atr_value=10.0,
        fallback_stop_loss_pct=-2.0,
        settings=None,
    )
    assert out_large == -7.0
def test_compute_kr_dynamic_stop_loss_pct_uses_settings_values() -> None:
    settings = MagicMock(
        KR_ATR_STOP_MULTIPLIER_K=3.0,
        KR_ATR_STOP_MIN_PCT=-1.5,
        KR_ATR_STOP_MAX_PCT=-6.0,
    )
    out = _compute_kr_dynamic_stop_loss_pct(
        entry_price=100.0,
        atr_value=1.0,
        fallback_stop_loss_pct=-2.0,
        settings=settings,
    )
    assert out == -3.0
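Taken together, the three stop-loss tests imply a shape like the following: a stop proportional to ATR relative to entry price, clamped between a minimum and a maximum loss percentage, with a static fallback when ATR is unavailable. A sketch under assumed defaults (multiplier k=2.0, min -2%, max -7%, matching the test expectations; the real function is `_compute_kr_dynamic_stop_loss_pct` in `src.main`):

```python
def compute_dynamic_stop_loss_pct(
    entry_price: float,
    atr_value: float,
    fallback_stop_loss_pct: float,
    k: float = 2.0,         # assumed default for KR_ATR_STOP_MULTIPLIER_K
    min_pct: float = -2.0,  # tightest stop allowed
    max_pct: float = -7.0,  # widest stop allowed
) -> float:
    """ATR-proportional stop-loss percent, clamped to [max_pct, min_pct]."""
    if atr_value <= 0.0 or entry_price <= 0.0:
        return fallback_stop_loss_pct  # no ATR signal -> static fallback
    raw = -(k * atr_value / entry_price) * 100.0
    # min() keeps the stop at least as wide as min_pct; max() caps it at max_pct
    return max(max_pct, min(min_pct, raw))
```

With atr=1.0, k=3.0 the raw stop is -3.0%, inside the clamp band, which reproduces the settings-driven case above.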
def test_resolve_market_setting_uses_session_profile_override() -> None:
    settings = Settings(
        KIS_APP_KEY="k",
        KIS_APP_SECRET="s",
        KIS_ACCOUNT_NO="12345678-01",
        GEMINI_API_KEY="g",
        SESSION_RISK_PROFILES_JSON='{"US_PRE": {"US_MIN_PRICE": 7.5}}',
    )
    market = MagicMock()
    market.code = "US_NASDAQ"
    with patch("src.main.get_session_info", return_value=MagicMock(session_id="US_PRE")):
        value = _resolve_market_setting(
            market=market,
            settings=settings,
            key="US_MIN_PRICE",
            default=5.0,
        )
    assert value == pytest.approx(7.5)
def test_stoploss_cooldown_minutes_uses_session_override() -> None:
    settings = Settings(
        KIS_APP_KEY="k",
        KIS_APP_SECRET="s",
        KIS_ACCOUNT_NO="12345678-01",
        GEMINI_API_KEY="g",
        STOPLOSS_REENTRY_COOLDOWN_MINUTES=120,
        SESSION_RISK_PROFILES_JSON='{"NXT_AFTER": {"STOPLOSS_REENTRY_COOLDOWN_MINUTES": 45}}',
    )
    market = MagicMock()
    market.code = "KR"
    with patch("src.main.get_session_info", return_value=MagicMock(session_id="NXT_AFTER")):
        value = _stoploss_cooldown_minutes(settings, market=market)
    assert value == 45
def test_resolve_market_setting_ignores_profile_when_reload_disabled() -> None:
    settings = Settings(
        KIS_APP_KEY="k",
        KIS_APP_SECRET="s",
        KIS_ACCOUNT_NO="12345678-01",
        GEMINI_API_KEY="g",
        US_MIN_PRICE=5.0,
        SESSION_RISK_RELOAD_ENABLED=False,
        SESSION_RISK_PROFILES_JSON='{"US_PRE": {"US_MIN_PRICE": 9.5}}',
    )
    market = MagicMock()
    market.code = "US_NASDAQ"
    with patch("src.main.get_session_info", return_value=MagicMock(session_id="US_PRE")):
        value = _resolve_market_setting(
            market=market,
            settings=settings,
            key="US_MIN_PRICE",
            default=5.0,
        )
    assert value == pytest.approx(5.0)
def test_resolve_market_setting_falls_back_on_invalid_profile_json() -> None:
    settings = Settings(
        KIS_APP_KEY="k",
        KIS_APP_SECRET="s",
        KIS_ACCOUNT_NO="12345678-01",
        GEMINI_API_KEY="g",
        US_MIN_PRICE=5.0,
        SESSION_RISK_PROFILES_JSON="{invalid-json",
    )
    market = MagicMock()
    market.code = "US_NASDAQ"
    with patch("src.main.get_session_info", return_value=MagicMock(session_id="US_PRE")):
        value = _resolve_market_setting(
            market=market,
            settings=settings,
            key="US_MIN_PRICE",
            default=5.0,
        )
    assert value == pytest.approx(5.0)
def test_resolve_market_setting_coerces_bool_string_override() -> None:
    settings = Settings(
        KIS_APP_KEY="k",
        KIS_APP_SECRET="s",
        KIS_ACCOUNT_NO="12345678-01",
        GEMINI_API_KEY="g",
        OVERNIGHT_EXCEPTION_ENABLED=True,
        SESSION_RISK_PROFILES_JSON='{"US_AFTER": {"OVERNIGHT_EXCEPTION_ENABLED": "false"}}',
    )
    market = MagicMock()
    market.code = "US_NASDAQ"
    with patch("src.main.get_session_info", return_value=MagicMock(session_id="US_AFTER")):
        value = _resolve_market_setting(
            market=market,
            settings=settings,
            key="OVERNIGHT_EXCEPTION_ENABLED",
            default=True,
        )
    assert value is False
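The four `_resolve_market_setting` tests above jointly describe the resolution order: parse `SESSION_RISK_PROFILES_JSON`, look up the current `session_id`, coerce string booleans, and fall back to the base value when reload is disabled or the JSON is invalid. A sketch of that lookup with hypothetical names (the real resolver in `src.main` also consults `get_session_info` and the market code):

```python
import json
from typing import Any


def resolve_session_override(
    profiles_json: str,
    session_id: str,
    key: str,
    base_value: Any,
    reload_enabled: bool = True,
) -> Any:
    """Return the per-session override for `key`, or `base_value` on any miss or error."""
    if not reload_enabled:
        return base_value  # feature flag off -> base settings only
    try:
        profiles = json.loads(profiles_json)
    except (json.JSONDecodeError, TypeError):
        return base_value  # invalid JSON must never break trading
    value = profiles.get(session_id, {}).get(key, base_value)
    if isinstance(base_value, bool) and isinstance(value, str):
        # env-style string booleans ("false", "true") are coerced to bool
        return value.strip().lower() in ("1", "true", "yes")
    return value
```

Failing safe in every error branch matters here: a typo in the profile JSON degrades to the base setting instead of raising mid-cycle.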
def test_estimate_pred_down_prob_from_rsi_uses_linear_mapping() -> None:
    assert _estimate_pred_down_prob_from_rsi(None) == 0.5
    assert _estimate_pred_down_prob_from_rsi(0.0) == 0.0
    assert _estimate_pred_down_prob_from_rsi(50.0) == 0.5
    assert _estimate_pred_down_prob_from_rsi(100.0) == 1.0
@pytest.mark.asyncio
async def test_compute_kr_atr_value_returns_zero_on_short_series() -> None:
    broker = MagicMock()
    broker.get_daily_prices = AsyncMock(
        return_value=[{"high": 101.0, "low": 99.0, "close": 100.0}] * 10
    )
    atr = await _compute_kr_atr_value(broker=broker, stock_code="005930")
    assert atr == 0.0
@pytest.mark.asyncio
async def test_inject_staged_exit_features_sets_pred_down_prob_and_atr_for_kr() -> None:
    market = MagicMock()
    market.is_domestic = True
    stock_data: dict[str, float] = {"rsi": 65.0}
    broker = MagicMock()
    broker.get_daily_prices = AsyncMock(
        return_value=[
            {"high": 102.0 + i, "low": 98.0 + i, "close": 100.0 + i}
            for i in range(40)
        ]
    )
    await _inject_staged_exit_features(
        market=market,
        stock_code="005930",
        open_position={"price": 100.0, "quantity": 1},
        market_data=stock_data,
        broker=broker,
    )
    assert stock_data["pred_down_prob"] == pytest.approx(0.65)
    assert stock_data["atr_value"] > 0.0
def test_apply_staged_exit_uses_independent_arm_threshold_settings() -> None:
    market = MagicMock()
    market.code = "KR"
    market.name = "Korea"
    decision = MagicMock()
    decision.action = "HOLD"
    decision.confidence = 70
    decision.rationale = "hold"
    settings = Settings(
        KIS_APP_KEY="k",
        KIS_APP_SECRET="s",
        KIS_ACCOUNT_NO="12345678-01",
        GEMINI_API_KEY="g",
        STAGED_EXIT_BE_ARM_PCT=2.2,
        STAGED_EXIT_ARM_PCT=5.4,
    )
    captured: dict[str, float] = {}

    def _fake_eval(**kwargs):  # type: ignore[no-untyped-def]
        cfg = kwargs["config"]
        captured["be_arm_pct"] = cfg.be_arm_pct
        captured["arm_pct"] = cfg.arm_pct

        class _Out:
            should_exit = False
            reason = "none"
            state = PositionState.HOLDING

        return _Out()

    with patch("src.main.evaluate_exit", side_effect=_fake_eval):
        out = _apply_staged_exit_override_for_hold(
            decision=decision,
            market=market,
            stock_code="005930",
            open_position={"price": 100.0, "quantity": 1, "decision_id": "d1", "timestamp": "t1"},
            market_data={"current_price": 101.0, "rsi": 60.0, "pred_down_prob": 0.6},
            stock_playbook=None,
            settings=settings,
        )

    assert out is decision
    assert captured["be_arm_pct"] == pytest.approx(2.2)
    assert captured["arm_pct"] == pytest.approx(5.4)
    def test_returns_zero_when_field_empty_string(self) -> None:
        """Returns 0.0 when pchs_avg_pric is an empty string."""
        balance = {"output1": [{"pdno": "005930", "pchs_avg_pric": ""}]}
@@ -1553,7 +1836,10 @@ class TestScenarioEngineIntegration:
            signal="oversold", score=85.0,
        )
        with (
            patch("src.main.log_trade"),
            patch("src.main.get_session_info", return_value=MagicMock(session_id="KRX_REG")),
        ):
            await trading_cycle(
                broker=mock_broker,
                overseas_broker=MagicMock(),
@@ -1907,6 +2193,7 @@ class TestScenarioEngineIntegration:
        decision_logger.log_decision.assert_called_once()
        call_kwargs = decision_logger.log_decision.call_args.kwargs
        assert call_kwargs["session_id"] == "KRX_REG"
        assert "scenario_match" in call_kwargs["context_snapshot"]
        assert call_kwargs["context_snapshot"]["scenario_match"]["rsi"] == 45.0
@@ -1994,7 +2281,7 @@ async def test_sell_updates_original_buy_decision_outcome() -> None:
            ],
        }
    )
    broker.send_order = AsyncMock(return_value={"rt_cd": "0", "msg1": "OK"})
    overseas_broker = MagicMock()

    engine = MagicMock(spec=ScenarioEngine)
@@ -2040,6 +2327,105 @@ async def test_sell_updates_original_buy_decision_outcome() -> None:
    assert updated_buy is not None
    assert updated_buy.outcome_pnl == 20.0
    assert updated_buy.outcome_accuracy == 1
    assert "KR:005930" not in _STOPLOSS_REENTRY_COOLDOWN_UNTIL
@pytest.mark.asyncio
async def test_stoploss_reentry_cooldown_blocks_buy_when_active() -> None:
    _STOPLOSS_REENTRY_COOLDOWN_UNTIL["KR:005930"] = datetime.now(UTC).timestamp() + 300
    db_conn = init_db(":memory:")
    broker = MagicMock()
    broker.get_current_price = AsyncMock(return_value=(100.0, 0.0, 0.0))
    broker.get_balance = AsyncMock(
        return_value={
            "output1": [],
            "output2": [{"tot_evlu_amt": "100000", "dnca_tot_amt": "50000", "pchs_amt_smtl_amt": "50000"}],
        }
    )
    broker.send_order = AsyncMock(return_value={"msg1": "OK"})
    market = MagicMock()
    market.name = "Korea"
    market.code = "KR"
    market.exchange_code = "KRX"
    market.is_domestic = True

    await trading_cycle(
        broker=broker,
        overseas_broker=MagicMock(),
        scenario_engine=MagicMock(evaluate=MagicMock(return_value=_make_buy_match("005930"))),
        playbook=_make_playbook(),
        risk=MagicMock(validate_order=MagicMock(), check_circuit_breaker=MagicMock()),
        db_conn=db_conn,
        decision_logger=DecisionLogger(db_conn),
        context_store=MagicMock(get_latest_timeframe=MagicMock(return_value=None), set_context=MagicMock()),
        criticality_assessor=MagicMock(
            assess_market_conditions=MagicMock(return_value=MagicMock(value="NORMAL")),
            get_timeout=MagicMock(return_value=5.0),
        ),
        telegram=MagicMock(
            notify_trade_execution=AsyncMock(),
            notify_fat_finger=AsyncMock(),
            notify_circuit_breaker=AsyncMock(),
            notify_scenario_matched=AsyncMock(),
        ),
        market=market,
        stock_code="005930",
        scan_candidates={},
        settings=MagicMock(POSITION_SIZING_ENABLED=False, CONFIDENCE_THRESHOLD=80, MODE="paper"),
    )

    broker.send_order.assert_not_called()
@pytest.mark.asyncio
async def test_stoploss_reentry_cooldown_allows_buy_after_expiry() -> None:
    _STOPLOSS_REENTRY_COOLDOWN_UNTIL["KR:005930"] = datetime.now(UTC).timestamp() - 10
    db_conn = init_db(":memory:")
    broker = MagicMock()
    broker.get_current_price = AsyncMock(return_value=(100.0, 0.0, 0.0))
    broker.get_balance = AsyncMock(
        return_value={
            "output1": [],
            "output2": [{"tot_evlu_amt": "100000", "dnca_tot_amt": "50000", "pchs_amt_smtl_amt": "50000"}],
        }
    )
    broker.send_order = AsyncMock(return_value={"msg1": "OK"})
    market = MagicMock()
    market.name = "Korea"
    market.code = "KR"
    market.exchange_code = "KRX"
    market.is_domestic = True

    await trading_cycle(
        broker=broker,
        overseas_broker=MagicMock(),
        scenario_engine=MagicMock(evaluate=MagicMock(return_value=_make_buy_match("005930"))),
        playbook=_make_playbook(),
        risk=MagicMock(validate_order=MagicMock(), check_circuit_breaker=MagicMock()),
        db_conn=db_conn,
        decision_logger=DecisionLogger(db_conn),
        context_store=MagicMock(get_latest_timeframe=MagicMock(return_value=None), set_context=MagicMock()),
        criticality_assessor=MagicMock(
            assess_market_conditions=MagicMock(return_value=MagicMock(value="NORMAL")),
            get_timeout=MagicMock(return_value=5.0),
        ),
        telegram=MagicMock(
            notify_trade_execution=AsyncMock(),
            notify_fat_finger=AsyncMock(),
            notify_circuit_breaker=AsyncMock(),
            notify_scenario_matched=AsyncMock(),
        ),
        market=market,
        stock_code="005930",
        scan_candidates={},
        settings=MagicMock(POSITION_SIZING_ENABLED=False, CONFIDENCE_THRESHOLD=80, MODE="paper"),
    )

    broker.send_order.assert_called_once()
@pytest.mark.asyncio
@@ -2750,6 +3136,9 @@ async def test_sell_order_uses_broker_balance_qty_not_db() -> None:
    assert call_kwargs["order_type"] == "SELL"
    # Must use broker-confirmed qty (5), NOT DB-recorded ordered qty (10)
    assert call_kwargs["quantity"] == 5
    updated_buy = decision_logger.get_decision_by_id(buy_decision_id)
    assert updated_buy is not None
    assert updated_buy.outcome_pnl == -25.0
@pytest.mark.asyncio
@@ -4478,6 +4867,110 @@ async def test_run_daily_session_applies_staged_exit_override_on_hold() -> None:
    assert broker.send_order.call_args.kwargs["order_type"] == "SELL"
@pytest.mark.asyncio
async def test_run_daily_session_passes_runtime_session_id_to_decision_and_trade_logs() -> None:
    """Daily session must explicitly forward runtime session_id to decision/trade logs."""
    from src.analysis.smart_scanner import ScanCandidate

    db_conn = init_db(":memory:")
    settings = Settings(
        KIS_APP_KEY="k",
        KIS_APP_SECRET="s",
        KIS_ACCOUNT_NO="12345678-01",
        GEMINI_API_KEY="g",
        MODE="paper",
    )
    broker = MagicMock()
    broker.get_balance = AsyncMock(
        return_value={
            "output1": [],
            "output2": [
                {
                    "tot_evlu_amt": "100000",
                    "dnca_tot_amt": "50000",
                    "pchs_amt_smtl_amt": "50000",
                }
            ],
        }
    )
    broker.get_current_price = AsyncMock(return_value=(100.0, 1.0, 0.0))
    broker.send_order = AsyncMock(return_value={"rt_cd": "0", "msg1": "OK"})
    market = MagicMock()
    market.name = "Korea"
    market.code = "KR"
    market.exchange_code = "KRX"
    market.is_domestic = True
    market.timezone = __import__("zoneinfo").ZoneInfo("Asia/Seoul")
    smart_scanner = MagicMock()
    smart_scanner.scan = AsyncMock(
        return_value=[
            ScanCandidate(
                stock_code="005930",
                name="Samsung",
                price=100.0,
                volume=1_000_000.0,
                volume_ratio=2.0,
                rsi=45.0,
                signal="momentum",
                score=80.0,
            )
        ]
    )
    playbook_store = MagicMock()
    playbook_store.load = MagicMock(return_value=_make_playbook("KR"))
    scenario_engine = MagicMock(spec=ScenarioEngine)
    scenario_engine.evaluate = MagicMock(return_value=_make_buy_match("005930"))
    risk = MagicMock()
    risk.check_circuit_breaker = MagicMock()
    risk.validate_order = MagicMock()
    decision_logger = MagicMock()
    decision_logger.log_decision = MagicMock(return_value="d1")
    telegram = MagicMock()
    telegram.notify_trade_execution = AsyncMock()
    telegram.notify_scenario_matched = AsyncMock()

    async def _passthrough(fn, *a, label: str = "", **kw):  # type: ignore[override]
        return await fn(*a, **kw)
    with (
        patch("src.main.get_open_position", return_value=None),
        patch("src.main.get_open_markets", return_value=[market]),
        patch("src.main.get_session_info", return_value=MagicMock(session_id="KRX_REG")),
        patch("src.main._retry_connection", new=_passthrough),
        patch("src.main.log_trade") as mock_log_trade,
    ):
        await run_daily_session(
            broker=broker,
            overseas_broker=MagicMock(),
            scenario_engine=scenario_engine,
            playbook_store=playbook_store,
            pre_market_planner=MagicMock(),
            risk=risk,
            db_conn=db_conn,
            decision_logger=decision_logger,
            context_store=MagicMock(),
            criticality_assessor=MagicMock(),
            telegram=telegram,
            settings=settings,
            smart_scanner=smart_scanner,
            daily_start_eval=0.0,
        )

    decision_logger.log_decision.assert_called_once()
    assert decision_logger.log_decision.call_args.kwargs["session_id"] == "KRX_REG"
    assert mock_log_trade.call_count >= 1
    for call in mock_log_trade.call_args_list:
        assert call.kwargs.get("session_id") == "KRX_REG"
# ---------------------------------------------------------------------------
# sync_positions_from_broker — startup DB sync tests (issue #206)
# ---------------------------------------------------------------------------
@@ -5654,6 +6147,149 @@ async def test_order_policy_rejection_skips_order_execution() -> None:
    broker.send_order.assert_not_called()
@pytest.mark.asyncio
@pytest.mark.parametrize(
    ("price", "should_block"),
    [
        (4.99, True),
        (5.00, True),
        (5.01, False),
    ],
)
async def test_us_min_price_filter_boundary(price: float, should_block: bool) -> None:
    db_conn = init_db(":memory:")
    decision_logger = DecisionLogger(db_conn)
    broker = MagicMock()
    broker.get_balance = AsyncMock(return_value={"output1": [], "output2": [{}]})
    overseas_broker = MagicMock()
    overseas_broker.get_overseas_price = AsyncMock(
        return_value={"output": {"last": str(price), "rate": "0.0"}}
    )
    overseas_broker.get_overseas_balance = AsyncMock(
        return_value={"output1": [], "output2": [{"frcr_evlu_tota": "10000", "frcr_buy_amt_smtl": "0"}]}
    )
    overseas_broker.get_overseas_buying_power = AsyncMock(
        return_value={"output": {"ovrs_ord_psbl_amt": "10000"}}
    )
    overseas_broker.send_overseas_order = AsyncMock(return_value={"rt_cd": "0", "msg1": "OK"})
    market = MagicMock()
    market.name = "NASDAQ"
    market.code = "US_NASDAQ"
    market.exchange_code = "NASD"
    market.is_domestic = False
    telegram = MagicMock()
    telegram.notify_trade_execution = AsyncMock()
    telegram.notify_fat_finger = AsyncMock()
    telegram.notify_circuit_breaker = AsyncMock()
    telegram.notify_scenario_matched = AsyncMock()
    settings = MagicMock()
    settings.POSITION_SIZING_ENABLED = False
    settings.CONFIDENCE_THRESHOLD = 80
    settings.MODE = "paper"
    settings.PAPER_OVERSEAS_CASH = 50000
    settings.US_MIN_PRICE = 5.0
    settings.USD_BUFFER_MIN = 1000.0
    await trading_cycle(
        broker=broker,
        overseas_broker=overseas_broker,
        scenario_engine=MagicMock(evaluate=MagicMock(return_value=_make_buy_match("AAPL"))),
        playbook=_make_playbook("US_NASDAQ"),
        risk=MagicMock(validate_order=MagicMock(), check_circuit_breaker=MagicMock()),
        db_conn=db_conn,
        decision_logger=decision_logger,
        context_store=MagicMock(
            get_latest_timeframe=MagicMock(return_value=None),
            set_context=MagicMock(),
        ),
        criticality_assessor=MagicMock(
            assess_market_conditions=MagicMock(return_value=MagicMock(value="NORMAL")),
            get_timeout=MagicMock(return_value=5.0),
        ),
        telegram=telegram,
        market=market,
        stock_code="AAPL",
        scan_candidates={},
        settings=settings,
    )

    if should_block:
        overseas_broker.send_overseas_order.assert_not_called()
    else:
        overseas_broker.send_overseas_order.assert_called_once()
@pytest.mark.asyncio
async def test_us_min_price_filter_not_applied_to_kr_market() -> None:
    db_conn = init_db(":memory:")
    decision_logger = DecisionLogger(db_conn)
    broker = MagicMock()
    broker.get_current_price = AsyncMock(return_value=(4.0, 0.0, 0.0))
    broker.get_balance = AsyncMock(
        return_value={
            "output1": [],
            "output2": [
                {
                    "tot_evlu_amt": "100000",
                    "dnca_tot_amt": "50000",
                    "pchs_amt_smtl_amt": "50000",
                }
            ],
        }
    )
    broker.send_order = AsyncMock(return_value={"msg1": "OK"})
    market = MagicMock()
    market.name = "Korea"
    market.code = "KR"
    market.exchange_code = "KRX"
    market.is_domestic = True
    telegram = MagicMock()
    telegram.notify_trade_execution = AsyncMock()
    telegram.notify_fat_finger = AsyncMock()
    telegram.notify_circuit_breaker = AsyncMock()
    telegram.notify_scenario_matched = AsyncMock()
    settings = MagicMock()
    settings.POSITION_SIZING_ENABLED = False
    settings.CONFIDENCE_THRESHOLD = 80
    settings.MODE = "paper"
    settings.US_MIN_PRICE = 5.0
    settings.USD_BUFFER_MIN = 1000.0
    await trading_cycle(
        broker=broker,
        overseas_broker=MagicMock(),
        scenario_engine=MagicMock(evaluate=MagicMock(return_value=_make_buy_match("005930"))),
        playbook=_make_playbook(),
        risk=MagicMock(validate_order=MagicMock(), check_circuit_breaker=MagicMock()),
        db_conn=db_conn,
        decision_logger=decision_logger,
        context_store=MagicMock(
            get_latest_timeframe=MagicMock(return_value=None),
            set_context=MagicMock(),
        ),
        criticality_assessor=MagicMock(
            assess_market_conditions=MagicMock(return_value=MagicMock(value="NORMAL")),
            get_timeout=MagicMock(return_value=5.0),
        ),
        telegram=telegram,
        market=market,
        stock_code="005930",
        scan_candidates={},
        settings=settings,
    )

    broker.send_order.assert_called_once()
def test_overnight_policy_prioritizes_killswitch_over_exception() -> None:
    market = MagicMock()
    with patch("src.main.get_session_info", return_value=MagicMock(session_id="US_AFTER")):
@@ -5812,6 +6448,7 @@ async def test_process_blackout_recovery_executes_valid_intents() -> None:
    """Recovery must execute queued intents that pass revalidation."""
    db_conn = init_db(":memory:")
    broker = MagicMock()
    broker.get_current_price = AsyncMock(return_value=(100.0, 0.0, 0.0))
    broker.send_order = AsyncMock(return_value={"rt_cd": "0", "msg1": "OK"})
    overseas_broker = MagicMock()
@@ -5837,6 +6474,7 @@ async def test_process_blackout_recovery_executes_valid_intents() -> None:
        patch("src.main.MARKETS", {"KR": market}),
        patch("src.main.get_open_position", return_value=None),
        patch("src.main.validate_order_policy"),
        patch("src.main.get_session_info", return_value=MagicMock(session_id="KRX_REG")),
    ):
        await process_blackout_recovery_orders(
            broker=broker,
@@ -5845,6 +6483,19 @@ async def test_process_blackout_recovery_executes_valid_intents() -> None:
        )

    broker.send_order.assert_called_once()
    row = db_conn.execute(
        """
        SELECT action, quantity, session_id, rationale
        FROM trades
        WHERE stock_code = '005930'
        ORDER BY id DESC LIMIT 1
        """
    ).fetchone()
    assert row is not None
    assert row[0] == "BUY"
    assert row[1] == 1
    assert row[2] == "KRX_REG"
    assert row[3].startswith("[blackout-recovery]")
@pytest.mark.asyncio
@@ -5852,6 +6503,7 @@ async def test_process_blackout_recovery_drops_policy_rejected_intent() -> None:
    """Policy-rejected queued intents must not be requeued."""
    db_conn = init_db(":memory:")
    broker = MagicMock()
    broker.get_current_price = AsyncMock(return_value=(100.0, 0.0, 0.0))
    broker.send_order = AsyncMock(return_value={"rt_cd": "0", "msg1": "OK"})
    overseas_broker = MagicMock()
@@ -5895,6 +6547,149 @@ async def test_process_blackout_recovery_drops_policy_rejected_intent() -> None:
    blackout_manager.requeue.assert_not_called()
@pytest.mark.asyncio
async def test_process_blackout_recovery_drops_intent_on_excessive_price_drift() -> None:
    """Queued intent is dropped when current market price drift exceeds threshold."""
    db_conn = init_db(":memory:")
    broker = MagicMock()
    broker.get_current_price = AsyncMock(return_value=(106.0, 0.0, 0.0))
    broker.send_order = AsyncMock(return_value={"rt_cd": "0", "msg1": "OK"})
    overseas_broker = MagicMock()
    market = MagicMock()
    market.code = "KR"
    market.exchange_code = "KRX"
    market.is_domestic = True
    intent = MagicMock()
    intent.market_code = "KR"
    intent.stock_code = "005930"
    intent.order_type = "BUY"
    intent.quantity = 1
    intent.price = 100.0
    intent.source = "test"
    intent.attempts = 0
    blackout_manager = MagicMock()
    blackout_manager.pop_recovery_batch.return_value = [intent]

    with (
        patch("src.main.BLACKOUT_ORDER_MANAGER", blackout_manager),
        patch("src.main.MARKETS", {"KR": market}),
        patch("src.main.get_open_position", return_value=None),
        patch("src.main.validate_order_policy") as validate_policy,
    ):
        await process_blackout_recovery_orders(
            broker=broker,
            overseas_broker=overseas_broker,
            db_conn=db_conn,
            settings=Settings(
                KIS_APP_KEY="k",
                KIS_APP_SECRET="s",
                KIS_ACCOUNT_NO="12345678-01",
                GEMINI_API_KEY="g",
                BLACKOUT_RECOVERY_MAX_PRICE_DRIFT_PCT=5.0,
            ),
        )

    broker.send_order.assert_not_called()
    validate_policy.assert_not_called()
@pytest.mark.asyncio
async def test_process_blackout_recovery_drops_overseas_intent_on_excessive_price_drift() -> None:
    """Overseas queued intent is dropped when price drift exceeds threshold."""
    db_conn = init_db(":memory:")
    broker = MagicMock()
    broker.send_order = AsyncMock(return_value={"rt_cd": "0", "msg1": "OK"})
    overseas_broker = MagicMock()
    overseas_broker.get_overseas_price = AsyncMock(return_value={"output": {"last": "106.0"}})
    overseas_broker.send_overseas_order = AsyncMock(return_value={"rt_cd": "0", "msg1": "OK"})
    market = MagicMock()
    market.code = "US_NASDAQ"
    market.exchange_code = "NASD"
    market.is_domestic = False
    intent = MagicMock()
    intent.market_code = "US_NASDAQ"
    intent.stock_code = "AAPL"
    intent.order_type = "BUY"
    intent.quantity = 1
    intent.price = 100.0
    intent.source = "test"
    intent.attempts = 0
    blackout_manager = MagicMock()
    blackout_manager.pop_recovery_batch.return_value = [intent]

    with (
        patch("src.main.BLACKOUT_ORDER_MANAGER", blackout_manager),
        patch("src.main.MARKETS", {"US_NASDAQ": market}),
        patch("src.main.get_open_position", return_value=None),
        patch("src.main.validate_order_policy") as validate_policy,
    ):
        await process_blackout_recovery_orders(
            broker=broker,
            overseas_broker=overseas_broker,
            db_conn=db_conn,
            settings=Settings(
                KIS_APP_KEY="k",
                KIS_APP_SECRET="s",
                KIS_ACCOUNT_NO="12345678-01",
                GEMINI_API_KEY="g",
                BLACKOUT_RECOVERY_MAX_PRICE_DRIFT_PCT=5.0,
            ),
        )

    overseas_broker.send_overseas_order.assert_not_called()
    validate_policy.assert_not_called()
@pytest.mark.asyncio
async def test_process_blackout_recovery_requeues_intent_when_price_lookup_fails() -> None:
    """Price lookup failure must requeue intent for a later retry."""
    db_conn = init_db(":memory:")
    broker = MagicMock()
    broker.get_current_price = AsyncMock(side_effect=ConnectionError("price API down"))
    broker.send_order = AsyncMock(return_value={"rt_cd": "0", "msg1": "OK"})
    overseas_broker = MagicMock()
    market = MagicMock()
    market.code = "KR"
    market.exchange_code = "KRX"
    market.is_domestic = True
    intent = MagicMock()
    intent.market_code = "KR"
    intent.stock_code = "005930"
    intent.order_type = "BUY"
    intent.quantity = 1
    intent.price = 100.0
    intent.source = "test"
    intent.attempts = 0
    blackout_manager = MagicMock()
    blackout_manager.pop_recovery_batch.return_value = [intent]

    with (
        patch("src.main.BLACKOUT_ORDER_MANAGER", blackout_manager),
        patch("src.main.MARKETS", {"KR": market}),
        patch("src.main.get_open_position", return_value=None),
        patch("src.main.validate_order_policy") as validate_policy,
    ):
        await process_blackout_recovery_orders(
            broker=broker,
            overseas_broker=overseas_broker,
            db_conn=db_conn,
        )

    broker.send_order.assert_not_called()
    validate_policy.assert_not_called()
    blackout_manager.requeue.assert_called_once_with(intent)
    assert intent.attempts == 1
@pytest.mark.asyncio
async def test_trigger_emergency_kill_switch_executes_operational_steps() -> None:
    """Emergency kill switch should execute cancel/refresh/reduce/notify callbacks."""

View File

@@ -0,0 +1,116 @@
from __future__ import annotations

import importlib.util
from pathlib import Path
from types import SimpleNamespace


def _load_module():
    script_path = Path(__file__).resolve().parents[1] / "scripts" / "validate_governance_assets.py"
    spec = importlib.util.spec_from_file_location("validate_governance_assets", script_path)
    assert spec is not None
    assert spec.loader is not None
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)
    return module
def test_is_policy_file_detects_ouroboros_policy_docs() -> None:
    module = _load_module()
    assert module.is_policy_file("docs/ouroboros/85_loss_recovery_action_plan.md")
    assert not module.is_policy_file("docs/ouroboros/01_requirements_registry.md")
    assert not module.is_policy_file("docs/workflow.md")
    assert not module.is_policy_file("docs/ouroboros/notes.txt")
def test_validate_registry_sync_requires_registry_update_when_policy_changes() -> None:
    module = _load_module()
    errors: list[str] = []
    module.validate_registry_sync(
        ["docs/ouroboros/85_loss_recovery_action_plan.md"],
        errors,
    )
    assert errors
    assert "policy file changed without updating" in errors[0]


def test_validate_registry_sync_passes_when_registry_included() -> None:
    module = _load_module()
    errors: list[str] = []
    module.validate_registry_sync(
        [
            "docs/ouroboros/85_loss_recovery_action_plan.md",
            "docs/ouroboros/01_requirements_registry.md",
        ],
        errors,
    )
    assert errors == []
def test_load_changed_files_supports_explicit_paths() -> None:
    module = _load_module()
    errors: list[str] = []
    changed = module.load_changed_files(
        ["./docs/ouroboros/85_loss_recovery_action_plan.md", " src/main.py "],
        errors,
    )
    assert errors == []
    assert changed == [
        "docs/ouroboros/85_loss_recovery_action_plan.md",
        "src/main.py",
    ]
def test_load_changed_files_with_range_uses_git_diff(monkeypatch) -> None:
    module = _load_module()
    errors: list[str] = []

    def fake_run(cmd, check, capture_output, text):  # noqa: ANN001
        assert cmd[:3] == ["git", "diff", "--name-only"]
        assert check is True
        assert capture_output is True
        assert text is True
        return SimpleNamespace(stdout="docs/ouroboros/85_loss_recovery_action_plan.md\nsrc/main.py\n")

    monkeypatch.setattr(module.subprocess, "run", fake_run)
    changed = module.load_changed_files(["abc...def"], errors)
    assert errors == []
    assert changed == [
        "docs/ouroboros/85_loss_recovery_action_plan.md",
        "src/main.py",
    ]
def test_validate_task_req_mapping_reports_missing_req_reference(tmp_path) -> None:
    module = _load_module()
    doc = tmp_path / "work_orders.md"
    doc.write_text(
        "- `TASK-OPS-999` no req mapping line\n",
        encoding="utf-8",
    )
    errors: list[str] = []
    module.validate_task_req_mapping(errors, task_doc=doc)
    assert errors
    assert "TASK without REQ mapping" in errors[0]


def test_validate_task_req_mapping_passes_when_req_present(tmp_path) -> None:
    module = _load_module()
    doc = tmp_path / "work_orders.md"
    doc.write_text(
        "- `TASK-OPS-999` (`REQ-OPS-001`): enforce timezone labels\n",
        encoding="utf-8",
    )
    errors: list[str] = []
    module.validate_task_req_mapping(errors, task_doc=doc)
    assert errors == []
def test_validate_pr_traceability_warns_when_req_missing(monkeypatch) -> None:
    module = _load_module()
    monkeypatch.setenv("GOVERNANCE_PR_TITLE", "feat: update policy checker")
    monkeypatch.setenv("GOVERNANCE_PR_BODY", "Refs: TASK-OPS-001 TEST-ACC-007")
    warnings: list[str] = []
    module.validate_pr_traceability(warnings)
    assert warnings
    assert "PR text missing REQ-ID reference" in warnings