24 Commits

Author SHA1 Message Date
agentson
9fd9c552f3 fix: add dual-listing spread routing and session propagation
Some checks failed
Gitea CI / test (push) Successful in 36s
Gitea CI / test (pull_request) Failing after 5s
2026-03-04 10:16:28 +09:00
agentson
c80f3daad7 fix: apply KR session-aware exchange routing for rankings and orders (#409)
Some checks failed
Gitea CI / test (push) Successful in 35s
Gitea CI / test (pull_request) Failing after 5s
2026-03-04 10:12:41 +09:00
agentson
100586e237 chore: add handover entry for issue #409 2026-03-04 10:10:45 +09:00
agentson
86733ef830 docs: add implementation plan for #409 exchange routing 2026-03-04 10:09:53 +09:00
agentson
296b89d95f docs: add design for #409 KR session exchange routing 2026-03-04 10:07:39 +09:00
agentson
fa89499ccb docs: add implementation plan for #398 #400 #401
Some checks failed
Gitea CI / test (push) Has been cancelled
2026-03-04 03:00:45 +09:00
agentson
b227554e9e docs: add design for #398 #400 #401 feature integration workflow 2026-03-04 02:59:47 +09:00
c217e8cd72 Merge pull request 'fix: runtime anomaly handling for overnight startup and monitor (#396 #397)' (#404) from feature/issue-396-397-runtime-anomaly-fixes into main
All checks were successful
Gitea CI / test (push) Successful in 36s
Reviewed-on: #404
Reviewed-by: jihoson <kiparang7th@gmail.com>
2026-03-04 02:46:38 +09:00
agentson
bcbbf80d16 docs: clarify APP_CMD legacy and APP_CMD_ARGS contract
All checks were successful
Gitea CI / test (push) Successful in 36s
Gitea CI / test (pull_request) Successful in 36s
2026-03-04 02:43:32 +09:00
agentson
dc0775cbc6 fix: add safer custom command path for run_overnight
All checks were successful
Gitea CI / test (push) Successful in 36s
Gitea CI / test (pull_request) Successful in 35s
2026-03-04 02:36:24 +09:00
agentson
c412412f7b fix: address second-round review findings on PR #404
All checks were successful
Gitea CI / test (push) Successful in 36s
Gitea CI / test (pull_request) Successful in 36s
2026-03-04 02:29:54 +09:00
agentson
3cde8779fa fix: address PR #404 review feedback
All checks were successful
Gitea CI / test (push) Successful in 36s
Gitea CI / test (pull_request) Successful in 35s
2026-03-04 02:23:43 +09:00
agentson
370ee8cc85 fix: make overnight startup portable in CI environments
All checks were successful
Gitea CI / test (push) Successful in 34s
Gitea CI / test (pull_request) Successful in 34s
2026-03-04 02:07:52 +09:00
agentson
528e17a29c fix: stabilize overnight startup and monitor live fallback (#396 #397)
Some checks failed
Gitea CI / test (push) Failing after 37s
Gitea CI / test (pull_request) Failing after 38s
2026-03-04 02:04:13 +09:00
d2f3fe9108 Merge pull request 'docs: consolidate CLAUDE entrypoint into agents guide (#402)' (#403) from temp/agents-md-migration-20260303 into main
All checks were successful
Gitea CI / test (push) Successful in 33s
Reviewed-on: #403
Reviewed-by: jihoson <kiparang7th@gmail.com>
2026-03-04 01:52:46 +09:00
agentson
12bcccab42 ci: rerun PR checks after traceability update
All checks were successful
Gitea CI / test (pull_request) Successful in 32s
2026-03-04 01:32:42 +09:00
agentson
ef16cf8800 docs: consolidate agent entrypoint into agents.md (#402)
Some checks failed
Gitea CI / test (pull_request) Failing after 5s
2026-03-04 01:31:00 +09:00
3c58c5d110 Merge pull request 'merge: feature/v3-session-policy-stream into main' (#399) from feature/main-merge-v3-session-policy-stream-20260303 into main
Some checks failed
Gitea CI / test (push) Failing after 5s
Reviewed-on: #399
Reviewed-by: jihoson <kiparang7th@gmail.com>
2026-03-04 00:47:20 +09:00
agentson
8ecd3ac55f chore: retrigger CI after PR governance body update
All checks were successful
Gitea CI / test (push) Successful in 33s
Gitea CI / test (pull_request) Successful in 32s
2026-03-04 00:36:57 +09:00
agentson
79ad108e2f Merge origin/feature/v3-session-policy-stream into main
Some checks failed
Gitea CI / test (pull_request) Failing after 5s
Gitea CI / test (push) Failing after 5s
2026-03-04 00:30:45 +09:00
d9cf056df8 Merge pull request 'process: add PR body post-check gate and tooling (#392)' (#393) from feature/issue-392-pr-body-postcheck into feature/v3-session-policy-stream
All checks were successful
Gitea CI / test (push) Successful in 33s
Reviewed-on: #393
2026-03-02 18:34:59 +09:00
agentson
bd9286a39f fix: require executable tea fallback binary (#392)
All checks were successful
Gitea CI / test (push) Successful in 32s
Gitea CI / test (pull_request) Successful in 33s
2026-03-02 18:32:07 +09:00
agentson
f4f8827353 fix: harden PR body validator for mixed escaped-newline and tea path (#392)
All checks were successful
Gitea CI / test (push) Successful in 32s
Gitea CI / test (pull_request) Successful in 33s
2026-03-02 18:27:59 +09:00
agentson
7d24f19cc4 process: add mandatory PR body post-check step (#392)
All checks were successful
Gitea CI / test (push) Successful in 33s
Gitea CI / test (pull_request) Successful in 33s
2026-03-02 18:20:17 +09:00
25 changed files with 1734 additions and 1009 deletions

CLAUDE.md (190)

@@ -1,187 +1,9 @@
New file content:
# Agent Entry Point
This file moved to [agents.md](./agents.md).
Follow `agents.md` as the single source of truth for Claude/Codex session behavior and project workflow gates.
Core process references:
- [Workflow Guide](docs/workflow.md)
- [Command Reference](docs/commands.md)

Removed content:
# The Ouroboros
AI-powered trading agent for global stock markets with self-evolution capabilities.
## Quick Start
```bash
# Setup
pip install -e ".[dev]"
cp .env.example .env
# Edit .env with your KIS and Gemini API credentials
# Test
pytest -v --cov=src
# Run (paper trading)
python -m src.main --mode=paper
# Run with dashboard
python -m src.main --mode=paper --dashboard
```
## Telegram Notifications (Optional)
Get real-time alerts for trades, circuit breakers, and system events via Telegram.
### Quick Setup
1. **Create bot**: Message [@BotFather](https://t.me/BotFather) on Telegram → `/newbot`
2. **Get chat ID**: Message [@userinfobot](https://t.me/userinfobot) → `/start`
3. **Configure**: Add to `.env`:
```bash
TELEGRAM_BOT_TOKEN=1234567890:ABCdefGHIjklMNOpqrsTUVwxyz
TELEGRAM_CHAT_ID=123456789
TELEGRAM_ENABLED=true
```
4. **Test**: Start bot conversation (`/start`), then run the agent
**Full documentation**: [src/notifications/README.md](src/notifications/README.md)
### What You'll Get
- 🟢 Trade execution alerts (BUY/SELL with confidence)
- 🚨 Circuit breaker trips (automatic trading halt)
- ⚠️ Fat-finger rejections (oversized orders blocked)
- Market open/close notifications
- 📝 System startup/shutdown status
### Interactive Commands
With `TELEGRAM_COMMANDS_ENABLED=true` (default), the bot supports 9 bidirectional commands: `/help`, `/status`, `/positions`, `/report`, `/scenarios`, `/review`, `/dashboard`, `/stop`, `/resume`.
**Fail-safe**: Notifications never crash the trading system. Missing credentials or API errors are logged but trading continues normally.
## Smart Volatility Scanner (Optional)
Python-first filtering pipeline that reduces Gemini API calls by pre-filtering stocks using technical indicators.
### How It Works
1. **Fetch Rankings** — KIS API volume surge rankings (top 30 stocks)
2. **Python Filter** — RSI + volume ratio calculations (no AI)
- Volume > 200% of previous day
- RSI(14) < 30 (oversold) OR RSI(14) > 70 (momentum)
3. **AI Judgment** — Only qualified candidates (1-3 stocks) sent to Gemini
### Configuration
Add to `.env` (optional, has sensible defaults):
```bash
RSI_OVERSOLD_THRESHOLD=30 # 0-50, default 30
RSI_MOMENTUM_THRESHOLD=70 # 50-100, default 70
VOL_MULTIPLIER=2.0 # Volume threshold (2.0 = 200%)
SCANNER_TOP_N=3 # Max candidates per scan
```
### Benefits
- **Reduces API costs** — Process 1-3 stocks instead of 20-30
- **Python-based filtering** — Fast technical analysis before AI
- **Evolution-ready** — Selection context logged for strategy optimization
- **Fault-tolerant** — Falls back to static watchlist on API failure
### Trading Mode Integration
Smart Scanner runs in both `TRADE_MODE=realtime` and `daily` paths. On API failure, domestic stocks fall back to a static watchlist; overseas stocks fall back to a dynamic universe (active positions, recent holdings).
## Documentation
- **[Documentation Hub](docs/README.md)** — Top-level doc routing and reading order
- **[Workflow Guide](docs/workflow.md)** — Git workflow policy and agent-based development
- **[Command Reference](docs/commands.md)** — Common failures, build commands, troubleshooting
- **[Architecture](docs/architecture.md)** — System design, components, data flow
- **[Context Tree](docs/context-tree.md)** — L1-L7 hierarchical memory system
- **[Testing](docs/testing.md)** — Test structure, coverage requirements, writing tests
- **[Agent Policies](docs/agents.md)** — Prime directives, constraints, prohibited actions
- **[Requirements Log](docs/requirements-log.md)** — User requirements and feedback tracking
- **[Live Trading Checklist](docs/live-trading-checklist.md)** — Paper-to-live transition checklist
## Core Principles
1. **Safety First** — Risk manager is READ-ONLY and enforces circuit breakers
2. **Test Everything** — 80% coverage minimum, all changes require tests
3. **Issue-Driven Development** — All work goes through Gitea issues → feature branches → PRs
4. **Agent Specialization** — Use dedicated agents for design, coding, testing, docs, review
## Requirements Management
User requirements and feedback are tracked in [docs/requirements-log.md](docs/requirements-log.md):
- New requirements are added chronologically with dates
- Code changes should reference related requirements
- Helps maintain project evolution aligned with user needs
- Preserves context across conversations and development cycles
## Project Structure
```
src/
├── analysis/ # Technical analysis (RSI, volatility, smart scanner)
├── backup/ # Disaster recovery (scheduler, cloud storage, health)
├── brain/ # Gemini AI decision engine (prompt optimizer, context selector)
├── broker/ # KIS API client (domestic + overseas)
├── context/ # L1-L7 hierarchical memory system
├── core/ # Risk manager (READ-ONLY)
├── dashboard/ # FastAPI read-only monitoring (10 API endpoints)
├── data/ # External data integration (news, market data, calendar)
├── evolution/ # Self-improvement (optimizer, daily review, scorecard)
├── logging/ # Decision logger (audit trail)
├── markets/ # Market schedules and timezone handling
├── notifications/ # Telegram alerts + bidirectional commands (9 commands)
├── strategy/ # Pre-market planner, scenario engine, playbook store
├── db.py # SQLite trade logging
├── main.py # Trading loop orchestrator
└── config.py # Settings (from .env)
tests/ # 998 tests across 41 files
docs/ # Extended documentation
```
## Key Commands
```bash
pytest -v --cov=src # Run tests with coverage
ruff check src/ tests/ # Lint
mypy src/ --strict # Type check
python -m src.main --mode=paper # Paper trading
python -m src.main --mode=paper --dashboard # With dashboard
python -m src.main --mode=live # Live trading (⚠️ real money)
# Gitea workflow (requires tea CLI)
YES="" ~/bin/tea issues create --repo jihoson/The-Ouroboros --title "..." --description "..."
YES="" ~/bin/tea pulls create --head feature-branch --base main --title "..." --description "..."
```
## Markets Supported
- 🇰🇷 Korea (KRX)
- 🇺🇸 United States (NASDAQ, NYSE, AMEX)
- 🇯🇵 Japan (TSE)
- 🇭🇰 Hong Kong (SEHK)
- 🇨🇳 China (Shanghai, Shenzhen)
- 🇻🇳 Vietnam (Hanoi, HCM)
Markets auto-detected based on timezone and enabled in `ENABLED_MARKETS` env variable.
## Critical Constraints
⚠️ **Non-Negotiable Rules** (see [docs/agents.md](docs/agents.md)):
- `src/core/risk_manager.py` is **READ-ONLY** — changes require human approval
- Circuit breaker at -3.0% P&L — may only be made **stricter**
- Fat-finger protection: max 30% of cash per order — always enforced
- Confidence thresholds (per market_outlook, may not be lowered): BEARISH ≥ 90, NEUTRAL/default ≥ 80, BULLISH ≥ 75
- All code changes → corresponding tests → coverage ≥ 80%
## Contributing
See [docs/workflow.md](docs/workflow.md) for the complete development process.
**TL;DR:**
1. Create issue in Gitea
2. Create feature branch: `feature/issue-N-description`
3. Implement with tests
4. Open PR
5. Merge after review

agents.md (199, new file)

@@ -0,0 +1,199 @@
# The Ouroboros
AI-powered trading agent for global stock markets with self-evolution capabilities.
## Agent Workflow Gate (Claude/Codex)
Before any implementation, both Claude and Codex must align on the same project process:
1. Read `docs/workflow.md` first (branch policy, issue/PR flow, merge rules).
2. Read `docs/commands.md` for required verification commands and failure handling.
3. Read `docs/agent-constraints.md` and `docs/agents.md` for safety constraints.
4. Check `workflow/session-handover.md` and append a session entry when starting or handing off work.
5. Confirm current branch is based on `main` or an explicitly designated temporary/base branch before editing.
If any instruction conflicts, default to the safer path and document the reason in the handover log.
## Quick Start
```bash
# Setup
pip install -e ".[dev]"
cp .env.example .env
# Edit .env with your KIS and Gemini API credentials
# Test
pytest -v --cov=src
# Run (paper trading)
python -m src.main --mode=paper
# Run with dashboard
python -m src.main --mode=paper --dashboard
```
## Telegram Notifications (Optional)
Get real-time alerts for trades, circuit breakers, and system events via Telegram.
### Quick Setup
1. **Create bot**: Message [@BotFather](https://t.me/BotFather) on Telegram → `/newbot`
2. **Get chat ID**: Message [@userinfobot](https://t.me/userinfobot) → `/start`
3. **Configure**: Add to `.env`:
```bash
TELEGRAM_BOT_TOKEN=1234567890:ABCdefGHIjklMNOpqrsTUVwxyz
TELEGRAM_CHAT_ID=123456789
TELEGRAM_ENABLED=true
```
4. **Test**: Start bot conversation (`/start`), then run the agent
**Full documentation**: [src/notifications/README.md](src/notifications/README.md)
### What You'll Get
- 🟢 Trade execution alerts (BUY/SELL with confidence)
- 🚨 Circuit breaker trips (automatic trading halt)
- ⚠️ Fat-finger rejections (oversized orders blocked)
- Market open/close notifications
- 📝 System startup/shutdown status
### Interactive Commands
With `TELEGRAM_COMMANDS_ENABLED=true` (default), the bot supports 9 bidirectional commands: `/help`, `/status`, `/positions`, `/report`, `/scenarios`, `/review`, `/dashboard`, `/stop`, `/resume`.
**Fail-safe**: Notifications never crash the trading system. Missing credentials or API errors are logged but trading continues normally.
## Smart Volatility Scanner (Optional)
Python-first filtering pipeline that reduces Gemini API calls by pre-filtering stocks using technical indicators.
### How It Works
1. **Fetch Rankings** — KIS API volume surge rankings (top 30 stocks)
2. **Python Filter** — RSI + volume ratio calculations (no AI)
- Volume > 200% of previous day
- RSI(14) < 30 (oversold) OR RSI(14) > 70 (momentum)
3. **AI Judgment** — Only qualified candidates (1-3 stocks) sent to Gemini
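The Python filter step above can be sketched as follows; the names (`Candidate`, `passes_filter`, `select_candidates`) are illustrative, not the actual `src/analysis` API:

```python
# Minimal sketch of the pre-filter, assuming the documented thresholds.
from dataclasses import dataclass

@dataclass
class Candidate:
    ticker: str
    rsi: float           # RSI(14)
    volume_ratio: float  # today's volume / previous day's volume

def passes_filter(c: Candidate,
                  oversold: float = 30.0,     # RSI_OVERSOLD_THRESHOLD
                  momentum: float = 70.0,     # RSI_MOMENTUM_THRESHOLD
                  vol_multiplier: float = 2.0 # VOL_MULTIPLIER (200%)
                  ) -> bool:
    """Volume surge AND (oversold OR momentum) RSI, per the rules above."""
    return c.volume_ratio > vol_multiplier and (c.rsi < oversold or c.rsi > momentum)

def select_candidates(rankings: list[Candidate], top_n: int = 3) -> list[Candidate]:
    """Keep at most SCANNER_TOP_N qualified candidates for AI judgment."""
    return [c for c in rankings if passes_filter(c)][:top_n]
```

The defaults mirror the `.env` settings in the Configuration section below.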
### Configuration
Add to `.env` (optional, has sensible defaults):
```bash
RSI_OVERSOLD_THRESHOLD=30 # 0-50, default 30
RSI_MOMENTUM_THRESHOLD=70 # 50-100, default 70
VOL_MULTIPLIER=2.0 # Volume threshold (2.0 = 200%)
SCANNER_TOP_N=3 # Max candidates per scan
```
### Benefits
- **Reduces API costs** — Process 1-3 stocks instead of 20-30
- **Python-based filtering** — Fast technical analysis before AI
- **Evolution-ready** — Selection context logged for strategy optimization
- **Fault-tolerant** — Falls back to static watchlist on API failure
### Trading Mode Integration
Smart Scanner runs in both `TRADE_MODE=realtime` and `daily` paths. On API failure, domestic stocks fall back to a static watchlist; overseas stocks fall back to a dynamic universe (active positions, recent holdings).
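The fallback behavior described above could look roughly like this; function and variable names are assumptions, not the project's actual code:

```python
# Hedged sketch: on rankings-API failure, domestic scans fall back to a
# static watchlist, overseas scans to a dynamic universe.
STATIC_WATCHLIST = ["005930", "000660"]  # hypothetical KRX tickers

def scan_universe(fetch_rankings, active_positions: list[str],
                  recent_holdings: list[str], domestic: bool = True) -> list[str]:
    """Return the scan universe, degrading gracefully when the API fails."""
    try:
        return fetch_rankings()
    except Exception:
        if domestic:
            return STATIC_WATCHLIST
        # Overseas: dynamic universe from active positions and recent holdings.
        return sorted(set(active_positions) | set(recent_holdings))
```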
## Documentation
- **[Documentation Hub](docs/README.md)** — Top-level doc routing and reading order
- **[Workflow Guide](docs/workflow.md)** — Git workflow policy and agent-based development
- **[Command Reference](docs/commands.md)** — Common failures, build commands, troubleshooting
- **[Architecture](docs/architecture.md)** — System design, components, data flow
- **[Context Tree](docs/context-tree.md)** — L1-L7 hierarchical memory system
- **[Testing](docs/testing.md)** — Test structure, coverage requirements, writing tests
- **[Agent Policies](docs/agents.md)** — Prime directives, constraints, prohibited actions
- **[Requirements Log](docs/requirements-log.md)** — User requirements and feedback tracking
- **[Live Trading Checklist](docs/live-trading-checklist.md)** — 모의→실전 전환 체크리스트
## Core Principles
1. **Safety First** — Risk manager is READ-ONLY and enforces circuit breakers
2. **Test Everything** — 80% coverage minimum, all changes require tests
3. **Issue-Driven Development** — All work goes through Gitea issues → feature branches → PRs
4. **Agent Specialization** — Use dedicated agents for design, coding, testing, docs, review
## Requirements Management
User requirements and feedback are tracked in [docs/requirements-log.md](docs/requirements-log.md):
- New requirements are added chronologically with dates
- Code changes should reference related requirements
- Helps maintain project evolution aligned with user needs
- Preserves context across conversations and development cycles
## Project Structure
```
src/
├── analysis/ # Technical analysis (RSI, volatility, smart scanner)
├── backup/ # Disaster recovery (scheduler, cloud storage, health)
├── brain/ # Gemini AI decision engine (prompt optimizer, context selector)
├── broker/ # KIS API client (domestic + overseas)
├── context/ # L1-L7 hierarchical memory system
├── core/ # Risk manager (READ-ONLY)
├── dashboard/ # FastAPI read-only monitoring (10 API endpoints)
├── data/ # External data integration (news, market data, calendar)
├── evolution/ # Self-improvement (optimizer, daily review, scorecard)
├── logging/ # Decision logger (audit trail)
├── markets/ # Market schedules and timezone handling
├── notifications/ # Telegram alerts + bidirectional commands (9 commands)
├── strategy/ # Pre-market planner, scenario engine, playbook store
├── db.py # SQLite trade logging
├── main.py # Trading loop orchestrator
└── config.py # Settings (from .env)
tests/ # 998 tests across 41 files
docs/ # Extended documentation
```
## Key Commands
```bash
pytest -v --cov=src # Run tests with coverage
ruff check src/ tests/ # Lint
mypy src/ --strict # Type check
python -m src.main --mode=paper # Paper trading
python -m src.main --mode=paper --dashboard # With dashboard
python -m src.main --mode=live # Live trading (⚠️ real money)
# Gitea workflow (requires tea CLI)
YES="" ~/bin/tea issues create --repo jihoson/The-Ouroboros --title "..." --description "..."
YES="" ~/bin/tea pulls create --head feature-branch --base main --title "..." --description "..."
```
## Markets Supported
- 🇰🇷 Korea (KRX)
- 🇺🇸 United States (NASDAQ, NYSE, AMEX)
- 🇯🇵 Japan (TSE)
- 🇭🇰 Hong Kong (SEHK)
- 🇨🇳 China (Shanghai, Shenzhen)
- 🇻🇳 Vietnam (Hanoi, HCM)
Markets auto-detected based on timezone and enabled in `ENABLED_MARKETS` env variable.
## Critical Constraints
⚠️ **Non-Negotiable Rules** (see [docs/agents.md](docs/agents.md)):
- `src/core/risk_manager.py` is **READ-ONLY** — changes require human approval
- Circuit breaker at -3.0% P&L — may only be made **stricter**
- Fat-finger protection: max 30% of cash per order — always enforced
- Confidence thresholds (per market_outlook, may not be lowered): BEARISH ≥ 90, NEUTRAL/default ≥ 80, BULLISH ≥ 75
- All code changes → corresponding tests → coverage ≥ 80%
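As a hedged sketch, the confidence floors above can be expressed as a lookup; the helper name and the fallback choice are hypothetical:

```python
# Floors from the constraint list; they may only be raised, never lowered.
CONFIDENCE_FLOOR = {"BEARISH": 90, "NEUTRAL": 80, "BULLISH": 75}

def confidence_ok(confidence: float, market_outlook: str) -> bool:
    """Unknown outlooks fall back to the NEUTRAL/default floor of 80."""
    return confidence >= CONFIDENCE_FLOOR.get(market_outlook, 80)
```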
## Contributing
See [docs/workflow.md](docs/workflow.md) for the complete development process.
**TL;DR:**
1. Create issue in Gitea
2. Create feature branch: `feature/issue-N-description`
3. Implement with tests
4. Open PR
5. Merge after review


@@ -42,5 +42,3 @@ Updated: 2026-03-02
## Change History
- 2026-03-02: Performed a policy-document sync check after reflecting the `v1.0.12` documentation verification gate hardening (#390). (`REQ-OPS-002`)
- 2026-03-04: Performed a policy-document sync check following the addition of the 86_peak_probability_model_plan.md draft. (REQ-OPS-002)


@@ -1,168 +0,0 @@
<!--
Doc-ID: DOC-PLAN-086
Version: 0.1.0
Status: draft
Owner: strategy
Updated: 2026-03-02
-->
# Peak Probability Model Introduction Plan (Draft)
Date written: 2026-03-02
Purpose: redefine the "model-assisted signal" from the existing plan (v2) into a simple form that can run in live operation.
---
## 1. One-Line Summary
- The goal is to predict "the probability that now is the peak (or near the peak)" to support take-profit, position-reduction, and overnight-risk decisions.
- Phase one starts with regular sessions; extended sessions such as NXT/US_PRE are handled separately later.
---
## 2. Why Change
- The previous discussion was broad in scope, making implementation and validation costly.
- We need a target that can start quickly with the current log structure.
- By live-trading utility, "near-peak detection" connects directly to better exit quality.
---
## 3. Key Decisions in This Plan
### 3.1 Prediction Targets (2)
- `p_top_d0`: probability that the current moment is "today's regular-session peak (including near-peak)"
- `p_top_d1`: probability that the current moment is "the peak through tomorrow's regular-session close (including near-peak)"
### 3.2 Model Role
- Used as an auxiliary signal, not a standalone sell trigger.
- The priority of hard stop / ATR trailing / kill switch is preserved.
### 3.3 Rollout Order
1. Build the MVP for KR/US regular sessions only
2. After performance/operational stabilization, extend separately to NXT, US_PRE, US_AFTER
---
## 4. Data/Label Definitions (MVP)
### 4.1 Sample Unit
- Unit: "one observation at time t"
- Scope: order-eligible times that pass the policy gate
### 4.2 Label Definitions
- `y_d0 = 1` if `max_high(t ~ today's regular-session close) <= price_t * (1 + eps_d0)` else `0`
- `y_d1 = 1` if `max_high(t ~ tomorrow's regular-session close) <= price_t * (1 + eps_d1)` else `0`
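The label rule above can be sketched in Python; `eps` defaults are deliberately omitted here because `eps_d0`/`eps_d1` are still TBD in section 8:

```python
# Sketch of the y_d0 / y_d1 labeling rule; helper name is illustrative.
def label_top(price_t: float, future_highs: list[float], eps: float) -> int:
    """1 if no future high in the window exceeds price_t * (1 + eps)."""
    if not future_highs:
        return 0  # incomplete label window: such samples are excluded upstream
    return int(max(future_highs) <= price_t * (1 + eps))
```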
### 4.3 Required Features (Initial)
- Price/returns: `current_price`, `price_change_pct`
- Micro indicators: `rsi`, `volume_ratio`, `signal`, `score`
- Portfolio: `portfolio_pnl_pct`, `total_cash` (optional)
- State: `market`, `session_id`, `minutes_to_close`, `is_tradeable_now`
### 4.4 Data Exclusion Rules (Initial)
- Exclude `session_id in ('', 'UNKNOWN')`
- Exclude `price <= 0` or `quantity <= 0`
- Exclude `rationale LIKE '[startup-sync]%'`
- Exclude samples with incomplete label windows (market closures / missing data)
---
## 5. Inference/Execution Rules (Draft)
### 5.1 Inference Output
```json
{
"p_top_d0": 0.00,
"p_top_d1": 0.00,
"model_version": "TBD"
}
```
### 5.2 Execution Principles
- If `p_top_d0` is high: tighten same-day take-profit/trailing
- If `p_top_d1` is high: prefer reducing/closing overnight holdings
- Extended sessions (NXT/US_PRE): conservative thresholds or execution limits at first
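These principles can be sketched as a simple gate; the 0.7/0.6 thresholds below are placeholders only, since the execution thresholds are explicitly TBD in section 8:

```python
# Illustrative mapping from p_top_d0 / p_top_d1 to an auxiliary exit action.
def exit_action(p_top_d0: float, p_top_d1: float,
                th_d0: float = 0.7, th_d1: float = 0.6) -> str:
    if p_top_d1 >= th_d1:
        return "reduce_overnight"   # prefer trimming/closing overnight holds
    if p_top_d0 >= th_d0:
        return "tighten_trailing"   # same-day take-profit / trailing tightening
    return "hold"                   # model stays silent; hard rules still apply
```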
---
## 6. Session Issue Handling Principles
### 6.1 Tradeable-Session Basis
- Build labels/inference around "sessions in which the instrument is actually tradeable," not calendar time.
### 6.2 Regular vs Extended Session Separation
- MVP: regular-session-only labels/model
- Extension: consider per-session heads or separate per-session models
### 6.3 Separate Peak Definitions
- `reg_high`: regular-session high (primary label)
- `ext_high`: high including extended sessions (auxiliary analysis)
---
## 7. Phased Implementation Plan
### Phase A: Fix the Contract
- Finalize the canonical training dataset schema
- Document as-of alignment rules
- Add label-generation functions and validation tests
### Phase B: Offline Validation
- Walk-forward + purge/embargo evaluation
- Separate reports for `d0` and `d1`
- Verify improvement versus B1 (no model)
### Phase C: Runtime Integration
- Inject inference results as `pred_top_d0/pred_top_d1`
- Connect sell/reduction assist rules
- Observe in shadow mode, then enable with limits
### Phase D: Session Extension
- Separate policies for NXT/US_PRE/US_AFTER
- Tune execution gates/thresholds per session
---
## 8. Minimum Items to Decide Now (TBD)
- [ ] Default values for `eps_d0`, `eps_d1` (`TBD`)
- [ ] Execution thresholds for `p_top_d0`, `p_top_d1` (`TBD`)
- [ ] Market scope for regular/extended sessions (`TBD`)
- [ ] Minimum training sample count (`TBD`)
- [ ] Shadow observation period (`TBD`)
---
## 9. Risks and Mitigations
- Label imbalance: class weights, threshold tuning
- Time-of-day bias (near the close): add time features + report performance per interval
- Data integrity: enforce exclusion rules + CI validation
- Complexity growth: lock the regular-session MVP first; extended sessions follow separately
---
## 10. Follow-up Document Links (Planned)
- requirements registry: add new REQ/TASK/TEST mappings
- code-level work orders: break down implementation work
- acceptance/test plan: add `p_top_d0/d1` validation items
- implementation audit: update the existing "model-assisted signal" item


@@ -1,211 +0,0 @@
# Dual-Track Workflow Execution Plan (Revised)
> **For Claude:** REQUIRED SUB-SKILL: Use superpowers:executing-plans to implement this plan task-by-task.
**Goal:** Re-seed workflow planning from a clean base branch (not the temporary branch), then execute isolated `86` and `80` tracks with real-operation-confirmed closeout and related Gitea issue closure.
**Architecture:** Start from `feature/v3-session-policy-stream` and create `base/ouroboros-workflow-20260302`. Bring only two workflow plan docs as one squashed seed commit. Branch `task/86...` and `task/80...` from that base with strict file-scope separation. For `80`, update audit only after real-operation confirmation, then close related Gitea issues (children before parent). Final promotion happens by merging the validated stream to `main`.
**Tech Stack:** Git CLI, Gitea (`tea`) issue/PR workflow, Markdown docs (`docs/ouroboros`, `docs/plans`)
---
### Task 1: Freeze temporary branch and identify export scope
**Files:**
- Modify: none
- Create: none
- Test: none
**Step 1: Inspect temporary branch state**
Run: `git status --short --branch`
Expected: current temporary branch with local changes listed.
**Step 2: Preserve changes using stash (recommended)**
Run: `git stash push -u -m "tmp-before-workflow-reseed-2026-03-02"`
Expected: stash created with tracked/untracked changes.
**Step 3: Define transferable artifacts**
Run: keep only these for reseed:
- `docs/plans/2026-03-02-ouroboros-dual-track-workflow-design.md`
- `docs/plans/2026-03-02-dual-track-workflow-execution-plan.md`
Expected: transfer scope fixed to two files.
**Step 4: Verify clean working tree**
Run: `git status --short --branch`
Expected: no local changes.
**Step 5: Commit**
```bash
# no commit in this task (stash-only preservation)
```
### Task 2: Create clean official base and seed one commit
**Files:**
- Modify: `docs/plans/2026-03-02-ouroboros-dual-track-workflow-design.md`
- Modify: `docs/plans/2026-03-02-dual-track-workflow-execution-plan.md`
- Test: none
**Step 1: Switch to canonical base**
Run: `git checkout feature/v3-session-policy-stream`
Expected: branch switch successful.
**Step 2: Update local base**
Run: `git pull --ff-only origin feature/v3-session-policy-stream`
Expected: up-to-date or fast-forward update.
**Step 3: Create staging base branch**
Run: `git checkout -b base/ouroboros-workflow-20260302`
Expected: new branch created from updated feature base.
**Step 4: Restore only two plan docs from temporary branch commits**
Run (example): `git checkout <temp-branch> -- docs/plans/2026-03-02-ouroboros-dual-track-workflow-design.md docs/plans/2026-03-02-dual-track-workflow-execution-plan.md`
Expected: only two files restored in working tree.
**Step 5: Create one squashed seed commit**
Run: `git add docs/plans/2026-03-02-ouroboros-dual-track-workflow-design.md docs/plans/2026-03-02-dual-track-workflow-execution-plan.md && git commit -m "docs: seed revised dual-track workflow plan"`
Expected: single commit with two files only.
**Step 6: Verify seed scope**
Run: `git show --name-only --oneline HEAD`
Expected: exactly the two plan docs listed.
**Step 7: Push official base**
Run: `git push -u origin base/ouroboros-workflow-20260302`
Expected: remote tracking set.
### Task 3: Open Track A (`86`) branch and initial PR
**Files:**
- Modify: `docs/ouroboros/86_peak_probability_model_plan.md`
- Test: none
**Step 1: Create Track A branch**
Run: `git checkout -b task/86-peak-probability-plan`
Expected: branch created from official base.
**Step 2: Stage only `86` document**
Run: `git add docs/ouroboros/86_peak_probability_model_plan.md`
Expected: only `86` doc staged.
**Step 3: Commit initial PR seed**
Run: `git commit -m "docs: seed peak probability model plan (#86)"`
Expected: one-document commit.
**Step 4: Push branch**
Run: `git push -u origin task/86-peak-probability-plan`
Expected: remote branch created.
**Step 5: Open PR-A**
Run: `tea pr create --base base/ouroboros-workflow-20260302 --head task/86-peak-probability-plan --title "docs: seed peak probability model plan (#86)" --description "Initial planning PR for iterative concretization."`
Expected: PR URL returned.
### Task 4: Open Track B (`80`) branch for operation-confirmed closeout
**Files:**
- Modify: `docs/ouroboros/80_implementation_audit.md`
- Test: operation observation evidence (outside unit-test scope)
**Step 1: Return to staging base**
Run: `git checkout base/ouroboros-workflow-20260302`
Expected: branch switch successful.
**Step 2: Create Track B branch**
Run: `git checkout -b task/80-implementation-audit-closeout`
Expected: branch created.
**Step 3: Apply closeout edits only after real-operation confirmation**
Run: edit `docs/ouroboros/80_implementation_audit.md`
Expected: status/evidence sections updated with observed behavior, confirmation timestamp, and operational proof summary.
**Step 4: Commit closeout update**
Run: `git add docs/ouroboros/80_implementation_audit.md && git commit -m "docs: close out implementation audit items after operation confirmation (#80)"`
Expected: single-scope commit.
**Step 5: Push and open PR-B**
Run: `git push -u origin task/80-implementation-audit-closeout`
Expected: remote branch created.
Run: `tea pr create --base base/ouroboros-workflow-20260302 --head task/80-implementation-audit-closeout --title "docs: close out implementation audit items (#80)" --description "Closeout based on real-operation confirmation."`
Expected: PR URL returned.
### Task 5: Close related Gitea issues after `80` confirmation
**Files:**
- Modify: `docs/ouroboros/80_implementation_audit.md` (reference linkage only)
- Test: issue parent-child completeness check
**Step 1: Build related issue set from updated `80` audit**
Run: parse issue references from `80` updated sections.
Expected: target open issue list prepared.
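Step 1 could be sketched as a small regex pass over the updated audit text; the helper name is hypothetical:

```python
import re

def extract_issue_refs(markdown: str) -> list[int]:
    """Return sorted, de-duplicated issue numbers referenced as #NNN."""
    return sorted({int(m) for m in re.findall(r"#(\d+)", markdown)})
```

The output list would then be cross-checked against the open-issue set in Gitea before any close actions.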
**Step 2: Close child issues first**
Run: `tea issues close <child-issue-number>` for each confirmed child.
Expected: all qualifying child issues closed.
**Step 3: Close parent issues only when all children are closed**
Run: `tea issues close <parent-issue-number>`
Expected: parent closes only after dependency check passes.
**Step 4: Record closure summary in PR comment**
Run: add PR comment with closed issue IDs and parent-child resolution note.
Expected: auditable trace for closeout.
### Task 6: Validate branch hygiene and merge path
**Files:**
- Modify: none
- Test: git diff/log checks
**Step 1: Verify PR-A scope**
Run: `git diff --name-only base/ouroboros-workflow-20260302...task/86-peak-probability-plan`
Expected: only `docs/ouroboros/86_peak_probability_model_plan.md` (and intentional related plan files if any).
**Step 2: Verify PR-B scope**
Run: `git diff --name-only base/ouroboros-workflow-20260302...task/80-implementation-audit-closeout`
Expected: only `docs/ouroboros/80_implementation_audit.md` (and intentional related plan files if any).
**Step 3: Merge both PRs into staging base**
Run: Gitea merge flow for PR-A and PR-B
Expected: staging base includes both tracks without cross-file contamination.
**Step 4: Promote validated stream to `main`**
Run: open PR from `feature/v3-session-policy-stream` (or its validated promotion branch) to `main`
Expected: merge-based history preservation with no cherry-pick/rebase surgery.
**Step 5: Commit**
```bash
# no additional commit required in this task
```


@@ -1,100 +0,0 @@
# Live Operation Issue Tracking Design
**Date:** 2026-03-02
**Scope:** Real-operation confirmation tracking for all open Gitea issues, tied to `task/80-implementation-audit-closeout`.
---
## 1. Goal
Define an issue-by-issue tracking method for real-operation confirmation so that:
- every open issue is tracked with explicit lifecycle states,
- parent-child close order is enforced,
- and final closeout is reflected in `80` audit work without loss.
---
## 2. Target Set
- Coverage target: all current open Gitea issues (`318~381`, and future open issues during this closeout window).
- Tracking starts from open set snapshot and stays synced until closeout completion.
---
## 3. Tracking Architecture
- Primary log: direct comments on each issue ticket.
- Aggregate log: single summary comment on the `80` PR (`task/80-implementation-audit-closeout`).
- No separate tracking doc as source of truth during execution.
---
## 4. State Model and Criteria
State machine:
- `NOT_STARTED`
- `OBSERVING`
- `CONFIRMED`
- `CLOSED`
Rules:
- `CONFIRMED` requires one real-operation observation evidence item.
- `CLOSED` requires close conditions satisfied and ticket close action executed.
- Parent issue cannot become `CLOSED` until all child issues are `CLOSED`.
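The lifecycle and the parent close gate can be sketched as follows; this is a hand-rolled illustration, not project code:

```python
# Four-state lifecycle with forward, single-step transitions only.
ORDER = ["NOT_STARTED", "OBSERVING", "CONFIRMED", "CLOSED"]

def can_transition(current: str, target: str) -> bool:
    """Allow only the next state in the lifecycle (no skips, no rollback)."""
    return ORDER.index(target) - ORDER.index(current) == 1

def parent_can_close(children: dict[str, str]) -> bool:
    """Parent may reach CLOSED only when every child issue is CLOSED."""
    return all(state == "CLOSED" for state in children.values())
```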
---
## 5. Issue Comment Template
Each issue update comment uses:
- `Status: <STATE>`
- `Observed At: <KST>, <UTC>`
- `Environment: live`
- `Evidence: <log/query/order-execution identifier>`
- `Next: <single next action>`
This template is reused for every transition for consistent auditability.
---
## 6. Parent-Child Policy
- Parent issue comment must list child issue IDs.
- Child issues transition independently.
- Parent close gate:
- if any child is not `CLOSED`, parent remains at most `CONFIRMED`;
- when all children are `CLOSED`, parent can transition to `CLOSED` and be closed.
---
## 7. Operational Loop
1. Initialize all open issues with `NOT_STARTED` comment.
2. Move active issues to `OBSERVING` when live confirmation begins.
3. Move to `CONFIRMED` after one real-operation evidence capture.
4. Close child issues first, then close parent issue(s).
5. Update the single `80` PR summary comment whenever issue state changes.
6. Reflect final confirmed/closed outcomes in `docs/ouroboros/80_implementation_audit.md`.
---
## 8. Evidence and Time Rules
- Evidence must be replayable/referenceable by identifier (not vague narrative only).
- Every update includes both KST and UTC timestamps.
- Use absolute dates (example: `2026-03-02`) to avoid ambiguity in session handoff.
---
## 9. Completion Conditions
Tracking design is complete when:
- all open issues are represented in issue comments with valid lifecycle state,
- parent-child closure constraints are respected,
- `80` PR summary comment reflects current global status,
- and final `80` audit document update is ready for closeout commit.

# Live Operation Issue Tracking Implementation Plan
> **For Claude:** REQUIRED SUB-SKILL: Use superpowers:executing-plans to implement this plan task-by-task.
**Goal:** Track real-operation confirmation for all open Gitea issues with per-issue comments and a synchronized summary on the `80` PR, then close issues in dependency-safe order.
**Architecture:** Use issue tickets as the primary state log with a strict four-state lifecycle. Maintain one aggregate status comment on `task/80-implementation-audit-closeout` PR. Enforce child-first closure and parent close gating.
**Tech Stack:** Git CLI, Gitea CLI (`tea`), Markdown (`docs/ouroboros/80_implementation_audit.md`)
---
### Task 1: Snapshot open issues and classify parent-child graph
**Files:**
- Create: none
- Modify: none
- Test: `tea issues ls` output inspection
**Step 1: Capture open issue list**
Run: `tea issues ls --state open --limit 200`
Expected: list of all open issues (including #318–#381).
**Step 2: Fetch details for each issue**
Run: `for n in $(tea issues ls --state open --limit 200 --output json | jq -r '.[].number'); do tea issue view "$n"; done`
Expected: issue bodies/comments available for relation parsing.
**Step 3: Extract parent-child references**
Run: parse issue text for relation keywords (`parent`, `child`, `depends on`, `blocks`).
Expected: dependency map ready.
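The relation parse in Step 3 could look like the following regex sketch, under the assumption that relations appear as inline references like `Parent: #80` or `depends on #318` (the helper name is hypothetical):

```python
# Hypothetical relation extractor: scans issue text for parent/child references.
import re

RELATION_RE = re.compile(
    r"(?P<kind>parent|child|depends on|blocks)\s*:?\s*#(?P<num>\d+)",
    re.IGNORECASE,
)


def extract_relations(text: str) -> list[tuple[str, int]]:
    # Returns (relation keyword, issue number) pairs in order of appearance.
    return [
        (m.group("kind").lower(), int(m.group("num")))
        for m in RELATION_RE.finditer(text)
    ]
```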
**Step 4: Validate unresolved dependency edges**
Run: verify each parent references only existing issue numbers.
Expected: no dangling child references.
**Step 5: Commit**
```bash
# no commit (discovery task)
```
### Task 2: Initialize per-issue tracking comments (`NOT_STARTED`)
**Files:**
- Create: none
- Modify: none
- Test: issue comment presence check
**Step 1: Prepare standard template text**
```text
Status: NOT_STARTED
Observed At: <KST>, <UTC>
Environment: live
Evidence: N/A
Next: Begin live observation for this issue.
```
**Step 2: Post initialization comment to each open issue**
Run: `tea issues comment <issue-number> --message "<template>"`
Expected: each open issue has one initialization comment.
**Step 3: Verify comment coverage**
Run: sample-check each issue (`tea issue view <issue-number>`) for latest status comment.
Expected: 100% coverage across open issues.
**Step 4: Commit**
```bash
# no commit (ticket operations)
```
### Task 3: Create and maintain `80` PR aggregate status comment
**Files:**
- Modify: `docs/ouroboros/80_implementation_audit.md` (final sync only, not this task)
- Test: PR comment contains full issue table
**Step 1: Identify PR number for `task/80-implementation-audit-closeout`**
Run: `tea pr ls --state open`
Expected: PR number for `task/80...` identified.
**Step 2: Post initial summary comment**
Run: `tea pr comment <pr-number> --message "<issue-state-summary-table>"`
Expected: one aggregate comment listing `issue | state | last observed at | parent/child`.
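The aggregate comment might be laid out like this (illustrative placeholder rows only; `#NNN`/`#PPP` stand in for real issue numbers):

```text
| issue | state     | last observed at (KST) | parent/child  |
|-------|-----------|------------------------|---------------|
| #NNN  | OBSERVING | 2026-03-02 10:15       | child of #PPP |
| #PPP  | CONFIRMED | 2026-03-02 11:40       | parent        |
```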
**Step 3: Define update protocol**
Run: update same summary comment on every issue state change.
Expected: summary remains single-source aggregate view.
**Step 4: Commit**
```bash
# no commit (ticket/PR operations)
```
### Task 4: Execute live observation loop and mark `OBSERVING` -> `CONFIRMED`
**Files:**
- Create: none
- Modify: none
- Test: evidence-backed state transitions present in issue comments
**Step 1: Mark active issue as `OBSERVING`**
Run: `tea issues comment <issue-number> --message "Status: OBSERVING ..."`
Expected: issue history shows observation started.
**Step 2: Capture one real-operation evidence item**
Run: collect log/query/order identifier tied to issue behavior.
Expected: one concrete evidence reference captured.
**Step 3: Mark issue `CONFIRMED`**
Run: `tea issues comment <issue-number> --message "Status: CONFIRMED ... Evidence: <id> ..."`
Expected: issue has explicit confirmed state.
**Step 4: Sync PR aggregate summary**
Run: update `80` PR summary comment row for that issue.
Expected: PR summary and issue status aligned.
**Step 5: Commit**
```bash
# no commit (ticket operations)
```
### Task 5: Close issues with child-first dependency enforcement
**Files:**
- Create: none
- Modify: none
- Test: parent close attempted only after all children closed
**Step 1: Close confirmed child issues first**
Run: `tea issues close <child-issue-number>`
Expected: child issue state becomes closed.
**Step 2: Verify all children for each parent**
Run: evaluate dependency map for remaining non-closed children.
Expected: parent close gate decision available.
**Step 3: Close eligible parent issues**
Run: `tea issues close <parent-issue-number>`
Expected: parent is closed only when all children are closed.
**Step 4: Post closure comment and sync PR summary**
Run: comment `Status: CLOSED` on closed issue and update PR summary row.
Expected: closure trace exists in both issue and PR aggregate.
**Step 5: Commit**
```bash
# no commit (ticket operations)
```
### Task 6: Reflect final tracked outcomes in `80` audit and commit
**Files:**
- Modify: `docs/ouroboros/80_implementation_audit.md`
- Test: markdown consistency + traceability check
**Step 1: Update `80` with final live-confirmed statuses**
Run: edit status lines and evidence summary sections.
Expected: document reflects final issue states and confirmation notes.
**Step 2: Add issue closure summary block**
Run: include closed child/parent sequence summary.
Expected: audit reader can verify closure logic quickly.
**Step 3: Validate references**
Run: ensure issue numbers in doc match actual closed/open states.
Expected: no mismatch between doc and Gitea.
**Step 4: Commit closeout doc update**
```bash
git add docs/ouroboros/80_implementation_audit.md
git commit -m "docs: finalize 80 audit with live-operation issue confirmations"
```
**Step 5: Push branch**
Run: `git push`
Expected: closeout commit available on `task/80-implementation-audit-closeout`.

# Ouroboros Dual-Track Workflow Design (Revised)
**Date:** 2026-03-02
**Scope:** `docs/ouroboros/86_peak_probability_model_plan.md` and `docs/ouroboros/80_implementation_audit.md`
---
## 1. Goal
Prevent work loss and history contamination across two parallel tracks, and correct an invalid starting condition (design commits were created on a disposable temporary branch):
- Track A: open and iterate the `86` plan as a dedicated PR.
- Track B: finalize `80` only after real-operation behavior is confirmed, then reflect completion and close related Gitea open issues.
---
## 2. Constraints and Current Context
- Current temporary branch is disposable and must not remain the authoritative base.
- Canonical base line for current work is `feature/v3-session-policy-stream`.
- Final integration strategy is to merge the full `feature/v3-session-policy-stream` line into `main` after operation-level validation is complete.
---
## 3. Approaches Considered
1. Keep using temporary branch and continue from existing design commits.
2. Create a clean base from `feature/v3-session-policy-stream` and move only plan artifacts as one commit (recommended).
3. Split into two fresh bases (`86` and `80` separately) immediately.
Recommendation rationale:
- Approach 2 removes temporary-branch contamination risk.
- It preserves clean lineage with a single "official starting point" commit.
- It keeps downstream split for `86` and `80` straightforward.
---
## 4. Approved Branch and PR Topology
- Long-lived source baseline: `feature/v3-session-policy-stream`
- New official base: `base/ouroboros-workflow-20260302` (from `feature/v3-session-policy-stream`)
- Official base contains only two workflow plan docs squashed into one commit.
- Track A branch: `task/86-peak-probability-plan` (from official base)
- Track B branch: `task/80-implementation-audit-closeout` (from official base)
PR direction:
- PR-A (`86`): `task/86-peak-probability-plan -> base/ouroboros-workflow-20260302`
- PR-B (`80`): `task/80-implementation-audit-closeout -> base/ouroboros-workflow-20260302`
- Final integration: `feature/v3-session-policy-stream` line merged into `main`
---
## 5. Completion Criteria per Track
### Track A (`86`)
- Open initial PR early with the planning document.
- Continue iterative concretization in follow-up commits on the same branch.
### Track B (`80`)
- Do not close based on test-command artifacts alone.
- Close only when modified behavior is confirmed in real operation.
- Update `80_implementation_audit.md` with:
- observed behavior changes,
- confirmation timing,
- operation-side evidence summary.
- After `80` real-operation confirmation and doc update, close related Gitea open issues immediately (same phase).
- Parent issues close only when all child issues are confirmed closed.
---
## 6. Loss-Prevention Rules
- Preserve temporary-branch unmerged changes first (stash or archival commit).
- Exclude `workflow/session-handover.md` and `docs/ouroboros/86_peak_probability_model_plan.md` from official base-seeding commit.
- Enforce file-scope separation:
- PR-A should avoid `80` edits.
- PR-B should avoid `86` edits.
- Keep commit messages scoped to one track per commit.
---
## 7. Merge Strategy Decision
- For this workflow, prefer merge-based lineage preservation.
- Because the final promotion unit is the full `feature/v3-session-policy-stream` stream, avoid unnecessary cherry-pick/rebase surgery unless a contamination incident occurs.
---
## 8. Immediate Next Actions
1. Seed `base/ouroboros-workflow-20260302` from `feature/v3-session-policy-stream` with one squashed workflow-doc commit.
2. Verify seed commit file scope contains only two plan docs.
3. Create and open PR-A for `86`.
4. Complete real-operation confirmation for `80`, update audit doc, then close related Gitea issues (children first, then parent).
5. Merge validated `feature/v3-session-policy-stream` line into `main`.

View File

@@ -0,0 +1,281 @@
# 398/400/401 Integration Implementation Plan
> **For Claude:** REQUIRED SUB-SKILL: Use superpowers:executing-plans to implement this plan task-by-task.
**Goal:** Implement #398, #400, #401 as three isolated PRs targeting `feature/398-400-401`, merge only when CI passes and self-review has zero minor issues, then run and monitor the overnight script without stopping the process.
**Architecture:** Create one integration base branch from `origin/main`, branch per issue, and ship in strict sequence (`398 -> 400 -> 401`) to keep diffs isolated. Use TDD per issue (fail-first tests, minimal fix, regression checks), then perform PR self-review and a CI gate before merge. After all merges, run the overnight script in the background and monitor logs/process health while leaving the runtime active.
**Tech Stack:** Python 3, pytest, asyncio runtime loop, Git/Gitea (`tea`), shell scripts (`scripts/run_overnight.sh`).
---
### Task 1: Prepare Integration Branch Topology
**Files:**
- Modify: `.git` refs only (branch operations)
**Step 1: Sync base branch**
Run: `git fetch origin && git checkout main && git pull --ff-only origin main`
Expected: local `main` equals `origin/main`
**Step 2: Create integration branch**
Run: `git checkout -b feature/398-400-401`
Expected: current branch is `feature/398-400-401`
**Step 3: Create issue branches from integration branch**
Run: `git branch fix/398 && git branch fix/400 && git branch fix/401`
Expected: three issue branches exist and point to same base commit
**Step 4: Push all branches**
Run: `git push -u origin feature/398-400-401 fix/398 fix/400 fix/401`
Expected: remote tracking set for all four branches
**Step 5: Commit checkpoint**
Run:
```bash
git status --short
```
Expected: clean workspace before issue implementation
### Task 2: Implement #398 with TDD (KR rt_cd failure handling)
**Files:**
- Modify: `src/main.py`
- Test: `tests/test_main.py`
**Step 1: Write failing test**
Add a test in `tests/test_main.py` verifying that a KR order response with `rt_cd != '0'` does not trigger success side effects (no BUY notification, no success-path trade log).
**Step 2: Run test to verify failure**
Run: `pytest tests/test_main.py -k "kr and rt_cd" -v`
Expected: FAIL showing current code incorrectly treats KR order as success
**Step 3: Write minimal implementation**
In KR order branch of `src/main.py`, immediately after `send_order`, add `rt_cd` acceptance check identical to overseas branch behavior; set `order_succeeded = False` and warning log when rejected.
**Step 4: Run targeted tests**
Run: `pytest tests/test_main.py -k "kr and rt_cd" -v`
Expected: PASS
**Step 5: Run safety regression**
Run: `pytest tests/test_main.py tests/test_order_policy.py -q`
Expected: PASS
**Step 6: Commit**
Run:
```bash
git add tests/test_main.py src/main.py
git commit -m "fix: handle KR order rejection via rt_cd check (#398)"
```
### Task 3: Open PR for #398, Self-review, CI gate, Merge
**Files:**
- Modify: remote PR metadata/comments only
**Step 1: Push branch**
Run: `git checkout fix/398 && git push -u origin fix/398`
**Step 2: Create PR targeting integration branch**
Run: `tea pr create --base feature/398-400-401 --head fix/398 --title "fix: #398 KR rt_cd rejection handling" --description "Implements issue #398 with tests."`
Expected: PR URL returned
**Step 3: Add self-review comment (severity rubric)**
Run: `tea pr comment <PR_398> --message "Self-review: Critical 0 / Major 0 / Minor 0. Merge allowed when CI passes."`
**Step 4: Wait for CI success**
Run: `tea pr checks <PR_398>` (poll until all success)
Expected: all checks success
**Step 5: Merge only when gate passes**
Run: `tea pr merge <PR_398> --delete-branch=false`
Expected: merged into `feature/398-400-401`
### Task 4: Implement #400 with TDD (US session transition correctness)
**Files:**
- Modify: `src/main.py`, `src/core/order_policy.py`, `src/markets/schedule.py`
- Test: `tests/test_main.py`, `tests/test_market_schedule.py`, `tests/test_order_policy.py`
**Step 1: Write failing tests**
Add tests for:
- session transition event handling (`US_DAY -> US_REG`) emits open event and forces rescan
- `US_DAY` treated non-tradable for playbook/trading actions
**Step 2: Run failing tests**
Run: `pytest tests/test_main.py tests/test_market_schedule.py tests/test_order_policy.py -k "US_DAY or US_REG or session" -v`
Expected: FAIL at current behavior
**Step 3: Minimal implementation**
- Track market state by session identifier (not bool only)
- Force rescan/playbook refresh on US_REG entry
- Exclude/suppress US_DAY for trading/playbook generation path
**Step 4: Re-run targeted tests**
Run: same command as Step 2
Expected: PASS
**Step 5: Regression pass**
Run: `pytest tests/test_main.py tests/test_market_schedule.py tests/test_order_policy.py tests/test_pre_market_planner.py -q`
Expected: PASS
**Step 6: Commit**
Run:
```bash
git add src/main.py src/core/order_policy.py src/markets/schedule.py tests/test_main.py tests/test_market_schedule.py tests/test_order_policy.py
git commit -m "fix: handle US session transitions and suppress US_DAY trading (#400)"
```
### Task 5: Open PR for #400, Self-review, CI gate, Merge
**Files:**
- Modify: remote PR metadata/comments only
**Step 1: Push branch**
Run: `git checkout fix/400 && git push -u origin fix/400`
**Step 2: Create PR**
Run: `tea pr create --base feature/398-400-401 --head fix/400 --title "fix: #400 US session transition handling" --description "Implements issue #400 with tests."`
**Step 3: Add self-review comment**
Run: `tea pr comment <PR_400> --message "Self-review: Critical 0 / Major 0 / Minor 0. Merge allowed when CI passes."`
**Step 4: Wait for CI success**
Run: `tea pr checks <PR_400>`
Expected: all checks success
**Step 5: Merge**
Run: `tea pr merge <PR_400> --delete-branch=false`
### Task 6: Implement #401 with TDD (multi-market parallel processing)
**Files:**
- Modify: `src/main.py`
- Test: `tests/test_main.py`
**Step 1: Write failing tests**
Add tests verifying:
- open markets are processed via parallel task dispatch
- circuit breaker behavior still triggers global shutdown semantics
- shared state updates remain deterministic under parallel market execution
**Step 2: Run failing tests**
Run: `pytest tests/test_main.py -k "parallel or market" -v`
Expected: FAIL before implementation
**Step 3: Minimal implementation**
Refactor sequential market loop into market-level async tasks (`asyncio.gather`/task group) while preserving stock-level processing order per market and existing failure semantics.
**Step 4: Re-run targeted tests**
Run: same command as Step 2
Expected: PASS
**Step 5: Regression pass**
Run: `pytest tests/test_main.py tests/test_runtime_overnight_scripts.py -q`
Expected: PASS
**Step 6: Commit**
Run:
```bash
git add src/main.py tests/test_main.py
git commit -m "feat: process active markets in parallel with preserved shutdown semantics (#401)"
```
### Task 7: Open PR for #401, Self-review, CI gate, Merge
**Files:**
- Modify: remote PR metadata/comments only
**Step 1: Push branch**
Run: `git checkout fix/401 && git push -u origin fix/401`
**Step 2: Create PR**
Run: `tea pr create --base feature/398-400-401 --head fix/401 --title "feat: #401 parallel multi-market processing" --description "Implements issue #401 with tests."`
**Step 3: Add self-review comment**
Run: `tea pr comment <PR_401> --message "Self-review: Critical 0 / Major 0 / Minor 0. Merge allowed when CI passes."`
**Step 4: Wait for CI success**
Run: `tea pr checks <PR_401>`
Expected: all checks success
**Step 5: Merge**
Run: `tea pr merge <PR_401> --delete-branch=false`
### Task 8: Final Branch Validation + Overnight Runtime Monitoring
**Files:**
- Execute: `scripts/run_overnight.sh`
- Observe: runtime log file (e.g., `logs/overnight.log`)
**Step 1: Checkout integrated branch and sync**
Run: `git checkout feature/398-400-401 && git pull --ff-only origin feature/398-400-401`
Expected: branch contains merged PRs
**Step 2: Start overnight in background (non-blocking)**
Run:
```bash
nohup ./scripts/run_overnight.sh > /tmp/ouroboros_overnight.log 2>&1 &
echo $! > /tmp/ouroboros_overnight.pid
```
Expected: PID written and process running
**Step 3: Verify process alive**
Run: `ps -p $(cat /tmp/ouroboros_overnight.pid) -o pid,ppid,stat,etime,cmd`
Expected: process present
**Step 4: Monitor startup logs**
Run: `tail -n 120 /tmp/ouroboros_overnight.log`
Expected: startup complete and runtime loop active without fatal errors
**Step 5: Ongoing monitor without shutdown**
Run: `tail -f /tmp/ouroboros_overnight.log` (sample monitoring window, then detach)
Expected: continued activity; do not kill process
**Step 6: Final status note**
Record PID, log path, and “process left running” status.

# 398/400/401 Integration Design
## Overview
This document defines the operational design for handling issues #398, #400, and #401 sequentially on an integration branch based on `origin/main`, verifying each PR through self-review and a CI gate before merging.
After the final merge, the overnight script runs on the integration branch, and the program keeps running even after monitoring ends.
## Goals
- Integration branch: `feature/398-400-401`
- Work branches: `fix/398`, `fix/400`, `fix/401`
- PR base: `feature/398-400-401` for all three
- Merge condition: all CI checks pass + zero issues (including minor) in self-review
- Final check: run and monitor overnight on the integration branch; keep the process running
## Architecture
- Create `feature/398-400-401` from `origin/main`
- Implement each issue on its own branch (`fix/398`, `fix/400`, `fix/401`)
- Create/verify/merge PRs sequentially (`398 -> 400 -> 401`)
- Each PR gets a self-review comment; on any finding (minor or above), fix and re-verify
- After all three PRs merge, run overnight in the background on the integration branch and monitor logs
- Do not terminate the process after monitoring completes
## Components
- Git/branch component: branch creation, rebase, conflict resolution
- Issue implementation components:
  - #398: KR order `rt_cd` failure handling; block false notifications/records
  - #400: US session-transition detection, US_DAY suppression, US_REG entry event/forced rescan
  - #401: market-level parallel processing with shared-state concurrency protection
- PR operations component: PR creation, self-review comments, approval-criteria checks
- CI gate component: poll check status and confirm pass
- Merge component: merge only gate-passing PRs
- Runtime validation component: run overnight, tail logs, confirm process liveness
## Data/Control Flow
1. Create `feature/398-400-401`
2. Implement `fix/398` -> test -> commit -> create PR
3. Write self-review comment (with defect severity levels)
4. After CI completes, merge if `CI pass && minor == 0`
5. Repeat the same procedure for `fix/400` and `fix/401`
6. Run overnight in the background on the integration branch
7. Confirm real behavior via log/status monitoring
8. Keep the process running after reporting results
## Error Handling/Recovery
- PR creation/conflict failure: stop only the affected branch, resolve, keep isolation from other branches
- Self-review failure (including minor findings): no merge; push fix commits and refresh the review
- CI failure: fix the cause, re-push, re-verify
- Merge failure: update the base and retry
- Overnight startup failure: analyze logs and restart
- Errors during monitoring: report them, but keep running without automatic shutdown
## Testing/Validation
- Run the relevant unit/integration tests per PR
- When needed, run regressions including `tests/test_main.py` and `tests/test_runtime_overnight_scripts.py`
- Write self-reviews against the `Critical/Major/Minor` rubric
- Merge only when zero minor issues is explicitly stated
- On the final integration branch, confirm overnight startup, loop entry, and error logs
- Report continued-execution status after confirming PID/process liveness
## Non-Goals
- This document does not cover the implementation-level code changes themselves.
- This document does not cover an external reviewer approval process (self-review only).

# Issue #409 Design - KR Session-Aware Exchange Routing
## Context
- Issue: #409 (bug: KR session-specific exchange routing missing - screening/orders/dual-listing priority unhandled)
- Related runtime observation targets: #318, #325
- Date: 2026-03-04
- Confirmed approach: Option 2 (routing module introduction)
## Goals
1. Ensure domestic screening uses session-specific exchange market code.
2. Ensure domestic order submission explicitly sets exchange routing code.
3. Add dual-listing routing priority logic (spread/liquidity aware) with safe fallback.
4. Keep existing behavior stable for non-KR flows and existing risk/order policy guards.
5. Enable runtime observability for #409 while monitoring #318/#325 in parallel.
## Non-Goals
- Replacing current session classification model.
- Introducing new market sessions or changing session boundaries.
- Refactoring overseas order flow.
## Architecture
### New Component
- Add `KRExchangeRouter` (new module, e.g. `src/broker/kr_exchange_router.py`).
- Responsibility split:
- `classify_session_id`: session classification only.
- `KRExchangeRouter`: final domestic exchange selection (`KRX`/`NXT`) for ranking and order.
- `KISBroker`: inject resolved routing values into request params/body.
### Integration Points
- `KISBroker.fetch_market_rankings`
- Session-aware market division code:
- `KRX_REG` -> `J`
- `NXT_PRE`, `NXT_AFTER` -> `NX`
- `KISBroker.send_order`
- Explicit `EXCG_ID_DVSN_CD` is always set.
- `SmartVolatilityScanner._scan_domestic`
- Ensure domestic ranking API path resolves exchange consistently with current session.
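The session-to-market-division mapping above is small enough to state as a lookup table. A sketch, assuming `classify_session_id` emits exactly these identifiers (the helper name is illustrative):

```python
# Sketch of the ranking market-division mapping described above.
RANKING_MARKET_DIV = {
    "KRX_REG": "J",     # regular KRX session
    "NXT_PRE": "NX",    # NXT pre-market session
    "NXT_AFTER": "NX",  # NXT after-hours session
}


def ranking_market_code(session_id: str) -> str:
    # Default to the KRX code for any unmapped/unknown session.
    return RANKING_MARKET_DIV.get(session_id, "J")
```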
## Data Flow
1. Scanner path:
- Determine `session_id`.
- `resolve_for_ranking(session_id)`.
- Inject `J` or `NX` into ranking API params.
2. Order path:
- Pass `session_id` into order path.
- `resolve_for_order(stock_code, session_id)`.
- Single listing: session default exchange.
- Dual listing: select by spread/liquidity heuristic when data is available.
- Data unavailable/error: fallback to session default.
- Send order with explicit `EXCG_ID_DVSN_CD`.
3. Observability:
- Log `session_id`, `resolved_exchange`, `routing_reason`.
## Dual-Listing Routing Priority
- Preferred decision source: spread/liquidity comparison.
- Deterministic fallback: session-default exchange.
- Proposed reasons in logs:
- `session_default`
- `dual_listing_spread`
- `dual_listing_liquidity`
- `fallback_data_unavailable`
## Error Handling
- Router does not block order path when auxiliary data is unavailable.
- Fail-open strategy for routing selection (fallback to session default) while preserving existing API/network error semantics.
- `send_order` exchange field omission is forbidden by design after this change.
## Testing Strategy
### Unit
- Router mapping by session (`KRX_REG`, `NXT_PRE`, `NXT_AFTER`).
- Dual-listing routing priority and fallback.
- Broker order body includes `EXCG_ID_DVSN_CD`.
- Ranking params use session-aware market code.
### Integration/Regression
- `smart_scanner` domestic calls align with session exchange.
- Existing order policy tests remain green.
- Re-run regression sets covering #318/#325 related paths.
### Runtime Observation (24h)
- Restart program from working branch build.
- Run runtime monitor for up to 24h.
- Verify and track:
- #409: session-aware routing evidence in logs.
- #318: ATR dynamic stop evidence.
- #325: ATR/pred_down_prob injection evidence.
- If anomalies are detected during monitoring, create separate issue tickets with evidence and links.
## Acceptance Criteria
1. No domestic ranking call uses hardcoded KRX-only behavior across NXT sessions.
2. No domestic order is sent without `EXCG_ID_DVSN_CD`.
3. Dual-listing path has explicit priority logic and deterministic fallback.
4. Tests pass for new and affected paths.
5. Runtime monitor evidence is collected for #409, #318, #325; anomalies are ticketed.
## Risks and Mitigations
- Risk: Increased routing complexity introduces regressions.
- Mitigation: isolate router, high-coverage unit tests, preserve existing interfaces where possible.
- Risk: Runtime events for #318/#325 may not naturally occur in 24h.
- Mitigation: mark as `NOT_OBSERVED` and keep issue state based on evidence policy; do not force-close without proof.
## Planned Next Step
- Invoke `writing-plans` workflow and produce implementation plan before code changes.

# Issue #409 KR Session Exchange Routing Implementation Plan
> **For Claude:** REQUIRED SUB-SKILL: Use superpowers:executing-plans to implement this plan task-by-task.
**Goal:** Fix #409 by making KR screening/order routing session-aware and adding dual-listing exchange priority with deterministic fallback, then run 24h runtime observation for #409/#318/#325.
**Architecture:** Introduce a dedicated `KRExchangeRouter` module that resolves exchange by session and dual-listing metadata. Keep session classification in `order_policy`, and inject router outputs into `KISBroker` ranking/order requests. Add explicit routing logs for runtime evidence and keep non-KR behavior unchanged.
**Tech Stack:** Python 3.12, aiohttp client layer, pytest/pytest-asyncio, Gitea CLI (`tea`), bash runtime monitor scripts.
---
### Task 1: Preflight and Branch Runtime Gate
**Files:**
- Modify: `workflow/session-handover.md`
**Step 1: Add handover entry for this ticket branch**
```md
### 2026-03-04 | session=codex-issue409-start
- branch: feature/issue-409-kr-session-exchange-routing
- docs_checked: docs/workflow.md, docs/commands.md, docs/agent-constraints.md
- open_issues_reviewed: #409, #318, #325
- next_ticket: #409
- process_gate_checked: process_ticket=#306,#308 merged_to_feature_branch=yes
- risks_or_notes: #409 code fix + 24h monitor, runtime anomaly creates separate issue ticket
```
**Step 2: Run strict handover check**
Run: `python3 scripts/session_handover_check.py --strict`
Expected: PASS
**Step 3: Commit**
```bash
git add workflow/session-handover.md
git commit -m "chore: add handover entry for issue #409"
```
### Task 2: Add Router Unit Tests First (TDD)
**Files:**
- Create: `tests/test_kr_exchange_router.py`
**Step 1: Write failing tests for session mapping**
```python
from src.broker.kr_exchange_router import KRExchangeRouter
def test_ranking_market_code_by_session() -> None:
router = KRExchangeRouter()
assert router.resolve_for_ranking("KRX_REG") == "J"
assert router.resolve_for_ranking("NXT_PRE") == "NX"
assert router.resolve_for_ranking("NXT_AFTER") == "NX"
```
**Step 2: Write failing tests for dual-listing fallback behavior**
```python
def test_order_exchange_falls_back_to_session_default_on_missing_data() -> None:
router = KRExchangeRouter()
resolved = router.resolve_for_order(
stock_code="0001A0",
session_id="NXT_PRE",
is_dual_listed=True,
spread_krx=None,
spread_nxt=None,
liquidity_krx=None,
liquidity_nxt=None,
)
assert resolved.exchange_code == "NXT"
assert resolved.reason == "fallback_data_unavailable"
```
**Step 3: Run tests to verify fail**
Run: `pytest tests/test_kr_exchange_router.py -v`
Expected: FAIL (`ModuleNotFoundError` or missing class)
**Step 4: Commit tests-only checkpoint**
```bash
git add tests/test_kr_exchange_router.py
git commit -m "test: add failing tests for KR exchange router"
```
### Task 3: Implement Router Minimal Code
**Files:**
- Create: `src/broker/kr_exchange_router.py`
- Modify: `src/broker/__init__.py`
**Step 1: Add routing dataclass + session default mapping**
```python
from dataclasses import dataclass


@dataclass(frozen=True)
class ExchangeResolution:
    exchange_code: str
    reason: str


class KRExchangeRouter:
    def resolve_for_ranking(self, session_id: str) -> str:
        return "NX" if session_id in {"NXT_PRE", "NXT_AFTER"} else "J"
```
**Step 2: Add dual-listing decision path + fallback**
```python
# Inside KRExchangeRouter.resolve_for_order(...); default_exchange is the
# session default ("NXT" for NXT sessions, otherwise "KRX").
if is_dual_listed and spread_krx is not None and spread_nxt is not None:
    if spread_nxt < spread_krx:
        return ExchangeResolution("NXT", "dual_listing_spread")
    return ExchangeResolution("KRX", "dual_listing_spread")
return ExchangeResolution(default_exchange, "fallback_data_unavailable")
```
**Step 3: Run router tests**
Run: `pytest tests/test_kr_exchange_router.py -v`
Expected: PASS
**Step 4: Commit**
```bash
git add src/broker/kr_exchange_router.py src/broker/__init__.py
git commit -m "feat: add KR session-aware exchange router"
```
### Task 4: Broker Request Wiring (Ranking + Order)
**Files:**
- Modify: `src/broker/kis_api.py`
- Modify: `tests/test_broker.py`
**Step 1: Add failing tests for ranking param and order body exchange field**
```python
assert called_params["FID_COND_MRKT_DIV_CODE"] == "NX"
assert called_json["EXCG_ID_DVSN_CD"] == "NXT"
```
**Step 2: Run targeted test subset (fail first)**
Run: `pytest tests/test_broker.py -k "market_rankings or EXCG_ID_DVSN_CD" -v`
Expected: FAIL on missing field/value
**Step 3: Implement minimal wiring**
```python
session_id = runtime_session_id or classify_session_id(MARKETS["KR"])
market_div_code = self._kr_router.resolve_for_ranking(session_id)
params["FID_COND_MRKT_DIV_CODE"] = market_div_code
resolution = self._kr_router.resolve_for_order(...)
body["EXCG_ID_DVSN_CD"] = resolution.exchange_code
```
**Step 4: Add routing evidence logs**
```python
logger.info(
"KR routing resolved",
extra={"session_id": session_id, "exchange": resolution.exchange_code, "reason": resolution.reason},
)
```
**Step 5: Re-run broker tests**
Run: `pytest tests/test_broker.py -k "market_rankings or EXCG_ID_DVSN_CD" -v`
Expected: PASS
**Step 6: Commit**
```bash
git add src/broker/kis_api.py tests/test_broker.py
git commit -m "fix: apply KR exchange routing to rankings and orders"
```
### Task 5: Scanner Session Alignment
**Files:**
- Modify: `src/analysis/smart_scanner.py`
- Modify: `tests/test_smart_scanner.py`
**Step 1: Add failing test for domestic session-aware ranking path**
```python
assert mock_broker.fetch_market_rankings.call_args_list[0].kwargs["session_id"] == "NXT_PRE"
```
**Step 2: Run scanner tests (fail first)**
Run: `pytest tests/test_smart_scanner.py -k "session" -v`
Expected: FAIL on missing session argument
**Step 3: Implement scanner call wiring**
```python
fluct_rows = await self.broker.fetch_market_rankings(
ranking_type="fluctuation",
limit=50,
session_id=session_id,
)
```
**Step 4: Re-run scanner tests**
Run: `pytest tests/test_smart_scanner.py -v`
Expected: PASS
**Step 5: Commit**
```bash
git add src/analysis/smart_scanner.py tests/test_smart_scanner.py
git commit -m "fix: align domestic scanner rankings with KR session routing"
```
### Task 6: Full Verification and Regression
**Files:**
- No new files
**Step 1: Run focused regressions for #409**
Run:
- `pytest tests/test_kr_exchange_router.py tests/test_broker.py tests/test_smart_scanner.py -v`
Expected: PASS
**Step 2: Run related runtime-path regressions for #318/#325**
Run:
- `pytest tests/test_main.py -k "atr or staged_exit or pred_down_prob" -v`
Expected: PASS
**Step 3: Run lint/type checks for touched modules**
Run:
- `ruff check src/broker/kis_api.py src/broker/kr_exchange_router.py src/analysis/smart_scanner.py tests/test_kr_exchange_router.py tests/test_broker.py tests/test_smart_scanner.py`
- `mypy src/broker/kis_api.py src/broker/kr_exchange_router.py src/analysis/smart_scanner.py --strict`
Expected: PASS
**Step 4: Commit final fixup if needed**
```bash
git add -A
git commit -m "chore: finalize #409 verification adjustments"
```
### Task 7: PR Creation, Self-Review, and Merge
**Files:**
- Modify: PR metadata only
**Step 1: Push branch**
Run: `git push -u origin feature/issue-409-kr-session-exchange-routing`
Expected: remote branch created
**Step 2: Create PR to `main` with issue links**
```bash
PR_BODY=$(cat <<'MD'
## Summary
- fix KR session-aware exchange routing for rankings and orders (#409)
- add dual-listing exchange priority with deterministic fallback
- add logs and tests for routing evidence
## Validation
- pytest tests/test_kr_exchange_router.py tests/test_broker.py tests/test_smart_scanner.py -v
- pytest tests/test_main.py -k "atr or staged_exit or pred_down_prob" -v
- ruff check ...
- mypy ...
MD
)
tea pr create --base main --head feature/issue-409-kr-session-exchange-routing --title "fix: KR session-aware exchange routing (#409)" --description "$PR_BODY"
```
**Step 3: Validate PR body integrity**
Run: `python3 scripts/validate_pr_body.py --pr <PR_NUMBER>`
Expected: PASS
**Step 4: Self-review checklist (blocking)**
- Re-check the diff for any order path missing `EXCG_ID_DVSN_CD`
- Confirm session mapping (`KRX_REG=J`, `NXT_PRE/NXT_AFTER=NX`)
- Confirm fallback reason logging exists
- Confirm tests cover dual-listing fallback
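The session mapping in this checklist can be pinned down in isolation. The sketch below only restates the mapping rules named above (the production logic lives in `src/broker/kr_exchange_router.py`, and the real checks belong in `tests/test_kr_exchange_router.py`):

```python
# Minimal restatement of the KR session -> exchange mapping under review.
NXT_SESSIONS = {"NXT_PRE", "NXT_AFTER"}


def ranking_market_code(session_id: str) -> str:
    """KRX regular session uses "J"; NXT sessions use "NX"."""
    return "NX" if session_id in NXT_SESSIONS else "J"


def order_exchange(session_id: str) -> str:
    """Session default before any dual-listing override."""
    return "NXT" if session_id in NXT_SESSIONS else "KRX"


assert ranking_market_code("KRX_REG") == "J"
assert ranking_market_code("NXT_PRE") == "NX"
assert ranking_market_code("NXT_AFTER") == "NX"
assert order_exchange("KRX_REG") == "KRX"
assert order_exchange("NXT_PRE") == "NXT"
```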
**Step 5: Merge only if no minor issues remain**
Run: `tea pr merge <PR_NUMBER> --merge`
Expected: merged
### Task 8: Restart Program and 24h Runtime Monitoring
**Files:**
- Runtime artifacts: `data/overnight/*.log`
**Step 1: Restart runtime from merged state**
Run:
- `bash scripts/stop_overnight.sh`
- `bash scripts/run_overnight.sh`
Expected: live process and watchdog healthy
**Step 2: Start 24h monitor**
Run:
- `INTERVAL_SEC=60 MAX_HOURS=24 POLICY_TZ=Asia/Seoul bash scripts/runtime_verify_monitor.sh`
Expected: monitor loop runs and writes `data/overnight/runtime_verify_*.log`
**Step 3: Track #409/#318/#325 evidence in loop**
Run examples:
- `rg -n "KR routing resolved|EXCG_ID_DVSN_CD|session=NXT_|session=KRX_REG" data/overnight/run_*.log`
- `rg -n "atr_value|dynamic hard stop|staged exit|pred_down_prob" data/overnight/run_*.log`
Expected:
- #409 routing evidence present when KR flows trigger
- #318/#325 evidence captured if runtime conditions occur
**Step 4: If an anomaly is found, create a separate issue ticket immediately**
```bash
ISSUE_BODY=$(cat <<'MD'
## Summary
- runtime anomaly detected during #409 monitor
## Evidence
- log: data/overnight/run_xxx.log
- timestamp: <UTC/KST>
- observed: <symptom>
## Suspected Scope
- related to #409/#318/#325 monitoring path
## Next Action
- triage + reproducible test
MD
)
tea issues create -t "bug: runtime anomaly during #409 monitor" -d "$ISSUE_BODY"
```
**Step 5: Post monitoring summary to #409/#318/#325**
- Include a PASS/FAIL/NOT_OBSERVED matrix and exact timestamps.
- Do not close #318/#325 without concrete acceptance evidence.
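A summary matrix like this can be assembled mechanically from the monitor log. A minimal sketch, assuming the `[COVERAGE] NAME=STATUS` line shape that `scripts/runtime_verify_monitor.sh` emits (last occurrence wins):

```python
import re


def coverage_matrix(lines: list[str]) -> dict[str, str]:
    """Collapse [COVERAGE] log lines into a name -> status matrix."""
    matrix: dict[str, str] = {}
    pattern = re.compile(r"\[COVERAGE\]\s+(\w+)=(\w+)")
    for line in lines:
        match = pattern.search(line)
        if match:
            matrix[match.group(1)] = match.group(2)
    return matrix


sample = [
    "2026-03-04T01:00:00Z [COVERAGE] LIVE_MODE=PASS source=process_liveness",
    "2026-03-04T01:00:00Z [COVERAGE] KR_LOOP=NOT_OBSERVED reason=no_run_log",
]
assert coverage_matrix(sample) == {"LIVE_MODE": "PASS", "KR_LOOP": "NOT_OBSERVED"}
```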

**scripts/run_overnight.sh**

```diff
@@ -8,8 +8,32 @@ CHECK_INTERVAL="${CHECK_INTERVAL:-30}"
 TMUX_AUTO="${TMUX_AUTO:-true}"
 TMUX_ATTACH="${TMUX_ATTACH:-true}"
 TMUX_SESSION_PREFIX="${TMUX_SESSION_PREFIX:-ouroboros_overnight}"
+STARTUP_GRACE_SEC="${STARTUP_GRACE_SEC:-3}"
+dashboard_port="${DASHBOARD_PORT:-8080}"
+APP_CMD_BIN="${APP_CMD_BIN:-}"
+APP_CMD_ARGS="${APP_CMD_ARGS:-}"
+RUNS_DASHBOARD="false"
 
-if [ -z "${APP_CMD:-}" ]; then
+# Custom override contract:
+# 1) Preferred: APP_CMD_BIN + APP_CMD_ARGS
+#    - APP_CMD_BIN is treated as a single executable token.
+#    - APP_CMD_ARGS uses shell-style word splitting; quote/escape inside this
+#      variable is NOT preserved as a nested shell parse.
+# 2) Legacy fallback: APP_CMD (raw shell command string)
+#    - This path remains for backward compatibility.
+#    - When APP_CMD includes --dashboard, caller should include explicit
+#      DASHBOARD_PORT assignment in APP_CMD if a non-default port is required.
+if [ -n "$APP_CMD_BIN" ]; then
+  USE_DEFAULT_APP_CMD="false"
+  USE_SAFE_CUSTOM_APP_CMD="true"
+  APP_CMD="${APP_CMD_BIN} ${APP_CMD_ARGS}"
+  if [[ " $APP_CMD_ARGS " == *" --dashboard "* ]]; then
+    RUNS_DASHBOARD="true"
+  fi
+elif [ -z "${APP_CMD:-}" ]; then
+  USE_DEFAULT_APP_CMD="true"
+  USE_SAFE_CUSTOM_APP_CMD="false"
   if [ -x ".venv/bin/python" ]; then
     PYTHON_BIN=".venv/bin/python"
   elif command -v python3 >/dev/null 2>&1; then
@@ -21,9 +45,14 @@ if [ -z "${APP_CMD:-}" ]; then
     exit 1
   fi
-  dashboard_port="${DASHBOARD_PORT:-8080}"
-  APP_CMD="DASHBOARD_PORT=$dashboard_port $PYTHON_BIN -m src.main --mode=live --dashboard"
+  APP_CMD="$PYTHON_BIN -m src.main --mode=live --dashboard"
+  RUNS_DASHBOARD="true"
+else
+  USE_DEFAULT_APP_CMD="false"
+  USE_SAFE_CUSTOM_APP_CMD="false"
+  if [[ "$APP_CMD" == *"--dashboard"* ]]; then
+    RUNS_DASHBOARD="true"
+  fi
 fi
 
 mkdir -p "$LOG_DIR"
@@ -34,6 +63,24 @@ WATCHDOG_LOG="$LOG_DIR/watchdog_${timestamp}.log"
 PID_FILE="$LOG_DIR/app.pid"
 WATCHDOG_PID_FILE="$LOG_DIR/watchdog.pid"
 
+is_port_in_use() {
+  local port="$1"
+  if command -v ss >/dev/null 2>&1; then
+    ss -ltn 2>/dev/null | grep -Eq ":${port}[[:space:]]"
+    return $?
+  fi
+  if command -v lsof >/dev/null 2>&1; then
+    lsof -nP -iTCP:"$port" -sTCP:LISTEN >/dev/null 2>&1
+    return $?
+  fi
+  if command -v netstat >/dev/null 2>&1; then
+    netstat -ltn 2>/dev/null | grep -Eq "[:.]${port}[[:space:]]"
+    return $?
+  fi
+  # No supported socket inspection command found.
+  return 1
+}
+
 if [ -f "$PID_FILE" ]; then
   old_pid="$(cat "$PID_FILE" || true)"
   if [ -n "$old_pid" ] && kill -0 "$old_pid" 2>/dev/null; then
@@ -43,7 +90,29 @@ if [ -f "$PID_FILE" ]; then
 fi
 
 echo "[$(date -u +"%Y-%m-%dT%H:%M:%SZ")] starting: $APP_CMD" | tee -a "$RUN_LOG"
-nohup bash -lc "$APP_CMD" >>"$RUN_LOG" 2>&1 &
+if [ "$RUNS_DASHBOARD" = "true" ] && is_port_in_use "$dashboard_port"; then
+  echo "[$(date -u +"%Y-%m-%dT%H:%M:%SZ")] startup failed: dashboard port ${dashboard_port} already in use" | tee -a "$RUN_LOG"
+  exit 1
+fi
+if [ "$USE_DEFAULT_APP_CMD" = "true" ]; then
+  # Default path avoids shell word-splitting on executable paths.
+  nohup env DASHBOARD_PORT="$dashboard_port" "$PYTHON_BIN" -m src.main --mode=live --dashboard >>"$RUN_LOG" 2>&1 &
+elif [ "$USE_SAFE_CUSTOM_APP_CMD" = "true" ]; then
+  # Safer custom path: executable path is handled as a single token.
+  if [ -n "$APP_CMD_ARGS" ]; then
+    # shellcheck disable=SC2206
+    app_args=( $APP_CMD_ARGS )
+    nohup env DASHBOARD_PORT="$dashboard_port" "$APP_CMD_BIN" "${app_args[@]}" >>"$RUN_LOG" 2>&1 &
+  else
+    nohup env DASHBOARD_PORT="$dashboard_port" "$APP_CMD_BIN" >>"$RUN_LOG" 2>&1 &
+  fi
+else
+  # Custom APP_CMD is treated as a shell command string.
+  # If executable paths include spaces, they must be quoted inside APP_CMD.
+  # Legacy compatibility path: caller owns quoting and env var injection.
+  nohup bash -lc "exec env $APP_CMD" >>"$RUN_LOG" 2>&1 &
+fi
 app_pid=$!
 echo "$app_pid" > "$PID_FILE"
@@ -54,6 +123,20 @@ nohup env PID_FILE="$PID_FILE" LOG_FILE="$WATCHDOG_LOG" CHECK_INTERVAL="$CHECK_I
 watchdog_pid=$!
 echo "$watchdog_pid" > "$WATCHDOG_PID_FILE"
 
+sleep "$STARTUP_GRACE_SEC"
+if ! kill -0 "$app_pid" 2>/dev/null; then
+  echo "[$(date -u +"%Y-%m-%dT%H:%M:%SZ")] startup failed: app process exited early (pid=$app_pid)" | tee -a "$RUN_LOG"
+  [ -n "${watchdog_pid:-}" ] && kill "$watchdog_pid" 2>/dev/null || true
+  tail -n 20 "$RUN_LOG" || true
+  exit 1
+fi
+if ! kill -0 "$watchdog_pid" 2>/dev/null; then
+  echo "[$(date -u +"%Y-%m-%dT%H:%M:%SZ")] startup failed: watchdog exited early (pid=$watchdog_pid)" | tee -a "$WATCHDOG_LOG"
+  kill "$app_pid" 2>/dev/null || true
+  tail -n 20 "$WATCHDOG_LOG" || true
+  exit 1
+fi
+
 cat <<EOF
 시작 완료
 - app pid: $app_pid
```
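The APP_CMD_BIN/APP_CMD_ARGS contract can be exercised in isolation. The values below are hypothetical and nothing here starts the real app; the sketch only demonstrates the word-splitting behaviour the comments describe (an executable path containing spaces would be split apart the same way, which is why APP_CMD_BIN stays a single token):

```shell
# Hypothetical args; mirrors the unquoted expansion the safe custom path applies.
APP_CMD_ARGS="-m src.main --mode=live --dashboard"
set -- $APP_CMD_ARGS
count=$#
first=$1
echo "split into $count tokens, starting with $first"
```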

**scripts/runtime_verify_monitor.sh**

```diff
@@ -7,12 +7,15 @@ ROOT_DIR="${ROOT_DIR:-/home/agentson/repos/The-Ouroboros}"
 LOG_DIR="${LOG_DIR:-$ROOT_DIR/data/overnight}"
 INTERVAL_SEC="${INTERVAL_SEC:-60}"
 MAX_HOURS="${MAX_HOURS:-24}"
+MAX_LOOPS="${MAX_LOOPS:-0}"
 POLICY_TZ="${POLICY_TZ:-Asia/Seoul}"
+DASHBOARD_PORT="${DASHBOARD_PORT:-8080}"
 
 cd "$ROOT_DIR"
 OUT_LOG="$LOG_DIR/runtime_verify_$(date +%Y%m%d_%H%M%S).log"
 END_TS=$(( $(date +%s) + MAX_HOURS*3600 ))
+loops=0
 
 log() {
   printf '%s %s\n' "$(date -u +%Y-%m-%dT%H:%M:%SZ)" "$1" | tee -a "$OUT_LOG" >/dev/null
@@ -31,6 +34,11 @@ check_signal() {
   return 1
 }
 
+find_live_pids() {
+  # Detect live-mode process even when run_overnight pid files are absent.
+  pgrep -af "[s]rc.main --mode=live" 2>/dev/null | awk '{print $1}' | tr '\n' ',' | sed 's/,$//'
+}
+
 check_forbidden() {
   local name="$1"
   local pattern="$2"
@@ -44,42 +52,94 @@ check_forbidden() {
   return 0
 }
 
+is_port_listening() {
+  local port="$1"
+  if command -v ss >/dev/null 2>&1; then
+    ss -ltn 2>/dev/null | grep -Eq ":${port}[[:space:]]"
+    return $?
+  fi
+  if command -v lsof >/dev/null 2>&1; then
+    lsof -nP -iTCP:"$port" -sTCP:LISTEN >/dev/null 2>&1
+    return $?
+  fi
+  if command -v netstat >/dev/null 2>&1; then
+    netstat -ltn 2>/dev/null | grep -Eq "[:.]${port}[[:space:]]"
+    return $?
+  fi
+  return 1
+}
+
 log "[INFO] runtime verify monitor started interval=${INTERVAL_SEC}s max_hours=${MAX_HOURS} policy_tz=${POLICY_TZ}"
 while true; do
+  loops=$((loops + 1))
   now=$(date +%s)
   if [ "$now" -ge "$END_TS" ]; then
     log "[INFO] monitor completed (time window reached)"
     exit 0
   fi
+  if [ "$MAX_LOOPS" -gt 0 ] && [ "$loops" -gt "$MAX_LOOPS" ]; then
+    log "[INFO] monitor completed (max loops reached)"
+    exit 0
+  fi
   latest_run="$(ls -t "$LOG_DIR"/run_*.log 2>/dev/null | head -n1 || true)"
-  if [ -z "$latest_run" ]; then
-    log "[ANOMALY] no run log found"
-    sleep "$INTERVAL_SEC"
-    continue
-  fi
   # Basic liveness hints.
   app_pid="$(cat "$LOG_DIR/app.pid" 2>/dev/null || true)"
   wd_pid="$(cat "$LOG_DIR/watchdog.pid" 2>/dev/null || true)"
+  live_pids="$(find_live_pids)"
   app_alive=0
   wd_alive=0
   port_alive=0
   [ -n "$app_pid" ] && kill -0 "$app_pid" 2>/dev/null && app_alive=1
   [ -n "$wd_pid" ] && kill -0 "$wd_pid" 2>/dev/null && wd_alive=1
-  ss -ltnp 2>/dev/null | rg -q ':8080' && port_alive=1
-  log "[HEARTBEAT] run_log=$latest_run app_alive=$app_alive watchdog_alive=$wd_alive port8080=$port_alive"
+  if [ "$app_alive" -eq 0 ] && [ -n "$live_pids" ]; then
+    app_alive=1
+  fi
+  is_port_listening "$DASHBOARD_PORT" && port_alive=1
+  log "[HEARTBEAT] run_log=${latest_run:-none} app_alive=$app_alive watchdog_alive=$wd_alive port=${DASHBOARD_PORT} alive=$port_alive live_pids=${live_pids:-none}"
+  defer_log_checks=0
+  if [ -z "$latest_run" ] && [ "$app_alive" -eq 1 ]; then
+    defer_log_checks=1
+    log "[INFO] run log not yet available; defer log-based coverage checks"
+  fi
+  if [ -z "$latest_run" ] && [ "$defer_log_checks" -eq 0 ]; then
+    log "[ANOMALY] no run log found"
+  fi
   # Coverage matrix rows (session paths and policy gate evidence).
   not_observed=0
+  if [ "$app_alive" -eq 1 ]; then
+    log "[COVERAGE] LIVE_MODE=PASS source=process_liveness"
+  else
+    if [ -n "$latest_run" ]; then
       check_signal "LIVE_MODE" "Mode: live" "$latest_run" || not_observed=$((not_observed+1))
+    else
+      log "[COVERAGE] LIVE_MODE=NOT_OBSERVED reason=no_run_log_no_live_pid"
+      not_observed=$((not_observed+1))
+    fi
+  fi
+  if [ "$defer_log_checks" -eq 1 ]; then
+    for deferred in KR_LOOP NXT_PATH US_PRE_PATH US_DAY_PATH US_AFTER_PATH ORDER_POLICY_SESSION; do
+      log "[COVERAGE] ${deferred}=DEFERRED reason=no_run_log_process_alive"
+    done
+  elif [ -n "$latest_run" ]; then
     check_signal "KR_LOOP" "Processing market: Korea Exchange" "$latest_run" || not_observed=$((not_observed+1))
     check_signal "NXT_PATH" "NXT_PRE|NXT_AFTER|session=NXT_" "$latest_run" || not_observed=$((not_observed+1))
     check_signal "US_PRE_PATH" "US_PRE|session=US_PRE" "$latest_run" || not_observed=$((not_observed+1))
     check_signal "US_DAY_PATH" "US_DAY|session=US_DAY|Processing market: .*NASDAQ|Processing market: .*NYSE|Processing market: .*AMEX" "$latest_run" || not_observed=$((not_observed+1))
     check_signal "US_AFTER_PATH" "US_AFTER|session=US_AFTER" "$latest_run" || not_observed=$((not_observed+1))
     check_signal "ORDER_POLICY_SESSION" "Order policy rejected .*\\[session=" "$latest_run" || not_observed=$((not_observed+1))
+  else
+    for missing in KR_LOOP NXT_PATH US_PRE_PATH US_DAY_PATH US_AFTER_PATH ORDER_POLICY_SESSION; do
+      log "[COVERAGE] ${missing}=NOT_OBSERVED reason=no_run_log"
+      not_observed=$((not_observed+1))
+    done
+  fi
   if [ "$not_observed" -gt 0 ]; then
     log "[ANOMALY] coverage_not_observed=$not_observed (treat as FAIL)"
@@ -95,11 +155,17 @@ while true; do
     is_weekend=1
   fi
-  if [ "$is_weekend" -eq 1 ]; then
+  if [ "$defer_log_checks" -eq 1 ]; then
+    log "[FORBIDDEN] WEEKEND_KR_SESSION_ACTIVE=SKIP reason=no_run_log_process_alive"
+  elif [ "$is_weekend" -eq 1 ]; then
     # Weekend policy: KR regular session loop must never appear.
+    if [ -n "$latest_run" ]; then
       check_forbidden "WEEKEND_KR_SESSION_ACTIVE" \
         "Market session active: KR|session=KRX_REG|Processing market: Korea Exchange" \
         "$latest_run" || forbidden_hits=$((forbidden_hits+1))
+    else
+      log "[FORBIDDEN] WEEKEND_KR_SESSION_ACTIVE=SKIP reason=no_run_log"
+    fi
   else
     log "[FORBIDDEN] WEEKEND_KR_SESSION_ACTIVE=SKIP reason=weekday"
   fi
```

**scripts/validate_pr_body.py**

````diff
@@ -5,6 +5,8 @@ from __future__ import annotations
 import argparse
 import json
+import os
+import shutil
 import re
 import subprocess
 import sys
@@ -12,11 +14,31 @@ from pathlib import Path
 HEADER_PATTERN = re.compile(r"^##\s+\S+", re.MULTILINE)
 LIST_ITEM_PATTERN = re.compile(r"^\s*(?:-|\*|\d+\.)\s+\S+", re.MULTILINE)
+FENCED_CODE_PATTERN = re.compile(r"```.*?```", re.DOTALL)
+INLINE_CODE_PATTERN = re.compile(r"`[^`]*`")
+
+
+def _strip_code_segments(text: str) -> str:
+    without_fences = FENCED_CODE_PATTERN.sub("", text)
+    return INLINE_CODE_PATTERN.sub("", without_fences)
+
+
+def resolve_tea_binary() -> str:
+    tea_from_path = shutil.which("tea")
+    if tea_from_path:
+        return tea_from_path
+    tea_home = Path.home() / "bin" / "tea"
+    if tea_home.exists() and tea_home.is_file() and os.access(tea_home, os.X_OK):
+        return str(tea_home)
+    raise RuntimeError("tea binary not found (checked PATH and ~/bin/tea)")
+
+
 def validate_pr_body_text(text: str) -> list[str]:
     errors: list[str] = []
-    if "\\n" in text and "\n" not in text:
+    searchable = _strip_code_segments(text)
+    if "\\n" in searchable:
         errors.append("body contains escaped newline sequence (\\n)")
     if text.count("```") % 2 != 0:
         errors.append("body has unbalanced fenced code blocks (``` count is odd)")
@@ -28,10 +50,11 @@ def validate_pr_body_text(text: str) -> list[str]:
 def fetch_pr_body(pr_number: int) -> str:
+    tea_binary = resolve_tea_binary()
     try:
         completed = subprocess.run(
             [
-                "tea",
+                tea_binary,
                 "api",
                 "-R",
                 "origin",
@@ -41,7 +64,7 @@ def fetch_pr_body(pr_number: int) -> str:
             capture_output=True,
             text=True,
         )
-    except (subprocess.CalledProcessError, FileNotFoundError) as exc:
+    except (subprocess.CalledProcessError, FileNotFoundError, PermissionError) as exc:
         raise RuntimeError(f"failed to fetch PR #{pr_number}: {exc}") from exc
     try:
````
**src/analysis/smart_scanner.py**

```diff
@@ -68,6 +68,7 @@ class SmartVolatilityScanner:
         self,
         market: MarketInfo | None = None,
         fallback_stocks: list[str] | None = None,
+        domestic_session_id: str | None = None,
     ) -> list[ScanCandidate]:
         """Execute smart scan and return qualified candidates.
@@ -81,11 +82,12 @@ class SmartVolatilityScanner:
         if market and not market.is_domestic:
             return await self._scan_overseas(market, fallback_stocks)
-        return await self._scan_domestic(fallback_stocks)
+        return await self._scan_domestic(fallback_stocks, session_id=domestic_session_id)
 
     async def _scan_domestic(
         self,
         fallback_stocks: list[str] | None = None,
+        session_id: str | None = None,
     ) -> list[ScanCandidate]:
         """Scan domestic market using volatility-first ranking + liquidity bonus."""
         # 1) Primary universe from fluctuation ranking.
@@ -93,6 +95,7 @@ class SmartVolatilityScanner:
             fluct_rows = await self.broker.fetch_market_rankings(
                 ranking_type="fluctuation",
                 limit=50,
+                session_id=session_id,
             )
         except ConnectionError as exc:
             logger.warning("Domestic fluctuation ranking failed: %s", exc)
@@ -103,6 +106,7 @@ class SmartVolatilityScanner:
             volume_rows = await self.broker.fetch_market_rankings(
                 ranking_type="volume",
                 limit=50,
+                session_id=session_id,
             )
         except ConnectionError as exc:
             logger.warning("Domestic volume ranking failed: %s", exc)
```

**src/broker/kis_api.py**

```diff
@@ -12,7 +12,10 @@ from typing import Any, cast
 import aiohttp
 
+from src.broker.kr_exchange_router import KRExchangeRouter
 from src.config import Settings
+from src.core.order_policy import classify_session_id
+from src.markets.schedule import MARKETS
 
 # KIS virtual trading server has a known SSL certificate hostname mismatch.
 _KIS_VTS_HOST = "openapivts.koreainvestment.com"
@@ -92,6 +95,7 @@ class KISBroker:
         self._last_refresh_attempt: float = 0.0
         self._refresh_cooldown: float = 60.0  # Seconds (matches KIS 1/minute limit)
         self._rate_limiter = LeakyBucket(settings.RATE_LIMIT_RPS)
+        self._kr_router = KRExchangeRouter()
 
     def _get_session(self) -> aiohttp.ClientSession:
         if self._session is None or self._session.closed:
@@ -187,9 +191,12 @@ class KISBroker:
                 if resp.status != 200:
                     text = await resp.text()
                     raise ConnectionError(f"Hash key request failed ({resp.status}): {text}")
-                data = await resp.json()
-                return data["HASH"]
+                data = cast(dict[str, Any], await resp.json())
+                hash_value = data.get("HASH")
+                if not isinstance(hash_value, str):
+                    raise ConnectionError("Hash key response missing HASH")
+                return hash_value
 
     # ------------------------------------------------------------------
     # Common Headers
@@ -211,12 +218,21 @@ class KISBroker:
     async def get_orderbook(self, stock_code: str) -> dict[str, Any]:
         """Fetch the current orderbook for a given stock code."""
+        return await self.get_orderbook_by_market(stock_code, market_div_code="J")
+
+    async def get_orderbook_by_market(
+        self,
+        stock_code: str,
+        *,
+        market_div_code: str,
+    ) -> dict[str, Any]:
+        """Fetch orderbook for a specific domestic market division code."""
         await self._rate_limiter.acquire()
         session = self._get_session()
         headers = await self._auth_headers("FHKST01010200")
         params = {
-            "FID_COND_MRKT_DIV_CODE": "J",
+            "FID_COND_MRKT_DIV_CODE": market_div_code,
             "FID_INPUT_ISCD": stock_code,
         }
         url = f"{self._base_url}/uapi/domestic-stock/v1/quotations/inquire-asking-price-exp-ccn"
@@ -226,10 +242,80 @@ class KISBroker:
                 if resp.status != 200:
                     text = await resp.text()
                     raise ConnectionError(f"get_orderbook failed ({resp.status}): {text}")
-                return await resp.json()
+                return cast(dict[str, Any], await resp.json())
         except (TimeoutError, aiohttp.ClientError) as exc:
             raise ConnectionError(f"Network error fetching orderbook: {exc}") from exc
 
+    @staticmethod
+    def _extract_orderbook_metrics(payload: dict[str, Any]) -> tuple[float | None, float | None]:
+        output = payload.get("output1") or payload.get("output") or {}
+        if not isinstance(output, dict):
+            return None, None
+
+        def _float(*keys: str) -> float | None:
+            for key in keys:
+                raw = output.get(key)
+                if raw in (None, ""):
+                    continue
+                try:
+                    return float(cast(str | int | float, raw))
+                except (ValueError, TypeError):
+                    continue
+            return None
+
+        ask = _float("askp1", "stck_askp1")
+        bid = _float("bidp1", "stck_bidp1")
+        if ask is not None and bid is not None and ask > 0 and bid > 0 and ask >= bid:
+            mid = (ask + bid) / 2
+            if mid > 0:
+                spread = (ask - bid) / mid
+            else:
+                spread = None
+        else:
+            spread = None
+        ask_qty = _float("askp_rsqn1", "ask_qty1")
+        bid_qty = _float("bidp_rsqn1", "bid_qty1")
+        if ask_qty is not None and bid_qty is not None and ask_qty >= 0 and bid_qty >= 0:
+            liquidity = ask_qty + bid_qty
+        else:
+            liquidity = None
+        return spread, liquidity
+
+    async def _load_dual_listing_metrics(
+        self,
+        stock_code: str,
+    ) -> tuple[bool, float | None, float | None, float | None, float | None]:
+        """Try KRX/NXT orderbooks and derive spread/liquidity metrics."""
+        spread_krx: float | None = None
+        spread_nxt: float | None = None
+        liquidity_krx: float | None = None
+        liquidity_nxt: float | None = None
+        for market_div_code, exchange in (("J", "KRX"), ("NX", "NXT")):
+            try:
+                payload = await self.get_orderbook_by_market(
+                    stock_code,
+                    market_div_code=market_div_code,
+                )
+            except ConnectionError:
+                continue
+            spread, liquidity = self._extract_orderbook_metrics(payload)
+            if exchange == "KRX":
+                spread_krx = spread
+                liquidity_krx = liquidity
+            else:
+                spread_nxt = spread
+                liquidity_nxt = liquidity
+        is_dual_listed = (
+            (spread_krx is not None and spread_nxt is not None)
+            or (liquidity_krx is not None and liquidity_nxt is not None)
+        )
+        return is_dual_listed, spread_krx, spread_nxt, liquidity_krx, liquidity_nxt
+
     async def get_current_price(self, stock_code: str) -> tuple[float, float, float]:
         """Fetch current price data for a domestic stock.
@@ -302,7 +388,7 @@ class KISBroker:
                 if resp.status != 200:
                     text = await resp.text()
                     raise ConnectionError(f"get_balance failed ({resp.status}): {text}")
-                return await resp.json()
+                return cast(dict[str, Any], await resp.json())
         except (TimeoutError, aiohttp.ClientError) as exc:
             raise ConnectionError(f"Network error fetching balance: {exc}") from exc
@@ -311,7 +397,8 @@ class KISBroker:
         stock_code: str,
         order_type: str,  # "BUY" or "SELL"
         quantity: int,
-        price: int = 0,
+        price: float = 0,
+        session_id: str | None = None,
     ) -> dict[str, Any]:
         """Submit a buy or sell order.
@@ -341,10 +428,32 @@ class KISBroker:
             ord_dvsn = "01"  # 시장가
             ord_price = 0
 
+        resolved_session = session_id or classify_session_id(MARKETS["KR"])
+        if session_id is not None:
+            is_dual_listed, spread_krx, spread_nxt, liquidity_krx, liquidity_nxt = (
+                await self._load_dual_listing_metrics(stock_code)
+            )
+        else:
+            is_dual_listed = False
+            spread_krx = None
+            spread_nxt = None
+            liquidity_krx = None
+            liquidity_nxt = None
+        resolution = self._kr_router.resolve_for_order(
+            stock_code=stock_code,
+            session_id=resolved_session,
+            is_dual_listed=is_dual_listed,
+            spread_krx=spread_krx,
+            spread_nxt=spread_nxt,
+            liquidity_krx=liquidity_krx,
+            liquidity_nxt=liquidity_nxt,
+        )
         body = {
             "CANO": self._account_no,
             "ACNT_PRDT_CD": self._product_cd,
             "PDNO": stock_code,
+            "EXCG_ID_DVSN_CD": resolution.exchange_code,
             "ORD_DVSN": ord_dvsn,
             "ORD_QTY": str(quantity),
             "ORD_UNPR": str(ord_price),
@@ -361,12 +470,15 @@ class KISBroker:
                 if resp.status != 200:
                     text = await resp.text()
                     raise ConnectionError(f"send_order failed ({resp.status}): {text}")
-                data = await resp.json()
+                data = cast(dict[str, Any], await resp.json())
                 logger.info(
                     "Order submitted",
                     extra={
                         "stock_code": stock_code,
                         "action": order_type,
+                        "session_id": resolved_session,
+                        "exchange": resolution.exchange_code,
+                        "routing_reason": resolution.reason,
                     },
                 )
                 return data
@@ -377,6 +489,7 @@ class KISBroker:
         self,
         ranking_type: str = "volume",
         limit: int = 30,
+        session_id: str | None = None,
     ) -> list[dict[str, Any]]:
         """Fetch market rankings from KIS API.
@@ -394,12 +507,15 @@ class KISBroker:
         await self._rate_limiter.acquire()
         session = self._get_session()
 
+        resolved_session = session_id or classify_session_id(MARKETS["KR"])
+        ranking_market_code = self._kr_router.resolve_for_ranking(resolved_session)
         if ranking_type == "volume":
             # 거래량순위: FHPST01710000 / /quotations/volume-rank
             tr_id = "FHPST01710000"
             url = f"{self._base_url}/uapi/domestic-stock/v1/quotations/volume-rank"
             params: dict[str, str] = {
-                "FID_COND_MRKT_DIV_CODE": "J",
+                "FID_COND_MRKT_DIV_CODE": ranking_market_code,
                 "FID_COND_SCR_DIV_CODE": "20171",
                 "FID_INPUT_ISCD": "0000",
                 "FID_DIV_CLS_CODE": "0",
@@ -416,7 +532,7 @@ class KISBroker:
             tr_id = "FHPST01700000"
             url = f"{self._base_url}/uapi/domestic-stock/v1/ranking/fluctuation"
             params = {
-                "fid_cond_mrkt_div_code": "J",
+                "fid_cond_mrkt_div_code": ranking_market_code,
                 "fid_cond_scr_div_code": "20170",
                 "fid_input_iscd": "0000",
                 "fid_rank_sort_cls_code": "0",
```
View File

@@ -0,0 +1,48 @@
from __future__ import annotations
from dataclasses import dataclass
@dataclass(frozen=True)
class ExchangeResolution:
exchange_code: str
reason: str
class KRExchangeRouter:
"""Resolve domestic exchange routing for KR sessions."""
def resolve_for_ranking(self, session_id: str) -> str:
if session_id in {"NXT_PRE", "NXT_AFTER"}:
return "NX"
return "J"
def resolve_for_order(
self,
*,
stock_code: str,
session_id: str,
is_dual_listed: bool = False,
spread_krx: float | None = None,
spread_nxt: float | None = None,
liquidity_krx: float | None = None,
liquidity_nxt: float | None = None,
) -> ExchangeResolution:
del stock_code
default_exchange = "NXT" if session_id in {"NXT_PRE", "NXT_AFTER"} else "KRX"
default_reason = "session_default"
if not is_dual_listed:
return ExchangeResolution(default_exchange, default_reason)
if spread_krx is not None and spread_nxt is not None:
if spread_nxt < spread_krx:
return ExchangeResolution("NXT", "dual_listing_spread")
return ExchangeResolution("KRX", "dual_listing_spread")
if liquidity_krx is not None and liquidity_nxt is not None:
if liquidity_nxt > liquidity_krx:
return ExchangeResolution("NXT", "dual_listing_liquidity")
return ExchangeResolution("KRX", "dual_listing_liquidity")
return ExchangeResolution(default_exchange, "fallback_data_unavailable")

View File

@@ -35,6 +35,7 @@ from src.core.criticality import CriticalityAssessor
from src.core.kill_switch import KillSwitchOrchestrator
from src.core.order_policy import (
OrderPolicyRejected,
classify_session_id,
get_session_info,
validate_order_policy,
)
@@ -224,23 +225,27 @@ def _compute_kr_dynamic_stop_loss_pct(
key="KR_ATR_STOP_MULTIPLIER_K",
default=2.0,
)
min_pct = float(
_resolve_market_setting(
market=market,
settings=settings,
key="KR_ATR_STOP_MIN_PCT",
default=-2.0,
)
)
max_pct = float(
_resolve_market_setting(
market=market,
settings=settings,
key="KR_ATR_STOP_MAX_PCT",
default=-7.0,
)
)
if max_pct > min_pct:
min_pct, max_pct = max_pct, min_pct
dynamic_stop_pct = -((k * atr_value) / entry_price) * 100.0
return float(max(max_pct, min(min_pct, dynamic_stop_pct)))
def _stoploss_cooldown_key(*, market: MarketInfo, stock_code: str) -> str:
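The clamp in `_compute_kr_dynamic_stop_loss_pct` is easy to sanity-check with a standalone sketch. Here the `_resolve_market_setting` lookups are replaced by the defaults shown in the diff (`k=2.0`, floor `-2.0%`, cap `-7.0%`), and the price/ATR values are hypothetical:

```python
def compute_stop_pct(
    entry_price: float,
    atr_value: float,
    k: float = 2.0,
    min_pct: float = -2.0,
    max_pct: float = -7.0,
) -> float:
    """Standalone mirror of the dynamic ATR stop-loss clamp above."""
    # Ensure min_pct is the shallower (larger) bound and max_pct the deeper one.
    if max_pct > min_pct:
        min_pct, max_pct = max_pct, min_pct
    # Stop distance = k ATRs below entry, expressed as a negative percent.
    dynamic_stop_pct = -((k * atr_value) / entry_price) * 100.0
    return float(max(max_pct, min(min_pct, dynamic_stop_pct)))


print(compute_stop_pct(70000, 140))    # tiny ATR: raw -0.4% clamped to the -2.0% floor
print(compute_stop_pct(70000, 14000))  # huge ATR: raw -40% clamped to the -7.0% cap
print(compute_stop_pct(70000, 1400))   # mid ATR: raw value near -4.0%, inside the band
```

The swap guard explains why the `float()` wrapping in the diff matters: with string settings from config, `max_pct > min_pct` would compare lexicographically and the clamp would silently misbehave.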
@@ -1200,6 +1205,7 @@ async def process_blackout_recovery_orders(
order_type=intent.order_type,
quantity=intent.quantity,
price=intent.price,
session_id=intent.session_id,
)
else:
result = await overseas_broker.send_overseas_order(
@@ -2083,6 +2089,7 @@ async def trading_cycle(
order_type=decision.action,
quantity=quantity,
price=order_price,
session_id=runtime_session_id,
)
else:
# For overseas orders, always use limit orders:
@@ -2417,6 +2424,7 @@ async def handle_domestic_pending_orders(
order_type="SELL",
quantity=psbl_qty,
price=new_price,
session_id=classify_session_id(MARKETS["KR"]),
)
sell_resubmit_counts[key] = sell_resubmit_counts.get(key, 0) + 1
try:
@@ -3292,6 +3300,7 @@ async def run_daily_session(
order_type=decision.action,
quantity=quantity,
price=order_price,
session_id=runtime_session_id,
)
else:
# KIS VTS only accepts limit orders; use 0.5% premium for BUY

View File

@@ -400,6 +400,15 @@ class TestFetchMarketRankings:
assert result[0]["stock_code"] == "015260"
assert result[0]["change_rate"] == 29.74
@pytest.mark.asyncio
async def test_volume_uses_nx_market_code_in_nxt_session(self, broker: KISBroker) -> None:
mock_resp = _make_ranking_mock([])
with patch("aiohttp.ClientSession.get", return_value=mock_resp) as mock_get:
await broker.fetch_market_rankings(ranking_type="volume", session_id="NXT_PRE")
params = mock_get.call_args[1].get("params", {})
assert params.get("FID_COND_MRKT_DIV_CODE") == "NX"
# ---------------------------------------------------------------------------
# KRX tick unit / round-down helpers (issue #157)
@@ -591,6 +600,60 @@ class TestSendOrderTickRounding:
body = order_call[1].get("json", {})
assert body["ORD_DVSN"] == "01"
@pytest.mark.asyncio
async def test_send_order_sets_exchange_field_from_session(self, broker: KISBroker) -> None:
mock_hash = AsyncMock()
mock_hash.status = 200
mock_hash.json = AsyncMock(return_value={"HASH": "h"})
mock_hash.__aenter__ = AsyncMock(return_value=mock_hash)
mock_hash.__aexit__ = AsyncMock(return_value=False)
mock_order = AsyncMock()
mock_order.status = 200
mock_order.json = AsyncMock(return_value={"rt_cd": "0"})
mock_order.__aenter__ = AsyncMock(return_value=mock_order)
mock_order.__aexit__ = AsyncMock(return_value=False)
with patch("aiohttp.ClientSession.post", side_effect=[mock_hash, mock_order]) as mock_post:
with patch.object(
broker,
"_load_dual_listing_metrics",
new=AsyncMock(return_value=(False, None, None, None, None)),
):
await broker.send_order("005930", "BUY", 1, price=50000, session_id="NXT_PRE")
order_call = mock_post.call_args_list[1]
body = order_call[1].get("json", {})
assert body["EXCG_ID_DVSN_CD"] == "NXT"
@pytest.mark.asyncio
async def test_send_order_prefers_nxt_when_dual_listing_spread_is_tighter(
self, broker: KISBroker
) -> None:
mock_hash = AsyncMock()
mock_hash.status = 200
mock_hash.json = AsyncMock(return_value={"HASH": "h"})
mock_hash.__aenter__ = AsyncMock(return_value=mock_hash)
mock_hash.__aexit__ = AsyncMock(return_value=False)
mock_order = AsyncMock()
mock_order.status = 200
mock_order.json = AsyncMock(return_value={"rt_cd": "0"})
mock_order.__aenter__ = AsyncMock(return_value=mock_order)
mock_order.__aexit__ = AsyncMock(return_value=False)
with patch("aiohttp.ClientSession.post", side_effect=[mock_hash, mock_order]) as mock_post:
with patch.object(
broker,
"_load_dual_listing_metrics",
new=AsyncMock(return_value=(True, 0.004, 0.002, 100000.0, 90000.0)),
):
await broker.send_order("005930", "BUY", 1, price=50000, session_id="KRX_REG")
order_call = mock_post.call_args_list[1]
body = order_call[1].get("json", {})
assert body["EXCG_ID_DVSN_CD"] == "NXT"
# ---------------------------------------------------------------------------
# TR_ID live/paper branching (issues #201, #202, #203)

View File

@@ -0,0 +1,40 @@
from __future__ import annotations
from src.broker.kr_exchange_router import KRExchangeRouter
def test_ranking_market_code_by_session() -> None:
router = KRExchangeRouter()
assert router.resolve_for_ranking("KRX_REG") == "J"
assert router.resolve_for_ranking("NXT_PRE") == "NX"
assert router.resolve_for_ranking("NXT_AFTER") == "NX"
def test_order_exchange_falls_back_to_session_default_on_missing_data() -> None:
router = KRExchangeRouter()
resolved = router.resolve_for_order(
stock_code="0001A0",
session_id="NXT_PRE",
is_dual_listed=True,
spread_krx=None,
spread_nxt=None,
liquidity_krx=None,
liquidity_nxt=None,
)
assert resolved.exchange_code == "NXT"
assert resolved.reason == "fallback_data_unavailable"
def test_order_exchange_uses_spread_preference_for_dual_listing() -> None:
router = KRExchangeRouter()
resolved = router.resolve_for_order(
stock_code="0001A0",
session_id="KRX_REG",
is_dual_listed=True,
spread_krx=0.005,
spread_nxt=0.003,
liquidity_krx=100000.0,
liquidity_nxt=90000.0,
)
assert resolved.exchange_code == "NXT"
assert resolved.reason == "dual_listing_spread"

View File

@@ -0,0 +1,160 @@
from __future__ import annotations
import os
import signal
import socket
import subprocess
from pathlib import Path
import pytest
REPO_ROOT = Path(__file__).resolve().parent.parent
RUN_OVERNIGHT = REPO_ROOT / "scripts" / "run_overnight.sh"
RUNTIME_MONITOR = REPO_ROOT / "scripts" / "runtime_verify_monitor.sh"
def _latest_runtime_log(log_dir: Path) -> str:
logs = sorted(log_dir.glob("runtime_verify_*.log"))
assert logs, "runtime monitor did not produce log output"
return logs[-1].read_text(encoding="utf-8")
def test_runtime_verify_monitor_detects_live_process_without_pid_files(tmp_path: Path) -> None:
log_dir = tmp_path / "overnight"
log_dir.mkdir(parents=True, exist_ok=True)
fake_live = subprocess.Popen(
["bash", "-lc", 'exec -a "src.main --mode=live" sleep 10'],
cwd=REPO_ROOT,
)
try:
env = os.environ.copy()
env.update(
{
"ROOT_DIR": str(REPO_ROOT),
"LOG_DIR": str(log_dir),
"INTERVAL_SEC": "1",
"MAX_HOURS": "1",
"MAX_LOOPS": "1",
"POLICY_TZ": "UTC",
}
)
completed = subprocess.run(
["bash", str(RUNTIME_MONITOR)],
cwd=REPO_ROOT,
env=env,
capture_output=True,
text=True,
check=False,
)
assert completed.returncode == 0, completed.stderr
log_text = _latest_runtime_log(log_dir)
assert "app_alive=1" in log_text
assert "[COVERAGE] LIVE_MODE=PASS source=process_liveness" in log_text
assert "[ANOMALY]" not in log_text
finally:
fake_live.terminate()
fake_live.wait(timeout=5)
def test_run_overnight_fails_fast_when_dashboard_port_in_use(tmp_path: Path) -> None:
log_dir = tmp_path / "overnight"
log_dir.mkdir(parents=True, exist_ok=True)
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.bind(("127.0.0.1", 0))
sock.listen(1)
port = sock.getsockname()[1]
try:
env = os.environ.copy()
env.update(
{
"LOG_DIR": str(log_dir),
"TMUX_AUTO": "false",
"DASHBOARD_PORT": str(port),
}
)
completed = subprocess.run(
["bash", str(RUN_OVERNIGHT)],
cwd=REPO_ROOT,
env=env,
capture_output=True,
text=True,
check=False,
)
assert completed.returncode != 0
output = f"{completed.stdout}\n{completed.stderr}"
assert "already in use" in output
finally:
sock.close()
def test_run_overnight_writes_live_pid_and_watchdog_pid(tmp_path: Path) -> None:
log_dir = tmp_path / "overnight"
log_dir.mkdir(parents=True, exist_ok=True)
env = os.environ.copy()
env.update(
{
"LOG_DIR": str(log_dir),
"TMUX_AUTO": "false",
"STARTUP_GRACE_SEC": "1",
"CHECK_INTERVAL": "2",
"APP_CMD_BIN": "sleep",
"APP_CMD_ARGS": "10",
}
)
completed = subprocess.run(
["bash", str(RUN_OVERNIGHT)],
cwd=REPO_ROOT,
env=env,
capture_output=True,
text=True,
check=False,
)
assert completed.returncode == 0, f"{completed.stdout}\n{completed.stderr}"
app_pid = int((log_dir / "app.pid").read_text(encoding="utf-8").strip())
watchdog_pid = int((log_dir / "watchdog.pid").read_text(encoding="utf-8").strip())
os.kill(app_pid, 0)
os.kill(watchdog_pid, 0)
for pid in (watchdog_pid, app_pid):
try:
os.kill(pid, signal.SIGTERM)
except ProcessLookupError:
pass
def test_run_overnight_fails_when_process_exits_before_grace_period(tmp_path: Path) -> None:
log_dir = tmp_path / "overnight"
log_dir.mkdir(parents=True, exist_ok=True)
env = os.environ.copy()
env.update(
{
"LOG_DIR": str(log_dir),
"TMUX_AUTO": "false",
"STARTUP_GRACE_SEC": "1",
"APP_CMD_BIN": "false",
}
)
completed = subprocess.run(
["bash", str(RUN_OVERNIGHT)],
cwd=REPO_ROOT,
env=env,
capture_output=True,
text=True,
check=False,
)
assert completed.returncode != 0
output = f"{completed.stdout}\n{completed.stderr}"
assert "startup failed:" in output
watchdog_pid_file = log_dir / "watchdog.pid"
if watchdog_pid_file.exists():
watchdog_pid = int(watchdog_pid_file.read_text(encoding="utf-8").strip())
with pytest.raises(ProcessLookupError):
os.kill(watchdog_pid, 0)

View File

@@ -103,6 +103,33 @@ class TestSmartVolatilityScanner:
assert candidates[0].stock_code == "005930"
assert candidates[0].signal == "oversold"
@pytest.mark.asyncio
async def test_scan_domestic_passes_session_id_to_rankings(
self, scanner: SmartVolatilityScanner, mock_broker: MagicMock
) -> None:
fluctuation_rows = [
{
"stock_code": "005930",
"name": "Samsung",
"price": 70000,
"volume": 5000000,
"change_rate": 1.0,
"volume_increase_rate": 120,
},
]
mock_broker.fetch_market_rankings.side_effect = [fluctuation_rows, fluctuation_rows]
mock_broker.get_daily_prices.return_value = [
{"open": 1, "high": 71000, "low": 69000, "close": 70000, "volume": 1000000},
{"open": 1, "high": 70000, "low": 68000, "close": 69000, "volume": 900000},
]
await scanner.scan(domestic_session_id="NXT_PRE")
first_call = mock_broker.fetch_market_rankings.call_args_list[0]
second_call = mock_broker.fetch_market_rankings.call_args_list[1]
assert first_call.kwargs["session_id"] == "NXT_PRE"
assert second_call.kwargs["session_id"] == "NXT_PRE"
@pytest.mark.asyncio
async def test_scan_domestic_finds_momentum_candidate(
self, scanner: SmartVolatilityScanner, mock_broker: MagicMock

View File

@@ -24,9 +24,24 @@ def test_validate_pr_body_text_detects_escaped_newline() -> None:
assert any("escaped newline" in err for err in errors)
def test_validate_pr_body_text_detects_escaped_newline_in_multiline_body() -> None:
module = _load_module()
text = "## Summary\n- first line\n- broken line with \\n literal"
errors = module.validate_pr_body_text(text)
assert any("escaped newline" in err for err in errors)
def test_validate_pr_body_text_allows_escaped_newline_in_code_blocks() -> None:
module = _load_module()
text = "\n".join(
[
"## Summary",
"- example uses `\\n` for explanation",
"```bash",
"printf 'line1\\nline2\\n'",
"```",
]
)
assert module.validate_pr_body_text(text) == []
@@ -63,12 +78,13 @@ def test_fetch_pr_body_reads_body_from_tea_api(monkeypatch) -> None:
module = _load_module()
def fake_run(cmd, check, capture_output, text):  # noqa: ANN001
assert cmd[0] == "/tmp/tea-bin"
assert check is True
assert capture_output is True
assert text is True
return SimpleNamespace(stdout=json.dumps({"body": "## Summary\n- item"}))
monkeypatch.setattr(module, "resolve_tea_binary", lambda: "/tmp/tea-bin")
monkeypatch.setattr(module.subprocess, "run", fake_run)
assert module.fetch_pr_body(391) == "## Summary\n- item"
@@ -79,6 +95,32 @@ def test_fetch_pr_body_rejects_non_string_body(monkeypatch) -> None:
def fake_run(cmd, check, capture_output, text):  # noqa: ANN001
return SimpleNamespace(stdout=json.dumps({"body": 123}))
monkeypatch.setattr(module, "resolve_tea_binary", lambda: "/tmp/tea-bin")
monkeypatch.setattr(module.subprocess, "run", fake_run)
with pytest.raises(RuntimeError):
module.fetch_pr_body(391)
def test_resolve_tea_binary_falls_back_to_home_bin(monkeypatch, tmp_path) -> None:
module = _load_module()
tea_home = tmp_path / "bin" / "tea"
tea_home.parent.mkdir(parents=True)
tea_home.write_text("#!/usr/bin/env bash\n", encoding="utf-8")
tea_home.chmod(0o755)
monkeypatch.setattr(module.shutil, "which", lambda _: None)
monkeypatch.setattr(module.Path, "home", lambda: tmp_path)
assert module.resolve_tea_binary() == str(tea_home)
def test_resolve_tea_binary_rejects_non_executable_home_bin(monkeypatch, tmp_path) -> None:
module = _load_module()
tea_home = tmp_path / "bin" / "tea"
tea_home.parent.mkdir(parents=True)
tea_home.write_text("not executable\n", encoding="utf-8")
tea_home.chmod(0o644)
monkeypatch.setattr(module.shutil, "which", lambda _: None)
monkeypatch.setattr(module.Path, "home", lambda: tmp_path)
with pytest.raises(RuntimeError):
module.resolve_tea_binary()

View File

@@ -137,3 +137,11 @@
- next_ticket: #377
- process_gate_checked: process_ticket=#306,#308 merged_to_feature_branch=yes
- risks_or_notes: retry the refresh step up to 3 times (initial + 2 retries) with exponential backoff on failure; stop immediately on success, and on exhaustion log the error and continue with the next step.
### 2026-03-04 | session=codex-issue409-start
- branch: feature/issue-409-kr-session-exchange-routing
- docs_checked: docs/workflow.md, docs/commands.md, docs/agent-constraints.md
- open_issues_reviewed: #409, #318, #325
- next_ticket: #409
- process_gate_checked: process_ticket=#306,#308 merged_to_feature_branch=yes
- risks_or_notes: after the #409 code changes are verified, restart the program and run 24h runtime monitoring; file a separate issue for any anomaly the monitor flags