Compare commits

40 Commits

7de2a818e8 Merge pull request 'fix: KR session-aware exchange routing (#409)' (#411) from feature/issue-409-kr-session-exchange-routing into main
All checks were successful
Gitea CI / test (push) Successful in 36s
Reviewed-on: #411
Reviewed-by: jihoson <kiparang7th@gmail.com>
2026-03-04 23:06:09 +09:00
agentson
779a8a678e ci: trigger checks after PR traceability update
All checks were successful
Gitea CI / test (push) Successful in 38s
Gitea CI / test (pull_request) Successful in 35s
2026-03-04 22:58:04 +09:00
agentson
9fd9c552f3 fix: add dual-listing spread routing and session propagation
Some checks failed
Gitea CI / test (push) Successful in 36s
Gitea CI / test (pull_request) Failing after 5s
2026-03-04 10:16:28 +09:00
agentson
c80f3daad7 fix: apply KR session-aware exchange routing for rankings and orders (#409)
Some checks failed
Gitea CI / test (push) Successful in 35s
Gitea CI / test (pull_request) Failing after 5s
2026-03-04 10:12:41 +09:00
agentson
100586e237 chore: add handover entry for issue #409 2026-03-04 10:10:45 +09:00
agentson
86733ef830 docs: add implementation plan for #409 exchange routing 2026-03-04 10:09:53 +09:00
agentson
296b89d95f docs: add design for #409 KR session exchange routing 2026-03-04 10:07:39 +09:00
815fb89585 Merge pull request 'feat: integrate 398-400-401 for main merge REQ-OPS-001 TASK-OPS-003 TEST-ACC-009' (#410) from feature/398-400-401 into main
All checks were successful
Gitea CI / test (push) Successful in 35s
Reviewed-on: #410
Reviewed-by: jihoson <kiparang7th@gmail.com>
2026-03-04 09:56:01 +09:00
agentson
79e51b8ece ci: retrigger PR #410 after governance traceability update
All checks were successful
Gitea CI / test (pull_request) Successful in 35s
Gitea CI / test (push) Successful in 36s
2026-03-04 09:53:53 +09:00
c29e5125bc Merge pull request 'docs: restore official KIS API documentation (under docs/)' (#408) from fix/restore-kis-api-docs into feature/398-400-401
Some checks failed
Gitea CI / test (push) Successful in 36s
Gitea CI / test (pull_request) Failing after 5s
Reviewed-on: #408
2026-03-04 09:13:14 +09:00
agentson
4afae017a2 ci: retrigger for PR #408 body update
All checks were successful
Gitea CI / test (pull_request) Successful in 35s
2026-03-04 09:12:24 +09:00
agentson
650d464da5 docs: restore KIS API reference document to docs/
Some checks failed
Gitea CI / test (pull_request) Failing after 6s
한국투자증권_오픈API_전체문서_20260221_030000.xlsx was accidentally
removed during branch merge. Moved from project root to docs/ where
commands.md and MEMORY.md expect it to be.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-04 09:08:13 +09:00
b1f5f3e888 Merge pull request 'feat: #401 parallel multi-market processing [REQ-OPS-003] [TASK-OPS-003] [TEST-OPS-003]' (#407) from fix/401 into feature/398-400-401
All checks were successful
Gitea CI / test (push) Successful in 36s
2026-03-04 03:13:16 +09:00
agentson
ec5d656fdf feat: process active markets in parallel with fail-fast semantics (#401)
All checks were successful
Gitea CI / test (pull_request) Successful in 36s
2026-03-04 03:12:23 +09:00
bacb0d2037 Merge pull request 'fix: #400 US session transition handling [REQ-OPS-002] [TASK-OPS-002] [TEST-OPS-002]' (#406) from fix/400 into feature/398-400-401
All checks were successful
Gitea CI / test (push) Successful in 35s
2026-03-04 03:09:36 +09:00
agentson
a67a9aa41f style: sort imports for main tests
All checks were successful
Gitea CI / test (pull_request) Successful in 35s
2026-03-04 03:08:47 +09:00
agentson
47a87fa2f6 chore: retrigger CI for PR #406
Some checks failed
Gitea CI / test (pull_request) Failing after 6s
2026-03-04 03:07:43 +09:00
agentson
2e3aed5664 fix: handle US session transitions and suppress US_DAY trading (#400)
Some checks failed
Gitea CI / test (pull_request) Failing after 5s
2026-03-04 03:07:14 +09:00
f245f55a32 Merge pull request 'fix: #398 KR rt_cd rejection handling [REQ-OPS-001] [TASK-OPS-001] [TEST-OPS-001]' (#405) from fix/398 into feature/398-400-401
All checks were successful
Gitea CI / test (push) Successful in 35s
2026-03-04 03:04:42 +09:00
agentson
90cd7c0504 chore: retrigger CI for PR #405
All checks were successful
Gitea CI / test (pull_request) Successful in 36s
2026-03-04 03:03:53 +09:00
agentson
b283880774 fix: handle KR order rejection via rt_cd check (#398)
Some checks failed
Gitea CI / test (pull_request) Failing after 5s
2026-03-04 03:02:26 +09:00
agentson
fa89499ccb docs: add implementation plan for #398 #400 #401
Some checks failed
Gitea CI / test (push) Has been cancelled
2026-03-04 03:00:45 +09:00
agentson
b227554e9e docs: add design for #398 #400 #401 feature integration workflow 2026-03-04 02:59:47 +09:00
c217e8cd72 Merge pull request 'fix: runtime anomaly handling for overnight startup and monitor (#396 #397)' (#404) from feature/issue-396-397-runtime-anomaly-fixes into main
All checks were successful
Gitea CI / test (push) Successful in 36s
Reviewed-on: #404
Reviewed-by: jihoson <kiparang7th@gmail.com>
2026-03-04 02:46:38 +09:00
agentson
bcbbf80d16 docs: clarify APP_CMD legacy and APP_CMD_ARGS contract
All checks were successful
Gitea CI / test (push) Successful in 36s
Gitea CI / test (pull_request) Successful in 36s
2026-03-04 02:43:32 +09:00
agentson
dc0775cbc6 fix: add safer custom command path for run_overnight
All checks were successful
Gitea CI / test (push) Successful in 36s
Gitea CI / test (pull_request) Successful in 35s
2026-03-04 02:36:24 +09:00
agentson
c412412f7b fix: address second-round review findings on PR #404
All checks were successful
Gitea CI / test (push) Successful in 36s
Gitea CI / test (pull_request) Successful in 36s
2026-03-04 02:29:54 +09:00
agentson
3cde8779fa fix: address PR #404 review feedback
All checks were successful
Gitea CI / test (push) Successful in 36s
Gitea CI / test (pull_request) Successful in 35s
2026-03-04 02:23:43 +09:00
agentson
370ee8cc85 fix: make overnight startup portable in CI environments
All checks were successful
Gitea CI / test (push) Successful in 34s
Gitea CI / test (pull_request) Successful in 34s
2026-03-04 02:07:52 +09:00
agentson
528e17a29c fix: stabilize overnight startup and monitor live fallback (#396 #397)
Some checks failed
Gitea CI / test (push) Failing after 37s
Gitea CI / test (pull_request) Failing after 38s
2026-03-04 02:04:13 +09:00
d2f3fe9108 Merge pull request 'docs: consolidate CLAUDE entrypoint into agents guide (#402)' (#403) from temp/agents-md-migration-20260303 into main
All checks were successful
Gitea CI / test (push) Successful in 33s
Reviewed-on: #403
Reviewed-by: jihoson <kiparang7th@gmail.com>
2026-03-04 01:52:46 +09:00
agentson
12bcccab42 ci: rerun PR checks after traceability update
All checks were successful
Gitea CI / test (pull_request) Successful in 32s
2026-03-04 01:32:42 +09:00
agentson
ef16cf8800 docs: consolidate agent entrypoint into agents.md (#402)
Some checks failed
Gitea CI / test (pull_request) Failing after 5s
2026-03-04 01:31:00 +09:00
3c58c5d110 Merge pull request 'merge: feature/v3-session-policy-stream into main' (#399) from feature/main-merge-v3-session-policy-stream-20260303 into main
Some checks failed
Gitea CI / test (push) Failing after 5s
Reviewed-on: #399
Reviewed-by: jihoson <kiparang7th@gmail.com>
2026-03-04 00:47:20 +09:00
agentson
8ecd3ac55f chore: retrigger CI after PR governance body update
All checks were successful
Gitea CI / test (push) Successful in 33s
Gitea CI / test (pull_request) Successful in 32s
2026-03-04 00:36:57 +09:00
agentson
79ad108e2f Merge origin/feature/v3-session-policy-stream into main
Some checks failed
Gitea CI / test (pull_request) Failing after 5s
Gitea CI / test (push) Failing after 5s
2026-03-04 00:30:45 +09:00
d9cf056df8 Merge pull request 'process: add PR body post-check gate and tooling (#392)' (#393) from feature/issue-392-pr-body-postcheck into feature/v3-session-policy-stream
All checks were successful
Gitea CI / test (push) Successful in 33s
Reviewed-on: #393
2026-03-02 18:34:59 +09:00
agentson
bd9286a39f fix: require executable tea fallback binary (#392)
All checks were successful
Gitea CI / test (push) Successful in 32s
Gitea CI / test (pull_request) Successful in 33s
2026-03-02 18:32:07 +09:00
agentson
f4f8827353 fix: harden PR body validator for mixed escaped-newline and tea path (#392)
All checks were successful
Gitea CI / test (push) Successful in 32s
Gitea CI / test (pull_request) Successful in 33s
2026-03-02 18:27:59 +09:00
agentson
7d24f19cc4 process: add mandatory PR body post-check step (#392)
All checks were successful
Gitea CI / test (push) Successful in 33s
Gitea CI / test (pull_request) Successful in 33s
2026-03-02 18:20:17 +09:00
23 changed files with 1937 additions and 264 deletions

CLAUDE.md

@@ -1,187 +1,9 @@
# Agent Entry Point
This file moved to [agents.md](./agents.md).
Follow `agents.md` as the single source of truth for Claude/Codex session behavior and project workflow gates.
Core process references:
- [Workflow Guide](docs/workflow.md)
- [Command Reference](docs/commands.md)

# The Ouroboros
AI-powered trading agent for global stock markets with self-evolution capabilities.
## Quick Start
```bash
# Setup
pip install -e ".[dev]"
cp .env.example .env
# Edit .env with your KIS and Gemini API credentials
# Test
pytest -v --cov=src
# Run (paper trading)
python -m src.main --mode=paper
# Run with dashboard
python -m src.main --mode=paper --dashboard
```
## Telegram Notifications (Optional)
Get real-time alerts for trades, circuit breakers, and system events via Telegram.
### Quick Setup
1. **Create bot**: Message [@BotFather](https://t.me/BotFather) on Telegram → `/newbot`
2. **Get chat ID**: Message [@userinfobot](https://t.me/userinfobot) → `/start`
3. **Configure**: Add to `.env`:
```bash
TELEGRAM_BOT_TOKEN=1234567890:ABCdefGHIjklMNOpqrsTUVwxyz
TELEGRAM_CHAT_ID=123456789
TELEGRAM_ENABLED=true
```
4. **Test**: Start bot conversation (`/start`), then run the agent
**Full documentation**: [src/notifications/README.md](src/notifications/README.md)
### What You'll Get
- 🟢 Trade execution alerts (BUY/SELL with confidence)
- 🚨 Circuit breaker trips (automatic trading halt)
- ⚠️ Fat-finger rejections (oversized orders blocked)
- Market open/close notifications
- 📝 System startup/shutdown status
### Interactive Commands
With `TELEGRAM_COMMANDS_ENABLED=true` (default), the bot supports 9 bidirectional commands: `/help`, `/status`, `/positions`, `/report`, `/scenarios`, `/review`, `/dashboard`, `/stop`, `/resume`.
**Fail-safe**: Notifications never crash the trading system. Missing credentials or API errors are logged but trading continues normally.
## Smart Volatility Scanner (Optional)
Python-first filtering pipeline that reduces Gemini API calls by pre-filtering stocks using technical indicators.
### How It Works
1. **Fetch Rankings** — KIS API volume surge rankings (top 30 stocks)
2. **Python Filter** — RSI + volume ratio calculations (no AI)
- Volume > 200% of previous day
- RSI(14) < 30 (oversold) OR RSI(14) > 70 (momentum)
3. **AI Judgment** — Only qualified candidates (1-3 stocks) sent to Gemini
### Configuration
Add to `.env` (optional, has sensible defaults):
```bash
RSI_OVERSOLD_THRESHOLD=30 # 0-50, default 30
RSI_MOMENTUM_THRESHOLD=70 # 50-100, default 70
VOL_MULTIPLIER=2.0 # Volume threshold (2.0 = 200%)
SCANNER_TOP_N=3 # Max candidates per scan
```
### Benefits
- **Reduces API costs** — Process 1-3 stocks instead of 20-30
- **Python-based filtering** — Fast technical analysis before AI
- **Evolution-ready** — Selection context logged for strategy optimization
- **Fault-tolerant** — Falls back to static watchlist on API failure
### Trading Mode Integration
Smart Scanner runs in both `TRADE_MODE=realtime` and `daily` paths. On API failure, domestic stocks fall back to a static watchlist; overseas stocks fall back to a dynamic universe (active positions, recent holdings).
## Documentation
- **[Documentation Hub](docs/README.md)** — Top-level doc routing and reading order
- **[Workflow Guide](docs/workflow.md)** — Git workflow policy and agent-based development
- **[Command Reference](docs/commands.md)** — Common failures, build commands, troubleshooting
- **[Architecture](docs/architecture.md)** — System design, components, data flow
- **[Context Tree](docs/context-tree.md)** — L1-L7 hierarchical memory system
- **[Testing](docs/testing.md)** — Test structure, coverage requirements, writing tests
- **[Agent Policies](docs/agents.md)** — Prime directives, constraints, prohibited actions
- **[Requirements Log](docs/requirements-log.md)** — User requirements and feedback tracking
- **[Live Trading Checklist](docs/live-trading-checklist.md)** — paper→live transition checklist
## Core Principles
1. **Safety First** — Risk manager is READ-ONLY and enforces circuit breakers
2. **Test Everything** — 80% coverage minimum, all changes require tests
3. **Issue-Driven Development** — All work goes through Gitea issues → feature branches → PRs
4. **Agent Specialization** — Use dedicated agents for design, coding, testing, docs, review
## Requirements Management
User requirements and feedback are tracked in [docs/requirements-log.md](docs/requirements-log.md):
- New requirements are added chronologically with dates
- Code changes should reference related requirements
- Helps maintain project evolution aligned with user needs
- Preserves context across conversations and development cycles
## Project Structure
```
src/
├── analysis/ # Technical analysis (RSI, volatility, smart scanner)
├── backup/ # Disaster recovery (scheduler, cloud storage, health)
├── brain/ # Gemini AI decision engine (prompt optimizer, context selector)
├── broker/ # KIS API client (domestic + overseas)
├── context/ # L1-L7 hierarchical memory system
├── core/ # Risk manager (READ-ONLY)
├── dashboard/ # FastAPI read-only monitoring (10 API endpoints)
├── data/ # External data integration (news, market data, calendar)
├── evolution/ # Self-improvement (optimizer, daily review, scorecard)
├── logging/ # Decision logger (audit trail)
├── markets/ # Market schedules and timezone handling
├── notifications/ # Telegram alerts + bidirectional commands (9 commands)
├── strategy/ # Pre-market planner, scenario engine, playbook store
├── db.py # SQLite trade logging
├── main.py # Trading loop orchestrator
└── config.py # Settings (from .env)
tests/ # 998 tests across 41 files
docs/ # Extended documentation
```
## Key Commands
```bash
pytest -v --cov=src # Run tests with coverage
ruff check src/ tests/ # Lint
mypy src/ --strict # Type check
python -m src.main --mode=paper # Paper trading
python -m src.main --mode=paper --dashboard # With dashboard
python -m src.main --mode=live # Live trading (⚠️ real money)
# Gitea workflow (requires tea CLI)
YES="" ~/bin/tea issues create --repo jihoson/The-Ouroboros --title "..." --description "..."
YES="" ~/bin/tea pulls create --head feature-branch --base main --title "..." --description "..."
```
## Markets Supported
- 🇰🇷 Korea (KRX)
- 🇺🇸 United States (NASDAQ, NYSE, AMEX)
- 🇯🇵 Japan (TSE)
- 🇭🇰 Hong Kong (SEHK)
- 🇨🇳 China (Shanghai, Shenzhen)
- 🇻🇳 Vietnam (Hanoi, HCM)
Markets auto-detected based on timezone and enabled in `ENABLED_MARKETS` env variable.
## Critical Constraints
⚠️ **Non-Negotiable Rules** (see [docs/agents.md](docs/agents.md)):
- `src/core/risk_manager.py` is **READ-ONLY** — changes require human approval
- Circuit breaker at -3.0% P&L — may only be made **stricter**
- Fat-finger protection: max 30% of cash per order — always enforced
- Confidence thresholds (per market_outlook, may not be lowered): BEARISH ≥ 90, NEUTRAL/default ≥ 80, BULLISH ≥ 75
- All code changes → corresponding tests → coverage ≥ 80%
## Contributing
See [docs/workflow.md](docs/workflow.md) for the complete development process.
**TL;DR:**
1. Create issue in Gitea
2. Create feature branch: `feature/issue-N-description`
3. Implement with tests
4. Open PR
5. Merge after review

agents.md (new file)

@@ -0,0 +1,199 @@
# The Ouroboros
AI-powered trading agent for global stock markets with self-evolution capabilities.
## Agent Workflow Gate (Claude/Codex)
Before any implementation, both Claude and Codex must align on the same project process:
1. Read `docs/workflow.md` first (branch policy, issue/PR flow, merge rules).
2. Read `docs/commands.md` for required verification commands and failure handling.
3. Read `docs/agent-constraints.md` and `docs/agents.md` for safety constraints.
4. Check `workflow/session-handover.md` and append a session entry when starting or handing off work.
5. Confirm current branch is based on `main` or an explicitly designated temporary/base branch before editing.
If any instruction conflicts, default to the safer path and document the reason in the handover log.
## Quick Start
```bash
# Setup
pip install -e ".[dev]"
cp .env.example .env
# Edit .env with your KIS and Gemini API credentials
# Test
pytest -v --cov=src
# Run (paper trading)
python -m src.main --mode=paper
# Run with dashboard
python -m src.main --mode=paper --dashboard
```
## Telegram Notifications (Optional)
Get real-time alerts for trades, circuit breakers, and system events via Telegram.
### Quick Setup
1. **Create bot**: Message [@BotFather](https://t.me/BotFather) on Telegram → `/newbot`
2. **Get chat ID**: Message [@userinfobot](https://t.me/userinfobot) → `/start`
3. **Configure**: Add to `.env`:
```bash
TELEGRAM_BOT_TOKEN=1234567890:ABCdefGHIjklMNOpqrsTUVwxyz
TELEGRAM_CHAT_ID=123456789
TELEGRAM_ENABLED=true
```
4. **Test**: Start bot conversation (`/start`), then run the agent
**Full documentation**: [src/notifications/README.md](src/notifications/README.md)
### What You'll Get
- 🟢 Trade execution alerts (BUY/SELL with confidence)
- 🚨 Circuit breaker trips (automatic trading halt)
- ⚠️ Fat-finger rejections (oversized orders blocked)
- Market open/close notifications
- 📝 System startup/shutdown status
### Interactive Commands
With `TELEGRAM_COMMANDS_ENABLED=true` (default), the bot supports 9 bidirectional commands: `/help`, `/status`, `/positions`, `/report`, `/scenarios`, `/review`, `/dashboard`, `/stop`, `/resume`.
**Fail-safe**: Notifications never crash the trading system. Missing credentials or API errors are logged but trading continues normally.
## Smart Volatility Scanner (Optional)
Python-first filtering pipeline that reduces Gemini API calls by pre-filtering stocks using technical indicators.
### How It Works
1. **Fetch Rankings** — KIS API volume surge rankings (top 30 stocks)
2. **Python Filter** — RSI + volume ratio calculations (no AI)
- Volume > 200% of previous day
- RSI(14) < 30 (oversold) OR RSI(14) > 70 (momentum)
3. **AI Judgment** — Only qualified candidates (1-3 stocks) sent to Gemini
### Configuration
Add to `.env` (optional, has sensible defaults):
```bash
RSI_OVERSOLD_THRESHOLD=30 # 0-50, default 30
RSI_MOMENTUM_THRESHOLD=70 # 50-100, default 70
VOL_MULTIPLIER=2.0 # Volume threshold (2.0 = 200%)
SCANNER_TOP_N=3 # Max candidates per scan
```
### Benefits
- **Reduces API costs** — Process 1-3 stocks instead of 20-30
- **Python-based filtering** — Fast technical analysis before AI
- **Evolution-ready** — Selection context logged for strategy optimization
- **Fault-tolerant** — Falls back to static watchlist on API failure
### Trading Mode Integration
Smart Scanner runs in both `TRADE_MODE=realtime` and `daily` paths. On API failure, domestic stocks fall back to a static watchlist; overseas stocks fall back to a dynamic universe (active positions, recent holdings).
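As an aside (not part of the diffed file), the two-stage filter described above — volume surge plus RSI extremes — can be sketched in plain Python. This is an illustrative sketch only; `rsi`, `prefilter`, and the candidate field names are assumptions, not the project's actual API:

```python
# Illustrative sketch of the Smart Scanner pre-filter; names are assumptions.
def rsi(closes, period=14):
    """Wilder-smoothed RSI over a list of closing prices."""
    gains, losses = [], []
    for prev, cur in zip(closes, closes[1:]):
        change = cur - prev
        gains.append(max(change, 0.0))
        losses.append(max(-change, 0.0))
    avg_gain = sum(gains[:period]) / period
    avg_loss = sum(losses[:period]) / period
    for g, l in zip(gains[period:], losses[period:]):
        avg_gain = (avg_gain * (period - 1) + g) / period
        avg_loss = (avg_loss * (period - 1) + l) / period
    if avg_loss == 0:
        return 100.0
    rs = avg_gain / avg_loss
    return 100.0 - 100.0 / (1.0 + rs)

def prefilter(candidates, oversold=30, momentum=70, vol_multiplier=2.0, top_n=3):
    """Keep stocks whose volume surged and whose RSI is extreme; cap at top_n."""
    qualified = []
    for c in candidates:
        vol_ratio = c["volume"] / c["prev_volume"]
        r = rsi(c["closes"])
        if vol_ratio > vol_multiplier and (r < oversold or r > momentum):
            qualified.append(c["symbol"])
    return qualified[:top_n]
```

Only the symbols that survive the pre-filter would then be sent to Gemini, which is where the API-cost saving comes from.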
## Documentation
- **[Documentation Hub](docs/README.md)** — Top-level doc routing and reading order
- **[Workflow Guide](docs/workflow.md)** — Git workflow policy and agent-based development
- **[Command Reference](docs/commands.md)** — Common failures, build commands, troubleshooting
- **[Architecture](docs/architecture.md)** — System design, components, data flow
- **[Context Tree](docs/context-tree.md)** — L1-L7 hierarchical memory system
- **[Testing](docs/testing.md)** — Test structure, coverage requirements, writing tests
- **[Agent Policies](docs/agents.md)** — Prime directives, constraints, prohibited actions
- **[Requirements Log](docs/requirements-log.md)** — User requirements and feedback tracking
- **[Live Trading Checklist](docs/live-trading-checklist.md)** — paper→live transition checklist
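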
## Core Principles
1. **Safety First** — Risk manager is READ-ONLY and enforces circuit breakers
2. **Test Everything** — 80% coverage minimum, all changes require tests
3. **Issue-Driven Development** — All work goes through Gitea issues → feature branches → PRs
4. **Agent Specialization** — Use dedicated agents for design, coding, testing, docs, review
## Requirements Management
User requirements and feedback are tracked in [docs/requirements-log.md](docs/requirements-log.md):
- New requirements are added chronologically with dates
- Code changes should reference related requirements
- Helps maintain project evolution aligned with user needs
- Preserves context across conversations and development cycles
## Project Structure
```
src/
├── analysis/ # Technical analysis (RSI, volatility, smart scanner)
├── backup/ # Disaster recovery (scheduler, cloud storage, health)
├── brain/ # Gemini AI decision engine (prompt optimizer, context selector)
├── broker/ # KIS API client (domestic + overseas)
├── context/ # L1-L7 hierarchical memory system
├── core/ # Risk manager (READ-ONLY)
├── dashboard/ # FastAPI read-only monitoring (10 API endpoints)
├── data/ # External data integration (news, market data, calendar)
├── evolution/ # Self-improvement (optimizer, daily review, scorecard)
├── logging/ # Decision logger (audit trail)
├── markets/ # Market schedules and timezone handling
├── notifications/ # Telegram alerts + bidirectional commands (9 commands)
├── strategy/ # Pre-market planner, scenario engine, playbook store
├── db.py # SQLite trade logging
├── main.py # Trading loop orchestrator
└── config.py # Settings (from .env)
tests/ # 998 tests across 41 files
docs/ # Extended documentation
```
## Key Commands
```bash
pytest -v --cov=src # Run tests with coverage
ruff check src/ tests/ # Lint
mypy src/ --strict # Type check
python -m src.main --mode=paper # Paper trading
python -m src.main --mode=paper --dashboard # With dashboard
python -m src.main --mode=live # Live trading (⚠️ real money)
# Gitea workflow (requires tea CLI)
YES="" ~/bin/tea issues create --repo jihoson/The-Ouroboros --title "..." --description "..."
YES="" ~/bin/tea pulls create --head feature-branch --base main --title "..." --description "..."
```
## Markets Supported
- 🇰🇷 Korea (KRX)
- 🇺🇸 United States (NASDAQ, NYSE, AMEX)
- 🇯🇵 Japan (TSE)
- 🇭🇰 Hong Kong (SEHK)
- 🇨🇳 China (Shanghai, Shenzhen)
- 🇻🇳 Vietnam (Hanoi, HCM)
Markets auto-detected based on timezone and enabled in `ENABLED_MARKETS` env variable.
## Critical Constraints
⚠️ **Non-Negotiable Rules** (see [docs/agents.md](docs/agents.md)):
- `src/core/risk_manager.py` is **READ-ONLY** — changes require human approval
- Circuit breaker at -3.0% P&L — may only be made **stricter**
- Fat-finger protection: max 30% of cash per order — always enforced
- Confidence thresholds (per market_outlook, may not be lowered): BEARISH ≥ 90, NEUTRAL/default ≥ 80, BULLISH ≥ 75
- All code changes → corresponding tests → coverage ≥ 80%
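The confidence floors listed above amount to a simple lookup (an illustrative sketch; the constant and function names are assumptions):

```python
# Floors may only be raised, never lowered; NEUTRAL is the default.
CONFIDENCE_FLOOR = {"BEARISH": 90, "NEUTRAL": 80, "BULLISH": 75}

def meets_confidence(outlook: str, confidence: float) -> bool:
    """True when a decision's confidence clears the floor for the given outlook."""
    return confidence >= CONFIDENCE_FLOOR.get(outlook, 80)
```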
## Contributing
See [docs/workflow.md](docs/workflow.md) for the complete development process.
**TL;DR:**
1. Create issue in Gitea
2. Create feature branch: `feature/issue-N-description`
3. Implement with tests
4. Open PR
5. Merge after review


@@ -0,0 +1,281 @@
# 398/400/401 Integration Implementation Plan
> **For Claude:** REQUIRED SUB-SKILL: Use superpowers:executing-plans to implement this plan task-by-task.
**Goal:** Implement #398, #400, #401 as three isolated PRs targeting `feature/398-400-401`, merge only when CI passes and self-review has zero minor issues, then run and monitor overnight script without stopping the process.
**Architecture:** Create one integration base branch from `origin/main`, branch per issue, and ship in strict sequence (`398 -> 400 -> 401`) to keep diffs isolated. Use TDD per issue (fail-first tests, minimal fix, regression checks), then perform PR self-review and CI gate before merge. After all merges, run overnight in background and monitor logs/process health while leaving runtime active.
**Tech Stack:** Python 3, pytest, asyncio runtime loop, Git/Gitea (`tea`), shell scripts (`scripts/run_overnight.sh`).
---
### Task 1: Prepare Integration Branch Topology
**Files:**
- Modify: `.git` refs only (branch operations)
**Step 1: Sync base branch**
Run: `git fetch origin && git checkout main && git pull --ff-only origin main`
Expected: local `main` equals `origin/main`
**Step 2: Create integration branch**
Run: `git checkout -b feature/398-400-401`
Expected: current branch is `feature/398-400-401`
**Step 3: Create issue branches from integration branch**
Run: `git checkout -b fix/398 && git checkout feature/398-400-401 && git checkout -b fix/400 && git checkout feature/398-400-401 && git checkout -b fix/401 && git checkout feature/398-400-401`
Expected: three issue branches exist and point to same base commit
**Step 4: Push all branches**
Run: `git push -u origin feature/398-400-401 fix/398 fix/400 fix/401`
Expected: remote tracking set for all four branches
**Step 5: Commit checkpoint**
Run:
```bash
git status --short
```
Expected: clean workspace before issue implementation
### Task 2: Implement #398 with TDD (KR rt_cd failure handling)
**Files:**
- Modify: `src/main.py`
- Test: `tests/test_main.py`
**Step 1: Write failing test**
Add test in `tests/test_main.py` verifying KR order returns `rt_cd != '0'` does not trigger success side effects (no BUY notify, no trade log success path).
**Step 2: Run test to verify failure**
Run: `pytest tests/test_main.py -k "kr and rt_cd" -v`
Expected: FAIL showing current code incorrectly treats KR order as success
**Step 3: Write minimal implementation**
In KR order branch of `src/main.py`, immediately after `send_order`, add `rt_cd` acceptance check identical to overseas branch behavior; set `order_succeeded = False` and warning log when rejected.
**Step 4: Run targeted tests**
Run: `pytest tests/test_main.py -k "kr and rt_cd" -v`
Expected: PASS
**Step 5: Run safety regression**
Run: `pytest tests/test_main.py tests/test_order_policy.py -q`
Expected: PASS
**Step 6: Commit**
Run:
```bash
git add tests/test_main.py src/main.py
git commit -m "fix: handle KR order rejection via rt_cd check (#398)"
```
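A minimal sketch of the acceptance check Step 3 describes, assuming the KIS convention that `rt_cd == '0'` marks success (the helper name and response shape are illustrative, not the actual code):

```python
import logging

logger = logging.getLogger(__name__)

def order_accepted(response: dict) -> bool:
    """KIS responses signal success with rt_cd == '0'; anything else is a rejection."""
    rt_cd = response.get("rt_cd")
    if rt_cd == "0":
        return True
    # Rejection: log a warning and let the caller skip success side effects
    # (no BUY notification, no success-path trade log entry).
    logger.warning("KR order rejected: rt_cd=%s msg=%s", rt_cd, response.get("msg1"))
    return False
```

The KR order branch would then gate `order_succeeded` on this check, mirroring the existing overseas branch behavior.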
### Task 3: Open PR for #398, Self-review, CI gate, Merge
**Files:**
- Modify: remote PR metadata/comments only
**Step 1: Push branch**
Run: `git checkout fix/398 && git push -u origin fix/398`
**Step 2: Create PR targeting integration branch**
Run: `tea pr create --base feature/398-400-401 --head fix/398 --title "fix: #398 KR rt_cd rejection handling" --description "Implements issue #398 with tests."`
Expected: PR URL returned
**Step 3: Add self-review comment (severity rubric)**
Run: `tea pr comment <PR_398> --message "Self-review: Critical 0 / Major 0 / Minor 0. Merge allowed when CI passes."`
**Step 4: Wait for CI success**
Run: `tea pr checks <PR_398>` (poll until all success)
Expected: all checks success
**Step 5: Merge only when gate passes**
Run: `tea pr merge <PR_398> --delete-branch=false`
Expected: merged into `feature/398-400-401`
### Task 4: Implement #400 with TDD (US session transition correctness)
**Files:**
- Modify: `src/main.py`, `src/core/order_policy.py`, `src/markets/schedule.py`
- Test: `tests/test_main.py`, `tests/test_market_schedule.py`, `tests/test_order_policy.py`
**Step 1: Write failing tests**
Add tests for:
- session transition event handling (`US_DAY -> US_REG`) emits open event and forces rescan
- `US_DAY` treated non-tradable for playbook/trading actions
**Step 2: Run failing tests**
Run: `pytest tests/test_main.py tests/test_market_schedule.py tests/test_order_policy.py -k "US_DAY or US_REG or session" -v`
Expected: FAIL at current behavior
**Step 3: Minimal implementation**
- Track market state by session identifier (not bool only)
- Force rescan/playbook refresh on US_REG entry
- Exclude/suppress US_DAY for trading/playbook generation path
**Step 4: Re-run targeted tests**
Run: same command as Step 2
Expected: PASS
**Step 5: Regression pass**
Run: `pytest tests/test_main.py tests/test_market_schedule.py tests/test_order_policy.py tests/test_pre_market_planner.py -q`
Expected: PASS
**Step 6: Commit**
Run:
```bash
git add src/main.py src/core/order_policy.py src/markets/schedule.py tests/test_main.py tests/test_market_schedule.py tests/test_order_policy.py
git commit -m "fix: handle US session transitions and suppress US_DAY trading (#400)"
```
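The session-identifier tracking from Step 3 — state keyed by session, not a bare open/closed flag — could look roughly like this (a sketch; class, method, and constant names are assumptions):

```python
TRADABLE_SESSIONS = {"US_REG"}  # US_DAY is observable but suppressed for trading

class SessionTracker:
    """Remembers the last session per market and reports transitions."""

    def __init__(self):
        self._last = {}  # market -> last seen session identifier (or None)

    def update(self, market, session):
        """Return True when the session changed (e.g. US_DAY -> US_REG),
        signalling that a rescan/playbook refresh should be forced."""
        changed = self._last.get(market) != session
        self._last[market] = session
        return changed

    @staticmethod
    def tradable(session):
        return session in TRADABLE_SESSIONS
```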
### Task 5: Open PR for #400, Self-review, CI gate, Merge
**Files:**
- Modify: remote PR metadata/comments only
**Step 1: Push branch**
Run: `git checkout fix/400 && git push -u origin fix/400`
**Step 2: Create PR**
Run: `tea pr create --base feature/398-400-401 --head fix/400 --title "fix: #400 US session transition handling" --description "Implements issue #400 with tests."`
**Step 3: Add self-review comment**
Run: `tea pr comment <PR_400> --message "Self-review: Critical 0 / Major 0 / Minor 0. Merge allowed when CI passes."`
**Step 4: Wait for CI success**
Run: `tea pr checks <PR_400>`
Expected: all checks success
**Step 5: Merge**
Run: `tea pr merge <PR_400> --delete-branch=false`
### Task 6: Implement #401 with TDD (multi-market parallel processing)
**Files:**
- Modify: `src/main.py`
- Test: `tests/test_main.py`
**Step 1: Write failing tests**
Add tests verifying:
- open markets are processed via parallel task dispatch
- circuit breaker behavior still triggers global shutdown semantics
- shared state updates remain deterministic under parallel market execution
**Step 2: Run failing tests**
Run: `pytest tests/test_main.py -k "parallel or market" -v`
Expected: FAIL before implementation
**Step 3: Minimal implementation**
Refactor sequential market loop into market-level async tasks (`asyncio.gather`/task group) while preserving stock-level processing order per market and existing failure semantics.
**Step 4: Re-run targeted tests**
Run: same command as Step 2
Expected: PASS
**Step 5: Regression pass**
Run: `pytest tests/test_main.py tests/test_runtime_overnight_scripts.py -q`
Expected: PASS
**Step 6: Commit**
Run:
```bash
git add src/main.py tests/test_main.py
git commit -m "feat: process active markets in parallel with preserved shutdown semantics (#401)"
```
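The market-level refactor in Step 3 can be sketched with `asyncio.gather`, whose default behavior (no `return_exceptions`) propagates the first exception and so preserves fail-fast shutdown semantics (function names here are illustrative):

```python
import asyncio

async def process_market(market):
    """Placeholder for per-market processing; stock-level order within a
    market stays sequential inside this coroutine."""
    await asyncio.sleep(0)  # yield control; real per-market work goes here
    return market

async def process_active_markets(markets):
    """Run open markets concurrently. gather() without return_exceptions
    raises the first failure, cancelling the overall await (fail-fast)."""
    return await asyncio.gather(*(process_market(m) for m in markets))
```

Shared state (positions, circuit-breaker flags) touched by multiple market tasks would still need explicit protection, which is what the Step 1 determinism tests are for.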
### Task 7: Open PR for #401, Self-review, CI gate, Merge
**Files:**
- Modify: remote PR metadata/comments only
**Step 1: Push branch**
Run: `git checkout fix/401 && git push -u origin fix/401`
**Step 2: Create PR**
Run: `tea pr create --base feature/398-400-401 --head fix/401 --title "feat: #401 parallel multi-market processing" --description "Implements issue #401 with tests."`
**Step 3: Add self-review comment**
Run: `tea pr comment <PR_401> --message "Self-review: Critical 0 / Major 0 / Minor 0. Merge allowed when CI passes."`
**Step 4: Wait for CI success**
Run: `tea pr checks <PR_401>`
Expected: all checks success
**Step 5: Merge**
Run: `tea pr merge <PR_401> --delete-branch=false`
### Task 8: Final Branch Validation + Overnight Runtime Monitoring
**Files:**
- Execute: `scripts/run_overnight.sh`
- Observe: runtime log file (e.g., `logs/overnight.log`)
**Step 1: Checkout integrated branch and sync**
Run: `git checkout feature/398-400-401 && git pull --ff-only origin feature/398-400-401`
Expected: branch contains merged PRs
**Step 2: Start overnight in background (non-blocking)**
Run:
```bash
nohup ./scripts/run_overnight.sh > /tmp/ouroboros_overnight.log 2>&1 &
echo $! > /tmp/ouroboros_overnight.pid
```
Expected: PID written and process running
**Step 3: Verify process alive**
Run: `ps -p $(cat /tmp/ouroboros_overnight.pid) -o pid,ppid,stat,etime,cmd`
Expected: process present
**Step 4: Monitor startup logs**
Run: `tail -n 120 /tmp/ouroboros_overnight.log`
Expected: startup complete and runtime loop active without fatal errors
**Step 5: Ongoing monitor without shutdown**
Run: `tail -f /tmp/ouroboros_overnight.log` (sample monitoring window, then detach)
Expected: continued activity; do not kill process
**Step 6: Final status note**
Record PID, log path, and “process left running” status.


@@ -0,0 +1,62 @@
# 398/400/401 Integrated Processing Design
## Overview
This document defines the operational design for handling issues #398, #400, and #401 sequentially
on an integration branch based on `origin/main`, validating each PR with self-review and a CI gate before merging.
After the final merge, the overnight script runs on the integration branch, and the program keeps running even after monitoring ends.
## Goals
- Integration branch: `feature/398-400-401`
- Work branches: `fix/398`, `fix/400`, `fix/401`
- PR base: `feature/398-400-401` for all PRs
- Merge condition: all CI checks pass + zero self-review findings, minor included
- Final check: run and monitor overnight on the integration branch; keep the process running
## Architecture
- Create `feature/398-400-401` from `origin/main`
- Implement each issue on its own branch (`fix/398`, `fix/400`, `fix/401`)
- Create/verify/merge PRs sequentially (`398 -> 400 -> 401`)
- Leave a self-review comment on each PR; if minor-or-higher findings appear, fix and re-verify
- After all three PRs merge, run overnight in the background on the integration branch and monitor logs
- Do not terminate the process after monitoring completes
## Components
- Git/branch component: branch creation, rebase, conflict resolution
- Issue implementation component:
  - #398: handle KR order `rt_cd` failures; block false alerts and false records
  - #400: detect US session transitions, suppress US_DAY, emit US_REG entry event and force rescan
  - #401: market-level parallel processing with shared-state concurrency protection
- PR operations component: PR creation, self-review comments, approval-criteria check
- CI gate component: poll check status and confirm pass
- Merge component: merge only PRs that pass the gate
- Runtime verification component: run overnight, tail logs, confirm process liveness
## Data/Control Flow
1. Create `feature/398-400-401`
2. Implement `fix/398` -> test -> commit -> create PR
3. Write a self-review comment (including defect levels)
4. Wait for CI completion, then merge if `CI pass && minor 0`
5. Repeat the same procedure for `fix/400` and `fix/401`
6. Run overnight in the background on the integration branch
7. Confirm actual behavior via log/status monitoring
8. Keep the process running even after reporting results
## Error Handling/Recovery
- PR creation/conflict failure: pause only the affected branch, resolve, and keep it isolated from the others
- Self-review failure (minor included): block the merge; fix, commit, and refresh the review
- CI failure: fix the root cause, re-push, re-verify
- Merge failure: update the base branch and retry
- Overnight startup failure: analyze logs and restart
- Errors during monitoring: report the error but keep running; never shut down automatically
## Testing/Verification
- Run the unit/integration tests relevant to each PR
- When needed, run regressions including `tests/test_main.py` and `tests/test_runtime_overnight_scripts.py`
- Write self-reviews against the `Critical/Major/Minor` scale
- Merge only when zero minor findings are explicitly stated
- On the final integration branch, confirm overnight startup, loop entry, and error logs
- Report continued-execution status after confirming PID/process liveness
## Non-Goals
- This document does not cover the implementation-level code changes themselves.
- This document does not cover an external reviewer approval process (self-review only).


@@ -0,0 +1,103 @@
# Issue #409 Design - KR Session-Aware Exchange Routing
## Context
- Issue: #409 (bug: KR session-specific exchange routing missing - screening/orders/dual-listing priority unhandled)
- Related runtime observation targets: #318, #325
- Date: 2026-03-04
- Confirmed approach: Option 2 (routing module introduction)
## Goals
1. Ensure domestic screening uses session-specific exchange market code.
2. Ensure domestic order submission explicitly sets exchange routing code.
3. Add dual-listing routing priority logic (spread/liquidity aware) with safe fallback.
4. Keep existing behavior stable for non-KR flows and existing risk/order policy guards.
5. Enable runtime observability for #409 while monitoring #318/#325 in parallel.
## Non-Goals
- Replacing current session classification model.
- Introducing new market sessions or changing session boundaries.
- Refactoring overseas order flow.
## Architecture
### New Component
- Add `KRExchangeRouter` (new module, e.g. `src/broker/kr_exchange_router.py`).
- Responsibility split:
- `classify_session_id`: session classification only.
- `KRExchangeRouter`: final domestic exchange selection (`KRX`/`NXT`) for ranking and order.
- `KISBroker`: inject resolved routing values into request params/body.
### Integration Points
- `KISBroker.fetch_market_rankings`
- Session-aware market division code:
- `KRX_REG` -> `J`
- `NXT_PRE`, `NXT_AFTER` -> `NX`
- `KISBroker.send_order`
- Explicit `EXCG_ID_DVSN_CD` is always set.
- `SmartVolatilityScanner._scan_domestic`
- Ensure domestic ranking API path resolves exchange consistently with current session.
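The session-to-market-division mapping above can be captured as a small lookup; a minimal sketch (the session IDs and codes come from this design, the function name and default are illustrative assumptions):

```python
# Sketch of the ranking market-division mapping described above.
# Unknown sessions default to the KRX regular code "J" (an assumption here,
# matching the deterministic-fallback principle of this design).
RANKING_MARKET_CODE = {
    "KRX_REG": "J",
    "NXT_PRE": "NX",
    "NXT_AFTER": "NX",
}


def ranking_market_code(session_id: str) -> str:
    return RANKING_MARKET_CODE.get(session_id, "J")


print(ranking_market_code("NXT_PRE"))  # NX
```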
## Data Flow
1. Scanner path:
- Determine `session_id`.
- `resolve_for_ranking(session_id)`.
- Inject `J` or `NX` into ranking API params.
2. Order path:
- Pass `session_id` into order path.
- `resolve_for_order(stock_code, session_id)`.
- Single listing: session default exchange.
- Dual listing: select by spread/liquidity heuristic when data is available.
- Data unavailable/error: fallback to session default.
- Send order with explicit `EXCG_ID_DVSN_CD`.
3. Observability:
- Log `session_id`, `resolved_exchange`, `routing_reason`.
## Dual-Listing Routing Priority
- Preferred decision source: spread/liquidity comparison.
- Deterministic fallback: session-default exchange.
- Proposed reasons in logs:
- `session_default`
- `dual_listing_spread`
- `dual_listing_liquidity`
- `fallback_data_unavailable`
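The priority order above can be sketched as a pure decision function (assumption-heavy: the parameter names and the tighter-spread-wins comparison are illustrative, not the final router API):

```python
from dataclasses import dataclass
from typing import Optional


@dataclass(frozen=True)
class ExchangeResolution:
    exchange_code: str
    reason: str


def resolve_dual_listing(
    session_default: str,
    spread_krx: Optional[float],
    spread_nxt: Optional[float],
) -> ExchangeResolution:
    # Preferred decision source: spread comparison; the tighter spread wins.
    if spread_krx is not None and spread_nxt is not None:
        if spread_nxt < spread_krx:
            return ExchangeResolution("NXT", "dual_listing_spread")
        return ExchangeResolution("KRX", "dual_listing_spread")
    # Deterministic fallback when auxiliary data is unavailable.
    return ExchangeResolution(session_default, "fallback_data_unavailable")


print(resolve_dual_listing("NXT", None, None).reason)  # fallback_data_unavailable
```

A liquidity comparison (`dual_listing_liquidity`) would slot in as a second branch before the fallback.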
## Error Handling
- Router does not block order path when auxiliary data is unavailable.
- Fail-open strategy for routing selection (fallback to session default) while preserving existing API/network error semantics.
- `send_order` exchange field omission is forbidden by design after this change.
## Testing Strategy
### Unit
- Router mapping by session (`KRX_REG`, `NXT_PRE`, `NXT_AFTER`).
- Dual-listing routing priority and fallback.
- Broker order body includes `EXCG_ID_DVSN_CD`.
- Ranking params use session-aware market code.
### Integration/Regression
- `smart_scanner` domestic calls align with session exchange.
- Existing order policy tests remain green.
- Re-run regression sets covering #318/#325 related paths.
### Runtime Observation (24h)
- Restart program from working branch build.
- Run runtime monitor for up to 24h.
- Verify and track:
- #409: session-aware routing evidence in logs.
- #318: ATR dynamic stop evidence.
- #325: ATR/pred_down_prob injection evidence.
- If anomalies are detected during monitoring, create separate issue tickets with evidence and links.
## Acceptance Criteria
1. No domestic ranking call uses hardcoded KRX-only behavior across NXT sessions.
2. No domestic order is sent without `EXCG_ID_DVSN_CD`.
3. Dual-listing path has explicit priority logic and deterministic fallback.
4. Tests pass for new and affected paths.
5. Runtime monitor evidence is collected for #409, #318, #325; anomalies are ticketed.
## Risks and Mitigations
- Risk: Increased routing complexity introduces regressions.
- Mitigation: isolate router, high-coverage unit tests, preserve existing interfaces where possible.
- Risk: Runtime events for #318/#325 may not naturally occur in 24h.
- Mitigation: mark as `NOT_OBSERVED` and keep issue state based on evidence policy; do not force-close without proof.
## Planned Next Step
- Invoke `writing-plans` workflow and produce implementation plan before code changes.


@@ -0,0 +1,352 @@
# Issue #409 KR Session Exchange Routing Implementation Plan
> **For Claude:** REQUIRED SUB-SKILL: Use superpowers:executing-plans to implement this plan task-by-task.
**Goal:** Fix #409 by making KR screening/order routing session-aware and adding dual-listing exchange priority with deterministic fallback, then run 24h runtime observation for #409/#318/#325.
**Architecture:** Introduce a dedicated `KRExchangeRouter` module that resolves exchange by session and dual-listing metadata. Keep session classification in `order_policy`, and inject router outputs into `KISBroker` ranking/order requests. Add explicit routing logs for runtime evidence and keep non-KR behavior unchanged.
**Tech Stack:** Python 3.12, aiohttp client layer, pytest/pytest-asyncio, Gitea CLI (`tea`), bash runtime monitor scripts.
---
### Task 1: Preflight and Branch Runtime Gate
**Files:**
- Modify: `workflow/session-handover.md`
**Step 1: Add handover entry for this ticket branch**
```md
### 2026-03-04 | session=codex-issue409-start
- branch: feature/issue-409-kr-session-exchange-routing
- docs_checked: docs/workflow.md, docs/commands.md, docs/agent-constraints.md
- open_issues_reviewed: #409, #318, #325
- next_ticket: #409
- process_gate_checked: process_ticket=#306,#308 merged_to_feature_branch=yes
- risks_or_notes: #409 code fix + 24h monitor, runtime anomaly creates separate issue ticket
```
**Step 2: Run strict handover check**
Run: `python3 scripts/session_handover_check.py --strict`
Expected: PASS
**Step 3: Commit**
```bash
git add workflow/session-handover.md
git commit -m "chore: add handover entry for issue #409"
```
### Task 2: Add Router Unit Tests First (TDD)
**Files:**
- Create: `tests/test_kr_exchange_router.py`
**Step 1: Write failing tests for session mapping**
```python
from src.broker.kr_exchange_router import KRExchangeRouter
def test_ranking_market_code_by_session() -> None:
    router = KRExchangeRouter()
    assert router.resolve_for_ranking("KRX_REG") == "J"
    assert router.resolve_for_ranking("NXT_PRE") == "NX"
    assert router.resolve_for_ranking("NXT_AFTER") == "NX"
```
**Step 2: Write failing tests for dual-listing fallback behavior**
```python
def test_order_exchange_falls_back_to_session_default_on_missing_data() -> None:
    router = KRExchangeRouter()
    resolved = router.resolve_for_order(
        stock_code="0001A0",
        session_id="NXT_PRE",
        is_dual_listed=True,
        spread_krx=None,
        spread_nxt=None,
        liquidity_krx=None,
        liquidity_nxt=None,
    )
    assert resolved.exchange_code == "NXT"
    assert resolved.reason == "fallback_data_unavailable"
```
**Step 3: Run tests to verify fail**
Run: `pytest tests/test_kr_exchange_router.py -v`
Expected: FAIL (`ModuleNotFoundError` or missing class)
**Step 4: Commit tests-only checkpoint**
```bash
git add tests/test_kr_exchange_router.py
git commit -m "test: add failing tests for KR exchange router"
```
### Task 3: Implement Router Minimal Code
**Files:**
- Create: `src/broker/kr_exchange_router.py`
- Modify: `src/broker/__init__.py`
**Step 1: Add routing dataclass + session default mapping**
```python
from dataclasses import dataclass


@dataclass(frozen=True)
class ExchangeResolution:
    exchange_code: str
    reason: str


class KRExchangeRouter:
    def resolve_for_ranking(self, session_id: str) -> str:
        return "NX" if session_id in {"NXT_PRE", "NXT_AFTER"} else "J"
```
**Step 2: Add dual-listing decision path + fallback**
```python
if is_dual_listed and spread_krx is not None and spread_nxt is not None:
    if spread_nxt < spread_krx:
        return ExchangeResolution("NXT", "dual_listing_spread")
    return ExchangeResolution("KRX", "dual_listing_spread")
return ExchangeResolution(default_exchange, "fallback_data_unavailable")
```
**Step 3: Run router tests**
Run: `pytest tests/test_kr_exchange_router.py -v`
Expected: PASS
**Step 4: Commit**
```bash
git add src/broker/kr_exchange_router.py src/broker/__init__.py
git commit -m "feat: add KR session-aware exchange router"
```
### Task 4: Broker Request Wiring (Ranking + Order)
**Files:**
- Modify: `src/broker/kis_api.py`
- Modify: `tests/test_broker.py`
**Step 1: Add failing tests for ranking param and order body exchange field**
```python
assert called_params["FID_COND_MRKT_DIV_CODE"] == "NX"
assert called_json["EXCG_ID_DVSN_CD"] == "NXT"
```
**Step 2: Run targeted test subset (fail first)**
Run: `pytest tests/test_broker.py -k "market_rankings or EXCG_ID_DVSN_CD" -v`
Expected: FAIL on missing field/value
**Step 3: Implement minimal wiring**
```python
session_id = runtime_session_id or classify_session_id(MARKETS["KR"])
market_div_code = self._kr_router.resolve_for_ranking(session_id)
params["FID_COND_MRKT_DIV_CODE"] = market_div_code
resolution = self._kr_router.resolve_for_order(...)
body["EXCG_ID_DVSN_CD"] = resolution.exchange_code
```
**Step 4: Add routing evidence logs**
```python
logger.info(
    "KR routing resolved",
    extra={"session_id": session_id, "exchange": resolution.exchange_code, "reason": resolution.reason},
)
```
**Step 5: Re-run broker tests**
Run: `pytest tests/test_broker.py -k "market_rankings or EXCG_ID_DVSN_CD" -v`
Expected: PASS
**Step 6: Commit**
```bash
git add src/broker/kis_api.py tests/test_broker.py
git commit -m "fix: apply KR exchange routing to rankings and orders"
```
### Task 5: Scanner Session Alignment
**Files:**
- Modify: `src/analysis/smart_scanner.py`
- Modify: `tests/test_smart_scanner.py`
**Step 1: Add failing test for domestic session-aware ranking path**
```python
assert mock_broker.fetch_market_rankings.call_args_list[0].kwargs["session_id"] == "NXT_PRE"
```
**Step 2: Run scanner tests (fail first)**
Run: `pytest tests/test_smart_scanner.py -k "session" -v`
Expected: FAIL on missing session argument
**Step 3: Implement scanner call wiring**
```python
fluct_rows = await self.broker.fetch_market_rankings(
    ranking_type="fluctuation",
    limit=50,
    session_id=session_id,
)
```
**Step 4: Re-run scanner tests**
Run: `pytest tests/test_smart_scanner.py -v`
Expected: PASS
**Step 5: Commit**
```bash
git add src/analysis/smart_scanner.py tests/test_smart_scanner.py
git commit -m "fix: align domestic scanner rankings with KR session routing"
```
### Task 6: Full Verification and Regression
**Files:**
- No new files
**Step 1: Run focused regressions for #409**
Run:
- `pytest tests/test_kr_exchange_router.py tests/test_broker.py tests/test_smart_scanner.py -v`
Expected: PASS
**Step 2: Run related runtime-path regressions for #318/#325**
Run:
- `pytest tests/test_main.py -k "atr or staged_exit or pred_down_prob" -v`
Expected: PASS
**Step 3: Run lint/type checks for touched modules**
Run:
- `ruff check src/broker/kis_api.py src/broker/kr_exchange_router.py src/analysis/smart_scanner.py tests/test_kr_exchange_router.py tests/test_broker.py tests/test_smart_scanner.py`
- `mypy src/broker/kis_api.py src/broker/kr_exchange_router.py src/analysis/smart_scanner.py --strict`
Expected: PASS
**Step 4: Commit final fixup if needed**
```bash
git add -A
git commit -m "chore: finalize #409 verification adjustments"
```
### Task 7: PR Creation, Self-Review, and Merge
**Files:**
- Modify: PR metadata only
**Step 1: Push branch**
Run: `git push -u origin feature/issue-409-kr-session-exchange-routing`
Expected: remote branch created
**Step 2: Create PR to `main` with issue links**
```bash
PR_BODY=$(cat <<'MD'
## Summary
- fix KR session-aware exchange routing for rankings and orders (#409)
- add dual-listing exchange priority with deterministic fallback
- add logs and tests for routing evidence
## Validation
- pytest tests/test_kr_exchange_router.py tests/test_broker.py tests/test_smart_scanner.py -v
- pytest tests/test_main.py -k "atr or staged_exit or pred_down_prob" -v
- ruff check ...
- mypy ...
MD
)
tea pr create --base main --head feature/issue-409-kr-session-exchange-routing --title "fix: KR session-aware exchange routing (#409)" --description "$PR_BODY"
```
**Step 3: Validate PR body integrity**
Run: `python3 scripts/validate_pr_body.py --pr <PR_NUMBER>`
Expected: PASS
**Step 4: Self-review checklist (blocking)**
- Re-check diff for missing `EXCG_ID_DVSN_CD`
- Confirm session mapping (`KRX_REG=J`, `NXT_PRE/NXT_AFTER=NX`)
- Confirm fallback reason logging exists
- Confirm tests cover dual-listing fallback
**Step 5: Merge only if no minor issues remain**
Run: `tea pr merge <PR_NUMBER> --merge`
Expected: merged
### Task 8: Restart Program and 24h Runtime Monitoring
**Files:**
- Runtime artifacts: `data/overnight/*.log`
**Step 1: Restart runtime from merged state**
Run:
- `bash scripts/stop_overnight.sh`
- `bash scripts/run_overnight.sh`
Expected: live process and watchdog healthy
**Step 2: Start 24h monitor**
Run:
- `INTERVAL_SEC=60 MAX_HOURS=24 POLICY_TZ=Asia/Seoul bash scripts/runtime_verify_monitor.sh`
Expected: monitor loop runs and writes `data/overnight/runtime_verify_*.log`
**Step 3: Track #409/#318/#325 evidence in loop**
Run examples:
- `rg -n "KR routing resolved|EXCG_ID_DVSN_CD|session=NXT_|session=KRX_REG" data/overnight/run_*.log`
- `rg -n "atr_value|dynamic hard stop|staged exit|pred_down_prob" data/overnight/run_*.log`
Expected:
- #409 routing evidence present when KR flows trigger
- #318/#325 evidence captured if runtime conditions occur
**Step 4: If anomaly found, create separate issue ticket immediately**
```bash
ISSUE_BODY=$(cat <<'MD'
## Summary
- runtime anomaly detected during #409 monitor
## Evidence
- log: data/overnight/run_xxx.log
- timestamp: <UTC/KST>
- observed: <symptom>
## Suspected Scope
- related to #409/#318/#325 monitoring path
## Next Action
- triage + reproducible test
MD
)
tea issues create -t "bug: runtime anomaly during #409 monitor" -d "$ISSUE_BODY"
```
**Step 5: Post monitoring summary to #409/#318/#325**
- Include PASS/FAIL/NOT_OBSERVED matrix and exact timestamps.
- Do not close #318/#325 without concrete acceptance evidence.
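The PASS/FAIL/NOT_OBSERVED matrix can be assembled directly from the monitor's `[COVERAGE]` lines; a minimal sketch (the sample log content is fabricated; the line format follows `runtime_verify_monitor.sh`):

```shell
# Summarize [COVERAGE] lines from a monitor log into a status matrix.
log=$(mktemp)
cat > "$log" <<'EOF'
2026-03-05T00:00:00Z [COVERAGE] LIVE_MODE=PASS source=process_liveness
2026-03-05T00:01:00Z [COVERAGE] KR_LOOP=NOT_OBSERVED reason=no_run_log
EOF
# Field 3 is KEY=STATUS; split it and print one matrix row per line.
awk '/\[COVERAGE\]/ { split($3, kv, "="); printf "%s=%s\n", kv[1], kv[2] }' "$log"
rm -f "$log"
```

Pipe the output through `sort -u` to collapse repeated heartbeat loops into one row per check.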


@@ -8,8 +8,32 @@ CHECK_INTERVAL="${CHECK_INTERVAL:-30}"
TMUX_AUTO="${TMUX_AUTO:-true}"
TMUX_ATTACH="${TMUX_ATTACH:-true}"
TMUX_SESSION_PREFIX="${TMUX_SESSION_PREFIX:-ouroboros_overnight}"
STARTUP_GRACE_SEC="${STARTUP_GRACE_SEC:-3}"
dashboard_port="${DASHBOARD_PORT:-8080}"
APP_CMD_BIN="${APP_CMD_BIN:-}"
APP_CMD_ARGS="${APP_CMD_ARGS:-}"
RUNS_DASHBOARD="false"
if [ -z "${APP_CMD:-}" ]; then
# Custom override contract:
# 1) Preferred: APP_CMD_BIN + APP_CMD_ARGS
# - APP_CMD_BIN is treated as a single executable token.
# - APP_CMD_ARGS uses shell-style word splitting; quote/escape inside this
# variable is NOT preserved as a nested shell parse.
# 2) Legacy fallback: APP_CMD (raw shell command string)
# - This path remains for backward compatibility.
# - When APP_CMD includes --dashboard, caller should include explicit
# DASHBOARD_PORT assignment in APP_CMD if non-default port is required.
if [ -n "$APP_CMD_BIN" ]; then
USE_DEFAULT_APP_CMD="false"
USE_SAFE_CUSTOM_APP_CMD="true"
APP_CMD="${APP_CMD_BIN} ${APP_CMD_ARGS}"
if [[ " $APP_CMD_ARGS " == *" --dashboard "* ]]; then
RUNS_DASHBOARD="true"
fi
elif [ -z "${APP_CMD:-}" ]; then
USE_DEFAULT_APP_CMD="true"
USE_SAFE_CUSTOM_APP_CMD="false"
if [ -x ".venv/bin/python" ]; then
PYTHON_BIN=".venv/bin/python"
elif command -v python3 >/dev/null 2>&1; then
@@ -21,9 +45,14 @@ if [ -z "${APP_CMD:-}" ]; then
exit 1
fi
dashboard_port="${DASHBOARD_PORT:-8080}"
APP_CMD="$PYTHON_BIN -m src.main --mode=live --dashboard"
RUNS_DASHBOARD="true"
APP_CMD="DASHBOARD_PORT=$dashboard_port $PYTHON_BIN -m src.main --mode=live --dashboard"
else
USE_DEFAULT_APP_CMD="false"
USE_SAFE_CUSTOM_APP_CMD="false"
if [[ "$APP_CMD" == *"--dashboard"* ]]; then
RUNS_DASHBOARD="true"
fi
fi
mkdir -p "$LOG_DIR"
@@ -34,6 +63,24 @@ WATCHDOG_LOG="$LOG_DIR/watchdog_${timestamp}.log"
PID_FILE="$LOG_DIR/app.pid"
WATCHDOG_PID_FILE="$LOG_DIR/watchdog.pid"
is_port_in_use() {
local port="$1"
if command -v ss >/dev/null 2>&1; then
ss -ltn 2>/dev/null | grep -Eq ":${port}[[:space:]]"
return $?
fi
if command -v lsof >/dev/null 2>&1; then
lsof -nP -iTCP:"$port" -sTCP:LISTEN >/dev/null 2>&1
return $?
fi
if command -v netstat >/dev/null 2>&1; then
netstat -ltn 2>/dev/null | grep -Eq "[:.]${port}[[:space:]]"
return $?
fi
# No supported socket inspection command found.
return 1
}
if [ -f "$PID_FILE" ]; then
old_pid="$(cat "$PID_FILE" || true)"
if [ -n "$old_pid" ] && kill -0 "$old_pid" 2>/dev/null; then
@@ -43,7 +90,29 @@ if [ -f "$PID_FILE" ]; then
fi
echo "[$(date -u +"%Y-%m-%dT%H:%M:%SZ")] starting: $APP_CMD" | tee -a "$RUN_LOG"
nohup bash -lc "$APP_CMD" >>"$RUN_LOG" 2>&1 &
if [ "$RUNS_DASHBOARD" = "true" ] && is_port_in_use "$dashboard_port"; then
echo "[$(date -u +"%Y-%m-%dT%H:%M:%SZ")] startup failed: dashboard port ${dashboard_port} already in use" | tee -a "$RUN_LOG"
exit 1
fi
if [ "$USE_DEFAULT_APP_CMD" = "true" ]; then
# Default path avoids shell word-splitting on executable paths.
nohup env DASHBOARD_PORT="$dashboard_port" "$PYTHON_BIN" -m src.main --mode=live --dashboard >>"$RUN_LOG" 2>&1 &
elif [ "$USE_SAFE_CUSTOM_APP_CMD" = "true" ]; then
# Safer custom path: executable path is handled as a single token.
if [ -n "$APP_CMD_ARGS" ]; then
# shellcheck disable=SC2206
app_args=( $APP_CMD_ARGS )
nohup env DASHBOARD_PORT="$dashboard_port" "$APP_CMD_BIN" "${app_args[@]}" >>"$RUN_LOG" 2>&1 &
else
nohup env DASHBOARD_PORT="$dashboard_port" "$APP_CMD_BIN" >>"$RUN_LOG" 2>&1 &
fi
else
# Custom APP_CMD is treated as a shell command string.
# If executable paths include spaces, they must be quoted inside APP_CMD.
# Legacy compatibility path: caller owns quoting and env var injection.
nohup bash -lc "exec env $APP_CMD" >>"$RUN_LOG" 2>&1 &
fi
app_pid=$!
echo "$app_pid" > "$PID_FILE"
@@ -54,6 +123,20 @@ nohup env PID_FILE="$PID_FILE" LOG_FILE="$WATCHDOG_LOG" CHECK_INTERVAL="$CHECK_I
watchdog_pid=$!
echo "$watchdog_pid" > "$WATCHDOG_PID_FILE"
sleep "$STARTUP_GRACE_SEC"
if ! kill -0 "$app_pid" 2>/dev/null; then
echo "[$(date -u +"%Y-%m-%dT%H:%M:%SZ")] startup failed: app process exited early (pid=$app_pid)" | tee -a "$RUN_LOG"
[ -n "${watchdog_pid:-}" ] && kill "$watchdog_pid" 2>/dev/null || true
tail -n 20 "$RUN_LOG" || true
exit 1
fi
if ! kill -0 "$watchdog_pid" 2>/dev/null; then
echo "[$(date -u +"%Y-%m-%dT%H:%M:%SZ")] startup failed: watchdog exited early (pid=$watchdog_pid)" | tee -a "$WATCHDOG_LOG"
kill "$app_pid" 2>/dev/null || true
tail -n 20 "$WATCHDOG_LOG" || true
exit 1
fi
cat <<EOF
시작 완료
- app pid: $app_pid


@@ -7,12 +7,15 @@ ROOT_DIR="${ROOT_DIR:-/home/agentson/repos/The-Ouroboros}"
LOG_DIR="${LOG_DIR:-$ROOT_DIR/data/overnight}"
INTERVAL_SEC="${INTERVAL_SEC:-60}"
MAX_HOURS="${MAX_HOURS:-24}"
MAX_LOOPS="${MAX_LOOPS:-0}"
POLICY_TZ="${POLICY_TZ:-Asia/Seoul}"
DASHBOARD_PORT="${DASHBOARD_PORT:-8080}"
cd "$ROOT_DIR"
OUT_LOG="$LOG_DIR/runtime_verify_$(date +%Y%m%d_%H%M%S).log"
END_TS=$(( $(date +%s) + MAX_HOURS*3600 ))
loops=0
log() {
printf '%s %s\n' "$(date -u +%Y-%m-%dT%H:%M:%SZ)" "$1" | tee -a "$OUT_LOG" >/dev/null
@@ -31,6 +34,11 @@ check_signal() {
return 1
}
find_live_pids() {
# Detect live-mode process even when run_overnight pid files are absent.
pgrep -af "[s]rc.main --mode=live" 2>/dev/null | awk '{print $1}' | tr '\n' ',' | sed 's/,$//'
}
check_forbidden() {
local name="$1"
local pattern="$2"
@@ -44,42 +52,94 @@ check_forbidden() {
return 0
}
is_port_listening() {
local port="$1"
if command -v ss >/dev/null 2>&1; then
ss -ltn 2>/dev/null | grep -Eq ":${port}[[:space:]]"
return $?
fi
if command -v lsof >/dev/null 2>&1; then
lsof -nP -iTCP:"$port" -sTCP:LISTEN >/dev/null 2>&1
return $?
fi
if command -v netstat >/dev/null 2>&1; then
netstat -ltn 2>/dev/null | grep -Eq "[:.]${port}[[:space:]]"
return $?
fi
return 1
}
log "[INFO] runtime verify monitor started interval=${INTERVAL_SEC}s max_hours=${MAX_HOURS} policy_tz=${POLICY_TZ}"
while true; do
loops=$((loops + 1))
now=$(date +%s)
if [ "$now" -ge "$END_TS" ]; then
log "[INFO] monitor completed (time window reached)"
exit 0
fi
if [ "$MAX_LOOPS" -gt 0 ] && [ "$loops" -gt "$MAX_LOOPS" ]; then
log "[INFO] monitor completed (max loops reached)"
exit 0
fi
latest_run="$(ls -t "$LOG_DIR"/run_*.log 2>/dev/null | head -n1 || true)"
if [ -z "$latest_run" ]; then
log "[ANOMALY] no run log found"
sleep "$INTERVAL_SEC"
continue
fi
# Basic liveness hints.
app_pid="$(cat "$LOG_DIR/app.pid" 2>/dev/null || true)"
wd_pid="$(cat "$LOG_DIR/watchdog.pid" 2>/dev/null || true)"
live_pids="$(find_live_pids)"
app_alive=0
wd_alive=0
port_alive=0
[ -n "$app_pid" ] && kill -0 "$app_pid" 2>/dev/null && app_alive=1
[ -n "$wd_pid" ] && kill -0 "$wd_pid" 2>/dev/null && wd_alive=1
ss -ltnp 2>/dev/null | rg -q ':8080' && port_alive=1
log "[HEARTBEAT] run_log=$latest_run app_alive=$app_alive watchdog_alive=$wd_alive port8080=$port_alive"
if [ "$app_alive" -eq 0 ] && [ -n "$live_pids" ]; then
app_alive=1
fi
is_port_listening "$DASHBOARD_PORT" && port_alive=1
log "[HEARTBEAT] run_log=${latest_run:-none} app_alive=$app_alive watchdog_alive=$wd_alive port=${DASHBOARD_PORT} alive=$port_alive live_pids=${live_pids:-none}"
defer_log_checks=0
if [ -z "$latest_run" ] && [ "$app_alive" -eq 1 ]; then
defer_log_checks=1
log "[INFO] run log not yet available; defer log-based coverage checks"
fi
if [ -z "$latest_run" ] && [ "$defer_log_checks" -eq 0 ]; then
log "[ANOMALY] no run log found"
fi
# Coverage matrix rows (session paths and policy gate evidence).
not_observed=0
if [ "$app_alive" -eq 1 ]; then
log "[COVERAGE] LIVE_MODE=PASS source=process_liveness"
else
if [ -n "$latest_run" ]; then
check_signal "LIVE_MODE" "Mode: live" "$latest_run" || not_observed=$((not_observed+1))
else
log "[COVERAGE] LIVE_MODE=NOT_OBSERVED reason=no_run_log_no_live_pid"
not_observed=$((not_observed+1))
fi
fi
if [ "$defer_log_checks" -eq 1 ]; then
for deferred in KR_LOOP NXT_PATH US_PRE_PATH US_DAY_PATH US_AFTER_PATH ORDER_POLICY_SESSION; do
log "[COVERAGE] ${deferred}=DEFERRED reason=no_run_log_process_alive"
done
elif [ -n "$latest_run" ]; then
check_signal "KR_LOOP" "Processing market: Korea Exchange" "$latest_run" || not_observed=$((not_observed+1))
check_signal "NXT_PATH" "NXT_PRE|NXT_AFTER|session=NXT_" "$latest_run" || not_observed=$((not_observed+1))
check_signal "US_PRE_PATH" "US_PRE|session=US_PRE" "$latest_run" || not_observed=$((not_observed+1))
check_signal "US_DAY_PATH" "US_DAY|session=US_DAY|Processing market: .*NASDAQ|Processing market: .*NYSE|Processing market: .*AMEX" "$latest_run" || not_observed=$((not_observed+1))
check_signal "US_AFTER_PATH" "US_AFTER|session=US_AFTER" "$latest_run" || not_observed=$((not_observed+1))
check_signal "ORDER_POLICY_SESSION" "Order policy rejected .*\\[session=" "$latest_run" || not_observed=$((not_observed+1))
else
for missing in KR_LOOP NXT_PATH US_PRE_PATH US_DAY_PATH US_AFTER_PATH ORDER_POLICY_SESSION; do
log "[COVERAGE] ${missing}=NOT_OBSERVED reason=no_run_log"
not_observed=$((not_observed+1))
done
fi
if [ "$not_observed" -gt 0 ]; then
log "[ANOMALY] coverage_not_observed=$not_observed (treat as FAIL)"
@@ -95,11 +155,17 @@ while true; do
is_weekend=1
fi
if [ "$is_weekend" -eq 1 ]; then
if [ "$defer_log_checks" -eq 1 ]; then
log "[FORBIDDEN] WEEKEND_KR_SESSION_ACTIVE=SKIP reason=no_run_log_process_alive"
elif [ "$is_weekend" -eq 1 ]; then
# Weekend policy: KR regular session loop must never appear.
if [ -n "$latest_run" ]; then
check_forbidden "WEEKEND_KR_SESSION_ACTIVE" \
"Market session active: KR|session=KRX_REG|Processing market: Korea Exchange" \
"$latest_run" || forbidden_hits=$((forbidden_hits+1))
else
log "[FORBIDDEN] WEEKEND_KR_SESSION_ACTIVE=SKIP reason=no_run_log"
fi
else
log "[FORBIDDEN] WEEKEND_KR_SESSION_ACTIVE=SKIP reason=weekday"
fi


@@ -5,6 +5,8 @@ from __future__ import annotations
import argparse
import json
import os
import shutil
import re
import subprocess
import sys
@@ -12,11 +14,31 @@ from pathlib import Path
HEADER_PATTERN = re.compile(r"^##\s+\S+", re.MULTILINE)
LIST_ITEM_PATTERN = re.compile(r"^\s*(?:-|\*|\d+\.)\s+\S+", re.MULTILINE)
FENCED_CODE_PATTERN = re.compile(r"```.*?```", re.DOTALL)
INLINE_CODE_PATTERN = re.compile(r"`[^`]*`")
def _strip_code_segments(text: str) -> str:
without_fences = FENCED_CODE_PATTERN.sub("", text)
return INLINE_CODE_PATTERN.sub("", without_fences)
def resolve_tea_binary() -> str:
tea_from_path = shutil.which("tea")
if tea_from_path:
return tea_from_path
tea_home = Path.home() / "bin" / "tea"
if tea_home.exists() and tea_home.is_file() and os.access(tea_home, os.X_OK):
return str(tea_home)
raise RuntimeError("tea binary not found (checked PATH and ~/bin/tea)")
def validate_pr_body_text(text: str) -> list[str]:
errors: list[str] = []
if "\\n" in text and "\n" not in text:
searchable = _strip_code_segments(text)
if "\\n" in searchable:
errors.append("body contains escaped newline sequence (\\n)")
if text.count("```") % 2 != 0:
errors.append("body has unbalanced fenced code blocks (``` count is odd)")
@@ -28,10 +50,11 @@ def validate_pr_body_text(text: str) -> list[str]:
def fetch_pr_body(pr_number: int) -> str:
tea_binary = resolve_tea_binary()
try:
completed = subprocess.run(
[
"tea",
tea_binary,
"api",
"-R",
"origin",
@@ -41,7 +64,7 @@ def fetch_pr_body(pr_number: int) -> str:
capture_output=True,
text=True,
)
except (subprocess.CalledProcessError, FileNotFoundError) as exc:
except (subprocess.CalledProcessError, FileNotFoundError, PermissionError) as exc:
raise RuntimeError(f"failed to fetch PR #{pr_number}: {exc}") from exc
try:


@@ -68,6 +68,7 @@ class SmartVolatilityScanner:
self,
market: MarketInfo | None = None,
fallback_stocks: list[str] | None = None,
domestic_session_id: str | None = None,
) -> list[ScanCandidate]:
"""Execute smart scan and return qualified candidates.
@@ -81,11 +82,12 @@ class SmartVolatilityScanner:
if market and not market.is_domestic:
return await self._scan_overseas(market, fallback_stocks)
return await self._scan_domestic(fallback_stocks)
return await self._scan_domestic(fallback_stocks, session_id=domestic_session_id)
async def _scan_domestic(
self,
fallback_stocks: list[str] | None = None,
session_id: str | None = None,
) -> list[ScanCandidate]: ) -> list[ScanCandidate]:
"""Scan domestic market using volatility-first ranking + liquidity bonus.""" """Scan domestic market using volatility-first ranking + liquidity bonus."""
# 1) Primary universe from fluctuation ranking. # 1) Primary universe from fluctuation ranking.
@@ -93,6 +95,7 @@ class SmartVolatilityScanner:
fluct_rows = await self.broker.fetch_market_rankings( fluct_rows = await self.broker.fetch_market_rankings(
ranking_type="fluctuation", ranking_type="fluctuation",
limit=50, limit=50,
session_id=session_id,
) )
except ConnectionError as exc: except ConnectionError as exc:
logger.warning("Domestic fluctuation ranking failed: %s", exc) logger.warning("Domestic fluctuation ranking failed: %s", exc)
@@ -103,6 +106,7 @@ class SmartVolatilityScanner:
volume_rows = await self.broker.fetch_market_rankings( volume_rows = await self.broker.fetch_market_rankings(
ranking_type="volume", ranking_type="volume",
limit=50, limit=50,
session_id=session_id,
) )
except ConnectionError as exc: except ConnectionError as exc:
logger.warning("Domestic volume ranking failed: %s", exc) logger.warning("Domestic volume ranking failed: %s", exc)

View File

@@ -12,7 +12,10 @@ from typing import Any, cast
 import aiohttp

+from src.broker.kr_exchange_router import KRExchangeRouter
 from src.config import Settings
+from src.core.order_policy import classify_session_id
+from src.markets.schedule import MARKETS

 # KIS virtual trading server has a known SSL certificate hostname mismatch.
 _KIS_VTS_HOST = "openapivts.koreainvestment.com"
@@ -92,6 +95,7 @@ class KISBroker:
         self._last_refresh_attempt: float = 0.0
         self._refresh_cooldown: float = 60.0  # Seconds (matches KIS 1/minute limit)
         self._rate_limiter = LeakyBucket(settings.RATE_LIMIT_RPS)
+        self._kr_router = KRExchangeRouter()

     def _get_session(self) -> aiohttp.ClientSession:
         if self._session is None or self._session.closed:
@@ -187,9 +191,12 @@ class KISBroker:
                 if resp.status != 200:
                     text = await resp.text()
                     raise ConnectionError(f"Hash key request failed ({resp.status}): {text}")
-                data = await resp.json()
-                return data["HASH"]
+                data = cast(dict[str, Any], await resp.json())
+                hash_value = data.get("HASH")
+                if not isinstance(hash_value, str):
+                    raise ConnectionError("Hash key response missing HASH")
+                return hash_value

     # ------------------------------------------------------------------
     # Common Headers
@@ -211,12 +218,21 @@ class KISBroker:
     async def get_orderbook(self, stock_code: str) -> dict[str, Any]:
         """Fetch the current orderbook for a given stock code."""
+        return await self.get_orderbook_by_market(stock_code, market_div_code="J")
+
+    async def get_orderbook_by_market(
+        self,
+        stock_code: str,
+        *,
+        market_div_code: str,
+    ) -> dict[str, Any]:
+        """Fetch orderbook for a specific domestic market division code."""
         await self._rate_limiter.acquire()
         session = self._get_session()
         headers = await self._auth_headers("FHKST01010200")
         params = {
-            "FID_COND_MRKT_DIV_CODE": "J",
+            "FID_COND_MRKT_DIV_CODE": market_div_code,
             "FID_INPUT_ISCD": stock_code,
         }
         url = f"{self._base_url}/uapi/domestic-stock/v1/quotations/inquire-asking-price-exp-ccn"
@@ -226,10 +242,80 @@ class KISBroker:
                 if resp.status != 200:
                     text = await resp.text()
                     raise ConnectionError(f"get_orderbook failed ({resp.status}): {text}")
-                return await resp.json()
+                return cast(dict[str, Any], await resp.json())
         except (TimeoutError, aiohttp.ClientError) as exc:
             raise ConnectionError(f"Network error fetching orderbook: {exc}") from exc

+    @staticmethod
+    def _extract_orderbook_metrics(payload: dict[str, Any]) -> tuple[float | None, float | None]:
+        output = payload.get("output1") or payload.get("output") or {}
+        if not isinstance(output, dict):
+            return None, None
+
+        def _float(*keys: str) -> float | None:
+            for key in keys:
+                raw = output.get(key)
+                if raw in (None, ""):
+                    continue
+                try:
+                    return float(cast(str | int | float, raw))
+                except (ValueError, TypeError):
+                    continue
+            return None
+
+        ask = _float("askp1", "stck_askp1")
+        bid = _float("bidp1", "stck_bidp1")
+        if ask is not None and bid is not None and ask > 0 and bid > 0 and ask >= bid:
+            mid = (ask + bid) / 2
+            if mid > 0:
+                spread = (ask - bid) / mid
+            else:
+                spread = None
+        else:
+            spread = None
+        ask_qty = _float("askp_rsqn1", "ask_qty1")
+        bid_qty = _float("bidp_rsqn1", "bid_qty1")
+        if ask_qty is not None and bid_qty is not None and ask_qty >= 0 and bid_qty >= 0:
+            liquidity = ask_qty + bid_qty
+        else:
+            liquidity = None
+        return spread, liquidity
+    async def _load_dual_listing_metrics(
+        self,
+        stock_code: str,
+    ) -> tuple[bool, float | None, float | None, float | None, float | None]:
+        """Try KRX/NXT orderbooks and derive spread/liquidity metrics."""
+        spread_krx: float | None = None
+        spread_nxt: float | None = None
+        liquidity_krx: float | None = None
+        liquidity_nxt: float | None = None
+        for market_div_code, exchange in (("J", "KRX"), ("NX", "NXT")):
+            try:
+                payload = await self.get_orderbook_by_market(
+                    stock_code,
+                    market_div_code=market_div_code,
+                )
+            except ConnectionError:
+                continue
+            spread, liquidity = self._extract_orderbook_metrics(payload)
+            if exchange == "KRX":
+                spread_krx = spread
+                liquidity_krx = liquidity
+            else:
+                spread_nxt = spread
+                liquidity_nxt = liquidity
+        is_dual_listed = (
+            (spread_krx is not None and spread_nxt is not None)
+            or (liquidity_krx is not None and liquidity_nxt is not None)
+        )
+        return is_dual_listed, spread_krx, spread_nxt, liquidity_krx, liquidity_nxt
+
     async def get_current_price(self, stock_code: str) -> tuple[float, float, float]:
         """Fetch current price data for a domestic stock.
@@ -302,7 +388,7 @@ class KISBroker:
                 if resp.status != 200:
                     text = await resp.text()
                     raise ConnectionError(f"get_balance failed ({resp.status}): {text}")
-                return await resp.json()
+                return cast(dict[str, Any], await resp.json())
         except (TimeoutError, aiohttp.ClientError) as exc:
             raise ConnectionError(f"Network error fetching balance: {exc}") from exc
@@ -311,7 +397,8 @@ class KISBroker:
         stock_code: str,
         order_type: str,  # "BUY" or "SELL"
         quantity: int,
-        price: int = 0,
+        price: float = 0,
+        session_id: str | None = None,
     ) -> dict[str, Any]:
         """Submit a buy or sell order.
@@ -341,10 +428,32 @@ class KISBroker:
             ord_dvsn = "01"  # 시장가 (market order)
             ord_price = 0

+        resolved_session = session_id or classify_session_id(MARKETS["KR"])
+        if session_id is not None:
+            is_dual_listed, spread_krx, spread_nxt, liquidity_krx, liquidity_nxt = (
+                await self._load_dual_listing_metrics(stock_code)
+            )
+        else:
+            is_dual_listed = False
+            spread_krx = None
+            spread_nxt = None
+            liquidity_krx = None
+            liquidity_nxt = None
+        resolution = self._kr_router.resolve_for_order(
+            stock_code=stock_code,
+            session_id=resolved_session,
+            is_dual_listed=is_dual_listed,
+            spread_krx=spread_krx,
+            spread_nxt=spread_nxt,
+            liquidity_krx=liquidity_krx,
+            liquidity_nxt=liquidity_nxt,
+        )
         body = {
             "CANO": self._account_no,
             "ACNT_PRDT_CD": self._product_cd,
             "PDNO": stock_code,
+            "EXCG_ID_DVSN_CD": resolution.exchange_code,
             "ORD_DVSN": ord_dvsn,
             "ORD_QTY": str(quantity),
             "ORD_UNPR": str(ord_price),
@@ -361,12 +470,15 @@ class KISBroker:
                 if resp.status != 200:
                     text = await resp.text()
                     raise ConnectionError(f"send_order failed ({resp.status}): {text}")
-                data = await resp.json()
+                data = cast(dict[str, Any], await resp.json())
         logger.info(
             "Order submitted",
             extra={
                 "stock_code": stock_code,
                 "action": order_type,
+                "session_id": resolved_session,
+                "exchange": resolution.exchange_code,
+                "routing_reason": resolution.reason,
             },
         )
         return data
@@ -377,6 +489,7 @@ class KISBroker:
         self,
         ranking_type: str = "volume",
         limit: int = 30,
+        session_id: str | None = None,
     ) -> list[dict[str, Any]]:
         """Fetch market rankings from KIS API.
@@ -394,12 +507,15 @@ class KISBroker:
         await self._rate_limiter.acquire()
         session = self._get_session()

+        resolved_session = session_id or classify_session_id(MARKETS["KR"])
+        ranking_market_code = self._kr_router.resolve_for_ranking(resolved_session)
         if ranking_type == "volume":
             # 거래량순위 (volume rank): FHPST01710000 / /quotations/volume-rank
             tr_id = "FHPST01710000"
             url = f"{self._base_url}/uapi/domestic-stock/v1/quotations/volume-rank"
             params: dict[str, str] = {
-                "FID_COND_MRKT_DIV_CODE": "J",
+                "FID_COND_MRKT_DIV_CODE": ranking_market_code,
                 "FID_COND_SCR_DIV_CODE": "20171",
                 "FID_INPUT_ISCD": "0000",
                 "FID_DIV_CLS_CODE": "0",
@@ -416,7 +532,7 @@ class KISBroker:
             tr_id = "FHPST01700000"
             url = f"{self._base_url}/uapi/domestic-stock/v1/ranking/fluctuation"
             params = {
-                "fid_cond_mrkt_div_code": "J",
+                "fid_cond_mrkt_div_code": ranking_market_code,
                 "fid_cond_scr_div_code": "20170",
                 "fid_input_iscd": "0000",
                 "fid_rank_sort_cls_code": "0",

View File

@@ -0,0 +1,48 @@
+from __future__ import annotations
+
+from dataclasses import dataclass
+
+
+@dataclass(frozen=True)
+class ExchangeResolution:
+    exchange_code: str
+    reason: str
+
+
+class KRExchangeRouter:
+    """Resolve domestic exchange routing for KR sessions."""
+
+    def resolve_for_ranking(self, session_id: str) -> str:
+        if session_id in {"NXT_PRE", "NXT_AFTER"}:
+            return "NX"
+        return "J"
+
+    def resolve_for_order(
+        self,
+        *,
+        stock_code: str,
+        session_id: str,
+        is_dual_listed: bool = False,
+        spread_krx: float | None = None,
+        spread_nxt: float | None = None,
+        liquidity_krx: float | None = None,
+        liquidity_nxt: float | None = None,
+    ) -> ExchangeResolution:
+        del stock_code
+        default_exchange = "NXT" if session_id in {"NXT_PRE", "NXT_AFTER"} else "KRX"
+        default_reason = "session_default"
+        if not is_dual_listed:
+            return ExchangeResolution(default_exchange, default_reason)
+        if spread_krx is not None and spread_nxt is not None:
+            if spread_nxt < spread_krx:
+                return ExchangeResolution("NXT", "dual_listing_spread")
+            return ExchangeResolution("KRX", "dual_listing_spread")
+        if liquidity_krx is not None and liquidity_nxt is not None:
+            if liquidity_nxt > liquidity_krx:
+                return ExchangeResolution("NXT", "dual_listing_liquidity")
+            return ExchangeResolution("KRX", "dual_listing_liquidity")
+        return ExchangeResolution(default_exchange, "fallback_data_unavailable")

View File

@@ -12,6 +12,7 @@ import json
 import logging
 import signal
 import threading
+from collections.abc import Awaitable, Callable
 from datetime import UTC, datetime
 from typing import Any
@@ -35,6 +36,7 @@ from src.core.criticality import CriticalityAssessor
 from src.core.kill_switch import KillSwitchOrchestrator
 from src.core.order_policy import (
     OrderPolicyRejected,
+    classify_session_id,
     get_session_info,
     validate_order_policy,
 )
@@ -224,23 +226,27 @@ def _compute_kr_dynamic_stop_loss_pct(
         key="KR_ATR_STOP_MULTIPLIER_K",
         default=2.0,
     )
-    min_pct = _resolve_market_setting(
-        market=market,
-        settings=settings,
-        key="KR_ATR_STOP_MIN_PCT",
-        default=-2.0,
-    )
-    max_pct = _resolve_market_setting(
-        market=market,
-        settings=settings,
-        key="KR_ATR_STOP_MAX_PCT",
-        default=-7.0,
-    )
+    min_pct = float(
+        _resolve_market_setting(
+            market=market,
+            settings=settings,
+            key="KR_ATR_STOP_MIN_PCT",
+            default=-2.0,
+        )
+    )
+    max_pct = float(
+        _resolve_market_setting(
+            market=market,
+            settings=settings,
+            key="KR_ATR_STOP_MAX_PCT",
+            default=-7.0,
+        )
+    )
     if max_pct > min_pct:
         min_pct, max_pct = max_pct, min_pct
     dynamic_stop_pct = -((k * atr_value) / entry_price) * 100.0
-    return max(max_pct, min(min_pct, dynamic_stop_pct))
+    return float(max(max_pct, min(min_pct, dynamic_stop_pct)))
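The dynamic stop above is −(k·ATR / entry) · 100, clamped into the band [max_pct, min_pct]; both bounds are negative percentages, so min_pct (default −2%) is the tighter bound and max_pct (default −7%) the looser one, with a swap guarding against misconfigured settings. A standalone sketch with the defaults shown in the diff:

```python
def kr_dynamic_stop_loss_pct(entry_price: float, atr_value: float,
                             k: float = 2.0, min_pct: float = -2.0,
                             max_pct: float = -7.0) -> float:
    """Condensed form of _compute_kr_dynamic_stop_loss_pct above."""
    if max_pct > min_pct:  # normalize: min_pct is the less-negative (tighter) bound
        min_pct, max_pct = max_pct, min_pct
    dynamic = -((k * atr_value) / entry_price) * 100.0
    return float(max(max_pct, min(min_pct, dynamic)))


# 2 x ATR(1,500) on a 50,000 KRW entry gives -6.0%, inside the [-7, -2] band.
print(kr_dynamic_stop_loss_pct(50000.0, 1500.0))
# A huge ATR clamps at the -7% floor; a tiny ATR clamps at the -2% ceiling.
print(kr_dynamic_stop_loss_pct(50000.0, 5000.0), kr_dynamic_stop_loss_pct(50000.0, 100.0))
```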
 def _stoploss_cooldown_key(*, market: MarketInfo, stock_code: str) -> str:
@@ -1200,6 +1206,7 @@ async def process_blackout_recovery_orders(
                 order_type=intent.order_type,
                 quantity=intent.quantity,
                 price=intent.price,
+                session_id=intent.session_id,
             )
         else:
             result = await overseas_broker.send_overseas_order(
@@ -2083,6 +2090,16 @@ async def trading_cycle(
                 order_type=decision.action,
                 quantity=quantity,
                 price=order_price,
+                session_id=runtime_session_id,
+            )
+            if result.get("rt_cd", "0") != "0":
+                order_succeeded = False
+                msg1 = result.get("msg1") or ""
+                logger.warning(
+                    "KR order not accepted for %s: rt_cd=%s msg=%s",
+                    stock_code,
+                    result.get("rt_cd"),
+                    msg1,
             )
         else:
             # For overseas orders, always use limit orders (지정가):
@@ -2417,6 +2434,7 @@ async def handle_domestic_pending_orders(
                 order_type="SELL",
                 quantity=psbl_qty,
                 price=new_price,
+                session_id=classify_session_id(MARKETS["KR"]),
             )
             sell_resubmit_counts[key] = sell_resubmit_counts.get(key, 0) + 1
             try:
@@ -3292,6 +3310,16 @@ async def run_daily_session(
                 order_type=decision.action,
                 quantity=quantity,
                 price=order_price,
+                session_id=runtime_session_id,
+            )
+            if result.get("rt_cd", "0") != "0":
+                order_succeeded = False
+                daily_msg1 = result.get("msg1") or ""
+                logger.warning(
+                    "KR order not accepted for %s: rt_cd=%s msg=%s",
+                    stock_code,
+                    result.get("rt_cd"),
+                    daily_msg1,
             )
         else:
             # KIS VTS only accepts limit orders; use 0.5% premium for BUY
@@ -3532,6 +3560,47 @@ def _run_context_scheduler(
     )

+def _has_market_session_transition(
+    market_states: dict[str, str], market_code: str, session_id: str
+) -> bool:
+    """Return True when market session changed (or market has no prior state)."""
+    return market_states.get(market_code) != session_id
+
+
+def _should_rescan_market(
+    *, last_scan: float, now_timestamp: float, rescan_interval: float, session_changed: bool
+) -> bool:
+    """Force rescan on session transition; otherwise follow interval cadence."""
+    return session_changed or (now_timestamp - last_scan >= rescan_interval)
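These two helpers are pure functions, which is what makes them easy to unit-test later in the diff. Standalone copies (renamed for this sketch) showing their interaction in the loop:

```python
def has_session_transition(states: dict[str, str], code: str, session_id: str) -> bool:
    """A market with no recorded state also counts as a transition."""
    return states.get(code) != session_id


def should_rescan(*, last_scan: float, now: float, interval: float, changed: bool) -> bool:
    """Session transitions force a rescan; otherwise honor the interval."""
    return changed or (now - last_scan >= interval)


states: dict[str, str] = {}
changed = has_session_transition(states, "KR", "KRX_REG")  # True: no prior state
states["KR"] = "KRX_REG"
# Only 50s of a 300s interval have elapsed, yet the transition forces a rescan.
print(changed, should_rescan(last_scan=1000.0, now=1050.0, interval=300.0, changed=changed))
```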
+async def _run_markets_in_parallel(
+    markets: list[Any], processor: Callable[[Any], Awaitable[None]]
+) -> None:
+    """Run market processors in parallel and fail fast on the first exception."""
+    if not markets:
+        return
+    tasks = [asyncio.create_task(processor(market)) for market in markets]
+    done, pending = await asyncio.wait(tasks, return_when=asyncio.FIRST_EXCEPTION)
+    first_exc: BaseException | None = None
+    for task in done:
+        exc = task.exception()
+        if exc is not None and first_exc is None:
+            first_exc = exc
+    if first_exc is not None:
+        for task in pending:
+            task.cancel()
+        if pending:
+            await asyncio.gather(*pending, return_exceptions=True)
+        raise first_exc
+    if pending:
+        await asyncio.gather(*pending)
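The runner uses `asyncio.wait(..., return_when=FIRST_EXCEPTION)` so one failing market cancels the rest instead of letting them run blind. A self-contained copy (slightly condensed, same behavior) with a demo showing the fail-fast cancellation:

```python
import asyncio
from collections.abc import Awaitable, Callable
from typing import Any


async def run_in_parallel(items: list[Any],
                          processor: Callable[[Any], Awaitable[None]]) -> None:
    """Condensed copy of _run_markets_in_parallel above."""
    if not items:
        return
    tasks = [asyncio.create_task(processor(item)) for item in items]
    done, pending = await asyncio.wait(tasks, return_when=asyncio.FIRST_EXCEPTION)
    first_exc = next((t.exception() for t in done if t.exception() is not None), None)
    if first_exc is not None:
        for task in pending:
            task.cancel()
        if pending:
            # return_exceptions=True swallows the CancelledErrors from the siblings.
            await asyncio.gather(*pending, return_exceptions=True)
        raise first_exc
    if pending:
        await asyncio.gather(*pending)


async def demo() -> None:
    async def worker(n: int) -> None:
        if n == 2:
            raise RuntimeError("market 2 failed")
        await asyncio.sleep(10)  # would block for 10s; gets cancelled instead

    try:
        await run_in_parallel([1, 2, 3], worker)
    except RuntimeError as exc:
        print(f"fail-fast: {exc}")


asyncio.run(demo())
```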
 async def _run_evolution_loop(
     evolution_optimizer: EvolutionOptimizer,
     telegram: TelegramClient,
@@ -4045,7 +4114,7 @@ async def run(settings: Settings) -> None:
     last_scan_time: dict[str, float] = {}

     # Track market open/close state for notifications
-    _market_states: dict[str, bool] = {}  # market_code -> is_open
+    _market_states: dict[str, str] = {}  # market_code -> session_id

     # Trading control events
     shutdown = asyncio.Event()
@@ -4163,8 +4232,8 @@ async def run(settings: Settings) -> None:
         if not open_markets:
             # Notify market close for any markets that were open
-            for market_code, is_open in list(_market_states.items()):
-                if is_open:
+            for market_code, session_id in list(_market_states.items()):
+                if session_id:
                     try:
                         from src.markets.schedule import MARKETS
@@ -4181,7 +4250,7 @@ async def run(settings: Settings) -> None:
                         )
                     except Exception as exc:
                         logger.warning("Market close notification failed: %s", exc)
-                _market_states[market_code] = False
+                _market_states.pop(market_code, None)
                 # Clear playbook for closed market (new one generated next open)
                 playbooks.pop(market_code, None)
@@ -4206,10 +4275,9 @@ async def run(settings: Settings) -> None:
             await asyncio.sleep(TRADE_INTERVAL_SECONDS)
             continue

-        # Process each open market
-        for market in open_markets:
+        async def _process_realtime_market(market: MarketInfo) -> None:
             if shutdown.is_set():
-                break
+                return

             session_info = get_session_info(market)
             _session_risk_overrides(market=market, settings=settings)
@@ -4227,13 +4295,16 @@ async def run(settings: Settings) -> None:
                 settings=settings,
             )

-            # Notify market open if it just opened
-            if not _market_states.get(market.code, False):
+            # Notify on market/session transition (e.g., US_PRE -> US_REG)
+            session_changed = _has_market_session_transition(
+                _market_states, market.code, session_info.session_id
+            )
+            if session_changed:
                 try:
                     await telegram.notify_market_open(market.name)
                 except Exception as exc:
                     logger.warning("Market open notification failed: %s", exc)
-                _market_states[market.code] = True
+                _market_states[market.code] = session_info.session_id

             # Check and handle domestic pending (unfilled) limit orders.
             if market.is_domestic:
@@ -4265,7 +4336,12 @@ async def run(settings: Settings) -> None:
             now_timestamp = asyncio.get_event_loop().time()
             last_scan = last_scan_time.get(market.code, 0.0)
             rescan_interval = settings.RESCAN_INTERVAL_SECONDS
-            if now_timestamp - last_scan >= rescan_interval:
+            if _should_rescan_market(
+                last_scan=last_scan,
+                now_timestamp=now_timestamp,
+                rescan_interval=rescan_interval,
+                session_changed=session_changed,
+            ):
                 try:
                     logger.info("Smart Scanner: Scanning %s market", market.name)
@@ -4290,12 +4366,9 @@ async def run(settings: Settings) -> None:
                     )

                     if candidates:
-                        # Use scanner results directly as trading candidates
                         active_stocks[market.code] = smart_scanner.get_stock_codes(
                             candidates
                         )
-                        # Store candidates per market for selection context logging
                         scan_candidates[market.code] = {c.stock_code: c for c in candidates}

                         logger.info(
@@ -4305,12 +4378,8 @@ async def run(settings: Settings) -> None:
                             [f"{c.stock_code}(RSI={c.rsi:.0f})" for c in candidates],
                         )

-                    # Get market-local date for playbook keying
                     market_today = datetime.now(market.timezone).date()

-                    # Load or generate playbook (1 Gemini call per market per day)
                     if market.code not in playbooks:
-                        # Try DB first (survives process restart)
                         stored_pb = playbook_store.load(market_today, market.code)
                         if stored_pb is not None:
                             playbooks[market.code] = stored_pb
@@ -4370,12 +4439,6 @@ async def run(settings: Settings) -> None:
                 except Exception as exc:
                     logger.error("Smart Scanner failed for %s: %s", market.name, exc)

-            # Get active stocks from scanner (dynamic, no static fallback).
-            # Also include currently-held positions so stop-loss /
-            # take-profit can fire even when a holding drops off the
-            # scanner. Broker balance is the source of truth here —
-            # unlike the local DB it reflects actual fills and any
-            # manual trades done outside the bot.
             scanner_codes = active_stocks.get(market.code, [])
             try:
                 if market.is_domestic:
@@ -4406,16 +4469,14 @@ async def run(settings: Settings) -> None:
             if not stock_codes:
                 logger.debug("No active stocks for market %s", market.code)
-                continue
+                return

             logger.info("Processing market: %s (%d stocks)", market.name, len(stock_codes))

-            # Process each stock from scanner results
             for stock_code in stock_codes:
                 if shutdown.is_set():
                     break

-                # Get playbook for this market
                 market_playbook = playbooks.get(
                     market.code,
                     PreMarketPlanner._empty_playbook(
@@ -4423,7 +4484,6 @@ async def run(settings: Settings) -> None:
                     ),
                 )

-                # Retry logic for connection errors
                 for attempt in range(1, MAX_CONNECTION_RETRIES + 1):
                     try:
                         await trading_cycle(
@@ -4443,7 +4503,7 @@ async def run(settings: Settings) -> None:
                             settings,
                             buy_cooldown,
                         )
-                        break  # Success — exit retry loop
+                        break
                     except CircuitBreakerTripped as exc:
                         logger.critical("Circuit breaker tripped — shutting down")
                         try:
@@ -4465,17 +4525,19 @@ async def run(settings: Settings) -> None:
                                 MAX_CONNECTION_RETRIES,
                                 exc,
                             )
-                            await asyncio.sleep(2**attempt)  # Exponential backoff
+                            await asyncio.sleep(2**attempt)
                         else:
                             logger.error(
                                 "Connection error for %s (all retries exhausted): %s",
                                 stock_code,
                                 exc,
                             )
-                            break  # Give up on this stock
+                            break
                     except Exception as exc:
                         logger.exception("Unexpected error for %s: %s", stock_code, exc)
-                        break  # Don't retry on unexpected errors
+                        break
+
+        await _run_markets_in_parallel(open_markets, _process_realtime_market)

         # Log priority queue metrics periodically
         metrics = await priority_queue.get_metrics()

View File

@@ -207,7 +207,7 @@ def get_open_markets(
         from src.core.order_policy import classify_session_id

         session_id = classify_session_id(market, now)
-        return session_id not in {"KR_OFF", "US_OFF"}
+        return session_id not in {"KR_OFF", "US_OFF", "US_DAY"}
     return is_market_open(market, now)

     open_markets = [
@@ -254,10 +254,10 @@ def get_next_market_open(
     from src.core.order_policy import classify_session_id

     ts = start_utc.astimezone(ZoneInfo("UTC")).replace(second=0, microsecond=0)
-    prev_active = classify_session_id(market, ts) not in {"KR_OFF", "US_OFF"}
+    prev_active = classify_session_id(market, ts) not in {"KR_OFF", "US_OFF", "US_DAY"}
     for _ in range(7 * 24 * 60):
         ts = ts + timedelta(minutes=1)
-        active = classify_session_id(market, ts) not in {"KR_OFF", "US_OFF"}
+        active = classify_session_id(market, ts) not in {"KR_OFF", "US_OFF", "US_DAY"}
         if active and not prev_active:
             return ts
         prev_active = active

View File

@@ -400,6 +400,15 @@ class TestFetchMarketRankings:
         assert result[0]["stock_code"] == "015260"
         assert result[0]["change_rate"] == 29.74

+    @pytest.mark.asyncio
+    async def test_volume_uses_nx_market_code_in_nxt_session(self, broker: KISBroker) -> None:
+        mock_resp = _make_ranking_mock([])
+        with patch("aiohttp.ClientSession.get", return_value=mock_resp) as mock_get:
+            await broker.fetch_market_rankings(ranking_type="volume", session_id="NXT_PRE")
+        params = mock_get.call_args[1].get("params", {})
+        assert params.get("FID_COND_MRKT_DIV_CODE") == "NX"
+

 # ---------------------------------------------------------------------------
 # KRX tick unit / round-down helpers (issue #157)
@@ -591,6 +600,60 @@ class TestSendOrderTickRounding:
         body = order_call[1].get("json", {})
         assert body["ORD_DVSN"] == "01"

+    @pytest.mark.asyncio
+    async def test_send_order_sets_exchange_field_from_session(self, broker: KISBroker) -> None:
+        mock_hash = AsyncMock()
+        mock_hash.status = 200
+        mock_hash.json = AsyncMock(return_value={"HASH": "h"})
+        mock_hash.__aenter__ = AsyncMock(return_value=mock_hash)
+        mock_hash.__aexit__ = AsyncMock(return_value=False)
+        mock_order = AsyncMock()
+        mock_order.status = 200
+        mock_order.json = AsyncMock(return_value={"rt_cd": "0"})
+        mock_order.__aenter__ = AsyncMock(return_value=mock_order)
+        mock_order.__aexit__ = AsyncMock(return_value=False)
+        with patch("aiohttp.ClientSession.post", side_effect=[mock_hash, mock_order]) as mock_post:
+            with patch.object(
+                broker,
+                "_load_dual_listing_metrics",
+                new=AsyncMock(return_value=(False, None, None, None, None)),
+            ):
+                await broker.send_order("005930", "BUY", 1, price=50000, session_id="NXT_PRE")
+        order_call = mock_post.call_args_list[1]
+        body = order_call[1].get("json", {})
+        assert body["EXCG_ID_DVSN_CD"] == "NXT"
+
+    @pytest.mark.asyncio
+    async def test_send_order_prefers_nxt_when_dual_listing_spread_is_tighter(
+        self, broker: KISBroker
+    ) -> None:
+        mock_hash = AsyncMock()
+        mock_hash.status = 200
+        mock_hash.json = AsyncMock(return_value={"HASH": "h"})
+        mock_hash.__aenter__ = AsyncMock(return_value=mock_hash)
+        mock_hash.__aexit__ = AsyncMock(return_value=False)
+        mock_order = AsyncMock()
+        mock_order.status = 200
+        mock_order.json = AsyncMock(return_value={"rt_cd": "0"})
+        mock_order.__aenter__ = AsyncMock(return_value=mock_order)
+        mock_order.__aexit__ = AsyncMock(return_value=False)
+        with patch("aiohttp.ClientSession.post", side_effect=[mock_hash, mock_order]) as mock_post:
+            with patch.object(
+                broker,
+                "_load_dual_listing_metrics",
+                new=AsyncMock(return_value=(True, 0.004, 0.002, 100000.0, 90000.0)),
+            ):
+                await broker.send_order("005930", "BUY", 1, price=50000, session_id="KRX_REG")
+        order_call = mock_post.call_args_list[1]
+        body = order_call[1].get("json", {})
+        assert body["EXCG_ID_DVSN_CD"] == "NXT"
+

 # ---------------------------------------------------------------------------
 # TR_ID live/paper branching (issues #201, #202, #203)
View File

@@ -0,0 +1,40 @@
+from __future__ import annotations
+
+from src.broker.kr_exchange_router import KRExchangeRouter
+
+
+def test_ranking_market_code_by_session() -> None:
+    router = KRExchangeRouter()
+    assert router.resolve_for_ranking("KRX_REG") == "J"
+    assert router.resolve_for_ranking("NXT_PRE") == "NX"
+    assert router.resolve_for_ranking("NXT_AFTER") == "NX"
+
+
+def test_order_exchange_falls_back_to_session_default_on_missing_data() -> None:
+    router = KRExchangeRouter()
+    resolved = router.resolve_for_order(
+        stock_code="0001A0",
+        session_id="NXT_PRE",
+        is_dual_listed=True,
+        spread_krx=None,
+        spread_nxt=None,
+        liquidity_krx=None,
+        liquidity_nxt=None,
+    )
+    assert resolved.exchange_code == "NXT"
+    assert resolved.reason == "fallback_data_unavailable"
+
+
+def test_order_exchange_uses_spread_preference_for_dual_listing() -> None:
+    router = KRExchangeRouter()
+    resolved = router.resolve_for_order(
+        stock_code="0001A0",
+        session_id="KRX_REG",
+        is_dual_listed=True,
+        spread_krx=0.005,
+        spread_nxt=0.003,
+        liquidity_krx=100000.0,
+        liquidity_nxt=90000.0,
+    )
+    assert resolved.exchange_code == "NXT"
+    assert resolved.reason == "dual_listing_spread"

View File

@@ -1,5 +1,6 @@
"""Tests for main trading loop integration.""" """Tests for main trading loop integration."""
import asyncio
from datetime import UTC, date, datetime from datetime import UTC, date, datetime
from typing import Any from typing import Any
from unittest.mock import ANY, AsyncMock, MagicMock, patch from unittest.mock import ANY, AsyncMock, MagicMock, patch
@@ -34,6 +35,7 @@ from src.main import (
    _extract_held_codes_from_balance,
    _extract_held_qty_from_balance,
    _handle_market_close,
    _has_market_session_transition,
    _inject_staged_exit_features,
    _maybe_queue_order_intent,
    _resolve_market_setting,
@@ -41,8 +43,10 @@ from src.main import (
    _retry_connection,
    _run_context_scheduler,
    _run_evolution_loop,
    _run_markets_in_parallel,
    _should_block_overseas_buy_for_fx_buffer,
    _should_force_exit_for_overnight,
    _should_rescan_market,
    _split_trade_pnl_components,
    _start_dashboard_server,
    _stoploss_cooldown_minutes,
@@ -140,6 +144,63 @@ class TestExtractAvgPriceFromBalance:
        result = _extract_avg_price_from_balance(balance, "AAPL", is_domestic=False)
        assert result == 170.5
class TestRealtimeSessionStateHelpers:
    """Tests for realtime loop session-transition/rescan helper logic."""

    def test_has_market_session_transition_when_state_missing(self) -> None:
        states: dict[str, str] = {}
        assert _has_market_session_transition(states, "US_NASDAQ", "US_REG")

    def test_has_market_session_transition_when_session_changes(self) -> None:
        states = {"US_NASDAQ": "US_PRE"}
        assert _has_market_session_transition(states, "US_NASDAQ", "US_REG")

    def test_has_market_session_transition_false_when_same_session(self) -> None:
        states = {"US_NASDAQ": "US_REG"}
        assert not _has_market_session_transition(states, "US_NASDAQ", "US_REG")

    def test_should_rescan_market_forces_on_session_transition(self) -> None:
        assert _should_rescan_market(
            last_scan=1000.0,
            now_timestamp=1050.0,
            rescan_interval=300.0,
            session_changed=True,
        )

    def test_should_rescan_market_uses_interval_without_transition(self) -> None:
        assert not _should_rescan_market(
            last_scan=1000.0,
            now_timestamp=1050.0,
            rescan_interval=300.0,
            session_changed=False,
        )


class TestMarketParallelRunner:
    """Tests for market-level parallel processing helper."""

    @pytest.mark.asyncio
    async def test_run_markets_in_parallel_runs_all_markets(self) -> None:
        processed: list[str] = []

        async def _processor(market: str) -> None:
            await asyncio.sleep(0.01)
            processed.append(market)

        await _run_markets_in_parallel(["KR", "US_NASDAQ", "US_NYSE"], _processor)
        assert set(processed) == {"KR", "US_NASDAQ", "US_NYSE"}

    @pytest.mark.asyncio
    async def test_run_markets_in_parallel_propagates_errors(self) -> None:
        async def _processor(market: str) -> None:
            if market == "US_NASDAQ":
                raise RuntimeError("boom")
            await asyncio.sleep(0.01)

        with pytest.raises(RuntimeError, match="boom"):
            await _run_markets_in_parallel(["KR", "US_NASDAQ"], _processor)
    def test_returns_zero_when_field_absent(self) -> None:
        """Returns 0.0 when pchs_avg_pric key is missing entirely."""
        balance = {"output1": [{"pdno": "005930", "ord_psbl_qty": "5"}]}
@@ -913,6 +974,46 @@ class TestTradingCycleTelegramIntegration:
        # Verify notification was attempted
        mock_telegram.notify_trade_execution.assert_called_once()
    @pytest.mark.asyncio
    async def test_kr_rejected_order_does_not_notify_or_log_trade(
        self,
        mock_broker: MagicMock,
        mock_overseas_broker: MagicMock,
        mock_scenario_engine: MagicMock,
        mock_playbook: DayPlaybook,
        mock_risk: MagicMock,
        mock_db: MagicMock,
        mock_decision_logger: MagicMock,
        mock_context_store: MagicMock,
        mock_criticality_assessor: MagicMock,
        mock_telegram: MagicMock,
        mock_market: MagicMock,
    ) -> None:
        """KR orders rejected by KIS should not trigger success side effects."""
        mock_broker.send_order = AsyncMock(
            return_value={"rt_cd": "1", "msg1": "장운영시간이 아닙니다."}
        )
        with patch("src.main.log_trade") as mock_log_trade:
            await trading_cycle(
                broker=mock_broker,
                overseas_broker=mock_overseas_broker,
                scenario_engine=mock_scenario_engine,
                playbook=mock_playbook,
                risk=mock_risk,
                db_conn=mock_db,
                decision_logger=mock_decision_logger,
                context_store=mock_context_store,
                criticality_assessor=mock_criticality_assessor,
                telegram=mock_telegram,
                market=mock_market,
                stock_code="005930",
                scan_candidates={},
            )
        mock_telegram.notify_trade_execution.assert_not_called()
        mock_log_trade.assert_not_called()
    @pytest.mark.asyncio
    async def test_fat_finger_notification_sent(
        self,

View File

@@ -165,6 +165,17 @@ class TestGetOpenMarkets:
        )
        assert {m.code for m in extended} == {"US_NASDAQ", "US_NYSE", "US_AMEX"}
    def test_get_open_markets_excludes_us_day_when_extended_enabled(self) -> None:
        """US_DAY should be treated as non-tradable even in extended-session lookup."""
        # Monday 2026-02-02 10:30 KST = 01:30 UTC (US_DAY by session classification)
        test_time = datetime(2026, 2, 2, 1, 30, tzinfo=ZoneInfo("UTC"))
        extended = get_open_markets(
            enabled_markets=["US_NASDAQ", "US_NYSE", "US_AMEX"],
            now=test_time,
            include_extended_sessions=True,
        )
        assert extended == []
class TestGetNextMarketOpen:
    """Test get_next_market_open function."""
@@ -214,8 +225,8 @@ class TestGetNextMarketOpen:
    def test_get_next_market_open_prefers_extended_session(self) -> None:
        """Extended lookup should return premarket open time before regular open."""
        # Monday 2026-02-02 07:00 EST = 12:00 UTC
        # US_DAY is treated as non-tradable in extended lookup, so after entering
        # US_DAY the next tradable OFF->ON transition is US_PRE at 09:00 UTC next day.
        test_time = datetime(2026, 2, 2, 12, 0, tzinfo=ZoneInfo("UTC"))
        market, next_open = get_next_market_open(
            enabled_markets=["US_NASDAQ"],
@@ -223,7 +234,7 @@ class TestGetNextMarketOpen:
            include_extended_sessions=True,
        )
        assert market.code == "US_NASDAQ"
        assert next_open == datetime(2026, 2, 3, 9, 0, tzinfo=ZoneInfo("UTC"))


class TestExpandMarketCodes:

View File

@@ -0,0 +1,160 @@
from __future__ import annotations

import os
import signal
import socket
import subprocess
from pathlib import Path

import pytest

REPO_ROOT = Path(__file__).resolve().parent.parent
RUN_OVERNIGHT = REPO_ROOT / "scripts" / "run_overnight.sh"
RUNTIME_MONITOR = REPO_ROOT / "scripts" / "runtime_verify_monitor.sh"


def _latest_runtime_log(log_dir: Path) -> str:
    logs = sorted(log_dir.glob("runtime_verify_*.log"))
    assert logs, "runtime monitor did not produce log output"
    return logs[-1].read_text(encoding="utf-8")


def test_runtime_verify_monitor_detects_live_process_without_pid_files(tmp_path: Path) -> None:
    log_dir = tmp_path / "overnight"
    log_dir.mkdir(parents=True, exist_ok=True)
    fake_live = subprocess.Popen(
        ["bash", "-lc", 'exec -a "src.main --mode=live" sleep 10'],
        cwd=REPO_ROOT,
    )
    try:
        env = os.environ.copy()
        env.update(
            {
                "ROOT_DIR": str(REPO_ROOT),
                "LOG_DIR": str(log_dir),
                "INTERVAL_SEC": "1",
                "MAX_HOURS": "1",
                "MAX_LOOPS": "1",
                "POLICY_TZ": "UTC",
            }
        )
        completed = subprocess.run(
            ["bash", str(RUNTIME_MONITOR)],
            cwd=REPO_ROOT,
            env=env,
            capture_output=True,
            text=True,
            check=False,
        )
        assert completed.returncode == 0, completed.stderr
        log_text = _latest_runtime_log(log_dir)
        assert "app_alive=1" in log_text
        assert "[COVERAGE] LIVE_MODE=PASS source=process_liveness" in log_text
        assert "[ANOMALY]" not in log_text
    finally:
        fake_live.terminate()
        fake_live.wait(timeout=5)


def test_run_overnight_fails_fast_when_dashboard_port_in_use(tmp_path: Path) -> None:
    log_dir = tmp_path / "overnight"
    log_dir.mkdir(parents=True, exist_ok=True)
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.bind(("127.0.0.1", 0))
    sock.listen(1)
    port = sock.getsockname()[1]
    try:
        env = os.environ.copy()
        env.update(
            {
                "LOG_DIR": str(log_dir),
                "TMUX_AUTO": "false",
                "DASHBOARD_PORT": str(port),
            }
        )
        completed = subprocess.run(
            ["bash", str(RUN_OVERNIGHT)],
            cwd=REPO_ROOT,
            env=env,
            capture_output=True,
            text=True,
            check=False,
        )
        assert completed.returncode != 0
        output = f"{completed.stdout}\n{completed.stderr}"
        assert "already in use" in output
    finally:
        sock.close()


def test_run_overnight_writes_live_pid_and_watchdog_pid(tmp_path: Path) -> None:
    log_dir = tmp_path / "overnight"
    log_dir.mkdir(parents=True, exist_ok=True)
    env = os.environ.copy()
    env.update(
        {
            "LOG_DIR": str(log_dir),
            "TMUX_AUTO": "false",
            "STARTUP_GRACE_SEC": "1",
            "CHECK_INTERVAL": "2",
            "APP_CMD_BIN": "sleep",
            "APP_CMD_ARGS": "10",
        }
    )
    completed = subprocess.run(
        ["bash", str(RUN_OVERNIGHT)],
        cwd=REPO_ROOT,
        env=env,
        capture_output=True,
        text=True,
        check=False,
    )
    assert completed.returncode == 0, f"{completed.stdout}\n{completed.stderr}"
    app_pid = int((log_dir / "app.pid").read_text(encoding="utf-8").strip())
    watchdog_pid = int((log_dir / "watchdog.pid").read_text(encoding="utf-8").strip())
    os.kill(app_pid, 0)
    os.kill(watchdog_pid, 0)
    for pid in (watchdog_pid, app_pid):
        try:
            os.kill(pid, signal.SIGTERM)
        except ProcessLookupError:
            pass


def test_run_overnight_fails_when_process_exits_before_grace_period(tmp_path: Path) -> None:
    log_dir = tmp_path / "overnight"
    log_dir.mkdir(parents=True, exist_ok=True)
    env = os.environ.copy()
    env.update(
        {
            "LOG_DIR": str(log_dir),
            "TMUX_AUTO": "false",
            "STARTUP_GRACE_SEC": "1",
            "APP_CMD_BIN": "false",
        }
    )
    completed = subprocess.run(
        ["bash", str(RUN_OVERNIGHT)],
        cwd=REPO_ROOT,
        env=env,
        capture_output=True,
        text=True,
        check=False,
    )
    assert completed.returncode != 0
    output = f"{completed.stdout}\n{completed.stderr}"
    assert "startup failed:" in output
    watchdog_pid_file = log_dir / "watchdog.pid"
    if watchdog_pid_file.exists():
        watchdog_pid = int(watchdog_pid_file.read_text(encoding="utf-8").strip())
        with pytest.raises(ProcessLookupError):
            os.kill(watchdog_pid, 0)
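The fail-fast port check that `run_overnight.sh` is tested for above can be sketched as a bind probe. This is a hypothetical helper (the shell script itself is not shown in this diff); `dashboard_port_in_use` is an invented name:

```python
import socket


def dashboard_port_in_use(port: int, host: str = "127.0.0.1") -> bool:
    # If we cannot bind the dashboard port, some other process already owns
    # it, so the launcher should exit non-zero before starting the app.
    probe = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        probe.bind((host, port))
    except OSError:
        return True
    finally:
        probe.close()
    return False
```

A bind probe is preferable to a connect probe here: it reports the port as busy even when the owner is bound but not yet accepting connections.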

View File

@@ -103,6 +103,33 @@ class TestSmartVolatilityScanner:
        assert candidates[0].stock_code == "005930"
        assert candidates[0].signal == "oversold"
    @pytest.mark.asyncio
    async def test_scan_domestic_passes_session_id_to_rankings(
        self, scanner: SmartVolatilityScanner, mock_broker: MagicMock
    ) -> None:
        fluctuation_rows = [
            {
                "stock_code": "005930",
                "name": "Samsung",
                "price": 70000,
                "volume": 5000000,
                "change_rate": 1.0,
                "volume_increase_rate": 120,
            },
        ]
        mock_broker.fetch_market_rankings.side_effect = [fluctuation_rows, fluctuation_rows]
        mock_broker.get_daily_prices.return_value = [
            {"open": 1, "high": 71000, "low": 69000, "close": 70000, "volume": 1000000},
            {"open": 1, "high": 70000, "low": 68000, "close": 69000, "volume": 900000},
        ]
        await scanner.scan(domestic_session_id="NXT_PRE")
        first_call = mock_broker.fetch_market_rankings.call_args_list[0]
        second_call = mock_broker.fetch_market_rankings.call_args_list[1]
        assert first_call.kwargs["session_id"] == "NXT_PRE"
        assert second_call.kwargs["session_id"] == "NXT_PRE"
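The session plumbing this test pins down can be reduced to a small sketch. `SessionAwareScanner` and the positional ranking-type argument are hypothetical stand-ins (the real `SmartVolatilityScanner.scan` is not shown in this diff); the invariant is only that every KR ranking fetch forwards the caller-selected `session_id`:

```python
import asyncio
from unittest.mock import AsyncMock, MagicMock


class SessionAwareScanner:
    """Hypothetical reduction of the scan() session plumbing tested above."""

    def __init__(self, broker) -> None:
        self._broker = broker

    async def scan(self, domestic_session_id: str = "KRX_REG"):
        # Both ranking fetches forward the session id, so NXT pre/after-hours
        # scans query the NXT venue instead of the regular KRX rankings.
        fluctuation = await self._broker.fetch_market_rankings(
            "fluctuation", session_id=domestic_session_id
        )
        volume = await self._broker.fetch_market_rankings(
            "volume", session_id=domestic_session_id
        )
        return fluctuation, volume


broker = MagicMock()
broker.fetch_market_rankings = AsyncMock(return_value=[])
asyncio.run(SessionAwareScanner(broker).scan(domestic_session_id="NXT_PRE"))
for call in broker.fetch_market_rankings.call_args_list:
    assert call.kwargs["session_id"] == "NXT_PRE"
```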
@pytest.mark.asyncio @pytest.mark.asyncio
async def test_scan_domestic_finds_momentum_candidate( async def test_scan_domestic_finds_momentum_candidate(
self, scanner: SmartVolatilityScanner, mock_broker: MagicMock self, scanner: SmartVolatilityScanner, mock_broker: MagicMock

View File

@@ -24,9 +24,24 @@ def test_validate_pr_body_text_detects_escaped_newline() -> None:
    assert any("escaped newline" in err for err in errors)
def test_validate_pr_body_text_detects_escaped_newline_in_multiline_body() -> None:
    module = _load_module()
    text = "## Summary\n- first line\n- broken line with \\n literal"
    errors = module.validate_pr_body_text(text)
    assert any("escaped newline" in err for err in errors)


def test_validate_pr_body_text_allows_escaped_newline_in_code_blocks() -> None:
    module = _load_module()
    text = "\n".join(
        [
            "## Summary",
            "- example uses `\\n` for explanation",
            "```bash",
            "printf 'line1\\nline2\\n'",
            "```",
        ]
    )
    assert module.validate_pr_body_text(text) == []
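The validator itself lives in the script loaded by `_load_module()` and is not shown here. A sketch consistent with the tests above, assuming fence and inline-code exemptions are the only special cases (the error wording is invented, matching only the "escaped newline" substring the tests check):

```python
import re


def validate_pr_body_text(text: str) -> list[str]:
    """Flag literal backslash-n sequences outside code in a PR body."""
    errors: list[str] = []
    in_fence = False
    for line in text.splitlines():
        if line.strip().startswith("`" * 3):
            # Fenced code blocks may legitimately contain \n sequences.
            in_fence = not in_fence
            continue
        if in_fence:
            continue
        # Drop inline code spans so `\n` used for explanation is allowed.
        stripped = re.sub(r"`[^`]*`", "", line)
        if "\\n" in stripped:
            errors.append(f"escaped newline sequence outside code: {line!r}")
    return errors
```

This catches the common failure mode where a shell-quoted PR body collapses to one line with literal `\n` separators, while still permitting `\n` inside code examples.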
@@ -63,12 +78,13 @@ def test_fetch_pr_body_reads_body_from_tea_api(monkeypatch) -> None:
    module = _load_module()

    def fake_run(cmd, check, capture_output, text):  # noqa: ANN001
        assert cmd[0] == "/tmp/tea-bin"
        assert check is True
        assert capture_output is True
        assert text is True
        return SimpleNamespace(stdout=json.dumps({"body": "## Summary\n- item"}))

    monkeypatch.setattr(module, "resolve_tea_binary", lambda: "/tmp/tea-bin")
    monkeypatch.setattr(module.subprocess, "run", fake_run)
    assert module.fetch_pr_body(391) == "## Summary\n- item"
@@ -79,6 +95,32 @@ def test_fetch_pr_body_rejects_non_string_body(monkeypatch) -> None:
    def fake_run(cmd, check, capture_output, text):  # noqa: ANN001
        return SimpleNamespace(stdout=json.dumps({"body": 123}))

    monkeypatch.setattr(module, "resolve_tea_binary", lambda: "/tmp/tea-bin")
    monkeypatch.setattr(module.subprocess, "run", fake_run)
    with pytest.raises(RuntimeError):
        module.fetch_pr_body(391)
def test_resolve_tea_binary_falls_back_to_home_bin(monkeypatch, tmp_path) -> None:
    module = _load_module()
    tea_home = tmp_path / "bin" / "tea"
    tea_home.parent.mkdir(parents=True)
    tea_home.write_text("#!/usr/bin/env bash\n", encoding="utf-8")
    tea_home.chmod(0o755)
    monkeypatch.setattr(module.shutil, "which", lambda _: None)
    monkeypatch.setattr(module.Path, "home", lambda: tmp_path)
    assert module.resolve_tea_binary() == str(tea_home)


def test_resolve_tea_binary_rejects_non_executable_home_bin(monkeypatch, tmp_path) -> None:
    module = _load_module()
    tea_home = tmp_path / "bin" / "tea"
    tea_home.parent.mkdir(parents=True)
    tea_home.write_text("not executable\n", encoding="utf-8")
    tea_home.chmod(0o644)
    monkeypatch.setattr(module.shutil, "which", lambda _: None)
    monkeypatch.setattr(module.Path, "home", lambda: tmp_path)
    with pytest.raises(RuntimeError):
        module.resolve_tea_binary()
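A `resolve_tea_binary` consistent with the two tests above might look like the following sketch (the real helper is in the loaded script module and is not shown; only the PATH-then-`~/bin/tea` fallback and the executable check are pinned down by the tests):

```python
import os
import shutil
from pathlib import Path


def resolve_tea_binary() -> str:
    # Prefer a `tea` found on PATH; otherwise fall back to ~/bin/tea, which
    # must be an executable file. Raising keeps fetch_pr_body from invoking
    # a missing or non-executable binary.
    found = shutil.which("tea")
    if found:
        return found
    fallback = Path.home() / "bin" / "tea"
    if fallback.is_file() and os.access(fallback, os.X_OK):
        return str(fallback)
    raise RuntimeError("tea binary not found on PATH or at ~/bin/tea")
```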

View File

@@ -137,3 +137,11 @@
- next_ticket: #377
- process_gate_checked: process_ticket=#306,#308 merged_to_feature_branch=yes
- risks_or_notes: run the refresh step at most 3 times (initial + 2 retries), retrying with exponential backoff on failure; stop immediately on success, and on exhaustion log the error and continue with the next step.
### 2026-03-04 | session=codex-issue409-start
- branch: feature/issue-409-kr-session-exchange-routing
- docs_checked: docs/workflow.md, docs/commands.md, docs/agent-constraints.md
- open_issues_reviewed: #409, #318, #325
- next_ticket: #409
- process_gate_checked: process_ticket=#306,#308 merged_to_feature_branch=yes
- risks_or_notes: after the #409 code fix and verification, restart the program and run 24h runtime monitoring; file a separate issue for any monitor anomalies