# Compare commits

`200bc82a27...feature/is` — 56 commits:

9fd9c552f3, c80f3daad7, 100586e237, 86733ef830, 296b89d95f, fa89499ccb, b227554e9e, c217e8cd72, bcbbf80d16, dc0775cbc6, c412412f7b, 3cde8779fa, 370ee8cc85, 528e17a29c, d2f3fe9108, 12bcccab42, ef16cf8800, 3c58c5d110, 8ecd3ac55f, 79ad108e2f, d9cf056df8, bd9286a39f, f4f8827353, 7d24f19cc4, 7cd818f1e2, 7c17535c3d, 453d67b91c, ade5971387, 87683a88b4, b34937ea9d, ba2370e40e, 1c41379815, 5e4c94bfeb, 2332ba868f, f6e4cc7ea9, 2776a074b5, 0fb56a4a1a, 7e9738d5df, 96d2c97fe7, 4710aa2d66, ca9e1ad0e2, 928e60877c, 16ddc22d14, 4f21117eca, 8e02b1ea4f, ccceb38483, 96e5de7c5d, 7959b749c7, f7e242d147, 589cc42e00, 920630e30e, d4f37ee392, 3914f24872, ed713fdf40, c27decb6b1, 8ac7436953
**CLAUDE.md** (@@ -1,187 +1,9 @@)

The 187-line README body was removed from `CLAUDE.md` and moved into the new `agents.md` (shown in full in the next section). The file now contains only a 9-line pointer:

```markdown
# Agent Entry Point

This file moved to [agents.md](./agents.md).

Follow `agents.md` as the single source of truth for Claude/Codex session behavior and project workflow gates.

Core process references:
- [Workflow Guide](docs/workflow.md)
- [Command Reference](docs/commands.md)
```
**agents.md** (new file, @@ -0,0 +1,199 @@)

# The Ouroboros

AI-powered trading agent for global stock markets with self-evolution capabilities.

## Agent Workflow Gate (Claude/Codex)

Before any implementation, both Claude and Codex must align on the same project process:

1. Read `docs/workflow.md` first (branch policy, issue/PR flow, merge rules).
2. Read `docs/commands.md` for required verification commands and failure handling.
3. Read `docs/agent-constraints.md` and `docs/agents.md` for safety constraints.
4. Check `workflow/session-handover.md` and append a session entry when starting or handing off work.
5. Confirm the current branch is based on `main` or an explicitly designated temporary/base branch before editing.

If any instructions conflict, default to the safer path and document the reason in the handover log.
## Quick Start

```bash
# Setup
pip install -e ".[dev]"
cp .env.example .env
# Edit .env with your KIS and Gemini API credentials

# Test
pytest -v --cov=src

# Run (paper trading)
python -m src.main --mode=paper

# Run with dashboard
python -m src.main --mode=paper --dashboard
```
## Telegram Notifications (Optional)

Get real-time alerts for trades, circuit breakers, and system events via Telegram.

### Quick Setup

1. **Create bot**: Message [@BotFather](https://t.me/BotFather) on Telegram → `/newbot`
2. **Get chat ID**: Message [@userinfobot](https://t.me/userinfobot) → `/start`
3. **Configure**: Add to `.env`:

```bash
TELEGRAM_BOT_TOKEN=1234567890:ABCdefGHIjklMNOpqrsTUVwxyz
TELEGRAM_CHAT_ID=123456789
TELEGRAM_ENABLED=true
```

4. **Test**: Start a bot conversation (`/start`), then run the agent

**Full documentation**: [src/notifications/README.md](src/notifications/README.md)

### What You'll Get

- 🟢 Trade execution alerts (BUY/SELL with confidence)
- 🚨 Circuit breaker trips (automatic trading halt)
- ⚠️ Fat-finger rejections (oversized orders blocked)
- ℹ️ Market open/close notifications
- 📝 System startup/shutdown status

### Interactive Commands

With `TELEGRAM_COMMANDS_ENABLED=true` (default), the bot supports 9 bidirectional commands: `/help`, `/status`, `/positions`, `/report`, `/scenarios`, `/review`, `/dashboard`, `/stop`, `/resume`.

**Fail-safe**: Notifications never crash the trading system. Missing credentials or API errors are logged, but trading continues normally.
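The fail-safe behavior described above can be sketched as a wrapper that logs and swallows sender errors. This is an illustrative pattern only, not the project's actual notifier API; `notify_safe` and `broken_sender` are hypothetical names:

```python
import logging

logger = logging.getLogger("notifications")

def notify_safe(send_fn, message: str) -> bool:
    """Send a notification; never let a failure propagate into the trading loop."""
    try:
        send_fn(message)
        return True
    except Exception as exc:  # broad on purpose: notifications must never crash trading
        logger.warning("notification failed, trading continues: %s", exc)
        return False

# A sender with missing credentials simply fails and is logged:
def broken_sender(message: str) -> None:
    raise RuntimeError("TELEGRAM_BOT_TOKEN not configured")

ok = notify_safe(broken_sender, "BUY AAPL x10 (confidence 85)")
print(ok)  # False — the error was logged, nothing was raised
```

The trading loop only ever sees the boolean, so a Telegram outage degrades to missing alerts rather than a halted system.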
## Smart Volatility Scanner (Optional)

A Python-first filtering pipeline that reduces Gemini API calls by pre-filtering stocks with technical indicators.

### How It Works

1. **Fetch Rankings** — KIS API volume surge rankings (top 30 stocks)
2. **Python Filter** — RSI + volume-ratio calculations (no AI)
   - Volume > 200% of the previous day
   - RSI(14) < 30 (oversold) OR RSI(14) > 70 (momentum)
3. **AI Judgment** — Only qualified candidates (1-3 stocks) are sent to Gemini

### Configuration

Add to `.env` (optional; sensible defaults apply):

```bash
RSI_OVERSOLD_THRESHOLD=30   # 0-50, default 30
RSI_MOMENTUM_THRESHOLD=70   # 50-100, default 70
VOL_MULTIPLIER=2.0          # Volume threshold (2.0 = 200%)
SCANNER_TOP_N=3             # Max candidates per scan
```

### Benefits

- **Reduces API costs** — Process 1-3 stocks instead of 20-30
- **Python-based filtering** — Fast technical analysis before AI
- **Evolution-ready** — Selection context logged for strategy optimization
- **Fault-tolerant** — Falls back to a static watchlist on API failure

### Trading Mode Integration

The Smart Scanner runs in both the `TRADE_MODE=realtime` and `daily` paths. On API failure, domestic stocks fall back to a static watchlist; overseas stocks fall back to a dynamic universe (active positions, recent holdings).
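The filter stage above can be sketched in a few lines. This is a simplified illustration under stated assumptions (a simple-average RSI variant rather than Wilder smoothing, plain dicts for candidates); it is not the project's actual `src/analysis/` code:

```python
def rsi(closes: list[float], period: int = 14) -> float:
    """RSI over the last `period` price changes (simple-average variant)."""
    deltas = [b - a for a, b in zip(closes[:-1], closes[1:])][-period:]
    gains = sum(d for d in deltas if d > 0)
    losses = sum(-d for d in deltas if d < 0)
    if losses == 0:
        return 100.0
    rs = gains / losses
    return 100.0 - 100.0 / (1.0 + rs)

def prefilter(candidates, vol_multiplier=2.0, oversold=30.0, momentum=70.0, top_n=3):
    """Keep stocks whose volume surged AND whose RSI is oversold or in momentum."""
    qualified = []
    for c in candidates:
        if c["volume"] <= vol_multiplier * c["prev_volume"]:
            continue  # volume gate: must exceed 200% of the previous day
        r = rsi(c["closes"])
        if r < oversold or r > momentum:
            qualified.append(c["symbol"])
    return qualified[:top_n]  # only these few reach the AI judgment stage
```

Only the handful of symbols returned by `prefilter` are sent to Gemini, which is where the API-cost saving comes from.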
## Documentation

- **[Documentation Hub](docs/README.md)** — Top-level doc routing and reading order
- **[Workflow Guide](docs/workflow.md)** — Git workflow policy and agent-based development
- **[Command Reference](docs/commands.md)** — Common failures, build commands, troubleshooting
- **[Architecture](docs/architecture.md)** — System design, components, data flow
- **[Context Tree](docs/context-tree.md)** — L1-L7 hierarchical memory system
- **[Testing](docs/testing.md)** — Test structure, coverage requirements, writing tests
- **[Agent Policies](docs/agents.md)** — Prime directives, constraints, prohibited actions
- **[Requirements Log](docs/requirements-log.md)** — User requirements and feedback tracking
- **[Live Trading Checklist](docs/live-trading-checklist.md)** — Paper-to-live transition checklist
## Core Principles

1. **Safety First** — Risk manager is READ-ONLY and enforces circuit breakers
2. **Test Everything** — 80% coverage minimum; all changes require tests
3. **Issue-Driven Development** — All work goes through Gitea issues → feature branches → PRs
4. **Agent Specialization** — Use dedicated agents for design, coding, testing, docs, review

## Requirements Management

User requirements and feedback are tracked in [docs/requirements-log.md](docs/requirements-log.md):

- New requirements are added chronologically with dates
- Code changes should reference related requirements
- Keeps project evolution aligned with user needs
- Preserves context across conversations and development cycles
## Project Structure

```
src/
├── analysis/        # Technical analysis (RSI, volatility, smart scanner)
├── backup/          # Disaster recovery (scheduler, cloud storage, health)
├── brain/           # Gemini AI decision engine (prompt optimizer, context selector)
├── broker/          # KIS API client (domestic + overseas)
├── context/         # L1-L7 hierarchical memory system
├── core/            # Risk manager (READ-ONLY)
├── dashboard/       # FastAPI read-only monitoring (10 API endpoints)
├── data/            # External data integration (news, market data, calendar)
├── evolution/       # Self-improvement (optimizer, daily review, scorecard)
├── logging/         # Decision logger (audit trail)
├── markets/         # Market schedules and timezone handling
├── notifications/   # Telegram alerts + bidirectional commands (9 commands)
├── strategy/        # Pre-market planner, scenario engine, playbook store
├── db.py            # SQLite trade logging
├── main.py          # Trading loop orchestrator
└── config.py        # Settings (from .env)

tests/               # 998 tests across 41 files
docs/                # Extended documentation
```
## Key Commands

```bash
pytest -v --cov=src                           # Run tests with coverage
ruff check src/ tests/                        # Lint
mypy src/ --strict                            # Type check

python -m src.main --mode=paper               # Paper trading
python -m src.main --mode=paper --dashboard   # With dashboard
python -m src.main --mode=live                # Live trading (⚠️ real money)

# Gitea workflow (requires tea CLI)
YES="" ~/bin/tea issues create --repo jihoson/The-Ouroboros --title "..." --description "..."
YES="" ~/bin/tea pulls create --head feature-branch --base main --title "..." --description "..."
```
## Markets Supported

- 🇰🇷 Korea (KRX)
- 🇺🇸 United States (NASDAQ, NYSE, AMEX)
- 🇯🇵 Japan (TSE)
- 🇭🇰 Hong Kong (SEHK)
- 🇨🇳 China (Shanghai, Shenzhen)
- 🇻🇳 Vietnam (Hanoi, HCM)

Markets are auto-detected by timezone and enabled via the `ENABLED_MARKETS` env variable.
## Critical Constraints

⚠️ **Non-Negotiable Rules** (see [docs/agents.md](docs/agents.md)):

- `src/core/risk_manager.py` is **READ-ONLY** — changes require human approval
- Circuit breaker at -3.0% P&L — may only be made **stricter**
- Fat-finger protection: max 30% of cash per order — always enforced
- Confidence thresholds (per `market_outlook`; may not be lowered): BEARISH ≥ 90, NEUTRAL/default ≥ 80, BULLISH ≥ 75
- All code changes → corresponding tests → coverage ≥ 80%
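The order-gate rules above can be sketched as a single check. The names and structure here are illustrative only — the real enforcement lives in the READ-ONLY risk manager:

```python
# Thresholds from the rules above; NEUTRAL doubles as the default.
MIN_CONFIDENCE = {"BEARISH": 90, "NEUTRAL": 80, "BULLISH": 75}
MAX_ORDER_CASH_FRACTION = 0.30  # fat-finger guard: at most 30% of cash per order

def order_allowed(order_value: float, cash: float, confidence: int, outlook: str) -> bool:
    """Reject oversized orders, then apply the outlook-specific confidence floor."""
    if order_value > MAX_ORDER_CASH_FRACTION * cash:
        return False  # fat-finger rejection
    threshold = MIN_CONFIDENCE.get(outlook, MIN_CONFIDENCE["NEUTRAL"])
    return confidence >= threshold
```

Note the asymmetry: a bearish outlook demands much higher confidence (≥ 90) than a bullish one (≥ 75), and neither floor may be lowered.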
## Contributing

See [docs/workflow.md](docs/workflow.md) for the complete development process.

**TL;DR:**

1. Create an issue in Gitea
2. Create a feature branch: `feature/issue-N-description`
3. Implement with tests
4. Open a PR
5. Merge after review
````diff
@@ -59,6 +59,18 @@ scripts/tea_comment.sh 374 /tmp/comment.md
 - `scripts/tea_comment.sh` accepts stdin with `-` as body source.
 - The helper fails fast when body looks like escaped-newline text only.
+
+#### PR Body Post-Check (Mandatory)
+
+Immediately after creating a PR, verify that the body was not mangled into literal `\n` text.
+
+```bash
+python3 scripts/validate_pr_body.py --pr <PR_NUMBER>
+```
+
+If validation fails:
+
+- Fix the PR body immediately via an API patch or a file-based body
+- Proceed to review/merge only after the same command passes on re-validation
 
 #### ❌ TTY Error - Interactive Confirmation Fails
 ```bash
 ~/bin/tea issues create --repo X --title "Y" --description "Z"
 ```
````
---

```diff
@@ -1,6 +1,6 @@
 <!--
 Doc-ID: DOC-REQ-001
-Version: 1.0.3
+Version: 1.0.12
 Status: active
 Owner: strategy
 Updated: 2026-03-02
```
```diff
@@ -19,14 +19,14 @@ Updated: 2026-03-02
 - `REQ-V2-005`: Labeling must use the Triple Barrier (Upper/Lower/Time) method.
 - `REQ-V2-006`: Validation enforces walk-forward + purge/embargo.
 - `REQ-V2-007`: A backtest is not acceptable unless it models costs, slippage, and fill failures.
-- `REQ-V2-008`: The Kill Switch order is block new orders -> cancel open orders -> refresh -> reduce risk -> snapshot.
+- `REQ-V2-008`: The Kill Switch order is block new orders -> cancel open orders -> refresh (on failure, retry up to 3 times with 1s/2s backoff; stop as soon as one succeeds) -> reduce risk -> snapshot.
 
 ## v3 Core Requirements
 
 - `REQ-V3-001`: Every signal/order/log must include a `session_id`.
 - `REQ-V3-002`: Risk parameters must be reloaded on session transitions.
 - `REQ-V3-003`: New orders are prohibited during broker blackout windows.
-- `REQ-V3-004`: Signals during a blackout are queued and re-validated after recovery.
+- `REQ-V3-004`: Signals during a blackout go into a bounded queue; on saturation an oldest-drop policy preserves the newest intents, and signals are re-validated after recovery.
 - `REQ-V3-005`: Low-liquidity sessions (`NXT_AFTER`, `US_PRE`, `US_DAY`, `US_AFTER`) prohibit market orders.
 - `REQ-V3-006`: Backtest fills assume adverse-direction execution by default.
 - `REQ-V3-007`: US operations must include separate FX P&L tracking and a currency-buffer policy.
```
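The refresh retry policy added to `REQ-V2-008` (up to 3 attempts, 1s/2s backoff, stop immediately on success) can be sketched as follows. This is an interpretation of the ledger text, not the actual `src/core/kill_switch.py` code:

```python
import time

def refresh_with_retry(refresh_fn, max_attempts: int = 3, backoffs=(1.0, 2.0)):
    """Retry a refresh call up to 3 attempts with 1s/2s backoff; return on first success."""
    for attempt in range(max_attempts):
        try:
            return refresh_fn()  # success: stop immediately, per the policy
        except Exception:
            if attempt == max_attempts - 1:
                raise  # all attempts exhausted; let the caller escalate
            time.sleep(backoffs[min(attempt, len(backoffs) - 1)])
```

In the Kill Switch sequence this sits between "cancel open orders" and "reduce risk", so a transient broker error does not abort the shutdown path.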
```diff
@@ -38,3 +38,7 @@ Updated: 2026-03-02
 - `REQ-OPS-002`: Numeric policies in documents are changed only in the ledger.
 - `REQ-OPS-003`: Every implementation task must be accompanied by a test task.
 - `REQ-OPS-004`: The source plan documents (`v2`, `v3`) use `docs/ouroboros/source/` as the single canonical path.
+
+## Change History
+
+- 2026-03-02: Ran the policy-document sync check for the `v1.0.12` documentation validation gate hardening (#390). (`REQ-OPS-002`)
```
---

```diff
@@ -9,7 +9,7 @@ Updated: 2026-03-02
 # v2/v3 Implementation Audit and Return Analysis Report
 
 Written: 2026-02-28
-Last updated: 2026-03-02 (reflects #373 status-table reconciliation)
+Last updated: 2026-03-02 (reflects #377 kill-switch refresh retry policy)
 Target period: 2026-02-25 ~ 2026-02-28 (live trading)
 Analysis branch: `feature/v3-session-policy-stream`
```
```diff
@@ -32,23 +32,23 @@ Updated: 2026-03-02
 | REQ-V2-001 | 4-state sell state machine (HOLDING→BE_LOCK→ARMED→EXITED) | `src/strategy/position_state_machine.py` | ✅ Done |
 | REQ-V2-002 | Immediate promotion to the highest state (gap handling) | `position_state_machine.py:51-70` | ✅ Done |
 | REQ-V2-003 | EXITED evaluated first | `position_state_machine.py:38-48` | ✅ Done |
-| REQ-V2-004 | Quadruple exit logic (Hard/BE/ATR Trailing/Model) | `src/strategy/exit_rules.py` | ⚠️ Partial (`#369`) |
+| REQ-V2-004 | Quadruple exit logic (Hard/BE/ATR Trailing/Model assist-only, no direct EXIT trigger) | `src/strategy/exit_rules.py` | ✅ Done |
 | REQ-V2-005 | Triple Barrier labeling | `src/analysis/triple_barrier.py` | ✅ Done |
 | REQ-V2-006 | Walk-forward + purge/embargo validation | `src/analysis/walk_forward_split.py` | ✅ Done |
 | REQ-V2-007 | Mandatory cost/slippage/fill-failure model | `src/analysis/backtest_cost_guard.py`, `src/analysis/backtest_pipeline.py` | ✅ Done |
-| REQ-V2-008 | Kill Switch execution order (Block→Cancel→Refresh→Reduce→Snapshot) | `src/core/kill_switch.py` | ⚠️ Partial (`#377`) |
+| REQ-V2-008 | Kill Switch execution order (Block→Cancel→Refresh(retry)→Reduce→Snapshot) | `src/core/kill_switch.py` | ✅ Done |
 
 ### 1.3 v3 Implementation Status: Partially Complete (as of 2026-03-02)
 
 | REQ-ID | Requirement | Status | Notes |
 |--------|----------|------|------|
 | REQ-V3-001 | session_id in every signal/order/log | ⚠️ Partial | queue intents missing `session_id` (`#375`) |
-| REQ-V3-002 | Session-transition hook + risk-parameter reload | ⚠️ Partial | implemented; session-boundary E2E regression coverage needed (`#376`) |
+| REQ-V3-002 | Session-transition hook + risk-parameter reload | ✅ Done | session-boundary E2E regression added (override apply/release + reload-failure fallback) (`#376`) |
 | REQ-V3-003 | Blackout window policy | ✅ Done | `src/core/blackout_manager.py` |
-| REQ-V3-004 | Blackout queue + re-validation on recovery | ⚠️ Partial | intent-loss path on queue saturation (`#371`); stronger re-validation tracked in `#328` |
+| REQ-V3-004 | Blackout queue + re-validation on recovery | ✅ Done | DB logging (`#324`), stronger re-validation (`#328`), oldest-drop on queue saturation (`#371`) |
 | REQ-V3-005 | No market orders in low-liquidity sessions | ✅ Done | `src/core/order_policy.py` |
 | REQ-V3-006 | Conservative backtest fills (adverse direction) | ✅ Done | `src/analysis/backtest_execution_model.py` |
-| REQ-V3-007 | FX P&L separation (strategy PnL vs FX PnL) | ⚠️ Partial | schema exists; runtime split computation/propagation not applied (`#370`) |
+| REQ-V3-007 | FX P&L separation (strategy PnL vs FX PnL) | ⚠️ Partial | runtime split computation/propagation applied (`#370`); `fx_pnl=0` fallback when buy-side `fx_rate` unobserved |
 | REQ-V3-008 | Overnight exception vs Kill Switch priority | ✅ Done | `src/main.py` — `_should_force_exit_for_overnight()`, `_apply_staged_exit_override_for_hold()` |
 
 ### 1.4 Operational Governance: Partially Complete (re-evaluated 2026-03-02)
```
```diff
@@ -80,22 +80,22 @@ Updated: 2026-03-02
 - **Resolved**: #326 merged — runtime `session_id` passed explicitly in `log_trade()` calls
 - **Requirement**: REQ-V3-001
 
-### GAP-3: No risk-parameter reload on session transition → ⚠️ Partially resolved (#327)
+### GAP-3: No risk-parameter reload on session transition → ✅ Resolved (#327, #376)
 
 - **Location**: `src/main.py`, `src/config.py`
 - **Resolution**: #327 merged — session-specific parameter reload mechanism based on `SESSION_RISK_PROFILES_JSON`
 - With `SESSION_RISK_RELOAD_ENABLED=true`, parameters are reloaded at session boundaries
 - On reload failure, existing parameters are kept (safe fallback)
-- **Remaining gap**: session-boundary live-transition E2E integration tests still needed (unit tests for config override/fallback exist in `test_main.py`)
+- **Resolved**: session-boundary E2E regression tests added, verifying override apply/release and fallback retention on reload failure (`#376`)
 - **Requirement**: REQ-V3-002
 
-### GAP-4: Blackout-recovery DB logging + re-validation → ⚠️ Partially resolved (#324, #328, #371)
+### GAP-4: Blackout-recovery DB logging + re-validation → ✅ Resolved (#324, #328, #371)
 
 - **Location**: `src/core/blackout_manager.py`, `src/main.py`
 - **Current status**:
-  - #324 scope (DB logging) has an implementation path
-  - #328 scope (price/session re-validation hardening) still open as a tracking issue
-  - #371: an intent-loss path remains on queue saturation, so `REQ-V3-004` cannot be considered complete
+  - #324: recovery-order DB logging implemented, with tests
+  - #328: price/session re-validation hardening implemented and merged
+  - #371: queue-saturation policy specified and implemented as oldest-drop, removing the newest-intent loss path
 - **Requirement**: REQ-V3-004
 
 ### GAP-5: Time barrier fixed to a bar count → ✅ Resolved (#329)
```
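The oldest-drop bounded queue described for GAP-4 (`#371`) maps naturally onto `collections.deque` with `maxlen`; this is a sketch of the policy, not the actual `blackout_manager` implementation:

```python
from collections import deque

# Oldest-drop bounded queue: when the blackout queue is full, the oldest
# intent is discarded so the newest intents survive for post-recovery
# re-validation.
blackout_queue: deque = deque(maxlen=3)

for intent in ["buy-A", "buy-B", "sell-C", "buy-D"]:
    blackout_queue.append(intent)  # deque(maxlen=...) drops from the left when full

print(list(blackout_queue))  # ['buy-B', 'sell-C', 'buy-D']
```

`deque` gives this behavior for free: appending to a full bounded deque silently evicts the opposite end, which is exactly the oldest-drop semantics.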
```diff
@@ -107,10 +107,12 @@ Updated: 2026-03-02
 - `max_holding_bars` deprecation warning kept (backward compatibility)
 - **Requirement**: REQ-V2-005 / v3 extension
 
-### GAP-6 (new): FX PnL separation incomplete (MEDIUM — partially implemented)
+### GAP-6 (new): FX PnL separation partially resolved (MEDIUM)
 
 - **Location**: `src/db.py` (`fx_pnl`, `strategy_pnl` columns exist)
-- **Problem**: the schema and functions exist, but the runtime path does not compute and pass the `strategy_pnl`/`fx_pnl` split (`#370`)
+- **Current status**: the runtime SELL path now computes and passes the `strategy_pnl`/`fx_pnl` split (`#370`).
+- **Ops note**: `trading_cycle` adds `fx_rate` to the scanner-based `selection_context`; `run_daily_session` records only an `fx_rate` snapshot, without scanner context.
+- **Remaining**: historical BUY records without `fx_rate` fall back to `fx_pnl=0`, even for overseas trades.
 - **Impact**: without separating FX and strategy P&L on USD trades, performance analysis is inaccurate
 - **Requirement**: REQ-V3-007
```
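One way to decompose total P&L into the `strategy_pnl`/`fx_pnl` split discussed in GAP-6, including the `fx_pnl=0` fallback, is sketched below. The decomposition formula is an assumption for illustration, not the project's actual calculation:

```python
def split_pnl(qty, buy_px_usd, sell_px_usd, buy_fx, sell_fx):
    """Split total KRW P&L into strategy P&L (price move) and FX P&L (rate move).

    Assumed decomposition: the price move is converted at the sell-time rate,
    and the rate change is applied to the original USD cost basis.
    """
    if buy_fx is None:           # historical BUY record without fx_rate
        buy_fx = sell_fx         # fallback → fx_pnl becomes 0
    strategy_pnl = (sell_px_usd - buy_px_usd) * qty * sell_fx
    fx_pnl = buy_px_usd * qty * (sell_fx - buy_fx)
    return strategy_pnl, fx_pnl

# 10 shares bought at $100 (rate 1300 KRW/USD), sold at $110 (rate 1350):
print(split_pnl(10, 100.0, 110.0, 1300.0, 1350.0))  # (135000.0, 50000.0)
```

The two parts sum to the total KRW P&L (110·10·1350 − 100·10·1300 = 185,000), so nothing is lost in the split; it only attributes the gain between price movement and currency movement.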
```diff
@@ -326,7 +328,7 @@ Updated: 2026-03-02
 | Blackout-recovery order `log_trade()` added (GAP-4) | #324 | ✅ Merged |
 | Session-transition dynamic risk-parameter reload (GAP-3) | #327 | ✅ Merged |
 | Explicit session_id in trade/decision logs (GAP-1, GAP-2) | #326 | ✅ Merged |
-| Blackout-recovery price/session re-validation hardening (GAP-4 remainder) | #328 | ✅ Merged |
+| Blackout-recovery price/session re-validation hardening (GAP-4) | #328 | ✅ Merged |
 
 **Remaining improvement items:**
```

```diff
@@ -335,7 +337,6 @@ Updated: 2026-03-02
 | P1 | Complete the US-market ATR supply path (ROOT-5 remainder) | Medium |
 | P1 | Enable FX PnL in operations (REQ-V3-007) | Low |
 | P2 | Replace pred_down_prob with an ML model (ROOT-5 remainder) | High |
-| P2 | Session-boundary E2E integration test hardening (GAP-3 remainder) | Low |
 
 ### 5.3 Recommended Execution Order
```
```diff
@@ -392,8 +393,7 @@ Phase 3 (mid-term): v3 session optimization
 
 ### Missing Tests (Remaining)
 
-- ❌ Session-transition hook callbacks (GAP-3 remainder)
-- ❌ Session-boundary risk-parameter reload unit tests (GAP-3 remainder)
+- ✅ Session-transition hook callbacks / session-boundary risk-reload E2E regression (`#376`)
 - ❌ Live-trading path ↔ v2 state-machine integration tests (incl. feature supply)
 - ❌ FX PnL operational-enablement verification (GAP-6)
```
|||||||
---

docs/plans/2026-03-03-398-400-401-implementation.md (new file, 281 lines)
# 398/400/401 Integration Implementation Plan

> **For Claude:** REQUIRED SUB-SKILL: Use superpowers:executing-plans to implement this plan task-by-task.

**Goal:** Implement #398, #400, #401 as three isolated PRs targeting `feature/398-400-401`, merge only when CI passes and self-review has zero minor issues, then run and monitor the overnight script without stopping the process.

**Architecture:** Create one integration base branch from `origin/main`, branch per issue, and ship in strict sequence (`398 -> 400 -> 401`) to keep diffs isolated. Use TDD per issue (fail-first tests, minimal fix, regression checks), then perform PR self-review and a CI gate before merge. After all merges, run overnight in the background and monitor logs/process health while leaving the runtime active.

**Tech Stack:** Python 3, pytest, asyncio runtime loop, Git/Gitea (`tea`), shell scripts (`scripts/run_overnight.sh`).

---
### Task 1: Prepare Integration Branch Topology

**Files:**
- Modify: `.git` refs only (branch operations)

**Step 1: Sync base branch**

Run: `git fetch origin && git checkout main && git pull --ff-only origin main`
Expected: local `main` equals `origin/main`

**Step 2: Create integration branch**

Run: `git checkout -b feature/398-400-401`
Expected: current branch is `feature/398-400-401`

**Step 3: Create issue branches from integration branch**

Run: `git checkout -b fix/398 && git checkout feature/398-400-401 && git checkout -b fix/400 && git checkout feature/398-400-401 && git checkout -b fix/401 && git checkout feature/398-400-401`
Expected: three issue branches exist and point to the same base commit

**Step 4: Push all branches**

Run: `git push -u origin feature/398-400-401 fix/398 fix/400 fix/401`
Expected: remote tracking set for all four branches

**Step 5: Commit checkpoint**

Run:

```bash
git status --short
```

Expected: clean workspace before issue implementation
### Task 2: Implement #398 with TDD (KR rt_cd failure handling)

**Files:**
- Modify: `src/main.py`
- Test: `tests/test_main.py`

**Step 1: Write failing test**

Add a test in `tests/test_main.py` verifying that a KR order returning `rt_cd != '0'` does not trigger success side effects (no BUY notification, no trade-log success path).

**Step 2: Run test to verify failure**

Run: `pytest tests/test_main.py -k "kr and rt_cd" -v`
Expected: FAIL, showing the current code incorrectly treats the KR order as a success

**Step 3: Write minimal implementation**

In the KR order branch of `src/main.py`, immediately after `send_order`, add an `rt_cd` acceptance check identical to the overseas branch behavior; set `order_succeeded = False` and log a warning when the order is rejected.
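A minimal sketch of that acceptance check, assuming the KIS-style response shape where `rt_cd == "0"` signals success; the helper and handler names here are hypothetical, not the project's actual functions:

```python
# Hypothetical helper: KIS-style responses signal success with rt_cd == "0".
def order_accepted(response: dict) -> bool:
    return str(response.get("rt_cd", "")) == "0"


# Sketch of the KR branch right after send_order (names are assumptions):
def handle_order_response(response: dict) -> bool:
    order_succeeded = order_accepted(response)
    if not order_succeeded:
        # In the real code this would be a logger.warning(...) call.
        print(f"KR order rejected: rt_cd={response.get('rt_cd')}")
    return order_succeeded
```

Treating any non-`"0"` code as a rejection mirrors the overseas branch and prevents the false-success side effects described above.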
**Step 4: Run targeted tests**

Run: `pytest tests/test_main.py -k "kr and rt_cd" -v`
Expected: PASS

**Step 5: Run safety regression**

Run: `pytest tests/test_main.py tests/test_order_policy.py -q`
Expected: PASS

**Step 6: Commit**

Run:

```bash
git add tests/test_main.py src/main.py
git commit -m "fix: handle KR order rejection via rt_cd check (#398)"
```
### Task 3: Open PR for #398, Self-review, CI gate, Merge

**Files:**
- Modify: remote PR metadata/comments only

**Step 1: Push branch**

Run: `git checkout fix/398 && git push -u origin fix/398`

**Step 2: Create PR targeting integration branch**

Run: `tea pr create --base feature/398-400-401 --head fix/398 --title "fix: #398 KR rt_cd rejection handling" --description "Implements issue #398 with tests."`
Expected: PR URL returned

**Step 3: Add self-review comment (severity rubric)**

Run: `tea pr comment <PR_398> --message "Self-review: Critical 0 / Major 0 / Minor 0. Merge allowed when CI passes."`

**Step 4: Wait for CI success**

Run: `tea pr checks <PR_398>` (poll until all success)
Expected: all checks success

**Step 5: Merge only when gate passes**

Run: `tea pr merge <PR_398> --delete-branch=false`
Expected: merged into `feature/398-400-401`
### Task 4: Implement #400 with TDD (US session transition correctness)

**Files:**
- Modify: `src/main.py`, `src/core/order_policy.py`, `src/markets/schedule.py`
- Test: `tests/test_main.py`, `tests/test_market_schedule.py`, `tests/test_order_policy.py`

**Step 1: Write failing tests**

Add tests for:
- session transition event handling (`US_DAY -> US_REG`) emits open event and forces rescan
- `US_DAY` treated non-tradable for playbook/trading actions

**Step 2: Run failing tests**

Run: `pytest tests/test_main.py tests/test_market_schedule.py tests/test_order_policy.py -k "US_DAY or US_REG or session" -v`
Expected: FAIL at current behavior

**Step 3: Minimal implementation**

- Track market state by session identifier (not bool only)
- Force rescan/playbook refresh on US_REG entry
- Exclude/suppress US_DAY for trading/playbook generation path
**Step 4: Re-run targeted tests**

Run: same command as Step 2
Expected: PASS

**Step 5: Regression pass**

Run: `pytest tests/test_main.py tests/test_market_schedule.py tests/test_order_policy.py tests/test_pre_market_planner.py -q`
Expected: PASS

**Step 6: Commit**

Run:

```bash
git add src/main.py src/core/order_policy.py src/markets/schedule.py tests/test_main.py tests/test_market_schedule.py tests/test_order_policy.py
git commit -m "fix: handle US session transitions and suppress US_DAY trading (#400)"
```
### Task 5: Open PR for #400, Self-review, CI gate, Merge

**Files:**
- Modify: remote PR metadata/comments only

**Step 1: Push branch**

Run: `git checkout fix/400 && git push -u origin fix/400`

**Step 2: Create PR**

Run: `tea pr create --base feature/398-400-401 --head fix/400 --title "fix: #400 US session transition handling" --description "Implements issue #400 with tests."`

**Step 3: Add self-review comment**

Run: `tea pr comment <PR_400> --message "Self-review: Critical 0 / Major 0 / Minor 0. Merge allowed when CI passes."`

**Step 4: Wait for CI success**

Run: `tea pr checks <PR_400>`
Expected: all checks success

**Step 5: Merge**

Run: `tea pr merge <PR_400> --delete-branch=false`
### Task 6: Implement #401 with TDD (multi-market parallel processing)

**Files:**
- Modify: `src/main.py`
- Test: `tests/test_main.py`

**Step 1: Write failing tests**

Add tests verifying:
- open markets are processed via parallel task dispatch
- circuit breaker behavior still triggers global shutdown semantics
- shared state updates remain deterministic under parallel market execution
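For the third bullet, a hedged sketch of what "deterministic shared state" can mean under asyncio; the names are assumptions, and the real tests would exercise the project's own loop rather than this toy:

```python
import asyncio


async def record_fill(state: dict[str, int], lock: asyncio.Lock, market: str) -> None:
    # Guard the read-modify-write so interleaved market tasks cannot lose updates.
    async with lock:
        current = state.get(market, 0)
        await asyncio.sleep(0)  # deliberately yield inside the critical section
        state[market] = current + 1


async def run_parallel_markets() -> dict[str, int]:
    state: dict[str, int] = {}
    lock = asyncio.Lock()
    await asyncio.gather(*(record_fill(state, lock, m) for m in ["KR", "US", "KR"]))
    return state
```

Without the lock, the yield between the read and the write would let another task interleave and overwrite the counter.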
**Step 2: Run failing tests**

Run: `pytest tests/test_main.py -k "parallel or market" -v`
Expected: FAIL before implementation

**Step 3: Minimal implementation**

Refactor sequential market loop into market-level async tasks (`asyncio.gather`/task group) while preserving stock-level processing order per market and existing failure semantics.
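A minimal sketch of that refactor, under stated assumptions: the function names and the result container are hypothetical stand-ins for the real per-stock pipeline.

```python
import asyncio


async def process_stock(code: str) -> str:
    await asyncio.sleep(0)  # stand-in for the real per-stock pipeline
    return code


async def process_market(market: str, stocks: list[str]) -> tuple[str, list[str]]:
    # Stock-level processing stays strictly sequential inside one market.
    return market, [await process_stock(code) for code in stocks]


async def trading_cycle(markets: dict[str, list[str]]) -> dict[str, list[str]]:
    # Market-level tasks run concurrently.
    results = await asyncio.gather(*(process_market(m, s) for m, s in markets.items()))
    return dict(results)
```

`asyncio.gather` propagates the first exception by default, so existing failure semantics (e.g. a circuit-breaker raise) can still abort the cycle.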
**Step 4: Re-run targeted tests**

Run: same command as Step 2
Expected: PASS

**Step 5: Regression pass**

Run: `pytest tests/test_main.py tests/test_runtime_overnight_scripts.py -q`
Expected: PASS

**Step 6: Commit**

Run:

```bash
git add src/main.py tests/test_main.py
git commit -m "feat: process active markets in parallel with preserved shutdown semantics (#401)"
```
### Task 7: Open PR for #401, Self-review, CI gate, Merge

**Files:**
- Modify: remote PR metadata/comments only

**Step 1: Push branch**

Run: `git checkout fix/401 && git push -u origin fix/401`

**Step 2: Create PR**

Run: `tea pr create --base feature/398-400-401 --head fix/401 --title "feat: #401 parallel multi-market processing" --description "Implements issue #401 with tests."`

**Step 3: Add self-review comment**

Run: `tea pr comment <PR_401> --message "Self-review: Critical 0 / Major 0 / Minor 0. Merge allowed when CI passes."`

**Step 4: Wait for CI success**

Run: `tea pr checks <PR_401>`
Expected: all checks success

**Step 5: Merge**

Run: `tea pr merge <PR_401> --delete-branch=false`
### Task 8: Final Branch Validation + Overnight Runtime Monitoring

**Files:**
- Execute: `scripts/run_overnight.sh`
- Observe: runtime log file (e.g., `logs/overnight.log`)

**Step 1: Checkout integrated branch and sync**

Run: `git checkout feature/398-400-401 && git pull --ff-only origin feature/398-400-401`
Expected: branch contains merged PRs

**Step 2: Start overnight in background (non-blocking)**

Run:

```bash
nohup ./scripts/run_overnight.sh > /tmp/ouroboros_overnight.log 2>&1 &
echo $! > /tmp/ouroboros_overnight.pid
```

Expected: PID written and process running

**Step 3: Verify process alive**

Run: `ps -p $(cat /tmp/ouroboros_overnight.pid) -o pid,ppid,stat,etime,cmd`
Expected: process present

**Step 4: Monitor startup logs**

Run: `tail -n 120 /tmp/ouroboros_overnight.log`
Expected: startup complete and runtime loop active without fatal errors

**Step 5: Ongoing monitor without shutdown**

Run: `tail -f /tmp/ouroboros_overnight.log` (sample monitoring window, then detach)
Expected: continued activity; do not kill process

**Step 6: Final status note**

Record PID, log path, and "process left running" status.
---

docs/plans/2026-03-03-398-400-401-integration-design.md (new file, 62 lines)
# 398/400/401 Integrated Handling Design

## Overview

This document defines the operating design for handling issues #398, #400, and #401 sequentially on an integration branch based on `origin/main`, verifying each PR through self-review and a CI gate before merging.
From the final merged integration branch, the overnight script is run, and the program keeps running even after monitoring completes.

## Goals

- Integration branch: `feature/398-400-401`
- Work branches: `fix/398`, `fix/400`, `fix/401`
- PR base: `feature/398-400-401` for all PRs
- Merge condition: all CI checks pass AND self-review reports zero issues, including minor ones
- Final check: run and monitor overnight from the integration branch; keep the process running

## Architecture

- Create `feature/398-400-401` from `origin/main`
- Implement each issue on an independent branch (`fix/398`, `fix/400`, `fix/401`)
- Create, verify, and merge PRs sequentially (`398 -> 400 -> 401`)
- Each PR gets a self-review comment; anything at minor level or above is fixed and re-verified before merge
- After all three PRs are merged, run overnight in the background from the integration branch and monitor the logs
- Do not terminate the process after monitoring completes

## Components

- Git/branch component: branch creation, rebase, conflict resolution
- Issue implementation component:
  - #398: KR order `rt_cd` failure handling; block false notifications/records
  - #400: US session transition detection, US_DAY suppression, US_REG entry event/forced rescan
  - #401: market-level parallel processing and shared-state concurrency protection
- PR operation component: PR creation, self-review comments, approval-criteria checks
- CI gate component: poll check status and confirm pass
- Merge component: merge only PRs that pass the gate
- Runtime verification component: overnight run, log tracking, process liveness checks

## Data/Control Flow

1. Create `feature/398-400-401`
2. Implement `fix/398` -> test -> commit -> create PR
3. Write a self-review comment (including defect severity levels)
4. Wait for CI completion; merge if `CI pass && minor 0`
5. Repeat the same procedure for `fix/400` and `fix/401`
6. Run overnight in the background from the integration branch
7. Confirm real behavior via log/status monitoring
8. Keep the process running even after reporting results

## Error Handling/Recovery

- PR creation/conflict failure: stop only the affected branch, resolve, and keep it isolated from the other branches
- Self-review failure (including minor): no merge; commit fixes and refresh the review
- CI failure: fix the cause, re-push, re-verify
- Merge failure: update the base and retry
- Overnight startup failure: analyze logs and restart
- Errors during monitoring: report them, but keep running without automatic shutdown

## Testing/Verification

- Run the relevant unit/integration tests per PR
- When needed, run regressions including `tests/test_main.py` and `tests/test_runtime_overnight_scripts.py`
- Write self-reviews against the `Critical/Major/Minor` rubric
- Merge only when zero minor issues is explicitly stated
- On the final integration branch, verify overnight startup, loop entry, and error logs
- Report the still-running status after confirming PID/process liveness

## Non-Goals

- This document does not cover the implementation-level code changes themselves.
- This document does not cover an external reviewer approval process (self-review only).
---
# Issue #409 Design - KR Session-Aware Exchange Routing

## Context

- Issue: #409 (bug: KR exchange not separated by session - screening/orders/dual-listing priority unhandled)
- Related runtime observation targets: #318, #325
- Date: 2026-03-04
- Confirmed approach: Option 2 (routing module introduction)

## Goals

1. Ensure domestic screening uses the session-specific exchange market code.
2. Ensure domestic order submission explicitly sets the exchange routing code.
3. Add dual-listing routing priority logic (spread/liquidity aware) with a safe fallback.
4. Keep existing behavior stable for non-KR flows and existing risk/order-policy guards.
5. Enable runtime observability for #409 while monitoring #318/#325 in parallel.

## Non-Goals

- Replacing the current session classification model.
- Introducing new market sessions or changing session boundaries.
- Refactoring the overseas order flow.

## Architecture

### New Component

- Add `KRExchangeRouter` (new module, e.g. `src/broker/kr_exchange_router.py`).
- Responsibility split:
  - `classify_session_id`: session classification only.
  - `KRExchangeRouter`: final domestic exchange selection (`KRX`/`NXT`) for ranking and orders.
  - `KISBroker`: inject resolved routing values into request params/body.

### Integration Points

- `KISBroker.fetch_market_rankings`
  - Session-aware market division code:
    - `KRX_REG` -> `J`
    - `NXT_PRE`, `NXT_AFTER` -> `NX`
- `KISBroker.send_order`
  - Explicit `EXCG_ID_DVSN_CD` is always set.
- `SmartVolatilityScanner._scan_domestic`
  - Ensure the domestic ranking API path resolves the exchange consistently with the current session.

## Data Flow

1. Scanner path:
   - Determine `session_id`.
   - `resolve_for_ranking(session_id)`.
   - Inject `J` or `NX` into the ranking API params.
2. Order path:
   - Pass `session_id` into the order path.
   - `resolve_for_order(stock_code, session_id)`.
     - Single listing: session default exchange.
     - Dual listing: select by a spread/liquidity heuristic when data is available.
     - Data unavailable/error: fall back to the session default.
   - Send the order with an explicit `EXCG_ID_DVSN_CD`.
3. Observability:
   - Log `session_id`, `resolved_exchange`, `routing_reason`.

## Dual-Listing Routing Priority

- Preferred decision source: spread/liquidity comparison.
- Deterministic fallback: session-default exchange.
- Proposed reasons in logs:
  - `session_default`
  - `dual_listing_spread`
  - `dual_listing_liquidity`
  - `fallback_data_unavailable`
## Error Handling

- The router does not block the order path when auxiliary data is unavailable.
- Fail-open strategy for routing selection (fallback to session default) while preserving existing API/network error semantics.
- Omitting the `send_order` exchange field is forbidden by design after this change.

## Testing Strategy

### Unit

- Router mapping by session (`KRX_REG`, `NXT_PRE`, `NXT_AFTER`).
- Dual-listing routing priority and fallback.
- Broker order body includes `EXCG_ID_DVSN_CD`.
- Ranking params use the session-aware market code.

### Integration/Regression

- `smart_scanner` domestic calls align with the session exchange.
- Existing order policy tests remain green.
- Re-run regression sets covering #318/#325 related paths.

### Runtime Observation (24h)

- Restart the program from the working-branch build.
- Run the runtime monitor for up to 24h.
- Verify and track:
  - #409: session-aware routing evidence in logs.
  - #318: ATR dynamic stop evidence.
  - #325: ATR/pred_down_prob injection evidence.
- If anomalies are detected during monitoring, create separate issue tickets with evidence and links.

## Acceptance Criteria

1. No domestic ranking call uses hardcoded KRX-only behavior across NXT sessions.
2. No domestic order is sent without `EXCG_ID_DVSN_CD`.
3. The dual-listing path has explicit priority logic and a deterministic fallback.
4. Tests pass for new and affected paths.
5. Runtime monitor evidence is collected for #409, #318, #325; anomalies are ticketed.

## Risks and Mitigations

- Risk: increased routing complexity introduces regressions.
  - Mitigation: isolate the router, high-coverage unit tests, preserve existing interfaces where possible.
- Risk: runtime events for #318/#325 may not naturally occur within 24h.
  - Mitigation: mark them `NOT_OBSERVED` and keep issue state based on the evidence policy; do not force-close without proof.

## Planned Next Step

- Invoke the `writing-plans` workflow and produce an implementation plan before code changes.
---
# Issue #409 KR Session Exchange Routing Implementation Plan

> **For Claude:** REQUIRED SUB-SKILL: Use superpowers:executing-plans to implement this plan task-by-task.

**Goal:** Fix #409 by making KR screening/order routing session-aware and adding dual-listing exchange priority with deterministic fallback, then run 24h runtime observation for #409/#318/#325.

**Architecture:** Introduce a dedicated `KRExchangeRouter` module that resolves the exchange by session and dual-listing metadata. Keep session classification in `order_policy`, and inject router outputs into `KISBroker` ranking/order requests. Add explicit routing logs for runtime evidence and keep non-KR behavior unchanged.

**Tech Stack:** Python 3.12, aiohttp client layer, pytest/pytest-asyncio, Gitea CLI (`tea`), bash runtime monitor scripts.

---
### Task 1: Preflight and Branch Runtime Gate

**Files:**
- Modify: `workflow/session-handover.md`

**Step 1: Add handover entry for this ticket branch**

```md
### 2026-03-04 | session=codex-issue409-start
- branch: feature/issue-409-kr-session-exchange-routing
- docs_checked: docs/workflow.md, docs/commands.md, docs/agent-constraints.md
- open_issues_reviewed: #409, #318, #325
- next_ticket: #409
- process_gate_checked: process_ticket=#306,#308 merged_to_feature_branch=yes
- risks_or_notes: #409 code fix + 24h monitor, runtime anomaly creates separate issue ticket
```

**Step 2: Run strict handover check**

Run: `python3 scripts/session_handover_check.py --strict`
Expected: PASS

**Step 3: Commit**

```bash
git add workflow/session-handover.md
git commit -m "chore: add handover entry for issue #409"
```
### Task 2: Add Router Unit Tests First (TDD)

**Files:**
- Create: `tests/test_kr_exchange_router.py`

**Step 1: Write failing tests for session mapping**

```python
from src.broker.kr_exchange_router import KRExchangeRouter


def test_ranking_market_code_by_session() -> None:
    router = KRExchangeRouter()
    assert router.resolve_for_ranking("KRX_REG") == "J"
    assert router.resolve_for_ranking("NXT_PRE") == "NX"
    assert router.resolve_for_ranking("NXT_AFTER") == "NX"
```

**Step 2: Write failing tests for dual-listing fallback behavior**

```python
def test_order_exchange_falls_back_to_session_default_on_missing_data() -> None:
    router = KRExchangeRouter()
    resolved = router.resolve_for_order(
        stock_code="0001A0",
        session_id="NXT_PRE",
        is_dual_listed=True,
        spread_krx=None,
        spread_nxt=None,
        liquidity_krx=None,
        liquidity_nxt=None,
    )
    assert resolved.exchange_code == "NXT"
    assert resolved.reason == "fallback_data_unavailable"
```

**Step 3: Run tests to verify fail**

Run: `pytest tests/test_kr_exchange_router.py -v`
Expected: FAIL (`ModuleNotFoundError` or missing class)

**Step 4: Commit tests-only checkpoint**

```bash
git add tests/test_kr_exchange_router.py
git commit -m "test: add failing tests for KR exchange router"
```
### Task 3: Implement Router Minimal Code

**Files:**
- Create: `src/broker/kr_exchange_router.py`
- Modify: `src/broker/__init__.py`

**Step 1: Add routing dataclass + session default mapping**

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class ExchangeResolution:
    exchange_code: str
    reason: str


class KRExchangeRouter:
    def resolve_for_ranking(self, session_id: str) -> str:
        return "NX" if session_id in {"NXT_PRE", "NXT_AFTER"} else "J"
```

**Step 2: Add dual-listing decision path + fallback**

```python
# Inside KRExchangeRouter.resolve_for_order(...):
if is_dual_listed and spread_krx is not None and spread_nxt is not None:
    if spread_nxt < spread_krx:
        return ExchangeResolution("NXT", "dual_listing_spread")
    return ExchangeResolution("KRX", "dual_listing_spread")

return ExchangeResolution(default_exchange, "fallback_data_unavailable")
```

**Step 3: Run router tests**

Run: `pytest tests/test_kr_exchange_router.py -v`
Expected: PASS

**Step 4: Commit**

```bash
git add src/broker/kr_exchange_router.py src/broker/__init__.py
git commit -m "feat: add KR session-aware exchange router"
```
### Task 4: Broker Request Wiring (Ranking + Order)

**Files:**
- Modify: `src/broker/kis_api.py`
- Modify: `tests/test_broker.py`

**Step 1: Add failing tests for ranking param and order body exchange field**

```python
assert called_params["FID_COND_MRKT_DIV_CODE"] == "NX"
assert called_json["EXCG_ID_DVSN_CD"] == "NXT"
```

**Step 2: Run targeted test subset (fail first)**

Run: `pytest tests/test_broker.py -k "market_rankings or EXCG_ID_DVSN_CD" -v`
Expected: FAIL on missing field/value

**Step 3: Implement minimal wiring**

```python
session_id = runtime_session_id or classify_session_id(MARKETS["KR"])
market_div_code = self._kr_router.resolve_for_ranking(session_id)
params["FID_COND_MRKT_DIV_CODE"] = market_div_code

resolution = self._kr_router.resolve_for_order(...)
body["EXCG_ID_DVSN_CD"] = resolution.exchange_code
```

**Step 4: Add routing evidence logs**

```python
logger.info(
    "KR routing resolved",
    extra={"session_id": session_id, "exchange": resolution.exchange_code, "reason": resolution.reason},
)
```

**Step 5: Re-run broker tests**

Run: `pytest tests/test_broker.py -k "market_rankings or EXCG_ID_DVSN_CD" -v`
Expected: PASS

**Step 6: Commit**

```bash
git add src/broker/kis_api.py tests/test_broker.py
git commit -m "fix: apply KR exchange routing to rankings and orders"
```
### Task 5: Scanner Session Alignment

**Files:**
- Modify: `src/analysis/smart_scanner.py`
- Modify: `tests/test_smart_scanner.py`

**Step 1: Add failing test for domestic session-aware ranking path**

```python
assert mock_broker.fetch_market_rankings.call_args_list[0].kwargs["session_id"] == "NXT_PRE"
```

**Step 2: Run scanner tests (fail first)**

Run: `pytest tests/test_smart_scanner.py -k "session" -v`
Expected: FAIL on missing session argument

**Step 3: Implement scanner call wiring**

```python
fluct_rows = await self.broker.fetch_market_rankings(
    ranking_type="fluctuation",
    limit=50,
    session_id=session_id,
)
```

**Step 4: Re-run scanner tests**

Run: `pytest tests/test_smart_scanner.py -v`
Expected: PASS

**Step 5: Commit**

```bash
git add src/analysis/smart_scanner.py tests/test_smart_scanner.py
git commit -m "fix: align domestic scanner rankings with KR session routing"
```
### Task 6: Full Verification and Regression

**Files:**
- No new files

**Step 1: Run focused regressions for #409**

Run:
- `pytest tests/test_kr_exchange_router.py tests/test_broker.py tests/test_smart_scanner.py -v`

Expected: PASS

**Step 2: Run related runtime-path regressions for #318/#325**

Run:
- `pytest tests/test_main.py -k "atr or staged_exit or pred_down_prob" -v`

Expected: PASS

**Step 3: Run lint/type checks for touched modules**

Run:
- `ruff check src/broker/kis_api.py src/broker/kr_exchange_router.py src/analysis/smart_scanner.py tests/test_kr_exchange_router.py tests/test_broker.py tests/test_smart_scanner.py`
- `mypy src/broker/kis_api.py src/broker/kr_exchange_router.py src/analysis/smart_scanner.py --strict`

Expected: PASS

**Step 4: Commit final fixup if needed**

```bash
git add -A
git commit -m "chore: finalize #409 verification adjustments"
```
### Task 7: PR Creation, Self-Review, and Merge

**Files:**
- Modify: PR metadata only

**Step 1: Push branch**

Run: `git push -u origin feature/issue-409-kr-session-exchange-routing`
Expected: remote branch created

**Step 2: Create PR to `main` with issue links**

```bash
PR_BODY=$(cat <<'MD'
## Summary
- fix KR session-aware exchange routing for rankings and orders (#409)
- add dual-listing exchange priority with deterministic fallback
- add logs and tests for routing evidence

## Validation
- pytest tests/test_kr_exchange_router.py tests/test_broker.py tests/test_smart_scanner.py -v
- pytest tests/test_main.py -k "atr or staged_exit or pred_down_prob" -v
- ruff check ...
- mypy ...
MD
)

tea pr create --base main --head feature/issue-409-kr-session-exchange-routing --title "fix: KR session-aware exchange routing (#409)" --description "$PR_BODY"
```

**Step 3: Validate PR body integrity**

Run: `python3 scripts/validate_pr_body.py --pr <PR_NUMBER>`
Expected: PASS

**Step 4: Self-review checklist (blocking)**

- Re-check diff for missing `EXCG_ID_DVSN_CD`
- Confirm session mapping (`KRX_REG=J`, `NXT_PRE/NXT_AFTER=NX`)
- Confirm fallback reason logging exists
- Confirm tests cover dual-listing fallback

**Step 5: Merge only if no minor issues remain**

Run: `tea pr merge <PR_NUMBER> --merge`
Expected: merged
### Task 8: Restart Program and 24h Runtime Monitoring
|
||||||
|
|
||||||
|
**Files:**
|
||||||
|
- Runtime artifacts: `data/overnight/*.log`
|
||||||
|
|
||||||
|
**Step 1: Restart runtime from merged state**
|
||||||
|
|
||||||
|
Run:
|
||||||
|
- `bash scripts/stop_overnight.sh`
|
||||||
|
- `bash scripts/run_overnight.sh`
|
||||||
|
Expected: live process and watchdog healthy
|
||||||
|
|
||||||
|
**Step 2: Start 24h monitor**
|
||||||
|
|
||||||
|
Run:
|
||||||
|
- `INTERVAL_SEC=60 MAX_HOURS=24 POLICY_TZ=Asia/Seoul bash scripts/runtime_verify_monitor.sh`
|
||||||
|
Expected: monitor loop runs and writes `data/overnight/runtime_verify_*.log`
|
||||||
|
|
||||||
|
**Step 3: Track #409/#318/#325 evidence in loop**
|
||||||
|
|
||||||
|
Run examples:
|
||||||
|
- `rg -n "KR routing resolved|EXCG_ID_DVSN_CD|session=NXT_|session=KRX_REG" data/overnight/run_*.log`
|
||||||
|
- `rg -n "atr_value|dynamic hard stop|staged exit|pred_down_prob" data/overnight/run_*.log`
|
||||||
|
|
||||||
|
Expected:
|
||||||
|
- #409 routing evidence present when KR flows trigger
|
||||||
|
- #318/#325 evidence captured if runtime conditions occur
|
||||||
|
|
||||||
|
**Step 4: If anomaly found, create separate issue ticket immediately**
|
||||||
|
|
||||||
|
```bash
|
||||||
|
ISSUE_BODY=$(cat <<'MD'
|
||||||
|
## Summary
|
||||||
|
- runtime anomaly detected during #409 monitor
|
||||||
|
|
||||||
|
## Evidence
|
||||||
|
- log: data/overnight/run_xxx.log
|
||||||
|
- timestamp: <UTC/KST>
|
||||||
|
- observed: <symptom>
|
||||||
|
|
||||||
|
## Suspected Scope
|
||||||
|
- related to #409/#318/#325 monitoring path
|
||||||
|
|
||||||
|
## Next Action
|
||||||
|
- triage + reproducible test
|
||||||
|
MD
|
||||||
|
)
|
||||||
|
|
||||||
|
tea issues create -t "bug: runtime anomaly during #409 monitor" -d "$ISSUE_BODY"
|
||||||
|
```
|
||||||
|
|
||||||
|
**Step 5: Post monitoring summary to #409/#318/#325**
|
||||||
|
- Include PASS/FAIL/NOT_OBSERVED matrix and exact timestamps.
|
||||||
|
- Do not close #318/#325 without concrete acceptance evidence.
|
||||||
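The PASS/FAIL/NOT_OBSERVED matrix in Step 5 could be assembled from monitor output with a small helper. This is a hypothetical sketch, assuming the `[COVERAGE] NAME=STATUS` line shape that the monitor script logs; the helper itself is not part of the repo:

```python
# Hypothetical sketch: fold monitor log lines like
# "... [COVERAGE] KR_LOOP=PASS ..." into a status matrix per check name.
def summarize_coverage(lines: list[str]) -> dict[str, str]:
    matrix: dict[str, str] = {}
    for line in lines:
        if "[COVERAGE]" not in line:
            continue
        token = line.split("[COVERAGE]", 1)[1].strip().split()[0]
        name, _, status = token.partition("=")
        # Last observation wins, except a PASS is never downgraded.
        if matrix.get(name) != "PASS":
            matrix[name] = status
    return matrix

logs = [
    "2026-02-07T00:00:00Z [COVERAGE] KR_LOOP=NOT_OBSERVED reason=no_run_log",
    "2026-02-07T00:01:00Z [COVERAGE] KR_LOOP=PASS",
    "2026-02-07T00:01:00Z [COVERAGE] NXT_PATH=NOT_OBSERVED reason=no_run_log",
]
print(summarize_coverage(logs))  # {'KR_LOOP': 'PASS', 'NXT_PATH': 'NOT_OBSERVED'}
```

The "PASS is never downgraded" rule matches the intent of a 24h window: a path observed once counts as covered for the whole run.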
````diff
@@ -128,6 +128,16 @@ tea pr create \
   --description "$PR_BODY"
 ```
 
+PR 생성 직후 본문 무결성 검증(필수):
+
+```bash
+python3 scripts/validate_pr_body.py --pr <PR_NUMBER>
+```
+
+강제 규칙:
+
+- 검증 실패(`\n` 리터럴, 코드펜스 불균형, 헤더/리스트 누락) 상태에서는 리뷰/머지 금지
+- 본문 수정 후 같은 명령으로 재검증 통과 필요
+
 금지 패턴:
 
 - `-d "line1\nline2"` (웹 UI에 `\n` 문자 그대로 노출될 수 있음)
````
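The forbidden `-d "line1\nline2"` pattern leaks a literal backslash-n because, inside shell double quotes, `\n` is two characters rather than a newline. A short sketch of both the failure and the check the validator performs:

```python
# In shell double quotes, \n is backslash + n, not a newline, so a body
# passed as -d "line1\nline2" is stored with the two characters verbatim.
body_from_flag = "line1\\nline2"    # what the web UI would then render
body_from_heredoc = "line1\nline2"  # what a heredoc actually produces

def has_escaped_newline_artifact(body: str) -> bool:
    return "\\n" in body

print(has_escaped_newline_artifact(body_from_flag))     # True
print(has_escaped_newline_artifact(body_from_heredoc))  # False
```

This is why the workflow mandates heredoc-built bodies plus a post-creation validation pass.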
```diff
@@ -8,8 +8,32 @@ CHECK_INTERVAL="${CHECK_INTERVAL:-30}"
 TMUX_AUTO="${TMUX_AUTO:-true}"
 TMUX_ATTACH="${TMUX_ATTACH:-true}"
 TMUX_SESSION_PREFIX="${TMUX_SESSION_PREFIX:-ouroboros_overnight}"
+STARTUP_GRACE_SEC="${STARTUP_GRACE_SEC:-3}"
+dashboard_port="${DASHBOARD_PORT:-8080}"
+APP_CMD_BIN="${APP_CMD_BIN:-}"
+APP_CMD_ARGS="${APP_CMD_ARGS:-}"
+RUNS_DASHBOARD="false"
 
-if [ -z "${APP_CMD:-}" ]; then
+# Custom override contract:
+# 1) Preferred: APP_CMD_BIN + APP_CMD_ARGS
+#    - APP_CMD_BIN is treated as a single executable token.
+#    - APP_CMD_ARGS uses shell-style word splitting; quote/escape inside this
+#      variable is NOT preserved as a nested shell parse.
+# 2) Legacy fallback: APP_CMD (raw shell command string)
+#    - This path remains for backward compatibility.
+#    - When APP_CMD includes --dashboard, caller should include explicit
+#      DASHBOARD_PORT assignment in APP_CMD if non-default port is required.
+
+if [ -n "$APP_CMD_BIN" ]; then
+  USE_DEFAULT_APP_CMD="false"
+  USE_SAFE_CUSTOM_APP_CMD="true"
+  APP_CMD="${APP_CMD_BIN} ${APP_CMD_ARGS}"
+  if [[ " $APP_CMD_ARGS " == *" --dashboard "* ]]; then
+    RUNS_DASHBOARD="true"
+  fi
+elif [ -z "${APP_CMD:-}" ]; then
+  USE_DEFAULT_APP_CMD="true"
+  USE_SAFE_CUSTOM_APP_CMD="false"
   if [ -x ".venv/bin/python" ]; then
     PYTHON_BIN=".venv/bin/python"
   elif command -v python3 >/dev/null 2>&1; then
@@ -21,9 +45,14 @@ if [ -z "${APP_CMD:-}" ]; then
     exit 1
   fi
 
-  dashboard_port="${DASHBOARD_PORT:-8080}"
-  APP_CMD="DASHBOARD_PORT=$dashboard_port $PYTHON_BIN -m src.main --mode=live --dashboard"
+  APP_CMD="$PYTHON_BIN -m src.main --mode=live --dashboard"
+  RUNS_DASHBOARD="true"
+else
+  USE_DEFAULT_APP_CMD="false"
+  USE_SAFE_CUSTOM_APP_CMD="false"
+  if [[ "$APP_CMD" == *"--dashboard"* ]]; then
+    RUNS_DASHBOARD="true"
+  fi
 fi
 
 mkdir -p "$LOG_DIR"
@@ -34,6 +63,24 @@ WATCHDOG_LOG="$LOG_DIR/watchdog_${timestamp}.log"
 PID_FILE="$LOG_DIR/app.pid"
 WATCHDOG_PID_FILE="$LOG_DIR/watchdog.pid"
 
+is_port_in_use() {
+  local port="$1"
+  if command -v ss >/dev/null 2>&1; then
+    ss -ltn 2>/dev/null | grep -Eq ":${port}[[:space:]]"
+    return $?
+  fi
+  if command -v lsof >/dev/null 2>&1; then
+    lsof -nP -iTCP:"$port" -sTCP:LISTEN >/dev/null 2>&1
+    return $?
+  fi
+  if command -v netstat >/dev/null 2>&1; then
+    netstat -ltn 2>/dev/null | grep -Eq "[:.]${port}[[:space:]]"
+    return $?
+  fi
+  # No supported socket inspection command found.
+  return 1
+}
+
 if [ -f "$PID_FILE" ]; then
   old_pid="$(cat "$PID_FILE" || true)"
   if [ -n "$old_pid" ] && kill -0 "$old_pid" 2>/dev/null; then
@@ -43,7 +90,29 @@ if [ -f "$PID_FILE" ]; then
 fi
 
 echo "[$(date -u +"%Y-%m-%dT%H:%M:%SZ")] starting: $APP_CMD" | tee -a "$RUN_LOG"
-nohup bash -lc "$APP_CMD" >>"$RUN_LOG" 2>&1 &
+if [ "$RUNS_DASHBOARD" = "true" ] && is_port_in_use "$dashboard_port"; then
+  echo "[$(date -u +"%Y-%m-%dT%H:%M:%SZ")] startup failed: dashboard port ${dashboard_port} already in use" | tee -a "$RUN_LOG"
+  exit 1
+fi
+
+if [ "$USE_DEFAULT_APP_CMD" = "true" ]; then
+  # Default path avoids shell word-splitting on executable paths.
+  nohup env DASHBOARD_PORT="$dashboard_port" "$PYTHON_BIN" -m src.main --mode=live --dashboard >>"$RUN_LOG" 2>&1 &
+elif [ "$USE_SAFE_CUSTOM_APP_CMD" = "true" ]; then
+  # Safer custom path: executable path is handled as a single token.
+  if [ -n "$APP_CMD_ARGS" ]; then
+    # shellcheck disable=SC2206
+    app_args=( $APP_CMD_ARGS )
+    nohup env DASHBOARD_PORT="$dashboard_port" "$APP_CMD_BIN" "${app_args[@]}" >>"$RUN_LOG" 2>&1 &
+  else
+    nohup env DASHBOARD_PORT="$dashboard_port" "$APP_CMD_BIN" >>"$RUN_LOG" 2>&1 &
+  fi
+else
+  # Custom APP_CMD is treated as a shell command string.
+  # If executable paths include spaces, they must be quoted inside APP_CMD.
+  # Legacy compatibility path: caller owns quoting and env var injection.
+  nohup bash -lc "exec env $APP_CMD" >>"$RUN_LOG" 2>&1 &
+fi
 app_pid=$!
 echo "$app_pid" > "$PID_FILE"
@@ -54,6 +123,20 @@ nohup env PID_FILE="$PID_FILE" LOG_FILE="$WATCHDOG_LOG" CHECK_INTERVAL="$CHECK_I
 watchdog_pid=$!
 echo "$watchdog_pid" > "$WATCHDOG_PID_FILE"
 
+sleep "$STARTUP_GRACE_SEC"
+if ! kill -0 "$app_pid" 2>/dev/null; then
+  echo "[$(date -u +"%Y-%m-%dT%H:%M:%SZ")] startup failed: app process exited early (pid=$app_pid)" | tee -a "$RUN_LOG"
+  [ -n "${watchdog_pid:-}" ] && kill "$watchdog_pid" 2>/dev/null || true
+  tail -n 20 "$RUN_LOG" || true
+  exit 1
+fi
+if ! kill -0 "$watchdog_pid" 2>/dev/null; then
+  echo "[$(date -u +"%Y-%m-%dT%H:%M:%SZ")] startup failed: watchdog exited early (pid=$watchdog_pid)" | tee -a "$WATCHDOG_LOG"
+  kill "$app_pid" 2>/dev/null || true
+  tail -n 20 "$WATCHDOG_LOG" || true
+  exit 1
+fi
+
 cat <<EOF
 시작 완료
 - app pid: $app_pid
```
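The APP_CMD_ARGS contract above (plain word splitting via `app_args=( $APP_CMD_ARGS )`, no nested shell parse) can be illustrated with a short sketch; the flag values are made up for illustration:

```python
import shlex

# The script expands APP_CMD_ARGS with plain word splitting, so embedded
# quotes are NOT honored the way a real shell command line would honor them.
app_cmd_args = '--mode=live --label "my run"'

naive_split = app_cmd_args.split()       # what the script effectively does
shell_parse = shlex.split(app_cmd_args)  # what a nested shell parse would do

print(naive_split)   # ['--mode=live', '--label', '"my', 'run"']
print(shell_parse)   # ['--mode=live', '--label', 'my run']
```

This is exactly why the comment block warns that quoting inside the variable is not preserved: arguments containing spaces cannot be passed through APP_CMD_ARGS and must use the legacy APP_CMD path instead.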
```diff
@@ -7,12 +7,15 @@ ROOT_DIR="${ROOT_DIR:-/home/agentson/repos/The-Ouroboros}"
 LOG_DIR="${LOG_DIR:-$ROOT_DIR/data/overnight}"
 INTERVAL_SEC="${INTERVAL_SEC:-60}"
 MAX_HOURS="${MAX_HOURS:-24}"
+MAX_LOOPS="${MAX_LOOPS:-0}"
 POLICY_TZ="${POLICY_TZ:-Asia/Seoul}"
+DASHBOARD_PORT="${DASHBOARD_PORT:-8080}"
 
 cd "$ROOT_DIR"
 
 OUT_LOG="$LOG_DIR/runtime_verify_$(date +%Y%m%d_%H%M%S).log"
 END_TS=$(( $(date +%s) + MAX_HOURS*3600 ))
+loops=0
 
 log() {
   printf '%s %s\n' "$(date -u +%Y-%m-%dT%H:%M:%SZ)" "$1" | tee -a "$OUT_LOG" >/dev/null
@@ -31,6 +34,11 @@ check_signal() {
   return 1
 }
 
+find_live_pids() {
+  # Detect live-mode process even when run_overnight pid files are absent.
+  pgrep -af "[s]rc.main --mode=live" 2>/dev/null | awk '{print $1}' | tr '\n' ',' | sed 's/,$//'
+}
+
 check_forbidden() {
   local name="$1"
   local pattern="$2"
@@ -44,42 +52,94 @@ check_forbidden() {
   return 0
 }
 
+is_port_listening() {
+  local port="$1"
+
+  if command -v ss >/dev/null 2>&1; then
+    ss -ltn 2>/dev/null | grep -Eq ":${port}[[:space:]]"
+    return $?
+  fi
+  if command -v lsof >/dev/null 2>&1; then
+    lsof -nP -iTCP:"$port" -sTCP:LISTEN >/dev/null 2>&1
+    return $?
+  fi
+  if command -v netstat >/dev/null 2>&1; then
+    netstat -ltn 2>/dev/null | grep -Eq "[:.]${port}[[:space:]]"
+    return $?
+  fi
+  return 1
+}
+
 log "[INFO] runtime verify monitor started interval=${INTERVAL_SEC}s max_hours=${MAX_HOURS} policy_tz=${POLICY_TZ}"
 
 while true; do
+  loops=$((loops + 1))
   now=$(date +%s)
   if [ "$now" -ge "$END_TS" ]; then
     log "[INFO] monitor completed (time window reached)"
     exit 0
   fi
+  if [ "$MAX_LOOPS" -gt 0 ] && [ "$loops" -gt "$MAX_LOOPS" ]; then
+    log "[INFO] monitor completed (max loops reached)"
+    exit 0
+  fi
 
   latest_run="$(ls -t "$LOG_DIR"/run_*.log 2>/dev/null | head -n1 || true)"
-  if [ -z "$latest_run" ]; then
-    log "[ANOMALY] no run log found"
-    sleep "$INTERVAL_SEC"
-    continue
-  fi
 
   # Basic liveness hints.
   app_pid="$(cat "$LOG_DIR/app.pid" 2>/dev/null || true)"
   wd_pid="$(cat "$LOG_DIR/watchdog.pid" 2>/dev/null || true)"
+  live_pids="$(find_live_pids)"
   app_alive=0
   wd_alive=0
   port_alive=0
   [ -n "$app_pid" ] && kill -0 "$app_pid" 2>/dev/null && app_alive=1
   [ -n "$wd_pid" ] && kill -0 "$wd_pid" 2>/dev/null && wd_alive=1
-  ss -ltnp 2>/dev/null | rg -q ':8080' && port_alive=1
-  log "[HEARTBEAT] run_log=$latest_run app_alive=$app_alive watchdog_alive=$wd_alive port8080=$port_alive"
+  if [ "$app_alive" -eq 0 ] && [ -n "$live_pids" ]; then
+    app_alive=1
+  fi
+  is_port_listening "$DASHBOARD_PORT" && port_alive=1
+  log "[HEARTBEAT] run_log=${latest_run:-none} app_alive=$app_alive watchdog_alive=$wd_alive port=${DASHBOARD_PORT} alive=$port_alive live_pids=${live_pids:-none}"
+
+  defer_log_checks=0
+  if [ -z "$latest_run" ] && [ "$app_alive" -eq 1 ]; then
+    defer_log_checks=1
+    log "[INFO] run log not yet available; defer log-based coverage checks"
+  fi
+
+  if [ -z "$latest_run" ] && [ "$defer_log_checks" -eq 0 ]; then
+    log "[ANOMALY] no run log found"
+  fi
 
   # Coverage matrix rows (session paths and policy gate evidence).
   not_observed=0
-  check_signal "LIVE_MODE" "Mode: live" "$latest_run" || not_observed=$((not_observed+1))
-  check_signal "KR_LOOP" "Processing market: Korea Exchange" "$latest_run" || not_observed=$((not_observed+1))
-  check_signal "NXT_PATH" "NXT_PRE|NXT_AFTER|session=NXT_" "$latest_run" || not_observed=$((not_observed+1))
-  check_signal "US_PRE_PATH" "US_PRE|session=US_PRE" "$latest_run" || not_observed=$((not_observed+1))
-  check_signal "US_DAY_PATH" "US_DAY|session=US_DAY|Processing market: .*NASDAQ|Processing market: .*NYSE|Processing market: .*AMEX" "$latest_run" || not_observed=$((not_observed+1))
-  check_signal "US_AFTER_PATH" "US_AFTER|session=US_AFTER" "$latest_run" || not_observed=$((not_observed+1))
-  check_signal "ORDER_POLICY_SESSION" "Order policy rejected .*\\[session=" "$latest_run" || not_observed=$((not_observed+1))
+  if [ "$app_alive" -eq 1 ]; then
+    log "[COVERAGE] LIVE_MODE=PASS source=process_liveness"
+  else
+    if [ -n "$latest_run" ]; then
+      check_signal "LIVE_MODE" "Mode: live" "$latest_run" || not_observed=$((not_observed+1))
+    else
+      log "[COVERAGE] LIVE_MODE=NOT_OBSERVED reason=no_run_log_no_live_pid"
+      not_observed=$((not_observed+1))
+    fi
+  fi
+  if [ "$defer_log_checks" -eq 1 ]; then
+    for deferred in KR_LOOP NXT_PATH US_PRE_PATH US_DAY_PATH US_AFTER_PATH ORDER_POLICY_SESSION; do
+      log "[COVERAGE] ${deferred}=DEFERRED reason=no_run_log_process_alive"
+    done
+  elif [ -n "$latest_run" ]; then
+    check_signal "KR_LOOP" "Processing market: Korea Exchange" "$latest_run" || not_observed=$((not_observed+1))
+    check_signal "NXT_PATH" "NXT_PRE|NXT_AFTER|session=NXT_" "$latest_run" || not_observed=$((not_observed+1))
+    check_signal "US_PRE_PATH" "US_PRE|session=US_PRE" "$latest_run" || not_observed=$((not_observed+1))
+    check_signal "US_DAY_PATH" "US_DAY|session=US_DAY|Processing market: .*NASDAQ|Processing market: .*NYSE|Processing market: .*AMEX" "$latest_run" || not_observed=$((not_observed+1))
+    check_signal "US_AFTER_PATH" "US_AFTER|session=US_AFTER" "$latest_run" || not_observed=$((not_observed+1))
+    check_signal "ORDER_POLICY_SESSION" "Order policy rejected .*\\[session=" "$latest_run" || not_observed=$((not_observed+1))
+  else
+    for missing in KR_LOOP NXT_PATH US_PRE_PATH US_DAY_PATH US_AFTER_PATH ORDER_POLICY_SESSION; do
+      log "[COVERAGE] ${missing}=NOT_OBSERVED reason=no_run_log"
+      not_observed=$((not_observed+1))
+    done
+  fi
 
   if [ "$not_observed" -gt 0 ]; then
     log "[ANOMALY] coverage_not_observed=$not_observed (treat as FAIL)"
@@ -95,11 +155,17 @@ while true; do
     is_weekend=1
   fi
 
-  if [ "$is_weekend" -eq 1 ]; then
+  if [ "$defer_log_checks" -eq 1 ]; then
+    log "[FORBIDDEN] WEEKEND_KR_SESSION_ACTIVE=SKIP reason=no_run_log_process_alive"
+  elif [ "$is_weekend" -eq 1 ]; then
     # Weekend policy: KR regular session loop must never appear.
-    check_forbidden "WEEKEND_KR_SESSION_ACTIVE" \
-      "Market session active: KR|session=KRX_REG|Processing market: Korea Exchange" \
-      "$latest_run" || forbidden_hits=$((forbidden_hits+1))
+    if [ -n "$latest_run" ]; then
+      check_forbidden "WEEKEND_KR_SESSION_ACTIVE" \
+        "Market session active: KR|session=KRX_REG|Processing market: Korea Exchange" \
+        "$latest_run" || forbidden_hits=$((forbidden_hits+1))
+    else
+      log "[FORBIDDEN] WEEKEND_KR_SESSION_ACTIVE=SKIP reason=no_run_log"
+    fi
  else
    log "[FORBIDDEN] WEEKEND_KR_SESSION_ACTIVE=SKIP reason=weekday"
  fi
```
```diff
@@ -92,6 +92,25 @@ def validate_testing_doc_has_dynamic_count_guidance(errors: list[str]) -> None:
         )
 
 
+def validate_pr_body_postcheck_guidance(errors: list[str]) -> None:
+    required_tokens = {
+        "commands": (
+            "PR Body Post-Check (Mandatory)",
+            "python3 scripts/validate_pr_body.py --pr <PR_NUMBER>",
+        ),
+        "workflow": (
+            "PR 생성 직후 본문 무결성 검증(필수)",
+            "python3 scripts/validate_pr_body.py --pr <PR_NUMBER>",
+        ),
+    }
+    for key, tokens in required_tokens.items():
+        path = REQUIRED_FILES[key]
+        text = _read(path)
+        for token in tokens:
+            if token not in text:
+                errors.append(f"{path}: missing PR body post-check guidance token -> {token}")
+
+
 def main() -> int:
     errors: list[str] = []
 
@@ -117,6 +136,7 @@ def main() -> int:
     validate_summary_docs_reference_core_docs(errors)
     validate_commands_endpoint_duplicates(errors)
     validate_testing_doc_has_dynamic_count_guidance(errors)
+    validate_pr_body_postcheck_guidance(errors)
 
     if errors:
         print("[FAIL] docs sync validation failed")
@@ -128,6 +148,7 @@ def main() -> int:
     print("[OK] summary docs link to core docs and links resolve")
     print("[OK] commands endpoint rows have no duplicates")
     print("[OK] testing doc includes dynamic count guidance")
+    print("[OK] PR body post-check guidance exists in commands/workflow docs")
     return 0
```
```diff
@@ -33,6 +33,9 @@ ALLOWED_PLAN_TARGETS = {
     "2": (DOC_DIR / "source" / "ouroboros_plan_v2.txt").resolve(),
     "3": (DOC_DIR / "source" / "ouroboros_plan_v3.txt").resolve(),
 }
+ISSUE_REF_PATTERN = re.compile(r"#(?P<issue>\d+)")
+ISSUE_DONE_PATTERN = re.compile(r"(?:✅|머지|해소|완료)")
+ISSUE_PENDING_PATTERN = re.compile(r"(?:잔여|오픈 상태|추적 이슈)")
 
 
 def iter_docs() -> list[Path]:
@@ -119,6 +122,38 @@ def collect_req_traceability(
             req_to_test.setdefault(req_id, set()).add(item_id)
 
 
+def validate_issue_status_consistency(path: Path, text: str, errors: list[str]) -> None:
+    issue_done_lines: dict[str, list[int]] = {}
+    issue_pending_lines: dict[str, list[int]] = {}
+
+    for line_no, raw_line in enumerate(text.splitlines(), start=1):
+        line = raw_line.strip()
+        if not line:
+            continue
+        issue_ids = [m.group("issue") for m in ISSUE_REF_PATTERN.finditer(line)]
+        if not issue_ids:
+            continue
+
+        is_pending = bool(ISSUE_PENDING_PATTERN.search(line))
+        is_done = bool(ISSUE_DONE_PATTERN.search(line)) and not is_pending
+        if not is_pending and not is_done:
+            continue
+
+        for issue_id in issue_ids:
+            if is_done:
+                issue_done_lines.setdefault(issue_id, []).append(line_no)
+            if is_pending:
+                issue_pending_lines.setdefault(issue_id, []).append(line_no)
+
+    conflicted_issues = sorted(set(issue_done_lines) & set(issue_pending_lines))
+    for issue_id in conflicted_issues:
+        errors.append(
+            f"{path}: conflicting status for issue #{issue_id} "
+            f"(done at lines {issue_done_lines[issue_id]}, "
+            f"pending at lines {issue_pending_lines[issue_id]})"
+        )
+
+
 def main() -> int:
     if not DOC_DIR.exists():
         print(f"ERROR: missing directory {DOC_DIR}")
@@ -140,6 +175,8 @@ def main() -> int:
         text = path.read_text(encoding="utf-8")
         validate_metadata(path, text, errors, doc_ids)
         validate_links(path, text, errors)
+        if path.name == "80_implementation_audit.md":
+            validate_issue_status_consistency(path, text, errors)
         collect_ids(path, text, defs, refs)
         collect_req_traceability(text, req_to_task, req_to_test)
```
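The consistency check above flags an issue that one doc line marks done while another marks pending. A minimal reproduction of the core idea, abbreviated to two of the markers from the diff (done: `✅`, pending: `잔여`):

```python
import re

ISSUE = re.compile(r"#(?P<issue>\d+)")
DONE = re.compile(r"✅")
PENDING = re.compile(r"잔여")

# Collect issue ids seen on done-marked and pending-marked lines,
# then report ids that appear in both sets.
def conflicted(text: str) -> list[str]:
    done: set[str] = set()
    pending: set[str] = set()
    for line in text.splitlines():
        ids = [m.group("issue") for m in ISSUE.finditer(line)]
        if not ids:
            continue
        if PENDING.search(line):   # pending marker wins over done, as in the diff
            pending.update(ids)
        elif DONE.search(line):
            done.update(ids)
    return sorted(done & pending)

doc = "#409 ✅ merged\n#318 잔여 follow-up\n#318 ✅ done"
print(conflicted(doc))  # ['318']
```

Note the precedence choice mirrors the diff: a line matching both marker classes counts as pending only, so a single ambiguous line never reports a conflict by itself.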
scripts/validate_pr_body.py (new file, 117 lines)

````diff
@@ -0,0 +1,117 @@
+#!/usr/bin/env python3
+"""Validate PR body formatting to prevent escaped-newline artifacts."""
+
+from __future__ import annotations
+
+import argparse
+import json
+import os
+import shutil
+import re
+import subprocess
+import sys
+from pathlib import Path
+
+HEADER_PATTERN = re.compile(r"^##\s+\S+", re.MULTILINE)
+LIST_ITEM_PATTERN = re.compile(r"^\s*(?:-|\*|\d+\.)\s+\S+", re.MULTILINE)
+FENCED_CODE_PATTERN = re.compile(r"```.*?```", re.DOTALL)
+INLINE_CODE_PATTERN = re.compile(r"`[^`]*`")
+
+
+def _strip_code_segments(text: str) -> str:
+    without_fences = FENCED_CODE_PATTERN.sub("", text)
+    return INLINE_CODE_PATTERN.sub("", without_fences)
+
+
+def resolve_tea_binary() -> str:
+    tea_from_path = shutil.which("tea")
+    if tea_from_path:
+        return tea_from_path
+
+    tea_home = Path.home() / "bin" / "tea"
+    if tea_home.exists() and tea_home.is_file() and os.access(tea_home, os.X_OK):
+        return str(tea_home)
+
+    raise RuntimeError("tea binary not found (checked PATH and ~/bin/tea)")
+
+
+def validate_pr_body_text(text: str) -> list[str]:
+    errors: list[str] = []
+    searchable = _strip_code_segments(text)
+    if "\\n" in searchable:
+        errors.append("body contains escaped newline sequence (\\n)")
+    if text.count("```") % 2 != 0:
+        errors.append("body has unbalanced fenced code blocks (``` count is odd)")
+    if not HEADER_PATTERN.search(text):
+        errors.append("body is missing markdown section headers (e.g. '## Summary')")
+    if not LIST_ITEM_PATTERN.search(text):
+        errors.append("body is missing markdown list items")
+    return errors
+
+
+def fetch_pr_body(pr_number: int) -> str:
+    tea_binary = resolve_tea_binary()
+    try:
+        completed = subprocess.run(
+            [
+                tea_binary,
+                "api",
+                "-R",
+                "origin",
+                f"repos/{{owner}}/{{repo}}/pulls/{pr_number}",
+            ],
+            check=True,
+            capture_output=True,
+            text=True,
+        )
+    except (subprocess.CalledProcessError, FileNotFoundError, PermissionError) as exc:
+        raise RuntimeError(f"failed to fetch PR #{pr_number}: {exc}") from exc
+
+    try:
+        payload = json.loads(completed.stdout)
+    except json.JSONDecodeError as exc:
+        raise RuntimeError(f"failed to parse PR payload for #{pr_number}: {exc}") from exc
+
+    body = payload.get("body", "")
+    if not isinstance(body, str):
+        raise RuntimeError(f"unexpected PR body type for #{pr_number}: {type(body).__name__}")
+    return body
+
+
+def parse_args() -> argparse.Namespace:
+    parser = argparse.ArgumentParser(
+        description="Validate PR body markdown formatting and escaped-newline artifacts."
+    )
+    group = parser.add_mutually_exclusive_group(required=True)
+    group.add_argument("--pr", type=int, help="PR number to fetch via `tea api`")
+    group.add_argument("--body-file", type=Path, help="Path to markdown body file")
+    return parser.parse_args()
+
+
+def main() -> int:
+    args = parse_args()
+    if args.body_file is not None:
+        if not args.body_file.exists():
+            print(f"[FAIL] body file not found: {args.body_file}")
+            return 1
+        body = args.body_file.read_text(encoding="utf-8")
+        source = f"file:{args.body_file}"
+    else:
+        body = fetch_pr_body(args.pr)
+        source = f"pr:{args.pr}"
+
+    errors = validate_pr_body_text(body)
+    if errors:
+        print("[FAIL] PR body validation failed")
+        print(f"- source: {source}")
+        for err in errors:
+            print(f"- {err}")
+        return 1
+
+    print("[OK] PR body validation passed")
+    print(f"- source: {source}")
+    return 0
+
+
+if __name__ == "__main__":
+    sys.exit(main())
````
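The checks in `validate_pr_body_text` can be exercised without the `tea` CLI. A condensed sketch of the two artifact checks (escaped `\n` outside code spans, fence balance), with the fence marker built programmatically so the example itself stays nestable:

```python
import re

TICKS = "`" * 3  # triple-backtick fence marker
FENCE = re.compile(TICKS + ".*?" + TICKS, re.DOTALL)
INLINE = re.compile(r"`[^`]*`")

# Strip code spans first so a legitimate \n inside a fence is not flagged.
def quick_validate(body: str) -> list[str]:
    errors = []
    searchable = INLINE.sub("", FENCE.sub("", body))
    if "\\n" in searchable:
        errors.append("escaped newline")
    if body.count(TICKS) % 2 != 0:
        errors.append("unbalanced fences")
    return errors

good = "## Summary\n\n- item\n"
bad = "## Summary\\n- item\n" + TICKS + "bash\necho hi\n"
print(quick_validate(good))  # []
print(quick_validate(bad))   # ['escaped newline', 'unbalanced fences']
```

The full script layers the header and list-item checks on top; this sketch only shows why stripping code segments before the escaped-newline scan matters.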
```diff
@@ -32,7 +32,7 @@ def validate_backtest_cost_model(
 
     slippage = model.slippage_bps_by_session or {}
     failure = model.failure_rate_by_session or {}
-    partial = model.partial_fill_rate_by_session or {}
+    partial_fill = model.partial_fill_rate_by_session or {}
 
     missing_slippage = [s for s in required_sessions if s not in slippage]
     if missing_slippage:
@@ -45,11 +45,12 @@ def validate_backtest_cost_model(
         raise ValueError(
             f"missing failure_rate_by_session for sessions: {', '.join(missing_failure)}"
         )
-    missing_partial = [s for s in required_sessions if s not in partial]
-    if missing_partial:
+
+    missing_partial_fill = [s for s in required_sessions if s not in partial_fill]
+    if missing_partial_fill:
         raise ValueError(
             "missing partial_fill_rate_by_session for sessions: "
-            f"{', '.join(missing_partial)}"
+            f"{', '.join(missing_partial_fill)}"
         )
 
     for sess, bps in slippage.items():
@@ -58,6 +59,6 @@ def validate_backtest_cost_model(
     for sess, rate in failure.items():
         if not math.isfinite(rate) or rate < 0 or rate > 1:
             raise ValueError(f"failure rate must be within [0,1] for session={sess}")
-    for sess, rate in partial.items():
+    for sess, rate in partial_fill.items():
         if not math.isfinite(rate) or rate < 0 or rate > 1:
             raise ValueError(f"partial fill rate must be within [0,1] for session={sess}")
```
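The rate validation the renamed code performs is the same predicate for both failure and partial-fill rates: finite and within [0, 1]. A standalone sketch of that predicate:

```python
import math

# Same rule the validator enforces per session: a probability-like rate
# must be a finite float in the closed interval [0, 1].
def valid_rate(rate: float) -> bool:
    return math.isfinite(rate) and 0 <= rate <= 1

print(valid_rate(0.05))          # True
print(valid_rate(1.5))           # False
print(valid_rate(float("nan")))  # False
```

The explicit `math.isfinite` guard matters because `nan` compares false to everything, so `0 <= rate <= 1` alone would already reject it, but `inf` and `-inf` need the finiteness check to produce a clear error rather than a silent pass-through elsewhere.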
```diff
@@ -45,6 +45,7 @@ class WalkForwardConfig:
 class BaselineScore:
     name: Literal["B0", "B1", "M1"]
     accuracy: float
+    cost_adjusted_accuracy: float
 
 
 @dataclass(frozen=True)
@@ -93,14 +94,6 @@ def run_v2_backtest_pipeline(
         else sorted({bar.session_id for bar in bars})
     )
     validate_backtest_cost_model(model=cost_model, required_sessions=resolved_sessions)
-    execution_model = BacktestExecutionModel(
-        ExecutionAssumptions(
-            slippage_bps_by_session=cost_model.slippage_bps_by_session or {},
-            failure_rate_by_session=cost_model.failure_rate_by_session or {},
-            partial_fill_rate_by_session=cost_model.partial_fill_rate_by_session or {},
-            seed=0,
-        )
-    )
 
     highs = [float(bar.high) for bar in bars]
     lows = [float(bar.low) for bar in bars]
@@ -132,6 +125,8 @@ def run_v2_backtest_pipeline(
     ).label
 
     ordered_labels = [labels_by_bar_index[idx] for idx in normalized_entries]
+    ordered_sessions = [bars[idx].session_id for idx in normalized_entries]
+    ordered_prices = [bars[idx].close for idx in normalized_entries]
     folds = generate_walk_forward_splits(
         n_samples=len(normalized_entries),
         train_size=walk_forward.train_size,
@@ -146,8 +141,17 @@ def run_v2_backtest_pipeline(
     for fold_idx, fold in enumerate(folds):
         train_labels = [ordered_labels[i] for i in fold.train_indices]
         test_labels = [ordered_labels[i] for i in fold.test_indices]
+        test_sessions = [ordered_sessions[i] for i in fold.test_indices]
+        test_prices = [ordered_prices[i] for i in fold.test_indices]
         if not test_labels:
             continue
+        execution_model = _build_execution_model(cost_model=cost_model, fold_seed=fold_idx)
+        execution_return_model = _build_execution_model(
+            cost_model=cost_model,
+            fold_seed=fold_idx + 1000,
+        )
+        b0_pred = _baseline_b0_pred(train_labels)
+        m1_pred = _m1_pred(train_labels)
         execution_returns_bps: list[float] = []
         execution_rejected = 0
         execution_partial = 0
@@ -155,7 +159,7 @@ def run_v2_backtest_pipeline(
             entry_bar_index = normalized_entries[rel_idx]
             bar = bars[entry_bar_index]
             trade = _simulate_execution_adjusted_return_bps(
-                execution_model=execution_model,
+                execution_model=execution_return_model,
```
|
||||||
bar=bar,
|
bar=bar,
|
||||||
label=ordered_labels[rel_idx],
|
label=ordered_labels[rel_idx],
|
||||||
side=side,
|
side=side,
|
||||||
@@ -176,11 +180,41 @@ def run_v2_backtest_pipeline(
|
|||||||
train_label_distribution=_label_dist(train_labels),
|
train_label_distribution=_label_dist(train_labels),
|
||||||
test_label_distribution=_label_dist(test_labels),
|
test_label_distribution=_label_dist(test_labels),
|
||||||
baseline_scores=[
|
baseline_scores=[
|
||||||
BaselineScore(name="B0", accuracy=_baseline_b0(train_labels, test_labels)),
|
BaselineScore(
|
||||||
BaselineScore(name="B1", accuracy=_score_constant(1, test_labels)),
|
name="B0",
|
||||||
|
accuracy=_score_constant(b0_pred, test_labels),
|
||||||
|
cost_adjusted_accuracy=_score_with_execution(
|
||||||
|
prediction=b0_pred,
|
||||||
|
actual=test_labels,
|
||||||
|
sessions=test_sessions,
|
||||||
|
reference_prices=test_prices,
|
||||||
|
execution_model=execution_model,
|
||||||
|
commission_bps=float(cost_model.commission_bps or 0.0),
|
||||||
|
),
|
||||||
|
),
|
||||||
|
BaselineScore(
|
||||||
|
name="B1",
|
||||||
|
accuracy=_score_constant(1, test_labels),
|
||||||
|
cost_adjusted_accuracy=_score_with_execution(
|
||||||
|
prediction=1,
|
||||||
|
actual=test_labels,
|
||||||
|
sessions=test_sessions,
|
||||||
|
reference_prices=test_prices,
|
||||||
|
execution_model=execution_model,
|
||||||
|
commission_bps=float(cost_model.commission_bps or 0.0),
|
||||||
|
),
|
||||||
|
),
|
||||||
BaselineScore(
|
BaselineScore(
|
||||||
name="M1",
|
name="M1",
|
||||||
accuracy=_score_constant(_m1_pred(train_labels), test_labels),
|
accuracy=_score_constant(m1_pred, test_labels),
|
||||||
|
cost_adjusted_accuracy=_score_with_execution(
|
||||||
|
prediction=m1_pred,
|
||||||
|
actual=test_labels,
|
||||||
|
sessions=test_sessions,
|
||||||
|
reference_prices=test_prices,
|
||||||
|
execution_model=execution_model,
|
||||||
|
commission_bps=float(cost_model.commission_bps or 0.0),
|
||||||
|
),
|
||||||
),
|
),
|
||||||
],
|
],
|
||||||
execution_adjusted_avg_return_bps=(
|
execution_adjusted_avg_return_bps=(
|
||||||
@@ -219,12 +253,15 @@ def _score_constant(pred: int, actual: Sequence[int]) -> float:
|
|||||||
|
|
||||||
|
|
||||||
def _baseline_b0(train_labels: Sequence[int], test_labels: Sequence[int]) -> float:
|
def _baseline_b0(train_labels: Sequence[int], test_labels: Sequence[int]) -> float:
|
||||||
|
return _score_constant(_baseline_b0_pred(train_labels), test_labels)
|
||||||
|
|
||||||
|
|
||||||
|
def _baseline_b0_pred(train_labels: Sequence[int]) -> int:
|
||||||
if not train_labels:
|
if not train_labels:
|
||||||
return _score_constant(0, test_labels)
|
return 0
|
||||||
# Majority-class baseline from training fold.
|
# Majority-class baseline from training fold.
|
||||||
choices = (-1, 0, 1)
|
choices = (-1, 0, 1)
|
||||||
pred = max(choices, key=lambda c: train_labels.count(c))
|
return max(choices, key=lambda c: train_labels.count(c))
|
||||||
return _score_constant(pred, test_labels)
|
|
||||||
|
|
||||||
|
|
||||||
def _m1_pred(train_labels: Sequence[int]) -> int:
|
def _m1_pred(train_labels: Sequence[int]) -> int:
|
||||||
@@ -233,6 +270,56 @@ def _m1_pred(train_labels: Sequence[int]) -> int:
|
|||||||
return train_labels[-1]
|
return train_labels[-1]
|
||||||
|
|
||||||
|
|
||||||
|
def _build_execution_model(
|
||||||
|
*,
|
||||||
|
cost_model: BacktestCostModel,
|
||||||
|
fold_seed: int,
|
||||||
|
) -> BacktestExecutionModel:
|
||||||
|
return BacktestExecutionModel(
|
||||||
|
ExecutionAssumptions(
|
||||||
|
slippage_bps_by_session=dict(cost_model.slippage_bps_by_session or {}),
|
||||||
|
failure_rate_by_session=dict(cost_model.failure_rate_by_session or {}),
|
||||||
|
partial_fill_rate_by_session=dict(cost_model.partial_fill_rate_by_session or {}),
|
||||||
|
seed=fold_seed,
|
||||||
|
)
|
||||||
|
)
|
||||||
|
|
||||||
|
|
||||||
|
def _score_with_execution(
|
||||||
|
*,
|
||||||
|
prediction: int,
|
||||||
|
actual: Sequence[int],
|
||||||
|
sessions: Sequence[str],
|
||||||
|
reference_prices: Sequence[float],
|
||||||
|
execution_model: BacktestExecutionModel,
|
||||||
|
commission_bps: float,
|
||||||
|
) -> float:
|
||||||
|
if not actual:
|
||||||
|
return 0.0
|
||||||
|
contributions: list[float] = []
|
||||||
|
for label, session_id, reference_price in zip(actual, sessions, reference_prices, strict=True):
|
||||||
|
if prediction == 0:
|
||||||
|
contributions.append(1.0 if label == 0 else 0.0)
|
||||||
|
continue
|
||||||
|
side = "BUY" if prediction > 0 else "SELL"
|
||||||
|
execution = execution_model.simulate(
|
||||||
|
ExecutionRequest(
|
||||||
|
side=side,
|
||||||
|
session_id=session_id,
|
||||||
|
qty=100,
|
||||||
|
reference_price=reference_price,
|
||||||
|
)
|
||||||
|
)
|
||||||
|
if execution.status == "REJECTED":
|
||||||
|
contributions.append(0.0)
|
||||||
|
continue
|
||||||
|
fill_ratio = execution.filled_qty / 100.0
|
||||||
|
cost_penalty = min(0.99, (commission_bps + execution.slippage_bps) / 10000.0)
|
||||||
|
correctness = 1.0 if prediction == label else 0.0
|
||||||
|
contributions.append(correctness * fill_ratio * (1.0 - cost_penalty))
|
||||||
|
return mean(contributions)
|
||||||
|
|
||||||
|
|
||||||
def _build_run_id(*, n_entries: int, n_folds: int, sessions: Sequence[str]) -> str:
|
def _build_run_id(*, n_entries: int, n_folds: int, sessions: Sequence[str]) -> str:
|
||||||
sess_key = "_".join(sessions)
|
sess_key = "_".join(sessions)
|
||||||
return f"v2p-e{n_entries}-f{n_folds}-s{sess_key}"
|
return f"v2p-e{n_entries}-f{n_folds}-s{sess_key}"
|
||||||

@@ -68,6 +68,7 @@ class SmartVolatilityScanner:
         self,
         market: MarketInfo | None = None,
         fallback_stocks: list[str] | None = None,
+        domestic_session_id: str | None = None,
     ) -> list[ScanCandidate]:
         """Execute smart scan and return qualified candidates.

@@ -81,11 +82,12 @@ class SmartVolatilityScanner:
         if market and not market.is_domestic:
             return await self._scan_overseas(market, fallback_stocks)

-        return await self._scan_domestic(fallback_stocks)
+        return await self._scan_domestic(fallback_stocks, session_id=domestic_session_id)

     async def _scan_domestic(
         self,
         fallback_stocks: list[str] | None = None,
+        session_id: str | None = None,
     ) -> list[ScanCandidate]:
         """Scan domestic market using volatility-first ranking + liquidity bonus."""
         # 1) Primary universe from fluctuation ranking.
@@ -93,6 +95,7 @@ class SmartVolatilityScanner:
             fluct_rows = await self.broker.fetch_market_rankings(
                 ranking_type="fluctuation",
                 limit=50,
+                session_id=session_id,
             )
         except ConnectionError as exc:
             logger.warning("Domestic fluctuation ranking failed: %s", exc)
@@ -103,6 +106,7 @@ class SmartVolatilityScanner:
             volume_rows = await self.broker.fetch_market_rankings(
                 ranking_type="volume",
                 limit=50,
+                session_id=session_id,
             )
         except ConnectionError as exc:
             logger.warning("Domestic volume ranking failed: %s", exc)

@@ -12,7 +12,10 @@ from typing import Any, cast

 import aiohttp

+from src.broker.kr_exchange_router import KRExchangeRouter
 from src.config import Settings
+from src.core.order_policy import classify_session_id
+from src.markets.schedule import MARKETS

 # KIS virtual trading server has a known SSL certificate hostname mismatch.
 _KIS_VTS_HOST = "openapivts.koreainvestment.com"
@@ -92,6 +95,7 @@ class KISBroker:
         self._last_refresh_attempt: float = 0.0
         self._refresh_cooldown: float = 60.0  # Seconds (matches KIS 1/minute limit)
         self._rate_limiter = LeakyBucket(settings.RATE_LIMIT_RPS)
+        self._kr_router = KRExchangeRouter()

     def _get_session(self) -> aiohttp.ClientSession:
         if self._session is None or self._session.closed:
@@ -187,9 +191,12 @@ class KISBroker:
                 if resp.status != 200:
                     text = await resp.text()
                     raise ConnectionError(f"Hash key request failed ({resp.status}): {text}")
-                data = await resp.json()
+                data = cast(dict[str, Any], await resp.json())

-                return data["HASH"]
+                hash_value = data.get("HASH")
+                if not isinstance(hash_value, str):
+                    raise ConnectionError("Hash key response missing HASH")
+                return hash_value

     # ------------------------------------------------------------------
     # Common Headers
@@ -211,12 +218,21 @@ class KISBroker:

     async def get_orderbook(self, stock_code: str) -> dict[str, Any]:
         """Fetch the current orderbook for a given stock code."""
+        return await self.get_orderbook_by_market(stock_code, market_div_code="J")
+
+    async def get_orderbook_by_market(
+        self,
+        stock_code: str,
+        *,
+        market_div_code: str,
+    ) -> dict[str, Any]:
+        """Fetch orderbook for a specific domestic market division code."""
         await self._rate_limiter.acquire()
         session = self._get_session()

         headers = await self._auth_headers("FHKST01010200")
         params = {
-            "FID_COND_MRKT_DIV_CODE": "J",
+            "FID_COND_MRKT_DIV_CODE": market_div_code,
             "FID_INPUT_ISCD": stock_code,
         }
         url = f"{self._base_url}/uapi/domestic-stock/v1/quotations/inquire-asking-price-exp-ccn"
@@ -226,10 +242,80 @@ class KISBroker:
                 if resp.status != 200:
                     text = await resp.text()
                     raise ConnectionError(f"get_orderbook failed ({resp.status}): {text}")
-                return await resp.json()
+                return cast(dict[str, Any], await resp.json())
         except (TimeoutError, aiohttp.ClientError) as exc:
             raise ConnectionError(f"Network error fetching orderbook: {exc}") from exc

+    @staticmethod
+    def _extract_orderbook_metrics(payload: dict[str, Any]) -> tuple[float | None, float | None]:
+        output = payload.get("output1") or payload.get("output") or {}
+        if not isinstance(output, dict):
+            return None, None
+
+        def _float(*keys: str) -> float | None:
+            for key in keys:
+                raw = output.get(key)
+                if raw in (None, ""):
+                    continue
+                try:
+                    return float(cast(str | int | float, raw))
+                except (ValueError, TypeError):
+                    continue
+            return None
+
+        ask = _float("askp1", "stck_askp1")
+        bid = _float("bidp1", "stck_bidp1")
+        if ask is not None and bid is not None and ask > 0 and bid > 0 and ask >= bid:
+            mid = (ask + bid) / 2
+            if mid > 0:
+                spread = (ask - bid) / mid
+            else:
+                spread = None
+        else:
+            spread = None
+
+        ask_qty = _float("askp_rsqn1", "ask_qty1")
+        bid_qty = _float("bidp_rsqn1", "bid_qty1")
+        if ask_qty is not None and bid_qty is not None and ask_qty >= 0 and bid_qty >= 0:
+            liquidity = ask_qty + bid_qty
+        else:
+            liquidity = None
+
+        return spread, liquidity
+
+    async def _load_dual_listing_metrics(
+        self,
+        stock_code: str,
+    ) -> tuple[bool, float | None, float | None, float | None, float | None]:
+        """Try KRX/NXT orderbooks and derive spread/liquidity metrics."""
+        spread_krx: float | None = None
+        spread_nxt: float | None = None
+        liquidity_krx: float | None = None
+        liquidity_nxt: float | None = None
+
+        for market_div_code, exchange in (("J", "KRX"), ("NX", "NXT")):
+            try:
+                payload = await self.get_orderbook_by_market(
+                    stock_code,
+                    market_div_code=market_div_code,
+                )
+            except ConnectionError:
+                continue
+
+            spread, liquidity = self._extract_orderbook_metrics(payload)
+            if exchange == "KRX":
+                spread_krx = spread
+                liquidity_krx = liquidity
+            else:
+                spread_nxt = spread
+                liquidity_nxt = liquidity
+
+        is_dual_listed = (
+            (spread_krx is not None and spread_nxt is not None)
+            or (liquidity_krx is not None and liquidity_nxt is not None)
+        )
+        return is_dual_listed, spread_krx, spread_nxt, liquidity_krx, liquidity_nxt
+
     async def get_current_price(self, stock_code: str) -> tuple[float, float, float]:
         """Fetch current price data for a domestic stock.

@@ -302,7 +388,7 @@ class KISBroker:
                 if resp.status != 200:
                     text = await resp.text()
                     raise ConnectionError(f"get_balance failed ({resp.status}): {text}")
-                return await resp.json()
+                return cast(dict[str, Any], await resp.json())
         except (TimeoutError, aiohttp.ClientError) as exc:
             raise ConnectionError(f"Network error fetching balance: {exc}") from exc

@@ -311,7 +397,8 @@ class KISBroker:
         stock_code: str,
         order_type: str,  # "BUY" or "SELL"
         quantity: int,
-        price: int = 0,
+        price: float = 0,
+        session_id: str | None = None,
     ) -> dict[str, Any]:
         """Submit a buy or sell order.

@@ -341,10 +428,32 @@ class KISBroker:
             ord_dvsn = "01"  # 시장가 (market order)
             ord_price = 0

+        resolved_session = session_id or classify_session_id(MARKETS["KR"])
+        if session_id is not None:
+            is_dual_listed, spread_krx, spread_nxt, liquidity_krx, liquidity_nxt = (
+                await self._load_dual_listing_metrics(stock_code)
+            )
+        else:
+            is_dual_listed = False
+            spread_krx = None
+            spread_nxt = None
+            liquidity_krx = None
+            liquidity_nxt = None
+        resolution = self._kr_router.resolve_for_order(
+            stock_code=stock_code,
+            session_id=resolved_session,
+            is_dual_listed=is_dual_listed,
+            spread_krx=spread_krx,
+            spread_nxt=spread_nxt,
+            liquidity_krx=liquidity_krx,
+            liquidity_nxt=liquidity_nxt,
+        )
+
         body = {
             "CANO": self._account_no,
             "ACNT_PRDT_CD": self._product_cd,
             "PDNO": stock_code,
+            "EXCG_ID_DVSN_CD": resolution.exchange_code,
             "ORD_DVSN": ord_dvsn,
             "ORD_QTY": str(quantity),
             "ORD_UNPR": str(ord_price),
@@ -361,12 +470,15 @@ class KISBroker:
                 if resp.status != 200:
                     text = await resp.text()
                     raise ConnectionError(f"send_order failed ({resp.status}): {text}")
-                data = await resp.json()
+                data = cast(dict[str, Any], await resp.json())
                 logger.info(
                     "Order submitted",
                     extra={
                         "stock_code": stock_code,
                         "action": order_type,
+                        "session_id": resolved_session,
+                        "exchange": resolution.exchange_code,
+                        "routing_reason": resolution.reason,
                     },
                 )
                 return data
@@ -377,6 +489,7 @@ class KISBroker:
         self,
         ranking_type: str = "volume",
         limit: int = 30,
+        session_id: str | None = None,
     ) -> list[dict[str, Any]]:
         """Fetch market rankings from KIS API.

@@ -394,12 +507,15 @@ class KISBroker:
         await self._rate_limiter.acquire()
         session = self._get_session()

+        resolved_session = session_id or classify_session_id(MARKETS["KR"])
+        ranking_market_code = self._kr_router.resolve_for_ranking(resolved_session)
+
         if ranking_type == "volume":
             # 거래량순위 (volume ranking): FHPST01710000 / /quotations/volume-rank
             tr_id = "FHPST01710000"
             url = f"{self._base_url}/uapi/domestic-stock/v1/quotations/volume-rank"
             params: dict[str, str] = {
-                "FID_COND_MRKT_DIV_CODE": "J",
+                "FID_COND_MRKT_DIV_CODE": ranking_market_code,
                 "FID_COND_SCR_DIV_CODE": "20171",
                 "FID_INPUT_ISCD": "0000",
                 "FID_DIV_CLS_CODE": "0",
@@ -416,7 +532,7 @@ class KISBroker:
             tr_id = "FHPST01700000"
             url = f"{self._base_url}/uapi/domestic-stock/v1/ranking/fluctuation"
             params = {
-                "fid_cond_mrkt_div_code": "J",
+                "fid_cond_mrkt_div_code": ranking_market_code,
                 "fid_cond_scr_div_code": "20170",
                 "fid_input_iscd": "0000",
                 "fid_rank_sort_cls_code": "0",

48  src/broker/kr_exchange_router.py  (Normal file)
@@ -0,0 +1,48 @@
+from __future__ import annotations
+
+from dataclasses import dataclass
+
+
+@dataclass(frozen=True)
+class ExchangeResolution:
+    exchange_code: str
+    reason: str
+
+
+class KRExchangeRouter:
+    """Resolve domestic exchange routing for KR sessions."""
+
+    def resolve_for_ranking(self, session_id: str) -> str:
+        if session_id in {"NXT_PRE", "NXT_AFTER"}:
+            return "NX"
+        return "J"
+
+    def resolve_for_order(
+        self,
+        *,
+        stock_code: str,
+        session_id: str,
+        is_dual_listed: bool = False,
+        spread_krx: float | None = None,
+        spread_nxt: float | None = None,
+        liquidity_krx: float | None = None,
+        liquidity_nxt: float | None = None,
+    ) -> ExchangeResolution:
+        del stock_code
+        default_exchange = "NXT" if session_id in {"NXT_PRE", "NXT_AFTER"} else "KRX"
+        default_reason = "session_default"
+
+        if not is_dual_listed:
+            return ExchangeResolution(default_exchange, default_reason)
+
+        if spread_krx is not None and spread_nxt is not None:
+            if spread_nxt < spread_krx:
+                return ExchangeResolution("NXT", "dual_listing_spread")
+            return ExchangeResolution("KRX", "dual_listing_spread")
+
+        if liquidity_krx is not None and liquidity_nxt is not None:
+            if liquidity_nxt > liquidity_krx:
+                return ExchangeResolution("NXT", "dual_listing_liquidity")
+            return ExchangeResolution("KRX", "dual_listing_liquidity")
+
+        return ExchangeResolution(default_exchange, "fallback_data_unavailable")
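The router's decision order is strict: spread comparison wins when both venues report one, liquidity breaks the tie otherwise, and everything else falls back to the session default. A compact sketch of that priority chain (a free function with illustrative `(krx, nxt)` tuple arguments, not the project's `KRExchangeRouter` API):

```python
def resolve(session_id, dual=False, spread=None, liquidity=None):
    """spread/liquidity are optional (krx, nxt) pairs; mirrors the priority order."""
    default = "NXT" if session_id in {"NXT_PRE", "NXT_AFTER"} else "KRX"
    if not dual:
        return default, "session_default"
    # 1) Prefer the venue with the tighter spread when both sides are known.
    if spread and spread[0] is not None and spread[1] is not None:
        return ("NXT" if spread[1] < spread[0] else "KRX"), "dual_listing_spread"
    # 2) Otherwise prefer the deeper book.
    if liquidity and liquidity[0] is not None and liquidity[1] is not None:
        return ("NXT" if liquidity[1] > liquidity[0] else "KRX"), "dual_listing_liquidity"
    # 3) No comparable data: keep the session default but record why.
    return default, "fallback_data_unavailable"
```

Returning a reason string alongside the venue makes the routing auditable in order logs, which is why the diff threads `resolution.reason` into the `logger.info` extras.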

@@ -23,6 +23,7 @@ class BlackoutWindow:
 class QueuedOrderIntent:
     market_code: str
     exchange_code: str
+    session_id: str
     stock_code: str
     order_type: str
     quantity: int
@@ -68,11 +69,16 @@ class BlackoutOrderManager:
         self._queue: deque[QueuedOrderIntent] = deque()
         self._was_blackout = False
         self._max_queue_size = max_queue_size
+        self._overflow_drop_count = 0

     @property
     def pending_count(self) -> int:
         return len(self._queue)

+    @property
+    def overflow_drop_count(self) -> int:
+        return self._overflow_drop_count
+
     def in_blackout(self, now: datetime | None = None) -> bool:
         if not self.enabled or not self._windows:
             return False
@@ -81,8 +87,11 @@ class BlackoutOrderManager:
         return any(window.contains(kst_now) for window in self._windows)

     def enqueue(self, intent: QueuedOrderIntent) -> bool:
-        if len(self._queue) >= self._max_queue_size:
+        if self._max_queue_size <= 0:
             return False
+        if len(self._queue) >= self._max_queue_size:
+            self._queue.popleft()
+            self._overflow_drop_count += 1
         self._queue.append(intent)
         return True
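The `enqueue` change above replaces a hard reject on a full queue with drop-oldest semantics plus an overflow counter, while a non-positive capacity still rejects everything. The core invariant can be sketched with a bare `deque` (the class name is illustrative, not the project's `BlackoutOrderManager`):

```python
from collections import deque


class BoundedQueue:
    def __init__(self, max_size: int):
        self._q: deque = deque()
        self._max = max_size
        self.dropped = 0  # Count of oldest entries evicted on overflow.

    def enqueue(self, item) -> bool:
        if self._max <= 0:
            return False  # A zero-size queue accepts nothing.
        if len(self._q) >= self._max:
            self._q.popleft()  # Evict the oldest intent to keep the newest.
            self.dropped += 1
        self._q.append(item)
        return True

    def __len__(self) -> int:
        return len(self._q)
```

Drop-oldest fits order intents queued during a blackout: the most recent intent reflects the latest market view, so it is the one worth keeping.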

@@ -3,13 +3,14 @@
 Order is fixed:
 1) block new orders
 2) cancel pending orders
-3) refresh order state
+3) refresh order state (retry up to 3 attempts with exponential backoff)
 4) reduce risk
 5) snapshot and notify
 """

 from __future__ import annotations

+import asyncio
 import inspect
 from collections.abc import Awaitable, Callable
 from dataclasses import dataclass, field
@@ -34,16 +35,55 @@ class KillSwitchOrchestrator:
         report: KillSwitchReport,
         name: str,
         fn: StepCallable | None,
-    ) -> None:
+    ) -> bool:
         report.steps.append(name)
         if fn is None:
-            return
+            return True
         try:
             result = fn()
             if inspect.isawaitable(result):
                 await result
+            if result is False:
+                raise RuntimeError("step returned False")
+            return True
         except Exception as exc:  # pragma: no cover - intentionally resilient
             report.errors.append(f"{name}: {exc}")
+            return False
+
+    async def _run_refresh_with_retry(
+        self,
+        report: KillSwitchReport,
+        fn: StepCallable | None,
+        *,
+        max_attempts: int,
+        base_delay_sec: float,
+    ) -> None:
+        report.steps.append("refresh_order_state")
+        if fn is None:
+            return
+
+        attempts = max(1, max_attempts)
+        delay = max(0.0, base_delay_sec)
+        last_exc: Exception | None = None
+        for attempt in range(1, attempts + 1):
+            try:
+                result = fn()
+                if inspect.isawaitable(result):
+                    await result
+                if result is False:
+                    raise RuntimeError("step returned False")
+                return
+            except Exception as exc:
+                last_exc = exc
+                if attempt >= attempts:
+                    break
+                if delay > 0:
+                    await asyncio.sleep(delay * (2 ** (attempt - 1)))
+        if last_exc is not None:
+            report.errors.append(
+                "refresh_order_state: failed after "
+                f"{attempts} attempts ({last_exc})"
+            )

     async def trigger(
         self,
@@ -54,6 +94,8 @@ class KillSwitchOrchestrator:
         reduce_risk: StepCallable | None = None,
         snapshot_state: StepCallable | None = None,
         notify: StepCallable | None = None,
+        refresh_retry_attempts: int = 3,
+        refresh_retry_base_delay_sec: float = 1.0,
     ) -> KillSwitchReport:
         report = KillSwitchReport(reason=reason)

@@ -61,7 +103,12 @@ class KillSwitchOrchestrator:
         report.steps.append("block_new_orders")

         await self._run_step(report, "cancel_pending_orders", cancel_pending_orders)
-        await self._run_step(report, "refresh_order_state", refresh_order_state)
+        await self._run_refresh_with_retry(
+            report,
+            refresh_order_state,
+            max_attempts=refresh_retry_attempts,
+            base_delay_sec=refresh_retry_base_delay_sec,
+        )
         await self._run_step(report, "reduce_risk", reduce_risk)
         await self._run_step(report, "snapshot_state", snapshot_state)
         await self._run_step(report, "notify", notify)
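Stripped of the report bookkeeping, the refresh-step retry above is a standard async exponential-backoff loop: sleep `base_delay * 2**(attempt-1)` between failures and surface the last exception once attempts are exhausted. A sketch of just that pattern (a free function, not the orchestrator's exact API):

```python
import asyncio


async def retry_with_backoff(fn, *, max_attempts: int = 3, base_delay: float = 1.0):
    """Await fn() until it succeeds, doubling the sleep between attempts."""
    attempts = max(1, max_attempts)
    last_exc: Exception | None = None
    for attempt in range(1, attempts + 1):
        try:
            return await fn()
        except Exception as exc:
            last_exc = exc
            if attempt >= attempts:
                break
            if base_delay > 0:
                await asyncio.sleep(base_delay * (2 ** (attempt - 1)))
    raise RuntimeError(f"failed after {attempts} attempts") from last_exc
```

The orchestrator deliberately appends to `report.errors` instead of raising so that later kill-switch steps (reduce risk, snapshot, notify) still run even when the refresh never succeeds.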

@@ -318,7 +318,7 @@ def get_latest_buy_trade(
     if exchange_code:
         cursor = conn.execute(
             """
-            SELECT decision_id, price, quantity
+            SELECT decision_id, price, quantity, selection_context
             FROM trades
             WHERE stock_code = ?
               AND market = ?
@@ -339,7 +339,7 @@ def get_latest_buy_trade(
     else:
         cursor = conn.execute(
             """
-            SELECT decision_id, price, quantity
+            SELECT decision_id, price, quantity, selection_context
            FROM trades
            WHERE stock_code = ?
              AND market = ?
189  src/main.py
@@ -35,6 +35,7 @@ from src.core.criticality import CriticalityAssessor
 from src.core.kill_switch import KillSwitchOrchestrator
 from src.core.order_policy import (
     OrderPolicyRejected,
+    classify_session_id,
     get_session_info,
     validate_order_policy,
 )
@@ -128,6 +129,84 @@ def _resolve_sell_qty_for_pnl(*, sell_qty: int | None, buy_qty: int | None) -> i
     return max(0, int(buy_qty or 0))


+def _extract_fx_rate_from_sources(*sources: dict[str, Any] | None) -> float | None:
+    """Best-effort FX rate extraction from broker payloads."""
+    # KIS overseas payloads expose exchange-rate fields with varying key names
+    # across endpoints/responses (price, balance, buying power). Keep this list
+    # centralised so schema drifts can be patched in one place.
+    rate_keys = (
+        "frst_bltn_exrt",
+        "bass_exrt",
+        "ovrs_exrt",
+        "aply_xchg_rt",
+        "xchg_rt",
+        "exchange_rate",
+        "fx_rate",
+    )
+    for source in sources:
+        if not isinstance(source, dict):
+            continue
+        for key in rate_keys:
+            rate = safe_float(source.get(key), 0.0)
+            if rate > 0:
+                return rate
+    return None
+
+
+def _split_trade_pnl_components(
+    *,
+    market: MarketInfo,
+    trade_pnl: float,
+    buy_price: float,
+    sell_price: float,
+    quantity: int,
+    buy_fx_rate: float | None = None,
+    sell_fx_rate: float | None = None,
+) -> tuple[float, float]:
+    """Split total trade pnl into strategy/fx components.
+
+    For overseas symbols, use buy/sell FX rates when both are available.
+    Otherwise preserve backward-compatible behaviour (all strategy pnl).
+    """
+    if trade_pnl == 0.0:
+        return 0.0, 0.0
+    if market.is_domestic:
+        return trade_pnl, 0.0
+
+    if (
+        buy_fx_rate is not None
+        and sell_fx_rate is not None
+        and buy_fx_rate > 0
+        and sell_fx_rate > 0
+        and quantity > 0
+        and buy_price > 0
+        and sell_price > 0
+    ):
+        buy_notional = buy_price * quantity
+        fx_return = (sell_fx_rate - buy_fx_rate) / buy_fx_rate
+        fx_pnl = buy_notional * fx_return
+        strategy_pnl = trade_pnl - fx_pnl
+        return strategy_pnl, fx_pnl
+
+    return trade_pnl, 0.0
+
+
+def _extract_buy_fx_rate(buy_trade: dict[str, Any] | None) -> float | None:
+    if not buy_trade:
+        return None
+    raw_ctx = buy_trade.get("selection_context")
+    if not isinstance(raw_ctx, str) or not raw_ctx.strip():
+        return None
+    try:
+        decoded = json.loads(raw_ctx)
+    except (TypeError, ValueError):
+        return None
+    if not isinstance(decoded, dict):
+        return None
+    rate = safe_float(decoded.get("fx_rate"), 0.0)
+    return rate if rate > 0 else None
+
+
 def _compute_kr_dynamic_stop_loss_pct(
     *,
     market: MarketInfo | None = None,
@@ -146,23 +225,27 @@ def _compute_kr_dynamic_stop_loss_pct(
         key="KR_ATR_STOP_MULTIPLIER_K",
         default=2.0,
     )
-    min_pct = _resolve_market_setting(
-        market=market,
-        settings=settings,
-        key="KR_ATR_STOP_MIN_PCT",
-        default=-2.0,
-    )
-    max_pct = _resolve_market_setting(
-        market=market,
-        settings=settings,
-        key="KR_ATR_STOP_MAX_PCT",
-        default=-7.0,
-    )
+    min_pct = float(
+        _resolve_market_setting(
+            market=market,
+            settings=settings,
+            key="KR_ATR_STOP_MIN_PCT",
+            default=-2.0,
+        )
+    )
+    max_pct = float(
+        _resolve_market_setting(
+            market=market,
+            settings=settings,
+            key="KR_ATR_STOP_MAX_PCT",
+            default=-7.0,
+        )
+    )
     if max_pct > min_pct:
         min_pct, max_pct = max_pct, min_pct

     dynamic_stop_pct = -((k * atr_value) / entry_price) * 100.0
-    return max(max_pct, min(min_pct, dynamic_stop_pct))
+    return float(max(max_pct, min(min_pct, dynamic_stop_pct)))


 def _stoploss_cooldown_key(*, market: MarketInfo, stock_code: str) -> str:
@@ -926,6 +1009,7 @@ async def build_overseas_symbol_universe(
 def _build_queued_order_intent(
     *,
     market: MarketInfo,
+    session_id: str,
     stock_code: str,
     order_type: str,
     quantity: int,
@@ -935,6 +1019,7 @@ def _build_queued_order_intent(
     return QueuedOrderIntent(
         market_code=market.code,
         exchange_code=market.exchange_code,
+        session_id=session_id,
         stock_code=stock_code,
         order_type=order_type,
         quantity=quantity,
@@ -947,6 +1032,7 @@ def _build_queued_order_intent(
 def _maybe_queue_order_intent(
     *,
     market: MarketInfo,
+    session_id: str,
     stock_code: str,
     order_type: str,
     quantity: int,
@@ -956,9 +1042,11 @@ def _maybe_queue_order_intent(
     if not BLACKOUT_ORDER_MANAGER.in_blackout():
         return False

+    before_overflow_drops = BLACKOUT_ORDER_MANAGER.overflow_drop_count
     queued = BLACKOUT_ORDER_MANAGER.enqueue(
         _build_queued_order_intent(
             market=market,
+            session_id=session_id,
             stock_code=stock_code,
             order_type=order_type,
             quantity=quantity,
@@ -967,6 +1055,7 @@ def _maybe_queue_order_intent(
         )
     )
     if queued:
+        after_overflow_drops = BLACKOUT_ORDER_MANAGER.overflow_drop_count
         logger.warning(
             (
                 "Blackout active: queued order intent %s %s (%s) "
@@ -980,9 +1069,22 @@ def _maybe_queue_order_intent(
             source,
             BLACKOUT_ORDER_MANAGER.pending_count,
         )
+        if after_overflow_drops > before_overflow_drops:
+            logger.error(
+                (
+                    "Blackout queue overflow policy applied: evicted oldest intent "
+                    "to keep latest %s %s (%s) source=%s pending=%d total_evicted=%d"
+                ),
+                order_type,
+                stock_code,
+                market.code,
+                source,
+                BLACKOUT_ORDER_MANAGER.pending_count,
+                after_overflow_drops,
+            )
     else:
         logger.error(
-            "Blackout queue full: dropped order intent %s %s (%s) qty=%d source=%s",
+            "Blackout queue unavailable: could not queue order intent %s %s (%s) qty=%d source=%s",
             order_type,
             stock_code,
             market.code,
@@ -1103,6 +1205,7 @@ async def process_blackout_recovery_orders(
             order_type=intent.order_type,
             quantity=intent.quantity,
             price=intent.price,
+            session_id=intent.session_id,
         )
     else:
         result = await overseas_broker.send_overseas_order(
@@ -1278,7 +1381,10 @@ async def _cancel_pending_orders_for_kill_switch(
     )

     if failures:
-        raise RuntimeError("; ".join(failures[:3]))
+        summary = "; ".join(failures[:3])
+        if len(failures) > 3:
+            summary = f"{summary} (+{len(failures) - 3} more)"
+        raise RuntimeError(summary)


 async def _refresh_order_state_for_kill_switch(
@@ -1287,6 +1393,7 @@ async def _refresh_order_state_for_kill_switch(
     overseas_broker: OverseasBroker,
     markets: list[MarketInfo],
 ) -> None:
+    failures: list[str] = []
     seen_overseas: set[str] = set()
     for market in markets:
         try:
@@ -1302,6 +1409,12 @@ async def _refresh_order_state_for_kill_switch(
                 market.exchange_code,
                 exc,
             )
+            failures.append(f"{market.code}/{market.exchange_code}: {exc}")
+    if failures:
+        summary = "; ".join(failures[:3])
+        if len(failures) > 3:
+            summary = f"{summary} (+{len(failures) - 3} more)"
+        raise RuntimeError(summary)


 def _reduce_risk_for_kill_switch() -> None:
@@ -1371,6 +1484,7 @@ async def trading_cycle(
     _session_risk_overrides(market=market, settings=settings)

     # 1. Fetch market data
+    balance_info: dict[str, Any] = {}
     price_output: dict[str, Any] = {}  # Populated for overseas markets; used for fallback metrics
     if market.is_domestic:
         current_price, price_change_pct, foreigner_net = await broker.get_current_price(stock_code)
@@ -1393,8 +1507,6 @@ async def trading_cycle(
             balance_info = output2[0]
         elif isinstance(output2, dict):
             balance_info = output2
-        else:
-            balance_info = {}

         total_eval = safe_float(balance_info.get("frcr_evlu_tota", "0") or "0")
         purchase_total = safe_float(balance_info.get("frcr_buy_amt_smtl", "0") or "0")
@@ -1814,6 +1926,9 @@ async def trading_cycle(
     quantity = 0
     trade_price = current_price
     trade_pnl = 0.0
+    buy_trade: dict[str, Any] | None = None
+    buy_price = 0.0
+    sell_qty = 0
     if decision.action in ("BUY", "SELL"):
         if KILL_SWITCH.new_orders_blocked and decision.action == "BUY":
             logger.critical(
@@ -1961,6 +2076,7 @@ async def trading_cycle(
                 return
             if _maybe_queue_order_intent(
                 market=market,
+                session_id=runtime_session_id,
                 stock_code=stock_code,
                 order_type=decision.action,
                 quantity=quantity,
@@ -1973,6 +2089,7 @@ async def trading_cycle(
                 order_type=decision.action,
                 quantity=quantity,
                 price=order_price,
+                session_id=runtime_session_id,
             )
         else:
             # For overseas orders, always use limit orders (지정가):
@@ -2008,6 +2125,7 @@ async def trading_cycle(
                 return
             if _maybe_queue_order_intent(
                 market=market,
+                session_id=runtime_session_id,
                 stock_code=stock_code,
                 order_type=decision.action,
                 quantity=quantity,
@@ -2128,6 +2246,26 @@ async def trading_cycle(
             "signal": candidate.signal,
             "score": candidate.score,
         }
+    sell_fx_rate = _extract_fx_rate_from_sources(price_output, balance_info)
+    if sell_fx_rate is not None and not market.is_domestic:
+        if selection_context is None:
+            selection_context = {"fx_rate": sell_fx_rate}
+        else:
+            selection_context["fx_rate"] = sell_fx_rate
+
+    strategy_pnl: float | None = None
+    fx_pnl: float | None = None
+    if decision.action == "SELL" and order_succeeded:
+        buy_fx_rate = _extract_buy_fx_rate(buy_trade)
+        strategy_pnl, fx_pnl = _split_trade_pnl_components(
+            market=market,
+            trade_pnl=trade_pnl,
+            buy_price=buy_price,
+            sell_price=trade_price,
+            quantity=sell_qty or quantity,
+            buy_fx_rate=buy_fx_rate,
+            sell_fx_rate=sell_fx_rate,
+        )
+
     log_trade(
         conn=db_conn,
@@ -2138,6 +2276,8 @@ async def trading_cycle(
         quantity=quantity,
         price=trade_price,
         pnl=trade_pnl,
+        strategy_pnl=strategy_pnl,
+        fx_pnl=fx_pnl,
         market=market.code,
         exchange_code=market.exchange_code,
         session_id=runtime_session_id,
@@ -2284,6 +2424,7 @@ async def handle_domestic_pending_orders(
         order_type="SELL",
         quantity=psbl_qty,
         price=new_price,
+        session_id=classify_session_id(MARKETS["KR"]),
     )
     sell_resubmit_counts[key] = sell_resubmit_counts.get(key, 0) + 1
     try:
@@ -2736,6 +2877,7 @@ async def run_daily_session(
         )
         continue

+    balance_info: dict[str, Any] = {}
     if market.is_domestic:
         output2 = balance_data.get("output2", [{}])
         total_eval = safe_float(output2[0].get("tot_evlu_amt", "0")) if output2 else 0
@@ -2990,6 +3132,9 @@ async def run_daily_session(
     quantity = 0
     trade_price = stock_data["current_price"]
     trade_pnl = 0.0
+    buy_trade: dict[str, Any] | None = None
+    buy_price = 0.0
+    sell_qty = 0
     order_succeeded = True
     if decision.action in ("BUY", "SELL"):
         if KILL_SWITCH.new_orders_blocked and decision.action == "BUY":
@@ -3142,6 +3287,7 @@ async def run_daily_session(
             continue
         if _maybe_queue_order_intent(
             market=market,
+            session_id=runtime_session_id,
             stock_code=stock_code,
             order_type=decision.action,
             quantity=quantity,
@@ -3154,6 +3300,7 @@ async def run_daily_session(
             order_type=decision.action,
             quantity=quantity,
             price=order_price,
+            session_id=runtime_session_id,
         )
     else:
         # KIS VTS only accepts limit orders; use 0.5% premium for BUY
@@ -3179,6 +3326,7 @@ async def run_daily_session(
             continue
         if _maybe_queue_order_intent(
             market=market,
+            session_id=runtime_session_id,
             stock_code=stock_code,
             order_type=decision.action,
             quantity=quantity,
@@ -3272,6 +3420,30 @@ async def run_daily_session(
     # Log trade (skip if order was rejected by API)
     if decision.action in ("BUY", "SELL") and not order_succeeded:
         continue
+    strategy_pnl: float | None = None
+    fx_pnl: float | None = None
+    selection_context: dict[str, Any] | None = None
+    if decision.action == "SELL" and order_succeeded:
+        buy_fx_rate = _extract_buy_fx_rate(buy_trade)
+        sell_fx_rate = _extract_fx_rate_from_sources(balance_info, stock_data)
+        strategy_pnl, fx_pnl = _split_trade_pnl_components(
+            market=market,
+            trade_pnl=trade_pnl,
+            buy_price=buy_price,
+            sell_price=trade_price,
+            quantity=sell_qty or quantity,
+            buy_fx_rate=buy_fx_rate,
+            sell_fx_rate=sell_fx_rate,
+        )
+        if sell_fx_rate is not None and not market.is_domestic:
+            # Daily path does not carry scanner candidate metrics, so this
+            # context intentionally stores FX snapshot only.
+            selection_context = {"fx_rate": sell_fx_rate}
+    elif not market.is_domestic:
+        snapshot_fx_rate = _extract_fx_rate_from_sources(balance_info, stock_data)
+        if snapshot_fx_rate is not None:
+            # BUY/HOLD in daily path: persist FX snapshot for later SELL split.
+            selection_context = {"fx_rate": snapshot_fx_rate}
     log_trade(
         conn=db_conn,
         stock_code=stock_code,
@@ -3281,9 +3453,12 @@ async def run_daily_session(
         quantity=quantity,
         price=trade_price,
         pnl=trade_pnl,
+        strategy_pnl=strategy_pnl,
+        fx_pnl=fx_pnl,
         market=market.code,
         exchange_code=market.exchange_code,
         session_id=runtime_session_id,
+        selection_context=selection_context,
         decision_id=decision_id,
         mode=settings.MODE,
     )
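The `_split_trade_pnl_components` helper added above attributes the FX-rate move on the original buy notional to `fx_pnl` and leaves the remainder as `strategy_pnl`. A standalone sketch of the same arithmetic, with a worked number (the function name and framing here are illustrative, not the project's API):

```python
def split_pnl(trade_pnl: float, buy_price: float, quantity: int,
              buy_fx: float, sell_fx: float) -> tuple[float, float]:
    """Mirror the diff's split: fx_pnl = buy_notional * fx_return."""
    buy_notional = buy_price * quantity
    fx_return = (sell_fx - buy_fx) / buy_fx
    fx_pnl = buy_notional * fx_return
    return trade_pnl - fx_pnl, fx_pnl


# Buy 10 shares at $100 with FX 1300 KRW/USD; sell later at FX 1313 (+1%).
# fx_pnl = 1000 * 0.01 = 10.0, so of a total pnl of 25.0, 15.0 is strategy pnl.
strategy, fx = split_pnl(25.0, 100.0, 10, 1300.0, 1313.0)
```

This also shows why the diff guards on `buy_fx_rate > 0`: the FX return divides by the buy-side rate, so a missing or zero rate must fall back to the all-strategy split.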
@@ -85,8 +85,8 @@ def evaluate_exit(
         reason = "atr_trailing_stop"
     elif be_lock_threat:
         reason = "be_lock_threat"
-    elif model_exit_signal:
-        reason = "model_liquidity_exit"
+    elif model_exit_signal and next_state == PositionState.BE_LOCK:
+        reason = "model_assist_be_lock"
     elif take_profit_hit:
         # Backward-compatible immediate profit-taking path.
         reason = "arm_take_profit"
@@ -40,7 +40,8 @@ def evaluate_exit_first(inp: StateTransitionInput) -> bool:

     EXITED must be evaluated before any promotion.
     """
-    return inp.hard_stop_hit or inp.trailing_stop_hit or inp.model_exit_signal or inp.be_lock_threat
+    # model_exit_signal is assist-only and must not trigger EXIT directly.
+    return inp.hard_stop_hit or inp.trailing_stop_hit or inp.be_lock_threat


 def promote_state(current: PositionState, inp: StateTransitionInput) -> PositionState:
@@ -61,5 +62,8 @@ def promote_state(current: PositionState, inp: StateTransitionInput) -> Position
         target = PositionState.ARMED
     elif inp.unrealized_pnl_pct >= inp.be_arm_pct:
         target = PositionState.BE_LOCK
+    elif inp.model_exit_signal:
+        # Model signal assists risk posture by tightening to BE_LOCK.
+        target = PositionState.BE_LOCK

     return target if _STATE_RANK[target] > _STATE_RANK[current] else current
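The `promote_state` hunk above relies on `_STATE_RANK` to make promotion a one-way ratchet: a computed target only takes effect if it outranks the current state, so the model-assist path can tighten a fresh position to BE_LOCK but never demote an ARMED one. A minimal sketch of that guard (state names and rank values are illustrative assumptions; the real module defines its own enum and ranks):

```python
from enum import Enum


class PositionState(Enum):
    ENTERED = 0   # freshly opened position
    BE_LOCK = 1   # break-even stop locked in
    ARMED = 2     # trailing/profit-taking armed


_STATE_RANK = {state: state.value for state in PositionState}


def promote(current: PositionState, target: PositionState) -> PositionState:
    """Only ever move up in rank; lower-ranked targets are ignored."""
    return target if _STATE_RANK[target] > _STATE_RANK[current] else current
```

With this guard, a model exit signal arriving while the position is already ARMED is a no-op, which is exactly the "assist-only" behaviour the diff comments describe.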
@@ -45,7 +45,7 @@ def test_invalid_failure_rate_range_raises() -> None:
         commission_bps=5.0,
         slippage_bps_by_session={"KRX_REG": 10.0},
         failure_rate_by_session={"KRX_REG": 1.2},
-        partial_fill_rate_by_session={"KRX_REG": 0.1},
+        partial_fill_rate_by_session={"KRX_REG": 0.2},
         unfavorable_fill_required=True,
     )
     with pytest.raises(ValueError, match="failure rate must be within"):
@@ -57,7 +57,7 @@ def test_unfavorable_fill_requirement_cannot_be_disabled() -> None:
         commission_bps=5.0,
         slippage_bps_by_session={"KRX_REG": 10.0},
         failure_rate_by_session={"KRX_REG": 0.02},
-        partial_fill_rate_by_session={"KRX_REG": 0.1},
+        partial_fill_rate_by_session={"KRX_REG": 0.2},
         unfavorable_fill_required=False,
     )
     with pytest.raises(ValueError, match="unfavorable_fill_required must be True"):
@@ -70,7 +70,7 @@ def test_non_finite_commission_rejected(bad_commission: float) -> None:
         commission_bps=bad_commission,
         slippage_bps_by_session={"KRX_REG": 10.0},
         failure_rate_by_session={"KRX_REG": 0.02},
-        partial_fill_rate_by_session={"KRX_REG": 0.1},
+        partial_fill_rate_by_session={"KRX_REG": 0.2},
         unfavorable_fill_required=True,
     )
     with pytest.raises(ValueError, match="commission_bps"):
@@ -83,7 +83,7 @@ def test_non_finite_slippage_rejected(bad_slippage: float) -> None:
         commission_bps=5.0,
         slippage_bps_by_session={"KRX_REG": bad_slippage},
         failure_rate_by_session={"KRX_REG": 0.02},
-        partial_fill_rate_by_session={"KRX_REG": 0.1},
+        partial_fill_rate_by_session={"KRX_REG": 0.2},
         unfavorable_fill_required=True,
     )
     with pytest.raises(ValueError, match="slippage bps"):
@@ -102,13 +102,13 @@ def test_missing_required_partial_fill_session_raises() -> None:
     validate_backtest_cost_model(model=model, required_sessions=["KRX_REG", "US_PRE"])


-@pytest.mark.parametrize("bad_rate", [-0.1, 1.1, float("nan")])
-def test_invalid_partial_fill_rate_range_raises(bad_rate: float) -> None:
+@pytest.mark.parametrize("bad_partial_fill", [float("nan"), float("inf"), float("-inf"), -0.1, 1.1])
+def test_invalid_partial_fill_rate_rejected(bad_partial_fill: float) -> None:
     model = BacktestCostModel(
         commission_bps=5.0,
         slippage_bps_by_session={"KRX_REG": 10.0},
         failure_rate_by_session={"KRX_REG": 0.02},
-        partial_fill_rate_by_session={"KRX_REG": bad_rate},
+        partial_fill_rate_by_session={"KRX_REG": bad_partial_fill},
         unfavorable_fill_required=True,
     )
     with pytest.raises(ValueError, match="partial fill rate must be within"):
@@ -35,7 +35,7 @@ def _cost_model() -> BacktestCostModel:
         commission_bps=3.0,
         slippage_bps_by_session={"KRX_REG": 10.0, "US_PRE": 50.0},
         failure_rate_by_session={"KRX_REG": 0.01, "US_PRE": 0.08},
-        partial_fill_rate_by_session={"KRX_REG": 0.1, "US_PRE": 0.2},
+        partial_fill_rate_by_session={"KRX_REG": 0.05, "US_PRE": 0.2},
         unfavorable_fill_required=True,
     )

@@ -72,6 +72,7 @@ def test_pipeline_happy_path_returns_fold_and_artifact_contract() -> None:
     assert names == {"B0", "B1", "M1"}
     for score in fold.baseline_scores:
         assert 0.0 <= score.accuracy <= 1.0
+        assert 0.0 <= score.cost_adjusted_accuracy <= 1.0
     assert fold.execution_adjusted_trade_count >= 0
     assert fold.execution_rejected_count >= 0
     assert fold.execution_partial_count >= 0
@@ -82,7 +83,7 @@ def test_pipeline_cost_guard_fail_fast() -> None:
         commission_bps=3.0,
         slippage_bps_by_session={"KRX_REG": 10.0},
         failure_rate_by_session={"KRX_REG": 0.01},
-        partial_fill_rate_by_session={"KRX_REG": 0.1},
+        partial_fill_rate_by_session={"KRX_REG": 0.05},
         unfavorable_fill_required=True,
     )
     try:
@@ -173,8 +174,8 @@ def test_pipeline_rejects_minutes_spec_when_timestamp_missing() -> None:
     raise AssertionError("expected timestamp validation error")


-def test_pipeline_execution_adjusted_returns_reflect_cost_and_fill_assumptions() -> None:
-    base_cfg = dict(
+def test_pipeline_fold_scores_reflect_cost_and_execution_effects() -> None:
+    cfg = dict(
         bars=_bars(),
         entry_indices=[0, 1, 2, 3, 4, 5, 6, 7],
         side=1,
@@ -192,7 +193,6 @@ def test_pipeline_fold_scores_reflect_cost_and_execution_effects() -> None:
             min_train_size=3,
         ),
     )
-
     optimistic = BacktestCostModel(
         commission_bps=0.0,
         slippage_bps_by_session={"KRX_REG": 0.0, "US_PRE": 0.0},
@@ -202,25 +202,19 @@ def test_pipeline_fold_scores_reflect_cost_and_execution_effects() -> None:
     )
     conservative = BacktestCostModel(
         commission_bps=10.0,
-        slippage_bps_by_session={"KRX_REG": 20.0, "US_PRE": 60.0},
+        slippage_bps_by_session={"KRX_REG": 30.0, "US_PRE": 80.0},
         failure_rate_by_session={"KRX_REG": 0.2, "US_PRE": 0.4},
         partial_fill_rate_by_session={"KRX_REG": 0.5, "US_PRE": 0.7},
         unfavorable_fill_required=True,
     )
-    opt_out = run_v2_backtest_pipeline(cost_model=optimistic, **base_cfg)
-    cons_out = run_v2_backtest_pipeline(cost_model=conservative, **base_cfg)
-
-    opt_avg = sum(
-        f.execution_adjusted_avg_return_bps for f in opt_out.folds
-    ) / len(opt_out.folds)
-    cons_avg = sum(
-        f.execution_adjusted_avg_return_bps for f in cons_out.folds
-    ) / len(cons_out.folds)
-    assert cons_avg < opt_avg
-
-    opt_trades = sum(f.execution_adjusted_trade_count for f in opt_out.folds)
-    cons_trades = sum(f.execution_adjusted_trade_count for f in cons_out.folds)
-    cons_rejected = sum(f.execution_rejected_count for f in cons_out.folds)
-    assert cons_trades <= opt_trades
-    assert cons_rejected >= 0
+    optimistic_out = run_v2_backtest_pipeline(cost_model=optimistic, **cfg)
+    conservative_out = run_v2_backtest_pipeline(cost_model=conservative, **cfg)
+
+    assert optimistic_out.folds and conservative_out.folds
+    optimistic_score = optimistic_out.folds[0].baseline_scores[1].cost_adjusted_accuracy
+    conservative_score = conservative_out.folds[0].baseline_scores[1].cost_adjusted_accuracy
+    assert conservative_score < optimistic_score
+
+    optimistic_avg_return = optimistic_out.folds[0].execution_adjusted_avg_return_bps
+    conservative_avg_return = conservative_out.folds[0].execution_adjusted_avg_return_bps
+    assert conservative_avg_return < optimistic_avg_return
@@ -35,6 +35,7 @@ def test_recovery_batch_only_after_blackout_exit() -> None:
     intent = QueuedOrderIntent(
         market_code="KR",
         exchange_code="KRX",
+        session_id="KRX_REG",
         stock_code="005930",
         order_type="BUY",
         quantity=1,
@@ -64,6 +65,7 @@ def test_requeued_intent_is_processed_next_non_blackout_cycle() -> None:
     intent = QueuedOrderIntent(
         market_code="KR",
         exchange_code="KRX",
+        session_id="KRX_REG",
         stock_code="005930",
         order_type="BUY",
         quantity=1,
@@ -79,3 +81,54 @@ def test_requeued_intent_is_processed_next_non_blackout_cycle() -> None:
     manager.requeue(first_batch[0])
     second_batch = manager.pop_recovery_batch(outside_blackout)
     assert len(second_batch) == 1
+
+
+def test_queue_overflow_drops_oldest_and_keeps_latest() -> None:
+    manager = BlackoutOrderManager(
+        enabled=True,
+        windows=parse_blackout_windows_kst("23:30-00:10"),
+        max_queue_size=2,
+    )
+    first = QueuedOrderIntent(
+        market_code="KR",
+        exchange_code="KRX",
+        session_id="KRX_REG",
+        stock_code="000001",
+        order_type="BUY",
+        quantity=1,
+        price=100.0,
+        source="first",
+        queued_at=datetime.now(UTC),
+    )
+    second = QueuedOrderIntent(
+        market_code="KR",
+        exchange_code="KRX",
+        session_id="KRX_REG",
+        stock_code="000002",
+        order_type="BUY",
+        quantity=1,
+        price=101.0,
+        source="second",
+        queued_at=datetime.now(UTC),
+    )
+    third = QueuedOrderIntent(
+        market_code="KR",
+        exchange_code="KRX",
+        session_id="KRX_REG",
+        stock_code="000003",
+        order_type="SELL",
+        quantity=2,
+        price=102.0,
+        source="third",
+        queued_at=datetime.now(UTC),
+    )
+
+    assert manager.enqueue(first)
+    assert manager.enqueue(second)
+    assert manager.enqueue(third)
+    assert manager.pending_count == 2
+    assert manager.overflow_drop_count == 1
+
+    outside_blackout = datetime(2026, 1, 1, 15, 20, tzinfo=UTC)
+    batch = manager.pop_recovery_batch(outside_blackout)
+    assert [intent.stock_code for intent in batch] == ["000002", "000003"]
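The overflow contract pinned down by the hunk above (oldest intent evicted, drop counted, latest kept in arrival order) can be sketched standalone. This is a minimal illustrative sketch, not the project's `BlackoutOrderManager`; the class and attribute names below are assumptions chosen to mirror the test's assertions.

```python
from collections import deque


class BoundedIntentQueue:
    """Sketch of a bounded FIFO: on overflow, evict the oldest item and count the drop."""

    def __init__(self, max_size: int) -> None:
        self._items: deque = deque()
        self._max_size = max_size
        self.overflow_drop_count = 0

    def enqueue(self, item) -> bool:
        if len(self._items) >= self._max_size:
            self._items.popleft()  # drop the oldest queued intent
            self.overflow_drop_count += 1
        self._items.append(item)
        return True

    def pop_all(self) -> list:
        # Drain the queue in arrival order (oldest surviving intent first).
        batch = list(self._items)
        self._items.clear()
        return batch
```

With `max_size=2`, enqueuing "000001", "000002", "000003" evicts the first code and drains as `["000002", "000003"]`, matching the asserted recovery-batch order.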
@@ -400,6 +400,15 @@ class TestFetchMarketRankings:
         assert result[0]["stock_code"] == "015260"
         assert result[0]["change_rate"] == 29.74
+
+    @pytest.mark.asyncio
+    async def test_volume_uses_nx_market_code_in_nxt_session(self, broker: KISBroker) -> None:
+        mock_resp = _make_ranking_mock([])
+        with patch("aiohttp.ClientSession.get", return_value=mock_resp) as mock_get:
+            await broker.fetch_market_rankings(ranking_type="volume", session_id="NXT_PRE")
+
+        params = mock_get.call_args[1].get("params", {})
+        assert params.get("FID_COND_MRKT_DIV_CODE") == "NX"
 
 
 # ---------------------------------------------------------------------------
 # KRX tick unit / round-down helpers (issue #157)
@@ -591,6 +600,60 @@ class TestSendOrderTickRounding:
         body = order_call[1].get("json", {})
         assert body["ORD_DVSN"] == "01"
+
+    @pytest.mark.asyncio
+    async def test_send_order_sets_exchange_field_from_session(self, broker: KISBroker) -> None:
+        mock_hash = AsyncMock()
+        mock_hash.status = 200
+        mock_hash.json = AsyncMock(return_value={"HASH": "h"})
+        mock_hash.__aenter__ = AsyncMock(return_value=mock_hash)
+        mock_hash.__aexit__ = AsyncMock(return_value=False)
+
+        mock_order = AsyncMock()
+        mock_order.status = 200
+        mock_order.json = AsyncMock(return_value={"rt_cd": "0"})
+        mock_order.__aenter__ = AsyncMock(return_value=mock_order)
+        mock_order.__aexit__ = AsyncMock(return_value=False)
+
+        with patch("aiohttp.ClientSession.post", side_effect=[mock_hash, mock_order]) as mock_post:
+            with patch.object(
+                broker,
+                "_load_dual_listing_metrics",
+                new=AsyncMock(return_value=(False, None, None, None, None)),
+            ):
+                await broker.send_order("005930", "BUY", 1, price=50000, session_id="NXT_PRE")
+
+        order_call = mock_post.call_args_list[1]
+        body = order_call[1].get("json", {})
+        assert body["EXCG_ID_DVSN_CD"] == "NXT"
+
+    @pytest.mark.asyncio
+    async def test_send_order_prefers_nxt_when_dual_listing_spread_is_tighter(
+        self, broker: KISBroker
+    ) -> None:
+        mock_hash = AsyncMock()
+        mock_hash.status = 200
+        mock_hash.json = AsyncMock(return_value={"HASH": "h"})
+        mock_hash.__aenter__ = AsyncMock(return_value=mock_hash)
+        mock_hash.__aexit__ = AsyncMock(return_value=False)
+
+        mock_order = AsyncMock()
+        mock_order.status = 200
+        mock_order.json = AsyncMock(return_value={"rt_cd": "0"})
+        mock_order.__aenter__ = AsyncMock(return_value=mock_order)
+        mock_order.__aexit__ = AsyncMock(return_value=False)
+
+        with patch("aiohttp.ClientSession.post", side_effect=[mock_hash, mock_order]) as mock_post:
+            with patch.object(
+                broker,
+                "_load_dual_listing_metrics",
+                new=AsyncMock(return_value=(True, 0.004, 0.002, 100000.0, 90000.0)),
+            ):
+                await broker.send_order("005930", "BUY", 1, price=50000, session_id="KRX_REG")
+
+        order_call = mock_post.call_args_list[1]
+        body = order_call[1].get("json", {})
+        assert body["EXCG_ID_DVSN_CD"] == "NXT"
 
 
 # ---------------------------------------------------------------------------
 # TR_ID live/paper branching (issues #201, #202, #203)
@@ -53,3 +53,52 @@ async def test_kill_switch_collects_step_errors() -> None:
 
     report = await ks.trigger(reason="test", cancel_pending_orders=_boom)
     assert any(err.startswith("cancel_pending_orders:") for err in report.errors)
+
+
+@pytest.mark.asyncio
+async def test_kill_switch_refresh_retries_then_succeeds() -> None:
+    ks = KillSwitchOrchestrator()
+    refresh_calls = {"count": 0}
+
+    def _flaky_refresh() -> None:
+        refresh_calls["count"] += 1
+        if refresh_calls["count"] < 3:
+            raise RuntimeError("temporary refresh failure")
+
+    report = await ks.trigger(
+        reason="test",
+        refresh_order_state=_flaky_refresh,
+        refresh_retry_attempts=3,
+        refresh_retry_base_delay_sec=0.0,
+    )
+    assert refresh_calls["count"] == 3
+    assert report.errors == []
+
+
+@pytest.mark.asyncio
+async def test_kill_switch_refresh_retry_exhausted_records_error_and_continues() -> None:
+    ks = KillSwitchOrchestrator()
+    calls: list[str] = []
+
+    def _refresh_fail() -> None:
+        raise RuntimeError("persistent refresh failure")
+
+    def _reduce() -> None:
+        calls.append("reduce")
+
+    def _snapshot() -> None:
+        calls.append("snapshot")
+
+    report = await ks.trigger(
+        reason="test",
+        refresh_order_state=_refresh_fail,
+        reduce_risk=_reduce,
+        snapshot_state=_snapshot,
+        refresh_retry_attempts=2,
+        refresh_retry_base_delay_sec=0.0,
+    )
+    assert any(
+        err.startswith("refresh_order_state: failed after 2 attempts")
+        for err in report.errors
+    )
+    assert calls == ["reduce", "snapshot"]
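The retry semantics these two tests pin down — attempt the refresh step up to N times, and on exhaustion record a summary error and keep running the remaining steps rather than abort — can be sketched in isolation. A hedged sketch only: the function name, error-string format, and backoff shape below are assumptions modeled on the test's assertions, not the actual `KillSwitchOrchestrator` code.

```python
import time
from typing import Callable


def run_with_retries(
    step: Callable[[], None],
    attempts: int,
    base_delay_sec: float,
    errors: list,
) -> bool:
    """Retry `step` up to `attempts` times; on exhaustion record the error and continue."""
    last_exc = None
    for attempt in range(1, attempts + 1):
        try:
            step()
            return True
        except Exception as exc:  # a kill switch must never abort on a step failure
            last_exc = exc
            time.sleep(base_delay_sec * attempt)  # simple linear backoff between tries
    errors.append(f"refresh_order_state: failed after {attempts} attempts: {last_exc}")
    return False


calls = {"count": 0}


def flaky() -> None:
    calls["count"] += 1
    if calls["count"] < 3:
        raise RuntimeError("temporary refresh failure")


errors: list = []
ok = run_with_retries(flaky, attempts=3, base_delay_sec=0.0, errors=errors)
# flaky fails twice, succeeds on the third attempt; errors stays empty
```

The key design point the tests encode: exhaustion produces a recorded error plus continued execution of `reduce_risk` and `snapshot_state`, never an exception out of the orchestrator.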
tests/test_kr_exchange_router.py  (new file, 40 lines)
@@ -0,0 +1,40 @@
+from __future__ import annotations
+
+from src.broker.kr_exchange_router import KRExchangeRouter
+
+
+def test_ranking_market_code_by_session() -> None:
+    router = KRExchangeRouter()
+    assert router.resolve_for_ranking("KRX_REG") == "J"
+    assert router.resolve_for_ranking("NXT_PRE") == "NX"
+    assert router.resolve_for_ranking("NXT_AFTER") == "NX"
+
+
+def test_order_exchange_falls_back_to_session_default_on_missing_data() -> None:
+    router = KRExchangeRouter()
+    resolved = router.resolve_for_order(
+        stock_code="0001A0",
+        session_id="NXT_PRE",
+        is_dual_listed=True,
+        spread_krx=None,
+        spread_nxt=None,
+        liquidity_krx=None,
+        liquidity_nxt=None,
+    )
+    assert resolved.exchange_code == "NXT"
+    assert resolved.reason == "fallback_data_unavailable"
+
+
+def test_order_exchange_uses_spread_preference_for_dual_listing() -> None:
+    router = KRExchangeRouter()
+    resolved = router.resolve_for_order(
+        stock_code="0001A0",
+        session_id="KRX_REG",
+        is_dual_listed=True,
+        spread_krx=0.005,
+        spread_nxt=0.003,
+        liquidity_krx=100000.0,
+        liquidity_nxt=90000.0,
+    )
+    assert resolved.exchange_code == "NXT"
+    assert resolved.reason == "dual_listing_spread"
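The routing rules this new test file pins down can be summarized as: NXT sessions query rankings with the `NX` market code (KRX regular hours use `J`); for dual-listed names the venue with the tighter spread wins; and when spread data is missing, routing falls back to the session's default venue. The decision logic can be sketched as follows — an illustrative sketch only, not the actual `KRExchangeRouter` implementation; the helper names and the tie-breaking choice are assumptions inferred from the assertions above.

```python
from __future__ import annotations

from dataclasses import dataclass


@dataclass
class RoutingDecision:
    exchange_code: str
    reason: str


def resolve_for_ranking(session_id: str) -> str:
    # NXT pre/after-hours sessions use the "NX" ranking market code; KRX uses "J".
    return "NX" if session_id.startswith("NXT") else "J"


def resolve_for_order(
    session_id: str,
    is_dual_listed: bool,
    spread_krx: float | None,
    spread_nxt: float | None,
) -> RoutingDecision:
    session_default = "NXT" if session_id.startswith("NXT") else "KRX"
    if not is_dual_listed:
        return RoutingDecision(session_default, "single_listing")
    if spread_krx is None or spread_nxt is None:
        # No quote data for one side: fall back to the session's default venue.
        return RoutingDecision(session_default, "fallback_data_unavailable")
    # Dual listing with both quotes: prefer the venue with the tighter spread.
    winner = "NXT" if spread_nxt < spread_krx else "KRX"
    return RoutingDecision(winner, "dual_listing_spread")
```

This matches the three test cases: `NXT_PRE` with no spread data falls back to `NXT`, and `KRX_REG` with a tighter NXT spread (0.003 vs 0.005) still routes to `NXT`.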
@@ -1,6 +1,7 @@
 """Tests for main trading loop integration."""
 
 from datetime import UTC, date, datetime
+from typing import Any
 from unittest.mock import ANY, AsyncMock, MagicMock, patch
 
 import pytest
@@ -9,6 +10,7 @@ import src.main as main_module
 from src.config import Settings
 from src.context.layer import ContextLayer
 from src.context.scheduler import ScheduleResult
+from src.core.blackout_manager import BlackoutOrderManager
 from src.core.order_policy import OrderPolicyRejected, get_session_info
 from src.core.risk_manager import CircuitBreakerTripped, FatFingerRejected
 from src.db import init_db, log_trade
@@ -33,6 +35,7 @@ from src.main import (
     _extract_held_qty_from_balance,
     _handle_market_close,
     _inject_staged_exit_features,
+    _maybe_queue_order_intent,
     _resolve_market_setting,
     _resolve_sell_qty_for_pnl,
     _retry_connection,
@@ -40,6 +43,7 @@ from src.main import (
     _run_evolution_loop,
     _should_block_overseas_buy_for_fx_buffer,
     _should_force_exit_for_overnight,
+    _split_trade_pnl_components,
     _start_dashboard_server,
     _stoploss_cooldown_minutes,
     _trigger_emergency_kill_switch,
@@ -102,22 +106,22 @@ def _make_sell_match(stock_code: str = "005930") -> ScenarioMatch:
 @pytest.fixture(autouse=True)
 def _reset_kill_switch_state() -> None:
     """Prevent cross-test leakage from global kill-switch state."""
+    def _reset_session_risk_globals() -> None:
+        _SESSION_RISK_LAST_BY_MARKET.clear()
+        _SESSION_RISK_OVERRIDES_BY_MARKET.clear()
+        _SESSION_RISK_PROFILES_MAP.clear()
+        main_module._SESSION_RISK_PROFILES_RAW = "{}"
+
     KILL_SWITCH.clear_block()
     _RUNTIME_EXIT_STATES.clear()
     _RUNTIME_EXIT_PEAKS.clear()
-    _SESSION_RISK_LAST_BY_MARKET.clear()
-    _SESSION_RISK_OVERRIDES_BY_MARKET.clear()
-    _SESSION_RISK_PROFILES_MAP.clear()
-    main_module._SESSION_RISK_PROFILES_RAW = "__reset__"
+    _reset_session_risk_globals()
     _STOPLOSS_REENTRY_COOLDOWN_UNTIL.clear()
     yield
     KILL_SWITCH.clear_block()
     _RUNTIME_EXIT_STATES.clear()
     _RUNTIME_EXIT_PEAKS.clear()
-    _SESSION_RISK_LAST_BY_MARKET.clear()
-    _SESSION_RISK_OVERRIDES_BY_MARKET.clear()
-    _SESSION_RISK_PROFILES_MAP.clear()
-    main_module._SESSION_RISK_PROFILES_RAW = "__reset__"
+    _reset_session_risk_globals()
     _STOPLOSS_REENTRY_COOLDOWN_UNTIL.clear()
 
 
@@ -3181,6 +3185,13 @@ async def test_sell_order_uses_broker_balance_qty_not_db() -> None:
     updated_buy = decision_logger.get_decision_by_id(buy_decision_id)
     assert updated_buy is not None
     assert updated_buy.outcome_pnl == -25.0
+    sell_row = db_conn.execute(
+        "SELECT pnl, strategy_pnl, fx_pnl FROM trades WHERE action='SELL' ORDER BY id DESC LIMIT 1"
+    ).fetchone()
+    assert sell_row is not None
+    assert sell_row[0] == -25.0
+    assert sell_row[1] == -25.0
+    assert sell_row[2] == 0.0
 
 
 @pytest.mark.asyncio
@@ -4598,6 +4609,23 @@ def test_fx_buffer_guard_applies_only_to_us_and_respects_boundary() -> None:
     assert required_jp == 0.0
 
 
+def test_split_trade_pnl_components_overseas_fx_split_preserves_total() -> None:
+    market = MagicMock()
+    market.is_domestic = False
+    strategy_pnl, fx_pnl = _split_trade_pnl_components(
+        market=market,
+        trade_pnl=20.0,
+        buy_price=100.0,
+        sell_price=110.0,
+        quantity=2,
+        buy_fx_rate=1200.0,
+        sell_fx_rate=1260.0,
+    )
+    assert strategy_pnl == 10.0
+    assert fx_pnl == 10.0
+    assert strategy_pnl + fx_pnl == pytest.approx(20.0)
+
+
 # run_daily_session — daily CB baseline (daily_start_eval) tests (issue #207)
 # ---------------------------------------------------------------------------
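One decomposition consistent with the numbers asserted above: value the FX leg as the FX appreciation on the invested cost basis (100.0 × 2 shares × 5% rate move = 10.0), and treat the remainder of the trade PnL as the strategy leg (20.0 − 10.0 = 10.0), so the two components always sum back to the total. This is a sketch of that arithmetic only — the real `_split_trade_pnl_components` takes more parameters (`market`, `sell_price`) and may compute the split differently; the function below is a hypothetical reconstruction from the asserted values.

```python
def split_trade_pnl(
    trade_pnl: float,
    buy_price: float,
    quantity: int,
    buy_fx_rate: float,
    sell_fx_rate: float,
) -> tuple:
    """Hypothetical split: FX leg = rate move on the cost basis; strategy leg = remainder."""
    cost_basis = buy_price * quantity  # 100.0 * 2 = 200.0
    # FX gain on the invested amount: 200.0 * (1260/1200 - 1) = 10.0
    fx_pnl = cost_basis * (sell_fx_rate / buy_fx_rate - 1.0)
    # Remainder attributed to the strategy, so the split preserves the total by construction.
    strategy_pnl = trade_pnl - fx_pnl
    return strategy_pnl, fx_pnl
```

By construction `strategy_pnl + fx_pnl == trade_pnl`, which is exactly the invariant the test's final `pytest.approx` assertion protects.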
@@ -6351,6 +6379,225 @@ async def test_us_min_price_filter_not_applied_to_kr_market() -> None:
     broker.send_order.assert_called_once()
 
 
+@pytest.mark.asyncio
+async def test_session_boundary_reloads_us_min_price_override_in_trading_cycle() -> None:
+    db_conn = init_db(":memory:")
+    decision_logger = DecisionLogger(db_conn)
+
+    broker = MagicMock()
+    broker.get_balance = AsyncMock(return_value={"output1": [], "output2": [{}]})
+
+    overseas_broker = MagicMock()
+    overseas_broker.get_overseas_price = AsyncMock(
+        return_value={"output": {"last": "7.0", "rate": "0.0"}}
+    )
+    overseas_broker.get_overseas_balance = AsyncMock(
+        return_value={
+            "output1": [],
+            "output2": [{"frcr_evlu_tota": "10000", "frcr_buy_amt_smtl": "0"}],
+        }
+    )
+    overseas_broker.get_overseas_buying_power = AsyncMock(
+        return_value={"output": {"ovrs_ord_psbl_amt": "10000"}}
+    )
+    overseas_broker.send_overseas_order = AsyncMock(return_value={"rt_cd": "0", "msg1": "OK"})
+
+    market = MagicMock()
+    market.name = "NASDAQ"
+    market.code = "US_NASDAQ"
+    market.exchange_code = "NASD"
+    market.is_domestic = False
+
+    telegram = MagicMock()
+    telegram.notify_trade_execution = AsyncMock()
+    telegram.notify_fat_finger = AsyncMock()
+    telegram.notify_circuit_breaker = AsyncMock()
+    telegram.notify_scenario_matched = AsyncMock()
+
+    settings = Settings(
+        KIS_APP_KEY="k",
+        KIS_APP_SECRET="s",
+        KIS_ACCOUNT_NO="12345678-01",
+        GEMINI_API_KEY="g",
+        MODE="paper",
+        PAPER_OVERSEAS_CASH=50000.0,
+        US_MIN_PRICE=5.0,
+        USD_BUFFER_MIN=1000.0,
+        SESSION_RISK_RELOAD_ENABLED=True,
+        SESSION_RISK_PROFILES_JSON=(
+            '{"US_PRE": {"US_MIN_PRICE": 8.0}, "US_DAY": {"US_MIN_PRICE": 5.0}}'
+        ),
+    )
+
+    current_session = {"id": "US_PRE"}
+
+    def _session_info(_: Any) -> MagicMock:
+        return MagicMock(session_id=current_session["id"])
+
+    with (
+        patch("src.main.get_open_position", return_value=None),
+        patch("src.main.get_session_info", side_effect=_session_info),
+    ):
+        await trading_cycle(
+            broker=broker,
+            overseas_broker=overseas_broker,
+            scenario_engine=MagicMock(evaluate=MagicMock(return_value=_make_buy_match("AAPL"))),
+            playbook=_make_playbook("US_NASDAQ"),
+            risk=MagicMock(validate_order=MagicMock(), check_circuit_breaker=MagicMock()),
+            db_conn=db_conn,
+            decision_logger=decision_logger,
+            context_store=MagicMock(
+                get_latest_timeframe=MagicMock(return_value=None),
+                set_context=MagicMock(),
+            ),
+            criticality_assessor=MagicMock(
+                assess_market_conditions=MagicMock(return_value=MagicMock(value="NORMAL")),
+                get_timeout=MagicMock(return_value=5.0),
+            ),
+            telegram=telegram,
+            market=market,
+            stock_code="AAPL",
+            scan_candidates={},
+            settings=settings,
+        )
+        assert overseas_broker.send_overseas_order.call_count == 0
+
+        current_session["id"] = "US_DAY"
+        await trading_cycle(
+            broker=broker,
+            overseas_broker=overseas_broker,
+            scenario_engine=MagicMock(evaluate=MagicMock(return_value=_make_buy_match("AAPL"))),
+            playbook=_make_playbook("US_NASDAQ"),
+            risk=MagicMock(validate_order=MagicMock(), check_circuit_breaker=MagicMock()),
+            db_conn=db_conn,
+            decision_logger=decision_logger,
+            context_store=MagicMock(
+                get_latest_timeframe=MagicMock(return_value=None),
+                set_context=MagicMock(),
+            ),
+            criticality_assessor=MagicMock(
+                assess_market_conditions=MagicMock(return_value=MagicMock(value="NORMAL")),
+                get_timeout=MagicMock(return_value=5.0),
+            ),
+            telegram=telegram,
+            market=market,
+            stock_code="AAPL",
+            scan_candidates={},
+            settings=settings,
+        )
+
+    assert overseas_broker.send_overseas_order.call_count == 1
+
+
+@pytest.mark.asyncio
+async def test_session_boundary_falls_back_when_profile_reload_fails() -> None:
+    db_conn = init_db(":memory:")
+    decision_logger = DecisionLogger(db_conn)
+
+    broker = MagicMock()
+    broker.get_balance = AsyncMock(return_value={"output1": [], "output2": [{}]})
+
+    overseas_broker = MagicMock()
+    overseas_broker.get_overseas_price = AsyncMock(
+        return_value={"output": {"last": "7.0", "rate": "0.0"}}
+    )
+    overseas_broker.get_overseas_balance = AsyncMock(
+        return_value={
+            "output1": [],
+            "output2": [{"frcr_evlu_tota": "10000", "frcr_buy_amt_smtl": "0"}],
+        }
+    )
+    overseas_broker.get_overseas_buying_power = AsyncMock(
+        return_value={"output": {"ovrs_ord_psbl_amt": "10000"}}
+    )
+    overseas_broker.send_overseas_order = AsyncMock(return_value={"rt_cd": "0", "msg1": "OK"})
+
+    market = MagicMock()
+    market.name = "NASDAQ"
+    market.code = "US_NASDAQ"
+    market.exchange_code = "NASD"
+    market.is_domestic = False
+
+    telegram = MagicMock()
+    telegram.notify_trade_execution = AsyncMock()
+    telegram.notify_fat_finger = AsyncMock()
+    telegram.notify_circuit_breaker = AsyncMock()
+    telegram.notify_scenario_matched = AsyncMock()
+
+    settings = Settings(
+        KIS_APP_KEY="k",
+        KIS_APP_SECRET="s",
+        KIS_ACCOUNT_NO="12345678-01",
+        GEMINI_API_KEY="g",
+        MODE="paper",
+        PAPER_OVERSEAS_CASH=50000.0,
+        US_MIN_PRICE=5.0,
+        USD_BUFFER_MIN=1000.0,
+        SESSION_RISK_RELOAD_ENABLED=True,
+        SESSION_RISK_PROFILES_JSON='{"US_PRE": {"US_MIN_PRICE": 8.0}}',
+    )
+
+    current_session = {"id": "US_PRE"}
+
+    def _session_info(_: Any) -> MagicMock:
+        return MagicMock(session_id=current_session["id"])
+
+    with (
+        patch("src.main.get_open_position", return_value=None),
+        patch("src.main.get_session_info", side_effect=_session_info),
+    ):
+        await trading_cycle(
+            broker=broker,
+            overseas_broker=overseas_broker,
+            scenario_engine=MagicMock(evaluate=MagicMock(return_value=_make_buy_match("AAPL"))),
+            playbook=_make_playbook("US_NASDAQ"),
+            risk=MagicMock(validate_order=MagicMock(), check_circuit_breaker=MagicMock()),
+            db_conn=db_conn,
+            decision_logger=decision_logger,
+            context_store=MagicMock(
+                get_latest_timeframe=MagicMock(return_value=None),
+                set_context=MagicMock(),
+            ),
+            criticality_assessor=MagicMock(
+                assess_market_conditions=MagicMock(return_value=MagicMock(value="NORMAL")),
+                get_timeout=MagicMock(return_value=5.0),
+            ),
+            telegram=telegram,
+            market=market,
+            stock_code="AAPL",
+            scan_candidates={},
+            settings=settings,
+        )
+        assert overseas_broker.send_overseas_order.call_count == 0
+
+        settings.SESSION_RISK_PROFILES_JSON = "{invalid-json"
+        current_session["id"] = "US_DAY"
+        await trading_cycle(
+            broker=broker,
+            overseas_broker=overseas_broker,
+            scenario_engine=MagicMock(evaluate=MagicMock(return_value=_make_buy_match("AAPL"))),
+            playbook=_make_playbook("US_NASDAQ"),
+            risk=MagicMock(validate_order=MagicMock(), check_circuit_breaker=MagicMock()),
+            db_conn=db_conn,
+            decision_logger=decision_logger,
+            context_store=MagicMock(
+                get_latest_timeframe=MagicMock(return_value=None),
+                set_context=MagicMock(),
+            ),
+            criticality_assessor=MagicMock(
+                assess_market_conditions=MagicMock(return_value=MagicMock(value="NORMAL")),
+                get_timeout=MagicMock(return_value=5.0),
+            ),
+            telegram=telegram,
+            market=market,
+            stock_code="AAPL",
+            scan_candidates={},
+            settings=settings,
+        )
+
+    assert overseas_broker.send_overseas_order.call_count == 1
+
+
 def test_overnight_policy_prioritizes_killswitch_over_exception() -> None:
     market = MagicMock()
     with patch("src.main.get_session_info", return_value=MagicMock(session_id="US_AFTER")):
@@ -6475,6 +6722,7 @@ async def test_blackout_queues_order_and_skips_submission() -> None:
     blackout_manager.in_blackout.return_value = True
     blackout_manager.enqueue.return_value = True
     blackout_manager.pending_count = 1
+    blackout_manager.overflow_drop_count = 0
 
     with patch("src.main.BLACKOUT_ORDER_MANAGER", blackout_manager):
         await trading_cycle(
@@ -6504,6 +6752,43 @@ async def test_blackout_queues_order_and_skips_submission() -> None:
     blackout_manager.enqueue.assert_called_once()
 
 
+def test_blackout_queue_overflow_keeps_latest_intent() -> None:
+    manager = BlackoutOrderManager(enabled=True, windows=[], max_queue_size=1)
+    manager.in_blackout = lambda now=None: True  # type: ignore[method-assign]
+
+    market = MagicMock()
+    market.code = "KR"
+    market.exchange_code = "KRX"
+
+    with patch("src.main.BLACKOUT_ORDER_MANAGER", manager):
+        assert _maybe_queue_order_intent(
+            market=market,
+            session_id="KRX_REG",
+            stock_code="005930",
+            order_type="BUY",
+            quantity=1,
+            price=100.0,
+            source="test-first",
+        )
+        assert _maybe_queue_order_intent(
+            market=market,
+            session_id="KRX_REG",
+            stock_code="000660",
+            order_type="BUY",
+            quantity=2,
+            price=200.0,
+            source="test-second",
+        )
+
+    assert manager.pending_count == 1
+    assert manager.overflow_drop_count == 1
+    manager.in_blackout = lambda now=None: False  # type: ignore[method-assign]
+    batch = manager.pop_recovery_batch()
+    assert len(batch) == 1
+    assert batch[0].stock_code == "000660"
+    assert batch[0].session_id == "KRX_REG"
+
+
 @pytest.mark.asyncio
 async def test_process_blackout_recovery_executes_valid_intents() -> None:
     """Recovery must execute queued intents that pass revalidation."""
@@ -6581,6 +6866,7 @@ async def test_process_blackout_recovery_drops_policy_rejected_intent() -> None:
     intent.quantity = 1
     intent.price = 100.0
     intent.source = "test"
+    intent.session_id = "KRX_REG"
     intent.attempts = 0
 
     blackout_manager = MagicMock()
@@ -6630,6 +6916,7 @@ async def test_process_blackout_recovery_drops_intent_on_excessive_price_drift()
     intent.quantity = 1
     intent.price = 100.0
     intent.source = "test"
+    intent.session_id = "US_PRE"
     intent.attempts = 0
 
     blackout_manager = MagicMock()
@@ -6680,6 +6967,7 @@ async def test_process_blackout_recovery_drops_overseas_intent_on_excessive_pric
     intent.quantity = 1
     intent.price = 100.0
     intent.source = "test"
+    intent.session_id = "KRX_REG"
     intent.attempts = 0
 
     blackout_manager = MagicMock()
@@ -6729,6 +7017,7 @@ async def test_process_blackout_recovery_requeues_intent_when_price_lookup_fails
     intent.quantity = 1
     intent.price = 100.0
     intent.source = "test"
+    intent.session_id = "KRX_REG"
     intent.attempts = 0
 
     blackout_manager = MagicMock()
@@ -6865,3 +7154,27 @@ async def test_trigger_emergency_kill_switch_records_cancel_failure() -> None:
     )
 
     assert any(err.startswith("cancel_pending_orders:") for err in report.errors)
+
+
+@pytest.mark.asyncio
+async def test_refresh_order_state_failure_summary_includes_more_count() -> None:
+    broker = MagicMock()
+    broker.get_balance = AsyncMock(side_effect=RuntimeError("domestic down"))
+    overseas_broker = MagicMock()
+    overseas_broker.get_overseas_balance = AsyncMock(side_effect=RuntimeError("overseas down"))
+
+    markets = []
+    for code, exchange in [("KR", "KRX"), ("US_PRE", "NASD"), ("US_DAY", "NYSE"), ("JP", "TKSE")]:
+        market = MagicMock()
+        market.code = code
+        market.exchange_code = exchange
+        market.is_domestic = code == "KR"
+        markets.append(market)
+
+    with pytest.raises(RuntimeError, match=r"\(\+1 more\)$") as exc_info:
+        await main_module._refresh_order_state_for_kill_switch(
+            broker=broker,
+            overseas_broker=overseas_broker,
+            markets=markets,
+        )
+    assert "KR/KRX" in str(exc_info.value)
tests/test_runtime_overnight_scripts.py  (new file, 160 lines)
@@ -0,0 +1,160 @@
+from __future__ import annotations
+
+import os
+import signal
+import socket
+import subprocess
+from pathlib import Path
+
+import pytest
+
+REPO_ROOT = Path(__file__).resolve().parent.parent
+RUN_OVERNIGHT = REPO_ROOT / "scripts" / "run_overnight.sh"
+RUNTIME_MONITOR = REPO_ROOT / "scripts" / "runtime_verify_monitor.sh"
+
+
+def _latest_runtime_log(log_dir: Path) -> str:
+    logs = sorted(log_dir.glob("runtime_verify_*.log"))
+    assert logs, "runtime monitor did not produce log output"
+    return logs[-1].read_text(encoding="utf-8")
+
+
+def test_runtime_verify_monitor_detects_live_process_without_pid_files(tmp_path: Path) -> None:
+    log_dir = tmp_path / "overnight"
+    log_dir.mkdir(parents=True, exist_ok=True)
+
+    fake_live = subprocess.Popen(
+        ["bash", "-lc", 'exec -a "src.main --mode=live" sleep 10'],
+        cwd=REPO_ROOT,
+    )
+    try:
+        env = os.environ.copy()
+        env.update(
+            {
+                "ROOT_DIR": str(REPO_ROOT),
+                "LOG_DIR": str(log_dir),
+                "INTERVAL_SEC": "1",
+                "MAX_HOURS": "1",
+                "MAX_LOOPS": "1",
+                "POLICY_TZ": "UTC",
+            }
+        )
+        completed = subprocess.run(
+            ["bash", str(RUNTIME_MONITOR)],
+            cwd=REPO_ROOT,
+            env=env,
+            capture_output=True,
+            text=True,
+            check=False,
+        )
+        assert completed.returncode == 0, completed.stderr
+
+        log_text = _latest_runtime_log(log_dir)
+        assert "app_alive=1" in log_text
+        assert "[COVERAGE] LIVE_MODE=PASS source=process_liveness" in log_text
+        assert "[ANOMALY]" not in log_text
+    finally:
+        fake_live.terminate()
+        fake_live.wait(timeout=5)
+
+
+def test_run_overnight_fails_fast_when_dashboard_port_in_use(tmp_path: Path) -> None:
+    log_dir = tmp_path / "overnight"
+    log_dir.mkdir(parents=True, exist_ok=True)
|
||||||
|
|
||||||
|
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
|
||||||
|
sock.bind(("127.0.0.1", 0))
|
||||||
|
sock.listen(1)
|
||||||
|
port = sock.getsockname()[1]
|
||||||
|
try:
|
||||||
|
env = os.environ.copy()
|
||||||
|
env.update(
|
||||||
|
{
|
||||||
|
"LOG_DIR": str(log_dir),
|
||||||
|
"TMUX_AUTO": "false",
|
||||||
|
"DASHBOARD_PORT": str(port),
|
||||||
|
}
|
||||||
|
)
|
||||||
|
completed = subprocess.run(
|
||||||
|
["bash", str(RUN_OVERNIGHT)],
|
||||||
|
cwd=REPO_ROOT,
|
||||||
|
env=env,
|
||||||
|
capture_output=True,
|
||||||
|
text=True,
|
||||||
|
check=False,
|
||||||
|
)
|
||||||
|
assert completed.returncode != 0
|
||||||
|
output = f"{completed.stdout}\n{completed.stderr}"
|
||||||
|
assert "already in use" in output
|
||||||
|
finally:
|
||||||
|
sock.close()
|
||||||
|
|
||||||
|
|
||||||
|
def test_run_overnight_writes_live_pid_and_watchdog_pid(tmp_path: Path) -> None:
|
||||||
|
log_dir = tmp_path / "overnight"
|
||||||
|
log_dir.mkdir(parents=True, exist_ok=True)
|
||||||
|
|
||||||
|
env = os.environ.copy()
|
||||||
|
env.update(
|
||||||
|
{
|
||||||
|
"LOG_DIR": str(log_dir),
|
||||||
|
"TMUX_AUTO": "false",
|
||||||
|
"STARTUP_GRACE_SEC": "1",
|
||||||
|
"CHECK_INTERVAL": "2",
|
||||||
|
"APP_CMD_BIN": "sleep",
|
||||||
|
"APP_CMD_ARGS": "10",
|
||||||
|
}
|
||||||
|
)
|
||||||
|
completed = subprocess.run(
|
||||||
|
["bash", str(RUN_OVERNIGHT)],
|
||||||
|
cwd=REPO_ROOT,
|
||||||
|
env=env,
|
||||||
|
capture_output=True,
|
||||||
|
text=True,
|
||||||
|
check=False,
|
||||||
|
)
|
||||||
|
assert completed.returncode == 0, f"{completed.stdout}\n{completed.stderr}"
|
||||||
|
|
||||||
|
app_pid = int((log_dir / "app.pid").read_text(encoding="utf-8").strip())
|
||||||
|
watchdog_pid = int((log_dir / "watchdog.pid").read_text(encoding="utf-8").strip())
|
||||||
|
|
||||||
|
os.kill(app_pid, 0)
|
||||||
|
os.kill(watchdog_pid, 0)
|
||||||
|
|
||||||
|
for pid in (watchdog_pid, app_pid):
|
||||||
|
try:
|
||||||
|
os.kill(pid, signal.SIGTERM)
|
||||||
|
except ProcessLookupError:
|
||||||
|
pass
|
||||||
|
|
||||||
|
|
||||||
|
def test_run_overnight_fails_when_process_exits_before_grace_period(tmp_path: Path) -> None:
|
||||||
|
log_dir = tmp_path / "overnight"
|
||||||
|
log_dir.mkdir(parents=True, exist_ok=True)
|
||||||
|
|
||||||
|
env = os.environ.copy()
|
||||||
|
env.update(
|
||||||
|
{
|
||||||
|
"LOG_DIR": str(log_dir),
|
||||||
|
"TMUX_AUTO": "false",
|
||||||
|
"STARTUP_GRACE_SEC": "1",
|
||||||
|
"APP_CMD_BIN": "false",
|
||||||
|
}
|
||||||
|
)
|
||||||
|
completed = subprocess.run(
|
||||||
|
["bash", str(RUN_OVERNIGHT)],
|
||||||
|
cwd=REPO_ROOT,
|
||||||
|
env=env,
|
||||||
|
capture_output=True,
|
||||||
|
text=True,
|
||||||
|
check=False,
|
||||||
|
)
|
||||||
|
assert completed.returncode != 0
|
||||||
|
output = f"{completed.stdout}\n{completed.stderr}"
|
||||||
|
assert "startup failed:" in output
|
||||||
|
|
||||||
|
watchdog_pid_file = log_dir / "watchdog.pid"
|
||||||
|
if watchdog_pid_file.exists():
|
||||||
|
watchdog_pid = int(watchdog_pid_file.read_text(encoding="utf-8").strip())
|
||||||
|
with pytest.raises(ProcessLookupError):
|
||||||
|
os.kill(watchdog_pid, 0)
|
||||||
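The grace-period behavior these tests exercise (fail fast when the app exits before `STARTUP_GRACE_SEC`) can be sketched in Python. This is an illustrative stand-in for the shell logic in `scripts/run_overnight.sh`, not the script itself:

```python
import subprocess
import time


def start_with_grace(cmd: list[str], grace_sec: float = 1.0) -> subprocess.Popen:
    """Launch cmd; raise if it has already exited within the grace period."""
    proc = subprocess.Popen(cmd)
    time.sleep(grace_sec)
    if proc.poll() is not None:  # process died during startup
        raise RuntimeError(f"startup failed: exit code {proc.returncode}")
    return proc
```

Launching `false` (which exits immediately) trips the check, mirroring the `"startup failed:"` assertion above; a long-running command survives it.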
@@ -103,6 +103,33 @@ class TestSmartVolatilityScanner:
         assert candidates[0].stock_code == "005930"
         assert candidates[0].signal == "oversold"

+    @pytest.mark.asyncio
+    async def test_scan_domestic_passes_session_id_to_rankings(
+        self, scanner: SmartVolatilityScanner, mock_broker: MagicMock
+    ) -> None:
+        fluctuation_rows = [
+            {
+                "stock_code": "005930",
+                "name": "Samsung",
+                "price": 70000,
+                "volume": 5000000,
+                "change_rate": 1.0,
+                "volume_increase_rate": 120,
+            },
+        ]
+        mock_broker.fetch_market_rankings.side_effect = [fluctuation_rows, fluctuation_rows]
+        mock_broker.get_daily_prices.return_value = [
+            {"open": 1, "high": 71000, "low": 69000, "close": 70000, "volume": 1000000},
+            {"open": 1, "high": 70000, "low": 68000, "close": 69000, "volume": 900000},
+        ]
+
+        await scanner.scan(domestic_session_id="NXT_PRE")
+
+        first_call = mock_broker.fetch_market_rankings.call_args_list[0]
+        second_call = mock_broker.fetch_market_rankings.call_args_list[1]
+        assert first_call.kwargs["session_id"] == "NXT_PRE"
+        assert second_call.kwargs["session_id"] == "NXT_PRE"
+
     @pytest.mark.asyncio
     async def test_scan_domestic_finds_momentum_candidate(
         self, scanner: SmartVolatilityScanner, mock_broker: MagicMock
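The session-id assertions above rely on `unittest.mock` call inspection. A standalone illustration of that pattern (the broker and its method here are stand-in mocks, not the project's broker API):

```python
from unittest.mock import MagicMock

broker = MagicMock()
# Simulate two ranking fetches, as the scanner does per scan pass.
broker.fetch_market_rankings(ranking="fluctuation", session_id="NXT_PRE")
broker.fetch_market_rankings(ranking="volume", session_id="NXT_PRE")

# call_args_list records every invocation; .kwargs exposes keyword arguments.
calls = broker.fetch_market_rankings.call_args_list
assert all(call.kwargs["session_id"] == "NXT_PRE" for call in calls)
```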
@@ -22,17 +22,18 @@ def test_take_profit_exit_for_backward_compatibility() -> None:
     assert out.reason == "arm_take_profit"


-def test_model_assist_exit_signal() -> None:
+def test_model_assist_signal_promotes_be_lock_without_direct_exit() -> None:
     out = evaluate_exit(
-        current_state=PositionState.ARMED,
-        config=ExitRuleConfig(model_prob_threshold=0.62, arm_pct=10.0),
+        current_state=PositionState.HOLDING,
+        config=ExitRuleConfig(model_prob_threshold=0.62, be_arm_pct=1.2, arm_pct=10.0),
         inp=ExitRuleInput(
-            current_price=101.0,
+            current_price=100.5,
             entry_price=100.0,
             peak_price=105.0,
             pred_down_prob=0.8,
             liquidity_weak=True,
         ),
     )
-    assert out.should_exit is True
-    assert out.reason == "model_liquidity_exit"
+    assert out.should_exit is False
+    assert out.state == PositionState.BE_LOCK
+    assert out.reason == "model_assist_be_lock"
@@ -28,3 +28,29 @@ def test_exited_has_priority_over_promotion() -> None:
         ),
     )
     assert state == PositionState.EXITED
+
+
+def test_model_signal_promotes_be_lock_as_assist() -> None:
+    state = promote_state(
+        PositionState.HOLDING,
+        StateTransitionInput(
+            unrealized_pnl_pct=0.5,
+            be_arm_pct=1.2,
+            arm_pct=2.8,
+            model_exit_signal=True,
+        ),
+    )
+    assert state == PositionState.BE_LOCK
+
+
+def test_model_signal_does_not_force_exit_directly() -> None:
+    state = promote_state(
+        PositionState.ARMED,
+        StateTransitionInput(
+            unrealized_pnl_pct=1.0,
+            be_arm_pct=1.2,
+            arm_pct=2.8,
+            model_exit_signal=True,
+        ),
+    )
+    assert state == PositionState.ARMED
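The promotion semantics these tests pin down (a model signal assists the break-even lock but never forces an exit or a demotion) can be sketched as a small state function. This is a hedged reconstruction inferred from the tests, not the project's actual `promote_state`:

```python
from enum import IntEnum


class PositionState(IntEnum):
    HOLDING = 0
    BE_LOCK = 1
    ARMED = 2
    EXITED = 3


def promote_state(
    current: PositionState,
    unrealized_pnl_pct: float,
    be_arm_pct: float,
    arm_pct: float,
    model_exit_signal: bool = False,
) -> PositionState:
    """Promote by PnL thresholds; the model signal only lifts HOLDING to BE_LOCK."""
    if current is PositionState.EXITED:
        return current  # terminal state wins over any promotion
    target = current
    if unrealized_pnl_pct >= arm_pct:
        target = PositionState.ARMED
    elif unrealized_pnl_pct >= be_arm_pct or model_exit_signal:
        target = PositionState.BE_LOCK
    return max(current, target)  # never demote an already-higher state
```

Under this sketch, HOLDING with a model signal lands in BE_LOCK, while ARMED stays ARMED, matching the two tests above.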
@@ -121,3 +121,44 @@ def test_validate_testing_doc_has_dynamic_count_guidance(monkeypatch) -> None:
     monkeypatch.setattr(module, "_read", fake_read)
     module.validate_testing_doc_has_dynamic_count_guidance(errors)
     assert errors == []
+
+
+def test_validate_pr_body_postcheck_guidance_passes(monkeypatch) -> None:
+    module = _load_module()
+    errors: list[str] = []
+    fake_docs = {
+        str(module.REQUIRED_FILES["commands"]): (
+            "PR Body Post-Check (Mandatory)\n"
+            "python3 scripts/validate_pr_body.py --pr <PR_NUMBER>\n"
+        ),
+        str(module.REQUIRED_FILES["workflow"]): (
+            "PR 생성 직후 본문 무결성 검증(필수)\n"
+            "python3 scripts/validate_pr_body.py --pr <PR_NUMBER>\n"
+        ),
+    }
+
+    def fake_read(path: Path) -> str:
+        return fake_docs[str(path)]
+
+    monkeypatch.setattr(module, "_read", fake_read)
+    module.validate_pr_body_postcheck_guidance(errors)
+    assert errors == []
+
+
+def test_validate_pr_body_postcheck_guidance_reports_missing_tokens(
+    monkeypatch,
+) -> None:
+    module = _load_module()
+    errors: list[str] = []
+    fake_docs = {
+        str(module.REQUIRED_FILES["commands"]): "PR Body Post-Check (Mandatory)\n",
+        str(module.REQUIRED_FILES["workflow"]): "PR Body Post-Check\n",
+    }
+
+    def fake_read(path: Path) -> str:
+        return fake_docs[str(path)]
+
+    monkeypatch.setattr(module, "_read", fake_read)
+    module.validate_pr_body_postcheck_guidance(errors)
+    assert any("commands.md" in err for err in errors)
+    assert any("workflow.md" in err for err in errors)
@@ -79,3 +79,42 @@ def test_validate_links_avoids_duplicate_error_for_invalid_plan_link(tmp_path) -

     assert len(errors) == 1
     assert "invalid plan link path" in errors[0]
+
+
+def test_validate_issue_status_consistency_reports_conflicts() -> None:
+    module = _load_module()
+    errors: list[str] = []
+    path = Path("docs/ouroboros/80_implementation_audit.md").resolve()
+    text = "\n".join(
+        [
+            "| REQ-V3-004 | 상태 | 부분 | `#328` 잔여 |",
+            "| 항목 | 상태 | ✅ 완료 | `#328` 머지 |",
+        ]
+    )
+
+    module.validate_issue_status_consistency(path, text, errors)
+
+    assert len(errors) == 1
+    assert "conflicting status for issue #328" in errors[0]
+
+
+def test_validate_issue_status_consistency_allows_done_only() -> None:
+    module = _load_module()
+    errors: list[str] = []
+    path = Path("docs/ouroboros/80_implementation_audit.md").resolve()
+    text = "| 항목 | 상태 | ✅ 완료 | `#371` 머지 |"
+
+    module.validate_issue_status_consistency(path, text, errors)
+
+    assert errors == []
+
+
+def test_validate_issue_status_consistency_allows_pending_only() -> None:
+    module = _load_module()
+    errors: list[str] = []
+    path = Path("docs/ouroboros/80_implementation_audit.md").resolve()
+    text = "| 항목 | 상태 | 부분 | `#390` 추적 이슈 |"
+
+    module.validate_issue_status_consistency(path, text, errors)
+
+    assert errors == []
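The consistency rule these tests describe can be sketched as follows. This is an illustrative reimplementation, assuming audit rows mark status with "완료" (done) or "부분" (partial) and reference issues as `#NNN`; the real validator may differ:

```python
import re
from collections import defaultdict


def find_status_conflicts(text: str) -> list[str]:
    """Report issues marked both done and pending across audit table rows."""
    statuses: dict[str, set[str]] = defaultdict(set)
    for line in text.splitlines():
        for issue in re.findall(r"#(\d+)", line):
            if "완료" in line:  # "done" marker
                statuses[issue].add("done")
            elif "부분" in line:  # "partial" marker
                statuses[issue].add("pending")
    return [
        f"conflicting status for issue #{issue}"
        for issue, seen in sorted(statuses.items())
        if len(seen) > 1
    ]
```

An issue that appears only as done (or only as partial) yields no error; seeing both for the same number produces one conflict message.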
tests/test_validate_pr_body.py (new file, 126 lines)
@@ -0,0 +1,126 @@
+from __future__ import annotations
+
+import importlib.util
+import json
+from pathlib import Path
+from types import SimpleNamespace
+
+import pytest
+
+
+def _load_module():
+    script_path = Path(__file__).resolve().parents[1] / "scripts" / "validate_pr_body.py"
+    spec = importlib.util.spec_from_file_location("validate_pr_body", script_path)
+    assert spec is not None
+    assert spec.loader is not None
+    module = importlib.util.module_from_spec(spec)
+    spec.loader.exec_module(module)
+    return module
+
+
+def test_validate_pr_body_text_detects_escaped_newline() -> None:
+    module = _load_module()
+    errors = module.validate_pr_body_text("## Summary\\n- item")
+    assert any("escaped newline" in err for err in errors)
+
+
+def test_validate_pr_body_text_detects_escaped_newline_in_multiline_body() -> None:
+    module = _load_module()
+    text = "## Summary\n- first line\n- broken line with \\n literal"
+    errors = module.validate_pr_body_text(text)
+    assert any("escaped newline" in err for err in errors)
+
+
+def test_validate_pr_body_text_allows_escaped_newline_in_code_blocks() -> None:
+    module = _load_module()
+    text = "\n".join(
+        [
+            "## Summary",
+            "- example uses `\\n` for explanation",
+            "```bash",
+            "printf 'line1\\nline2\\n'",
+            "```",
+        ]
+    )
+    assert module.validate_pr_body_text(text) == []
+
+
+def test_validate_pr_body_text_detects_unbalanced_code_fence() -> None:
+    module = _load_module()
+    errors = module.validate_pr_body_text("## Summary\n- item\n```bash\necho hi\n")
+    assert any("unbalanced fenced code blocks" in err for err in errors)
+
+
+def test_validate_pr_body_text_detects_missing_structure() -> None:
+    module = _load_module()
+    errors = module.validate_pr_body_text("plain text only")
+    assert any("missing markdown section headers" in err for err in errors)
+    assert any("missing markdown list items" in err for err in errors)
+
+
+def test_validate_pr_body_text_passes_with_valid_markdown() -> None:
+    module = _load_module()
+    text = "\n".join(
+        [
+            "## Summary",
+            "- item",
+            "",
+            "## Validation",
+            "```bash",
+            "pytest -q",
+            "```",
+        ]
+    )
+    assert module.validate_pr_body_text(text) == []
+
+
+def test_fetch_pr_body_reads_body_from_tea_api(monkeypatch) -> None:
+    module = _load_module()
+
+    def fake_run(cmd, check, capture_output, text):  # noqa: ANN001
+        assert cmd[0] == "/tmp/tea-bin"
+        assert check is True
+        assert capture_output is True
+        assert text is True
+        return SimpleNamespace(stdout=json.dumps({"body": "## Summary\n- item"}))
+
+    monkeypatch.setattr(module, "resolve_tea_binary", lambda: "/tmp/tea-bin")
+    monkeypatch.setattr(module.subprocess, "run", fake_run)
+    assert module.fetch_pr_body(391) == "## Summary\n- item"
+
+
+def test_fetch_pr_body_rejects_non_string_body(monkeypatch) -> None:
+    module = _load_module()
+
+    def fake_run(cmd, check, capture_output, text):  # noqa: ANN001
+        return SimpleNamespace(stdout=json.dumps({"body": 123}))
+
+    monkeypatch.setattr(module, "resolve_tea_binary", lambda: "/tmp/tea-bin")
+    monkeypatch.setattr(module.subprocess, "run", fake_run)
+    with pytest.raises(RuntimeError):
+        module.fetch_pr_body(391)
+
+
+def test_resolve_tea_binary_falls_back_to_home_bin(monkeypatch, tmp_path) -> None:
+    module = _load_module()
+    tea_home = tmp_path / "bin" / "tea"
+    tea_home.parent.mkdir(parents=True)
+    tea_home.write_text("#!/usr/bin/env bash\n", encoding="utf-8")
+    tea_home.chmod(0o755)
+
+    monkeypatch.setattr(module.shutil, "which", lambda _: None)
+    monkeypatch.setattr(module.Path, "home", lambda: tmp_path)
+    assert module.resolve_tea_binary() == str(tea_home)
+
+
+def test_resolve_tea_binary_rejects_non_executable_home_bin(monkeypatch, tmp_path) -> None:
+    module = _load_module()
+    tea_home = tmp_path / "bin" / "tea"
+    tea_home.parent.mkdir(parents=True)
+    tea_home.write_text("not executable\n", encoding="utf-8")
+    tea_home.chmod(0o644)
+
+    monkeypatch.setattr(module.shutil, "which", lambda _: None)
+    monkeypatch.setattr(module.Path, "home", lambda: tmp_path)
+    with pytest.raises(RuntimeError):
+        module.resolve_tea_binary()
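The fence-aware escaped-newline check exercised by these tests can be sketched like this; an illustrative version, not the actual `validate_pr_body_text`:

```python
import re


def find_escaped_newlines(body: str) -> list[str]:
    """Flag literal '\\n' outside fenced code blocks and inline code spans."""
    errors: list[str] = []
    in_fence = False
    for lineno, line in enumerate(body.splitlines(), start=1):
        if line.strip().startswith("```"):
            in_fence = not in_fence  # toggle on every fence marker
            continue
        if in_fence:
            continue  # code blocks may legitimately contain '\n'
        # Drop inline code spans first, so a backticked `\n` passes.
        stripped = re.sub(r"`[^`]*`", "", line)
        if "\\n" in stripped:
            errors.append(f"line {lineno}: escaped newline")
    return errors
```

A body written with literal `\n` instead of real line breaks is flagged, while `\n` inside fences or backticks is left alone, matching the pass/fail cases above.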
@@ -105,3 +105,43 @@
 - next_ticket: #368
 - process_gate_checked: process_ticket=#306,#308 merged_to_feature_branch=yes
 - risks_or_notes: Close the TASK-V2-012 implementation gap by landing the cost guard, the execution-adjusted fold metric, and regression tests together.
+
+### 2026-03-02 | session=codex-v3-stream-next-ticket-369
+- branch: feature/v3-session-policy-stream
+- docs_checked: docs/workflow.md, docs/commands.md, docs/agent-constraints.md
+- open_issues_reviewed: #369, #370, #371, #374, #375, #376, #377, #381
+- next_ticket: #369
+- process_gate_checked: process_ticket=#306,#308 merged_to_feature_branch=yes
+- risks_or_notes: Implementation tickets update code, tests, and docs (requirements ledger, implementation audit, PR traceability) together as the default rule.
+
+### 2026-03-02 | session=codex-issue369-start
+- branch: feature/issue-369-model-exit-signal-spec-sync
+- docs_checked: docs/workflow.md, docs/commands.md, docs/agent-constraints.md
+- open_issues_reviewed: #369
+- next_ticket: #369
+- process_gate_checked: process_ticket=#306,#308 merged_to_feature_branch=yes
+- risks_or_notes: Per the v2 spec, realign model_exit_signal as an assist trigger rather than a direct exit trigger, and sync tests and docs accordingly.
+
+### 2026-03-02 | session=codex-v3-stream-next-ticket-377
+- branch: feature/v3-session-policy-stream
+- docs_checked: docs/workflow.md, docs/commands.md, docs/agent-constraints.md
+- open_issues_reviewed: #377, #370, #371, #375, #376, #381
+- next_ticket: #377
+- process_gate_checked: process_ticket=#306,#308 merged_to_feature_branch=yes
+- risks_or_notes: Reflect the kill switch refresh retry policy (count, interval, abort conditions) in code, tests, the requirements ledger, and the audit docs at the same time.
+
+### 2026-03-02 | session=codex-issue377-start
+- branch: feature/issue-377-kill-switch-refresh-retry
+- docs_checked: docs/workflow.md, docs/commands.md, docs/agent-constraints.md
+- open_issues_reviewed: #377
+- next_ticket: #377
+- process_gate_checked: process_ticket=#306,#308 merged_to_feature_branch=yes
+- risks_or_notes: Retry the refresh step up to 3 attempts (initial + 2 retries) with exponential backoff on failure; stop immediately on success, and on exhaustion record the error and continue with the remaining steps.
+
+### 2026-03-04 | session=codex-issue409-start
+- branch: feature/issue-409-kr-session-exchange-routing
+- docs_checked: docs/workflow.md, docs/commands.md, docs/agent-constraints.md
+- open_issues_reviewed: #409, #318, #325
+- next_ticket: #409
+- process_gate_checked: process_ticket=#306,#308 merged_to_feature_branch=yes
+- risks_or_notes: After the #409 code fix and verification, restart the program and run 24h runtime monitoring; file separate issues for any monitor anomalies.