Compare commits

2 Commits: main...efa43e2c97

| Author | SHA1 | Date |
|---|---|---|
|  | efa43e2c97 |  |
|  | b708e8b4ed |  |

CLAUDE.md (+190)

@@ -1,9 +1,187 @@
-# Agent Entry Point
-
-This file moved to [agents.md](./agents.md).
-
-Follow `agents.md` as the single source of truth for Claude/Codex session behavior and project workflow gates.
-
-Core process references:
-- [Workflow Guide](docs/workflow.md)
-- [Command Reference](docs/commands.md)

# The Ouroboros

AI-powered trading agent for global stock markets with self-evolution capabilities.

## Quick Start

```bash
# Setup
pip install -e ".[dev]"
cp .env.example .env
# Edit .env with your KIS and Gemini API credentials

# Test
pytest -v --cov=src

# Run (paper trading)
python -m src.main --mode=paper

# Run with dashboard
python -m src.main --mode=paper --dashboard
```
## Telegram Notifications (Optional)

Get real-time alerts for trades, circuit breakers, and system events via Telegram.

### Quick Setup

1. **Create bot**: Message [@BotFather](https://t.me/BotFather) on Telegram → `/newbot`
2. **Get chat ID**: Message [@userinfobot](https://t.me/userinfobot) → `/start`
3. **Configure**: Add to `.env`:

   ```bash
   TELEGRAM_BOT_TOKEN=1234567890:ABCdefGHIjklMNOpqrsTUVwxyz
   TELEGRAM_CHAT_ID=123456789
   TELEGRAM_ENABLED=true
   ```

4. **Test**: Start a bot conversation (`/start`), then run the agent
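The agent's notifier presumably wraps the public Telegram Bot API. As an illustration (the helper name and message text are invented here, not the project's API in `src/notifications/`), the `.env` values above map onto a `sendMessage` request like this:

```python
# Hypothetical helper: composes a Telegram Bot API sendMessage request
# from the configured token and chat id. The request itself is not sent.
def build_send_message(token: str, chat_id: str, text: str) -> tuple[str, dict]:
    url = f"https://api.telegram.org/bot{token}/sendMessage"
    payload = {"chat_id": chat_id, "text": text}
    return url, payload

url, payload = build_send_message("1234567890:ABC...", "123456789", "BUY 005930 (confidence 85)")
# url ends with "/sendMessage"; payload carries chat_id and text
```

A real notifier would POST `payload` to `url` (for example with `requests.post`) and check the `ok` field of the JSON response.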
**Full documentation**: [src/notifications/README.md](src/notifications/README.md)

### What You'll Get

- 🟢 Trade execution alerts (BUY/SELL with confidence)
- 🚨 Circuit breaker trips (automatic trading halt)
- ⚠️ Fat-finger rejections (oversized orders blocked)
- ℹ️ Market open/close notifications
- 📝 System startup/shutdown status

### Interactive Commands

With `TELEGRAM_COMMANDS_ENABLED=true` (the default), the bot supports 9 bidirectional commands: `/help`, `/status`, `/positions`, `/report`, `/scenarios`, `/review`, `/dashboard`, `/stop`, `/resume`.

**Fail-safe**: Notifications never crash the trading system. Missing credentials or API errors are logged, and trading continues normally.
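The fail-safe rule can be sketched as a wrapper that swallows notifier errors so the trading loop keeps running (a minimal illustration; function names are not the project's real API):

```python
import logging

logger = logging.getLogger("notifications")

# Hypothetical fail-safe wrapper: any notifier error is logged and swallowed
# so the trading loop is never interrupted by a notification failure.
def notify_safe(send, message: str) -> bool:
    try:
        send(message)
        return True
    except Exception as exc:  # deliberately broad: notifications must not raise
        logger.warning("notification failed, trading continues: %s", exc)
        return False

def broken_sender(message: str) -> None:
    raise ConnectionError("Telegram API unreachable")

ok = notify_safe(broken_sender, "circuit breaker tripped")
# ok is False; no exception propagates to the caller
```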
## Smart Volatility Scanner (Optional)

Python-first filtering pipeline that reduces Gemini API calls by pre-filtering stocks using technical indicators.

### How It Works

1. **Fetch Rankings** — KIS API volume surge rankings (top 30 stocks)
2. **Python Filter** — RSI + volume ratio calculations (no AI)
   - Volume > 200% of previous day
   - RSI(14) < 30 (oversold) OR RSI(14) > 70 (momentum)
3. **AI Judgment** — Only qualified candidates (1-3 stocks) sent to Gemini
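The filter step above can be sketched as follows (an illustration under the stated rules, not the code in `src/analysis/`; a simple-average RSI(14) is used here rather than Wilder smoothing):

```python
# Simple-average RSI over the last `period` closes.
def rsi(closes: list[float], period: int = 14) -> float:
    gains = losses = 0.0
    for prev, cur in zip(closes[-period - 1:-1], closes[-period:]):
        change = cur - prev
        gains += max(change, 0.0)
        losses += max(-change, 0.0)
    if losses == 0.0:
        return 100.0
    rs = gains / losses
    return 100.0 - 100.0 / (1.0 + rs)

def qualifies(closes: list[float], volume: float, prev_volume: float) -> bool:
    if volume <= 2.0 * prev_volume:   # Volume > 200% of previous day
        return False
    r = rsi(closes)
    return r < 30.0 or r > 70.0       # oversold OR momentum

rising = [100.0 + i for i in range(15)]  # 15 closes, strictly rising -> RSI 100
print(qualifies(rising, volume=3_000_000, prev_volume=1_000_000))  # True
```

Only names that pass `qualifies` would then be forwarded to Gemini, capped at the configured top N.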
### Configuration

Add to `.env` (optional, has sensible defaults):

```bash
RSI_OVERSOLD_THRESHOLD=30   # 0-50, default 30
RSI_MOMENTUM_THRESHOLD=70   # 50-100, default 70
VOL_MULTIPLIER=2.0          # Volume threshold (2.0 = 200%)
SCANNER_TOP_N=3             # Max candidates per scan
```
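Reading these settings with their defaults and documented ranges could look like this (an assumed sketch; the project's real loader is `src/config.py`, and the key names come from the block above):

```python
# Parse scanner settings from an env-style mapping, applying the
# documented defaults and validating the documented ranges.
def load_scanner_config(env: dict[str, str]) -> dict:
    cfg = {
        "rsi_oversold": float(env.get("RSI_OVERSOLD_THRESHOLD", "30")),
        "rsi_momentum": float(env.get("RSI_MOMENTUM_THRESHOLD", "70")),
        "vol_multiplier": float(env.get("VOL_MULTIPLIER", "2.0")),
        "top_n": int(env.get("SCANNER_TOP_N", "3")),
    }
    if not 0 <= cfg["rsi_oversold"] <= 50:
        raise ValueError("RSI_OVERSOLD_THRESHOLD must be in 0-50")
    if not 50 <= cfg["rsi_momentum"] <= 100:
        raise ValueError("RSI_MOMENTUM_THRESHOLD must be in 50-100")
    return cfg

print(load_scanner_config({}))  # defaults: 30 / 70 / 2.0 / 3
```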
### Benefits

- **Reduces API costs** — Process 1-3 stocks instead of 20-30
- **Python-based filtering** — Fast technical analysis before AI
- **Evolution-ready** — Selection context logged for strategy optimization
- **Fault-tolerant** — Falls back to static watchlist on API failure

### Trading Mode Integration

Smart Scanner runs in both `TRADE_MODE=realtime` and `daily` paths. On API failure, domestic stocks fall back to a static watchlist; overseas stocks fall back to a dynamic universe (active positions, recent holdings).
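The fallback rule can be sketched as a try/except around the rankings call (names and watchlist codes here are illustrative, not the project's):

```python
STATIC_WATCHLIST = ["005930", "000660", "035420"]  # hypothetical domestic codes

# On any KIS API failure, degrade to the static watchlist instead of halting.
def scan_candidates(fetch_rankings, static_watchlist=STATIC_WATCHLIST) -> list[str]:
    try:
        return fetch_rankings()
    except Exception:
        return list(static_watchlist)

def failing_api() -> list[str]:
    raise TimeoutError("KIS rankings unavailable")

print(scan_candidates(failing_api))  # falls back to the static watchlist
```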
## Documentation

- **[Documentation Hub](docs/README.md)** — Top-level doc routing and reading order
- **[Workflow Guide](docs/workflow.md)** — Git workflow policy and agent-based development
- **[Command Reference](docs/commands.md)** — Common failures, build commands, troubleshooting
- **[Architecture](docs/architecture.md)** — System design, components, data flow
- **[Context Tree](docs/context-tree.md)** — L1-L7 hierarchical memory system
- **[Testing](docs/testing.md)** — Test structure, coverage requirements, writing tests
- **[Agent Policies](docs/agents.md)** — Prime directives, constraints, prohibited actions
- **[Requirements Log](docs/requirements-log.md)** — User requirements and feedback tracking
- **[Live Trading Checklist](docs/live-trading-checklist.md)** — Paper-to-live transition checklist
## Core Principles

1. **Safety First** — Risk manager is READ-ONLY and enforces circuit breakers
2. **Test Everything** — 80% coverage minimum; all changes require tests
3. **Issue-Driven Development** — All work goes through Gitea issues → feature branches → PRs
4. **Agent Specialization** — Use dedicated agents for design, coding, testing, docs, review
## Requirements Management

User requirements and feedback are tracked in [docs/requirements-log.md](docs/requirements-log.md):

- New requirements are added chronologically with dates
- Code changes should reference related requirements
- Keeps project evolution aligned with user needs
- Preserves context across conversations and development cycles
## Project Structure

```
src/
├── analysis/        # Technical analysis (RSI, volatility, smart scanner)
├── backup/          # Disaster recovery (scheduler, cloud storage, health)
├── brain/           # Gemini AI decision engine (prompt optimizer, context selector)
├── broker/          # KIS API client (domestic + overseas)
├── context/         # L1-L7 hierarchical memory system
├── core/            # Risk manager (READ-ONLY)
├── dashboard/       # FastAPI read-only monitoring (10 API endpoints)
├── data/            # External data integration (news, market data, calendar)
├── evolution/       # Self-improvement (optimizer, daily review, scorecard)
├── logging/         # Decision logger (audit trail)
├── markets/         # Market schedules and timezone handling
├── notifications/   # Telegram alerts + bidirectional commands (9 commands)
├── strategy/        # Pre-market planner, scenario engine, playbook store
├── db.py            # SQLite trade logging
├── main.py          # Trading loop orchestrator
└── config.py        # Settings (from .env)

tests/               # 998 tests across 41 files
docs/                # Extended documentation
```
## Key Commands

```bash
pytest -v --cov=src                           # Run tests with coverage
ruff check src/ tests/                        # Lint
mypy src/ --strict                            # Type check

python -m src.main --mode=paper               # Paper trading
python -m src.main --mode=paper --dashboard   # With dashboard
python -m src.main --mode=live                # Live trading (⚠️ real money)

# Gitea workflow (requires tea CLI)
YES="" ~/bin/tea issues create --repo jihoson/The-Ouroboros --title "..." --description "..."
YES="" ~/bin/tea pulls create --head feature-branch --base main --title "..." --description "..."
```
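If an agent needs to compose the tea invocation programmatically, the argv layout can be built as a list (an illustration only; the flags are the ones shown above, while the title and description values are placeholders):

```python
import shlex

# Build the argv for `tea issues create` with the documented flags.
def tea_issue_cmd(repo: str, title: str, description: str) -> list[str]:
    return [
        "tea", "issues", "create",
        "--repo", repo,
        "--title", title,
        "--description", description,
    ]

cmd = tea_issue_cmd("jihoson/The-Ouroboros", "Fix scanner fallback", "Details ...")
print(shlex.join(cmd))  # display form; run with subprocess.run(cmd, env={**os.environ, "YES": ""})
```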
## Markets Supported

- 🇰🇷 Korea (KRX)
- 🇺🇸 United States (NASDAQ, NYSE, AMEX)
- 🇯🇵 Japan (TSE)
- 🇭🇰 Hong Kong (SEHK)
- 🇨🇳 China (Shanghai, Shenzhen)
- 🇻🇳 Vietnam (Hanoi, HCM)

Markets are auto-detected based on timezone and enabled via the `ENABLED_MARKETS` env variable.
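Parsing `ENABLED_MARKETS` could look like this (an assumed sketch: the comma-separated format and the market codes are assumptions, not confirmed by this document):

```python
# Split a comma-separated market list into a normalized set of codes.
def parse_enabled_markets(raw: str) -> set[str]:
    return {token.strip().upper() for token in raw.split(",") if token.strip()}

enabled = parse_enabled_markets("KRX, NASDAQ, TSE")
print(sorted(enabled))  # ['KRX', 'NASDAQ', 'TSE']
```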
## Critical Constraints

⚠️ **Non-Negotiable Rules** (see [docs/agents.md](docs/agents.md)):

- `src/core/risk_manager.py` is **READ-ONLY** — changes require human approval
- Circuit breaker at -3.0% P&L — may only be made **stricter**
- Fat-finger protection: max 30% of cash per order — always enforced
- Confidence thresholds (per market_outlook, may not be lowered): BEARISH ≥ 90, NEUTRAL/default ≥ 80, BULLISH ≥ 75
- All code changes → corresponding tests → coverage ≥ 80%
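The two numeric gates above can be sketched as pure checks (floor values are from this document; the function names are illustrative, not the risk manager's real API):

```python
# Per-outlook confidence floors; unknown outlooks use the NEUTRAL/default floor.
CONFIDENCE_FLOORS = {"BEARISH": 90, "NEUTRAL": 80, "BULLISH": 75}

def confidence_ok(outlook: str, confidence: float) -> bool:
    return confidence >= CONFIDENCE_FLOORS.get(outlook, 80)

# Fat-finger gate: an order may use at most 30% of available cash.
def fat_finger_ok(order_value: float, cash: float) -> bool:
    return order_value <= 0.30 * cash

print(confidence_ok("BEARISH", 85))      # False: below the 90 floor
print(fat_finger_ok(3_500.0, 10_000.0))  # False: 35% of cash
```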
## Contributing

See [docs/workflow.md](docs/workflow.md) for the complete development process.

**TL;DR:**

1. Create issue in Gitea
2. Create feature branch: `feature/issue-N-description`
3. Implement with tests
4. Open PR
5. Merge after review
agents.md (-199)

@@ -1,199 +0,0 @@
# The Ouroboros

AI-powered trading agent for global stock markets with self-evolution capabilities.

## Agent Workflow Gate (Claude/Codex)

Before any implementation, both Claude and Codex must align on the same project process:

1. Read `docs/workflow.md` first (branch policy, issue/PR flow, merge rules).
2. Read `docs/commands.md` for required verification commands and failure handling.
3. Read `docs/agent-constraints.md` and `docs/agents.md` for safety constraints.
4. Check `workflow/session-handover.md` and append a session entry when starting or handing off work.
5. Confirm the current branch is based on `main` or an explicitly designated temporary/base branch before editing.

If any instruction conflicts, default to the safer path and document the reason in the handover log.
Binary file not shown.
@@ -8,32 +8,8 @@ CHECK_INTERVAL="${CHECK_INTERVAL:-30}"
 TMUX_AUTO="${TMUX_AUTO:-true}"
 TMUX_ATTACH="${TMUX_ATTACH:-true}"
 TMUX_SESSION_PREFIX="${TMUX_SESSION_PREFIX:-ouroboros_overnight}"
-STARTUP_GRACE_SEC="${STARTUP_GRACE_SEC:-3}"
-dashboard_port="${DASHBOARD_PORT:-8080}"
-APP_CMD_BIN="${APP_CMD_BIN:-}"
-APP_CMD_ARGS="${APP_CMD_ARGS:-}"
-RUNS_DASHBOARD="false"
-
-# Custom override contract:
-# 1) Preferred: APP_CMD_BIN + APP_CMD_ARGS
-#    - APP_CMD_BIN is treated as a single executable token.
-#    - APP_CMD_ARGS uses shell-style word splitting; quote/escape inside this
-#      variable is NOT preserved as a nested shell parse.
-# 2) Legacy fallback: APP_CMD (raw shell command string)
-#    - This path remains for backward compatibility.
-#    - When APP_CMD includes --dashboard, caller should include explicit
-#      DASHBOARD_PORT assignment in APP_CMD if non-default port is required.
-
-if [ -n "$APP_CMD_BIN" ]; then
-  USE_DEFAULT_APP_CMD="false"
-  USE_SAFE_CUSTOM_APP_CMD="true"
-  APP_CMD="${APP_CMD_BIN} ${APP_CMD_ARGS}"
-  if [[ " $APP_CMD_ARGS " == *" --dashboard "* ]]; then
-    RUNS_DASHBOARD="true"
-  fi
-elif [ -z "${APP_CMD:-}" ]; then
-  USE_DEFAULT_APP_CMD="true"
-  USE_SAFE_CUSTOM_APP_CMD="false"
+if [ -z "${APP_CMD:-}" ]; then
   if [ -x ".venv/bin/python" ]; then
     PYTHON_BIN=".venv/bin/python"
   elif command -v python3 >/dev/null 2>&1; then
@@ -45,14 +21,9 @@ elif [ -z "${APP_CMD:-}" ]; then
     exit 1
   fi
-  APP_CMD="$PYTHON_BIN -m src.main --mode=live --dashboard"
-  RUNS_DASHBOARD="true"
-else
-  USE_DEFAULT_APP_CMD="false"
-  USE_SAFE_CUSTOM_APP_CMD="false"
-  if [[ "$APP_CMD" == *"--dashboard"* ]]; then
-    RUNS_DASHBOARD="true"
-  fi
+  dashboard_port="${DASHBOARD_PORT:-8080}"
+  APP_CMD="DASHBOARD_PORT=$dashboard_port $PYTHON_BIN -m src.main --mode=live --dashboard"
 fi

 mkdir -p "$LOG_DIR"
@@ -63,24 +34,6 @@ WATCHDOG_LOG="$LOG_DIR/watchdog_${timestamp}.log"
 PID_FILE="$LOG_DIR/app.pid"
 WATCHDOG_PID_FILE="$LOG_DIR/watchdog.pid"
-
-is_port_in_use() {
-  local port="$1"
-  if command -v ss >/dev/null 2>&1; then
-    ss -ltn 2>/dev/null | grep -Eq ":${port}[[:space:]]"
-    return $?
-  fi
-  if command -v lsof >/dev/null 2>&1; then
-    lsof -nP -iTCP:"$port" -sTCP:LISTEN >/dev/null 2>&1
-    return $?
-  fi
-  if command -v netstat >/dev/null 2>&1; then
-    netstat -ltn 2>/dev/null | grep -Eq "[:.]${port}[[:space:]]"
-    return $?
-  fi
-  # No supported socket inspection command found.
-  return 1
-}

 if [ -f "$PID_FILE" ]; then
   old_pid="$(cat "$PID_FILE" || true)"
   if [ -n "$old_pid" ] && kill -0 "$old_pid" 2>/dev/null; then
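The removed `is_port_in_use` helper probes for a listener via `ss`, `lsof`, or `netstat`. The same check can be sketched in Python (an illustration, not project code; it assumes TCP over IPv4 on localhost) with a plain socket connect:

```python
import socket

# True if something is accepting TCP connections on the given port:
# connect_ex returns 0 when the connection succeeds.
def is_port_in_use(port: int, host: str = "127.0.0.1") -> bool:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(0.5)
        return s.connect_ex((host, port)) == 0

# Listen on an ephemeral port, then probe it.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
    srv.bind(("127.0.0.1", 0))
    srv.listen(1)
    port = srv.getsockname()[1]
    print(is_port_in_use(port))  # True while srv is listening
```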
@@ -90,29 +43,7 @@ if [ -f "$PID_FILE" ]; then
 fi

 echo "[$(date -u +"%Y-%m-%dT%H:%M:%SZ")] starting: $APP_CMD" | tee -a "$RUN_LOG"
-if [ "$RUNS_DASHBOARD" = "true" ] && is_port_in_use "$dashboard_port"; then
-  echo "[$(date -u +"%Y-%m-%dT%H:%M:%SZ")] startup failed: dashboard port ${dashboard_port} already in use" | tee -a "$RUN_LOG"
-  exit 1
-fi
-
-if [ "$USE_DEFAULT_APP_CMD" = "true" ]; then
-  # Default path avoids shell word-splitting on executable paths.
-  nohup env DASHBOARD_PORT="$dashboard_port" "$PYTHON_BIN" -m src.main --mode=live --dashboard >>"$RUN_LOG" 2>&1 &
-elif [ "$USE_SAFE_CUSTOM_APP_CMD" = "true" ]; then
-  # Safer custom path: executable path is handled as a single token.
-  if [ -n "$APP_CMD_ARGS" ]; then
-    # shellcheck disable=SC2206
-    app_args=( $APP_CMD_ARGS )
-    nohup env DASHBOARD_PORT="$dashboard_port" "$APP_CMD_BIN" "${app_args[@]}" >>"$RUN_LOG" 2>&1 &
-  else
-    nohup env DASHBOARD_PORT="$dashboard_port" "$APP_CMD_BIN" >>"$RUN_LOG" 2>&1 &
-  fi
-else
-  # Custom APP_CMD is treated as a shell command string.
-  # If executable paths include spaces, they must be quoted inside APP_CMD.
-  # Legacy compatibility path: caller owns quoting and env var injection.
-  nohup bash -lc "exec env $APP_CMD" >>"$RUN_LOG" 2>&1 &
-fi
+nohup bash -lc "$APP_CMD" >>"$RUN_LOG" 2>&1 &
 app_pid=$!
 echo "$app_pid" > "$PID_FILE"
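The removed override contract warned that `APP_CMD_ARGS` is word-split without preserving quotes. That distinction can be demonstrated in Python (an illustration; `--note` is an invented flag): naive whitespace splitting breaks quoted arguments, while `shlex.split` honors them, which is the behavior the `app_args=( $APP_CMD_ARGS )` path could not provide.

```python
import shlex

# A command string containing a quoted argument with an embedded space.
args = '--mode=live --dashboard --note "overnight run"'

print(args.split())       # naive split tears the quoted argument apart
print(shlex.split(args))  # shlex keeps "overnight run" as one token
```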
@@ -123,20 +54,6 @@ nohup env PID_FILE="$PID_FILE" LOG_FILE="$WATCHDOG_LOG" CHECK_INTERVAL="$CHECK_I
 watchdog_pid=$!
 echo "$watchdog_pid" > "$WATCHDOG_PID_FILE"
-
-sleep "$STARTUP_GRACE_SEC"
-if ! kill -0 "$app_pid" 2>/dev/null; then
-  echo "[$(date -u +"%Y-%m-%dT%H:%M:%SZ")] startup failed: app process exited early (pid=$app_pid)" | tee -a "$RUN_LOG"
-  [ -n "${watchdog_pid:-}" ] && kill "$watchdog_pid" 2>/dev/null || true
-  tail -n 20 "$RUN_LOG" || true
-  exit 1
-fi
-if ! kill -0 "$watchdog_pid" 2>/dev/null; then
-  echo "[$(date -u +"%Y-%m-%dT%H:%M:%SZ")] startup failed: watchdog exited early (pid=$watchdog_pid)" | tee -a "$WATCHDOG_LOG"
-  kill "$app_pid" 2>/dev/null || true
-  tail -n 20 "$WATCHDOG_LOG" || true
-  exit 1
-fi
-
 cat <<EOF
 시작 완료
 - app pid: $app_pid
@@ -7,15 +7,12 @@ ROOT_DIR="${ROOT_DIR:-/home/agentson/repos/The-Ouroboros}"
 LOG_DIR="${LOG_DIR:-$ROOT_DIR/data/overnight}"
 INTERVAL_SEC="${INTERVAL_SEC:-60}"
 MAX_HOURS="${MAX_HOURS:-24}"
-MAX_LOOPS="${MAX_LOOPS:-0}"
 POLICY_TZ="${POLICY_TZ:-Asia/Seoul}"
-DASHBOARD_PORT="${DASHBOARD_PORT:-8080}"

 cd "$ROOT_DIR"

 OUT_LOG="$LOG_DIR/runtime_verify_$(date +%Y%m%d_%H%M%S).log"
 END_TS=$(( $(date +%s) + MAX_HOURS*3600 ))
-loops=0

 log() {
   printf '%s %s\n' "$(date -u +%Y-%m-%dT%H:%M:%SZ)" "$1" | tee -a "$OUT_LOG" >/dev/null
@@ -34,11 +31,6 @@ check_signal() {
   return 1
 }

-find_live_pids() {
-  # Detect live-mode process even when run_overnight pid files are absent.
-  pgrep -af "[s]rc.main --mode=live" 2>/dev/null | awk '{print $1}' | tr '\n' ',' | sed 's/,$//'
-}
-
 check_forbidden() {
   local name="$1"
   local pattern="$2"
@@ -52,94 +44,42 @@ check_forbidden() {
   return 0
 }

-is_port_listening() {
-  local port="$1"
-
-  if command -v ss >/dev/null 2>&1; then
-    ss -ltn 2>/dev/null | grep -Eq ":${port}[[:space:]]"
-    return $?
-  fi
-  if command -v lsof >/dev/null 2>&1; then
-    lsof -nP -iTCP:"$port" -sTCP:LISTEN >/dev/null 2>&1
-    return $?
-  fi
-  if command -v netstat >/dev/null 2>&1; then
-    netstat -ltn 2>/dev/null | grep -Eq "[:.]${port}[[:space:]]"
-    return $?
-  fi
-  return 1
-}
-
 log "[INFO] runtime verify monitor started interval=${INTERVAL_SEC}s max_hours=${MAX_HOURS} policy_tz=${POLICY_TZ}"

 while true; do
-  loops=$((loops + 1))
   now=$(date +%s)
   if [ "$now" -ge "$END_TS" ]; then
     log "[INFO] monitor completed (time window reached)"
     exit 0
   fi
-  if [ "$MAX_LOOPS" -gt 0 ] && [ "$loops" -gt "$MAX_LOOPS" ]; then
-    log "[INFO] monitor completed (max loops reached)"
-    exit 0
-  fi

   latest_run="$(ls -t "$LOG_DIR"/run_*.log 2>/dev/null | head -n1 || true)"
+  if [ -z "$latest_run" ]; then
+    log "[ANOMALY] no run log found"
+    sleep "$INTERVAL_SEC"
+    continue
+  fi

   # Basic liveness hints.
   app_pid="$(cat "$LOG_DIR/app.pid" 2>/dev/null || true)"
   wd_pid="$(cat "$LOG_DIR/watchdog.pid" 2>/dev/null || true)"
-  live_pids="$(find_live_pids)"
   app_alive=0
   wd_alive=0
   port_alive=0
   [ -n "$app_pid" ] && kill -0 "$app_pid" 2>/dev/null && app_alive=1
   [ -n "$wd_pid" ] && kill -0 "$wd_pid" 2>/dev/null && wd_alive=1
-  if [ "$app_alive" -eq 0 ] && [ -n "$live_pids" ]; then
-    app_alive=1
-  fi
-  is_port_listening "$DASHBOARD_PORT" && port_alive=1
-  log "[HEARTBEAT] run_log=${latest_run:-none} app_alive=$app_alive watchdog_alive=$wd_alive port=${DASHBOARD_PORT} alive=$port_alive live_pids=${live_pids:-none}"
-
-  defer_log_checks=0
-  if [ -z "$latest_run" ] && [ "$app_alive" -eq 1 ]; then
-    defer_log_checks=1
-    log "[INFO] run log not yet available; defer log-based coverage checks"
-  fi
-
-  if [ -z "$latest_run" ] && [ "$defer_log_checks" -eq 0 ]; then
-    log "[ANOMALY] no run log found"
-  fi
+  ss -ltnp 2>/dev/null | rg -q ':8080' && port_alive=1
+  log "[HEARTBEAT] run_log=$latest_run app_alive=$app_alive watchdog_alive=$wd_alive port8080=$port_alive"

   # Coverage matrix rows (session paths and policy gate evidence).
   not_observed=0
-  if [ "$app_alive" -eq 1 ]; then
-    log "[COVERAGE] LIVE_MODE=PASS source=process_liveness"
-  else
-    if [ -n "$latest_run" ]; then
-      check_signal "LIVE_MODE" "Mode: live" "$latest_run" || not_observed=$((not_observed+1))
-    else
-      log "[COVERAGE] LIVE_MODE=NOT_OBSERVED reason=no_run_log_no_live_pid"
-      not_observed=$((not_observed+1))
-    fi
-  fi
-  if [ "$defer_log_checks" -eq 1 ]; then
-    for deferred in KR_LOOP NXT_PATH US_PRE_PATH US_DAY_PATH US_AFTER_PATH ORDER_POLICY_SESSION; do
-      log "[COVERAGE] ${deferred}=DEFERRED reason=no_run_log_process_alive"
-    done
-  elif [ -n "$latest_run" ]; then
-    check_signal "KR_LOOP" "Processing market: Korea Exchange" "$latest_run" || not_observed=$((not_observed+1))
-    check_signal "NXT_PATH" "NXT_PRE|NXT_AFTER|session=NXT_" "$latest_run" || not_observed=$((not_observed+1))
-    check_signal "US_PRE_PATH" "US_PRE|session=US_PRE" "$latest_run" || not_observed=$((not_observed+1))
-    check_signal "US_DAY_PATH" "US_DAY|session=US_DAY|Processing market: .*NASDAQ|Processing market: .*NYSE|Processing market: .*AMEX" "$latest_run" || not_observed=$((not_observed+1))
-    check_signal "US_AFTER_PATH" "US_AFTER|session=US_AFTER" "$latest_run" || not_observed=$((not_observed+1))
-    check_signal "ORDER_POLICY_SESSION" "Order policy rejected .*\\[session=" "$latest_run" || not_observed=$((not_observed+1))
-  else
-    for missing in KR_LOOP NXT_PATH US_PRE_PATH US_DAY_PATH US_AFTER_PATH ORDER_POLICY_SESSION; do
-      log "[COVERAGE] ${missing}=NOT_OBSERVED reason=no_run_log"
-      not_observed=$((not_observed+1))
-    done
-  fi
+  check_signal "LIVE_MODE" "Mode: live" "$latest_run" || not_observed=$((not_observed+1))
+  check_signal "KR_LOOP" "Processing market: Korea Exchange" "$latest_run" || not_observed=$((not_observed+1))
+  check_signal "NXT_PATH" "NXT_PRE|NXT_AFTER|session=NXT_" "$latest_run" || not_observed=$((not_observed+1))
+  check_signal "US_PRE_PATH" "US_PRE|session=US_PRE" "$latest_run" || not_observed=$((not_observed+1))
+  check_signal "US_DAY_PATH" "US_DAY|session=US_DAY|Processing market: .*NASDAQ|Processing market: .*NYSE|Processing market: .*AMEX" "$latest_run" || not_observed=$((not_observed+1))
+  check_signal "US_AFTER_PATH" "US_AFTER|session=US_AFTER" "$latest_run" || not_observed=$((not_observed+1))
+  check_signal "ORDER_POLICY_SESSION" "Order policy rejected .*\\[session=" "$latest_run" || not_observed=$((not_observed+1))

   if [ "$not_observed" -gt 0 ]; then
     log "[ANOMALY] coverage_not_observed=$not_observed (treat as FAIL)"
|
||||||
@@ -155,17 +95,11 @@ while true; do
         is_weekend=1
     fi

-    if [ "$defer_log_checks" -eq 1 ]; then
-        log "[FORBIDDEN] WEEKEND_KR_SESSION_ACTIVE=SKIP reason=no_run_log_process_alive"
-    elif [ "$is_weekend" -eq 1 ]; then
+    if [ "$is_weekend" -eq 1 ]; then
         # Weekend policy: KR regular session loop must never appear.
-        if [ -n "$latest_run" ]; then
-            check_forbidden "WEEKEND_KR_SESSION_ACTIVE" \
-                "Market session active: KR|session=KRX_REG|Processing market: Korea Exchange" \
-                "$latest_run" || forbidden_hits=$((forbidden_hits+1))
-        else
-            log "[FORBIDDEN] WEEKEND_KR_SESSION_ACTIVE=SKIP reason=no_run_log"
-        fi
+        check_forbidden "WEEKEND_KR_SESSION_ACTIVE" \
+            "Market session active: KR|session=KRX_REG|Processing market: Korea Exchange" \
+            "$latest_run" || forbidden_hits=$((forbidden_hits+1))
     else
         log "[FORBIDDEN] WEEKEND_KR_SESSION_ACTIVE=SKIP reason=weekday"
     fi
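The `check_signal` and `check_forbidden` helpers are defined earlier in the script and are not part of these hunks. As a rough Python analogue of their apparent semantics (the signatures and grep-style matching here are assumptions, not the script's actual code): a coverage signal passes when its pattern appears in the run log, while a forbidden signal passes when it does not.

```python
import re

def check_signal(name: str, pattern: str, log_text: str) -> bool:
    """Coverage check: the extended-regex pattern must appear in the log."""
    return re.search(pattern, log_text, re.MULTILINE) is not None

def check_forbidden(name: str, pattern: str, log_text: str) -> bool:
    """Policy check: the pattern must NOT appear anywhere in the log."""
    return re.search(pattern, log_text, re.MULTILINE) is None

log = "2025-01-04 10:00 Processing market: Korea Exchange\n"
print(check_signal("KR_LOOP", "Processing market: Korea Exchange", log))
print(check_forbidden("WEEKEND_KR_SESSION_ACTIVE",
                      "session=KRX_REG|Processing market: Korea Exchange", log))
```

Note how the `|` alternation inside the forbidden pattern works the same way in Python's `re` as in `grep -E`, so a weekend run that logs the KR loop would trip the policy check.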
@@ -5,8 +5,6 @@ from __future__ import annotations

 import argparse
 import json
-import os
-import shutil
 import re
 import subprocess
 import sys
@@ -14,31 +12,11 @@ from pathlib import Path

 HEADER_PATTERN = re.compile(r"^##\s+\S+", re.MULTILINE)
 LIST_ITEM_PATTERN = re.compile(r"^\s*(?:-|\*|\d+\.)\s+\S+", re.MULTILINE)
-FENCED_CODE_PATTERN = re.compile(r"```.*?```", re.DOTALL)
-INLINE_CODE_PATTERN = re.compile(r"`[^`]*`")
-
-
-def _strip_code_segments(text: str) -> str:
-    without_fences = FENCED_CODE_PATTERN.sub("", text)
-    return INLINE_CODE_PATTERN.sub("", without_fences)
-
-
-def resolve_tea_binary() -> str:
-    tea_from_path = shutil.which("tea")
-    if tea_from_path:
-        return tea_from_path
-
-    tea_home = Path.home() / "bin" / "tea"
-    if tea_home.exists() and tea_home.is_file() and os.access(tea_home, os.X_OK):
-        return str(tea_home)
-
-    raise RuntimeError("tea binary not found (checked PATH and ~/bin/tea)")


 def validate_pr_body_text(text: str) -> list[str]:
     errors: list[str] = []
-    searchable = _strip_code_segments(text)
-    if "\\n" in searchable:
+    if "\\n" in text and "\n" not in text:
         errors.append("body contains escaped newline sequence (\\n)")
     if text.count("```") % 2 != 0:
         errors.append("body has unbalanced fenced code blocks (``` count is odd)")
@@ -50,11 +28,10 @@ def validate_pr_body_text(text: str) -> list[str]:


 def fetch_pr_body(pr_number: int) -> str:
-    tea_binary = resolve_tea_binary()
     try:
         completed = subprocess.run(
             [
-                tea_binary,
+                "tea",
                 "api",
                 "-R",
                 "origin",
@@ -64,7 +41,7 @@ def fetch_pr_body(pr_number: int) -> str:
             capture_output=True,
             text=True,
         )
-    except (subprocess.CalledProcessError, FileNotFoundError, PermissionError) as exc:
+    except (subprocess.CalledProcessError, FileNotFoundError) as exc:
         raise RuntimeError(f"failed to fetch PR #{pr_number}: {exc}") from exc

     try:
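The replacement heuristic on the new side is self-contained and easy to check: a body that contains the literal two-character sequence `\n` but no real newline was almost certainly built with an unexpanded escape. A minimal sketch (the function name is illustrative; the source performs this check inline):

```python
def has_escaped_newline_artifact(text: str) -> bool:
    # Mirrors the new-side condition from the diff: flag the literal
    # backslash-n sequence only when the body has no real newlines at all,
    # so code samples that legitimately mention "\n" are not rejected.
    return "\\n" in text and "\n" not in text

print(has_escaped_newline_artifact("line one\\nline two"))       # True: literal \n, no real newline
print(has_escaped_newline_artifact("line one\nmentions `\\n`"))  # False: a real newline is present
```

This trades the old approach (stripping fenced and inline code before searching) for a cheaper whole-body test that cannot misfire on multi-line bodies.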
src/main.py (117 lines changed)

@@ -12,7 +12,6 @@ import json
 import logging
 import signal
 import threading
-from collections.abc import Awaitable, Callable
 from datetime import UTC, datetime
 from typing import Any
@@ -2085,15 +2084,6 @@ async def trading_cycle(
             quantity=quantity,
             price=order_price,
         )
-        if result.get("rt_cd", "0") != "0":
-            order_succeeded = False
-            msg1 = result.get("msg1") or ""
-            logger.warning(
-                "KR order not accepted for %s: rt_cd=%s msg=%s",
-                stock_code,
-                result.get("rt_cd"),
-                msg1,
-            )
     else:
         # For overseas orders, always use limit orders (지정가):
         # - KIS market orders (ORD_DVSN=01) calculate quantity based on upper limit
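Both removed branches key off the KIS REST convention that `rt_cd == "0"` means the order was accepted, with a human-readable reason in `msg1` otherwise. A small sketch of that predicate (the helper name is hypothetical; the source checks the response dict inline):

```python
def order_accepted(result: dict) -> bool:
    # KIS responses report success with rt_cd == "0"; a missing rt_cd is
    # treated as accepted, mirroring result.get("rt_cd", "0") in the
    # removed branch. The rejection reason, when present, lives in msg1.
    return result.get("rt_cd", "0") == "0"

print(order_accepted({"rt_cd": "0"}))                       # True
print(order_accepted({"rt_cd": "1", "msg1": "rejected"}))   # False
```

Removing the check means a non-zero `rt_cd` no longer clears `order_succeeded`, so rejection handling must happen elsewhere (or is being reverted along with its test, removed later in this diff).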
@@ -3303,15 +3293,6 @@ async def run_daily_session(
             quantity=quantity,
             price=order_price,
         )
-        if result.get("rt_cd", "0") != "0":
-            order_succeeded = False
-            daily_msg1 = result.get("msg1") or ""
-            logger.warning(
-                "KR order not accepted for %s: rt_cd=%s msg=%s",
-                stock_code,
-                result.get("rt_cd"),
-                daily_msg1,
-            )
     else:
         # KIS VTS only accepts limit orders; use 0.5% premium for BUY
         if decision.action == "BUY":
@@ -3551,47 +3532,6 @@ def _run_context_scheduler(
     )


-def _has_market_session_transition(
-    market_states: dict[str, str], market_code: str, session_id: str
-) -> bool:
-    """Return True when market session changed (or market has no prior state)."""
-    return market_states.get(market_code) != session_id
-
-
-def _should_rescan_market(
-    *, last_scan: float, now_timestamp: float, rescan_interval: float, session_changed: bool
-) -> bool:
-    """Force rescan on session transition; otherwise follow interval cadence."""
-    return session_changed or (now_timestamp - last_scan >= rescan_interval)
-
-
-async def _run_markets_in_parallel(
-    markets: list[Any], processor: Callable[[Any], Awaitable[None]]
-) -> None:
-    """Run market processors in parallel and fail fast on the first exception."""
-    if not markets:
-        return
-
-    tasks = [asyncio.create_task(processor(market)) for market in markets]
-    done, pending = await asyncio.wait(tasks, return_when=asyncio.FIRST_EXCEPTION)
-
-    first_exc: BaseException | None = None
-    for task in done:
-        exc = task.exception()
-        if exc is not None and first_exc is None:
-            first_exc = exc
-
-    if first_exc is not None:
-        for task in pending:
-            task.cancel()
-        if pending:
-            await asyncio.gather(*pending, return_exceptions=True)
-        raise first_exc
-
-    if pending:
-        await asyncio.gather(*pending)
-
-
 async def _run_evolution_loop(
     evolution_optimizer: EvolutionOptimizer,
     telegram: TelegramClient,
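The removed `_run_markets_in_parallel` helper is self-contained, so it can be exercised standalone. The function body below is taken verbatim from the removed hunk; only the demo driver at the bottom is added:

```python
import asyncio
from collections.abc import Awaitable, Callable
from typing import Any

async def _run_markets_in_parallel(
    markets: list[Any], processor: Callable[[Any], Awaitable[None]]
) -> None:
    """Run market processors in parallel and fail fast on the first exception."""
    if not markets:
        return

    tasks = [asyncio.create_task(processor(market)) for market in markets]
    # FIRST_EXCEPTION returns as soon as any task raises (or all finish).
    done, pending = await asyncio.wait(tasks, return_when=asyncio.FIRST_EXCEPTION)

    first_exc: BaseException | None = None
    for task in done:
        exc = task.exception()
        if exc is not None and first_exc is None:
            first_exc = exc

    if first_exc is not None:
        # Cancel the still-running markets, drain them, then re-raise.
        for task in pending:
            task.cancel()
        if pending:
            await asyncio.gather(*pending, return_exceptions=True)
        raise first_exc

    if pending:
        await asyncio.gather(*pending)

processed: list[str] = []

async def _demo(market: str) -> None:
    await asyncio.sleep(0.01)
    processed.append(market)

asyncio.run(_run_markets_in_parallel(["KR", "US_NASDAQ"], _demo))
print(sorted(processed))  # ['KR', 'US_NASDAQ']
```

The `asyncio.wait(..., return_when=FIRST_EXCEPTION)` pattern is what gives the fail-fast behavior the docstring promises; the rest of this diff reverts the loop to processing markets sequentially instead.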
@@ -4105,7 +4045,7 @@ async def run(settings: Settings) -> None:
     last_scan_time: dict[str, float] = {}

     # Track market open/close state for notifications
-    _market_states: dict[str, str] = {}  # market_code -> session_id
+    _market_states: dict[str, bool] = {}  # market_code -> is_open

     # Trading control events
     shutdown = asyncio.Event()
@@ -4223,8 +4163,8 @@ async def run(settings: Settings) -> None:

         if not open_markets:
             # Notify market close for any markets that were open
-            for market_code, session_id in list(_market_states.items()):
-                if session_id:
+            for market_code, is_open in list(_market_states.items()):
+                if is_open:
                     try:
                         from src.markets.schedule import MARKETS

@@ -4241,7 +4181,7 @@ async def run(settings: Settings) -> None:
                         )
                     except Exception as exc:
                         logger.warning("Market close notification failed: %s", exc)
-                    _market_states.pop(market_code, None)
+                    _market_states[market_code] = False
                     # Clear playbook for closed market (new one generated next open)
                     playbooks.pop(market_code, None)
@@ -4266,9 +4206,10 @@ async def run(settings: Settings) -> None:
             await asyncio.sleep(TRADE_INTERVAL_SECONDS)
             continue

-        async def _process_realtime_market(market: MarketInfo) -> None:
+        # Process each open market
+        for market in open_markets:
             if shutdown.is_set():
-                return
+                break

             session_info = get_session_info(market)
             _session_risk_overrides(market=market, settings=settings)
@@ -4286,16 +4227,13 @@ async def run(settings: Settings) -> None:
                 settings=settings,
             )

-            # Notify on market/session transition (e.g., US_PRE -> US_REG)
-            session_changed = _has_market_session_transition(
-                _market_states, market.code, session_info.session_id
-            )
-            if session_changed:
+            # Notify market open if it just opened
+            if not _market_states.get(market.code, False):
                 try:
                     await telegram.notify_market_open(market.name)
                 except Exception as exc:
                     logger.warning("Market open notification failed: %s", exc)
-            _market_states[market.code] = session_info.session_id
+                _market_states[market.code] = True

             # Check and handle domestic pending (unfilled) limit orders.
             if market.is_domestic:
@@ -4327,12 +4265,7 @@ async def run(settings: Settings) -> None:
             now_timestamp = asyncio.get_event_loop().time()
             last_scan = last_scan_time.get(market.code, 0.0)
             rescan_interval = settings.RESCAN_INTERVAL_SECONDS
-            if _should_rescan_market(
-                last_scan=last_scan,
-                now_timestamp=now_timestamp,
-                rescan_interval=rescan_interval,
-                session_changed=session_changed,
-            ):
+            if now_timestamp - last_scan >= rescan_interval:
                 try:
                     logger.info("Smart Scanner: Scanning %s market", market.name)
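The two rescan predicates differ only in the forced-transition clause. A side-by-side sketch, where `should_rescan_old` mirrors the removed `_should_rescan_market` helper and `should_rescan_new` mirrors the restored inline condition (the function names themselves are illustrative):

```python
def should_rescan_old(*, last_scan: float, now_timestamp: float,
                      rescan_interval: float, session_changed: bool) -> bool:
    # Removed behavior: a session transition (e.g. US_PRE -> US_REG)
    # forces an immediate rescan regardless of the interval.
    return session_changed or (now_timestamp - last_scan >= rescan_interval)

def should_rescan_new(*, last_scan: float, now_timestamp: float,
                      rescan_interval: float) -> bool:
    # Reverted behavior: purely interval-based cadence.
    return now_timestamp - last_scan >= rescan_interval

print(should_rescan_old(last_scan=1000.0, now_timestamp=1050.0,
                        rescan_interval=300.0, session_changed=True))   # True
print(should_rescan_new(last_scan=1000.0, now_timestamp=1050.0,
                        rescan_interval=300.0))                         # False
```

The practical difference shows up right at a session boundary: with the old predicate the scanner refreshed candidates the moment the session changed; with the new one it waits out the remainder of `RESCAN_INTERVAL_SECONDS`.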
@@ -4357,9 +4290,12 @@ async def run(settings: Settings) -> None:
                     )

                     if candidates:
+                        # Use scanner results directly as trading candidates
                         active_stocks[market.code] = smart_scanner.get_stock_codes(
                             candidates
                         )
+
+                        # Store candidates per market for selection context logging
                         scan_candidates[market.code] = {c.stock_code: c for c in candidates}

                         logger.info(
@@ -4369,8 +4305,12 @@ async def run(settings: Settings) -> None:
                             [f"{c.stock_code}(RSI={c.rsi:.0f})" for c in candidates],
                         )

+                        # Get market-local date for playbook keying
                         market_today = datetime.now(market.timezone).date()

+                        # Load or generate playbook (1 Gemini call per market per day)
                         if market.code not in playbooks:
+                            # Try DB first (survives process restart)
                             stored_pb = playbook_store.load(market_today, market.code)
                             if stored_pb is not None:
                                 playbooks[market.code] = stored_pb
@@ -4430,6 +4370,12 @@ async def run(settings: Settings) -> None:
                 except Exception as exc:
                     logger.error("Smart Scanner failed for %s: %s", market.name, exc)

+            # Get active stocks from scanner (dynamic, no static fallback).
+            # Also include currently-held positions so stop-loss /
+            # take-profit can fire even when a holding drops off the
+            # scanner. Broker balance is the source of truth here —
+            # unlike the local DB it reflects actual fills and any
+            # manual trades done outside the bot.
             scanner_codes = active_stocks.get(market.code, [])
             try:
                 if market.is_domestic:
@@ -4460,14 +4406,16 @@ async def run(settings: Settings) -> None:

             if not stock_codes:
                 logger.debug("No active stocks for market %s", market.code)
-                return
+                continue

             logger.info("Processing market: %s (%d stocks)", market.name, len(stock_codes))

+            # Process each stock from scanner results
             for stock_code in stock_codes:
                 if shutdown.is_set():
                     break

+                # Get playbook for this market
                 market_playbook = playbooks.get(
                     market.code,
                     PreMarketPlanner._empty_playbook(
@@ -4475,6 +4423,7 @@ async def run(settings: Settings) -> None:
                     ),
                 )

+                # Retry logic for connection errors
                 for attempt in range(1, MAX_CONNECTION_RETRIES + 1):
                     try:
                         await trading_cycle(
@@ -4494,7 +4443,7 @@ async def run(settings: Settings) -> None:
                             settings,
                             buy_cooldown,
                         )
-                        break
+                        break  # Success — exit retry loop
                     except CircuitBreakerTripped as exc:
                         logger.critical("Circuit breaker tripped — shutting down")
                         try:
@@ -4516,19 +4465,17 @@ async def run(settings: Settings) -> None:
                                 MAX_CONNECTION_RETRIES,
                                 exc,
                             )
-                            await asyncio.sleep(2**attempt)
+                            await asyncio.sleep(2**attempt)  # Exponential backoff
                         else:
                             logger.error(
                                 "Connection error for %s (all retries exhausted): %s",
                                 stock_code,
                                 exc,
                             )
-                            break
+                            break  # Give up on this stock
                     except Exception as exc:
                         logger.exception("Unexpected error for %s: %s", stock_code, exc)
-                        break
+                        break  # Don't retry on unexpected errors

-        await _run_markets_in_parallel(open_markets, _process_realtime_market)
-
         # Log priority queue metrics periodically
         metrics = await priority_queue.get_metrics()
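Both sides keep the `2**attempt` backoff for connection errors. A sketch of the resulting delay schedule (treating `MAX_CONNECTION_RETRIES = 3` as an assumed value; the real constant is defined elsewhere in `src/main.py`):

```python
MAX_CONNECTION_RETRIES = 3  # assumed value for illustration

def backoff_delays(max_retries: int) -> list[float]:
    # Delay slept after each failed attempt, mirroring
    # `await asyncio.sleep(2**attempt)` for attempt = 1..max_retries-1;
    # the final attempt gives up instead of sleeping.
    return [float(2**attempt) for attempt in range(1, max_retries)]

print(backoff_delays(MAX_CONNECTION_RETRIES))  # [2.0, 4.0]
```

So with three attempts a flaky connection costs at most about six seconds of sleep per stock before the loop moves on.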
@@ -207,7 +207,7 @@ def get_open_markets(
             from src.core.order_policy import classify_session_id

             session_id = classify_session_id(market, now)
-            return session_id not in {"KR_OFF", "US_OFF", "US_DAY"}
+            return session_id not in {"KR_OFF", "US_OFF"}
         return is_market_open(market, now)

     open_markets = [
@@ -254,10 +254,10 @@ def get_next_market_open(
     from src.core.order_policy import classify_session_id

     ts = start_utc.astimezone(ZoneInfo("UTC")).replace(second=0, microsecond=0)
-    prev_active = classify_session_id(market, ts) not in {"KR_OFF", "US_OFF", "US_DAY"}
+    prev_active = classify_session_id(market, ts) not in {"KR_OFF", "US_OFF"}
     for _ in range(7 * 24 * 60):
         ts = ts + timedelta(minutes=1)
-        active = classify_session_id(market, ts) not in {"KR_OFF", "US_OFF", "US_DAY"}
+        active = classify_session_id(market, ts) not in {"KR_OFF", "US_OFF"}
         if active and not prev_active:
             return ts
         prev_active = active
@@ -1,6 +1,5 @@
 """Tests for main trading loop integration."""

-import asyncio
 from datetime import UTC, date, datetime
 from typing import Any
 from unittest.mock import ANY, AsyncMock, MagicMock, patch
@@ -35,7 +34,6 @@ from src.main import (
     _extract_held_codes_from_balance,
     _extract_held_qty_from_balance,
     _handle_market_close,
-    _has_market_session_transition,
     _inject_staged_exit_features,
     _maybe_queue_order_intent,
     _resolve_market_setting,
@@ -43,10 +41,8 @@ from src.main import (
     _retry_connection,
     _run_context_scheduler,
     _run_evolution_loop,
-    _run_markets_in_parallel,
     _should_block_overseas_buy_for_fx_buffer,
     _should_force_exit_for_overnight,
-    _should_rescan_market,
     _split_trade_pnl_components,
     _start_dashboard_server,
     _stoploss_cooldown_minutes,
@@ -144,63 +140,6 @@ class TestExtractAvgPriceFromBalance:
         result = _extract_avg_price_from_balance(balance, "AAPL", is_domestic=False)
         assert result == 170.5

-
-class TestRealtimeSessionStateHelpers:
-    """Tests for realtime loop session-transition/rescan helper logic."""
-
-    def test_has_market_session_transition_when_state_missing(self) -> None:
-        states: dict[str, str] = {}
-        assert _has_market_session_transition(states, "US_NASDAQ", "US_REG")
-
-    def test_has_market_session_transition_when_session_changes(self) -> None:
-        states = {"US_NASDAQ": "US_PRE"}
-        assert _has_market_session_transition(states, "US_NASDAQ", "US_REG")
-
-    def test_has_market_session_transition_false_when_same_session(self) -> None:
-        states = {"US_NASDAQ": "US_REG"}
-        assert not _has_market_session_transition(states, "US_NASDAQ", "US_REG")
-
-    def test_should_rescan_market_forces_on_session_transition(self) -> None:
-        assert _should_rescan_market(
-            last_scan=1000.0,
-            now_timestamp=1050.0,
-            rescan_interval=300.0,
-            session_changed=True,
-        )
-
-    def test_should_rescan_market_uses_interval_without_transition(self) -> None:
-        assert not _should_rescan_market(
-            last_scan=1000.0,
-            now_timestamp=1050.0,
-            rescan_interval=300.0,
-            session_changed=False,
-        )
-
-
-class TestMarketParallelRunner:
-    """Tests for market-level parallel processing helper."""
-
-    @pytest.mark.asyncio
-    async def test_run_markets_in_parallel_runs_all_markets(self) -> None:
-        processed: list[str] = []
-
-        async def _processor(market: str) -> None:
-            await asyncio.sleep(0.01)
-            processed.append(market)
-
-        await _run_markets_in_parallel(["KR", "US_NASDAQ", "US_NYSE"], _processor)
-        assert set(processed) == {"KR", "US_NASDAQ", "US_NYSE"}
-
-    @pytest.mark.asyncio
-    async def test_run_markets_in_parallel_propagates_errors(self) -> None:
-        async def _processor(market: str) -> None:
-            if market == "US_NASDAQ":
-                raise RuntimeError("boom")
-            await asyncio.sleep(0.01)
-
-        with pytest.raises(RuntimeError, match="boom"):
-            await _run_markets_in_parallel(["KR", "US_NASDAQ"], _processor)
-
-
     def test_returns_zero_when_field_absent(self) -> None:
         """Returns 0.0 when pchs_avg_pric key is missing entirely."""
         balance = {"output1": [{"pdno": "005930", "ord_psbl_qty": "5"}]}
@@ -974,46 +913,6 @@ class TestTradingCycleTelegramIntegration:
         # Verify notification was attempted
         mock_telegram.notify_trade_execution.assert_called_once()

-    @pytest.mark.asyncio
-    async def test_kr_rejected_order_does_not_notify_or_log_trade(
-        self,
-        mock_broker: MagicMock,
-        mock_overseas_broker: MagicMock,
-        mock_scenario_engine: MagicMock,
-        mock_playbook: DayPlaybook,
-        mock_risk: MagicMock,
-        mock_db: MagicMock,
-        mock_decision_logger: MagicMock,
-        mock_context_store: MagicMock,
-        mock_criticality_assessor: MagicMock,
-        mock_telegram: MagicMock,
-        mock_market: MagicMock,
-    ) -> None:
-        """KR orders rejected by KIS should not trigger success side effects."""
-        mock_broker.send_order = AsyncMock(
-            return_value={"rt_cd": "1", "msg1": "장운영시간이 아닙니다."}
-        )
-
-        with patch("src.main.log_trade") as mock_log_trade:
-            await trading_cycle(
-                broker=mock_broker,
-                overseas_broker=mock_overseas_broker,
-                scenario_engine=mock_scenario_engine,
-                playbook=mock_playbook,
-                risk=mock_risk,
-                db_conn=mock_db,
-                decision_logger=mock_decision_logger,
-                context_store=mock_context_store,
-                criticality_assessor=mock_criticality_assessor,
-                telegram=mock_telegram,
-                market=mock_market,
-                stock_code="005930",
-                scan_candidates={},
-            )
-
-        mock_telegram.notify_trade_execution.assert_not_called()
-        mock_log_trade.assert_not_called()
-
     @pytest.mark.asyncio
     async def test_fat_finger_notification_sent(
         self,
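The removed rejected-order test relies on `AsyncMock` resolving to a KIS-style rejection payload. The minimal shape of that mocking pattern, reduced to a standalone sketch (the `mock_broker` here is a bare mock, not the repo's fixture):

```python
import asyncio
from unittest.mock import AsyncMock

# An AsyncMock attribute is itself awaitable; setting return_value makes
# every awaited call resolve to the canned KIS rejection payload.
mock_broker = AsyncMock()
mock_broker.send_order.return_value = {"rt_cd": "1", "msg1": "rejected"}

async def main() -> dict:
    return await mock_broker.send_order(stock_code="005930", quantity=1)

result = asyncio.run(main())
print(result["rt_cd"])  # 1
```

Because `AsyncMock` records calls, the original test could then assert `notify_trade_execution.assert_not_called()` and `mock_log_trade.assert_not_called()` to prove the rejection produced no success side effects.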
@@ -165,17 +165,6 @@ class TestGetOpenMarkets:
         )
         assert {m.code for m in extended} == {"US_NASDAQ", "US_NYSE", "US_AMEX"}

-    def test_get_open_markets_excludes_us_day_when_extended_enabled(self) -> None:
-        """US_DAY should be treated as non-tradable even in extended-session lookup."""
-        # Monday 2026-02-02 10:30 KST = 01:30 UTC (US_DAY by session classification)
-        test_time = datetime(2026, 2, 2, 1, 30, tzinfo=ZoneInfo("UTC"))
-        extended = get_open_markets(
-            enabled_markets=["US_NASDAQ", "US_NYSE", "US_AMEX"],
-            now=test_time,
-            include_extended_sessions=True,
-        )
-        assert extended == []
-

 class TestGetNextMarketOpen:
     """Test get_next_market_open function."""
@@ -225,8 +214,8 @@ class TestGetNextMarketOpen:
     def test_get_next_market_open_prefers_extended_session(self) -> None:
         """Extended lookup should return premarket open time before regular open."""
         # Monday 2026-02-02 07:00 EST = 12:00 UTC
-        # US_DAY is treated as non-tradable in extended lookup, so after entering
-        # US_DAY the next tradable OFF->ON transition is US_PRE at 09:00 UTC next day.
+        # By v3 KST session rules, US is OFF only in KST 07:00-10:00 (UTC 22:00-01:00).
+        # At 12:00 UTC market is active, so next OFF->ON transition is 01:00 UTC next day.
         test_time = datetime(2026, 2, 2, 12, 0, tzinfo=ZoneInfo("UTC"))
         market, next_open = get_next_market_open(
             enabled_markets=["US_NASDAQ"],
@@ -234,7 +223,7 @@ class TestGetNextMarketOpen:
             include_extended_sessions=True,
         )
         assert market.code == "US_NASDAQ"
-        assert next_open == datetime(2026, 2, 3, 9, 0, tzinfo=ZoneInfo("UTC"))
+        assert next_open == datetime(2026, 2, 3, 1, 0, tzinfo=ZoneInfo("UTC"))


 class TestExpandMarketCodes:
|
|||||||
`@@ -1,160 +0,0 @@` (entire test module removed)

```python
from __future__ import annotations

import os
import signal
import socket
import subprocess
from pathlib import Path

import pytest

REPO_ROOT = Path(__file__).resolve().parent.parent
RUN_OVERNIGHT = REPO_ROOT / "scripts" / "run_overnight.sh"
RUNTIME_MONITOR = REPO_ROOT / "scripts" / "runtime_verify_monitor.sh"


def _latest_runtime_log(log_dir: Path) -> str:
    logs = sorted(log_dir.glob("runtime_verify_*.log"))
    assert logs, "runtime monitor did not produce log output"
    return logs[-1].read_text(encoding="utf-8")


def test_runtime_verify_monitor_detects_live_process_without_pid_files(tmp_path: Path) -> None:
    log_dir = tmp_path / "overnight"
    log_dir.mkdir(parents=True, exist_ok=True)

    fake_live = subprocess.Popen(
        ["bash", "-lc", 'exec -a "src.main --mode=live" sleep 10'],
        cwd=REPO_ROOT,
    )
    try:
        env = os.environ.copy()
        env.update(
            {
                "ROOT_DIR": str(REPO_ROOT),
                "LOG_DIR": str(log_dir),
                "INTERVAL_SEC": "1",
                "MAX_HOURS": "1",
                "MAX_LOOPS": "1",
                "POLICY_TZ": "UTC",
            }
        )
        completed = subprocess.run(
            ["bash", str(RUNTIME_MONITOR)],
            cwd=REPO_ROOT,
            env=env,
            capture_output=True,
            text=True,
            check=False,
        )
        assert completed.returncode == 0, completed.stderr

        log_text = _latest_runtime_log(log_dir)
        assert "app_alive=1" in log_text
        assert "[COVERAGE] LIVE_MODE=PASS source=process_liveness" in log_text
        assert "[ANOMALY]" not in log_text
    finally:
        fake_live.terminate()
        fake_live.wait(timeout=5)


def test_run_overnight_fails_fast_when_dashboard_port_in_use(tmp_path: Path) -> None:
    log_dir = tmp_path / "overnight"
    log_dir.mkdir(parents=True, exist_ok=True)

    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.bind(("127.0.0.1", 0))
    sock.listen(1)
    port = sock.getsockname()[1]
    try:
        env = os.environ.copy()
        env.update(
            {
                "LOG_DIR": str(log_dir),
                "TMUX_AUTO": "false",
                "DASHBOARD_PORT": str(port),
            }
        )
        completed = subprocess.run(
            ["bash", str(RUN_OVERNIGHT)],
            cwd=REPO_ROOT,
            env=env,
            capture_output=True,
            text=True,
            check=False,
        )
        assert completed.returncode != 0
        output = f"{completed.stdout}\n{completed.stderr}"
        assert "already in use" in output
    finally:
        sock.close()


def test_run_overnight_writes_live_pid_and_watchdog_pid(tmp_path: Path) -> None:
    log_dir = tmp_path / "overnight"
    log_dir.mkdir(parents=True, exist_ok=True)

    env = os.environ.copy()
    env.update(
        {
            "LOG_DIR": str(log_dir),
            "TMUX_AUTO": "false",
            "STARTUP_GRACE_SEC": "1",
            "CHECK_INTERVAL": "2",
            "APP_CMD_BIN": "sleep",
            "APP_CMD_ARGS": "10",
        }
    )
    completed = subprocess.run(
        ["bash", str(RUN_OVERNIGHT)],
        cwd=REPO_ROOT,
        env=env,
        capture_output=True,
        text=True,
        check=False,
    )
    assert completed.returncode == 0, f"{completed.stdout}\n{completed.stderr}"

    app_pid = int((log_dir / "app.pid").read_text(encoding="utf-8").strip())
    watchdog_pid = int((log_dir / "watchdog.pid").read_text(encoding="utf-8").strip())

    os.kill(app_pid, 0)
    os.kill(watchdog_pid, 0)

    for pid in (watchdog_pid, app_pid):
        try:
            os.kill(pid, signal.SIGTERM)
        except ProcessLookupError:
            pass


def test_run_overnight_fails_when_process_exits_before_grace_period(tmp_path: Path) -> None:
    log_dir = tmp_path / "overnight"
    log_dir.mkdir(parents=True, exist_ok=True)

    env = os.environ.copy()
    env.update(
        {
            "LOG_DIR": str(log_dir),
            "TMUX_AUTO": "false",
            "STARTUP_GRACE_SEC": "1",
            "APP_CMD_BIN": "false",
        }
    )
    completed = subprocess.run(
        ["bash", str(RUN_OVERNIGHT)],
        cwd=REPO_ROOT,
        env=env,
        capture_output=True,
        text=True,
        check=False,
    )
    assert completed.returncode != 0
    output = f"{completed.stdout}\n{completed.stderr}"
    assert "startup failed:" in output

    watchdog_pid_file = log_dir / "watchdog.pid"
    if watchdog_pid_file.exists():
        watchdog_pid = int(watchdog_pid_file.read_text(encoding="utf-8").strip())
        with pytest.raises(ProcessLookupError):
            os.kill(watchdog_pid, 0)
```
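The removed tests lean on two stdlib idioms worth noting: `os.kill(pid, 0)` as a liveness probe (signal 0 checks existence and permissions without delivering a signal), and binding port 0 to reserve an OS-chosen free port for the port-conflict case. A minimal, self-contained sketch of both (helper names here are illustrative, not from the repo):

```python
import os
import socket


def pid_is_alive(pid: int) -> bool:
    """Probe a PID with signal 0: no signal is delivered, only checks run."""
    try:
        os.kill(pid, 0)
    except ProcessLookupError:
        return False
    except PermissionError:
        # Process exists but belongs to another user.
        return True
    return True


def reserve_ephemeral_port() -> tuple[socket.socket, int]:
    """Bind port 0 so the OS picks a free port; keep the socket open to hold it."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.bind(("127.0.0.1", 0))
    sock.listen(1)
    return sock, sock.getsockname()[1]


if __name__ == "__main__":
    print(pid_is_alive(os.getpid()))  # our own process is alive: True
    sock, port = reserve_ephemeral_port()
    print(0 < port < 65536)           # True
    sock.close()
```

Holding the listening socket open for the duration of the test is what makes the "port already in use" branch deterministic.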
````diff
@@ -24,24 +24,9 @@ def test_validate_pr_body_text_detects_escaped_newline() -> None:
     assert any("escaped newline" in err for err in errors)


-def test_validate_pr_body_text_detects_escaped_newline_in_multiline_body() -> None:
+def test_validate_pr_body_text_allows_literal_sequence_when_multiline() -> None:
     module = _load_module()
-    text = "## Summary\n- first line\n- broken line with \\n literal"
+    text = "## Summary\n- escaped sequence example: \\\\n"
-    errors = module.validate_pr_body_text(text)
-    assert any("escaped newline" in err for err in errors)
-
-
-def test_validate_pr_body_text_allows_escaped_newline_in_code_blocks() -> None:
-    module = _load_module()
-    text = "\n".join(
-        [
-            "## Summary",
-            "- example uses `\\n` for explanation",
-            "```bash",
-            "printf 'line1\\nline2\\n'",
-            "```",
-        ]
-    )
     assert module.validate_pr_body_text(text) == []
````
```diff
@@ -78,13 +63,12 @@ def test_fetch_pr_body_reads_body_from_tea_api(monkeypatch) -> None:
     module = _load_module()

     def fake_run(cmd, check, capture_output, text):  # noqa: ANN001
-        assert cmd[0] == "/tmp/tea-bin"
+        assert "tea" in cmd[0]
         assert check is True
         assert capture_output is True
         assert text is True
         return SimpleNamespace(stdout=json.dumps({"body": "## Summary\n- item"}))

-    monkeypatch.setattr(module, "resolve_tea_binary", lambda: "/tmp/tea-bin")
     monkeypatch.setattr(module.subprocess, "run", fake_run)
     assert module.fetch_pr_body(391) == "## Summary\n- item"
```
```diff
@@ -95,32 +79,6 @@ def test_fetch_pr_body_rejects_non_string_body(monkeypatch) -> None:
     def fake_run(cmd, check, capture_output, text):  # noqa: ANN001
         return SimpleNamespace(stdout=json.dumps({"body": 123}))

-    monkeypatch.setattr(module, "resolve_tea_binary", lambda: "/tmp/tea-bin")
     monkeypatch.setattr(module.subprocess, "run", fake_run)
     with pytest.raises(RuntimeError):
         module.fetch_pr_body(391)
-
-
-def test_resolve_tea_binary_falls_back_to_home_bin(monkeypatch, tmp_path) -> None:
-    module = _load_module()
-    tea_home = tmp_path / "bin" / "tea"
-    tea_home.parent.mkdir(parents=True)
-    tea_home.write_text("#!/usr/bin/env bash\n", encoding="utf-8")
-    tea_home.chmod(0o755)
-
-    monkeypatch.setattr(module.shutil, "which", lambda _: None)
-    monkeypatch.setattr(module.Path, "home", lambda: tmp_path)
-    assert module.resolve_tea_binary() == str(tea_home)
-
-
-def test_resolve_tea_binary_rejects_non_executable_home_bin(monkeypatch, tmp_path) -> None:
-    module = _load_module()
-    tea_home = tmp_path / "bin" / "tea"
-    tea_home.parent.mkdir(parents=True)
-    tea_home.write_text("not executable\n", encoding="utf-8")
-    tea_home.chmod(0o644)
-
-    monkeypatch.setattr(module.shutil, "which", lambda _: None)
-    monkeypatch.setattr(module.Path, "home", lambda: tmp_path)
-    with pytest.raises(RuntimeError):
-        module.resolve_tea_binary()
```
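The `fetch_pr_body` hunks above all use the same pattern: swap `subprocess.run` for a stub that mirrors its call signature and returns an object exposing only `.stdout`. A minimal, self-contained sketch of that pattern, assuming a hypothetical `fetch_body` under test (the `tea` command line and function body here are illustrative stand-ins, not the repo's actual code):

```python
import json
import subprocess
from types import SimpleNamespace
from unittest import mock


def fetch_body(pr_number: int) -> str:
    """Hypothetical function under test: shells out and parses JSON stdout."""
    completed = subprocess.run(
        ["tea", "pr", str(pr_number)],
        check=True, capture_output=True, text=True,
    )
    body = json.loads(completed.stdout).get("body")
    if not isinstance(body, str):
        raise RuntimeError("PR body missing or not a string")
    return body


def fake_run(cmd, check, capture_output, text):
    # Stub mirrors the real keyword signature; SimpleNamespace stands in
    # for CompletedProcess since only .stdout is read.
    assert "tea" in cmd[0]
    return SimpleNamespace(stdout=json.dumps({"body": "## Summary\n- item"}))


if __name__ == "__main__":
    with mock.patch.object(subprocess, "run", fake_run):
        print(fetch_body(391))  # "## Summary\n- item"
```

The same swap is what pytest's `monkeypatch.setattr` does in the diffs, with automatic restoration at test teardown.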