31 Commits

Author SHA1 Message Date
815fb89585 Merge pull request 'feat: integrate 398-400-401 for main merge REQ-OPS-001 TASK-OPS-003 TEST-ACC-009' (#410) from feature/398-400-401 into main
All checks were successful
Gitea CI / test (push) Successful in 35s
Reviewed-on: #410
Reviewed-by: jihoson <kiparang7th@gmail.com>
2026-03-04 09:56:01 +09:00
agentson
79e51b8ece ci: retrigger PR #410 after governance traceability update
All checks were successful
Gitea CI / test (pull_request) Successful in 35s
Gitea CI / test (push) Successful in 36s
2026-03-04 09:53:53 +09:00
c29e5125bc Merge pull request 'docs: restore official KIS API documentation (docs/ location)' (#408) from fix/restore-kis-api-docs into feature/398-400-401
Some checks failed
Gitea CI / test (push) Successful in 36s
Gitea CI / test (pull_request) Failing after 5s
Reviewed-on: #408
2026-03-04 09:13:14 +09:00
agentson
4afae017a2 ci: retrigger for PR #408 body update
All checks were successful
Gitea CI / test (pull_request) Successful in 35s
2026-03-04 09:12:24 +09:00
agentson
650d464da5 docs: restore KIS API reference document to docs/
Some checks failed
Gitea CI / test (pull_request) Failing after 6s
한국투자증권_오픈API_전체문서_20260221_030000.xlsx was accidentally
removed during a branch merge. Moved from the project root to docs/,
where commands.md and MEMORY.md expect it.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-04 09:08:13 +09:00
b1f5f3e888 Merge pull request 'feat: #401 parallel multi-market processing [REQ-OPS-003] [TASK-OPS-003] [TEST-OPS-003]' (#407) from fix/401 into feature/398-400-401
All checks were successful
Gitea CI / test (push) Successful in 36s
2026-03-04 03:13:16 +09:00
agentson
ec5d656fdf feat: process active markets in parallel with fail-fast semantics (#401)
All checks were successful
Gitea CI / test (pull_request) Successful in 36s
2026-03-04 03:12:23 +09:00
bacb0d2037 Merge pull request 'fix: #400 US session transition handling [REQ-OPS-002] [TASK-OPS-002] [TEST-OPS-002]' (#406) from fix/400 into feature/398-400-401
All checks were successful
Gitea CI / test (push) Successful in 35s
2026-03-04 03:09:36 +09:00
agentson
a67a9aa41f style: sort imports for main tests
All checks were successful
Gitea CI / test (pull_request) Successful in 35s
2026-03-04 03:08:47 +09:00
agentson
47a87fa2f6 chore: retrigger CI for PR #406
Some checks failed
Gitea CI / test (pull_request) Failing after 6s
2026-03-04 03:07:43 +09:00
agentson
2e3aed5664 fix: handle US session transitions and suppress US_DAY trading (#400)
Some checks failed
Gitea CI / test (pull_request) Failing after 5s
2026-03-04 03:07:14 +09:00
f245f55a32 Merge pull request 'fix: #398 KR rt_cd rejection handling [REQ-OPS-001] [TASK-OPS-001] [TEST-OPS-001]' (#405) from fix/398 into feature/398-400-401
All checks were successful
Gitea CI / test (push) Successful in 35s
2026-03-04 03:04:42 +09:00
agentson
90cd7c0504 chore: retrigger CI for PR #405
All checks were successful
Gitea CI / test (pull_request) Successful in 36s
2026-03-04 03:03:53 +09:00
agentson
b283880774 fix: handle KR order rejection via rt_cd check (#398)
Some checks failed
Gitea CI / test (pull_request) Failing after 5s
2026-03-04 03:02:26 +09:00
c217e8cd72 Merge pull request 'fix: runtime anomaly handling for overnight startup and monitor (#396 #397)' (#404) from feature/issue-396-397-runtime-anomaly-fixes into main
All checks were successful
Gitea CI / test (push) Successful in 36s
Reviewed-on: #404
Reviewed-by: jihoson <kiparang7th@gmail.com>
2026-03-04 02:46:38 +09:00
agentson
bcbbf80d16 docs: clarify APP_CMD legacy and APP_CMD_ARGS contract
All checks were successful
Gitea CI / test (push) Successful in 36s
Gitea CI / test (pull_request) Successful in 36s
2026-03-04 02:43:32 +09:00
agentson
dc0775cbc6 fix: add safer custom command path for run_overnight
All checks were successful
Gitea CI / test (push) Successful in 36s
Gitea CI / test (pull_request) Successful in 35s
2026-03-04 02:36:24 +09:00
agentson
c412412f7b fix: address second-round review findings on PR #404
All checks were successful
Gitea CI / test (push) Successful in 36s
Gitea CI / test (pull_request) Successful in 36s
2026-03-04 02:29:54 +09:00
agentson
3cde8779fa fix: address PR #404 review feedback
All checks were successful
Gitea CI / test (push) Successful in 36s
Gitea CI / test (pull_request) Successful in 35s
2026-03-04 02:23:43 +09:00
agentson
370ee8cc85 fix: make overnight startup portable in CI environments
All checks were successful
Gitea CI / test (push) Successful in 34s
Gitea CI / test (pull_request) Successful in 34s
2026-03-04 02:07:52 +09:00
agentson
528e17a29c fix: stabilize overnight startup and monitor live fallback (#396 #397)
Some checks failed
Gitea CI / test (push) Failing after 37s
Gitea CI / test (pull_request) Failing after 38s
2026-03-04 02:04:13 +09:00
d2f3fe9108 Merge pull request 'docs: consolidate CLAUDE entrypoint into agents guide (#402)' (#403) from temp/agents-md-migration-20260303 into main
All checks were successful
Gitea CI / test (push) Successful in 33s
Reviewed-on: #403
Reviewed-by: jihoson <kiparang7th@gmail.com>
2026-03-04 01:52:46 +09:00
agentson
12bcccab42 ci: rerun PR checks after traceability update
All checks were successful
Gitea CI / test (pull_request) Successful in 32s
2026-03-04 01:32:42 +09:00
agentson
ef16cf8800 docs: consolidate agent entrypoint into agents.md (#402)
Some checks failed
Gitea CI / test (pull_request) Failing after 5s
2026-03-04 01:31:00 +09:00
3c58c5d110 Merge pull request 'merge: feature/v3-session-policy-stream into main' (#399) from feature/main-merge-v3-session-policy-stream-20260303 into main
Some checks failed
Gitea CI / test (push) Failing after 5s
Reviewed-on: #399
Reviewed-by: jihoson <kiparang7th@gmail.com>
2026-03-04 00:47:20 +09:00
agentson
8ecd3ac55f chore: retrigger CI after PR governance body update
All checks were successful
Gitea CI / test (push) Successful in 33s
Gitea CI / test (pull_request) Successful in 32s
2026-03-04 00:36:57 +09:00
agentson
79ad108e2f Merge origin/feature/v3-session-policy-stream into main
Some checks failed
Gitea CI / test (pull_request) Failing after 5s
Gitea CI / test (push) Failing after 5s
2026-03-04 00:30:45 +09:00
d9cf056df8 Merge pull request 'process: add PR body post-check gate and tooling (#392)' (#393) from feature/issue-392-pr-body-postcheck into feature/v3-session-policy-stream
All checks were successful
Gitea CI / test (push) Successful in 33s
Reviewed-on: #393
2026-03-02 18:34:59 +09:00
agentson
bd9286a39f fix: require executable tea fallback binary (#392)
All checks were successful
Gitea CI / test (push) Successful in 32s
Gitea CI / test (pull_request) Successful in 33s
2026-03-02 18:32:07 +09:00
agentson
f4f8827353 fix: harden PR body validator for mixed escaped-newline and tea path (#392)
All checks were successful
Gitea CI / test (push) Successful in 32s
Gitea CI / test (pull_request) Successful in 33s
2026-03-02 18:27:59 +09:00
agentson
7d24f19cc4 process: add mandatory PR body post-check step (#392)
All checks were successful
Gitea CI / test (push) Successful in 33s
Gitea CI / test (pull_request) Successful in 33s
2026-03-02 18:20:17 +09:00
18 changed files with 811 additions and 1034 deletions

CLAUDE.md (190 lines changed)

@@ -1,187 +1,9 @@
New file content:
# Agent Entry Point
This file moved to [agents.md](./agents.md).
Follow `agents.md` as the single source of truth for Claude/Codex session behavior and project workflow gates.
Core process references:
- [Workflow Guide](docs/workflow.md)
- [Command Reference](docs/commands.md)
Removed content:
# The Ouroboros
AI-powered trading agent for global stock markets with self-evolution capabilities.
## Quick Start
```bash
# Setup
pip install -e ".[dev]"
cp .env.example .env
# Edit .env with your KIS and Gemini API credentials
# Test
pytest -v --cov=src
# Run (paper trading)
python -m src.main --mode=paper
# Run with dashboard
python -m src.main --mode=paper --dashboard
```
## Telegram Notifications (Optional)
Get real-time alerts for trades, circuit breakers, and system events via Telegram.
### Quick Setup
1. **Create bot**: Message [@BotFather](https://t.me/BotFather) on Telegram → `/newbot`
2. **Get chat ID**: Message [@userinfobot](https://t.me/userinfobot) → `/start`
3. **Configure**: Add to `.env`:
```bash
TELEGRAM_BOT_TOKEN=1234567890:ABCdefGHIjklMNOpqrsTUVwxyz
TELEGRAM_CHAT_ID=123456789
TELEGRAM_ENABLED=true
```
4. **Test**: Start bot conversation (`/start`), then run the agent
**Full documentation**: [src/notifications/README.md](src/notifications/README.md)
### What You'll Get
- 🟢 Trade execution alerts (BUY/SELL with confidence)
- 🚨 Circuit breaker trips (automatic trading halt)
- ⚠️ Fat-finger rejections (oversized orders blocked)
- Market open/close notifications
- 📝 System startup/shutdown status
### Interactive Commands
With `TELEGRAM_COMMANDS_ENABLED=true` (default), the bot supports 9 bidirectional commands: `/help`, `/status`, `/positions`, `/report`, `/scenarios`, `/review`, `/dashboard`, `/stop`, `/resume`.
**Fail-safe**: Notifications never crash the trading system. Missing credentials or API errors are logged but trading continues normally.
## Smart Volatility Scanner (Optional)
Python-first filtering pipeline that reduces Gemini API calls by pre-filtering stocks using technical indicators.
### How It Works
1. **Fetch Rankings** — KIS API volume surge rankings (top 30 stocks)
2. **Python Filter** — RSI + volume ratio calculations (no AI)
- Volume > 200% of previous day
- RSI(14) < 30 (oversold) OR RSI(14) > 70 (momentum)
3. **AI Judgment** — Only qualified candidates (1-3 stocks) sent to Gemini
### Configuration
Add to `.env` (optional, has sensible defaults):
```bash
RSI_OVERSOLD_THRESHOLD=30 # 0-50, default 30
RSI_MOMENTUM_THRESHOLD=70 # 50-100, default 70
VOL_MULTIPLIER=2.0 # Volume threshold (2.0 = 200%)
SCANNER_TOP_N=3 # Max candidates per scan
```
### Benefits
- **Reduces API costs** — Process 1-3 stocks instead of 20-30
- **Python-based filtering** — Fast technical analysis before AI
- **Evolution-ready** — Selection context logged for strategy optimization
- **Fault-tolerant** — Falls back to static watchlist on API failure
### Trading Mode Integration
Smart Scanner runs in both `TRADE_MODE=realtime` and `daily` paths. On API failure, domestic stocks fall back to a static watchlist; overseas stocks fall back to a dynamic universe (active positions, recent holdings).
## Documentation
- **[Documentation Hub](docs/README.md)** — Top-level doc routing and reading order
- **[Workflow Guide](docs/workflow.md)** — Git workflow policy and agent-based development
- **[Command Reference](docs/commands.md)** — Common failures, build commands, troubleshooting
- **[Architecture](docs/architecture.md)** — System design, components, data flow
- **[Context Tree](docs/context-tree.md)** — L1-L7 hierarchical memory system
- **[Testing](docs/testing.md)** — Test structure, coverage requirements, writing tests
- **[Agent Policies](docs/agents.md)** — Prime directives, constraints, prohibited actions
- **[Requirements Log](docs/requirements-log.md)** — User requirements and feedback tracking
- **[Live Trading Checklist](docs/live-trading-checklist.md)** — Paper-to-live transition checklist
## Core Principles
1. **Safety First** — Risk manager is READ-ONLY and enforces circuit breakers
2. **Test Everything** — 80% coverage minimum, all changes require tests
3. **Issue-Driven Development** — All work goes through Gitea issues → feature branches → PRs
4. **Agent Specialization** — Use dedicated agents for design, coding, testing, docs, review
## Requirements Management
User requirements and feedback are tracked in [docs/requirements-log.md](docs/requirements-log.md):
- New requirements are added chronologically with dates
- Code changes should reference related requirements
- Helps maintain project evolution aligned with user needs
- Preserves context across conversations and development cycles
## Project Structure
```
src/
├── analysis/ # Technical analysis (RSI, volatility, smart scanner)
├── backup/ # Disaster recovery (scheduler, cloud storage, health)
├── brain/ # Gemini AI decision engine (prompt optimizer, context selector)
├── broker/ # KIS API client (domestic + overseas)
├── context/ # L1-L7 hierarchical memory system
├── core/ # Risk manager (READ-ONLY)
├── dashboard/ # FastAPI read-only monitoring (10 API endpoints)
├── data/ # External data integration (news, market data, calendar)
├── evolution/ # Self-improvement (optimizer, daily review, scorecard)
├── logging/ # Decision logger (audit trail)
├── markets/ # Market schedules and timezone handling
├── notifications/ # Telegram alerts + bidirectional commands (9 commands)
├── strategy/ # Pre-market planner, scenario engine, playbook store
├── db.py # SQLite trade logging
├── main.py # Trading loop orchestrator
└── config.py # Settings (from .env)
tests/ # 998 tests across 41 files
docs/ # Extended documentation
```
## Key Commands
```bash
pytest -v --cov=src # Run tests with coverage
ruff check src/ tests/ # Lint
mypy src/ --strict # Type check
python -m src.main --mode=paper # Paper trading
python -m src.main --mode=paper --dashboard # With dashboard
python -m src.main --mode=live # Live trading (⚠️ real money)
# Gitea workflow (requires tea CLI)
YES="" ~/bin/tea issues create --repo jihoson/The-Ouroboros --title "..." --description "..."
YES="" ~/bin/tea pulls create --head feature-branch --base main --title "..." --description "..."
```
## Markets Supported
- 🇰🇷 Korea (KRX)
- 🇺🇸 United States (NASDAQ, NYSE, AMEX)
- 🇯🇵 Japan (TSE)
- 🇭🇰 Hong Kong (SEHK)
- 🇨🇳 China (Shanghai, Shenzhen)
- 🇻🇳 Vietnam (Hanoi, HCM)
Markets are auto-detected based on timezone and enabled via the `ENABLED_MARKETS` env variable.
## Critical Constraints
⚠️ **Non-Negotiable Rules** (see [docs/agents.md](docs/agents.md)):
- `src/core/risk_manager.py` is **READ-ONLY** — changes require human approval
- Circuit breaker at -3.0% P&L — may only be made **stricter**
- Fat-finger protection: max 30% of cash per order — always enforced
- Confidence thresholds (per market_outlook, may not be lowered): BEARISH ≥ 90, NEUTRAL/default ≥ 80, BULLISH ≥ 75
- All code changes → corresponding tests → coverage ≥ 80%
## Contributing
See [docs/workflow.md](docs/workflow.md) for the complete development process.
**TL;DR:**
1. Create issue in Gitea
2. Create feature branch: `feature/issue-N-description`
3. Implement with tests
4. Open PR
5. Merge after review

agents.md (new file, 199 lines)

@@ -0,0 +1,199 @@
# The Ouroboros
AI-powered trading agent for global stock markets with self-evolution capabilities.
## Agent Workflow Gate (Claude/Codex)
Before any implementation, both Claude and Codex must align on the same project process:
1. Read `docs/workflow.md` first (branch policy, issue/PR flow, merge rules).
2. Read `docs/commands.md` for required verification commands and failure handling.
3. Read `docs/agent-constraints.md` and `docs/agents.md` for safety constraints.
4. Check `workflow/session-handover.md` and append a session entry when starting or handing off work.
5. Confirm current branch is based on `main` or an explicitly designated temporary/base branch before editing.
If any instruction conflicts, default to the safer path and document the reason in the handover log.
## Quick Start
```bash
# Setup
pip install -e ".[dev]"
cp .env.example .env
# Edit .env with your KIS and Gemini API credentials
# Test
pytest -v --cov=src
# Run (paper trading)
python -m src.main --mode=paper
# Run with dashboard
python -m src.main --mode=paper --dashboard
```
## Telegram Notifications (Optional)
Get real-time alerts for trades, circuit breakers, and system events via Telegram.
### Quick Setup
1. **Create bot**: Message [@BotFather](https://t.me/BotFather) on Telegram → `/newbot`
2. **Get chat ID**: Message [@userinfobot](https://t.me/userinfobot) → `/start`
3. **Configure**: Add to `.env`:
```bash
TELEGRAM_BOT_TOKEN=1234567890:ABCdefGHIjklMNOpqrsTUVwxyz
TELEGRAM_CHAT_ID=123456789
TELEGRAM_ENABLED=true
```
4. **Test**: Start bot conversation (`/start`), then run the agent
**Full documentation**: [src/notifications/README.md](src/notifications/README.md)
### What You'll Get
- 🟢 Trade execution alerts (BUY/SELL with confidence)
- 🚨 Circuit breaker trips (automatic trading halt)
- ⚠️ Fat-finger rejections (oversized orders blocked)
- Market open/close notifications
- 📝 System startup/shutdown status
### Interactive Commands
With `TELEGRAM_COMMANDS_ENABLED=true` (default), the bot supports 9 bidirectional commands: `/help`, `/status`, `/positions`, `/report`, `/scenarios`, `/review`, `/dashboard`, `/stop`, `/resume`.
**Fail-safe**: Notifications never crash the trading system. Missing credentials or API errors are logged but trading continues normally.
## Smart Volatility Scanner (Optional)
Python-first filtering pipeline that reduces Gemini API calls by pre-filtering stocks using technical indicators.
### How It Works
1. **Fetch Rankings** — KIS API volume surge rankings (top 30 stocks)
2. **Python Filter** — RSI + volume ratio calculations (no AI)
- Volume > 200% of previous day
- RSI(14) < 30 (oversold) OR RSI(14) > 70 (momentum)
3. **AI Judgment** — Only qualified candidates (1-3 stocks) sent to Gemini
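The two-stage pre-filter described above can be sketched in Python. This is a hypothetical sketch only: the real scanner lives in `src/analysis/` and is not shown in this diff, and the candidate field names (`volume_ratio`, `rsi`) are assumptions.

```python
# Hypothetical sketch of the Smart Scanner pre-filter stage; field names
# and function signature are assumptions, not the project's actual API.
def prefilter(candidates, vol_multiplier=2.0, rsi_oversold=30,
              rsi_momentum=70, top_n=3):
    """Keep only stocks with a volume surge AND an extreme RSI reading."""
    qualified = [
        c for c in candidates
        if c["volume_ratio"] > vol_multiplier  # volume > 200% of previous day
        and (c["rsi"] < rsi_oversold or c["rsi"] > rsi_momentum)  # oversold OR momentum
    ]
    # Only the top few qualified candidates are sent on to Gemini.
    return qualified[:top_n]

candidates = [
    {"ticker": "A", "volume_ratio": 2.5, "rsi": 25},  # oversold + surge -> passes
    {"ticker": "B", "volume_ratio": 1.2, "rsi": 28},  # volume too low -> filtered out
    {"ticker": "C", "volume_ratio": 3.0, "rsi": 75},  # momentum + surge -> passes
]
print([c["ticker"] for c in prefilter(candidates)])  # → ['A', 'C']
```

Because the filter is plain Python, it runs in microseconds per candidate, which is what lets it sit in front of the (slow, costly) AI judgment step.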
### Configuration
Add to `.env` (optional, has sensible defaults):
```bash
RSI_OVERSOLD_THRESHOLD=30 # 0-50, default 30
RSI_MOMENTUM_THRESHOLD=70 # 50-100, default 70
VOL_MULTIPLIER=2.0 # Volume threshold (2.0 = 200%)
SCANNER_TOP_N=3 # Max candidates per scan
```
### Benefits
- **Reduces API costs** — Process 1-3 stocks instead of 20-30
- **Python-based filtering** — Fast technical analysis before AI
- **Evolution-ready** — Selection context logged for strategy optimization
- **Fault-tolerant** — Falls back to static watchlist on API failure
### Trading Mode Integration
Smart Scanner runs in both `TRADE_MODE=realtime` and `daily` paths. On API failure, domestic stocks fall back to a static watchlist; overseas stocks fall back to a dynamic universe (active positions, recent holdings).
## Documentation
- **[Documentation Hub](docs/README.md)** — Top-level doc routing and reading order
- **[Workflow Guide](docs/workflow.md)** — Git workflow policy and agent-based development
- **[Command Reference](docs/commands.md)** — Common failures, build commands, troubleshooting
- **[Architecture](docs/architecture.md)** — System design, components, data flow
- **[Context Tree](docs/context-tree.md)** — L1-L7 hierarchical memory system
- **[Testing](docs/testing.md)** — Test structure, coverage requirements, writing tests
- **[Agent Policies](docs/agents.md)** — Prime directives, constraints, prohibited actions
- **[Requirements Log](docs/requirements-log.md)** — User requirements and feedback tracking
- **[Live Trading Checklist](docs/live-trading-checklist.md)** — Paper-to-live transition checklist
## Core Principles
1. **Safety First** — Risk manager is READ-ONLY and enforces circuit breakers
2. **Test Everything** — 80% coverage minimum, all changes require tests
3. **Issue-Driven Development** — All work goes through Gitea issues → feature branches → PRs
4. **Agent Specialization** — Use dedicated agents for design, coding, testing, docs, review
## Requirements Management
User requirements and feedback are tracked in [docs/requirements-log.md](docs/requirements-log.md):
- New requirements are added chronologically with dates
- Code changes should reference related requirements
- Helps maintain project evolution aligned with user needs
- Preserves context across conversations and development cycles
## Project Structure
```
src/
├── analysis/ # Technical analysis (RSI, volatility, smart scanner)
├── backup/ # Disaster recovery (scheduler, cloud storage, health)
├── brain/ # Gemini AI decision engine (prompt optimizer, context selector)
├── broker/ # KIS API client (domestic + overseas)
├── context/ # L1-L7 hierarchical memory system
├── core/ # Risk manager (READ-ONLY)
├── dashboard/ # FastAPI read-only monitoring (10 API endpoints)
├── data/ # External data integration (news, market data, calendar)
├── evolution/ # Self-improvement (optimizer, daily review, scorecard)
├── logging/ # Decision logger (audit trail)
├── markets/ # Market schedules and timezone handling
├── notifications/ # Telegram alerts + bidirectional commands (9 commands)
├── strategy/ # Pre-market planner, scenario engine, playbook store
├── db.py # SQLite trade logging
├── main.py # Trading loop orchestrator
└── config.py # Settings (from .env)
tests/ # 998 tests across 41 files
docs/ # Extended documentation
```
## Key Commands
```bash
pytest -v --cov=src # Run tests with coverage
ruff check src/ tests/ # Lint
mypy src/ --strict # Type check
python -m src.main --mode=paper # Paper trading
python -m src.main --mode=paper --dashboard # With dashboard
python -m src.main --mode=live # Live trading (⚠️ real money)
# Gitea workflow (requires tea CLI)
YES="" ~/bin/tea issues create --repo jihoson/The-Ouroboros --title "..." --description "..."
YES="" ~/bin/tea pulls create --head feature-branch --base main --title "..." --description "..."
```
## Markets Supported
- 🇰🇷 Korea (KRX)
- 🇺🇸 United States (NASDAQ, NYSE, AMEX)
- 🇯🇵 Japan (TSE)
- 🇭🇰 Hong Kong (SEHK)
- 🇨🇳 China (Shanghai, Shenzhen)
- 🇻🇳 Vietnam (Hanoi, HCM)
Markets are auto-detected based on timezone and enabled via the `ENABLED_MARKETS` env variable.
## Critical Constraints
⚠️ **Non-Negotiable Rules** (see [docs/agents.md](docs/agents.md)):
- `src/core/risk_manager.py` is **READ-ONLY** — changes require human approval
- Circuit breaker at -3.0% P&L — may only be made **stricter**
- Fat-finger protection: max 30% of cash per order — always enforced
- Confidence thresholds (per market_outlook, may not be lowered): BEARISH ≥ 90, NEUTRAL/default ≥ 80, BULLISH ≥ 75
- All code changes → corresponding tests → coverage ≥ 80%
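As an illustration of the confidence rule above, a gate might look like the following. This is a sketch only: the actual enforcement lives in the READ-ONLY risk manager, and the function name is invented.

```python
# Illustrative sketch of the per-outlook confidence gate; not the actual
# risk-manager code, which must not be modified without human approval.
THRESHOLDS = {"BEARISH": 90, "NEUTRAL": 80, "BULLISH": 75}

def passes_confidence_gate(confidence, market_outlook):
    # Unknown outlooks fall back to the NEUTRAL/default threshold of 80.
    required = THRESHOLDS.get(market_outlook, 80)
    return confidence >= required

print(passes_confidence_gate(85, "BEARISH"))  # → False (needs >= 90)
print(passes_confidence_gate(85, "NEUTRAL"))  # → True
print(passes_confidence_gate(76, "BULLISH"))  # → True
```

Note the asymmetry is deliberate: a bearish outlook demands far higher conviction before any trade, and the rule only permits making these thresholds stricter.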
## Contributing
See [docs/workflow.md](docs/workflow.md) for the complete development process.
**TL;DR:**
1. Create issue in Gitea
2. Create feature branch: `feature/issue-N-description`
3. Implement with tests
4. Open PR
5. Merge after review


@@ -42,5 +42,3 @@ Updated: 2026-03-02
## Change History
- 2026-03-02: Performed a policy-document sync check following the `v1.0.12` documentation validation gate hardening (#390). (`REQ-OPS-002`)
Removed:
- 2026-03-04: Performed a policy-document sync check following the addition of the 86_peak_probability_model_plan.md draft. (REQ-OPS-002)


@@ -1,168 +0,0 @@
<!--
Doc-ID: DOC-PLAN-086
Version: 0.1.0
Status: draft
Owner: strategy
Updated: 2026-03-02
-->
# Peak Probability Model Introduction Plan (Draft)
Written: 2026-03-02
Purpose: Redefine the "model-assisted signal" from the existing plan (v2) into a simple form that can be operated in production.
---
## 1. One-Line Summary
- The goal is to predict "the probability that now is the peak (or near the peak)" to support take-profit, position-reduction, and overnight-risk decisions.
- Phase 1 starts with regular sessions; extended sessions such as NXT/US_PRE are handled separately afterward.
---
## 2. Why the Change
- The earlier discussion was broad in scope, making implementation/validation costly.
- We need a target that can be started quickly with the current log structure.
- Measured by real-trading utility, "near-peak detection" connects directly to better exit quality.
---
## 3. Key Decisions in This Plan
### 3.1 Prediction Targets (2)
- `p_top_d0`: probability that the current point is "today's regular-session peak (near-peak included)"
- `p_top_d1`: probability that the current point is "the peak through tomorrow's regular-session close (near-peak included)"
### 3.2 Model Role
- Used as an auxiliary signal, not a standalone sell trigger.
- The hard stop / ATR trailing / Kill Switch priority principle is preserved.
### 3.3 Rollout Order
1. Build the MVP for KR/US regular sessions only
2. After performance/operational stabilization, extend separately to NXT, US_PRE, US_AFTER
---
## 4. Data/Label Definitions (MVP)
### 4.1 Sample Unit
- Unit: "one observation at time t"
- Scope: orderable time points that pass the policy gates
### 4.2 Label Definitions
- `y_d0 = 1` if `max_high(t ~ today's regular close) <= price_t * (1 + eps_d0)` else `0`
- `y_d1 = 1` if `max_high(t ~ tomorrow's regular close) <= price_t * (1 + eps_d1)` else `0`
### 4.3 Required Features (Initial)
- Price/returns: `current_price`, `price_change_pct`
- Micro indicators: `rsi`, `volume_ratio`, `signal`, `score`
- Portfolio: `portfolio_pnl_pct`, `total_cash` (optional)
- State: `market`, `session_id`, `minutes_to_close`, `is_tradeable_now`
### 4.4 Data Exclusion Rules (Initial)
- Exclude `session_id in ('', 'UNKNOWN')`
- Exclude `price <= 0` or `quantity <= 0`
- Exclude `rationale LIKE '[startup-sync]%'`
- Exclude samples with incomplete label windows (holidays/data gaps)
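The label rule of section 4.2 can be sketched as below. The `eps` defaults here are placeholders only; the plan deliberately leaves `eps_d0`/`eps_d1` TBD.

```python
# Sketch of the section 4.2 label rule; eps defaults are placeholders,
# not the project's chosen values (those are TBD in section 8).
def make_labels(price_t, highs_to_d0_close, highs_to_d1_close,
                eps_d0=0.002, eps_d1=0.005):
    """y = 1 when no later high exceeds price_t by more than eps (near-peak)."""
    y_d0 = int(max(highs_to_d0_close) <= price_t * (1 + eps_d0))
    y_d1 = int(max(highs_to_d1_close) <= price_t * (1 + eps_d1))
    return y_d0, y_d1

# Price 100 observed at t; the rest of today only reaches 100.1 (within eps),
# but tomorrow prints 103 -> a today-peak, not a two-day peak.
print(make_labels(100.0, [100.1], [100.1, 103.0]))  # → (1, 0)
```

Samples failing the exclusion rules above (empty session, non-positive price, incomplete label windows) would be dropped before this function is ever called.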
---
## 5. Inference/Execution Rules (Draft)
### 5.1 Inference Output
```json
{
"p_top_d0": 0.00,
"p_top_d1": 0.00,
"model_version": "TBD"
}
```
### 5.2 Execution Principles
- If `p_top_d0` is high: tighten same-day take-profit/trailing
- If `p_top_d1` is high: prioritize reducing/closing overnight holdings
- Extended sessions (NXT/US_PRE) initially get conservative thresholds or restricted execution
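A minimal sketch of how these execution principles could gate actions follows. The thresholds and the `KR_REGULAR` session name are invented for illustration; real thresholds are TBD in this plan.

```python
# Hypothetical mapping from model outputs to exit bias; thresholds and
# session identifiers here are invented, not project-decided values.
def exit_bias(p_top_d0, p_top_d1, session_id, th=0.7):
    extended = session_id in ("NXT", "US_PRE", "US_AFTER")
    if extended:
        th = 0.85  # conservative threshold for extended sessions
    actions = []
    if p_top_d0 >= th:
        actions.append("tighten-intraday-trailing")
    if p_top_d1 >= th:
        actions.append("reduce-overnight-holdings")
    return actions

print(exit_bias(0.8, 0.3, "KR_REGULAR"))  # → ['tighten-intraday-trailing']
print(exit_bias(0.8, 0.3, "US_PRE"))      # → [] (0.8 < 0.85 in extended sessions)
```

The output is only a bias list: hard stop, ATR trailing, and the Kill Switch still take priority over anything the model suggests.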
---
## 6. Session-Issue Handling Principles
### 6.1 Tradeable-Session Basis
- Build labels/inference on "the session in which the instrument is actually tradeable", not calendar time.
### 6.2 Separating Regular vs. Extended Sessions
- MVP: regular-session-only labels/model
- Extension: consider per-session heads or separate per-session models
### 6.3 Separate Peak Definitions
- `reg_high`: regular-session peak (primary label)
- `ext_high`: peak including extended sessions (auxiliary analysis)
---
## 7. Phased Implementation Plan
### Phase A: Freeze the Contract
- Finalize the canonical dataset schema for training
- Document the as-of alignment rules
- Add label-generation functions and validation tests
### Phase B: Offline Validation
- Walk-forward evaluation with purge/embargo
- Separate reports for `d0` and `d1`
- Confirm improvement over B1 (no model)
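The walk-forward evaluation with an embargo gap can be sketched as follows; window sizes here are illustrative, not project values.

```python
# Minimal walk-forward splitter with an embargo gap; the gap keeps forward-
# looking label windows in the train set from overlapping the test set.
def walk_forward_splits(n_samples, train_size, test_size, embargo):
    """Yield (train_indices, test_indices) pairs over a time-ordered dataset."""
    start = 0
    while start + train_size + embargo + test_size <= n_samples:
        train = list(range(start, start + train_size))
        test_start = start + train_size + embargo
        test = list(range(test_start, test_start + test_size))
        yield train, test
        start += test_size  # roll forward by one test window

splits = list(walk_forward_splits(n_samples=20, train_size=8, test_size=4, embargo=2))
print(len(splits))  # → 2
print(splits[0][1][0] - splits[0][0][-1])  # → 3 (gap of embargo=2, plus 1)
```

Because the `d1` label looks through tomorrow's close, the embargo must cover at least that label horizon or the offline scores will be optimistic.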
### Phase C: Runtime Integration
- Inject inference results as `pred_top_d0`/`pred_top_d1`
- Wire up the auxiliary sell/reduce rules
- Observe in shadow mode, then enable with limits
### Phase D: Session Extension
- Separate NXT/US_PRE/US_AFTER policies
- Tune execution gates/thresholds per session
---
## 8. Minimum Items to Decide Now (TBD)
- [ ] Default values for `eps_d0`, `eps_d1` (`TBD`)
- [ ] Execution thresholds for `p_top_d0`, `p_top_d1` (`TBD`)
- [ ] Market scope for regular/extended sessions (`TBD`)
- [ ] Minimum training sample count (`TBD`)
- [ ] Shadow observation period (`TBD`)
---
## 9. Risks and Mitigations
- Label imbalance: class weights, threshold tuning
- Time-of-day bias (near the close): add time features + report performance by interval
- Data integrity: enforce exclusion rules + CI validation
- Growing complexity: lock the regular-session MVP first; split extended sessions into follow-up work
---
## 10. Follow-up Document Links (Planned)
- requirements registry: add new REQ/TASK/TEST mappings
- code-level work orders: break down implementation work
- acceptance/test plan: add `p_top_d0/d1` validation items
- implementation audit: update the existing "model-assisted signal" item


@@ -1,211 +0,0 @@
# Dual-Track Workflow Execution Plan (Revised)
> **For Claude:** REQUIRED SUB-SKILL: Use superpowers:executing-plans to implement this plan task-by-task.
**Goal:** Re-seed workflow planning from a clean base branch (not the temporary branch), then execute isolated `86` and `80` tracks with real-operation-confirmed closeout and related Gitea issue closure.
**Architecture:** Start from `feature/v3-session-policy-stream` and create `base/ouroboros-workflow-20260302`. Bring only two workflow plan docs as one squashed seed commit. Branch `task/86...` and `task/80...` from that base with strict file-scope separation. For `80`, update audit only after real-operation confirmation, then close related Gitea issues (children before parent). Final promotion happens by merging the validated stream to `main`.
**Tech Stack:** Git CLI, Gitea (`tea`) issue/PR workflow, Markdown docs (`docs/ouroboros`, `docs/plans`)
---
### Task 1: Freeze temporary branch and identify export scope
**Files:**
- Modify: none
- Create: none
- Test: none
**Step 1: Inspect temporary branch state**
Run: `git status --short --branch`
Expected: current temporary branch with local changes listed.
**Step 2: Preserve changes using stash (recommended)**
Run: `git stash push -u -m "tmp-before-workflow-reseed-2026-03-02"`
Expected: stash created with tracked/untracked changes.
**Step 3: Define transferable artifacts**
Run: keep only these for reseed:
- `docs/plans/2026-03-02-ouroboros-dual-track-workflow-design.md`
- `docs/plans/2026-03-02-dual-track-workflow-execution-plan.md`
Expected: transfer scope fixed to two files.
**Step 4: Verify clean working tree**
Run: `git status --short --branch`
Expected: no local changes.
**Step 5: Commit**
```bash
# no commit in this task (stash-only preservation)
```
### Task 2: Create clean official base and seed one commit
**Files:**
- Modify: `docs/plans/2026-03-02-ouroboros-dual-track-workflow-design.md`
- Modify: `docs/plans/2026-03-02-dual-track-workflow-execution-plan.md`
- Test: none
**Step 1: Switch to canonical base**
Run: `git checkout feature/v3-session-policy-stream`
Expected: branch switch successful.
**Step 2: Update local base**
Run: `git pull --ff-only origin feature/v3-session-policy-stream`
Expected: up-to-date or fast-forward update.
**Step 3: Create staging base branch**
Run: `git checkout -b base/ouroboros-workflow-20260302`
Expected: new branch created from updated feature base.
**Step 4: Restore only two plan docs from temporary branch commits**
Run (example): `git checkout <temp-branch> -- docs/plans/2026-03-02-ouroboros-dual-track-workflow-design.md docs/plans/2026-03-02-dual-track-workflow-execution-plan.md`
Expected: only two files restored in working tree.
**Step 5: Create one squashed seed commit**
Run: `git add docs/plans/2026-03-02-ouroboros-dual-track-workflow-design.md docs/plans/2026-03-02-dual-track-workflow-execution-plan.md && git commit -m "docs: seed revised dual-track workflow plan"`
Expected: single commit with two files only.
**Step 6: Verify seed scope**
Run: `git show --name-only --oneline HEAD`
Expected: exactly the two plan docs listed.
**Step 7: Push official base**
Run: `git push -u origin base/ouroboros-workflow-20260302`
Expected: remote tracking set.
### Task 3: Open Track A (`86`) branch and initial PR
**Files:**
- Modify: `docs/ouroboros/86_peak_probability_model_plan.md`
- Test: none
**Step 1: Create Track A branch**
Run: `git checkout -b task/86-peak-probability-plan`
Expected: branch created from official base.
**Step 2: Stage only `86` document**
Run: `git add docs/ouroboros/86_peak_probability_model_plan.md`
Expected: only `86` doc staged.
**Step 3: Commit initial PR seed**
Run: `git commit -m "docs: seed peak probability model plan (#86)"`
Expected: one-document commit.
**Step 4: Push branch**
Run: `git push -u origin task/86-peak-probability-plan`
Expected: remote branch created.
**Step 5: Open PR-A**
Run: `tea pr create --base base/ouroboros-workflow-20260302 --head task/86-peak-probability-plan --title "docs: seed peak probability model plan (#86)" --description "Initial planning PR for iterative concretization."`
Expected: PR URL returned.
### Task 4: Open Track B (`80`) branch for operation-confirmed closeout
**Files:**
- Modify: `docs/ouroboros/80_implementation_audit.md`
- Test: operation observation evidence (outside unit-test scope)
**Step 1: Return to staging base**
Run: `git checkout base/ouroboros-workflow-20260302`
Expected: branch switch successful.
**Step 2: Create Track B branch**
Run: `git checkout -b task/80-implementation-audit-closeout`
Expected: branch created.
**Step 3: Apply closeout edits only after real-operation confirmation**
Run: edit `docs/ouroboros/80_implementation_audit.md`
Expected: status/evidence sections updated with observed behavior, confirmation timestamp, and operational proof summary.
**Step 4: Commit closeout update**
Run: `git add docs/ouroboros/80_implementation_audit.md && git commit -m "docs: close out implementation audit items after operation confirmation (#80)"`
Expected: single-scope commit.
**Step 5: Push and open PR-B**
Run: `git push -u origin task/80-implementation-audit-closeout`
Expected: remote branch created.
Run: `tea pr create --base base/ouroboros-workflow-20260302 --head task/80-implementation-audit-closeout --title "docs: close out implementation audit items (#80)" --description "Closeout based on real-operation confirmation."`
Expected: PR URL returned.
### Task 5: Close related Gitea issues after `80` confirmation
**Files:**
- Modify: `docs/ouroboros/80_implementation_audit.md` (reference linkage only)
- Test: issue parent-child completeness check
**Step 1: Build related issue set from updated `80` audit**
Run: parse issue references from `80` updated sections.
Expected: target open issue list prepared.
**Step 2: Close child issues first**
Run: `tea issues close <child-issue-number>` for each confirmed child.
Expected: all qualifying child issues closed.
**Step 3: Close parent issues only when all children are closed**
Run: `tea issues close <parent-issue-number>`
Expected: parent closes only after dependency check passes.
**Step 4: Record closure summary in PR comment**
Run: add PR comment with closed issue IDs and parent-child resolution note.
Expected: auditable trace for closeout.
### Task 6: Validate branch hygiene and merge path
**Files:**
- Modify: none
- Test: git diff/log checks
**Step 1: Verify PR-A scope**
Run: `git diff --name-only base/ouroboros-workflow-20260302...task/86-peak-probability-plan`
Expected: only `docs/ouroboros/86_peak_probability_model_plan.md` (and intentional related plan files if any).
**Step 2: Verify PR-B scope**
Run: `git diff --name-only base/ouroboros-workflow-20260302...task/80-implementation-audit-closeout`
Expected: only `docs/ouroboros/80_implementation_audit.md` (plus any intentionally included related plan files).
**Step 3: Merge both PRs into staging base**
Run: Gitea merge flow for PR-A and PR-B
Expected: staging base includes both tracks without cross-file contamination.
**Step 4: Promote validated stream to `main`**
Run: open PR from `feature/v3-session-policy-stream` (or its validated promotion branch) to `main`
Expected: merge-based history preservation with no cherry-pick/rebase surgery.
**Step 5: Commit**
```bash
# no additional commit required in this task
```
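The scope checks in Steps 1 and 2 can be sketched in Python. This is a minimal illustration, not repository code: the helper names (`changed_files`, `scope_violations`) are hypothetical, and it assumes `git` is on `PATH`.

```python
# Sketch of the Task 6 Step 1/2 scope checks; helper names are illustrative.
import subprocess


def changed_files(base: str, head: str) -> set[str]:
    """Files changed on `head` since it diverged from `base` (three-dot diff)."""
    out = subprocess.run(
        ["git", "diff", "--name-only", f"{base}...{head}"],
        capture_output=True, text=True, check=True,
    ).stdout
    return {line for line in out.splitlines() if line}


def scope_violations(changed: set[str], allowed: set[str]) -> set[str]:
    """Return any changed files outside the PR's intended scope."""
    return changed - allowed
```

For PR-A, `allowed` would be `{"docs/ouroboros/86_peak_probability_model_plan.md"}`; a non-empty result signals cross-file contamination before merge.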


@@ -1,100 +0,0 @@
# Live Operation Issue Tracking Design
**Date:** 2026-03-02
**Scope:** Real-operation confirmation tracking for all open Gitea issues, tied to `task/80-implementation-audit-closeout`.
---
## 1. Goal
Define an issue-by-issue tracking method for real-operation confirmation so that:
- every open issue is tracked with explicit lifecycle states,
- parent-child close order is enforced,
- and final closeout is reflected in `80` audit work without loss.
---
## 2. Target Set
- Coverage target: all currently open Gitea issues (`318`-`381`, plus any issues opened during this closeout window).
- Tracking starts from open set snapshot and stays synced until closeout completion.
---
## 3. Tracking Architecture
- Primary log: direct comments on each issue ticket.
- Aggregate log: single summary comment on the `80` PR (`task/80-implementation-audit-closeout`).
- No separate tracking doc as source of truth during execution.
---
## 4. State Model and Criteria
State machine:
- `NOT_STARTED`
- `OBSERVING`
- `CONFIRMED`
- `CLOSED`
Rules:
- `CONFIRMED` requires one real-operation observation evidence item.
- `CLOSED` requires close conditions satisfied and ticket close action executed.
- Parent issue cannot become `CLOSED` until all child issues are `CLOSED`.
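The four-state lifecycle and its two gating rules can be sketched as a small state machine. This is a minimal sketch assuming only the states and rules above; the function and table names are illustrative.

```python
from enum import Enum


class IssueState(Enum):
    NOT_STARTED = "NOT_STARTED"
    OBSERVING = "OBSERVING"
    CONFIRMED = "CONFIRMED"
    CLOSED = "CLOSED"


# Forward-only transitions; no state is skipped or revisited.
_ALLOWED = {
    IssueState.NOT_STARTED: {IssueState.OBSERVING},
    IssueState.OBSERVING: {IssueState.CONFIRMED},
    IssueState.CONFIRMED: {IssueState.CLOSED},
    IssueState.CLOSED: set(),
}


def can_transition(
    current: IssueState,
    target: IssueState,
    *,
    has_evidence: bool = False,
    all_children_closed: bool = True,
) -> bool:
    if target not in _ALLOWED[current]:
        return False
    if target is IssueState.CONFIRMED and not has_evidence:
        return False  # CONFIRMED requires one real-operation evidence item
    if target is IssueState.CLOSED and not all_children_closed:
        return False  # parent close gate
    return True
```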
---
## 5. Issue Comment Template
Each issue update comment uses:
- `Status: <STATE>`
- `Observed At: <KST>, <UTC>`
- `Environment: live`
- `Evidence: <log/query/order-execution identifier>`
- `Next: <single next action>`
This template is reused for every transition for consistent auditability.
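Rendering the template from code keeps every transition comment byte-identical in shape. A minimal sketch (the function name is illustrative, not project code):

```python
def render_status_comment(
    state: str,
    kst: str,
    utc: str,
    evidence: str = "N/A",
    next_action: str = "",
) -> str:
    """Render the five-field comment body used for every state transition."""
    return "\n".join(
        [
            f"Status: {state}",
            f"Observed At: {kst}, {utc}",
            "Environment: live",
            f"Evidence: {evidence}",
            f"Next: {next_action}",
        ]
    )
```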
---
## 6. Parent-Child Policy
- Parent issue comment must list child issue IDs.
- Child issues transition independently.
- Parent close gate:
- if any child is not `CLOSED`, parent remains at most `CONFIRMED`;
- when all children are `CLOSED`, parent can transition to `CLOSED` and be closed.
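The parent close gate reduces to a set check over the dependency map. A sketch under the assumption that the map is `parent -> [child issue numbers]` (the function name is illustrative):

```python
def closable_parents(
    children_of: dict[int, list[int]], closed: set[int]
) -> set[int]:
    """Parents whose every child is already CLOSED, and so may close now."""
    return {
        parent
        for parent, children in children_of.items()
        if parent not in closed and all(child in closed for child in children)
    }
```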
---
## 7. Operational Loop
1. Initialize all open issues with `NOT_STARTED` comment.
2. Move active issues to `OBSERVING` when live confirmation begins.
3. Move to `CONFIRMED` after one real-operation evidence capture.
4. Close child issues first, then close parent issue(s).
5. Update the single `80` PR summary comment whenever issue state changes.
6. Reflect final confirmed/closed outcomes in `docs/ouroboros/80_implementation_audit.md`.
---
## 8. Evidence and Time Rules
- Evidence must be replayable/referenceable by identifier (not vague narrative only).
- Every update includes both KST and UTC timestamps.
- Use absolute dates (example: `2026-03-02`) to avoid ambiguity in session handoff.
---
## 9. Completion Conditions
Tracking design is complete when:
- all open issues are represented in issue comments with valid lifecycle state,
- parent-child closure constraints are respected,
- `80` PR summary comment reflects current global status,
- and final `80` audit document update is ready for closeout commit.


@@ -1,203 +0,0 @@
# Live Operation Issue Tracking Implementation Plan
> **For Claude:** REQUIRED SUB-SKILL: Use superpowers:executing-plans to implement this plan task-by-task.
**Goal:** Track real-operation confirmation for all open Gitea issues with per-issue comments and a synchronized summary on the `80` PR, then close issues in dependency-safe order.
**Architecture:** Use issue tickets as the primary state log with a strict four-state lifecycle. Maintain one aggregate status comment on the `task/80-implementation-audit-closeout` PR. Enforce child-first closure and parent close gating.
**Tech Stack:** Git CLI, Gitea CLI (`tea`), Markdown (`docs/ouroboros/80_implementation_audit.md`)
---
### Task 1: Snapshot open issues and classify parent-child graph
**Files:**
- Create: none
- Modify: none
- Test: `tea issues ls` output inspection
**Step 1: Capture open issue list**
Run: `tea issues ls --state open --limit 200`
Expected: list of all open issues (including `318`-`381`).
**Step 2: Fetch details for each issue**
Run: `for n in $(tea issues ls --state open --limit 200 --output json | jq -r '.[].number'); do tea issue view "$n"; done`
Expected: issue bodies/comments available for relation parsing.
**Step 3: Extract parent-child references**
Run: parse issue text for relation keywords (`parent`, `child`, `depends on`, `blocks`).
Expected: dependency map ready.
**Step 4: Validate unresolved dependency edges**
Run: verify each parent references only existing issue numbers.
Expected: no dangling child references.
**Step 5: Commit**
```bash
# no commit (discovery task)
```
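Step 3's keyword parsing could look like the sketch below. The reference syntax (a relation keyword followed by `#<number>`) is an assumption about how these issue bodies are written, not a documented convention of the repository.

```python
import re

# Hypothetical convention: relation keyword followed by an issue reference.
_RELATION = re.compile(
    r"(?P<kind>parent|child|depends on|blocks)\s*:?\s*#(?P<num>\d+)",
    re.IGNORECASE,
)


def extract_relations(issue_body: str) -> list[tuple[str, int]]:
    """Return (keyword, issue-number) pairs found in an issue body."""
    return [
        (match.group("kind").lower(), int(match.group("num")))
        for match in _RELATION.finditer(issue_body)
    ]
```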
### Task 2: Initialize per-issue tracking comments (`NOT_STARTED`)
**Files:**
- Create: none
- Modify: none
- Test: issue comment presence check
**Step 1: Prepare standard template text**
```text
Status: NOT_STARTED
Observed At: <KST>, <UTC>
Environment: live
Evidence: N/A
Next: Begin live observation for this issue.
```
**Step 2: Post initialization comment to each open issue**
Run: `tea issues comment <issue-number> --message "<template>"`
Expected: each open issue has one initialization comment.
**Step 3: Verify comment coverage**
Run: sample-check each issue (`tea issue view <issue-number>`) for latest status comment.
Expected: 100% coverage across open issues.
**Step 4: Commit**
```bash
# no commit (ticket operations)
```
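Steps 2-3 amount to fanning one template out over the open-issue list. A dry-run sketch that only builds the `tea` invocations shown in this plan (the wrapper name is illustrative; set `dry_run=False` to actually execute):

```python
import subprocess


def post_init_comments(
    issue_numbers: list[int], template: str, *, dry_run: bool = True
) -> list[list[str]]:
    """Build (and optionally run) the per-issue `tea` comment commands."""
    commands = [
        ["tea", "issues", "comment", str(number), "--message", template]
        for number in issue_numbers
    ]
    if not dry_run:
        for command in commands:
            subprocess.run(command, check=True)
    return commands
```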
### Task 3: Create and maintain `80` PR aggregate status comment
**Files:**
- Modify: `docs/ouroboros/80_implementation_audit.md` (final sync only, not this task)
- Test: PR comment contains full issue table
**Step 1: Identify PR number for `task/80-implementation-audit-closeout`**
Run: `tea pr ls --state open`
Expected: PR number for `task/80...` identified.
**Step 2: Post initial summary comment**
Run: `tea pr comment <pr-number> --message "<issue-state-summary-table>"`
Expected: one aggregate comment listing `issue | state | last observed at | parent/child`.
**Step 3: Define update protocol**
Run: update same summary comment on every issue state change.
Expected: summary remains single-source aggregate view.
**Step 4: Commit**
```bash
# no commit (ticket/PR operations)
```
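The aggregate summary comment in Step 2 is a plain markdown table, so it can be regenerated from tracked rows on every state change. A sketch (function name illustrative; columns follow the `issue | state | last observed at | parent/child` layout above):

```python
def summary_table(rows: list[tuple[str, str, str, str]]) -> str:
    """Render the aggregate status table for the PR summary comment."""
    lines = [
        "| issue | state | last observed at | parent/child |",
        "| --- | --- | --- | --- |",
    ]
    for issue, state, observed_at, relation in rows:
        lines.append(f"| {issue} | {state} | {observed_at} | {relation} |")
    return "\n".join(lines)
```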
### Task 4: Execute live observation loop and mark `OBSERVING` -> `CONFIRMED`
**Files:**
- Create: none
- Modify: none
- Test: evidence-backed state transitions present in issue comments
**Step 1: Mark active issue as `OBSERVING`**
Run: `tea issues comment <issue-number> --message "Status: OBSERVING ..."`
Expected: issue history shows observation started.
**Step 2: Capture one real-operation evidence item**
Run: collect log/query/order identifier tied to issue behavior.
Expected: one concrete evidence reference captured.
**Step 3: Mark issue `CONFIRMED`**
Run: `tea issues comment <issue-number> --message "Status: CONFIRMED ... Evidence: <id> ..."`
Expected: issue has explicit confirmed state.
**Step 4: Sync PR aggregate summary**
Run: update `80` PR summary comment row for that issue.
Expected: PR summary and issue status aligned.
**Step 5: Commit**
```bash
# no commit (ticket operations)
```
### Task 5: Close issues with child-first dependency enforcement
**Files:**
- Create: none
- Modify: none
- Test: parent close attempted only after all children closed
**Step 1: Close confirmed child issues first**
Run: `tea issues close <child-issue-number>`
Expected: child issue state becomes closed.
**Step 2: Verify all children for each parent**
Run: evaluate dependency map for remaining non-closed children.
Expected: parent close gate decision available.
**Step 3: Close eligible parent issues**
Run: `tea issues close <parent-issue-number>`
Expected: parent is closed only when all children are closed.
**Step 4: Post closure comment and sync PR summary**
Run: comment `Status: CLOSED` on closed issue and update PR summary row.
Expected: closure trace exists in both issue and PR aggregate.
**Step 5: Commit**
```bash
# no commit (ticket operations)
```
### Task 6: Reflect final tracked outcomes in `80` audit and commit
**Files:**
- Modify: `docs/ouroboros/80_implementation_audit.md`
- Test: markdown consistency + traceability check
**Step 1: Update `80` with final live-confirmed statuses**
Run: edit status lines and evidence summary sections.
Expected: document reflects final issue states and confirmation notes.
**Step 2: Add issue closure summary block**
Run: include closed child/parent sequence summary.
Expected: audit reader can verify closure logic quickly.
**Step 3: Validate references**
Run: ensure issue numbers in doc match actual closed/open states.
Expected: no mismatch between doc and Gitea.
**Step 4: Commit closeout doc update**
```bash
git add docs/ouroboros/80_implementation_audit.md
git commit -m "docs: finalize 80 audit with live-operation issue confirmations"
```
**Step 5: Push branch**
Run: `git push`
Expected: closeout commit available on `task/80-implementation-audit-closeout`.

View File

@@ -1,99 +0,0 @@
# Ouroboros Dual-Track Workflow Design (Revised)
**Date:** 2026-03-02
**Scope:** `docs/ouroboros/86_peak_probability_model_plan.md` and `docs/ouroboros/80_implementation_audit.md`
---
## 1. Goal
Prevent work loss and history contamination across two parallel tracks, and correct an invalid starting condition (design commits created on a disposable temporary branch):
- Track A: open and iterate the `86` plan as a dedicated PR.
- Track B: finalize `80` only after real-operation behavior is confirmed, then reflect completion and close related Gitea open issues.
---
## 2. Constraints and Current Context
- Current temporary branch is disposable and must not remain the authoritative base.
- Canonical base line for current work is `feature/v3-session-policy-stream`.
- Final integration strategy is to merge the full `feature/v3-session-policy-stream` line into `main` after operation-level validation is complete.
---
## 3. Approaches Considered
1. Keep using temporary branch and continue from existing design commits.
2. Create a clean base from `feature/v3-session-policy-stream` and move only plan artifacts as one commit (recommended).
3. Split into two fresh bases (`86` and `80` separately) immediately.
Recommendation rationale:
- Approach 2 removes temporary-branch contamination risk.
- It preserves clean lineage with a single "official starting point" commit.
- It keeps downstream split for `86` and `80` straightforward.
---
## 4. Approved Branch and PR Topology
- Long-lived source baseline: `feature/v3-session-policy-stream`
- New official base: `base/ouroboros-workflow-20260302` (from `feature/v3-session-policy-stream`)
- Official base contains only two workflow plan docs squashed into one commit.
- Track A branch: `task/86-peak-probability-plan` (from official base)
- Track B branch: `task/80-implementation-audit-closeout` (from official base)
PR direction:
- PR-A (`86`): `task/86-peak-probability-plan -> base/ouroboros-workflow-20260302`
- PR-B (`80`): `task/80-implementation-audit-closeout -> base/ouroboros-workflow-20260302`
- Final integration: `feature/v3-session-policy-stream` line merged into `main`
---
## 5. Completion Criteria per Track
### Track A (`86`)
- Open initial PR early with the planning document.
- Continue iterative concretization in follow-up commits on the same branch.
### Track B (`80`)
- Do not close based on test-command artifacts alone.
- Close only when modified behavior is confirmed in real operation.
- Update `80_implementation_audit.md` with:
- observed behavior changes,
- confirmation timing,
- operation-side evidence summary.
- After `80` real-operation confirmation and doc update, close related Gitea open issues immediately (same phase).
- Parent issues close only when all child issues are confirmed closed.
---
## 6. Loss-Prevention Rules
- Preserve temporary-branch unmerged changes first (stash or archival commit).
- Exclude `workflow/session-handover.md` and `docs/ouroboros/86_peak_probability_model_plan.md` from official base-seeding commit.
- Enforce file-scope separation:
- PR-A should avoid `80` edits.
- PR-B should avoid `86` edits.
- Keep commit messages scoped to one track per commit.
---
## 7. Merge Strategy Decision
- For this workflow, prefer merge-based lineage preservation.
- Because the final promotion unit is the full `feature/v3-session-policy-stream` stream, avoid cherry-pick or rebase surgery unless a contamination incident occurs.
---
## 8. Immediate Next Actions
1. Seed `base/ouroboros-workflow-20260302` from `feature/v3-session-policy-stream` with one squashed workflow-doc commit.
2. Verify seed commit file scope contains only two plan docs.
3. Create and open PR-A for `86`.
4. Complete real-operation confirmation for `80`, update audit doc, then close related Gitea issues (children first, then parent).
5. Merge validated `feature/v3-session-policy-stream` line into `main`.


@@ -8,8 +8,32 @@ CHECK_INTERVAL="${CHECK_INTERVAL:-30}"
 TMUX_AUTO="${TMUX_AUTO:-true}"
 TMUX_ATTACH="${TMUX_ATTACH:-true}"
 TMUX_SESSION_PREFIX="${TMUX_SESSION_PREFIX:-ouroboros_overnight}"
+STARTUP_GRACE_SEC="${STARTUP_GRACE_SEC:-3}"
+dashboard_port="${DASHBOARD_PORT:-8080}"
+APP_CMD_BIN="${APP_CMD_BIN:-}"
+APP_CMD_ARGS="${APP_CMD_ARGS:-}"
+RUNS_DASHBOARD="false"
-if [ -z "${APP_CMD:-}" ]; then
+# Custom override contract:
+# 1) Preferred: APP_CMD_BIN + APP_CMD_ARGS
+#    - APP_CMD_BIN is treated as a single executable token.
+#    - APP_CMD_ARGS uses shell-style word splitting; quote/escape inside this
+#      variable is NOT preserved as a nested shell parse.
+# 2) Legacy fallback: APP_CMD (raw shell command string)
+#    - This path remains for backward compatibility.
+#    - When APP_CMD includes --dashboard, caller should include explicit
+#      DASHBOARD_PORT assignment in APP_CMD if non-default port is required.
+if [ -n "$APP_CMD_BIN" ]; then
+  USE_DEFAULT_APP_CMD="false"
+  USE_SAFE_CUSTOM_APP_CMD="true"
+  APP_CMD="${APP_CMD_BIN} ${APP_CMD_ARGS}"
+  if [[ " $APP_CMD_ARGS " == *" --dashboard "* ]]; then
+    RUNS_DASHBOARD="true"
+  fi
+elif [ -z "${APP_CMD:-}" ]; then
+  USE_DEFAULT_APP_CMD="true"
+  USE_SAFE_CUSTOM_APP_CMD="false"
   if [ -x ".venv/bin/python" ]; then
     PYTHON_BIN=".venv/bin/python"
   elif command -v python3 >/dev/null 2>&1; then
@@ -21,9 +45,14 @@ if [ -z "${APP_CMD:-}" ]; then
     exit 1
   fi
-  dashboard_port="${DASHBOARD_PORT:-8080}"
-  APP_CMD="DASHBOARD_PORT=$dashboard_port $PYTHON_BIN -m src.main --mode=live --dashboard"
+  APP_CMD="$PYTHON_BIN -m src.main --mode=live --dashboard"
+  RUNS_DASHBOARD="true"
+else
+  USE_DEFAULT_APP_CMD="false"
+  USE_SAFE_CUSTOM_APP_CMD="false"
+  if [[ "$APP_CMD" == *"--dashboard"* ]]; then
+    RUNS_DASHBOARD="true"
+  fi
 fi
 mkdir -p "$LOG_DIR"
@@ -34,6 +63,24 @@ WATCHDOG_LOG="$LOG_DIR/watchdog_${timestamp}.log"
 PID_FILE="$LOG_DIR/app.pid"
 WATCHDOG_PID_FILE="$LOG_DIR/watchdog.pid"
+is_port_in_use() {
+  local port="$1"
+  if command -v ss >/dev/null 2>&1; then
+    ss -ltn 2>/dev/null | grep -Eq ":${port}[[:space:]]"
+    return $?
+  fi
+  if command -v lsof >/dev/null 2>&1; then
+    lsof -nP -iTCP:"$port" -sTCP:LISTEN >/dev/null 2>&1
+    return $?
+  fi
+  if command -v netstat >/dev/null 2>&1; then
+    netstat -ltn 2>/dev/null | grep -Eq "[:.]${port}[[:space:]]"
+    return $?
+  fi
+  # No supported socket inspection command found.
+  return 1
+}
 if [ -f "$PID_FILE" ]; then
   old_pid="$(cat "$PID_FILE" || true)"
   if [ -n "$old_pid" ] && kill -0 "$old_pid" 2>/dev/null; then
@@ -43,7 +90,29 @@ if [ -f "$PID_FILE" ]; then
 fi
 echo "[$(date -u +"%Y-%m-%dT%H:%M:%SZ")] starting: $APP_CMD" | tee -a "$RUN_LOG"
-nohup bash -lc "$APP_CMD" >>"$RUN_LOG" 2>&1 &
+if [ "$RUNS_DASHBOARD" = "true" ] && is_port_in_use "$dashboard_port"; then
+  echo "[$(date -u +"%Y-%m-%dT%H:%M:%SZ")] startup failed: dashboard port ${dashboard_port} already in use" | tee -a "$RUN_LOG"
+  exit 1
+fi
+if [ "$USE_DEFAULT_APP_CMD" = "true" ]; then
+  # Default path avoids shell word-splitting on executable paths.
+  nohup env DASHBOARD_PORT="$dashboard_port" "$PYTHON_BIN" -m src.main --mode=live --dashboard >>"$RUN_LOG" 2>&1 &
+elif [ "$USE_SAFE_CUSTOM_APP_CMD" = "true" ]; then
+  # Safer custom path: executable path is handled as a single token.
+  if [ -n "$APP_CMD_ARGS" ]; then
+    # shellcheck disable=SC2206
+    app_args=( $APP_CMD_ARGS )
+    nohup env DASHBOARD_PORT="$dashboard_port" "$APP_CMD_BIN" "${app_args[@]}" >>"$RUN_LOG" 2>&1 &
+  else
+    nohup env DASHBOARD_PORT="$dashboard_port" "$APP_CMD_BIN" >>"$RUN_LOG" 2>&1 &
+  fi
+else
+  # Custom APP_CMD is treated as a shell command string.
+  # If executable paths include spaces, they must be quoted inside APP_CMD.
+  # Legacy compatibility path: caller owns quoting and env var injection.
+  nohup bash -lc "exec env $APP_CMD" >>"$RUN_LOG" 2>&1 &
+fi
 app_pid=$!
 echo "$app_pid" > "$PID_FILE"
@@ -54,6 +123,20 @@ nohup env PID_FILE="$PID_FILE" LOG_FILE="$WATCHDOG_LOG" CHECK_INTERVAL="$CHECK_I
 watchdog_pid=$!
 echo "$watchdog_pid" > "$WATCHDOG_PID_FILE"
+sleep "$STARTUP_GRACE_SEC"
+if ! kill -0 "$app_pid" 2>/dev/null; then
+  echo "[$(date -u +"%Y-%m-%dT%H:%M:%SZ")] startup failed: app process exited early (pid=$app_pid)" | tee -a "$RUN_LOG"
+  [ -n "${watchdog_pid:-}" ] && kill "$watchdog_pid" 2>/dev/null || true
+  tail -n 20 "$RUN_LOG" || true
+  exit 1
+fi
+if ! kill -0 "$watchdog_pid" 2>/dev/null; then
+  echo "[$(date -u +"%Y-%m-%dT%H:%M:%SZ")] startup failed: watchdog exited early (pid=$watchdog_pid)" | tee -a "$WATCHDOG_LOG"
+  kill "$app_pid" 2>/dev/null || true
+  tail -n 20 "$WATCHDOG_LOG" || true
+  exit 1
+fi
 cat <<EOF
 시작 완료
 - app pid: $app_pid


@@ -7,12 +7,15 @@ ROOT_DIR="${ROOT_DIR:-/home/agentson/repos/The-Ouroboros}"
 LOG_DIR="${LOG_DIR:-$ROOT_DIR/data/overnight}"
 INTERVAL_SEC="${INTERVAL_SEC:-60}"
 MAX_HOURS="${MAX_HOURS:-24}"
+MAX_LOOPS="${MAX_LOOPS:-0}"
 POLICY_TZ="${POLICY_TZ:-Asia/Seoul}"
+DASHBOARD_PORT="${DASHBOARD_PORT:-8080}"
 cd "$ROOT_DIR"
 OUT_LOG="$LOG_DIR/runtime_verify_$(date +%Y%m%d_%H%M%S).log"
 END_TS=$(( $(date +%s) + MAX_HOURS*3600 ))
+loops=0
 log() {
   printf '%s %s\n' "$(date -u +%Y-%m-%dT%H:%M:%SZ)" "$1" | tee -a "$OUT_LOG" >/dev/null
@@ -31,6 +34,11 @@ check_signal() {
   return 1
 }
+find_live_pids() {
+  # Detect live-mode process even when run_overnight pid files are absent.
+  pgrep -af "[s]rc.main --mode=live" 2>/dev/null | awk '{print $1}' | tr '\n' ',' | sed 's/,$//'
+}
 check_forbidden() {
   local name="$1"
   local pattern="$2"
@@ -44,42 +52,94 @@ check_forbidden() {
   return 0
 }
+is_port_listening() {
+  local port="$1"
+  if command -v ss >/dev/null 2>&1; then
+    ss -ltn 2>/dev/null | grep -Eq ":${port}[[:space:]]"
+    return $?
+  fi
+  if command -v lsof >/dev/null 2>&1; then
+    lsof -nP -iTCP:"$port" -sTCP:LISTEN >/dev/null 2>&1
+    return $?
+  fi
+  if command -v netstat >/dev/null 2>&1; then
+    netstat -ltn 2>/dev/null | grep -Eq "[:.]${port}[[:space:]]"
+    return $?
+  fi
+  return 1
+}
 log "[INFO] runtime verify monitor started interval=${INTERVAL_SEC}s max_hours=${MAX_HOURS} policy_tz=${POLICY_TZ}"
 while true; do
+  loops=$((loops + 1))
   now=$(date +%s)
   if [ "$now" -ge "$END_TS" ]; then
     log "[INFO] monitor completed (time window reached)"
     exit 0
   fi
+  if [ "$MAX_LOOPS" -gt 0 ] && [ "$loops" -gt "$MAX_LOOPS" ]; then
+    log "[INFO] monitor completed (max loops reached)"
+    exit 0
+  fi
   latest_run="$(ls -t "$LOG_DIR"/run_*.log 2>/dev/null | head -n1 || true)"
-  if [ -z "$latest_run" ]; then
-    log "[ANOMALY] no run log found"
-    sleep "$INTERVAL_SEC"
-    continue
-  fi
   # Basic liveness hints.
   app_pid="$(cat "$LOG_DIR/app.pid" 2>/dev/null || true)"
   wd_pid="$(cat "$LOG_DIR/watchdog.pid" 2>/dev/null || true)"
+  live_pids="$(find_live_pids)"
   app_alive=0
   wd_alive=0
   port_alive=0
   [ -n "$app_pid" ] && kill -0 "$app_pid" 2>/dev/null && app_alive=1
   [ -n "$wd_pid" ] && kill -0 "$wd_pid" 2>/dev/null && wd_alive=1
-  ss -ltnp 2>/dev/null | rg -q ':8080' && port_alive=1
-  log "[HEARTBEAT] run_log=$latest_run app_alive=$app_alive watchdog_alive=$wd_alive port8080=$port_alive"
+  if [ "$app_alive" -eq 0 ] && [ -n "$live_pids" ]; then
+    app_alive=1
+  fi
+  is_port_listening "$DASHBOARD_PORT" && port_alive=1
+  log "[HEARTBEAT] run_log=${latest_run:-none} app_alive=$app_alive watchdog_alive=$wd_alive port=${DASHBOARD_PORT} alive=$port_alive live_pids=${live_pids:-none}"
+  defer_log_checks=0
+  if [ -z "$latest_run" ] && [ "$app_alive" -eq 1 ]; then
+    defer_log_checks=1
+    log "[INFO] run log not yet available; defer log-based coverage checks"
+  fi
+  if [ -z "$latest_run" ] && [ "$defer_log_checks" -eq 0 ]; then
+    log "[ANOMALY] no run log found"
+  fi
   # Coverage matrix rows (session paths and policy gate evidence).
   not_observed=0
+  if [ "$app_alive" -eq 1 ]; then
+    log "[COVERAGE] LIVE_MODE=PASS source=process_liveness"
+  else
+    if [ -n "$latest_run" ]; then
       check_signal "LIVE_MODE" "Mode: live" "$latest_run" || not_observed=$((not_observed+1))
+    else
+      log "[COVERAGE] LIVE_MODE=NOT_OBSERVED reason=no_run_log_no_live_pid"
+      not_observed=$((not_observed+1))
+    fi
+  fi
+  if [ "$defer_log_checks" -eq 1 ]; then
+    for deferred in KR_LOOP NXT_PATH US_PRE_PATH US_DAY_PATH US_AFTER_PATH ORDER_POLICY_SESSION; do
+      log "[COVERAGE] ${deferred}=DEFERRED reason=no_run_log_process_alive"
+    done
+  elif [ -n "$latest_run" ]; then
    check_signal "KR_LOOP" "Processing market: Korea Exchange" "$latest_run" || not_observed=$((not_observed+1))
    check_signal "NXT_PATH" "NXT_PRE|NXT_AFTER|session=NXT_" "$latest_run" || not_observed=$((not_observed+1))
    check_signal "US_PRE_PATH" "US_PRE|session=US_PRE" "$latest_run" || not_observed=$((not_observed+1))
    check_signal "US_DAY_PATH" "US_DAY|session=US_DAY|Processing market: .*NASDAQ|Processing market: .*NYSE|Processing market: .*AMEX" "$latest_run" || not_observed=$((not_observed+1))
    check_signal "US_AFTER_PATH" "US_AFTER|session=US_AFTER" "$latest_run" || not_observed=$((not_observed+1))
    check_signal "ORDER_POLICY_SESSION" "Order policy rejected .*\\[session=" "$latest_run" || not_observed=$((not_observed+1))
+  else
+    for missing in KR_LOOP NXT_PATH US_PRE_PATH US_DAY_PATH US_AFTER_PATH ORDER_POLICY_SESSION; do
+      log "[COVERAGE] ${missing}=NOT_OBSERVED reason=no_run_log"
+      not_observed=$((not_observed+1))
+    done
+  fi
   if [ "$not_observed" -gt 0 ]; then
     log "[ANOMALY] coverage_not_observed=$not_observed (treat as FAIL)"
@@ -95,11 +155,17 @@ while true; do
     is_weekend=1
   fi
-  if [ "$is_weekend" -eq 1 ]; then
+  if [ "$defer_log_checks" -eq 1 ]; then
+    log "[FORBIDDEN] WEEKEND_KR_SESSION_ACTIVE=SKIP reason=no_run_log_process_alive"
+  elif [ "$is_weekend" -eq 1 ]; then
     # Weekend policy: KR regular session loop must never appear.
+    if [ -n "$latest_run" ]; then
       check_forbidden "WEEKEND_KR_SESSION_ACTIVE" \
         "Market session active: KR|session=KRX_REG|Processing market: Korea Exchange" \
         "$latest_run" || forbidden_hits=$((forbidden_hits+1))
+    else
+      log "[FORBIDDEN] WEEKEND_KR_SESSION_ACTIVE=SKIP reason=no_run_log"
+    fi
   else
     log "[FORBIDDEN] WEEKEND_KR_SESSION_ACTIVE=SKIP reason=weekday"
   fi


@@ -5,6 +5,8 @@ from __future__ import annotations
 import argparse
 import json
+import os
+import shutil
 import re
 import subprocess
 import sys
@@ -12,11 +14,31 @@ from pathlib import Path
 HEADER_PATTERN = re.compile(r"^##\s+\S+", re.MULTILINE)
 LIST_ITEM_PATTERN = re.compile(r"^\s*(?:-|\*|\d+\.)\s+\S+", re.MULTILINE)
+FENCED_CODE_PATTERN = re.compile(r"```.*?```", re.DOTALL)
+INLINE_CODE_PATTERN = re.compile(r"`[^`]*`")
+def _strip_code_segments(text: str) -> str:
+    without_fences = FENCED_CODE_PATTERN.sub("", text)
+    return INLINE_CODE_PATTERN.sub("", without_fences)
+def resolve_tea_binary() -> str:
+    tea_from_path = shutil.which("tea")
+    if tea_from_path:
+        return tea_from_path
+    tea_home = Path.home() / "bin" / "tea"
+    if tea_home.exists() and tea_home.is_file() and os.access(tea_home, os.X_OK):
+        return str(tea_home)
+    raise RuntimeError("tea binary not found (checked PATH and ~/bin/tea)")
 def validate_pr_body_text(text: str) -> list[str]:
     errors: list[str] = []
-    if "\\n" in text and "\n" not in text:
+    searchable = _strip_code_segments(text)
+    if "\\n" in searchable:
         errors.append("body contains escaped newline sequence (\\n)")
     if text.count("```") % 2 != 0:
         errors.append("body has unbalanced fenced code blocks (``` count is odd)")
@@ -28,10 +50,11 @@ def validate_pr_body_text(text: str) -> list[str]:
 def fetch_pr_body(pr_number: int) -> str:
+    tea_binary = resolve_tea_binary()
     try:
         completed = subprocess.run(
             [
-                "tea",
+                tea_binary,
                 "api",
                 "-R",
                 "origin",
@@ -41,7 +64,7 @@ def fetch_pr_body(pr_number: int) -> str:
             capture_output=True,
             text=True,
         )
-    except (subprocess.CalledProcessError, FileNotFoundError) as exc:
+    except (subprocess.CalledProcessError, FileNotFoundError, PermissionError) as exc:
         raise RuntimeError(f"failed to fetch PR #{pr_number}: {exc}") from exc
     try:


@@ -12,6 +12,7 @@ import json
import logging import logging
import signal import signal
import threading import threading
from collections.abc import Awaitable, Callable
from datetime import UTC, datetime from datetime import UTC, datetime
from typing import Any from typing import Any
@@ -2084,6 +2085,15 @@ async def trading_cycle(
quantity=quantity, quantity=quantity,
price=order_price, price=order_price,
) )
if result.get("rt_cd", "0") != "0":
order_succeeded = False
msg1 = result.get("msg1") or ""
logger.warning(
"KR order not accepted for %s: rt_cd=%s msg=%s",
stock_code,
result.get("rt_cd"),
msg1,
)
else: else:
# For overseas orders, always use limit orders (지정가): # For overseas orders, always use limit orders (지정가):
# - KIS market orders (ORD_DVSN=01) calculate quantity based on upper limit # - KIS market orders (ORD_DVSN=01) calculate quantity based on upper limit
@@ -3293,6 +3303,15 @@ async def run_daily_session(
quantity=quantity, quantity=quantity,
price=order_price, price=order_price,
) )
if result.get("rt_cd", "0") != "0":
order_succeeded = False
daily_msg1 = result.get("msg1") or ""
logger.warning(
"KR order not accepted for %s: rt_cd=%s msg=%s",
stock_code,
result.get("rt_cd"),
daily_msg1,
)
else: else:
# KIS VTS only accepts limit orders; use 0.5% premium for BUY # KIS VTS only accepts limit orders; use 0.5% premium for BUY
if decision.action == "BUY": if decision.action == "BUY":
@@ -3532,6 +3551,47 @@ def _run_context_scheduler(
) )
def _has_market_session_transition(
market_states: dict[str, str], market_code: str, session_id: str
) -> bool:
"""Return True when market session changed (or market has no prior state)."""
return market_states.get(market_code) != session_id
def _should_rescan_market(
*, last_scan: float, now_timestamp: float, rescan_interval: float, session_changed: bool
) -> bool:
"""Force rescan on session transition; otherwise follow interval cadence."""
return session_changed or (now_timestamp - last_scan >= rescan_interval)
async def _run_markets_in_parallel(
markets: list[Any], processor: Callable[[Any], Awaitable[None]]
) -> None:
"""Run market processors in parallel and fail fast on the first exception."""
if not markets:
return
tasks = [asyncio.create_task(processor(market)) for market in markets]
done, pending = await asyncio.wait(tasks, return_when=asyncio.FIRST_EXCEPTION)
first_exc: BaseException | None = None
for task in done:
exc = task.exception()
if exc is not None and first_exc is None:
first_exc = exc
if first_exc is not None:
for task in pending:
task.cancel()
if pending:
await asyncio.gather(*pending, return_exceptions=True)
raise first_exc
if pending:
await asyncio.gather(*pending)
async def _run_evolution_loop(
evolution_optimizer: EvolutionOptimizer,
telegram: TelegramClient,
@@ -4045,7 +4105,7 @@ async def run(settings: Settings) -> None:
last_scan_time: dict[str, float] = {}
# Track market open/close state for notifications
- _market_states: dict[str, bool] = {}  # market_code -> is_open
+ _market_states: dict[str, str] = {}  # market_code -> session_id
# Trading control events
shutdown = asyncio.Event()
@@ -4163,8 +4223,8 @@ async def run(settings: Settings) -> None:
if not open_markets:
# Notify market close for any markets that were open
- for market_code, is_open in list(_market_states.items()):
+ for market_code, session_id in list(_market_states.items()):
- if is_open:
+ if session_id:
try:
from src.markets.schedule import MARKETS
@@ -4181,7 +4241,7 @@ async def run(settings: Settings) -> None:
)
except Exception as exc:
logger.warning("Market close notification failed: %s", exc)
- _market_states[market_code] = False
+ _market_states.pop(market_code, None)
# Clear playbook for closed market (new one generated next open)
playbooks.pop(market_code, None)
@@ -4206,10 +4266,9 @@ async def run(settings: Settings) -> None:
await asyncio.sleep(TRADE_INTERVAL_SECONDS)
continue
- # Process each open market
- for market in open_markets:
+ async def _process_realtime_market(market: MarketInfo) -> None:
if shutdown.is_set():
- break
+ return
session_info = get_session_info(market)
_session_risk_overrides(market=market, settings=settings)
@@ -4227,13 +4286,16 @@ async def run(settings: Settings) -> None:
settings=settings,
)
- # Notify market open if it just opened
- if not _market_states.get(market.code, False):
+ # Notify on market/session transition (e.g., US_PRE -> US_REG)
+ session_changed = _has_market_session_transition(
+ _market_states, market.code, session_info.session_id
+ )
+ if session_changed:
try:
await telegram.notify_market_open(market.name)
except Exception as exc:
logger.warning("Market open notification failed: %s", exc)
- _market_states[market.code] = True
+ _market_states[market.code] = session_info.session_id
# Check and handle domestic pending (unfilled) limit orders.
if market.is_domestic:
@@ -4265,7 +4327,12 @@ async def run(settings: Settings) -> None:
now_timestamp = asyncio.get_event_loop().time()
last_scan = last_scan_time.get(market.code, 0.0)
rescan_interval = settings.RESCAN_INTERVAL_SECONDS
- if now_timestamp - last_scan >= rescan_interval:
+ if _should_rescan_market(
+ last_scan=last_scan,
+ now_timestamp=now_timestamp,
+ rescan_interval=rescan_interval,
+ session_changed=session_changed,
+ ):
try:
logger.info("Smart Scanner: Scanning %s market", market.name)
@@ -4290,12 +4357,9 @@ async def run(settings: Settings) -> None:
)
if candidates:
- # Use scanner results directly as trading candidates
active_stocks[market.code] = smart_scanner.get_stock_codes(
candidates
)
- # Store candidates per market for selection context logging
scan_candidates[market.code] = {c.stock_code: c for c in candidates}
logger.info(
@@ -4305,12 +4369,8 @@ async def run(settings: Settings) -> None:
[f"{c.stock_code}(RSI={c.rsi:.0f})" for c in candidates],
)
- # Get market-local date for playbook keying
market_today = datetime.now(market.timezone).date()
- # Load or generate playbook (1 Gemini call per market per day)
if market.code not in playbooks:
- # Try DB first (survives process restart)
stored_pb = playbook_store.load(market_today, market.code)
if stored_pb is not None:
playbooks[market.code] = stored_pb
@@ -4370,12 +4430,6 @@ async def run(settings: Settings) -> None:
except Exception as exc:
logger.error("Smart Scanner failed for %s: %s", market.name, exc)
- # Get active stocks from scanner (dynamic, no static fallback).
- # Also include currently-held positions so stop-loss /
- # take-profit can fire even when a holding drops off the
- # scanner. Broker balance is the source of truth here —
- # unlike the local DB it reflects actual fills and any
- # manual trades done outside the bot.
scanner_codes = active_stocks.get(market.code, [])
try:
if market.is_domestic:
@@ -4406,16 +4460,14 @@ async def run(settings: Settings) -> None:
if not stock_codes:
logger.debug("No active stocks for market %s", market.code)
- continue
+ return
logger.info("Processing market: %s (%d stocks)", market.name, len(stock_codes))
- # Process each stock from scanner results
for stock_code in stock_codes:
if shutdown.is_set():
break
- # Get playbook for this market
market_playbook = playbooks.get(
market.code,
PreMarketPlanner._empty_playbook(
@@ -4423,7 +4475,6 @@ async def run(settings: Settings) -> None:
),
)
- # Retry logic for connection errors
for attempt in range(1, MAX_CONNECTION_RETRIES + 1):
try:
await trading_cycle(
@@ -4443,7 +4494,7 @@ async def run(settings: Settings) -> None:
settings,
buy_cooldown,
)
- break  # Success — exit retry loop
+ break
except CircuitBreakerTripped as exc:
logger.critical("Circuit breaker tripped — shutting down")
try:
@@ -4465,17 +4516,19 @@ async def run(settings: Settings) -> None:
MAX_CONNECTION_RETRIES,
exc,
)
- await asyncio.sleep(2**attempt)  # Exponential backoff
+ await asyncio.sleep(2**attempt)
else:
logger.error(
"Connection error for %s (all retries exhausted): %s",
stock_code,
exc,
)
- break  # Give up on this stock
+ break
except Exception as exc:
logger.exception("Unexpected error for %s: %s", stock_code, exc)
- break  # Don't retry on unexpected errors
+ break
+ await _run_markets_in_parallel(open_markets, _process_realtime_market)
# Log priority queue metrics periodically
metrics = await priority_queue.get_metrics()
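The per-stock retry loop above keeps the `2**attempt` exponential backoff; a compact standalone sketch of that pattern (the names and the scaled-down sleep are illustrative, not the project's code):

```python
import asyncio


class TransientError(Exception):
    """Stand-in for a recoverable connection error."""


async def call_with_retries(op, max_retries: int = 3):
    """Retry `op` on TransientError, sleeping 2**attempt between tries
    (scaled down here so the sketch finishes quickly)."""
    for attempt in range(1, max_retries + 1):
        try:
            return await op()
        except TransientError:
            if attempt == max_retries:
                raise  # all retries exhausted
            await asyncio.sleep(0.001 * 2**attempt)


calls = {"n": 0}


async def flaky() -> str:
    calls["n"] += 1
    if calls["n"] < 3:
        raise TransientError("connection reset")
    return "ok"


result = asyncio.run(call_with_retries(flaky))
print(result, calls["n"])
```

As in the loop above, unexpected exception types are deliberately not retried; only the named transient error triggers backoff.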

View File

@@ -207,7 +207,7 @@ def get_open_markets(
from src.core.order_policy import classify_session_id
session_id = classify_session_id(market, now)
- return session_id not in {"KR_OFF", "US_OFF"}
+ return session_id not in {"KR_OFF", "US_OFF", "US_DAY"}
return is_market_open(market, now)
open_markets = [
@@ -254,10 +254,10 @@ def get_next_market_open(
from src.core.order_policy import classify_session_id
ts = start_utc.astimezone(ZoneInfo("UTC")).replace(second=0, microsecond=0)
- prev_active = classify_session_id(market, ts) not in {"KR_OFF", "US_OFF"}
+ prev_active = classify_session_id(market, ts) not in {"KR_OFF", "US_OFF", "US_DAY"}
for _ in range(7 * 24 * 60):
ts = ts + timedelta(minutes=1)
- active = classify_session_id(market, ts) not in {"KR_OFF", "US_OFF"}
+ active = classify_session_id(market, ts) not in {"KR_OFF", "US_OFF", "US_DAY"}
if active and not prev_active:
return ts
prev_active = active

View File

@@ -1,5 +1,6 @@
"""Tests for main trading loop integration.""" """Tests for main trading loop integration."""
import asyncio
from datetime import UTC, date, datetime from datetime import UTC, date, datetime
from typing import Any from typing import Any
from unittest.mock import ANY, AsyncMock, MagicMock, patch from unittest.mock import ANY, AsyncMock, MagicMock, patch
@@ -34,6 +35,7 @@ from src.main import (
_extract_held_codes_from_balance, _extract_held_codes_from_balance,
_extract_held_qty_from_balance, _extract_held_qty_from_balance,
_handle_market_close, _handle_market_close,
_has_market_session_transition,
_inject_staged_exit_features, _inject_staged_exit_features,
_maybe_queue_order_intent, _maybe_queue_order_intent,
_resolve_market_setting, _resolve_market_setting,
@@ -41,8 +43,10 @@ from src.main import (
_retry_connection, _retry_connection,
_run_context_scheduler, _run_context_scheduler,
_run_evolution_loop, _run_evolution_loop,
_run_markets_in_parallel,
_should_block_overseas_buy_for_fx_buffer, _should_block_overseas_buy_for_fx_buffer,
_should_force_exit_for_overnight, _should_force_exit_for_overnight,
_should_rescan_market,
_split_trade_pnl_components, _split_trade_pnl_components,
_start_dashboard_server, _start_dashboard_server,
_stoploss_cooldown_minutes, _stoploss_cooldown_minutes,
@@ -140,6 +144,63 @@ class TestExtractAvgPriceFromBalance:
result = _extract_avg_price_from_balance(balance, "AAPL", is_domestic=False) result = _extract_avg_price_from_balance(balance, "AAPL", is_domestic=False)
assert result == 170.5 assert result == 170.5
class TestRealtimeSessionStateHelpers:
"""Tests for realtime loop session-transition/rescan helper logic."""
def test_has_market_session_transition_when_state_missing(self) -> None:
states: dict[str, str] = {}
assert _has_market_session_transition(states, "US_NASDAQ", "US_REG")
def test_has_market_session_transition_when_session_changes(self) -> None:
states = {"US_NASDAQ": "US_PRE"}
assert _has_market_session_transition(states, "US_NASDAQ", "US_REG")
def test_has_market_session_transition_false_when_same_session(self) -> None:
states = {"US_NASDAQ": "US_REG"}
assert not _has_market_session_transition(states, "US_NASDAQ", "US_REG")
def test_should_rescan_market_forces_on_session_transition(self) -> None:
assert _should_rescan_market(
last_scan=1000.0,
now_timestamp=1050.0,
rescan_interval=300.0,
session_changed=True,
)
def test_should_rescan_market_uses_interval_without_transition(self) -> None:
assert not _should_rescan_market(
last_scan=1000.0,
now_timestamp=1050.0,
rescan_interval=300.0,
session_changed=False,
)
class TestMarketParallelRunner:
"""Tests for market-level parallel processing helper."""
@pytest.mark.asyncio
async def test_run_markets_in_parallel_runs_all_markets(self) -> None:
processed: list[str] = []
async def _processor(market: str) -> None:
await asyncio.sleep(0.01)
processed.append(market)
await _run_markets_in_parallel(["KR", "US_NASDAQ", "US_NYSE"], _processor)
assert set(processed) == {"KR", "US_NASDAQ", "US_NYSE"}
@pytest.mark.asyncio
async def test_run_markets_in_parallel_propagates_errors(self) -> None:
async def _processor(market: str) -> None:
if market == "US_NASDAQ":
raise RuntimeError("boom")
await asyncio.sleep(0.01)
with pytest.raises(RuntimeError, match="boom"):
await _run_markets_in_parallel(["KR", "US_NASDAQ"], _processor)
def test_returns_zero_when_field_absent(self) -> None:
"""Returns 0.0 when pchs_avg_pric key is missing entirely."""
balance = {"output1": [{"pdno": "005930", "ord_psbl_qty": "5"}]}
@@ -913,6 +974,46 @@ class TestTradingCycleTelegramIntegration:
# Verify notification was attempted
mock_telegram.notify_trade_execution.assert_called_once()
@pytest.mark.asyncio
async def test_kr_rejected_order_does_not_notify_or_log_trade(
self,
mock_broker: MagicMock,
mock_overseas_broker: MagicMock,
mock_scenario_engine: MagicMock,
mock_playbook: DayPlaybook,
mock_risk: MagicMock,
mock_db: MagicMock,
mock_decision_logger: MagicMock,
mock_context_store: MagicMock,
mock_criticality_assessor: MagicMock,
mock_telegram: MagicMock,
mock_market: MagicMock,
) -> None:
"""KR orders rejected by KIS should not trigger success side effects."""
mock_broker.send_order = AsyncMock(
return_value={"rt_cd": "1", "msg1": "장운영시간이 아닙니다."}
)
with patch("src.main.log_trade") as mock_log_trade:
await trading_cycle(
broker=mock_broker,
overseas_broker=mock_overseas_broker,
scenario_engine=mock_scenario_engine,
playbook=mock_playbook,
risk=mock_risk,
db_conn=mock_db,
decision_logger=mock_decision_logger,
context_store=mock_context_store,
criticality_assessor=mock_criticality_assessor,
telegram=mock_telegram,
market=mock_market,
stock_code="005930",
scan_candidates={},
)
mock_telegram.notify_trade_execution.assert_not_called()
mock_log_trade.assert_not_called()
@pytest.mark.asyncio
async def test_fat_finger_notification_sent(
self,

View File

@@ -165,6 +165,17 @@ class TestGetOpenMarkets:
)
assert {m.code for m in extended} == {"US_NASDAQ", "US_NYSE", "US_AMEX"}
def test_get_open_markets_excludes_us_day_when_extended_enabled(self) -> None:
"""US_DAY should be treated as non-tradable even in extended-session lookup."""
# Monday 2026-02-02 10:30 KST = 01:30 UTC (US_DAY by session classification)
test_time = datetime(2026, 2, 2, 1, 30, tzinfo=ZoneInfo("UTC"))
extended = get_open_markets(
enabled_markets=["US_NASDAQ", "US_NYSE", "US_AMEX"],
now=test_time,
include_extended_sessions=True,
)
assert extended == []
class TestGetNextMarketOpen:
"""Test get_next_market_open function."""
@@ -214,8 +225,8 @@ class TestGetNextMarketOpen:
def test_get_next_market_open_prefers_extended_session(self) -> None:
"""Extended lookup should return premarket open time before regular open."""
# Monday 2026-02-02 07:00 EST = 12:00 UTC
- # By v3 KST session rules, US is OFF only in KST 07:00-10:00 (UTC 22:00-01:00).
- # At 12:00 UTC market is active, so next OFF->ON transition is 01:00 UTC next day.
+ # US_DAY is treated as non-tradable in extended lookup, so after entering
+ # US_DAY the next tradable OFF->ON transition is US_PRE at 09:00 UTC next day.
test_time = datetime(2026, 2, 2, 12, 0, tzinfo=ZoneInfo("UTC"))
market, next_open = get_next_market_open(
enabled_markets=["US_NASDAQ"],
@@ -223,7 +234,7 @@ class TestGetNextMarketOpen:
include_extended_sessions=True,
)
assert market.code == "US_NASDAQ"
- assert next_open == datetime(2026, 2, 3, 1, 0, tzinfo=ZoneInfo("UTC"))
+ assert next_open == datetime(2026, 2, 3, 9, 0, tzinfo=ZoneInfo("UTC"))
class TestExpandMarketCodes:

View File

@@ -0,0 +1,160 @@
from __future__ import annotations
import os
import signal
import socket
import subprocess
from pathlib import Path
import pytest
REPO_ROOT = Path(__file__).resolve().parent.parent
RUN_OVERNIGHT = REPO_ROOT / "scripts" / "run_overnight.sh"
RUNTIME_MONITOR = REPO_ROOT / "scripts" / "runtime_verify_monitor.sh"
def _latest_runtime_log(log_dir: Path) -> str:
logs = sorted(log_dir.glob("runtime_verify_*.log"))
assert logs, "runtime monitor did not produce log output"
return logs[-1].read_text(encoding="utf-8")
def test_runtime_verify_monitor_detects_live_process_without_pid_files(tmp_path: Path) -> None:
log_dir = tmp_path / "overnight"
log_dir.mkdir(parents=True, exist_ok=True)
fake_live = subprocess.Popen(
["bash", "-lc", 'exec -a "src.main --mode=live" sleep 10'],
cwd=REPO_ROOT,
)
try:
env = os.environ.copy()
env.update(
{
"ROOT_DIR": str(REPO_ROOT),
"LOG_DIR": str(log_dir),
"INTERVAL_SEC": "1",
"MAX_HOURS": "1",
"MAX_LOOPS": "1",
"POLICY_TZ": "UTC",
}
)
completed = subprocess.run(
["bash", str(RUNTIME_MONITOR)],
cwd=REPO_ROOT,
env=env,
capture_output=True,
text=True,
check=False,
)
assert completed.returncode == 0, completed.stderr
log_text = _latest_runtime_log(log_dir)
assert "app_alive=1" in log_text
assert "[COVERAGE] LIVE_MODE=PASS source=process_liveness" in log_text
assert "[ANOMALY]" not in log_text
finally:
fake_live.terminate()
fake_live.wait(timeout=5)
def test_run_overnight_fails_fast_when_dashboard_port_in_use(tmp_path: Path) -> None:
log_dir = tmp_path / "overnight"
log_dir.mkdir(parents=True, exist_ok=True)
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.bind(("127.0.0.1", 0))
sock.listen(1)
port = sock.getsockname()[1]
try:
env = os.environ.copy()
env.update(
{
"LOG_DIR": str(log_dir),
"TMUX_AUTO": "false",
"DASHBOARD_PORT": str(port),
}
)
completed = subprocess.run(
["bash", str(RUN_OVERNIGHT)],
cwd=REPO_ROOT,
env=env,
capture_output=True,
text=True,
check=False,
)
assert completed.returncode != 0
output = f"{completed.stdout}\n{completed.stderr}"
assert "already in use" in output
finally:
sock.close()
def test_run_overnight_writes_live_pid_and_watchdog_pid(tmp_path: Path) -> None:
log_dir = tmp_path / "overnight"
log_dir.mkdir(parents=True, exist_ok=True)
env = os.environ.copy()
env.update(
{
"LOG_DIR": str(log_dir),
"TMUX_AUTO": "false",
"STARTUP_GRACE_SEC": "1",
"CHECK_INTERVAL": "2",
"APP_CMD_BIN": "sleep",
"APP_CMD_ARGS": "10",
}
)
completed = subprocess.run(
["bash", str(RUN_OVERNIGHT)],
cwd=REPO_ROOT,
env=env,
capture_output=True,
text=True,
check=False,
)
assert completed.returncode == 0, f"{completed.stdout}\n{completed.stderr}"
app_pid = int((log_dir / "app.pid").read_text(encoding="utf-8").strip())
watchdog_pid = int((log_dir / "watchdog.pid").read_text(encoding="utf-8").strip())
os.kill(app_pid, 0)
os.kill(watchdog_pid, 0)
for pid in (watchdog_pid, app_pid):
try:
os.kill(pid, signal.SIGTERM)
except ProcessLookupError:
pass
def test_run_overnight_fails_when_process_exits_before_grace_period(tmp_path: Path) -> None:
log_dir = tmp_path / "overnight"
log_dir.mkdir(parents=True, exist_ok=True)
env = os.environ.copy()
env.update(
{
"LOG_DIR": str(log_dir),
"TMUX_AUTO": "false",
"STARTUP_GRACE_SEC": "1",
"APP_CMD_BIN": "false",
}
)
completed = subprocess.run(
["bash", str(RUN_OVERNIGHT)],
cwd=REPO_ROOT,
env=env,
capture_output=True,
text=True,
check=False,
)
assert completed.returncode != 0
output = f"{completed.stdout}\n{completed.stderr}"
assert "startup failed:" in output
watchdog_pid_file = log_dir / "watchdog.pid"
if watchdog_pid_file.exists():
watchdog_pid = int(watchdog_pid_file.read_text(encoding="utf-8").strip())
with pytest.raises(ProcessLookupError):
os.kill(watchdog_pid, 0)
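The monitor tests above probe PIDs with `os.kill(pid, 0)`. A small POSIX sketch of that liveness check (`pid_alive` is an illustrative name, not a project helper):

```python
import os
import subprocess


def pid_alive(pid: int) -> bool:
    """Signal 0 delivers nothing, but the kernel still validates the
    target: ProcessLookupError means the PID is gone; PermissionError
    means it exists but belongs to another user."""
    try:
        os.kill(pid, 0)
    except ProcessLookupError:
        return False
    except PermissionError:
        return True
    return True


child = subprocess.Popen(["sleep", "30"])
alive_before = pid_alive(child.pid)
child.terminate()
child.wait()  # reap the child so its PID no longer exists
alive_after = pid_alive(child.pid)
print(alive_before, alive_after)
```

The `wait()` call is essential: an unreaped zombie still answers signal 0, so the probe only reports dead once the parent has collected the exit status.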

View File

@@ -24,9 +24,24 @@ def test_validate_pr_body_text_detects_escaped_newline() -> None:
assert any("escaped newline" in err for err in errors) assert any("escaped newline" in err for err in errors)
def test_validate_pr_body_text_allows_literal_sequence_when_multiline() -> None: def test_validate_pr_body_text_detects_escaped_newline_in_multiline_body() -> None:
module = _load_module() module = _load_module()
text = "## Summary\n- escaped sequence example: \\\\n" text = "## Summary\n- first line\n- broken line with \\n literal"
errors = module.validate_pr_body_text(text)
assert any("escaped newline" in err for err in errors)
def test_validate_pr_body_text_allows_escaped_newline_in_code_blocks() -> None:
module = _load_module()
text = "\n".join(
[
"## Summary",
"- example uses `\\n` for explanation",
"```bash",
"printf 'line1\\nline2\\n'",
"```",
]
)
assert module.validate_pr_body_text(text) == []
@@ -63,12 +78,13 @@ def test_fetch_pr_body_reads_body_from_tea_api(monkeypatch) -> None:
module = _load_module()
def fake_run(cmd, check, capture_output, text):  # noqa: ANN001
- assert "tea" in cmd[0]
+ assert cmd[0] == "/tmp/tea-bin"
assert check is True
assert capture_output is True
assert text is True
return SimpleNamespace(stdout=json.dumps({"body": "## Summary\n- item"}))
+ monkeypatch.setattr(module, "resolve_tea_binary", lambda: "/tmp/tea-bin")
monkeypatch.setattr(module.subprocess, "run", fake_run)
assert module.fetch_pr_body(391) == "## Summary\n- item"
@@ -79,6 +95,32 @@ def test_fetch_pr_body_rejects_non_string_body(monkeypatch) -> None:
def fake_run(cmd, check, capture_output, text):  # noqa: ANN001
return SimpleNamespace(stdout=json.dumps({"body": 123}))
+ monkeypatch.setattr(module, "resolve_tea_binary", lambda: "/tmp/tea-bin")
monkeypatch.setattr(module.subprocess, "run", fake_run)
with pytest.raises(RuntimeError):
module.fetch_pr_body(391)
def test_resolve_tea_binary_falls_back_to_home_bin(monkeypatch, tmp_path) -> None:
module = _load_module()
tea_home = tmp_path / "bin" / "tea"
tea_home.parent.mkdir(parents=True)
tea_home.write_text("#!/usr/bin/env bash\n", encoding="utf-8")
tea_home.chmod(0o755)
monkeypatch.setattr(module.shutil, "which", lambda _: None)
monkeypatch.setattr(module.Path, "home", lambda: tmp_path)
assert module.resolve_tea_binary() == str(tea_home)
def test_resolve_tea_binary_rejects_non_executable_home_bin(monkeypatch, tmp_path) -> None:
module = _load_module()
tea_home = tmp_path / "bin" / "tea"
tea_home.parent.mkdir(parents=True)
tea_home.write_text("not executable\n", encoding="utf-8")
tea_home.chmod(0o644)
monkeypatch.setattr(module.shutil, "which", lambda _: None)
monkeypatch.setattr(module.Path, "home", lambda: tmp_path)
with pytest.raises(RuntimeError):
module.resolve_tea_binary()
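The fallback behaviour these tests pin down (PATH lookup first, then an executable `~/bin` copy, rejecting a non-executable file) can be sketched standalone; `resolve_binary` and the temp-dir layout below are illustrative, not the script's actual implementation:

```python
import os
import shutil
import tempfile
from pathlib import Path


def resolve_binary(name: str, home: Path) -> str:
    """Prefer a PATH hit; otherwise accept <home>/bin/<name> only when it
    is a regular, executable file."""
    found = shutil.which(name)
    if found:
        return found
    candidate = home / "bin" / name
    if candidate.is_file() and os.access(candidate, os.X_OK):
        return str(candidate)
    raise RuntimeError(f"{name} not found on PATH or in {candidate.parent}")


with tempfile.TemporaryDirectory() as d:
    home = Path(d)
    (home / "bin").mkdir()
    tool = home / "bin" / "tea-sketch-xyz"  # name chosen so PATH lookup misses
    tool.write_text("#!/bin/sh\n", encoding="utf-8")
    tool.chmod(0o755)
    resolved = resolve_binary("tea-sketch-xyz", home)
    print(resolved)
```

Checking `os.access(..., os.X_OK)` rather than mere existence is what makes the second test pass: a readable but non-executable fallback raises instead of being returned.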