Compare commits

10 commits: `fix/400` ... `base/ourob`
| Author | SHA1 | Date |
|---|---|---|
| | 1850334bb7 | |
| | 8a4cf31c4d | |
| | c4eaf780a0 | |
| | 728b3667ab | |
| | 337c84c592 | |
| | 65ec0a2684 | |
| | fdaa1aa33c | |
| | 16515080dc | |
| | acbf89bc90 | |
| | b708e8b4ed | |
CLAUDE.md (190 lines)

@@ -1,9 +1,187 @@
**Removed:**

# Agent Entry Point

This file moved to [agents.md](./agents.md).

Follow `agents.md` as the single source of truth for Claude/Codex session behavior and project workflow gates.

Core process references:

- [Workflow Guide](docs/workflow.md)
- [Command Reference](docs/commands.md)

**Added:**

# The Ouroboros

AI-powered trading agent for global stock markets with self-evolution capabilities.

## Quick Start

```bash
# Setup
pip install -e ".[dev]"
cp .env.example .env
# Edit .env with your KIS and Gemini API credentials

# Test
pytest -v --cov=src

# Run (paper trading)
python -m src.main --mode=paper

# Run with dashboard
python -m src.main --mode=paper --dashboard
```
## Telegram Notifications (Optional)

Get real-time alerts for trades, circuit breakers, and system events via Telegram.

### Quick Setup

1. **Create bot**: Message [@BotFather](https://t.me/BotFather) on Telegram → `/newbot`
2. **Get chat ID**: Message [@userinfobot](https://t.me/userinfobot) → `/start`
3. **Configure**: Add to `.env`:

   ```bash
   TELEGRAM_BOT_TOKEN=1234567890:ABCdefGHIjklMNOpqrsTUVwxyz
   TELEGRAM_CHAT_ID=123456789
   TELEGRAM_ENABLED=true
   ```

4. **Test**: Start bot conversation (`/start`), then run the agent

**Full documentation**: [src/notifications/README.md](src/notifications/README.md)

### What You'll Get

- 🟢 Trade execution alerts (BUY/SELL with confidence)
- 🚨 Circuit breaker trips (automatic trading halt)
- ⚠️ Fat-finger rejections (oversized orders blocked)
- ℹ️ Market open/close notifications
- 📝 System startup/shutdown status

### Interactive Commands

With `TELEGRAM_COMMANDS_ENABLED=true` (default), the bot supports 9 bidirectional commands: `/help`, `/status`, `/positions`, `/report`, `/scenarios`, `/review`, `/dashboard`, `/stop`, `/resume`.

**Fail-safe**: Notifications never crash the trading system. Missing credentials or API errors are logged but trading continues normally.
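The fail-safe contract can be sketched with the standard library alone. `send_telegram` below is an illustrative helper, not the project's real sender (which lives in `src/notifications`); the endpoint, however, is the standard Bot API `sendMessage` call.

```python
import json
import logging
import os
import urllib.parse
import urllib.request

logger = logging.getLogger("notifications")

def send_telegram(text: str) -> bool:
    """Best-effort notification: never raises, returns False on any failure."""
    token = os.getenv("TELEGRAM_BOT_TOKEN")
    chat_id = os.getenv("TELEGRAM_CHAT_ID")
    if os.getenv("TELEGRAM_ENABLED", "false").lower() != "true" or not token or not chat_id:
        logger.info("Telegram disabled or credentials missing; skipping notification")
        return False
    try:
        url = f"https://api.telegram.org/bot{token}/sendMessage"
        data = urllib.parse.urlencode({"chat_id": chat_id, "text": text}).encode()
        with urllib.request.urlopen(url, data=data, timeout=5) as resp:
            return json.load(resp).get("ok", False)
    except Exception as exc:  # fail-safe: log and keep trading
        logger.warning("Telegram notification failed: %s", exc)
        return False

# With no credentials configured, the call degrades gracefully instead of raising:
print(send_telegram("🟢 BUY 005930 x10 (confidence 85)"))
```

The broad `except Exception` is deliberate here: per the fail-safe rule, a notification path must never propagate an error into the trading loop.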
## Smart Volatility Scanner (Optional)

Python-first filtering pipeline that reduces Gemini API calls by pre-filtering stocks using technical indicators.

### How It Works

1. **Fetch Rankings** — KIS API volume surge rankings (top 30 stocks)
2. **Python Filter** — RSI + volume ratio calculations (no AI)
   - Volume > 200% of previous day
   - RSI(14) < 30 (oversold) OR RSI(14) > 70 (momentum)
3. **AI Judgment** — Only qualified candidates (1-3 stocks) sent to Gemini
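Step 2 can be sketched in a few lines. The function names are illustrative, and the RSI here is the simple (non-Wilder-smoothed) variant, so values may differ slightly from the production scanner's.

```python
def rsi(closes: list[float], period: int = 14) -> float:
    """Simple (non-smoothed) RSI over the last `period` price changes."""
    deltas = [b - a for a, b in zip(closes[-period - 1:], closes[-period:])]
    gains = sum(d for d in deltas if d > 0)
    losses = sum(-d for d in deltas if d < 0)
    if losses == 0:
        return 100.0
    rs = gains / losses
    return 100.0 - 100.0 / (1.0 + rs)

def passes_filter(closes: list[float], volume: float, prev_volume: float,
                  oversold: float = 30.0, momentum: float = 70.0,
                  vol_multiplier: float = 2.0) -> bool:
    """Step 2 of the pipeline: volume surge AND an RSI extreme."""
    if prev_volume <= 0 or volume / prev_volume <= vol_multiplier:
        return False
    r = rsi(closes)
    return r < oversold or r > momentum

# A steadily rising series has RSI near 100 (momentum) with a 3x volume surge:
closes = [100 + i for i in range(20)]
print(passes_filter(closes, volume=3_000_000, prev_volume=1_000_000))  # → True
```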
### Configuration

Add to `.env` (optional, has sensible defaults):

```bash
RSI_OVERSOLD_THRESHOLD=30    # 0-50, default 30
RSI_MOMENTUM_THRESHOLD=70    # 50-100, default 70
VOL_MULTIPLIER=2.0           # Volume threshold (2.0 = 200%)
SCANNER_TOP_N=3              # Max candidates per scan
```
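A minimal sketch of reading these knobs with their documented defaults. The project actually centralizes settings in `src/config.py`, so `scanner_settings` is purely illustrative.

```python
import os

def scanner_settings() -> dict:
    """Read scanner tuning from the environment, falling back to the documented defaults."""
    return {
        "rsi_oversold": float(os.getenv("RSI_OVERSOLD_THRESHOLD", "30")),
        "rsi_momentum": float(os.getenv("RSI_MOMENTUM_THRESHOLD", "70")),
        "vol_multiplier": float(os.getenv("VOL_MULTIPLIER", "2.0")),
        "top_n": int(os.getenv("SCANNER_TOP_N", "3")),
    }

# With nothing set in the environment, the documented defaults apply:
print(scanner_settings())
```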
### Benefits

- **Reduces API costs** — Process 1-3 stocks instead of 20-30
- **Python-based filtering** — Fast technical analysis before AI
- **Evolution-ready** — Selection context logged for strategy optimization
- **Fault-tolerant** — Falls back to static watchlist on API failure

### Trading Mode Integration

Smart Scanner runs in both `TRADE_MODE=realtime` and `daily` paths. On API failure, domestic stocks fall back to a static watchlist; overseas stocks fall back to a dynamic universe (active positions, recent holdings).

## Documentation

- **[Documentation Hub](docs/README.md)** — Top-level doc routing and reading order
- **[Workflow Guide](docs/workflow.md)** — Git workflow policy and agent-based development
- **[Command Reference](docs/commands.md)** — Common failures, build commands, troubleshooting
- **[Architecture](docs/architecture.md)** — System design, components, data flow
- **[Context Tree](docs/context-tree.md)** — L1-L7 hierarchical memory system
- **[Testing](docs/testing.md)** — Test structure, coverage requirements, writing tests
- **[Agent Policies](docs/agents.md)** — Prime directives, constraints, prohibited actions
- **[Requirements Log](docs/requirements-log.md)** — User requirements and feedback tracking
- **[Live Trading Checklist](docs/live-trading-checklist.md)** — Paper-to-live transition checklist

## Core Principles

1. **Safety First** — Risk manager is READ-ONLY and enforces circuit breakers
2. **Test Everything** — 80% coverage minimum, all changes require tests
3. **Issue-Driven Development** — All work goes through Gitea issues → feature branches → PRs
4. **Agent Specialization** — Use dedicated agents for design, coding, testing, docs, review

## Requirements Management

User requirements and feedback are tracked in [docs/requirements-log.md](docs/requirements-log.md):

- New requirements are added chronologically with dates
- Code changes should reference related requirements
- Keeps project evolution aligned with user needs
- Preserves context across conversations and development cycles

## Project Structure

```
src/
├── analysis/        # Technical analysis (RSI, volatility, smart scanner)
├── backup/          # Disaster recovery (scheduler, cloud storage, health)
├── brain/           # Gemini AI decision engine (prompt optimizer, context selector)
├── broker/          # KIS API client (domestic + overseas)
├── context/         # L1-L7 hierarchical memory system
├── core/            # Risk manager (READ-ONLY)
├── dashboard/       # FastAPI read-only monitoring (10 API endpoints)
├── data/            # External data integration (news, market data, calendar)
├── evolution/       # Self-improvement (optimizer, daily review, scorecard)
├── logging/         # Decision logger (audit trail)
├── markets/         # Market schedules and timezone handling
├── notifications/   # Telegram alerts + bidirectional commands (9 commands)
├── strategy/        # Pre-market planner, scenario engine, playbook store
├── db.py            # SQLite trade logging
├── main.py          # Trading loop orchestrator
└── config.py        # Settings (from .env)

tests/               # 998 tests across 41 files
docs/                # Extended documentation
```

## Key Commands

```bash
pytest -v --cov=src                           # Run tests with coverage
ruff check src/ tests/                        # Lint
mypy src/ --strict                            # Type check

python -m src.main --mode=paper               # Paper trading
python -m src.main --mode=paper --dashboard   # With dashboard
python -m src.main --mode=live                # Live trading (⚠️ real money)

# Gitea workflow (requires tea CLI)
YES="" ~/bin/tea issues create --repo jihoson/The-Ouroboros --title "..." --description "..."
YES="" ~/bin/tea pulls create --head feature-branch --base main --title "..." --description "..."
```

## Markets Supported

- 🇰🇷 Korea (KRX)
- 🇺🇸 United States (NASDAQ, NYSE, AMEX)
- 🇯🇵 Japan (TSE)
- 🇭🇰 Hong Kong (SEHK)
- 🇨🇳 China (Shanghai, Shenzhen)
- 🇻🇳 Vietnam (Hanoi, HCM)

Markets are auto-detected based on timezone and enabled via the `ENABLED_MARKETS` env variable.

## Critical Constraints

⚠️ **Non-Negotiable Rules** (see [docs/agents.md](docs/agents.md)):

- `src/core/risk_manager.py` is **READ-ONLY** — changes require human approval
- Circuit breaker at -3.0% P&L — may only be made **stricter**
- Fat-finger protection: max 30% of cash per order — always enforced
- Confidence thresholds (per market_outlook; may not be lowered): BEARISH ≥ 90, NEUTRAL/default ≥ 80, BULLISH ≥ 75
- All code changes → corresponding tests → coverage ≥ 80%

## Contributing

See [docs/workflow.md](docs/workflow.md) for the complete development process.

**TL;DR:**

1. Create issue in Gitea
2. Create feature branch: `feature/issue-N-description`
3. Implement with tests
4. Open PR
5. Merge after review
agents.md (199 lines)

@@ -1,199 +0,0 @@
# The Ouroboros

AI-powered trading agent for global stock markets with self-evolution capabilities.

## Agent Workflow Gate (Claude/Codex)

Before any implementation, both Claude and Codex must align on the same project process:

1. Read `docs/workflow.md` first (branch policy, issue/PR flow, merge rules).
2. Read `docs/commands.md` for required verification commands and failure handling.
3. Read `docs/agent-constraints.md` and `docs/agents.md` for safety constraints.
4. Check `workflow/session-handover.md` and append a session entry when starting or handing off work.
5. Confirm current branch is based on `main` or an explicitly designated temporary/base branch before editing.

If any instruction conflicts, default to the safer path and document the reason in the handover log.
@@ -42,3 +42,5 @@ Updated: 2026-03-02

## Change History

- 2026-03-02: Performed a policy-document synchronization check for the `v1.0.12` documentation validation gate hardening (#390). (`REQ-OPS-002`)
- 2026-03-04: Performed a policy-document synchronization check following the addition of the 86_peak_probability_model_plan.md draft. (REQ-OPS-002)
docs/ouroboros/86_peak_probability_model_plan.md (168 lines, new file)

@@ -0,0 +1,168 @@
<!--
Doc-ID: DOC-PLAN-086
Version: 0.1.0
Status: draft
Owner: strategy
Updated: 2026-03-02
-->

# Peak Probability Model Introduction Plan (Draft)

Date: 2026-03-02
Purpose: Redefine the "model-assisted signal" from the existing plan (v2) into a simpler form that is viable in live operation.

---

## 1. One-Line Summary

- The goal is to predict "the probability that now is the peak (or near the peak)" to assist take-profit, position-reduction, and overnight-risk decisions.
- Phase one focuses on regular trading hours; extended sessions such as NXT/US_PRE are handled separately later.

---

## 2. Why Change

- The earlier discussion was broad in scope, making implementation and validation expensive.
- We need a target we can start on quickly from the current log structure.
- Measured by real-trading utility, "near-peak detection" connects directly to improving exit quality.

---

## 3. Key Decisions in This Plan

### 3.1 Prediction Targets (2)

- `p_top_d0`: probability that the current moment is "today's regular-session peak (including near-peak)"
- `p_top_d1`: probability that the current moment is "the peak through tomorrow's regular-session close (including near-peak)"

### 3.2 Model Role

- Used as an assisting signal, not a standalone sell trigger.
- The precedence of hard stops / ATR trailing / Kill Switch is preserved.

### 3.3 Rollout Order

1. Build an MVP covering KR/US regular sessions only
2. After performance/operational stabilization, extend separately to NXT, US_PRE, US_AFTER

---

## 4. Data/Label Definitions (MVP)

### 4.1 Sample Unit

- Unit: "one observation at time t"
- Scope: order-eligible moments that pass the policy gates

### 4.2 Label Definition

- `y_d0 = 1` if `max_high(t ~ today's regular close) <= price_t * (1 + eps_d0)` else `0`
- `y_d1 = 1` if `max_high(t ~ tomorrow's regular close) <= price_t * (1 + eps_d1)` else `0`
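The label rule can be written directly from the definition. `label_peak` is an illustrative name, and the `eps` used below is hypothetical (the real `eps_d0`/`eps_d1` defaults are still TBD, section 8).

```python
def label_peak(price_t: float, future_highs: list[float], eps: float) -> int:
    """y = 1 when no future high in the window exceeds price_t by more than eps."""
    if not future_highs:
        # Incomplete label window (holiday, missing data): sample should be excluded.
        raise ValueError("incomplete label window; sample should be excluded")
    return int(max(future_highs) <= price_t * (1 + eps))

# price_t = 100; remaining session highs never exceed 100.4, so with a
# hypothetical eps of 0.005 (0.5% near-peak band) the point labels as a peak:
print(label_peak(100.0, [100.2, 100.4, 99.8], eps=0.005))  # → 1
print(label_peak(100.0, [103.0, 101.5], eps=0.005))        # → 0
```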
### 4.3 Required Features (Initial)

- Price/returns: `current_price`, `price_change_pct`
- Micro indicators: `rsi`, `volume_ratio`, `signal`, `score`
- Portfolio: `portfolio_pnl_pct`, `total_cash` (optional)
- State: `market`, `session_id`, `minutes_to_close`, `is_tradeable_now`

### 4.4 Data Exclusion Rules (Initial)

- Exclude `session_id in ('', 'UNKNOWN')`
- Exclude `price <= 0` or `quantity <= 0`
- Exclude `rationale LIKE '[startup-sync]%'`
- Exclude samples with incomplete label windows (market holidays, missing data)

---

## 5. Inference/Execution Rules (Draft)

### 5.1 Inference Output

```json
{
  "p_top_d0": 0.00,
  "p_top_d1": 0.00,
  "model_version": "TBD"
}
```

### 5.2 Execution Principles

- High `p_top_d0`: tighten same-day take-profit/trailing
- High `p_top_d1`: prefer reducing/closing overnight holdings
- Extended sessions (NXT/US_PRE): conservative thresholds or restricted execution initially
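A sketch of how the two probabilities could bias exits, assuming hypothetical thresholds (the real execution thresholds are TBD, section 8) and keeping the model strictly advisory.

```python
def exit_bias(p_top_d0: float, p_top_d1: float,
              thr_d0: float, thr_d1: float) -> list[str]:
    """Map peak probabilities to assisting exit actions; never a standalone sell trigger."""
    actions = []
    if p_top_d0 >= thr_d0:
        actions.append("tighten_trailing")   # same-day take-profit/trailing
    if p_top_d1 >= thr_d1:
        actions.append("reduce_overnight")   # prefer cutting overnight exposure
    return actions

# Hypothetical thresholds of 0.7; only the d0 signal fires here:
print(exit_bias(0.82, 0.40, thr_d0=0.7, thr_d1=0.7))  # → ['tighten_trailing']
```

Hard stops, ATR trailing, and the Kill Switch would still run first; this output only nudges discretionary exit logic.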
---

## 6. Session-Issue Handling Principles

### 6.1 Tradeable-Session Basis

- Labels and inference are built on "sessions in which the instrument is actually tradeable", not on calendar time.

### 6.2 Separating Regular vs Extended Sessions

- MVP: regular-session-only labels/model
- Extension: evaluate per-session heads or separate per-session models

### 6.3 Separate Peak Definitions

- `reg_high`: regular-session peak (primary label)
- `ext_high`: peak including extended sessions (auxiliary analysis)

---

## 7. Phased Implementation Plan

### Phase A: Freeze the Contract

- Finalize the canonical training dataset schema
- Document as-of alignment rules
- Add label-generation functions and validation tests

### Phase B: Offline Validation

- Walk-forward + Purge/Embargo evaluation
- Separate reports for `d0` and `d1`
- Verify improvement over B1 (no model)

### Phase C: Runtime Integration

- Inject inference results as `pred_top_d0/pred_top_d1`
- Wire the sell/reduce assisting rules
- Observe in shadow mode, then enable with restrictions

### Phase D: Session Extension

- Separate policies for NXT/US_PRE/US_AFTER
- Tune execution gates/thresholds per session

---

## 8. Minimum Items to Decide Now (TBD)

- [ ] Default values for `eps_d0`, `eps_d1` (`TBD`)
- [ ] Execution thresholds for `p_top_d0`, `p_top_d1` (`TBD`)
- [ ] Market scope for regular/extended sessions (`TBD`)
- [ ] Minimum training-sample count (`TBD`)
- [ ] Shadow observation period (`TBD`)

---

## 9. Risks and Mitigations

- Label imbalance: class weights, threshold tuning
- Time-of-day bias (near the close): add time features + report performance per interval
- Data integrity: enforce exclusion rules + CI validation
- Complexity growth: lock the regular-session MVP first; split extended sessions into follow-up work

---

## 10. Follow-Up Document Links (Planned)

- requirements registry: add new REQ/TASK/TEST mappings
- code-level work orders: break down implementation tasks
- acceptance/test plan: add `p_top_d0/d1` validation items
- implementation audit: update the existing "model-assisted signal" items
docs/plans/2026-03-02-dual-track-workflow-execution-plan.md (211 lines, new file)

@@ -0,0 +1,211 @@
# Dual-Track Workflow Execution Plan (Revised)

> **For Claude:** REQUIRED SUB-SKILL: Use superpowers:executing-plans to implement this plan task-by-task.

**Goal:** Re-seed workflow planning from a clean base branch (not the temporary branch), then execute isolated `86` and `80` tracks with real-operation-confirmed closeout and related Gitea issue closure.

**Architecture:** Start from `feature/v3-session-policy-stream` and create `base/ouroboros-workflow-20260302`. Bring only two workflow plan docs as one squashed seed commit. Branch `task/86...` and `task/80...` from that base with strict file-scope separation. For `80`, update the audit only after real-operation confirmation, then close related Gitea issues (children before parent). Final promotion happens by merging the validated stream to `main`.

**Tech Stack:** Git CLI, Gitea (`tea`) issue/PR workflow, Markdown docs (`docs/ouroboros`, `docs/plans`)

---

### Task 1: Freeze temporary branch and identify export scope

**Files:**
- Modify: none
- Create: none
- Test: none

**Step 1: Inspect temporary branch state**

Run: `git status --short --branch`
Expected: current temporary branch with local changes listed.

**Step 2: Preserve changes using stash (recommended)**

Run: `git stash push -u -m "tmp-before-workflow-reseed-2026-03-02"`
Expected: stash created with tracked/untracked changes.

**Step 3: Define transferable artifacts**

Run: keep only these for reseed:
- `docs/plans/2026-03-02-ouroboros-dual-track-workflow-design.md`
- `docs/plans/2026-03-02-dual-track-workflow-execution-plan.md`
Expected: transfer scope fixed to two files.

**Step 4: Verify clean working tree**

Run: `git status --short --branch`
Expected: no local changes.

**Step 5: Commit**

```bash
# no commit in this task (stash-only preservation)
```

### Task 2: Create clean official base and seed one commit

**Files:**
- Modify: `docs/plans/2026-03-02-ouroboros-dual-track-workflow-design.md`
- Modify: `docs/plans/2026-03-02-dual-track-workflow-execution-plan.md`
- Test: none

**Step 1: Switch to canonical base**

Run: `git checkout feature/v3-session-policy-stream`
Expected: branch switch successful.

**Step 2: Update local base**

Run: `git pull --ff-only origin feature/v3-session-policy-stream`
Expected: up-to-date or fast-forward update.

**Step 3: Create staging base branch**

Run: `git checkout -b base/ouroboros-workflow-20260302`
Expected: new branch created from updated feature base.

**Step 4: Restore only two plan docs from temporary branch commits**

Run (example): `git checkout <temp-branch> -- docs/plans/2026-03-02-ouroboros-dual-track-workflow-design.md docs/plans/2026-03-02-dual-track-workflow-execution-plan.md`
Expected: only two files restored in working tree.

**Step 5: Create one squashed seed commit**

Run: `git add docs/plans/2026-03-02-ouroboros-dual-track-workflow-design.md docs/plans/2026-03-02-dual-track-workflow-execution-plan.md && git commit -m "docs: seed revised dual-track workflow plan"`
Expected: single commit with two files only.

**Step 6: Verify seed scope**

Run: `git show --name-only --oneline HEAD`
Expected: exactly the two plan docs listed.

**Step 7: Push official base**

Run: `git push -u origin base/ouroboros-workflow-20260302`
Expected: remote tracking set.

### Task 3: Open Track A (`86`) branch and initial PR

**Files:**
- Modify: `docs/ouroboros/86_peak_probability_model_plan.md`
- Test: none

**Step 1: Create Track A branch**

Run: `git checkout -b task/86-peak-probability-plan`
Expected: branch created from official base.

**Step 2: Stage only the `86` document**

Run: `git add docs/ouroboros/86_peak_probability_model_plan.md`
Expected: only the `86` doc staged.

**Step 3: Commit initial PR seed**

Run: `git commit -m "docs: seed peak probability model plan (#86)"`
Expected: one-document commit.

**Step 4: Push branch**

Run: `git push -u origin task/86-peak-probability-plan`
Expected: remote branch created.

**Step 5: Open PR-A**

Run: `tea pr create --base base/ouroboros-workflow-20260302 --head task/86-peak-probability-plan --title "docs: seed peak probability model plan (#86)" --description "Initial planning PR for iterative concretization."`
Expected: PR URL returned.

### Task 4: Open Track B (`80`) branch for operation-confirmed closeout

**Files:**
- Modify: `docs/ouroboros/80_implementation_audit.md`
- Test: operation observation evidence (outside unit-test scope)

**Step 1: Return to staging base**

Run: `git checkout base/ouroboros-workflow-20260302`
Expected: branch switch successful.

**Step 2: Create Track B branch**

Run: `git checkout -b task/80-implementation-audit-closeout`
Expected: branch created.

**Step 3: Apply closeout edits only after real-operation confirmation**

Run: edit `docs/ouroboros/80_implementation_audit.md`
Expected: status/evidence sections updated with observed behavior, confirmation timestamp, and operational proof summary.

**Step 4: Commit closeout update**

Run: `git add docs/ouroboros/80_implementation_audit.md && git commit -m "docs: close out implementation audit items after operation confirmation (#80)"`
Expected: single-scope commit.

**Step 5: Push and open PR-B**

Run: `git push -u origin task/80-implementation-audit-closeout`
Expected: remote branch created.

Run: `tea pr create --base base/ouroboros-workflow-20260302 --head task/80-implementation-audit-closeout --title "docs: close out implementation audit items (#80)" --description "Closeout based on real-operation confirmation."`
Expected: PR URL returned.

### Task 5: Close related Gitea issues after `80` confirmation

**Files:**
- Modify: `docs/ouroboros/80_implementation_audit.md` (reference linkage only)
- Test: issue parent-child completeness check

**Step 1: Build related issue set from the updated `80` audit**

Run: parse issue references from the updated `80` sections.
Expected: target open issue list prepared.
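Step 1's reference parsing can be sketched with a regex. `issue_refs` is an illustrative helper and assumes issues are referenced as `#NNN` in the audit text.

```python
import re

def issue_refs(audit_text: str) -> list[int]:
    """Collect unique #NNN issue references in document order."""
    seen: list[int] = []
    for match in re.finditer(r"#(\d+)", audit_text):
        n = int(match.group(1))
        if n not in seen:
            seen.append(n)
    return seen

sample = "Confirmed under #372; parent #318 pending children #372 and #375."
print(issue_refs(sample))  # → [372, 318, 375]
```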
**Step 2: Close child issues first**

Run: `tea issues close <child-issue-number>` for each confirmed child.
Expected: all qualifying child issues closed.

**Step 3: Close parent issues only when all children are closed**

Run: `tea issues close <parent-issue-number>`
Expected: parent closes only after dependency check passes.

**Step 4: Record closure summary in PR comment**

Run: add PR comment with closed issue IDs and parent-child resolution note.
Expected: auditable trace for closeout.

### Task 6: Validate branch hygiene and merge path

**Files:**
- Modify: none
- Test: git diff/log checks

**Step 1: Verify PR-A scope**

Run: `git diff --name-only base/ouroboros-workflow-20260302...task/86-peak-probability-plan`
Expected: only `docs/ouroboros/86_peak_probability_model_plan.md` (and intentional related plan files if any).

**Step 2: Verify PR-B scope**

Run: `git diff --name-only base/ouroboros-workflow-20260302...task/80-implementation-audit-closeout`
Expected: only `docs/ouroboros/80_implementation_audit.md` (and intentional related plan files if any).

**Step 3: Merge both PRs into staging base**

Run: Gitea merge flow for PR-A and PR-B
Expected: staging base includes both tracks without cross-file contamination.

**Step 4: Promote validated stream to `main`**

Run: open PR from `feature/v3-session-policy-stream` (or its validated promotion branch) to `main`
Expected: merge-based history preservation with no cherry-pick/rebase surgery.

**Step 5: Commit**

```bash
# no additional commit required in this task
```
docs/plans/2026-03-02-live-operation-issue-tracking-design.md (100 lines, new file)

@@ -0,0 +1,100 @@
# Live Operation Issue Tracking Design
|
||||
|
||||
**Date:** 2026-03-02
|
||||
**Scope:** Real-operation confirmation tracking for all open Gitea issues, tied to `task/80-implementation-audit-closeout`.
|
||||
|
||||
---
|
||||
|
||||
## 1. Goal
|
||||
|
||||
Define an issue-by-issue tracking method for real-operation confirmation so that:
|
||||
|
||||
- every open issue is tracked with explicit lifecycle states,
|
||||
- parent-child close order is enforced,
|
||||
- and final closeout is reflected in `80` audit work without loss.
|
||||
|
||||
---
|
||||
|
||||
## 2. Target Set
|
||||
|
||||
- Coverage target: all current open Gitea issues (`318~381`, and future open issues during this closeout window).
|
||||
- Tracking starts from open set snapshot and stays synced until closeout completion.
|
||||
|
||||
---
|
||||
|
||||
## 3. Tracking Architecture
|
||||
|
||||
- Primary log: direct comments on each issue ticket.
|
||||
- Aggregate log: single summary comment on the `80` PR (`task/80-implementation-audit-closeout`).
|
||||
- No separate tracking doc as source of truth during execution.
|
||||
|
||||
---
|
||||
|
||||
## 4. State Model and Criteria
|
||||
|
||||
State machine:
|
||||
|
||||
- `NOT_STARTED`
|
||||
- `OBSERVING`
|
||||
- `CONFIRMED`
|
||||
- `CLOSED`
|
||||
|
||||
Rules:
|
||||
|
||||
- `CONFIRMED` requires one real-operation observation evidence item.
|
||||
- `CLOSED` requires close conditions satisfied and ticket close action executed.
|
||||
- Parent issue cannot become `CLOSED` until all child issues are `CLOSED`.
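The lifecycle and close-gate rules above can be sketched in code. This is an illustrative model only; the `Issue` class and its method names are hypothetical and not part of the tracking tooling:

```python
from dataclasses import dataclass, field

# Ordered lifecycle states from the tracking design.
STATES = ["NOT_STARTED", "OBSERVING", "CONFIRMED", "CLOSED"]


@dataclass
class Issue:
    number: int
    state: str = "NOT_STARTED"
    evidence: list[str] = field(default_factory=list)
    children: list["Issue"] = field(default_factory=list)

    def confirm(self, evidence_id: str) -> None:
        # CONFIRMED requires at least one real-operation evidence item.
        self.evidence.append(evidence_id)
        self.state = "CONFIRMED"

    def close(self) -> None:
        # Parent close gate: every child must already be CLOSED.
        if any(child.state != "CLOSED" for child in self.children):
            raise ValueError(f"issue #{self.number}: children still open")
        if not self.evidence:
            raise ValueError(f"issue #{self.number}: no evidence captured")
        self.state = "CLOSED"


child = Issue(number=319)
parent = Issue(number=318, children=[child])
child.confirm("log:order-exec-20260302-001")
child.close()
parent.confirm("log:session-gate-20260302-002")
parent.close()  # succeeds only because the child is already CLOSED
```

Attempting `parent.close()` before the child is closed raises, which is exactly the gate the rules describe.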

---

## 5. Issue Comment Template

Each issue update comment uses:

- `Status: <STATE>`
- `Observed At: <KST>, <UTC>`
- `Environment: live`
- `Evidence: <log/query/order-execution identifier>`
- `Next: <single next action>`

This template is reused for every transition to keep the audit trail consistent.
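Since every transition repeats the same fields, the comment text can be generated rather than hand-typed, which keeps the KST and UTC timestamps from drifting apart. A minimal sketch, assuming Python with the `zoneinfo` timezone database available (`render_status_comment` is a hypothetical helper, not part of the plan):

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo


def render_status_comment(state: str, evidence: str, next_action: str) -> str:
    """Render one tracking comment with a paired KST/UTC timestamp."""
    now_utc = datetime.now(timezone.utc)
    now_kst = now_utc.astimezone(ZoneInfo("Asia/Seoul"))
    return "\n".join([
        f"Status: {state}",
        f"Observed At: {now_kst:%Y-%m-%d %H:%M %Z}, {now_utc:%Y-%m-%d %H:%M %Z}",
        "Environment: live",
        f"Evidence: {evidence}",
        f"Next: {next_action}",
    ])


print(render_status_comment("OBSERVING", "N/A", "Capture one live evidence item."))
```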

---

## 6. Parent-Child Policy

- A parent issue's comment must list its child issue IDs.
- Child issues transition independently.
- Parent close gate:
  - while any child is not `CLOSED`, the parent remains at most `CONFIRMED`;
  - once all children are `CLOSED`, the parent can transition to `CLOSED` and be closed.

---

## 7. Operational Loop

1. Initialize all open issues with a `NOT_STARTED` comment.
2. Move active issues to `OBSERVING` when live confirmation begins.
3. Move to `CONFIRMED` after one real-operation evidence capture.
4. Close child issues first, then close parent issue(s).
5. Update the single `80` PR summary comment whenever an issue's state changes.
6. Reflect final confirmed/closed outcomes in `docs/ouroboros/80_implementation_audit.md`.

---

## 8. Evidence and Time Rules

- Evidence must be replayable or referenceable by identifier (not vague narrative alone).
- Every update includes both KST and UTC timestamps.
- Use absolute dates (for example, `2026-03-02`) to avoid ambiguity in session handoff.

---

## 9. Completion Conditions

Tracking design is complete when:

- all open issues are represented in issue comments with a valid lifecycle state,
- parent-child closure constraints are respected,
- the `80` PR summary comment reflects current global status,
- and the final `80` audit document update is ready for the closeout commit.
@@ -0,0 +1,203 @@

# Live Operation Issue Tracking Implementation Plan

> **For Claude:** REQUIRED SUB-SKILL: Use superpowers:executing-plans to implement this plan task-by-task.

**Goal:** Track real-operation confirmation for all open Gitea issues with per-issue comments and a synchronized summary on the `80` PR, then close issues in dependency-safe order.

**Architecture:** Use issue tickets as the primary state log with a strict four-state lifecycle. Maintain one aggregate status comment on the `task/80-implementation-audit-closeout` PR. Enforce child-first closure and parent close gating.

**Tech Stack:** Git CLI, Gitea CLI (`tea`), Markdown (`docs/ouroboros/80_implementation_audit.md`)

---

### Task 1: Snapshot open issues and classify the parent-child graph

**Files:**
- Create: none
- Modify: none
- Test: `tea issues ls` output inspection

**Step 1: Capture the open issue list**

Run: `tea issues ls --state open --limit 200`
Expected: list of all open issues (including `318~381`).

**Step 2: Fetch details for each issue**

Run: `for n in $(tea issues ls --state open --limit 200 --output json | jq -r '.[].number'); do tea issue view "$n"; done`
Expected: issue bodies/comments available for relation parsing.

**Step 3: Extract parent-child references**

Run: parse issue text for relation keywords (`parent`, `child`, `depends on`, `blocks`).
Expected: dependency map ready.

**Step 4: Validate unresolved dependency edges**

Run: verify that each parent references only existing issue numbers.
Expected: no dangling child references.

**Step 5: Commit**

```bash
# no commit (discovery task)
```
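Step 4's dangling-reference check can be sketched as follows. The dependency-map shape (`parent -> [children]`, with leaf issues also present as keys) is an assumption for illustration, not something the plan prescribes:

```python
def find_dangling_children(dependency_map: dict[int, list[int]]) -> list[tuple[int, int]]:
    """Return (parent, child) pairs where the child is not a known issue."""
    known = set(dependency_map)  # every tracked issue appears as a key
    return [
        (parent, child)
        for parent, children in dependency_map.items()
        for child in children
        if child not in known
    ]


# Issue 318 references 999, which does not exist in the snapshot.
deps = {318: [319, 999], 319: []}
print(find_dangling_children(deps))  # -> [(318, 999)]
```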

### Task 2: Initialize per-issue tracking comments (`NOT_STARTED`)

**Files:**
- Create: none
- Modify: none
- Test: issue comment presence check

**Step 1: Prepare the standard template text**

```text
Status: NOT_STARTED
Observed At: <KST>, <UTC>
Environment: live
Evidence: N/A
Next: Begin live observation for this issue.
```

**Step 2: Post the initialization comment to each open issue**

Run: `tea issues comment <issue-number> --message "<template>"`
Expected: each open issue has one initialization comment.

**Step 3: Verify comment coverage**

Run: sample-check each issue (`tea issue view <issue-number>`) for the latest status comment.
Expected: 100% coverage across open issues.

**Step 4: Commit**

```bash
# no commit (ticket operations)
```

### Task 3: Create and maintain the `80` PR aggregate status comment

**Files:**
- Modify: `docs/ouroboros/80_implementation_audit.md` (final sync only, not in this task)
- Test: PR comment contains the full issue table

**Step 1: Identify the PR number for `task/80-implementation-audit-closeout`**

Run: `tea pr ls --state open`
Expected: PR number for `task/80...` identified.

**Step 2: Post the initial summary comment**

Run: `tea pr comment <pr-number> --message "<issue-state-summary-table>"`
Expected: one aggregate comment listing `issue | state | last observed at | parent/child`.

**Step 3: Define the update protocol**

Run: update the same summary comment on every issue state change.
Expected: the summary remains the single-source aggregate view.

**Step 4: Commit**

```bash
# no commit (ticket/PR operations)
```
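To keep the aggregate comment mechanical, its table can be rendered from the same state records used for the per-issue comments. A sketch with an assumed record shape (`render_summary_table` and the dict keys are illustrative, not part of the plan):

```python
def render_summary_table(rows: list[dict]) -> str:
    """Render the aggregate `issue | state | last observed at | parent/child` table."""
    lines = [
        "| issue | state | last observed at | parent/child |",
        "|---|---|---|---|",
    ]
    for row in rows:
        lines.append(
            f"| #{row['issue']} | {row['state']} | {row['observed_at']} | {row['relation']} |"
        )
    return "\n".join(lines)


rows = [
    {"issue": 318, "state": "OBSERVING", "observed_at": "2026-03-02 10:00 KST", "relation": "parent of #319"},
    {"issue": 319, "state": "CONFIRMED", "observed_at": "2026-03-02 09:40 KST", "relation": "child of #318"},
]
print(render_summary_table(rows))
```

Regenerating the whole table on every state change keeps the single aggregate comment from drifting out of sync with the issue comments.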

### Task 4: Execute the live observation loop and mark `OBSERVING` -> `CONFIRMED`

**Files:**
- Create: none
- Modify: none
- Test: evidence-backed state transitions present in issue comments

**Step 1: Mark the active issue as `OBSERVING`**

Run: `tea issues comment <issue-number> --message "Status: OBSERVING ..."`
Expected: issue history shows observation started.

**Step 2: Capture one real-operation evidence item**

Run: collect a log/query/order identifier tied to the issue's behavior.
Expected: one concrete evidence reference captured.

**Step 3: Mark the issue `CONFIRMED`**

Run: `tea issues comment <issue-number> --message "Status: CONFIRMED ... Evidence: <id> ..."`
Expected: issue has an explicit confirmed state.

**Step 4: Sync the PR aggregate summary**

Run: update the `80` PR summary comment row for that issue.
Expected: PR summary and issue status are aligned.

**Step 5: Commit**

```bash
# no commit (ticket operations)
```

### Task 5: Close issues with child-first dependency enforcement

**Files:**
- Create: none
- Modify: none
- Test: parent close attempted only after all children are closed

**Step 1: Close confirmed child issues first**

Run: `tea issues close <child-issue-number>`
Expected: child issue state becomes closed.

**Step 2: Verify all children of each parent**

Run: evaluate the dependency map for remaining non-closed children.
Expected: parent close gate decision available.

**Step 3: Close eligible parent issues**

Run: `tea issues close <parent-issue-number>`
Expected: a parent is closed only when all of its children are closed.

**Step 4: Post a closure comment and sync the PR summary**

Run: comment `Status: CLOSED` on the closed issue and update its PR summary row.
Expected: closure trace exists in both the issue and the PR aggregate.

**Step 5: Commit**

```bash
# no commit (ticket operations)
```
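The child-first order in Steps 1-3 amounts to a post-order walk of the dependency map. A minimal sketch (cycle detection omitted; the `parent -> [children]` map shape is assumed for illustration):

```python
def child_first_close_order(children_of: dict[int, list[int]]) -> list[int]:
    """Return issue numbers ordered so every child precedes its parent."""
    order: list[int] = []
    visited: set[int] = set()

    def visit(issue: int) -> None:
        if issue in visited:
            return
        visited.add(issue)
        for child in children_of.get(issue, []):
            visit(child)  # close children before the parent
        order.append(issue)

    for issue in children_of:
        visit(issue)
    return order


# 318 is parent of 319 and 320; 320 is parent of 321.
deps = {318: [319, 320], 320: [321]}
print(child_first_close_order(deps))  # -> [319, 321, 320, 318]
```

Closing issues in this order guarantees the parent close gate never fires.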

### Task 6: Reflect final tracked outcomes in the `80` audit and commit

**Files:**
- Modify: `docs/ouroboros/80_implementation_audit.md`
- Test: markdown consistency + traceability check

**Step 1: Update `80` with final live-confirmed statuses**

Run: edit the status lines and evidence summary sections.
Expected: document reflects final issue states and confirmation notes.

**Step 2: Add an issue closure summary block**

Run: include a summary of the closed child/parent sequence.
Expected: an audit reader can verify the closure logic quickly.

**Step 3: Validate references**

Run: ensure issue numbers in the doc match actual closed/open states.
Expected: no mismatch between the doc and Gitea.

**Step 4: Commit the closeout doc update**

```bash
git add docs/ouroboros/80_implementation_audit.md
git commit -m "docs: finalize 80 audit with live-operation issue confirmations"
```

**Step 5: Push the branch**

Run: `git push`
Expected: closeout commit available on `task/80-implementation-audit-closeout`.
@@ -0,0 +1,99 @@

# Ouroboros Dual-Track Workflow Design (Revised)

**Date:** 2026-03-02
**Scope:** `docs/ouroboros/86_peak_probability_model_plan.md` and `docs/ouroboros/80_implementation_audit.md`

---

## 1. Goal

Prevent work loss and history contamination while running two parallel tracks, and correct an invalid starting condition (design commits created on a disposable temporary branch):

- Track A: open and iterate the `86` plan as a dedicated PR.
- Track B: finalize `80` only after real-operation behavior is confirmed, then reflect completion and close the related open Gitea issues.

---

## 2. Constraints and Current Context

- The current temporary branch is disposable and must not remain the authoritative base.
- The canonical base line for current work is `feature/v3-session-policy-stream`.
- The final integration strategy is to merge the full `feature/v3-session-policy-stream` line into `main` after operation-level validation is complete.

---

## 3. Approaches Considered

1. Keep using the temporary branch and continue from the existing design commits.
2. Create a clean base from `feature/v3-session-policy-stream` and move only the plan artifacts as one commit (recommended).
3. Split into two fresh bases (`86` and `80` separately) immediately.

Recommendation rationale:

- Approach 2 removes the temporary-branch contamination risk.
- It preserves clean lineage with a single "official starting point" commit.
- It keeps the downstream split into `86` and `80` straightforward.

---

## 4. Approved Branch and PR Topology

- Long-lived source baseline: `feature/v3-session-policy-stream`
- New official base: `base/ouroboros-workflow-20260302` (from `feature/v3-session-policy-stream`)
  - The official base contains only the two workflow plan docs, squashed into one commit.
- Track A branch: `task/86-peak-probability-plan` (from the official base)
- Track B branch: `task/80-implementation-audit-closeout` (from the official base)

PR direction:

- PR-A (`86`): `task/86-peak-probability-plan -> base/ouroboros-workflow-20260302`
- PR-B (`80`): `task/80-implementation-audit-closeout -> base/ouroboros-workflow-20260302`
- Final integration: the `feature/v3-session-policy-stream` line merged into `main`

---

## 5. Completion Criteria per Track

### Track A (`86`)

- Open the initial PR early with the planning document.
- Continue iterative concretization in follow-up commits on the same branch.

### Track B (`80`)

- Do not close based on test-command artifacts alone.
- Close only when the modified behavior is confirmed in real operation.
- Update `80_implementation_audit.md` with:
  - observed behavior changes,
  - confirmation timing,
  - an operation-side evidence summary.
- After `80` real-operation confirmation and the doc update, close the related open Gitea issues immediately (in the same phase).
- Parent issues close only when all of their child issues are confirmed closed.

---

## 6. Loss-Prevention Rules

- Preserve the temporary branch's unmerged changes first (stash or archival commit).
- Exclude `workflow/session-handover.md` and `docs/ouroboros/86_peak_probability_model_plan.md` from the official base-seeding commit.
- Enforce file-scope separation:
  - PR-A should avoid `80` edits.
  - PR-B should avoid `86` edits.
- Keep commit messages scoped to one track per commit.

---

## 7. Merge Strategy Decision

- For this workflow, prefer merge-based lineage preservation.
- Because the final promotion unit is the full `feature/v3-session-policy-stream` stream, avoid cherry-pick/rebase surgery unless a contamination incident occurs.

---

## 8. Immediate Next Actions

1. Seed `base/ouroboros-workflow-20260302` from `feature/v3-session-policy-stream` with one squashed workflow-doc commit.
2. Verify that the seed commit's file scope contains only the two plan docs.
3. Create and open PR-A for `86`.
4. Complete real-operation confirmation for `80`, update the audit doc, then close the related Gitea issues (children first, then parents).
5. Merge the validated `feature/v3-session-policy-stream` line into `main`.
@@ -8,32 +8,8 @@ CHECK_INTERVAL="${CHECK_INTERVAL:-30}"
TMUX_AUTO="${TMUX_AUTO:-true}"
TMUX_ATTACH="${TMUX_ATTACH:-true}"
TMUX_SESSION_PREFIX="${TMUX_SESSION_PREFIX:-ouroboros_overnight}"
STARTUP_GRACE_SEC="${STARTUP_GRACE_SEC:-3}"
dashboard_port="${DASHBOARD_PORT:-8080}"
APP_CMD_BIN="${APP_CMD_BIN:-}"
APP_CMD_ARGS="${APP_CMD_ARGS:-}"
RUNS_DASHBOARD="false"

# Custom override contract:
# 1) Preferred: APP_CMD_BIN + APP_CMD_ARGS
#    - APP_CMD_BIN is treated as a single executable token.
#    - APP_CMD_ARGS uses shell-style word splitting; quoting/escaping inside this
#      variable is NOT preserved as a nested shell parse.
# 2) Legacy fallback: APP_CMD (raw shell command string)
#    - This path remains for backward compatibility.
#    - When APP_CMD includes --dashboard, the caller should include an explicit
#      DASHBOARD_PORT assignment in APP_CMD if a non-default port is required.

if [ -n "$APP_CMD_BIN" ]; then
  USE_DEFAULT_APP_CMD="false"
  USE_SAFE_CUSTOM_APP_CMD="true"
  APP_CMD="${APP_CMD_BIN} ${APP_CMD_ARGS}"
  if [[ " $APP_CMD_ARGS " == *" --dashboard "* ]]; then
    RUNS_DASHBOARD="true"
  fi
elif [ -z "${APP_CMD:-}" ]; then
  USE_DEFAULT_APP_CMD="true"
  USE_SAFE_CUSTOM_APP_CMD="false"
  if [ -z "${APP_CMD:-}" ]; then
  if [ -x ".venv/bin/python" ]; then
    PYTHON_BIN=".venv/bin/python"
  elif command -v python3 >/dev/null 2>&1; then
@@ -45,14 +21,9 @@ elif [ -z "${APP_CMD:-}" ]; then
    exit 1
  fi

  APP_CMD="$PYTHON_BIN -m src.main --mode=live --dashboard"
  RUNS_DASHBOARD="true"
else
  USE_DEFAULT_APP_CMD="false"
  USE_SAFE_CUSTOM_APP_CMD="false"
  if [[ "$APP_CMD" == *"--dashboard"* ]]; then
    RUNS_DASHBOARD="true"
  fi
  dashboard_port="${DASHBOARD_PORT:-8080}"

  APP_CMD="DASHBOARD_PORT=$dashboard_port $PYTHON_BIN -m src.main --mode=live --dashboard"
fi

mkdir -p "$LOG_DIR"
@@ -63,24 +34,6 @@ WATCHDOG_LOG="$LOG_DIR/watchdog_${timestamp}.log"
PID_FILE="$LOG_DIR/app.pid"
WATCHDOG_PID_FILE="$LOG_DIR/watchdog.pid"

is_port_in_use() {
  local port="$1"
  if command -v ss >/dev/null 2>&1; then
    ss -ltn 2>/dev/null | grep -Eq ":${port}[[:space:]]"
    return $?
  fi
  if command -v lsof >/dev/null 2>&1; then
    lsof -nP -iTCP:"$port" -sTCP:LISTEN >/dev/null 2>&1
    return $?
  fi
  if command -v netstat >/dev/null 2>&1; then
    netstat -ltn 2>/dev/null | grep -Eq "[:.]${port}[[:space:]]"
    return $?
  fi
  # No supported socket inspection command found.
  return 1
}

if [ -f "$PID_FILE" ]; then
  old_pid="$(cat "$PID_FILE" || true)"
  if [ -n "$old_pid" ] && kill -0 "$old_pid" 2>/dev/null; then
@@ -90,29 +43,7 @@ if [ -f "$PID_FILE" ]; then
fi

echo "[$(date -u +"%Y-%m-%dT%H:%M:%SZ")] starting: $APP_CMD" | tee -a "$RUN_LOG"
if [ "$RUNS_DASHBOARD" = "true" ] && is_port_in_use "$dashboard_port"; then
  echo "[$(date -u +"%Y-%m-%dT%H:%M:%SZ")] startup failed: dashboard port ${dashboard_port} already in use" | tee -a "$RUN_LOG"
  exit 1
fi

if [ "$USE_DEFAULT_APP_CMD" = "true" ]; then
  # Default path avoids shell word-splitting on executable paths.
  nohup env DASHBOARD_PORT="$dashboard_port" "$PYTHON_BIN" -m src.main --mode=live --dashboard >>"$RUN_LOG" 2>&1 &
elif [ "$USE_SAFE_CUSTOM_APP_CMD" = "true" ]; then
  # Safer custom path: executable path is handled as a single token.
  if [ -n "$APP_CMD_ARGS" ]; then
    # shellcheck disable=SC2206
    app_args=( $APP_CMD_ARGS )
    nohup env DASHBOARD_PORT="$dashboard_port" "$APP_CMD_BIN" "${app_args[@]}" >>"$RUN_LOG" 2>&1 &
  else
    nohup env DASHBOARD_PORT="$dashboard_port" "$APP_CMD_BIN" >>"$RUN_LOG" 2>&1 &
  fi
else
  # Custom APP_CMD is treated as a shell command string.
  # If executable paths include spaces, they must be quoted inside APP_CMD.
  # Legacy compatibility path: caller owns quoting and env var injection.
  nohup bash -lc "exec env $APP_CMD" >>"$RUN_LOG" 2>&1 &
fi
nohup bash -lc "$APP_CMD" >>"$RUN_LOG" 2>&1 &
app_pid=$!
echo "$app_pid" > "$PID_FILE"

@@ -123,20 +54,6 @@ nohup env PID_FILE="$PID_FILE" LOG_FILE="$WATCHDOG_LOG" CHECK_INTERVAL="$CHECK_I
watchdog_pid=$!
echo "$watchdog_pid" > "$WATCHDOG_PID_FILE"

sleep "$STARTUP_GRACE_SEC"
if ! kill -0 "$app_pid" 2>/dev/null; then
  echo "[$(date -u +"%Y-%m-%dT%H:%M:%SZ")] startup failed: app process exited early (pid=$app_pid)" | tee -a "$RUN_LOG"
  [ -n "${watchdog_pid:-}" ] && kill "$watchdog_pid" 2>/dev/null || true
  tail -n 20 "$RUN_LOG" || true
  exit 1
fi
if ! kill -0 "$watchdog_pid" 2>/dev/null; then
  echo "[$(date -u +"%Y-%m-%dT%H:%M:%SZ")] startup failed: watchdog exited early (pid=$watchdog_pid)" | tee -a "$WATCHDOG_LOG"
  kill "$app_pid" 2>/dev/null || true
  tail -n 20 "$WATCHDOG_LOG" || true
  exit 1
fi

cat <<EOF
Startup complete
- app pid: $app_pid
@@ -7,15 +7,12 @@ ROOT_DIR="${ROOT_DIR:-/home/agentson/repos/The-Ouroboros}"
LOG_DIR="${LOG_DIR:-$ROOT_DIR/data/overnight}"
INTERVAL_SEC="${INTERVAL_SEC:-60}"
MAX_HOURS="${MAX_HOURS:-24}"
MAX_LOOPS="${MAX_LOOPS:-0}"
POLICY_TZ="${POLICY_TZ:-Asia/Seoul}"
DASHBOARD_PORT="${DASHBOARD_PORT:-8080}"

cd "$ROOT_DIR"

OUT_LOG="$LOG_DIR/runtime_verify_$(date +%Y%m%d_%H%M%S).log"
END_TS=$(( $(date +%s) + MAX_HOURS*3600 ))
loops=0

log() {
  printf '%s %s\n' "$(date -u +%Y-%m-%dT%H:%M:%SZ)" "$1" | tee -a "$OUT_LOG" >/dev/null
@@ -34,11 +31,6 @@ check_signal() {
  return 1
}

find_live_pids() {
  # Detect live-mode process even when run_overnight pid files are absent.
  pgrep -af "[s]rc.main --mode=live" 2>/dev/null | awk '{print $1}' | tr '\n' ',' | sed 's/,$//'
}

check_forbidden() {
  local name="$1"
  local pattern="$2"
@@ -52,94 +44,42 @@ check_forbidden() {
  return 0
}

is_port_listening() {
  local port="$1"

  if command -v ss >/dev/null 2>&1; then
    ss -ltn 2>/dev/null | grep -Eq ":${port}[[:space:]]"
    return $?
  fi
  if command -v lsof >/dev/null 2>&1; then
    lsof -nP -iTCP:"$port" -sTCP:LISTEN >/dev/null 2>&1
    return $?
  fi
  if command -v netstat >/dev/null 2>&1; then
    netstat -ltn 2>/dev/null | grep -Eq "[:.]${port}[[:space:]]"
    return $?
  fi
  return 1
}

log "[INFO] runtime verify monitor started interval=${INTERVAL_SEC}s max_hours=${MAX_HOURS} policy_tz=${POLICY_TZ}"

while true; do
  loops=$((loops + 1))
  now=$(date +%s)
  if [ "$now" -ge "$END_TS" ]; then
    log "[INFO] monitor completed (time window reached)"
    exit 0
  fi
  if [ "$MAX_LOOPS" -gt 0 ] && [ "$loops" -gt "$MAX_LOOPS" ]; then
    log "[INFO] monitor completed (max loops reached)"
    exit 0
  fi

  latest_run="$(ls -t "$LOG_DIR"/run_*.log 2>/dev/null | head -n1 || true)"
  if [ -z "$latest_run" ]; then
    log "[ANOMALY] no run log found"
    sleep "$INTERVAL_SEC"
    continue
  fi

  # Basic liveness hints.
  app_pid="$(cat "$LOG_DIR/app.pid" 2>/dev/null || true)"
  wd_pid="$(cat "$LOG_DIR/watchdog.pid" 2>/dev/null || true)"
  live_pids="$(find_live_pids)"
  app_alive=0
  wd_alive=0
  port_alive=0
  [ -n "$app_pid" ] && kill -0 "$app_pid" 2>/dev/null && app_alive=1
  [ -n "$wd_pid" ] && kill -0 "$wd_pid" 2>/dev/null && wd_alive=1
  if [ "$app_alive" -eq 0 ] && [ -n "$live_pids" ]; then
    app_alive=1
  fi
  is_port_listening "$DASHBOARD_PORT" && port_alive=1
  log "[HEARTBEAT] run_log=${latest_run:-none} app_alive=$app_alive watchdog_alive=$wd_alive port=${DASHBOARD_PORT} alive=$port_alive live_pids=${live_pids:-none}"

  defer_log_checks=0
  if [ -z "$latest_run" ] && [ "$app_alive" -eq 1 ]; then
    defer_log_checks=1
    log "[INFO] run log not yet available; defer log-based coverage checks"
  fi

  if [ -z "$latest_run" ] && [ "$defer_log_checks" -eq 0 ]; then
    log "[ANOMALY] no run log found"
  fi
  ss -ltnp 2>/dev/null | rg -q ':8080' && port_alive=1
  log "[HEARTBEAT] run_log=$latest_run app_alive=$app_alive watchdog_alive=$wd_alive port8080=$port_alive"

  # Coverage matrix rows (session paths and policy gate evidence).
  not_observed=0
  if [ "$app_alive" -eq 1 ]; then
    log "[COVERAGE] LIVE_MODE=PASS source=process_liveness"
  else
    if [ -n "$latest_run" ]; then
      check_signal "LIVE_MODE" "Mode: live" "$latest_run" || not_observed=$((not_observed+1))
    else
      log "[COVERAGE] LIVE_MODE=NOT_OBSERVED reason=no_run_log_no_live_pid"
      not_observed=$((not_observed+1))
    fi
  fi
  if [ "$defer_log_checks" -eq 1 ]; then
    for deferred in KR_LOOP NXT_PATH US_PRE_PATH US_DAY_PATH US_AFTER_PATH ORDER_POLICY_SESSION; do
      log "[COVERAGE] ${deferred}=DEFERRED reason=no_run_log_process_alive"
    done
  elif [ -n "$latest_run" ]; then
    check_signal "KR_LOOP" "Processing market: Korea Exchange" "$latest_run" || not_observed=$((not_observed+1))
    check_signal "NXT_PATH" "NXT_PRE|NXT_AFTER|session=NXT_" "$latest_run" || not_observed=$((not_observed+1))
    check_signal "US_PRE_PATH" "US_PRE|session=US_PRE" "$latest_run" || not_observed=$((not_observed+1))
    check_signal "US_DAY_PATH" "US_DAY|session=US_DAY|Processing market: .*NASDAQ|Processing market: .*NYSE|Processing market: .*AMEX" "$latest_run" || not_observed=$((not_observed+1))
    check_signal "US_AFTER_PATH" "US_AFTER|session=US_AFTER" "$latest_run" || not_observed=$((not_observed+1))
    check_signal "ORDER_POLICY_SESSION" "Order policy rejected .*\\[session=" "$latest_run" || not_observed=$((not_observed+1))
  else
    for missing in KR_LOOP NXT_PATH US_PRE_PATH US_DAY_PATH US_AFTER_PATH ORDER_POLICY_SESSION; do
      log "[COVERAGE] ${missing}=NOT_OBSERVED reason=no_run_log"
      not_observed=$((not_observed+1))
    done
  fi

  if [ "$not_observed" -gt 0 ]; then
    log "[ANOMALY] coverage_not_observed=$not_observed (treat as FAIL)"
@@ -155,17 +95,11 @@ while true; do
    is_weekend=1
  fi

  if [ "$defer_log_checks" -eq 1 ]; then
    log "[FORBIDDEN] WEEKEND_KR_SESSION_ACTIVE=SKIP reason=no_run_log_process_alive"
  elif [ "$is_weekend" -eq 1 ]; then
  if [ "$is_weekend" -eq 1 ]; then
    # Weekend policy: KR regular session loop must never appear.
    if [ -n "$latest_run" ]; then
      check_forbidden "WEEKEND_KR_SESSION_ACTIVE" \
        "Market session active: KR|session=KRX_REG|Processing market: Korea Exchange" \
        "$latest_run" || forbidden_hits=$((forbidden_hits+1))
    else
      log "[FORBIDDEN] WEEKEND_KR_SESSION_ACTIVE=SKIP reason=no_run_log"
    fi
  else
    log "[FORBIDDEN] WEEKEND_KR_SESSION_ACTIVE=SKIP reason=weekday"
  fi
@@ -5,8 +5,6 @@ from __future__ import annotations

import argparse
import json
import os
import shutil
import re
import subprocess
import sys
@@ -14,31 +12,11 @@ from pathlib import Path

HEADER_PATTERN = re.compile(r"^##\s+\S+", re.MULTILINE)
LIST_ITEM_PATTERN = re.compile(r"^\s*(?:-|\*|\d+\.)\s+\S+", re.MULTILINE)
FENCED_CODE_PATTERN = re.compile(r"```.*?```", re.DOTALL)
INLINE_CODE_PATTERN = re.compile(r"`[^`]*`")


def _strip_code_segments(text: str) -> str:
    without_fences = FENCED_CODE_PATTERN.sub("", text)
    return INLINE_CODE_PATTERN.sub("", without_fences)


def resolve_tea_binary() -> str:
    tea_from_path = shutil.which("tea")
    if tea_from_path:
        return tea_from_path

    tea_home = Path.home() / "bin" / "tea"
    if tea_home.exists() and tea_home.is_file() and os.access(tea_home, os.X_OK):
        return str(tea_home)

    raise RuntimeError("tea binary not found (checked PATH and ~/bin/tea)")


def validate_pr_body_text(text: str) -> list[str]:
    errors: list[str] = []
    searchable = _strip_code_segments(text)
    if "\\n" in searchable:
    if "\\n" in text and "\n" not in text:
        errors.append("body contains escaped newline sequence (\\n)")
    if text.count("```") % 2 != 0:
        errors.append("body has unbalanced fenced code blocks (``` count is odd)")
@@ -50,11 +28,10 @@ def validate_pr_body_text(text: str) -> list[str]:


def fetch_pr_body(pr_number: int) -> str:
    tea_binary = resolve_tea_binary()
    try:
        completed = subprocess.run(
            [
                tea_binary,
                "tea",
                "api",
                "-R",
                "origin",
@@ -64,7 +41,7 @@ def fetch_pr_body(pr_number: int) -> str:
            capture_output=True,
            text=True,
        )
    except (subprocess.CalledProcessError, FileNotFoundError, PermissionError) as exc:
    except (subprocess.CalledProcessError, FileNotFoundError) as exc:
        raise RuntimeError(f"failed to fetch PR #{pr_number}: {exc}") from exc

    try:
src/main.py (56 lines changed)
@@ -2084,15 +2084,6 @@ async def trading_cycle(
|
||||
quantity=quantity,
|
||||
price=order_price,
|
||||
)
|
||||
if result.get("rt_cd", "0") != "0":
|
||||
order_succeeded = False
|
||||
msg1 = result.get("msg1") or ""
|
||||
logger.warning(
|
||||
"KR order not accepted for %s: rt_cd=%s msg=%s",
|
||||
stock_code,
|
||||
result.get("rt_cd"),
|
||||
msg1,
|
||||
)
|
||||
else:
|
||||
# For overseas orders, always use limit orders (지정가):
|
||||
# - KIS market orders (ORD_DVSN=01) calculate quantity based on upper limit
|
||||
@@ -3302,15 +3293,6 @@ async def run_daily_session(
|
||||
quantity=quantity,
|
||||
price=order_price,
|
||||
)
|
||||
if result.get("rt_cd", "0") != "0":
|
||||
order_succeeded = False
|
||||
daily_msg1 = result.get("msg1") or ""
|
||||
logger.warning(
|
||||
"KR order not accepted for %s: rt_cd=%s msg=%s",
|
||||
stock_code,
|
||||
result.get("rt_cd"),
|
||||
daily_msg1,
|
||||
)
|
||||
else:
|
||||
# KIS VTS only accepts limit orders; use 0.5% premium for BUY
|
||||
if decision.action == "BUY":
|
||||
@@ -3550,20 +3532,6 @@ def _run_context_scheduler(
|
||||
)
|
||||
|
||||
|
||||
def _has_market_session_transition(
|
||||
market_states: dict[str, str], market_code: str, session_id: str
|
||||
) -> bool:
|
||||
"""Return True when market session changed (or market has no prior state)."""
|
||||
return market_states.get(market_code) != session_id
|
||||
|
||||
|
||||
def _should_rescan_market(
|
||||
*, last_scan: float, now_timestamp: float, rescan_interval: float, session_changed: bool
|
||||
) -> bool:
|
||||
"""Force rescan on session transition; otherwise follow interval cadence."""
|
||||
return session_changed or (now_timestamp - last_scan >= rescan_interval)
|
||||
|
||||
|
||||
async def _run_evolution_loop(
|
||||
evolution_optimizer: EvolutionOptimizer,
|
||||
telegram: TelegramClient,
|
||||
@@ -4077,7 +4045,7 @@ async def run(settings: Settings) -> None:
     last_scan_time: dict[str, float] = {}

     # Track market open/close state for notifications
-    _market_states: dict[str, str] = {}  # market_code -> session_id
+    _market_states: dict[str, bool] = {}  # market_code -> is_open

     # Trading control events
     shutdown = asyncio.Event()
@@ -4195,8 +4163,8 @@ async def run(settings: Settings) -> None:

         if not open_markets:
             # Notify market close for any markets that were open
-            for market_code, session_id in list(_market_states.items()):
-                if session_id:
+            for market_code, is_open in list(_market_states.items()):
+                if is_open:
                     try:
                         from src.markets.schedule import MARKETS

@@ -4213,7 +4181,7 @@ async def run(settings: Settings) -> None:
                         )
                     except Exception as exc:
                         logger.warning("Market close notification failed: %s", exc)
-                    _market_states.pop(market_code, None)
+                    _market_states[market_code] = False
                     # Clear playbook for closed market (new one generated next open)
                     playbooks.pop(market_code, None)

@@ -4259,16 +4227,13 @@ async def run(settings: Settings) -> None:
                 settings=settings,
             )

-            # Notify on market/session transition (e.g., US_PRE -> US_REG)
-            session_changed = _has_market_session_transition(
-                _market_states, market.code, session_info.session_id
-            )
-            if session_changed:
+            # Notify market open if it just opened
+            if not _market_states.get(market.code, False):
                 try:
                     await telegram.notify_market_open(market.name)
                 except Exception as exc:
                     logger.warning("Market open notification failed: %s", exc)
-            _market_states[market.code] = session_info.session_id
+            _market_states[market.code] = True

             # Check and handle domestic pending (unfilled) limit orders.
             if market.is_domestic:
@@ -4300,12 +4265,7 @@ async def run(settings: Settings) -> None:
             now_timestamp = asyncio.get_event_loop().time()
             last_scan = last_scan_time.get(market.code, 0.0)
             rescan_interval = settings.RESCAN_INTERVAL_SECONDS
-            if _should_rescan_market(
-                last_scan=last_scan,
-                now_timestamp=now_timestamp,
-                rescan_interval=rescan_interval,
-                session_changed=session_changed,
-            ):
+            if now_timestamp - last_scan >= rescan_interval:
                 try:
                     logger.info("Smart Scanner: Scanning %s market", market.name)

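The replacement logic in the last hunk above drops the session-transition override and keeps only the interval check. A minimal, hypothetical harness sketching that interval-only rescan gate (names like `maybe_scan` are illustrative, not from the codebase):

```python
# Standalone sketch of the interval gate: a market is rescanned only when at
# least `rescan_interval` seconds have elapsed since its last recorded scan.
def _should_scan(last_scan: float, now_timestamp: float, rescan_interval: float) -> bool:
    return now_timestamp - last_scan >= rescan_interval

# Mirrors last_scan_time in the diff: market_code -> monotonic timestamp.
last_scan_time: dict[str, float] = {}

def maybe_scan(market_code: str, now: float, interval: float = 300.0) -> bool:
    """Record and report whether a scan should fire at time `now`."""
    if _should_scan(last_scan_time.get(market_code, 0.0), now, interval):
        last_scan_time[market_code] = now
        return True
    return False

print(maybe_scan("US_NASDAQ", now=1000.0))  # True: no prior scan recorded
print(maybe_scan("US_NASDAQ", now=1050.0))  # False: only 50s < 300s elapsed
print(maybe_scan("US_NASDAQ", now=1301.0))  # True: 301s >= 300s elapsed
```

Note that with the helper removed, a session transition (e.g. US_PRE to US_REG) no longer forces an immediate rescan; the next scan waits for the interval.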
@@ -207,7 +207,7 @@ def get_open_markets(
             from src.core.order_policy import classify_session_id

             session_id = classify_session_id(market, now)
-            return session_id not in {"KR_OFF", "US_OFF", "US_DAY"}
+            return session_id not in {"KR_OFF", "US_OFF"}
         return is_market_open(market, now)

     open_markets = [
@@ -254,10 +254,10 @@ def get_next_market_open(
         from src.core.order_policy import classify_session_id

         ts = start_utc.astimezone(ZoneInfo("UTC")).replace(second=0, microsecond=0)
-        prev_active = classify_session_id(market, ts) not in {"KR_OFF", "US_OFF", "US_DAY"}
+        prev_active = classify_session_id(market, ts) not in {"KR_OFF", "US_OFF"}
         for _ in range(7 * 24 * 60):
            ts = ts + timedelta(minutes=1)
-            active = classify_session_id(market, ts) not in {"KR_OFF", "US_OFF", "US_DAY"}
+            active = classify_session_id(market, ts) not in {"KR_OFF", "US_OFF"}
             if active and not prev_active:
                 return ts
             prev_active = active

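The schedule change above shrinks the set of non-tradable session IDs: US_DAY is no longer excluded. An illustrative stand-in for the membership check after this change (`classify_session_id` itself is not reproduced; `is_tradable` is a hypothetical name for demonstration only):

```python
# After the change, a session is tradable unless classified as one of these.
# Before the change, "US_DAY" was also in this set.
OFF_SESSIONS = {"KR_OFF", "US_OFF"}

def is_tradable(session_id: str) -> bool:
    """Return True when the classified session permits trading."""
    return session_id not in OFF_SESSIONS

print(is_tradable("US_DAY"))  # True: US_DAY is now treated as tradable
print(is_tradable("US_OFF"))  # False
print(is_tradable("KR_OFF"))  # False
```

This is why `get_next_market_open` now finds an earlier OFF-to-ON transition in the tests further down: the minute-stepping loop flips to "active" as soon as the session leaves the smaller OFF set.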
@@ -34,7 +34,6 @@ from src.main import (
     _extract_held_codes_from_balance,
     _extract_held_qty_from_balance,
     _handle_market_close,
-    _has_market_session_transition,
     _inject_staged_exit_features,
     _maybe_queue_order_intent,
     _resolve_market_setting,
@@ -44,7 +43,6 @@ from src.main import (
     _run_evolution_loop,
     _should_block_overseas_buy_for_fx_buffer,
     _should_force_exit_for_overnight,
-    _should_rescan_market,
     _split_trade_pnl_components,
     _start_dashboard_server,
     _stoploss_cooldown_minutes,
@@ -142,38 +140,6 @@ class TestExtractAvgPriceFromBalance:
         result = _extract_avg_price_from_balance(balance, "AAPL", is_domestic=False)
         assert result == 170.5


-class TestRealtimeSessionStateHelpers:
-    """Tests for realtime loop session-transition/rescan helper logic."""
-
-    def test_has_market_session_transition_when_state_missing(self) -> None:
-        states: dict[str, str] = {}
-        assert _has_market_session_transition(states, "US_NASDAQ", "US_REG")
-
-    def test_has_market_session_transition_when_session_changes(self) -> None:
-        states = {"US_NASDAQ": "US_PRE"}
-        assert _has_market_session_transition(states, "US_NASDAQ", "US_REG")
-
-    def test_has_market_session_transition_false_when_same_session(self) -> None:
-        states = {"US_NASDAQ": "US_REG"}
-        assert not _has_market_session_transition(states, "US_NASDAQ", "US_REG")
-
-    def test_should_rescan_market_forces_on_session_transition(self) -> None:
-        assert _should_rescan_market(
-            last_scan=1000.0,
-            now_timestamp=1050.0,
-            rescan_interval=300.0,
-            session_changed=True,
-        )
-
-    def test_should_rescan_market_uses_interval_without_transition(self) -> None:
-        assert not _should_rescan_market(
-            last_scan=1000.0,
-            now_timestamp=1050.0,
-            rescan_interval=300.0,
-            session_changed=False,
-        )
-
     def test_returns_zero_when_field_absent(self) -> None:
         """Returns 0.0 when pchs_avg_pric key is missing entirely."""
         balance = {"output1": [{"pdno": "005930", "ord_psbl_qty": "5"}]}
@@ -947,46 +913,6 @@ class TestTradingCycleTelegramIntegration:
         # Verify notification was attempted
         mock_telegram.notify_trade_execution.assert_called_once()

-    @pytest.mark.asyncio
-    async def test_kr_rejected_order_does_not_notify_or_log_trade(
-        self,
-        mock_broker: MagicMock,
-        mock_overseas_broker: MagicMock,
-        mock_scenario_engine: MagicMock,
-        mock_playbook: DayPlaybook,
-        mock_risk: MagicMock,
-        mock_db: MagicMock,
-        mock_decision_logger: MagicMock,
-        mock_context_store: MagicMock,
-        mock_criticality_assessor: MagicMock,
-        mock_telegram: MagicMock,
-        mock_market: MagicMock,
-    ) -> None:
-        """KR orders rejected by KIS should not trigger success side effects."""
-        mock_broker.send_order = AsyncMock(
-            return_value={"rt_cd": "1", "msg1": "장운영시간이 아닙니다."}
-        )
-
-        with patch("src.main.log_trade") as mock_log_trade:
-            await trading_cycle(
-                broker=mock_broker,
-                overseas_broker=mock_overseas_broker,
-                scenario_engine=mock_scenario_engine,
-                playbook=mock_playbook,
-                risk=mock_risk,
-                db_conn=mock_db,
-                decision_logger=mock_decision_logger,
-                context_store=mock_context_store,
-                criticality_assessor=mock_criticality_assessor,
-                telegram=mock_telegram,
-                market=mock_market,
-                stock_code="005930",
-                scan_candidates={},
-            )
-
-        mock_telegram.notify_trade_execution.assert_not_called()
-        mock_log_trade.assert_not_called()
-
     @pytest.mark.asyncio
     async def test_fat_finger_notification_sent(
         self,

@@ -165,17 +165,6 @@ class TestGetOpenMarkets:
         )
         assert {m.code for m in extended} == {"US_NASDAQ", "US_NYSE", "US_AMEX"}

-    def test_get_open_markets_excludes_us_day_when_extended_enabled(self) -> None:
-        """US_DAY should be treated as non-tradable even in extended-session lookup."""
-        # Monday 2026-02-02 10:30 KST = 01:30 UTC (US_DAY by session classification)
-        test_time = datetime(2026, 2, 2, 1, 30, tzinfo=ZoneInfo("UTC"))
-        extended = get_open_markets(
-            enabled_markets=["US_NASDAQ", "US_NYSE", "US_AMEX"],
-            now=test_time,
-            include_extended_sessions=True,
-        )
-        assert extended == []
-

 class TestGetNextMarketOpen:
     """Test get_next_market_open function."""
@@ -225,8 +214,8 @@ class TestGetNextMarketOpen:
     def test_get_next_market_open_prefers_extended_session(self) -> None:
         """Extended lookup should return premarket open time before regular open."""
         # Monday 2026-02-02 07:00 EST = 12:00 UTC
-        # US_DAY is treated as non-tradable in extended lookup, so after entering
-        # US_DAY the next tradable OFF->ON transition is US_PRE at 09:00 UTC next day.
+        # By v3 KST session rules, US is OFF only in KST 07:00-10:00 (UTC 22:00-01:00).
+        # At 12:00 UTC market is active, so next OFF->ON transition is 01:00 UTC next day.
         test_time = datetime(2026, 2, 2, 12, 0, tzinfo=ZoneInfo("UTC"))
         market, next_open = get_next_market_open(
             enabled_markets=["US_NASDAQ"],
@@ -234,7 +223,7 @@ class TestGetNextMarketOpen:
             include_extended_sessions=True,
         )
         assert market.code == "US_NASDAQ"
-        assert next_open == datetime(2026, 2, 3, 9, 0, tzinfo=ZoneInfo("UTC"))
+        assert next_open == datetime(2026, 2, 3, 1, 0, tzinfo=ZoneInfo("UTC"))


 class TestExpandMarketCodes:

@@ -1,160 +0,0 @@
-from __future__ import annotations
-
-import os
-import signal
-import socket
-import subprocess
-from pathlib import Path
-
-import pytest
-
-REPO_ROOT = Path(__file__).resolve().parent.parent
-RUN_OVERNIGHT = REPO_ROOT / "scripts" / "run_overnight.sh"
-RUNTIME_MONITOR = REPO_ROOT / "scripts" / "runtime_verify_monitor.sh"
-
-
-def _latest_runtime_log(log_dir: Path) -> str:
-    logs = sorted(log_dir.glob("runtime_verify_*.log"))
-    assert logs, "runtime monitor did not produce log output"
-    return logs[-1].read_text(encoding="utf-8")
-
-
-def test_runtime_verify_monitor_detects_live_process_without_pid_files(tmp_path: Path) -> None:
-    log_dir = tmp_path / "overnight"
-    log_dir.mkdir(parents=True, exist_ok=True)
-
-    fake_live = subprocess.Popen(
-        ["bash", "-lc", 'exec -a "src.main --mode=live" sleep 10'],
-        cwd=REPO_ROOT,
-    )
-    try:
-        env = os.environ.copy()
-        env.update(
-            {
-                "ROOT_DIR": str(REPO_ROOT),
-                "LOG_DIR": str(log_dir),
-                "INTERVAL_SEC": "1",
-                "MAX_HOURS": "1",
-                "MAX_LOOPS": "1",
-                "POLICY_TZ": "UTC",
-            }
-        )
-        completed = subprocess.run(
-            ["bash", str(RUNTIME_MONITOR)],
-            cwd=REPO_ROOT,
-            env=env,
-            capture_output=True,
-            text=True,
-            check=False,
-        )
-        assert completed.returncode == 0, completed.stderr
-
-        log_text = _latest_runtime_log(log_dir)
-        assert "app_alive=1" in log_text
-        assert "[COVERAGE] LIVE_MODE=PASS source=process_liveness" in log_text
-        assert "[ANOMALY]" not in log_text
-    finally:
-        fake_live.terminate()
-        fake_live.wait(timeout=5)
-
-
-def test_run_overnight_fails_fast_when_dashboard_port_in_use(tmp_path: Path) -> None:
-    log_dir = tmp_path / "overnight"
-    log_dir.mkdir(parents=True, exist_ok=True)
-
-    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
-    sock.bind(("127.0.0.1", 0))
-    sock.listen(1)
-    port = sock.getsockname()[1]
-    try:
-        env = os.environ.copy()
-        env.update(
-            {
-                "LOG_DIR": str(log_dir),
-                "TMUX_AUTO": "false",
-                "DASHBOARD_PORT": str(port),
-            }
-        )
-        completed = subprocess.run(
-            ["bash", str(RUN_OVERNIGHT)],
-            cwd=REPO_ROOT,
-            env=env,
-            capture_output=True,
-            text=True,
-            check=False,
-        )
-        assert completed.returncode != 0
-        output = f"{completed.stdout}\n{completed.stderr}"
-        assert "already in use" in output
-    finally:
-        sock.close()
-
-
-def test_run_overnight_writes_live_pid_and_watchdog_pid(tmp_path: Path) -> None:
-    log_dir = tmp_path / "overnight"
-    log_dir.mkdir(parents=True, exist_ok=True)
-
-    env = os.environ.copy()
-    env.update(
-        {
-            "LOG_DIR": str(log_dir),
-            "TMUX_AUTO": "false",
-            "STARTUP_GRACE_SEC": "1",
-            "CHECK_INTERVAL": "2",
-            "APP_CMD_BIN": "sleep",
-            "APP_CMD_ARGS": "10",
-        }
-    )
-    completed = subprocess.run(
-        ["bash", str(RUN_OVERNIGHT)],
-        cwd=REPO_ROOT,
-        env=env,
-        capture_output=True,
-        text=True,
-        check=False,
-    )
-    assert completed.returncode == 0, f"{completed.stdout}\n{completed.stderr}"
-
-    app_pid = int((log_dir / "app.pid").read_text(encoding="utf-8").strip())
-    watchdog_pid = int((log_dir / "watchdog.pid").read_text(encoding="utf-8").strip())
-
-    os.kill(app_pid, 0)
-    os.kill(watchdog_pid, 0)
-
-    for pid in (watchdog_pid, app_pid):
-        try:
-            os.kill(pid, signal.SIGTERM)
-        except ProcessLookupError:
-            pass
-
-
-def test_run_overnight_fails_when_process_exits_before_grace_period(tmp_path: Path) -> None:
-    log_dir = tmp_path / "overnight"
-    log_dir.mkdir(parents=True, exist_ok=True)
-
-    env = os.environ.copy()
-    env.update(
-        {
-            "LOG_DIR": str(log_dir),
-            "TMUX_AUTO": "false",
-            "STARTUP_GRACE_SEC": "1",
-            "APP_CMD_BIN": "false",
-        }
-    )
-    completed = subprocess.run(
-        ["bash", str(RUN_OVERNIGHT)],
-        cwd=REPO_ROOT,
-        env=env,
-        capture_output=True,
-        text=True,
-        check=False,
-    )
-    assert completed.returncode != 0
-    output = f"{completed.stdout}\n{completed.stderr}"
-    assert "startup failed:" in output
-
-    watchdog_pid_file = log_dir / "watchdog.pid"
-    if watchdog_pid_file.exists():
-        watchdog_pid = int(watchdog_pid_file.read_text(encoding="utf-8").strip())
-        with pytest.raises(ProcessLookupError):
-            os.kill(watchdog_pid, 0)
@@ -24,24 +24,9 @@ def test_validate_pr_body_text_detects_escaped_newline() -> None:
     assert any("escaped newline" in err for err in errors)


-def test_validate_pr_body_text_detects_escaped_newline_in_multiline_body() -> None:
+def test_validate_pr_body_text_allows_literal_sequence_when_multiline() -> None:
     module = _load_module()
-    text = "## Summary\n- first line\n- broken line with \\n literal"
-    errors = module.validate_pr_body_text(text)
-    assert any("escaped newline" in err for err in errors)
-
-
-def test_validate_pr_body_text_allows_escaped_newline_in_code_blocks() -> None:
-    module = _load_module()
-    text = "\n".join(
-        [
-            "## Summary",
-            "- example uses `\\n` for explanation",
-            "```bash",
-            "printf 'line1\\nline2\\n'",
-            "```",
-        ]
-    )
+    text = "## Summary\n- escaped sequence example: \\\\n"
     assert module.validate_pr_body_text(text) == []


@@ -78,13 +63,12 @@ def test_fetch_pr_body_reads_body_from_tea_api(monkeypatch) -> None:
     module = _load_module()

     def fake_run(cmd, check, capture_output, text):  # noqa: ANN001
-        assert cmd[0] == "/tmp/tea-bin"
+        assert "tea" in cmd[0]
         assert check is True
         assert capture_output is True
         assert text is True
         return SimpleNamespace(stdout=json.dumps({"body": "## Summary\n- item"}))

-    monkeypatch.setattr(module, "resolve_tea_binary", lambda: "/tmp/tea-bin")
     monkeypatch.setattr(module.subprocess, "run", fake_run)
     assert module.fetch_pr_body(391) == "## Summary\n- item"

@@ -95,32 +79,6 @@ def test_fetch_pr_body_rejects_non_string_body(monkeypatch) -> None:
     def fake_run(cmd, check, capture_output, text):  # noqa: ANN001
         return SimpleNamespace(stdout=json.dumps({"body": 123}))

-    monkeypatch.setattr(module, "resolve_tea_binary", lambda: "/tmp/tea-bin")
     monkeypatch.setattr(module.subprocess, "run", fake_run)
     with pytest.raises(RuntimeError):
         module.fetch_pr_body(391)
-
-
-def test_resolve_tea_binary_falls_back_to_home_bin(monkeypatch, tmp_path) -> None:
-    module = _load_module()
-    tea_home = tmp_path / "bin" / "tea"
-    tea_home.parent.mkdir(parents=True)
-    tea_home.write_text("#!/usr/bin/env bash\n", encoding="utf-8")
-    tea_home.chmod(0o755)
-
-    monkeypatch.setattr(module.shutil, "which", lambda _: None)
-    monkeypatch.setattr(module.Path, "home", lambda: tmp_path)
-    assert module.resolve_tea_binary() == str(tea_home)
-
-
-def test_resolve_tea_binary_rejects_non_executable_home_bin(monkeypatch, tmp_path) -> None:
-    module = _load_module()
-    tea_home = tmp_path / "bin" / "tea"
-    tea_home.parent.mkdir(parents=True)
-    tea_home.write_text("not executable\n", encoding="utf-8")
-    tea_home.chmod(0o644)
-
-    monkeypatch.setattr(module.shutil, "which", lambda _: None)
-    monkeypatch.setattr(module.Path, "home", lambda: tmp_path)
-    with pytest.raises(RuntimeError):
-        module.resolve_tea_binary()
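The PR-body tests above exercise an escaped-newline check: a literal backslash-n typed into the body is flagged, while a doubly-escaped `\\n` passes. A hedged sketch of a check consistent with those tests; the real `validate_pr_body_text` may differ, and the error wording here is an assumption:

```python
import re

def validate_pr_body_text(text: str) -> list[str]:
    """Flag a literal "\\n" sequence (backslash + n) in a PR body.

    A "\\n" preceded by another backslash (i.e. "\\\\n" in the raw body)
    is treated as intentionally escaped and allowed.
    """
    errors: list[str] = []
    # Negative lookbehind: match backslash-n only when not preceded by a backslash.
    if re.search(r"(?<!\\)\\n", text):
        errors.append("escaped newline sequence found in body")
    return errors

print(validate_pr_body_text("## Summary\\n- one line"))       # flagged: literal \n
print(validate_pr_body_text("## Summary\n- escaped: \\\\n"))  # -> [] (real newline + \\n allowed)
```

The regex-based approach keeps real newlines untouched since the pattern matches the two-character sequence, not the newline character itself.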