Compare commits

..

4 Commits

| Author | SHA1 | Message | Date |
|--------|------|---------|------|
| agentson | 09e6eef3bf | feat: unify domestic scanner and sizing; update docs — CI / test (pull_request): cancelled | 2026-02-17 06:15:20 +09:00 |
| agentson | 10b15a4563 | docs: reflect overseas ranking integration and volatility-first selection | 2026-02-17 05:57:56 +09:00 |
| agentson | a6693560c1 | feat: prioritize overseas volatility scoring over raw rankings | 2026-02-17 05:54:46 +09:00 |
| agentson | 16bb8b6dc6 | feat: add overseas ranking integration with dynamic fallback | 2026-02-17 05:50:10 +09:00 |
21 changed files with 254 additions and 2004 deletions


@@ -15,9 +15,6 @@ pytest -v --cov=src
 # Run (paper trading)
 python -m src.main --mode=paper
-# Run with dashboard
-python -m src.main --mode=paper --dashboard
 ```
 ## Telegram Notifications (Optional)
@@ -46,10 +43,6 @@ Get real-time alerts for trades, circuit breakers, and system events via Telegram
 - Market open/close notifications
 - 📝 System startup/shutdown status
-### Interactive Commands
-With `TELEGRAM_COMMANDS_ENABLED=true` (default), the bot supports 9 bidirectional commands: `/help`, `/status`, `/positions`, `/report`, `/scenarios`, `/review`, `/dashboard`, `/stop`, `/resume`.
 **Fail-safe**: Notifications never crash the trading system. Missing credentials or API errors are logged but trading continues normally.
 ## Smart Volatility Scanner (Optional)
@@ -116,23 +109,17 @@ User requirements and feedback are tracked in [docs/requirements-log.md](docs/requirements-log.md)
 ```
 src/
 ├── analysis/       # Technical analysis (RSI, volatility, smart scanner)
-├── backup/         # Disaster recovery (scheduler, cloud storage, health)
-├── brain/          # Gemini AI decision engine (prompt optimizer, context selector)
 ├── broker/         # KIS API client (domestic + overseas)
-├── context/        # L1-L7 hierarchical memory system
+├── brain/          # Gemini AI decision engine
 ├── core/           # Risk manager (READ-ONLY)
-├── dashboard/      # FastAPI read-only monitoring (8 API endpoints)
-├── data/           # External data integration (news, market data, calendar)
-├── evolution/      # Self-improvement (optimizer, daily review, scorecard)
-├── logging/        # Decision logger (audit trail)
+├── evolution/      # Self-improvement optimizer
 ├── markets/        # Market schedules and timezone handling
-├── notifications/  # Telegram alerts + bidirectional commands (9 commands)
-├── strategy/       # Pre-market planner, scenario engine, playbook store
+├── notifications/  # Telegram real-time alerts
 ├── db.py           # SQLite trade logging
 ├── main.py         # Trading loop orchestrator
 └── config.py       # Settings (from .env)
-tests/              # 551 tests across 25 files
+tests/              # 343 tests across 14 files
 docs/               # Extended documentation
 ```
@@ -144,7 +131,6 @@ ruff check src/ tests/  # Lint
 mypy src/ --strict               # Type check
 python -m src.main --mode=paper  # Paper trading
-python -m src.main --mode=paper --dashboard  # With dashboard
 python -m src.main --mode=live   # Live trading (⚠️ real money)
 # Gitea workflow (requires tea CLI)

README.md

@@ -10,41 +10,28 @@ Trades via the KIS (Korea Investment & Securities) API, decides with Google Gemini, and …

[Architecture diagram — the side-by-side rendering is garbled beyond exact recovery. Both versions show Broker (trade execution), Main Loop (trading loop), and Brain (decision making) across the top. The old diagram hangs Risk Manager (safeguards), DB, and Telegram (alerts + commands) below the loop, with Strategy (playbooks), Ctx (memory), and Evolution (evolution) beneath those; the new diagram keeps only Risk Manager (safeguards) and Evolution (strategy evolution).]

-**v2 core**: "Plan Once, Execute Locally" — before market open the AI generates a scenario playbook once; during trading hours only local scenario matching runs, sharply cutting API cost and latency.
 ## Core Modules
-| Module | Location | Description |
-|--------|----------|-------------|
-| Settings | `src/config.py` | Pydantic-based environment-variable loading and type validation (35+ variables) |
-| Broker | `src/broker/` | Async KIS API wrapper (domestic + 9 overseas markets) |
-| Brain | `src/brain/` | Gemini prompt construction, JSON parsing, token optimization |
-| Shield | `src/core/risk_manager.py` | Circuit breaker + fat-finger checks (READ-ONLY) |
-| Strategy | `src/strategy/` | Pre-Market Planner, Scenario Engine, Playbook Store |
-| Context | `src/context/` | L1-L7 hierarchical memory system |
-| Analysis | `src/analysis/` | RSI, ATR, Smart Volatility Scanner |
-| Notifications | `src/notifications/` | Bidirectional Telegram (alerts + 9 commands) |
-| Dashboard | `src/dashboard/` | FastAPI read-only monitoring (8 APIs) |
-| Evolution | `src/evolution/` | Strategy evolution + Daily Review + Scorecard |
-| Decision Log | `src/logging/` | Audit trail of every trading decision |
-| Data | `src/data/` | News, market data, and economic-calendar integration |
-| Backup | `src/backup/` | Automated backups, S3 cloud storage, integrity checks |
-| DB | `src/db.py` | SQLite trade log (5 tables) |
+| Module | File | Description |
+|--------|------|-------------|
+| Settings | `src/config.py` | Pydantic-based environment-variable loading and type validation |
+| Broker | `src/broker/kis_api.py` | Async KIS API wrapper (token refresh, rate limiter, hash key) |
+| Brain | `src/brain/gemini_client.py` | Gemini prompt construction and JSON response parsing |
+| Shield | `src/core/risk_manager.py` | Circuit breaker + fat-finger checks |
+| Notifications | `src/notifications/telegram_client.py` | Real-time Telegram trade alerts (optional) |
+| Evolution | `src/evolution/optimizer.py` | Failure-pattern analysis → new strategy generation → tests → PR |
+| DB | `src/db.py` | SQLite trade logging |
 ## Safeguards
@@ -55,7 +42,6 @@ Trades via the KIS (Korea Investment & Securities) API, decides with Google Gemini, and …
 | Confidence threshold | Force HOLD when Gemini confidence is below 80 |
 | Rate limiter | Leaky Bucket algorithm caps API call rates |
 | Auto token refresh | Reissue the Access Token 1 minute before expiry |
-| Stop-loss monitoring | Real-time position protection based on playbook scenarios |
 ## Quick Start
@@ -81,11 +67,7 @@ pytest -v --cov=src --cov-report=term-missing
 ### 4. Run (paper trading)
 ```bash
-# Basic run
 python -m src.main --mode=paper
-# Enable the dashboard
-python -m src.main --mode=paper --dashboard
 ```
 ### 5. Run with Docker
@@ -94,20 +76,7 @@ python -m src.main --mode=paper --dashboard
 docker compose up -d ouroboros
 ```
-## Supported Markets
-| Country | Exchange | Code |
-|---------|----------|------|
-| 🇰🇷 Korea | KRX | KR |
-| 🇺🇸 USA | NASDAQ, NYSE, AMEX | US_NASDAQ, US_NYSE, US_AMEX |
-| 🇯🇵 Japan | TSE | JP |
-| 🇭🇰 Hong Kong | SEHK | HK |
-| 🇨🇳 China | Shanghai, Shenzhen | CN_SHA, CN_SZA |
-| 🇻🇳 Vietnam | Hanoi, Ho Chi Minh | VN_HNX, VN_HSX |
-Select active markets via the `ENABLED_MARKETS` environment variable (default: `KR,US`).
-## Telegram (Optional)
+## Telegram Notifications (Optional)
 Get real-time Telegram alerts for trade executions, circuit-breaker trips, system status, and more.
@@ -133,51 +102,25 @@ docker compose up -d ouroboros
 - Market open/close alerts
 - 📝 System startup/shutdown status
-### Bidirectional Commands
-With `TELEGRAM_COMMANDS_ENABLED=true` (the default), 9 interactive commands are supported:
-| Command | Description |
-|---------|-------------|
-| `/help` | List available commands |
-| `/status` | Trading status (mode, markets, P&L) |
-| `/positions` | Account summary (balance, cash, P&L) |
-| `/report` | Daily summary (trade count, P&L, win rate) |
-| `/scenarios` | Today's playbook scenarios |
-| `/review` | Latest scorecard (L6_DAILY) |
-| `/dashboard` | Show the dashboard URL |
-| `/stop` | Pause trading |
-| `/resume` | Resume trading |
-**Fail-safe**: Trading continues even if a notification fails.
+**Fail-safe**: Trading continues even if a notification fails. The trading system keeps operating normally despite Telegram API errors or missing configuration.
 ## Tests
-551 tests across 25 files. Minimum coverage: 80%.
+35 tests were written TDD-first, before the implementation.
 ```
-tests/test_scenario_engine.py    — scenario matching (44)
-tests/test_data_integration.py   — external data integration (38)
-tests/test_pre_market_planner.py — playbook generation (37)
-tests/test_main.py               — trading-loop integration (37)
-tests/test_token_efficiency.py   — token optimization (34)
-tests/test_strategy_models.py    — strategy-model validation (33)
-tests/test_telegram_commands.py  — Telegram commands (31)
-tests/test_latency_control.py    — latency control (30)
-tests/test_telegram.py           — Telegram alerts (25)
-... plus 16 more files
+tests/test_risk.py   — circuit breaker, fat finger, integration checks (11)
+tests/test_broker.py — token management, timeouts, HTTP errors, hash key (6)
+tests/test_brain.py  — JSON parsing, confidence threshold, malformed-response handling (15)
 ```
-**Details**: [docs/testing.md](docs/testing.md)
 ## Tech Stack
 - **Language**: Python 3.11+ (asyncio-based)
-- **Broker**: KIS Open API (REST, domestic + overseas)
+- **Broker**: KIS Open API (REST)
 - **AI**: Google Gemini Pro
-- **DB**: SQLite (5 tables: trades, contexts, decision_logs, playbooks, context_metadata)
-- **Dashboard**: FastAPI + uvicorn
-- **Testing**: pytest + coverage (551 tests)
+- **DB**: SQLite
+- **Testing**: pytest + coverage
 - **CI/CD**: GitHub Actions
 - **Deployment**: Docker + Docker Compose
@@ -185,50 +128,27 @@ tests/test_telegram.py — Telegram alerts (25)
 ```
 The-Ouroboros/
-├── .github/workflows/ci.yml          # CI pipeline
 ├── docs/
-│   ├── architecture.md               # System architecture
-│   ├── testing.md                    # Testing guide
-│   ├── commands.md                   # Command reference
-│   ├── context-tree.md               # L1-L7 memory system
-│   ├── workflow.md                   # Git workflow
-│   ├── agents.md                     # Agent policy
-│   ├── skills.md                     # Tool list
-│   ├── disaster_recovery.md          # Backup/recovery
-│   └── requirements-log.md           # Requirements log
+│   ├── agents.md                     # AI agent persona definitions
+│   └── skills.md                     # Available tool list
 ├── src/
-│   ├── analysis/                     # Technical analysis (RSI, ATR, Smart Scanner)
-│   ├── backup/                       # Backups (scheduler, S3, integrity checks)
-│   ├── brain/                        # Gemini decisions (prompt optimizer, context selector)
-│   ├── broker/                       # KIS API (domestic + overseas)
-│   ├── context/                      # L1-L7 hierarchical memory
-│   ├── core/                         # Risk management (READ-ONLY)
-│   ├── dashboard/                    # FastAPI monitoring dashboard
-│   ├── data/                         # External data integration
-│   ├── evolution/                    # Strategy evolution + Daily Review
-│   ├── logging/                      # Decision audit trail
-│   ├── markets/                      # Market schedules + timezones
-│   ├── notifications/                # Telegram alerts + commands
-│   ├── strategy/                     # Playbooks (Planner, Scenario Engine)
-│   ├── config.py                     # Pydantic settings
-│   ├── db.py                         # SQLite database
-│   └── main.py                       # Async trading loop
-├── tests/                            # 551 tests (25 files)
+│   ├── config.py                     # Pydantic settings
+│   ├── logging_config.py             # JSON structured logging
+│   ├── db.py                         # SQLite trade records
+│   ├── main.py                       # Async trading loop
+│   ├── broker/kis_api.py             # KIS API client
+│   ├── brain/gemini_client.py        # Gemini decision engine
+│   ├── core/risk_manager.py          # Risk management
+│   ├── notifications/telegram_client.py  # Telegram alerts
+│   ├── evolution/optimizer.py        # Strategy evolution engine
+│   └── strategies/base.py            # Strategy base class
+├── tests/                            # TDD test suite
 ├── Dockerfile                        # Multi-stage build
 ├── docker-compose.yml                # Service orchestration
 └── pyproject.toml                    # Dependencies and tool settings
 ```
-## Documentation
-- **[Architecture](docs/architecture.md)** — system design, components, data flow
-- **[Testing](docs/testing.md)** — test structure, coverage, writing guide
-- **[Commands](docs/commands.md)** — CLI, Dashboard, and Telegram commands
-- **[Context Tree](docs/context-tree.md)** — L1-L7 hierarchical memory
-- **[Workflow](docs/workflow.md)** — Git workflow policy
-- **[Agent Policy](docs/agents.md)** — safety constraints, prohibited actions
-- **[Backup/Recovery](docs/disaster_recovery.md)** — disaster-recovery procedures
-- **[Requirements](docs/requirements-log.md)** — user-requirement tracking
 ## License
 See the [LICENSE](LICENSE) file for this project's license.


@@ -2,9 +2,7 @@
 ## Overview
-Self-evolving AI trading agent for global stock markets via KIS (Korea Investment & Securities) API. The main loop in `src/main.py` orchestrates components across multiple markets with two trading modes: daily (batch API calls) or realtime (per-stock decisions).
+Self-evolving AI trading agent for global stock markets via KIS (Korea Investment & Securities) API. The main loop in `src/main.py` orchestrates four components across multiple markets with two trading modes: daily (batch API calls) or realtime (per-stock decisions).
-**v2 Proactive Playbook Architecture**: The system uses a "plan once, execute locally" approach. Pre-market, the AI generates a playbook of scenarios (one Gemini API call per market per day). During trading hours, a local scenario engine matches live market data against these pre-computed scenarios — no additional AI calls needed. This dramatically reduces API costs and latency.
 ## Trading Modes
@@ -48,11 +46,9 @@ High-frequency trading with individual stock analysis:
 **KISBroker** (`kis_api.py`) — Async KIS API client for domestic Korean market
 - Automatic OAuth token refresh (valid for 24 hours)
-- Leaky-bucket rate limiter (configurable RPS, default 2.0)
+- Leaky-bucket rate limiter (10 requests per second)
 - POST body hash-key signing for order authentication
 - Custom SSL context with disabled hostname verification for VTS (virtual trading) endpoint due to known certificate mismatch
-- `fetch_market_rankings()` — Fetch volume surge rankings from KIS API
-- `get_daily_prices()` — Fetch OHLCV history for technical analysis
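The leaky-bucket limiter mentioned above can be sketched as follows. This is a minimal synchronous illustration under stated assumptions — the project's client is async, and the class name `LeakyBucket` is invented here, not taken from the repo:

```python
import time

class LeakyBucket:
    """Minimal leaky-bucket rate limiter: allows `rps` requests per second."""

    def __init__(self, rps: float):
        self.interval = 1.0 / rps   # seconds that must elapse between requests
        self.next_allowed = 0.0     # monotonic timestamp of the next permitted call

    def acquire(self) -> float:
        """Block until a slot is free; return the time spent waiting."""
        now = time.monotonic()
        wait = max(0.0, self.next_allowed - now)
        if wait:
            time.sleep(wait)
        # Schedule the next slot one interval after this request fires.
        self.next_allowed = max(now, self.next_allowed) + self.interval
        return wait

bucket = LeakyBucket(rps=20.0)
waits = [bucket.acquire() for _ in range(3)]  # first call is free, later calls wait
```

An async variant would replace `time.sleep` with `await asyncio.sleep` inside a lock.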
 **OverseasBroker** (`overseas.py`) — KIS overseas stock API wrapper
@@ -67,7 +63,10 @@ High-frequency trading with individual stock analysis:
 - `is_market_open()` checks weekends, trading hours, lunch breaks
 - `get_open_markets()` returns currently active markets
 - `get_next_market_open()` finds next market to open and when
-- 10 global markets defined (KR, US_NASDAQ, US_NYSE, US_AMEX, JP, HK, CN_SHA, CN_SZA, VN_HNX, VN_HSX)
+**New API Methods** (added in v0.9.0):
+- `fetch_market_rankings()` — Fetch volume surge rankings from KIS API
+- `get_daily_prices()` — Fetch OHLCV history for technical analysis
 **Overseas Ranking API Methods** (added in v0.10.x):
 - `fetch_overseas_rankings()` — Fetch overseas ranking universe (fluctuation / volume)
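A minimal sketch of the weekend / trading-hours / lunch-break checks that `is_market_open()` performs. The schedule table and exact hours here are illustrative assumptions, and the timezone handling that `src/markets/` provides is omitted:

```python
from datetime import datetime, time as dtime

# Hypothetical schedule table; the real market definitions live in src/markets/.
SCHEDULES = {
    "KR": {"open": dtime(9, 0), "close": dtime(15, 30), "lunch": None},
    "JP": {"open": dtime(9, 0), "close": dtime(15, 0),
           "lunch": (dtime(11, 30), dtime(12, 30))},
}

def is_market_open(market: str, now: datetime) -> bool:
    """Weekend, trading-hours, and lunch-break checks (timezone handling omitted)."""
    sched = SCHEDULES[market]
    if now.weekday() >= 5:                       # Saturday (5) / Sunday (6)
        return False
    t = now.time()
    if not (sched["open"] <= t <= sched["close"]):
        return False
    lunch = sched["lunch"]
    if lunch and lunch[0] <= t < lunch[1]:       # closed during the lunch break
        return False
    return True
```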
@@ -105,9 +104,9 @@ High-frequency trading with individual stock analysis:
 - Fast Python-based filtering before expensive AI judgment
 - Logs selection context (RSI-compatible proxy, volume_ratio, signal, score) for Evolution system
-### 3. Brain (`src/brain/`)
+### 3. Brain (`src/brain/gemini_client.py`)
-**GeminiClient** (`gemini_client.py`) — AI decision engine powered by Google Gemini
+**GeminiClient** — AI decision engine powered by Google Gemini
 - Constructs structured prompts from market data
 - Parses JSON responses into `TradeDecision` objects (`action`, `confidence`, `rationale`)
@@ -115,20 +114,11 @@ High-frequency trading with individual stock analysis:
 - Falls back to safe HOLD on any parse/API error
 - Handles markdown-wrapped JSON, malformed responses, invalid actions
-**PromptOptimizer** (`prompt_optimizer.py`) — Token efficiency optimization
-- Reduces prompt size while preserving decision quality
-- Caches optimized prompts
-**ContextSelector** (`context_selector.py`) — Relevant context selection for prompts
-- Selects appropriate context layers for current market conditions
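The parse-with-safe-HOLD behaviour described above can be sketched like this. The `TradeDecision` fields follow the doc; the helper name `parse_decision` and the fence-stripping details are assumptions, not the repo's actual code:

```python
import json
from dataclasses import dataclass

@dataclass
class TradeDecision:
    action: str        # BUY | SELL | HOLD
    confidence: int
    rationale: str

SAFE_HOLD = TradeDecision("HOLD", 0, "fallback: unparseable model response")

def parse_decision(raw: str) -> TradeDecision:
    """Parse a model reply into a TradeDecision; any failure yields a safe HOLD."""
    text = raw.strip()
    if text.startswith("```"):          # strip markdown fences such as ```json ... ```
        text = text.strip("`")
        text = text[text.find("{"):]    # drop a leading "json" language tag
    try:
        data = json.loads(text)
        action = str(data["action"]).upper()
        if action not in {"BUY", "SELL", "HOLD"}:
            return SAFE_HOLD            # invalid action → safe HOLD
        return TradeDecision(action, int(data["confidence"]),
                             str(data.get("rationale", "")))
    except (json.JSONDecodeError, KeyError, ValueError, TypeError):
        return SAFE_HOLD
```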
 ### 4. Risk Manager (`src/core/risk_manager.py`)
 **RiskManager** — Safety circuit breaker and order validation
-> **READ-ONLY by policy** (see [`docs/agents.md`](./agents.md))
+⚠️ **READ-ONLY by policy** (see [`docs/agents.md`](./agents.md))
 - **Circuit Breaker**: Halts all trading via `SystemExit` when daily P&L drops below -3.0%
   - Threshold may only be made stricter, never relaxed
@@ -136,79 +126,7 @@ High-frequency trading with individual stock analysis:
 - **Fat-Finger Protection**: Rejects orders exceeding 30% of available cash
   - Must always be enforced, cannot be disabled
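A compact sketch of the two checks above — circuit breaker (halts via `SystemExit`) and fat-finger protection (rejects orders above 30% of available cash). Function and exception names are illustrative, not the repo's actual API:

```python
class CircuitBreakerTripped(SystemExit):
    """Halts the whole process, mirroring the SystemExit behaviour described above."""

def validate_order(order_value: float, available_cash: float,
                   daily_pnl_pct: float,
                   max_loss_pct: float = 3.0, max_order_pct: float = 30.0) -> None:
    # Circuit breaker: stop everything once the daily loss limit is breached.
    if daily_pnl_pct <= -max_loss_pct:
        raise CircuitBreakerTripped(
            f"daily P&L {daily_pnl_pct:.2f}% breached -{max_loss_pct}%")
    # Fat-finger protection: reject any single order above max_order_pct of cash.
    if order_value > available_cash * (max_order_pct / 100.0):
        raise ValueError(
            f"order {order_value:.0f} exceeds {max_order_pct}% of available cash")
```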
-### 5. Strategy (`src/strategy/`)
+### 5. Notifications (`src/notifications/telegram_client.py`)
-**Pre-Market Planner** (`pre_market_planner.py`) — AI playbook generation
-- Runs before market open (configurable `PRE_MARKET_MINUTES`, default 30)
-- Generates scenario-based playbooks via single Gemini API call per market
-- Handles timeout (`PLANNER_TIMEOUT_SECONDS`, default 60) with defensive playbook fallback
-- Persists playbooks to database for audit trail
-**Scenario Engine** (`scenario_engine.py`) — Local scenario matching
-- Matches live market data against pre-computed playbook scenarios
-- No AI calls during trading hours — pure Python matching logic
-- Returns matched scenarios with confidence scores
-- Configurable `MAX_SCENARIOS_PER_STOCK` (default 5)
-- Periodic rescan at `RESCAN_INTERVAL_SECONDS` (default 300)
-**Playbook Store** (`playbook_store.py`) — Playbook persistence
-- SQLite-backed storage for daily playbooks
-- Date and market-based retrieval
-- Status tracking (generated, active, expired)
-**Models** (`models.py`) — Pydantic data models
-- Scenario, Playbook, MatchResult, and related type definitions
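For reference, the local scenario matching that this change removes could look roughly like the following. The field and function names are purely illustrative — they are not the deleted module's real interface:

```python
from dataclasses import dataclass

@dataclass
class Scenario:
    """One pre-computed playbook entry (illustrative field names)."""
    stock_code: str
    trigger_below: float   # fire when price drops to or below this level
    action: str            # e.g. "BUY"
    confidence: int

def match_scenarios(price_by_stock: dict[str, float],
                    playbook: list[Scenario]) -> list[Scenario]:
    """Pure-Python matching — no AI calls during trading hours."""
    return [s for s in playbook
            if s.stock_code in price_by_stock
            and price_by_stock[s.stock_code] <= s.trigger_below]

playbook = [Scenario("005930", 70000.0, "BUY", 85),
            Scenario("000660", 120000.0, "BUY", 82)]
matched = match_scenarios({"005930": 69500.0, "000660": 125000.0}, playbook)
```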
-### 6. Context System (`src/context/`)
-**Context Store** (`store.py`) — L1-L7 hierarchical memory
-- 7-layer context system (see [docs/context-tree.md](./context-tree.md)):
-  - L1: Tick-level (real-time price)
-  - L2: Intraday (session summary)
-  - L3: Daily (end-of-day)
-  - L4: Weekly (trend analysis)
-  - L5: Monthly (strategy review)
-  - L6: Daily Review (scorecard)
-  - L7: Evolution (long-term learning)
-- Key-value storage with timeframe tagging
-- SQLite persistence in `contexts` table
-**Context Scheduler** (`scheduler.py`) — Periodic aggregation
-- Scheduled summarization from lower to higher layers
-- Configurable aggregation intervals
-**Context Summarizer** (`summarizer.py`) — Layer summarization
-- Aggregates lower-layer data into higher-layer summaries
-### 7. Dashboard (`src/dashboard/`)
-**FastAPI App** (`app.py`) — Read-only monitoring dashboard
-- Runs as daemon thread when enabled (`--dashboard` CLI flag or `DASHBOARD_ENABLED=true`)
-- Configurable host/port (`DASHBOARD_HOST`, `DASHBOARD_PORT`, default `127.0.0.1:8080`)
-- Serves static HTML frontend
-**8 API Endpoints:**
-| Endpoint | Method | Description |
-|----------|--------|-------------|
-| `/` | GET | Static HTML dashboard |
-| `/api/status` | GET | Daily trading status by market |
-| `/api/playbook/{date}` | GET | Playbook for specific date and market |
-| `/api/scorecard/{date}` | GET | Daily scorecard from L6_DAILY context |
-| `/api/performance` | GET | Trading performance metrics (by market + combined) |
-| `/api/context/{layer}` | GET | Query context by layer (L1-L7) |
-| `/api/decisions` | GET | Decision log entries with outcomes |
-| `/api/scenarios/active` | GET | Today's matched scenarios |
-### 8. Notifications (`src/notifications/telegram_client.py`)
 **TelegramClient** — Real-time event notifications via Telegram Bot API
@@ -216,13 +134,7 @@ High-frequency trading with individual stock analysis:
 - Non-blocking: failures are logged but never crash trading
 - Rate-limited: 1 message/second default to respect Telegram API limits
 - Auto-disabled when credentials missing
-- Gracefully handles API errors, network timeouts, invalid tokens
-**TelegramCommandHandler** — Bidirectional command interface
-- Long polling from Telegram API (configurable `TELEGRAM_POLLING_INTERVAL`)
-- 9 interactive commands: `/help`, `/status`, `/positions`, `/report`, `/scenarios`, `/review`, `/dashboard`, `/stop`, `/resume`
-- Authorization filtering by `TELEGRAM_CHAT_ID`
-- Enable/disable via `TELEGRAM_COMMANDS_ENABLED` (default: true)
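The non-blocking behaviour ("failures are logged but never crash trading") amounts to a wrapper of this shape — a hedged sketch, with `notify` and `send_fn` invented for illustration:

```python
import logging

logger = logging.getLogger("notifications")

def notify(send_fn, message: str) -> bool:
    """Fire a notification; failures are logged, never raised — trading continues."""
    try:
        send_fn(message)
        return True
    except Exception as exc:  # network errors, invalid token, missing config, etc.
        logger.warning("notification failed: %s", exc)
        return False
```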
 **Notification Types:**
 - Trade execution (BUY/SELL with confidence)
@@ -230,12 +142,12 @@ High-frequency trading with individual stock analysis:
 - Fat-finger protection triggers (order rejection)
 - Market open/close events
 - System startup/shutdown status
-- Playbook generation results
-- Stop-loss monitoring alerts
+**Setup:** See [src/notifications/README.md](../src/notifications/README.md) for bot creation and configuration.
-### 9. Evolution (`src/evolution/`)
-**StrategyOptimizer** (`optimizer.py`) — Self-improvement loop
+### 6. Evolution (`src/evolution/optimizer.py`)
+**StrategyOptimizer** — Self-improvement loop
 - Analyzes high-confidence losing trades from SQLite
 - Asks Gemini to generate new `BaseStrategy` subclasses
@@ -243,136 +155,22 @@ High-frequency trading with individual stock analysis:
 - Simulates PR creation for human review
 - Only activates strategies that pass all tests
-**DailyReview** (`daily_review.py`) — End-of-day review
-- Generates comprehensive trade performance summary
-- Stores results in L6_DAILY context layer
-- Tracks win rate, P&L, confidence accuracy
-**DailyScorecard** (`scorecard.py`) — Performance scoring
-- Calculates daily metrics (trades, P&L, win rate, avg confidence)
-- Enables trend tracking across days
-**Stop-Loss Monitoring** — Real-time position protection
-- Monitors positions against stop-loss levels from playbook scenarios
-- Sends Telegram alerts when thresholds approached or breached
-### 10. Decision Logger (`src/logging/decision_logger.py`)
-**DecisionLogger** — Comprehensive audit trail
-- Logs every trading decision with full context snapshot
-- Captures input data, rationale, confidence, and outcomes
-- Supports outcome tracking (P&L, accuracy) for post-analysis
-- Stored in `decision_logs` table with indexed queries
-- Review workflow support (reviewed flag, review notes)
-### 11. Data Integration (`src/data/`)
-**External Data Sources** (optional):
-- `news_api.py` — News sentiment data
-- `market_data.py` — Extended market data
-- `economic_calendar.py` — Economic event calendar
-### 12. Backup (`src/backup/`)
-**Disaster Recovery** (see [docs/disaster_recovery.md](./disaster_recovery.md)):
-- `scheduler.py` — Automated backup scheduling
-- `exporter.py` — Data export to various formats
-- `cloud_storage.py` — S3-compatible cloud backup
-- `health_monitor.py` — Backup integrity verification
 ## Data Flow
-### Playbook Mode (Daily — Primary v2 Flow)
-```
-┌─────────────────────────────────────────────────────────────┐
-│ Pre-Market Phase (before market open)                       │
-└─────────────────────────────────────────────────────────────┘
-┌──────────────────────────────────┐
-│ Pre-Market Planner               │
-│ - 1 Gemini API call per market   │
-│ - Generate scenario playbook     │
-│ - Store in playbooks table       │
-└──────────────────┬───────────────┘
-┌─────────────────────────────────────────────────────────────┐
-│ Trading Hours (market open → close)                         │
-└─────────────────────────────────────────────────────────────┘
-┌──────────────────────────────────┐
-│ Market Schedule Check            │
-│ - Get open markets               │
-│ - Filter by enabled markets      │
-└──────────────────┬───────────────┘
-┌──────────────────────────────────┐
-│ Scenario Engine (local)          │
-│ - Match live data vs playbook    │
-│ - No AI calls needed             │
-│ - Return matched scenarios       │
-└──────────────────┬───────────────┘
-┌──────────────────────────────────┐
-│ Risk Manager: Validate Order     │
-│ - Check circuit breaker          │
-│ - Check fat-finger limit         │
-└──────────────────┬───────────────┘
-┌──────────────────────────────────┐
-│ Broker: Execute Order            │
-│ - Domestic: send_order()         │
-│ - Overseas: send_overseas_order()│
-└──────────────────┬───────────────┘
-┌──────────────────────────────────┐
-│ Decision Logger + DB             │
-│ - Full audit trail               │
-│ - Context snapshot               │
-│ - Telegram notification         │
-└──────────────────┬───────────────┘
-┌─────────────────────────────────────────────────────────────┐
-│ Post-Market Phase                                           │
-└─────────────────────────────────────────────────────────────┘
-┌──────────────────────────────────┐
-│ Daily Review + Scorecard         │
-│ - Performance summary            │
-│ - Store in L6_DAILY context      │
-│ - Evolution learning             │
-└──────────────────────────────────┘
-```
 ### Realtime Mode (with Smart Scanner)
 ```
 ┌─────────────────────────────────────────────────────────────┐
 │ Main Loop (60s cycle per market)                            │
 └─────────────────────────────────────────────────────────────┘
 ┌──────────────────────────────────┐
 │ Market Schedule Check            │
 │ - Get open markets               │
 │ - Filter by enabled markets      │
 │ - Wait if all closed             │
 └──────────────────┬───────────────┘
 ┌──────────────────────────────────┐
@@ -384,57 +182,74 @@ High-frequency trading with individual stock analysis:
 │ + volatility-first scoring       │
 │ - Fallback: dynamic universe     │
 │ - Return top 3 qualified stocks  │
 └──────────────────┬───────────────┘
 ┌──────────────────────────────────┐
 │ For Each Qualified Candidate     │
 └──────────────────┬───────────────┘
 ┌──────────────────────────────────┐
 │ Broker: Fetch Market Data        │
 │ - Domestic: orderbook + balance  │
 │ - Overseas: price + balance      │
 └──────────────────┬───────────────┘
+┌──────────────────────────────────┐
+│ Calculate P&L                    │
+│ pnl_pct = (eval - cost) / cost   │
+└──────────────────┬───────────────┘
 ┌──────────────────────────────────┐
 │ Brain: Get Decision (AI)         │
 │ - Build prompt with market data  │
 │ - Call Gemini API                │
 │ - Parse JSON response            │
 │ - Return TradeDecision           │
 └──────────────────┬───────────────┘
 ┌──────────────────────────────────┐
 │ Risk Manager: Validate Order     │
 │ - Check circuit breaker          │
 │ - Check fat-finger limit         │
+│ - Raise if validation fails      │
 └──────────────────┬───────────────┘
 ┌──────────────────────────────────┐
 │ Broker: Execute Order            │
 │ - Domestic: send_order()         │
 │ - Overseas: send_overseas_order()│
 └──────────────────┬───────────────┘
-┌──────────────────────────────────┐
-│ Decision Logger + Notifications  │
-│ - Log trade to SQLite            │
-│ - selection_context (JSON)       │
-│ - Telegram notification          │
-└──────────────────────────────────┘
+┌──────────────────────────────────┐
+│ Notifications: Send Alert        │
+│ - Trade execution notification   │
+│ - Non-blocking (errors logged)   │
+│ - Rate-limited to 1/sec          │
+└──────────────────┬───────────────┘
+┌──────────────────────────────────┐
+│ Database: Log Trade              │
+│ - SQLite (data/trades.db)        │
+│ - Track: action, confidence,     │
+│   rationale, market, exchange    │
+│ - NEW: selection_context (JSON)  │
+│   - RSI, volume_ratio, signal    │
+│   - For Evolution optimization   │
+└──────────────────────────────────┘
 ```
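The P&L formula from the diagram and the confidence gate from the Safeguards section, as a tiny sketch (function names are illustrative; the threshold corresponds to `CONFIDENCE_THRESHOLD`):

```python
CONFIDENCE_THRESHOLD = 80  # default from the configuration section

def pnl_pct(eval_amount: float, cost: float) -> float:
    """Unrealised P&L as a fraction of cost: (eval - cost) / cost."""
    return (eval_amount - cost) / cost

def gate_action(action: str, confidence: int) -> str:
    """Force HOLD when model confidence is below the threshold."""
    return action if confidence >= CONFIDENCE_THRESHOLD else "HOLD"
```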
 ## Database Schema
-**SQLite** (`src/db.py`) — Database: `data/trades.db`
+**SQLite** (`src/db.py`)
-### trades
 ```sql
 CREATE TABLE trades (
     id INTEGER PRIMARY KEY AUTOINCREMENT,
@@ -446,73 +261,25 @@ CREATE TABLE trades (
     quantity INTEGER,
     price REAL,
     pnl REAL DEFAULT 0.0,
-    market TEXT DEFAULT 'KR',
-    exchange_code TEXT DEFAULT 'KRX',
-    selection_context TEXT,    -- JSON: {rsi, volume_ratio, signal, score}
-    decision_id TEXT           -- Links to decision_logs
+    market TEXT DEFAULT 'KR',            -- KR | US_NASDAQ | JP | etc.
+    exchange_code TEXT DEFAULT 'KRX',    -- KRX | NASD | NYSE | etc.
+    selection_context TEXT               -- JSON: {rsi, volume_ratio, signal, score}
 );
 ```
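The `trades` table can be exercised directly with Python's stdlib `sqlite3`. Note this sketch creates only the columns visible in this hunk — the real schema has additional columns above the fold:

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")  # the project uses data/trades.db
conn.execute("""
CREATE TABLE trades (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    quantity INTEGER,
    price REAL,
    pnl REAL DEFAULT 0.0,
    market TEXT DEFAULT 'KR',
    exchange_code TEXT DEFAULT 'KRX',
    selection_context TEXT
)""")
# selection_context is stored as a JSON string, as described in the schema.
conn.execute(
    "INSERT INTO trades (quantity, price, selection_context) VALUES (?, ?, ?)",
    (10, 70500.0, json.dumps({"rsi": 28.5, "volume_ratio": 2.7,
                              "signal": "oversold", "score": 85.2})),
)
row = conn.execute("SELECT market, selection_context FROM trades").fetchone()
```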
-### contexts
+**Selection Context** (new in v0.9.0): Stores scanner selection criteria as JSON:
-```sql
-CREATE TABLE contexts (
-    id INTEGER PRIMARY KEY AUTOINCREMENT,
-    layer TEXT NOT NULL,       -- L1 through L7
-    timeframe TEXT,
-    key TEXT NOT NULL,
-    value TEXT NOT NULL,       -- JSON data
-    created_at TEXT NOT NULL,
-    updated_at TEXT NOT NULL
-);
--- Indices: idx_contexts_layer, idx_contexts_timeframe, idx_contexts_updated
-```
+```json
+{
+  "rsi": 28.5,
+  "volume_ratio": 2.7,
+  "signal": "oversold",
+  "score": 85.2
+}
+```
-### decision_logs
+Enables Evolution system to analyze correlation between selection criteria and trade outcomes.
-```sql
-CREATE TABLE decision_logs (
-    decision_id TEXT PRIMARY KEY,
-    timestamp TEXT NOT NULL,
-    stock_code TEXT,
-    market TEXT,
-    exchange_code TEXT,
-    action TEXT,
-    confidence INTEGER,
-    rationale TEXT,
-    context_snapshot TEXT,     -- JSON: full context at decision time
-    input_data TEXT,           -- JSON: market data used
-    outcome_pnl REAL,
-    outcome_accuracy REAL,
-    reviewed INTEGER DEFAULT 0,
-    review_notes TEXT
-);
--- Indices: idx_decision_logs_timestamp, idx_decision_logs_reviewed, idx_decision_logs_confidence
-```
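The correlation analysis between selection criteria and trade outcomes can be as simple as grouping P&L by the stored `signal` — an illustrative aggregation, not the Evolution system's actual code:

```python
import json

trades = [  # (pnl, selection_context) rows as the Evolution system would read them
    (1.2, '{"signal": "oversold", "rsi": 28.5}'),
    (-0.8, '{"signal": "momentum", "rsi": 74.0}'),
    (0.5, '{"signal": "oversold", "rsi": 25.1}'),
]

def avg_pnl_by_signal(rows):
    """Average P&L per scanner signal, read from the selection_context JSON."""
    totals: dict[str, list[float]] = {}
    for pnl, ctx in rows:
        signal = json.loads(ctx)["signal"]
        totals.setdefault(signal, []).append(pnl)
    return {sig: sum(v) / len(v) for sig, v in totals.items()}

stats = avg_pnl_by_signal(trades)
```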
-### playbooks
+Auto-migration: Adds `market`, `exchange_code`, and `selection_context` columns if missing for backward compatibility.
-```sql
-CREATE TABLE playbooks (
-    id INTEGER PRIMARY KEY AUTOINCREMENT,
-    date TEXT NOT NULL,
-    market TEXT NOT NULL,
-    status TEXT DEFAULT 'generated',
-    playbook_json TEXT NOT NULL,   -- Full playbook with scenarios
-    generated_at TEXT NOT NULL,
-    token_count INTEGER,
-    scenario_count INTEGER,
-    match_count INTEGER DEFAULT 0
-);
--- Indices: idx_playbooks_date, idx_playbooks_market
-```
-### context_metadata
-```sql
-CREATE TABLE context_metadata (
-    layer TEXT PRIMARY KEY,
-    description TEXT,
-    retention_days INTEGER,
-    aggregation_source TEXT
-);
-```
 ## Configuration
@@ -527,70 +294,31 @@ KIS_APP_SECRET=your_app_secret
 KIS_ACCOUNT_NO=XXXXXXXX-XX
 GEMINI_API_KEY=your_gemini_key
-# Optional — Trading Mode
+# Optional
 MODE=paper                       # paper | live
+DB_PATH=data/trades.db
+CONFIDENCE_THRESHOLD=80
+MAX_LOSS_PCT=3.0
+MAX_ORDER_PCT=30.0
+ENABLED_MARKETS=KR,US_NASDAQ     # Comma-separated market codes
+# Trading Mode (API efficiency)
 TRADE_MODE=daily                 # daily | realtime
 DAILY_SESSIONS=4                 # Sessions per day (daily mode only)
 SESSION_INTERVAL_HOURS=6         # Hours between sessions (daily mode only)
-# Optional — Database
-DB_PATH=data/trades.db
-# Optional — Risk
-CONFIDENCE_THRESHOLD=80
-MAX_LOSS_PCT=3.0
-MAX_ORDER_PCT=30.0
-# Optional — Markets
-ENABLED_MARKETS=KR,US            # Comma-separated market codes
-RATE_LIMIT_RPS=2.0               # KIS API requests per second
-# Optional — Pre-Market Planner (v2)
-PRE_MARKET_MINUTES=30            # Minutes before market open to generate playbook
-MAX_SCENARIOS_PER_STOCK=5        # Max scenarios per stock in playbook
-PLANNER_TIMEOUT_SECONDS=60       # Timeout for playbook generation
-DEFENSIVE_PLAYBOOK_ON_FAILURE=true  # Fallback on AI failure
-RESCAN_INTERVAL_SECONDS=300      # Scenario rescan interval during trading
-# Optional — Smart Scanner (realtime mode only)
-RSI_OVERSOLD_THRESHOLD=30        # 0-50, oversold threshold
-RSI_MOMENTUM_THRESHOLD=70        # 50-100, momentum threshold
-VOL_MULTIPLIER=2.0               # Minimum volume ratio (2.0 = 200%)
-SCANNER_TOP_N=3                  # Max qualified candidates per scan
-# Optional — Dashboard
-DASHBOARD_ENABLED=false          # Enable FastAPI dashboard
-DASHBOARD_HOST=127.0.0.1         # Dashboard bind address
-DASHBOARD_PORT=8080              # Dashboard port (1-65535)
-# Optional — Telegram
+# Telegram Notifications (optional)
 TELEGRAM_BOT_TOKEN=1234567890:ABCdefGHIjklMNOpqrsTUVwxyz
 TELEGRAM_CHAT_ID=123456789
 TELEGRAM_ENABLED=true
-TELEGRAM_COMMANDS_ENABLED=true   # Enable bidirectional commands
-TELEGRAM_POLLING_INTERVAL=1.0    # Command polling interval (seconds)
-# Optional — Backup
-BACKUP_ENABLED=false
-BACKUP_DIR=data/backups
-S3_ENDPOINT_URL=...
-S3_ACCESS_KEY=...
-S3_SECRET_KEY=...
-S3_BUCKET_NAME=...
-S3_REGION=...
-# Optional — External Data
-NEWS_API_KEY=...
-NEWS_API_PROVIDER=...
-MARKET_DATA_API_KEY=...
+# Smart Scanner (optional, realtime mode only)
+SCANNER_TOP_N=3                  # Max qualified candidates per scan
+# Position Sizing (optional)
 POSITION_SIZING_ENABLED=true
POSITION_BASE_ALLOCATION_PCT=5.0 POSITION_BASE_ALLOCATION_PCT=5.0
POSITION_MIN_ALLOCATION_PCT=1.0 POSITION_MIN_ALLOCATION_PCT=1.0
POSITION_MAX_ALLOCATION_PCT=10.0 POSITION_MAX_ALLOCATION_PCT=10.0
POSITION_VOLATILITY_TARGET_SCORE=50.0 POSITION_VOLATILITY_TARGET_SCORE=50.0
# Legacy/compat scanner thresholds (kept for backward compatibility) # Legacy/compat scanner thresholds (kept for backward compatibility)
RSI_OVERSOLD_THRESHOLD=30 RSI_OVERSOLD_THRESHOLD=30
RSI_MOMENTUM_THRESHOLD=70 RSI_MOMENTUM_THRESHOLD=70
@@ -635,9 +363,4 @@ Tests use in-memory SQLite (`DB_PATH=":memory:"`) and dummy credentials via `tes
- Invalid token → log error, trading unaffected - Invalid token → log error, trading unaffected
- Rate limit exceeded → queued via rate limiter - Rate limit exceeded → queued via rate limiter
### Playbook Generation Failure **Guarantee**: Notification failures never interrupt trading operations.
- Timeout → fall back to defensive playbook (`DEFENSIVE_PLAYBOOK_ON_FAILURE`)
- API error → use previous day's playbook if available
- No playbook → skip pre-market phase, fall back to direct AI calls
**Guarantee**: Notification and dashboard failures never interrupt trading operations.
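The fail-safe guarantee above is usually enforced by swallowing notifier exceptions at the call boundary rather than inside trading logic. A minimal sketch (names are hypothetical, not the project's actual classes):

```python
import logging

logger = logging.getLogger("notify")

def notify_safely(send, message: str) -> bool:
    """Invoke a notifier but never let its failure propagate to trading code."""
    try:
        send(message)
        return True
    except Exception as exc:  # deliberately broad: notifications must not crash trading
        logger.error("Notification failed (trading continues): %s", exc)
        return False

def broken_sender(message: str) -> None:
    raise ConnectionError("Telegram API timeout")

ok = notify_safely(broken_sender, "trade executed")
print(ok)  # False
```

The same wrapper shape covers missing credentials, HTTP errors, and dashboard failures alike: the error is logged, and the caller only sees a boolean.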
View File
@@ -119,7 +119,7 @@ No decorator needed for async tests.
# Install all dependencies (production + dev) # Install all dependencies (production + dev)
pip install -e ".[dev]" pip install -e ".[dev]"
# Run full test suite with coverage (551 tests across 25 files) # Run full test suite with coverage
pytest -v --cov=src --cov-report=term-missing pytest -v --cov=src --cov-report=term-missing
# Run a single test file # Run a single test file
@@ -137,61 +137,11 @@ mypy src/ --strict
# Run the trading agent # Run the trading agent
python -m src.main --mode=paper python -m src.main --mode=paper
# Run with dashboard enabled
python -m src.main --mode=paper --dashboard
# Docker # Docker
docker compose up -d ouroboros # Run agent docker compose up -d ouroboros # Run agent
docker compose --profile test up test # Run tests in container docker compose --profile test up test # Run tests in container
``` ```
## Dashboard
The FastAPI dashboard provides read-only monitoring of the trading system.
### Starting the Dashboard
```bash
# Via CLI flag
python -m src.main --mode=paper --dashboard
# Via environment variable
DASHBOARD_ENABLED=true python -m src.main --mode=paper
```
Dashboard runs as a daemon thread on `DASHBOARD_HOST:DASHBOARD_PORT` (default: `127.0.0.1:8080`).
### API Endpoints
| Endpoint | Description |
|----------|-------------|
| `GET /` | HTML dashboard UI |
| `GET /api/status` | Daily trading status by market |
| `GET /api/playbook/{date}` | Playbook for specific date (query: `market`) |
| `GET /api/scorecard/{date}` | Daily scorecard from L6_DAILY context |
| `GET /api/performance` | Performance metrics by market and combined |
| `GET /api/context/{layer}` | Context data by layer L1-L7 (query: `timeframe`) |
| `GET /api/decisions` | Decision log entries (query: `limit`, `market`) |
| `GET /api/scenarios/active` | Today's matched scenarios |
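Assuming the documented defaults (`127.0.0.1:8080`), the JSON endpoints and their query parameters can be addressed with the standard library; a small URL-builder sketch:

```python
from urllib.parse import urlencode, urlunsplit

# Build dashboard API URLs from the endpoint table above. Host/port are the
# documented defaults; adjust for DASHBOARD_HOST / DASHBOARD_PORT.
def api_url(path: str, **query: str) -> str:
    return urlunsplit(("http", "127.0.0.1:8080", path, urlencode(query), ""))

print(api_url("/api/decisions", limit="20", market="KR"))
# http://127.0.0.1:8080/api/decisions?limit=20&market=KR
```

Fetching `api_url("/api/status")` with any HTTP client then returns the daily trading status by market.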
## Telegram Commands
When `TELEGRAM_COMMANDS_ENABLED=true` (default), the bot accepts these interactive commands:
| Command | Description |
|---------|-------------|
| `/help` | List available commands |
| `/status` | Show trading status (mode, markets, P&L) |
| `/positions` | Display account summary (balance, cash, P&L) |
| `/report` | Daily summary metrics (trades, P&L, win rate) |
| `/scenarios` | Show today's playbook scenarios |
| `/review` | Display recent scorecards (L6_DAILY layer) |
| `/dashboard` | Show dashboard URL if enabled |
| `/stop` | Pause trading |
| `/resume` | Resume trading |
Commands are only processed from the authorized `TELEGRAM_CHAT_ID`.
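The chat-id gate described above amounts to one comparison per incoming update. A sketch using the Telegram Bot API update shape (the handler itself is illustrative, not the project's code):

```python
AUTHORIZED_CHAT_ID = 123456789  # from TELEGRAM_CHAT_ID

def should_handle(update: dict) -> bool:
    """Accept only slash-commands coming from the authorized chat."""
    message = update.get("message", {})
    chat_id = message.get("chat", {}).get("id")
    text = message.get("text", "")
    return chat_id == AUTHORIZED_CHAT_ID and text.startswith("/")

print(should_handle({"message": {"chat": {"id": 123456789}, "text": "/status"}}))  # True
print(should_handle({"message": {"chat": {"id": 999}, "text": "/stop"}}))          # False
```

Any update failing the check is silently dropped, so strangers messaging the bot cannot pause or resume trading.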
## Environment Setup ## Environment Setup
```bash ```bash
View File
@@ -91,27 +91,6 @@
## 2026-02-16 ## 2026-02-16
### Docs v2 sync (full documentation refresh)
**Background:**
- After the v2 features landed, the docs diverged significantly from the actual code
- Docs recorded 54 tests / 4 files, while the codebase actually had 551 tests / 25 files
- Core v2 features (Playbook, Scenario Engine, Dashboard, Telegram Commands, Daily Review, Context System, Backup) were undocumented
**Requirements:**
1. `docs/testing.md` — reflect 551 tests / 25 files, describe every test file
2. `docs/architecture.md` — add v2 components (Strategy, Context, Dashboard, Decision Logger, etc.), Playbook Mode data flow, 5 DB schema tables, v2 environment variables
3. `docs/commands.md` — Dashboard run commands, reference for the 9 Telegram commands
4. `CLAUDE.md` — expand the Project Structure tree, update test counts, `--dashboard` flag
5. `docs/skills.md` — unify the DB filename as `trades.db`, add Dashboard commands
6. Keep existing valid troubleshooting notes, code examples, etc.
**Results:**
- Updated 6 documentation files
- A previous attempt (2 commits) deleted too much existing content and was discarded; redone from main
**Issue/PR:** #131, PR #134
### Overseas scanner improvements: ranking integration + volatility-first selection ### Overseas scanner improvements: ranking integration + volatility-first selection
**Background:** **Background:**
@@ -165,39 +144,3 @@
**Impact:** **Impact:**
- Domestic/overseas scanner criteria are now consistently volatility-centric - Domestic/overseas scanner criteria are now consistently volatility-centric
- Exposure shrinks automatically in high-volatility regimes; under-entry is mitigated in low-volatility regimes - Exposure shrinks automatically in high-volatility regimes; under-entry is mitigated in low-volatility regimes
## 2026-02-18
### Fix KIS overseas ranking API 404 errors
**Background:**
- The KIS overseas stock ranking API (`fetch_overseas_rankings`) returned HTTP 404 on every exchange
- The Smart Scanner found no overseas candidates, so no trades were executed at all
**Root cause:**
- TR_IDs, API paths, and exchange codes all diverged from the official KIS documentation
**Results:**
- `src/config.py`: corrected TR_ID/path defaults to the official KIS spec
- `src/broker/overseas.py`: added a ranking-API-specific exchange code map (NASD→NAS, NYSE→NYS, AMEX→AMS), used correct API parameters
- `tests/test_overseas_broker.py`: added 19 unit tests
**Impact:**
- Overseas ranking scans work again, so the Smart Scanner can detect candidate stocks
### Fix Gemini prompt_override being ignored
**Background:**
- During `run_overnight`, playbook generation failed for every market (`JSONDecodeError`)
- Fallback to the defensive playbook marked every stock HOLD
**Root cause:**
- `pre_market_planner.py` put the playbook-specific prompt in `market_data["prompt_override"]` before calling `gemini.decide()`
- The `decide()` method in `gemini_client.py` never checked the `prompt_override` key and always built the regular trade-decision prompt
- Gemini returned a regular trade decision instead of playbook JSON, so parsing failed
**Results:**
- `src/brain/gemini_client.py`: `decide()` now prefers `prompt_override` when present
- `tests/test_brain.py`: added 3 tests (override passthrough, optimization bypass, unchanged behavior when absent)
**Issue/PR:** #143

View File
``` ```
Runs the agent in paper-trading mode (no real orders). Runs the agent in paper-trading mode (no real orders).
### Start Trading Agent with Dashboard
```bash
python -m src.main --mode=paper --dashboard
```
Runs the agent with FastAPI dashboard on `127.0.0.1:8080` (configurable via `DASHBOARD_HOST`/`DASHBOARD_PORT`).
### Start Trading Agent (Production) ### Start Trading Agent (Production)
```bash ```bash
docker compose up -d ouroboros docker compose up -d ouroboros
@@ -65,7 +59,7 @@ Analyze the last 30 days of trade logs and generate performance metrics.
python -m src.evolution.optimizer --evolve python -m src.evolution.optimizer --evolve
``` ```
Triggers the evolution engine to: Triggers the evolution engine to:
1. Analyze `trades.db` for failing patterns 1. Analyze `trade_logs.db` for failing patterns
2. Ask Gemini to generate a new strategy 2. Ask Gemini to generate a new strategy
3. Run tests on the new strategy 3. Run tests on the new strategy
4. Create a PR if tests pass 4. Create a PR if tests pass
@@ -97,12 +91,12 @@ curl http://localhost:8080/health
### View Trade Logs ### View Trade Logs
```bash ```bash
sqlite3 data/trades.db "SELECT * FROM trades ORDER BY timestamp DESC LIMIT 20;" sqlite3 data/trade_logs.db "SELECT * FROM trades ORDER BY timestamp DESC LIMIT 20;"
``` ```
### Export Trade History ### Export Trade History
```bash ```bash
sqlite3 -header -csv data/trades.db "SELECT * FROM trades;" > trades_export.csv sqlite3 -header -csv data/trade_logs.db "SELECT * FROM trades;" > trades_export.csv
``` ```
## Safety Checklist (Pre-Deploy) ## Safety Checklist (Pre-Deploy)
View File
@@ -2,29 +2,51 @@
## Test Structure ## Test Structure
**551 tests** across **25 files**. `asyncio_mode = "auto"` in pyproject.toml — async tests need no special decorator. **54 tests** across four files. `asyncio_mode = "auto"` in pyproject.toml — async tests need no special decorator.
The `settings` fixture in `conftest.py` provides safe defaults with test credentials and in-memory DB. The `settings` fixture in `conftest.py` provides safe defaults with test credentials and in-memory DB.
### Test Files ### Test Files
#### Core Components

##### `tests/test_risk.py` (14 tests)
- Circuit breaker boundaries and exact threshold triggers
- Fat-finger edge cases and percentage validation

#### `tests/test_risk.py` (11 tests)
- Circuit breaker boundaries
- Fat-finger edge cases
- P&L calculation edge cases - P&L calculation edge cases
- Order validation logic - Order validation logic
**Example:**
```python
def test_circuit_breaker_exact_threshold(risk_manager):
    """Circuit breaker should trip at exactly -3.0%."""
    with pytest.raises(CircuitBreakerTripped):
        risk_manager.validate_order(
            current_pnl_pct=-3.0,
            order_amount=1000,
            total_cash=10000
        )
```

##### `tests/test_broker.py` (11 tests)
#### `tests/test_broker.py` (6 tests)
- OAuth token lifecycle - OAuth token lifecycle
- Rate limiting enforcement - Rate limiting enforcement
- Hash key generation - Hash key generation
- Network error handling - Network error handling
- SSL context configuration - SSL context configuration
**Example:**
```python
async def test_rate_limiter(broker):
    """Rate limiter should delay requests to stay under 10 RPS."""
    start = time.monotonic()
    for _ in range(15):  # 15 requests
        await broker._rate_limiter.acquire()
    elapsed = time.monotonic() - start
    assert elapsed >= 1.0  # Should take at least 1 second
```

##### `tests/test_brain.py` (24 tests)
- Valid JSON parsing and markdown-wrapped JSON handling
#### `tests/test_brain.py` (18 tests)
- Valid JSON parsing
- Markdown-wrapped JSON handling
- Malformed JSON fallback - Malformed JSON fallback
- Missing fields handling - Missing fields handling
- Invalid action validation - Invalid action validation
@@ -32,143 +54,33 @@ The `settings` fixture in `conftest.py` provides safe defaults with test credent
- Empty response handling - Empty response handling
- Prompt construction for different markets - Prompt construction for different markets
**Example:**
```python
async def test_confidence_below_threshold_forces_hold(brain):
    """Decisions below confidence threshold should force HOLD."""
    decision = brain.parse_response('{"action":"BUY","confidence":70,"rationale":"test"}')
    assert decision.action == "HOLD"
    assert decision.confidence == 70
```

##### `tests/test_market_schedule.py` (24 tests)
#### `tests/test_market_schedule.py` (19 tests)
- Market open/close logic - Market open/close logic
- Timezone handling (UTC, Asia/Seoul, America/New_York, etc.) - Timezone handling (UTC, Asia/Seoul, America/New_York, etc.)
- DST (Daylight Saving Time) transitions - DST (Daylight Saving Time) transitions
- Weekend handling and lunch break logic - Weekend handling
- Lunch break logic
- Multiple market filtering - Multiple market filtering
- Next market open calculation - Next market open calculation
**Example:**
```python
def test_is_market_open_during_trading_hours():
    """Market should be open during regular trading hours."""
    # KRX: 9:00-15:30 KST, no lunch break
    market = MARKETS["KR"]
    trading_time = datetime(2026, 2, 3, 10, 0, tzinfo=ZoneInfo("Asia/Seoul"))  # Monday 10:00
    assert is_market_open(market, trading_time) is True
```

##### `tests/test_db.py` (3 tests)
- Database initialization and table creation
- Trade logging with all fields (market, exchange_code, decision_id)
- Query and retrieval operations

##### `tests/test_main.py` (37 tests)
- Trading loop orchestration
- Market iteration and stock processing
- Dashboard integration (`--dashboard` flag)
- Telegram command handler wiring
- Error handling and graceful shutdown
#### Strategy & Playbook (v2)
##### `tests/test_pre_market_planner.py` (37 tests)
- Pre-market playbook generation
- Gemini API integration for scenario creation
- Timeout handling and defensive playbook fallback
- Multi-market playbook generation
##### `tests/test_scenario_engine.py` (44 tests)
- Scenario matching against live market data
- Confidence scoring and threshold filtering
- Multiple scenario type handling
- Edge cases (no match, partial match, expired scenarios)
##### `tests/test_playbook_store.py` (23 tests)
- Playbook persistence to SQLite
- Date-based retrieval and market filtering
- Playbook status management (generated, active, expired)
- JSON serialization/deserialization
##### `tests/test_strategy_models.py` (33 tests)
- Pydantic model validation for scenarios, playbooks, decisions
- Field constraints and default values
- Serialization round-trips
#### Analysis & Scanning
##### `tests/test_volatility.py` (24 tests)
- ATR and RSI calculation accuracy
- Volume surge ratio computation
- Momentum scoring
- Breakout/breakdown pattern detection
- Market scanner watchlist management
##### `tests/test_smart_scanner.py` (13 tests)
- Python-first filtering pipeline
- RSI and volume ratio filter logic
- Candidate scoring and ranking
- Fallback to static watchlist
#### Context & Memory
##### `tests/test_context.py` (18 tests)
- L1-L7 layer storage and retrieval
- Context key-value CRUD operations
- Timeframe-based queries
- Layer metadata management
##### `tests/test_context_scheduler.py` (5 tests)
- Periodic context aggregation scheduling
- Layer summarization triggers
#### Evolution & Review
##### `tests/test_evolution.py` (24 tests)
- Strategy optimization loop
- High-confidence losing trade analysis
- Generated strategy validation
##### `tests/test_daily_review.py` (10 tests)
- End-of-day review generation
- Trade performance summarization
- Context layer (L6_DAILY) integration
##### `tests/test_scorecard.py` (3 tests)
- Daily scorecard metrics calculation
- Win rate, P&L, confidence tracking
#### Notifications & Commands
##### `tests/test_telegram.py` (25 tests)
- Message sending and formatting
- Rate limiting (leaky bucket)
- Error handling (network timeout, invalid token)
- Auto-disable on missing credentials
- Notification types (trade, circuit breaker, fat-finger, market events)
##### `tests/test_telegram_commands.py` (31 tests)
- 9 command handlers (/help, /status, /positions, /report, /scenarios, /review, /dashboard, /stop, /resume)
- Long polling and command dispatch
- Authorization filtering by chat_id
- Command response formatting
#### Dashboard
##### `tests/test_dashboard.py` (14 tests)
- FastAPI endpoint responses (8 API routes)
- Status, playbook, scorecard, performance, context, decisions, scenarios
- Query parameter handling (market, date, limit)
#### Performance & Quality
##### `tests/test_token_efficiency.py` (34 tests)
- Gemini token usage optimization
- Prompt size reduction verification
- Cache effectiveness
##### `tests/test_latency_control.py` (30 tests)
- API call latency measurement
- Rate limiter timing accuracy
- Async operation overhead
##### `tests/test_decision_logger.py` (9 tests)
- Decision audit trail completeness
- Context snapshot capture
- Outcome tracking (P&L, accuracy)
##### `tests/test_data_integration.py` (38 tests)
- External data source integration
- News API, market data, economic calendar
- Error handling for API failures
##### `tests/test_backup.py` (23 tests)
- Backup scheduler and execution
- Cloud storage (S3) upload
- Health monitoring
- Data export functionality
## Coverage Requirements ## Coverage Requirements
@@ -179,6 +91,20 @@ Check coverage:
pytest -v --cov=src --cov-report=term-missing pytest -v --cov=src --cov-report=term-missing
``` ```
Expected output:
```
Name Stmts Miss Cover Missing
-----------------------------------------------------------
src/brain/gemini_client.py 85 5 94% 165-169
src/broker/kis_api.py 120 12 90% ...
src/core/risk_manager.py 35 2 94% ...
src/db.py 25 1 96% ...
src/main.py 150 80 47% (excluded from CI)
src/markets/schedule.py 95 3 97% ...
-----------------------------------------------------------
TOTAL 510 103 80%
```
**Note:** `main.py` has lower coverage as it contains the main loop which is tested via integration/manual testing. **Note:** `main.py` has lower coverage as it contains the main loop which is tested via integration/manual testing.
## Test Configuration ## Test Configuration

View File
#!/usr/bin/env bash
# Morning summary for overnight run logs.
set -euo pipefail
LOG_DIR="${LOG_DIR:-data/overnight}"
if [ ! -d "$LOG_DIR" ]; then
echo "Log directory not found: $LOG_DIR"
exit 1
fi
latest_run="$(ls -1t "$LOG_DIR"/run_*.log 2>/dev/null | head -n 1 || true)"
latest_watchdog="$(ls -1t "$LOG_DIR"/watchdog_*.log 2>/dev/null | head -n 1 || true)"
if [ -z "$latest_run" ]; then
echo "No run logs found: $LOG_DIR/run_*.log"
exit 1
fi
echo "Overnight report"
echo "- run log: $latest_run"
if [ -n "$latest_watchdog" ]; then
echo "- watchdog log: $latest_watchdog"
fi
start_line="$(head -n 1 "$latest_run" || true)"
end_line="$(tail -n 1 "$latest_run" || true)"
info_count="$(rg -c '"level": "INFO"' "$latest_run" || true)"
warn_count="$(rg -c '"level": "WARNING"' "$latest_run" || true)"
error_count="$(rg -c '"level": "ERROR"' "$latest_run" || true)"
critical_count="$(rg -c '"level": "CRITICAL"' "$latest_run" || true)"
traceback_count="$(rg -c 'Traceback' "$latest_run" || true)"
echo "- start: ${start_line:-N/A}"
echo "- end: ${end_line:-N/A}"
echo "- INFO: ${info_count:-0}"
echo "- WARNING: ${warn_count:-0}"
echo "- ERROR: ${error_count:-0}"
echo "- CRITICAL: ${critical_count:-0}"
echo "- Traceback: ${traceback_count:-0}"
if [ -n "$latest_watchdog" ]; then
watchdog_errors="$(rg -c '\[ERROR\]' "$latest_watchdog" || true)"
echo "- watchdog ERROR: ${watchdog_errors:-0}"
echo ""
echo "Recent watchdog logs:"
tail -n 5 "$latest_watchdog" || true
fi
echo ""
echo "Recent app logs:"
tail -n 20 "$latest_run" || true

View File
#!/usr/bin/env bash
# Start The Ouroboros overnight with logs and watchdog.
set -euo pipefail
LOG_DIR="${LOG_DIR:-data/overnight}"
CHECK_INTERVAL="${CHECK_INTERVAL:-30}"
TMUX_AUTO="${TMUX_AUTO:-true}"
TMUX_ATTACH="${TMUX_ATTACH:-true}"
TMUX_SESSION_PREFIX="${TMUX_SESSION_PREFIX:-ouroboros_overnight}"
if [ -z "${APP_CMD:-}" ]; then
if [ -x ".venv/bin/python" ]; then
PYTHON_BIN=".venv/bin/python"
elif command -v python3 >/dev/null 2>&1; then
PYTHON_BIN="python3"
elif command -v python >/dev/null 2>&1; then
PYTHON_BIN="python"
else
echo "Could not find .venv/bin/python or a python3/python executable."
exit 1
fi
dashboard_port="${DASHBOARD_PORT:-8080}"
APP_CMD="DASHBOARD_PORT=$dashboard_port $PYTHON_BIN -m src.main --mode=paper --dashboard"
fi
mkdir -p "$LOG_DIR"
timestamp="$(date +"%Y%m%d_%H%M%S")"
RUN_LOG="$LOG_DIR/run_${timestamp}.log"
WATCHDOG_LOG="$LOG_DIR/watchdog_${timestamp}.log"
PID_FILE="$LOG_DIR/app.pid"
WATCHDOG_PID_FILE="$LOG_DIR/watchdog.pid"
if [ -f "$PID_FILE" ]; then
old_pid="$(cat "$PID_FILE" || true)"
if [ -n "$old_pid" ] && kill -0 "$old_pid" 2>/dev/null; then
echo "App is already running. pid=$old_pid"
exit 1
fi
fi
echo "[$(date -u +"%Y-%m-%dT%H:%M:%SZ")] starting: $APP_CMD" | tee -a "$RUN_LOG"
nohup bash -lc "$APP_CMD" >>"$RUN_LOG" 2>&1 &
app_pid=$!
echo "$app_pid" > "$PID_FILE"
echo "[$(date -u +"%Y-%m-%dT%H:%M:%SZ")] app pid=$app_pid" | tee -a "$RUN_LOG"
nohup env PID_FILE="$PID_FILE" LOG_FILE="$WATCHDOG_LOG" CHECK_INTERVAL="$CHECK_INTERVAL" \
bash scripts/watchdog.sh >/dev/null 2>&1 &
watchdog_pid=$!
echo "$watchdog_pid" > "$WATCHDOG_PID_FILE"
cat <<EOF
Startup complete
- app pid: $app_pid
- watchdog pid: $watchdog_pid
- app log: $RUN_LOG
- watchdog log: $WATCHDOG_LOG
Live log tail:
tail -f "$RUN_LOG"
tail -f "$WATCHDOG_LOG"
EOF
if [ "$TMUX_AUTO" = "true" ]; then
if ! command -v tmux >/dev/null 2>&1; then
echo "tmux not found; skipping automatic session creation."
exit 0
fi
session_name="${TMUX_SESSION_PREFIX}_${timestamp}"
window_name="overnight"
tmux new-session -d -s "$session_name" -n "$window_name" "tail -f '$RUN_LOG'"
tmux split-window -t "${session_name}:${window_name}" -v "tail -f '$WATCHDOG_LOG'"
tmux select-layout -t "${session_name}:${window_name}" even-vertical
echo "tmux session created: $session_name"
echo "Attach manually: tmux attach -t $session_name"
if [ -z "${TMUX:-}" ] && [ "$TMUX_ATTACH" = "true" ]; then
tmux attach -t "$session_name"
fi
fi

View File
#!/usr/bin/env bash
# Stop The Ouroboros overnight app/watchdog/tmux session.
set -euo pipefail
LOG_DIR="${LOG_DIR:-data/overnight}"
PID_FILE="$LOG_DIR/app.pid"
WATCHDOG_PID_FILE="$LOG_DIR/watchdog.pid"
TMUX_SESSION_PREFIX="${TMUX_SESSION_PREFIX:-ouroboros_overnight}"
KILL_TIMEOUT="${KILL_TIMEOUT:-5}"
stop_pid() {
local name="$1"
local pid="$2"
if [ -z "$pid" ]; then
echo "$name PID is empty."
return 1
fi
if ! kill -0 "$pid" 2>/dev/null; then
echo "$name process already stopped (pid=$pid)"
return 0
fi
kill "$pid" 2>/dev/null || true
for _ in $(seq 1 "$KILL_TIMEOUT"); do
if ! kill -0 "$pid" 2>/dev/null; then
echo "$name stopped (pid=$pid)"
return 0
fi
sleep 1
done
kill -9 "$pid" 2>/dev/null || true
if ! kill -0 "$pid" 2>/dev/null; then
echo "$name force-killed (pid=$pid)"
return 0
fi
echo "$name failed to stop (pid=$pid)"
return 1
}
status=0
if [ -f "$WATCHDOG_PID_FILE" ]; then
watchdog_pid="$(cat "$WATCHDOG_PID_FILE" || true)"
stop_pid "watchdog" "$watchdog_pid" || status=1
rm -f "$WATCHDOG_PID_FILE"
else
echo "watchdog pid file not found: $WATCHDOG_PID_FILE"
fi
if [ -f "$PID_FILE" ]; then
app_pid="$(cat "$PID_FILE" || true)"
stop_pid "app" "$app_pid" || status=1
rm -f "$PID_FILE"
else
echo "app pid file not found: $PID_FILE"
fi
if command -v tmux >/dev/null 2>&1; then
sessions="$(tmux ls 2>/dev/null | awk -F: -v p="$TMUX_SESSION_PREFIX" '$1 ~ "^" p "_" {print $1}')"
if [ -n "$sessions" ]; then
while IFS= read -r s; do
[ -z "$s" ] && continue
tmux kill-session -t "$s" 2>/dev/null || true
echo "tmux session killed: $s"
done <<< "$sessions"
else
echo "No tmux sessions to kill (prefix=${TMUX_SESSION_PREFIX}_)"
fi
fi
exit "$status"

View File
#!/usr/bin/env bash
# Simple watchdog for The Ouroboros process.
set -euo pipefail
PID_FILE="${PID_FILE:-data/overnight/app.pid}"
LOG_FILE="${LOG_FILE:-data/overnight/watchdog.log}"
CHECK_INTERVAL="${CHECK_INTERVAL:-30}"
STATUS_EVERY="${STATUS_EVERY:-10}"
mkdir -p "$(dirname "$LOG_FILE")"
log() {
printf '%s %s\n' "$(date -u +"%Y-%m-%dT%H:%M:%SZ")" "$1" | tee -a "$LOG_FILE"
}
if [ ! -f "$PID_FILE" ]; then
log "[ERROR] pid file not found: $PID_FILE"
exit 1
fi
PID="$(cat "$PID_FILE")"
if [ -z "$PID" ]; then
log "[ERROR] pid file is empty: $PID_FILE"
exit 1
fi
log "[INFO] watchdog started (pid=$PID, interval=${CHECK_INTERVAL}s)"
count=0
while true; do
if kill -0 "$PID" 2>/dev/null; then
count=$((count + 1))
if [ $((count % STATUS_EVERY)) -eq 0 ]; then
log "[INFO] process alive (pid=$PID)"
fi
else
log "[ERROR] process stopped (pid=$PID)"
exit 1
fi
sleep "$CHECK_INTERVAL"
done
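The liveness probe in the watchdog loop above relies on `kill -0`, which checks process existence without delivering a signal. The same probe in Python, shown as an illustrative sketch:

```python
import os

def process_alive(pid: int) -> bool:
    """Probe a PID with signal 0: existence/permission check only, no signal sent."""
    try:
        os.kill(pid, 0)
    except ProcessLookupError:
        return False
    except PermissionError:
        return True  # the process exists but belongs to another user
    return True

print(process_alive(os.getpid()))  # True
```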
View File
@@ -315,11 +315,6 @@ class SmartVolatilityScanner:
logger.info("Overseas scanner: no symbol universe for %s", market.name) logger.info("Overseas scanner: no symbol universe for %s", market.name)
return [] return []
logger.info(
"Overseas scanner: scanning %d fallback symbols for %s",
len(symbols),
market.name,
)
candidates: list[ScanCandidate] = [] candidates: list[ScanCandidate] = []
for stock_code in symbols: for stock_code in symbols:
try: try:
@@ -355,11 +350,6 @@ class SmartVolatilityScanner:
logger.warning("Failed to analyze overseas %s: %s", stock_code, exc) logger.warning("Failed to analyze overseas %s: %s", stock_code, exc)
except Exception as exc: except Exception as exc:
logger.error("Unexpected error analyzing overseas %s: %s", stock_code, exc) logger.error("Unexpected error analyzing overseas %s: %s", stock_code, exc)
logger.info(
"Overseas symbol fallback scan found %d candidates for %s",
len(candidates),
market.name,
)
return candidates return candidates
def get_stock_codes(self, candidates: list[ScanCandidate]) -> list[str]: def get_stock_codes(self, candidates: list[ScanCandidate]) -> list[str]:
View File
@@ -410,10 +410,8 @@ class GeminiClient:
cached=True, cached=True,
) )
# Build prompt (prompt_override takes priority for callers like pre_market_planner) # Build optimized prompt
if "prompt_override" in market_data: if self._enable_optimization:
prompt = market_data["prompt_override"]
elif self._enable_optimization:
prompt = self._optimizer.build_compressed_prompt(market_data) prompt = self._optimizer.build_compressed_prompt(market_data)
else: else:
prompt = await self.build_prompt(market_data, news_sentiment) prompt = await self.build_prompt(market_data, news_sentiment)
View File
@@ -104,14 +104,12 @@ class KISBroker:
time_since_last_attempt = now - self._last_refresh_attempt time_since_last_attempt = now - self._last_refresh_attempt
if time_since_last_attempt < self._refresh_cooldown: if time_since_last_attempt < self._refresh_cooldown:
remaining = self._refresh_cooldown - time_since_last_attempt remaining = self._refresh_cooldown - time_since_last_attempt
# Do not fail fast here. If token is unavailable, upstream calls error_msg = (
# will all fail for up to a minute and scanning returns no trades. f"Token refresh on cooldown. "
logger.warning( f"Retry in {remaining:.1f}s (KIS allows 1/minute)"
"Token refresh on cooldown. Waiting %.1fs before retry (KIS allows 1/minute)",
remaining,
) )
await asyncio.sleep(remaining) logger.warning(error_msg)
now = asyncio.get_event_loop().time() raise ConnectionError(error_msg)
logger.info("Refreshing KIS access token") logger.info("Refreshing KIS access token")
self._last_refresh_attempt = now self._last_refresh_attempt = now
View File
@@ -12,20 +12,6 @@ from src.broker.kis_api import KISBroker
logger = logging.getLogger(__name__) logger = logging.getLogger(__name__)
# Ranking API uses different exchange codes than order/quote APIs.
_RANKING_EXCHANGE_MAP: dict[str, str] = {
"NASD": "NAS",
"NYSE": "NYS",
"AMEX": "AMS",
"SEHK": "HKS",
"SHAA": "SHS",
"SZAA": "SZS",
"HSX": "HSX",
"HNX": "HNX",
"TSE": "TSE",
}
class OverseasBroker: class OverseasBroker:
"""KIS Overseas Stock API wrapper that reuses KISBroker infrastructure.""" """KIS Overseas Stock API wrapper that reuses KISBroker infrastructure."""
@@ -84,7 +70,7 @@ class OverseasBroker:
ranking_type: str = "fluctuation", ranking_type: str = "fluctuation",
limit: int = 30, limit: int = 30,
) -> list[dict[str, Any]]: ) -> list[dict[str, Any]]:
"""Fetch overseas rankings (price change or volume surge). """Fetch overseas rankings (price change or volume amount).
Ranking API specs may differ by account/product. Endpoint paths and Ranking API specs may differ by account/product. Endpoint paths and
TR_IDs are configurable via settings and can be overridden in .env. TR_IDs are configurable via settings and can be overridden in .env.
@@ -95,63 +81,47 @@ class OverseasBroker:
await self._broker._rate_limiter.acquire() await self._broker._rate_limiter.acquire()
session = self._broker._get_session() session = self._broker._get_session()
ranking_excd = _RANKING_EXCHANGE_MAP.get(exchange_code, exchange_code)
if ranking_type == "volume": if ranking_type == "volume":
tr_id = self._broker._settings.OVERSEAS_RANKING_VOLUME_TR_ID tr_id = self._broker._settings.OVERSEAS_RANKING_VOLUME_TR_ID
path = self._broker._settings.OVERSEAS_RANKING_VOLUME_PATH path = self._broker._settings.OVERSEAS_RANKING_VOLUME_PATH
params: dict[str, str] = {
"AUTH": "",
"EXCD": ranking_excd,
"MIXN": "0",
"VOL_RANG": "0",
}
else: else:
tr_id = self._broker._settings.OVERSEAS_RANKING_FLUCT_TR_ID tr_id = self._broker._settings.OVERSEAS_RANKING_FLUCT_TR_ID
path = self._broker._settings.OVERSEAS_RANKING_FLUCT_PATH path = self._broker._settings.OVERSEAS_RANKING_FLUCT_PATH
params = {
"AUTH": "",
"EXCD": ranking_excd,
"NDAY": "0",
"GUBN": "1",
"VOL_RANG": "0",
}
headers = await self._broker._auth_headers(tr_id) headers = await self._broker._auth_headers(tr_id)
url = f"{self._broker._base_url}{path}" url = f"{self._broker._base_url}{path}"
try: # Try common param variants used by KIS overseas quotation APIs.
async with session.get(url, headers=headers, params=params) as resp: param_variants = [
if resp.status != 200: {"AUTH": "", "EXCD": exchange_code, "NREC": str(max(limit, 30))},
{"AUTH": "", "OVRS_EXCG_CD": exchange_code, "NREC": str(max(limit, 30))},
{"AUTH": "", "EXCD": exchange_code},
{"AUTH": "", "OVRS_EXCG_CD": exchange_code},
]
last_error: str | None = None
for params in param_variants:
try:
async with session.get(url, headers=headers, params=params) as resp:
text = await resp.text() text = await resp.text()
if resp.status == 404: if resp.status != 200:
logger.warning( last_error = f"HTTP {resp.status}: {text}"
"Overseas ranking endpoint unavailable (404) for %s/%s; " continue
"using symbol fallback scan",
exchange_code,
ranking_type,
)
return []
raise ConnectionError(
f"fetch_overseas_rankings failed ({resp.status}): {text}"
)
data = await resp.json() data = await resp.json()
rows = self._extract_ranking_rows(data) rows = self._extract_ranking_rows(data)
if rows: if rows:
return rows[:limit] return rows[:limit]
logger.debug( # keep trying another param variant if response has no usable rows
"Overseas ranking returned empty for %s/%s (keys=%s)", last_error = f"empty output (keys={list(data.keys())})"
exchange_code, except (TimeoutError, aiohttp.ClientError) as exc:
ranking_type, last_error = str(exc)
list(data.keys()), continue
)
return [] raise ConnectionError(
except (TimeoutError, aiohttp.ClientError) as exc: f"fetch_overseas_rankings failed for {exchange_code}/{ranking_type}: {last_error}"
raise ConnectionError( )
f"Network error fetching overseas rankings: {exc}"
) from exc
async def get_overseas_balance(self, exchange_code: str) -> dict[str, Any]: async def get_overseas_balance(self, exchange_code: str) -> dict[str, Any]:
""" """
View File
@@ -91,13 +91,13 @@ class Settings(BaseSettings):
     # Overseas ranking API (KIS endpoint/TR_ID may vary by account/product)
     # Override these from .env if your account uses different specs.
     OVERSEAS_RANKING_ENABLED: bool = True
-    OVERSEAS_RANKING_FLUCT_TR_ID: str = "HHDFS76290000"
-    OVERSEAS_RANKING_VOLUME_TR_ID: str = "HHDFS76270000"
+    OVERSEAS_RANKING_FLUCT_TR_ID: str = "HHDFS76200100"
+    OVERSEAS_RANKING_VOLUME_TR_ID: str = "HHDFS76200200"
     OVERSEAS_RANKING_FLUCT_PATH: str = (
-        "/uapi/overseas-stock/v1/ranking/updown-rate"
+        "/uapi/overseas-price/v1/quotations/inquire-updown-rank"
     )
     OVERSEAS_RANKING_VOLUME_PATH: str = (
-        "/uapi/overseas-stock/v1/ranking/volume-surge"
+        "/uapi/overseas-price/v1/quotations/inquire-volume-rank"
     )

     # Dashboard (optional)
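Per the comment in the hunk, these are plain settings fields, so an `.env` entry with the same name overrides the shipped default. The lookup reduces to the following sketch (using `os.getenv` as a stand-in for pydantic's env loading; the field name is from the diff, the function is illustrative):

```python
import os


def ranking_fluct_tr_id(default: str = "HHDFS76200100") -> str:
    # An environment/.env value named after the Settings field wins over the default
    return os.getenv("OVERSEAS_RANKING_FLUCT_TR_ID", default)
```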

View File

@@ -1,8 +1,7 @@
"""Pre-market planner — generates DayPlaybook via Gemini before market open. """Pre-market planner — generates DayPlaybook via Gemini before market open.
One Gemini API call per market per day. Candidates come from SmartVolatilityScanner. One Gemini API call per market per day. Candidates come from SmartVolatilityScanner.
On failure, returns a smart rule-based fallback playbook that uses scanner signals On failure, returns a defensive playbook (all HOLD, no trades).
(momentum/oversold) to generate BUY conditions, avoiding the all-HOLD problem.
""" """
from __future__ import annotations from __future__ import annotations
@@ -135,7 +134,7 @@ class PreMarketPlanner:
except Exception: except Exception:
logger.exception("Playbook generation failed for %s", market) logger.exception("Playbook generation failed for %s", market)
if self._settings.DEFENSIVE_PLAYBOOK_ON_FAILURE: if self._settings.DEFENSIVE_PLAYBOOK_ON_FAILURE:
return self._smart_fallback_playbook(today, market, candidates, self._settings) return self._defensive_playbook(today, market, candidates)
return self._empty_playbook(today, market) return self._empty_playbook(today, market)
def build_cross_market_context( def build_cross_market_context(
@@ -471,99 +470,3 @@ class PreMarketPlanner:
), ),
], ],
) )
@staticmethod
def _smart_fallback_playbook(
today: date,
market: str,
candidates: list[ScanCandidate],
settings: Settings,
) -> DayPlaybook:
"""Rule-based fallback playbook when Gemini is unavailable.
Uses scanner signals (RSI, volume_ratio) to generate meaningful BUY
conditions instead of the all-SELL defensive playbook. Candidates are
already pre-qualified by SmartVolatilityScanner, so we trust their
signals and build actionable scenarios from them.
Scenario logic per candidate:
- momentum signal: BUY when volume_ratio exceeds scanner threshold
- oversold signal: BUY when RSI is below oversold threshold
- always: SELL stop-loss at -3.0% as guard
"""
stock_playbooks = []
for c in candidates:
scenarios: list[StockScenario] = []
if c.signal == "momentum":
scenarios.append(
StockScenario(
condition=StockCondition(
volume_ratio_above=settings.VOL_MULTIPLIER,
),
action=ScenarioAction.BUY,
confidence=80,
allocation_pct=10.0,
stop_loss_pct=-3.0,
take_profit_pct=5.0,
rationale=(
f"Rule-based BUY: momentum signal, "
f"volume={c.volume_ratio:.1f}x (fallback planner)"
),
)
)
elif c.signal == "oversold":
scenarios.append(
StockScenario(
condition=StockCondition(
rsi_below=settings.RSI_OVERSOLD_THRESHOLD,
),
action=ScenarioAction.BUY,
confidence=80,
allocation_pct=10.0,
stop_loss_pct=-3.0,
take_profit_pct=5.0,
rationale=(
f"Rule-based BUY: oversold signal, "
f"RSI={c.rsi:.0f} (fallback planner)"
),
)
)
# Always add stop-loss guard
scenarios.append(
StockScenario(
condition=StockCondition(price_change_pct_below=-3.0),
action=ScenarioAction.SELL,
confidence=90,
stop_loss_pct=-3.0,
rationale="Rule-based stop-loss (fallback planner)",
)
)
stock_playbooks.append(
StockPlaybook(
stock_code=c.stock_code,
scenarios=scenarios,
)
)
logger.info(
"Smart fallback playbook for %s: %d stocks with rule-based BUY/SELL conditions",
market,
len(stock_playbooks),
)
return DayPlaybook(
date=today,
market=market,
market_outlook=MarketOutlook.NEUTRAL,
default_action=ScenarioAction.HOLD,
stock_playbooks=stock_playbooks,
global_rules=[
GlobalRule(
condition="portfolio_pnl_pct < -2.0",
action=ScenarioAction.REDUCE_ALL,
rationale="Defensive: reduce on loss threshold",
),
],
)
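The scenario logic of the removed method condenses to a few lines. A sketch under the defaults visible above (plain dicts instead of `StockScenario`/`StockCondition`, which keeps it runnable here):

```python
def fallback_scenarios(
    signal: str, vol_multiplier: float = 2.0, rsi_oversold: float = 30.0
) -> list[dict]:
    """Condensed sketch of the removed rule-based fallback:
    momentum -> BUY on volume surge, oversold -> BUY on low RSI,
    and always a -3% stop-loss SELL guard."""
    scenarios: list[dict] = []
    if signal == "momentum":
        scenarios.append({"action": "BUY", "volume_ratio_above": vol_multiplier})
    elif signal == "oversold":
        scenarios.append({"action": "BUY", "rsi_below": rsi_oversold})
    # Every candidate gets the stop-loss guard, regardless of signal
    scenarios.append({"action": "SELL", "price_change_pct_below": -3.0})
    return scenarios
```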

View File

@@ -2,10 +2,6 @@
 from __future__ import annotations

-from unittest.mock import AsyncMock, MagicMock, patch
-
-import pytest
-
 from src.brain.gemini_client import GeminiClient

 # ---------------------------------------------------------------------------

@@ -274,97 +270,3 @@ class TestBatchDecisionParsing:
         assert decisions["AAPL"].action == "HOLD"
         assert decisions["AAPL"].confidence == 0
# ---------------------------------------------------------------------------
# Prompt Override (used by pre_market_planner)
# ---------------------------------------------------------------------------
class TestPromptOverride:
"""decide() must use prompt_override when present in market_data."""
@pytest.mark.asyncio
async def test_prompt_override_is_sent_to_gemini(self, settings):
"""When prompt_override is in market_data, it should be used as the prompt."""
client = GeminiClient(settings)
custom_prompt = "You are a playbook generator. Return JSON with scenarios."
mock_response = MagicMock()
mock_response.text = '{"action": "HOLD", "confidence": 50, "rationale": "test"}'
with patch.object(
client._client.aio.models,
"generate_content",
new_callable=AsyncMock,
return_value=mock_response,
) as mock_generate:
market_data = {
"stock_code": "PLANNER",
"current_price": 0,
"prompt_override": custom_prompt,
}
await client.decide(market_data)
# Verify the custom prompt was sent, not a built prompt
mock_generate.assert_called_once()
actual_prompt = mock_generate.call_args[1].get(
"contents", mock_generate.call_args[0][1] if len(mock_generate.call_args[0]) > 1 else None
)
assert actual_prompt == custom_prompt
@pytest.mark.asyncio
async def test_prompt_override_skips_optimization(self, settings):
"""prompt_override should bypass prompt optimization."""
client = GeminiClient(settings)
client._enable_optimization = True
custom_prompt = "Custom playbook prompt"
mock_response = MagicMock()
mock_response.text = '{"action": "HOLD", "confidence": 50, "rationale": "ok"}'
with patch.object(
client._client.aio.models,
"generate_content",
new_callable=AsyncMock,
return_value=mock_response,
) as mock_generate:
market_data = {
"stock_code": "PLANNER",
"current_price": 0,
"prompt_override": custom_prompt,
}
await client.decide(market_data)
actual_prompt = mock_generate.call_args[1].get(
"contents", mock_generate.call_args[0][1] if len(mock_generate.call_args[0]) > 1 else None
)
assert actual_prompt == custom_prompt
@pytest.mark.asyncio
async def test_without_prompt_override_uses_build_prompt(self, settings):
"""Without prompt_override, decide() should use build_prompt as before."""
client = GeminiClient(settings)
mock_response = MagicMock()
mock_response.text = '{"action": "HOLD", "confidence": 50, "rationale": "ok"}'
with patch.object(
client._client.aio.models,
"generate_content",
new_callable=AsyncMock,
return_value=mock_response,
) as mock_generate:
market_data = {
"stock_code": "005930",
"current_price": 72000,
}
await client.decide(market_data)
actual_prompt = mock_generate.call_args[1].get(
"contents", mock_generate.call_args[0][1] if len(mock_generate.call_args[0]) > 1 else None
)
# Should contain stock code from build_prompt, not be a custom override
assert "005930" in actual_prompt

View File

@@ -90,12 +90,12 @@ class TestTokenManagement:
         await broker.close()

     @pytest.mark.asyncio
-    async def test_token_refresh_cooldown_waits_then_retries(self, settings):
-        """Token refresh should wait out cooldown then retry (issue #54)."""
+    async def test_token_refresh_cooldown_prevents_rapid_retries(self, settings):
+        """Token refresh should enforce cooldown after failure (issue #54)."""
         broker = KISBroker(settings)
-        broker._refresh_cooldown = 0.1  # Short cooldown for testing
-        # All attempts fail with 403 (EGW00133)
+        broker._refresh_cooldown = 2.0  # Short cooldown for testing
+        # First refresh attempt fails with 403 (EGW00133)
         mock_resp_403 = AsyncMock()
         mock_resp_403.status = 403
         mock_resp_403.text = AsyncMock(

@@ -109,8 +109,8 @@ class TestTokenManagement:
         with pytest.raises(ConnectionError, match="Token refresh failed"):
             await broker._ensure_token()

-        # Second attempt within cooldown should wait then retry (and still get 403)
-        with pytest.raises(ConnectionError, match="Token refresh failed"):
+        # Second attempt within cooldown should fail with cooldown error
+        with pytest.raises(ConnectionError, match="Token refresh on cooldown"):
             await broker._ensure_token()

         await broker.close()
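The behavior the renamed test asserts is a simple gate: a failed refresh opens a cooldown window, and any attempt inside that window fails fast with a distinct error instead of hitting the token API again. A minimal sketch of that gate (class and attribute names are illustrative, not the real broker's):

```python
import time


class TokenRefresher:
    """Sketch of the cooldown semantics asserted by the updated test."""

    def __init__(self, cooldown: float = 2.0) -> None:
        self._cooldown = cooldown
        self._last_attempt = float("-inf")  # no attempt yet

    def ensure_token(self) -> None:
        now = time.monotonic()
        if now - self._last_attempt < self._cooldown:
            # Within the cooldown window: fail fast, do not call the API again
            raise ConnectionError("Token refresh on cooldown")
        self._last_attempt = now
        # Stand-in for the KIS 403 EGW00133 rejection path
        raise ConnectionError("Token refresh failed (EGW00133)")
```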

View File

@@ -1,521 +0,0 @@
"""Tests for OverseasBroker — rankings, price, balance, order, and helpers."""
from __future__ import annotations
from unittest.mock import AsyncMock, MagicMock
import aiohttp
import pytest
from src.broker.kis_api import KISBroker
from src.broker.overseas import OverseasBroker, _RANKING_EXCHANGE_MAP
from src.config import Settings
def _make_async_cm(mock_resp: AsyncMock) -> MagicMock:
"""Create an async context manager that returns mock_resp on __aenter__."""
cm = MagicMock()
cm.__aenter__ = AsyncMock(return_value=mock_resp)
cm.__aexit__ = AsyncMock(return_value=False)
return cm
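The deleted `_make_async_cm` helper is the standard trick for mocking `async with session.get(...)`: it works because `MagicMock` specifically supports configuring magic methods like `__aenter__`/`__aexit__` on instances. A runnable reproduction of how the deleted tests used it (the URL and response payload are illustrative):

```python
import asyncio
from unittest.mock import AsyncMock, MagicMock


def make_async_cm(mock_resp):
    """Same shape as the deleted _make_async_cm helper above."""
    cm = MagicMock()
    cm.__aenter__ = AsyncMock(return_value=mock_resp)
    cm.__aexit__ = AsyncMock(return_value=False)
    return cm


async def demo() -> dict:
    resp = AsyncMock()
    resp.status = 200
    resp.json = AsyncMock(return_value={"output": [{"symb": "AAPL"}]})
    session = MagicMock()
    session.get = MagicMock(return_value=make_async_cm(resp))
    # `async with session.get(...)` now yields the mocked response
    async with session.get("https://example.invalid") as r:
        return await r.json()
```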
@pytest.fixture
def mock_settings() -> Settings:
"""Provide mock settings with correct default TR_IDs/paths."""
return Settings(
KIS_APP_KEY="test_key",
KIS_APP_SECRET="test_secret",
KIS_ACCOUNT_NO="12345678-01",
GEMINI_API_KEY="test_gemini_key",
)
@pytest.fixture
def mock_broker(mock_settings: Settings) -> KISBroker:
"""Provide a mock KIS broker."""
broker = KISBroker(mock_settings)
broker.get_orderbook = AsyncMock() # type: ignore[method-assign]
return broker
@pytest.fixture
def overseas_broker(mock_broker: KISBroker) -> OverseasBroker:
"""Provide an OverseasBroker wrapping a mock KISBroker."""
return OverseasBroker(mock_broker)
def _setup_broker_mocks(overseas_broker: OverseasBroker, mock_session: MagicMock) -> None:
"""Wire up common broker mocks."""
overseas_broker._broker._rate_limiter.acquire = AsyncMock()
overseas_broker._broker._get_session = MagicMock(return_value=mock_session)
overseas_broker._broker._auth_headers = AsyncMock(return_value={})
class TestRankingExchangeMap:
"""Test exchange code mapping for ranking API."""
def test_nasd_maps_to_nas(self) -> None:
assert _RANKING_EXCHANGE_MAP["NASD"] == "NAS"
def test_nyse_maps_to_nys(self) -> None:
assert _RANKING_EXCHANGE_MAP["NYSE"] == "NYS"
def test_amex_maps_to_ams(self) -> None:
assert _RANKING_EXCHANGE_MAP["AMEX"] == "AMS"
def test_sehk_maps_to_hks(self) -> None:
assert _RANKING_EXCHANGE_MAP["SEHK"] == "HKS"
def test_unmapped_exchange_passes_through(self) -> None:
assert _RANKING_EXCHANGE_MAP.get("UNKNOWN", "UNKNOWN") == "UNKNOWN"
def test_tse_unchanged(self) -> None:
assert _RANKING_EXCHANGE_MAP["TSE"] == "TSE"
class TestConfigDefaults:
"""Test that config defaults match KIS official API specs."""
def test_fluct_tr_id(self, mock_settings: Settings) -> None:
assert mock_settings.OVERSEAS_RANKING_FLUCT_TR_ID == "HHDFS76290000"
def test_volume_tr_id(self, mock_settings: Settings) -> None:
assert mock_settings.OVERSEAS_RANKING_VOLUME_TR_ID == "HHDFS76270000"
def test_fluct_path(self, mock_settings: Settings) -> None:
assert mock_settings.OVERSEAS_RANKING_FLUCT_PATH == "/uapi/overseas-stock/v1/ranking/updown-rate"
def test_volume_path(self, mock_settings: Settings) -> None:
assert mock_settings.OVERSEAS_RANKING_VOLUME_PATH == "/uapi/overseas-stock/v1/ranking/volume-surge"
class TestFetchOverseasRankings:
"""Test fetch_overseas_rankings method."""
@pytest.mark.asyncio
async def test_fluctuation_uses_correct_params(
self, overseas_broker: OverseasBroker
) -> None:
"""Fluctuation ranking should use HHDFS76290000, updown-rate path, and correct params."""
mock_resp = AsyncMock()
mock_resp.status = 200
mock_resp.json = AsyncMock(
return_value={"output": [{"symb": "AAPL", "name": "Apple"}]}
)
mock_session = MagicMock()
mock_session.get = MagicMock(return_value=_make_async_cm(mock_resp))
_setup_broker_mocks(overseas_broker, mock_session)
overseas_broker._broker._auth_headers = AsyncMock(
return_value={"authorization": "Bearer test"}
)
result = await overseas_broker.fetch_overseas_rankings("NASD", "fluctuation")
assert len(result) == 1
assert result[0]["symb"] == "AAPL"
call_args = mock_session.get.call_args
url = call_args[0][0]
params = call_args[1]["params"]
assert "/uapi/overseas-stock/v1/ranking/updown-rate" in url
assert params["EXCD"] == "NAS"
assert params["NDAY"] == "0"
assert params["GUBN"] == "1"
assert params["VOL_RANG"] == "0"
overseas_broker._broker._auth_headers.assert_called_with("HHDFS76290000")
@pytest.mark.asyncio
async def test_volume_uses_correct_params(
self, overseas_broker: OverseasBroker
) -> None:
"""Volume ranking should use HHDFS76270000, volume-surge path, and correct params."""
mock_resp = AsyncMock()
mock_resp.status = 200
mock_resp.json = AsyncMock(
return_value={"output": [{"symb": "TSLA", "name": "Tesla"}]}
)
mock_session = MagicMock()
mock_session.get = MagicMock(return_value=_make_async_cm(mock_resp))
_setup_broker_mocks(overseas_broker, mock_session)
overseas_broker._broker._auth_headers = AsyncMock(
return_value={"authorization": "Bearer test"}
)
result = await overseas_broker.fetch_overseas_rankings("NYSE", "volume")
assert len(result) == 1
call_args = mock_session.get.call_args
url = call_args[0][0]
params = call_args[1]["params"]
assert "/uapi/overseas-stock/v1/ranking/volume-surge" in url
assert params["EXCD"] == "NYS"
assert params["MIXN"] == "0"
assert params["VOL_RANG"] == "0"
assert "NDAY" not in params
assert "GUBN" not in params
overseas_broker._broker._auth_headers.assert_called_with("HHDFS76270000")
@pytest.mark.asyncio
async def test_404_returns_empty_list(
self, overseas_broker: OverseasBroker
) -> None:
"""HTTP 404 should return empty list (fallback) instead of raising."""
mock_resp = AsyncMock()
mock_resp.status = 404
mock_resp.text = AsyncMock(return_value="Not Found")
mock_session = MagicMock()
mock_session.get = MagicMock(return_value=_make_async_cm(mock_resp))
_setup_broker_mocks(overseas_broker, mock_session)
result = await overseas_broker.fetch_overseas_rankings("AMEX", "fluctuation")
assert result == []
@pytest.mark.asyncio
async def test_non_404_error_raises(
self, overseas_broker: OverseasBroker
) -> None:
"""Non-404 HTTP errors should raise ConnectionError."""
mock_resp = AsyncMock()
mock_resp.status = 500
mock_resp.text = AsyncMock(return_value="Internal Server Error")
mock_session = MagicMock()
mock_session.get = MagicMock(return_value=_make_async_cm(mock_resp))
_setup_broker_mocks(overseas_broker, mock_session)
with pytest.raises(ConnectionError, match="500"):
await overseas_broker.fetch_overseas_rankings("NASD")
@pytest.mark.asyncio
async def test_empty_response_returns_empty(
self, overseas_broker: OverseasBroker
) -> None:
"""Empty output in response should return empty list."""
mock_resp = AsyncMock()
mock_resp.status = 200
mock_resp.json = AsyncMock(return_value={"output": []})
mock_session = MagicMock()
mock_session.get = MagicMock(return_value=_make_async_cm(mock_resp))
_setup_broker_mocks(overseas_broker, mock_session)
result = await overseas_broker.fetch_overseas_rankings("NASD")
assert result == []
@pytest.mark.asyncio
async def test_ranking_disabled_returns_empty(
self, overseas_broker: OverseasBroker
) -> None:
"""When OVERSEAS_RANKING_ENABLED=False, should return empty immediately."""
overseas_broker._broker._settings.OVERSEAS_RANKING_ENABLED = False
result = await overseas_broker.fetch_overseas_rankings("NASD")
assert result == []
@pytest.mark.asyncio
async def test_limit_truncates_results(
self, overseas_broker: OverseasBroker
) -> None:
"""Results should be truncated to the specified limit."""
rows = [{"symb": f"SYM{i}"} for i in range(20)]
mock_resp = AsyncMock()
mock_resp.status = 200
mock_resp.json = AsyncMock(return_value={"output": rows})
mock_session = MagicMock()
mock_session.get = MagicMock(return_value=_make_async_cm(mock_resp))
_setup_broker_mocks(overseas_broker, mock_session)
result = await overseas_broker.fetch_overseas_rankings("NASD", limit=5)
assert len(result) == 5
@pytest.mark.asyncio
async def test_network_error_raises(
self, overseas_broker: OverseasBroker
) -> None:
"""Network errors should raise ConnectionError."""
cm = MagicMock()
cm.__aenter__ = AsyncMock(side_effect=aiohttp.ClientError("timeout"))
cm.__aexit__ = AsyncMock(return_value=False)
mock_session = MagicMock()
mock_session.get = MagicMock(return_value=cm)
_setup_broker_mocks(overseas_broker, mock_session)
with pytest.raises(ConnectionError, match="Network error"):
await overseas_broker.fetch_overseas_rankings("NASD")
@pytest.mark.asyncio
async def test_exchange_code_mapping_applied(
self, overseas_broker: OverseasBroker
) -> None:
"""All major exchanges should use mapped codes in API params."""
for original, mapped in [("NASD", "NAS"), ("NYSE", "NYS"), ("AMEX", "AMS")]:
mock_resp = AsyncMock()
mock_resp.status = 200
mock_resp.json = AsyncMock(return_value={"output": [{"symb": "X"}]})
mock_session = MagicMock()
mock_session.get = MagicMock(return_value=_make_async_cm(mock_resp))
_setup_broker_mocks(overseas_broker, mock_session)
await overseas_broker.fetch_overseas_rankings(original)
call_params = mock_session.get.call_args[1]["params"]
assert call_params["EXCD"] == mapped, f"{original} should map to {mapped}"
class TestGetOverseasPrice:
"""Test get_overseas_price method."""
@pytest.mark.asyncio
async def test_success(self, overseas_broker: OverseasBroker) -> None:
"""Successful price fetch returns JSON data."""
mock_resp = AsyncMock()
mock_resp.status = 200
mock_resp.json = AsyncMock(return_value={"output": {"last": "150.00"}})
mock_session = MagicMock()
mock_session.get = MagicMock(return_value=_make_async_cm(mock_resp))
_setup_broker_mocks(overseas_broker, mock_session)
overseas_broker._broker._auth_headers = AsyncMock(return_value={"authorization": "Bearer t"})
result = await overseas_broker.get_overseas_price("NASD", "AAPL")
assert result["output"]["last"] == "150.00"
call_args = mock_session.get.call_args
params = call_args[1]["params"]
assert params["EXCD"] == "NASD"
assert params["SYMB"] == "AAPL"
@pytest.mark.asyncio
async def test_http_error_raises(self, overseas_broker: OverseasBroker) -> None:
"""Non-200 response should raise ConnectionError."""
mock_resp = AsyncMock()
mock_resp.status = 400
mock_resp.text = AsyncMock(return_value="Bad Request")
mock_session = MagicMock()
mock_session.get = MagicMock(return_value=_make_async_cm(mock_resp))
_setup_broker_mocks(overseas_broker, mock_session)
with pytest.raises(ConnectionError, match="get_overseas_price failed"):
await overseas_broker.get_overseas_price("NASD", "AAPL")
@pytest.mark.asyncio
async def test_network_error_raises(self, overseas_broker: OverseasBroker) -> None:
"""Network error should raise ConnectionError."""
cm = MagicMock()
cm.__aenter__ = AsyncMock(side_effect=aiohttp.ClientError("conn refused"))
cm.__aexit__ = AsyncMock(return_value=False)
mock_session = MagicMock()
mock_session.get = MagicMock(return_value=cm)
_setup_broker_mocks(overseas_broker, mock_session)
with pytest.raises(ConnectionError, match="Network error"):
await overseas_broker.get_overseas_price("NASD", "AAPL")
class TestGetOverseasBalance:
"""Test get_overseas_balance method."""
@pytest.mark.asyncio
async def test_success(self, overseas_broker: OverseasBroker) -> None:
"""Successful balance fetch returns JSON data."""
mock_resp = AsyncMock()
mock_resp.status = 200
mock_resp.json = AsyncMock(return_value={"output1": [{"pdno": "AAPL"}]})
mock_session = MagicMock()
mock_session.get = MagicMock(return_value=_make_async_cm(mock_resp))
_setup_broker_mocks(overseas_broker, mock_session)
result = await overseas_broker.get_overseas_balance("NASD")
assert result["output1"][0]["pdno"] == "AAPL"
@pytest.mark.asyncio
async def test_http_error_raises(self, overseas_broker: OverseasBroker) -> None:
"""Non-200 should raise ConnectionError."""
mock_resp = AsyncMock()
mock_resp.status = 500
mock_resp.text = AsyncMock(return_value="Server Error")
mock_session = MagicMock()
mock_session.get = MagicMock(return_value=_make_async_cm(mock_resp))
_setup_broker_mocks(overseas_broker, mock_session)
with pytest.raises(ConnectionError, match="get_overseas_balance failed"):
await overseas_broker.get_overseas_balance("NASD")
@pytest.mark.asyncio
async def test_network_error_raises(self, overseas_broker: OverseasBroker) -> None:
"""Network error should raise ConnectionError."""
cm = MagicMock()
cm.__aenter__ = AsyncMock(side_effect=TimeoutError("timeout"))
cm.__aexit__ = AsyncMock(return_value=False)
mock_session = MagicMock()
mock_session.get = MagicMock(return_value=cm)
_setup_broker_mocks(overseas_broker, mock_session)
with pytest.raises(ConnectionError, match="Network error"):
await overseas_broker.get_overseas_balance("NYSE")
class TestSendOverseasOrder:
"""Test send_overseas_order method."""
@pytest.mark.asyncio
async def test_buy_market_order(self, overseas_broker: OverseasBroker) -> None:
"""Market buy order should use VTTT1002U and ORD_DVSN=01."""
mock_resp = AsyncMock()
mock_resp.status = 200
mock_resp.json = AsyncMock(return_value={"rt_cd": "0"})
mock_session = MagicMock()
mock_session.post = MagicMock(return_value=_make_async_cm(mock_resp))
_setup_broker_mocks(overseas_broker, mock_session)
overseas_broker._broker._get_hash_key = AsyncMock(return_value="hashval")
result = await overseas_broker.send_overseas_order("NASD", "AAPL", "BUY", 10)
assert result["rt_cd"] == "0"
# Verify BUY TR_ID
overseas_broker._broker._auth_headers.assert_called_with("VTTT1002U")
call_args = mock_session.post.call_args
body = call_args[1]["json"]
assert body["ORD_DVSN"] == "01" # market order
assert body["OVRS_ORD_UNPR"] == "0"
@pytest.mark.asyncio
async def test_sell_limit_order(self, overseas_broker: OverseasBroker) -> None:
"""Limit sell order should use VTTT1006U and ORD_DVSN=00."""
mock_resp = AsyncMock()
mock_resp.status = 200
mock_resp.json = AsyncMock(return_value={"rt_cd": "0"})
mock_session = MagicMock()
mock_session.post = MagicMock(return_value=_make_async_cm(mock_resp))
_setup_broker_mocks(overseas_broker, mock_session)
overseas_broker._broker._get_hash_key = AsyncMock(return_value="hashval")
result = await overseas_broker.send_overseas_order("NYSE", "MSFT", "SELL", 5, price=350.0)
assert result["rt_cd"] == "0"
overseas_broker._broker._auth_headers.assert_called_with("VTTT1006U")
call_args = mock_session.post.call_args
body = call_args[1]["json"]
assert body["ORD_DVSN"] == "00" # limit order
assert body["OVRS_ORD_UNPR"] == "350.0"
@pytest.mark.asyncio
async def test_order_http_error_raises(self, overseas_broker: OverseasBroker) -> None:
"""Non-200 should raise ConnectionError."""
mock_resp = AsyncMock()
mock_resp.status = 400
mock_resp.text = AsyncMock(return_value="Bad Request")
mock_session = MagicMock()
mock_session.post = MagicMock(return_value=_make_async_cm(mock_resp))
_setup_broker_mocks(overseas_broker, mock_session)
overseas_broker._broker._get_hash_key = AsyncMock(return_value="hashval")
with pytest.raises(ConnectionError, match="send_overseas_order failed"):
await overseas_broker.send_overseas_order("NASD", "AAPL", "BUY", 1)
@pytest.mark.asyncio
async def test_order_network_error_raises(self, overseas_broker: OverseasBroker) -> None:
"""Network error should raise ConnectionError."""
cm = MagicMock()
cm.__aenter__ = AsyncMock(side_effect=aiohttp.ClientError("conn reset"))
cm.__aexit__ = AsyncMock(return_value=False)
mock_session = MagicMock()
mock_session.post = MagicMock(return_value=cm)
_setup_broker_mocks(overseas_broker, mock_session)
overseas_broker._broker._get_hash_key = AsyncMock(return_value="hashval")
with pytest.raises(ConnectionError, match="Network error"):
await overseas_broker.send_overseas_order("NASD", "TSLA", "SELL", 2)
class TestGetCurrencyCode:
"""Test _get_currency_code mapping."""
def test_us_exchanges(self, overseas_broker: OverseasBroker) -> None:
assert overseas_broker._get_currency_code("NASD") == "USD"
assert overseas_broker._get_currency_code("NYSE") == "USD"
assert overseas_broker._get_currency_code("AMEX") == "USD"
def test_japan(self, overseas_broker: OverseasBroker) -> None:
assert overseas_broker._get_currency_code("TSE") == "JPY"
def test_hong_kong(self, overseas_broker: OverseasBroker) -> None:
assert overseas_broker._get_currency_code("SEHK") == "HKD"
def test_china(self, overseas_broker: OverseasBroker) -> None:
assert overseas_broker._get_currency_code("SHAA") == "CNY"
assert overseas_broker._get_currency_code("SZAA") == "CNY"
def test_vietnam(self, overseas_broker: OverseasBroker) -> None:
assert overseas_broker._get_currency_code("HNX") == "VND"
assert overseas_broker._get_currency_code("HSX") == "VND"
def test_unknown_defaults_usd(self, overseas_broker: OverseasBroker) -> None:
assert overseas_broker._get_currency_code("UNKNOWN") == "USD"
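The currency mapping these tests pinned down is small enough to restate standalone; a sketch matching the asserted behavior, including the USD default (dict and function names here are illustrative):

```python
_CURRENCY_BY_EXCHANGE = {
    "NASD": "USD", "NYSE": "USD", "AMEX": "USD",  # US venues
    "TSE": "JPY",                                  # Japan
    "SEHK": "HKD",                                 # Hong Kong
    "SHAA": "CNY", "SZAA": "CNY",                  # China
    "HNX": "VND", "HSX": "VND",                    # Vietnam
}


def get_currency_code(exchange_code: str) -> str:
    # Unknown exchanges default to USD, as the deleted tests assert
    return _CURRENCY_BY_EXCHANGE.get(exchange_code, "USD")
```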
class TestExtractRankingRows:
"""Test _extract_ranking_rows helper."""
def test_output_key(self, overseas_broker: OverseasBroker) -> None:
data = {"output": [{"a": 1}, {"b": 2}]}
assert overseas_broker._extract_ranking_rows(data) == [{"a": 1}, {"b": 2}]
def test_output1_key(self, overseas_broker: OverseasBroker) -> None:
data = {"output1": [{"c": 3}]}
assert overseas_broker._extract_ranking_rows(data) == [{"c": 3}]
def test_output2_key(self, overseas_broker: OverseasBroker) -> None:
data = {"output2": [{"d": 4}]}
assert overseas_broker._extract_ranking_rows(data) == [{"d": 4}]
def test_no_list_returns_empty(self, overseas_broker: OverseasBroker) -> None:
data = {"output": "not a list"}
assert overseas_broker._extract_ranking_rows(data) == []
def test_empty_data(self, overseas_broker: OverseasBroker) -> None:
assert overseas_broker._extract_ranking_rows({}) == []
def test_filters_non_dict_rows(self, overseas_broker: OverseasBroker) -> None:
data = {"output": [{"a": 1}, "invalid", {"b": 2}]}
assert overseas_broker._extract_ranking_rows(data) == [{"a": 1}, {"b": 2}]
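The behavior `_extract_ranking_rows` was tested for reduces to: return the first list found under `output`/`output1`/`output2`, keeping dict rows only. A standalone sketch consistent with those assertions (a reconstruction, not the broker's source):

```python
def extract_ranking_rows(data: dict) -> list[dict]:
    """First list found under output/output1/output2; dict rows only."""
    for key in ("output", "output1", "output2"):
        rows = data.get(key)
        if isinstance(rows, list):
            return [row for row in rows if isinstance(row, dict)]
    return []
```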

View File

@@ -164,23 +164,18 @@ class TestGeneratePlaybook:
         assert pb.market_outlook == MarketOutlook.NEUTRAL

     @pytest.mark.asyncio
-    async def test_gemini_failure_returns_smart_fallback(self) -> None:
+    async def test_gemini_failure_returns_defensive(self) -> None:
         planner = _make_planner()
         planner._gemini.decide = AsyncMock(side_effect=RuntimeError("API timeout"))
-        # oversold candidate (signal="oversold", rsi=28.5)
         candidates = [_candidate()]
         pb = await planner.generate_playbook("KR", candidates, today=date(2026, 2, 8))
         assert pb.default_action == ScenarioAction.HOLD
-        # Smart fallback uses NEUTRAL outlook (not NEUTRAL_TO_BEARISH)
-        assert pb.market_outlook == MarketOutlook.NEUTRAL
+        assert pb.market_outlook == MarketOutlook.NEUTRAL_TO_BEARISH
         assert pb.stock_count == 1
-        # Oversold candidate → first scenario is BUY, second is SELL stop-loss
-        scenarios = pb.stock_playbooks[0].scenarios
-        assert scenarios[0].action == ScenarioAction.BUY
-        assert scenarios[0].condition.rsi_below == 30
-        assert scenarios[1].action == ScenarioAction.SELL
+        # Defensive playbook has stop-loss scenarios
+        assert pb.stock_playbooks[0].scenarios[0].action == ScenarioAction.SELL

     @pytest.mark.asyncio
     async def test_gemini_failure_empty_when_defensive_disabled(self) -> None:

@@ -662,171 +657,3 @@ class TestDefensivePlaybook:
         assert pb.stock_count == 0
         assert pb.market == "US"
         assert pb.market_outlook == MarketOutlook.NEUTRAL
# ---------------------------------------------------------------------------
# Smart fallback playbook
# ---------------------------------------------------------------------------
class TestSmartFallbackPlaybook:
"""Tests for _smart_fallback_playbook — rule-based BUY/SELL on Gemini failure."""
def _make_settings(self) -> Settings:
return Settings(
KIS_APP_KEY="test",
KIS_APP_SECRET="test",
KIS_ACCOUNT_NO="12345678-01",
GEMINI_API_KEY="test",
RSI_OVERSOLD_THRESHOLD=30,
VOL_MULTIPLIER=2.0,
)
def test_momentum_candidate_gets_buy_on_volume(self) -> None:
candidates = [
_candidate(code="CHOW", signal="momentum", volume_ratio=13.64, rsi=100.0)
]
settings = self._make_settings()
pb = PreMarketPlanner._smart_fallback_playbook(
date(2026, 2, 17), "US_AMEX", candidates, settings
)
assert pb.stock_count == 1
sp = pb.stock_playbooks[0]
assert sp.stock_code == "CHOW"
# First scenario: BUY with volume_ratio_above
buy_sc = sp.scenarios[0]
assert buy_sc.action == ScenarioAction.BUY
assert buy_sc.condition.volume_ratio_above == 2.0
assert buy_sc.condition.rsi_below is None
assert buy_sc.confidence == 80
# Second scenario: stop-loss SELL
sell_sc = sp.scenarios[1]
assert sell_sc.action == ScenarioAction.SELL
assert sell_sc.condition.price_change_pct_below == -3.0
def test_oversold_candidate_gets_buy_on_rsi(self) -> None:
candidates = [
_candidate(code="005930", signal="oversold", rsi=22.0, volume_ratio=3.5)
]
settings = self._make_settings()
pb = PreMarketPlanner._smart_fallback_playbook(
date(2026, 2, 17), "KR", candidates, settings
)
sp = pb.stock_playbooks[0]
buy_sc = sp.scenarios[0]
assert buy_sc.action == ScenarioAction.BUY
assert buy_sc.condition.rsi_below == 30
assert buy_sc.condition.volume_ratio_above is None
def test_all_candidates_have_stop_loss_sell(self) -> None:
candidates = [
_candidate(code="AAA", signal="momentum", volume_ratio=5.0),
_candidate(code="BBB", signal="oversold", rsi=25.0),
]
settings = self._make_settings()
pb = PreMarketPlanner._smart_fallback_playbook(
date(2026, 2, 17), "US_NASDAQ", candidates, settings
)
assert pb.stock_count == 2
for sp in pb.stock_playbooks:
sell_scenarios = [s for s in sp.scenarios if s.action == ScenarioAction.SELL]
assert len(sell_scenarios) == 1
assert sell_scenarios[0].condition.price_change_pct_below == -3.0
def test_market_outlook_is_neutral(self) -> None:
candidates = [_candidate(signal="momentum", volume_ratio=5.0)]
settings = self._make_settings()
pb = PreMarketPlanner._smart_fallback_playbook(
date(2026, 2, 17), "US_AMEX", candidates, settings
)
assert pb.market_outlook == MarketOutlook.NEUTRAL
def test_default_action_is_hold(self) -> None:
candidates = [_candidate(signal="momentum", volume_ratio=5.0)]
settings = self._make_settings()
pb = PreMarketPlanner._smart_fallback_playbook(
date(2026, 2, 17), "US_AMEX", candidates, settings
)
assert pb.default_action == ScenarioAction.HOLD
def test_has_global_reduce_all_rule(self) -> None:
candidates = [_candidate(signal="momentum", volume_ratio=5.0)]
settings = self._make_settings()
pb = PreMarketPlanner._smart_fallback_playbook(
date(2026, 2, 17), "US_AMEX", candidates, settings
)
assert len(pb.global_rules) == 1
rule = pb.global_rules[0]
assert rule.action == ScenarioAction.REDUCE_ALL
assert "portfolio_pnl_pct" in rule.condition
def test_empty_candidates_returns_empty_playbook(self) -> None:
settings = self._make_settings()
pb = PreMarketPlanner._smart_fallback_playbook(
date(2026, 2, 17), "US_AMEX", [], settings
)
assert pb.stock_count == 0
def test_vol_multiplier_applied_from_settings(self) -> None:
"""VOL_MULTIPLIER=3.0 should set volume_ratio_above=3.0 for momentum."""
candidates = [_candidate(signal="momentum", volume_ratio=5.0)]
settings = self._make_settings()
settings = settings.model_copy(update={"VOL_MULTIPLIER": 3.0})
pb = PreMarketPlanner._smart_fallback_playbook(
date(2026, 2, 17), "US_AMEX", candidates, settings
)
buy_sc = pb.stock_playbooks[0].scenarios[0]
assert buy_sc.condition.volume_ratio_above == 3.0
def test_rsi_oversold_threshold_applied_from_settings(self) -> None:
"""RSI_OVERSOLD_THRESHOLD=25 should set rsi_below=25 for oversold."""
candidates = [_candidate(signal="oversold", rsi=22.0)]
settings = self._make_settings()
settings = settings.model_copy(update={"RSI_OVERSOLD_THRESHOLD": 25})
pb = PreMarketPlanner._smart_fallback_playbook(
date(2026, 2, 17), "KR", candidates, settings
)
buy_sc = pb.stock_playbooks[0].scenarios[0]
assert buy_sc.condition.rsi_below == 25
@pytest.mark.asyncio
async def test_generate_playbook_uses_smart_fallback_on_gemini_error(self) -> None:
"""generate_playbook() should use smart fallback (not defensive) on API failure."""
planner = _make_planner()
planner._gemini.decide = AsyncMock(side_effect=ConnectionError("429 quota exceeded"))
# momentum candidate
candidates = [
_candidate(code="CHOW", signal="momentum", volume_ratio=13.64, rsi=100.0)
]
pb = await planner.generate_playbook(
"US_AMEX", candidates, today=date(2026, 2, 18)
)
# Should NOT be all-SELL defensive; should have BUY for momentum
assert pb.stock_count == 1
buy_scenarios = [
s for s in pb.stock_playbooks[0].scenarios
if s.action == ScenarioAction.BUY
]
assert len(buy_scenarios) == 1
assert buy_scenarios[0].condition.volume_ratio_above == 2.0 # VOL_MULTIPLIER default