Compare commits

..

4 Commits

| Author | SHA1 | Message | Date |
|--------|------|---------|------|
| agentson | 09e6eef3bf | feat: unify domestic scanner and sizing; update docs | 2026-02-17 06:15:20 +09:00 |
| agentson | 10b15a4563 | docs: reflect overseas ranking integration and volatility-first selection | 2026-02-17 05:57:56 +09:00 |
| agentson | a6693560c1 | feat: prioritize overseas volatility scoring over raw rankings | 2026-02-17 05:54:46 +09:00 |
| agentson | 16bb8b6dc6 | feat: add overseas ranking integration with dynamic fallback | 2026-02-17 05:50:10 +09:00 |

Some checks failed on 09e6eef3bf: CI / test (pull_request) was cancelled.
16 changed files with 232 additions and 1169 deletions


@@ -15,9 +15,6 @@ pytest -v --cov=src
# Run (paper trading)
python -m src.main --mode=paper
-
-# Run with dashboard
-python -m src.main --mode=paper --dashboard
```

## Telegram Notifications (Optional)
@@ -46,10 +43,6 @@ Get real-time alerts for trades, circuit breakers, and system events via Telegra
- Market open/close notifications
- 📝 System startup/shutdown status
-
-### Interactive Commands
-
-With `TELEGRAM_COMMANDS_ENABLED=true` (default), the bot supports 9 bidirectional commands: `/help`, `/status`, `/positions`, `/report`, `/scenarios`, `/review`, `/dashboard`, `/stop`, `/resume`.

**Fail-safe**: Notifications never crash the trading system. Missing credentials or API errors are logged but trading continues normally.

## Smart Volatility Scanner (Optional)
@@ -116,23 +109,17 @@ User requirements and feedback are tracked in [docs/requirements-log.md](docs/re
```
src/
├── analysis/        # Technical analysis (RSI, volatility, smart scanner)
-├── backup/          # Disaster recovery (scheduler, cloud storage, health)
-├── brain/           # Gemini AI decision engine (prompt optimizer, context selector)
├── broker/          # KIS API client (domestic + overseas)
-├── context/         # L1-L7 hierarchical memory system
+├── brain/           # Gemini AI decision engine
├── core/            # Risk manager (READ-ONLY)
-├── dashboard/       # FastAPI read-only monitoring (8 API endpoints)
-├── data/            # External data integration (news, market data, calendar)
-├── evolution/       # Self-improvement (optimizer, daily review, scorecard)
-├── logging/         # Decision logger (audit trail)
+├── evolution/       # Self-improvement optimizer
├── markets/         # Market schedules and timezone handling
-├── notifications/   # Telegram alerts + bidirectional commands (9 commands)
-├── strategy/        # Pre-market planner, scenario engine, playbook store
+├── notifications/   # Telegram real-time alerts
├── db.py            # SQLite trade logging
├── main.py          # Trading loop orchestrator
└── config.py        # Settings (from .env)
-tests/               # 551 tests across 25 files
+tests/               # 343 tests across 14 files
docs/                # Extended documentation
```
@@ -144,7 +131,6 @@ ruff check src/ tests/              # Lint
mypy src/ --strict                  # Type check
python -m src.main --mode=paper     # Paper trading
-python -m src.main --mode=paper --dashboard  # With dashboard
python -m src.main --mode=live      # Live trading (⚠️ real money)

# Gitea workflow (requires tea CLI)

README.md

@@ -10,41 +10,28 @@ Trades via the KIS (Korea Investment & Securities) API, makes decisions with Google Gemini, and self-
│ (execution) │  │(trade loop) │  │ (decisions) │
└─────────────┘  └──────┬──────┘  └─────────────┘
-        ┌────────────┼────────────┐
-        │            │            │
- ┌──────┴──────┐  ┌──┴───┐ ┌──────┴──────┐
- │Risk Manager │  │  DB  │ │  Telegram   │
- │  (safety)   │  │      │ │(alerts+cmds)│
- └──────┬──────┘  └──────┘ └─────────────┘
-   ┌────────┼────────┐
-┌────┴────┐ ┌──┴──┐ ┌────┴─────┐
-│Strategy │ │ Ctx │ │Evolution │
-│(playbook)│(memory)│(evolution)│
-└─────────┘ └─────┘ └──────────┘
+         ┌──────┴──────┐
+         │Risk Manager │
+         │  (safety)   │
+         └──────┬──────┘
+         ┌──────┴──────┐
+         │  Evolution  │
+         │(strategy evolution)│
+         └─────────────┘
```
-
-**v2 core**: "Plan Once, Execute Locally" — before market open, the AI generates a scenario playbook once; during trading hours only local scenario matching runs, sharply cutting API cost and latency.
## Core Modules

-| Module | Location | Description |
+| Module | File | Description |
|------|------|------|
-| Config | `src/config.py` | Pydantic-based env-var loading and type validation (35+ variables) |
-| Broker | `src/broker/` | Async KIS API wrapper (domestic + 9 overseas markets) |
-| Brain | `src/brain/` | Gemini prompt construction, JSON parsing, token optimization |
-| Shield | `src/core/risk_manager.py` | Circuit breaker + fat-finger checks (READ-ONLY) |
-| Strategy | `src/strategy/` | Pre-Market Planner, Scenario Engine, Playbook Store |
-| Context | `src/context/` | L1-L7 hierarchical memory system |
-| Analysis | `src/analysis/` | RSI, ATR, Smart Volatility Scanner |
-| Notifications | `src/notifications/` | Bidirectional Telegram (alerts + 9 commands) |
-| Dashboard | `src/dashboard/` | FastAPI read-only monitoring (8 APIs) |
-| Evolution | `src/evolution/` | Strategy evolution + Daily Review + Scorecard |
-| Decision log | `src/logging/` | Full audit trail of trade decisions |
-| Data | `src/data/` | News, market data, and economic calendar integration |
-| Backup | `src/backup/` | Automated backups, S3 cloud, integrity verification |
-| DB | `src/db.py` | SQLite trade log (5 tables) |
+| Config | `src/config.py` | Pydantic-based env-var loading and type validation |
+| Broker | `src/broker/kis_api.py` | Async KIS API wrapper (token refresh, rate limiter, hashkey) |
+| Brain | `src/brain/gemini_client.py` | Gemini prompt construction and JSON response parsing |
+| Shield | `src/core/risk_manager.py` | Circuit breaker + fat-finger checks |
+| Notifications | `src/notifications/telegram_client.py` | Real-time Telegram trade alerts (optional) |
+| Evolution | `src/evolution/optimizer.py` | Failure-pattern analysis → new strategy generation → tests → PR |
+| DB | `src/db.py` | SQLite trade logging |

## Safety Mechanisms
@@ -55,7 +42,6 @@ Trades via the KIS (Korea Investment & Securities) API, makes decisions with Google Gemini, and self-
| Confidence threshold | Forces HOLD when Gemini confidence is below 80 |
| Rate limiter | Caps API calls with a leaky-bucket algorithm |
| Auto token refresh | Reissues the access token 1 minute before expiry |
-| Stop-loss monitoring | Real-time position protection based on playbook scenarios |

## Quick Start
@@ -81,11 +67,7 @@ pytest -v --cov=src --cov-report=term-missing

### 4. Run (Paper Trading)

```bash
-# Basic run
python -m src.main --mode=paper
-
-# Enable the dashboard
-python -m src.main --mode=paper --dashboard
```

### 5. Run with Docker
@@ -94,20 +76,7 @@ python -m src.main --mode=paper --dashboard
docker compose up -d ouroboros
```

-## Supported Markets
-
-| Country | Exchange | Codes |
-|------|--------|------|
-| 🇰🇷 Korea | KRX | KR |
-| 🇺🇸 US | NASDAQ, NYSE, AMEX | US_NASDAQ, US_NYSE, US_AMEX |
-| 🇯🇵 Japan | TSE | JP |
-| 🇭🇰 Hong Kong | SEHK | HK |
-| 🇨🇳 China | Shanghai, Shenzhen | CN_SHA, CN_SZA |
-| 🇻🇳 Vietnam | Hanoi, Ho Chi Minh | VN_HNX, VN_HSX |
-
-Select active markets with the `ENABLED_MARKETS` environment variable (default: `KR,US`).
-
-## Telegram (Optional)
+## Telegram Notifications (Optional)

Get real-time Telegram alerts for trade executions, circuit-breaker triggers, and system status.
@@ -133,51 +102,25 @@ docker compose up -d ouroboros
- Market open/close alerts
- 📝 System startup/shutdown status

-### Bidirectional Commands
-
-With `TELEGRAM_COMMANDS_ENABLED=true` (the default), 9 interactive commands are supported:
-
-| Command | Description |
-|--------|------|
-| `/help` | List available commands |
-| `/status` | Trading status (mode, markets, P&L) |
-| `/positions` | Account summary (balance, cash, P&L) |
-| `/report` | Daily summary (trade count, P&L, win rate) |
-| `/scenarios` | Today's playbook scenarios |
-| `/review` | Latest scorecard (L6_DAILY) |
-| `/dashboard` | Show the dashboard URL |
-| `/stop` | Pause trading |
-| `/resume` | Resume trading |
-
-**Fail-safe**: Trading continues even when notifications fail.
+**Fail-safe**: Trading continues even when notifications fail. Telegram API errors or missing configuration never stop the trading system.

## Tests
-551 tests are implemented across 25 files, with a minimum coverage of 80%.
+35 tests were written TDD-style, before implementation.

```
-tests/test_scenario_engine.py    — scenario matching (44)
-tests/test_data_integration.py   — external data integration (38)
-tests/test_pre_market_planner.py — playbook generation (37)
-tests/test_main.py               — trading-loop integration (37)
-tests/test_token_efficiency.py   — token optimization (34)
-tests/test_strategy_models.py    — strategy model validation (33)
-tests/test_telegram_commands.py  — Telegram commands (31)
-tests/test_latency_control.py    — latency control (30)
-tests/test_telegram.py           — Telegram alerts (25)
-... plus 16 more files
+tests/test_risk.py   — circuit breaker, fat finger, integration checks (11)
+tests/test_broker.py — token management, timeouts, HTTP errors, hashkey (6)
+tests/test_brain.py  — JSON parsing, confidence threshold, malformed-response handling (15)
```

-**Details**: [docs/testing.md](docs/testing.md)
## Tech Stack

- **Language**: Python 3.11+ (asyncio-based)
-- **Broker**: KIS Open API (REST, domestic + overseas)
+- **Broker**: KIS Open API (REST)
- **AI**: Google Gemini Pro
-- **DB**: SQLite (5 tables: trades, contexts, decision_logs, playbooks, context_metadata)
-- **Dashboard**: FastAPI + uvicorn
-- **Testing**: pytest + coverage (551 tests)
+- **DB**: SQLite
+- **Testing**: pytest + coverage
- **CI/CD**: GitHub Actions
- **Deployment**: Docker + Docker Compose
@@ -185,50 +128,27 @@ tests/test_telegram.py — Telegram alerts (25)

```
The-Ouroboros/
-├── .github/workflows/ci.yml  # CI pipeline
├── docs/
-│   ├── architecture.md       # System architecture
-│   ├── testing.md            # Testing guide
-│   ├── commands.md           # Command reference
-│   ├── context-tree.md       # L1-L7 memory system
-│   ├── workflow.md           # Git workflow
-│   ├── agents.md             # Agent policy
-│   ├── skills.md             # Tool list
-│   ├── disaster_recovery.md  # Backup/recovery
-│   └── requirements-log.md   # Requirements log
+│   ├── agents.md             # AI agent persona definitions
+│   └── skills.md             # Available tool list
├── src/
-│   ├── analysis/             # Technical analysis (RSI, ATR, Smart Scanner)
-│   ├── backup/               # Backups (scheduler, S3, integrity checks)
-│   ├── brain/                # Gemini decisions (prompt optimizer, context selector)
-│   ├── broker/               # KIS API (domestic + overseas)
-│   ├── context/              # L1-L7 hierarchical memory
-│   ├── core/                 # Risk management (READ-ONLY)
-│   ├── dashboard/            # FastAPI monitoring dashboard
-│   ├── data/                 # External data integration
-│   ├── evolution/            # Strategy evolution + Daily Review
-│   ├── logging/              # Decision audit trail
-│   ├── markets/              # Market schedules + timezones
-│   ├── notifications/        # Telegram alerts + commands
-│   ├── strategy/             # Playbooks (Planner, Scenario Engine)
│   ├── config.py             # Pydantic settings
-│   ├── db.py                 # SQLite database
-│   └── main.py               # Async trading loop
-├── tests/                    # 551 tests (25 files)
+│   ├── logging_config.py     # JSON structured logging
+│   ├── db.py                 # SQLite trade records
+│   ├── main.py               # Async trading loop
+│   ├── broker/kis_api.py     # KIS API client
+│   ├── brain/gemini_client.py          # Gemini decision engine
+│   ├── core/risk_manager.py  # Risk management
+│   ├── notifications/telegram_client.py  # Telegram alerts
+│   ├── evolution/optimizer.py          # Strategy evolution engine
+│   └── strategies/base.py    # Strategy base class
+├── tests/                    # TDD test suite
├── Dockerfile                # Multi-stage build
├── docker-compose.yml        # Service orchestration
└── pyproject.toml            # Dependencies and tool config
```
-## Documentation
-
-- **[Architecture](docs/architecture.md)** — System design, components, data flow
-- **[Testing](docs/testing.md)** — Test structure, coverage, writing guide
-- **[Commands](docs/commands.md)** — CLI, dashboard, and Telegram commands
-- **[Context Tree](docs/context-tree.md)** — L1-L7 hierarchical memory
-- **[Workflow](docs/workflow.md)** — Git workflow policy
-- **[Agent Policy](docs/agents.md)** — Safety constraints, prohibited actions
-- **[Disaster Recovery](docs/disaster_recovery.md)** — Backup/restore procedures
-- **[Requirements](docs/requirements-log.md)** — User requirements tracking

## License

See the [LICENSE](LICENSE) file for this project's license.


@@ -2,9 +2,7 @@
## Overview

-Self-evolving AI trading agent for global stock markets via KIS (Korea Investment & Securities) API. The main loop in `src/main.py` orchestrates components across multiple markets with two trading modes: daily (batch API calls) or realtime (per-stock decisions).
-
-**v2 Proactive Playbook Architecture**: The system uses a "plan once, execute locally" approach. Pre-market, the AI generates a playbook of scenarios (one Gemini API call per market per day). During trading hours, a local scenario engine matches live market data against these pre-computed scenarios — no additional AI calls needed. This dramatically reduces API costs and latency.
+Self-evolving AI trading agent for global stock markets via KIS (Korea Investment & Securities) API. The main loop in `src/main.py` orchestrates four components across multiple markets with two trading modes: daily (batch API calls) or realtime (per-stock decisions).

## Trading Modes
@@ -48,11 +46,9 @@ High-frequency trading with individual stock analysis:
**KISBroker** (`kis_api.py`) — Async KIS API client for domestic Korean market

- Automatic OAuth token refresh (valid for 24 hours)
-- Leaky-bucket rate limiter (configurable RPS, default 2.0)
+- Leaky-bucket rate limiter (10 requests per second)
- POST body hash-key signing for order authentication
- Custom SSL context with disabled hostname verification for VTS (virtual trading) endpoint due to known certificate mismatch
-- `fetch_market_rankings()` — Fetch volume surge rankings from KIS API
-- `get_daily_prices()` — Fetch OHLCV history for technical analysis
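The leaky-bucket limiter above amounts to spacing requests at least `1/RPS` apart. A minimal sketch — the `LeakyBucketLimiter` name, the `reserve()` method, and the injected clock are illustrative, not the project's actual `kis_api.py` code:

```python
import time


class LeakyBucketLimiter:
    """Space requests at least 1/rps apart; the caller sleeps for the
    returned duration before firing. Deterministic via an injectable clock."""

    def __init__(self, rps: float, clock=time.monotonic):
        self.interval = 1.0 / rps
        self.clock = clock
        self.next_free = 0.0  # earliest moment the next request may fire

    def reserve(self) -> float:
        """Return how long the caller should wait before sending."""
        now = self.clock()
        wait = max(0.0, self.next_free - now)
        self.next_free = max(now, self.next_free) + self.interval
        return wait
```

With `rps=2.0`, back-to-back reservations at the same instant return waits of 0, 0.5, 1.0 seconds, matching the configured rate.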
**OverseasBroker** (`overseas.py`) — KIS overseas stock API wrapper **OverseasBroker** (`overseas.py`) — KIS overseas stock API wrapper
@@ -67,7 +63,10 @@ High-frequency trading with individual stock analysis:
- `is_market_open()` checks weekends, trading hours, lunch breaks
- `get_open_markets()` returns currently active markets
- `get_next_market_open()` finds next market to open and when
-- 10 global markets defined (KR, US_NASDAQ, US_NYSE, US_AMEX, JP, HK, CN_SHA, CN_SZA, VN_HNX, VN_HSX)

+**New API Methods** (added in v0.9.0):
+
+- `fetch_market_rankings()` — Fetch volume surge rankings from KIS API
+- `get_daily_prices()` — Fetch OHLCV history for technical analysis

**Overseas Ranking API Methods** (added in v0.10.x):
- `fetch_overseas_rankings()` — Fetch overseas ranking universe (fluctuation / volume) - `fetch_overseas_rankings()` — Fetch overseas ranking universe (fluctuation / volume)
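The schedule checks above reduce to a weekday-plus-session-window test. A minimal sketch assuming KRX hours of 09:00-15:30 (the real `MarketSchedule` also knows holidays and per-market lunch breaks, which this omits):

```python
from datetime import datetime, time as dtime

# Assumed KRX session bounds, for illustration only.
KR_OPEN, KR_CLOSE = dtime(9, 0), dtime(15, 30)


def is_market_open(now: datetime) -> bool:
    """True when `now` falls on a weekday inside the session window."""
    if now.weekday() >= 5:  # Saturday (5) or Sunday (6)
        return False
    return KR_OPEN <= now.time() <= KR_CLOSE
```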
@@ -105,9 +104,9 @@ High-frequency trading with individual stock analysis:
- Fast Python-based filtering before expensive AI judgment
- Logs selection context (RSI-compatible proxy, volume_ratio, signal, score) for Evolution system

-### 3. Brain (`src/brain/`)
+### 3. Brain (`src/brain/gemini_client.py`)

-**GeminiClient** (`gemini_client.py`) — AI decision engine powered by Google Gemini
+**GeminiClient** — AI decision engine powered by Google Gemini

- Constructs structured prompts from market data
- Parses JSON responses into `TradeDecision` objects (`action`, `confidence`, `rationale`)
@@ -115,20 +114,11 @@ High-frequency trading with individual stock analysis:

- Falls back to safe HOLD on any parse/API error
- Handles markdown-wrapped JSON, malformed responses, invalid actions
-
-**PromptOptimizer** (`prompt_optimizer.py`) — Token efficiency optimization
-
-- Reduces prompt size while preserving decision quality
-- Caches optimized prompts
-
-**ContextSelector** (`context_selector.py`) — Relevant context selection for prompts
-
-- Selects appropriate context layers for current market conditions
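The parse-or-safe-HOLD behaviour can be sketched as follows. The `TradeDecision` fields come from this doc; the exact fence-stripping and error-handling details are assumptions, not the real `gemini_client.py`:

```python
import json
from dataclasses import dataclass


@dataclass
class TradeDecision:
    action: str       # BUY | SELL | HOLD
    confidence: int
    rationale: str


SAFE_HOLD = TradeDecision("HOLD", 0, "fallback: unusable model response")


def parse_decision(raw: str) -> TradeDecision:
    """Strip a markdown ```json fence if present; any parse problem,
    missing key, or unknown action yields a safe HOLD."""
    text = raw.strip()
    if text.startswith("```"):
        text = text.strip("`")
        if text.startswith("json"):
            text = text[4:]
    try:
        data = json.loads(text)
        action = str(data["action"]).upper()
        if action not in {"BUY", "SELL", "HOLD"}:
            return SAFE_HOLD
        return TradeDecision(action, int(data["confidence"]), str(data.get("rationale", "")))
    except (ValueError, KeyError, TypeError):
        return SAFE_HOLD
```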
### 4. Risk Manager (`src/core/risk_manager.py`)

**RiskManager** — Safety circuit breaker and order validation

-> **READ-ONLY by policy** (see [`docs/agents.md`](./agents.md))
+⚠️ **READ-ONLY by policy** (see [`docs/agents.md`](./agents.md))

- **Circuit Breaker**: Halts all trading via `SystemExit` when daily P&L drops below -3.0%
  - Threshold may only be made stricter, never relaxed
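A sketch of the two risk checks in this section. The `validate` signature and exception names are invented for illustration; what the doc does specify is halt-via-`SystemExit`, the -3.0% daily-loss threshold, and the 30%-of-cash fat-finger limit:

```python
class CircuitBreakerTripped(SystemExit):
    """Halting is via SystemExit, per the doc; subclassing keeps that contract."""


MAX_LOSS_PCT = 3.0    # may only ever be made stricter
MAX_ORDER_PCT = 30.0  # fat-finger ceiling, always enforced


def validate(daily_pnl_pct: float, order_value: float, available_cash: float) -> None:
    """Raise on breach; silence means the order may proceed."""
    if daily_pnl_pct <= -MAX_LOSS_PCT:
        raise CircuitBreakerTripped(
            f"daily P&L {daily_pnl_pct}% breached -{MAX_LOSS_PCT}% — halting"
        )
    if order_value > available_cash * MAX_ORDER_PCT / 100.0:
        raise ValueError("fat-finger: order exceeds 30% of available cash")
```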
@@ -136,79 +126,7 @@ High-frequency trading with individual stock analysis:

- **Fat-Finger Protection**: Rejects orders exceeding 30% of available cash
  - Must always be enforced, cannot be disabled

-### 5. Strategy (`src/strategy/`)
**Pre-Market Planner** (`pre_market_planner.py`) — AI playbook generation
- Runs before market open (configurable `PRE_MARKET_MINUTES`, default 30)
- Generates scenario-based playbooks via single Gemini API call per market
- Handles timeout (`PLANNER_TIMEOUT_SECONDS`, default 60) with defensive playbook fallback
- Persists playbooks to database for audit trail
**Scenario Engine** (`scenario_engine.py`) — Local scenario matching
- Matches live market data against pre-computed playbook scenarios
- No AI calls during trading hours — pure Python matching logic
- Returns matched scenarios with confidence scores
- Configurable `MAX_SCENARIOS_PER_STOCK` (default 5)
- Periodic rescan at `RESCAN_INTERVAL_SECONDS` (default 300)
**Playbook Store** (`playbook_store.py`) — Playbook persistence
- SQLite-backed storage for daily playbooks
- Date and market-based retrieval
- Status tracking (generated, active, expired)
**Models** (`models.py`) — Pydantic data models
- Scenario, Playbook, MatchResult, and related type definitions
### 6. Context System (`src/context/`)
**Context Store** (`store.py`) — L1-L7 hierarchical memory
- 7-layer context system (see [docs/context-tree.md](./context-tree.md)):
- L1: Tick-level (real-time price)
- L2: Intraday (session summary)
- L3: Daily (end-of-day)
- L4: Weekly (trend analysis)
- L5: Monthly (strategy review)
- L6: Daily Review (scorecard)
- L7: Evolution (long-term learning)
- Key-value storage with timeframe tagging
- SQLite persistence in `contexts` table
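The store's key-value-with-layer-tag design can be sketched against the `contexts` table's columns. Function names here are illustrative, not the actual `store.py` API:

```python
import json
import sqlite3

LAYERS = ("L1", "L2", "L3", "L4", "L5", "L6", "L7")

conn = sqlite3.connect(":memory:")
conn.execute(
    """CREATE TABLE contexts (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        layer TEXT NOT NULL,
        timeframe TEXT,
        key TEXT NOT NULL,
        value TEXT NOT NULL,
        created_at TEXT NOT NULL,
        updated_at TEXT NOT NULL)"""
)


def save_context(layer: str, key: str, value: dict, ts: str = "2026-02-17T00:00:00") -> None:
    """Store one entry tagged with its layer; value is JSON text."""
    if layer not in LAYERS:
        raise ValueError(f"unknown layer {layer}")
    conn.execute(
        "INSERT INTO contexts (layer, key, value, created_at, updated_at) "
        "VALUES (?, ?, ?, ?, ?)",
        (layer, key, json.dumps(value), ts, ts),
    )


def load_layer(layer: str) -> list[dict]:
    rows = conn.execute("SELECT value FROM contexts WHERE layer = ?", (layer,)).fetchall()
    return [json.loads(v) for (v,) in rows]
```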
**Context Scheduler** (`scheduler.py`) — Periodic aggregation
- Scheduled summarization from lower to higher layers
- Configurable aggregation intervals
**Context Summarizer** (`summarizer.py`) — Layer summarization
- Aggregates lower-layer data into higher-layer summaries
### 7. Dashboard (`src/dashboard/`)
**FastAPI App** (`app.py`) — Read-only monitoring dashboard
- Runs as daemon thread when enabled (`--dashboard` CLI flag or `DASHBOARD_ENABLED=true`)
- Configurable host/port (`DASHBOARD_HOST`, `DASHBOARD_PORT`, default `127.0.0.1:8080`)
- Serves static HTML frontend
**8 API Endpoints:**
| Endpoint | Method | Description |
|----------|--------|-------------|
| `/` | GET | Static HTML dashboard |
| `/api/status` | GET | Daily trading status by market |
| `/api/playbook/{date}` | GET | Playbook for specific date and market |
| `/api/scorecard/{date}` | GET | Daily scorecard from L6_DAILY context |
| `/api/performance` | GET | Trading performance metrics (by market + combined) |
| `/api/context/{layer}` | GET | Query context by layer (L1-L7) |
| `/api/decisions` | GET | Decision log entries with outcomes |
| `/api/scenarios/active` | GET | Today's matched scenarios |
-### 8. Notifications (`src/notifications/telegram_client.py`)
+### 5. Notifications (`src/notifications/telegram_client.py`)

**TelegramClient** — Real-time event notifications via Telegram Bot API
@@ -216,13 +134,7 @@ High-frequency trading with individual stock analysis:
- Non-blocking: failures are logged but never crash trading
- Rate-limited: 1 message/second default to respect Telegram API limits
- Auto-disabled when credentials missing
- Gracefully handles API errors, network timeouts, invalid tokens
**TelegramCommandHandler** — Bidirectional command interface
- Long polling from Telegram API (configurable `TELEGRAM_POLLING_INTERVAL`)
- 9 interactive commands: `/help`, `/status`, `/positions`, `/report`, `/scenarios`, `/review`, `/dashboard`, `/stop`, `/resume`
- Authorization filtering by `TELEGRAM_CHAT_ID`
- Enable/disable via `TELEGRAM_COMMANDS_ENABLED` (default: true)
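The non-blocking guarantee is essentially a catch-log-continue wrapper around the send call. A sketch with an injected `send_fn` standing in for the real Telegram HTTP request (names are illustrative):

```python
import logging

logger = logging.getLogger("notifications")


def notify(send_fn, text: str) -> bool:
    """Fire a notification without letting any failure propagate into
    the trading loop; returns whether the send succeeded."""
    try:
        send_fn(text)
        return True
    except Exception as exc:  # deliberately broad: no notifier error may crash trading
        logger.warning("notification failed, trading continues: %s", exc)
        return False
```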
**Notification Types:**

- Trade execution (BUY/SELL with confidence)
@@ -230,12 +142,12 @@ High-frequency trading with individual stock analysis:
- Fat-finger protection triggers (order rejection)
- Market open/close events
- System startup/shutdown status
- Playbook generation results
- Stop-loss monitoring alerts
+**Setup:** See [src/notifications/README.md](../src/notifications/README.md) for bot creation and configuration.

-### 9. Evolution (`src/evolution/`)
+### 6. Evolution (`src/evolution/optimizer.py`)

-**StrategyOptimizer** (`optimizer.py`) — Self-improvement loop
+**StrategyOptimizer** — Self-improvement loop
- Analyzes high-confidence losing trades from SQLite
- Asks Gemini to generate new `BaseStrategy` subclasses - Asks Gemini to generate new `BaseStrategy` subclasses
@@ -243,122 +155,8 @@ High-frequency trading with individual stock analysis:
- Simulates PR creation for human review
- Only activates strategies that pass all tests
**DailyReview** (`daily_review.py`) — End-of-day review
- Generates comprehensive trade performance summary
- Stores results in L6_DAILY context layer
- Tracks win rate, P&L, confidence accuracy
**DailyScorecard** (`scorecard.py`) — Performance scoring
- Calculates daily metrics (trades, P&L, win rate, avg confidence)
- Enables trend tracking across days
**Stop-Loss Monitoring** — Real-time position protection
- Monitors positions against stop-loss levels from playbook scenarios
- Sends Telegram alerts when thresholds approached or breached
### 10. Decision Logger (`src/logging/decision_logger.py`)
**DecisionLogger** — Comprehensive audit trail
- Logs every trading decision with full context snapshot
- Captures input data, rationale, confidence, and outcomes
- Supports outcome tracking (P&L, accuracy) for post-analysis
- Stored in `decision_logs` table with indexed queries
- Review workflow support (reviewed flag, review notes)
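The audit-trail flow (log a decision, later attach its outcome) can be sketched against a subset of the `decision_logs` columns; helper names are illustrative, not the actual `decision_logger.py` API:

```python
import json
import sqlite3
import uuid

conn = sqlite3.connect(":memory:")
conn.execute(
    """CREATE TABLE decision_logs (
        decision_id TEXT PRIMARY KEY,
        timestamp TEXT NOT NULL,
        stock_code TEXT,
        action TEXT,
        confidence INTEGER,
        rationale TEXT,
        context_snapshot TEXT,
        outcome_pnl REAL,
        reviewed INTEGER DEFAULT 0)"""
)


def log_decision(stock_code, action, confidence, rationale, context, ts) -> str:
    """Write the decision with its full context snapshot; return its id."""
    decision_id = uuid.uuid4().hex
    conn.execute(
        "INSERT INTO decision_logs (decision_id, timestamp, stock_code, action, "
        "confidence, rationale, context_snapshot) VALUES (?, ?, ?, ?, ?, ?, ?)",
        (decision_id, ts, stock_code, action, confidence, rationale, json.dumps(context)),
    )
    return decision_id


def record_outcome(decision_id: str, pnl: float) -> None:
    """Attach the realized P&L for post-analysis."""
    conn.execute(
        "UPDATE decision_logs SET outcome_pnl = ? WHERE decision_id = ?",
        (pnl, decision_id),
    )
```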
### 11. Data Integration (`src/data/`)
**External Data Sources** (optional):
- `news_api.py` — News sentiment data
- `market_data.py` — Extended market data
- `economic_calendar.py` — Economic event calendar
### 12. Backup (`src/backup/`)
**Disaster Recovery** (see [docs/disaster_recovery.md](./disaster_recovery.md)):
- `scheduler.py` — Automated backup scheduling
- `exporter.py` — Data export to various formats
- `cloud_storage.py` — S3-compatible cloud backup
- `health_monitor.py` — Backup integrity verification
## Data Flow
### Playbook Mode (Daily — Primary v2 Flow)
```
┌─────────────────────────────────────────────────────────────┐
│ Pre-Market Phase (before market open) │
└─────────────────────────────────────────────────────────────┘
┌──────────────────────────────────┐
│ Pre-Market Planner │
│ - 1 Gemini API call per market │
│ - Generate scenario playbook │
│ - Store in playbooks table │
└──────────────────┬───────────────┘
┌─────────────────────────────────────────────────────────────┐
│ Trading Hours (market open → close) │
└─────────────────────────────────────────────────────────────┘
┌──────────────────────────────────┐
│ Market Schedule Check │
│ - Get open markets │
│ - Filter by enabled markets │
└──────────────────┬───────────────┘
┌──────────────────────────────────┐
│ Scenario Engine (local) │
│ - Match live data vs playbook │
│ - No AI calls needed │
│ - Return matched scenarios │
└──────────────────┬───────────────┘
┌──────────────────────────────────┐
│ Risk Manager: Validate Order │
│ - Check circuit breaker │
│ - Check fat-finger limit │
└──────────────────┬───────────────┘
┌──────────────────────────────────┐
│ Broker: Execute Order │
│ - Domestic: send_order() │
│ - Overseas: send_overseas_order()│
└──────────────────┬───────────────┘
┌──────────────────────────────────┐
│ Decision Logger + DB │
│ - Full audit trail │
│ - Context snapshot │
│ - Telegram notification │
└──────────────────┬───────────────┘
┌─────────────────────────────────────────────────────────────┐
│ Post-Market Phase │
└─────────────────────────────────────────────────────────────┘
┌──────────────────────────────────┐
│ Daily Review + Scorecard │
│ - Performance summary │
│ - Store in L6_DAILY context │
│ - Evolution learning │
└──────────────────────────────────┘
```
### Realtime Mode (with Smart Scanner)

```
@@ -372,7 +170,7 @@ High-frequency trading with individual stock analysis:
│ - Get open markets               │
│ - Filter by enabled markets      │
│ - Wait if all closed             │
└──────────────────┬───────────────┘
┌──────────────────────────────────┐
@@ -384,19 +182,25 @@ High-frequency trading with individual stock analysis:
│ + volatility-first scoring       │
│ - Fallback: dynamic universe     │
│ - Return top 3 qualified stocks  │
└──────────────────┬───────────────┘
┌──────────────────────────────────┐
│ For Each Qualified Candidate     │
└──────────────────┬───────────────┘
┌──────────────────────────────────┐
│ Broker: Fetch Market Data        │
│ - Domestic: orderbook + balance  │
│ - Overseas: price + balance      │
└──────────────────┬───────────────┘
┌──────────────────────────────────┐
│ Calculate P&L │
│ pnl_pct = (eval - cost) / cost │
└──────────────────┬───────────────┘
┌──────────────────────────────────┐
@@ -405,36 +209,47 @@ High-frequency trading with individual stock analysis:
│ - Call Gemini API                │
│ - Parse JSON response            │
│ - Return TradeDecision           │
└──────────────────┬───────────────┘
┌──────────────────────────────────┐
│ Risk Manager: Validate Order     │
│ - Check circuit breaker          │
│ - Check fat-finger limit         │
│ - Raise if validation fails      │
└──────────────────┬───────────────┘
┌──────────────────────────────────┐
│ Broker: Execute Order            │
│ - Domestic: send_order()         │
│ - Overseas: send_overseas_order()│
└──────────────────┬───────────────┘
┌──────────────────────────────────┐
│ Notifications: Send Alert        │
│ - Trade execution notification   │
│ - Non-blocking (errors logged)   │
│ - Rate-limited to 1/sec          │
└──────────────────┬───────────────┘
┌──────────────────────────────────┐
│ Database: Log Trade              │
│ - SQLite (data/trades.db)        │
│ - Track: action, confidence,     │
│   rationale, market, exchange    │
│ - NEW: selection_context (JSON)  │
│   - RSI, volume_ratio, signal    │
│ - For Evolution optimization     │
└──────────────────────────────────┘
```
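The P&L step in the flow above computes `pnl_pct = (eval - cost) / cost`. As a quick check of the arithmetic:

```python
def pnl_pct(eval_amount: float, cost: float) -> float:
    """Fractional P&L of a position: (evaluation amount - cost) / cost."""
    return (eval_amount - cost) / cost
```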
## Database Schema

-**SQLite** (`src/db.py`) — Database: `data/trades.db`
+**SQLite** (`src/db.py`)
### trades
```sql
CREATE TABLE trades (
  id INTEGER PRIMARY KEY AUTOINCREMENT,
@@ -446,73 +261,25 @@ CREATE TABLE trades (
  quantity INTEGER,
  price REAL,
  pnl REAL DEFAULT 0.0,
-  market TEXT DEFAULT 'KR',
-  exchange_code TEXT DEFAULT 'KRX',
-  selection_context TEXT,            -- JSON: {rsi, volume_ratio, signal, score}
-  decision_id TEXT                   -- Links to decision_logs
+  market TEXT DEFAULT 'KR',          -- KR | US_NASDAQ | JP | etc.
+  exchange_code TEXT DEFAULT 'KRX',  -- KRX | NASD | NYSE | etc.
+  selection_context TEXT             -- JSON: {rsi, volume_ratio, signal, score}
);
```
-### contexts
-
-```sql
-CREATE TABLE contexts (
-  id INTEGER PRIMARY KEY AUTOINCREMENT,
-  layer TEXT NOT NULL,     -- L1 through L7
-  timeframe TEXT,
-  key TEXT NOT NULL,
-  value TEXT NOT NULL,     -- JSON data
-  created_at TEXT NOT NULL,
-  updated_at TEXT NOT NULL
-);
--- Indices: idx_contexts_layer, idx_contexts_timeframe, idx_contexts_updated
-```
+**Selection Context** (new in v0.9.0): Stores scanner selection criteria as JSON:
+
+```json
+{
+  "rsi": 28.5,
+  "volume_ratio": 2.7,
+  "signal": "oversold",
+  "score": 85.2
+}
+```
+
+Enables the Evolution system to analyze correlation between selection criteria and trade outcomes.

### decision_logs
```sql
CREATE TABLE decision_logs (
decision_id TEXT PRIMARY KEY,
timestamp TEXT NOT NULL,
stock_code TEXT,
market TEXT,
exchange_code TEXT,
action TEXT,
confidence INTEGER,
rationale TEXT,
context_snapshot TEXT, -- JSON: full context at decision time
input_data TEXT, -- JSON: market data used
outcome_pnl REAL,
outcome_accuracy REAL,
reviewed INTEGER DEFAULT 0,
review_notes TEXT
);
-- Indices: idx_decision_logs_timestamp, idx_decision_logs_reviewed, idx_decision_logs_confidence
```
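The backward-compatible auto-migration mentioned for the `trades` table (adding `market`, `exchange_code`, and `selection_context` when absent) can be sketched with a `PRAGMA table_info` probe. This is an illustrative reconstruction, not the project's actual `db.py`:

```python
import sqlite3


def migrate_trades(conn: sqlite3.Connection) -> list[str]:
    """Add any missing newer columns to an old `trades` table; return
    the names of columns actually added (empty on a second run)."""
    existing = {row[1] for row in conn.execute("PRAGMA table_info(trades)")}
    wanted = {
        "market": "TEXT DEFAULT 'KR'",
        "exchange_code": "TEXT DEFAULT 'KRX'",
        "selection_context": "TEXT",
    }
    added = []
    for col, decl in wanted.items():
        if col not in existing:
            conn.execute(f"ALTER TABLE trades ADD COLUMN {col} {decl}")
            added.append(col)
    return added
```

Running it twice is safe: the second call finds every column present and does nothing, which is what "if missing" backward compatibility requires.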
+Auto-migration: Adds `market`, `exchange_code`, and `selection_context` columns if missing, for backward compatibility.

### playbooks
```sql
CREATE TABLE playbooks (
id INTEGER PRIMARY KEY AUTOINCREMENT,
date TEXT NOT NULL,
market TEXT NOT NULL,
status TEXT DEFAULT 'generated',
playbook_json TEXT NOT NULL, -- Full playbook with scenarios
generated_at TEXT NOT NULL,
token_count INTEGER,
scenario_count INTEGER,
match_count INTEGER DEFAULT 0
);
-- Indices: idx_playbooks_date, idx_playbooks_market
```
### context_metadata
```sql
CREATE TABLE context_metadata (
layer TEXT PRIMARY KEY,
description TEXT,
retention_days INTEGER,
aggregation_source TEXT
);
```
## Configuration
@@ -527,70 +294,31 @@ KIS_APP_SECRET=your_app_secret
KIS_ACCOUNT_NO=XXXXXXXX-XX
GEMINI_API_KEY=your_gemini_key
-# Optional — Trading Mode
+# Optional
MODE=paper                          # paper | live
+DB_PATH=data/trades.db
+CONFIDENCE_THRESHOLD=80
+MAX_LOSS_PCT=3.0
+MAX_ORDER_PCT=30.0
+ENABLED_MARKETS=KR,US_NASDAQ        # Comma-separated market codes
+
+# Trading Mode (API efficiency)
TRADE_MODE=daily                    # daily | realtime
DAILY_SESSIONS=4                    # Sessions per day (daily mode only)
SESSION_INTERVAL_HOURS=6            # Hours between sessions (daily mode only)

-# Optional — Database
-DB_PATH=data/trades.db
-
-# Optional — Risk
-CONFIDENCE_THRESHOLD=80
-MAX_LOSS_PCT=3.0
-MAX_ORDER_PCT=30.0
-
-# Optional — Markets
-ENABLED_MARKETS=KR,US               # Comma-separated market codes
-RATE_LIMIT_RPS=2.0                  # KIS API requests per second
-
-# Optional — Pre-Market Planner (v2)
-PRE_MARKET_MINUTES=30               # Minutes before market open to generate playbook
-MAX_SCENARIOS_PER_STOCK=5           # Max scenarios per stock in playbook
-PLANNER_TIMEOUT_SECONDS=60          # Timeout for playbook generation
-DEFENSIVE_PLAYBOOK_ON_FAILURE=true  # Fallback on AI failure
-RESCAN_INTERVAL_SECONDS=300         # Scenario rescan interval during trading
-
-# Optional — Smart Scanner (realtime mode only)
-RSI_OVERSOLD_THRESHOLD=30           # 0-50, oversold threshold
-RSI_MOMENTUM_THRESHOLD=70           # 50-100, momentum threshold
-VOL_MULTIPLIER=2.0                  # Minimum volume ratio (2.0 = 200%)
-SCANNER_TOP_N=3                     # Max qualified candidates per scan
-
-# Optional — Dashboard
-DASHBOARD_ENABLED=false             # Enable FastAPI dashboard
-DASHBOARD_HOST=127.0.0.1            # Dashboard bind address
-DASHBOARD_PORT=8080                 # Dashboard port (1-65535)
-
-# Optional — Telegram
+# Telegram Notifications (optional)
TELEGRAM_BOT_TOKEN=1234567890:ABCdefGHIjklMNOpqrsTUVwxyz
TELEGRAM_CHAT_ID=123456789
TELEGRAM_ENABLED=true
-TELEGRAM_COMMANDS_ENABLED=true      # Enable bidirectional commands
-TELEGRAM_POLLING_INTERVAL=1.0       # Command polling interval (seconds)
-
-# Optional — Backup
-BACKUP_ENABLED=false
-BACKUP_DIR=data/backups
-S3_ENDPOINT_URL=...
-S3_ACCESS_KEY=...
-S3_SECRET_KEY=...
-S3_BUCKET_NAME=...
-S3_REGION=...
-
-# Optional — External Data
-NEWS_API_KEY=...
-NEWS_API_PROVIDER=...
-MARKET_DATA_API_KEY=...
+
+# Smart Scanner (optional, realtime mode only)
+SCANNER_TOP_N=3                     # Max qualified candidates per scan

# Position Sizing (optional)
POSITION_SIZING_ENABLED=true
POSITION_BASE_ALLOCATION_PCT=5.0
POSITION_MIN_ALLOCATION_PCT=1.0
POSITION_MAX_ALLOCATION_PCT=10.0
POSITION_VOLATILITY_TARGET_SCORE=50.0

# Legacy/compat scanner thresholds (kept for backward compatibility)
RSI_OVERSOLD_THRESHOLD=30 RSI_OVERSOLD_THRESHOLD=30
RSI_MOMENTUM_THRESHOLD=70 RSI_MOMENTUM_THRESHOLD=70
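For illustration, the `POSITION_*` knobs above can be read as inverse-volatility position sizing of roughly this shape. This is a hedged sketch: the scaling rule (`base * target / score`, clamped to min/max) is an assumption for illustration, not the project's actual formula:

```python
def allocation_pct(
    volatility_score: float,
    base: float = 5.0,     # POSITION_BASE_ALLOCATION_PCT
    lo: float = 1.0,       # POSITION_MIN_ALLOCATION_PCT
    hi: float = 10.0,      # POSITION_MAX_ALLOCATION_PCT
    target: float = 50.0,  # POSITION_VOLATILITY_TARGET_SCORE
) -> float:
    """Shrink the base allocation as volatility rises above the target (assumed semantics)."""
    scaled = base * (target / max(volatility_score, 1e-9))
    return min(max(scaled, lo), hi)

print(allocation_pct(50.0))   # → 5.0 (at target volatility: base allocation)
print(allocation_pct(100.0))  # → 2.5 (twice the target volatility: half the base)
```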
@@ -635,9 +363,4 @@ Tests use in-memory SQLite (`DB_PATH=":memory:"`) and dummy credentials via `tes
 - Invalid token → log error, trading unaffected
 - Rate limit exceeded → queued via rate limiter
-### Playbook Generation Failure
-- Timeout → fall back to defensive playbook (`DEFENSIVE_PLAYBOOK_ON_FAILURE`)
-- API error → use previous day's playbook if available
-- No playbook → skip pre-market phase, fall back to direct AI calls
-**Guarantee**: Notification and dashboard failures never interrupt trading operations.
+**Guarantee**: Notification failures never interrupt trading operations.
@@ -119,7 +119,7 @@ No decorator needed for async tests.
 # Install all dependencies (production + dev)
 pip install -e ".[dev]"
-# Run full test suite with coverage (551 tests across 25 files)
+# Run full test suite with coverage
 pytest -v --cov=src --cov-report=term-missing
 # Run a single test file
@@ -137,61 +137,11 @@ mypy src/ --strict
 # Run the trading agent
 python -m src.main --mode=paper
-# Run with dashboard enabled
-python -m src.main --mode=paper --dashboard
 # Docker
 docker compose up -d ouroboros # Run agent
 docker compose --profile test up test # Run tests in container
 ```
-## Dashboard
-The FastAPI dashboard provides read-only monitoring of the trading system.
-### Starting the Dashboard
-```bash
-# Via CLI flag
-python -m src.main --mode=paper --dashboard
-# Via environment variable
-DASHBOARD_ENABLED=true python -m src.main --mode=paper
-```
-Dashboard runs as a daemon thread on `DASHBOARD_HOST:DASHBOARD_PORT` (default: `127.0.0.1:8080`).
-### API Endpoints
-| Endpoint | Description |
-|----------|-------------|
-| `GET /` | HTML dashboard UI |
-| `GET /api/status` | Daily trading status by market |
-| `GET /api/playbook/{date}` | Playbook for specific date (query: `market`) |
-| `GET /api/scorecard/{date}` | Daily scorecard from L6_DAILY context |
-| `GET /api/performance` | Performance metrics by market and combined |
-| `GET /api/context/{layer}` | Context data by layer L1-L7 (query: `timeframe`) |
-| `GET /api/decisions` | Decision log entries (query: `limit`, `market`) |
-| `GET /api/scenarios/active` | Today's matched scenarios |
-## Telegram Commands
-When `TELEGRAM_COMMANDS_ENABLED=true` (default), the bot accepts these interactive commands:
-| Command | Description |
-|---------|-------------|
-| `/help` | List available commands |
-| `/status` | Show trading status (mode, markets, P&L) |
-| `/positions` | Display account summary (balance, cash, P&L) |
-| `/report` | Daily summary metrics (trades, P&L, win rate) |
-| `/scenarios` | Show today's playbook scenarios |
-| `/review` | Display recent scorecards (L6_DAILY layer) |
-| `/dashboard` | Show dashboard URL if enabled |
-| `/stop` | Pause trading |
-| `/resume` | Resume trading |
-Commands are only processed from the authorized `TELEGRAM_CHAT_ID`.
 ## Environment Setup
 ```bash
@@ -1,29 +0,0 @@
# Issue: Persistent zero trades in realtime mode
## Summary
While running in `realtime` mode, execution never reaches the order stage; scans repeatedly end at the scanner step with zero candidates.
## Observed
- Logs repeatedly print `Smart Scanner: No candidates ... — no trades`
- Many `Overseas ranking endpoint unavailable (404)` errors in overseas markets
- Fallback symbol scans also finish with `0 candidates`
- No `BUY/SELL` rows in the recent window of `data/trade_logs.db`
## Impact
- Regardless of strategy quality, the order path never executes, so no actual trading happens
- The failure cause is hard to decompose immediately from logs alone
## Root-Cause Hypothesis
- Most rows are dropped at the scanner filter (price/volatility) stage
- When the overseas ranking API is unavailable, the input universe is empty, so candidate generation fails
- Existing logs only show the final result (0 candidates), making per-cause breakdown difficult
## Acceptance Criteria
- Every scan must leave structured drop reasons in the logs
- Domestic and overseas (ranking/fallback) paths must provide the same diagnostic metrics
- Operators must be able to tell from logs alone why there were `0 candidates`
## Scope
- This issue is limited to **observability improvements**
- Candidate-generation strategy changes (e.g., forcing a default universe) are split into a separate issue
@@ -1,32 +0,0 @@
# PR: Add Smart Scanner diagnostic logging (decompose causes of 0 candidates)
## Linked Issue
- `docs/issues/ISSUE-2026-02-17-no-trades-zero-candidates.md`
## What Changed
- Added scan diagnostic counters to `src/analysis/smart_scanner.py`
- Added domestic scan diagnostic logging
- Added overseas ranking scan diagnostic logging
- Added overseas fallback symbol scan diagnostic logging
## Diagnostics Keys
- `total_rows`
- `missing_code`
- `invalid_price`
- `low_volatility`
- `connection_error` (only on applicable paths)
- `unexpected_error` (only on applicable paths)
- `qualified`
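The counter pattern these keys describe can be sketched as follows (a minimal illustration, not the actual `smart_scanner.py` code; the `connection_error`/`unexpected_error` keys are omitted for brevity):

```python
import logging

logger = logging.getLogger("smart_scanner")

def scan(rows: list[dict]) -> list[dict]:
    """Filter rows, counting every drop reason so '0 candidates' is explainable."""
    diagnostics = {
        "total_rows": len(rows),
        "missing_code": 0,
        "invalid_price": 0,
        "low_volatility": 0,
        "qualified": 0,
    }
    candidates = []
    for row in rows:
        if not row.get("code"):
            diagnostics["missing_code"] += 1
            continue
        if row.get("price", 0.0) <= 0:
            diagnostics["invalid_price"] += 1
            continue
        if row.get("volatility_pct", 0.0) < 0.8:
            diagnostics["low_volatility"] += 1
            continue
        diagnostics["qualified"] += 1
        candidates.append(row)
    logger.info("Domestic scan diagnostics: %s", diagnostics)
    return candidates
```

With one such dict logged per scan, an operator can read a single line and see which filter consumed the universe.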
## Expected Log Examples
- `Domestic scan diagnostics: {...}`
- `Overseas ranking scan diagnostics for US_NASDAQ: {...}`
- `Overseas fallback scan diagnostics for US_NYSE: {...}`
## Out of Scope
- Forcibly injecting a default symbol universe when overseas rankings return 404
- Changing the domestic path's fallback policy
## Validation
- `.venv/bin/python -m py_compile src/analysis/smart_scanner.py`
@@ -91,27 +91,6 @@
 ## 2026-02-16
-### Documentation v2 sync (bring all docs up to date)
-**Background:**
-- After the v2 feature work landed, the docs had drifted far from the actual code state
-- The docs recorded 54 tests / 4 files, while the actual count was 551 tests / 25 files
-- Core v2 features (Playbook, Scenario Engine, Dashboard, Telegram Commands, Daily Review, Context System, Backup) were undocumented
-**Requirements:**
-1. `docs/testing.md` — reflect 551 tests / 25 files, describe every test file
-2. `docs/architecture.md` — add v2 components (Strategy, Context, Dashboard, Decision Logger, etc.), Playbook Mode data flow, the 5 DB schema tables, v2 environment variables
-3. `docs/commands.md` — Dashboard run commands, reference for the 9 Telegram commands
-4. `CLAUDE.md` — expand the Project Structure tree, update test counts, `--dashboard` flag
-5. `docs/skills.md` — standardize the DB filename as `trades.db`, add Dashboard commands
-6. Keep previously valid troubleshooting entries, code examples, etc.
-**Result:**
-- 6 documentation files updated
-- An earlier attempt (2 commits) deleted too much existing content and was discarded; redone from main
-**Issue/PR:** #131, PR #134
 ### Overseas scanner improvements: ranking integration + volatility-first selection
 **Background:**
@@ -34,12 +34,6 @@ python -m src.main --mode=paper
 ```
 Runs the agent in paper-trading mode (no real orders).
-### Start Trading Agent with Dashboard
-```bash
-python -m src.main --mode=paper --dashboard
-```
-Runs the agent with FastAPI dashboard on `127.0.0.1:8080` (configurable via `DASHBOARD_HOST`/`DASHBOARD_PORT`).
 ### Start Trading Agent (Production)
 ```bash
 docker compose up -d ouroboros
@@ -65,7 +59,7 @@ Analyze the last 30 days of trade logs and generate performance metrics.
 python -m src.evolution.optimizer --evolve
 ```
 Triggers the evolution engine to:
-1. Analyze `trades.db` for failing patterns
+1. Analyze `trade_logs.db` for failing patterns
 2. Ask Gemini to generate a new strategy
 3. Run tests on the new strategy
 4. Create a PR if tests pass
@@ -97,12 +91,12 @@ curl http://localhost:8080/health
 ### View Trade Logs
 ```bash
-sqlite3 data/trades.db "SELECT * FROM trades ORDER BY timestamp DESC LIMIT 20;"
+sqlite3 data/trade_logs.db "SELECT * FROM trades ORDER BY timestamp DESC LIMIT 20;"
 ```
 ### Export Trade History
 ```bash
-sqlite3 -header -csv data/trades.db "SELECT * FROM trades;" > trades_export.csv
+sqlite3 -header -csv data/trade_logs.db "SELECT * FROM trades;" > trades_export.csv
 ```
 ## Safety Checklist (Pre-Deploy)
@@ -2,29 +2,51 @@
 ## Test Structure
-**551 tests** across **25 files**. `asyncio_mode = "auto"` in pyproject.toml — async tests need no special decorator.
+**54 tests** across four files. `asyncio_mode = "auto"` in pyproject.toml — async tests need no special decorator.
 The `settings` fixture in `conftest.py` provides safe defaults with test credentials and in-memory DB.
 ### Test Files
-#### Core Components
-##### `tests/test_risk.py` (14 tests)
-- Circuit breaker boundaries and exact threshold triggers
-- Fat-finger edge cases and percentage validation
+#### `tests/test_risk.py` (11 tests)
+- Circuit breaker boundaries
+- Fat-finger edge cases
 - P&L calculation edge cases
 - Order validation logic
-##### `tests/test_broker.py` (11 tests)
+**Example:**
+```python
+def test_circuit_breaker_exact_threshold(risk_manager):
+    """Circuit breaker should trip at exactly -3.0%."""
+    with pytest.raises(CircuitBreakerTripped):
+        risk_manager.validate_order(
+            current_pnl_pct=-3.0,
+            order_amount=1000,
+            total_cash=10000
+        )
+```
+#### `tests/test_broker.py` (6 tests)
 - OAuth token lifecycle
 - Rate limiting enforcement
 - Hash key generation
 - Network error handling
 - SSL context configuration
-##### `tests/test_brain.py` (24 tests)
-- Valid JSON parsing and markdown-wrapped JSON handling
+**Example:**
+```python
+async def test_rate_limiter(broker):
+    """Rate limiter should delay requests to stay under 10 RPS."""
+    start = time.monotonic()
+    for _ in range(15):  # 15 requests
+        await broker._rate_limiter.acquire()
+    elapsed = time.monotonic() - start
+    assert elapsed >= 1.0  # Should take at least 1 second
+```
+#### `tests/test_brain.py` (18 tests)
+- Valid JSON parsing
+- Markdown-wrapped JSON handling
 - Malformed JSON fallback
 - Missing fields handling
 - Invalid action validation
@@ -32,143 +54,33 @@ The `settings` fixture in `conftest.py` provides safe defaults with test credent
 - Empty response handling
 - Prompt construction for different markets
-##### `tests/test_market_schedule.py` (24 tests)
+**Example:**
+```python
+async def test_confidence_below_threshold_forces_hold(brain):
+    """Decisions below confidence threshold should force HOLD."""
+    decision = brain.parse_response('{"action":"BUY","confidence":70,"rationale":"test"}')
+    assert decision.action == "HOLD"
+    assert decision.confidence == 70
+```
+#### `tests/test_market_schedule.py` (19 tests)
 - Market open/close logic
 - Timezone handling (UTC, Asia/Seoul, America/New_York, etc.)
 - DST (Daylight Saving Time) transitions
-- Weekend handling and lunch break logic
+- Weekend handling
+- Lunch break logic
 - Multiple market filtering
 - Next market open calculation
-##### `tests/test_db.py` (3 tests)
-- Database initialization and table creation
-- Trade logging with all fields (market, exchange_code, decision_id)
-- Query and retrieval operations
-##### `tests/test_main.py` (37 tests)
-- Trading loop orchestration
-- Market iteration and stock processing
-- Dashboard integration (`--dashboard` flag)
-- Telegram command handler wiring
-- Error handling and graceful shutdown
-#### Strategy & Playbook (v2)
-##### `tests/test_pre_market_planner.py` (37 tests)
-- Pre-market playbook generation
-- Gemini API integration for scenario creation
-- Timeout handling and defensive playbook fallback
-- Multi-market playbook generation
-##### `tests/test_scenario_engine.py` (44 tests)
-- Scenario matching against live market data
-- Confidence scoring and threshold filtering
-- Multiple scenario type handling
-- Edge cases (no match, partial match, expired scenarios)
-##### `tests/test_playbook_store.py` (23 tests)
-- Playbook persistence to SQLite
-- Date-based retrieval and market filtering
-- Playbook status management (generated, active, expired)
-- JSON serialization/deserialization
-##### `tests/test_strategy_models.py` (33 tests)
-- Pydantic model validation for scenarios, playbooks, decisions
-- Field constraints and default values
-- Serialization round-trips
-#### Analysis & Scanning
-##### `tests/test_volatility.py` (24 tests)
-- ATR and RSI calculation accuracy
-- Volume surge ratio computation
-- Momentum scoring
-- Breakout/breakdown pattern detection
-- Market scanner watchlist management
-##### `tests/test_smart_scanner.py` (13 tests)
-- Python-first filtering pipeline
-- RSI and volume ratio filter logic
-- Candidate scoring and ranking
-- Fallback to static watchlist
-#### Context & Memory
-##### `tests/test_context.py` (18 tests)
-- L1-L7 layer storage and retrieval
-- Context key-value CRUD operations
-- Timeframe-based queries
-- Layer metadata management
-##### `tests/test_context_scheduler.py` (5 tests)
-- Periodic context aggregation scheduling
-- Layer summarization triggers
-#### Evolution & Review
-##### `tests/test_evolution.py` (24 tests)
-- Strategy optimization loop
-- High-confidence losing trade analysis
-- Generated strategy validation
-##### `tests/test_daily_review.py` (10 tests)
-- End-of-day review generation
-- Trade performance summarization
-- Context layer (L6_DAILY) integration
-##### `tests/test_scorecard.py` (3 tests)
-- Daily scorecard metrics calculation
-- Win rate, P&L, confidence tracking
-#### Notifications & Commands
-##### `tests/test_telegram.py` (25 tests)
-- Message sending and formatting
-- Rate limiting (leaky bucket)
-- Error handling (network timeout, invalid token)
-- Auto-disable on missing credentials
-- Notification types (trade, circuit breaker, fat-finger, market events)
-##### `tests/test_telegram_commands.py` (31 tests)
-- 9 command handlers (/help, /status, /positions, /report, /scenarios, /review, /dashboard, /stop, /resume)
-- Long polling and command dispatch
-- Authorization filtering by chat_id
-- Command response formatting
-#### Dashboard
-##### `tests/test_dashboard.py` (14 tests)
-- FastAPI endpoint responses (8 API routes)
-- Status, playbook, scorecard, performance, context, decisions, scenarios
-- Query parameter handling (market, date, limit)
-#### Performance & Quality
-##### `tests/test_token_efficiency.py` (34 tests)
-- Gemini token usage optimization
-- Prompt size reduction verification
-- Cache effectiveness
-##### `tests/test_latency_control.py` (30 tests)
-- API call latency measurement
-- Rate limiter timing accuracy
-- Async operation overhead
-##### `tests/test_decision_logger.py` (9 tests)
-- Decision audit trail completeness
-- Context snapshot capture
-- Outcome tracking (P&L, accuracy)
-##### `tests/test_data_integration.py` (38 tests)
-- External data source integration
-- News API, market data, economic calendar
-- Error handling for API failures
-##### `tests/test_backup.py` (23 tests)
-- Backup scheduler and execution
-- Cloud storage (S3) upload
-- Health monitoring
-- Data export functionality
+**Example:**
+```python
+def test_is_market_open_during_trading_hours():
+    """Market should be open during regular trading hours."""
+    # KRX: 9:00-15:30 KST, no lunch break
+    market = MARKETS["KR"]
+    trading_time = datetime(2026, 2, 3, 10, 0, tzinfo=ZoneInfo("Asia/Seoul"))  # Monday 10:00
+    assert is_market_open(market, trading_time) is True
+```
 ## Coverage Requirements
@@ -179,6 +91,20 @@ Check coverage:
 pytest -v --cov=src --cov-report=term-missing
 ```
+Expected output:
+```
+Name                          Stmts   Miss  Cover   Missing
+-----------------------------------------------------------
+src/brain/gemini_client.py       85      5    94%   165-169
+src/broker/kis_api.py           120     12    90%   ...
+src/core/risk_manager.py         35      2    94%   ...
+src/db.py                        25      1    96%   ...
+src/main.py                     150     80    47%   (excluded from CI)
+src/markets/schedule.py          95      3    97%   ...
+-----------------------------------------------------------
+TOTAL                           510    103    80%
+```
 **Note:** `main.py` has lower coverage as it contains the main loop which is tested via integration/manual testing.
 ## Test Configuration
@@ -1,54 +0,0 @@
#!/usr/bin/env bash
# Morning summary for overnight run logs.
set -euo pipefail
LOG_DIR="${LOG_DIR:-data/overnight}"
if [ ! -d "$LOG_DIR" ]; then
echo "Log directory not found: $LOG_DIR"
exit 1
fi
latest_run="$(ls -1t "$LOG_DIR"/run_*.log 2>/dev/null | head -n 1 || true)"
latest_watchdog="$(ls -1t "$LOG_DIR"/watchdog_*.log 2>/dev/null | head -n 1 || true)"
if [ -z "$latest_run" ]; then
echo "No run log found: $LOG_DIR/run_*.log"
exit 1
fi
echo "Overnight report"
echo "- run log: $latest_run"
if [ -n "$latest_watchdog" ]; then
echo "- watchdog log: $latest_watchdog"
fi
start_line="$(head -n 1 "$latest_run" || true)"
end_line="$(tail -n 1 "$latest_run" || true)"
info_count="$(rg -c '"level": "INFO"' "$latest_run" || true)"
warn_count="$(rg -c '"level": "WARNING"' "$latest_run" || true)"
error_count="$(rg -c '"level": "ERROR"' "$latest_run" || true)"
critical_count="$(rg -c '"level": "CRITICAL"' "$latest_run" || true)"
traceback_count="$(rg -c 'Traceback' "$latest_run" || true)"
echo "- start: ${start_line:-N/A}"
echo "- end: ${end_line:-N/A}"
echo "- INFO: ${info_count:-0}"
echo "- WARNING: ${warn_count:-0}"
echo "- ERROR: ${error_count:-0}"
echo "- CRITICAL: ${critical_count:-0}"
echo "- Traceback: ${traceback_count:-0}"
if [ -n "$latest_watchdog" ]; then
watchdog_errors="$(rg -c '\[ERROR\]' "$latest_watchdog" || true)"
echo "- watchdog ERROR: ${watchdog_errors:-0}"
echo ""
echo "Recent watchdog log:"
tail -n 5 "$latest_watchdog" || true
fi
echo ""
echo "Recent app log:"
tail -n 20 "$latest_run" || true
@@ -1,87 +0,0 @@
#!/usr/bin/env bash
# Start The Ouroboros overnight with logs and watchdog.
set -euo pipefail
LOG_DIR="${LOG_DIR:-data/overnight}"
CHECK_INTERVAL="${CHECK_INTERVAL:-30}"
TMUX_AUTO="${TMUX_AUTO:-true}"
TMUX_ATTACH="${TMUX_ATTACH:-true}"
TMUX_SESSION_PREFIX="${TMUX_SESSION_PREFIX:-ouroboros_overnight}"
if [ -z "${APP_CMD:-}" ]; then
if [ -x ".venv/bin/python" ]; then
PYTHON_BIN=".venv/bin/python"
elif command -v python3 >/dev/null 2>&1; then
PYTHON_BIN="python3"
elif command -v python >/dev/null 2>&1; then
PYTHON_BIN="python"
else
echo "Cannot find .venv/bin/python or a python3/python executable."
exit 1
fi
dashboard_port="${DASHBOARD_PORT:-8080}"
APP_CMD="DASHBOARD_PORT=$dashboard_port $PYTHON_BIN -m src.main --mode=paper --dashboard"
fi
mkdir -p "$LOG_DIR"
timestamp="$(date +"%Y%m%d_%H%M%S")"
RUN_LOG="$LOG_DIR/run_${timestamp}.log"
WATCHDOG_LOG="$LOG_DIR/watchdog_${timestamp}.log"
PID_FILE="$LOG_DIR/app.pid"
WATCHDOG_PID_FILE="$LOG_DIR/watchdog.pid"
if [ -f "$PID_FILE" ]; then
old_pid="$(cat "$PID_FILE" || true)"
if [ -n "$old_pid" ] && kill -0 "$old_pid" 2>/dev/null; then
echo "App is already running. pid=$old_pid"
exit 1
fi
fi
echo "[$(date -u +"%Y-%m-%dT%H:%M:%SZ")] starting: $APP_CMD" | tee -a "$RUN_LOG"
nohup bash -lc "$APP_CMD" >>"$RUN_LOG" 2>&1 &
app_pid=$!
echo "$app_pid" > "$PID_FILE"
echo "[$(date -u +"%Y-%m-%dT%H:%M:%SZ")] app pid=$app_pid" | tee -a "$RUN_LOG"
nohup env PID_FILE="$PID_FILE" LOG_FILE="$WATCHDOG_LOG" CHECK_INTERVAL="$CHECK_INTERVAL" \
bash scripts/watchdog.sh >/dev/null 2>&1 &
watchdog_pid=$!
echo "$watchdog_pid" > "$WATCHDOG_PID_FILE"
cat <<EOF
Startup complete
- app pid: $app_pid
- watchdog pid: $watchdog_pid
- app log: $RUN_LOG
- watchdog log: $WATCHDOG_LOG
Live tail:
tail -f "$RUN_LOG"
tail -f "$WATCHDOG_LOG"
EOF
if [ "$TMUX_AUTO" = "true" ]; then
if ! command -v tmux >/dev/null 2>&1; then
echo "tmux not found; skipping automatic session creation."
exit 0
fi
session_name="${TMUX_SESSION_PREFIX}_${timestamp}"
window_name="overnight"
tmux new-session -d -s "$session_name" -n "$window_name" "tail -f '$RUN_LOG'"
tmux split-window -t "${session_name}:${window_name}" -v "tail -f '$WATCHDOG_LOG'"
tmux select-layout -t "${session_name}:${window_name}" even-vertical
echo "tmux session created: $session_name"
echo "Attach manually: tmux attach -t $session_name"
if [ -z "${TMUX:-}" ] && [ "$TMUX_ATTACH" = "true" ]; then
tmux attach -t "$session_name"
fi
fi
@@ -1,76 +0,0 @@
#!/usr/bin/env bash
# Stop The Ouroboros overnight app/watchdog/tmux session.
set -euo pipefail
LOG_DIR="${LOG_DIR:-data/overnight}"
PID_FILE="$LOG_DIR/app.pid"
WATCHDOG_PID_FILE="$LOG_DIR/watchdog.pid"
TMUX_SESSION_PREFIX="${TMUX_SESSION_PREFIX:-ouroboros_overnight}"
KILL_TIMEOUT="${KILL_TIMEOUT:-5}"
stop_pid() {
local name="$1"
local pid="$2"
if [ -z "$pid" ]; then
echo "$name PID is empty."
return 1
fi
if ! kill -0 "$pid" 2>/dev/null; then
echo "$name process already stopped (pid=$pid)"
return 0
fi
kill "$pid" 2>/dev/null || true
for _ in $(seq 1 "$KILL_TIMEOUT"); do
if ! kill -0 "$pid" 2>/dev/null; then
echo "$name stopped (pid=$pid)"
return 0
fi
sleep 1
done
kill -9 "$pid" 2>/dev/null || true
if ! kill -0 "$pid" 2>/dev/null; then
echo "$name force-killed (pid=$pid)"
return 0
fi
echo "Failed to stop $name (pid=$pid)"
return 1
}
status=0
if [ -f "$WATCHDOG_PID_FILE" ]; then
watchdog_pid="$(cat "$WATCHDOG_PID_FILE" || true)"
stop_pid "watchdog" "$watchdog_pid" || status=1
rm -f "$WATCHDOG_PID_FILE"
else
echo "watchdog pid file not found: $WATCHDOG_PID_FILE"
fi
if [ -f "$PID_FILE" ]; then
app_pid="$(cat "$PID_FILE" || true)"
stop_pid "app" "$app_pid" || status=1
rm -f "$PID_FILE"
else
echo "app pid file not found: $PID_FILE"
fi
if command -v tmux >/dev/null 2>&1; then
sessions="$(tmux ls 2>/dev/null | awk -F: -v p="$TMUX_SESSION_PREFIX" '$1 ~ "^" p "_" {print $1}')"
if [ -n "$sessions" ]; then
while IFS= read -r s; do
[ -z "$s" ] && continue
tmux kill-session -t "$s" 2>/dev/null || true
echo "Killed tmux session: $s"
done <<< "$sessions"
else
echo "No tmux sessions to kill (prefix=${TMUX_SESSION_PREFIX}_)"
fi
fi
exit "$status"
@@ -1,42 +0,0 @@
#!/usr/bin/env bash
# Simple watchdog for The Ouroboros process.
set -euo pipefail
PID_FILE="${PID_FILE:-data/overnight/app.pid}"
LOG_FILE="${LOG_FILE:-data/overnight/watchdog.log}"
CHECK_INTERVAL="${CHECK_INTERVAL:-30}"
STATUS_EVERY="${STATUS_EVERY:-10}"
mkdir -p "$(dirname "$LOG_FILE")"
log() {
printf '%s %s\n' "$(date -u +"%Y-%m-%dT%H:%M:%SZ")" "$1" | tee -a "$LOG_FILE"
}
if [ ! -f "$PID_FILE" ]; then
log "[ERROR] pid file not found: $PID_FILE"
exit 1
fi
PID="$(cat "$PID_FILE")"
if [ -z "$PID" ]; then
log "[ERROR] pid file is empty: $PID_FILE"
exit 1
fi
log "[INFO] watchdog started (pid=$PID, interval=${CHECK_INTERVAL}s)"
count=0
while true; do
if kill -0 "$PID" 2>/dev/null; then
count=$((count + 1))
if [ $((count % STATUS_EVERY)) -eq 0 ]; then
log "[INFO] process alive (pid=$PID)"
fi
else
log "[ERROR] process stopped (pid=$PID)"
exit 1
fi
sleep "$CHECK_INTERVAL"
done
@@ -128,16 +128,6 @@ class SmartVolatilityScanner:
 if not fluct_rows:
 return []
-diagnostics: dict[str, int | float] = {
-"total_rows": len(fluct_rows),
-"missing_code": 0,
-"invalid_price": 0,
-"low_volatility": 0,
-"connection_error": 0,
-"unexpected_error": 0,
-"qualified": 0,
-}
 volume_rank_bonus: dict[str, float] = {}
 for idx, row in enumerate(volume_rows):
 code = _extract_stock_code(row)
@@ -149,7 +139,6 @@
 for stock in fluct_rows:
 stock_code = _extract_stock_code(stock)
 if not stock_code:
-diagnostics["missing_code"] += 1
 continue
 try:
@@ -179,11 +168,7 @@
 volume_ratio = max(volume_ratio, volume / prev_day_volume)
 volatility_pct = max(abs(change_rate), intraday_range_pct)
-if price <= 0:
-diagnostics["invalid_price"] += 1
-continue
-if volatility_pct < 0.8:
-diagnostics["low_volatility"] += 1
+if price <= 0 or volatility_pct < 0.8:
 continue
 volatility_score = min(volatility_pct / 10.0, 1.0) * 85.0
@@ -204,22 +189,14 @@
 score=score,
 )
 )
-diagnostics["qualified"] += 1
 except ConnectionError as exc:
-diagnostics["connection_error"] += 1
 logger.warning("Failed to analyze %s: %s", stock_code, exc)
 continue
 except Exception as exc:
-diagnostics["unexpected_error"] += 1
 logger.error("Unexpected error analyzing %s: %s", stock_code, exc)
 continue
-logger.info(
-"Domestic scan diagnostics: %s (volatility_threshold=0.8, top_n=%d)",
-diagnostics,
-self.top_n,
-)
 logger.info("Domestic ranking scan found %d candidates", len(candidates))
 candidates.sort(key=lambda c: c.score, reverse=True)
 return candidates[: self.top_n]
@@ -265,14 +242,6 @@
 if not fluct_rows:
 return []
-diagnostics: dict[str, int | float] = {
-"total_rows": len(fluct_rows),
-"missing_code": 0,
-"invalid_price": 0,
-"low_volatility": 0,
-"qualified": 0,
-}
 volume_rank_bonus: dict[str, float] = {}
 try:
 volume_rows = await self.overseas_broker.fetch_overseas_rankings(
@@ -297,7 +266,6 @@
 for row in fluct_rows:
 stock_code = _extract_stock_code(row)
 if not stock_code:
-diagnostics["missing_code"] += 1
 continue
 price = _extract_last_price(row)
@@ -307,11 +275,7 @@
 volatility_pct = max(abs(change_rate), intraday_range_pct)
 # Volatility-first filter (not simple gainers/value ranking).
-if price <= 0:
-diagnostics["invalid_price"] += 1
-continue
-if volatility_pct < 0.8:
-diagnostics["low_volatility"] += 1
+if price <= 0 or volatility_pct < 0.8:
 continue
 volatility_score = min(volatility_pct / 10.0, 1.0) * 85.0
@@ -331,14 +295,7 @@
 score=score,
 )
 )
-diagnostics["qualified"] += 1
-logger.info(
-"Overseas ranking scan diagnostics for %s: %s (volatility_threshold=0.8, top_n=%d)",
-market.code,
-diagnostics,
-self.top_n,
-)
 if candidates:
 logger.info(
 "Overseas ranking scan found %d candidates for %s",
@@ -358,19 +315,6 @@
 logger.info("Overseas scanner: no symbol universe for %s", market.name)
 return []
-logger.info(
-"Overseas scanner: scanning %d fallback symbols for %s",
-len(symbols),
-market.name,
-)
-diagnostics: dict[str, int | float] = {
-"total_rows": len(symbols),
-"invalid_price": 0,
-"low_volatility": 0,
-"connection_error": 0,
-"unexpected_error": 0,
-"qualified": 0,
-}
 candidates: list[ScanCandidate] = []
 for stock_code in symbols:
 try:
@@ -384,11 +328,7 @@
 intraday_range_pct = _extract_intraday_range_pct(output, price)
 volatility_pct = max(abs(change_rate), intraday_range_pct)
-if price <= 0:
-diagnostics["invalid_price"] += 1
-continue
-if volatility_pct < 0.8:
-diagnostics["low_volatility"] += 1
+if price <= 0 or volatility_pct < 0.8:
 continue
 score = min(volatility_pct / 10.0, 1.0) * 100.0
@@ -406,24 +346,10 @@
 score=score,
 )
 )
-diagnostics["qualified"] += 1
 except ConnectionError as exc:
-diagnostics["connection_error"] += 1
 logger.warning("Failed to analyze overseas %s: %s", stock_code, exc)
 except Exception as exc:
-diagnostics["unexpected_error"] += 1
 logger.error("Unexpected error analyzing overseas %s: %s", stock_code, exc)
-logger.info(
-"Overseas fallback scan diagnostics for %s: %s (volatility_threshold=0.8, top_n=%d)",
-market.code,
-diagnostics,
-self.top_n,
-)
-logger.info(
-"Overseas symbol fallback scan found %d candidates for %s",
-len(candidates),
-market.name,
-)
 return candidates
 def get_stock_codes(self, candidates: list[ScanCandidate]) -> list[str]:
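Pulled out of the hunks above, the volatility-first score is a capped linear map. This standalone sketch mirrors the expressions in the diff (an 85-point volatility component plus a volume-rank bonus on the ranking paths; the symbol-fallback path uses a flat 100-point scale instead):

```python
def volatility_first_score(volatility_pct: float, rank_bonus: float = 0.0, scale: float = 85.0) -> float:
    """Volatility contribution is capped at `scale` points (reached at 10% volatility)."""
    return min(volatility_pct / 10.0, 1.0) * scale + rank_bonus

print(volatility_first_score(5.0))         # → 42.5
print(volatility_first_score(20.0, 10.0))  # → 95.0 (volatility capped at 85, plus bonus)
```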
@@ -104,14 +104,12 @@ class KISBroker:
 time_since_last_attempt = now - self._last_refresh_attempt
 if time_since_last_attempt < self._refresh_cooldown:
 remaining = self._refresh_cooldown - time_since_last_attempt
-# Do not fail fast here. If token is unavailable, upstream calls
-# will all fail for up to a minute and scanning returns no trades.
-logger.warning(
-"Token refresh on cooldown. Waiting %.1fs before retry (KIS allows 1/minute)",
-remaining,
-)
-await asyncio.sleep(remaining)
-now = asyncio.get_event_loop().time()
+error_msg = (
+f"Token refresh on cooldown. "
+f"Retry in {remaining:.1f}s (KIS allows 1/minute)"
+)
+logger.warning(error_msg)
+raise ConnectionError(error_msg)
 logger.info("Refreshing KIS access token")
 self._last_refresh_attempt = now
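The fail-fast behavior in this hunk reduces to a small cooldown guard. A sketch with illustrative names (not the real `KISBroker`, which is async and performs an actual OAuth call):

```python
from __future__ import annotations

import time

class TokenRefresher:
    """Fail fast if a refresh is requested while the provider-imposed cooldown is active."""

    def __init__(self, cooldown_s: float = 60.0):
        self.cooldown_s = cooldown_s  # KIS allows one token refresh per minute
        self.last_attempt: float | None = None

    def refresh(self, now: float | None = None) -> str:
        now = time.monotonic() if now is None else now
        if self.last_attempt is not None:
            since = now - self.last_attempt
            if since < self.cooldown_s:
                remaining = self.cooldown_s - since
                # Raising instead of sleeping lets callers decide how to back off.
                raise ConnectionError(f"Token refresh on cooldown. Retry in {remaining:.1f}s")
        self.last_attempt = now
        return "new-token"  # placeholder for the real OAuth call
```

The trade-off the deleted comment describes is visible here: raising surfaces the cooldown immediately, at the cost of failing every upstream call until the window passes.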

View File

@@ -82,19 +82,14 @@ class OverseasBroker:
         session = self._broker._get_session()
         if ranking_type == "volume":
-            configured_tr_id = self._broker._settings.OVERSEAS_RANKING_VOLUME_TR_ID
-            configured_path = self._broker._settings.OVERSEAS_RANKING_VOLUME_PATH
-            default_tr_id = "HHDFS76200200"
-            default_path = "/uapi/overseas-price/v1/quotations/inquire-volume-rank"
+            tr_id = self._broker._settings.OVERSEAS_RANKING_VOLUME_TR_ID
+            path = self._broker._settings.OVERSEAS_RANKING_VOLUME_PATH
         else:
-            configured_tr_id = self._broker._settings.OVERSEAS_RANKING_FLUCT_TR_ID
-            configured_path = self._broker._settings.OVERSEAS_RANKING_FLUCT_PATH
-            default_tr_id = "HHDFS76200100"
-            default_path = "/uapi/overseas-price/v1/quotations/inquire-updown-rank"
-        endpoint_specs: list[tuple[str, str]] = [(configured_tr_id, configured_path)]
-        if (configured_tr_id, configured_path) != (default_tr_id, default_path):
-            endpoint_specs.append((default_tr_id, default_path))
+            tr_id = self._broker._settings.OVERSEAS_RANKING_FLUCT_TR_ID
+            path = self._broker._settings.OVERSEAS_RANKING_FLUCT_PATH
+        headers = await self._broker._auth_headers(tr_id)
+        url = f"{self._broker._base_url}{path}"

         # Try common param variants used by KIS overseas quotation APIs.
         param_variants = [
@@ -105,18 +100,12 @@ class OverseasBroker:
         ]
         last_error: str | None = None
-        saw_http_404 = False
-        for tr_id, path in endpoint_specs:
-            headers = await self._broker._auth_headers(tr_id)
-            url = f"{self._broker._base_url}{path}"
-            for params in param_variants:
-                try:
-                    async with session.get(url, headers=headers, params=params) as resp:
-                        text = await resp.text()
-                        if resp.status != 200:
-                            last_error = f"HTTP {resp.status}: {text}"
-                            if resp.status == 404:
-                                saw_http_404 = True
-                            continue
-                        data = await resp.json()
+        for params in param_variants:
+            try:
+                async with session.get(url, headers=headers, params=params) as resp:
+                    text = await resp.text()
+                    if resp.status != 200:
+                        last_error = f"HTTP {resp.status}: {text}"
+                        continue
+                    data = await resp.json()
@@ -130,14 +119,6 @@ class OverseasBroker:
-                last_error = str(exc)
-                continue
+            last_error = str(exc)
+            continue

-        if saw_http_404:
-            logger.warning(
-                "Overseas ranking endpoint unavailable (404) for %s/%s; using symbol fallback scan",
-                exchange_code,
-                ranking_type,
-            )
-            return []
         raise ConnectionError(
             f"fetch_overseas_rankings failed for {exchange_code}/{ranking_type}: {last_error}"
         )
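After this change, a single endpoint is tried across several parameter variants, remembering only the last error so the final exception message stays specific. The variant-fallback shape can be sketched synchronously; the function and parameter names below are illustrative, not this repo's API:

```python
def fetch_with_variants(request, variants):
    """Try each request variant in order; raise with the last error if all fail."""
    last_error = None
    for params in variants:
        try:
            status, body = request(params)
        except OSError as exc:  # transport-level failure; try the next variant
            last_error = str(exc)
            continue
        if status != 200:
            last_error = f"HTTP {status}: {body}"
            continue
        return body  # first success wins; remaining variants are skipped
    raise ConnectionError(f"all variants failed: {last_error}")
```

Keeping only the last error is a deliberate trade-off: the message stays short, at the cost of hiding earlier failures that may have differed.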