Compare commits
37 Commits
feature/is...d105a3ff5e

Commits:
d105a3ff5e, 0424c78f6c, 3fdb7a29d4, 31b4d0bf1e, e2275a23b1, 7522bb7e66, 63fa6841a2, ece3c5597b, 63f4e49d88, e0a6b307a2, 75320eb587, afb31b7f4b, a429a9f4da, d9763def85, ab7f0444b2, 6b3960a3a4, 6cad8e74e1, 86c94cff62, 692cb61991, 392422992b, cc637a9738, 8c27473fed, bde54c7487, a14f944fcc, 56f7405baa, e3b1ecc572, 8acf72b22c, c95102a0bd, 0685d62f9c, 78021d4695, 3cdd10783b, c4e31be27a, 9d9ade14eb, 9a8936ab34, c5831966ed, f03cc6039b, 9171e54652
CLAUDE.md (22 changed lines)
@@ -15,6 +15,9 @@ pytest -v --cov=src

# Run (paper trading)
python -m src.main --mode=paper

+# Run with dashboard
+python -m src.main --mode=paper --dashboard
```

## Telegram Notifications (Optional)
@@ -43,6 +46,10 @@ Get real-time alerts for trades, circuit breakers, and system events via Telegram

- ℹ️ Market open/close notifications
- 📝 System startup/shutdown status

+### Interactive Commands
+
+With `TELEGRAM_COMMANDS_ENABLED=true` (default), the bot supports 9 bidirectional commands: `/help`, `/status`, `/positions`, `/report`, `/scenarios`, `/review`, `/dashboard`, `/stop`, `/resume`.

**Fail-safe**: Notifications never crash the trading system. Missing credentials or API errors are logged but trading continues normally.

## Smart Volatility Scanner (Optional)
@@ -109,17 +116,23 @@ User requirements and feedback are tracked in [docs/requirements-log.md](docs/requirements-log.md)

```
src/
├── analysis/ # Technical analysis (RSI, volatility, smart scanner)
├── backup/ # Disaster recovery (scheduler, cloud storage, health)
-├── brain/ # Gemini AI decision engine
+├── brain/ # Gemini AI decision engine (prompt optimizer, context selector)
├── broker/ # KIS API client (domestic + overseas)
├── context/ # L1-L7 hierarchical memory system
├── core/ # Risk manager (READ-ONLY)
├── dashboard/ # FastAPI read-only monitoring (8 API endpoints)
├── data/ # External data integration (news, market data, calendar)
-├── evolution/ # Self-improvement optimizer
+├── evolution/ # Self-improvement (optimizer, daily review, scorecard)
├── logging/ # Decision logger (audit trail)
├── markets/ # Market schedules and timezone handling
-├── notifications/ # Telegram real-time alerts
+├── notifications/ # Telegram alerts + bidirectional commands (9 commands)
├── strategy/ # Pre-market planner, scenario engine, playbook store
├── db.py # SQLite trade logging
├── main.py # Trading loop orchestrator
└── config.py # Settings (from .env)

-tests/ # 343 tests across 14 files
+tests/ # 551 tests across 25 files
docs/ # Extended documentation
```
@@ -131,6 +144,7 @@ ruff check src/ tests/ # Lint
mypy src/ --strict # Type check

python -m src.main --mode=paper # Paper trading
+python -m src.main --mode=paper --dashboard # With dashboard
python -m src.main --mode=live # Live trading (⚠️ real money)

# Gitea workflow (requires tea CLI)
@@ -2,7 +2,9 @@

## Overview

-Self-evolving AI trading agent for global stock markets via KIS (Korea Investment & Securities) API. The main loop in `src/main.py` orchestrates four components across multiple markets with two trading modes: daily (batch API calls) or realtime (per-stock decisions).
+Self-evolving AI trading agent for global stock markets via KIS (Korea Investment & Securities) API. The main loop in `src/main.py` orchestrates components across multiple markets with two trading modes: daily (batch API calls) or realtime (per-stock decisions).
+
+**v2 Proactive Playbook Architecture**: The system uses a "plan once, execute locally" approach. Pre-market, the AI generates a playbook of scenarios (one Gemini API call per market per day). During trading hours, a local scenario engine matches live market data against these pre-computed scenarios — no additional AI calls needed. This dramatically reduces API costs and latency.
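As a rough sketch of that split (the scenario fields and trigger rules here are illustrative assumptions, not the actual `src/strategy` models):

```python
from dataclasses import dataclass

@dataclass
class Scenario:
    """One pre-computed plan entry (hypothetical shape)."""
    stock_code: str
    trigger: str     # e.g. "price_below" or "price_above"
    level: float     # trigger threshold
    action: str      # BUY | SELL | HOLD
    confidence: int

def match_scenarios(scenarios: list[Scenario], live_price: dict[str, float]) -> list[Scenario]:
    """Trading-hours matching: pure Python, no AI calls."""
    matched = []
    for s in scenarios:
        price = live_price.get(s.stock_code)
        if price is None:
            continue
        if s.trigger == "price_below" and price <= s.level:
            matched.append(s)
        elif s.trigger == "price_above" and price >= s.level:
            matched.append(s)
    return matched

# Pre-market: one (expensive) AI call produces the playbook.
playbook = [Scenario("005930", "price_below", 70000.0, "BUY", 85)]
# In-session: repeated (cheap) local matching against live data.
print(match_scenarios(playbook, {"005930": 69500.0}))
```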
## Trading Modes

@@ -46,9 +48,11 @@ High-frequency trading with individual stock analysis:

**KISBroker** (`kis_api.py`) — Async KIS API client for domestic Korean market

- Automatic OAuth token refresh (valid for 24 hours)
-- Leaky-bucket rate limiter (10 requests per second)
+- Leaky-bucket rate limiter (configurable RPS, default 2.0)
- POST body hash-key signing for order authentication
- Custom SSL context with disabled hostname verification for VTS (virtual trading) endpoint due to known certificate mismatch
+- `fetch_market_rankings()` — Fetch volume surge rankings from KIS API
+- `get_daily_prices()` — Fetch OHLCV history for technical analysis
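A minimal leaky-bucket limiter in that spirit, as an illustration only (the broker's real limiter lives in `kis_api.py` and is not shown in this diff):

```python
import asyncio
import time

class LeakyBucketLimiter:
    """Allow at most `rps` acquisitions per second, smoothing bursts."""

    def __init__(self, rps: float = 2.0) -> None:
        self.interval = 1.0 / rps   # minimum spacing between requests
        self._next_slot = 0.0
        self._lock = asyncio.Lock()

    async def acquire(self) -> None:
        async with self._lock:
            now = time.monotonic()
            # Reserve the next free slot and sleep until it arrives.
            self._next_slot = max(self._next_slot, now) + self.interval
            delay = self._next_slot - self.interval - now
            if delay > 0:
                await asyncio.sleep(delay)

async def main() -> None:
    limiter = LeakyBucketLimiter(rps=2.0)
    for i in range(5):
        await limiter.acquire()
        print(f"request {i} at {time.monotonic():.2f}")

asyncio.run(main())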
**OverseasBroker** (`overseas.py`) — KIS overseas stock API wrapper

@@ -63,10 +67,7 @@ High-frequency trading with individual stock analysis:

- `is_market_open()` checks weekends, trading hours, lunch breaks
- `get_open_markets()` returns currently active markets
- `get_next_market_open()` finds next market to open and when

-**New API Methods** (added in v0.9.0):
-- `fetch_market_rankings()` — Fetch volume surge rankings from KIS API
-- `get_daily_prices()` — Fetch OHLCV history for technical analysis
- 10 global markets defined (KR, US_NASDAQ, US_NYSE, US_AMEX, JP, HK, CN_SHA, CN_SZA, VN_HNX, VN_HSX)
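As a rough illustration of those schedule checks (a sketch, not the project's `src/markets` code; hours are approximate and lunch breaks and holidays are omitted):

```python
from dataclasses import dataclass
from datetime import datetime, time
from zoneinfo import ZoneInfo

@dataclass(frozen=True)
class Market:
    code: str
    tz: str
    open_time: time
    close_time: time

MARKETS = {
    "KR": Market("KR", "Asia/Seoul", time(9, 0), time(15, 30)),
    "US_NASDAQ": Market("US_NASDAQ", "America/New_York", time(9, 30), time(16, 0)),
}

def is_market_open(market: Market, now_utc: datetime) -> bool:
    """Weekend + trading-hours check in the market's local timezone."""
    local = now_utc.astimezone(ZoneInfo(market.tz))
    if local.weekday() >= 5:  # Saturday/Sunday
        return False
    return market.open_time <= local.time() <= market.close_time

now = datetime.now(ZoneInfo("UTC"))
print([m.code for m in MARKETS.values() if is_market_open(m, now)])
```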
### 2. Analysis (`src/analysis/`)

@@ -91,14 +92,9 @@ High-frequency trading with individual stock analysis:

- **Fallback**: Uses static watchlist if ranking API unavailable
- **Realtime mode only**: Daily mode uses batch processing for API efficiency

**Benefits:**
- Reduces Gemini API calls from 20-30 stocks to 1-3 qualified candidates
- Fast Python-based filtering before expensive AI judgment
- Logs selection context (RSI, volume_ratio, signal, score) for Evolution system
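A stripped-down version of that Python-first filter might look like this (Wilder-smoothed RSI; the thresholds mirror the defaults documented below, and the scoring step is omitted):

```python
def rsi(closes: list[float], period: int = 14) -> float:
    """Wilder-smoothed RSI computed over the price-change series."""
    deltas = [b - a for a, b in zip(closes, closes[1:])]
    gains = [max(d, 0.0) for d in deltas]
    losses = [max(-d, 0.0) for d in deltas]
    avg_gain = sum(gains[:period]) / period
    avg_loss = sum(losses[:period]) / period
    for g, l in zip(gains[period:], losses[period:]):
        avg_gain = (avg_gain * (period - 1) + g) / period
        avg_loss = (avg_loss * (period - 1) + l) / period
    if avg_loss == 0:
        return 100.0
    rs = avg_gain / avg_loss
    return 100.0 - 100.0 / (1.0 + rs)

def qualifies(closes: list[float], volume_ratio: float) -> bool:
    """Cheap pre-filter: volume surge AND an RSI extreme."""
    r = rsi(closes)
    return volume_ratio >= 2.0 and (r <= 30.0 or r >= 70.0)
```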
-### 3. Brain (`src/brain/gemini_client.py`)
+### 3. Brain (`src/brain/`)

-**GeminiClient** — AI decision engine powered by Google Gemini
+**GeminiClient** (`gemini_client.py`) — AI decision engine powered by Google Gemini

- Constructs structured prompts from market data
- Parses JSON responses into `TradeDecision` objects (`action`, `confidence`, `rationale`)

@@ -106,11 +102,20 @@ High-frequency trading with individual stock analysis:

- Falls back to safe HOLD on any parse/API error
- Handles markdown-wrapped JSON, malformed responses, invalid actions
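The parse-and-fallback behavior could be sketched as follows (the field names come from the `TradeDecision` description above; the fence-stripping details are assumptions):

```python
import json
from dataclasses import dataclass

@dataclass
class TradeDecision:
    action: str        # BUY | SELL | HOLD
    confidence: int
    rationale: str

SAFE_HOLD = TradeDecision("HOLD", 0, "fallback: unparseable model output")

def parse_decision(raw: str) -> TradeDecision:
    """Parse model output, tolerating markdown fences; HOLD on any error."""
    text = raw.strip()
    if text.startswith("```"):
        # Drop ```json ... ``` fencing around the payload.
        text = text.strip("`")
        text = text.split("\n", 1)[1] if "\n" in text else text
    try:
        data = json.loads(text)
        action = str(data["action"]).upper()
        if action not in {"BUY", "SELL", "HOLD"}:
            return SAFE_HOLD
        return TradeDecision(action, int(data["confidence"]), str(data["rationale"]))
    except (json.JSONDecodeError, KeyError, ValueError):
        return SAFE_HOLD

print(parse_decision('```json\n{"action":"buy","confidence":91,"rationale":"oversold"}\n```'))
```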
**PromptOptimizer** (`prompt_optimizer.py`) — Token efficiency optimization

- Reduces prompt size while preserving decision quality
- Caches optimized prompts

**ContextSelector** (`context_selector.py`) — Relevant context selection for prompts

- Selects appropriate context layers for current market conditions

### 4. Risk Manager (`src/core/risk_manager.py`)

**RiskManager** — Safety circuit breaker and order validation

-⚠️ **READ-ONLY by policy** (see [`docs/agents.md`](./agents.md))
+> **READ-ONLY by policy** (see [`docs/agents.md`](./agents.md))

- **Circuit Breaker**: Halts all trading via `SystemExit` when daily P&L drops below -3.0%
  - Threshold may only be made stricter, never relaxed

@@ -118,7 +123,79 @@ High-frequency trading with individual stock analysis:

- **Fat-Finger Protection**: Rejects orders exceeding 30% of available cash
  - Must always be enforced, cannot be disabled
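A compact sketch of those two checks (thresholds are from this document; the function shape is hypothetical, and the real `RiskManager` in `src/core/risk_manager.py` is READ-ONLY):

```python
MAX_LOSS_PCT = 3.0    # circuit breaker threshold (may only get stricter)
MAX_ORDER_PCT = 30.0  # fat-finger cap as % of available cash

class CircuitBreakerTripped(SystemExit):
    """Halts the whole process, mirroring the documented SystemExit."""

def validate_order(current_pnl_pct: float, order_amount: float, total_cash: float) -> None:
    if current_pnl_pct <= -MAX_LOSS_PCT:
        raise CircuitBreakerTripped(f"daily P&L {current_pnl_pct:.2f}% breached -{MAX_LOSS_PCT}%")
    if order_amount > total_cash * MAX_ORDER_PCT / 100.0:
        raise ValueError(f"fat-finger: order {order_amount} exceeds {MAX_ORDER_PCT}% of cash")

validate_order(current_pnl_pct=-1.2, order_amount=2500, total_cash=10000)  # passes
```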
-### 5. Notifications (`src/notifications/telegram_client.py`)
+### 5. Strategy (`src/strategy/`)

**Pre-Market Planner** (`pre_market_planner.py`) — AI playbook generation

- Runs before market open (configurable `PRE_MARKET_MINUTES`, default 30)
- Generates scenario-based playbooks via single Gemini API call per market
- Handles timeout (`PLANNER_TIMEOUT_SECONDS`, default 60) with defensive playbook fallback
- Persists playbooks to database for audit trail

**Scenario Engine** (`scenario_engine.py`) — Local scenario matching

- Matches live market data against pre-computed playbook scenarios
- No AI calls during trading hours — pure Python matching logic
- Returns matched scenarios with confidence scores
- Configurable `MAX_SCENARIOS_PER_STOCK` (default 5)
- Periodic rescan at `RESCAN_INTERVAL_SECONDS` (default 300)
**Playbook Store** (`playbook_store.py`) — Playbook persistence

- SQLite-backed storage for daily playbooks
- Date and market-based retrieval
- Status tracking (generated, active, expired)

**Models** (`models.py`) — Pydantic data models

- Scenario, Playbook, MatchResult, and related type definitions

### 6. Context System (`src/context/`)

**Context Store** (`store.py`) — L1-L7 hierarchical memory

- 7-layer context system (see [docs/context-tree.md](./context-tree.md)):
  - L1: Tick-level (real-time price)
  - L2: Intraday (session summary)
  - L3: Daily (end-of-day)
  - L4: Weekly (trend analysis)
  - L5: Monthly (strategy review)
  - L6: Daily Review (scorecard)
  - L7: Evolution (long-term learning)
- Key-value storage with timeframe tagging
- SQLite persistence in `contexts` table
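A minimal key-value store with layer and timeframe tagging, in the spirit of the `contexts` table shown later in this diff (the `set_context`/`get_context` signatures are assumptions based on the aggregator code below):

```python
import json
import sqlite3
from datetime import UTC, datetime

conn = sqlite3.connect(":memory:")
conn.execute(
    """CREATE TABLE contexts (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        layer TEXT NOT NULL, timeframe TEXT, key TEXT NOT NULL,
        value TEXT NOT NULL, created_at TEXT NOT NULL, updated_at TEXT NOT NULL)"""
)

def set_context(layer: str, timeframe: str, key: str, value: object) -> None:
    now = datetime.now(UTC).isoformat()
    conn.execute(
        "INSERT INTO contexts (layer, timeframe, key, value, created_at, updated_at) "
        "VALUES (?, ?, ?, ?, ?, ?)",
        (layer, timeframe, key, json.dumps(value), now, now),
    )

def get_context(layer: str, timeframe: str, key: str) -> object | None:
    row = conn.execute(
        "SELECT value FROM contexts WHERE layer=? AND timeframe=? AND key=? "
        "ORDER BY updated_at DESC LIMIT 1",
        (layer, timeframe, key),
    ).fetchone()
    return json.loads(row[0]) if row else None

set_context("L6_DAILY", "2026-02-16", "win_rate", 62.5)
print(get_context("L6_DAILY", "2026-02-16", "win_rate"))  # 62.5
```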
**Context Scheduler** (`scheduler.py`) — Periodic aggregation

- Scheduled summarization from lower to higher layers
- Configurable aggregation intervals

**Context Summarizer** (`summarizer.py`) — Layer summarization

- Aggregates lower-layer data into higher-layer summaries

### 7. Dashboard (`src/dashboard/`)

**FastAPI App** (`app.py`) — Read-only monitoring dashboard

- Runs as daemon thread when enabled (`--dashboard` CLI flag or `DASHBOARD_ENABLED=true`)
- Configurable host/port (`DASHBOARD_HOST`, `DASHBOARD_PORT`, default `127.0.0.1:8080`)
- Serves static HTML frontend

**8 API Endpoints:**

| Endpoint | Method | Description |
|----------|--------|-------------|
| `/` | GET | Static HTML dashboard |
| `/api/status` | GET | Daily trading status by market |
| `/api/playbook/{date}` | GET | Playbook for specific date and market |
| `/api/scorecard/{date}` | GET | Daily scorecard from L6_DAILY context |
| `/api/performance` | GET | Trading performance metrics (by market + combined) |
| `/api/context/{layer}` | GET | Query context by layer (L1-L7) |
| `/api/decisions` | GET | Decision log entries with outcomes |
| `/api/scenarios/active` | GET | Today's matched scenarios |
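For example, polling two of the read-only endpoints from Python (a sketch: it assumes the dashboard is running on the default address and that responses are JSON, with the response field names guessed here):

```python
import json
from urllib.request import urlopen

BASE = "http://127.0.0.1:8080"

def get(path: str) -> dict:
    with urlopen(f"{BASE}{path}", timeout=5) as resp:
        return json.load(resp)

status = get("/api/status")
scenarios = get("/api/scenarios/active")
print(json.dumps(status, indent=2))
# "scenarios" key is an assumed response shape, for illustration only.
print(f"{len(scenarios.get('scenarios', []))} scenarios matched today")
```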
### 8. Notifications (`src/notifications/telegram_client.py`)

**TelegramClient** — Real-time event notifications via Telegram Bot API

@@ -126,7 +203,13 @@ High-frequency trading with individual stock analysis:

- Non-blocking: failures are logged but never crash trading
- Rate-limited: 1 message/second default to respect Telegram API limits
- Auto-disabled when credentials missing
- Gracefully handles API errors, network timeouts, invalid tokens

**TelegramCommandHandler** — Bidirectional command interface

- Long polling from Telegram API (configurable `TELEGRAM_POLLING_INTERVAL`)
- 9 interactive commands: `/help`, `/status`, `/positions`, `/report`, `/scenarios`, `/review`, `/dashboard`, `/stop`, `/resume`
- Authorization filtering by `TELEGRAM_CHAT_ID`
- Enable/disable via `TELEGRAM_COMMANDS_ENABLED` (default: true)
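The long-poll, authorize, dispatch loop can be sketched against the Bot API's real `getUpdates` method (illustrative; the project's handler adds rate limiting, error handling, and the full command set):

```python
import json
import urllib.parse
import urllib.request

TOKEN = "1234567890:ABC..."   # placeholder, not a real token
CHAT_ID = 123456789           # only this chat may issue commands

def get_updates(offset: int) -> list[dict]:
    qs = urllib.parse.urlencode({"timeout": 30, "offset": offset})
    url = f"https://api.telegram.org/bot{TOKEN}/getUpdates?{qs}"
    with urllib.request.urlopen(url, timeout=35) as resp:
        return json.load(resp)["result"]

def poll_once(offset: int) -> int:
    for update in get_updates(offset):
        offset = update["update_id"] + 1
        msg = update.get("message") or {}
        if msg.get("chat", {}).get("id") != CHAT_ID:
            continue  # authorization filter: ignore all other chats
        text = msg.get("text", "")
        if text.startswith("/status"):
            print("would reply with trading status")
    return offset
```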
**Notification Types:**
- Trade execution (BUY/SELL with confidence)

@@ -134,12 +217,12 @@ High-frequency trading with individual stock analysis:

- Fat-finger protection triggers (order rejection)
- Market open/close events
- System startup/shutdown status
+- Playbook generation results
+- Stop-loss monitoring alerts

**Setup:** See [src/notifications/README.md](../src/notifications/README.md) for bot creation and configuration.

-### 6. Evolution (`src/evolution/optimizer.py`)
+### 9. Evolution (`src/evolution/`)

-**StrategyOptimizer** — Self-improvement loop
+**StrategyOptimizer** (`optimizer.py`) — Self-improvement loop

- Analyzes high-confidence losing trades from SQLite
- Asks Gemini to generate new `BaseStrategy` subclasses

@@ -147,99 +230,196 @@ High-frequency trading with individual stock analysis:
- Simulates PR creation for human review
- Only activates strategies that pass all tests

**DailyReview** (`daily_review.py`) — End-of-day review

- Generates comprehensive trade performance summary
- Stores results in L6_DAILY context layer
- Tracks win rate, P&L, confidence accuracy

**DailyScorecard** (`scorecard.py`) — Performance scoring

- Calculates daily metrics (trades, P&L, win rate, avg confidence)
- Enables trend tracking across days

**Stop-Loss Monitoring** — Real-time position protection

- Monitors positions against stop-loss levels from playbook scenarios
- Sends Telegram alerts when thresholds approached or breached

### 10. Decision Logger (`src/logging/decision_logger.py`)

**DecisionLogger** — Comprehensive audit trail

- Logs every trading decision with full context snapshot
- Captures input data, rationale, confidence, and outcomes
- Supports outcome tracking (P&L, accuracy) for post-analysis
- Stored in `decision_logs` table with indexed queries
- Review workflow support (reviewed flag, review notes)

### 11. Data Integration (`src/data/`)

**External Data Sources** (optional):

- `news_api.py` — News sentiment data
- `market_data.py` — Extended market data
- `economic_calendar.py` — Economic event calendar

### 12. Backup (`src/backup/`)

**Disaster Recovery** (see [docs/disaster_recovery.md](./disaster_recovery.md)):

- `scheduler.py` — Automated backup scheduling
- `exporter.py` — Data export to various formats
- `cloud_storage.py` — S3-compatible cloud backup
- `health_monitor.py` — Backup integrity verification

## Data Flow
### Playbook Mode (Daily — Primary v2 Flow)

```
┌─────────────────────────────────────────────────────────────┐
│ Pre-Market Phase (before market open)                       │
└─────────────────────────────────────────────────────────────┘
                   │
                   ▼
┌──────────────────────────────────┐
│ Pre-Market Planner               │
│ - 1 Gemini API call per market   │
│ - Generate scenario playbook     │
│ - Store in playbooks table       │
└──────────────────┬───────────────┘
                   │
                   ▼
┌─────────────────────────────────────────────────────────────┐
│ Trading Hours (market open → close)                         │
└─────────────────────────────────────────────────────────────┘
                   │
                   ▼
┌──────────────────────────────────┐
│ Market Schedule Check            │
│ - Get open markets               │
│ - Filter by enabled markets      │
└──────────────────┬───────────────┘
                   │
                   ▼
┌──────────────────────────────────┐
│ Scenario Engine (local)          │
│ - Match live data vs playbook    │
│ - No AI calls needed             │
│ - Return matched scenarios       │
└──────────────────┬───────────────┘
                   │
                   ▼
┌──────────────────────────────────┐
│ Risk Manager: Validate Order     │
│ - Check circuit breaker          │
│ - Check fat-finger limit         │
└──────────────────┬───────────────┘
                   │
                   ▼
┌──────────────────────────────────┐
│ Broker: Execute Order            │
│ - Domestic: send_order()         │
│ - Overseas: send_overseas_order()│
└──────────────────┬───────────────┘
                   │
                   ▼
┌──────────────────────────────────┐
│ Decision Logger + DB             │
│ - Full audit trail               │
│ - Context snapshot               │
│ - Telegram notification          │
└──────────────────┬───────────────┘
                   │
                   ▼
┌─────────────────────────────────────────────────────────────┐
│ Post-Market Phase                                           │
└─────────────────────────────────────────────────────────────┘
                   │
                   ▼
┌──────────────────────────────────┐
│ Daily Review + Scorecard         │
│ - Performance summary            │
│ - Store in L6_DAILY context      │
│ - Evolution learning             │
└──────────────────────────────────┘
```
### Realtime Mode (with Smart Scanner)

```
┌─────────────────────────────────────────────────────────────┐
│ Main Loop (60s cycle per market)                            │
└─────────────────────────────────────────────────────────────┘
                   │
                   ▼
┌──────────────────────────────────┐
│ Market Schedule Check            │
│ - Get open markets               │
│ - Filter by enabled markets      │
│ - Wait if all closed             │
└──────────────────┬───────────────┘
                   │
                   ▼
┌──────────────────────────────────┐
│ Smart Scanner (Python-first)     │
│ - Fetch volume rankings (KIS)    │
│ - Get 20d price history per stock│
│ - Calculate RSI(14) + vol ratio  │
│ - Filter: vol>2x AND RSI extreme │
│ - Return top 3 qualified stocks  │
└──────────────────┬───────────────┘
                   │
                   ▼
┌──────────────────────────────────┐
│ For Each Qualified Candidate     │
└──────────────────┬───────────────┘
                   │
                   ▼
┌──────────────────────────────────┐
│ Broker: Fetch Market Data        │
│ - Domestic: orderbook + balance  │
│ - Overseas: price + balance      │
└──────────────────┬───────────────┘
                   │
                   ▼
-┌──────────────────────────────────┐
-│ Calculate P&L                    │
-│ pnl_pct = (eval - cost) / cost   │
-└──────────────────┬───────────────┘
-                   │
-                   ▼
┌──────────────────────────────────┐
│ Brain: Get Decision (AI)         │
│ - Build prompt with market data  │
│ - Call Gemini API                │
│ - Parse JSON response            │
│ - Return TradeDecision           │
└──────────────────┬───────────────┘
                   │
                   ▼
┌──────────────────────────────────┐
│ Risk Manager: Validate Order     │
│ - Check circuit breaker          │
│ - Check fat-finger limit         │
-│ - Raise if validation fails      │
└──────────────────┬───────────────┘
                   │
                   ▼
┌──────────────────────────────────┐
│ Broker: Execute Order            │
│ - Domestic: send_order()         │
│ - Overseas: send_overseas_order()│
└──────────────────┬───────────────┘
                   │
                   ▼
-┌──────────────────────────────────┐
-│ Notifications: Send Alert        │
-│ - Trade execution notification   │
-│ - Non-blocking (errors logged)   │
-│ - Rate-limited to 1/sec          │
-└──────────────────┬───────────────┘
-                   │
-                   ▼
-┌──────────────────────────────────┐
-│ Database: Log Trade              │
-│ - SQLite (data/trades.db)        │
-│ - Track: action, confidence,     │
-│   rationale, market, exchange    │
-│ - NEW: selection_context (JSON)  │
-│   - RSI, volume_ratio, signal    │
-│   - For Evolution optimization   │
-└──────────────────────────────────┘
+┌──────────────────────────────────┐
+│ Decision Logger + Notifications  │
+│ - Log trade to SQLite            │
+│ - selection_context (JSON)       │
+│ - Telegram notification          │
+└──────────────────────────────────┘
```
## Database Schema

-**SQLite** (`src/db.py`)
+**SQLite** (`src/db.py`) — Database: `data/trades.db`

### trades
```sql
CREATE TABLE trades (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
@@ -251,25 +431,73 @@ CREATE TABLE trades (
    quantity INTEGER,
    price REAL,
    pnl REAL DEFAULT 0.0,
-    market TEXT DEFAULT 'KR',          -- KR | US_NASDAQ | JP | etc.
-    exchange_code TEXT DEFAULT 'KRX',  -- KRX | NASD | NYSE | etc.
-    selection_context TEXT             -- JSON: {rsi, volume_ratio, signal, score}
+    market TEXT DEFAULT 'KR',
+    exchange_code TEXT DEFAULT 'KRX',
+    selection_context TEXT,            -- JSON: {rsi, volume_ratio, signal, score}
+    decision_id TEXT                   -- Links to decision_logs
);
```
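For example, logging one trade with its scanner context serialized into `selection_context` (a sketch against the schema above; columns not visible in this hunk, such as `stock_code` and `action`, are filled in as assumptions):

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    """CREATE TABLE trades (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        stock_code TEXT, action TEXT, confidence INTEGER,
        quantity INTEGER, price REAL, pnl REAL DEFAULT 0.0,
        market TEXT DEFAULT 'KR', exchange_code TEXT DEFAULT 'KRX',
        selection_context TEXT, decision_id TEXT)"""
)

selection_context = {"rsi": 28.5, "volume_ratio": 2.7, "signal": "oversold", "score": 85.2}
conn.execute(
    "INSERT INTO trades (stock_code, action, confidence, quantity, price, "
    "selection_context, decision_id) VALUES (?, ?, ?, ?, ?, ?, ?)",
    ("005930", "BUY", 85, 10, 69500.0, json.dumps(selection_context), "d-0001"),
)

row = conn.execute("SELECT selection_context FROM trades").fetchone()
print(json.loads(row[0])["signal"])  # oversold
```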
-**Selection Context** (new in v0.9.0): Stores scanner selection criteria as JSON:
-```json
-{
-  "rsi": 28.5,
-  "volume_ratio": 2.7,
-  "signal": "oversold",
-  "score": 85.2
-}
-```
-
-Enables Evolution system to analyze correlation between selection criteria and trade outcomes.

### contexts
```sql
CREATE TABLE contexts (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    layer TEXT NOT NULL,        -- L1 through L7
    timeframe TEXT,
    key TEXT NOT NULL,
    value TEXT NOT NULL,        -- JSON data
    created_at TEXT NOT NULL,
    updated_at TEXT NOT NULL
);
-- Indices: idx_contexts_layer, idx_contexts_timeframe, idx_contexts_updated
```

### decision_logs
```sql
CREATE TABLE decision_logs (
    decision_id TEXT PRIMARY KEY,
    timestamp TEXT NOT NULL,
    stock_code TEXT,
    market TEXT,
    exchange_code TEXT,
    action TEXT,
    confidence INTEGER,
    rationale TEXT,
    context_snapshot TEXT,      -- JSON: full context at decision time
    input_data TEXT,            -- JSON: market data used
    outcome_pnl REAL,
    outcome_accuracy REAL,
    reviewed INTEGER DEFAULT 0,
    review_notes TEXT
);
-- Indices: idx_decision_logs_timestamp, idx_decision_logs_reviewed, idx_decision_logs_confidence
```

Auto-migration: Adds `market`, `exchange_code`, and `selection_context` columns if missing for backward compatibility.

### playbooks
```sql
CREATE TABLE playbooks (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    date TEXT NOT NULL,
    market TEXT NOT NULL,
    status TEXT DEFAULT 'generated',
    playbook_json TEXT NOT NULL,  -- Full playbook with scenarios
    generated_at TEXT NOT NULL,
    token_count INTEGER,
    scenario_count INTEGER,
    match_count INTEGER DEFAULT 0
);
-- Indices: idx_playbooks_date, idx_playbooks_market
```

### context_metadata
```sql
CREATE TABLE context_metadata (
    layer TEXT PRIMARY KEY,
    description TEXT,
    retention_days INTEGER,
    aggregation_source TEXT
);
```
## Configuration

@@ -284,29 +512,62 @@ KIS_APP_SECRET=your_app_secret
KIS_ACCOUNT_NO=XXXXXXXX-XX
GEMINI_API_KEY=your_gemini_key

-# Optional
+# Optional — Trading Mode
MODE=paper                  # paper | live
-DB_PATH=data/trades.db
-CONFIDENCE_THRESHOLD=80
-MAX_LOSS_PCT=3.0
-MAX_ORDER_PCT=30.0
-ENABLED_MARKETS=KR,US_NASDAQ  # Comma-separated market codes
-
-# Trading Mode (API efficiency)
TRADE_MODE=daily            # daily | realtime
DAILY_SESSIONS=4            # Sessions per day (daily mode only)
SESSION_INTERVAL_HOURS=6    # Hours between sessions (daily mode only)

-# Telegram Notifications (optional)
+# Optional — Database
+DB_PATH=data/trades.db
+
+# Optional — Risk
+CONFIDENCE_THRESHOLD=80
+MAX_LOSS_PCT=3.0
+MAX_ORDER_PCT=30.0
+
+# Optional — Markets
+ENABLED_MARKETS=KR,US       # Comma-separated market codes
+RATE_LIMIT_RPS=2.0          # KIS API requests per second
+
+# Optional — Pre-Market Planner (v2)
+PRE_MARKET_MINUTES=30       # Minutes before market open to generate playbook
+MAX_SCENARIOS_PER_STOCK=5   # Max scenarios per stock in playbook
+PLANNER_TIMEOUT_SECONDS=60  # Timeout for playbook generation
+DEFENSIVE_PLAYBOOK_ON_FAILURE=true  # Fallback on AI failure
+RESCAN_INTERVAL_SECONDS=300 # Scenario rescan interval during trading
+
+# Optional — Smart Scanner (realtime mode only)
RSI_OVERSOLD_THRESHOLD=30   # 0-50, oversold threshold
RSI_MOMENTUM_THRESHOLD=70   # 50-100, momentum threshold
VOL_MULTIPLIER=2.0          # Minimum volume ratio (2.0 = 200%)
SCANNER_TOP_N=3             # Max qualified candidates per scan
+
+# Optional — Dashboard
+DASHBOARD_ENABLED=false     # Enable FastAPI dashboard
+DASHBOARD_HOST=127.0.0.1    # Dashboard bind address
+DASHBOARD_PORT=8080         # Dashboard port (1-65535)

+# Optional — Telegram
TELEGRAM_BOT_TOKEN=1234567890:ABCdefGHIjklMNOpqrsTUVwxyz
TELEGRAM_CHAT_ID=123456789
TELEGRAM_ENABLED=true
+TELEGRAM_COMMANDS_ENABLED=true  # Enable bidirectional commands
+TELEGRAM_POLLING_INTERVAL=1.0   # Command polling interval (seconds)

-# Smart Scanner (optional, realtime mode only)
-RSI_OVERSOLD_THRESHOLD=30   # 0-50, oversold threshold
-RSI_MOMENTUM_THRESHOLD=70   # 50-100, momentum threshold
-VOL_MULTIPLIER=2.0          # Minimum volume ratio (2.0 = 200%)
-SCANNER_TOP_N=3             # Max qualified candidates per scan
+
+# Optional — Backup
+BACKUP_ENABLED=false
+BACKUP_DIR=data/backups
+S3_ENDPOINT_URL=...
+S3_ACCESS_KEY=...
+S3_SECRET_KEY=...
+S3_BUCKET_NAME=...
+S3_REGION=...
+
+# Optional — External Data
+NEWS_API_KEY=...
+NEWS_API_PROVIDER=...
+MARKET_DATA_API_KEY=...
```

Tests use in-memory SQLite (`DB_PATH=":memory:"`) and dummy credentials via `tests/conftest.py`.
@@ -340,4 +601,9 @@ Tests use in-memory SQLite (`DB_PATH=":memory:"`) and dummy credentials via `tests/conftest.py`.

- Invalid token → log error, trading unaffected
- Rate limit exceeded → queued via rate limiter

-**Guarantee**: Notification failures never interrupt trading operations.
+### Playbook Generation Failure
+- Timeout → fall back to defensive playbook (`DEFENSIVE_PLAYBOOK_ON_FAILURE`)
+- API error → use previous day's playbook if available
+- No playbook → skip pre-market phase, fall back to direct AI calls
+
+**Guarantee**: Notification and dashboard failures never interrupt trading operations.
@@ -119,7 +119,7 @@ No decorator needed for async tests.

# Install all dependencies (production + dev)
pip install -e ".[dev]"

-# Run full test suite with coverage
+# Run full test suite with coverage (551 tests across 25 files)
pytest -v --cov=src --cov-report=term-missing

# Run a single test file

@@ -137,11 +137,61 @@ mypy src/ --strict

# Run the trading agent
python -m src.main --mode=paper

+# Run with dashboard enabled
+python -m src.main --mode=paper --dashboard

# Docker
docker compose up -d ouroboros         # Run agent
docker compose --profile test up test  # Run tests in container
```

## Dashboard

The FastAPI dashboard provides read-only monitoring of the trading system.

### Starting the Dashboard

```bash
# Via CLI flag
python -m src.main --mode=paper --dashboard

# Via environment variable
DASHBOARD_ENABLED=true python -m src.main --mode=paper
```

Dashboard runs as a daemon thread on `DASHBOARD_HOST:DASHBOARD_PORT` (default: `127.0.0.1:8080`).

### API Endpoints

| Endpoint | Description |
|----------|-------------|
| `GET /` | HTML dashboard UI |
| `GET /api/status` | Daily trading status by market |
| `GET /api/playbook/{date}` | Playbook for specific date (query: `market`) |
| `GET /api/scorecard/{date}` | Daily scorecard from L6_DAILY context |
| `GET /api/performance` | Performance metrics by market and combined |
| `GET /api/context/{layer}` | Context data by layer L1-L7 (query: `timeframe`) |
| `GET /api/decisions` | Decision log entries (query: `limit`, `market`) |
| `GET /api/scenarios/active` | Today's matched scenarios |

## Telegram Commands

When `TELEGRAM_COMMANDS_ENABLED=true` (default), the bot accepts these interactive commands:

| Command | Description |
|---------|-------------|
| `/help` | List available commands |
| `/status` | Show trading status (mode, markets, P&L) |
| `/positions` | Display account summary (balance, cash, P&L) |
| `/report` | Daily summary metrics (trades, P&L, win rate) |
| `/scenarios` | Show today's playbook scenarios |
| `/review` | Display recent scorecards (L6_DAILY layer) |
| `/dashboard` | Show dashboard URL if enabled |
| `/stop` | Pause trading |
| `/resume` | Resume trading |

Commands are only processed from the authorized `TELEGRAM_CHAT_ID`.

## Environment Setup

```bash
@@ -64,3 +64,50 @@

**Note:** Realtime mode only. Daily mode uses a static watchlist for batch efficiency.

**Issue/PR:** #76, #77

---

## 2026-02-10

### Plan-implementation consistency checks during code review

**Background:**
- Code review had no step for verifying that the implementation matches the plan approved in EnterPlanMode
- Risk that implementations diverging from the plan pass review unnoticed

**Requirements:**
1. Every PR review must check plan-implementation consistency
2. Changes not in the plan require explicit justification
3. If a plan item is omitted, the reason must be recorded in the PR description
4. Verify that the scope matches the plan

**Implementation:**
- Added a Code Review Checklist section to `docs/workflow.md`
- Four categories: Plan Consistency (mandatory), Safety & Constraints, Quality, Workflow

**Issue/PR:** #114

---

## 2026-02-16

### Documentation v2 sync (full doc refresh)

**Background:**
- After the v2 features landed, the docs had drifted far from the actual code
- Docs recorded 54 tests / 4 files while the real count is 551 tests / 25 files
- Core v2 features (Playbook, Scenario Engine, Dashboard, Telegram Commands, Daily Review, Context System, Backup) were undocumented

**Requirements:**
1. `docs/testing.md` — reflect 551 tests / 25 files, describe every test file
2. `docs/architecture.md` — add v2 components (Strategy, Context, Dashboard, Decision Logger, etc.), Playbook Mode data flow, five DB schema tables, v2 environment variables
3. `docs/commands.md` — dashboard run commands, reference for the 9 Telegram commands
4. `CLAUDE.md` — expand the Project Structure tree, update test counts, `--dashboard` flag
5. `docs/skills.md` — standardize the DB filename to `trades.db`, add dashboard commands
6. Keep existing valid troubleshooting notes, code examples, etc.

**Implementation:**
- Updated 6 documentation files
- An earlier attempt (2 commits) deleted too much existing content and was discarded; redone from main

**Issue/PR:** #131, PR #134
@@ -34,6 +34,12 @@ python -m src.main --mode=paper
```
Runs the agent in paper-trading mode (no real orders).

+### Start Trading Agent with Dashboard
+```bash
+python -m src.main --mode=paper --dashboard
+```
+Runs the agent with FastAPI dashboard on `127.0.0.1:8080` (configurable via `DASHBOARD_HOST`/`DASHBOARD_PORT`).

### Start Trading Agent (Production)
```bash
docker compose up -d ouroboros

@@ -59,7 +65,7 @@ Analyze the last 30 days of trade logs and generate performance metrics.
python -m src.evolution.optimizer --evolve
```
Triggers the evolution engine to:
-1. Analyze `trade_logs.db` for failing patterns
+1. Analyze `trades.db` for failing patterns
2. Ask Gemini to generate a new strategy
3. Run tests on the new strategy
4. Create a PR if tests pass

@@ -91,12 +97,12 @@ curl http://localhost:8080/health

### View Trade Logs
```bash
-sqlite3 data/trade_logs.db "SELECT * FROM trades ORDER BY timestamp DESC LIMIT 20;"
+sqlite3 data/trades.db "SELECT * FROM trades ORDER BY timestamp DESC LIMIT 20;"
```

### Export Trade History
```bash
-sqlite3 -header -csv data/trade_logs.db "SELECT * FROM trades;" > trades_export.csv
+sqlite3 -header -csv data/trades.db "SELECT * FROM trades;" > trades_export.csv
```

## Safety Checklist (Pre-Deploy)
docs/testing.md (206 changed lines)
@@ -2,51 +2,29 @@

## Test Structure

-**54 tests** across four files. `asyncio_mode = "auto"` in pyproject.toml — async tests need no special decorator.
+**551 tests** across **25 files**. `asyncio_mode = "auto"` in pyproject.toml — async tests need no special decorator.

The `settings` fixture in `conftest.py` provides safe defaults with test credentials and in-memory DB.

### Test Files

-#### `tests/test_risk.py` (11 tests)
-- Circuit breaker boundaries
-- Fat-finger edge cases
+#### Core Components
+
+##### `tests/test_risk.py` (14 tests)
+- Circuit breaker boundaries and exact threshold triggers
+- Fat-finger edge cases and percentage validation
- P&L calculation edge cases
- Order validation logic

**Example:**
```python
def test_circuit_breaker_exact_threshold(risk_manager):
    """Circuit breaker should trip at exactly -3.0%."""
    with pytest.raises(CircuitBreakerTripped):
        risk_manager.validate_order(
            current_pnl_pct=-3.0,
            order_amount=1000,
            total_cash=10000
        )
```

-#### `tests/test_broker.py` (6 tests)
+##### `tests/test_broker.py` (11 tests)
- OAuth token lifecycle
- Rate limiting enforcement
- Hash key generation
- Network error handling
- SSL context configuration

**Example:**
```python
async def test_rate_limiter(broker):
    """Rate limiter should delay requests to stay under 10 RPS."""
    start = time.monotonic()
    for _ in range(15):  # 15 requests
        await broker._rate_limiter.acquire()
    elapsed = time.monotonic() - start
    assert elapsed >= 1.0  # Should take at least 1 second
```

-#### `tests/test_brain.py` (18 tests)
-- Valid JSON parsing
-- Markdown-wrapped JSON handling
+##### `tests/test_brain.py` (24 tests)
+- Valid JSON parsing and markdown-wrapped JSON handling
- Malformed JSON fallback
- Missing fields handling
- Invalid action validation

@@ -54,33 +32,143 @@ async def test_rate_limiter(broker):

- Empty response handling
- Prompt construction for different markets

**Example:**
```python
async def test_confidence_below_threshold_forces_hold(brain):
    """Decisions below confidence threshold should force HOLD."""
    decision = brain.parse_response('{"action":"BUY","confidence":70,"rationale":"test"}')
    assert decision.action == "HOLD"
    assert decision.confidence == 70
```

-#### `tests/test_market_schedule.py` (19 tests)
+##### `tests/test_market_schedule.py` (24 tests)
- Market open/close logic
- Timezone handling (UTC, Asia/Seoul, America/New_York, etc.)
- DST (Daylight Saving Time) transitions
-- Weekend handling
-- Lunch break logic
+- Weekend handling and lunch break logic
- Multiple market filtering
- Next market open calculation

**Example:**
```python
def test_is_market_open_during_trading_hours():
    """Market should be open during regular trading hours."""
    # KRX: 9:00-15:30 KST, no lunch break
    market = MARKETS["KR"]
    trading_time = datetime(2026, 2, 3, 10, 0, tzinfo=ZoneInfo("Asia/Seoul"))  # Monday 10:00
    assert is_market_open(market, trading_time) is True
```

##### `tests/test_db.py` (3 tests)
- Database initialization and table creation
- Trade logging with all fields (market, exchange_code, decision_id)
- Query and retrieval operations

##### `tests/test_main.py` (37 tests)
- Trading loop orchestration
- Market iteration and stock processing
- Dashboard integration (`--dashboard` flag)
- Telegram command handler wiring
- Error handling and graceful shutdown

#### Strategy & Playbook (v2)

##### `tests/test_pre_market_planner.py` (37 tests)
- Pre-market playbook generation
- Gemini API integration for scenario creation
- Timeout handling and defensive playbook fallback
- Multi-market playbook generation

##### `tests/test_scenario_engine.py` (44 tests)
- Scenario matching against live market data
- Confidence scoring and threshold filtering
- Multiple scenario type handling
- Edge cases (no match, partial match, expired scenarios)

##### `tests/test_playbook_store.py` (23 tests)
- Playbook persistence to SQLite
- Date-based retrieval and market filtering
- Playbook status management (generated, active, expired)
- JSON serialization/deserialization

##### `tests/test_strategy_models.py` (33 tests)
- Pydantic model validation for scenarios, playbooks, decisions
- Field constraints and default values
- Serialization round-trips

#### Analysis & Scanning

##### `tests/test_volatility.py` (24 tests)
- ATR and RSI calculation accuracy
- Volume surge ratio computation
- Momentum scoring
- Breakout/breakdown pattern detection
- Market scanner watchlist management

##### `tests/test_smart_scanner.py` (13 tests)
- Python-first filtering pipeline
- RSI and volume ratio filter logic
- Candidate scoring and ranking
- Fallback to static watchlist

#### Context & Memory

##### `tests/test_context.py` (18 tests)
- L1-L7 layer storage and retrieval
- Context key-value CRUD operations
- Timeframe-based queries
- Layer metadata management

##### `tests/test_context_scheduler.py` (5 tests)
- Periodic context aggregation scheduling
- Layer summarization triggers

#### Evolution & Review

##### `tests/test_evolution.py` (24 tests)
- Strategy optimization loop
- High-confidence losing trade analysis
- Generated strategy validation

##### `tests/test_daily_review.py` (10 tests)
- End-of-day review generation
- Trade performance summarization
- Context layer (L6_DAILY) integration

##### `tests/test_scorecard.py` (3 tests)
- Daily scorecard metrics calculation
- Win rate, P&L, confidence tracking

#### Notifications & Commands

##### `tests/test_telegram.py` (25 tests)
- Message sending and formatting
- Rate limiting (leaky bucket)
- Error handling (network timeout, invalid token)
- Auto-disable on missing credentials
- Notification types (trade, circuit breaker, fat-finger, market events)

##### `tests/test_telegram_commands.py` (31 tests)
- 9 command handlers (/help, /status, /positions, /report, /scenarios, /review, /dashboard, /stop, /resume)
- Long polling and command dispatch
- Authorization filtering by chat_id
- Command response formatting

#### Dashboard

##### `tests/test_dashboard.py` (14 tests)
- FastAPI endpoint responses (8 API routes)
- Status, playbook, scorecard, performance, context, decisions, scenarios
- Query parameter handling (market, date, limit)

#### Performance & Quality

##### `tests/test_token_efficiency.py` (34 tests)
- Gemini token usage optimization
- Prompt size reduction verification
- Cache effectiveness

##### `tests/test_latency_control.py` (30 tests)
- API call latency measurement
- Rate limiter timing accuracy
- Async operation overhead

##### `tests/test_decision_logger.py` (9 tests)
- Decision audit trail completeness
- Context snapshot capture
- Outcome tracking (P&L, accuracy)

##### `tests/test_data_integration.py` (38 tests)
- External data source integration
- News API, market data, economic calendar
- Error handling for API failures

##### `tests/test_backup.py` (23 tests)
- Backup scheduler and execution
- Cloud storage (S3) upload
- Health monitoring
- Data export functionality

## Coverage Requirements

@@ -91,20 +179,6 @@ Check coverage:
pytest -v --cov=src --cov-report=term-missing
```

-Expected output:
-```
-Name                          Stmts   Miss  Cover   Missing
------------------------------------------------------------
-src/brain/gemini_client.py       85      5    94%   165-169
-src/broker/kis_api.py           120     12    90%   ...
-src/core/risk_manager.py         35      2    94%   ...
-src/db.py                        25      1    96%   ...
-src/main.py                     150     80    47%   (excluded from CI)
-src/markets/schedule.py          95      3    97%   ...
------------------------------------------------------------
-TOTAL                           510    103    80%
-```
-
-**Note:** `main.py` has lower coverage as it contains the main loop which is tested via integration/manual testing.

## Test Configuration
@@ -74,3 +74,37 @@ task_tool(
```

Use `run_in_background=True` for independent tasks that don't block subsequent work.

## Code Review Checklist

**CRITICAL: Every PR review MUST verify plan-implementation consistency.**

Before approving any PR, the reviewer (human or agent) must check ALL of the following:

### 1. Plan Consistency (MANDATORY)

- [ ] **Implementation matches the approved plan** — Compare the actual code changes against the plan created during `EnterPlanMode`. Every item in the plan must be addressed.
- [ ] **No unplanned changes** — If the implementation includes changes not in the plan, they must be explicitly justified.
- [ ] **No plan items omitted** — If any planned item was skipped, the reason must be documented in the PR description.
- [ ] **Scope matches** — The PR does not exceed or fall short of the planned scope.

### 2. Safety & Constraints

- [ ] `src/core/risk_manager.py` is unchanged (READ-ONLY)
- [ ] Circuit breaker threshold not weakened (only stricter allowed)
- [ ] Fat-finger protection (30% max order) still enforced
- [ ] Confidence < 80 still forces HOLD
- [ ] No hardcoded API keys or secrets

### 3. Quality

- [ ] All new/modified code has corresponding tests
- [ ] Test coverage >= 80%
- [ ] `ruff check src/ tests/` passes (no lint errors)
- [ ] No `assert` statements removed from tests

### 4. Workflow

- [ ] PR references the Gitea issue number
- [ ] Feature branch follows naming convention (`feature/issue-N-description`)
- [ ] Commit messages are clear and descriptive
@@ -9,6 +9,8 @@ dependencies = [
    "pydantic-settings>=2.1,<3",
    "google-genai>=1.0,<2",
    "scipy>=1.11,<2",
+    "fastapi>=0.110,<1",
+    "uvicorn>=0.29,<1",
]

[project.optional-dependencies]
@@ -108,7 +108,7 @@ class MarketScanner:
        self.context_store.set_context(
            ContextLayer.L7_REALTIME,
            timeframe,
-            f"{market.code}_{stock_code}_volatility",
+            f"volatility_{market.code}_{stock_code}",
            {
                "price": metrics.current_price,
                "atr": metrics.atr,

@@ -179,7 +179,7 @@ class MarketScanner:
        self.context_store.set_context(
            ContextLayer.L7_REALTIME,
            timeframe,
-            f"{market.code}_scan_result",
+            f"scan_result_{market.code}",
            {
                "total_scanned": len(valid_metrics),
                "top_movers": [m.stock_code for m in top_movers],
@@ -83,6 +83,11 @@ class Settings(BaseSettings):
    TELEGRAM_COMMANDS_ENABLED: bool = True
    TELEGRAM_POLLING_INTERVAL: float = 1.0  # seconds

+    # Dashboard (optional)
+    DASHBOARD_ENABLED: bool = False
+    DASHBOARD_HOST: str = "127.0.0.1"
+    DASHBOARD_PORT: int = Field(default=8080, ge=1, le=65535)

    model_config = {"env_file": ".env", "env_file_encoding": "utf-8"}

    @property

@@ -96,4 +101,7 @@ class Settings(BaseSettings):
    @property
    def enabled_market_list(self) -> list[str]:
        """Parse ENABLED_MARKETS into list of market codes."""
-        return [m.strip() for m in self.ENABLED_MARKETS.split(",") if m.strip()]
+        from src.markets.schedule import expand_market_codes
+
+        raw = [m.strip() for m in self.ENABLED_MARKETS.split(",") if m.strip()]
+        return expand_market_codes(raw)
@@ -5,6 +5,7 @@ The context tree implements Pillar 2: hierarchical memory management across
"""

from src.context.layer import ContextLayer
+from src.context.scheduler import ContextScheduler
from src.context.store import ContextStore

-__all__ = ["ContextLayer", "ContextStore"]
+__all__ = ["ContextLayer", "ContextScheduler", "ContextStore"]
@@ -18,52 +18,83 @@ class ContextAggregator:
        self.conn = conn
        self.store = ContextStore(conn)

-    def aggregate_daily_from_trades(self, date: str | None = None) -> None:
+    def aggregate_daily_from_trades(
+        self, date: str | None = None, market: str | None = None
+    ) -> None:
        """Aggregate L6 (daily) context from trades table.

        Args:
            date: Date in YYYY-MM-DD format. If None, uses today.
+           market: Market code filter (e.g., "KR", "US"). If None, aggregates all markets.
        """
        if date is None:
            date = datetime.now(UTC).date().isoformat()

-        # Calculate daily metrics from trades
-        cursor = self.conn.execute(
-            """
-            SELECT
-                COUNT(*) as trade_count,
-                SUM(CASE WHEN action = 'BUY' THEN 1 ELSE 0 END) as buys,
-                SUM(CASE WHEN action = 'SELL' THEN 1 ELSE 0 END) as sells,
-                SUM(CASE WHEN action = 'HOLD' THEN 1 ELSE 0 END) as holds,
-                AVG(confidence) as avg_confidence,
-                SUM(pnl) as total_pnl,
-                COUNT(DISTINCT stock_code) as unique_stocks,
-                SUM(CASE WHEN pnl > 0 THEN 1 ELSE 0 END) as wins,
-                SUM(CASE WHEN pnl < 0 THEN 1 ELSE 0 END) as losses
-            FROM trades
-            WHERE DATE(timestamp) = ?
-            """,
-            (date,),
-        )
-        row = cursor.fetchone()
-
-        if row and row[0] > 0:  # At least one trade
-            trade_count, buys, sells, holds, avg_conf, total_pnl, stocks, wins, losses = row
-
-            # Store daily metrics in L6
-            self.store.set_context(ContextLayer.L6_DAILY, date, "trade_count", trade_count)
-            self.store.set_context(ContextLayer.L6_DAILY, date, "buys", buys)
-            self.store.set_context(ContextLayer.L6_DAILY, date, "sells", sells)
-            self.store.set_context(ContextLayer.L6_DAILY, date, "holds", holds)
-            self.store.set_context(
-                ContextLayer.L6_DAILY, date, "avg_confidence", round(avg_conf, 2)
-            )
-            self.store.set_context(
-                ContextLayer.L6_DAILY, date, "total_pnl", round(total_pnl, 2)
-            )
-            self.store.set_context(ContextLayer.L6_DAILY, date, "unique_stocks", stocks)
-            win_rate = round(wins / max(wins + losses, 1) * 100, 2)
-            self.store.set_context(ContextLayer.L6_DAILY, date, "win_rate", win_rate)
+        if market is None:
+            cursor = self.conn.execute(
+                """
+                SELECT DISTINCT market
+                FROM trades
+                WHERE DATE(timestamp) = ?
+                """,
+                (date,),
+            )
+            markets = [row[0] for row in cursor.fetchall() if row[0]]
+        else:
+            markets = [market]
+
+        for market_code in markets:
+            # Calculate daily metrics from trades for the market
+            cursor = self.conn.execute(
+                """
+                SELECT
+                    COUNT(*) as trade_count,
+                    SUM(CASE WHEN action = 'BUY' THEN 1 ELSE 0 END) as buys,
+                    SUM(CASE WHEN action = 'SELL' THEN 1 ELSE 0 END) as sells,
+                    SUM(CASE WHEN action = 'HOLD' THEN 1 ELSE 0 END) as holds,
+                    AVG(confidence) as avg_confidence,
+                    SUM(pnl) as total_pnl,
+                    COUNT(DISTINCT stock_code) as unique_stocks,
+                    SUM(CASE WHEN pnl > 0 THEN 1 ELSE 0 END) as wins,
+                    SUM(CASE WHEN pnl < 0 THEN 1 ELSE 0 END) as losses
+                FROM trades
+                WHERE DATE(timestamp) = ? AND market = ?
+                """,
+                (date, market_code),
+            )
+            row = cursor.fetchone()
+
+            if row and row[0] > 0:  # At least one trade
+                trade_count, buys, sells, holds, avg_conf, total_pnl, stocks, wins, losses = row
+
+                key_suffix = f"_{market_code}"
+
+                # Store daily metrics in L6 with market suffix
+                self.store.set_context(
+                    ContextLayer.L6_DAILY, date, f"trade_count{key_suffix}", trade_count
+                )
+                self.store.set_context(ContextLayer.L6_DAILY, date, f"buys{key_suffix}", buys)
+                self.store.set_context(ContextLayer.L6_DAILY, date, f"sells{key_suffix}", sells)
+                self.store.set_context(ContextLayer.L6_DAILY, date, f"holds{key_suffix}", holds)
+                self.store.set_context(
+                    ContextLayer.L6_DAILY,
+                    date,
+                    f"avg_confidence{key_suffix}",
+                    round(avg_conf, 2),
+                )
+                self.store.set_context(
+                    ContextLayer.L6_DAILY,
+                    date,
+                    f"total_pnl{key_suffix}",
+                    round(total_pnl, 2),
+                )
+                self.store.set_context(
+                    ContextLayer.L6_DAILY, date, f"unique_stocks{key_suffix}", stocks
+                )
+                win_rate = round(wins / max(wins + losses, 1) * 100, 2)
+                self.store.set_context(
+                    ContextLayer.L6_DAILY, date, f"win_rate{key_suffix}", win_rate
+                )
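A quick usage sketch of the per-market aggregation above (in-memory DB with a minimal `trades` table; the exact `ContextStore` schema setup and `get_context` signature are assumptions):

```python
import sqlite3

from src.context.aggregator import ContextAggregator
from src.context.layer import ContextLayer
from src.context.store import ContextStore

conn = sqlite3.connect(":memory:")
# Minimal table for the demo; the real schema lives in src/db.py.
conn.execute(
    "CREATE TABLE trades (timestamp TEXT, stock_code TEXT, action TEXT, "
    "confidence INTEGER, pnl REAL, market TEXT)"
)
conn.execute(
    "INSERT INTO trades VALUES ('2026-02-16T10:00:00+00:00', '005930', 'BUY', 85, 120.0, 'KR')"
)

aggregator = ContextAggregator(conn)
aggregator.aggregate_daily_from_trades("2026-02-16", market="KR")

store = ContextStore(conn)
# Assumed read-back signature; one winning trade gives a 100.0 win rate.
print(store.get_context(ContextLayer.L6_DAILY, "2026-02-16", "win_rate_KR"))
```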
    def aggregate_weekly_from_daily(self, week: str | None = None) -> None:
        """Aggregate L5 (weekly) context from L6 (daily).

@@ -92,14 +123,25 @@ class ContextAggregator:
            daily_data[row[0]].append(json.loads(row[1]))

        if daily_data:
-            # Sum all PnL values
+            # Sum all PnL values (market-specific if suffixed)
            if "total_pnl" in daily_data:
                total_pnl = sum(daily_data["total_pnl"])
                self.store.set_context(
                    ContextLayer.L5_WEEKLY, week, "weekly_pnl", round(total_pnl, 2)
                )

-            # Average all confidence values
+            for key, values in daily_data.items():
+                if key.startswith("total_pnl_"):
+                    market_code = key.split("total_pnl_", 1)[1]
+                    total_pnl = sum(values)
+                    self.store.set_context(
+                        ContextLayer.L5_WEEKLY,
+                        week,
+                        f"weekly_pnl_{market_code}",
+                        round(total_pnl, 2),
+                    )
+
+            # Average all confidence values (market-specific if suffixed)
            if "avg_confidence" in daily_data:
                conf_values = daily_data["avg_confidence"]
                avg_conf = sum(conf_values) / len(conf_values)

@@ -107,6 +149,17 @@ class ContextAggregator:
                    ContextLayer.L5_WEEKLY, week, "avg_confidence", round(avg_conf, 2)
                )

+            for key, values in daily_data.items():
+                if key.startswith("avg_confidence_"):
+                    market_code = key.split("avg_confidence_", 1)[1]
+                    avg_conf = sum(values) / len(values)
+                    self.store.set_context(
+                        ContextLayer.L5_WEEKLY,
+                        week,
+                        f"avg_confidence_{market_code}",
+                        round(avg_conf, 2),
+                    )

    def aggregate_monthly_from_weekly(self, month: str | None = None) -> None:
        """Aggregate L4 (monthly) context from L5 (weekly).
@@ -135,8 +188,16 @@ class ContextAggregator:

        if weekly_data:
            # Sum all weekly PnL values
+            total_pnl_values: list[float] = []
            if "weekly_pnl" in weekly_data:
-                total_pnl = sum(weekly_data["weekly_pnl"])
+                total_pnl_values.extend(weekly_data["weekly_pnl"])
+
+            for key, values in weekly_data.items():
+                if key.startswith("weekly_pnl_"):
+                    total_pnl_values.extend(values)
+
+            if total_pnl_values:
+                total_pnl = sum(total_pnl_values)
                self.store.set_context(
                    ContextLayer.L4_MONTHLY, month, "monthly_pnl", round(total_pnl, 2)
                )
@@ -230,21 +291,44 @@ class ContextAggregator:
        )

    def run_all_aggregations(self) -> None:
-        """Run all aggregations from L7 to L1 (bottom-up)."""
+        """Run all aggregations from L7 to L1 (bottom-up).
+
+        All timeframes are derived from the latest trade timestamp so that
+        past data re-aggregation produces consistent results across layers.
+        """
+        cursor = self.conn.execute("SELECT MAX(timestamp) FROM trades")
+        row = cursor.fetchone()
+        if not row or row[0] is None:
+            return
+
+        ts_raw = row[0]
+        if ts_raw.endswith("Z"):
+            ts_raw = ts_raw.replace("Z", "+00:00")
+        latest_ts = datetime.fromisoformat(ts_raw)
+        trade_date = latest_ts.date()
+        date_str = trade_date.isoformat()
+
+        iso_year, iso_week, _ = trade_date.isocalendar()
+        week_str = f"{iso_year}-W{iso_week:02d}"
+        month_str = f"{trade_date.year}-{trade_date.month:02d}"
+        quarter = (trade_date.month - 1) // 3 + 1
+        quarter_str = f"{trade_date.year}-Q{quarter}"
+        year_str = str(trade_date.year)
+
        # L7 (trades) → L6 (daily)
-        self.aggregate_daily_from_trades()
+        self.aggregate_daily_from_trades(date_str)

        # L6 (daily) → L5 (weekly)
-        self.aggregate_weekly_from_daily()
+        self.aggregate_weekly_from_daily(week_str)

        # L5 (weekly) → L4 (monthly)
-        self.aggregate_monthly_from_weekly()
+        self.aggregate_monthly_from_weekly(month_str)

        # L4 (monthly) → L3 (quarterly)
-        self.aggregate_quarterly_from_monthly()
+        self.aggregate_quarterly_from_monthly(quarter_str)

        # L3 (quarterly) → L2 (annual)
-        self.aggregate_annual_from_quarterly()
+        self.aggregate_annual_from_quarterly(year_str)

        # L2 (annual) → L1 (legacy)
        self.aggregate_legacy_from_annual()
src/context/scheduler.py (new file, 135 lines)
@@ -0,0 +1,135 @@
"""Context aggregation scheduler for periodic rollups and cleanup."""
|
||||
|
||||
from __future__ import annotations
|
||||
|
||||
import sqlite3
|
||||
from calendar import monthrange
|
||||
from dataclasses import dataclass
|
||||
from datetime import UTC, datetime
|
||||
|
||||
from src.context.aggregator import ContextAggregator
|
||||
from src.context.store import ContextStore
|
||||
|
||||
|
||||
@dataclass(frozen=True)
|
||||
class ScheduleResult:
|
||||
"""Represents which scheduled tasks ran."""
|
||||
|
||||
weekly: bool = False
|
||||
monthly: bool = False
|
||||
quarterly: bool = False
|
||||
annual: bool = False
|
||||
legacy: bool = False
|
||||
cleanup: bool = False
|
||||
|
||||
|
||||
class ContextScheduler:
|
||||
"""Run periodic context aggregations and cleanup when due."""
|
||||
|
||||
def __init__(
|
||||
self,
|
||||
conn: sqlite3.Connection | None = None,
|
||||
aggregator: ContextAggregator | None = None,
|
||||
store: ContextStore | None = None,
|
||||
) -> None:
|
||||
if aggregator is None:
|
||||
if conn is None:
|
||||
raise ValueError("conn is required when aggregator is not provided")
|
||||
aggregator = ContextAggregator(conn)
|
||||
self.aggregator = aggregator
|
||||
|
||||
if store is None:
|
||||
store = getattr(aggregator, "store", None)
|
||||
if store is None:
|
||||
if conn is None:
|
||||
raise ValueError("conn is required when store is not provided")
|
||||
store = ContextStore(conn)
|
||||
self.store = store
|
||||
|
||||
self._last_run: dict[str, str] = {}
|
||||
|
||||
def run_if_due(self, now: datetime | None = None) -> ScheduleResult:
|
||||
"""Run scheduled aggregations if their schedule is due.
|
||||
|
||||
Args:
|
||||
now: Current datetime (UTC). If None, uses current time.
|
||||
|
||||
Returns:
|
||||
ScheduleResult indicating which tasks ran.
|
||||
"""
|
||||
if now is None:
|
||||
now = datetime.now(UTC)
|
||||
|
||||
today = now.date().isoformat()
|
||||
result = ScheduleResult()
|
||||
|
||||
if self._should_run("cleanup", today):
|
||||
self.store.cleanup_expired_contexts()
|
||||
result = self._with(result, cleanup=True)
|
||||
|
||||
if self._is_sunday(now) and self._should_run("weekly", today):
|
||||
week = now.strftime("%Y-W%V")
|
||||
self.aggregator.aggregate_weekly_from_daily(week)
|
||||
result = self._with(result, weekly=True)
|
||||
|
||||
if self._is_last_day_of_month(now) and self._should_run("monthly", today):
|
||||
month = now.strftime("%Y-%m")
|
||||
self.aggregator.aggregate_monthly_from_weekly(month)
|
||||
result = self._with(result, monthly=True)
|
||||
|
||||
if self._is_last_day_of_quarter(now) and self._should_run("quarterly", today):
|
||||
quarter = self._current_quarter(now)
|
||||
self.aggregator.aggregate_quarterly_from_monthly(quarter)
|
||||
result = self._with(result, quarterly=True)
|
||||
|
||||
if self._is_last_day_of_year(now) and self._should_run("annual", today):
|
||||
year = str(now.year)
|
||||
self.aggregator.aggregate_annual_from_quarterly(year)
|
||||
result = self._with(result, annual=True)
|
||||
|
||||
# Legacy rollup runs after annual aggregation.
|
||||
self.aggregator.aggregate_legacy_from_annual()
|
||||
result = self._with(result, legacy=True)
|
||||
|
||||
return result
|
||||
|
||||
def _should_run(self, key: str, date_str: str) -> bool:
|
||||
if self._last_run.get(key) == date_str:
|
||||
return False
|
||||
self._last_run[key] = date_str
|
||||
return True
|
||||
|
||||
@staticmethod
|
||||
def _is_sunday(now: datetime) -> bool:
|
||||
return now.weekday() == 6
|
||||
|
||||
@staticmethod
|
||||
def _is_last_day_of_month(now: datetime) -> bool:
|
||||
last_day = monthrange(now.year, now.month)[1]
|
||||
return now.day == last_day
|
||||
|
||||
@classmethod
|
||||
def _is_last_day_of_quarter(cls, now: datetime) -> bool:
|
||||
if now.month not in (3, 6, 9, 12):
|
||||
return False
|
||||
return cls._is_last_day_of_month(now)
|
||||
|
||||
@staticmethod
|
||||
def _is_last_day_of_year(now: datetime) -> bool:
|
||||
return now.month == 12 and now.day == 31
|
||||
|
||||
@staticmethod
|
||||
def _current_quarter(now: datetime) -> str:
|
||||
quarter = (now.month - 1) // 3 + 1
|
||||
return f"{now.year}-Q{quarter}"
|
||||
|
||||
@staticmethod
|
||||
def _with(result: ScheduleResult, **kwargs: bool) -> ScheduleResult:
|
||||
return ScheduleResult(
|
||||
weekly=kwargs.get("weekly", result.weekly),
|
||||
monthly=kwargs.get("monthly", result.monthly),
|
||||
quarterly=kwargs.get("quarterly", result.quarterly),
|
||||
annual=kwargs.get("annual", result.annual),
|
||||
legacy=kwargs.get("legacy", result.legacy),
|
||||
cleanup=kwargs.get("cleanup", result.cleanup),
|
||||
)
|
||||
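A sketch of exercising the scheduler deterministically; the injected `now` makes boundary days testable, and the date below is only an example:

```python
from datetime import UTC, datetime

from src.context.scheduler import ContextScheduler
from src.db import init_db

conn = init_db(":memory:")  # creates the trades/contexts schema
scheduler = ContextScheduler(conn=conn)

# 2026-03-31 is a Tuesday, a month end, and a quarter end:
first = scheduler.run_if_due(now=datetime(2026, 3, 31, 23, 0, tzinfo=UTC))
assert first.monthly and first.quarterly and not first.weekly

# Same calendar day again: _should_run dedupes per date, so nothing re-runs.
second = scheduler.run_if_due(now=datetime(2026, 3, 31, 23, 30, tzinfo=UTC))
assert not (second.monthly or second.quarterly or second.cleanup)
```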
src/dashboard/__init__.py (new file, 5 lines)
@@ -0,0 +1,5 @@
"""FastAPI dashboard package for observability APIs."""

from src.dashboard.app import create_dashboard_app

__all__ = ["create_dashboard_app"]
src/dashboard/app.py (new file, 361 lines)
@@ -0,0 +1,361 @@
"""FastAPI application for observability dashboard endpoints."""

from __future__ import annotations

import json
import sqlite3
from datetime import UTC, datetime
from pathlib import Path
from typing import Any

from fastapi import FastAPI, HTTPException, Query
from fastapi.responses import FileResponse


def create_dashboard_app(db_path: str) -> FastAPI:
    """Create dashboard FastAPI app bound to a SQLite database path."""
    app = FastAPI(title="The Ouroboros Dashboard", version="1.0.0")
    app.state.db_path = db_path

    @app.get("/")
    def index() -> FileResponse:
        index_path = Path(__file__).parent / "static" / "index.html"
        return FileResponse(index_path)

    @app.get("/api/status")
    def get_status() -> dict[str, Any]:
        today = datetime.now(UTC).date().isoformat()
        with _connect(db_path) as conn:
            market_rows = conn.execute(
                """
                SELECT DISTINCT market FROM (
                    SELECT market FROM trades WHERE DATE(timestamp) = ?
                    UNION
                    SELECT market FROM decision_logs WHERE DATE(timestamp) = ?
                    UNION
                    SELECT market FROM playbooks WHERE date = ?
                ) ORDER BY market
                """,
                (today, today, today),
            ).fetchall()
            markets = [row[0] for row in market_rows] if market_rows else []
            market_status: dict[str, Any] = {}
            total_trades = 0
            total_pnl = 0.0
            total_decisions = 0
            for market in markets:
                trade_row = conn.execute(
                    """
                    SELECT COUNT(*) AS c, COALESCE(SUM(pnl), 0.0) AS p
                    FROM trades
                    WHERE DATE(timestamp) = ? AND market = ?
                    """,
                    (today, market),
                ).fetchone()
                decision_row = conn.execute(
                    """
                    SELECT COUNT(*) AS c
                    FROM decision_logs
                    WHERE DATE(timestamp) = ? AND market = ?
                    """,
                    (today, market),
                ).fetchone()
                playbook_row = conn.execute(
                    """
                    SELECT status
                    FROM playbooks
                    WHERE date = ? AND market = ?
                    LIMIT 1
                    """,
                    (today, market),
                ).fetchone()
                market_status[market] = {
                    "trade_count": int(trade_row["c"] if trade_row else 0),
                    "total_pnl": float(trade_row["p"] if trade_row else 0.0),
                    "decision_count": int(decision_row["c"] if decision_row else 0),
                    "playbook_status": playbook_row["status"] if playbook_row else None,
                }
                total_trades += market_status[market]["trade_count"]
                total_pnl += market_status[market]["total_pnl"]
                total_decisions += market_status[market]["decision_count"]

        return {
            "date": today,
            "markets": market_status,
            "totals": {
                "trade_count": total_trades,
                "total_pnl": round(total_pnl, 2),
                "decision_count": total_decisions,
            },
        }

    @app.get("/api/playbook/{date_str}")
    def get_playbook(date_str: str, market: str = Query("KR")) -> dict[str, Any]:
        with _connect(db_path) as conn:
            row = conn.execute(
                """
                SELECT date, market, status, playbook_json, generated_at,
                       token_count, scenario_count, match_count
                FROM playbooks
                WHERE date = ? AND market = ?
                """,
                (date_str, market),
            ).fetchone()
        if row is None:
            raise HTTPException(status_code=404, detail="playbook not found")
        return {
            "date": row["date"],
            "market": row["market"],
            "status": row["status"],
            "playbook": json.loads(row["playbook_json"]),
            "generated_at": row["generated_at"],
            "token_count": row["token_count"],
            "scenario_count": row["scenario_count"],
            "match_count": row["match_count"],
        }

    @app.get("/api/scorecard/{date_str}")
    def get_scorecard(date_str: str, market: str = Query("KR")) -> dict[str, Any]:
        key = f"scorecard_{market}"
        with _connect(db_path) as conn:
            row = conn.execute(
                """
                SELECT value
                FROM contexts
                WHERE layer = 'L6_DAILY' AND timeframe = ? AND key = ?
                """,
                (date_str, key),
            ).fetchone()
        if row is None:
            raise HTTPException(status_code=404, detail="scorecard not found")
        return {"date": date_str, "market": market, "scorecard": json.loads(row["value"])}

    @app.get("/api/performance")
    def get_performance(market: str = Query("all")) -> dict[str, Any]:
        with _connect(db_path) as conn:
            if market == "all":
                by_market_rows = conn.execute(
                    """
                    SELECT market,
                           COUNT(*) AS total_trades,
                           SUM(CASE WHEN pnl > 0 THEN 1 ELSE 0 END) AS wins,
                           SUM(CASE WHEN pnl < 0 THEN 1 ELSE 0 END) AS losses,
                           COALESCE(SUM(pnl), 0.0) AS total_pnl,
                           COALESCE(AVG(confidence), 0.0) AS avg_confidence
                    FROM trades
                    GROUP BY market
                    ORDER BY market
                    """
                ).fetchall()
                combined = _performance_from_rows(by_market_rows)
                return {
                    "market": "all",
                    "combined": combined,
                    "by_market": [
                        _row_to_performance(row)
                        for row in by_market_rows
                    ],
                }

            row = conn.execute(
                """
                SELECT market,
                       COUNT(*) AS total_trades,
                       SUM(CASE WHEN pnl > 0 THEN 1 ELSE 0 END) AS wins,
                       SUM(CASE WHEN pnl < 0 THEN 1 ELSE 0 END) AS losses,
                       COALESCE(SUM(pnl), 0.0) AS total_pnl,
                       COALESCE(AVG(confidence), 0.0) AS avg_confidence
                FROM trades
                WHERE market = ?
                GROUP BY market
                """,
                (market,),
            ).fetchone()
        if row is None:
            return {"market": market, "metrics": _empty_performance(market)}
        return {"market": market, "metrics": _row_to_performance(row)}

    @app.get("/api/context/{layer}")
    def get_context_layer(
        layer: str,
        timeframe: str | None = Query(default=None),
        limit: int = Query(default=100, ge=1, le=1000),
    ) -> dict[str, Any]:
        with _connect(db_path) as conn:
            if timeframe is None:
                rows = conn.execute(
                    """
                    SELECT timeframe, key, value, updated_at
                    FROM contexts
                    WHERE layer = ?
                    ORDER BY updated_at DESC
                    LIMIT ?
                    """,
                    (layer, limit),
                ).fetchall()
            else:
                rows = conn.execute(
                    """
                    SELECT timeframe, key, value, updated_at
                    FROM contexts
                    WHERE layer = ? AND timeframe = ?
                    ORDER BY key
                    LIMIT ?
                    """,
                    (layer, timeframe, limit),
                ).fetchall()

        entries = [
            {
                "timeframe": row["timeframe"],
                "key": row["key"],
                "value": json.loads(row["value"]),
                "updated_at": row["updated_at"],
            }
            for row in rows
        ]
        return {
            "layer": layer,
            "timeframe": timeframe,
            "count": len(entries),
            "entries": entries,
        }

    @app.get("/api/decisions")
    def get_decisions(
        market: str = Query("KR"),
        limit: int = Query(default=50, ge=1, le=500),
    ) -> dict[str, Any]:
        with _connect(db_path) as conn:
            rows = conn.execute(
                """
                SELECT decision_id, timestamp, stock_code, market, exchange_code,
                       action, confidence, rationale, context_snapshot, input_data,
                       outcome_pnl, outcome_accuracy
                FROM decision_logs
                WHERE market = ?
                ORDER BY timestamp DESC
                LIMIT ?
                """,
                (market, limit),
            ).fetchall()
            decisions = []
            for row in rows:
                decisions.append(
                    {
                        "decision_id": row["decision_id"],
                        "timestamp": row["timestamp"],
                        "stock_code": row["stock_code"],
                        "market": row["market"],
                        "exchange_code": row["exchange_code"],
                        "action": row["action"],
                        "confidence": row["confidence"],
                        "rationale": row["rationale"],
                        "context_snapshot": json.loads(row["context_snapshot"]),
                        "input_data": json.loads(row["input_data"]),
                        "outcome_pnl": row["outcome_pnl"],
                        "outcome_accuracy": row["outcome_accuracy"],
                    }
                )
        return {"market": market, "count": len(decisions), "decisions": decisions}

    @app.get("/api/scenarios/active")
    def get_active_scenarios(
        market: str = Query("US"),
        date_str: str | None = Query(default=None),
        limit: int = Query(default=50, ge=1, le=500),
    ) -> dict[str, Any]:
        if date_str is None:
            date_str = datetime.now(UTC).date().isoformat()

        with _connect(db_path) as conn:
            rows = conn.execute(
                """
                SELECT timestamp, stock_code, action, confidence, rationale, context_snapshot
                FROM decision_logs
                WHERE market = ? AND DATE(timestamp) = ?
                ORDER BY timestamp DESC
                LIMIT ?
                """,
                (market, date_str, limit),
            ).fetchall()
            matches: list[dict[str, Any]] = []
            for row in rows:
                snapshot = json.loads(row["context_snapshot"])
                scenario_match = snapshot.get("scenario_match", {})
                if not isinstance(scenario_match, dict) or not scenario_match:
                    continue
                matches.append(
                    {
                        "timestamp": row["timestamp"],
                        "stock_code": row["stock_code"],
                        "action": row["action"],
                        "confidence": row["confidence"],
                        "rationale": row["rationale"],
                        "scenario_match": scenario_match,
                    }
                )
        return {"market": market, "date": date_str, "count": len(matches), "matches": matches}

    return app


def _connect(db_path: str) -> sqlite3.Connection:
    conn = sqlite3.connect(db_path)
    conn.row_factory = sqlite3.Row
    return conn


def _row_to_performance(row: sqlite3.Row) -> dict[str, Any]:
    wins = int(row["wins"] or 0)
    losses = int(row["losses"] or 0)
    total = int(row["total_trades"] or 0)
    win_rate = round((wins / (wins + losses) * 100), 2) if (wins + losses) > 0 else 0.0
    return {
        "market": row["market"],
        "total_trades": total,
        "wins": wins,
        "losses": losses,
        "win_rate": win_rate,
        "total_pnl": round(float(row["total_pnl"] or 0.0), 2),
        "avg_confidence": round(float(row["avg_confidence"] or 0.0), 2),
    }


def _performance_from_rows(rows: list[sqlite3.Row]) -> dict[str, Any]:
    total_trades = 0
    wins = 0
    losses = 0
    total_pnl = 0.0
    confidence_weighted = 0.0
    for row in rows:
        market_total = int(row["total_trades"] or 0)
        market_conf = float(row["avg_confidence"] or 0.0)
        total_trades += market_total
        wins += int(row["wins"] or 0)
        losses += int(row["losses"] or 0)
        total_pnl += float(row["total_pnl"] or 0.0)
        confidence_weighted += market_total * market_conf
    win_rate = round((wins / (wins + losses) * 100), 2) if (wins + losses) > 0 else 0.0
    avg_confidence = round(confidence_weighted / total_trades, 2) if total_trades > 0 else 0.0
    return {
        "market": "all",
        "total_trades": total_trades,
        "wins": wins,
        "losses": losses,
        "win_rate": win_rate,
        "total_pnl": round(total_pnl, 2),
        "avg_confidence": avg_confidence,
    }


def _empty_performance(market: str) -> dict[str, Any]:
    return {
        "market": market,
        "total_trades": 0,
        "wins": 0,
        "losses": 0,
        "win_rate": 0.0,
        "total_pnl": 0.0,
        "avg_confidence": 0.0,
    }
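To poke at these endpoints without the trading loop running, the app factory can be served standalone. A sketch, with a hypothetical database path:

```python
import uvicorn

from src.dashboard import create_dashboard_app

app = create_dashboard_app("ouroboros.db")  # hypothetical DB path
uvicorn.run(app, host="127.0.0.1", port=8000)

# Then, for example:
#   curl http://127.0.0.1:8000/api/status
#   curl "http://127.0.0.1:8000/api/performance?market=all"
```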
src/dashboard/static/index.html (new file, 61 lines)
@@ -0,0 +1,61 @@
<!doctype html>
<html lang="en">
  <head>
    <meta charset="UTF-8" />
    <meta name="viewport" content="width=device-width, initial-scale=1.0" />
    <title>The Ouroboros Dashboard</title>
    <style>
      :root {
        --bg: #0b1724;
        --panel: #12263a;
        --fg: #e6eef7;
        --muted: #9fb3c8;
        --accent: #3cb371;
      }
      body {
        margin: 0;
        font-family: ui-monospace, SFMono-Regular, Menlo, monospace;
        background: radial-gradient(circle at top left, #173b58, var(--bg));
        color: var(--fg);
      }
      .wrap {
        max-width: 900px;
        margin: 48px auto;
        padding: 0 16px;
      }
      .card {
        background: color-mix(in oklab, var(--panel), black 12%);
        border: 1px solid #28455f;
        border-radius: 12px;
        padding: 20px;
      }
      h1 {
        margin-top: 0;
      }
      code {
        color: var(--accent);
      }
      li {
        margin: 6px 0;
        color: var(--muted);
      }
    </style>
  </head>
  <body>
    <div class="wrap">
      <div class="card">
        <h1>The Ouroboros Dashboard API</h1>
        <p>Use the following endpoints:</p>
        <ul>
          <li><code>/api/status</code></li>
          <li><code>/api/playbook/{date}?market=KR</code></li>
          <li><code>/api/scorecard/{date}?market=KR</code></li>
          <li><code>/api/performance?market=all</code></li>
          <li><code>/api/context/{layer}</code></li>
          <li><code>/api/decisions?market=KR</code></li>
          <li><code>/api/scenarios/active?market=US</code></li>
        </ul>
      </div>
    </div>
  </body>
</html>
src/db.py (56 lines changed)
@@ -6,6 +6,7 @@ import json
import sqlite3
from datetime import UTC, datetime
from pathlib import Path
+from typing import Any


def init_db(db_path: str) -> sqlite3.Connection:
@@ -26,7 +27,8 @@ def init_db(db_path: str) -> sqlite3.Connection:
            price REAL,
            pnl REAL DEFAULT 0.0,
            market TEXT DEFAULT 'KR',
-           exchange_code TEXT DEFAULT 'KRX'
+           exchange_code TEXT DEFAULT 'KRX',
+           decision_id TEXT
        )
        """
    )
@@ -41,6 +43,8 @@ def init_db(db_path: str) -> sqlite3.Connection:
        conn.execute("ALTER TABLE trades ADD COLUMN exchange_code TEXT DEFAULT 'KRX'")
    if "selection_context" not in columns:
        conn.execute("ALTER TABLE trades ADD COLUMN selection_context TEXT")
+   if "decision_id" not in columns:
+       conn.execute("ALTER TABLE trades ADD COLUMN decision_id TEXT")

    # Context tree tables for multi-layered memory management
    conn.execute(
@@ -143,6 +147,7 @@ def log_trade(
    market: str = "KR",
    exchange_code: str = "KRX",
    selection_context: dict[str, any] | None = None,
+   decision_id: str | None = None,
) -> None:
    """Insert a trade record into the database.

@@ -166,9 +171,9 @@ def log_trade(
        """
        INSERT INTO trades (
            timestamp, stock_code, action, confidence, rationale,
-           quantity, price, pnl, market, exchange_code, selection_context
+           quantity, price, pnl, market, exchange_code, selection_context, decision_id
        )
-       VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
+       VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
        """,
        (
            datetime.now(UTC).isoformat(),
@@ -182,6 +187,51 @@ def log_trade(
            market,
            exchange_code,
            context_json,
+           decision_id,
        ),
    )
    conn.commit()


def get_latest_buy_trade(
    conn: sqlite3.Connection, stock_code: str, market: str
) -> dict[str, Any] | None:
    """Fetch the most recent BUY trade for a stock and market."""
    cursor = conn.execute(
        """
        SELECT decision_id, price, quantity
        FROM trades
        WHERE stock_code = ?
          AND market = ?
          AND action = 'BUY'
          AND decision_id IS NOT NULL
        ORDER BY timestamp DESC
        LIMIT 1
        """,
        (stock_code, market),
    )
    row = cursor.fetchone()
    if not row:
        return None
    return {"decision_id": row[0], "price": row[1], "quantity": row[2]}


def get_open_position(
    conn: sqlite3.Connection, stock_code: str, market: str
) -> dict[str, Any] | None:
    """Return open position if latest trade is BUY, else None."""
    cursor = conn.execute(
        """
        SELECT action, decision_id, price, quantity
        FROM trades
        WHERE stock_code = ?
          AND market = ?
        ORDER BY timestamp DESC
        LIMIT 1
        """,
        (stock_code, market),
    )
    row = cursor.fetchone()
    if not row or row[0] != "BUY":
        return None
    return {"decision_id": row[1], "price": row[2], "quantity": row[3]}
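The two helpers are designed around a BUY-then-SELL lifecycle. A sketch of that round trip against an in-memory database (all values are made up):

```python
from src.db import get_latest_buy_trade, get_open_position, init_db, log_trade

conn = init_db(":memory:")

log_trade(
    conn, "005930", "BUY", 85, "breakout",
    quantity=10, price=70000, market="KR", decision_id="d-123",
)
assert get_open_position(conn, "005930", "KR") == {
    "decision_id": "d-123", "price": 70000, "quantity": 10,
}

log_trade(
    conn, "005930", "SELL", 90, "target hit",
    quantity=10, price=71000, pnl=10000.0, market="KR", decision_id="d-124",
)
# Latest trade is a SELL, so no open position remains...
assert get_open_position(conn, "005930", "KR") is None
# ...but the original BUY stays retrievable for outcome attribution.
assert get_latest_buy_trade(conn, "005930", "KR")["decision_id"] == "d-123"
```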
src/evolution/__init__.py
@@ -1,12 +1,14 @@
"""Evolution engine for self-improving trading strategies."""

from src.evolution.ab_test import ABTester, ABTestResult, StrategyPerformance
+from src.evolution.daily_review import DailyReviewer
from src.evolution.optimizer import EvolutionOptimizer
from src.evolution.performance_tracker import (
    PerformanceDashboard,
    PerformanceTracker,
    StrategyMetrics,
)
+from src.evolution.scorecard import DailyScorecard

__all__ = [
    "EvolutionOptimizer",
@@ -16,4 +18,6 @@ __all__ = [
    "PerformanceTracker",
    "PerformanceDashboard",
    "StrategyMetrics",
+   "DailyScorecard",
+   "DailyReviewer",
]
src/evolution/daily_review.py (new file, 196 lines)
@@ -0,0 +1,196 @@
"""Daily review generator for market-scoped end-of-day scorecards."""

from __future__ import annotations

import json
import logging
import re
import sqlite3
from dataclasses import asdict

from src.brain.gemini_client import GeminiClient
from src.context.layer import ContextLayer
from src.context.store import ContextStore
from src.evolution.scorecard import DailyScorecard

logger = logging.getLogger(__name__)


class DailyReviewer:
    """Builds daily scorecards and optional AI-generated lessons."""

    def __init__(
        self,
        conn: sqlite3.Connection,
        context_store: ContextStore,
        gemini_client: GeminiClient | None = None,
    ) -> None:
        self._conn = conn
        self._context_store = context_store
        self._gemini = gemini_client

    def generate_scorecard(self, date: str, market: str) -> DailyScorecard:
        """Generate a market-scoped scorecard from decision logs and trades."""
        decision_rows = self._conn.execute(
            """
            SELECT action, confidence, context_snapshot
            FROM decision_logs
            WHERE DATE(timestamp) = ? AND market = ?
            """,
            (date, market),
        ).fetchall()

        total_decisions = len(decision_rows)
        buys = sum(1 for row in decision_rows if row[0] == "BUY")
        sells = sum(1 for row in decision_rows if row[0] == "SELL")
        holds = sum(1 for row in decision_rows if row[0] == "HOLD")
        avg_confidence = (
            round(sum(int(row[1]) for row in decision_rows) / total_decisions, 2)
            if total_decisions > 0
            else 0.0
        )

        matched = 0
        for row in decision_rows:
            try:
                snapshot = json.loads(row[2]) if row[2] else {}
            except json.JSONDecodeError:
                snapshot = {}
            scenario_match = snapshot.get("scenario_match", {})
            if isinstance(scenario_match, dict) and scenario_match:
                matched += 1
        scenario_match_rate = (
            round((matched / total_decisions) * 100, 2)
            if total_decisions
            else 0.0
        )

        trade_stats = self._conn.execute(
            """
            SELECT
                COALESCE(SUM(pnl), 0.0),
                SUM(CASE WHEN pnl > 0 THEN 1 ELSE 0 END),
                SUM(CASE WHEN pnl < 0 THEN 1 ELSE 0 END)
            FROM trades
            WHERE DATE(timestamp) = ? AND market = ?
            """,
            (date, market),
        ).fetchone()
        total_pnl = round(float(trade_stats[0] or 0.0), 2) if trade_stats else 0.0
        wins = int(trade_stats[1] or 0) if trade_stats else 0
        losses = int(trade_stats[2] or 0) if trade_stats else 0
        win_rate = round((wins / (wins + losses)) * 100, 2) if (wins + losses) > 0 else 0.0

        top_winners = [
            row[0]
            for row in self._conn.execute(
                """
                SELECT stock_code, SUM(pnl) AS stock_pnl
                FROM trades
                WHERE DATE(timestamp) = ? AND market = ?
                GROUP BY stock_code
                HAVING stock_pnl > 0
                ORDER BY stock_pnl DESC
                LIMIT 3
                """,
                (date, market),
            ).fetchall()
        ]

        top_losers = [
            row[0]
            for row in self._conn.execute(
                """
                SELECT stock_code, SUM(pnl) AS stock_pnl
                FROM trades
                WHERE DATE(timestamp) = ? AND market = ?
                GROUP BY stock_code
                HAVING stock_pnl < 0
                ORDER BY stock_pnl ASC
                LIMIT 3
                """,
                (date, market),
            ).fetchall()
        ]

        return DailyScorecard(
            date=date,
            market=market,
            total_decisions=total_decisions,
            buys=buys,
            sells=sells,
            holds=holds,
            total_pnl=total_pnl,
            win_rate=win_rate,
            avg_confidence=avg_confidence,
            scenario_match_rate=scenario_match_rate,
            top_winners=top_winners,
            top_losers=top_losers,
            lessons=[],
            cross_market_note="",
        )

    async def generate_lessons(self, scorecard: DailyScorecard) -> list[str]:
        """Generate concise lessons from scorecard metrics using Gemini."""
        if self._gemini is None:
            return []

        prompt = (
            "You are a trading performance reviewer.\n"
            "Return ONLY a JSON array of 1-3 short lessons in English.\n"
            f"Market: {scorecard.market}\n"
            f"Date: {scorecard.date}\n"
            f"Total decisions: {scorecard.total_decisions}\n"
            f"Buys/Sells/Holds: {scorecard.buys}/{scorecard.sells}/{scorecard.holds}\n"
            f"Total PnL: {scorecard.total_pnl}\n"
            f"Win rate: {scorecard.win_rate}%\n"
            f"Average confidence: {scorecard.avg_confidence}\n"
            f"Scenario match rate: {scorecard.scenario_match_rate}%\n"
            f"Top winners: {', '.join(scorecard.top_winners) or 'N/A'}\n"
            f"Top losers: {', '.join(scorecard.top_losers) or 'N/A'}\n"
        )

        try:
            decision = await self._gemini.decide(
                {
                    "stock_code": "REVIEW",
                    "market_name": scorecard.market,
                    "current_price": 0,
                    "prompt_override": prompt,
                }
            )
            return self._parse_lessons(decision.rationale)
        except Exception as exc:
            logger.warning("Failed to generate daily lessons: %s", exc)
            return []

    def store_scorecard_in_context(self, scorecard: DailyScorecard) -> None:
        """Store scorecard in L6 using market-scoped key."""
        self._context_store.set_context(
            ContextLayer.L6_DAILY,
            scorecard.date,
            f"scorecard_{scorecard.market}",
            asdict(scorecard),
        )

    def _parse_lessons(self, raw_text: str) -> list[str]:
        """Parse lessons from JSON array response or fallback text."""
        raw_text = raw_text.strip()
        try:
            parsed = json.loads(raw_text)
            if isinstance(parsed, list):
                return [str(item).strip() for item in parsed if str(item).strip()][:3]
        except json.JSONDecodeError:
            pass

        match = re.search(r"\[.*\]", raw_text, re.DOTALL)
        if match:
            try:
                parsed = json.loads(match.group(0))
                if isinstance(parsed, list):
                    return [str(item).strip() for item in parsed if str(item).strip()][:3]
            except json.JSONDecodeError:
                pass

        lines = [line.strip("-* \t") for line in raw_text.splitlines() if line.strip()]
        return lines[:3]
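A sketch of the review flow these pieces add, mirroring the market-close path in src/main.py. It assumes `init_db` has created the tables `generate_scorecard` queries (trades and decision_logs), and passing `gemini_client=None` keeps it offline:

```python
import asyncio

from src.context.store import ContextStore
from src.db import init_db
from src.evolution.daily_review import DailyReviewer

conn = init_db(":memory:")
reviewer = DailyReviewer(conn, ContextStore(conn), gemini_client=None)

scorecard = reviewer.generate_scorecard("2026-02-04", "KR")
reviewer.store_scorecard_in_context(scorecard)  # lands under scorecard_KR in L6_DAILY

# Without a Gemini client, generate_lessons short-circuits to []:
assert asyncio.run(reviewer.generate_lessons(scorecard)) == []
```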
src/evolution/scorecard.py (new file, 25 lines)
@@ -0,0 +1,25 @@
"""Daily scorecard model for end-of-day performance review."""

from __future__ import annotations

from dataclasses import dataclass, field


@dataclass
class DailyScorecard:
    """Structured daily performance snapshot for a single market."""

    date: str
    market: str
    total_decisions: int
    buys: int
    sells: int
    holds: int
    total_pnl: float
    win_rate: float
    avg_confidence: float
    scenario_match_rate: float
    top_winners: list[str] = field(default_factory=list)
    top_losers: list[str] = field(default_factory=list)
    lessons: list[str] = field(default_factory=list)
    cross_market_note: str = ""
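Because the model is a plain dataclass, persistence is just `asdict`, which is what `store_scorecard_in_context` relies on. A small sketch with made-up numbers:

```python
from dataclasses import asdict

from src.evolution.scorecard import DailyScorecard

card = DailyScorecard(
    date="2026-02-04",
    market="KR",
    total_decisions=12,
    buys=4,
    sells=3,
    holds=5,
    total_pnl=1520.0,
    win_rate=66.67,
    avg_confidence=78.5,
    scenario_match_rate=41.67,
)
payload = asdict(card)               # plain, JSON-serializable dict
assert payload["top_winners"] == []  # default_factory lists start empty
```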
src/main.py (463 lines changed)
@@ -8,12 +8,13 @@ from __future__ import annotations

import argparse
import asyncio
import json
import logging
import signal
+import threading
from datetime import UTC, datetime
from typing import Any

from src.analysis.scanner import MarketScanner
from src.analysis.smart_scanner import ScanCandidate, SmartVolatilityScanner
from src.analysis.volatility import VolatilityAnalyzer
from src.brain.context_selector import ContextSelector
@@ -21,12 +22,16 @@ from src.brain.gemini_client import GeminiClient, TradeDecision
from src.broker.kis_api import KISBroker
from src.broker.overseas import OverseasBroker
from src.config import Settings
from src.context.aggregator import ContextAggregator
from src.context.layer import ContextLayer
+from src.context.scheduler import ContextScheduler
from src.context.store import ContextStore
from src.core.criticality import CriticalityAssessor
from src.core.priority_queue import PriorityTaskQueue
from src.core.risk_manager import CircuitBreakerTripped, FatFingerRejected, RiskManager
-from src.db import init_db, log_trade
+from src.db import get_latest_buy_trade, get_open_position, init_db, log_trade
+from src.evolution.daily_review import DailyReviewer
from src.evolution.optimizer import EvolutionOptimizer
from src.logging.decision_logger import DecisionLogger
from src.logging_config import setup_logging
from src.markets.schedule import MarketInfo, get_next_market_open, get_open_markets
@@ -110,6 +115,7 @@ async def trading_cycle(

        current_price = safe_float(orderbook.get("output1", {}).get("stck_prpr", "0"))
        foreigner_net = safe_float(orderbook.get("output1", {}).get("frgn_ntby_qty", "0"))
+       price_change_pct = safe_float(orderbook.get("output1", {}).get("prdy_ctrt", "0"))
    else:
        # Overseas market
        price_data = await overseas_broker.get_overseas_price(
@@ -132,6 +138,7 @@ async def trading_cycle(

        current_price = safe_float(price_data.get("output", {}).get("last", "0"))
        foreigner_net = 0.0  # Not available for overseas
+       price_change_pct = safe_float(price_data.get("output", {}).get("rate", "0"))

    # Calculate daily P&L %
    pnl_pct = (
@@ -145,6 +152,7 @@ async def trading_cycle(
        "market_name": market.name,
        "current_price": current_price,
        "foreigner_net": foreigner_net,
+       "price_change_pct": price_change_pct,
    }

    # Enrich market_data with scanner metrics for scenario engine
@@ -154,6 +162,38 @@ async def trading_cycle(
        market_data["rsi"] = candidate.rsi
        market_data["volume_ratio"] = candidate.volume_ratio

    # 1.3. Record L7 real-time context (market-scoped keys)
    timeframe = datetime.now(UTC).isoformat()
    context_store.set_context(
        ContextLayer.L7_REALTIME,
        timeframe,
        f"volatility_{market.code}_{stock_code}",
        {
            "momentum_score": 50.0,
            "volume_surge": 1.0,
            "price_change_1m": 0.0,
        },
    )
    context_store.set_context(
        ContextLayer.L7_REALTIME,
        timeframe,
        f"price_{market.code}_{stock_code}",
        {"current_price": current_price},
    )
    if candidate:
        context_store.set_context(
            ContextLayer.L7_REALTIME,
            timeframe,
            f"rsi_{market.code}_{stock_code}",
            {"rsi": candidate.rsi},
        )
        context_store.set_context(
            ContextLayer.L7_REALTIME,
            timeframe,
            f"volume_ratio_{market.code}_{stock_code}",
            {"volume_ratio": candidate.volume_ratio},
        )

    # Build portfolio data for global rule evaluation
    portfolio_data = {
        "portfolio_pnl_pct": pnl_pct,
@@ -171,7 +211,7 @@ async def trading_cycle(
    volatility_data = context_store.get_context(
        ContextLayer.L7_REALTIME,
        latest_timeframe,
-       f"volatility_{stock_code}",
+       f"volatility_{market.code}_{stock_code}",
    )
    if volatility_data:
        volatility_score = volatility_data.get("momentum_score", 50.0)
@@ -204,6 +244,34 @@ async def trading_cycle(
            confidence=match.confidence,
            rationale=match.rationale,
        )
    stock_playbook = playbook.get_stock_playbook(stock_code)

    if decision.action == "HOLD":
        open_position = get_open_position(db_conn, stock_code, market.code)
        if open_position:
            entry_price = safe_float(open_position.get("price"), 0.0)
            if entry_price > 0:
                loss_pct = (current_price - entry_price) / entry_price * 100
                stop_loss_threshold = -2.0
                if stock_playbook and stock_playbook.scenarios:
                    stop_loss_threshold = stock_playbook.scenarios[0].stop_loss_pct

                if loss_pct <= stop_loss_threshold:
                    decision = TradeDecision(
                        action="SELL",
                        confidence=95,
                        rationale=(
                            f"Stop-loss triggered ({loss_pct:.2f}% <= "
                            f"{stop_loss_threshold:.2f}%)"
                        ),
                    )
                    logger.info(
                        "Stop-loss override for %s (%s): %.2f%% <= %.2f%%",
                        stock_code,
                        market.name,
                        loss_pct,
                        stop_loss_threshold,
                    )
    logger.info(
        "Decision for %s (%s): %s (confidence=%d)",
        stock_code,
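The stop-loss override in the hunk above is plain percentage arithmetic. The same rule as a standalone sketch (function name and numbers are illustrative):

```python
def stop_loss_override(current_price: float, entry_price: float,
                       threshold_pct: float = -2.0) -> bool:
    """Return True when the unrealized loss breaches the threshold.

    Mirrors the HOLD-branch check above: loss_pct is signed, so a
    position down 2.6% gives -2.6 <= -2.0 and forces a SELL.
    """
    if entry_price <= 0:
        return False
    loss_pct = (current_price - entry_price) / entry_price * 100
    return loss_pct <= threshold_pct

assert stop_loss_override(97.4, 100.0) is True   # -2.6% breaches -2.0%
assert stop_loss_override(98.5, 100.0) is False  # -1.5% does not
```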
@@ -242,12 +310,13 @@ async def trading_cycle(
    input_data = {
        "current_price": current_price,
        "foreigner_net": foreigner_net,
+       "price_change_pct": price_change_pct,
        "total_eval": total_eval,
        "total_cash": total_cash,
        "pnl_pct": pnl_pct,
    }

-   decision_logger.log_decision(
+   decision_id = decision_logger.log_decision(
        stock_code=stock_code,
        market=market.code,
        exchange_code=market.exchange_code,
@@ -259,6 +328,9 @@
    )

    # 3. Execute if actionable
+   quantity = 0
+   trade_price = current_price
+   trade_pnl = 0.0
    if decision.action in ("BUY", "SELL"):
        # Determine order size (simplified: 1 lot)
        quantity = 1
@@ -314,6 +386,18 @@
            except Exception as exc:
                logger.warning("Telegram notification failed: %s", exc)

        if decision.action == "SELL":
            buy_trade = get_latest_buy_trade(db_conn, stock_code, market.code)
            if buy_trade and buy_trade.get("price") is not None:
                buy_price = float(buy_trade["price"])
                buy_qty = int(buy_trade.get("quantity") or 1)
                trade_pnl = (trade_price - buy_price) * buy_qty
                decision_logger.update_outcome(
                    decision_id=buy_trade["decision_id"],
                    pnl=trade_pnl,
                    accuracy=1 if trade_pnl > 0 else 0,
                )

    # 6. Log trade with selection context
    selection_context = None
    if stock_code in market_candidates:
@@ -331,9 +415,13 @@
        action=decision.action,
        confidence=decision.confidence,
        rationale=decision.rationale,
        quantity=quantity,
        price=trade_price,
        pnl=trade_pnl,
        market=market.code,
        exchange_code=market.exchange_code,
        selection_context=selection_context,
        decision_id=decision_id,
    )

    # 7. Latency monitoring
@@ -452,6 +540,9 @@ async def run_daily_session(
            foreigner_net = safe_float(
                orderbook.get("output1", {}).get("frgn_ntby_qty", "0")
            )
+           price_change_pct = safe_float(
+               orderbook.get("output1", {}).get("prdy_ctrt", "0")
+           )
        else:
            price_data = await overseas_broker.get_overseas_price(
                market.exchange_code, stock_code
@@ -460,12 +551,16 @@
                price_data.get("output", {}).get("last", "0")
            )
            foreigner_net = 0.0
+           price_change_pct = safe_float(
+               price_data.get("output", {}).get("rate", "0")
+           )

        stock_data: dict[str, Any] = {
            "stock_code": stock_code,
            "market_name": market.name,
            "current_price": current_price,
            "foreigner_net": foreigner_net,
+           "price_change_pct": price_change_pct,
        }
        # Enrich with scanner metrics
        cand = candidate_map.get(stock_code)
@@ -568,7 +663,7 @@ async def run_daily_session(
            "pnl_pct": pnl_pct,
        }

-       decision_logger.log_decision(
+       decision_id = decision_logger.log_decision(
            stock_code=stock_code,
            market=market.code,
            exchange_code=market.exchange_code,
@@ -580,6 +675,9 @@
        )

        # Execute if actionable
+       quantity = 0
+       trade_price = stock_data["current_price"]
+       trade_pnl = 0.0
        if decision.action in ("BUY", "SELL"):
            quantity = 1
            order_amount = stock_data["current_price"] * quantity
@@ -652,6 +750,18 @@
                )
                continue

        if decision.action == "SELL":
            buy_trade = get_latest_buy_trade(db_conn, stock_code, market.code)
            if buy_trade and buy_trade.get("price") is not None:
                buy_price = float(buy_trade["price"])
                buy_qty = int(buy_trade.get("quantity") or 1)
                trade_pnl = (trade_price - buy_price) * buy_qty
                decision_logger.update_outcome(
                    decision_id=buy_trade["decision_id"],
                    pnl=trade_pnl,
                    accuracy=1 if trade_pnl > 0 else 0,
                )

        # Log trade
        log_trade(
            conn=db_conn,
@@ -659,13 +769,164 @@
            action=decision.action,
            confidence=decision.confidence,
            rationale=decision.rationale,
            quantity=quantity,
            price=trade_price,
            pnl=trade_pnl,
            market=market.code,
            exchange_code=market.exchange_code,
+           decision_id=decision_id,
        )

    logger.info("Daily trading session completed")


async def _handle_market_close(
    market_code: str,
    market_name: str,
    market_timezone: Any,
    telegram: TelegramClient,
    context_aggregator: ContextAggregator,
    daily_reviewer: DailyReviewer,
    evolution_optimizer: EvolutionOptimizer | None = None,
) -> None:
    """Handle market-close tasks: notify, aggregate, review, and store context."""
    await telegram.notify_market_close(market_name, 0.0)

    market_date = datetime.now(market_timezone).date().isoformat()
    context_aggregator.aggregate_daily_from_trades(
        date=market_date,
        market=market_code,
    )

    scorecard = daily_reviewer.generate_scorecard(market_date, market_code)
    daily_reviewer.store_scorecard_in_context(scorecard)

    lessons = await daily_reviewer.generate_lessons(scorecard)
    if lessons:
        scorecard.lessons = lessons
        daily_reviewer.store_scorecard_in_context(scorecard)

    await telegram.send_message(
        f"<b>Daily Review ({market_code})</b>\n"
        f"Date: {scorecard.date}\n"
        f"Decisions: {scorecard.total_decisions}\n"
        f"P&L: {scorecard.total_pnl:+.2f}\n"
        f"Win Rate: {scorecard.win_rate:.2f}%\n"
        f"Lessons: {', '.join(scorecard.lessons) if scorecard.lessons else 'N/A'}"
    )

    if evolution_optimizer is not None:
        await _run_evolution_loop(
            evolution_optimizer=evolution_optimizer,
            telegram=telegram,
            market_code=market_code,
            market_date=market_date,
        )


def _run_context_scheduler(
    scheduler: ContextScheduler, now: datetime | None = None,
) -> None:
    """Run periodic context scheduler tasks and log when anything executes."""
    result = scheduler.run_if_due(now=now)
    if any(
        [
            result.weekly,
            result.monthly,
            result.quarterly,
            result.annual,
            result.legacy,
            result.cleanup,
        ]
    ):
        logger.info(
            (
                "Context scheduler ran (weekly=%s, monthly=%s, quarterly=%s, "
                "annual=%s, legacy=%s, cleanup=%s)"
            ),
            result.weekly,
            result.monthly,
            result.quarterly,
            result.annual,
            result.legacy,
            result.cleanup,
        )


async def _run_evolution_loop(
    evolution_optimizer: EvolutionOptimizer,
    telegram: TelegramClient,
    market_code: str,
    market_date: str,
) -> None:
    """Run evolution loop once at US close (end of trading day)."""
    if not market_code.startswith("US"):
        return

    try:
        pr_info = await evolution_optimizer.evolve()
    except Exception as exc:
        logger.warning("Evolution loop failed on %s: %s", market_date, exc)
        return

    if pr_info is None:
        logger.info("Evolution loop skipped on %s (no actionable failures)", market_date)
        return

    try:
        await telegram.send_message(
            "<b>Evolution Update</b>\n"
            f"Date: {market_date}\n"
            f"PR: {pr_info.get('title', 'N/A')}\n"
            f"Branch: {pr_info.get('branch', 'N/A')}\n"
            f"Status: {pr_info.get('status', 'N/A')}"
        )
    except Exception as exc:
        logger.warning("Evolution notification failed on %s: %s", market_date, exc)


def _start_dashboard_server(settings: Settings) -> threading.Thread | None:
    """Start FastAPI dashboard in a daemon thread when enabled."""
    if not settings.DASHBOARD_ENABLED:
        return None

    def _serve() -> None:
        try:
            import uvicorn

            from src.dashboard import create_dashboard_app

            app = create_dashboard_app(settings.DB_PATH)
            uvicorn.run(
                app,
                host=settings.DASHBOARD_HOST,
                port=settings.DASHBOARD_PORT,
                log_level="info",
            )
        except Exception as exc:
            logger.warning("Dashboard server failed to start: %s", exc)

    thread = threading.Thread(
        target=_serve,
        name="dashboard-server",
        daemon=True,
    )
    thread.start()
    logger.info(
        "Dashboard server started at http://%s:%d",
        settings.DASHBOARD_HOST,
        settings.DASHBOARD_PORT,
    )
    return thread


def _apply_dashboard_flag(settings: Settings, dashboard_flag: bool) -> Settings:
    """Apply CLI dashboard flag over environment settings."""
    if dashboard_flag and not settings.DASHBOARD_ENABLED:
        return settings.model_copy(update={"DASHBOARD_ENABLED": True})
    return settings

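A sketch of the flag's one-way semantics; it assumes `Settings` is a pydantic v2 model (hence `model_copy`), matching its use above:

```python
# The CLI flag only widens DASHBOARD_ENABLED; it never turns the dashboard off.
settings = Settings(MODE="paper")  # type: ignore[call-arg]

forced_on = _apply_dashboard_flag(settings, dashboard_flag=True)
assert forced_on.DASHBOARD_ENABLED is True

# Without the flag, the environment-derived settings object is returned untouched:
assert _apply_dashboard_flag(settings, dashboard_flag=False) is settings
```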
async def run(settings: Settings) -> None:
    """Main async loop — iterate over open markets on a timer."""
    broker = KISBroker(settings)
@@ -675,11 +936,18 @@ async def run(settings: Settings) -> None:
    db_conn = init_db(settings.DB_PATH)
    decision_logger = DecisionLogger(db_conn)
    context_store = ContextStore(db_conn)
    context_aggregator = ContextAggregator(db_conn)
+   context_scheduler = ContextScheduler(
+       aggregator=context_aggregator,
+       store=context_store,
+   )
    evolution_optimizer = EvolutionOptimizer(settings)

    # V2 proactive strategy components
    context_selector = ContextSelector(context_store)
    scenario_engine = ScenarioEngine()
    playbook_store = PlaybookStore(db_conn)
+   daily_reviewer = DailyReviewer(db_conn, context_store, gemini_client=brain)
    pre_market_planner = PreMarketPlanner(
        gemini_client=brain,
        context_store=context_store,
@@ -708,6 +976,10 @@ async def run(settings: Settings) -> None:
            "/help - Show available commands\n"
            "/status - Trading status (mode, markets, P&L)\n"
            "/positions - Current holdings\n"
+           "/report - Daily summary report\n"
+           "/scenarios - Today's playbook scenarios\n"
+           "/review - Recent scorecards\n"
+           "/dashboard - Dashboard URL/status\n"
            "/stop - Pause trading\n"
            "/resume - Resume trading"
        )
@@ -827,23 +1099,167 @@ async def run(settings: Settings) -> None:
                "<b>⚠️ Error</b>\n\nFailed to retrieve positions."
            )

    async def handle_report() -> None:
        """Handle /report command - show daily summary metrics."""
        try:
            today = datetime.now(UTC).date().isoformat()
            trade_row = db_conn.execute(
                """
                SELECT COUNT(*) AS trade_count,
                       COALESCE(SUM(pnl), 0.0) AS total_pnl,
                       SUM(CASE WHEN pnl > 0 THEN 1 ELSE 0 END) AS wins
                FROM trades
                WHERE DATE(timestamp) = ?
                """,
                (today,),
            ).fetchone()
            decision_row = db_conn.execute(
                """
                SELECT COUNT(*) AS decision_count,
                       COALESCE(AVG(confidence), 0.0) AS avg_confidence
                FROM decision_logs
                WHERE DATE(timestamp) = ?
                """,
                (today,),
            ).fetchone()

            trade_count = int(trade_row[0] if trade_row else 0)
            total_pnl = float(trade_row[1] if trade_row else 0.0)
            wins = int(trade_row[2] if trade_row and trade_row[2] is not None else 0)
            decision_count = int(decision_row[0] if decision_row else 0)
            avg_confidence = float(decision_row[1] if decision_row else 0.0)
            win_rate = (wins / trade_count * 100.0) if trade_count > 0 else 0.0

            await telegram.send_message(
                "<b>📈 Daily Report</b>\n\n"
                f"<b>Date:</b> {today}\n"
                f"<b>Trades:</b> {trade_count}\n"
                f"<b>Total P&L:</b> {total_pnl:+.2f}\n"
                f"<b>Win Rate:</b> {win_rate:.2f}%\n"
                f"<b>Decisions:</b> {decision_count}\n"
                f"<b>Avg Confidence:</b> {avg_confidence:.2f}"
            )
        except Exception as exc:
            logger.error("Error in /report handler: %s", exc)
            await telegram.send_message(
                "<b>⚠️ Error</b>\n\nFailed to generate daily report."
            )

    async def handle_scenarios() -> None:
        """Handle /scenarios command - show today's playbook scenarios."""
        try:
            today = datetime.now(UTC).date().isoformat()
            rows = db_conn.execute(
                """
                SELECT market, playbook_json
                FROM playbooks
                WHERE date = ?
                ORDER BY market
                """,
                (today,),
            ).fetchall()

            if not rows:
                await telegram.send_message(
                    "<b>🧠 Today's Scenarios</b>\n\nNo playbooks found for today."
                )
                return

            lines = ["<b>🧠 Today's Scenarios</b>", ""]
            for market, playbook_json in rows:
                lines.append(f"<b>{market}</b>")
                playbook_data = {}
                try:
                    playbook_data = json.loads(playbook_json)
                except Exception:
                    playbook_data = {}

                stock_playbooks = playbook_data.get("stock_playbooks", [])
                if not stock_playbooks:
                    lines.append("- No scenarios")
                    lines.append("")
                    continue

                for stock_pb in stock_playbooks:
                    stock_code = stock_pb.get("stock_code", "N/A")
                    scenarios = stock_pb.get("scenarios", [])
                    for sc in scenarios:
                        action = sc.get("action", "HOLD")
                        confidence = sc.get("confidence", 0)
                        lines.append(f"- {stock_code}: {action} ({confidence})")
                lines.append("")

            await telegram.send_message("\n".join(lines).strip())
        except Exception as exc:
            logger.error("Error in /scenarios handler: %s", exc)
            await telegram.send_message(
                "<b>⚠️ Error</b>\n\nFailed to retrieve scenarios."
            )

    async def handle_review() -> None:
        """Handle /review command - show recent scorecards."""
        try:
            rows = db_conn.execute(
                """
                SELECT timeframe, key, value
                FROM contexts
                WHERE layer = 'L6_DAILY' AND key LIKE 'scorecard_%'
                ORDER BY updated_at DESC
                LIMIT 5
                """
            ).fetchall()

            if not rows:
                await telegram.send_message(
                    "<b>📝 Recent Reviews</b>\n\nNo scorecards available."
                )
                return

            lines = ["<b>📝 Recent Reviews</b>", ""]
            for timeframe, key, value in rows:
                scorecard = json.loads(value)
                market = key.replace("scorecard_", "")
                total_pnl = float(scorecard.get("total_pnl", 0.0))
                win_rate = float(scorecard.get("win_rate", 0.0))
                decisions = int(scorecard.get("total_decisions", 0))
                lines.append(
                    f"- {timeframe} {market}: P&L {total_pnl:+.2f}, "
                    f"Win {win_rate:.2f}%, Decisions {decisions}"
                )

            await telegram.send_message("\n".join(lines))
        except Exception as exc:
            logger.error("Error in /review handler: %s", exc)
            await telegram.send_message(
                "<b>⚠️ Error</b>\n\nFailed to retrieve reviews."
            )

    async def handle_dashboard() -> None:
        """Handle /dashboard command - show dashboard URL if enabled."""
        if not settings.DASHBOARD_ENABLED:
            await telegram.send_message(
                "<b>🖥️ Dashboard</b>\n\nDashboard is not enabled."
            )
            return

        url = f"http://{settings.DASHBOARD_HOST}:{settings.DASHBOARD_PORT}"
        await telegram.send_message(
            "<b>🖥️ Dashboard</b>\n\n"
            f"<b>URL:</b> {url}"
        )

    command_handler.register_command("help", handle_help)
    command_handler.register_command("stop", handle_stop)
    command_handler.register_command("resume", handle_resume)
    command_handler.register_command("status", handle_status)
    command_handler.register_command("positions", handle_positions)
    command_handler.register_command("report", handle_report)
    command_handler.register_command("scenarios", handle_scenarios)
    command_handler.register_command("review", handle_review)
    command_handler.register_command("dashboard", handle_dashboard)

    # Initialize volatility hunter
    volatility_analyzer = VolatilityAnalyzer(min_volume_surge=2.0, min_price_change=1.0)
    market_scanner = MarketScanner(
        broker=broker,
        overseas_broker=overseas_broker,
        volatility_analyzer=volatility_analyzer,
        context_store=context_store,
        top_n=5,
        max_concurrent_scans=1,  # Fully serialized to avoid EGW00201
    )

    # Initialize smart scanner (Python-first, AI-last pipeline)
    smart_scanner = SmartVolatilityScanner(
        broker=broker,
@@ -866,6 +1282,7 @@ async def run(settings: Settings) -> None:
        low_volatility_threshold=30.0,
    )
    priority_queue = PriorityTaskQueue(max_size=1000)
+   _start_dashboard_server(settings)

    # Track last scan time for each market
    last_scan_time: dict[str, float] = {}
@@ -916,6 +1333,7 @@ async def run(settings: Settings) -> None:
        while not shutdown.is_set():
            # Wait for trading to be unpaused
            await pause_trading.wait()
+           _run_context_scheduler(context_scheduler, now=datetime.now(UTC))

            try:
                await run_daily_session(
@@ -954,6 +1372,7 @@ async def run(settings: Settings) -> None:
    while not shutdown.is_set():
        # Wait for trading to be unpaused
        await pause_trading.wait()
+       _run_context_scheduler(context_scheduler, now=datetime.now(UTC))

        # Get currently open markets
        open_markets = get_open_markets(settings.enabled_market_list)
@@ -967,7 +1386,15 @@ async def run(settings: Settings) -> None:

                market_info = MARKETS.get(market_code)
                if market_info:
-                   await telegram.notify_market_close(market_info.name, 0.0)
+                   await _handle_market_close(
+                       market_code=market_code,
+                       market_name=market_info.name,
+                       market_timezone=market_info.timezone,
+                       telegram=telegram,
+                       context_aggregator=context_aggregator,
+                       daily_reviewer=daily_reviewer,
+                       evolution_optimizer=evolution_optimizer,
+                   )
            except Exception as exc:
                logger.warning("Market close notification failed: %s", exc)
            _market_states[market_code] = False
@@ -1209,10 +1636,16 @@ def main() -> None:
        default="paper",
        help="Trading mode (default: paper)",
    )
+   parser.add_argument(
+       "--dashboard",
+       action="store_true",
+       help="Enable FastAPI dashboard server in background thread",
+   )
    args = parser.parse_args()

    setup_logging()
    settings = Settings(MODE=args.mode)  # type: ignore[call-arg]
+   settings = _apply_dashboard_flag(settings, args.dashboard)
    asyncio.run(run(settings))

src/markets/schedule.py
@@ -123,6 +123,23 @@ MARKETS: dict[str, MarketInfo] = {
    ),
}

MARKET_SHORTHAND: dict[str, list[str]] = {
    "US": ["US_NASDAQ", "US_NYSE", "US_AMEX"],
    "CN": ["CN_SHA", "CN_SZA"],
    "VN": ["VN_HAN", "VN_HCM"],
}


def expand_market_codes(codes: list[str]) -> list[str]:
    """Expand shorthand market codes into concrete exchange market codes."""
    expanded: list[str] = []
    for code in codes:
        if code in MARKET_SHORTHAND:
            expanded.extend(MARKET_SHORTHAND[code])
        else:
            expanded.append(code)
    return expanded


def is_market_open(market: MarketInfo, now: datetime | None = None) -> bool:
    """
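A sketch of the expansion behavior; shorthand entries fan out in declaration order while concrete or unknown codes pass through unchanged:

```python
from src.markets.schedule import expand_market_codes

assert expand_market_codes(["KR", "US"]) == [
    "KR", "US_NASDAQ", "US_NYSE", "US_AMEX",
]
assert expand_market_codes(["US_NYSE", "VN"]) == ["US_NYSE", "VN_HAN", "VN_HCM"]
```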
@@ -8,7 +8,7 @@ from __future__ import annotations
|
||||
|
||||
import json
|
||||
import logging
|
||||
from datetime import date
|
||||
from datetime import date, timedelta
|
||||
from typing import Any
|
||||
|
||||
from src.analysis.smart_scanner import ScanCandidate
|
||||
@@ -95,10 +95,17 @@ class PreMarketPlanner:
|
||||
try:
|
||||
# 1. Gather context
|
||||
context_data = self._gather_context()
|
||||
self_market_scorecard = self.build_self_market_scorecard(market, today)
|
||||
cross_market = self.build_cross_market_context(market, today)
|
||||
|
||||
# 2. Build prompt
|
||||
prompt = self._build_prompt(market, candidates, context_data, cross_market)
|
||||
prompt = self._build_prompt(
|
||||
market,
|
||||
candidates,
|
||||
context_data,
|
||||
self_market_scorecard,
|
||||
cross_market,
|
||||
)
|
||||
|
||||
# 3. Call Gemini
|
||||
market_data = {
|
||||
@@ -145,7 +152,8 @@ class PreMarketPlanner:
|
||||
other_market = "US" if target_market == "KR" else "KR"
|
||||
if today is None:
|
||||
today = date.today()
|
||||
timeframe = today.isoformat()
|
||||
timeframe_date = today - timedelta(days=1) if target_market == "KR" else today
|
||||
timeframe = timeframe_date.isoformat()
|
||||
|
||||
scorecard_key = f"scorecard_{other_market}"
|
||||
scorecard_data = self._context_store.get_context(
|
||||
@@ -175,6 +183,37 @@ class PreMarketPlanner:
            lessons=scorecard_data.get("lessons", []),
        )

    def build_self_market_scorecard(
        self, market: str, today: date | None = None,
    ) -> dict[str, Any] | None:
        """Build previous-day scorecard for the same market."""
        if today is None:
            today = date.today()
        timeframe = (today - timedelta(days=1)).isoformat()
        scorecard_key = f"scorecard_{market}"
        scorecard_data = self._context_store.get_context(
            ContextLayer.L6_DAILY, timeframe, scorecard_key
        )

        if scorecard_data is None:
            return None

        if isinstance(scorecard_data, str):
            try:
                scorecard_data = json.loads(scorecard_data)
            except (json.JSONDecodeError, TypeError):
                return None

        if not isinstance(scorecard_data, dict):
            return None

        return {
            "date": timeframe,
            "total_pnl": float(scorecard_data.get("total_pnl", 0.0)),
            "win_rate": float(scorecard_data.get("win_rate", 0.0)),
            "lessons": scorecard_data.get("lessons", []),
        }
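So a successful lookup normalizes to a small, prompt-ready dict, for example (illustrative values, matching the planner tests further below):

```python
{
    "date": "2026-02-07",
    "total_pnl": -1.0,
    "win_rate": 40.0,
    "lessons": ["Tighten entries"],
}
```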
    def _gather_context(self) -> dict[str, Any]:
        """Gather strategic context using ContextSelector."""
        layers = self._context_selector.select_layers(

@@ -188,6 +227,7 @@ class PreMarketPlanner:
        market: str,
        candidates: list[ScanCandidate],
        context_data: dict[str, Any],
+       self_market_scorecard: dict[str, Any] | None,
        cross_market: CrossMarketContext | None,
    ) -> str:
        """Build a structured prompt for Gemini to generate scenario JSON."""

@@ -211,6 +251,18 @@ class PreMarketPlanner:
        if cross_market.lessons:
            cross_market_text += f"- Lessons: {'; '.join(cross_market.lessons[:3])}\n"

        self_market_text = ""
        if self_market_scorecard:
            self_market_text = (
                f"\n## My Market Previous Day ({market})\n"
                f"- Date: {self_market_scorecard['date']}\n"
                f"- P&L: {self_market_scorecard['total_pnl']:+.2f}%\n"
                f"- Win Rate: {self_market_scorecard['win_rate']:.0f}%\n"
            )
            lessons = self_market_scorecard.get("lessons", [])
            if lessons:
                self_market_text += f"- Lessons: {'; '.join(lessons[:3])}\n"
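With the illustrative scorecard above, this section of the prompt renders as:

```
## My Market Previous Day (KR)
- Date: 2026-02-07
- P&L: -1.00%
- Win Rate: 40%
- Lessons: Tighten entries
```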
        context_text = ""
        if context_data:
            context_text = "\n## Strategic Context\n"

@@ -224,6 +276,7 @@ class PreMarketPlanner:
            f"You are a pre-market trading strategist for the {market} market.\n"
            f"Generate structured trading scenarios for today.\n\n"
            f"## Candidates (from volatility scanner)\n{candidates_text}\n"
+           f"{self_market_text}"
            f"{cross_market_text}"
            f"{context_text}\n"
            f"## Instructions\n"
@@ -161,7 +161,7 @@ class TestContextAggregator:
        self, aggregator: ContextAggregator, db_conn: sqlite3.Connection
    ) -> None:
        """Test aggregating daily metrics from trades."""
-       date = "2026-02-04"
+       date = datetime.now(UTC).date().isoformat()

        # Create sample trades
        log_trade(db_conn, "005930", "BUY", 85, "Good signal", quantity=10, price=70000, pnl=500)

@@ -175,36 +175,44 @@ class TestContextAggregator:
        db_conn.commit()

        # Aggregate
-       aggregator.aggregate_daily_from_trades(date)
+       aggregator.aggregate_daily_from_trades(date, market="KR")

        # Verify L6 contexts
        store = aggregator.store
-       assert store.get_context(ContextLayer.L6_DAILY, date, "trade_count") == 3
-       assert store.get_context(ContextLayer.L6_DAILY, date, "buys") == 1
-       assert store.get_context(ContextLayer.L6_DAILY, date, "sells") == 1
-       assert store.get_context(ContextLayer.L6_DAILY, date, "holds") == 1
-       assert store.get_context(ContextLayer.L6_DAILY, date, "total_pnl") == 2000.0
-       assert store.get_context(ContextLayer.L6_DAILY, date, "unique_stocks") == 3
+       assert store.get_context(ContextLayer.L6_DAILY, date, "trade_count_KR") == 3
+       assert store.get_context(ContextLayer.L6_DAILY, date, "buys_KR") == 1
+       assert store.get_context(ContextLayer.L6_DAILY, date, "sells_KR") == 1
+       assert store.get_context(ContextLayer.L6_DAILY, date, "holds_KR") == 1
+       assert store.get_context(ContextLayer.L6_DAILY, date, "total_pnl_KR") == 2000.0
+       assert store.get_context(ContextLayer.L6_DAILY, date, "unique_stocks_KR") == 3
        # 2 wins, 0 losses
-       assert store.get_context(ContextLayer.L6_DAILY, date, "win_rate") == 100.0
+       assert store.get_context(ContextLayer.L6_DAILY, date, "win_rate_KR") == 100.0

    def test_aggregate_weekly_from_daily(self, aggregator: ContextAggregator) -> None:
        """Test aggregating weekly metrics from daily."""
        week = "2026-W06"

        # Set daily contexts
-       aggregator.store.set_context(ContextLayer.L6_DAILY, "2026-02-02", "total_pnl", 100.0)
-       aggregator.store.set_context(ContextLayer.L6_DAILY, "2026-02-03", "total_pnl", 200.0)
-       aggregator.store.set_context(ContextLayer.L6_DAILY, "2026-02-02", "avg_confidence", 80.0)
-       aggregator.store.set_context(ContextLayer.L6_DAILY, "2026-02-03", "avg_confidence", 85.0)
+       aggregator.store.set_context(
+           ContextLayer.L6_DAILY, "2026-02-02", "total_pnl_KR", 100.0
+       )
+       aggregator.store.set_context(
+           ContextLayer.L6_DAILY, "2026-02-03", "total_pnl_KR", 200.0
+       )
+       aggregator.store.set_context(
+           ContextLayer.L6_DAILY, "2026-02-02", "avg_confidence_KR", 80.0
+       )
+       aggregator.store.set_context(
+           ContextLayer.L6_DAILY, "2026-02-03", "avg_confidence_KR", 85.0
+       )

        # Aggregate
        aggregator.aggregate_weekly_from_daily(week)

        # Verify L5 contexts
        store = aggregator.store
-       weekly_pnl = store.get_context(ContextLayer.L5_WEEKLY, week, "weekly_pnl")
-       avg_conf = store.get_context(ContextLayer.L5_WEEKLY, week, "avg_confidence")
+       weekly_pnl = store.get_context(ContextLayer.L5_WEEKLY, week, "weekly_pnl_KR")
+       avg_conf = store.get_context(ContextLayer.L5_WEEKLY, week, "avg_confidence_KR")

        assert weekly_pnl == 300.0
        assert avg_conf == 82.5

@@ -214,9 +222,15 @@ class TestContextAggregator:
        month = "2026-02"

        # Set weekly contexts
-       aggregator.store.set_context(ContextLayer.L5_WEEKLY, "2026-W05", "weekly_pnl", 100.0)
-       aggregator.store.set_context(ContextLayer.L5_WEEKLY, "2026-W06", "weekly_pnl", 200.0)
-       aggregator.store.set_context(ContextLayer.L5_WEEKLY, "2026-W07", "weekly_pnl", 150.0)
+       aggregator.store.set_context(
+           ContextLayer.L5_WEEKLY, "2026-W05", "weekly_pnl_KR", 100.0
+       )
+       aggregator.store.set_context(
+           ContextLayer.L5_WEEKLY, "2026-W06", "weekly_pnl_KR", 200.0
+       )
+       aggregator.store.set_context(
+           ContextLayer.L5_WEEKLY, "2026-W07", "weekly_pnl_KR", 150.0
+       )

        # Aggregate
        aggregator.aggregate_monthly_from_weekly(month)

@@ -285,7 +299,7 @@ class TestContextAggregator:
        self, aggregator: ContextAggregator, db_conn: sqlite3.Connection
    ) -> None:
        """Test running all aggregations from L7 to L1."""
-       date = "2026-02-04"
+       date = datetime.now(UTC).date().isoformat()

        # Create sample trades
        log_trade(db_conn, "005930", "BUY", 85, "Good signal", quantity=10, price=70000, pnl=1000)

@@ -299,10 +313,18 @@ class TestContextAggregator:

        # Verify data exists in each layer
        store = aggregator.store
-       assert store.get_context(ContextLayer.L6_DAILY, date, "total_pnl") == 1000.0
-       current_week = datetime.now(UTC).strftime("%Y-W%V")
-       assert store.get_context(ContextLayer.L5_WEEKLY, current_week, "weekly_pnl") is not None
-       # Further layers depend on time alignment, just verify no crashes
+       assert store.get_context(ContextLayer.L6_DAILY, date, "total_pnl_KR") == 1000.0
+       from datetime import date as date_cls
+       trade_date = date_cls.fromisoformat(date)
+       iso_year, iso_week, _ = trade_date.isocalendar()
+       trade_week = f"{iso_year}-W{iso_week:02d}"
+       assert store.get_context(ContextLayer.L5_WEEKLY, trade_week, "weekly_pnl_KR") is not None
+       trade_month = f"{trade_date.year}-{trade_date.month:02d}"
+       trade_quarter = f"{trade_date.year}-Q{(trade_date.month - 1) // 3 + 1}"
+       trade_year = str(trade_date.year)
+       assert store.get_context(ContextLayer.L4_MONTHLY, trade_month, "monthly_pnl") == 1000.0
+       assert store.get_context(ContextLayer.L3_QUARTERLY, trade_quarter, "quarterly_pnl") == 1000.0
+       assert store.get_context(ContextLayer.L2_ANNUAL, trade_year, "annual_pnl") == 1000.0
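The switch from `strftime("%Y-W%V")` to `isocalendar()` matters near year boundaries, where the calendar year and the ISO week-year disagree. A self-contained check (not from the repository):

```python
from datetime import date

d = date(2025, 12, 29)
assert d.strftime("%Y-W%V") == "2025-W01"           # calendar year paired with ISO week: wrong
iso_year, iso_week, _ = d.isocalendar()
assert f"{iso_year}-W{iso_week:02d}" == "2026-W01"  # consistent ISO week-year
```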
class TestLayerMetadata:


tests/test_context_scheduler.py (new file, 104 lines)
@@ -0,0 +1,104 @@
"""Tests for ContextScheduler."""

from __future__ import annotations

from dataclasses import dataclass
from datetime import UTC, datetime

from src.context.scheduler import ContextScheduler


@dataclass
class StubAggregator:
    """Stub aggregator that records calls."""

    weekly_calls: list[str]
    monthly_calls: list[str]
    quarterly_calls: list[str]
    annual_calls: list[str]
    legacy_calls: int

    def aggregate_weekly_from_daily(self, week: str) -> None:
        self.weekly_calls.append(week)

    def aggregate_monthly_from_weekly(self, month: str) -> None:
        self.monthly_calls.append(month)

    def aggregate_quarterly_from_monthly(self, quarter: str) -> None:
        self.quarterly_calls.append(quarter)

    def aggregate_annual_from_quarterly(self, year: str) -> None:
        self.annual_calls.append(year)

    def aggregate_legacy_from_annual(self) -> None:
        self.legacy_calls += 1


@dataclass
class StubStore:
    """Stub store that records cleanup calls."""

    cleanup_calls: int = 0

    def cleanup_expired_contexts(self) -> None:
        self.cleanup_calls += 1


def make_scheduler() -> tuple[ContextScheduler, StubAggregator, StubStore]:
    aggregator = StubAggregator([], [], [], [], 0)
    store = StubStore()
    scheduler = ContextScheduler(aggregator=aggregator, store=store)
    return scheduler, aggregator, store


def test_run_if_due_weekly() -> None:
    scheduler, aggregator, store = make_scheduler()
    now = datetime(2026, 2, 8, 10, 0, tzinfo=UTC)  # Sunday

    result = scheduler.run_if_due(now)

    assert result.weekly is True
    assert aggregator.weekly_calls == ["2026-W06"]
    assert store.cleanup_calls == 1


def test_run_if_due_monthly() -> None:
    scheduler, aggregator, _store = make_scheduler()
    now = datetime(2026, 2, 28, 12, 0, tzinfo=UTC)  # Last day of month

    result = scheduler.run_if_due(now)

    assert result.monthly is True
    assert aggregator.monthly_calls == ["2026-02"]


def test_run_if_due_quarterly() -> None:
    scheduler, aggregator, _store = make_scheduler()
    now = datetime(2026, 3, 31, 12, 0, tzinfo=UTC)  # Last day of Q1

    result = scheduler.run_if_due(now)

    assert result.quarterly is True
    assert aggregator.quarterly_calls == ["2026-Q1"]


def test_run_if_due_annual_and_legacy() -> None:
    scheduler, aggregator, _store = make_scheduler()
    now = datetime(2026, 12, 31, 12, 0, tzinfo=UTC)

    result = scheduler.run_if_due(now)

    assert result.annual is True
    assert result.legacy is True
    assert aggregator.annual_calls == ["2026"]
    assert aggregator.legacy_calls == 1


def test_cleanup_runs_once_per_day() -> None:
    scheduler, _aggregator, store = make_scheduler()
    now = datetime(2026, 2, 9, 9, 0, tzinfo=UTC)

    scheduler.run_if_due(now)
    scheduler.run_if_due(now)

    assert store.cleanup_calls == 1
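Taken together, these tests pin down when each rollup fires. A compact sketch of those due-conditions (an illustration written for this review, not the code in src/context/scheduler.py):

```python
from datetime import UTC, datetime, timedelta

def due_flags(now: datetime) -> dict[str, bool]:
    # Sketch: weekly on Sundays, monthly on the last day of the month,
    # quarterly on the last day of Mar/Jun/Sep/Dec, annual (plus the legacy
    # rollup) on Dec 31. Cleanup additionally runs at most once per day.
    last_day_of_month = (now.date() + timedelta(days=1)).month != now.month
    return {
        "weekly": now.isoweekday() == 7,
        "monthly": last_day_of_month,
        "quarterly": last_day_of_month and now.month in (3, 6, 9, 12),
        "annual": now.month == 12 and now.day == 31,
    }

assert due_flags(datetime(2026, 2, 8, tzinfo=UTC))["weekly"]      # Sunday
assert due_flags(datetime(2026, 2, 28, tzinfo=UTC))["monthly"]    # month end
assert due_flags(datetime(2026, 3, 31, tzinfo=UTC))["quarterly"]  # Q1 end
assert due_flags(datetime(2026, 12, 31, tzinfo=UTC))["annual"]    # year end
```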
tests/test_daily_review.py (new file, 387 lines)
@@ -0,0 +1,387 @@
"""Tests for DailyReviewer."""

from __future__ import annotations

import json
import sqlite3
from datetime import UTC, datetime
from types import SimpleNamespace
from unittest.mock import AsyncMock, MagicMock

import pytest

from src.context.layer import ContextLayer
from src.context.store import ContextStore
from src.db import init_db, log_trade
from src.evolution.daily_review import DailyReviewer
from src.evolution.scorecard import DailyScorecard
from src.logging.decision_logger import DecisionLogger

TODAY = datetime.now(UTC).strftime("%Y-%m-%d")


@pytest.fixture
def db_conn() -> sqlite3.Connection:
    return init_db(":memory:")


@pytest.fixture
def context_store(db_conn: sqlite3.Connection) -> ContextStore:
    return ContextStore(db_conn)


def _log_decision(
    logger: DecisionLogger,
    *,
    stock_code: str,
    market: str,
    action: str,
    confidence: int,
    scenario_match: dict[str, float] | None = None,
) -> str:
    return logger.log_decision(
        stock_code=stock_code,
        market=market,
        exchange_code="KRX" if market == "KR" else "NASDAQ",
        action=action,
        confidence=confidence,
        rationale="test",
        context_snapshot={"scenario_match": scenario_match or {}},
        input_data={"stock_code": stock_code},
    )


def test_generate_scorecard_market_scoped(
    db_conn: sqlite3.Connection, context_store: ContextStore,
) -> None:
    reviewer = DailyReviewer(db_conn, context_store)
    logger = DecisionLogger(db_conn)

    buy_id = _log_decision(
        logger,
        stock_code="005930",
        market="KR",
        action="BUY",
        confidence=90,
        scenario_match={"rsi": 29.0},
    )
    _log_decision(
        logger,
        stock_code="000660",
        market="KR",
        action="HOLD",
        confidence=60,
    )
    _log_decision(
        logger,
        stock_code="AAPL",
        market="US",
        action="SELL",
        confidence=80,
        scenario_match={"volume_ratio": 2.1},
    )

    log_trade(
        db_conn,
        "005930",
        "BUY",
        90,
        "buy",
        quantity=1,
        price=100.0,
        pnl=10.0,
        market="KR",
        exchange_code="KRX",
        decision_id=buy_id,
    )
    log_trade(
        db_conn,
        "000660",
        "HOLD",
        60,
        "hold",
        quantity=0,
        price=0.0,
        pnl=0.0,
        market="KR",
        exchange_code="KRX",
    )
    log_trade(
        db_conn,
        "AAPL",
        "SELL",
        80,
        "sell",
        quantity=1,
        price=200.0,
        pnl=-5.0,
        market="US",
        exchange_code="NASDAQ",
    )

    scorecard = reviewer.generate_scorecard(TODAY, "KR")

    assert scorecard.market == "KR"
    assert scorecard.total_decisions == 2
    assert scorecard.buys == 1
    assert scorecard.sells == 0
    assert scorecard.holds == 1
    assert scorecard.total_pnl == 10.0
    assert scorecard.win_rate == 100.0
    assert scorecard.avg_confidence == 75.0
    assert scorecard.scenario_match_rate == 50.0


def test_generate_scorecard_top_winners_and_losers(
    db_conn: sqlite3.Connection, context_store: ContextStore,
) -> None:
    reviewer = DailyReviewer(db_conn, context_store)
    logger = DecisionLogger(db_conn)

    for code, pnl in [("005930", 30.0), ("000660", 10.0), ("035420", -15.0), ("051910", -5.0)]:
        decision_id = _log_decision(
            logger,
            stock_code=code,
            market="KR",
            action="BUY" if pnl >= 0 else "SELL",
            confidence=80,
            scenario_match={"rsi": 30.0},
        )
        log_trade(
            db_conn,
            code,
            "BUY" if pnl >= 0 else "SELL",
            80,
            "test",
            quantity=1,
            price=100.0,
            pnl=pnl,
            market="KR",
            exchange_code="KRX",
            decision_id=decision_id,
        )

    scorecard = reviewer.generate_scorecard(TODAY, "KR")
    assert scorecard.top_winners == ["005930", "000660"]
    assert scorecard.top_losers == ["035420", "051910"]


def test_generate_scorecard_empty_day(
    db_conn: sqlite3.Connection, context_store: ContextStore,
) -> None:
    reviewer = DailyReviewer(db_conn, context_store)
    scorecard = reviewer.generate_scorecard(TODAY, "KR")

    assert scorecard.total_decisions == 0
    assert scorecard.total_pnl == 0.0
    assert scorecard.win_rate == 0.0
    assert scorecard.avg_confidence == 0.0
    assert scorecard.scenario_match_rate == 0.0
    assert scorecard.top_winners == []
    assert scorecard.top_losers == []


@pytest.mark.asyncio
async def test_generate_lessons_without_gemini_returns_empty(
    db_conn: sqlite3.Connection, context_store: ContextStore,
) -> None:
    reviewer = DailyReviewer(db_conn, context_store, gemini_client=None)
    lessons = await reviewer.generate_lessons(
        DailyScorecard(
            date="2026-02-14",
            market="KR",
            total_decisions=1,
            buys=1,
            sells=0,
            holds=0,
            total_pnl=5.0,
            win_rate=100.0,
            avg_confidence=90.0,
            scenario_match_rate=100.0,
        )
    )
    assert lessons == []


@pytest.mark.asyncio
async def test_generate_lessons_parses_json_array(
    db_conn: sqlite3.Connection, context_store: ContextStore,
) -> None:
    mock_gemini = MagicMock()
    mock_gemini.decide = AsyncMock(
        return_value=SimpleNamespace(rationale='["Cut losers earlier", "Reduce midday churn"]')
    )
    reviewer = DailyReviewer(db_conn, context_store, gemini_client=mock_gemini)

    lessons = await reviewer.generate_lessons(
        DailyScorecard(
            date="2026-02-14",
            market="KR",
            total_decisions=3,
            buys=1,
            sells=1,
            holds=1,
            total_pnl=-2.5,
            win_rate=50.0,
            avg_confidence=70.0,
            scenario_match_rate=66.7,
        )
    )
    assert lessons == ["Cut losers earlier", "Reduce midday churn"]


@pytest.mark.asyncio
async def test_generate_lessons_fallback_to_lines(
    db_conn: sqlite3.Connection, context_store: ContextStore,
) -> None:
    mock_gemini = MagicMock()
    mock_gemini.decide = AsyncMock(
        return_value=SimpleNamespace(rationale="- Keep risk tighter\n- Increase selectivity")
    )
    reviewer = DailyReviewer(db_conn, context_store, gemini_client=mock_gemini)

    lessons = await reviewer.generate_lessons(
        DailyScorecard(
            date="2026-02-14",
            market="US",
            total_decisions=2,
            buys=1,
            sells=1,
            holds=0,
            total_pnl=1.0,
            win_rate=50.0,
            avg_confidence=75.0,
            scenario_match_rate=100.0,
        )
    )
    assert lessons == ["Keep risk tighter", "Increase selectivity"]
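The two tests above fix the parsing contract: a JSON array wins, otherwise bullet lines are split. A minimal sketch of a parser satisfying both (illustrative; the real logic lives in DailyReviewer):

```python
import json

def parse_lessons(text: str) -> list[str]:
    # Prefer a strict JSON array of strings.
    try:
        parsed = json.loads(text)
        if isinstance(parsed, list):
            return [str(item) for item in parsed]
    except json.JSONDecodeError:
        pass
    # Fall back to one lesson per non-empty line, stripping "- " bullets.
    return [line.lstrip("- ").strip() for line in text.splitlines() if line.strip()]

assert parse_lessons('["Cut losers earlier", "Reduce midday churn"]') == [
    "Cut losers earlier", "Reduce midday churn",
]
assert parse_lessons("- Keep risk tighter\n- Increase selectivity") == [
    "Keep risk tighter", "Increase selectivity",
]
```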
@pytest.mark.asyncio
async def test_generate_lessons_handles_gemini_error(
    db_conn: sqlite3.Connection, context_store: ContextStore,
) -> None:
    mock_gemini = MagicMock()
    mock_gemini.decide = AsyncMock(side_effect=RuntimeError("boom"))
    reviewer = DailyReviewer(db_conn, context_store, gemini_client=mock_gemini)

    lessons = await reviewer.generate_lessons(
        DailyScorecard(
            date="2026-02-14",
            market="US",
            total_decisions=0,
            buys=0,
            sells=0,
            holds=0,
            total_pnl=0.0,
            win_rate=0.0,
            avg_confidence=0.0,
            scenario_match_rate=0.0,
        )
    )
    assert lessons == []


def test_store_scorecard_in_context(
    db_conn: sqlite3.Connection, context_store: ContextStore,
) -> None:
    reviewer = DailyReviewer(db_conn, context_store)
    scorecard = DailyScorecard(
        date="2026-02-14",
        market="KR",
        total_decisions=5,
        buys=2,
        sells=1,
        holds=2,
        total_pnl=15.0,
        win_rate=66.67,
        avg_confidence=82.0,
        scenario_match_rate=80.0,
        lessons=["Keep position sizing stable"],
        cross_market_note="US risk-off",
    )

    reviewer.store_scorecard_in_context(scorecard)

    stored = context_store.get_context(
        ContextLayer.L6_DAILY,
        "2026-02-14",
        "scorecard_KR",
    )
    assert stored is not None
    assert stored["market"] == "KR"
    assert stored["total_pnl"] == 15.0
    assert stored["lessons"] == ["Keep position sizing stable"]


def test_store_scorecard_key_is_market_scoped(
    db_conn: sqlite3.Connection, context_store: ContextStore,
) -> None:
    reviewer = DailyReviewer(db_conn, context_store)
    kr = DailyScorecard(
        date="2026-02-14",
        market="KR",
        total_decisions=1,
        buys=1,
        sells=0,
        holds=0,
        total_pnl=1.0,
        win_rate=100.0,
        avg_confidence=90.0,
        scenario_match_rate=100.0,
    )
    us = DailyScorecard(
        date="2026-02-14",
        market="US",
        total_decisions=1,
        buys=0,
        sells=1,
        holds=0,
        total_pnl=-1.0,
        win_rate=0.0,
        avg_confidence=70.0,
        scenario_match_rate=100.0,
    )

    reviewer.store_scorecard_in_context(kr)
    reviewer.store_scorecard_in_context(us)

    kr_ctx = context_store.get_context(ContextLayer.L6_DAILY, "2026-02-14", "scorecard_KR")
    us_ctx = context_store.get_context(ContextLayer.L6_DAILY, "2026-02-14", "scorecard_US")

    assert kr_ctx["market"] == "KR"
    assert us_ctx["market"] == "US"
    assert kr_ctx["total_pnl"] == 1.0
    assert us_ctx["total_pnl"] == -1.0


def test_generate_scorecard_handles_invalid_context_snapshot(
    db_conn: sqlite3.Connection, context_store: ContextStore,
) -> None:
    reviewer = DailyReviewer(db_conn, context_store)
    db_conn.execute(
        """
        INSERT INTO decision_logs (
            decision_id, timestamp, stock_code, market, exchange_code,
            action, confidence, rationale, context_snapshot, input_data
        ) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
        """,
        (
            "d1",
            "2026-02-14T09:00:00+00:00",
            "005930",
            "KR",
            "KRX",
            "HOLD",
            50,
            "test",
            "{invalid_json",
            json.dumps({}),
        ),
    )
    db_conn.commit()

    scorecard = reviewer.generate_scorecard("2026-02-14", "KR")
    assert scorecard.total_decisions == 1
    assert scorecard.scenario_match_rate == 0.0
tests/test_dashboard.py (new file, 298 lines)
@@ -0,0 +1,298 @@
"""Tests for dashboard endpoint handlers."""

from __future__ import annotations

import json
import sqlite3
from collections.abc import Callable
from datetime import UTC, datetime
from pathlib import Path
from typing import Any

import pytest
from fastapi import HTTPException
from fastapi.responses import FileResponse

from src.dashboard.app import create_dashboard_app
from src.db import init_db


def _seed_db(conn: sqlite3.Connection) -> None:
    today = datetime.now(UTC).date().isoformat()

    conn.execute(
        """
        INSERT INTO playbooks (
            date, market, status, playbook_json, generated_at,
            token_count, scenario_count, match_count
        ) VALUES (?, ?, ?, ?, ?, ?, ?, ?)
        """,
        (
            "2026-02-14",
            "KR",
            "ready",
            json.dumps({"market": "KR", "stock_playbooks": []}),
            "2026-02-14T08:30:00+00:00",
            123,
            2,
            1,
        ),
    )
    conn.execute(
        """
        INSERT INTO playbooks (
            date, market, status, playbook_json, generated_at,
            token_count, scenario_count, match_count
        ) VALUES (?, ?, ?, ?, ?, ?, ?, ?)
        """,
        (
            today,
            "US_NASDAQ",
            "ready",
            json.dumps({"market": "US_NASDAQ", "stock_playbooks": []}),
            f"{today}T08:30:00+00:00",
            100,
            1,
            0,
        ),
    )
    conn.execute(
        """
        INSERT INTO contexts (layer, timeframe, key, value, created_at, updated_at)
        VALUES (?, ?, ?, ?, ?, ?)
        """,
        (
            "L6_DAILY",
            "2026-02-14",
            "scorecard_KR",
            json.dumps({"market": "KR", "total_pnl": 1.5, "win_rate": 60.0}),
            "2026-02-14T15:30:00+00:00",
            "2026-02-14T15:30:00+00:00",
        ),
    )
    conn.execute(
        """
        INSERT INTO contexts (layer, timeframe, key, value, created_at, updated_at)
        VALUES (?, ?, ?, ?, ?, ?)
        """,
        (
            "L7_REALTIME",
            "2026-02-14T10:00:00+00:00",
            "volatility_KR_005930",
            json.dumps({"momentum_score": 70.0}),
            "2026-02-14T10:00:00+00:00",
            "2026-02-14T10:00:00+00:00",
        ),
    )
    conn.execute(
        """
        INSERT INTO decision_logs (
            decision_id, timestamp, stock_code, market, exchange_code,
            action, confidence, rationale, context_snapshot, input_data
        ) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
        """,
        (
            "d-kr-1",
            f"{today}T09:10:00+00:00",
            "005930",
            "KR",
            "KRX",
            "BUY",
            85,
            "signal matched",
            json.dumps({"scenario_match": {"rsi": 28.0}}),
            json.dumps({"current_price": 70000}),
        ),
    )
    conn.execute(
        """
        INSERT INTO decision_logs (
            decision_id, timestamp, stock_code, market, exchange_code,
            action, confidence, rationale, context_snapshot, input_data
        ) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
        """,
        (
            "d-us-1",
            f"{today}T21:10:00+00:00",
            "AAPL",
            "US_NASDAQ",
            "NASDAQ",
            "SELL",
            80,
            "no match",
            json.dumps({"scenario_match": {}}),
            json.dumps({"current_price": 200}),
        ),
    )
    conn.execute(
        """
        INSERT INTO trades (
            timestamp, stock_code, action, confidence, rationale,
            quantity, price, pnl, market, exchange_code, selection_context, decision_id
        ) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
        """,
        (
            f"{today}T09:11:00+00:00",
            "005930",
            "BUY",
            85,
            "buy",
            1,
            70000,
            2.0,
            "KR",
            "KRX",
            None,
            "d-kr-1",
        ),
    )
    conn.execute(
        """
        INSERT INTO trades (
            timestamp, stock_code, action, confidence, rationale,
            quantity, price, pnl, market, exchange_code, selection_context, decision_id
        ) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
        """,
        (
            f"{today}T21:11:00+00:00",
            "AAPL",
            "SELL",
            80,
            "sell",
            1,
            200,
            -1.0,
            "US_NASDAQ",
            "NASDAQ",
            None,
            "d-us-1",
        ),
    )
    conn.commit()


def _app(tmp_path: Path) -> Any:
    db_path = tmp_path / "dashboard_test.db"
    conn = init_db(str(db_path))
    _seed_db(conn)
    conn.close()
    return create_dashboard_app(str(db_path))


def _endpoint(app: Any, path: str) -> Callable[..., Any]:
    for route in app.routes:
        if getattr(route, "path", None) == path:
            return route.endpoint
    raise AssertionError(f"route not found: {path}")
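These tests invoke the route handlers directly via `_endpoint`, which keeps them synchronous and free of HTTP machinery. The equivalent check through FastAPI's test client would look roughly like this (assumes the optional `httpx` dependency; not part of this suite):

```python
from fastapi.testclient import TestClient

def test_status_via_http(tmp_path: Path) -> None:
    client = TestClient(_app(tmp_path))
    resp = client.get("/api/status")
    assert resp.status_code == 200
    assert "markets" in resp.json()
```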
def test_index_serves_html(tmp_path: Path) -> None:
    app = _app(tmp_path)
    index = _endpoint(app, "/")
    resp = index()
    assert isinstance(resp, FileResponse)
    assert "index.html" in str(resp.path)


def test_status_endpoint(tmp_path: Path) -> None:
    app = _app(tmp_path)
    get_status = _endpoint(app, "/api/status")
    body = get_status()
    assert "KR" in body["markets"]
    assert "US_NASDAQ" in body["markets"]
    assert "totals" in body


def test_playbook_found(tmp_path: Path) -> None:
    app = _app(tmp_path)
    get_playbook = _endpoint(app, "/api/playbook/{date_str}")
    body = get_playbook("2026-02-14", market="KR")
    assert body["market"] == "KR"


def test_playbook_not_found(tmp_path: Path) -> None:
    app = _app(tmp_path)
    get_playbook = _endpoint(app, "/api/playbook/{date_str}")
    with pytest.raises(HTTPException, match="playbook not found"):
        get_playbook("2026-02-15", market="KR")


def test_scorecard_found(tmp_path: Path) -> None:
    app = _app(tmp_path)
    get_scorecard = _endpoint(app, "/api/scorecard/{date_str}")
    body = get_scorecard("2026-02-14", market="KR")
    assert body["scorecard"]["total_pnl"] == 1.5


def test_scorecard_not_found(tmp_path: Path) -> None:
    app = _app(tmp_path)
    get_scorecard = _endpoint(app, "/api/scorecard/{date_str}")
    with pytest.raises(HTTPException, match="scorecard not found"):
        get_scorecard("2026-02-15", market="KR")


def test_performance_all(tmp_path: Path) -> None:
    app = _app(tmp_path)
    get_performance = _endpoint(app, "/api/performance")
    body = get_performance(market="all")
    assert body["market"] == "all"
    assert body["combined"]["total_trades"] == 2
    assert len(body["by_market"]) == 2


def test_performance_market_filter(tmp_path: Path) -> None:
    app = _app(tmp_path)
    get_performance = _endpoint(app, "/api/performance")
    body = get_performance(market="KR")
    assert body["market"] == "KR"
    assert body["metrics"]["total_trades"] == 1


def test_performance_empty_market(tmp_path: Path) -> None:
    app = _app(tmp_path)
    get_performance = _endpoint(app, "/api/performance")
    body = get_performance(market="JP")
    assert body["metrics"]["total_trades"] == 0


def test_context_layer_all(tmp_path: Path) -> None:
    app = _app(tmp_path)
    get_context_layer = _endpoint(app, "/api/context/{layer}")
    body = get_context_layer("L7_REALTIME", timeframe=None, limit=100)
    assert body["layer"] == "L7_REALTIME"
    assert body["count"] == 1


def test_context_layer_timeframe_filter(tmp_path: Path) -> None:
    app = _app(tmp_path)
    get_context_layer = _endpoint(app, "/api/context/{layer}")
    body = get_context_layer("L6_DAILY", timeframe="2026-02-14", limit=100)
    assert body["count"] == 1
    assert body["entries"][0]["key"] == "scorecard_KR"


def test_decisions_endpoint(tmp_path: Path) -> None:
    app = _app(tmp_path)
    get_decisions = _endpoint(app, "/api/decisions")
    body = get_decisions(market="KR", limit=50)
    assert body["count"] == 1
    assert body["decisions"][0]["decision_id"] == "d-kr-1"


def test_scenarios_active_filters_non_matched(tmp_path: Path) -> None:
    app = _app(tmp_path)
    get_active_scenarios = _endpoint(app, "/api/scenarios/active")
    body = get_active_scenarios(
        market="KR",
        date_str=datetime.now(UTC).date().isoformat(),
        limit=50,
    )
    assert body["count"] == 1
    assert body["matches"][0]["stock_code"] == "005930"


def test_scenarios_active_empty_when_no_matches(tmp_path: Path) -> None:
    app = _app(tmp_path)
    get_active_scenarios = _endpoint(app, "/api/scenarios/active")
    body = get_active_scenarios(market="US", date_str="2026-02-14", limit=50)
    assert body["count"] == 0
tests/test_db.py (new file, 60 lines)
@@ -0,0 +1,60 @@
"""Tests for database helper functions."""

from src.db import get_open_position, init_db, log_trade


def test_get_open_position_returns_latest_buy() -> None:
    conn = init_db(":memory:")
    log_trade(
        conn=conn,
        stock_code="005930",
        action="BUY",
        confidence=90,
        rationale="entry",
        quantity=2,
        price=70000.0,
        market="KR",
        exchange_code="KRX",
        decision_id="d-buy-1",
    )

    position = get_open_position(conn, "005930", "KR")
    assert position is not None
    assert position["decision_id"] == "d-buy-1"
    assert position["price"] == 70000.0
    assert position["quantity"] == 2


def test_get_open_position_returns_none_when_latest_is_sell() -> None:
    conn = init_db(":memory:")
    log_trade(
        conn=conn,
        stock_code="005930",
        action="BUY",
        confidence=90,
        rationale="entry",
        quantity=1,
        price=70000.0,
        market="KR",
        exchange_code="KRX",
        decision_id="d-buy-1",
    )
    log_trade(
        conn=conn,
        stock_code="005930",
        action="SELL",
        confidence=95,
        rationale="exit",
        quantity=1,
        price=71000.0,
        market="KR",
        exchange_code="KRX",
        decision_id="d-sell-1",
    )

    assert get_open_position(conn, "005930", "KR") is None


def test_get_open_position_returns_none_when_no_trades() -> None:
    conn = init_db(":memory:")
    assert get_open_position(conn, "AAPL", "US_NASDAQ") is None
@@ -1,12 +1,26 @@
"""Tests for main trading loop integration."""

-from datetime import date
-from unittest.mock import AsyncMock, MagicMock, patch
+from datetime import UTC, date, datetime
+from unittest.mock import ANY, AsyncMock, MagicMock, patch

import pytest

from src.config import Settings
from src.context.layer import ContextLayer
from src.context.scheduler import ScheduleResult
from src.core.risk_manager import CircuitBreakerTripped, FatFingerRejected
-from src.main import safe_float, trading_cycle
+from src.db import init_db, log_trade
+from src.evolution.scorecard import DailyScorecard
+from src.logging.decision_logger import DecisionLogger
+from src.main import (
+    _apply_dashboard_flag,
+    _handle_market_close,
+    _run_context_scheduler,
+    _run_evolution_loop,
+    _start_dashboard_server,
+    safe_float,
+    trading_cycle,
+)
from src.strategy.models import (
    DayPlaybook,
    ScenarioAction,

@@ -43,6 +57,17 @@ def _make_hold_match(stock_code: str = "005930") -> ScenarioMatch:
    )


def _make_sell_match(stock_code: str = "005930") -> ScenarioMatch:
    """Create a ScenarioMatch that returns SELL."""
    return ScenarioMatch(
        stock_code=stock_code,
        matched_scenario=None,
        action=ScenarioAction.SELL,
        confidence=90,
        rationale="Test sell",
    )


class TestSafeFloat:
    """Test safe_float() helper function."""

@@ -91,6 +116,7 @@ class TestTradingCycleTelegramIntegration:
                "output1": {
                    "stck_prpr": "50000",
                    "frgn_ntby_qty": "100",
+                   "prdy_ctrt": "1.23",
                }
            }
        )

@@ -722,7 +748,7 @@ class TestScenarioEngineIntegration:
        broker = MagicMock()
        broker.get_orderbook = AsyncMock(
            return_value={
-               "output1": {"stck_prpr": "50000", "frgn_ntby_qty": "100"}
+               "output1": {"stck_prpr": "50000", "frgn_ntby_qty": "100", "prdy_ctrt": "2.50"}
            }
        )
        broker.get_balance = AsyncMock(

@@ -805,11 +831,75 @@ class TestScenarioEngineIntegration:
        assert market_data["rsi"] == 25.0
        assert market_data["volume_ratio"] == 3.5
        assert market_data["current_price"] == 50000.0
+       assert market_data["price_change_pct"] == 2.5

        # Portfolio data should include pnl
        assert "portfolio_pnl_pct" in portfolio_data
        assert "total_cash" in portfolio_data

    @pytest.mark.asyncio
    async def test_trading_cycle_sets_l7_context_keys(
        self, mock_broker: MagicMock, mock_market: MagicMock, mock_telegram: MagicMock,
    ) -> None:
        """Test L7 context is written with market-scoped keys."""
        from src.analysis.smart_scanner import ScanCandidate

        engine = MagicMock(spec=ScenarioEngine)
        engine.evaluate = MagicMock(return_value=_make_hold_match())
        playbook = _make_playbook()
        context_store = MagicMock(get_latest_timeframe=MagicMock(return_value=None))

        candidate = ScanCandidate(
            stock_code="005930", name="Samsung", price=50000,
            volume=1000000, volume_ratio=3.5, rsi=25.0,
            signal="oversold", score=85.0,
        )

        with patch("src.main.log_trade"):
            await trading_cycle(
                broker=mock_broker,
                overseas_broker=MagicMock(),
                scenario_engine=engine,
                playbook=playbook,
                risk=MagicMock(),
                db_conn=MagicMock(),
                decision_logger=MagicMock(),
                context_store=context_store,
                criticality_assessor=MagicMock(
                    assess_market_conditions=MagicMock(return_value=MagicMock(value="NORMAL")),
                    get_timeout=MagicMock(return_value=5.0),
                ),
                telegram=mock_telegram,
                market=mock_market,
                stock_code="005930",
                scan_candidates={"KR": {"005930": candidate}},
            )

        context_store.set_context.assert_any_call(
            ContextLayer.L7_REALTIME,
            ANY,
            "volatility_KR_005930",
            {"momentum_score": 50.0, "volume_surge": 1.0, "price_change_1m": 0.0},
        )
        context_store.set_context.assert_any_call(
            ContextLayer.L7_REALTIME,
            ANY,
            "price_KR_005930",
            {"current_price": 50000.0},
        )
        context_store.set_context.assert_any_call(
            ContextLayer.L7_REALTIME,
            ANY,
            "rsi_KR_005930",
            {"rsi": 25.0},
        )
        context_store.set_context.assert_any_call(
            ContextLayer.L7_REALTIME,
            ANY,
            "volume_ratio_KR_005930",
            {"volume_ratio": 3.5},
        )

    @pytest.mark.asyncio
    async def test_scan_candidates_market_scoped(
        self, mock_broker: MagicMock, mock_market: MagicMock, mock_telegram: MagicMock,

@@ -1049,3 +1139,465 @@ class TestScenarioEngineIntegration:
        # REDUCE_ALL is not BUY or SELL — no order sent
        mock_broker.send_order.assert_not_called()
        mock_telegram.notify_trade_execution.assert_not_called()


@pytest.mark.asyncio
async def test_sell_updates_original_buy_decision_outcome() -> None:
    """SELL should update the original BUY decision outcome in decision_logs."""
    db_conn = init_db(":memory:")
    decision_logger = DecisionLogger(db_conn)

    buy_decision_id = decision_logger.log_decision(
        stock_code="005930",
        market="KR",
        exchange_code="KRX",
        action="BUY",
        confidence=85,
        rationale="Initial buy",
        context_snapshot={},
        input_data={},
    )
    log_trade(
        conn=db_conn,
        stock_code="005930",
        action="BUY",
        confidence=85,
        rationale="Initial buy",
        quantity=1,
        price=100.0,
        pnl=0.0,
        market="KR",
        exchange_code="KRX",
        decision_id=buy_decision_id,
    )

    broker = MagicMock()
    broker.get_orderbook = AsyncMock(
        return_value={"output1": {"stck_prpr": "120", "frgn_ntby_qty": "0"}}
    )
    broker.get_balance = AsyncMock(
        return_value={
            "output2": [
                {
                    "tot_evlu_amt": "100000",
                    "dnca_tot_amt": "10000",
                    "pchs_amt_smtl_amt": "90000",
                }
            ]
        }
    )
    broker.send_order = AsyncMock(return_value={"msg1": "OK"})

    overseas_broker = MagicMock()
    engine = MagicMock(spec=ScenarioEngine)
    engine.evaluate = MagicMock(return_value=_make_sell_match())
    risk = MagicMock()
    context_store = MagicMock(
        get_latest_timeframe=MagicMock(return_value=None),
        set_context=MagicMock(),
    )
    criticality_assessor = MagicMock(
        assess_market_conditions=MagicMock(return_value=MagicMock(value="NORMAL")),
        get_timeout=MagicMock(return_value=5.0),
    )
    telegram = MagicMock()
    telegram.notify_trade_execution = AsyncMock()
    telegram.notify_fat_finger = AsyncMock()
    telegram.notify_circuit_breaker = AsyncMock()
    telegram.notify_scenario_matched = AsyncMock()

    market = MagicMock()
    market.name = "Korea"
    market.code = "KR"
    market.exchange_code = "KRX"
    market.is_domestic = True

    await trading_cycle(
        broker=broker,
        overseas_broker=overseas_broker,
        scenario_engine=engine,
        playbook=_make_playbook(),
        risk=risk,
        db_conn=db_conn,
        decision_logger=decision_logger,
        context_store=context_store,
        criticality_assessor=criticality_assessor,
        telegram=telegram,
        market=market,
        stock_code="005930",
        scan_candidates={},
    )

    updated_buy = decision_logger.get_decision_by_id(buy_decision_id)
    assert updated_buy is not None
    assert updated_buy.outcome_pnl == 20.0
    assert updated_buy.outcome_accuracy == 1
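The expected value follows from the seeded prices. Note that with entry 100 and exit 120 an absolute P&L and a percentage P&L both come out to 20.0, so this assertion alone does not pin down the unit (a quick check):

```python
entry_price, exit_price, quantity = 100.0, 120.0, 1
absolute_pnl = (exit_price - entry_price) * quantity           # 20.0
percent_pnl = (exit_price - entry_price) / entry_price * 100   # 20.0
assert absolute_pnl == percent_pnl == 20.0
```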
@pytest.mark.asyncio
async def test_hold_overridden_to_sell_when_stop_loss_triggered() -> None:
    """HOLD decision should be overridden to SELL when stop-loss threshold is breached."""
    db_conn = init_db(":memory:")
    decision_logger = DecisionLogger(db_conn)

    buy_decision_id = decision_logger.log_decision(
        stock_code="005930",
        market="KR",
        exchange_code="KRX",
        action="BUY",
        confidence=90,
        rationale="entry",
        context_snapshot={},
        input_data={},
    )
    log_trade(
        conn=db_conn,
        stock_code="005930",
        action="BUY",
        confidence=90,
        rationale="entry",
        quantity=1,
        price=100.0,
        market="KR",
        exchange_code="KRX",
        decision_id=buy_decision_id,
    )

    broker = MagicMock()
    broker.get_orderbook = AsyncMock(
        return_value={"output1": {"stck_prpr": "95", "frgn_ntby_qty": "0", "prdy_ctrt": "-5.0"}}
    )
    broker.get_balance = AsyncMock(
        return_value={
            "output2": [
                {
                    "tot_evlu_amt": "100000",
                    "dnca_tot_amt": "10000",
                    "pchs_amt_smtl_amt": "90000",
                }
            ]
        }
    )
    broker.send_order = AsyncMock(return_value={"msg1": "OK"})

    scenario = StockScenario(
        condition=StockCondition(rsi_below=30),
        action=ScenarioAction.BUY,
        confidence=88,
        stop_loss_pct=-2.0,
        rationale="stop loss policy",
    )
    playbook = DayPlaybook(
        date=date(2026, 2, 8),
        market="KR",
        stock_playbooks=[
            {"stock_code": "005930", "stock_name": "Samsung", "scenarios": [scenario]}
        ],
    )
    engine = MagicMock(spec=ScenarioEngine)
    engine.evaluate = MagicMock(return_value=_make_hold_match())

    market = MagicMock()
    market.name = "Korea"
    market.code = "KR"
    market.exchange_code = "KRX"
    market.is_domestic = True

    telegram = MagicMock()
    telegram.notify_trade_execution = AsyncMock()
    telegram.notify_fat_finger = AsyncMock()
    telegram.notify_circuit_breaker = AsyncMock()
    telegram.notify_scenario_matched = AsyncMock()

    await trading_cycle(
        broker=broker,
        overseas_broker=MagicMock(),
        scenario_engine=engine,
        playbook=playbook,
        risk=MagicMock(),
        db_conn=db_conn,
        decision_logger=decision_logger,
        context_store=MagicMock(
            get_latest_timeframe=MagicMock(return_value=None),
            set_context=MagicMock(),
        ),
        criticality_assessor=MagicMock(
            assess_market_conditions=MagicMock(return_value=MagicMock(value="NORMAL")),
            get_timeout=MagicMock(return_value=5.0),
        ),
        telegram=telegram,
        market=market,
        stock_code="005930",
        scan_candidates={},
    )

    broker.send_order.assert_called_once()
    assert broker.send_order.call_args.kwargs["order_type"] == "SELL"
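The override condition follows from the seeded numbers: the position is down 5% while the scenario's stop-loss sits at -2%, so the HOLD is forced into a SELL (a quick check):

```python
entry_price, current_price = 100.0, 95.0
change_pct = (current_price - entry_price) / entry_price * 100  # -5.0%
stop_loss_pct = -2.0
assert change_pct <= stop_loss_pct  # threshold breached, so HOLD becomes SELL
```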
@pytest.mark.asyncio
async def test_handle_market_close_runs_daily_review_flow() -> None:
    """Market close should aggregate, create scorecard, lessons, and notify."""
    telegram = MagicMock()
    telegram.notify_market_close = AsyncMock()
    telegram.send_message = AsyncMock()

    context_aggregator = MagicMock()
    reviewer = MagicMock()
    reviewer.generate_scorecard.return_value = DailyScorecard(
        date="2026-02-14",
        market="KR",
        total_decisions=3,
        buys=1,
        sells=1,
        holds=1,
        total_pnl=12.5,
        win_rate=50.0,
        avg_confidence=75.0,
        scenario_match_rate=66.7,
    )
    reviewer.generate_lessons = AsyncMock(return_value=["Cut losers faster"])

    await _handle_market_close(
        market_code="KR",
        market_name="Korea",
        market_timezone=UTC,
        telegram=telegram,
        context_aggregator=context_aggregator,
        daily_reviewer=reviewer,
    )

    telegram.notify_market_close.assert_called_once_with("Korea", 0.0)
    context_aggregator.aggregate_daily_from_trades.assert_called_once()
    reviewer.generate_scorecard.assert_called_once()
    assert reviewer.store_scorecard_in_context.call_count == 2
    reviewer.generate_lessons.assert_called_once()
    telegram.send_message.assert_called_once()


@pytest.mark.asyncio
async def test_handle_market_close_without_lessons_stores_once() -> None:
    """If no lessons are generated, scorecard should be stored once."""
    telegram = MagicMock()
    telegram.notify_market_close = AsyncMock()
    telegram.send_message = AsyncMock()

    context_aggregator = MagicMock()
    reviewer = MagicMock()
    reviewer.generate_scorecard.return_value = DailyScorecard(
        date="2026-02-14",
        market="US",
        total_decisions=1,
        buys=0,
        sells=1,
        holds=0,
        total_pnl=-3.0,
        win_rate=0.0,
        avg_confidence=65.0,
        scenario_match_rate=100.0,
    )
    reviewer.generate_lessons = AsyncMock(return_value=[])

    await _handle_market_close(
        market_code="US",
        market_name="United States",
        market_timezone=UTC,
        telegram=telegram,
        context_aggregator=context_aggregator,
        daily_reviewer=reviewer,
    )

    assert reviewer.store_scorecard_in_context.call_count == 1


@pytest.mark.asyncio
async def test_handle_market_close_triggers_evolution_for_us() -> None:
    telegram = MagicMock()
    telegram.notify_market_close = AsyncMock()
    telegram.send_message = AsyncMock()

    context_aggregator = MagicMock()
    reviewer = MagicMock()
    reviewer.generate_scorecard.return_value = DailyScorecard(
        date="2026-02-14",
        market="US",
        total_decisions=2,
        buys=1,
        sells=1,
        holds=0,
        total_pnl=3.0,
        win_rate=50.0,
        avg_confidence=80.0,
        scenario_match_rate=100.0,
    )
    reviewer.generate_lessons = AsyncMock(return_value=[])

    evolution_optimizer = MagicMock()
    evolution_optimizer.evolve = AsyncMock(return_value=None)

    await _handle_market_close(
        market_code="US",
        market_name="United States",
        market_timezone=UTC,
        telegram=telegram,
        context_aggregator=context_aggregator,
        daily_reviewer=reviewer,
        evolution_optimizer=evolution_optimizer,
    )

    evolution_optimizer.evolve.assert_called_once()


@pytest.mark.asyncio
async def test_handle_market_close_skips_evolution_for_kr() -> None:
    telegram = MagicMock()
    telegram.notify_market_close = AsyncMock()
    telegram.send_message = AsyncMock()

    context_aggregator = MagicMock()
    reviewer = MagicMock()
    reviewer.generate_scorecard.return_value = DailyScorecard(
        date="2026-02-14",
        market="KR",
        total_decisions=1,
        buys=1,
        sells=0,
        holds=0,
        total_pnl=1.0,
        win_rate=100.0,
        avg_confidence=90.0,
        scenario_match_rate=100.0,
    )
    reviewer.generate_lessons = AsyncMock(return_value=[])

    evolution_optimizer = MagicMock()
    evolution_optimizer.evolve = AsyncMock(return_value=None)

    await _handle_market_close(
        market_code="KR",
        market_name="Korea",
        market_timezone=UTC,
        telegram=telegram,
        context_aggregator=context_aggregator,
        daily_reviewer=reviewer,
        evolution_optimizer=evolution_optimizer,
    )

    evolution_optimizer.evolve.assert_not_called()


def test_run_context_scheduler_invokes_scheduler() -> None:
    """Scheduler helper should call run_if_due with provided datetime."""
    scheduler = MagicMock()
    scheduler.run_if_due = MagicMock(return_value=ScheduleResult(cleanup=True))

    _run_context_scheduler(scheduler, now=datetime(2026, 2, 14, tzinfo=UTC))

    scheduler.run_if_due.assert_called_once()


@pytest.mark.asyncio
async def test_run_evolution_loop_skips_non_us_market() -> None:
    optimizer = MagicMock()
    optimizer.evolve = AsyncMock()
    telegram = MagicMock()
    telegram.send_message = AsyncMock()

    await _run_evolution_loop(
        evolution_optimizer=optimizer,
        telegram=telegram,
        market_code="KR",
        market_date="2026-02-14",
    )

    optimizer.evolve.assert_not_called()
    telegram.send_message.assert_not_called()


@pytest.mark.asyncio
async def test_run_evolution_loop_notifies_when_pr_generated() -> None:
    optimizer = MagicMock()
    optimizer.evolve = AsyncMock(
        return_value={
            "title": "[Evolution] New strategy: v20260214_050000",
            "branch": "evolution/v20260214_050000",
            "status": "ready_for_review",
        }
    )
    telegram = MagicMock()
    telegram.send_message = AsyncMock()

    await _run_evolution_loop(
        evolution_optimizer=optimizer,
        telegram=telegram,
        market_code="US_NASDAQ",
        market_date="2026-02-14",
    )

    optimizer.evolve.assert_called_once()
    telegram.send_message.assert_called_once()


@pytest.mark.asyncio
async def test_run_evolution_loop_notification_error_is_ignored() -> None:
    optimizer = MagicMock()
    optimizer.evolve = AsyncMock(
        return_value={
            "title": "[Evolution] New strategy: v20260214_050000",
            "branch": "evolution/v20260214_050000",
            "status": "ready_for_review",
        }
    )
    telegram = MagicMock()
    telegram.send_message = AsyncMock(side_effect=RuntimeError("telegram down"))

    await _run_evolution_loop(
        evolution_optimizer=optimizer,
        telegram=telegram,
        market_code="US_NYSE",
        market_date="2026-02-14",
    )

    optimizer.evolve.assert_called_once()
    telegram.send_message.assert_called_once()


def test_apply_dashboard_flag_enables_dashboard() -> None:
    settings = Settings(
        KIS_APP_KEY="test_key",
        KIS_APP_SECRET="test_secret",
        KIS_ACCOUNT_NO="12345678-01",
        GEMINI_API_KEY="test_gemini_key",
        DASHBOARD_ENABLED=False,
    )
    updated = _apply_dashboard_flag(settings, dashboard_flag=True)
    assert updated.DASHBOARD_ENABLED is True


def test_start_dashboard_server_disabled_returns_none() -> None:
    settings = Settings(
        KIS_APP_KEY="test_key",
        KIS_APP_SECRET="test_secret",
        KIS_ACCOUNT_NO="12345678-01",
        GEMINI_API_KEY="test_gemini_key",
        DASHBOARD_ENABLED=False,
    )
    thread = _start_dashboard_server(settings)
    assert thread is None


def test_start_dashboard_server_enabled_starts_thread() -> None:
    settings = Settings(
        KIS_APP_KEY="test_key",
        KIS_APP_SECRET="test_secret",
        KIS_ACCOUNT_NO="12345678-01",
        GEMINI_API_KEY="test_gemini_key",
        DASHBOARD_ENABLED=True,
    )
    mock_thread = MagicMock()
    with patch("src.main.threading.Thread", return_value=mock_thread) as mock_thread_cls:
        thread = _start_dashboard_server(settings)

    assert thread == mock_thread
    mock_thread_cls.assert_called_once()
    mock_thread.start.assert_called_once()
@@ -7,6 +7,7 @@ import pytest

from src.markets.schedule import (
    MARKETS,
+   expand_market_codes,
    get_next_market_open,
    get_open_markets,
    is_market_open,

@@ -199,3 +200,28 @@ class TestGetNextMarketOpen:
            enabled_markets=["INVALID", "KR"], now=test_time
        )
        assert market.code == "KR"


class TestExpandMarketCodes:
    """Test shorthand market expansion."""

    def test_expand_us_shorthand(self) -> None:
        assert expand_market_codes(["US"]) == ["US_NASDAQ", "US_NYSE", "US_AMEX"]

    def test_expand_cn_shorthand(self) -> None:
        assert expand_market_codes(["CN"]) == ["CN_SHA", "CN_SZA"]

    def test_expand_vn_shorthand(self) -> None:
        assert expand_market_codes(["VN"]) == ["VN_HAN", "VN_HCM"]

    def test_expand_mixed_codes(self) -> None:
        assert expand_market_codes(["KR", "US", "JP"]) == [
            "KR",
            "US_NASDAQ",
            "US_NYSE",
            "US_AMEX",
            "JP",
        ]

    def test_expand_preserves_unknown_code(self) -> None:
        assert expand_market_codes(["KR", "UNKNOWN"]) == ["KR", "UNKNOWN"]
@@ -9,6 +9,7 @@ from unittest.mock import AsyncMock, MagicMock
|
||||
import pytest
|
||||
|
||||
from src.analysis.smart_scanner import ScanCandidate
|
||||
from src.brain.context_selector import DecisionType
|
||||
from src.brain.gemini_client import TradeDecision
|
||||
from src.config import Settings
|
||||
from src.context.store import ContextLayer
|
||||
@@ -16,12 +17,10 @@ from src.strategy.models import (
|
||||
CrossMarketContext,
|
||||
DayPlaybook,
|
||||
MarketOutlook,
|
||||
PlaybookStatus,
|
||||
ScenarioAction,
|
||||
)
|
||||
from src.strategy.pre_market_planner import PreMarketPlanner
|
||||
|
||||
|
||||
# ---------------------------------------------------------------------------
|
||||
# Fixtures
|
||||
# ---------------------------------------------------------------------------
|
||||
@@ -89,6 +88,7 @@ def _make_planner(
|
||||
token_count: int = 200,
|
||||
context_data: dict | None = None,
|
||||
scorecard_data: dict | None = None,
|
||||
scorecard_map: dict[tuple[str, str, str], dict | None] | None = None,
|
||||
) -> PreMarketPlanner:
|
||||
"""Create a PreMarketPlanner with mocked dependencies."""
|
||||
if not gemini_response:
|
||||
@@ -107,11 +107,20 @@ def _make_planner(
|
||||
|
||||
# Mock ContextStore
|
||||
store = MagicMock()
|
||||
store.get_context = MagicMock(return_value=scorecard_data)
|
||||
if scorecard_map is not None:
|
||||
store.get_context = MagicMock(
|
||||
side_effect=lambda layer, timeframe, key: scorecard_map.get(
|
||||
(layer.value if hasattr(layer, "value") else layer, timeframe, key)
|
||||
)
|
||||
)
|
||||
else:
|
||||
store.get_context = MagicMock(return_value=scorecard_data)
|
||||
|
||||
# Mock ContextSelector
|
||||
selector = MagicMock()
|
||||
selector.select_layers = MagicMock(return_value=[ContextLayer.L7_REALTIME, ContextLayer.L6_DAILY])
|
||||
selector.select_layers = MagicMock(
|
||||
return_value=[ContextLayer.L7_REALTIME, ContextLayer.L6_DAILY]
|
||||
)
|
||||
selector.get_context_data = MagicMock(return_value=context_data or {})
|
||||
|
||||
settings = Settings(
|
||||
@@ -220,11 +229,25 @@ class TestGeneratePlaybook:
        stocks = [
            {
                "stock_code": "005930",
                "scenarios": [{"condition": {"rsi_below": 30}, "action": "BUY", "confidence": 85, "rationale": "ok"}],
                "scenarios": [
                    {
                        "condition": {"rsi_below": 30},
                        "action": "BUY",
                        "confidence": 85,
                        "rationale": "ok",
                    }
                ],
            },
            {
                "stock_code": "UNKNOWN",
                "scenarios": [{"condition": {"rsi_below": 20}, "action": "BUY", "confidence": 90, "rationale": "bad"}],
                "scenarios": [
                    {
                        "condition": {"rsi_below": 20},
                        "action": "BUY",
                        "confidence": 90,
                        "rationale": "bad",
                    }
                ],
            },
        ]
        planner = _make_planner(gemini_response=_gemini_response_json(stocks=stocks))
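Each scenario dict pairs a trigger condition with an action and a confidence score. Purely as a reading aid, a hypothetical evaluator for the `rsi_below` condition shape used above (the real evaluation lives in the scenario engine under `src/strategy/`; `condition_fires` is made up here):

```
def condition_fires(condition: dict, indicators: dict) -> bool:
    """Hypothetical check: does an 'rsi_below' scenario condition trigger?"""
    threshold = condition.get("rsi_below")
    return threshold is not None and indicators["rsi"] < threshold


scenario = {"condition": {"rsi_below": 30}, "action": "BUY", "confidence": 85}
assert condition_fires(scenario["condition"], {"rsi": 25.0})       # fires
assert not condition_fires(scenario["condition"], {"rsi": 41.2})   # does not
```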
@@ -254,6 +277,43 @@ class TestGeneratePlaybook:

        assert pb.token_count == 450

    @pytest.mark.asyncio
    async def test_generate_playbook_uses_strategic_context_selector(self) -> None:
        planner = _make_planner()
        candidates = [_candidate()]

        await planner.generate_playbook("KR", candidates, today=date(2026, 2, 8))

        planner._context_selector.select_layers.assert_called_once_with(
            decision_type=DecisionType.STRATEGIC,
            include_realtime=True,
        )
        planner._context_selector.get_context_data.assert_called_once()

    @pytest.mark.asyncio
    async def test_generate_playbook_injects_self_and_cross_scorecards(self) -> None:
        scorecard_map = {
            (ContextLayer.L6_DAILY.value, "2026-02-07", "scorecard_KR"): {
                "total_pnl": -1.0,
                "win_rate": 40,
                "lessons": ["Tighten entries"],
            },
            (ContextLayer.L6_DAILY.value, "2026-02-07", "scorecard_US"): {
                "total_pnl": 1.5,
                "win_rate": 62,
                "index_change_pct": 0.9,
                "lessons": ["Follow momentum"],
            },
        }
        planner = _make_planner(scorecard_map=scorecard_map)

        await planner.generate_playbook("KR", [_candidate()], today=date(2026, 2, 8))

        call_market_data = planner._gemini.decide.call_args.args[0]
        prompt = call_market_data["prompt_override"]
        assert "My Market Previous Day (KR)" in prompt
        assert "Other Market (US)" in prompt


# ---------------------------------------------------------------------------
# _parse_response
@@ -402,7 +462,12 @@ class TestParseResponse:

class TestBuildCrossMarketContext:
    def test_kr_reads_us_scorecard(self) -> None:
        scorecard = {"total_pnl": 2.5, "win_rate": 65, "index_change_pct": 0.8, "lessons": ["Stay patient"]}
        scorecard = {
            "total_pnl": 2.5,
            "win_rate": 65,
            "index_change_pct": 0.8,
            "lessons": ["Stay patient"],
        }
        planner = _make_planner(scorecard_data=scorecard)

        ctx = planner.build_cross_market_context("KR", today=date(2026, 2, 8))
@@ -415,8 +480,9 @@

        # Verify it queried scorecard_US
        planner._context_store.get_context.assert_called_once_with(
            ContextLayer.L6_DAILY, "2026-02-08", "scorecard_US"
            ContextLayer.L6_DAILY, "2026-02-07", "scorecard_US"
        )
        assert ctx.date == "2026-02-07"

    def test_us_reads_kr_scorecard(self) -> None:
        scorecard = {"total_pnl": -1.0, "win_rate": 40, "index_change_pct": -0.5}
@@ -447,6 +513,32 @@
        assert ctx is None


# ---------------------------------------------------------------------------
# build_self_market_scorecard
# ---------------------------------------------------------------------------


class TestBuildSelfMarketScorecard:
    def test_reads_previous_day_scorecard(self) -> None:
        scorecard = {"total_pnl": -1.2, "win_rate": 45, "lessons": ["Reduce overtrading"]}
        planner = _make_planner(scorecard_data=scorecard)

        data = planner.build_self_market_scorecard("KR", today=date(2026, 2, 8))

        assert data is not None
        assert data["date"] == "2026-02-07"
        assert data["total_pnl"] == -1.2
        assert data["win_rate"] == 45
        assert "Reduce overtrading" in data["lessons"]
        planner._context_store.get_context.assert_called_once_with(
            ContextLayer.L6_DAILY, "2026-02-07", "scorecard_KR"
        )

    def test_missing_scorecard_returns_none(self) -> None:
        planner = _make_planner(scorecard_data=None)
        assert planner.build_self_market_scorecard("US", today=date(2026, 2, 8)) is None
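As the tests specify, `build_self_market_scorecard` looks up yesterday's scorecard for the planner's own market and returns it with the date attached, or `None` when nothing is stored. A minimal sketch consistent with those assertions (hypothetical: the layer is passed as a plain string for brevity, and the date arithmetic is simplified to calendar-day minus one; the real method sits on `PreMarketPlanner`):

```
from datetime import date, timedelta


def build_self_market_scorecard(store, market: str, today: date) -> dict | None:
    """Sketch: fetch the previous day's own-market scorecard from L6_DAILY."""
    prev_day = (today - timedelta(days=1)).isoformat()  # "2026-02-07" for Feb 8
    scorecard = store.get_context("L6_DAILY", prev_day, f"scorecard_{market}")
    if scorecard is None:
        return None
    return {"date": prev_day, **scorecard}
```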
# ---------------------------------------------------------------------------
# _build_prompt
# ---------------------------------------------------------------------------
@@ -457,7 +549,7 @@ class TestBuildPrompt:
        planner = _make_planner()
        candidates = [_candidate(code="005930", name="Samsung")]

        prompt = planner._build_prompt("KR", candidates, {}, None)
        prompt = planner._build_prompt("KR", candidates, {}, None, None)

        assert "005930" in prompt
        assert "Samsung" in prompt
@@ -471,7 +563,7 @@
            win_rate=60, index_change_pct=0.8, lessons=["Cut losses early"],
        )

        prompt = planner._build_prompt("KR", [_candidate()], {}, cross)
        prompt = planner._build_prompt("KR", [_candidate()], {}, None, cross)

        assert "Other Market (US)" in prompt
        assert "+1.50%" in prompt
@@ -481,7 +573,7 @@
        planner = _make_planner()
        context = {"L6_DAILY": {"win_rate": 0.65, "total_pnl": 2.5}}

        prompt = planner._build_prompt("KR", [_candidate()], context, None)
        prompt = planner._build_prompt("KR", [_candidate()], context, None, None)

        assert "Strategic Context" in prompt
        assert "L6_DAILY" in prompt
@@ -489,15 +581,30 @@

    def test_prompt_contains_max_scenarios(self) -> None:
        planner = _make_planner()
        prompt = planner._build_prompt("KR", [_candidate()], {}, None)
        prompt = planner._build_prompt("KR", [_candidate()], {}, None, None)

        assert f"Max {planner._settings.MAX_SCENARIOS_PER_STOCK} scenarios" in prompt

    def test_prompt_market_name(self) -> None:
        planner = _make_planner()
        prompt = planner._build_prompt("US", [_candidate()], {}, None)
        prompt = planner._build_prompt("US", [_candidate()], {}, None, None)
        assert "US market" in prompt

    def test_prompt_contains_self_market_scorecard(self) -> None:
        planner = _make_planner()
        self_scorecard = {
            "date": "2026-02-07",
            "total_pnl": -0.8,
            "win_rate": 45.0,
            "lessons": ["Avoid midday entries"],
        }
        prompt = planner._build_prompt("KR", [_candidate()], {}, self_scorecard, None)

        assert "My Market Previous Day (KR)" in prompt
        assert "2026-02-07" in prompt
        assert "-0.80%" in prompt
        assert "Avoid midday entries" in prompt
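The new fifth parameter threads the self-market scorecard into the prompt. A hypothetical formatter consistent with the assertions above (only the section heading text and the `%+.2f`-style P&L rendering are taken from the expected strings; the exact layout is guesswork):

```
def format_self_scorecard(market: str, sc: dict) -> str:
    """Sketch of the prompt section the tests check for."""
    lines = [
        f"My Market Previous Day ({market})",
        f"Date: {sc['date']}",
        f"P&L: {sc['total_pnl']:+.2f}%",  # -0.8 renders as "-0.80%"
        f"Win rate: {sc['win_rate']:.0f}%",
    ]
    lines += [f"- {lesson}" for lesson in sc.get("lessons", [])]
    return "\n".join(lines)
```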
# ---------------------------------------------------------------------------
# _extract_json
81
tests/test_scorecard.py
Normal file
@@ -0,0 +1,81 @@
"""Tests for DailyScorecard model."""

from __future__ import annotations

from src.evolution.scorecard import DailyScorecard


def test_scorecard_initialization() -> None:
    scorecard = DailyScorecard(
        date="2026-02-08",
        market="KR",
        total_decisions=10,
        buys=3,
        sells=2,
        holds=5,
        total_pnl=1234.5,
        win_rate=60.0,
        avg_confidence=78.5,
        scenario_match_rate=70.0,
        top_winners=["005930", "000660"],
        top_losers=["035420"],
        lessons=["Avoid chasing breakouts"],
        cross_market_note="US volatility spillover",
    )

    assert scorecard.market == "KR"
    assert scorecard.total_decisions == 10
    assert scorecard.total_pnl == 1234.5
    assert scorecard.top_winners == ["005930", "000660"]
    assert scorecard.lessons == ["Avoid chasing breakouts"]
    assert scorecard.cross_market_note == "US volatility spillover"


def test_scorecard_defaults() -> None:
    scorecard = DailyScorecard(
        date="2026-02-08",
        market="US",
        total_decisions=0,
        buys=0,
        sells=0,
        holds=0,
        total_pnl=0.0,
        win_rate=0.0,
        avg_confidence=0.0,
        scenario_match_rate=0.0,
    )

    assert scorecard.top_winners == []
    assert scorecard.top_losers == []
    assert scorecard.lessons == []
    assert scorecard.cross_market_note == ""


def test_scorecard_list_isolation() -> None:
    a = DailyScorecard(
        date="2026-02-08",
        market="KR",
        total_decisions=1,
        buys=1,
        sells=0,
        holds=0,
        total_pnl=10.0,
        win_rate=100.0,
        avg_confidence=90.0,
        scenario_match_rate=100.0,
    )
    b = DailyScorecard(
        date="2026-02-08",
        market="US",
        total_decisions=1,
        buys=0,
        sells=1,
        holds=0,
        total_pnl=-5.0,
        win_rate=0.0,
        avg_confidence=60.0,
        scenario_match_rate=50.0,
    )

    a.top_winners.append("005930")
    assert b.top_winners == []
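These tests pin down the model's shape: required scalar fields, list fields that default to a fresh empty list per instance, and an empty-string note. A dataclass sketch that would satisfy all three tests (hypothetical; the real definition is in `src/evolution/scorecard.py`):

```
from dataclasses import dataclass, field


@dataclass
class DailyScorecard:
    date: str
    market: str
    total_decisions: int
    buys: int
    sells: int
    holds: int
    total_pnl: float
    win_rate: float
    avg_confidence: float
    scenario_match_rate: float
    # default_factory gives each instance its own list (see the isolation test)
    top_winners: list[str] = field(default_factory=list)
    top_losers: list[str] = field(default_factory=list)
    lessons: list[str] = field(default_factory=list)
    cross_market_note: str = ""
```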
@@ -682,6 +682,10 @@ class TestBasicCommands:
            "/help - Show available commands\n"
            "/status - Trading status (mode, markets, P&L)\n"
            "/positions - Current holdings\n"
            "/report - Daily summary report\n"
            "/scenarios - Today's playbook scenarios\n"
            "/review - Recent scorecards\n"
            "/dashboard - Dashboard URL/status\n"
            "/stop - Pause trading\n"
            "/resume - Resume trading"
        )
@@ -707,10 +711,106 @@
        assert "/help" in payload["text"]
        assert "/status" in payload["text"]
        assert "/positions" in payload["text"]
        assert "/report" in payload["text"]
        assert "/scenarios" in payload["text"]
        assert "/review" in payload["text"]
        assert "/dashboard" in payload["text"]
        assert "/stop" in payload["text"]
        assert "/resume" in payload["text"]


class TestExtendedCommands:
    """Test additional bot commands."""

    @pytest.mark.asyncio
    async def test_report_command(self) -> None:
        client = TelegramClient(bot_token="123:abc", chat_id="456", enabled=True)
        handler = TelegramCommandHandler(client)

        mock_resp = AsyncMock()
        mock_resp.status = 200
        mock_resp.__aenter__ = AsyncMock(return_value=mock_resp)
        mock_resp.__aexit__ = AsyncMock(return_value=False)

        async def mock_report() -> None:
            await client.send_message("<b>📈 Daily Report</b>\n\nTrades: 1")

        handler.register_command("report", mock_report)

        with patch("aiohttp.ClientSession.post", return_value=mock_resp) as mock_post:
            await handler._handle_update(
                {"update_id": 1, "message": {"chat": {"id": 456}, "text": "/report"}}
            )
        payload = mock_post.call_args.kwargs["json"]
        assert "Daily Report" in payload["text"]

    @pytest.mark.asyncio
    async def test_scenarios_command(self) -> None:
        client = TelegramClient(bot_token="123:abc", chat_id="456", enabled=True)
        handler = TelegramCommandHandler(client)

        mock_resp = AsyncMock()
        mock_resp.status = 200
        mock_resp.__aenter__ = AsyncMock(return_value=mock_resp)
        mock_resp.__aexit__ = AsyncMock(return_value=False)

        async def mock_scenarios() -> None:
            await client.send_message("<b>🧠 Today's Scenarios</b>\n\n- AAPL: BUY (85)")

        handler.register_command("scenarios", mock_scenarios)

        with patch("aiohttp.ClientSession.post", return_value=mock_resp) as mock_post:
            await handler._handle_update(
                {"update_id": 1, "message": {"chat": {"id": 456}, "text": "/scenarios"}}
            )
        payload = mock_post.call_args.kwargs["json"]
        assert "Today's Scenarios" in payload["text"]

    @pytest.mark.asyncio
    async def test_review_command(self) -> None:
        client = TelegramClient(bot_token="123:abc", chat_id="456", enabled=True)
        handler = TelegramCommandHandler(client)

        mock_resp = AsyncMock()
        mock_resp.status = 200
        mock_resp.__aenter__ = AsyncMock(return_value=mock_resp)
        mock_resp.__aexit__ = AsyncMock(return_value=False)

        async def mock_review() -> None:
            await client.send_message("<b>📝 Recent Reviews</b>\n\n- 2026-02-14 KR")

        handler.register_command("review", mock_review)

        with patch("aiohttp.ClientSession.post", return_value=mock_resp) as mock_post:
            await handler._handle_update(
                {"update_id": 1, "message": {"chat": {"id": 456}, "text": "/review"}}
            )
        payload = mock_post.call_args.kwargs["json"]
        assert "Recent Reviews" in payload["text"]

    @pytest.mark.asyncio
    async def test_dashboard_command(self) -> None:
        client = TelegramClient(bot_token="123:abc", chat_id="456", enabled=True)
        handler = TelegramCommandHandler(client)

        mock_resp = AsyncMock()
        mock_resp.status = 200
        mock_resp.__aenter__ = AsyncMock(return_value=mock_resp)
        mock_resp.__aexit__ = AsyncMock(return_value=False)

        async def mock_dashboard() -> None:
            await client.send_message("<b>🖥️ Dashboard</b>\n\nURL: http://127.0.0.1:8080")

        handler.register_command("dashboard", mock_dashboard)

        with patch("aiohttp.ClientSession.post", return_value=mock_resp) as mock_post:
            await handler._handle_update(
                {"update_id": 1, "message": {"chat": {"id": 456}, "text": "/dashboard"}}
            )
        payload = mock_post.call_args.kwargs["json"]
        assert "Dashboard" in payload["text"]


class TestGetUpdates:
    """Test getUpdates API interaction."""
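These tests exercise the handler indirectly: a command name maps to a registered coroutine, and `_handle_update` parses the incoming message and dispatches. A minimal sketch of that register/dispatch pattern consistent with the calls above (hypothetical; the real handler's chat-ID checks and error handling are richer):

```
from typing import Awaitable, Callable

Command = Callable[[], Awaitable[None]]


class CommandDispatcher:
    """Sketch of the register/dispatch pattern the tests above rely on."""

    def __init__(self) -> None:
        self._commands: dict[str, Command] = {}

    def register_command(self, name: str, callback: Command) -> None:
        self._commands[name] = callback

    async def _handle_update(self, update: dict) -> None:
        text = update.get("message", {}).get("text", "")
        parts = text[1:].split() if text.startswith("/") else []
        if not parts:
            return  # not a command
        callback = self._commands.get(parts[0])  # "/report" -> "report"
        if callback is not None:
            await callback()  # the registered coroutine replies via the client
```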
@@ -412,7 +412,7 @@ class TestMarketScanner:
        scan_result = context_store.get_context(
            ContextLayer.L7_REALTIME,
            latest_timeframe,
            "KR_scan_result",
            "scan_result_KR",
        )
        assert scan_result is not None
        assert scan_result["total_scanned"] == 3