Compare commits

..

16 Commits

Author SHA1 Message Date
agentson
e3b1ecc572 feat: context aggregation scheduler (issue #87)
- Add ContextScheduler with run_if_due() for periodic rollups
- Weekly (Sunday), monthly (last day), quarterly, annual, legacy schedules
- Daily cleanup of expired contexts via ContextStore
- Dedup guard: each task runs at most once per day

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-10 04:26:51 +09:00
8acf72b22c Merge pull request 'feat: define DailyScorecard model (issue #90)' (#118) from feature/issue-90-scorecard-model into main
Reviewed-on: #118
2026-02-10 04:26:21 +09:00
agentson
c95102a0bd feat: DailyScorecard model for per-market performance review (issue #90)
- Add DailyScorecard dataclass with market-scoped fields
- Fields: date, market, decisions, pnl, win_rate, scenario_match_rate, lessons, cross_market_note
- Export from src/evolution/__init__.py

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-10 04:25:37 +09:00
0685d62f9c Merge pull request 'feat: add market filter to EOD aggregation (issue #86)' (#117) from feature/issue-86-eod-market-filter into main
Reviewed-on: #117
2026-02-10 04:24:58 +09:00
agentson
78021d4695 feat: EOD aggregation with market filter (issue #86)
Some checks failed
CI / test (pull_request) Has been cancelled
- Add market parameter to aggregate_daily_from_trades() for per-market L6 aggregation
- Store market-scoped keys (total_pnl_KR, win_rate_US, etc.) in L6/L5/L4 layers
- Hook aggregate_daily_from_trades() into market close detection in run()
- Update tests for market-scoped context keys

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-10 04:23:49 +09:00
3cdd10783b Merge pull request 'feat: market-scoped L7 real-time context writes (issue #85)' (#116) from feature/issue-85-l7-context-write into main
Reviewed-on: #116
2026-02-10 04:22:57 +09:00
agentson
c4e31be27a feat: L7 real-time context write with market-scoped keys (issue #85)
- Add L7_REALTIME writes in trading_cycle() for volatility, price, rsi, volume_ratio
- Normalize key format to {metric}_{market}_{stock_code} across scanner and main
- Fix existing key mismatch between scanner writes and main reads
- Remove unused MarketScanner dead code

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-10 04:21:52 +09:00
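The normalized key format from the commit above can be shown with a small helper (hypothetical, for illustration only):

```python
def l7_key(metric: str, market_code: str, stock_code: str) -> str:
    """Build an L7 context key in the normalized {metric}_{market}_{stock_code} form."""
    return f"{metric}_{market_code}_{stock_code}"

# The old scanner order ({market}_{stock}_{metric}) produced keys that
# main's reads never matched — the mismatch this commit fixes.
assert l7_key("volatility", "KR", "005930") == "volatility_KR_005930"
assert l7_key("rsi", "US", "AAPL") == "rsi_US_AAPL"
```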
9d9ade14eb Merge pull request 'docs: add plan-implementation consistency check to code review checklist (#114)' (#115) from feature/issue-114-review-plan-consistency into main
Reviewed-on: #115
2026-02-10 04:16:30 +09:00
agentson
9a8936ab34 docs: add plan-implementation consistency check to code review checklist (#114)
Add a rule making plan-implementation consistency a mandatory review check.
- Add a Code Review Checklist section to workflow.md
- Record the user requirement in requirements-log.md

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-10 04:15:51 +09:00
c5831966ed Merge pull request 'fix: derive all aggregation timeframes from trade timestamp (#112)' (#113) from fix/test-failures into main
Reviewed-on: #113
2026-02-10 00:42:39 +09:00
agentson
f03cc6039b fix: derive all aggregation timeframes from trade timestamp (#112)
run_all_aggregations() previously used datetime.now(UTC) for weekly
through annual layers while using the trade date only for daily,
causing data misalignment on backfill. Now all layers consistently
use the latest trade timestamp. Also adds "Z" suffix handling for
fromisoformat() compatibility and strengthens test assertions to
verify L4-L2 layer values end-to-end.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-10 00:40:28 +09:00
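The timeframe derivation this fix introduces can be sketched directly; the key strings below follow the formats used elsewhere in the repo (ISO date, `YYYY-Www`, `YYYY-MM`, `YYYY-Qn`):

```python
from datetime import datetime

# fromisoformat() on Python < 3.11 rejects a trailing "Z", so normalize it first.
ts_raw = "2026-02-09T14:30:00Z"
latest = datetime.fromisoformat(ts_raw.replace("Z", "+00:00"))
d = latest.date()

iso_year, iso_week, _ = d.isocalendar()
assert d.isoformat() == "2026-02-09"                       # L6 daily key
assert f"{iso_year}-W{iso_week:02d}" == "2026-W07"         # L5 weekly key
assert f"{d.year}-{d.month:02d}" == "2026-02"              # L4 monthly key
assert f"{d.year}-Q{(d.month - 1) // 3 + 1}" == "2026-Q1"  # L3 quarterly key
```

Deriving every key from the same trade date is what keeps backfilled daily, weekly, and monthly rollups aligned.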
9171e54652 Merge pull request 'feat: integrate scenario engine and playbook into main trading loop (issue #84)' (#110) from feature/issue-84-main-integration into main
Reviewed-on: #110
2026-02-09 23:18:24 +09:00
agentson
d64e072f06 fix: PR review — DB reload, market-local date, market-scoped scan_candidates
Address PR #110 review findings:

1. High — Realtime mode now loads playbook from DB before calling Gemini,
   preventing duplicate API calls on process restart (4/day budget).
2. Medium — Pass market-local date (via market.timezone) to
   generate_playbook() and _empty_playbook() instead of date.today().
3. Medium — scan_candidates restructured from {stock_code: candidate}
   to {market_code: {stock_code: candidate}} to prevent KR/US symbol
   collision.

New test: test_scan_candidates_market_scoped verifies cross-market
isolation.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-09 23:00:06 +09:00
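The `scan_candidates` restructure from item 3 can be sketched as follows; `ScanCandidate` here is a simplified stand-in for the class in `src.analysis.smart_scanner`:

```python
from dataclasses import dataclass

@dataclass
class ScanCandidate:  # simplified stand-in, illustration only
    stock_code: str
    rsi: float

# Market-scoped layout: the same symbol in two markets no longer collides.
scan_candidates: dict[str, dict[str, ScanCandidate]] = {
    "KR": {"005930": ScanCandidate("005930", 62.0)},
    "US": {"AAPL": ScanCandidate("AAPL", 48.5)},
}

candidate = scan_candidates.get("KR", {}).get("005930")
assert candidate is not None and candidate.rsi == 62.0
assert scan_candidates.get("US", {}).get("005930") is None  # no cross-market hit
```

A flat `{stock_code: candidate}` map would silently overwrite one market's entry with the other's whenever ticker strings coincide.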
agentson
b2312fbe01 fix: resolve lint issues in main.py and test_main.py
Remove unused imports (sys, ScenarioMatch, asyncio, StockPlaybook),
fix import ordering, and split long lines for ruff compliance.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-08 22:28:31 +09:00
agentson
98c4a2413c feat: integrate scenario engine and playbook into main trading loop (issue #84)
Replace brain.decide() with scenario_engine.evaluate() in trading_cycle
and brain.decide_batch() with per-stock scenario evaluation in
run_daily_session. Initialize PreMarketPlanner, ScenarioEngine, and
PlaybookStore in run(). Add pre-market playbook generation on market
open (1 Gemini call per market per day), market_data enrichment from
scanner metrics (rsi, volume_ratio), portfolio_data for global rules,
scenario match notifications, and playbook lifecycle management.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-08 22:24:19 +09:00
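The "local scenario evaluation instead of per-stock API calls" idea above can be sketched with a hypothetical, heavily simplified engine; the real signatures live in `src/strategy/scenario_engine.py` and differ from this stand-in:

```python
from dataclasses import dataclass

@dataclass
class Match:
    action: str
    confidence: int
    rationale: str

def evaluate(playbook: dict, stock_code: str, market_data: dict) -> Match:
    """Match pre-generated scenarios against live metrics, entirely offline."""
    for scenario in playbook.get(stock_code, []):
        if market_data.get("rsi", 50.0) <= scenario["rsi_below"]:
            return Match(scenario["action"], scenario["confidence"], scenario["why"])
    return Match("HOLD", 0, "no scenario matched")

# One Gemini call produced the playbook pre-market; each cycle only does this lookup.
playbook = {"005930": [{"rsi_below": 30.0, "action": "BUY", "confidence": 85, "why": "oversold"}]}
assert evaluate(playbook, "005930", {"rsi": 28.0}).action == "BUY"
assert evaluate(playbook, "005930", {"rsi": 55.0}).action == "HOLD"
```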
6fba7c7ae8 Merge pull request 'feat: implement pre-market planner with Gemini integration (issue #83)' (#109) from feature/issue-83-pre-market-planner into main
Reviewed-on: #109
2026-02-08 22:07:36 +09:00
14 changed files with 1371 additions and 184 deletions

requirements-log.md

@@ -64,3 +64,25 @@
 **Note:** Realtime mode only. Daily mode uses a static watchlist for batch efficiency.
 **Issue/PR:** #76, #77
 ---
+## 2026-02-10
+### Plan-implementation consistency check rule for code review
+**Background:**
+- Code review had no step to verify that the implementation matches the plan approved in EnterPlanMode
+- Risk that an implementation diverging from the plan passes review unchecked
+**Requirements:**
+1. Every PR review must check plan-implementation consistency as a mandatory item
+2. Changes not in the plan require explicit justification
+3. If a plan item is omitted, record the reason in the PR description
+4. Verify the scope matches the plan
+**Result:**
+- Added a Code Review Checklist section to `docs/workflow.md`
+- Four categories: Plan Consistency (mandatory), Safety & Constraints, Quality, Workflow
+**Issue/PR:** #114

docs/workflow.md

@@ -74,3 +74,37 @@ task_tool(
 ```
 
 Use `run_in_background=True` for independent tasks that don't block subsequent work.
+
+## Code Review Checklist
+
+**CRITICAL: Every PR review MUST verify plan-implementation consistency.**
+
+Before approving any PR, the reviewer (human or agent) must check ALL of the following:
+
+### 1. Plan Consistency (MANDATORY)
+- [ ] **Implementation matches the approved plan** — Compare the actual code changes against the plan created during `EnterPlanMode`. Every item in the plan must be addressed.
+- [ ] **No unplanned changes** — If the implementation includes changes not in the plan, they must be explicitly justified.
+- [ ] **No plan items omitted** — If any planned item was skipped, the reason must be documented in the PR description.
+- [ ] **Scope matches** — The PR does not exceed or fall short of the planned scope.
+
+### 2. Safety & Constraints
+- [ ] `src/core/risk_manager.py` is unchanged (READ-ONLY)
+- [ ] Circuit breaker threshold not weakened (only stricter allowed)
+- [ ] Fat-finger protection (30% max order) still enforced
+- [ ] Confidence < 80 still forces HOLD
+- [ ] No hardcoded API keys or secrets
+
+### 3. Quality
+- [ ] All new/modified code has corresponding tests
+- [ ] Test coverage >= 80%
+- [ ] `ruff check src/ tests/` passes (no lint errors)
+- [ ] No `assert` statements removed from tests
+
+### 4. Workflow
+- [ ] PR references the Gitea issue number
+- [ ] Feature branch follows naming convention (`feature/issue-N-description`)
+- [ ] Commit messages are clear and descriptive

src/analysis/scanner.py

@@ -108,7 +108,7 @@ class MarketScanner:
         self.context_store.set_context(
             ContextLayer.L7_REALTIME,
             timeframe,
-            f"{market.code}_{stock_code}_volatility",
+            f"volatility_{market.code}_{stock_code}",
             {
                 "price": metrics.current_price,
                 "atr": metrics.atr,
@@ -179,7 +179,7 @@ class MarketScanner:
         self.context_store.set_context(
             ContextLayer.L7_REALTIME,
             timeframe,
-            f"{market.code}_scan_result",
+            f"scan_result_{market.code}",
             {
                 "total_scanned": len(valid_metrics),
                 "top_movers": [m.stock_code for m in top_movers],

src/context/__init__.py

@@ -5,6 +5,7 @@ The context tree implements Pillar 2: hierarchical memory management across
""" """
from src.context.layer import ContextLayer from src.context.layer import ContextLayer
from src.context.scheduler import ContextScheduler
from src.context.store import ContextStore from src.context.store import ContextStore
__all__ = ["ContextLayer", "ContextStore"] __all__ = ["ContextLayer", "ContextScheduler", "ContextStore"]

src/context/aggregator.py

@@ -18,52 +18,83 @@ class ContextAggregator:
         self.conn = conn
         self.store = ContextStore(conn)
 
-    def aggregate_daily_from_trades(self, date: str | None = None) -> None:
+    def aggregate_daily_from_trades(
+        self, date: str | None = None, market: str | None = None
+    ) -> None:
         """Aggregate L6 (daily) context from trades table.
 
         Args:
             date: Date in YYYY-MM-DD format. If None, uses today.
+            market: Market code filter (e.g., "KR", "US"). If None, aggregates all markets.
         """
         if date is None:
             date = datetime.now(UTC).date().isoformat()
 
-        # Calculate daily metrics from trades
-        cursor = self.conn.execute(
-            """
-            SELECT
-                COUNT(*) as trade_count,
-                SUM(CASE WHEN action = 'BUY' THEN 1 ELSE 0 END) as buys,
-                SUM(CASE WHEN action = 'SELL' THEN 1 ELSE 0 END) as sells,
-                SUM(CASE WHEN action = 'HOLD' THEN 1 ELSE 0 END) as holds,
-                AVG(confidence) as avg_confidence,
-                SUM(pnl) as total_pnl,
-                COUNT(DISTINCT stock_code) as unique_stocks,
-                SUM(CASE WHEN pnl > 0 THEN 1 ELSE 0 END) as wins,
-                SUM(CASE WHEN pnl < 0 THEN 1 ELSE 0 END) as losses
-            FROM trades
-            WHERE DATE(timestamp) = ?
-            """,
-            (date,),
-        )
-        row = cursor.fetchone()
-
-        if row and row[0] > 0:  # At least one trade
-            trade_count, buys, sells, holds, avg_conf, total_pnl, stocks, wins, losses = row
-            # Store daily metrics in L6
-            self.store.set_context(ContextLayer.L6_DAILY, date, "trade_count", trade_count)
-            self.store.set_context(ContextLayer.L6_DAILY, date, "buys", buys)
-            self.store.set_context(ContextLayer.L6_DAILY, date, "sells", sells)
-            self.store.set_context(ContextLayer.L6_DAILY, date, "holds", holds)
-            self.store.set_context(
-                ContextLayer.L6_DAILY, date, "avg_confidence", round(avg_conf, 2)
-            )
-            self.store.set_context(
-                ContextLayer.L6_DAILY, date, "total_pnl", round(total_pnl, 2)
-            )
-            self.store.set_context(ContextLayer.L6_DAILY, date, "unique_stocks", stocks)
-            win_rate = round(wins / max(wins + losses, 1) * 100, 2)
-            self.store.set_context(ContextLayer.L6_DAILY, date, "win_rate", win_rate)
+        if market is None:
+            cursor = self.conn.execute(
+                """
+                SELECT DISTINCT market
+                FROM trades
+                WHERE DATE(timestamp) = ?
+                """,
+                (date,),
+            )
+            markets = [row[0] for row in cursor.fetchall() if row[0]]
+        else:
+            markets = [market]
+
+        for market_code in markets:
+            # Calculate daily metrics from trades for the market
+            cursor = self.conn.execute(
+                """
+                SELECT
+                    COUNT(*) as trade_count,
+                    SUM(CASE WHEN action = 'BUY' THEN 1 ELSE 0 END) as buys,
+                    SUM(CASE WHEN action = 'SELL' THEN 1 ELSE 0 END) as sells,
+                    SUM(CASE WHEN action = 'HOLD' THEN 1 ELSE 0 END) as holds,
+                    AVG(confidence) as avg_confidence,
+                    SUM(pnl) as total_pnl,
+                    COUNT(DISTINCT stock_code) as unique_stocks,
+                    SUM(CASE WHEN pnl > 0 THEN 1 ELSE 0 END) as wins,
+                    SUM(CASE WHEN pnl < 0 THEN 1 ELSE 0 END) as losses
+                FROM trades
+                WHERE DATE(timestamp) = ? AND market = ?
+                """,
+                (date, market_code),
+            )
+            row = cursor.fetchone()
+            if row and row[0] > 0:  # At least one trade
+                trade_count, buys, sells, holds, avg_conf, total_pnl, stocks, wins, losses = row
+                key_suffix = f"_{market_code}"
+                # Store daily metrics in L6 with market suffix
+                self.store.set_context(
+                    ContextLayer.L6_DAILY, date, f"trade_count{key_suffix}", trade_count
+                )
+                self.store.set_context(ContextLayer.L6_DAILY, date, f"buys{key_suffix}", buys)
+                self.store.set_context(ContextLayer.L6_DAILY, date, f"sells{key_suffix}", sells)
+                self.store.set_context(ContextLayer.L6_DAILY, date, f"holds{key_suffix}", holds)
+                self.store.set_context(
+                    ContextLayer.L6_DAILY,
+                    date,
+                    f"avg_confidence{key_suffix}",
+                    round(avg_conf, 2),
+                )
+                self.store.set_context(
+                    ContextLayer.L6_DAILY,
+                    date,
+                    f"total_pnl{key_suffix}",
+                    round(total_pnl, 2),
+                )
+                self.store.set_context(
+                    ContextLayer.L6_DAILY, date, f"unique_stocks{key_suffix}", stocks
+                )
+                win_rate = round(wins / max(wins + losses, 1) * 100, 2)
+                self.store.set_context(
+                    ContextLayer.L6_DAILY, date, f"win_rate{key_suffix}", win_rate
+                )
 
     def aggregate_weekly_from_daily(self, week: str | None = None) -> None:
         """Aggregate L5 (weekly) context from L6 (daily).
@@ -92,14 +123,25 @@ class ContextAggregator:
             daily_data[row[0]].append(json.loads(row[1]))
 
         if daily_data:
-            # Sum all PnL values
+            # Sum all PnL values (market-specific if suffixed)
             if "total_pnl" in daily_data:
                 total_pnl = sum(daily_data["total_pnl"])
                 self.store.set_context(
                     ContextLayer.L5_WEEKLY, week, "weekly_pnl", round(total_pnl, 2)
                 )
+            for key, values in daily_data.items():
+                if key.startswith("total_pnl_"):
+                    market_code = key.split("total_pnl_", 1)[1]
+                    total_pnl = sum(values)
+                    self.store.set_context(
+                        ContextLayer.L5_WEEKLY,
+                        week,
+                        f"weekly_pnl_{market_code}",
+                        round(total_pnl, 2),
+                    )
-            # Average all confidence values
+            # Average all confidence values (market-specific if suffixed)
             if "avg_confidence" in daily_data:
                 conf_values = daily_data["avg_confidence"]
                 avg_conf = sum(conf_values) / len(conf_values)
@@ -107,6 +149,17 @@ class ContextAggregator:
                 ContextLayer.L5_WEEKLY, week, "avg_confidence", round(avg_conf, 2)
             )
+            for key, values in daily_data.items():
+                if key.startswith("avg_confidence_"):
+                    market_code = key.split("avg_confidence_", 1)[1]
+                    avg_conf = sum(values) / len(values)
+                    self.store.set_context(
+                        ContextLayer.L5_WEEKLY,
+                        week,
+                        f"avg_confidence_{market_code}",
+                        round(avg_conf, 2),
+                    )
 
     def aggregate_monthly_from_weekly(self, month: str | None = None) -> None:
         """Aggregate L4 (monthly) context from L5 (weekly).
@@ -135,8 +188,16 @@ class ContextAggregator:
         if weekly_data:
             # Sum all weekly PnL values
+            total_pnl_values: list[float] = []
             if "weekly_pnl" in weekly_data:
-                total_pnl = sum(weekly_data["weekly_pnl"])
+                total_pnl_values.extend(weekly_data["weekly_pnl"])
+            for key, values in weekly_data.items():
+                if key.startswith("weekly_pnl_"):
+                    total_pnl_values.extend(values)
+            if total_pnl_values:
+                total_pnl = sum(total_pnl_values)
                 self.store.set_context(
                     ContextLayer.L4_MONTHLY, month, "monthly_pnl", round(total_pnl, 2)
                 )
@@ -230,21 +291,44 @@ class ContextAggregator:
         )
 
     def run_all_aggregations(self) -> None:
-        """Run all aggregations from L7 to L1 (bottom-up)."""
+        """Run all aggregations from L7 to L1 (bottom-up).
+
+        All timeframes are derived from the latest trade timestamp so that
+        past data re-aggregation produces consistent results across layers.
+        """
+        cursor = self.conn.execute("SELECT MAX(timestamp) FROM trades")
+        row = cursor.fetchone()
+        if not row or row[0] is None:
+            return
+        ts_raw = row[0]
+        if ts_raw.endswith("Z"):
+            ts_raw = ts_raw.replace("Z", "+00:00")
+        latest_ts = datetime.fromisoformat(ts_raw)
+        trade_date = latest_ts.date()
+        date_str = trade_date.isoformat()
+        iso_year, iso_week, _ = trade_date.isocalendar()
+        week_str = f"{iso_year}-W{iso_week:02d}"
+        month_str = f"{trade_date.year}-{trade_date.month:02d}"
+        quarter = (trade_date.month - 1) // 3 + 1
+        quarter_str = f"{trade_date.year}-Q{quarter}"
+        year_str = str(trade_date.year)
+
         # L7 (trades) → L6 (daily)
-        self.aggregate_daily_from_trades()
+        self.aggregate_daily_from_trades(date_str)
         # L6 (daily) → L5 (weekly)
-        self.aggregate_weekly_from_daily()
+        self.aggregate_weekly_from_daily(week_str)
         # L5 (weekly) → L4 (monthly)
-        self.aggregate_monthly_from_weekly()
+        self.aggregate_monthly_from_weekly(month_str)
         # L4 (monthly) → L3 (quarterly)
-        self.aggregate_quarterly_from_monthly()
+        self.aggregate_quarterly_from_monthly(quarter_str)
         # L3 (quarterly) → L2 (annual)
-        self.aggregate_annual_from_quarterly()
+        self.aggregate_annual_from_quarterly(year_str)
         # L2 (annual) → L1 (legacy)
         self.aggregate_legacy_from_annual()

src/context/scheduler.py (new file)

@@ -0,0 +1,135 @@
"""Context aggregation scheduler for periodic rollups and cleanup."""
from __future__ import annotations
import sqlite3
from calendar import monthrange
from dataclasses import dataclass
from datetime import UTC, datetime
from src.context.aggregator import ContextAggregator
from src.context.store import ContextStore
@dataclass(frozen=True)
class ScheduleResult:
"""Represents which scheduled tasks ran."""
weekly: bool = False
monthly: bool = False
quarterly: bool = False
annual: bool = False
legacy: bool = False
cleanup: bool = False
class ContextScheduler:
"""Run periodic context aggregations and cleanup when due."""
def __init__(
self,
conn: sqlite3.Connection | None = None,
aggregator: ContextAggregator | None = None,
store: ContextStore | None = None,
) -> None:
if aggregator is None:
if conn is None:
raise ValueError("conn is required when aggregator is not provided")
aggregator = ContextAggregator(conn)
self.aggregator = aggregator
if store is None:
store = getattr(aggregator, "store", None)
if store is None:
if conn is None:
raise ValueError("conn is required when store is not provided")
store = ContextStore(conn)
self.store = store
self._last_run: dict[str, str] = {}
def run_if_due(self, now: datetime | None = None) -> ScheduleResult:
"""Run scheduled aggregations if their schedule is due.
Args:
now: Current datetime (UTC). If None, uses current time.
Returns:
ScheduleResult indicating which tasks ran.
"""
if now is None:
now = datetime.now(UTC)
today = now.date().isoformat()
result = ScheduleResult()
if self._should_run("cleanup", today):
self.store.cleanup_expired_contexts()
result = self._with(result, cleanup=True)
if self._is_sunday(now) and self._should_run("weekly", today):
week = now.strftime("%Y-W%V")
self.aggregator.aggregate_weekly_from_daily(week)
result = self._with(result, weekly=True)
if self._is_last_day_of_month(now) and self._should_run("monthly", today):
month = now.strftime("%Y-%m")
self.aggregator.aggregate_monthly_from_weekly(month)
result = self._with(result, monthly=True)
if self._is_last_day_of_quarter(now) and self._should_run("quarterly", today):
quarter = self._current_quarter(now)
self.aggregator.aggregate_quarterly_from_monthly(quarter)
result = self._with(result, quarterly=True)
if self._is_last_day_of_year(now) and self._should_run("annual", today):
year = str(now.year)
self.aggregator.aggregate_annual_from_quarterly(year)
result = self._with(result, annual=True)
# Legacy rollup runs after annual aggregation.
self.aggregator.aggregate_legacy_from_annual()
result = self._with(result, legacy=True)
return result
def _should_run(self, key: str, date_str: str) -> bool:
if self._last_run.get(key) == date_str:
return False
self._last_run[key] = date_str
return True
@staticmethod
def _is_sunday(now: datetime) -> bool:
return now.weekday() == 6
@staticmethod
def _is_last_day_of_month(now: datetime) -> bool:
last_day = monthrange(now.year, now.month)[1]
return now.day == last_day
@classmethod
def _is_last_day_of_quarter(cls, now: datetime) -> bool:
if now.month not in (3, 6, 9, 12):
return False
return cls._is_last_day_of_month(now)
@staticmethod
def _is_last_day_of_year(now: datetime) -> bool:
return now.month == 12 and now.day == 31
@staticmethod
def _current_quarter(now: datetime) -> str:
quarter = (now.month - 1) // 3 + 1
return f"{now.year}-Q{quarter}"
@staticmethod
def _with(result: ScheduleResult, **kwargs: bool) -> ScheduleResult:
return ScheduleResult(
weekly=kwargs.get("weekly", result.weekly),
monthly=kwargs.get("monthly", result.monthly),
quarterly=kwargs.get("quarterly", result.quarterly),
annual=kwargs.get("annual", result.annual),
legacy=kwargs.get("legacy", result.legacy),
cleanup=kwargs.get("cleanup", result.cleanup),
)
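The month-end boundary the scheduler relies on is worth spelling out, since it must track leap years. A standalone check mirroring `_is_last_day_of_month`:

```python
from calendar import monthrange
from datetime import datetime

def is_last_day_of_month(now: datetime) -> bool:
    """monthrange() returns (weekday_of_first_day, days_in_month)."""
    return now.day == monthrange(now.year, now.month)[1]

assert is_last_day_of_month(datetime(2026, 2, 28)) is True   # 2026 is not a leap year
assert is_last_day_of_month(datetime(2024, 2, 28)) is False  # 2024 is: Feb has 29 days
assert is_last_day_of_month(datetime(2026, 12, 31)) is True
```

Using `monthrange` avoids hardcoding a 28/29/30/31 table and handles February correctly in both leap and non-leap years.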

src/evolution/__init__.py

@@ -7,6 +7,7 @@ from src.evolution.performance_tracker import (
     PerformanceTracker,
     StrategyMetrics,
 )
+from src.evolution.scorecard import DailyScorecard
 
 __all__ = [
     "EvolutionOptimizer",
@@ -16,4 +17,5 @@ __all__ = [
     "PerformanceTracker",
     "PerformanceDashboard",
     "StrategyMetrics",
+    "DailyScorecard",
 ]

src/evolution/scorecard.py

@@ -0,0 +1,25 @@
"""Daily scorecard model for end-of-day performance review."""
from __future__ import annotations
from dataclasses import dataclass, field
@dataclass
class DailyScorecard:
"""Structured daily performance snapshot for a single market."""
date: str
market: str
total_decisions: int
buys: int
sells: int
holds: int
total_pnl: float
win_rate: float
avg_confidence: float
scenario_match_rate: float
top_winners: list[str] = field(default_factory=list)
top_losers: list[str] = field(default_factory=list)
lessons: list[str] = field(default_factory=list)
cross_market_note: str = ""
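Construction of the model above can be sketched with a local copy of the dataclass (a stand-in mirroring the fields shown, so the snippet is self-contained):

```python
from dataclasses import dataclass, field

@dataclass
class DailyScorecard:  # mirrors the fields of the file above, illustration only
    date: str
    market: str
    total_decisions: int
    buys: int
    sells: int
    holds: int
    total_pnl: float
    win_rate: float
    avg_confidence: float
    scenario_match_rate: float
    top_winners: list[str] = field(default_factory=list)
    top_losers: list[str] = field(default_factory=list)
    lessons: list[str] = field(default_factory=list)
    cross_market_note: str = ""

card = DailyScorecard(
    date="2026-02-09", market="KR", total_decisions=12, buys=4, sells=3,
    holds=5, total_pnl=182.5, win_rate=66.7, avg_confidence=84.2,
    scenario_match_rate=0.58,
)
# default_factory gives each instance its own list, so mutating one
# scorecard's lessons never leaks into another market's scorecard.
card.lessons.append("tighten stop on gap-down opens")
other = DailyScorecard("2026-02-09", "US", 0, 0, 0, 0, 0.0, 0.0, 0.0, 0.0)
assert other.lessons == []
```

The `default_factory` choice matters: a bare `lessons: list[str] = []` would be a shared mutable default and is rejected by `dataclasses` at class-definition time.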

src/main.py

@@ -10,17 +10,17 @@ import argparse
 import asyncio
 import logging
 import signal
-import sys
 from datetime import UTC, datetime
 from typing import Any
 
-from src.analysis.scanner import MarketScanner
 from src.analysis.smart_scanner import ScanCandidate, SmartVolatilityScanner
 from src.analysis.volatility import VolatilityAnalyzer
-from src.brain.gemini_client import GeminiClient
+from src.brain.context_selector import ContextSelector
+from src.brain.gemini_client import GeminiClient, TradeDecision
 from src.broker.kis_api import KISBroker
 from src.broker.overseas import OverseasBroker
 from src.config import Settings
-from src.context.aggregator import ContextAggregator
 from src.context.layer import ContextLayer
 from src.context.store import ContextStore
 from src.core.criticality import CriticalityAssessor
@@ -31,6 +31,10 @@ from src.logging.decision_logger import DecisionLogger
 from src.logging_config import setup_logging
 from src.markets.schedule import MarketInfo, get_next_market_open, get_open_markets
 from src.notifications.telegram_client import TelegramClient, TelegramCommandHandler
+from src.strategy.models import DayPlaybook
+from src.strategy.playbook_store import PlaybookStore
+from src.strategy.pre_market_planner import PreMarketPlanner
+from src.strategy.scenario_engine import ScenarioEngine
 
 logger = logging.getLogger(__name__)
@@ -75,7 +79,8 @@ TRADE_SESSION_INTERVAL_HOURS = 6 # Hours between sessions
 async def trading_cycle(
     broker: KISBroker,
     overseas_broker: OverseasBroker,
-    brain: GeminiClient,
+    scenario_engine: ScenarioEngine,
+    playbook: DayPlaybook,
     risk: RiskManager,
     db_conn: Any,
     decision_logger: DecisionLogger,
@@ -84,7 +89,7 @@ async def trading_cycle(
     telegram: TelegramClient,
     market: MarketInfo,
     stock_code: str,
-    scan_candidates: dict[str, ScanCandidate],
+    scan_candidates: dict[str, dict[str, ScanCandidate]],
 ) -> None:
     """Execute one trading cycle for a single stock."""
     cycle_start_time = asyncio.get_event_loop().time()
@@ -135,13 +140,59 @@ async def trading_cycle(
         else 0.0
     )
 
-    market_data = {
+    market_data: dict[str, Any] = {
         "stock_code": stock_code,
         "market_name": market.name,
         "current_price": current_price,
         "foreigner_net": foreigner_net,
     }
+
+    # Enrich market_data with scanner metrics for scenario engine
+    market_candidates = scan_candidates.get(market.code, {})
+    candidate = market_candidates.get(stock_code)
+    if candidate:
+        market_data["rsi"] = candidate.rsi
+        market_data["volume_ratio"] = candidate.volume_ratio
+
+    # 1.3. Record L7 real-time context (market-scoped keys)
+    timeframe = datetime.now(UTC).isoformat()
+    context_store.set_context(
+        ContextLayer.L7_REALTIME,
+        timeframe,
+        f"volatility_{market.code}_{stock_code}",
+        {
+            "momentum_score": 50.0,
+            "volume_surge": 1.0,
+            "price_change_1m": 0.0,
+        },
+    )
+    context_store.set_context(
+        ContextLayer.L7_REALTIME,
+        timeframe,
+        f"price_{market.code}_{stock_code}",
+        {"current_price": current_price},
+    )
+    if candidate:
+        context_store.set_context(
+            ContextLayer.L7_REALTIME,
+            timeframe,
+            f"rsi_{market.code}_{stock_code}",
+            {"rsi": candidate.rsi},
+        )
+        context_store.set_context(
+            ContextLayer.L7_REALTIME,
+            timeframe,
+            f"volume_ratio_{market.code}_{stock_code}",
+            {"volume_ratio": candidate.volume_ratio},
+        )
+
+    # Build portfolio data for global rule evaluation
+    portfolio_data = {
+        "portfolio_pnl_pct": pnl_pct,
+        "total_cash": total_cash,
+        "total_eval": total_eval,
+    }
 
     # 1.5. Get volatility metrics from context store (L7_REALTIME)
     latest_timeframe = context_store.get_latest_timeframe(ContextLayer.L7_REALTIME)
     volatility_score = 50.0  # Default normal volatility
@@ -152,7 +203,7 @@ async def trading_cycle(
         volatility_data = context_store.get_context(
             ContextLayer.L7_REALTIME,
             latest_timeframe,
-            f"volatility_{stock_code}",
+            f"volatility_{market.code}_{stock_code}",
         )
         if volatility_data:
             volatility_score = volatility_data.get("momentum_score", 50.0)
@@ -178,8 +229,13 @@ async def trading_cycle(
         volume_surge,
     )
 
-    # 2. Ask the brain for a decision
-    decision = await brain.decide(market_data)
+    # 2. Evaluate scenario (local, no API call)
+    match = scenario_engine.evaluate(playbook, stock_code, market_data, portfolio_data)
+    decision = TradeDecision(
+        action=match.action.value,
+        confidence=match.confidence,
+        rationale=match.rationale,
+    )
     logger.info(
         "Decision for %s (%s): %s (confidence=%d)",
         stock_code,
@@ -188,6 +244,19 @@ async def trading_cycle(
         decision.confidence,
     )
 
+    # 2.1. Notify scenario match
+    if match.matched_scenario is not None:
+        try:
+            condition_parts = [f"{k}={v}" for k, v in match.match_details.items()]
+            await telegram.notify_scenario_matched(
+                stock_code=stock_code,
+                action=decision.action,
+                condition_summary=", ".join(condition_parts) if condition_parts else "matched",
+                confidence=float(decision.confidence),
+            )
+        except Exception as exc:
+            logger.warning("Scenario matched notification failed: %s", exc)
+
     # 2.5. Log decision with context snapshot
     context_snapshot = {
         "L1": {
@@ -200,7 +269,7 @@ async def trading_cycle(
             "purchase_total": purchase_total,
             "pnl_pct": pnl_pct,
         },
-        # L3-L7 will be populated when context tree is implemented
+        "scenario_match": match.match_details,
     }
     input_data = {
         "current_price": current_price,
@@ -279,8 +348,8 @@ async def trading_cycle(
     # 6. Log trade with selection context
     selection_context = None
-    if stock_code in scan_candidates:
-        candidate = scan_candidates[stock_code]
+    if stock_code in market_candidates:
+        candidate = market_candidates[stock_code]
         selection_context = {
             "rsi": candidate.rsi,
             "volume_ratio": candidate.volume_ratio,
@@ -324,7 +393,9 @@ async def trading_cycle(
 async def run_daily_session(
     broker: KISBroker,
     overseas_broker: OverseasBroker,
-    brain: GeminiClient,
+    scenario_engine: ScenarioEngine,
+    playbook_store: PlaybookStore,
+    pre_market_planner: PreMarketPlanner,
     risk: RiskManager,
     db_conn: Any,
     decision_logger: DecisionLogger,
@@ -336,10 +407,8 @@ async def run_daily_session(
 ) -> None:
     """Execute one daily trading session.
 
-    Designed for API efficiency with Gemini Free tier:
-    - Batch decision making (1 API call per market)
-    - Runs N times per day at fixed intervals
-    - Minimizes API usage while maintaining trading capability
+    V2 proactive strategy: 1 Gemini call for playbook generation,
+    then local scenario evaluation per stock (0 API calls).
     """
     # Get currently open markets
     open_markets = get_open_markets(settings.enabled_market_list)
@@ -352,27 +421,66 @@ async def run_daily_session(
     # Process each open market
     for market in open_markets:
+        # Use market-local date for playbook keying
+        market_today = datetime.now(market.timezone).date()
 
         # Dynamic stock discovery via scanner (no static watchlists)
+        candidates_list: list[ScanCandidate] = []
         try:
-            candidates = await smart_scanner.scan()
-            watchlist = [c.stock_code for c in candidates] if candidates else []
+            candidates_list = await smart_scanner.scan() if smart_scanner else []
         except Exception as exc:
             logger.error("Smart Scanner failed for %s: %s", market.name, exc)
-            watchlist = []
 
-        if not watchlist:
+        if not candidates_list:
             logger.info("No scanner candidates for market %s — skipping", market.code)
             continue
 
+        watchlist = [c.stock_code for c in candidates_list]
+        candidate_map = {c.stock_code: c for c in candidates_list}
         logger.info("Processing market: %s (%d stocks)", market.name, len(watchlist))
 
+        # Generate or load playbook (1 Gemini API call per market per day)
+        playbook = playbook_store.load(market_today, market.code)
+        if playbook is None:
+            try:
+                playbook = await pre_market_planner.generate_playbook(
+                    market=market.code,
+                    candidates=candidates_list,
+                    today=market_today,
+                )
+                playbook_store.save(playbook)
+                try:
+                    await telegram.notify_playbook_generated(
+                        market=market.code,
+                        stock_count=playbook.stock_count,
+                        scenario_count=playbook.scenario_count,
+                        token_count=playbook.token_count,
+                    )
+                except Exception as exc:
+                    logger.warning("Playbook notification failed: %s", exc)
+                logger.info(
+                    "Generated playbook for %s: %d stocks, %d scenarios",
+                    market.code, playbook.stock_count, playbook.scenario_count,
+                )
+            except Exception as exc:
+                logger.error("Playbook generation failed for %s: %s", market.code, exc)
+                try:
+                    await telegram.notify_playbook_failed(
+                        market=market.code, reason=str(exc)[:200],
+                    )
+                except Exception as notify_exc:
+                    logger.warning("Playbook failed notification error: %s", notify_exc)
+                playbook = PreMarketPlanner._empty_playbook(market_today, market.code)
 
         # Collect market data for all stocks from scanner
         stocks_data = []
         for stock_code in watchlist:
             try:
                 if market.is_domestic:
                     orderbook = await broker.get_orderbook(stock_code)
-                    current_price = safe_float(orderbook.get("output1", {}).get("stck_prpr", "0"))
+                    current_price = safe_float(
+                        orderbook.get("output1", {}).get("stck_prpr", "0")
+                    )
                     foreigner_net = safe_float(
                         orderbook.get("output1", {}).get("frgn_ntby_qty", "0")
                     )
@@ -380,17 +488,23 @@ async def run_daily_session(
                     price_data = await overseas_broker.get_overseas_price(
                         market.exchange_code, stock_code
                     )
-                    current_price = safe_float(price_data.get("output", {}).get("last", "0"))
+                    current_price = safe_float(
+                        price_data.get("output", {}).get("last", "0")
+                    )
                     foreigner_net = 0.0
 
-                stocks_data.append(
-                    {
-                        "stock_code": stock_code,
-                        "market_name": market.name,
-                        "current_price": current_price,
-                        "foreigner_net": foreigner_net,
-                    }
-                )
+                stock_data: dict[str, Any] = {
+                    "stock_code": stock_code,
+                    "market_name": market.name,
+                    "current_price": current_price,
+                    "foreigner_net": foreigner_net,
+                }
+                # Enrich with scanner metrics
+                cand = candidate_map.get(stock_code)
+                if cand:
+                    stock_data["rsi"] = cand.rsi
+                    stock_data["volume_ratio"] = cand.volume_ratio
+                stocks_data.append(stock_data)
             except Exception as exc:
                 logger.error("Failed to fetch data for %s: %s", stock_code, exc)
                 continue
@@ -399,17 +513,19 @@ async def run_daily_session(
logger.warning("No valid stock data for market %s", market.code) logger.warning("No valid stock data for market %s", market.code)
continue continue
# Get batch decisions (1 API call for all stocks in this market)
logger.info("Requesting batch decision for %d stocks in %s", len(stocks_data), market.name)
decisions = await brain.decide_batch(stocks_data)
# Get balance data once for the market # Get balance data once for the market
if market.is_domestic: if market.is_domestic:
balance_data = await broker.get_balance() balance_data = await broker.get_balance()
output2 = balance_data.get("output2", [{}]) output2 = balance_data.get("output2", [{}])
total_eval = safe_float(output2[0].get("tot_evlu_amt", "0")) if output2 else 0 total_eval = safe_float(
total_cash = safe_float(output2[0].get("dnca_tot_amt", "0")) if output2 else 0 output2[0].get("tot_evlu_amt", "0")
purchase_total = safe_float(output2[0].get("pchs_amt_smtl_amt", "0")) if output2 else 0 ) if output2 else 0
total_cash = safe_float(
output2[0].get("dnca_tot_amt", "0")
) if output2 else 0
purchase_total = safe_float(
output2[0].get("pchs_amt_smtl_amt", "0")
) if output2 else 0
else: else:
balance_data = await overseas_broker.get_overseas_balance(market.exchange_code) balance_data = await overseas_broker.get_overseas_balance(market.exchange_code)
output2 = balance_data.get("output2", [{}]) output2 = balance_data.get("output2", [{}])
@@ -422,21 +538,37 @@ async def run_daily_session(
             total_eval = safe_float(balance_info.get("frcr_evlu_tota", "0") or "0")
             total_cash = safe_float(balance_info.get("frcr_dncl_amt_2", "0") or "0")
-            purchase_total = safe_float(balance_info.get("frcr_buy_amt_smtl", "0") or "0")
+            purchase_total = safe_float(
+                balance_info.get("frcr_buy_amt_smtl", "0") or "0"
+            )
 
         # Calculate daily P&L %
         pnl_pct = (
-            ((total_eval - purchase_total) / purchase_total * 100) if purchase_total > 0 else 0.0
+            ((total_eval - purchase_total) / purchase_total * 100)
+            if purchase_total > 0
+            else 0.0
         )
 
+        portfolio_data = {
+            "portfolio_pnl_pct": pnl_pct,
+            "total_cash": total_cash,
+            "total_eval": total_eval,
+        }
 
-        # Execute decisions for each stock
+        # Evaluate scenarios for each stock (local, no API calls)
+        logger.info(
+            "Evaluating %d stocks against playbook for %s",
+            len(stocks_data), market.name,
+        )
         for stock_data in stocks_data:
             stock_code = stock_data["stock_code"]
-            decision = decisions.get(stock_code)
-            if not decision:
-                logger.warning("No decision for %s — skipping", stock_code)
-                continue
+            match = scenario_engine.evaluate(
+                playbook, stock_code, stock_data, portfolio_data,
+            )
+            decision = TradeDecision(
+                action=match.action.value,
+                confidence=match.confidence,
+                rationale=match.rationale,
+            )
 
             logger.info(
                 "Decision for %s (%s): %s (confidence=%d)",
@@ -458,6 +590,7 @@ async def run_daily_session(
"purchase_total": purchase_total, "purchase_total": purchase_total,
"pnl_pct": pnl_pct, "pnl_pct": pnl_pct,
}, },
"scenario_match": match.match_details,
} }
input_data = { input_data = {
"current_price": stock_data["current_price"], "current_price": stock_data["current_price"],
@@ -509,7 +642,9 @@ async def run_daily_session(
                             threshold=exc.threshold,
                         )
                     except Exception as notify_exc:
-                        logger.warning("Circuit breaker notification failed: %s", notify_exc)
+                        logger.warning(
+                            "Circuit breaker notification failed: %s", notify_exc
+                        )
                     raise
 
                 # Send order
@@ -544,7 +679,9 @@ async def run_daily_session(
                 except Exception as exc:
                     logger.warning("Telegram notification failed: %s", exc)
             except Exception as exc:
-                logger.error("Order execution failed for %s: %s", stock_code, exc)
+                logger.error(
+                    "Order execution failed for %s: %s", stock_code, exc
+                )
                 continue
 
             # Log trade
@@ -570,6 +707,21 @@ async def run(settings: Settings) -> None:
     db_conn = init_db(settings.DB_PATH)
     decision_logger = DecisionLogger(db_conn)
     context_store = ContextStore(db_conn)
+    context_aggregator = ContextAggregator(db_conn)
+
+    # V2 proactive strategy components
+    context_selector = ContextSelector(context_store)
+    scenario_engine = ScenarioEngine()
+    playbook_store = PlaybookStore(db_conn)
+    pre_market_planner = PreMarketPlanner(
+        gemini_client=brain,
+        context_store=context_store,
+        context_selector=context_selector,
+        settings=settings,
+    )
+    # Track playbooks per market (in-memory cache)
+    playbooks: dict[str, DayPlaybook] = {}
 
     # Initialize Telegram notifications
     telegram = TelegramClient(
@@ -716,15 +868,6 @@ async def run(settings: Settings) -> None:
     # Initialize volatility hunter
     volatility_analyzer = VolatilityAnalyzer(min_volume_surge=2.0, min_price_change=1.0)
-    market_scanner = MarketScanner(
-        broker=broker,
-        overseas_broker=overseas_broker,
-        volatility_analyzer=volatility_analyzer,
-        context_store=context_store,
-        top_n=5,
-        max_concurrent_scans=1,  # Fully serialized to avoid EGW00201
-    )
 
     # Initialize smart scanner (Python-first, AI-last pipeline)
     smart_scanner = SmartVolatilityScanner(
         broker=broker,
@@ -732,8 +875,8 @@ async def run(settings: Settings) -> None:
         settings=settings,
     )
 
-    # Track scan candidates for selection context logging
-    scan_candidates: dict[str, ScanCandidate] = {}  # stock_code -> candidate
+    # Track scan candidates per market for selection context logging
+    scan_candidates: dict[str, dict[str, ScanCandidate]] = {}  # market -> {stock_code -> candidate}
 
     # Active stocks per market (dynamically discovered by scanner)
     active_stocks: dict[str, list[str]] = {}  # market_code -> [stock_codes]
@@ -802,7 +945,9 @@ async def run(settings: Settings) -> None:
                     await run_daily_session(
                         broker,
                         overseas_broker,
-                        brain,
+                        scenario_engine,
+                        playbook_store,
+                        pre_market_planner,
                         risk,
                         db_conn,
                         decision_logger,
@@ -847,9 +992,18 @@ async def run(settings: Settings) -> None:
                         market_info = MARKETS.get(market_code)
                         if market_info:
                             await telegram.notify_market_close(market_info.name, 0.0)
+                            market_date = datetime.now(
+                                market_info.timezone
+                            ).date().isoformat()
+                            context_aggregator.aggregate_daily_from_trades(
+                                date=market_date,
+                                market=market_code,
+                            )
                     except Exception as exc:
                         logger.warning("Market close notification failed: %s", exc)
                     _market_states[market_code] = False
+                    # Clear playbook for closed market (new one generated next open)
+                    playbooks.pop(market_code, None)
 
             # No markets open — wait until next market opens
             try:
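The EOD aggregation above keys the daily rollup by the market's local date rather than the server's, which matters when a market closes on the other side of a UTC day boundary (KR close is already "tomorrow" for a UTC server in the evening). A small sketch using stdlib `zoneinfo`; the function name is illustrative:

```python
from datetime import datetime
from zoneinfo import ZoneInfo


def market_local_date_iso(now: datetime, tz_name: str) -> str:
    """ISO date string in the market's timezone, for keying daily aggregates.

    `now` must be timezone-aware; the same instant yields different
    date keys for different markets.
    """
    return now.astimezone(ZoneInfo(tz_name)).date().isoformat()
```

At 23:00 UTC, Seoul is already on the next calendar day while New York is still on the current one, so the same close event is filed under different date keys per market.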
@@ -887,7 +1041,8 @@ async def run(settings: Settings) -> None:
                 # Smart Scanner: dynamic stock discovery (no static watchlists)
                 now_timestamp = asyncio.get_event_loop().time()
                 last_scan = last_scan_time.get(market.code, 0.0)
-                if now_timestamp - last_scan >= SCAN_INTERVAL_SECONDS:
+                rescan_interval = settings.RESCAN_INTERVAL_SECONDS
+                if now_timestamp - last_scan >= rescan_interval:
                     try:
                         logger.info("Smart Scanner: Scanning %s market", market.name)
@@ -899,9 +1054,10 @@ async def run(settings: Settings) -> None:
                             candidates
                         )
 
-                        # Store candidates for selection context logging
-                        for candidate in candidates:
-                            scan_candidates[candidate.stock_code] = candidate
+                        # Store candidates per market for selection context logging
+                        scan_candidates[market.code] = {
+                            c.stock_code: c for c in candidates
+                        }
 
                         logger.info(
                             "Smart Scanner: Found %d candidates for %s: %s",
@@ -909,6 +1065,62 @@ async def run(settings: Settings) -> None:
                             market.name,
                             [f"{c.stock_code}(RSI={c.rsi:.0f})" for c in candidates],
                         )
+
+                        # Get market-local date for playbook keying
+                        market_today = datetime.now(
+                            market.timezone
+                        ).date()
+
+                        # Load or generate playbook (1 Gemini call per market per day)
+                        if market.code not in playbooks:
+                            # Try DB first (survives process restart)
+                            stored_pb = playbook_store.load(market_today, market.code)
+                            if stored_pb is not None:
+                                playbooks[market.code] = stored_pb
+                                logger.info(
+                                    "Loaded existing playbook for %s from DB"
+                                    " (%d stocks, %d scenarios)",
+                                    market.code,
+                                    stored_pb.stock_count,
+                                    stored_pb.scenario_count,
+                                )
+                            else:
+                                try:
+                                    pb = await pre_market_planner.generate_playbook(
+                                        market=market.code,
+                                        candidates=candidates,
+                                        today=market_today,
+                                    )
+                                    playbook_store.save(pb)
+                                    playbooks[market.code] = pb
+                                    try:
+                                        await telegram.notify_playbook_generated(
+                                            market=market.code,
+                                            stock_count=pb.stock_count,
+                                            scenario_count=pb.scenario_count,
+                                            token_count=pb.token_count,
+                                        )
+                                    except Exception as exc:
+                                        logger.warning(
+                                            "Playbook notification failed: %s", exc
+                                        )
+                                except Exception as exc:
+                                    logger.error(
+                                        "Playbook generation failed for %s: %s",
+                                        market.code, exc,
+                                    )
+                                    try:
+                                        await telegram.notify_playbook_failed(
+                                            market=market.code,
+                                            reason=str(exc)[:200],
+                                        )
+                                    except Exception:
+                                        pass
+                                    playbooks[market.code] = (
+                                        PreMarketPlanner._empty_playbook(
+                                            market_today, market.code
+                                        )
+                                    )
                     else:
                         logger.info(
                             "Smart Scanner: No candidates for %s — no trades", market.name
@@ -933,13 +1145,22 @@ async def run(settings: Settings) -> None:
                     if shutdown.is_set():
                         break
 
+                    # Get playbook for this market
+                    market_playbook = playbooks.get(
+                        market.code,
+                        PreMarketPlanner._empty_playbook(
+                            datetime.now(market.timezone).date(), market.code
+                        ),
+                    )
+
                     # Retry logic for connection errors
                     for attempt in range(1, MAX_CONNECTION_RETRIES + 1):
                         try:
                             await trading_cycle(
                                 broker,
                                 overseas_broker,
-                                brain,
+                                scenario_engine,
+                                market_playbook,
                                 risk,
                                 db_conn,
                                 decision_logger,
@@ -988,7 +1209,8 @@ async def run(settings: Settings) -> None:
         metrics = await priority_queue.get_metrics()
         if metrics.total_enqueued > 0:
             logger.info(
-                "Priority queue metrics: enqueued=%d, dequeued=%d, size=%d, timeouts=%d, errors=%d",
+                "Priority queue metrics: enqueued=%d, dequeued=%d,"
+                " size=%d, timeouts=%d, errors=%d",
                 metrics.total_enqueued,
                 metrics.total_dequeued,
                 metrics.current_size,


@@ -161,7 +161,7 @@ class TestContextAggregator:
         self, aggregator: ContextAggregator, db_conn: sqlite3.Connection
     ) -> None:
         """Test aggregating daily metrics from trades."""
-        date = "2026-02-04"
+        date = datetime.now(UTC).date().isoformat()
 
         # Create sample trades
         log_trade(db_conn, "005930", "BUY", 85, "Good signal", quantity=10, price=70000, pnl=500)
@@ -175,36 +175,44 @@ class TestContextAggregator:
         db_conn.commit()
 
         # Aggregate
-        aggregator.aggregate_daily_from_trades(date)
+        aggregator.aggregate_daily_from_trades(date, market="KR")
 
         # Verify L6 contexts
         store = aggregator.store
-        assert store.get_context(ContextLayer.L6_DAILY, date, "trade_count") == 3
-        assert store.get_context(ContextLayer.L6_DAILY, date, "buys") == 1
-        assert store.get_context(ContextLayer.L6_DAILY, date, "sells") == 1
-        assert store.get_context(ContextLayer.L6_DAILY, date, "holds") == 1
-        assert store.get_context(ContextLayer.L6_DAILY, date, "total_pnl") == 2000.0
-        assert store.get_context(ContextLayer.L6_DAILY, date, "unique_stocks") == 3
+        assert store.get_context(ContextLayer.L6_DAILY, date, "trade_count_KR") == 3
+        assert store.get_context(ContextLayer.L6_DAILY, date, "buys_KR") == 1
+        assert store.get_context(ContextLayer.L6_DAILY, date, "sells_KR") == 1
+        assert store.get_context(ContextLayer.L6_DAILY, date, "holds_KR") == 1
+        assert store.get_context(ContextLayer.L6_DAILY, date, "total_pnl_KR") == 2000.0
+        assert store.get_context(ContextLayer.L6_DAILY, date, "unique_stocks_KR") == 3
         # 2 wins, 0 losses
-        assert store.get_context(ContextLayer.L6_DAILY, date, "win_rate") == 100.0
+        assert store.get_context(ContextLayer.L6_DAILY, date, "win_rate_KR") == 100.0
 
     def test_aggregate_weekly_from_daily(self, aggregator: ContextAggregator) -> None:
         """Test aggregating weekly metrics from daily."""
         week = "2026-W06"
 
         # Set daily contexts
-        aggregator.store.set_context(ContextLayer.L6_DAILY, "2026-02-02", "total_pnl", 100.0)
-        aggregator.store.set_context(ContextLayer.L6_DAILY, "2026-02-03", "total_pnl", 200.0)
-        aggregator.store.set_context(ContextLayer.L6_DAILY, "2026-02-02", "avg_confidence", 80.0)
-        aggregator.store.set_context(ContextLayer.L6_DAILY, "2026-02-03", "avg_confidence", 85.0)
+        aggregator.store.set_context(
+            ContextLayer.L6_DAILY, "2026-02-02", "total_pnl_KR", 100.0
+        )
+        aggregator.store.set_context(
+            ContextLayer.L6_DAILY, "2026-02-03", "total_pnl_KR", 200.0
+        )
+        aggregator.store.set_context(
+            ContextLayer.L6_DAILY, "2026-02-02", "avg_confidence_KR", 80.0
+        )
+        aggregator.store.set_context(
+            ContextLayer.L6_DAILY, "2026-02-03", "avg_confidence_KR", 85.0
+        )
 
         # Aggregate
         aggregator.aggregate_weekly_from_daily(week)
 
         # Verify L5 contexts
         store = aggregator.store
-        weekly_pnl = store.get_context(ContextLayer.L5_WEEKLY, week, "weekly_pnl")
-        avg_conf = store.get_context(ContextLayer.L5_WEEKLY, week, "avg_confidence")
+        weekly_pnl = store.get_context(ContextLayer.L5_WEEKLY, week, "weekly_pnl_KR")
+        avg_conf = store.get_context(ContextLayer.L5_WEEKLY, week, "avg_confidence_KR")
         assert weekly_pnl == 300.0
         assert avg_conf == 82.5
@@ -214,9 +222,15 @@ class TestContextAggregator:
month = "2026-02" month = "2026-02"
# Set weekly contexts # Set weekly contexts
aggregator.store.set_context(ContextLayer.L5_WEEKLY, "2026-W05", "weekly_pnl", 100.0) aggregator.store.set_context(
aggregator.store.set_context(ContextLayer.L5_WEEKLY, "2026-W06", "weekly_pnl", 200.0) ContextLayer.L5_WEEKLY, "2026-W05", "weekly_pnl_KR", 100.0
aggregator.store.set_context(ContextLayer.L5_WEEKLY, "2026-W07", "weekly_pnl", 150.0) )
aggregator.store.set_context(
ContextLayer.L5_WEEKLY, "2026-W06", "weekly_pnl_KR", 200.0
)
aggregator.store.set_context(
ContextLayer.L5_WEEKLY, "2026-W07", "weekly_pnl_KR", 150.0
)
# Aggregate # Aggregate
aggregator.aggregate_monthly_from_weekly(month) aggregator.aggregate_monthly_from_weekly(month)
@@ -285,7 +299,7 @@ class TestContextAggregator:
         self, aggregator: ContextAggregator, db_conn: sqlite3.Connection
     ) -> None:
         """Test running all aggregations from L7 to L1."""
-        date = "2026-02-04"
+        date = datetime.now(UTC).date().isoformat()
 
         # Create sample trades
         log_trade(db_conn, "005930", "BUY", 85, "Good signal", quantity=10, price=70000, pnl=1000)
@@ -299,10 +313,18 @@ class TestContextAggregator:
         # Verify data exists in each layer
         store = aggregator.store
-        assert store.get_context(ContextLayer.L6_DAILY, date, "total_pnl") == 1000.0
+        assert store.get_context(ContextLayer.L6_DAILY, date, "total_pnl_KR") == 1000.0
 
-        current_week = datetime.now(UTC).strftime("%Y-W%V")
-        assert store.get_context(ContextLayer.L5_WEEKLY, current_week, "weekly_pnl") is not None
-        # Further layers depend on time alignment, just verify no crashes
+        from datetime import date as date_cls
+        trade_date = date_cls.fromisoformat(date)
+        iso_year, iso_week, _ = trade_date.isocalendar()
+        trade_week = f"{iso_year}-W{iso_week:02d}"
+        assert store.get_context(ContextLayer.L5_WEEKLY, trade_week, "weekly_pnl_KR") is not None
+        trade_month = f"{trade_date.year}-{trade_date.month:02d}"
+        trade_quarter = f"{trade_date.year}-Q{(trade_date.month - 1) // 3 + 1}"
+        trade_year = str(trade_date.year)
+        assert store.get_context(ContextLayer.L4_MONTHLY, trade_month, "monthly_pnl") == 1000.0
+        assert store.get_context(ContextLayer.L3_QUARTERLY, trade_quarter, "quarterly_pnl") == 1000.0
+        assert store.get_context(ContextLayer.L2_ANNUAL, trade_year, "annual_pnl") == 1000.0
 
 
 class TestLayerMetadata:
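The updated test derives its period keys with `isocalendar()` instead of `strftime("%Y-W%V")`; the distinction matters around New Year, where `%Y` (calendar year) can disagree with `%V` (ISO week number), producing a key like `2027-W53` that no aggregation ever wrote. A sketch of the key derivation for the layered rollups (week, month, quarter, year); `period_keys` is illustrative, not the aggregator's actual API:

```python
from datetime import date


def period_keys(d: date) -> dict[str, str]:
    """Derive the weekly/monthly/quarterly/annual layer keys for a trade date."""
    # isocalendar() returns (iso_year, iso_week, iso_weekday); using the
    # ISO year keeps year and week consistent at year boundaries.
    iso_year, iso_week, _ = d.isocalendar()
    return {
        "week": f"{iso_year}-W{iso_week:02d}",
        "month": f"{d.year}-{d.month:02d}",
        "quarter": f"{d.year}-Q{(d.month - 1) // 3 + 1}",
        "year": str(d.year),
    }
```

Note that 2027-01-01 falls in ISO week 53 of 2026, so its weekly key uses the ISO year 2026 while its monthly and annual keys use the calendar year 2027.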


@@ -0,0 +1,104 @@
"""Tests for ContextScheduler."""
from __future__ import annotations
from dataclasses import dataclass
from datetime import UTC, datetime
from src.context.scheduler import ContextScheduler
@dataclass
class StubAggregator:
"""Stub aggregator that records calls."""
weekly_calls: list[str]
monthly_calls: list[str]
quarterly_calls: list[str]
annual_calls: list[str]
legacy_calls: int
def aggregate_weekly_from_daily(self, week: str) -> None:
self.weekly_calls.append(week)
def aggregate_monthly_from_weekly(self, month: str) -> None:
self.monthly_calls.append(month)
def aggregate_quarterly_from_monthly(self, quarter: str) -> None:
self.quarterly_calls.append(quarter)
def aggregate_annual_from_quarterly(self, year: str) -> None:
self.annual_calls.append(year)
def aggregate_legacy_from_annual(self) -> None:
self.legacy_calls += 1
@dataclass
class StubStore:
"""Stub store that records cleanup calls."""
cleanup_calls: int = 0
def cleanup_expired_contexts(self) -> None:
self.cleanup_calls += 1
def make_scheduler() -> tuple[ContextScheduler, StubAggregator, StubStore]:
aggregator = StubAggregator([], [], [], [], 0)
store = StubStore()
scheduler = ContextScheduler(aggregator=aggregator, store=store)
return scheduler, aggregator, store
def test_run_if_due_weekly() -> None:
scheduler, aggregator, store = make_scheduler()
now = datetime(2026, 2, 8, 10, 0, tzinfo=UTC) # Sunday
result = scheduler.run_if_due(now)
assert result.weekly is True
assert aggregator.weekly_calls == ["2026-W06"]
assert store.cleanup_calls == 1
def test_run_if_due_monthly() -> None:
scheduler, aggregator, _store = make_scheduler()
now = datetime(2026, 2, 28, 12, 0, tzinfo=UTC) # Last day of month
result = scheduler.run_if_due(now)
assert result.monthly is True
assert aggregator.monthly_calls == ["2026-02"]
def test_run_if_due_quarterly() -> None:
scheduler, aggregator, _store = make_scheduler()
now = datetime(2026, 3, 31, 12, 0, tzinfo=UTC) # Last day of Q1
result = scheduler.run_if_due(now)
assert result.quarterly is True
assert aggregator.quarterly_calls == ["2026-Q1"]
def test_run_if_due_annual_and_legacy() -> None:
scheduler, aggregator, _store = make_scheduler()
now = datetime(2026, 12, 31, 12, 0, tzinfo=UTC)
result = scheduler.run_if_due(now)
assert result.annual is True
assert result.legacy is True
assert aggregator.annual_calls == ["2026"]
assert aggregator.legacy_calls == 1
def test_cleanup_runs_once_per_day() -> None:
scheduler, _aggregator, store = make_scheduler()
now = datetime(2026, 2, 9, 9, 0, tzinfo=UTC)
scheduler.run_if_due(now)
scheduler.run_if_due(now)
assert store.cleanup_calls == 1
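The dedup guard these tests exercise — each rollup fires at most once per calendar day — can be sketched as a small last-run map keyed by task name. `MiniScheduler` is illustrative, not the real `ContextScheduler`, and only the weekly and cleanup tasks are shown:

```python
from dataclasses import dataclass, field
from datetime import datetime


@dataclass
class MiniScheduler:
    """Sketch of the once-per-day dedup guard behind run_if_due()."""

    _last_run: dict[str, str] = field(default_factory=dict)

    def _due_once(self, task: str, day: str) -> bool:
        # Each task fires at most once per calendar day.
        if self._last_run.get(task) == day:
            return False
        self._last_run[task] = day
        return True

    def run_if_due(self, now: datetime) -> dict[str, bool]:
        day = now.date().isoformat()
        ran = {"cleanup": self._due_once("cleanup", day)}
        # Weekly rollup only on Sundays (datetime.weekday() == 6).
        ran["weekly"] = now.weekday() == 6 and self._due_once("weekly", day)
        return ran
```

A second call on the same day is a no-op for both tasks, while a new day re-arms the daily cleanup — matching `test_cleanup_runs_once_per_day` above.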


@@ -1,12 +1,47 @@
"""Tests for main trading loop telegram integration.""" """Tests for main trading loop integration."""
import asyncio from datetime import date
from unittest.mock import AsyncMock, MagicMock, patch from unittest.mock import ANY, AsyncMock, MagicMock, patch
import pytest import pytest
from src.core.risk_manager import CircuitBreakerTripped, FatFingerRejected from src.core.risk_manager import CircuitBreakerTripped, FatFingerRejected
from src.context.layer import ContextLayer
from src.main import safe_float, trading_cycle from src.main import safe_float, trading_cycle
from src.strategy.models import (
DayPlaybook,
ScenarioAction,
StockCondition,
StockScenario,
)
from src.strategy.scenario_engine import ScenarioEngine, ScenarioMatch
def _make_playbook(market: str = "KR") -> DayPlaybook:
"""Create a minimal empty playbook for testing."""
return DayPlaybook(date=date(2026, 2, 8), market=market)
def _make_buy_match(stock_code: str = "005930") -> ScenarioMatch:
"""Create a ScenarioMatch that returns BUY."""
return ScenarioMatch(
stock_code=stock_code,
matched_scenario=None,
action=ScenarioAction.BUY,
confidence=85,
rationale="Test buy",
)
def _make_hold_match(stock_code: str = "005930") -> ScenarioMatch:
"""Create a ScenarioMatch that returns HOLD."""
return ScenarioMatch(
stock_code=stock_code,
matched_scenario=None,
action=ScenarioAction.HOLD,
confidence=0,
rationale="No scenario conditions met",
)
class TestSafeFloat: class TestSafeFloat:
@@ -81,15 +116,16 @@ class TestTradingCycleTelegramIntegration:
         return broker
 
     @pytest.fixture
-    def mock_brain(self) -> MagicMock:
-        """Create mock brain that decides to buy."""
-        brain = MagicMock()
-        decision = MagicMock()
-        decision.action = "BUY"
-        decision.confidence = 85
-        decision.rationale = "Test buy"
-        brain.decide = AsyncMock(return_value=decision)
-        return brain
+    def mock_scenario_engine(self) -> MagicMock:
+        """Create mock scenario engine that returns BUY."""
+        engine = MagicMock(spec=ScenarioEngine)
+        engine.evaluate = MagicMock(return_value=_make_buy_match())
+        return engine
+
+    @pytest.fixture
+    def mock_playbook(self) -> DayPlaybook:
+        """Create a minimal day playbook."""
+        return _make_playbook()
 
     @pytest.fixture
     def mock_risk(self) -> MagicMock:
@@ -134,6 +170,7 @@ class TestTradingCycleTelegramIntegration:
         telegram.notify_trade_execution = AsyncMock()
         telegram.notify_fat_finger = AsyncMock()
         telegram.notify_circuit_breaker = AsyncMock()
+        telegram.notify_scenario_matched = AsyncMock()
         return telegram
 
     @pytest.fixture
@@ -151,7 +188,8 @@ class TestTradingCycleTelegramIntegration:
         self,
         mock_broker: MagicMock,
         mock_overseas_broker: MagicMock,
-        mock_brain: MagicMock,
+        mock_scenario_engine: MagicMock,
+        mock_playbook: DayPlaybook,
         mock_risk: MagicMock,
         mock_db: MagicMock,
         mock_decision_logger: MagicMock,
@@ -165,7 +203,8 @@ class TestTradingCycleTelegramIntegration:
             await trading_cycle(
                 broker=mock_broker,
                 overseas_broker=mock_overseas_broker,
-                brain=mock_brain,
+                scenario_engine=mock_scenario_engine,
+                playbook=mock_playbook,
                 risk=mock_risk,
                 db_conn=mock_db,
                 decision_logger=mock_decision_logger,
@@ -190,7 +229,8 @@ class TestTradingCycleTelegramIntegration:
         self,
         mock_broker: MagicMock,
         mock_overseas_broker: MagicMock,
-        mock_brain: MagicMock,
+        mock_scenario_engine: MagicMock,
+        mock_playbook: DayPlaybook,
         mock_risk: MagicMock,
         mock_db: MagicMock,
         mock_decision_logger: MagicMock,
@@ -208,7 +248,8 @@ class TestTradingCycleTelegramIntegration:
             await trading_cycle(
                 broker=mock_broker,
                 overseas_broker=mock_overseas_broker,
-                brain=mock_brain,
+                scenario_engine=mock_scenario_engine,
+                playbook=mock_playbook,
                 risk=mock_risk,
                 db_conn=mock_db,
                 decision_logger=mock_decision_logger,
@@ -228,7 +269,8 @@ class TestTradingCycleTelegramIntegration:
         self,
         mock_broker: MagicMock,
         mock_overseas_broker: MagicMock,
-        mock_brain: MagicMock,
+        mock_scenario_engine: MagicMock,
+        mock_playbook: DayPlaybook,
         mock_risk: MagicMock,
         mock_db: MagicMock,
         mock_decision_logger: MagicMock,
@@ -250,7 +292,8 @@ class TestTradingCycleTelegramIntegration:
             await trading_cycle(
                 broker=mock_broker,
                 overseas_broker=mock_overseas_broker,
-                brain=mock_brain,
+                scenario_engine=mock_scenario_engine,
+                playbook=mock_playbook,
                 risk=mock_risk,
                 db_conn=mock_db,
                 decision_logger=mock_decision_logger,
@@ -275,7 +318,8 @@ class TestTradingCycleTelegramIntegration:
         self,
         mock_broker: MagicMock,
         mock_overseas_broker: MagicMock,
-        mock_brain: MagicMock,
+        mock_scenario_engine: MagicMock,
+        mock_playbook: DayPlaybook,
         mock_risk: MagicMock,
         mock_db: MagicMock,
         mock_decision_logger: MagicMock,
@@ -299,7 +343,8 @@ class TestTradingCycleTelegramIntegration:
             await trading_cycle(
                 broker=mock_broker,
                 overseas_broker=mock_overseas_broker,
-                brain=mock_brain,
+                scenario_engine=mock_scenario_engine,
+                playbook=mock_playbook,
                 risk=mock_risk,
                 db_conn=mock_db,
                 decision_logger=mock_decision_logger,
@@ -319,7 +364,8 @@ class TestTradingCycleTelegramIntegration:
         self,
         mock_broker: MagicMock,
         mock_overseas_broker: MagicMock,
-        mock_brain: MagicMock,
+        mock_scenario_engine: MagicMock,
+        mock_playbook: DayPlaybook,
         mock_risk: MagicMock,
         mock_db: MagicMock,
         mock_decision_logger: MagicMock,
@@ -329,18 +375,15 @@ class TestTradingCycleTelegramIntegration:
         mock_market: MagicMock,
     ) -> None:
         """Test no trade notification sent when decision is HOLD."""
-        # Change brain decision to HOLD
-        decision = MagicMock()
-        decision.action = "HOLD"
-        decision.confidence = 50
-        decision.rationale = "Insufficient signal"
-        mock_brain.decide = AsyncMock(return_value=decision)
+        # Scenario engine returns HOLD
+        mock_scenario_engine.evaluate = MagicMock(return_value=_make_hold_match())
 
         with patch("src.main.log_trade"):
             await trading_cycle(
                 broker=mock_broker,
                 overseas_broker=mock_overseas_broker,
-                brain=mock_brain,
+                scenario_engine=mock_scenario_engine,
+                playbook=mock_playbook,
                 risk=mock_risk,
                 db_conn=mock_db,
                 decision_logger=mock_decision_logger,
@@ -472,15 +515,16 @@ class TestOverseasBalanceParsing:
         return market

     @pytest.fixture
-    def mock_brain_hold(self) -> MagicMock:
-        """Create mock brain that always holds."""
-        brain = MagicMock()
-        decision = MagicMock()
-        decision.action = "HOLD"
-        decision.confidence = 50
-        decision.rationale = "Testing balance parsing"
-        brain.decide = AsyncMock(return_value=decision)
-        return brain
+    def mock_scenario_engine_hold(self) -> MagicMock:
+        """Create mock scenario engine that always returns HOLD."""
+        engine = MagicMock(spec=ScenarioEngine)
+        engine.evaluate = MagicMock(return_value=_make_hold_match("AAPL"))
+        return engine
+
+    @pytest.fixture
+    def mock_playbook(self) -> DayPlaybook:
+        """Create a minimal playbook."""
+        return _make_playbook("US")

     @pytest.fixture
     def mock_risk(self) -> MagicMock:
@@ -517,14 +561,17 @@ class TestOverseasBalanceParsing:
     @pytest.fixture
     def mock_telegram(self) -> MagicMock:
         """Create mock telegram client."""
-        return MagicMock()
+        telegram = MagicMock()
+        telegram.notify_scenario_matched = AsyncMock()
+        return telegram

     @pytest.mark.asyncio
     async def test_overseas_balance_list_format(
         self,
         mock_domestic_broker: MagicMock,
         mock_overseas_broker_with_list: MagicMock,
-        mock_brain_hold: MagicMock,
+        mock_scenario_engine_hold: MagicMock,
+        mock_playbook: DayPlaybook,
         mock_risk: MagicMock,
         mock_db: MagicMock,
         mock_decision_logger: MagicMock,
@@ -539,7 +586,8 @@ class TestOverseasBalanceParsing:
             await trading_cycle(
                 broker=mock_domestic_broker,
                 overseas_broker=mock_overseas_broker_with_list,
-                brain=mock_brain_hold,
+                scenario_engine=mock_scenario_engine_hold,
+                playbook=mock_playbook,
                 risk=mock_risk,
                 db_conn=mock_db,
                 decision_logger=mock_decision_logger,
@@ -559,7 +607,8 @@ class TestOverseasBalanceParsing:
         self,
         mock_domestic_broker: MagicMock,
         mock_overseas_broker_with_dict: MagicMock,
-        mock_brain_hold: MagicMock,
+        mock_scenario_engine_hold: MagicMock,
+        mock_playbook: DayPlaybook,
         mock_risk: MagicMock,
         mock_db: MagicMock,
         mock_decision_logger: MagicMock,
@@ -574,7 +623,8 @@ class TestOverseasBalanceParsing:
             await trading_cycle(
                 broker=mock_domestic_broker,
                 overseas_broker=mock_overseas_broker_with_dict,
-                brain=mock_brain_hold,
+                scenario_engine=mock_scenario_engine_hold,
+                playbook=mock_playbook,
                 risk=mock_risk,
                 db_conn=mock_db,
                 decision_logger=mock_decision_logger,
@@ -594,7 +644,8 @@ class TestOverseasBalanceParsing:
         self,
         mock_domestic_broker: MagicMock,
         mock_overseas_broker_with_empty: MagicMock,
-        mock_brain_hold: MagicMock,
+        mock_scenario_engine_hold: MagicMock,
+        mock_playbook: DayPlaybook,
         mock_risk: MagicMock,
         mock_db: MagicMock,
         mock_decision_logger: MagicMock,
@@ -609,7 +660,8 @@ class TestOverseasBalanceParsing:
             await trading_cycle(
                 broker=mock_domestic_broker,
                 overseas_broker=mock_overseas_broker_with_empty,
-                brain=mock_brain_hold,
+                scenario_engine=mock_scenario_engine_hold,
+                playbook=mock_playbook,
                 risk=mock_risk,
                 db_conn=mock_db,
                 decision_logger=mock_decision_logger,
@@ -629,7 +681,8 @@ class TestOverseasBalanceParsing:
         self,
         mock_domestic_broker: MagicMock,
         mock_overseas_broker_with_empty_price: MagicMock,
-        mock_brain_hold: MagicMock,
+        mock_scenario_engine_hold: MagicMock,
+        mock_playbook: DayPlaybook,
         mock_risk: MagicMock,
         mock_db: MagicMock,
         mock_decision_logger: MagicMock,
@@ -644,7 +697,8 @@ class TestOverseasBalanceParsing:
             await trading_cycle(
                 broker=mock_domestic_broker,
                 overseas_broker=mock_overseas_broker_with_empty_price,
-                brain=mock_brain_hold,
+                scenario_engine=mock_scenario_engine_hold,
+                playbook=mock_playbook,
                 risk=mock_risk,
                 db_conn=mock_db,
                 decision_logger=mock_decision_logger,
@@ -658,3 +712,404 @@ class TestOverseasBalanceParsing:
         # Verify price API was called
         mock_overseas_broker_with_empty_price.get_overseas_price.assert_called_once()
class TestScenarioEngineIntegration:
    """Test scenario engine integration in trading_cycle."""

    @pytest.fixture
    def mock_broker(self) -> MagicMock:
        """Create mock broker with standard domestic data."""
        broker = MagicMock()
        broker.get_orderbook = AsyncMock(
            return_value={
                "output1": {"stck_prpr": "50000", "frgn_ntby_qty": "100"}
            }
        )
        broker.get_balance = AsyncMock(
            return_value={
                "output2": [
                    {
                        "tot_evlu_amt": "10000000",
                        "dnca_tot_amt": "5000000",
                        "pchs_amt_smtl_amt": "9500000",
                    }
                ]
            }
        )
        broker.send_order = AsyncMock(return_value={"msg1": "OK"})
        return broker

    @pytest.fixture
    def mock_market(self) -> MagicMock:
        """Create mock KR market."""
        market = MagicMock()
        market.name = "Korea"
        market.code = "KR"
        market.exchange_code = "KRX"
        market.is_domestic = True
        return market

    @pytest.fixture
    def mock_telegram(self) -> MagicMock:
        """Create mock telegram with all notification methods."""
        telegram = MagicMock()
        telegram.notify_trade_execution = AsyncMock()
        telegram.notify_scenario_matched = AsyncMock()
        telegram.notify_fat_finger = AsyncMock()
        return telegram
    @pytest.mark.asyncio
    async def test_scenario_engine_called_with_enriched_market_data(
        self, mock_broker: MagicMock, mock_market: MagicMock, mock_telegram: MagicMock,
    ) -> None:
        """Test scenario engine receives market_data enriched with scanner metrics."""
        from src.analysis.smart_scanner import ScanCandidate

        engine = MagicMock(spec=ScenarioEngine)
        engine.evaluate = MagicMock(return_value=_make_hold_match())
        playbook = _make_playbook()
        candidate = ScanCandidate(
            stock_code="005930", name="Samsung", price=50000,
            volume=1000000, volume_ratio=3.5, rsi=25.0,
            signal="oversold", score=85.0,
        )
        with patch("src.main.log_trade"):
            await trading_cycle(
                broker=mock_broker,
                overseas_broker=MagicMock(),
                scenario_engine=engine,
                playbook=playbook,
                risk=MagicMock(),
                db_conn=MagicMock(),
                decision_logger=MagicMock(),
                context_store=MagicMock(get_latest_timeframe=MagicMock(return_value=None)),
                criticality_assessor=MagicMock(
                    assess_market_conditions=MagicMock(return_value=MagicMock(value="NORMAL")),
                    get_timeout=MagicMock(return_value=5.0),
                ),
                telegram=mock_telegram,
                market=mock_market,
                stock_code="005930",
                scan_candidates={"KR": {"005930": candidate}},
            )

        # Verify evaluate was called
        engine.evaluate.assert_called_once()
        call_args = engine.evaluate.call_args
        market_data = call_args[0][2]  # 3rd positional arg
        portfolio_data = call_args[0][3]  # 4th positional arg

        # Scanner data should be enriched into market_data
        assert market_data["rsi"] == 25.0
        assert market_data["volume_ratio"] == 3.5
        assert market_data["current_price"] == 50000.0

        # Portfolio data should include pnl
        assert "portfolio_pnl_pct" in portfolio_data
        assert "total_cash" in portfolio_data
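The enrichment this test pins down can be sketched as a small pure function. The name `enrich_market_data` and the exact merge shape are assumptions for illustration; the diff only shows the resulting keys, not how `src.main` builds them:

```python
from typing import Any, Optional


def enrich_market_data(
    market_data: dict[str, Any], candidate: Optional[Any]
) -> dict[str, Any]:
    """Merge scanner metrics into the market_data dict handed to evaluate().

    When the stock has no scan candidate, market_data passes through
    unchanged, so 'rsi'/'volume_ratio' are simply absent.
    """
    if candidate is not None:
        market_data["rsi"] = candidate.rsi
        market_data["volume_ratio"] = candidate.volume_ratio
    return market_data
```

This matches the behavior asserted in both the enriched and no-scanner-data tests: the keys exist only when a candidate was found for the current market.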
    @pytest.mark.asyncio
    async def test_trading_cycle_sets_l7_context_keys(
        self, mock_broker: MagicMock, mock_market: MagicMock, mock_telegram: MagicMock,
    ) -> None:
        """Test L7 context is written with market-scoped keys."""
        from src.analysis.smart_scanner import ScanCandidate

        engine = MagicMock(spec=ScenarioEngine)
        engine.evaluate = MagicMock(return_value=_make_hold_match())
        playbook = _make_playbook()
        context_store = MagicMock(get_latest_timeframe=MagicMock(return_value=None))
        candidate = ScanCandidate(
            stock_code="005930", name="Samsung", price=50000,
            volume=1000000, volume_ratio=3.5, rsi=25.0,
            signal="oversold", score=85.0,
        )
        with patch("src.main.log_trade"):
            await trading_cycle(
                broker=mock_broker,
                overseas_broker=MagicMock(),
                scenario_engine=engine,
                playbook=playbook,
                risk=MagicMock(),
                db_conn=MagicMock(),
                decision_logger=MagicMock(),
                context_store=context_store,
                criticality_assessor=MagicMock(
                    assess_market_conditions=MagicMock(return_value=MagicMock(value="NORMAL")),
                    get_timeout=MagicMock(return_value=5.0),
                ),
                telegram=mock_telegram,
                market=mock_market,
                stock_code="005930",
                scan_candidates={"KR": {"005930": candidate}},
            )

        context_store.set_context.assert_any_call(
            ContextLayer.L7_REALTIME,
            ANY,
            "volatility_KR_005930",
            {"momentum_score": 50.0, "volume_surge": 1.0, "price_change_1m": 0.0},
        )
        context_store.set_context.assert_any_call(
            ContextLayer.L7_REALTIME,
            ANY,
            "price_KR_005930",
            {"current_price": 50000.0},
        )
        context_store.set_context.assert_any_call(
            ContextLayer.L7_REALTIME,
            ANY,
            "rsi_KR_005930",
            {"rsi": 25.0},
        )
        context_store.set_context.assert_any_call(
            ContextLayer.L7_REALTIME,
            ANY,
            "volume_ratio_KR_005930",
            {"volume_ratio": 3.5},
        )
    @pytest.mark.asyncio
    async def test_scan_candidates_market_scoped(
        self, mock_broker: MagicMock, mock_market: MagicMock, mock_telegram: MagicMock,
    ) -> None:
        """Test scan_candidates uses market-scoped lookup, ignoring other markets."""
        from src.analysis.smart_scanner import ScanCandidate

        engine = MagicMock(spec=ScenarioEngine)
        engine.evaluate = MagicMock(return_value=_make_hold_match())
        # Candidate stored under US market — should NOT be found for KR market
        us_candidate = ScanCandidate(
            stock_code="005930", name="Overlap", price=100,
            volume=500000, volume_ratio=5.0, rsi=15.0,
            signal="oversold", score=90.0,
        )
        with patch("src.main.log_trade"):
            await trading_cycle(
                broker=mock_broker,
                overseas_broker=MagicMock(),
                scenario_engine=engine,
                playbook=_make_playbook(),
                risk=MagicMock(),
                db_conn=MagicMock(),
                decision_logger=MagicMock(),
                context_store=MagicMock(get_latest_timeframe=MagicMock(return_value=None)),
                criticality_assessor=MagicMock(
                    assess_market_conditions=MagicMock(return_value=MagicMock(value="NORMAL")),
                    get_timeout=MagicMock(return_value=5.0),
                ),
                telegram=mock_telegram,
                market=mock_market,  # KR market
                stock_code="005930",
                scan_candidates={"US": {"005930": us_candidate}},  # Wrong market
            )

        # Should NOT have rsi/volume_ratio because candidate is under US, not KR
        market_data = engine.evaluate.call_args[0][2]
        assert "rsi" not in market_data
        assert "volume_ratio" not in market_data
    @pytest.mark.asyncio
    async def test_scenario_engine_called_without_scanner_data(
        self, mock_broker: MagicMock, mock_market: MagicMock, mock_telegram: MagicMock,
    ) -> None:
        """Test scenario engine works when stock has no scan candidate."""
        engine = MagicMock(spec=ScenarioEngine)
        engine.evaluate = MagicMock(return_value=_make_hold_match())
        playbook = _make_playbook()
        with patch("src.main.log_trade"):
            await trading_cycle(
                broker=mock_broker,
                overseas_broker=MagicMock(),
                scenario_engine=engine,
                playbook=playbook,
                risk=MagicMock(),
                db_conn=MagicMock(),
                decision_logger=MagicMock(),
                context_store=MagicMock(get_latest_timeframe=MagicMock(return_value=None)),
                criticality_assessor=MagicMock(
                    assess_market_conditions=MagicMock(return_value=MagicMock(value="NORMAL")),
                    get_timeout=MagicMock(return_value=5.0),
                ),
                telegram=mock_telegram,
                market=mock_market,
                stock_code="005930",
                scan_candidates={},  # No scanner data
            )

        # Should still work, just without rsi/volume_ratio
        engine.evaluate.assert_called_once()
        market_data = engine.evaluate.call_args[0][2]
        assert "rsi" not in market_data
        assert "volume_ratio" not in market_data
        assert market_data["current_price"] == 50000.0
    @pytest.mark.asyncio
    async def test_scenario_matched_notification_sent(
        self, mock_broker: MagicMock, mock_market: MagicMock, mock_telegram: MagicMock,
    ) -> None:
        """Test telegram notification sent when a scenario matches."""
        # Create a match with matched_scenario (not None)
        scenario = StockScenario(
            condition=StockCondition(rsi_below=30),
            action=ScenarioAction.BUY,
            confidence=88,
            rationale="RSI oversold bounce",
        )
        match = ScenarioMatch(
            stock_code="005930",
            matched_scenario=scenario,
            action=ScenarioAction.BUY,
            confidence=88,
            rationale="RSI oversold bounce",
            match_details={"rsi": 25.0},
        )
        engine = MagicMock(spec=ScenarioEngine)
        engine.evaluate = MagicMock(return_value=match)
        with patch("src.main.log_trade"):
            await trading_cycle(
                broker=mock_broker,
                overseas_broker=MagicMock(),
                scenario_engine=engine,
                playbook=_make_playbook(),
                risk=MagicMock(),
                db_conn=MagicMock(),
                decision_logger=MagicMock(),
                context_store=MagicMock(get_latest_timeframe=MagicMock(return_value=None)),
                criticality_assessor=MagicMock(
                    assess_market_conditions=MagicMock(return_value=MagicMock(value="NORMAL")),
                    get_timeout=MagicMock(return_value=5.0),
                ),
                telegram=mock_telegram,
                market=mock_market,
                stock_code="005930",
                scan_candidates={},
            )

        # Scenario matched notification should be sent
        mock_telegram.notify_scenario_matched.assert_called_once()
        call_kwargs = mock_telegram.notify_scenario_matched.call_args.kwargs
        assert call_kwargs["stock_code"] == "005930"
        assert call_kwargs["action"] == "BUY"
        assert "rsi=25.0" in call_kwargs["condition_summary"]
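The `condition_summary` assertion implies that `match_details` gets flattened into `key=value` pairs before notification. One plausible formatter consistent with the asserted substring; this is an assumption about the notifier, not code shown in the diff:

```python
from typing import Any


def summarize_conditions(match_details: dict[str, Any]) -> str:
    """Render match details as comma-separated 'k=v' pairs, e.g. 'rsi=25.0'."""
    return ", ".join(f"{k}={v}" for k, v in match_details.items())
```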
    @pytest.mark.asyncio
    async def test_no_scenario_matched_notification_on_default_hold(
        self, mock_broker: MagicMock, mock_market: MagicMock, mock_telegram: MagicMock,
    ) -> None:
        """Test no scenario notification when default HOLD is returned."""
        engine = MagicMock(spec=ScenarioEngine)
        engine.evaluate = MagicMock(return_value=_make_hold_match())
        with patch("src.main.log_trade"):
            await trading_cycle(
                broker=mock_broker,
                overseas_broker=MagicMock(),
                scenario_engine=engine,
                playbook=_make_playbook(),
                risk=MagicMock(),
                db_conn=MagicMock(),
                decision_logger=MagicMock(),
                context_store=MagicMock(get_latest_timeframe=MagicMock(return_value=None)),
                criticality_assessor=MagicMock(
                    assess_market_conditions=MagicMock(return_value=MagicMock(value="NORMAL")),
                    get_timeout=MagicMock(return_value=5.0),
                ),
                telegram=mock_telegram,
                market=mock_market,
                stock_code="005930",
                scan_candidates={},
            )

        # No scenario matched notification for default HOLD
        mock_telegram.notify_scenario_matched.assert_not_called()
    @pytest.mark.asyncio
    async def test_decision_logger_receives_scenario_match_details(
        self, mock_broker: MagicMock, mock_market: MagicMock, mock_telegram: MagicMock,
    ) -> None:
        """Test decision logger context includes scenario match details."""
        match = ScenarioMatch(
            stock_code="005930",
            matched_scenario=None,
            action=ScenarioAction.HOLD,
            confidence=0,
            rationale="No match",
            match_details={"rsi": 45.0, "volume_ratio": 1.2},
        )
        engine = MagicMock(spec=ScenarioEngine)
        engine.evaluate = MagicMock(return_value=match)
        decision_logger = MagicMock()
        with patch("src.main.log_trade"):
            await trading_cycle(
                broker=mock_broker,
                overseas_broker=MagicMock(),
                scenario_engine=engine,
                playbook=_make_playbook(),
                risk=MagicMock(),
                db_conn=MagicMock(),
                decision_logger=decision_logger,
                context_store=MagicMock(get_latest_timeframe=MagicMock(return_value=None)),
                criticality_assessor=MagicMock(
                    assess_market_conditions=MagicMock(return_value=MagicMock(value="NORMAL")),
                    get_timeout=MagicMock(return_value=5.0),
                ),
                telegram=mock_telegram,
                market=mock_market,
                stock_code="005930",
                scan_candidates={},
            )

        decision_logger.log_decision.assert_called_once()
        call_kwargs = decision_logger.log_decision.call_args.kwargs
        assert "scenario_match" in call_kwargs["context_snapshot"]
        assert call_kwargs["context_snapshot"]["scenario_match"]["rsi"] == 45.0
    @pytest.mark.asyncio
    async def test_reduce_all_does_not_execute_order(
        self, mock_broker: MagicMock, mock_market: MagicMock, mock_telegram: MagicMock,
    ) -> None:
        """Test REDUCE_ALL action does not trigger order execution."""
        match = ScenarioMatch(
            stock_code="005930",
            matched_scenario=None,
            action=ScenarioAction.REDUCE_ALL,
            confidence=100,
            rationale="Global rule: portfolio loss > 2%",
        )
        engine = MagicMock(spec=ScenarioEngine)
        engine.evaluate = MagicMock(return_value=match)
        with patch("src.main.log_trade"):
            await trading_cycle(
                broker=mock_broker,
                overseas_broker=MagicMock(),
                scenario_engine=engine,
                playbook=_make_playbook(),
                risk=MagicMock(),
                db_conn=MagicMock(),
                decision_logger=MagicMock(),
                context_store=MagicMock(get_latest_timeframe=MagicMock(return_value=None)),
                criticality_assessor=MagicMock(
                    assess_market_conditions=MagicMock(return_value=MagicMock(value="NORMAL")),
                    get_timeout=MagicMock(return_value=5.0),
                ),
                telegram=mock_telegram,
                market=mock_market,
                stock_code="005930",
                scan_candidates={},
            )

        # REDUCE_ALL is not BUY or SELL — no order sent
        mock_broker.send_order.assert_not_called()
        mock_telegram.notify_trade_execution.assert_not_called()
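The REDUCE_ALL test pins down a dispatch rule: only BUY and SELL ever reach `broker.send_order`. A minimal sketch of that guard under the assumption (suggested by the test's final comment, not shown in the diff) that dispatch is a simple membership check:

```python
def should_send_order(action: str) -> bool:
    """Only BUY/SELL trigger broker.send_order; HOLD and REDUCE_ALL do not."""
    return action in {"BUY", "SELL"}
```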
tests/test_scorecard.py (new file, 81 lines)

@@ -0,0 +1,81 @@
"""Tests for DailyScorecard model."""

from __future__ import annotations

from src.evolution.scorecard import DailyScorecard


def test_scorecard_initialization() -> None:
    scorecard = DailyScorecard(
        date="2026-02-08",
        market="KR",
        total_decisions=10,
        buys=3,
        sells=2,
        holds=5,
        total_pnl=1234.5,
        win_rate=60.0,
        avg_confidence=78.5,
        scenario_match_rate=70.0,
        top_winners=["005930", "000660"],
        top_losers=["035420"],
        lessons=["Avoid chasing breakouts"],
        cross_market_note="US volatility spillover",
    )
    assert scorecard.market == "KR"
    assert scorecard.total_decisions == 10
    assert scorecard.total_pnl == 1234.5
    assert scorecard.top_winners == ["005930", "000660"]
    assert scorecard.lessons == ["Avoid chasing breakouts"]
    assert scorecard.cross_market_note == "US volatility spillover"


def test_scorecard_defaults() -> None:
    scorecard = DailyScorecard(
        date="2026-02-08",
        market="US",
        total_decisions=0,
        buys=0,
        sells=0,
        holds=0,
        total_pnl=0.0,
        win_rate=0.0,
        avg_confidence=0.0,
        scenario_match_rate=0.0,
    )
    assert scorecard.top_winners == []
    assert scorecard.top_losers == []
    assert scorecard.lessons == []
    assert scorecard.cross_market_note == ""


def test_scorecard_list_isolation() -> None:
    a = DailyScorecard(
        date="2026-02-08",
        market="KR",
        total_decisions=1,
        buys=1,
        sells=0,
        holds=0,
        total_pnl=10.0,
        win_rate=100.0,
        avg_confidence=90.0,
        scenario_match_rate=100.0,
    )
    b = DailyScorecard(
        date="2026-02-08",
        market="US",
        total_decisions=1,
        buys=0,
        sells=1,
        holds=0,
        total_pnl=-5.0,
        win_rate=0.0,
        avg_confidence=60.0,
        scenario_match_rate=50.0,
    )
    a.top_winners.append("005930")
    assert b.top_winners == []
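The defaults and list-isolation tests together pin down the model's shape: list-valued fields must use `default_factory` so instances never share mutable state. A self-contained sketch consistent with these tests; the real model lives in `src/evolution/scorecard.py`, and the field order here is an assumption:

```python
from dataclasses import dataclass, field


@dataclass
class DailyScorecard:
    """Per-market daily performance review (one card per market per day)."""
    date: str
    market: str
    total_decisions: int
    buys: int
    sells: int
    holds: int
    total_pnl: float
    win_rate: float
    avg_confidence: float
    scenario_match_rate: float
    # default_factory gives each instance its own list, which is what
    # test_scorecard_list_isolation verifies.
    top_winners: list[str] = field(default_factory=list)
    top_losers: list[str] = field(default_factory=list)
    lessons: list[str] = field(default_factory=list)
    cross_market_note: str = ""
```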
@@ -412,7 +412,7 @@ class TestMarketScanner:
         scan_result = context_store.get_context(
             ContextLayer.L7_REALTIME,
             latest_timeframe,
-            "KR_scan_result",
+            "scan_result_KR",
         )
         assert scan_result is not None
         assert scan_result["total_scanned"] == 3