Compare commits

...

7 Commits

Author SHA1 Message Date
agentson
254b543c89 Merge main into feature/issue-15-context-tree
Some checks failed
CI / test (pull_request) Has been cancelled
Resolved conflicts in CLAUDE.md by:
- Keeping main's refactored structure (docs split into separate files)
- Adding the Context Tree documentation link to the docs section
- Preserving all constraints and guidelines from main

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-02-04 15:25:13 +09:00
agentson
917b68eb81 feat: implement L1-L7 context tree for multi-layered memory management
Some checks failed
CI / test (pull_request) Has been cancelled
Implements Pillar 2 (Multi-layered Context Management) with a 7-tier
hierarchical memory system from real-time market data to generational
trading wisdom.

## New Modules
- `src/context/layer.py`: ContextLayer enum and metadata config
- `src/context/store.py`: ContextStore for CRUD operations
- `src/context/aggregator.py`: Bottom-up aggregation (L7→L6→...→L1)

## Database Changes
- Added `contexts` table for hierarchical data storage
- Added `context_metadata` table for layer configuration
- Indexed by layer, timeframe, and updated_at for fast queries

## Context Layers
- L1 (Legacy): Cumulative wisdom (kept forever)
- L2 (Annual): Yearly metrics (10 years retention)
- L3 (Quarterly): Strategy pivots (3 years)
- L4 (Monthly): Portfolio rebalancing (2 years)
- L5 (Weekly): Stock selection (1 year)
- L6 (Daily): Trade logs (90 days)
- L7 (Real-time): Live market data (7 days)

## Tests
- 18 new tests in `tests/test_context.py`
- 100% coverage on context modules
- All 72 tests passing (54 existing + 18 new)

## Documentation
- Added `docs/context-tree.md` with comprehensive guide
- Updated `CLAUDE.md` architecture section
- Includes usage examples and best practices

Closes #15

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-02-04 14:12:29 +09:00
2becbddb4a Merge pull request 'refactor: split CLAUDE.md into focused documentation structure' (#14) from feature/issue-13-docs-refactor into main
Some checks failed
CI / test (push) Has been cancelled
Reviewed-on: #14
2026-02-04 10:15:09 +09:00
agentson
05e8986ff5 refactor: split CLAUDE.md into focused documentation structure
Some checks failed
CI / test (pull_request) Has been cancelled
- Restructure docs into topic-specific files to minimize context
- Create docs/workflow.md (Git + Agent workflow)
- Create docs/commands.md (Common failures + build commands)
- Create docs/architecture.md (System design + data flow)
- Create docs/testing.md (Test structure + guidelines)
- Rewrite CLAUDE.md as concise hub with links to detailed docs
- Update .gitignore to exclude data/ directory

Benefits:
- Reduced context size for AI assistants
- Faster reference lookups
- Better maintainability
- Topic-focused documentation

Closes #13

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-02-04 10:13:48 +09:00
3c676c2b8d Merge pull request 'docs: add common command failures and solutions' (#12) from feature/issue-11-command-failures into main
Some checks failed
CI / test (push) Has been cancelled
Reviewed-on: #12
2026-02-04 10:03:29 +09:00
agentson
3dd222bd3b docs: add common command failures and solutions
Some checks failed
CI / test (pull_request) Has been cancelled
- Document tea CLI TTY errors and YES="" workaround
- Record wrong parameter names (--body vs --description)
- Add Gitea API hostname corrections
- Include Git configuration errors
- Document Python/pytest common issues

Each failure includes:
- ❌ Failing command
- 💡 Failure reason
- ✅ Working solution
- 📝 Additional notes

Closes #11

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-02-04 10:02:41 +09:00
f4e6b609a4 Merge pull request 'docs: add agent workflow and parallel execution strategy' (#10) from feature/issue-9-agent-workflow into main
Some checks failed
CI / test (push) Has been cancelled
Reviewed-on: #10
2026-02-04 09:51:01 +09:00
13 changed files with 1960 additions and 120 deletions

1
.gitignore vendored

@@ -174,3 +174,4 @@ cython_debug/
# PyPI configuration file
.pypirc
+data/

196
CLAUDE.md

@@ -1,142 +1,98 @@
-# CLAUDE.md
+# The Ouroboros
-This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
+AI-powered trading agent for global stock markets with self-evolution capabilities.
-## Git Workflow Policy
-**CRITICAL: All code changes MUST follow this workflow. Direct pushes to `main` are ABSOLUTELY PROHIBITED.**
-1. **Create Gitea Issue First** — All features, bug fixes, and policy changes require a Gitea issue before any code is written
-2. **Create Feature Branch** — Branch from `main` using format `feature/issue-{N}-{short-description}`
-3. **Implement Changes** — Write code, tests, and documentation on the feature branch
-4. **Create Pull Request** — Submit PR to `main` branch referencing the issue number
-5. **Review & Merge** — After approval, merge via PR (squash or merge commit)
-**Never commit directly to `main`.** This policy applies to all changes, no exceptions.
-## Agent Workflow
-**Modern AI development leverages specialized agents for concurrent, efficient task execution.**
-### Parallel Execution Strategy
-Use **git worktree** or **subagents** (via the Task tool) to handle multiple requirements simultaneously:
-- Each task runs in independent context
-- Parallel branches for concurrent features
-- Isolated test environments prevent interference
-- Faster iteration with distributed workload
-### Specialized Agent Roles
-Deploy task-specific agents as needed instead of handling everything in the main conversation:
-- **Conversational Agent** (main) — Interface with user, coordinate other agents
-- **Ticket Management Agent** — Create/update Gitea issues, track task status
-- **Design Agent** — Architectural planning, RFC documents, API design
-- **Code Writing Agent** — Implementation following specs
-- **Testing Agent** — Write tests, verify coverage, run test suites
-- **Documentation Agent** — Update docs, docstrings, CLAUDE.md, README
-- **Review Agent** — Code review, lint checks, security audits
-- **Custom Agents** — Created dynamically for specialized tasks (performance analysis, migration scripts, etc.)
-### When to Use Agents
-**Prefer spawning specialized agents for:**
-1. Complex multi-file changes requiring exploration
-2. Tasks with clear, isolated scope (e.g., "write tests for module X")
-3. Parallel work streams (feature A + bugfix B simultaneously)
-4. Long-running analysis (codebase search, dependency audit)
-5. Tasks requiring different contexts (multiple git worktrees)
-**Use the main conversation for:**
-1. User interaction and clarification
-2. Quick single-file edits
-3. Coordinating agent work
-4. High-level decision making
-### Implementation
-```python
-# Example: Spawn parallel test and documentation agents
-task_tool(
-    subagent_type="general-purpose",
-    prompt="Write comprehensive tests for src/markets/schedule.py",
-    description="Write schedule tests"
-)
-task_tool(
-    subagent_type="general-purpose",
-    prompt="Update README.md with global market feature documentation",
-    description="Update README"
-)
-```
-Use `run_in_background=True` for independent tasks that don't block subsequent work.
-## Build & Test Commands
+## Quick Start
 ```bash
-# Install all dependencies (production + dev)
-pip install ".[dev]"
+# Setup
+pip install -e ".[dev]"
+cp .env.example .env
+# Edit .env with your KIS and Gemini API credentials
-# Run full test suite with coverage
-pytest -v --cov=src --cov-report=term-missing
+# Test
+pytest -v --cov=src
-# Run a single test file
-pytest tests/test_risk.py -v
-# Run a single test by name
-pytest tests/test_brain.py -k "test_parse_valid_json" -v
-# Lint
-ruff check src/ tests/
-# Type check (strict mode, non-blocking in CI)
-mypy src/ --strict
-# Run the trading agent
+# Run (paper trading)
 python -m src.main --mode=paper
 # Docker
 docker compose up -d ouroboros # Run agent
 docker compose --profile test up test # Run tests in container
 ```
-## Architecture
+## Documentation
-Self-evolving AI trading agent for Korean stock markets (KIS API). The main loop in `src/main.py` orchestrates four components in a 60-second cycle per stock:
+- **[Workflow Guide](docs/workflow.md)** — Git workflow policy and agent-based development
+- **[Command Reference](docs/commands.md)** — Common failures, build commands, troubleshooting
+- **[Architecture](docs/architecture.md)** — System design, components, data flow
+- **[Context Tree](docs/context-tree.md)** — L1-L7 hierarchical memory system
+- **[Testing](docs/testing.md)** — Test structure, coverage requirements, writing tests
+- **[Agent Policies](docs/agents.md)** — Prime directives, constraints, prohibited actions
-1. **Broker** (`src/broker/kis_api.py`) — Async KIS API client with automatic OAuth token refresh, leaky-bucket rate limiter (10 RPS), and POST body hash-key signing. Uses a custom SSL context with disabled hostname verification for the VTS (virtual trading) endpoint due to a known certificate mismatch.
+## Core Principles
-2. **Brain** (`src/brain/gemini_client.py`) — Sends structured prompts to Google Gemini, parses JSON responses into `TradeDecision` objects. Forces HOLD when confidence < threshold (default 80). Falls back to safe HOLD on any parse/API error.
+1. **Safety First** — Risk manager is READ-ONLY and enforces circuit breakers
+2. **Test Everything** — 80% coverage minimum, all changes require tests
+3. **Issue-Driven Development** — All work goes through Gitea issues → feature branches → PRs
+4. **Agent Specialization** — Use dedicated agents for design, coding, testing, docs, review
-3. **Risk Manager** (`src/core/risk_manager.py`) — **READ-ONLY by policy** (see `docs/agents.md`). Circuit breaker halts all trading via `SystemExit` when daily P&L drops below -3.0%. Fat-finger check rejects orders exceeding 30% of available cash.
+## Project Structure
-4. **Evolution** (`src/evolution/optimizer.py`) — Analyzes high-confidence losing trades from SQLite, asks Gemini to generate new `BaseStrategy` subclasses, validates them by running the full pytest suite, and simulates PR creation.
+```
+src/
+├── broker/ # KIS API client (domestic + overseas)
+├── brain/ # Gemini AI decision engine
+├── core/ # Risk manager (READ-ONLY)
+├── evolution/ # Self-improvement optimizer
+├── markets/ # Market schedules and timezone handling
+├── db.py # SQLite trade logging
+├── main.py # Trading loop orchestrator
+└── config.py # Settings (from .env)
-**Data flow per cycle:** Fetch orderbook + balance → calculate P&L → get Gemini decision → validate with risk manager → execute order → log to SQLite (`src/db.py`).
+tests/ # 54 tests across 4 files
+docs/ # Extended documentation
+```
-## Key Constraints (from `docs/agents.md`)
+## Key Commands
-- `core/risk_manager.py` is **READ-ONLY**. Changes require human approval.
-- Circuit breaker threshold (-3.0%) may only be made stricter, never relaxed.
-- Fat-finger protection (30% max order size) must always be enforced.
-- Confidence < 80 **must** force HOLD — this rule cannot be weakened.
-- All code changes require corresponding tests. Coverage must stay >= 80%.
-- Generated strategies must pass the full test suite before activation.
+```bash
+pytest -v --cov=src # Run tests with coverage
+ruff check src/ tests/ # Lint
+mypy src/ --strict # Type check
-## Configuration
+python -m src.main --mode=paper # Paper trading
+python -m src.main --mode=live # Live trading (⚠️ real money)
-Pydantic Settings loaded from `.env` (see `.env.example`). Required vars: `KIS_APP_KEY`, `KIS_APP_SECRET`, `KIS_ACCOUNT_NO` (format `XXXXXXXX-XX`), `GEMINI_API_KEY`. Tests use in-memory SQLite (`DB_PATH=":memory:"`) and dummy credentials via `tests/conftest.py`.
+# Gitea workflow (requires tea CLI)
+YES="" ~/bin/tea issues create --repo jihoson/The-Ouroboros --title "..." --description "..."
+YES="" ~/bin/tea pulls create --head feature-branch --base main --title "..." --description "..."
+```
-## Test Structure
+## Markets Supported
-54 tests across four files. `asyncio_mode = "auto"` in pyproject.toml — async tests need no special decorator. The `settings` fixture in `conftest.py` provides safe defaults with test credentials and in-memory DB.
+- 🇰🇷 Korea (KRX)
+- 🇺🇸 United States (NASDAQ, NYSE, AMEX)
+- 🇯🇵 Japan (TSE)
+- 🇭🇰 Hong Kong (SEHK)
+- 🇨🇳 China (Shanghai, Shenzhen)
+- 🇻🇳 Vietnam (Hanoi, HCM)
-- `test_risk.py` (11) — Circuit breaker boundaries, fat-finger edge cases
-- `test_broker.py` (6) — Token lifecycle, rate limiting, hash keys, network errors
-- `test_brain.py` (18) — JSON parsing, confidence threshold, malformed responses, prompt construction
-- `test_market_schedule.py` (19) — Market open/close logic, timezone handling, DST, lunch breaks
+Markets auto-detected based on timezone and enabled in `ENABLED_MARKETS` env variable.
+## Critical Constraints
+⚠️ **Non-Negotiable Rules** (see [docs/agents.md](docs/agents.md)):
+- `src/core/risk_manager.py` is **READ-ONLY** — changes require human approval
+- Circuit breaker at -3.0% P&L — may only be made **stricter**
+- Fat-finger protection: max 30% of cash per order — always enforced
+- Confidence < 80 → force HOLD — cannot be weakened
+- All code changes → corresponding tests → coverage ≥ 80%
+## Contributing
+See [docs/workflow.md](docs/workflow.md) for the complete development process.
+**TL;DR:**
+1. Create issue in Gitea
+2. Create feature branch: `feature/issue-N-description`
+3. Implement with tests
+4. Open PR
+5. Merge after review

191
docs/architecture.md Normal file

@@ -0,0 +1,191 @@
# System Architecture
## Overview
Self-evolving AI trading agent for global stock markets via KIS (Korea Investment & Securities) API. The main loop in `src/main.py` orchestrates four components in a 60-second cycle per stock across multiple markets.
## Core Components
### 1. Broker (`src/broker/`)
**KISBroker** (`kis_api.py`) — Async KIS API client for domestic Korean market
- Automatic OAuth token refresh (valid for 24 hours)
- Leaky-bucket rate limiter (10 requests per second)
- POST body hash-key signing for order authentication
- Custom SSL context with disabled hostname verification for VTS (virtual trading) endpoint due to known certificate mismatch
**OverseasBroker** (`overseas.py`) — KIS overseas stock API wrapper
- Reuses KISBroker infrastructure (session, token, rate limiter) via composition
- Supports 9 global markets: US (NASDAQ/NYSE/AMEX), Japan, Hong Kong, China (Shanghai/Shenzhen), Vietnam (Hanoi/HCM)
- Different API endpoints for overseas price/balance/order operations
**Market Schedule** (`src/markets/schedule.py`) — Timezone-aware market management
- `MarketInfo` dataclass with timezone, trading hours, lunch breaks
- Automatic DST handling via `zoneinfo.ZoneInfo`
- `is_market_open()` checks weekends, trading hours, lunch breaks
- `get_open_markets()` returns currently active markets
- `get_next_market_open()` finds next market to open and when
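A minimal sketch of how such a timezone-aware check can be structured (the `MarketInfo` field names here are illustrative and may differ from the real dataclass):
```python
from dataclasses import dataclass
from datetime import datetime, time
from zoneinfo import ZoneInfo

@dataclass(frozen=True)
class MarketInfo:
    timezone: str                    # e.g. "Asia/Seoul"
    open_time: time                  # e.g. time(9, 0)
    close_time: time                 # e.g. time(15, 30)
    lunch_start: time | None = None  # None = no lunch break
    lunch_end: time | None = None

def is_market_open(market: MarketInfo, now: datetime) -> bool:
    """Check weekday, trading hours, and lunch break in the market's local time."""
    local = now.astimezone(ZoneInfo(market.timezone))  # DST handled by zoneinfo
    if local.weekday() >= 5:  # Saturday/Sunday
        return False
    t = local.time()
    if not (market.open_time <= t <= market.close_time):
        return False
    if market.lunch_start and market.lunch_end and market.lunch_start <= t < market.lunch_end:
        return False
    return True
```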
### 2. Brain (`src/brain/gemini_client.py`)
**GeminiClient** — AI decision engine powered by Google Gemini
- Constructs structured prompts from market data
- Parses JSON responses into `TradeDecision` objects (`action`, `confidence`, `rationale`)
- Forces HOLD when confidence < threshold (default 80)
- Falls back to safe HOLD on any parse/API error
- Handles markdown-wrapped JSON, malformed responses, invalid actions
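A sketch of that parse-and-fallback behavior, consistent with the tests in `docs/testing.md` (the real logic lives inside `GeminiClient`; the standalone function here is illustrative):
```python
import json
from dataclasses import dataclass

@dataclass
class TradeDecision:
    action: str        # BUY | SELL | HOLD
    confidence: int    # 0-100
    rationale: str

def parse_response(raw: str, threshold: int = 80) -> TradeDecision:
    """Parse a Gemini reply; any failure degrades to a safe HOLD."""
    try:
        # Strip markdown fences like ```json ... ``` before parsing
        text = raw.strip().removeprefix("```json").removesuffix("```").strip()
        data = json.loads(text)
        action = data["action"]
        confidence = int(data["confidence"])
        if action not in ("BUY", "SELL", "HOLD"):
            raise ValueError(f"invalid action: {action}")
    except (json.JSONDecodeError, KeyError, ValueError, TypeError):
        return TradeDecision("HOLD", 0, "parse error - safe fallback")
    # Low-confidence decisions are forced to HOLD but keep the reported confidence
    if confidence < threshold:
        return TradeDecision("HOLD", confidence, data.get("rationale", ""))
    return TradeDecision(action, confidence, data.get("rationale", ""))
```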
### 3. Risk Manager (`src/core/risk_manager.py`)
**RiskManager** — Safety circuit breaker and order validation
⚠️ **READ-ONLY by policy** (see [`docs/agents.md`](./agents.md))
- **Circuit Breaker**: Halts all trading via `SystemExit` when daily P&L drops below -3.0%
- Threshold may only be made stricter, never relaxed
- Calculated as `(total_eval - purchase_total) / purchase_total * 100`
- **Fat-Finger Protection**: Rejects orders exceeding 30% of available cash
- Must always be enforced, cannot be disabled
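A sketch of both checks (the real code is a method on `RiskManager`; the exception name follows the tests in `docs/testing.md`, and subclassing `SystemExit` to reconcile the halt behavior described above is an assumption):
```python
class CircuitBreakerTripped(SystemExit):
    """Trips the breaker and halts the process (manual restart required)."""

def validate_order(
    current_pnl_pct: float,
    order_amount: float,
    total_cash: float,
    max_loss_pct: float = 3.0,
    max_order_pct: float = 30.0,
) -> None:
    # Circuit breaker: trips at or below the -3.0% daily P&L threshold
    if current_pnl_pct <= -max_loss_pct:
        raise CircuitBreakerTripped(f"Circuit breaker tripped: P&L {current_pnl_pct:.2f}%")
    # Fat-finger guard: a single order may not exceed 30% of available cash
    if order_amount > total_cash * max_order_pct / 100:
        raise ValueError(f"Order {order_amount:,.0f} exceeds {max_order_pct:.0f}% of cash")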
### 4. Evolution (`src/evolution/optimizer.py`)
**StrategyOptimizer** — Self-improvement loop
- Analyzes high-confidence losing trades from SQLite
- Asks Gemini to generate new `BaseStrategy` subclasses
- Validates generated strategies by running full pytest suite
- Simulates PR creation for human review
- Only activates strategies that pass all tests
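In outline, one pass of that loop might look like the following (a sketch only; `gemini.generate_strategy` and the output path are hypothetical stand-ins for the real internals):
```python
import sqlite3
import subprocess

def evolve_once(conn: sqlite3.Connection, gemini, min_confidence: int = 80) -> bool:
    """One self-improvement pass: learn from confident losers, gate on tests."""
    losers = conn.execute(
        "SELECT stock_code, rationale, pnl FROM trades "
        "WHERE confidence >= ? AND pnl < 0 ORDER BY pnl LIMIT 20",
        (min_confidence,),
    ).fetchall()
    if not losers:
        return False
    # Ask Gemini for a new BaseStrategy subclass targeting the failure patterns
    code = gemini.generate_strategy(losing_trades=losers)  # hypothetical helper
    with open("src/evolution/generated/candidate.py", "w") as f:
        f.write(code)
    # Gate: the full pytest suite must pass before the candidate is activated
    return subprocess.run(["pytest", "-q"], capture_output=True).returncode == 0
```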
## Data Flow
```
┌─────────────────────────────────────────────────────────────┐
│ Main Loop (60s cycle per stock, per market) │
└─────────────────────────────────────────────────────────────┘
┌──────────────────────────────────┐
│ Market Schedule Check │
│ - Get open markets │
│ - Filter by enabled markets │
│ - Wait if all closed │
└──────────────────┬────────────────┘
┌──────────────────────────────────┐
│ Broker: Fetch Market Data │
│ - Domestic: orderbook + balance │
│ - Overseas: price + balance │
└──────────────────┬────────────────┘
┌──────────────────────────────────┐
│ Calculate P&L │
│ pnl_pct = (eval - cost) / cost │
└──────────────────┬────────────────┘
┌──────────────────────────────────┐
│ Brain: Get Decision │
│ - Build prompt with market data │
│ - Call Gemini API │
│ - Parse JSON response │
│ - Return TradeDecision │
└──────────────────┬────────────────┘
┌──────────────────────────────────┐
│ Risk Manager: Validate Order │
│ - Check circuit breaker │
│ - Check fat-finger limit │
│ - Raise if validation fails │
└──────────────────┬────────────────┘
┌──────────────────────────────────┐
│ Broker: Execute Order │
│ - Domestic: send_order() │
│ - Overseas: send_overseas_order() │
└──────────────────┬────────────────┘
┌──────────────────────────────────┐
│ Database: Log Trade │
│ - SQLite (data/trades.db) │
│ - Track: action, confidence, │
│ rationale, market, exchange │
└───────────────────────────────────┘
```
## Database Schema
**SQLite** (`src/db.py`)
```sql
CREATE TABLE trades (
id INTEGER PRIMARY KEY AUTOINCREMENT,
timestamp TEXT NOT NULL,
stock_code TEXT NOT NULL,
action TEXT NOT NULL, -- BUY | SELL | HOLD
confidence INTEGER NOT NULL, -- 0-100
rationale TEXT,
quantity INTEGER,
price REAL,
pnl REAL DEFAULT 0.0,
market TEXT DEFAULT 'KR', -- KR | US_NASDAQ | JP | etc.
exchange_code TEXT DEFAULT 'KRX' -- KRX | NASD | NYSE | etc.
);
```
Auto-migration: Adds `market` and `exchange_code` columns if missing for backward compatibility.
## Configuration
**Pydantic Settings** (`src/config.py`)
Loaded from `.env` file:
```bash
# Required
KIS_APP_KEY=your_app_key
KIS_APP_SECRET=your_app_secret
KIS_ACCOUNT_NO=XXXXXXXX-XX
GEMINI_API_KEY=your_gemini_key
# Optional
MODE=paper # paper | live
DB_PATH=data/trades.db
CONFIDENCE_THRESHOLD=80
MAX_LOSS_PCT=3.0
MAX_ORDER_PCT=30.0
ENABLED_MARKETS=KR,US_NASDAQ # Comma-separated market codes
```
Tests use in-memory SQLite (`DB_PATH=":memory:"`) and dummy credentials via `tests/conftest.py`.
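A sketch of how `src/config.py` might declare these with pydantic-settings (field names follow the listing above; the exact class layout is an assumption):
```python
from pydantic_settings import BaseSettings, SettingsConfigDict

class Settings(BaseSettings):
    model_config = SettingsConfigDict(env_file=".env")

    # Required credentials
    KIS_APP_KEY: str
    KIS_APP_SECRET: str
    KIS_ACCOUNT_NO: str  # format XXXXXXXX-XX
    GEMINI_API_KEY: str

    # Optional knobs with safe defaults
    MODE: str = "paper"
    DB_PATH: str = "data/trades.db"
    CONFIDENCE_THRESHOLD: int = 80
    MAX_LOSS_PCT: float = 3.0
    MAX_ORDER_PCT: float = 30.0
    ENABLED_MARKETS: str = "KR"
```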
## Error Handling
### Connection Errors (Broker API)
- Retry with exponential backoff (2^attempt seconds)
- Max 3 retries per stock
- After exhaustion, skip stock and continue with next
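A sketch of that retry loop (`broker.get_orderbook` stands in for whichever broker call failed):
```python
import asyncio

async def fetch_with_retry(broker, stock_code: str, max_retries: int = 3):
    """Retry transient connection errors with exponential backoff."""
    for attempt in range(max_retries):
        try:
            return await broker.get_orderbook(stock_code)  # illustrative call
        except ConnectionError:
            if attempt == max_retries - 1:
                return None  # retries exhausted: skip this stock, continue loop
            await asyncio.sleep(2 ** attempt)  # 2^attempt seconds
```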
### API Quota Errors (Gemini)
- Return safe HOLD decision with confidence=0
- Log error but don't crash
- Agent continues trading on next cycle
### Circuit Breaker Tripped
- Immediately halt via `SystemExit`
- Log critical message
- Requires manual intervention to restart
### Market Closed
- Wait until next market opens
- Use `get_next_market_open()` to calculate wait time
- Sleep until market open time
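Sketched with `get_next_market_open()` from `src/markets/schedule.py` (the `(market, datetime)` return shape is assumed here):
```python
import asyncio
from datetime import UTC, datetime

from src.markets.schedule import get_next_market_open

async def wait_for_next_market(enabled_markets: list[str]) -> None:
    """Sleep until the next enabled market opens."""
    market, open_at = get_next_market_open(enabled_markets)  # assumed return shape
    seconds = max((open_at - datetime.now(UTC)).total_seconds(), 0)
    print(f"All markets closed; sleeping {seconds:.0f}s until {market} opens")
    await asyncio.sleep(seconds)
```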

156
docs/commands.md Normal file

@@ -0,0 +1,156 @@
# Command Reference
## Common Command Failures
**Critical: Learn from failures. Never repeat the same failed command without modification.**
### tea CLI (Gitea Command Line Tool)
#### ❌ TTY Error - Interactive Confirmation Fails
```bash
~/bin/tea issues create --repo X --title "Y" --description "Z"
# Error: huh: could not open a new TTY: open /dev/tty: no such device or address
```
**💡 Reason:** tea tries to open `/dev/tty` for interactive confirmation prompts, which is unavailable in non-interactive environments.
**✅ Solution:** Use `YES=""` environment variable to bypass confirmation
```bash
YES="" ~/bin/tea issues create --repo jihoson/The-Ouroboros --title "Title" --description "Body"
YES="" ~/bin/tea issues edit <number> --repo jihoson/The-Ouroboros --description "Updated body"
YES="" ~/bin/tea pulls create --repo jihoson/The-Ouroboros --head feature-branch --base main --title "Title" --description "Body"
```
**📝 Notes:**
- Always set default login: `~/bin/tea login default local`
- Use `--repo jihoson/The-Ouroboros` when outside repo directory
- tea is preferred over direct Gitea API calls for consistency
#### ❌ Wrong Parameter Name
```bash
tea issues create --body "text"
# Error: flag provided but not defined: -body
```
**💡 Reason:** Parameter is `--description`, not `--body`.
**✅ Solution:** Use correct parameter name
```bash
YES="" ~/bin/tea issues create --description "text"
```
### Gitea API (Direct HTTP Calls)
#### ❌ Wrong Hostname
```bash
curl http://gitea.local:3000/api/v1/...
# Error: Could not resolve host: gitea.local
```
**💡 Reason:** Gitea instance runs on `localhost:3000`, not `gitea.local`.
**✅ Solution:** Use correct hostname (but prefer tea CLI)
```bash
curl http://localhost:3000/api/v1/repos/jihoson/The-Ouroboros/issues \
-H "Authorization: token $GITEA_TOKEN" \
-H "Content-Type: application/json" \
-d '{"title":"...", "body":"..."}'
```
**📝 Notes:**
- Prefer `tea` CLI over direct API calls
- Only use curl for operations tea doesn't support
### Git Commands
#### ❌ User Not Configured
```bash
git commit -m "message"
# Error: Author identity unknown
```
**💡 Reason:** Git user.name and user.email not set.
**✅ Solution:** Configure git user
```bash
git config user.name "agentson"
git config user.email "agentson@localhost"
```
#### ❌ Permission Denied on Push
```bash
git push origin branch
# Error: User permission denied for writing
```
**💡 Reason:** Repository access token lacks write permissions or user lacks repo write access.
**✅ Solution:**
1. Verify user has write access to repository (admin grants this)
2. Ensure git credential has correct token with `write:repository` scope
3. Check remote URL uses correct authentication
### Python/Pytest
#### ❌ Module Import Error
```bash
pytest tests/test_foo.py
# ModuleNotFoundError: No module named 'src'
```
**💡 Reason:** Package not installed in development mode.
**✅ Solution:** Install package with dev dependencies
```bash
pip install -e ".[dev]"
```
#### ❌ Async Test Hangs
```python
async def test_something(): # Hangs forever
result = await async_function()
```
**💡 Reason:** Missing pytest-asyncio or wrong configuration.
**✅ Solution:** Already configured in pyproject.toml
```toml
[tool.pytest.ini_options]
asyncio_mode = "auto"
```
No decorator needed for async tests.
## Build & Test Commands
```bash
# Install all dependencies (production + dev)
pip install -e ".[dev]"
# Run full test suite with coverage
pytest -v --cov=src --cov-report=term-missing
# Run a single test file
pytest tests/test_risk.py -v
# Run a single test by name
pytest tests/test_brain.py -k "test_parse_valid_json" -v
# Lint
ruff check src/ tests/
# Type check (strict mode, non-blocking in CI)
mypy src/ --strict
# Run the trading agent
python -m src.main --mode=paper
# Docker
docker compose up -d ouroboros # Run agent
docker compose --profile test up test # Run tests in container
```
## Environment Setup
```bash
# Create .env file from example
cp .env.example .env
# Edit .env with your credentials
# Required: KIS_APP_KEY, KIS_APP_SECRET, KIS_ACCOUNT_NO, GEMINI_API_KEY
# Verify configuration
python -c "from src.config import Settings; print(Settings())"
```

338
docs/context-tree.md Normal file

@@ -0,0 +1,338 @@
# Context Tree: Multi-Layered Memory Management
The context tree implements **Pillar 2** of The Ouroboros: hierarchical memory management across 7 time horizons, from real-time market data to generational trading wisdom.
## Overview
Instead of a flat memory structure, The Ouroboros maintains a **7-tier context tree** where each layer represents a different time horizon and level of abstraction:
```
L1 (Legacy) ← Cumulative wisdom across generations
L2 (Annual) ← Yearly performance metrics
L3 (Quarterly) ← Quarterly strategy adjustments
L4 (Monthly) ← Monthly portfolio rebalancing
L5 (Weekly) ← Weekly stock selection
L6 (Daily) ← Daily trade logs
L7 (Real-time) ← Live market data
```
Data flows **bottom-up**: real-time trades aggregate into daily summaries, which roll up to weekly, then monthly, quarterly, annual, and finally into permanent legacy knowledge.
## The 7 Layers
### L7: Real-time
**Retention**: 7 days
**Timeframe format**: `YYYY-MM-DD` (same-day)
**Content**: Current positions, live quotes, orderbook snapshots, tick-by-tick volatility
**Use cases**:
- Immediate execution decisions
- Stop-loss triggers
- Real-time P&L tracking
**Example keys**:
- `current_position_{stock_code}`: Current holdings
- `live_price_{stock_code}`: Latest quote
- `volatility_5m_{stock_code}`: 5-minute rolling volatility
### L6: Daily
**Retention**: 90 days
**Timeframe format**: `YYYY-MM-DD`
**Content**: Daily trade logs, end-of-day P&L, market summaries, decision accuracy
**Use cases**:
- Daily performance review
- Identify patterns in recent trading
- Backtest strategy adjustments
**Example keys**:
- `total_pnl`: Daily profit/loss
- `trade_count`: Number of trades
- `win_rate`: Percentage of profitable trades
- `avg_confidence`: Average Gemini confidence
### L5: Weekly
**Retention**: 1 year
**Timeframe format**: `YYYY-Www` (ISO week, e.g., `2026-W06`)
**Content**: Weekly stock selection, sector rotation, volatility regime classification
**Use cases**:
- Weekly strategy adjustment
- Sector momentum tracking
- Identify hot/cold markets
**Example keys**:
- `weekly_pnl`: Week's total P&L
- `top_performers`: Best-performing stocks
- `sector_focus`: Dominant sectors
- `avg_confidence`: Weekly average confidence
### L4: Monthly
**Retention**: 2 years
**Timeframe format**: `YYYY-MM`
**Content**: Monthly portfolio rebalancing, risk exposure analysis, drawdown recovery
**Use cases**:
- Monthly performance reporting
- Risk exposure adjustment
- Correlation analysis
**Example keys**:
- `monthly_pnl`: Month's total P&L
- `sharpe_ratio`: Risk-adjusted return
- `max_drawdown`: Largest peak-to-trough decline
- `rebalancing_notes`: Manual insights
### L3: Quarterly
**Retention**: 3 years
**Timeframe format**: `YYYY-Qn` (e.g., `2026-Q1`)
**Content**: Quarterly strategy pivots, market phase detection (bull/bear/sideways), macro regime changes
**Use cases**:
- Strategic pivots (e.g., growth → value)
- Macro regime classification
- Long-term pattern recognition
**Example keys**:
- `quarterly_pnl`: Quarter's total P&L
- `market_phase`: Bull/Bear/Sideways
- `strategy_adjustments`: Major changes made
- `lessons_learned`: Key insights
### L2: Annual
**Retention**: 10 years
**Timeframe format**: `YYYY`
**Content**: Yearly returns, Sharpe ratio, max drawdown, win rate, strategy effectiveness
**Use cases**:
- Annual performance review
- Multi-year trend analysis
- Strategy benchmarking
**Example keys**:
- `annual_pnl`: Year's total P&L
- `sharpe_ratio`: Annual risk-adjusted return
- `win_rate`: Yearly win percentage
- `best_strategy`: Most successful strategy
- `worst_mistake`: Biggest lesson learned
### L1: Legacy
**Retention**: Forever
**Timeframe format**: `LEGACY` (single timeframe)
**Content**: Cumulative trading history, core principles, generational wisdom
**Use cases**:
- Long-term philosophy
- Foundational rules
- Lessons that transcend market cycles
**Example keys**:
- `total_pnl`: All-time profit/loss
- `years_traded`: Trading longevity
- `avg_annual_pnl`: Long-term average return
- `core_principles`: Immutable trading rules
- `greatest_trades`: Hall of fame
- `never_again`: Permanent warnings
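The timeframe formats above can be derived mechanically from a calendar date; a small illustrative helper (not part of `src/context/`):
```python
from datetime import date

def timeframes_for(d: date) -> dict[str, str]:
    """Timeframe strings for each layer, for a given calendar date."""
    iso = d.isocalendar()  # ISO year/week avoid off-by-one at year boundaries
    return {
        "L7_REALTIME": d.isoformat(),                            # 2026-02-04
        "L6_DAILY": d.isoformat(),                               # 2026-02-04
        "L5_WEEKLY": f"{iso.year}-W{iso.week:02d}",              # 2026-W06
        "L4_MONTHLY": d.strftime("%Y-%m"),                       # 2026-02
        "L3_QUARTERLY": f"{d.year}-Q{(d.month - 1) // 3 + 1}",   # 2026-Q1
        "L2_ANNUAL": str(d.year),                                # 2026
        "L1_LEGACY": "LEGACY",
    }

# e.g. timeframes_for(date(2026, 2, 4))["L5_WEEKLY"] == "2026-W06"
```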
## Usage
### Setting Context
```python
from src.context import ContextLayer, ContextStore
from src.db import init_db
conn = init_db("data/ouroboros.db")
store = ContextStore(conn)
# Store daily P&L
store.set_context(
layer=ContextLayer.L6_DAILY,
timeframe="2026-02-04",
key="total_pnl",
value=1234.56
)
# Store weekly insight
store.set_context(
layer=ContextLayer.L5_WEEKLY,
timeframe="2026-W06",
key="top_performers",
value=["005930", "000660", "035720"] # JSON-serializable
)
# Store legacy wisdom
store.set_context(
layer=ContextLayer.L1_LEGACY,
timeframe="LEGACY",
key="core_principles",
value=[
"Cut losses fast",
"Let winners run",
"Never average down on losing positions"
]
)
```
### Retrieving Context
```python
# Get a specific value
pnl = store.get_context(ContextLayer.L6_DAILY, "2026-02-04", "total_pnl")
# Returns: 1234.56
# Get all keys for a timeframe
daily_summary = store.get_all_contexts(ContextLayer.L6_DAILY, "2026-02-04")
# Returns: {"total_pnl": 1234.56, "trade_count": 10, "win_rate": 60.0, ...}
# Get all data for a layer (any timeframe)
all_daily = store.get_all_contexts(ContextLayer.L6_DAILY)
# Returns: {"total_pnl": 1234.56, "trade_count": 10, ...} (latest timeframes first)
# Get the latest timeframe
latest = store.get_latest_timeframe(ContextLayer.L6_DAILY)
# Returns: "2026-02-04"
```
### Automatic Aggregation
The `ContextAggregator` rolls up data from lower to higher layers:
```python
from src.context.aggregator import ContextAggregator
aggregator = ContextAggregator(conn)
# Aggregate daily metrics from trades
aggregator.aggregate_daily_from_trades("2026-02-04")
# Roll up weekly from daily
aggregator.aggregate_weekly_from_daily("2026-W06")
# Roll up all layers at once (bottom-up)
aggregator.run_all_aggregations()
```
**Aggregation schedule** (recommended):
- **L7 → L6**: Every midnight (daily rollup)
- **L6 → L5**: Every Sunday (weekly rollup)
- **L5 → L4**: First day of each month (monthly rollup)
- **L4 → L3**: First day of quarter (quarterly rollup)
- **L3 → L2**: January 1st (annual rollup)
- **L2 → L1**: On demand (major milestones)
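One way to drive that schedule from a single nightly job (a sketch; the project may equally use one cron entry per rollup):
```python
from datetime import date

def due_rollups(today: date) -> list[str]:
    """Decide which rollups to run when the nightly job fires."""
    jobs = ["daily"]                      # L7 -> L6 runs every night
    if today.weekday() == 6:              # Sunday
        jobs.append("weekly")             # L6 -> L5
    if today.day == 1:
        jobs.append("monthly")            # L5 -> L4
        if today.month in (1, 4, 7, 10):  # first day of a quarter
            jobs.append("quarterly")      # L4 -> L3
        if today.month == 1:              # January 1st
            jobs.append("annual")         # L3 -> L2
    return jobs                           # L2 -> L1 stays on-demand
```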
### Context Cleanup
Expired contexts are automatically deleted based on retention policies:
```python
# Manual cleanup
deleted = store.cleanup_expired_contexts()
# Returns: {ContextLayer.L7_REALTIME: 42, ContextLayer.L6_DAILY: 15, ...}
```
**Retention policies** (defined in `src/context/layer.py`):
- L1: Forever
- L2: 10 years
- L3: 3 years
- L4: 2 years
- L5: 1 year
- L6: 90 days
- L7: 7 days
## Integration with Gemini Brain
The context tree provides hierarchical memory for decision-making:
```python
from src.brain.gemini_client import GeminiClient
# Build prompt with multi-layer context
def build_enhanced_prompt(stock_code: str, store: ContextStore) -> str:
# L7: Real-time data
current_price = store.get_context(ContextLayer.L7_REALTIME, "2026-02-04", f"live_price_{stock_code}")
# L6: Recent daily performance
yesterday_pnl = store.get_context(ContextLayer.L6_DAILY, "2026-02-03", "total_pnl")
# L5: Weekly trend
weekly_data = store.get_all_contexts(ContextLayer.L5_WEEKLY, "2026-W06")
# L1: Core principles
principles = store.get_context(ContextLayer.L1_LEGACY, "LEGACY", "core_principles")
return f"""
Analyze {stock_code} for trading decision.
Current price: {current_price}
Yesterday's P&L: {yesterday_pnl}
This week: {weekly_data}
Core principles:
{chr(10).join(f'- {p}' for p in principles)}
Decision (BUY/SELL/HOLD):
"""
```
## Database Schema
```sql
-- Context storage
CREATE TABLE contexts (
id INTEGER PRIMARY KEY AUTOINCREMENT,
layer TEXT NOT NULL, -- L1_LEGACY, L2_ANNUAL, ..., L7_REALTIME
timeframe TEXT NOT NULL, -- "LEGACY", "2026", "2026-Q1", "2026-02", "2026-W06", "2026-02-04"
key TEXT NOT NULL, -- "total_pnl", "win_rate", "core_principles", etc.
value TEXT NOT NULL, -- JSON-serialized value
created_at TEXT NOT NULL, -- ISO 8601 timestamp
updated_at TEXT NOT NULL, -- ISO 8601 timestamp
UNIQUE(layer, timeframe, key)
);
-- Layer metadata
CREATE TABLE context_metadata (
layer TEXT PRIMARY KEY,
description TEXT NOT NULL,
retention_days INTEGER, -- NULL = keep forever
aggregation_source TEXT -- Parent layer for rollup
);
-- Indices for fast queries
CREATE INDEX idx_contexts_layer ON contexts(layer);
CREATE INDEX idx_contexts_timeframe ON contexts(timeframe);
CREATE INDEX idx_contexts_updated ON contexts(updated_at);
```
## Best Practices
1. **Write to leaf layers only** — Never manually write to L1-L5; let aggregation populate them
2. **Aggregate regularly** — Schedule aggregation jobs to keep higher layers fresh
3. **Query specific timeframes** — Use `get_context(layer, timeframe, key)` for precise retrieval
4. **Clean up periodically** — Run `cleanup_expired_contexts()` weekly to free space
5. **Preserve L1 forever** — Legacy wisdom should never expire
6. **Use JSON-serializable values** — Store dicts, lists, strings, numbers (not custom objects)
## Testing
See `tests/test_context.py` for comprehensive test coverage (18 tests, 100% coverage on context modules).
```bash
pytest tests/test_context.py -v
```
## References
- **Implementation**: `src/context/`
- `layer.py`: Layer definitions and metadata
- `store.py`: CRUD operations
- `aggregator.py`: Bottom-up aggregation logic
- **Database**: `src/db.py` (table initialization)
- **Tests**: `tests/test_context.py`
- **Related**: Pillar 2 (Multi-layered Context Management)

213
docs/testing.md Normal file

@@ -0,0 +1,213 @@
# Testing Guidelines
## Test Structure
**54 tests** across four files. `asyncio_mode = "auto"` in pyproject.toml — async tests need no special decorator.
The `settings` fixture in `conftest.py` provides safe defaults with test credentials and in-memory DB.
### Test Files
#### `tests/test_risk.py` (11 tests)
- Circuit breaker boundaries
- Fat-finger edge cases
- P&L calculation edge cases
- Order validation logic
**Example:**
```python
import pytest

def test_circuit_breaker_exact_threshold(risk_manager):
"""Circuit breaker should trip at exactly -3.0%."""
with pytest.raises(CircuitBreakerTripped):
risk_manager.validate_order(
current_pnl_pct=-3.0,
order_amount=1000,
total_cash=10000
)
```
#### `tests/test_broker.py` (6 tests)
- OAuth token lifecycle
- Rate limiting enforcement
- Hash key generation
- Network error handling
- SSL context configuration
**Example:**
```python
import time

async def test_rate_limiter(broker):
"""Rate limiter should delay requests to stay under 10 RPS."""
start = time.monotonic()
for _ in range(15): # 15 requests
await broker._rate_limiter.acquire()
elapsed = time.monotonic() - start
assert elapsed >= 1.0 # Should take at least 1 second
```
#### `tests/test_brain.py` (18 tests)
- Valid JSON parsing
- Markdown-wrapped JSON handling
- Malformed JSON fallback
- Missing fields handling
- Invalid action validation
- Confidence threshold enforcement
- Empty response handling
- Prompt construction for different markets
**Example:**
```python
async def test_confidence_below_threshold_forces_hold(brain):
"""Decisions below confidence threshold should force HOLD."""
decision = brain.parse_response('{"action":"BUY","confidence":70,"rationale":"test"}')
assert decision.action == "HOLD"
assert decision.confidence == 70
```
#### `tests/test_market_schedule.py` (19 tests)
- Market open/close logic
- Timezone handling (UTC, Asia/Seoul, America/New_York, etc.)
- DST (Daylight Saving Time) transitions
- Weekend handling
- Lunch break logic
- Multiple market filtering
- Next market open calculation
**Example:**
```python
from datetime import datetime
from zoneinfo import ZoneInfo

from src.markets.schedule import MARKETS, is_market_open

def test_is_market_open_during_trading_hours():
    """Market should be open during regular trading hours."""
    # KRX: 9:00-15:30 KST, no lunch break
    market = MARKETS["KR"]
    trading_time = datetime(2026, 2, 3, 10, 0, tzinfo=ZoneInfo("Asia/Seoul"))  # Tuesday 10:00
assert is_market_open(market, trading_time) is True
```
## Coverage Requirements
**Minimum coverage: 80%**
Check coverage:
```bash
pytest -v --cov=src --cov-report=term-missing
```
Expected output:
```
Name Stmts Miss Cover Missing
-----------------------------------------------------------
src/brain/gemini_client.py 85 5 94% 165-169
src/broker/kis_api.py 120 12 90% ...
src/core/risk_manager.py 35 2 94% ...
src/db.py 25 1 96% ...
src/main.py 150 80 47% (excluded from CI)
src/markets/schedule.py 95 3 97% ...
-----------------------------------------------------------
TOTAL 510 103 80%
```
**Note:** `main.py` has lower coverage as it contains the main loop which is tested via integration/manual testing.
## Test Configuration
### `pyproject.toml`
```toml
[tool.pytest.ini_options]
asyncio_mode = "auto"
testpaths = ["tests"]
python_files = ["test_*.py"]
```
### `tests/conftest.py`
```python
@pytest.fixture
def settings() -> Settings:
"""Provide test settings with safe defaults."""
return Settings(
KIS_APP_KEY="test_key",
KIS_APP_SECRET="test_secret",
KIS_ACCOUNT_NO="12345678-01",
GEMINI_API_KEY="test_gemini_key",
MODE="paper",
DB_PATH=":memory:", # In-memory SQLite
CONFIDENCE_THRESHOLD=80,
ENABLED_MARKETS="KR",
)
```
## Writing New Tests
### Naming Convention
- Test files: `test_<module>.py`
- Test functions: `test_<feature>_<scenario>()`
- Use descriptive names that explain what is being tested
### Good Test Example
```python
from unittest.mock import AsyncMock, patch

async def test_send_order_with_market_price(broker, settings):
"""Market orders should use price=0 and ORD_DVSN='01'."""
# Arrange
stock_code = "005930"
order_type = "BUY"
quantity = 10
# Act
with patch.object(broker._session, 'post') as mock_post:
mock_post.return_value.__aenter__.return_value.status = 200
mock_post.return_value.__aenter__.return_value.json = AsyncMock(
return_value={"rt_cd": "0", "msg1": "OK"}
)
await broker.send_order(stock_code, order_type, quantity, price=0)
# Assert
call_args = mock_post.call_args
body = call_args.kwargs['json']
assert body['ORD_DVSN'] == '01' # Market order
assert body['ORD_UNPR'] == '0' # Price 0
```
### Test Checklist
- [ ] Test passes in isolation (`pytest tests/test_foo.py::test_bar -v`)
- [ ] Test has clear docstring explaining what it tests
- [ ] Arrange-Act-Assert structure
- [ ] Uses appropriate fixtures from conftest.py
- [ ] Mocks external dependencies (API calls, network)
- [ ] Tests edge cases and error conditions
- [ ] Doesn't rely on test execution order
## Running Tests
```bash
# All tests
pytest -v
# Specific file
pytest tests/test_risk.py -v
# Specific test
pytest tests/test_brain.py::test_parse_valid_json -v
# With coverage
pytest -v --cov=src --cov-report=term-missing
# Stop on first failure
pytest -x
# Verbose output with print statements
pytest -v -s
```
## CI/CD Integration
Tests run automatically on:
- Every commit to feature branches
- Every PR to main
- Scheduled daily runs
**Blocking conditions:**
- Test failures → PR blocked
- Coverage < 80% → PR blocked (warning only for main.py)
**Non-blocking:**
- `mypy --strict` errors (type hints encouraged but not enforced)
- `ruff check` warnings (must be acknowledged)

75
docs/workflow.md Normal file

@@ -0,0 +1,75 @@
# Development Workflow
## Git Workflow Policy
**CRITICAL: All code changes MUST follow this workflow. Direct pushes to `main` are ABSOLUTELY PROHIBITED.**
1. **Create Gitea Issue First** — All features, bug fixes, and policy changes require a Gitea issue before any code is written
2. **Create Feature Branch** — Branch from `main` using format `feature/issue-{N}-{short-description}`
3. **Implement Changes** — Write code, tests, and documentation on the feature branch
4. **Create Pull Request** — Submit PR to `main` branch referencing the issue number
5. **Review & Merge** — After approval, merge via PR (squash or merge commit)
**Never commit directly to `main`.** This policy applies to all changes, no exceptions.
## Agent Workflow
**Modern AI development leverages specialized agents for concurrent, efficient task execution.**
### Parallel Execution Strategy
Use **git worktree** or **subagents** (via the Task tool) to handle multiple requirements simultaneously:
- Each task runs in independent context
- Parallel branches for concurrent features
- Isolated test environments prevent interference
- Faster iteration with distributed workload
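For the worktree route, a typical session looks like this (directory names and the second branch are illustrative):
```bash
# Two features developed in parallel, each in its own worktree + branch
git worktree add ../ouroboros-issue-15 -b feature/issue-15-context-tree main
git worktree add ../ouroboros-issue-16 -b feature/issue-16-backtester main

# Each directory is an independent checkout; test runs don't interfere
(cd ../ouroboros-issue-15 && pytest -q)

# Clean up once the branch is merged
git worktree remove ../ouroboros-issue-15
```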
### Specialized Agent Roles
Deploy task-specific agents as needed instead of handling everything in the main conversation:
- **Conversational Agent** (main) — Interface with user, coordinate other agents
- **Ticket Management Agent** — Create/update Gitea issues, track task status
- **Design Agent** — Architectural planning, RFC documents, API design
- **Code Writing Agent** — Implementation following specs
- **Testing Agent** — Write tests, verify coverage, run test suites
- **Documentation Agent** — Update docs, docstrings, CLAUDE.md, README
- **Review Agent** — Code review, lint checks, security audits
- **Custom Agents** — Created dynamically for specialized tasks (performance analysis, migration scripts, etc.)
### When to Use Agents
**Prefer spawning specialized agents for:**
1. Complex multi-file changes requiring exploration
2. Tasks with clear, isolated scope (e.g., "write tests for module X")
3. Parallel work streams (feature A + bugfix B simultaneously)
4. Long-running analysis (codebase search, dependency audit)
5. Tasks requiring different contexts (multiple git worktrees)
**Use the main conversation for:**
1. User interaction and clarification
2. Quick single-file edits
3. Coordinating agent work
4. High-level decision making
### Implementation
```python
# Example: Spawn parallel test and documentation agents
task_tool(
subagent_type="general-purpose",
prompt="Write comprehensive tests for src/markets/schedule.py",
description="Write schedule tests"
)
task_tool(
subagent_type="general-purpose",
prompt="Update README.md with global market feature documentation",
description="Update README"
)
```
Use `run_in_background=True` for independent tasks that don't block subsequent work.

10
src/context/__init__.py Normal file

@@ -0,0 +1,10 @@
"""Multi-layered context management system for trading decisions.
The context tree implements Pillar 2: hierarchical memory management across
7 time horizons, from real-time quotes to generational wisdom.
"""
from src.context.layer import ContextLayer
from src.context.store import ContextStore
__all__ = ["ContextLayer", "ContextStore"]

250
src/context/aggregator.py Normal file

@@ -0,0 +1,250 @@
"""Context aggregation logic for rolling up data from lower to higher layers."""
from __future__ import annotations

import json
import sqlite3
from collections import defaultdict
from datetime import UTC, datetime
from typing import Any

from src.context.layer import ContextLayer
from src.context.store import ContextStore
class ContextAggregator:
"""Aggregates context data from lower (finer) to higher (coarser) layers."""
def __init__(self, conn: sqlite3.Connection) -> None:
"""Initialize the aggregator with a database connection."""
self.conn = conn
self.store = ContextStore(conn)
def aggregate_daily_from_trades(self, date: str | None = None) -> None:
"""Aggregate L6 (daily) context from trades table.
Args:
date: Date in YYYY-MM-DD format. If None, uses today.
"""
if date is None:
date = datetime.now(UTC).date().isoformat()
# Calculate daily metrics from trades
cursor = self.conn.execute(
"""
SELECT
COUNT(*) as trade_count,
SUM(CASE WHEN action = 'BUY' THEN 1 ELSE 0 END) as buys,
SUM(CASE WHEN action = 'SELL' THEN 1 ELSE 0 END) as sells,
SUM(CASE WHEN action = 'HOLD' THEN 1 ELSE 0 END) as holds,
AVG(confidence) as avg_confidence,
SUM(pnl) as total_pnl,
COUNT(DISTINCT stock_code) as unique_stocks,
SUM(CASE WHEN pnl > 0 THEN 1 ELSE 0 END) as wins,
SUM(CASE WHEN pnl < 0 THEN 1 ELSE 0 END) as losses
FROM trades
WHERE DATE(timestamp) = ?
""",
(date,),
)
row = cursor.fetchone()
if row and row[0] > 0: # At least one trade
trade_count, buys, sells, holds, avg_conf, total_pnl, stocks, wins, losses = row
# Store daily metrics in L6
self.store.set_context(ContextLayer.L6_DAILY, date, "trade_count", trade_count)
self.store.set_context(ContextLayer.L6_DAILY, date, "buys", buys)
self.store.set_context(ContextLayer.L6_DAILY, date, "sells", sells)
self.store.set_context(ContextLayer.L6_DAILY, date, "holds", holds)
self.store.set_context(
ContextLayer.L6_DAILY, date, "avg_confidence", round(avg_conf, 2)
)
self.store.set_context(
ContextLayer.L6_DAILY, date, "total_pnl", round(total_pnl, 2)
)
self.store.set_context(ContextLayer.L6_DAILY, date, "unique_stocks", stocks)
win_rate = round(wins / max(wins + losses, 1) * 100, 2)
self.store.set_context(ContextLayer.L6_DAILY, date, "win_rate", win_rate)
def aggregate_weekly_from_daily(self, week: str | None = None) -> None:
"""Aggregate L5 (weekly) context from L6 (daily).
Args:
week: Week in YYYY-Www format (ISO week). If None, uses current week.
"""
        if week is None:
            # %G-W%V gives the ISO year and week, matching the YYYY-Www format
            week = datetime.now(UTC).strftime("%G-W%V")
        # Restrict to the 7 calendar dates belonging to this ISO week
        year, week_num = int(week[:4]), int(week.split("-W")[1])
        week_dates = [
            datetime.fromisocalendar(year, week_num, day).date().isoformat()
            for day in range(1, 8)
        ]
        placeholders = ",".join("?" * len(week_dates))
        cursor = self.conn.execute(
            f"""
            SELECT key, value FROM contexts
            WHERE layer = ? AND timeframe IN ({placeholders})
            """,
            (ContextLayer.L6_DAILY.value, *week_dates),
        )
# Group by key and collect all values
daily_data: dict[str, list[Any]] = defaultdict(list)
for row in cursor.fetchall():
daily_data[row[0]].append(json.loads(row[1]))
if daily_data:
# Sum all PnL values
if "total_pnl" in daily_data:
total_pnl = sum(daily_data["total_pnl"])
self.store.set_context(
ContextLayer.L5_WEEKLY, week, "weekly_pnl", round(total_pnl, 2)
)
# Average all confidence values
if "avg_confidence" in daily_data:
conf_values = daily_data["avg_confidence"]
avg_conf = sum(conf_values) / len(conf_values)
self.store.set_context(
ContextLayer.L5_WEEKLY, week, "avg_confidence", round(avg_conf, 2)
)
def aggregate_monthly_from_weekly(self, month: str | None = None) -> None:
"""Aggregate L4 (monthly) context from L5 (weekly).
Args:
month: Month in YYYY-MM format. If None, uses current month.
"""
if month is None:
month = datetime.now(UTC).strftime("%Y-%m")
# Get all weekly contexts for this month
cursor = self.conn.execute(
"""
SELECT key, value FROM contexts
WHERE layer = ? AND timeframe LIKE ?
""",
(ContextLayer.L5_WEEKLY.value, f"{month[:4]}-W%"),
)
# Group by key and collect all values
weekly_data: dict[str, list[Any]] = defaultdict(list)
for row in cursor.fetchall():
weekly_data[row[0]].append(json.loads(row[1]))
if weekly_data:
# Sum all weekly PnL values
if "weekly_pnl" in weekly_data:
total_pnl = sum(weekly_data["weekly_pnl"])
self.store.set_context(
ContextLayer.L4_MONTHLY, month, "monthly_pnl", round(total_pnl, 2)
)
def aggregate_quarterly_from_monthly(self, quarter: str | None = None) -> None:
"""Aggregate L3 (quarterly) context from L4 (monthly).
Args:
quarter: Quarter in YYYY-Qn format. If None, uses current quarter.
"""
if quarter is None:
now = datetime.now(UTC)
q = (now.month - 1) // 3 + 1
quarter = f"{now.year}-Q{q}"
# Get all monthly contexts for this quarter
# Q1: 01-03, Q2: 04-06, Q3: 07-09, Q4: 10-12
q_num = int(quarter.split("-Q")[1])
months = [f"{quarter[:4]}-{m:02d}" for m in range((q_num - 1) * 3 + 1, q_num * 3 + 1)]
total_pnl = 0.0
for month in months:
monthly_pnl = self.store.get_context(
ContextLayer.L4_MONTHLY, month, "monthly_pnl"
)
if monthly_pnl is not None:
total_pnl += monthly_pnl
self.store.set_context(
ContextLayer.L3_QUARTERLY, quarter, "quarterly_pnl", round(total_pnl, 2)
)
def aggregate_annual_from_quarterly(self, year: str | None = None) -> None:
"""Aggregate L2 (annual) context from L3 (quarterly).
Args:
year: Year in YYYY format. If None, uses current year.
"""
if year is None:
year = str(datetime.now(UTC).year)
# Get all quarterly contexts for this year
total_pnl = 0.0
for q in range(1, 5):
quarter = f"{year}-Q{q}"
quarterly_pnl = self.store.get_context(
ContextLayer.L3_QUARTERLY, quarter, "quarterly_pnl"
)
if quarterly_pnl is not None:
total_pnl += quarterly_pnl
self.store.set_context(
ContextLayer.L2_ANNUAL, year, "annual_pnl", round(total_pnl, 2)
)
def aggregate_legacy_from_annual(self) -> None:
"""Aggregate L1 (legacy) context from all L2 (annual) data."""
# Get all annual PnL
cursor = self.conn.execute(
"""
SELECT timeframe, value FROM contexts
WHERE layer = ? AND key = ?
ORDER BY timeframe
""",
(ContextLayer.L2_ANNUAL.value, "annual_pnl"),
)
annual_data = [(row[0], json.loads(row[1])) for row in cursor.fetchall()]
if annual_data:
total_pnl = sum(pnl for _, pnl in annual_data)
years_traded = len(annual_data)
avg_annual_pnl = total_pnl / years_traded
# Store in L1 (single "LEGACY" timeframe)
self.store.set_context(
ContextLayer.L1_LEGACY, "LEGACY", "total_pnl", round(total_pnl, 2)
)
self.store.set_context(
ContextLayer.L1_LEGACY, "LEGACY", "years_traded", years_traded
)
self.store.set_context(
ContextLayer.L1_LEGACY,
"LEGACY",
"avg_annual_pnl",
round(avg_annual_pnl, 2),
)
def run_all_aggregations(self) -> None:
"""Run all aggregations from L7 to L1 (bottom-up)."""
# L7 (trades) → L6 (daily)
self.aggregate_daily_from_trades()
# L6 (daily) → L5 (weekly)
self.aggregate_weekly_from_daily()
# L5 (weekly) → L4 (monthly)
self.aggregate_monthly_from_weekly()
# L4 (monthly) → L3 (quarterly)
self.aggregate_quarterly_from_monthly()
# L3 (quarterly) → L2 (annual)
self.aggregate_annual_from_quarterly()
# L2 (annual) → L1 (legacy)
self.aggregate_legacy_from_annual()

75
src/context/layer.py Normal file

@@ -0,0 +1,75 @@
"""Context layer definitions for multi-tier memory management."""
from __future__ import annotations
from dataclasses import dataclass
from enum import Enum
class ContextLayer(str, Enum):
"""7-tier context hierarchy from real-time to generational."""
L1_LEGACY = "L1_LEGACY" # Cumulative/generational wisdom
L2_ANNUAL = "L2_ANNUAL" # Yearly performance
L3_QUARTERLY = "L3_QUARTERLY" # Quarterly strategy adjustments
L4_MONTHLY = "L4_MONTHLY" # Monthly rebalancing
L5_WEEKLY = "L5_WEEKLY" # Weekly stock selection
L6_DAILY = "L6_DAILY" # Daily trade logs
L7_REALTIME = "L7_REALTIME" # Real-time market data
@dataclass(frozen=True)
class LayerMetadata:
"""Metadata for each context layer."""
layer: ContextLayer
description: str
retention_days: int | None # None = keep forever
aggregation_source: ContextLayer | None # Parent layer for aggregation
# Layer configuration
LAYER_CONFIG: dict[ContextLayer, LayerMetadata] = {
ContextLayer.L1_LEGACY: LayerMetadata(
layer=ContextLayer.L1_LEGACY,
description="Cumulative trading history and core lessons learned across generations",
retention_days=None, # Keep forever
aggregation_source=ContextLayer.L2_ANNUAL,
),
ContextLayer.L2_ANNUAL: LayerMetadata(
layer=ContextLayer.L2_ANNUAL,
description="Yearly returns, Sharpe ratio, max drawdown, win rate",
retention_days=365 * 10, # 10 years
aggregation_source=ContextLayer.L3_QUARTERLY,
),
ContextLayer.L3_QUARTERLY: LayerMetadata(
layer=ContextLayer.L3_QUARTERLY,
description="Quarterly strategy adjustments, market phase detection, sector rotation",
retention_days=365 * 3, # 3 years
aggregation_source=ContextLayer.L4_MONTHLY,
),
ContextLayer.L4_MONTHLY: LayerMetadata(
layer=ContextLayer.L4_MONTHLY,
description="Monthly portfolio rebalancing, risk exposure, drawdown recovery",
retention_days=365 * 2, # 2 years
aggregation_source=ContextLayer.L5_WEEKLY,
),
ContextLayer.L5_WEEKLY: LayerMetadata(
layer=ContextLayer.L5_WEEKLY,
description="Weekly stock selection, sector focus, volatility regime",
retention_days=365, # 1 year
aggregation_source=ContextLayer.L6_DAILY,
),
ContextLayer.L6_DAILY: LayerMetadata(
layer=ContextLayer.L6_DAILY,
description="Daily trade logs, P&L, market summaries, decision accuracy",
retention_days=90, # 90 days
aggregation_source=ContextLayer.L7_REALTIME,
),
ContextLayer.L7_REALTIME: LayerMetadata(
layer=ContextLayer.L7_REALTIME,
description="Real-time positions, quotes, orderbook, volatility, live P&L",
retention_days=7, # 7 days (real-time data is ephemeral)
aggregation_source=None, # No aggregation source (leaf layer)
),
}

193
src/context/store.py Normal file

@@ -0,0 +1,193 @@
"""Context storage and retrieval for the 7-tier memory system."""
from __future__ import annotations
import json
import sqlite3
from datetime import UTC, datetime, timedelta
from typing import Any
from src.context.layer import LAYER_CONFIG, ContextLayer
class ContextStore:
"""Manages context data across the 7-tier hierarchy."""
def __init__(self, conn: sqlite3.Connection) -> None:
"""Initialize the context store with a database connection."""
self.conn = conn
self._init_metadata()
def _init_metadata(self) -> None:
"""Initialize context_metadata table with layer configurations."""
for config in LAYER_CONFIG.values():
self.conn.execute(
"""
INSERT OR REPLACE INTO context_metadata
(layer, description, retention_days, aggregation_source)
VALUES (?, ?, ?, ?)
""",
(
config.layer.value,
config.description,
config.retention_days,
config.aggregation_source.value if config.aggregation_source else None,
),
)
self.conn.commit()
def set_context(
self,
layer: ContextLayer,
timeframe: str,
key: str,
value: Any,
) -> None:
"""Set a context value for a given layer and timeframe.
Args:
layer: The context layer (L1-L7)
timeframe: Time identifier (e.g., "2026", "2026-Q1", "2026-01",
"2026-W05", "2026-02-04")
key: Context key (e.g., "sharpe_ratio", "win_rate", "lesson_learned")
value: Context value (will be JSON-serialized)
"""
now = datetime.now(UTC).isoformat()
value_json = json.dumps(value)
self.conn.execute(
"""
INSERT INTO contexts (layer, timeframe, key, value, created_at, updated_at)
VALUES (?, ?, ?, ?, ?, ?)
ON CONFLICT(layer, timeframe, key)
DO UPDATE SET value = excluded.value, updated_at = excluded.updated_at
""",
(layer.value, timeframe, key, value_json, now, now),
)
self.conn.commit()
def get_context(
self,
layer: ContextLayer,
timeframe: str,
key: str,
) -> Any | None:
"""Get a context value for a given layer and timeframe.
Args:
layer: The context layer (L1-L7)
timeframe: Time identifier
key: Context key
Returns:
The context value (deserialized from JSON), or None if not found
"""
cursor = self.conn.execute(
"""
SELECT value FROM contexts
WHERE layer = ? AND timeframe = ? AND key = ?
""",
(layer.value, timeframe, key),
)
row = cursor.fetchone()
if row:
return json.loads(row[0])
return None
def get_all_contexts(
self,
layer: ContextLayer,
timeframe: str | None = None,
) -> dict[str, Any]:
"""Get all context values for a given layer and optional timeframe.
Args:
layer: The context layer (L1-L7)
timeframe: Optional time identifier filter
Returns:
Dictionary of key-value pairs for the specified layer/timeframe
"""
if timeframe:
cursor = self.conn.execute(
"""
SELECT key, value FROM contexts
WHERE layer = ? AND timeframe = ?
ORDER BY key
""",
(layer.value, timeframe),
)
else:
cursor = self.conn.execute(
"""
SELECT key, value FROM contexts
WHERE layer = ?
ORDER BY timeframe DESC, key
""",
(layer.value,),
)
return {row[0]: json.loads(row[1]) for row in cursor.fetchall()}
def get_latest_timeframe(self, layer: ContextLayer) -> str | None:
"""Get the most recent timeframe for a given layer.
Args:
layer: The context layer (L1-L7)
Returns:
The latest timeframe string, or None if no data exists
"""
cursor = self.conn.execute(
"""
SELECT timeframe FROM contexts
WHERE layer = ?
ORDER BY updated_at DESC
LIMIT 1
""",
(layer.value,),
)
row = cursor.fetchone()
return row[0] if row else None
def delete_old_contexts(self, layer: ContextLayer, cutoff_date: str) -> int:
"""Delete contexts older than the cutoff date for a given layer.
Args:
layer: The context layer (L1-L7)
cutoff_date: ISO format date string (contexts before this will be deleted)
Returns:
Number of rows deleted
"""
cursor = self.conn.execute(
"""
DELETE FROM contexts
WHERE layer = ? AND updated_at < ?
""",
(layer.value, cutoff_date),
)
self.conn.commit()
return cursor.rowcount
def cleanup_expired_contexts(self) -> dict[ContextLayer, int]:
"""Delete expired contexts based on retention policies.
Returns:
Dictionary mapping layer to number of deleted rows
"""
deleted_counts: dict[ContextLayer, int] = {}
for layer, config in LAYER_CONFIG.items():
if config.retention_days is None:
# Keep forever (e.g., L1_LEGACY)
deleted_counts[layer] = 0
continue
            # Calculate cutoff date from the layer's retention policy
cutoff = datetime.now(UTC) - timedelta(days=config.retention_days)
deleted_counts[layer] = self.delete_old_contexts(layer, cutoff.isoformat())
return deleted_counts

src/db.py

@@ -39,6 +39,38 @@ def init_db(db_path: str) -> sqlite3.Connection:
if "exchange_code" not in columns:
conn.execute("ALTER TABLE trades ADD COLUMN exchange_code TEXT DEFAULT 'KRX'")
# Context tree tables for multi-layered memory management
conn.execute(
"""
CREATE TABLE IF NOT EXISTS contexts (
id INTEGER PRIMARY KEY AUTOINCREMENT,
layer TEXT NOT NULL,
timeframe TEXT NOT NULL,
key TEXT NOT NULL,
value TEXT NOT NULL,
created_at TEXT NOT NULL,
updated_at TEXT NOT NULL,
UNIQUE(layer, timeframe, key)
)
"""
)
conn.execute(
"""
CREATE TABLE IF NOT EXISTS context_metadata (
layer TEXT PRIMARY KEY,
description TEXT NOT NULL,
retention_days INTEGER,
aggregation_source TEXT
)
"""
)
# Create indices for efficient context queries
conn.execute("CREATE INDEX IF NOT EXISTS idx_contexts_layer ON contexts(layer)")
conn.execute("CREATE INDEX IF NOT EXISTS idx_contexts_timeframe ON contexts(timeframe)")
conn.execute("CREATE INDEX IF NOT EXISTS idx_contexts_updated ON contexts(updated_at)")
conn.commit()
return conn
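
The UNIQUE(layer, timeframe, key) constraint is what lets a repeated write act as an update rather than a duplicate insert (see test_update_existing_context below). The write path itself is not shown in this diff; a sketch of how it could work against this schema, using SQLite's upsert clause (an assumption, not the committed implementation):

    import json
    from datetime import UTC, datetime

    def set_context_sketch(conn, layer, timeframe, key, value):
        # Hypothetical reconstruction of ContextStore.set_context.
        now = datetime.now(UTC).isoformat()
        conn.execute(
            """
            INSERT INTO contexts (layer, timeframe, key, value, created_at, updated_at)
            VALUES (?, ?, ?, ?, ?, ?)
            ON CONFLICT(layer, timeframe, key)
            DO UPDATE SET value = excluded.value, updated_at = excluded.updated_at
            """,
            (layer.value, timeframe, key, json.dumps(value), now, now),
        )
        conn.commit()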

tests/test_context.py (new file, +350 lines)

@@ -0,0 +1,350 @@
"""Tests for the multi-layered context management system."""
from __future__ import annotations
import sqlite3
from datetime import UTC, datetime, timedelta
import pytest
from src.context.aggregator import ContextAggregator
from src.context.layer import LAYER_CONFIG, ContextLayer
from src.context.store import ContextStore
from src.db import init_db, log_trade
@pytest.fixture
def db_conn() -> sqlite3.Connection:
"""Provide an in-memory database connection."""
return init_db(":memory:")
@pytest.fixture
def store(db_conn: sqlite3.Connection) -> ContextStore:
"""Provide a ContextStore instance."""
return ContextStore(db_conn)
@pytest.fixture
def aggregator(db_conn: sqlite3.Connection) -> ContextAggregator:
"""Provide a ContextAggregator instance."""
return ContextAggregator(db_conn)
class TestContextStore:
"""Test suite for ContextStore CRUD operations."""
def test_set_and_get_context(self, store: ContextStore) -> None:
"""Test setting and retrieving a context value."""
store.set_context(ContextLayer.L6_DAILY, "2026-02-04", "total_pnl", 1234.56)
value = store.get_context(ContextLayer.L6_DAILY, "2026-02-04", "total_pnl")
assert value == 1234.56
def test_get_nonexistent_context(self, store: ContextStore) -> None:
"""Test retrieving a non-existent context returns None."""
value = store.get_context(ContextLayer.L6_DAILY, "2026-02-04", "nonexistent")
assert value is None
def test_update_existing_context(self, store: ContextStore) -> None:
"""Test updating an existing context value."""
store.set_context(ContextLayer.L6_DAILY, "2026-02-04", "total_pnl", 100.0)
store.set_context(ContextLayer.L6_DAILY, "2026-02-04", "total_pnl", 200.0)
value = store.get_context(ContextLayer.L6_DAILY, "2026-02-04", "total_pnl")
assert value == 200.0
def test_get_all_contexts_for_layer(self, store: ContextStore) -> None:
"""Test retrieving all contexts for a specific layer."""
store.set_context(ContextLayer.L6_DAILY, "2026-02-04", "total_pnl", 100.0)
store.set_context(ContextLayer.L6_DAILY, "2026-02-04", "trade_count", 10)
store.set_context(ContextLayer.L6_DAILY, "2026-02-04", "win_rate", 60.5)
contexts = store.get_all_contexts(ContextLayer.L6_DAILY, "2026-02-04")
assert len(contexts) == 3
assert contexts["total_pnl"] == 100.0
assert contexts["trade_count"] == 10
assert contexts["win_rate"] == 60.5
def test_get_latest_timeframe(self, store: ContextStore) -> None:
"""Test getting the most recent timeframe for a layer."""
store.set_context(ContextLayer.L6_DAILY, "2026-02-01", "total_pnl", 100.0)
store.set_context(ContextLayer.L6_DAILY, "2026-02-03", "total_pnl", 200.0)
store.set_context(ContextLayer.L6_DAILY, "2026-02-02", "total_pnl", 150.0)
latest = store.get_latest_timeframe(ContextLayer.L6_DAILY)
# Latest by updated_at, which should be the last one set
assert latest == "2026-02-02"
def test_delete_old_contexts(
self, store: ContextStore, db_conn: sqlite3.Connection
) -> None:
"""Test deleting contexts older than a cutoff date."""
# Insert contexts with specific old timestamps
# (bypassing set_context which uses current time)
old_date = "2026-01-01T00:00:00+00:00"
new_date = "2026-02-01T00:00:00+00:00"
db_conn.execute(
"""
INSERT INTO contexts (layer, timeframe, key, value, created_at, updated_at)
VALUES (?, ?, ?, ?, ?, ?)
""",
(ContextLayer.L6_DAILY.value, "2026-01-01", "total_pnl", "100.0", old_date, old_date),
)
db_conn.execute(
"""
INSERT INTO contexts (layer, timeframe, key, value, created_at, updated_at)
VALUES (?, ?, ?, ?, ?, ?)
""",
(ContextLayer.L6_DAILY.value, "2026-02-01", "total_pnl", "200.0", new_date, new_date),
)
db_conn.commit()
# Delete contexts before 2026-01-15
cutoff = "2026-01-15T00:00:00+00:00"
deleted = store.delete_old_contexts(ContextLayer.L6_DAILY, cutoff)
# Should delete the 2026-01-01 context
assert deleted == 1
assert store.get_context(ContextLayer.L6_DAILY, "2026-02-01", "total_pnl") == 200.0
assert store.get_context(ContextLayer.L6_DAILY, "2026-01-01", "total_pnl") is None
def test_cleanup_expired_contexts(
self, store: ContextStore, db_conn: sqlite3.Connection
) -> None:
"""Test automatic cleanup based on retention policies."""
# Set old contexts for L7 (7 day retention)
old_date = (datetime.now(UTC) - timedelta(days=10)).isoformat()
db_conn.execute(
"""
INSERT INTO contexts (layer, timeframe, key, value, created_at, updated_at)
VALUES (?, ?, ?, ?, ?, ?)
""",
(ContextLayer.L7_REALTIME.value, "2026-01-01", "price", "100.0", old_date, old_date),
)
db_conn.commit()
deleted_counts = store.cleanup_expired_contexts()
# Should delete the old L7 context (10 days > 7 day retention)
assert deleted_counts[ContextLayer.L7_REALTIME] == 1
# L1 has no retention limit, so nothing should be deleted
assert deleted_counts[ContextLayer.L1_LEGACY] == 0
def test_context_metadata_initialized(
self, store: ContextStore, db_conn: sqlite3.Connection
) -> None:
"""Test that context metadata is properly initialized."""
cursor = db_conn.execute("SELECT COUNT(*) FROM context_metadata")
count = cursor.fetchone()[0]
# Should have metadata for all 7 layers
assert count == 7
# Verify L1 metadata
cursor = db_conn.execute(
"SELECT description, retention_days FROM context_metadata WHERE layer = ?",
(ContextLayer.L1_LEGACY.value,),
)
row = cursor.fetchone()
assert row is not None
assert "Cumulative trading history" in row[0]
assert row[1] is None # No retention limit for L1
class TestContextAggregator:
"""Test suite for ContextAggregator."""
def test_aggregate_daily_from_trades(
self, aggregator: ContextAggregator, db_conn: sqlite3.Connection
) -> None:
"""Test aggregating daily metrics from trades."""
date = "2026-02-04"
# Create sample trades
log_trade(db_conn, "005930", "BUY", 85, "Good signal", quantity=10, price=70000, pnl=500)
log_trade(db_conn, "000660", "SELL", 90, "Take profit", quantity=5, price=50000, pnl=1500)
log_trade(db_conn, "035720", "HOLD", 75, "Wait", quantity=0, price=0, pnl=0)
# Manually set timestamps to the target date
        db_conn.execute(
            "UPDATE trades SET timestamp = ?",
            (f"{date}T10:00:00+00:00",),
        )
db_conn.commit()
# Aggregate
aggregator.aggregate_daily_from_trades(date)
# Verify L6 contexts
store = aggregator.store
assert store.get_context(ContextLayer.L6_DAILY, date, "trade_count") == 3
assert store.get_context(ContextLayer.L6_DAILY, date, "buys") == 1
assert store.get_context(ContextLayer.L6_DAILY, date, "sells") == 1
assert store.get_context(ContextLayer.L6_DAILY, date, "holds") == 1
assert store.get_context(ContextLayer.L6_DAILY, date, "total_pnl") == 2000.0
assert store.get_context(ContextLayer.L6_DAILY, date, "unique_stocks") == 3
# 2 wins, 0 losses
assert store.get_context(ContextLayer.L6_DAILY, date, "win_rate") == 100.0
def test_aggregate_weekly_from_daily(self, aggregator: ContextAggregator) -> None:
"""Test aggregating weekly metrics from daily."""
week = "2026-W06"
# Set daily contexts
aggregator.store.set_context(ContextLayer.L6_DAILY, "2026-02-02", "total_pnl", 100.0)
aggregator.store.set_context(ContextLayer.L6_DAILY, "2026-02-03", "total_pnl", 200.0)
aggregator.store.set_context(ContextLayer.L6_DAILY, "2026-02-02", "avg_confidence", 80.0)
aggregator.store.set_context(ContextLayer.L6_DAILY, "2026-02-03", "avg_confidence", 85.0)
# Aggregate
aggregator.aggregate_weekly_from_daily(week)
# Verify L5 contexts
store = aggregator.store
weekly_pnl = store.get_context(ContextLayer.L5_WEEKLY, week, "weekly_pnl")
avg_conf = store.get_context(ContextLayer.L5_WEEKLY, week, "avg_confidence")
assert weekly_pnl == 300.0
assert avg_conf == 82.5
def test_aggregate_monthly_from_weekly(self, aggregator: ContextAggregator) -> None:
"""Test aggregating monthly metrics from weekly."""
month = "2026-02"
# Set weekly contexts
aggregator.store.set_context(ContextLayer.L5_WEEKLY, "2026-W05", "weekly_pnl", 100.0)
aggregator.store.set_context(ContextLayer.L5_WEEKLY, "2026-W06", "weekly_pnl", 200.0)
aggregator.store.set_context(ContextLayer.L5_WEEKLY, "2026-W07", "weekly_pnl", 150.0)
# Aggregate
aggregator.aggregate_monthly_from_weekly(month)
# Verify L4 contexts
store = aggregator.store
monthly_pnl = store.get_context(ContextLayer.L4_MONTHLY, month, "monthly_pnl")
assert monthly_pnl == 450.0
def test_aggregate_quarterly_from_monthly(self, aggregator: ContextAggregator) -> None:
"""Test aggregating quarterly metrics from monthly."""
quarter = "2026-Q1"
# Set monthly contexts for Q1 (Jan, Feb, Mar)
aggregator.store.set_context(ContextLayer.L4_MONTHLY, "2026-01", "monthly_pnl", 1000.0)
aggregator.store.set_context(ContextLayer.L4_MONTHLY, "2026-02", "monthly_pnl", 2000.0)
aggregator.store.set_context(ContextLayer.L4_MONTHLY, "2026-03", "monthly_pnl", 1500.0)
# Aggregate
aggregator.aggregate_quarterly_from_monthly(quarter)
# Verify L3 contexts
store = aggregator.store
quarterly_pnl = store.get_context(ContextLayer.L3_QUARTERLY, quarter, "quarterly_pnl")
assert quarterly_pnl == 4500.0
def test_aggregate_annual_from_quarterly(self, aggregator: ContextAggregator) -> None:
"""Test aggregating annual metrics from quarterly."""
year = "2026"
# Set quarterly contexts for all 4 quarters
aggregator.store.set_context(ContextLayer.L3_QUARTERLY, "2026-Q1", "quarterly_pnl", 4500.0)
aggregator.store.set_context(ContextLayer.L3_QUARTERLY, "2026-Q2", "quarterly_pnl", 5000.0)
aggregator.store.set_context(ContextLayer.L3_QUARTERLY, "2026-Q3", "quarterly_pnl", 4800.0)
aggregator.store.set_context(ContextLayer.L3_QUARTERLY, "2026-Q4", "quarterly_pnl", 5200.0)
# Aggregate
aggregator.aggregate_annual_from_quarterly(year)
# Verify L2 contexts
store = aggregator.store
annual_pnl = store.get_context(ContextLayer.L2_ANNUAL, year, "annual_pnl")
assert annual_pnl == 19500.0
def test_aggregate_legacy_from_annual(self, aggregator: ContextAggregator) -> None:
"""Test aggregating legacy metrics from all annual data."""
# Set annual contexts for multiple years
aggregator.store.set_context(ContextLayer.L2_ANNUAL, "2024", "annual_pnl", 10000.0)
aggregator.store.set_context(ContextLayer.L2_ANNUAL, "2025", "annual_pnl", 15000.0)
aggregator.store.set_context(ContextLayer.L2_ANNUAL, "2026", "annual_pnl", 20000.0)
# Aggregate
aggregator.aggregate_legacy_from_annual()
# Verify L1 contexts
store = aggregator.store
total_pnl = store.get_context(ContextLayer.L1_LEGACY, "LEGACY", "total_pnl")
years_traded = store.get_context(ContextLayer.L1_LEGACY, "LEGACY", "years_traded")
avg_annual_pnl = store.get_context(ContextLayer.L1_LEGACY, "LEGACY", "avg_annual_pnl")
assert total_pnl == 45000.0
assert years_traded == 3
assert avg_annual_pnl == 15000.0
def test_run_all_aggregations(
self, aggregator: ContextAggregator, db_conn: sqlite3.Connection
) -> None:
"""Test running all aggregations from L7 to L1."""
date = "2026-02-04"
# Create sample trades
log_trade(db_conn, "005930", "BUY", 85, "Good signal", quantity=10, price=70000, pnl=1000)
# Set timestamp
db_conn.execute(f"UPDATE trades SET timestamp = '{date}T10:00:00+00:00'")
db_conn.commit()
# Run all aggregations
aggregator.run_all_aggregations()
# Verify data exists in each layer
store = aggregator.store
assert store.get_context(ContextLayer.L6_DAILY, date, "total_pnl") == 1000.0
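        # Note (added): "%Y-W%V" pairs the calendar year with the ISO week number,
        # which can disagree around New Year; "%G-W%V" is the matching ISO pair.
        # Kept as-is since it presumably mirrors the aggregator's own week format.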
current_week = datetime.now(UTC).strftime("%Y-W%V")
assert store.get_context(ContextLayer.L5_WEEKLY, current_week, "weekly_pnl") is not None
        # Higher layers depend on calendar alignment; here we only verify the full run completes.
class TestLayerMetadata:
"""Test suite for layer metadata configuration."""
def test_all_layers_have_metadata(self) -> None:
"""Test that all 7 layers have metadata defined."""
assert len(LAYER_CONFIG) == 7
for layer in ContextLayer:
assert layer in LAYER_CONFIG
def test_layer_retention_policies(self) -> None:
"""Test layer retention policies are correctly configured."""
# L1 should have no retention limit
assert LAYER_CONFIG[ContextLayer.L1_LEGACY].retention_days is None
# L7 should have the shortest retention (7 days)
assert LAYER_CONFIG[ContextLayer.L7_REALTIME].retention_days == 7
# L2 should have a long retention (10 years)
assert LAYER_CONFIG[ContextLayer.L2_ANNUAL].retention_days == 365 * 10
def test_layer_aggregation_chain(self) -> None:
"""Test that the aggregation chain is properly configured."""
# L7 has no source (leaf layer)
assert LAYER_CONFIG[ContextLayer.L7_REALTIME].aggregation_source is None
# L6 aggregates from L7
assert LAYER_CONFIG[ContextLayer.L6_DAILY].aggregation_source == ContextLayer.L7_REALTIME
# L5 aggregates from L6
assert LAYER_CONFIG[ContextLayer.L5_WEEKLY].aggregation_source == ContextLayer.L6_DAILY
# L4 aggregates from L5
assert LAYER_CONFIG[ContextLayer.L4_MONTHLY].aggregation_source == ContextLayer.L5_WEEKLY
# L3 aggregates from L4
assert LAYER_CONFIG[ContextLayer.L3_QUARTERLY].aggregation_source == ContextLayer.L4_MONTHLY
# L2 aggregates from L3
assert LAYER_CONFIG[ContextLayer.L2_ANNUAL].aggregation_source == ContextLayer.L3_QUARTERLY
# L1 aggregates from L2
assert LAYER_CONFIG[ContextLayer.L1_LEGACY].aggregation_source == ContextLayer.L2_ANNUAL
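
The aggregation_source links verified above form a chain from L1 back down to the L7 leaf. A small illustrative sketch (not part of the test suite) that derives the bottom-up execution order an aggregator needs, using only names already imported in this file:

    from src.context.layer import LAYER_CONFIG, ContextLayer

    def bottom_up_order() -> list[ContextLayer]:
        """Walk the chain from L1 to the leaf layer, then reverse it."""
        order: list[ContextLayer] = []
        layer: ContextLayer | None = ContextLayer.L1_LEGACY
        while layer is not None:
            order.append(layer)
            layer = LAYER_CONFIG[layer].aggregation_source
        return list(reversed(order))

    # bottom_up_order() == [L7_REALTIME, L6_DAILY, L5_WEEKLY, L4_MONTHLY,
    #                       L3_QUARTERLY, L2_ANNUAL, L1_LEGACY]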