Compare commits: `feature/v3` ... `feature/is` (1 commit)

| Author | SHA1 | Date |
|---|---|---|
| | 4ca582b418 | |
@@ -8,6 +8,13 @@
 - TASK: `TASK-...`
 - TEST: `TEST-...`

+## Docs Sync (SSOT)
+
+- [ ] Checked whether `docs/README.md` routing/roles are affected
+- [ ] SSOT docs (architecture/commands/testing/ouroboros registry) updated, or "no change" stated
+- [ ] No hardcoded volatile figures added to the summary docs (`README.md`, `CLAUDE.md`)
+- SSOT update location (link):
+
 ## Ticket Stage

 - Current stage: `Implemented` / `Integrated` / `Observed` / `Accepted`
@@ -41,23 +48,12 @@
 - [ ] Latest `workflow/session-handover.md` entry is updated for the current branch and today (UTC)
 - Latest handover entry heading:

-## Docs Sync Gate (required when docs files change)
-
-- [ ] `python3 scripts/validate_docs_sync.py` passes (PRs that do not touch `docs` note N/A)
-
 ## Runtime Evidence

 - Actual system launch command:
 - Monitoring log path:
 - Anomalies/issue links:

-## READ-ONLY Approval (Required when touching READ-ONLY files)
-
-- Touched READ-ONLY files:
-- Human approval:
-- Test suite 1:
-- Test suite 2:
-
 ## Approval Gate

 - [ ] Static Verifier approval comment linked
@@ -25,7 +25,7 @@ jobs:
       run: pip install ".[dev]"

     - name: Session handover gate
-      run: python3 scripts/session_handover_check.py --strict --ci
+      run: python3 scripts/session_handover_check.py --strict

     - name: Validate governance assets
       env:
.github/workflows/ci.yml (vendored, 2 changes)

@@ -22,7 +22,7 @@ jobs:
       run: pip install ".[dev]"

     - name: Session handover gate
-      run: python3 scripts/session_handover_check.py --strict --ci
+      run: python3 scripts/session_handover_check.py --strict

     - name: Validate governance assets
       env:
CLAUDE.md (11 changes)

@@ -2,6 +2,10 @@
 AI-powered trading agent for global stock markets with self-evolution capabilities.

+## Documentation Entry
+
+Document priority and roles are defined by [docs/README.md](docs/README.md).
+
 ## Quick Start

 ```bash
@@ -83,11 +87,10 @@ SCANNER_TOP_N=3 # Max candidates per scan

 ### Trading Mode Integration

-Smart Scanner runs in both `TRADE_MODE=realtime` and `daily` paths. On API failure, domestic stocks fall back to a static watchlist; overseas stocks fall back to a dynamic universe (active positions, recent holdings).
+Smart Scanner behavior and mode integration are defined in [docs/architecture.md](docs/architecture.md).

 ## Documentation

-- **[Documentation Hub](docs/README.md)** — Top-level doc routing and reading order
 - **[Workflow Guide](docs/workflow.md)** — Git workflow policy and agent-based development
 - **[Command Reference](docs/commands.md)** — Common failures, build commands, troubleshooting
 - **[Architecture](docs/architecture.md)** — System design, components, data flow
@@ -123,7 +126,7 @@ src/
 ├── broker/ # KIS API client (domestic + overseas)
 ├── context/ # L1-L7 hierarchical memory system
 ├── core/ # Risk manager (READ-ONLY)
-├── dashboard/ # FastAPI read-only monitoring (10 API endpoints)
+├── dashboard/ # FastAPI read-only monitoring (endpoint details: docs/commands.md)
 ├── data/ # External data integration (news, market data, calendar)
 ├── evolution/ # Self-improvement (optimizer, daily review, scorecard)
 ├── logging/ # Decision logger (audit trail)
@@ -134,7 +137,7 @@ src/
 ├── main.py # Trading loop orchestrator
 └── config.py # Settings (from .env)

-tests/ # 998 tests across 41 files
+tests/ # Test suite (details: docs/testing.md)
 docs/ # Extended documentation
 ```

README.md (26 changes)

@@ -2,6 +2,10 @@
 An autonomous stock-trading agent that trades through the KIS (Korea Investment & Securities) API, makes decisions with Google Gemini, and evolves its own strategy code on a TDD basis.

+## Document Navigation
+
+Overall document routing, roles, and priority follow [docs/README.md](docs/README.md).
+
 ## Architecture

 ```
@@ -39,7 +43,7 @@
 | Context | `src/context/` | L1-L7 hierarchical memory system |
 | Analysis | `src/analysis/` | RSI, ATR, Smart Volatility Scanner |
 | Notifications | `src/notifications/` | Two-way Telegram (alerts + 9 commands) |
-| Dashboard | `src/dashboard/` | FastAPI read-only monitoring (10 APIs) |
+| Dashboard | `src/dashboard/` | FastAPI read-only monitoring (endpoint list: `docs/commands.md`) |
 | Evolution | `src/evolution/` | Strategy evolution + Daily Review + Scorecard |
 | Decision log | `src/logging/` | Audit trail of all trading decisions |
 | Data | `src/data/` | News, market data, economic calendar integration |
@@ -153,19 +157,10 @@ docker compose up -d ouroboros

 ## Tests

-998 tests are implemented across 41 files. Minimum coverage is 80%.
+Test policy, execution, and composition follow [docs/testing.md](docs/testing.md).

-```
-tests/test_main.py — trading-loop integration
-tests/test_scenario_engine.py — scenario matching
-tests/test_pre_market_planner.py — playbook generation
-tests/test_overseas_broker.py — overseas broker
-tests/test_telegram_commands.py — Telegram commands
-tests/test_telegram.py — Telegram notifications
-... plus 35 more files ※ per-file counts may vary per CI run
-```
-
-**Details**: [docs/testing.md](docs/testing.md)
+- Minimum coverage: 80%
+- Full collection baseline: `pytest --collect-only -q`

 ## Tech Stack
@@ -174,7 +169,7 @@
 - **AI**: Google Gemini Pro
 - **DB**: SQLite (5 tables: trades, contexts, decision_logs, playbooks, context_metadata)
 - **Dashboard**: FastAPI + uvicorn
-- **Verification**: pytest + coverage (998 tests)
+- **Verification**: pytest + coverage (per `docs/testing.md`)
 - **CI/CD**: Gitea CI (`.gitea/workflows/ci.yml`)
 - **Deployment**: Docker + Docker Compose
@@ -209,7 +204,7 @@ The-Ouroboros/
 │ ├── config.py # Pydantic settings
 │ ├── db.py # SQLite database
 │ └── main.py # async trading loop
-├── tests/ # 998 tests (41 files)
+├── tests/ # test suite (details: docs/testing.md)
 ├── Dockerfile # multi-stage build
 ├── docker-compose.yml # service orchestration
 └── pyproject.toml # dependencies and tool settings
@@ -217,7 +212,6 @@ The-Ouroboros/

 ## Documentation

-- **[Docs Hub](docs/README.md)** — Full document routing, priority, reading order
 - **[Architecture](docs/architecture.md)** — System design, components, data flow
 - **[Testing](docs/testing.md)** — Test structure, coverage, authoring guide
 - **[Commands](docs/commands.md)** — CLI, Dashboard, Telegram commands
@@ -1,48 +1,29 @@
-# Documentation Hub
+# Documentation Map

-This document is the top-level routing hub for every document in the repository.
-Before jumping into a specific document, navigate by the priority and reading order below.
+This document is the single routing/role definition for the repository's docs.
+Each document stays within the role scope defined below.

-## Priority (SSOT)
+## Reading Order

-1. Execution/collaboration rules SSOT: [workflow.md](./workflow.md)
-2. Commands/incident response SSOT: [commands.md](./commands.md)
-3. Test/verification SSOT: [testing.md](./testing.md)
-4. Agent constraints SSOT: [agents.md](./agents.md)
-5. Requirements tracking SSOT: [requirements-log.md](./requirements-log.md)
-6. Ouroboros execution doc hub: [ouroboros/README.md](./ouroboros/README.md)
+1. [Project README](../README.md): quick start, overview
+2. [Architecture](architecture.md): system composition, data flow
+3. [Workflow](workflow.md): development/PR/verification procedure
+4. [Commands](commands.md): execution/operations command reference
+5. [Testing](testing.md): test strategy, authoring, operations
+6. [Ouroboros Hub](ouroboros/README.md): planning/requirements/execution-control docs

-## Recommended Reading Order
+## Single Source of Truth (SSOT)

-1. [workflow.md](./workflow.md)
-2. [commands.md](./commands.md)
-3. [testing.md](./testing.md)
-4. [agents.md](./agents.md)
-5. [architecture.md](./architecture.md)
-6. [context-tree.md](./context-tree.md)
-7. [disaster_recovery.md](./disaster_recovery.md)
-8. [live-trading-checklist.md](./live-trading-checklist.md)
-9. [ouroboros/README.md](./ouroboros/README.md)
+- Architecture/behavior baseline: [Architecture](architecture.md)
+- Execution command baseline: [Commands](commands.md)
+- Test policy baseline: [Testing](testing.md)
+- Requirements/REQ baseline: [Requirements Registry](ouroboros/01_requirements_registry.md)
+- Work/TASK baseline: [Code Work Orders](ouroboros/30_code_level_work_orders.md)
+- Acceptance/TEST baseline: [Acceptance Plan](ouroboros/40_acceptance_and_test_plan.md)

-## Document Map
+## Authoring Rules

-- Core
-  - [workflow.md](./workflow.md): branch/PR/review/session handover policy
-  - [commands.md](./commands.md): execution commands, failure cases, troubleshooting
-  - [testing.md](./testing.md): test structure, authoring rules, verification commands
-  - [agents.md](./agents.md): agent work constraints and forbidden actions
-  - [agent-constraints.md](./agent-constraints.md): persistent constraints/operational invariants (complements agents.md)
-  - [skills.md](./skills.md): installed/usable skills and usage guide
-- Design and Operations
-  - [architecture.md](./architecture.md): system structure and component responsibilities
-  - [context-tree.md](./context-tree.md): L1-L7 context hierarchy design
-  - [disaster_recovery.md](./disaster_recovery.md): backup/recovery procedures
-  - [live-trading-checklist.md](./live-trading-checklist.md): go-live checklist
-- Governance and Planning
-  - [requirements-log.md](./requirements-log.md): requirements/feedback history
-  - [ouroboros/README.md](./ouroboros/README.md): v2/v3 execution doc routing
-
-## Change Rule
-
-- When documents are added, moved, or heavily restructured, update this file's links and categories together.
-- Use relative paths only for links.
+- `README.md` and `CLAUDE.md` serve only as introduction/summary.
+- Volatile figures (test counts, API counts, per-file case counts) are not duplicated as fixed values in the summary docs.
+- Figure/policy details are recorded only in the SSOT docs; summary docs reference them via links.
+- Content repeated in two or more documents is condensed into a summary plus a link.
@@ -21,21 +21,6 @@ python3 scripts/session_handover_check.py --strict

 - On failure, update the latest `workflow/session-handover.md` entry, then rerun.

-## Docs Sync Validator (Mandatory for docs changes)
-
-- In docs-change PRs, run the sync validation first with the command below.
-
-```bash
-python3 scripts/validate_docs_sync.py
-```
-
-- On validation failure, fix immediately based on the message.
-  - `absolute link is forbidden`: an absolute path (`/...`) is used in a doc link
-  - `broken link`: a relative-link target file/anchor is missing
-  - `missing core doc link reference`: a core link is missing from `README.md`/`CLAUDE.md`
-  - `duplicated API endpoint row`: a duplicated row in the `docs/commands.md` API endpoint table
-  - `missing dynamic test count guidance`: `docs/testing.md` lacks the `pytest --collect-only -q` guidance
-
 ### tea CLI (Gitea Command Line Tool)

 #### ❌ TTY Error - Interactive Confirmation Fails
@@ -151,9 +136,12 @@ No decorator needed for async tests.
 # Install all dependencies (production + dev)
 pip install -e ".[dev]"

-# Run full test suite with coverage (998 tests across 41 files)
+# Run full test suite with coverage
 pytest -v --cov=src --cov-report=term-missing

+# Validate docs sync rules (SSOT/duplication guard)
+python3 scripts/validate_docs_sync.py
+
 # Run a single test file
 pytest tests/test_risk.py -v

@@ -216,8 +204,8 @@ Dashboard runs as a daemon thread on `DASHBOARD_HOST:DASHBOARD_PORT` (default: `
 | `GET /api/performance` | Performance metrics by market and combined |
 | `GET /api/context/{layer}` | Context data by layer L1-L7 (query: `timeframe`) |
 | `GET /api/decisions` | Decision log entries (query: `limit`, `market`) |
+| `GET /api/pnl/history` | P&L history time series |
 | `GET /api/scenarios/active` | Today's matched scenarios |
-| `GET /api/pnl/history` | P&L history over time |
 | `GET /api/positions` | Current open positions |

 ## Telegram Commands
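The `pnl/history` row above is being deduplicated: the old and new tables each carry one copy, but a merge that kept both wordings would trip the duplicate-row guard in the repository's docs-sync validator (its endpoint regex appears later in this compare). A condensed, runnable sketch of that check:

```python
import re

# Regex copied from scripts/validate_docs_sync.py as shown in this compare.
ENDPOINT_ROW_PATTERN = re.compile(
    r"^\|\s*`(?P<endpoint>(?:GET|POST|PUT|PATCH|DELETE)\s+/[^`]*)`\s*\|"
)

def duplicated_endpoints(table_text: str) -> list[str]:
    """Return endpoints that appear in more than one table row."""
    seen: set[str] = set()
    dups: set[str] = set()
    for line in table_text.splitlines():
        match = ENDPOINT_ROW_PATTERN.match(line.strip())
        if not match:
            continue
        endpoint = match.group("endpoint")
        if endpoint in seen:
            dups.add(endpoint)
        seen.add(endpoint)
    return sorted(dups)

table = """\
| `GET /api/pnl/history` | P&L history time series |
| `GET /api/positions` | Current open positions |
| `GET /api/pnl/history` | P&L history over time |
"""
print(duplicated_endpoints(table))  # → ['GET /api/pnl/history']
```

Note the check keys on the endpoint string only, so two rows with different descriptions (as in the sample) still count as duplicates.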
@@ -1,9 +1,9 @@
 <!--
 Doc-ID: DOC-REQ-001
-Version: 1.0.1
+Version: 1.0.0
 Status: active
 Owner: strategy
-Updated: 2026-03-01
+Updated: 2026-02-26
 -->

 # Requirements Ledger (Single Source of Truth)
@@ -37,4 +37,3 @@ Updated: 2026-03-01
 - `REQ-OPS-001`: every time field must state its timezone (KST/UTC).
 - `REQ-OPS-002`: the documentation figure policy is changed only in the ledger.
 - `REQ-OPS-003`: every implementation task must be accompanied by a test task.
-- `REQ-OPS-004`: the original plan documents (`v2`, `v3`) use the `docs/ouroboros/source/` path as the single baseline.
@@ -51,7 +51,6 @@ Updated: 2026-02-26
 - `TASK-OPS-001` (`REQ-OPS-001`): enforce timezone notation in time fields/log schemas
 - `TASK-OPS-002` (`REQ-OPS-002`): add a CI check that `01_requirements_registry.md` is updated before policy figures change
 - `TASK-OPS-003` (`REQ-OPS-003`): keep the doc validation gate that blocks any `REQ-*` without a `TASK-*` or without a `TEST-*`
-- `TASK-OPS-004` (`REQ-OPS-004`): standardize the v2/v3 original plan location to `docs/ouroboros/source/` and verify link consistency

 ## Commit Rules
@@ -29,7 +29,6 @@ Updated: 2026-02-26
 - `TEST-ACC-007` (`REQ-OPS-001`): validation fails when a time-related field omits its timezone (KST/UTC).
 - `TEST-ACC-008` (`REQ-OPS-002`): validation fails when a policy figure changes without updating the ledger.
 - `TEST-ACC-009` (`REQ-OPS-003`): validation fails when a `REQ-*` exists without `TASK-*`/`TEST-*` mappings.
-- `TEST-ACC-019` (`REQ-OPS-004`): v2/v3 original plan links pass only when based on the `docs/ouroboros/source/` path.

 ## Test Layers
@@ -1,96 +1,75 @@
 <!--
 Doc-ID: DOC-PLAN-082
-Version: 1.0.0
-Status: draft
+Version: 1.1.0
+Status: active
 Owner: strategy
-Updated: 2026-02-28
+Updated: 2026-03-01
 -->

-# Document Restructuring Plan: Audit → Execution Pipeline
+# Document Restructuring Execution Status

-## Context
+## Purpose

-80_implementation_audit.md carried out the v2/v3 implementation audit and the return analysis, but after several rounds of review it mixes review history, data-quality discussion, and SQL queries, making it hard to use as an execution document.
+Reduce document duplication, drift, and navigation difficulty, and fix the operating rules so that agents can plan, design, implement, and verify consistently from the docs.

-Goal: based on this audit, build a document system that proceeds consistently from ticket creation → development design → implementation/review → verification → live-environment testing.
+## Scope

-## Changes
+- Root summary docs: `README.md`, `CLAUDE.md`
+- Operational docs: `docs/architecture.md`, `docs/commands.md`, `docs/testing.md`, `docs/workflow.md`
+- Execution-control docs: `docs/ouroboros/*`
+- Validation automation: `scripts/validate_docs_sync.py`, CI workflows

-### 1. Clean up 80_implementation_audit.md (audit record)
+## Completed Items (2026-03-01)

-**Role**: factual record of the current state. Focused only on "what is wrong".
+### 1) Document routing/roles fixed

-Cleanup:
-- Section 3: condense the P&L analysis to the key figures
-  - keep 3.1 (overall), 3.3 (per market), 3.4 (currency split), 3.5 (strategy-entry split), 3.6 (integrity conclusion)
-  - 3.2 daily P&L: drop the caution note, merge into the body
-  - 3.7 data quality: keep only the key conclusions, drop the detail items
-  - 3.8 SQL: split into a separate file (`scripts/audit_queries.sql`), reference it from the body
-- Sections 6.1 and 6.2 review history: remove entirely (traceable via git history)
-- Section 6 tests: merge "confirmed by recheck" items into the "test exists" items
-- New Section 7: link to the follow-up document (see 85_)
+- Created `docs/README.md`
+- Stated reading order + SSOT + authoring rules

-### 2. Write a new 85_loss_recovery_action_plan.md (execution plan)
+### 2) Summary-doc duplication reduced

-**Role**: "how to fix it". Execution blueprint from ticket creation to live-environment verification.
+- Added a documentation entry point to `README.md` and `CLAUDE.md`
+- Condensed direct duplication of volatile figures/detailed behavior into SSOT doc links

-Structure:
-```
-## 1. Summary
-- Goal: resolve the 7 ROOTs / 5 GAPs to exit the loss regime
-- Success criteria (quantitative)
-
-## 2. Work breakdown by phase
-### Phase 1: immediate parameter/logic fixes (stop the bleeding)
-For each item:
-- ROOT/GAP reference
-- Gitea issue title/description template
-- Target files + current behavior + target behavior
-- Acceptance criteria
-- Test plan
-- Dependencies/blocking relations
-
-### Phase 2: data consistency + making v2 effective
-(same format)
-
-### Phase 3: v3 session optimization
-(same format)
-
-## 3. Verification plan
-- Unit-test criteria
-- Integration-test scenarios (using the backtest pipeline)
-- Live verification: direct validation through small-size live trading
-  (paper trading excluded — too far from live conditions for trustworthy validation)
-- Per-phase go-live criteria:
-  unit/integration tests pass → small-size live → monitoring → confirm normal → full operation
-
-## 4. Dependency graph
-- Blocking relations between phases
-- Work order within a phase
-
-## 5. Rollback plan
-- Rollback procedure per phase on failure
-```
+### 3) Command/test doc consistency improved
+
+- Updated the dashboard API list in `docs/commands.md` (including `pnl/history`, `positions`)
+- Switched the total-test-count method in `docs/testing.md` from a fixed figure to `pytest --collect-only -q`
+
+### 4) Sync auto-validation + CI gate
+
+- Added `scripts/validate_docs_sync.py`
+- Added a `Validate docs sync` step to `.gitea/workflows/ci.yml` and `.github/workflows/ci.yml`
+- Added the sync validation command to `docs/commands.md`

-### 3. README.md update
+## Remaining Work

-- Add a link to the 85_ document
+### A) Broaden SSOT enforcement

-## Work order
+- Further condense the remaining behavior descriptions in `README.md`/`CLAUDE.md`; consolidate details into `docs/architecture.md`
+- Decide whether to auto-generate the per-file test-count snapshot for `docs/testing.md`

-1. Clean up 80_ (remove noise, split SQL, delete review history)
-2. Write `scripts/audit_queries.sql` (SQL split out of 80_)
-3. Write 85_ (execution plan)
-4. Update README.md
+### B) Validation rule hardening

+- Extend `validate_docs_sync.py` with additional patterns (duplicated policy phrasing/forbidden number expressions)
+- If needed, declare `docs/architecture.md` the sole authority for API/mode behavior

-## What is not written
+### C) Operationalized maintenance

-- Updates to 30_code_level_work_orders.md and 40_acceptance_and_test_plan.md: done at actual implementation time based on 85_ (for now, only up to the execution plan)
-- 01_requirements_registry.md: new REQs derived from ROOT/GAP are registered when implementation starts
-- Gitea issue creation: done separately after the 85_ document is finalized
+- Routine docs-sync check at release/sprint close
+- Consider adding an "SSOT links updated?" checkbox to the docs-change PR template

-## Verification
+## Validation Status

-- 80_: confirm only audit facts remain and the review history is removed
-- 85_: confirm each phase's work can be turned into Gitea issues directly
-- 85_: confirm every item includes acceptance criteria and a test plan
+- `python3 scripts/validate_docs_sync.py`: PASS
+- `python3 scripts/validate_ouroboros_docs.py`: PASS
+- `python3 scripts/validate_governance_assets.py`: PASS
+
+## Operating Rules
+
+- When document structure changes, update `docs/README.md` and the sync validation rules together.
+- Summary docs keep only the "overview/entry point" role; detailed facts and figure policy live only in the SSOT docs.
+- Volatile figures (test counts, endpoint counts) are replaced by auto-check commands or SSOT links.
+
+## References
+
+- Docs hub: [docs/README.md](../../docs/README.md)
+- Execution doc hub: [docs/ouroboros/README.md](./README.md)
+- Sync validation script: [`scripts/validate_docs_sync.py`](../../scripts/validate_docs_sync.py)
@@ -1,14 +1,14 @@
 <!--
 Doc-ID: DOC-ROOT-001
-Version: 1.0.1
+Version: 1.0.0
 Status: active
 Owner: strategy
-Updated: 2026-03-01
+Updated: 2026-02-26
 -->

 # The Ouroboros Execution Document Hub

-This folder is the document hub that breaks `source/ouroboros_plan_v2.txt` and `source/ouroboros_plan_v3.txt` down to the level of implementable work orders.
+This folder is the document hub that breaks `ouroboros_plan_v2.txt` and `ouroboros_plan_v3.txt` down to the level of implementable work orders.

 ## Reading Order (Routing)

@@ -40,5 +40,5 @@ python3 scripts/validate_ouroboros_docs.py

 ## Original Plan Documents

-- [v2](./source/ouroboros_plan_v2.txt)
-- [v3](./source/ouroboros_plan_v3.txt)
+- [v2](/home/agentson/repos/The-Ouroboros/ouroboros_plan_v2.txt)
+- [v3](/home/agentson/repos/The-Ouroboros/ouroboros_plan_v3.txt)
@@ -2,7 +2,13 @@

 ## Test Structure

-**998 tests** across **41 files**. `asyncio_mode = "auto"` in pyproject.toml — async tests need no special decorator.
+The total number of tests changes continuously. Check the latest figure with the command below.
+
+```bash
+pytest --collect-only -q
+```
+
+`asyncio_mode = "auto"` in pyproject.toml — async tests need no special decorator.

 The `settings` fixture in `conftest.py` provides safe defaults with test credentials and in-memory DB.
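The dynamic-count guidance above can be automated instead of read by eye. The helper below is a hypothetical sketch (the function name is not from the repo) that parses the summary line `pytest --collect-only -q` typically prints last, such as `998 tests collected in 1.23s`:

```python
import re

def parse_collected_count(output: str) -> int:
    """Extract the test count from `pytest --collect-only -q` output.

    Assumes pytest's usual quiet-mode summary line,
    e.g. "998 tests collected in 1.23s".
    """
    for line in reversed(output.strip().splitlines()):
        match = re.search(r"(\d+) tests? collected", line)
        if match:
            return int(match.group(1))
    raise ValueError("no collection summary line found")

# Sample output; in practice this would come from running pytest.
sample = "tests/test_main.py::test_loop\n\n998 tests collected in 1.23s"
print(parse_collected_count(sample))  # → 998
```

Scanning from the last line backwards keeps the parser robust against collected test IDs that happen to contain digits.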
@@ -66,7 +66,6 @@ def _check_handover_entry(
     *,
     branch: str,
     strict: bool,
-    ci_mode: bool,
     errors: list[str],
 ) -> None:
     if not HANDOVER_LOG.exists():
@@ -89,10 +88,6 @@ def _check_handover_entry(
             errors.append(f"latest handover entry missing token: {token}")

     if strict:
-        if "- next_ticket: #TBD" in latest:
-            errors.append("latest handover entry must not use placeholder next_ticket (#TBD)")
-
-    if strict and not ci_mode:
         today_utc = datetime.now(UTC).date().isoformat()
         if today_utc not in latest:
             errors.append(
@@ -104,6 +99,8 @@ def _check_handover_entry(
                 "latest handover entry must target current branch "
                 f"({branch_token})"
             )
+        if "- next_ticket: #TBD" in latest:
+            errors.append("latest handover entry must not use placeholder next_ticket (#TBD)")
         if "merged_to_feature_branch=no" in latest:
             errors.append(
                 "process gate indicates not merged; implementation must stay blocked "
@@ -120,14 +117,6 @@ def main() -> int:
         action="store_true",
         help="Enforce today-date and current-branch match on latest handover entry.",
     )
-    parser.add_argument(
-        "--ci",
-        action="store_true",
-        help=(
-            "CI mode: keep structural/token checks and placeholder guard, "
-            "but skip strict today-date/current-branch/merge-gate checks."
-        ),
-    )
     args = parser.parse_args()

     errors: list[str] = []
@@ -136,15 +125,10 @@ def main() -> int:
     branch = _current_branch()
     if not branch:
         errors.append("cannot resolve current git branch")
-    elif not args.ci and branch in {"main", "master"}:
+    elif branch in {"main", "master"}:
         errors.append(f"working branch must not be {branch}")

-    _check_handover_entry(
-        branch=branch,
-        strict=args.strict,
-        ci_mode=args.ci,
-        errors=errors,
-    )
+    _check_handover_entry(branch=branch, strict=args.strict, errors=errors)

     if errors:
         print("[FAIL] session handover check failed")
|||||||
@@ -1,5 +1,5 @@
|
|||||||
#!/usr/bin/env python3
|
#!/usr/bin/env python3
|
||||||
"""Validate top-level docs synchronization invariants."""
|
"""Validate lightweight documentation synchronization rules."""
|
||||||
|
|
||||||
from __future__ import annotations
|
from __future__ import annotations
|
||||||
|
|
||||||
@@ -7,116 +7,62 @@ import re
|
|||||||
import sys
|
import sys
|
||||||
from pathlib import Path
|
from pathlib import Path
|
||||||
|
|
||||||
REPO_ROOT = Path(".")
|
|
||||||
REQUIRED_FILES = {
|
|
||||||
"README.md": REPO_ROOT / "README.md",
|
|
||||||
"CLAUDE.md": REPO_ROOT / "CLAUDE.md",
|
|
||||||
"commands": REPO_ROOT / "docs" / "commands.md",
|
|
||||||
"testing": REPO_ROOT / "docs" / "testing.md",
|
|
||||||
"workflow": REPO_ROOT / "docs" / "workflow.md",
|
|
||||||
}
|
|
||||||
|
|
||||||
LINK_PATTERN = re.compile(r"\[[^\]]+\]\((?P<link>[^)]+)\)")
|
ROOT = Path(".")
|
||||||
ENDPOINT_ROW_PATTERN = re.compile(
|
|
||||||
r"^\|\s*`(?P<endpoint>(?:GET|POST|PUT|PATCH|DELETE)\s+/[^`]*)`\s*\|"
|
|
||||||
)
|
|
||||||
|
|
||||||
|
|
||||||
def _read(path: Path) -> str:
|
def read_text(path: Path, errors: list[str]) -> str:
|
||||||
return path.read_text(encoding="utf-8")
|
|
||||||
|
|
||||||
|
|
||||||
def validate_required_files_exist(errors: list[str]) -> None:
|
|
||||||
for name, path in REQUIRED_FILES.items():
|
|
||||||
if not path.exists():
|
if not path.exists():
|
||||||
errors.append(f"missing required doc file ({name}): {path}")
|
errors.append(f"missing file: {path}")
|
||||||
|
return ""
|
||||||
|
return path.read_text(encoding="utf-8")
|
||||||
def validate_links_resolve(doc_path: Path, text: str, errors: list[str]) -> None:
|
|
||||||
for match in LINK_PATTERN.finditer(text):
|
|
||||||
raw_link = match.group("link").strip()
|
|
||||||
if not raw_link or raw_link.startswith("#") or raw_link.startswith("http"):
|
|
||||||
continue
|
|
||||||
link_path = raw_link.split("#", 1)[0].strip()
|
|
||||||
if not link_path:
|
|
||||||
continue
|
|
||||||
if link_path.startswith("/"):
|
|
||||||
errors.append(f"{doc_path}: absolute link is forbidden -> {raw_link}")
|
|
||||||
continue
|
|
||||||
target = (doc_path.parent / link_path).resolve()
|
|
||||||
if not target.exists():
|
|
||||||
errors.append(f"{doc_path}: broken link -> {raw_link}")
|
|
||||||
|
|
||||||
|
|
||||||
def validate_summary_docs_reference_core_docs(errors: list[str]) -> None:
|
|
||||||
required_links = {
|
|
||||||
"README.md": ("docs/workflow.md", "docs/commands.md", "docs/testing.md"),
|
|
||||||
"CLAUDE.md": ("docs/workflow.md", "docs/commands.md"),
|
|
||||||
}
|
|
||||||
for file_name, links in required_links.items():
|
|
||||||
doc_path = REQUIRED_FILES[file_name]
|
|
||||||
text = _read(doc_path)
|
|
||||||
for link in links:
|
|
||||||
if link not in text:
|
|
||||||
errors.append(f"{doc_path}: missing core doc link reference -> {link}")
|
|
||||||
|
|
||||||
|
|
||||||
def collect_command_endpoints(text: str) -> list[str]:
|
|
||||||
endpoints: list[str] = []
|
|
||||||
for line in text.splitlines():
|
|
||||||
match = ENDPOINT_ROW_PATTERN.match(line.strip())
|
|
||||||
if match:
|
|
||||||
endpoints.append(match.group("endpoint"))
|
|
||||||
return endpoints
|
|
||||||
|
|
||||||
|
|
||||||
def validate_commands_endpoint_duplicates(errors: list[str]) -> None:
|
|
||||||
text = _read(REQUIRED_FILES["commands"])
|
|
||||||
endpoints = collect_command_endpoints(text)
|
|
||||||
seen: set[str] = set()
|
|
||||||
duplicates: set[str] = set()
|
|
||||||
for endpoint in endpoints:
|
|
||||||
if endpoint in seen:
|
|
||||||
duplicates.add(endpoint)
|
|
||||||
seen.add(endpoint)
|
|
||||||
for endpoint in sorted(duplicates):
|
|
||||||
errors.append(f"docs/commands.md: duplicated API endpoint row -> {endpoint}")
|
|
||||||
|
|
||||||
|
|
||||||
def validate_testing_doc_has_dynamic_count_guidance(errors: list[str]) -> None:
|
|
||||||
text = _read(REQUIRED_FILES["testing"])
|
|
||||||
if "pytest --collect-only -q" not in text:
|
|
||||||
errors.append(
|
|
||||||
"docs/testing.md: missing dynamic test count guidance "
|
|
||||||
"(pytest --collect-only -q)"
|
|
||||||
)
|
|
||||||
|
|
||||||
|
|
||||||
def main() -> int:
|
def main() -> int:
|
||||||
errors: list[str] = []
|
errors: list[str] = []
|
||||||
|
|
||||||
validate_required_files_exist(errors)
|
docs_index = ROOT / "docs" / "README.md"
|
||||||
if errors:
|
readme = ROOT / "README.md"
|
||||||
print("[FAIL] docs sync validation failed")
|
claude = ROOT / "CLAUDE.md"
|
||||||
for err in errors:
|
commands = ROOT / "docs" / "commands.md"
|
||||||
print(f"- {err}")
|
|
||||||
return 1
|
|
||||||
|
|
||||||
readme_text = _read(REQUIRED_FILES["README.md"])
|
docs_index_text = read_text(docs_index, errors)
|
||||||
claude_text = _read(REQUIRED_FILES["CLAUDE.md"])
|
readme_text = read_text(readme, errors)
|
||||||
validate_links_resolve(REQUIRED_FILES["README.md"], readme_text, errors)
|
claude_text = read_text(claude, errors)
|
||||||
validate_links_resolve(REQUIRED_FILES["CLAUDE.md"], claude_text, errors)
|
commands_text = read_text(commands, errors)
|
||||||
validate_links_resolve(
|
|
||||||
REQUIRED_FILES["commands"], _read(REQUIRED_FILES["commands"]), errors
|
|
||||||
)
|
|
||||||
validate_links_resolve(REQUIRED_FILES["testing"], _read(REQUIRED_FILES["testing"]), errors)
|
|
||||||
validate_links_resolve(
|
|
||||||
REQUIRED_FILES["workflow"], _read(REQUIRED_FILES["workflow"]), errors
|
|
||||||
)
|
|
||||||
|
|
||||||
validate_summary_docs_reference_core_docs(errors)
|
if docs_index_text and "Single Source of Truth" not in docs_index_text:
|
||||||
validate_commands_endpoint_duplicates(errors)
|
errors.append("docs/README.md: missing 'Single Source of Truth' section")
|
||||||
validate_testing_doc_has_dynamic_count_guidance(errors)
|
|
||||||
|
if readme_text and "docs/README.md" not in readme_text:
|
||||||
|
errors.append("README.md: missing docs/README.md routing link")
|
||||||
|
if claude_text and "docs/README.md" not in claude_text:
|
||||||
|
errors.append("CLAUDE.md: missing docs/README.md routing link")
|
||||||
|
|
||||||
|
# Prevent volatile hard-coded scale numbers in summary docs.
|
||||||
|
volatile_patterns: list[tuple[str, re.Pattern[str]]] = [
|
||||||
|
("README.md", re.compile(r"\b\d+\s*개 테스트\b")),
|
||||||
|
("README.md", re.compile(r"\b\d+\s*tests\s+across\b", re.IGNORECASE)),
|
||||||
|
("README.md", re.compile(r"\(\d+\s*개 API\)")),
|
||||||
|
("README.md", re.compile(r"\(\d+\s*API endpoints?\)", re.IGNORECASE)),
|
||||||
|
("CLAUDE.md", re.compile(r"\b\d+\s*tests\s+across\b", re.IGNORECASE)),
|
||||||
|
("CLAUDE.md", re.compile(r"\(\d+\s*API endpoints?\)", re.IGNORECASE)),
|
||||||
|
("docs/commands.md", re.compile(r"Run full test suite with coverage\s*\(\d+", re.IGNORECASE)),
|
||||||
|
]
|
||||||
|
text_by_name = {
|
||||||
|
"README.md": readme_text,
|
||||||
|
"CLAUDE.md": claude_text,
|
||||||
|
"docs/commands.md": commands_text,
|
||||||
|
}
|
||||||
|
for name, pattern in volatile_patterns:
|
||||||
|
text = text_by_name.get(name, "")
|
||||||
|
if text and pattern.search(text):
|
||||||
|
errors.append(f"{name}: contains volatile hard-coded scale text ({pattern.pattern})")
|
||||||
|
|
||||||
|
# Command doc should list all dashboard endpoints exposed by app.py.
|
||||||
|
for endpoint in ("/api/pnl/history", "/api/positions"):
|
||||||
|
if commands_text and endpoint not in commands_text:
|
||||||
|
errors.append(f"docs/commands.md: missing dashboard endpoint {endpoint}")
|
||||||
|
|
||||||
if errors:
|
if errors:
|
||||||
print("[FAIL] docs sync validation failed")
|
print("[FAIL] docs sync validation failed")
|
||||||
@@ -124,10 +70,7 @@ def main() -> int:
|
|||||||
print(f"- {err}")
|
print(f"- {err}")
|
||||||
return 1
|
return 1
|
||||||
|
|
||||||
print("[OK] docs sync validated")
|
print("[OK] docs sync validation passed")
|
||||||
print("[OK] summary docs link to core docs and links resolve")
|
|
||||||
print("[OK] commands endpoint rows have no duplicates")
|
|
||||||
print("[OK] testing doc includes dynamic count guidance")
|
|
||||||
return 0
|
return 0
|
||||||
|
|
||||||
|
|
||||||
|
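The volatile-pattern guard added above can be exercised on its own. A minimal sketch, reusing two of the pattern strings from the diff (the sample doc text is hypothetical):

```python
import re

# Patterns copied from the new main(): they flag hard-coded scale numbers
# that drift out of date in summary docs.
api_count = re.compile(r"\(\d+\s*API endpoints?\)", re.IGNORECASE)
tests_across = re.compile(r"\b\d+\s*tests\s+across\b", re.IGNORECASE)

sample = "The dashboard (12 API endpoints) runs 340 tests across 5 suites."
hits = [p.pattern for p in (api_count, tests_across) if p.search(sample)]
print(len(hits))  # 2: both volatile numbers are caught
```

Phrasing counts dynamically ("several API endpoints", or none at all) passes the guard, which is exactly the behavior the docs-sync policy wants.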
@@ -3,10 +3,10 @@
 
 from __future__ import annotations
 
-import os
-import re
 import subprocess
 import sys
+import os
+import re
 from pathlib import Path
 
 REQUIREMENTS_REGISTRY = "docs/ouroboros/01_requirements_registry.md"
@@ -15,8 +15,6 @@ TASK_DEF_LINE = re.compile(r"^-\s+`(?P<task_id>TASK-[A-Z0-9-]+-\d{3})`(?P<body>.
 REQ_ID_IN_LINE = re.compile(r"\bREQ-[A-Z0-9-]+-\d{3}\b")
 TASK_ID_IN_TEXT = re.compile(r"\bTASK-[A-Z0-9-]+-\d{3}\b")
 TEST_ID_IN_TEXT = re.compile(r"\bTEST-[A-Z0-9-]+-\d{3}\b")
-READ_ONLY_FILES = {"src/core/risk_manager.py"}
-PLACEHOLDER_VALUES = {"", "tbd", "n/a", "na", "none", "<link>", "<required>"}
 
 
 def must_contain(path: Path, required: list[str], errors: list[str]) -> None:
@@ -120,55 +118,6 @@ def validate_pr_traceability(warnings: list[str]) -> None:
         warnings.append("PR text missing TEST-ID reference")
 
 
-def _parse_pr_evidence_line(text: str, field: str) -> str | None:
-    pattern = re.compile(rf"^\s*-\s*{re.escape(field)}:\s*(?P<value>.+?)\s*$", re.MULTILINE)
-    match = pattern.search(text)
-    if not match:
-        return None
-    return match.group("value").strip()
-
-
-def _is_placeholder(value: str | None) -> bool:
-    if value is None:
-        return True
-    normalized = value.strip().lower()
-    return normalized in PLACEHOLDER_VALUES
-
-
-def validate_read_only_approval(
-    changed_files: list[str], errors: list[str], warnings: list[str]
-) -> None:
-    changed_set = set(changed_files)
-    touched = sorted(path for path in READ_ONLY_FILES if path in changed_set)
-    if not touched:
-        return
-
-    body = os.getenv("GOVERNANCE_PR_BODY", "").strip()
-    if not body:
-        warnings.append(
-            "READ-ONLY file changed but PR body is unavailable; approval evidence check skipped"
-        )
-        return
-
-    if "READ-ONLY Approval" not in body:
-        errors.append("READ-ONLY file changed without 'READ-ONLY Approval' section in PR body")
-        return
-
-    touched_field = _parse_pr_evidence_line(body, "Touched READ-ONLY files")
-    human_approval = _parse_pr_evidence_line(body, "Human approval")
-    test_suite_1 = _parse_pr_evidence_line(body, "Test suite 1")
-    test_suite_2 = _parse_pr_evidence_line(body, "Test suite 2")
-
-    if _is_placeholder(touched_field):
-        errors.append("READ-ONLY Approval section missing 'Touched READ-ONLY files' evidence")
-    if _is_placeholder(human_approval):
-        errors.append("READ-ONLY Approval section missing 'Human approval' evidence")
-    if _is_placeholder(test_suite_1):
-        errors.append("READ-ONLY Approval section missing 'Test suite 1' evidence")
-    if _is_placeholder(test_suite_2):
-        errors.append("READ-ONLY Approval section missing 'Test suite 2' evidence")
-
-
 def main() -> int:
     errors: list[str] = []
     warnings: list[str] = []
@@ -192,11 +141,6 @@ def main() -> int:
             "gh",
             "Session Handover Gate",
             "session_handover_check.py --strict",
-            "READ-ONLY Approval",
-            "Touched READ-ONLY files",
-            "Human approval",
-            "Test suite 1",
-            "Test suite 2",
         ],
         errors,
    )
@@ -243,7 +187,6 @@ def main() -> int:
     validate_registry_sync(changed_files, errors)
     validate_task_req_mapping(errors)
     validate_pr_traceability(warnings)
-    validate_read_only_approval(changed_files, errors, warnings)
 
     if errors:
         print("[FAIL] governance asset validation failed")
@@ -19,20 +19,9 @@ META_PATTERN = re.compile(
     re.MULTILINE,
 )
 ID_PATTERN = re.compile(r"\b(?:REQ|RULE|TASK|TEST|DOC)-[A-Z0-9-]+-\d{3}\b")
-DEF_PATTERN = re.compile(
-    r"^-\s+`(?P<id>(?:REQ|RULE|TASK|TEST|DOC)-[A-Z0-9-]+-\d{3})`",
-    re.MULTILINE,
-)
+DEF_PATTERN = re.compile(r"^-\s+`(?P<id>(?:REQ|RULE|TASK|TEST|DOC)-[A-Z0-9-]+-\d{3})`", re.MULTILINE)
 LINK_PATTERN = re.compile(r"\[[^\]]+\]\((?P<link>[^)]+)\)")
-LINE_DEF_PATTERN = re.compile(
-    r"^-\s+`(?P<id>(?:REQ|RULE|TASK|TEST|DOC)-[A-Z0-9-]+-\d{3})`.*$",
-    re.MULTILINE,
-)
-PLAN_LINK_PATTERN = re.compile(r"ouroboros_plan_v(?P<version>[23])\.txt$")
-ALLOWED_PLAN_TARGETS = {
-    "2": (DOC_DIR / "source" / "ouroboros_plan_v2.txt").resolve(),
-    "3": (DOC_DIR / "source" / "ouroboros_plan_v3.txt").resolve(),
-}
+LINE_DEF_PATTERN = re.compile(r"^-\s+`(?P<id>(?:REQ|RULE|TASK|TEST|DOC)-[A-Z0-9-]+-\d{3})`.*$", re.MULTILINE)
 
 
 def iter_docs() -> list[Path]:
@@ -51,47 +40,15 @@ def validate_metadata(path: Path, text: str, errors: list[str], doc_ids: dict[st
     doc_ids[doc_id] = path
 
 
-def validate_plan_source_link(path: Path, link: str, errors: list[str]) -> bool:
-    normalized = link.strip()
-    # Ignore in-page anchors and parse the filesystem part for validation.
-    link_path = normalized.split("#", 1)[0].strip()
-    if not link_path:
-        return False
-    match = PLAN_LINK_PATTERN.search(link_path)
-    if not match:
-        return False
-
-    version = match.group("version")
-    expected_target = ALLOWED_PLAN_TARGETS[version]
-    if link_path.startswith("/"):
-        errors.append(
-            f"{path}: invalid plan link path -> {link} "
-            f"(use ./source/ouroboros_plan_v{version}.txt)"
-        )
-        return True
-
-    resolved_target = (path.parent / link_path).resolve()
-    if resolved_target != expected_target:
-        errors.append(
-            f"{path}: invalid plan link path -> {link} "
-            f"(must resolve to docs/ouroboros/source/ouroboros_plan_v{version}.txt)"
-        )
-        return True
-    return False
-
-
 def validate_links(path: Path, text: str, errors: list[str]) -> None:
     for m in LINK_PATTERN.finditer(text):
         link = m.group("link").strip()
         if not link or link.startswith("http") or link.startswith("#"):
             continue
-        if validate_plan_source_link(path, link, errors):
-            continue
-        link_path = link.split("#", 1)[0].strip()
-        if link_path.startswith("/"):
-            target = Path(link_path)
+        if link.startswith("/"):
+            target = Path(link)
         else:
-            target = (path.parent / link_path).resolve()
+            target = (path.parent / link).resolve()
         if not target.exists():
             errors.append(f"{path}: broken link -> {link}")
 
@@ -104,9 +61,7 @@ def collect_ids(path: Path, text: str, defs: dict[str, Path], refs: dict[str, se
         refs.setdefault(idv, set()).add(path)
 
 
-def collect_req_traceability(
-    text: str, req_to_task: dict[str, set[str]], req_to_test: dict[str, set[str]]
-) -> None:
+def collect_req_traceability(text: str, req_to_task: dict[str, set[str]], req_to_test: dict[str, set[str]]) -> None:
     for m in LINE_DEF_PATTERN.finditer(text):
         line = m.group(0)
         item_id = m.group("id")
@@ -2,8 +2,8 @@
 
 from __future__ import annotations
 
-import math
 from dataclasses import dataclass
+import math
 
 
 @dataclass(frozen=True)
@@ -2,11 +2,12 @@
 
 from __future__ import annotations
 
-import math
 from dataclasses import dataclass
+import math
 from random import Random
 from typing import Literal
 
 
 OrderSide = Literal["BUY", "SELL"]
@@ -76,9 +77,7 @@ class BacktestExecutionModel:
                 reason="execution_failure",
             )
 
-        slip_mult = 1.0 + (
-            slippage_bps / 10000.0 if request.side == "BUY" else -slippage_bps / 10000.0
-        )
+        slip_mult = 1.0 + (slippage_bps / 10000.0 if request.side == "BUY" else -slippage_bps / 10000.0)
         exec_price = request.reference_price * slip_mult
 
         if self._rng.random() < partial_rate:
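The slippage expression reflowed above is plain basis-point arithmetic: buys pay up, sells receive less. A minimal sketch with hypothetical numbers (5 bps on a reference price of 100.0):

```python
def slip_price(reference_price: float, slippage_bps: float, side: str) -> float:
    # Mirrors the diff's expression: 1 bp = 1/10000 of price,
    # signed by order side.
    slip_mult = 1.0 + (slippage_bps / 10000.0 if side == "BUY" else -slippage_bps / 10000.0)
    return reference_price * slip_mult

print(slip_price(100.0, 5.0, "BUY"))   # ≈ 100.05
print(slip_price(100.0, 5.0, "SELL"))  # ≈ 99.95
```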
@@ -10,7 +10,8 @@ from collections.abc import Sequence
 from dataclasses import dataclass
 from datetime import datetime
 from statistics import mean
-from typing import Literal, cast
+from typing import Literal
+from typing import cast
 
 from src.analysis.backtest_cost_guard import BacktestCostModel, validate_backtest_cost_model
 from src.analysis.triple_barrier import TripleBarrierSpec, label_with_triple_barrier
@@ -104,7 +104,6 @@ class MarketScanner:
 
         # Store in L7 real-time layer
         from datetime import UTC, datetime
-
         timeframe = datetime.now(UTC).isoformat()
         self.context_store.set_context(
             ContextLayer.L7_REALTIME,
@@ -159,8 +158,12 @@ class MarketScanner:
         top_movers = valid_metrics[: self.top_n]
 
         # Detect breakouts and breakdowns
-        breakouts = [m.stock_code for m in valid_metrics if self.analyzer.is_breakout(m)]
-        breakdowns = [m.stock_code for m in valid_metrics if self.analyzer.is_breakdown(m)]
+        breakouts = [
+            m.stock_code for m in valid_metrics if self.analyzer.is_breakout(m)
+        ]
+        breakdowns = [
+            m.stock_code for m in valid_metrics if self.analyzer.is_breakdown(m)
+        ]
 
         logger.info(
             "%s scan complete: %d scanned, top momentum=%.1f, %d breakouts, %d breakdowns",
@@ -225,9 +228,10 @@ class MarketScanner:
 
         # If we removed too many, backfill from current watchlist
         if len(updated) < len(current_watchlist):
-            backfill = [code for code in current_watchlist if code not in updated][
-                : len(current_watchlist) - len(updated)
-            ]
+            backfill = [
+                code for code in current_watchlist
+                if code not in updated
+            ][: len(current_watchlist) - len(updated)]
             updated.extend(backfill)
 
         logger.info(
@@ -158,12 +158,7 @@ class SmartVolatilityScanner:
         price = latest_close
         latest_high = _safe_float(latest.get("high"))
         latest_low = _safe_float(latest.get("low"))
-        if (
-            latest_close > 0
-            and latest_high > 0
-            and latest_low > 0
-            and latest_high >= latest_low
-        ):
+        if latest_close > 0 and latest_high > 0 and latest_low > 0 and latest_high >= latest_low:
             intraday_range_pct = (latest_high - latest_low) / latest_close * 100.0
         if volume <= 0:
             volume = _safe_float(latest.get("volume"))
@@ -239,7 +234,9 @@ class SmartVolatilityScanner:
                 limit=50,
             )
         except Exception as exc:
-            logger.warning("Overseas fluctuation ranking failed for %s: %s", market.code, exc)
+            logger.warning(
+                "Overseas fluctuation ranking failed for %s: %s", market.code, exc
+            )
             fluct_rows = []
 
         if not fluct_rows:
@@ -253,7 +250,9 @@ class SmartVolatilityScanner:
                 limit=50,
             )
         except Exception as exc:
-            logger.warning("Overseas volume ranking failed for %s: %s", market.code, exc)
+            logger.warning(
+                "Overseas volume ranking failed for %s: %s", market.code, exc
+            )
             volume_rows = []
 
         for idx, row in enumerate(volume_rows):
@@ -434,10 +433,16 @@ def _extract_intraday_range_pct(row: dict[str, Any], price: float) -> float:
     if price <= 0:
         return 0.0
     high = _safe_float(
-        row.get("high") or row.get("ovrs_hgpr") or row.get("stck_hgpr") or row.get("day_hgpr")
+        row.get("high")
+        or row.get("ovrs_hgpr")
+        or row.get("stck_hgpr")
+        or row.get("day_hgpr")
     )
     low = _safe_float(
-        row.get("low") or row.get("ovrs_lwpr") or row.get("stck_lwpr") or row.get("day_lwpr")
+        row.get("low")
+        or row.get("ovrs_lwpr")
+        or row.get("stck_lwpr")
+        or row.get("day_lwpr")
     )
     if high <= 0 or low <= 0 or high < low:
         return 0.0
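The reflowed `or`-chains above pick the first truthy value among vendor-specific field names (generic, overseas, domestic, daily). A roughly equivalent standalone sketch, with a hypothetical helper name and sample row:

```python
def first_positive(row: dict, *keys: str) -> float:
    # Approximates _safe_float(a or b or c or d): take the first
    # truthy candidate, coerce to float, else 0.0.
    for key in keys:
        value = row.get(key)
        if value:
            return float(value)
    return 0.0

# Row shaped like an overseas-market payload: only ovrs_* keys present.
row = {"ovrs_hgpr": "105.5", "ovrs_lwpr": "101.0"}
high = first_positive(row, "high", "ovrs_hgpr", "stck_hgpr", "day_hgpr")
low = first_positive(row, "low", "ovrs_lwpr", "stck_lwpr", "day_lwpr")
print((high - low) / 100.0 * 100.0)  # intraday range as % of a 100.0 price: 4.5
```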
@@ -6,10 +6,10 @@ Implements first-touch labeling with upper/lower/time barriers.
 
 from __future__ import annotations
 
 import warnings
-from collections.abc import Sequence
 from dataclasses import dataclass
 from datetime import datetime, timedelta
-from typing import Literal
+from typing import Literal, Sequence
 
 
 TieBreakMode = Literal["stop_first", "take_first"]
@@ -92,10 +92,7 @@ def label_with_triple_barrier(
     else:
         assert spec.max_holding_bars is not None
         warnings.warn(
-            (
-                "TripleBarrierSpec.max_holding_bars is deprecated; "
-                "use max_holding_minutes with timestamps instead."
-            ),
+            "TripleBarrierSpec.max_holding_bars is deprecated; use max_holding_minutes with timestamps instead.",
             DeprecationWarning,
             stacklevel=2,
         )
@@ -92,7 +92,9 @@ class VolatilityAnalyzer:
         recent_tr = true_ranges[-period:]
         return sum(recent_tr) / len(recent_tr)
 
-    def calculate_price_change(self, current_price: float, past_price: float) -> float:
+    def calculate_price_change(
+        self, current_price: float, past_price: float
+    ) -> float:
         """Calculate price change percentage.
 
         Args:
@@ -106,7 +108,9 @@ class VolatilityAnalyzer:
             return 0.0
         return ((current_price - past_price) / past_price) * 100
 
-    def calculate_volume_surge(self, current_volume: float, avg_volume: float) -> float:
+    def calculate_volume_surge(
+        self, current_volume: float, avg_volume: float
+    ) -> float:
         """Calculate volume surge ratio.
 
         Args:
@@ -236,7 +240,11 @@ class VolatilityAnalyzer:
             Momentum score (0-100)
         """
         # Weight recent changes more heavily
-        weighted_change = price_change_1m * 0.4 + price_change_5m * 0.3 + price_change_15m * 0.2
+        weighted_change = (
+            price_change_1m * 0.4 +
+            price_change_5m * 0.3 +
+            price_change_15m * 0.2
+        )
 
         # Volume contribution (normalized to 0-10 scale)
         volume_contribution = min(10.0, (volume_surge - 1.0) * 5.0)
@@ -293,11 +301,17 @@ class VolatilityAnalyzer:
 
         if len(close_prices) > 0:
             if len(close_prices) >= 1:
-                price_change_1m = self.calculate_price_change(current_price, close_prices[-1])
+                price_change_1m = self.calculate_price_change(
+                    current_price, close_prices[-1]
+                )
             if len(close_prices) >= 5:
-                price_change_5m = self.calculate_price_change(current_price, close_prices[-5])
+                price_change_5m = self.calculate_price_change(
+                    current_price, close_prices[-5]
+                )
             if len(close_prices) >= 15:
-                price_change_15m = self.calculate_price_change(current_price, close_prices[-15])
+                price_change_15m = self.calculate_price_change(
+                    current_price, close_prices[-15]
+                )
 
         # Calculate volume surge
         avg_volume = sum(volumes) / len(volumes) if volumes else current_volume
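The momentum arithmetic in the hunks above is easy to sanity-check in isolation. A sketch with the weights and the volume normalization taken from the diff; combining the two terms by simple addition is an assumption here, since the full scoring function is not shown in the hunk:

```python
def momentum_core(p1: float, p5: float, p15: float, volume_surge: float) -> float:
    # Weights from the diff: recent price changes count more.
    weighted_change = p1 * 0.4 + p5 * 0.3 + p15 * 0.2
    # Volume contribution normalized to a 0-10 scale, also from the diff.
    volume_contribution = min(10.0, (volume_surge - 1.0) * 5.0)
    # ASSUMPTION: summing the terms; the real combination is outside this hunk.
    return weighted_change + volume_contribution

# A 2x volume surge contributes exactly (2.0 - 1.0) * 5.0 = 5.0.
print(momentum_core(2.0, 1.0, 0.5, 2.0))  # ≈ 6.2
```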
@@ -7,9 +7,9 @@ This module provides:
 - Health monitoring and alerts
 """
 
-from src.backup.cloud_storage import CloudStorage, S3Config
 from src.backup.exporter import BackupExporter, ExportFormat
-from src.backup.scheduler import BackupPolicy, BackupScheduler
+from src.backup.scheduler import BackupScheduler, BackupPolicy
+from src.backup.cloud_storage import CloudStorage, S3Config
 
 __all__ = [
     "BackupExporter",
@@ -94,9 +94,7 @@ class CloudStorage:
         if metadata:
             extra_args["Metadata"] = metadata
 
-        logger.info(
-            "Uploading %s to s3://%s/%s", file_path.name, self.config.bucket_name, object_key
-        )
+        logger.info("Uploading %s to s3://%s/%s", file_path.name, self.config.bucket_name, object_key)
 
         try:
             self.client.upload_file(
@@ -14,14 +14,14 @@ import json
 import logging
 import sqlite3
 from datetime import UTC, datetime
-from enum import StrEnum
+from enum import Enum
 from pathlib import Path
 from typing import Any
 
 logger = logging.getLogger(__name__)
 
 
-class ExportFormat(StrEnum):
+class ExportFormat(str, Enum):
     """Supported export formats."""
 
     JSON = "json"
@@ -103,11 +103,15 @@ class BackupExporter:
         elif fmt == ExportFormat.CSV:
             return self._export_csv(output_dir, timestamp, compress, incremental_since)
         elif fmt == ExportFormat.PARQUET:
-            return self._export_parquet(output_dir, timestamp, compress, incremental_since)
+            return self._export_parquet(
+                output_dir, timestamp, compress, incremental_since
+            )
         else:
             raise ValueError(f"Unsupported format: {fmt}")
 
-    def _get_trades(self, incremental_since: datetime | None = None) -> list[dict[str, Any]]:
+    def _get_trades(
+        self, incremental_since: datetime | None = None
+    ) -> list[dict[str, Any]]:
         """Fetch trades from database.
 
         Args:
@@ -160,7 +164,9 @@ class BackupExporter:
 
         data = {
             "export_timestamp": datetime.now(UTC).isoformat(),
-            "incremental_since": (incremental_since.isoformat() if incremental_since else None),
+            "incremental_since": (
+                incremental_since.isoformat() if incremental_since else None
+            ),
             "record_count": len(trades),
             "trades": trades,
         }
@@ -278,7 +284,8 @@ class BackupExporter:
             import pyarrow.parquet as pq
         except ImportError:
             raise ImportError(
-                "pyarrow is required for Parquet export. Install with: pip install pyarrow"
+                "pyarrow is required for Parquet export. "
+                "Install with: pip install pyarrow"
            )
 
         # Convert to pyarrow table
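Several hunks in this changeset swap `enum.StrEnum` for the `(str, Enum)` mixin. The two behave the same for value equality and lookup, but `StrEnum` only exists on Python 3.11+, which is presumably the motivation (one caveat: `str()` of a `(str, Enum)` member yields `"ExportFormat.JSON"` rather than `"json"`, so code relying on implicit string conversion should use `.value`). A quick sketch:

```python
from enum import Enum

class ExportFormat(str, Enum):
    """Mixin form used by the new code; works on Python < 3.11."""
    JSON = "json"
    CSV = "csv"

print(ExportFormat.JSON == "json")              # True: compares equal to its value
print(ExportFormat("csv") is ExportFormat.CSV)  # True: lookup by value works
print(ExportFormat.JSON.value)                  # "json": explicit string form
```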
@@ -14,14 +14,14 @@ import shutil
 import sqlite3
 from dataclasses import dataclass
 from datetime import UTC, datetime, timedelta
-from enum import StrEnum
+from enum import Enum
 from pathlib import Path
 from typing import Any
 
 logger = logging.getLogger(__name__)
 
 
-class HealthStatus(StrEnum):
+class HealthStatus(str, Enum):
     """Health check status."""
 
     HEALTHY = "healthy"
@@ -137,13 +137,9 @@ class HealthMonitor:
         used_percent = (stat.used / stat.total) * 100
 
         if stat.free < self.min_disk_space_bytes:
-            min_disk_gb = self.min_disk_space_bytes / 1024 / 1024 / 1024
             return HealthCheckResult(
                 status=HealthStatus.UNHEALTHY,
-                message=(
-                    f"Low disk space: {free_gb:.2f} GB free "
-                    f"(minimum: {min_disk_gb:.2f} GB)"
-                ),
+                message=f"Low disk space: {free_gb:.2f} GB free (minimum: {self.min_disk_space_bytes / 1024 / 1024 / 1024:.2f} GB)",
                 details={
                     "free_gb": free_gb,
                     "total_gb": total_gb,
@@ -12,14 +12,14 @@ import logging
 import shutil
 from dataclasses import dataclass
 from datetime import UTC, datetime, timedelta
-from enum import StrEnum
+from enum import Enum
 from pathlib import Path
 from typing import Any
 
 logger = logging.getLogger(__name__)
 
 
-class BackupPolicy(StrEnum):
+class BackupPolicy(str, Enum):
     """Backup retention policies."""
 
     DAILY = "daily"
@@ -69,7 +69,9 @@ class BackupScheduler:
         for d in [self.daily_dir, self.weekly_dir, self.monthly_dir]:
             d.mkdir(parents=True, exist_ok=True)
 
-    def create_backup(self, policy: BackupPolicy, verify: bool = True) -> BackupMetadata:
+    def create_backup(
+        self, policy: BackupPolicy, verify: bool = True
+    ) -> BackupMetadata:
         """Create a database backup.
 
         Args:
@@ -227,7 +229,9 @@ class BackupScheduler:
 
         return removed
 
-    def list_backups(self, policy: BackupPolicy | None = None) -> list[BackupMetadata]:
+    def list_backups(
+        self, policy: BackupPolicy | None = None
+    ) -> list[BackupMetadata]:
         """List available backups.
 
         Args:
@@ -13,8 +13,8 @@ import hashlib
 import json
 import logging
 import time
-from dataclasses import dataclass
-from typing import TYPE_CHECKING, Any
+from dataclasses import dataclass, field
+from typing import Any, TYPE_CHECKING
 
 if TYPE_CHECKING:
     from src.brain.gemini_client import TradeDecision
@@ -26,7 +26,7 @@ logger = logging.getLogger(__name__)
 class CacheEntry:
     """Cached decision with metadata."""
 
-    decision: TradeDecision
+    decision: "TradeDecision"
     cached_at: float  # Unix timestamp
     hit_count: int = 0
     market_data_hash: str = ""
@@ -239,7 +239,9 @@ class DecisionCache:
         """
         current_time = time.time()
         expired_keys = [
-            k for k, v in self._cache.items() if current_time - v.cached_at > self.ttl_seconds
+            k
+            for k, v in self._cache.items()
+            if current_time - v.cached_at > self.ttl_seconds
        ]
 
         count = len(expired_keys)
|||||||
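The `decision: TradeDecision` → `decision: "TradeDecision"` change above pairs with the `TYPE_CHECKING`-guarded import: the quoted annotation is never evaluated at runtime, so the class need not be importable when the dataclass is defined. A minimal, self-contained sketch of the pattern (the `src.brain.gemini_client` import never executes here, and the `"HOLD"` value is a stand-in, not a real `TradeDecision`):

```python
from dataclasses import dataclass
from typing import TYPE_CHECKING

if TYPE_CHECKING:
    # Seen only by type checkers; skipped at runtime, so no import cycle.
    from src.brain.gemini_client import TradeDecision


@dataclass
class CacheEntry:
    """Cached decision with metadata (minimal sketch)."""

    # Quoted (deferred) annotation: evaluated lazily, if ever,
    # so the TYPE_CHECKING-only import above is sufficient.
    decision: "TradeDecision"
    cached_at: float = 0.0
    hit_count: int = 0


entry = CacheEntry(decision="HOLD", cached_at=123.0)
```

Without the quotes (and without `from __future__ import annotations`), defining the dataclass would raise `NameError` at import time.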
@@ -11,14 +11,14 @@ from __future__ import annotations
 
 from dataclasses import dataclass
 from datetime import UTC, datetime
-from enum import StrEnum
+from enum import Enum
 from typing import Any
 
 from src.context.layer import ContextLayer
 from src.context.store import ContextStore
 
 
-class DecisionType(StrEnum):
+class DecisionType(str, Enum):
     """Type of trading decision being made."""
 
     NORMAL = "normal"  # Regular trade decision
@@ -183,7 +183,9 @@ class ContextSelector:
             ContextLayer.L1_LEGACY,
         ]
 
-        scores = {layer: self.score_layer_relevance(layer, decision_type) for layer in all_layers}
+        scores = {
+            layer: self.score_layer_relevance(layer, decision_type) for layer in all_layers
+        }
 
         # Filter by minimum score
         selected_layers = [layer for layer, score in scores.items() if score >= min_score]
@@ -25,12 +25,12 @@ from typing import Any
 
 from google import genai
 
-from src.brain.cache import DecisionCache
-from src.brain.prompt_optimizer import PromptOptimizer
 from src.config import Settings
+from src.data.news_api import NewsAPI, NewsSentiment
 from src.data.economic_calendar import EconomicCalendar
 from src.data.market_data import MarketData
-from src.data.news_api import NewsAPI, NewsSentiment
+from src.brain.cache import DecisionCache
+from src.brain.prompt_optimizer import PromptOptimizer
 
 logger = logging.getLogger(__name__)
 
@@ -159,12 +159,16 @@ class GeminiClient:
             return ""
 
         # Check for upcoming high-impact events
-        upcoming = self._economic_calendar.get_upcoming_events(days_ahead=7, min_impact="HIGH")
+        upcoming = self._economic_calendar.get_upcoming_events(
+            days_ahead=7, min_impact="HIGH"
+        )
 
         if upcoming.high_impact_count == 0:
             return ""
 
-        lines = [f"Upcoming High-Impact Events: {upcoming.high_impact_count} in next 7 days"]
+        lines = [
+            f"Upcoming High-Impact Events: {upcoming.high_impact_count} in next 7 days"
+        ]
 
         if upcoming.next_major_event is not None:
             event = upcoming.next_major_event
@@ -176,7 +180,9 @@ class GeminiClient:
         # Check for earnings
         earnings_date = self._economic_calendar.get_earnings_date(stock_code)
         if earnings_date is not None:
-            lines.append(f" Earnings: {stock_code} on {earnings_date.strftime('%Y-%m-%d')}")
+            lines.append(
+                f" Earnings: {stock_code} on {earnings_date.strftime('%Y-%m-%d')}"
+            )
 
         return "\n".join(lines)
 
@@ -229,7 +235,9 @@ class GeminiClient:
 
         # Add foreigner net if non-zero
         if market_data.get("foreigner_net", 0) != 0:
-            market_info_lines.append(f"Foreigner Net Buy/Sell: {market_data['foreigner_net']}")
+            market_info_lines.append(
+                f"Foreigner Net Buy/Sell: {market_data['foreigner_net']}"
+            )
 
         market_info = "\n".join(market_info_lines)
 
@@ -241,7 +249,8 @@ class GeminiClient:
             market_info += f"\n\n{external_context}"
 
         json_format = (
-            '{"action": "BUY"|"SELL"|"HOLD", "confidence": <int 0-100>, "rationale": "<string>"}'
+            '{"action": "BUY"|"SELL"|"HOLD", '
+            '"confidence": <int 0-100>, "rationale": "<string>"}'
         )
         return (
             f"You are a professional {market_name} trading analyst.\n"
@@ -280,12 +289,15 @@ class GeminiClient:
 
         # Add foreigner net if non-zero
         if market_data.get("foreigner_net", 0) != 0:
-            market_info_lines.append(f"Foreigner Net Buy/Sell: {market_data['foreigner_net']}")
+            market_info_lines.append(
+                f"Foreigner Net Buy/Sell: {market_data['foreigner_net']}"
+            )
 
         market_info = "\n".join(market_info_lines)
 
         json_format = (
-            '{"action": "BUY"|"SELL"|"HOLD", "confidence": <int 0-100>, "rationale": "<string>"}'
+            '{"action": "BUY"|"SELL"|"HOLD", '
+            '"confidence": <int 0-100>, "rationale": "<string>"}'
         )
         return (
             f"You are a professional {market_name} trading analyst.\n"
@@ -327,19 +339,25 @@ class GeminiClient:
             data = json.loads(cleaned)
         except json.JSONDecodeError:
             logger.warning("Malformed JSON from Gemini — defaulting to HOLD")
-            return TradeDecision(action="HOLD", confidence=0, rationale="Malformed JSON response")
+            return TradeDecision(
+                action="HOLD", confidence=0, rationale="Malformed JSON response"
+            )
 
         # Validate required fields
         if not all(k in data for k in ("action", "confidence", "rationale")):
             logger.warning("Missing fields in Gemini response — defaulting to HOLD")
             # Preserve raw text in rationale so prompt_override callers (e.g. pre_market_planner)
             # can extract their own JSON format from decision.rationale (#245)
-            return TradeDecision(action="HOLD", confidence=0, rationale=raw)
+            return TradeDecision(
+                action="HOLD", confidence=0, rationale=raw
+            )
 
         action = str(data["action"]).upper()
         if action not in VALID_ACTIONS:
             logger.warning("Invalid action '%s' from Gemini — defaulting to HOLD", action)
-            return TradeDecision(action="HOLD", confidence=0, rationale=f"Invalid action: {action}")
+            return TradeDecision(
+                action="HOLD", confidence=0, rationale=f"Invalid action: {action}"
+            )
 
         confidence = int(data["confidence"])
         rationale = str(data["rationale"])
@@ -427,7 +445,9 @@ class GeminiClient:
         # not a parsed TradeDecision. Skip parse_response to avoid spurious
         # "Missing fields" warnings and return the raw response directly. (#247)
         if "prompt_override" in market_data:
-            logger.info("Gemini raw response received (prompt_override, tokens=%d)", token_count)
+            logger.info(
+                "Gemini raw response received (prompt_override, tokens=%d)", token_count
+            )
             # Not a trade decision — don't inflate _total_decisions metrics
             return TradeDecision(
                 action="HOLD", confidence=0, rationale=raw, token_count=token_count
@@ -526,7 +546,9 @@ class GeminiClient:
     # Batch Decision Making (for daily trading mode)
     # ------------------------------------------------------------------
 
-    async def decide_batch(self, stocks_data: list[dict[str, Any]]) -> dict[str, TradeDecision]:
+    async def decide_batch(
+        self, stocks_data: list[dict[str, Any]]
+    ) -> dict[str, TradeDecision]:
         """Make decisions for multiple stocks in a single API call.
 
         This is designed for daily trading mode to minimize API usage
@@ -179,8 +179,7 @@ class PromptOptimizer:
             # Minimal instructions
             prompt = (
                 f"{market_name} trader. Analyze:\n{data_str}\n\n"
-                "Return JSON: "
-                '{"action":"BUY"|"SELL"|"HOLD","confidence":<0-100>,"rationale":"<text>"}\n'
+                'Return JSON: {"action":"BUY"|"SELL"|"HOLD","confidence":<0-100>,"rationale":"<text>"}\n'
                 "Rules: action=BUY/SELL/HOLD, confidence=0-100, rationale=concise. No markdown."
             )
         else:
@@ -103,8 +103,7 @@ class KISBroker:
             ssl_ctx.verify_mode = ssl.CERT_NONE
             connector = aiohttp.TCPConnector(ssl=ssl_ctx)
             self._session = aiohttp.ClientSession(
-                timeout=timeout,
-                connector=connector,
+                timeout=timeout, connector=connector,
             )
         return self._session
 
@@ -225,12 +224,16 @@ class KISBroker:
             async with session.get(url, headers=headers, params=params) as resp:
                 if resp.status != 200:
                     text = await resp.text()
-                    raise ConnectionError(f"get_orderbook failed ({resp.status}): {text}")
+                    raise ConnectionError(
+                        f"get_orderbook failed ({resp.status}): {text}"
+                    )
                 return await resp.json()
         except (TimeoutError, aiohttp.ClientError) as exc:
             raise ConnectionError(f"Network error fetching orderbook: {exc}") from exc
 
-    async def get_current_price(self, stock_code: str) -> tuple[float, float, float]:
+    async def get_current_price(
+        self, stock_code: str
+    ) -> tuple[float, float, float]:
         """Fetch current price data for a domestic stock.
 
         Uses the ``inquire-price`` API (FHKST01010100), which works in both
@@ -262,7 +265,9 @@ class KISBroker:
             async with session.get(url, headers=headers, params=params) as resp:
                 if resp.status != 200:
                     text = await resp.text()
-                    raise ConnectionError(f"get_current_price failed ({resp.status}): {text}")
+                    raise ConnectionError(
+                        f"get_current_price failed ({resp.status}): {text}"
+                    )
                 data = await resp.json()
                 out = data.get("output", {})
                 return (
@@ -271,7 +276,9 @@ class KISBroker:
                     _f(out.get("frgn_ntby_qty")),
                 )
         except (TimeoutError, aiohttp.ClientError) as exc:
-            raise ConnectionError(f"Network error fetching current price: {exc}") from exc
+            raise ConnectionError(
+                f"Network error fetching current price: {exc}"
+            ) from exc
 
     async def get_balance(self) -> dict[str, Any]:
         """Fetch current account balance and holdings."""
@@ -301,7 +308,9 @@ class KISBroker:
             async with session.get(url, headers=headers, params=params) as resp:
                 if resp.status != 200:
                     text = await resp.text()
-                    raise ConnectionError(f"get_balance failed ({resp.status}): {text}")
+                    raise ConnectionError(
+                        f"get_balance failed ({resp.status}): {text}"
+                    )
                 return await resp.json()
         except (TimeoutError, aiohttp.ClientError) as exc:
             raise ConnectionError(f"Network error fetching balance: {exc}") from exc
@@ -360,7 +369,9 @@ class KISBroker:
             async with session.post(url, headers=headers, json=body) as resp:
                 if resp.status != 200:
                     text = await resp.text()
-                    raise ConnectionError(f"send_order failed ({resp.status}): {text}")
+                    raise ConnectionError(
+                        f"send_order failed ({resp.status}): {text}"
+                    )
                 data = await resp.json()
                 logger.info(
                     "Order submitted",
@@ -438,7 +449,9 @@ class KISBroker:
             async with session.get(url, headers=headers, params=params) as resp:
                 if resp.status != 200:
                     text = await resp.text()
-                    raise ConnectionError(f"fetch_market_rankings failed ({resp.status}): {text}")
+                    raise ConnectionError(
+                        f"fetch_market_rankings failed ({resp.status}): {text}"
+                    )
                 data = await resp.json()
 
                 # Parse response - output is a list of ranked stocks
@@ -452,16 +465,14 @@ class KISBroker:
 
                 rankings = []
                 for item in data.get("output", [])[:limit]:
-                    rankings.append(
-                        {
+                    rankings.append({
                         "stock_code": item.get("stck_shrn_iscd") or item.get("mksc_shrn_iscd", ""),
                         "name": item.get("hts_kor_isnm", ""),
                         "price": _safe_float(item.get("stck_prpr", "0")),
                         "volume": _safe_float(item.get("acml_vol", "0")),
                         "change_rate": _safe_float(item.get("prdy_ctrt", "0")),
                         "volume_increase_rate": _safe_float(item.get("vol_inrt", "0")),
-                        }
-                    )
+                    })
 
                 return rankings
 
             except (TimeoutError, aiohttp.ClientError) as exc:
@@ -511,7 +522,9 @@ class KISBroker:
                 data = await resp.json()
                 return data.get("output", []) or []
         except (TimeoutError, aiohttp.ClientError) as exc:
-            raise ConnectionError(f"Network error fetching domestic pending orders: {exc}") from exc
+            raise ConnectionError(
+                f"Network error fetching domestic pending orders: {exc}"
+            ) from exc
 
     async def cancel_domestic_order(
         self,
@@ -562,10 +575,14 @@ class KISBroker:
             async with session.post(url, headers=headers, json=body) as resp:
                 if resp.status != 200:
                     text = await resp.text()
-                    raise ConnectionError(f"cancel_domestic_order failed ({resp.status}): {text}")
+                    raise ConnectionError(
+                        f"cancel_domestic_order failed ({resp.status}): {text}"
+                    )
                 return cast(dict[str, Any], await resp.json())
         except (TimeoutError, aiohttp.ClientError) as exc:
-            raise ConnectionError(f"Network error cancelling domestic order: {exc}") from exc
+            raise ConnectionError(
+                f"Network error cancelling domestic order: {exc}"
+            ) from exc
 
     async def get_daily_prices(
         self,
@@ -592,7 +609,6 @@ class KISBroker:
 
         # Calculate date range (today and N days ago)
         from datetime import datetime, timedelta
 
         end_date = datetime.now().strftime("%Y%m%d")
         start_date = (datetime.now() - timedelta(days=days + 10)).strftime("%Y%m%d")
 
@@ -611,7 +627,9 @@ class KISBroker:
             async with session.get(url, headers=headers, params=params) as resp:
                 if resp.status != 200:
                     text = await resp.text()
-                    raise ConnectionError(f"get_daily_prices failed ({resp.status}): {text}")
+                    raise ConnectionError(
+                        f"get_daily_prices failed ({resp.status}): {text}"
+                    )
                 data = await resp.json()
 
                 # Parse response
@@ -625,16 +643,14 @@ class KISBroker:
 
                 prices = []
                 for item in data.get("output2", []):
-                    prices.append(
-                        {
+                    prices.append({
                         "date": item.get("stck_bsop_date", ""),
                         "open": _safe_float(item.get("stck_oprc", "0")),
                         "high": _safe_float(item.get("stck_hgpr", "0")),
                         "low": _safe_float(item.get("stck_lwpr", "0")),
                         "close": _safe_float(item.get("stck_clpr", "0")),
                         "volume": _safe_float(item.get("acml_vol", "0")),
-                        }
-                    )
+                    })
 
                 # Sort oldest to newest (KIS returns newest first)
                 prices.reverse()
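Nearly every broker hunk above reflows the same pattern: a low-level network failure is re-raised as `ConnectionError` with `from exc`, so the original cause stays chained for debugging. A minimal, self-contained sketch of that pattern; `fetch_balance` and the simulated `TimeoutError` are stand-ins, not the real KISBroker method or an actual aiohttp call:

```python
import asyncio


async def fetch_balance() -> dict:
    """Stand-in for a broker API call that wraps network errors."""
    try:
        # Stand-in for `async with session.get(...)` timing out.
        raise TimeoutError("simulated network timeout")
    except TimeoutError as exc:
        # `from exc` attaches the original error as __cause__.
        raise ConnectionError(
            f"Network error fetching balance: {exc}"
        ) from exc


try:
    asyncio.run(fetch_balance())
except ConnectionError as err:
    cause_type = type(err.__cause__).__name__

print(cause_type)  # TimeoutError
```

Callers can then catch a single `ConnectionError` regardless of whether the underlying failure was a timeout or an `aiohttp.ClientError`, while tracebacks still show the root cause.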
@@ -56,7 +56,9 @@ class OverseasBroker:
         """
         self._broker = kis_broker
 
-    async def get_overseas_price(self, exchange_code: str, stock_code: str) -> dict[str, Any]:
+    async def get_overseas_price(
+        self, exchange_code: str, stock_code: str
+    ) -> dict[str, Any]:
         """
         Fetch overseas stock price.
 
@@ -87,10 +89,14 @@ class OverseasBroker:
             async with session.get(url, headers=headers, params=params) as resp:
                 if resp.status != 200:
                     text = await resp.text()
-                    raise ConnectionError(f"get_overseas_price failed ({resp.status}): {text}")
+                    raise ConnectionError(
+                        f"get_overseas_price failed ({resp.status}): {text}"
+                    )
                 return await resp.json()
         except (TimeoutError, aiohttp.ClientError) as exc:
-            raise ConnectionError(f"Network error fetching overseas price: {exc}") from exc
+            raise ConnectionError(
+                f"Network error fetching overseas price: {exc}"
+            ) from exc
 
     async def fetch_overseas_rankings(
         self,
@@ -148,7 +154,9 @@ class OverseasBroker:
                         ranking_type,
                     )
                     return []
-                raise ConnectionError(f"fetch_overseas_rankings failed ({resp.status}): {text}")
+                raise ConnectionError(
+                    f"fetch_overseas_rankings failed ({resp.status}): {text}"
+                )
 
                 data = await resp.json()
                 rows = self._extract_ranking_rows(data)
@@ -163,7 +171,9 @@ class OverseasBroker:
                 )
                 return []
         except (TimeoutError, aiohttp.ClientError) as exc:
-            raise ConnectionError(f"Network error fetching overseas rankings: {exc}") from exc
+            raise ConnectionError(
+                f"Network error fetching overseas rankings: {exc}"
+            ) from exc
 
     async def get_overseas_balance(self, exchange_code: str) -> dict[str, Any]:
         """
@@ -183,7 +193,9 @@ class OverseasBroker:
 
         # TR_ID: 실전 TTTS3012R, 모의 VTTS3012R
         # Source: 한국투자증권 오픈API 전체문서 (20260221) — '해외주식 잔고조회' 시트
-        balance_tr_id = "TTTS3012R" if self._broker._settings.MODE == "live" else "VTTS3012R"
+        balance_tr_id = (
+            "TTTS3012R" if self._broker._settings.MODE == "live" else "VTTS3012R"
+        )
         headers = await self._broker._auth_headers(balance_tr_id)
         params = {
             "CANO": self._broker._account_no,
@@ -193,16 +205,22 @@ class OverseasBroker:
             "CTX_AREA_FK200": "",
             "CTX_AREA_NK200": "",
         }
-        url = f"{self._broker._base_url}/uapi/overseas-stock/v1/trading/inquire-balance"
+        url = (
+            f"{self._broker._base_url}/uapi/overseas-stock/v1/trading/inquire-balance"
+        )
 
         try:
             async with session.get(url, headers=headers, params=params) as resp:
                 if resp.status != 200:
                     text = await resp.text()
-                    raise ConnectionError(f"get_overseas_balance failed ({resp.status}): {text}")
+                    raise ConnectionError(
+                        f"get_overseas_balance failed ({resp.status}): {text}"
+                    )
                 return await resp.json()
         except (TimeoutError, aiohttp.ClientError) as exc:
-            raise ConnectionError(f"Network error fetching overseas balance: {exc}") from exc
+            raise ConnectionError(
+                f"Network error fetching overseas balance: {exc}"
+            ) from exc
 
     async def get_overseas_buying_power(
         self,
@@ -229,7 +247,9 @@ class OverseasBroker:
 
         # TR_ID: 실전 TTTS3007R, 모의 VTTS3007R
         # Source: 한국투자증권 오픈API 전체문서 (20260221) — '해외주식 매수가능금액조회' 시트
-        ps_tr_id = "TTTS3007R" if self._broker._settings.MODE == "live" else "VTTS3007R"
+        ps_tr_id = (
+            "TTTS3007R" if self._broker._settings.MODE == "live" else "VTTS3007R"
+        )
         headers = await self._broker._auth_headers(ps_tr_id)
         params = {
             "CANO": self._broker._account_no,
@@ -238,7 +258,9 @@ class OverseasBroker:
             "OVRS_ORD_UNPR": f"{price:.2f}",
             "ITEM_CD": stock_code,
         }
-        url = f"{self._broker._base_url}/uapi/overseas-stock/v1/trading/inquire-psamount"
+        url = (
+            f"{self._broker._base_url}/uapi/overseas-stock/v1/trading/inquire-psamount"
+        )
 
         try:
             async with session.get(url, headers=headers, params=params) as resp:
@@ -249,7 +271,9 @@ class OverseasBroker:
                 )
                 return await resp.json()
         except (TimeoutError, aiohttp.ClientError) as exc:
-            raise ConnectionError(f"Network error fetching overseas buying power: {exc}") from exc
+            raise ConnectionError(
+                f"Network error fetching overseas buying power: {exc}"
+            ) from exc
 
     async def send_overseas_order(
         self,
@@ -306,7 +330,9 @@ class OverseasBroker:
             async with session.post(url, headers=headers, json=body) as resp:
                 if resp.status != 200:
                     text = await resp.text()
-                    raise ConnectionError(f"send_overseas_order failed ({resp.status}): {text}")
+                    raise ConnectionError(
+                        f"send_overseas_order failed ({resp.status}): {text}"
+                    )
                 data = await resp.json()
                 rt_cd = data.get("rt_cd", "")
                 msg1 = data.get("msg1", "")
@@ -331,9 +357,13 @@ class OverseasBroker:
                 )
                 return data
         except (TimeoutError, aiohttp.ClientError) as exc:
-            raise ConnectionError(f"Network error sending overseas order: {exc}") from exc
+            raise ConnectionError(
+                f"Network error sending overseas order: {exc}"
+            ) from exc
 
-    async def get_overseas_pending_orders(self, exchange_code: str) -> list[dict[str, Any]]:
+    async def get_overseas_pending_orders(
+        self, exchange_code: str
+    ) -> list[dict[str, Any]]:
         """Fetch unfilled (pending) overseas orders for a given exchange.
 
         Args:
@@ -349,7 +379,9 @@ class OverseasBroker:
             ConnectionError: On network or API errors (live mode only).
         """
         if self._broker._settings.MODE != "live":
-            logger.debug("Pending orders API (TTTS3018R) not supported in paper mode; returning []")
+            logger.debug(
+                "Pending orders API (TTTS3018R) not supported in paper mode; returning []"
+            )
             return []
 
         await self._broker._rate_limiter.acquire()
@@ -366,7 +398,9 @@ class OverseasBroker:
             "CTX_AREA_FK200": "",
             "CTX_AREA_NK200": "",
         }
-        url = f"{self._broker._base_url}/uapi/overseas-stock/v1/trading/inquire-nccs"
+        url = (
+            f"{self._broker._base_url}/uapi/overseas-stock/v1/trading/inquire-nccs"
+        )
 
         try:
             async with session.get(url, headers=headers, params=params) as resp:
@@ -381,7 +415,9 @@ class OverseasBroker:
                         return output
                 return []
         except (TimeoutError, aiohttp.ClientError) as exc:
-            raise ConnectionError(f"Network error fetching pending orders: {exc}") from exc
+            raise ConnectionError(
+                f"Network error fetching pending orders: {exc}"
+            ) from exc
 
     async def cancel_overseas_order(
         self,
@@ -433,16 +469,22 @@ class OverseasBroker:
         headers = await self._broker._auth_headers(tr_id)
         headers["hashkey"] = hash_key
 
-        url = f"{self._broker._base_url}/uapi/overseas-stock/v1/trading/order-rvsecncl"
+        url = (
+            f"{self._broker._base_url}/uapi/overseas-stock/v1/trading/order-rvsecncl"
+        )
 
         try:
             async with session.post(url, headers=headers, json=body) as resp:
                 if resp.status != 200:
                     text = await resp.text()
-                    raise ConnectionError(f"cancel_overseas_order failed ({resp.status}): {text}")
+                    raise ConnectionError(
+                        f"cancel_overseas_order failed ({resp.status}): {text}"
+                    )
                 return await resp.json()
         except (TimeoutError, aiohttp.ClientError) as exc:
-            raise ConnectionError(f"Network error cancelling overseas order: {exc}") from exc
+            raise ConnectionError(
+                f"Network error cancelling overseas order: {exc}"
+            ) from exc
 
     def _get_currency_code(self, exchange_code: str) -> str:
         """
@@ -124,8 +124,12 @@ class Settings(BaseSettings):
     OVERSEAS_RANKING_ENABLED: bool = True
     OVERSEAS_RANKING_FLUCT_TR_ID: str = "HHDFS76290000"
     OVERSEAS_RANKING_VOLUME_TR_ID: str = "HHDFS76270000"
-    OVERSEAS_RANKING_FLUCT_PATH: str = "/uapi/overseas-stock/v1/ranking/updown-rate"
-    OVERSEAS_RANKING_VOLUME_PATH: str = "/uapi/overseas-stock/v1/ranking/volume-surge"
+    OVERSEAS_RANKING_FLUCT_PATH: str = (
+        "/uapi/overseas-stock/v1/ranking/updown-rate"
+    )
+    OVERSEAS_RANKING_VOLUME_PATH: str = (
+        "/uapi/overseas-stock/v1/ranking/volume-surge"
+    )

     # Dashboard (optional)
     DASHBOARD_ENABLED: bool = False
@@ -222,7 +222,9 @@ class ContextAggregator:

         total_pnl = 0.0
         for month in months:
-            monthly_pnl = self.store.get_context(ContextLayer.L4_MONTHLY, month, "monthly_pnl")
+            monthly_pnl = self.store.get_context(
+                ContextLayer.L4_MONTHLY, month, "monthly_pnl"
+            )
             if monthly_pnl is not None:
                 total_pnl += monthly_pnl

@@ -249,7 +251,9 @@ class ContextAggregator:
             if quarterly_pnl is not None:
                 total_pnl += quarterly_pnl

-        self.store.set_context(ContextLayer.L2_ANNUAL, year, "annual_pnl", round(total_pnl, 2))
+        self.store.set_context(
+            ContextLayer.L2_ANNUAL, year, "annual_pnl", round(total_pnl, 2)
+        )

     def aggregate_legacy_from_annual(self) -> None:
         """Aggregate L1 (legacy) context from all L2 (annual) data."""
@@ -276,7 +280,9 @@ class ContextAggregator:
         self.store.set_context(
             ContextLayer.L1_LEGACY, "LEGACY", "total_pnl", round(total_pnl, 2)
         )
-        self.store.set_context(ContextLayer.L1_LEGACY, "LEGACY", "years_traded", years_traded)
+        self.store.set_context(
+            ContextLayer.L1_LEGACY, "LEGACY", "years_traded", years_traded
+        )
         self.store.set_context(
             ContextLayer.L1_LEGACY,
             "LEGACY",
@@ -3,10 +3,10 @@
 from __future__ import annotations

 from dataclasses import dataclass
-from enum import StrEnum
+from enum import Enum


-class ContextLayer(StrEnum):
+class ContextLayer(str, Enum):
     """7-tier context hierarchy from real-time to generational."""

     L1_LEGACY = "L1_LEGACY"  # Cumulative/generational wisdom
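Aside on the `StrEnum` → `(str, Enum)` change above: the explicit `str` mixin is the portable spelling, since `enum.StrEnum` only exists on Python 3.11+. A minimal sketch of the equivalence, using one member name from the hunk for illustration:

```python
from enum import Enum


class ContextLayer(str, Enum):
    """str mixin: members compare equal to their string values."""

    L1_LEGACY = "L1_LEGACY"


# Equality, .value access, and value lookup behave as with StrEnum;
# only str() output differs before 3.11 (qualified name vs raw value).
assert ContextLayer.L1_LEGACY == "L1_LEGACY"
assert ContextLayer.L1_LEGACY.value == "L1_LEGACY"
assert ContextLayer("L1_LEGACY") is ContextLayer.L1_LEGACY
```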
@@ -9,7 +9,7 @@ This module summarizes old context data instead of including raw details:
 from __future__ import annotations

 from dataclasses import dataclass
-from datetime import UTC, datetime
+from datetime import UTC, datetime, timedelta
 from typing import Any

 from src.context.layer import ContextLayer
@@ -11,9 +11,8 @@ Order is fixed:
 from __future__ import annotations

 import inspect
-from collections.abc import Awaitable, Callable
 from dataclasses import dataclass, field
-from typing import Any
+from typing import Any, Awaitable, Callable

 StepCallable = Callable[[], Any | Awaitable[Any]]

@@ -15,7 +15,7 @@ from src.markets.schedule import MarketInfo
 _LOW_LIQUIDITY_SESSIONS = {"NXT_AFTER", "US_PRE", "US_DAY", "US_AFTER"}


-class OrderPolicyRejectedError(Exception):
+class OrderPolicyRejected(Exception):
     """Raised when an order violates session policy."""

     def __init__(self, message: str, *, session_id: str, market_code: str) -> None:
@@ -61,9 +61,7 @@ def classify_session_id(market: MarketInfo, now: datetime | None = None) -> str:

 def get_session_info(market: MarketInfo, now: datetime | None = None) -> SessionInfo:
     session_id = classify_session_id(market, now)
-    return SessionInfo(
-        session_id=session_id, is_low_liquidity=session_id in _LOW_LIQUIDITY_SESSIONS
-    )
+    return SessionInfo(session_id=session_id, is_low_liquidity=session_id in _LOW_LIQUIDITY_SESSIONS)


 def validate_order_policy(
@@ -78,7 +76,7 @@ def validate_order_policy(

     is_market_order = price <= 0
     if info.is_low_liquidity and is_market_order:
-        raise OrderPolicyRejectedError(
+        raise OrderPolicyRejected(
             f"Market order is forbidden in low-liquidity session ({info.session_id})",
             session_id=info.session_id,
             market_code=market.code,
@@ -86,14 +84,10 @@ def validate_order_policy(

     # Guard against accidental unsupported actions.
     if order_type not in {"BUY", "SELL"}:
-        raise OrderPolicyRejectedError(
+        raise OrderPolicyRejected(
             f"Unsupported order_type={order_type}",
             session_id=info.session_id,
             market_code=market.code,
         )

     return info
-
-
-# Backward compatibility alias
-OrderPolicyRejected = OrderPolicyRejectedError
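For context on the deleted alias: it let call sites written against the old name keep working while the class was named `OrderPolicyRejectedError`; once the class itself is named `OrderPolicyRejected` again, the alias would be self-referential, so it goes. A sketch of the pattern (simplified constructor and a hypothetical message):

```python
class OrderPolicyRejectedError(Exception):
    """Raised when an order violates session policy."""


# Module-level alias: both names are bound to the same class object,
# so the old name still catches the renamed exception.
OrderPolicyRejected = OrderPolicyRejectedError

try:
    raise OrderPolicyRejectedError("market order forbidden in US_PRE")
except OrderPolicyRejected as exc:
    caught = str(exc)

assert caught == "market order forbidden in US_PRE"
assert OrderPolicyRejected is OrderPolicyRejectedError
```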
@@ -28,7 +28,9 @@ class PriorityTask:
     # Task data not used in comparison
     task_id: str = field(compare=False)
     task_data: dict[str, Any] = field(compare=False, default_factory=dict)
-    callback: Callable[[], Coroutine[Any, Any, Any]] | None = field(compare=False, default=None)
+    callback: Callable[[], Coroutine[Any, Any, Any]] | None = field(
+        compare=False, default=None
+    )


 @dataclass
@@ -25,7 +25,7 @@ class CircuitBreakerTripped(SystemExit):
     )


-class FatFingerRejectedError(Exception):
+class FatFingerRejected(Exception):
     """Raised when an order exceeds the maximum allowed proportion of cash."""

     def __init__(self, order_amount: float, total_cash: float, max_pct: float) -> None:
@@ -61,7 +61,7 @@ class RiskManager:
     def check_fat_finger(self, order_amount: float, total_cash: float) -> None:
         """Reject orders that exceed the maximum proportion of available cash."""
         if total_cash <= 0:
-            raise FatFingerRejectedError(order_amount, total_cash, self._ff_max_pct)
+            raise FatFingerRejected(order_amount, total_cash, self._ff_max_pct)

         ratio_pct = (order_amount / total_cash) * 100
         if ratio_pct > self._ff_max_pct:
@@ -69,7 +69,7 @@ class RiskManager:
                 "Fat finger check failed",
                 extra={"order_amount": order_amount},
             )
-            raise FatFingerRejectedError(order_amount, total_cash, self._ff_max_pct)
+            raise FatFingerRejected(order_amount, total_cash, self._ff_max_pct)

     def validate_order(
         self,
@@ -81,7 +81,3 @@ class RiskManager:
         self.check_circuit_breaker(current_pnl_pct)
         self.check_fat_finger(order_amount, total_cash)
         logger.info("Order passed risk validation")
-
-
-# Backward compatibility alias
-FatFingerRejected = FatFingerRejectedError
@@ -5,7 +5,7 @@ from __future__ import annotations
 import json
 import os
 import sqlite3
-from datetime import UTC, datetime
+from datetime import UTC, datetime, timezone
 from pathlib import Path
 from typing import Any

@@ -188,7 +188,10 @@ def create_dashboard_app(db_path: str, mode: str = "paper") -> FastAPI:
         return {
             "market": "all",
             "combined": combined,
-            "by_market": [_row_to_performance(row) for row in by_market_rows],
+            "by_market": [
+                _row_to_performance(row)
+                for row in by_market_rows
+            ],
         }

     row = conn.execute(
@@ -398,7 +401,7 @@ def create_dashboard_app(db_path: str, mode: str = "paper") -> FastAPI:
         """
     ).fetchall()

-    now = datetime.now(UTC)
+    now = datetime.now(timezone.utc)
     positions = []
     for row in rows:
         entry_time_str = row["entry_time"]
@@ -9,6 +9,7 @@ from __future__ import annotations
 import logging
 from dataclasses import dataclass
 from datetime import datetime, timedelta
+from typing import Any

 logger = logging.getLogger(__name__)

src/db.py
@@ -123,7 +123,8 @@ def init_db(db_path: str) -> sqlite3.Connection:
         """
     )
     decision_columns = {
-        row[1] for row in conn.execute("PRAGMA table_info(decision_logs)").fetchall()
+        row[1]
+        for row in conn.execute("PRAGMA table_info(decision_logs)").fetchall()
     }
     if "session_id" not in decision_columns:
         conn.execute("ALTER TABLE decision_logs ADD COLUMN session_id TEXT DEFAULT 'UNKNOWN'")
@@ -184,7 +185,9 @@ def init_db(db_path: str) -> sqlite3.Connection:
     conn.execute(
         "CREATE INDEX IF NOT EXISTS idx_decision_logs_timestamp ON decision_logs(timestamp)"
     )
-    conn.execute("CREATE INDEX IF NOT EXISTS idx_decision_logs_reviewed ON decision_logs(reviewed)")
+    conn.execute(
+        "CREATE INDEX IF NOT EXISTS idx_decision_logs_reviewed ON decision_logs(reviewed)"
+    )
     conn.execute(
         "CREATE INDEX IF NOT EXISTS idx_decision_logs_confidence ON decision_logs(confidence)"
     )
@@ -378,7 +381,9 @@ def get_open_position(
     return {"decision_id": row[1], "price": row[2], "quantity": row[3], "timestamp": row[4]}


-def get_recent_symbols(conn: sqlite3.Connection, market: str, limit: int = 30) -> list[str]:
+def get_recent_symbols(
+    conn: sqlite3.Connection, market: str, limit: int = 30
+) -> list[str]:
     """Return recent unique symbols for a market, newest first."""
     cursor = conn.execute(
         """
@@ -90,7 +90,9 @@ class ABTester:
         sharpe_ratio = None
         if len(pnls) > 1:
             mean_return = avg_pnl
-            std_return = (sum((p - mean_return) ** 2 for p in pnls) / (len(pnls) - 1)) ** 0.5
+            std_return = (
+                sum((p - mean_return) ** 2 for p in pnls) / (len(pnls) - 1)
+            ) ** 0.5
             if std_return > 0:
                 sharpe_ratio = mean_return / std_return

@@ -196,7 +198,8 @@ class ABTester:

         if meets_criteria:
             logger.info(
-                "Strategy '%s' meets deployment criteria: win_rate=%.2f%%, trades=%d, avg_pnl=%.2f",
+                "Strategy '%s' meets deployment criteria: "
+                "win_rate=%.2f%%, trades=%d, avg_pnl=%.2f",
                 result.winner,
                 winning_perf.win_rate,
                 winning_perf.total_trades,
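The `std_return` expression rewrapped above is the sample standard deviation (Bessel's correction, `n - 1` in the denominator); dividing mean P&L by it gives the unannualized Sharpe-style ratio the class reports. A worked sketch with made-up per-trade P&L values:

```python
pnls = [1.0, -0.5, 2.0, 0.5]  # hypothetical per-trade P&L
mean_return = sum(pnls) / len(pnls)  # 0.75

# Sample standard deviation with n - 1 in the denominator,
# mirroring the expression in the hunk above.
std_return = (
    sum((p - mean_return) ** 2 for p in pnls) / (len(pnls) - 1)
) ** 0.5
sharpe_ratio = mean_return / std_return if std_return > 0 else None
```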
@@ -60,7 +60,9 @@ class DailyReviewer:
             if isinstance(scenario_match, dict) and scenario_match:
                 matched += 1
         scenario_match_rate = (
-            round((matched / total_decisions) * 100, 2) if total_decisions else 0.0
+            round((matched / total_decisions) * 100, 2)
+            if total_decisions
+            else 0.0
         )

         trade_stats = self._conn.execute(
@@ -80,8 +80,7 @@ class EvolutionOptimizer:
         # Convert to dict format for analysis
         failures = []
         for decision in losing_decisions:
-            failures.append(
-                {
+            failures.append({
                 "decision_id": decision.decision_id,
                 "timestamp": decision.timestamp,
                 "stock_code": decision.stock_code,
@@ -94,12 +93,13 @@ class EvolutionOptimizer:
                 "outcome_accuracy": decision.outcome_accuracy,
                 "context_snapshot": decision.context_snapshot,
                 "input_data": decision.input_data,
-                }
-            )
+            })

         return failures

-    def identify_failure_patterns(self, failures: list[dict[str, Any]]) -> dict[str, Any]:
+    def identify_failure_patterns(
+        self, failures: list[dict[str, Any]]
+    ) -> dict[str, Any]:
         """Identify patterns in losing decisions.

         Analyzes:
@@ -143,8 +143,12 @@ class EvolutionOptimizer:
             total_confidence += failure.get("confidence", 0)
             total_loss += failure.get("outcome_pnl", 0.0)

-        patterns["avg_confidence"] = round(total_confidence / len(failures), 2) if failures else 0.0
-        patterns["avg_loss"] = round(total_loss / len(failures), 2) if failures else 0.0
+        patterns["avg_confidence"] = (
+            round(total_confidence / len(failures), 2) if failures else 0.0
+        )
+        patterns["avg_loss"] = (
+            round(total_loss / len(failures), 2) if failures else 0.0
+        )

         # Convert Counters to regular dicts for JSON serialization
         patterns["markets"] = dict(patterns["markets"])
@@ -193,8 +197,7 @@ class EvolutionOptimizer:

         prompt = (
             "You are a quantitative trading strategy developer.\n"
-            "Analyze these failed trades and their patterns, "
-            "then generate an improved strategy.\n\n"
+            "Analyze these failed trades and their patterns, then generate an improved strategy.\n\n"
             f"Failure Patterns:\n{json.dumps(patterns, indent=2)}\n\n"
             f"Sample Failed Trades (first 5):\n"
             f"{json.dumps(failures[:5], indent=2, default=str)}\n\n"
@@ -211,8 +214,7 @@ class EvolutionOptimizer:

         try:
             response = await self._client.aio.models.generate_content(
-                model=self._model_name,
-                contents=prompt,
+                model=self._model_name, contents=prompt,
             )
             body = response.text.strip()
         except Exception as exc:
@@ -278,7 +280,9 @@ class EvolutionOptimizer:
             logger.info("Strategy validation PASSED")
             return True
         else:
-            logger.warning("Strategy validation FAILED:\n%s", result.stdout + result.stderr)
+            logger.warning(
+                "Strategy validation FAILED:\n%s", result.stdout + result.stderr
+            )
             # Clean up failing strategy
             strategy_path.unlink(missing_ok=True)
             return False
@@ -187,7 +187,9 @@ class PerformanceTracker:

         return metrics

-    def calculate_improvement_trend(self, metrics_history: list[StrategyMetrics]) -> dict[str, Any]:
+    def calculate_improvement_trend(
+        self, metrics_history: list[StrategyMetrics]
+    ) -> dict[str, Any]:
         """Calculate improvement trend from historical metrics.

         Args:
@@ -227,7 +229,9 @@ class PerformanceTracker:
             "period_count": len(metrics_history),
         }

-    def generate_dashboard(self, strategy_name: str | None = None) -> PerformanceDashboard:
+    def generate_dashboard(
+        self, strategy_name: str | None = None
+    ) -> PerformanceDashboard:
         """Generate a comprehensive performance dashboard.

         Args:
@@ -256,7 +260,9 @@ class PerformanceTracker:
             improvement_trend=improvement_trend,
         )

-    def export_dashboard_json(self, dashboard: PerformanceDashboard) -> str:
+    def export_dashboard_json(
+        self, dashboard: PerformanceDashboard
+    ) -> str:
         """Export dashboard as JSON string.

         Args:
@@ -140,7 +140,9 @@ class DecisionLogger:
         )
         self.conn.commit()

-    def update_outcome(self, decision_id: str, pnl: float, accuracy: int) -> None:
+    def update_outcome(
+        self, decision_id: str, pnl: float, accuracy: int
+    ) -> None:
         """Update the outcome of a decision after trade execution.

         Args:
src/main.py
@@ -26,12 +26,12 @@ from src.context.aggregator import ContextAggregator
 from src.context.layer import ContextLayer
 from src.context.scheduler import ContextScheduler
 from src.context.store import ContextStore
+from src.core.criticality import CriticalityAssessor
 from src.core.blackout_manager import (
     BlackoutOrderManager,
     QueuedOrderIntent,
     parse_blackout_windows_kst,
 )
-from src.core.criticality import CriticalityAssessor
 from src.core.kill_switch import KillSwitchOrchestrator
 from src.core.order_policy import (
     OrderPolicyRejected,
@@ -52,16 +52,12 @@ from src.evolution.optimizer import EvolutionOptimizer
 from src.logging.decision_logger import DecisionLogger
 from src.logging_config import setup_logging
 from src.markets.schedule import MARKETS, MarketInfo, get_next_market_open, get_open_markets
-from src.notifications.telegram_client import (
-    NotificationFilter,
-    TelegramClient,
-    TelegramCommandHandler,
-)
-from src.strategy.exit_rules import ExitRuleConfig, ExitRuleInput, evaluate_exit
+from src.notifications.telegram_client import NotificationFilter, TelegramClient, TelegramCommandHandler
 from src.strategy.models import DayPlaybook, MarketOutlook
+from src.strategy.exit_rules import ExitRuleConfig, ExitRuleInput, evaluate_exit
 from src.strategy.playbook_store import PlaybookStore
-from src.strategy.position_state_machine import PositionState
 from src.strategy.pre_market_planner import PreMarketPlanner
+from src.strategy.position_state_machine import PositionState
 from src.strategy.scenario_engine import ScenarioEngine

 logger = logging.getLogger(__name__)
@@ -354,7 +350,9 @@ async def _inject_staged_exit_features(
         return

     if "pred_down_prob" not in market_data:
-        market_data["pred_down_prob"] = _estimate_pred_down_prob_from_rsi(market_data.get("rsi"))
+        market_data["pred_down_prob"] = _estimate_pred_down_prob_from_rsi(
+            market_data.get("rsi")
+        )

     existing_atr = safe_float(market_data.get("atr_value"), 0.0)
     if existing_atr > 0:
@@ -391,7 +389,7 @@ async def _retry_connection(coro_factory: Any, *args: Any, label: str = "", **kw
             return await coro_factory(*args, **kwargs)
         except ConnectionError as exc:
             if attempt < MAX_CONNECTION_RETRIES:
-                wait_secs = 2**attempt
+                wait_secs = 2 ** attempt
                 logger.warning(
                     "Connection error %s (attempt %d/%d), retrying in %ds: %s",
                     label,
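The `2**attempt` → `2 ** attempt` edit above is whitespace-only; both spellings tokenize identically and compute the same exponential backoff. A sketch of the resulting wait schedule (the retry-count constant is an assumed value for illustration):

```python
MAX_CONNECTION_RETRIES = 3  # assumed value for illustration

# Attempts 1..MAX_CONNECTION_RETRIES wait 2, 4, 8 seconds respectively;
# the spacing around ** does not change the result.
waits = [2 ** attempt for attempt in range(1, MAX_CONNECTION_RETRIES + 1)]
assert waits == [2, 4, 8]
assert 2**3 == 2 ** 3
```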
@@ -415,7 +413,7 @@ async def sync_positions_from_broker(
     broker: Any,
     overseas_broker: Any,
     db_conn: Any,
-    settings: Settings,
+    settings: "Settings",
 ) -> int:
     """Sync open positions from the live broker into the local DB at startup.

@@ -443,7 +441,9 @@ async def sync_positions_from_broker(
                 if market.exchange_code in seen_exchange_codes:
                     continue
                 seen_exchange_codes.add(market.exchange_code)
-                balance_data = await overseas_broker.get_overseas_balance(market.exchange_code)
+                balance_data = await overseas_broker.get_overseas_balance(
+                    market.exchange_code
+                )
                 log_market = market_code  # e.g. "US_NASDAQ"
         except ConnectionError as exc:
             logger.warning(
@@ -453,7 +453,9 @@ async def sync_positions_from_broker(
             )
             continue

-        held_codes = _extract_held_codes_from_balance(balance_data, is_domestic=market.is_domestic)
+        held_codes = _extract_held_codes_from_balance(
+            balance_data, is_domestic=market.is_domestic
+        )
         for stock_code in held_codes:
             if get_open_position(db_conn, stock_code, log_market):
                 continue  # already tracked
@@ -485,7 +487,9 @@ async def sync_positions_from_broker(
             synced += 1

     if synced:
-        logger.info("Startup sync complete: %d position(s) synced from broker", synced)
+        logger.info(
+            "Startup sync complete: %d position(s) synced from broker", synced
+        )
     else:
         logger.info("Startup sync: no new positions to sync from broker")
     return synced
@@ -855,9 +859,15 @@ def _apply_staged_exit_override_for_hold(

     pnl_pct = (current_price - entry_price) / entry_price * 100.0
     if exit_eval.reason == "hard_stop":
-        rationale = f"Stop-loss triggered ({pnl_pct:.2f}% <= {stop_loss_threshold:.2f}%)"
+        rationale = (
+            f"Stop-loss triggered ({pnl_pct:.2f}% <= "
+            f"{stop_loss_threshold:.2f}%)"
+        )
     elif exit_eval.reason == "arm_take_profit":
-        rationale = f"Take-profit triggered ({pnl_pct:.2f}% >= {arm_pct:.2f}%)"
+        rationale = (
+            f"Take-profit triggered ({pnl_pct:.2f}% >= "
+            f"{arm_pct:.2f}%)"
+        )
     elif exit_eval.reason == "atr_trailing_stop":
         rationale = "ATR trailing-stop triggered"
     elif exit_eval.reason == "be_lock_threat":
@@ -968,10 +978,7 @@ def _maybe_queue_order_intent(
         )
     if queued:
         logger.warning(
-            (
-                "Blackout active: queued order intent %s %s (%s) "
-                "qty=%d price=%.4f source=%s pending=%d"
-            ),
+            "Blackout active: queued order intent %s %s (%s) qty=%d price=%.4f source=%s pending=%d",
             order_type,
             stock_code,
             market.code,
@@ -1064,10 +1071,7 @@ async def process_blackout_recovery_orders(
         )
         if queued_price <= 0 or current_price <= 0:
             logger.info(
-                (
-                    "Drop queued intent by price revalidation (invalid price): "
-                    "%s %s (%s) queued=%.4f current=%.4f"
-                ),
+                "Drop queued intent by price revalidation (invalid price): %s %s (%s) queued=%.4f current=%.4f",
                 intent.order_type,
                 intent.stock_code,
                 market.code,
@@ -1078,10 +1082,7 @@ async def process_blackout_recovery_orders(
|
|||||||
drift_pct = abs(current_price - queued_price) / queued_price * 100.0
|
drift_pct = abs(current_price - queued_price) / queued_price * 100.0
|
||||||
if drift_pct > max_drift_pct:
|
if drift_pct > max_drift_pct:
|
||||||
logger.info(
|
logger.info(
|
||||||
(
|
"Drop queued intent by price revalidation: %s %s (%s) queued=%.4f current=%.4f drift=%.2f%% max=%.2f%%",
|
||||||
"Drop queued intent by price revalidation: %s %s (%s) "
|
|
||||||
"queued=%.4f current=%.4f drift=%.2f%% max=%.2f%%"
|
|
||||||
),
|
|
||||||
intent.order_type,
|
intent.order_type,
|
||||||
intent.stock_code,
|
intent.stock_code,
|
||||||
market.code,
|
market.code,
|
||||||
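The revalidation hunks above only reformat the logging; the guard itself drops a blackout-queued order when its price is stale. A minimal standalone sketch of that logic (the helper name and signature are illustrative, not from the diff):

```python
def should_drop_queued_intent(
    queued_price: float, current_price: float, max_drift_pct: float
) -> bool:
    """Illustrative mirror of the price-revalidation guard in
    process_blackout_recovery_orders."""
    # Invalid prices always drop the queued intent.
    if queued_price <= 0 or current_price <= 0:
        return True
    # Percentage drift between queue-time price and the current price.
    drift_pct = abs(current_price - queued_price) / queued_price * 100.0
    return drift_pct > max_drift_pct

print(should_drop_queued_intent(100.0, 101.0, 2.0))  # 1% drift, within cap -> False
print(should_drop_queued_intent(100.0, 105.0, 2.0))  # 5% drift, exceeds cap -> True
```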
@@ -1374,18 +1375,24 @@ async def trading_cycle(
     # 1. Fetch market data
     price_output: dict[str, Any] = {}  # Populated for overseas markets; used for fallback metrics
     if market.is_domestic:
-        current_price, price_change_pct, foreigner_net = await broker.get_current_price(stock_code)
+        current_price, price_change_pct, foreigner_net = await broker.get_current_price(
+            stock_code
+        )
         balance_data = await broker.get_balance()

         output2 = balance_data.get("output2", [{}])
         total_eval = safe_float(output2[0].get("tot_evlu_amt", "0")) if output2 else 0
         total_cash = safe_float(
-            balance_data.get("output2", [{}])[0].get("dnca_tot_amt", "0") if output2 else "0"
+            balance_data.get("output2", [{}])[0].get("dnca_tot_amt", "0")
+            if output2
+            else "0"
         )
         purchase_total = safe_float(output2[0].get("pchs_amt_smtl_amt", "0")) if output2 else 0
     else:
         # Overseas market
-        price_data = await overseas_broker.get_overseas_price(market.exchange_code, stock_code)
+        price_data = await overseas_broker.get_overseas_price(
+            market.exchange_code, stock_code
+        )
         balance_data = await overseas_broker.get_overseas_balance(market.exchange_code)

         output2 = balance_data.get("output2", [{}])
@@ -1452,7 +1459,11 @@ async def trading_cycle(
         total_cash = settings.PAPER_OVERSEAS_CASH

     # Calculate daily P&L %
-    pnl_pct = ((total_eval - purchase_total) / purchase_total * 100) if purchase_total > 0 else 0.0
+    pnl_pct = (
+        ((total_eval - purchase_total) / purchase_total * 100)
+        if purchase_total > 0
+        else 0.0
+    )

     market_data: dict[str, Any] = {
         "stock_code": stock_code,
@@ -1480,13 +1491,11 @@ async def trading_cycle(
         market_data["rsi"] = max(0.0, min(100.0, 50.0 + price_change_pct * 2.0))
     if price_output and current_price > 0:
         pr_high = safe_float(
-            price_output.get("high")
-            or price_output.get("ovrs_hgpr")
+            price_output.get("high") or price_output.get("ovrs_hgpr")
             or price_output.get("stck_hgpr")
         )
         pr_low = safe_float(
-            price_output.get("low")
-            or price_output.get("ovrs_lwpr")
+            price_output.get("low") or price_output.get("ovrs_lwpr")
             or price_output.get("stck_lwpr")
         )
         if pr_high > 0 and pr_low > 0 and pr_high >= pr_low:
@@ -1503,7 +1512,9 @@ async def trading_cycle(
     if open_pos and current_price > 0:
         entry_price = safe_float(open_pos.get("price"), 0.0)
         if entry_price > 0:
-            market_data["unrealized_pnl_pct"] = (current_price - entry_price) / entry_price * 100
+            market_data["unrealized_pnl_pct"] = (
+                (current_price - entry_price) / entry_price * 100
+            )
             entry_ts = open_pos.get("timestamp")
             if entry_ts:
                 try:
@@ -1734,19 +1745,16 @@ async def trading_cycle(
         stock_playbook=stock_playbook,
         settings=settings,
     )
-    if (
-        open_position
-        and decision.action == "HOLD"
-        and _should_force_exit_for_overnight(
+    if open_position and decision.action == "HOLD" and _should_force_exit_for_overnight(
         market=market,
         settings=settings,
-        )
     ):
         decision = TradeDecision(
             action="SELL",
             confidence=max(decision.confidence, 85),
             rationale=(
-                "Forced exit by overnight policy (session close window / kill switch priority)"
+                "Forced exit by overnight policy"
+                " (session close window / kill switch priority)"
             ),
         )
         logger.info(
@@ -1826,7 +1834,9 @@ async def trading_cycle(
         return

     broker_held_qty = (
-        _extract_held_qty_from_balance(balance_data, stock_code, is_domestic=market.is_domestic)
+        _extract_held_qty_from_balance(
+            balance_data, stock_code, is_domestic=market.is_domestic
+        )
         if decision.action == "SELL"
         else 0
     )
@@ -1861,10 +1871,7 @@ async def trading_cycle(
     )
     if fx_blocked:
         logger.warning(
-            (
-                "Skip BUY %s (%s): FX buffer guard "
-                "(remaining=%.2f, required=%.2f, cash=%.2f, order=%.2f)"
-            ),
+            "Skip BUY %s (%s): FX buffer guard (remaining=%.2f, required=%.2f, cash=%.2f, order=%.2f)",
             stock_code,
             market.name,
             remaining_cash,
@@ -2061,7 +2068,8 @@ async def trading_cycle(
             action="SELL",
             confidence=0,
             rationale=(
-                "[ghost-close] Broker reported no balance; position closed without fill"
+                "[ghost-close] Broker reported no balance;"
+                " position closed without fill"
             ),
             quantity=0,
             price=0.0,
@@ -2267,13 +2275,17 @@ async def handle_domestic_pending_orders(
                         outcome="cancelled",
                     )
                 except Exception as notify_exc:
-                    logger.warning("notify_unfilled_order failed: %s", notify_exc)
+                    logger.warning(
+                        "notify_unfilled_order failed: %s", notify_exc
+                    )
             else:
                 # First unfilled SELL → resubmit at last * 0.996 (-0.4%).
                 try:
                     last_price, _, _ = await broker.get_current_price(stock_code)
                     if last_price <= 0:
-                        raise ValueError(f"Invalid price ({last_price}) for {stock_code}")
+                        raise ValueError(
+                            f"Invalid price ({last_price}) for {stock_code}"
+                        )
                     new_price = kr_round_down(last_price * 0.996)
                     validate_order_policy(
                         market=MARKETS["KR"],
@@ -2286,7 +2298,9 @@ async def handle_domestic_pending_orders(
                         quantity=psbl_qty,
                         price=new_price,
                     )
-                    sell_resubmit_counts[key] = sell_resubmit_counts.get(key, 0) + 1
+                    sell_resubmit_counts[key] = (
+                        sell_resubmit_counts.get(key, 0) + 1
+                    )
                     try:
                         await telegram.notify_unfilled_order(
                             stock_code=stock_code,
@@ -2297,7 +2311,9 @@ async def handle_domestic_pending_orders(
                             new_price=float(new_price),
                         )
                     except Exception as notify_exc:
-                        logger.warning("notify_unfilled_order failed: %s", notify_exc)
+                        logger.warning(
+                            "notify_unfilled_order failed: %s", notify_exc
+                        )
                 except Exception as exc:
                     logger.error(
                         "SELL resubmit failed for KR %s: %s",
@@ -2365,7 +2381,9 @@ async def handle_overseas_pending_orders(
         try:
             orders = await overseas_broker.get_overseas_pending_orders(exchange_code)
         except Exception as exc:
-            logger.warning("Failed to fetch pending orders for %s: %s", exchange_code, exc)
+            logger.warning(
+                "Failed to fetch pending orders for %s: %s", exchange_code, exc
+            )
             continue

         for order in orders:
@@ -2430,21 +2448,26 @@ async def handle_overseas_pending_orders(
                         outcome="cancelled",
                     )
                 except Exception as notify_exc:
-                    logger.warning("notify_unfilled_order failed: %s", notify_exc)
+                    logger.warning(
+                        "notify_unfilled_order failed: %s", notify_exc
+                    )
             else:
                 # First unfilled SELL → resubmit at last * 0.996 (-0.4%).
                 try:
                     price_data = await overseas_broker.get_overseas_price(
                         order_exchange, stock_code
                     )
-                    last_price = float(price_data.get("output", {}).get("last", "0") or "0")
+                    last_price = float(
+                        price_data.get("output", {}).get("last", "0") or "0"
+                    )
                     if last_price <= 0:
-                        raise ValueError(f"Invalid price ({last_price}) for {stock_code}")
+                        raise ValueError(
+                            f"Invalid price ({last_price}) for {stock_code}"
+                        )
                     new_price = round(last_price * 0.996, 4)
                     market_info = next(
                         (
-                            m
-                            for m in MARKETS.values()
+                            m for m in MARKETS.values()
                             if m.exchange_code == order_exchange and not m.is_domestic
                         ),
                         None,
@@ -2462,7 +2485,9 @@ async def handle_overseas_pending_orders(
                         quantity=nccs_qty,
                         price=new_price,
                     )
-                    sell_resubmit_counts[key] = sell_resubmit_counts.get(key, 0) + 1
+                    sell_resubmit_counts[key] = (
+                        sell_resubmit_counts.get(key, 0) + 1
+                    )
                     try:
                         await telegram.notify_unfilled_order(
                             stock_code=stock_code,
@@ -2473,7 +2498,9 @@ async def handle_overseas_pending_orders(
                             new_price=new_price,
                         )
                     except Exception as notify_exc:
-                        logger.warning("notify_unfilled_order failed: %s", notify_exc)
+                        logger.warning(
+                            "notify_unfilled_order failed: %s", notify_exc
+                        )
                 except Exception as exc:
                     logger.error(
                         "SELL resubmit failed for %s %s: %s",
@@ -2632,16 +2659,13 @@ async def run_daily_session(
                 logger.warning("Playbook notification failed: %s", exc)
             logger.info(
                 "Generated playbook for %s: %d stocks, %d scenarios",
-                market.code,
-                playbook.stock_count,
-                playbook.scenario_count,
+                market.code, playbook.stock_count, playbook.scenario_count,
             )
         except Exception as exc:
             logger.error("Playbook generation failed for %s: %s", market.code, exc)
             try:
                 await telegram.notify_playbook_failed(
-                    market=market.code,
-                    reason=str(exc)[:200],
+                    market=market.code, reason=str(exc)[:200],
                 )
             except Exception as notify_exc:
                 logger.warning("Playbook failed notification error: %s", notify_exc)
@@ -2652,11 +2676,13 @@ async def run_daily_session(
     for stock_code in watchlist:
         try:
             if market.is_domestic:
-                current_price, price_change_pct, foreigner_net = await _retry_connection(
+                current_price, price_change_pct, foreigner_net = (
+                    await _retry_connection(
                         broker.get_current_price,
                         stock_code,
                         label=stock_code,
                     )
+                )
             else:
                 price_data = await _retry_connection(
                     overseas_broker.get_overseas_price,
@@ -2664,7 +2690,9 @@ async def run_daily_session(
                     stock_code,
                     label=f"{stock_code}@{market.exchange_code}",
                 )
-                current_price = safe_float(price_data.get("output", {}).get("last", "0"))
+                current_price = safe_float(
+                    price_data.get("output", {}).get("last", "0")
+                )
                 # Fallback: if price API returns 0, use scanner candidate price
                 if current_price <= 0:
                     cand_lookup = candidate_map.get(stock_code)
@@ -2676,7 +2704,9 @@ async def run_daily_session(
                     )
                     current_price = cand_lookup.price
                 foreigner_net = 0.0
-                price_change_pct = safe_float(price_data.get("output", {}).get("rate", "0"))
+                price_change_pct = safe_float(
+                    price_data.get("output", {}).get("rate", "0")
+                )
                 # Fall back to scanner candidate price if API returns 0.
                 if current_price <= 0:
                     cand_lookup = candidate_map.get(stock_code)
@@ -2739,9 +2769,15 @@ async def run_daily_session(

     if market.is_domestic:
         output2 = balance_data.get("output2", [{}])
-        total_eval = safe_float(output2[0].get("tot_evlu_amt", "0")) if output2 else 0
-        total_cash = safe_float(output2[0].get("dnca_tot_amt", "0")) if output2 else 0
-        purchase_total = safe_float(output2[0].get("pchs_amt_smtl_amt", "0")) if output2 else 0
+        total_eval = safe_float(
+            output2[0].get("tot_evlu_amt", "0")
+        ) if output2 else 0
+        total_cash = safe_float(
+            output2[0].get("dnca_tot_amt", "0")
+        ) if output2 else 0
+        purchase_total = safe_float(
+            output2[0].get("pchs_amt_smtl_amt", "0")
+        ) if output2 else 0
     else:
         output2 = balance_data.get("output2", [{}])
         if isinstance(output2, list) and output2:
@@ -2752,15 +2788,18 @@ async def run_daily_session(
             balance_info = {}

         total_eval = safe_float(balance_info.get("frcr_evlu_tota", "0") or "0")
-        purchase_total = safe_float(balance_info.get("frcr_buy_amt_smtl", "0") or "0")
+        purchase_total = safe_float(
+            balance_info.get("frcr_buy_amt_smtl", "0") or "0"
+        )

         # Fetch available foreign currency cash via inquire-psamount (TTTS3007R/VTTS3007R).
-        # TTTS3012R output2 does not include a cash/deposit field.
-        # frcr_dncl_amt_2 does not exist.
+        # TTTS3012R output2 does not include a cash/deposit field — frcr_dncl_amt_2 does not exist.
         # Use the first stock with a valid price as the reference for the buying power query.
         # Source: 한국투자증권 오픈API 전체문서 (20260221) — '해외주식 매수가능금액조회' 시트
         total_cash = 0.0
-        ref_stock = next((s for s in stocks_data if s.get("current_price", 0) > 0), None)
+        ref_stock = next(
+            (s for s in stocks_data if s.get("current_price", 0) > 0), None
+        )
         if ref_stock:
             try:
                 ps_data = await overseas_broker.get_overseas_buying_power(
@@ -2780,7 +2819,11 @@ async def run_daily_session(

     # Paper mode fallback: VTS overseas balance API often fails for many accounts.
     # Only activate in paper mode — live mode must use real balance from KIS.
-    if total_cash <= 0 and settings.MODE == "paper" and settings.PAPER_OVERSEAS_CASH > 0:
+    if (
+        total_cash <= 0
+        and settings.MODE == "paper"
+        and settings.PAPER_OVERSEAS_CASH > 0
+    ):
         total_cash = settings.PAPER_OVERSEAS_CASH

     # Capture the day's opening portfolio value on the first market processed
@@ -2813,17 +2856,13 @@ async def run_daily_session(
     # Evaluate scenarios for each stock (local, no API calls)
     logger.info(
         "Evaluating %d stocks against playbook for %s",
-        len(stocks_data),
-        market.name,
+        len(stocks_data), market.name,
     )
     for stock_data in stocks_data:
         stock_code = stock_data["stock_code"]
         stock_playbook = playbook.get_stock_playbook(stock_code)
         match = scenario_engine.evaluate(
-            playbook,
-            stock_code,
-            stock_data,
-            portfolio_data,
+            playbook, stock_code, stock_data, portfolio_data,
         )
         decision = TradeDecision(
             action=match.action.value,
@@ -2930,13 +2969,9 @@ async def run_daily_session(
             stock_playbook=stock_playbook,
             settings=settings,
         )
-        if (
-            daily_open
-            and decision.action == "HOLD"
-            and _should_force_exit_for_overnight(
+        if daily_open and decision.action == "HOLD" and _should_force_exit_for_overnight(
             market=market,
             settings=settings,
-            )
         ):
             decision = TradeDecision(
                 action="SELL",
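The balance-parsing hunks above lean on the project's `safe_float` helper to read KIS `output2` string fields defensively. Its exact implementation is not shown in this diff; a minimal equivalent, for illustration only, looks like:

```python
def safe_float(value, default=0.0):
    """Illustrative stand-in for the project's safe_float helper:
    parse broker API string fields, falling back on bad input."""
    try:
        return float(value)
    except (TypeError, ValueError):
        return default

# Sample output2 row mimicking the KIS balance response fields used above.
output2 = [{"tot_evlu_amt": "1250000", "dnca_tot_amt": ""}]
total_eval = safe_float(output2[0].get("tot_evlu_amt", "0")) if output2 else 0
total_cash = safe_float(output2[0].get("dnca_tot_amt", "0")) if output2 else 0
print(total_eval, total_cash)  # empty string falls back to 0.0
```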
@@ -3028,21 +3063,16 @@ async def run_daily_session(
                 )
                 continue
             order_amount = stock_data["current_price"] * quantity
-            fx_blocked, remaining_cash, required_buffer = (
-                _should_block_overseas_buy_for_fx_buffer(
+            fx_blocked, remaining_cash, required_buffer = _should_block_overseas_buy_for_fx_buffer(
                 market=market,
                 action=decision.action,
                 total_cash=total_cash,
                 order_amount=order_amount,
                 settings=settings,
             )
-            )
             if fx_blocked:
                 logger.warning(
-                    (
-                        "Skip BUY %s (%s): FX buffer guard "
-                        "(remaining=%.2f, required=%.2f, cash=%.2f, order=%.2f)"
-                    ),
+                    "Skip BUY %s (%s): FX buffer guard (remaining=%.2f, required=%.2f, cash=%.2f, order=%.2f)",
                     stock_code,
                     market.name,
                     remaining_cash,
@@ -3060,10 +3090,7 @@ async def run_daily_session(
             if now < daily_cooldown_until:
                 remaining = int(daily_cooldown_until - now)
                 logger.info(
-                    (
-                        "Skip BUY %s (%s): insufficient-balance cooldown active "
-                        "(%ds remaining)"
-                    ),
+                    "Skip BUY %s (%s): insufficient-balance cooldown active (%ds remaining)",
                     stock_code,
                     market.name,
                     remaining,
@@ -3122,9 +3149,13 @@ async def run_daily_session(
                 # Use limit orders (지정가) for domestic stocks.
                 # KRX tick rounding applied via kr_round_down.
                 if decision.action == "BUY":
-                    order_price = kr_round_down(stock_data["current_price"] * 1.002)
+                    order_price = kr_round_down(
+                        stock_data["current_price"] * 1.002
+                    )
                 else:
-                    order_price = kr_round_down(stock_data["current_price"] * 0.998)
+                    order_price = kr_round_down(
+                        stock_data["current_price"] * 0.998
+                    )
                 try:
                     validate_order_policy(
                         market=market,
@@ -3229,7 +3260,9 @@ async def run_daily_session(
                     except Exception as exc:
                         logger.warning("Telegram notification failed: %s", exc)
                 except Exception as exc:
-                    logger.error("Order execution failed for %s: %s", stock_code, exc)
+                    logger.error(
+                        "Order execution failed for %s: %s", stock_code, exc
+                    )
                     continue

                 if decision.action == "SELL" and order_succeeded:
@@ -3253,9 +3286,7 @@ async def run_daily_session(
                         accuracy=1 if trade_pnl > 0 else 0,
                     )
                     if trade_pnl < 0:
-                        cooldown_key = _stoploss_cooldown_key(
-                            market=market, stock_code=stock_code
-                        )
+                        cooldown_key = _stoploss_cooldown_key(market=market, stock_code=stock_code)
                         cooldown_minutes = _stoploss_cooldown_minutes(
                             settings,
                             market=market,
@@ -3338,8 +3369,7 @@ async def _handle_market_close(


 def _run_context_scheduler(
-    scheduler: ContextScheduler,
-    now: datetime | None = None,
+    scheduler: ContextScheduler, now: datetime | None = None,
 ) -> None:
     """Run periodic context scheduler tasks and log when anything executes."""
     result = scheduler.run_if_due(now=now)
@@ -3408,7 +3438,6 @@ def _start_dashboard_server(settings: Settings) -> threading.Thread | None:
     # reported synchronously (avoids the misleading "started" → "failed" log pair).
     try:
         import uvicorn  # noqa: F401
-
         from src.dashboard import create_dashboard_app  # noqa: F401
     except ImportError as exc:
         logger.warning("Dashboard server unavailable (missing dependency): %s", exc)
@@ -3417,7 +3446,6 @@ def _start_dashboard_server(settings: Settings) -> threading.Thread | None:
     def _serve() -> None:
         try:
             import uvicorn
-
             from src.dashboard import create_dashboard_app

             app = create_dashboard_app(settings.DB_PATH, mode=settings.MODE)
@@ -3558,7 +3586,8 @@ async def run(settings: Settings) -> None:
         pause_trading.set()
         logger.info("Trading resumed via Telegram command")
         await telegram.send_message(
-            "<b>▶️ Trading Resumed</b>\n\nTrading operations have been restarted."
+            "<b>▶️ Trading Resumed</b>\n\n"
+            "Trading operations have been restarted."
         )

     async def handle_status() -> None:
@@ -3601,7 +3630,9 @@ async def run(settings: Settings) -> None:

         except Exception as exc:
             logger.error("Error in /status handler: %s", exc)
-            await telegram.send_message("<b>⚠️ Error</b>\n\nFailed to retrieve trading status.")
+            await telegram.send_message(
+                "<b>⚠️ Error</b>\n\nFailed to retrieve trading status."
+            )

     async def handle_positions() -> None:
         """Handle /positions command - show account summary."""
@@ -3612,7 +3643,8 @@ async def run(settings: Settings) -> None:

             if not output2:
                 await telegram.send_message(
-                    "<b>💼 Account Summary</b>\n\nNo balance information available."
+                    "<b>💼 Account Summary</b>\n\n"
+                    "No balance information available."
                 )
                 return

@@ -3641,7 +3673,9 @@ async def run(settings: Settings) -> None:

         except Exception as exc:
             logger.error("Error in /positions handler: %s", exc)
-            await telegram.send_message("<b>⚠️ Error</b>\n\nFailed to retrieve positions.")
+            await telegram.send_message(
+                "<b>⚠️ Error</b>\n\nFailed to retrieve positions."
+            )

     async def handle_report() -> None:
         """Handle /report command - show daily summary metrics."""
@@ -3685,7 +3719,9 @@ async def run(settings: Settings) -> None:
             )
         except Exception as exc:
             logger.error("Error in /report handler: %s", exc)
-            await telegram.send_message("<b>⚠️ Error</b>\n\nFailed to generate daily report.")
+            await telegram.send_message(
+                "<b>⚠️ Error</b>\n\nFailed to generate daily report."
+            )

     async def handle_scenarios() -> None:
         """Handle /scenarios command - show today's playbook scenarios."""
@@ -3734,7 +3770,9 @@ async def run(settings: Settings) -> None:
             await telegram.send_message("\n".join(lines).strip())
         except Exception as exc:
             logger.error("Error in /scenarios handler: %s", exc)
-            await telegram.send_message("<b>⚠️ Error</b>\n\nFailed to retrieve scenarios.")
+            await telegram.send_message(
+                "<b>⚠️ Error</b>\n\nFailed to retrieve scenarios."
+            )

     async def handle_review() -> None:
         """Handle /review command - show recent scorecards."""
@@ -3750,7 +3788,9 @@ async def run(settings: Settings) -> None:
             ).fetchall()

             if not rows:
-                await telegram.send_message("<b>📝 Recent Reviews</b>\n\nNo scorecards available.")
+                await telegram.send_message(
+                    "<b>📝 Recent Reviews</b>\n\nNo scorecards available."
+                )
                 return

             lines = ["<b>📝 Recent Reviews</b>", ""]
@@ -3768,7 +3808,9 @@ async def run(settings: Settings) -> None:
             await telegram.send_message("\n".join(lines))
         except Exception as exc:
             logger.error("Error in /review handler: %s", exc)
-            await telegram.send_message("<b>⚠️ Error</b>\n\nFailed to retrieve reviews.")
+            await telegram.send_message(
+                "<b>⚠️ Error</b>\n\nFailed to retrieve reviews."
+            )

     async def handle_notify(args: list[str]) -> None:
         """Handle /notify [key] [on|off] — query or change notification filters."""
@@ -3803,7 +3845,8 @@ async def run(settings: Settings) -> None:
         else:
             valid = ", ".join(list(status.keys()) + ["all"])
             await telegram.send_message(
-                f"❌ 알 수 없는 키: <code>{key}</code>\n유효한 키: {valid}"
+                f"❌ 알 수 없는 키: <code>{key}</code>\n"
+                f"유효한 키: {valid}"
             )
             return

@@ -3815,22 +3858,30 @@ async def run(settings: Settings) -> None:
         value = toggle == "on"
         if telegram.set_notification(key, value):
             icon = "✅" if value else "❌"
-            label = "전체 알림" if key == "all" else f"<code>{key}</code> 알림"
+            label = f"전체 알림" if key == "all" else f"<code>{key}</code> 알림"
             state = "켜짐" if value else "꺼짐"
             await telegram.send_message(f"{icon} {label} → {state}")
             logger.info("Notification filter changed via Telegram: %s=%s", key, value)
         else:
             valid = ", ".join(list(telegram.filter_status().keys()) + ["all"])
-            await telegram.send_message(f"❌ 알 수 없는 키: <code>{key}</code>\n유효한 키: {valid}")
+            await telegram.send_message(
+                f"❌ 알 수 없는 키: <code>{key}</code>\n"
+                f"유효한 키: {valid}"
+            )

     async def handle_dashboard() -> None:
         """Handle /dashboard command - show dashboard URL if enabled."""
         if not settings.DASHBOARD_ENABLED:
-            await telegram.send_message("<b>🖥️ Dashboard</b>\n\nDashboard is not enabled.")
+            await telegram.send_message(
+                "<b>🖥️ Dashboard</b>\n\nDashboard is not enabled."
+            )
             return

         url = f"http://{settings.DASHBOARD_HOST}:{settings.DASHBOARD_PORT}"
|
url = f"http://{settings.DASHBOARD_HOST}:{settings.DASHBOARD_PORT}"
|
||||||
await telegram.send_message(f"<b>🖥️ Dashboard</b>\n\n<b>URL:</b> {url}")
|
await telegram.send_message(
|
||||||
|
"<b>🖥️ Dashboard</b>\n\n"
|
||||||
|
f"<b>URL:</b> {url}"
|
||||||
|
)
|
||||||
|
|
||||||
command_handler.register_command("help", handle_help)
|
command_handler.register_command("help", handle_help)
|
||||||
command_handler.register_command("stop", handle_stop)
|
command_handler.register_command("stop", handle_stop)
|
||||||
@@ -4131,7 +4182,9 @@ async def run(settings: Settings) -> None:
|
|||||||
)
|
)
|
||||||
|
|
||||||
# Store candidates per market for selection context logging
|
# Store candidates per market for selection context logging
|
||||||
scan_candidates[market.code] = {c.stock_code: c for c in candidates}
|
scan_candidates[market.code] = {
|
||||||
|
c.stock_code: c for c in candidates
|
||||||
|
}
|
||||||
|
|
||||||
logger.info(
|
logger.info(
|
||||||
"Smart Scanner: Found %d candidates for %s: %s",
|
"Smart Scanner: Found %d candidates for %s: %s",
|
||||||
@@ -4141,7 +4194,9 @@ async def run(settings: Settings) -> None:
|
|||||||
)
|
)
|
||||||
|
|
||||||
# Get market-local date for playbook keying
|
# Get market-local date for playbook keying
|
||||||
market_today = datetime.now(market.timezone).date()
|
market_today = datetime.now(
|
||||||
|
market.timezone
|
||||||
|
).date()
|
||||||
|
|
||||||
# Load or generate playbook (1 Gemini call per market per day)
|
# Load or generate playbook (1 Gemini call per market per day)
|
||||||
if market.code not in playbooks:
|
if market.code not in playbooks:
|
||||||
@@ -4179,8 +4234,7 @@ async def run(settings: Settings) -> None:
|
|||||||
except Exception as exc:
|
except Exception as exc:
|
||||||
logger.error(
|
logger.error(
|
||||||
"Playbook generation failed for %s: %s",
|
"Playbook generation failed for %s: %s",
|
||||||
market.code,
|
market.code, exc,
|
||||||
exc,
|
|
||||||
)
|
)
|
||||||
try:
|
try:
|
||||||
await telegram.notify_playbook_failed(
|
await telegram.notify_playbook_failed(
|
||||||
@@ -4225,8 +4279,7 @@ async def run(settings: Settings) -> None:
|
|||||||
except Exception as exc:
|
except Exception as exc:
|
||||||
logger.warning(
|
logger.warning(
|
||||||
"Failed to fetch holdings for %s: %s — skipping holdings merge",
|
"Failed to fetch holdings for %s: %s — skipping holdings merge",
|
||||||
market.name,
|
market.name, exc,
|
||||||
exc,
|
|
||||||
)
|
)
|
||||||
held_codes = []
|
held_codes = []
|
||||||
|
|
||||||
@@ -4235,8 +4288,7 @@ async def run(settings: Settings) -> None:
|
|||||||
if extra_held:
|
if extra_held:
|
||||||
logger.info(
|
logger.info(
|
||||||
"Holdings added to loop for %s (not in scanner): %s",
|
"Holdings added to loop for %s (not in scanner): %s",
|
||||||
market.name,
|
market.name, extra_held,
|
||||||
extra_held,
|
|
||||||
)
|
)
|
||||||
|
|
||||||
if not stock_codes:
|
if not stock_codes:
|
||||||
|
|||||||
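The `send_message` hunks above only re-wrap long calls: Python concatenates adjacent string literals (f-strings included) at compile time, so the wrapped form sends a byte-identical message. A minimal sketch of that equivalence (the `key`/`valid` values here are illustrative, not taken from the diff):

```python
key, valid = "scan", "scan, trade, all"

# Single long literal, as in the old version of the handler.
one_line = f"❌ 알 수 없는 키: <code>{key}</code>\n유효한 키: {valid}"

# Adjacent f-string literals are joined at compile time; only layout changes.
wrapped = (
    f"❌ 알 수 없는 키: <code>{key}</code>\n"
    f"유효한 키: {valid}"
)

assert one_line == wrapped
```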
@@ -211,7 +211,9 @@ def get_open_markets(
         return is_market_open(market, now)

     open_markets = [
-        MARKETS[code] for code in enabled_markets if code in MARKETS and is_available(MARKETS[code])
+        MARKETS[code]
+        for code in enabled_markets
+        if code in MARKETS and is_available(MARKETS[code])
     ]

     return sorted(open_markets, key=lambda m: m.code)
@@ -280,7 +282,9 @@ def get_next_market_open(
     # Calculate next open time for this market
     for days_ahead in range(7):  # Check next 7 days
         check_date = market_now.date() + timedelta(days=days_ahead)
-        check_datetime = datetime.combine(check_date, market.open_time, tzinfo=market.timezone)
+        check_datetime = datetime.combine(
+            check_date, market.open_time, tzinfo=market.timezone
+        )

         # Skip weekends
         if check_datetime.weekday() >= 5:
@@ -4,7 +4,7 @@ import asyncio
 import logging
 import time
 from collections.abc import Awaitable, Callable
-from dataclasses import dataclass
+from dataclasses import dataclass, fields
 from enum import Enum
 from typing import ClassVar

@@ -136,14 +136,14 @@ class TelegramClient:
         self._enabled = enabled
         self._rate_limiter = LeakyBucket(rate=rate_limit)
         self._session: aiohttp.ClientSession | None = None
-        self._filter = (
-            notification_filter if notification_filter is not None else NotificationFilter()
-        )
+        self._filter = notification_filter if notification_filter is not None else NotificationFilter()

         if not enabled:
             logger.info("Telegram notifications disabled via configuration")
         elif bot_token is None or chat_id is None:
-            logger.warning("Telegram notifications disabled (missing bot_token or chat_id)")
+            logger.warning(
+                "Telegram notifications disabled (missing bot_token or chat_id)"
+            )
             self._enabled = False
         else:
             logger.info("Telegram notifications enabled for chat_id=%s", chat_id)
@@ -209,12 +209,14 @@ class TelegramClient:
             async with session.post(url, json=payload) as resp:
                 if resp.status != 200:
                     error_text = await resp.text()
-                    logger.error("Telegram API error (status=%d): %s", resp.status, error_text)
+                    logger.error(
+                        "Telegram API error (status=%d): %s", resp.status, error_text
+                    )
                     return False
                 logger.debug("Telegram message sent: %s", text[:50])
                 return True

-        except TimeoutError:
+        except asyncio.TimeoutError:
             logger.error("Telegram message timeout")
             return False
         except aiohttp.ClientError as exc:
@@ -303,7 +305,9 @@ class TelegramClient:
             NotificationMessage(priority=NotificationPriority.LOW, message=message)
         )

-    async def notify_circuit_breaker(self, pnl_pct: float, threshold: float) -> None:
+    async def notify_circuit_breaker(
+        self, pnl_pct: float, threshold: float
+    ) -> None:
         """
         Notify circuit breaker activation.

@@ -350,7 +354,9 @@ class TelegramClient:
             NotificationMessage(priority=NotificationPriority.HIGH, message=message)
         )

-    async def notify_system_start(self, mode: str, enabled_markets: list[str]) -> None:
+    async def notify_system_start(
+        self, mode: str, enabled_markets: list[str]
+    ) -> None:
         """
         Notify system startup.

@@ -363,7 +369,9 @@ class TelegramClient:
         mode_emoji = "📝" if mode == "paper" else "💰"
         markets_str = ", ".join(enabled_markets)
         message = (
-            f"<b>{mode_emoji} System Started</b>\nMode: {mode.upper()}\nMarkets: {markets_str}"
+            f"<b>{mode_emoji} System Started</b>\n"
+            f"Mode: {mode.upper()}\n"
+            f"Markets: {markets_str}"
         )
         await self._send_notification(
             NotificationMessage(priority=NotificationPriority.MEDIUM, message=message)
@@ -437,7 +445,11 @@ class TelegramClient:
         """
         if not self._filter.playbook:
             return
-        message = f"<b>Playbook Failed</b>\nMarket: {market}\nReason: {reason[:200]}"
+        message = (
+            f"<b>Playbook Failed</b>\n"
+            f"Market: {market}\n"
+            f"Reason: {reason[:200]}"
+        )
         await self._send_notification(
             NotificationMessage(priority=NotificationPriority.HIGH, message=message)
         )
@@ -457,7 +469,9 @@ class TelegramClient:
             if "circuit breaker" in reason.lower()
             else NotificationPriority.MEDIUM
         )
-        await self._send_notification(NotificationMessage(priority=priority, message=message))
+        await self._send_notification(
+            NotificationMessage(priority=priority, message=message)
+        )

     async def notify_unfilled_order(
         self,
@@ -482,7 +496,11 @@ class TelegramClient:
             return
         # SELL resubmit is high priority — position liquidation at risk.
         # BUY cancel is medium priority — only cash is freed.
-        priority = NotificationPriority.HIGH if action == "SELL" else NotificationPriority.MEDIUM
+        priority = (
+            NotificationPriority.HIGH
+            if action == "SELL"
+            else NotificationPriority.MEDIUM
+        )
         outcome_emoji = "🔄" if outcome == "resubmitted" else "❌"
         outcome_label = "재주문" if outcome == "resubmitted" else "취소됨"
         action_emoji = "🔴" if action == "SELL" else "🟢"
@@ -497,7 +515,9 @@ class TelegramClient:
         message = "\n".join(lines)
         await self._send_notification(NotificationMessage(priority=priority, message=message))

-    async def notify_error(self, error_type: str, error_msg: str, context: str) -> None:
+    async def notify_error(
+        self, error_type: str, error_msg: str, context: str
+    ) -> None:
         """
         Notify system error.

@@ -521,7 +541,9 @@ class TelegramCommandHandler:
 class TelegramCommandHandler:
     """Handles incoming Telegram commands via long polling."""

-    def __init__(self, client: TelegramClient, polling_interval: float = 1.0) -> None:
+    def __init__(
+        self, client: TelegramClient, polling_interval: float = 1.0
+    ) -> None:
         """
         Initialize command handler.

@@ -537,7 +559,9 @@ class TelegramCommandHandler:
         self._polling_task: asyncio.Task[None] | None = None
         self._running = False

-    def register_command(self, command: str, handler: Callable[[], Awaitable[None]]) -> None:
+    def register_command(
+        self, command: str, handler: Callable[[], Awaitable[None]]
+    ) -> None:
         """
         Register a command handler (no arguments).

@@ -648,7 +672,7 @@ class TelegramCommandHandler:

             return updates

-        except TimeoutError:
+        except asyncio.TimeoutError:
             logger.debug("getUpdates timeout (normal)")
             return []
         except aiohttp.ClientError as exc:
@@ -673,7 +697,9 @@ class TelegramCommandHandler:
         # Verify chat_id matches configured chat
         chat_id = str(message.get("chat", {}).get("id", ""))
         if chat_id != self._client._chat_id:
-            logger.warning("Ignoring command from unauthorized chat_id: %s", chat_id)
+            logger.warning(
+                "Ignoring command from unauthorized chat_id: %s", chat_id
+            )
             return

         # Extract command text
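The `except TimeoutError:` → `except asyncio.TimeoutError:` changes above are more than cosmetic on older interpreters: on Python 3.11+ the two names are the same class, but on 3.10 and earlier `asyncio.TimeoutError` is a separate `Exception` subclass that a bare `except TimeoutError:` would not catch. A small sketch of the behavior the handler relies on (the `slow` coroutine is illustrative):

```python
import asyncio
import sys

# On 3.11+ asyncio.TimeoutError is the builtin TimeoutError; earlier it is a
# distinct Exception subclass, which is why the qualified spelling is safer.
if sys.version_info >= (3, 11):
    assert asyncio.TimeoutError is TimeoutError

async def slow() -> None:
    await asyncio.sleep(10)

async def main() -> str:
    try:
        await asyncio.wait_for(slow(), timeout=0.01)
    except asyncio.TimeoutError:
        return "timed out"
    return "finished"

print(asyncio.run(main()))  # → timed out
```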
@@ -8,12 +8,12 @@ Defines the data contracts for the proactive strategy system:
 from __future__ import annotations

 from datetime import UTC, date, datetime
-from enum import StrEnum
+from enum import Enum

 from pydantic import BaseModel, Field, field_validator


-class ScenarioAction(StrEnum):
+class ScenarioAction(str, Enum):
     """Actions that can be taken by scenarios."""

     BUY = "BUY"
@@ -22,7 +22,7 @@ class ScenarioAction(StrEnum):
     REDUCE_ALL = "REDUCE_ALL"


-class MarketOutlook(StrEnum):
+class MarketOutlook(str, Enum):
     """AI's assessment of market direction."""

     BULLISH = "bullish"
@@ -32,7 +32,7 @@ class MarketOutlook(StrEnum):
     BEARISH = "bearish"


-class PlaybookStatus(StrEnum):
+class PlaybookStatus(str, Enum):
     """Lifecycle status of a playbook."""

     PENDING = "pending"
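The `StrEnum` → `(str, Enum)` hunks above trade the 3.11-only base class for the classic mixin, presumably for older-interpreter compatibility: members still compare equal to plain strings and serialize cleanly, and the main observable difference is how `str()` renders a member. A sketch under that assumption:

```python
from enum import Enum

class ScenarioAction(str, Enum):
    BUY = "BUY"
    SELL = "SELL"
    HOLD = "HOLD"

# String comparison and value lookup work the same as with 3.11's StrEnum.
assert ScenarioAction.BUY == "BUY"
assert ScenarioAction("SELL") is ScenarioAction.SELL
assert ScenarioAction.BUY.value == "BUY"

# One behavioral difference: the (str, Enum) mixin keeps the enum-style
# str(), while StrEnum would yield the bare value "BUY".
assert str(ScenarioAction.BUY) == "ScenarioAction.BUY"
```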
@@ -6,6 +6,7 @@ Designed for the pre-market strategy system (one playbook per market per day).

 from __future__ import annotations

+import json
 import logging
 import sqlite3
 from datetime import date
@@ -52,10 +53,8 @@ class PlaybookStore:
         row_id = cursor.lastrowid or 0
         logger.info(
             "Saved playbook for %s/%s (%d stocks, %d scenarios)",
-            playbook.date,
-            playbook.market,
-            playbook.stock_count,
-            playbook.scenario_count,
+            playbook.date, playbook.market,
+            playbook.stock_count, playbook.scenario_count,
         )
         return row_id

@@ -6,10 +6,10 @@ State progression is monotonic (promotion-only) except terminal EXITED.
 from __future__ import annotations

 from dataclasses import dataclass
-from enum import StrEnum
+from enum import Enum


-class PositionState(StrEnum):
+class PositionState(str, Enum):
     HOLDING = "HOLDING"
     BE_LOCK = "BE_LOCK"
     ARMED = "ARMED"
@@ -40,7 +40,12 @@ def evaluate_exit_first(inp: StateTransitionInput) -> bool:

     EXITED must be evaluated before any promotion.
     """
-    return inp.hard_stop_hit or inp.trailing_stop_hit or inp.model_exit_signal or inp.be_lock_threat
+    return (
+        inp.hard_stop_hit
+        or inp.trailing_stop_hit
+        or inp.model_exit_signal
+        or inp.be_lock_threat
+    )


 def promote_state(current: PositionState, inp: StateTransitionInput) -> PositionState:
@@ -124,14 +124,12 @@ class PreMarketPlanner:

         # 4. Parse response
         playbook = self._parse_response(
-            decision.rationale,
-            today,
-            market,
-            candidates,
-            cross_market,
+            decision.rationale, today, market, candidates, cross_market,
             current_holdings=current_holdings,
         )
-        playbook_with_tokens = playbook.model_copy(update={"token_count": decision.token_count})
+        playbook_with_tokens = playbook.model_copy(
+            update={"token_count": decision.token_count}
+        )
         logger.info(
             "Generated playbook for %s: %d stocks, %d scenarios, %d tokens",
             market,
@@ -148,9 +146,7 @@ class PreMarketPlanner:
         return self._empty_playbook(today, market)

     def build_cross_market_context(
-        self,
-        target_market: str,
-        today: date | None = None,
+        self, target_market: str, today: date | None = None,
     ) -> CrossMarketContext | None:
         """Build cross-market context from the other market's L6 data.

@@ -196,9 +192,7 @@ class PreMarketPlanner:
         )

     def build_self_market_scorecard(
-        self,
-        market: str,
-        today: date | None = None,
+        self, market: str, today: date | None = None,
     ) -> dict[str, Any] | None:
         """Build previous-day scorecard for the same market."""
         if today is None:
@@ -326,18 +320,18 @@ class PreMarketPlanner:
             f"{context_text}\n"
             f"## Instructions\n"
             f"Return a JSON object with this exact structure:\n"
-            f"{{\n"
+            f'{{\n'
             f' "market_outlook": "bullish|neutral_to_bullish|neutral'
             f'|neutral_to_bearish|bearish",\n'
             f' "global_rules": [\n'
             f' {{"condition": "portfolio_pnl_pct < -2.0",'
             f' "action": "REDUCE_ALL", "rationale": "..."}}\n'
-            f" ],\n"
+            f' ],\n'
             f' "stocks": [\n'
-            f" {{\n"
+            f' {{\n'
             f' "stock_code": "...",\n'
             f' "scenarios": [\n'
-            f" {{\n"
+            f' {{\n'
             f' "condition": {{"rsi_below": 30, "volume_ratio_above": 2.0,'
             f' "unrealized_pnl_pct_above": 3.0, "holding_days_above": 5}},\n'
             f' "action": "BUY|SELL|HOLD",\n'
@@ -346,11 +340,11 @@ class PreMarketPlanner:
             f' "stop_loss_pct": -2.0,\n'
             f' "take_profit_pct": 3.0,\n'
             f' "rationale": "..."\n'
-            f" }}\n"
+            f' }}\n'
-            f" ]\n"
+            f' ]\n'
-            f" }}\n"
+            f' }}\n'
-            f" ]\n"
+            f' ]\n'
-            f"}}\n\n"
+            f'}}\n\n'
             f"Rules:\n"
             f"- Max {max_scenarios} scenarios per stock\n"
             f"- Candidates list is the primary source for BUY candidates\n"
@@ -581,7 +575,8 @@ class PreMarketPlanner:
             stop_loss_pct=-3.0,
             take_profit_pct=5.0,
             rationale=(
-                f"Rule-based BUY: oversold signal, RSI={c.rsi:.0f} (fallback planner)"
+                f"Rule-based BUY: oversold signal, "
+                f"RSI={c.rsi:.0f} (fallback planner)"
             ),
         )
     )
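In the prompt-template hunk above only the quoting style changes; the doubled braces are the load-bearing detail, since `{{`/`}}` emit literal braces inside an f-string and keep the JSON skeleton out of the replacement-field syntax. A minimal sketch of that mechanism (the `max_scenarios` value is illustrative):

```python
max_scenarios = 3

# Doubled braces emit literal { and } inside an f-string, so the JSON
# skeleton in the prompt is not parsed as replacement fields.
prompt = (
    f"Return a JSON object with this exact structure:\n"
    f"{{\n"
    f'  "max_scenarios": {max_scenarios}\n'
    f"}}\n"
)

assert prompt == 'Return a JSON object with this exact structure:\n{\n  "max_scenarios": 3\n}\n'
```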
@@ -107,9 +107,7 @@ class ScenarioEngine:
         # 2. Find stock playbook
         stock_pb = playbook.get_stock_playbook(stock_code)
         if stock_pb is None:
-            logger.debug(
-                "No playbook for %s — defaulting to %s", stock_code, playbook.default_action
-            )
+            logger.debug("No playbook for %s — defaulting to %s", stock_code, playbook.default_action)
             return ScenarioMatch(
                 stock_code=stock_code,
                 matched_scenario=None,
@@ -137,9 +135,7 @@ class ScenarioEngine:
         )

         # 4. No match — default action
-        logger.debug(
-            "No scenario matched for %s — defaulting to %s", stock_code, playbook.default_action
-        )
+        logger.debug("No scenario matched for %s — defaulting to %s", stock_code, playbook.default_action)
         return ScenarioMatch(
             stock_code=stock_code,
             matched_scenario=None,
@@ -202,27 +198,17 @@ class ScenarioEngine:
             checks.append(price is not None and price < condition.price_below)

         price_change_pct = self._safe_float(market_data.get("price_change_pct"))
-        if (
-            condition.price_change_pct_above is not None
-            or condition.price_change_pct_below is not None
-        ):
+        if condition.price_change_pct_above is not None or condition.price_change_pct_below is not None:
             if "price_change_pct" not in market_data:
                 self._warn_missing_key("price_change_pct")
             if condition.price_change_pct_above is not None:
-                checks.append(
-                    price_change_pct is not None and price_change_pct > condition.price_change_pct_above
-                )
+                checks.append(price_change_pct is not None and price_change_pct > condition.price_change_pct_above)
             if condition.price_change_pct_below is not None:
-                checks.append(
-                    price_change_pct is not None and price_change_pct < condition.price_change_pct_below
-                )
+                checks.append(price_change_pct is not None and price_change_pct < condition.price_change_pct_below)

         # Position-aware conditions
         unrealized_pnl_pct = self._safe_float(market_data.get("unrealized_pnl_pct"))
-        if (
-            condition.unrealized_pnl_pct_above is not None
-            or condition.unrealized_pnl_pct_below is not None
-        ):
+        if condition.unrealized_pnl_pct_above is not None or condition.unrealized_pnl_pct_below is not None:
             if "unrealized_pnl_pct" not in market_data:
                 self._warn_missing_key("unrealized_pnl_pct")
             if condition.unrealized_pnl_pct_above is not None:
@@ -241,9 +227,15 @@ class ScenarioEngine:
             if "holding_days" not in market_data:
                 self._warn_missing_key("holding_days")
             if condition.holding_days_above is not None:
-                checks.append(holding_days is not None and holding_days > condition.holding_days_above)
+                checks.append(
+                    holding_days is not None
+                    and holding_days > condition.holding_days_above
+                )
             if condition.holding_days_below is not None:
-                checks.append(holding_days is not None and holding_days < condition.holding_days_below)
+                checks.append(
+                    holding_days is not None
+                    and holding_days < condition.holding_days_below
+                )

         return len(checks) > 0 and all(checks)

@@ -303,15 +295,9 @@ class ScenarioEngine:
         details["volume_ratio"] = self._safe_float(market_data.get("volume_ratio"))
         if condition.price_above is not None or condition.price_below is not None:
             details["current_price"] = self._safe_float(market_data.get("current_price"))
-        if (
-            condition.price_change_pct_above is not None
-            or condition.price_change_pct_below is not None
-        ):
+        if condition.price_change_pct_above is not None or condition.price_change_pct_below is not None:
             details["price_change_pct"] = self._safe_float(market_data.get("price_change_pct"))
-        if (
-            condition.unrealized_pnl_pct_above is not None
-            or condition.unrealized_pnl_pct_below is not None
-        ):
+        if condition.unrealized_pnl_pct_above is not None or condition.unrealized_pnl_pct_below is not None:
             details["unrealized_pnl_pct"] = self._safe_float(market_data.get("unrealized_pnl_pct"))
         if condition.holding_days_above is not None or condition.holding_days_below is not None:
             details["holding_days"] = self._safe_float(market_data.get("holding_days"))
@@ -4,7 +4,8 @@ from __future__ import annotations

 import sqlite3
 import sys
-from datetime import UTC, datetime
+import tempfile
+from datetime import UTC, datetime, timedelta
 from pathlib import Path
 from unittest.mock import MagicMock, patch

@@ -47,9 +48,7 @@ def temp_db(tmp_path: Path) -> Path:

     cursor.executemany(
         """
-        INSERT INTO trades (
-            timestamp, stock_code, action, quantity, price, confidence, rationale, pnl
-        )
+        INSERT INTO trades (timestamp, stock_code, action, quantity, price, confidence, rationale, pnl)
         VALUES (?, ?, ?, ?, ?, ?, ?, ?)
         """,
         test_trades,
@@ -74,7 +73,9 @@ class TestBackupExporter:
         exporter = BackupExporter(str(temp_db))
         output_dir = tmp_path / "exports"

-        results = exporter.export_all(output_dir, formats=[ExportFormat.JSON], compress=False)
+        results = exporter.export_all(
+            output_dir, formats=[ExportFormat.JSON], compress=False
+        )

         assert ExportFormat.JSON in results
         assert results[ExportFormat.JSON].exists()
@@ -85,7 +86,9 @@ class TestBackupExporter:
         exporter = BackupExporter(str(temp_db))
         output_dir = tmp_path / "exports"

-        results = exporter.export_all(output_dir, formats=[ExportFormat.JSON], compress=True)
+        results = exporter.export_all(
+            output_dir, formats=[ExportFormat.JSON], compress=True
+        )

         assert ExportFormat.JSON in results
         assert results[ExportFormat.JSON].suffix == ".gz"
@@ -95,13 +98,15 @@ class TestBackupExporter:
         exporter = BackupExporter(str(temp_db))
         output_dir = tmp_path / "exports"

-        results = exporter.export_all(output_dir, formats=[ExportFormat.CSV], compress=False)
+        results = exporter.export_all(
+            output_dir, formats=[ExportFormat.CSV], compress=False
+        )

         assert ExportFormat.CSV in results
         assert results[ExportFormat.CSV].exists()

         # Verify CSV content
-        with open(results[ExportFormat.CSV]) as f:
+        with open(results[ExportFormat.CSV], "r") as f:
             lines = f.readlines()
         assert len(lines) == 4  # Header + 3 rows

@@ -141,7 +146,7 @@ class TestBackupExporter:
         # Should only have 1 trade (AAPL on Jan 2)
         import json

-        with open(results[ExportFormat.JSON]) as f:
+        with open(results[ExportFormat.JSON], "r") as f:
             data = json.load(f)
         assert data["record_count"] == 1
         assert data["trades"][0]["stock_code"] == "AAPL"
@@ -402,7 +407,9 @@ class TestBackupExporterAdditional:
         assert ExportFormat.JSON in results
         assert ExportFormat.CSV in results

-    def test_export_all_logs_error_on_failure(self, temp_db: Path, tmp_path: Path) -> None:
+    def test_export_all_logs_error_on_failure(
+        self, temp_db: Path, tmp_path: Path
+    ) -> None:
         """export_all must log an error and continue when one format fails."""
         exporter = BackupExporter(str(temp_db))
         # Patch _export_format to raise on JSON, succeed on CSV
@@ -423,7 +430,9 @@ class TestBackupExporterAdditional:
         assert ExportFormat.JSON not in results
         assert ExportFormat.CSV in results

-    def test_export_csv_empty_trades_no_compress(self, empty_db: Path, tmp_path: Path) -> None:
+    def test_export_csv_empty_trades_no_compress(
+        self, empty_db: Path, tmp_path: Path
+    ) -> None:
         """CSV export with no trades and compress=False must write header row only."""
         exporter = BackupExporter(str(empty_db))
         results = exporter.export_all(
@@ -437,7 +446,9 @@ class TestBackupExporterAdditional:
         content = out.read_text()
         assert "timestamp" in content

-    def test_export_csv_empty_trades_compressed(self, empty_db: Path, tmp_path: Path) -> None:
+    def test_export_csv_empty_trades_compressed(
+        self, empty_db: Path, tmp_path: Path
+    ) -> None:
         """CSV export with no trades and compress=True must write gzipped header."""
         import gzip

@@ -454,7 +465,9 @@ class TestBackupExporterAdditional:
         content = f.read()
         assert "timestamp" in content

-    def test_export_csv_with_data_compressed(self, temp_db: Path, tmp_path: Path) -> None:
+    def test_export_csv_with_data_compressed(
+        self, temp_db: Path, tmp_path: Path
+    ) -> None:
         """CSV export with data and compress=True must write gzipped rows."""
         import gzip

@@ -479,7 +492,6 @@ class TestBackupExporterAdditional:
         with patch.dict(sys.modules, {"pyarrow": None, "pyarrow.parquet": None}):
             try:
                 import pyarrow  # noqa: F401
-
                 pytest.skip("pyarrow is installed; cannot test ImportError path")
             except ImportError:
                 pass
@@ -545,7 +557,9 @@ class TestCloudStorage:
            importlib.reload(m)
            m.CloudStorage(s3_config)

-    def test_upload_file_success(self, mock_boto3_module, s3_config, tmp_path: Path) -> None:
+    def test_upload_file_success(
+        self, mock_boto3_module, s3_config, tmp_path: Path
+    ) -> None:
         """upload_file must call client.upload_file and return the object key."""
         from src.backup.cloud_storage import CloudStorage

@@ -558,7 +572,9 @@ class TestCloudStorage:
         assert key == "backups/backup.json.gz"
         storage.client.upload_file.assert_called_once()

-    def test_upload_file_default_key(self, mock_boto3_module, s3_config, tmp_path: Path) -> None:
+    def test_upload_file_default_key(
+        self, mock_boto3_module, s3_config, tmp_path: Path
+    ) -> None:
         """upload_file without object_key must use the filename as key."""
         from src.backup.cloud_storage import CloudStorage

@@ -570,7 +586,9 @@ class TestCloudStorage:

         assert key == "myfile.gz"

-    def test_upload_file_not_found(self, mock_boto3_module, s3_config, tmp_path: Path) -> None:
+    def test_upload_file_not_found(
+        self, mock_boto3_module, s3_config, tmp_path: Path
+    ) -> None:
         """upload_file must raise FileNotFoundError for missing files."""
         from src.backup.cloud_storage import CloudStorage

@@ -593,7 +611,9 @@ class TestCloudStorage:
         with pytest.raises(RuntimeError, match="network error"):
             storage.upload_file(test_file)

-    def test_download_file_success(self, mock_boto3_module, s3_config, tmp_path: Path) -> None:
+    def test_download_file_success(
+        self, mock_boto3_module, s3_config, tmp_path: Path
+    ) -> None:
         """download_file must call client.download_file and return local path."""
         from src.backup.cloud_storage import CloudStorage

@@ -617,8 +637,11 @@ class TestCloudStorage:
         with pytest.raises(RuntimeError, match="timeout"):
             storage.download_file("key", tmp_path / "dest.gz")

-    def test_list_files_returns_objects(self, mock_boto3_module, s3_config) -> None:
+    def test_list_files_returns_objects(
+        self, mock_boto3_module, s3_config
+    ) -> None:
         """list_files must return parsed file metadata from S3 response."""
+        from datetime import timezone

         from src.backup.cloud_storage import CloudStorage

@@ -628,7 +651,7 @@ class TestCloudStorage:
             {
                 "Key": "backups/a.gz",
                 "Size": 1024,
-                "LastModified": datetime(2026, 1, 1, tzinfo=UTC),
+                "LastModified": datetime(2026, 1, 1, tzinfo=timezone.utc),
                 "ETag": '"abc123"',
             }
         ]
@@ -639,7 +662,9 @@ class TestCloudStorage:
         assert files[0]["key"] == "backups/a.gz"
         assert files[0]["size_bytes"] == 1024

-    def test_list_files_empty_bucket(self, mock_boto3_module, s3_config) -> None:
+    def test_list_files_empty_bucket(
+        self, mock_boto3_module, s3_config
+    ) -> None:
         """list_files must return empty list when bucket has no objects."""
         from src.backup.cloud_storage import CloudStorage

@@ -649,7 +674,9 @@ class TestCloudStorage:
         files = storage.list_files()
         assert files == []

-    def test_list_files_propagates_error(self, mock_boto3_module, s3_config) -> None:
+    def test_list_files_propagates_error(
+        self, mock_boto3_module, s3_config
+    ) -> None:
         """list_files must re-raise exceptions from the boto3 client."""
         from src.backup.cloud_storage import CloudStorage

@@ -659,7 +686,9 @@ class TestCloudStorage:
         with pytest.raises(RuntimeError):
             storage.list_files()

-    def test_delete_file_success(self, mock_boto3_module, s3_config) -> None:
+    def test_delete_file_success(
+        self, mock_boto3_module, s3_config
+    ) -> None:
         """delete_file must call client.delete_object with the correct key."""
         from src.backup.cloud_storage import CloudStorage

@@ -669,7 +698,9 @@ class TestCloudStorage:
             Bucket="test-bucket", Key="backups/old.gz"
         )

-    def test_delete_file_propagates_error(self, mock_boto3_module, s3_config) -> None:
+    def test_delete_file_propagates_error(
+        self, mock_boto3_module, s3_config
+    ) -> None:
         """delete_file must re-raise exceptions from the boto3 client."""
         from src.backup.cloud_storage import CloudStorage

@@ -679,8 +710,11 @@ class TestCloudStorage:
         with pytest.raises(RuntimeError):
             storage.delete_file("backups/old.gz")

-    def test_get_storage_stats_success(self, mock_boto3_module, s3_config) -> None:
+    def test_get_storage_stats_success(
+        self, mock_boto3_module, s3_config
+    ) -> None:
         """get_storage_stats must aggregate file sizes correctly."""
+        from datetime import timezone

         from src.backup.cloud_storage import CloudStorage

@@ -690,13 +724,13 @@ class TestCloudStorage:
             {
                 "Key": "a.gz",
                 "Size": 1024 * 1024,
-                "LastModified": datetime(2026, 1, 1, tzinfo=UTC),
+                "LastModified": datetime(2026, 1, 1, tzinfo=timezone.utc),
                 "ETag": '"x"',
             },
             {
                 "Key": "b.gz",
                 "Size": 1024 * 1024,
-                "LastModified": datetime(2026, 1, 2, tzinfo=UTC),
+                "LastModified": datetime(2026, 1, 2, tzinfo=timezone.utc),
                 "ETag": '"y"',
             },
         ]
@@ -707,7 +741,9 @@ class TestCloudStorage:
         assert stats["total_size_bytes"] == 2 * 1024 * 1024
         assert stats["total_size_mb"] == pytest.approx(2.0)

-    def test_get_storage_stats_on_error(self, mock_boto3_module, s3_config) -> None:
+    def test_get_storage_stats_on_error(
+        self, mock_boto3_module, s3_config
+    ) -> None:
         """get_storage_stats must return error dict without raising on failure."""
         from src.backup.cloud_storage import CloudStorage

@@ -718,7 +754,9 @@ class TestCloudStorage:
         assert "error" in stats
         assert stats["total_files"] == 0

-    def test_verify_connection_success(self, mock_boto3_module, s3_config) -> None:
+    def test_verify_connection_success(
+        self, mock_boto3_module, s3_config
+    ) -> None:
         """verify_connection must return True when head_bucket succeeds."""
         from src.backup.cloud_storage import CloudStorage

@@ -726,7 +764,9 @@ class TestCloudStorage:
         result = storage.verify_connection()
         assert result is True

-    def test_verify_connection_failure(self, mock_boto3_module, s3_config) -> None:
+    def test_verify_connection_failure(
+        self, mock_boto3_module, s3_config
+    ) -> None:
         """verify_connection must return False when head_bucket raises."""
         from src.backup.cloud_storage import CloudStorage

@@ -736,7 +776,9 @@ class TestCloudStorage:
         result = storage.verify_connection()
         assert result is False

-    def test_enable_versioning(self, mock_boto3_module, s3_config) -> None:
+    def test_enable_versioning(
+        self, mock_boto3_module, s3_config
+    ) -> None:
         """enable_versioning must call put_bucket_versioning."""
         from src.backup.cloud_storage import CloudStorage

@@ -744,7 +786,9 @@ class TestCloudStorage:
         storage.enable_versioning()
         storage.client.put_bucket_versioning.assert_called_once()

-    def test_enable_versioning_propagates_error(self, mock_boto3_module, s3_config) -> None:
+    def test_enable_versioning_propagates_error(
+        self, mock_boto3_module, s3_config
+    ) -> None:
         """enable_versioning must re-raise exceptions from the boto3 client."""
         from src.backup.cloud_storage import CloudStorage

@@ -323,8 +323,7 @@ class TestPromptOverride:
         # Verify the custom prompt was sent, not a built prompt
         mock_generate.assert_called_once()
         actual_prompt = mock_generate.call_args[1].get(
-            "contents",
-            mock_generate.call_args[0][1] if len(mock_generate.call_args[0]) > 1 else None,
+            "contents", mock_generate.call_args[0][1] if len(mock_generate.call_args[0]) > 1 else None
         )
         assert actual_prompt == custom_prompt
         # Raw response preserved in rationale without parse_response (#247)
@@ -386,8 +385,7 @@ class TestPromptOverride:
         await client.decide(market_data)

         actual_prompt = mock_generate.call_args[1].get(
-            "contents",
-            mock_generate.call_args[0][1] if len(mock_generate.call_args[0]) > 1 else None,
+            "contents", mock_generate.call_args[0][1] if len(mock_generate.call_args[0]) > 1 else None
         )
         # The custom prompt must be used, not the compressed prompt
         assert actual_prompt == custom_prompt
@@ -413,8 +411,7 @@ class TestPromptOverride:
         await client.decide(market_data)

         actual_prompt = mock_generate.call_args[1].get(
-            "contents",
-            mock_generate.call_args[0][1] if len(mock_generate.call_args[0]) > 1 else None,
+            "contents", mock_generate.call_args[0][1] if len(mock_generate.call_args[0]) > 1 else None
         )
         # Should contain stock code from build_prompt, not be a custom override
         assert "005930" in actual_prompt

@@ -3,7 +3,7 @@
 from __future__ import annotations

 import asyncio
-from unittest.mock import AsyncMock, patch
+from unittest.mock import AsyncMock, MagicMock, patch

 import pytest

@@ -99,10 +99,7 @@ class TestTokenManagement:
         mock_resp_403 = AsyncMock()
         mock_resp_403.status = 403
         mock_resp_403.text = AsyncMock(
-            return_value=(
-                '{"error_code":"EGW00133","error_description":'
-                '"접근토큰 발급 잠시 후 다시 시도하세요(1분당 1회)"}'
-            )
+            return_value='{"error_code":"EGW00133","error_description":"접근토큰 발급 잠시 후 다시 시도하세요(1분당 1회)"}'
         )
         mock_resp_403.__aenter__ = AsyncMock(return_value=mock_resp_403)
         mock_resp_403.__aexit__ = AsyncMock(return_value=False)
@@ -235,7 +232,9 @@ class TestRateLimiter:
         mock_order_resp.__aenter__ = AsyncMock(return_value=mock_order_resp)
         mock_order_resp.__aexit__ = AsyncMock(return_value=False)

-        with patch("aiohttp.ClientSession.post", side_effect=[mock_hash_resp, mock_order_resp]):
+        with patch(
+            "aiohttp.ClientSession.post", side_effect=[mock_hash_resp, mock_order_resp]
+        ):
             with patch.object(
                 broker._rate_limiter, "acquire", new_callable=AsyncMock
             ) as mock_acquire:
@@ -406,7 +405,7 @@ class TestFetchMarketRankings:
 # ---------------------------------------------------------------------------


-from src.broker.kis_api import kr_round_down, kr_tick_unit  # noqa: E402
+from src.broker.kis_api import kr_tick_unit, kr_round_down  # noqa: E402


 class TestKrTickUnit:
@@ -539,7 +538,9 @@ class TestSendOrderTickRounding:
         mock_order.__aenter__ = AsyncMock(return_value=mock_order)
         mock_order.__aexit__ = AsyncMock(return_value=False)

-        with patch("aiohttp.ClientSession.post", side_effect=[mock_hash, mock_order]) as mock_post:
+        with patch(
+            "aiohttp.ClientSession.post", side_effect=[mock_hash, mock_order]
+        ) as mock_post:
             await broker.send_order("005930", "BUY", 1, price=188150)

         order_call = mock_post.call_args_list[1]
@@ -562,7 +563,9 @@ class TestSendOrderTickRounding:
         mock_order.__aenter__ = AsyncMock(return_value=mock_order)
         mock_order.__aexit__ = AsyncMock(return_value=False)

-        with patch("aiohttp.ClientSession.post", side_effect=[mock_hash, mock_order]) as mock_post:
+        with patch(
+            "aiohttp.ClientSession.post", side_effect=[mock_hash, mock_order]
+        ) as mock_post:
             await broker.send_order("005930", "BUY", 1, price=50000)

         order_call = mock_post.call_args_list[1]
@@ -584,7 +587,9 @@ class TestSendOrderTickRounding:
         mock_order.__aenter__ = AsyncMock(return_value=mock_order)
         mock_order.__aexit__ = AsyncMock(return_value=False)

-        with patch("aiohttp.ClientSession.post", side_effect=[mock_hash, mock_order]) as mock_post:
+        with patch(
+            "aiohttp.ClientSession.post", side_effect=[mock_hash, mock_order]
+        ) as mock_post:
             await broker.send_order("005930", "SELL", 1, price=0)

         order_call = mock_post.call_args_list[1]
@@ -623,7 +628,9 @@ class TestTRIDBranchingDomestic:
         broker = self._make_broker(settings, "paper")
         mock_resp = AsyncMock()
         mock_resp.status = 200
-        mock_resp.json = AsyncMock(return_value={"output1": [], "output2": {}})
+        mock_resp.json = AsyncMock(
+            return_value={"output1": [], "output2": {}}
+        )
         mock_resp.__aenter__ = AsyncMock(return_value=mock_resp)
         mock_resp.__aexit__ = AsyncMock(return_value=False)

@@ -638,7 +645,9 @@ class TestTRIDBranchingDomestic:
         broker = self._make_broker(settings, "live")
         mock_resp = AsyncMock()
         mock_resp.status = 200
-        mock_resp.json = AsyncMock(return_value={"output1": [], "output2": {}})
+        mock_resp.json = AsyncMock(
+            return_value={"output1": [], "output2": {}}
+        )
         mock_resp.__aenter__ = AsyncMock(return_value=mock_resp)
         mock_resp.__aexit__ = AsyncMock(return_value=False)

@@ -663,7 +672,9 @@ class TestTRIDBranchingDomestic:
         mock_order.__aenter__ = AsyncMock(return_value=mock_order)
         mock_order.__aexit__ = AsyncMock(return_value=False)

-        with patch("aiohttp.ClientSession.post", side_effect=[mock_hash, mock_order]) as mock_post:
+        with patch(
+            "aiohttp.ClientSession.post", side_effect=[mock_hash, mock_order]
+        ) as mock_post:
             await broker.send_order("005930", "BUY", 1)

         order_headers = mock_post.call_args_list[1][1].get("headers", {})
@@ -684,7 +695,9 @@ class TestTRIDBranchingDomestic:
         mock_order.__aenter__ = AsyncMock(return_value=mock_order)
         mock_order.__aexit__ = AsyncMock(return_value=False)

-        with patch("aiohttp.ClientSession.post", side_effect=[mock_hash, mock_order]) as mock_post:
+        with patch(
+            "aiohttp.ClientSession.post", side_effect=[mock_hash, mock_order]
+        ) as mock_post:
             await broker.send_order("005930", "BUY", 1)

         order_headers = mock_post.call_args_list[1][1].get("headers", {})
@@ -705,7 +718,9 @@ class TestTRIDBranchingDomestic:
         mock_order.__aenter__ = AsyncMock(return_value=mock_order)
         mock_order.__aexit__ = AsyncMock(return_value=False)

-        with patch("aiohttp.ClientSession.post", side_effect=[mock_hash, mock_order]) as mock_post:
+        with patch(
+            "aiohttp.ClientSession.post", side_effect=[mock_hash, mock_order]
+        ) as mock_post:
             await broker.send_order("005930", "SELL", 1)

         order_headers = mock_post.call_args_list[1][1].get("headers", {})
@@ -726,7 +741,9 @@ class TestTRIDBranchingDomestic:
         mock_order.__aenter__ = AsyncMock(return_value=mock_order)
         mock_order.__aexit__ = AsyncMock(return_value=False)

-        with patch("aiohttp.ClientSession.post", side_effect=[mock_hash, mock_order]) as mock_post:
+        with patch(
+            "aiohttp.ClientSession.post", side_effect=[mock_hash, mock_order]
+        ) as mock_post:
             await broker.send_order("005930", "SELL", 1)

         order_headers = mock_post.call_args_list[1][1].get("headers", {})
@@ -771,7 +788,9 @@ class TestGetDomesticPendingOrders:
         mock_get.assert_not_called()

     @pytest.mark.asyncio
-    async def test_live_mode_calls_tttc0084r_with_correct_params(self, settings) -> None:
+    async def test_live_mode_calls_tttc0084r_with_correct_params(
+        self, settings
+    ) -> None:
         """Live mode must call TTTC0084R with INQR_DVSN_1/2 and paging params."""
         broker = self._make_broker(settings, "live")
         pending = [{"odno": "001", "pdno": "005930", "psbl_qty": "10"}]
@@ -853,7 +872,9 @@ class TestCancelDomesticOrder:
         broker = self._make_broker(settings, "live")
         mock_hash, mock_order = self._make_post_mocks({"rt_cd": "0"})

-        with patch("aiohttp.ClientSession.post", side_effect=[mock_hash, mock_order]) as mock_post:
+        with patch(
+            "aiohttp.ClientSession.post", side_effect=[mock_hash, mock_order]
+        ) as mock_post:
             await broker.cancel_domestic_order("005930", "ORD001", "BRNO01", 5)

         order_headers = mock_post.call_args_list[1][1].get("headers", {})
@@ -865,7 +886,9 @@ class TestCancelDomesticOrder:
         broker = self._make_broker(settings, "paper")
         mock_hash, mock_order = self._make_post_mocks({"rt_cd": "0"})

-        with patch("aiohttp.ClientSession.post", side_effect=[mock_hash, mock_order]) as mock_post:
+        with patch(
+            "aiohttp.ClientSession.post", side_effect=[mock_hash, mock_order]
+        ) as mock_post:
             await broker.cancel_domestic_order("005930", "ORD001", "BRNO01", 5)

         order_headers = mock_post.call_args_list[1][1].get("headers", {})
@@ -877,7 +900,9 @@ class TestCancelDomesticOrder:
         broker = self._make_broker(settings, "live")
         mock_hash, mock_order = self._make_post_mocks({"rt_cd": "0"})

-        with patch("aiohttp.ClientSession.post", side_effect=[mock_hash, mock_order]) as mock_post:
+        with patch(
+            "aiohttp.ClientSession.post", side_effect=[mock_hash, mock_order]
+        ) as mock_post:
             await broker.cancel_domestic_order("005930", "ORD001", "BRNO01", 5)

         body = mock_post.call_args_list[1][1].get("json", {})
@@ -891,7 +916,9 @@ class TestCancelDomesticOrder:
         broker = self._make_broker(settings, "live")
         mock_hash, mock_order = self._make_post_mocks({"rt_cd": "0"})

-        with patch("aiohttp.ClientSession.post", side_effect=[mock_hash, mock_order]) as mock_post:
+        with patch(
+            "aiohttp.ClientSession.post", side_effect=[mock_hash, mock_order]
+        ) as mock_post:
             await broker.cancel_domestic_order("005930", "ORD123", "BRN456", 3)

         body = mock_post.call_args_list[1][1].get("json", {})
@@ -905,7 +932,9 @@ class TestCancelDomesticOrder:
         broker = self._make_broker(settings, "live")
         mock_hash, mock_order = self._make_post_mocks({"rt_cd": "0"})

-        with patch("aiohttp.ClientSession.post", side_effect=[mock_hash, mock_order]) as mock_post:
+        with patch(
+            "aiohttp.ClientSession.post", side_effect=[mock_hash, mock_order]
+        ) as mock_post:
             await broker.cancel_domestic_order("005930", "ORD001", "BRNO01", 2)

         order_headers = mock_post.call_args_list[1][1].get("headers", {})

@@ -77,7 +77,9 @@ class TestContextStore:
         # Latest by updated_at, which should be the last one set
         assert latest == "2026-02-02"

-    def test_delete_old_contexts(self, store: ContextStore, db_conn: sqlite3.Connection) -> None:
+    def test_delete_old_contexts(
+        self, store: ContextStore, db_conn: sqlite3.Connection
+    ) -> None:
         """Test deleting contexts older than a cutoff date."""
         # Insert contexts with specific old timestamps
         # (bypassing set_context which uses current time)
@@ -168,7 +170,9 @@ class TestContextAggregator:
         log_trade(db_conn, "035720", "HOLD", 75, "Wait", quantity=0, price=0, pnl=0)

         # Manually set timestamps to the target date
-        db_conn.execute(f"UPDATE trades SET timestamp = '{date}T10:00:00+00:00'")
+        db_conn.execute(
+            f"UPDATE trades SET timestamp = '{date}T10:00:00+00:00'"
+        )
         db_conn.commit()

         # Aggregate
@@ -190,10 +194,18 @@ class TestContextAggregator:
         week = "2026-W06"

         # Set daily contexts
-        aggregator.store.set_context(ContextLayer.L6_DAILY, "2026-02-02", "total_pnl_KR", 100.0)
-        aggregator.store.set_context(ContextLayer.L6_DAILY, "2026-02-03", "total_pnl_KR", 200.0)
-        aggregator.store.set_context(ContextLayer.L6_DAILY, "2026-02-02", "avg_confidence_KR", 80.0)
-        aggregator.store.set_context(ContextLayer.L6_DAILY, "2026-02-03", "avg_confidence_KR", 85.0)
+        aggregator.store.set_context(
+            ContextLayer.L6_DAILY, "2026-02-02", "total_pnl_KR", 100.0
+        )
+        aggregator.store.set_context(
+            ContextLayer.L6_DAILY, "2026-02-03", "total_pnl_KR", 200.0
+        )
+        aggregator.store.set_context(
+            ContextLayer.L6_DAILY, "2026-02-02", "avg_confidence_KR", 80.0
+        )
+        aggregator.store.set_context(
+            ContextLayer.L6_DAILY, "2026-02-03", "avg_confidence_KR", 85.0
+        )

         # Aggregate
         aggregator.aggregate_weekly_from_daily(week)
@@ -211,9 +223,15 @@ class TestContextAggregator:
         month = "2026-02"

         # Set weekly contexts
-        aggregator.store.set_context(ContextLayer.L5_WEEKLY, "2026-W05", "weekly_pnl_KR", 100.0)
-        aggregator.store.set_context(ContextLayer.L5_WEEKLY, "2026-W06", "weekly_pnl_KR", 200.0)
-        aggregator.store.set_context(ContextLayer.L5_WEEKLY, "2026-W07", "weekly_pnl_KR", 150.0)
+        aggregator.store.set_context(
+            ContextLayer.L5_WEEKLY, "2026-W05", "weekly_pnl_KR", 100.0
+        )
+        aggregator.store.set_context(
+            ContextLayer.L5_WEEKLY, "2026-W06", "weekly_pnl_KR", 200.0
+        )
+        aggregator.store.set_context(
+            ContextLayer.L5_WEEKLY, "2026-W07", "weekly_pnl_KR", 150.0
+        )

         # Aggregate
         aggregator.aggregate_monthly_from_weekly(month)
@@ -298,7 +316,6 @@ class TestContextAggregator:
         store = aggregator.store
         assert store.get_context(ContextLayer.L6_DAILY, date, "total_pnl_KR") == 1000.0
         from datetime import date as date_cls

         trade_date = date_cls.fromisoformat(date)
         iso_year, iso_week, _ = trade_date.isocalendar()
         trade_week = f"{iso_year}-W{iso_week:02d}"
@@ -307,9 +324,7 @@ class TestContextAggregator:
         trade_quarter = f"{trade_date.year}-Q{(trade_date.month - 1) // 3 + 1}"
         trade_year = str(trade_date.year)
         assert store.get_context(ContextLayer.L4_MONTHLY, trade_month, "monthly_pnl") == 1000.0
-        assert (
-            store.get_context(ContextLayer.L3_QUARTERLY, trade_quarter, "quarterly_pnl") == 1000.0
-        )
+        assert store.get_context(ContextLayer.L3_QUARTERLY, trade_quarter, "quarterly_pnl") == 1000.0
         assert store.get_context(ContextLayer.L2_ANNUAL, trade_year, "annual_pnl") == 1000.0


@@ -414,7 +429,9 @@ class TestContextSummarizer:
     # summarize_layer
     # ------------------------------------------------------------------

-    def test_summarize_layer_no_data(self, summarizer: ContextSummarizer) -> None:
+    def test_summarize_layer_no_data(
+        self, summarizer: ContextSummarizer
+    ) -> None:
         """summarize_layer with no data must return the 'No data' sentinel."""
         result = summarizer.summarize_layer(ContextLayer.L6_DAILY)
         assert result["count"] == 0
@@ -431,12 +448,15 @@ class TestContextSummarizer:
         result = summarizer.summarize_layer(ContextLayer.L6_DAILY)
         assert "total_entries" in result

-    def test_summarize_layer_with_dict_values(self, summarizer: ContextSummarizer) -> None:
+    def test_summarize_layer_with_dict_values(
+        self, summarizer: ContextSummarizer
+    ) -> None:
         """summarize_layer must handle dict values by extracting numeric subkeys."""
         store = summarizer.store
         # set_context serialises the value as JSON, so passing a dict works
         store.set_context(
-            ContextLayer.L6_DAILY, "2026-02-01", "metrics", {"win_rate": 65.0, "label": "good"}
+            ContextLayer.L6_DAILY, "2026-02-01", "metrics",
+            {"win_rate": 65.0, "label": "good"}
         )

         result = summarizer.summarize_layer(ContextLayer.L6_DAILY)
@@ -444,7 +464,9 @@ class TestContextSummarizer:
         # numeric subkey "win_rate" should appear as "metrics.win_rate"
         assert "metrics.win_rate" in result

-    def test_summarize_layer_with_string_values(self, summarizer: ContextSummarizer) -> None:
+    def test_summarize_layer_with_string_values(
+        self, summarizer: ContextSummarizer
+    ) -> None:
         """summarize_layer must count string values separately."""
         store = summarizer.store
         # set_context stores string values as JSON-encoded strings
@@ -458,7 +480,9 @@ class TestContextSummarizer:
     # rolling_window_summary
     # ------------------------------------------------------------------

-    def test_rolling_window_summary_basic(self, summarizer: ContextSummarizer) -> None:
+    def test_rolling_window_summary_basic(
+        self, summarizer: ContextSummarizer
+    ) -> None:
         """rolling_window_summary must return the expected structure."""
         store = summarizer.store
         store.set_context(ContextLayer.L6_DAILY, "2026-02-01", "pnl", 500.0)
@@ -468,16 +492,22 @@ class TestContextSummarizer:
         assert "recent_data" in result
         assert "historical_summary" in result

-    def test_rolling_window_summary_no_older_data(self, summarizer: ContextSummarizer) -> None:
+    def test_rolling_window_summary_no_older_data(
+        self, summarizer: ContextSummarizer
+    ) -> None:
         """rolling_window_summary with summarize_older=False skips history."""
-        result = summarizer.rolling_window_summary(ContextLayer.L6_DAILY, summarize_older=False)
+        result = summarizer.rolling_window_summary(
+            ContextLayer.L6_DAILY, summarize_older=False
+        )
         assert result["historical_summary"] == {}

     # ------------------------------------------------------------------
     # aggregate_to_higher_layer
     # ------------------------------------------------------------------

-    def test_aggregate_to_higher_layer_mean(self, summarizer: ContextSummarizer) -> None:
+    def test_aggregate_to_higher_layer_mean(
+        self, summarizer: ContextSummarizer
+    ) -> None:
         """aggregate_to_higher_layer with 'mean' via dict subkeys returns average."""
         store = summarizer.store
         # Use different outer keys but same inner metric key so get_all_contexts
@@ -490,7 +520,9 @@ class TestContextSummarizer:
         )
         assert result == pytest.approx(150.0)

-    def test_aggregate_to_higher_layer_sum(self, summarizer: ContextSummarizer) -> None:
+    def test_aggregate_to_higher_layer_sum(
+        self, summarizer: ContextSummarizer
+    ) -> None:
         """aggregate_to_higher_layer with 'sum' must return the total."""
         store = summarizer.store
         store.set_context(ContextLayer.L6_DAILY, "2026-02-01", "day1", {"pnl": 100.0})
@@ -501,7 +533,9 @@ class TestContextSummarizer:
         )
         assert result == pytest.approx(300.0)

-    def test_aggregate_to_higher_layer_max(self, summarizer: ContextSummarizer) -> None:
+    def test_aggregate_to_higher_layer_max(
+        self, summarizer: ContextSummarizer
+    ) -> None:
         """aggregate_to_higher_layer with 'max' must return the maximum."""
         store = summarizer.store
         store.set_context(ContextLayer.L6_DAILY, "2026-02-01", "day1", {"pnl": 100.0})
@@ -512,7 +546,9 @@ class TestContextSummarizer:
         )
         assert result == pytest.approx(200.0)

-    def test_aggregate_to_higher_layer_min(self, summarizer: ContextSummarizer) -> None:
+    def test_aggregate_to_higher_layer_min(
+        self, summarizer: ContextSummarizer
+    ) -> None:
         """aggregate_to_higher_layer with 'min' must return the minimum."""
         store = summarizer.store
         store.set_context(ContextLayer.L6_DAILY, "2026-02-01", "day1", {"pnl": 100.0})
@@ -523,7 +559,9 @@ class TestContextSummarizer:
         )
         assert result == pytest.approx(100.0)

-    def test_aggregate_to_higher_layer_no_data(self, summarizer: ContextSummarizer) -> None:
+    def test_aggregate_to_higher_layer_no_data(
+        self, summarizer: ContextSummarizer
+    ) -> None:
         """aggregate_to_higher_layer with no matching key must return None."""
         result = summarizer.aggregate_to_higher_layer(
             ContextLayer.L6_DAILY, ContextLayer.L5_WEEKLY, "nonexistent", "mean"
@@ -547,7 +585,9 @@ class TestContextSummarizer:
     # create_compact_summary + format_summary_for_prompt
     # ------------------------------------------------------------------

-    def test_create_compact_summary(self, summarizer: ContextSummarizer) -> None:
+    def test_create_compact_summary(
+        self, summarizer: ContextSummarizer
+    ) -> None:
         """create_compact_summary must produce a dict keyed by layer value."""
         store = summarizer.store
         store.set_context(ContextLayer.L6_DAILY, "2026-02-01", "pnl", 100.0)
@@ -575,7 +615,9 @@ class TestContextSummarizer:
         text = summarizer.format_summary_for_prompt(summary)
         assert text == ""

-    def test_format_summary_non_dict_value(self, summarizer: ContextSummarizer) -> None:
+    def test_format_summary_non_dict_value(
+        self, summarizer: ContextSummarizer
+    ) -> None:
         """format_summary_for_prompt must render non-dict values as plain text."""
         summary = {
             "daily": {
@@ -4,7 +4,6 @@ from __future__ import annotations

 import json
 import sqlite3
-from datetime import UTC, datetime
 from types import SimpleNamespace
 from unittest.mock import AsyncMock, MagicMock

@@ -17,6 +16,8 @@ from src.evolution.daily_review import DailyReviewer
 from src.evolution.scorecard import DailyScorecard
 from src.logging.decision_logger import DecisionLogger

+from datetime import UTC, datetime
+
 TODAY = datetime.now(UTC).strftime("%Y-%m-%d")


@@ -52,8 +53,7 @@ def _log_decision(


 def test_generate_scorecard_market_scoped(
-    db_conn: sqlite3.Connection,
-    context_store: ContextStore,
+    db_conn: sqlite3.Connection, context_store: ContextStore,
 ) -> None:
     reviewer = DailyReviewer(db_conn, context_store)
     logger = DecisionLogger(db_conn)
@@ -134,8 +134,7 @@ def test_generate_scorecard_market_scoped(


 def test_generate_scorecard_top_winners_and_losers(
-    db_conn: sqlite3.Connection,
-    context_store: ContextStore,
+    db_conn: sqlite3.Connection, context_store: ContextStore,
 ) -> None:
     reviewer = DailyReviewer(db_conn, context_store)
     logger = DecisionLogger(db_conn)
@@ -169,8 +168,7 @@ def test_generate_scorecard_top_winners_and_losers(


 def test_generate_scorecard_empty_day(
-    db_conn: sqlite3.Connection,
-    context_store: ContextStore,
+    db_conn: sqlite3.Connection, context_store: ContextStore,
 ) -> None:
     reviewer = DailyReviewer(db_conn, context_store)
     scorecard = reviewer.generate_scorecard(TODAY, "KR")
@@ -186,8 +184,7 @@ def test_generate_scorecard_empty_day(

 @pytest.mark.asyncio
 async def test_generate_lessons_without_gemini_returns_empty(
-    db_conn: sqlite3.Connection,
-    context_store: ContextStore,
+    db_conn: sqlite3.Connection, context_store: ContextStore,
 ) -> None:
     reviewer = DailyReviewer(db_conn, context_store, gemini_client=None)
     lessons = await reviewer.generate_lessons(
@@ -209,8 +206,7 @@ async def test_generate_lessons_without_gemini_returns_empty(

 @pytest.mark.asyncio
 async def test_generate_lessons_parses_json_array(
-    db_conn: sqlite3.Connection,
-    context_store: ContextStore,
+    db_conn: sqlite3.Connection, context_store: ContextStore,
 ) -> None:
     mock_gemini = MagicMock()
     mock_gemini.decide = AsyncMock(
@@ -237,8 +233,7 @@ async def test_generate_lessons_parses_json_array(

 @pytest.mark.asyncio
 async def test_generate_lessons_fallback_to_lines(
-    db_conn: sqlite3.Connection,
-    context_store: ContextStore,
+    db_conn: sqlite3.Connection, context_store: ContextStore,
 ) -> None:
     mock_gemini = MagicMock()
     mock_gemini.decide = AsyncMock(
@@ -265,8 +260,7 @@ async def test_generate_lessons_fallback_to_lines(

 @pytest.mark.asyncio
 async def test_generate_lessons_handles_gemini_error(
-    db_conn: sqlite3.Connection,
-    context_store: ContextStore,
+    db_conn: sqlite3.Connection, context_store: ContextStore,
 ) -> None:
     mock_gemini = MagicMock()
     mock_gemini.decide = AsyncMock(side_effect=RuntimeError("boom"))
@@ -290,8 +284,7 @@ async def test_generate_lessons_handles_gemini_error(


 def test_store_scorecard_in_context(
-    db_conn: sqlite3.Connection,
-    context_store: ContextStore,
+    db_conn: sqlite3.Connection, context_store: ContextStore,
 ) -> None:
     reviewer = DailyReviewer(db_conn, context_store)
     scorecard = DailyScorecard(
@@ -323,8 +316,7 @@ def test_store_scorecard_in_context(


 def test_store_scorecard_key_is_market_scoped(
-    db_conn: sqlite3.Connection,
-    context_store: ContextStore,
+    db_conn: sqlite3.Connection, context_store: ContextStore,
 ) -> None:
     reviewer = DailyReviewer(db_conn, context_store)
     kr = DailyScorecard(
@@ -365,8 +357,7 @@ def test_store_scorecard_key_is_market_scoped(


 def test_generate_scorecard_handles_invalid_context_snapshot(
-    db_conn: sqlite3.Connection,
-    context_store: ContextStore,
+    db_conn: sqlite3.Connection, context_store: ContextStore,
 ) -> None:
     reviewer = DailyReviewer(db_conn, context_store)
     db_conn.execute(
@@ -355,7 +355,6 @@ def test_positions_empty_when_no_trades(tmp_path: Path) -> None:

 def _seed_cb_context(conn: sqlite3.Connection, pnl_pct: float, market: str = "KR") -> None:
     import json as _json

     conn.execute(
         "INSERT OR REPLACE INTO system_metrics (key, value, updated_at) VALUES (?, ?, ?)",
         (
@@ -79,7 +79,7 @@ class TestNewsAPI:
         # Mock the fetch to avoid real API call
         with patch.object(api, "_fetch_news", new_callable=AsyncMock) as mock_fetch:
             mock_fetch.return_value = None
-            await api.get_news_sentiment("AAPL")
+            result = await api.get_news_sentiment("AAPL")

         # Should have attempted refetch since cache expired
         mock_fetch.assert_called_once_with("AAPL")
@@ -111,7 +111,9 @@ class TestNewsAPI:
                 "source": "Reuters",
                 "time_published": "2026-02-04T10:00:00",
                 "url": "https://example.com/1",
-                "ticker_sentiment": [{"ticker": "AAPL", "ticker_sentiment_score": "0.85"}],
+                "ticker_sentiment": [
+                    {"ticker": "AAPL", "ticker_sentiment_score": "0.85"}
+                ],
                 "overall_sentiment_score": "0.75",
             },
             {
@@ -120,7 +122,9 @@ class TestNewsAPI:
                 "source": "Bloomberg",
                 "time_published": "2026-02-04T09:00:00",
                 "url": "https://example.com/2",
-                "ticker_sentiment": [{"ticker": "AAPL", "ticker_sentiment_score": "-0.3"}],
+                "ticker_sentiment": [
+                    {"ticker": "AAPL", "ticker_sentiment_score": "-0.3"}
+                ],
                 "overall_sentiment_score": "-0.2",
             },
         ]
@@ -657,9 +661,7 @@ class TestGeminiClientWithExternalData:
         )

         # Mock the Gemini API call
-        with patch.object(
-            client._client.aio.models, "generate_content", new_callable=AsyncMock
-        ) as mock_gen:
+        with patch.object(client._client.aio.models, "generate_content", new_callable=AsyncMock) as mock_gen:
             mock_response = MagicMock()
             mock_response.text = '{"action": "BUY", "confidence": 85, "rationale": "Good news"}'
             mock_gen.return_value = mock_response
@@ -1,7 +1,7 @@
 """Tests for database helper functions."""

-import os
 import tempfile
+import os

 from src.db import get_latest_buy_trade, get_open_position, init_db, log_trade

@@ -204,8 +204,7 @@ def test_mode_migration_adds_column_to_existing_db() -> None:
     assert "strategy_pnl" in columns
     assert "fx_pnl" in columns
     migrated = conn.execute(
-        "SELECT pnl, strategy_pnl, fx_pnl, session_id "
-        "FROM trades WHERE stock_code='AAPL' LIMIT 1"
+        "SELECT pnl, strategy_pnl, fx_pnl, session_id FROM trades WHERE stock_code='AAPL' LIMIT 1"
     ).fetchone()
     assert migrated is not None
     assert migrated[0] == 123.45
@@ -408,7 +407,9 @@ def test_decision_logs_session_id_migration_backfills_unknown() -> None:
     conn = init_db(db_path)
     columns = {row[1] for row in conn.execute("PRAGMA table_info(decision_logs)").fetchall()}
     assert "session_id" in columns
-    row = conn.execute("SELECT session_id FROM decision_logs WHERE decision_id='d1'").fetchone()
+    row = conn.execute(
+        "SELECT session_id FROM decision_logs WHERE decision_id='d1'"
+    ).fetchone()
     assert row is not None
     assert row[0] == "UNKNOWN"
     conn.close()
@@ -49,10 +49,7 @@ def test_log_decision_creates_record(logger: DecisionLogger, db_conn: sqlite3.Co

     # Verify record exists in database
     cursor = db_conn.execute(
-        (
-            "SELECT decision_id, action, confidence, session_id "
-            "FROM decision_logs WHERE decision_id = ?"
-        ),
+        "SELECT decision_id, action, confidence, session_id FROM decision_logs WHERE decision_id = ?",
        (decision_id,),
    )
    row = cursor.fetchone()
@@ -208,9 +208,7 @@ def test_identify_failure_patterns_empty(optimizer: EvolutionOptimizer) -> None:
|
|||||||
|
|
||||||
|
|
||||||
@pytest.mark.asyncio
|
@pytest.mark.asyncio
|
||||||
async def test_generate_strategy_creates_file(
|
async def test_generate_strategy_creates_file(optimizer: EvolutionOptimizer, tmp_path: Path) -> None:
|
||||||
optimizer: EvolutionOptimizer, tmp_path: Path
|
|
||||||
) -> None:
|
|
||||||
"""Test that generate_strategy creates a strategy file."""
|
"""Test that generate_strategy creates a strategy file."""
|
||||||
failures = [
|
failures = [
|
||||||
{
|
{
|
||||||
@@ -236,9 +234,7 @@ async def test_generate_strategy_creates_file(
|
|||||||
return {"action": "HOLD", "confidence": 50, "rationale": "Waiting"}
|
return {"action": "HOLD", "confidence": 50, "rationale": "Waiting"}
|
||||||
"""
|
"""
|
||||||
|
|
||||||
with patch.object(
|
with patch.object(optimizer._client.aio.models, "generate_content", new=AsyncMock(return_value=mock_response)):
|
||||||
optimizer._client.aio.models, "generate_content", new=AsyncMock(return_value=mock_response)
|
|
||||||
):
|
|
||||||
with patch("src.evolution.optimizer.STRATEGIES_DIR", tmp_path):
|
with patch("src.evolution.optimizer.STRATEGIES_DIR", tmp_path):
|
||||||
strategy_path = await optimizer.generate_strategy(failures)
|
strategy_path = await optimizer.generate_strategy(failures)
|
||||||
|
|
||||||
@@ -251,8 +247,7 @@ async def test_generate_strategy_creates_file(
|
|||||||
|
|
||||||
@pytest.mark.asyncio
|
@pytest.mark.asyncio
|
||||||
async def test_generate_strategy_saves_valid_python_code(
|
async def test_generate_strategy_saves_valid_python_code(
|
||||||
optimizer: EvolutionOptimizer,
|
optimizer: EvolutionOptimizer, tmp_path: Path,
|
||||||
tmp_path: Path,
|
|
||||||
) -> None:
|
) -> None:
|
||||||
"""Test that syntactically valid generated code is saved."""
|
"""Test that syntactically valid generated code is saved."""
|
||||||
failures = [{"decision_id": "1", "timestamp": "2024-01-15T09:30:00+00:00"}]
|
failures = [{"decision_id": "1", "timestamp": "2024-01-15T09:30:00+00:00"}]
|
||||||
@@ -260,14 +255,12 @@ async def test_generate_strategy_saves_valid_python_code(
     mock_response = Mock()
     mock_response.text = (
         'price = market_data.get("current_price", 0)\n'
-        "if price > 0:\n"
+        'if price > 0:\n'
         '    return {"action": "BUY", "confidence": 80, "rationale": "Positive price"}\n'
         'return {"action": "HOLD", "confidence": 50, "rationale": "No signal"}\n'
     )

-    with patch.object(
-        optimizer._client.aio.models, "generate_content", new=AsyncMock(return_value=mock_response)
-    ):
+    with patch.object(optimizer._client.aio.models, "generate_content", new=AsyncMock(return_value=mock_response)):
         with patch("src.evolution.optimizer.STRATEGIES_DIR", tmp_path):
             strategy_path = await optimizer.generate_strategy(failures)

@@ -277,9 +270,7 @@ async def test_generate_strategy_saves_valid_python_code(

 @pytest.mark.asyncio
 async def test_generate_strategy_blocks_invalid_python_code(
-    optimizer: EvolutionOptimizer,
-    tmp_path: Path,
-    caplog: pytest.LogCaptureFixture,
+    optimizer: EvolutionOptimizer, tmp_path: Path, caplog: pytest.LogCaptureFixture,
 ) -> None:
     """Test that syntactically invalid generated code is not saved."""
     failures = [{"decision_id": "1", "timestamp": "2024-01-15T09:30:00+00:00"}]
@@ -290,9 +281,7 @@ async def test_generate_strategy_blocks_invalid_python_code(
         '    return {"action": "BUY", "confidence": 80, "rationale": "broken"}\n'
     )

-    with patch.object(
-        optimizer._client.aio.models, "generate_content", new=AsyncMock(return_value=mock_response)
-    ):
+    with patch.object(optimizer._client.aio.models, "generate_content", new=AsyncMock(return_value=mock_response)):
         with patch("src.evolution.optimizer.STRATEGIES_DIR", tmp_path):
             with caplog.at_level("WARNING"):
                 strategy_path = await optimizer.generate_strategy(failures)
@@ -321,7 +310,6 @@ def test_get_performance_summary() -> None:
     """Test getting performance summary from trades table."""
     # Create a temporary database with trades
    import tempfile

     with tempfile.NamedTemporaryFile(suffix=".db", delete=False) as tmp:
         tmp_path = tmp.name

@@ -616,9 +604,7 @@ def test_calculate_improvement_trend_declining(performance_tracker: PerformanceT
     assert trend["pnl_change"] == -250.0


-def test_calculate_improvement_trend_insufficient_data(
-    performance_tracker: PerformanceTracker,
-) -> None:
+def test_calculate_improvement_trend_insufficient_data(performance_tracker: PerformanceTracker) -> None:
     """Test improvement trend with insufficient data."""
     metrics = [
         StrategyMetrics(
@@ -732,9 +718,7 @@ async def test_full_evolution_pipeline(optimizer: EvolutionOptimizer, tmp_path:
     mock_response = Mock()
     mock_response.text = 'return {"action": "HOLD", "confidence": 50, "rationale": "Test"}'

-    with patch.object(
-        optimizer._client.aio.models, "generate_content", new=AsyncMock(return_value=mock_response)
-    ):
+    with patch.object(optimizer._client.aio.models, "generate_content", new=AsyncMock(return_value=mock_response)):
         with patch("src.evolution.optimizer.STRATEGIES_DIR", tmp_path):
             with patch("subprocess.run") as mock_run:
                 mock_run.return_value = Mock(returncode=0, stdout="", stderr="")
@@ -103,7 +103,9 @@ class TestSetupLogging:
         """setup_logging must attach a JSON handler to the root logger."""
         setup_logging(level=logging.DEBUG)
         root = logging.getLogger()
-        json_handlers = [h for h in root.handlers if isinstance(h.formatter, JSONFormatter)]
+        json_handlers = [
+            h for h in root.handlers if isinstance(h.formatter, JSONFormatter)
+        ]
         assert len(json_handlers) == 1
         assert root.level == logging.DEBUG

@@ -4,45 +4,45 @@ from datetime import UTC, date, datetime
 from unittest.mock import ANY, AsyncMock, MagicMock, patch

 import pytest

 import src.main as main_module

 from src.config import Settings
 from src.context.layer import ContextLayer
 from src.context.scheduler import ScheduleResult
-from src.core.order_policy import OrderPolicyRejected, get_session_info
+from src.core.order_policy import OrderPolicyRejected
 from src.core.risk_manager import CircuitBreakerTripped, FatFingerRejected
 from src.db import init_db, log_trade
 from src.evolution.scorecard import DailyScorecard
 from src.logging.decision_logger import DecisionLogger
 from src.main import (
-    _RUNTIME_EXIT_PEAKS,
-    _RUNTIME_EXIT_STATES,
+    KILL_SWITCH,
     _SESSION_RISK_LAST_BY_MARKET,
     _SESSION_RISK_OVERRIDES_BY_MARKET,
     _SESSION_RISK_PROFILES_MAP,
     _STOPLOSS_REENTRY_COOLDOWN_UNTIL,
-    KILL_SWITCH,
-    _apply_dashboard_flag,
     _apply_staged_exit_override_for_hold,
     _compute_kr_atr_value,
-    _compute_kr_dynamic_stop_loss_pct,
-    _determine_order_quantity,
     _estimate_pred_down_prob_from_rsi,
+    _inject_staged_exit_features,
+    _RUNTIME_EXIT_PEAKS,
+    _RUNTIME_EXIT_STATES,
+    _should_force_exit_for_overnight,
+    _should_block_overseas_buy_for_fx_buffer,
+    _trigger_emergency_kill_switch,
+    _apply_dashboard_flag,
+    _determine_order_quantity,
     _extract_avg_price_from_balance,
     _extract_held_codes_from_balance,
     _extract_held_qty_from_balance,
     _handle_market_close,
-    _inject_staged_exit_features,
+    _retry_connection,
     _resolve_market_setting,
     _resolve_sell_qty_for_pnl,
-    _retry_connection,
     _run_context_scheduler,
     _run_evolution_loop,
-    _should_block_overseas_buy_for_fx_buffer,
-    _should_force_exit_for_overnight,
     _start_dashboard_server,
     _stoploss_cooldown_minutes,
-    _trigger_emergency_kill_switch,
+    _compute_kr_dynamic_stop_loss_pct,
     handle_domestic_pending_orders,
     handle_overseas_pending_orders,
     process_blackout_recovery_orders,
@@ -336,7 +336,10 @@ async def test_inject_staged_exit_features_sets_pred_down_prob_and_atr_for_kr()

     broker = MagicMock()
     broker.get_daily_prices = AsyncMock(
-        return_value=[{"high": 102.0 + i, "low": 98.0 + i, "close": 100.0 + i} for i in range(40)]
+        return_value=[
+            {"high": 102.0 + i, "low": 98.0 + i, "close": 100.0 + i}
+            for i in range(40)
+        ]
     )

     await _inject_staged_exit_features(
@@ -480,7 +483,9 @@ class TestExtractHeldQtyFromBalance:

     def test_overseas_returns_ord_psbl_qty_first(self) -> None:
         """ord_psbl_qty (주문가능수량) takes priority over ovrs_cblc_qty."""
-        balance = {"output1": [{"ovrs_pdno": "AAPL", "ord_psbl_qty": "8", "ovrs_cblc_qty": "10"}]}
+        balance = {
+            "output1": [{"ovrs_pdno": "AAPL", "ord_psbl_qty": "8", "ovrs_cblc_qty": "10"}]
+        }
         assert _extract_held_qty_from_balance(balance, "AAPL", is_domestic=False) == 8

     def test_overseas_fallback_to_ovrs_cblc_qty_when_ord_psbl_qty_absent(self) -> None:
@@ -804,7 +809,9 @@ class TestTradingCycleTelegramIntegration:
     def mock_criticality_assessor(self) -> MagicMock:
         """Create mock criticality assessor."""
         assessor = MagicMock()
-        assessor.assess_market_conditions = MagicMock(return_value=MagicMock(value="NORMAL"))
+        assessor.assess_market_conditions = MagicMock(
+            return_value=MagicMock(value="NORMAL")
+        )
         assessor.get_timeout = MagicMock(return_value=5.0)
         return assessor

@@ -1192,7 +1199,9 @@ class TestOverseasBalanceParsing:
     def mock_overseas_broker_with_list(self) -> MagicMock:
         """Create mock overseas broker returning list format."""
         broker = MagicMock()
-        broker.get_overseas_price = AsyncMock(return_value={"output": {"last": "150.50"}})
+        broker.get_overseas_price = AsyncMock(
+            return_value={"output": {"last": "150.50"}}
+        )
         broker.get_overseas_balance = AsyncMock(
             return_value={
                 "output2": [
@@ -1212,7 +1221,9 @@ class TestOverseasBalanceParsing:
     def mock_overseas_broker_with_dict(self) -> MagicMock:
         """Create mock overseas broker returning dict format."""
         broker = MagicMock()
-        broker.get_overseas_price = AsyncMock(return_value={"output": {"last": "150.50"}})
+        broker.get_overseas_price = AsyncMock(
+            return_value={"output": {"last": "150.50"}}
+        )
         broker.get_overseas_balance = AsyncMock(
             return_value={
                 "output2": {
@@ -1230,7 +1241,9 @@ class TestOverseasBalanceParsing:
     def mock_overseas_broker_with_empty(self) -> MagicMock:
         """Create mock overseas broker returning empty output2."""
         broker = MagicMock()
-        broker.get_overseas_price = AsyncMock(return_value={"output": {"last": "150.50"}})
+        broker.get_overseas_price = AsyncMock(
+            return_value={"output": {"last": "150.50"}}
+        )
         broker.get_overseas_balance = AsyncMock(return_value={"output2": []})
         broker.get_overseas_buying_power = AsyncMock(
             return_value={"output": {"ovrs_ord_psbl_amt": "0.00"}}
@@ -1314,7 +1327,9 @@ class TestOverseasBalanceParsing:
     def mock_criticality_assessor(self) -> MagicMock:
         """Create mock criticality assessor."""
         assessor = MagicMock()
-        assessor.assess_market_conditions = MagicMock(return_value=MagicMock(value="NORMAL"))
+        assessor.assess_market_conditions = MagicMock(
+            return_value=MagicMock(value="NORMAL")
+        )
         assessor.get_timeout = MagicMock(return_value=5.0)
         return assessor

@@ -1477,7 +1492,9 @@ class TestOverseasBalanceParsing:
     def mock_overseas_broker_with_buy_scenario(self) -> MagicMock:
         """Create mock overseas broker that returns a valid price for BUY orders."""
         broker = MagicMock()
-        broker.get_overseas_price = AsyncMock(return_value={"output": {"last": "182.50"}})
+        broker.get_overseas_price = AsyncMock(
+            return_value={"output": {"last": "182.50"}}
+        )
         broker.get_overseas_balance = AsyncMock(
             return_value={
                 "output2": [
@@ -1598,7 +1615,9 @@ class TestOverseasBalanceParsing:
         overseas_broker.get_overseas_buying_power = AsyncMock(
             return_value={"output": {"ovrs_ord_psbl_amt": "50000.00"}}
         )
-        overseas_broker.send_overseas_order = AsyncMock(return_value={"rt_cd": "0", "msg1": "OK"})
+        overseas_broker.send_overseas_order = AsyncMock(
+            return_value={"rt_cd": "0", "msg1": "OK"}
+        )

         sell_engine = MagicMock(spec=ScenarioEngine)
         sell_engine.evaluate = MagicMock(return_value=_make_sell_match("AAPL"))
@@ -1690,10 +1709,8 @@ class TestOverseasBalanceParsing:
         )

         overseas_broker.send_overseas_order.assert_called_once()
-        sent_price = (
-            overseas_broker.send_overseas_order.call_args[1].get("price")
-            or overseas_broker.send_overseas_order.call_args[0][4]
-        )
+        sent_price = overseas_broker.send_overseas_order.call_args[1].get("price") or \
+            overseas_broker.send_overseas_order.call_args[0][4]
         # 50.1234 * 1.002 = 50.2235... rounded to 2 decimals = 50.22
         assert sent_price == round(50.1234 * 1.002, 2), (
             f"Expected 2-decimal price {round(50.1234 * 1.002, 2)} but got {sent_price} (#252)"
@@ -1736,12 +1753,6 @@ class TestOverseasBalanceParsing:
         engine = MagicMock(spec=ScenarioEngine)
         engine.evaluate = MagicMock(return_value=_make_buy_match())

-        with patch(
-            "src.main._resolve_market_setting",
-            side_effect=lambda **kwargs: (
-                0.1 if kwargs.get("key") == "US_MIN_PRICE" else kwargs.get("default")
-            ),
-        ):
         await trading_cycle(
             broker=mock_domestic_broker,
             overseas_broker=overseas_broker,
@@ -1759,10 +1770,8 @@ class TestOverseasBalanceParsing:
         )

         overseas_broker.send_overseas_order.assert_called_once()
-        sent_price = (
-            overseas_broker.send_overseas_order.call_args[1].get("price")
-            or overseas_broker.send_overseas_order.call_args[0][4]
-        )
+        sent_price = overseas_broker.send_overseas_order.call_args[1].get("price") or \
+            overseas_broker.send_overseas_order.call_args[0][4]
         # 0.5678 * 1.002 = 0.56893... rounded to 4 decimals = 0.5689
         assert sent_price == round(0.5678 * 1.002, 4), (
             f"Expected 4-decimal price {round(0.5678 * 1.002, 4)} but got {sent_price} (#252)"
@@ -1812,10 +1821,7 @@ class TestScenarioEngineIntegration:

     @pytest.mark.asyncio
     async def test_scenario_engine_called_with_enriched_market_data(
-        self,
-        mock_broker: MagicMock,
-        mock_market: MagicMock,
-        mock_telegram: MagicMock,
+        self, mock_broker: MagicMock, mock_market: MagicMock, mock_telegram: MagicMock,
     ) -> None:
         """Test scenario engine receives market_data enriched with scanner metrics."""
         from src.analysis.smart_scanner import ScanCandidate
@@ -1825,14 +1831,9 @@ class TestScenarioEngineIntegration:
         playbook = _make_playbook()

         candidate = ScanCandidate(
-            stock_code="005930",
-            name="Samsung",
-            price=50000,
-            volume=1000000,
-            volume_ratio=3.5,
-            rsi=25.0,
-            signal="oversold",
-            score=85.0,
+            stock_code="005930", name="Samsung", price=50000,
+            volume=1000000, volume_ratio=3.5, rsi=25.0,
+            signal="oversold", score=85.0,
         )

         with (
@@ -1876,10 +1877,7 @@ class TestScenarioEngineIntegration:

     @pytest.mark.asyncio
     async def test_trading_cycle_sets_l7_context_keys(
-        self,
-        mock_broker: MagicMock,
-        mock_market: MagicMock,
-        mock_telegram: MagicMock,
+        self, mock_broker: MagicMock, mock_market: MagicMock, mock_telegram: MagicMock,
     ) -> None:
         """Test L7 context is written with market-scoped keys."""
         from src.analysis.smart_scanner import ScanCandidate
@@ -1890,14 +1888,9 @@ class TestScenarioEngineIntegration:
         context_store = MagicMock(get_latest_timeframe=MagicMock(return_value=None))

         candidate = ScanCandidate(
-            stock_code="005930",
-            name="Samsung",
-            price=50000,
-            volume=1000000,
-            volume_ratio=3.5,
-            rsi=25.0,
-            signal="oversold",
-            score=85.0,
+            stock_code="005930", name="Samsung", price=50000,
+            volume=1000000, volume_ratio=3.5, rsi=25.0,
+            signal="oversold", score=85.0,
         )

         with patch("src.main.log_trade"):
@@ -1947,10 +1940,7 @@ class TestScenarioEngineIntegration:

     @pytest.mark.asyncio
     async def test_scan_candidates_market_scoped(
-        self,
-        mock_broker: MagicMock,
-        mock_market: MagicMock,
-        mock_telegram: MagicMock,
+        self, mock_broker: MagicMock, mock_market: MagicMock, mock_telegram: MagicMock,
     ) -> None:
         """Test scan_candidates uses market-scoped lookup, ignoring other markets."""
         from src.analysis.smart_scanner import ScanCandidate
@@ -1960,14 +1950,9 @@ class TestScenarioEngineIntegration:

         # Candidate stored under US market — should NOT be found for KR market
         us_candidate = ScanCandidate(
-            stock_code="005930",
-            name="Overlap",
-            price=100,
-            volume=500000,
-            volume_ratio=5.0,
-            rsi=15.0,
-            signal="oversold",
-            score=90.0,
+            stock_code="005930", name="Overlap", price=100,
+            volume=500000, volume_ratio=5.0, rsi=15.0,
+            signal="oversold", score=90.0,
         )

         with patch("src.main.log_trade"):
@@ -1997,10 +1982,7 @@ class TestScenarioEngineIntegration:

     @pytest.mark.asyncio
     async def test_scenario_engine_called_without_scanner_data(
-        self,
-        mock_broker: MagicMock,
-        mock_market: MagicMock,
-        mock_telegram: MagicMock,
+        self, mock_broker: MagicMock, mock_market: MagicMock, mock_telegram: MagicMock,
     ) -> None:
         """Test scenario engine works when stock has no scan candidate."""
         engine = MagicMock(spec=ScenarioEngine)
@@ -2038,9 +2020,7 @@ class TestScenarioEngineIntegration:

     @pytest.mark.asyncio
     async def test_holding_overseas_stock_derives_volume_ratio_from_price_api(
-        self,
-        mock_broker: MagicMock,
-        mock_telegram: MagicMock,
+        self, mock_broker: MagicMock, mock_telegram: MagicMock,
     ) -> None:
         """Test overseas holding stocks derive volume_ratio from get_overseas_price high/low."""
         engine = MagicMock(spec=ScenarioEngine)
@@ -2055,17 +2035,15 @@ class TestScenarioEngineIntegration:

         os_broker = MagicMock()
         # price_change_pct=5.0, high=106, low=94 → intraday_range=12% → volume_ratio=max(1,6)=6
-        os_broker.get_overseas_price = AsyncMock(
-            return_value={
+        os_broker.get_overseas_price = AsyncMock(return_value={
             "output": {"last": "100.0", "rate": "5.0", "high": "106.0", "low": "94.0"}
-        }
-        )
-        os_broker.get_overseas_balance = AsyncMock(
-            return_value={"output2": [{"frcr_evlu_tota": "10000", "frcr_buy_amt_smtl": "9000"}]}
-        )
-        os_broker.get_overseas_buying_power = AsyncMock(
-            return_value={"output": {"ovrs_ord_psbl_amt": "500"}}
-        )
+        })
+        os_broker.get_overseas_balance = AsyncMock(return_value={
+            "output2": [{"frcr_evlu_tota": "10000", "frcr_buy_amt_smtl": "9000"}]
+        })
+        os_broker.get_overseas_buying_power = AsyncMock(return_value={
+            "output": {"ovrs_ord_psbl_amt": "500"}
+        })

         with patch("src.main.log_trade"):
             await trading_cycle(
@@ -2097,10 +2075,7 @@ class TestScenarioEngineIntegration:

     @pytest.mark.asyncio
     async def test_scenario_matched_notification_sent(
-        self,
-        mock_broker: MagicMock,
-        mock_market: MagicMock,
-        mock_telegram: MagicMock,
+        self, mock_broker: MagicMock, mock_market: MagicMock, mock_telegram: MagicMock,
     ) -> None:
         """Test telegram notification sent when a scenario matches."""
         # Create a match with matched_scenario (not None)
@@ -2150,10 +2125,7 @@ class TestScenarioEngineIntegration:

     @pytest.mark.asyncio
     async def test_no_scenario_matched_notification_on_default_hold(
-        self,
-        mock_broker: MagicMock,
-        mock_market: MagicMock,
-        mock_telegram: MagicMock,
+        self, mock_broker: MagicMock, mock_market: MagicMock, mock_telegram: MagicMock,
     ) -> None:
         """Test no scenario notification when default HOLD is returned."""
         engine = MagicMock(spec=ScenarioEngine)
@@ -2184,10 +2156,7 @@ class TestScenarioEngineIntegration:

     @pytest.mark.asyncio
     async def test_decision_logger_receives_scenario_match_details(
-        self,
-        mock_broker: MagicMock,
-        mock_market: MagicMock,
-        mock_telegram: MagicMock,
+        self, mock_broker: MagicMock, mock_market: MagicMock, mock_telegram: MagicMock,
     ) -> None:
         """Test decision logger context includes scenario match details."""
         match = ScenarioMatch(
@@ -2224,16 +2193,13 @@ class TestScenarioEngineIntegration:

         decision_logger.log_decision.assert_called_once()
         call_kwargs = decision_logger.log_decision.call_args.kwargs
-        assert call_kwargs["session_id"] == get_session_info(mock_market).session_id
+        assert call_kwargs["session_id"] == "KRX_REG"
         assert "scenario_match" in call_kwargs["context_snapshot"]
         assert call_kwargs["context_snapshot"]["scenario_match"]["rsi"] == 45.0

     @pytest.mark.asyncio
     async def test_reduce_all_does_not_execute_order(
-        self,
-        mock_broker: MagicMock,
-        mock_market: MagicMock,
-        mock_telegram: MagicMock,
+        self, mock_broker: MagicMock, mock_market: MagicMock, mock_telegram: MagicMock,
     ) -> None:
         """Test REDUCE_ALL action does not trigger order execution."""
         match = ScenarioMatch(
@@ -2374,9 +2340,7 @@ async def test_stoploss_reentry_cooldown_blocks_buy_when_active() -> None:
     broker.get_balance = AsyncMock(
         return_value={
             "output1": [],
-            "output2": [
-                {"tot_evlu_amt": "100000", "dnca_tot_amt": "50000", "pchs_amt_smtl_amt": "50000"}
-            ],
+            "output2": [{"tot_evlu_amt": "100000", "dnca_tot_amt": "50000", "pchs_amt_smtl_amt": "50000"}],
         }
     )
     broker.send_order = AsyncMock(return_value={"msg1": "OK"})
@@ -2395,9 +2359,7 @@ async def test_stoploss_reentry_cooldown_blocks_buy_when_active() -> None:
         risk=MagicMock(validate_order=MagicMock(), check_circuit_breaker=MagicMock()),
         db_conn=db_conn,
         decision_logger=DecisionLogger(db_conn),
-        context_store=MagicMock(
-            get_latest_timeframe=MagicMock(return_value=None), set_context=MagicMock()
-        ),
+        context_store=MagicMock(get_latest_timeframe=MagicMock(return_value=None), set_context=MagicMock()),
         criticality_assessor=MagicMock(
             assess_market_conditions=MagicMock(return_value=MagicMock(value="NORMAL")),
             get_timeout=MagicMock(return_value=5.0),
@@ -2427,9 +2389,7 @@ async def test_stoploss_reentry_cooldown_allows_buy_after_expiry() -> None:
    broker.get_balance = AsyncMock(
         return_value={
             "output1": [],
-            "output2": [
-                {"tot_evlu_amt": "100000", "dnca_tot_amt": "50000", "pchs_amt_smtl_amt": "50000"}
-            ],
+            "output2": [{"tot_evlu_amt": "100000", "dnca_tot_amt": "50000", "pchs_amt_smtl_amt": "50000"}],
         }
     )
     broker.send_order = AsyncMock(return_value={"msg1": "OK"})
@@ -2448,9 +2408,7 @@ async def test_stoploss_reentry_cooldown_allows_buy_after_expiry() -> None:
         risk=MagicMock(validate_order=MagicMock(), check_circuit_breaker=MagicMock()),
         db_conn=db_conn,
         decision_logger=DecisionLogger(db_conn),
-        context_store=MagicMock(
-            get_latest_timeframe=MagicMock(return_value=None), set_context=MagicMock()
-        ),
+        context_store=MagicMock(get_latest_timeframe=MagicMock(return_value=None), set_context=MagicMock()),
         criticality_assessor=MagicMock(
             assess_market_conditions=MagicMock(return_value=MagicMock(value="NORMAL")),
             get_timeout=MagicMock(return_value=5.0),
@@ -3461,7 +3419,6 @@ def test_start_dashboard_server_returns_none_when_uvicorn_missing() -> None:
         DASHBOARD_ENABLED=True,
     )
     import builtins
-
    real_import = builtins.__import__

    def mock_import(name: str, *args: object, **kwargs: object) -> object:
@@ -3489,13 +3446,8 @@ class TestBuyCooldown:
         broker.get_current_price = AsyncMock(return_value=(100.0, 1.0, 0.0))
         broker.get_balance = AsyncMock(
             return_value={
-                "output2": [
-                    {
-                        "tot_evlu_amt": "1000000",
-                        "dnca_tot_amt": "500000",
-                        "pchs_amt_smtl_amt": "500000",
-                    }
-                ]
+                "output2": [{"tot_evlu_amt": "1000000", "dnca_tot_amt": "500000",
+                    "pchs_amt_smtl_amt": "500000"}]
             }
         )
         broker.send_order = AsyncMock(return_value={"msg1": "OK"})
@@ -3523,22 +3475,13 @@ class TestBuyCooldown:
     def mock_overseas_broker(self) -> MagicMock:
         broker = MagicMock()
         broker.get_overseas_price = AsyncMock(
-            return_value={
-                "output": {
-                    "last": "1.0",
-                    "rate": "0.0",
-                    "high": "1.05",
-                    "low": "0.95",
-                    "tvol": "1000000",
-                }
-            }
+            return_value={"output": {"last": "1.0", "rate": "0.0",
+                "high": "1.05", "low": "0.95", "tvol": "1000000"}}
         )
-        broker.get_overseas_balance = AsyncMock(
-            return_value={
+        broker.get_overseas_balance = AsyncMock(return_value={
             "output1": [],
             "output2": [{"frcr_evlu_tota": "50000", "frcr_buy_amt_smtl": "0"}],
-        }
-        )
+        })
         broker.get_overseas_buying_power = AsyncMock(
             return_value={"output": {"ovrs_ord_psbl_amt": "50000"}}
         )
@@ -3558,9 +3501,7 @@ class TestBuyCooldown:

     @pytest.mark.asyncio
     async def test_cooldown_set_on_insufficient_balance(
-        self,
-        mock_broker: MagicMock,
-        mock_overseas_broker: MagicMock,
+        self, mock_broker: MagicMock, mock_overseas_broker: MagicMock,
         mock_overseas_market: MagicMock,
     ) -> None:
         """BUY cooldown entry is created after 주문가능금액 rejection."""
@@ -3568,12 +3509,7 @@ class TestBuyCooldown:
         engine.evaluate = MagicMock(return_value=self._make_buy_match_overseas("MLECW"))
         buy_cooldown: dict[str, float] = {}

-        with patch("src.main.log_trade"), patch(
-            "src.main._resolve_market_setting",
-            side_effect=lambda **kwargs: (
-                0.1 if kwargs.get("key") == "US_MIN_PRICE" else kwargs.get("default")
-            ),
-        ):
+        with patch("src.main.log_trade"):
             await trading_cycle(
                 broker=mock_broker,
                 overseas_broker=mock_overseas_broker,
@@ -3604,9 +3540,7 @@ class TestBuyCooldown:

     @pytest.mark.asyncio
     async def test_cooldown_skips_buy(
-        self,
-        mock_broker: MagicMock,
-        mock_overseas_broker: MagicMock,
+        self, mock_broker: MagicMock, mock_overseas_broker: MagicMock,
         mock_overseas_market: MagicMock,
     ) -> None:
         """BUY is skipped when cooldown is active for the stock."""
@@ -3614,9 +3548,10 @@ class TestBuyCooldown:
         engine.evaluate = MagicMock(return_value=self._make_buy_match_overseas("MLECW"))

         import asyncio
-
         # Set an active cooldown (expires far in the future)
-        buy_cooldown: dict[str, float] = {"US_NASDAQ:MLECW": asyncio.get_event_loop().time() + 600}
+        buy_cooldown: dict[str, float] = {
+            "US_NASDAQ:MLECW": asyncio.get_event_loop().time() + 600
+        }

         with patch("src.main.log_trade"):
             await trading_cycle(
@@ -3649,9 +3584,7 @@ class TestBuyCooldown:

     @pytest.mark.asyncio
     async def test_cooldown_not_set_on_other_errors(
-        self,
-        mock_broker: MagicMock,
-        mock_overseas_market: MagicMock,
+        self, mock_broker: MagicMock, mock_overseas_market: MagicMock,
     ) -> None:
         """Cooldown is NOT set for non-balance-related rejections."""
         engine = MagicMock(spec=ScenarioEngine)
@@ -3659,22 +3592,13 @@ class TestBuyCooldown:
         # Different rejection reason
         overseas_broker = MagicMock()
         overseas_broker.get_overseas_price = AsyncMock(
-            return_value={
-                "output": {
-                    "last": "1.0",
-                    "rate": "0.0",
-                    "high": "1.05",
-                    "low": "0.95",
-                    "tvol": "1000000",
-                }
-            }
+            return_value={"output": {"last": "1.0", "rate": "0.0",
+                "high": "1.05", "low": "0.95", "tvol": "1000000"}}
         )
-        overseas_broker.get_overseas_balance = AsyncMock(
-            return_value={
+        overseas_broker.get_overseas_balance = AsyncMock(return_value={
             "output1": [],
             "output2": [{"frcr_evlu_tota": "50000", "frcr_buy_amt_smtl": "0"}],
-        }
-        )
+        })
         overseas_broker.get_overseas_buying_power = AsyncMock(
             return_value={"output": {"ovrs_ord_psbl_amt": "50000"}}
         )
@@ -3714,21 +3638,14 @@ class TestBuyCooldown:

     @pytest.mark.asyncio
     async def test_no_cooldown_param_still_works(
-        self,
-        mock_broker: MagicMock,
-        mock_overseas_broker: MagicMock,
+        self, mock_broker: MagicMock, mock_overseas_broker: MagicMock,
         mock_overseas_market: MagicMock,
     ) -> None:
         """trading_cycle works normally when buy_cooldown is None (default)."""
         engine = MagicMock(spec=ScenarioEngine)
         engine.evaluate = MagicMock(return_value=self._make_buy_match_overseas("MLECW"))

-        with patch("src.main.log_trade"), patch(
-            "src.main._resolve_market_setting",
-            side_effect=lambda **kwargs: (
-                0.1 if kwargs.get("key") == "US_MIN_PRICE" else kwargs.get("default")
-            ),
-        ):
+        with patch("src.main.log_trade"):
             await trading_cycle(
                 broker=mock_broker,
                 overseas_broker=mock_overseas_broker,
@@ -3805,7 +3722,6 @@ class TestMarketOutlookConfidenceThreshold:
         self, confidence: int, stock_code: str = "005930"
     ) -> ScenarioMatch:
         from src.strategy.models import StockScenario
-
         scenario = StockScenario(
             condition=StockCondition(rsi_below=30),
             action=ScenarioAction.BUY,
@@ -3820,9 +3736,10 @@ class TestMarketOutlookConfidenceThreshold:
             rationale="Test buy",
         )

-    def _make_playbook_with_outlook(self, outlook_str: str, market: str = "KR") -> DayPlaybook:
+    def _make_playbook_with_outlook(
+        self, outlook_str: str, market: str = "KR"
+    ) -> DayPlaybook:
         from src.strategy.models import MarketOutlook
-
         outlook_map = {
             "bearish": MarketOutlook.BEARISH,
             "bullish": MarketOutlook.BULLISH,
@@ -4074,15 +3991,7 @@ async def test_buy_suppressed_when_open_position_exists() -> None:

     overseas_broker = MagicMock()
     overseas_broker.get_overseas_price = AsyncMock(
-        return_value={
-            "output": {
-                "last": "51.0",
-                "rate": "2.0",
-                "high": "52.0",
-                "low": "50.0",
-                "tvol": "1000000",
-            }
-        }
+        return_value={"output": {"last": "51.0", "rate": "2.0", "high": "52.0", "low": "50.0", "tvol": "1000000"}}
     )
     overseas_broker.get_overseas_balance = AsyncMock(
         return_value={
@@ -4149,15 +4058,7 @@ async def test_buy_proceeds_when_no_open_position() -> None:

     overseas_broker = MagicMock()
     overseas_broker.get_overseas_price = AsyncMock(
-        return_value={
-            "output": {
-                "last": "100.0",
-                "rate": "1.0",
-                "high": "101.0",
-                "low": "99.0",
-                "tvol": "500000",
-            }
-        }
+        return_value={"output": {"last": "100.0", "rate": "1.0", "high": "101.0", "low": "99.0", "tvol": "500000"}}
     )
     overseas_broker.get_overseas_balance = AsyncMock(
         return_value={
@@ -4259,7 +4160,9 @@ class TestOverseasBrokerIntegration:
         )

         overseas_broker = MagicMock()
-        overseas_broker.get_overseas_price = AsyncMock(return_value={"output": {"last": "182.50"}})
+        overseas_broker.get_overseas_price = AsyncMock(
+            return_value={"output": {"last": "182.50"}}
+        )
         # 브로커: 여전히 AAPL 10주 보유 중 (SELL 미체결)
         overseas_broker.get_overseas_balance = AsyncMock(
             return_value={
@@ -4333,7 +4236,9 @@ class TestOverseasBrokerIntegration:
         # DB: 레코드 없음 (신규 포지션)

         overseas_broker = MagicMock()
-        overseas_broker.get_overseas_price = AsyncMock(return_value={"output": {"last": "182.50"}})
+        overseas_broker.get_overseas_price = AsyncMock(
+            return_value={"output": {"last": "182.50"}}
+        )
         # 브로커: AAPL 미보유
         overseas_broker.get_overseas_balance = AsyncMock(
             return_value={
@@ -4401,7 +4306,9 @@ class TestOverseasBrokerIntegration:
         db_conn = init_db(":memory:")

         overseas_broker = MagicMock()
-        overseas_broker.get_overseas_price = AsyncMock(return_value={"output": {"last": "182.50"}})
+        overseas_broker.get_overseas_price = AsyncMock(
+            return_value={"output": {"last": "182.50"}}
+        )
         overseas_broker.get_overseas_balance = AsyncMock(
             return_value={
                 "output1": [],
@@ -4480,7 +4387,6 @@ class TestRetryConnection:
     @pytest.mark.asyncio
     async def test_success_on_first_attempt(self) -> None:
         """Returns the result immediately when the first call succeeds."""
-
         async def ok() -> str:
             return "data"

@@ -4690,7 +4596,9 @@ class TestDailyCBBaseline:
             return_value=self._make_domestic_balance(tot_evlu_amt=55000.0)
         )
         # Price data for the stock
-        broker.get_current_price = AsyncMock(return_value=(100.0, 1.5, 100.0))
+        broker.get_current_price = AsyncMock(
+            return_value=(100.0, 1.5, 100.0)
+        )

         market = MagicMock()
         market.name = "KR"
@@ -4735,10 +4643,8 @@ class TestDailyCBBaseline:
         async def _passthrough(fn, *a, label: str = "", **kw): # type: ignore[override]
             return await fn(*a, **kw)

-        with (
-            patch("src.main.get_open_markets", return_value=[market]),
-            patch("src.main._retry_connection", new=_passthrough),
-        ):
+        with patch("src.main.get_open_markets", return_value=[market]), \
+             patch("src.main._retry_connection", new=_passthrough):
             result = await run_daily_session(
                 broker=broker,
                 overseas_broker=MagicMock(),
@@ -4814,10 +4720,8 @@ class TestDailyCBBaseline:
         async def _passthrough(fn, *a, label: str = "", **kw): # type: ignore[override]
             return await fn(*a, **kw)

-        with (
-            patch("src.main.get_open_markets", return_value=[market]),
-            patch("src.main._retry_connection", new=_passthrough),
-        ):
+        with patch("src.main.get_open_markets", return_value=[market]), \
+             patch("src.main._retry_connection", new=_passthrough):
             result = await run_daily_session(
                 broker=broker,
                 overseas_broker=MagicMock(),
@@ -4940,10 +4844,8 @@ async def test_run_daily_session_applies_staged_exit_override_on_hold() -> None:
     async def _passthrough(fn, *a, label: str = "", **kw): # type: ignore[override]
         return await fn(*a, **kw)

-    with (
-        patch("src.main.get_open_markets", return_value=[market]),
-        patch("src.main._retry_connection", new=_passthrough),
-    ):
+    with patch("src.main.get_open_markets", return_value=[market]), \
+         patch("src.main._retry_connection", new=_passthrough):
         await run_daily_session(
             broker=broker,
             overseas_broker=MagicMock(),
@@ -5130,14 +5032,17 @@ class TestSyncPositionsFromBroker:
         db_conn = init_db(":memory:")

         broker = MagicMock()
-        broker.get_balance = AsyncMock(return_value=self._domestic_balance("005930", qty=7))
+        broker.get_balance = AsyncMock(
+            return_value=self._domestic_balance("005930", qty=7)
+        )
         overseas_broker = MagicMock()

-        synced = await sync_positions_from_broker(broker, overseas_broker, db_conn, settings)
+        synced = await sync_positions_from_broker(
+            broker, overseas_broker, db_conn, settings
+        )

         assert synced == 1
         from src.db import get_open_position
-
         pos = get_open_position(db_conn, "005930", "KR")
         assert pos is not None
         assert pos["quantity"] == 7
@@ -5161,10 +5066,14 @@ class TestSyncPositionsFromBroker:
         )

         broker = MagicMock()
-        broker.get_balance = AsyncMock(return_value=self._domestic_balance("005930", qty=5))
+        broker.get_balance = AsyncMock(
+            return_value=self._domestic_balance("005930", qty=5)
+        )
         overseas_broker = MagicMock()

-        synced = await sync_positions_from_broker(broker, overseas_broker, db_conn, settings)
+        synced = await sync_positions_from_broker(
+            broker, overseas_broker, db_conn, settings
+        )

         assert synced == 0

@@ -5180,11 +5089,12 @@ class TestSyncPositionsFromBroker:
             return_value=self._overseas_balance("AAPL", qty=10)
         )

-        synced = await sync_positions_from_broker(broker, overseas_broker, db_conn, settings)
+        synced = await sync_positions_from_broker(
+            broker, overseas_broker, db_conn, settings
+        )

         assert synced == 1
         from src.db import get_open_position
-
         pos = get_open_position(db_conn, "AAPL", "US_NASDAQ")
         assert pos is not None
         assert pos["quantity"] == 10
@@ -5196,10 +5106,14 @@ class TestSyncPositionsFromBroker:
         db_conn = init_db(":memory:")

         broker = MagicMock()
-        broker.get_balance = AsyncMock(return_value={"output1": [], "output2": [{}]})
+        broker.get_balance = AsyncMock(
+            return_value={"output1": [], "output2": [{}]}
+        )
         overseas_broker = MagicMock()

-        synced = await sync_positions_from_broker(broker, overseas_broker, db_conn, settings)
+        synced = await sync_positions_from_broker(
+            broker, overseas_broker, db_conn, settings
+        )

         assert synced == 0

@@ -5210,10 +5124,14 @@ class TestSyncPositionsFromBroker:
         db_conn = init_db(":memory:")

         broker = MagicMock()
-        broker.get_balance = AsyncMock(side_effect=ConnectionError("KIS unreachable"))
+        broker.get_balance = AsyncMock(
+            side_effect=ConnectionError("KIS unreachable")
+        )
         overseas_broker = MagicMock()

-        synced = await sync_positions_from_broker(broker, overseas_broker, db_conn, settings)
+        synced = await sync_positions_from_broker(
+            broker, overseas_broker, db_conn, settings
+        )

         assert synced == 0 # Failure treated as no-op

@@ -5233,7 +5151,9 @@ class TestSyncPositionsFromBroker:
             return_value={"output1": [], "output2": [{}]}
         )

-        await sync_positions_from_broker(broker, overseas_broker, db_conn, settings)
+        await sync_positions_from_broker(
+            broker, overseas_broker, db_conn, settings
+        )

         # Two distinct exchange codes (NASD, NYSE) → 2 calls
         assert overseas_broker.get_overseas_balance.call_count == 2
@@ -5246,9 +5166,7 @@ class TestSyncPositionsFromBroker:

         balance = {
             "output1": [{"pdno": "005930", "ord_psbl_qty": "5", "pchs_avg_pric": "68000.0"}],
-            "output2": [
-                {"tot_evlu_amt": "1000000", "dnca_tot_amt": "500000", "pchs_amt_smtl_amt": "500000"}
-            ],
+            "output2": [{"tot_evlu_amt": "1000000", "dnca_tot_amt": "500000", "pchs_amt_smtl_amt": "500000"}],
         }
         broker = MagicMock()
         broker.get_balance = AsyncMock(return_value=balance)
@@ -5257,7 +5175,6 @@ class TestSyncPositionsFromBroker:
         await sync_positions_from_broker(broker, overseas_broker, db_conn, settings)

         from src.db import get_open_position
-
         pos = get_open_position(db_conn, "005930", "KR")
         assert pos is not None
         assert pos["price"] == 68000.0
@@ -5279,7 +5196,6 @@ class TestSyncPositionsFromBroker:
         await sync_positions_from_broker(broker, overseas_broker, db_conn, settings)

         from src.db import get_open_position
-
         pos = get_open_position(db_conn, "AAPL", "US_NASDAQ")
         assert pos is not None
         assert pos["price"] == 170.0
@@ -5293,9 +5209,7 @@ class TestSyncPositionsFromBroker:
         # No pchs_avg_pric in output1
         balance = {
             "output1": [{"pdno": "005930", "ord_psbl_qty": "5"}],
-            "output2": [
-                {"tot_evlu_amt": "1000000", "dnca_tot_amt": "500000", "pchs_amt_smtl_amt": "500000"}
-            ],
+            "output2": [{"tot_evlu_amt": "1000000", "dnca_tot_amt": "500000", "pchs_amt_smtl_amt": "500000"}],
         }
         broker = MagicMock()
         broker.get_balance = AsyncMock(return_value=balance)
@@ -5304,7 +5218,6 @@ class TestSyncPositionsFromBroker:
         await sync_positions_from_broker(broker, overseas_broker, db_conn, settings)

         from src.db import get_open_position
-
         pos = get_open_position(db_conn, "005930", "KR")
         assert pos is not None
         assert pos["price"] == 0.0
@@ -5432,8 +5345,12 @@ class TestHandleOverseasPendingOrders:
             "ovrs_excg_cd": "NASD",
         }
         overseas_broker = MagicMock()
-        overseas_broker.get_overseas_pending_orders = AsyncMock(return_value=[pending_order])
-        overseas_broker.cancel_overseas_order = AsyncMock(return_value={"rt_cd": "0", "msg1": "OK"})
+        overseas_broker.get_overseas_pending_orders = AsyncMock(
+            return_value=[pending_order]
+        )
+        overseas_broker.cancel_overseas_order = AsyncMock(
+            return_value={"rt_cd": "0", "msg1": "OK"}
+        )

         sell_resubmit_counts: dict[str, int] = {}
         buy_cooldown: dict[str, float] = {}
@@ -5468,10 +5385,18 @@ class TestHandleOverseasPendingOrders:
             "ovrs_excg_cd": "NASD",
         }
         overseas_broker = MagicMock()
-        overseas_broker.get_overseas_pending_orders = AsyncMock(return_value=[pending_order])
-        overseas_broker.cancel_overseas_order = AsyncMock(return_value={"rt_cd": "0", "msg1": "OK"})
-        overseas_broker.get_overseas_price = AsyncMock(return_value={"output": {"last": "200.0"}})
-        overseas_broker.send_overseas_order = AsyncMock(return_value={"rt_cd": "0", "msg1": "OK"})
+        overseas_broker.get_overseas_pending_orders = AsyncMock(
+            return_value=[pending_order]
+        )
+        overseas_broker.cancel_overseas_order = AsyncMock(
+            return_value={"rt_cd": "0", "msg1": "OK"}
+        )
+        overseas_broker.get_overseas_price = AsyncMock(
+            return_value={"output": {"last": "200.0"}}
+        )
+        overseas_broker.send_overseas_order = AsyncMock(
+            return_value={"rt_cd": "0", "msg1": "OK"}
+        )

         sell_resubmit_counts: dict[str, int] = {}

@@ -5502,7 +5427,9 @@ class TestHandleOverseasPendingOrders:
             "ovrs_excg_cd": "NASD",
         }
         overseas_broker = MagicMock()
-        overseas_broker.get_overseas_pending_orders = AsyncMock(return_value=[pending_order])
+        overseas_broker.get_overseas_pending_orders = AsyncMock(
+            return_value=[pending_order]
+        )
         overseas_broker.cancel_overseas_order = AsyncMock(
             return_value={"rt_cd": "1", "msg1": "Error"} # failure
         )
@@ -5531,8 +5458,12 @@ class TestHandleOverseasPendingOrders:
             "ovrs_excg_cd": "NASD",
         }
         overseas_broker = MagicMock()
-        overseas_broker.get_overseas_pending_orders = AsyncMock(return_value=[pending_order])
-        overseas_broker.cancel_overseas_order = AsyncMock(return_value={"rt_cd": "0", "msg1": "OK"})
+        overseas_broker.get_overseas_pending_orders = AsyncMock(
+            return_value=[pending_order]
+        )
+        overseas_broker.cancel_overseas_order = AsyncMock(
+            return_value={"rt_cd": "0", "msg1": "OK"}
+        )
         overseas_broker.send_overseas_order = AsyncMock()

         # Already resubmitted once
@@ -5605,7 +5536,9 @@ class TestHandleDomesticPendingOrders:
         }
         broker = MagicMock()
         broker.get_domestic_pending_orders = AsyncMock(return_value=[pending_order])
-        broker.cancel_domestic_order = AsyncMock(return_value={"rt_cd": "0", "msg1": "OK"})
+        broker.cancel_domestic_order = AsyncMock(
+            return_value={"rt_cd": "0", "msg1": "OK"}
+        )

         sell_resubmit_counts: dict[str, int] = {}
         buy_cooldown: dict[str, float] = {}
@@ -5644,13 +5577,17 @@ class TestHandleDomesticPendingOrders:
         }
         broker = MagicMock()
         broker.get_domestic_pending_orders = AsyncMock(return_value=[pending_order])
-        broker.cancel_domestic_order = AsyncMock(return_value={"rt_cd": "0", "msg1": "OK"})
+        broker.cancel_domestic_order = AsyncMock(
+            return_value={"rt_cd": "0", "msg1": "OK"}
+        )
         broker.get_current_price = AsyncMock(return_value=(50000.0, 0.0, 0.0))
         broker.send_order = AsyncMock(return_value={"rt_cd": "0"})

         sell_resubmit_counts: dict[str, int] = {}

-        await handle_domestic_pending_orders(broker, telegram, settings, sell_resubmit_counts)
+        await handle_domestic_pending_orders(
+            broker, telegram, settings, sell_resubmit_counts
+        )

         broker.cancel_domestic_order.assert_called_once()
         broker.send_order.assert_called_once()
@@ -5684,7 +5621,9 @@ class TestHandleDomesticPendingOrders:

         sell_resubmit_counts: dict[str, int] = {}

-        await handle_domestic_pending_orders(broker, telegram, settings, sell_resubmit_counts)
+        await handle_domestic_pending_orders(
+            broker, telegram, settings, sell_resubmit_counts
+        )

         broker.send_order.assert_not_called()
         telegram.notify_unfilled_order.assert_not_called()
@@ -5704,13 +5643,17 @@ class TestHandleDomesticPendingOrders:
         }
         broker = MagicMock()
         broker.get_domestic_pending_orders = AsyncMock(return_value=[pending_order])
-        broker.cancel_domestic_order = AsyncMock(return_value={"rt_cd": "0", "msg1": "OK"})
+        broker.cancel_domestic_order = AsyncMock(
+            return_value={"rt_cd": "0", "msg1": "OK"}
+        )
         broker.send_order = AsyncMock()

         # Already resubmitted once
         sell_resubmit_counts: dict[str, int] = {"KR:005930": 1}

-        await handle_domestic_pending_orders(broker, telegram, settings, sell_resubmit_counts)
+        await handle_domestic_pending_orders(
+            broker, telegram, settings, sell_resubmit_counts
+        )

         broker.cancel_domestic_order.assert_called_once()
         broker.send_order.assert_not_called()
@@ -5924,7 +5867,9 @@ class TestOverseasGhostPositionClose:
|
|||||||
current_price = 1.5
|
current_price = 1.5
|
||||||
# ord_psbl_qty=5 means the code passes the qty check and a SELL is sent
|
# ord_psbl_qty=5 means the code passes the qty check and a SELL is sent
|
||||||
balance_data = {
|
balance_data = {
|
||||||
"output1": [{"ovrs_pdno": stock_code, "ord_psbl_qty": "5", "ovrs_cblc_qty": "5"}],
|
"output1": [
|
||||||
|
{"ovrs_pdno": stock_code, "ord_psbl_qty": "5", "ovrs_cblc_qty": "5"}
|
||||||
|
],
|
||||||
"output2": [{"tot_evlu_amt": "10000"}],
|
"output2": [{"tot_evlu_amt": "10000"}],
|
||||||
}
|
}
|
||||||
sell_result = {"rt_cd": "1", "msg1": "모의투자 잔고내역이 없습니다"}
|
sell_result = {"rt_cd": "1", "msg1": "모의투자 잔고내역이 없습니다"}
|
||||||
@@ -5960,11 +5905,9 @@ class TestOverseasGhostPositionClose:
|
|||||||
settings.POSITION_SIZING_ENABLED = False
|
settings.POSITION_SIZING_ENABLED = False
|
||||||
settings.PAPER_OVERSEAS_CASH = 0
|
settings.PAPER_OVERSEAS_CASH = 0
|
||||||
|
|
||||||
with (
|
with patch("src.main.log_trade") as mock_log_trade, patch(
|
||||||
patch("src.main.log_trade") as mock_log_trade,
|
"src.main.get_open_position", return_value=None
|
||||||
patch("src.main.get_open_position", return_value=None),
|
), patch("src.main.get_latest_buy_trade", return_value=None):
|
||||||
patch("src.main.get_latest_buy_trade", return_value=None),
|
|
||||||
):
|
|
||||||
await trading_cycle(
|
await trading_cycle(
|
||||||
broker=domestic_broker,
|
broker=domestic_broker,
|
||||||
overseas_broker=overseas_broker,
|
overseas_broker=overseas_broker,
|
||||||
@@ -6033,9 +5976,8 @@ class TestOverseasGhostPositionClose:
|
|||||||
|
|
||||||
db_conn = MagicMock()
|
db_conn = MagicMock()
|
||||||
|
|
||||||
with (
|
with patch("src.main.log_trade") as mock_log_trade, patch(
|
||||||
patch("src.main.log_trade") as mock_log_trade,
|
"src.main.get_open_position", return_value=None
|
||||||
patch("src.main.get_open_position", return_value=None),
|
|
||||||
):
|
):
|
||||||
await trading_cycle(
|
await trading_cycle(
|
||||||
broker=domestic_broker,
|
broker=domestic_broker,
|
||||||
@@ -6226,10 +6168,7 @@ async def test_us_min_price_filter_boundary(price: float, should_block: bool) ->
|
|||||||
return_value={"output": {"last": str(price), "rate": "0.0"}}
|
return_value={"output": {"last": str(price), "rate": "0.0"}}
|
||||||
)
|
)
|
||||||
overseas_broker.get_overseas_balance = AsyncMock(
|
overseas_broker.get_overseas_balance = AsyncMock(
|
||||||
return_value={
|
return_value={"output1": [], "output2": [{"frcr_evlu_tota": "10000", "frcr_buy_amt_smtl": "0"}]}
|
||||||
"output1": [],
|
|
||||||
"output2": [{"frcr_evlu_tota": "10000", "frcr_buy_amt_smtl": "0"}],
|
|
||||||
}
|
|
||||||
)
|
)
|
||||||
overseas_broker.get_overseas_buying_power = AsyncMock(
|
overseas_broker.get_overseas_buying_power = AsyncMock(
|
||||||
return_value={"output": {"ovrs_ord_psbl_amt": "10000"}}
|
return_value={"output": {"ovrs_ord_psbl_amt": "10000"}}
|
||||||
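The `with`-statement hunks above swap Python's parenthesized multi-context form for the comma-chained form. The two spellings are semantically identical; the sketch below demonstrates this by patching stdlib names purely for illustration (parenthesized context managers require Python 3.10+):

```python
import json
from unittest.mock import patch

# Parenthesized form (the removed side of the diff):
with (
    patch("json.dumps") as p1,
    patch("json.loads") as p2,
):
    # Both attributes are replaced by the mocks inside the block.
    parenthesized_ok = json.dumps is p1 and json.loads is p2

# Comma-chained form (the added side of the diff), same semantics:
with patch("json.dumps") as p1, patch(
    "json.loads"
) as p2:
    chained_ok = json.dumps is p1 and json.loads is p2
```

Since both forms patch and restore the same targets, the rewrite is purely stylistic.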
[file boundary: filename not captured]
@@ -173,7 +173,9 @@ class TestGetNextMarketOpen:
         """Should find next Monday opening when called on weekend."""
         # Saturday 2026-02-07 12:00 UTC
         test_time = datetime(2026, 2, 7, 12, 0, tzinfo=ZoneInfo("UTC"))
-        market, open_time = get_next_market_open(enabled_markets=["KR"], now=test_time)
+        market, open_time = get_next_market_open(
+            enabled_markets=["KR"], now=test_time
+        )
         assert market.code == "KR"
         # Monday 2026-02-09 09:00 KST
         expected = datetime(2026, 2, 9, 9, 0, tzinfo=ZoneInfo("Asia/Seoul"))
@@ -183,7 +185,9 @@ class TestGetNextMarketOpen:
         """Should find next day opening when called after market close."""
         # Monday 2026-02-02 16:00 KST (after close)
         test_time = datetime(2026, 2, 2, 16, 0, tzinfo=ZoneInfo("Asia/Seoul"))
-        market, open_time = get_next_market_open(enabled_markets=["KR"], now=test_time)
+        market, open_time = get_next_market_open(
+            enabled_markets=["KR"], now=test_time
+        )
         assert market.code == "KR"
         # Tuesday 2026-02-03 09:00 KST
         expected = datetime(2026, 2, 3, 9, 0, tzinfo=ZoneInfo("Asia/Seoul"))
@@ -193,7 +197,9 @@ class TestGetNextMarketOpen:
         """Should find earliest opening market among multiple."""
         # Saturday 2026-02-07 12:00 UTC
         test_time = datetime(2026, 2, 7, 12, 0, tzinfo=ZoneInfo("UTC"))
-        market, open_time = get_next_market_open(enabled_markets=["KR", "US_NASDAQ"], now=test_time)
+        market, open_time = get_next_market_open(
+            enabled_markets=["KR", "US_NASDAQ"], now=test_time
+        )
         # Monday 2026-02-09: KR opens at 09:00 KST = 00:00 UTC
         # Monday 2026-02-09: US opens at 09:30 EST = 14:30 UTC
         # KR opens first
@@ -208,7 +214,9 @@ class TestGetNextMarketOpen:
     def test_get_next_market_open_invalid_market(self) -> None:
         """Should skip invalid market codes."""
         test_time = datetime(2026, 2, 7, 12, 0, tzinfo=ZoneInfo("UTC"))
-        market, _ = get_next_market_open(enabled_markets=["INVALID", "KR"], now=test_time)
+        market, _ = get_next_market_open(
+            enabled_markets=["INVALID", "KR"], now=test_time
+        )
         assert market.code == "KR"

     def test_get_next_market_open_prefers_extended_session(self) -> None:
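The market-open tests above assert an `open_time` computed in one timezone against an `expected` value built in another. That works because timezone-aware datetimes compare by absolute instant; a minimal sketch of the equality the `# KR opens at 09:00 KST = 00:00 UTC` comment relies on:

```python
from datetime import datetime
from zoneinfo import ZoneInfo

# 09:00 in Seoul (UTC+9, no DST) is the same instant as 00:00 UTC,
# so assertions are free to mix zones.
kst_open = datetime(2026, 2, 9, 9, 0, tzinfo=ZoneInfo("Asia/Seoul"))
utc_open = datetime(2026, 2, 9, 0, 0, tzinfo=ZoneInfo("UTC"))
same_instant = kst_open == utc_open
```

This is why `expected` can stay in `Asia/Seoul` even when `test_time` is given in UTC.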
[file boundary: filename not captured]
@@ -8,7 +8,7 @@ import aiohttp
 import pytest

 from src.broker.kis_api import KISBroker
-from src.broker.overseas import _PRICE_EXCHANGE_MAP, _RANKING_EXCHANGE_MAP, OverseasBroker
+from src.broker.overseas import OverseasBroker, _PRICE_EXCHANGE_MAP, _RANKING_EXCHANGE_MAP
 from src.config import Settings


@@ -85,27 +85,25 @@ class TestConfigDefaults:
         assert mock_settings.OVERSEAS_RANKING_VOLUME_TR_ID == "HHDFS76270000"

     def test_fluct_path(self, mock_settings: Settings) -> None:
-        assert (
-            mock_settings.OVERSEAS_RANKING_FLUCT_PATH
-            == "/uapi/overseas-stock/v1/ranking/updown-rate"
-        )
+        assert mock_settings.OVERSEAS_RANKING_FLUCT_PATH == "/uapi/overseas-stock/v1/ranking/updown-rate"

     def test_volume_path(self, mock_settings: Settings) -> None:
-        assert (
-            mock_settings.OVERSEAS_RANKING_VOLUME_PATH
-            == "/uapi/overseas-stock/v1/ranking/volume-surge"
-        )
+        assert mock_settings.OVERSEAS_RANKING_VOLUME_PATH == "/uapi/overseas-stock/v1/ranking/volume-surge"


 class TestFetchOverseasRankings:
     """Test fetch_overseas_rankings method."""

     @pytest.mark.asyncio
-    async def test_fluctuation_uses_correct_params(self, overseas_broker: OverseasBroker) -> None:
+    async def test_fluctuation_uses_correct_params(
+        self, overseas_broker: OverseasBroker
+    ) -> None:
         """Fluctuation ranking should use HHDFS76290000, updown-rate path, and correct params."""
         mock_resp = AsyncMock()
         mock_resp.status = 200
-        mock_resp.json = AsyncMock(return_value={"output": [{"symb": "AAPL", "name": "Apple"}]})
+        mock_resp.json = AsyncMock(
+            return_value={"output": [{"symb": "AAPL", "name": "Apple"}]}
+        )

         mock_session = MagicMock()
         mock_session.get = MagicMock(return_value=_make_async_cm(mock_resp))
@@ -134,11 +132,15 @@ class TestFetchOverseasRankings:
         overseas_broker._broker._auth_headers.assert_called_with("HHDFS76290000")

     @pytest.mark.asyncio
-    async def test_volume_uses_correct_params(self, overseas_broker: OverseasBroker) -> None:
+    async def test_volume_uses_correct_params(
+        self, overseas_broker: OverseasBroker
+    ) -> None:
         """Volume ranking should use HHDFS76270000, volume-surge path, and correct params."""
         mock_resp = AsyncMock()
         mock_resp.status = 200
-        mock_resp.json = AsyncMock(return_value={"output": [{"symb": "TSLA", "name": "Tesla"}]})
+        mock_resp.json = AsyncMock(
+            return_value={"output": [{"symb": "TSLA", "name": "Tesla"}]}
+        )

         mock_session = MagicMock()
         mock_session.get = MagicMock(return_value=_make_async_cm(mock_resp))
@@ -167,7 +169,9 @@ class TestFetchOverseasRankings:
         overseas_broker._broker._auth_headers.assert_called_with("HHDFS76270000")

     @pytest.mark.asyncio
-    async def test_404_returns_empty_list(self, overseas_broker: OverseasBroker) -> None:
+    async def test_404_returns_empty_list(
+        self, overseas_broker: OverseasBroker
+    ) -> None:
         """HTTP 404 should return empty list (fallback) instead of raising."""
         mock_resp = AsyncMock()
         mock_resp.status = 404
@@ -182,7 +186,9 @@ class TestFetchOverseasRankings:
         assert result == []

     @pytest.mark.asyncio
-    async def test_non_404_error_raises(self, overseas_broker: OverseasBroker) -> None:
+    async def test_non_404_error_raises(
+        self, overseas_broker: OverseasBroker
+    ) -> None:
         """Non-404 HTTP errors should raise ConnectionError."""
         mock_resp = AsyncMock()
         mock_resp.status = 500
@@ -197,7 +203,9 @@ class TestFetchOverseasRankings:
         await overseas_broker.fetch_overseas_rankings("NASD")

     @pytest.mark.asyncio
-    async def test_empty_response_returns_empty(self, overseas_broker: OverseasBroker) -> None:
+    async def test_empty_response_returns_empty(
+        self, overseas_broker: OverseasBroker
+    ) -> None:
         """Empty output in response should return empty list."""
         mock_resp = AsyncMock()
         mock_resp.status = 200
@@ -212,14 +220,18 @@ class TestFetchOverseasRankings:
         assert result == []

     @pytest.mark.asyncio
-    async def test_ranking_disabled_returns_empty(self, overseas_broker: OverseasBroker) -> None:
+    async def test_ranking_disabled_returns_empty(
+        self, overseas_broker: OverseasBroker
+    ) -> None:
         """When OVERSEAS_RANKING_ENABLED=False, should return empty immediately."""
         overseas_broker._broker._settings.OVERSEAS_RANKING_ENABLED = False
         result = await overseas_broker.fetch_overseas_rankings("NASD")
         assert result == []

     @pytest.mark.asyncio
-    async def test_limit_truncates_results(self, overseas_broker: OverseasBroker) -> None:
+    async def test_limit_truncates_results(
+        self, overseas_broker: OverseasBroker
+    ) -> None:
         """Results should be truncated to the specified limit."""
         rows = [{"symb": f"SYM{i}"} for i in range(20)]
         mock_resp = AsyncMock()
@@ -235,7 +247,9 @@ class TestFetchOverseasRankings:
         assert len(result) == 5

     @pytest.mark.asyncio
-    async def test_network_error_raises(self, overseas_broker: OverseasBroker) -> None:
+    async def test_network_error_raises(
+        self, overseas_broker: OverseasBroker
+    ) -> None:
         """Network errors should raise ConnectionError."""
         cm = MagicMock()
         cm.__aenter__ = AsyncMock(side_effect=aiohttp.ClientError("timeout"))
@@ -250,7 +264,9 @@ class TestFetchOverseasRankings:
         await overseas_broker.fetch_overseas_rankings("NASD")

     @pytest.mark.asyncio
-    async def test_exchange_code_mapping_applied(self, overseas_broker: OverseasBroker) -> None:
+    async def test_exchange_code_mapping_applied(
+        self, overseas_broker: OverseasBroker
+    ) -> None:
         """All major exchanges should use mapped codes in API params."""
         for original, mapped in [("NASD", "NAS"), ("NYSE", "NYS"), ("AMEX", "AMS")]:
             mock_resp = AsyncMock()
@@ -282,9 +298,7 @@ class TestGetOverseasPrice:
         mock_session.get = MagicMock(return_value=_make_async_cm(mock_resp))

         _setup_broker_mocks(overseas_broker, mock_session)
-        overseas_broker._broker._auth_headers = AsyncMock(
-            return_value={"authorization": "Bearer t"}
-        )
+        overseas_broker._broker._auth_headers = AsyncMock(return_value={"authorization": "Bearer t"})

         result = await overseas_broker.get_overseas_price("NASD", "AAPL")
         assert result["output"]["last"] == "150.00"
@@ -516,14 +530,11 @@ class TestPriceExchangeMap:
     def test_price_map_equals_ranking_map(self) -> None:
         assert _PRICE_EXCHANGE_MAP is _RANKING_EXCHANGE_MAP

-    @pytest.mark.parametrize(
-        "original,expected",
-        [
+    @pytest.mark.parametrize("original,expected", [
             ("NASD", "NAS"),
             ("NYSE", "NYS"),
             ("AMEX", "AMS"),
-        ],
-    )
+    ])
     def test_us_exchange_code_mapping(self, original: str, expected: str) -> None:
         assert _PRICE_EXCHANGE_MAP[original] == expected

@@ -563,7 +574,9 @@ class TestOrderRtCdCheck:
         return OverseasBroker(broker)

     @pytest.mark.asyncio
-    async def test_success_rt_cd_returns_data(self, overseas_broker: OverseasBroker) -> None:
+    async def test_success_rt_cd_returns_data(
+        self, overseas_broker: OverseasBroker
+    ) -> None:
         """rt_cd='0' → order accepted, data returned."""
         mock_resp = AsyncMock()
         mock_resp.status = 200
@@ -577,7 +590,9 @@ class TestOrderRtCdCheck:
         assert result["rt_cd"] == "0"

     @pytest.mark.asyncio
-    async def test_error_rt_cd_returns_data_with_msg(self, overseas_broker: OverseasBroker) -> None:
+    async def test_error_rt_cd_returns_data_with_msg(
+        self, overseas_broker: OverseasBroker
+    ) -> None:
         """rt_cd != '0' → order rejected, data still returned (caller checks rt_cd)."""
         mock_resp = AsyncMock()
         mock_resp.status = 200
@@ -608,7 +623,6 @@ class TestPaperOverseasCash:

     def test_env_override(self) -> None:
         import os
-
         os.environ["PAPER_OVERSEAS_CASH"] = "25000"
         settings = Settings(
             KIS_APP_KEY="k",
@@ -621,7 +635,6 @@ class TestPaperOverseasCash:

     def test_zero_disables_fallback(self) -> None:
         import os
-
         os.environ["PAPER_OVERSEAS_CASH"] = "0"
         settings = Settings(
             KIS_APP_KEY="k",
@@ -809,7 +822,9 @@ class TestGetOverseasPendingOrders:
     """Tests for get_overseas_pending_orders method."""

     @pytest.mark.asyncio
-    async def test_paper_mode_returns_empty(self, overseas_broker: OverseasBroker) -> None:
+    async def test_paper_mode_returns_empty(
+        self, overseas_broker: OverseasBroker
+    ) -> None:
         """Paper mode should immediately return [] without any API call."""
         # Default mock_settings has MODE="paper"
         overseas_broker._broker._settings = overseas_broker._broker._settings.model_copy(
@@ -840,7 +855,9 @@ class TestGetOverseasPendingOrders:

         overseas_broker._broker._auth_headers = mock_auth_headers  # type: ignore[method-assign]

-        pending_orders = [{"odno": "001", "pdno": "AAPL", "sll_buy_dvsn_cd": "02", "nccs_qty": "5"}]
+        pending_orders = [
+            {"odno": "001", "pdno": "AAPL", "sll_buy_dvsn_cd": "02", "nccs_qty": "5"}
+        ]
         mock_resp = AsyncMock()
         mock_resp.status = 200
         mock_resp.json = AsyncMock(return_value={"output": pending_orders})
@@ -862,7 +879,9 @@ class TestGetOverseasPendingOrders:
         assert captured_params[0]["OVRS_EXCG_CD"] == "NASD"

     @pytest.mark.asyncio
-    async def test_live_mode_connection_error(self, overseas_broker: OverseasBroker) -> None:
+    async def test_live_mode_connection_error(
+        self, overseas_broker: OverseasBroker
+    ) -> None:
         """Network error in live mode should raise ConnectionError."""
         overseas_broker._broker._settings = overseas_broker._broker._settings.model_copy(
             update={"MODE": "live"}
@@ -907,41 +926,55 @@ class TestCancelOverseasOrder:
         return captured_tr_ids, mock_session

     @pytest.mark.asyncio
-    async def test_us_live_uses_tttt1004u(self, overseas_broker: OverseasBroker) -> None:
+    async def test_us_live_uses_tttt1004u(
+        self, overseas_broker: OverseasBroker
+    ) -> None:
         """US exchange in live mode should use TTTT1004U."""
         overseas_broker._broker._settings = overseas_broker._broker._settings.model_copy(
             update={"MODE": "live"}
         )
-        captured, _ = self._setup_cancel_mocks(overseas_broker, {"rt_cd": "0", "msg1": "OK"})
+        captured, _ = self._setup_cancel_mocks(
+            overseas_broker, {"rt_cd": "0", "msg1": "OK"}
+        )

         await overseas_broker.cancel_overseas_order("NASD", "AAPL", "ORD001", 5)

         assert "TTTT1004U" in captured

     @pytest.mark.asyncio
-    async def test_us_paper_uses_vttt1004u(self, overseas_broker: OverseasBroker) -> None:
+    async def test_us_paper_uses_vttt1004u(
+        self, overseas_broker: OverseasBroker
+    ) -> None:
         """US exchange in paper mode should use VTTT1004U."""
         # Default mock_settings has MODE="paper"
-        captured, _ = self._setup_cancel_mocks(overseas_broker, {"rt_cd": "0", "msg1": "OK"})
+        captured, _ = self._setup_cancel_mocks(
+            overseas_broker, {"rt_cd": "0", "msg1": "OK"}
+        )

         await overseas_broker.cancel_overseas_order("NASD", "AAPL", "ORD001", 5)

         assert "VTTT1004U" in captured

     @pytest.mark.asyncio
-    async def test_hk_live_uses_ttts1003u(self, overseas_broker: OverseasBroker) -> None:
+    async def test_hk_live_uses_ttts1003u(
+        self, overseas_broker: OverseasBroker
+    ) -> None:
         """SEHK exchange in live mode should use TTTS1003U."""
         overseas_broker._broker._settings = overseas_broker._broker._settings.model_copy(
             update={"MODE": "live"}
         )
-        captured, _ = self._setup_cancel_mocks(overseas_broker, {"rt_cd": "0", "msg1": "OK"})
+        captured, _ = self._setup_cancel_mocks(
+            overseas_broker, {"rt_cd": "0", "msg1": "OK"}
+        )

         await overseas_broker.cancel_overseas_order("SEHK", "0700", "ORD002", 10)

         assert "TTTS1003U" in captured

     @pytest.mark.asyncio
-    async def test_cancel_sets_rvse_cncl_dvsn_cd_02(self, overseas_broker: OverseasBroker) -> None:
+    async def test_cancel_sets_rvse_cncl_dvsn_cd_02(
+        self, overseas_broker: OverseasBroker
+    ) -> None:
         """Cancel body must include RVSE_CNCL_DVSN_CD='02' and OVRS_ORD_UNPR='0'."""
         captured_body: list[dict] = []

@@ -972,7 +1005,9 @@ class TestCancelOverseasOrder:
         assert captured_body[0]["ORGN_ODNO"] == "ORD003"

     @pytest.mark.asyncio
-    async def test_cancel_sets_hashkey_header(self, overseas_broker: OverseasBroker) -> None:
+    async def test_cancel_sets_hashkey_header(
+        self, overseas_broker: OverseasBroker
+    ) -> None:
         """hashkey must be set in the request headers."""
         captured_headers: list[dict] = []
         overseas_broker._broker._get_hash_key = AsyncMock(return_value="test_hash")  # type: ignore[method-assign]
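Many of the tests above stub `session.get(...)` with `_make_async_cm(mock_resp)`. The helper itself sits outside this diff, so the following is only an assumed sketch of its shape, inferred from the `cm.__aenter__ = AsyncMock(side_effect=...)` pattern the network-error test uses:

```python
import asyncio
from unittest.mock import AsyncMock, MagicMock

def _make_async_cm(resp: object) -> MagicMock:
    """Assumed shape: wrap `resp` so `async with cm as r:` yields `resp`."""
    # MagicMock supports configuring dunder methods directly on the instance,
    # which is exactly what the error-path test does with __aenter__.
    cm = MagicMock()
    cm.__aenter__ = AsyncMock(return_value=resp)
    cm.__aexit__ = AsyncMock(return_value=False)
    return cm

async def _demo() -> bool:
    sentinel = object()
    async with _make_async_cm(sentinel) as r:
        return r is sentinel

yields_response = asyncio.run(_demo())
```

With this shape, `mock_session.get = MagicMock(return_value=_make_async_cm(mock_resp))` makes `async with session.get(...) as resp:` hand back the prepared response without any real I/O.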
[file boundary: filename not captured]
@@ -78,7 +78,9 @@ def _gemini_response_json(
             "rationale": "Near circuit breaker",
         }
     ]
-    return json.dumps({"market_outlook": outlook, "global_rules": global_rules, "stocks": stocks})
+    return json.dumps(
+        {"market_outlook": outlook, "global_rules": global_rules, "stocks": stocks}
+    )


 def _make_planner(
@@ -562,12 +564,8 @@ class TestBuildPrompt:
     def test_prompt_contains_cross_market(self) -> None:
         planner = _make_planner()
         cross = CrossMarketContext(
-            market="US",
-            date="2026-02-07",
-            total_pnl=1.5,
-            win_rate=60,
-            index_change_pct=0.8,
-            lessons=["Cut losses early"],
+            market="US", date="2026-02-07", total_pnl=1.5,
+            win_rate=60, index_change_pct=0.8, lessons=["Cut losses early"],
         )

         prompt = planner._build_prompt("KR", [_candidate()], {}, None, cross)
@@ -685,7 +683,9 @@ class TestSmartFallbackPlaybook:
         )

     def test_momentum_candidate_gets_buy_on_volume(self) -> None:
-        candidates = [_candidate(code="CHOW", signal="momentum", volume_ratio=13.64, rsi=100.0)]
+        candidates = [
+            _candidate(code="CHOW", signal="momentum", volume_ratio=13.64, rsi=100.0)
+        ]
         settings = self._make_settings()

         pb = PreMarketPlanner._smart_fallback_playbook(
@@ -707,7 +707,9 @@ class TestSmartFallbackPlaybook:
         assert sell_sc.condition.price_change_pct_below == -3.0

     def test_oversold_candidate_gets_buy_on_rsi(self) -> None:
-        candidates = [_candidate(code="005930", signal="oversold", rsi=22.0, volume_ratio=3.5)]
+        candidates = [
+            _candidate(code="005930", signal="oversold", rsi=22.0, volume_ratio=3.5)
+        ]
         settings = self._make_settings()

         pb = PreMarketPlanner._smart_fallback_playbook(
@@ -774,7 +776,9 @@ class TestSmartFallbackPlaybook:
     def test_empty_candidates_returns_empty_playbook(self) -> None:
         settings = self._make_settings()

-        pb = PreMarketPlanner._smart_fallback_playbook(date(2026, 2, 17), "US_AMEX", [], settings)
+        pb = PreMarketPlanner._smart_fallback_playbook(
+            date(2026, 2, 17), "US_AMEX", [], settings
+        )

         assert pb.stock_count == 0

@@ -810,14 +814,19 @@ class TestSmartFallbackPlaybook:
         planner = _make_planner()
         planner._gemini.decide = AsyncMock(side_effect=ConnectionError("429 quota exceeded"))
         # momentum candidate
-        candidates = [_candidate(code="CHOW", signal="momentum", volume_ratio=13.64, rsi=100.0)]
+        candidates = [
+            _candidate(code="CHOW", signal="momentum", volume_ratio=13.64, rsi=100.0)
+        ]

-        pb = await planner.generate_playbook("US_AMEX", candidates, today=date(2026, 2, 18))
+        pb = await planner.generate_playbook(
+            "US_AMEX", candidates, today=date(2026, 2, 18)
+        )

         # Should NOT be all-SELL defensive; should have BUY for momentum
         assert pb.stock_count == 1
         buy_scenarios = [
-            s for s in pb.stock_playbooks[0].scenarios if s.action == ScenarioAction.BUY
+            s for s in pb.stock_playbooks[0].scenarios
+            if s.action == ScenarioAction.BUY
         ]
         assert len(buy_scenarios) == 1
         assert buy_scenarios[0].condition.volume_ratio_above == 2.0  # VOL_MULTIPLIER default
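The fallback test above configures `planner._gemini.decide = AsyncMock(side_effect=ConnectionError("429 quota exceeded"))`: an `AsyncMock` side_effect exception is raised when the mock is awaited, which is what forces `generate_playbook` onto the smart-fallback path. A minimal standalone sketch of that mechanism (the function names here are illustrative only):

```python
import asyncio
from unittest.mock import AsyncMock

# Awaiting this mock raises the configured exception instead of returning.
decide = AsyncMock(side_effect=ConnectionError("429 quota exceeded"))

async def call_with_fallback() -> str:
    try:
        await decide()  # raises ConnectionError via side_effect
        return "llm plan"
    except ConnectionError:
        return "fallback plan"

result = asyncio.run(call_with_fallback())
```

Because the exception surfaces only at await time, the test can assert on the playbook produced by the fallback branch rather than on the failed call itself.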
|
|||||||
@@ -14,7 +14,7 @@ from src.strategy.models import (
     StockPlaybook,
     StockScenario,
 )
-from src.strategy.scenario_engine import ScenarioEngine
+from src.strategy.scenario_engine import ScenarioEngine, ScenarioMatch


 @pytest.fixture
@@ -162,10 +162,8 @@ class TestEvaluateCondition:
     def test_mixed_invalid_types_no_exception(self, engine: ScenarioEngine) -> None:
         """Various invalid types should not raise exceptions."""
         cond = StockCondition(
-            rsi_below=30.0,
-            volume_ratio_above=2.0,
-            price_above=100,
-            price_change_pct_below=-1.0,
+            rsi_below=30.0, volume_ratio_above=2.0,
+            price_above=100, price_change_pct_below=-1.0,
         )
         data = {
             "rsi": [25],  # list
@@ -358,7 +356,9 @@ class TestEvaluate:

     def test_match_details_populated(self, engine: ScenarioEngine) -> None:
         pb = _playbook(scenarios=[_scenario(rsi_below=30.0, volume_ratio_above=2.0)])
-        result = engine.evaluate(pb, "005930", {"rsi": 25.0, "volume_ratio": 3.0}, {})
+        result = engine.evaluate(
+            pb, "005930", {"rsi": 25.0, "volume_ratio": 3.0}, {}
+        )
         assert result.match_details.get("rsi") == 25.0
         assert result.match_details.get("volume_ratio") == 3.0

@@ -381,9 +381,7 @@ class TestEvaluate:
             ),
             StockPlaybook(
                 stock_code="MSFT",
-                scenarios=[
-                    _scenario(rsi_above=75.0, action=ScenarioAction.SELL, confidence=80)
-                ],
+                scenarios=[_scenario(rsi_above=75.0, action=ScenarioAction.SELL, confidence=80)],
             ),
         ],
     )
@@ -452,42 +450,58 @@ class TestEvaluate:
 class TestPositionAwareConditions:
     """Tests for unrealized_pnl_pct and holding_days condition fields."""

-    def test_evaluate_condition_unrealized_pnl_above_matches(self, engine: ScenarioEngine) -> None:
+    def test_evaluate_condition_unrealized_pnl_above_matches(
+        self, engine: ScenarioEngine
+    ) -> None:
         """unrealized_pnl_pct_above should match when P&L exceeds threshold."""
         condition = StockCondition(unrealized_pnl_pct_above=3.0)
         assert engine.evaluate_condition(condition, {"unrealized_pnl_pct": 5.0}) is True

-    def test_evaluate_condition_unrealized_pnl_above_no_match(self, engine: ScenarioEngine) -> None:
+    def test_evaluate_condition_unrealized_pnl_above_no_match(
+        self, engine: ScenarioEngine
+    ) -> None:
         """unrealized_pnl_pct_above should NOT match when P&L is below threshold."""
         condition = StockCondition(unrealized_pnl_pct_above=3.0)
         assert engine.evaluate_condition(condition, {"unrealized_pnl_pct": 2.0}) is False

-    def test_evaluate_condition_unrealized_pnl_below_matches(self, engine: ScenarioEngine) -> None:
+    def test_evaluate_condition_unrealized_pnl_below_matches(
+        self, engine: ScenarioEngine
+    ) -> None:
         """unrealized_pnl_pct_below should match when P&L is under threshold."""
         condition = StockCondition(unrealized_pnl_pct_below=-2.0)
         assert engine.evaluate_condition(condition, {"unrealized_pnl_pct": -3.5}) is True

-    def test_evaluate_condition_unrealized_pnl_below_no_match(self, engine: ScenarioEngine) -> None:
+    def test_evaluate_condition_unrealized_pnl_below_no_match(
+        self, engine: ScenarioEngine
+    ) -> None:
         """unrealized_pnl_pct_below should NOT match when P&L is above threshold."""
         condition = StockCondition(unrealized_pnl_pct_below=-2.0)
         assert engine.evaluate_condition(condition, {"unrealized_pnl_pct": -1.0}) is False

-    def test_evaluate_condition_holding_days_above_matches(self, engine: ScenarioEngine) -> None:
+    def test_evaluate_condition_holding_days_above_matches(
+        self, engine: ScenarioEngine
+    ) -> None:
         """holding_days_above should match when position held longer than threshold."""
         condition = StockCondition(holding_days_above=5)
         assert engine.evaluate_condition(condition, {"holding_days": 7}) is True

-    def test_evaluate_condition_holding_days_above_no_match(self, engine: ScenarioEngine) -> None:
+    def test_evaluate_condition_holding_days_above_no_match(
+        self, engine: ScenarioEngine
+    ) -> None:
         """holding_days_above should NOT match when position held shorter."""
         condition = StockCondition(holding_days_above=5)
         assert engine.evaluate_condition(condition, {"holding_days": 3}) is False

-    def test_evaluate_condition_holding_days_below_matches(self, engine: ScenarioEngine) -> None:
+    def test_evaluate_condition_holding_days_below_matches(
+        self, engine: ScenarioEngine
+    ) -> None:
         """holding_days_below should match when position held fewer days."""
         condition = StockCondition(holding_days_below=3)
         assert engine.evaluate_condition(condition, {"holding_days": 1}) is True

-    def test_evaluate_condition_holding_days_below_no_match(self, engine: ScenarioEngine) -> None:
+    def test_evaluate_condition_holding_days_below_no_match(
+        self, engine: ScenarioEngine
+    ) -> None:
         """holding_days_below should NOT match when held more days."""
         condition = StockCondition(holding_days_below=3)
         assert engine.evaluate_condition(condition, {"holding_days": 5}) is False
@@ -499,33 +513,33 @@ class TestPositionAwareConditions:
             holding_days_above=5,
         )
         # Both met → match
-        assert (
-            engine.evaluate_condition(
-                condition,
-                {"unrealized_pnl_pct": 4.5, "holding_days": 7},
-            )
-            is True
-        )
+        assert engine.evaluate_condition(
+            condition,
+            {"unrealized_pnl_pct": 4.5, "holding_days": 7},
+        ) is True
         # Only pnl met → no match
-        assert (
-            engine.evaluate_condition(
-                condition,
-                {"unrealized_pnl_pct": 4.5, "holding_days": 3},
-            )
-            is False
-        )
+        assert engine.evaluate_condition(
+            condition,
+            {"unrealized_pnl_pct": 4.5, "holding_days": 3},
+        ) is False

-    def test_missing_unrealized_pnl_does_not_match(self, engine: ScenarioEngine) -> None:
+    def test_missing_unrealized_pnl_does_not_match(
+        self, engine: ScenarioEngine
+    ) -> None:
         """Missing unrealized_pnl_pct key should not match the condition."""
         condition = StockCondition(unrealized_pnl_pct_above=3.0)
         assert engine.evaluate_condition(condition, {}) is False

-    def test_missing_holding_days_does_not_match(self, engine: ScenarioEngine) -> None:
+    def test_missing_holding_days_does_not_match(
+        self, engine: ScenarioEngine
+    ) -> None:
         """Missing holding_days key should not match the condition."""
         condition = StockCondition(holding_days_above=5)
         assert engine.evaluate_condition(condition, {}) is False

-    def test_match_details_includes_position_fields(self, engine: ScenarioEngine) -> None:
+    def test_match_details_includes_position_fields(
+        self, engine: ScenarioEngine
+    ) -> None:
         """match_details should include position fields when condition specifies them."""
         pb = _playbook(
             scenarios=[
@@ -1,128 +0,0 @@
-from __future__ import annotations
-
-import importlib.util
-from pathlib import Path
-
-
-def _load_module():
-    script_path = Path(__file__).resolve().parents[1] / "scripts" / "session_handover_check.py"
-    spec = importlib.util.spec_from_file_location("session_handover_check", script_path)
-    assert spec is not None
-    assert spec.loader is not None
-    module = importlib.util.module_from_spec(spec)
-    spec.loader.exec_module(module)
-    return module
-
-
-def test_ci_mode_skips_date_branch_and_merge_gate(monkeypatch, tmp_path) -> None:
-    module = _load_module()
-    handover = tmp_path / "session-handover.md"
-    handover.write_text(
-        "\n".join(
-            [
-                "### 2000-01-01 | session=test",
-                "- branch: feature/other-branch",
-                "- docs_checked: docs/workflow.md, docs/commands.md, docs/agent-constraints.md",
-                "- open_issues_reviewed: #1",
-                "- next_ticket: #123",
-                "- process_gate_checked: process_ticket=#1 merged_to_feature_branch=no",
-            ]
-        ),
-        encoding="utf-8",
-    )
-    monkeypatch.setattr(module, "HANDOVER_LOG", handover)
-
-    errors: list[str] = []
-    module._check_handover_entry(
-        branch="feature/current-branch",
-        strict=True,
-        ci_mode=True,
-        errors=errors,
-    )
-    assert errors == []
-
-
-def test_ci_mode_still_blocks_tbd_next_ticket(monkeypatch, tmp_path) -> None:
-    module = _load_module()
-    handover = tmp_path / "session-handover.md"
-    handover.write_text(
-        "\n".join(
-            [
-                "### 2000-01-01 | session=test",
-                "- branch: feature/other-branch",
-                "- docs_checked: docs/workflow.md, docs/commands.md, docs/agent-constraints.md",
-                "- open_issues_reviewed: #1",
-                "- next_ticket: #TBD",
-                "- process_gate_checked: process_ticket=#1 merged_to_feature_branch=no",
-            ]
-        ),
-        encoding="utf-8",
-    )
-    monkeypatch.setattr(module, "HANDOVER_LOG", handover)
-
-    errors: list[str] = []
-    module._check_handover_entry(
-        branch="feature/current-branch",
-        strict=True,
-        ci_mode=True,
-        errors=errors,
-    )
-    assert "latest handover entry must not use placeholder next_ticket (#TBD)" in errors
-
-
-def test_non_ci_strict_enforces_date_branch_and_merge_gate(monkeypatch, tmp_path) -> None:
-    module = _load_module()
-    handover = tmp_path / "session-handover.md"
-    handover.write_text(
-        "\n".join(
-            [
-                "### 2000-01-01 | session=test",
-                "- branch: feature/other-branch",
-                "- docs_checked: docs/workflow.md, docs/commands.md, docs/agent-constraints.md",
-                "- open_issues_reviewed: #1",
-                "- next_ticket: #123",
-                "- process_gate_checked: process_ticket=#1 merged_to_feature_branch=no",
-            ]
-        ),
-        encoding="utf-8",
-    )
-    monkeypatch.setattr(module, "HANDOVER_LOG", handover)
-
-    errors: list[str] = []
-    module._check_handover_entry(
-        branch="feature/current-branch",
-        strict=True,
-        ci_mode=False,
-        errors=errors,
-    )
-    assert any("must contain today's UTC date" in e for e in errors)
-    assert any("must target current branch" in e for e in errors)
-    assert any("merged_to_feature_branch=no" in e for e in errors)
-
-
-def test_non_ci_strict_still_blocks_tbd_next_ticket(monkeypatch, tmp_path) -> None:
-    module = _load_module()
-    handover = tmp_path / "session-handover.md"
-    handover.write_text(
-        "\n".join(
-            [
-                "### 2000-01-01 | session=test",
-                "- branch: feature/other-branch",
-                "- docs_checked: docs/workflow.md, docs/commands.md, docs/agent-constraints.md",
-                "- open_issues_reviewed: #1",
-                "- next_ticket: #TBD",
-                "- process_gate_checked: process_ticket=#1 merged_to_feature_branch=yes",
-            ]
-        ),
-        encoding="utf-8",
-    )
-    monkeypatch.setattr(module, "HANDOVER_LOG", handover)
-
-    errors: list[str] = []
-    module._check_handover_entry(
-        branch="feature/current-branch",
-        strict=True,
-        ci_mode=False,
-        errors=errors,
-    )
-    assert "latest handover entry must not use placeholder next_ticket (#TBD)" in errors
@@ -2,9 +2,8 @@

 from __future__ import annotations

-from unittest.mock import AsyncMock, MagicMock
-
 import pytest
+from unittest.mock import AsyncMock, MagicMock

 from src.analysis.smart_scanner import ScanCandidate, SmartVolatilityScanner
 from src.analysis.volatility import VolatilityAnalyzer
@@ -201,7 +200,9 @@ class TestSmartVolatilityScanner:
         assert len(candidates) <= scanner.top_n

     @pytest.mark.asyncio
-    async def test_get_stock_codes(self, scanner: SmartVolatilityScanner) -> None:
+    async def test_get_stock_codes(
+        self, scanner: SmartVolatilityScanner
+    ) -> None:
         """Test extraction of stock codes from candidates."""
         candidates = [
             ScanCandidate(
@@ -19,6 +19,7 @@ from src.strategy.models import (
     StockScenario,
 )

+
 # ---------------------------------------------------------------------------
 # StockCondition
 # ---------------------------------------------------------------------------
@@ -5,11 +5,7 @@ from unittest.mock import AsyncMock, patch
 import aiohttp
 import pytest

-from src.notifications.telegram_client import (
-    NotificationFilter,
-    NotificationPriority,
-    TelegramClient,
-)
+from src.notifications.telegram_client import NotificationFilter, NotificationPriority, TelegramClient


 class TestTelegramClientInit:
@@ -17,7 +13,9 @@ class TestTelegramClientInit:

     def test_disabled_via_flag(self) -> None:
         """Client disabled via enabled=False flag."""
-        client = TelegramClient(bot_token="123:abc", chat_id="456", enabled=False)
+        client = TelegramClient(
+            bot_token="123:abc", chat_id="456", enabled=False
+        )
         assert client._enabled is False

     def test_disabled_missing_token(self) -> None:
@@ -32,7 +30,9 @@ class TestTelegramClientInit:

     def test_enabled_with_credentials(self) -> None:
         """Client enabled when credentials provided."""
-        client = TelegramClient(bot_token="123:abc", chat_id="456", enabled=True)
+        client = TelegramClient(
+            bot_token="123:abc", chat_id="456", enabled=True
+        )
         assert client._enabled is True


@@ -42,7 +42,9 @@ class TestNotificationSending:
     @pytest.mark.asyncio
     async def test_send_message_success(self) -> None:
         """send_message returns True on successful send."""
-        client = TelegramClient(bot_token="123:abc", chat_id="456", enabled=True)
+        client = TelegramClient(
+            bot_token="123:abc", chat_id="456", enabled=True
+        )

         mock_resp = AsyncMock()
         mock_resp.status = 200
@@ -74,7 +76,9 @@ class TestNotificationSending:
     @pytest.mark.asyncio
     async def test_send_message_api_error(self) -> None:
         """send_message returns False on API error."""
-        client = TelegramClient(bot_token="123:abc", chat_id="456", enabled=True)
+        client = TelegramClient(
+            bot_token="123:abc", chat_id="456", enabled=True
+        )

         mock_resp = AsyncMock()
         mock_resp.status = 400
@@ -89,7 +93,9 @@ class TestNotificationSending:
     @pytest.mark.asyncio
     async def test_send_message_with_markdown(self) -> None:
         """send_message supports different parse modes."""
-        client = TelegramClient(bot_token="123:abc", chat_id="456", enabled=True)
+        client = TelegramClient(
+            bot_token="123:abc", chat_id="456", enabled=True
+        )

         mock_resp = AsyncMock()
         mock_resp.status = 200
@@ -122,7 +128,9 @@ class TestNotificationSending:
     @pytest.mark.asyncio
     async def test_trade_execution_format(self) -> None:
         """Trade notification has correct format."""
-        client = TelegramClient(bot_token="123:abc", chat_id="456", enabled=True)
+        client = TelegramClient(
+            bot_token="123:abc", chat_id="456", enabled=True
+        )

         mock_resp = AsyncMock()
         mock_resp.status = 200
@@ -155,7 +163,9 @@ class TestNotificationSending:
     @pytest.mark.asyncio
     async def test_playbook_generated_format(self) -> None:
         """Playbook generated notification has expected fields."""
-        client = TelegramClient(bot_token="123:abc", chat_id="456", enabled=True)
+        client = TelegramClient(
+            bot_token="123:abc", chat_id="456", enabled=True
+        )

         mock_resp = AsyncMock()
         mock_resp.status = 200
@@ -180,7 +190,9 @@ class TestNotificationSending:
     @pytest.mark.asyncio
     async def test_scenario_matched_format(self) -> None:
         """Scenario matched notification has expected fields."""
-        client = TelegramClient(bot_token="123:abc", chat_id="456", enabled=True)
+        client = TelegramClient(
+            bot_token="123:abc", chat_id="456", enabled=True
+        )

         mock_resp = AsyncMock()
         mock_resp.status = 200
@@ -205,7 +217,9 @@ class TestNotificationSending:
     @pytest.mark.asyncio
     async def test_playbook_failed_format(self) -> None:
         """Playbook failed notification has expected fields."""
-        client = TelegramClient(bot_token="123:abc", chat_id="456", enabled=True)
+        client = TelegramClient(
+            bot_token="123:abc", chat_id="456", enabled=True
+        )

         mock_resp = AsyncMock()
         mock_resp.status = 200
@@ -226,7 +240,9 @@ class TestNotificationSending:
     @pytest.mark.asyncio
     async def test_circuit_breaker_priority(self) -> None:
         """Circuit breaker uses CRITICAL priority."""
-        client = TelegramClient(bot_token="123:abc", chat_id="456", enabled=True)
+        client = TelegramClient(
+            bot_token="123:abc", chat_id="456", enabled=True
+        )

         mock_resp = AsyncMock()
         mock_resp.status = 200
@@ -244,7 +260,9 @@ class TestNotificationSending:
     @pytest.mark.asyncio
     async def test_api_error_handling(self) -> None:
         """API errors logged but don't crash."""
-        client = TelegramClient(bot_token="123:abc", chat_id="456", enabled=True)
+        client = TelegramClient(
+            bot_token="123:abc", chat_id="456", enabled=True
+        )

         mock_resp = AsyncMock()
         mock_resp.status = 400
@@ -259,19 +277,25 @@ class TestNotificationSending:
     @pytest.mark.asyncio
     async def test_timeout_handling(self) -> None:
         """Timeouts logged but don't crash."""
-        client = TelegramClient(bot_token="123:abc", chat_id="456", enabled=True)
+        client = TelegramClient(
+            bot_token="123:abc", chat_id="456", enabled=True
+        )

         with patch(
             "aiohttp.ClientSession.post",
             side_effect=aiohttp.ClientError("Connection timeout"),
         ):
             # Should not raise exception
-            await client.notify_error(error_type="Test Error", error_msg="Test", context="test")
+            await client.notify_error(
+                error_type="Test Error", error_msg="Test", context="test"
+            )

     @pytest.mark.asyncio
     async def test_session_management(self) -> None:
         """Session created and reused correctly."""
-        client = TelegramClient(bot_token="123:abc", chat_id="456", enabled=True)
+        client = TelegramClient(
+            bot_token="123:abc", chat_id="456", enabled=True
+        )

         # Session should be None initially
         assert client._session is None
@@ -300,7 +324,9 @@ class TestRateLimiting:
         """Rate limiter delays rapid requests."""
         import time

-        client = TelegramClient(bot_token="123:abc", chat_id="456", enabled=True, rate_limit=2.0)
+        client = TelegramClient(
+            bot_token="123:abc", chat_id="456", enabled=True, rate_limit=2.0
+        )

         mock_resp = AsyncMock()
         mock_resp.status = 200
@@ -327,7 +353,9 @@ class TestMessagePriorities:
     @pytest.mark.asyncio
     async def test_low_priority_uses_info_emoji(self) -> None:
         """LOW priority uses ℹ️ emoji."""
-        client = TelegramClient(bot_token="123:abc", chat_id="456", enabled=True)
+        client = TelegramClient(
+            bot_token="123:abc", chat_id="456", enabled=True
+        )

         mock_resp = AsyncMock()
         mock_resp.status = 200
@@ -343,7 +371,9 @@ class TestMessagePriorities:
     @pytest.mark.asyncio
     async def test_critical_priority_uses_alarm_emoji(self) -> None:
         """CRITICAL priority uses 🚨 emoji."""
-        client = TelegramClient(bot_token="123:abc", chat_id="456", enabled=True)
+        client = TelegramClient(
+            bot_token="123:abc", chat_id="456", enabled=True
+        )

         mock_resp = AsyncMock()
         mock_resp.status = 200
@@ -359,7 +389,9 @@ class TestMessagePriorities:
     @pytest.mark.asyncio
     async def test_playbook_generated_priority(self) -> None:
         """Playbook generated uses MEDIUM priority emoji."""
-        client = TelegramClient(bot_token="123:abc", chat_id="456", enabled=True)
+        client = TelegramClient(
+            bot_token="123:abc", chat_id="456", enabled=True
+        )

         mock_resp = AsyncMock()
         mock_resp.status = 200
@@ -380,7 +412,9 @@ class TestMessagePriorities:
     @pytest.mark.asyncio
     async def test_playbook_failed_priority(self) -> None:
         """Playbook failed uses HIGH priority emoji."""
-        client = TelegramClient(bot_token="123:abc", chat_id="456", enabled=True)
+        client = TelegramClient(
+            bot_token="123:abc", chat_id="456", enabled=True
+        )

         mock_resp = AsyncMock()
         mock_resp.status = 200
@@ -399,7 +433,9 @@ class TestMessagePriorities:
     @pytest.mark.asyncio
     async def test_scenario_matched_priority(self) -> None:
         """Scenario matched uses HIGH priority emoji."""
-        client = TelegramClient(bot_token="123:abc", chat_id="456", enabled=True)
+        client = TelegramClient(
+            bot_token="123:abc", chat_id="456", enabled=True
+        )

         mock_resp = AsyncMock()
         mock_resp.status = 200
@@ -424,7 +460,9 @@ class TestClientCleanup:
     @pytest.mark.asyncio
     async def test_close_closes_session(self) -> None:
         """close() closes the HTTP session."""
-        client = TelegramClient(bot_token="123:abc", chat_id="456", enabled=True)
+        client = TelegramClient(
+            bot_token="123:abc", chat_id="456", enabled=True
+        )

         mock_session = AsyncMock()
         mock_session.closed = False
@@ -437,7 +475,9 @@ class TestClientCleanup:
     @pytest.mark.asyncio
     async def test_close_handles_no_session(self) -> None:
         """close() handles None session gracefully."""
-        client = TelegramClient(bot_token="123:abc", chat_id="456", enabled=True)
+        client = TelegramClient(
+            bot_token="123:abc", chat_id="456", enabled=True
+        )

         # Should not raise exception
         await client.close()
@@ -495,12 +535,8 @@ class TestNotificationFilter:
         )
         with patch("aiohttp.ClientSession.post") as mock_post:
             await client.notify_trade_execution(
-                stock_code="005930",
-                market="KR",
-                action="BUY",
-                quantity=10,
-                price=70000.0,
-                confidence=85.0,
+                stock_code="005930", market="KR", action="BUY",
+                quantity=10, price=70000.0, confidence=85.0
             )
             mock_post.assert_not_called()

@@ -520,13 +556,8 @@ class TestNotificationFilter:
     async def test_circuit_breaker_always_sends_regardless_of_filter(self) -> None:
         """notify_circuit_breaker always sends (no filter flag)."""
         nf = NotificationFilter(
-            trades=False,
-            market_open_close=False,
-            fat_finger=False,
-            system_events=False,
-            playbook=False,
-            scenario_match=False,
-            errors=False,
+            trades=False, market_open_close=False, fat_finger=False,
+            system_events=False, playbook=False, scenario_match=False, errors=False,
         )
         client = TelegramClient(
             bot_token="123:abc", chat_id="456", enabled=True, notification_filter=nf
@@ -586,7 +617,7 @@ class TestNotificationFilter:
         nf = NotificationFilter()
         assert nf.set_flag("unknown_key", False) is False

-    def test_as_dict_keys_match_keys(self) -> None:
+    def test_as_dict_keys_match_KEYS(self) -> None:
         """as_dict() returns every key defined in KEYS."""
         nf = NotificationFilter()
         d = nf.as_dict()
@@ -609,17 +640,10 @@ class TestNotificationFilter:
     def test_set_notification_all_on(self) -> None:
         """set_notification('all', True) enables every filter flag."""
         client = TelegramClient(
-            bot_token="123:abc",
-            chat_id="456",
-            enabled=True,
+            bot_token="123:abc", chat_id="456", enabled=True,
             notification_filter=NotificationFilter(
-                trades=False,
-                market_open_close=False,
-                scenario_match=False,
-                fat_finger=False,
-                system_events=False,
-                playbook=False,
-                errors=False,
+                trades=False, market_open_close=False, scenario_match=False,
+                fat_finger=False, system_events=False, playbook=False, errors=False,
             ),
         )
         assert client.set_notification("all", True) is True
|
assert client.set_notification("all", True) is True
|
||||||
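The hunks above assert that `NotificationFilter` rejects unknown keys via `set_flag` and that `as_dict()` mirrors a `KEYS` constant. A minimal stand-in sketch of that flag-registry pattern (the class name and the two flags here are illustrative, not the repo's full definition):

```python
from dataclasses import dataclass


@dataclass
class FilterSketch:
    """Minimal stand-in for the NotificationFilter behavior the tests assert."""

    trades: bool = True
    errors: bool = True

    # Plain class attribute (no annotation), so dataclass does not treat it as a field.
    KEYS = ("trades", "errors")

    def as_dict(self) -> dict[str, bool]:
        # Every key in KEYS appears in the dict, which is what the test checks.
        return {key: getattr(self, key) for key in self.KEYS}

    def set_flag(self, key: str, value: bool) -> bool:
        # Unknown keys are rejected instead of silently creating attributes.
        if key not in self.KEYS:
            return False
        setattr(self, key, value)
        return True
```

Keeping `KEYS` as the single source of truth means `as_dict`, `set_flag`, and any "set all" helper stay in sync automatically.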
@@ -357,7 +357,8 @@ class TestTradingControlCommands:

             pause_event.set()
             await client.send_message(
-                "<b>▶️ Trading Resumed</b>\n\nTrading operations have been restarted."
+                "<b>▶️ Trading Resumed</b>\n\n"
+                "Trading operations have been restarted."
             )

         handler.register_command("resume", mock_resume)
@@ -525,7 +526,9 @@ class TestStatusCommands:

         async def mock_status_error() -> None:
             """Mock /status handler with error."""
-            await client.send_message("<b>⚠️ Error</b>\n\nFailed to retrieve trading status.")
+            await client.send_message(
+                "<b>⚠️ Error</b>\n\nFailed to retrieve trading status."
+            )

         handler.register_command("status", mock_status_error)

@@ -600,7 +603,10 @@ class TestStatusCommands:

         async def mock_positions_empty() -> None:
             """Mock /positions handler with no positions."""
-            message = "<b>💼 Account Summary</b>\n\nNo balance information available."
+            message = (
+                "<b>💼 Account Summary</b>\n\n"
+                "No balance information available."
+            )
             await client.send_message(message)

         handler.register_command("positions", mock_positions_empty)
@@ -633,7 +639,9 @@ class TestStatusCommands:

         async def mock_positions_error() -> None:
             """Mock /positions handler with error."""
-            await client.send_message("<b>⚠️ Error</b>\n\nFailed to retrieve positions.")
+            await client.send_message(
+                "<b>⚠️ Error</b>\n\nFailed to retrieve positions."
+            )

         handler.register_command("positions", mock_positions_error)

@@ -1,123 +0,0 @@
-from __future__ import annotations
-
-import importlib.util
-from pathlib import Path
-
-
-def _load_module():
-    script_path = Path(__file__).resolve().parents[1] / "scripts" / "validate_docs_sync.py"
-    spec = importlib.util.spec_from_file_location("validate_docs_sync", script_path)
-    assert spec is not None
-    assert spec.loader is not None
-    module = importlib.util.module_from_spec(spec)
-    spec.loader.exec_module(module)
-    return module
-
-
-def test_collect_command_endpoints_parses_markdown_table_rows() -> None:
-    module = _load_module()
-    text = "\n".join(
-        [
-            "| Endpoint | Description |",
-            "|----------|-------------|",
-            "| `GET /api/status` | status |",
-            "| `POST /api/run` | run |",
-            "| not-a-row | ignored |",
-        ]
-    )
-    endpoints = module.collect_command_endpoints(text)
-    assert endpoints == ["GET /api/status", "POST /api/run"]
-
-
-def test_validate_links_resolve_detects_absolute_and_broken_links(tmp_path) -> None:
-    module = _load_module()
-    doc = tmp_path / "doc.md"
-    existing = tmp_path / "ok.md"
-    existing.write_text("# ok\n", encoding="utf-8")
-    doc.write_text(
-        "\n".join(
-            [
-                "[ok](./ok.md)",
-                "[abs](/tmp/nowhere.md)",
-                "[broken](./missing.md)",
-            ]
-        ),
-        encoding="utf-8",
-    )
-    errors: list[str] = []
-    module.validate_links_resolve(doc, doc.read_text(encoding="utf-8"), errors)
-
-    assert any("absolute link is forbidden" in err for err in errors)
-    assert any("broken link" in err for err in errors)
-
-
-def test_validate_summary_docs_reference_core_docs(monkeypatch) -> None:
-    module = _load_module()
-    errors: list[str] = []
-    fake_docs = {
-        str(module.REQUIRED_FILES["README.md"]): (
-            "docs/workflow.md docs/commands.md docs/testing.md"
-        ),
-        str(module.REQUIRED_FILES["CLAUDE.md"]): "docs/workflow.md docs/commands.md",
-    }
-
-    def fake_read(path: Path) -> str:
-        return fake_docs[str(path)]
-
-    monkeypatch.setattr(module, "_read", fake_read)
-    module.validate_summary_docs_reference_core_docs(errors)
-    assert errors == []
-
-
-def test_validate_summary_docs_reference_core_docs_reports_missing_links(
-    monkeypatch,
-) -> None:
-    module = _load_module()
-    errors: list[str] = []
-    fake_docs = {
-        str(module.REQUIRED_FILES["README.md"]): "docs/workflow.md",
-        str(module.REQUIRED_FILES["CLAUDE.md"]): "docs/workflow.md",
-    }
-
-    def fake_read(path: Path) -> str:
-        return fake_docs[str(path)]
-
-    monkeypatch.setattr(module, "_read", fake_read)
-    module.validate_summary_docs_reference_core_docs(errors)
-
-    assert any("README.md" in err and "docs/commands.md" in err for err in errors)
-    assert any("README.md" in err and "docs/testing.md" in err for err in errors)
-    assert any("CLAUDE.md" in err and "docs/commands.md" in err for err in errors)
-
-
-def test_validate_commands_endpoint_duplicates_reports_duplicates(monkeypatch) -> None:
-    module = _load_module()
-    errors: list[str] = []
-    text = "\n".join(
-        [
-            "| `GET /api/status` | status |",
-            "| `GET /api/status` | duplicate |",
-        ]
-    )
-
-    def fake_read(path: Path) -> str:
-        assert path == module.REQUIRED_FILES["commands"]
-        return text
-
-    monkeypatch.setattr(module, "_read", fake_read)
-    module.validate_commands_endpoint_duplicates(errors)
-    assert errors
-    assert "duplicated API endpoint row -> GET /api/status" in errors[0]
-
-
-def test_validate_testing_doc_has_dynamic_count_guidance(monkeypatch) -> None:
-    module = _load_module()
-    errors: list[str] = []
-
-    def fake_read(path: Path) -> str:
-        assert path == module.REQUIRED_FILES["testing"]
-        return "Use pytest --collect-only -q for dynamic counts."
-
-    monkeypatch.setattr(module, "_read", fake_read)
-    module.validate_testing_doc_has_dynamic_count_guidance(errors)
-    assert errors == []
@@ -70,9 +70,7 @@ def test_load_changed_files_with_range_uses_git_diff(monkeypatch) -> None:
        assert check is True
        assert capture_output is True
        assert text is True
-        return SimpleNamespace(
-            stdout="docs/ouroboros/85_loss_recovery_action_plan.md\nsrc/main.py\n"
-        )
+        return SimpleNamespace(stdout="docs/ouroboros/85_loss_recovery_action_plan.md\nsrc/main.py\n")

    monkeypatch.setattr(module.subprocess, "run", fake_run)
    changed = module.load_changed_files(["abc...def"], errors)
@@ -116,74 +114,3 @@ def test_validate_pr_traceability_warns_when_req_missing(monkeypatch) -> None:
    module.validate_pr_traceability(warnings)
    assert warnings
    assert "PR text missing REQ-ID reference" in warnings
-
-
-def test_validate_read_only_approval_requires_evidence(monkeypatch) -> None:
-    module = _load_module()
-    changed_files = ["src/core/risk_manager.py"]
-    errors: list[str] = []
-    warnings: list[str] = []
-    monkeypatch.setenv(
-        "GOVERNANCE_PR_BODY",
-        "\n".join(
-            [
-                "## READ-ONLY Approval (Required when touching READ-ONLY files)",
-                "- Touched READ-ONLY files: src/core/risk_manager.py",
-                "- Human approval: TBD",
-                "- Test suite 1: pytest -q",
-                "- Test suite 2: TBD",
-            ]
-        ),
-    )
-
-    module.validate_read_only_approval(changed_files, errors, warnings)
-    assert warnings == []
-    assert any("Human approval" in err for err in errors)
-    assert any("Test suite 2" in err for err in errors)
-
-
-def test_validate_read_only_approval_passes_with_complete_evidence(monkeypatch) -> None:
-    module = _load_module()
-    changed_files = ["src/core/risk_manager.py"]
-    errors: list[str] = []
-    warnings: list[str] = []
-    monkeypatch.setenv(
-        "GOVERNANCE_PR_BODY",
-        "\n".join(
-            [
-                "## READ-ONLY Approval (Required when touching READ-ONLY files)",
-                "- Touched READ-ONLY files: src/core/risk_manager.py",
-                "- Human approval: https://example.com/review/123",
-                "- Test suite 1: pytest -q tests/test_risk.py",
-                "- Test suite 2: pytest -q tests/test_main.py -k risk",
-            ]
-        ),
-    )
-
-    module.validate_read_only_approval(changed_files, errors, warnings)
-    assert errors == []
-    assert warnings == []
-
-
-def test_validate_read_only_approval_warns_without_pr_body(monkeypatch) -> None:
-    module = _load_module()
-    changed_files = ["src/core/risk_manager.py"]
-    errors: list[str] = []
-    warnings: list[str] = []
-    monkeypatch.delenv("GOVERNANCE_PR_BODY", raising=False)
-
-    module.validate_read_only_approval(changed_files, errors, warnings)
-    assert errors == []
-    assert warnings
-    assert "approval evidence check skipped" in warnings[0]
-
-
-def test_validate_read_only_approval_skips_when_no_readonly_file_changed() -> None:
-    module = _load_module()
-    changed_files = ["src/main.py"]
-    errors: list[str] = []
-    warnings: list[str] = []
-
-    module.validate_read_only_approval(changed_files, errors, warnings)
-    assert errors == []
-    assert warnings == []
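The READ-ONLY approval tests deleted above validated evidence fields parsed out of a PR body passed through the `GOVERNANCE_PR_BODY` environment variable. A rough sketch of that gate pattern, assuming hypothetical field names and a `src/core/` protected prefix (the real validator's logic is not reproduced here):

```python
import os

REQUIRED_FIELDS = ("Human approval", "Test suite 1", "Test suite 2")


def readonly_gate(changed: list[str], errors: list[str], warnings: list[str]) -> None:
    """Sketch: require approval evidence when protected files change."""
    protected = [f for f in changed if f.startswith("src/core/")]
    if not protected:
        return  # nothing protected was touched; gate does not apply
    body = os.environ.get("GOVERNANCE_PR_BODY")
    if body is None:
        # Soft-fail when no PR body is available (e.g. local runs).
        warnings.append("approval evidence check skipped: no PR body available")
        return
    for field in REQUIRED_FIELDS:
        for line in body.splitlines():
            if line.startswith(f"- {field}:"):
                value = line.split(":", 1)[1].strip()
                if not value or value == "TBD":
                    errors.append(f"{field} evidence missing")
                break
```

The errors/warnings split mirrors the deleted tests: a missing PR body only warns, while placeholder evidence in a present body hard-fails.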
@@ -1,81 +0,0 @@
-from __future__ import annotations
-
-import importlib.util
-from pathlib import Path
-
-
-def _load_module():
-    script_path = Path(__file__).resolve().parents[1] / "scripts" / "validate_ouroboros_docs.py"
-    spec = importlib.util.spec_from_file_location("validate_ouroboros_docs", script_path)
-    assert spec is not None
-    assert spec.loader is not None
-    module = importlib.util.module_from_spec(spec)
-    spec.loader.exec_module(module)
-    return module
-
-
-def test_validate_plan_source_link_accepts_canonical_source_path() -> None:
-    module = _load_module()
-    errors: list[str] = []
-    path = Path("docs/ouroboros/README.md").resolve()
-
-    assert module.validate_plan_source_link(path, "./source/ouroboros_plan_v2.txt", errors) is False
-    assert module.validate_plan_source_link(path, "./source/ouroboros_plan_v3.txt", errors) is False
-
-    assert errors == []
-
-
-def test_validate_plan_source_link_rejects_root_relative_path() -> None:
-    module = _load_module()
-    errors: list[str] = []
-    path = Path("docs/ouroboros/README.md").resolve()
-
-    handled = module.validate_plan_source_link(
-        path,
-        "/home/agentson/repos/The-Ouroboros/ouroboros_plan_v2.txt",
-        errors,
-    )
-
-    assert handled is True
-    assert errors
-    assert "invalid plan link path" in errors[0]
-    assert "use ./source/ouroboros_plan_v2.txt" in errors[0]
-
-
-def test_validate_plan_source_link_rejects_repo_root_relative_path() -> None:
-    module = _load_module()
-    errors: list[str] = []
-    path = Path("docs/ouroboros/README.md").resolve()
-
-    handled = module.validate_plan_source_link(path, "../../ouroboros_plan_v2.txt", errors)
-
-    assert handled is True
-    assert errors
-    assert "invalid plan link path" in errors[0]
-    assert "must resolve to docs/ouroboros/source/ouroboros_plan_v2.txt" in errors[0]
-
-
-def test_validate_plan_source_link_accepts_fragment_suffix() -> None:
-    module = _load_module()
-    errors: list[str] = []
-    path = Path("docs/ouroboros/README.md").resolve()
-
-    handled = module.validate_plan_source_link(path, "./source/ouroboros_plan_v2.txt#sec", errors)
-
-    assert handled is False
-    assert errors == []
-
-
-def test_validate_links_avoids_duplicate_error_for_invalid_plan_link(tmp_path) -> None:
-    module = _load_module()
-    errors: list[str] = []
-    doc = tmp_path / "doc.md"
-    doc.write_text(
-        "[v2](/home/agentson/repos/The-Ouroboros/ouroboros_plan_v2.txt)\n",
-        encoding="utf-8",
-    )
-
-    module.validate_links(doc, doc.read_text(encoding="utf-8"), errors)
-
-    assert len(errors) == 1
-    assert "invalid plan link path" in errors[0]
@@ -80,7 +80,9 @@ class TestVolatilityAnalyzer:
         # ATR should be roughly the average true range
         assert 3.0 <= atr <= 6.0

-    def test_calculate_atr_insufficient_data(self, volatility_analyzer: VolatilityAnalyzer) -> None:
+    def test_calculate_atr_insufficient_data(
+        self, volatility_analyzer: VolatilityAnalyzer
+    ) -> None:
         """Test ATR with insufficient data returns 0."""
         high_prices = [110.0, 112.0]
         low_prices = [105.0, 107.0]
@@ -118,13 +120,17 @@ class TestVolatilityAnalyzer:
         surge = volatility_analyzer.calculate_volume_surge(1000.0, 0.0)
         assert surge == 1.0

-    def test_calculate_pv_divergence_bullish(self, volatility_analyzer: VolatilityAnalyzer) -> None:
+    def test_calculate_pv_divergence_bullish(
+        self, volatility_analyzer: VolatilityAnalyzer
+    ) -> None:
         """Test bullish price-volume divergence."""
         # Price up + Volume up = bullish
         divergence = volatility_analyzer.calculate_pv_divergence(5.0, 2.0)
         assert divergence > 0.0

-    def test_calculate_pv_divergence_bearish(self, volatility_analyzer: VolatilityAnalyzer) -> None:
+    def test_calculate_pv_divergence_bearish(
+        self, volatility_analyzer: VolatilityAnalyzer
+    ) -> None:
         """Test bearish price-volume divergence."""
         # Price up + Volume down = bearish divergence
         divergence = volatility_analyzer.calculate_pv_divergence(5.0, 0.5)
@@ -138,7 +144,9 @@ class TestVolatilityAnalyzer:
         divergence = volatility_analyzer.calculate_pv_divergence(-5.0, 2.0)
         assert divergence < 0.0

-    def test_calculate_momentum_score(self, volatility_analyzer: VolatilityAnalyzer) -> None:
+    def test_calculate_momentum_score(
+        self, volatility_analyzer: VolatilityAnalyzer
+    ) -> None:
         """Test momentum score calculation."""
         score = volatility_analyzer.calculate_momentum_score(
             price_change_1m=5.0,
@@ -492,7 +500,9 @@ class TestMarketScanner:
         # Should keep all current stocks since they're all in top movers
         assert set(updated) == set(current_watchlist)

-    def test_get_updated_watchlist_max_replacements(self, scanner: MarketScanner) -> None:
+    def test_get_updated_watchlist_max_replacements(
+        self, scanner: MarketScanner
+    ) -> None:
         """Test that max_replacements limit is respected."""
         current_watchlist = ["000660", "035420", "005490"]

@@ -546,6 +556,8 @@ class TestMarketScanner:
         active_count = 0
         peak_count = 0

+        original_scan = scanner.scan_stock
+
         async def tracking_scan(code: str, market: Any) -> VolatilityMetrics:
             nonlocal active_count, peak_count
             active_count += 1
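The final (truncated) hunk wraps `scanner.scan_stock` with a tracking coroutine that counts in-flight calls to measure peak concurrency. A standalone sketch of that test pattern, with illustrative names and a semaphore standing in for whatever limit the scanner enforces:

```python
import asyncio


async def fake_scan(code: str) -> str:
    """Stand-in for an I/O-bound per-stock scan."""
    await asyncio.sleep(0.01)
    return code


async def main() -> int:
    active = 0
    peak = 0
    sem = asyncio.Semaphore(3)  # cap concurrent scans at 3

    async def tracking_scan(code: str) -> str:
        # Wrap the real scan, recording how many calls are in flight at once.
        nonlocal active, peak
        async with sem:
            active += 1
            peak = max(peak, active)
            try:
                return await fake_scan(code)
            finally:
                active -= 1

    await asyncio.gather(*(tracking_scan(f"{i:06d}") for i in range(10)))
    return peak


peak = asyncio.run(main())
```

A test then asserts on `peak` to prove the concurrency limit held, without reaching into the scanner's internals.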