Compare commits: feature/is...feature/is

3 Commits: 8c05448843, 87556b145e, 645c761238
docs/disaster_recovery.md (new file, 348 lines)
@@ -0,0 +1,348 @@
# Disaster Recovery Guide

Complete guide for backing up and restoring The Ouroboros trading system.

## Table of Contents

- [Backup Strategy](#backup-strategy)
- [Creating Backups](#creating-backups)
- [Restoring from Backup](#restoring-from-backup)
- [Health Monitoring](#health-monitoring)
- [Export Formats](#export-formats)
- [RTO/RPO](#rtorpo)
- [Testing Recovery](#testing-recovery)

## Backup Strategy

The system implements a 3-tier backup retention policy:

| Policy | Frequency | Retention | Purpose |
|--------|-----------|-----------|---------|
| **Daily** | Every day | 30 days | Quick recovery from recent issues |
| **Weekly** | Sunday | 1 year | Medium-term historical analysis |
| **Monthly** | 1st of month | Forever | Long-term archival |

### Storage Structure

```
data/backups/
├── daily/    # Last 30 days
├── weekly/   # Last 52 weeks
└── monthly/  # Forever (cold storage)
```

## Creating Backups

### Automated Backups (Recommended)

Set up a cron job to run daily:

```bash
# Edit crontab
crontab -e

# Run backup at 2 AM every day
0 2 * * * cd /path/to/The-Ouroboros && ./scripts/backup.sh >> logs/backup.log 2>&1
```

### Manual Backups

```bash
# Run backup script
./scripts/backup.sh

# Or use Python directly
python3 -c "
from pathlib import Path
from src.backup.scheduler import BackupScheduler, BackupPolicy

scheduler = BackupScheduler('data/trade_logs.db', Path('data/backups'))
metadata = scheduler.create_backup(BackupPolicy.DAILY, verify=True)
print(f'Backup created: {metadata.file_path}')
"
```

### Export to Other Formats

```bash
python3 -c "
from pathlib import Path
from src.backup.exporter import BackupExporter, ExportFormat

exporter = BackupExporter('data/trade_logs.db')
results = exporter.export_all(
    Path('exports'),
    formats=[ExportFormat.JSON, ExportFormat.CSV],
    compress=True
)
"
```

## Restoring from Backup

### Interactive Restoration

```bash
./scripts/restore.sh
```

The script will:

1. List available backups
2. Ask you to select one
3. Create a safety backup of the current database
4. Restore the selected backup
5. Verify database integrity

### Manual Restoration

```python
from pathlib import Path
from src.backup.scheduler import BackupScheduler

scheduler = BackupScheduler('data/trade_logs.db', Path('data/backups'))

# List backups
backups = scheduler.list_backups()
for backup in backups:
    print(f"{backup.timestamp}: {backup.file_path}")

# Restore specific backup
scheduler.restore_backup(backups[0], verify=True)
```

## Health Monitoring

### Check System Health

```python
from pathlib import Path
from src.backup.health_monitor import HealthMonitor

monitor = HealthMonitor('data/trade_logs.db', Path('data/backups'))

# Run all checks
report = monitor.get_health_report()
print(f"Overall status: {report['overall_status']}")

# Individual checks
checks = monitor.run_all_checks()
for name, result in checks.items():
    print(f"{name}: {result.status.value} - {result.message}")
```

### Health Checks

The system monitors:

- **Database Health**: Accessibility, integrity, size
- **Disk Space**: Available storage (alerts if < 10 GB)
- **Backup Recency**: Ensures backups are < 25 hours old

### Health Status Levels

- **HEALTHY**: All systems operational
- **DEGRADED**: Warning condition (e.g., low disk space)
- **UNHEALTHY**: Critical issue (e.g., database corrupted, no backups)

## Export Formats

### JSON (Human-Readable)

```json
{
  "export_timestamp": "2024-01-15T10:30:00Z",
  "record_count": 150,
  "trades": [
    {
      "timestamp": "2024-01-15T09:00:00Z",
      "stock_code": "005930",
      "action": "BUY",
      "quantity": 10,
      "price": 70000.0,
      "confidence": 85,
      "rationale": "Strong momentum",
      "pnl": 0.0
    }
  ]
}
```

### CSV (Analysis Tools)

Compatible with Excel, pandas, R:

```csv
timestamp,stock_code,action,quantity,price,confidence,rationale,pnl
2024-01-15T09:00:00Z,005930,BUY,10,70000.0,85,Strong momentum,0.0
```
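
For example, the gzipped CSV written by `export_all` can be loaded straight into pandas. The filename below is illustrative; it follows the `trades_<timestamp>.csv.gz` pattern used by the exporter:

```python
import pandas as pd

# pandas infers gzip compression from the .gz extension
df = pd.read_csv('exports/trades_20240115_103000.csv.gz', parse_dates=['timestamp'])
print(df.groupby('action')['pnl'].sum())
```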

### Parquet (Big Data)

Columnar format for Spark, DuckDB:

```python
import pandas as pd
df = pd.read_parquet('exports/trades_20240115.parquet')
```

## RTO/RPO

### Recovery Time Objective (RTO)

**Target: < 5 minutes**

Time to restore trading operations:

1. Identify backup to restore (1 min)
2. Run restore script (2 min)
3. Verify database integrity (1 min)
4. Restart trading system (1 min)

### Recovery Point Objective (RPO)

**Target: < 24 hours**

Maximum acceptable data loss:

- Daily backups ensure ≤ 24-hour data loss
- For critical periods, run backups more frequently (see the cron example below)
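
As a sketch of what "more frequently" could look like, the same backup script can be scheduled intraday with an additional cron entry. The 6-hour interval here is only an example; pick whatever matches your risk tolerance:

```bash
# Extra intraday backups during critical periods (every 6 hours)
0 */6 * * * cd /path/to/The-Ouroboros && ./scripts/backup.sh >> logs/backup.log 2>&1
```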

## Testing Recovery

### Quarterly Recovery Test

Perform a full disaster recovery test every quarter:

1. **Create test backup**

   ```bash
   ./scripts/backup.sh
   ```

2. **Simulate disaster** (use a test database)

   ```bash
   cp data/trade_logs.db data/trade_logs_test.db
   rm data/trade_logs_test.db  # Simulate data loss
   ```

3. **Restore from backup**

   ```bash
   DB_PATH=data/trade_logs_test.db ./scripts/restore.sh
   ```

4. **Verify data integrity**

   ```python
   import sqlite3
   conn = sqlite3.connect('data/trade_logs_test.db')
   cursor = conn.execute('SELECT COUNT(*) FROM trades')
   print(f"Restored {cursor.fetchone()[0]} trades")
   ```

5. **Document results** in `logs/recovery_test_YYYYMMDD.md`

### Backup Verification

Always verify backups after creation:

```python
from pathlib import Path
from src.backup.scheduler import BackupScheduler, BackupPolicy

scheduler = BackupScheduler('data/trade_logs.db', Path('data/backups'))

# Create and verify
metadata = scheduler.create_backup(BackupPolicy.DAILY, verify=True)
print(f"Checksum: {metadata.checksum}")  # Should not be None
```

## Emergency Procedures

### Database Corrupted

1. Stop the trading system immediately
2. Check the most recent backup age: `ls -lht data/backups/daily/`
3. Restore: `./scripts/restore.sh`
4. Verify: Run the health check
5. Resume trading

### Disk Full

1. Check disk space: `df -h`
2. Clean old backups: run cleanup manually

   ```python
   from pathlib import Path
   from src.backup.scheduler import BackupScheduler
   scheduler = BackupScheduler('data/trade_logs.db', Path('data/backups'))
   scheduler.cleanup_old_backups()
   ```

3. Consider archiving old monthly backups to external storage
4. Increase disk space if needed

### Lost All Backups

If local backups are lost:

1. Check if exports exist in the `exports/` directory
2. Reconstruct the database from CSV/JSON exports (see the sketch below)
3. If no exports: check the broker API for trade history
4. Manual reconstruction as a last resort
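
A minimal sketch of step 2, assuming the `trades` table can be recreated with the same columns the CSV export uses; the real schema may include additional columns or constraints, and the export filename below is only a placeholder:

```python
import csv
import gzip
import sqlite3

# Placeholder path; adjust to the export you actually have (drop gzip.open for an uncompressed file)
export_path = "exports/trades_20240115_103000.csv.gz"

conn = sqlite3.connect("data/trade_logs.db")
# Assumed schema mirroring the CSV export columns
conn.execute(
    """CREATE TABLE IF NOT EXISTS trades (
        timestamp TEXT, stock_code TEXT, action TEXT, quantity INTEGER,
        price REAL, confidence INTEGER, rationale TEXT, pnl REAL
    )"""
)

with gzip.open(export_path, "rt", encoding="utf-8", newline="") as f:
    rows = list(csv.DictReader(f))

conn.executemany(
    "INSERT INTO trades (timestamp, stock_code, action, quantity, price, confidence, rationale, pnl) "
    "VALUES (:timestamp, :stock_code, :action, :quantity, :price, :confidence, :rationale, :pnl)",
    rows,
)
conn.commit()
conn.close()
print(f"Reloaded {len(rows)} trades")
```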

## Best Practices

1. **Test Restores Regularly**: Don't wait for a disaster
2. **Monitor Disk Space**: Set up alerts at 80% usage
3. **Keep Multiple Generations**: Never delete all backups at once
4. **Verify Checksums**: Always verify backup integrity
5. **Document Changes**: Update this guide when the backup strategy changes
6. **Off-Site Storage**: Consider external backup for monthly archives (see the sketch below)
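
A minimal sketch of practice 6 using the `CloudStorage` helper added in this change. The endpoint, bucket name, and credentials are placeholders; load real credentials from your environment or secret manager rather than hard-coding them:

```python
from pathlib import Path

from src.backup.cloud_storage import CloudStorage, S3Config

config = S3Config(
    endpoint_url=None,                # None targets AWS S3; set a URL for MinIO/R2/B2
    access_key="YOUR_ACCESS_KEY",     # placeholder
    secret_key="YOUR_SECRET_KEY",     # placeholder
    bucket_name="ouroboros-backups",  # placeholder bucket name
)

storage = CloudStorage(config)
storage.create_bucket_if_not_exists()

# Push the monthly archives off-site
for backup_file in sorted(Path("data/backups/monthly").glob("*.db")):
    storage.upload_file(backup_file, object_key=f"monthly/{backup_file.name}")
```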

## Troubleshooting

### Backup Script Fails

```bash
# Check database file permissions
ls -l data/trade_logs.db

# Check disk space
df -h data/

# Run backup manually with debug logging
python3 -c "
import logging
logging.basicConfig(level=logging.DEBUG)
from pathlib import Path
from src.backup.scheduler import BackupScheduler, BackupPolicy
scheduler = BackupScheduler('data/trade_logs.db', Path('data/backups'))
scheduler.create_backup(BackupPolicy.DAILY, verify=True)
"
```

### Restore Fails Verification

```bash
# Check backup file integrity
python3 -c "
import sqlite3
conn = sqlite3.connect('data/backups/daily/trade_logs_daily_20240115.db')
cursor = conn.execute('PRAGMA integrity_check')
print(cursor.fetchone()[0])
"
```

### Health Check Fails

```python
from pathlib import Path
from src.backup.health_monitor import HealthMonitor

monitor = HealthMonitor('data/trade_logs.db', Path('data/backups'))

# Check each component individually
print("Database:", monitor.check_database_health())
print("Disk Space:", monitor.check_disk_space())
print("Backup Recency:", monitor.check_backup_recency())
```

## Contact

For backup/recovery issues:

- Check logs: `logs/backup.log`
- Review health status: run the health monitor
- Raise an issue on GitHub if automated recovery fails
scripts/backup.sh (new file, 96 lines)
@@ -0,0 +1,96 @@
#!/usr/bin/env bash
# Automated backup script for The Ouroboros trading system
# Runs daily/weekly/monthly backups

set -euo pipefail

# Configuration
DB_PATH="${DB_PATH:-data/trade_logs.db}"
BACKUP_DIR="${BACKUP_DIR:-data/backups}"
PYTHON="${PYTHON:-python3}"

# Colors for output
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
RED='\033[0;31m'
NC='\033[0m' # No Color

log_info() {
    echo -e "${GREEN}[INFO]${NC} $1"
}

log_warn() {
    echo -e "${YELLOW}[WARN]${NC} $1"
}

log_error() {
    echo -e "${RED}[ERROR]${NC} $1"
}

# Check if database exists
if [ ! -f "$DB_PATH" ]; then
    log_error "Database not found: $DB_PATH"
    exit 1
fi

# Create backup directory
mkdir -p "$BACKUP_DIR"

log_info "Starting backup process..."
log_info "Database: $DB_PATH"
log_info "Backup directory: $BACKUP_DIR"

# Determine backup policy based on day of week and month
DAY_OF_WEEK=$(date +%u)   # 1-7 (Monday-Sunday)
DAY_OF_MONTH=$(date +%d)

if [ "$DAY_OF_MONTH" == "01" ]; then
    POLICY="monthly"
    log_info "Running MONTHLY backup (first day of month)"
elif [ "$DAY_OF_WEEK" == "7" ]; then
    POLICY="weekly"
    log_info "Running WEEKLY backup (Sunday)"
else
    POLICY="daily"
    log_info "Running DAILY backup"
fi

# Run Python backup script
$PYTHON -c "
from pathlib import Path
from src.backup.scheduler import BackupScheduler, BackupPolicy
from src.backup.health_monitor import HealthMonitor

# Create scheduler
scheduler = BackupScheduler(
    db_path='$DB_PATH',
    backup_dir=Path('$BACKUP_DIR')
)

# Create backup (look up the policy enum member by its lowercase value)
policy = BackupPolicy('$POLICY')
metadata = scheduler.create_backup(policy, verify=True)
print(f'Backup created: {metadata.file_path}')
print(f'Size: {metadata.size_bytes / 1024 / 1024:.2f} MB')
print(f'Checksum: {metadata.checksum}')

# Cleanup old backups
removed = scheduler.cleanup_old_backups()
total_removed = sum(removed.values())
if total_removed > 0:
    print(f'Removed {total_removed} old backup(s)')

# Health check
monitor = HealthMonitor('$DB_PATH', Path('$BACKUP_DIR'))
status = monitor.get_overall_status()
print(f'System health: {status.value}')
"

if [ $? -eq 0 ]; then
    log_info "Backup completed successfully"
else
    log_error "Backup failed"
    exit 1
fi

log_info "Backup process finished"
scripts/restore.sh (new file, 111 lines)
@@ -0,0 +1,111 @@
#!/usr/bin/env bash
# Restore script for The Ouroboros trading system
# Restores database from a backup file

set -euo pipefail

# Configuration
DB_PATH="${DB_PATH:-data/trade_logs.db}"
BACKUP_DIR="${BACKUP_DIR:-data/backups}"
PYTHON="${PYTHON:-python3}"

# Colors for output
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
RED='\033[0;31m'
NC='\033[0m' # No Color

log_info() {
    echo -e "${GREEN}[INFO]${NC} $1"
}

log_warn() {
    echo -e "${YELLOW}[WARN]${NC} $1"
}

log_error() {
    echo -e "${RED}[ERROR]${NC} $1"
}

# Check if backup directory exists
if [ ! -d "$BACKUP_DIR" ]; then
    log_error "Backup directory not found: $BACKUP_DIR"
    exit 1
fi

log_info "Available backups:"
log_info "=================="

# List available backups
$PYTHON -c "
from pathlib import Path
from src.backup.scheduler import BackupScheduler

scheduler = BackupScheduler(
    db_path='$DB_PATH',
    backup_dir=Path('$BACKUP_DIR')
)

backups = scheduler.list_backups()

if not backups:
    print('No backups found.')
    exit(1)

for i, backup in enumerate(backups, 1):
    size_mb = backup.size_bytes / 1024 / 1024
    print(f'{i}. [{backup.policy.value.upper()}] {backup.file_path.name}')
    print(f' Date: {backup.timestamp.strftime(\"%Y-%m-%d %H:%M:%S UTC\")}')
    print(f' Size: {size_mb:.2f} MB')
    print()
"

# Ask user to select backup
echo ""
read -p "Enter backup number to restore (or 'q' to quit): " BACKUP_NUM

if [ "$BACKUP_NUM" == "q" ]; then
    log_info "Restore cancelled"
    exit 0
fi

# Confirm restoration
log_warn "WARNING: This will replace the current database!"
log_warn "Current database will be backed up to: ${DB_PATH}.before_restore"
read -p "Are you sure you want to continue? (yes/no): " CONFIRM

if [ "$CONFIRM" != "yes" ]; then
    log_info "Restore cancelled"
    exit 0
fi

# Perform restoration
$PYTHON -c "
from pathlib import Path
from src.backup.scheduler import BackupScheduler

scheduler = BackupScheduler(
    db_path='$DB_PATH',
    backup_dir=Path('$BACKUP_DIR')
)

backups = scheduler.list_backups()
backup_index = int('$BACKUP_NUM') - 1

if backup_index < 0 or backup_index >= len(backups):
    print('Invalid backup number')
    exit(1)

selected = backups[backup_index]
print(f'Restoring: {selected.file_path.name}')

scheduler.restore_backup(selected, verify=True)
print('Restore completed successfully')
"

if [ $? -eq 0 ]; then
    log_info "Database restored successfully"
else
    log_error "Restore failed"
    exit 1
fi
src/backup/__init__.py (new file, 21 lines)
@@ -0,0 +1,21 @@
"""Backup and disaster recovery system for long-term sustainability.

This module provides:
- Automated database backups (daily, weekly, monthly)
- Multi-format exports (JSON, CSV, Parquet)
- Cloud storage integration (S3-compatible)
- Health monitoring and alerts
"""

from src.backup.exporter import BackupExporter, ExportFormat
from src.backup.scheduler import BackupScheduler, BackupPolicy
from src.backup.cloud_storage import CloudStorage, S3Config

__all__ = [
    "BackupExporter",
    "ExportFormat",
    "BackupScheduler",
    "BackupPolicy",
    "CloudStorage",
    "S3Config",
]
src/backup/cloud_storage.py (new file, 274 lines)
@@ -0,0 +1,274 @@
"""Cloud storage integration for off-site backups.

Supports S3-compatible storage providers:
- AWS S3
- MinIO
- Backblaze B2
- DigitalOcean Spaces
- Cloudflare R2
"""

from __future__ import annotations

import logging
from dataclasses import dataclass
from pathlib import Path
from typing import Any

logger = logging.getLogger(__name__)


@dataclass
class S3Config:
    """Configuration for S3-compatible storage."""

    endpoint_url: str | None  # None for AWS S3, custom URL for others
    access_key: str
    secret_key: str
    bucket_name: str
    region: str = "us-east-1"
    use_ssl: bool = True


class CloudStorage:
    """Upload backups to S3-compatible cloud storage."""

    def __init__(self, config: S3Config) -> None:
        """Initialize cloud storage client.

        Args:
            config: S3 configuration

        Raises:
            ImportError: If boto3 is not installed
        """
        try:
            import boto3
        except ImportError:
            raise ImportError(
                "boto3 is required for cloud storage. Install with: pip install boto3"
            )

        self.config = config
        self.client = boto3.client(
            "s3",
            endpoint_url=config.endpoint_url,
            aws_access_key_id=config.access_key,
            aws_secret_access_key=config.secret_key,
            region_name=config.region,
            use_ssl=config.use_ssl,
        )

    def upload_file(
        self,
        file_path: Path,
        object_key: str | None = None,
        metadata: dict[str, str] | None = None,
    ) -> str:
        """Upload a file to cloud storage.

        Args:
            file_path: Local file to upload
            object_key: S3 object key (default: filename)
            metadata: Optional metadata to attach

        Returns:
            S3 object key

        Raises:
            FileNotFoundError: If file doesn't exist
            Exception: If upload fails
        """
        if not file_path.exists():
            raise FileNotFoundError(f"File not found: {file_path}")

        if object_key is None:
            object_key = file_path.name

        extra_args: dict[str, Any] = {}

        # Add server-side encryption
        extra_args["ServerSideEncryption"] = "AES256"

        # Add metadata if provided
        if metadata:
            extra_args["Metadata"] = metadata

        logger.info("Uploading %s to s3://%s/%s", file_path.name, self.config.bucket_name, object_key)

        try:
            self.client.upload_file(
                str(file_path),
                self.config.bucket_name,
                object_key,
                ExtraArgs=extra_args,
            )
            logger.info("Upload successful: %s", object_key)
            return object_key
        except Exception as exc:
            logger.error("Upload failed: %s", exc)
            raise

    def download_file(self, object_key: str, local_path: Path) -> Path:
        """Download a file from cloud storage.

        Args:
            object_key: S3 object key
            local_path: Local destination path

        Returns:
            Path to downloaded file

        Raises:
            Exception: If download fails
        """
        local_path.parent.mkdir(parents=True, exist_ok=True)

        logger.info("Downloading s3://%s/%s to %s", self.config.bucket_name, object_key, local_path)

        try:
            self.client.download_file(
                self.config.bucket_name,
                object_key,
                str(local_path),
            )
            logger.info("Download successful: %s", local_path)
            return local_path
        except Exception as exc:
            logger.error("Download failed: %s", exc)
            raise

    def list_files(self, prefix: str = "") -> list[dict[str, Any]]:
        """List files in cloud storage.

        Args:
            prefix: Filter by object key prefix

        Returns:
            List of file metadata dictionaries
        """
        try:
            response = self.client.list_objects_v2(
                Bucket=self.config.bucket_name,
                Prefix=prefix,
            )

            if "Contents" not in response:
                return []

            files = []
            for obj in response["Contents"]:
                files.append(
                    {
                        "key": obj["Key"],
                        "size_bytes": obj["Size"],
                        "last_modified": obj["LastModified"],
                        "etag": obj["ETag"],
                    }
                )

            return files
        except Exception as exc:
            logger.error("Failed to list files: %s", exc)
            raise

    def delete_file(self, object_key: str) -> None:
        """Delete a file from cloud storage.

        Args:
            object_key: S3 object key

        Raises:
            Exception: If deletion fails
        """
        logger.info("Deleting s3://%s/%s", self.config.bucket_name, object_key)

        try:
            self.client.delete_object(
                Bucket=self.config.bucket_name,
                Key=object_key,
            )
            logger.info("Deletion successful: %s", object_key)
        except Exception as exc:
            logger.error("Deletion failed: %s", exc)
            raise

    def get_storage_stats(self) -> dict[str, Any]:
        """Get cloud storage statistics.

        Returns:
            Dictionary with storage stats
        """
        try:
            files = self.list_files()

            total_size = sum(f["size_bytes"] for f in files)
            total_count = len(files)

            return {
                "total_files": total_count,
                "total_size_bytes": total_size,
                "total_size_mb": total_size / 1024 / 1024,
                "total_size_gb": total_size / 1024 / 1024 / 1024,
            }
        except Exception as exc:
            logger.error("Failed to get storage stats: %s", exc)
            return {
                "error": str(exc),
                "total_files": 0,
                "total_size_bytes": 0,
            }

    def verify_connection(self) -> bool:
        """Verify connection to cloud storage.

        Returns:
            True if connection is successful
        """
        try:
            self.client.head_bucket(Bucket=self.config.bucket_name)
            logger.info("Cloud storage connection verified")
            return True
        except Exception as exc:
            logger.error("Cloud storage connection failed: %s", exc)
            return False

    def create_bucket_if_not_exists(self) -> None:
        """Create storage bucket if it doesn't exist.

        Raises:
            Exception: If bucket creation fails
        """
        try:
            self.client.head_bucket(Bucket=self.config.bucket_name)
            logger.info("Bucket already exists: %s", self.config.bucket_name)
        except self.client.exceptions.NoSuchBucket:
            logger.info("Creating bucket: %s", self.config.bucket_name)
            if self.config.region == "us-east-1":
                # us-east-1 requires special handling
                self.client.create_bucket(Bucket=self.config.bucket_name)
            else:
                self.client.create_bucket(
                    Bucket=self.config.bucket_name,
                    CreateBucketConfiguration={"LocationConstraint": self.config.region},
                )
            logger.info("Bucket created successfully")
        except Exception as exc:
            logger.error("Failed to verify/create bucket: %s", exc)
            raise

    def enable_versioning(self) -> None:
        """Enable versioning on the bucket.

        Raises:
            Exception: If versioning enablement fails
        """
        try:
            self.client.put_bucket_versioning(
                Bucket=self.config.bucket_name,
                VersioningConfiguration={"Status": "Enabled"},
            )
            logger.info("Versioning enabled for bucket: %s", self.config.bucket_name)
        except Exception as exc:
            logger.error("Failed to enable versioning: %s", exc)
            raise
src/backup/exporter.py (new file, 326 lines)
@@ -0,0 +1,326 @@
"""Multi-format database exporter for backups.

Supports JSON, CSV, and Parquet formats for different use cases:
- JSON: Human-readable, easy to inspect
- CSV: Analysis tools (Excel, pandas)
- Parquet: Big data tools (Spark, DuckDB)
"""

from __future__ import annotations

import csv
import gzip
import json
import logging
import sqlite3
from datetime import UTC, datetime
from enum import Enum
from pathlib import Path
from typing import Any

logger = logging.getLogger(__name__)


class ExportFormat(str, Enum):
    """Supported export formats."""

    JSON = "json"
    CSV = "csv"
    PARQUET = "parquet"


class BackupExporter:
    """Export database to multiple formats."""

    def __init__(self, db_path: str) -> None:
        """Initialize the exporter.

        Args:
            db_path: Path to SQLite database
        """
        self.db_path = db_path

    def export_all(
        self,
        output_dir: Path,
        formats: list[ExportFormat] | None = None,
        compress: bool = True,
        incremental_since: datetime | None = None,
    ) -> dict[ExportFormat, Path]:
        """Export database to multiple formats.

        Args:
            output_dir: Directory to write export files
            formats: List of formats to export (default: all)
            compress: Whether to gzip compress exports
            incremental_since: Only export records after this timestamp

        Returns:
            Dictionary mapping format to output file path
        """
        if formats is None:
            formats = [ExportFormat.JSON, ExportFormat.CSV, ExportFormat.PARQUET]

        output_dir.mkdir(parents=True, exist_ok=True)
        timestamp = datetime.now(UTC).strftime("%Y%m%d_%H%M%S")

        results: dict[ExportFormat, Path] = {}

        for fmt in formats:
            try:
                output_file = self._export_format(
                    fmt, output_dir, timestamp, compress, incremental_since
                )
                results[fmt] = output_file
                logger.info("Exported to %s: %s", fmt.value, output_file)
            except Exception as exc:
                logger.error("Failed to export to %s: %s", fmt.value, exc)

        return results

    def _export_format(
        self,
        fmt: ExportFormat,
        output_dir: Path,
        timestamp: str,
        compress: bool,
        incremental_since: datetime | None,
    ) -> Path:
        """Export to a specific format.

        Args:
            fmt: Export format
            output_dir: Output directory
            timestamp: Timestamp string for filename
            compress: Whether to compress
            incremental_since: Incremental export cutoff

        Returns:
            Path to output file
        """
        if fmt == ExportFormat.JSON:
            return self._export_json(output_dir, timestamp, compress, incremental_since)
        elif fmt == ExportFormat.CSV:
            return self._export_csv(output_dir, timestamp, compress, incremental_since)
        elif fmt == ExportFormat.PARQUET:
            return self._export_parquet(
                output_dir, timestamp, compress, incremental_since
            )
        else:
            raise ValueError(f"Unsupported format: {fmt}")

    def _get_trades(
        self, incremental_since: datetime | None = None
    ) -> list[dict[str, Any]]:
        """Fetch trades from database.

        Args:
            incremental_since: Only fetch trades after this timestamp

        Returns:
            List of trade records
        """
        conn = sqlite3.connect(self.db_path)
        conn.row_factory = sqlite3.Row

        if incremental_since:
            cursor = conn.execute(
                "SELECT * FROM trades WHERE timestamp > ?",
                (incremental_since.isoformat(),),
            )
        else:
            cursor = conn.execute("SELECT * FROM trades")

        trades = [dict(row) for row in cursor.fetchall()]
        conn.close()

        return trades

    def _export_json(
        self,
        output_dir: Path,
        timestamp: str,
        compress: bool,
        incremental_since: datetime | None,
    ) -> Path:
        """Export to JSON format.

        Args:
            output_dir: Output directory
            timestamp: Timestamp for filename
            compress: Whether to gzip
            incremental_since: Incremental cutoff

        Returns:
            Path to output file
        """
        trades = self._get_trades(incremental_since)

        filename = f"trades_{timestamp}.json"
        if compress:
            filename += ".gz"

        output_file = output_dir / filename

        data = {
            "export_timestamp": datetime.now(UTC).isoformat(),
            "incremental_since": (
                incremental_since.isoformat() if incremental_since else None
            ),
            "record_count": len(trades),
            "trades": trades,
        }

        if compress:
            with gzip.open(output_file, "wt", encoding="utf-8") as f:
                json.dump(data, f, indent=2, ensure_ascii=False)
        else:
            with open(output_file, "w", encoding="utf-8") as f:
                json.dump(data, f, indent=2, ensure_ascii=False)

        return output_file

    def _export_csv(
        self,
        output_dir: Path,
        timestamp: str,
        compress: bool,
        incremental_since: datetime | None,
    ) -> Path:
        """Export to CSV format.

        Args:
            output_dir: Output directory
            timestamp: Timestamp for filename
            compress: Whether to gzip
            incremental_since: Incremental cutoff

        Returns:
            Path to output file
        """
        trades = self._get_trades(incremental_since)

        filename = f"trades_{timestamp}.csv"
        if compress:
            filename += ".gz"

        output_file = output_dir / filename

        if not trades:
            # Write empty CSV with headers
            if compress:
                with gzip.open(output_file, "wt", encoding="utf-8", newline="") as f:
                    writer = csv.writer(f)
                    writer.writerow(
                        [
                            "timestamp",
                            "stock_code",
                            "action",
                            "quantity",
                            "price",
                            "confidence",
                            "rationale",
                            "pnl",
                        ]
                    )
            else:
                with open(output_file, "w", encoding="utf-8", newline="") as f:
                    writer = csv.writer(f)
                    writer.writerow(
                        [
                            "timestamp",
                            "stock_code",
                            "action",
                            "quantity",
                            "price",
                            "confidence",
                            "rationale",
                            "pnl",
                        ]
                    )
            return output_file

        # Get column names from first trade
        fieldnames = list(trades[0].keys())

        if compress:
            with gzip.open(output_file, "wt", encoding="utf-8", newline="") as f:
                writer = csv.DictWriter(f, fieldnames=fieldnames)
                writer.writeheader()
                writer.writerows(trades)
        else:
            with open(output_file, "w", encoding="utf-8", newline="") as f:
                writer = csv.DictWriter(f, fieldnames=fieldnames)
                writer.writeheader()
                writer.writerows(trades)

        return output_file

    def _export_parquet(
        self,
        output_dir: Path,
        timestamp: str,
        compress: bool,
        incremental_since: datetime | None,
    ) -> Path:
        """Export to Parquet format.

        Args:
            output_dir: Output directory
            timestamp: Timestamp for filename
            compress: Whether to compress (Parquet has built-in compression)
            incremental_since: Incremental cutoff

        Returns:
            Path to output file
        """
        trades = self._get_trades(incremental_since)

        filename = f"trades_{timestamp}.parquet"
        output_file = output_dir / filename

        try:
            import pyarrow as pa
            import pyarrow.parquet as pq
        except ImportError:
            raise ImportError(
                "pyarrow is required for Parquet export. "
                "Install with: pip install pyarrow"
            )

        # Convert to pyarrow table
        table = pa.Table.from_pylist(trades)

        # Write with compression
        compression = "gzip" if compress else "none"
        pq.write_table(table, output_file, compression=compression)

        return output_file

    def get_export_stats(self) -> dict[str, Any]:
        """Get statistics about exportable data.

        Returns:
            Dictionary with data statistics
        """
        conn = sqlite3.connect(self.db_path)
        cursor = conn.cursor()

        stats = {}

        # Total trades
        cursor.execute("SELECT COUNT(*) FROM trades")
        stats["total_trades"] = cursor.fetchone()[0]

        # Date range
        cursor.execute("SELECT MIN(timestamp), MAX(timestamp) FROM trades")
        min_date, max_date = cursor.fetchone()
        stats["date_range"] = {"earliest": min_date, "latest": max_date}

        # Database size
        cursor.execute("SELECT page_count * page_size FROM pragma_page_count(), pragma_page_size()")
        stats["db_size_bytes"] = cursor.fetchone()[0]

        conn.close()

        return stats
src/backup/health_monitor.py (new file, 282 lines)
@@ -0,0 +1,282 @@
"""Health monitoring for backup system.

Checks:
- Database accessibility and integrity
- Disk space availability
- Backup success/failure tracking
- Self-healing capabilities
"""

from __future__ import annotations

import logging
import shutil
import sqlite3
from dataclasses import dataclass
from datetime import UTC, datetime, timedelta
from enum import Enum
from pathlib import Path
from typing import Any

logger = logging.getLogger(__name__)


class HealthStatus(str, Enum):
    """Health check status."""

    HEALTHY = "healthy"
    DEGRADED = "degraded"
    UNHEALTHY = "unhealthy"


@dataclass
class HealthCheckResult:
    """Result of a health check."""

    status: HealthStatus
    message: str
    details: dict[str, Any] | None = None
    timestamp: datetime | None = None

    def __post_init__(self) -> None:
        if self.timestamp is None:
            self.timestamp = datetime.now(UTC)


class HealthMonitor:
    """Monitor system health and backup status."""

    def __init__(
        self,
        db_path: str,
        backup_dir: Path,
        min_disk_space_gb: float = 10.0,
        max_backup_age_hours: int = 25,  # Daily backups should be < 25 hours old
    ) -> None:
        """Initialize health monitor.

        Args:
            db_path: Path to SQLite database
            backup_dir: Backup directory
            min_disk_space_gb: Minimum required disk space in GB
            max_backup_age_hours: Maximum acceptable backup age in hours
        """
        self.db_path = Path(db_path)
        self.backup_dir = backup_dir
        self.min_disk_space_bytes = int(min_disk_space_gb * 1024 * 1024 * 1024)
        self.max_backup_age = timedelta(hours=max_backup_age_hours)

    def check_database_health(self) -> HealthCheckResult:
        """Check database accessibility and integrity.

        Returns:
            HealthCheckResult
        """
        # Check if database exists
        if not self.db_path.exists():
            return HealthCheckResult(
                status=HealthStatus.UNHEALTHY,
                message=f"Database not found: {self.db_path}",
            )

        # Check if database is accessible
        try:
            conn = sqlite3.connect(str(self.db_path))
            cursor = conn.cursor()

            # Run integrity check
            cursor.execute("PRAGMA integrity_check")
            result = cursor.fetchone()[0]

            if result != "ok":
                conn.close()
                return HealthCheckResult(
                    status=HealthStatus.UNHEALTHY,
                    message=f"Database integrity check failed: {result}",
                )

            # Get database size
            cursor.execute(
                "SELECT page_count * page_size FROM pragma_page_count(), pragma_page_size()"
            )
            db_size = cursor.fetchone()[0]

            # Get row counts
            cursor.execute("SELECT COUNT(*) FROM trades")
            trade_count = cursor.fetchone()[0]

            conn.close()

            return HealthCheckResult(
                status=HealthStatus.HEALTHY,
                message="Database is healthy",
                details={
                    "size_bytes": db_size,
                    "size_mb": db_size / 1024 / 1024,
                    "trade_count": trade_count,
                },
            )

        except sqlite3.Error as exc:
            return HealthCheckResult(
                status=HealthStatus.UNHEALTHY,
                message=f"Database access error: {exc}",
            )

    def check_disk_space(self) -> HealthCheckResult:
        """Check available disk space.

        Returns:
            HealthCheckResult
        """
        try:
            stat = shutil.disk_usage(self.backup_dir)

            free_gb = stat.free / 1024 / 1024 / 1024
            total_gb = stat.total / 1024 / 1024 / 1024
            used_percent = (stat.used / stat.total) * 100

            if stat.free < self.min_disk_space_bytes:
                return HealthCheckResult(
                    status=HealthStatus.UNHEALTHY,
                    message=f"Low disk space: {free_gb:.2f} GB free (minimum: {self.min_disk_space_bytes / 1024 / 1024 / 1024:.2f} GB)",
                    details={
                        "free_gb": free_gb,
                        "total_gb": total_gb,
                        "used_percent": used_percent,
                    },
                )
            elif stat.free < self.min_disk_space_bytes * 2:
                return HealthCheckResult(
                    status=HealthStatus.DEGRADED,
                    message=f"Disk space low: {free_gb:.2f} GB free",
                    details={
                        "free_gb": free_gb,
                        "total_gb": total_gb,
                        "used_percent": used_percent,
                    },
                )
            else:
                return HealthCheckResult(
                    status=HealthStatus.HEALTHY,
                    message=f"Disk space healthy: {free_gb:.2f} GB free",
                    details={
                        "free_gb": free_gb,
                        "total_gb": total_gb,
                        "used_percent": used_percent,
                    },
                )

        except Exception as exc:
            return HealthCheckResult(
                status=HealthStatus.UNHEALTHY,
                message=f"Failed to check disk space: {exc}",
            )

    def check_backup_recency(self) -> HealthCheckResult:
        """Check if backups are recent enough.

        Returns:
            HealthCheckResult
        """
        daily_dir = self.backup_dir / "daily"

        if not daily_dir.exists():
            return HealthCheckResult(
                status=HealthStatus.DEGRADED,
                message="Daily backup directory not found",
            )

        # Find most recent backup
        backups = sorted(daily_dir.glob("*.db"), key=lambda p: p.stat().st_mtime, reverse=True)

        if not backups:
            return HealthCheckResult(
                status=HealthStatus.UNHEALTHY,
                message="No daily backups found",
            )

        most_recent = backups[0]
        mtime = datetime.fromtimestamp(most_recent.stat().st_mtime, tz=UTC)
        age = datetime.now(UTC) - mtime

        if age > self.max_backup_age:
            return HealthCheckResult(
                status=HealthStatus.DEGRADED,
                message=f"Most recent backup is {age.total_seconds() / 3600:.1f} hours old",
                details={
                    "backup_file": most_recent.name,
                    "age_hours": age.total_seconds() / 3600,
                    "threshold_hours": self.max_backup_age.total_seconds() / 3600,
                },
            )
        else:
            return HealthCheckResult(
                status=HealthStatus.HEALTHY,
                message=f"Recent backup found ({age.total_seconds() / 3600:.1f} hours old)",
                details={
                    "backup_file": most_recent.name,
                    "age_hours": age.total_seconds() / 3600,
                },
            )

    def run_all_checks(self) -> dict[str, HealthCheckResult]:
        """Run all health checks.

        Returns:
            Dictionary mapping check name to result
        """
        checks = {
            "database": self.check_database_health(),
            "disk_space": self.check_disk_space(),
            "backup_recency": self.check_backup_recency(),
        }

        # Log results
        for check_name, result in checks.items():
            if result.status == HealthStatus.UNHEALTHY:
                logger.error("[%s] %s: %s", check_name, result.status.value, result.message)
            elif result.status == HealthStatus.DEGRADED:
                logger.warning("[%s] %s: %s", check_name, result.status.value, result.message)
            else:
                logger.info("[%s] %s: %s", check_name, result.status.value, result.message)

        return checks

    def get_overall_status(self) -> HealthStatus:
        """Get overall system health status.

        Returns:
            HealthStatus (worst status from all checks)
        """
        checks = self.run_all_checks()

        # Return worst status
        if any(c.status == HealthStatus.UNHEALTHY for c in checks.values()):
            return HealthStatus.UNHEALTHY
        elif any(c.status == HealthStatus.DEGRADED for c in checks.values()):
            return HealthStatus.DEGRADED
        else:
            return HealthStatus.HEALTHY

    def get_health_report(self) -> dict[str, Any]:
        """Get comprehensive health report.

        Returns:
            Dictionary with health report
        """
        checks = self.run_all_checks()
        overall = self.get_overall_status()

        return {
            "overall_status": overall.value,
            "timestamp": datetime.now(UTC).isoformat(),
            "checks": {
                name: {
                    "status": result.status.value,
                    "message": result.message,
                    "details": result.details,
                }
                for name, result in checks.items()
            },
        }
src/backup/scheduler.py (new file, 336 lines)
@@ -0,0 +1,336 @@
|
|||||||
|
"""Backup scheduler for automated database backups.
|
||||||
|
|
||||||
|
Implements backup policies:
|
||||||
|
- Daily: Keep for 30 days (hot storage)
|
||||||
|
- Weekly: Keep for 1 year (warm storage)
|
||||||
|
- Monthly: Keep forever (cold storage)
|
||||||
|
"""
|
||||||
|
|
||||||
|
from __future__ import annotations
|
||||||
|
|
||||||
|
import logging
|
||||||
|
import shutil
|
||||||
|
from dataclasses import dataclass
|
||||||
|
from datetime import UTC, datetime, timedelta
|
||||||
|
from enum import Enum
|
||||||
|
from pathlib import Path
|
||||||
|
from typing import Any
|
||||||
|
|
||||||
|
logger = logging.getLogger(__name__)
|
||||||
|
|
||||||
|
|
||||||
|
class BackupPolicy(str, Enum):
|
||||||
|
"""Backup retention policies."""
|
||||||
|
|
||||||
|
DAILY = "daily"
|
||||||
|
WEEKLY = "weekly"
|
||||||
|
MONTHLY = "monthly"
|
||||||
|
|
||||||
|
|
||||||
|
@dataclass
|
||||||
|
class BackupMetadata:
|
||||||
|
"""Metadata for a backup."""
|
||||||
|
|
||||||
|
timestamp: datetime
|
||||||
|
policy: BackupPolicy
|
||||||
|
file_path: Path
|
||||||
|
size_bytes: int
|
||||||
|
checksum: str | None = None
|
||||||
|
|
||||||
|
|
||||||
|
class BackupScheduler:
|
||||||
|
"""Manage automated database backups with retention policies."""
|
||||||
|
|
||||||
|
def __init__(
|
||||||
|
self,
|
||||||
|
db_path: str,
|
||||||
|
backup_dir: Path,
|
||||||
|
daily_retention_days: int = 30,
|
||||||
|
weekly_retention_days: int = 365,
|
||||||
|
) -> None:
|
||||||
|
"""Initialize the backup scheduler.
|
||||||
|
|
||||||
|
Args:
|
||||||
|
db_path: Path to SQLite database
|
||||||
|
backup_dir: Root directory for backups
|
||||||
|
daily_retention_days: Days to keep daily backups
|
||||||
|
weekly_retention_days: Days to keep weekly backups
|
||||||
|
"""
|
||||||
|
self.db_path = Path(db_path)
|
||||||
|
self.backup_dir = backup_dir
|
||||||
|
self.daily_retention = timedelta(days=daily_retention_days)
|
||||||
|
self.weekly_retention = timedelta(days=weekly_retention_days)
|
||||||
|
|
||||||
|
# Create policy-specific directories
|
||||||
|
self.daily_dir = backup_dir / "daily"
|
||||||
|
self.weekly_dir = backup_dir / "weekly"
|
||||||
|
self.monthly_dir = backup_dir / "monthly"
|
||||||
|
|
||||||
|
for d in [self.daily_dir, self.weekly_dir, self.monthly_dir]:
|
||||||
|
d.mkdir(parents=True, exist_ok=True)
|
||||||
|
|
||||||
|
def create_backup(
|
||||||
|
self, policy: BackupPolicy, verify: bool = True
|
||||||
|
) -> BackupMetadata:
|
||||||
|
"""Create a database backup.
|
||||||
|
|
||||||
|
Args:
|
||||||
|
policy: Backup policy (daily/weekly/monthly)
|
||||||
|
verify: Whether to verify backup integrity
|
||||||
|
|
||||||
|
Returns:
|
||||||
|
BackupMetadata object
|
||||||
|
|
||||||
|
Raises:
|
||||||
|
FileNotFoundError: If database doesn't exist
|
||||||
|
OSError: If backup fails
|
||||||
|
"""
|
||||||
|
if not self.db_path.exists():
|
||||||
|
raise FileNotFoundError(f"Database not found: {self.db_path}")
|
||||||
|
|
||||||
|
timestamp = datetime.now(UTC)
|
||||||
|
backup_filename = self._get_backup_filename(timestamp, policy)
|
||||||
|
|
||||||
|
# Determine output directory
|
||||||
|
if policy == BackupPolicy.DAILY:
|
||||||
|
output_dir = self.daily_dir
|
||||||
|
elif policy == BackupPolicy.WEEKLY:
|
||||||
|
output_dir = self.weekly_dir
|
||||||
|
else: # MONTHLY
|
||||||
|
output_dir = self.monthly_dir
|
||||||
|
|
||||||
|
backup_path = output_dir / backup_filename
|
||||||
|
|
||||||
|
# Create backup (copy database file)
|
||||||
|
logger.info("Creating %s backup: %s", policy.value, backup_path)
|
||||||
|
shutil.copy2(self.db_path, backup_path)
|
||||||
|
|
||||||
|
# Get file size
|
||||||
|
size_bytes = backup_path.stat().st_size
|
||||||
|
|
||||||
|
# Verify backup if requested
|
||||||
|
checksum = None
|
||||||
|
if verify:
|
||||||
|
checksum = self._verify_backup(backup_path)
|
||||||
|
|
||||||
|
metadata = BackupMetadata(
|
||||||
|
timestamp=timestamp,
|
||||||
|
policy=policy,
|
||||||
|
file_path=backup_path,
|
||||||
|
size_bytes=size_bytes,
|
||||||
|
checksum=checksum,
|
||||||
|
)
|
||||||
|
|
||||||
|
logger.info(
|
||||||
|
"Backup created: %s (%.2f MB)",
|
||||||
|
backup_path.name,
|
||||||
|
size_bytes / 1024 / 1024,
|
||||||
|
)
|
||||||
|
|
||||||
|
return metadata
|
||||||
|
|
||||||
|
def _get_backup_filename(self, timestamp: datetime, policy: BackupPolicy) -> str:
|
||||||
|
"""Generate backup filename.
|
||||||
|
|
||||||
|
Args:
|
||||||
|
timestamp: Backup timestamp
|
||||||
|
policy: Backup policy
|
||||||
|
|
||||||
|
Returns:
|
||||||
|
Filename string
|
||||||
|
"""
|
||||||
|
ts_str = timestamp.strftime("%Y%m%d_%H%M%S")
|
||||||
|
return f"trade_logs_{policy.value}_{ts_str}.db"
|
||||||
|
|
||||||
|
def _verify_backup(self, backup_path: Path) -> str:
|
||||||
|
"""Verify backup integrity using SQLite integrity check.
|
||||||
|
|
||||||
|
Args:
|
||||||
|
backup_path: Path to backup file
|
||||||
|
|
||||||
|
Returns:
|
||||||
|
Checksum string (MD5 hash)
|
||||||
|
|
||||||
|
Raises:
|
||||||
|
RuntimeError: If integrity check fails
|
||||||
|
"""
|
||||||
|
import hashlib
|
||||||
|
import sqlite3
|
||||||
|
|
||||||
|
# Integrity check
|
||||||
|
try:
|
||||||
|
conn = sqlite3.connect(str(backup_path))
|
||||||
|
cursor = conn.cursor()
|
||||||
|
cursor.execute("PRAGMA integrity_check")
|
||||||
|
result = cursor.fetchone()[0]
|
||||||
|
conn.close()
|
||||||
|
|
||||||
|
if result != "ok":
|
||||||
|
raise RuntimeError(f"Integrity check failed: {result}")
|
||||||
|
except sqlite3.Error as exc:
|
||||||
|
raise RuntimeError(f"Failed to verify backup: {exc}")
|
||||||
|
|
||||||
|
# Calculate MD5 checksum
|
||||||
|
md5 = hashlib.md5()
|
||||||
|
with open(backup_path, "rb") as f:
|
||||||
|
for chunk in iter(lambda: f.read(8192), b""):
|
||||||
|
md5.update(chunk)
|
||||||
|
|
||||||
|
return md5.hexdigest()
|
||||||
|
|
||||||
|
def cleanup_old_backups(self) -> dict[BackupPolicy, int]:
|
||||||
|
"""Remove backups older than retention policies.
|
||||||
|
|
||||||
|
Returns:
|
||||||
|
Dictionary mapping policy to number of backups removed
|
||||||
|
"""
|
||||||
|
now = datetime.now(UTC)
|
||||||
|
removed_counts: dict[BackupPolicy, int] = {}
|
||||||
|
|
||||||
|
# Daily backups: remove older than retention
|
||||||
|
removed_counts[BackupPolicy.DAILY] = self._cleanup_directory(
|
||||||
|
self.daily_dir, now - self.daily_retention
|
||||||
|
)
|
||||||
|
|
||||||
|
# Weekly backups: remove older than retention
|
||||||
|
removed_counts[BackupPolicy.WEEKLY] = self._cleanup_directory(
|
||||||
|
self.weekly_dir, now - self.weekly_retention
|
||||||
|
)
|
||||||
|
|
||||||
|
# Monthly backups: never remove (kept forever)
|
||||||
|
removed_counts[BackupPolicy.MONTHLY] = 0
|
||||||
|
|
||||||
|
total = sum(removed_counts.values())
|
||||||
|
if total > 0:
|
||||||
|
logger.info("Cleaned up %d old backup(s)", total)
|
||||||
|
|
||||||
|
return removed_counts
|
||||||
|
|
||||||
|
    def _cleanup_directory(self, directory: Path, cutoff: datetime) -> int:
        """Remove backups older than cutoff date.

        Args:
            directory: Directory to clean
            cutoff: Remove files older than this

        Returns:
            Number of files removed
        """
        removed = 0

        for backup_file in directory.glob("*.db"):
            # Get file modification time
            mtime = datetime.fromtimestamp(backup_file.stat().st_mtime, tz=UTC)

            if mtime < cutoff:
                logger.debug("Removing old backup: %s", backup_file.name)
                backup_file.unlink()
                removed += 1

        return removed

    def list_backups(
        self, policy: BackupPolicy | None = None
    ) -> list[BackupMetadata]:
        """List available backups.

        Args:
            policy: Filter by policy (None for all)

        Returns:
            List of BackupMetadata objects
        """
        backups: list[BackupMetadata] = []

        policies_to_check = (
            [policy] if policy else [BackupPolicy.DAILY, BackupPolicy.WEEKLY, BackupPolicy.MONTHLY]
        )

        for pol in policies_to_check:
            if pol == BackupPolicy.DAILY:
                directory = self.daily_dir
            elif pol == BackupPolicy.WEEKLY:
                directory = self.weekly_dir
            else:
                directory = self.monthly_dir

            for backup_file in sorted(directory.glob("*.db")):
                mtime = datetime.fromtimestamp(backup_file.stat().st_mtime, tz=UTC)
                size = backup_file.stat().st_size

                backups.append(
                    BackupMetadata(
                        timestamp=mtime,
                        policy=pol,
                        file_path=backup_file,
                        size_bytes=size,
                    )
                )

        # Sort by timestamp (newest first)
        backups.sort(key=lambda b: b.timestamp, reverse=True)

        return backups

    def get_backup_stats(self) -> dict[str, Any]:
        """Get backup statistics.

        Returns:
            Dictionary with backup stats
        """
        stats: dict[str, Any] = {}

        for policy in BackupPolicy:
            if policy == BackupPolicy.DAILY:
                directory = self.daily_dir
            elif policy == BackupPolicy.WEEKLY:
                directory = self.weekly_dir
            else:
                directory = self.monthly_dir

            backups = list(directory.glob("*.db"))
            total_size = sum(b.stat().st_size for b in backups)

            stats[policy.value] = {
                "count": len(backups),
                "total_size_bytes": total_size,
                "total_size_mb": total_size / 1024 / 1024,
            }

        return stats

    def restore_backup(self, backup_metadata: BackupMetadata, verify: bool = True) -> None:
        """Restore database from backup.

        Args:
            backup_metadata: Backup to restore
            verify: Whether to verify restored database

        Raises:
            FileNotFoundError: If backup file doesn't exist
            RuntimeError: If verification fails
        """
        if not backup_metadata.file_path.exists():
            raise FileNotFoundError(f"Backup not found: {backup_metadata.file_path}")

        # Create a safety copy of the current database (if one exists)
        backup_current: Path | None = None
        if self.db_path.exists():
            backup_current = self.db_path.with_suffix(".db.before_restore")
            logger.info("Backing up current database to: %s", backup_current)
            shutil.copy2(self.db_path, backup_current)

        # Restore backup
        logger.info("Restoring backup: %s", backup_metadata.file_path.name)
        shutil.copy2(backup_metadata.file_path, self.db_path)

        # Verify restored database
        if verify:
            try:
                self._verify_backup(self.db_path)
                logger.info("Backup restored and verified successfully")
            except RuntimeError as exc:
                # Verification failed: revert to the safety copy if one was made
                if backup_current is not None and backup_current.exists():
                    logger.error("Restore verification failed, reverting: %s", exc)
                    shutil.copy2(backup_current, self.db_path)
                raise
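
The retention, listing, and stats helpers above are meant to run unattended. As a hedged sketch (not part of this change) of how they could be combined in a periodic maintenance step — the import path mirrors the one used in tests/test_backup.py, while the database path, backup directory, and the idea of a standalone script are assumptions:

```python
# Illustrative maintenance sketch; paths and scheduling are assumptions.
from pathlib import Path

from src.backup.scheduler import BackupScheduler

scheduler = BackupScheduler("data/trade_logs.db", Path("data/backups"))

# Enforce the daily/weekly retention windows (monthly backups are never pruned).
for policy, count in scheduler.cleanup_old_backups().items():
    print(f"{policy.value}: removed {count} old backup(s)")

# Report what remains in each tier.
for tier, info in scheduler.get_backup_stats().items():
    print(f"{tier}: {info['count']} file(s), {info['total_size_mb']:.2f} MB")

# Newest backup overall, if any exist.
backups = scheduler.list_backups()
if backups:
    print(f"Most recent backup: {backups[0].file_path.name}")
```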
@@ -24,6 +24,10 @@ class Settings(BaseSettings):
    NEWS_API_PROVIDER: str = "alphavantage"  # "alphavantage" or "newsapi"
    MARKET_DATA_API_KEY: str | None = None

    # Legacy field names (for backward compatibility)
    ALPHA_VANTAGE_API_KEY: str | None = None
    NEWSAPI_KEY: str | None = None

    # Risk Management
    CIRCUIT_BREAKER_PCT: float = Field(default=-3.0, le=0.0)
    FAT_FINGER_PCT: float = Field(default=30.0, gt=0.0, le=100.0)
@@ -41,6 +45,15 @@ class Settings(BaseSettings):
    # Market selection (comma-separated market codes)
    ENABLED_MARKETS: str = "KR"

    # Backup and Disaster Recovery (optional)
    BACKUP_ENABLED: bool = True
    BACKUP_DIR: str = "data/backups"
    S3_ENDPOINT_URL: str | None = None  # For MinIO, Backblaze B2, etc.
    S3_ACCESS_KEY: str | None = None
    S3_SECRET_KEY: str | None = None
    S3_BUCKET_NAME: str | None = None
    S3_REGION: str = "us-east-1"

    model_config = {"env_file": ".env", "env_file_encoding": "utf-8"}

    @property
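
The S3_* fields above only declare configuration; no uploader appears in this diff. A minimal sketch of how they might be consumed for off-site copies, assuming boto3 is available — the function name, key layout, and the uploader itself are illustrative, not something this change provides:

```python
# Hypothetical off-site copy using the S3_* settings; boto3 is assumed to be installed.
from pathlib import Path

import boto3


def upload_backup(backup_path: Path, settings) -> None:
    """Copy a local backup file to S3-compatible storage (AWS, MinIO, Backblaze B2)."""
    client = boto3.client(
        "s3",
        endpoint_url=settings.S3_ENDPOINT_URL,  # None falls back to plain AWS S3
        aws_access_key_id=settings.S3_ACCESS_KEY,
        aws_secret_access_key=settings.S3_SECRET_KEY,
        region_name=settings.S3_REGION,
    )
    # Mirror the local daily/weekly/monthly layout in the object key.
    key = f"backups/{backup_path.parent.name}/{backup_path.name}"
    client.upload_file(str(backup_path), settings.S3_BUCKET_NAME, key)
```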
365
tests/test_backup.py
Normal file
@@ -0,0 +1,365 @@
"""Tests for backup and disaster recovery system."""

from __future__ import annotations

import sqlite3
import tempfile
from datetime import UTC, datetime, timedelta
from pathlib import Path

import pytest

from src.backup.exporter import BackupExporter, ExportFormat
from src.backup.health_monitor import HealthMonitor, HealthStatus
from src.backup.scheduler import BackupPolicy, BackupScheduler


@pytest.fixture
def temp_db(tmp_path: Path) -> Path:
    """Create a temporary test database."""
    db_path = tmp_path / "test_trades.db"

    conn = sqlite3.connect(str(db_path))
    cursor = conn.cursor()

    # Create trades table
    cursor.execute("""
        CREATE TABLE trades (
            id INTEGER PRIMARY KEY AUTOINCREMENT,
            timestamp TEXT NOT NULL,
            stock_code TEXT NOT NULL,
            action TEXT NOT NULL,
            quantity INTEGER NOT NULL,
            price REAL NOT NULL,
            confidence INTEGER NOT NULL,
            rationale TEXT,
            pnl REAL DEFAULT 0.0
        )
    """)

    # Insert test data
    test_trades = [
        ("2024-01-01T10:00:00Z", "005930", "BUY", 10, 70000.0, 85, "Test buy", 0.0),
        ("2024-01-01T11:00:00Z", "005930", "SELL", 10, 71000.0, 90, "Test sell", 10000.0),
        ("2024-01-02T10:00:00Z", "AAPL", "BUY", 5, 180.0, 88, "Tech buy", 0.0),
    ]

    cursor.executemany(
        """
        INSERT INTO trades (timestamp, stock_code, action, quantity, price, confidence, rationale, pnl)
        VALUES (?, ?, ?, ?, ?, ?, ?, ?)
        """,
        test_trades,
    )

    conn.commit()
    conn.close()

    return db_path


class TestBackupExporter:
    """Test BackupExporter functionality."""

    def test_exporter_init(self, temp_db: Path) -> None:
        """Test exporter initialization."""
        exporter = BackupExporter(str(temp_db))
        assert exporter.db_path == str(temp_db)

    def test_export_json(self, temp_db: Path, tmp_path: Path) -> None:
        """Test JSON export."""
        exporter = BackupExporter(str(temp_db))
        output_dir = tmp_path / "exports"

        results = exporter.export_all(
            output_dir, formats=[ExportFormat.JSON], compress=False
        )

        assert ExportFormat.JSON in results
        assert results[ExportFormat.JSON].exists()
        assert results[ExportFormat.JSON].suffix == ".json"

    def test_export_json_compressed(self, temp_db: Path, tmp_path: Path) -> None:
        """Test compressed JSON export."""
        exporter = BackupExporter(str(temp_db))
        output_dir = tmp_path / "exports"

        results = exporter.export_all(
            output_dir, formats=[ExportFormat.JSON], compress=True
        )

        assert ExportFormat.JSON in results
        assert results[ExportFormat.JSON].suffix == ".gz"

    def test_export_csv(self, temp_db: Path, tmp_path: Path) -> None:
        """Test CSV export."""
        exporter = BackupExporter(str(temp_db))
        output_dir = tmp_path / "exports"

        results = exporter.export_all(
            output_dir, formats=[ExportFormat.CSV], compress=False
        )

        assert ExportFormat.CSV in results
        assert results[ExportFormat.CSV].exists()

        # Verify CSV content
        with open(results[ExportFormat.CSV], "r") as f:
            lines = f.readlines()
            assert len(lines) == 4  # Header + 3 rows

    def test_export_all_formats(self, temp_db: Path, tmp_path: Path) -> None:
        """Test exporting all formats."""
        exporter = BackupExporter(str(temp_db))
        output_dir = tmp_path / "exports"

        # Skip Parquet if pyarrow not available
        try:
            import pyarrow  # noqa: F401

            formats = [ExportFormat.JSON, ExportFormat.CSV, ExportFormat.PARQUET]
        except ImportError:
            formats = [ExportFormat.JSON, ExportFormat.CSV]

        results = exporter.export_all(output_dir, formats=formats, compress=False)

        for fmt in formats:
            assert fmt in results
            assert results[fmt].exists()

    def test_incremental_export(self, temp_db: Path, tmp_path: Path) -> None:
        """Test incremental export."""
        exporter = BackupExporter(str(temp_db))
        output_dir = tmp_path / "exports"

        # Export only trades after Jan 2
        cutoff = datetime(2024, 1, 2, tzinfo=UTC)
        results = exporter.export_all(
            output_dir,
            formats=[ExportFormat.JSON],
            compress=False,
            incremental_since=cutoff,
        )

        # Should only have 1 trade (AAPL on Jan 2)
        import json

        with open(results[ExportFormat.JSON], "r") as f:
            data = json.load(f)
            assert data["record_count"] == 1
            assert data["trades"][0]["stock_code"] == "AAPL"

    def test_get_export_stats(self, temp_db: Path) -> None:
        """Test export statistics."""
        exporter = BackupExporter(str(temp_db))
        stats = exporter.get_export_stats()

        assert stats["total_trades"] == 3
        assert "date_range" in stats
        assert "db_size_bytes" in stats


class TestBackupScheduler:
    """Test BackupScheduler functionality."""

    def test_scheduler_init(self, temp_db: Path, tmp_path: Path) -> None:
        """Test scheduler initialization."""
        backup_dir = tmp_path / "backups"
        scheduler = BackupScheduler(str(temp_db), backup_dir)

        assert scheduler.db_path == temp_db
        assert (backup_dir / "daily").exists()
        assert (backup_dir / "weekly").exists()
        assert (backup_dir / "monthly").exists()

    def test_create_daily_backup(self, temp_db: Path, tmp_path: Path) -> None:
        """Test daily backup creation."""
        backup_dir = tmp_path / "backups"
        scheduler = BackupScheduler(str(temp_db), backup_dir)

        metadata = scheduler.create_backup(BackupPolicy.DAILY, verify=True)

        assert metadata.policy == BackupPolicy.DAILY
        assert metadata.file_path.exists()
        assert metadata.size_bytes > 0
        assert metadata.checksum is not None

    def test_create_weekly_backup(self, temp_db: Path, tmp_path: Path) -> None:
        """Test weekly backup creation."""
        backup_dir = tmp_path / "backups"
        scheduler = BackupScheduler(str(temp_db), backup_dir)

        metadata = scheduler.create_backup(BackupPolicy.WEEKLY, verify=False)

        assert metadata.policy == BackupPolicy.WEEKLY
        assert metadata.file_path.exists()
        assert metadata.checksum is None  # verify=False

    def test_list_backups(self, temp_db: Path, tmp_path: Path) -> None:
        """Test listing backups."""
        backup_dir = tmp_path / "backups"
        scheduler = BackupScheduler(str(temp_db), backup_dir)

        scheduler.create_backup(BackupPolicy.DAILY)
        scheduler.create_backup(BackupPolicy.WEEKLY)

        backups = scheduler.list_backups()
        assert len(backups) == 2

        daily_backups = scheduler.list_backups(BackupPolicy.DAILY)
        assert len(daily_backups) == 1
        assert daily_backups[0].policy == BackupPolicy.DAILY

    def test_cleanup_old_backups(self, temp_db: Path, tmp_path: Path) -> None:
        """Test cleanup of old backups."""
        backup_dir = tmp_path / "backups"
        scheduler = BackupScheduler(str(temp_db), backup_dir, daily_retention_days=0)

        # Create a backup
        scheduler.create_backup(BackupPolicy.DAILY)

        # Cleanup should remove it (0 day retention)
        removed = scheduler.cleanup_old_backups()
        assert removed[BackupPolicy.DAILY] >= 1

    def test_backup_stats(self, temp_db: Path, tmp_path: Path) -> None:
        """Test backup statistics."""
        backup_dir = tmp_path / "backups"
        scheduler = BackupScheduler(str(temp_db), backup_dir)

        scheduler.create_backup(BackupPolicy.DAILY)
        scheduler.create_backup(BackupPolicy.MONTHLY)

        stats = scheduler.get_backup_stats()

        assert stats["daily"]["count"] == 1
        assert stats["monthly"]["count"] == 1
        assert stats["daily"]["total_size_bytes"] > 0

    def test_restore_backup(self, temp_db: Path, tmp_path: Path) -> None:
        """Test backup restoration."""
        backup_dir = tmp_path / "backups"
        scheduler = BackupScheduler(str(temp_db), backup_dir)

        # Create backup
        metadata = scheduler.create_backup(BackupPolicy.DAILY)

        # Modify database
        conn = sqlite3.connect(str(temp_db))
        conn.execute("DELETE FROM trades")
        conn.commit()
        conn.close()

        # Restore
        scheduler.restore_backup(metadata, verify=True)

        # Verify restoration
        conn = sqlite3.connect(str(temp_db))
        cursor = conn.execute("SELECT COUNT(*) FROM trades")
        count = cursor.fetchone()[0]
        conn.close()

        assert count == 3  # Original 3 trades restored


class TestHealthMonitor:
    """Test HealthMonitor functionality."""

    def test_monitor_init(self, temp_db: Path, tmp_path: Path) -> None:
        """Test monitor initialization."""
        backup_dir = tmp_path / "backups"
        monitor = HealthMonitor(str(temp_db), backup_dir)

        assert monitor.db_path == temp_db

    def test_check_database_health_ok(self, temp_db: Path, tmp_path: Path) -> None:
        """Test database health check (healthy)."""
        monitor = HealthMonitor(str(temp_db), tmp_path / "backups")
        result = monitor.check_database_health()

        assert result.status == HealthStatus.HEALTHY
        assert "healthy" in result.message.lower()
        assert result.details is not None
        assert result.details["trade_count"] == 3

    def test_check_database_health_missing(self, tmp_path: Path) -> None:
        """Test database health check (missing file)."""
        non_existent = tmp_path / "missing.db"
        monitor = HealthMonitor(str(non_existent), tmp_path / "backups")
        result = monitor.check_database_health()

        assert result.status == HealthStatus.UNHEALTHY
        assert "not found" in result.message.lower()

    def test_check_disk_space(self, temp_db: Path, tmp_path: Path) -> None:
        """Test disk space check."""
        monitor = HealthMonitor(str(temp_db), tmp_path, min_disk_space_gb=0.001)
        result = monitor.check_disk_space()

        # Should be healthy with minimal requirement
        assert result.status in [HealthStatus.HEALTHY, HealthStatus.DEGRADED]
        assert result.details is not None
        assert "free_gb" in result.details

    def test_check_backup_recency_no_backups(self, temp_db: Path, tmp_path: Path) -> None:
        """Test backup recency check (no backups)."""
        backup_dir = tmp_path / "backups"
        backup_dir.mkdir()
        (backup_dir / "daily").mkdir()

        monitor = HealthMonitor(str(temp_db), backup_dir)
        result = monitor.check_backup_recency()

        assert result.status == HealthStatus.UNHEALTHY
        assert "no" in result.message.lower()

    def test_check_backup_recency_recent(self, temp_db: Path, tmp_path: Path) -> None:
        """Test backup recency check (recent backup)."""
        backup_dir = tmp_path / "backups"
        scheduler = BackupScheduler(str(temp_db), backup_dir)
        scheduler.create_backup(BackupPolicy.DAILY)

        monitor = HealthMonitor(str(temp_db), backup_dir)
        result = monitor.check_backup_recency()

        assert result.status == HealthStatus.HEALTHY
        assert "recent" in result.message.lower()

    def test_run_all_checks(self, temp_db: Path, tmp_path: Path) -> None:
        """Test running all health checks."""
        backup_dir = tmp_path / "backups"
        scheduler = BackupScheduler(str(temp_db), backup_dir)
        scheduler.create_backup(BackupPolicy.DAILY)

        monitor = HealthMonitor(str(temp_db), backup_dir, min_disk_space_gb=0.001)
        checks = monitor.run_all_checks()

        assert "database" in checks
        assert "disk_space" in checks
        assert "backup_recency" in checks
        assert checks["database"].status == HealthStatus.HEALTHY

    def test_get_overall_status(self, temp_db: Path, tmp_path: Path) -> None:
        """Test overall health status."""
        backup_dir = tmp_path / "backups"
        scheduler = BackupScheduler(str(temp_db), backup_dir)
        scheduler.create_backup(BackupPolicy.DAILY)

        monitor = HealthMonitor(str(temp_db), backup_dir, min_disk_space_gb=0.001)
        status = monitor.get_overall_status()

        assert status in [HealthStatus.HEALTHY, HealthStatus.DEGRADED]

    def test_get_health_report(self, temp_db: Path, tmp_path: Path) -> None:
        """Test health report generation."""
        backup_dir = tmp_path / "backups"
        scheduler = BackupScheduler(str(temp_db), backup_dir)
        scheduler.create_backup(BackupPolicy.DAILY)

        monitor = HealthMonitor(str(temp_db), backup_dir, min_disk_space_gb=0.001)
        report = monitor.get_health_report()

        assert "overall_status" in report
        assert "timestamp" in report
        assert "checks" in report
        assert len(report["checks"]) == 3
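
The health checks exercised above are designed to be polled. As a hedged sketch (not part of this change) of a cron-friendly entry point built on the same API the tests use — the database path, backup directory, and exit-code convention are assumptions:

```python
# Illustrative health-check entry point; only the HealthMonitor API shown in the
# tests above is relied on, everything else (paths, exit codes) is assumed.
import sys
from pathlib import Path

from src.backup.health_monitor import HealthMonitor, HealthStatus

monitor = HealthMonitor("data/trade_logs.db", Path("data/backups"))

# Print one line per check: database integrity, disk space, backup recency.
for name, check in monitor.run_all_checks().items():
    print(f"{name}: {check.status} - {check.message}")

# A non-zero exit code lets cron or systemd alerting flag an unhealthy system.
sys.exit(1 if monitor.get_overall_status() == HealthStatus.UNHEALTHY else 0)
```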