# Performance Benchmark Metrics
## Overview
This document describes the performance benchmark suite for core system components. The benchmarks measure:
- Task dispatcher throughput (tasks/second)
- RBAC authorization latency (milliseconds)
- SQLite CRUD operation performance (operations/second)
## Running Benchmarks
```bash
pytest tests/performance/benchmarks.py -v --benchmark-enable --benchmark-json=benchmarks/results.json
```
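The individual tests in `tests/performance/benchmarks.py` use the `benchmark` fixture from pytest-benchmark. As a rough illustration only, a dispatcher throughput test could look like the sketch below; the `ai_agent.dispatcher` import path and the `dispatch()` call are assumed names for the example, not the project's actual API:
```python
# Sketch of a throughput benchmark (assumed import path and method names).
import pytest

from ai_agent.dispatcher import TaskDispatcher  # hypothetical module path


@pytest.fixture
def dispatcher():
    return TaskDispatcher()


def test_dispatch_throughput(benchmark, dispatcher):
    # pytest-benchmark times the callable; throughput in tasks/sec is roughly
    # the batch size divided by the reported mean batch time.
    def dispatch_batch():
        for i in range(100):
            dispatcher.dispatch({"id": i, "type": "noop"})

    benchmark(dispatch_batch)
```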
## Interpreting Results
Performance metrics are logged to `metrics/api_performance.log` with timestamps.
### Key Metrics
| Component | Metric | Target | Unit |
|-----------|--------|--------|------|
| TaskDispatcher | Throughput | ≥1000 | tasks/sec |
| RBACEngine | Auth Latency | ≤5 | ms |
| SQLiteAdapter | INSERT | ≤2 | ms/op |
| SQLiteAdapter | SELECT | ≤1 | ms/op |
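To show how a latency target maps onto a concrete check, here is a minimal sketch that times the authorization path directly with `time.perf_counter`; the `RBACEngine` import and its `check(subject, permission)` method are assumptions made for the example:
```python
# Sketch of an RBAC latency check against the mean and p99 targets.
import time

from ai_agent.rbac import RBACEngine  # hypothetical module path


def test_rbac_auth_latency():
    engine = RBACEngine()
    samples_ms = []
    for _ in range(1000):
        start = time.perf_counter()
        engine.check("user-1", "tasks:read")  # hypothetical call
        samples_ms.append((time.perf_counter() - start) * 1000)

    samples_ms.sort()
    mean_ms = sum(samples_ms) / len(samples_ms)
    p99_ms = samples_ms[int(len(samples_ms) * 0.99) - 1]
    assert mean_ms <= 5, f"mean {mean_ms:.2f} ms exceeds the 5 ms target"
    assert p99_ms <= 10, f"p99 {p99_ms:.2f} ms exceeds the 10 ms target"
```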
## Baseline Targets
These targets are based on system requirements:
1. **Task Dispatcher**
   - Must handle ≥1000 tasks/second under load
   - 95th percentile latency ≤10ms
2. **RBAC Authorization**
   - Average check time ≤5ms
   - 99th percentile ≤10ms
3. **SQLite Operations**
   - INSERT: ≤2ms average
   - SELECT: ≤1ms average for simple queries
   - Complex queries (joins): ≤10ms average
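The SQLite targets above can be reproduced in isolation with the standard-library `sqlite3` module. The real suite presumably exercises `SQLiteAdapter`; the snippet below is only a self-contained sketch of the measurement methodology (ms per operation, averaged over a batch):
```python
# Self-contained sketch of the SQLite INSERT/SELECT measurements.
import sqlite3
import time


def measure_sqlite_ops(n: int = 1000) -> dict:
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE tasks (id INTEGER PRIMARY KEY, payload TEXT)")

    start = time.perf_counter()
    for i in range(n):
        conn.execute("INSERT INTO tasks (payload) VALUES (?)", (f"task-{i}",))
    conn.commit()
    insert_ms = (time.perf_counter() - start) * 1000 / n

    start = time.perf_counter()
    for i in range(n):
        conn.execute("SELECT payload FROM tasks WHERE id = ?", (i + 1,)).fetchone()
    select_ms = (time.perf_counter() - start) * 1000 / n

    conn.close()
    return {"insert_ms_per_op": insert_ms, "select_ms_per_op": select_ms}


if __name__ == "__main__":
    print(measure_sqlite_ops())
```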
## Performance Trends
Performance metrics are tracked over time in `metrics/api_performance.log`. Use this command to pull the raw throughput samples out of the log:
```bash
grep "TaskDispatcher throughput" metrics/api_performance.log
```
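For anything beyond eyeballing the grep output, a short script can turn those lines into summary statistics. The log line format is assumed here (a numeric throughput value somewhere after the `TaskDispatcher throughput` marker) and may need adjusting:
```python
# Sketch of trend extraction from the performance log (assumed line format).
import re
from pathlib import Path

LINE = re.compile(r"TaskDispatcher throughput\D*([\d.]+)")


def throughput_samples(log_path: str = "metrics/api_performance.log") -> list[float]:
    samples = []
    for line in Path(log_path).read_text().splitlines():
        match = LINE.search(line)
        if match:
            samples.append(float(match.group(1)))
    return samples


if __name__ == "__main__":
    values = throughput_samples()
    if values:
        print(f"samples={len(values)} min={min(values):.0f} "
              f"mean={sum(values)/len(values):.0f} max={max(values):.0f}")
```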
## Troubleshooting
If benchmarks fail to meet targets:
1. Check system resource usage (CPU, memory, disk I/O) during the test run
2. Review recent code changes affecting the benchmarked components
3. Compare with historical data in the performance logs (see the sketch below)
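For step 3, one option is to diff the latest `benchmarks/results.json` against a saved baseline run. The sketch below assumes pytest-benchmark's JSON layout (a top-level `benchmarks` list whose entries carry a `name` and a `stats` dict with times in seconds) and a hypothetical `benchmarks/baseline.json` file:
```python
# Sketch: flag regressions by comparing mean times against a saved baseline.
import json


def load_means(path: str) -> dict[str, float]:
    with open(path) as fh:
        data = json.load(fh)
    return {b["name"]: b["stats"]["mean"] for b in data["benchmarks"]}


def compare(current_path: str, baseline_path: str, tolerance: float = 0.10) -> None:
    current = load_means(current_path)
    baseline = load_means(baseline_path)
    for name, base_mean in baseline.items():
        if name not in current:
            continue
        delta = (current[name] - base_mean) / base_mean
        flag = "REGRESSION" if delta > tolerance else "ok"
        print(f"{name}: {delta:+.1%} vs baseline [{flag}]")


if __name__ == "__main__":
    compare("benchmarks/results.json", "benchmarks/baseline.json")
```
pytest-benchmark also offers built-in comparison options (e.g. `--benchmark-compare`); the script above is just a dependency-free alternative.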