Conversation
Reviewer's Guide

Replaces the SQLite-based persistent storage with a PostgreSQL-backed adapter, updates the storage port and metrics to be backend-agnostic, adds a fast path for serving fresh player profiles directly from persistent storage, and wires Postgres into the runtime, tests, and monitoring stack.

Sequence diagram for player profile request with PostgreSQL fast path

```mermaid
sequenceDiagram
    actor Client
    participant APIServer as APIServer
    participant PlayerService as PlayerService
    participant Storage as PostgresStorage
    participant Blizzard as BlizzardAPI
    Client->>APIServer: HTTP GET /players/{player_id}
    APIServer->>PlayerService: execute_player_request(player_id)
    Note over PlayerService: Fast path: check persistent storage freshness
    PlayerService->>Storage: get_player_id_by_battletag(player_id) / direct if Blizzard ID
    PlayerService->>Storage: get_player_profile(blizzard_id)
    Storage-->>PlayerService: profile or None
    alt Fresh_profile_in_storage
        PlayerService->>PlayerService: data_factory(profile.html, profile.summary)
        PlayerService->>Storage: set_player_profile (background updates as needed)
        PlayerService-->>APIServer: data, is_stale=false, age
        APIServer-->>Client: 200 OK (served from Postgres)
    else No_fresh_profile
        PlayerService->>PlayerService: _resolve_player_identity(player_id)
        PlayerService->>Blizzard: fetch_player_html(blizzard_id)
        Blizzard-->>PlayerService: html
        PlayerService->>PlayerService: data_factory(html, player_summary)
        PlayerService->>Storage: set_player_profile(player_id, html, summary)
        PlayerService-->>APIServer: data, is_stale, age=0
        APIServer-->>Client: 200 OK (served from Blizzard)
    end
```
ER diagram for PostgreSQL static_data and player_profiles tables

```mermaid
erDiagram
    static_data {
        varchar key PK
        jsonb data
        static_data_category category
        smallint data_version
        timestamptz created_at
        timestamptz updated_at
    }
    player_profiles {
        text player_id PK
        text battletag
        text name
        bytea html_compressed
        jsonb summary
        bigint last_updated_blizzard
        smallint data_version
        timestamptz created_at
        timestamptz updated_at
    }
    %% Indexes (conceptual, shown as relationships to virtual entities)
    battletag_index {
        text battletag
    }
    updated_at_index {
        timestamptz updated_at
    }
    player_profiles ||--o{ battletag_index : idx_player_profiles_battletag
    player_profiles ||--o{ updated_at_index : idx_player_profiles_updated_at
```
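The tables above map to DDL roughly like the following (a sketch only — the authoritative definitions live in `app/adapters/storage/schema.sql`; the enum labels, defaults, and NOT NULL constraints shown here are assumptions):

```sql
-- Enum labels assumed from the StaticDataCategory members in the class diagram
CREATE TYPE static_data_category AS ENUM ('heroes', 'hero', 'gamemodes', 'maps', 'roles');

CREATE TABLE static_data (
    key          varchar PRIMARY KEY,
    data         jsonb NOT NULL,
    category     static_data_category NOT NULL,
    data_version smallint NOT NULL DEFAULT 1,
    created_at   timestamptz NOT NULL DEFAULT now(),
    updated_at   timestamptz NOT NULL DEFAULT now()
);

CREATE TABLE player_profiles (
    player_id             text PRIMARY KEY,
    battletag             text,
    name                  text,
    html_compressed       bytea,       -- zstd-compressed profile HTML
    summary               jsonb,
    last_updated_blizzard bigint,
    data_version          smallint NOT NULL DEFAULT 1,
    created_at            timestamptz NOT NULL DEFAULT now(),
    updated_at            timestamptz NOT NULL DEFAULT now()
);

-- The two "virtual entity" indexes from the ER diagram
CREATE INDEX idx_player_profiles_battletag ON player_profiles (battletag);
CREATE INDEX idx_player_profiles_updated_at ON player_profiles (updated_at);
```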
Class diagram for storage port and PostgreSQL adapter

```mermaid
classDiagram
    class StoragePort {
        <<protocol>>
        +initialize() async
        +get_static_data(key: str) async dict|None
        +set_static_data(key: str, data: dict, category: StaticDataCategory, data_version: int=1) async
        +get_player_profile(player_id: str) async dict|None
        +get_player_id_by_battletag(battletag: str) async str|None
        +set_player_profile(player_id: str, html: str, summary: dict|None, battletag: str|None, name: str|None, last_updated_blizzard: int|None, data_version: int=1) async
        +delete_old_player_profiles(max_age_seconds: int) async int
        +clear_all_data() async
        +get_stats() async dict
        +close() async
    }
    class StaticDataCategory {
        <<enum>>
        +HEROES
        +HERO
        +GAMEMODES
        +MAPS
        +ROLES
    }
    class PostgresStorage {
        -_pool: asyncpg.Pool|None
        -_initialized: bool
        -_init_lock: asyncio.Lock
        -_MAX_POOL_CREATION_ATTEMPTS: int
        +initialize() async
        +close() async
        +get_static_data(key: str) async dict|None
        +set_static_data(key: str, data: dict, category: StaticDataCategory, data_version: int=1) async
        +get_player_profile(player_id: str) async dict|None
        +get_player_id_by_battletag(battletag: str) async str|None
        +set_player_profile(player_id: str, html: str, summary: dict|None, battletag: str|None, name: str|None, last_updated_blizzard: int|None, data_version: int=1) async
        +delete_old_player_profiles(max_age_seconds: int) async int
        +clear_all_data() async
        +get_stats() async dict
        -_create_schema() async
        -_compress(data: str) bytes
        -_decompress(data: bytes) str
        -_init_connection(conn: asyncpg.Connection) async
    }
    class Settings {
        +postgres_host: str
        +postgres_port: int
        +postgres_db: str
        +postgres_user: str
        +postgres_password: str
        +postgres_pool_min_size: int
        +postgres_pool_max_size: int
        +player_profile_max_age: int
        +postgres_dsn: str
    }
    class StaticDataService {
        +get_or_fetch(config: StaticFetchConfig) async tuple
        -_load_from_storage(storage_key: str) async dict|None
        -_serve_from_storage(stored: dict, config: StaticFetchConfig) async tuple
        -_refresh_static(config: StaticFetchConfig) async
        -_fetch_and_store(config: StaticFetchConfig) async Any
        -_cold_fetch(config: StaticFetchConfig) async tuple
        -_store_in_storage(storage_key: str, data: Any, entity_type: str) async
        -_apply_filter(data: Any, result_filter: Callable) Any
        -storage: StoragePort
    }
    class PlayerService {
        +get_player_profile_cache(player_id: str) async dict|None
        +update_player_profile_cache(player_id: str, html: str, summary: dict|None, battletag: str|None, name: str|None) async
        -_execute_player_request(request: PlayerRequest) async tuple
        -_get_fresh_stored_profile(player_id: str) async tuple|None
        -_get_player_html(effective_id: str, identity: PlayerIdentity) async str
        -_resolve_player_identity(player_id: str) async PlayerIdentity
        -storage: StoragePort
    }
    class PostgresStorageSingleton {
        <<metaclass Singleton>>
    }
    StoragePort <|.. PostgresStorage
    StaticDataService --> StoragePort
    PlayerService --> StoragePort
    PostgresStorage ..> StaticDataCategory
    Settings ..> PostgresStorage
    PostgresStorage ..> PostgresStorageSingleton
```
Hey - I've found 3 issues, and left some high level feedback:
- In docker-compose, the Postgres service mounts `pg-data` at `/var/lib/postgresql`, but the official Postgres image persists data under `/var/lib/postgresql/data`; updating the mount path will ensure the database actually uses the volume for persistence.
- The Postgres image is pinned to `postgres:18-alpine`, which does not currently exist; consider using a stable, existing tag such as `postgres:16-alpine` or similar to avoid pull/startup failures.
Prompt for AI Agents
Please address the comments from this code review:
## Overall Comments
- In docker-compose, the Postgres service mounts `pg-data` at `/var/lib/postgresql`, but the official Postgres image persists data under `/var/lib/postgresql/data`; updating the mount path will ensure the database actually uses the volume for persistence.
- The Postgres image is pinned to `postgres:18-alpine`, which does not currently exist; consider using a stable, existing tag such as `postgres:16-alpine` or similar to avoid pull/startup failures.
## Individual Comments
### Comment 1
<location path="app/domain/services/player_service.py" line_range="268" />
<code_context>
+ logger.info(
+ "Serving player data from persistent storage (within staleness threshold)"
+ )
+ data = request.data_factory(profile["profile"], profile["summary"])
+ else:
+ identity = await self._resolve_player_identity(request.player_id)
</code_context>
<issue_to_address>
**issue (bug_risk):** Use the stored HTML field instead of a non-existent `profile` key when serving from persistent storage.
`get_player_profile_cache`/`get_player_profile` return the HTML under the `"html"` key, not `"profile"`, so `profile["profile"]` will raise a KeyError at runtime. This branch should call `request.data_factory(profile["html"], profile["summary"])` to match the non-cache path.
</issue_to_address>
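The mismatch the reviewer describes is easy to reproduce with a stand-in profile dict (the keys here are hypothetical, modeled on the review's description of `get_player_profile`'s return shape):

```python
# Stand-in for what get_player_profile_cache / get_player_profile return:
# HTML under "html", parsed summary under "summary" — no "profile" key.
profile = {"html": "<html></html>", "summary": {"name": "TeKrop"}}

# Buggy lookup from the reviewed branch: raises KeyError at runtime
try:
    profile["profile"]
    raised = False
except KeyError:
    raised = True

# Corrected lookup, matching what the non-cache path feeds data_factory
data = (profile["html"], profile["summary"])
```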
### Comment 2
<location path="app/domain/services/player_service.py" line_range="287-288" />
<code_context>
async def get_player_profile_cache(self, player_id: str) -> dict | None:
- """Get player profile from SQLite storage."""
+ """Get player profile from persistent storage storage."""
profile = await self.storage.get_player_profile(player_id)
if not profile:
</code_context>
<issue_to_address>
**nitpick (typo):** Fix duplicated word in the `get_player_profile_cache` docstring.
There’s a duplicated word here; please change `"persistent storage storage"` to `"persistent storage"`.
```suggestion
async def get_player_profile_cache(self, player_id: str) -> dict | None:
"""Get player profile from persistent storage."""
```
</issue_to_address>
### Comment 3
<location path="docker-compose.yml" line_range="40" />
<code_context>
timeout: 2s
+ postgres:
+ image: postgres:18-alpine
+ environment:
+ - POSTGRES_DB=${POSTGRES_DB:-overfast}
</code_context>
<issue_to_address>
**issue (bug_risk):** The `postgres:18-alpine` image tag likely does not exist and will break environment startup.
Official Postgres images only go up to major version 16, so `postgres:18-alpine` will fail to pull and prevent `docker-compose` from starting. Please use an existing tag (e.g. `postgres:16-alpine`) or make the version configurable via an environment variable.
</issue_to_address>
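Taken together, the two overall comments suggest a compose service along these lines (a sketch only — the service name, environment variables, and healthcheck are assumptions, not the project's actual `docker-compose.yml`):

```yaml
postgres:
  image: postgres:16-alpine  # a tag known to exist; the review flagged 18-alpine
  environment:
    - POSTGRES_DB=${POSTGRES_DB:-overfast}
  volumes:
    - pg-data:/var/lib/postgresql/data  # official image persists data under /data
  healthcheck:
    test: ["CMD-SHELL", "pg_isready -U ${POSTGRES_USER:-overfast} -d ${POSTGRES_DB:-overfast}"]
    interval: 5s
    timeout: 2s
```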
Pull request overview
This pull request implements a major architectural change by replacing SQLite with PostgreSQL as the persistent storage backend for OverFast API. The migration includes updates to the storage adapter, configuration, Docker setup, monitoring infrastructure, and test fixtures.
Changes:
- Replaced SQLite storage adapter with PostgreSQL adapter using asyncpg driver and connection pooling
- Updated Docker Compose configuration to include PostgreSQL 18 service with health checks and postgres-exporter for monitoring
- Migrated schema from SQLite to PostgreSQL with JSONB for structured data and BYTEA for compressed HTML
- Updated all monitoring metrics and Grafana dashboards to reflect the storage backend change
- Replaced in-memory test fixtures with FakeStorage implementation for better test isolation
Reviewed changes
Copilot reviewed 26 out of 27 changed files in this pull request and generated 8 comments.
Show a summary per file
| File | Description |
|---|---|
| uv.lock | Removed aiosqlite dependency, added asyncpg 0.31.0, updated virtualenv |
| pyproject.toml | Updated dependency from aiosqlite to asyncpg, version bump to 3.45.1 |
| app/config.py | Replaced SQLite configuration with PostgreSQL connection settings and DSN builder |
| app/adapters/storage/postgres_storage.py | New PostgreSQL adapter with connection pooling, JSONB support, and zstd compression |
| app/adapters/storage/schema.sql | Migrated schema from SQLite to PostgreSQL with ENUM types and TIMESTAMPTZ |
| app/domain/ports/storage.py | Updated port interface to use dict for data (JSONB) instead of JSON strings, added StaticDataCategory enum |
| app/domain/services/*.py | Updated service layer to work with dict-based storage instead of JSON strings |
| tests/fake_storage.py | New in-memory FakeStorage implementation for testing without database dependencies |
| tests/test_storage.py | Simplified tests to exercise StoragePort contract via FakeStorage |
| tests/conftest.py | Updated test fixtures to use FakeStorage instead of SQLite |
| app/monitoring/metrics.py | Renamed all sqlite_* metrics to storage_*, removed WAL-specific metrics |
| app/monitoring/router.py | Updated to use PostgresStorage instead of SQLiteStorage |
| docker-compose.yml | Added PostgreSQL service and postgres-exporter, removed SQLite volume |
| build/grafana/provisioning/dashboards/*.json | Updated Grafana dashboards with PostgreSQL-specific metrics |
| build/prometheus/prometheus.yml | Added PostgreSQL scrape configuration |
| .env.dist | Replaced SQLite configuration with PostgreSQL connection settings |
| README.md | Updated documentation to reference PostgreSQL instead of SQLite |
Summary by Sourcery
Migrate persistent storage from SQLite to PostgreSQL, updating the storage adapter, configuration, metrics, and services to use the new backend and a more flexible storage contract.