32 changes: 32 additions & 0 deletions .github/workflows/vulture.yml
@@ -0,0 +1,32 @@
name: Dead Code Detection

on:
push:
branches: [ main ]
pull_request:
branches: [ main ]
workflow_dispatch:

jobs:
vulture:
name: Vulture Dead Code Check
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v6

- name: Set up uv
uses: astral-sh/setup-uv@v7
with:
enable-cache: true
cache-dependency-glob: "backend/uv.lock"

- name: Install Python and dependencies
run: |
cd backend
uv python install 3.12
uv sync --frozen

- name: Run vulture
run: |
cd backend
uv run vulture app/ vulture_whitelist.py
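
The job passes `vulture_whitelist.py` to vulture alongside `app/`; that is vulture's standard whitelist mechanism, where any name referenced in the extra file counts as used and is excluded from the dead-code report. A minimal, purely hypothetical sketch of such a whitelist (the actual file in `backend/` will differ):

```python
# vulture_whitelist.py - hypothetical sketch, not the repository's real file.
# Vulture parses this file together with app/, so every name or attribute
# referenced below is treated as "used" and never reported as dead code.
# Typical entries are symbols resolved only at runtime (DI wiring, Pydantic
# hooks, FastAPI callbacks), which look unused to static analysis.

from app.domain.saga.models import SagaConfig

# Pydantic reads model_config during class creation, not from application code.
SagaConfig.model_config
```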
8 changes: 8 additions & 0 deletions .pre-commit-config.yaml
@@ -22,6 +22,14 @@ repos:
files: ^backend/.*\.py$
pass_filenames: false

# Vulture - matches CI: cd backend && uv run vulture app/ vulture_whitelist.py
- id: vulture-backend
name: vulture dead code (backend)
entry: bash -c 'cd backend && uv run vulture app/ vulture_whitelist.py'
language: system
files: ^backend/.*\.py$
pass_filenames: false

# ESLint - matches CI: cd frontend && npx eslint src
- id: eslint-frontend
name: eslint (frontend)
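Because the hook shells out to the same command as CI, you can verify it locally (assuming `pre-commit` and `uv` are installed) with `pre-commit run vulture-backend --all-files`, which executes `uv run vulture app/ vulture_whitelist.py` from `backend/` just like the workflow.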
58 changes: 44 additions & 14 deletions README.md
@@ -12,6 +12,9 @@
<a href="https://github.com/HardMax71/Integr8sCode/actions/workflows/security.yml">
<img src="https://img.shields.io/github/actions/workflow/status/HardMax71/Integr8sCode/security.yml?branch=main&label=security&logo=shieldsdotio&logoColor=white" alt="Security Scan Status" />
</a>
<a href="https://github.com/HardMax71/Integr8sCode/actions/workflows/vulture.yml">
<img src="https://img.shields.io/github/actions/workflow/status/HardMax71/Integr8sCode/vulture.yml?branch=main&label=dead%20code&logo=python&logoColor=white" alt="Dead Code Check" />
</a>
<a href="https://github.com/HardMax71/Integr8sCode/actions/workflows/docker.yml">
<img src="https://img.shields.io/github/actions/workflow/status/HardMax71/Integr8sCode/docker.yml?branch=main&label=docker&logo=docker&logoColor=white" alt="Docker Scan Status" />
</a>
@@ -57,29 +60,56 @@ things safe and efficient. You'll get the results back in no time.
<details>
<summary>How to deploy</summary>

1. Clone this repository
2. Check if docker is enabled, kubernetes is running and kubectl is installed
3. `docker-compose up --build`
### Prerequisites

- Docker and Docker Compose
- Kubernetes cluster (k3s, Docker Desktop K8s, or minikube) with `kubectl` configured

### Quick start

```bash
git clone https://github.com/HardMax71/Integr8sCode.git
cd Integr8sCode
cp backend/secrets.example.toml backend/secrets.toml
./deploy.sh dev
```

- Frontend: `https://127.0.0.1:5001/`
- Backend: `https://127.0.0.1:443/`
- To check if it works, you can use `curl -k https://127.0.0.1/api/v1/k8s-limits`, should return JSON with current limits
- Grafana: `http://127.0.0.1:3000` (login - `admin`, pw - `admin123`)
The `secrets.toml` file holds credentials and is gitignored. The example template has working development defaults.

### Verify

You may also find out that k8s doesn't capture metrics (`CPU` and `Memory` params are `null`), it may well be that metrics server
for k8s is turned off/not enabled. To enable, execute:
```bash
kubectl create -f https://raw.githubusercontent.com/pythianarora/total-practice/master/sample-kubernetes-code/metrics-server.yaml
curl -k https://localhost/api/v1/health/live
```

and test output by writing `kubectl top node` in console, should output sth like:
### Access

| Service | URL |
|--------------------|--------------------------------------------------------|
| Frontend | [https://localhost:5001](https://localhost:5001) |
| Backend API | [https://localhost:443](https://localhost:443) |
| Kafdrop (Kafka UI) | [http://localhost:9000](http://localhost:9000) |
| Grafana | [http://localhost:3000](http://localhost:3000) |
| Jaeger (Tracing) | [http://localhost:16686](http://localhost:16686) |

Default credentials: `user` / `user123` (regular), `admin` / `admin123` (admin).

Self-signed TLS certs are generated automatically — accept the browser warning.

### Run tests

```bash
./deploy.sh test
```
PS C:\Users\User\Desktop\Integr8sCode> kubectl top node
NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
docker-desktop 267m 3% 4732Mi 29%

### Stop

```bash
./deploy.sh down
```

See the [full deployment guide](https://hardmax71.github.io/Integr8sCode/operations/deployment/) for Docker build strategy, troubleshooting, pre-built images, and more.

</details>

<details>
2 changes: 0 additions & 2 deletions backend/app/core/container.py
@@ -19,7 +19,6 @@
PodMonitorProvider,
RedisProvider,
RepositoryProvider,
ResourceCleanerProvider,
ResultProcessorProvider,
SagaOrchestratorProvider,
SettingsProvider,
@@ -59,7 +58,6 @@ def create_app_container(settings: Settings) -> AsyncContainer:
BusinessServicesProvider(),
CoordinatorProvider(),
KubernetesProvider(),
ResourceCleanerProvider(),
FastapiProvider(),
context={Settings: settings},
)
1 change: 0 additions & 1 deletion backend/app/core/dishka_lifespan.py
@@ -62,7 +62,6 @@ async def lifespan(app: FastAPI) -> AsyncGenerator[None, None]:

# Get unstarted broker from DI (BrokerProvider yields without starting)
broker: KafkaBroker = await container.get(KafkaBroker)
app.state.kafka_broker = broker

# Register subscribers BEFORE broker.start() - FastStream requirement
register_sse_subscriber(broker, settings)
12 changes: 1 addition & 11 deletions backend/app/core/providers.py
@@ -62,7 +62,7 @@
from app.services.notification_service import NotificationService
from app.services.pod_monitor import PodEventMapper, PodMonitor, PodMonitorConfig
from app.services.rate_limit_service import RateLimitService
from app.services.result_processor import ResourceCleaner, ResultProcessor
from app.services.result_processor import ResultProcessor
from app.services.runtime_settings import RuntimeSettingsLoader
from app.services.saga import SagaOrchestrator, SagaService
from app.services.saved_script_service import SavedScriptService
@@ -291,16 +291,6 @@ async def get_api_client(
await api_client.close()


class ResourceCleanerProvider(Provider):
scope = Scope.APP

@provide
def get_resource_cleaner(
self, api_client: k8s_client.ApiClient, logger: structlog.stdlib.BoundLogger
) -> ResourceCleaner:
return ResourceCleaner(api_client=api_client, logger=logger)


class MetricsProvider(Provider):
"""Provides all metrics instances via DI (no contextvars needed)."""

12 changes: 0 additions & 12 deletions backend/app/db/docs/replay.py
@@ -69,18 +69,6 @@ class ReplaySessionDocument(Document):

model_config = ConfigDict(from_attributes=True)

@property
def progress_percentage(self) -> float:
"""Calculate progress percentage."""
if self.total_events == 0:
return 0.0
return round((self.replayed_events / self.total_events) * 100, 2)

@property
def is_completed(self) -> bool:
"""Check if session is completed."""
return self.status in [ReplayStatus.COMPLETED, ReplayStatus.FAILED, ReplayStatus.CANCELLED]

@property
def is_running(self) -> bool:
"""Check if session is running."""
13 changes: 1 addition & 12 deletions backend/app/db/repositories/admin/admin_events_repository.py
@@ -25,7 +25,7 @@
HourlyEventCount,
UserEventCount,
)
from app.domain.replay import ReplayFilter, ReplaySessionState
from app.domain.replay import ReplayFilter


class AdminEventsRepository:
@@ -210,17 +210,6 @@ async def archive_event(self, event: DomainEvent, deleted_by: str) -> bool:
await archive_doc.insert()
return True

async def create_replay_session(self, session: ReplaySessionState) -> str:
doc = ReplaySessionDocument(**session.model_dump())
await doc.insert()
return session.session_id

async def get_replay_session(self, session_id: str) -> ReplaySessionState | None:
doc = await ReplaySessionDocument.find_one(ReplaySessionDocument.session_id == session_id)
if not doc:
return None
return ReplaySessionState.model_validate(doc)

async def update_replay_session(self, session_id: str, updates: ReplaySessionUpdate) -> bool:
update_dict = updates.model_dump(exclude_none=True)
if not update_dict:
22 changes: 0 additions & 22 deletions backend/app/db/repositories/dlq_repository.py
@@ -149,25 +149,3 @@ async def get_queue_sizes_by_topic(self) -> dict[str, int]:
result[row["_id"]] = row["count"]
return result

async def mark_message_retried(self, event_id: str) -> bool:
doc = await DLQMessageDocument.find_one({"event.event_id": event_id})
if not doc:
return False
now = datetime.now(timezone.utc)
doc.status = DLQMessageStatus.RETRIED
doc.retried_at = now
doc.last_updated = now
await doc.save()
return True

async def mark_message_discarded(self, event_id: str, reason: str) -> bool:
doc = await DLQMessageDocument.find_one({"event.event_id": event_id})
if not doc:
return False
now = datetime.now(timezone.utc)
doc.status = DLQMessageStatus.DISCARDED
doc.discarded_at = now
doc.discard_reason = reason
doc.last_updated = now
await doc.save()
return True
12 changes: 0 additions & 12 deletions backend/app/db/repositories/execution_repository.py
@@ -8,7 +8,6 @@
from app.domain.execution import (
DomainExecution,
DomainExecutionCreate,
DomainExecutionUpdate,
ExecutionResultDomain,
)

@@ -34,17 +33,6 @@ async def get_execution(self, execution_id: str) -> DomainExecution | None:
self.logger.info("Found execution in MongoDB", execution_id=execution_id)
return DomainExecution.model_validate(doc)

async def update_execution(self, execution_id: str, update_data: DomainExecutionUpdate) -> bool:
doc = await ExecutionDocument.find_one(ExecutionDocument.execution_id == execution_id)
if not doc:
return False

update_dict = update_data.model_dump(exclude_none=True)
if update_dict:
update_dict["updated_at"] = datetime.now(timezone.utc)
await doc.set(update_dict)
return True

async def write_terminal_result(self, result: ExecutionResultDomain) -> bool:
doc = await ExecutionDocument.find_one(ExecutionDocument.execution_id == result.execution_id)
if not doc:
3 changes: 0 additions & 3 deletions backend/app/db/repositories/user_settings_repository.py
@@ -65,9 +65,6 @@ async def count_events_since_snapshot(self, user_id: str) -> int:
GT(EventDocument.timestamp, snapshot.updated_at),
).count()

async def count_events_for_user(self, user_id: str) -> int:
return await EventDocument.find(EventDocument.aggregate_id == f"user_settings_{user_id}").count()

async def delete_user_settings(self, user_id: str) -> None:
doc = await UserSettingsSnapshotDocument.find_one(UserSettingsSnapshotDocument.user_id == user_id)
if doc:
3 changes: 0 additions & 3 deletions backend/app/dlq/manager.py
@@ -209,9 +209,6 @@ async def update_queue_metrics(self) -> None:
def set_retry_policy(self, topic: str, policy: RetryPolicy) -> None:
self._retry_policies[topic] = policy

def add_filter(self, filter_func: Callable[[DLQMessage], bool]) -> None:
self._filters.append(filter_func)

async def retry_message_manually(self, event_id: str) -> bool:
message = await self.repository.get_message_by_id(event_id)
if not message:
29 changes: 0 additions & 29 deletions backend/app/domain/events/event_models.py
@@ -3,32 +3,13 @@

from pydantic import BaseModel, ConfigDict, Field

from app.core.utils import StringEnum
from app.domain.enums import EventType
from app.domain.events.typed import DomainEvent, EventMetadata

MongoQueryValue = str | dict[str, str | list[str] | float | datetime]
MongoQuery = dict[str, MongoQueryValue]


class CollectionNames(StringEnum):
EVENTS = "events"
EVENT_STORE = "event_store"
REPLAY_SESSIONS = "replay_sessions"
EVENTS_ARCHIVE = "events_archive"
RESOURCE_ALLOCATIONS = "resource_allocations"
USERS = "users"
EXECUTIONS = "executions"
EXECUTION_RESULTS = "execution_results"
SAVED_SCRIPTS = "saved_scripts"
NOTIFICATIONS = "notifications"
NOTIFICATION_SUBSCRIPTIONS = "notification_subscriptions"
USER_SETTINGS = "user_settings"
USER_SETTINGS_SNAPSHOTS = "user_settings_snapshots"
SAGAS = "sagas"
DLQ_MESSAGES = "dlq_messages"


class EventSummary(BaseModel):
"""Lightweight event summary for lists and previews."""

@@ -167,16 +148,6 @@ class ExecutionEventsResult(BaseModel):
access_allowed: bool
include_system_events: bool

def get_filtered_events(self) -> list[DomainEvent]:
"""Get events filtered based on access and system event settings."""
if not self.access_allowed:
return []

events = self.events
if not self.include_system_events:
events = [e for e in events if not e.metadata.service_name.startswith("system-")]

return events


class EventExportRow(BaseModel):
9 changes: 0 additions & 9 deletions backend/app/domain/saga/exceptions.py
@@ -28,15 +28,6 @@ def __init__(self, saga_id: str, current_state: SagaState, operation: str) -> No
super().__init__(f"Cannot {operation} saga '{saga_id}' in state '{current_state}'")


class SagaCompensationError(InfrastructureError):
"""Raised when saga compensation fails."""

def __init__(self, saga_id: str, step: str, reason: str) -> None:
self.saga_id = saga_id
self.step = step
super().__init__(f"Compensation failed for saga '{saga_id}' at step '{step}': {reason}")


class SagaTimeoutError(InfrastructureError):
"""Raised when a saga times out."""

14 changes: 0 additions & 14 deletions backend/app/domain/saga/models.py
@@ -94,20 +94,6 @@ class SagaCancellationResult(BaseModel):
saga_id: str


class SagaStatistics(BaseModel):
"""Saga statistics."""

model_config = ConfigDict(from_attributes=True)

total_sagas: int
sagas_by_state: dict[str, int] = Field(default_factory=dict)
sagas_by_name: dict[str, int] = Field(default_factory=dict)
average_duration_seconds: float = 0.0
success_rate: float = 0.0
failure_rate: float = 0.0
compensation_rate: float = 0.0


class SagaConfig(BaseModel):
"""Configuration for saga orchestration (domain)."""
