🛡️ Adaptive Trust-Aware Cybersecurity Defense Framework

An end-to-end ML-based cybersecurity defense system implementing adversarial attack simulation, trust scoring, and adaptive learning — built on PyTorch and the NSL-KDD benchmark dataset.



🎯 What This Project Does

This framework implements the research paper "Enhancing Cybersecurity Resilience Against Adversarial Machine Learning Attacks" as a fully working Python system. It:

  1. Trains a neural network on real network intrusion data (NSL-KDD dataset)
  2. Simulates adversarial attacks — FGSM, PGD (evasion), and Label Flipping (poisoning)
  3. Defends using a trust scoring engine that evaluates behavioral, reputational, and integrity signals
  4. Adapts over time using incremental learning and concept drift detection
  5. Generates result charts comparing defended vs. undefended performance

📊 Results

Charts are auto-generated into results/ when you run main.py.

Attack Success Rate — Before vs After Defense

| Attack | Undefended ASR | Defended ASR |
|---|---|---|
| FGSM (ε=0.30) | ~25% | significantly lower |
| FGSM (ε=0.50) | ~40% | significantly lower |
| FGSM (ε=0.80) | ~60% | significantly lower |
| PGD (20 steps) | ~65% | significantly lower |

📌 Run python main.py to see exact numbers and generate the 5 charts below.

Output Charts (saved to results/)

| Chart | What It Shows |
|---|---|
| asr_comparison.png | Attack success rates — defended vs undefended |
| trust_distribution.png | Trust score separation: clean vs adversarial inputs |
| adaptive_vs_static.png | Why adaptive learning beats static models under drift |
| metrics_comparison.png | Accuracy / F1 / Precision / Recall — with and without defense |
| trust_components.png | Breakdown of Behavioral, Reputation, and Integrity trust scores |


🧠 Architecture Overview

Incoming Network Traffic
         ↓
  Anomaly Scoring (PyTorch Neural Network)
         ↓
  Trust Evaluation Engine
  ┌─────────────────────────────────────┐
  │  Te(t) = α·Be(t) + β·Re(t) + γ·Ie(t) │
  │  Be = Behavioral trust              │
  │  Re = Reputation trust              │
  │  Ie = Integrity trust               │
  └─────────────────────────────────────┘
         ↓
  Trust Filter Decision
  ┌───────────────────────────────┐
  │ T < 0.3   → ❌ Block          │
  │ 0.3 ≤ T < 0.6 → ⚠️ Flag     │
  │ T ≥ 0.6   → ✅ Allow         │
  └───────────────────────────────┘
         ↓
  Model Classification
         ↓
  Drift Monitor → Incremental Update (if drift detected)
         ↓
  Output + Metrics
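The trust-filter step in the diagram above can be sketched as a simple threshold function. This is an illustrative reading of the thresholds shown (0.3 and 0.6), not the repo's exact code; the function name is hypothetical.

```python
# Hypothetical sketch of the trust-filter decision shown in the
# diagram; thresholds match the box above, naming is illustrative.
def trust_filter(trust_score: float) -> str:
    """Map a combined trust score T in [0, 1] to an action."""
    if trust_score < 0.3:
        return "block"       # T < 0.3
    elif trust_score < 0.6:
        return "flag"        # 0.3 <= T < 0.6
    return "allow"           # T >= 0.6
```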

📁 Project Structure

cybersecurity_framework-DTI/
├── main.py                    ← Entry point — run this
├── visualize.py               ← Chart generation (matplotlib)
├── requirements.txt           ← All dependencies
├── data/
│   ├── KDDTrain+.txt          ← NSL-KDD training set
│   └── KDDTest+.txt           ← NSL-KDD test set
├── results/                   ← Auto-created; charts saved here
└── src/
    ├── data_loader.py         ← NSL-KDD loader + synthetic fallback
    ├── baseline_model.py      ← 3-layer feedforward neural net (PyTorch)
    ├── adversarial_attacks.py ← FGSM, PGD, Label Flipping attacks
    ├── trust_scoring.py       ← Trust score engine (paper formula)
    ├── adaptive_learning.py   ← Drift detection + incremental updater
    └── defense_framework.py   ← Full defense pipeline

⚡ Quick Setup

Prerequisites

  • Python 3.9 or newer
  • pip

Step 1 — Clone the repo

git clone https://github.com/kunal9812/cybersecurity_framework-DTI.git
cd cybersecurity_framework-DTI

Step 2 — Create a virtual environment (recommended)

python -m venv venv

# Windows:
venv\Scripts\activate

# Mac/Linux:
source venv/bin/activate

Step 3 — Install dependencies

pip install -r requirements.txt

Step 4 — Add the NSL-KDD dataset

Download KDDTrain+.txt and KDDTest+.txt from the NSL-KDD dataset page and place them in the data/ folder.

💡 No dataset? No problem — the data_loader.py automatically generates synthetic data that mimics NSL-KDD's statistical properties, so you can still run and explore the full framework.

Step 5 — Run the demo

python main.py

🔬 Component Deep-Dive

1. Neural Network (baseline_model.py)

A 3-layer feedforward network trained with Adam optimizer on 41-feature network traffic data:

Input(41) → Dense(128) → ReLU → Dense(64) → ReLU → Dense(32) → ReLU → Output(2)
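In PyTorch, the layer stack above could look like the following sketch (the class name and exact structure in baseline_model.py may differ):

```python
import torch
import torch.nn as nn

# Sketch of the 3-layer feedforward architecture described above:
# 41 input features -> 128 -> 64 -> 32 -> 2 output classes.
class BaselineNet(nn.Module):
    def __init__(self, in_features: int = 41, n_classes: int = 2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_features, 128), nn.ReLU(),
            nn.Linear(128, 64), nn.ReLU(),
            nn.Linear(64, 32), nn.ReLU(),
            nn.Linear(32, n_classes),
        )

    def forward(self, x):
        return self.net(x)
```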

2. Adversarial Attacks (adversarial_attacks.py)

| Attack | Category | How It Works |
|---|---|---|
| FGSM | Evasion | Single-step gradient perturbation (ε = 0.30 / 0.50 / 0.80) |
| PGD | Evasion (stronger) | Iterative projected gradient descent (20 steps, α=0.05) |
| Label Flipping | Poisoning | Corrupts 15% of training labels before training |
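For reference, FGSM (the single-step attack in the table above) amounts to one gradient-sign step on the input. This is a minimal generic sketch, not the repo's exact adversarial_attacks.py implementation:

```python
import torch
import torch.nn.functional as F

# Minimal FGSM sketch: perturb the input one step in the direction
# of the loss gradient's sign, scaled by epsilon.
def fgsm_attack(model, x, y, epsilon: float = 0.30):
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step to increase the loss, then detach from the graph.
    return (x_adv + epsilon * x_adv.grad.sign()).detach()
```

PGD repeats this step iteratively (20 steps here), projecting back into the ε-ball after each step.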

3. Trust Scoring Engine (trust_scoring.py)

Implements the paper's core formula:

Te(t) = α·Be(t) + β·Re(t) + γ·Ie(t)

With weights: α=0.35, β=0.25, γ=0.40

  • Be(t) — Behavioral trust: how anomalous is this input vs baseline?
  • Re(t) — Reputation trust: has this source ID been reliable historically?
  • Ie(t) — Integrity trust: does the data distribution match expectations?
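The weighted combination above is a straight linear blend. A one-line sketch with the stated weights (component scores assumed to lie in [0, 1]; the function name is illustrative):

```python
# Weighted trust combination Te = a*Be + b*Re + g*Ie, using the
# paper weights alpha=0.35, beta=0.25, gamma=0.40 stated above.
def combined_trust(be: float, re: float, ie: float,
                   alpha: float = 0.35, beta: float = 0.25,
                   gamma: float = 0.40) -> float:
    return alpha * be + beta * re + gamma * ie
```

Since the weights sum to 1.0, the combined score stays in [0, 1] whenever the components do.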

4. Adaptive Learning (adaptive_learning.py)

  • Experience Buffer (capacity: 2000 samples) — prevents catastrophic forgetting
  • Drift Detector — monitors accuracy over sliding windows; threshold: 0.08
  • Incremental Updater — fine-tunes the model when drift is detected
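A toy version of the sliding-window drift check described above: flag drift when windowed accuracy falls more than the 0.08 threshold below the best accuracy seen so far. This is an assumption-laden sketch, not the exact adaptive_learning.py logic:

```python
from collections import deque

# Toy accuracy-based drift detector: keep a sliding window of
# prediction outcomes and signal drift when windowed accuracy drops
# more than `threshold` (0.08 per the text) below the reference.
class DriftDetector:
    def __init__(self, window: int = 100, threshold: float = 0.08):
        self.window = window
        self.threshold = threshold
        self.history = deque(maxlen=window)
        self.baseline = None

    def update(self, correct: bool) -> bool:
        """Record one outcome; return True if drift is detected."""
        self.history.append(1.0 if correct else 0.0)
        if len(self.history) < self.window:
            return False  # not enough data yet
        acc = sum(self.history) / self.window
        if self.baseline is None:
            self.baseline = acc
            return False
        if self.baseline - acc > self.threshold:
            self.baseline = acc  # reset reference after adapting
            return True
        self.baseline = max(self.baseline, acc)
        return False
```

On a drift event the framework would trigger the incremental updater, replaying samples from the experience buffer alongside new data to avoid catastrophic forgetting.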

📈 Sample Terminal Output

===========================================================
 FINAL SUMMARY
===========================================================
  Total processed  : 1000
  Blocked          : 312 (31.2%)
  Flagged (review) : 87
  Drift events     : 2
  Model updates    : 2

  Charts saved in results/:
    results/asr_comparison.png
    results/trust_distribution.png
    results/adaptive_vs_static.png
    results/metrics_comparison.png
    results/trust_components.png

🛠️ Tech Stack

| Tool | Purpose |
|---|---|
| Python 3.9+ | Core language |
| PyTorch | Neural network training and adversarial attack gradients |
| scikit-learn | IsolationForest (anomaly scoring), metrics |
| NumPy | Vectorized math for trust scoring and perturbations |
| Matplotlib | Chart generation |
| NSL-KDD | Benchmark intrusion detection dataset |

📚 Reference

This project implements the framework from:

"Enhancing Cybersecurity Resilience Against Adversarial Machine Learning Attacks"

Implementation by Kunal Yadav — B.Tech Computer Science, Manav Rachna University


⭐ If this helped you understand adversarial ML or cybersecurity defense systems, consider starring the repo!
