This repository demonstrates an end-to-end, highly optimized pipeline for 5G Network Slicing recommendation systems using Knowledge Graph (KG) integration and Large Language Model (LLM) fine-tuning.
Specifically, this project utilizes Parameter-Efficient Fine-Tuning (PEFT) via LoRA (Low-Rank Adaptation) on the Llama-3.2-1B-Instruct model. By converting raw telecommunication tabular data (throughput, latency, reliability) into structured Knowledge Graph triples, we train an AI agent to act as a 5G network slicing expert that maps real-world parameters to 3GPP standards (e.g., TS 23.501) and recommends optimal network configurations.
Designed by Senthilkumar Vijayakumar (IEEE Senior Member).
- Knowledge Graph Construction: Converts raw 5G dataset parameters into semantic Subject-Predicate-Object (SPO) triples using `pandas` and visualizes the ontology with `NetworkX`.
- LLM Prompt Engineering: Transforms graph data into structured instruction datasets optimized for Llama-3's chat template.
- LoRA & PEFT Fine-Tuning: Efficiently fine-tunes the `meta-llama/Llama-3.2-1B-Instruct` model using `trl` (`SFTTrainer`) and `peft`, drastically reducing GPU VRAM requirements.
- Apple Silicon (MPS) & CUDA Support: Dynamically handles precision and quantization, including stability workarounds specifically for macOS/MPS hardware.
- Inference Pipeline: Merges the base model with the trained LoRA adapter for fast, accurate inference on unseen 5G network conditions.
- Data Processing: Parses network parameters (Throughput, Latency, Density, Mobility, Error) and tags them to specific 3GPP standards.
- Graph Visualization: Uses `NetworkX` spring layouts to visualize the relationships between standards, conditions, and network slicing plans.
- Model Initialization: Loads the Llama-3 model in FP16 (or 4-bit NF4 via `bitsandbytes` on CUDA devices).
- Supervised Fine-Tuning (SFT): Applies a targeted LoRA configuration to attention (`q_proj`, `v_proj`, etc.) and MLP layers for domain-specific adaptation.
- Evaluation: Extracts training metrics, plots loss curves, and measures substring-match accuracy against expected network recommendations.
Ensure you have Python 3.10+ installed. Install the necessary dependencies:
```shell
pip install pandas networkx matplotlib torch transformers datasets peft trl bitsandbytes accelerate scikit-learn
```

The Llama-3 model requires authentication. Export your Hugging Face token in your terminal before running the notebooks or scripts:

```shell
export HF_TOKEN="your_hugging_face_token_here"
```

- `Comprehensive_5G_KG_LoRA.ipynb`: The primary, fully commented Jupyter notebook detailing the entire pipeline from KG creation to model inference.
- `kg_lora_llama_sft.py`: Standalone Python script for automated batch processing and training.
- `network_slicing_300.csv` / `kg_instruction_data_example.csv`: Sample 5G telemetry and formatted instruction datasets.
5G Network Slicing, Knowledge Graph, LLM Fine-Tuning, LoRA, Llama-3, Parameter-Efficient Fine-Tuning (PEFT), Telecom AI, PyTorch, Hugging Face Transformers, AI Networking, 3GPP TS 23.501.
Contributions, issues, and feature requests are welcome! Feel free to check the issues page.
If you utilize this framework or code in your research, please use the following citation:
```bibtex
@software{Vijayakumar_KG_LoRA_5G_2026,
  author = {Vijayakumar, Senthilkumar},
  title  = {Knowledge Graph-Enhanced LoRA Fine-Tuning for Intelligent 3GPP-Compliant 5G Network Slicing},
  year   = {2026},
  url    = {https://github.com/senthilv83/LLM-FineTuning},
  orcid  = {0009-0009-6436-9003}
}
```

(See `CITATION.cff` for more details.)