This project implements and evaluates Kernel Least Mean Squares (KLMS) and Quantized KLMS (QKLMS) algorithms for time series prediction using adaptive filtering techniques.
The project focuses on nonlinear adaptive filtering using kernel methods for time series prediction. Key components include:
- NLMS (Normalized Least Mean Squares): A linear adaptive filtering baseline
- KLMS (Kernel Least Mean Squares): Nonlinear adaptive filtering using kernel methods
- QKLMS (Quantized KLMS): Memory-efficient version with dictionary quantization
- Maximum Correntropy Criterion (MCC): Alternative cost function for robust learning
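The core KLMS recursion can be sketched as follows (an illustrative minimal version, not the notebook's exact implementation; the function names are ours):

```python
import numpy as np

def gaussian_kernel(x, c, sigma):
    """Gaussian kernel between input vector x and stored center c."""
    x, c = np.asarray(x, dtype=float), np.asarray(c, dtype=float)
    return np.exp(-np.sum((x - c) ** 2) / (2 * sigma ** 2))

def klms_train(X, d, eta=0.1, sigma=1.0):
    """KLMS: every input becomes a kernel center, weighted by eta times
    its a-priori prediction error."""
    centers, weights, errors = [], [], []
    for x, target in zip(X, d):
        # Prediction is a weighted sum of kernel evaluations at all stored centers
        y = sum(w * gaussian_kernel(x, c, sigma) for w, c in zip(weights, centers))
        e = target - y
        centers.append(x)
        weights.append(eta * e)
        errors.append(e)
    return centers, weights, errors
```

Note the growing-dictionary structure: the network adds one center per sample, which is exactly the memory problem QKLMS addresses.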
The project uses the ETTh2 (Electricity Transformer Temperature - hourly) dataset, containing:
- Timestamp information
- Six load features (HUFL, HULL, MUFL, MULL, LUFL, LULL)
- Oil Temperature (OT) as the primary prediction target
Dataset file: ETTh21hour.csv
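The prediction task is framed by embedding the series into input windows of past samples. A sketch of that windowing (our illustrative helper; it assumes the target column, e.g. OT, has already been extracted as a 1-D array):

```python
import numpy as np

def make_windows(series, order, horizon=1):
    """Embed a 1-D series: `order` past samples predict the value
    `horizon` steps ahead (horizon=1 is one-step prediction)."""
    X, d = [], []
    for t in range(order, len(series) - horizon + 1):
        X.append(series[t - order:t])       # FIR-style window of past samples
        d.append(series[t + horizon - 1])   # desired future value
    return np.array(X), np.array(d)
```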
- NLMS (Normalized Least Mean Squares)
- KLMS with Gaussian kernel
- QKLMS with dictionary quantization
- MCC-based KLMS for robust learning
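The quantization idea behind QKLMS can be sketched as follows (our illustrative code, not the notebook's; `eps_q` plays the role of the quantization factor):

```python
import numpy as np

def qklms_step(centers, weights, x, target, eta=0.1, sigma=1.0, eps_q=0.1):
    """One QKLMS update: if the input is within radius eps_q of an existing
    center, update that center's weight; otherwise grow the dictionary."""
    x = np.asarray(x, dtype=float)
    y = sum(w * np.exp(-np.sum((x - c) ** 2) / (2 * sigma ** 2))
            for w, c in zip(weights, centers))
    e = target - y
    if centers:
        dists = [np.linalg.norm(x - c) for c in centers]
        j = int(np.argmin(dists))
        if dists[j] <= eps_q:
            weights[j] += eta * e  # merge into nearest center, no dictionary growth
            return e
    centers.append(x)
    weights.append(eta * e)
    return e
```

A larger `eps_q` merges more aggressively, trading prediction accuracy for a smaller dictionary.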
- Hyperparameter Tuning: Systematic tuning of learning rate, kernel size, and quantization factor
- Convergence Analysis: Comparison of MSE and MCC cost functions
- Multi-step Prediction: Testing prediction accuracy for future samples (up to 50 steps ahead)
- Trajectory Learning: Generation of time series using learned models
- Multi-variate Extension: Implementation for multiple time series variables
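Multi-step prediction of this kind is typically done recursively, feeding each one-step prediction back in as the newest input. A sketch of the general scheme (`predict_one` stands in for any trained one-step model):

```python
import numpy as np

def predict_ahead(predict_one, window, steps):
    """Predict `steps` samples ahead by recursing on the model's own
    outputs; prediction errors compound as the horizon grows."""
    window = list(window)
    preds = []
    for _ in range(steps):
        y = predict_one(np.asarray(window))
        preds.append(y)
        window = window[1:] + [y]  # slide the window forward by one
    return preds
```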
TS_Project_2/
├── main.ipynb # Main implementation notebook
├── cupy_Test.ipynb # GPU acceleration experiments
├── ETTh21hour.csv # Dataset
├── Figures/ # Generated plots and results
├── Submission/ # Submission materials
├── Hyp_tuning/ # Hyperparameter tuning results
├── KLMS_QF_Results.csv # Quantization factor results
├── KLMS_QF_Results2.csv # Additional results
├── project 2.pdf # Project specification
└── README.md # This file
The project requires the following Python packages:
- numpy - Numerical computing
- pandas - Data manipulation
- matplotlib - Plotting and visualization
- itertools - Iteration tools (standard library)
```
pip install numpy pandas matplotlib
```

For the optional GPU acceleration experiments:

```
pip install cupy-cuda11x  # or the build matching your CUDA version
```

- Ensure all dependencies are installed
- Open main.ipynb in Jupyter Notebook or JupyterLab
- Run cells sequentially to:
  - Load and preprocess the ETTh2 dataset
  - Train NLMS, KLMS, and QKLMS models
  - Perform hyperparameter tuning
  - Visualize results and compare algorithms
- order: FIR filter order (window size for past samples)
- sigma: Kernel bandwidth parameter
- eta: Learning rate
- quantization_factor: Distance threshold for dictionary quantization
- sigma_e: Error kernel parameter for MCC
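A parameter set might look like the following (illustrative values only; the actual tuned values come from the Hyp_tuning/ results):

```python
# Illustrative values only -- not the tuned results from Hyp_tuning/
params = {
    "order": 10,                  # FIR filter order (past-sample window size)
    "sigma": 1.0,                 # Gaussian kernel bandwidth
    "eta": 0.1,                   # learning rate
    "quantization_factor": 0.05,  # QKLMS dictionary distance threshold
    "sigma_e": 2.0,               # error-kernel width for MCC
}
```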
The project demonstrates:
- KLMS and QKLMS outperform linear NLMS for nonlinear time series
- Quantization significantly reduces dictionary size with minimal performance loss
- MCC-based learning provides robustness to outliers
- Multi-step ahead prediction accuracy degrades with increasing horizon
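The outlier robustness of MCC comes from passing the error itself through a Gaussian, so large (outlier) errors contribute vanishingly small weight updates. A minimal sketch of the MCC-weighted step (`sigma_e` as listed above):

```python
import numpy as np

def mcc_step(e, eta=0.1, sigma_e=1.0):
    """MCC-weighted increment: eta * exp(-e^2 / (2*sigma_e^2)) * e.
    Small errors behave like plain LMS; outlier errors are suppressed."""
    return eta * np.exp(-e ** 2 / (2 * sigma_e ** 2)) * e
```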
The dataset is split into training, validation, and test sets for robust evaluation.

Key result figures:
- Kernel Size (Sigma) Optimization
- Quantization Factor Optimization
- Test Set Predictions Comparison
- Maximum Correntropy Criterion (MCC)
Additional visualizations and detailed results are available in the Figures/ directory.
This project is licensed under the MIT License - see the LICENSE file for details.
Patrick J Craig (patrickjcraig)
- ETTh2 dataset from the Electricity Transformer Temperature dataset collection
- Implementation based on kernel adaptive filtering research