
Time Series Project 2: Kernel Adaptive Filtering

This project implements and evaluates Kernel Least Mean Squares (KLMS) and Quantized KLMS (QKLMS) algorithms for time series prediction using adaptive filtering techniques.

Overview

The project focuses on nonlinear adaptive filtering using kernel methods for time series prediction. Key components include:

  • NLMS (Normalized Least Mean Squares): A linear adaptive filtering baseline
  • KLMS (Kernel Least Mean Squares): Nonlinear adaptive filtering using kernel methods
  • QKLMS (Quantized KLMS): Memory-efficient version with dictionary quantization
  • Maximum Correntropy Criterion (MCC): Alternative cost function for robust learning
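As a rough sketch of the KLMS idea (a minimal illustration, not the notebook's exact code; the function names here are hypothetical), the prediction is a Gaussian-kernel expansion over past inputs, and each step appends the current input to the dictionary with coefficient `eta * error`:

```python
import numpy as np

def gaussian_kernel(u, v, sigma):
    """Gaussian kernel between two input vectors."""
    d = u - v
    return np.exp(-np.dot(d, d) / (2.0 * sigma ** 2))

def klms_train(X, d, eta=0.5, sigma=1.0):
    """One pass of plain KLMS: returns dictionary centers, coefficients,
    and the a-priori errors. X: (n_samples, order) inputs, d: targets."""
    centers, alphas, errors = [], [], []
    for x, target in zip(X, d):
        # predict with the current kernel expansion
        y = sum(a * gaussian_kernel(c, x, sigma) for c, a in zip(centers, alphas))
        e = target - y
        errors.append(e)
        # in plain KLMS every sample becomes a new dictionary center
        centers.append(np.asarray(x, dtype=float))
        alphas.append(eta * e)
    return centers, alphas, errors
```

Note that the dictionary grows by one center per sample, which is the memory cost that QKLMS addresses.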

Dataset

The project uses the ETTh2 (Electricity Transformer Temperature - hourly) dataset, containing:

  • Timestamp information
  • Six power load features (HUFL, HULL, MUFL, MULL, LUFL, LULL)
  • Oil Temperature (OT) as the primary prediction target

Dataset file: ETTh21hour.csv
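To predict OT one step ahead, the series must first be turned into input vectors of past samples (a time-delay embedding of length `order`). A minimal sketch, assuming a simple helper of this shape (the function name is an illustration, not taken from the notebook):

```python
import numpy as np

def embed(series, order):
    """Time-delay embedding: X[t] holds the last `order` samples,
    y[t] is the next sample to predict."""
    series = np.asarray(series, dtype=float)
    X = np.array([series[t - order:t] for t in range(order, len(series))])
    y = series[order:]
    return X, y
```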

Features

Implemented Algorithms

  • NLMS (Normalized Least Mean Squares)
  • KLMS with Gaussian kernel
  • QKLMS with dictionary quantization
  • MCC-based KLMS for robust learning
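The QKLMS dictionary step can be sketched as follows (hedged: names and defaults here are illustrative, not the notebook's): if the new input lies within the quantization threshold of an existing center, the update is folded into that center's coefficient instead of growing the dictionary.

```python
import numpy as np

def qklms_update(centers, alphas, x, e, eta=0.5, q_factor=0.1):
    """QKLMS dictionary step (sketch): if x is within q_factor of an
    existing center, fold the update into that center's coefficient;
    otherwise add x as a new center."""
    if centers:
        dists = [np.linalg.norm(c - x) for c in centers]
        j = int(np.argmin(dists))
        if dists[j] <= q_factor:
            alphas[j] += eta * e       # quantize: reuse the nearest center
            return centers, alphas
    centers.append(np.asarray(x, dtype=float))
    alphas.append(eta * e)
    return centers, alphas
```

This is why the quantization factor trades dictionary size against accuracy: a larger threshold merges more inputs into existing centers.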

Experiments

  1. Hyperparameter Tuning: Systematic tuning of learning rate, kernel size, and quantization factor
  2. Convergence Analysis: Comparison of MSE and MCC cost functions
  3. Multi-step Prediction: Testing prediction accuracy for future samples (up to 50 steps ahead)
  4. Trajectory Learning: Generation of time series using learned models
  5. Multi-variate Extension: Implementation for multiple time series variables
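For the MSE-vs-MCC comparison in the convergence experiment, the key difference is the per-step error weighting. A minimal sketch of the MCC-style step (an illustration under the usual correntropy formulation, not the notebook's exact code): the gradient step is scaled by a Gaussian of the error, so large (outlier) errors are exponentially attenuated rather than amplified.

```python
import numpy as np

def mcc_step(e, eta, sigma_e):
    """MCC-based step on the error (sketch): large errors are
    exponentially down-weighted, giving robustness to outliers.
    sigma_e is the error-kernel bandwidth."""
    return eta * np.exp(-e ** 2 / (2.0 * sigma_e ** 2)) * e
```

For small errors this behaves like the ordinary MSE step `eta * e`; for large errors the factor `exp(-e^2 / (2 * sigma_e^2))` drives the update toward zero.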

Project Structure

TS_Project_2/
├── main.ipynb              # Main implementation notebook
├── cupy_Test.ipynb         # GPU acceleration experiments
├── ETTh21hour.csv          # Dataset
├── Figures/                # Generated plots and results
├── Submission/             # Submission materials
├── Hyp_tuning/             # Hyperparameter tuning results
├── KLMS_QF_Results.csv     # Quantization factor results
├── KLMS_QF_Results2.csv    # Additional results
├── project 2.pdf           # Project specification
└── README.md               # This file

Dependencies

The project requires the following Python packages:

  • numpy - Numerical computing
  • pandas - Data manipulation
  • matplotlib - Plotting and visualization
  • itertools - Iteration tools (standard library)

Installation

pip install numpy pandas matplotlib

For GPU acceleration experiments (optional):

pip install cupy-cuda11x  # or appropriate CUDA version

Usage

Running the Main Notebook

  1. Ensure all dependencies are installed
  2. Open main.ipynb in Jupyter Notebook or JupyterLab
  3. Run cells sequentially to:
    • Load and preprocess the ETTh2 dataset
    • Train NLMS, KLMS, and QKLMS models
    • Perform hyperparameter tuning
    • Visualize results and compare algorithms

Key Parameters

  • order: FIR filter order (window size for past samples)
  • sigma: Kernel bandwidth parameter
  • eta: Learning rate
  • quantization_factor: Distance threshold for dictionary quantization
  • sigma_e: Error kernel parameter for MCC
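The `order` and `eta` parameters also govern the linear NLMS baseline, which can be sketched as follows (a minimal illustration, assuming the standard normalized-step form; the function name is not taken from the notebook):

```python
import numpy as np

def nlms(X, d, eta=0.5, eps=1e-8):
    """Normalized LMS baseline: the weight update is scaled by the
    input power, making the step size scale-invariant.
    X: (n_samples, order) inputs, d: desired outputs."""
    w = np.zeros(X.shape[1])
    errors = []
    for x, target in zip(X, d):
        e = target - np.dot(w, x)
        w += eta * e * x / (eps + np.dot(x, x))   # normalized step
        errors.append(e)
    return w, np.array(errors)
```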

Results

The project demonstrates:

  • KLMS and QKLMS outperform linear NLMS for nonlinear time series
  • Quantization significantly reduces dictionary size with minimal performance loss
  • MCC-based learning provides robustness to outliers
  • Multi-step ahead prediction accuracy degrades with increasing horizon

Results and Visualizations

Data Splits

The dataset is split into training, validation, and test sets for robust evaluation (see the data-split figure in Figures/).

Hyperparameter Tuning

Tuning figures cover learning rate optimization, kernel size (sigma) optimization, and quantization factor optimization.

Model Predictions

Prediction figures cover the final KLMS predictions, a test set predictions comparison, and the final prediction MSE.

Convergence Analysis

Convergence figures cover the MSE loss during training and the test MSE.

Advanced Analysis

Advanced-analysis figures cover the Maximum Correntropy Criterion (MCC), error kernel tuning, and multi-step ahead (future) predictions.

Additional visualizations and detailed results are available in the Figures/ directory.

License

This project is licensed under the MIT License - see the LICENSE file for details.

Author

Patrick J Craig (patrickjcraig)

Acknowledgments

  • ETTh2 dataset from the Electricity Transformer Temperature dataset collection
  • Implementation based on kernel adaptive filtering research
