A comprehensive Lane Keep Assist (LKA) system for Advanced Driver Assistance Systems (ADAS) using deep learning and behavior cloning. This project implements Nvidia's PilotNet CNN architecture to learn steering behavior from human driving data.
This system enables a vehicle to maintain lane position using only front camera input without requiring LIDAR, GPS, or HD maps. The approach uses end-to-end deep learning where the model learns to predict steering angles directly from camera images by mimicking human driving behavior.
- Behavior Cloning: Learns steering behavior from human driving demonstrations
- PilotNet Architecture: Implements Nvidia's proven CNN architecture
- Real-time Inference: Provides steering predictions at 30 FPS
- Vision-only Approach: No dependency on LIDAR, GPS, or HD maps
- Comprehensive Training Pipeline: Complete data collection, preprocessing, and training workflow
- Performance Monitoring: Real-time visualization and performance metrics
- Modular Design: Easy to extend and customize
```
┌─────────────────┐    ┌─────────────────┐    ┌─────────────────┐
│ Data Collection │    │  Preprocessing  │    │    Training     │
│                 │    │                 │    │                 │
│ • Camera Input  │───▶│ • Image Resize  │───▶│ • PilotNet CNN  │
│ • Steering Data │    │ • Normalization │    │ • Behavior Clone│
│ • Synchronized  │    │ • Augmentation  │    │ • Validation    │
└─────────────────┘    └─────────────────┘    └─────────────────┘
                                                       │
                                                       ▼
┌─────────────────┐    ┌─────────────────┐    ┌─────────────────┐
│  Real-time LKA  │    │    Inference    │    │  Trained Model  │
│                 │    │                 │    │                 │
│ • Lane Keeping  │◀───│ • Steering Pred │◀───│ • PilotNet.h5   │
│ • Smoothing     │    │ • Real-time     │    │ • Weights       │
│ • Safety Limits │    │ • Visualization │    │ • Architecture  │
└─────────────────┘    └─────────────────┘    └─────────────────┘
```
- Python 3.8+
- OpenCV
- TensorFlow 2.x
- NumPy, Pandas, Matplotlib
- Camera (USB webcam or vehicle camera)
- Clone the repository:

  ```bash
  git clone <repository-url>
  cd lane-keep-assist
  ```

- Create a virtual environment:

  ```bash
  python -m venv venv
  source venv/bin/activate   # Linux/Mac
  venv\Scripts\activate      # Windows
  ```

- Install dependencies:

  ```bash
  pip install -r requirements.txt
  ```

- Verify the installation:

  ```bash
  python -c "import tensorflow as tf; print('TensorFlow:', tf.__version__)"
  python -c "import cv2; print('OpenCV:', cv2.__version__)"
  ```

Collect training data by recording camera footage and steering angles:

```bash
python examples/collect_data.py --output_dir data/my_training_session
```

Controls:

- `r`: Start/stop recording
- `q`: Quit
- `a`: Steer left (keyboard mode)
- `d`: Steer right (keyboard mode)
- `s`: Center steering
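Internally, each recorded frame must stay synchronized with its steering label. Below is a minimal sketch of that pairing using a synthetic frame and a hypothetical `steering_log.csv` layout — the real collector reads from the camera and may store images differently (e.g., as JPEGs via OpenCV):

```python
import csv
import time
from pathlib import Path

import numpy as np


def log_sample(session_dir: Path, frame: np.ndarray, steering_angle: float) -> Path:
    """Save one frame and append its steering label to the session log."""
    session_dir.mkdir(parents=True, exist_ok=True)
    frame_path = session_dir / f"frame_{time.time_ns()}.npy"
    np.save(frame_path, frame)  # a real collector would write JPEG/PNG via OpenCV
    with open(session_dir / "steering_log.csv", "a", newline="") as f:
        csv.writer(f).writerow([frame_path.name, f"{steering_angle:.3f}"])
    return frame_path


# Simulate three synchronized samples with dummy 66x200 RGB frames
session = Path("data/my_training_session")
for angle in (0.0, -3.5, 3.5):
    log_sample(session, np.zeros((66, 200, 3), dtype=np.uint8), angle)
```

Keeping the label in the same row as the image filename is what lets the training pipeline later reload synchronized (image, angle) pairs.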
Train the PilotNet model with your collected data:
```bash
python examples/train_model.py --data_dir data/my_training_session --model_name my_pilotnet
```

Training will:
- Preprocess images (resize, crop, normalize)
- Apply data augmentation
- Train PilotNet CNN architecture
- Generate performance visualizations
- Save trained model
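The preprocessing step can be illustrated with a dependency-free sketch. The crop margins and the [-1, 1] scaling here are assumptions for illustration; the project's `image_processor.py` may crop, resize (e.g., via OpenCV), and normalize differently:

```python
import numpy as np


def preprocess(frame: np.ndarray,
               crop_top: int = 60, crop_bottom: int = 25,
               out_h: int = 66, out_w: int = 200) -> np.ndarray:
    """Crop sky/hood rows, resize to the network input, scale pixels to [-1, 1]."""
    cropped = frame[crop_top:frame.shape[0] - crop_bottom]            # drop sky and hood
    rows = (np.arange(out_h) * cropped.shape[0] / out_h).astype(int)  # nearest-neighbor
    cols = (np.arange(out_w) * cropped.shape[1] / out_w).astype(int)  # index grids
    resized = cropped[rows][:, cols]
    return resized.astype(np.float32) / 127.5 - 1.0                   # scale to [-1, 1]


frame = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)  # e.g. a 640x480 camera
x = preprocess(frame)
```

The output shape `(66, 200, 3)` matches the PilotNet input described below in the model architecture section.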
Run lane keep assist with your trained model:
```bash
python examples/run_inference.py --model data/models/my_pilotnet_best.h5
```

Controls:

- `q`: Quit
- `r`: Reset performance metrics
- `s`: Save performance statistics
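The smoothing and steering limits from `config/config.py` combine naturally in a small post-processing step on each prediction. This sketch assumes `smoothing_factor` is the weight of an exponential moving average — the project's actual smoothing implementation may differ:

```python
class SteeringSmoother:
    """Exponentially smooth raw predictions and clamp to the steering limits."""

    def __init__(self, smoothing_factor: float = 0.8, max_angle: float = 25.0):
        self.alpha = smoothing_factor
        self.max_angle = max_angle
        self.angle = 0.0

    def update(self, predicted: float) -> float:
        # Blend the previous command with the new prediction ...
        self.angle = self.alpha * self.angle + (1.0 - self.alpha) * predicted
        # ... and never exceed the configured steering range
        self.angle = max(-self.max_angle, min(self.max_angle, self.angle))
        return self.angle


smoother = SteeringSmoother()
for raw in (10.0, 10.0, 40.0):  # an out-of-range spike in the last prediction
    cmd = smoother.update(raw)
```

With `smoothing_factor = 0.8`, each frame moves the command only 20% of the way toward the new prediction, which damps frame-to-frame jitter at the cost of some steering lag.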
```
lane-keep-assist/
├── config/
│   └── config.py                  # Configuration settings
├── src/
│   ├── data_collection/
│   │   └── data_collector.py      # Data recording utilities
│   ├── preprocessing/
│   │   └── image_processor.py     # Image preprocessing & augmentation
│   ├── model/
│   │   └── pilotnet.py            # PilotNet CNN architecture
│   ├── training/
│   │   └── trainer.py             # Training pipeline
│   ├── inference/
│   │   └── real_time_predictor.py # Real-time inference
│   └── utils/
│       └── visualization.py       # Visualization utilities
├── examples/
│   ├── collect_data.py            # Data collection example
│   ├── train_model.py             # Training example
│   └── run_inference.py           # Inference example
├── data/
│   ├── raw/                       # Raw collected data
│   ├── processed/                 # Processed datasets
│   └── models/                    # Trained models
├── tests/                         # Unit tests
├── docs/                          # Documentation
├── requirements.txt               # Dependencies
└── README.md                      # This file
```
The system implements Nvidia's PilotNet CNN architecture optimized for steering prediction:
```
Input Image (66x200x3)
        ↓
Normalization Layer
        ↓
Conv2D(24, 5x5, stride=2) → ReLU
        ↓
Conv2D(36, 5x5, stride=2) → ReLU
        ↓
Conv2D(48, 5x5, stride=2) → ReLU
        ↓
Conv2D(64, 3x3, stride=1) → ReLU
        ↓
Conv2D(64, 3x3, stride=1) → ReLU
        ↓
      Flatten
        ↓
Dense(100) → ReLU → Dropout(0.5)
        ↓
Dense(50) → ReLU → Dropout(0.5)
        ↓
Dense(10) → ReLU → Dropout(0.5)
        ↓
Dense(1) → Steering Angle
```
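The stack above translates almost directly into Keras. This is an illustrative sketch, not the project's `pilotnet.py` — which, per the configuration section below, may also add L2 regularization:

```python
import tensorflow as tf
from tensorflow.keras import layers, models


def build_pilotnet(input_shape=(66, 200, 3)) -> tf.keras.Model:
    """PilotNet-style CNN: five conv layers, then a 100-50-10-1 dense head."""
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Rescaling(1.0 / 127.5, offset=-1.0),    # normalize pixels to [-1, 1]
        layers.Conv2D(24, 5, strides=2, activation="relu"),
        layers.Conv2D(36, 5, strides=2, activation="relu"),
        layers.Conv2D(48, 5, strides=2, activation="relu"),
        layers.Conv2D(64, 3, activation="relu"),
        layers.Conv2D(64, 3, activation="relu"),
        layers.Flatten(),                               # 1x18x64 -> 1152 features
        layers.Dense(100, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(50, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(10, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(1),                                # steering angle (regression)
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-3), loss="mse", metrics=["mae"])
    return model


model = build_pilotnet()
```

With valid padding, the conv stack reduces 66x200 to 1x18x64 before flattening, giving roughly 252k trainable parameters — small enough for real-time inference.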
The system is highly configurable through `config/config.py`:

```python
# Model Architecture
MODEL = {
    'input_shape': (66, 200, 3),
    'conv_layers': [...],
    'dense_layers': [100, 50, 10, 1],
    'dropout_rate': 0.5,
    'l2_regularization': 0.001
}

# Training Parameters
TRAINING = {
    'batch_size': 32,
    'epochs': 100,
    'learning_rate': 0.001,
    'early_stopping_patience': 10
}

# Steering Limits
STEERING = {
    'max_steering_angle': 25.0,   # degrees
    'min_steering_angle': -25.0,
    'smoothing_factor': 0.8
}
```

```bash
# Use joystick input
python examples/collect_data.py --use_joystick --session_name highway_driving

# Specify camera and output directory
python examples/collect_data.py --camera_index 1 --output_dir /path/to/data
```

```bash
# Train with custom parameters
python examples/train_model.py \
    --data_dir data/highway_session \
    --model_name highway_pilotnet \
    --epochs 150 \
    --batch_size 64 \
    --no_balance

# Train with a specific output directory
python examples/train_model.py \
    --data_dir data/mixed_conditions \
    --output_dir models/production \
    --model_name production_v1
```

```bash
# Run without visualization (headless)
python examples/run_inference.py \
    --model models/production_v1_best.h5 \
    --no_visualization \
    --save_predictions

# Run the full Lane Keep Assist system
python examples/run_inference.py \
    --model models/production_v1_best.h5 \
    --full_system
```

The system provides comprehensive performance monitoring:
- Training/validation loss curves
- Mean Absolute Error (MAE)
- R² score
- Steering angle distribution analysis
- Inference time (ms)
- Frames per second (FPS)
- Steering angle predictions
- Performance statistics
- Model activation maps
- Filter weight visualization
- Prediction analysis plots
- Performance dashboard
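The training metrics listed above are standard regression scores. For reference, a minimal NumPy version of MAE and R² — the project's trainer may compute them via scikit-learn or Keras instead:

```python
import numpy as np


def regression_metrics(y_true: np.ndarray, y_pred: np.ndarray) -> dict:
    """MAE and R^2 score for steering-angle predictions."""
    mae = np.mean(np.abs(y_true - y_pred))
    ss_res = np.sum((y_true - y_pred) ** 2)          # residual sum of squares
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)   # total sum of squares
    return {"mae": float(mae), "r2": float(1.0 - ss_res / ss_tot)}


# Hypothetical steering angles (degrees) on a few validation frames
y_true = np.array([0.0, -2.0, 5.0, 1.0])
y_pred = np.array([0.5, -1.5, 4.0, 1.0])
m = regression_metrics(y_true, y_pred)
```

MAE is in degrees, so it reads directly as "average steering error"; R² close to 1.0 means the model explains most of the steering variance.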
- Diverse Conditions: Collect data in various lighting, weather, and road conditions
- Quality Driving: Ensure smooth, consistent steering behavior
- Balanced Dataset: Include equal amounts of straight, left, and right turns
- Sufficient Data: Collect at least 30 minutes of driving data
- Multiple Sessions: Record separate sessions for different scenarios
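Balancing matters because straight-road frames dominate raw logs. One common recipe caps the number of samples per steering-angle bin; this is a sketch (the binning scheme is an assumption — the trainer's `--no_balance` flag suggests it does something similar, but the details may differ):

```python
import numpy as np


def balance_by_angle(angles: np.ndarray, n_bins: int = 25,
                     per_bin: int = 200, seed: int = 0) -> np.ndarray:
    """Return indices keeping at most `per_bin` samples per steering-angle bin."""
    rng = np.random.default_rng(seed)
    edges = np.linspace(angles.min(), angles.max(), n_bins + 1)
    bins = np.digitize(angles, edges[1:-1])  # bin id per sample, 0..n_bins-1
    keep = []
    for b in np.unique(bins):
        idx = np.flatnonzero(bins == b)
        if len(idx) > per_bin:
            idx = rng.choice(idx, per_bin, replace=False)  # downsample crowded bins
        keep.extend(idx.tolist())
    return np.sort(np.asarray(keep))


# Straight-road driving dominates raw logs: mostly near-zero angles
rng = np.random.default_rng(1)
angles = np.concatenate([np.zeros(5000), rng.uniform(-25.0, 25.0, 1000)])
kept = balance_by_angle(angles)
```

Without this step, the near-zero bin swamps the loss and the model learns to always predict "straight".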
- Data Preprocessing: Ensure consistent image preprocessing
- Data Augmentation: Use appropriate augmentation to improve generalization
- Early Stopping: Monitor validation loss to prevent overfitting
- Learning Rate: Use learning rate scheduling for better convergence
- Regularization: Apply dropout and L2 regularization
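In Keras, early stopping, checkpointing, and learning-rate scheduling are typically wired up as callbacks. A sketch using the config's patience of 10 — the `ReduceLROnPlateau` settings and checkpoint path are assumptions:

```python
from tensorflow.keras.callbacks import (EarlyStopping, ModelCheckpoint,
                                        ReduceLROnPlateau)

callbacks = [
    # Stop when validation loss hasn't improved for `early_stopping_patience` epochs
    EarlyStopping(monitor="val_loss", patience=10, restore_best_weights=True),
    # Halve the learning rate when validation loss plateaus
    ReduceLROnPlateau(monitor="val_loss", factor=0.5, patience=5, min_lr=1e-6),
    # Keep only the best weights seen so far
    ModelCheckpoint("data/models/my_pilotnet_best.h5",
                    monitor="val_loss", save_best_only=True),
]
# model.fit(train_ds, validation_data=val_ds, callbacks=callbacks)
```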
- Testing Environment: Only test in safe, controlled environments
- Human Supervision: Always have a human driver ready to take control
- Speed Limits: Test at low speeds initially
- Emergency Override: Implement manual override capabilities
- Fail-safe Mechanisms: Include steering angle limits and safety checks
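Beyond clamping to the configured angle range, a simple fail-safe is to also rate-limit how fast the command may change between frames. A sketch — `max_delta` is a hypothetical tuning parameter, not from the project config:

```python
def failsafe_steering(command: float, previous: float,
                      max_angle: float = 25.0, max_delta: float = 2.0) -> float:
    """Limit both the absolute steering angle and its per-frame rate of change."""
    # Rate limit: never jump more than max_delta degrees in one frame
    command = max(previous - max_delta, min(previous + max_delta, command))
    # Hard limit: respect the configured steering range
    return max(-max_angle, min(max_angle, command))


angle = 0.0
for raw in (30.0, 30.0, -30.0):  # wild predictions from a misbehaving model
    angle = failsafe_steering(raw, angle)
```

Even when the model outputs ±30°, the commanded angle creeps by at most 2° per frame and never leaves ±25°, giving the supervising driver time to take over.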
- Camera Not Found

  ```bash
  # Check available cameras
  python -c "import cv2; print([i for i in range(10) if cv2.VideoCapture(i).isOpened()])"
  ```

- GPU Memory Issues

  ```python
  # Reduce the batch size in config.py
  TRAINING['batch_size'] = 16
  ```

- Poor Model Performance
  - Collect more diverse training data
  - Increase training epochs
  - Adjust the learning rate
  - Check data preprocessing

- Slow Inference
  - Optimize the model architecture
  - Use GPU acceleration
  - Reduce image resolution
- Fork the repository
- Create a feature branch
- Make your changes
- Add tests
- Submit a pull request
This project is licensed under the MIT License. See LICENSE file for details.
- Nvidia for the PilotNet architecture
- TensorFlow team for the deep learning framework
- OpenCV community for computer vision tools
- Bojarski, M., et al. "End to End Learning for Self-Driving Cars." arXiv:1604.07316 (2016).
- Nvidia PilotNet: https://developer.nvidia.com/blog/deep-learning-self-driving-cars/
- Udacity Self-Driving Car Nanodegree: https://www.udacity.com/course/self-driving-car-engineer-nanodegree--nd013
For questions and support:
- Open an issue on GitHub
- Check the documentation in `docs/`
- Review the configuration in `config/config.py`