# XdetectionCore

XdetectionCore is the foundational data processing engine for the Akrami Lab (LIM Lab). It provides standardized utilities for electrophysiology (ephys) and behavioral analysis, specifically designed to bridge the gap between Windows workstations and Linux-based HPC clusters.
## Table of Contents

- Features
- Installation
- Quick Start
- Core Modules
- Components
- Usage Examples
- Project Structure
- Contributing
- License
## Features

- Unified Session Management: Centralized `Session` class for managing ephys and behavior data
- Spike Analysis: Tools for spike time processing, PSTH computation, and neural decoding
- Behavioral Analysis: Utilities for sound events, lick detection, and pupil tracking
- Cross-Platform Support: Seamless path handling between Windows and Linux systems
- Scalable Processing: Integration with parallel processing via `joblib` for large datasets
- Statistical Analysis: Built-in filtering, z-scoring, and neural population analysis
- Visualization: Matplotlib-based plotting with custom styling and publication-ready figures
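The z-scoring mentioned above can be sketched in a few lines of numpy; the helper name below is illustrative, not the package's actual API.

```python
import numpy as np

def zscore_rates(rates, axis=-1):
    """Z-score firing rates along an axis (illustrative helper,
    not XdetectionCore's actual API)."""
    rates = np.asarray(rates, dtype=float)
    mean = rates.mean(axis=axis, keepdims=True)
    std = rates.std(axis=axis, keepdims=True)
    std[std == 0] = 1.0  # avoid division by zero for silent units
    return (rates - mean) / std

# Example: z-score a (units x time bins) rate matrix per unit
rates = np.array([[1.0, 2.0, 3.0], [5.0, 5.0, 5.0]])
z = zscore_rates(rates, axis=1)
```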
## Installation

### From PyPI

The easiest way to install the stable version is via pip:

```bash
pip install XdetectionCore
```
### From Source
For development or to access the latest version:
```bash
git clone https://github.com/Akrami-Lab/XdetectionCore.git
cd XdetectionCore
pip install -e .
```

### Requirements

- Python >= 3.8
- numpy < 2.0
- pandas >= 1.3
- matplotlib
- scipy
- tqdm
- joblib
## Quick Start

```python
from xdetectioncore.session import Session

# Initialize a session
session = Session(
    sessname='my_experiment',
    ceph_dir='path/to/ceph/data',
    pkl_dir='path/to/pkl/data'
)

# Initialize spike data
session.init_spike_obj(
    spike_times_path='spike_times.npy',
    spike_cluster_path='spike_clusters.npy',
    start_time=0,
    parent_dir='path/to/data'
)

# Initialize sound events
session.init_sound_event_dict(
    sound_write_path='sound_writes.bin',
    format_kwargs={'sampling_rate': 30000}
)
```

Lower-level I/O and path helpers can also be used directly:

```python
from xdetectioncore.io_utils import load_sess_pkl, load_spikes
from xdetectioncore.paths import posix_from_win
# Load spike data
spike_times, spike_clusters = load_spikes(
    spike_times_path='spike_times.npy',
    spike_clusters_path='spike_clusters.npy'
)

# Cross-platform path handling
linux_path = posix_from_win('C:\\data\\recording', '/nfs/nhome/live')
```

## Core Modules

### Session (`session.py`)

Central session management class that coordinates ephys and behavioral data:
- Manages spike objects, events, and behavioral measurements
- Handles trial data (td_df) and inter-trial-interval (ITI) statistics
- Integrates decoders for neural population analysis
- Aggregates multi-session data via `load_aggregate_td_df()`
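A minimal sketch of what multi-session aggregation typically looks like with pandas (the helper and column names are hypothetical, not the real `load_aggregate_td_df`):

```python
import pandas as pd

def aggregate_trial_data(session_frames):
    """Concatenate per-session trial DataFrames, tagging each row with
    its session name (illustrative; not the real load_aggregate_td_df)."""
    tagged = [df.assign(sess=name) for name, df in session_frames.items()]
    return pd.concat(tagged, ignore_index=True)

# Two toy sessions with a hypothetical 'correct' column
frames = {
    'exp_001': pd.DataFrame({'correct': [1, 0, 1]}),
    'exp_002': pd.DataFrame({'correct': [1, 1]}),
}
td_df = aggregate_trial_data(frames)
```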
### Ephys (`ephys/`)

Electrophysiology analysis tools:

- `spike_time_utils.py`: `SessionSpikes` class for spike handling, raster generation, and PSTH computation
- `generate_synthetic_spikes.py`: Synthetic neural data generation for validation and testing
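Synthetic spike trains of the kind used for validation are often drawn from a homogeneous Poisson process; a sketch under that assumption (not `generate_synthetic_spikes.py`'s actual code):

```python
import numpy as np

def synthetic_poisson_spikes(rate_hz, duration_s, rng=None):
    """Draw spike times from a homogeneous Poisson process
    (illustrative; not the package's actual generator)."""
    rng = np.random.default_rng(rng)
    n = rng.poisson(rate_hz * duration_s)            # spike count
    return np.sort(rng.uniform(0.0, duration_s, n))  # sorted times in seconds

spikes = synthetic_poisson_spikes(rate_hz=10.0, duration_s=5.0, rng=0)
```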
### Components (`components/`)

Modular components for specific data types:

- `events.py`: `SoundEvent` class for sound stimulus representation and PSTH analysis
- `licks.py`: `SessionLicks` for lick behavior tracking and analysis
- `pupil.py`: `SessionPupil` for pupil tracking and statistics
- `utils.py`: Utility functions including z-scoring and normalization
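At its core, a PSTH is a histogram of spike times aligned to event onsets, averaged over events. A minimal numpy sketch (illustrative only, not `SoundEvent.get_psth`):

```python
import numpy as np

def psth(spike_times, event_times, window=(-0.5, 1.0), bin_s=0.05):
    """Mean spike count per bin, aligned to each event onset
    (illustrative; not the package's SoundEvent.get_psth)."""
    n_bins = int(round((window[1] - window[0]) / bin_s))
    edges = np.linspace(window[0], window[1], n_bins + 1)
    counts = np.zeros(n_bins)
    for t0 in event_times:
        aligned = np.asarray(spike_times) - t0  # times relative to event
        counts += np.histogram(aligned, bins=edges)[0]
    return counts / len(event_times), edges

spikes = [0.9, 1.05, 1.1, 2.02, 2.6]
mean_counts, edges = psth(spikes, event_times=[1.0, 2.0])
```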
### Decoding (`decoding/`)

Neural population decoding and classification:

- `decoding_funcs.py`: `Decoder` class for various decoding algorithms
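Population decoding in its simplest form can be illustrated with a nearest-centroid classifier over trial firing-rate vectors (a generic sketch, not the `Decoder` class's actual algorithm):

```python
import numpy as np

def fit_centroids(rates, labels):
    """Mean population vector per condition (generic sketch,
    not XdetectionCore's Decoder)."""
    classes = np.unique(labels)
    return classes, np.stack([rates[labels == c].mean(axis=0) for c in classes])

def predict(rates, classes, centroids):
    """Assign each trial to the nearest class centroid."""
    d = np.linalg.norm(rates[:, None, :] - centroids[None, :, :], axis=2)
    return classes[np.argmin(d, axis=1)]

# Toy data: 4 trials x 2 units, two stimulus classes
rates = np.array([[1.0, 0.1], [0.9, 0.2], [0.1, 1.0], [0.2, 0.9]])
labels = np.array([0, 0, 1, 1])
classes, centroids = fit_centroids(rates, labels)
pred = predict(rates, classes, centroids)
```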
### Utilities

- `io_utils.py`: File I/O operations (spike loading, sound binary reading, pickle handling)
- `paths.py`: Cross-platform path utilities and date extraction
- `plotting.py`: Publication-ready visualization and styling (`format_axis()`, `unique_legend()`, `apply_style()`)
- `behaviour.py`: Behavioral data processing and formatting
- `stats.py`: Statistical analysis functions
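One way a Windows-to-POSIX conversion like `posix_from_win` can be implemented with the standard library (a sketch that assumes the path starts with a drive letter; not the package's actual code):

```python
from pathlib import PurePosixPath, PureWindowsPath

def win_to_posix(win_path, posix_root):
    """Re-root a Windows path under a POSIX mount point, dropping the
    drive letter (illustrative; not the real posix_from_win)."""
    parts = PureWindowsPath(win_path).parts[1:]  # skip e.g. 'C:\\'
    return str(PurePosixPath(posix_root, *parts))

p = win_to_posix('C:\\data\\recordings\\exp_2024', '/nfs/nhome/live/aonih')
```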
## Components

### SoundEvent

Represents a sound stimulus event with associated spike responses:

```python
from xdetectioncore.components.events import SoundEvent
event = SoundEvent(idx=0, times=[1.0, 2.0, 3.0], lbl='stim_A')
# Compute PSTH (peri-stimulus time histogram)
event.get_psth(
    sess_spike_obj=session.spike_obj,
    window=[-0.5, 1.0],
    baseline_dur=0.25,
    zscore_flag=True
)
# Save visualization
event.save_plot_as_svg('figures/', suffix='trial_001')
```

### SessionSpikes

Core class for spike time processing:

```python
from xdetectioncore.ephys.spike_time_utils import SessionSpikes
spike_obj = SessionSpikes(
    spike_times_path='spike_times.npy',
    spike_cluster_path='spike_clusters.npy',
    start_time=0,
    parent_dir='data/'
)
# Get spike raster for specific time window
raster = spike_obj.get_spike_raster(start_time=0, end_time=10)
```

### SessionLicks

Track and analyze lick behavior:

```python
from xdetectioncore.components.licks import SessionLicks
licks = SessionLicks()
lick_times = licks.get_lick_times(lick_data_path='lick_times.csv')
```

### SessionPupil

Analyze pupil dynamics:

```python
from xdetectioncore.components.pupil import SessionPupil
pupil = SessionPupil()
# Process pupil tracking data
```

## Usage Examples

### Complete PSTH Analysis

```python
from xdetectioncore.session import Session
from xdetectioncore.components.events import SoundEvent
import numpy as np
# Create session
session = Session('exp_001', 'ceph_dir', 'pkl_dir')
session.init_spike_obj('spikes.npy', 'clusters.npy', 0, 'data/')
# Create sound event
sound_times = np.array([1.5, 5.2, 9.8])  # stimulus onset times in seconds
event = SoundEvent(idx=0, times=sound_times, lbl='tone_1khz')
# Compute and plot PSTH
event.get_psth(
    sess_spike_obj=session.spike_obj,
    window=[-0.5, 2.0],
    baseline_dur=0.25,
    zscore_flag=True,
    title='Tone Response'
)
event.save_plot_as_svg('output/', suffix='psth_analysis')
```

### Multi-Session Behavioral Analysis

```python
from xdetectioncore.behaviour import load_aggregate_td_df
from pathlib import Path
# Load trial data from multiple sessions
td_df = load_aggregate_td_df(
    session_topology=session_info_df,  # DataFrame describing each session
    home_dir=Path('/home/user/data')
)
# Filter and analyze
learning_window = td_df[td_df['trial_type'] == 'learning']
print(f"Average performance: {learning_window['correct'].mean():.3f}")
```

### Cross-Platform Path Handling

```python
from xdetectioncore.paths import posix_from_win
# Convert Windows path to Linux HPC path
win_path = 'C:\\data\\recordings\\exp_2024'
linux_path = posix_from_win(win_path, '/nfs/nhome/live/aonih')
# Now use linux_path for HPC analysis
print(linux_path)  # /nfs/nhome/live/aonih/data/recordings/exp_2024
```

## Project Structure

```
XdetectionCore/
├── xdetectioncore/
│   ├── __init__.py                       # Package exports
│   ├── session.py                        # Central Session class
│   ├── behaviour.py                      # Behavioral data processing
│   ├── io_utils.py                       # File I/O utilities
│   ├── paths.py                          # Cross-platform path handling
│   ├── plotting.py                       # Visualization utilities
│   ├── stats.py                          # Statistical functions
│   ├── components/
│   │   ├── events.py                     # SoundEvent class
│   │   ├── licks.py                      # SessionLicks class
│   │   ├── pupil.py                      # SessionPupil class
│   │   └── utils.py                      # Component utilities
│   ├── decoding/
│   │   └── decoding_funcs.py             # Neural decoding algorithms
│   └── ephys/
│       ├── __init__.py
│       ├── spike_time_utils.py           # SessionSpikes class
│       └── generate_synthetic_spikes.py  # Data generation
├── pyproject.toml                        # Project metadata
├── setup.py                              # Setup configuration
├── README.md                             # This file
└── LICENSE                               # License file
```
## Contributing

Contributions are welcome! To contribute:

- Fork the repository
- Create a feature branch (`git checkout -b feature/my-feature`)
- Commit your changes (`git commit -am 'Add my feature'`)
- Push to the branch (`git push origin feature/my-feature`)
- Open a Pull Request
## License

This project is licensed under the terms specified in the LICENSE file.
## Related Projects

- Neo: Neural data standards and I/O
- Elephant: Electrophysiology analysis toolkit
Maintained by the Akrami Lab (LIM Lab)
For issues, questions, or suggestions, please open an issue on the GitHub repository or contact the lab.