Distributed Library Management System - MPI Implementation

Student: Alexandros Papafragkakis - CSD5084

Overview

Implementation of a distributed library management system using MPI (Message Passing Interface). The system consists of library processes (servers) and borrower processes (clients) that communicate through message passing.

System Architecture

Process Distribution

Total Processes: 1 + N² + ⌊N³/2⌋ (integer division for odd N)
├── Coordinator (Rank 0)
├── Libraries (Ranks 1 to N²) - N×N Grid
└── Clients (Ranks N²+1 to N² + ⌊N³/2⌋) - Tree Topology
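The rank ranges above translate into simple arithmetic for deciding what each process should do at startup. A minimal sketch (the function and enum names are illustrative, not taken from the repository):

```c
#include <assert.h>

enum Role { ROLE_COORDINATOR, ROLE_LIBRARY, ROLE_CLIENT };

/* Map an MPI rank to its role for grid parameter N:
 * rank 0 is the coordinator, ranks 1..N*N are libraries,
 * and the remaining floor(N*N*N/2) ranks are clients. */
enum Role role_of(int rank, int N) {
    if (rank == 0) return ROLE_COORDINATOR;
    if (rank <= N * N) return ROLE_LIBRARY;
    return ROLE_CLIENT;
}
```

For N=3 this classifies rank 9 as the last library and rank 10 as the first of the 13 clients, matching the 23-process example below.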

Network Topologies

  • Libraries: N×N grid with 4-connectivity neighborhood
  • Clients: Tree determined by CONNECT commands
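The 4-connectivity neighborhood of the library grid can be computed directly from a rank. A sketch, assuming row-major rank numbering starting at rank 1 for cell <0,0> (an assumption for illustration):

```c
#include <assert.h>

/* 4-connected neighbors of library rank r (1..N*N) on the N×N grid.
 * Writes the neighbor ranks into out[] and returns how many exist
 * (2 at corners, 3 on edges, 4 in the interior). */
int grid_neighbors(int r, int N, int out[4]) {
    int idx = r - 1, row = idx / N, col = idx % N, n = 0;
    if (row > 0)     out[n++] = r - N;  /* up    */
    if (row < N - 1) out[n++] = r + N;  /* down  */
    if (col > 0)     out[n++] = r - 1;  /* left  */
    if (col < N - 1) out[n++] = r + 1;  /* right */
    return n;
}
```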

Implemented Algorithms

1. DFS Spanning Tree Election (Libraries)

  • "Constructing a DFS Spanning Tree without a Specified Root" algorithm
  • EXPLORE/PARENT/ACK message handling
  • Tree merging with highest tree_id selection
  • Leadership announcement to all libraries
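The tree-merging rule can be sketched as a pure function: on receiving EXPLORE, a node joins the tree with the higher tree_id and adopts the sender as parent. Struct and field names are illustrative, not from the source:

```c
#include <assert.h>

typedef struct { int tree_id, parent; } ElectState;

/* On EXPLORE carrying tree id t from neighbor s, join the higher-id
 * tree and adopt s as parent; lower-id trees are absorbed.
 * Returns 1 if the node switched trees, 0 otherwise. */
int handle_explore(ElectState *st, int t, int s) {
    if (t > st->tree_id) {
        st->tree_id = t;
        st->parent = s;
        return 1;
    }
    return 0;
}
```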

2. STtoLeader Election (Clients)

  • Complete convergecast algorithm from tree leaves
  • ELECT message propagation with collision detection
  • Two completion cases: convergence or edge collision
  • Highest rank wins in collision scenarios

3. Tree-Based Aggregation

  • Broadcast + convergecast through client spanning tree
  • O(log n) communication rounds on a balanced tree, versus the coordinator polling all n clients directly
  • Cost-based tie-breaking for popular book selection
  • Parent-child relationship utilization
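The convergecast combine step at each client reduces to merging candidate results with the cost tie-break described below. A sketch with illustrative field names:

```c
#include <assert.h>

typedef struct { int id, loans, cost; } Candidate;

/* Merge two (book, loans, cost) candidates during convergecast:
 * keep the most-loaned book; on a loan-count tie, the costlier
 * book wins. Each client folds its children's reports this way
 * before forwarding one candidate to its parent. */
Candidate combine(Candidate a, Candidate b) {
    if (a.loans != b.loans) return a.loans > b.loans ? a : b;
    return a.cost >= b.cost ? a : b;
}
```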

4. Grid Traversal (Libraries)

  • Systematic grid traversal: <0,0>, <1,0>, ..., <N-1,0>, <N-1,1>, <N-2,1>, ..., <0,1>, <0,2>, ...
  • Message forwarding through specified grid pattern
  • Accumulative loan counting during traversal
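The boustrophedon pattern above is deterministic, so the k-th cell of the traversal can be computed in closed form (helper name is illustrative):

```c
#include <assert.h>

/* k-th cell (0-based) of the snake traversal <0,0>,<1,0>,...,<N-1,0>,
 * <N-1,1>,...,<0,1>,<0,2>,...: the first coordinate sweeps upward on
 * even passes over the second coordinate and downward on odd ones. */
void snake_cell(int k, int N, int *a, int *b) {
    *b = k / N;
    int pos = k % N;
    *a = (*b % 2 == 0) ? pos : N - 1 - pos;
}
```

Each library can use this to decide which neighbor receives the forwarded count next.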

System Features

Book Management with Cost

  • Random Cost Generation: Each book is assigned a random cost between 5 and 100
  • Tie-Breaking: Higher cost wins in popular book ties
  • Assignment Formula: Library i manages books i*N to (i+1)*N-1
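Inverting the assignment formula gives the library responsible for any book. A one-line sketch:

```c
#include <assert.h>

/* Since library i manages books i*N .. (i+1)*N-1, the responsible
 * library index is book_id / N (0-based; add 1 for the MPI rank). */
int library_for_book(int book_id, int N) {
    return book_id / N;
}
```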

Error Handling & Validation

  • Configuration Validation: Automatic process count verification
  • Range Checking: All rank validations for safety
  • Graceful Degradation: Proper error messages and fallbacks

Compilation and Execution

Build System

# Standard build
make

# Clean build artifacts
make clean

# Debug build
make debug

Execution Examples

N=2 (9 processes: 1+4+4)

mpirun -np 9 ./library_system 2

N=3 (23 processes: 1+9+13)

mpirun -np 23 ./library_system 3

N=4 (49 processes: 1+16+32)

mpirun -np 49 ./library_system 4

N=5 (88 processes: 1+25+62)

mpirun -np 88 ./library_system 5

Supported Events

1. CONNECT - Client Tree Creation

CONNECT <client_id1> <client_id2>

2. TAKE_BOOK - Book Borrowing

TAKE_BOOK <client_id> <book_id>

3. DONATE_BOOK - Book Donation

DONATE_BOOK <client_id> <book_id> <n_copies>

4. GET_MOST_POPULAR_BOOK - Popular Book Discovery

GET_MOST_POPULAR_BOOK

5. CHECK_NUM_BOOKS_LOANED - Consistency Check

CHECK_NUM_BOOKS_LOANED

File Structure

├── main.c         # Main logic and system startup
├── coordinator.c  # Coordinator process
├── lib.c          # Library processes
├── client.c       # Client processes
├── Makefile       # Build system
├── testfile.txt   # Input file with events
└── README.md      # Documentation

Performance Characteristics

Message Complexity

  • DFS Election: O(E) where E is edges in grid
  • STtoLeader: O(V) where V is vertices in tree
  • Tree Aggregation: O(log n) rounds on a balanced client tree
  • Grid Traversal: O(N²) with spatial locality

Scalability Testing

  • Tested Configurations: N=2 through N=5
  • Maximum Tested: 88 parallel processes
  • Memory Usage: Efficient with fixed-size arrays
  • Network Load: Optimized message patterns

System Requirements

  • MPI Implementation: OpenMPI or MPICH
  • Compiler: mpicc (GCC-based)
  • OS: Linux/Unix
  • Memory: ~100MB for N=5

Algorithm Verification

DFS Spanning Tree Verification

  • Multiple concurrent tree initiation
  • Proper tree merging based on tree_id comparison
  • Parent-child relationship establishment
  • Leadership announcement propagation

STtoLeader Verification

  • Leaf node identification and initiation
  • Convergecast message flow control
  • Collision detection on tree edges
  • Rank-based collision resolution

Aggregation Verification

  • Tree-based popular book finding with cost tie-breaking
  • Grid traversal pattern compliance for libraries
  • Consistency checking between both methods
  • Message complexity optimization

Debugging and Validation

Built-in Validation

  • Memory Leak Detection: Optional checks with valgrind
  • Static Analysis: Code quality checks
  • Performance Profiling: Timing and message metrics

Output Verification

  • Algorithm Tracing: Step-by-step execution logging
  • Consistency Checks: Automatic loan count verification
  • Error Detection: Invalid operation identification
  • Performance Metrics: Message count and timing information

Usage

  1. Compilation:

    make
  2. Execution:

    mpirun -np <total_processes> ./library_system <N>

    where total_processes = 1 + N² + ⌊N³/2⌋ (rounded down for odd N)

  3. Example for N=3:

    mpirun -np 23 ./library_system 3

Compatibility

The system has been tested and works on:

  • Ubuntu 20.04/22.04
  • OpenMPI 4.0+
  • MPICH 3.3+
  • GCC 9.0+

Implementation Details

Message Types

  • Connection establishment (CONNECT, NEIGHBOR, ACK)
  • Leader election (START_LE, EXPLORE, PARENT, ELECT)
  • Book operations (TAKE_BOOK, LEND_BOOK, DONATE_BOOK)
  • Aggregation (GET_MOST_POPULAR_BOOK, CHECK_NUM_BOOKS_LOANED)
  • System control (SHUTDOWN)
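One plausible encoding of these message categories is a single enum of MPI tags; the names follow the list above, but the numeric values are illustrative, not taken from the source:

```c
#include <assert.h>

/* Message tags, grouped as in the list above. */
enum MsgTag {
    TAG_CONNECT, TAG_NEIGHBOR, TAG_ACK,                      /* connection  */
    TAG_START_LE, TAG_EXPLORE, TAG_PARENT, TAG_ELECT,        /* election    */
    TAG_TAKE_BOOK, TAG_LEND_BOOK, TAG_DONATE_BOOK,           /* books       */
    TAG_GET_MOST_POPULAR_BOOK, TAG_CHECK_NUM_BOOKS_LOANED,   /* aggregation */
    TAG_SHUTDOWN                                             /* control     */
};
```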

Key Data Structures

  • LibraryNode: Grid position, neighbors, election state
  • ClientNode: Tree connections, aggregation state
  • BookInfo: ID, cost, loan count for tie-breaking
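The three records might look roughly like the following; the field names are illustrative guesses based on the descriptions above, not copied from the repository's source files:

```c
#include <assert.h>

typedef struct {
    int row, col;           /* position in the N×N grid   */
    int neighbors[4];       /* 4-connectivity ranks       */
    int n_neighbors;
    int parent, tree_id;    /* DFS election state         */
    int is_leader;
} LibraryNode;

typedef struct {
    int parent;             /* tree edge toward the root  */
    int children[16];       /* set by CONNECT events      */
    int n_children;
    int pending_replies;    /* convergecast bookkeeping   */
} ClientNode;

typedef struct {
    int id;
    int cost;               /* 5..100, used in tie-breaks */
    int loan_count;
} BookInfo;
```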

Synchronization

  • Coordinator-driven event processing
  • Barrier synchronization for phase transitions
  • Acknowledgment-based completion detection

Author

Alexandros Papafragkakis
CSD5084
Computer Science Department
University of Crete

About

Distributed library management system built using MPI, featuring DFS-based spanning tree construction, leader election algorithms, and tree-based aggregation, enabling efficient coordination and communication across distributed nodes in a grid-based architecture.
