Thank you for your interest in contributing to pgsqlite! This guide will help you get started.
- Fork the repository on GitHub
- Clone your fork locally
- Create a new branch for your feature or bug fix
- Make your changes
- Run tests to ensure everything works
- Submit a pull request
```bash
# Clone the repository
git clone https://github.com/your-username/pgsqlite
cd pgsqlite

# Build the project
cargo build

# Run tests
cargo test

# Run with debug logging
RUST_LOG=debug cargo run
```
- Follow Rust conventions and idioms
- Use `cargo fmt` to format your code
- Run `cargo clippy` to catch common issues
- Keep code concise and well-documented
- Avoid unnecessary comments in code
```bash
# Run all unit tests
cargo test

# Run integration tests
./run_ssl_tests.sh

# Run specific test
cargo test test_name

# Run tests with output
cargo test -- --nocapture
```
- Write tests for all new functionality
- Test edge cases and error conditions
- Ensure tests actually verify behavior
- Use descriptive test names
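As an illustration of these guidelines, here is a minimal sketch of a descriptive, behavior-verifying test. The helper function and test names are made up for the example and are not part of pgsqlite:

```rust
// Stand-in helper, not a pgsqlite API: strips a trailing semicolon
// (and surrounding whitespace) from a SQL statement.
fn trim_trailing_semicolon(sql: &str) -> &str {
    // Trim whitespace first so "SELECT 1;  " is handled too.
    sql.trim_end().trim_end_matches(';')
}

// Descriptive name, edge case covered, assertion verifies real behavior.
#[test]
fn trim_trailing_semicolon_handles_empty_input() {
    assert_eq!(trim_trailing_semicolon(""), "");
}

#[test]
fn trim_trailing_semicolon_strips_semicolon_and_whitespace() {
    assert_eq!(trim_trailing_semicolon("SELECT 1;  "), "SELECT 1");
}

fn main() {
    // Simple usage demonstration outside the test harness.
    println!("{:?}", trim_trailing_semicolon("SELECT 1;"));
}
```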
When reporting issues, please include:
- SQL statements that reproduce the issue
- Expected behavior - what should happen
- Actual behavior - what actually happened
- Error messages if any
- Environment details (OS, Rust version, etc.)
Title: INSERT with RETURNING clause fails for SERIAL columns
PostgreSQL SQL:

```sql
CREATE TABLE users (
    id SERIAL PRIMARY KEY,
    email VARCHAR(255)
);

INSERT INTO users (email) VALUES ('test@example.com') RETURNING id;
```
Expected: Should return the generated ID
Actual: Error: "RETURNING clause not supported"
Environment: Ubuntu 22.04, Rust 1.75, pgsqlite v0.1.0
Follow this complete checklist to ensure code quality:
- Run `cargo build` - no compilation errors
- Run `cargo test` - all tests pass
- Run `cargo clippy` - no warnings (use `cargo clippy --fix` for automatic fixes)
- Run `cargo fmt` - code is formatted
- Update documentation if needed
- Add tests for new functionality
- Update TODO.md if applicable (mark completed tasks, add new discoveries)
Pre-commit checklist: Run ALL of these before committing:
- `cargo check` - No errors or warnings
- `cargo clippy` - Review and fix warnings where reasonable
- `cargo build` - Successful build
- `cargo test` - All tests pass
- Clear Description: Explain what and why
- Small Changes: Keep PRs focused
- Test Coverage: Include tests
- Documentation: Update if needed
- Clean History: Squash commits if messy
- `feat: Add support for ARRAY types`
- `fix: Handle NULL values in DECIMAL columns`
- `perf: Optimize query cache lookup`
- `docs: Update SSL configuration guide`
When working on pgsqlite:
- Check `TODO.md` for planned work
- Mark items as `[x]` when completed
- Add new items discovered during development
- Document partial progress with notes
- Never use column names to infer types
- Types come from:
  - PostgreSQL type declarations
  - SQLite schema (`PRAGMA table_info`)
  - Explicit casts in queries
  - Value-based inference as a last resort
- Cache aggressively but invalidate correctly
- Prefer batch operations
- Minimize allocations in hot paths
- Profile before optimizing
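The type-resolution priority listed above can be sketched as a strict fall-through over the available sources. All names here are illustrative; pgsqlite's actual internals differ:

```rust
// Hypothetical sketch of the type-resolution priority described above.
// None of these type or field names come from pgsqlite's codebase.

#[derive(Debug, Clone, Copy, PartialEq)]
enum SqlType {
    Integer,
    Text,
    Unknown,
}

// One optional type hint per source, highest priority first.
struct TypeSources {
    pg_declaration: Option<SqlType>,      // PostgreSQL type declaration
    sqlite_schema: Option<SqlType>,       // PRAGMA table_info
    explicit_cast: Option<SqlType>,       // explicit cast in the query
    inferred_from_value: Option<SqlType>, // last-resort value inspection
}

// Resolve strictly by source priority; the column name is never consulted.
fn resolve_type(sources: &TypeSources) -> SqlType {
    sources
        .pg_declaration
        .or(sources.sqlite_schema)
        .or(sources.explicit_cast)
        .or(sources.inferred_from_value)
        .unwrap_or(SqlType::Unknown)
}

fn main() {
    let sources = TypeSources {
        pg_declaration: None,
        sqlite_schema: Some(SqlType::Integer),
        explicit_cast: Some(SqlType::Text),
        inferred_from_value: None,
    };
    // The SQLite schema wins here: it sits above explicit casts
    // in the priority list.
    assert_eq!(resolve_type(&sources), SqlType::Integer);
    println!("resolved: {:?}", resolve_type(&sources));
}
```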
pgsqlite adds ~360x overhead vs pure SQLite (~80ms vs 0.22ms per operation) in exchange for full PostgreSQL compatibility. This overhead is acceptable for most web applications where database operations represent 10-20% of total request time.
Based on comprehensive benchmarking (100 operations each):
For Read-Heavy Workloads - Use psycopg3-binary:
- SELECT: 0.452ms (best read performance)
- Best for applications with frequent SELECT queries
- 21.8x faster SELECT than psycopg2
For Write-Heavy Workloads - Use psycopg2:
- INSERT: 0.214ms (3.6x faster than psycopg3)
- UPDATE/DELETE: ~0.06-0.09ms (2x faster than psycopg3)
- Best for data ingestion and batch updates
For Balanced Workloads - Use psycopg3-text:
- Reasonable performance across all operations
- Good middle ground for mixed usage patterns
- Batch operations: 10-76x speedup for bulk INSERT operations
  - 10-row batches: ~11x faster than single-row INSERTs
  - 100-row batches: ~51x faster
  - 1000-row batches: ~76x faster
- Connection architecture: Connection-per-session provides excellent isolation
- Ultra-fast path: Optimized execution for simple SELECT queries
- Type efficiency: Use native PostgreSQL types to reduce conversion overhead
- Protocol choice: Consider workload pattern when selecting driver
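The batch-operation speedups quoted above come from collapsing many single-row INSERTs into one multi-row statement. A minimal sketch of building such a statement (the function and table/column names are illustrative, not a pgsqlite API):

```rust
// Build a multi-row INSERT with PostgreSQL-style placeholders
// ($1, $2, ...) so N rows go through one statement instead of N.
fn build_batch_insert(table: &str, columns: &[&str], rows: usize) -> String {
    let placeholders: Vec<String> = (0..rows)
        .map(|r| {
            // One numbered placeholder per column in this row.
            let params: Vec<String> = (0..columns.len())
                .map(|c| format!("${}", r * columns.len() + c + 1))
                .collect();
            format!("({})", params.join(", "))
        })
        .collect();
    format!(
        "INSERT INTO {} ({}) VALUES {}",
        table,
        columns.join(", "),
        placeholders.join(", ")
    )
}

fn main() {
    let sql = build_batch_insert("users", &["email", "name"], 3);
    assert_eq!(
        sql,
        "INSERT INTO users (email, name) VALUES ($1, $2), ($3, $4), ($5, $6)"
    );
    println!("{}", sql);
}
```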
- Return PostgreSQL-compatible error codes
- Provide helpful error messages
- Never panic in production code
- Handle all Result types explicitly
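A minimal sketch of what these error-handling rules look like in practice: every error carries a five-character SQLSTATE code plus a readable message, and callers match on the `Result` instead of panicking. The error type and lookup function are hypothetical, not pgsqlite's real API; `42P01` is PostgreSQL's standard `undefined_table` SQLSTATE:

```rust
// Illustrative error type, not pgsqlite's actual one.
#[derive(Debug)]
struct PgError {
    code: &'static str, // SQLSTATE, e.g. "42P01" = undefined_table
    message: String,
}

// Hypothetical lookup: pretend only the "users" table exists.
fn lookup_table(name: &str) -> Result<(), PgError> {
    if name == "users" {
        Ok(())
    } else {
        Err(PgError {
            code: "42P01",
            message: format!("relation \"{}\" does not exist", name),
        })
    }
}

fn main() {
    // Handle the Result explicitly; never unwrap in production paths.
    match lookup_table("orders") {
        Ok(()) => println!("table found"),
        Err(e) => println!("ERROR {}: {}", e.code, e.message),
    }
}
```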
- Improve error messages
- Add more SQL function translations
- Enhance documentation
- Add more integration tests
- New PostgreSQL type support
- Performance optimizations
- Protocol enhancements
- System catalog emulation
- Open an issue for discussion
- Check existing issues and PRs
- Read the architecture documentation
- Ask in pull request comments
By contributing, you agree that your contributions will be licensed under the Apache License 2.0.