A comprehensive collection of modern Docker-based deployment examples for Machine Learning and Data Science applications using Python 3.11, UV package manager, and production-ready architectures.
This repository demonstrates modern best practices for containerizing and deploying ML and data science applications with:
- ✅ **Python 3.11**: Latest stable Python with performance improvements
- ✅ **UV Package Manager**: Lightning-fast dependency resolution and virtual environments
- ✅ **FastAPI**: Modern, high-performance web framework with automatic OpenAPI docs
- ✅ **ONNX Runtime**: Cross-platform, optimized ML model inference
- ✅ **Production Architecture**: Nginx, Redis, PostgreSQL, monitoring
- ✅ **Type Safety**: Full type hints and validation with Pydantic
- ✅ **Multi-stage Builds**: Optimized Docker images for production
- ✅ **Health Monitoring**: Comprehensive health checks and observability
```text
docker_cellar/
├── 1-Basic_App/                # Image processing with scikit-image
├── 2-FastAPI/                  # FastAPI image processing service
├── 3-ML_Pipeline/              # Full production ML pipeline
├── 4-ONNX_Server/              # ONNX Runtime model serving
└── 5-PostgreSQL_and_pgAdmin/   # Database & analytics stack
```
### 1-Basic_App

Modern Python 3.11 application for RGB-to-grayscale conversion.
- Tech Stack: Python 3.11, UV, scikit-image, multi-stage Docker
- Features: Type hints, error handling, structured logging
- Use Case: Basic containerized image processing
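The conversion itself is a weighted sum over the RGB channels. A dependency-free sketch of the idea (the project uses scikit-image's vectorized `rgb2gray`; the helper names and the ITU-R BT.709 luma weights below are illustrative):

```python
def rgb_to_gray(pixel: tuple[float, float, float]) -> float:
    """Convert one RGB pixel (channels in [0, 1]) to a luma value.

    Uses ITU-R BT.709 luma coefficients; scikit-image applies the same
    kind of weighted sum vectorized over a whole image array.
    """
    r, g, b = pixel
    return 0.2126 * r + 0.7152 * g + 0.0722 * b


def convert_image(pixels: list[list[tuple[float, float, float]]]) -> list[list[float]]:
    """Apply the per-pixel conversion to a 2D grid of pixels."""
    return [[rgb_to_gray(p) for p in row] for row in pixels]
```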
```bash
cd 1-Basic_App
docker build -t basic-image-converter .
docker run -it --rm -v "$(pwd)/app:/app" basic-image-converter
```

### 2-FastAPI

High-performance async API for image processing with automatic documentation.
- Tech Stack: FastAPI, Python 3.11, UV, scikit-image, OpenAPI
- Features: Async file upload, CORS, validation, health checks
- Use Case: Production-ready image processing API
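Behind an upload endpoint, validation typically checks the content type and size before any processing runs. A dependency-free sketch of that logic (in FastAPI it would live inside an `async def` endpoint and raise `HTTPException`; the names and limits here are illustrative, not the service's actual code):

```python
ALLOWED_TYPES = {"image/png", "image/jpeg", "image/tiff"}
MAX_BYTES = 10 * 1024 * 1024  # illustrative 10 MB cap


def validate_upload(content_type: str, data: bytes) -> None:
    """Reject uploads the handler should not process.

    Raises ValueError to stay dependency-free; a FastAPI endpoint
    would raise HTTPException with a 4xx status instead.
    """
    if content_type not in ALLOWED_TYPES:
        raise ValueError(f"unsupported content type: {content_type}")
    if len(data) == 0:
        raise ValueError("empty upload")
    if len(data) > MAX_BYTES:
        raise ValueError("file too large")
```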
```bash
cd 2-FastAPI
docker build -t fastapi-image-processor .
docker run --rm -p 8000:8000 fastapi-image-processor
# Visit http://localhost:8000/docs for interactive API docs
```

### 3-ML_Pipeline

Complete production ML pipeline with load balancing, caching, and monitoring.
- Tech Stack: Nginx, FastAPI, ONNX Runtime, Redis, Prometheus
- Features: Load balancing, rate limiting, caching, monitoring, web dashboard
- Use Case: Enterprise-grade ML model serving
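Rate limiting of the kind applied at the edge is commonly implemented as a token bucket. A minimal in-process sketch for intuition only (in this pipeline the real limiting happens in the Nginx configuration, not in Python):

```python
import time


class TokenBucket:
    """Allow up to `rate` requests/second, with bursts up to `capacity`."""

    def __init__(self, rate: float, capacity: float, clock=time.monotonic):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.clock = clock
        self.last = clock()

    def allow(self) -> bool:
        now = self.clock()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False
```

Injecting a fake `clock` makes the behavior deterministic for tests.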
```bash
cd 3-ML_Pipeline
docker-compose up --build
# Visit http://localhost for web dashboard
# Visit http://localhost/api/docs for API docs
# Visit http://localhost:9090 for Prometheus monitoring
```

### 4-ONNX_Server

Optimized ONNX Runtime model serving with FastAPI.
- Tech Stack: ONNX Runtime, FastAPI, Python 3.11, scikit-learn
- Features: Cross-platform inference, model metadata, performance optimization
- Use Case: High-performance model serving for any ONNX-compatible model
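Serving an ONNX model largely reduces to preparing a float32 tensor and calling `InferenceSession.run`. The sketch below shows only the dependency-light preparation step; the `(1, n_features)` shape and the commented `onnxruntime` call are assumptions about a typical scikit-learn-exported model, not this server's actual code:

```python
import numpy as np


def prepare_input(features: list[float]) -> np.ndarray:
    """Shape a flat feature list into the (1, n_features) float32
    batch that a typical sklearn-exported ONNX model expects."""
    arr = np.asarray(features, dtype=np.float32)
    return arr.reshape(1, -1)

# With onnxruntime installed, inference would then look like:
#   session = onnxruntime.InferenceSession("model.onnx")
#   outputs = session.run(None, {session.get_inputs()[0].name: prepare_input(x)})
```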
```bash
cd 4-ONNX_Server
docker build -t onnx-model-server .
docker run --rm -p 8000:8000 onnx-model-server
# Visit http://localhost:8000/docs for API documentation
```

### 5-PostgreSQL_and_pgAdmin

Modern data analytics stack with PostgreSQL, Redis, and an ML model registry.
- Tech Stack: PostgreSQL 16, pgAdmin 4, Redis, FastAPI, SQLAlchemy
- Features: ML model registry, experiment tracking, analytics APIs
- Use Case: Data storage and analytics for ML applications
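At its core, a model registry is a table keyed by model name and version. This stack uses PostgreSQL with SQLAlchemy; the sketch below uses stdlib `sqlite3` purely to stay self-contained, and the schema is illustrative rather than the project's actual one:

```python
import sqlite3


def init_registry(conn: sqlite3.Connection) -> None:
    """Create a minimal model-registry table (illustrative schema)."""
    conn.execute(
        """CREATE TABLE IF NOT EXISTS models (
               name TEXT NOT NULL,
               version INTEGER NOT NULL,
               path TEXT NOT NULL,
               PRIMARY KEY (name, version)
           )"""
    )


def register(conn: sqlite3.Connection, name: str, path: str) -> int:
    """Insert the next version of `name` and return the version number."""
    row = conn.execute(
        "SELECT COALESCE(MAX(version), 0) FROM models WHERE name = ?", (name,)
    ).fetchone()
    version = row[0] + 1
    conn.execute("INSERT INTO models VALUES (?, ?, ?)", (name, version, path))
    return version
```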
```bash
cd 5-PostgreSQL_and_pgAdmin
cp .env.example .env   # Configure your settings
docker-compose up --build
# pgAdmin:       http://localhost:5050
# Adminer:       http://localhost:8080
# Analytics API: http://localhost:8000
```

## Prerequisites

- Docker & Docker Compose
- Python 3.11+ (for local development)
- UV package manager
### Installing UV

```bash
# macOS/Linux
curl -LsSf https://astral.sh/uv/install.sh | sh

# Windows
powershell -c "irm https://astral.sh/uv/install.ps1 | iex"
```

## Local Development

Each project supports local development with UV:
```bash
cd <project-directory>
uv sync                  # Install dependencies
uv run python main.py    # Run application
uv run pytest            # Run tests (if available)
```

## Use Cases

- 1-Basic_App: Command-line data processing
- 2-FastAPI: Simple web API with documentation
- 4-ONNX_Server: Model serving with caching
- 5-PostgreSQL: Database-backed applications
- 3-ML_Pipeline: Full microservices architecture with:
- Load balancing (Nginx)
- API gateway patterns
- Caching strategies (Redis)
- Monitoring & observability (Prometheus)
- Health checks & graceful degradation
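The caching strategy above amounts to Redis `SETEX`-style keys with a time-to-live. A minimal in-process stand-in with the same TTL semantics (real deployments should use Redis; this is an illustration only):

```python
import time


class TTLCache:
    """Tiny SETEX-style cache: values expire `ttl` seconds after insertion."""

    def __init__(self, ttl: float, clock=time.monotonic):
        self.ttl = ttl
        self.clock = clock
        self._store: dict[str, tuple[float, object]] = {}

    def set(self, key: str, value: object) -> None:
        self._store[key] = (self.clock() + self.ttl, value)

    def get(self, key: str, default=None):
        entry = self._store.get(key)
        if entry is None:
            return default
        expires_at, value = entry
        if self.clock() >= expires_at:
            del self._store[key]  # lazy expiry on read
            return default
        return value
```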
## Performance

| Service | Cold Start | Latency | Throughput | Memory |
|---|---|---|---|---|
| 1-Basic_App | ~2s | N/A | N/A | ~50MB |
| 2-FastAPI | ~3s | ~10ms | ~500 RPS | ~100MB |
| 4-ONNX_Server | ~5s | ~5ms | ~1000 RPS | ~150MB |
| 3-ML_Pipeline | ~10s | ~3ms | ~1500 RPS | ~300MB |
## Configuration

Each service supports environment-based configuration:

```bash
# API Configuration
API_ENV=production
LOG_LEVEL=info
WORKERS=4

# Database Configuration
DATABASE_URL=postgresql://user:pass@localhost:5432/db
REDIS_URL=redis://localhost:6379/0

# Model Configuration
MODEL_PATH=/models/
CACHE_TTL=3600
```

Use compose overrides for different environments:
```bash
# Development
docker-compose up

# Production
docker-compose -f docker-compose.yml -f docker-compose.prod.yml up
```

## Health Checks

All services include comprehensive health checks:
- Service health endpoints
- Dependency health monitoring
- Graceful degradation patterns
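A health endpoint usually aggregates per-dependency probes and reports "degraded" rather than failing outright when a dependency is down. A dependency-free sketch of that pattern (the endpoint shape and field names are illustrative, not the services' actual responses):

```python
from typing import Callable


def health_report(checks: dict[str, Callable[[], bool]]) -> dict:
    """Run each dependency probe and summarize overall status.

    Status is "ok" if every probe passes and "degraded" otherwise,
    matching the graceful-degradation pattern described above.
    """
    results = {}
    for name, probe in checks.items():
        try:
            results[name] = bool(probe())
        except Exception:
            results[name] = False  # a crashing probe counts as unhealthy
    status = "ok" if all(results.values()) else "degraded"
    return {"status": status, "dependencies": results}
```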
### Observability

- Prometheus: System and application metrics
- Structured Logging: JSON logs with correlation IDs
- Performance Tracking: Request/response times, throughput
- Error Tracking: Exception monitoring and alerting
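Structured JSON logs with correlation IDs can be produced with the stdlib alone; a minimal sketch (the field names are illustrative):

```python
import json
import logging


class JsonFormatter(logging.Formatter):
    """Render each record as one JSON object, carrying a correlation ID."""

    def format(self, record: logging.LogRecord) -> str:
        return json.dumps({
            "level": record.levelname,
            "message": record.getMessage(),
            # Set via logging's `extra=` argument; "-" when absent.
            "correlation_id": getattr(record, "correlation_id", "-"),
        })


def make_logger(name: str) -> logging.Logger:
    handler = logging.StreamHandler()
    handler.setFormatter(JsonFormatter())
    logger = logging.getLogger(name)
    logger.addHandler(handler)
    logger.setLevel(logging.INFO)
    return logger
```

Usage: `make_logger("api").info("processed image", extra={"correlation_id": "req-123"})` emits one JSON line tagged with the request's ID.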
## Developer Experience

- Interactive Docs: Automatic OpenAPI/Swagger documentation
- Type Safety: Full mypy compatibility
- Code Quality: Black, Ruff formatting and linting
- Testing: Pytest-based testing frameworks
## Deployment

Local development server:

```bash
cd <project>
uv run uvicorn main:app --reload
```

Single container:

```bash
docker build -t service-name .
docker run -p 8000:8000 service-name
```

Full stack with Docker Compose:

```bash
docker-compose up --build
```

Kubernetes:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ml-service
spec:
  replicas: 3
  selector:
    matchLabels:
      app: ml-service
  template:
    metadata:
      labels:
        app: ml-service
    spec:
      containers:
        - name: api
          image: ml-service:latest
          ports:
            - containerPort: 8000
```

## Security

- ✅ Multi-stage Docker builds for minimal attack surface
- ✅ Non-root user execution in containers
- ✅ Security headers (CORS, XSS protection)
- ✅ Input validation and sanitization
- ✅ Rate limiting and request throttling
- ✅ Health check endpoints without sensitive info
- ✅ Environment-based secrets management
## Contributing

1. Fork the repository
2. Create a feature branch (`git checkout -b feature/amazing-feature`)
3. Commit your changes (`git commit -m 'Add amazing feature'`)
4. Push to the branch (`git push origin feature/amazing-feature`)
5. Open a Pull Request
### Guidelines

- Use Python 3.11+ with type hints
- Follow UV project structure with pyproject.toml
- Include comprehensive documentation
- Add health checks and monitoring
- Use multi-stage Docker builds
- Include example usage and tests
## Learning Path

1. Start with 1-Basic_App - Learn modern Python containerization
2. Progress to 2-FastAPI - Understand web APIs and async patterns
3. Explore 4-ONNX_Server - Dive into ML model serving
4. Study 5-PostgreSQL - Learn data persistence and analytics
5. Master 3-ML_Pipeline - Understand production architectures
## License

This project is licensed under the MIT License - see the LICENSE file for details.
## Acknowledgments

- FastAPI - Modern Python web framework
- ONNX Runtime - High-performance ML inference
- UV - Ultra-fast Python package management
- Docker - Containerization platform
- PostgreSQL - Advanced open source database
Built with ❤️ for the ML/Data Science community
Showcasing modern Python development practices with Docker, FastAPI, ONNX Runtime, and production-ready architectures.