
Topic 5: Capstone - Journal API


Welcome to your Python capstone project! You'll be working with a FastAPI + PostgreSQL application that helps people track their daily learning journey. This will prepare you for deploying to the cloud in the next phase.

By the end of this capstone, your API should be working locally and ready for cloud deployment.

⚠️ Important: This Is a Template Repository

Do NOT open Pull Requests against this repository (learntocloud/journal-starter).

This repo is a starter template. Your work should happen on your own fork:

  1. Fork this repo to your GitHub account (click the "Fork" button at the top right).
  2. Clone your fork, not this repo.
  3. Do all your work and open PRs on your fork (github.com/YOUR_USERNAME/journal-starter).

PRs opened against learntocloud/journal-starter will be closed without review.



🚀 Getting Started

Prerequisites

  • Git installed on your machine
  • Docker Desktop installed and running
  • VS Code with the Dev Containers extension

1. Fork and Clone the Repository

Run these commands on your host machine (your local terminal, not inside a container):

  1. Fork this repository to your GitHub account by clicking the "Fork" button at the top right of this page. This creates your own copy of the project under your GitHub account.

    ⚠️ Important: Always clone your fork, not this original repository. All your work and Pull Requests should happen on your fork. Do not open PRs against the original learntocloud/journal-starter repo.

  2. Clone your fork to your local machine (replace YOUR_USERNAME with your actual GitHub username):

    git clone https://github.com/YOUR_USERNAME/journal-starter.git

    Verify your remote points to your fork (not learntocloud):

    git remote -v
    # Should show: origin  https://github.com/YOUR_USERNAME/journal-starter.git
  3. Navigate into the project folder:

    cd journal-starter
  4. Open in VS Code:

    code .

💡 Enable GitHub Actions on your fork: Forks have GitHub Actions workflows disabled by default. Go to the Actions tab on your fork and click "I understand my workflows, go ahead and enable them" to activate CI.

2. Configure Your Environment (.env)

Environment variables live in a .env file (which is git-ignored so you don't accidentally commit secrets). This repo ships with a template named .env-sample.

Copy the sample file to create your real .env. Run this from the project root on your host machine:

cp .env-sample .env

The sample already contains DATABASE_URL (pointing at the devcontainer's Postgres service) and a placeholder for OPENAI_API_KEY. Leave the placeholder in place for Tasks 1–3; you'll replace it with a real token from your chosen LLM provider when you reach Task 4.

Why is the placeholder needed? The app uses pydantic-settings to validate configuration at startup. If OPENAI_API_KEY is missing entirely, Settings() raises a ValidationError before FastAPI boots. Any non-empty string satisfies that validation; tests never call a real LLM because Task 4 is exercised with an injected mock client.
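The fail-fast idea can be illustrated with a simplified stand-in (this is not the actual pydantic-settings code in api/config.py, just a sketch of the behavior; the demo values below are placeholders):

```python
import os

class Settings:
    """Simplified stand-in for the pydantic-settings validation described above."""
    REQUIRED = ("DATABASE_URL", "OPENAI_API_KEY")

    def __init__(self) -> None:
        # Fail fast: refuse to start if any required variable is missing or empty.
        missing = [name for name in self.REQUIRED if not os.getenv(name)]
        if missing:
            raise ValueError(f"Missing required settings: {missing}")
        self.database_url = os.environ["DATABASE_URL"]
        self.openai_api_key = os.environ["OPENAI_API_KEY"]

# Any non-empty placeholder satisfies the check, which is why the sample value
# in .env-sample is enough for Tasks 1-3 (these demo values are invented).
os.environ.setdefault("DATABASE_URL", "postgresql://postgres:postgres@db:5432/journal")
os.environ.setdefault("OPENAI_API_KEY", "placeholder-not-a-real-key")
settings = Settings()
```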

3. Set Up Your Development Environment

  1. Install the Dev Containers extension in VS Code (if not already installed)
  2. Reopen in container: When VS Code detects the .devcontainer folder, click "Reopen in Container"
    • Or use Command Palette (Cmd/Ctrl + Shift + P): Dev Containers: Reopen in Container
  3. Wait for setup: The API container will automatically install Python and the project dependencies and configure your environment. The PostgreSQL database container will also be created automatically.

4. Verify the PostgreSQL Database Is Running

In a terminal on your host machine (not inside VS Code), run:

docker ps

You should see the postgres service running.

5. Run the API

In the VS Code terminal (inside the dev container), verify you're in the project root:

pwd
# Should output: /workspaces/journal-starter (or similar)

Then start the API from the project root:

./start.sh

6. Test Everything Works! 🎉

  1. Visit the API docs: http://localhost:8000/docs
  2. Create your first entry: in the docs UI, use the POST /entries endpoint to create a new journal entry.
  3. View your entries using the GET /entries endpoint to see what you've created!
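If you'd rather script the request than click through the Swagger UI, the same POST can be built with the standard library (the field values below are invented examples; the URL assumes the default local setup):

```python
import json
from urllib import request

# Body for POST /entries; the three fields match the journal data schema.
payload = {
    "work": "Set up the dev container and ran the API",
    "struggle": "Remembering which terminal runs on the host vs. the container",
    "intention": "Start Task 1 (logging setup)",
}
req = request.Request(
    "http://localhost:8000/entries",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# with request.urlopen(req) as resp:   # uncomment with the API running
#     print(resp.status, resp.read().decode())
```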

🎯 Once you can create and see entries, you're ready to start the development tasks!

🔄 Development Workflow

This project comes with several features already built for you: creating entries, listing entries, updating, and deleting all entries. The remaining features are left for you to implement.

We have provided tests so you can verify your implementations are correct without manual testing. When you first run the tests, some will pass (for the pre-built features) and some will fail (for the features you need to build). Your goal is to make all tests pass.

πŸ“ Where to run commands: All commands in this section should be run from the project root in the VS Code terminal (inside the dev container). Do not cd into subdirectories like api/ or tests/ β€” run everything from the top-level project folder.

First-Time Setup

From the project root in the VS Code terminal, install dev dependencies:

uv sync --all-extras

Install the pre-commit hooks so ruff runs automatically on every commit:

uv run pre-commit install

Then run the tests to see the starting state:

uv run pytest

You should see output with 18 failing tests, one group per task you still have to complete:

FAILED tests/test_logging.py::test_root_logger_is_configured_at_info
FAILED tests/test_logging.py::test_api_main_installs_stream_handler_with_formatter
FAILED tests/test_logging.py::test_api_main_emits_startup_log
FAILED tests/test_api.py::TestGetSingleEntry::test_get_entry_by_id_success
FAILED tests/test_api.py::TestGetSingleEntry::test_get_entry_not_found
FAILED tests/test_api.py::TestDeleteEntry::test_delete_entry_success
FAILED tests/test_api.py::TestDeleteEntry::test_delete_entry_not_found
FAILED tests/test_models.py::TestEntryCreateValidation::test_empty_string_rejected
FAILED tests/test_models.py::TestEntryCreateValidation::test_whitespace_only_rejected
FAILED tests/test_models.py::TestEntryCreateValidation::test_whitespace_stripped_from_valid_input
FAILED tests/test_models.py::TestEntryUpdateModel::test_all_fields_optional
FAILED tests/test_models.py::TestEntryUpdateModel::test_partial_update
FAILED tests/test_models.py::TestEntryUpdateModel::test_oversize_field_rejected
FAILED tests/test_api.py::TestUpdateEntry::test_update_rejects_oversize_field
FAILED tests/test_api.py::TestUpdateEntry::test_update_rejects_empty_string
FAILED tests/test_llm_service.py::test_analyze_entry_actually_calls_llm
FAILED tests/test_llm_service.py::test_analyze_entry_sends_entry_text_in_prompt
FAILED tests/test_llm_service.py::test_analyze_entry_returns_valid_analysis_response
===================== 18 failed, 32 passed =====================

The passing tests cover features that are already built for you (creating entries, listing entries, updating, deleting all entries). The 18 failing tests correspond to Tasks 1–4 below; your job is to turn all of them green.

For Each Task

  1. Create a branch

    Branches let you work on features in isolation without affecting the main codebase. From the project root, create one for each task:

    git checkout -b feature/your-feature-name
  2. Implement the feature

    Write your code in the api/ directory. Check the TODO comments in the files for guidance on what to implement.

  3. Run the tests

    After implementing a feature, run the tests from the project root to check if your implementation is correct:

    uv run pytest

    pytest is a testing framework that runs automated tests to verify your code works as expected.

    • Tests failing? Read the error messages; they tell you exactly what's wrong (e.g., assert 501 == 200 means your endpoint is still returning "Not Implemented").
    • Tests passing? Great, your implementation is correct! Move on to the next step.

    Example: Before implementing GET /entries/{entry_id}:

    FAILED tests/test_api.py::TestGetSingleEntry::test_get_entry_by_id_success - assert 501 == 200
    FAILED tests/test_api.py::TestGetSingleEntry::test_get_entry_not_found - assert 501 == 404
    

    After implementing it correctly:

    tests/test_api.py::TestGetSingleEntry::test_get_entry_by_id_success PASSED
    tests/test_api.py::TestGetSingleEntry::test_get_entry_not_found PASSED
    

    💡 Tip: Use uv run pytest -v for verbose output to see each test's pass/fail status, or uv run pytest -v --tb=short to also see concise error details.

    Run the linter from the project root to check code style and catch common mistakes:

    uv run ruff check .

    A linter is a tool that analyzes your code for potential errors, bugs, and style issues without running it. Ruff is a fast Python linter that checks for things like unused imports, incorrect syntax, and code that doesn't follow Python style conventions (PEP 8).

    Run the formatter to auto-format your code (CI also checks formatting):

    uv run ruff format .

    💡 Tip: If you ran uv run pre-commit install earlier, both ruff check and ruff format run automatically on every commit.

    Run the type checker from the project root to ensure proper type annotations:

    uv run pyright

    A type checker verifies that your code uses type hints correctly. Type hints (like def get_entry(entry_id: str) -> dict:) help catch bugs early by ensuring you're passing the right types of data to functions. Pyright is Microsoft's fast Python type checker.

  4. Commit and push (only after tests pass!)

    Once the tests for your feature are passing, commit your changes and push to GitHub. Run from the project root:

    git add .
    git commit -m "Implement feature X"
    git push -u origin feature/your-feature-name
  5. Create a Pull Request (on your fork)

    Go to your fork on GitHub (github.com/YOUR_USERNAME/journal-starter) and open a Pull Request (PR) to merge your feature branch into your own main branch.

    ⚠️ Do NOT open PRs against the original learntocloud/journal-starter repository. Your PR should merge into your fork's main branch. When creating the PR, make sure the "base repository" is YOUR_USERNAME/journal-starter, not learntocloud/journal-starter.

    Example: (screenshot in the original README shows the base-repository dropdown set to YOUR_USERNAME/journal-starter)

⚠️ Do not modify the test files. Make the tests pass by implementing features in the api/ directory. If a test is failing, it means there's something left to implement; read the error message for clues!

🤖 Continuous Integration

Every push and pull request runs the GitHub Actions workflow in .github/workflows/ci.yml, which has two jobs:

| Job | What it checks | How to reproduce locally |
|-----|----------------|--------------------------|
| lint | ruff check, ruff format --check, pyright | uv run ruff check . && uv run ruff format --check . && uv run pyright |
| test | pytest -v against a real Postgres 16 service container, with database_setup.sql applied | uv run pytest -v |

Both jobs run on every push to main and every PR. Your fork will show two green checks on a PR once all your implementations are complete (i.e., Tasks 1–4 are finished). Intermediate PRs that cover only some tasks will still have failing tests in CI; that's expected. No secrets are required: the test job uses a disposable Postgres service container, and Task 4 is exercised entirely with an injected mock OpenAI client, so CI never calls a real LLM.

🎯 Development Tasks

Each task below has a single acceptance check: the listed tests must pass (or the listed manual command must succeed for Task 5).

Task 1 - Logging Setup

  • Branch: feature/logging-setup
  • Edit: api/main.py
  • Acceptance: uv run pytest tests/test_logging.py passes

Configure logging.basicConfig() in api/main.py so the root logger ends up at INFO with at least one handler attached. The journal logger used throughout the service layer must continue to propagate.
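A minimal sketch of what that could look like (the exact format string is up to you; this one is an assumed example, and force=True is a hedge so the call takes effect even if logging was touched earlier):

```python
import logging

# Root logger at INFO with a stream handler attached via basicConfig.
logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(name)s: %(message)s",
    force=True,
)

# The "journal" logger keeps propagate=True (the default), so its records
# reach the root handler configured above.
logger = logging.getLogger("journal")
logger.info("journal API logging configured")
```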

Task 2a - GET Single Entry Endpoint

  • Branch: feature/get-single-entry
  • Edit: api/routers/journal_router.py
  • Acceptance: uv run pytest tests/test_api.py::TestGetSingleEntry passes

Implement GET /entries/{entry_id} to fetch an entry via entry_service.get_entry(entry_id) and return 404 when not found.
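The handler logic reduces to a lookup plus a 404 guard. Here is a framework-free sketch of that logic; the local get_entry function, HTTPException class, and fake store are stand-ins for the real entry_service.get_entry, FastAPI's HTTPException, and the database:

```python
import asyncio

class HTTPException(Exception):
    """Stand-in for fastapi.HTTPException."""
    def __init__(self, status_code: int, detail: str) -> None:
        super().__init__(detail)
        self.status_code = status_code
        self.detail = detail

FAKE_STORE = {"abc-123": {"id": "abc-123", "work": "Learned FastAPI"}}

async def get_entry(entry_id: str):
    """Stand-in for entry_service.get_entry: returns None for unknown ids."""
    return FAKE_STORE.get(entry_id)

async def read_entry(entry_id: str):
    entry = await get_entry(entry_id)
    if entry is None:
        raise HTTPException(status_code=404, detail="Entry not found")
    return entry

found = asyncio.run(read_entry("abc-123"))
try:
    asyncio.run(read_entry("missing"))
    status = 200
except HTTPException as exc:
    status = exc.status_code
```

The DELETE endpoint in Task 2b follows the same pattern: attempt the operation, and raise the 404 when the entry does not exist.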

Task 2b - DELETE Single Entry Endpoint

  • Branch: feature/delete-entry
  • Edit: api/routers/journal_router.py
  • Acceptance: uv run pytest tests/test_api.py::TestDeleteEntry passes

Implement DELETE /entries/{entry_id}, returning 404 when the entry does not exist.

Task 3 - Input Validation

  • Branch: feature/input-validation
  • Edit: api/models/entry.py, api/routers/journal_router.py
  • Acceptance:
    • uv run pytest tests/test_models.py::TestEntryCreateValidation passes
    • uv run pytest tests/test_models.py::TestEntryUpdateModel passes
    • uv run pytest tests/test_api.py::TestUpdateEntry::test_update_rejects_oversize_field passes
    • uv run pytest tests/test_api.py::TestUpdateEntry::test_update_rejects_empty_string passes

Add validation to EntryCreate so empty, whitespace-only, and oversize (>256 char) fields are rejected and surrounding whitespace is stripped. Hint: Annotated[str, StringConstraints(...)] from Pydantic.

Then create an EntryUpdate model in the same file with all three fields optional and the same validation rules, and wire it into the PATCH endpoint in api/routers/journal_router.py.

Task 4 - AI-Powered Entry Analysis

  • Branch: feature/ai-analysis
  • Edit: api/services/llm_service.py
  • Acceptance: uv run pytest tests/test_llm_service.py passes

The POST /entries/{entry_id}/analyze endpoint in api/routers/journal_router.py is already wired up: it fetches the entry, combines the fields into prompt text, calls analyze_journal_entry(), and maps errors to appropriate HTTP responses. Your job is to implement the LLM call itself in api/services/llm_service.py.

See AI Analysis Guide below for the expected response format and LLM provider setup.
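For orientation, here is a hedged sketch of the injected-client pattern the tests rely on: the service only ever talks to the client object it is given (the real AsyncOpenAI in production, a mock under test). Apart from the client.chat.completions.create call shape, every name below is illustrative, not the repo's actual code:

```python
import asyncio
import json
from types import SimpleNamespace

class MockAsyncOpenAI:
    """Illustrative stand-in for the SDK client: records the prompt and
    returns a canned JSON analysis, so no network is involved."""
    def __init__(self) -> None:
        self.last_prompt = None
        self.chat = SimpleNamespace(completions=SimpleNamespace(create=self._create))

    async def _create(self, *, model, messages):
        self.last_prompt = messages[-1]["content"]
        content = json.dumps({"sentiment": "positive",
                              "summary": "Made progress.",
                              "topics": ["FastAPI"]})
        message = SimpleNamespace(content=content)
        return SimpleNamespace(choices=[SimpleNamespace(message=message)])

async def analyze_journal_entry(client, entry_text: str) -> dict:
    """Sketch of the service call: send the entry text, parse the JSON reply."""
    response = await client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user",
                   "content": f"Analyze this journal entry: {entry_text}"}],
    )
    return json.loads(response.choices[0].message.content)

mock = MockAsyncOpenAI()
analysis = asyncio.run(analyze_journal_entry(mock, "Built the DELETE endpoint"))
```

Because the client is injected, swapping the mock for a real AsyncOpenAI instance requires no change to the service code.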

Task 5 - Cloud CLI Setup (manual)

  • Branch: feature/cloud-cli-setup
  • Edit: .devcontainer/devcontainer.json
  • Acceptance: az --version / aws --version / gcloud --version runs successfully in the rebuilt devcontainer

Uncomment exactly one of the cloud CLI features in .devcontainer/devcontainer.json, rebuild the devcontainer, and verify the CLI is installed.

What the automated tests cover

| Task | Automated? | How the tests verify it |
|------|------------|-------------------------|
| 1 - Logging | ✅ | tests/test_logging.py inspects the root logger state after importing api.main |
| 2a - GET single | ✅ | tests/test_api.py::TestGetSingleEntry via the FastAPI test client |
| 2b - DELETE single | ✅ | tests/test_api.py::TestDeleteEntry via the FastAPI test client |
| 3 - Input validation | ✅ | tests/test_models.py unit tests + tests/test_api.py::TestUpdateEntry PATCH validation tests |
| 4 - AI analysis | ✅ | tests/test_llm_service.py injects MockAsyncOpenAI; no real network calls |
| 5 - Cloud CLI | ❌ | Manual verification: run az --version / aws --version / gcloud --version in the rebuilt devcontainer |

📊 Data Schema

Each journal entry follows this structure:

| Field | Type | Description | Validation |
|-------|------|-------------|------------|
| id | string | Unique identifier (UUID) | Auto-generated |
| work | string | What did you work on today? | Required, max 256 characters |
| struggle | string | What's one thing you struggled with today? | Required, max 256 characters |
| intention | string | What will you study/work on tomorrow? | Required, max 256 characters |
| created_at | datetime | When entry was created | Auto-generated UTC |
| updated_at | datetime | When entry was last updated | Auto-updated UTC |
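An illustrative entry that satisfies this schema (all field values invented for the example):

```python
from datetime import datetime, timezone
from uuid import uuid4

now = datetime.now(timezone.utc).isoformat()
entry = {
    "id": str(uuid4()),                       # auto-generated UUID
    "work": "Implemented GET /entries/{entry_id}",
    "struggle": "Mapping a missing entry to a 404",
    "intention": "Tackle the DELETE endpoint",
    "created_at": now,                        # auto-generated UTC timestamp
    "updated_at": now,                        # auto-updated on each change
}

# The three text fields are required and capped at 256 characters.
assert all(0 < len(entry[f]) <= 256 for f in ("work", "struggle", "intention"))
```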

🤖 AI Analysis Guide

For Task 4: AI-Powered Entry Analysis, your endpoint should return this format:

{
  "entry_id": "123e4567-e89b-12d3-a456-426614174000",
  "sentiment": "positive",
  "summary": "The learner made progress with FastAPI and database integration. They're excited to continue learning about cloud deployment.",
  "topics": ["FastAPI", "PostgreSQL", "API development", "cloud deployment"],
  "created_at": "2025-12-25T10:30:00Z"
}

Task 4 setup

This project standardizes on the OpenAI Python SDK, which works as a drop-in client for any OpenAI-compatible provider:

| Provider | Cost | Notes |
|----------|------|-------|
| GitHub Models (default, recommended) | Free | Uses your GitHub account, no credit card needed |
| OpenAI proper | Paid | Standard api.openai.com |
| Azure OpenAI | Paid | Your Azure subscription |
| Groq / Together / OpenRouter / Fireworks / DeepInfra | Varies | All expose OpenAI-compatible endpoints |
| Ollama / LM Studio / vLLM | Free (local) | Run a model on your own machine |

Configure your provider via .env. No GitHub Actions secrets are required, because CI uses an injected mock OpenAI client:

OPENAI_API_KEY=<your token or api key>
OPENAI_BASE_URL=https://models.inference.ai.azure.com
OPENAI_MODEL=gpt-4o-mini

These variables are loaded by api/config.py's Settings class. If you mistype a variable name, Settings() will raise a ValidationError at app startup naming the missing field, rather than a silent None from os.getenv that crashes later.

Optional: once your implementation is in place, sanity-check it against a real provider with the bundled helper script:

uv run python -m scripts.verify_llm

Phase 4 preview: In Phase 4, you'll migrate this same code to a cloud AI platform (Azure OpenAI, AWS Bedrock, or GCP Vertex AI). Since they all support the OpenAI SDK, the migration is just an environment variable change, with no code rewrite needed.

🔧 Troubleshooting

API won't start?

  • Make sure you're running ./start.sh from the project root inside the dev container
  • Check PostgreSQL is running: docker ps (on your host machine)
  • Restart the database: docker restart your-postgres-container-name (on your host machine)

pydantic_core._pydantic_core.ValidationError on startup?

  • One of the required env vars in your .env file is missing or mistyped. The error message names the field (e.g. database_url or openai_api_key). Add it to .env (the defaults in .env-sample are a good starting point) and restart.

Can't connect to database?

  • Verify .env file exists with correct DATABASE_URL
  • Restart dev container: Dev Containers: Rebuild Container

Dev container won't open?

  • Ensure Docker Desktop is running
  • Try: Dev Containers: Rebuild and Reopen in Container

🔄 What To Do If the Upstream Repo Has Changed

If you forked this repository and started working on it, but the original learntocloud/journal-starter repo has since been updated (e.g. a redesign was merged), your fork is now behind. You have two options.

Context: The capstone redesign changed nearly every core file: the API router, models, services, tests, config, and project dependencies. If you had work in progress, expect conflicts in most files you touched.


Option A: Start Fresh (Recommended)

Delete your fork and re-fork. This is the simplest path, especially since the redesign changed the project structure significantly. Your old task code likely won't drop in cleanly anyway.

  1. Save any work you want to keep. Copy files you changed to a folder outside the repo. Focus on saving the logic you wrote (your route handlers, validation code, etc.), not entire files.

  2. Delete your fork on GitHub:

    • Go to your fork: https://github.com/YOUR_USERNAME/journal-starter
    • Settings > scroll to the bottom > Delete this repository
  3. Re-fork the repository by clicking "Fork" on the original repo: https://github.com/learntocloud/journal-starter

  4. Clone your new fork:

    git clone https://github.com/YOUR_USERNAME/journal-starter.git
    cd journal-starter
  5. Re-apply your work by looking at the new file structure and adding your logic back in. Don't copy-paste whole files from your old fork since the structure has changed. Instead, read through the new code and re-implement your task solutions to fit the updated project.


Option B: Sync Your Fork with Upstream (The Git Learning Opportunity)

This is how open-source contributors keep their fork up to date. It's more involved, but it's a valuable skill to learn.

  1. Add the upstream remote (you only need to do this once):

    git remote add upstream https://github.com/learntocloud/journal-starter.git

    Verify it:

    git remote -v
    # origin    https://github.com/YOUR_USERNAME/journal-starter.git (fetch)
    # origin    https://github.com/YOUR_USERNAME/journal-starter.git (push)
    # upstream  https://github.com/learntocloud/journal-starter.git (fetch)
    # upstream  https://github.com/learntocloud/journal-starter.git (push)
  2. Fetch the latest from upstream:

    git fetch upstream
  3. Make sure you're on your main branch:

    git checkout main
  4. Merge upstream changes into your main:

    git merge upstream/main
  5. Handle merge conflicts. You will almost certainly get conflicts. Git will list the conflicting files. Here's how to work through them:

    Open each conflicting file and look for conflict markers like this:

    <<<<<<< HEAD
    # your code
    =======
    # upstream code
    >>>>>>> upstream/main
    

    For this redesign, accept the upstream (incoming) version in most cases. The project structure changed significantly, so the upstream code is the correct foundation. If you had task work in a conflicting file, take note of what you wrote, accept the upstream version, and then re-add your logic on top of the new structure.

    After resolving all conflicts:

    git add .
    git commit -m "Merge upstream changes"
  6. Push the updated main to your fork:

    git push origin main
  7. Update any feature branches you're working on:

    git checkout your-feature-branch
    git merge main
    # Resolve any conflicts the same way as above

💡 Why merge instead of rebase? Merge is safer for beginners. It preserves your commit history and is more straightforward when resolving conflicts. Rebase rewrites history, which can cause issues if you've already pushed your branch. Once you're comfortable with Git, feel free to explore git rebase upstream/main as an alternative.


📚 Extras

📄 License

MIT License - see LICENSE for details.

Contributions welcome! Open an issue to get started.
