Welcome to your Python capstone project! You'll be working with a FastAPI + PostgreSQL application that helps people track their daily learning journey. This will prepare you for deploying to the cloud in the next phase.
By the end of this capstone, your API should be working locally and ready for cloud deployment.
Do NOT open Pull Requests against this repository (learntocloud/journal-starter).
This repo is a starter template. Your work should happen on your own fork:
- Fork this repo to your GitHub account (click the "Fork" button at the top right).
- Clone your fork, not this repo.
- Do all your work and open PRs on your fork (github.com/YOUR_USERNAME/journal-starter).
PRs opened against learntocloud/journal-starter will be closed without review.
- Getting Started
- Development Workflow
- Continuous Integration
- Development Tasks
- Data Schema
- AI Analysis Guide
- Troubleshooting
- What To Do If the Upstream Repo Has Changed
- Extras
- License
- Git installed on your machine
- Docker Desktop installed and running
- VS Code with the Dev Containers extension
Run these commands on your host machine (your local terminal, not inside a container):
- Fork this repository to your GitHub account by clicking the "Fork" button at the top right of this page. This creates your own copy of the project under your GitHub account.

  ⚠️ Important: Always clone your fork, not this original repository. All your work and Pull Requests should happen on your fork. Do not open PRs against the original `learntocloud/journal-starter` repo.
- Clone your fork to your local machine (replace `YOUR_USERNAME` with your actual GitHub username):

  ```
  git clone https://github.com/YOUR_USERNAME/journal-starter.git
  ```
- Verify your remote points to your fork (not `learntocloud`):

  ```
  git remote -v
  # Should show: origin https://github.com/YOUR_USERNAME/journal-starter.git
  ```
- Navigate into the project folder:

  ```
  cd journal-starter
  ```
- Open in VS Code:

  ```
  code .
  ```
💡 Enable GitHub Actions on your fork: Forks have GitHub Actions workflows disabled by default. Go to the Actions tab on your fork and click "I understand my workflows, go ahead and enable them" to activate CI.
Environment variables live in a .env file (which is git-ignored so you don't accidentally commit secrets). This repo ships with a template named .env-sample.
Copy the sample file to create your real .env. Run this from the project root on your host machine:
```
cp .env-sample .env
```

The sample already contains `DATABASE_URL` (pointing at the devcontainer's Postgres service) and a placeholder for `OPENAI_API_KEY`. Leave the placeholder in place for Tasks 1–3; you'll replace it with a real token from your chosen LLM provider when you reach Task 4.

Why is the placeholder needed? The app uses `pydantic-settings` to validate configuration at startup. If `OPENAI_API_KEY` is missing entirely, `Settings()` raises a `ValidationError` before FastAPI boots. Any non-empty string satisfies that validation; tests never call a real LLM because Task 4 is exercised with an injected mock client.
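To make the "presence, not correctness" idea concrete, here is a stdlib-only sketch that mimics what the startup validation does. `SettingsError` and `load_settings` are hypothetical names invented for this illustration; the real check lives in the project's `pydantic-settings` `Settings` class.

```python
class SettingsError(Exception):
    """Raised when a required configuration value is missing."""


def load_settings(env: dict) -> dict:
    # Hypothetical stdlib mimic of the startup check pydantic-settings performs:
    # each required key must be present and non-empty, but the value itself is
    # never verified against a real provider.
    required = ("DATABASE_URL", "OPENAI_API_KEY")
    missing = [key for key in required if not env.get(key)]
    if missing:
        raise SettingsError(f"missing required settings: {missing}")
    return {key: env[key] for key in required}


# Any non-empty placeholder passes the presence check:
settings = load_settings(
    {"DATABASE_URL": "postgresql://localhost/journal", "OPENAI_API_KEY": "placeholder"}
)
```

This is why the `.env-sample` placeholder is enough until Task 4: the check only cares that the key exists.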
- Install the Dev Containers extension in VS Code (if not already installed)
- Reopen in container: When VS Code detects the `.devcontainer` folder, click "Reopen in Container"
  - Or use Command Palette (`Cmd/Ctrl + Shift + P`): `Dev Containers: Reopen in Container`
- Wait for setup: The API container will automatically install Python, dependencies, and configure your environment. The PostgreSQL Database container will also automatically be created.
In a terminal on your host machine (not inside VS Code), run:
```
docker ps
```

You should see the postgres service running.
In the VS Code terminal (inside the dev container), verify you're in the project root:
```
pwd
# Should output: /workspaces/journal-starter (or similar)
```

Then start the API from the project root:

```
./start.sh
```

- Visit the API docs: http://localhost:8000/docs
- Create your first entry: in the docs UI, use the POST `/entries` endpoint to create a new journal entry.
- View your entries using the GET `/entries` endpoint to see what you've created!
🎯 Once you can create and see entries, you're ready to start the development tasks!
This project comes with several features already built for you: creating entries, listing entries, updating, and deleting all entries. The remaining features are left for you to implement.
We have provided tests so you can verify your implementations are correct without manual testing. When you first run the tests, some will pass (for the pre-built features) and some will fail (for the features you need to build). Your goal is to make all tests pass.
📍 Where to run commands: All commands in this section should be run from the project root in the VS Code terminal (inside the dev container). Do not `cd` into subdirectories like `api/` or `tests/`; run everything from the top-level project folder.
From the project root in the VS Code terminal, install dev dependencies:
```
uv sync --all-extras
```

Install the pre-commit hooks so ruff runs automatically on every commit:

```
uv run pre-commit install
```

Then run the tests to see the starting state:

```
uv run pytest
```

You should see output with 18 failing tests, one group per task you still have to complete:
```
FAILED tests/test_logging.py::test_root_logger_is_configured_at_info
FAILED tests/test_logging.py::test_api_main_installs_stream_handler_with_formatter
FAILED tests/test_logging.py::test_api_main_emits_startup_log
FAILED tests/test_api.py::TestGetSingleEntry::test_get_entry_by_id_success
FAILED tests/test_api.py::TestGetSingleEntry::test_get_entry_not_found
FAILED tests/test_api.py::TestDeleteEntry::test_delete_entry_success
FAILED tests/test_api.py::TestDeleteEntry::test_delete_entry_not_found
FAILED tests/test_models.py::TestEntryCreateValidation::test_empty_string_rejected
FAILED tests/test_models.py::TestEntryCreateValidation::test_whitespace_only_rejected
FAILED tests/test_models.py::TestEntryCreateValidation::test_whitespace_stripped_from_valid_input
FAILED tests/test_models.py::TestEntryUpdateModel::test_all_fields_optional
FAILED tests/test_models.py::TestEntryUpdateModel::test_partial_update
FAILED tests/test_models.py::TestEntryUpdateModel::test_oversize_field_rejected
FAILED tests/test_api.py::TestUpdateEntry::test_update_rejects_oversize_field
FAILED tests/test_api.py::TestUpdateEntry::test_update_rejects_empty_string
FAILED tests/test_llm_service.py::test_analyze_entry_actually_calls_llm
FAILED tests/test_llm_service.py::test_analyze_entry_sends_entry_text_in_prompt
FAILED tests/test_llm_service.py::test_analyze_entry_returns_valid_analysis_response
===================== 18 failed, 32 passed =====================
```
The passing tests cover features that are already built for you (creating entries, listing entries, updating, deleting all entries). The 18 failing tests correspond to Tasks 1β4 below β your job is to turn all of them green.
- Create a branch

  Branches let you work on features in isolation without affecting the main codebase. From the project root, create one for each task:

  ```
  git checkout -b feature/your-feature-name
  ```
- Implement the feature

  Write your code in the `api/` directory. Check the TODO comments in the files for guidance on what to implement.
- Run the tests

  After implementing a feature, run the tests from the project root to check if your implementation is correct:

  ```
  uv run pytest
  ```
pytest is a testing framework that runs automated tests to verify your code works as expected.
  - Tests failing? Read the error messages; they tell you exactly what's wrong (e.g., `assert 501 == 200` means your endpoint is still returning "Not Implemented").
  - Tests passing? Great, your implementation is correct! Move on to the next step.

  Example: Before implementing GET /entries/{entry_id}:

  ```
  FAILED tests/test_api.py::TestGetSingleEntry::test_get_entry_by_id_success - assert 501 == 200
  FAILED tests/test_api.py::TestGetSingleEntry::test_get_entry_not_found - assert 501 == 404
  ```

  After implementing it correctly:

  ```
  tests/test_api.py::TestGetSingleEntry::test_get_entry_by_id_success PASSED
  tests/test_api.py::TestGetSingleEntry::test_get_entry_not_found PASSED
  ```

  💡 Tip: Use `uv run pytest -v` for verbose output to see each test's pass/fail status, or `uv run pytest -v --tb=short` to also see concise error details.
- Run the linter

  Run the linter from the project root to check code style and catch common mistakes:

  ```
  uv run ruff check .
  ```

  A linter is a tool that analyzes your code for potential errors, bugs, and style issues without running it. Ruff is a fast Python linter that checks for things like unused imports, incorrect syntax, and code that doesn't follow Python style conventions (PEP 8).

  Run the formatter to auto-format your code (CI also checks formatting):

  ```
  uv run ruff format .
  ```

  💡 Tip: If you ran `uv run pre-commit install` earlier, both `ruff check` and `ruff format` run automatically on every commit.
- Run the type checker

  Run the type checker from the project root to ensure proper type annotations:

  ```
  uv run pyright
  ```

  A type checker verifies that your code uses type hints correctly. Type hints (like `def get_entry(entry_id: str) -> dict:`) help catch bugs early by ensuring you're passing the right types of data to functions. Pyright is Microsoft's fast Python type checker.
- Commit and push (only after tests pass!)

  Once the tests for your feature are passing, commit your changes and push to GitHub. Run from the project root:

  ```
  git add .
  git commit -m "Implement feature X"
  git push -u origin feature/your-feature-name
  ```
- Create a Pull Request (on your fork)

  Go to your fork on GitHub (github.com/YOUR_USERNAME/journal-starter) and open a Pull Request (PR) to merge your feature branch into your own `main` branch.

  ⚠️ Do NOT open PRs against the original `learntocloud/journal-starter` repository. Your PR should merge into your fork's `main` branch. When creating the PR, make sure the "base repository" is `YOUR_USERNAME/journal-starter`, not `learntocloud/journal-starter`.

⚠️ Do not modify the test files. Make the tests pass by implementing features in the `api/` directory. If a test is failing, it means there's something left to implement; read the error message for clues!
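To see the kind of bug the type checker in the workflow above catches, here is a minimal sketch. The `get_entry` name and signature echo the example given for type hints; the body is hypothetical.

```python
def get_entry(entry_id: str) -> dict:
    # Hypothetical, fully annotated function in the style of the project's
    # handlers: pyright checks that callers pass a str and use the dict result.
    return {"id": entry_id, "work": "Learned FastAPI routing"}


entry = get_entry("abc-123")  # OK: str in, dict out
# get_entry(42)               # pyright would flag: "int" is not assignable to "str"
```

Running `uv run pyright` on code like the commented-out call fails before you ever start the server, which is exactly the point of the annotations.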
Every push and pull request runs the GitHub Actions workflow in `.github/workflows/ci.yml`, which has two jobs:

| Job | What it checks | How to reproduce locally |
|---|---|---|
| `lint` | `ruff check`, `ruff format --check`, `pyright` | `uv run ruff check . && uv run ruff format --check . && uv run pyright` |
| `test` | `pytest -v` against a real Postgres 16 service container, with `database_setup.sql` applied | `uv run pytest -v` |
Both jobs run on every push to `main` and every PR. Your fork will show two green checks on a PR once all your implementations are complete (i.e., Tasks 1–4 are finished). Intermediate PRs that cover only some tasks will still have failing tests in CI; that's expected.
No secrets are required: the test job uses a disposable Postgres service container, and Task 4 is exercised entirely with an injected mock OpenAI client, so CI never calls a real LLM.
Each task below has a single acceptance check: the listed tests must pass (or the listed manual command must succeed for Task 5).
- Branch: `feature/logging-setup`
- Edit: `api/main.py`
- Acceptance: `uv run pytest tests/test_logging.py` passes
Configure `logging.basicConfig()` in `api/main.py` so the root logger ends up at `INFO` with at least one handler attached. The `journal` logger used throughout the service layer must continue to propagate.
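One possible shape for this setup is sketched below. It is an illustration, not the graded answer; the `force=True` flag and the format string are choices made for this sketch.

```python
import logging

# force=True replaces any handlers already installed, which keeps the call
# effective even if something configured logging earlier in the process.
logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s %(name)s %(levelname)s %(message)s",
    force=True,
)

journal_logger = logging.getLogger("journal")
# Child loggers propagate to the root handler by default; leave that alone.
assert journal_logger.propagate is True

logging.getLogger(__name__).info("API starting up")
```

After a call like this, the root logger sits at `INFO` with a `StreamHandler` attached, and every `journal.*` logger bubbles its records up to it.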
- Branch: `feature/get-single-entry`
- Edit: `api/routers/journal_router.py`
- Acceptance: `uv run pytest tests/test_api.py::TestGetSingleEntry` passes

Implement `GET /entries/{entry_id}` to fetch an entry via `entry_service.get_entry(entry_id)` and return 404 when not found.
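The lookup-or-404 shape can be sketched in plain Python. `NotFoundError` and `FAKE_DB` are stand-ins invented for this sketch; in the real router you would raise FastAPI's `HTTPException` and call `entry_service.get_entry()`.

```python
class NotFoundError(Exception):
    """Stands in for FastAPI's HTTPException(status_code=404) in this sketch."""


# Toy in-memory store standing in for entry_service.get_entry().
FAKE_DB = {"abc-123": {"id": "abc-123", "work": "Built the GET endpoint"}}


def get_entry(entry_id: str) -> dict:
    entry = FAKE_DB.get(entry_id)
    if entry is None:
        # The real router would: raise HTTPException(status_code=404, detail=...)
        raise NotFoundError(f"Entry {entry_id} not found")
    return entry
```

The key behavior the tests check is the branch: a hit returns the entry, a miss becomes a 404 rather than a 500 or a `None` leaking out.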
- Branch: `feature/delete-entry`
- Edit: `api/routers/journal_router.py`
- Acceptance: `uv run pytest tests/test_api.py::TestDeleteEntry` passes

Implement `DELETE /entries/{entry_id}`, returning 404 when the entry does not exist.
- Branch: `feature/input-validation`
- Edit: `api/models/entry.py`, `api/routers/journal_router.py`
- Acceptance:
  - `uv run pytest tests/test_models.py::TestEntryCreateValidation` passes
  - `uv run pytest tests/test_models.py::TestEntryUpdateModel` passes
  - `uv run pytest tests/test_api.py::TestUpdateEntry::test_update_rejects_oversize_field` passes
  - `uv run pytest tests/test_api.py::TestUpdateEntry::test_update_rejects_empty_string` passes
Add validation to `EntryCreate` so empty, whitespace-only, and oversize (>256 character) fields are rejected and surrounding whitespace is stripped. Hint: `Annotated[str, StringConstraints(...)]` from Pydantic. Then create an `EntryUpdate` model in the same file with all three fields optional and the same validation rules, and wire it into the PATCH endpoint in `api/routers/journal_router.py`.
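The rules the hint points at can be spelled out in plain Python. `validate_field` is a hypothetical helper for illustration; in the model itself, `StringConstraints(strip_whitespace=True, min_length=1, max_length=256)` expresses the same three checks declaratively.

```python
MAX_LEN = 256


def validate_field(value: str) -> str:
    # Same semantics as the declarative constraint: strip surrounding
    # whitespace, reject empty/whitespace-only input, cap the length.
    stripped = value.strip()
    if not stripped:
        raise ValueError("field must not be empty or whitespace-only")
    if len(stripped) > MAX_LEN:
        raise ValueError(f"field must be at most {MAX_LEN} characters")
    return stripped
```

Once you can state the rules this plainly, translating them into the Pydantic annotation is mostly mechanical.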
- Branch: `feature/ai-analysis`
- Edit: `api/services/llm_service.py`
- Acceptance: `uv run pytest tests/test_llm_service.py` passes
The POST `/entries/{entry_id}/analyze` endpoint in `api/routers/journal_router.py` is already wired up: it fetches the entry, combines the fields into prompt text, calls `analyze_journal_entry()`, and maps errors to appropriate HTTP responses. Your job is to implement the LLM call itself in `api/services/llm_service.py`.

See the AI Analysis Guide below for the expected response format and LLM provider setup.
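The SDK call itself depends on your provider, but the parsing half of the work can be sketched with the stdlib. `parse_analysis` is a hypothetical helper invented for this sketch; in a real implementation the `raw` string would come from the chat-completion response content, and the keys match the analysis fields in the AI Analysis Guide.

```python
import json


def parse_analysis(raw: str) -> dict:
    # Turn the model's JSON reply into the fields the endpoint returns.
    # json.loads raises ValueError on malformed output, which your service
    # layer can map to an error response.
    data = json.loads(raw)
    return {
        "sentiment": data["sentiment"],
        "summary": data["summary"],
        "topics": list(data["topics"]),
    }


reply = '{"sentiment": "positive", "summary": "Good progress.", "topics": ["FastAPI"]}'
result = parse_analysis(reply)
```

Keeping the parse step separate from the network call is also what makes the mock-client tests possible: they feed a canned `reply` and assert on the parsed fields.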
- Branch: `feature/cloud-cli-setup`
- Edit: `.devcontainer/devcontainer.json`
- Acceptance: `az --version` / `aws --version` / `gcloud --version` runs successfully in the rebuilt devcontainer
Uncomment exactly one of the cloud CLI features in `.devcontainer/devcontainer.json`, rebuild the devcontainer, and verify the CLI is installed.
| Task | Automated? | How the tests verify it |
|---|---|---|
| 1 – Logging | ✅ | `tests/test_logging.py` inspects the root logger state after importing `api.main` |
| 2a – GET single | ✅ | `tests/test_api.py::TestGetSingleEntry` via the FastAPI test client |
| 2b – DELETE single | ✅ | `tests/test_api.py::TestDeleteEntry` via the FastAPI test client |
| 3 – Input validation | ✅ | `tests/test_models.py` unit tests + `tests/test_api.py::TestUpdateEntry` PATCH validation tests |
| 4 – AI analysis | ✅ | `tests/test_llm_service.py` injects `MockAsyncOpenAI`; no real network calls |
| 5 – Cloud CLI | ❌ | Manual verification: run `az --version` / `aws --version` / `gcloud --version` in the rebuilt devcontainer |
Each journal entry follows this structure:
| Field | Type | Description | Validation |
|---|---|---|---|
| id | string | Unique identifier (UUID) | Auto-generated |
| work | string | What did you work on today? | Required, max 256 characters |
| struggle | string | What's one thing you struggled with today? | Required, max 256 characters |
| intention | string | What will you study/work on tomorrow? | Required, max 256 characters |
| created_at | datetime | When entry was created | Auto-generated UTC |
| updated_at | datetime | When entry was last updated | Auto-updated UTC |
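The schema above maps naturally onto a Python structure. Here is a stdlib sketch; the real project defines its models with Pydantic in `api/models/entry.py`, so treat the `Entry` dataclass below as illustrative only.

```python
import uuid
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class Entry:
    # Mirrors the schema table: three required text fields, plus
    # auto-generated id and UTC timestamps.
    work: str
    struggle: str
    intention: str
    id: str = field(default_factory=lambda: str(uuid.uuid4()))
    created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    updated_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))


e = Entry(work="Built the API", struggle="Async bugs", intention="Deploy to the cloud")
```

Note the defaults: callers supply only the three journal answers, while `id` and the timestamps are generated, which is exactly what "Auto-generated" means in the table.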
For Task 4: AI-Powered Entry Analysis, your endpoint should return this format:

```json
{
  "entry_id": "123e4567-e89b-12d3-a456-426614174000",
  "sentiment": "positive",
  "summary": "The learner made progress with FastAPI and database integration. They're excited to continue learning about cloud deployment.",
  "topics": ["FastAPI", "PostgreSQL", "API development", "cloud deployment"],
  "created_at": "2025-12-25T10:30:00Z"
}
```

This project standardizes on the OpenAI Python SDK, which works as a drop-in client for any OpenAI-compatible provider:
| Provider | Cost | Notes |
|---|---|---|
| GitHub Models (default, recommended) | Free | Uses your GitHub account, no credit card needed |
| OpenAI proper | Paid | Standard api.openai.com |
| Azure OpenAI | Paid | Your Azure subscription |
| Groq / Together / OpenRouter / Fireworks / DeepInfra | Varies | All expose OpenAI-compatible endpoints |
| Ollama / LM Studio / vLLM | Free (local) | Run a model on your own machine |
Configure your provider via `.env`; no GitHub Actions secrets are required, because CI uses an injected mock OpenAI client:

```
OPENAI_API_KEY=<your token or api key>
OPENAI_BASE_URL=https://models.inference.ai.azure.com
OPENAI_MODEL=gpt-4o-mini
```
These variables are loaded by `api/config.py`'s `Settings` class. If you mistype a variable name, `Settings()` will raise a `ValidationError` at app startup naming the missing field, rather than a silent `None` from `os.getenv` that crashes later.
Optional: once your implementation compiles, sanity-check it against a real provider with the bundled helper script:
```
uv run python -m scripts.verify_llm
```

Phase 4 preview: In Phase 4, you'll migrate this same code to a cloud AI platform (Azure OpenAI, AWS Bedrock, or GCP Vertex AI). Since they all support the OpenAI SDK, the migration is just an environment-variable change; no code rewrite needed.
API won't start?
- Make sure you're running `./start.sh` from the project root inside the dev container
- Check PostgreSQL is running: `docker ps` (on your host machine)
- Restart the database: `docker restart your-postgres-container-name` (on your host machine)
`pydantic_core._pydantic_core.ValidationError` on startup?

- One of the required env vars in your `.env` file is missing or mistyped. The error message names the field (e.g. `database_url` or `openai_api_key`). Add it to `.env` (the defaults in `.env-sample` are a good starting point) and restart.
Can't connect to database?
- Verify the `.env` file exists with a correct `DATABASE_URL`
- Restart the dev container: `Dev Containers: Rebuild Container`
Dev container won't open?
- Ensure Docker Desktop is running
- Try: `Dev Containers: Rebuild and Reopen in Container`
If you forked this repository and started working on it, but the original learntocloud/journal-starter repo has since been updated (e.g. a redesign was merged), your fork is now behind. You have two options.
Context: The capstone redesign changed nearly every core file: the API router, models, services, tests, config, and project dependencies. If you had work in progress, expect conflicts in most files you touched.
Delete your fork and re-fork. This is the simplest path, especially since the redesign changed the project structure significantly. Your old task code likely won't drop in cleanly anyway.
- Save any work you want to keep. Copy files you changed to a folder outside the repo. Focus on saving the logic you wrote (your route handlers, validation code, etc.), not entire files.
- Delete your fork on GitHub:
  - Go to your fork: https://github.com/YOUR_USERNAME/journal-starter
  - Settings > scroll to the bottom > Delete this repository
- Re-fork the repository by clicking "Fork" on the original repo: https://github.com/learntocloud/journal-starter
- Clone your new fork:

  ```
  git clone https://github.com/YOUR_USERNAME/journal-starter.git
  cd journal-starter
  ```
Re-apply your work by looking at the new file structure and adding your logic back in. Don't copy-paste whole files from your old fork since the structure has changed. Instead, read through the new code and re-implement your task solutions to fit the updated project.
This is how open-source contributors keep their fork up to date. It's more involved, but it's a valuable skill to learn.
- Add the upstream remote (you only need to do this once):

  ```
  git remote add upstream https://github.com/learntocloud/journal-starter.git
  ```

  Verify it:

  ```
  git remote -v
  # origin    https://github.com/YOUR_USERNAME/journal-starter.git (fetch)
  # origin    https://github.com/YOUR_USERNAME/journal-starter.git (push)
  # upstream  https://github.com/learntocloud/journal-starter.git (fetch)
  # upstream  https://github.com/learntocloud/journal-starter.git (push)
  ```
- Fetch the latest from upstream:

  ```
  git fetch upstream
  ```
- Make sure you're on your main branch:

  ```
  git checkout main
  ```
- Merge upstream changes into your main:

  ```
  git merge upstream/main
  ```
- Handle merge conflicts. You will almost certainly get conflicts. Git will list the conflicting files. Here's how to work through them:

  Open each conflicting file and look for conflict markers like this:

  ```
  <<<<<<< HEAD
  # your code
  =======
  # upstream code
  >>>>>>> upstream/main
  ```

  For this redesign, accept the upstream (incoming) version in most cases. The project structure changed significantly, so the upstream code is the correct foundation. If you had task work in a conflicting file, take note of what you wrote, accept the upstream version, and then re-add your logic on top of the new structure.

  After resolving all conflicts:

  ```
  git add .
  git commit -m "Merge upstream changes"
  ```
- Push the updated main to your fork:

  ```
  git push origin main
  ```
- Update any feature branches you're working on:

  ```
  git checkout your-feature-branch
  git merge main
  # Resolve any conflicts the same way as above
  ```
💡 Why `merge` instead of `rebase`? Merge is safer for beginners. It preserves your commit history and is more straightforward when resolving conflicts. Rebase rewrites history, which can cause issues if you've already pushed your branch. Once you're comfortable with Git, feel free to explore `git rebase upstream/main` as an alternative.
- Explore Your Database - Connect to PostgreSQL and run queries directly
MIT License - see LICENSE for details.
Contributions welcome! Open an issue to get started.
