- Track: Concierge Agents
- Subtitle: A Capstone Project for the Kaggle 5-Day AI Agents Intensive Course. This agent protects user privacy by asking for explicit permission before saving any long-term memories.

This project maps directly to the "Category 1" evaluation criteria.
- The Problem: Standard personal assistants have a major flaw: they save everything you say by default. This creates a privacy nightmare, as users have no control over whether sensitive information (like health data, family birthdays, or private thoughts) is stored permanently.
- The Solution: This agent inverts the model. It is a "Privacy-First" assistant that assumes all information is temporary. It is built to detect sensitive information and must ask for explicit user permission before saving anything to its long-term memory.
- The Value (Core Concept): This project's core idea is building user trust through explicit consent. It gives the user genuine control over their data by using a Human-in-the-Loop (HITL) workflow. The agent can still be a helpful, stateful assistant, but only with facts the user has personally approved.
This agent's architecture and code are designed to satisfy the "Category 2: The Implementation" criteria by using 3+ key concepts from the course.
The agent's workflow centers on "pause and resume" logic. The full process flow and system architecture are visualized below:

This project implements three core concepts from the course notebooks, plus two bonus elements:
- Long-Running Operations (LRO): This is the core of the privacy feature. The agent uses a custom tool, `process_user_fact`, which detects sensitive keywords (e.g., "allergic," "birthday"). When a sensitive fact is found, the tool calls `tool_context.request_confirmation()` to pause the entire agent workflow and wait for the user's "yes/no" response. This is the "Human-in-the-Loop" (HITL) pattern.
- Long-Term Memory: This concept is split into two parts:
  - Saving: If (and only if) the user approves the LRO, the agent resumes, and the tool saves the conversation to the `InMemoryMemoryService` using `await memory_service.add_session_to_memory()`.
  - Retrieving: The agent uses the `preload_memory` tool, which proactively loads all approved facts from long-term memory into the agent's context at the start of every new conversation.
- Context Engineering (Compaction): This is the "forget" mechanism. If the user rejects a save, the fact is not saved to long-term memory. We also enable `EventsCompactionConfig` on the `App` with an aggressive interval. This ensures the rejected fact is quickly and automatically removed from the short-term session history, making the agent truly "forget" it.
- Effective Use of Gemini: The agent's reasoning is powered by `Gemini(model="gemini-2.5-flash")`, satisfying the bonus criteria.
- Custom Tools: The entire project is orchestrated by a custom `FunctionTool` (`process_user_fact`) that handles the complex LRO and memory logic.
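The pause-and-resume flow above can be sketched in plain Python. This is a minimal sketch, not the notebook's actual code: the ADK objects (`tool_context`, the session, the memory service) are duck-typed so the snippet stands alone, and the keyword list, the `hint=` argument, and the helper names are illustrative assumptions.

```python
# Sketch of the consent gate; ADK objects are duck-typed stand-ins.
# SENSITIVE_KEYWORDS, resume_after_confirmation, and hint= are assumptions.
import asyncio

SENSITIVE_KEYWORDS = ("allergic", "birthday", "diagnosis", "password")


def is_sensitive(fact: str) -> bool:
    """Keyword heuristic that decides when to pause for consent."""
    lowered = fact.lower()
    return any(keyword in lowered for keyword in SENSITIVE_KEYWORDS)


def process_user_fact(fact: str, tool_context) -> dict:
    """Pause the agent (LRO / HITL) whenever a fact looks sensitive."""
    if is_sensitive(fact):
        # In the real agent this suspends the workflow until the user answers y/n.
        tool_context.request_confirmation(
            hint=f"Save this to long-term memory? {fact!r}"
        )
        return {"status": "pending_approval", "fact": fact}
    return {"status": "not_saved", "reason": "fact is not sensitive"}


async def resume_after_confirmation(approved: bool, session, memory_service) -> str:
    """Resume path: commit to long-term memory only on explicit approval."""
    if approved:
        # InMemoryMemoryService.add_session_to_memory in the real agent.
        await memory_service.add_session_to_memory(session)
        return "saved"
    # Rejected facts never reach long-term memory; events compaction then
    # scrubs them from short-term session history as well.
    return "discarded"
```

The key design point is that the save is gated on the resumed confirmation, so the default path is always "forget."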
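Roughly, these pieces attach to the agent and app as sketched below. This is an assumed wiring, not a copy of the notebook: import paths, constructor argument names, and the compaction interval value are guesses to be checked against the ADK documentation.

```python
# Rough wiring sketch (assumed argument names; verify against the ADK docs).
from google.adk.agents import Agent
from google.adk.apps import App, EventsCompactionConfig
from google.adk.tools import FunctionTool, preload_memory

root_agent = Agent(
    model="gemini-2.5-flash",  # bonus criterion: Gemini-powered reasoning
    name="privacy_first_assistant",
    tools=[
        FunctionTool(process_user_fact),  # the custom LRO/HITL tool
        preload_memory,  # loads approved facts at the start of each session
    ],
)

app = App(
    name="privacy_first_app",
    root_agent=root_agent,
    # Aggressive compaction so rejected facts leave short-term history quickly.
    events_compaction_config=EventsCompactionConfig(compaction_interval=3),
)
```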
- Google Agent Development Kit (ADK)
- Google Gemini API (gemini-2.5-flash)
- Python 3 & `asyncio`
- Google Colab (for development and demonstration)
- Deployment: Agent Engine (ADK) on Google Cloud (Vertex AI)
This project is designed to run in a Google Colab or Kaggle Notebook.
- Get API Key: Create a Google Gemini API key at Google AI Studio.
- Add Secret: Open the notebook in Google Colab and click the "Secrets" (🔑) tab. Create a new secret named `GOOGLE_API_KEY` and paste your key as the value.
- Run Setup Cells: Run cells 1-6 in the notebook. This will:
  - Install `google-adk`.
  - Import all libraries.
  - Configure the API key.
  - Define the LRO tool, the agent, the app, and the helper functions.
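The "Configure the API key" step typically looks like the cell below when run in Colab (on Kaggle, `kaggle_secrets.UserSecretsClient` plays the same role). This is a sketch of that cell, not a copy of it.

```python
# Colab-only: read the GOOGLE_API_KEY secret and export it for the ADK.
import os

from google.colab import userdata  # available inside Colab runtimes

os.environ["GOOGLE_API_KEY"] = userdata.get("GOOGLE_API_KEY")
```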
Run the final cell (Cell 8: FINAL DEMO: Interactive Chat Loop) to start the agent.
Note: The demo code (cell 7) is a pre-scripted run that shows all features working. The interactive chat (cell 8) is for live demonstration.
You can test the full privacy workflow:
- Test 1: Save a Fact
  - You: `I am allergic to strawberries.`
  - Agent: `(PAUSES)...Assistant requires approval. Approve? (y/n):`
  - You: `y`
  - Agent: `Got it. I've saved this to your long-term profile.`
- Test 2: Reject a Fact
  - You: `My birthday is August 10th.`
  - Agent: `(PAUSES)...Assistant requires approval. Approve? (y/n):`
  - You: `n`
  - Agent: `Okay, I will not save this fact.`
- Test 3: Verify Memory
  - Restart the interactive chat (cell 8) to simulate a new session.
  - You: `What do you know about me?`
  - Agent: `I know that you are allergic to strawberries.`
  - (Notice: it correctly remembers the allergy but not the birthday.)
This project was built for the Kaggle 5-Day AI Agents Intensive Course (Nov 2025).