redis-developer/google_adk_redis_memory_demo

Car Dealership Agent with Redis Agent Memory Server + Google ADK


A car dealership AI agent demonstrating an ADK-first backend wired to Redis Agent Memory Server through adk-redis. The assistant recalls customer preferences across sessions and guides the buying journey from discovery through delivery.

Demo Objectives

  • Long-term memory storage using Redis Agent Memory Server for persistent customer preferences
  • Short-term/working memory using Google ADK session services backed by Redis Agent Memory Server
  • Conversation context retrieval for personalized interactions across sessions
  • Agentic orchestration with Google ADK stages (needs analysis → shortlist → test drive → financing → delivery)

Tech Stack

| Layer | Technology | Purpose |
|---|---|---|
| Memory | Redis Agent Memory Server | Long-term and working memory management |
| Database | Redis Cloud | Vector storage and session persistence |
| Orchestration | Google ADK | Agent runtime and orchestration |
| Backend | FastAPI | Python REST API |
| Frontend | React 18 + TypeScript | User interface |
| Styling | Tailwind CSS | UI styling |
| LLM | OpenAI GPT-4 | Language model |
| Deployment | Docker (primary) + Terraform (deferred) | Containerized local stack with optional IaC path |

This project runs Google ADK as the backend orchestration layer and uses adk-redis for memory integration. In the current runtime, Redis-backed memory is split into:

  • RedisWorkingMemorySessionService for session and working-memory event storage
  • RedisLongTermMemoryService for cross-session recall and preference retrieval
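The split between the two services can be pictured with a toy stand-in in plain Python (these classes are illustrative stand-ins, not the real adk-redis implementations): working memory is keyed by (user_id, session_id), while long-term memory is keyed by user_id alone, which is what makes cross-session recall possible.

```python
from dataclasses import dataclass, field


@dataclass
class WorkingMemory:
    """Toy stand-in for RedisWorkingMemorySessionService: per-session event log."""
    events: dict = field(default_factory=dict)  # (user_id, session_id) -> [events]

    def append(self, user_id, session_id, event):
        self.events.setdefault((user_id, session_id), []).append(event)

    def history(self, user_id, session_id):
        return self.events.get((user_id, session_id), [])


@dataclass
class LongTermMemory:
    """Toy stand-in for RedisLongTermMemoryService: per-user preference store."""
    prefs: dict = field(default_factory=dict)  # user_id -> {key: value}

    def remember(self, user_id, key, value):
        self.prefs.setdefault(user_id, {})[key] = value

    def recall(self, user_id):
        return self.prefs.get(user_id, {})


working = WorkingMemory()
longterm = LongTermMemory()

# Session 1: the raw event lands in working memory; the extracted
# preference is promoted to long-term memory for cross-session recall.
working.append("alice", "s1", "I want a diesel SUV")
longterm.remember("alice", "fuel_type", "diesel")

# Session 2 (new session_id): working memory starts empty,
# but long-term recall still surfaces the stored preference.
assert working.history("alice", "s2") == []
assert longterm.recall("alice") == {"fuel_type": "diesel"}
```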

Prerequisites

  • Python 3.11+
  • Node.js 18+
  • Docker and Docker Compose
  • Redis Cloud account
  • OpenAI API key

Getting Started

1. Clone the Repository

git clone <repository-url>
cd dealership-chatbot-agent-memory-demo

2. Environment Configuration

Create a .env file in the project root:

OPENAI_API_KEY=your_openai_api_key_here
REDIS_URL=redis://default:password@your-redis-cloud-host:port
REDIS_MEMORY_SERVER_URL=http://memory-server:8000
VITE_API_URL=http://localhost:8001

Environment notes by target:

  • Local Docker: keep VITE_API_URL=http://localhost:8001 so the browser talks to your local backend.
  • AWS EC2: set VITE_API_URL=http://<public-ip>:8001 so the built frontend points at the public backend.
  • Backend containers: keep REDIS_MEMORY_SERVER_URL=http://memory-server:8000 because backend services communicate with the memory server over the Docker network.

Use the same codebase and docker-compose.yml for both environments. The key difference is what URL the browser should call.

| Variable | Localhost (Docker) | AWS EC2 (Docker) | Why |
|---|---|---|---|
| VITE_API_URL | http://localhost:8001 | http://<ec2-public-ip-or-dns>:8001 | Frontend build-time value used by the browser |
| REDIS_MEMORY_SERVER_URL | http://memory-server:8000 | http://memory-server:8000 | ADK memory integration URL inside the backend container |

If the backend runs outside Docker (local Python process), use REDIS_MEMORY_SERVER_URL=http://localhost:8000.
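The resolution rule above can be sketched in Python. The explicit environment variable always wins; the RUNNING_IN_DOCKER flag and the fallback order are assumptions for this sketch, not conventions the repo defines:

```python
import os


def resolve_memory_server_url() -> str:
    """Pick the Agent Memory Server URL based on where the backend runs.

    Inside Docker Compose the service hostname `memory-server` resolves;
    a bare local Python process must fall back to localhost. The
    RUNNING_IN_DOCKER flag is a hypothetical convention for this sketch.
    """
    explicit = os.getenv("REDIS_MEMORY_SERVER_URL")
    if explicit:
        return explicit
    if os.getenv("RUNNING_IN_DOCKER") == "1":
        return "http://memory-server:8000"
    return "http://localhost:8000"


# Outside Docker with nothing set: localhost fallback.
os.environ.pop("REDIS_MEMORY_SERVER_URL", None)
os.environ.pop("RUNNING_IN_DOCKER", None)
assert resolve_memory_server_url() == "http://localhost:8000"

# Inside a container: the Compose service hostname.
os.environ["RUNNING_IN_DOCKER"] = "1"
assert resolve_memory_server_url() == "http://memory-server:8000"
```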

For backend runtime defaults, copy and edit:

cp backend/.env.example backend/.env

The backend ADK runtime is environment-driven. Important variables include ADK_APP_NAME, ADK_AGENT_NAME, ADK_MODEL_NAME, ADK_PROVIDER_API_KEY_ENV, REDIS_MEMORY_SERVER_URL, REDIS_MEMORY_NAMESPACE, REDIS_MEMORY_CONTEXT_WINDOW, and REDIS_MEMORY_RECENCY_BOOST.
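A minimal sketch of loading these variables into a typed config object. The default values below are illustrative placeholders; the real defaults live in backend/.env.example:

```python
import os
from dataclasses import dataclass


@dataclass(frozen=True)
class AdkRuntimeConfig:
    app_name: str
    agent_name: str
    model_name: str
    provider_api_key_env: str
    memory_server_url: str
    memory_namespace: str
    context_window: int
    recency_boost: float


def load_adk_config() -> AdkRuntimeConfig:
    # Defaults here are illustrative; check backend/.env.example for the real ones.
    return AdkRuntimeConfig(
        app_name=os.getenv("ADK_APP_NAME", "dealership"),
        agent_name=os.getenv("ADK_AGENT_NAME", "dealership_agent"),
        model_name=os.getenv("ADK_MODEL_NAME", "openai/gpt-4o-mini"),
        provider_api_key_env=os.getenv("ADK_PROVIDER_API_KEY_ENV", "OPENAI_API_KEY"),
        memory_server_url=os.getenv("REDIS_MEMORY_SERVER_URL", "http://localhost:8000"),
        memory_namespace=os.getenv("REDIS_MEMORY_NAMESPACE", "dealership"),
        context_window=int(os.getenv("REDIS_MEMORY_CONTEXT_WINDOW", "20")),
        recency_boost=float(os.getenv("REDIS_MEMORY_RECENCY_BOOST", "0.5")),
    )


cfg = load_adk_config()
assert isinstance(cfg.context_window, int)
```

Parsing the numeric variables once at startup (rather than on every request) also surfaces a malformed value immediately as a ValueError.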

The backend runtime is ADK-only. backend/adk_runtime/runner.py always selects GoogleAdkRunnerFacade, and sparse agent state is rebuilt deterministically from session history when needed.

Recommended minimum .env values for this demo:

OPENAI_API_KEY=your_openai_api_key_here
ADK_MODEL_NAME=openai/gpt-4o-mini
ADK_PROVIDER_API_KEY_ENV=OPENAI_API_KEY
REDIS_URL=redis://default:password@your-redis-cloud-host:port
REDIS_MEMORY_SERVER_URL=http://memory-server:8000
VITE_API_URL=http://localhost:8001

Notes:

  • In Docker Compose, memory-server is the service hostname used by backend containers.
  • VITE_API_URL is a frontend build-time variable. Set it before building the frontend image (docker compose build frontend).
  • REDIS_URL is consumed by the Agent Memory Server container, which writes memory to your Redis instance.
  • The ADK runtime reads Redis memory through REDIS_MEMORY_SERVER_URL and uses adk-redis to connect Google ADK flows to the Agent Memory Server API.
  • docker compose up --build requires REDIS_URL to be set to your Redis Cloud connection string.
  • The backend orchestration path uses Google ADK with OpenAI via OPENAI_API_KEY; there is no legacy fallback runner path.

3. Start Agent Memory Server

Get the pre-built Docker image from Docker Hub:

docker run -p 8000:8000 \
  -e REDIS_URL=redis://default:<password>@<your-redis-host>:<port> \
  -e OPENAI_API_KEY=<your-openai-api-key> \
  redislabs/agent-memory-server:latest \
  agent-memory api --host 0.0.0.0 --port 8000 --task-backend=asyncio

Note: This command starts the Agent Memory Server API with asyncio task backend. You must have a running Redis instance (e.g., Redis Cloud) accessible at the URL you provide.

4. Run with Docker

Build and start all services:

docker compose up --build

Access the application in your browser once the containers are up; the backend API listens at http://localhost:8001.

The Compose file supports both environments:

  • On localhost, use VITE_API_URL=http://localhost:8001.
  • On AWS, set VITE_API_URL=http://<ec2-public-ip-or-dns>:8001 before docker compose build frontend.
  • In both environments, keep backend memory URLs as http://memory-server:8000 when services run in Compose.

5. Run for Development

Backend:

cd backend
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate
pip install -r requirements.txt
python main.py

Frontend:

cd frontend
npm install
npm run dev

Screenshots

Landing Page and Chatbot Interface screenshots (images in the repository).

Architecture

Architecture diagram (image in the repository).

Architecture Flow

User Query
    ↓
[Retrieve Conversation Context] → Load past preferences from long-term memory
    ↓
[Parse Slots] → Extract car preferences using LLM
    ↓
[Ensure Readiness] → Check if all required slots are filled
    ↓
[Decide Next]
    ├→ Missing slots? → Ask follow-up question
    └→ All slots filled? → Advance to next stage
         ↓
    [Workflow Stages]
         ├→ Brand Selected? → Suggest Models
         ├→ Model Selected? → Suggest Test Drive
         ├→ Test Drive Completed? → Suggest Financing
         └→ Financing Completed? → Prepare for Delivery
         ↓
[Save to Memory] → Store conversation and preferences
         ↓
    Response to User
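The Decide Next step above can be sketched as a small slot-and-stage loop. The stage names follow the diagram; the required-slot schema is an assumption for illustration, not the repo's actual schema:

```python
# Stage order mirrors the workflow diagram above.
STAGES = ["needs_analysis", "shortlist", "test_drive", "financing", "delivery"]

# Hypothetical slot schema: only needs_analysis requires slots in this sketch.
REQUIRED_SLOTS = {"needs_analysis": ["body_type", "seats", "budget"]}


def decide_next(stage: str, slots: dict) -> tuple[str, str]:
    """Return (next_stage, action): ask a follow-up or advance the journey."""
    missing = [s for s in REQUIRED_SLOTS.get(stage, []) if s not in slots]
    if missing:
        return stage, f"follow_up:{missing[0]}"  # ask for the first missing slot
    idx = STAGES.index(stage)
    if idx + 1 < len(STAGES):
        return STAGES[idx + 1], "advance"
    return stage, "done"


# Missing slots keep the agent in the same stage with a follow-up question.
assert decide_next("needs_analysis", {"body_type": "SUV"}) == ("needs_analysis", "follow_up:seats")
# All slots filled: advance to the next stage.
assert decide_next("needs_analysis", {"body_type": "SUV", "seats": 5, "budget": 30000}) == ("shortlist", "advance")
# Final stage: nowhere left to advance.
assert decide_next("delivery", {}) == ("delivery", "done")
```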

The orchestration path in this repo is ADK-first: backend/adk_runtime/runner.py selects GoogleAdkRunnerFacade, and the memory layer is wired through backend/adk_runtime/memory_services.py.

Project Structure

dealership-chatbot-agent-memory-demo/
├── backend/
│   ├── main.py              # FastAPI application
│   ├── adk_runtime/         # Google ADK runtime, Redis memory wiring, and reset modules
│   └── requirements.txt     # Python dependencies
├── frontend/
│   ├── src/
│   │   ├── components/      # React components
│   │   └── contexts/        # React contexts
│   ├── package.json
│   └── nginx.conf           # Production server config
├── docker/
│   ├── Dockerfile.backend
│   └── Dockerfile.frontend
├── terraform/
│   ├── main.tf               # AWS infrastructure
│   ├── variables.tf          # Variable definitions
│   ├── outputs.tf            # Output definitions
│   └── user_data.sh          # EC2 bootstrap script
├── docker-compose.yml
└── README.md

Usage

  1. Start a conversation by logging in with any username
  2. Share your preferences (e.g., "I'm looking for a 5-seater SUV")
  3. Browse recommendations based on your requirements
  4. Select a model and schedule a test drive
  5. Complete the journey through financing and delivery planning

The agent remembers your preferences across sessions, so returning customers get personalized recommendations immediately.

That behavior comes from the Redis memory integration used by the ADK runtime:

  • working-memory session events are stored through RedisWorkingMemorySessionService
  • long-term recall is handled through RedisLongTermMemoryService
  • sparse ADK journey state is rebuilt from session history when the structured state is incomplete
  • the backend queries memory using stable user_id and session_id values
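Why stable IDs matter can be seen from a sketch of how memory keys might be derived. The namespace:kind:id layout is an assumption for illustration, not the Agent Memory Server's actual key scheme:

```python
def working_memory_key(namespace: str, user_id: str, session_id: str) -> str:
    """Working memory is scoped to one conversation: user AND session."""
    return f"{namespace}:wm:{user_id}:{session_id}"


def long_term_key(namespace: str, user_id: str) -> str:
    """No session_id: long-term memory is scoped to the user, not the session."""
    return f"{namespace}:ltm:{user_id}"


# A new session gets fresh working memory...
assert working_memory_key("dealership", "demo-user", "s1") != working_memory_key("dealership", "demo-user", "s2")
# ...but resolves to the same long-term store, so preferences survive.
assert long_term_key("dealership", "demo-user") == long_term_key("dealership", "demo-user")
```

If the frontend generated a random user_id per page load, the long-term key would change on every visit and recall would silently return nothing, which is why the backend insists on stable IDs.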

Docker Commands Reference

Start the stack:

docker compose up --build

Useful follow-up commands:

docker compose logs -f
docker compose ps
docker compose down
docker compose up -d --build

To verify memory behavior locally:

curl -s -X POST http://localhost:8001/chat \
  -H 'Content-Type: application/json' \
  -d '{"message":"Remember I want a diesel SUV with 7 seats","user_id":"demo-user","session_id":"demo-session"}'
docker compose logs backend --tail 200
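The same chat request can be built with Python's standard library; the endpoint and payload mirror the curl call above. This sketch only constructs the request so it can be inspected offline, with the actual send shown commented out:

```python
import json
from urllib import request


def build_chat_request(message: str, user_id: str, session_id: str) -> request.Request:
    """Build a POST /chat request matching the curl example."""
    body = json.dumps(
        {"message": message, "user_id": user_id, "session_id": session_id}
    ).encode()
    return request.Request(
        "http://localhost:8001/chat",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )


req = build_chat_request(
    "Remember I want a diesel SUV with 7 seats", "demo-user", "demo-session"
)
assert req.full_url == "http://localhost:8001/chat"
assert json.loads(req.data)["user_id"] == "demo-user"

# To actually send it (requires the stack to be running):
# with request.urlopen(req) as resp:
#     print(resp.read().decode())
```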

Cloud Deployment

Deploy to AWS EC2 using Terraform.

Prerequisites:

  • AWS account with credentials configured
  • Terraform installed (>= 1.0)
  • SSH key pair in AWS EC2

Quick Start:

cd terraform
cp terraform.tfvars.example terraform.tfvars
# Edit terraform.tfvars with your values
terraform init
terraform plan
terraform apply

Full deployment guide: See terraform/README.md for detailed instructions.

License

This project is licensed under the MIT License - see the LICENSE file for details.
