Car dealership AI agent that demonstrates an ADK-first backend wired to Redis Agent Memory Server through adk-redis, so the assistant can recover customer preferences across sessions and guide the buying journey from discovery through delivery.
## Table of Contents

- Demo Objectives
- Tech Stack
- Prerequisites
- Getting Started
- Architecture
- Project Structure
- Usage
- Docker Commands Reference
- Cloud Deployment
- Resources
- Maintainers
- License
## Demo Objectives

- Long-term memory storage using Redis Agent Memory Server for persistent customer preferences
- Short-term/working memory using Google ADK session services backed by Redis Agent Memory Server
- Conversation context retrieval for personalized interactions across sessions
- Agentic orchestration with Google ADK stages (needs analysis → shortlist → test drive → financing → delivery)
## Tech Stack

| Layer | Technology | Purpose |
|---|---|---|
| Memory | Redis Agent Memory Server | Long-term and working memory management |
| Database | Redis Cloud | Vector storage and session persistence |
| Orchestration | Google ADK | Agent runtime and orchestration |
| Backend | FastAPI | Python REST API |
| Frontend | React 18 + TypeScript | User interface |
| Styling | Tailwind CSS | UI styling |
| LLM | OpenAI GPT-4 | Language model |
| Deployment | Docker (primary) + Terraform (deferred) | Containerized local stack with optional IaC path |
This project runs Google ADK as the backend orchestration layer and uses `adk-redis` for memory integration. In the current runtime, Redis-backed memory is split into:

- `RedisWorkingMemorySessionService` for session and working-memory event storage
- `RedisLongTermMemoryService` for cross-session recall and preference retrieval
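To make the split concrete, here is a minimal stand-in sketch of the two memory roles. These classes are illustrative only; the real implementations come from `adk-redis` and are backed by the Redis Agent Memory Server.

```python
from collections import defaultdict


class WorkingMemorySessionService:
    """Per-session event storage (the working-memory role)."""

    def __init__(self):
        self._events = defaultdict(list)  # (user_id, session_id) -> events

    def append_event(self, user_id, session_id, event):
        self._events[(user_id, session_id)].append(event)

    def get_events(self, user_id, session_id):
        return list(self._events[(user_id, session_id)])


class LongTermMemoryService:
    """Cross-session recall keyed only by user_id (the long-term role)."""

    def __init__(self):
        self._facts = defaultdict(list)  # user_id -> remembered facts

    def remember(self, user_id, fact):
        self._facts[user_id].append(fact)

    def recall(self, user_id, query):
        # Real recall is a vector search; here it is a keyword filter.
        return [f for f in self._facts[user_id] if query.lower() in f.lower()]


working = WorkingMemorySessionService()
long_term = LongTermMemoryService()

# Session 1: the event lives in working memory; the preference is promoted.
working.append_event("alice", "s1", "user: I want a diesel SUV")
long_term.remember("alice", "prefers a diesel SUV")

# Session 2: working memory is empty, but long-term recall still works.
assert working.get_events("alice", "s2") == []
assert long_term.recall("alice", "suv") == ["prefers a diesel SUV"]
```

The key point the sketch captures is the keying: working memory is scoped to a `(user_id, session_id)` pair, while long-term memory survives across sessions because it is keyed by `user_id` alone.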
## Prerequisites

- Python 3.11+
- Node.js 18+
- Docker and Docker Compose
- Redis Cloud account
- OpenAI API key
## Getting Started

```bash
git clone <repository-url>
cd dealership-chatbot-agent-memory-demo
```

Create a `.env` file in the project root:

```bash
OPENAI_API_KEY=your_openai_api_key_here
REDIS_URL=redis://default:password@your-redis-cloud-host:port
REDIS_MEMORY_SERVER_URL=http://memory-server:8000
VITE_API_URL=http://localhost:8001
```

Environment notes by target:

- Local Docker: keep `VITE_API_URL=http://localhost:8001` so the browser talks to your local backend.
- AWS EC2: set `VITE_API_URL=http://<public-ip>:8001` so the built frontend points at the public backend.
- Backend containers: keep `REDIS_MEMORY_SERVER_URL=http://memory-server:8000` because backend services communicate with the memory server over the Docker network.
Use the same codebase and `docker-compose.yml` for both environments. The key difference is which URL the browser should call.
| Variable | Localhost (Docker) | AWS EC2 (Docker) | Why |
|---|---|---|---|
| `VITE_API_URL` | `http://localhost:8001` | `http://<ec2-public-ip-or-dns>:8001` | Frontend build-time value used by the browser |
| `REDIS_MEMORY_SERVER_URL` | `http://memory-server:8000` | `http://memory-server:8000` | ADK memory integration URL inside the backend container |
If the backend runs outside Docker (local Python process), use `REDIS_MEMORY_SERVER_URL=http://localhost:8000`.
For backend runtime defaults, copy and edit:

```bash
cp backend/.env.example backend/.env
```

The backend ADK runtime is environment-driven. Important variables include `ADK_APP_NAME`, `ADK_AGENT_NAME`, `ADK_MODEL_NAME`, `ADK_PROVIDER_API_KEY_ENV`, `REDIS_MEMORY_SERVER_URL`, `REDIS_MEMORY_NAMESPACE`, `REDIS_MEMORY_CONTEXT_WINDOW`, and `REDIS_MEMORY_RECENCY_BOOST`.
The backend runtime is ADK-only. `backend/adk_runtime/runner.py` always selects `GoogleAdkRunnerFacade`, and sparse agent state is rebuilt deterministically from session history when needed.
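As a hypothetical sketch of what "rebuilt deterministically from session history" can mean: replay recorded session events into a journey-state dict, then derive the current stage from which fields are filled. The field names (`brand`, `model`, and so on) are illustrative assumptions, not the project's exact schema.

```python
STAGE_ORDER = ["needs_analysis", "shortlist", "test_drive", "financing", "delivery"]


def rebuild_state(events, state=None):
    """Deterministically replay session events into a journey-state dict."""
    state = dict(state or {})
    for key, value in events:
        state.setdefault(key, value)  # first write wins -> deterministic replay
    return state


def current_stage(state):
    """Pick the furthest stage whose prerequisites are satisfied."""
    if state.get("financing_done"):
        return "delivery"
    if state.get("test_drive_done"):
        return "financing"
    if state.get("model"):
        return "test_drive"
    if state.get("brand"):
        return "shortlist"
    return "needs_analysis"


# A sparse structured state (only "model") plus session history yields
# the full state, and the stage falls out of it.
history = [("brand", "Toyota"), ("model", "RAV4")]
state = rebuild_state(history, state={"model": "RAV4"})
assert state == {"brand": "Toyota", "model": "RAV4"}
assert current_stage(state) == "test_drive"
```

Because the replay uses `setdefault`, running it any number of times over the same history produces the same state, which is what makes the rebuild safe to repeat.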
Recommended minimum `.env` values for this demo:

```bash
OPENAI_API_KEY=your_openai_api_key_here
ADK_MODEL_NAME=openai/gpt-4o-mini
ADK_PROVIDER_API_KEY_ENV=OPENAI_API_KEY
REDIS_URL=redis://default:password@your-redis-cloud-host:port
REDIS_MEMORY_SERVER_URL=http://memory-server:8000
VITE_API_URL=http://localhost:8001
```

Notes:

- In Docker Compose, `memory-server` is the service hostname used by backend containers.
- `VITE_API_URL` is a frontend build-time variable. Set it before building the frontend image (`docker compose build frontend`).
- `REDIS_URL` is consumed by the Agent Memory Server container, which writes memory to your Redis instance.
- The ADK runtime reads Redis memory through `REDIS_MEMORY_SERVER_URL` and uses `adk-redis` to connect Google ADK flows to the Agent Memory Server API.
- `docker compose up --build` requires `REDIS_URL` to be set to your Redis Cloud connection string.
- The backend orchestration path uses Google ADK with OpenAI via `OPENAI_API_KEY`; there is no legacy fallback runner path.
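The `ADK_MODEL_NAME` / `ADK_PROVIDER_API_KEY_ENV` pair suggests an indirection worth spelling out: one variable names the model, the other names *which* environment variable holds the provider key. Here is a hedged sketch of that resolution logic; the variable names follow this README, but the function itself is an assumption, not the project's code.

```python
import os


def resolve_llm_config(env=None):
    """Resolve model name and provider API key from environment variables."""
    env = os.environ if env is None else env
    model = env.get("ADK_MODEL_NAME", "openai/gpt-4o-mini")
    # ADK_PROVIDER_API_KEY_ENV names *which* variable holds the key,
    # so one setting can point at OPENAI_API_KEY or another provider's key.
    key_var = env.get("ADK_PROVIDER_API_KEY_ENV", "OPENAI_API_KEY")
    api_key = env.get(key_var)
    if not api_key:
        raise RuntimeError(f"Set {key_var} before starting the backend")
    return {"model": model, "api_key": api_key}


cfg = resolve_llm_config({"OPENAI_API_KEY": "sk-test"})
assert cfg["model"] == "openai/gpt-4o-mini"
assert cfg["api_key"] == "sk-test"
```

This indirection is why switching providers only requires changing two variables instead of editing code.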
Get the pre-built Docker image from Docker Hub:

```bash
docker run -p 8000:8000 \
  -e REDIS_URL=redis://default:<password>@<your-redis-host>:<port> \
  -e OPENAI_API_KEY=<your-openai-api-key> \
  redislabs/agent-memory-server:latest \
  agent-memory api --host 0.0.0.0 --port 8000 --task-backend=asyncio
```

Note: this command starts the Agent Memory Server API with the asyncio task backend. You must have a running Redis instance (e.g., Redis Cloud) accessible at the URL you provide.
Build and start all services:

```bash
docker-compose up --build
```

Access the application:

- Frontend: http://localhost:3000
- Backend API: http://localhost:8001
- Memory Server: http://localhost:8000
The Compose file is set up to support both environments:

- On localhost, use `VITE_API_URL=http://localhost:8001`.
- On AWS, set `VITE_API_URL=http://<ec2-public-ip-or-dns>:8001` before `docker compose build frontend`.
- In both environments, keep backend memory URLs as `http://memory-server:8000` when services run in Compose.
Backend:

```bash
cd backend
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate
pip install -r requirements.txt
python main.py
```

Frontend:

```bash
cd frontend
npm install
npm run dev
```

## Architecture

```
User Query
    ↓
[Retrieve Conversation Context] → Load past preferences from long-term memory
    ↓
[Parse Slots] → Extract car preferences using LLM
    ↓
[Ensure Readiness] → Check if all required slots are filled
    ↓
[Decide Next]
    ├→ Missing slots? → Ask follow-up question
    └→ All slots filled? → Advance to next stage
    ↓
[Workflow Stages]
    ├→ Brand Selected? → Suggest Models
    ├→ Model Selected? → Suggest Test Drive
    ├→ Test Drive Completed? → Suggest Financing
    └→ Financing Completed? → Prepare for Delivery
    ↓
[Save to Memory] → Store conversation and preferences
    ↓
Response to User
```
The orchestration path in this repo is ADK-first: `backend/adk_runtime/runner.py` selects `GoogleAdkRunnerFacade`, and the memory layer is wired through `backend/adk_runtime/memory_services.py`.
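The "Decide Next" branch in the flow above reduces to a simple rule: ask for the first missing slot, otherwise advance. A minimal sketch, with slot names assumed for the demo rather than taken from the project's actual schema:

```python
# Illustrative required slots for the needs-analysis stage (assumptions).
REQUIRED_SLOTS = ["brand", "body_type", "seats", "budget"]


def decide_next(slots):
    """Ask a follow-up for the first missing slot, or advance the stage."""
    missing = [s for s in REQUIRED_SLOTS if not slots.get(s)]
    if missing:
        return {"action": "ask", "slot": missing[0]}
    return {"action": "advance"}


# Only brand is known -> the agent asks about the next missing slot.
assert decide_next({"brand": "Toyota"}) == {"action": "ask", "slot": "body_type"}

# All slots filled -> the journey advances to the next workflow stage.
assert decide_next(
    {"brand": "Toyota", "body_type": "SUV", "seats": 7, "budget": 40000}
) == {"action": "advance"}
```

Keeping the decision a pure function of the slot dict is what lets the same logic run whether the slots came from the current turn or were rehydrated from memory.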
## Project Structure

```
dealership-chatbot-agent-memory-demo/
├── backend/
│   ├── main.py              # FastAPI application
│   ├── adk_runtime/         # Google ADK runtime, Redis memory wiring, and reset modules
│   └── requirements.txt     # Python dependencies
├── frontend/
│   ├── src/
│   │   ├── components/      # React components
│   │   └── contexts/        # React contexts
│   ├── package.json
│   └── nginx.conf           # Production server config
├── docker/
│   ├── Dockerfile.backend
│   └── Dockerfile.frontend
├── terraform/
│   ├── main.tf              # AWS infrastructure
│   ├── variables.tf         # Variable definitions
│   ├── outputs.tf           # Output definitions
│   └── user_data.sh         # EC2 bootstrap script
├── docker-compose.yml
└── README.md
```
## Usage

- Start a conversation by logging in with any username
- Share your preferences (e.g., "I'm looking for a 5-seater SUV")
- Browse recommendations based on your requirements
- Select a model and schedule a test drive
- Complete the journey through financing and delivery planning
The agent remembers your preferences across sessions, so returning customers get personalized recommendations immediately.
That behavior comes from the Redis memory integration used by the ADK runtime:

- working-memory session events are stored through `RedisWorkingMemorySessionService`
- long-term recall is handled through `RedisLongTermMemoryService`
- sparse ADK journey state is rebuilt from session history when the structured state is incomplete
- the backend queries memory using stable `user_id` and `session_id` values
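The `REDIS_MEMORY_RECENCY_BOOST` setting mentioned earlier hints at how recall is ranked: relevance alone can surface stale preferences, so newer memories get a boost. The weighting formula below is purely an assumption to illustrate what such a knob could control, not the Agent Memory Server's actual scoring.

```python
import math


def rank_memories(memories, boost=0.3, half_life_days=30.0):
    """Rank (text, relevance, age_days) memories, boosting newer ones.

    Score = (1 - boost) * relevance + boost * recency, where recency
    decays exponentially with the memory's age (illustrative formula).
    """
    def score(memory):
        _, relevance, age_days = memory
        recency = math.exp(-math.log(2) * age_days / half_life_days)
        return (1 - boost) * relevance + boost * recency

    return [m[0] for m in sorted(memories, key=score, reverse=True)]


memories = [
    ("wants a diesel SUV", 0.9, 60.0),   # highly relevant but two months old
    ("now prefers hybrids", 0.8, 1.0),   # slightly less relevant, but fresh
]

# With a strong recency boost, the fresh preference wins the ranking.
assert rank_memories(memories, boost=0.5)[0] == "now prefers hybrids"
```

The practical takeaway is the trade-off the knob exposes: a boost of 0 ranks purely by relevance, while larger values let recent conversations override older stored preferences.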
## Docker Commands Reference

Start the stack:

```bash
docker compose up --build
```

Useful follow-up commands:

```bash
docker compose logs -f
docker compose ps
docker compose down
docker compose up -d --build
```

To verify memory behavior locally:

```bash
curl -s -X POST http://localhost:8001/chat \
  -H 'Content-Type: application/json' \
  -d '{"message":"Remember I want a diesel SUV with 7 seats","user_id":"demo-user","session_id":"demo-session"}'
docker compose logs backend --tail 200
```

## Cloud Deployment

Deploy to AWS EC2 using Terraform.
Prerequisites:
- AWS account with credentials configured
- Terraform installed (>= 1.0)
- SSH key pair in AWS EC2
Quick Start:

```bash
cd terraform
cp terraform.tfvars.example terraform.tfvars
# Edit terraform.tfvars with your values
terraform init
terraform plan
terraform apply
```

Full deployment guide: see `terraform/README.md` for detailed instructions.
## Maintainers

- Bhavana Giri — @bhavanagiri
## License

This project is licensed under the MIT License - see the LICENSE file for details.


