AI Recipe Generator (Multi-Reasoning + Voice)

An end-to-end AI-powered recipe generation system supporting advanced LLM reasoning strategies — Chain-of-Thought (CoT), ReAct, and Tree-of-Thought (ToT) — along with voice-based input.

Built using FastAPI (backend), React (frontend), and Gemini LLM, this project demonstrates modern LLM orchestration, reasoning pipelines, structured outputs, and clean result management.

Features

Reasoning Modes

  • Chain-of-Thought (CoT) – step-by-step logical reasoning
  • ReAct – structured reasoning with action-oriented responses
  • Tree-of-Thought (ToT) – multi-sample reasoning with scoring and best-answer selection
  • Voice Input – generate recipes by speaking instead of typing
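
The Tree-of-Thought mode described above can be sketched as "sample several candidates, score each, keep the best." A minimal illustration, assuming hypothetical `sample_candidates` and `score` helpers (the project's real implementation in `backend/models/tot.py` calls the Gemini client and uses its own scoring):

```python
def sample_candidates(prompt, n=3):
    # Stand-in for n separate LLM calls; a real version would query Gemini.
    return [f"{prompt} -> candidate {i}" for i in range(n)]

def score(candidate):
    # Stand-in scorer; the project scores candidates before selection.
    return len(candidate)

def tree_of_thought(prompt, n=3):
    # Multi-sample reasoning: generate n answers, return the highest-scoring one.
    candidates = sample_candidates(prompt, n)
    return max(candidates, key=score)
```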

Dashboard

  • View responses for each reasoning mode
  • Timestamped, structured outputs
  • Unified result view across CoT, ReAct, and ToT

Results Management

  • All final outputs are stored in a single tracked directory:

backend/results/

  • No unnecessary nested folders for ToT
  • Clean, readable JSON outputs for inspection and debugging
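
A results manager that writes timestamped JSON into a single flat directory might look like the following. This is an illustrative sketch, not the project's actual `results_manager.py`; the filename scheme and payload shape are assumptions:

```python
import json
import time
from pathlib import Path

RESULTS_DIR = Path("backend/results")  # the single tracked results folder

def save_result(mode, payload, results_dir=RESULTS_DIR):
    """Write one timestamped JSON file, e.g. backend/results/cot_1700000000.json."""
    results_dir.mkdir(parents=True, exist_ok=True)
    path = results_dir / f"{mode}_{int(time.time())}.json"
    record = {"mode": mode, "timestamp": time.time(), "result": payload}
    path.write_text(json.dumps(record, indent=2))
    return path
```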

Project Structure


AI-recipe-generator/
│
├── backend/
│ ├── main.py
│ ├── models/
│ │ ├── cot.py
│ │ ├── react.py
│ │ └── tot.py
│ │
│ ├── gemini/
│ │ └── client.py
│ │
│ ├── utils/
│ │ ├── results_manager.py
│ │ └── memory_manager.py
│ │
│ ├── results/ # MAIN results folder (tracked in Git)
│ │
│ └── requirements.txt
│
├── frontend/
│ ├── src/
│ │ ├── components/
│ │ │ └── Dashboard.jsx
│ │ └── App.jsx
│ │
│ └── package.json
│
├── .env
├── .gitignore
└── README.md


Setup Instructions

1. Backend Setup

cd backend
conda activate transformer-2   # or your preferred environment
pip install -r requirements.txt

Create a .env file:

GEMINI_API_KEY=your_api_key_here
USE_MOCK_GEMINI=false

Run the backend:

uvicorn main:app --reload

Backend will run on:

http://127.0.0.1:8000

2. Frontend Setup

cd frontend
npm install
npm run dev

Frontend will run on:

http://localhost:5173

API Endpoints

| Method | Endpoint | Description |
| ------ | -------- | ----------- |
| POST | /cot | Chain-of-Thought reasoning |
| POST | /react | ReAct reasoning |
| POST | /tot | Tree-of-Thought reasoning |
| POST | /voice | Voice-based recipe generation |
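
With the backend running, a request to any endpoint is a plain JSON POST. A hedged example using only the standard library; the payload field names here are assumptions, since the actual request models are defined in main.py:

```python
import json
import urllib.request

BASE_URL = "http://127.0.0.1:8000"  # backend address from the setup above

# Hypothetical payload; check main.py for the real field names.
payload = {"ingredients": "tomato, basil, pasta", "servings": 2}

req = urllib.request.Request(
    f"{BASE_URL}/cot",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# urllib.request.urlopen(req)  # uncomment once the backend is running
```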

Technical Highlights

  • Gemini Python SDK with prompt-enforced JSON outputs
  • Safe JSON parsing with graceful fallback handling
  • No unsupported SDK schema usage
  • Modular, extensible reasoning architecture
  • Production-safe FastAPI backend design
  • Mock mode support for quota-free testing
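
"Safe JSON parsing with graceful fallback" typically means tolerating markdown fences and surrounding chatter in an LLM reply. A minimal sketch of that idea (the project's actual parser may differ):

```python
import json
import re

def parse_llm_json(text):
    """Parse JSON from an LLM reply; never raise, always return a dict."""
    try:
        return json.loads(text)
    except json.JSONDecodeError:
        pass
    # Fall back to the first {...} block, e.g. inside a ```json fence.
    match = re.search(r"\{.*\}", text, re.DOTALL)
    if match:
        try:
            return json.loads(match.group(0))
        except json.JSONDecodeError:
            pass
    # Graceful fallback: surface the raw text instead of crashing.
    return {"raw": text, "parse_error": True}
```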

Notes

  • backend/results/ is intentionally version-controlled
  • backend/utils/results/ is ignored to avoid local clutter
  • Mock mode enables testing without consuming Gemini API quota
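
The USE_MOCK_GEMINI flag from the .env file could gate client construction as below. This is an illustrative sketch with a hypothetical `MockClient`, not the project's actual `gemini/client.py`:

```python
import os

def get_gemini_client():
    """Return a mock client when USE_MOCK_GEMINI=true, so tests cost no quota."""
    if os.getenv("USE_MOCK_GEMINI", "false").lower() == "true":
        class MockClient:
            def generate(self, prompt):
                # Canned JSON reply; no network call, no API quota used.
                return '{"recipe": "mock recipe for: ' + prompt + '"}'
        return MockClient()
    # Real path would build the Gemini SDK client using GEMINI_API_KEY.
    raise RuntimeError("GEMINI_API_KEY required when mock mode is disabled")
```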

👨‍💻 Author

Aryaman Jain (AI / ML / LLM Engineering)


About

Developed an AI-powered recipe generator leveraging Gemini LLM and structured reasoning methods (CoT, ToT, ReAct). Includes voice-based queries, semantic answer evaluation, and persistent result/history storage via a FastAPI backend and React frontend.
