- Overview
- Technical Aspect
- Installation
- Usage
- Configuration
- Examples
- Directory Tree
- Troubleshooting
- Bug / Feature Request
- Technologies Used
This project is an AI Interview Chatbot that uses OpenAI's GPT models (GPT-4o-mini by default) and the Streamlit framework. The chatbot generates job-specific interview questions and evaluates candidate responses using advanced language model capabilities.
*Demo video: `Interview-bot-Video.mp4`*
The Interview Chatbot project consists of three main components:
- Question Generation: Creates job-specific interview questions based on a configurable job description
- Response Collection: Interactive chat interface for collecting candidate answers
- Evaluation: AI-powered assessment of candidate responses against job requirements
The project uses OpenAI's API with support for custom base URLs, making it compatible with OpenAI-compatible endpoints.
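As a rough sketch of what that compatibility looks like in code (the helper below is illustrative, not the project's actual `utils.py`), a single `openai` client can be pointed at any compatible endpoint via its base URL:

```python
import os

from openai import OpenAI  # openai-python v1+ client

# One client works against the official API, a proxy, or a local Ollama
# server, depending on which base URL the environment provides.
client = OpenAI(
    api_key=os.environ.get("OPENAI_API_KEY", "ollama"),
    base_url=os.environ.get("OPENAI_BASE_URL", "https://api.openai.com/v1"),
)

def generate_questions(job_description: str) -> str:
    """Hypothetical helper: request job-specific interview questions."""
    response = client.chat.completions.create(
        model=os.environ.get("OPENAI_MODEL", "gpt-4o-mini"),
        messages=[
            {"role": "system", "content": "You are a job interviewer."},
            {"role": "user", "content": f"Write interview questions for this role:\n{job_description}"},
        ],
    )
    return response.choices[0].message.content
```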
- Python 3.11 or higher
- uv package manager (recommended) or pip
1. Copy the `.env.example` file to create your own `.env` file:

   ```bash
   cp .env.example .env
   ```

2. Choose your LLM provider by editing the `.env` file:

   **Option A: Using OpenAI**

   ```bash
   LLM_PROVIDER=openai
   OPENAI_API_KEY=your_actual_api_key_here
   OPENAI_BASE_URL=https://api.openai.com/v1
   OPENAI_MODEL=gpt-4o-mini
   ```

   - Replace `your_actual_api_key_here` with your actual OpenAI API key
   - Update `OPENAI_BASE_URL` if using a custom OpenAI-compatible endpoint
   - Change `OPENAI_MODEL` to use a different model (e.g., `gpt-4`, `gpt-4-turbo`)

   **Option B: Using Ollama (Local/Free)**

   ```bash
   LLM_PROVIDER=ollama
   OLLAMA_BASE_URL=http://localhost:11434/v1
   OLLAMA_MODEL=llama3.2
   OLLAMA_API_KEY=ollama
   ```

   - Install Ollama from [ollama.ai](https://ollama.ai)
   - Pull a model: `ollama pull llama3.2` (or `llama3.1`, `mistral`, `qwen2.5`, etc.)
   - Start Ollama: `ollama serve` (it usually runs automatically as a service)
   - Change `OLLAMA_MODEL` to any model you have pulled
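Because Ollama exposes an OpenAI-compatible `/v1` endpoint, you can sanity-check the settings above before launching the app. A minimal sketch, assuming Ollama is running locally and `llama3.2` has been pulled:

```python
from openai import OpenAI

# Ollama ignores the API key, but the client requires a non-empty value.
client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

reply = client.chat.completions.create(
    model="llama3.2",
    messages=[{"role": "user", "content": "Say hello in one short sentence."}],
)
print(reply.choices[0].message.content)
```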
**Using uv (recommended):**

```bash
# Install uv if you haven't already
curl -LsSf https://astral.sh/uv/install.sh | sh

# Clone the repository
git clone https://github.com/nehalvaghasiya/interview-bot.git
cd interview-bot

# Install dependencies using uv
uv sync

# Activate the virtual environment
source .venv/bin/activate  # Linux/Mac
# or
.venv\Scripts\activate     # Windows

# Run the application
streamlit run chatbot.py
```

**Using uv with requirements.txt:**

```bash
# Create a virtual environment and install dependencies
uv venv
source .venv/bin/activate  # Linux/Mac
uv pip install -r requirements.txt

# Run the application
streamlit run chatbot.py
```

**Using pip:**

```bash
# Create a virtual environment
python3 -m venv .venv
source .venv/bin/activate  # Linux/Mac
# or
.venv\Scripts\activate     # Windows

# Install dependencies
pip install -r requirements.txt

# Run the application
streamlit run chatbot.py
```
1. Start the application:

   ```bash
   streamlit run chatbot.py
   ```

2. Access the chatbot: open your browser at `http://localhost:8501`

3. Interact with the bot (this flow maps onto a standard Streamlit chat loop; see the sketch after this list):
   - The chatbot greets you and begins asking interview questions
   - Type your answers in the text input field
   - Press Enter to submit each answer
   - After all questions are answered, you'll receive an AI-generated evaluation
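A simplified, hypothetical sketch of that loop (the real `chatbot.py` adds question generation and evaluation on top of this):

```python
import streamlit as st

# Hypothetical fixed question list; the app generates its questions
# from the configured job description instead.
QUESTIONS = [
    "What is your experience with Machine Learning and AI implementation?",
    "Can you describe a challenging NLP project you've worked on?",
]

if "answers" not in st.session_state:
    st.session_state.answers = []

st.chat_message("assistant").write("Hello! I'm your interviewer bot. Let's get started.")

if len(st.session_state.answers) < len(QUESTIONS):
    # Ask the next unanswered question and wait for input.
    st.chat_message("assistant").write(QUESTIONS[len(st.session_state.answers)])
    if answer := st.chat_input("Type your answer and press Enter"):
        st.session_state.answers.append(answer)
        st.rerun()
else:
    st.chat_message("assistant").write("Thank you! Generating your evaluation...")
```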
Edit `config.py` to customize the job description and prompts:

```python
class Parameters:
    MODEL = os.environ.get("OPENAI_MODEL", "gpt-4o-mini")
    JOB_DESCRIPTION = """
    Your custom job description here...
    """

    # Customize prompts as needed
    QUESTIONS_PROMPT = "..."
    EVALUATION_PROMPT = "..."
```

| Variable | Description | Default | Required |
|---|---|---|---|
| `LLM_PROVIDER` | LLM provider to use (`openai` or `ollama`) | `openai` | Yes |
| **OpenAI Settings** | | | |
| `OPENAI_API_KEY` | Your OpenAI API key | - | If using OpenAI |
| `OPENAI_BASE_URL` | OpenAI API base URL | `https://api.openai.com/v1` | No |
| `OPENAI_MODEL` | OpenAI model name | `gpt-4o-mini` | No |
| **Ollama Settings** | | | |
| `OLLAMA_BASE_URL` | Ollama API base URL | `http://localhost:11434/v1` | No |
| `OLLAMA_MODEL` | Ollama model name | `llama3.2` | No |
| `OLLAMA_API_KEY` | Placeholder (Ollama doesn't need a real key) | `ollama` | No |
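As an illustration of how these variables might be wired together at startup (the function below is hypothetical; the project's actual logic lives in `config.py` and `utils.py`):

```python
import os

from dotenv import load_dotenv

load_dotenv()  # read the .env file into the process environment

def resolve_llm_settings() -> dict:
    """Illustrative: choose base URL, key, and model based on LLM_PROVIDER."""
    if os.environ.get("LLM_PROVIDER", "openai") == "ollama":
        return {
            "base_url": os.environ.get("OLLAMA_BASE_URL", "http://localhost:11434/v1"),
            "api_key": os.environ.get("OLLAMA_API_KEY", "ollama"),
            "model": os.environ.get("OLLAMA_MODEL", "llama3.2"),
        }
    return {
        "base_url": os.environ.get("OPENAI_BASE_URL", "https://api.openai.com/v1"),
        "api_key": os.environ["OPENAI_API_KEY"],  # required when using OpenAI
        "model": os.environ.get("OPENAI_MODEL", "gpt-4o-mini"),
    }
```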
- **Bot:** "Hello! I'm your interviewer bot powered by OpenAI. I will ask you a few questions, and your responses will be evaluated. Let's get started."
- **Bot:** "What is your experience with Machine Learning and AI implementation?"
- **User:** "I have 5 years of experience implementing ML models in production..."
- **Bot:** "Can you describe a challenging NLP project you've worked on?"
- **User:** "I developed a sentiment analysis system for customer feedback..."
- *(After all questions)*
- **Bot:** "Thank you for your thoughtful responses. Based on your answers, it appears that your skills, experience, and understanding align well with the requirements of the role..."
```
interview-bot/
├── .env.example          # Environment variables template
├── .github/              # GitHub configuration
├── .gitignore            # Git ignore rules
├── README.md             # This file
├── chatbot.py            # Main Streamlit application
├── config.py             # Configuration and prompts
├── utils.py              # Utility functions (API calls, text processing)
├── pyproject.toml        # Project metadata and dependencies (uv)
├── requirements.txt      # Python dependencies (pip)
├── uv.lock               # Locked dependencies (uv)
├── devtools/             # Development tools
│   └── lint.py
├── images/               # Project images
│   ├── openai.png
│   └── streamlit.jpg
├── src/                  # Source package (if using as package)
│   └── interview_bot.egg-info/
└── tests/                # Test files
    └── test_placeholder.py
```
If you encounter import errors:

```bash
# Ensure you're in the virtual environment
source .venv/bin/activate  # Linux/Mac

# Reinstall dependencies
uv pip install -r requirements.txt
# or
pip install -r requirements.txt
```

If you encounter API connection errors:

**For OpenAI:**
- Verify your `.env` file exists and contains valid credentials
- Check that `OPENAI_API_KEY` is set correctly
- Ensure `OPENAI_BASE_URL` is accessible from your network

**For Ollama:**
- Ensure Ollama is running: `ollama serve` (or check if it's running as a service)
- Verify the model is installed: `ollama list`
- Pull the model if missing: `ollama pull llama3.2`
- Check Ollama is accessible: `curl http://localhost:11434/api/tags`
- Verify `OLLAMA_BASE_URL` points to the correct endpoint
If a model isn't found or available:

**For OpenAI:**
- Update `OPENAI_MODEL` in your `.env` file to a model available in your account
- Common models: `gpt-4o-mini`, `gpt-4o`, `gpt-4-turbo`, `gpt-4`

**For Ollama:**
- List available models: `ollama list`
- Pull the desired model: `ollama pull <model-name>`
- Popular models: `llama3.2`, `llama3.1`, `mistral`, `qwen2.5`, `phi3`
- Update `OLLAMA_MODEL` in `.env` to match an installed model
**Ollama not responding:**

```bash
# Check if Ollama is running
ps aux | grep ollama

# Start Ollama
ollama serve
```

**Slow responses with Ollama:**
- Local models require sufficient RAM and compute
- Consider using smaller models: `phi3`, `llama3.2:1b`
- Or faster models: `qwen2.5:3b`, `mistral`
If Streamlit throws errors:

```bash
# Clear Streamlit cache
streamlit cache clear

# Restart the application
streamlit run chatbot.py
```

If issues persist:
- Check that all dependencies are correctly installed
- Verify your Python version is 3.11 or higher: `python --version`
- Review error logs in the terminal
- Open an issue on GitHub with error details
If you find a bug or the chatbot doesn't work as expected, please open an issue here with:
- Description of the issue
- Steps to reproduce
- Error messages or screenshots
- Your environment details (OS, Python version)
If you'd like to request a new feature, open an issue here with:
- Feature description
- Use case and benefits
- Example scenarios
- OpenAI API - GPT-4o-mini and other models for question generation and evaluation
- Ollama - Local LLM support (Llama 3.2, Mistral, Qwen, and more)
- Streamlit - Web framework for the interactive chat interface
- Python-dotenv - Environment variable management
- uv - Fast Python package manager
**OpenAI:**
- Cloud-based API
- High-quality responses
- Requires an API key and costs per token
- Models: GPT-4o, GPT-4o-mini, GPT-4-turbo, GPT-3.5-turbo

**Ollama:**
- Runs models locally on your machine
- Free and private
- No internet required after model download
- Popular models: Llama 3.2, Llama 3.1, Mistral, Qwen 2.5, Phi-3
- Learn more: [ollama.ai](https://ollama.ai)

