This guide walks you through obtaining API keys from each supported LLM provider.
## OpenAI

Website: https://platform.openai.com
Free Tier: Credits may apply (see pricing)
- Go to platform.openai.com
- Click "Sign Up" (or "Log In" if you have an account)
- Navigate to API Keys in the left sidebar
- Click "Create new secret key"
- Give it a name (e.g., "LLM Playbook")
- Copy the key immediately (you won't see it again!)

```bash
export OPENAI_API_KEY="sk-..."
```

Available models:
- `gpt-4o` - Latest flagship model
- `gpt-4o-mini` - Fast and cheap (recommended for testing)
- `gpt-4-turbo` - Previous generation
- `gpt-3.5-turbo` - Fast and affordable
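Once the key is exported, a quick sanity check from Python can catch a missing or mistyped key before you make any API calls. This is just a sketch: `check_openai_key` is a made-up helper name, and the `sk-` prefix check simply matches the key format shown above.

```python
import os

def check_openai_key() -> str:
    """Fail fast if OPENAI_API_KEY is missing or malformed."""
    key = os.environ.get("OPENAI_API_KEY", "")
    if not key:
        raise RuntimeError("OPENAI_API_KEY is not set")
    if not key.startswith("sk-"):
        raise RuntimeError("OPENAI_API_KEY should start with 'sk-'")
    return key

# Demo with a placeholder value (not a real key):
os.environ["OPENAI_API_KEY"] = "sk-placeholder"
print(check_openai_key()[:5])  # prints "sk-pl" - only a prefix, never the full key
```

Printing only a prefix keeps the full key out of notebook output and logs.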
## Anthropic

Website: https://console.anthropic.com
Free Tier: Free credits for new users (see console)
- Go to console.anthropic.com
- Click "Sign Up" or "Log In"
- Navigate to Settings → API Keys
- Click "Create Key"
- Copy the key (starts with `sk-ant-`)

```bash
export ANTHROPIC_API_KEY="sk-ant-..."
```

Available models:
- `claude-sonnet-4-20250514` - Latest Sonnet (balanced)
- `claude-opus-4-20250514` - Most capable
- `claude-3-5-sonnet-20241022` - Previous Sonnet
- `claude-3-haiku-20240307` - Fast and cheap
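A minimal request sketch for testing the key, with the actual SDK call left as a comment so the snippet runs without network access. `anthropic_request` is an illustrative helper, and `claude-3-haiku-20240307` is the cheap model from the list above.

```python
def anthropic_request(prompt: str, model: str = "claude-3-haiku-20240307") -> dict:
    """Build the keyword arguments for a minimal Messages API call."""
    return {
        "model": model,
        "max_tokens": 64,
        "messages": [{"role": "user", "content": prompt}],
    }

kwargs = anthropic_request("Say hello in one word.")
# With ANTHROPIC_API_KEY exported, the real call would look like:
#   import anthropic
#   reply = anthropic.Anthropic().messages.create(**kwargs)
#   print(reply.content[0].text)
print(kwargs["model"])
```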
## Google Gemini

Website: https://aistudio.google.com
Free Tier: Generous free tier with high rate limits
- Go to aistudio.google.com
- Sign in with your Google account
- Click "Get API Key" in the top right
- Click "Create API Key"
- Select a Google Cloud project (or create one)
- Copy the key
```bash
export GOOGLE_API_KEY="AI..."
```

Available models:
- `gemini-2.0-flash` - Latest fast model (recommended)
- `gemini-1.5-pro` - Most capable
- `gemini-1.5-flash` - Fast and efficient
- `gemini-1.5-flash-8b` - Smallest/fastest
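A small sketch for picking a model from the list above, with the SDK call commented out so it runs offline. The `GEMINI_MODELS` mapping and tier names are made up for illustration; the model names come from the list above.

```python
# Model names from the list above; "fast" is a sensible default for testing.
GEMINI_MODELS = {
    "fast": "gemini-2.0-flash",
    "capable": "gemini-1.5-pro",
    "small": "gemini-1.5-flash-8b",
}

def pick_gemini_model(tier: str = "fast") -> str:
    return GEMINI_MODELS[tier]

# With GOOGLE_API_KEY exported, a call via the google-generativeai SDK
# would look roughly like:
#   import os
#   import google.generativeai as genai
#   genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
#   model = genai.GenerativeModel(pick_gemini_model())
#   print(model.generate_content("Hello").text)
print(pick_gemini_model("capable"))  # prints "gemini-1.5-pro"
```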
## Groq

Website: https://console.groq.com
Free Tier: Free tier with generous rate limits
- Go to console.groq.com
- Click "Sign Up" with Google, GitHub, or email
- Navigate to API Keys in the sidebar
- Click "Create API Key"
- Copy the key (starts with `gsk_`)

```bash
export GROQ_API_KEY="gsk_..."
```

Available models:
- `llama-3.3-70b-versatile` - Latest Llama (recommended)
- `llama-3.1-70b-versatile` - Previous Llama
- `llama-3.1-8b-instant` - Fast small model
- `mixtral-8x7b-32768` - Mixtral MoE
- `gemma2-9b-it` - Google Gemma
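Groq exposes an OpenAI-compatible endpoint, so the `openai` Python package can be pointed at it by overriding `base_url`. This is a sketch under that assumption; `groq_client_kwargs` is an illustrative helper name.

```python
import os

def groq_client_kwargs() -> dict:
    """Keyword arguments for an OpenAI-compatible client pointed at Groq."""
    return {
        "api_key": os.environ.get("GROQ_API_KEY", ""),
        "base_url": "https://api.groq.com/openai/v1",
    }

# With the openai package installed, usage would look roughly like:
#   from openai import OpenAI
#   client = OpenAI(**groq_client_kwargs())
#   resp = client.chat.completions.create(
#       model="llama-3.1-8b-instant",
#       messages=[{"role": "user", "content": "Hi"}],
#   )
print(groq_client_kwargs()["base_url"])
```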
Groq specializes in ultra-fast inference using custom LPU hardware. Response times are often 10x faster than other providers, making it ideal for real-time applications.
See groq.com/pricing
## Ollama (Local)

Website: https://ollama.com
Free: Completely free (runs on your hardware)
- Download and install:
  - macOS: `brew install ollama` or download from ollama.com
  - Linux: `curl -fsSL https://ollama.com/install.sh | sh`
  - Windows: Download the installer from ollama.com
- Pull a model:

```bash
ollama pull llama3.2
```

- Start using it (Ollama auto-starts on install):

```bash
# Or start the server manually:
ollama serve
```
Ollama runs entirely on your local machine. No API key, no internet, no costs.
```bash
ollama pull llama3.2      # Meta's latest (3B, 8B)
ollama pull llama3.2:1b   # Smallest/fastest
ollama pull mistral       # Mistral 7B
ollama pull codellama     # Code-specialized
ollama pull phi3          # Microsoft Phi-3
ollama pull gemma2        # Google Gemma 2
```

Hardware requirements:
- Minimum: 8GB RAM for 7B models
- Recommended: 16GB+ RAM for 13B+ models
- GPU: Optional but significantly faster (NVIDIA, Apple Silicon)
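Ollama serves a local REST API on port 11434; the request shape for its `/api/generate` endpoint can be sketched as below. The HTTP call is commented out so the snippet runs without a server; `ollama_payload` is an illustrative helper, and `llama3.2` matches the pull commands above.

```python
import json

def ollama_payload(prompt: str, model: str = "llama3.2") -> bytes:
    """JSON body for Ollama's local /api/generate endpoint."""
    return json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()

# With `ollama serve` running locally, the call would look like:
#   import urllib.request
#   req = urllib.request.Request(
#       "http://localhost:11434/api/generate",
#       data=ollama_payload("Why is the sky blue?"),
#       headers={"Content-Type": "application/json"},
#   )
#   with urllib.request.urlopen(req) as r:
#       print(json.loads(r.read())["response"])
print(json.loads(ollama_payload("hi"))["model"])  # prints "llama3.2"
```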
## Setting Up Your Keys

Create a `.env` file in your project root:

```bash
cp .env.example .env
```

Edit `.env` with your keys:

```bash
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
GOOGLE_API_KEY=AI...
GROQ_API_KEY=gsk_...
```
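From Python, the usual way to load that file is the `python-dotenv` package (`from dotenv import load_dotenv; load_dotenv()`). As a sketch of what that does, a minimal hand-rolled loader might look like this (`load_env_file` is a made-up name; it skips comments and blank lines and never overrides keys already set):

```python
import os

def load_env_file(path: str = ".env") -> None:
    """Minimal .env loader: put KEY=value lines into os.environ."""
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            os.environ.setdefault(key.strip(), value.strip())
```

Prefer `python-dotenv` in real projects; this sketch handles none of the quoting or expansion rules that real `.env` files can use.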
Alternatively, add the exports to your `~/.bashrc`, `~/.zshrc`, or equivalent:

```bash
export OPENAI_API_KEY="sk-..."
export ANTHROPIC_API_KEY="sk-ant-..."
export GOOGLE_API_KEY="AI..."
export GROQ_API_KEY="gsk_..."
```

Then reload:

```bash
source ~/.bashrc  # or ~/.zshrc
```

In Google Colab:
- Click the 🔑 key icon in the left sidebar
- Add each key as a secret
- Access them in code:

```python
from google.colab import userdata
import os

os.environ['OPENAI_API_KEY'] = userdata.get('OPENAI_API_KEY')
```
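The `google.colab` import only exists inside Colab, so notebooks that should also run locally need a fallback. A sketch of that pattern (`get_secret` is an illustrative helper name):

```python
import os

def get_secret(name: str) -> str:
    """Read a key from Colab secrets when available, else the environment."""
    try:
        from google.colab import userdata  # only importable inside Colab
        return userdata.get(name)
    except ImportError:
        return os.environ.get(name, "")

os.environ["GROQ_API_KEY"] = "gsk_placeholder"  # placeholder for demo
print(get_secret("GROQ_API_KEY"))  # outside Colab, prints "gsk_placeholder"
```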
## Security Best Practices

- **Never commit API keys to Git**
  - Use `.env` files (already in `.gitignore`)
  - Use environment variables
- **Rotate keys regularly**
  - Delete unused keys
  - Create new keys for different projects
- **Set usage limits**
  - Most providers allow setting spending limits
  - Set alerts for unusual usage
- **Use separate keys for dev/prod**
  - Easier to track usage
  - Limits the blast radius if a key is compromised
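A practical complement to these rules: never print a raw key in logs or notebook output. A small masking helper (the exact format shown is just a choice, and `mask_key` is a made-up name):

```python
def mask_key(key: str) -> str:
    """Show only enough of a key to identify it in logs."""
    if len(key) <= 8:
        return "****"
    return f"{key[:4]}...{key[-4:]}"

print(mask_key("sk-abcdefghijklmnop"))  # prints "sk-a...mnop"
```

Four characters on each side is usually enough to tell keys apart without exposing them.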