- Don't wait for search results. lixSearch remembers what you've already asked about and serves up answers instantly from its memory. Same question, instant answer.
- Unlike regular search engines, lixSearch understands what you're really asking for. It searches the web, watches YouTube videos, analyzes images, and pieces everything together into a coherent answer.
- Every answer comes with sources. Read the original articles, watch the videos, and see exactly where the information came from. No fluff, no guessing.
- Ask a follow-up question and lixSearch remembers what you were just talking about. It's like chatting with someone who actually paid attention to the conversation.
| Package | Registry | Install / Pull | Description |
|---|---|---|---|
| `lix-open-search` | PyPI | `pip install lix-open-search` | Python client SDK: sync + async, streaming, multimodal, OpenAI-compatible |
| `lix-open-cache` | PyPI | `pip install lix-open-cache` | Standalone 3-layer Redis caching + Huffman disk archival for conversational AI |
| `LixSearch` | Docker Hub / GHCR | `docker pull elixpo/lixsearch` or `docker pull ghcr.io/circuit-overtime/lixsearch` | Full self-hostable search engine (API + Redis + ChromaDB + Playwright), published to both Docker Hub and GitHub Container Registry |
- `lix-open-search`: lightweight client SDK that connects to a running LixSearch server (self-hosted via Docker, or hosted at search.elixpo.com). Only depends on `httpx`.
- `lix-open-cache`: standalone caching library that works independently with just Redis; no server needed. Only depends on `redis`, `numpy`, and `loguru`.
- LixSearch Docker: the full search engine. Run `docker compose up` and get a working API.
When you ask lixSearch a question, here's what happens behind the scenes:
- You Ask - Type your question naturally, like you're talking to a friend
- We Understand - lixSearch breaks down your question into its key parts
- We Search - Multiple search agents fan out across the web, YouTube, and images simultaneously
- We Read - Automatically extract the important information from articles and videos
- We Synthesize - An AI reads through everything and writes a clear, concise answer
- You Get Results - A beautifully formatted answer with clickable sources and relevant images
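The six steps above can be sketched as a tiny orchestration loop. Every helper here is a toy stand-in for the real agents, not lixSearch's implementation:

```python
# Toy sketch of the pipeline shape; all helpers are hypothetical stand-ins.
def understand(question):  # step 2: break the question into key parts
    return question.lower().split()

def search(parts):         # step 3: fan out (stubbed with a static result)
    return [{"url": "https://example.com", "text": " ".join(parts)}]

def read(results):         # step 4: extract the important text
    return [r["text"] for r in results]

def synthesize(passages):  # step 5: condense everything into one answer
    return " ".join(passages)

def ask(question):         # steps 1 + 6: question in, sourced answer out
    results = search(understand(question))
    return {"answer": synthesize(read(results)),
            "sources": [r["url"] for r in results]}
```

In the real system, `search` fans out to parallel Playwright, YouTube, and image agents, and `synthesize` is an LLM call over RAG-assembled context, as the flowchart below details.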
- Any LLM that supports tool / function calling can drive it
- Completely self-hosted: run it on your own infrastructure, customize it to your needs, and run it entirely on CPU
- Can search YouTube, web images, and indexed web pages, then synthesize the findings into a single answer
- Can be used as a backend for any search engine, chatbot, or assistant that needs to:
  - Search the web for information
  - Find relevant videos and images
  - Synthesize information into a clear answer
  - Provide sources for all information
- Can be adapted into single-purpose endpoints for specific use cases, such as:
  - Product search with reviews and images
  - Recipe search with videos and photos
  - News search with original sources and summaries
  - Location search with details, reviews, and photos
```mermaid
flowchart TD
    A["You Ask a Question"] --> B["Query Analysis & Understanding"]
    B --> C{"Cache Hit?"}
    C -- "Yes (cosine > 0.90)" --> J["Instant Cached Answer"]
    C -- "No" --> D["Tool Router"]
    D --> E["Web Search\n(Playwright Agents)"]
    D --> F["YouTube Search\n(Metadata + Transcripts)"]
    D --> G["Image Search"]
    D --> H["Page Fetch\n(Full Text Extraction)"]
    E --> I["RAG Context Assembly\nChunk → Embed → Vector Search"]
    F --> I
    G --> I
    H --> I
    I --> K{"Need More Info?"}
    K -- "Yes (max 3 loops)" --> D
    K -- "No" --> L["LLM Synthesis\n(Conversation History + RAG + Sources)"]
    L --> M["Stream Response\n(SSE: real-time, word by word)"]
    M --> N["Answer with Sources"]
    M -.-> O[("Save to Cache\n& Session History")]
    style A fill:#4A90D9,stroke:#2C5F8A,color:#fff
    style C fill:#F5A623,stroke:#D4891A,color:#fff
    style J fill:#7ED321,stroke:#5FA318,color:#fff
    style N fill:#7ED321,stroke:#5FA318,color:#000
    style D fill:#9B59B6,stroke:#7D3C98,color:#fff
    style L fill:#E74C3C,stroke:#C0392B,color:#fff
```
Result: Fast, accurate answers you can trust.

| Capability | Implementation |
|---|---|
| Real-time streaming | Server-Sent Events (SSE): tokens stream as they're generated, not buffered |
| Semantic caching | Redis DB0 with cosine similarity (threshold 0.90); repeat queries resolve in <15 ms |
| Multi-turn memory | Two-tier hybrid: Redis hot window (20 msgs) + Huffman-compressed disk archive (30-day TTL) |
| Dynamic context | Token-budget-based history injection (6000 tokens); the model gets as much context as it needs |
| Source attribution | Every claim links to the original URL, with full-text extraction and relevance scoring |
| Deep search mode | Decomposes complex queries into sub-queries, runs parallel mini-pipelines, synthesizes a unified answer |
Public access is currently paused, but it will be available soon via the [Pollinations.ai API](https://enter.pollinations.ai). In the meantime, you can run your own instance of lixSearch and use the same API.
lixSearch exposes an OpenAI-compatible API. Any client that works with OpenAI works with lixSearch.
```bash
curl -X POST https://search.elixpo.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [{"role": "user", "content": "What is the best way to learn Python?"}],
    "stream": true
  }'
```

```bash
curl -X POST https://search.elixpo.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [
      {"role": "user", "content": "What is the best way to learn Python?"},
      {"role": "assistant", "content": "Here are the top approaches..."},
      {"role": "user", "content": "What about free resources specifically?"}
    ],
    "stream": true
  }'
```

The full conversation history is injected into the model context; no session ID management needed.
```bash
curl -X POST https://search.elixpo.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [{"role": "user", "content": "Latest breakthroughs in AI"}],
    "stream": false
  }'
```

Returns a standard `chat.completion` object with `usage` (prompt/completion tokens) and `choices[0].message.content`.
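With `"stream": true`, the response arrives as SSE `data:` lines. Assuming the chunks follow the OpenAI streaming schema (implied by the compatibility claim, not spelled out in this README), a minimal parser looks like this:

```python
import json

def parse_sse_lines(lines):
    """Collect content tokens from OpenAI-style SSE chunk lines.
    Each data line carries a JSON chunk; "[DONE]" terminates the stream."""
    tokens = []
    for line in lines:
        if not line.startswith("data: "):
            continue  # skip blank keep-alive lines and comments
        data = line[len("data: "):]
        if data == "[DONE]":
            break
        chunk = json.loads(data)
        delta = chunk["choices"][0]["delta"].get("content")
        if delta:  # role-only or empty deltas carry no text
            tokens.append(delta)
    return tokens
```

In practice you would feed this the line iterator of a streaming HTTP response and print tokens as they arrive, which is what gives the word-by-word effect.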
| Endpoint | Method | Description |
|---|---|---|
| `/v1/chat/completions` | POST | OpenAI-compatible chat completions (stream + non-stream) |
| `/v1/models` | GET | List available models |
| `/api/search` | POST/GET | Legacy search endpoint (SSE) |
| `/api/stats` | GET | Redis memory, disk archive stats, session counts |
| `/api/health` | GET | Health check |
| `/docs` | GET | Interactive API documentation (Scalar UI) |
- Better Understanding - Grasps what you really want, not just keyword matching
- Saves Time - One coherent answer instead of browsing 10 blue links
- Context Aware - Remembers what you were just talking about
- Multimedia - Automatically finds videos, images, and articles
- Real-Time Info - Connected to the web, not using outdated training data
- Verified Sources - Every fact links to where it came from
- Fresher Results - Gets today's news, not last year's knowledge
- Faster - Streamed results, not waiting for a complete response
- Students - Research papers, homework, learning topics
- Professionals - Market research, industry updates, competitor analysis
- Home Improvement - DIY guides, product reviews, how-to videos
- Travelers - Travel planning, local info, reviews
- Researchers - Deep dives with cited sources
- News Junkies - Latest updates with original sources
Each will give you a complete, sourced answer.
This is the request flow from the Cloudflare reverse proxy:

```
Browser → search.elixpo.com (Cloudflare Pages, edge)
        ↓
Next.js API route (e.g. /api/search/route.ts)
        ↓
backendUrl("/api/search") → "http://search.elixpo.com/api/search"
        ↓
fetch() with headers:
  X-API-Key: <API_KEY secret>          → nginx auth on :10001
  X-Internal-Key: <INTERNAL_API_KEY>   → app-level auth
        ↓
Your droplet :10001 (nginx) → :9002 (app)
```
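The forwarding step amounts to attaching two secrets from the environment before calling the droplet. The real route is a Next.js/TypeScript handler; this Python sketch mirrors its shape (env var names follow the flow above and are assumptions for your deployment):

```python
import json
import os
import urllib.request

def backend_headers() -> dict:
    """The two auth headers the edge route attaches before forwarding."""
    return {
        "Content-Type": "application/json",
        "X-API-Key": os.environ.get("API_KEY", ""),                # nginx auth on :10001
        "X-Internal-Key": os.environ.get("INTERNAL_API_KEY", ""),  # app-level auth
    }

def forward_search(body: dict) -> bytes:
    """Forward a search request to the droplet, like backendUrl("/api/search")."""
    req = urllib.request.Request(
        "http://search.elixpo.com/api/search",
        data=json.dumps(body).encode(),
        headers=backend_headers(),
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read()
```

Requests missing `X-API-Key` are rejected at nginx before ever reaching the app on :9002, so the app-level `X-Internal-Key` check is a second, independent gate.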
- **How is this different from Google?** Google gives you links; lixSearch gives you answers. We do the searching for you and synthesize a coherent response.
- **Is my search history private?** Your privacy is important. [Link to privacy policy]
- **Can I use this offline?** lixSearch needs internet access to search the web, but repeat queries resolve faster thanks to the local cache.
- **Why are my results sometimes general?** Ask more specifically: "vegan chocolate chip cookies" gives better results than "recipes."
Found a bug? Have ideas for improvement? Contribute on GitHub | Report Issues
Questions? Feedback? Suggestions?
Happy Searching!
lixSearch - Search smarter, not harder. Made with ❤️ by Elixpo


