Production-grade LLM framework for Python. Async-native RAG, agents, and graph workflows. 2 dependencies. Zero magic.
Documentation • Quickstart • API Reference • Roadmap • Contributing
**The problem:** Existing LLM frameworks are heavy: 50+ dependencies, hidden chains, magic callbacks, YAML configs. Hard to debug, harder to ship. **The fix:** SynapseKit gives you everything you need to build production LLM apps with just 2 core dependencies and plain Python you can actually read.
```shell
pip install "synapsekit[openai]"
```

```python
from synapsekit import RAG

rag = RAG(model="gpt-4o-mini", api_key="sk-...")
rag.add("Your document text here")
print(rag.ask_sync("What is the main topic?"))
```

3 lines. That's it.
- **RAG:** 5 text splitters • 10+ loaders • BM25 reranking • conversation memory • streaming retrieval-augmented generation
- **Agents:** ReAct • native function calling • Supervisor/Worker • Handoff • Crew • 32 built-in tools • fully extensible
- **Graphs:** parallel execution • conditional routing • cycle support • checkpointing • SSE/WS streaming • human-in-the-loop
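BM25 reranking, mentioned above, is simple enough to show inline. The sketch below implements the classic Okapi BM25 scoring formula in plain Python to illustrate what a reranker does; the function name `bm25_rerank` and the whitespace tokenizer are illustrative assumptions, not SynapseKit's actual API.

```python
import math
from collections import Counter

def bm25_rerank(query, docs, k1=1.5, b=0.75):
    """Re-rank docs against query using the Okapi BM25 formula."""
    tokenized = [d.lower().split() for d in docs]
    n = len(docs)
    avgdl = sum(len(t) for t in tokenized) / n
    # document frequency of each term across the corpus
    df = Counter()
    for toks in tokenized:
        for term in set(toks):
            df[term] += 1

    def score(toks):
        tf = Counter(toks)
        total = 0.0
        for term in query.lower().split():
            if term not in tf:
                continue
            idf = math.log((n - df[term] + 0.5) / (df[term] + 0.5) + 1)
            norm = tf[term] + k1 * (1 - b + b * len(toks) / avgdl)
            total += idf * tf[term] * (k1 + 1) / norm
        return total

    ranked = sorted(zip(docs, map(score, tokenized)),
                    key=lambda pair: pair[1], reverse=True)
    return [doc for doc, _ in ranked]
```

In a RAG pipeline this runs after the vector search: retrieve a generous candidate set by embedding similarity, then let BM25's exact-term matching reorder it.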
- **Providers:** OpenAI • Anthropic • Gemini • Mistral • Ollama • Cohere • Bedrock • Groq • DeepSeek • more
- **Vector stores:** InMemory • ChromaDB • FAISS • Qdrant • Pinecone — all behind a single VectorStore ABC
- **More:** Evaluation • Observability • Guardrails • MCP • A2A • Multimodal • 1011 tests • Apache 2.0 licensed
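A VectorStore ABC is what makes backends swappable. Here is a hedged sketch of what such an interface typically looks like, with a toy in-memory implementation using cosine similarity; the method names `add` and `search` are assumptions for illustration, not SynapseKit's actual signatures.

```python
from abc import ABC, abstractmethod

class VectorStore(ABC):
    """Minimal vector-store interface: every backend implements these two methods."""

    @abstractmethod
    def add(self, texts, embeddings):
        """Store texts alongside their embedding vectors."""

    @abstractmethod
    def search(self, embedding, k=4):
        """Return the k stored texts most similar to the query embedding."""

class InMemoryStore(VectorStore):
    def __init__(self):
        self._items = []  # list of (text, embedding) pairs

    def add(self, texts, embeddings):
        self._items.extend(zip(texts, embeddings))

    def search(self, embedding, k=4):
        def cosine(a, b):
            num = sum(x * y for x, y in zip(a, b))
            den = (sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5)
            return num / den if den else 0.0
        scored = sorted(self._items, key=lambda it: cosine(embedding, it[1]),
                        reverse=True)
        return [text for text, _ in scored[:k]]
```

Because callers only depend on the ABC, swapping ChromaDB for Qdrant is a one-line change in the constructor.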
### RAG in 3 lines

```python
from synapsekit import RAG

rag = RAG(model="gpt-4o-mini", api_key="sk-...")
rag.add("Your document text here")

# inside an async function / event loop:
async for token in rag.stream("What is the main topic?"):
    print(token, end="", flush=True)
```
### Agent with tools

```python
from synapsekit import FunctionCallingAgent
from synapsekit.agents.tools import CalculatorTool

# llm is any configured SynapseKit chat model
agent = FunctionCallingAgent(
    llm=llm,
    tools=[CalculatorTool()],
)
result = await agent.run("What is 42 * 17?")
```
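The feature list also mentions ReAct agents. Independent of any library's API, the ReAct pattern is just a loop: the model emits a Thought and an Action, the runtime executes the named tool and feeds an Observation back, until a final answer appears. A minimal, library-free sketch — the `Action: name[arg]` line format is one common convention, not necessarily SynapseKit's:

```python
import re

def react_loop(llm, tools, question, max_steps=5):
    """Minimal ReAct loop.

    llm: callable taking the transcript so far and returning the next
         Thought/Action (or Final Answer) text.
    tools: dict mapping tool name -> callable taking a string argument.
    """
    transcript = f"Question: {question}"
    for _ in range(max_steps):
        reply = llm(transcript)
        transcript += "\n" + reply
        # stop when the model declares a final answer
        final = re.search(r"Final Answer: (.*)", reply)
        if final:
            return final.group(1).strip()
        # otherwise run the requested tool and feed back an observation
        action = re.search(r"Action: (\w+)\[(.*?)\]", reply)
        if action:
            name, arg = action.groups()
            observation = tools[name](arg)
            transcript += f"\nObservation: {observation}"
    return None
```

Production implementations add structured parsing, error handling, and token budgets around this loop, but the control flow is the same.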
### Graph workflow

```python
from synapsekit import StateGraph

graph = StateGraph()
graph.add_node("fetch", fetch_data)      # fetch_data / process_data: your async callables
graph.add_node("process", process_data)
graph.add_edge("fetch", "process")
graph.set_entry("fetch")
graph.set_finish("process")

app = graph.compile()
result = await app.run({"query": "hello"})
```
### Swap providers in one line

```python
from synapsekit import RAG

# OpenAI
rag = RAG(model="gpt-4o-mini", api_key="sk-...")

# Anthropic
rag = RAG(model="claude-3-haiku", api_key="sk-ant-...")

# Ollama (local)
rag = RAG(model="ollama/llama3", api_key="")

# Same API. Same code. Different brain.
```
Contributors welcome • Apache 2.0 Licensed
We're building the most comprehensive async-native LLM framework in Python. Whether you're a seasoned open-source contributor or looking for your first PR, jump in.
Star the repo • Browse good first issues • Join the discussion