
Typed knowledge graph
Ten typed edge kinds. Not flat document search — real relational reasoning.
Fused retrieval (RRF)
BM25 + vector, combined by rank position. Same pattern as Vespa and Elastic.
Memory with decay
Strengthens what you use. Prunes what you don’t. Grounded in the graph.
Databases as first-class
Real SQL against Postgres and MySQL. Not API wrappers.
Claude-native
Agent SDK, streaming responses, tool use, session persistence.
Langfuse tracing
Every LLM call, tool, embedding, and cost per query.
Ask across your entire stack
The questions most AI assistants can’t answer — because they’re built on document search, not relational knowledge.
- Who has context?
- Join SQL + Slack
- Date-range timeline
Q: Who has context on the Q2 billing migration?
Fabric traverses its graph across email threads, Slack #billing, and the last three Fireflies meetings. Returns four people with evidence — not documents.
Four layers that compound
1 — Typed knowledge graph
Nodes are people, threads, meetings, channels, customers, domains. Edges are typed — sent_by, replied_to, attended, participant, in_thread, posted_in, organized_by, in_folder, from_domain, has_email — each carrying a weight and a timestamp.
Graph traversal is a first-class query operation, not a post-hoc extraction step.
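As a rough illustration of what "typed edges with weights and timestamps" buys you, here is a minimal in-memory sketch. The `Edge` shape, node-ID strings, and `neighbors` helper are illustrative only — in Fabric the graph lives in Postgres and traversal is a query operation, not Python code.

```python
from collections import namedtuple, defaultdict

# Illustrative in-memory edge shape; Fabric stores these rows in Postgres.
Edge = namedtuple("Edge", ["src", "kind", "dst", "weight", "ts"])

def neighbors(edges, node, kinds=None):
    """One-hop typed traversal: follow only edges of the requested kinds.

    Because edges carry a type, a query can ask "who *sent* this message"
    rather than "what text is near this message" — relational, not lexical.
    """
    index = defaultdict(list)
    for e in edges:
        index[e.src].append(e)
    return [e.dst for e in index[node] if kinds is None or e.kind in kinds]

edges = [
    Edge("msg:42", "sent_by", "person:ada", 1.0, "2024-05-01"),
    Edge("msg:42", "in_thread", "thread:q2-billing", 1.0, "2024-05-01"),
    Edge("person:ada", "attended", "meeting:standup", 0.8, "2024-05-02"),
]
neighbors(edges, "msg:42", kinds={"sent_by"})  # ["person:ada"]
```

Following `in_thread` and then `sent_by`/`attended` edges hop by hop is what turns "who has context on X?" into a traversal instead of a keyword match.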
2 — Fused retrieval via RRF
Every query runs BM25 (Postgres ts_rank_cd) and vector similarity (pgvector cosine) in parallel, then combines them by rank position via Reciprocal Rank Fusion with k = 60.
Most RAG tools still do weighted linear blending on incompatible score scales, which quietly lets one signal dominate. We don’t.
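The fusion step itself is small enough to sketch. This is the standard RRF formula with k = 60 as described above; the function name and the lists-of-document-IDs interface are illustrative, not Fabric's actual API.

```python
from collections import defaultdict

def rrf_fuse(bm25_ranking, vector_ranking, k=60):
    """Reciprocal Rank Fusion: score(d) = sum of 1 / (k + rank) per ranking.

    Only 1-based rank positions are used — raw BM25 and cosine scores are
    never mixed, so their incompatible scales can't let one signal dominate.
    """
    scores = defaultdict(float)
    for ranking in (bm25_ranking, vector_ranking):
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] += 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Doc "b" is top-ranked by vector search and second by BM25, so it wins:
rrf_fuse(["a", "b", "c"], ["b", "c", "a"])  # ["b", "a", "c"]
```

Documents found by only one retriever still score (they simply get no contribution from the other list), which is why RRF degrades gracefully when one signal misses.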
3 — Semantic memory with decay
Every conversation produces typed observations — facts, decisions, commitments, risks, insights, patterns. Each has an importance score that strengthens on reference (×1.1) and decays when unused (×0.9 per conversation). Below 0.05, pruned.
Unlike mem0’s floating memory, every observation links back to the email, meeting, or message where the fact came from.
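The decay rule above can be sketched in a few lines. The function name and the dict-of-scores representation are illustrative — only the constants (×1.1, ×0.9, 0.05) come from the description above.

```python
PRUNE_THRESHOLD = 0.05   # below this, the observation is dropped
STRENGTHEN = 1.1         # multiplier when an observation is referenced
DECAY = 0.9              # multiplier per conversation when unused

def update_importance(observations, referenced_ids):
    """Run once per conversation: strengthen what was referenced,
    decay what wasn't, and prune anything that falls below threshold."""
    surviving = {}
    for obs_id, importance in observations.items():
        importance *= STRENGTHEN if obs_id in referenced_ids else DECAY
        if importance >= PRUNE_THRESHOLD:
            surviving[obs_id] = importance
    return surviving
```

Under these constants, a full-strength observation (importance 1.0) that is never referenced again survives roughly 29 conversations before 0.9ⁿ drops it below 0.05 — unused memory fades gradually, not abruptly.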
4 — Databases as first-class citizens
Direct connections to PostgreSQL and MySQL via asyncpg and aiomysql, with full schema discovery. Natural-language questions become real SQL against your real database.
Most “chat with your data” tools wrap SaaS APIs. Fabric connects to the actual systems where your operational data lives.
Who Fabric is for
Technical teams that want to own their stack. Run the backend yourself. Pick your model. Inspect the graph in SQL. Read the source. Modify what you need.
Self-host anywhere
Docker Compose on a laptop or ECS Fargate via AWS Copilot. Both first-class.
Pick your model
Anthropic, OpenAI, or local — switchable per deployment.
SQL-inspectable
Graph and memory live in Postgres. No black box.
Traced end-to-end
Langfuse shows every LLM call, tool use, and per-query cost.
Get started
Quickstart
From Docker Compose to a running Fabric in 10 minutes.
Architecture
The end-to-end data flow.
How It Works
The engineering behind each of the four layers.
API Reference
72+ endpoints, live OpenAPI playground.