Your intelligence layer,
running with governance controls.

Build a personal knowledge base with RAG, vector search, and intelligent document indexing. Your context stays under your governance policy — self-hosted, hybrid, or managed.

What makes it powerful

Document indexing

Drop in PDFs, Markdown, code files, or entire folders. Alabobai indexes everything locally and makes it instantly searchable.

Semantic vector search

Find information by meaning, not just keywords. Powered by local embedding models that understand context and nuance.
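Under the hood, "search by meaning" comes down to comparing embedding vectors, typically with cosine similarity. A minimal pure-Python sketch (toy 3-dimensional vectors for illustration; real embeddings have hundreds of dimensions):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity: 1.0 for identical direction, near 0.0 for unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy example: the query vector points in nearly the same direction as doc1.
query = [0.9, 0.1, 0.3]
doc1 = [0.8, 0.2, 0.4]   # semantically similar passage
doc2 = [0.1, 0.9, 0.0]   # unrelated passage

print(cosine_similarity(query, doc1) > cosine_similarity(query, doc2))  # True
```

Keyword search would miss a passage that says "self-attention speedups" when you search for "transformer optimization"; vectors that encode meaning land close together regardless of wording.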

RAG-powered chat

Ask questions about your documents in natural language. Get precise answers with references to exact paragraphs and page numbers.
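Exact citations like "[paper_042.pdf, p.12]" work because each stored chunk carries its provenance. A sketch of the idea (the field names and sample values here are hypothetical, not the product's actual schema):

```python
# Each chunk keeps its source file and page so answers can cite exact locations.
chunks = [
    {"text": "Flash Attention tiles the softmax computation to cut memory use.",
     "source": "paper_042.pdf", "page": 12},
    {"text": "Multi-query attention shares key/value heads across query heads.",
     "source": "paper_118.pdf", "page": 7},
]

def format_citation(chunk):
    """Render a retrieved chunk's provenance as an inline reference."""
    return f"[{chunk['source']}, p.{chunk['page']}]"

print(format_citation(chunks[0]))  # [paper_042.pdf, p.12]
```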

Governed runtime control

Run in a secure, policy-restricted mode for regulated environments without giving up execution capability.
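One common way to implement governed runtime control is a declarative policy checked before each action. A hypothetical sketch (the policy fields and action names are illustrative, not Alabobai's actual configuration):

```python
# Hypothetical runtime policy: which capabilities are allowed in secure mode.
POLICY = {
    "mode": "secure",
    "allow": {"index_local", "query_local", "generate_embeddings"},
    "deny": {"network_upload", "telemetry"},
}

def is_permitted(action, policy=POLICY):
    """An action runs only if the active policy explicitly allows it."""
    return action in policy["allow"] and action not in policy["deny"]

print(is_permitted("query_local"))      # True
print(is_permitted("network_upload"))   # False
```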

Code-aware indexing

Understands code structure — functions, classes, imports. Ask questions like "where is auth handled?" across your entire codebase.
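How the real indexer parses source files isn't shown here, but for Python files the standard-library ast module illustrates the idea: walk the syntax tree and record functions, classes, and imports as searchable symbols.

```python
import ast

def index_python_source(source):
    """Extract function, class, and import names from Python source text."""
    tree = ast.parse(source)
    symbols = {"functions": [], "classes": [], "imports": []}
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            symbols["functions"].append(node.name)
        elif isinstance(node, ast.ClassDef):
            symbols["classes"].append(node.name)
        elif isinstance(node, (ast.Import, ast.ImportFrom)):
            symbols["imports"].extend(alias.name for alias in node.names)
    return symbols

code = '''
import jwt

class AuthMiddleware:
    def verify_token(self, token):
        return jwt.decode(token, "secret", algorithms=["HS256"])
'''
print(index_python_source(code))
# {'functions': ['verify_token'], 'classes': ['AuthMiddleware'], 'imports': ['jwt']}
```

With symbols indexed alongside the raw text, "where is auth handled?" can match the AuthMiddleware class even if the word "auth" never appears in a comment.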

Zero data leakage

Embeddings are generated locally. No document content, queries, or metadata ever touches an external server.

See it in action

Index your documents and start asking questions immediately.

Local AI Brain
$ alabobai brain index ~/Documents/research-papers/

Scanning 847 files...
Generating embeddings with nomic-embed-text...
Building vector index (ChromaDB)...
Indexed 847 documents (12,340 chunks) in 23.4s

brain> What papers discuss transformer attention optimization?

Searching 12,340 chunks (cosine similarity)...
Found 14 relevant passages across 6 documents

Based on your indexed papers, there are three main approaches:

1. Flash Attention — Reduces memory from O(n²) to O(n) using tiling [paper_042.pdf, p.12]
2. Multi-Query Attention — Shares KV heads across queries [paper_118.pdf, p.7]
3. Sliding Window — Limits attention span for long contexts [paper_203.pdf, p.3]

How it works

1

Point to your files

Select folders, files, or entire drives. Alabobai watches for changes and re-indexes automatically in the background.
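The spec table below credits change detection to fsnotify (a Go library that hooks OS file events). As a language-neutral illustration of the same idea, here is a minimal polling sketch in Python: snapshot modification times, then re-index whatever differs. This is a stand-in, not the product's actual mechanism.

```python
import os

def snapshot(root):
    """Map each file path under root to its last-modified time."""
    mtimes = {}
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            mtimes[path] = os.stat(path).st_mtime
    return mtimes

def changed_files(before, after):
    """Paths that are new, or whose mtime changed since the last snapshot."""
    return [path for path, mtime in after.items() if before.get(path) != mtime]

# Usage sketch: poll periodically and re-index only what changed.
# before = snapshot("/home/me/Documents")
# ... later ...
# for path in changed_files(before, snapshot("/home/me/Documents")):
#     reindex(path)   # hypothetical re-indexing hook
```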

2

Embeddings are generated

A local embedding model (nomic-embed-text via Ollama) converts your documents into searchable vector representations.
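The real model here is nomic-embed-text served by Ollama. To show the shape of the step (text in, fixed-length normalized vector out) without a model dependency, here is a toy hashed bag-of-words embedder. It is a deliberately crude stand-in; a real transformer embedding captures meaning, not just word identity.

```python
import hashlib
import math

def toy_embed(text, dims=16):
    """Hash each word into one of `dims` buckets, then L2-normalize.
    A stand-in for a real embedding model such as nomic-embed-text."""
    vec = [0.0] * dims
    for word in text.lower().split():
        bucket = int(hashlib.md5(word.encode()).hexdigest(), 16) % dims
        vec[bucket] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

vector = toy_embed("transformer attention optimization")
print(len(vector))  # 16
```

The key property any embedder shares: every input maps to a vector of the same dimensionality, so documents and queries become directly comparable.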

3

Ask anything

Query your knowledge base in natural language. The RAG pipeline retrieves relevant context and generates precise answers.
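The retrieval half of that pipeline can be sketched as: score every stored chunk against the query, keep the top-k, and prepend them to the prompt sent to the language model. Scoring below is plain word overlap for brevity; the real pipeline ranks by cosine similarity over embeddings.

```python
def words(text):
    """Lowercased words with trailing punctuation stripped."""
    return {w.strip(".,?!") for w in text.lower().split()}

def score(query, chunk):
    """Stand-in relevance score: word overlap (Jaccard).
    The real pipeline compares embedding vectors instead."""
    q, c = words(query), words(chunk)
    return len(q & c) / len(q | c)

def retrieve(query, chunks, k=2):
    """Return the k chunks most relevant to the query."""
    return sorted(chunks, key=lambda ch: score(query, ch), reverse=True)[:k]

def build_prompt(query, chunks, k=2):
    """Assemble retrieved context plus the question for the language model."""
    context = "\n".join(f"- {ch}" for ch in retrieve(query, chunks, k))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

chunks = [
    "Flash Attention reduces attention memory with tiling.",
    "The dataset contains 10k labeled images.",
    "Multi-query attention shares key/value heads.",
]
print(build_prompt("How does Flash Attention reduce memory?", chunks))
```

Grounding the model in retrieved chunks is what keeps answers precise and citable instead of generic.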

Technical specifications

Supported formats: PDF, MD, TXT, Code, DOCX
Vector database: ChromaDB (local)
Embedding model: nomic-embed-text
Chunk size: 512 tokens (configurable)
Max documents: Unlimited (disk space)
Search latency: <100ms (10K docs)
Auto-reindexing: File watcher (fsnotify)
Data stored in cloud: None
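The 512-token chunk size above can be sketched as a sliding window. This version uses whitespace words as a rough token proxy and an overlap between neighboring chunks so a sentence spanning a boundary stays searchable; the overlap value is illustrative, not the product's actual setting.

```python
def chunk_text(text, chunk_size=512, overlap=64):
    """Split text into word windows of `chunk_size`, with `overlap` words
    shared between neighbors. Words stand in for tokens; real tokenizers differ."""
    words = text.split()
    step = chunk_size - overlap
    chunks = []
    for start in range(0, len(words), step):
        chunks.append(" ".join(words[start:start + chunk_size]))
        if start + chunk_size >= len(words):
            break
    return chunks

document = " ".join(f"word{i}" for i in range(1200))
print(len(chunk_text(document)))  # 3
```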

Your knowledge, your machine

Index your first folder and start asking questions in under 3 minutes. No complex setup required.