Memory infrastructure for AI agents. Every conversation remembered. Every preference learned. Every relationship preserved — permanently.
Your AI assistant forgets you exist between conversations. Context compaction destroys mid-session reasoning. Platform memory is shallow — transcripts, not understanding. Hardware migrations mean starting over.
AI Engram fixes this. Not with bigger context windows. With memory architecture.
A swarm of specialized background agents captures, extracts, consolidates, curates, and retrieves memories — so the AI you're talking to just knows.
Capture: Every message stored verbatim. Zero data loss. The firehose.
Extract: An LLM distills facts, decisions, preferences, and relationships from raw conversation.
Consolidate: Deduplicate. Merge related memories. Link cross-references. Keep it clean.
Curate: Score importance. Manage tiers. Build personality profiles. Archive gracefully.
Retrieve: Semantic + keyword hybrid search. Two-stage ranking. Sub-500ms recall.
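The retrieval stage above can be sketched in a few lines. This is an illustrative toy, not Guardian's implementation: bag-of-words cosine stands in for embedding similarity, term overlap stands in for BM25, and the second-stage reranker is a simple exact-phrase bonus (a real system might use a cross-encoder or LLM reranker). All function names are hypothetical.

```python
from collections import Counter
import math

def keyword_score(query: str, doc: str) -> float:
    # Term-overlap count as a stand-in for a BM25-style keyword score.
    q, d = set(query.lower().split()), Counter(doc.lower().split())
    return float(sum(d[t] for t in q))

def semantic_score(query: str, doc: str) -> float:
    # Bag-of-words cosine as a stand-in for embedding similarity.
    a, b = Counter(query.lower().split()), Counter(doc.lower().split())
    dot = sum(a[t] * b[t] for t in a.keys() & b.keys())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def hybrid_search(query, memories, k=10, top=3, alpha=0.5):
    # Stage 1: cheap hybrid score over the whole store; keep top-k candidates.
    stage1 = sorted(
        memories,
        key=lambda m: alpha * semantic_score(query, m)
        + (1 - alpha) * keyword_score(query, m),
        reverse=True,
    )[:k]
    # Stage 2: rerank only the candidates with a costlier signal
    # (here: an exact-phrase bonus).
    def rerank(m):
        bonus = 1.0 if query.lower() in m.lower() else 0.0
        return bonus + semantic_score(query, m)
    return sorted(stage1, key=rerank, reverse=True)[:top]
```

The two-stage shape is what buys the latency: a fast scorer narrows thousands of memories to a handful, and the expensive ranker only ever sees the handful.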
Guardian is hub-and-spoke — one memory pipeline serving any number of channels. Chat UI, API, AI agents, GitHub — they all feed the same brain.
Web UI where you talk to Guardian. Sign up, chat, experience memory that works.
Point any AI agent at the API. Instant unlimited memory via POST /api/chat.
OpenClaw, Claude Code, custom agents — connect to Guardian and never forget.
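Pointing an agent at Guardian can look like the sketch below. The `POST /api/chat` endpoint is named above; the base URL, payload shape, and bearer-token auth header are assumptions for illustration, so check the deployed API before relying on them.

```python
import json
import urllib.request

GUARDIAN_URL = "http://localhost:3000"  # assumption: your Guardian deployment

def build_chat_request(message: str, token: str) -> urllib.request.Request:
    # Payload shape and Authorization header are assumed, not documented.
    return urllib.request.Request(
        f"{GUARDIAN_URL}/api/chat",
        data=json.dumps({"message": message}).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {token}",
        },
        method="POST",
    )

def chat(message: str, token: str) -> dict:
    # Send one turn; Guardian handles capture, extraction, and recall
    # server-side, so the caller never manages memory itself.
    with urllib.request.urlopen(build_chat_request(message, token)) as resp:
        return json.load(resp)
```

Because memory lives behind the endpoint rather than in the agent, every channel that calls it shares the same accumulated context.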
Row-level security (RLS) enforced at the database level. Each user's memories are completely private.
Guardian builds a profile of each user — interests, communication style, expertise.
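A minimal sketch of profile building, assuming extracted memories arrive as tagged records (the `{"kind": ..., "value": ...}` shape and the `build_profile` helper are hypothetical, not Guardian's actual schema):

```python
from collections import Counter

def build_profile(memories: list[dict]) -> dict:
    # Fold tagged memory records into a lightweight user profile:
    # repeated interests are counted, preferences are keyed, and
    # expertise accumulates as a set.
    profile = {"interests": Counter(), "preferences": {}, "expertise": set()}
    for m in memories:
        if m["kind"] == "interest":
            profile["interests"][m["value"]] += 1
        elif m["kind"] == "preference":
            key, val = m["value"]
            profile["preferences"][key] = val
        elif m["kind"] == "expertise":
            profile["expertise"].add(m["value"])
    return profile
```

Counting repeated interests rather than storing them once is the point: frequency is a cheap proxy for importance when deciding what the profile should surface first.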
MIT licensed. Clone it, deploy it, build on it. 7 research iterations and 40+ papers behind it.
Tell it about yourself. Come back tomorrow. It'll remember everything.
Open Guardian →