Aurra Docs

Overview

Aurra is a memory infrastructure layer for AI agents: durable, citable, and time-aware. It captures what your agent observes, supersedes outdated facts automatically, and returns memories with citations and timestamps.

Why Aurra exists

Most "memory for AI" tools store strings and retrieve them by similarity. That works until your agent confidently tells a customer something that was true six months ago and isn't anymore — or fabricates a date that never existed.

Aurra solves three problems other systems don't:

  • Bi-temporal versioning. Every fact has a valid_from and valid_to. When something changes ("I switched from Vim to Emacs"), the old fact is superseded, not deleted, with a full audit trail.
  • Citation-grounded retrieval. Every answer cites the exact memories it used, with similarity scores and tenant scoping.
  • Source filtering. Restrict retrieval to specific sources (Slack, Notion, email) so the right memory wins.
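The supersession model above can be sketched independently of Aurra's actual API, which is not documented on this page. The following is a minimal illustration only: the `Fact` and `MemoryStore` names are hypothetical, and a real store would live in Postgres rather than a Python list. It shows the core invariant, that asserting a new fact closes the old one's validity window instead of deleting it.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class Fact:
    """One atomic memory with validity bounds and provenance (hypothetical shape)."""
    subject: str
    value: str
    source: str                          # e.g. "slack", "notion", "email"
    valid_from: datetime
    valid_to: Optional[datetime] = None  # None means still current

class MemoryStore:
    """Sketch of supersession: old facts are closed out, never deleted."""

    def __init__(self) -> None:
        self.facts: list[Fact] = []

    def assert_fact(self, subject: str, value: str, source: str) -> Fact:
        now = datetime.now(timezone.utc)
        # Close the validity window of any currently-open fact on this subject.
        for f in self.facts:
            if f.subject == subject and f.valid_to is None:
                f.valid_to = now         # superseded, not deleted
        fact = Fact(subject, value, source, valid_from=now)
        self.facts.append(fact)
        return fact

    def current(self, subject: str) -> Optional[Fact]:
        """The single fact whose validity window is still open."""
        return next((f for f in self.facts
                     if f.subject == subject and f.valid_to is None), None)

    def history(self, subject: str) -> list[Fact]:
        """Full audit trail, including superseded facts."""
        return [f for f in self.facts if f.subject == subject]

store = MemoryStore()
store.assert_fact("editor", "Vim", source="slack")
store.assert_fact("editor", "Emacs", source="slack")
print(store.current("editor").value)   # -> Emacs (the current fact wins)
print(len(store.history("editor")))    # -> 2 (the Vim fact survives for audit)
```

The design choice this illustrates is that "I switched from Vim to Emacs" produces two rows, not an overwrite: retrieval at query time sees only the open window, while the audit trail can answer what was believed, and when.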

Benchmark vs Mem0

We published the methodology and code at github.com/aurra-memory/benchmarks. On the LoCoMo long-term conversational memory dataset:

Metric                           Aurra    Mem0
Memories with fabricated dates   0%       22.95%
Judge-rated useful               42.4%    28.2%
Judge-rated misattributed        1.7%     7.2%

The results are reproducible: clone the repo, set your API keys, and run the scripts.

What Aurra is not

  • Not a vector database. Aurra uses pgvector under the hood, but the product is the extraction, supersession, and citation layer on top.
  • Not a knowledge graph. Memories are atomic facts with provenance, not nodes and edges.
  • Not a chat history store. Aurra extracts durable facts from raw input — it does not store every message.

Status

Aurra is in active development. The API is stable for the endpoints documented here. Breaking changes are announced in advance via GitHub releases and email to API key holders.