Add and Query

How do I add a belief grain?

Use POST /api/memories/{id}/add to store a belief grain with subject, relation, and object fields. All three fields are indexed in the hexastore for graph traversal and triple-pattern queries.

Belief grains are the primary unit of structured knowledge in the Areev context database. Each belief represents a single factual claim — “john likes coffee” — decomposed into a subject-relation-object triple. The hexastore indexes all six permutations of these fields, enabling lookups by any combination. This is the foundation for AI agent memory, where agents accumulate beliefs about users, tasks, and the world over time.

Optional fields include confidence (0.0–1.0), namespace for organizational grouping, and tags for metadata. The response returns a blob_hash (SHA-256 content address) that you use in supersede, forget, and inspect operations.

Python:

import areev

db = areev.open("./my-data", policies=["gdpr"])
db.add("belief", {"subject": "john", "relation": "likes", "object": "coffee"})

HTTP:

POST /api/memories/customer-support/add
Content-Type: application/json

{
  "subject": "john",
  "relation": "likes",
  "object": "coffee",
  "confidence": 0.95,
  "namespace": "preferences",
  "tags": ["user-pref"]
}

CLI:

areev add belief subject=john relation=likes object="coffee"
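To make the six-permutation indexing concrete, here is a toy sketch in Python. The `ToyHexastore` class is purely illustrative (not Areev's storage engine): it keys each triple under all six field orders (spo, sop, pso, pos, osp, ops), so a lookup with any two bound fields is a direct index hit rather than a scan.

```python
from collections import defaultdict

# Toy hexastore: index one triple under all six field orders.
# Illustrative only -- not Areev's actual storage layout.
PERMS = ["spo", "sop", "pso", "pos", "osp", "ops"]

class ToyHexastore:
    def __init__(self):
        self.indexes = {p: defaultdict(set) for p in PERMS}

    def add(self, s, r, o):
        fields = {"s": s, "p": r, "o": o}
        for perm in PERMS:
            a, b, c = (fields[ch] for ch in perm)
            # Key on the first two fields of the permutation;
            # the value set holds the third.
            self.indexes[perm][(a, b)].add(c)

    def objects_for(self, subject, relation):
        # (subject, relation, ?) resolves via the "spo" index.
        return self.indexes["spo"][(subject, relation)]

hx = ToyHexastore()
hx.add("john", "likes", "coffee")
hx.add("john", "likes", "tea")
print(sorted(hx.objects_for("john", "likes")))  # ['coffee', 'tea']
```

The same triples also answer the reverse question ("who likes coffee?") through the ops index, which is why all six permutations are stored.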

How do I add other grain types?

Use POST /api/memories/{id}/batch-add to store grains of any of the 10 OMS grain types. Unlike the /add endpoint, batch-add does not index fields into the triple store.

The OMS defines 10 grain types that capture different kinds of autonomous memory: belief, event, state, action, observation, goal, reasoning, consensus, consent, and workflow. Each type has required fields suited to its purpose — events track what happened, states track current values, actions track tool executions. Batch-add accepts an array of typed grains with shared namespace and tags, returning per-grain status.

Use batch-add when you need to write multiple grains atomically or when you are storing non-belief grain types. Beliefs written via batch-add are stored and searchable but do not populate the hexastore or knowledge graph. If you need triple indexing, use the /add endpoint instead.

Python:

db.add("event", {"content": "john completed onboarding", "subject": "john"})
db.add("state", {"context_data": {"label": "john theme", "key": "theme", "value": "dark"}})
db.add("action", {"tool_name": "send_email", "input": '{"to":"john"}', "content": "sent"})

HTTP:

POST /api/memories/customer-support/batch-add
Content-Type: application/json

{
  "namespace": "onboarding",
  "tags": ["q1-2026"],
  "grains": [
    {
      "grain_type": "event",
      "fields": { "content": "john completed onboarding", "subject": "john" }
    },
    {
      "grain_type": "state",
      "fields": { "context_data": {"label": "john theme", "key": "theme", "value": "dark"} }
    }
  ]
}

CLI:

areev add event content="john completed onboarding" subject=john
areev add state context_data='{"label":"john theme","key":"theme","value":"dark"}'
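If you assemble batch-add bodies in client code, a small helper can catch typoed grain types before anything hits the API. The `build_batch_request` function below is hypothetical (it is not part of the Areev SDK); it simply constructs the documented request shape from plain Python data.

```python
import json

# The 10 OMS grain types listed above.
GRAIN_TYPES = {
    "belief", "event", "state", "action", "observation",
    "goal", "reasoning", "consensus", "consent", "workflow",
}

def build_batch_request(namespace, tags, grains):
    """Assemble a batch-add body matching the documented shape.

    `grains` is a list of (grain_type, fields) pairs; unknown
    grain types are rejected client-side.
    """
    for grain_type, _ in grains:
        if grain_type not in GRAIN_TYPES:
            raise ValueError(f"unknown grain type: {grain_type}")
    return {
        "namespace": namespace,
        "tags": tags,
        "grains": [
            {"grain_type": gt, "fields": fields} for gt, fields in grains
        ],
    }

body = build_batch_request(
    "onboarding",
    ["q1-2026"],
    [
        ("event", {"content": "john completed onboarding", "subject": "john"}),
        ("state", {"context_data": {"label": "john theme", "key": "theme", "value": "dark"}}),
    ],
)
# Serialize and POST this to /api/memories/{id}/batch-add.
print(json.dumps(body, indent=2))
```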

How do I query memories?

Use POST /api/memories/{id}/recall to query grains. Areev automatically routes your query to the best engine: BM25 full-text search for text queries, hexastore lookups for structured filters, or RRF hybrid fusion when you combine both.

Query routing is determined by the parameters you provide. A query string triggers Tantivy BM25 full-text search. Structural fields (subject, relation, object) trigger hexastore lookups across six permutation indexes. When you provide both, Areev fuses results using Reciprocal Rank Fusion with the formula score(item) = sum(1/(k + rank_i)) where k defaults to 60. An embedding vector triggers HNSW KNN similarity search. Providing no parameters returns the most recent grains by timestamp.

Each result includes the grain’s blob_hash, grain_type, decoded fields, a relevance score, and metadata like namespace and created_at. Post-retrieval filters for tags, importance, confidence, and contradiction detection are applied after fusion scoring. Use limit and offset for pagination.
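The RRF formula above is easy to verify by hand. The sketch below is not Areev's internal fusion code, just a minimal implementation of score(item) = sum(1/(k + rank_i)) with the default k = 60, applied to two hypothetical ranked lists:

```python
def rrf_fuse(ranked_lists, k=60):
    """Reciprocal Rank Fusion: score(item) = sum(1 / (k + rank_i)).

    `ranked_lists` holds one ranked list of item ids per engine
    (e.g. a BM25 ranking and a hexastore ranking); ranks are 1-based.
    """
    scores = {}
    for ranking in ranked_lists:
        for rank, item in enumerate(ranking, start=1):
            scores[item] = scores.get(item, 0.0) + 1.0 / (k + rank)
    # Highest fused score first.
    return sorted(scores.items(), key=lambda kv: -kv[1])

bm25 = ["g1", "g2", "g3"]   # full-text ranking (hypothetical ids)
hexa = ["g3", "g1"]         # structural ranking
fused = rrf_fuse([bm25, hexa])
# g1: 1/61 + 1/62; g3: 1/63 + 1/61; g2: 1/62 -> g1 wins
print([item for item, _ in fused])  # ['g1', 'g3', 'g2']
```

Note how an item ranked first by one engine and absent from the other (g2) can still be outscored by an item that appears in both lists, which is the point of rank-based fusion.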

Python:

hits = db.recall(query="john preferences", limit=5)
hits = db.recall(subject="john", relation="likes")
hits = db.recall(query="coffee", subject="john", limit=10)

HTTP:

POST /api/memories/customer-support/recall
Content-Type: application/json

{
  "query": "john preferences",
  "subject": "john",
  "namespace": "preferences",
  "limit": 10
}

CLI:

areev recall --query "john preferences" --subject john --limit 5 --json

How does query routing work?

Areev selects the query strategy based on which parameters you provide. You do not need to choose a mode manually.

The routing logic inspects the recall request and activates the appropriate engine. When only query is set, BM25 scores grains by term frequency and inverse document frequency. When only structural fields are set, the hexastore returns exact triple matches in constant time. When both are present, each engine runs independently, and RRF merges the ranked lists into a single result set. This design lets AI memory queries blend fuzzy text relevance with precise structural lookups in one call.

When an embedding vector is provided, HNSW approximate nearest-neighbor search runs over the vector index (USearch or FAISS, depending on the memory’s vector_backend setting). If no parameters are set at all, Areev falls back to a time-sorted scan and returns the most recent grains.

Parameters provided          | Query mode          | Engine
query only                   | Full-text search    | Tantivy BM25
subject, relation, or object | Structural lookup   | Hexastore (6 permutation indexes)
query + structural fields    | Hybrid fusion       | RRF (BM25 + hexastore)
embedding vector             | Semantic similarity | HNSW (USearch or FAISS)
None                         | Recent grains       | Time-sorted scan
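The routing rules can be mirrored in a few lines of Python. This is an illustrative sketch only: Areev performs this selection server-side, you never call a router yourself, and the precedence between an embedding and other parameters is an assumption here, not documented behavior.

```python
def route_recall(query=None, subject=None, relation=None, obj=None, embedding=None):
    """Pick a query engine from the parameters present.

    Illustrative sketch of the routing table above; the
    embedding-first precedence is an assumption.
    """
    structural = any(v is not None for v in (subject, relation, obj))
    if embedding is not None:
        return "hnsw"        # semantic similarity
    if query and structural:
        return "rrf"         # hybrid fusion (BM25 + hexastore)
    if query:
        return "bm25"        # full-text search
    if structural:
        return "hexastore"   # structural lookup
    return "recent"          # time-sorted scan

print(route_recall(query="john preferences", subject="john"))  # rrf
```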
Related

  • Search — advanced search filters and temporal queries
  • Managing Memories — creating and configuring memory instances
  • Supersede — updating grains while preserving history