HTTP REST API
How do I add and query grains?
Use POST /api/memories/{id}/add for triple-indexed beliefs, POST /api/memories/{id}/batch-add for typed grains, and POST /api/memories/{id}/recall to search.
The Areev context database exposes all grain operations through JSON endpoints. The /add endpoint creates a single belief grain with subject/relation/object triple indexing in the hexastore. For other grain types (event, state, workflow, action, observation, goal, reasoning, consensus, consent), use /batch-add with an explicit grain_type field. Both endpoints return content-addressed hashes.
The /recall endpoint provides AI agent memory retrieval through three query modes: structural filters (subject, relation, object, namespace), free-text BM25 search, and hybrid search combining both via Reciprocal Rank Fusion. You can layer additional filters for grain type, tags, confidence threshold, importance threshold, temporal expressions, and contradiction detection. Results arrive sorted by relevance score with each hit containing the grain hash, type, score, and all fields.
# Add a belief grain (triple-indexed in the hexastore)
curl -X POST http://localhost:4009/api/memories/default/add \
-H "Content-Type: application/json" \
-d '{
"subject": "john",
"relation": "likes",
"object": "coffee",
"confidence": 0.95,
"namespace": "prefs",
"tags": ["ui"]
}'
# {"blob_hash": "a3f8c1...", "status": "created"}
# Batch-add typed grains (any of the 10 OMS types)
curl -X POST http://localhost:4009/api/memories/default/batch-add \
-H "Content-Type: application/json" \
-d '{
"grains": [
{"grain_type": "event", "fields": {"content": "User logged in", "user_id": "john"}},
{"grain_type": "action", "fields": {"input": "translate(hello)", "content": "hola"}}
]
}'
# {"added": 2, "failed": 0, "hashes": ["b2e9...", "c4d1..."]}
# Recall grains with hybrid search
curl -X POST http://localhost:4009/api/memories/default/recall \
-H "Content-Type: application/json" \
-d '{
"query": "john preferences",
"subject": "john",
"grain_type": "belief",
"limit": 10
}'
# The same add/recall flow via the Python SDK
from areev import Areev
areev = Areev()
areev.remember("john likes coffee")
results = areev.recall("john preferences")
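The three recall modes above differ only in which fields the request body carries; a minimal Python sketch of assembling one (the helper name is ours; the field names follow the curl example):

```python
import json

def build_recall_payload(query=None, subject=None, relation=None,
                         obj=None, grain_type=None, limit=10):
    """Assemble a /recall request body. A text query alone runs BM25,
    structural fields alone run hexastore filters, and both together
    run hybrid search fused via Reciprocal Rank Fusion."""
    payload = {"limit": limit}
    if query is not None:
        payload["query"] = query  # free-text side
    for key, value in (("subject", subject), ("relation", relation),
                       ("object", obj), ("grain_type", grain_type)):
        if value is not None:
            payload[key] = value  # structural side
    return payload

# POST this body to /api/memories/{id}/recall (see the curl call above).
body = build_recall_payload(query="john preferences",
                            subject="john", grain_type="belief")
print(json.dumps(body, sort_keys=True))
```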
| Method | Endpoint | Description |
|---|---|---|
| GET | /memories/{id}/grains | List grains with pagination, type filters, and date range |
| GET | /memories/{id}/grains/{hash} | Get a single grain by content-address hash |
| GET | /memories/{id}/grains/{hash}/raw | Download the raw .mg blob binary |
| POST | /memories/{id}/batch-add | Add multiple typed grains in a single request |
| POST | /memories/{id}/remember | Ingest natural language text as memory |
| POST | /memories/{id}/recall-chain | Multi-hop recall with sub-query decomposition |
| POST | /memories/{id}/supersede | Replace a grain with updated content (immutable versioning) |
| POST | /memories/{id}/accumulate | Atomic delta updates on a grain |
| POST | /memories/{id}/forget | Delete grains by hash, subject, or user_id |
| POST | /memories/{id}/flush | Flush write buffer to disk |
| POST | /memories/{id}/search | Full-text BM25 search |
| POST | /memories/{id}/app/exec | Execute an arbitrary engine command |
How do I manage compliance and PII detection?
Use the verification, PII detection, consent, and export endpoints to enforce regulatory requirements on the context database.
Areev provides a compliance subsystem that runs 87+ automated checks against sealed policies. The /verify/run endpoint executes all checks (or filters by regulation) and returns pass/fail/warn counts with detailed results per check. The /compliance/metrics endpoint provides summary counts, /compliance/violations lists failures with severity, and /compliance/impact generates impact assessments for the EU AI Act and Colorado AI Act.
PII detection runs a hybrid NER + regex pipeline locally. The /detect-pii endpoint accepts a text string and returns match categories, positions, and confidence scores. For autonomous memory workflows, the AI agent memory system uses PII detection during ingest to flag sensitive content before it enters the database. Consent endpoints (/consent/grant, /consent/revoke) record and revoke user consent per purpose, and /export/{user_id} fulfills subject access requests. Authorization endpoints (/authz/grant, /authz/revoke, /authz/check) manage Zanzibar-style access tuples separately from consent.
The audit trail at /audit records every operation with user, event type, and grain hash filters. The retention system at /retention/status and /retention/enforce manages TTL-based expiration per namespace.
# Run all 87+ compliance checks
curl -X POST http://localhost:4009/api/memories/default/verify/run \
-H "Content-Type: application/json" \
-d '{"regulation": "gdpr"}'
# {"total": 87, "pass": 85, "fail": 1, "warn": 1, "checks": [...]}
# Detect PII with detailed match positions
curl -X POST http://localhost:4009/api/memories/default/detect-pii \
-H "Content-Type: application/json" \
-d '{"text": "Contact john@example.com at 555-0100", "detailed": true}'
# Export all grains for a user (GDPR subject access request)
curl http://localhost:4009/api/memories/default/export/john
# Export compliance report
curl http://localhost:4009/api/memories/default/compliance/export
# Python SDK: recall, then forget a grain by its content hash
from areev import Areev
areev = Areev()
results = areev.recall("john preferences")
areev.forget("a3f8c1...")
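Pre-ingest screening can wrap /detect-pii in a small client, as described above. A sketch under the same localhost deployment assumed throughout; the response schema is not spelled out here, so result handling is left as a comment:

```python
import json
from urllib import request

BASE = "http://localhost:4009/api/memories/default"  # adjust to your deployment

def pii_request(text, detailed=True):
    """Request body for POST /detect-pii (shape matches the curl example)."""
    return {"text": text, "detailed": detailed}

def detect_pii(text):
    """POST the text and return the decoded detection report."""
    req = request.Request(
        BASE + "/detect-pii",
        data=json.dumps(pii_request(text)).encode(),
        headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        return json.load(resp)

# Screen content before /add or /remember; inspect the returned match
# categories, positions, and confidence scores to decide whether to ingest.
# report = detect_pii("Contact john@example.com at 555-0100")
```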
| Method | Endpoint | Description |
|---|---|---|
| GET | /memories/{id}/verify/latest | Last verification result |
| POST | /memories/{id}/verify/run | Run compliance checks (optional regulation filter) |
| GET | /memories/{id}/compliance/metrics | Pass/fail summary counts |
| GET | /memories/{id}/compliance/violations | Failed checks with severity |
| GET | /memories/{id}/compliance/impact | Impact assessment (EU AI Act, Colorado AI Act) |
| GET | /memories/{id}/compliance/export | Full compliance report (JSON/text/CSV) |
| POST | /memories/{id}/detect-pii | PII/PHI detection (simple or detailed mode) |
| GET | /memories/{id}/audit | Query audit trail (filter by user, event type, grain hash) |
| GET | /memories/{id}/retention/status | Per-namespace retention status |
| POST | /memories/{id}/retention/enforce | Trigger TTL-based retention enforcement |
| GET | /memories/{id}/export/{user_id} | Export all grains for a user |
| GET | /settings/ner/status | NER model availability |
How do I work with sessions, goals, and intelligence?
Call /sessions/{sid}/bootstrap to load full session context, then use the goal and action endpoints to track AI agent progress.
Session endpoints provide the runtime context an AI agent needs to resume work. The /bootstrap endpoint returns the latest state grain, active goals, and recent actions in a single call, eliminating multiple round-trips. The /context endpoint compiles a context window within a token budget, selecting the highest-value actions, beliefs, and goals that fit.
Goal management follows a lifecycle: create goals with /goals/create, update their state (active, satisfied, failed, suspended) with /goals/{hash}/state, and delegate them to other agents with /goals/{hash}/delegate. The /goals/tree endpoint returns the full hierarchy, and /goals/blocked lists suspended goals. Action endpoints at /actions, /actions/summary, /actions/chain/{task}, and /actions/failed provide operational telemetry.
The intelligence endpoints at /events (SSE long-poll for engine events) and /consolidate (merge session memories) support autonomous memory management. The /states/diff endpoint compares two state grains to show what changed between checkpoints.
# Bootstrap a session (state + active goals + recent actions)
curl http://localhost:4009/api/memories/default/sessions/sess_abc/bootstrap
# Create a goal in a session
curl -X POST http://localhost:4009/api/memories/default/sessions/sess_abc/goals/create \
-H "Content-Type: application/json" \
-d '{"description": "Deploy v2.0", "priority": "high", "criteria": "All tests pass"}'
# Compile context with a token budget
curl "http://localhost:4009/api/memories/default/sessions/sess_abc/context?token_budget=4096"
# CLI: recall within a scope path
areev recall --query "session context" --scope-path acme/prod --json
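Each lifecycle transition above is a single PUT to /goals/{hash}/state. A hedged Python sketch: the `state` field name in the body is an assumption, while the four state values come from the endpoint table:

```python
# Valid lifecycle states per the goals endpoints.
VALID_STATES = {"active", "satisfied", "failed", "suspended"}

def goal_state_update(goal_hash, state):
    """Path and body for PUT /memories/{id}/goals/{hash}/state.
    The `state` field name is illustrative; check the request schema."""
    if state not in VALID_STATES:
        raise ValueError(f"unknown goal state: {state!r}")
    return f"/goals/{goal_hash}/state", {"state": state}

# e.g. once all success criteria pass:
path, body = goal_state_update("b2e9...", "satisfied")
```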
| Method | Endpoint | Description |
|---|---|---|
| GET | /memories/{id}/sessions/{sid}/bootstrap | Full session context |
| GET | /memories/{id}/sessions/{sid}/state | Latest state grain |
| GET | /memories/{id}/sessions/{sid}/goals | Active goals |
| GET | /memories/{id}/sessions/{sid}/actions | Recent actions |
| GET | /memories/{id}/sessions/{sid}/actions/summary | Per-tool success/error breakdown |
| GET | /memories/{id}/sessions/{sid}/actions/chain/{task} | Action chain for a task |
| GET | /memories/{id}/sessions/{sid}/actions/failed | Failed actions |
| POST | /memories/{id}/sessions/{sid}/goals/create | Create a goal |
| PUT | /memories/{id}/goals/{hash}/state | Update goal state (active/satisfied/failed/suspended) |
| POST | /memories/{id}/goals/{hash}/delegate | Delegate goal to another agent |
| GET | /memories/{id}/sessions/{sid}/goals/tree | Goal hierarchy tree |
| GET | /memories/{id}/sessions/{sid}/goals/blocked | Blocked (suspended) goals |
| GET | /memories/{id}/sessions/{sid}/context | Compile context within token budget |
| POST | /memories/{id}/sessions/{sid}/consolidate | Consolidate session memories |
| POST | /memories/{id}/states/diff | Diff two state grains |
| GET | /memories/{id}/events | Long-poll for engine events (SSE) |
How do I manage policies, scopes, hooks, and providers?
Use the policy, scope-tree, hook, and provider endpoints to configure the database’s governance, isolation, event streaming, and LLM integrations.
Policy endpoints control the sealed policy configuration that governs every read and write. The GET /policy endpoint returns the active policy, PUT /policy adds or removes presets, and POST /policy/simulate tests changes without applying them. Scope endpoints manage hierarchical multi-tenant isolation: PUT /scope-tree defines the tree structure, and POST /scope-erase crypto-erases all data within a scope.
Hook endpoints enable CDC (Change Data Capture) event streaming. Each hook targets specific events (grain_created, crypto_erase, etc.) and delivers payloads via webhook or internal CDC log. Provider endpoints configure LLM integrations: PUT /settings/providers/{id} registers a provider with its API key and base URL, POST /settings/providers/{id}/test verifies the connection, and GET /settings/providers/{id}/models discovers available models. The /models endpoint lists all models across providers with capability filters.
The knowledge graph endpoint at /graph performs BFS traversal from a starting entity. Authorization endpoints (/authz/grant, /authz/revoke, /authz/check) manage Zanzibar-style access tuples.
# Get current policy
curl http://localhost:4009/api/memories/default/policy
# Modify policy (add HIPAA)
curl -X PUT http://localhost:4009/api/memories/default/policy \
-H "Content-Type: application/json" \
-d '{"add_policies": ["hipaa"]}'
# Set scope tree for multi-tenant isolation
curl -X PUT http://localhost:4009/api/memories/default/scope-tree \
-H "Content-Type: application/json" \
-d '{"name": "root", "children": [{"name": "acme", "children": [{"name": "prod"}]}]}'
# Create a webhook hook
curl -X POST http://localhost:4009/api/memories/default/hooks \
-H "Content-Type: application/json" \
-d '{"name": "slack-notify", "mode": "webhook", "url": "https://hooks.slack.com/...",
"events": ["grain_created", "crypto_erase"]}'
# Configure an LLM provider
curl -X PUT http://localhost:4009/api/settings/providers/openai \
-H "Content-Type: application/json" \
-d '{"kind": "openai", "api_key": "sk-...", "base_url": "https://api.openai.com/v1"}'
# CLI: list hooks and inspect the scope tree
areev hook list --json
areev scope-tree-get --json
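A safe rollout of the HIPAA change above is to dry-run it through /policy/simulate first. A sketch of the shared request body (`add_policies` matches the PUT example; `remove_policies` is our assumed counterpart for removing presets):

```python
import json

def policy_change(add=None, remove=None):
    """Body for PUT /policy or POST /policy/simulate. `add_policies`
    matches the documented PUT example; `remove_policies` is assumed."""
    body = {}
    if add:
        body["add_policies"] = list(add)
    if remove:
        body["remove_policies"] = list(remove)
    return body

# Dry-run first, then apply the same body once satisfied:
# POST {BASE}/policy/simulate  with policy_change(add=["hipaa"])
# PUT  {BASE}/policy           with the identical payload
print(json.dumps(policy_change(add=["hipaa"])))
```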
| Method | Endpoint | Description |
|---|---|---|
| GET | /memories/{id}/policy | Current policy configuration |
| PUT | /memories/{id}/policy | Modify policy (add/remove presets) |
| POST | /memories/{id}/policy/simulate | Simulate policy changes without applying |
| GET/PUT/DELETE | /memories/{id}/scope-tree | Scope tree CRUD |
| POST | /memories/{id}/scope-erase | Crypto-erase an entire scope |
| GET | /memories/{id}/hooks | List hooks |
| POST | /memories/{id}/hooks | Create a hook (CDC or webhook) |
| DELETE | /memories/{id}/hooks/{hook_id} | Delete a hook |
| GET | /memories/{id}/graph | Knowledge graph BFS traversal |
| POST | /memories/{id}/authz/grant | Grant Zanzibar authorization tuple |
| POST | /memories/{id}/authz/revoke | Revoke authorization tuple |
| GET | /memories/{id}/authz/check | Check authorization |
| GET/PUT | /settings/llm | LLM provider settings |
| GET | /settings/providers | List configured providers |
| PUT | /settings/providers/{id} | Create or update a provider |
| DELETE | /settings/providers/{id} | Remove a provider |
| POST | /settings/providers/{id}/test | Test provider connection |
| GET | /settings/providers/{id}/models | Discover provider models |
| GET | /models | Unified model list (filter by capability) |
| GET/PUT | /settings/model-assignments | Role-to-model mappings |
How do I use health, stats, auth, ingest, and import/export?
Call /health for system status, /stats for grain counts, and use the ingest endpoints for staged document import with LLM analysis.
Health and config endpoints require no authentication. The /health endpoint returns status and version, while /config reports the active auth mode and deployment settings so clients can configure themselves. The /stats endpoint returns total grain count, disk space, and per-type breakdowns; /stats/history provides these metrics over time.
Document import follows an async job pipeline: POST /imports submits a file for import, GET /imports/{job_id} checks job status, GET /imports/{job_id}/stream provides SSE progress events, and POST /imports/{job_id}/cancel aborts a running job. For synchronous document import, use POST /import-document which extracts and stores grains in a single call. The /extract-document endpoint extracts text without creating grains.
Authentication is configured at server startup with --oidc-issuer (enables OIDC JWT bearer token validation) or via API key management (/auth/keys). Import/export endpoints handle .mg blob files for data portability.
# Health check (no auth required)
curl http://localhost:4009/api/health
# {"status": "ok", "version": "0.1.0"}
# App config (no auth required)
curl http://localhost:4009/api/config
# Memory stats with type breakdown
curl http://localhost:4009/api/memories/default/stats
# Import a document (async job pipeline)
curl -X POST http://localhost:4009/api/memories/default/imports \
-F "file=@report.pdf"
# Returns: {"job_id": "...", "status": "queued"}
# Check import job status
curl http://localhost:4009/api/memories/default/imports/{job_id}
# Stream import progress (SSE)
curl http://localhost:4009/api/memories/default/imports/{job_id}/stream
# Import a document (synchronous, single call)
curl -X POST http://localhost:4009/api/memories/default/import-document \
-F "file=@report.pdf"
# Import .mg blobs
curl -X POST http://localhost:4009/api/memories/default/import-file \
-F "file=@export.mg"
# Export as streaming .mg
curl http://localhost:4009/api/memories/default/export-file
# CLI equivalents
areev info
areev import-doc report.pdf --namespace docs --json
areev import /path/to/mg-files
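When SSE streaming is inconvenient, the async pipeline above can be consumed as a status poll. A sketch assuming `queued` and `running` are the in-flight statuses (only `queued` appears in the submit response shown) and that terminal jobs report something else:

```python
import json
import time
from urllib import request

BASE = "http://localhost:4009/api/memories/default"  # adjust to your deployment

def is_in_flight(status):
    # "queued" comes from the submit response; "running" is assumed.
    return status in ("queued", "running")

def wait_for_import(job_id, poll_seconds=2.0, timeout=300.0):
    """Poll GET /imports/{job_id} until the job reaches a terminal status."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        with request.urlopen(f"{BASE}/imports/{job_id}") as resp:
            job = json.load(resp)
        if not is_in_flight(job.get("status")):
            return job
        time.sleep(poll_seconds)
    raise TimeoutError(f"import job {job_id} not finished after {timeout}s")
```

POST /imports/{job_id}/cancel remains available if the loop times out and the job should be aborted.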
| Method | Endpoint | Description |
|---|---|---|
| GET | /config | App deployment configuration (public) |
| GET | /health | Health check (public) |
| GET/POST | /auth/keys | List or create API keys |
| DELETE | /auth/keys/{key_id} | Revoke an API key |
| GET/POST | /memories | List or create memory instances |
| GET/DELETE | /memories/{id} | Get or delete a memory instance |
| GET | /memories/{id}/stats | Memory statistics |
| GET | /memories/{id}/stats/history | Historical stats over time |
| GET | /memories/{id}/tier-info | Tiered storage info |
| GET | /memories/{id}/agents | List agents |
| GET | /memories/{id}/agents/{agent_id} | Agent detail with trust history |
| GET/POST | /memories/{id}/agents/identities | List or register agent identities |
| GET | /memories/{id}/agents/identities/{agent_id} | Get agent identity |
| POST | /memories/{id}/agents/identities/{agent_id}/deactivate | Deactivate agent |
| POST | /memories/{id}/remember | Ingest natural language text as memory |
| POST | /memories/{id}/recall-chain | Multi-hop recall (sub-query decomposition) |
| POST | /memories/{id}/accumulate | Atomic delta updates on a grain |
| POST | /memories/{id}/flush | Flush write buffer to disk |
| POST | /memories/{id}/search | Full-text BM25 search |
| POST | /memories/{id}/import-document | Synchronous document import |
| POST/GET | /memories/{id}/imports | Submit or list import jobs (async pipeline) |
| GET | /memories/{id}/imports/{job_id} | Import job status |
| GET | /memories/{id}/imports/{job_id}/stream | Import job progress (SSE) |
| POST | /memories/{id}/imports/{job_id}/cancel | Cancel import job |
| POST | /memories/{id}/extract-document | Extract text from document |
| POST | /memories/{id}/import-file | Import .mg blob file |
| GET | /memories/{id}/export-file | Export as .mg stream |
| POST | /memories/{id}/chat/stream | LLM chat completions (SSE proxy) |
| GET/POST | /memories/{id}/chat/threads | List or create chat threads |
| GET/DELETE/PATCH | /memories/{id}/chat/threads/{thread_id} | Get, delete, or update a thread |
| GET | /memories/{id}/chat/threads/{thread_id}/messages | Thread message history |
| GET | /memories/{id}/provenance | List provenance records |
| GET | /memories/{id}/provenance/summary | Provenance statistics |
| GET | /memories/{id}/provenance/{recall_id} | Single provenance record |
| POST | /memories/{id}/consent/grant | Grant user consent |
| POST | /memories/{id}/consent/revoke | Revoke user consent |
| POST | /memories/{id}/consent/grant-bulk | Grant consent for multiple users |
| GET | /memories/{id}/consent/status | Consent status for all users |