HTTP REST API

How do I add and query grains?

Use POST /api/memories/{id}/add to create a single triple-indexed belief, POST /api/memories/{id}/batch-add to add typed grains in bulk, and POST /api/memories/{id}/recall to search.

The Areev context database exposes all grain operations through JSON endpoints. The /add endpoint creates a single belief grain with subject/relation/object triple indexing in the hexastore. For other grain types (event, state, workflow, action, observation, goal, reasoning, consensus, consent), use /batch-add with an explicit grain_type field. Both endpoints return content-addressed hashes.
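Content addressing means a grain's hash is derived from the grain's content, so identical grains map to the same address. A minimal sketch of the idea, assuming SHA-256 over a canonical JSON serialization (Areev's actual blob format and hash scheme are not specified here):

```python
import hashlib
import json

def grain_hash(fields: dict) -> str:
    """Illustrative content address: SHA-256 over canonical JSON, so the
    same fields always produce the same hash regardless of key order.
    (Areev's real .mg blob encoding may differ.)"""
    canonical = json.dumps(fields, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

h1 = grain_hash({"subject": "john", "relation": "likes", "object": "coffee"})
h2 = grain_hash({"object": "coffee", "subject": "john", "relation": "likes"})
assert h1 == h2  # key order does not change the address
```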

The /recall endpoint provides AI agent memory retrieval through three query modes: structural filters (subject, relation, object, namespace), free-text BM25 search, and hybrid search combining both via Reciprocal Rank Fusion. You can layer additional filters for grain type, tags, confidence threshold, importance threshold, temporal expressions, and contradiction detection. Results arrive sorted by relevance score with each hit containing the grain hash, type, score, and all fields.
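The Reciprocal Rank Fusion step of the hybrid mode can be sketched as follows; the constant k=60 is the value commonly used in the RRF literature, not necessarily Areev's internal setting:

```python
def rrf_fuse(rankings, k: int = 60):
    """Reciprocal Rank Fusion: each document's score is the sum of
    1 / (k + rank) over every result list it appears in, rewarding
    documents ranked highly by both BM25 and the structural filter."""
    scores = {}
    for ranked in rankings:
        for rank, doc in enumerate(ranked, start=1):
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

bm25 = ["g1", "g2", "g3"]        # free-text ranking
structural = ["g3", "g1", "g4"]  # subject/relation/object filter ranking
fused = rrf_fuse([bm25, structural])
```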

# Add a belief grain (triple-indexed in the hexastore)
curl -X POST http://localhost:4009/api/memories/default/add \
  -H "Content-Type: application/json" \
  -d '{
    "subject": "john",
    "relation": "likes",
    "object": "coffee",
    "confidence": 0.95,
    "namespace": "prefs",
    "tags": ["ui"]
  }'
# {"blob_hash": "a3f8c1...", "status": "created"}

# Batch-add typed grains (any of the 10 OMS types)
curl -X POST http://localhost:4009/api/memories/default/batch-add \
  -H "Content-Type: application/json" \
  -d '{
    "grains": [
      {"grain_type": "event", "fields": {"content": "User logged in", "user_id": "john"}},
      {"grain_type": "action", "fields": {"input": "translate(hello)", "content": "hola"}}
    ]
  }'
# {"added": 2, "failed": 0, "hashes": ["b2e9...", "c4d1..."]}

# Recall grains with hybrid search
curl -X POST http://localhost:4009/api/memories/default/recall \
  -H "Content-Type: application/json" \
  -d '{
    "query": "john preferences",
    "subject": "john",
    "grain_type": "belief",
    "limit": 10
  }'
Python SDK equivalent:

from areev import Areev

areev = Areev()
areev.remember("john likes coffee")
results = areev.recall("john preferences")
Endpoints:

GET /memories/{id}/grains - List grains with pagination, type filters, and date range
GET /memories/{id}/grains/{hash} - Get a single grain by content-address hash
GET /memories/{id}/grains/{hash}/raw - Download the raw .mg blob binary
POST /memories/{id}/batch-add - Add multiple typed grains in a single request
POST /memories/{id}/remember - Ingest natural language text as memory
POST /memories/{id}/recall-chain - Multi-hop recall with sub-query decomposition
POST /memories/{id}/supersede - Replace a grain with updated content (immutable versioning)
POST /memories/{id}/accumulate - Atomic delta updates on a grain
POST /memories/{id}/forget - Delete grains by hash, subject, or user_id
POST /memories/{id}/flush - Flush write buffer to disk
POST /memories/{id}/search - Full-text BM25 search
POST /memories/{id}/app/exec - Execute an arbitrary engine command

How do I manage compliance and PII detection?

Use the verification, PII detection, consent, and export endpoints to enforce regulatory requirements on the context database.

Areev provides a compliance subsystem that runs 87+ automated checks against sealed policies. The /verify/run endpoint executes all checks (or filters by regulation) and returns pass/fail/warn counts with detailed results per check. The /compliance/metrics endpoint provides summary counts, /compliance/violations lists failures with severity, and /compliance/impact generates impact assessments for the EU AI Act and Colorado AI Act.

PII detection runs a hybrid NER + regex pipeline locally. The /detect-pii endpoint accepts a text string and returns match categories, positions, and confidence scores. For autonomous memory workflows, the AI agent memory system uses PII detection during ingest to flag sensitive content before it enters the database. Consent endpoints (/consent/grant, /consent/revoke) record and revoke user consent per purpose, and /export/{user_id} fulfills subject access requests. Authorization endpoints (/authz/grant, /authz/revoke, /authz/check) manage Zanzibar-style access tuples separately from consent.
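The regex half of such a pipeline can be sketched as below; the categories and patterns are illustrative examples, not Areev's actual rule set (the NER half would add model-based entity matches on top):

```python
import re

# Illustrative PII rules; real pipelines carry far more categories
# (SSNs, credit cards, addresses) and locale-aware phone formats.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\b\d{3}-\d{4}\b"),
}

def detect_pii(text: str):
    """Return (category, matched_text, start, end) tuples for each hit."""
    hits = []
    for category, pattern in PII_PATTERNS.items():
        for m in pattern.finditer(text):
            hits.append((category, m.group(), m.start(), m.end()))
    return hits

hits = detect_pii("Contact john@example.com at 555-0100")
```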

The audit trail at /audit records every operation with user, event type, and grain hash filters. The retention system at /retention/status and /retention/enforce manages TTL-based expiration per namespace.
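TTL-based expiration can be sketched like this; the retention table and the grain fields used here are assumptions for illustration, not Areev's actual policy schema:

```python
from datetime import datetime, timedelta, timezone

# Assumed per-namespace TTL policy, keyed by namespace name.
RETENTION = {"prefs": timedelta(days=365), "sessions": timedelta(days=30)}

def expired(grain: dict, now: datetime) -> bool:
    """A grain is expired once its age exceeds its namespace TTL."""
    ttl = RETENTION.get(grain["namespace"])
    if ttl is None:
        return False  # no retention policy for this namespace: keep
    return now - grain["created_at"] > ttl

now = datetime(2025, 6, 1, tzinfo=timezone.utc)
stale = {"namespace": "sessions", "created_at": now - timedelta(days=31)}
fresh = {"namespace": "sessions", "created_at": now - timedelta(days=1)}
to_delete = [g for g in (stale, fresh) if expired(g, now)]
```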

# Run all 87+ compliance checks
curl -X POST http://localhost:4009/api/memories/default/verify/run \
  -H "Content-Type: application/json" \
  -d '{"regulation": "gdpr"}'
# {"total": 87, "pass": 85, "fail": 1, "warn": 1, "checks": [...]}

# Detect PII with detailed match positions
curl -X POST http://localhost:4009/api/memories/default/detect-pii \
  -H "Content-Type: application/json" \
  -d '{"text": "Contact john@example.com at 555-0100", "detailed": true}'

# Export all grains for a user (GDPR subject access request)
curl http://localhost:4009/api/memories/default/export/john

# Export compliance report
curl http://localhost:4009/api/memories/default/compliance/export
Python SDK equivalent:

from areev import Areev

areev = Areev()
results = areev.recall("john preferences")
areev.forget("a3f8c1...")
Endpoints:

GET /memories/{id}/verify/latest - Last verification result
POST /memories/{id}/verify/run - Run compliance checks (optional regulation filter)
GET /memories/{id}/compliance/metrics - Pass/fail summary counts
GET /memories/{id}/compliance/violations - Failed checks with severity
GET /memories/{id}/compliance/impact - Impact assessment (EU AI Act, Colorado AI Act)
GET /memories/{id}/compliance/export - Full compliance report (JSON/text/CSV)
POST /memories/{id}/detect-pii - PII/PHI detection (simple or detailed mode)
GET /memories/{id}/audit - Query audit trail (filter by user, event type, grain hash)
GET /memories/{id}/retention/status - Per-namespace retention status
POST /memories/{id}/retention/enforce - Trigger TTL-based retention enforcement
GET /memories/{id}/export/{user_id} - Export all grains for a user
GET /settings/ner/status - NER model availability

How do I work with sessions, goals, and intelligence?

Call /sessions/{sid}/bootstrap to load full session context, then use the goal and action endpoints to track AI agent progress.

Session endpoints provide the runtime context an AI agent needs to resume work. The /bootstrap endpoint returns the latest state grain, active goals, and recent actions in a single call, eliminating multiple round-trips. The /context endpoint compiles a context window within a token budget, selecting the highest-value actions, beliefs, and goals that fit.
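Budgeted selection can be sketched as a greedy pack by value; the scoring and packing strategy here is an assumption, since Areev's actual selection logic is not documented in this section:

```python
def compile_context(items, token_budget: int):
    """Greedy sketch: take the highest-value items that still fit the
    token budget, skipping anything too large to include."""
    chosen, used = [], 0
    for item in sorted(items, key=lambda i: i["value"], reverse=True):
        if used + item["tokens"] <= token_budget:
            chosen.append(item)
            used += item["tokens"]
    return chosen, used

items = [
    {"id": "goal-1", "value": 0.9, "tokens": 1200},
    {"id": "belief-7", "value": 0.8, "tokens": 3500},  # too big to add second
    {"id": "action-3", "value": 0.4, "tokens": 600},
]
chosen, used = compile_context(items, token_budget=4096)
```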

Goal management follows a lifecycle: create goals with /goals/create, update their state (active, satisfied, failed, suspended) with /goals/{hash}/state, and delegate them to other agents with /goals/{hash}/delegate. The /goals/tree endpoint returns the full hierarchy, and /goals/blocked lists suspended goals. Action endpoints at /actions, /actions/summary, /actions/chain/{task}, and /actions/failed provide operational telemetry.
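The lifecycle can be modeled as a small state machine. The four states come from the section above; the allowed-transition table is an assumption (Areev may permit other transitions, e.g. reopening a failed goal):

```python
# Assumed transition rules over the documented states.
TRANSITIONS = {
    "active": {"satisfied", "failed", "suspended"},
    "suspended": {"active", "failed"},
    "satisfied": set(),  # terminal
    "failed": set(),     # terminal
}

def update_goal_state(current: str, new: str) -> str:
    """Mirror of PUT /goals/{hash}/state: reject illegal transitions."""
    if new not in TRANSITIONS[current]:
        raise ValueError(f"illegal transition {current} -> {new}")
    return new

state = update_goal_state("active", "suspended")  # goal now blocked
state = update_goal_state(state, "active")        # resumed
```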

The intelligence endpoints at /events (SSE long-poll for engine events) and /consolidate (merge session memories) support autonomous memory management. The /states/diff endpoint compares two state grains to show what changed between checkpoints.

# Bootstrap a session (state + active goals + recent actions)
curl http://localhost:4009/api/memories/default/sessions/sess_abc/bootstrap

# Create a goal in a session
curl -X POST http://localhost:4009/api/memories/default/sessions/sess_abc/goals/create \
  -H "Content-Type: application/json" \
  -d '{"description": "Deploy v2.0", "priority": "high", "criteria": "All tests pass"}'

# Compile context with a token budget
curl "http://localhost:4009/api/memories/default/sessions/sess_abc/context?token_budget=4096"
CLI equivalent:

areev recall --query "session context" --scope-path acme/prod --json
Endpoints:

GET /memories/{id}/sessions/{sid}/bootstrap - Full session context
GET /memories/{id}/sessions/{sid}/state - Latest state grain
GET /memories/{id}/sessions/{sid}/goals - Active goals
GET /memories/{id}/sessions/{sid}/actions - Recent actions
GET /memories/{id}/sessions/{sid}/actions/summary - Per-tool success/error breakdown
GET /memories/{id}/sessions/{sid}/actions/chain/{task} - Action chain for a task
GET /memories/{id}/sessions/{sid}/actions/failed - Failed actions
POST /memories/{id}/sessions/{sid}/goals/create - Create a goal
PUT /memories/{id}/goals/{hash}/state - Update goal state (active/satisfied/failed/suspended)
POST /memories/{id}/goals/{hash}/delegate - Delegate goal to another agent
GET /memories/{id}/sessions/{sid}/goals/tree - Goal hierarchy tree
GET /memories/{id}/sessions/{sid}/goals/blocked - Blocked (suspended) goals
GET /memories/{id}/sessions/{sid}/context - Compile context within token budget
POST /memories/{id}/sessions/{sid}/consolidate - Consolidate session memories
POST /memories/{id}/states/diff - Diff two state grains
GET /memories/{id}/events - Long-poll for engine events (SSE)

How do I manage policies, scopes, hooks, and providers?

Use the policy, scope-tree, hook, and provider endpoints to configure the database’s governance, isolation, event streaming, and LLM integrations.

Policy endpoints control the sealed policy configuration that governs every read and write. The GET /policy endpoint returns the active policy, PUT /policy adds or removes presets, and POST /policy/simulate tests changes without applying them. Scope endpoints manage hierarchical multi-tenant isolation: PUT /scope-tree defines the tree structure, and POST /scope-erase crypto-erases all data within a scope.
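The isolation rule can be sketched with path semantics: a grain written under a scope is visible to queries at that scope or any ancestor, never to a sibling tenant. The path-string representation here is an assumption for illustration:

```python
def in_scope(grain_scope: str, query_scope: str) -> bool:
    """A grain is visible when the query scope is the grain's scope or
    one of its ancestors in the tree (prefix on the path string)."""
    return grain_scope == query_scope or grain_scope.startswith(query_scope + "/")

visible = in_scope("acme/prod", "acme")   # ancestor sees child data
isolated = in_scope("acme/prod", "beta")  # sibling tenant sees nothing
```

Note the `query_scope + "/"` guard: without it, a tenant named "acmecorp" would leak into queries scoped at "acme".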

Hook endpoints enable CDC (Change Data Capture) event streaming. Each hook targets specific events (grain_created, crypto_erase, etc.) and delivers payloads via webhook or internal CDC log. Provider endpoints configure LLM integrations: PUT /settings/providers/{id} registers a provider with its API key and base URL, POST /settings/providers/{id}/test verifies the connection, and GET /settings/providers/{id}/models discovers available models. The /models endpoint lists all models across providers with capability filters.
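Event routing can be sketched as a filter over subscribed hooks; the hook record shape mirrors the webhook example below, while the dispatch logic itself is an assumption:

```python
# Registered hooks, each subscribed to a set of event types.
hooks = [
    {"name": "slack-notify", "events": {"grain_created", "crypto_erase"}},
    {"name": "audit-cdc", "events": {"crypto_erase"}},
]

def route_event(event_type: str):
    """Return the hooks that should receive an event of this type."""
    return [h["name"] for h in hooks if event_type in h["events"]]

targets = route_event("crypto_erase")  # both hooks subscribe to this
```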

The knowledge graph endpoint at /graph performs BFS traversal from a starting entity. Authorization endpoints (/authz/grant, /authz/revoke, /authz/check) manage Zanzibar-style access tuples.

# Get current policy
curl http://localhost:4009/api/memories/default/policy

# Modify policy (add HIPAA)
curl -X PUT http://localhost:4009/api/memories/default/policy \
  -H "Content-Type: application/json" \
  -d '{"add_policies": ["hipaa"]}'

# Set scope tree for multi-tenant isolation
curl -X PUT http://localhost:4009/api/memories/default/scope-tree \
  -H "Content-Type: application/json" \
  -d '{"name": "root", "children": [{"name": "acme", "children": [{"name": "prod"}]}]}'

# Create a webhook hook
curl -X POST http://localhost:4009/api/memories/default/hooks \
  -H "Content-Type: application/json" \
  -d '{"name": "slack-notify", "mode": "webhook", "url": "https://hooks.slack.com/...",
       "events": ["grain_created", "crypto_erase"]}'

# Configure an LLM provider
curl -X PUT http://localhost:4009/api/settings/providers/openai \
  -H "Content-Type: application/json" \
  -d '{"kind": "openai", "api_key": "sk-...", "base_url": "https://api.openai.com/v1"}'
CLI equivalents:

areev hook list --json
areev scope-tree-get --json
Endpoints:

GET /memories/{id}/policy - Current policy configuration
PUT /memories/{id}/policy - Modify policy (add/remove presets)
POST /memories/{id}/policy/simulate - Simulate policy changes without applying
GET/PUT/DELETE /memories/{id}/scope-tree - Scope tree CRUD
POST /memories/{id}/scope-erase - Crypto-erase an entire scope
GET /memories/{id}/hooks - List hooks
POST /memories/{id}/hooks - Create a hook (CDC or webhook)
DELETE /memories/{id}/hooks/{hook_id} - Delete a hook
GET /memories/{id}/graph - Knowledge graph BFS traversal
POST /memories/{id}/authz/grant - Grant Zanzibar authorization tuple
POST /memories/{id}/authz/revoke - Revoke authorization tuple
GET /memories/{id}/authz/check - Check authorization
GET/PUT /settings/llm - LLM provider settings
GET /settings/providers - List configured providers
PUT /settings/providers/{id} - Create or update a provider
DELETE /settings/providers/{id} - Remove a provider
POST /settings/providers/{id}/test - Test provider connection
GET /settings/providers/{id}/models - Discover provider models
GET /models - Unified model list (filter by capability)
GET/PUT /settings/model-assignments - Role-to-model mappings

How do I use health, stats, auth, ingest, and import/export?

Call /health for system status, /stats for grain counts, and use the ingest endpoints for staged document import with LLM analysis.

Health and config endpoints require no authentication. The /health endpoint returns status and version, while /config reports the active auth mode and deployment settings so clients can configure themselves. The /stats endpoint returns total grain count, disk space, and per-type breakdowns; /stats/history provides these metrics over time.

Document import follows an async job pipeline: POST /imports submits a file for import, GET /imports/{job_id} checks job status, GET /imports/{job_id}/stream provides SSE progress events, and POST /imports/{job_id}/cancel aborts a running job. For synchronous document import, use POST /import-document which extracts and stores grains in a single call. The /extract-document endpoint extracts text without creating grains.
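A client-side polling loop for this pipeline can be sketched as below. `fetch_status` stands in for the GET /imports/{job_id} call, and the terminal status names are assumptions (only "queued" appears in the example response above):

```python
import time

def wait_for_import(fetch_status, job_id: str, poll_secs: float = 1.0,
                    max_polls: int = 50) -> str:
    """Poll an async import job until it reaches a terminal status.
    `fetch_status(job_id)` abstracts the HTTP GET on /imports/{job_id}."""
    for _ in range(max_polls):
        status = fetch_status(job_id)
        if status in ("completed", "failed", "cancelled"):
            return status
        time.sleep(poll_secs)
    raise TimeoutError(f"import job {job_id} did not finish")

# Stub standing in for the HTTP call: completes on the third poll.
responses = iter(["queued", "running", "completed"])
result = wait_for_import(lambda jid: next(responses), "job-1", poll_secs=0.0)
```

In production the SSE stream endpoint avoids polling entirely; this loop is the fallback for clients without SSE support.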

Authentication is configured at server startup with --oidc-issuer (enables OIDC JWT bearer token validation) or via API key management (/auth/keys). Import/export endpoints handle .mg blob files for data portability.

# Health check (no auth required)
curl http://localhost:4009/api/health
# {"status": "ok", "version": "0.1.0"}

# App config (no auth required)
curl http://localhost:4009/api/config

# Memory stats with type breakdown
curl http://localhost:4009/api/memories/default/stats

# Import a document (async job pipeline)
curl -X POST http://localhost:4009/api/memories/default/imports \
  -F "file=@report.pdf"
# Returns: {"job_id": "...", "status": "queued"}

# Check import job status
curl http://localhost:4009/api/memories/default/imports/{job_id}

# Stream import progress (SSE)
curl http://localhost:4009/api/memories/default/imports/{job_id}/stream

# Import a document (synchronous, single call)
curl -X POST http://localhost:4009/api/memories/default/import-document \
  -F "file=@report.pdf"

# Import .mg blobs
curl -X POST http://localhost:4009/api/memories/default/import-file \
  -F "file=@export.mg"

# Export as streaming .mg
curl http://localhost:4009/api/memories/default/export-file
CLI equivalents:

areev info
areev import-doc report.pdf --namespace docs --json
areev import /path/to/mg-files
Endpoints:

GET /config - App deployment configuration (public)
GET /health - Health check (public)
GET/POST /auth/keys - List or create API keys
DELETE /auth/keys/{key_id} - Revoke an API key
GET/POST /memories - List or create memory instances
GET/DELETE /memories/{id} - Get or delete a memory instance
GET /memories/{id}/stats - Memory statistics
GET /memories/{id}/stats/history - Historical stats over time
GET /memories/{id}/tier-info - Tiered storage info
GET /memories/{id}/agents - List agents
GET /memories/{id}/agents/{agent_id} - Agent detail with trust history
GET/POST /memories/{id}/agents/identities - List or register agent identities
GET /memories/{id}/agents/identities/{agent_id} - Get agent identity
POST /memories/{id}/agents/identities/{agent_id}/deactivate - Deactivate agent
POST /memories/{id}/remember - Ingest natural language text as memory
POST /memories/{id}/recall-chain - Multi-hop recall (sub-query decomposition)
POST /memories/{id}/accumulate - Atomic delta updates on a grain
POST /memories/{id}/flush - Flush write buffer to disk
POST /memories/{id}/search - Full-text BM25 search
POST /memories/{id}/import-document - Synchronous document import
POST/GET /memories/{id}/imports - Submit or list import jobs (async pipeline)
GET /memories/{id}/imports/{job_id} - Import job status
GET /memories/{id}/imports/{job_id}/stream - Import job progress (SSE)
POST /memories/{id}/imports/{job_id}/cancel - Cancel import job
POST /memories/{id}/extract-document - Extract text from document
POST /memories/{id}/import-file - Import .mg blob file
GET /memories/{id}/export-file - Export as .mg stream
POST /memories/{id}/chat/stream - LLM chat completions (SSE proxy)
GET/POST /memories/{id}/chat/threads - List or create chat threads
GET/DELETE/PATCH /memories/{id}/chat/threads/{thread_id} - Get, delete, or update a thread
GET /memories/{id}/chat/threads/{thread_id}/messages - Thread message history
GET /memories/{id}/provenance - List provenance records
GET /memories/{id}/provenance/summary - Provenance statistics
GET /memories/{id}/provenance/{recall_id} - Single provenance record
POST /memories/{id}/consent/grant - Grant user consent
POST /memories/{id}/consent/revoke - Revoke user consent
POST /memories/{id}/consent/grant-bulk - Grant consent for multiple users
GET /memories/{id}/consent/status - Consent status for all users
See also:

  • SDKs — Python, TypeScript, and Rust clients
  • CLI — command-line interface for the same operations
  • gRPC — high-performance binary protocol alternative
  • MCP — Model Context Protocol for AI agent integration