CLI

How do I add and query memories from the terminal?

Use areev add to store grains and areev recall to search. Both support all 10 OMS grain types and produce JSON output with --json.

The Areev CLI gives you direct access to the context database from any shell. Each add command takes a grain type followed by key=value pairs, creating a content-addressed grain in the underlying Fjall storage engine. The recall command provides the same hybrid search available through the REST API: free-text BM25, structural filters (subject, relation, object, namespace), and combined queries with Reciprocal Rank Fusion.
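The fusion step of such a hybrid query can be sketched in a few lines of Python. This is an illustrative Reciprocal Rank Fusion implementation using the conventional k = 60 constant; the grain IDs and every detail beyond the RRF formula itself are assumptions, not Areev internals:

```python
# Sketch of Reciprocal Rank Fusion: each result list contributes
# 1 / (k + rank) to a grain's fused score, so items ranked highly
# in several lists rise to the top of the merged ranking.

def rrf_merge(ranked_lists, k=60):
    """Fuse several ranked lists of grain IDs into one scored ranking."""
    scores = {}
    for ranking in ranked_lists:
        for rank, grain_id in enumerate(ranking, start=1):
            scores[grain_id] = scores.get(grain_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

bm25 = ["g1", "g2", "g3"]        # free-text (BM25) ranking
structural = ["g3", "g1", "g4"]  # subject/namespace filter ranking
print(rrf_merge([bm25, structural]))
```

Grains appearing in both lists ("g1", "g3") outrank grains found by only one retriever.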

Scoped memory queries let you target specific parts of a multi-tenant hierarchy. The --scope-path flag traverses the scope tree, and --include-siblings widens the search to sibling namespaces at the same depth. For AI agent memory workflows, the --detect-contradictions flag highlights grains that conflict with each other, and --temporal accepts natural language time expressions like “last 7 days” or “today”.
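Resolving a --temporal expression amounts to turning a phrase into a cutoff timestamp. A minimal sketch, assuming a small grammar of "today" and "last N days/weeks/hours" (Areev's actual parser is not documented here):

```python
# Toy parser for relative time expressions like those accepted by
# --temporal. Returns the earliest timestamp a grain may have to match.
import re
from datetime import datetime, timedelta

def temporal_cutoff(expr, now=None):
    now = now or datetime.now()
    if expr == "today":
        # Start of the current day.
        return now.replace(hour=0, minute=0, second=0, microsecond=0)
    m = re.fullmatch(r"last (\d+) (day|hour|week)s?", expr)
    if m:
        n, unit = int(m.group(1)), m.group(2)
        return now - timedelta(**{unit + "s": n})
    raise ValueError(f"unrecognized temporal expression: {expr!r}")
```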

Additional data commands cover the full grain lifecycle: supersede for immutable versioning, accumulate for atomic delta updates, inspect for hash lookups, dump for paginated listing, forget for deletion (also forget user for crypto-erasure and forget scope for scope erasure), remember for natural language ingestion, recall-chain for multi-hop decomposition, flush for committing the write buffer, and cal for executing Context Assembly Language queries.
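The supersede semantics rest on content addressing: a grain's identity is a hash of its content, and a newer grain points back at the hash it replaces rather than overwriting it. A toy sketch with hypothetical field names (not Areev's storage format):

```python
# Content-addressed versioning in miniature: hash the canonical JSON of
# a grain's fields to get its address; a superseding grain records the
# address it replaces, so history is preserved immutably.
import hashlib
import json

def grain_hash(fields):
    payload = json.dumps(fields, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()[:12]

v1 = {"subject": "john", "relation": "likes", "object": "coffee"}
h1 = grain_hash(v1)

# "areev supersede <old_hash> ..." conceptually produces:
v2 = {**v1, "object": "tea", "supersedes": h1}
h2 = grain_hash(v2)

assert h1 != h2                  # new content, new address
assert v2["supersedes"] == h1    # the chain preserves history
```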

# Add a belief grain (key=value pairs)
areev add belief subject=john relation=likes object="coffee" \
  confidence=0.95 namespace=prefs tags=ui,settings

# Add other grain types
areev add event content="User logged in" user_id=john
areev add action input="translate(hello)" content="hola"
areev add goal content="Deploy v2.0" priority=high

# Query with full-text search
areev recall --query "john preferences" --limit 5 --json

# Structural query by subject + namespace
areev recall --subject john --namespace prefs

# Hybrid query with all filters
areev recall -q "preferences" -s john -n prefs \
  --grain-type belief --tags ui --importance 0.5 \
  --temporal "last 7 days" --detect-contradictions --json

# Scoped memory queries
areev recall -q "config" --scope-path acme/prod/bot1 --include-siblings
The same recall is available through the Python SDK:

from areev import Areev

areev = Areev()
results = areev.recall("john preferences")
Command                                                  Description
areev remember <text>                                    Ingest natural language text as memory
areev recall-chain --query <q>                           Multi-hop recall with sub-query decomposition
areev supersede <old_hash> <type> [fields...]            Replace a grain with updated content
areev accumulate <type> --add field=delta --reason <r>   Atomic delta updates on a grain
areev inspect <hash>                                     Show a grain by its content-address hash
areev dump                                               List all grains (with --limit, --namespace, --grain-type)
areev forget <hash>                                      Delete a grain by hash
areev forget user <user_id>                              Crypto-erase all data for a user (GDPR Art. 17)
areev forget scope <scope_path>                          Crypto-erase all data in a scope
areev flush                                              Flush the write buffer to disk
areev cal '<query>'                                      Execute a CAL query (--format json)

How do I manage compliance and privacy?

Run areev verify to execute 87+ compliance checks, areev erase for GDPR crypto-erasure, and areev pii for local PII detection.

The CLI covers the full compliance lifecycle for GDPR, HIPAA, CCPA, and the EU AI Act. Consent management uses grant-consent and revoke-consent to record per-user, per-purpose consent grains that the sealed policy enforces. The restrict and unrestrict commands implement GDPR Art. 18 processing restrictions, freezing all operations for a user until the restriction lifts.

Crypto-erasure via areev erase destroys the user’s encryption key, rendering all their grains permanently unreadable. The export command fulfills subject access requests, with --portable producing the Art. 20 portability format. The verify command runs all compliance checks against the sealed policy, and compliance-export outputs the results in JSON, text, or CSV. For the EU AI Act, impact-assessment generates a risk assessment and provenance tracks decision lineage per Art. 86.
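Crypto-erasure is simple to sketch: grains are stored encrypted under a per-user key, so destroying the key is equivalent to destroying the data, without touching the stored bytes. The toy XOR cipher below is purely illustrative and not Areev's actual cryptography:

```python
# Conceptual crypto-erasure: delete the per-user key and the surviving
# ciphertext becomes permanently unreadable.
import os

keys = {"john": os.urandom(16)}  # per-user key store (toy)

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # XOR is symmetric: applying it twice with the same key round-trips.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

ciphertext = xor_cipher(b"john likes coffee", keys["john"])

# "areev erase john" -> destroy the key, keep the ciphertext.
del keys["john"]
# With the key gone, there is no way to recover the plaintext.
```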

PII detection via areev pii runs the hybrid NER + regex pipeline locally. The --detailed flag adds match positions and confidence scores. All PII processing stays in-process with no external network calls.
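The regex half of such a hybrid pipeline might look like the sketch below; the patterns are illustrative stand-ins (not Areev's rule set), and the NER half, which handles names and free-form entities, is omitted:

```python
# High-precision regex patterns for structured PII, returning the match
# text and character span for each hit, as --detailed would.
import re

PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "phone": re.compile(r"\b\d{3}-\d{4}\b"),
}

def detect_pii(text):
    hits = []
    for label, pattern in PII_PATTERNS.items():
        for m in pattern.finditer(text):
            hits.append({"type": label, "match": m.group(), "span": m.span()})
    return hits

print(detect_pii("Contact john@example.com at 555-0100"))
```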

# Grant and revoke consent
areev grant-consent john memory_storage
areev revoke-consent john memory_storage

# Processing restriction (GDPR Art. 18)
areev restrict john "pending investigation"
areev unrestrict john

# Crypto-erase a user (GDPR Art. 17 -- destroys encryption key)
areev erase john

# Export all user data (subject access request)
areev export john
areev export john --portable   # Art. 20 portability format

# Run compliance verification (87+ checks)
areev verify
areev verify --regulation gdpr

# Export compliance report
areev compliance-export --format json
areev compliance-status --json

# Impact assessment (Colorado AI Act, EU AI Act)
areev impact-assessment --format json

# Detect PII in text
areev pii "Contact john@example.com at 555-0100"
areev pii "Contact john@example.com" --detailed

# Decision provenance (EU AI Act Art. 86)
areev provenance --list --since "last 7 days" --json
areev provenance abc123def456   # View a specific record
The same verification is available over the REST API:

POST /api/memories/default/verify/run
Content-Type: application/json

{"regulation": "gdpr"}

How do I start the server and manage the database?

Run areev serve to start the HTTP/gRPC/MCP/A2A server, and use areev create/areev modify/areev destroy for database lifecycle management.

The serve command starts all configured protocol listeners in a single process. Use --http for the REST API, --grpc for the binary protocol, --mcp for Model Context Protocol, and --a2a for Agent-to-Agent. The --app flag serves a static frontend (the console UI) alongside the API. Authentication is configured via --oidc-issuer (OIDC JWT validation) and --oidc-audience.

Database creation with areev create seals one or more policy presets and optionally enables encryption, deduplication, and content guardrails. The --vector-backend flag selects the vector index implementation (usearch or flat). Once created, areev modify adds or removes policy presets with a mandatory --reason for downgrades. The areev destroy command permanently deletes the database and requires --confirm.
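Fuzzy deduplication with a threshold, as enabled by --dedup fuzzy --dedup-threshold 0.85, can be illustrated with a generic similarity ratio; SequenceMatcher here is a stand-in for whatever metric the engine actually uses:

```python
# Two grains count as duplicates when their text similarity meets the
# configured threshold.
from difflib import SequenceMatcher

def is_duplicate(a: str, b: str, threshold: float = 0.85) -> bool:
    return SequenceMatcher(None, a, b).ratio() >= threshold

print(is_duplicate("John likes coffee", "John likes coffee."))  # near-identical
print(is_duplicate("John likes coffee", "Deploy v2.0"))         # unrelated
```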

Maintenance commands include info for statistics, metrics for write/query/latency counters, compact for flushing data to disk, and rebuild-indexes for reconstructing FTS and vector indexes.

# Start server with all protocols
areev serve --http 0.0.0.0:4009 --grpc 0.0.0.0:50051 --mcp --a2a

# Serve with the console UI and authentication
areev serve --http 0.0.0.0:4009 --mcp --a2a \
  --app app/dist/browser \
  --oidc-issuer https://auth.example.com \
  --oidc-audience my-app

# Rate limiting and CORS
areev serve --rate-limit-rps 200 --rate-limit-burst 50 \
  --cors-origins "https://app.example.com,https://admin.example.com"

# Create a new database with sealed policies
areev create --policy gdpr,hipaa \
  --master-key-env AREEV_MASTER_KEY \
  --vector-backend usearch \
  --dedup fuzzy --dedup-threshold 0.85 \
  --guardrail "social security" --guardrail-mode contains

# Modify policies on an existing database
areev modify --add-policy hipaa --remove-policy permissive \
  --reason "Switching to stricter compliance"

# Destroy a database (irreversible)
areev destroy --confirm

# Database maintenance
areev info              # Database statistics
areev metrics --json    # Engine metrics (write/query/latency)
areev compact           # Force flush to disk
areev rebuild-indexes   # Rebuild FTS and vector indexes
Flag           Default        Description
--data-dir     ./areev-data   Path to database directory
--policy                      Policy preset (gdpr, ccpa, hipaa, lgpd, pipl, sox, ephemeral, open)
--user                        Active user ID for audit logging
--master-key                  256-bit master key (hex) for encryption
--ner-model                   Path to NER PII model directory

How do I manage agents, scopes, and hooks?

Use areev agent-register for agent identities, areev scope-tree-set for hierarchical isolation, and areev hook add for CDC event streaming.

Agent identity management registers AI agents with specific permissions and namespace access. Each agent gets a unique ID, a human-readable name, and a permission level (read, read_write, admin). The --delegated-by flag records who authorized the agent, and --namespaces restricts the agent to specific namespace scopes. The agent-deactivate command revokes an agent’s access without deleting its identity record.
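How a permission level plus namespace restriction could gate an operation is easy to sketch. The record fields mirror the flags described above (--permission, --namespaces), but the exact schema is an assumption:

```python
# Toy authorization check for an agent identity: deactivated agents
# have no access, namespace restrictions apply first, then the
# permission level is compared.
LEVELS = {"read": 0, "read_write": 1, "admin": 2}

def authorized(agent, required_level, namespace):
    if not agent["active"]:
        return False  # deactivated agents keep their record but no access
    if agent["namespaces"] and namespace not in agent["namespaces"]:
        return False  # outside the agent's namespace scope
    return LEVELS[agent["permission"]] >= LEVELS[required_level]

planner = {"active": True, "permission": "read_write",
           "namespaces": ["planning", "shared"]}
print(authorized(planner, "read_write", "planning"))
print(authorized(planner, "read", "billing"))
```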

Scope tree management defines the hierarchical namespace structure used for multi-tenant isolation. The scope-tree-set command accepts a JSON file defining the tree, and scope-erase crypto-erases all data within a specific path. Hook management provides CDC (Change Data Capture) event streaming: hook add creates a hook targeting specific events, hook drain reads accumulated events, and hook pause/resume controls delivery. Goal management tracks autonomous memory objectives with goal-create, goal-update, goal-tree, and goal-blocked.

Document import via areev import-doc processes files (PDF, DOCX, PPTX, HTML, TXT) with configurable chunking, namespace targeting, and tag assignment. The model subcommand group manages NER models for PII detection, and policy-cmd inspects the sealed policy and its history. The spec command exports OpenAPI/gRPC specs for CI validation.

# Register an agent identity
areev agent-register --agent-id "agent:planner-v2" --name "Planner" \
  --delegated-by admin@company.com --permission read_write \
  --namespaces "planning,shared"

# List and inspect agents
areev agent-list --json
areev agent-get agent:planner-v2 --json
areev agent-deactivate agent:planner-v2

# Scope tree management
areev scope-tree-set --file scopes.json
areev scope-tree-get --json
areev scope-erase acme/prod/bot1    # Crypto-erase a scope
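One plausible shape for scopes.json, and a sketch of how sibling widening (--include-siblings) could resolve against it. The JSON schema shown is a guess for illustration, not Areev's documented format:

```python
# A scope tree as nested objects: each key is a namespace, each value
# its children. Siblings of "acme/prod/bot1" are the other children of
# "acme/prod".
import json

scopes_json = '{"acme": {"prod": {"bot1": {}, "bot2": {}}, "staging": {"bot1": {}}}}'

def siblings(tree, path):
    """Namespaces at the same depth as the last element of `path`."""
    node = tree
    for part in path.split("/")[:-1]:
        node = node[part]
    return sorted(node)

tree = json.loads(scopes_json)
print(siblings(tree, "acme/prod/bot1"))
```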

# Goal management
areev goal-create --session sess_abc "Deploy v2.0" \
  --priority high --criteria "All tests pass"
areev goal-update abc123 --state satisfied --reason "Tests passed"
areev goal-tree sess_abc --json
areev goal-blocked sess_abc

# Hook management (CDC event log)
areev hook list --json
areev hook add --name slack-notify --mode webhook \
  --url https://hooks.slack.com/... \
  --events grain_created,crypto_erase \
  --redaction metadata_only
areev hook drain --after 0 --limit 100 --format ndjson
areev hook pause <hook_id>
areev hook resume <hook_id>
areev hook delete <hook_id>
areev hook test <hook_id>
areev hook stats

# Document import
areev import-doc report.pdf --namespace docs --tags report,q1 \
  --chunk-size 1000 --chunk-overlap 100 --json

# Import/export .mg blobs
areev import /path/to/mg-files

# NER model management
areev model download pii-ner
areev model list
areev model path pii-ner

# Policy inspection
areev policy-cmd show
areev policy-cmd history

# Export API specs (for CI validation)
areev spec --output specs/
areev spec --check   # Exit 1 if specs are stale

How does the forgetting engine work?

The purge stale command (or its deprecated alias, forget-stale) prunes low-value grains using exponential decay, scoring each grain by importance, access frequency, and age.

The forgetting engine implements autonomous memory management by continuously evaluating grain value. It combines three signals: the grain’s importance score (set at write time or inferred), how frequently the grain appears in recall results, and the elapsed time since creation. These signals feed into an exponential decay function controlled by the --half-life parameter, which sets the number of days for a grain’s retention score to halve.

Grains scoring below --threshold and older than --min-age-days are candidates for deletion. The --batch parameter caps how many grains are forgotten per run, preventing large bursts of deletions. The --namespace flag scopes forgetting to a specific namespace, which is useful for isolating ephemeral data from long-term knowledge.
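The three signals can be combined into a retention score like the sketch below. Only the half-life decay semantics come from the flag descriptions; the exact weighting of importance and access frequency is an assumption:

```python
# Toy retention score: exponential age decay with a configurable
# half-life, weighted by importance and a saturating access signal.
import math

def retention_score(importance, access_count, age_days, half_life=30.0):
    decay = 0.5 ** (age_days / half_life)      # halves every half_life days
    frequency = 1.0 - math.exp(-access_count)  # saturates as recalls accumulate
    return importance * decay * (0.5 + 0.5 * frequency)

# At 30 days, a never-recalled low-importance grain falls below the 0.3
# default threshold, while an important, frequently recalled grain of
# the same age stays above it.
print(retention_score(0.2, 0, 30))
print(retention_score(0.9, 10, 30))
```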

Run purge stale on a schedule (cron or systemd timer) to maintain a self-pruning context database. In production, start with conservative settings (high threshold, long half-life) and tune based on the ratio of stale grains to total grain count reported by areev info.

# Run with defaults (threshold 0.3, half-life 30 days, batch 100)
areev purge stale

# Custom forgetting parameters
areev purge stale \
  --threshold 0.2 \
  --half-life 14 \
  --min-age-days 3 \
  --batch 500 \
  --namespace ephemeral
Flag             Default    Description
--threshold      0.3        Retention score below which grains are forgotten
--half-life      30         Days for the exponential decay half-life
--min-age-days   7          Minimum age (days) before a grain can be forgotten
--batch          100        Maximum grains to forget per run
--namespace      default    Scope forgetting to a namespace
  • HTTP REST — the same operations available over HTTP
  • SDKs — Python, TypeScript, and Rust clients
  • MCP — Model Context Protocol for AI agents
  • Configuration — server configuration details