Configuration
What are the core configuration options?
Pass CLI flags and environment variables to control encryption, governance policy, authentication, and protocol bindings when starting the Areev server.
Areev is a context database that requires no external configuration file — every setting maps to a CLI flag or environment variable. The most critical option is AREEV_MASTER_KEY, a 256-bit hex key that enables AES-256-GCM envelope encryption for all AI memory grains at rest. Without it, grains are stored unencrypted. The --policy flag selects one of eight governance presets (gdpr, ccpa, hipaa, lgpd, pipl, sox, ephemeral, permissive) that control retention rules, consent requirements, and erasure behavior.
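Since AREEV_MASTER_KEY must be a 256-bit hex key, one way to generate it is with openssl (a sketch — openssl is a general-purpose tool, not part of Areev):

```shell
# Generate a random 256-bit master key as 64 hexadecimal characters.
# openssl is assumed to be installed; any CSPRNG that emits 32 random
# bytes as hex works equally well.
export AREEV_MASTER_KEY=$(openssl rand -hex 32)

# 32 bytes -> 64 hex characters
echo "${#AREEV_MASTER_KEY}"   # prints 64
```

Store the generated key in a secret manager; losing it makes encrypted grains unrecoverable.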
Protocol bindings determine which interfaces the server exposes. --http binds the REST API (default 0.0.0.0:4009), --grpc enables the gRPC interface, --mcp starts the Model Context Protocol server, and --a2a enables the Agent-to-Agent protocol. Authentication is controlled at compile time by the auth feature flag (included in dev, edge, node, cluster, and cloud profiles). When built with auth, the server supports email/password login, API keys, and optional OIDC via the --oidc-issuer and --oidc-audience flags. The autonomous memory engine starts accepting requests as soon as all bindings are ready.
```shell
# Minimal production server
areev --policy gdpr --master-key $AREEV_MASTER_KEY \
  serve --http 0.0.0.0:4009

# Full-featured server with all protocols and OIDC
areev --policy gdpr --master-key $AREEV_MASTER_KEY \
  serve --http 0.0.0.0:4009 --grpc 0.0.0.0:50051 \
  --mcp --a2a \
  --oidc-issuer https://accounts.google.com \
  --oidc-audience YOUR_CLIENT_ID \
  --key-backend vault
```
| Setting | CLI Flag | Env Variable | Default |
|---|---|---|---|
| HTTP bind | --http | — | 0.0.0.0:4009 |
| gRPC bind | --grpc | — | (disabled) |
| Master key | --master-key | AREEV_MASTER_KEY | (no encryption) |
| API key | — | AREEV_API_KEY | (ephemeral key) |
| Policy | --policy | — | (no policy) |
| OIDC issuer | --oidc-issuer | — | (disabled) |
| OIDC audience | --oidc-audience | — | (required with OIDC) |
| Data dir | --data-dir | AREEV_DATA_DIR | ./areev-data |
| Log level | — | RUST_LOG | info |
| Memory budget | --memory-budget | — | profile-aware (128MB–512MB) |
| Write batch size | --write-batch-size | — | 1 |
| Rate limit RPS | --rate-limit-rps | — | 100 |
| CORS origins | --cors-origins | — | permissive |
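As a sketch of how the table's environment variables compose with flags (the path and API key values below are illustrative, not defaults):

```shell
# Settings with an Env Variable column can come from the environment;
# everything else must be passed as a CLI flag.
export AREEV_DATA_DIR=/var/lib/areev      # data directory (default ./areev-data)
export AREEV_API_KEY=example-static-key   # otherwise an ephemeral key is generated
export RUST_LOG=debug                     # log level (default info)

areev --policy gdpr --master-key "$AREEV_MASTER_KEY" \
  serve --http 0.0.0.0:4009 --rate-limit-rps 200
```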
How does the database lifecycle work?
The serve command creates the database if it does not exist, or opens an existing one; use create, modify, and destroy for explicit lifecycle control.
Policy is sealed at creation time and recorded in the database header. The AI agent memory engine enforces the sealed policy on every write and recall operation — there is no way to bypass it at runtime. Upgrades (adding a stricter policy) are always permitted. Downgrades (removing a policy) require a --reason flag and produce a PolicyDowngraded audit event that records the actor, reason, and new configuration hash.
The destroy command requires --confirm and permanently deletes all data, encryption keys, and audit trail from the data directory. This operation is irreversible. The context database does not support partial destruction — it is all or nothing.
```shell
# Create a new database with GDPR policy
areev --data-dir ./my-data --master-key $AREEV_MASTER_KEY create --policy gdpr

# Start server (auto-opens existing database)
areev --data-dir ./my-data serve --http 0.0.0.0:4009

# Add HIPAA policy (upgrade)
areev --data-dir ./my-data modify --add-policy hipaa

# Remove HIPAA policy (downgrade, requires reason)
areev --data-dir ./my-data modify --remove-policy hipaa --reason "No longer processing PHI"

# Destroy database (irreversible)
areev --data-dir ./my-data destroy --confirm
```
What key store backends are available?
Areev supports four key store backends — local, Vault Transit, AWS KMS, and PKCS#11 HSM — selected at startup with the --key-backend flag.
The local backend stores wrapped DEKs in a dedicated Fjall partition alongside the AI memory data. This is the default and requires no external infrastructure. The Vault Transit backend delegates key wrapping to a HashiCorp Vault server using AppRole authentication. The AWS KMS backend uses the AWS Key Management Service for key wrapping, suitable for AWS-native deployments. The PKCS#11 backend connects to a hardware security module for the highest level of key protection.
All backends use the same envelope encryption model: random per-user data encryption keys (DEKs) are wrapped by the master key. Switching backends after initial setup requires re-wrapping all DEKs via key rotation.
```shell
# Vault backend — configure via environment variables
export VAULT_ADDR=https://vault.example.com:8200
export VAULT_ROLE_ID=your-role-id
export VAULT_SECRET_ID=your-secret-id

areev serve --http 0.0.0.0:4009 --key-backend vault
```
| Backend | Flag | Feature Flag | CMK Location |
|---|---|---|---|
| Local | --key-backend local | (default) | In-process, Fjall partition |
| Vault Transit | --key-backend vault | vault | HashiCorp Vault server |
| AWS KMS | --key-backend aws-kms | aws-kms | AWS KMS service |
| PKCS#11 HSM | --key-backend pkcs11 | pkcs11 | Hardware HSM |
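For comparison with the Vault example above, an AWS KMS startup might look like the following sketch. It assumes credentials are resolved through the standard AWS SDK chain (environment variables, shared config, or an instance role); any KMS key-identifier configuration beyond --key-backend is covered on the Key Management page rather than shown here.

```shell
# AWS KMS backend — requires a build with the aws-kms feature flag.
# Credentials are assumed to come from the standard AWS chain.
export AWS_REGION=eu-west-1

areev serve --http 0.0.0.0:4009 --key-backend aws-kms
```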
What feature flags and build profiles exist?
Areev uses 36 Cargo feature flags grouped into five compound build profiles: dev, edge, node, cluster, and cloud.
The single Rust crate compiles only the features you select. The dev profile is the default and includes full-text search, HTTP, gRPC, MCP, A2A, app UI, document import, CAL, chat, hooks, auth, and vector search. The edge profile targets IoT/constrained environments with FTS, HTTP, auth, CAL, import, and operational metrics. The node profile enables every feature for self-hosted single-node deployments including signing, vault, KMS, PII detection, and metrics export. The cluster profile adds distributed Raft consensus and OpenTelemetry to node. The cloud profile adds billing metrics and pilot-managed identity to cluster.
Feature flags are additive. The cluster profile includes grpc through its node dependency, which is required for inter-node communication. The signing flag adds COSE Sign1 Ed25519 signatures for tamper-evident audit trails. The chat flag enables the conversational context database mode for multi-turn dialogue storage.
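Assuming the compound profiles and individual flags in this section are exposed as Cargo features on the crate (the exact crate layout may differ), builds can be sketched as:

```shell
# Build a constrained edge binary using the compound profile
cargo build --release --no-default-features --features edge

# Or hand-pick individual features for a custom binary
cargo build --release --no-default-features --features "http,auth,fts,cal"
```

Because features are additive, combining a profile with extra individual flags only ever enables more functionality.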
| Feature | Description |
|---|---|
| fts | Full-text search (Tantivy BM25) |
| vector | Vector similarity search (HNSW) |
| vector-usearch | USearch vector backend |
| vector-faiss | FAISS vector backend |
| http | HTTP REST API (Axum) |
| grpc | gRPC API (Tonic) |
| mcp | Model Context Protocol server |
| a2a | Agent-to-Agent protocol |
| app | Console UI static file serving |
| auth | Email/password + OAuth authentication |
| hooks | CDC event log and webhook delivery |
| hooks-builtin | Built-in webhook handlers |
| import | Document import (PDF, DOCX, PPTX, HTML, TXT) |
| cal | Context Assembly Language queries |
| chat | Conversational memory engine |
| rerank | Candle cross-encoder reranking |
| llm-rerank | External LLM reranking via API |
| pii_ner | NER-based PII detection via Candle BERT |
| eval | Retrieval evaluation metrics (nDCG, MRR, Recall@k) |
| signing | COSE Sign1 Ed25519 signatures |
| vault | HashiCorp Vault key backend |
| aws-kms | AWS KMS key backend |
| pkcs11 | PKCS#11 HSM key backend |
| distributed | Multi-node Raft clustering (OpenRaft + chitchat) |
| otel | OpenTelemetry distributed tracing |
| metrics-ops | Operational health metrics |
| metrics-intel | Memory intelligence metrics |
| metrics-cloud-billing | Cloud billing metrics |
| metrics-export | Prometheus metrics export |
| pilot-managed-identity | Pilot-owned identity (cloud profile) |
| python | Python SDK (PyO3/maturin) |
Related
- Docker: Container deployment with build profiles
- Kubernetes: Orchestrated StatefulSet deployment
- Key Management: Key backend details
- Policy: Policy preset configuration