Kubernetes
How do I deploy Areev on Kubernetes?
Deploy Areev as a StatefulSet with a PersistentVolumeClaim so each pod retains its Fjall LSM data, encryption keys, and audit trail across restarts.
Areev is a context database that stores AI memory as encrypted grains. Each pod runs a single static binary with no external database dependencies — all state lives on the persistent volume mounted at /data. The StatefulSet controller provides stable network identities (areev-0, areev-1, etc.) that are required for distributed cluster bootstrap. Health probes hit the /api/health endpoint, which returns subsystem status for the storage engine, index, and encryption layer.
For single-node deployments, set replicas: 1. The autonomous memory engine handles concurrent reads and writes within a single pod. For multi-node scaling, use the cluster or cloud build profile and pass --clustered with --seed-nodes to enable Raft consensus across pods.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: areev
spec:
  replicas: 1
  serviceName: areev
  selector:
    matchLabels:
      app: areev
  template:
    metadata:
      labels:
        app: areev
    spec:
      containers:
        - name: areev
          image: areev/areev:node
          ports:
            - containerPort: 4011
              name: http
            - containerPort: 50051
              name: grpc
          env:
            - name: AREEV_MASTER_KEY
              valueFrom:
                secretKeyRef:
                  name: areev-secrets
                  key: master-key
          args: ["--policy", "gdpr", "serve", "--http", "0.0.0.0:4011", "--grpc", "0.0.0.0:50051", "--mcp", "--a2a"]
          volumeMounts:
            - name: data
              mountPath: /data
          livenessProbe:
            httpGet:
              path: /api/health
              port: 4011
            initialDelaySeconds: 5
            periodSeconds: 30
          readinessProbe:
            httpGet:
              path: /api/health
              port: 4011
            initialDelaySeconds: 3
            periodSeconds: 10
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi
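Assuming the manifest above is saved as areev-statefulset.yaml (the filename is a placeholder), a deployment can be sanity-checked from the command line; the final step probes the /api/health endpoint described earlier through a local port-forward:

```shell
# Apply the manifest and wait until the pod reports Ready
kubectl apply -f areev-statefulset.yaml
kubectl rollout status statefulset/areev

# Forward the HTTP port locally, then query subsystem health
kubectl port-forward areev-0 4011:4011 &
curl -s http://localhost:4011/api/health
```

A healthy response should report status for the storage engine, index, and encryption layer, per the health endpoint description above.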
How do I manage secrets for the encryption key?
Store the master encryption key and API key as Kubernetes Secrets, then reference them in the StatefulSet pod spec via secretKeyRef.
The context database encrypts every grain at rest using AES-256-GCM envelope encryption. The master key wraps per-user data encryption keys (DEKs), so losing it means losing access to all AI agent memory stored on that node. Never pass the master key as a plaintext environment variable in manifests checked into version control.
For production deployments, use an external key store backend (HashiCorp Vault, AWS KMS, or PKCS#11 HSM) instead of Kubernetes Secrets. Areev supports these backends natively via the --key-backend flag. The Kubernetes Secret then only holds the Vault or KMS credentials, not the master key itself.
apiVersion: v1
kind: Secret
metadata:
  name: areev-secrets
type: Opaque
data:
  master-key: <base64-encoded-256-bit-hex-key>
  api-key: <base64-encoded-api-key>

# Create the secret from literal values
kubectl create secret generic areev-secrets \
  --from-literal=master-key=your-256-bit-hex-key \
  --from-literal=api-key=your-api-key
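The kubectl command above needs a real value in place of your-256-bit-hex-key. One way to generate a 256-bit key in hex form (assuming any 64-character hex string is acceptable; check the configuration reference for the exact key format):

```shell
# 32 random bytes rendered as hex: 64 characters = 256 bits
MASTER_KEY=$(openssl rand -hex 32)
echo "${#MASTER_KEY}"   # prints 64
```

Substitute "$MASTER_KEY" for the placeholder in the kubectl create secret command, and avoid writing the generated key to disk or shell history.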
How do I expose Areev externally?
Create a ClusterIP Service and an Ingress resource to route HTTP and gRPC traffic to the StatefulSet pods.
The HTTP API serves all endpoints under /api/, the MCP protocol at /mcp, and A2A discovery at /.well-known/agent.json. For gRPC traffic, your Ingress controller must support HTTP/2 (NGINX Ingress, Traefik, or Envoy all qualify). If you run a single replica, a LoadBalancer Service type works as an alternative to Ingress with less configuration.
TLS termination at the Ingress layer is recommended. Areev does not terminate TLS itself in single-node mode — it expects a reverse proxy or load balancer to handle certificates. In distributed mode, inter-node communication uses mTLS independently of the Ingress TLS configuration.
apiVersion: v1
kind: Service
metadata:
  name: areev
spec:
  selector:
    app: areev
  ports:
    - name: http
      port: 4011
      targetPort: 4011
    - name: grpc
      port: 50051
      targetPort: 50051
  type: ClusterIP
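An Ingress for the HTTP API might look like the following sketch. The hostname and TLS secret name are placeholders, and the example assumes the NGINX Ingress controller; gRPC traffic typically needs a separate Ingress carrying the nginx.ingress.kubernetes.io/backend-protocol: "GRPC" annotation, since that annotation applies to a whole Ingress resource.

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: areev
spec:
  ingressClassName: nginx
  tls:
    # TLS terminates here; Areev expects plain HTTP behind the proxy
    - hosts: ["areev.example.com"]
      secretName: areev-tls
  rules:
    - host: areev.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: areev
                port:
                  number: 4011
```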
How do I scale horizontally with distributed mode?
Set replicas: 3 or more and use the cluster build profile with the --clustered, --node-id, and --cluster-id flags, plus --seed-nodes pointing to the StatefulSet’s stable DNS names.
Horizontal scaling requires Areev’s distributed feature, which adds Raft consensus, consistent hash sharding, and write forwarding between nodes. Each node maintains its own Fjall LSM storage and participates in one or more Raft groups. The StatefulSet’s stable network identities provide the predictable hostnames needed for cluster bootstrap — areev-0.areev, areev-1.areev, etc.
Without the distributed feature enabled, each pod operates as an independent AI memory instance with its own data. There is no automatic replication or sharding in non-cluster profiles. Distributed mode relies on a headless Service (clusterIP: None) matching the StatefulSet’s serviceName for DNS-based peer discovery; Kubernetes does not create this Service automatically, so define it alongside the StatefulSet. The --node-id must be a unique numeric ID (u64) for each node in the cluster, while --cluster-id is an opaque string that must be the same across all nodes.
spec:
  replicas: 3
  template:
    spec:
      containers:
        - name: areev
          image: areev/areev:cluster
          args:
            - "serve"
            - "--http=0.0.0.0:4011"
            - "--grpc=0.0.0.0:50051"
            - "--clustered"
            # --node-id must be unique per pod; a fixed literal like this only
            # works if each pod's args are templated from its ordinal (e.g. by
            # an entrypoint wrapper that reads the hostname suffix)
            - "--node-id=1"
            - "--cluster-id=prod"
            - "--seed-nodes=areev-0.areev:50051,areev-1.areev:50051,areev-2.areev:50051"
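Per-pod DNS names like areev-0.areev only resolve when the Service named by serviceName is headless. A minimal sketch follows; if you keep the ClusterIP Service shown earlier for client traffic, give it a different name (e.g. areev-client, a placeholder) so this headless Service can own the areev name that serviceName references:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: areev
spec:
  clusterIP: None   # headless: gives each pod a stable per-pod DNS record
  selector:
    app: areev
  ports:
    - name: grpc
      port: 50051
      targetPort: 50051
```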
Related
- Docker: Single-node container deployment
- Distributed: Multi-node cluster architecture and Raft consensus
- Configuration: Full configuration reference