GDPR

How does Areev implement right-to-erasure (Art. 17)?

Areev implements GDPR Art. 17 through crypto-erasure: destroying a user's Data Encryption Key (DEK) makes all of that user's grain ciphertext permanently unrecoverable. Right-to-erasure becomes a cryptographic operation rather than a data-overwriting task, an O(1) key destruction instead of O(n) blob rewrites.
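The crypto-erasure model can be sketched in a few lines of Python. The class and helper names here (`UserKeyring`, `_keystream`) are illustrative, and the XOR keystream is a stand-in for whatever real AEAD cipher Areev uses; the point is that removing one key entry makes every ciphertext encrypted under it unrecoverable in a single O(1) step.

```python
import hashlib
import os

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Illustrative SHA-256 counter-mode keystream (stand-in for a real cipher).
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

class UserKeyring:
    """Per-user DEKs; destroying a DEK erases all of that user's grains."""

    def __init__(self):
        self._deks = {}

    def encrypt_grain(self, user_id: str, plaintext: bytes):
        dek = self._deks.setdefault(user_id, os.urandom(32))
        nonce = os.urandom(16)
        ks = _keystream(dek, nonce, len(plaintext))
        return nonce, bytes(a ^ b for a, b in zip(plaintext, ks))

    def decrypt_grain(self, user_id: str, nonce: bytes, ciphertext: bytes) -> bytes:
        dek = self._deks[user_id]  # raises KeyError once the DEK is destroyed
        ks = _keystream(dek, nonce, len(ciphertext))
        return bytes(a ^ b for a, b in zip(ciphertext, ks))

    def forget_user(self, user_id: str) -> str:
        # O(1): drop the key; every grain ciphertext for this user is now gone.
        dek = self._deks.pop(user_id)
        return hashlib.sha256(dek).hexdigest()[:16]  # fingerprint for the proof
```

After `forget_user("john")`, any attempt to decrypt John's grains fails because no copy of the DEK exists; no per-grain rewrite ever happens.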

The forget_user() operation destroys the DEK, purges blind index tokens, cleans entity_latest entries, handles tiered storage cleanup, and returns an ErasureProof with the grain count, key fingerprint, and timestamp. Six compliance verification checks validate erasure completeness: erasure_crypto, erasure_completeness, erasure_proof, erasure_key_destruction, erasure_data_inaccessible, and erasure_memory_clean.

import requests

# Erase all data for a user (GDPR Art. 17)
resp = requests.post("http://localhost:4009/api/memories/default/forget",
    json={"user_id": "john"})
proof = resp.json()  # ErasureProof with grain count and key fingerprint

The equivalent raw HTTP request:

POST /api/memories/default/forget HTTP/1.1
Host: localhost:4009
Content-Type: application/json

{"user_id": "john"}

Or via the CLI:

areev erase john
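The six verification checks can be pictured as a checklist evaluated against the store after erasure. The `ErasureProof` fields beyond grain count, key fingerprint, and timestamp, and all of the `store` query hooks below, are assumptions for illustration; only the six check names come from the documentation above.

```python
from dataclasses import dataclass

@dataclass
class ErasureProof:
    user_id: str
    grain_count: int       # grains covered by the erasure
    key_fingerprint: str   # fingerprint of the destroyed DEK
    erased_at: str         # ISO 8601 timestamp

def verify_erasure(proof: ErasureProof, store) -> dict:
    """Run the six compliance checks; `store` exposes hypothetical query hooks."""
    return {
        "erasure_crypto": store.dek_destroyed(proof.user_id),
        "erasure_completeness": store.grain_count(proof.user_id) == 0,
        "erasure_proof": proof.grain_count >= 0 and bool(proof.key_fingerprint),
        "erasure_key_destruction": not store.has_dek(proof.user_id),
        "erasure_data_inaccessible": not store.can_decrypt_any(proof.user_id),
        "erasure_memory_clean": store.entity_latest_empty(proof.user_id),
    }
```

A compliance report is then simply `all(checks.values())` over the returned dict.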

How does Areev track consent (Art. 7)?

Areev tracks consent through Consent grains, a dedicated grain type that records consent grants and revocations. When the GDPR policy is active, the policy engine checks for valid consent on every write operation.

Consent grains are themselves exempt from consent checks (to avoid a circular dependency). ConsentGranted and ConsentRevoked audit events create an immutable record of consent state changes. Processing restrictions (Art. 18) can be applied per-user, blocking all recall operations for the restricted user until the restriction is lifted. Each Consent grain stores the consent purpose, legal basis, and timestamp, giving auditors a complete consent history.

# Grant consent
resp = requests.post("http://localhost:4009/api/memories/default/add", json={
    "type": "consent",
    "subject": "john",
    "relation": "grants_consent_for",
    "object": "marketing_emails",
    "purpose": "Direct marketing communications",
    "legal_basis": "consent"
})

Or via the CLI:

areev add --type consent --subject john --relation grants_consent_for \
  --object marketing_emails --purpose "Direct marketing communications"
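The write-time consent check and the Art. 18 recall block described above reduce to a small amount of state. This sketch is a minimal stand-in for the actual policy engine; the class and method names are assumptions, but it reproduces the documented rules, including the exemption for Consent grains that breaks the circular dependency.

```python
class PolicyEngine:
    """Minimal sketch: consent checks on write, Art. 18 restrictions on recall."""

    def __init__(self):
        self._consents = set()    # active (user_id, purpose) pairs
        self._restricted = set()  # users under an Art. 18 processing restriction

    def record_consent(self, user_id: str, purpose: str, granted: bool):
        # A real engine would also emit ConsentGranted / ConsentRevoked
        # audit events here to keep the immutable consent history.
        if granted:
            self._consents.add((user_id, purpose))
        else:
            self._consents.discard((user_id, purpose))

    def check_write(self, grain_type: str, user_id: str, purpose: str) -> bool:
        if grain_type == "consent":
            return True  # Consent grains are exempt: no circular dependency
        return (user_id, purpose) in self._consents

    def check_recall(self, user_id: str) -> bool:
        # Art. 18: a restriction blocks all recall until lifted
        return user_id not in self._restricted

    def restrict_processing(self, user_id: str):
        self._restricted.add(user_id)
```

Note that revoking consent only blocks future writes for that purpose; erasing already-stored data is the separate Art. 17 crypto-erasure path.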

How does data minimization work (Art. 5)?

Areev supports data minimization through TTL-based retention policies and privacy-safe provenance records. When a policy specifies a TTL, grains exceeding the retention period are automatically excluded from recall results.
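The TTL ceiling is just a filter over recall candidates. This sketch assumes each grain carries a `created_at` epoch timestamp; the function name and grain shape are illustrative, not Areev's actual internals.

```python
import time

def apply_ttl_ceiling(grains, ttl_seconds, now=None):
    """Exclude grains older than the policy TTL from recall results."""
    now = time.time() if now is None else now
    return [g for g in grains if now - g["created_at"] <= ttl_seconds]
```

Because the filter runs at recall time, a grain past its TTL stops influencing answers immediately, even before any background deletion job removes it.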

Provenance records (CD-1) satisfy GDPR Art. 5(1)(c) by storing only SHA-256 hashes, score breakdowns, and structural metadata, never the grain content itself. Blind index tokens prevent plaintext storage in indexes. The audit trail pseudonymizes all actor and subject IDs via HMAC-SHA256 (Art. 25, data protection by design). Query parameters in provenance are HMAC-blinded when encryption is active, so even the search terms in the decision log cannot be read in plaintext.

Data minimization controls:
  - TTL ceiling: grains beyond TTL are excluded from recall()
  - Provenance records store only hashes and scores, never raw content
  - Blind index tokens prevent plaintext storage in indexes
  - Audit trail pseudonymizes all actor and subject IDs
  - Query params in provenance are HMAC-blinded when encryption is active
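The pseudonymization and blind-index controls in the list above both reduce to keyed hashing. A sketch with the standard library's `hmac` module; the key constant and function names are assumptions, and Areev's actual key management is not shown here:

```python
import hashlib
import hmac

SERVER_KEY = b"server-side secret"  # illustrative; never hard-code a real key

def pseudonymize(actor_id: str) -> str:
    """HMAC-SHA256 pseudonym for audit-trail actor/subject IDs (Art. 25)."""
    return hmac.new(SERVER_KEY, actor_id.encode(), hashlib.sha256).hexdigest()

def blind_index_token(term: str) -> str:
    """Deterministic token so equality lookups work without plaintext in the index."""
    return hmac.new(SERVER_KEY, term.lower().encode(), hashlib.sha256).hexdigest()[:32]
```

The same input always yields the same token, so indexes and audit joins keep working, while anyone reading the index or audit trail without the server key sees only opaque digests.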

How does data portability work (Art. 15/20)?

Areev supports data subject access requests (Art. 15) and data portability (Art. 20) through a structured JSON export mechanism. The UserDataExported audit event records each export with grain count and portability flag.

The export includes all grain content, metadata, and provenance records associated with the user. The portable flag controls whether the export uses a standardized interchange format for migration to another system. Every export is audit-logged, providing regulators with evidence that access requests were fulfilled and the data was delivered in a machine-readable format.

import requests

# Export a user's data (Art. 15 access request)
resp = requests.get("http://localhost:4009/api/memories/default/export/john",
    params={"portable": True})
data = resp.json()

The equivalent raw HTTP request:

GET /api/memories/default/export/john?portable=true HTTP/1.1
Host: localhost:4009

Or via the CLI:

areev export --user john --portable --output john-data.json
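Once fetched, the export JSON can be handed to the data subject as-is. This sketch assumes a top-level `grains` list in the export; the exact schema is not documented above, so treat the field names as hypothetical.

```python
import json

def save_portable_export(export: dict, path: str) -> int:
    """Write an Art. 20 export to a machine-readable JSON file; return grain count."""
    grains = export.get("grains", [])
    with open(path, "w", encoding="utf-8") as f:
        json.dump(export, f, indent=2, ensure_ascii=False)
    return len(grains)
```

The returned grain count can be cross-checked against the count recorded in the UserDataExported audit event when answering a regulator's follow-up.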