EU AI Act
How does decision provenance satisfy Art. 12 record-keeping?
Areev captures a complete decision trace for every recall() invocation through the Decision Provenance Chain (CD-1). This context database records query parameters, returned results with score breakdowns, excluded candidates with reasons, and a cryptographic link to the audit trail entry, satisfying EU AI Act Art. 12 requirements for AI memory system record-keeping.
Provenance records store only hashes and scoring metadata, never raw grain content, simultaneously satisfying GDPR Art. 5(1)(c) data minimization. When encryption is active, SPO fields in query parameters are stored as HMAC-SHA256 blind tokens. Records are retained for 180 days by default per EU AI Act Art. 19, configurable per namespace. The autonomous memory engine links each record to a MemoryRecalled audit event via the audit_entry_hash field, forming a cryptographic cross-reference.
import requests
# Retrieve provenance for a recall operation
resp = requests.get("http://localhost:4009/api/memories/default/provenance/a1b2c3d4")
resp.raise_for_status()
record = resp.json()
# record["score_breakdowns"], record["excluded"], record["audit_entry_hash"]
GET /api/memories/default/provenance/a1b2c3d4 HTTP/1.1
Host: localhost:4009
areev provenance --recall-id a1b2c3d4
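The blind-token scheme mentioned above can be sketched in a few lines. This is an illustrative construction only: the function name and key handling are hypothetical, not Areev's API; the point is that an HMAC-SHA256 of the SPO value under a secret key is deterministic (so provenance records stay joinable) without revealing the raw value.

```python
import hashlib
import hmac

def blind_token(namespace_key: bytes, spo_value: str) -> str:
    """Pseudonymize an SPO field as an HMAC-SHA256 blind token (hex digest)."""
    return hmac.new(namespace_key, spo_value.encode("utf-8"), hashlib.sha256).hexdigest()

# The same value under the same key always yields the same token,
# so provenance records can be correlated without storing raw content.
key = b"example-namespace-key"  # hypothetical; real key derivation is internal
token = blind_token(key, "alice")
assert token == blind_token(key, "alice")   # deterministic
assert token != blind_token(key, "bob")     # distinct values, distinct tokens
assert len(token) == 64                     # hex-encoded 256-bit digest
```

Because HMAC is keyed, an attacker who reads the provenance store cannot brute-force tokens back to values without the namespace key.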
How does Art. 86 explanation work?
Areev generates human-readable explanations from provenance records, providing affected individuals with understandable reasons for AI agent memory system decisions. Each explanation describes the scoring factors that led to a memory being recalled or excluded.
Explanations cover BM25 text relevance rank, semantic similarity score (vector search), RRF fusion score, recency decay factor, interference penalty, and final composite score. Exclusion reasons are equally transparent: ProcessingRestricted (GDPR Art. 18), Superseded, BelowConfidence, Contradicted, TtlExpired, DiversityFiltered, and 15 other documented types (21 total). This transparency supports Art. 86’s requirement for explanations of decisions affecting individual rights.
"This belief (created 2026-03-01) was recalled because:
it ranked #1 in text relevance (BM25), had 0.87 semantic
similarity (high, vector search), combined via RRF fusion
(score: 0.0164), 0.95 recency factor (very recent memory).
Final score: 0.9200."
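A rendering step like the one that produces the explanation above can be sketched as a simple template over a score breakdown. The field names here are assumptions inferred from the example record, not Areev's documented schema:

```python
def explain_recall(breakdown: dict) -> str:
    """Render a score breakdown as an Art. 86-style explanation string.
    Field names are illustrative, inferred from the example above."""
    return (
        f"This belief (created {breakdown['created']}) was recalled because: "
        f"it ranked #{breakdown['bm25_rank']} in text relevance (BM25), "
        f"had {breakdown['semantic_similarity']:.2f} semantic similarity, "
        f"combined via RRF fusion (score: {breakdown['rrf_score']:.4f}), "
        f"{breakdown['recency_factor']:.2f} recency factor. "
        f"Final score: {breakdown['final_score']:.4f}."
    )

print(explain_recall({
    "created": "2026-03-01", "bm25_rank": 1, "semantic_similarity": 0.87,
    "rrf_score": 0.0164, "recency_factor": 0.95, "final_score": 0.92,
}))
```

Keeping the template purely derived from stored provenance fields means explanations stay faithful to the record rather than being generated after the fact.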
How does AI content marking work (Art. 50)?
The .mg binary format includes an AI-generated content flag (bit 5 of the flags byte) that marks grains produced by AI systems, satisfying EU AI Act Art. 50 transparency obligations. This context database automatically sets the flag during serialization when the grain’s source_type is llm_generated, consolidated, inferred, or a2a_recalled, ensuring that downstream consumers of AI memory can identify AI-generated content without inspecting grain content.
The flag persists through serialization, tiered storage, and sharing envelopes. The flags byte also carries is_signed (bit 0), is_encrypted (bit 1), is_compressed (bit 2), has_content_refs (bit 3), has_embedding_refs (bit 4), and sensitivity level (bits 6-7, values 0-3). This metadata-level marking fulfills Art. 50 without requiring content inspection or additional processing.
.mg header flags byte:
bit 0: is_signed (COSE Sign1 Ed25519)
bit 1: is_encrypted
bit 2: is_compressed
bit 3: has_content_refs
bit 4: has_embedding_refs
bit 5: is_ai_generated <- Art. 50 content marking
bits 6-7: sensitivity level (0-3)
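The flags-byte layout above maps directly to bit masks. A minimal decoder sketch (the function name is ours, not part of the .mg tooling):

```python
def parse_mg_flags(flags: int) -> dict:
    """Decode the .mg header flags byte per the layout above."""
    return {
        "is_signed":          bool(flags & 0x01),   # bit 0
        "is_encrypted":       bool(flags & 0x02),   # bit 1
        "is_compressed":      bool(flags & 0x04),   # bit 2
        "has_content_refs":   bool(flags & 0x08),   # bit 3
        "has_embedding_refs": bool(flags & 0x10),   # bit 4
        "is_ai_generated":    bool(flags & 0x20),   # bit 5, Art. 50 marking
        "sensitivity":        (flags >> 6) & 0x03,  # bits 6-7, values 0-3
    }

# A signed, AI-generated grain at sensitivity level 2:
f = parse_mg_flags(0b10100001)
assert f["is_signed"] and f["is_ai_generated"] and f["sensitivity"] == 2
```

Checking bit 5 is all a downstream consumer needs to identify AI-generated content, which is why the marking works without content inspection.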
How does AI system disclosure work (Art. 52)?
When the active policy sets ai_marking_required (e.g., GDPR), Areev injects X-AI-System and X-AI-Act-Disclosure response headers across HTTP, gRPC, and MCP interfaces. This ensures that any system interacting with the AI agent memory is informed it is communicating with an AI system, satisfying Art. 52’s disclosure obligation at the interface level.
The art52_disclosure_headers compliance check verifies that the policy mandates AI marking and that the headers are injected. Policies without ai_marking_required (e.g., CCPA) will trigger a warning on this check.
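A client-side version of this check can be sketched as a predicate over response headers. The header names come from the text above; the helper itself and the example values are assumptions, not Areev's API:

```python
def has_art52_disclosure(headers: dict) -> bool:
    """Return True if a response carries both AI-system disclosure headers.
    With requests, pass resp.headers (a case-insensitive mapping)."""
    return "X-AI-System" in headers and "X-AI-Act-Disclosure" in headers

# Example with hypothetical header values:
assert has_art52_disclosure({
    "X-AI-System": "example-system",
    "X-AI-Act-Disclosure": "This response was produced by an AI system.",
})
assert not has_art52_disclosure({"Content-Type": "application/json"})
```

Running such a probe against each interface (HTTP, gRPC metadata, MCP) is one way to independently verify what the art52_disclosure_headers compliance check asserts server-side.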
How do impact assessments work (Art. 9/27)?
Areev generates automated impact assessment reports for EU AI Act Art. 9 (risk management) and Art. 27 (fundamental rights assessment), as well as Colorado AI Act conformity assessments. Reports analyze data practices, usage patterns, and disparity detection across this AI memory system.
The assessment includes system description, data practices summary, usage analysis by namespace, disparity detection (flags namespaces with over 2x deviation from the mean grain count), risk assessment with severity levels, and mitigation measures with implementation status. Each report covers all 87 compliance verification checks and their results, providing a comprehensive view of the autonomous memory system’s regulatory posture.
import requests
# Generate an impact assessment
resp = requests.get("http://localhost:4009/api/memories/default/compliance/impact")
resp.raise_for_status()
assessment = resp.json()
GET /api/memories/default/compliance/impact HTTP/1.1
Host: localhost:4009
areev impact-assessment --output report.json
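The disparity-detection rule described above (flag namespaces deviating more than 2x from the mean grain count) can be sketched as follows. This is one plausible reading of "over 2x deviation", treating it symmetrically, and is not Areev's actual implementation:

```python
def detect_disparity(grain_counts: dict, threshold: float = 2.0) -> list:
    """Flag namespaces whose grain count is more than `threshold`x above
    the mean, or less than 1/`threshold` of it (illustrative sketch)."""
    mean = sum(grain_counts.values()) / len(grain_counts)
    return [ns for ns, n in grain_counts.items()
            if n > threshold * mean or n < mean / threshold]

# Mean here is 175, so "hr" (> 350) and "legal" (< 87.5) are flagged:
flagged = detect_disparity({"default": 100, "support": 90, "hr": 500, "legal": 10})
assert set(flagged) == {"hr", "legal"}
```

A flagged namespace is a prompt for human review in the report, not an automatic finding of disparate impact.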
Related
- Provenance: Provenance chain technical details
- Compliance: Full compliance verification overview
- Audit Trail: Art. 12 record-keeping via audit log