A2A
How do I discover Areev as an A2A agent?
Start the server with --a2a and fetch the agent card from /.well-known/agent.json. Any A2A-compatible agent can then send tasks to Areev’s 15 skills for AI memory operations.
Areev implements Google’s Agent-to-Agent (A2A) protocol, which enables peer-to-peer communication between AI agents over HTTP. Unlike MCP’s tool-calling model where a client pulls a tool manifest, A2A uses a task-based model where agents send structured tasks to each other. The context database becomes a first-class participant in multi-agent systems, receiving autonomous memory operations as tasks and returning results through the standard A2A response format.
The agent card at /.well-known/agent.json declares Areev’s AI agent memory capabilities, supported protocol versions, and all 15 available skills. A2A clients read this card during discovery to determine what operations Areev supports. Each skill accepts input as either structured JSON data or natural language text, making A2A suitable for both programmatic integration and conversational agent workflows.
Tasks target a skill by ID and include one or more input parts. All skills return data output containing the operation result as a JSON object.
# Start with A2A enabled
areev serve --a2a --http 0.0.0.0:4009
# Fetch the agent card
curl http://localhost:4009/.well-known/agent.json
{
"name": "areev",
"description": "Areev Knowledge Database -- OMS v1.2 compliant memory server for AI agents",
"url": "http://localhost:4009",
"version": "0.1.0",
"capabilities": {
"streaming": false,
"pushNotifications": false,
"stateTransitionHistory": true
},
"supportedProtocolVersions": ["1.0"],
"defaultInputModes": ["text"],
"defaultOutputModes": ["text", "data"],
"skills": [ ... ]
}
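A client's first step is to fetch and inspect this card. The sketch below parses a card and checks for a skill before sending tasks; since the card above elides the skills array, the per-skill fields used here (`id`, `inputModes`, `outputModes`) are assumptions, not taken from this document.

```python
import json

# Abbreviated agent card as served at /.well-known/agent.json. The per-skill
# fields (id, inputModes, outputModes) are assumed, since the card shown in
# the docs elides the skills array.
CARD_JSON = """
{
  "name": "areev",
  "version": "0.1.0",
  "supportedProtocolVersions": ["1.0"],
  "skills": [
    {"id": "memory_add", "inputModes": ["text", "data"], "outputModes": ["data"]},
    {"id": "memory_recall", "inputModes": ["text", "data"], "outputModes": ["data"]}
  ]
}
"""

card = json.loads(CARD_JSON)

def supports_skill(card: dict, skill_id: str) -> bool:
    """Return True if the agent card advertises the given skill ID."""
    return any(s.get("id") == skill_id for s in card.get("skills", []))

print(supports_skill(card, "memory_recall"))   # True
print(supports_skill(card, "memory_teleport"))  # False
```

In a real client the card would come from an HTTP GET against `/.well-known/agent.json` rather than a string literal.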
What memory skills are available?
The A2A agent exposes 10 memory skills: memory_add, memory_recall, memory_recall_chain, memory_forget, memory_accumulate, memory_supersede, memory_remember, memory_flush, memory_graph, and memory_cal.
memory_add stores a new grain in the context database. Send structured data for full control over grain type, fields, namespace, and tags, or send plain text for a quick belief extraction. memory_recall queries the autonomous memory engine with free-text search, structural filters, or both. Skills that accept text input interpret natural language like “What do you know about Rust?” and convert it into the appropriate query. memory_recall_chain decomposes a complex question into sub-queries, executes each independently, and merges the results (requires a language model for decomposition).
memory_forget deletes grains by hash, subject, or user ID (supporting GDPR crypto-erasure). memory_accumulate atomically applies numeric deltas to a grain’s fields with tip resolution (zero conflicts). memory_supersede updates an existing grain by hash, merging new field values over old ones while preserving the grain type. memory_remember ingests natural language text as memory, creating an Observation grain and queuing belief extraction. memory_flush commits the write buffer, ensuring all pending writes reach storage. memory_graph performs BFS traversal of the entity knowledge graph with configurable depth and node limits. memory_cal executes CAL (Context Assembly Language) queries for structured memory retrieval and assembly, accepting text input only.
{
"skill": "memory_add",
"input": [
{"type": "data", "data": {
"grain_type": "belief",
"subject": "rust",
"relation": "created_by",
"object": "Graydon Hoare",
"confidence": 0.99,
"tags": ["languages"],
"namespace": "programming"
}}
]
}
{
"skill": "memory_recall",
"input": [
{"type": "data", "data": {
"query": "programming languages",
"limit": 5,
"grain_type": "belief",
"subject": "rust",
"namespace": "programming"
}}
]
}
{
"skill": "memory_graph",
"input": [
{"type": "data", "data": {
"subject": "john",
"depth": 2,
"max_nodes": 50
}}
]
}
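The task payloads above can be built and submitted with only the standard library. A minimal sketch follows; the task shape matches the examples above, but the exact HTTP envelope of the `/a2a` endpoint (plain POST of the task object, JSON response body) is an assumption here.

```python
import json
import urllib.request

def make_task(skill: str, data: dict) -> dict:
    """Build an A2A task with a single structured data part, as in the
    examples above."""
    return {"skill": skill, "input": [{"type": "data", "data": data}]}

def send_task(base_url: str, task: dict) -> dict:
    """POST a task to the /a2a endpoint and decode the JSON response.
    The request/response envelope used here is an assumption."""
    req = urllib.request.Request(
        base_url + "/a2a",
        data=json.dumps(task).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

task = make_task("memory_add", {
    "grain_type": "belief",
    "subject": "rust",
    "relation": "created_by",
    "object": "Graydon Hoare",
    "confidence": 0.99,
})
# result = send_task("http://localhost:4009", task)  # requires a running server
```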
What session, stats, and compliance skills are available?
Five additional skills cover session management, monitoring, compliance verification, and PII scanning: session_bootstrap, memory_stats, memory_metrics, compliance_verify, and pii_scan.
session_bootstrap returns full session context (latest state, active goals, recent actions) for resuming agent work. This skill accepts a session_id and an optional recent_actions_limit to control how many actions are included. It provides the same data as the REST /sessions/{sid}/bootstrap endpoint, packaged for A2A consumption.
memory_stats returns database statistics (total grains, type counts, store size, health) and accepts plain text input. memory_metrics returns operational health, intelligence quality, and billing metrics for a specified domain. compliance_verify runs the 87+ compliance checks against the sealed policies, with an optional regulation filter. pii_scan detects PII/PHI in text with match positions and confidence scores, running the hybrid NER + regex pipeline locally.
{
"skill": "session_bootstrap",
"input": [
{"type": "data", "data": {
"session_id": "sess_abc123",
"recent_actions_limit": 10
}}
]
}
{
"skill": "compliance_verify",
"input": [
{"type": "data", "data": {"regulation": "gdpr"}}
]
}
{
"skill": "memory_metrics",
"input": [
{"type": "data", "data": {"domain": "ops"}}
]
}
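pii_scan accepts text input only, so its task carries a text part rather than a data part. A sketch of building one follows; the `"text"` field name on the part is an assumption, mirroring the shape of the data parts shown above.

```python
import json

# pii_scan is a text-only skill, so the input part is a text part. The
# "text" field name is assumed by analogy with the data parts above.
task = {
    "skill": "pii_scan",
    "input": [{"type": "text", "text": "Email Jane Doe at jane.doe@example.com"}],
}

payload = json.dumps(task)
```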
What are the input and output modes?
Each skill declares supported input modes (text, data, or both) and output modes. Skills accepting text input interpret natural language; skills accepting data input expect structured JSON.
| Skill | Input modes | Output modes |
|---|---|---|
| memory_add | text, data | data |
| memory_recall | text, data | data |
| memory_recall_chain | data | data |
| memory_forget | data | data |
| memory_accumulate | data | data |
| memory_supersede | data | data |
| memory_remember | text, data | data |
| memory_flush | text | data |
| memory_graph | text, data | data |
| session_bootstrap | data | data |
| memory_cal | text | data |
| memory_stats | text | data |
| memory_metrics | text, data | data |
| compliance_verify | text, data | data |
| pii_scan | text | data |
Skills that accept both modes give agents flexibility in how they interact with the context database. An AI agent can send “Remember that John likes coffee” as text to memory_add, or send a structured data object with explicit grain type, subject, relation, and object fields. Data-only skills like memory_forget and memory_supersede require precise identifiers (hashes, user IDs) that are unsuitable for natural language interpretation.
All skills return data output containing the operation result as a JSON object, regardless of the input mode used.
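Because input modes vary per skill, a client can validate a task locally before sending it. A minimal client-side sketch that encodes the table above (this check is a convenience, not part of the protocol):

```python
# Input-mode table from above, encoded for client-side validation.
SKILL_INPUT_MODES = {
    "memory_add": {"text", "data"},
    "memory_recall": {"text", "data"},
    "memory_recall_chain": {"data"},
    "memory_forget": {"data"},
    "memory_accumulate": {"data"},
    "memory_supersede": {"data"},
    "memory_remember": {"text", "data"},
    "memory_flush": {"text"},
    "memory_graph": {"text", "data"},
    "session_bootstrap": {"data"},
    "memory_cal": {"text"},
    "memory_stats": {"text"},
    "memory_metrics": {"text", "data"},
    "compliance_verify": {"text", "data"},
    "pii_scan": {"text"},
}

def check_task(task: dict) -> None:
    """Raise ValueError if any input part uses a mode the skill rejects."""
    allowed = SKILL_INPUT_MODES[task["skill"]]
    for part in task["input"]:
        if part["type"] not in allowed:
            raise ValueError(
                f'{task["skill"]} does not accept {part["type"]} input'
            )
```

For example, a text part sent to the data-only memory_forget skill fails this check, while the same part sent to memory_remember passes.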
How does A2A differ from MCP?
A2A uses a task-based model where agents communicate as peers, while MCP uses a tool-calling model where a client pulls tools from a server.
Both protocols give AI agents access to the same underlying Areev operations, but they target different integration architectures. MCP works within agent frameworks that use tool-calling (Claude Desktop, Cursor), where the framework manages the tool manifest and invokes tools on behalf of the user. A2A works in multi-agent systems where autonomous agents discover and communicate with each other directly.
A2A offers two capabilities that MCP does not: session context via session_bootstrap and entity graph traversal via memory_graph. MCP provides a tighter integration surface with fewer moving parts, making it the better choice for single-agent tool-use patterns.
| Aspect | MCP | A2A |
|---|---|---|
| Protocol | Tool-calling (client pulls tools) | Task-based (agent sends tasks to peer) |
| Discovery | MCP tool manifest | /.well-known/agent.json agent card |
| Endpoint | /mcp | /a2a |
| Input format | Tool arguments (JSON) | Task parts (text or data) |
| Session context | No | Yes (session_bootstrap skill) |
| Graph traversal | No | Yes (memory_graph skill) |
| Streaming | No | No (current version) |