# CrewAI

## How do I give a CrewAI crew shared memory?
Subclass crewai.tools.BaseTool, call Areev’s A2A endpoint in _run, and attach the tool to every agent that should read or write crew memory.
CrewAI agents are independent actors. Without a shared memory, each one re-derives context on every step. Wiring all of them to a single Areev memory turns the crew into a stateful, auditable team: the researcher writes findings, the writer reads them, and every grain is content-addressed, encrypted, and subject to the memory’s compliance policy. See Managing Memories for how memory isolation and scope work.
The simplest wiring is a thin A2A tool class. CrewAI does not currently ship an A2A client, so a single HTTP call in _run is the smallest working bridge.
```python
import requests
from crewai import Agent, Task, Crew
from crewai.tools import BaseTool

AREEV_URL = "https://your-areev-host"
AREEV_KEY = "ar_..."  # API key from the Areev app


def _a2a(skill: str, text: str) -> dict:
    """Send one A2A task to Areev and return the JSON-RPC result."""
    r = requests.post(
        f"{AREEV_URL}/a2a",
        headers={"Authorization": f"Bearer {AREEV_KEY}"},
        json={
            "jsonrpc": "2.0",
            "id": 1,
            "method": "tasks/send",
            "params": {
                "id": f"task-{skill}",
                "message": {"role": "user", "parts": [{"type": "text", "text": text}]},
                "metadata": {"skill": skill},
            },
        },
        timeout=30,
    )
    r.raise_for_status()
    return r.json()["result"]


class AreevRemember(BaseTool):
    name: str = "remember"
    description: str = "Store a fact in the crew's shared memory."

    def _run(self, fact: str) -> dict:
        return _a2a("memory_remember", fact)


class AreevRecall(BaseTool):
    name: str = "recall"
    description: str = "Search the crew's shared memory in natural language."

    def _run(self, question: str) -> dict:
        return _a2a("memory_recall", question)


memory_tools = [AreevRemember(), AreevRecall()]

researcher = Agent(
    role="Researcher",
    goal="Find and store facts",
    backstory="Collects facts and writes them to the crew's shared memory.",
    tools=memory_tools,
)
writer = Agent(
    role="Writer",
    goal="Read facts and draft the report",
    backstory="Drafts reports from facts recalled from shared memory.",
    tools=memory_tools,
)

crew = Crew(agents=[researcher, writer], tasks=[...])  # define Task objects here
crew.kickoff()
```
## How do I keep per-crew memory isolated?
Use a separate Areev memory per crew, or a separate namespace within one memory. The A2A skill and endpoint are identical; only the API key or the grain’s namespace field changes.
Each Areev memory has independent encryption keys, policy, and retention. Give long-lived crews their own memory so a compliance reset on one does not affect the others. Short-lived or ad-hoc crews can share a memory but tag their writes with a namespace like crew:q2-research — recall filters then scope cleanly.
```python
class AreevRecall(BaseTool):
    name: str = "recall"
    description: str = "Search this crew's namespace in shared memory."
    namespace: str = "crew:q2-research"

    def _run(self, question: str) -> dict:
        return _a2a_data("memory_recall", {
            "query": question,
            "namespace": self.namespace,
            "limit": 10,
        })
```
Use memory_recall in data mode to pass structured filters like namespace, subject, and limit. The A2A skill reference lists every supported field per skill.
## How do I run CAL queries from a crew?
Expose memory_cal as a tool. CAL lets a single agent fetch a ranked, formatted context in one call instead of chaining recall + filter + sort manually.
CAL is Areev’s Context Assembly Language: a declarative query for memory retrieval with RECALL, ASSEMBLE, LET, and FORMAT clauses. For CrewAI agents, CAL is the fastest way to hand a specialist agent exactly the context it needs — for example, “give the writer the top 5 beliefs about the subject formatted as markdown bullets.”
```python
class AreevCal(BaseTool):
    name: str = "cal_query"
    description: str = (
        "Run a CAL query against shared memory. "
        "Example: 'RECALL beliefs ABOUT subject LIMIT 5 FORMAT markdown'"
    )

    def _run(self, cal_query: str) -> dict:
        return _a2a("memory_cal", cal_query)
```
Full CAL syntax: CAL Queries.
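When an agent always asks the same shape of question, you can compose the query string in code instead of trusting the LLM to emit valid CAL. A small sketch using only the single-clause RECALL shape from the example above (cal_recall is a hypothetical helper, not part of Areev; the full grammar is in CAL Queries):

```python
def cal_recall(kind: str, subject: str, limit: int = 5, fmt: str = "markdown") -> str:
    # Compose a one-clause RECALL query matching the example in the
    # tool description above; other clauses (ASSEMBLE, LET) not covered.
    return f"RECALL {kind} ABOUT {subject} LIMIT {limit} FORMAT {fmt}"
```

A task can then call the tool deterministically, e.g. `AreevCal()._run(cal_recall("beliefs", "subject"))`.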
## How do I handle “forget this user” requests?
Add memory_forget as a tool on any crew that touches user data. Forget is a first-class skill and performs GDPR-compliant crypto-erasure.
Areev’s forget operation is not a soft delete — it destroys the per-user DEK so ciphertext becomes unrecoverable even with backups. Exposing this as a CrewAI tool lets a data-governance agent act on user requests end-to-end. The skill accepts hash, subject, or user_id inputs. See Crypto-Erasure for the full guarantee.
```python
class AreevForget(BaseTool):
    name: str = "forget_user"
    description: str = "Crypto-erase all memory for a given user_id."

    def _run(self, user_id: str) -> dict:
        return _a2a_data("memory_forget", {"user_id": user_id})
```
## What about A2UI rendering and streaming?
Areev’s A2A endpoint does not currently stream. If the crew needs real-time updates, use the REST /api/memories/{id}/chat SSE endpoint instead of A2A for that specific read path.
The A2A protocol version Areev implements (1.0) does not yet declare streaming: true in its agent card. Tasks return their full result in a single JSON response. For conversational streaming UI, Areev’s REST chat endpoint (see Chat) is the right surface — A2A is best for batch or step-wise tool calls inside a crew.
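If you do take the SSE path, the read loop is a plain line protocol. A minimal sketch, assuming the chat endpoint takes a JSON body with a message field (an assumption — check the Chat reference for the actual request shape) and emits standard data: lines:

```python
import requests

AREEV_URL = "https://your-areev-host"
AREEV_KEY = "ar_..."  # API key from the Areev app


def sse_data(lines):
    # Minimal SSE reader: collect the data: lines of each event
    # (a blank line terminates an event) and yield the joined payload.
    buf = []
    for line in lines:
        if line.startswith("data:"):
            buf.append(line[len("data:"):].strip())
        elif line == "" and buf:
            yield "\n".join(buf)
            buf = []
    if buf:
        yield "\n".join(buf)


def stream_chat(memory_id: str, message: str):
    # Endpoint path from the docs above; body shape is an assumption.
    r = requests.post(
        f"{AREEV_URL}/api/memories/{memory_id}/chat",
        headers={"Authorization": f"Bearer {AREEV_KEY}"},
        json={"message": message},
        stream=True,
        timeout=60,
    )
    r.raise_for_status()
    yield from sse_data(r.iter_lines(decode_unicode=True))
```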
## Related
- A2A — full skill catalog and task envelope
- Managing Memories — per-crew memory isolation
- CAL Queries — syntax for the memory_cal tool
- LangGraph — same pattern for LangGraph
- AutoGen — same pattern for AutoGen