Reasoning Grain
What is a Reasoning grain?
A Reasoning grain stores an inference chain — the logical steps that connect observations and beliefs to a conclusion. By preserving the full path from premises to outcome, Reasoning grains make agent decision-making in the context database transparent and auditable.
In the AI agent memory system, Reasoning grains serve as the decision audit trail. They record what the agent considered (premises), how it reasoned (inference_method), what it concluded, and what alternatives it rejected. This is critical for regulated environments where the autonomous memory system must demonstrate why an agent made a specific decision — the EU AI Act’s transparency requirements and GDPR’s right to explanation both depend on this kind of traceability.
The text representation used for embedding and BM25 indexing is the conclusion field. If no conclusion is set, the thinking_content is used instead. This means you can recall reasoning grains by searching for conclusions (“CPU spike caused by batch job”) or by chain-of-thought content when conclusions are absent.
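The selection rule can be sketched in plain Python (a sketch of the documented behavior only; the actual indexing code is internal to areev):

```python
def embedding_text(fields: dict) -> str:
    """Return the text used for embedding and BM25 indexing.

    Mirrors the documented rule: prefer the conclusion, fall back to
    thinking_content, and yield an empty string when neither is set
    (such a grain would not be text-searchable).
    """
    return fields.get("conclusion") or fields.get("thinking_content") or ""
```

This is why the guidance below recommends always setting at least one of the two fields when creating a grain.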
| Field | Type | Required | Description |
|---|---|---|---|
| premises | string[] | no | Statements or observations the reasoning is based on |
| conclusion | string | no | The outcome of the inference |
| inference_method | string | no | Method used (e.g., "deductive", "abductive", "bayesian", "heuristic") |
| alternatives_considered | string[] | no | Other conclusions that were evaluated and rejected |
| thinking_content | string | no | Raw chain-of-thought or scratchpad text |
| thinking_redacted | boolean | no | Whether the thinking content was redacted for privacy |
Plus all common fields (confidence, tags, namespace, etc.).
How do I create a Reasoning grain?
Pass premises, conclusion, and inference_method through the /batch-add endpoint with grain_type: "reasoning". All fields are optional, but at minimum include a conclusion or thinking_content so the grain is text-searchable.
Include premises as an array of strings representing the facts or observations the reasoning is based on. The inference_method field categorizes the type of reasoning: "deductive" for syllogistic logic, "abductive" for inference to the best explanation, "bayesian" for probabilistic reasoning, or "heuristic" for rule-of-thumb decisions. The alternatives_considered array captures conclusions the agent evaluated but rejected, which is essential for demonstrating thoroughness in audit reviews.
Use confidence to express how certain the agent is in its conclusion. A confidence: 0.82 with inference_method: "abductive" tells downstream consumers that this is a best-guess explanation, not a proven fact. Link reasoning grains to the observations and actions they reference using related_to for full decision traceability.
import areev
db = areev.open("./data")
h = db.add("reasoning", {
"premises": [
"CPU usage exceeded 90% for 5 minutes",
"Memory pressure is nominal",
"No recent deployments in the last 24 hours"
],
"conclusion": "The CPU spike is caused by the batch processing job, not a code regression",
"inference_method": "abductive",
"alternatives_considered": [
"Memory leak causing swapping",
"Recent deployment introduced a hot loop"
],
"confidence": 0.82
})
POST /api/memories/default/batch-add
Content-Type: application/json
{
"grains": [
{
"grain_type": "reasoning",
"fields": {
"premises": [
"CPU usage exceeded 90% for 5 minutes",
"Memory pressure is nominal",
"No recent deployments in the last 24 hours"
],
"conclusion": "The CPU spike is caused by the batch processing job, not a code regression",
"inference_method": "abductive",
"alternatives_considered": [
"Memory leak causing swapping",
"Recent deployment introduced a hot loop"
]
}
}
]
}
areev add reasoning conclusion="CPU spike caused by batch job" inference_method=abductive
How do I store chain-of-thought traces?
Use thinking_content to capture the full chain-of-thought from an LLM. When the raw thinking contains sensitive information, set thinking_redacted to true and omit the content.
Chain-of-thought traces preserve the agent’s internal reasoning process, which is valuable for debugging unexpected conclusions and for compliance audits that require transparency into AI decision-making. The thinking_content field stores the raw scratchpad text — the unstructured stream of reasoning before it was distilled into premises and a conclusion.
When thinking_content contains PII, trade secrets, or other sensitive data, set thinking_redacted: true to record that reasoning occurred without exposing the content. The grain still carries its premises, conclusion, and inference_method, so the audit trail remains intact even without the raw thought process.
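A small pre-processing helper can enforce this before the grain is stored. This is a sketch, not part of the areev API: `is_sensitive` stands in for whatever PII or secret detector you already use.

```python
def redact_thinking(fields: dict, is_sensitive) -> dict:
    """Return a copy of the grain fields with thinking_content dropped
    and thinking_redacted set when the detector flags the raw trace.

    `is_sensitive` is a placeholder predicate (str -> bool); plug in
    your own PII/trade-secret scanner.
    """
    out = dict(fields)
    thinking = out.get("thinking_content")
    if thinking is not None and is_sensitive(thinking):
        del out["thinking_content"]
        out["thinking_redacted"] = True
    return out
```

The premises, conclusion, and inference_method pass through untouched, so the audit trail survives redaction exactly as described above.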
h = db.add("reasoning", {
"premises": ["User asked about competitor pricing"],
"conclusion": "Redirect to our pricing page with feature comparison",
"inference_method": "heuristic",
"thinking_content": "The user seems to be evaluating alternatives. "
"Best approach is to highlight differentiators rather than "
"directly addressing competitor pricing. Show the comparison table.",
"confidence": 0.91
})
# When thinking must be redacted
h = db.add("reasoning", {
"premises": ["Internal financial data referenced"],
"conclusion": "Cannot disclose quarterly projections",
"thinking_redacted": True,
"confidence": 1.0
})
POST /api/memories/default/batch-add
Content-Type: application/json
{
"grains": [
{
"grain_type": "reasoning",
"fields": {
"premises": ["User asked about competitor pricing"],
"conclusion": "Redirect to our pricing page with feature comparison",
"thinking_content": "The user seems to be evaluating alternatives...",
"inference_method": "heuristic"
}
}
]
}
When should I use Reasoning vs. Belief?
Use Reasoning to record how a conclusion was reached — the full inference chain with premises, alternatives, and method. Use Belief to store the resulting knowledge claim as a structured triple.
A reasoning grain says “here is why I concluded X”; a belief grain says “X is true.” Both are needed for a complete knowledge pipeline: the Reasoning grain provides the justification, and the Belief grain provides the queryable fact. Link them via related_to so that when someone queries a belief, they can trace back to the reasoning that produced it.
If you only store Beliefs without Reasoning, you lose the audit trail. If you only store Reasoning without Beliefs, you lose the structured queryability of the hexastore. The recommended pattern is to create both and link them.
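The linking step can be sketched as a payload builder. The belief field names (subject/predicate/object) and the shape of related_to are assumptions for illustration; check the Belief and common-field documentation for the exact schema.

```python
def link_belief_to_reasoning(reasoning_handle: str, triple: tuple) -> dict:
    """Build a belief-grain payload that points back at the reasoning
    grain which justified it.

    Assumes related_to accepts a list of grain handles and that a
    Belief grain is a subject/predicate/object triple — verify both
    against the Belief docs before relying on this shape.
    """
    subject, predicate, obj = triple
    return {
        "grain_type": "belief",
        "fields": {
            "subject": subject,
            "predicate": predicate,
            "object": obj,
            # Back-reference to the Reasoning grain that produced this claim.
            "related_to": [reasoning_handle],
        },
    }
```

With this pattern, querying the belief in the hexastore still leaves a one-hop path back to the premises, alternatives, and inference method that justified it.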
Related
- Grain Types: Overview of all 10 OMS grain types and shared fields
- Belief: For storing conclusions as structured knowledge claims
- Observation: For perceptions that serve as premises
- Action: For operations chosen based on reasoning