L4 - Reasoning

The Reasoning layer (L4) is the cognitive core of the agent, responsible for decision-making, planning, and inference.


The Reasoning layer transforms inputs and context into actionable decisions. It respects Persona constraints, accesses Memory for context, and produces structured decisions for Capability invocation.


- **Decision Making**: Choose appropriate actions based on context and goals
- **Planning**: Break down complex tasks into executable steps
- **Inference**: Draw conclusions from available information
- **Confidence Scoring**: Assess certainty of decisions and predictions


| Requirement | Description |
|---|---|
| Structured Output | Produce machine-readable action decisions |
| Confidence Scores | Include certainty estimates when applicable |
| Rationale | Explain the reasoning process (chain-of-thought) |
| Alternatives | Consider and document alternative options |

| Requirement | Description |
|---|---|
| Respect Persona | Honor constraints defined in L5 |
| No Direct Memory Access | Never bypass the L2 Memory interface |
| Validate Capabilities | Never bypass L3 Capability validation |
| Timeout Compliance | Respect request timeout limits |

| Requirement | Description |
|---|---|
| Log Decisions | Record all decisions with full rationale |
| Handle Ambiguity | Gracefully manage unclear inputs |
| Error Recovery | Handle model errors without crashing |

```json
{
  "decision_id": "uuid",
  "timestamp": "2026-01-15T12:00:00Z",
  "trace_id": "uuid",
  "action": "web_search",
  "parameters": {
    "query": "ARAL specification examples",
    "max_results": 5
  },
  "confidence": 0.95,
  "rationale": "User asked for examples. Web search is the most appropriate capability for finding documentation.",
  "chain_of_thought": [
    "User wants ARAL examples",
    "Examples likely documented online",
    "web_search capability available",
    "Query: 'ARAL specification examples'"
  ],
  "alternatives": [
    {
      "action": "database_query",
      "confidence": 0.3,
      "reason": "Local database might have cached examples, but likely outdated"
    }
  ],
  "metadata": {
    "model": "claude-3-opus",
    "temperature": 0.7,
    "tokens_used": 1250
  }
}
```
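A minimal sketch of how an implementation might assemble records with this shape. The field names follow the example above, but the helper functions themselves (`make_decision`, `is_valid_decision`) are hypothetical, not part of the specification:

```python
# Hypothetical helpers for building ARAL-style decision records.
# Field names mirror the example record; the API is an assumption.
import uuid
from datetime import datetime, timezone

REQUIRED_FIELDS = {"decision_id", "timestamp", "action", "confidence", "rationale"}

def make_decision(action, parameters, confidence, rationale, trace_id=None):
    """Build a structured decision dict with a generated id and timestamp."""
    if not 0.0 <= confidence <= 1.0:
        raise ValueError("confidence must be in [0.0, 1.0]")
    return {
        "decision_id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "trace_id": trace_id or str(uuid.uuid4()),
        "action": action,
        "parameters": parameters,
        "confidence": confidence,
        "rationale": rationale,
    }

def is_valid_decision(decision):
    """Check that every required field is present in the record."""
    return REQUIRED_FIELDS.issubset(decision)
```

Generating `decision_id` and `timestamp` at construction time keeps every record traceable and log-ready by default.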

**Direct**: Quick, single-step reasoning for simple tasks.

```json
{
  "strategy": "direct",
  "input": "What time is it?",
  "action": "get_time",
  "confidence": 1.0
}
```

**Chain of Thought**: Step-by-step reasoning for complex problems.

```json
{
  "strategy": "chain_of_thought",
  "steps": [
    "User wants weather information",
    "Need location first",
    "Ask user for location",
    "Then call weather_api"
  ]
}
```

**Planning**: Breaking down complex goals into sequences.

```json
{
  "strategy": "planning",
  "goal": "Book a flight to Paris",
  "plan": [
    { "step": 1, "action": "search_flights", "params": {...} },
    { "step": 2, "action": "compare_prices", "params": {...} },
    { "step": 3, "action": "book_flight", "params": {...} }
  ]
}
```
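One way a reasoner might execute such a plan is sequentially, dispatching each step to a registered handler. This is an illustrative sketch; the `handlers` mapping and the result format are assumptions, not defined by the spec:

```python
# Sketch of sequential plan execution. `handlers` maps action names to
# callables and is purely illustrative (not part of the ARAL spec).
def execute_plan(plan, handlers):
    """Run plan steps in order, collecting results; stop on the first failure."""
    results = []
    for step in sorted(plan, key=lambda s: s["step"]):
        handler = handlers.get(step["action"])
        if handler is None:
            # Unknown action: record the error and abort the remaining steps.
            results.append({"step": step["step"], "status": "error",
                            "reason": "unknown action"})
            break
        output = handler(**step.get("params", {}))
        results.append({"step": step["step"], "status": "ok", "output": output})
    return results
```

Stopping at the first failed step keeps later steps (e.g. `book_flight`) from running on invalid intermediate state.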

**Self-Reflection**: Iterative improvement through feedback loops.

```json
{
  "strategy": "self_reflection",
  "iterations": [
    {
      "attempt": 1,
      "result": "insufficient_information",
      "reflection": "Need more context about user preferences"
    },
    {
      "attempt": 2,
      "result": "success",
      "reflection": "Better result after clarification"
    }
  ]
}
```

| Score | Meaning | Action |
|---|---|---|
| 0.9 - 1.0 | Very High | Proceed with action |
| 0.7 - 0.9 | High | Proceed, log uncertainty |
| 0.5 - 0.7 | Moderate | Request confirmation |
| 0.3 - 0.5 | Low | Seek clarification |
| 0.0 - 0.3 | Very Low | Decline or defer |
```json
{
  "confidence_factors": {
    "input_clarity": 0.9,
    "context_availability": 0.8,
    "capability_reliability": 0.95,
    "historical_success": 0.92
  },
  "final_confidence": 0.89
}
```
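The factors above can be combined and mapped onto the score bands from the table. A simple mean reproduces the `final_confidence` of 0.89 in this example, but the spec does not mandate a particular combination rule; the plain average here is an assumption:

```python
# Sketch of confidence aggregation and threshold mapping. The score bands
# follow the table above; the mean-of-factors rule is an assumption.
THRESHOLDS = [  # (lower bound, recommended handling)
    (0.9, "proceed"),
    (0.7, "proceed_log_uncertainty"),
    (0.5, "request_confirmation"),
    (0.3, "seek_clarification"),
    (0.0, "decline_or_defer"),
]

def final_confidence(factors):
    """Combine per-factor scores into one value (simple mean, 2 decimals)."""
    return round(sum(factors.values()) / len(factors), 2)

def recommended_action(confidence):
    """Map a confidence score onto the handling bands from the table."""
    for lower, action in THRESHOLDS:
        if confidence >= lower:
            return action
    return "decline_or_defer"
```

A weighted mean (e.g. weighting `capability_reliability` more heavily) would be a natural refinement once real decision logs are available.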

```json
{
  "decision": "delete_database",
  "persona_check": {
    "allowed": false,
    "reason": "Persona does not include 'database:write' permission",
    "alternative": "Request human approval for dangerous action"
  }
}
```

```json
{
  "decision": "send_email",
  "persona_constraint": {
    "require_confirmation": true,
    "reason": "High-risk action requires human approval",
    "prompt": "Confirm sending email to 500 recipients?"
  }
}
```
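A pre-execution Persona gate might look like the following sketch. The `persona` structure and permission names are illustrative assumptions; only the two outcomes (denial, confirmation required) come from the examples above:

```python
# Sketch of an L5 Persona gate checked before capability invocation.
# The persona dict layout and permission strings are hypothetical.
def check_persona(persona, action, required_permission):
    """Return a persona_check result like the examples above."""
    if required_permission not in persona.get("permissions", []):
        # Hard denial: the action is outside the persona's permissions.
        return {
            "allowed": False,
            "reason": f"Persona does not include '{required_permission}' permission",
        }
    if action in persona.get("require_confirmation", []):
        # Soft gate: allowed, but only after human approval.
        return {"allowed": True, "require_confirmation": True}
    return {"allowed": True}
```

Running this check before, not after, capability invocation is what keeps the "Respect Persona" requirement enforceable.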

```json
{
  "error": "ambiguous_input",
  "input": "it",
  "clarification_needed": "What does 'it' refer to?",
  "suggestions": [
    "the weather",
    "the document",
    "the previous result"
  ]
}
```

```json
{
  "error": "model_error",
  "message": "Inference model timeout",
  "fallback_action": "use_simpler_model",
  "retryable": true
}
```
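The `retryable` / `fallback_action` fields suggest a retry-then-fallback recovery loop. A minimal sketch, assuming caller-supplied `primary` and `fallback` inference functions (both hypothetical):

```python
# Sketch of retry-with-fallback error recovery for model timeouts.
# `primary` and `fallback` are caller-supplied inference callables.
def infer_with_fallback(primary, fallback, prompt, max_retries=2):
    """Try the primary model with retries, then fall back to a simpler model."""
    for attempt in range(max_retries):
        try:
            return {"result": primary(prompt), "model": "primary"}
        except TimeoutError:
            continue  # retryable error: try the primary model again
    # Retries exhausted: degrade gracefully instead of crashing.
    return {"result": fallback(prompt), "model": "fallback"}
```

This satisfies the "Error Recovery" requirement: a model timeout degrades the answer quality rather than crashing the agent.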

  1. Always Provide Rationale: Enable debugging and user trust
  2. Include Confidence Scores: Help users understand uncertainty
  3. Consider Alternatives: Document decision-making process
  4. Respect Constraints: Never bypass Persona restrictions
  5. Implement Timeouts: Prevent infinite reasoning loops
  6. Log Everything: Maintain audit trail for decisions
  7. Handle Ambiguity: Ask for clarification when needed
  8. Use Appropriate Strategy: Match strategy to task complexity

- **ReAct**: Reason + Act pattern for iterative problem-solving
- **Tree of Thoughts**: Explore multiple reasoning paths in parallel
- **Chain of Thought**: Step-by-step logical reasoning
- **Self-Consistency**: Multiple reasoning paths, majority vote
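The ReAct pattern above can be sketched as a loop that alternates reasoning and acting until the model emits a final answer. The `think` callable and `tools` registry are hypothetical stand-ins for the model and the L3 Capability layer:

```python
# Minimal ReAct-style loop: reason, act, observe, repeat.
# `think` returns {"action": ..., "input": ...}; both it and `tools`
# are illustrative stand-ins, not part of the ARAL spec.
def react_loop(think, tools, question, max_steps=5):
    """Alternate reasoning and acting; stop when a 'finish' step is produced."""
    observations = []
    for _ in range(max_steps):
        step = think(question, observations)
        if step["action"] == "finish":
            return step["input"]  # final answer
        # Act via a capability, then feed the observation back to the reasoner.
        observations.append(tools[step["action"]](step["input"]))
    return None  # step budget exhausted without an answer (timeout compliance)
```

The `max_steps` cap implements the "Implement Timeouts" best practice: the loop cannot reason forever.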



© 2026 IbIFACE — CC BY 4.0