# Layer 4: Reasoning
## Overview

The Reasoning Layer (L4) implements the agent's cognitive capabilities: how it thinks, plans, and makes decisions using LLMs or other AI models.
- **Decision Engine**: Core logic for making choices
- **Planning**: Breaking down goals into steps
- **LLM Integration**: Connecting to language models
- **Prompt Management**: Templates and prompt engineering
## Architecture

### Reasoning Engine
```typescript
interface ReasoningEngine {
  // Generate a response
  generate(prompt: string, context: Context): Promise<Response>

  // Plan actions
  plan(goal: Goal): Promise<Plan>

  // Make a decision
  decide(options: Option[]): Promise<Decision>

  // Reflect on output
  reflect(output: string): Promise<Reflection>
}
```
```typescript
class LLMReasoningEngine implements ReasoningEngine {
  constructor(
    private llm: LLMProvider,
    private promptManager: PromptManager
  ) {}

  async generate(prompt: string, context: Context): Promise<Response> {
    // Build the full prompt with context
    const fullPrompt = await this.promptManager.buildPrompt({
      system: context.systemPrompt,
      user: prompt,
      context: context.memory
    })

    // Call the LLM
    const response = await this.llm.complete({
      messages: fullPrompt,
      temperature: 0.7,
      maxTokens: 1000
    })

    return {
      content: response.content,
      reasoning: response.reasoning,
      confidence: response.confidence
    }
  }

  async plan(goal: Goal): Promise<Plan> {
    const prompt = await this.promptManager.getTemplate('planning', {
      goal: goal.description,
      constraints: goal.constraints,
      availableActions: goal.actions
    })

    const response = await this.llm.complete(prompt)

    // Parse the plan from the response
    return this.parsePlan(response.content)
  }
}
```

```python
from abc import ABC, abstractmethod

class ReasoningEngine(ABC):
    """Abstract reasoning engine"""

    @abstractmethod
    async def generate(self, prompt: str, context: Context) -> Response:
        """Generate a response to the prompt"""
        pass

    @abstractmethod
    async def plan(self, goal: Goal) -> Plan:
        """Create a plan to achieve the goal"""
        pass

class LLMReasoningEngine(ReasoningEngine):
    def __init__(self, llm: LLMProvider, prompt_manager: PromptManager):
        self.llm = llm
        self.prompt_manager = prompt_manager

    async def generate(self, prompt: str, context: Context) -> Response:
        # Build the full prompt with context
        full_prompt = await self.prompt_manager.build_prompt({
            'system': context.system_prompt,
            'user': prompt,
            'context': context.memory
        })

        # Call the LLM
        response = await self.llm.complete(
            messages=full_prompt,
            temperature=0.7,
            max_tokens=1000
        )

        return Response(
            content=response.content,
            reasoning=response.reasoning,
            confidence=response.confidence
        )
```
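Wiring the TypeScript engine up for a quick test can be done with stubbed dependencies. This sketch mirrors the `generate` flow above; the `StubProvider` and `StubPromptManager` classes and their shapes are assumptions for illustration, not part of any published API:

```typescript
// Stand-ins for the types used above; the shapes are assumptions for illustration.
interface LLMResponse { content: string; reasoning?: string; confidence?: number }

class StubProvider {
  // Returns a canned completion instead of calling a real model
  async complete(req: { messages: string; temperature: number; maxTokens: number }): Promise<LLMResponse> {
    return { content: `echo: ${req.messages}`, confidence: 0.9 }
  }
}

class StubPromptManager {
  // Concatenates the prompt parts into a single string
  async buildPrompt(parts: { system: string; user: string; context: unknown }): Promise<string> {
    return `${parts.system}\n${parts.user}`
  }
}

async function demo(): Promise<LLMResponse> {
  const promptManager = new StubPromptManager()
  const llm = new StubProvider()
  const fullPrompt = await promptManager.buildPrompt({
    system: 'You are a helpful assistant.',
    user: 'Hi',
    context: []
  })
  return llm.complete({ messages: fullPrompt, temperature: 0.7, maxTokens: 1000 })
}
```

Swapping the stubs for a real provider and prompt manager leaves the engine code unchanged, which is the point of injecting both through the constructor.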
### Chain-of-Thought Reasoning

```typescript
class ChainOfThoughtReasoner {
  async reason(problem: string): Promise<Solution> {
    const steps: ThoughtStep[] = []

    // Step 1: Understand the problem
    const understanding = await this.llm.complete(`
      Analyze this problem and break it down:
      ${problem}

      What are the key components?
    `)
    steps.push({ type: 'understanding', content: understanding })

    // Step 2: Generate an approach
    const approach = await this.llm.complete(`
      Given this understanding:
      ${understanding}

      What approach should we take?
    `)
    steps.push({ type: 'approach', content: approach })

    // Step 3: Execute the reasoning
    const solution = await this.llm.complete(`
      Using this approach:
      ${approach}

      Solve the problem step by step.
    `)
    steps.push({ type: 'solution', content: solution })

    // Step 4: Verify the solution
    const verification = await this.llm.complete(`
      Verify this solution:
      ${solution}

      Is it correct? Any issues?
    `)
    steps.push({ type: 'verification', content: verification })

    return {
      solution,
      reasoning: steps,
      confidence: this.calculateConfidence(verification)
    }
  }
}
```
### Prompt Templates

```typescript
class PromptManager {
  private templates = new Map<string, PromptTemplate>()

  register(name: string, template: PromptTemplate): void {
    this.templates.set(name, template)
  }

  async render(name: string, variables: Record<string, any>): Promise<string> {
    const template = this.templates.get(name)
    if (!template) throw new Error(`Template ${name} not found`)

    return template.render(variables)
  }
}

// Example templates
const PLANNING_TEMPLATE = `You are an AI agent planning to achieve a goal.

Goal: {{goal}}
Available Actions: {{actions}}
Constraints: {{constraints}}

Create a step-by-step plan to achieve this goal.
For each step, specify:
1. Action to take
2. Expected outcome
3. How it contributes to the goal

Output your plan in JSON format.`

const DECISION_TEMPLATE = `You need to make a decision.

Context: {{context}}
Options:
{{#each options}}
- {{this.name}}: {{this.description}}
  Pros: {{this.pros}}
  Cons: {{this.cons}}
{{/each}}

Analyze each option and choose the best one.
Explain your reasoning.`
```
## Configuration

```json
{
  "layers": {
    "reasoning": {
      "engine": "llm",
      "llm": {
        "provider": "openai",
        "model": "gpt-4-turbo",
        "temperature": 0.7,
        "maxTokens": 2000,
        "topP": 1.0
      },
      "planning": {
        "enabled": true,
        "maxDepth": 5,
        "replanOnFailure": true
      },
      "reflection": {
        "enabled": true,
        "minConfidence": 0.7
      },
      "prompts": {
        "systemPrompt": "You are a helpful AI assistant...",
        "templates": {
          "planning": "./prompts/planning.txt",
          "decision": "./prompts/decision.txt"
        }
      }
    }
  }
}
```
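A loader for this configuration might validate the fields it relies on before constructing the engine. The interface below mirrors the sample config; the validation rules themselves are assumptions for illustration:

```typescript
// Assumed shape of the "reasoning" section of the sample config
interface ReasoningConfig {
  engine: string
  llm: { provider: string; model: string; temperature: number; maxTokens: number; topP: number }
  planning: { enabled: boolean; maxDepth: number; replanOnFailure: boolean }
}

// Hypothetical validator: checks the fields the engine depends on
function validateReasoningConfig(raw: any): ReasoningConfig {
  const cfg = raw?.layers?.reasoning
  if (!cfg) throw new Error('missing layers.reasoning section')
  if (cfg.llm.temperature < 0 || cfg.llm.temperature > 2) {
    throw new Error('temperature must be in [0, 2]')
  }
  if (cfg.planning.maxDepth < 1) throw new Error('maxDepth must be >= 1')
  return cfg
}

const sample = {
  layers: {
    reasoning: {
      engine: 'llm',
      llm: { provider: 'openai', model: 'gpt-4-turbo', temperature: 0.7, maxTokens: 2000, topP: 1.0 },
      planning: { enabled: true, maxDepth: 5, replanOnFailure: true }
    }
  }
}
const cfg = validateReasoningConfig(sample)
```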
## Best Practices

### Prompt Engineering
✅ DO:
- Use clear, specific instructions
- Provide examples (few-shot learning)
- Include relevant context
- Structure output format
- Test prompts thoroughly
❌ DON’T:
- Use vague instructions
- Overload with context
- Assume model knowledge
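A prompt applying the DO list above (clear instruction, a few-shot example, explicit output format) might look like this; the task and wording are illustrative:

```typescript
// Illustrative prompt: specific instruction, one few-shot example,
// and an explicit JSON output format.
const CLASSIFY_PROMPT = [
  'Classify the sentiment of the user message as "positive", "negative", or "neutral".',
  '',
  'Example:',
  'Message: "I love this product!"',
  'Output: {"sentiment": "positive"}',
  '',
  'Message: "{{message}}"',
  'Output:'
].join('\n')
```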
### Planning
✅ DO:
- Break down complex goals
- Validate plans before execution
- Handle plan failures gracefully
- Learn from execution feedback
❌ DON’T:
- Create overly complex plans
- Execute without validation
- Ignore feedback
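Validating a plan before execution, as the list recommends, might check step count and known actions. The `PlanStep` shape and rules here are assumptions, not the engine's actual schema:

```typescript
// Assumed plan step shape for illustration
interface PlanStep { action: string; expectedOutcome: string }

// Hypothetical validator: rejects empty plans, overly deep plans, and unknown actions.
// Returns a list of problems; an empty list means the plan passed.
function validatePlan(steps: PlanStep[], availableActions: Set<string>, maxDepth = 5): string[] {
  const problems: string[] = []
  if (steps.length === 0) problems.push('plan has no steps')
  if (steps.length > maxDepth) problems.push(`plan exceeds max depth of ${maxDepth}`)
  for (const step of steps) {
    if (!availableActions.has(step.action)) problems.push(`unknown action: ${step.action}`)
  }
  return problems
}
```

Running this before execution gives a natural hook for the "replan on failure" behavior: a non-empty problem list triggers another planning pass instead of a doomed run.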
### Model Management
✅ DO:
- Cache responses when appropriate
- Implement fallback models
- Monitor token usage
- Track model performance
❌ DON’T:
- Depend on single model
- Ignore cost implications
- Skip error handling
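Response caching and model fallback from the list above can be sketched together; the wrapper below is an assumed design (in-memory cache keyed by prompt, providers tried in order), not a prescribed one:

```typescript
// A provider is just a function from prompt to completion
type Complete = (prompt: string) => Promise<string>

// Sketch: in-memory cache plus a fallback chain of providers.
// A production version would bound the cache and distinguish retryable errors.
function withCacheAndFallback(providers: Complete[]): Complete {
  const cache = new Map<string, string>()
  return async (prompt: string) => {
    const hit = cache.get(prompt)
    if (hit !== undefined) return hit
    let lastError: unknown
    for (const provider of providers) {
      try {
        const result = await provider(prompt)
        cache.set(prompt, result)
        return result
      } catch (err) {
        lastError = err // try the next provider in the chain
      }
    }
    throw lastError
  }
}
```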