LangGraphJS Foundation
StateGraph Inspiration
The system draws on LangGraphJS StateGraph concepts but simplifies the implementation for production requirements:

- Traditional LangGraphJS pattern: import StateGraph, call createSupervisor with an agents array and an LLM model, then compile the workflow with a MemorySaver checkpointer and an InMemoryStore.
- haus²⁵ adapted pattern: a direct PlannerSupervisor class with shared agents (RAG, Research, Memory, Blockchain) and specialized agents (Title, Description, Pricing, Schedule, Banner), with simplified coordination and no compilation overhead.

Supervisor Orchestration Pattern
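The adapted pattern above can be sketched as a supervisor that orchestrates agents directly. The class and agent names come from this document; the `Agent` interface, the `PlanContext` shape, and all method signatures are illustrative assumptions, not the actual implementation.

```typescript
// Sketch of the haus²⁵ adapted pattern: a supervisor that coordinates
// shared and specialized agents directly, without graph compilation.
// The Agent interface and PlanContext shape are assumptions.

interface PlanContext {
  userAddress: string;
  category: string;
  research?: string; // shared research result, prepared once
}

interface Agent {
  run(context: PlanContext): string;
}

class PlannerSupervisor {
  constructor(
    // Shared agents provide cross-cutting capabilities (RAG, Research, Memory, Blockchain).
    private shared: Record<string, Agent>,
    // Specialized agents each produce one part of the plan
    // (Title, Description, Pricing, Schedule, Banner).
    private specialized: Record<string, Agent>,
  ) {}

  // Direct orchestration: prepare shared context once, then fan out
  // to every specialized agent. No StateGraph compilation step.
  generatePlan(context: PlanContext): Record<string, string> {
    // Context preparation phase: a single research call shared by all agents.
    context.research = this.shared["research"].run(context);

    const plan: Record<string, string> = {};
    for (const [name, agent] of Object.entries(this.specialized)) {
      try {
        plan[name] = agent.run(context); // per-agent result tracking
      } catch (err) {
        plan[name] = `error: ${(err as Error).message}`; // simple error handling
      }
    }
    return plan;
  }
}
```

Because the shared research call runs once before the fan-out, every specialized agent reads the same context object rather than issuing its own research request.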
Hierarchical Coordination
Coordination Implementation
Context Preparation Phase:

Simplified Graph Compilation
No-Compilation Approach
Unlike LangGraphJS, which requires explicit graph compilation, haus²⁵ uses direct method orchestration:

- LangGraphJS approach: compile with createSupervisor, passing a checkpointer and store, then execute at runtime via compiledApp.invoke with a messages array.
- haus²⁵ simplified approach: a direct PlannerSupervisor with a generatePlan method, orchestratePlan execution, state tracking, and error handling, with no compilation overhead.

Multi-Level Hierarchy
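The multi-level hierarchy can be sketched as follows. CurationCoordinator and the three supervisor names come from this section; the capability sets, scope names, and the `route` method are illustrative assumptions showing one way scope-based routing could work.

```typescript
// Sketch of multi-level, scope-based supervision. The coordinator and
// supervisor names come from the system description; capability sets
// and the route() method are assumptions for illustration.

type Scope = "plan" | "promote" | "produce";

class SupervisorStub {
  constructor(public name: string, public capabilities: Set<string>) {}
}

class CurationCoordinator {
  private supervisors: Record<Scope, SupervisorStub>;

  constructor() {
    // Promoter inherits the Planner's capabilities plus promotional
    // content; Producer includes the full pipeline.
    const planning = ["title", "description", "pricing", "schedule", "banner"];
    const promoting = [...planning, "promo-content"];
    const producing = [...promoting, "production"];
    this.supervisors = {
      plan: new SupervisorStub("Planner", new Set(planning)),
      promote: new SupervisorStub("Promoter", new Set(promoting)),
      produce: new SupervisorStub("Producer", new Set(producing)),
    };
  }

  // Scope-based routing: pick the narrowest supervisor that covers the task.
  route(task: string): SupervisorStub {
    for (const scope of ["plan", "promote", "produce"] as Scope[]) {
      if (this.supervisors[scope].capabilities.has(task)) {
        return this.supervisors[scope];
      }
    }
    throw new Error(`no supervisor covers task: ${task}`);
  }
}
```

Routing to the narrowest covering supervisor keeps planning-only requests away from the heavier promotion and production pipelines.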
Scope-Based Supervision
CurationCoordinator: top-level coordination of three supervisors (Planner, Promoter, Producer). Routing is scope-based: Promoter inherits the Planner's capabilities plus promotional content, and Producer includes full pipeline coordination.

Memory Management Pattern
LangGraph-Inspired Persistence
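The two-tier persistence pattern can be sketched as follows. The class and method names (ShortTermMemory, setAgentState/getAgentState, LongTermMemory, persistPlanState/retrievePlanHistory) come from the description below; the `MemoryAgent` interface and state shapes are assumptions standing in for the real on-chain integration.

```typescript
// Sketch of the two memory tiers. Class and method names come from
// this section; the MemoryAgent interface and state shapes are
// assumptions standing in for the on-chain iteration system.

interface AgentState {
  data: unknown;
  timestamp: number;
}

// Equivalent to a LangGraph checkpointer: per-session, in-process cache.
class ShortTermMemory {
  private sessionCache = new Map<string, AgentState>();

  // Keys combine session and agent ids: "session:agent".
  setAgentState(sessionId: string, agentId: string, data: unknown): void {
    this.sessionCache.set(`${sessionId}:${agentId}`, { data, timestamp: Date.now() });
  }

  getAgentState(sessionId: string, agentId: string): AgentState | undefined {
    return this.sessionCache.get(`${sessionId}:${agentId}`);
  }
}

// Equivalent to a LangGraph store: durable persistence through the
// MemoryAgent, which writes plan iterations on-chain in the real system.
interface MemoryAgent {
  persist(key: string, state: unknown): void;
  history(key: string): unknown[];
}

class LongTermMemory {
  constructor(private memoryAgent: MemoryAgent) {}

  persistPlanState(planId: string, state: unknown): void {
    this.memoryAgent.persist(planId, state);
  }

  retrievePlanHistory(planId: string): unknown[] {
    return this.memoryAgent.history(planId);
  }
}
```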
- Short-term memory (equivalent to a LangGraph checkpointer): a ShortTermMemory class with a sessionCache Map and setAgentState/getAgentState methods using session:agent keys with timestamps.
- Long-term memory (equivalent to a LangGraph store): a LongTermMemory class with MemoryAgent integration and persistPlanState/retrievePlanHistory methods backed by the on-chain iteration system.

Performance Optimizations
Context Efficiency
Context optimization: a ContextOptimizedSupervisor with a contextCache Map, a 5-minute cache expiry, userAddress:category cache keys, and fallback to fresh context preparation on cache misses.

Cost Reduction Strategies
Token Optimization:

- Shared context preparation: a single research call shared across all agents
- Model selection by task: Gemini Flash for creative tasks, Gemini Lite for analytical tasks
- Result caching: 12-hour cache for category research data
- Context filtering: Agent-specific data extraction to reduce token usage
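The model-selection rule above can be sketched as a simple routing table. The Flash-for-creative, Lite-for-analytical split comes from this list; the task names, the `TaskKind` categories, and the model identifier strings are assumptions for illustration.

```typescript
// Sketch of model selection by task type. The routing rule comes from
// the cost-reduction list; task names, categories, and model identifier
// strings are assumptions.

type TaskKind = "creative" | "analytical";

// Creative tasks (title, description, banner) use the Flash model;
// analytical tasks (pricing, schedule) use the cheaper Lite model
// to reduce token cost.
const TASK_KINDS: Record<string, TaskKind> = {
  title: "creative",
  description: "creative",
  banner: "creative",
  pricing: "analytical",
  schedule: "analytical",
};

function selectModel(kind: TaskKind): string {
  return kind === "creative" ? "gemini-flash" : "gemini-lite";
}

function modelForTask(task: string): string {
  const kind = TASK_KINDS[task];
  if (!kind) throw new Error(`unknown task: ${task}`);
  return selectModel(kind);
}
```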
Related Documentation
- System Overview - Overall multi-agent architecture
- Shared Agents - Foundation agents used by supervisors
- On-Chain Iteration System - State persistence implementation