The Lighthouse Storage service monitors live streams and automatically uploads video content to permanent decentralized storage, ensuring performances are preserved forever without relying on centralized platforms. Built on Lighthouse, it processes video streams into 60-second chunks and provides verifiable Proof of Data Possession (PDP) receipts.
Documentation Index
Fetch the complete documentation index at: https://docs.haus25.live/llms.txt
Use this file to discover all available pages before exploring further.
Architecture Overview
Storage Pipeline
Component Integration
Real-time Processing (see the wiring sketch after this list):
- HLS Monitor: Watches the SRS output directory using chokidar
- Video Processor: Combines segments into optimized chunks
- Upload Service: Handles Lighthouse communication and retry logic
- Metadata Service: Compiles manifests and creates IPFS backups
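The list above can be pictured as four small services handing work off in sequence. Below is a minimal TypeScript sketch of that wiring; the interface names and signatures are assumptions, not the actual source:

```typescript
// Illustrative component contracts; names and signatures are assumptions.
interface HLSMonitor {
  // Emits a batch of .ts segment paths once 6 segments (60s) are buffered
  onChunkReady(handler: (segmentPaths: string[]) => void): void;
}

interface VideoProcessor {
  // Combines buffered segments into a single optimized chunk file
  createChunk(segmentPaths: string[]): Promise<string>;
}

interface UploadService {
  // Uploads a chunk to Lighthouse with retry logic and returns its CID
  upload(chunkPath: string): Promise<string>;
}

interface MetadataService {
  // Records the chunk CID in the event manifest and backs it up to IPFS
  recordChunk(eventId: string, chunkIndex: number, cid: string): Promise<void>;
}
```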
Service Architecture
Core Components
StorageService: Initializes the HLSMonitor, VideoProcessor, Lighthouse, and MetadataService components. Its startLivestreamStorage method configures the monitoring and processing pipeline for a specified eventId and creator, preparing storage space before segments arrive.
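A hypothetical usage of this entry point, assuming positional eventId and creator arguments and an illustrative import path (neither is confirmed by the source):

```typescript
import { StorageService } from "./services/storage"; // path is an assumption

const storage = new StorageService();

// Prepares storage space and starts the monitoring/processing pipeline
// for one event's HLS output.
await storage.startLivestreamStorage("event-123", "0xCreatorAddress");
```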
HLS Monitoring
File System Watching: The HLSMonitor class attaches a chokidar watcher to the event-specific hlsPath, processes incoming .ts segments, and buffers 6 segments (60 seconds) before triggering chunk creation via the videoProcessor.
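A minimal sketch of that buffering logic, assuming ~10-second HLS segments and stand-in names (hlsPath, videoProcessor) for the real configuration:

```typescript
import chokidar from "chokidar";

// Stand-in for the real VideoProcessor instance (assumption).
declare const videoProcessor: { createChunk(paths: string[]): Promise<string> };

const hlsPath = "/var/hls/event-123"; // event-specific path (assumption)
const SEGMENTS_PER_CHUNK = 6; // 6 x ~10s segments = 60 seconds
const buffer: string[] = [];

const watcher = chokidar.watch(`${hlsPath}/*.ts`, {
  ignoreInitial: true, // only react to segments written after startup
  awaitWriteFinish: { stabilityThreshold: 2000, pollInterval: 100 },
});

watcher.on("add", async (segmentPath) => {
  buffer.push(segmentPath);
  if (buffer.length >= SEGMENTS_PER_CHUNK) {
    const segments = buffer.splice(0, SEGMENTS_PER_CHUNK);
    await videoProcessor.createChunk(segments); // triggers chunk creation
  }
});
```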
Video Processing Pipeline
Chunk Creation
FFmpeg Integration:
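The source does not show the exact FFmpeg invocation, so the sketch below is a plausible configuration: it concatenates buffered .ts segments into one optimized 60-second chunk via FFmpeg's concat demuxer, with flags chosen for size and playback compatibility rather than copied from the project:

```typescript
import { execFile } from "node:child_process";
import { writeFile } from "node:fs/promises";
import { promisify } from "node:util";

const run = promisify(execFile);

// Combine .ts segments into a single optimized chunk (sketch).
async function createChunk(segments: string[], outPath: string): Promise<void> {
  // concat demuxer needs a list file: one `file '<path>'` line per segment
  const listPath = `${outPath}.txt`;
  await writeFile(listPath, segments.map((s) => `file '${s}'`).join("\n"));

  await run("ffmpeg", [
    "-f", "concat", "-safe", "0", "-i", listPath,
    "-c:v", "libx264", "-preset", "veryfast", "-crf", "23", // size/quality trade-off
    "-c:a", "aac", "-b:a", "128k",
    "-movflags", "+faststart", // index up front for progressive playback
    outPath,
  ]);
}
```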
Quality Optimization
Encoding Presets (a selection sketch follows this list):
- CPU usage monitoring to adjust encoding presets
- File size optimization for efficient storage costs
- Format standardization for consistent playback
- Metadata preservation during transcoding
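How preset adjustment might look in practice: a small selector that downgrades to faster (lower-compression) presets as CPU load rises. The thresholds and preset names here are illustrative assumptions:

```typescript
import os from "node:os";

// Pick an x264 preset from the normalized 1-minute load average (sketch).
function pickEncodingPreset(): string {
  const load = os.loadavg()[0] / os.cpus().length;
  if (load > 0.9) return "ultrafast"; // shed CPU under pressure
  if (load > 0.7) return "veryfast";
  return "medium"; // better compression when headroom exists
}
```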
Metadata Management
Chunk Metadata Structure
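The source does not include the schema itself, so the shape below is an illustrative reconstruction consistent with the pipeline described above; every field name is an assumption:

```typescript
// Hypothetical per-chunk metadata record (field names are assumptions).
interface ChunkMetadata {
  eventId: string;      // event this chunk belongs to
  chunkIndex: number;   // sequential position in the stream
  startTime: number;    // offset from stream start, in seconds
  duration: number;     // nominally 60 seconds
  cid: string;          // Lighthouse/IPFS content identifier
  size: number;         // bytes, after optimization
  pdpReceipt?: string;  // Proof of Data Possession receipt, when available
}
```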
API Interface
Storage Control Endpoints
Start Monitoring:
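A hedged example of kicking off monitoring over the control API; the route, port, and payload shape are assumptions based on the startLivestreamStorage description, not documented endpoints:

```typescript
// Hypothetical control endpoint (route and payload are assumptions).
await fetch("http://localhost:3000/storage/start", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ eventId: "event-123", creator: "0xCreatorAddress" }),
});
```

Real-time Status Updates
Status updates could plausibly be consumed over a WebSocket; the endpoint and message shape below are illustrative assumptions:

```typescript
import WebSocket from "ws";

// Hypothetical status feed (endpoint and fields are assumptions).
const ws = new WebSocket("ws://localhost:3000/storage/status/event-123");

ws.on("message", (data) => {
  const status = JSON.parse(data.toString());
  // e.g. { chunksUploaded: 12, queueLength: 2, lastCid: "bafy..." }
  console.log(status);
});
```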
Performance Considerations
Scalability Metrics
Processing Capacity:
- Single instance: Handles 10-15 concurrent streams
- CPU utilization: ~70% during peak encoding
- Memory usage: ~2GB for video buffer management
- Disk I/O: Sequential writes optimized for SSD
Upload Optimization:
- Upload batching: Multiple chunks uploaded in parallel
- Retry logic: Exponential backoff for failed uploads (a sketch follows this list)
- Bandwidth management: Adaptive upload speeds based on available bandwidth
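A minimal sketch of the retry behavior named above, assuming a stand-in upload() function and illustrative backoff constants:

```typescript
// Stand-in for the real Lighthouse upload call (assumption).
declare const upload: (chunkPath: string) => Promise<string>;

// Retry failed uploads with exponential backoff (sketch).
async function uploadWithRetry(chunkPath: string, maxAttempts = 5): Promise<string> {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await upload(chunkPath); // resolves to the chunk's CID
    } catch (err) {
      if (attempt === maxAttempts - 1) throw err; // out of attempts
      const delayMs = 1000 * 2 ** attempt; // 1s, 2s, 4s, 8s, ...
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
  throw new Error("unreachable");
}
```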
Cost Optimization
Storage Efficiency:
- Compression ratios: 60-70% size reduction through optimization
- Deduplication: Identical chunks stored once across events
- Lifecycle management: Automatic cleanup of temporary files
- Storage costs: ~$0.10 per GB per year
- Retrieval costs: Minimal for CDN-cached content
- Deal optimization: Batch uploads for better pricing
Error Handling
Stream Interruptions:
- Graceful handling of mid-stream disconnections
- Partial chunk processing for incomplete segments (a flush sketch follows this list)
- Recovery mechanisms for resumed streams
- Manual intervention tools for edge cases
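One way partial chunk processing could work, reusing the stand-in names from the monitoring sketch above; this is an assumption, not the service's actual recovery path:

```typescript
// Stand-ins mirroring the monitoring sketch (assumptions).
declare const buffer: string[];
declare const videoProcessor: { createChunk(paths: string[]): Promise<string> };

// Flush an incomplete buffer when a stream disconnects mid-chunk,
// so the tail of a performance is still preserved (sketch).
async function onStreamEnded(): Promise<void> {
  if (buffer.length === 0) return;
  const partial = buffer.splice(0, buffer.length); // fewer than 6 segments
  await videoProcessor.createChunk(partial); // shorter than 60s, still uploaded
}
```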
Monitoring Dashboard
Key Metrics (an illustrative payload shape follows the list):
- Active streams being processed
- Upload queue length and processing time
- Storage usage and available capacity
- Error rates and retry statistics
- Network bandwidth utilization
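An illustrative shape for a metrics payload backing such a dashboard; every field name is an assumption matching the list above:

```typescript
// Hypothetical dashboard metrics record (field names are assumptions).
interface StorageMetrics {
  activeStreams: number;        // streams currently being processed
  uploadQueueLength: number;    // chunks waiting to upload
  avgProcessingMs: number;      // mean time from segment to uploaded chunk
  storageUsedBytes: number;     // current usage
  storageCapacityBytes: number; // available capacity
  errorRate: number;            // failed uploads / total uploads
  retryCount: number;           // retries since startup
  bandwidthBps: number;         // current upload bandwidth utilization
}
```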
Related Documentation
- SRS - Streaming infrastructure that feeds the storage service
- Room - How stored content integrates with live experiences
- Compression - Video optimization techniques used in processing