
Mobile-optimized multi-stage LLM with federated persistent cognitive architecture

Grant US12579437B2 · Kind: B2 · Granted Mar 17, 2026

Assignee

ATOMBEAM TECHNOLOGIES INC.

Inventors

Brian Galvin, Alan McCord

Abstract

A system and method for extending mobile-optimized multi-stage language model processing with federated persistent cognitive architecture. The system processes prompts through a first large language model to generate “thoughts,” which are cached and processed with the original prompt through a smaller language model. Building upon the three-tier thought caching, the system implements a federated multi-tier hierarchy with local device, domain-specific branch, and global collective caches. A federated cognitive orchestrator coordinates operations across multiple domain-specialized instances, managing thought routing, state synchronization, and cross-domain knowledge sharing while maintaining domain boundaries. During user inactivity, autonomous reasoning continues in cloud environments, generating insights from existing thoughts and interaction history. The system performs memory consolidation, thought cache optimization, and cross-domain pattern recognition without consuming mobile device resources, while maintaining privacy boundaries. This persistent cognitive architecture functions as an evolving reasoning partner rather than merely a responsive tool.
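The abstract describes a two-stage pipeline in which a larger model generates cached "thoughts" that a smaller model then combines with the original prompt, backed by a three-tier cache (local device, domain branch, global collective). The following Python sketch illustrates that flow under stated assumptions: every name (`TieredThoughtCache`, `large_model_think`, `small_model_answer`, `respond`) is hypothetical, and the stub functions stand in for real model calls; this is not the patented implementation.

```python
from dataclasses import dataclass, field

# Illustrative sketch only: a three-tier thought cache with nearest-first
# lookup and promotion toward the device, plus the two-stage respond flow.
# All identifiers are assumptions for illustration.

@dataclass
class TieredThoughtCache:
    local: dict = field(default_factory=dict)    # on-device tier
    branch: dict = field(default_factory=dict)   # domain-specific tier
    global_: dict = field(default_factory=dict)  # global collective tier

    def get(self, key):
        # Check tiers nearest-first; promote a hit to the local tier
        # so subsequent lookups stay on-device.
        for tier in (self.local, self.branch, self.global_):
            if key in tier:
                thought = tier[key]
                self.local[key] = thought
                return thought
        return None

    def put(self, key, thought):
        self.local[key] = thought


def large_model_think(prompt):
    # Stand-in for the first, larger language model that generates
    # intermediate "thoughts" for a prompt.
    return f"thoughts({prompt})"


def small_model_answer(prompt, thoughts):
    # Stand-in for the smaller model that processes the cached thoughts
    # together with the original prompt.
    return f"answer({prompt}, {thoughts})"


def respond(cache, prompt):
    # On a cache miss, invoke the larger model and cache its thoughts;
    # either way, the smaller model produces the final response.
    thoughts = cache.get(prompt)
    if thoughts is None:
        thoughts = large_model_think(prompt)
        cache.put(prompt, thoughts)
    return small_model_answer(prompt, thoughts)
```

In this reading, only cache misses pay the cost of the larger model, and promotion keeps frequently reused thoughts in the on-device tier, which matches the abstract's mobile-optimization framing.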

CPC Classifications

G06N 3/082

Filing Date

2025-08-07

Application No.

19294125

Claims

20