Google LLC Patent for Distributed Computing Pipeline Processing
Summary
The USPTO has granted Google LLC a patent (US12585495B2) for a distributed computing pipeline processing system. The patent covers methods for processing computational graphs on distributed devices, including assigning operations to computing devices and hardware accelerators for parallel execution.
What changed
The United States Patent and Trademark Office (USPTO) has granted Google LLC patent US12585495B2 for a method of distributed computing pipeline processing. The patent details a system for processing computational graphs across multiple distributed computing devices and hardware accelerators. Key aspects include assigning initial (pre-processing) operations to computing devices and subsequent operations to interconnected hardware accelerators, which receive their inputs from queues managed by those computing devices, with the entire pipeline executing in parallel across both sets of hardware.
This grant adds new intellectual property to Google's portfolio in advanced computing infrastructure. While a patent is not a regulatory rule, it represents innovation in distributed systems that could influence future technology development and adoption. Compliance officers in the technology sector, particularly those working with AI and cloud computing infrastructure, should be aware of this patent, as it may affect competitive landscapes and technology roadmaps.
Source document (simplified)
Distributed computing pipeline processing
Grant: US12585495B2 · Kind: B2 · Granted: Mar 24, 2026
Assignee
Google LLC
Inventors
Rohan Anil, Battulga Bayarsaikhan, Ryan P. Doherty, Emanuel Taropa
Abstract
Methods, systems, and apparatus, including computer programs encoded on computer storage media, for processing computational graphs on distributed computing devices. One of the methods includes receiving a request to execute a processing pipeline comprising (i) first operations that transform raw inputs into pre-processed inputs and (ii) second operations that operate on the pre-processed inputs; and in response: assigning the first operations to two or more of a plurality of computing devices; assigning the second operations to one or more hardware accelerators of a plurality of hardware accelerators, wherein each hardware accelerator is interconnected with the plurality of computing devices and configured to (i) receive inputs from respective queues of the two or more computing devices assigned the first operations and (ii) perform the second operations on the received pre-processed inputs; and executing, in parallel, the processing pipeline on the two or more computing devices and the one or more hardware accelerators.
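The pipeline structure the abstract describes can be illustrated with a minimal sketch: "computing devices" run the first (pre-processing) operations and push results into per-device queues, while an "accelerator" consumer drains those queues and applies the second operations, all running in parallel. Every name and function below is hypothetical, chosen only to mirror the abstract's wording; the patent does not specify any particular API, and the specific operations shown are placeholders.

```python
import queue
import threading

NUM_DEVICES = 2
SENTINEL = None  # marks the end of a device's output stream

def first_ops(raw):
    """First operations: transform a raw input into a pre-processed input (placeholder)."""
    return raw * 2

def second_ops(pre):
    """Second operations: operate on a pre-processed input (placeholder)."""
    return pre + 1

def computing_device(raw_inputs, out_q):
    """One 'computing device': pre-processes its shard into its own queue."""
    for raw in raw_inputs:
        out_q.put(first_ops(raw))
    out_q.put(SENTINEL)  # signal that this device has finished

def accelerator(in_queues, results):
    """One 'hardware accelerator': pulls pre-processed inputs from every
    device's queue and applies the second operations until all are done."""
    pending = set(range(len(in_queues)))
    while pending:
        for i in list(pending):
            item = in_queues[i].get()  # blocks until the device produces
            if item is SENTINEL:
                pending.discard(i)
            else:
                results.append(second_ops(item))

def run_pipeline(raw_inputs):
    """Shard the raw inputs across devices, run devices and the accelerator
    in parallel, and collect the accelerator's outputs."""
    queues = [queue.Queue() for _ in range(NUM_DEVICES)]
    shards = [raw_inputs[i::NUM_DEVICES] for i in range(NUM_DEVICES)]
    results = []
    devices = [
        threading.Thread(target=computing_device, args=(shards[i], queues[i]))
        for i in range(NUM_DEVICES)
    ]
    acc = threading.Thread(target=accelerator, args=(queues, results))
    for t in devices:
        t.start()
    acc.start()
    for t in devices:
        t.join()
    acc.join()
    return sorted(results)

if __name__ == "__main__":
    # Each raw input x becomes x * 2 (first ops), then + 1 (second ops).
    print(run_pipeline([1, 2, 3, 4]))  # → [3, 5, 7, 9]
```

The queue-per-device layout mirrors the claim language: the accelerator is "interconnected" with all computing devices and receives inputs from their "respective queues", so neither stage waits for the other to finish an entire batch before starting.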
CPC Classifications
G06F 9/4881 G06F 9/5066 G06F 9/5044 G06F 9/505 G06F 2209/483 G06F 2209/509 G06N 3/098 G06N 3/045 G06N 3/08 G06N 3/04
Filing Date
2020-03-06
Application No.
17909680
Claims
19