Neural network accelerator with a configurable pipeline
Summary
The USPTO granted Patent US12596919B2 to Imagination Technologies Limited for a neural network accelerator featuring a configurable hardware pipeline. The invention comprises multiple hardware processing units for accelerating neural network operations on data tensors, interconnected via a crossbar that selectively forms one of several configurable processing pipelines. The patent, naming inventors Javier Sanchez and Alan Vines, contains 20 claims covering the pipeline architecture and its selectable processing orders.
What changed
The USPTO issued Patent US12596919B2, granting Imagination Technologies Limited exclusive rights to a neural network accelerator with a configurable pipeline architecture. The patent covers hardware processing units that perform neural network operations on tensors, together with a crossbar that enables dynamic pipeline configuration, where the processing order is selected based on the formed pipeline topology.
Technology companies developing AI accelerators, neural processing units, or related hardware should review the patent claims to assess potential licensing needs or design-around considerations. The patent's broad coverage of configurable pipeline architecture in neural network hardware may affect product development strategies for competing implementations in AI acceleration technology.
What to do next
- Monitor for related filings (continuations, divisionals) and post-grant proceedings
- Review the patent's 20 claims for infringement risk and design-around options
Source document (simplified)
Neural network accelerator with a configurable pipeline
Grant: US12596919B2 (Kind: B2), granted Apr 07, 2026
Assignee: Imagination Technologies Limited
Inventors: Javier Sanchez, Alan Vines
Abstract
A neural network accelerator that has a configurable hardware pipeline includes a plurality of hardware processing units, each hardware processing unit comprising hardware to accelerate performing one or more neural network operations on a tensor of data; and a crossbar coupled to each hardware processing unit of the plurality of hardware processing units, the crossbar configured to selectively form, from a plurality of selectable pipelines, a pipeline from one or more of the hardware processing units of the plurality of hardware processing units to process input data to the neural network accelerator. At least one of the hardware processing units is configurable to transmit or receive a tensor via the crossbar in a selected processing order of a plurality of selectable processing orders, and the selected processing order is based on the pipeline formed by the crossbar.
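The architecture described in the abstract can be illustrated with a small software model. This is a hypothetical sketch, not the patent's actual design: the class names, the `supported_orders` attribute, and the order-selection rule are all illustrative assumptions standing in for hardware behavior.

```python
from typing import Callable, Iterable

class ProcessingUnit:
    """Models a hardware unit that accelerates one neural network operation.

    `supported_orders` is an illustrative stand-in for the selectable
    processing orders (e.g. the traversal order in which tensor elements
    are streamed in or out via the crossbar).
    """
    def __init__(self, name: str,
                 op: Callable[[list], list],
                 supported_orders: tuple = ("row-major", "channel-major")):
        self.name = name
        self.op = op
        self.supported_orders = supported_orders

    def process(self, tensor: list) -> list:
        return self.op(tensor)

class Crossbar:
    """Selectively forms one pipeline out of the available units."""
    def __init__(self, units: dict):
        self.units = units

    def form_pipeline(self, unit_names: Iterable[str]) -> list:
        # Only the named units are wired into the pipeline; the rest
        # are bypassed for this configuration.
        return [self.units[n] for n in unit_names]

    def select_order(self, pipeline: list) -> str:
        # Illustrative rule: the processing order depends on the formed
        # pipeline -- here, pick an order every unit in it supports.
        common = set(pipeline[0].supported_orders)
        for unit in pipeline[1:]:
            common &= set(unit.supported_orders)
        return sorted(common)[0]

def run(crossbar: Crossbar, unit_names: list, tensor: list):
    """Configure a pipeline via the crossbar, then stream a tensor through it."""
    pipeline = crossbar.form_pipeline(unit_names)
    order = crossbar.select_order(pipeline)
    for unit in pipeline:
        tensor = unit.process(tensor)
    return tensor, order
```

For example, forming a two-stage pipeline from a ReLU-like unit followed by a scaling unit and streaming `[-1.0, 3.0]` through it yields `[0.0, 6.0]`, with the traversal order chosen from those both units support.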
CPC Classifications: G06N 3/063, G06N 3/0464, G06N 3/048, G06N 3/045, G06F 9/5027
Filing Date: 2022-09-30
Application No.: 17957044
Claims: 20