Accelerator for deep neural networks
Summary
The USPTO granted Patent US12596918B2 to Samsung Electronics Co., Ltd. for a neural network accelerator designed to reduce ineffectual computations by processing only non-zero neurons using offsets. The system uses computation tiles, an activation memory, a dispatcher, and an encoder to streamline deep neural network layer processing. The grant gives Samsung exclusive rights to the claimed acceleration technology.
What changed
The USPTO issued Patent US12596918B2, titled 'Accelerator for deep neural networks', to Samsung Electronics Co., Ltd. The patent covers a system, integrated circuit, and method for reducing ineffectual computations in neural network layer processing by using offsets to identify and process only non-zero neurons, with an optional extension that also skips ineffectual synapse operations.
For manufacturers and technology companies developing neural network hardware or AI accelerators, this patent represents Samsung's intellectual property position in efficient neural network computation. Competitors developing similar accelerator technologies may need to consider licensing or design-around strategies to avoid infringement. The patent strengthens Samsung's portfolio in AI hardware and could influence competitive dynamics in the semiconductor and AI chip markets.
What to do next
- Monitor for updates
Source document (simplified)
Accelerator for deep neural networks
Grant US12596918B2 Kind: B2 Apr 07, 2026
Assignee
Samsung Electronics Co., Ltd.
Inventors
Patrick Judd, Jorge Albericio, Alberto Delmas Lascorz, Andreas Moshovos, Sayeh Sharifymoghaddam
Abstract
Described is a system, integrated circuit and method for reducing ineffectual computations in the processing of layers in a neural network. One or more tiles perform computations where each tile receives input neurons, offsets and synapses, and where each input neuron has an associated offset. Each tile generates output neurons, and there is also an activation memory for storing neurons in communication with the tiles via a dispatcher and an encoder. The dispatcher reads neurons from the activation memory and communicates the neurons to the tiles and reads synapses from a memory and communicates the synapses to the tiles. The encoder receives the output neurons from the tiles, encodes them and communicates the output neurons to the activation memory. The offsets are processed by the tiles in order to perform computations only on non-zero neurons. Optionally, synapses may be similarly processed to skip ineffectual operations.
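To make the offset mechanism concrete, here is a minimal sketch of how an encoder might pair each non-zero neuron with its position, and how a tile could then multiply-accumulate over only those entries. The function names (encode_neurons, tile_dot) and the flat dot-product formulation are illustrative assumptions, not details from the patent claims.

```python
def encode_neurons(activations):
    """Encoder sketch: keep only non-zero neurons, each paired with
    its offset (position) in the original dense activation vector."""
    return [(v, i) for i, v in enumerate(activations) if v != 0]

def tile_dot(encoded, synapses):
    """Tile sketch: multiply-accumulate over non-zero neurons only,
    using each neuron's offset to select the matching synapse."""
    acc = 0
    for value, offset in encoded:
        acc += value * synapses[offset]
    return acc

# Sparse post-activation values (zeros are ineffectual for a dot product)
activations = [0, 3, 0, 0, 5, 0, 2, 0]
synapses    = [1, 2, 3, 4, 5, 6, 7, 8]  # weights for one output neuron

encoded = encode_neurons(activations)   # [(3, 1), (5, 4), (2, 6)]
dense   = sum(a * w for a, w in zip(activations, synapses))
assert tile_dot(encoded, synapses) == dense  # 3 multiplies instead of 8
```

The sparse path performs three multiply-accumulates instead of eight while producing the same result, which is the efficiency the abstract describes; the optional synapse-skipping extension would apply the same idea to zero-valued weights.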
CPC Classifications
G06N 3/02 G06N 3/04 G06N 3/082 G06N 3/084 G06N 3/063 G06N 3/0454 G06N 3/0481
Filing Date
2022-06-22
Application No.
17846837
Claims
20