USPTO Patent US12585928B2: Hardware for Neural Network Activation Sparsity
Summary
The USPTO has granted patent US12585928B2 to Numenta, Inc. for a hardware accelerator designed to efficiently introduce activation sparsity in neural networks. This innovation aims to improve the performance and efficiency of artificial intelligence hardware.
What changed
The United States Patent and Trademark Office (USPTO) has granted patent US12585928B2, titled 'Hardware architecture for introducing activation sparsity in neural network,' to Numenta, Inc. The patent describes a hardware accelerator designed to efficiently perform computations for sparse neural networks by reducing the number of active values in output activation tensors, potentially using a K-winner approach. This technology is intended to enhance the efficiency of AI accelerators.
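The K-winner approach mentioned in the patent can be illustrated in software. The sketch below is a minimal NumPy rendering of the idea as described in the abstract (keep the K largest activation values, zero the rest); the function name and the use of NumPy are illustrative assumptions, not the patented hardware implementation.

```python
import numpy as np

def k_winner(activations: np.ndarray, k: int) -> np.ndarray:
    """Keep only the k largest values of an activation tensor; zero the rest.

    Illustrative sketch of the K-winner sparsification described in the
    patent abstract, not the hardware design itself.
    """
    flat = activations.ravel()
    if k >= flat.size:
        return activations.copy()
    # Indices of the k largest entries (unordered), via partial selection.
    top_k = np.argpartition(flat, -k)[-k:]
    sparse = np.zeros_like(flat)
    sparse[top_k] = flat[top_k]
    return sparse.reshape(activations.shape)

out = k_winner(np.array([0.1, 0.9, 0.3, 0.7, 0.2]), k=2)
# Only the two largest activations (0.9 and 0.7) remain active.
```

Performing this step in dedicated hardware, rather than in software after the fact, is what allows downstream computations to skip the zeroed values and save work.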
This patent grant is a non-binding notice of intellectual property protection. While it does not impose direct compliance obligations on regulated entities, it signifies a technological advancement in AI hardware that may influence future industry standards or product development. Companies operating in the AI and hardware sectors should be aware of this patented technology, particularly concerning its potential impact on the design and optimization of neural network hardware.
Source document (simplified)
Hardware architecture for introducing activation sparsity in neural network
Grant US12585928B2 · Kind: B2 · Mar 24, 2026
Assignee
Numenta, Inc.
Inventors
Kevin Lee Hunter, Subutai Ahmad
Abstract
A hardware accelerator that is efficient at performing computations related to a sparse neural network. The sparse neural network may be associated with a plurality of nodes. An artificial intelligence (AI) accelerator stores, at a memory circuit, a weight tensor and an input activation tensor that corresponds to a node of the neural network. The AI accelerator performs a computation such as convolution between the weight tensor and the input activation tensor to generate an output activation tensor. The AI accelerator introduces sparsity to the output activation tensor by reducing the number of active values in the output activation tensor. The sparsity activation may be a K-winner approach, which selects the K-largest values in the output activation tensor and sets the remaining values to zero.
CPC Classifications
G06N 3/063 G06N 3/045 G06N 3/048 G06N 3/084 G06F 7/08
Filing Date
2021-05-27
Application No.
17332295
Claims
20