
USPTO Patent US12585928B2: Hardware for Neural Network Activation Sparsity

ChangeBridge: Patent Grants - AI & Computing (G06N)
Published March 24th, 2026
Detected March 25th, 2026

Summary

The USPTO has granted patent US12585928B2 to Numenta, Inc. for a hardware accelerator designed to efficiently introduce activation sparsity in neural networks. This innovation aims to improve the performance and efficiency of artificial intelligence hardware.

What changed

The United States Patent and Trademark Office (USPTO) has granted patent US12585928B2, titled 'Hardware architecture for introducing activation sparsity in neural network,' to Numenta, Inc. The patent describes a hardware accelerator designed to efficiently perform computations for sparse neural networks by reducing the number of active values in output activation tensors, potentially using a K-winner approach. This technology is intended to enhance the efficiency of AI accelerators.

This patent grant is a non-binding notice of intellectual property protection. While it does not impose direct compliance obligations on regulated entities, it signifies a technological advancement in AI hardware that may influence future industry standards or product development. Companies operating in the AI and hardware sectors should be aware of this patented technology, particularly concerning its potential impact on the design and optimization of neural network hardware.

Source document (simplified)


Hardware architecture for introducing activation sparsity in neural network

Grant US12585928B2 Kind: B2 Mar 24, 2026

Assignee

Numenta, Inc.

Inventors

Kevin Lee Hunter, Subutai Ahmad

Abstract

A hardware accelerator that is efficient at performing computations related to a sparse neural network. The sparse neural network may be associated with a plurality of nodes. An artificial intelligence (AI) accelerator stores, at a memory circuit, a weight tensor and an input activation tensor that corresponds to a node of the neural network. The AI accelerator performs a computation such as convolution between the weight tensor and the input activation tensor to generate an output activation tensor. The AI accelerator introduces sparsity to the output activation tensor by reducing the number of active values in the output activation tensor. The sparsity may be introduced by a K-winner approach, which selects the K largest values in the output activation tensor and sets the remaining values to zero.
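The K-winner selection described in the abstract can be sketched in a few lines of Python. This is a minimal software illustration of the selection rule only, not the patented hardware datapath; the function name and flat-list representation are assumptions for the example.

```python
import heapq

def k_winner(activations, k):
    """Keep the k largest values in a flat activation list; zero the rest.

    A software sketch of K-winner selection: pick the indices of the k
    largest entries, then zero out every other position.
    """
    if k >= len(activations):
        return list(activations)
    # Indices of the k largest entries in the output activation tensor.
    winners = set(heapq.nlargest(k, range(len(activations)),
                                 key=activations.__getitem__))
    return [v if i in winners else 0.0 for i, v in enumerate(activations)]

# Example: an output activation tensor with only the k = 2 largest kept.
out = [0.1, 0.9, 0.4, 0.7]
print(k_winner(out, k=2))  # -> [0.0, 0.9, 0.0, 0.7]
```

In hardware, the same effect is typically achieved with a sorting or thresholding network rather than a full sort, since only membership in the top K matters, not the ordering of the winners.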

CPC Classifications

G06N 3/063; G06N 3/045; G06N 3/048; G06N 3/084; G06F 7/08

Filing Date

2021-05-27

Application No.

17332295

Claims

20


Named provisions

Abstract; CPC Classifications

Classification

Agency
USPTO
Published
March 24th, 2026
Instrument
Notice
Legal weight
Non-binding
Stage
Final
Change scope
Minor
Document ID
US12585928B2

Who this affects

Applies to
Technology companies
Industry sector
3341 Computer & Electronics Manufacturing; 5112 Software & Technology
Activity scope
AI Hardware Design; Neural Network Optimization
Geographic scope
United States (US)

Taxonomy

Primary area
Artificial Intelligence
Operational domain
IT Security
Topics
Machine Learning; Hardware Design
