
Hardware architecture for introducing activation sparsity in neural network

Grant US12585928B2 Kind: B2 Mar 24, 2026

Assignee

Numenta, Inc.

Inventors

Kevin Lee Hunter, Subutai Ahmad

Abstract

A hardware accelerator that is efficient at performing computations related to a sparse neural network. The sparse neural network may be associated with a plurality of nodes. An artificial intelligence (AI) accelerator stores, at a memory circuit, a weight tensor and an input activation tensor that corresponds to a node of the neural network. The AI accelerator performs a computation such as convolution between the weight tensor and the input activation tensor to generate an output activation tensor. The AI accelerator introduces sparsity to the output activation tensor by reducing the number of active values in the output activation tensor. The sparsity may be introduced using a K-winner approach, which selects the K largest values in the output activation tensor and sets the remaining values to zero.
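The K-winner selection described in the abstract can be sketched in software as follows. This is a minimal, illustrative Python sketch of the general K-winner idea (keep the K largest activations, zero the rest), not the patented hardware implementation; the function name `k_winner` and the tie-breaking behavior (ties at the threshold are kept) are assumptions for illustration.

```python
def k_winner(activations, k):
    """Sketch of K-winner sparsification: keep the k largest values,
    set all other values to zero.

    Illustrative only; a hardware accelerator would realize this with
    dedicated sorting/selection circuits rather than a Python sort.
    """
    if k >= len(activations):
        # Nothing to sparsify: every value is a "winner".
        return list(activations)
    # Threshold at the k-th largest value; values tied with the
    # threshold are also kept in this simple sketch.
    threshold = sorted(activations, reverse=True)[k - 1]
    return [v if v >= threshold else 0.0 for v in activations]


# Example: a flattened output activation tensor with k = 2 winners.
out = k_winner([0.1, 0.9, 0.3, 0.7, 0.2, 0.5], 2)
print(out)  # → [0.0, 0.9, 0.0, 0.7, 0.0, 0.0]
```

The same selection applies element-wise to an output activation tensor of any shape once it is flattened; only the K winning positions carry nonzero values into subsequent layers.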

CPC Classifications

G06N 3/063 G06N 3/045 G06N 3/048 G06N 3/084 G06F 7/08

Filing Date

2021-05-27

Application No.

17332295

Claims

20