Layer-wise Precision Optimization in Analog Compute-in-Memory Accelerators
Summary
The USPTO has published Intel Corporation's patent application US20260093972A1, which describes a layer-wise precision optimization method for analog compute-in-memory (ACiM) accelerators for neural networks. The invention selectively allocates neural network layers to either digital compute-in-memory (DCiM) or analog compute-in-memory (ACiM) circuits based on signal sensitivity and statistical weight distribution criteria.
What changed
The patent application describes a method for optimizing computational efficiency in neural network inference by dynamically assigning layers to either digital or analog compute-in-memory circuitry. The system uses a combined heuristic evaluating two conditions: (1) the number of input channels meeting a signal sensitivity criterion, and (2) statistical properties of the weight distribution meeting a statistical sensitivity criterion. Layers are allocated to DCiM when both conditions are satisfied, while ACiM is used if either or both conditions are not met. Inventors include Shamik Kundu, Arnab Raha, Richard Dorrance, Deepak Abraham Mathaikutty, and Brent Carlton.
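The allocation rule described above can be sketched in a few lines. This is a minimal illustration, not the patented implementation: the threshold values, the per-layer attributes, and the function names are hypothetical, since the application describes the criteria only generically.

```python
from dataclasses import dataclass

@dataclass
class Layer:
    name: str
    sensitive_input_channels: int  # channels meeting the signal sensitivity criterion
    weight_std: float              # a statistical property of the weight distribution

# Hypothetical thresholds; the filing does not publish concrete values.
CHANNEL_THRESHOLD = 32
WEIGHT_STD_THRESHOLD = 0.5

def assign_backend(layer: Layer) -> str:
    """Allocate a layer to DCiM only when BOTH criteria are met; otherwise ACiM."""
    signal_sensitive = layer.sensitive_input_channels >= CHANNEL_THRESHOLD
    statistically_sensitive = layer.weight_std >= WEIGHT_STD_THRESHOLD
    return "DCiM" if signal_sensitive and statistically_sensitive else "ACiM"
```

Note that the asymmetry matters: DCiM (higher precision, lower efficiency) is reserved for layers that fail on both sensitivity axes to stay analog, so a single unmet condition is enough to keep a layer on the more efficient ACiM path.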
This is a published patent application with no immediate compliance obligations. Technology companies and semiconductor manufacturers developing AI accelerators should note this intellectual property filing when designing compute-in-memory architectures. The application (No. 19413190) was filed December 9, 2025, and published April 2, 2026.
Archived snapshot
Apr 2, 2026: GovPing captured this document from the original source. If the source has since changed or been removed, this is the text as it existed at that time.
LAYER-WISE PRECISION OPTIMIZATION IN ANALOG COMPUTE-IN-MEMORY ACCELERATORS
Application US20260093972A1 Kind: A1 Apr 02, 2026
Assignee
Intel Corporation
Inventors
Shamik Kundu, Arnab Raha, Richard Dorrance, Deepak Abraham Mathaikutty, Brent Carlton
Abstract
It is not optimal to apply analog compute-in-memory circuitry (ACiM) for all layers of a neural network or to apply digital compute-in-memory (DCiM) circuitry for all layers of the neural network, due to the tradeoff between efficiency and precision. To address this challenge, a layer-wise offloading strategy can selectively execute neural network layers using either DCiM circuitry or ACiM circuitry based on signal and statistical sensitivity conditions. The approach leverages a combined heuristic, incorporating both the number of input channels meeting a signal sensitivity criterion and the statistical properties of the weight distribution meeting a statistical sensitivity criterion. Layers are allocated to DCiM when both conditions are satisfied, while layers are allocated to ACiM if either or both conditions are not met. The approach optimizes computational efficiency by dynamically assigning resources according to input characteristics and distributional metrics.
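The abstract names two sensitivity metrics without defining them concretely. As a hedged sketch, plausible proxies might look like the following; both functions and their thresholds are assumptions for illustration, not terms of the filing.

```python
import statistics

def count_sensitive_channels(channel_ranges, range_threshold=1.0):
    """Hypothetical signal-sensitivity proxy: count input channels whose
    dynamic range exceeds a threshold (the filing only says channels must
    meet 'a signal sensitivity criterion')."""
    return sum(1 for r in channel_ranges if r > range_threshold)

def weight_distribution_stat(weights):
    """Hypothetical statistical-sensitivity proxy: population standard
    deviation of the layer's weights, standing in for the generically
    named 'statistical properties of the weight distribution'."""
    return statistics.pstdev(weights)
```

A layer with many wide-range input channels and a broad weight distribution would then satisfy both conditions and be routed to the higher-precision DCiM path.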
CPC Classifications
G06N 3/065 G06F 15/7821
Filing Date
2025-12-09
Application No.
19413190