Computing Core Accelerator for Parallel Matrix Computation
Summary
USPTO published patent application US20260111365A1 filed November 20, 2023 (Application No. 19154296), covering a computing core and accelerator for parallel matrix computation. The invention enables simultaneous matrix multiplication of N rows or columns from two matrices, obtaining N final results in one operation. The technology eliminates the need for intermediate result storage and reduces memory-access overhead by determining optimal memory placement for subsequent computing rounds. CPC classifications include G06F 12/0877, G06F 13/28, G06F 17/16, and G06N 3/063. Inventors: Rengang LI, Hongbin YANG, Gang DONG, Dongdong JIANG, Qichun CAO, Kekun HU.
“Disclosed in the present application are a computing core, an accelerator, a computing method and apparatus, a device, a non-volatile readable storage medium, and a system in the technical field of computers.”
About this source
USPTO classification G06N covers computer systems based on specific computational models: neural networks, knowledge representation, fuzzy logic, expert systems, evolutionary algorithms. With the AI patent boom, this is one of the most-filed application classes at the office. Every newly published application in G06N lands in this feed, around 230 a month. Patent applications publish 18 months after filing, so this feed reveals what AI labs and companies were working on in the prior year and a half. Watch this if you compete in machine learning, conduct freedom-to-operate analyses, scout acquisition targets in AI infrastructure, or track which research groups are converting publications to patents. GovPing pulls each application with the filing number, title, applicant, and abstract.
What changed
USPTO published patent application US20260111365A1 on April 23, 2026, covering a computing core, accelerator, computing method, and apparatus for parallel matrix computation. The invention processes N rows of data in a first matrix with N columns of data in a second matrix in a single parallel operation, obtaining N final matrix multiplication results simultaneously. The computing core eliminates the need to temporarily store intermediate results, saving on-chip resources.
Technology companies developing AI accelerators, high-performance computing hardware, or matrix-computation systems should review this application for prior-art awareness and potential licensing considerations. The claimed approach to determining memory placement based on subsequent participation in matrix operations may represent a notable architectural approach in parallel computing systems.
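The application text does not include code; as a rough illustration of the batched-computation idea (not the claimed hardware design), obtaining all final matrix multiplication results of N rows of one matrix and N columns of another in a single operation might look like this in NumPy, with hypothetical dimensions:

```python
import numpy as np

# Hypothetical sizes; the application does not specify dimensions.
N, K = 4, 8
A = np.arange(N * K, dtype=np.float64).reshape(N, K)   # first matrix: N rows
B = np.arange(K * N, dtype=np.float64).reshape(K, N)   # second matrix: N columns

# One batched operation yields all final results at once; the caller
# never materializes intermediate partial sums.
C = A @ B

# Scalar view of the same computation: each entry is the complete
# dot product of one row of A with one column of B.
assert np.allclose(C[1, 2], np.dot(A[1, :], B[:, 2]))
```

The claimed core performs this in parallel hardware rather than in software, but the input/output relationship is the same: N rows against N columns, all final results produced in one pass.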
Archived snapshot
Apr 24, 2026. GovPing captured this document from the original source. If the source has since changed or been removed, this is the text as it existed at that time.
COMPUTING CORE, ACCELERATOR, COMPUTING METHOD AND APPARATUS, DEVICE, NON-VOLATILE READABLE STORAGE MEDIUM, AND SYSTEM
Application US20260111365A1 Kind: A1 Apr 23, 2026
Inventors
Rengang LI, Hongbin YANG, Gang DONG, Dongdong JIANG, Qichun CAO, Kekun HU
Abstract
Disclosed in the present application are a computing core, an accelerator, a computing method and apparatus, a device, a non-volatile readable storage medium, and a system in the technical field of computers. According to the present application, parallel computing can be performed to obtain matrix multiplication results of N rows of data or N columns of data in a first matrix and N columns of data or N rows of data in a second matrix, so that N final matrix multiplication results may be obtained at one time, improving computing efficiency and speed. The computing core does not need to temporarily store an intermediate result, and on-chip resources are saved. According to the present application, after the matrix multiplication results are obtained, the memory into which they are stored can be determined according to the manner in which they will participate in each round of computing in the next matrix multiplication operation, so that the storage format of the results in memory is consistent with their output format when participating in computing. Data can then be read in sequence during continuous computing, and no matrix transposition needs to be carried out. Therefore, the time overhead of accessing memory is reduced and efficiency is improved.
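The placement idea described above — choose a storage layout that matches how the result will be read in the next round, so no transpose is needed — can be sketched in NumPy terms. The `next_use` flag and the function name are illustrative assumptions, not terminology from the application:

```python
import numpy as np

def store_for_next_round(result, next_use):
    """Store `result` so the next matrix multiplication reads it
    sequentially. `next_use` is a hypothetical flag: 'left' means the
    result will supply rows next round (row-major / C order keeps row
    reads contiguous); 'right' means it will supply columns
    (column-major / F order keeps column reads contiguous)."""
    order = "C" if next_use == "left" else "F"
    return np.asarray(result, order=order)

A = np.ones((4, 8))
B = np.ones((8, 4))
C = A @ B

# If C will be the right-hand operand in the next round, its columns
# are consumed, so column-major storage makes those reads sequential
# and no explicit transpose is required.
C_stored = store_for_next_round(C, next_use="right")
assert C_stored.flags["F_CONTIGUOUS"]
```

The sketch pays the layout cost once at store time; the claimed hardware instead decides the destination memory up front so the result never needs rearranging at all.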
CPC Classifications
G06F 12/0877 G06F 13/28 G06F 17/16 G06N 3/063
Filing Date
2023-11-20
Application No.
19154296
About this page
Source document text, dates, docket IDs, and authority are extracted directly from USPTO.
The summary, classification, recommended actions, deadlines, and penalty information are AI-generated from the original text and may contain errors. Always verify against the source document.