
Microsoft Patent: Neural Network Training Precision Adjustment

Source: ChangeBridge: Patent Grants - AI & Computing (G06N)
Published March 24th, 2026
Detected March 25th, 2026

Summary

The USPTO has granted Microsoft Technology Licensing, LLC, patent US12585926B2 for a method of adjusting numerical precision and topology during neural network training based on performance metrics. This patent covers techniques for optimizing training efficiency and accuracy using block floating-point formats and accelerator hardware.

What changed

USPTO patent US12585926B2, granted to Microsoft Technology Licensing, LLC, details methods for adjusting precision and topology parameters during neural network training. The patent describes using lower-accuracy block floating-point formats in early training stages, increasing precision as training progresses based on performance metrics, and transforming values to normal precision floating-point formats. It also mentions the use of accelerator hardware with direct support for block floating-point formats.
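The core idea described above, raising numerical precision once a tracked performance metric stops improving, can be sketched in a few lines. The function below is a hypothetical illustration, not the patented method: the choice of metric, the patience window, and the bit-width increment are all assumptions made for the example.

```python
def precision_schedule(metric_history, bits, max_bits=16, patience=3, tol=1e-3):
    """Hypothetical rule: if the tracked metric (e.g. validation accuracy)
    has improved by less than `tol` over the last `patience` epochs,
    widen the mantissa by one bit, up to `max_bits`."""
    if len(metric_history) < patience + 1:
        return bits  # not enough history to judge stagnation
    recent = metric_history[-(patience + 1):]
    improvement = max(recent[1:]) - recent[0]
    if improvement < tol and bits < max_bits:
        return bits + 1  # metric plateaued: increase precision
    return bits
```

A training loop would call this once per epoch and re-quantize weights and activations at the returned width; the patent's claims cover the general mechanism, not any specific threshold.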

This patent grant is a routine event for a technology company and imposes no new regulatory obligations on other entities. Compliance officers should note it as an intellectual-property development in AI and machine-learning hardware and software, one that may bear on future technology development and licensing considerations in the sector.

Source document (simplified)


Adjusting precision and topology parameters for neural network training based on a performance metric

Grant US12585926B2 Kind: B2 Mar 24, 2026

Assignee

Microsoft Technology Licensing, LLC

Inventors

Bita Darvish Rouhani, Eric S. Chung, Daniel Lo, Douglas C. Burger

Abstract

Apparatus and methods for training neural networks based on a performance metric, including adjusting numerical precision and topology as training progresses are disclosed. In some examples, block floating-point formats having relatively lower accuracy are used during early stages of training. Accuracy of the floating-point format can be increased as training progresses based on a determined performance metric. In some examples, values for the neural network are transformed to normal precision floating-point formats. The performance metric can be determined based on entropy of values for the neural network, accuracy of the neural network, or by other suitable techniques. Accelerator hardware can be used to implement certain implementations, including hardware having direct support for block floating-point formats.
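For readers unfamiliar with the format: a block floating-point representation stores one shared exponent per block of values and a reduced-width mantissa per value, which is what lets early training stages run at "relatively lower accuracy." The quantizer below is a minimal illustrative sketch, assuming a power-of-two shared exponent taken from each block's largest magnitude; it is not the encoding claimed in the patent.

```python
import math

def bfp_quantize(values, mantissa_bits, block_size=16):
    """Quantize a list of floats to a block floating-point format:
    each block of `block_size` values shares one exponent, and each
    value keeps roughly `mantissa_bits` bits of mantissa."""
    out = []
    for start in range(0, len(values), block_size):
        block = values[start:start + block_size]
        max_mag = max(abs(v) for v in block)
        if max_mag == 0.0:
            out.extend(block)  # all-zero block needs no quantization
            continue
        # Shared exponent: power of two covering the block's largest value.
        shared_exp = math.floor(math.log2(max_mag))
        # Step size implied by the reduced mantissa width.
        scale = 2.0 ** (shared_exp - (mantissa_bits - 1))
        out.extend(round(v / scale) * scale for v in block)
    return out
```

Widening `mantissa_bits` as training progresses shrinks the quantization error per block, which is the precision-adjustment lever the abstract describes; the entropy- or accuracy-based metric decides when to widen it.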

CPC Classifications

G06F 2207/4824 G06F 7/483 G06F 9/30025 G06F 18/217 G06K 9/6262 G06N 3/0445 G06N 3/0472 G06N 3/0481 G06N 3/063 G06N 3/082 G06N 3/084 G06N 3/044 G06N 3/047 G06N 3/048 G06N 3/0442 G06N 3/0464 G06N 3/0495 G06N 3/09

Filing Date

2018-12-31

Application No.

16237308

Claims

20


Classification

Agency
USPTO
Published
March 24th, 2026
Instrument
Notice
Legal weight
Non-binding
Stage
Final
Change scope
Minor
Document ID
US12585926B2

Who this affects

Applies to
Technology companies
Industry sector
3341 Computer & Electronics Manufacturing
5112 Software & Technology
Activity scope
AI Model Training
Geographic scope
United States (US)

Taxonomy

Primary area
Artificial Intelligence
Operational domain
IT Security
Topics
Machine Learning, Computer Hardware
