Microsoft Patent: Neural Network Training Precision Adjustment
Summary
The USPTO has granted patent US12585926B2 to Microsoft Technology Licensing, LLC, for a method of adjusting numerical precision and topology during neural network training based on performance metrics. The patent covers techniques for improving training efficiency and accuracy using block floating-point formats and accelerator hardware.
What changed
USPTO patent US12585926B2, granted to Microsoft Technology Licensing, LLC, details methods for adjusting precision and topology parameters during neural network training. The patent describes using lower-accuracy block floating-point formats in early training stages, increasing precision as training progresses based on performance metrics, and transforming values to normal precision floating-point formats. It also mentions the use of accelerator hardware with direct support for block floating-point formats.
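The core idea of a block floating-point (BFP) format is that a block of values shares a single exponent, with each value stored as a narrow fixed-width mantissa; widening the mantissa raises precision, and dequantizing recovers normal-precision floats. The sketch below is illustrative only, assuming a simple shared-exponent scheme with symmetric saturation; it is not the patent's actual encoding, whose details the grant does not fully specify here.

```python
import math

def to_bfp(values, mantissa_bits):
    """Quantize a block of floats to a toy block floating-point format:
    one shared exponent for the whole block, fixed-width integer mantissas."""
    max_abs = max(abs(v) for v in values)
    if max_abs == 0.0:
        return 0, [0] * len(values)
    # Choose the shared exponent so the largest magnitude fits the mantissa width.
    shared_exp = math.floor(math.log2(max_abs)) - (mantissa_bits - 1)
    scale = 2.0 ** shared_exp
    limit = 2 ** mantissa_bits - 1  # saturate mantissas to the representable range
    mantissas = [max(-limit, min(limit, round(v / scale))) for v in values]
    return shared_exp, mantissas

def from_bfp(shared_exp, mantissas):
    """Transform BFP values back to normal-precision floats."""
    scale = 2.0 ** shared_exp
    return [m * scale for m in mantissas]

block = [0.75, -0.125, 0.5, 0.0625]
exp, mants = to_bfp(block, mantissa_bits=4)
restored = from_bfp(exp, mants)  # exact here, since all values fit 4 mantissa bits
```

Training with fewer mantissa bits early on, then widening them as training progresses, trades per-value fidelity for cheaper arithmetic and storage, which is what makes BFP attractive on accelerator hardware.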
This patent grant is a routine event for a technology company and does not impose new regulatory obligations on other entities. Compliance officers should note it as a development in AI and machine learning hardware and software that may affect future technology development and intellectual-property considerations in the sector.
Source document (simplified)
Adjusting precision and topology parameters for neural network training based on a performance metric
Grant: US12585926B2
Kind: B2
Granted: Mar 24, 2026
Assignee
Microsoft Technology Licensing, LLC
Inventors
Bita Darvish Rouhani, Eric S. Chung, Daniel Lo, Douglas C. Burger
Abstract
Apparatus and methods for training neural networks based on a performance metric, including adjusting numerical precision and topology as training progresses are disclosed. In some examples, block floating-point formats having relatively lower accuracy are used during early stages of training. Accuracy of the floating-point format can be increased as training progresses based on a determined performance metric. In some examples, values for the neural network are transformed to normal precision floating-point formats. The performance metric can be determined based on entropy of values for the neural network, accuracy of the neural network, or by other suitable techniques. Accelerator hardware can be used to implement certain implementations, including hardware having direct support for block floating-point formats.
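The abstract mentions a performance metric based on the entropy of the network's values: intuitively, when a low-precision format's value distribution carries more information than the format can represent, precision should increase. The sketch below is a minimal illustration of that idea, assuming a histogram-based Shannon entropy and an illustrative threshold, step size, and bit cap; none of these specifics come from the patent.

```python
import math
from collections import Counter

def entropy(values, num_bins=16):
    """Shannon entropy (in bits) of a histogram of the values."""
    lo, hi = min(values), max(values)
    width = (hi - lo) or 1.0
    bins = Counter(min(num_bins - 1, int((v - lo) / width * num_bins))
                   for v in values)
    n = len(values)
    return -sum((c / n) * math.log2(c / n) for c in bins.values())

def next_mantissa_bits(values, current_bits, threshold=2.5, step=2, max_bits=23):
    """Widen the mantissa when the value distribution's entropy suggests the
    current low-precision format may be discarding useful information."""
    if entropy(values) > threshold and current_bits < max_bits:
        return min(max_bits, current_bits + step)
    return current_bits
```

A training loop would call `next_mantissa_bits` periodically on sampled weights or activations; the same hook could instead use validation accuracy, which the abstract names as an alternative metric.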
CPC Classifications
G06F 2207/4824 G06F 7/483 G06F 9/30025 G06F 18/217 G06K 9/6262 G06N 3/0445 G06N 3/0472 G06N 3/0481 G06N 3/063 G06N 3/082 G06N 3/084 G06N 3/044 G06N 3/047 G06N 3/048 G06N 3/0442 G06N 3/0464 G06N 3/0495 G06N 3/09
Filing Date
2018-12-31
Application No.
16237308
Claims
20