Distributed Training of Compressed Machine Learning Models
Summary
USPTO published patent application US20260099761A1 for an apparatus enabling distributed training of compressed machine learning models using quantization-based parameter compression. The invention manages compressed ML model parameters across distributed systems by cycling between compression and decompression states, allowing efficient parameter synchronization over networks. The application was filed on October 7, 2024.
What changed
The published application discloses an apparatus for distributed training of compressed machine learning models. The system stores ML model parameters at a first, compressed precision, decompresses them to a second, higher precision for training operations, then recompresses the updated parameters using quantization for network transmission to a server. A controller manages this cycling between the two precision levels.
Technology companies developing distributed machine learning training systems should review the disclosed compression techniques for potential licensing implications or to assess freedom-to-operate. The patent covers arithmetic circuits, memory configurations, and network interface controllers managing compressed parameter synchronization across distributed systems.
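The precision cycling described above can be illustrated with a generic uniform quantizer. This is a minimal sketch of quantization-based parameter compression in general, not the specific method claimed in the application; the function names and 8-bit width are illustrative assumptions.

```python
import numpy as np

def quantize(params, num_bits=8):
    """Compress float32 parameters to a low-precision integer form.

    Uniform symmetric quantization: values are scaled into the signed
    integer range and rounded. Only the int8 array and one float scale
    need to be stored or sent over the network."""
    qmax = 2 ** (num_bits - 1) - 1  # 127 for 8 bits
    scale = max(float(np.max(np.abs(params))) / qmax, 1e-12)
    q = np.clip(np.round(params / scale), -qmax, qmax).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Decompress back to float32 (an approximation of the original)."""
    return q.astype(np.float32) * scale

params = np.array([0.5, -1.2, 0.03, 2.4], dtype=np.float32)
q, scale = quantize(params)
restored = dequantize(q, scale)
# round-trip error is bounded by half a quantization step (scale / 2)
```

Storing or transmitting the int8 array plus a single scale factor is roughly a 4x reduction over float32, which is the kind of bandwidth saving that motivates compressed parameter synchronization.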
What to do next
- Monitor for patent grant status
- Review claims for potential infringement exposure if developing similar distributed ML training systems
Archived snapshot
Apr 12, 2026: GovPing captured this document from the original source. If the source has since changed or been removed, this is the text as it existed at that time.
DISTRIBUTED TRAINING OF COMPRESSED MACHINE LEARNING MODELS
Application US20260099761A1 (Kind: A1), published Apr 09, 2026
Inventors
Yaniv Ben-Izhak, Shay Vargaftik
Abstract
An example apparatus includes a hardware platform having arithmetic circuits and a memory, the memory configured to store, at a first precision, first compressed parameters of a machine learning (ML) model; a network interface controller; and a controller, supported by the hardware platform, configured to: decompress, from the memory through an increase in precision to a second precision, the first compressed parameters to obtain decompressed parameters; control the arithmetic circuits to train, using arithmetic operations, the ML model and update the decompressed parameters; compress, using quantization and reduction in precision to the first precision, the decompressed parameters as updated to obtain second compressed parameters; send, using the network interface controller, the second compressed parameters over a network to a server; and update the first compressed parameters in the memory in response to data received, through the network interface controller, from the server over the network.
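As a hedged illustration of the workflow the abstract describes (store parameters compressed, decompress to train, recompress, send to a server, then update the stored copy), the sketch below simulates one synchronization round with two workers. The helper names, learning rate, and the averaging server are illustrative assumptions, not the application's claimed implementation.

```python
import numpy as np

def quantize(p, bits=8):
    """Reduce precision: float32 -> int8 plus a per-tensor scale."""
    qmax = 2 ** (bits - 1) - 1
    scale = max(float(np.max(np.abs(p))) / qmax, 1e-12)
    return np.clip(np.round(p / scale), -qmax, qmax).astype(np.int8), scale

def dequantize(q, scale):
    """Increase precision: int8 -> float32 approximation."""
    return q.astype(np.float32) * scale

def worker_round(q, scale, grad, lr=0.1):
    """One local step: decompress stored parameters, train, recompress."""
    p = dequantize(q, scale)   # raise precision for arithmetic
    p -= lr * grad             # update at the higher precision
    return quantize(p)         # compress again before transmission

def server_aggregate(updates):
    """Server decompresses each worker's update, averages, recompresses."""
    avg = np.mean([dequantize(q, s) for q, s in updates], axis=0)
    return quantize(avg.astype(np.float32))

params = np.array([1.0, -0.5, 0.25], dtype=np.float32)
q0, s0 = quantize(params)  # parameters held compressed in memory
u1 = worker_round(q0, s0, np.array([0.2, -0.1, 0.0], dtype=np.float32))
u2 = worker_round(q0, s0, np.array([0.0, 0.1, 0.2], dtype=np.float32))
q_new, s_new = server_aggregate([u1, u2])
# each worker then overwrites its stored compressed parameters
# with (q_new, s_new), completing one compress/decompress cycle
```

Note that both directions of network traffic carry only the compressed representation; full precision exists only transiently, inside the training arithmetic.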
CPC Classifications
G06N 20/00
Filing Date
2024-10-07
Application No.
18908601
About this page
Source document text, dates, docket IDs, and authority are extracted directly from USPTO.
The summary, classification, recommended actions, deadlines, and penalty information are AI-generated from the original text and may contain errors. Always verify against the source document.