Authority-Based LLM Training Patent Application US20260099716A1
Summary
USPTO published patent application US20260099716A1 by inventors Anna Luti and Paolo Antinori, covering an authority-based training process for large language models. The system generates data quality metrics and authority scores for training samples to dynamically adjust weights during LLM training. The application was filed on October 3, 2024, under CPC classification G06N 3/0895.
What changed
USPTO published patent application US20260099716A1 for an authority-based training process for large language models. The invention involves generating data quality metrics for training samples across multiple topics, assigning authority scores based on those metrics, and dynamically adjusting training weights using a loss function to improve LLM accuracy.
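The described mechanism — per-sample quality metrics combined into authority scores that weight each sample's contribution to the loss — could be sketched roughly as below. This is a hypothetical illustration only: the metric names, the score formula, and the weighting scheme are assumptions for demonstration, not the method claimed in the application.

```python
# Illustrative sketch of authority-weighted training loss.
# Metric names and weights are invented for this example; the patent
# application does not disclose these specifics in its abstract.

def authority_score(metrics: dict) -> float:
    """Combine per-sample data quality metrics into one authority
    score in [0, 1] via a weighted average (illustrative weights)."""
    weights = {"source_reputation": 0.5, "citation_count": 0.3, "freshness": 0.2}
    return sum(w * metrics.get(name, 0.0) for name, w in weights.items())

def authority_weighted_loss(per_sample_losses: list, scores: list) -> float:
    """Scale each sample's loss by its authority score, so samples with
    higher authority contribute more to the gradient, then normalize."""
    total = sum(scores)
    return sum(l * s for l, s in zip(per_sample_losses, scores)) / total

# Two samples: one from a reputable, well-cited source, one not.
metrics_batch = [
    {"source_reputation": 0.9, "citation_count": 0.8, "freshness": 0.5},
    {"source_reputation": 0.2, "citation_count": 0.1, "freshness": 0.9},
]
scores = [authority_score(m) for m in metrics_batch]   # [0.79, 0.31]
loss = authority_weighted_loss([1.0, 3.0], scores)
```

In a real training loop the scores would modulate the loss function's per-sample weights at each step, as the abstract describes, rather than being applied in a single batch average.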
For technology companies, AI developers, and academic institutions working on machine learning, this publication may raise intellectual-property considerations around LLM training methodologies. Organizations developing similar training processes should review the claims to assess potential freedom-to-operate implications.
What to do next
- Monitor for patent issuance
Archived snapshot
GovPing captured this document from the original source on Apr 11, 2026. If the source has since changed or been removed, this is the text as it existed at that time.
AUTHORITY-BASED TRAINING PROCESS FOR A LARGE LANGUAGE MODEL
Application US20260099716A1 · Kind: A1 · Published Apr 09, 2026
Inventors
Anna Luti, Paolo Antinori
Abstract
An authority-based training process for a large language model is provided. The process can involve generating corresponding sets of data quality metrics for each sample in a training dataset. The training dataset can encompass a group of topics. The process can also involve generating a corresponding set of authority scores for each sample based on the corresponding sets of data quality metrics. Each authority score can indicate a respective authority level of the sample in relation to a particular topic of the group of topics. The process can further involve training the large language model using a loss function that includes a set of weights. During training, the set of weights can be dynamically adjusted based on the corresponding set of authority scores for each sample in the training dataset. This can produce a large language model that is more accurate than may otherwise be possible.
CPC Classifications
G06N 3/0895
Filing Date
2024-10-03
Application No.
18905542
About this page
Source document text, dates, docket IDs, and authority are extracted directly from USPTO.
The summary, classification, recommended actions, deadlines, and penalty information are AI-generated from the original text and may contain errors. Always verify against the source document.