RAG Content Quality Evaluation Method Using Large Language Models
Summary
USPTO published patent application US20260099693A1 titled 'Content Quality Evaluation for Retrieval Augmented Generation Systems.' The patent covers a method for objectively evaluating content output by RAG systems that uses large language models to generate evaluation metrics and presents comparative quality data across multiple RAG system configurations.
What changed
The application discloses a method for evaluating content output by RAG systems: obtaining question-answer information for data chunks in a source index, prompting an LLM to generate answer construct conditions, deriving question-specific evaluation metrics from those conditions, evaluating answers produced by multiple differently configured RAG systems, and presenting comparative quality data.
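The steps just described can be sketched as a small pipeline. This is a hedged illustration only: the LLM calls are replaced with a keyword-matching stand-in, and all function names (`generate_conditions`, `question_specific_metric`, `compare_rag_systems`) are placeholders, not terms from the patent.

```python
def generate_conditions(question, ground_truth):
    # Stand-in for "prompt an LLM to generate answer construct conditions":
    # here, each key term of the ground-truth answer becomes a condition
    # that a good answer should satisfy.
    return [term for term in ground_truth.lower().split() if len(term) > 3]

def question_specific_metric(answer, conditions):
    # Question-specific evaluation metric: the fraction of conditions
    # the candidate answer satisfies.
    hits = sum(1 for c in conditions if c in answer.lower())
    return hits / len(conditions)

def compare_rag_systems(question, ground_truth, rag_answers):
    # rag_answers maps a RAG configuration name to its generated answer;
    # the result is comparative quality data across configurations.
    conditions = generate_conditions(question, ground_truth)
    return {name: question_specific_metric(ans, conditions)
            for name, ans in rag_answers.items()}

scores = compare_rag_systems(
    "What retrieval index does the system use?",
    "The system retrieves chunks from a vector index",
    {
        "rag_a": "It retrieves chunks from a vector index.",
        "rag_b": "It answers from a cached summary.",
    },
)
```

In the disclosed method the condition generation and condition checking would both be performed by prompting an LLM rather than by string matching; the structure of the loop is what this sketch is meant to convey.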
Technology companies developing RAG systems, AI/LLM developers, and software companies working on AI evaluation tools may find this patent relevant for understanding evaluation methodologies for retrieval-augmented generation systems. The patent provides a framework for standardized comparison of AI-generated content quality across different system configurations.
Archived snapshot
Apr 17, 2026: GovPing captured this document from the original source. If the source has since changed or been removed, this is the text as it existed at that time.
CONTENT QUALITY EVALUATION FOR RETRIEVAL AUGMENTED GENERATION (RAG) SYSTEMS
Application: US20260099693A1 | Kind: A1 | Apr 09, 2026
Inventors
Haiyuan CAO, Satarupa GUHA, Zeqi LIN, Fuhui FANG, Atabak ASHFAQ, Yu HU
Abstract
A method for objectively evaluating content output by a retrieval augmented generation (RAG) system includes obtaining question-answer information for one or more data chunks residing in a source index and prompting a large language model (LLM) to generate one or more answer construct conditions for a first test question included in the question-answer information. Each of the answer construct conditions identifies a condition that is satisfied by a ground truth answer to the first test question. The method further includes generating a question-specific evaluation metric for the first test question based on the answer construct conditions and prompting multiple differently configured retrieval augmented generation (RAG) systems to answer the first test question based on information within the source index. The method additionally includes evaluating multiple answers to the first test question generated by the multiple RAG systems by repeatedly assessing the question-specific evaluation metric and presenting, on a user interface, comparative quality data quantifying a relative quality of the multiple responses generated by the multiple RAG systems.
CPC Classifications
G06N 3/006, G06N 3/0475
Filing Date
2024-10-09
Application No.
18910875
About this page
Source document text, dates, docket IDs, and authority are extracted directly from USPTO.
The summary, classification, recommended actions, deadlines, and penalty information are AI-generated from the original text and may contain errors. Always verify against the source document.