Large Language Machine Learning Model Query Management
Assignee
Oracle International Corporation
Inventors
Vivek Kumar
Abstract
Techniques for filtering queries to a large language model (LLM) based on their relevance to an enterprise domain associated with the LLM involve training a machine learning model using historical LLM query data and associated relevance scores. These scores indicate how closely a query relates to the enterprise's operations. The trained model is then applied to new input queries, generating a relevance score for each. Queries meeting a predetermined relevance threshold are passed to the LLM for processing; queries falling below the threshold trigger remedial actions instead of LLM processing. The techniques optimize computational resource allocation by prioritizing queries relevant to the enterprise while filtering out less pertinent ones, creating a relevance-based gatekeeping mechanism that enhances efficiency and focuses the LLM's capabilities on enterprise-specific tasks.
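The gatekeeping flow described in the abstract can be sketched as follows. This is a minimal illustration, not the patented implementation: the function names (`train_relevance_model`, `gate_query`), the token-averaging scoring model, and the example data are all assumptions introduced here for clarity; the patent covers machine learning models generally.

```python
# Hypothetical sketch of relevance-based LLM query gatekeeping.
# All names and the simple token-weight "model" are illustrative
# stand-ins for the trained ML model described in the abstract.
from collections import defaultdict

def train_relevance_model(history):
    """Learn per-token relevance weights from (query, score) pairs."""
    totals, counts = defaultdict(float), defaultdict(int)
    for query, score in history:
        for tok in query.lower().split():
            totals[tok] += score
            counts[tok] += 1
    return {tok: totals[tok] / counts[tok] for tok in totals}

def relevance_score(model, query, default=0.0):
    """Score a query as the mean learned weight of its tokens."""
    toks = query.lower().split()
    if not toks:
        return default
    return sum(model.get(t, default) for t in toks) / len(toks)

def gate_query(model, query, threshold, llm, remedial):
    """Pass sufficiently relevant queries to the LLM; otherwise
    take a remedial action instead of LLM processing."""
    score = relevance_score(model, query)
    if score >= threshold:
        return llm(query)
    return remedial(query, score)

# Usage: hypothetical historical enterprise queries with
# relevance scores in [0, 1].
history = [
    ("quarterly revenue report", 0.9),
    ("invoice approval workflow", 0.8),
    ("best pizza toppings", 0.1),
    ("celebrity gossip today", 0.0),
]
model = train_relevance_model(history)
result = gate_query(
    model,
    "revenue approval workflow",
    threshold=0.5,
    llm=lambda q: f"LLM answer for: {q}",
    remedial=lambda q, s: f"Rejected (score {s:.2f}): out of scope",
)
```

In this sketch the "remedial action" is a canned rejection message, but the abstract leaves the action open-ended; redirecting the query, logging it, or prompting the user to rephrase would fit equally well.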
CPC Classifications
Filing Date
2025-11-25
Application No.
19399945