
Huawei Cloud Patent: Model Inference Method

ChangeBridge: EPO Bulletin - AI & Computing (G06N)
Published March 18th, 2026
Detected March 23rd, 2026

Summary

The European Patent Office has published patent application EP4711986A1 by Huawei Cloud, detailing a model inference method. The method aims to improve user data security and inference efficiency by having the client and the server each process a different part of the user data, with the client combining the two partial outputs into the final inference result.

What changed

The European Patent Office (EPO) has published patent application EP4711986A1, filed by Huawei Cloud Computing Technologies Co., Ltd., concerning a novel model inference method and device. The disclosed technology involves a distributed approach where a client and a server, each with deployed models, process distinct portions of user data. The client then combines its output with the server's output to achieve a final inference result. This method is designed to enhance user data security by limiting the data accessible to the server and to improve inference efficiency by reducing the bandwidth and time required for data transmission.
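The split-inference flow described above can be sketched with toy linear models. Everything here is an illustrative assumption rather than a detail from the patent: the function names, the even feature split, and the additive combination of the two partial outputs are all hypothetical, and the server call would be a network request in practice.

```python
# Hedged sketch of the split-inference idea: the client keeps part of the
# user data local, sends only the remainder to the server, and combines
# both partial outputs into the final inference result.

def linear_model(weights, features):
    """Toy model: a weighted sum over its share of the features."""
    return sum(w * x for w, x in zip(weights, features))

def server_infer(server_weights, server_part):
    # The server sees only `server_part`, never the full user data.
    return linear_model(server_weights, server_part)

def client_infer(client_weights, server_weights, user_data, split):
    # Split the user data: the first `split` features stay on the client.
    client_part, server_part = user_data[:split], user_data[split:]
    client_out = linear_model(client_weights, client_part)
    # In a real deployment this would be a network call carrying only
    # `server_part`, which is what saves bandwidth and limits exposure.
    server_out = server_infer(server_weights, server_part)
    # The client combines both partial outputs into the final result.
    return client_out + server_out

user_data = [1.0, 2.0, 3.0, 4.0]
result = client_infer([0.5, 0.5], [0.25, 0.25], user_data, split=2)
print(result)  # 0.5*1 + 0.5*2 + 0.25*3 + 0.25*4 = 3.25
```

Because the combination is purely additive in this sketch, the server's partial output reveals nothing about the features the client kept local, which mirrors the security argument in the application.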

This patent publication is primarily informational and does not impose immediate compliance obligations on regulated entities. However, companies involved in machine learning, cloud computing, and AI development, particularly those operating within the EU, may find the technical approach relevant to their product development and data handling strategies. The focus on data security and transmission efficiency highlights emerging trends in AI deployment that could influence future industry standards and best practices.

Source document (simplified)


MODEL INFERENCE METHOD AND DEVICE

Publication EP4711986A1 Kind: A1 Mar 18, 2026

Applicants

Huawei Cloud Computing Technologies Co., Ltd.

Inventors

LIU, Jizhe

Abstract

A model inference method and apparatus are disclosed, relating to the field of machine learning technologies. A client and a server use their respective deployed models to process different parts of user data, each obtaining an output result. The client then obtains the server's output result and derives an inference result from the two outputs. Compared with a case in which the server must obtain all the user data during inference, in this application the server obtains only a part of the user data. Because the server cannot reconstruct the full content of the user data from that part, the security of the user data is ensured. In addition, the client needs to send only that part of the user data to the server, so the bandwidth occupied by data transmission between the client and the server and the time consumed by the transmission are reduced, and inference efficiency is improved.

IPC Classifications

G06N 5/048 (2023.01)

Designated States

AL, AT, BE, BG, CH, CY, CZ, DE, DK, EE, ES, FI, FR, GB, GR, HR, HU, IE, IS, IT, LI, LT, LU, LV, MC, ME, MK, MT, NL, NO, PL, PT, RO, RS, SE, SI, SK, SM, TR


Named provisions

Model Inference Method and Device

Classification

Agency
EPO
Published
March 18th, 2026
Instrument
Notice
Legal weight
Non-binding
Stage
Final
Change scope
Minor
Document ID
EP4711986A1

Who this affects

Applies to
Technology companies
Industry sector
5112 Software & Technology
Activity scope
Machine Learning, AI Model Deployment
Geographic scope
European Union (EU)

Taxonomy

Primary area
Artificial Intelligence
Operational domain
IT Security
Topics
Data Security, Machine Learning
