MODEL INFERENCE METHOD AND DEVICE
Applicants
Huawei Cloud Computing Technologies Co., Ltd.
Inventors
LIU, Jizhe
Abstract
A model inference method and apparatus are disclosed, relating to the field of machine learning technologies. A client and a server use their respectively deployed models to process different parts of user data, to obtain respective output results. The client then obtains the output result of the server and derives an inference result from the output results of the server and the client. Compared with a case in which the server must obtain all the user data during inference, in this application the server obtains only a part of the user data. Because the server cannot recover, from that part alone, all content included in the user data, security of the user data is ensured. In addition, the client needs to send only that part of the user data to the server, so the bandwidth occupied by data transmission between the client and the server and the time consumed by the transmission are reduced, and inference efficiency is improved.
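The split-inference flow summarized in the abstract can be sketched as follows. This is an illustrative sketch only: the sub-models, the partition point, and the combination step (concatenation followed by a sum) are assumptions for illustration, not the specific implementation claimed in this application.

```python
import numpy as np

# Hypothetical sub-models: in practice these would be trained
# model partitions deployed on the client and the server.
def client_model(x_local):
    # The client processes its part of the user data locally.
    return x_local * 2.0

def server_model(x_remote):
    # The server processes only the part of the data it receives,
    # so it never sees the full user record.
    return x_remote + 1.0

def split_inference(user_data, split_idx):
    # Partition the user data: only user_data[split_idx:] leaves the client.
    part_client = user_data[:split_idx]
    part_server = user_data[split_idx:]
    out_client = client_model(part_client)
    out_server = server_model(part_server)  # stands in for a remote call
    # The client combines both output results into the inference result.
    return float(np.concatenate([out_client, out_server]).sum())

result = split_inference(np.array([1.0, 2.0, 3.0, 4.0]), split_idx=2)
```

Only `part_server` crosses the network, which is how the scheme reduces both the server's visibility into the user data and the transmission cost relative to sending the full input.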
IPC Classifications
Designated States
AL, AT, BE, BG, CH, CY, CZ, DE, DK, EE, ES, FI, FR, GB, GR, HR, HU, IE, IS, IT, LI, LT, LU, LV, MC, ME, MK, MT, NL, NO, PL, PT, RO, RS, SE, SI, SK, SM, TR