Method for injecting human knowledge into AI models
Assignee
UMNAI Limited
Inventors
Angelo Dalli, Mauro Pirrone
Abstract
Human knowledge may be injected into an explainable AI system in order to reduce the model's generalization error, improve model accuracy and interpretability, and avoid or eliminate bias, while providing a path towards the integration of connectionist systems with symbolic and causal logic in a combined AI system. Human knowledge injection may be implemented by harnessing the white-box nature of explainable or interpretable models. In one exemplary embodiment, a user applies intuition to model-specific cases or exceptions. In another embodiment, an explainable model may be embedded in workflow systems that enable users to apply pre-hoc and post-hoc operations. A third exemplary embodiment implements human-assisted focusing. An exemplary embodiment also presents a method to train and refine explainable or interpretable models without losing the human-injected knowledge when applying gradient descent techniques. The white-box nature of explainable models allows for precise source attribution and traceability of the knowledge incorporated into the model.
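One way to make the gradient-descent aspect concrete is a minimal sketch, not taken from the patent itself: in a white-box linear model, a coefficient supplied by a human expert can be marked as frozen, and its gradient zeroed during training so the injected knowledge is never overwritten. All names (`frozen`, the data, the chosen coefficient) are illustrative assumptions.

```python
import numpy as np

# Hypothetical illustration: gradient-descent training of a white-box
# (linear) model that preserves a human-injected coefficient.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w + 0.01 * rng.normal(size=200)

w = np.zeros(3)
w[0] = 2.0                               # human-injected knowledge: weight of feature 0
frozen = np.array([True, False, False])  # mask marking injected coefficients

lr = 0.1
for _ in range(500):
    grad = 2 * X.T @ (X @ w - y) / len(y)  # MSE gradient
    grad[frozen] = 0.0                     # injected knowledge survives training
    w -= lr * grad

# w[0] remains exactly 2.0; the free coefficients fit the data as usual.
```

Because the model is white-box, the frozen mask itself doubles as a record of which coefficients came from human input, supporting the source attribution and traceability mentioned above.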
CPC Classifications
Filing Date
2021-12-15
Application No.
17551821
Claims
20