Time-Series Optimized Transformer For Observability With Multimodal Input (TOTO-M)
Assignee
Datadog, Inc.
Inventors
Benjamin Jacob Cohen, Emaad Ali Khwaja, Viktoriya Zhukova, Othmane Abou-Amal
Abstract
The present disclosure describes technology for training and deploying time-series optimized transformers for observability with multimodal input (TOTO-M). The system includes processors and a storage device for storing instructions. The processors may execute the instructions to process multimodal data using an artificial intelligence (AI) model. The AI model includes a text embedding model configured to generate one or more query text embeddings based on one or more query texts corresponding to multivariate time-series data. The AI model further includes a patch embedding layer configured to generate patch embeddings from the multivariate time-series data, and a transformer architecture comprising one or more segments including space-wise blocks and time-wise blocks. The transformer architecture is configured to receive the patch embeddings combined with the one or more query text embeddings, process the patch embeddings, and output transformed embeddings.
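The abstract's data flow, text embeddings combined with patch embeddings and then processed by alternating time-wise and space-wise blocks, can be sketched as follows. This is a minimal illustrative sketch, not the filed implementation: all dimensions, the linear patch projection, the single-head attention, and the way the query text embedding is combined (simple addition) are assumptions introduced here for clarity.

```python
import numpy as np

# Hypothetical dimensions: V variates, T time steps, patch length P, model dim D.
rng = np.random.default_rng(0)
V, T, P, D = 3, 32, 8, 16
n_patches = T // P

def patch_embed(series, W):
    """Split each variate into non-overlapping patches, project each to D dims."""
    patches = series.reshape(V, n_patches, P)   # (V, n_patches, P)
    return patches @ W                          # (V, n_patches, D)

def self_attn(x):
    """Plain single-head dot-product self-attention over the second-to-last axis."""
    scores = x @ np.swapaxes(x, -1, -2) / np.sqrt(x.shape[-1])
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w = w / w.sum(axis=-1, keepdims=True)
    return w @ x

series = rng.normal(size=(V, T))                # stand-in multivariate time series
W_patch = rng.normal(size=(P, D)) / np.sqrt(P)  # assumed patch projection weights
q_text = rng.normal(size=(D,))                  # stand-in query text embedding

# Combine patch embeddings with the query text embedding (assumed: addition).
x = patch_embed(series, W_patch) + q_text       # (V, n_patches, D)

# One "segment": a time-wise block (attend across patches within each variate)
# followed by a space-wise block (attend across variates at each patch index).
x = x + self_attn(x)                            # time-wise, residual connection
x = np.swapaxes(x, 0, 1)                        # (n_patches, V, D)
x = x + self_attn(x)                            # space-wise, residual connection
transformed = np.swapaxes(x, 0, 1)              # back to (V, n_patches, D)

print(transformed.shape)
```

The swap-axes trick is one common way to implement axis-wise attention: the same attention routine attends over time when patches sit on the second-to-last axis and over variates after the transpose.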
CPC Classifications
Filing Date
2025-06-25
Application No.
19249420