
Tokenized Data Streaming for Multi-Modal AI in Vehicles

USPTO Patent Applications - Networking (H04L)

Summary


What changed

USPTO published patent application US20260097781A1, which describes tokenized data streaming between in-vehicle systems-on-chip (SoCs) and external AI accelerators for multi-modal language models in vehicles. The system encodes raw sensor data (vision, audio) into tokens on a first device and streams them to a second device hosting the language model inference server, which returns detection results used to control vehicle operations.

Technology and automotive companies developing autonomous driving systems, AI accelerators, or in-vehicle computing platforms should review this application's claims for potential overlap with existing or planned products. Patent monitoring services should track this application's prosecution for any divisional or continuation filings that may expand the scope of protection.

What to do next

  1. Monitor for updates on patent prosecution status
  2. Review for potential licensing implications if developing similar AI vehicle technology

Archived snapshot

Apr 9, 2026

GovPing captured this document from the original source. If the source has since changed or been removed, this is the text as it existed at that time.

← USPTO Patent Applications

TOKENIZED DATA STREAMING FOR MULTI-MODAL LANGUAGE MODELS

Application: US20260097781A1 · Kind: A1 · Apr 09, 2026

Inventors

Rajath Bellipady Shetty, Niral Lalit Pathak, Ratin Kumar

Abstract

In various examples, a multi-modal language model may be split up and hosted by multiple devices. For example, a modality (e.g., vision, audio) encoder and/or projector of the multi-modal language model (e.g., a vision language model) may be hosted on one device (e.g., an in-vehicle SoC) that encodes raw sensor data into corresponding tokens and streams the tokens to a second device (e.g., an external graphics processing unit (GPU) or artificial intelligence (AI) accelerator) that hosts an inference server and a language model (LM) of the multi-modal language model. The LM may return a response indicating the result(s) of the requested detection task, and the response may be used to take some responsive action (e.g., control one or more operations of an ego-machine).
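The split described in the abstract can be sketched in a few lines. This is a purely illustrative stand-in, not the patented implementation: the encoder, the vocabulary size, the detection stub, and the in-process queue (standing in for the SoC-to-accelerator link) are all assumptions, since the application publishes claims, not code.

```python
# Illustrative sketch of the split multi-modal pipeline: a modality encoder
# on one device tokenizes raw sensor data, streams the tokens to a second
# device hosting the language model, and acts on the detection response.
from dataclasses import dataclass
from queue import Queue

VOCAB_SIZE = 4096  # assumed token vocabulary for the modality encoder


def encode_frame(raw_pixels: list[float]) -> list[int]:
    """Device 1 (in-vehicle SoC): map raw sensor values to discrete tokens.

    A real system would run a learned vision encoder/projector; here we
    simply quantize each value into the assumed token vocabulary.
    """
    return [int(min(max(p, 0.0), 1.0) * (VOCAB_SIZE - 1)) for p in raw_pixels]


@dataclass
class DetectionResponse:
    label: str
    confidence: float


def run_inference(tokens: list[int]) -> DetectionResponse:
    """Device 2 (external GPU / AI accelerator): stand-in for the LM server.

    A real inference server would decode the streamed tokens with a
    language model; this stub flags an obstacle when token values are high.
    """
    mean = sum(tokens) / len(tokens)
    if mean > VOCAB_SIZE / 2:
        return DetectionResponse("obstacle", 0.9)
    return DetectionResponse("clear", 0.9)


def pipeline(raw_pixels: list[float], link: Queue) -> str:
    """End-to-end flow: encode on device 1, stream, infer on device 2, act."""
    link.put(encode_frame(raw_pixels))    # SoC streams tokens over the link
    response = run_inference(link.get())  # accelerator consumes the tokens
    # "Responsive action": a real ego-machine would adjust control inputs.
    return "brake" if response.label == "obstacle" else "proceed"
```

Keeping only the encoder on the SoC and moving the LM off-board is the design point the claims emphasize: tokens are far smaller than raw frames, so the link carries a compact stream rather than raw sensor data.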

CPC Classifications

B60W 60/001; H04L 9/3213

Filing Date

2024-10-03

Application No.

18905193

View original document →

Get daily alerts for USPTO Patent Applications - Networking (H04L)

Daily digest delivered to your inbox.

Free. Unsubscribe anytime.

About this page

What is GovPing?

Every important government, regulator, and court update from around the world. One place. Real-time. Free.

What's from the agency?

Source document text, dates, docket IDs, and authority are extracted directly from USPTO.

What's AI-generated?

The plain-English summary, classification, and "what to do next" steps are AI-generated from the original text. Cite the source document, not the AI analysis.


Classification

Agency
USPTO
Published
April 9th, 2026
Instrument
Notice
Legal weight
Non-binding
Stage
Final
Change scope
Minor
Document ID
US20260097781A1

Who this affects

Applies to
Technology companies; Transportation companies
Industry sector
5112 Software & Technology; 3361 Automotive Manufacturing
Activity scope
Patent application filing; AI system development; Vehicle system design
Geographic scope
United States (US)

Taxonomy

Primary area
Intellectual Property
Operational domain
Legal
Topics
Artificial Intelligence; Transportation
