MULTI-VIEW LIDAR PERCEPTION WITH MOTION CUES FOR AUTONOMOUS MACHINES AND APPLICATIONS
Inventors
Ke Chen, David Ambrose Wehr, Joachim Pehserl, Wenyuan Zhang, Mark Austin Brophy, Prasanna Kumar Sivakumar, Christian Mostegel, Deepak Ravishankar, Sravya Nimmagadda
Abstract
Embodiments of the present disclosure relate to multi-view LIDAR perception with motion cues for autonomous and semi-autonomous machines and applications. A DNN may be used to detect objects, a navigable space, weather or surface conditions, artifacts, and/or other parts or features of an environment based on multiple views of LIDAR data from multiple time slices. The DNN may include multiple input channels for processing multiple views of sensor data from multiple time slices to provide motion cues. Features extracted from the different time slices may be geometrically projected from a first 2D view to a second 2D view, combined with features that were extracted from the second 2D view, and applied to a subsequent stage of the DNN. The data generated by the DNN may be provided to the drive stack of an autonomous vehicle or other ego-machine to enable safe planning and control of the vehicle.
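The abstract's projection-and-fusion step, in which features extracted in one 2D view are geometrically re-projected into a second 2D view and concatenated with features native to that view, can be sketched as below. This is an illustrative NumPy sketch under assumed conventions (a range-view feature set indexed by per-point 3D coordinates, a top-down BEV grid, mean pooling per cell); the function names, grid size, and extent are hypothetical and not taken from the filing.

```python
import numpy as np

def project_range_to_bev(points_xyz, range_feats, grid_size=32, extent=40.0):
    """Scatter per-point features from a range (perspective) view into a
    top-down BEV grid, using each point's (x, y) position for the projection.
    points_xyz: (N, 3) point coordinates in meters; range_feats: (N, C)."""
    num_channels = range_feats.shape[1]
    bev = np.zeros((grid_size, grid_size, num_channels), dtype=np.float32)
    counts = np.zeros((grid_size, grid_size, 1), dtype=np.float32)
    cell = (2.0 * extent) / grid_size
    ix = ((points_xyz[:, 0] + extent) / cell).astype(int)
    iy = ((points_xyz[:, 1] + extent) / cell).astype(int)
    valid = (ix >= 0) & (ix < grid_size) & (iy >= 0) & (iy < grid_size)
    for i, j, f in zip(ix[valid], iy[valid], range_feats[valid]):
        bev[i, j] += f
        counts[i, j] += 1.0
    # Mean-pool features that landed in the same BEV cell.
    return bev / np.maximum(counts, 1.0)

def fuse_views(bev_feats, projected_feats):
    """Channel-wise concatenation of native BEV features with features
    projected from the range view; the result would feed a later DNN stage."""
    return np.concatenate([bev_feats, projected_feats], axis=-1)

# Usage: project features from an earlier time slice, then fuse.
rng = np.random.default_rng(0)
pts = rng.uniform(-40.0, 40.0, size=(100, 3)).astype(np.float32)
feats = rng.standard_normal((100, 4)).astype(np.float32)
projected = project_range_to_bev(pts, feats)          # (32, 32, 4)
fused = fuse_views(np.zeros((32, 32, 8), np.float32), projected)  # (32, 32, 12)
```

In a real network the scatter loop would typically be replaced by a vectorized or GPU scatter op, and the fusion could be concatenation, summation, or a learned gating; the abstract does not specify which.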
CPC Classifications
Filing Date
2024-09-13
Application No.
18884448