Adaptive, Intelligent LiDAR Augments NVIDIA’s Open Platform for L2+ to Level 5 Fully Autonomous Driving
AEye, the global leader in adaptive, high-performance LiDAR solutions, announced it is working with NVIDIA to bring its adaptive, intelligent sensing to the NVIDIA DRIVE autonomous vehicle platform.
The NVIDIA DRIVE platform is an open, end-to-end solution spanning Level 2+ automated driving to Level 5 fully autonomous driving. With AEye's intelligent, adaptive LiDAR supported on the NVIDIA DRIVE platform, autonomous vehicle developers will have access to next-generation tools that increase the saliency and quality of the data they collect as they build and deploy state-of-the-art ADAS and AV applications. Specifically, AEye's SDK and Visualizer will allow developers to configure the sensor and view point clouds on the platform.
“We are pleased to offer our customers the full functionality of our sensor on the NVIDIA DRIVE platform,” said Blair LaCorte, CEO of AEye. “ADAS and AV developers wanting a best-in-class, full-stack autonomous solution now have the unique ability to use a single adaptive LiDAR platform from Level 2+ through Level 5 as they configure solutions for different levels of autonomy. We believe that intelligence will be the key to delivering new levels of safety and performance.”
“AI-driven sensing and perception are critical to solving the most challenging corner cases in automated and autonomous driving,” said Glenn Schuster, senior director of sensor ecosystems at NVIDIA. “As an NVIDIA ecosystem partner, AEye’s adaptive, intelligent-sensing capabilities complement our DRIVE platform, which enables safe AV development and deployment.”
AEye’s adaptive LiDAR takes a uniquely intelligent approach to sensing, called iDAR (Intelligent Detection and Ranging). Its high-performance, adaptive LiDAR is based on a bi-static architecture, which keeps the transmit and receive channels separate. As each laser pulse is transmitted, the solid-state receiver is told where and when to look for its return — enabling parallel processing and deterministic artificial intelligence to be introduced into the sensing process at the point of object acquisition and detection. Ultimately, this establishes the iDAR platform as adaptive — allowing it to focus on what matters most, while simultaneously monitoring the vehicle’s surroundings, resulting in greater reliability, safety, and performance at longer range and lower cost.
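The gating idea described above — telling the receiver where and when to look for each pulse's return — can be illustrated with a minimal sketch. This is not AEye's implementation or API; the function, class, and parameter names below (`schedule_receive_window`, `ReceiveWindow`, `expected_range_m`, `tolerance_m`) are hypothetical, and the sketch assumes only the basic physics: a return from an expected range arrives after a predictable round-trip time, so the receive gate can be opened just around that instant.

```python
from dataclasses import dataclass

C = 299_792_458.0  # speed of light, m/s

@dataclass
class ReceiveWindow:
    """Where and when the receiver looks for one pulse's return (hypothetical model)."""
    azimuth_deg: float  # direction the receive channel is pointed
    open_s: float       # gate opens this many seconds after the pulse fires
    close_s: float      # gate closes this many seconds after the pulse fires

def schedule_receive_window(azimuth_deg: float,
                            expected_range_m: float,
                            tolerance_m: float = 5.0) -> ReceiveWindow:
    """Compute the round-trip time for the expected target range and pad
    it by a range tolerance, yielding a narrow listening gate for this pulse."""
    round_trip_s = 2.0 * expected_range_m / C   # out and back
    pad_s = 2.0 * tolerance_m / C               # range uncertainty -> time pad
    return ReceiveWindow(azimuth_deg, round_trip_s - pad_s, round_trip_s + pad_s)

# Example: a pulse aimed at a region of interest ~150 m ahead.
w = schedule_receive_window(azimuth_deg=0.0, expected_range_m=150.0)
print(f"gate open for {(w.close_s - w.open_s) * 1e9:.1f} ns "
      f"around {((w.open_s + w.close_s) / 2) * 1e6:.3f} us after the pulse")
```

The point of the sketch is the design consequence the paragraph describes: because each receive gate is narrow and scheduled per pulse, the sensor can interleave many such pulses in parallel and spend more of its dwell time on regions that matter most.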