VAYAVISION announced the release of VAYADrive 2.0, an AV perception software engine that fuses raw sensor data together with AI tools to create an accurate 3D environmental model of the area around the self-driving vehicle.
VAYADrive 2.0 breaks new ground in several categories of AV environmental perception – raw data fusion, object detection, classification, SLAM, and movement tracking – providing crucial information about dynamic driving environments, enabling safer and more reliable autonomous driving, and getting more performance out of cost-effective sensor technologies.
The VAYADrive 2.0 software solution combines state-of-the-art AI, analytics, and computer vision technologies with computational efficiency to scale up the performance of AV sensor hardware. The software is compatible with a wide range of cameras, LiDARs, and radars.
VAYADrive 2.0 addresses a key challenge facing the industry: the detection of ‘unexpected’ objects. Roads are full of ‘unexpected’ objects that are absent from training data sets, even from sets captured over millions of kilometers of driving. As a result, systems based mainly on deep neural networks fail to detect the ‘unexpected’.
To detect objects, no single type of sensor is enough: cameras do not see depth, while distance sensors, such as LiDARs and radars, have very low resolution. VAYADrive 2.0 upsamples the sparse samples from distance sensors and assigns distance information to every pixel in the high-resolution camera image. This gives autonomous vehicles crucial information about an object’s size and shape, allowing them to pick out every small obstacle on the road and to accurately define the shapes of vehicles, humans, and other objects.
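The general idea behind this kind of upsampling can be illustrated with a minimal sketch. The example below is a hypothetical illustration, not VAYAVISION's proprietary algorithm: it projects a handful of sparse distance-sensor samples onto an image grid and fills every pixel with the depth of its nearest sample (real systems use far more sophisticated, edge-aware interpolation guided by the camera image).

```python
import numpy as np

def upsample_depth(sparse_samples, height, width):
    """Assign a depth value to every pixel of a camera image from sparse
    distance-sensor samples, using nearest-neighbour interpolation.

    sparse_samples: iterable of (row, col, depth_m) tuples, i.e. LiDAR or
    radar returns already projected into the camera's pixel coordinates.
    Returns a dense (height, width) depth map in meters.
    """
    pts = np.asarray(sparse_samples, dtype=float)          # (N, 3)
    rows, cols = np.mgrid[0:height, 0:width]
    pixels = np.stack([rows.ravel(), cols.ravel()], axis=1)  # (H*W, 2)
    # Squared distance from every pixel to every sparse sample
    # (brute force; fine for a small illustrative grid).
    d2 = ((pixels[:, None, :] - pts[None, :, :2]) ** 2).sum(axis=2)
    nearest = d2.argmin(axis=1)
    return pts[nearest, 2].reshape(height, width)

# Four sparse returns on an 8x8 image: a near obstacle (2 m) in the
# top-left region, far background (30 m) everywhere else.
samples = [(1, 1, 2.0), (1, 6, 30.0), (6, 1, 30.0), (6, 6, 30.0)]
dense = upsample_depth(samples, 8, 8)
```

Every pixel now carries a depth estimate, so pixels near the close-range sample resolve to 2 m while the rest fall to 30 m; it is this per-pixel depth that lets downstream logic separate a small obstacle from the background.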
VAYAVISION will be showing its solution at the CES – Consumer Electronics Show in Las Vegas from 8 – 11 January 2019, at Booth 301 of the OurCrowd Pavilion, Westgate Paradise Center.