A great deal of work in sensor technology is underway to make robotic vehicles practical.
It’s becoming clear that autonomous vehicle deployment will depend on the development of smart sensors. Industry analysts see advances coming in the traditional areas of radar and proximity sensing. They also expect vehicle system architectures to evolve in ways designed to optimize the handling of information from the multiplicity of sensors necessary for autonomous operation.
The dependence on sensing systems becomes clear from a review of how the autonomous vehicle industry classifies levels of autonomous ability:
Level zero – The human driver is in total control.
Level one – This is where most cars on the road today reside. Function-specific automation characterizes this level: The vehicle might carry automation for single functions such as braking or cruise control. But the driver is still completely in charge of vehicle controls.
Level two – This level is characterized by the automation of more than one control into what’s called combined function automation. The system can take control for some driving conditions but the driver is still ultimately responsible for driving the vehicle and must be able to take over on short notice. Dynamic cruise control, where braking takes place automatically to handle some events during cruise control, is an example of a level-two function.
Level three – This level defines limited self-driving automation where the vehicle takes control most of the time. The driver is expected to occasionally take over with comfortable transition times. Highly autonomous driving on the highway would fall into level three. (Audi’s A8 is said to be the first production-ready Level 3 autonomous car, capable of steering, braking and accelerating itself on highways up to a speed of 35 mph.)
Level four — Full self-driving automation. The vehicle is completely in control and the driver isn’t expected to take over.
Industry analysts say the transition of control between drivers and automated systems will be critical for vehicles at levels two and three. Autonomous systems operating at these levels will be human-machine interface intensive.
Sensors will make this sort of interface possible. The human driver will be monitored constantly to assess his or her state of mind for the moments when the system gets ready to hand back the reins. The vehicle will have to communicate status and status changes to the driver. The current thinking is that such feedback will likely involve a heads-up display and haptic measures such as vibrating the pedals, steering wheel, or seat, or tightening the seat belt.
THE EYES OF AN AUTONOMOUS VEHICLE
The sensing part of the architectural picture for autonomous vehicles is becoming clear. Designers now divide sensing tasks into two categories: sensors that look outside the vehicle and sensors looking at the human driver to gauge his or her state.
It looks as though driver monitoring will involve cameras and software designed to detect drowsiness, fatigue, distraction, and similar conditions. These systems might also play a role in security and in metering how the autonomous system issues warnings. For example, when it comes to issuing a warning, the system might factor in how closely the human driver is paying attention before deciding how intense the warning should be.
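To picture how such metering might work, here is a minimal sketch of a function that scales alert intensity against a driver-attention estimate coming from the monitoring camera. The 0-to-1 attention score, the thresholds, and the tiers are hypothetical, not drawn from any production system.

```python
def warning_intensity(attention: float) -> str:
    """Map a hypothetical 0-1 driver-attention estimate to a warning tier.

    attention: 1.0 means eyes on the road and alert, 0.0 means fully
    distracted or drowsy. Thresholds are illustrative only.
    """
    if attention >= 0.8:
        return "visual"                   # subtle heads-up-display cue
    if attention >= 0.5:
        return "visual+audible"           # add a chime
    return "visual+audible+haptic"        # vibrate seat or pedal, tighten belt

print(warning_intensity(0.3))  # visual+audible+haptic
```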
Cameras and vision systems are the primary focus for driver sensing simply because cameras are relatively compact, inexpensive, and widely available. But there are plenty of issues surrounding their use. One is simple user acceptance; some drivers aren’t comfortable with a camera constantly pointed at them. Another problem is that sunglasses, brimmed hats, and other fashion items can obscure the driver’s face, making it difficult for software to decide whether a driver is paying attention. The level of ambient light can present problems as well.
Autonomous system software has traditionally determined the driver’s state of attention using classical methods that include eye tracking, eyelid-closure estimators, facial recognition to gauge the driver’s state, and so forth. More recently, however, such systems have begun to implement artificial-intelligence schemes that factor in driver behavior sensed by other means, such as movement of the steering wheel or posture in the seat.
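As a rough illustration of the classical camera-based approach, the sketch below uses OpenCV’s stock Haar-cascade detectors to check whether a face and a pair of eyes are visible in a frame. A real drowsiness estimator would also track eyelid closure over time; that part is omitted here.

```python
import cv2

# Stock Haar cascades shipped with OpenCV (a classical, pre-deep-learning approach)
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

def driver_face_and_eyes_visible(frame) -> bool:
    """Return True if a face with at least two detected eyes appears in the frame.

    Sunglasses, hat brims, or low ambient light will defeat this simple check,
    which is exactly the limitation discussed above.
    """
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        roi = gray[y:y + h, x:x + w]
        eyes = eye_cascade.detectMultiScale(roi, scaleFactor=1.1, minNeighbors=5)
        if len(eyes) >= 2:
            return True
    return False
```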
When it comes to sensing the environment outside the car, autonomous systems generally divide the task into three categories: environmental perception, localization, and communication. The sensing of the environment around the car generally involves the use of both lidar and radar.
Lidar sensors measure the distance to an object by calculating the time it takes a pulse of light to travel to the object and back. The lidar unit usually sits atop the vehicle, where it can generate a 360° 3D view of potential obstacles to be avoided. Vehicular lidar systems typically use a 905-nm wavelength that can provide up to 200 m of range in restricted fields of view (FOVs). Some manufacturers now make 1,550-nm units that offer longer range and better accuracy.
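The time-of-flight arithmetic itself is simple: range is half the round-trip time multiplied by the speed of light, as the short sketch below shows. A 200-m target, for instance, returns a pulse in roughly 1.3 µs.

```python
SPEED_OF_LIGHT = 299_792_458.0  # m/s

def lidar_range(round_trip_s: float) -> float:
    """Distance to a target from a lidar pulse's round-trip time.

    The pulse travels out and back, so the one-way distance is half
    the total path length.
    """
    return SPEED_OF_LIGHT * round_trip_s / 2.0

print(round(lidar_range(1.334e-6)))  # ~200 m
```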
One problem with lidar units is their expense. It’s said, for example, that some of the lidar units fielded in the DARPA Grand Challenge for autonomous vehicles cost more than the vehicles they sat on. However, costs are dropping, partly thanks to the development of solid-state lidar (SSL), which eliminates the scanning mirrors and other moving parts found in today’s technology.
SSLs currently cover smaller FOVs, but their lower cost makes it practical to equip vehicles with multiple units. For example, some systems under development use four to six lidars. Among them will generally be one high-definition lidar, where high definition typically means using between 64 and 128 lasers to generate pulses, yielding an angular resolution of less than 0.1°. High-definition lidar can generally resolve cars and foliage up to 120 m away.
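What that 0.1° of angular resolution buys can be sanity-checked with the small-angle arc-length formula: at 120 m, adjacent returns land roughly 0.2 m apart, fine enough to separate a car from the foliage behind it. The sketch below simply multiplies range by the angle in radians.

```python
import math

def beam_spacing(range_m: float, angular_res_deg: float) -> float:
    """Approximate lateral spacing between adjacent lidar returns at a given range."""
    return range_m * math.radians(angular_res_deg)

print(round(beam_spacing(120.0, 0.1), 2))  # ~0.21 m at 120 m with 0.1-degree resolution
```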
Autonomous vehicles will also carry between three and five long-range, medium-range, and short-range radars on their sides to detect oncoming traffic. Here, short-range radar (SRR) generally covers 0.2 to 30 m, medium-range radar (MRR) covers 30 to 80 m, and long-range radar (LRR) covers 80 to more than 200 m.
LRR is the de facto sensor for adaptive cruise control (ACC) and highway automatic emergency braking systems (AEBS). One problem is that systems depending on LRR alone may not react correctly in certain scenarios: when a car cuts in front of the vehicle, when thin-profile vehicles such as motorcycles sit staggered in a lane, or when the curvature of the road confuses the ACC system about which car to follow. To overcome such limitations, some developers pair radar with cameras to provide additional context. One reason is that camera images can be analyzed for azimuth angles with a precision that is difficult to match using radar alone.
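For the camera’s contribution, the azimuth of a detected object follows from simple pinhole geometry: the horizontal offset of the object from the image center over the focal length in pixels. The focal length and pixel values in the sketch below are made up for illustration.

```python
import math

def pixel_to_azimuth(u_px: float, cx_px: float, fx_px: float) -> float:
    """Azimuth (degrees) of an object detected at horizontal pixel u_px,
    for an ideal pinhole camera with principal point cx_px and focal
    length fx_px, both in pixels."""
    return math.degrees(math.atan2(u_px - cx_px, fx_px))

# Hypothetical 1280-pixel-wide camera with fx = 1000 px; object detected at column 960
print(round(pixel_to_azimuth(960, 640, 1000), 1))  # ~17.7 degrees right of center
```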
Besides helping to interpret radar returns, cameras are used to cover blind spots and to detect a variety of features that include lane markings, lane width and curvature, stop signs, speed-limit signs, pedestrians, and buildings. Some prototypes currently carry four to eight cameras aimed forward, backward, and toward each side.
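A classical lane-marking pass gives a flavor of the feature extraction cameras handle: edge detection followed by a probabilistic Hough transform to pull out candidate line segments. The thresholds below are illustrative and would need tuning for a real camera.

```python
import cv2
import numpy as np

def detect_lane_segments(bgr_frame):
    """Return candidate lane-line segments as (x1, y1, x2, y2) tuples.

    Classical pipeline: grayscale -> Canny edges -> probabilistic Hough
    transform. Parameter values are illustrative only.
    """
    gray = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    segments = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                               threshold=50, minLineLength=40, maxLineGap=20)
    return [] if segments is None else [tuple(s[0]) for s in segments]
```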
In addition, autonomous systems have traditionally used numerous ultrasonic sensors because they are inexpensive and easy to integrate into a vehicle; it is not uncommon to find 10 to 16 of them on a prototype.
But there is a lot of redundancy and overlap in what all these sensors do. The feeling is that once proof-of-concept work is complete, manufacturers will begin working toward reducing the number of sensors used on production vehicles.
Radar and camera units are strictly for perception tasks. Another set of sensors is used for localization, which basically means determining where the vehicle sits on a map. Prototypes use high-grade inertial measurement units, as well as differential GPS receivers, for highly accurate localization.
Inertial measurement units (IMUs) measure linear and angular motion, usually with a triad of gyroscopes and a triad of accelerometers, sometimes supplemented by magnetometers. They generally output angular velocity and acceleration.
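In the simplest planar case, those outputs can be dead-reckoned by integrating yaw rate into heading and acceleration into speed and position. The sketch below shows that integration, and also why IMU drift has to be corrected by GPS: any sensor bias accumulates with every step.

```python
import math

def imu_step(x, y, heading, speed, accel, yaw_rate, dt):
    """One planar dead-reckoning step from IMU outputs (illustrative, drift-prone).

    accel: longitudinal acceleration in m/s^2; yaw_rate: angular velocity in rad/s.
    """
    heading += yaw_rate * dt
    speed += accel * dt
    x += speed * math.cos(heading) * dt
    y += speed * math.sin(heading) * dt
    return x, y, heading, speed

# Example: one second of 0.1-s steps during a gentle left turn while accelerating
state = (0.0, 0.0, 0.0, 10.0)  # x (m), y (m), heading (rad), speed (m/s)
for _ in range(10):
    state = imu_step(*state, accel=0.5, yaw_rate=0.05, dt=0.1)
print(state)
```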
Differential GPS is an enhancement to the Global Positioning System that provides better location accuracy, boosting the 15-m nominal GPS figure to about 10 cm. The improvement comes from networked ground-based reference stations that broadcast the difference between the positions indicated by the GPS satellites and their own known positions. Shorter-range transmitters then send out the digital correction signal locally. Differential GPS receivers can make use of the correction signal up to about 200 miles from the reference station, though accuracy drops in proportion to the distance.
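The correction arithmetic is straightforward: the reference station compares its surveyed position with the position its own GPS receiver reports and broadcasts the difference, which a nearby rover adds to its fix. The local-frame coordinates below are invented purely for illustration.

```python
def dgps_correct(rover_fix, station_fix, station_truth):
    """Apply a differential GPS correction in a local x/y frame (metres).

    station_truth minus station_fix is the error the reference station
    observes; the same error is assumed to apply to the nearby rover.
    """
    corr = (station_truth[0] - station_fix[0], station_truth[1] - station_fix[1])
    return (rover_fix[0] + corr[0], rover_fix[1] + corr[1])

# Hypothetical positions in a local frame, in metres
station_truth = (1000.0, 2000.0)   # surveyed location of the reference station
station_fix = (1003.2, 1997.5)     # what its GPS receiver actually reports
rover_fix = (1503.1, 2497.6)       # uncorrected vehicle fix
print(dgps_correct(rover_fix, station_fix, station_truth))  # approx. (1499.9, 2500.1)
```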
One significant area of autonomous vehicle research is in how to fuse the various sensing technologies. For example, researchers are interested in fusing a lidar with a camera to both improve performance and enable the use of lidar with lower resolution (and cost) without sacrificing capabilities. Ditto for radar and cameras. Also, indications are that ultrasonic sensors may be phased out of autonomous sensing simply because lidar, radar, and camera technology may make them superfluous.
There is debate about the level at which sensor data should be fused: at the object level, or at the lower level of raw returns from the lidar and radar. Indications are that arguments for lower-level fusion are holding sway. The implication is that the data manipulation necessary for this kind of system necessitates a powerful central processor rather than several smaller units distributed around the vehicle. The centralized processor, dubbed a unified controller by autonomous-vehicle practitioners, takes care of sensor fusion for a variety of tasks. These tasks include detecting and tracking pedestrians, vehicles, lanes, and traffic, as well as path prediction, vehicle control, and managing the HMI.
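A minimal flavor of what low-level fusion involves: raw returns from each sensor are first transformed out of that sensor’s mounting frame into a common vehicle frame, so the unified controller works with one consistent point set before any objects are hypothesized. The mounting offsets below are invented.

```python
import math

def to_vehicle_frame(points, mount_x, mount_y, mount_yaw_rad):
    """Rotate and translate raw (x, y) sensor returns from a sensor's
    mounting frame into the common vehicle frame, the first step in
    low-level fusion."""
    c, s = math.cos(mount_yaw_rad), math.sin(mount_yaw_rad)
    return [(mount_x + c * x - s * y, mount_y + s * x + c * y) for x, y in points]

# Hypothetical mountings: roof lidar at the vehicle origin, front radar 3.5 m ahead
lidar_pts = to_vehicle_frame([(20.0, 1.0)], 0.0, 0.0, 0.0)
radar_pts = to_vehicle_frame([(16.5, 1.0)], 3.5, 0.0, 0.0)
fused_cloud = lidar_pts + radar_pts  # one consistent point set for the unified controller
print(fused_cloud)  # both returns land near (20.0, 1.0) in the vehicle frame
```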
V2X
Autonomous vehicles have a communication sensor suite that normally includes dedicated short-range communication (DSRC) technology and cellular LTE connectivity. Each has a different purpose. Cellular phone technology is envisioned as a means for downloading and updating maps, providing corrections to the GPS receiver, and similar tasks.
DSRC uses 75 MHz of spectrum around the 5.9-GHz band and is based on the 802.11p standard. Consequently, it can make use of relatively inexpensive Wi-Fi chipsets and has a range of 300 m or more. Its main use now is to warn about hazards around the car.
But 802.11p is seen as particularly useful for both vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communications because it supports low-latency, secure transmissions and can handle the rapid and frequent handovers that characterize a vehicular environment. Adverse weather conditions also generally don’t cause problems.
Expectations are that future versions of DSRC will handle vehicle-to-pedestrian (V2P) collision warning, V2V platooning through cooperative adaptive cruise control, V2I weather alerts, and more. The DOT has identified more than 40 V2I applications, such as paying for parking and tolls wirelessly, identifying when a car approaches a curve too quickly and alerting the driver, adjusting traffic signals to accommodate first responders in an emergency, and alerting drivers to conditions such as road construction.
Of course, anticipating every possible eventuality in those scenarios can get complicated. That’s why autonomous-vehicle developers are investigating the use of machine learning and implementing sensor perception through neural networks. The appeal of machine learning is that it could potentially bypass the complicated math traditionally used in object and feature detection. It can be faster to implement and can outperform classical methods if given enough training data. The challenges, however, include verifying and validating the machine-learning system. Practitioners point out that when a neural-network system fails, it’s not always possible to pinpoint why. Thus debugging can be problematic.
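For a sense of what a neural-network perception component looks like in code, the PyTorch sketch below defines a toy convolutional classifier that labels small camera crops as background, pedestrian, or vehicle. The class list, input size, and architecture are arbitrary choices for illustration, and the verification and debugging concerns above apply to it just as they would to a production network.

```python
import torch
import torch.nn as nn

# Toy perception network: classify 64x64 RGB camera crops (architecture is arbitrary)
classes = ["background", "pedestrian", "vehicle"]

model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 64 -> 32
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 32 -> 16
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, len(classes)),
)

crop = torch.randn(1, 3, 64, 64)    # stand-in for a camera crop
scores = model(crop)                # raw class scores (logits); the network is untrained here
print(classes[scores.argmax(dim=1).item()])
```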
This is not exactly the warm, fuzzy feeling developers might hope for when designing vehicles that could potentially run down people in crosswalks.