A team of researchers at the Georgia Institute of Technology has proposed a framework for aggressive autonomous driving that uses wheel speed sensors, IMU sensors, and a monocular camera. According to the researchers, the framework combines deep-learning-based road detection, model predictive control (MPC), and particle filters, and is detailed in a paper appearing on arXiv.
The researchers chose to study aggressive driving because understanding autonomous vehicles at their limits informs the collision-avoidance and safety capabilities such vehicles need, TechXplore reported, citing researcher Paul Drews.
Aggressive driving refers to instances in which a vehicle operates at high speeds or with large sideslip angles, as in rally racing. In an earlier study, the researchers explored aggressive driving using high-quality GPS for global position estimation. However, that approach had significant limitations: it required high-cost sensors and could not handle GPS-denied locations.
To cope with these limitations, the research team developed a vision-based driving solution that regressed a local cost map from monocular camera images and fed it to an MPC-based controller. This produced promising results, but evaluating each input frame separately introduced other important learning challenges: the restricted field of view and low vantage point of the camera installed on the vehicle made it difficult to produce the cost maps required at high speed.
Drews said the main aim of the new project is to understand how vision can serve as the key sensor for aggressive driving, allowing the team to investigate algorithms that bring perception and control together.
In the recent study, the team introduced an alternative approach to autonomous aggressive, high-speed driving that addresses the challenges of their previous work. The team generated a local cost map with a video-based deep neural network (an LSTM model) and used its output as the measurement for a particle filter that estimates the vehicle's state.
Most importantly, the particle filter employs this learned dynamic observation model to localize the vehicle in a schematic map, while MPC drives aggressively on the basis of the resulting state estimate. The new framework enables the researchers to obtain a global position estimate from the schematic map, eliminating the need for GPS technology while also boosting the accuracy of the predicted cost maps.
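To make the localization step concrete, the sketch below shows a minimal particle-filter predict/update cycle in Python. It is an illustration only, not the authors' implementation: a unicycle motion model stands in for the vehicle dynamics, the network's predicted cost map is reduced to a single scalar observation compared against a `schematic_map(x, y)` function, and every name and parameter here is hypothetical.

```python
import numpy as np

def particle_filter_step(particles, weights, control, costmap_obs, schematic_map,
                         dt=0.1, motion_noise=0.05, obs_sigma=0.1):
    """One predict/update cycle of a particle filter that localizes in a
    schematic track map using a predicted local cost map as the measurement.

    particles:     (N, 3) array of [x, y, heading] hypotheses.
    control:       (speed, turn rate) applied since the last step.
    costmap_obs:   scalar cost observed at the vehicle's position, standing in
                   for the network's predicted local cost map.
    schematic_map: function (x, y) -> expected cost at that map location.
    """
    v, omega = control
    # Predict: propagate each particle through a simple unicycle motion model.
    particles[:, 2] += omega * dt
    particles[:, 0] += v * dt * np.cos(particles[:, 2])
    particles[:, 1] += v * dt * np.sin(particles[:, 2])
    particles[:, :2] += np.random.normal(0.0, motion_noise, size=(len(particles), 2))

    # Update: weight each hypothesis by how well the schematic map's cost at
    # that pose matches the cost observed in the predicted local cost map.
    expected = schematic_map(particles[:, 0], particles[:, 1])
    weights *= np.exp(-0.5 * ((expected - costmap_obs) / obs_sigma) ** 2)
    weights += 1e-300                      # guard against total weight collapse
    weights /= weights.sum()

    # Resample when the effective sample size drops below half the particles.
    if 1.0 / np.sum(weights ** 2) < len(particles) / 2:
        idx = np.random.choice(len(particles), size=len(particles), p=weights)
        particles = particles[idx]
        weights = np.full(len(particles), 1.0 / len(particles))

    # State estimate handed to the MPC controller: weighted mean of particles.
    estimate = np.average(particles, axis=0, weights=weights)
    return particles, weights, estimate
```

In the actual system the observation model compares whole predicted cost-map patches against the schematic map rather than a single scalar, but the weighting-and-resampling structure is the same.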
By using monocular images to learn an intermediate cost map, the team takes a direct approach to autonomous racing, according to Drews: the intermediate representation can be used either by the particle filter or by MPC to achieve aggressive driving without GPS-based state estimation.
Evaluating the framework on the AutoRally platform, the researchers found that a local cost map regressed from monocular images can be used either directly or for localization, enabling aggressive performance at the limits of handling.
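The control side can be sketched in the same spirit. Below is a generic sampling-based MPC loop that rolls out random steering sequences through a simple kinematic model and scores them against a cost map; it is an assumption-laden stand-in, not the authors' actual controller, and `costmap`, the vehicle model, and all parameters are illustrative.

```python
import numpy as np

def sample_mpc(state, costmap, horizon=15, samples=200, dt=0.1, speed=5.0, seed=0):
    """Pick a steering command by sampling control sequences and scoring their
    rollouts against a cost map (low cost on the track, high cost off it).

    state:   [x, y, heading] estimate, e.g. from the particle filter above.
    costmap: function (x, y) -> penalty at a position.
    Returns the first control of the cheapest sequence and that sequence's cost.
    """
    rng = np.random.default_rng(seed)
    steer = rng.normal(0.0, 0.3, size=(samples, horizon))  # candidate steering rates
    x = np.full(samples, float(state[0]))
    y = np.full(samples, float(state[1]))
    h = np.full(samples, float(state[2]))
    total = np.zeros(samples)
    for t in range(horizon):
        # Kinematic rollout: integrate heading, then position, for all samples
        # at once, accumulating the cost-map penalty at each new position.
        h = h + steer[:, t] * dt
        x = x + speed * dt * np.cos(h)
        y = y + speed * dt * np.sin(h)
        total += costmap(x, y)
    best = int(np.argmin(total))
    # Execute only the first control, then replan from the next state estimate.
    return steer[best, 0], total[best]
```

Replanning from the newest state estimate every step is what makes this a receding-horizon (MPC-style) controller rather than an open-loop plan.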
The framework shows promise for cost-effective, robust aggressive autonomous driving on complex tracks, TechXplore reported.