End-to-End Deep Learning approach for Autonomous Lane Navigation: Imitation Learning implemented in the Duckietown Simulator. The architecture is based on NVIDIA's proposed DAVE-2.

An ideal autonomous car is a vehicle that can sense its surroundings and react without human intervention. According to the Society of Automotive Engineers (SAE), there are six levels of driving automation, ranging from Level 0, fully manual, to Level 5, fully autonomous. Sensors are the components that make autonomous vehicles autonomous, since they are essential for correctly perceiving the environment. There are two types of sensors: exteroceptive sensors, used for sensing the environment, and proprioceptive sensors, used for sensing internal aspects of the vehicle. Exteroceptive sensors include cameras, LIDAR, radar, and sonar, whereas proprioceptive sensors include GNSS and wheel odometry.

In this work, I demonstrate that a CNN is powerful well beyond pattern recognition: it learns the entire processing pipeline required to steer a vehicle, mapping raw camera frames directly to steering commands. The work is inspired by NVIDIA's full-sized autonomous car, DAVE-2, which drove autonomously on public roads while relying only on a CNN. The same architecture is therefore implemented and tested in various environments.
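For reference, below is a minimal sketch of a DAVE-2-style network in PyTorch. The layer sizes follow the original NVIDIA paper (five convolutional layers followed by fully connected layers regressing a single steering angle); the framework choice and exact details are assumptions for illustration, not necessarily what this repository uses.

```python
import torch
import torch.nn as nn

class Dave2(nn.Module):
    """Sketch of a DAVE-2-style CNN: camera frame in, steering angle out."""
    def __init__(self):
        super().__init__()
        # Convolutional feature extractor, as described in the NVIDIA paper
        self.features = nn.Sequential(
            nn.Conv2d(3, 24, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(24, 36, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(36, 48, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(48, 64, kernel_size=3), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3), nn.ReLU(),
        )
        # Fully connected head regressing a single steering value
        self.regressor = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 1 * 18, 100), nn.ReLU(),
            nn.Linear(100, 50), nn.ReLU(),
            nn.Linear(50, 10), nn.ReLU(),
            nn.Linear(10, 1),
        )

    def forward(self, x):
        # x: normalized camera frames of shape (N, 3, 66, 200)
        return self.regressor(self.features(x))

# Quick shape check with a dummy input
model = Dave2()
steering = model(torch.randn(1, 3, 66, 200))  # -> tensor of shape (1, 1)
```

For imitation learning, such a network is typically trained with a regression loss (e.g. mean squared error) between the predicted steering and the expert driver's steering recorded in the simulator.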

#imitation-learning #self-driving-cars #machine-learning #deep-learning #duckietown
