Computer Vision and Camera Calibration for Self-Driving Cars. An introduction to camera calibration, perspective transforms, and distortion for self-driving cars
Welcome to this Medium article. Driving a car can essentially be broken down into a three-step cycle: perceive the environment, decide what to do, and act on that decision.
Computer vision is a major part of the perception step in that cycle; by some estimates, 80% of the challenge of building a self-driving car is perception. Computer vision is the art and science of perceiving and understanding the world around us through images. In the case of self-driving cars, computer vision helps us detect lane markings, vehicles, pedestrians, and other elements of the environment in order to navigate safely.
Self-driving cars employ a suite of sophisticated sensors, but humans do the job of driving with just two eyes and one good brain; in fact, we can even do it with one eye closed. So let's take a closer look at why using cameras instead of other sensors might be an advantage in developing self-driving cars. Radar and lidar see the world in 3D, which can be a big advantage for knowing where we are relative to our environment. A camera sees in 2D, but at much higher spatial resolution than radar or lidar, high enough that it's actually possible to infer depth information from camera images. The big difference, however, comes down to cost: cameras are significantly cheaper.
It is altogether possible that self-driving cars will eventually be outfitted with just a handful of cameras and a really smart algorithm to do the driving.
For example, to steer a car, we'll need to measure how much the lane is curving. To do that, we need to map out the lane lines in our camera images after transforming them to a different perspective, one where we're looking down on the road from above. But in order to get this perspective transformation right, we first have to correct for the effect of image distortion. Hopefully the distortion we're dealing with isn't too severe, but that's the idea: cameras don't create perfect images. Some of the objects in the images, especially ones near the edges, can get stretched or skewed in various ways, and we need to correct for that. So let's jump into step one: how to undistort our distorted camera images.
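To make the distortion idea concrete, here is a minimal NumPy sketch of the radial part of the Brown-Conrady lens model, the model that camera calibration routines estimate. The coefficients `k1` and `k2` below are illustrative values I chose for the example, not the output of any real calibration.

```python
import numpy as np

def radial_distort(points, k1, k2):
    """Apply the radial part of the Brown-Conrady distortion model.

    points: Nx2 array of normalized image coordinates (origin at the
    optical center). k1, k2: radial distortion coefficients.
    """
    pts = np.asarray(points, dtype=float)
    r2 = np.sum(pts ** 2, axis=1, keepdims=True)   # squared distance from center
    factor = 1.0 + k1 * r2 + k2 * r2 ** 2          # radial scaling term
    return pts * factor

# Points far from the optical center are displaced much more than points
# near it -- which is why objects near the image edges look stretched
# or squeezed.
center = radial_distort([[0.05, 0.05]], k1=-0.3, k2=0.1)
edge = radial_distort([[0.8, 0.8]], k1=-0.3, k2=0.1)
```

In practice you would not invert this model by hand: OpenCV's `cv2.calibrateCamera` estimates the coefficients from chessboard images, and `cv2.undistort` then corrects the image for you. The sketch above just shows what those coefficients mean.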
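The "looking down on the road from above" step is a perspective transform, described by a 3x3 homography matrix. Here is a sketch of how that matrix can be solved from four point correspondences using plain NumPy; the source trapezoid (a lane converging toward the horizon in a hypothetical 1280x720 image) and the destination rectangle are made-up example coordinates, and in practice OpenCV's `cv2.getPerspectiveTransform` and `cv2.warpPerspective` do this work.

```python
import numpy as np

def perspective_matrix(src, dst):
    """Solve for the 3x3 homography H mapping four src points to dst points."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        # Each correspondence contributes two linear equations in the
        # eight unknown entries of H (H[2,2] is fixed to 1).
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y])
        b.extend([u, v])
    h = np.linalg.solve(np.array(A, dtype=float), np.array(b, dtype=float))
    return np.append(h, 1.0).reshape(3, 3)

def warp_point(H, pt):
    """Apply homography H to a single (x, y) point."""
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return (x / w, y / w)

# Hypothetical lane trapezoid in the camera image, mapped to a
# rectangle in the bird's-eye view so the lane lines become parallel.
src = [(580, 460), (700, 460), (1040, 680), (260, 680)]
dst = [(260, 0), (1040, 0), (1040, 720), (260, 720)]
H = perspective_matrix(src, dst)
```

Once `H` is known, every pixel of the road image can be remapped through it, turning the converging lane into two roughly parallel lines whose curvature is easy to measure.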