The autonomous tech world has 2 giants: Tesla & Waymo.

In my last article, I broke down Tesla’s computer vision system.

👉 You can find that post here.

Today, we’ll take a look at what Waymo, Google’s self-driving car division, is doing under the hood.

Waymo has driven more than 20 million miles on public roads in over 25 cities. They have also driven tens of billions of miles in simulation (as we’ll see later in the article). Additionally, Waymo operates a taxi service in the United States, transporting real passengers without a driver.

Given their growing presence in the real world, I want to dive deep into Waymo’s technology so you can understand what’s actually behind this giant.

As with every self-driving vehicle, Waymo builds its tech around the 4 main steps: **perception, localization, planning, and control.**

In this post, the only step I won’t talk about is control. For Waymo, prediction (which is part of planning) is another core pillar, and it will be treated independently here.
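To make this decomposition concrete, here is a minimal sketch of how such a pipeline could be wired together in Python. Every class and method name below is my own illustration of the general idea, not Waymo’s actual code or API.

```python
# Illustrative sketch of a classic self-driving pipeline.
# All names are hypothetical and do not reflect Waymo's internal software.

class SelfDrivingPipeline:
    def __init__(self, perception, localization, prediction, planner, controller):
        self.perception = perception      # detects obstacles from sensor data
        self.localization = localization  # estimates the vehicle's pose on the map
        self.prediction = prediction      # forecasts how obstacles will move
        self.planner = planner            # chooses a trajectory to follow
        self.controller = controller      # turns the trajectory into steering/throttle

    def step(self, sensor_data):
        obstacles = self.perception.detect(sensor_data)
        pose = self.localization.estimate(sensor_data)
        forecasts = self.prediction.forecast(obstacles, pose)
        trajectory = self.planner.plan(pose, obstacles, forecasts)
        return self.controller.actuate(trajectory)
```

The point is only the data flow: sensor readings go in, perception and localization describe the world, prediction and planning decide what to do, and control acts on it.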

Let’s start with perception.


Before that: I share content every day. Leave your email and receive daily content that will help you get started in AI & autonomous tech.

Perception

The core component of most robotics systems is the perception task. In Waymo’s case, perception covers both estimating obstacles and localizing the self-driving vehicle.
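As a rough mental model (my own illustration, not Waymo’s data format), you can picture the output of perception as a list of detected objects around the car, each with a class, a position, a size, and a confidence score:

```python
from dataclasses import dataclass

@dataclass
class DetectedObject:
    # Hypothetical fields, for illustration only.
    label: str          # e.g. "vehicle", "pedestrian", "cyclist"
    position_m: tuple   # (x, y, z) in the vehicle frame, meters
    size_m: tuple       # (length, width, height), meters
    heading_rad: float  # orientation around the vertical axis
    confidence: float   # detection score in [0, 1]

# A single perception frame is then simply a list of such objects:
frame = [
    DetectedObject("pedestrian", (12.3, -1.5, 0.0), (0.6, 0.6, 1.7), 1.57, 0.94),
    DetectedObject("vehicle", (30.0, 3.2, 0.0), (4.5, 1.9, 1.6), 0.02, 0.98),
]
```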

