Got $50? Turn Your Phone Into a Programmable Robot

The OpenBot initiative aims to democratize edge robotics using devices you already have.

Intel Labs just revealed its OpenBot project, which aims to provide low-cost, real-time, advanced robotics and inspire mass community adoption in the field. The low cost is achieved through a combination of mass-produced sensors, a 3D-printed body, and your very own smartphone.

I love the design of this kind of system. Companies have been so focused on getting their data and logic into the cloud that they often forget about the latency this introduces. In some cases, edge devices have been stripped of all but the minimal processing power needed to read sensor measurements and upload them.

The optimal approach (for many industries) is one that handles as much as feasible at the edge while resorting to more capable devices for the rest. Why is that? The edge is where all the action is. The edge is the consumer activity you care so much about. The edge is…well, where the money is made. Simply shoveling data back and forth to centralized cloud data-crunching services does not scale nearly as well as distributing the computational workload throughout your ecosystem. For all but very-low-power IoT applications, where you want your edge devices to run for months on a single battery charge, you want your edge to accomplish more.

A Less Edgy Edge

Notice that I used the phrase “more capable devices” instead of “your cloud” for managing the remainder of your non-edge computation. Your architecture is not just a binary division between edge and cloud. You can have a “middleware” tier of progressively more powerful devices that still sit much closer to your first-hop edge devices; these devices lend extra computing power to your first-hop devices while still letting you avoid the round-trip to your cloud and back. This is where the smartphone comes into play in the OpenBot architecture.

Block diagram of the {Phone, Arduino} OpenBot application setup, as presented by the authors in their paper.

Efforts like TensorFlow Lite have made great strides in bringing machine learning to the edge. In the near future, algorithms and hardware may become efficient enough that there is no question about whether embedded devices can run complex neural-network inference. Until then, however, nothing prevents us from using devices that sit between our first-hop devices and our backend cloud, devices like our beloved smartphones.
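
To make this concrete, here is a minimal sketch of what on-device inference with TensorFlow Lite looks like in Python. The model file name and the random input frame are placeholders for illustration, not artifacts from the OpenBot project.

```python
# Minimal TensorFlow Lite inference sketch.
# Assumption: "driving_policy.tflite" is a placeholder model file,
# not OpenBot's actual network.
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="driving_policy.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Fake a single input frame with the shape the model expects.
frame = np.random.rand(*input_details[0]["shape"]).astype(np.float32)

interpreter.set_tensor(input_details[0]["index"], frame)
interpreter.invoke()

prediction = interpreter.get_tensor(output_details[0]["index"])
print(prediction)
```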

In this setup, you retain all traditional embedded actions (mainly sensing & actuation) on your edge device while offloading the more computationally intensive work (machine-learning inference, state estimation & updates, etc.) to the “local workhorse” of your smartphone. This may seem like a minor step, but it solves two crucial problems: it avoids the expensive round-trip to your cloud, and it avoids slow or intractable on-chip computation on your edge device. The edge device sees the smartphone as a fast offboard computer, and your cloud service is none the wiser (and hopefully auto-scaling down now 🤑).
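
As a rough illustration of that split (a sketch of the pattern, not the actual OpenBot code, which pairs Arduino firmware with an Android app), the smartphone-side loop might read a line of sensor values from the microcontroller over a serial link, do the heavy computation locally, and write a motor command back. The port name, baud rate, and message format below are assumptions made for the example.

```python
# Illustrative "local workhorse" loop. Assumptions: pyserial is installed,
# the microcontroller streams comma-separated sensor values at 115200 baud
# on /dev/ttyUSB0, and accepts "left,right\n" motor commands in return.
import serial

def compute_motor_command(sensor_values):
    # Placeholder for the computationally heavy step (e.g. neural-network
    # inference or state estimation) that the edge device offloads here.
    left, right = 0.5, 0.5
    return left, right

with serial.Serial("/dev/ttyUSB0", 115200, timeout=1) as port:
    while True:
        line = port.readline().decode(errors="ignore").strip()
        if not line:
            continue
        sensor_values = [float(v) for v in line.split(",")]
        left, right = compute_motor_command(sensor_values)
        port.write(f"{left:.2f},{right:.2f}\n".encode())
```

The point is the shape of the loop: the microcontroller never blocks on a cloud round-trip, and the phone never has to drive the actuators directly.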
