What Twitter and Facebook Can Teach Us About Machine Learning

Facebook and Twitter are far ahead of most other companies when it comes to using machine learning to improve their business models. And while their practices haven't always been popular with end users, there is much to learn from both companies about what to do, and what not to do, when scaling and applying data analytics.

Get the Data You Need First

Facebook seemingly uses machine learning for everything: content detection and content integrity, sentiment analysis, speech recognition, and fraudulent-account detection, as well as user-facing features like facial recognition, language translation, and content search. Facebook manages all of this while offloading some computation to edge devices in order to reduce latency.
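For teams that want to try the same tactic, post-training quantization is one standard way to shrink a model enough to run on-device. The sketch below uses PyTorch's dynamic quantization on a stand-in model; it illustrates the general technique, not Facebook's actual production pipeline:

```python
import torch

# A stand-in model; any torch.nn.Module with Linear layers works.
model = torch.nn.Sequential(
    torch.nn.Linear(128, 64),
    torch.nn.ReLU(),
    torch.nn.Linear(64, 2),
)

# Post-training dynamic quantization: weights are stored as int8 and
# activations are quantized on the fly. The result is smaller and
# faster on CPU, which is what matters on a low-end phone.
quantized = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 128)
print(quantized(x))  # same interface as the original model
```

The quantized module is a drop-in replacement for the original, so the serving code does not need to change; only the deployment artifact gets lighter.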

This lets users with older mobile devices, which make up more than half of the global market, access the platform more easily. It is also an excellent tactic for legacy systems with limited computing power: run lightweight inference locally and let the cloud handle the torrent of data. Cloud-based systems can be improved further by attaching accessible metadata that customizes, corrects, and contextualizes real-world data.
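As a minimal sketch of that last idea, here is what enriching a raw event with metadata before shipping it to the cloud might look like. The schema and field values are hypothetical, chosen only to illustrate the customize, correct, and contextualize roles that metadata can play:

```python
import json
import time
from dataclasses import dataclass, asdict

# Hypothetical event schema; the field names are illustrative,
# not any platform's actual wire format.
@dataclass
class EnrichedEvent:
    payload: dict        # raw data captured on the device
    device_model: str    # context: the hardware the event came from
    locale: str          # customize: lets the cloud model localize results
    captured_at: float   # correct: the server can re-order late arrivals

def enrich(raw: dict) -> str:
    """Wrap a raw on-device event with metadata for the cloud pipeline."""
    event = EnrichedEvent(
        payload=raw,
        device_model="low-end-android",  # assumed value for this sketch
        locale="en_US",
        captured_at=time.time(),
    )
    return json.dumps(asdict(event))

print(enrich({"action": "search", "query": "machine learning"}))
```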

Start by thinking about what data is really needed, and which of those datasets matter most. Then start small. Too often, teams get caught up in the rush to do it now and do it big, and that mindset crowds out the real objective: do it right. Focus on modest efforts that work, then expand the application to cover more datasets or to adapt more quickly to changing parameters. By focusing on early successes and scaling upward, you can avoid the failures that come from taking on too much data too quickly; and even if a failure does happen, the momentum of smaller wins will carry the project forward.
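To make "start small" concrete, here is a minimal PyTorch sketch. The dataset, model, and hyperparameters are all placeholders; the point is simply to prove the training pipeline end to end on a small slice of data before scaling up:

```python
import torch
from torch.utils.data import DataLoader, Subset, TensorDataset

# Placeholder dataset: swap in the real data once the pipeline works.
full_data = TensorDataset(
    torch.randn(10_000, 20), torch.randint(0, 2, (10_000,))
)

# Start small: prove the loop on 1% of the data before scaling up.
pilot = Subset(full_data, range(100))
loader = DataLoader(pilot, batch_size=16, shuffle=True)

model = torch.nn.Sequential(
    torch.nn.Linear(20, 32), torch.nn.ReLU(), torch.nn.Linear(32, 2)
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = torch.nn.CrossEntropyLoss()

for epoch in range(5):
    for x, y in loader:
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()
    print(f"epoch {epoch}: loss {loss.item():.4f}")
```

Once the pilot run trains cleanly and the loss behaves sensibly, the same loop scales to the full dataset by widening the Subset, with no structural changes.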

#artificial intelligence #pytorch #machine learning
