First impressions of TensorFlow Dev Summit, 2019

The 2019 edition of the TensorFlow Dev Summit got off to a great start on a rather cold and rainy morning in Sunnyvale, CA. This time around, the TensorFlow team has made certain visible changes from the previous year’s edition:

  1. The summit is now a 2-day event, with the first day full of talks and demos and the second day focusing on hands-on sessions where attendees can code along with TensorFlow team members and engage them on the real-life problems they are solving.
  2. The event has a much larger invitee list, and this is visible from the venue (Google Event Center).
  3. Talks are more technical, with a lot of code snippets and live demos using Google Colab.

Let’s go over the 15 key takeaways from a full day of talks and the hands-on sessions. The TensorFlow team has been fairly quick this time to put the videos up on their YouTube channel, so you can view all the talks there.

15 Key Takeaways

  1. TensorFlow, in collaboration with Udacity and Coursera, has launched two new courses.

  2. The announcement of TensorFlow World, a conference where engineers, innovators, executives, and product managers can come and discuss their product/service offerings powered by TensorFlow.

  3. One of my favorite contributions to the TensorFlow ecosystem is TensorFlow Extended (TFX), and the TFX team has certainly delivered what it promised in the 2018 edition. All the various components in TFX (DataValidator, Trainer, ModelValidator, Pusher) now work together for an end-to-end ML offering. Bonus: TFX now integrates with open source orchestrators such as Airflow and Kubeflow.

  4. TensorFlow has stepped into the hardware space with the launch of the Raspberry Pi-style Coral Dev Board powered by the Edge TPU ML accelerator. Priced at $150, it is more expensive than a Raspberry Pi but cheaper than an Nvidia Jetson.

  5. The TensorFlow Lite team focused their talk on their expanding list of use-cases, from the Google Assistant to Youdao’s on-device translation service. Keeping up with the usability theme of TensorFlow 2.0, TensorFlow Lite has focused on reducing the footprint of models and making inference faster. Documentation for TensorFlow Lite has also been improved.

  6. TensorFlow is now supported by the Julia programming language (TensorFlow.jl). For a hint of why you might prefer Julia over Python for your next project, have a look at its concise syntax and fast runtime.

  7. Ever considered doing machine learning on decentralized data? Check out TensorFlow Federated.

  8. Sonnet, a high-level library built by DeepMind on top of TensorFlow, announced its support for TF 2.0.

  9. Building models too big to fit on an off-the-shelf cloud instance? Need model parallelism? Check out Mesh-TensorFlow.

There were a lot more updates and examples of how TensorFlow is being used in the industry and research. The above key takeaways capture just a glimpse of what was covered in the summit. To go over all the content, refer to TensorFlow’s YouTube channel. Consider applying to TensorFlow Dev Summit 2020 if you would like to meet some really smart people applying machine learning to real-life use-cases.

About the author: Gaurav is a data science manager at EY’s Innovation Advisory in Dublin, Ireland. His interests include building scalable machine learning systems for computer vision applications. Find more at gauravkaila.com

TensorFlow is dead, long live TensorFlow!

TensorFlow is an open source machine learning library for research and production.

If you’re an AI enthusiast and you didn’t see the big news this month, you might have just snoozed through an off-the-charts earthquake. Everything is about to change!

Last year I wrote 9 Things You Need To Know About TensorFlow… but there’s one thing you need to know above all others: TensorFlow 2.0 is here!

The revolution is here! Welcome to TensorFlow 2.0.

It’s a radical makeover. The consequences of what just happened are going to have major ripple effects on every industry, just you wait. If you’re a TF beginner in mid-2019, you’re extra lucky because you picked the best possible time to enter AI (though you might want to start from scratch if your old tutorials have the word “session” in them).

In a nutshell: TensorFlow has just gone full Keras. Those of you who know those words just fell out of your chairs. Boom!

A prickly experience

I doubt that many people have accused TensorFlow 1.x of being easy to love. It’s the industrial lathe of AI… and about as user-friendly. At best, you might feel grateful for being able to accomplish your AI mission at mind-boggling scale.

You’d also attract some raised eyebrows if you claimed that TensorFlow 1.x was easy to get the hang of. Its steep learning curve made it mostly inaccessible to the casual user, but mastering it meant you could talk about it the way you’d brag about that toe you lost while climbing Everest. Was it fun? No, c’mon, really: was it fun?

Hugging a cactus is what TensorFlow 1.x tutorials used to feel like for everybody, so you’re not the only one.

TensorFlow’s core strength is performance. It was built for taking models from research to production at massive scale and it delivers, but TF 1.x made you sweat for it. Persevere and you’d be able to join the ranks of ML practitioners who use it for incredible things, like finding new planets and pioneering medicine.

What a pity that such a powerful tool was in the hands of so few… until now.

Don’t worry about what tensors are. We just called them (generalized) matrices where I grew up. The name TensorFlow is a nod to the fact that TF’s very good at performing distributed computations involving multidimensional arrays (er, matrices), which you’ll find handy for AI at scale (http://bit.ly/quaesita_emperor). Image source: http://karlstratos.com/drawings/drawings.html

Cute and cuddly Keras

Now that we’ve covered cactuses, let’s talk about something you’d actually want to hug. Overheard at my place of work: “I think I have an actual crush on Keras.”

Keras is a specification for building models layer-by-layer that works with multiple machine learning frameworks (so it’s not a TF thing), but you might know it as a high level API accessed from within TensorFlow as tf.keras.

Incidentally, I’m writing this section on Keras’ 4th birthday (Mar 27, 2019) for an extra dose of warm fuzzies.

Keras was built from the ground up to be Pythonic and always put people first — it was designed to be inviting, flexible, and simple to learn.

Why don’t we have both?

Why must we choose between Keras’s cuddliness and traditional TensorFlow’s mighty performance? Why don’t we have both?

Great idea! Let’s have both! That’s TensorFlow 2.0 in a nutshell.

This is TensorFlow 2.0. You can mash those orange buttons yourself at http://bit.ly/tfoview.

The usability revolution

Going forward, Keras will be the high-level API for TensorFlow, and it is being extended so that you can use all the advanced features of TensorFlow directly from tf.keras.
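
To make that concrete, here’s a minimal sketch of what building and training a model looks like through tf.keras in 2.0; the dataset (MNIST) and the layer sizes are just illustrative choices, not anything prescribed by the release:

```python
import tensorflow as tf

# Toy dataset purely for illustration.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

# No sessions, no placeholders -- just tf.keras, with full access to TF underneath.
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(10, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

model.fit(x_train, y_train, epochs=5)
model.evaluate(x_test, y_test)
```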

In the new version, everything you’ve hated most about TensorFlow 1.x gets the guillotine. Having to perform a dark ritual just to add two numbers together? Dead. TensorFlow Sessions? Dead. A million ways to do the exact same thing? Dead. Rewriting code if you switch hardware or scale? Dead. Reams of boilerplate to write? Dead. Horrible unactionable error messages? Dead. Steep learning curve? Dead.
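
If you never saw the dark ritual, here’s a hedged before-and-after sketch of adding two numbers: the 1.x version is shown in comments (in 2.0 it only survives through the compat module), while the 2.0 version is just ordinary Python.

```python
import tensorflow as tf

# TensorFlow 1.x style: build a graph, open a Session, feed values, run it.
#
#   a = tf.compat.v1.placeholder(tf.float32)
#   b = tf.compat.v1.placeholder(tf.float32)
#   total = a + b
#   with tf.compat.v1.Session() as sess:
#       print(sess.run(total, feed_dict={a: 1.0, b: 2.0}))

# TensorFlow 2.0 style: ops run immediately, like ordinary Python.
a = tf.constant(1.0)
b = tf.constant(2.0)
print(a + b)  # tf.Tensor(3.0, shape=(), dtype=float32)
```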

You’re expecting the obvious catch, aren’t you? Worse performance? Guess again! We’re not giving up performance.

TensorFlow is now cuddly and this is a game-changer, because it means that one of the most potent tools of our time just dropped the bulk of its barriers to entry. Tech enthusiasts from all walks of life are finally empowered to join in because the new version opens access beyond researchers and other highly-motivated folks with an impressive pain threshold.

Everyone is welcome. Want to play? Then come play!

Eager to please

In TensorFlow 2.0, eager execution is now the default. You can take advantage of graphs even in eager context, which makes your debugging and prototyping easy, while the TensorFlow runtime takes care of performance and scaling under the hood.

Wrangling graphs in TensorFlow 1.x (declarative programming) was disorienting for many, but it’s all just a bad dream now with eager execution (imperative programming). If you skipped learning it before, so much the better. TF 2.0 is a fresh start for everyone.
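
As a rough sketch of how the two worlds coexist: decorate an ordinary Python function with tf.function and the runtime traces it into a graph for you, while everything around it stays eager. The function below is made up purely for illustration.

```python
import tensorflow as tf

@tf.function  # traced into a TensorFlow graph behind the scenes
def scaled_sum(x, y):
    return tf.reduce_sum(x) + 2.0 * tf.reduce_sum(y)

x = tf.ones((3, 3))
y = tf.ones((3, 3))

# Called like a normal Python function, executed as a graph under the hood.
print(scaled_sum(x, y))  # tf.Tensor(27.0, shape=(), dtype=float32)
```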

As easy as one… one… one…

Many APIs got consolidated across TensorFlow under Keras, so now it’s easier to know what you should use when. For example, now you only need to work with one set of optimizers and one set of metrics. How many sets of layers? You guessed it! One! Keras-style, naturally.

In fact, the whole ecosystem of tools got a spring cleaning, from data processing pipelines to easy model exporting to TensorBoard integration with Keras, which is now a… one-liner!
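
Roughly what that consolidation looks like in practice (the toy data and log directory here are placeholders): one family of optimizers, losses, and metrics under tf.keras, with TensorBoard wired in through a single callback.

```python
import numpy as np
import tensorflow as tf

# Toy data, purely for illustration.
x_train = np.random.random((256, 32)).astype("float32")
y_train = np.random.randint(0, 10, size=(256,))

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(32,)),
    tf.keras.layers.Dense(10, activation="softmax"),
])

# One set of optimizers, one set of losses, one set of metrics -- all Keras-style.
model.compile(optimizer=tf.keras.optimizers.Adam(),
              loss=tf.keras.losses.SparseCategoricalCrossentropy(),
              metrics=[tf.keras.metrics.SparseCategoricalAccuracy()])

# TensorBoard integration with Keras really is a one-liner: a single callback.
tensorboard_cb = tf.keras.callbacks.TensorBoard(log_dir="./logs")

model.fit(x_train, y_train, epochs=2, callbacks=[tensorboard_cb])
```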

There are also great tools that let you switch and optimize distribution strategies for amazing scaling efficiency without losing any of the convenience of Keras.
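
For instance (a sketch, assuming a single machine with one or more GPUs), switching to a mirrored distribution strategy is mostly a matter of building and compiling the model inside the strategy’s scope:

```python
import tensorflow as tf

# MirroredStrategy replicates the model across the local GPUs it can find.
strategy = tf.distribute.MirroredStrategy()

with strategy.scope():
    # Build and compile inside the scope; the Keras training code is unchanged.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation="relu", input_shape=(32,)),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])

# model.fit(...) then trains on all replicas with no further changes.
```

Other strategies (multi-worker, TPU) follow the same scope-based pattern.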

Those distribution strategies are pretty, aren’t they?

The catch!

If the catch isn’t performance, what is it? There has to be a catch, right?

Actually, the catch was your suffering up to now. TensorFlow demanded quite a lot of patience from its users while a friendly version was brewing. This wasn’t a matter of sadism. Making tools for deep learning is new territory, and we’re all charting it as we go along. Wrong turns were inevitable, but we learned a lot along the way.

The TensorFlow community put in a lot of elbow grease to make the initial magic happen, and then more effort again to polish the best gems while scraping out less fortunate designs. The plan was never to force you to use a rough draft forever, but perhaps you habituated so well to the discomfort that you didn’t realize it was temporary. Thank you for your patience!

The reward is everything you appreciate about TensorFlow 1.x made friendly under a consistent API with tons of duplicate functionality removed so it’s cleaner to use. Even the errors are cleaned up to be concise, simple to understand, and actionable. Mighty performance stays!

What’s the big deal?

Haters (who’re gonna hate) might say that much of v2.0 could be cobbled together in v1.x if you searched hard enough, so what’s all the fuss about? Well, not everyone wants to spend their days digging around in clutter for buried treasure. The makeover and clean-up are worth a standing ovation. But that’s not the biggest big deal.

The point not to miss is this: TensorFlow just announced an uncompromising focus on usability.

AI lets you automate tasks you can’t come up with instructions for. It lets you automate the ineffable. Democratization means that AI at scale will no longer be the province of a tiny tech elite.

Imagine a future where “I know how to make things with Python” and “I know how to make things with AI” are equally commonplace statements… Exactly! I’m almost tempted to use that buzzword “disruptive” here.

The great migration

We know it’s hard work to upgrade to a new version, especially when the changes are so dramatic. If you’re about to embark on migrating your codebase to 2.0, you’re not alone — we’ll be doing the same here at Google with one of the largest codebases in the world. As we go along, we’ll be sharing migration guides to help you out.

If you rely on specific functionality, you won’t be left in the lurch — except for contrib, all TF 1.x functions will live on in the compat.v1 compatibility module. We’re also giving you a script which automatically updates your code so it runs on TensorFlow 2.0. Learn more in the video below.

This video is a great resource if you’re eager to dig deeper into TF 2.0 and geek out on code snippets.
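
In the meantime, here is a sketch of both escape hatches. The file names are placeholders, and the snippet deliberately opts back into 1.x behaviour just to show compat at work:

```python
import tensorflow as tf

# Except for contrib, 1.x functionality lives on under the compat.v1 namespace.
tf.compat.v1.disable_eager_execution()                   # opt back into 1.x graph mode
x = tf.compat.v1.placeholder(tf.float32, shape=(None,))
doubled = 2.0 * x
with tf.compat.v1.Session() as sess:
    print(sess.run(doubled, feed_dict={x: [1.0, 2.0, 3.0]}))  # [2. 4. 6.]

# The automatic upgrade script ships with the TF 2.0 package and rewrites
# 1.x code for 2.0 (run from a shell; file names below are placeholders):
#
#   tf_upgrade_v2 --infile old_model.py --outfile old_model_v2.py
```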

Your clean slate

TF 2.0 is a beginner’s paradise, so it will be a downer for those who’ve been looking forward to watching newbies suffer the way you once suffered. If you were hoping to use TensorFlow for hazing new recruits, you might need to search for some other way to inflict existential horror.

Sitting out might have been the smartest move, because now’s the best time to arrive on the scene. As of March 2019, TensorFlow 2.0 is available in alpha (that’s a preview, you hipster you), so learning it now gets you ready in time for the full release that the community is gearing up for over the next quarter.

Following the dramatic changes, you won’t be as much of a beginner as you imagined. The playing field got leveled, the game got easier, and there’s a seat saved just for you. Welcome! I’m glad you’re finally here and I hope you’re as excited about this new world of possibilities as I am.

Dive in!

Check out the shiny redesigned tensorflow.org for tutorials, examples, documentation, and tools to get you started… or dive straight in with:

pip install tensorflow==2.0.0-alpha0

You’ll find detailed instructions here.
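
Once it’s installed, a quick sanity check (just a sketch, nothing official) confirms the version and that eager execution is on by default:

```python
import tensorflow as tf

print(tf.__version__)          # should report a 2.0.0-alpha build
print(tf.executing_eagerly())  # True -- eager execution is the default
print(tf.add(1, 2))            # tf.Tensor(3, shape=(), dtype=int32)
```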

TensorFlow Full Course - TensorFlow Tutorial For Beginners

This "TensorFlow Full Course - TensorFlow Tutorial For Beginners" video is a complete guide to Deep Learning using TensorFlow. It covers in-depth knowledge about Deep Leaning, Tensorflow & Neural Networks

Below are the topics covered in this TensorFlow tutorial:

2:07 Artificial Intelligence

2:21 Why Artificial Intelligence?

5:27 What is Artificial Intelligence?

5:55 Artificial Intelligence Domains

6:14 Artificial Intelligence Subsets

11:17 Machine Learning

12:32 Types of Machine Learning

12:39 Machine Learning Use Case

15:55 Supervised Learning

18:50 Types of Supervised Learning

20:17 Use Case 2

21:28 Linear Regression

26:34 Linear Regression Demo

38:39 Regression Application

40:14 Building Logistic Regression Model

40:24 Logistic Regression Use Case

46:55 Analysing Performance Of The Model
	
49:40 Calculating The Accuracy
	
51:31 Logistic Regression Demo

1:01:38 Clustering Use Case

1:05:12 How Clustering works?

1:05:12 Initialization
	
1:06:07 Cluster Assignment
	
1:07:37 Move Centroid
	
1:08:27 Optimization
	
1:08:32 Convergence
	
1:09:22 How to find optimal solution?
	
1:09:30 Choosing the number of cluster

1:16:35 Reinforcement Learning

1:17:35 Limitation of Machine Learning

1:22:00 How Deep Learning Solves the Issue?

1:25:05 What is Deep Learning?

1:26:35 Applications of Deep Learning

1:29:14 What is a Tensor?

1:29:48 Rank of Tensors

1:32:13 Shape of a Tensor

1:33:58 What is TensorFlow?

1:35:38 TensorFlow Code Basics

1:36:09 TensorFlow Basic Demo

2:00:33 Activation or Transformation Function

2:01:28 Linear
	
2:02:18 Unit Step
	
2:03:23 Sigmoid
	
2:04:23 Tanh
	
2:05:18 ReLU
	
2:05:53 Softmax

2:07:03 Activation Function Demo

2:10:43 How Neuron Works?

2:13:08 What is a Perceptron?

2:15:53 Role of Weights & Bias

2:16:18 Perceptron Example

2:22:23 Training a Perceptron

2:22:48 Perceptron Learning Algorithm

2:26:08 Training Network Weights

2:39:43 Reducing The Loss

2:43:18 Perceptron Learning Algorithm Demo

TensorFlow Tutorial - Modelling with TensorFlow 2.0

In the previous module, you saw the power of the tf.data APIs in TensorFlow 2.0 for reading complex data types from any storage.
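
As a hedged sketch of that kind of input pipeline (the directory layout, file pattern, and image size below are assumptions, not the workshop’s actual dataset):

```python
import tensorflow as tf

IMG_SIZE = 224  # assumed input size; match whatever your base model expects

def load_image(path):
    # Read a raw file, decode it, resize it, and scale pixel values into [0, 1].
    image = tf.io.read_file(path)
    image = tf.image.decode_jpeg(image, channels=3)
    image = tf.image.resize(image, (IMG_SIZE, IMG_SIZE))
    return image / 255.0

# Assumed layout: data/train/<label>/<image>.jpg; labels would come from the folder names.
dataset = (tf.data.Dataset.list_files("data/train/*/*.jpg")
           .map(load_image, num_parallel_calls=tf.data.experimental.AUTOTUNE)
           .batch(32)
           .prefetch(tf.data.experimental.AUTOTUNE))
```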

In this module, you will see an example of data ETL from raw images into input tensors, then apply transfer learning (which is how a lot of future models for end users will be built at companies) to build an emotion classification model. This use case would be really cool for deployment in the next module, where you can run inference on your own facial expressions. Note, however, that at the moment TF 2.0 models converted to TFLite are incompatible with the devices. Follow us to get the latest updates post-workshop, once the TensorFlow team has fixed the issue.

What you'll learn

In this lab, you will learn to:

  • Examine and understand the data (not exhaustive)
  • Build an example input pipeline (there are many ways to build an input pipeline)
  • Compose your model (a minimal sketch follows after this list):
  • Choose a suitable pre-trained model
  • Choose the sub-model you will depend on to build your model
  • Append the suitable classification layers at the end
  • Train your model
  • Evaluate your model
  • Tune and/or update the architecture of your model
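
Here is a minimal transfer-learning sketch along those lines, assuming 224×224 RGB inputs and a hypothetical set of 7 emotion classes; MobileNetV2 stands in for whichever pre-trained base the workshop actually uses:

```python
import tensorflow as tf

NUM_CLASSES = 7            # assumed number of emotion labels
IMG_SHAPE = (224, 224, 3)  # assumed input size

# Pre-trained base with ImageNet weights, used as a frozen feature extractor.
base_model = tf.keras.applications.MobileNetV2(input_shape=IMG_SHAPE,
                                               include_top=False,
                                               weights="imagenet")
base_model.trainable = False

# Append a classification head for the emotion labels.
model = tf.keras.Sequential([
    base_model,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# model.fit(train_dataset, validation_data=val_dataset, epochs=5)
# where train_dataset / val_dataset are tf.data pipelines like the sketch above,
# with integer emotion labels attached.
```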