If you have been browsing tutorials, running Google searches, and spending a lot of time on Stack Overflow questions about TensorFlow, you have probably noticed that there are many different ways to build a neural network model. This has been a point of confusion in TensorFlow for a long time. It is almost as if TensorFlow is still finding its way toward a coherent deep learning ecosystem. If you think about it, that is exactly what is happening, and it is perfectly normal for a library in its 2.x versions. Since TensorFlow is currently the most mature deep learning library on the market, this is about as good as it gets.

Keras-TensorFlow Relationship

A Little Background

TensorFlow’s evolution into a deep learning platform did not happen overnight. Initially, TensorFlow positioned itself as a symbolic math library for dataflow programming across a range of tasks. In other words, the value proposition TensorFlow initially offered was not a pure machine learning library. The goal was to provide an efficient math foundation so that custom machine learning algorithms built on top of it could be trained quickly and accurately.
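To give a sense of what that low-level foundation looks like, here is a minimal sketch that uses TensorFlow 2.x purely as a math library: it differentiates a simple function with tf.GradientTape. The function and the chosen value of x are arbitrary, picked only for illustration.

```python
import tensorflow as tf

# Toy example of TensorFlow as a low-level math library:
# compute the gradient of y = x^2 + 3x at x = 2.0.
x = tf.Variable(2.0)

with tf.GradientTape() as tape:
    y = x ** 2 + 3 * x

dy_dx = tape.gradient(y, x)  # dy/dx = 2x + 3 = 7.0
print(dy_dx.numpy())         # prints 7.0
```

This is the kind of building block a researcher would combine by hand to implement a custom training loop before higher-level APIs existed.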

However, repeatedly building models from scratch with low-level APIs was far from ideal. So François Chollet, a Google engineer, developed Keras as a separate high-level deep learning library. Although Keras can run on top of several backends, such as TensorFlow, Microsoft Cognitive Toolkit, Theano, or PlaidML, TensorFlow was, and still is, the backend people most commonly use with Keras.
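To illustrate how much Keras raises the level of abstraction, here is a minimal sketch of a small classifier defined with the Keras Sequential API on the TensorFlow backend. The layer sizes and the 784-dimensional input (as in flattened MNIST images) are assumptions chosen only for the example.

```python
import tensorflow as tf
from tensorflow import keras

# A small fully connected classifier defined with the Keras Sequential API.
model = keras.Sequential([
    keras.layers.Dense(64, activation="relu", input_shape=(784,)),
    keras.layers.Dense(10, activation="softmax"),
])

# Compiling wires up the optimizer, loss, and metrics in one call,
# replacing the hand-written training loop you would need with low-level ops.
model.compile(
    optimizer="adam",
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)

model.summary()
```

A model like this can then be trained with a single `model.fit(x_train, y_train, epochs=5)` call, which is precisely the convenience that made Keras so popular on top of TensorFlow.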

