In this article, you will learn binary classification with the Keras deep learning library. Keras is a Python library for deep learning that wraps efficient numerical libraries such as TensorFlow (and, historically, Theano).
Keras allows you to quickly and simply design and train neural networks and deep learning models.
In this post, you will discover how to effectively use the Keras library in your machine learning project by working through a binary classification project step-by-step.
After completing this tutorial, you will know:
How to load training data and make it available to Keras
How to design and train a neural network for tabular data
How to evaluate the performance of a Keras model with cross-validation
How data preparation schemes such as standardization can lift performance
How to tune the network topology of a model
The dataset you will use in this tutorial is the Sonar dataset.
This is a dataset that describes sonar chirp returns bouncing off different surfaces. The 60 input variables are the strength of the returns at different angles. It is a binary classification problem that requires a model to differentiate rocks from metal cylinders.
You can learn more about this dataset on the UCI Machine Learning repository. You can download the dataset for free and place it in your working directory with the filename sonar.csv.
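If you prefer to fetch the file in code, here is a minimal sketch; the UCI mirror URL below is an assumption and may move, and note that the file has no header row:
# fetch the Sonar dataset from UCI (URL is an assumption; adjust if it has moved)
import pandas as pd
url = ("https://archive.ics.uci.edu/ml/machine-learning-databases/"
       "undocumented/connectionist-bench/sonar/sonar.all-data")
pd.read_csv(url, header=None).to_csv("sonar.csv", header=False, index=False)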
It is a well-understood dataset. All the variables are continuous and generally in the range of 0 to 1. The output variable is a string, “M” for mine and “R” for rock, which will need to be converted to the integers 0 and 1.
A benefit of using this dataset is that it is a standard benchmark problem. This means that we have some idea of the expected skill of a good model. Using cross-validation, a neural network should be able to achieve a performance of around 84% with an upper bound on accuracy for custom models at around 88%.
Let’s create a baseline model and result for this problem.
You will start by importing all the classes and functions you will need.
import pandas as pd
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from scikeras.wrappers import KerasClassifier
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import StratifiedKFold
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import Pipeline
...
Now, you can load the dataset using pandas and split the columns into 60 input variables (X) and one output variable (Y). Use pandas to load the data because it easily handles strings (the output variable), whereas attempting to load the data directly using NumPy would be more difficult.
...
# load dataset
dataframe = pd.read_csv("sonar.csv", header=None)
dataset = dataframe.values
# split into input (X) and output (Y) variables
X = dataset[:,0:60].astype(float)
Y = dataset[:,60]
The output variable is string values. You must convert them into integer values 0 and 1.
You can do this using the LabelEncoder class from scikit-learn. This class will model the encoding required using the entire dataset via the fit() function, then apply the encoding to create a new output variable using the transform() function.
...
# encode class values as integers
encoder = LabelEncoder()
encoder.fit(Y)
encoded_Y = encoder.transform(Y)
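Note that LabelEncoder assigns integers in alphabetical order of the labels, so “M” (mine) becomes 0 and “R” (rock) becomes 1. You can confirm the learned mapping like this:
# inspect the mapping: classes_[i] is the label encoded as integer i
print(encoder.classes_)               # ['M' 'R'], so M=0 and R=1
print(encoder.transform(['M', 'R']))  # [0 1]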
You are now ready to create your neural network model using Keras.
You will use scikit-learn to evaluate the model with stratified k-fold cross-validation. This is a resampling technique that provides an estimate of the model's performance. It works by splitting the data into k parts and training the model on all parts except one, which is held out as a test set to evaluate performance. This process is repeated k times, and the average score across all constructed models is used as a robust estimate of performance. It is stratified, meaning that it looks at the output values and attempts to balance the number of instances from each class across the k splits of the data.
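For intuition, here is a small illustrative check, using the Sonar class counts of 111 rocks and 97 mines, showing that each stratified test fold preserves the overall class ratio:
# illustrative only: stratified folds keep the 111:97 class ratio in every split
import numpy as np
from sklearn.model_selection import StratifiedKFold
y = np.array([0] * 111 + [1] * 97)  # class counts as in the Sonar dataset
skf = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
for _, test_idx in skf.split(np.zeros(len(y)), y):
    print(np.bincount(y[test_idx]))  # roughly [11 10] in each fold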
To use Keras models with scikit-learn, you must use the KerasClassifier wrapper from the SciKeras package. This class takes a function that creates and returns your neural network model. It also takes arguments that it will pass along to the call to fit(), such as the number of epochs and the batch size.
Let’s start by defining the function that creates your baseline model. Your model will have a single, fully connected hidden layer with the same number of neurons as input variables. This is a good default starting point when creating neural networks.
The layer weights are initialized with small random values (Keras's default Glorot uniform scheme; you could pass kernel_initializer='normal' to use small Gaussian values instead). The rectifier (ReLU) activation function is used in the hidden layer. The output layer contains a single neuron in order to make predictions. It uses the sigmoid activation function to produce a probability output in the range of 0 to 1 that can easily and automatically be converted to crisp class values.
Finally, you will use the logarithmic loss function (binary_crossentropy) during training, the preferred loss function for binary classification problems. The model also uses the efficient Adam optimization algorithm for gradient descent, and accuracy metrics will be collected when the model is trained.
# baseline model
def create_baseline():
    # create model
    model = Sequential()
    model.add(Dense(60, input_shape=(60,), activation='relu'))
    model.add(Dense(1, activation='sigmoid'))
    # Compile model
    model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
    return model
Now, it is time to evaluate this model using stratified cross validation in the scikit-learn framework.
Pass the number of training epochs to the KerasClassifier, again using reasonable default values. Verbose output is also turned off, given that the model will be created ten times for the 10-fold cross validation being performed.
...
# evaluate model with standardized dataset
estimator = KerasClassifier(model=create_baseline, epochs=100, batch_size=5, verbose=0)
kfold = StratifiedKFold(n_splits=10, shuffle=True)
results = cross_val_score(estimator, X, encoded_Y, cv=kfold)
print("Baseline: %.2f%% (%.2f%%)" % (results.mean()*100, results.std()*100))
After tying this together, the complete example is listed below.
# Binary Classification with Sonar Dataset: Baseline
from pandas import read_csv
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from scikeras.wrappers import KerasClassifier
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import StratifiedKFold
# load dataset
dataframe = read_csv("sonar.csv", header=None)
dataset = dataframe.values
# split into input (X) and output (Y) variables
X = dataset[:,0:60].astype(float)
Y = dataset[:,60]
# encode class values as integers
encoder = LabelEncoder()
encoder.fit(Y)
encoded_Y = encoder.transform(Y)
# baseline model
def create_baseline():
    # create model
    model = Sequential()
    model.add(Dense(60, input_shape=(60,), activation='relu'))
    model.add(Dense(1, activation='sigmoid'))
    # Compile model
    model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
    return model
# evaluate model with standardized dataset
estimator = KerasClassifier(model=create_baseline, epochs=100, batch_size=5, verbose=0)
kfold = StratifiedKFold(n_splits=10, shuffle=True)
results = cross_val_score(estimator, X, encoded_Y, cv=kfold)
print("Baseline: %.2f%% (%.2f%%)" % (results.mean()*100, results.std()*100))
Note: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and compare the average outcome.
Running this code produces the following output showing the mean and standard deviation of the estimated accuracy of the model on unseen data.
Baseline: 81.68% (7.26%)
This is an excellent score without doing any hard work.
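If you want the fold assignment itself to be repeatable across runs, you can pass a fixed random_state to StratifiedKFold; note that the network's weight initialization will still vary unless you also seed TensorFlow:
# repeatable fold assignment (model training itself remains stochastic)
kfold = StratifiedKFold(n_splits=10, shuffle=True, random_state=7)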
It is a good practice to prepare your data before modeling.
Neural network models especially benefit from input values that are consistent in both scale and distribution.
Standardization is an effective data preparation scheme for tabular data when building neural network models. This is where the data is rescaled such that the mean value for each attribute is 0, and the standard deviation is 1. This preserves Gaussian and Gaussian-like distributions while normalizing the central tendencies for each attribute.
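In plain NumPy terms, standardization is just the following per-column computation (a sketch only; StandardScaler does this for you and, crucially, remembers the training-fold statistics):
# what standardization computes, column by column
import numpy as np
X_std = (X - X.mean(axis=0)) / X.std(axis=0)  # each column: mean 0, std 1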
You can use scikit-learn to perform the standardization of your sonar dataset using the StandardScaler class.
Rather than performing the standardization on the entire dataset, it is good practice to fit the standardization procedure on the training data within each pass of a cross-validation run and use the fitted parameters to prepare the “unseen” test fold. This makes standardization a step in model preparation within the cross-validation process. It prevents the algorithm from having knowledge of the “unseen” data during evaluation, knowledge that might otherwise leak through the data preparation scheme, such as a sharper estimate of the overall distribution.
You can achieve this in scikit-learn using a Pipeline. A pipeline chains data transforms and a final model so that the whole sequence is executed together within each pass of the cross-validation procedure. Here, you can define a pipeline with the StandardScaler followed by your neural network model.
...
# evaluate baseline model with standardized dataset
estimators = []
estimators.append(('standardize', StandardScaler()))
estimators.append(('mlp', KerasClassifier(model=create_baseline, epochs=100, batch_size=5, verbose=0)))
pipeline = Pipeline(estimators)
kfold = StratifiedKFold(n_splits=10, shuffle=True)
results = cross_val_score(pipeline, X, encoded_Y, cv=kfold)
print("Standardized: %.2f%% (%.2f%%)" % (results.mean()*100, results.std()*100))
After tying this together, the complete example is listed below.
# Binary Classification with Sonar Dataset: Standardized
from pandas import read_csv
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from scikeras.wrappers import KerasClassifier
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import StratifiedKFold
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import Pipeline
# load dataset
dataframe = read_csv("sonar.csv", header=None)
dataset = dataframe.values
# split into input (X) and output (Y) variables
X = dataset[:,0:60].astype(float)
Y = dataset[:,60]
# encode class values as integers
encoder = LabelEncoder()
encoder.fit(Y)
encoded_Y = encoder.transform(Y)
# baseline model
def create_baseline():
    # create model
    model = Sequential()
    model.add(Dense(60, input_shape=(60,), activation='relu'))
    model.add(Dense(1, activation='sigmoid'))
    # Compile model
    model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
    return model
# evaluate baseline model with standardized dataset
estimators = []
estimators.append(('standardize', StandardScaler()))
estimators.append(('mlp', KerasClassifier(model=create_baseline, epochs=100, batch_size=5, verbose=0)))
pipeline = Pipeline(estimators)
kfold = StratifiedKFold(n_splits=10, shuffle=True)
results = cross_val_score(pipeline, X, encoded_Y, cv=kfold)
print("Standardized: %.2f%% (%.2f%%)" % (results.mean()*100, results.std()*100))
Running this example provides the results below.
Note: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and compare the average outcome.
You now see a small but very nice lift in the mean accuracy.
Standardized: 84.56% (5.74%)
There are many things to tune on a neural network, such as weight initialization, activation functions, optimization procedure, and so on.
One aspect that may have an outsized effect is the structure of the network itself, called the network topology. In this section, you will look at two experiments on the structure of the network: making it smaller and making it larger.
These are good experiments to perform when tuning a neural network on your problem.
Note that there is likely a lot of redundancy in the input variables for this problem.
The data describes the same signal from different angles. Perhaps some of those angles are more relevant than others. So you can force a type of feature extraction by the network by restricting the representational space in the first hidden layer.
In this experiment, you will take your baseline model with 60 neurons in the hidden layer and reduce it by half to 30. This will pressure the network during training to pick out the most important structure in the input data to model.
You will also standardize the data as in the previous experiment with data preparation and try to take advantage of the slight lift in performance.
...
# smaller model
def create_smaller():
    # create model
    model = Sequential()
    model.add(Dense(30, input_shape=(60,), activation='relu'))
    model.add(Dense(1, activation='sigmoid'))
    # Compile model
    model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
    return model
estimators = []
estimators.append(('standardize', StandardScaler()))
estimators.append(('mlp', KerasClassifier(model=create_smaller, epochs=100, batch_size=5, verbose=0)))
pipeline = Pipeline(estimators)
kfold = StratifiedKFold(n_splits=10, shuffle=True)
results = cross_val_score(pipeline, X, encoded_Y, cv=kfold)
print("Smaller: %.2f%% (%.2f%%)" % (results.mean()*100, results.std()*100))
After tying this together, the complete example is listed below.
# Binary Classification with Sonar Dataset: Standardized Smaller
from pandas import read_csv
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from scikeras.wrappers import KerasClassifier
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import StratifiedKFold
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import Pipeline
# load dataset
dataframe = read_csv("sonar.csv", header=None)
dataset = dataframe.values
# split into input (X) and output (Y) variables
X = dataset[:,0:60].astype(float)
Y = dataset[:,60]
# encode class values as integers
encoder = LabelEncoder()
encoder.fit(Y)
encoded_Y = encoder.transform(Y)
# smaller model
def create_smaller():
    # create model
    model = Sequential()
    model.add(Dense(30, input_shape=(60,), activation='relu'))
    model.add(Dense(1, activation='sigmoid'))
    # Compile model
    model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
    return model
estimators = []
estimators.append(('standardize', StandardScaler()))
estimators.append(('mlp', KerasClassifier(model=create_smaller, epochs=100, batch_size=5, verbose=0)))
pipeline = Pipeline(estimators)
kfold = StratifiedKFold(n_splits=10, shuffle=True)
results = cross_val_score(pipeline, X, encoded_Y, cv=kfold)
print("Smaller: %.2f%% (%.2f%%)" % (results.mean()*100, results.std()*100))
Running this example provides the following result. You can see that you have a very slight boost in the mean estimated accuracy and an important reduction in the standard deviation (average spread) of the accuracy scores for the model.
Note: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and compare the average outcome.
This is a great result because you are doing slightly better with a network half the size, which also takes less time to train.
Smaller: 86.04% (4.00%)
A neural network topology with more layers offers more opportunities for the network to extract key features and recombine them in useful nonlinear ways.
You can easily evaluate whether adding more layers to the network improves performance by making another small tweak to the function used to create your model. Here, you add one new layer (one line of code) to the network, introducing another hidden layer with 30 neurons after the first hidden layer.
Your network now has the topology:
60 inputs -> [60 -> 30] -> 1 output
The idea here is that the network is given the opportunity to model all input variables before being bottlenecked and forced to halve the representational capacity, much like you did in the experiment above with the smaller network.
Instead of squeezing the representation of the inputs themselves, you have an additional hidden layer to aid in the process.
...
# larger model
def create_larger():
    # create model
    model = Sequential()
    model.add(Dense(60, input_shape=(60,), activation='relu'))
    model.add(Dense(30, activation='relu'))
    model.add(Dense(1, activation='sigmoid'))
    # Compile model
    model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
    return model
estimators = []
estimators.append(('standardize', StandardScaler()))
estimators.append(('mlp', KerasClassifier(model=create_larger, epochs=100, batch_size=5, verbose=0)))
pipeline = Pipeline(estimators)
kfold = StratifiedKFold(n_splits=10, shuffle=True)
results = cross_val_score(pipeline, X, encoded_Y, cv=kfold)
print("Larger: %.2f%% (%.2f%%)" % (results.mean()*100, results.std()*100))
After tying this together, the complete example is listed below.
# Binary Classification with Sonar Dataset: Standardized Larger
from pandas import read_csv
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from scikeras.wrappers import KerasClassifier
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import StratifiedKFold
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import Pipeline
# load dataset
dataframe = read_csv("sonar.csv", header=None)
dataset = dataframe.values
# split into input (X) and output (Y) variables
X = dataset[:,0:60].astype(float)
Y = dataset[:,60]
# encode class values as integers
encoder = LabelEncoder()
encoder.fit(Y)
encoded_Y = encoder.transform(Y)
# larger model
def create_larger():
    # create model
    model = Sequential()
    model.add(Dense(60, input_shape=(60,), activation='relu'))
    model.add(Dense(30, activation='relu'))
    model.add(Dense(1, activation='sigmoid'))
    # Compile model
    model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
    return model
estimators = []
estimators.append(('standardize', StandardScaler()))
estimators.append(('mlp', KerasClassifier(model=create_larger, epochs=100, batch_size=5, verbose=0)))
pipeline = Pipeline(estimators)
kfold = StratifiedKFold(n_splits=10, shuffle=True)
results = cross_val_score(pipeline, X, encoded_Y, cv=kfold)
print("Larger: %.2f%% (%.2f%%)" % (results.mean()*100, results.std()*100))
Running this example produces the results below.
Note: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and compare the average outcome.
You can see that you do not get a lift in the model performance. This may be statistical noise or a sign that further training is needed.
Larger: 83.14% (4.52%)
With further tuning of aspects like the optimization algorithm and the number of training epochs, it is expected that further improvements are possible. What is the best score that you can achieve on this dataset?
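As one hedged starting point for such tuning, you could wrap the pipeline from the previous example in scikit-learn's GridSearchCV; the 'mlp__' prefix targets the KerasClassifier step named above, and epochs and batch_size are constructor parameters in SciKeras, so they can be searched this way:
# illustrative grid search over training epochs and batch size
from sklearn.model_selection import GridSearchCV
param_grid = {
    'mlp__epochs': [50, 100, 150],
    'mlp__batch_size': [5, 10],
}
grid = GridSearchCV(estimator=pipeline, param_grid=param_grid, cv=kfold)
grid_result = grid.fit(X, encoded_Y)
print("Best: %.2f%% using %s" % (grid_result.best_score_ * 100, grid_result.best_params_))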
In this post, you discovered the Keras deep learning library in Python.
You learned how you can work through a binary classification problem step-by-step with Keras, specifically:
How to load and prepare data for use in Keras
How to create a baseline neural network model
How to evaluate a Keras model using scikit-learn and stratified k-fold cross-validation
How data preparation schemes such as standardization can lift performance
How experiments adjusting the network topology can lift performance
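As a final illustrative sketch (assuming the standardized pipeline defined earlier), here is how you could fit on all of the data and map integer predictions back to string labels:
# fit the final pipeline on all data and map predictions back to 'M'/'R'
pipeline.fit(X, encoded_Y)
predictions = pipeline.predict(X[:5])          # integer classes 0 and 1
print(encoder.inverse_transform(predictions))  # back to 'M' / 'R'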
Do you have any questions about deep learning with Keras or this post? Ask your questions in the comments, and I will do my best to answer.
Original article sourced at: https://machinelearningmastery.com