How to Automate Tasks on GitHub With Machine Learning for Fun

A tutorial on how to build a GitHub App that predicts and applies issue labels using TensorFlow and public datasets.

Teaser: Build a model that labels issues and launch it as a product!

Motivation: The elusive, perfect machine learning problem

Our friends and colleagues who are data scientists would describe the ideal predictive modeling project as a situation where:

  • There is an abundance of data, which is already labeled or where labels can be inferred.
  • The data can be used to solve real problems.
  • The problem relates to a domain you are passionate about or the problem you want to solve is your own and you can be your first customer.
  • There is a platform where your data product can reach a massive audience, with mechanisms to gather feedback and improve.
  • You can create this with minimal expense and time, hopefully using languages and tools you are familiar with.
  • There is a way to monetize your product if it becomes successful.

The above list is aspirational, and data scientists are lucky to encounter a problem that meets all of these criteria (we feel lucky if we can find a problem that satisfies even half of them!).

Enter GH-Archive & GitHub Apps: Where Data Meets Opportunity

Today, we present a dataset, platform, and domain that we believe satisfies the criteria set forth above!

The dataset: GH-Archive.

GH-Archive logs a tremendous amount of data from GitHub by ingesting most event types from the GitHub REST API. These events are sent from GitHub to GH-Archive in JSON format, referred to as a payload. Below is an example of a payload that is received when an issue is edited:
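The exact fields are documented in GitHub's webhook reference; the below is a heavily abridged sketch of such a payload (many fields elided), shown here as a Python dict:

# A heavily abridged sketch of an issues-event payload (many fields elided).
payload = {
    "action": "edited",
    "issue": {
        "number": 11,
        "title": "Found a bug",
        "body": "I'm having a problem with this.",
        "labels": [],
    },
    "repository": {"full_name": "octocat/Hello-World"},
    "sender": {"login": "octocat"},
}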

As you can imagine, there is a large number of payloads given the number of event types and users on GitHub. Thankfully, this data is stored in BigQuery which allows for fast retrieval through a SQL interface! It is very economical to acquire this data, as Google gives you $300 when you first sign up for an account, and if you already have one, the costs are very reasonable.

Since the data is in JSON format, the syntax for un-nesting it may be a bit unfamiliar. We can use the JSON_EXTRACT function to get the data we need. Below is a toy example of how you might extract data from issue payloads:
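As an illustration, here is a minimal sketch (ours, not the exact query from the appendix) that un-nests the action, title, and body from one day of issue events using the google-cloud-bigquery client:

# A toy sketch: un-nest issue fields from GH-Archive with JSON_EXTRACT.
from google.cloud import bigquery  # pip install google-cloud-bigquery

client = bigquery.Client()

query = """
SELECT
  JSON_EXTRACT_SCALAR(payload, '$.action')      AS action,
  JSON_EXTRACT_SCALAR(payload, '$.issue.title') AS title,
  JSON_EXTRACT_SCALAR(payload, '$.issue.body')  AS body
FROM `githubarchive.day.20190201`
WHERE type = 'IssuesEvent'
LIMIT 5
"""

for row in client.query(query):
    print(row.action, row.title)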

A step-by-step explanation of how to extract GitHub issues from BigQuery can be found in the appendix section of this article. However, it is important to note that more than issue data is available: you can retrieve data for almost anything that happens on GitHub! You can even retrieve a large corpus of code from public repos in BigQuery.

The platform: GitHub Apps & The GitHub Marketplace

The GitHub platform allows you to build apps that can perform many actions, such as interacting with issues, creating repositories, or fixing code in pull requests. Since all that is required from your app is to receive payloads from GitHub and make calls to the REST API, you can write the app in any language of your choice, including Python.

Most importantly, the GitHub marketplace gives you a way to list your app on a searchable platform and charge users a monthly subscription. This is a great way to monetize your ideas. You can even host unverified free apps as a way to collect feedback and iterate.

Surprisingly, there are not many GitHub apps that use machine learning, despite the availability of these public datasets! Raising awareness of this is one of the motivations for this blog post.

An End-to-End Example: Automatically Label GitHub Issues With Machine Learning

In order to show you how to create your own apps, we will walk you through the process of creating a GitHub app that can automatically label issues. Note that all of the code for this app, including the model training steps, is located in this GitHub repository.

Step 1: Register your app & complete pre-requisites.

First, you will need to set up your development environment. Complete steps 1–4 of this article. You do not need to read the section on “The Ruby Programming Language”, or any steps beyond step 4. Make sure you set up a Webhook secret even though that part is optional.

Note that there is a difference between GitHub apps and OAuth apps. For the purposes of this tutorial, we are interested in GitHub apps. You don’t need to worry about this too much, but the distinction is good to know in case you are going through the documentation.

Step 2: Get comfortable interacting with the GitHub API with python.

Your app will need to interact with the GitHub API in order to perform actions on GitHub. It is useful to use a pre-built client in the programming language of your choice in order to make life easier. While the official docs on GitHub show you how to use a Ruby client, there are third-party clients for many other languages, including Python. For the purposes of this tutorial, we will be using the Github3.py library.

One of the most confusing aspects of interfacing with the GitHub API as an app is authentication. For the following instructions, use the curl commands, not the Ruby examples, in the documentation.

First, you must authenticate as an app by signing a JSON Web Token (JWT). Once you have signed a JWT, you may use it to authenticate as an app installation. Upon authenticating as an app installation, you will receive an installation access token which you can use to interact with the REST API.
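To make the flow concrete, here is a hedged sketch in Python using the PyJWT and requests libraries (the App ID, installation ID, and key path are placeholders):

import time
import jwt       # pip install pyjwt
import requests  # pip install requests

APP_ID = 12345             # placeholder: your app's ID
INSTALLATION_ID = 67890    # placeholder: the installation's ID

with open('private-key.pem', 'rb') as f:
    private_key = f.read()

# 1. Authenticate as the app by signing a JWT with the app's private key.
now = int(time.time())
encoded_jwt = jwt.encode(
    {'iat': now, 'exp': now + (10 * 60), 'iss': APP_ID},
    private_key, algorithm='RS256')
if isinstance(encoded_jwt, bytes):  # PyJWT < 2 returns bytes
    encoded_jwt = encoded_jwt.decode('utf-8')

# 2. Exchange the JWT for an installation access token.
resp = requests.post(
    'https://api.github.com/app/installations/%d/access_tokens' % INSTALLATION_ID,
    headers={'Authorization': 'Bearer ' + encoded_jwt,
             'Accept': 'application/vnd.github.machine-man-preview+json'})
token = resp.json()['token']  # use this token to call the REST API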

Note that authenticating as an app is done via a GET request, whereas creating an installation access token is done via a POST request. Even though this is illustrated in the example curl commands, it is a detail that we missed when getting started.

Knowing the above authentication steps is useful even though you will be using the Github3.py library, as there may be routes that are not supported that you may want to implement yourself using the requests library. That was the case for us, so we ended up writing a thin wrapper around the Github3.py library, called mlapp, to help us interact with issues, which is defined here.

Below is code that can be used to create an issue, make a comment, and apply a label. This code is also available in this notebook.
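For reference, a minimal sketch of those three actions with Github3.py looks like this (the token, owner, and repo are placeholders):

from github3 import login

# Authenticate with a token (an app would use the installation token from above)
gh = login(token='<installation-access-token>')

# Create an issue in a placeholder owner/repo
repo = gh.repository('your-org', 'your-repo')
issue = repo.create_issue(title='A test issue created by our bot',
                          body='This issue was opened programmatically.')

# Make a comment, then apply a label
issue.create_comment('Hello from the bot!')
issue.add_labels('question')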

You can see the issue created by this code here.

Step 3: Acquire and prepare the data.

As mentioned previously, we can use GH-Archive hosted on BigQuery to retrieve examples of issues. Additionally, we can retrieve the labels that people manually applied to each issue. Below is the query we used to build a Pareto chart of all of these labels:
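The gist of that query (a hedged reconstruction, not the exact SQL we ran) is to count 'labeled' events by label name:

# A hedged sketch: count manually applied labels, reusing the client above.
query = """
SELECT
  JSON_EXTRACT_SCALAR(payload, '$.label.name') AS label,
  COUNT(*) AS n
FROM `githubarchive.month.201812`
WHERE type = 'IssuesEvent'
  AND JSON_EXTRACT_SCALAR(payload, '$.action') = 'labeled'
GROUP BY label
ORDER BY n DESC
"""
label_counts = client.query(query).to_dataframe()  # requires pandas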


This spreadsheet contains the data for the entire Pareto chart. There is a long tail of issue labels which are not mutually exclusive. For example, the enhancement and feature labels could be grouped together. Furthermore, the quality and meaning of labels may vary greatly by project. Despite these hurdles, we decided to simplify the problem and group as many labels as possible into three categories: feature request, bug, and question, using heuristics we constructed after manually looking at the top ~200 labels. Additionally, we consulted with the maintainers of a large open source project, Kubeflow, as our first customer to validate our intuitions.

We experimented with creating a fourth category called other in order to have negative samples of items not in the first three categories. However, we discovered that the information was noisy, as there were many bugs, feature requests, and questions in this “other” category. Therefore, we limited the training set to issues that we could categorize exclusively as either a feature request, bug, or question.

It should be noted that this arrangement of the training data is far from ideal, as we want our training data to resemble the distribution of real issues as closely as possible. However, our goal was to construct a minimal viable product with the least time and expense possible and iterate later, so we moved forward with this approach.

Finally, we took special care to de-duplicate issues. To be conservative, we resolved the following types of duplicates (by arbitrarily choosing one issue in the duplicate set):

  1. Issues with the same title in the same repo.
  2. Issues that have the same content in their body, regardless of the title. We removed further duplicates by comparing only the first 75% of characters, and alternatively the last 75% of characters, of an issue body.

The SQL query used to categorize issues and deduplicate issues can be viewed with this link. You don’t have to run this query, as our friends from the Kubeflow project have run this query and are hosting the resulting data as CSV files on Google Cloud Bucket, which you can retrieve by following the code in this notebook. An exploration of the raw data as well as a description of all the fields in the dataset is also located in the notebook.

Step 4: Build & train the model.

Now that we have the data, the next step is to build and train the model. For this problem, we decided to borrow a text pre-processing pipeline that we built for a similar problem and apply it here. This pre-processing pipeline cleans the raw text, tokenizes the data, builds a vocabulary, and pads the sequences of text to equal length; these steps are outlined in the “Prepare and Clean Data” section of our prior blog post. The code that accomplishes this task for issue labeling is outlined in this notebook.
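If you want a feel for those steps without the full pipeline, here is a rough sketch of the tokenize-and-pad portion in plain Keras (the DataFrame df and the sizes are assumptions; our actual pipeline is in the linked notebook):

from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences

# Build a capped vocabulary over issue bodies, then pad to equal length.
body_tokenizer = Tokenizer(num_words=10000)
body_tokenizer.fit_on_texts(df['body'])
body_seqs = body_tokenizer.texts_to_sequences(df['body'])
body_vecs = pad_sequences(body_seqs, maxlen=1000)
# Issue titles get the same treatment with a shorter maxlen.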

Our model takes two inputs, the issue title and body, and classifies each issue as either a bug, feature request, or question. Below is our model’s architecture defined with tensorflow.Keras:
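The exact architecture is in the linked notebook; the sketch below captures its spirit (two text inputs feeding a shared classifier head), with layer sizes that are illustrative assumptions rather than the published values:

import tensorflow as tf
from tensorflow.keras import layers

title_in = layers.Input(shape=(50,), name='title')   # padded title tokens
body_in = layers.Input(shape=(1000,), name='body')   # padded body tokens

title_x = layers.GRU(75)(layers.Embedding(10000, 50)(title_in))
body_x = layers.GRU(75)(layers.Embedding(10000, 50)(body_in))

x = layers.concatenate([title_x, body_x])
x = layers.Dense(50, activation='relu')(x)
out = layers.Dense(3, activation='softmax')(x)  # bug / feature request / question

model = tf.keras.Model(inputs=[title_in, body_in], outputs=out)
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])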

A couple of notes about this model:

  • You do not have to use deep learning to solve this problem. We just used an existing pipeline we built for another closely related problem in order to quickly bootstrap ourselves.
  • The model architecture is embarrassingly simple. Our goal was to keep things as simple as possible to demonstrate that you can build a real data product using a simple approach. We did not spend much time tuning or experimenting with different architectures.
  • We anticipate that there is plenty of room for improvement on this model, by using more state-of-the-art architectures or improving the dataset. We provide several hints in the next steps section of this blog post.

Evaluating the model

Below is a confusion matrix showing our model’s accuracy on a test set of the three categories. The model really struggles to categorize questions but does a fairly decent job at distinguishing bugs from features.
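If you are reproducing this, such a matrix can be computed from the held-out set with scikit-learn (a sketch; the test arrays are assumptions following the earlier steps):

import numpy as np
from sklearn.metrics import confusion_matrix

# Rows are true classes, columns are predicted classes.
y_pred = np.argmax(model.predict([title_test, body_test]), axis=1)
print(confusion_matrix(y_test, y_pred))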

Note that since our test set is not representative of all issues (as we filtered the dataset to only those that we could categorize), the accuracy metrics above should be taken with a grain of salt. We somewhat mitigate this problem by gathering explicit feedback from our users, which allows us to re-train our model and debug problems very fast. We discuss the explicit feedback mechanism in a later section.

Making predictions

Below are model predictions on toy examples. The full code is available in this notebook.

We wanted to choose reasonable thresholds so that the model does not spam people with too many incorrect predictions (this means that our app may not offer any prediction in some cases). We selected the thresholds by testing our system on several repos and consulting with several maintainers on an acceptable false positive rate.
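A hedged sketch of what per-class thresholding can look like (the threshold values here are illustrative, not the ones we shipped):

import numpy as np

CLASSES = ['bug', 'feature_request', 'question']
THRESHOLDS = {'bug': 0.7, 'feature_request': 0.7, 'question': 0.9}

probs = model.predict([title_vec, body_vec])[0]
best = int(np.argmax(probs))
label = CLASSES[best] if probs[best] >= THRESHOLDS[CLASSES[best]] else None
# `None` means the app stays silent instead of risking a bad prediction.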

Step 5: Use Flask to respond to payloads.

Now that you have a model that can make predictions, and a way to programmatically add comments and labels to issues (step 2), all that is left is gluing the pieces together. You can accomplish this with the following steps:

  1. Start a web server that listens to payloads from GitHub.com (you specified the endpoint that GitHub will send payloads to when you registered your app in step 1).
  2. Verify the payload is coming from GitHub (illustrated by the verify_webhook function in this script).
  3. Respond to the payload if desired by using the GitHub API (which you learned in step 2).
  4. Log appropriate data and feedback you receive to a database to facilitate model retraining.

A great way to accomplish this is to use a framework like Flask and a database interface like SQLAlchemy. If you are already familiar with Flask, below is a truncated version of the code that applies predicted issue labels when notified by GitHub that an issue has been opened:
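The actual handler lives in our repository; the sketch below is a reconstruction of its shape (the helpers predict_label and apply_label are hypothetical stand-ins for the model wrapper and the GitHub API calls from step 2):

from flask import Flask, request

app = Flask(__name__)

@app.route('/event_handler', methods=['POST'])
def event_handler():
    # (Signature verification and database logging omitted for brevity.)
    payload = request.get_json()
    if payload.get('action') == 'opened' and 'issue' in payload:
        title = payload['issue']['title']
        body = payload['issue']['body']
        prediction = predict_label(title, body)  # hypothetical model wrapper
        if prediction is not None:
            apply_label(payload, prediction)     # hypothetical helper from step 2
    return 'ok'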

Don’t worry if you are not familiar with Flask or SQLAlchemy. You can learn everything you need to know about this subject from this wonderful MOOC on Flask, HTML, CSS and JavaScript. This course is a really great investment of time if you are a data scientist, as it will allow you to build interfaces for your data products in a lightweight way. We took this course and were impressed with it.

We leave it as an exercise for the reader to go through the rest of the Flask code in our GitHub repository.

Collecting explicit user feedback.

As illustrated above, explicit feedback is requested by asking users to react with 👍 or 👎 to a prediction. We can store these reactions in a database, which allows us to re-train and debug our models. This is perhaps one of the most exciting and important aspects of launching a data product as a GitHub App!
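A hypothetical SQLAlchemy model for such a feedback table might look like this (the schema is an illustration, not our production schema):

from sqlalchemy import Column, Integer, String
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class Feedback(Base):
    __tablename__ = 'feedback'
    id = Column(Integer, primary_key=True)
    repo = Column(String)             # e.g. 'kubeflow/kubeflow'
    issue_number = Column(Integer)
    predicted_label = Column(String)
    reaction = Column(String)         # '+1' or '-1'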

You can see more examples of predictions and user feedback on our app’s homepage. For example, this is the page for the kubeflow/kubeflow repo:

Please install our app, it’s free!

If you enjoy what you have read thus far and want to support this project, please install this app on your public repositories (this app will not make predictions on private repos even if installed there), and give our bot feedback when it makes predictions 👍 👎.

Here is the link to install our app.

Conclusion: Tips for building your own machine learning powered apps

  • Don’t be afraid to use public datasets. You can do a lot more than just label issues (see the resources section for ideas).
  • Don’t be afraid to iterate fast, even if the solution is not perfect. Building a model is sometimes the smallest component of a project and getting user feedback is very valuable so you do not waste time.
  • Try to consult with at least one real customer or user and have them guide and validate decisions.
  • Take advantage of opportunities to gather explicit user feedback. This will allow you to improve your solution and your models quickly.

Part II & Next Steps

One aspect we did not cover is how to serve your app at scale. When you are just starting out, you probably do not need to worry about this and can serve this on a single server with your favorite cloud provider. You can also use a service like Heroku, which is covered in the course on Flask linked in the resources section below.

In Part II, we will cover the following:

  • How to deploy your Flask app on Kubernetes so it can scale to many users.
  • Using Argo pipelines to manage the model training and serving pipeline.

We believe there are many opportunities to improve upon the approach we illustrated in this post. Some ideas we have in mind are:

  • Constructing better labels and negative samples of items that do not belong in the label set.
  • Using the tools from fastai to explore state of the art architectures, such as Multi-Head Attention.
  • Pre-training on a large corpus and fine-tuning on GitHub issues, to enable users to predict repo-specific labels instead of a small global set of labels.
  • Using additional data such as information about the repository or the user opening the issue, perhaps learning an embedding for these entities.
  • Allowing users to customize the label thresholds and the names of labels, as well as choose which labels to predict.

Resources

Get In Touch!

We hope you enjoyed this blog post. Please feel free to get in touch with us:

Disclaimer

Any ideas or opinions presented in this article are our own. Any ideas or techniques presented do not necessarily foreshadow future products of any company. This blog is for educational purposes only.


Deep Learning Using TensorFlow

This TensorFlow tutorial is for professionals and enthusiasts who are interested in applying deep learning algorithms using TensorFlow to solve various problems.

TensorFlow is an open source deep learning library that is based on the concept of data flow graphs for building models. It allows you to create large-scale neural networks with many layers. Learning to use this library is also a fundamental part of the AI & Deep Learning course curriculum. The following topics will be discussed in this TensorFlow tutorial:

  • What is TensorFlow
  • TensorFlow Code Basics
  • TensorFlow UseCase

What are Tensors?

In this TensorFlow tutorial, before talking about TensorFlow, let us first understand what tensors are. Tensors are nothing but the de facto standard for representing data in deep learning.

Tensors are just multidimensional arrays that allow you to represent data with higher dimensions. In general, in deep learning you deal with high-dimensional data sets, where dimensions refer to the different features present in the data set. In fact, the name “TensorFlow” is derived from the operations which neural networks perform on tensors; it is literally a flow of tensors. Now that you understand what tensors are, let us move ahead in this TensorFlow tutorial and understand: what is TensorFlow?

What is TensorFlow?

TensorFlow is a library based on Python that provides different types of functionality for implementing deep learning models. As discussed earlier, the term TensorFlow is made up of two terms, Tensor & Flow:

In TensorFlow, the term tensor refers to the representation of data as a multi-dimensional array, whereas the term flow refers to the series of operations that one performs on tensors.

Now we have covered enough background about TensorFlow.

Next up in this TensorFlow tutorial, we will be discussing TensorFlow code basics.

TensorFlow Tutorial: Code Basics

Basically, the overall process of writing a TensorFlow program involves two steps:

  1. Building a Computational Graph
  2. Running a Computational Graph

Let me explain the above two steps one by one:

1. Building a Computational Graph

So, what is a computational graph? Well, a computational graph is a series of TensorFlow operations arranged as nodes in the graph. Each node takes zero or more tensors as input and produces a tensor as output. Let me give you an example of a simple computational graph, which consists of three nodes, a, b & c, as shown below:

Explanation of the Above Computational Graph:

  • Nodes a and b are constant nodes that store the values 5.0 and 6.0 respectively.
  • Node c represents the multiplication operation, taking the outputs of a and b as its inputs.

Basically, one can think of a computational graph as an alternative way of conceptualizing the mathematical calculations that take place in a TensorFlow program. The operations assigned to different nodes of a computational graph can be performed in parallel, thus providing better performance in terms of computations.

Note that this step only describes the computation: the graph does not compute anything and holds no values; it simply defines the operations specified in your code.

2. Running a Computational Graph

Let us take the previous example of computational graph and understand how to execute it. Following is the code from previous example:

Example 1:

import tensorflow as tf
 
# Build a graph
a = tf.constant(5.0)
b = tf.constant(6.0)
c = a * b

Now, in order to get the output of node c, we need to run the computational graph within a session. A session places the graph operations onto devices, such as CPUs or GPUs, and provides methods to execute them.

A session encapsulates the control and state of the TensorFlow runtime, i.e. it stores information about the order in which all the operations will be performed and passes the result of an already computed operation to the next operation in the pipeline. Let me show you how to run the above computational graph within a session (an explanation of each line of code has been added as a comment):

# Create the session object
sess = tf.Session()

# Run the graph within a session and store the output in a variable
output_c = sess.run(c)

# Print the output of node c
print(output_c)

# Close the session to free up resources
sess.close()

Output:
30.0

So, this was all about sessions and running a computational graph within one. Now, let us talk about variables and placeholders, which we will be using extensively while building deep learning models using TensorFlow.

Constants, Placeholders and Variables

In TensorFlow, constants, placeholders and variables are used to represent different parameters of a deep learning model. Since I have already discussed constants earlier, I will start with placeholders.

Placeholder:

A TensorFlow constant allows you to store a value, but what if you want your nodes to take inputs at run time? For this kind of functionality, placeholders are used, which allow your graph to take external inputs as parameters. Basically, a placeholder is a promise to provide a value later, during runtime. Let me give you an example to make things simpler:

import tensorflow as tf

# Creating placeholders
a = tf.placeholder(tf.float32)
b = tf.placeholder(tf.float32)

# Assigning the multiplication operation w.r.t. a & b to node mul
mul = a * b

# Create the session object
sess = tf.Session()

# Executing mul by passing the values [1, 3] and [2, 4] for a and b respectively
output = sess.run(mul, {a: [1, 3], b: [2, 4]})
print('Multiplying a b:', output)

Output:
[ 2. 12.]

Points to Remember about placeholders:

  • Placeholders are not initialized and contain no data; you must feed them values at run time through the feed_dict argument of sess.run.
  • Running an operation that depends on a placeholder without feeding it a value raises an error.
Now, let us move ahead and understand – what are variables?

Variables

In deep learning, placeholders are used to take arbitrary inputs in your model or graph. Apart from taking input, you also need to modify the graph such that it can produce new outputs w.r.t. the same inputs. For this you will be using variables. In a nutshell, a variable allows you to add trainable parameters or nodes to the graph, i.e. their values can be modified over a period of time. Variables are defined by providing their initial value and type as shown below:

var = tf.Variable([0.4], dtype=tf.float32)

Note:

Constants are initialized when you call tf.constant, and their value can never change. On the contrary, variables are not initialized when you call tf.Variable. To initialize all the variables in a TensorFlow program, you must explicitly call a special operation as shown below:

init = tf.global_variables_initializer()
sess.run(init)

Always remember that a variable must be initialized before a graph is used for the first time.

Note: TensorFlow variables are in-memory buffers that contain tensors, but unlike normal tensors that are only instantiated when a graph is run and are immediately deleted afterwards, variables survive across multiple executions of a graph.

Now that we have covered enough basics of TensorFlow, let us go ahead and understand how to implement a linear regression model using TensorFlow.

Linear Regression Model Using TensorFlow

A linear regression model is used for predicting the unknown value of a variable (the dependent variable) from the known value of another variable (the independent variable) using the linear regression equation. In the notation of the code below, the equation is linear_model = W * x + b, where W is the slope (weight) and b is the intercept (bias).

Therefore, for creating a linear model, you need:

  1. Building a Computational Graph
  2. Running a Computational Graph

So, let us begin building a linear model using TensorFlow:


import tensorflow as tf

# Creating a variable for the parameter slope (W) with initial value 0.4
W = tf.Variable([.4], tf.float32)

# Creating a variable for the parameter bias (b) with initial value -0.4
b = tf.Variable([-0.4], tf.float32)

# Creating a placeholder for the input or independent variable, denoted by x
x = tf.placeholder(tf.float32)

# Equation of linear regression
linear_model = W * x + b

# Initializing all the variables
sess = tf.Session()
init = tf.global_variables_initializer()
sess.run(init)

# Running the regression model to calculate the output w.r.t. the provided x values
print(sess.run(linear_model, {x: [1, 2, 3, 4]}))

Output:

[ 0.     0.40000001 0.80000007 1.20000005]

The above code just represents the basic idea behind the implementation of the regression model, i.e. how you follow the equation of the regression line so as to get output w.r.t. a set of input values. But there are two more things left to be added to make it a complete regression model:

  1. A loss function, to measure how far the model's current output is from the desired output.
  2. A training mechanism (an optimizer), to adjust the parameters W and b so as to minimize that loss.

Now let us understand how to incorporate the above functionalities into the code for the regression model.

Loss Function – Model Validation

A loss function measures how far apart the current output of the model is from the desired or target output. I’ll use the most commonly used loss function for my linear regression model, called the Sum of Squared Errors, or SSE. The SSE is calculated w.r.t. the model output (represented by linear_model) and the desired or target output (y) as:

y = tf.placeholder(tf.float32)
error = linear_model - y
squared_errors = tf.square(error)
loss = tf.reduce_sum(squared_errors)
print(sess.run(loss, {x: [1, 2, 3, 4], y: [2, 4, 6, 8]}))

Output:
90.24

As you can see, we are getting a high loss value. Therefore, we need to adjust our weights (W) and bias (b) so as to reduce the error that we are receiving.

tf.train API – Training the Model

TensorFlow provides optimizers that slowly change each variable in order to minimize the loss function or error. The simplest optimizer is gradient descent. It modifies each variable according to the magnitude of the derivative of loss with respect to that variable.

# Creating an instance of the gradient descent optimizer
optimizer = tf.train.GradientDescentOptimizer(0.01)

train = optimizer.minimize(loss)

for i in range(1000):
    sess.run(train, {x: [1, 2, 3, 4], y: [2, 4, 6, 8]})
print(sess.run([W, b]))

Output:
 [array([ 1.99999964], dtype=float32), array([ 9.86305167e-07], dtype=float32)]

So, this is how you create a linear model using TensorFlow and train it to get the desired output.

TensorFlow Vs PyTorch: Comparison of the Machine Learning Libraries

Libraries play an important role when developers decide to work in machine learning or deep learning research. According to this article, in a survey based on a sample of 1,616 ML developers and data scientists, for every one developer using PyTorch, there are 3.4 developers using TensorFlow. In this article, we list down 10 comparisons between these two machine learning libraries.

1 - Origin

PyTorch, developed by Facebook, is based on Torch, while TensorFlow, an open-source machine learning library developed by Google Brain, is based on the idea of data flow graphs for building models.

2 - Features

TensorFlow has some attractive features, such as TensorBoard, which serves as a great option for visualising a machine learning model, and TensorFlow Serving, a dedicated gRPC server used when deploying models in production. On the other hand, PyTorch has several distinguishing features too, such as dynamic computation graphs, native support for Python, and support for CUDA, which ensures less time for running the code and an increase in performance.

3 - Community

TensorFlow is adopted by many researchers across various fields, like academia, business organisations, etc. It has a much bigger community than PyTorch, which implies that it is easier to find resources or solutions in TensorFlow. There is a vast amount of tutorials, code, and support for TensorFlow; PyTorch, being the newcomer compared to TensorFlow, lacks some of these benefits.

4 - Visualisation

Visualisation plays a leading role when presenting any project in an organisation. TensorFlow has TensorBoard for visualising machine learning models, which helps during training to spot errors quickly. It is a real-time representation of the graphs of a model, which not only depicts the graphic representation but also shows the accuracy graphs in real time. PyTorch lacks this eye-catching feature.

5 - Defining Computational Graphs

In TensorFlow, defining a computational graph is a lengthy process, as you have to build and run the computations within sessions. You also have to use other constructs such as placeholders, variable scoping, etc. On the other hand, PyTorch wins this point, as it has dynamic computation graphs, which help in building graphs dynamically. Here, the graph is built at every point of execution and you can manipulate it at run time.

6 - Debugging

Because PyTorch builds its computation graphs dynamically, debugging is painless. You can easily use Python debugging tools like pdb or ipdb; for instance, you can put “pdb.set_trace()” at any line of code and then step through further computations, pinpoint the cause of errors, etc. For TensorFlow, you have to use the TensorFlow debugger tool, tfdbg, which lets you view the internal structure and states of running TensorFlow graphs during training and inference.

7 - Deployment

For now, deployment in TensorFlow is much better supported than in PyTorch. It has the advantage of TensorFlow Serving, a flexible, high-performance serving system for deploying machine learning models, designed for production environments. In PyTorch, you can instead use Flask, the Python microframework, for deploying models.

8 - Documentation

The documentation of both frameworks is broadly available, as there are examples and tutorials in abundance for both libraries. You could say it is a tie between the two frameworks.

Click here for TensorFlow documentation and click here for PyTorch documentation.

9 - Serialisation

Serialisation in TensorFlow can be considered one of the framework's advantages: you can save your entire graph as a protocol buffer and later load it in other supported languages. PyTorch lacks this feature.

10 - Device Management

By default, TensorFlow maps nearly all of the GPU memory of all GPUs visible to the process, which can be a drawback, but it automatically presumes that you want to run your code on the GPU; because of these well-set defaults, it results in fair device management. On the other hand, PyTorch keeps track of the currently selected GPU, and all CUDA tensors you allocate will be created on that device.