Anthony Bryant

1614104520

GoogLeNet (InceptionV1) with TensorFlow

InceptionV1, better known as GoogLeNet, is one of the most successful models of the early years of convolutional neural networks. Szegedy et al. from Google published the model in their paper Going Deeper with Convolutions[1] and won ILSVRC-2014 by a large margin. The name signifies both the affiliation of most of the contributing scholars with Google and a nod to the LeNet[2] model.

Introduction

After analyzing and implementing VGG16[7] (the runner-up of ILSVRC-2014), it is now time for the winner of the competition, GoogLeNet. As the title of the paper[1] implies, the main intuition behind GoogLeNet is obtaining a more capable model by increasing its depth. However, as covered in the previous posts, this is a risky architectural choice, since deeper and larger models are notoriously harder to train. The success of GoogLeNet stems from the smart tricks that make the model lighter and easier to train. The original GoogLeNet is also named InceptionV1 and is a 22-layer-deep network. Later versions of the model were developed by some of the authors of the first paper in 2016[3].

A few new ideas made GoogLeNet superior to its counterpart, VGGNet:

  1. 1x1 convolutions reduce the dimensionality of the layers while allowing the model to grow deeper.
  2. Global average pooling[6] is performed before the fully connected layers in order to reduce the number of feature maps.
  3. Two auxiliary losses were introduced at earlier layers of the model to propagate good gradients back to the initial layers, a brilliant idea for tackling the vanishing gradient problem.
  4. Prior models such as LeNet, AlexNet, and VGGNet follow a sequential architecture. GoogLeNet, however, has branches that lead to the auxiliary losses.
  5. Another branchy entity in the model is the Inception module, which combines the outputs of differently sized filters. This parallel multi-scale structure enables the module to capture both smaller and larger motifs in the pixel data.

All these ideas will be discussed further throughout the next sections as we build the model using Keras.
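As a preview, ideas (1) and (5) above can be sketched as a single Keras function. This is a minimal sketch of one Inception module; the filter counts follow the paper's "inception (3a)" block, and everything around it (stem, auxiliary heads, training setup) is left out:

```python
import tensorflow as tf
from tensorflow.keras import layers

def inception_module(x, f1, f3_reduce, f3, f5_reduce, f5, pool_proj):
    """One Inception block: parallel 1x1, 3x3, and 5x5 convolution branches
    plus a pooling branch, concatenated along the channel axis. The *_reduce
    1x1 convolutions shrink channel depth before the costly 3x3/5x5 convs."""
    b1 = layers.Conv2D(f1, 1, padding="same", activation="relu")(x)

    b2 = layers.Conv2D(f3_reduce, 1, padding="same", activation="relu")(x)
    b2 = layers.Conv2D(f3, 3, padding="same", activation="relu")(b2)

    b3 = layers.Conv2D(f5_reduce, 1, padding="same", activation="relu")(x)
    b3 = layers.Conv2D(f5, 5, padding="same", activation="relu")(b3)

    b4 = layers.MaxPooling2D(3, strides=1, padding="same")(x)
    b4 = layers.Conv2D(pool_proj, 1, padding="same", activation="relu")(b4)

    return layers.Concatenate()([b1, b2, b3, b4])

# Filter counts below match the paper's inception (3a) block.
inputs = tf.keras.Input(shape=(28, 28, 192))
outputs = inception_module(inputs, 64, 96, 128, 16, 32, 32)
model = tf.keras.Model(inputs, outputs)
print(model.output_shape)  # (None, 28, 28, 256)
```

Note how the output depth (64 + 128 + 32 + 32 = 256) is the sum of the branch filter counts, and how the 1x1 reductions keep the 3x3 and 5x5 branches cheap.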

#artificial-intelligence #deep-learning #tensorflow


5 Steps to Passing the TensorFlow Developer Certificate

Deep Learning is one of the most in-demand skills on the market, and TensorFlow is the most popular DL framework. One of the best ways, in my opinion, to show that you are comfortable with DL fundamentals is taking the TensorFlow Developer Certificate exam. I completed mine last week, and now I am giving tips to those who want to validate their DL skills. I hope you love memes!

  1. Do the DeepLearning.AI TensorFlow Developer Professional Certificate course on Coursera by Laurence Moroney and Andrew Ng.

  2. Do the course exercises in parallel in PyCharm.

#tensorflow #certificate


Mckenzie Osiki

1623139838

Transfer Learning on Images with Tensorflow 2 – Predictive Hacks

In this tutorial, we will show you how to build a powerful neural network model to classify images of **cats** and **dogs** using transfer learning: we take a pre-trained model trained on ImageNet as the base and then train additional new layers for our cats-and-dogs classification model.

The Data

We will work with a sample of 600 images from the Dogs vs Cats dataset, which was used for a 2013 Kaggle competition.
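The setup described above can be sketched in a few lines of Keras. This is a hedged illustration, not the tutorial's exact code: MobileNetV2 and the 160x160 input size are assumptions for the base model, and `weights=None` is used here only to avoid a download (in practice you would pass `weights="imagenet"` to actually transfer the pre-trained features):

```python
import tensorflow as tf

# Pre-trained convolutional base (MobileNetV2 is an assumption here;
# pass weights="imagenet" in practice to load the pre-trained weights).
base = tf.keras.applications.MobileNetV2(
    input_shape=(160, 160, 3), include_top=False, weights=None)
base.trainable = False  # freeze the base; only the new head will train

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),       # feature maps -> vector
    tf.keras.layers.Dense(1, activation="sigmoid"),  # cat vs. dog
])
model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=["accuracy"])
```

Freezing the base means only the small new head is trained, which is what makes transfer learning feasible on a sample as small as 600 images.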

#python #transfer-learning #tensorflow #images

TensorFlow Lite Object Detection using Raspberry Pi and Pi Camera

I did not create the Object Detection model; I merely cloned Google's TensorFlow Lite model and followed their Raspberry Pi tutorial, which they describe in the README! You don't need this article if you understand everything from the README; I merely talk about what I did!

Prerequisites:

  • I used a Raspberry Pi 3 Model B and a Pi Camera board (I 3D-printed a case for the camera board). **I had this connected before starting and did not include it in the 90 minutes** (there are plenty of YouTube videos showing how to do this depending on which Pi model you have; I used a video like this a while ago!)

  • I used my Apple MacBook, which is Linux at heart, as is the Raspberry Pi. By using a Mac you don't need to install any applications to interact with the Raspberry Pi, but on Windows you do (I will explain where to go in the article if you use Windows).

#raspberry-pi #object-detection #raspberry-pi-camera #tensorflow-lite #tensorflow

A Demo Code of Training and Testing Using TensorFlow

ProbFace, arxiv

This is a demo code for training and testing [ProbFace] using TensorFlow. ProbFace is a reliable Probabilistic Face Embedding (PFE) method. The representation of each face is a Gaussian distribution parametrized by (mu, sigma), where mu is the original embedding and sigma is the learned uncertainty. Experiments show that ProbFace can

  • improve the robustness of PFE.
  • simplify the calculation of the mutual likelihood score (MLS).
  • improve the recognition performance on the risk-controlled scenarios.
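To make the (mu, sigma) representation concrete: comparing two such Gaussian embeddings uses the mutual likelihood score. The NumPy sketch below follows the standard PFE formulation (with the constant term omitted); it is an illustration of the math, not code from the ProbFace repository:

```python
import numpy as np

def mutual_likelihood_score(mu1, sigma1, mu2, sigma2):
    """Mutual likelihood score between two Gaussian face embeddings
    N(mu1, sigma1^2) and N(mu2, sigma2^2), per the PFE formulation
    (additive constant dropped). Higher means more likely the same face."""
    var = sigma1 ** 2 + sigma2 ** 2
    return -0.5 * np.sum((mu1 - mu2) ** 2 / var + np.log(var))

mu = np.array([0.1, -0.3, 0.5])
sigma = np.array([0.2, 0.2, 0.2])
far = mu + 1.0
# An embedding scores higher against itself than against a distant one.
assert mutual_likelihood_score(mu, sigma, mu, sigma) > \
       mutual_likelihood_score(mu, sigma, far, sigma)
```

Note how large sigma values down-weight the squared distance term, which is exactly how the learned uncertainty makes the score more robust for low-quality faces.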

#machine-learning #tensorflow #testing