John David

1551149766

How exactly does LSTMCell from TensorFlow operate?

I am trying to reproduce the results generated by TensorFlow's LSTMCell to make sure I understand what it does.

Here is my TensorFlow code:

import numpy as np
import tensorflow as tf

num_units = 3
lstm = tf.nn.rnn_cell.LSTMCell(num_units = num_units)

timesteps = 7
num_input = 4
X = tf.placeholder("float", [None, timesteps, num_input])
x = tf.unstack(X, timesteps, 1)
outputs, states = tf.contrib.rnn.static_rnn(lstm, x, dtype=tf.float32)

sess = tf.Session()
init = tf.global_variables_initializer()
sess.run(init)

x_val = np.random.normal(size = (1, 7, num_input))

res = sess.run(outputs, feed_dict = {X:x_val})

for e in res:
    print(e)

Here is its output:

[[-0.13285545 -0.13569424 -0.23993783]]
[[-0.04818152  0.05927373  0.2558436 ]]
[[-0.13818116 -0.13837864 -0.15348436]]
[[-0.232219    0.08512601  0.05254192]]
[[-0.20371495 -0.14795329 -0.2261929 ]]
[[-0.10371902 -0.0263292  -0.0914975 ]]
[[0.00286371 0.16377522 0.059478  ]]

And here is my own implementation:

n_steps, _ = X.shape
h = np.zeros(shape = self.hid_dim)
c = np.zeros(shape = self.hid_dim)

for i in range(n_steps):
    x = X[i,:]

    vec = np.concatenate([x, h])
    #vec = np.concatenate([h, x])
    gs = np.dot(vec, self.kernel) + self.bias


    g1 = gs[0*self.hid_dim : 1*self.hid_dim]
    g2 = gs[1*self.hid_dim : 2*self.hid_dim]
    g3 = gs[2*self.hid_dim : 3*self.hid_dim]
    g4 = gs[3*self.hid_dim : 4*self.hid_dim]

    I = vsigmoid(g1)
    N = np.tanh(g2)
    F = vsigmoid(g3)
    O = vsigmoid(g4)

    c = c*F + I*N

    h = O * np.tanh(c)

    print(h)

And here is its output:

[-0.13285543 -0.13569425 -0.23993781]
[-0.01461723  0.08060743  0.30876374]
[-0.13142865 -0.14921292 -0.16898363]
[-0.09892188  0.11739943  0.08772941]
[-0.15569218 -0.15165766 -0.21918869]
[-0.0480604  -0.00918626 -0.06084118]
[0.0963612  0.1876516  0.11888081]

As you can see, I was able to reproduce the first hidden vector, but the second one and all the following ones differ. What am I missing?

#python #tensorflow


Watts Kendall

1551150665

I examined this link and your code is almost perfect, but you forgot to add the forget_bias value (default 1.0) on this line: F = vsigmoid(g3). It should actually be F = vsigmoid(g3 + self.forget_bias), or in your case, since forget_bias is 1: F = vsigmoid(g3 + 1).
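In isolation, the corrected step looks like this (a minimal pure-NumPy sketch; `lstm_step` and the zero kernel in the usage below are illustrative, not TensorFlow internals):

```python
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def lstm_step(x, h, c, kernel, bias, forget_bias=1.0):
    # TensorFlow's LSTMCell concatenates [inputs, h], multiplies by the
    # kernel, and splits the result into i, j, f, o blocks.
    gates = np.concatenate([x, h]).dot(kernel) + bias
    i, j, f, o = np.split(gates, 4)
    # forget_bias (default 1.0) is added to the forget gate only.
    c_new = sigmoid(f + forget_bias) * c + sigmoid(i) * np.tanh(j)
    h_new = sigmoid(o) * np.tanh(c_new)
    return h_new, c_new

# With a zero kernel and bias, all gate pre-activations are zero, so the
# cell-state update reduces to c_new = sigmoid(forget_bias) * c: the forget
# bias alone shifts the forget gate from 0.5 to sigmoid(1) ~ 0.731.
num_input, num_units = 4, 3
kernel = np.zeros((num_input + num_units, 4 * num_units))
bias = np.zeros(4 * num_units)
x, h, c = np.ones(num_input), np.zeros(num_units), np.ones(num_units)

_, c1 = lstm_step(x, h, c, kernel, bias, forget_bias=1.0)
_, c0 = lstm_step(x, h, c, kernel, bias, forget_bias=0.0)
print(c1)  # ~0.731 per unit
print(c0)  # 0.5 per unit
```

This is exactly the one-line difference between the question's implementation and TensorFlow's.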

Here is my implementation with NumPy:

import numpy as np
import tensorflow as tf

num_units = 3
lstm = tf.nn.rnn_cell.LSTMCell(num_units = num_units)
batch=1
timesteps = 7
num_input = 4
X = tf.placeholder("float", [batch, timesteps, num_input])
x = tf.unstack(X, timesteps, 1)
outputs, states = tf.contrib.rnn.static_rnn(lstm, x, dtype=tf.float32)
sess = tf.Session()
init = tf.global_variables_initializer()
sess.run(init)
x_val = np.reshape(range(28),[batch, timesteps, num_input])
res = sess.run(outputs, feed_dict = {X:x_val})
for e in res:
    print(e)
print("\nmy imp\n")
# my implementation
def sigmoid(x):
    return 1 / (1 + np.exp(-x))

kernel, bias = sess.run([lstm._kernel, lstm._bias])
f_b_ = lstm._forget_bias
# the state has num_units columns (here num_units == 3)
c, h = np.zeros([batch, num_units]), np.zeros([batch, num_units])
for step in range(timesteps):
    inpt = np.split(x_val, timesteps, 1)[step][0]
    lstm_mtrx = np.matmul(np.concatenate([inpt, h], 1), kernel) + bias
    i, j, f, o = np.split(lstm_mtrx, 4, 1)
    c = sigmoid(f + f_b_) * c + sigmoid(i) * np.tanh(j)
    h = sigmoid(o) * np.tanh(c)
    print(h)

output:

[[ 0.06964055 -0.06541953 -0.00682676]]
[[ 0.005264   -0.03234607  0.00014838]]
[[ 1.617855e-04 -1.316892e-02  8.596722e-06]]
[[ 3.9425286e-06 -5.1347450e-03  7.5078127e-08]]
[[ 8.7508155e-08 -1.9560163e-03  6.3853928e-10]]
[[ 1.8867894e-09 -7.3784427e-04  5.8551406e-12]]
[[ 4.0385355e-11 -2.7728223e-04  5.3957669e-14]]

my imp

[[ 0.06964057 -0.06541953 -0.00682676]]
[[ 0.005264   -0.03234607  0.00014838]]
[[ 1.61785520e-04 -1.31689185e-02  8.59672610e-06]]
[[ 3.94252745e-06 -5.13474567e-03  7.50781122e-08]]
[[ 8.75080644e-08 -1.95601574e-03  6.38539112e-10]]
[[ 1.88678843e-09 -7.37844070e-04  5.85513438e-12]]
[[ 4.03853841e-11 -2.77282006e-04  5.39576024e-14]]
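The two runs agree to float32 precision; for example, comparing the first output rows printed above:

```python
import numpy as np

# First output row from the TensorFlow run vs. the NumPy reimplementation
# (values copied from the printed outputs above).
tf_row = np.array([0.06964055, -0.06541953, -0.00682676])
np_row = np.array([0.06964057, -0.06541953, -0.00682676])

print(np.allclose(tf_row, np_row, atol=1e-6))  # True
```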

Lyly Sara

1551150710

TensorFlow uses the glorot_uniform() function to initialize the LSTM kernel, which samples weights from a random uniform distribution. We need to fix the kernel values to get reproducible results:

import tensorflow as tf
import numpy as np

np.random.seed(0)
timesteps = 7
num_input = 4
x_val = np.random.normal(size = (1, timesteps, num_input))

num_units = 3

def glorot_uniform(shape):
    limit = np.sqrt(6.0 / (shape[0] + shape[1]))
    return np.random.uniform(low=-limit, high=limit, size=shape)

kernel_init = glorot_uniform((num_input + num_units, 4 * num_units))
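As a quick sanity check on the initializer (re-declared here so the snippet runs on its own), the sampled kernel has the expected shape and stays inside the Glorot limit:

```python
import numpy as np

def glorot_uniform(shape):
    # Glorot/Xavier uniform: limit = sqrt(6 / (fan_in + fan_out))
    limit = np.sqrt(6.0 / (shape[0] + shape[1]))
    return np.random.uniform(low=-limit, high=limit, size=shape)

np.random.seed(0)
num_input, num_units = 4, 3
kernel_init = glorot_uniform((num_input + num_units, 4 * num_units))

limit = np.sqrt(6.0 / sum(kernel_init.shape))
print(kernel_init.shape)                         # (7, 12)
print(bool(np.abs(kernel_init).max() <= limit))  # True
```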

My implementation of the LSTMCell (well, actually it's just TensorFlow's code, slightly rewritten):

def sigmoid(x):
    return 1. / (1 + np.exp(-x))

class LSTMCell():
    """Long short-term memory unit (LSTM) recurrent network cell.
    """
    def __init__(self, num_units, initializer=glorot_uniform,
               forget_bias=1.0, activation=np.tanh):
        """Initialize the parameters for an LSTM cell.
        Args:
          num_units: int, The number of units in the LSTM cell.
          initializer: The initializer to use for the kernel matrix. Default: glorot_uniform
          forget_bias: Biases of the forget gate are initialized by default to 1
            in order to reduce the scale of forgetting at the beginning of
            the training. 
          activation: Activation function of the inner states.  Default: np.tanh.
        """
        # Inputs must be 2-dimensional.
        self._num_units = num_units
        self._forget_bias = forget_bias
        self._activation = activation
        self._initializer = initializer

    def build(self, inputs_shape):
        input_depth = inputs_shape[-1]
        h_depth = self._num_units
        self._kernel = self._initializer(shape=(input_depth + h_depth, 4 * self._num_units))
        self._bias = np.zeros(shape=(4 * self._num_units))

    def call(self, inputs, state):
        """Run one step of LSTM.
        Args:
          inputs: input numpy array, must be 2-D, `[batch, input_size]`.
          state:  a tuple of numpy arrays, both `2-D`, with column sizes `c_state` and
            `m_state`.
        Returns:
          A tuple containing:
          - A `2-D, [batch, output_dim]`, numpy array representing the output of the
            LSTM after reading `inputs` when previous state was `state`.
            Here output_dim is equal to num_units.
          - Numpy array(s) representing the new state of LSTM after reading `inputs` when
            the previous state was `state`.  Same type and shape(s) as `state`.
        """
        num_proj = self._num_units
        (c_prev, m_prev) = state

        input_size = inputs.shape[-1]

        # i = input_gate, j = new_input, f = forget_gate, o = output_gate
        lstm_matrix = np.hstack([inputs, m_prev]).dot(self._kernel)
        lstm_matrix += self._bias

        i, j, f, o = np.split(lstm_matrix, indices_or_sections=4, axis=0)
        # Diagonal connections
        c = (sigmoid(f + self._forget_bias) * c_prev + sigmoid(i) *
               self._activation(j))

        m = sigmoid(o) * self._activation(c)

        new_state = (c, m)
        return m, new_state

X = x_val.reshape(x_val.shape[1:])

cell = LSTMCell(num_units, initializer=lambda shape: kernel_init)
cell.build(X.shape)

state = (np.zeros(num_units), np.zeros(num_units))
for i in range(timesteps):
    x = X[i,:]
    output, state = cell.call(x, state)
    print(output)

Produces output:

[-0.21386017 -0.08401277 -0.25431477]
[-0.22243588 -0.25817422 -0.1612211 ]
[-0.2282134  -0.14207162 -0.35017249]
[-0.23286737 -0.17129192 -0.2706512 ]
[-0.11768674 -0.20717363 -0.13339118]
[-0.0599215  -0.17756104 -0.2028935 ]
[ 0.11437953 -0.19484555  0.05371994]

And your TensorFlow code, if you replace the second line with

lstm = tf.nn.rnn_cell.LSTMCell(num_units = num_units, initializer = tf.constant_initializer(kernel_init))

returns:

[[-0.2138602  -0.08401276 -0.25431478]]
[[-0.22243595 -0.25817424 -0.16122109]]
[[-0.22821338 -0.1420716  -0.35017252]]
[[-0.23286738 -0.1712919  -0.27065122]]
[[-0.1176867  -0.2071736  -0.13339119]]
[[-0.05992149 -0.177561   -0.2028935 ]]
[[ 0.11437953 -0.19484554  0.05371996]]

Elthel Mario

1551150780

Here is a blog that should answer any conceptual questions related to LSTMs. It seems there is a lot that goes into building an LSTM from scratch!

Of course, this doesn't directly solve your problem, but it should point you in the right direction.


