How exactly does LSTMCell from TensorFlow operate?

I am trying to reproduce the results generated by TensorFlow's LSTMCell to make sure that I understand what it does.
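
For reference, these are the per-timestep update equations I am trying to reproduce (the usual LSTM formulation as I understand it; whether TensorFlow applies them in exactly this form, e.g. with the same gate ordering or extra bias terms, is precisely what I am unsure about):

\begin{aligned}
i_t &= \sigma(W_i\,[x_t, h_{t-1}] + b_i) \\
\tilde{c}_t &= \tanh(W_c\,[x_t, h_{t-1}] + b_c) \\
f_t &= \sigma(W_f\,[x_t, h_{t-1}] + b_f) \\
o_t &= \sigma(W_o\,[x_t, h_{t-1}] + b_o) \\
c_t &= f_t \odot c_{t-1} + i_t \odot \tilde{c}_t \\
h_t &= o_t \odot \tanh(c_t)
\end{aligned}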

Here is my TensorFlow code:

import numpy as np
import tensorflow as tf

num_units = 3
lstm = tf.nn.rnn_cell.LSTMCell(num_units=num_units)

timesteps = 7
num_input = 4
X = tf.placeholder("float", [None, timesteps, num_input])
x = tf.unstack(X, timesteps, 1)
outputs, states = tf.contrib.rnn.static_rnn(lstm, x, dtype=tf.float32)

sess = tf.Session()
init = tf.global_variables_initializer()
sess.run(init)

x_val = np.random.normal(size=(1, 7, num_input))

res = sess.run(outputs, feed_dict={X: x_val})

# outputs is a list of `timesteps` tensors, each of shape (batch, num_units)
for e in res:
    print(e)

Here is its output:

[[-0.13285545 -0.13569424 -0.23993783]]
[[-0.04818152  0.05927373  0.2558436 ]]
[[-0.13818116 -0.13837864 -0.15348436]]
[[-0.232219    0.08512601  0.05254192]]
[[-0.20371495 -0.14795329 -0.2261929 ]]
[[-0.10371902 -0.0263292  -0.0914975 ]]
[[0.00286371 0.16377522 0.059478  ]]
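
My own implementation below uses the exact same weights, read out of the TensorFlow session roughly like this (a sketch; I just pick the only kernel and bias variables in the graph, and their values end up in self.kernel and self.bias):

# Read the cell's parameters out of the session so the NumPy
# re-implementation can use exactly the same weights.
weights = {v.name: sess.run(v) for v in tf.trainable_variables()}
kernel = next(val for name, val in weights.items() if 'kernel' in name)
bias = next(val for name, val in weights.items() if 'bias' in name)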

And here is my own implementation:

n_steps, _ = X.shape                  # here X is a single sequence, shape (timesteps, num_input)
h = np.zeros(shape = self.hid_dim)    # hidden state, hid_dim == num_units
c = np.zeros(shape = self.hid_dim)    # cell state

for i in range(n_steps):
    x = X[i,:]

    # concatenate the input with the previous hidden state and apply
    # the fused kernel/bias copied from TensorFlow
    vec = np.concatenate([x, h])
    #vec = np.concatenate([h, x])
    gs = np.dot(vec, self.kernel) + self.bias

    # split the pre-activations into the four gate blocks
    g1 = gs[0*self.hid_dim : 1*self.hid_dim]
    g2 = gs[1*self.hid_dim : 2*self.hid_dim]
    g3 = gs[2*self.hid_dim : 3*self.hid_dim]
    g4 = gs[3*self.hid_dim : 4*self.hid_dim]

    I = vsigmoid(g1)   # input gate (vsigmoid is an elementwise sigmoid)
    N = np.tanh(g2)    # candidate cell input
    F = vsigmoid(g3)   # forget gate
    O = vsigmoid(g4)   # output gate

    c = c*F + I*N

    h = O * np.tanh(c)

    print(h)

And here is its output:

[-0.13285543 -0.13569425 -0.23993781]
[-0.01461723  0.08060743  0.30876374]
[-0.13142865 -0.14921292 -0.16898363]
[-0.09892188  0.11739943  0.08772941]
[-0.15569218 -0.15165766 -0.21918869]
[-0.0480604  -0.00918626 -0.06084118]
[0.0963612  0.1876516  0.11888081]

As you can see, I was able to reproduce the first hidden vector, but the second one and all following ones are different. Note that at the first timestep both h and c start at zero, so only part of the computation is really exercised there; the divergence appears as soon as the previous state is fed back in. What am I missing?
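
For reference, the slicing above assumes that TensorFlow stores a single fused kernel of shape (num_input + num_units, 4 * num_units) and a bias of shape (4 * num_units,); this can be verified quickly by printing the variables the graph created (a sanity check only, it does not by itself explain the discrepancy):

for v in tf.trainable_variables():
    print(v.name, v.get_shape())
# expected: one kernel of shape (num_input + num_units, 4 * num_units) = (7, 12)
#           one bias   of shape (4 * num_units,) = (12,)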

#python #tensorflow
