Monday 27 November 2017

Implementing a Keras model in TensorFlow

Deep learning is the latest trend in the field of artificial intelligence. Deep-learning-based models have produced impressive results in fields such as computer vision, natural language processing, and robotics. TensorFlow and Keras are two popular frameworks for building the deep neural networks used in deep learning. TensorFlow is a low-level library for implementing deep learning models, whereas Keras is a high-level abstraction library that runs on top of TensorFlow.

In this blog post, we will walk through a simple example of defining a model in Keras (with the TensorFlow backend) and training it directly inside a TensorFlow session. The Jupyter notebook implementation of the code can be found on GitHub.

First of all, import the required libraries.
## Imports
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
from keras.layers import Dense, Flatten, MaxPooling2D, Conv2D, Input
from keras.models import Model
import numpy as np

Next, define the values of the variables used in the program.
## Defining variables
batch_size=512
nb_classes=10
lr_rate=0.0005
height=28
width=28
channel=1
input_shape=[height,width,channel]
iterations=500

Then load the MNIST dataset.
mnist = input_data.read_data_sets('MNIST_data', one_hot=True)
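
Each image is loaded as a flat 784-dimensional vector with a one-hot label, which is why the batches are reshaped to 28x28x1 before being fed to the convolutional network. A quick sanity check of the shapes (a minimal sketch):
## Checking the shapes of the loaded data
print(mnist.train.images.shape)  # (55000, 784): flattened 28x28 grayscale images
print(mnist.train.labels.shape)  # (55000, 10): one-hot encoded digit labels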

Now the model has to be defined in Keras. Before that, the placeholder variables are defined in TensorFlow. The data placeholder will be used for feeding the input images, and the target placeholder will be used for feeding the expected target values.

## Defining placeholders in tf
data = tf.placeholder(tf.float32, [None, height, width, channel])  # batch of 28x28x1 images
target = tf.placeholder(tf.float32, [None, nb_classes])

Next, the model itself is defined using the Keras functional API. A simple CNN is used as the example in this program.

## Defining the model in keras using functional layers
input_layer = Input(shape=input_shape)
layer1 = Conv2D(32, kernel_size=(3, 3), activation='relu')(input_layer)
layer2 = Conv2D(64, (3, 3), activation='relu')(layer1)
layer3 = MaxPooling2D(pool_size=(2, 2))(layer2)
layer4 = Flatten()(layer3)
layer5 = Dense(128, activation='relu')(layer4)
output_layer = Dense(nb_classes, activation='linear')(layer5)  # linear output: raw logits, softmax is applied in the loss
model = Model(input_layer, output_layer)
model.summary()  # summary() prints the architecture itself and returns None
out = model(data)  # calling the Keras model on the placeholder yields a TensorFlow tensor

After that, the remaining parts are defined as usual in TensorFlow: the cross-entropy loss function, the Adam optimiser for training the network, and the accuracy computation.

## Making optimisation method, loss function and calculating accuracy
predictions = tf.nn.softmax(out)
cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=target, logits=out))
optimizer = tf.train.AdamOptimizer(lr_rate)
minimize = optimizer.minimize(cross_entropy)
correct = tf.equal(tf.argmax(target, 1), tf.argmax(predictions, 1))  # boolean vector marking correct predictions
accuracy = tf.reduce_mean(tf.cast(correct, tf.float32))
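
For intuition, the per-example loss that softmax_cross_entropy_with_logits computes is simply -sum(y * log(softmax(z))) for a one-hot target y and logits z. A minimal numpy sketch of the same calculation:
## What the loss computes for a single example (numpy sketch)
z = np.array([2.0, 1.0, 0.1])              # raw logits for one example
y = np.array([1.0, 0.0, 0.0])              # one-hot target
softmax_z = np.exp(z) / np.sum(np.exp(z))  # softmax probabilities
loss = -np.sum(y * np.log(softmax_z))      # cross-entropy, ~0.417 here
print(loss)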

Once the model and the related operations are defined, the model has to be trained. The training is done through the usual TensorFlow mechanism: a session is started, the variables are initialised, and the network is trained by feeding it the input and target data batch by batch.

## The keras model is trained as a tensorflow graph
init_op = tf.global_variables_initializer()
sess = tf.InteractiveSession()
sess.run(init_op)
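
## (Optional) Register this session with the Keras backend. The model here has
## no layers with train/test-specific behaviour (e.g. Dropout, BatchNormalization),
## so this is not strictly required, but it keeps Keras and TensorFlow using the
## same session when such layers are added.
from keras import backend as K
K.set_session(sess)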

for i in range(iterations):
    batch_x, batch_y = mnist.train.next_batch(batch_size)
    batch_x = batch_x.reshape(batch_x.shape[0], height, width, channel)
    _, loss = sess.run([minimize, cross_entropy], {data: batch_x, target: batch_y})
    if i % 50 == 0:
        print('Iteration {}, loss: {:.4f}'.format(i, loss))

Finally, the trained model is evaluated on the test set and the session is closed.
mnist_test = mnist.test.images.reshape(mnist.test.images.shape[0], height, width, channel)
acc = sess.run(accuracy, feed_dict={data: mnist_test, target: mnist.test.labels})
print('accuracy on test set: {}'.format(acc))
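
Before closing the session, the predictions tensor can also be evaluated for a single (arbitrarily chosen) test image as a quick sanity check; a minimal sketch:
## Predicting the digit for a single test image
sample = mnist.test.images[0].reshape(1, height, width, channel)
probs = sess.run(predictions, feed_dict={data: sample})
print('predicted digit: {}'.format(np.argmax(probs)))
sess.close()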

There are also other ways of using Keras alongside TensorFlow; refer to this blog for further reading.
