Problem after converting Keras model into TensorFlow pb

I'm using Keras 2.1.* with the TensorFlow 1.13.* backend. I save my model during training in .h5 format and afterwards convert it into a protobuf (.pb) model. The conversion itself completes without errors, but the TensorFlow model produces slightly different results than the original Keras model. I'm also loading the Keras model without compiling it, like this:

import keras
model = keras.models.load_model('model.h5', compile=False)

Are there any ideas about the reason?

Tags: tensorflow, keras, hdf5, protobuf
2 Answers
Answer 1 (3 votes)

This is the code you need to freeze the Keras session into a TensorFlow graph:

import keras
from keras import backend as K
import tensorflow as tf

def freeze_session(session, keep_var_names=None, output_names=None, clear_devices=True):
    """
    Freezes the state of a session into a pruned computation graph.

    Creates a new computation graph where variable nodes are replaced by
    constants taking their current value in the session. The new graph will be
    pruned so subgraphs that are not necessary to compute the requested
    outputs are removed.
    @param session The TensorFlow session to be frozen.
    @param keep_var_names A list of variable names that should not be frozen,
                          or None to freeze all the variables in the graph.
    @param output_names Names of the relevant graph outputs.
    @param clear_devices Remove the device directives from the graph for better portability.
    @return The frozen graph definition.
    """
    from tensorflow.python.framework.graph_util import convert_variables_to_constants
    graph = session.graph
    with graph.as_default():
        freeze_var_names = list(set(v.op.name for v in tf.global_variables()).difference(keep_var_names or []))
        output_names = output_names or []
        output_names += [v.op.name for v in tf.global_variables()]
        # Graph -> GraphDef ProtoBuf
        input_graph_def = graph.as_graph_def()
        if clear_devices:
            for node in input_graph_def.node:
                node.device = ""
        frozen_graph = convert_variables_to_constants(session, input_graph_def,
                                                      output_names, freeze_var_names)
        return frozen_graph


# Load the model (compile=False is fine here, as in the question).
model = keras.models.load_model('model.h5', compile=False)
frozen_graph = freeze_session(K.get_session(),
                              output_names=[out.op.name for out in model.outputs])
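
Once you have the frozen graph, write it out as a .pb file. A minimal sketch, assuming you want the file saved as model.pb in the current directory (both names are example values):

# Serialize the frozen GraphDef to a binary protobuf file.
tf.train.write_graph(frozen_graph, '.', 'model.pb', as_text=False)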
Answer 2 (2 votes)

Most probably the problem is related to the learning phase. Some layers behave differently in training and in test mode, and if you don't switch Keras to the test phase before exporting, the exported graph keeps the training-time behaviour. Dropout and batch normalization are the usual examples: in training mode, batch normalization uses the current batch's mean and variance, but at test time it uses the accumulated moving_mean and moving_variance. That's why you should call:

import keras.backend as K
K.set_learning_phase(0)  # 0 = test mode, 1 = training mode

Try this before loading the model; it should explain the difference you're seeing.
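
For completeness, a minimal sketch of the ordering, assuming the model file is named model.h5 as in the question. The learning phase must be set before the model is loaded, so that the layers are built in test mode:

import keras
import keras.backend as K

K.set_learning_phase(0)  # set this before any model is loaded or built
model = keras.models.load_model('model.h5', compile=False)
# ... then freeze/export the graph as shown in the other answer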
