I've got a weird issue related to Batch Normalization

I'm training a small neural network with TensorFlow 1.10. Training goes well and I get the expected results, but the network behaves strangely during validation and testing. I've checked my code multiple times but can't find the reason. Why am I getting different results in training and in validation?

tensorflow, bn, neural-networks
4 votes
1 Answer

During training you need to make sure the batch normalization update ops actually run. Batch normalization keeps two non-trainable variables, moving_mean and moving_variance, which are updated during training and then used in place of the batch statistics at validation/test time. In TensorFlow 1.x these update ops are collected under tf.GraphKeys.UPDATE_OPS but are not executed automatically, so if you never run them the moving statistics stay at their initial values and inference behaves strangely. To force the updates, add a control dependency around your training op:

import tensorflow as tf

# Collect the ops that update moving_mean and moving_variance
update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
# Run those updates as a dependency of every training step
with tf.control_dependencies(update_ops):
    train_op = tf.train.AdamOptimizer(learning_rate=L_RATE).minimize(loss=loss)
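
The other half of the picture is how the layer behaves at evaluation time: batch norm only uses moving_mean and moving_variance when it is told it is not training. Here is a minimal sketch of the full pattern, assuming the layers are built with tf.layers.batch_normalization and a boolean is_training placeholder (the placeholder name and the toy network are illustrative, not from the question):

import numpy as np
import tensorflow as tf

# Boolean placeholder that switches batch norm between batch statistics
# (training) and the accumulated moving statistics (validation/testing)
is_training = tf.placeholder(tf.bool, name="is_training")

x = tf.placeholder(tf.float32, [None, 32])
labels = tf.placeholder(tf.int64, [None])

h = tf.layers.dense(x, 64, activation=tf.nn.relu)
h = tf.layers.batch_normalization(h, training=is_training)
logits = tf.layers.dense(h, 10)
loss = tf.losses.sparse_softmax_cross_entropy(labels=labels, logits=logits)

# Same pattern as above: run the moving-statistic updates with each step
update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
with tf.control_dependencies(update_ops):
    train_op = tf.train.AdamOptimizer(learning_rate=1e-3).minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    batch_x = np.random.randn(16, 32).astype(np.float32)
    batch_y = np.random.randint(0, 10, size=16)
    # Training step: batch statistics are used and the moving stats update
    sess.run(train_op, feed_dict={x: batch_x, labels: batch_y, is_training: True})
    # Validation step: the frozen moving statistics are used instead
    val_loss = sess.run(loss, feed_dict={x: batch_x, labels: batch_y, is_training: False})

Feeding is_training=True during training updates the moving statistics via the control dependency; feeding is_training=False at validation/test time makes the layer use the accumulated statistics, which is usually what resolves the train/validation mismatch.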