
I've got a weird issue related to Batch Normalization

I'm training a small neural network using TensorFlow 1.10. The training process goes well and I get the expected results, but the model behaves strangely during validation and testing. I've checked my code multiple times but can't find the reason. Why am I getting different results between training and validation?

1 Answer

During training you need to make sure certain variable updates actually run. As you know, batch normalization contains non-trainable weights that change during training: moving_mean and moving_variance. If their update ops never run, the moving statistics stay at their initial values and inference behaves differently from training. To force these updates, put this code in your training process:

# Collect the ops that update moving_mean / moving_variance
update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
# Make the train op depend on them, so every training step also refreshes the moving statistics
with tf.control_dependencies(update_ops):
    train_op = tf.train.AdamOptimizer(learning_rate=L_RATE).minimize(loss=loss)
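For completeness, here is a minimal sketch of how the pieces fit together in a TF 1.x graph. The layer sizes, placeholder names, and loss below are illustrative choices of mine, not taken from the question; the key parts are the training flag passed to the batch-normalization layer and the control dependency on UPDATE_OPS.

import tensorflow as tf

x = tf.placeholder(tf.float32, [None, 10])        # hypothetical input features
labels = tf.placeholder(tf.float32, [None, 1])    # hypothetical targets
is_training = tf.placeholder(tf.bool, [])         # switches BN between batch and moving statistics

hidden = tf.layers.dense(x, 32)
# training=is_training: use batch statistics while training,
# and the accumulated moving_mean / moving_variance at validation time
hidden = tf.layers.batch_normalization(hidden, training=is_training)
hidden = tf.nn.relu(hidden)
output = tf.layers.dense(hidden, 1)

loss = tf.losses.mean_squared_error(labels, output)

# The moving statistics are refreshed by ops in UPDATE_OPS,
# so the train op has to depend on them
update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
with tf.control_dependencies(update_ops):
    train_op = tf.train.AdamOptimizer(learning_rate=1e-3).minimize(loss)

# Training step:   sess.run(train_op, {x: batch_x, labels: batch_y, is_training: True})
# Validation step: sess.run(loss,     {x: val_x,   labels: val_y,   is_training: False})

If you never feed is_training=False at evaluation time (or never run the update ops during training), validation results will look very different from training, which is the usual cause of the symptom described in the question.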