
Why do Testing and Training behave differently in PyTorch?

I designed a deep CNN for a segmentation task in PyTorch 1.6 and trained it for multiple epochs, but I ran into some odd behavior: the network behaves differently on the same dataset during the training and testing regimes. Any ideas why this happens?

1 Answer

I assume you are not switching the mode of your model. Training and evaluation are genuinely different regimes, and a model will behave differently between them when it contains layers such as BatchNorm or Dropout. During training, BatchNorm computes the mean and variance of each batch and updates running statistics stored inside the layer (called running_mean and running_var in PyTorch); during evaluation, those stored statistics are used instead of the batch statistics. Dropout is likewise active only during training. In PyTorch you have to switch the regime explicitly so the model knows how to behave:

import torch

# Restore the trained weights (MyModel and model_path stand in for your
# own model class and checkpoint file).
model = MyModel()
model.load_state_dict(torch.load(model_path))

# Switch to the evaluation regime. model.eval() is equivalent to
# model.train(False), so either call on its own is sufficient.
model.train(False)
model.eval()

Passing False to train(), or simply calling eval() (the two are equivalent), switches the model from the training regime to the evaluation regime: BatchNorm then uses its stored running statistics and Dropout is disabled. Remember to call model.train() again before you continue training.
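
If you want to see the difference directly, here is a minimal sketch (the layer sizes, the dropout probability, and the random input batch are arbitrary and only for illustration):

import torch
import torch.nn as nn

# A tiny model containing the two layer types that behave differently
# between the training and evaluation regimes.
model = nn.Sequential(
    nn.Linear(4, 4),
    nn.BatchNorm1d(4),
    nn.ReLU(),
    nn.Dropout(p=0.5),
    nn.Linear(4, 2),
)

x = torch.randn(8, 4)  # a random batch, purely for illustration

# Training regime: BatchNorm normalizes with the statistics of this batch
# and updates its running buffers; Dropout zeroes activations at random,
# so two forward passes on the same input usually differ.
model.train()
out_a = model(x)
out_b = model(x)
print(torch.allclose(out_a, out_b))  # usually False

# Evaluation regime: BatchNorm uses the stored running statistics and
# Dropout does nothing, so the output is deterministic.
model.eval()
out_c = model(x)
out_d = model(x)
print(torch.allclose(out_c, out_d))  # True

# The buffers mentioned above live on the BatchNorm layer itself.
print(model[1].running_mean)
print(model[1].running_var)

In the training regime the two forward passes usually disagree because Dropout masks different activations each time; in the evaluation regime the output is deterministic and BatchNorm relies on the running_mean and running_var buffers printed at the end.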
