You need to write a custom training loop instead of relying on model.compile() and model.fit().
Here is an example adapted from the Keras documentation that lets you apply different optimizers during different stages of your training procedure.
```python
# Instantiate the optimizers.
optimizer1 = tf.keras.optimizers.Adam()
optimizer2 = tf.keras.optimizers.SGD()

# Iterate over the batches of a dataset.
for x, y in dataset:
    # Open a GradientTape.
    with tf.GradientTape() as tape:
        # Forward pass.
        logits = model(x)
        # Loss value for this batch.
        loss_value = loss_fn(y, logits)
    # Get gradients of loss wrt the weights.
    gradients = tape.gradient(loss_value, model.trainable_weights)
    # Update the weights with whichever optimizer the current stage calls for.
    if condition1:
        optimizer1.apply_gradients(zip(gradients, model.trainable_weights))
    else:
        optimizer2.apply_gradients(zip(gradients, model.trainable_weights))
```
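Here is a self-contained sketch of the same idea with one concrete choice for the stage condition: the model, data, learning rates, and the epoch-based switch (Adam for the first half of training, SGD for the second) are all illustrative assumptions, not part of the original answer.

```python
import tensorflow as tf

# Hypothetical tiny model and synthetic regression data, for illustration only.
model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
loss_fn = tf.keras.losses.MeanSquaredError()
optimizer1 = tf.keras.optimizers.Adam(learning_rate=0.01)
optimizer2 = tf.keras.optimizers.SGD(learning_rate=0.01)

x = tf.random.normal((64, 4))
y = tf.random.normal((64, 1))
dataset = tf.data.Dataset.from_tensor_slices((x, y)).batch(16)

epochs = 4
for epoch in range(epochs):
    # Example stage condition: Adam for the first half of training, SGD after.
    optimizer = optimizer1 if epoch < epochs // 2 else optimizer2
    for x_batch, y_batch in dataset:
        with tf.GradientTape() as tape:
            loss_value = loss_fn(y_batch, model(x_batch))
        gradients = tape.gradient(loss_value, model.trainable_weights)
        optimizer.apply_gradients(zip(gradients, model.trainable_weights))
```

Because both optimizers update the same `model.trainable_weights`, switching between them mid-training is safe; each optimizer simply keeps its own internal state (e.g. Adam's moment estimates) across the stages it is used in.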