Multi-GPU training with Keras
I'm using Keras with the TensorFlow backend (Keras version 2.0.*, TensorFlow version 1.10.*).
Training takes too long on a single device. How can I distribute training across multiple GPUs?
Keras provides a simple utility function that makes parallel training easy:
keras.utils.multi_gpu_model(model, gpus=None, cpu_merge=True, cpu_relocation=False)
This function returns a modified model that can train on multiple GPUs in parallel by splitting each input batch across the devices.
Documentation: https://keras.io/utils/#multi_gpu_model
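
A minimal sketch of how it can be used, assuming two GPUs are available (the model architecture and the random training data here are placeholders, not anything from your setup):

import numpy as np
import tensorflow as tf
from keras.layers import Dense, Input
from keras.models import Model
from keras.utils import multi_gpu_model

# Build the template model on the CPU so its weights live in host
# memory, as the Keras docs recommend for multi_gpu_model.
with tf.device('/cpu:0'):
    inputs = Input(shape=(784,))
    x = Dense(256, activation='relu')(inputs)
    outputs = Dense(10, activation='softmax')(x)
    model = Model(inputs, outputs)

# Replicate the model on 2 GPUs: each batch is split into 2 sub-batches,
# one per GPU, and the outputs are merged back on the CPU.
parallel_model = multi_gpu_model(model, gpus=2)
parallel_model.compile(optimizer='rmsprop', loss='categorical_crossentropy')

# Random placeholder data, just for illustration.
x_train = np.random.random((1024, 784))
y_train = np.random.random((1024, 10))

# Train through the parallel wrapper as usual.
parallel_model.fit(x_train, y_train, epochs=2, batch_size=256)

# Save weights via the template model, not the wrapper, so they
# can later be loaded on any number of devices.
model.save_weights('model_weights.h5')

Note that the batch size applies to the combined batch, so with gpus=2 each GPU processes half of it; you can usually increase the batch size accordingly.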