Multi-GPU training with Keras

I'm using Keras with the TensorFlow backend (Keras 2.0.*, TensorFlow 1.10.*). Training takes too long. Can I distribute training across multiple GPUs?

tensorflow · keras · multi-gpu · gpu
1 Answer

Keras provides a simple utility function that makes parallel training easy:

keras.utils.multi_gpu_model(model, gpus=None, cpu_merge=True, cpu_relocation=False)

This function returns a modified model that can train on multiple GPUs in parallel. It implements single-machine data parallelism: each batch is split into sub-batches, one per GPU, each replica processes its sub-batch, and the results are merged (on the CPU by default, per `cpu_merge=True`).

Documentation: https://keras.io/utils/#multi_gpu_model
