Multi-GPU training with Keras

I'm using Keras with the TensorFlow backend (Keras version 2.0.*, TensorFlow version 1.10.*). Training takes too long. Can I distribute training across multiple GPUs?

1 Answer

Keras provides a simple utility function that makes data-parallel training easy:

keras.utils.multi_gpu_model(model, gpus=None, cpu_merge=True, cpu_relocation=False)

This function returns a modified model that can train on multiple GPUs in parallel. It works by replicating the model on each GPU, splitting every input batch into sub-batches, running them in parallel, and merging the results back into a single batch on the CPU.

See the Keras documentation for multi_gpu_model for details.
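A minimal sketch of how this can be used, assuming a machine with 2 GPUs; the toy model, the random data, and gpus=2 are illustrative placeholders, not anything prescribed by the API:

```python
import numpy as np
from keras.models import Sequential
from keras.layers import Dense
from keras.utils import multi_gpu_model

# Build the template model; its weights live in host memory and are
# shared by the per-GPU replicas.
model = Sequential([
    Dense(64, activation='relu', input_shape=(100,)),
    Dense(10, activation='softmax'),
])

# Replicate the model on 2 GPUs (assumed hardware): each batch is split
# into 2 sub-batches, processed in parallel, and merged on the CPU.
parallel_model = multi_gpu_model(model, gpus=2)
parallel_model.compile(loss='categorical_crossentropy', optimizer='adam')

# Train as usual; batch_size is the global batch, split across the GPUs.
x = np.random.random((256, 100))
y = np.random.random((256, 10))
parallel_model.fit(x, y, epochs=1, batch_size=64)

# Save via the template model, not the parallel wrapper, so the saved
# weights load on any machine regardless of GPU count.
model.save('model.h5')
```

Note the last line: saving through the original (template) model rather than the parallel wrapper is the usual pattern, since the wrapper's structure depends on the number of GPUs it was built for.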
