
Tensorflow cublas_status_alloc_failed

I just tried to train a simple model for MNIST classification, but got this error:

failed to create cublas handle: CUBLAS_STATUS_ALLOC_FAILED
attempting to perform BLAS operation using StreamExecutor without BLAS support
tensorflow.python.framework.errors_impl.InternalError: Blas SGEMM launch failed : a.shape=(100, 784), b.shape=(784, 256), m=100, n=256, k=784
         [[Node: MatMul = MatMul[T=DT_FLOAT, transpose_a=false, transpose_b=false, _device="/job:localhost/replica:0/task:0/gpu:0"](_recv_Placeholder_1/_12, Variable/read)]]
         [[Node: Mean/_15 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/cpu:0", send_device="/job:localhost/replica:0/task:0/gpu:0", send_device_incarnation=1, tensor_name="edge_35_Mean", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/cpu:0"]()]]

Why am I getting this?

3 votes
1 Answer

This error usually means cuBLAS could not allocate GPU memory when creating its handle. By default, TensorFlow tries to grab nearly all GPU memory up front, so the allocation can fail if another process (or another TensorFlow session) is already using the GPU. Tell TensorFlow to allocate memory incrementally instead:

config = tf.ConfigProto()
# Start with a small amount of GPU memory and grow as needed,
# instead of reserving almost all of it at session creation.
config.gpu_options.allow_growth = True
session = tf.Session(config=config)
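If you later move to TensorFlow 2.x, where `tf.Session` and `tf.ConfigProto` are gone, the same behavior is available via an environment variable or `tf.config`. A sketch, assuming TF 2.x; the variable must be set before TensorFlow is imported:

```python
import os

# Must be set before `import tensorflow` for it to take effect.
os.environ["TF_FORCE_GPU_ALLOW_GROWTH"] = "true"

# TF 2.x in-code equivalent (sketch, requires a GPU build of TensorFlow):
# import tensorflow as tf
# for gpu in tf.config.list_physical_devices("GPU"):
#     tf.config.experimental.set_memory_growth(gpu, True)
```

Either mechanism makes TensorFlow allocate GPU memory on demand rather than all at once.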