
TensorFlow 2.0 Alpha: Could not find any TPU devices

I've created a custom model with TensorFlow 2.0 Alpha and want to use distributed training:

resolver = tf.distribute.cluster_resolver.TPUClusterResolver()
tpu_strategy = tf.distribute.experimental.TPUStrategy(resolver)

When I create the TPU strategy, I get this error:

/usr/local/lib/python3.6/dist-packages/tensorflow/python/tpu/tpu_strategy_util.py in get_first_tpu_host_device(cluster_resolver)
     41         [x for x in context.list_devices() if "device:TPU:" in x])
     42     if not tpu_devices:
     43       raise RuntimeError("Could not find any TPU devices")
     44     spec = tf_device.DeviceSpec.from_string(tpu_devices[0])
     45     task_id = spec.task

RuntimeError: Could not find any TPU devices

I'm using Ubuntu 16.04 and installed TensorFlow 2.0 Alpha via pip.

1 Answer

Before starting your distributed training you need to connect to the remote TPU host and initialize the TPU system.

Do this before creating the strategy or performing any other TensorFlow operations; otherwise the local eager context sees no TPU devices, and `TPUStrategy` raises the "Could not find any TPU devices" error you're seeing.
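A minimal sketch of the full setup, assuming a TF 2.0 Alpha Cloud TPU environment. The connect call in the alpha was `tf.config.experimental_connect_to_host` (later releases renamed it `experimental_connect_to_cluster`), and the TPU address below is a placeholder you must replace with your own; this only runs on a machine with access to a live TPU:

```python
import tensorflow as tf

# Resolve the TPU cluster. On Cloud TPU, pass the TPU name or its
# grpc:// address (placeholder below); the empty-argument form only
# works in managed environments such as Colab.
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(
    tpu='grpc://10.0.0.2:8470')  # placeholder address, not a real TPU

# Connect to the remote TPU host *before* any other TF operations,
# so the eager context registers the remote TPU devices.
tf.config.experimental_connect_to_host(resolver.master())

# Initialize the TPU system, then create the strategy.
tf.tpu.experimental.initialize_tpu_system(resolver)
tpu_strategy = tf.distribute.experimental.TPUStrategy(resolver)

# Build and compile the model inside the strategy scope so its
# variables are placed on the TPU.
with tpu_strategy.scope():
    model = ...  # your custom model here
```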
