TensorFlow 2.0 Alpha: Could not find any TPU devices

I've created a custom model with TensorFlow 2.0 Alpha and want to use distributed training.

resolver = tf.distribute.cluster_resolver.TPUClusterResolver()
tf.tpu.experimental.initialize_tpu_system(resolver)
tpu_strategy = tf.distribute.experimental.TPUStrategy(resolver)

When I use the TPU strategy, I get this error:

/usr/local/lib/python3.6/dist-packages/tensorflow/python/tpu/tpu_strategy_util.py in get_first_tpu_host_device(cluster_resolver)
     41         [x for x in context.list_devices() if "device:TPU:" in x])
     42     if not tpu_devices:
     43       raise RuntimeError("Could not find any TPU devices")
     44     spec = tf_device.DeviceSpec.from_string(tpu_devices[0])
     45     task_id = spec.task

RuntimeError: Could not find any TPU devices

I'm using Ubuntu 16.04 and installed TensorFlow 2.0 via pip.

tensorflow, tensorflow2
1 Answer

Before starting your distributed training, you need to connect to the remote TPU host using:

tf.config.experimental_connect_to_host(TPU_ADDRESS)

This call must come before the TPU system initialization.
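Putting it together, the corrected sequence would look like the sketch below. `TPU_ADDRESS` is a placeholder for your TPU's gRPC endpoint (the example address is an assumption, not from the original post):

```python
import tensorflow as tf

# Placeholder TPU endpoint -- replace with your TPU's actual gRPC address.
TPU_ADDRESS = "grpc://10.0.0.2:8470"

# Connect to the remote TPU host FIRST, so the TPU devices become
# visible to the local context before any TPU lookup happens.
tf.config.experimental_connect_to_host(TPU_ADDRESS)

# Then resolve, initialize, and build the strategy as in the question.
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu=TPU_ADDRESS)
tf.tpu.experimental.initialize_tpu_system(resolver)
tpu_strategy = tf.distribute.experimental.TPUStrategy(resolver)
```

Without the `experimental_connect_to_host` call, `initialize_tpu_system` only sees the local devices, so the `"device:TPU:"` filter in the traceback matches nothing and raises the `RuntimeError`.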
