First, your PyTorch installation should be built with CUDA support, which happens automatically during installation when a GPU device is available and visible. The torch.cuda module provides a number of functions that report the capabilities and properties of the available GPU devices.
>>> import torch
>>> torch.cuda.is_available()
True
>>> torch.cuda.device_count()
1
>>> torch.cuda.get_device_name(0)
'GeForce RTX 2080 Ti'
>>> torch.cuda.get_device_properties(0)
_CudaDeviceProperties(name='GeForce RTX 2080 Ti', major=7, minor=5, total_memory=11019MB, multi_processor_count=68)
The module also contains functions for querying memory usage and allocations, device capabilities, and more.
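For example, the caching allocator's statistics can be queried directly. A minimal sketch; `memory_allocated` and `memory_reserved` are part of the public torch.cuda API, and the guard keeps the snippet runnable on CPU-only machines:

```python
import torch

if torch.cuda.is_available():
    # Bytes currently occupied by live tensors on device 0.
    print(torch.cuda.memory_allocated(0))
    # Bytes reserved by PyTorch's caching allocator (always >= allocated).
    print(torch.cuda.memory_reserved(0))
else:
    print("No CUDA device visible")
```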
Also, make sure you do not hard-code a specific device in your code. PyTorch lets you choose the device your code runs on.
import torch
device = torch.device("cpu")
model = MyModel().to(device)
X = torch.randn(10, 100).to(device)
In this case, you force the script to run on the CPU and ignore any GPUs, even when they are available and visible to the library.
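A more flexible pattern is to select the device at runtime, so the same script uses a GPU when one is visible and otherwise falls back to the CPU. In this sketch, a small `Sequential` network stands in for the `MyModel` class from the snippet above:

```python
import torch

# Pick the GPU when one is available, otherwise fall back to the CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Hypothetical two-layer model standing in for MyModel.
model = torch.nn.Sequential(
    torch.nn.Linear(100, 32),
    torch.nn.ReLU(),
    torch.nn.Linear(32, 10),
).to(device)

# Move the input batch to the same device as the model.
X = torch.randn(10, 100).to(device)
out = model(X)
print(out.shape)  # torch.Size([10, 10])
```

Because both the model and the data are moved with the same `device` object, the script never mixes CPU and GPU tensors in one operation.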