
[PyTorch] How to Check Which GPU Device Our Data Is On

Last Updated on 2021-07-05 by Clay

When I use PyTorch, I often train a model on one GPU (let's call it GPU_A) and save it. But when I load the saved model to test some new data, I sometimes put that new data on a different GPU, which we will call GPU_B.

In that case we get an error message. Of course, this is a problem we can solve: for example, we can move both the model and the new data to the same GPU device ("cuda:0").

model = model.to('cuda:0')
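
For completeness, here is a minimal sketch of this idea with a toy model, so that the model and the data end up on the same device. The model and input here are placeholders for illustration, not actual training code:

import torch
import torch.nn as nn

# A toy model and a sample input, just for illustration
model = nn.Linear(2, 1)
inputs = torch.tensor([[5.0, 3.0]])

# Move both to the same device so they can interact without errors
device = 'cuda:0' if torch.cuda.is_available() else 'cpu'
model = model.to(device)
inputs = inputs.to(device)

outputs = model(inputs)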



But what I really want to know is: is there a way to directly check which device my data is on? I searched the Internet and found that other people have the same need, so I am recording the solution here.


Use "get_device()" to check

Note: This method only works for Tensors, and it does not seem to work for a Tensor that is still on the CPU.

import torch

# Move the tensor to the fourth GPU (index 3) and check its device index
a = torch.tensor([5, 3]).to('cuda:3')
print(a.get_device())



Output:

3

This confirms that the method works.
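
As a side note, we can also read a tensor's device attribute directly. As far as I know, this also works for tensors still on the CPU, whereas get_device() may return -1 or raise an error for them, depending on the PyTorch version:

import torch

# A CPU tensor: .device reports it without any error
a = torch.tensor([5, 3])
print(a.device)   # cpu

# The same check after moving the tensor to a GPU, if one is available
if torch.cuda.is_available():
    b = a.to('cuda:0')
    print(b.device)   # cuda:0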

