This is a strange error message:
RuntimeError: Attempting to deserialize object on CUDA
device 3 but torch.cuda.device_count() is 1.
This error occurred when I was loading a trained model. My understanding is that the model was trained on GPU No. 3, but the machine I was testing on is different and has only one GPU, so the saved tensors cannot be restored to the device they were serialized from when the model is loaded.
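For reference, here is a minimal sketch (the model and file name are placeholders of my own) of how a checkpoint ends up tied to a specific GPU: saving a model whose parameters live on cuda:3 records that device in the serialized tensors, so a later torch.load() tries to restore them onto device 3 by default.

```python
import torch
import torch.nn as nn

# Hypothetical training machine with at least 4 GPUs
device = torch.device("cuda:3")

model = nn.Linear(10, 2).to(device)  # parameters now live on cuda:3

# The saved tensors remember that they were on CUDA device 3,
# so torch.load() will try to put them back there by default.
torch.save(model.state_dict(), "MODEL_NAME")
```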
It is worth noting that to() or cuda() cannot solve this problem: both of them move the data to a device only after it has been read in, but the current problem is that the data cannot be read in at all.
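In other words, the pattern below (a sketch using the same placeholder file name) fails before .to() or .cuda() ever runs, because the exception is raised inside torch.load() itself:

```python
import torch

# On a machine with a single GPU this raises the RuntimeError:
# torch.load() tries to restore the tensors onto cuda:3 first,
# so the .to() / .cuda() call is never reached.
state_dict = torch.load("MODEL_NAME")   # <- fails here
# model.load_state_dict(state_dict)
# model.to("cuda:0")                    # <- never executed
```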
I searched the Internet and found a way to load it successfully:
torch.load("MODEL_NAME", map_location='cpu')
For example, when loading the model, set the map_location parameter to the CPU or to another GPU that is actually available, so that the tensors are remapped to that device as they are read in.
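Putting it together, here is a hedged sketch (the model definition and file name are placeholders) of loading the checkpoint on a single-GPU or CPU-only machine and then moving it to whatever device is actually available:

```python
import torch
import torch.nn as nn

# Placeholder for the real model definition
model = nn.Linear(10, 2)

# Pick whatever device exists on the test machine
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

# map_location remaps the stored cuda:3 tensors while deserializing.
state_dict = torch.load("MODEL_NAME", map_location=device)
# Equivalently: torch.load("MODEL_NAME", map_location="cpu")
# or with a callable: map_location=lambda storage, loc: storage

model.load_state_dict(state_dict)
model.to(device)
model.eval()
```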
References
- https://github.com/computationalmedia/semstyle/issues/3
- https://stackoverflow.com/questions/53186736/runtimeerror-attempting-to-deserialize-object-on-cuda-device-2-but-torch-cuda-d