RuntimeError: Attempting to deserialize object on a CUDA device


Solution 1

Just giving a smaller answer. To solve this, you can change the default parameters of the load() function in serialization.py, which is stored in ./site-packages/torch/serialization.py

Write:

def load(f, map_location='cpu', pickle_module=pickle, **pickle_load_args):

instead of:

def load(f, map_location=None, pickle_module=pickle, **pickle_load_args):

Hope it helps.
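Note that edits to an installed package are lost on upgrade. The same default can be had without touching PyTorch's source by wrapping the load function instead. A minimal sketch (make_cpu_loader is our name for illustration, not a PyTorch API):

```python
import functools

def make_cpu_loader(load_fn):
    """Wrap a torch.load-style function so map_location defaults to 'cpu'."""
    @functools.wraps(load_fn)
    def load_cpu(f, *args, **kwargs):
        # Only fill in the default; an explicit map_location still wins.
        kwargs.setdefault("map_location", "cpu")
        return load_fn(f, *args, **kwargs)
    return load_cpu

# Usage (assuming torch is installed):
#   torch_load_cpu = make_cpu_loader(torch.load)
#   checkpoint = torch_load_cpu('classifier.pt')
```

Calling make_cpu_loader(torch.load) once and using the result everywhere gives the same behavior as the edit above, but survives reinstalls.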

Solution 2

If you don't have a GPU, pass map_location=torch.device('cpu') to torch.load() when loading the model:

state_dict = torch.load('classifier.pt', map_location=torch.device('cpu'))
net.load_state_dict(state_dict)  # applies the weights to net in place

Solution 3

"If you are running on a CPU-only machine, please use torch.load with map_location=torch.device('cpu') to map your storages to the CPU."

model = torch.load('model/pytorch_resnet50.pth', map_location='cpu')

Solution 4

You can remap the Tensor location at load time using the map_location argument to torch.load.

In the asker's repository, test.py calls model = loadmodel(), which in turn calls into model_loader.py to load the model with torch.load().

The dict form below remaps only storages that were saved on GPU 0; add the map_location argument:

torch.load(settings.MODEL_FILE, map_location={'cuda:0': 'cpu'})

In model_loader.py, add map_location={'cuda:0': 'cpu'} wherever torch.load() is called.
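For background, map_location accepts several forms: a device string, a torch.device, a dict remapping saved device tags (as above), or a callable taking (storage, location). A small helper sketch for choosing one at runtime (pick_map_location is illustrative, not part of PyTorch):

```python
def pick_map_location(cuda_available):
    """Return a map_location argument suitable for torch.load.

    'cpu' forces everything onto the CPU when no GPU is present;
    None lets torch restore tensors to the device they were saved from.
    """
    return None if cuda_available else "cpu"

# Typical call sites, all forcing CPU (assuming torch is installed):
#   torch.load(path, map_location="cpu")
#   torch.load(path, map_location=torch.device("cpu"))
#   torch.load(path, map_location={"cuda:0": "cpu"})
#   torch.load(path, map_location=lambda storage, loc: storage)
#
#   torch.load(path, map_location=pick_map_location(torch.cuda.is_available()))
```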

Solution 5

I tried adding map_location='cpu' to the load function, but it didn't work for me.

If you load a model trained on a GPU onto a CPU-only machine, you may hit this error. You can try the following workaround, which unpickles the file manually and redirects CUDA storages to the CPU:

import io
import pickle

import torch

class CPU_Unpickler(pickle.Unpickler):
    # Redirect any torch CUDA storage to the CPU while unpickling.
    def find_class(self, module, name):
        if module == 'torch.storage' and name == '_load_from_bytes':
            return lambda b: torch.load(io.BytesIO(b), map_location='cpu')
        return super().find_class(module, name)

contents = CPU_Unpickler(f).load()  # f is an open binary file object
Updated on January 18, 2022

Comments

  • Admin (over 2 years ago)

    I encountered a RuntimeError while trying to run the code on my machine's CPU instead of a GPU. The code is originally from this GitHub project - IBD: Interpretable Basis Decomposition for Visual Explanation. This is for a research project. I set GPU to False and looked at other solutions on this website.

    GPU = False               # running on GPU is highly suggested
    CLEAN = False             # set to "True" if you want to clean the temporary large files after generating result
    APP = "classification"    # Do not change! mode choice: "classification", "imagecap", "vqa". Currently "imagecap" and "vqa" are not supported.
    CATAGORIES = ["object", "part"]   # Do not change! concept categories that are chosen to detect: "object", "part", "scene", "material", "texture", "color"
    
    CAM_THRESHOLD = 0.5                 # the threshold used for CAM visualization
    FONT_PATH = "components/font.ttc"   # font file path
    FONT_SIZE = 26                      # font size
    SEG_RESOLUTION = 7                  # the resolution of cam map
    BASIS_NUM = 7                       # In decomposition, this is to decide how many concepts are used to interpret the weight vector of a class.
    

    Here is the error:

    Traceback (most recent call last):
      File "test.py", line 22, in <module>
        model = loadmodel()
      File "/home/joshuayun/Desktop/IBD/loader/model_loader.py", line 48, in loadmodel
        checkpoint = torch.load(settings.MODEL_FILE)
      File "/home/joshuayun/.local/lib/python3.6/site-packages/torch/serialization.py", line 387, in load
        return _load(f, map_location, pickle_module, **pickle_load_args)
      File "/home/joshuayun/.local/lib/python3.6/site-packages/torch/serialization.py", line 574, in _load
        result = unpickler.load()
      File "/home/joshuayun/.local/lib/python3.6/site-packages/torch/serialization.py", line 537, in persistent_load
        deserialized_objects[root_key] = restore_location(obj, location)
      File "/home/joshuayun/.local/lib/python3.6/site-packages/torch/serialization.py", line 119, in default_restore_location
        result = fn(storage, location)
      File "/home/joshuayun/.local/lib/python3.6/site-packages/torch/serialization.py", line 95, in _cuda_deserialize
        device = validate_cuda_device(location)
      File "/home/joshuayun/.local/lib/python3.6/site-packages/torch/serialization.py", line 79, in validate_cuda_device
        raise RuntimeError('Attempting to deserialize object on a CUDA '
    RuntimeError: Attempting to deserialize object on a CUDA device but 
      torch.cuda.is_available() is False. If you are running on a CPU-only machine, 
      please use torch.load with map_location='cpu' to map your storages to the CPU.