RuntimeError: Given groups=1, weight of size [64, 3, 7, 7], expected input[3, 1, 224, 224] to have 3 channels, but got 1 channels instead
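To decode the shapes in this error: weight of size [64, 3, 7, 7] is the first convolution of a torchvision ResNet (64 output channels, 3 input channels, 7x7 kernel), and input[3, 1, 224, 224] is being read as a batch of 3 single-channel 224x224 images. A minimal sketch reproducing the mismatch (layer sizes taken from the error message, not from the original training script):

```python
import torch
import torch.nn as nn

# First conv of a torchvision ResNet: 3 input channels, 64 output, 7x7 kernel
conv = nn.Conv2d(in_channels=3, out_channels=64, kernel_size=7)
print(conv.weight.shape)  # torch.Size([64, 3, 7, 7])

# A [3, 224, 224] image with the new axis inserted in the wrong place:
# conv2d reads this as batch=3, channels=1 and raises the channel error.
bad_input = torch.randn(3, 224, 224).unsqueeze(1)  # -> [3, 1, 224, 224]
try:
    conv(bad_input)
except RuntimeError as e:
    print(e)
```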

Here's the fix:

test_data, test_target = image_datasets['train'][idx]
test_data = test_data.cuda()
test_target = torch.tensor(test_target)
test_target = test_target.cuda()
test_data.unsqueeze_(0)    # add the batch dimension: [3, 224, 224] -> [1, 3, 224, 224]
test_target.unsqueeze_(0)  # give the scalar target a batch dimension as well
output = model_ft(test_data)

I had to change test_data.unsqueeze_(1) to test_data.unsqueeze_(0). unsqueeze_(1) inserts the new axis at position 1, turning the [3, 224, 224] image into [3, 1, 224, 224], which conv2d interprets as a batch of 3 single-channel images. unsqueeze_(0) adds the batch axis in front instead, producing the [1, 3, 224, 224] shape the network expects.
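To see concretely why the dimension argument matters, here is a shapes-only sketch (using a random tensor in place of the dataset image):

```python
import torch

# A 3-channel image as a dataset transform would return it: [C, H, W]
img = torch.randn(3, 224, 224)

# unsqueeze(0) adds the batch axis in front -- the shape the model expects
batched = img.unsqueeze(0)
print(batched.shape)  # torch.Size([1, 3, 224, 224])

# unsqueeze(1) inserts the axis at position 1 -- the buggy shape
wrong = img.unsqueeze(1)
print(wrong.shape)    # torch.Size([3, 1, 224, 224])
```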

Mona Jalal

Updated on June 28, 2022

Comments

  • Mona Jalal almost 2 years

    In the code below:

        model_ft.eval()
        test_data, test_target = image_datasets['train'][idx]
        test_data = test_data.cuda()
        #test_target = test_target.cuda()
        test_target = torch.tensor(test_target)
        test_target = test_target.cuda()
        test_data.unsqueeze_(1)
        test_target.unsqueeze_(0)
        print(test_data.shape)
        output = model_ft(test_data)
    

    I get the following error:

    Traceback (most recent call last):
      File "test_loocv.py", line 245, in <module>
        output = model_ft(test_data)
      File "/scratch/sjn-p3/anaconda/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 477, in __call__
        result = self.forward(*input, **kwargs)
      File "/scratch/sjn-p3/anaconda/anaconda3/lib/python3.6/site-packages/torchvision-0.2.1-py3.6.egg/torchvision/models/resnet.py", line 139, in forward
      File "/scratch/sjn-p3/anaconda/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 477, in __call__
        result = self.forward(*input, **kwargs)
      File "/scratch/sjn-p3/anaconda/anaconda3/lib/python3.6/site-packages/torch/nn/modules/conv.py", line 301, in forward
        self.padding, self.dilation, self.groups)
    RuntimeError: Given groups=1, weight of size [64, 3, 7, 7], expected input[3, 1, 224, 224] to have 3 channels, but got 1 channels instead
    

    Also, test_data has the shape: torch.Size([3, 1, 224, 224]).

    How should I fix this?

  • CodeBlooded almost 4 years
    Could you please explain a little further about this?