Given input size: (128x1x1). Calculated output size: (128x0x0). Output size is too small


Solution 1

Your problem is that by the time the tensor reaches pool4, its spatial size has already shrunk to 1x1 pixel. You need to either feed a much larger image, at least around double the size (~134x134), or remove one pooling layer from the network.
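
You can trace how the spatial size collapses. This is a minimal sketch, assuming each UNetConvBlock applies two unpadded 3x3 convolutions (as in the original U-Net; the block's definition is not shown in the question, so that part is an assumption):

    # Sketch: trace the encoder's spatial size for a 68x68 input, assuming
    # each UNetConvBlock shrinks the side by 4 (two valid 3x3 convs)
    # before MaxPool2d(2) halves it.
    size = 68
    for level in range(1, 5):
        size -= 4                     # two unpadded 3x3 convs: -2 each (assumption)
        print(f"after conv block {level}: {size}x{size}")
        if size < 2:
            print(f"pool{level} fails: the 2x2 kernel is larger than a {size}x{size} map")
            break
        size //= 2                    # MaxPool2d(kernel_size=2, stride=2)
        print(f"after pool{level}: {size}x{size}")

Under that assumption the map is already 1x1 after the fourth conv block, which matches the 1x1 spatial size reported in the error, so pool4 has nothing left to pool.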

Solution 2

nn.MaxPool2d() raises this error whenever its kernel_size is larger than the spatial size of its input.
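
For reference, the error can be reproduced in isolation by pooling a feature map whose spatial size (1x1 here, as reported in the traceback) is smaller than the kernel:

    import torch
    import torch.nn as nn

    pool = nn.MaxPool2d(kernel_size=2)   # 2x2 kernel, stride defaults to the kernel size
    x = torch.randn(1, 128, 1, 1)        # 1x1 spatial size, smaller than the 2x2 kernel
    pool(x)  # RuntimeError: ... Calculated output size: (128x0x0). Output size is too small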

Author: Ryan (updated on July 05, 2022)

Comments

  • Ryan, almost 2 years ago

    I am trying to train a U-Net that looks like this:

    `import torch.nn as nn
    import torch.nn.functional as F

    class UNet(nn.Module):
        def __init__(self, imsize):
            super(UNet, self).__init__()
            self.imsize = imsize

            self.activation = F.relu
            # four 2x2 max pools, each halving the spatial size
            self.pool1 = nn.MaxPool2d(2)
            self.pool2 = nn.MaxPool2d(2)
            self.pool3 = nn.MaxPool2d(2)
            self.pool4 = nn.MaxPool2d(2)
            # contracting path (UNetConvBlock defined elsewhere)
            self.conv_block1_64 = UNetConvBlock(4, 64)
            self.conv_block64_128 = UNetConvBlock(64, 128)
            self.conv_block128_256 = UNetConvBlock(128, 256)
            self.conv_block256_512 = UNetConvBlock(256, 512)
            self.conv_block512_1024 = UNetConvBlock(512, 1024)

            # expanding path (UNetUpBlock defined elsewhere)
            self.up_block1024_512 = UNetUpBlock(1024, 512)
            self.up_block512_256 = UNetUpBlock(512, 256)
            self.up_block256_128 = UNetUpBlock(256, 128)
            self.up_block128_64 = UNetUpBlock(128, 64)

            self.last = nn.Conv2d(64, 1, 1)`
    

    The loss function I am using is:

    `class BCELoss2d(nn.Module):
        def __init__(self, weight=None, size_average=True):
            super(BCELoss2d, self).__init__()
            self.bce_loss = nn.BCELoss(weight, size_average)

        def forward(self, logits, targets):
            # apply sigmoid, flatten predictions and targets, then compute BCE
            probs = F.sigmoid(logits)
            probs_flat = probs.view(-1)
            targets_flat = targets.view(-1)
            return self.bce_loss(probs_flat, targets_flat)`
    

    The input image tensor has shape [1, 1, 68, 68], and the labels have the same shape.

    I get this error:

    <ipython-input-72-270210759010> in forward(self, x)
         75 
         76         block4 = self.conv_block256_512(pool3)
    ---> 77         pool4 = self.pool4(block4)
         78 
         79         block5 = self.conv_block512_1024(pool4)

    /usr/local/lib/python3.5/dist-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
        323         for hook in self._forward_pre_hooks.values():
        324             hook(self, input)
    --> 325         result = self.forward(*input, **kwargs)
        326         for hook in self._forward_hooks.values():
        327             hook_result = hook(self, input, result)

    /usr/local/lib/python3.5/dist-packages/torch/nn/modules/pooling.py in forward(self, input)
        141         return F.max_pool2d(input, self.kernel_size, self.stride,
        142                             self.padding, self.dilation, self.ceil_mode,
    --> 143                             self.return_indices)
        144 
        145     def __repr__(self):

    /usr/local/lib/python3.5/dist-packages/torch/nn/functional.py in max_pool2d(input, kernel_size, stride, padding, dilation, ceil_mode, return_indices)
        332     See :class:`~torch.nn.MaxPool2d` for details.
        333     """
    --> 334     ret = torch._C._nn.max_pool2d(input, kernel_size, stride, padding, dilation, ceil_mode)
        335     return ret if return_indices else ret[0]
        336 

    RuntimeError: Given input size: (128x1x1). Calculated output size: (128x0x0). Output size is too small at /pytorch/torch/lib/THCUNN/generic/SpatialDilatedMaxPooling.cu:69
    

    I'm guessing I'm making a mistake in my channel sizes or pooling sizes, but I'm not sure exactly where the mistake is.

  • Coder, over 2 years ago
    Saved me 10,000 hours.