Stacking copies of an array / a torch tensor efficiently?


Solution 1

Note that you need to decide whether you want to allocate new memory for your expanded array or whether you simply require a new view of the existing memory of the original array.

In PyTorch, this distinction gives rise to the two methods expand() and repeat(). The former only creates a new view on the existing tensor where a dimension of size one is expanded to a larger size by setting the stride to 0. Any dimension of size 1 can be expanded to an arbitrary value without allocating new memory. In contrast, the latter copies the original data and allocates new memory.

In PyTorch, you can use expand() and repeat() as follows for your purposes:

import torch

L = 10
N = 20
A = torch.randn(L, L)
A.expand(N, L, L)  # specifies the new size
A.repeat(N, 1, 1)  # specifies the number of copies per dimension
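
As a quick sanity check, here is a minimal sketch (reusing A, L and N from above) that inspects strides and storage pointers to confirm that expand() shares memory while repeat() allocates:

expanded = A.expand(N, L, L)
repeated = A.repeat(N, 1, 1)

print(expanded.stride())                    # (0, 10, 1): stride 0 in the new dimension
print(expanded.data_ptr() == A.data_ptr())  # True: same underlying storage
print(repeated.data_ptr() == A.data_ptr())  # False: a fresh allocation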

In NumPy, there are many ways to achieve what you did above more elegantly and efficiently. For your particular purpose, I would recommend np.tile() over np.repeat(), since np.repeat() is designed to operate on the individual elements of an array, while np.tile() is designed to operate on the entire array. Hence,

import numpy as np

L = 10
N = 20
A = np.random.rand(L, L)
np.tile(A, (N, 1, 1))
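
As a quick check (the variable name tiled is mine), the result has the expected shape and is a genuine copy:

tiled = np.tile(A, (N, 1, 1))
print(tiled.shape)                 # (20, 10, 10)
print(np.shares_memory(tiled, A))  # False: np.tile always copies the data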

Solution 2

If you don't mind creating new memory:

  • In numpy, you can use np.repeat() or np.tile(). With efficiency in mind, you should choose the one which organises the memory for your purposes, rather than rearranging after the fact:
    • np.repeat([1, 2], 2) == [1, 1, 2, 2]
    • np.tile([1, 2], 2) == [1, 2, 1, 2]
  • In pytorch, you can use tensor.repeat(). Note: this matches np.tile, not np.repeat (see the sketch after this list).
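
Here is a minimal sketch of that correspondence; assuming a reasonably recent PyTorch, torch.repeat_interleave() is the counterpart of np.repeat():

import numpy as np
import torch

x = np.array([1, 2])
print(np.repeat(x, 2))  # [1 1 2 2]: each element repeated in place
print(np.tile(x, 2))    # [1 2 1 2]: the whole array repeated

t = torch.tensor([1, 2])
print(t.repeat(2))                    # tensor([1, 2, 1, 2]): like np.tile
print(torch.repeat_interleave(t, 2))  # tensor([1, 1, 2, 2]): like np.repeat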

If you don't want to create new memory:

  • In numpy, you can use np.broadcast_to(). This creates a read-only view of the memory.
  • In pytorch, you can use tensor.expand(). This creates a writable view of the memory, so in-place operations like += write through to every "copy" at once, which can produce surprising results (see the sketch below).
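
Here is a minimal sketch of both behaviours (variable names are mine):

import numpy as np
import torch

M = np.arange(4.0).reshape(2, 2)
v = np.broadcast_to(M, (3, 2, 2))  # no allocation
print(np.shares_memory(v, M))      # True
print(v.flags.writeable)           # False: assigning into v raises a ValueError

t = torch.arange(4.0).reshape(2, 2)
e = t.expand(3, 2, 2)       # no allocation either, but writable
e[0, 0, 0] = 99.0           # write through one "copy"...
print(e[1, 0, 0], t[0, 0])  # ...and it shows up in all of them, and in t itself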

Comments

  • Gericault (almost 2 years)

    I'm a Python/PyTorch user. First, in numpy: let's say I have an array M of size LxL, and I want to obtain an array A = (M, ..., M) of size, say, NxLxL. Is there a more elegant/memory-efficient way of doing it than:

    A = np.array([M]*N)
    

    Same question for a torch tensor! Because now, if M is a Variable(torch.tensor), I have to do:

    A=torch.autograd.Variable(torch.tensor(np.array([M]*N))) 
    

    which is ugly!

  • Nicolas Gervais (about 2 years)
    np.tile is different from np.repeat
  • hpaulj (about 2 years)
    np.tile uses np.repeat, once for each dimension. It doesn't use repeat's ability to apply a different count to each element, but otherwise it's just an extension of repeat. Its code is readable Python.
  • Multihunter (about 2 years)
    @NicolasGervais Excellent. How are they different? I'd like to update the answer.
  • Nicolas Gervais (about 2 years)
    if you tile abc it will give abcabcabc and if you repeat it will give aaabbbccc