Building up an array in numpy/scipy by iteration in Python?


Solution 1

If I understand your question correctly, this should do what you want:

import numpy as NP

# the array passed into your function
ax = NP.random.randint(10, 99, 20).reshape(5, 4)

# just define a function to operate on some data
fnx = lambda x : NP.sum(x)**2

# apply the function directly to the numpy array
new_row = NP.apply_along_axis(func1d=fnx, axis=0, arr=ax)

# 'append' the new values to the original array
new_row = new_row.reshape(1,4)
ax = NP.vstack((ax, new_row))
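
As a quick check: ax started as a (5, 4) array, so the stacked result has one extra row holding the squared column sums.

print(ax.shape)                                            # (6, 4)
print(NP.array_equal(ax[-1], NP.sum(ax[:-1], axis=0)**2))  # True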

Solution 2

NumPy provides the fromiter function:

import numpy as np

def myfunc(n):
    for i in range(n):
        yield i**2

np.fromiter(myfunc(5), dtype=int)

which yields

array([ 0,  1,  4,  9, 16])
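
If you know in advance how many items the iterator will produce, you can pass the count parameter so fromiter preallocates the output instead of growing it (a point also made in the comments below):

np.fromiter(myfunc(5), dtype=int, count=5)   # preallocates space for 5 items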

Solution 3

The recommended way to do this is to preallocate before the loop and use slicing and indexing to insert the values:

import numpy

# 1D: preallocate, then fill one element per iteration
my_array = numpy.zeros(1000)
for i in range(1000):
    my_array[i] = functionToGetValue(i)

# 2D: preallocate, then fill an entire row per iteration
my_array = numpy.zeros((1000, 1000))
for i in range(1000):
    my_array[i, :] = functionToGetValue(i)
    # ...or fill an entire column instead:
    # my_array[:, i] = functionToGetValue(i)

numpy does provide an array.resize() method, but this will be far slower due to the cost of reallocating memory inside a loop. If you must have flexibility, then I'm afraid the only way is to create an array from a list.

EDIT: If you are worried that you're allocating too much memory for your data, I'd use the method above to over-allocate and then when the loop is done, lop off the unused bits of the array using array.resize(). This will be far, far faster than constantly reallocating the array inside the loop.
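
For illustration, a minimal sketch of that over-allocate-then-trim pattern; the capacity and the data source here are made-up placeholders:

import numpy

capacity = 1000                              # deliberately more than we expect to need
my_array = numpy.zeros(capacity)

count = 0
for value in (x**2 for x in range(317)):     # stand-in for data of unknown size
    my_array[count] = value
    count += 1

# one final resize lops off the unused tail; refcheck=False avoids a
# spurious reference-count error in interactive sessions
my_array.resize((count,), refcheck=False)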

EDIT: In response to @user248237's comment, assuming you know any one dimension of the array (for simplicity's sake):

my_array = numpy.zeros((10000, SOMECONSTANT))

for i in range(someVariable):
    if i >= my_array.shape[0]:
        my_array.resize((my_array.shape[0]*2, SOMECONSTANT))

    my_array[i, :] = someFunction()

# lop off the extra rows with a final resize() here

The general principle is "allocate more than you think you'll need, and if things change, resize the array as few times as possible". Doubling the size could be thought of as excessive, but in fact this is the method used by several data structures in the standard libraries of other languages (java.util.Vector does this by default, for example, and I think several implementations of std::vector in C++ do as well).
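
For concreteness, a runnable sketch of that doubling strategy; the width and fill values are made-up placeholders for SOMECONSTANT and someFunction():

import numpy

WIDTH = 4                                    # placeholder for SOMECONSTANT
my_array = numpy.zeros((8, WIDTH))           # deliberately small initial allocation

n_rows = 0
for i in range(50):                          # placeholder for someVariable iterations
    if n_rows >= my_array.shape[0]:
        # grow geometrically so resize() runs O(log n) times rather than O(n)
        my_array.resize((my_array.shape[0] * 2, WIDTH), refcheck=False)
    my_array[n_rows, :] = i                  # placeholder for someFunction()
    n_rows += 1

my_array.resize((n_rows, WIDTH), refcheck=False)   # final trim to the rows actually used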


Comments

  • Alex Riley almost 4 years

    Often, I am building an array by iterating through some data, e.g.:

    my_array = []
    for n in range(1000):
      # do operation, get value 
      my_array.append(value)
    # cast to array
    my_array = array(my_array)
    

    I find that I have to first build a list and then cast it (using "array") to an array. Is there a way around this? All these casting calls clutter the code... how can I iteratively build up "my_array", with it being an array from the start?

    • OscarRyz about 14 years
      what happens if you don't do it?
    • Andriy Drozdyuk about 14 years
      The reason numpy is so fast in the first place is that it operates on fixed-size arrays rather than dynamic lists. So getting rid of the list just to reduce "clutter" in your code is probably not the right way to go. If you know the size beforehand (1000), preallocate the array. If you don't, building up a list is definitely the best way to go, as Python lists [] are pretty efficient.
  • Admin about 14 years
    What if I don't know the size ahead of time?
  • Admin about 14 years
    That makes sense, but suppose I built up arrays this way by allocating more than I need -- how can I then iterate over the array? If I allocate numpy.zeros(1000) but only use 50 elements, I don't want to iterate until I hit a zero... the approach of overallocating seems to create this weird situation where each array data type will require a different "stop" condition for looping. Is there a way around this?
  • Chinmay Kanchi about 14 years
    As I said, you can make a final call to resize() when the loop is done. If you shrink the array to the correct size, you will throw away all the zero elements.
  • meawoppl over 11 years
    Someone should write something akin to the std::vector approach, where blocks are allocated as things get appended...
  • maxy over 10 years
    The Python array module can do this; for discussion, see the answer to this question. (A minimal sketch of this approach follows these comments.)
  • xaedes over 10 years
    As this is the accepted answer I have to say this: I have seen vstack used a lot like this. One should be aware that this is really(!) non-performant: if you build up a big array this way, you do a lot of unnecessary memory copy operations (each vstack call copies the entire array). See the answers below.
  • Marijn van Vliet about 7 years
    When using np.fromiter and you know the size of the array beforehand, you can pass it as a parameter so the array gets pre-allocated. This increases performance immensely. So in the example above, do: np.fromiter(myfunc(5), dtype=int, count=5)
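
Following up on maxy's suggestion above, a minimal sketch of the array-module approach; the typecode and the np.frombuffer conversion are my illustration, not taken from the linked answer:

import array
import numpy as np

buf = array.array('d')               # typed buffer with amortized O(1) appends
for i in range(50):
    buf.append(i**2)                 # grows in blocks, much like a C++ vector

my_array = np.frombuffer(buf, dtype=np.float64)   # reinterprets the buffer without copying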