How to read a text file into a list or an array with Python


Solution 1

You will have to split your string into a list of values using split()

So,

lines = text_file.read().split(',')
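Note that if the file has more than one line, split(',') keeps the newline characters inside the resulting items (as the comments below point out). A quick sketch with an in-memory string standing in for the file contents:

```python
# 'text' stands in for text_file.read() on a two-line file.
text = "1,2\n3,4\n"
lines = text.split(',')
print(lines)  # ['1', '2\n3', '4\n'] -- note the embedded newlines
```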

EDIT: I didn't realise there would be so much interest in this. Here's a more idiomatic approach:

import csv
with open('filename.csv', 'r') as fd:
    reader = csv.reader(fd)
    for row in reader:
        # do something with each row, e.g.
        print(row)
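If what you want is one flat list of values rather than a list per row, you can extend a list with each row. A self-contained sketch (the filename and sample data are made up for illustration):

```python
import csv

# Write a small sample file so the sketch is self-contained
# ('sample.csv' is a placeholder name).
with open('sample.csv', 'w', newline='') as fd:
    fd.write('0,0,200,0\n53,1,0,255\n')

# Flatten every comma-separated row into a single list of strings.
values = []
with open('sample.csv', newline='') as fd:
    for row in csv.reader(fd):
        values.extend(row)

print(values)  # ['0', '0', '200', '0', '53', '1', '0', '255']
```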

Solution 2

You can also use numpy's loadtxt, like so:

from numpy import loadtxt
lines = loadtxt("filename.dat", comments="#", delimiter=",", unpack=False)
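loadtxt also accepts any file-like object, so this sketch uses io.StringIO in place of a real file to show the result you get back (a 2-D float array you can index by row and column):

```python
import io
from numpy import loadtxt

# A StringIO stands in for a real file in this sketch.
data = io.StringIO("1,2,3\n4,5,6\n")
arr = loadtxt(data, delimiter=",")
print(arr.shape)   # (2, 3)
print(arr[1, 2])   # 6.0
```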

Solution 3

So you want to create a list of lists... We need to start with an empty list

list_of_lists = []

Next, we read the file content line by line:

with open('data') as f:
    for line in f:
        inner_list = [elt.strip() for elt in line.split(',')]
        # alternatively, if you need to use the file content as numbers
        # inner_list = [int(elt.strip()) for elt in line.split(',')]
        list_of_lists.append(inner_list)

A common use case is that of columnar data, but our units of storage are the rows of the file, read one by one, so you may want to transpose your list of lists. This can be done with the following idiom:

by_cols = zip(*list_of_lists)
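As a quick illustration with made-up rows (wrapping the result in list() so it prints in Python 3 as well):

```python
list_of_lists = [['1', '2'], ['3', '4'], ['5', '6']]

# zip(*rows) pairs up the i-th element of every row,
# i.e. it turns rows into columns.
by_cols = list(zip(*list_of_lists))
print(by_cols)  # [('1', '3', '5'), ('2', '4', '6')]
```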

Another common use is to give a name to each column

col_names = ('apples sold', 'pears sold', 'apples revenue', 'pears revenue')
by_names = {}
for i, col_name in enumerate(col_names):
    by_names[col_name] = by_cols[i]

so that you can operate on homogeneous data items

mean_apple_prices = [money/fruits for money, fruits in
                     zip(by_names['apples revenue'], by_names['apples sold'])]

Most of what I've written can be sped up using the csv module from the standard library. Another option is the third-party module pandas, which lets you automate most aspects of a typical data analysis (but has a number of dependencies).


Update: While in Python 2 zip(*list_of_lists) returns a different (transposed) list of lists, in Python 3 the situation has changed and zip(*list_of_lists) returns a zip object, which is not subscriptable.

If you need indexed access, you can use

by_cols = list(zip(*list_of_lists))

which gives you a list of lists in both versions of Python.

On the other hand, if you don't need indexed access and what you want is just to build a dictionary indexed by column names, a zip object is just fine...

with open('some_data.csv') as file:
    names = get_names(next(file))
    # zip is lazy, so consume it while the file is still open
    columns = zip(*((x.strip() for x in line.split(',')) for line in file))
    d = {}
    for name, column in zip(names, columns):
        d[name] = column
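The dictionary-building pattern above can be exercised without a file at all; a minimal sketch with made-up column names and rows:

```python
names = ['a', 'b']
rows = [['1', '2'], ['3', '4']]
columns = zip(*rows)  # lazy zip object, fine for a single pass
d = {name: column for name, column in zip(names, columns)}
print(d)  # {'a': ('1', '3'), 'b': ('2', '4')}
```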

Solution 4

This question is asking how to read the comma-separated value contents from a file into an iterable list:

0,0,200,0,53,1,0,255,...,0.

The easiest way to do this is with the csv module as follows:

import csv
with open('filename.dat', newline='') as csvfile:
    spamreader = csv.reader(csvfile, delimiter=',')
    # the reader reads lazily, so iterate while the file is still open
    for row in spamreader:
        print(', '.join(row))
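If you want the values as ints in a single flat list (as the question's data suggests), a self-contained sketch, which first writes a one-line sample file so it can run anywhere ('filename.dat' mirrors the name used above):

```python
import csv

# Write a sample one-line file for the sketch.
with open('filename.dat', 'w', newline='') as f:
    f.write('0,0,200,0,53,1,0,255\n')

with open('filename.dat', newline='') as csvfile:
    spamreader = csv.reader(csvfile, delimiter=',')
    # flatten all rows into one list, converting each value to int
    numbers = [int(value) for row in spamreader for value in row]

print(numbers)  # [0, 0, 200, 0, 53, 1, 0, 255]
```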

See documentation for more examples.


Author: user2037744

Updated on October 21, 2021

Comments

  • user2037744
    user2037744 over 2 years

    I am trying to read the lines of a text file into a list or array in python. I just need to be able to individually access any item in the list or array after it is created.

    The text file is formatted as follows:

    0,0,200,0,53,1,0,255,...,0.
    

    Where the ... is above, the actual text file has hundreds or thousands more items.

    I'm using the following code to try to read the file into a list:

    text_file = open("filename.dat", "r")
    lines = text_file.readlines()
    print lines
    print len(lines)
    text_file.close()
    

    The output I get is:

    ['0,0,200,0,53,1,0,255,...,0.']
    1
    

    Apparently it is reading the entire file into a list of just one item, rather than a list of individual items. What am I doing wrong?

  • A.W.
    A.W. over 10 years
    I need this too. I noticed on a Raspberry Pi that numpy works really slowly. For this application I reverted to opening the file and reading it line by line.
  • gboffi
    gboffi over 7 years
    I think that this answer could be improved... If you consider a multiline .csv file (as mentioned by the OP), e.g., a file containing the alphabetic characters 3 per row (a,b,c, d,e,f, etc.) and apply the procedure described above, what you get is a list like this: ['a', 'b', 'c\nd', 'e', ... ] (note the item 'c\nd'). I'd like to add that, the above problem notwithstanding, this procedure collapses data from individual rows into a single mega-list, usually not what I want when processing a record-oriented data file.
  • Ozgur Ozturk
    Ozgur Ozturk over 7 years
    This is useful for specifying the format too, via the dtype (data-type) parameter: docs.scipy.org/doc/numpy/reference/generated/numpy.loadtxt.html Pandas read_csv is very easy to use, but I did not see a way to specify the format for it. It was reading floats from my file, whereas I needed strings. Thanks @Thiru for showing loadtxt.
  • Blairg23
    Blairg23 about 6 years
    The OP said they wanted a list of data from a CSV, not a "list of lists". Just use the csv module...
  • Alex M981
    Alex M981 over 5 years
    If the txt file contains strings, then dtype should be specified, like so: lines = loadtxt("filename.dat", dtype=str, comments="#", delimiter=",", unpack=False)
  • Jean-François Fabre
    Jean-François Fabre almost 4 years
    split is going to leave the newlines in. Don't do this; use the csv module or some other existing parser.