R text file and text mining...how to load data


Solution 1

Like @richiemorrisroe I found this poorly documented. Here's how I get my text in for use with the tm package and build the document-term matrix:

library(tm)  # load the text mining library
setwd('F:/My Documents/My texts')  # set R's working directory to near where my files are
a <- Corpus(DirSource("/My Documents/My texts"), readerControl = list(language = "lat"))  # specifies the exact folder where my text file(s) are, for analysis with tm
summary(a)  # check what went in
a <- tm_map(a, removeNumbers)      # strip digits
a <- tm_map(a, removePunctuation)  # strip punctuation
a <- tm_map(a, stripWhitespace)    # collapse runs of whitespace
a <- tm_map(a, tolower)            # convert to lower case (newer versions of tm want content_transformer(tolower) here)
a <- tm_map(a, removeWords, stopwords("english"))  # this stopword file is at C:\Users\[username]\Documents\R\win-library\2.13\tm\stopwords
a <- tm_map(a, stemDocument, language = "english")  # stem the terms; the language must be specified
adtm <- DocumentTermMatrix(a)          # build the document-term matrix
adtm <- removeSparseTerms(adtm, 0.75)  # drop terms that are missing from at least 75% of documents

In this case you don't need to specify the exact file name. As long as it's the only file in the directory given to DirSource() above, the tm functions will pick it up. I do it this way because I have not had any success specifying the file name in that call.
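Once the matrix is built, a couple of quick checks help confirm it looks sensible. This is just a sketch, not part of the original recipe; adjust the indices and frequency cut-off to your own data:

inspect(adtm[1:2, 1:10])          # peek at the first two documents and ten terms
findFreqTerms(adtm, lowfreq = 5)  # terms that occur at least five times in total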

If anyone can suggest how to get text into the lda package, I'd be most grateful. I haven't been able to work that out at all.
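For what it's worth, here is a rough sketch of one route in (not from the original answer): the lda package doesn't read tm corpora directly, but its lexicalize() function will tokenise a character vector. The file name, topic count and other settings below are just placeholders:

library(lda)
doclines <- readLines("F:/My Documents/My texts/mytext.txt")  # mytext.txt is a placeholder file name
lex <- lexicalize(doclines, lower = TRUE)                     # returns list(documents, vocab)
fit <- lda.collapsed.gibbs.sampler(lex$documents, K = 5, vocab = lex$vocab,
                                   num.iterations = 100, alpha = 0.1, eta = 0.1)
top.topic.words(fit$topics, 5)                                # five most probable words per topic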

Solution 2

Can't you just use the readPlain function from the same library? Or you could use the more common scan function.

mydoc.txt <- scan("./mydoc.txt", what = "character")  # reads the file in as a character vector, one element per whitespace-separated word
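Note that scan() with what = "character" splits the file into individual words, so to get a single tm document out of it you would paste those back together first. A minimal sketch (the object names here are just examples):

library(tm)
mydoc <- paste(mydoc.txt, collapse = " ")  # glue the scanned words back into one string
mycorpus <- Corpus(VectorSource(mydoc))    # a one-document corpus, ready for tm_map() etc.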

Solution 3

I actually found this quite tricky to begin with, so here's a more comprehensive explanation.

First, you need to set up a source for your text documents. I found that the easiest way (especially if you plan on adding more documents) is to create a directory source that will read all of your files in.

source <- DirSource("yourdirectoryname/")  # input path for documents
YourCorpus <- Corpus(source, readerControl = list(reader = readPlain))  # load in the documents

You can then apply the stemDocument function to your corpus, as sketched below. HTH.
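For example, stemming the corpus built above might look like this (note that the language argument seems to be required, and recent versions of tm use the SnowballC package for stemming):

YourCorpus <- tm_map(YourCorpus, stemDocument, language = "english")  # stem every document in the corpus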

Solution 4

I believe what you want to do is read an individual file into a corpus and then have it treat the different rows of the text file as different observations.

See if this gives you what you want:

text <- read.delim("this is a test for R load.txt", sep = "\t", header = FALSE)  # the separator is "\t" (a tab), not "/t"; header = FALSE keeps the first row as data
text_corpus <- Corpus(VectorSource(text[, 1]), readerControl = list(language = "en"))  # pass the text column so each row becomes its own document

This assumes that the file "this is a test for R load.txt" has only one column, which contains the text data.

Here, text_corpus is the object that you are looking for.
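To double-check that each row really did become its own document, something like this should do (a sketch; the index assumes at least two rows):

length(text_corpus)        # number of documents in the corpus
inspect(text_corpus[1:2])  # look at the first two documents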

Hope this helps.


Comments

  • Admin, about 4 years ago

    I am using the R package tm and I want to do some text mining. This is one document and is treated as a bag of words.

    I don't understand the documentation on how to load a text file and create the necessary objects to start using features such as:

    stemDocument(x, language = map_IETF(Language(x)))
    

    So assume that this is my document: "this is a test for R load".

    How do I load the data for text processing and create the object x?

  • Ben, over 12 years ago
    I just discovered that the stemDocument function doesn't seem to work at all unless the language is specified, so I've edited my code above to include that.