processing strings of text for neural network input


Solution 1

I'll go ahead and summarize our discussion as the answer here.

Your goal is to incorporate text into your neural network. We have established that traditional ANNs are not really suitable for analyzing text. The underlying reason is that ANNs operate on inputs spanning a continuous range of values, where nearness of two input values implies some nearness in meaning. Words have no such notion of nearness, so there is no numerical encoding of words that makes sense as input to an ANN.

On the other hand, a solution that might work is to use a more traditional semantic analysis that could, perhaps, produce sentiment ranges for a list of topics; those topics and their sentiment values could then be used as input to an ANN.

Solution 2

In response to your comments: no, your proposed scheme doesn't quite make sense. An artificial neuron's output by its nature represents a continuous, or at least a binary, value. It does not make sense to map between a huge discrete enumeration (like UTF-8 characters) and the continuous range represented by a floating-point value. The ANN will necessarily act as if 0.1243573 is an extremely good approximation to 0.1243577, when those numbers could easily be mapped to the newline character and the character "a", for example, which are not good approximations of each other at all.
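To make the problem concrete, here is a minimal sketch of the naive scheme under discussion (plain Python; the characters compared are just illustrative): scaling Unicode code points into [0, 1].

    # Naive scheme: map each character to a float by scaling its code point.
    MAX_CODEPOINT = 0x10FFFF  # highest Unicode code point

    def naive_encode(ch):
        """Map a single character to a float in [0, 1] by its code point."""
        return ord(ch) / MAX_CODEPOINT

    # Adjacent code points land almost on top of each other numerically,
    # even though the characters are unrelated in meaning.
    for a, b in [("z", "{"), ("\n", "a")]:
        print(repr(a), repr(b), abs(naive_encode(a) - naive_encode(b)))

Adjacent code points like "z" and "{" end up about 1e-6 apart, so a network receiving these inputs is forced to treat entirely different symbols as near-identical.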

Quite frankly, there is no reasonable representation of a "general Unicode string" as input to an ANN. A reasonable representation depends on the specifics of what you're doing. It depends on your answers to the following questions:

  • Are you expecting words to show up in the input strings as opposed to blocks of characters? What words are you expecting to show up in the strings?
  • What is the length distribution of the input strings?
  • What is the expected entropy of the input strings?
  • Is there any domain specific knowledge you have about what you expect the strings to look like?

and most importantly

  • What are you trying to do with the ANN? This is not something you can ignore.

It's possible you have a setup for which no translation will actually allow you to do what you want with the neural network. Until you answer those questions (you skirt around them in your comments above), it's impossible to give a good answer.

I can give an example answer that would work if you happened to give certain answers to the above questions. For example, if you are reading in strings of arbitrary length composed of a small vocabulary of words separated by spaces, then I would suggest a translation scheme where you create N inputs, one for each word in the vocabulary, and use a recurrent neural network to feed in the words one at a time by setting the corresponding input to 1 and all the others to 0.
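A minimal sketch of that translation scheme (Python with NumPy; the five-word vocabulary is a made-up example, not something from the question):

    import numpy as np

    vocab = ["the", "cat", "sat", "on", "mat"]  # hypothetical small vocabulary
    word_to_index = {w: i for i, w in enumerate(vocab)}

    def one_hot(word):
        """Return a vector with a 1 at the word's index and 0 everywhere else."""
        vec = np.zeros(len(vocab), dtype=np.float32)
        vec[word_to_index[word]] = 1.0
        return vec

    # A recurrent network would consume one of these vectors per time step.
    sentence = "the cat sat on the mat"
    sequence = [one_hot(w) for w in sentence.split()]

Because exactly one input is active at each step, the network never has to pretend that two unrelated words are numerically "close".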

Solution 3

I think it would be fascinating to feed text (encoded at the character level) into a deep belief network, to see what properties of the language it can discover.
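As a sketch of what "encoded at the character level" could mean in practice (plain Python; the sample text is arbitrary), one can build the alphabet from the corpus itself and map each character to an index, which can then be one-hot encoded just like the words in Solution 2:

    text = "hello world"  # stand-in for a real corpus
    chars = sorted(set(text))  # the corpus's own alphabet
    char_to_index = {c: i for i, c in enumerate(chars)}

    # Integer indices per character, ready for one-hot encoding or embedding.
    encoded = [char_to_index[c] for c in text]
    print(chars)    # [' ', 'd', 'e', 'h', 'l', 'o', 'r', 'w']
    print(encoded)  # [3, 2, 4, 4, 5, 0, 7, 5, 6, 4, 1]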

There has been a lot of work done recently on neural network language modeling (mainly at the word level, but also at the character level).

See these links for more info

http://www.stanford.edu/group/pdplab/pdphandbook/handbookch8.html
http://code.google.com/p/word2vec/

The word vectors are learned by training on a large corpus of Wikipedia articles and similar text, and have been shown to acquire semantic and syntactic features, which allows a "distance" to be defined between them.

"It was recently shown that the word vectors capture many linguistic regularities, for example vector operations vector('king') - vector('man') + vector('woman') is close to vector('queen')"

Also see this great research paper by Ilya Sutskever on generating text one character at a time, which exhibits the features of the English language after being trained on Wikipedia. Amazing stuff!

http://www.cs.toronto.edu/~ilya/pubs/2011/LANG-RNN.pdf
http://www.cs.toronto.edu/~ilya/rnn.html (online text generation demo - very cool!)

Solution 4

It is not exactly clear what you are trying to do, but I guess it is in some sense related to what people call "Natural Language Processing". There are lots of references about this... I am not an expert, but I know, for example, that there are some interesting references published by O'Reilly.

From the NN perspective there are lots of different NN models. I think you are referring to the most popular one, known as the multilayer perceptron, trained with a kind of backpropagation algorithm, but there are lots of models of associative memory that may be more suitable for your case. A very good reference on this is the Simon Haykin book.

However, if I tried to do something like this, I would start by trying to understand how the frequencies of letters, syllables, and words arise together in the English language.
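A rough sketch of that kind of frequency analysis (plain Python; corpus.txt is a hypothetical input file):

    from collections import Counter

    text = open("corpus.txt", encoding="utf-8").read().lower()

    letter_freq = Counter(c for c in text if c.isalpha())
    word_freq = Counter(text.split())

    print(letter_freq.most_common(10))  # most frequent letters
    print(word_freq.most_common(10))    # most frequent words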

I hope that helped. As I said before, I am not an expert in the field.


Comments

  • Ælex:

    I understand that ANN input must be normalized, standardized, etc. Leaving the peculiarities and models of the various ANNs aside, how can I preprocess UTF-8 encoded text into the range {0,1}, or alternatively the range {-1,1}, before it is given as input to a neural network? I have been searching for this on Google but can't find any information (I may be using the wrong terms).

    1. Does that make sense?
    2. Isn't that how text is preprocessed for neural networks?
    3. Are there any alternatives?

    Update on November 2013

    I have long accepted Pete's answer as correct. However, I have serious doubts, mostly due to recent research I've been doing on symbolic knowledge and ANNs.

    Dario Floreano and Claudio Mattiussi explain in their book that such processing is indeed possible, using distributed encoding.

    Indeed, if you try a Google Scholar search, there is a plethora of neuroscience articles and papers on how distributed encoding is hypothesized to be used by brains to encode symbolic knowledge.

    Teuvo Kohonen, in his paper "Self-Organizing Maps", explains:

    One might think that applying the neural adaptation laws to a symbol set (regarded as a set of vectorial variables) might create a topographic map that displays the "logical distances" between the symbols. However, there occurs a problem which lies in the different nature of symbols as compared with continuous data. For the latter, similarity always shows up in a natural way, as the metric differences between their continuous encodings. This is no longer true for discrete, symbolic items, such as words, for which no metric has been defined. It is in the very nature of a symbol that its meaning is dissociated from its encoding.

    However, Kohonen did manage to deal with symbolic information in SOMs!

    Furthermore, Prof. Dr. Alfred Ultsch, in his paper "The Integration of Neural Networks with Symbolic Knowledge Processing", deals exactly with how to process symbolic knowledge (such as text) in ANNs. Ultsch offers the following methodologies for processing symbolic knowledge: neural approximative reasoning, neural unification, introspection, and integrated knowledge acquisition. However, little information can be found on these in Google Scholar, or anywhere else for that matter.

    Pete in his answer is right about semantics. Semantics in ANNs are usually disconnected. However, the following references provide insight into how researchers have used RBMs trained to recognize similarity in the semantics of different word inputs. It therefore shouldn't be impossible to capture semantics, but it would require a layered approach, or a secondary ANN, if semantics are required.

    Natural Language Processing with Subsymbolic Neural Networks, Risto Miikkulainen, 1997
    Training Restricted Boltzmann Machines on Word Observations, G. E. Dahl, R. P. Adams, H. Larochelle, 2012

    Update on January 2021

    The field of NLP and Deep Learning has seen a resurgence in research in the years since I asked this question. There are now machine-learning models that address what I was trying to achieve in many different ways.

    For anyone arriving at this question wondering how to pre-process text for Deep Learning or neural networks, here are a few helpful topics, none of which are academic, but which are simple to understand and should get you started on solving similar tasks:

    At the time I asked this question, RNNs, CNNs, and VSMs were just starting to be used; nowadays most Deep Learning frameworks ship with extensive NLP support. Hope the above helps.
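    For instance, here is a sketch of the kind of pre-processing a modern framework provides out of the box, using the Hugging Face transformers library (the checkpoint name is just a common example, not a recommendation from this answer):

        from transformers import AutoTokenizer

        tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

        # Text goes in, integer token ids come out, ready for a neural network.
        encoded = tokenizer("processing strings of text for neural network input")
        print(encoded["input_ids"])

        # Unknown words are split into known subword pieces rather than dropped.
        print(tokenizer.tokenize("preprocessing"))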