Extracting nationalities and countries from text


Solution 1

So after the fruitful comments, I dug deeper into different NER tools to find the one best at recognizing nationality and country mentions, and found that spaCy has a NORP entity type that extracts nationalities efficiently: https://spacy.io/docs/usage/entity-recognition

Solution 2

If you want the country names to be extracted, what you need is a NER tagger, not a POS tagger.

Named-entity recognition (NER) is a subtask of information extraction that seeks to locate and classify elements in text into pre-defined categories such as the names of persons, organizations, locations, expressions of times, quantities, monetary values, percentages, etc.

Check out Stanford NER tagger!

from nltk.tag.stanford import StanfordNERTagger  # named NERTagger in older NLTK releases

# Point the tagger at a trained model and the Stanford NER jar
st = StanfordNERTagger('../ner-model.ser.gz', '../stanford-ner.jar')
tagging = st.tag(text.split())
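`st.tag` returns `(token, label)` pairs, so the results still need filtering. A sketch of keeping only the location-tagged tokens — the pairs below are hand-written stand-ins (the model and jar paths above are placeholders), not real Stanford NER output:

```python
# Hypothetical output shaped like what st.tag(...) returns: (token, label) pairs
tagging = [("Melbourne", "LOCATION"), ("is", "O"),
           ("in", "O"), ("Australia", "LOCATION")]

# Keep only the tokens the tagger labelled as locations
locations = [token for token, label in tagging if label == "LOCATION"]
print(locations)
```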

Solution 3

Here's geograpy, which uses NLTK to perform entity extraction. It stores all places and locations in a gazetteer, then looks extracted entities up in that gazetteer to fetch the relevant places and locations. See the docs for more usage details:

from geograpy import extraction

text = ("Thyroid-associated orbitopathy (TO) is an autoimmune-mediated orbital "
        "inflammation that can lead to disfigurement and blindness. Multiple "
        "genetic loci have been associated with Graves' disease, but the genetic "
        "basis for TO is largely unknown. This study aimed to identify loci "
        "associated with TO in individuals with Graves' disease, using a "
        "genome-wide association scan (GWAS) for the first time to our knowledge "
        "in TO. Genome-wide association scan was performed on pooled DNA from an "
        "Australian Caucasian discovery cohort of 265 participants with Graves' "
        "disease and TO (cases) and 147 patients with Graves' disease without TO "
        "(controls).")

e = extraction.Extractor(text=text)
e.find_entities()
print(e.places)  # `places` is a list populated by find_entities()

Solution 4

You can use spaCy for NER. It gives better results than NLTK.

import spacy

nlp = spacy.load('en_core_web_sm')

doc = nlp(u"Apple is opening its first big office in San Francisco and California.")
print([(ent.text, ent.label_) for ent in doc.ents])
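To narrow `doc.ents` down to country and nationality mentions, filter on the GPE and NORP labels. A sketch over already-extracted `(text, label)` pairs, hand-written here so it runs without downloading a model (real output for the sentence above may differ by model version):

```python
# (text, label) pairs shaped like [(ent.text, ent.label_) for ent in doc.ents]
ents = [("Apple", "ORG"), ("San Francisco", "GPE"),
        ("California", "GPE"), ("Australian", "NORP")]

# GPE = countries/cities/states, NORP = nationalities and similar groups
wanted = {"GPE", "NORP"}
mentions = [text for text, label in ents if label in wanted]
print(mentions)
```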
Author: user6453258

Updated on June 09, 2022

Comments

  • user6453258
    user6453258 almost 2 years

I want to extract all country and nationality mentions from text using NLTK. I used POS tagging followed by NE chunking to extract all GPE-labelled tokens, but the results were not satisfying.

     abstract="Thyroid-associated orbitopathy (TO) is an autoimmune-mediated orbital inflammation that can lead to disfigurement and blindness. Multiple genetic loci have been associated with Graves' disease, but the genetic basis for TO is largely unknown. This study aimed to identify loci associated with TO in individuals with Graves' disease, using a genome-wide association scan (GWAS) for the first time to our knowledge in TO.Genome-wide association scan was performed on pooled DNA from an Australian Caucasian discovery cohort of 265 participants with Graves' disease and TO (cases) and 147 patients with Graves' disease without TO (controls). "
    
      sent = nltk.tokenize.wordpunct_tokenize(abstract)
      pos_tag = nltk.pos_tag(sent)
      nes = nltk.ne_chunk(pos_tag)
      places = []
      for ne in nes:
          if type(ne) is nltk.tree.Tree:
              if ne.label() == 'GPE':
                  places.append(u' '.join([i[0] for i in ne.leaves()]))
      if len(places) == 0:
          places.append("N/A")
    

    The results obtained are :

    ['Thyroid', 'Australian', 'Caucasian', 'Graves']
    

    Some are nationalities but others are just nouns.

    So what am I doing wrong or is there another way to extract such info?

  • Ic3fr0g
    Ic3fr0g almost 8 years
He's already performed entity extraction! Unknowingly, perhaps.
  • Ic3fr0g
    Ic3fr0g almost 8 years
Your answer just gives him a list of classified words. You do not even provide him with a list of GPEs. Please edit your answer.
  • user6453258
    user6453258 almost 8 years
I actually tried to install geograpy but failed... this is why I relied on NLTK.
  • Ic3fr0g
    Ic3fr0g almost 8 years
spaCy is fantastic and really powerful. I also recommend playing around with the Alchemy API. Though for large data it's preferable to use spaCy, as it does not impose a transaction cost for every query and result.
  • Owais Qureshi
    Owais Qureshi about 7 years
    Same issue with me couldn't install geograpy :(
  • Ic3fr0g
    Ic3fr0g about 7 years
Please install NLTK before you install geograpy, or you can do pip install geograpy-nltk
  • atkat12
    atkat12 about 7 years
    For geograpy, this worked for me: stackoverflow.com/questions/31172719/…
  • vinita
    vinita about 7 years
@OwaisQureshi pip install --upgrade html5lib==1.0b8 and then install geograpy
  • Nomiluks
    Nomiluks over 6 years
As we know, spaCy tags locations as GPE. In my case I have two locations marked as GPE (e.g. India, Delhi). Now my goal is to identify which one is the city and which is the country. Please comment @Renaud
  • ASHu2
    ASHu2 over 4 years
Old, but for Python 3 use: pip3 install geograpy3