nltk Getting started with nltk

Remarks

NLTK is a leading platform for building Python programs to work with human language data. It provides easy-to-use interfaces to over 50 corpora and lexical resources such as WordNet, along with a suite of text processing libraries for classification, tokenization, stemming, tagging, parsing, and semantic reasoning, wrappers for industrial-strength NLP libraries, and an active discussion forum.

The book

Natural Language Processing with Python provides a practical introduction to programming for language processing. Written by the creators of NLTK, it guides the reader through the fundamentals of writing Python programs, working with corpora, categorizing text, analyzing linguistic structure, and more. The book is being updated for Python 3 and NLTK 3. (The original Python 2 version is still available at http://nltk.org/book_1ed .)

Versions

NLTK Version History

Version           Release Date
3.2.4 (latest)    2017-05-21
3.2               2016-03-03
3.1               2015-10-15

Basic Terms

Corpus

Body of text, singular. Corpora is the plural of this. Example: A collection of medical journals.
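
NLTK bundles access to many such corpora through the nltk.corpus module. A minimal sketch using the Brown corpus (assuming the data has been downloaded first, e.g. via nltk.download('brown')):

import nltk
nltk.download('brown')  # one-time download of the Brown corpus (skip if already present)

from nltk.corpus import brown

# A corpus is a body of text; here we peek at the first few word tokens
# and at some of the genres (categories) the Brown corpus is organised into.
print(brown.words()[:10])
print(brown.categories()[:5])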

Lexicon

Words and their meanings. Example: English dictionary. Consider, however, that various fields will have different lexicons. For example: To a financial investor, the first meaning for the word "Bull" is someone who is confident about the market, as compared to the common English lexicon, where the first meaning for the word "Bull" is an animal. As such, there is a special lexicon for financial investors, doctors, children, mechanics, and so on.
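
One such lexical resource bundled with NLTK is WordNet (nltk.corpus.wordnet). A small sketch that lists the senses WordNet records for "bull" (assuming the wordnet data has been downloaded, e.g. via nltk.download('wordnet')):

from nltk.corpus import wordnet

# Each synset is one recorded sense (meaning) of the word in the WordNet lexicon.
for synset in wordnet.synsets('bull'):
    print(synset.name(), '-', synset.definition())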

Token

Each "entity" that is a part of whatever was split up based on rules. For examples, each word is a token when a sentence is "tokenized" into words. Each sentence can also be a token, if you tokenized the sentences out of a paragraph.

Installation or Setup

NLTK requires Python 2.7 or 3.4+.

These instructions assume Python 3.5.


  • Mac/Unix:

    1. Install NLTK: run sudo pip install -U nltk
    2. Install Numpy (optional): run sudo pip install -U numpy
    3. Test installation: run python, then type import nltk (see the verification sketch after this list)

    NOTE: For older versions of Python it might be necessary to install setuptools (see http://pypi.python.org/pypi/setuptools) and pip (sudo easy_install pip).
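
A short verification sketch for step 3; the exact version string printed depends on what pip installed:

# Run inside the Python interpreter started in step 3.
import nltk
print(nltk.__version__)   # e.g. 3.2.4 -- the exact value depends on your installation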




Reference: http://www.nltk.org/install.html

NLTK installation with Conda

To install NLTK with Continuum's Anaconda / conda:

If you are using Anaconda, nltk is most likely already installed in the root environment (though you may still need to download various data packages manually).

Using conda:

conda install nltk 
 

To upgrade nltk using conda:

conda update nltk
 

With anaconda:

If you are using multiple Python environments in Anaconda, first activate the environment where you want to install nltk. You can check the active environment using the command

conda info --envs
 

The environment with the * sign before the directory path is the active one. To change the active environment, use

activate <python_version>
e.g. activate python3.5
 

Now check the list of packages installed in this environment using the command

conda list
 

If you don't find nltk in the list, use

conda install -c anaconda nltk=3.2.1
 

For further information, you may consult https://anaconda.org/anaconda/nltk.
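
Once nltk is installed in the active environment, you can confirm from a Python shell started in that environment that the package resolves from the expected location; the path in the comment below is only illustrative:

import nltk

# The module path should point inside the active environment's site-packages,
# e.g. .../envs/python3.5/lib/python3.5/site-packages/nltk/__init__.py
print(nltk.__file__)
print(nltk.__version__)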


To install Miniconda (a.k.a. conda): http://conda.pydata.org/docs/install/quick.html

To install Anaconda: https://docs.continuum.io/anaconda/install

NLTK's download function

You can install NLTK via pip (pip install nltk). Even after it is installed, many components (corpora, models, etc.) will not be present, and you will not be able to use some of NLTK's features until you download them.

From your Python shell, run the function nltk.download() to select which additional packages you want to install using the UI. Alternatively, you can use python -m nltk.downloader [package_name].


  • To download all available packages:
nltk.download('all')
 

  • To download a specific package:
nltk.download('package-name')
 

  • To download all packages in a specific folder:
import nltk

dwlr = nltk.downloader.Downloader()

# chunkers, corpora, grammars, help, misc, 
# models, sentiment, stemmers, taggers, tokenizers
for pkg in dwlr.packages():
    if pkg.subdir == 'taggers':
        dwlr.download(pkg.id)
 

  • To download all packages except the corpora folder:
import nltk

dwlr = nltk.downloader.Downloader()

# Mark every corpus as already installed so that the subsequent
# download('all') call skips the (large) corpora folder.
for pkg in dwlr.corpora():
    dwlr._status_cache[pkg.id] = 'installed'

dwlr.download('all')
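
nltk.download() also accepts a few keyword arguments, such as a custom download directory and a quiet flag; the path below is only an illustration:

import nltk

# Download the stopwords corpus into a custom location, without the interactive log.
nltk.download('stopwords', download_dir='/tmp/nltk_data', quiet=True)

# NLTK only searches the directories listed in nltk.data.path,
# so register the custom location if it is not already there.
nltk.data.path.append('/tmp/nltk_data')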
 

Sentence boundary detection with NLTK

You can use NLTK (in particular, the nltk.tokenize package) to perform sentence boundary detection:

import nltk
text = "This is a test. Let's try this sentence boundary detector."
text_output = nltk.tokenize.sent_tokenize(text)
print('text_output: {0}'.format(text_output))
 

Output:

text_output: ['This is a test.', "Let's try this sentence boundary detector."]
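
Note that sent_tokenize relies on the pre-trained Punkt model, so the punkt resource must be available; a self-contained variant of the example that downloads it first:

import nltk

nltk.download('punkt')   # one-time download of the Punkt sentence tokenizer model

text = "This is a test. Let's try this sentence boundary detector."
print(nltk.tokenize.sent_tokenize(text))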
 

