NLTK is a leading platform for building Python programs to work with human language data. It provides easy-to-use interfaces to over 50 corpora and lexical resources such as WordNet, along with a suite of text processing libraries for classification, tokenization, stemming, tagging, parsing, and semantic reasoning, wrappers for industrial-strength NLP libraries, and an active discussion forum.
Natural Language Processing with Python provides a practical introduction to programming for language processing. Written by the creators of NLTK, it guides the reader through the fundamentals of writing Python programs, working with corpora, categorizing text, analyzing linguistic structure, and more. The book is being updated for Python 3 and NLTK 3. (The original Python 2 version is still available at http://nltk.org/book_1ed .)
Version | Release Date |
---|---|
3.2.4 (latest) | 2017-05-21 |
3.2 | 2016-03-03 |
3.1 | 2015-10-15 |
Corpus: Body of text, singular ("corpora" is the plural). Example: a collection of medical journals.
Lexicon: Words and their meanings. Example: an English dictionary. Note, however, that different fields have different lexicons. To a financial investor, the first meaning of the word "bull" is someone who is confident about the market, whereas in the common English lexicon the first meaning is an animal. There are therefore specialized lexicons for financial investors, doctors, children, mechanics, and so on.
Each "entity" that is a part of whatever was split up based on rules. For examples, each word is a token when a sentence is "tokenized" into words. Each sentence can also be a token, if you tokenized the sentences out of a paragraph.
NLTK requires Python 2.7 or 3.4+. These instructions assume Python 3.5.
Mac/Unix:
Install NLTK: sudo pip install -U nltk
Install Numpy (optional): sudo pip install -U numpy
Test installation: run python, then type import nltk
NOTE: For older versions of Python, it might be necessary to install setuptools (see http://pypi.python.org/pypi/setuptools) and pip (sudo easy_install pip).
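A quick sanity check after installation (the exact version string depends on what pip installed):

```python
import nltk
print(nltk.__version__)   # e.g. '3.2.4'
```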
Windows:
These instructions assume that you do not already have Python installed on your machine.
Install Python 3.5 using the 32-bit binary installation, install NLTK (e.g. pip install nltk), and then test the installation: open Start > Python35 and type import nltk.
Installing Third-Party Software:
Please see: https://github.com/nltk/nltk/wiki/Installing-Third-Party-Software
Reference: http://www.nltk.org/install.html
To install NLTK with Continuum's Anaconda / conda:
If you are using Anaconda, NLTK most likely comes pre-installed in the root environment (though you may still need to download various packages manually).
Using conda:
conda install nltk
To upgrade nltk using conda:
conda update nltk
With Anaconda:
If you are using multiple Python environments in Anaconda, first activate the environment where you want to install nltk. You can check the active environment using the command
conda info --envs
The environment with the * sign before the directory path is the active one. To change the active environment, use
activate <python_version>
e.g. activate python3.5
Now check the list of packages installed in this environment using the command
conda list
If you don't find 'nltk' in the list, use
conda install -c anaconda nltk=3.2.1
For further information, you may consult https://anaconda.org/anaconda/nltk.
To install Miniconda (a.k.a. conda): http://conda.pydata.org/docs/install/quick.html
To install Anaconda: https://docs.continuum.io/anaconda/install
You can install NLTK via pip (pip install nltk). After it is installed, many components will still not be present, and you will not be able to use some of NLTK's features until you download them.
From your Python shell, run the function nltk.download() to select which additional packages you want to install using the UI. Alternatively, you can use python -m nltk.downloader [package_name].
nltk.download('all')            # download everything (this is large)
nltk.download('package-name')   # download a single named package
import nltk

# Package collections live in subdirectories such as: chunkers, corpora,
# grammars, help, misc, models, sentiment, stemmers, taggers, tokenizers.
dwlr = nltk.downloader.Downloader()
for pkg in dwlr.packages():
    if pkg.subdir == 'taggers':   # download only the tagger packages
        dwlr.download(pkg.id)
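As a quick check that the downloaded taggers work, you can tag a tokenized sentence; this sketch also assumes the punkt tokenizer models are installed:

```python
import nltk

# Assumes 'punkt' (tokenizers) and 'averaged_perceptron_tagger' (taggers)
# have been downloaded.
tokens = nltk.word_tokenize("NLTK makes part-of-speech tagging easy.")
print(nltk.pos_tag(tokens))
# e.g. [('NLTK', 'NNP'), ('makes', 'VBZ'), ...]
```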
import nltk

# Mark every corpus as already installed so that download('all') fetches
# everything except the corpora (_status_cache is a private attribute
# of Downloader, so this trick may break in future versions).
dwlr = nltk.downloader.Downloader()
for pkg in dwlr.corpora():
    dwlr._status_cache[pkg.id] = 'installed'
dwlr.download('all')
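To confirm that a particular resource ended up on NLTK's data path, you can use nltk.data.find, which raises a LookupError if the resource is missing (the resource names below are only examples):

```python
import nltk

# Each call prints the on-disk location, or raises LookupError if missing.
print(nltk.data.find('corpora/stopwords'))
print(nltk.data.find('tokenizers/punkt'))
```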
You can use NLTK (specifically, the nltk.tokenize package) to perform sentence boundary detection:
import nltk
text = "This is a test. Let's try this sentence boundary detector."
text_output = nltk.tokenize.sent_tokenize(text)  # requires the 'punkt' models
print('text_output: {0}'.format(text_output))
Output:
text_output: ['This is a test.', "Let's try this sentence boundary detector."]
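sent_tokenize uses the pre-trained Punkt models; a language argument selects a different model (this assumes the punkt data, which bundles models for several languages, is installed):

```python
import nltk

german_text = "Das ist ein Test. Hier kommt der zweite Satz."
print(nltk.tokenize.sent_tokenize(german_text, language='german'))
# ['Das ist ein Test.', 'Hier kommt der zweite Satz.']
```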