nltk Tokenizing: sentence and word tokenization from a user-given paragraph


Example

# Requires the Punkt tokenizer models; download them once with:
#   import nltk; nltk.download('punkt')
from nltk.tokenize import sent_tokenize, word_tokenize

example_text = input("Enter the text:  ")

# Split the paragraph into sentences
print("Sentence Tokens:")
print(sent_tokenize(example_text))

# Split the paragraph into words and punctuation tokens
print("Word Tokens:")
print(word_tokenize(example_text))

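For illustration, here is a non-interactive sketch with a hard-coded paragraph instead of input(); the sample text and the output shown in the comments are an assumption added for this example, not part of the original.

from nltk.tokenize import sent_tokenize, word_tokenize

# Hypothetical sample paragraph, used only to show the shape of the output
example_text = "Hello World. NLTK makes tokenization easy, doesn't it?"

print(sent_tokenize(example_text))
# ['Hello World.', "NLTK makes tokenization easy, doesn't it?"]

print(word_tokenize(example_text))
# ['Hello', 'World', '.', 'NLTK', 'makes', 'tokenization', 'easy', ',', 'does', "n't", 'it', '?']

Note that word_tokenize separates punctuation into its own tokens and splits contractions such as "doesn't" into "does" and "n't".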
