This repository contains the code and presentation for our full-day workshop, 'Getting Started with Natural Language Processing', created for Analytics Vidhya's DataHack Summit 2018. Authors: Dipanjan Sarkar & Raghav Bali
- Introduction to Natural Language Processing
- Text Pre-processing and Wrangling (see the sketch below)
  - Removing HTML tags/noise
  - Removing accented characters
  - Removing special characters/symbols
  - Handling contractions
  - Stemming
  - Lemmatization
  - Stop word removal
  - Project: Build a duplicate character removal module
  - Project: Build a spell-check and correction module
  - Project: Build an end-to-end text pre-processor
- Text Understanding (see the sketch below)
  - POS (Parts of Speech) Tagging
  - Text Parsing
    - Shallow Parsing
    - Dependency Parsing
    - Constituency Parsing
  - NER (Named Entity Recognition) Tagging
  - Project: Build your own POS Tagger
  - Project: Build your own NER Tagger
- Text Representation – Feature Engineering (see the sketch below)
  - Traditional Statistical Models – BOW, TF-IDF
  - Newer Deep Learning Models for word embeddings – Word2Vec, GloVe, FastText
  - Project: Movie Recommendations
  - Project: Sentiment Analysis using Unsupervised Learning
  - Project: Sentiment Analysis using Supervised Learning
  - Project: Text Clustering (grouping similar movies)
  - Project: Topic Models
  - Project: Text Summarization
- Promise of Deep Learning for NLP
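As a quick taste of the pre-processing topics outlined above (a minimal sketch, not the workshop's own notebooks), the snippet below strips HTML tags with a simple regex, normalizes accented characters, removes special characters/symbols, and applies stop word removal and lemmatization with nltk; it assumes the relevant nltk corpora are available (the download calls are included).

```python
import re
import unicodedata

import nltk
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer

# one-time downloads (skip if these corpora are already installed)
nltk.download('punkt')
nltk.download('stopwords')
nltk.download('wordnet')

STOPWORDS = set(stopwords.words('english'))
lemmatizer = WordNetLemmatizer()

def preprocess(text):
    # remove HTML tags / markup noise with a simple regex
    text = re.sub(r'<[^>]+>', ' ', text)
    # normalize accented characters to plain ASCII equivalents
    text = unicodedata.normalize('NFKD', text).encode('ascii', 'ignore').decode('utf-8')
    # remove special characters / symbols, keeping letters, digits and spaces
    text = re.sub(r'[^a-zA-Z0-9\s]', ' ', text)
    # tokenize, remove stop words, lemmatize
    tokens = nltk.word_tokenize(text.lower())
    tokens = [lemmatizer.lemmatize(tok) for tok in tokens if tok not in STOPWORDS]
    return ' '.join(tokens)

print(preprocess("<p>The cafés were crowded, and the waiters were running!</p>"))
# -> 'cafe crowded waiter running'
```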
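Likewise, a small sketch of the text understanding topics (POS tags, dependency relations and named entities), here using the off-the-shelf spaCy pipeline rather than the taggers you build in the projects; it assumes the en_core_web_sm model has been downloaded.

```python
import spacy

# assumes `python -m spacy download en_core_web_sm` has been run
nlp = spacy.load('en_core_web_sm')

doc = nlp("Apple is looking at buying a U.K. startup for $1 billion.")

# POS tags and dependency relations for each token
for token in doc:
    print(f"{token.text:10} {token.pos_:6} {token.dep_:10} head={token.head.text}")

# named entities recognized by the model
for ent in doc.ents:
    print(ent.text, ent.label_)
```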
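And for the feature engineering topics, a minimal Bag-of-Words / TF-IDF sketch with scikit-learn on a toy corpus (the Word2Vec, GloVe and FastText embedding models are covered in the workshop material itself):

```python
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer

corpus = [
    'the sky is blue',
    'the sun is bright',
    'the sun in the sky is bright',
]

# Bag-of-Words: raw term counts per document
bow = CountVectorizer()
bow_matrix = bow.fit_transform(corpus)
print(sorted(bow.vocabulary_))   # the vocabulary learned from the corpus
print(bow_matrix.toarray())

# TF-IDF: the same counts, re-weighted by how rare each term is across documents
tfidf = TfidfVectorizer()
tfidf_matrix = tfidf.fit_transform(corpus)
print(tfidf_matrix.toarray().round(2))
```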
- Learn and understand popular NLP workflows with interactive examples
- Covers concepts and interactive projects on cleaning and handling noisy unstructured text data, including duplicate checks, spelling corrections and text wrangling
- Build your own POS and NER taggers and parse text data to understand it better
- Understand, build and explore text semantics and representations with traditional statistical models and newer word embedding models
- Projects on popular NLP tasks including text classification, sentiment analysis, text clustering, summarization, topic models and recommendations
- Brief coverage of the promise of deep learning for NLP
- Standard system with 4-8 GB RAM and a 2-4 core processor (i5/i7/AMD); a GPU is preferred for some deep learning tasks but not essential; Windows/Linux/macOS.
- Cloud-based services like AWS EC2 also work fine.
- Participants need to bring their laptops to the workshop.
- Anaconda distribution (Python 3.6) preferred, with the following libraries pre-installed (see the quick check script below): nltk, spacy, TextBlob, scikit-learn, numpy, pandas, keras, tensorflow
- If you install Python 3.7, do remember that keras and tensorflow may not yet be available as stable builds for it.
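A quick way to sanity-check the environment before the workshop (a sketch; the loop simply tries to import each library listed above):

```python
# Quick environment check: verify that the workshop libraries import cleanly.
# Note the import name for scikit-learn is 'sklearn'; spaCy additionally
# needs an English model, e.g. `python -m spacy download en_core_web_sm`.
import importlib

for name in ['nltk', 'spacy', 'textblob', 'sklearn', 'numpy', 'pandas', 'keras', 'tensorflow']:
    try:
        module = importlib.import_module(name)
        print(f"{name:12} OK (version {getattr(module, '__version__', 'unknown')})")
    except ImportError as exc:
        print(f"{name:12} MISSING -> {exc}")
```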