NLP stands for _________.
Natural Language Processing
Companies use Natural Language Processing applications, such as _________, to identify opinions and sentiment online and to understand what customers think about their products and services.
Sentiment Analysis
___________ helps when information overload is a real problem and we need to access a specific, important piece of information from a huge knowledge base.
Automatic Summarization
___________ makes it possible to assign predefined categories to a document and organize it to help you find the information you need or simplify some activities.
Text Classification
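For illustration, a minimal sketch of text classification using scikit-learn (scikit-learn, the tiny labelled corpus and the category names are assumptions, not part of the question):
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# tiny labelled corpus: each document already carries a predefined category
docs = ["great product, works well", "terrible service, very slow",
        "loved the fast delivery", "awful quality, do not buy"]
labels = ["positive", "negative", "positive", "negative"]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(docs)               # turn the documents into features
classifier = MultinomialNB().fit(X, labels)      # learn the categories from the labels

new_doc = vectorizer.transform(["terrible quality and very slow service"])
print(classifier.predict(new_doc))               # assigns one of the predefined categories, here likely ['negative']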
CBT stands for ____________.
Cognitive Behavioural Therapy
By dividing up large problems into smaller ones, ____________ aims to help you manage them in a more constructive manner.
CBT
Once the textual data has been collected, it needs to be processed and cleaned so that a simpler version can be fed to the machine. This is known as __________.
Data Exploration
In tokenization, each sentence is divided into _________.
Tokens
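For illustration, a minimal tokenization sketch using NLTK's word_tokenize (the tokenizer data is assumed to be downloaded; the exact package name can vary by NLTK version):
import nltk
# nltk.download('punkt')   # one-time download of the tokenizer data

sentence = "Machine learning makes NLP possible."
tokens = nltk.word_tokenize(sentence)    # split the sentence into tokens
print(tokens)                            # ['Machine', 'learning', 'makes', 'NLP', 'possible', '.']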
Once the text has been normalised, it is fed to an NLP-based AI model; this stage is known as __________. Note that in NLP, modelling requires data pre-processing first, and only after that is the data fed to the machine.
Modelling
___________ is the process in which the affixes of words are removed and the words are converted to their base form.
Stemming
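For illustration, a minimal stemming sketch using NLTK's PorterStemmer (one common stemmer; other stemmers may give slightly different output, and the example words are assumptions):
from nltk.stem import PorterStemmer

stemmer = PorterStemmer()
words = ["played", "playing", "healed", "healing"]
# strip the affixes and reduce each word to its stem
print([stemmer.stem(w) for w in words])   # ['play', 'play', 'heal', 'heal']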
___________ is a Natural Language Processing model which helps in extracting features out of the text which can be helpful in machine learning algorithms.
Bag of Words
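For illustration, a minimal Bag of Words sketch using scikit-learn's CountVectorizer (scikit-learn and the small example corpus are assumptions; the same idea can also be coded by hand, as sketched after the steps question below):
from sklearn.feature_extraction.text import CountVectorizer

corpus = ["aman and anil are stressed",
          "aman went to a therapist",
          "anil went to download a health chatbot"]

vectorizer = CountVectorizer()
features = vectorizer.fit_transform(corpus)      # document-term count matrix
print(vectorizer.get_feature_names_out())        # the dictionary of unique words
print(features.toarray())                        # one document vector per row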
TFIDF stands for ___________.
Term Frequency and Inverse Document Frequency
Syntax refers to the _________ of a sentence.
Grammatical structure
_______ allows the computer to identify the different parts of speech.
Part-of-speech tagging
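For illustration, a minimal part-of-speech tagging sketch with NLTK (the tagger and tokenizer data are assumed to be downloaded; exact package names can vary by NLTK version, and the output tags shown are indicative):
import nltk
# nltk.download('punkt'); nltk.download('averaged_perceptron_tagger')   # one-time downloads

tokens = nltk.word_tokenize("The therapist helps stressed students")
print(nltk.pos_tag(tokens))
# e.g. [('The', 'DT'), ('therapist', 'NN'), ('helps', 'VBZ'), ('stressed', 'JJ'), ('students', 'NNS')]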
__________ are the words which occur very frequently in the corpus but do not add any value to it.
Stopwords
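For illustration, a minimal stopword-removal sketch using NLTK's built-in English stopword list (the stopwords corpus is assumed to be downloaded; the token list is a made-up example):
from nltk.corpus import stopwords
# import nltk; nltk.download('stopwords')   # one-time download

stop_words = set(stopwords.words('english'))
tokens = ['aman', 'and', 'anil', 'are', 'stressed']
filtered = [w for w in tokens if w not in stop_words]   # drop the high-frequency, low-value words
print(filtered)   # ['aman', 'anil', 'stressed']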
In stemming, healed, healing and healer are all reduced to _______________.
heal
The steps to implement the Bag of Words algorithm are given below. Choose the correct sequence.
1. Text Normalisation 2. Create document vectors 3. Create document vectors for all the documents 4. Create Dictionary
1, 4, 2, 3
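For illustration, a minimal hand-rolled sketch that follows this sequence on a corpus that is assumed to be already normalised (step 1 done: lowercased, stopwords removed); the example documents are assumptions:
corpus = [["aman", "anil", "stressed"],
          ["aman", "went", "therapist"],
          ["anil", "went", "download", "health", "chatbot"]]

# Step 4: create the dictionary of unique words across all documents
dictionary = sorted({word for doc in corpus for word in doc})

# Steps 2 and 3: create a document vector (word counts) for every document
vectors = [[doc.count(word) for word in dictionary] for doc in corpus]

print(dictionary)
for v in vectors:
    print(v)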
In _________, we put the total number of documents in the numerator and the document frequency in the denominator.
Inverse Document Frequency
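For illustration, a minimal sketch of one common TF-IDF formulation, IDF = log(total documents / document frequency); the exact log base and smoothing vary across libraries, and the example corpus is an assumption:
import math

corpus = [["aman", "anil", "stressed"],
          ["aman", "went", "therapist"],
          ["anil", "went", "download", "health", "chatbot"]]

N = len(corpus)                                   # total number of documents (numerator)
word = "aman"
df = sum(1 for doc in corpus if word in doc)      # document frequency (denominator)
idf = math.log10(N / df)                          # inverse document frequency
tf = corpus[0].count(word)                        # term frequency in the first document
print(word, "TF-IDF in document 1:", tf * idf)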
While working with NLP, what is the meaning of the following terms?
a. Syntax b. Semantics
Syntax: Syntax refers to the grammatical structure of a sentence.
Semantics: It refers to the meaning of the sentence.
_____________ is one of the leading platforms for building Python programs that can work with human language data.
NLTK
________________________ refers to the AI modelling where the machine learns by itself.
Learning Based
_____________ enables machines to learn by themselves using the provided data and make accurate predictions/decisions.
Machine Learning
Snapchat filters use _____ and _____ to enhance your selfie with flowers, cat ears, etc.
Augmented Reality and Machine Learning
Whenever you download and install an app, it asks you for several permissions to access your phone's data in different ways. Since you grant these permissions yourself, the data collected by such applications is __________________.
Ethical
In a ___________ learning model, the dataset which is fed to the machine is labelled.
Supervised
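For illustration, a minimal supervised learning sketch with scikit-learn (an assumption), where every sample in the dataset comes with a label; the features and labels are made up:
from sklearn.tree import DecisionTreeClassifier

# labelled dataset: each sample (hours studied, hours slept) has a label (1 = pass, 0 = fail)
X = [[8, 7], [1, 4], [6, 6], [2, 3]]
y = [1, 0, 1, 0]

model = DecisionTreeClassifier()
model.fit(X, y)                      # the machine learns from the labelled examples
print(model.predict([[7, 6]]))       # predict the label for an unseen sample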