Total Questions: 30
Expected Time: 30 Minutes

1. Explain the concept of 'word embedding' in NLP and its advantages in text representation.

2. Define 'TF-IDF (Term Frequency-Inverse Document Frequency)' and describe its role in text analysis.

3. Discuss the concept of 'transfer learning' in NLP and its advantages in training language models.

4. What is the purpose of a 'stemming algorithm' in natural language processing? Provide an example.
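
For illustration, an answer might include a toy suffix-stripping stemmer like the sketch below (illustrative only; production systems use rule-based algorithms such as Porter or Snowball):

```python
# Toy suffix-stripping stemmer (a sketch; real stemmers like Porter
# apply ordered rewrite rules with conditions on the remaining stem).
def simple_stem(word):
    for suffix in ("ing", "ed", "ly", "es", "s"):
        # Only strip when a reasonably long stem remains.
        if word.endswith(suffix) and len(word) - len(suffix) >= 3:
            return word[: -len(suffix)]
    return word

print(simple_stem("jumped"))  # -> jump
print(simple_stem("cats"))    # -> cat
```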

5. What is the purpose of 'sentiment analysis' in NLP, and how is it used?

6. Which technique is commonly used for sentiment analysis in NLP?

7. Discuss the challenges associated with cross-lingual Natural Language Processing and propose techniques to overcome language barriers in NLP applications.

8. What is the purpose of a word embedding in NLP?

9. In the context of neural networks, explain the concept of transfer learning and its application in Natural Language Processing.

10. Examine the role of Named Entity Recognition (NER) in information extraction from unstructured text. Provide an example scenario where NER is crucial.

11. Explain the concept of 'attention mechanism' in NLP and its role in sequence-to-sequence models.

12. Define 'corpus' in NLP and describe its role in training language models.

13. What is the 'long-tail distribution' in the context of language processing?

14. What role does 'TF-IDF (Term Frequency-Inverse Document Frequency)' play in text analysis, and how is it calculated?
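
A worked answer could compute TF-IDF directly from its definition, as in this minimal sketch (assuming tf = count/doc length and idf = log(N/df); libraries such as scikit-learn use smoothed variants):

```python
import math

# Minimal TF-IDF: term frequency weighted by inverse document frequency.
def tf_idf(term, doc, corpus):
    tf = doc.count(term) / len(doc)              # relative frequency in doc
    df = sum(1 for d in corpus if term in d)     # documents containing term
    idf = math.log(len(corpus) / df)             # rarity across the corpus
    return tf * idf

docs = [["the", "cat", "sat"], ["the", "dog", "ran"], ["the", "cat", "ran"]]
print(tf_idf("cat", docs[0], docs))  # positive: "cat" is in 2 of 3 docs
print(tf_idf("the", docs[0], docs))  # 0.0: "the" appears in every doc
```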

15. Discuss the trade-offs between using rule-based approaches and machine learning approaches in Natural Language Processing applications.

16. What is the significance of 'syntax tree' in the analysis of sentence structure in NLP?

17. Which step is typically included in the preprocessing phase of NLP tasks?

18. What is the purpose of the stemming process in NLP?

19. Explain the concept of 'Named Entity Recognition (NER)' in NLP and its applications.

20. Explain the concept of coreference resolution in Natural Language Processing. Provide an example scenario where coreference resolution is crucial.

21. What are 'stop words' in NLP, and why are they often excluded from text analysis?
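
An answer could demonstrate stop-word removal with a sketch like the following (the stop-word list here is a small hand-picked sample; real pipelines draw fuller lists from libraries such as NLTK or spaCy):

```python
# Filter out high-frequency, low-information words before analysis.
STOP_WORDS = {"the", "a", "an", "is", "in", "of", "and", "to"}

def remove_stop_words(tokens):
    return [t for t in tokens if t.lower() not in STOP_WORDS]

tokens = "The cat sat in the hat".lower().split()
print(remove_stop_words(tokens))  # ['cat', 'sat', 'hat']
```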

22. Which algorithm is commonly used for text classification in NLP?

23. How does 'context window' influence the performance of word embeddings in NLP?
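
A concrete answer might show how the context window determines which (target, context) pairs feed embedding training, as in word2vec-style models (illustrative sketch):

```python
# Enumerate (target, context) pairs within a symmetric window around
# each token; a larger window captures broader, more topical context.
def context_pairs(tokens, window):
    pairs = []
    for i, target in enumerate(tokens):
        for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
            if j != i:
                pairs.append((target, tokens[j]))
    return pairs

print(context_pairs(["a", "b", "c"], 1))
# [('a', 'b'), ('b', 'a'), ('b', 'c'), ('c', 'b')]
```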

24. How does 'semantic analysis' contribute to the understanding of language in NLP?

25. In named entity recognition, what does the 'LOC' tag represent?

26. In sentiment analysis, what does a positive polarity score indicate?

27. Which evaluation metric is commonly used for named entity recognition tasks?

28. Which deep learning architecture is commonly used for sequence-to-sequence tasks in NLP?

29. Compare and contrast the bag-of-words model and word embeddings in NLP. Highlight their respective advantages and limitations.
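
To ground the comparison, an answer could show the bag-of-words side concretely: each document becomes a count vector over a fixed vocabulary, discarding word order and semantic similarity (which dense embeddings preserve). A minimal sketch:

```python
# Bag-of-words: sparse count vector over a fixed vocabulary.
# Word order is lost, and unrelated words are equally distant.
def bag_of_words(doc, vocab):
    return [doc.count(w) for w in vocab]

vocab = ["cat", "dog", "sat", "ran"]
print(bag_of_words(["the", "cat", "sat", "cat"], vocab))  # [2, 0, 1, 0]
```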

30. Define 'BLEU score' and describe its role in evaluating the quality of machine-translated text.
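
An answer could sketch the core idea with a simplified unigram-only BLEU (the full metric combines clipped 1- to 4-gram precisions geometrically; this sketch keeps only clipped unigram precision and the brevity penalty):

```python
import math
from collections import Counter

# Simplified unigram BLEU: clipped unigram precision times a brevity
# penalty that punishes candidates shorter than the reference.
def unigram_bleu(candidate, reference):
    cand, ref = Counter(candidate), Counter(reference)
    clipped = sum(min(cand[w], ref[w]) for w in cand)
    precision = clipped / len(candidate)
    if len(candidate) >= len(reference):
        bp = 1.0
    else:
        bp = math.exp(1 - len(reference) / len(candidate))
    return bp * precision

cand = ["the", "cat", "sat", "on", "the", "mat"]
ref = ["the", "cat", "is", "on", "the", "mat"]
print(round(unigram_bleu(cand, ref), 3))  # 0.833 (5 of 6 unigrams match)
```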