These are some of the articles on Natural Language Processing (NLP) and language models (such as BERT, GPT-2, or XLNet) that have appeared on Medium or Towards Data Science in the last few days.
In addition, the latest papers published on arXiv and Semantic Scholar are always worth a look.
- SBERT vs. Data2vec on Text Classification
- Web-Based Chatbot Project, Module 2: GPT-3-generated responses assisted with a database for…
- An Introduction to Word2Vec in NLP
- AI21 Labs’ Augmented Frozen Language Models Challenge Conventional Fine-Tuning Approaches Without…
- Implementing Various NLP Text Representation in Python
- Analysis of the polarity of tweets with the hashtag #bridgerton on Twitter
- Fine-Tuning BERT for Text Classification
- DALL-E 2.0, Explained
- Fine-Tuning for Domain Adaptation in NLP
- Google’s Universal Pretraining Framework Unifies Language Learning Paradigms