Word Embeddings with Word2Vec in NLP

Word Embeddings are a type of word representation that captures the semantic meaning of words in vector form. Unlike traditional one-hot encoding, word embeddings represent words as dense vectors in a continuous vector space, preserving their context and relationships.
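
As a quick illustration of the contrast, the short sketch below (a toy example with made-up numbers) compares a one-hot vector with a dense embedding for the same word:

import numpy as np

# One-hot encoding: one dimension per vocabulary word, almost all zeros
vocabulary = ["king", "queen", "man", "woman"]
one_hot_king = np.zeros(len(vocabulary))
one_hot_king[vocabulary.index("king")] = 1.0

# Dense embedding: a short real-valued vector (values here are purely illustrative)
embedding_king = np.array([0.25, -0.10, 0.73, 0.05])

print(one_hot_king)    # [1. 0. 0. 0.]  sparse, says nothing about word similarity
print(embedding_king)  # dense; similar words end up with similar vectors after training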

1. What is Word2Vec?

Word2Vec is a popular algorithm for learning word embeddings developed by Google. It transforms words into vector representations, capturing their semantic meaning and relationships. Word2Vec uses two main architectures:

  • CBOW (Continuous Bag of Words): Predicts the current word based on surrounding context words.
  • Skip-Gram: Predicts surrounding context words based on the current word.
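
In Gensim, the architecture is selected with the sg parameter of the Word2Vec class: sg=0 (the default) trains CBOW, while sg=1 trains Skip-Gram. A minimal sketch on a toy corpus:

from gensim.models import Word2Vec

# Toy corpus: a list of tokenized sentences (illustration only)
sentences = [["word", "embeddings", "capture", "meaning"],
             ["word2vec", "learns", "dense", "word", "vectors"]]

# sg=0 selects CBOW (the default architecture)
cbow_model = Word2Vec(sentences=sentences, vector_size=50, window=3, min_count=1, sg=0)

# sg=1 selects Skip-Gram
skipgram_model = Word2Vec(sentences=sentences, vector_size=50, window=3, min_count=1, sg=1)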

2. Why Use Word2Vec?

  • Captures semantic relationships between words (e.g., “king” – “man” + “woman” = “queen”).
  • Improves NLP tasks such as text classification, sentiment analysis, and recommendation systems.
  • Reduces dimensionality compared to traditional methods like one-hot encoding.
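
The analogy in the first bullet can be reproduced with simple vector arithmetic on pretrained embeddings. The sketch below uses Gensim's downloader API to fetch a small pretrained GloVe model (this downloads data over the network on first use; any pretrained vector set would work the same way):

import gensim.downloader as api

# Download and load small pretrained GloVe vectors (returns a KeyedVectors object)
vectors = api.load("glove-wiki-gigaword-50")

# king - man + woman ≈ queen
result = vectors.most_similar(positive=["king", "woman"], negative=["man"], topn=1)
print(result)  # typically [('queen', <similarity score>)]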

3. Building Word2Vec Model with Gensim

We will use the Gensim library to create a Word2Vec model and explore word embeddings.

Example: Training a Word2Vec Model

from gensim.models import Word2Vec
from nltk.tokenize import word_tokenize
import nltk
nltk.download('punkt')
nltk.download('punkt_tab')  # newer NLTK releases also need this resource for word_tokenize

# Sample text data
text = "Natural Language Processing and machine learning are exciting fields. Word embeddings capture word semantics."

# Tokenize the text into sentences and words
tokenized_sentences = [word_tokenize(sentence.lower()) for sentence in text.split(".") if sentence.strip()]

# Train a Word2Vec model
model = Word2Vec(sentences=tokenized_sentences, vector_size=100, window=5, min_count=1, workers=4)

# Get the vector for a word
vector = model.wv['natural']
print(vector)
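
Once trained, the model can be saved to disk and reloaded later instead of being retrained from scratch. A minimal sketch using Gensim's standard save/load methods (the file name is arbitrary):

# Persist the trained model (file name is just an example)
model.save("word2vec_demo.model")

# Reload it later without retraining
loaded_model = Word2Vec.load("word2vec_demo.model")
print(loaded_model.wv['natural'][:5])  # first few components of the reloaded vector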

Example: Finding Similar Words

Word2Vec can identify words that are semantically similar to a given word. Keep in mind that with a corpus as tiny as the two sentences above, the reported neighbors and scores are essentially noise; meaningful similarities require a much larger training corpus.

# Find words similar to 'processing'
similar_words = model.wv.most_similar('processing')
print(similar_words)
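
Besides ranking nearest neighbors, you can query the cosine similarity between two specific words directly (again, on this toy corpus the numbers carry little meaning):

# Cosine similarity between two word vectors
score = model.wv.similarity('natural', 'language')
print(f"similarity(natural, language) = {score:.3f}")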

4. Visualizing Word Embeddings

You can use t-SNE for visualizing high-dimensional word vectors in 2D space.

from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

# Reduce dimensionality using t-SNE
words = list(model.wv.index_to_key)
word_vectors = model.wv[words]
tsne = TSNE(n_components=2, random_state=42, perplexity=5)  # perplexity must be smaller than the number of words
reduced_vectors = tsne.fit_transform(word_vectors)

# Plot the words in 2D space
plt.figure(figsize=(10, 8))
for i, word in enumerate(words):
    plt.scatter(reduced_vectors[i, 0], reduced_vectors[i, 1])
    plt.text(reduced_vectors[i, 0]+0.1, reduced_vectors[i, 1]+0.1, word)
plt.show()

5. Practical Applications

Word embeddings are used in a variety of NLP tasks, such as:

  • Sentiment Analysis: Understanding the sentiment of text.
  • Machine Translation: Improving translations by capturing word relationships.
  • Document Similarity: Finding similar documents based on word embeddings.
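
As a small illustration of the last point, a common baseline is to represent each document by the average of its word vectors and then compare documents with cosine similarity. A minimal sketch, assuming the small model trained above and simple whitespace tokenization:

import numpy as np

def document_vector(tokens, wv):
    # Average the vectors of the tokens the model has seen; zeros if none are known
    known = [wv[token] for token in tokens if token in wv]
    return np.mean(known, axis=0) if known else np.zeros(wv.vector_size)

doc1 = "word embeddings capture word semantics".split()
doc2 = "machine learning and natural language processing".split()

v1 = document_vector(doc1, model.wv)
v2 = document_vector(doc2, model.wv)

# Cosine similarity between the two averaged document vectors
cosine = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
print(f"Document similarity: {cosine:.3f}")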

Conclusion

Word embeddings with Word2Vec provide a powerful way to represent words in vector space, preserving their semantic meaning. By training Word2Vec models, you can capture relationships between words and improve the performance of NLP applications.