What is embedding in NLP, and how do contextual embeddings differ from static embeddings?



Explanation:

Embeddings map tokens to dense vectors in a numeric space, turning text into a form that neural models can work with. Static embeddings assign a single fixed vector to each token, learned from large corpora, so every occurrence of a word shares that same representation. Contextual embeddings come from models that look at surrounding text, so the representation for a token changes with its context. This makes them richer and better at capturing meaning and nuance, including different senses of a word depending on usage.

That is why this answer is best: it correctly states that embeddings map tokens to vectors, that static embeddings are fixed, and that contextual embeddings vary with surrounding tokens to provide richer representations. The other options misstate what embeddings are or how contextual information is used. "Contextual" means the representation is not fixed and is not merely a matter of attention weights, and static and contextual embeddings are not identical.
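The distinction can be made concrete with a small sketch. Real systems learn static embeddings with tools like word2vec or GloVe and contextual embeddings with transformer models such as BERT; the hand-made two-dimensional vectors below are purely illustrative, and the "contextual" function is a crude stand-in (averaging in neighbour vectors) rather than a real model.

```python
# Toy static embedding table: each token gets ONE fixed vector,
# no matter where it appears.
STATIC = {
    "river": [1.0, 0.0],
    "money": [0.0, 1.0],
    "bank":  [0.5, 0.5],  # same vector for every occurrence of "bank"
}

def static_embedding(token, sentence):
    # Context is ignored entirely.
    return STATIC[token]

def contextual_embedding(token, sentence):
    # Crude stand-in for context sensitivity: blend the token's
    # base vector with the average of all vectors in the sentence.
    vecs = [STATIC[t] for t in sentence]
    context = [sum(col) / len(vecs) for col in zip(*vecs)]
    base = STATIC[token]
    return [(b + c) / 2 for b, c in zip(base, context)]

s1 = ["river", "bank"]   # "bank" as in riverbank
s2 = ["money", "bank"]   # "bank" as in financial institution

# Static: identical representation in both sentences.
print(static_embedding("bank", s1) == static_embedding("bank", s2))      # True

# Contextual: the representation shifts with the surrounding tokens.
print(contextual_embedding("bank", s1))  # pulled toward "river"
print(contextual_embedding("bank", s2))  # pulled toward "money"
```

In the sketch, the static lookup returns `[0.5, 0.5]` for "bank" in both sentences, while the contextual function returns different vectors because the neighbouring tokens differ. This is exactly the word-sense behaviour described above, just in miniature.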
