An overview of word embeddings and their connection to distributional semantic models
Sam Hart

Word embeddings learned in an unsupervised manner have seen tremendous success in numerous NLP tasks in recent years, so much so that in many NLP architectures they have come close to fully replacing more traditional distributional representations such as LSA features and Brown clusters.
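
As a point of reference, here is a minimal sketch of the kind of LSA-style distributional representation that embeddings are displacing: dense word vectors obtained by truncated SVD of a word co-occurrence matrix. The toy corpus, the window size, and the number of latent dimensions are illustrative choices, not from the original.

```python
import numpy as np

# Toy corpus; in practice the counts would come from a large corpus.
corpus = [
    "the cat sat on the mat",
    "the dog sat on the log",
    "cats and dogs are pets",
]

# Build a word-by-word co-occurrence matrix within a small context window.
vocab = sorted({w for sent in corpus for w in sent.split()})
index = {w: i for i, w in enumerate(vocab)}
counts = np.zeros((len(vocab), len(vocab)))
window = 2  # hypothetical window size
for sent in corpus:
    words = sent.split()
    for i, w in enumerate(words):
        lo, hi = max(0, i - window), min(len(words), i + window + 1)
        for j in range(lo, hi):
            if i != j:
                counts[index[w], index[words[j]]] += 1

# LSA-style dense vectors: truncate the SVD of the co-occurrence matrix.
u, s, _ = np.linalg.svd(counts)
k = 5  # hypothetical number of latent dimensions
lsa_vectors = u[:, :k] * s[:k]
print(lsa_vectors[index["cat"]])
```

Word embedding methods such as word2vec instead learn the dense vectors directly from raw text, which is part of why they have supplanted representations like the one sketched above.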

Connections