# Word Embeddings

Explain Word Embeddings in [[Natural Language Processing]]

> [!ai]+ AI
>
> Word Embeddings are a type of word representation that lets words with similar meanings have similar representations. They are a form of feature engineering for text data and are widely used in Natural Language Processing (NLP) and Artificial Intelligence (AI) tasks.
> In simple terms, word embeddings map words or phrases from the vocabulary to vectors of real numbers, in such a way that semantic relationships between words are reflected in the vectors' distances and directions. For example, 'king' and 'queen' end up closer to each other than 'king' and 'apple', because the first pair is more closely related in meaning.
> There are several word-embedding techniques, such as Word2Vec by Google, GloVe by Stanford, and fastText by Facebook, each employing a different methodology to generate these vectors. These techniques usually train neural networks or probabilistic models on large amounts of text data to compute the embeddings.
> The main advantage of word embeddings is that they can capture the context of a word in a document, semantic and syntactic similarity, and relations to other words. This makes them very useful for NLP tasks like sentiment analysis, text classification, translation, and speech recognition.

[[ElasticSearch]] has [[Vector Database]] support now:

- [World's most downloaded vector database: Elasticsearch | Elastic](https://www.elastic.co/elasticsearch/vector-database)

## Links

- [Fine-tune Embedding models for RAG (philschmid.de, archived)](https://web.archive.org/web/20241205065122/https://www.philschmid.de/fine-tune-embedding-model-for-rag)

## Logs

- Hacking on embeddings in Go: see [[2024-04-13]]. A minimal similarity sketch is below.
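
A rough sketch in Go of the "distances and directions" idea from the callout above: cosine similarity between embedding vectors. The vectors here are made-up toy values for illustration, not the output of a trained model, and real embeddings from Word2Vec/GloVe/fastText typically have hundreds of dimensions.

```go
package main

import (
	"fmt"
	"math"
)

// cosine returns the cosine similarity between two equal-length vectors:
// dot(a, b) / (||a|| * ||b||). Values near 1 mean the vectors point in
// similar directions, i.e. the words have related meanings.
func cosine(a, b []float64) float64 {
	var dot, na, nb float64
	for i := range a {
		dot += a[i] * b[i]
		na += a[i] * a[i]
		nb += b[i] * b[i]
	}
	return dot / (math.Sqrt(na) * math.Sqrt(nb))
}

func main() {
	// Toy 4-dimensional "embeddings" (assumed values, purely illustrative).
	emb := map[string][]float64{
		"king":  {0.80, 0.65, 0.10, 0.05},
		"queen": {0.75, 0.70, 0.12, 0.08},
		"apple": {0.05, 0.10, 0.90, 0.70},
	}

	// Related words should score higher than unrelated ones.
	fmt.Printf("king vs queen: %.3f\n", cosine(emb["king"], emb["queen"]))
	fmt.Printf("king vs apple: %.3f\n", cosine(emb["king"], emb["apple"]))
}
```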