Word Embeddings

Word embeddings are representations of words as vectors in a continuous vector space, arranged so that words with similar meanings lie close together. They can be learned with a variety of techniques, including neural networks, dimensionality reduction applied to word co-occurrence statistics, and probabilistic models. Word embeddings have been shown to improve performance on NLP tasks such as syntactic parsing and sentiment analysis.
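As a quick illustration (not drawn from any of the courses below), here is a minimal Python sketch using toy, hand-crafted vectors to show the core idea that related words have similar vectors; the example words, the 4-dimensional vectors, and the helper function are all hypothetical, chosen purely for illustration.

```python
import numpy as np

# Toy, hand-crafted 4-dimensional "embeddings" (purely illustrative;
# real embeddings are learned, e.g. with word2vec or GloVe, and
# typically have hundreds of dimensions).
embeddings = {
    "king":  np.array([0.9, 0.8, 0.1, 0.0]),
    "queen": np.array([0.9, 0.7, 0.2, 0.1]),
    "apple": np.array([0.1, 0.0, 0.9, 0.8]),
}

def cosine_similarity(u, v):
    """Cosine of the angle between two vectors; 1.0 means identical direction."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Semantically related words end up with higher similarity than unrelated ones.
print(cosine_similarity(embeddings["king"], embeddings["queen"]))  # high (~0.99)
print(cosine_similarity(embeddings["king"], embeddings["apple"]))  # low  (~0.12)
```

In a trained model the same comparison is done over learned vectors, so nearest-neighbor queries like "which words are most similar to king?" can be answered directly from the geometry of the embedding space.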

4 courses cover this concept

11-411/611 Natural Language Processing

Carnegie Mellon University

Spring 2021

Focused on computational systems for human languages, this course introduces various NLP applications, such as translation and summarization. Its scope is broad, ranging from machine learning to linguistics, and it is taught with a software engineering perspective.

+ 28 more concepts

CS 230 Deep Learning

Stanford University

Fall 2022

An in-depth course on building neural networks and leading successful machine learning projects. It covers convolutional networks, RNNs, LSTMs, Adam, dropout, batch normalization, Xavier/He initialization, and more. Students are expected to have basic computer science skills, knowledge of probability theory, and familiarity with linear algebra.

+ 35 more concepts

COS 484: Natural Language Processing

Princeton University

Spring 2023

This course introduces the basics of NLP, including recent deep learning approaches. It covers a wide range of topics, such as language modeling, text classification, machine translation, and question answering.

+ 13 more concepts

CSCI 1470/2470 Deep Learning

Brown University

Spring 2022

Brown University's Deep Learning course acquaints students with the transformative capabilities of deep neural networks in computer vision, NLP, and reinforcement learning. Using the TensorFlow framework, students work through topics such as CNNs, RNNs, deepfakes, and reinforcement learning, with an emphasis on ethical applications and potential societal impacts.

+ 40 more concepts