A language model is a probability distribution over sequences of words. Language models are trained on text corpora and support a wide range of applications in computational linguistics. Large language models built on deep neural networks have become dominant in recent years, while n-gram language models rely on the Markov assumption: the probability of each word is conditioned only on a fixed number of preceding words.
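As a minimal sketch of the n-gram idea, the bigram model below estimates P(word | previous word) by maximum likelihood from raw counts; the corpus, function names, and sentence markers are illustrative, not from any specific course.

```python
from collections import defaultdict

def train_bigram_model(corpus):
    """Count unigram and bigram frequencies from tokenized sentences."""
    unigram_counts = defaultdict(int)
    bigram_counts = defaultdict(int)
    for sentence in corpus:
        # Pad with start/end markers so boundary words are modeled too.
        tokens = ["<s>"] + sentence + ["</s>"]
        for i in range(len(tokens) - 1):
            unigram_counts[tokens[i]] += 1
            bigram_counts[(tokens[i], tokens[i + 1])] += 1
    return unigram_counts, bigram_counts

def bigram_prob(unigram_counts, bigram_counts, prev, word):
    """Maximum-likelihood estimate of P(word | prev): count(prev, word) / count(prev)."""
    if unigram_counts[prev] == 0:
        return 0.0
    return bigram_counts[(prev, word)] / unigram_counts[prev]

corpus = [["the", "cat", "sat"], ["the", "dog", "sat"]]
uni, bi = train_bigram_model(corpus)
print(bigram_prob(uni, bi, "the", "cat"))  # "the" is followed by "cat" in 1 of 2 occurrences
```

Real systems add smoothing (e.g. Laplace or Kneser-Ney) so unseen bigrams do not get zero probability, but the Markov assumption itself is just this restriction of the context to the previous word.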
University of Washington
Winter 2022
This course provides a comprehensive overview of Natural Language Processing (NLP), including core components like text classification, machine translation, and syntax analysis. It offers two project types: implementation problem-solving for CSE 447, and reproducing experiments from recent NLP papers for CSE 517.
Stanford University
Winter 2023
This course is centered on extracting information from unstructured data in language and social networks using machine learning tools. It covers techniques like sentiment analysis, chatbot development, and social network analysis.
Stanford University
Spring 2022
This course is a deep dive into deep learning architectures for visual recognition tasks. Students learn to implement and train their own neural networks and to understand state-of-the-art computer vision research. It requires Python proficiency and familiarity with calculus, linear algebra, probability, and statistics.