Large language models are artificial neural networks with millions to billions of parameters, pre-trained on vast amounts of unlabeled text using self-supervised or semi-supervised learning. Their architecture allows for massively parallel processing during training, and they have largely supplanted earlier task-specific supervised models. LLMs acquire statistical knowledge of the language in their training corpora, but they also inherit any inaccuracies and biases present in that data.
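The key idea behind self-supervised pre-training is that the "labels" come from the unlabeled text itself: the model learns to predict the next token given the preceding context. A minimal sketch of that setup, using toy bigram counts in place of a neural network's learned parameters (all names here are illustrative, not from any particular library):

```python
from collections import Counter, defaultdict

def train_bigram(text):
    # Each adjacent pair (current -> next) is a self-supervised
    # training example derived from the raw, unlabeled text.
    tokens = text.split()
    counts = defaultdict(Counter)
    for cur, nxt in zip(tokens, tokens[1:]):
        counts[cur][nxt] += 1
    return counts

def predict_next(counts, token):
    # Return the most frequent continuation seen during "pre-training",
    # or None if the token never appeared in the corpus.
    if token not in counts:
        return None
    return counts[token].most_common(1)[0][0]

corpus = "the cat sat on the mat and the cat slept"
model = train_bigram(corpus)
print(predict_next(model, "the"))  # "cat" follows "the" most often here
```

A real LLM replaces the count table with billions of trained parameters and conditions on long contexts rather than a single token, but the training signal is the same: predict the next token.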
Stanford University
Winter 2023
This comprehensive course covers machine learning principles spanning supervised, unsupervised, and reinforcement learning. Topics include neural networks, support vector machines, the bias-variance tradeoff, and many real-world applications. It requires a background in computer science, probability, multivariable calculus, and linear algebra.
Princeton University
Spring 2023
This course introduces the basics of NLP, including recent deep learning approaches. It covers a wide range of topics, such as language modeling, text classification, machine translation, and question answering.
Stanford University
Fall 2022
This course focuses on building effective, personalized conversational assistants with large neural language models. It combines theory with practical assignments and gives students the chance to design their own open-ended course project. Familiarity with NLP and task-oriented agents is beneficial.