In-context Learning


In-context learning, closely related to prompt engineering, is a method proposed as an alternative to fine-tuning: the model learns a new task from examples supplied in the prompt at inference time, with no updates to its weights. Research on the transformer architecture suggests this ability may arise from mesa-optimization, in which the forward pass itself implements a principled learning algorithm resembling gradient descent, effectively learning-to-learn a small model from the contextual data during prediction.
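A minimal sketch of the idea: the "training data" for an in-context task lives entirely in the prompt, so adapting the model to a new task is a matter of formatting labeled examples rather than running gradient updates. The helper below is illustrative (the function name and example task are assumptions, not part of any particular API); the resulting prompt could be passed to any LLM completion endpoint.

```python
def build_few_shot_prompt(examples, query):
    """Format labeled examples plus an unlabeled query into a single
    few-shot prompt; the model infers the task from the context alone."""
    blocks = [f"Input: {x}\nLabel: {y}" for x, y in examples]
    blocks.append(f"Input: {query}\nLabel:")  # model completes this label
    return "\n\n".join(blocks)

# Hypothetical sentiment-classification task used for illustration.
examples = [
    ("The movie was wonderful", "positive"),
    ("Terrible service, never again", "negative"),
]
prompt = build_few_shot_prompt(examples, "I loved every minute")
print(prompt)
```

The model's weights are identical before and after: the only "learning" happens in the forward pass over this context.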

2 courses cover this concept

CS 229: Machine Learning

Stanford University

Winter 2023

This comprehensive course covers machine learning principles spanning supervised, unsupervised, and reinforcement learning. Topics also include neural networks, support vector machines, bias-variance tradeoffs, and many real-world applications. It requires a background in computer science, probability, multivariable calculus, and linear algebra.



CS 330: Deep Multi-Task and Meta Learning

Stanford University

Fall 2022

This course emphasizes leveraging structure shared across multiple tasks to improve learning efficiency in deep learning. It provides a thorough treatment of multi-task and meta-learning algorithms, with a focus on topics such as self-supervised pre-training, few-shot learning, and lifelong learning. Prerequisites include an introductory machine learning course; the course is designed for graduate-level students.

