Go to Course: https://www.coursera.org/learn/generative-ai-advanced-fine-tuning-for-llms
In-demand gen AI engineering skills in fine-tuning LLMs that employers are actively looking for, in just 2 weeks
Instruction-tuning and reward modeling with Hugging Face, plus LLMs as policies and RLHF
Direct preference optimization (DPO) with the partition function using Hugging Face, and how to create an optimal solution to a DPO problem
How to use proximal policy optimization (PPO) with Hugging Face to create a scoring function and perform dataset tokenization
Different Approaches to Fine-Tuning
In this module, you’ll begin by defining instruction-tuning and its process. You’ll gain insights into loading a dataset, generating text pipelines, and setting training arguments. You’ll then delve into reward modeling, where you’ll preprocess the dataset and apply a low-rank adaptation (LoRA) configuration. You’ll learn to quantify response quality, guide model optimization, and incorporate reward preferences. You’ll also describe the reward trainer, an advanced technique for training a model, and the reward model loss, using Hugging Face. The labs in this module let you practice instruction-tuning and reward modeling.
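To make the reward-modeling workflow concrete, here is a minimal sketch using Hugging Face's TRL and PEFT libraries. It assumes a toy in-memory preference dataset with "chosen"/"rejected" text pairs and a small GPT-2 backbone, and it targets the classic TRL API (argument names vary across releases); treat it as illustrative rather than the course's exact lab code.

# Minimal reward-modeling sketch with Hugging Face TRL + PEFT (LoRA).
# Assumptions: classic TRL API (~0.8-0.11); "gpt2" as a small illustrative
# backbone; a toy preference dataset standing in for the course's data.
from datasets import Dataset
from peft import LoraConfig
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from trl import RewardConfig, RewardTrainer

model_name = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token

# A reward model is a sequence classifier with one scalar output (the reward).
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=1)
model.config.pad_token_id = tokenizer.pad_token_id

# Toy preference pairs: "chosen" should score higher than "rejected".
dataset = Dataset.from_dict({
    "chosen": ["Q: What is 2 + 2? A: 4."],
    "rejected": ["Q: What is 2 + 2? A: 5."],
})

# Preprocess: tokenize both sides into the columns RewardTrainer expects.
def preprocess(example):
    chosen = tokenizer(example["chosen"], truncation=True, max_length=128)
    rejected = tokenizer(example["rejected"], truncation=True, max_length=128)
    return {
        "input_ids_chosen": chosen["input_ids"],
        "attention_mask_chosen": chosen["attention_mask"],
        "input_ids_rejected": rejected["input_ids"],
        "attention_mask_rejected": rejected["attention_mask"],
    }

dataset = dataset.map(preprocess, remove_columns=["chosen", "rejected"])

# LoRA: train small low-rank adapter matrices instead of all backbone weights.
peft_config = LoraConfig(task_type="SEQ_CLS", r=8, lora_alpha=16, lora_dropout=0.05)

# RewardTrainer minimizes the pairwise loss -log sigmoid(r_chosen - r_rejected).
trainer = RewardTrainer(
    model=model,
    args=RewardConfig(output_dir="reward_model",
                      per_device_train_batch_size=1, num_train_epochs=1),
    tokenizer=tokenizer,
    train_dataset=dataset,
    peft_config=peft_config,
)
trainer.train()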
Fine-Tuning Causal LLMs with Human Feedback and Direct Preference
In this module, you’ll describe how large language models (LLMs) act as policies that assign probabilities to responses generated from input text. You’ll gain insights into the relationship between the policy and the language model, where the policy is a function of the model parameters omega (ω) that generates possible responses. Further, this module demonstrates how to calculate rewards from human feedback by incorporating a reward function, train on response samples, and evaluate the agent’s performance. You’ll define a scoring function for sentiment analysis using PPO with Hugging Face, and explain the PPO configuration class for specific models, the learning rate for PPO training, and how the PPO trainer processes query samples to optimize the chatbot’s policy for high-quality responses. The module then delves into direct preference optimization (DPO), which provides optimal solutions based on human preferences more directly and efficiently using Hugging Face. The labs in this module provide hands-on practice with human feedback and DPO. Methods like PPO and reinforcement learning are quite involved and could be considered subjects of study on their own; while we have provided some references for those interested, you are not expected to understand them in depth for this course.
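As an illustration of the PPO pieces described above (configuration class, scoring function, and trainer loop), here is a minimal sketch using the classic TRL PPO API (roughly TRL 0.11 and earlier; later releases reworked PPOTrainer). The model names lvwerra/gpt2-imdb and lvwerra/distilbert-imdb come from TRL's sentiment tutorial and are assumptions, not the course's exact lab setup.

# Minimal PPO sketch with Hugging Face TRL (classic API, roughly TRL <= 0.11).
# Assumptions: lvwerra/gpt2-imdb as the policy and lvwerra/distilbert-imdb as
# the sentiment scorer (both from TRL's tutorial); a toy query dataset.
import torch
from datasets import Dataset
from transformers import AutoTokenizer, pipeline
from trl import AutoModelForCausalLMWithValueHead, PPOConfig, PPOTrainer

# PPO configuration class: model name, learning rate, and batch sizes.
config = PPOConfig(model_name="lvwerra/gpt2-imdb", learning_rate=1.41e-5,
                   batch_size=4, mini_batch_size=4)

tokenizer = AutoTokenizer.from_pretrained(config.model_name)
tokenizer.pad_token = tokenizer.eos_token

# Policy (with a value head for PPO's advantage estimates) and a reference copy.
model = AutoModelForCausalLMWithValueHead.from_pretrained(config.model_name)
ref_model = AutoModelForCausalLMWithValueHead.from_pretrained(config.model_name)

# Dataset tokenization: turn raw query strings into input_ids tensors.
dataset = Dataset.from_dict({"query": ["This movie was", "The plot felt",
                                       "I watched it and", "Overall the acting"]})
dataset = dataset.map(lambda x: {"input_ids": tokenizer.encode(x["query"])})
dataset.set_format(type="torch")

def collator(data):
    # Keep variable-length queries as lists of tensors instead of stacking.
    return {key: [d[key] for d in data] for key in data[0]}

ppo_trainer = PPOTrainer(config, model, ref_model, tokenizer,
                         dataset=dataset, data_collator=collator)

# Scoring function: the classifier's POSITIVE score is the reward signal.
sentiment_pipe = pipeline("sentiment-analysis", model="lvwerra/distilbert-imdb")

for batch in ppo_trainer.dataloader:
    query_tensors = batch["input_ids"]
    # Generate responses with the current policy.
    response_tensors = ppo_trainer.generate(query_tensors, return_prompt=False,
                                            max_new_tokens=16,
                                            pad_token_id=tokenizer.eos_token_id)
    batch["response"] = tokenizer.batch_decode(response_tensors)
    # Score query + response pairs with the sentiment pipeline.
    texts = [q + r for q, r in zip(batch["query"], batch["response"])]
    rewards = [torch.tensor(next(s["score"] for s in out if s["label"] == "POSITIVE"))
               for out in sentiment_pipe(texts, top_k=None)]
    # One PPO optimization step over the query/response/reward triples.
    stats = ppo_trainer.step(query_tensors, response_tensors, rewards)
    break  # a single step is enough for illustration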
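And here is a correspondingly minimal DPO sketch with TRL's DPOTrainer. DPO skips the explicit reward model and RL loop and optimizes the policy directly on preference triples; the beta value and the toy dataset below are illustrative assumptions, and argument names (e.g., tokenizer vs. processing_class) differ across TRL releases.

# Minimal DPO sketch with Hugging Face TRL (argument names vary by release).
# Assumptions: "gpt2" as an illustrative backbone; a toy preference dataset.
from datasets import Dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model_name = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Preference triples: a prompt plus preferred ("chosen") and dispreferred
# ("rejected") completions, gathered from human feedback.
dataset = Dataset.from_dict({
    "prompt": ["What is the capital of France?"],
    "chosen": [" Paris."],
    "rejected": [" London."],
})

# beta scales the implicit reward relative to the frozen reference policy;
# with ref_model=None, TRL clones the initial model as that reference.
trainer = DPOTrainer(
    model=model,
    ref_model=None,
    args=DPOConfig(output_dir="dpo_model", beta=0.1,
                   per_device_train_batch_size=1, num_train_epochs=1),
    tokenizer=tokenizer,
    train_dataset=dataset,
)
trainer.train()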
Fine-tuning a large language model (LLM) is crucial for aligning it with specific business needs, enhancing accuracy, and optimizing its performance. In turn, this gives businesses precise, actionable insights that drive efficiency and innovation. This course gives aspiring gen AI engineers valuable fine-tuning skills employers are actively seeking. During this course, you’ll explore different approaches to fine-tuning and causal LLMs with human feedback and direct preference. You’ll look at LLMs as policies and reinforcement learning from human feedback (RLHF).
This course is a great resource for learners, providing deep insights and practical skills in fine-tuning large language models for advanced AI applications.
Very Informative – Covers advanced fine-tuning techniques in a clear and structured way
Great course, love the deep-rooted content. All my concepts are so clear now. Kudos!!
The course gave me a good understanding of fine-tuning LLMs. It made complex topics easy to learn.
An excellent course with a wealth of high-quality material, featuring highly informative lessons such as DPO and PPO.