New Concepts in Large Language Models
Outline:
1. Fine-tuning LLMs with Parameter-Efficient Fine-Tuning (PEFT) Methods
2. Enhancing Problem Solving in LLMs with Chain of Thought Reasoning
3. Using LLMs as Evaluative Tools for Human-Like Judgment
4. Improving Response Relevance in LLMs with Retrieval-Augmented Generation (RAG)
Abstract:
This workshop introduces advanced techniques for state-of-the-art large language models (LLMs), covering methods to extend their problem-solving abilities, improve computational efficiency, strengthen their capacity to evaluate information and reasoning (e.g. via chain- or tree-of-thought reasoning), and augment them with external knowledge. Chain-of-thought (CoT) prompting guides an LLM to solve complex problems through explicit step-by-step reasoning, improving the model's logical processing and the accuracy of its answers. Parameter-Efficient Fine-Tuning (PEFT) addresses the problem of adapting LLMs to downstream tasks with fewer resources, covering methods that update only a small fraction of model parameters and therefore require modest compute. The workshop also discusses using LLMs as judges or evaluators, scoring and validating responses over large datasets to approximate human-level judgment and to align models with feedback. The final session introduces Retrieval-Augmented Generation (RAG), which combines information retrieval with response generation so that answers are grounded in retrieved documents, boosting relevance and coverage. Together, these sessions provide participants with a practical toolkit for advancing LLM capabilities across a range of AI applications.
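As a concrete illustration of the PEFT idea described above, the following is a minimal sketch of LoRA-based fine-tuning using the Hugging Face `transformers` and `peft` libraries. The model checkpoint and LoRA hyperparameters are illustrative assumptions, not workshop materials.

```python
# Minimal LoRA sketch (assumed setup, not the workshop's official code):
# wrap a small causal LM so that only low-rank adapter weights are trainable.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model_name = "gpt2"  # assumption: any causal LM checkpoint could be used here
model = AutoModelForCausalLM.from_pretrained(model_name)

# LoRA injects small trainable low-rank matrices into selected layers,
# so fine-tuning updates only a tiny fraction of the model's parameters.
lora_config = LoraConfig(
    r=8,                        # rank of the low-rank update matrices
    lora_alpha=16,              # scaling factor applied to the update
    target_modules=["c_attn"],  # GPT-2's fused attention projection layer
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

peft_model = get_peft_model(model, lora_config)
peft_model.print_trainable_parameters()  # reports trainable vs. total params
```

Running the last line shows that well under 1% of the parameters are trainable, which is why PEFT methods such as LoRA make LLM adaptation feasible on limited hardware; the wrapped `peft_model` can then be trained with a standard `transformers` training loop.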
Presenters: to be announced
Access details: to be announced
Duration: to be announced
Date & Time: to be announced
Paper Submission Deadline: 2025-07-06
Notification of Acceptance: 2025-09-15
Camera-ready Deadline: 2025-10-14
Workshop Dates: 2025-10-20
Registration Deadline: 2025-10-24
Conference Dates: 2025-10-28