Open Access. Powered by Scholars. Published by Universities.®

Physical Sciences and Mathematics Commons


Databases and Information Systems

Singapore Management University

2023

Language model

Articles 1 - 4 of 4

Full-Text Articles in Physical Sciences and Mathematics

On The Usage Of Continual Learning For Out-Of-Distribution Generalization In Pre-Trained Language Models Of Code, Martin Weyssow, Xin Zhou, Kisub Kim, David Lo, Houari A. Sahraoui Dec 2023


Research Collection School Of Computing and Information Systems

Pre-trained language models (PLMs) have become a prevalent technique in deep learning for code, utilizing a two-stage pre-training and fine-tuning procedure to acquire general knowledge about code and specialize in a variety of downstream tasks. However, the dynamic nature of software codebases poses a challenge to the effectiveness and robustness of PLMs. In particular, real-world scenarios potentially lead to significant differences between the distributions of the pre-training and test data, i.e., distribution shift, resulting in degraded PLM performance on downstream tasks. In this paper, we stress the need for adapting PLMs of code to software data whose …
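For readers unfamiliar with continual learning, the sketch below illustrates the general idea of adapting a model to a stream of shifting data distributions while replaying a few earlier examples to limit forgetting. The tiny linear model, the synthetic domains, and the replay strategy are illustrative assumptions, not the method studied in the paper.

```python
# A minimal sketch of replay-based continual fine-tuning across shifting data
# distributions. The tiny linear model stands in for a pre-trained language
# model of code, and the random "domains" stand in for successive codebase
# snapshots; both are assumptions for illustration only.
import torch
from torch import nn

torch.manual_seed(0)
model = nn.Linear(16, 2)          # placeholder for a PLM classification head
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
replay_buffer = []                # keeps a few examples from past domains

def make_domain(shift):
    """Synthetic 'domain': inputs shifted to mimic distribution drift."""
    x = torch.randn(64, 16) + shift
    y = (x.sum(dim=1) > shift * 16).long()
    return list(zip(x, y))

for step, shift in enumerate([0.0, 1.0, 2.0]):     # three successive domains
    domain = make_domain(shift)
    for x, y in domain:
        batch_x, batch_y = [x], [y]
        if replay_buffer:                           # mix in a replayed example
            idx = torch.randint(len(replay_buffer), (1,)).item()
            rx, ry = replay_buffer[idx]
            batch_x.append(rx)
            batch_y.append(ry)
        logits = model(torch.stack(batch_x))
        loss = loss_fn(logits, torch.stack(batch_y))
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    replay_buffer.extend(domain[:8])                # retain a small sample per domain
    print(f"domain {step}: final loss {loss.item():.3f}")
```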


Robust Prompt Optimization For Large Language Models Against Distribution Shifts, Moxin Li, Wenjie Wang, Fuli Feng, Yixin Cao, Jizhi Zhang, Tat-Seng Chua Dec 2023


Research Collection School Of Computing and Information Systems

Large Language Models (LLMs) have demonstrated significant ability in various Natural Language Processing tasks. However, their effectiveness is highly dependent on the phrasing of the task prompt, which has led to research on automatic prompt optimization using labeled task data. We reveal that these prompt optimization techniques are vulnerable to distribution shifts such as subpopulation shifts, which are common for LLMs in real-world scenarios such as customer review analysis. In this light, we propose the new problem of robust prompt optimization for LLMs against distribution shifts, which requires that a prompt optimized over the labeled source group simultaneously generalize to an unlabeled …
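As an illustration of prompt optimization over labeled task data, the sketch below selects the best-scoring prompt from a small candidate pool on a labeled source group. The candidate prompts, the toy example, and the ask_llm stub are assumptions for illustration; the paper's robust formulation additionally requires generalization to unlabeled shifted groups.

```python
# A minimal sketch of prompt selection over a labeled source group.
# Everything here (candidates, examples, the ask_llm stub) is illustrative.
def accuracy(prompt, examples, ask_llm):
    """Score a prompt by exact-match accuracy on labeled (input, label) pairs."""
    hits = sum(ask_llm(prompt.format(x=x)).strip() == y for x, y in examples)
    return hits / len(examples)

def select_prompt(candidates, source_examples, ask_llm):
    """Pick the candidate prompt that performs best on the source group."""
    return max(candidates, key=lambda p: accuracy(p, source_examples, ask_llm))

# Usage with a stubbed LLM that always answers "positive":
candidates = [
    "Classify the sentiment of this review as positive or negative: {x}",
    "Is the following customer review positive or negative? {x}",
]
source_examples = [("Great phone, fast delivery.", "positive")]
best = select_prompt(candidates, source_examples, ask_llm=lambda prompt: "positive")
print(best)
```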


The Devil Is In The Tails: How Long-Tailed Code Distributions Impact Large Language Models, Xin Zhou, Kisub Kim, Bowen Xu, Jiakun Liu, Donggyun Han, David Lo Sep 2023


Research Collection School Of Computing and Information Systems

Learning-based techniques, especially advanced Large Language Models (LLMs) for code, have gained considerable popularity in various software engineering (SE) tasks. However, most existing works focus on designing better learning-based models and pay less attention to the properties of datasets. Learning-based models, including popular LLMs for code, heavily rely on data, and the data's properties (e.g., data distribution) could significantly affect their behavior. We conducted an exploratory study on the distribution of SE data and found that such data usually follows a skewed distribution (i.e., long-tailed distribution) where a small number of classes have an extensive collection of samples, while a …
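The sketch below shows one simple way to check a dataset for the long-tailed pattern described above: count samples per class and measure how much of the data the most frequent classes cover. The toy label counts are an assumption for illustration, not the paper's SE datasets.

```python
# A minimal sketch of checking whether a dataset's label distribution is
# long-tailed: a few "head" classes hold most samples while many "tail"
# classes hold very few. The toy exception-type counts are assumptions.
from collections import Counter

labels = (["NullPointerException"] * 500 + ["ArrayIndexOutOfBounds"] * 300 +
          ["ClassCastException"] * 40 + ["UnsupportedEncoding"] * 5 +
          ["StackOverflowError"] * 3 + ["OutOfMemoryError"] * 2)

counts = sorted(Counter(labels).values(), reverse=True)
total = sum(counts)
head = counts[: max(1, len(counts) // 5)]   # the top 20% most frequent classes
print(f"{len(counts)} classes, {total} samples")
print(f"top 20% of classes cover {sum(head) / total:.1%} of samples")
```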


Plan-And-Solve Prompting: Improving Zero-Shot Chain-Of-Thought Reasoning By Large Language Models, Lei Wang, Wanyu Xu, Yihuai Lan, Zhiqiang Hu, Yunshi Lan, Roy Ka-Wei Lee, Ee-Peng Lim Jul 2023


Research Collection School Of Computing and Information Systems

Large language models (LLMs) have recently been shown to deliver impressive performance in various NLP tasks. To tackle multi-step reasoning tasks, few-shot chain-of-thought (CoT) prompting includes a few manually crafted step-by-step reasoning demonstrations which enable LLMs to explicitly generate reasoning steps and improve their reasoning task accuracy. To eliminate the manual effort, Zero-shot-CoT concatenates the target problem statement with “Let’s think step by step” as an input prompt to LLMs. Despite the success of Zero-shot-CoT, it still suffers from three pitfalls: calculation errors, missing-step errors, and semantic misunderstanding errors. To address the missing-step errors, we propose Plan-and-Solve (PS) Prompting. It …
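The sketch below contrasts the Zero-shot-CoT trigger quoted above with a plan-then-solve style trigger. The PS trigger wording here is approximate and the stubbed ask_llm callable is an assumption; only “Let’s think step by step” is quoted from the abstract.

```python
# A minimal sketch contrasting a Zero-shot-CoT prompt with a plan-then-solve
# style prompt. The PS trigger wording is approximate; ask_llm is a stub.
ZERO_SHOT_COT = "{question}\nA: Let's think step by step."
PLAN_AND_SOLVE = (
    "{question}\n"
    "A: Let's first understand the problem and devise a plan to solve it. "
    "Then, let's carry out the plan and solve the problem step by step."
)

def answer(question, template, ask_llm):
    """Build the prompt from a template and query the model."""
    return ask_llm(template.format(question=question))

# Usage with a stubbed model:
question = "Q: A shop sells 3 pens for $2. How much do 12 pens cost?"
print(answer(question, PLAN_AND_SOLVE, ask_llm=lambda p: "(model reasoning here)"))
```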