Open Access. Powered by Scholars. Published by Universities.®
Programming Languages and Compilers Commons™
Articles 1 - 14 of 14
Full-Text Articles in Programming Languages and Compilers
Examining The Inter-Consistency Of Large Language Models: An In-Depth Analysis Via Debate, Kai Xiong, Xiao Ding, Yixin Cao, Ting Liu, Bing Qin
Research Collection School Of Computing and Information Systems
Large Language Models (LLMs) have shown impressive capabilities in various applications, but they still face various inconsistency issues. Existing works primarily focus on inconsistency issues within a single LLM, while we complementarily explore the inter-consistency among multiple LLMs for collaboration. To examine whether LLMs can collaborate effectively to reach a consensus on a shared goal, we focus on commonsense reasoning and introduce a formal debate framework (FORD) that conducts a three-stage debate among LLMs, aligned with real-world scenarios: fair debate, mismatched debate, and roundtable debate. Through extensive experiments on various datasets, we show that LLMs can effectively collaborate to reach a consensus …
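As a rough illustration of the debate protocol this abstract describes, here is a minimal round-based loop between two debaters and a judge. The `debater_*` and `judge` stubs, their outputs, and the stopping rule are illustrative assumptions, not the paper's actual FORD prompts or models.

```python
# Minimal sketch of a multi-LLM debate loop in the spirit of FORD.
# The stubs below stand in for real LLM API calls.

def debater_a(question: str, transcript: list[str]) -> str:
    # Stand-in for an LLM call; returns a stance plus an argument.
    return f"A argues: answer is X, given {question!r}"

def debater_b(question: str, transcript: list[str]) -> str:
    return f"B argues: answer is Y, given {question!r}"

def judge(question: str, transcript: list[str]) -> str:
    # A third model summarizes the full transcript into a consensus.
    return "Consensus: X"

def debate(question: str, rounds: int = 3) -> str:
    transcript: list[str] = []
    for _ in range(rounds):
        transcript.append(debater_a(question, transcript))
        transcript.append(debater_b(question, transcript))
        # Stop early if the two debaters converge on the same statement.
        if transcript[-1] == transcript[-2]:
            break
    return judge(question, transcript)

print(debate("Can a penguin fly?"))
```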
Benchmarking Foundation Models With Language-Model-As-An-Examiner, Yushi Bai, Jiahao Ying, Yixin Cao, Xin Lv, Yuze He, Xiaozhi Wang, Jifan Yu, Kaisheng Zeng, Yijia Xiao, Haozhe Lyu, Jiayin Zhang, Juanzi Li, Lei Hou
Research Collection School Of Computing and Information Systems
Numerous benchmarks have been established to assess the performance of foundation models on open-ended question answering, which serves as a comprehensive test of a model’s ability to understand and generate language in a manner similar to humans. Most of these works focus on proposing new datasets; however, we see two main issues within previous benchmarking pipelines, namely testing leakage and evaluation automation. In this paper, we propose a novel benchmarking framework, Language-Model-as-an-Examiner, where the LM serves as a knowledgeable examiner that formulates questions based on its knowledge and evaluates responses in a reference-free manner. Our framework allows for effortless extensibility …
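A minimal sketch of the examiner loop described above, assuming a generic `call_lm` stub in place of a real LM API; the prompt templates are illustrative, not the paper's.

```python
# Sketch of language-model-as-an-examiner: the examiner LM both poses a
# question and grades the answer without a gold reference string.

def call_lm(prompt: str) -> str:
    # Stand-in for a call to a strong examiner LM.
    return "..."

def make_question(topic: str) -> str:
    return call_lm(f"Pose one probing open-ended question about: {topic}")

def grade(question: str, answer: str) -> str:
    # Reference-free: the examiner judges with its own knowledge rather
    # than comparing against a stored reference answer.
    return call_lm(
        f"Question: {question}\nAnswer: {answer}\n"
        "Rate this answer from 1 to 5 and justify briefly."
    )

question = make_question("garbage collection in compilers")
print(grade(question, "It frees unreachable memory automatically."))
```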
A Comprehensive Evaluation Of Large Language Models On Legal Judgment Prediction, Ruihao Shui, Yixin Cao, Xiang Wang, Tat-Seng Chua
Research Collection School Of Computing and Information Systems
Large language models (LLMs) have demonstrated great potential for domain-specific applications such as the law domain. However, recent disputes over GPT-4’s law evaluation raise questions concerning their performance in real-world legal tasks. To systematically investigate their competency in the law, we design practical baseline solutions based on LLMs and test them on the task of legal judgment prediction. In our solutions, LLMs can work alone to answer open questions, or coordinate with an information retrieval (IR) system to learn from similar cases or solve simplified multi-choice questions. We show that similar cases and multi-choice options, namely label candidates, included in prompts …
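The coordination with an IR system can be pictured as simple prompt assembly: retrieved similar cases plus label candidates are placed ahead of the question. In this sketch, `retrieve_similar_cases`, the templates, and the candidate labels are illustrative assumptions.

```python
# Sketch of assembling a legal-judgment prompt from retrieved cases and
# simplified multi-choice options.

def retrieve_similar_cases(facts: str, k: int = 2) -> list[str]:
    # Stand-in for an IR system (e.g., lexical retrieval over past judgments).
    return ["Case 1: theft, sentenced to ...", "Case 2: theft, sentenced to ..."]

def build_prompt(facts: str, label_candidates: list[str]) -> str:
    cases = "\n".join(retrieve_similar_cases(facts))
    options = "\n".join(f"({i}) {c}" for i, c in enumerate(label_candidates))
    return (
        f"Similar cases:\n{cases}\n\n"
        f"Facts: {facts}\n"
        f"Which judgment applies?\n{options}"
    )

print(build_prompt("Defendant took goods worth $500.", ["theft", "fraud", "robbery"]))
```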
A Black-Box Attack On Code Models Via Representation Nearest Neighbor Search, Jie Zhang, Wei Ma, Qiang Hu, Shangqing Liu, Xiaofei Xie, Yves Le Traon, Yang Liu
Research Collection School Of Computing and Information Systems
Existing methods for generating adversarial code examples face several challenges: limited availability of substitute variables, high verification costs for these substitutes, and the creation of adversarial samples with noticeable perturbations. To address these concerns, our proposed approach, RNNS, uses a search seed based on historical attacks to find potential adversarial substitutes. Rather than using the discrete substitutes directly, RNNS maps them to a continuous vector space using a pre-trained variable-name encoder. Based on the vector representation, RNNS predicts and selects better substitutes for attacks. We evaluated the performance of RNNS across six coding tasks encompassing three programming languages: Java, …
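A toy version of the nearest-neighbor substitute search: embed candidate variable names and rank them by cosine similarity to the target. The random vectors below are illustrative stand-ins for the paper's pre-trained variable-name encoder.

```python
# Sketch of picking variable-name substitutes by nearest neighbor search
# in an embedding space, in the spirit of RNNS.
import numpy as np

names = ["count", "cnt", "total", "idx", "index"]
emb = np.random.default_rng(0).normal(size=(len(names), 8))  # toy embeddings
emb /= np.linalg.norm(emb, axis=1, keepdims=True)            # unit-normalize

def nearest_substitutes(target: str, k: int = 2) -> list[str]:
    t = emb[names.index(target)]
    sims = emb @ t                      # cosine similarity (unit vectors)
    order = np.argsort(-sims)           # most similar first
    return [names[i] for i in order if names[i] != target][:k]

print(nearest_substitutes("count"))     # names whose vectors lie closest
```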
Llm-Adapters: An Adapter Family For Parameter-Efficient Fine-Tuning Of Large Language Models, Zhiqiang Hu, Lei Wang, Yihuai Lan, Wanyu Xu, Ee-Peng Lim, Lidong Bing, Xing Xu, Soujanya Poria, Roy Ka-Wei Lee
Research Collection School Of Computing and Information Systems
The success of large language models (LLMs), like GPT-4 and ChatGPT, has led to the development of numerous cost-effective and accessible alternatives that are created by fine-tuning open-access LLMs with task-specific data (e.g., ChatDoctor) or instruction data (e.g., Alpaca). Among the various fine-tuning methods, adapter-based parameter-efficient fine-tuning (PEFT) is undoubtedly one of the most attractive topics, as it only requires fine-tuning a few external parameters instead of the entire LLM while achieving comparable or even better performance. To enable further research on PEFT methods for LLMs, this paper presents LLM-Adapters, an easy-to-use framework that integrates various adapters into LLMs and …
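The adapter idea can be made concrete with a LoRA-style low-rank update, one member of the adapter family such a framework integrates. A minimal sketch with assumed shapes follows; only the two small matrices are trained, while the large weight stays frozen.

```python
# Sketch of parameter-efficient fine-tuning via a low-rank (LoRA-style)
# adapter: freeze W and learn only the update B @ A.
import numpy as np

d, r = 512, 8                                   # hidden size, adapter rank
rng = np.random.default_rng(0)
W = rng.normal(size=(d, d))                     # frozen pre-trained weight
A = rng.normal(size=(r, d)) * 0.01              # trainable, r*d params
B = np.zeros((d, r))                            # trainable, starts at zero

def adapted_forward(x: np.ndarray) -> np.ndarray:
    # Effective weight is W + B @ A; only A and B change during fine-tuning.
    return x @ (W + B @ A).T

x = rng.normal(size=(1, d))
print(adapted_forward(x).shape)                 # (1, 512)
# Trainable fraction: 2*d*r / (d*d) = 2*r/d, about 3% here versus 100%
# for full fine-tuning.
```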
Large Language Model Is Not A Good Few-Shot Information Extractor, But A Good Reranker For Hard Samples!, Yubo Ma, Yixin Cao, Yongchin Hong, Aixin Sun
Research Collection School Of Computing and Information Systems
Large Language Models (LLMs) have made remarkable strides in various tasks. However, whether they are competitive few-shot solvers for information extraction (IE) tasks and surpass fine-tuned small Pre-trained Language Models (SLMs) remains an open problem. This paper aims to provide a thorough answer to this problem, and moreover, to explore an approach towards effective and economical IE systems that combine the strengths of LLMs and SLMs. Through extensive experiments on nine datasets across four IE tasks, we show that LLMs are not effective few-shot information extractors in general, given their unsatisfactory performance in most settings and the high latency and …
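The proposed division of labor suggests a filter-then-rerank pattern: trust the SLM on confident predictions and defer only hard samples to the LLM. A minimal sketch with stubbed models and an assumed confidence threshold:

```python
# Sketch of combining a cheap fine-tuned small model (SLM) with an LLM
# reranker that is consulted only on low-confidence samples.

def slm_predict(text: str) -> tuple[str, float]:
    # Stand-in for a fine-tuned small model returning (label, confidence).
    return ("PERSON", 0.55)

def llm_rerank(text: str, candidates: list[str]) -> str:
    # Stand-in for prompting an LLM to choose among the SLM's top candidates.
    return candidates[0]

def extract(text: str, threshold: float = 0.8) -> str:
    label, conf = slm_predict(text)
    if conf >= threshold:
        return label                    # easy sample: trust the SLM
    return llm_rerank(text, [label, "ORG", "LOC"])  # hard sample: ask the LLM

print(extract("Apple hired Tim."))
```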
Molca: Molecular Graph-Language Modeling With Cross-Modal Projector And Uni-Modal Adapter, Zhiyuan Liu, Sihang Li, Yanchen Luo, Hao Fei, Yixin Cao, Kenji Kawaguchi, Xiang Wang, Tat-Seng Chua
Research Collection School Of Computing and Information Systems
Language Models (LMs) have demonstrated impressive molecule understanding ability on various 1D text-related tasks. However, they inherently lack 2D graph perception, a critical ability of human professionals in comprehending molecules’ topological structures. To bridge this gap, we propose MolCA: Molecular Graph-Language Modeling with Cross-Modal Projector and Uni-Modal Adapter. MolCA enables an LM (i.e., Galactica) to understand both text- and graph-based molecular contents via the cross-modal projector. Specifically, the cross-modal projector is implemented as a Q-Former to connect a graph encoder’s representation space and an LM’s text space. Further, MolCA employs a uni-modal adapter (i.e., LoRA) for the LM’s efficient …
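To picture the projector's role, the sketch below compresses a graph encoder's node representations into a few soft tokens in the LM's embedding space and prepends them to the text embeddings. A plain linear pooling stands in for the Q-Former, and all shapes are illustrative assumptions.

```python
# Sketch of a cross-modal projector: pool graph features into a handful of
# "soft tokens" that live in the LM's embedding space.
import numpy as np

rng = np.random.default_rng(0)
n_nodes, d_graph, d_lm, n_query = 30, 64, 512, 4
node_repr = rng.normal(size=(n_nodes, d_graph))     # from a graph encoder
queries = rng.normal(size=(n_query, n_nodes))       # learned pooling weights
proj = rng.normal(size=(d_graph, d_lm))             # learned projection

soft_tokens = (queries @ node_repr) @ proj          # (n_query, d_lm)
text_emb = rng.normal(size=(12, d_lm))              # embedded text prompt
lm_input = np.concatenate([soft_tokens, text_emb])  # molecule tokens first
print(lm_input.shape)                               # (16, 512)
```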
Disentangling Transformer Language Models As Superposed Topic Models, Jia Peng Lim, Hady Wirawan Lauw
Research Collection School Of Computing and Information Systems
Topic Modelling is an established research area where the quality of a given topic is measured using coherence metrics. Often, we infer topics from Neural Topic Models (NTM) by interpreting their decoder weights, consisting of top-activated words projected from individual neurons. Transformer-based Language Models (TLM) similarly consist of decoder weights. However, due to their hypothesised superposition properties, the final logits originating from the residual path are considered uninterpretable. Therefore, we posit that we can interpret TLM as superposed NTM by proposing a novel weight-based, model-agnostic and corpus-agnostic approach to search and disentangle decoder-only TLM, potentially mapping individual neurons to multiple …
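Reading topics off decoder weights can be illustrated directly: for each neuron, list the vocabulary items whose logits it pushes up the hardest. The tiny vocabulary and random matrix below are illustrative stand-ins for a trained model's decoder.

```python
# Sketch of interpreting decoder weights as topics: each neuron's
# highest-weight vocabulary entries form its "top words".
import numpy as np

vocab = ["loop", "stack", "heap", "poem", "verse", "rhyme"]
rng = np.random.default_rng(1)
decoder = rng.normal(size=(4, len(vocab)))   # 4 neurons x vocab logit weights

def top_words(neuron: int, k: int = 3) -> list[str]:
    idx = np.argsort(-decoder[neuron])[:k]   # highest-weight vocab entries
    return [vocab[i] for i in idx]

for n in range(decoder.shape[0]):
    print(f"neuron {n}: {top_words(n)}")     # each list reads like a topic
```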
Safe Mdp Planning By Learning Temporal Patterns Of Undesirable Trajectories And Averting Negative Side Effects, Siow Meng Low, Akshat Kumar, Scott Sanner
Research Collection School Of Computing and Information Systems
In safe MDP planning, a cost function based on the current state and action is often used to specify safety aspects. In the real world, the state representation used often lacks sufficient fidelity to specify such safety constraints, and operating based on an incomplete model can produce unintended negative side effects (NSEs). To address these challenges, we first associate safety signals with state-action trajectories (rather than just the immediate state-action pair), which makes our safety model highly general. We also assume that categorical safety labels are given for different trajectories, rather than a numerical cost function, which is harder to specify by the …
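The shift from per-step costs to trajectory-level safety can be seen in a toy example where a step is only unsafe in the context of earlier states. The temporal pattern and labels here are illustrative assumptions, not the paper's learned model.

```python
# Sketch of trajectory-level safety: a "drop" action is only unsafe if the
# agent was carrying something earlier, which a per-step cost on the
# immediate state-action pair could miss.

def unsafe(trajectory: list[tuple[str, str]]) -> bool:
    carrying = False
    for state, action in trajectory:
        if state == "carrying":
            carrying = True
        if carrying and action == "drop":
            return True      # unsafe only given the history so far
    return False

traj = [("empty", "pickup"), ("carrying", "move"), ("carrying", "drop")]
print(unsafe(traj))          # True: history, not the single step, decides
```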
Plan-And-Solve Prompting: Improving Zero-Shot Chain-Of-Thought Reasoning By Large Language Models, Lei Wang, Wanyu Xu, Yihuai Lan, Zhiqiang Hu, Yunshi Lan, Roy Ka-Wei Lee, Ee-Peng Lim
Research Collection School Of Computing and Information Systems
Large language models (LLMs) have recently been shown to deliver impressive performance in various NLP tasks. To tackle multi-step reasoning tasks, few-shot chain-of-thought (CoT) prompting includes a few manually crafted step-by-step reasoning demonstrations which enable LLMs to explicitly generate reasoning steps and improve their reasoning task accuracy. To eliminate the manual effort, Zero-shot CoT concatenates the target problem statement with “Let’s think step by step” as an input prompt to LLMs. Despite the success of Zero-shot CoT, it still suffers from three pitfalls: calculation errors, missing-step errors, and semantic misunderstanding errors. To address the missing-step errors, we propose Plan-and-Solve (PS) Prompting. It …
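A minimal sketch contrasting the Zero-shot CoT trigger with a Plan-and-Solve-style trigger; the wording below paraphrases the prompts discussed in the abstract rather than quoting them exactly.

```python
# Sketch of zero-shot prompt construction: the only difference between the
# two methods is the trigger phrase appended to the problem.

ZERO_SHOT_COT = "Let's think step by step."
PLAN_AND_SOLVE = (
    "Let's first understand the problem and devise a plan to solve it. "
    "Then, let's carry out the plan and solve the problem step by step."
)

def build_prompt(problem: str, trigger: str) -> str:
    return f"Q: {problem}\nA: {trigger}"

print(build_prompt("If 3 pens cost $6, how much do 7 pens cost?", PLAN_AND_SOLVE))
```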
Cie Text Analysis: Narrative Of The Life Of Frederick Douglass, The Declaration Of Independence, And The Declaration Of Sentiments, Arianna Knipe
Mathematics and Computer Science Presentations
Our STAT-451 class analyzed the words from CIE texts, assigned each word to a sentiment or feeling, and compared the texts with one another using RStudio. This project analyzes texts from three sources: the Narrative of the Life of Frederick Douglass, the Declaration of Independence, and the Declaration of Sentiments.
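A Python analog of that lexicon-based workflow (the class itself used RStudio): count how many words in a text map to each sentiment. The toy lexicon is an illustrative assumption standing in for a full one such as the NRC Emotion Lexicon.

```python
# Sketch of lexicon-based sentiment counting over a document.
from collections import Counter

LEXICON = {"free": "joy", "slave": "sadness", "justice": "trust",
           "tyranny": "anger", "equal": "trust"}

def sentiment_counts(text: str) -> Counter:
    words = text.lower().split()
    return Counter(LEXICON[w] for w in words if w in LEXICON)

doc = "all men are created equal and entitled to justice not tyranny"
print(sentiment_counts(doc))   # e.g. Counter({'trust': 2, 'anger': 1})
```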
Code Will Tell: Visual Identification Of Ponzi Schemes On Ethereum, Xiaolin Wen, Kim Siang Yeo, Yong Wang, Ling Cheng, Feida Zhu, Min Zhu
Research Collection School Of Computing and Information Systems
Ethereum has become a popular blockchain that offers smart contracts to investors. Due to the decentralization and anonymity of Ethereum, Ponzi schemes can be easily deployed and have caused significant losses to investors. However, there are still no explainable and effective methods to help investors easily identify Ponzi schemes and validate whether a smart contract is actually a Ponzi scheme. To fill this research gap, we propose PonziLens, a novel visualization approach that helps investors identify Ponzi schemes early by investigating the operation codes of smart contracts. Specifically, we conduct symbolic execution of opcode and extract the control flow …
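The money flow such a tool is meant to expose can be caricatured with a toy ledger check. Note that the paper's actual method is symbolic execution of EVM opcodes plus visualization; the heuristic below is purely an illustrative assumption.

```python
# Toy heuristic for the Ponzi pattern: payouts funded almost entirely by
# fresh deposits, flowing only back to earlier depositors.

def looks_like_ponzi(transfers: list[tuple[str, str, int]]) -> bool:
    # transfers: (sender, receiver, amount); "contract" is the contract address.
    deposits = sum(a for f, t, a in transfers if t == "contract")
    payouts = sum(a for f, t, a in transfers if f == "contract")
    paid_out_to = {t for f, t, a in transfers if f == "contract"}
    depositors = {f for f, t, a in transfers if t == "contract"}
    return payouts > 0.9 * deposits and paid_out_to <= depositors

ledger = [("alice", "contract", 100), ("bob", "contract", 100),
          ("contract", "alice", 190)]
print(looks_like_ponzi(ledger))   # True: alice is paid from bob's deposit
```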
Chatgpt As Metamorphosis Designer For The Future Of Artificial Intelligence (Ai): A Conceptual Investigation, Amarjit Kumar Singh (Library Assistant), Dr. Pankaj Mathur (Deputy Librarian)
Library Philosophy and Practice (e-journal)
Abstract
Purpose: The purpose of this research paper is to explore ChatGPT’s potential as an innovative designer tool for the future development of artificial intelligence. Specifically, this conceptual investigation aims to analyze ChatGPT’s capabilities as a tool for designing and developing systems that approach human intelligence, for future use and development in the field of Artificial Intelligence (AI). This paper also analyzes the strengths and weaknesses of ChatGPT as a tool and identifies possible areas for improvement in its development and implementation. This investigation focused on the various features and functions of ChatGPT that …
Contrastive Learning Approach To Word-In-Context Task For Low-Resource Languages, Pei-Chi Lo, Yang-Yin Lee, Hsien-Hao Chen, Agus Trisnajaya Kwee, Ee-Peng Lim
Research Collection School Of Computing and Information Systems
The word-in-context (WiC) task aims to determine whether a target word’s occurrences in two sentences share the same sense. In this paper, we propose a Contrastive Learning WiC (CLWiC) framework to improve the learning of sentence/word representations and the classification of target word senses in a sentence pair when performing WiC on low-resource languages. In representation learning, CLWiC strengthens a pre-trained language model’s ability to cope with low-resource languages using both unsupervised and supervised contrastive learning. The WiC classifier learning further fine-tunes the language model with a WiC classification loss under two classifier architecture options, SGBERT and WiSBERT, which use single-encoder …
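The contrastive objective behind this kind of representation learning can be sketched as an InfoNCE-style loss that pulls an anchor toward a same-sense positive and pushes it away from different-sense negatives. The toy vectors and temperature value are illustrative assumptions.

```python
# Sketch of an InfoNCE-style contrastive loss over embedding vectors.
import numpy as np

def info_nce(anchor, positive, negatives, tau=0.1):
    def sim(a, b):  # cosine similarity
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    logits = np.array([sim(anchor, positive)] +
                      [sim(anchor, n) for n in negatives]) / tau
    logits -= logits.max()                      # numerical stability
    # Cross-entropy with the positive as the correct "class".
    return -np.log(np.exp(logits[0]) / np.exp(logits).sum())

rng = np.random.default_rng(0)
a = rng.normal(size=16)
loss = info_nce(a, a + 0.1 * rng.normal(size=16),
                [rng.normal(size=16) for _ in range(4)])
print(float(loss))   # small when anchor and positive are close
```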