Open Access. Powered by Scholars. Published by Universities.®

Physical Sciences and Mathematics Commons

Articles 1 - 30 of 588

Full-Text Articles in Physical Sciences and Mathematics

Hierarchical Damage Correlations For Old Photo Restoration, Weiwei Cai, Xuemiao Xu, Jiajia Xu, Huaidong Zhang, Haoxin Yang, Kun Zhang, Shengfeng He Jul 2024

Research Collection School Of Computing and Information Systems

Restoring old photographs can preserve cherished memories. Previous methods handled diverse damage types within the same network structure, which proved impractical. In addition, these methods cannot exploit correlations among artifacts, especially between scratches and patch misses. Hence, a tailored network is particularly crucial. In light of this, we propose a unified framework consisting of two key components: ScratchNet and PatchNet. In detail, ScratchNet employs a parallel Multi-scale Partial Convolution Module to effectively repair scratches, learning from multi-scale local receptive fields. In contrast, patch misses require the network to emphasize global information. To this end, we incorporate a transformer-based encoder and decoder …
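
A minimal sketch (not the authors' code) of a multi-scale partial convolution block in PyTorch, assuming the mask marks valid (undamaged) pixels; class names, kernel sizes, and the bias handling are illustrative simplifications.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PartialConv2d(nn.Module):
    def __init__(self, in_ch, out_ch, kernel_size):
        super().__init__()
        pad = kernel_size // 2
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size, padding=pad)
        # Fixed all-ones kernel used to count valid pixels under each window.
        self.register_buffer("ones", torch.ones(1, 1, kernel_size, kernel_size))
        self.pad = pad

    def forward(self, x, mask):
        # Convolve only the valid (mask == 1) pixels, then renormalize
        # by the fraction of valid pixels in each window.
        out = self.conv(x * mask)
        valid = F.conv2d(mask, self.ones, padding=self.pad)
        out = out * (self.ones.numel() / valid.clamp(min=1.0))
        # Any window containing a valid pixel becomes valid in the new mask.
        return out, (valid > 0).float()

class MultiScalePartialConv(nn.Module):
    """Parallel partial convolutions with different receptive fields."""
    def __init__(self, in_ch, out_ch, scales=(3, 5, 7)):
        super().__init__()
        self.branches = nn.ModuleList(
            PartialConv2d(in_ch, out_ch, k) for k in scales)
        self.fuse = nn.Conv2d(out_ch * len(scales), out_ch, 1)

    def forward(self, x, mask):
        feats, masks = zip(*(b(x, mask) for b in self.branches))
        # Fuse multi-scale features; keep the loosest updated mask.
        return self.fuse(torch.cat(feats, dim=1)), torch.stack(masks).max(0).values
```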


Diffusion-Based Negative Sampling On Graphs For Link Prediction, Yuan Fang May 2024

Research Collection School Of Computing and Information Systems

Link prediction is a fundamental task for graph analysis, with important applications on the Web such as social network analysis and recommendation systems. Modern graph link prediction methods often employ a contrastive approach to learn robust node representations, where negative sampling is pivotal. Typical negative sampling methods aim to retrieve hard examples based on either predefined heuristics or automatic adversarial approaches, which can be inflexible or difficult to control. Furthermore, in the context of link prediction, most previous methods sample negative nodes from existing substructures of the graph, missing out on potentially more optimal samples in the latent space. …
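
To ground the terminology, here is a minimal sketch of contrastive link prediction with plain uniform negative sampling, the simple baseline that the paper improves on by generating harder negatives in latent space via diffusion. Function and variable names are illustrative.

```python
import torch
import torch.nn.functional as F

def link_prediction_loss(emb, pos_edges, num_neg=5):
    """emb: (N, d) node embeddings; pos_edges: (E, 2) observed links."""
    src, dst = pos_edges[:, 0], pos_edges[:, 1]
    pos_score = (emb[src] * emb[dst]).sum(-1)                 # (E,)
    # Uniform random negatives; the paper instead synthesizes harder
    # negatives in the latent space with a diffusion model.
    neg = torch.randint(0, emb.size(0), (src.numel(), num_neg))
    neg_score = (emb[src].unsqueeze(1) * emb[neg]).sum(-1)    # (E, num_neg)
    logits = torch.cat([pos_score.unsqueeze(1), neg_score], dim=1)
    # The positive link should out-score every sampled negative.
    return F.cross_entropy(logits, torch.zeros(len(src), dtype=torch.long))
```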


Multigprompt For Multi-Task Pre-Training And Prompting On Graphs, Xingtong Yu, Chang Zhou, Yuan Fang, Xinming Zhang May 2024

Research Collection School Of Computing and Information Systems

Graph Neural Networks (GNNs) have emerged as a mainstream technique for graph representation learning. However, their efficacy within an end-to-end supervised framework is significantly tied to the availability of task-specific labels. To mitigate labeling costs and enhance robustness in few-shot settings, pre-training on self-supervised tasks has emerged as a promising method, while prompting has been proposed to further narrow the objective gap between pretext and downstream tasks. Although there has been some initial exploration of prompt-based learning on graphs, these efforts primarily leverage a single pretext task, resulting in a limited subset of general knowledge that could be learned from the …
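
A hedged sketch of the basic prompting mechanism on a frozen, pre-trained GNN: only a small prompt vector (and a task head, omitted here) is tuned for the downstream task. This is a single-prompt simplification; the paper's multi-task design composes prompts across multiple pretext tasks, and all names here are illustrative.

```python
import torch
import torch.nn as nn

class PromptedGNN(nn.Module):
    def __init__(self, frozen_gnn, feat_dim):
        super().__init__()
        self.gnn = frozen_gnn
        for p in self.gnn.parameters():
            p.requires_grad = False          # pre-trained weights stay fixed
        # A learnable prompt vector reweights the input features for the
        # downstream task; only this (plus a task head) is trained.
        self.prompt = nn.Parameter(torch.ones(feat_dim))

    def forward(self, x, edge_index):
        return self.gnn(x * self.prompt, edge_index)
```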


Transiam: Aggregating Multi-Modal Visual Features With Locality For Medical Image Segmentation, Xuejian Li, Shiqiang Ma, Junhai Xu, Jijun Tang, Shengfeng He, Fei Guo Mar 2024

Research Collection School Of Computing and Information Systems

Automatic segmentation of medical images plays an important role in the diagnosis of diseases. On single-modal data, convolutional neural networks have demonstrated satisfactory performance. However, multi-modal data encompasses a greater amount of information than single-modal data. Multi-modal data can be effectively used to improve the segmentation accuracy of regions of interest by analyzing both spatial and temporal information. In this study, we propose a dual-path segmentation model for multi-modal medical images, named TranSiam. Taking into account the significant diversity between the different modalities, TranSiam employs two parallel CNNs to extract the features which are specific to …
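
A minimal sketch of the dual-path idea: one encoder per modality with separate weights, and the two feature maps fused before a shared segmentation head. Layer sizes, the single-channel inputs, and concatenation fusion are assumptions for illustration, not TranSiam's actual architecture.

```python
import torch
import torch.nn as nn

class DualPathSegmenter(nn.Module):
    def __init__(self, ch=32, num_classes=2):
        super().__init__()
        def encoder():                       # small illustrative CNN
            return nn.Sequential(
                nn.Conv2d(1, ch, 3, padding=1), nn.ReLU(),
                nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU())
        self.enc_a = encoder()               # modality A (e.g., one MRI sequence)
        self.enc_b = encoder()               # modality B, separate weights
        self.head = nn.Conv2d(2 * ch, num_classes, 1)

    def forward(self, xa, xb):
        # Modality-specific features are extracted in parallel, then fused.
        return self.head(torch.cat([self.enc_a(xa), self.enc_b(xb)], dim=1))
```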


Foodmask: Real-Time Food Instance Counting, Segmentation And Recognition, Huu-Thanh Nguyen, Yu Cao, Chong-Wah Ngo, Wing-Kwong Chan Feb 2024

Research Collection School Of Computing and Information Systems

Food computing has long been studied and deployed in several applications. Understanding a food image at the instance level, including recognition, counting and segmentation, is essential to quantifying nutrition and calorie consumption. Nevertheless, existing techniques are limited to either category-specific instance detection, which does not precisely reflect instance size at the pixel level, or category-agnostic instance segmentation, which is insufficient for dish recognition. This paper presents a compact and fast multi-task network, namely FoodMask, for clustering-based food instance counting, segmentation and recognition. The network learns a semantic space that simultaneously encodes food category distribution and instance height on a per-pixel basis. …


Delving Into Multimodal Prompting For Fine-Grained Visual Classification, Xin Jiang, Hao Tang, Junyao Gao, Xiaoyu Du, Shengfeng He, Zechao Li Feb 2024

Research Collection School Of Computing and Information Systems

Fine-grained visual classification (FGVC) involves categorizing fine subdivisions within a broader category, which poses challenges due to subtle inter-class discrepancies and large intra-class variations. However, prevailing approaches primarily focus on uni-modal visual concepts. Recent advancements in pre-trained vision-language models have demonstrated remarkable performance in various high-level vision tasks, yet the applicability of such models to FGVC tasks remains uncertain. In this paper, we aim to fully exploit the capabilities of cross-modal description to tackle FGVC tasks and propose a novel multimodal prompting solution, denoted as MP-FGVC, based on the contrastive language-image pre-training (CLIP) model. Our MP-FGVC comprises a multimodal prompts …
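
A schematic sketch of zero-shot, prompt-based scoring with a CLIP-like model. The image_encoder and text_encoder callables stand in for the pre-trained towers, and the hand-written prompt template is an assumption; MP-FGVC learns its prompts rather than writing them by hand.

```python
import torch
import torch.nn.functional as F

def classify_with_prompts(image_encoder, text_encoder, image, class_names):
    # Fine-grained cues are appended to a shared template; the paper
    # optimizes such prompts, whereas these are hand-written examples.
    prompts = [f"a photo of a {c}, a type of bird" for c in class_names]
    img = F.normalize(image_encoder(image), dim=-1)    # (1, d)
    txt = F.normalize(text_encoder(prompts), dim=-1)   # (C, d)
    logits = 100.0 * img @ txt.t()                     # scaled cosine similarity
    return logits.softmax(dim=-1)                      # per-class probability
```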


M3sa: Multimodal Sentiment Analysis Based On Multi-Scale Feature Extraction And Multi-Task Learning, Changkai Lin, Hongju Cheng, Qiang Rao, Yang Yang Feb 2024

Research Collection School Of Computing and Information Systems

Sentiment analysis plays an indispensable part in human-computer interaction. Multimodal sentiment analysis can overcome the shortcomings of unimodal sentiment analysis by fusing multimodal data. However, how to extract improved feature representations and how to execute effective modality fusion are two crucial problems in multimodal sentiment analysis. Traditional work uses simple sub-models for feature extraction, ignores features at different scales, and fuses different modalities of data equally, making it easier to incorporate extraneous information and harming analysis accuracy. In this paper, we propose a Multimodal Sentiment Analysis model based on Multi-scale feature extraction and Multi-task learning (M3SA). …
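
A minimal sketch of the two ingredients named in the title: parallel convolutions with different kernel sizes (multi-scale feature extraction) and a weighted sum of task losses (multi-task learning). Kernel sizes, the 1-D sequence setting, and the loss weight are illustrative assumptions, not M3SA's exact design.

```python
import torch
import torch.nn as nn

class MultiScaleExtractor(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        # Odd kernels with matching padding preserve sequence length.
        self.branches = nn.ModuleList(
            nn.Conv1d(in_ch, out_ch, k, padding=k // 2) for k in (3, 5, 7))

    def forward(self, x):                     # x: (B, in_ch, T)
        # Concatenate features captured at several temporal scales.
        return torch.cat([b(x) for b in self.branches], dim=1)

def multi_task_loss(sent_logits, sent_y, aux_logits, aux_y, alpha=0.5):
    ce = nn.functional.cross_entropy
    # Main sentiment loss plus a weighted auxiliary-task loss.
    return ce(sent_logits, sent_y) + alpha * ce(aux_logits, aux_y)
```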


Catnet: Cross-Modal Fusion For Audio-Visual Speech Recognition, Xingmei Wang, Jianchen Mi, Boquan Li, Yixu Zhao, Jiaxiang Meng Feb 2024

Research Collection School Of Computing and Information Systems

Automatic speech recognition (ASR) is a typical pattern recognition technology that converts human speech into text. With the aid of advanced deep learning models, the performance of speech recognition has improved significantly. In particular, the emerging Audio–Visual Speech Recognition (AVSR) methods achieve satisfactory performance by combining audio-modal and visual-modal information. However, various complex environments, especially noise, limit the effectiveness of existing methods. In response to the noise problem, in this paper, we propose a novel cross-modal audio–visual speech recognition model, named CATNet. First, we devise a cross-modal bidirectional fusion model to analyze the close relationship between audio and visual modalities. Second, …
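
A hedged sketch of bidirectional cross-modal fusion using standard multi-head attention, where each modality queries the other; dimensions, the residual connections, and the final concatenation are illustrative, not CATNet's exact design.

```python
import torch
import torch.nn as nn

class BidirectionalFusion(nn.Module):
    def __init__(self, dim=256, heads=4):
        super().__init__()
        self.a2v = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.v2a = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, audio, visual):          # (B, Ta, d), (B, Tv, d)
        # Each modality queries the other, so noisy audio frames can be
        # compensated by lip-region visual cues and vice versa.
        a_enh, _ = self.v2a(audio, visual, visual)   # audio attends to video
        v_enh, _ = self.a2v(visual, audio, audio)    # video attends to audio
        return torch.cat([audio + a_enh, visual + v_enh], dim=1)
```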


Hgprompt: Bridging Homogeneous And Heterogeneous Graphs For Few-Shot Prompt Learning, Xingtong Yu, Yuan Fang, Zemin Liu, Xinming Zhang Feb 2024

Research Collection School Of Computing and Information Systems

Graph neural networks (GNNs) and heterogeneous graph neural networks (HGNNs) are prominent techniques for homogeneous and heterogeneous graph representation learning, yet their performance in an end-to-end supervised framework greatly depends on the availability of task-specific supervision. To reduce the labeling cost, pre-training on self-supervised pretext tasks has become a popular paradigm, but there is often a gap between the pre-trained model and downstream tasks, stemming from the divergence in their objectives. To bridge the gap, prompt learning has risen as a promising direction, especially in few-shot settings, without the need to fully fine-tune the pre-trained model. While there has been …


Simple Image-Level Classification Improves Open-Vocabulary Object Detection, Ruohuan Fang, Guansong Pang, Xiao Bai Feb 2024

Research Collection School Of Computing and Information Systems

Open-Vocabulary Object Detection (OVOD) aims to detect novel objects beyond a given set of base categories on which the detection model is trained. Recent OVOD methods focus on adapting image-level pre-trained vision-language models (VLMs), such as CLIP, to a region-level object detection task via, e.g., region-level knowledge distillation, regional prompt learning, or region-text pre-training, to expand the detection vocabulary. These methods have demonstrated remarkable performance in recognizing regional visual concepts, but they are weak in exploiting the VLMs' powerful global scene understanding ability learned from billion-scale image-level text descriptions. This limits their capability in detecting hard objects of …
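
A schematic sketch of one way image-level and region-level signals can be combined: geometric-mean fusion of per-proposal detection scores with a global image-classification prior. The fusion rule and score shapes are assumptions for illustration, not the paper's exact formulation.

```python
import torch

def fuse_scores(region_scores, image_scores, alpha=0.5):
    """region_scores: (R, C) per-proposal class probabilities.
    image_scores: (C,) global image-level class probabilities."""
    # Down-weight classes the whole-image classifier considers absent;
    # alpha trades off regional vs. global evidence.
    return region_scores ** (1 - alpha) * image_scores.unsqueeze(0) ** alpha
```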


Leveraging Llms And Generative Models For Interactive Known-Item Video Search, Zhixin Ma, Jiaxin Wu, Chong-Wah Ngo Feb 2024

Research Collection School Of Computing and Information Systems

While embedding techniques such as CLIP have considerably boosted search performance, user strategies in interactive video search still largely operate on a trial-and-error basis. Users are often required to manually adjust their queries and carefully inspect the search results, which relies heavily on the users' capability and proficiency. Recent advancements in large language models (LLMs) and generative models offer promising avenues for enhancing interactivity in video retrieval and reducing personal bias in query interpretation, particularly in known-item search. Specifically, LLMs can expand and diversify the semantics of the queries while avoiding grammar mistakes and language barriers. In …


Predicting Viral Rumors And Vulnerable Users With Graph-Based Neural Multi-Task Learning For Infodemic Surveillance, Xuan Zhang, Wei Gao Jan 2024

Research Collection School Of Computing and Information Systems

In the age of the infodemic, it is crucial to have tools for effectively monitoring the spread of rampant rumors that can quickly go viral, as well as identifying vulnerable users who may be more susceptible to spreading such misinformation. This proactive approach allows for timely preventive measures to be taken, mitigating the negative impact of false information on society. We propose a novel approach to predict viral rumors and vulnerable users using a unified graph neural network model. We pre-train network-based user embeddings and leverage a cross-attention mechanism between users and posts, together with a community-enhanced vulnerability propagation (CVP) …


Tracking People Across Ultra Populated Indoor Spaces By Matching Unreliable Wi-Fi Signals With Disconnected Video Feeds, Quang Hai Truong, Dheryta Jaisinghani, Shubham Jain, Arunesh Sinha, Jeong Gil Ko, Rajesh Krishna Balan Jan 2024

Research Collection School Of Computing and Information Systems

Tracking in dense indoor environments where several thousands of people move around is an extremely challenging problem. In this paper, we present DenseTrack, a system for tracking people in such environments. DenseTrack leverages data from the sensing modalities that are already present in these environments: Wi-Fi (from enterprise network deployments) and video (from surveillance cameras). We combine Wi-Fi information with video data to overcome the individual errors induced by these modalities. More precisely, the locations derived from video are used to overcome the localization errors inherent in using Wi-Fi signals, while precise Wi-Fi MAC IDs are used to …
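
A minimal sketch of one plausible core step: globally matching Wi-Fi trajectories to video tracks by minimizing total location disagreement with the Hungarian algorithm. The cost definition and the synchronized-trajectory input format are assumptions, not DenseTrack's actual formulation.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_wifi_to_video(wifi_locs, video_locs):
    """wifi_locs: (W, T, 2) noisy Wi-Fi positions over T time steps.
    video_locs: (V, T, 2) camera-derived positions over the same steps."""
    # Cost = mean Euclidean distance between each Wi-Fi/video track pair.
    diff = wifi_locs[:, None] - video_locs[None, :]    # (W, V, T, 2)
    cost = np.linalg.norm(diff, axis=-1).mean(-1)      # (W, V)
    rows, cols = linear_sum_assignment(cost)           # optimal 1-to-1 matching
    return list(zip(rows, cols))                       # (wifi_idx, video_idx)
```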


Glance To Count: Learning To Rank With Anchors For Weakly-Supervised Crowd Counting, Zheng Xiong, Liangyu Chai, Wenxi Liu, Yongtuo Liu, Sucheng Ren, Shengfeng He Jan 2024

Research Collection School Of Computing and Information Systems

Crowd images are arguably among the most laborious data to annotate. In this paper, we aim to reduce the massive demand for densely labeled crowd data, and propose a novel weakly-supervised setting in which we leverage the binary ranking of two images with high-contrast crowd counts as training guidance. To enable training under this new setting, we convert the crowd count regression problem into a ranking potential prediction problem. In particular, we tailor a Siamese Ranking Network that predicts the potential scores of two images, indicating the ordering of their counts. Hence, the ultimate goal is to assign appropriate …
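
A minimal sketch of the ranking objective: a shared (Siamese) scorer evaluates both images, and a margin ranking loss enforces the annotated ordering. The model interface and margin value are illustrative.

```python
import torch
import torch.nn as nn

def ranking_loss(model, img_more, img_less, margin=1.0):
    # model maps an image to a scalar "crowd potential" score; the image
    # known to contain more people should score higher by at least margin.
    s1, s2 = model(img_more), model(img_less)
    target = torch.ones_like(s1)   # +1 means "first input should rank higher"
    return nn.functional.margin_ranking_loss(s1, s2, target, margin=margin)
```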


Learning An Interpretable Stylized Subspace For 3d-Aware Animatable Artforms, Chenxi Zheng, Bangzhen Liu, Xuemiao Xu, Huaidong Zhang, Shengfeng He Jan 2024

Research Collection School Of Computing and Information Systems

Throughout history, static paintings have captivated viewers within display frames, yet the possibility of making these masterpieces vividly interactive remains intriguing. This research paper introduces 3DArtmator, a novel approach that aims to represent artforms in a highly interpretable stylized space, enabling 3D-aware animatable reconstruction and editing. Our rationale is to transfer the interpretability and 3D controllability of the latent space in a 3D-aware GAN to a stylized sub-space of a customized GAN, revitalizing the original artforms. To this end, the proposed two-stage optimization framework of 3DArtmator begins with discovering an anchor in the original latent space that accurately mimics the …
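
A hedged sketch of the anchor-discovery stage as plain latent optimization against a frozen generator; the L2 objective, step count, and learning rate are simplifying assumptions rather than 3DArtmator's actual losses.

```python
import torch

def find_anchor(generator, target, latent_dim=512, steps=500, lr=0.01):
    """Optimize a latent code so the frozen generator reproduces the artwork."""
    z = torch.randn(1, latent_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = ((generator(z) - target) ** 2).mean()  # reconstruction error
        loss.backward()
        opt.step()
    return z.detach()          # anchor code to be edited/animated later
```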


Efficient Unsupervised Video Hashing With Contextual Modeling And Structural Controlling, Jingru Duan, Yanbin Hao, Bin Zhu, Lechao Cheng, Pengyuan Zhou, Xiang Wang Jan 2024

Research Collection School Of Computing and Information Systems

The most important benefit of video hashing is its support for fast retrieval, owing to the high efficiency of binary computation. Current video hash approaches are thus mainly targeted at learning compact binary codes that represent video content accurately. However, they may overlook the generation efficiency of hash codes, i.e., designing lightweight neural networks. This paper proposes a method that not only computes compact hash codes but also uses a lightweight deep model. Specifically, we present an MLP-based model, where the video tensor is split into several groups and multiple axial contexts are explored …
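
A minimal sketch of the standard trick behind learning binary hash codes with a lightweight model: an MLP produces features that are binarized by sign in the forward pass, while gradients flow through tanh (a straight-through estimator). Sizes are illustrative, and this is not the paper's specific grouped, axial-context architecture.

```python
import torch
import torch.nn as nn

class VideoHasher(nn.Module):
    def __init__(self, feat_dim=512, bits=64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim, 256), nn.ReLU(), nn.Linear(256, bits))

    def forward(self, video_feat):
        h = torch.tanh(self.mlp(video_feat))
        # Forward pass emits the binary code sign(h); backward pass uses
        # tanh's gradient (straight-through estimator), keeping it trainable.
        return h + (torch.sign(h) - h).detach()
```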


Self-Supervised Pseudo Multi-Class Pre-Training For Unsupervised Anomaly Detection And Segmentation In Medical Images, Yu Tian, Fengbei Liu, Guansong Pang, Yuanhong Chen, Yuyuan Liu, Johan W. Verjans, Rajvinder Singh, Gustavo Carneiro Dec 2023

Research Collection School Of Computing and Information Systems

Unsupervised anomaly detection (UAD) methods are trained with normal (or healthy) images only, but during testing, they are able to classify normal and abnormal (or disease) images. UAD is an important medical image analysis (MIA) method to be applied in disease screening problems because the training sets available for those problems usually contain only normal images. However, the exclusive reliance on normal images may result in the learning of ineffective low-dimensional image representations that are not sensitive enough to detect and segment unseen abnormal lesions of varying size, appearance, and shape. Pre-training UAD methods with self-supervised learning, based on computer …


Mermaid: A Dataset And Framework For Multimodal Meme Semantic Understanding, Shaun Toh, Adriel Kuek, Wen Haw Chong, Roy Ka Wei Lee Dec 2023

Research Collection School Of Computing and Information Systems

Memes are widely used to convey cultural and societal issues and have a significant impact on public opinion. However, little work has been done on understanding and explaining the semantics expressed in multimodal memes. To fill this research gap, we introduce MERMAID, a dataset consisting of 3,633 memes annotated with their entities and relations, and propose a novel MERF pipeline that extracts entities and their relationships in memes. Our framework combines state-of-the-art techniques from natural language processing and computer vision to extract text and image features and infer relationships between entities in memes. We evaluate the proposed framework on a …


Graph Contrastive Learning With Stable And Scalable Spectral Encoding, Deyu Bo, Yuan Fang, Yang Liu, Chuan Shi Dec 2023

Research Collection School Of Computing and Information Systems

Graph contrastive learning (GCL) aims to learn representations by capturing the agreements between different graph views. Traditional GCL methods generate views in the spatial domain, but it has been recently discovered that the spectral domain also plays a vital role in complementing spatial views. However, existing spectral-based graph views either ignore the eigenvectors that encode valuable positional information, or suffer from high complexity when trying to address the instability of spectral features. To tackle these challenges, we first design an informative, stable, and scalable spectral encoder, termed EigenMLP, to learn effective representations from the spectral features. Theoretically, EigenMLP is invariant …
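
A hedged sketch of a sign-invariant eigenvector encoder in the spirit the abstract describes: graph Laplacian eigenvectors are only defined up to sign, so an MLP is applied to both v and -v and the results summed, making the positional encoding unchanged under sign flips. Layer sizes and the exact architecture are assumptions, not EigenMLP itself.

```python
import torch
import torch.nn as nn

class SignInvariantEncoder(nn.Module):
    def __init__(self, k, hidden=64, out=32):
        super().__init__()
        self.phi = nn.Sequential(nn.Linear(1, hidden), nn.ReLU(),
                                 nn.Linear(hidden, hidden))
        self.rho = nn.Linear(k * hidden, out)

    def forward(self, eigvecs):               # (N, k) first k eigenvectors
        v = eigvecs.unsqueeze(-1)             # (N, k, 1)
        enc = self.phi(v) + self.phi(-v)      # invariant to per-vector sign flips
        return self.rho(enc.flatten(1))       # (N, out) positional encoding
```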


Video Sentiment Analysis For Child Safety, Yee Sen Tan, Nicole Anne Huiying Teo, Ezekiel En Zhe Ghe, Jolie Zhi Yi Fong, Zhaoxia Wang Dec 2023

Research Collection School Of Computing and Information Systems

The proliferation of online video content underscores the critical need for effective sentiment analysis, particularly in safeguarding children from potentially harmful material. This research addresses this concern by presenting a multimodal analysis method for assessing video sentiment, categorizing it as either positive (child-friendly) or negative (potentially harmful). This method leverages three key components: text analysis, facial expression analysis, and audio analysis, including music mood analysis, resulting in a comprehensive sentiment assessment. Our evaluation results validate the effectiveness of this approach, making significant contributions to the field of video sentiment analysis and bolstering child safety measures. This research serves as a …


Mrim: Lightweight Saliency-Based Mixed-Resolution Imaging For Low-Power Pervasive Vision, Jiyan Wu, Vithurson Subasharan, Minh Anh Tuan Tran, Kasun Pramuditha Gamlath, Archan Misra Dec 2023

Research Collection School Of Computing and Information Systems

While many pervasive computing applications increasingly utilize real-time context extracted from a vision sensing infrastructure, the high energy overhead of DNN-based vision sensing pipelines remains a challenge for sustainable in-the-wild deployment. One common approach to reducing such energy overheads is the capture and transmission of lower-resolution images to an edge node (where the DNN inferencing task is executed), but this results in an accuracy-vs-energy tradeoff, as the DNN inference accuracy typically degrades with a drop in resolution. In this work, we introduce MRIM, a simple but effective framework to tackle this tradeoff. Under MRIM, the vision sensor platform first executes …
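
A minimal sketch of mixed-resolution imaging: full resolution is kept only where a (cheap) saliency mask fires, and everything else is replaced by a heavily downsampled, upsampled version that costs far less to transmit. The mask format and downsampling factor are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def mixed_resolution(frame, saliency_mask, factor=4):
    """frame: (1, 3, H, W); saliency_mask: (1, 1, H, W) in {0, 1}."""
    h, w = frame.shape[-2:]
    low = F.interpolate(frame, scale_factor=1 / factor, mode="bilinear")
    low = F.interpolate(low, size=(h, w), mode="bilinear")  # blurry background
    # Salient regions stay sharp; the rest carries far fewer effective pixels.
    return saliency_mask * frame + (1 - saliency_mask) * low
```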


Turn-It-Up: Rendering Resistance For Knobs In Virtual Reality Through Undetectable Pseudo-Haptics, Martin Feick, Andre Zenner, Oscar Ariza, Anthony Tang, Cihan Biyikli, Antonio Kruger Nov 2023

Research Collection School Of Computing and Information Systems

Rendering haptic feedback for interactions with virtual objects is an essential part of effective virtual reality experiences. In this work, we explore providing haptic feedback for rotational manipulations, e.g., through knobs. We propose the use of a Pseudo-Haptic technique alongside a physical proxy knob to simulate various physical resistances. In a psychophysical experiment with 20 participants, we found that designers can introduce unnoticeable offsets between real and virtual rotations of the knob, and we report the corresponding detection thresholds. Based on these, we present the Pseudo-Haptic Resistance technique to convey physical resistance while applying only unnoticeable pseudo-haptic manipulation. Additionally, we …
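
A minimal sketch of the pseudo-haptic principle: the virtual knob is rendered with a rotation gain slightly below 1, so higher simulated resistance makes the virtual knob lag the physical proxy while the offset stays within an unnoticeable range. The gain bounds here are illustrative, not the detection thresholds reported in the paper.

```python
def virtual_knob_angle(real_angle_deg: float, resistance: float) -> float:
    """resistance in [0, 1]; the 0.3 gain reduction is an assumed bound
    chosen to stay below a hypothetical detection threshold."""
    gain = 1.0 - 0.3 * resistance   # heavier knob -> virtual rotation lags more
    return real_angle_deg * gain
```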


Voxelhap: A Toolkit For Constructing Proxies Providing Tactile And Kinesthetic Haptic Feedback In Virtual Reality, M. Feick, C. Biyikli, K. Gani, A. Wittig, Anthony Tang, A. Krüger Nov 2023

Research Collection School Of Computing and Information Systems

Experiencing virtual environments is often limited to abstract interactions with objects. Physical proxies allow users to feel virtual objects, but are often inaccessible. We present the VoxelHap toolkit which enables users to construct highly functional proxy objects using Voxels and Plates. Voxels are blocks with special functionalities that form the core of each physical proxy. Plates increase a proxy’s haptic resolution, such as its shape, texture or weight. Beyond providing physical capabilities to realize haptic sensations, VoxelHap utilizes VR illusion techniques to expand its haptic resolution. We evaluated the capabilities of the VoxelHap toolkit through the construction of a range …


Npf-200: A Multi-Modal Eye Fixation Dataset And Method For Non-Photorealistic Videos, Ziyu Yang, Sucheng Ren, Zongwei Wu, Nanxuan Zhao, Junle Wang, Jing Qin, Shengfeng He Nov 2023

Research Collection School Of Computing and Information Systems

Non-photorealistic videos are in demand with the wave of the metaverse, but lack sufficient research attention. This work aims to take a step forward in understanding how humans perceive non-photorealistic videos through eye fixation (i.e., saliency detection), which is critical for enhancing media production, artistic design, and game user experience. To fill the gap of a missing suitable dataset for this research line, we present NPF-200, the first large-scale multi-modal dataset of purely non-photorealistic videos with eye fixations. Our dataset has three characteristics: 1) it contains soundtracks, which are essential according to vision and psychological studies; 2) it …


Constructing Holistic Spatio-Temporal Scene Graph For Video Semantic Role Labeling, Yu Zhao, Hao Fei, Yixin Cao, Bobo Li, Meishan Zhang, Jianguo Wei, Min Zhang, Tat-Seng Chua Nov 2023

Research Collection School Of Computing and Information Systems

As one of the core video semantic understanding tasks, Video Semantic Role Labeling (VidSRL) aims to detect salient events in given videos by recognizing the predicate-argument event structures and the interrelationships between events. While recent endeavors have put forth methods for VidSRL, they are mostly subject to two key drawbacks: the lack of fine-grained spatial scene perception and insufficient modeling of video temporality. Toward this end, this work explores a novel holistic spatio-temporal scene graph (namely HostSG) representation, based on existing dynamic scene graph structures, which well models both the fine-grained spatial semantics and temporal …


Disentangling Multi-View Representations Beyond Inductive Bias, Guanzhou Ke, Yang Yu, Guoqing Chao, Xiaoli Wang, Chenyang Xu, Shengfeng He Nov 2023

Research Collection School Of Computing and Information Systems

Multi-view (or -modality) representation learning aims to understand the relationships between different view representations. Existing methods disentangle multi-view representations into consistent and view-specific representations by introducing strong inductive biases, which can limit their generalization ability. In this paper, we propose a novel multi-view representation disentangling method that aims to go beyond inductive biases, ensuring both interpretability and generalizability of the resulting representations. Our method is based on the observation that discovering multi-view consistency in advance can determine the disentangling information boundary, leading to a decoupled learning objective. We also found that the consistency can be easily extracted by maximizing the …


Pro-Cap: Leveraging A Frozen Vision-Language Model For Hateful Meme Detection, Rui Cao, Ming Shan Hee, Adriel Kuek, Wen Haw Chong, Roy Ka-Wei Lee, Jing Jiang Nov 2023

Research Collection School Of Computing and Information Systems

Hateful meme detection is a challenging multimodal task that requires comprehension of both vision and language, as well as cross-modal interactions. Recent studies have tried to fine-tune pre-trained vision-language models (PVLMs) for this task. However, with increasing model sizes, it becomes important to leverage powerful PVLMs more efficiently, rather than simply fine-tuning them. Recently, researchers have attempted to convert meme images into textual captions and prompt language models for predictions. This approach has shown good performance but suffers from non-informative image captions. Considering the two factors mentioned above, we propose a probing-based captioning approach to leverage PVLMs in a zero-shot …


Matk: The Meme Analytical Tool Kit, Ming Shan Hee, Aditi Kumaresan, Nguyen Khoi Hoang, Nirmalendu Prakash, Rui Cao, Roy Ka-Wei Lee Nov 2023

Research Collection School Of Computing and Information Systems

The rise of social media platforms has brought about a new digital culture called memes. Memes, which combine visuals and text, can strongly influence public opinions on social and cultural issues. As a result, people have become interested in categorizing memes, leading to the development of various datasets and multimodal models that show promising results in this field. However, there is currently a lack of a single library that allows for the reproduction, evaluation, and comparison of these models using fair benchmarks and settings. To fill this gap, we introduce the Meme Analytical Tool Kit (MATK), an open-source toolkit specifically …


Revisiting Disentanglement And Fusion On Modality And Context In Conversational Multimodal Emotion Recognition, Bobo Li, Hao Fei, Lizi Liao, Yu Zhao, Chong Teng, Tat-Seng Chua, Donghong Ji, Fei Li Nov 2023

Research Collection School Of Computing and Information Systems

It has been a hot research topic to enable machines to understand human emotions in multimodal contexts under dialogue scenarios, a task known as multimodal emotion recognition in conversation (MM-ERC). MM-ERC has received consistent attention in recent years, and a diverse range of methods has been proposed for securing better task performance. Most existing works treat MM-ERC as a standard multimodal classification problem and perform multimodal feature disentanglement and fusion to maximize feature utility. Yet after revisiting the characteristics of MM-ERC, we argue that both the feature multimodality and the conversational contextualization should be properly modeled simultaneously during the feature disentanglement …


Posmlp-Video: Spatial And Temporal Relative Position Encoding For Efficient Video Recognition, Yanbin Hao, Diansong Zhou, Zhicai Wang, Chong-Wah Ngo, Xiangnan He, Meng Wang Oct 2023

Research Collection School Of Computing and Information Systems

In recent years, vision Transformers and MLPs have demonstrated remarkable performance in image understanding tasks. However, their inherently dense computational operators, such as self-attention and token-mixing layers, pose significant challenges when applied to spatio-temporal video data. To address this gap, we propose PosMLP-Video, a lightweight yet powerful MLP-like backbone for video recognition. Instead of dense operators, we use efficient relative positional encoding (RPE) to build pairwise token relations, leveraging small-sized parameterized relative position biases to obtain each relation score. Specifically, to enable spatio-temporal modeling, we extend the image PosMLP’s positional gating unit to temporal, spatial, and spatio-temporal variants, namely PoTGU, …
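
A hedged sketch of relative-position-based token mixing without attention: every pair of positions gets a score from a learnable bias table indexed by their offset, and those scores reweight the value tokens. This illustrates the general relative-positional-encoding mechanism the abstract builds on; the actual positional gating units (PoTGU and its variants) differ in detail.

```python
import torch
import torch.nn as nn

class RelPosTokenMixer(nn.Module):
    def __init__(self, seq_len):
        super().__init__()
        # One learnable bias per relative offset in [-(L-1), L-1].
        self.bias = nn.Parameter(torch.zeros(2 * seq_len - 1))
        idx = torch.arange(seq_len)
        # rel[i, j] maps the offset (i - j) into the bias table.
        self.register_buffer("rel", idx[:, None] - idx[None, :] + seq_len - 1)

    def forward(self, x):                              # x: (B, L, d)
        weights = self.bias[self.rel].softmax(dim=-1)  # (L, L) mixing weights
        return weights @ x          # mix tokens using position alone, no attention
```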