Open Access. Powered by Scholars. Published by Universities.®
Graphics and Human Computer Interfaces Commons™
Research Collection School Of Computing and Information Systems
- Keyword
- Adaptation models (1)
- Convolution block attention module (1)
- Data visualization (1)
- Error Diagnosis (1)
- Event extraction (1)
- Eye fixation (1)
- Feature extraction (1)
- Generative adversarial network (1)
- Graph Neural Networks (1)
- Knowledge Graphs (1)
- Logical Reasoning (1)
- Magnetic heads (1)
- Meme (1)
- Multi-modal frequency (1)
- Multimodal analysis (1)
- Non-photorealistic videos (1)
- Scene graph (1)
- Semantic role labeling (1)
- Style-independent discriminator (1)
- Underwater image translation (1)
- Video understanding (1)
- Visual-language models (1)
- Visualization (1)
- Visualization Recommendation (1)
Articles 1 - 6 of 6
Full-Text Articles in Graphics and Human Computer Interfaces
Npf-200: A Multi-Modal Eye Fixation Dataset And Method For Non-Photorealistic Videos, Ziyu Yang, Sucheng Ren, Zongwei Wu, Nanxuan Zhao, Junle Wang, Jing Qin, Shengfeng He
Non-photorealistic videos are in growing demand with the rise of the metaverse, yet remain insufficiently studied. This work takes a step toward understanding how humans perceive non-photorealistic videos through eye fixation (i.e., saliency detection), which is critical for enhancing media production, artistic design, and game user experience. To fill the gap left by the absence of a suitable dataset for this line of research, we present NPF-200, the first large-scale multi-modal dataset of purely non-photorealistic videos with eye fixations. Our dataset has three characteristics: 1) it contains soundtracks, which are essential according to vision and psychological studies; 2) it …
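Eye-fixation data of the kind NPF-200 collects is typically rasterized into a continuous saliency map before evaluation. The sketch below shows that standard preprocessing step (bin gaze points into a 2D histogram, then Gaussian-blur it); it is a generic illustration, not the paper's pipeline, and all names are hypothetical.

```python
import numpy as np

def fixation_density_map(fixations, height, width, sigma=2.0):
    """Rasterize (x, y) gaze fixations into a smoothed saliency map.

    Generic sketch: fixation points are binned into a 2D histogram and
    blurred with a separable Gaussian to approximate a fixation density.
    """
    grid = np.zeros((height, width), dtype=float)
    for x, y in fixations:
        if 0 <= y < height and 0 <= x < width:
            grid[int(y), int(x)] += 1.0
    # Separable Gaussian kernel, truncated at 3 sigma.
    radius = int(3 * sigma)
    offsets = np.arange(-radius, radius + 1)
    kernel = np.exp(-offsets**2 / (2 * sigma**2))
    kernel /= kernel.sum()
    # Blur rows, then columns.
    blurred = np.apply_along_axis(
        lambda r: np.convolve(r, kernel, mode="same"), 1, grid)
    blurred = np.apply_along_axis(
        lambda c: np.convolve(c, kernel, mode="same"), 0, blurred)
    if blurred.max() > 0:
        blurred /= blurred.max()  # normalize to [0, 1]
    return blurred

smap = fixation_density_map([(10, 10), (12, 11), (30, 5)], height=40, width=40)
print(smap.shape)  # (40, 40)
```

The resulting map can be compared against a model's predicted saliency with metrics such as correlation or AUC.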
Constructing Holistic Spatio-Temporal Scene Graph For Video Semantic Role Labeling, Yu Zhao, Hao Fei, Yixin Cao, Bobo Li, Meishan Zhang, Jianguo Wei, Min Zhang, Tat-Seng Chua
As one of the core video semantic understanding tasks, Video Semantic Role Labeling (VidSRL) aims to detect salient events in a given video by recognizing predicate-argument event structures and the interrelationships between events. While recent endeavors have put forth methods for VidSRL, they are mostly subject to two key drawbacks: a lack of fine-grained spatial scene perception and insufficient modeling of video temporality. To this end, this work explores a novel holistic spatio-temporal scene graph (HostSG) representation, built on existing dynamic scene graph structures, which models both fine-grained spatial semantics and temporal …
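The core idea of a spatio-temporal scene graph can be sketched as a plain data structure: spatial edges relate objects within one frame, while temporal edges track an entity across frames. This is an illustrative toy container only, not the HostSG implementation; all class and method names are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Node:
    frame: int   # temporal index of the video frame
    label: str   # object or predicate category

@dataclass
class SpatioTemporalSceneGraph:
    """Toy spatio-temporal scene graph (illustrative, not HostSG)."""
    nodes: set = field(default_factory=set)
    spatial_edges: set = field(default_factory=set)   # (node, relation, node)
    temporal_edges: set = field(default_factory=set)  # (node, node)

    def add_spatial(self, a, rel, b):
        # Spatial relations hold between objects in the same frame.
        assert a.frame == b.frame, "spatial edges stay within one frame"
        self.nodes |= {a, b}
        self.spatial_edges.add((a, rel, b))

    def add_temporal(self, a, b):
        # Temporal edges link the same entity forward in time.
        assert a.frame < b.frame, "temporal edges point forward in time"
        self.nodes |= {a, b}
        self.temporal_edges.add((a, b))

g = SpatioTemporalSceneGraph()
p0, p1 = Node(0, "person"), Node(1, "person")
g.add_spatial(p0, "holds", Node(0, "cup"))
g.add_temporal(p0, p1)
print(len(g.nodes), len(g.spatial_edges), len(g.temporal_edges))  # 3 1 1
```

Event structures for VidSRL can then be read off as predicate nodes plus their spatially and temporally connected arguments.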
Matk: The Meme Analytical Tool Kit, Ming Shan Hee, Aditi Kumaresan, Nguyen Khoi Hoang, Nirmalendu Prakash, Rui Cao, Roy Ka-Wei Lee
The rise of social media platforms has brought about a new digital culture: memes. Memes, which combine visuals and text, can strongly influence public opinion on social and cultural issues. As a result, interest has grown in categorizing memes, leading to the development of various datasets and multimodal models that show promising results in this field. However, there is currently no single library that allows for the reproduction, evaluation, and comparison of these models under fair benchmarks and settings. To fill this gap, we introduce the Meme Analytical Tool Kit (MATK), an open-source toolkit specifically …
Underwater Image Translation Via Multi-Scale Generative Adversarial Network, Dongmei Yang, Tianzi Zhang, Boquan Li, Menghao Li, Weijing Chen, Xiaoqing Li, Xingmei Wang
Underwater image translation assists in generating rare images for marine applications. However, such translation tasks remain challenging due to data scarcity, insufficient feature-extraction ability, and the loss of content details. To address these issues, we propose a novel multi-scale image translation model based on style-independent discriminators and attention modules (SID-AM-MSITM), which learns the mapping relationship between unpaired images for translation. We introduce Convolutional Block Attention Modules (CBAM) into the generators and discriminators of SID-AM-MSITM to improve their feature-extraction ability. Moreover, we construct style-independent discriminators that enable the discrimination results of SID-AM-MSITM to …
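CBAM, which the abstract names as the attention mechanism, applies channel attention followed by spatial attention to a feature map. The numpy sketch below shows that two-stage structure in minimal form; as a stated simplification, the spatial branch's 7×7 convolution is replaced by a plain average of the pooled maps, so this is a dependency-free illustration of the idea, not the paper's (or the original CBAM) implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cbam(feat, w1, w2):
    """Minimal sketch of a Convolutional Block Attention Module.

    feat: (C, H, W) feature map; w1: (C//r, C) and w2: (C, C//r) are
    shared-MLP weights for channel attention (r = reduction ratio).
    Simplification: the spatial branch's 7x7 conv is replaced by an
    average of the two pooled maps.
    """
    C, H, W = feat.shape
    # Channel attention: squeeze spatial dims via avg- and max-pooling,
    # pass both through a shared two-layer MLP, sum, then gate channels.
    avg = feat.reshape(C, -1).mean(axis=1)
    mx = feat.reshape(C, -1).max(axis=1)
    mlp = lambda v: w2 @ np.maximum(w1 @ v, 0.0)  # shared MLP with ReLU
    ch_att = sigmoid(mlp(avg) + mlp(mx))          # shape (C,)
    feat = feat * ch_att[:, None, None]
    # Spatial attention: squeeze channels via avg- and max-pooling,
    # then gate each spatial location.
    sp_att = sigmoid(0.5 * (feat.mean(axis=0) + feat.max(axis=0)))  # (H, W)
    return feat * sp_att[None, :, :]

rng = np.random.default_rng(0)
C, r = 8, 2
out = cbam(rng.standard_normal((C, 5, 5)),
           rng.standard_normal((C // r, C)),
           rng.standard_normal((C, C // r)))
print(out.shape)  # (8, 5, 5)
```

Because both attention maps are sigmoid-gated, the module rescales features in place without changing the tensor shape, which is what lets it drop into existing generators and discriminators.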
Adavis: Adaptive And Explainable Visualization Recommendation For Tabular Data, Songheng Zhang, Yong Wang, Haotian Li, Huamin Qu
Automated visualization recommendation facilitates the rapid creation of effective visualizations, which is especially beneficial for users with limited time and limited knowledge of data visualization. There is an increasing trend of leveraging machine learning (ML) techniques to achieve end-to-end visualization recommendation. However, existing ML-based approaches implicitly assume that there is only one appropriate visualization for a specific dataset, which is often not true in real applications. Also, they often work like a black box, making it difficult for users to understand why specific visualizations are recommended. To fill this research gap, we propose AdaVis, an adaptive and explainable …
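The two gaps the abstract identifies (a single recommended chart per dataset, and opaque reasoning) can be made concrete with a deliberately simple rule-based stand-in: it ranks *several* candidate charts from the dataset's column types and attaches a human-readable reason to each. All rules and names here are hypothetical; this is not AdaVis's ML model.

```python
def recommend_charts(columns):
    """Rank chart types for a two-column table by column types.

    Hypothetical rule-based stand-in for visualization recommendation:
    returns multiple ranked, explained candidates rather than one answer.
    """
    kinds = sorted(c["type"] for c in columns)
    scores = {}
    if kinds == ["categorical", "quantitative"]:
        scores = {"bar": 0.9, "box": 0.6, "strip": 0.4}
    elif kinds == ["quantitative", "quantitative"]:
        scores = {"scatter": 0.9, "line": 0.5, "heatmap": 0.3}
    elif kinds == ["quantitative", "temporal"]:
        scores = {"line": 0.9, "area": 0.6, "scatter": 0.3}
    ranked = sorted(scores.items(), key=lambda kv: -kv[1])
    # Attach an explanation so each suggestion is interpretable.
    return [(chart, score, f"rank {i + 1} for column types {kinds}")
            for i, (chart, score) in enumerate(ranked)]

recs = recommend_charts([{"name": "month", "type": "temporal"},
                         {"name": "sales", "type": "quantitative"}])
print([chart for chart, _, _ in recs])  # ['line', 'area', 'scatter']
```

An ML-based recommender replaces the hand-written rules with learned scoring, but the interface (ranked candidates plus explanations) is the shape an adaptive, explainable system exposes.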
Gnnlens: A Visual Analytics Approach For Prediction Error Diagnosis Of Graph Neural Networks., Zhihua Jin, Yong Wang, Qianwen Wang, Yao Ming, Tengfei Ma, Huamin Qu
Graph Neural Networks (GNNs) extend deep learning techniques to graph data and have achieved significant progress in graph analysis tasks (e.g., node classification) in recent years. However, like other deep neural networks such as Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs), GNNs behave as a black box, with their details hidden from model developers and users, making it difficult to diagnose possible errors. Although many visual analytics studies have examined CNNs and RNNs, little research has addressed the challenges of GNNs. This paper fills the research gap with an interactive visual analysis …
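A basic building block of prediction-error diagnosis for node classification is summarizing *where* a model goes wrong: the error rate per true class and the most frequent confusion for each class. The generic sketch below computes exactly that from label/prediction pairs; it is an illustration of the diagnostic idea, not GNNLens itself.

```python
from collections import Counter

def per_class_error(labels, preds):
    """Per-class error rate and top confusion for a node classifier.

    Generic error-diagnosis summary (not GNNLens): for each true class,
    report (error_rate, most_common_wrong_prediction_or_None).
    """
    totals, wrong, confusions = Counter(), Counter(), {}
    for y, p in zip(labels, preds):
        totals[y] += 1
        if p != y:
            wrong[y] += 1
            confusions.setdefault(y, Counter())[p] += 1
    report = {}
    for cls in totals:
        rate = wrong[cls] / totals[cls]
        top = confusions[cls].most_common(1)[0][0] if cls in confusions else None
        report[cls] = (round(rate, 2), top)
    return report

labels = ["A", "A", "A", "B", "B", "C"]
preds  = ["A", "B", "B", "B", "C", "C"]
report = per_class_error(labels, preds)
print(report)  # {'A': (0.67, 'B'), 'B': (0.5, 'C'), 'C': (0.0, None)}
```

A visual analytics tool layers interaction on top of summaries like this, e.g. letting the user drill from a high-error class down to the individual misclassified nodes and their graph neighborhoods.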