
Old Dominion University


Articles 1 - 30 of 46

Full-Text Articles in Graphics and Human Computer Interfaces

The Propagation And Execution Of Malware In Images, Piper Hall Nov 2023


Cybersecurity Undergraduate Research Showcase

Malware has become increasingly prolific and severe in its consequences as information systems mature and users become more reliant on computing in their daily lives. As cybercrime becomes more complex in its strategies, an often-overlooked manner of propagation is through images. In recent years, several high-profile vulnerabilities in image libraries have opened the door for threat actors to steal money and information from unsuspecting users. This paper will explore the mechanisms by which these exploits function and how they can be avoided.
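
As a minimal, hypothetical illustration of one avoidance measure in this spirit (not taken from the paper), the sketch below checks that a file's leading magic bytes actually match the image type its extension claims before the file is handed to an image library.

```python
# Minimal sketch (not from the paper): reject files whose magic bytes do not
# match their claimed image extension before handing them to an image library.
import pathlib

MAGIC_BYTES = {
    ".png": b"\x89PNG\r\n\x1a\n",
    ".jpg": b"\xff\xd8\xff",
    ".jpeg": b"\xff\xd8\xff",
    ".gif": b"GIF8",
}

def looks_like_claimed_type(path: str) -> bool:
    """Return True if the file starts with the signature its extension implies."""
    p = pathlib.Path(path)
    signature = MAGIC_BYTES.get(p.suffix.lower())
    if signature is None:
        return False  # unknown extension: treat as suspicious
    with open(p, "rb") as f:
        return f.read(len(signature)) == signature

# Example: skip decoding anything that fails the check.
# if not looks_like_claimed_type("upload.png"):
#     raise ValueError("file content does not match its image extension")
```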


Iot Health Devices: Exploring Security Risks In The Connected Landscape, Abasi-Amefon Obot Affia, Hilary Finch, Woosub Jung, Issah Abubakari Samori, Lucas Potter, Xavier-Lewis Palmer May 2023


School of Cybersecurity Faculty Publications

The concept of the Internet of Things (IoT) spans decades, and the same can be said for its inclusion in healthcare. The IoT is an attractive target in medicine; it offers considerable potential in expanding care. However, the application of the IoT in healthcare is fraught with an array of challenges and, through it, numerous vulnerabilities that translate to wider attack surfaces and deeper potential damage to both consumers and their confidence in health systems, as patient-specific data becomes available to access. Further, when IoT health devices (IoTHDs) are developed, a diverse range of …


Digital Transformation, Applications, And Vulnerabilities In Maritime And Shipbuilding Ecosystems, Rafael Diaz, Katherine Smith Jan 2023


VMASC Publications

The evolution of maritime and shipbuilding supply chains toward digital ecosystems increases operational complexity and requires reliable communication and coordination. As labor and suppliers shift to digital platforms, interconnection, information transparency, and decentralized choices become ubiquitous. In this sense, Industry 4.0 enables "smart digitalization" in these environments. Many applications exist in two distinct but interrelated areas: shipbuilding design and shipyard operational performance. New digital tools, such as virtual prototypes and augmented reality, are beginning to be used in the design phases, during the commissioning/quality control activities, and for training workers and crews. An application relates to using Virtual Prototypes …


Enabling Customization Of Discussion Forums For Blind Users, Mohan Sunkara, Yash Prakash, Hae-Na Lee, Sampath Jayarathna, Vikas Ashok Jan 2023


Computer Science Faculty Publications

Online discussion forums have become an integral component of news, entertainment, information, and video-streaming websites, where people all over the world actively engage in discussions on a wide range of topics including politics, sports, music, business, health, and world affairs. Yet, little is known about their usability for blind users, who aurally interact with the forum conversations using screen reader assistive technology. In an interview study, blind users stated that they often had an arduous and frustrating interaction experience while consuming conversation threads, mainly due to the highly redundant content and the absence of customization options to selectively view portions …


Mwirgan: Unsupervised Visible-To Mwir Image Translation With Generative Adversarial Network, Mohammad Shahab Uddin, Chiman Kwan, Jiang Li Jan 2023


Electrical & Computer Engineering Faculty Publications

Unsupervised image-to-image translation techniques have been used in many applications, including visible-to-Long-Wave Infrared (visible-to-LWIR) image translation, but very few papers have explored visible-to-Mid-Wave Infrared (visible-to-MWIR) image translation. In this paper, we investigated unsupervised visible-to-MWIR image translation using generative adversarial networks (GANs). We proposed a new model named MWIRGAN for visible-to-MWIR image translation in a fully unsupervised manner. We utilized a perceptual loss to leverage shape identification and location changes of the objects in the translation. The experimental results showed that MWIRGAN was capable of visible-to-MWIR image translation while preserving the object’s shape with proper enhancement in the translated images and …
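
The sketch below is a generic perceptual-loss layer of the kind the abstract mentions, not the authors' exact formulation: it compares fixed VGG-16 features of the translated and reference images, and the layer cutoff and loss weight are assumptions.

```python
# Generic perceptual-loss sketch (not the authors' exact formulation): compare
# deep VGG-16 features of a translated image and a reference to preserve shape.
import torch
import torch.nn as nn
from torchvision.models import vgg16, VGG16_Weights

class PerceptualLoss(nn.Module):
    def __init__(self, layer_index: int = 16):  # layer cutoff is an assumption
        super().__init__()
        features = vgg16(weights=VGG16_Weights.DEFAULT).features[:layer_index]
        for p in features.parameters():
            p.requires_grad = False  # fixed feature extractor
        self.features = features.eval()
        self.criterion = nn.L1Loss()

    def forward(self, translated: torch.Tensor, reference: torch.Tensor) -> torch.Tensor:
        return self.criterion(self.features(translated), self.features(reference))

# Usage: add the perceptual term to the usual GAN generator loss.
# loss = adversarial_loss + lambda_p * PerceptualLoss()(fake_mwir, visible_input)
```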


X-Disetrac: Distributed Eye-Tracking With Extended Realities, Bhanuka Mahanama, Sampath Jayarathna Jan 2023


College of Sciences Posters

Humans use heterogeneous collaboration mediums such as in-person, online, and extended realities for day-to-day activities. Identifying patterns in viewpoints and pupillary responses (a.k.a. eye-tracking data) provides informative cues on individual and collective behavior during collaborative tasks. Despite the increasing ubiquity of these different mediums, the aggregation and analysis of eye-tracking data in heterogeneous collaborative environments remain unexplored. Our study proposes X-DisETrac: Extended Distributed Eye Tracking, a versatile framework for eye tracking in heterogeneous environments. Our approach tackles the complexity by establishing a platform-agnostic communication protocol encompassing three data streams to simplify data aggregation and …


Autodesc: Facilitating Convenient Perusal Of Web Data Items For Blind Users, Yash Prakash, Mohan Sunkara, Hae-Na Lee, Sampath Jayarathna, Vikas Ashok Jan 2023


Computer Science Faculty Publications

Web data items such as shopping products, classifieds, and job listings are indispensable components of most e-commerce websites. The information on the data items is typically distributed over two or more webpages, e.g., a ‘Query-Results’ page showing the summaries of the items, and ‘Details’ pages containing full information about the items. While this organization of data mitigates information overload and visual cluttering for sighted users, it nonetheless increases the interaction overhead and effort for blind users, as back-and-forth navigation between webpages using screen reader assistive technology is tedious and cumbersome. Existing usability-enhancing solutions are unable to provide adequate support in …


Evaluating Human Eye Features For Objective Measure Of Working Memory Capacity, Yasasi Abeysinghe, Enkelejda Kasneci (Ed.), Frederick Shic (Ed.), Mohamed Khamis (Ed.) Jan 2023


Computer Science Faculty Publications

Eye tracking measures can provide means to understand the underlying development of human working memory. In this study, we propose to develop machine learning algorithms to find an objective relationship between human eye movements, via the oculomotor plant, and working memory capacity, which determines subjective cognitive load. Here we evaluate oculomotor plant features extracted from saccadic eye movements, traditional positional gaze metrics, and advanced eye metrics such as the ambient/focal coefficient, gaze transition entropy, low/high index of pupillary activity (LHIPA), and real-time index of pupillary activity (RIPA). This paper outlines the proposed approach of evaluating eye movements for obtaining an …
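
Of the metrics named above, gaze transition entropy has a compact standard definition; the sketch below computes it from a sequence of fixated areas of interest (AOIs), where the AOI labels are purely hypothetical.

```python
# Minimal sketch of gaze transition entropy over areas of interest (AOIs).
# AOI labels below are hypothetical; real studies derive them from the stimulus.
import math
from collections import Counter

def gaze_transition_entropy(aoi_sequence):
    """H_t = -sum_i pi_i * sum_j p_ij * log2(p_ij), with pi_i the observed
    proportion of fixations in AOI i and p_ij the transition probabilities."""
    transitions = Counter(zip(aoi_sequence, aoi_sequence[1:]))
    outgoing = Counter(src for src, _ in transitions.elements())
    visits = Counter(aoi_sequence)
    total_visits = len(aoi_sequence)
    entropy = 0.0
    for (src, dst), count in transitions.items():
        p_ij = count / outgoing[src]        # transition probability out of src
        pi_i = visits[src] / total_visits   # stationary weight of src
        entropy -= pi_i * p_ij * math.log2(p_ij)
    return entropy

print(gaze_transition_entropy(["A", "B", "A", "C", "A", "B"]))
```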


The Message Design Of Raiders Of The Lost Ark On The Atari 2600 & A Fan’S Map, Quick Start, And Strategy Guide, Miguel Ramlatchan, William I. Ramlatchan Jul 2022


Distance Learning Faculty & Staff Books

The message design and human performance technology in video games, especially early video games, have always been fascinating to me. From an instructional design perspective, the capabilities of the technology of the classic game consoles required a careful balance of achievable objectives, cognitive task analysis, guided problem solving, and message design. Raiders on the Atari is an excellent example of this balance. It is an epic adventure game, spanning 13+ distinct areas, with an inventory of items, where those hard-to-find items had to be used by the player to solve problems during their quest (and who would have …


Humans And The Core Partition: An Agent-Based Modeling Experiment, Andrew J. Collins, Sheida Etemadidavan Jan 2022


Engineering Management & Systems Engineering Faculty Publications

Although strategic coalition formation is traditionally modeled using cooperative game theory, behavioral game theorists have repeatedly shown that outcomes predicted by game theory are different from those generated by actual human behavior. To further explore these differences, in a cooperative game theory context, we run an experiment to compare the outcomes resulting from human participants’ behavior to those generated by a cooperative game theory solution mechanism called the core partition. Our experiment uses an interactive simulation of a glove game, a particular type of cooperative game, to collect the participants’ decision choices and their resultant outcomes. Two different glove games are considered, …
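
For readers unfamiliar with glove games: each player holds either a left or a right glove, and a coalition's value is the number of matched pairs it can form. The sketch below encodes that characteristic function for a hypothetical three-player endowment, not the specific games used in the study.

```python
# Minimal sketch of a glove game's characteristic function: a coalition's value
# is the number of left/right glove pairs it can assemble. The endowment below
# is a hypothetical example, not one of the games used in the paper.
from itertools import combinations

gloves = {"p1": "L", "p2": "L", "p3": "R"}  # hypothetical endowment

def coalition_value(coalition):
    lefts = sum(1 for p in coalition if gloves[p] == "L")
    rights = sum(1 for p in coalition if gloves[p] == "R")
    return min(lefts, rights)

# Enumerate the value of every coalition (the input to core computations).
players = list(gloves)
for size in range(1, len(players) + 1):
    for coalition in combinations(players, size):
        print(coalition, coalition_value(coalition))
```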


Core Point Pixel-Level Localization By Fingerprint Features In Spatial Domain, Xueyi Ye, Yuzhong Shen, Maosheng Zeng, Yirui Liu, Huahua Chen, Zhijing Zhao Jan 2022


Computational Modeling & Simulation Engineering Faculty Publications

Singular point detection is a primary step in fingerprint recognition, especially for fingerprint alignment and classification. But at present there are still problems and challenges, such as false-positive singular points or inaccurate reference point localization. This paper proposes an accurate core point localization method based on spatial domain features of fingerprint images, approached from a completely different viewpoint, to address the core point displacement problem in singular point detection. The method first defines new fingerprint features, called furcation and confluence, to represent specific ridge/valley distribution in a core point area, and uses them to extract the innermost Curve …


Augmented Reality Integrated Welder Training For Mechanical Engineering Technology, Aditya Akundi, Hamid Eisazadeh, Mona Torabizadeh Jan 2022


Engineering Technology Faculty Publications

The shortage of welders is well documented and projected to become more severe for various industries such as shipbuilding in coming years. This is mainly because welder training is a critical and often costly endeavor. This study examines the potential of augmented reality technology as a critical part of welder training for mechanical engineering technology (MET) students. This study assessed the performance of two groups of MET students trained with two different methods. One group received training with the traditional method in three sessions. The second group acquired training initially with an augmented reality welding system for three sessions. Then, …


The Effect Of Touch Simulation In Virtual Reality Shopping, Ha Kyung Lee, Namhee Yoon, Dooyoung Choi Jan 2022


STEMPS Faculty Publications

This study aims to explore the effect of touch simulation on virtual reality (VR) store satisfaction mediated by VR shopping self-efficacy and VR shopping pleasure. The moderation effects of the autotelic and instrumental need for touch between touch simulation and VR store satisfaction are also explored. Participants wear a head-mounted display VR device (Oculus Go) in a controlled laboratory environment, and their VR store experience is recorded as data. All participants’ responses (n = 58) are analyzed using SPSS 20.0 for descriptive statistics, reliability analysis, exploratory factor analysis, and the Process macro model analysis. The results show that touch simulation …


Can I Touch The Clothes On The Screen? The Touch Effect In Online Shopping, Ha Kyung Lee, Dooyoung Choi Jan 2022


STEMPS Faculty Publications

We examined the interplay effects of device types (touch vs. non-touch) and tactile sensitivity (fur vs. woven) on product attitudes mediated by the mental simulation for touch. The participants from MTurk were randomly assigned to one of two tactile conditions. Responses from those who used tablets (n=83, touch device) and laptops (n=96, non-touch device) were included in the analysis. The main effects of device types and tactile sensitivity on the mental simulation for touch were significant. The interaction effect of device types and tactile sensitivity was also significant. Those participants seeing the less tactile-sensitive product showed greater mental simulation …


Multi-User Eye-Tracking, Bhanuka Mahanama Jan 2022


Computer Science Faculty Publications

Human gaze characteristics provide informative cues on human behavior during various activities. Using traditional eye trackers, assessing gaze characteristics in the wild requires a dedicated device per participant and therefore is not feasible for large-scale experiments. In this study, we propose a commodity hardware-based multi-user eye-tracking system. We leverage recent advancements in Deep Neural Networks and large-scale datasets for implementing our system. Our preliminary studies provide promising results for multi-user eye-tracking on commodity hardware, providing a cost-effective solution for large-scale studies.


Eye Movement And Pupil Measures: A Review, Bhanuka Mahanama, Yasith Jayawardana, Sundararaman Rengarajan, Gavindya Jayawardena, Leanne Chukoskie, Joseph Snider, Sampath Jayarathna Jan 2022


Computer Science Faculty Publications

Our subjective visual experiences involve complex interaction between our eyes, our brain, and the surrounding world. It gives us the sense of sight, color, stereopsis, distance, pattern recognition, motor coordination, and more. The increasing ubiquity of gaze-aware technology brings with it the ability to track gaze and pupil measures with varying degrees of fidelity. With this in mind, a review that considers the various gaze measures becomes increasingly relevant, especially considering our ability to make sense of these signals given different spatio-temporal sampling capacities. In this paper, we selectively review prior work on eye movements and pupil measures. We first …


Toward A Real-Time Index Of Pupillary Activity As An Indicator Of Cognitive Load, Gavindya Jayawardena, Yasith Jayawardana, Sampath Jayarathna, Jonas Högström, Thomas Papa, Deepak Akkil, Andrew T. Duchowski, Vsevolod Peysakhovich, Izabela Krejtz, Nina Gehrer, Krzysztof Krejtz Jan 2022


Computer Science Faculty Publications

The Low/High Index of Pupillary Activity (LHIPA), an eye-tracked measure of pupil diameter oscillation, is redesigned and implemented to function in real-time. The novel Real-time IPA (RIPA) is shown to discriminate cognitive load in re-streamed data from earlier experiments. Rationale for the RIPA is tied to the functioning of the human autonomic nervous system, yielding a hybrid measure based on the ratio of Low/High frequencies of pupil oscillation. The paper's contribution lies in documenting the calculation of the RIPA. As with the LHIPA, it is possible for researchers to apply this metric to their own experiments …
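
As a rough illustration of a low/high frequency ratio over pupil-diameter samples (the published LHIPA/RIPA uses a wavelet-based formulation, so this FFT-based sketch with an assumed cutoff and sampling rate is not the authors' metric):

```python
# Simplified illustration only: the published LHIPA/RIPA is wavelet-based,
# whereas this sketch computes an FFT power ratio between assumed low- and
# high-frequency bands of a window of pupil-diameter samples.
import numpy as np

def low_high_power_ratio(pupil_diameter, sampling_rate_hz=60.0, cutoff_hz=0.5):
    signal = np.asarray(pupil_diameter, dtype=float)
    signal = signal - signal.mean()                   # remove DC component
    power = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / sampling_rate_hz)
    low = power[(freqs > 0) & (freqs <= cutoff_hz)].sum()
    high = power[freqs > cutoff_hz].sum()
    return low / high if high > 0 else float("inf")

# Example on synthetic data: 10 s of a slowly oscillating pupil trace plus noise.
t = np.arange(0, 10, 1 / 60.0)
trace = 4.0 + 0.2 * np.sin(2 * np.pi * 0.2 * t) + 0.02 * np.random.randn(t.size)
print(low_high_power_ratio(trace))
```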


Internet-Of-Things Devices In Support Of The Development Of Echoic Skills Among Children With Autism Spectrum Disorder, Krzysztof J. Rechowicz, John B. Stull, Michelle M. Hascall, Saikou Y. Diallo, Kevin J. O'Brien Jan 2021


VMASC Publications

A significant therapeutic challenge for people with disabilities is the development of verbal and echoic skills. Digital voice assistants (DVAs), such as Amazon’s Alexa, provide networked intelligence to billions of Internet-of-Things devices and have the potential to offer opportunities to people, such as those diagnosed with autism spectrum disorder (ASD), to advance these necessary skills. Voice interfaces can enable children with ASD to practice such skills at home; however, it remains unclear whether DVAs can be as proficient as therapists in recognizing utterances by a developing speaker. We developed an Alexa-based skill called ASPECT to measure how well the DVA …


Converting Optical Videos To Infrared Videos Using Attention Gan And Its Impact On Target Detection And Classification Performance, Mohammad Shahab Uddin, Reshad Hoque, Kazi Aminul Islam, Chiman Kwan, David Gribben, Jiang Li Jan 2021


Electrical & Computer Engineering Faculty Publications

To apply powerful deep-learning-based algorithms for object detection and classification in infrared videos, it is necessary to have more training data in order to build high-performance models. However, in many surveillance applications, one can have a lot more optical videos than infrared videos. This lack of IR video datasets can be mitigated if optical-to-infrared video conversion is possible. In this paper, we present a new approach for converting optical videos to infrared videos using deep learning. The basic idea is to focus on target areas using attention generative adversarial network (attention GAN), which will preserve the fidelity of target areas. …


Helion’S Snapshot Module, Nii-Kwartei Quartey Jul 2020


Cybersecurity Undergraduate Research Showcase

During my undergraduate research, I spent my time working with a home automation program known as Helion, specifically its Snapshot module. I was tasked with learning new material and completing parts of the webpage that were unfinished. I also had to get a little creative when working on a design that users could find appealing. There were times I found working on Helion difficult, but overall, working with Helion’s Snapshot Module is something that will help me in my undergraduate studies.


Rotate-And-Press: A Non-Visual Alternative To Point-And-Click, Hae-Na Lee, Vikas Ashok, I. V. Ramakrishnan Jan 2020


Computer Science Faculty Publications

Most computer applications manifest visually rich and dense graphical user interfaces (GUIs) that are primarily tailored for easy and efficient sighted interaction using a combination of two default input modalities, namely the keyboard and the mouse/touchpad. However, blind screen-reader users predominantly rely on the keyboard alone, and therefore struggle to interact with these applications, since it is both arduous and tedious to perform visual 'point-and-click' tasks, such as accessing the various application commands/features, using just the keyboard shortcuts supported by screen readers.

In this paper, we investigate the suitability of a 'rotate-and-press' input modality as an effective non-visual substitute for the visual …


Repurposing Visual Input Modalities For Blind Users: A Case Study Of Word Processors, Hae-Na Lee, Vikas Ashok, I.V. Ramakrishnan Jan 2020


Computer Science Faculty Publications

Visual 'point-and-click' interaction artifacts such as mouse and touchpad are tangible input modalities, which are essential for sighted users to conveniently interact with computer applications. In contrast, blind users are unable to leverage these visual input modalities and are thus limited while interacting with computers using a sequentially narrating screen-reader assistive technology that is coupled to keyboards. As a consequence, blind users generally require significantly more time and effort to do even simple application tasks (e.g., applying a style to text in a word processor) using only keyboard, compared to their sighted peers who can effortlessly accomplish the same tasks …


A Saliency-Driven Video Magnifier For People With Low Vision, Ali Selman Aydin, Shirin Feiz, Iv Ramakrishnan, Vikas Ashok Jan 2020


Computer Science Faculty Publications

Consuming video content poses significant challenges for many users of screen magnifiers, the “go-to” assistive technology for people with low vision. While screen magnifier software could be used to achieve a zoom factor that would make the content of the video visible to low-vision users, it is oftentimes a major challenge for these users to navigate through videos. Towards making videos more accessible for low-vision users, we have developed the SViM video magnifier system [6]. Specifically, SViM consists of three different magnifier interfaces with easy-to-use means of interaction. All three interfaces are driven by visual saliency as a …


Towards Making Videos Accessible For Low Vision Screen Magnifier Users, Ali Selman Aydin, Shirin Feiz, Vikas Ashok, Iv Ramakrishnan Jan 2020


Computer Science Faculty Publications

People with low vision who use screen magnifiers to interact with computing devices find it very challenging to interact with dynamically changing digital content such as videos, since they do not have the luxury of time to manually move, i.e., pan the magnifier lens to different regions of interest (ROIs) or zoom into these ROIs before the content changes across frames.

In this paper, we present SViM, a first of its kind screen-magnifier interface for such users that leverages advances in computer vision, particularly video saliency models, to identify salient ROIs in videos. SViM's interface allows users to zoom in/out …
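
A minimal sketch of the underlying idea, not SViM itself: use an off-the-shelf saliency model (here OpenCV's spectral-residual detector, assuming opencv-contrib-python is installed) to find the most salient region of a frame and crop around it, the kind of ROI a magnifier could pan or zoom to.

```python
# Minimal sketch, not SViM itself: locate the most salient region of a video
# frame with OpenCV's spectral-residual saliency (needs opencv-contrib-python)
# and crop around it, i.e., the kind of ROI a magnifier could zoom into.
import cv2

def crop_most_salient_region(frame, crop_size=200):
    saliency = cv2.saliency.StaticSaliencySpectralResidual_create()
    ok, saliency_map = saliency.computeSaliency(frame)
    if not ok:
        return frame
    blurred = cv2.GaussianBlur(saliency_map, (25, 25), 0)   # stabilize the peak
    _, _, _, (x, y) = cv2.minMaxLoc(blurred)                # most salient pixel
    h, w = frame.shape[:2]
    half = crop_size // 2
    x0, y0 = max(0, x - half), max(0, y - half)
    return frame[y0:min(h, y0 + crop_size), x0:min(w, x0 + crop_size)]

# frame = cv2.imread("frame.png")
# roi = crop_most_salient_region(frame)
```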


Sail: Saliency-Driven Injection Of Aria Landmarks, Ali Selman Aydin, Shirin Feiz, Vikas Ashok, Iv Ramakrishnan Jan 2020


Computer Science Faculty Publications

Navigating webpages with screen readers is a challenge even with recent improvements in screen reader technologies and the increased adoption of web standards for accessibility, namely ARIA. ARIA landmarks, an important aspect of ARIA, let screen reader users access different sections of a webpage quickly by enabling them to skip over blocks of irrelevant or redundant content. However, these landmarks are sporadically and inconsistently used by web developers, and in many cases are even absent from numerous web pages. Therefore, we propose SaIL, a scalable approach that automatically detects the important sections of a web page and then injects ARIA landmarks …
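
A minimal sketch of the injection step only (SaIL's saliency-based detection of important sections is not reproduced here): given selectors for sections that some detector has flagged, add ARIA landmark attributes so screen-reader users can jump between them. The selectors and labels are hypothetical.

```python
# Minimal sketch of the injection step only (the saliency-based detection that
# SaIL performs is not reproduced here): mark flagged sections as ARIA
# landmarks so screen readers can jump between them.
from bs4 import BeautifulSoup

def inject_landmarks(html, salient_selectors):
    """salient_selectors: CSS selectors for sections some detector has flagged
    as important (hypothetical input for this illustration)."""
    soup = BeautifulSoup(html, "html.parser")
    for i, selector in enumerate(salient_selectors, start=1):
        for node in soup.select(selector):
            node["role"] = "region"                        # ARIA landmark role
            node["aria-label"] = f"Important section {i}"  # announced by screen readers
    return str(soup)

html = "<div id='story'><p>Top story...</p></div><div id='ads'>...</div>"
print(inject_landmarks(html, ["#story"]))
```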


Accessibility Of Deepfakes, Andrew L. Collings Jan 2020


Cybersecurity Undergraduate Research Showcase

The danger posed by falsified media, commonly referred to as deepfakes, has been well researched and documented. The software Faceswap was used to swap the faces of two politicians (Joe Biden and Donald Trump). The testing was performed using an affordable consumer GPU (an AMD Radeon RX 570) over 100,000 iterations. The process and results for the two attempts with the best results (and largest differences) were recorded. The result was ultimately unconvincing: while the software was able to recreate the facial structure, the lighting and skin tone did not blend at all.


Impact Of Http Cookie Violations In Web Archives, Sawood Alam, Michele C. Weigle, Michael L. Nelson Jun 2019


Computer Science Faculty Publications

Certain HTTP Cookies on certain sites can be a source of content bias in archival crawls. Accommodating Cookies at crawl time but not utilizing them at replay time may cause cookie violations, resulting in defaced composite mementos that never existed on the live web. To address these issues, we propose that crawlers store Cookies with a short expiration time and that archival replay systems account for values in the Vary header along with URIs.
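
A hypothetical sketch of the replay-side half of that proposal: build the lookup key for an archived response from the URI plus the request-header values named in the response's stored Vary header, so variants crawled under different Cookies are not conflated. The function and key format are illustrative assumptions, not an existing replay system's API.

```python
# Hypothetical sketch of the replay-side idea: derive the lookup key for an
# archived response from the URI plus the request-header values listed in the
# stored Vary header, so variants crawled with different Cookies stay distinct.
def replay_lookup_key(uri, request_headers, stored_vary_header):
    """request_headers: headers of the replay request (dict with lowercased
    names); stored_vary_header: Vary value recorded at crawl time,
    e.g. "Cookie, Accept-Language"."""
    key_parts = [uri]
    for name in (h.strip().lower() for h in stored_vary_header.split(",") if h.strip()):
        key_parts.append(f"{name}={request_headers.get(name, '')}")
    return "|".join(key_parts)

print(replay_lookup_key(
    "https://example.com/page",
    {"cookie": "session=abc", "accept-language": "en-US"},
    "Cookie, Accept-Language",
))
```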


Experimental Investigation On The Effects Of Website Aesthetics On User Performance In Different Virtual Tasks, Meinald T. Thielsch, Russell Haines, Leonie Flacke Jan 2019


Information Technology & Decision Sciences Faculty Publications

In Human-Computer Interaction research, the positive effect of aesthetics on users' subjective impressions and reactions is well-accepted. However, results regarding the influence of interface aesthetics on a user's individual performance as an objective outcome are very mixed, yet of urgent interest given ongoing digitalization. In this web-based experiment (N = 331), the effect of interface aesthetics on individual performance is investigated for three different types of tasks (search, creative, and transfer tasks). The tasks were presented on either an aesthetic or an unaesthetic website, which differed significantly in subjective aesthetics. Goal orientation (learning versus performance goals) was included …


Transfer Learning Approach To Multiclass Classification Of Child Facial Expressions, Megan A. Witherow, Manar D. Samad, Khan M. Iftekharuddin Jan 2019


Electrical & Computer Engineering Faculty Publications

The classification of facial expression has been extensively studied using adult facial images, which are not appropriate ground truths for classifying facial expressions in children. State-of-the-art deep learning approaches have been successful in the classification of facial expressions in adults. A deep learning model may be better able to learn the subtle but important features underlying child facial expressions and improve upon the performance of traditional machine learning and feature extraction methods. However, unlike adult data, only a limited number of ground truth images exist for training and validating models for child facial expression classification, and there is a …
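
A generic transfer-learning recipe of the kind the title refers to, sketched below with an assumed ResNet-18 backbone and class count rather than the authors' exact model: reuse ImageNet features and retrain only a new classification head on the small child facial-expression dataset.

```python
# Generic transfer-learning sketch (backbone and class count are assumptions,
# not the authors' exact model): reuse ImageNet features, retrain only a new
# classification head on the small child facial-expression dataset.
import torch.nn as nn
from torchvision.models import resnet18, ResNet18_Weights

def build_transfer_model(num_expression_classes: int = 7) -> nn.Module:
    model = resnet18(weights=ResNet18_Weights.DEFAULT)
    for param in model.parameters():
        param.requires_grad = False               # freeze pretrained features
    model.fc = nn.Linear(model.fc.in_features, num_expression_classes)
    return model                                  # only model.fc is trainable

model = build_transfer_model()
trainable = [n for n, p in model.named_parameters() if p.requires_grad]
print(trainable)  # ['fc.weight', 'fc.bias']
```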


Fusion Of Landsat And Worldview Images, Chiman Kwan, Bryan Chou, Jerry Yang, Daniel Perez, Yuzhong Shen, Jiang Li, Krzysztof Koperski Jan 2019


Computational Modeling & Simulation Engineering Faculty Publications

Pansharpened Landsat images have 15 m spatial resolution with 16-day revisit periods. On the other hand, Worldview images have 0.5 m resolution after pansharpening, but their revisit times are uncertain. We present some preliminary results for a challenging image fusion problem that fuses Landsat and Worldview (WV) images to yield a high temporal resolution image sequence at the same spatial resolution as WV images. Since the ratio of spatial resolutions between Landsat and Worldview is 30 to 1, our preliminary results are mixed in that objective performance metrics such as peak signal-to-noise ratio (PSNR), correlation coefficient (CC), etc. sometimes showed good …
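
The two objective metrics named in the abstract have standard definitions; the sketch below computes PSNR and the correlation coefficient between a fused image and a reference with NumPy, and is not the authors' evaluation code.

```python
# Standard definitions of the two metrics named in the abstract (not the
# authors' evaluation code): PSNR and the correlation coefficient between a
# fused image and a reference.
import numpy as np

def psnr(reference, fused, max_value=255.0):
    mse = np.mean((reference.astype(float) - fused.astype(float)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_value ** 2 / mse)

def correlation_coefficient(reference, fused):
    return float(np.corrcoef(reference.ravel(), fused.ravel())[0, 1])

# Quick check on synthetic 8-bit images.
a = np.random.randint(0, 256, (64, 64))
b = np.clip(a + np.random.randint(-5, 6, a.shape), 0, 255)
print(psnr(a, b), correlation_coefficient(a, b))
```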