Engineering Commons

Open Access. Powered by Scholars. Published by Universities.®

Articles 1 - 7 of 7

Full-Text Articles in Engineering

Intent Recognition In Smart Living Through Deep Recurrent Neural Networks, Xiang Zhang, Lina Yao, Chaoran Huang, Quan Z. Sheng, Xianzhi Wang Nov 2017

Research Collection School Of Computing and Information Systems

Electroencephalography (EEG) signal based intent recognition has recently attracted much attention in both academia and industry, because it can help elderly or motor-disabled people control smart devices and communicate with the outside world. However, the use of EEG signals is challenged by low accuracy and by arduous, time-consuming feature extraction. This paper proposes a 7-layer deep learning model that classifies raw EEG signals to recognize subjects' intents, avoiding the time spent on pre-processing and feature extraction. The hyper-parameters are selected with an orthogonal array experiment method for efficiency. Our model is applied to an open EEG dataset provided …
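
As an illustration of the kind of model the abstract describes, here is a minimal sketch of classifying raw EEG windows with a stacked recurrent network, written in PyTorch. The channel count, window length, number of classes, and the 5-recurrent-plus-2-dense layer split are assumptions, not the paper's configuration.

# A minimal sketch (not the authors' code) of classifying raw EEG windows with a
# stacked recurrent network. Sizes below are made up: 64-channel EEG, windows of
# 160 samples, and 5 intent classes.
import torch
import torch.nn as nn

class EEGIntentRNN(nn.Module):
    def __init__(self, n_channels=64, hidden=128, n_layers=5, n_classes=5):
        super().__init__()
        # Stacked LSTM over the raw time series (no hand-crafted features).
        self.rnn = nn.LSTM(input_size=n_channels, hidden_size=hidden,
                           num_layers=n_layers, batch_first=True)
        self.head = nn.Sequential(nn.Linear(hidden, 64), nn.ReLU(),
                                  nn.Linear(64, n_classes))

    def forward(self, x):             # x: (batch, time, channels)
        out, _ = self.rnn(x)
        return self.head(out[:, -1])  # classify from the last time step

# Example: one batch of 8 raw EEG windows.
logits = EEGIntentRNN()(torch.randn(8, 160, 64))
print(logits.shape)                   # torch.Size([8, 5])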


Hierarchical Fusion Based Deep Learning Framework For Lung Nodule Classification, Kazim Sekeroglu Oct 2017

LSU Doctoral Dissertations

Lung cancer is the leading cause of cancer mortality in both men and women. Computer-aided detection (CAD) and diagnosis systems can play an important role in helping physicians with cancer treatment. This dissertation proposes a CAD framework that uses a hierarchical fusion based deep learning model to detect nodules from stacks of 2D images. In the proposed hierarchical approach, a decision is made at each level using the decisions from the previous level. Further, individual decisions are computed for several perspectives of a volume of interest (VOI). This study explores three different …
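
The hierarchical fusion idea, where each level's decision feeds the next and decisions are computed per perspective of a VOI, can be illustrated with a toy sketch. The averaging rule, level structure, and scores below are placeholders, not the dissertation's method.

# A toy sketch of hierarchical decision fusion: per-view nodule probabilities are
# fused within each level, and each level's fused decision is passed up as an
# extra input to the next level. All numbers and the fusion rule are made up.
import numpy as np

def fuse_level(view_probs, prev_decision=None):
    """Fuse per-perspective probabilities, including the previous level's decision."""
    scores = list(view_probs)
    if prev_decision is not None:
        scores.append(prev_decision)
    return float(np.mean(scores))    # simple average as a stand-in fusion rule

# Hypothetical per-level outputs for three perspectives of one VOI.
levels = [
    [0.62, 0.55, 0.70],   # level 1: slice-wise classifiers
    [0.71, 0.66, 0.74],   # level 2: per-view aggregation
    [0.80, 0.77, 0.83],   # level 3: VOI-level classifiers
]

decision = None
for i, views in enumerate(levels, 1):
    decision = fuse_level(views, decision)
    print(f"level {i} fused decision: {decision:.3f}")

print("nodule" if decision > 0.5 else "non-nodule")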


An Ensemble Deep Convolutional Neural Network Model With Improved D-S Evidence Fusion For Bearing Fault Diagnosis, Shaobo Li, Guoka Liu, Xianghong Tang, Jianguang Lu, Jianjun Hu Jul 2017

Faculty Publications

Intelligent machine health monitoring and fault diagnosis are becoming increasingly important for modern manufacturing industries. Current fault diagnosis approaches mostly depend on expert-designed features for building prediction models. In this paper, we propose IDSCNN, a novel bearing fault diagnosis algorithm based on ensemble deep convolutional neural networks and an improved Dempster–Shafer (D-S) theory based evidence fusion. The convolutional neural networks take as inputs the root mean square (RMS) maps computed from the fast Fourier transform (FFT) features of the vibration signals from two sensors. The improved D-S evidence theory is implemented via a distance matrix between evidences and a modified Gini index. Extensive evaluations …
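
For orientation, the sketch below shows the classic Dempster-Shafer combination rule that the paper's improved fusion builds on. The improvements the abstract mentions (distance matrix between evidences, modified Gini index) are not reproduced here, and the fault classes and mass values are made up.

# A minimal sketch of classic Dempster-Shafer combination for two sensors'
# fault-class evidence, restricted to singleton hypotheses. This only
# illustrates the baseline rule, not the paper's improved fusion.
import numpy as np

def dempster_combine(m1, m2):
    """Combine two basic probability assignments over singleton hypotheses."""
    m1, m2 = np.asarray(m1, float), np.asarray(m2, float)
    joint = np.outer(m1, m2)
    conflict = joint.sum() - np.trace(joint)       # mass on incompatible pairs
    return np.diag(joint) / (1.0 - conflict)       # normalize out the conflict

# Hypothetical evidence from two CNNs over {normal, inner-race fault, outer-race fault}.
m_sensor1 = [0.6, 0.3, 0.1]
m_sensor2 = [0.5, 0.4, 0.1]
print(dempster_combine(m_sensor1, m_sensor2))      # fused belief per class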


Speech Based Machine Learning Models For Emotional State Recognition And Ptsd Detection, Debrup Banerjee Jul 2017

Electrical & Computer Engineering Theses & Dissertations

Recognition of emotional state and diagnosis of trauma-related illnesses such as posttraumatic stress disorder (PTSD) from speech signals have been active research topics over the past decade. A typical emotion recognition system consists of three components: speech segmentation, feature extraction, and emotion identification. Various speech features have been developed for emotional state recognition; they can be divided into three categories, namely excitation, vocal tract, and prosodic features. However, the capabilities of different feature categories and of advanced machine learning techniques have not been fully explored for emotion recognition and PTSD diagnosis. For PTSD assessment, clinical diagnosis through structured interviews is a …
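
To make the three-component pipeline concrete, here is a schematic sketch of segmentation, toy prosodic-style feature extraction, and a placeholder identification stage. The frame sizes, features, and decision rule are illustrative assumptions, not the dissertation's models.

# A schematic sketch of the three-stage pipeline the abstract describes:
# segment speech into frames, extract simple prosodic-style features
# (log energy, zero-crossing rate), and hand them to a decision stage.
import numpy as np

def segment(signal, frame_len=400, hop=160):
    """Split a 1-D speech signal into overlapping frames."""
    n = 1 + max(0, (len(signal) - frame_len) // hop)
    return np.stack([signal[i * hop:i * hop + frame_len] for i in range(n)])

def extract_features(frames):
    """Two toy prosodic features per frame: log energy and zero-crossing rate."""
    energy = np.log(np.sum(frames ** 2, axis=1) + 1e-8)
    zcr = np.mean(np.abs(np.diff(np.sign(frames), axis=1)) > 0, axis=1)
    return np.column_stack([energy, zcr])

def identify_emotion(features):
    """Placeholder decision stage; a trained SVM or deep model would go here."""
    return "high-arousal" if features[:, 0].mean() > -2.0 else "low-arousal"

speech = np.random.randn(16000)          # one second of fake 16 kHz speech
print(identify_emotion(extract_features(segment(speech))))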


Deepmon: Mobile Gpu-Based Deep Learning Framework For Continuous Vision Applications, Nguyen Loc Huynh, Youngki Lee, Rajesh Krishna Balan Jun 2017

Research Collection School Of Computing and Information Systems

The rapid emergence of head-mounted devices such as the Microsoft HoloLens enables a wide variety of continuous vision applications. Such applications often adopt deep learning algorithms such as CNNs and RNNs to extract rich contextual information from first-person-view video streams. Despite their high accuracy, running deep learning algorithms on mobile devices raises critical challenges, namely high processing latency and power consumption. In this paper, we propose DeepMon, a mobile deep learning inference system that runs a variety of deep learning inferences purely on a mobile device in a fast and energy-efficient manner. For this, we designed a suite of …


Demo: Deepmon - Building Mobile Gpu Deep Learning Models For Continuous Vision Applications, Loc Nguyen Huynh, Rajesh Krishna Balan, Youngki Lee Jun 2017

Research Collection School Of Computing and Information Systems

Deep learning has revolutionized vision sensing applications in terms of accuracy compared to other techniques. Its breakthrough comes from the ability to extract complex high-level features directly from sensor data. However, deep learning models are not yet natively supported on mobile devices due to their high computational requirements. In this paper, we present DeepMon, a next generation of the DeepSense [1] framework, which enables deep learning models on conventional mobile devices (e.g., the Samsung Galaxy S7) for continuous vision sensing applications. Firstly, DeepMon exploits the similarity between consecutive video frames to cache intermediate data within models and reduce inference latency. Secondly, …
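
The frame-similarity caching idea can be illustrated with a toy sketch: recompute an expensive per-block operation only for blocks that changed since the previous frame, and reuse cached outputs otherwise. The block size, change threshold, and stand-in convolution are assumptions, not DeepMon's implementation.

# A toy sketch of caching intermediate results across similar consecutive frames.
import numpy as np

BLOCK = 56  # block size in pixels (made up)

def conv_block(block):
    """Stand-in for an expensive convolutional layer applied to one block."""
    return block.mean()

def infer(frame, prev_frame, cache, tol=1.0):
    h, w = frame.shape
    out = {}
    for y in range(0, h, BLOCK):
        for x in range(0, w, BLOCK):
            cur = frame[y:y + BLOCK, x:x + BLOCK]
            if prev_frame is not None and (y, x) in cache:
                prev = prev_frame[y:y + BLOCK, x:x + BLOCK]
                if np.abs(cur - prev).mean() < tol:   # block barely changed
                    out[(y, x)] = cache[(y, x)]       # reuse cached result
                    continue
            out[(y, x)] = conv_block(cur)             # recompute changed block
    return out

frames = [np.zeros((224, 224)), np.zeros((224, 224))]
frames[1][:56, :56] = 50                              # only one block changes
cache = infer(frames[0], None, {})
cache = infer(frames[1], frames[0], cache)            # 15 of 16 blocks reused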


Deep Models For Engagement Assessment With Scarce Label Information, Feng Li, Guangfan Zhang, Wei Wang, Roger Xu, Tom Schnell, Jonathan Wen, Frederic Mckenzie, Jiang Li Jan 2017

Electrical & Computer Engineering Faculty Publications

Task engagement is defined as loadings on energetic arousal (affect), task motivation, and concentration (cognition) [1]. Labeling cognitive state data is usually challenging and expensive, and traditional computational models trained with limited label information for engagement assessment do not perform well because of overfitting. In this paper, we propose two deep models (i.e., a deep classifier and a deep autoencoder) for engagement assessment with scarce label information. We recruited 15 pilots to conduct a 4-hour flight simulation from Seattle to Chicago and recorded their electroencephalogram (EEG) signals during the simulation. Experts carefully examined the EEG signals and labeled …
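
The scarce-label strategy the abstract points to, pretraining a deep autoencoder on unlabeled data and reusing its encoder for a small supervised classifier, can be sketched as follows. The architecture, sizes, and data below are placeholders, not the paper's models.

# A minimal two-stage sketch: unsupervised autoencoder pretraining on plentiful
# unlabeled EEG feature vectors, then supervised fine-tuning of the encoder plus
# a small head on the scarce labeled set (engaged vs. not engaged).
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 16))
decoder = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 64))

# Stage 1: reconstruction loss on unlabeled data.
unlabeled = torch.randn(1000, 64)
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
for _ in range(50):
    loss = nn.functional.mse_loss(decoder(encoder(unlabeled)), unlabeled)
    opt.zero_grad(); loss.backward(); opt.step()

# Stage 2: fine-tune the pretrained encoder with a classifier head on few labels.
labeled_x, labeled_y = torch.randn(40, 64), torch.randint(0, 2, (40,))
classifier = nn.Sequential(encoder, nn.Linear(16, 2))
opt = torch.optim.Adam(classifier.parameters(), lr=1e-3)
for _ in range(50):
    loss = nn.functional.cross_entropy(classifier(labeled_x), labeled_y)
    opt.zero_grad(); loss.backward(); opt.step()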