Open Access. Powered by Scholars. Published by Universities.®

Gesture recognition

Articles 1 - 20 of 20

Full-Text Articles in Physical Sciences and Mathematics

Gesture Recognition For Dynamic Vision Sensor Based On Multi-Dimensional Projection Spatiotemporal Event Frame, Lai Kang, Yakun Zhang Mar 2024

Journal of System Simulation

Abstract: Vision-based gesture recognition is a commonly used means of human-computer interaction in the fields of virtual reality and game simulation. In practical applications, rapid gesture movements lead to blurred imaging with traditional RGB or depth cameras, which poses a great challenge for gesture recognition. To solve this problem, a dynamic visual data gesture recognition method based on a multi-dimensional projection spatiotemporal event frame (STEF) is proposed, using a dynamic vision sensor to capture high-speed gesture movement information. The spatiotemporal information is embedded in the data projection surfaces and fused to form a multidimensional projection …
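
As a rough illustration of the projection idea described above, the sketch below accumulates dynamic-vision-sensor events into x-y, x-t, and y-t projection frames with NumPy. It is not the paper's STEF construction; the sensor resolution, number of temporal bins, and the random example events are assumptions.

```python
import numpy as np

def project_events(events, width=128, height=128, t_bins=64):
    """Accumulate DVS events (x, y, t, polarity) into three 2-D projection
    frames: x-y (spatial), x-t and y-t (spatiotemporal). Illustrative only."""
    x = events[:, 0].astype(int)
    y = events[:, 1].astype(int)
    t = events[:, 2]
    p = events[:, 3]
    # Normalize timestamps into discrete temporal bins.
    t_idx = ((t - t.min()) / max(t.max() - t.min(), 1e-9) * (t_bins - 1)).astype(int)
    xy = np.zeros((height, width), dtype=np.float32)
    xt = np.zeros((t_bins, width), dtype=np.float32)
    yt = np.zeros((t_bins, height), dtype=np.float32)
    # Signed accumulation: ON events add, OFF events subtract.
    np.add.at(xy, (y, x), p)
    np.add.at(xt, (t_idx, x), p)
    np.add.at(yt, (t_idx, y), p)
    return xy, xt, yt

# Example: 10,000 random events for a hypothetical 128x128 sensor.
rng = np.random.default_rng(0)
ev = np.column_stack([rng.integers(0, 128, 10000),      # x
                      rng.integers(0, 128, 10000),      # y
                      np.sort(rng.random(10000)),       # timestamps
                      rng.choice([-1.0, 1.0], 10000)])  # polarity
xy_frame, xt_frame, yt_frame = project_events(ev)
```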


Utilizing Motion And Spatial Features For Sign Language Gesture Recognition Using Cascaded Cnn And Lstm Models, Hamzah Luqman, Elsayed Elalfy Nov 2022

Turkish Journal of Electrical Engineering and Computer Sciences

Sign language is a language produced by body gestures and facial expressions. The aim of an automatic sign language recognition system is to assign meaning to each sign gesture. Recently, several computer vision systems have been proposed for sign language recognition using a variety of recognition techniques, sign languages, and gesture modalities. However, one of the challenging problems involves the preprocessing, segmentation, extraction, and tracking of the relevant static and dynamic features of manual and nonmanual gestures across image sequences. In this paper, we studied the efficiency, scalability, and computation time of three cascaded architectures of convolutional …
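
For readers unfamiliar with the cascaded architecture, the sketch below builds a generic per-frame CNN followed by an LSTM in Keras. Clip length, layer sizes, and the number of classes are illustrative placeholders, not the configurations evaluated in the paper.

```python
from tensorflow.keras import layers, models

# A minimal cascaded CNN -> LSTM sketch for clips of T frames
# (illustrative sizes; not the architecture reported in the paper).
T, H, W, C, NUM_CLASSES = 16, 112, 112, 3, 20

model = models.Sequential([
    layers.Input(shape=(T, H, W, C)),
    # Per-frame spatial feature extractor (the CNN stage).
    layers.TimeDistributed(layers.Conv2D(32, 3, activation="relu")),
    layers.TimeDistributed(layers.MaxPooling2D()),
    layers.TimeDistributed(layers.Conv2D(64, 3, activation="relu")),
    layers.TimeDistributed(layers.GlobalAveragePooling2D()),
    # Temporal model over the per-frame feature vectors (the LSTM stage).
    layers.LSTM(128),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```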


Research On Accurate Gesture Recognition Algorithm In Complex Environment Based On Machine Vision, Xu Sheng, Wenyu Feng, Zhicheng Liu, Xintao Tu, Minrui Fei, Kun Zhang Oct 2021

Journal of System Simulation

Abstract: To address the issue of cross-infection caused by public elevator buttons during COVID-19, a machine-vision-based software algorithm for non-contact control of public buttons through gesture recognition is designed. To improve the accuracy of gesture recognition, an improved YOLOv4 algorithm is proposed: a Ghost module incorporating an attention mechanism is designed, and the ResBlock module in YOLOv4 is replaced with this Ghost module. The experimental results show that, in the gesture recognition task, the detection speed is improved by 14% and the detection accuracy by 0.1% compared with the original model. The …
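
The Ghost module referenced above follows the GhostNet idea of generating part of the feature maps with a cheap depthwise convolution. A minimal PyTorch sketch is shown below; channel sizes are illustrative and the attention mechanism mentioned in the abstract is omitted.

```python
import torch
import torch.nn as nn

class GhostModule(nn.Module):
    """Ghost module sketch: a primary 1x1 convolution produces a few intrinsic
    feature maps, and a cheap depthwise convolution generates the remaining
    "ghost" maps, which are concatenated. Channel sizes are illustrative."""
    def __init__(self, in_ch, out_ch, ratio=2, dw_kernel=3):
        super().__init__()
        init_ch = out_ch // ratio          # intrinsic maps
        ghost_ch = out_ch - init_ch        # cheap "ghost" maps
        self.primary = nn.Sequential(
            nn.Conv2d(in_ch, init_ch, 1, bias=False),
            nn.BatchNorm2d(init_ch), nn.ReLU(inplace=True))
        self.cheap = nn.Sequential(
            nn.Conv2d(init_ch, ghost_ch, dw_kernel, padding=dw_kernel // 2,
                      groups=init_ch, bias=False),
            nn.BatchNorm2d(ghost_ch), nn.ReLU(inplace=True))

    def forward(self, x):
        y = self.primary(x)
        return torch.cat([y, self.cheap(y)], dim=1)

# Example: a 64 -> 128 channel block on a 52x52 feature map.
x = torch.randn(1, 64, 52, 52)
print(GhostModule(64, 128)(x).shape)   # torch.Size([1, 128, 52, 52])
```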


Comparison Of Machine Learning Models: Gesture Recognition Using A Multimodal Wrist Orthosis For Tetraplegics, Charlie Martin Aug 2020

The Journal of Purdue Undergraduate Research

Many tetraplegics must wear wrist braces to support paralyzed wrists and hands. However, current wrist orthoses offer little functionality for typical activities of daily living beyond a small pocket to hold utensils. To enhance the functionality of wrist orthoses, gesture recognition technology can be applied to control mechatronic tools attached to a novel fabricated wrist brace. Gesture recognition is a growing technology for providing touchless human-computer interaction that can be particularly useful for tetraplegics with limited upper-extremity mobility. In this study, three gesture recognition models were compared: two dynamic time-warping models and a hidden …
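
As background for the dynamic time-warping models mentioned above, here is a minimal DTW distance with 1-nearest-neighbour template matching in NumPy; the toy templates and signals are illustrative only.

```python
import numpy as np

def dtw_distance(a, b):
    """Classic O(len(a) * len(b)) dynamic time warping between 1-D sequences."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return cost[n, m]

def classify_1nn(query, templates):
    """templates: list of (label, sequence) pairs; returns the label of the
    template closest to the query under DTW."""
    return min(templates, key=lambda t: dtw_distance(query, t[1]))[0]

t = np.linspace(0, 1, 50)
templates = [("wave", np.sin(2 * np.pi * t)), ("push", t ** 2)]
print(classify_1nn(np.sin(2 * np.pi * t[::2]), templates))   # -> "wave"
```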


Gesture Segmentation And Recognition Based On Kinect Depth Data, Yanming Mao, Liliang Zhang Aug 2020

Journal of System Simulation

Abstract: To address the problems that gesture recognition demands a controlled environmental background, that segmented gestures often contain wrist data, and that closed fingers easily cause false recognition, a gesture recognition method based on depth data was proposed. The method captures a gesture depth map and uses a Hands Generator to obtain palm information for gesture segmentation; to remove redundant wrist data, a constraint treating the palm as roughly square is added. The number of extended fingers other than the thumb can be obtained with a scanline method, and therefore the width ratio of the adjacent …
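
The scanline idea can be illustrated with a small NumPy sketch that counts the foreground runs crossed by a horizontal scanline in a binary hand mask; the toy mask and choice of scanline row are assumptions, not the paper's procedure.

```python
import numpy as np

def count_fingers_on_scanline(mask, row):
    """Count connected foreground runs (candidate fingers) that a horizontal
    scanline crosses in a binary hand mask (1 = hand, 0 = background)."""
    line = mask[row].astype(np.int8)
    # A 0 -> 1 transition marks the start of a finger-width run.
    starts = np.flatnonzero(np.diff(np.concatenate(([0], line))) == 1)
    return len(starts)

# Toy 8x16 mask with three raised "fingers" crossing row 2.
mask = np.zeros((8, 16), dtype=np.uint8)
mask[5:, :] = 1            # palm region
mask[0:5, 2:4] = 1         # finger 1
mask[0:5, 6:8] = 1         # finger 2
mask[0:5, 10:12] = 1       # finger 3
print(count_fingers_on_scanline(mask, row=2))   # -> 3
```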


Study On Hand Gesture Recognition And Portfolio Optimization Model Based On Svm, Zhiwei Cai, Shuyan Wu, Junfeng Song Aug 2020

Journal of System Simulation

Abstract: Hand gesture recognition was studied. An approach for extracting the relevant features with the SVM algorithm from the machine learning domain was proposed, and a combination optimization method consisting of ANN, HMM, and DTW was used to perform hand gesture recognition. The experimental results show that the portfolio-optimization-based gesture recognition method achieves high accuracy and is very effective.


Research On Motion Capture In Substation Virtual Environment, Yuping Huo, Xiu’E Zhang, Li Bing, Weiqing Li Aug 2020

Journal of System Simulation

Abstract: In the immersive transformer training simulator, the operator's motion data were captured and analyzed using a MEMS inertial motion capture system. According to typical operations in the substation virtual environment, the semantics of interaction were studied. An SVM classification algorithm based on grid search and cross-validation was used to recognize the operator's gestures, and the identified gestures were used to actuate the virtual human's actions in the substation virtual environment. A priority-classified character animation method with deformation weight control was proposed to render actions of different priority levels at the same time. Two modes of virtual operation performance, character animation sequences …
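
The grid-search-plus-cross-validation step described above is commonly implemented with scikit-learn as sketched below; the parameter grid, fold count, and the synthetic feature vectors are placeholders, not the paper's setup.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Placeholder data: 200 gesture samples, 12-D feature vectors, 4 gesture classes.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 12))
y = rng.integers(0, 4, size=200)

pipe = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
param_grid = {"svc__C": [0.1, 1, 10, 100],
              "svc__gamma": ["scale", 0.01, 0.1, 1.0]}
search = GridSearchCV(pipe, param_grid, cv=5)   # 5-fold cross-validation
search.fit(X, y)
print(search.best_params_, search.best_score_)
```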


Research On Skin Color Image Post-Processing Methods For Gesture Recognition System, Renda Wang, Yin Yong, Shengwei Xing Aug 2020

Journal of System Simulation

Abstract: Skin-color-region image post-processing methods suitable for a computer-vision-based gesture recognition system were proposed. To handle the common case in which a face and a hand appear in the image frame simultaneously, the training process and grouping strategy of an AdaBoost face detector based on Haar-like templates were studied, and a rapid face-region culling method based on fused color information was proposed. To reduce the influence of noise in the skin color region image, impulse noise suppression and skin-like region rejection were proposed. The test results show that the proposed methods can greatly speed up the face …
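
A rough OpenCV sketch of the face-culling idea is given below: threshold skin color in YCrCb, zero out regions found by a stock Haar face cascade, and apply a median filter as simple impulse-noise suppression. The thresholds and the pre-trained cascade are assumptions; the paper trains and groups its own AdaBoost detector.

```python
import cv2

def skin_mask_without_faces(bgr):
    """Threshold skin color in YCrCb, then cull face regions found by a stock
    Haar cascade so only hand-candidate skin pixels remain (illustrative)."""
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)
    # Commonly used Cr/Cb skin range; the values are assumptions, tune per setup.
    mask = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))
    face_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.1, 5):
        mask[y:y + h, x:x + w] = 0          # reject the face region
    return cv2.medianBlur(mask, 5)          # suppress impulse noise

frame = cv2.imread("frame.jpg")             # hypothetical input frame
if frame is not None:
    hand_mask = skin_mask_without_faces(frame)
```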


Stay-At-Home Motor Rehabilitation: Optimizing Spatiotemporal Learning On Low-Cost Capacitive Sensor Arrays, Reid Sutherland May 2020

Graduate Theses and Dissertations

Repeated, consistent, and precise gesture performance is a key part of recovery for stroke and other motor-impaired patients. Close professional supervision of these exercises is also essential to ensure proper neuromotor repair, which consumes a large amount of medical resources. Gesture recognition systems are emerging as stay-at-home solutions to this problem, but the best solutions are expensive, and the inexpensive solutions are not universal enough to tackle patient-to-patient variability. While many methods have been studied and implemented, the gesture recognition system designer does not have a strategy to effectively predict the right method to fit the needs of a patient. …


Investigating Machine Learning Techniques For Gesture Recognition With Low-Cost Capacitive Sensing Arrays, Michael Fahr Jr. May 2020

Computer Science and Computer Engineering Undergraduate Honors Theses

Machine learning has proven to be an effective tool for forming models to make predictions based on sample data. Supervised learning, a subset of machine learning, can be used to map input data to output labels based on pre-existing paired data. Datasets for machine learning can be created from many different sources and vary in complexity, with popular datasets including the MNIST handwritten digit dataset and the CIFAR10 image dataset. The focus of this thesis is to test and validate multiple machine learning models for accurately classifying gestures performed on a low-cost capacitive sensing array. Multiple neural networks are trained using gesture …


Underwater Gesture Recognition Using Classical Computer Vision And Deep Learning Techniques, Mygel Andrei M. Martija, Jakov Ivan S. Dumbrique, Prospero C. Naval Jr. Mar 2020

Mathematics Faculty Publications

Underwater Gesture Recognition is a challenging task since conditions which are normally not an issue in gesture recognition on land must be considered. Such issues include low visibility, low contrast, and unequal spectral propagation. In this work, we explore the underwater gesture recognition problem by taking on the recently released Cognitive Autonomous Diving Buddy Underwater Gestures dataset. The contributions of this paper are as follows: (1) Use traditional computer vision techniques along with classical machine learning to perform gesture recognition on the CADDY dataset; (2) Apply deep learning using a convolutional neural network to solve the same problem; (3) Perform …
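
A generic example of the classical pipeline (HOG features with a linear SVM) is sketched below using scikit-image and scikit-learn; the image size, HOG parameters, and placeholder data are assumptions rather than the configuration used on the CADDY dataset.

```python
import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

def hog_features(images):
    """images: array of grayscale gesture crops, shape (N, 64, 64)."""
    return np.array([hog(img, orientations=9, pixels_per_cell=(8, 8),
                         cells_per_block=(2, 2)) for img in images])

# Placeholder data standing in for underwater gesture crops and labels.
rng = np.random.default_rng(0)
images = rng.random((120, 64, 64))
labels = rng.integers(0, 6, size=120)

X = hog_features(images)
clf = LinearSVC()
print(cross_val_score(clf, X, labels, cv=5).mean())
```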


Application Of Myo Gesture Recognition Method In Ancient Building Roaming System, Yanping Xue, Xuesong Wang, Zhongke Wu, Xingce Wang, Mingquan Zhou Dec 2019

Journal of System Simulation

Abstract: Museums display many exhibits, but their presentation methods are limited: ancient buildings are shown through text and pictures, which cannot give visitors a realistic experience. Therefore, ancient-architecture scene roaming based on the Myo armband combined with virtual reality is proposed, and Kaifeng Tower Park is used as an example to realize the system. The system obtains visitors' acceleration and gyroscope values for moving forward, turning, squatting, and stopping from the worn Myo, and an SVM-based gesture recognition algorithm is designed to classify the feature values. A contrast experiment is designed to verify the accuracy and efficiency of …


Teaching Introductory Programming Concepts Through A Gesture-Based Interface, Lora Streeter May 2019

Graduate Theses and Dissertations

Computer programming is an integral part of a technology driven society, so there is a tremendous need to teach programming to a wider audience. One of the challenges in meeting this demand for programmers is that most traditional computer programming classes are targeted to university/college students with strong math backgrounds. To expand the computer programming workforce, we need to encourage a wider range of students to learn about programming.

The goal of this research is to design and implement a gesture-driven interface to teach computer programming to young and non-traditional students. We designed our user interface based on the feedback …


Gesture Recognition Method Based On Multi-Feature Fusion, Yuanming Wang, Zhang Jun, Yuanhui Qin, Xiujuan Chai Feb 2019

Journal of System Simulation

Abstract: Aiming at the specific application requirements of command gestures on a flight deck, a gesture recognition method based on multi-feature fusion is proposed. A 3D trajectory feature vector and a sparse hand representation are established from the two aspects of trajectory and posture, based on visual information collected by a depth camera. On the one hand, the gesture is recognized through normalization, resampling, and alignment based on the trajectory feature; on the other hand, the gesture is recognized through sparse representation alignment based on the HOG feature. The recognition results are effectively fused. The experimental results indicate that our …
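
One common way to fuse recognition results from two feature channels is weighted score-level fusion of class probabilities, sketched below with scikit-learn; the classifiers, weights, and placeholder features are illustrative and not the fusion rule used in the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

# Placeholder features: trajectory descriptors and HOG posture descriptors
# for the same 150 gesture samples, 5 gesture classes.
rng = np.random.default_rng(0)
X_traj = rng.normal(size=(150, 30))
X_hog = rng.normal(size=(150, 300))
y = rng.integers(0, 5, size=150)

traj_clf = LogisticRegression(max_iter=1000).fit(X_traj, y)
hog_clf = SVC(probability=True).fit(X_hog, y)

def fused_predict(xt, xh, w=0.5):
    """Weighted average of the two classifiers' class-probability scores."""
    scores = w * traj_clf.predict_proba(xt) + (1 - w) * hog_clf.predict_proba(xh)
    return scores.argmax(axis=1)

print(fused_predict(X_traj[:5], X_hog[:5]))
```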


Annapurna: Building A Real-World Smartwatch-Based Automated Food Journal, Sougata Sen, Vigneshwaran Subbaraju, Archan Misra, Rajesh Krishna Balan, Youngki Lee Jun 2018

Research Collection School Of Computing and Information Systems

We describe the design and implementation of a smartwatch-based, completely unobtrusive, food journaling system, where the smartwatch helps to intelligently capture useful images of food that an individual consumes throughout the day. The overall system, called Annapurna, is based on three key components: (a) a smartwatch-based gesture recognizer to identify eating gestures, (b) a smartwatch-based image capturer that obtains a small set of relevant and useful images with a low energy overhead, and (c) a server-based image filtering engine that removes irrelevant uploaded images, and then catalogs them through a portal. Our primary challenge is to make the system robust …


Smartwatch-Based Early Gesture Detection & Trajectory Tracking For Interactive Gesture-Driven Applications, Tran Huy Vu, Archan Misra, Quentin Roy, Kenny Tsu Wei Choo, Youngki Lee Jan 2018

Research Collection School Of Computing and Information Systems

The paper explores the possibility of using wrist-worn devices (specifically, a smartwatch) to accurately track the hand movement and gestures for a new class of immersive, interactive gesture-driven applications. These interactive applications need two special features: (a) the ability to identify gestures from a continuous stream of sensor data early (i.e., even before the gesture is complete), and (b) the ability to precisely track the hand’s trajectory, even though the underlying inertial sensor data is noisy. We develop a new approach that tackles these requirements by first building an HMM-based gesture recognition framework that does not need an explicit segmentation step, …
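
To illustrate the HMM scoring idea in a hedged way, the sketch below (using hmmlearn) trains one Gaussian HMM per gesture class and classifies a growing prefix of the sensor stream by maximum log-likelihood, a crude stand-in for early detection. The paper's segmentation-free framework and trajectory tracking are not reproduced here; all data and model sizes are placeholders.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(0)

def make_sequences(offset, n=20, length=40, dim=6):
    """Placeholder accelerometer/gyroscope windows for one gesture class."""
    return [offset + rng.normal(size=(length, dim)) for _ in range(n)]

train = {"swipe": make_sequences(0.0), "circle": make_sequences(1.5)}

# One HMM per gesture class, fit on the concatenated training sequences.
models = {}
for label, seqs in train.items():
    X = np.concatenate(seqs)
    lengths = [len(s) for s in seqs]
    models[label] = GaussianHMM(n_components=4, n_iter=20).fit(X, lengths)

def early_classify(prefix):
    """Score an incomplete gesture prefix under every class model."""
    return max(models, key=lambda lbl: models[lbl].score(prefix))

test = 1.5 + rng.normal(size=(40, 6))        # an unsegmented "circle" gesture
for cut in (10, 20, 40):                     # classify after 25%, 50%, 100%
    print(cut, early_classify(test[:cut]))
```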


Magi: Enabling Multi-Device Gestural Applications, Tran Huy Vu, Choo Tsu Wei, Kenny, Youngki Lee, Richard Christopher Davis, Archan Misra Mar 2016

Research Collection School Of Computing and Information Systems

We describe our vision of a multiple mobile or wearable device environment and share our initial exploration of our vision in multi-wrist gesture recognition. We explore how multi-device input and output might look, giving four scenarios of everyday multi-device use that show the technical challenges that need to be addressed. We describe our system which allows for recognition to be distributed between multiple devices, fusing recognition streams on a resource-rich device (e.g., mobile phone). An Interactor layer recognises common gestures from the fusion engine, and provides abstract input streams (e.g., scrolling and zooming) to user interface components called Midgets. These …


The Case For Smartwatch-Based Diet Monitoring, Sougata Sen, Vigneshwaran Subbaraju, Archan Misra, Rajesh Krishna Balan, Youngki Lee Mar 2015

Research Collection School Of Computing and Information Systems

We explore the use of gesture recognition on a wrist-worn smartwatch as an enabler of an automated eating activity (and diet monitoring) system. We show, using small-scale user studies, how it is possible to use the accelerometer and gyroscope data from a smartwatch to accurately separate eating episodes from similar non-eating activities, and to additionally identify the mode of eating (i.e., using a spoon, bare hands or chopsticks). Additionally, we investigate the likelihood of automatically triggering the smartwatch's camera to capture clear images of the food being consumed, for possible offline analysis to identify what (and how much) the user …
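
A typical first step for this kind of smartwatch-based detection is sliding-window feature extraction over the accelerometer and gyroscope streams, sketched below; the sampling rate, window length, and feature set are assumptions rather than the study's pipeline.

```python
import numpy as np

def window_features(imu, win=100, step=50):
    """imu: (T, 6) array of accel (x, y, z) + gyro (x, y, z) samples.
    Returns per-window mean, standard deviation, and signal energy,
    a common hand-crafted feature set for eating-gesture detection."""
    feats = []
    for start in range(0, len(imu) - win + 1, step):
        w = imu[start:start + win]
        feats.append(np.concatenate([w.mean(axis=0),
                                     w.std(axis=0),
                                     (w ** 2).mean(axis=0)]))
    return np.array(feats)

# Placeholder: 10 seconds of 50 Hz smartwatch IMU data.
imu = np.random.default_rng(0).normal(size=(500, 6))
X = window_features(imu)          # shape (9, 18): 2 s windows, 1 s stride
print(X.shape)
```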


Using Infrastructure-Provided Context Filters For Efficient Fine-Grained Activity Sensing, Vigneshwaran Subbaraju, Sougata Sen, Archan Misra, Satyadip Chakraborty, Rajesh Krishna Balan Mar 2015

Research Collection School Of Computing and Information Systems

While mobile and wearable sensing can capture unique insights into fine-grained activities (such as gestures and limb-based actions) at an individual level, their energy overheads are still prohibitive enough to prevent them from being executed continuously. In this paper, we explore practical alternatives to addressing this challenge by examining how cheap infrastructure sensors or information sources (e.g., BLE beacons) can be harnessed together with such mobile/wearable sensors to provide an effective solution that reduces energy consumption without sacrificing accuracy. The key idea is that many fine-grained activities that we desire to capture are specific to certain location, movement, or background contexts: infrastructure …
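
The gating idea can be sketched in a few lines of Python: run the energy-expensive wearable classification only when a cheap infrastructure signal (here, a hypothetical BLE beacon RSSI threshold) indicates the relevant context. All names and thresholds below are illustrative, not the paper's system.

```python
RSSI_THRESHOLD = -70    # hypothetical "near the relevant beacon" cut-off, in dBm

def should_run_gesture_pipeline(beacon_rssi_dbm, is_context_beacon):
    """Cheap infrastructure check that gates the expensive wearable sensing:
    only sample and classify IMU data when the beacon that marks the relevant
    context is nearby."""
    return is_context_beacon and beacon_rssi_dbm >= RSSI_THRESHOLD

def sensing_loop(beacon_readings, classify_window):
    """beacon_readings: iterable of (is_context_beacon, rssi_dbm, imu_window).
    classify_window: the energy-expensive gesture classifier being gated."""
    results = []
    for is_ctx, rssi, window in beacon_readings:
        if should_run_gesture_pipeline(rssi, is_ctx):
            results.append(classify_window(window))   # run only in context
        else:
            results.append(None)                      # stay in low-power mode
    return results

# Toy usage with a stub classifier.
readings = [(True, -60, "w1"), (True, -85, "w2"), (False, -50, "w3")]
print(sensing_loop(readings, classify_window=lambda w: f"gesture({w})"))
```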


Gesture Based Home Automation For The Physically Disabled, Alexander Hugh Nelson May 2013

Graduate Theses and Dissertations

Paralysis and motor-impairments can greatly reduce the autonomy and quality of life of a patient while presenting a major recurring cost in home-healthcare. Augmented with a non-invasive wearable sensor system and home-automation equipment, the patient can regain a level of autonomy at a fraction of the cost of home nurses. A system which utilizes sensor fusion, low-power digital components, and smartphone cellular capabilities can extend the usefulness of such a system to allow greater adaptivity for patients with various needs. This thesis develops such a system as a Bluetooth enabled glove device which communicates with a remote web server to …