Open Access. Powered by Scholars. Published by Universities.®
Physical Sciences and Mathematics Commons™
Articles 1 - 2 of 2
Full-Text Articles in Physical Sciences and Mathematics
Unsupervised Multivariate Time Series Clustering, Md Monibor Rahman, Lasitha Vidyaratne, Alex Glandon, Khan Iftekharuddin
College of Engineering & Technology (Batten) Posters
Clustering is widely used in unsupervised machine learning to partition a given set of data into non-overlapping groups. Many real-world applications require processing more complex multivariate time series data characterized by more than one dependent variable. A few works in the literature report multivariate classification using Shapelet learning. However, the clustering of multivariate time series signals using Shapelet learning has not been explored yet. Shapelet learning is the process of discovering those Shapelets that contain the most informative features of the time series signal. Discovering suitable Shapelets from many candidate Shapelets has been broadly studied for classification and clustering of univariate time …
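The core operation behind Shapelet-based methods is the distance between a Shapelet and a time series: the minimum distance over all equal-length subsequences. A minimal sketch of this Shapelet transform, using hypothetical candidate Shapelets and toy series (not the authors' actual method or data), might look like:

```python
import numpy as np

def shapelet_dist(shapelet, series):
    """Minimum Euclidean distance between a shapelet and any
    equal-length sliding-window subsequence of the series."""
    L = len(shapelet)
    return min(
        float(np.linalg.norm(series[i:i + L] - shapelet))
        for i in range(len(series) - L + 1)
    )

def shapelet_transform(series_list, shapelets):
    """Map each series to a feature vector of its distances to
    each candidate shapelet; clustering (e.g. k-means) can then
    run on these feature vectors instead of the raw series."""
    return np.array(
        [[shapelet_dist(s, x) for s in shapelets] for x in series_list]
    )

# Toy example: one shapelet that appears exactly in the first series.
shapelets = [np.array([1.0, 2.0, 3.0])]
series_list = [np.array([0.0, 1.0, 2.0, 3.0, 0.0]),
               np.array([5.0, 5.0, 5.0, 5.0, 5.0])]
features = shapelet_transform(series_list, shapelets)
```

In the multivariate setting described above, the distance would additionally aggregate over the dependent variables (channels); this univariate sketch only illustrates the transform step.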
Data-Limited Domain Adaptation And Transfer Learning For Learning Latent Expression Labels Of Child Facial Expression Images, Megan Witherow, Winston Shields, Manar Samad, Khan Iftekharuddin
College of Engineering & Technology (Batten) Posters
While state-of-the-art deep learning models have demonstrated success in adult facial expression classification by leveraging large, labeled datasets, labeled data for child facial expression classification is limited. Due to differences in facial morphology and development between child and adult faces, deep learning models trained on adult data do not generalize well to child data. Recent deep domain adaptation approaches have improved the generalizability of models trained on a source domain to a target domain with few labeled samples. We propose that incorporating steps of deep transfer learning, e.g., weight initialization from the pre-trained source model and freezing model layers, may …
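The two transfer learning steps named in the abstract, initializing the target model from pre-trained source weights and freezing selected layers, can be sketched framework-free with plain NumPy. The layer names and shapes here are hypothetical, not the authors' architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical weights of a model pre-trained on the source
# (adult) domain: two dense layers.
source_weights = {
    "layer1": rng.normal(size=(4, 8)),
    "layer2": rng.normal(size=(8, 3)),
}

# Step 1: weight initialization -- the target (child) model
# starts from copies of the pre-trained source weights.
target_weights = {name: w.copy() for name, w in source_weights.items()}

# Step 2: freeze early layers -- their parameters are excluded
# from gradient updates during fine-tuning on the target domain.
frozen = {"layer1"}

def sgd_step(weights, grads, lr=0.01):
    """One SGD update that skips frozen layers."""
    for name, g in grads.items():
        if name in frozen:
            continue  # frozen layer keeps its pre-trained weights
        weights[name] -= lr * g

# Dummy gradients standing in for backprop on target-domain data.
grads = {name: np.ones_like(w) for name, w in target_weights.items()}
sgd_step(target_weights, grads)
```

After the update, `layer1` still equals the pre-trained source weights while `layer2` has moved; in a real framework the same effect comes from excluding frozen parameters from the optimizer (e.g. setting `requires_grad = False` in PyTorch).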