Open Access. Powered by Scholars. Published by Universities.®

Signal Processing Commons

Articles 1 - 5 of 5

Full-Text Articles in Signal Processing

Remote Sensing Using P-Band And S-Band Signals Of Opportunity, Kadir Efecik, Benjamin R. Nold, James L. Garrison Aug 2018

The Summer Undergraduate Research Fellowship (SURF) Symposium

Measurement of soil moisture, especially root zone soil moisture, is important in agriculture, meteorology, and hydrology. Root zone soil moisture refers to the moisture in roughly the first meter of soil. The active and passive remote sensing methods used today, which operate at L-band (1-2 GHz), are physically limited to a sensing depth of about 5 cm or less. To remotely sense soil moisture at greater depths, the operating frequency must be lowered. Lower frequencies cannot be used in active spaceborne instruments because they would require larger antennas and face radio frequency interference (RFI) and frequency spectrum allocation constraints. Ground-based passive remote sensing using …
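
As background for the frequency argument above (this relation is standard remote-sensing theory, not taken from the abstract): for a low-loss dielectric such as moist soil with complex permittivity \varepsilon' - j\varepsilon'', the one-way power penetration depth of a plane wave is approximately

    \delta_p \approx \frac{\lambda_0 \sqrt{\varepsilon'}}{2\pi\,\varepsilon''} = \frac{c\,\sqrt{\varepsilon'}}{2\pi f\,\varepsilon''}

so the sensing depth scales roughly as 1/f, which is why moving below L-band is attractive for reaching the root zone.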


Deep Neural Network Architectures For Modulation Classification Using Principal Component Analysis, Sharan Ramjee, Shengtai Ju, Diyu Yang, Aly El Gamal Aug 2018

The Summer Undergraduate Research Fellowship (SURF) Symposium

In this work, we investigate the application of Principal Component Analysis to the task of wireless signal modulation recognition using deep neural network architectures. Sampling signals at the Nyquist rate, which is often very high, requires a large amount of energy and storage to collect and keep the samples. Moreover, the time needed to train neural networks for modulation classification is long because of the large number of samples. Both problems can be drastically reduced using Principal Component Analysis, a technique that reduces the dimensionality, or number of features, of the samples …
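
As a minimal sketch of the dimensionality-reduction step described above (not the authors' pipeline; the array shapes, component count, and use of scikit-learn are assumptions for illustration):

    # Sketch: PCA to shrink raw signal samples before training a classifier.
    # Shapes and n_components are illustrative assumptions, not the paper's settings.
    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(0)
    X_train = rng.standard_normal((1000, 256))    # 1000 signals, 256 raw samples each
    X_test = rng.standard_normal((200, 256))

    pca = PCA(n_components=32)                    # keep 32 principal components
    X_train_red = pca.fit_transform(X_train)      # fit the projection on training data only
    X_test_red = pca.transform(X_test)            # reuse the same projection at test time

    print(X_train_red.shape)                      # (1000, 32): far fewer features per signal

Training the modulation classifier on the reduced features is what cuts both the storage needed for the samples and the network training time.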


Perception Of 3D Symmetrical And Near-Symmetrical Shapes, Vijai Jayadevan, Aaron Michaux, Edward Delp, Zygmunt Pizlo May 2017

MODVIS Workshop

No abstract provided.


3-D Shape Recovery From A Single Camera Image, Vijai Jayadevan, Aaron Michaux, Edward Delp, Zygmunt Pizlo May 2016

MODVIS Workshop

3-D shape recovery is an ill-posed inverse problem which must be solved by using a priori constraints. We use symmetry and planarity constraints to recover 3-D shapes from a single image. Once we assume that the object to be reconstructed is symmetric, all that is left to do is to estimate the plane of symmetry and establish the symmetry correspondence between the various parts of the object. The edge map of the image of an object serves as a good representation of its 2-D shape and establishing symmetry correspondence means identifying pairs of symmetric curves in the edge map. The …
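
To make the symmetry step concrete, here is a minimal sketch (not the authors' method; the point set, candidate plane, and scoring rule are invented for illustration) of reflecting points across a candidate symmetry plane and scoring how well the reflection matches the original set:

    # Sketch: score a candidate symmetry plane by reflecting points across it
    # and measuring how close each reflected point lands to the original set.
    import numpy as np

    def reflect(points, n, d):
        """Reflect 3-D points about the plane n . x = d, with n a unit normal."""
        dist = points @ n - d                     # signed distance to the plane
        return points - 2.0 * dist[:, None] * n

    def symmetry_score(points, n, d):
        mirrored = reflect(points, n, d)
        diffs = mirrored[:, None, :] - points[None, :, :]
        nearest = np.sqrt((diffs ** 2).sum(-1)).min(axis=1)
        return nearest.mean()                     # lower means a better symmetry plane

    pts = np.array([[1.0, 0.0, 0.0], [-1.0, 0.0, 0.0],
                    [0.0, 1.0, 2.0], [0.0, -1.0, 2.0]])
    n = np.array([0.0, 1.0, 0.0])                 # candidate plane y = 0
    print(symmetry_score(pts, n, 0.0))            # ~0 for a perfectly symmetric set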


Model-Free Method Of Reinforcement Learning For Visual Tasks, Jeff S. Soldate, Jonghoon Jin, Eugenio Culurciello Aug 2014

The Summer Undergraduate Research Fellowship (SURF) Symposium

Neural networks have seen success in recent years in applications requiring high-level intelligence, such as categorization and assessment. In this work, we present a neural network model that learns control policies using reinforcement learning. It takes a raw pixel representation of the current state and outputs a neural-network approximation of the Q-value function, which gives the expected reward for each possible state-action pair. The action is chosen using an epsilon-greedy policy: the action with the highest expected reward is selected, with a small chance of a random action. We used gradient descent to update the weights and biases …
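
A minimal sketch of the epsilon-greedy selection step described above (the linear Q-model, state size, and epsilon value are stand-ins for illustration, not the authors' network or settings):

    # Sketch: epsilon-greedy action selection from Q-value estimates.
    import numpy as np

    rng = np.random.default_rng(0)
    n_actions = 4
    W = rng.standard_normal((n_actions, 84 * 84))  # stand-in for a trained Q-network
    state = rng.random(84 * 84)                    # flattened raw-pixel state

    def q_values(s):
        return W @ s                               # expected reward for each action

    def epsilon_greedy(s, epsilon=0.05):
        if rng.random() < epsilon:
            return int(rng.integers(n_actions))    # small chance of a random action
        return int(np.argmax(q_values(s)))         # otherwise pick the highest expected reward

    print(epsilon_greedy(state))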