In conjunction with Dr. Moble Benedict (AVFL Lab – TAMU), Dr. Puneet Singla (CASS Lab – Penn State), and Dr. Randy Beard (MAGICC Lab – BYU), Dr. Valasek presented current updates on the System Identification project at the National Science Foundation (NSF) CAAMS Summer Industry Advisory Board Meeting.
The project, “Integration of System Theory with Machine Learning Tools for Data Driven System Identification,” aims to derive nonlinear dynamical models from high-fidelity flight simulations and flight experiments by employing a unique handshake between linear time-varying subspace methods and sparse approximation tools.
The center is a partnership between academia, industry, and government that conducts pre-competitive research in autonomous air mobility and sensing. Pictured (left to right) are Undergraduate Researcher Halle Vandersloot, PhD student Cassie-Kay McQuinn, Dr. Valasek, and TAMU AERO alum and VP of Engineering at VectorNav Dr. Jeremy Davis.
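To give a flavor of the sparse approximation idea mentioned above, here is a minimal sketch of sparse-regression system identification in the style of sequentially thresholded least squares. The function name, candidate library, threshold, and toy dynamics are all illustrative assumptions, not the project's actual method.

```python
import numpy as np

def sparse_identify(Theta, dXdt, threshold=0.05, iters=10):
    """Sequentially thresholded least squares (illustrative sketch).

    Theta : (m, p) library of candidate nonlinear terms evaluated on data
    dXdt  : (m, n) measured state derivatives
    Keeps only dominant terms, yielding a sparse dynamical model.
    """
    Xi, *_ = np.linalg.lstsq(Theta, dXdt, rcond=None)
    for _ in range(iters):
        small = np.abs(Xi) < threshold
        Xi[small] = 0.0
        # Refit each state equation using only the surviving terms
        for k in range(dXdt.shape[1]):
            big = ~small[:, k]
            if big.any():
                Xi[big, k], *_ = np.linalg.lstsq(Theta[:, big], dXdt[:, k],
                                                 rcond=None)
    return Xi

# Toy example: data generated exactly from dx/dt = -2x,
# with a candidate library of [x, x^2, x^3]
x = np.linspace(-1.0, 1.0, 50).reshape(-1, 1)
dxdt = -2.0 * x
Theta = np.hstack([x, x**2, x**3])
Xi = sparse_identify(Theta, dxdt)
```

On this toy data the regression recovers the single active term (coefficient ≈ −2 on x) and zeros out the spurious library entries, which is the essence of extracting interpretable nonlinear models from simulation or flight data.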








Approaches for teaching learning agents via human demonstrations have been widely studied and successfully applied to multiple domains. However, the majority of imitation learning work utilizes only behavioral information from the demonstrator, i.e., which actions were taken, and ignores other useful information. In particular, eye gaze information can give valuable insight into where the demonstrator is allocating visual attention, and holds the potential to improve agent performance and generalization. In this work, we propose Gaze Regularized Imitation Learning (GRIL), a novel context-aware imitation learning architecture that learns concurrently from both human demonstrations and eye gaze to solve tasks where visual attention provides important context. We apply GRIL to a visual navigation task in which an unmanned quadrotor is trained to search for and navigate to a target vehicle in a photo-realistic simulated environment. We show that GRIL outperforms several state-of-the-art gaze-based imitation learning algorithms, simultaneously learns to predict human visual attention, and generalizes to scenarios not present in the training data.
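The idea of learning concurrently from demonstrations and eye gaze can be sketched as a combined objective: a behavior-cloning term on actions plus a regularization term on predicted attention. This is a minimal illustrative sketch only; the loss forms, the KL choice, and the weight `lam` are assumptions, not GRIL's published formulation.

```python
import numpy as np

def bc_loss(pred_actions, demo_actions):
    """Behavior-cloning term: match the demonstrator's actions (MSE)."""
    return float(np.mean((pred_actions - demo_actions) ** 2))

def gaze_loss(pred_gaze, human_gaze, eps=1e-8):
    """Gaze term: KL divergence between human and predicted attention maps."""
    p = human_gaze / human_gaze.sum()
    q = pred_gaze / pred_gaze.sum()
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

def gaze_regularized_loss(pred_actions, demo_actions,
                          pred_gaze, human_gaze, lam=0.5):
    """Combined objective: imitate actions while predicting visual attention.

    `lam` (assumed here) trades off action imitation against gaze prediction.
    """
    return bc_loss(pred_actions, demo_actions) + lam * gaze_loss(pred_gaze,
                                                                 human_gaze)

# Toy check: perfect action and gaze predictions give (near-)zero loss
actions = np.array([[0.1, -0.3], [0.4, 0.2]])
gaze = np.ones((8, 8))  # uniform attention map
total = gaze_regularized_loss(actions, actions, gaze, gaze)
```

In a real architecture both terms would share a visual encoder, so the gaze term shapes the features used for action prediction rather than being an independent head.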
selected by the AERO Graduate Program Committee with an award of $1,000.







