
Texas A&M University College of Engineering

Research

Our research is focused on bridging the scientific gaps between traditional computer science topics and aerospace engineering topics, while achieving a high degree of closure between theory and experiment.  We focus on machine learning and multi-agent systems, intelligent autonomous control, nonlinear control theory, vision-based navigation systems, fault-tolerant adaptive control, and cockpit systems and displays.  What sets our work apart is a unique systems approach and an ability to seamlessly integrate disciplines such as dynamics & control, artificial intelligence, and bio-inspiration.  Our body of work integrates these disciplines, creating a lasting impact on technical communities from smart materials to General Aviation flight safety to Unmanned Air Systems (UAS) to guidance, navigation & control theory.  Our research has been funded by AFOSR, ARO, ONR, AFRL, ARL, AFC, NSF, NASA, FAA, and industry.

Autonomous and Nonlinear Control of Cyber-Physical Air, Space and Ground Systems

Vision Based Sensors and Navigation Systems

Cybersecurity for Air and Space Vehicles

Air Vehicle Control and Management

Space Vehicle Control and Management

Advanced Cockpit/UAS Systems and Displays

Control of Bio-Nano Materials and Structures

C-UAS: Online Near Real-Time System Identification of UAS

National Science Foundation: Center for Unmanned Aircraft Systems

Principal Investigator

This project is investigating an online near real-time system identification system for the onboard generation of locally linear models of Small Unmanned Air Systems. Angle-of-attack and sideslip angle are measured rather than estimated, and automated control surface excitation inputs consisting of doublets, triplets, and frequency sweeps are implemented and used to assure consistency in the excitation and to eliminate errors introduced by manually applied user inputs. A real-time vehicle monitoring system is used to provide a human-in-the-loop model update capability, with a goal of ensuring the safety of the vehicle. A combined lateral/directional and longitudinal excitation is developed and demonstrated for identifying a full dynamic system and representing it in state-space form. The methodology is demonstrated with flight tests of a fixed-wing Small Unmanned Air System, with locally linear models generated onboard the vehicle during flight. The objective of this work is to show that the system is capable of reliably and repeatedly generating accurate locally linear models that are suitable for real-time flight control design using model based control techniques and post-flight modal analysis.
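The two core pieces of the identification step described above, an automated excitation input and an onboard least-squares fit of a locally linear state-space model, can be sketched in Python. The doublet timing, amplitude, and model structure below are illustrative placeholders, not the project's actual flight-test values:

```python
import numpy as np

def doublet(t, start=1.0, width=0.5, amplitude=np.deg2rad(5)):
    """Automated doublet: +amplitude then -amplitude, each held for `width` s."""
    if start <= t < start + width:
        return amplitude
    if start + width <= t < start + 2 * width:
        return -amplitude
    return 0.0

def fit_local_linear_model(x, u):
    """Least-squares fit of x[k+1] = A x[k] + B u[k] from logged flight data.

    x: (N, n) state history, u: (N, m) input history.
    Returns the discrete-time A (n x n) and B (n x m) matrices.
    """
    X, U = x[:-1], u[:-1]          # regressors
    Y = x[1:]                      # one-step-ahead targets
    Z = np.hstack([X, U])          # (N-1, n+m)
    theta, *_ = np.linalg.lstsq(Z, Y, rcond=None)
    n = x.shape[1]
    A, B = theta[:n].T, theta[n:].T
    return A, B
```

In practice the regression would run over a sliding window of recent samples so the linear model stays local to the current flight condition.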

 

Model Identification:

Flight Test Instrumentation:

Online Identification Procedure:

Combining Visible and Infrared Spectrum Imagery using Machine Learning for Small Unmanned Aerial System Detection

Co-Principal Investigator

1 May 2019 – 30 April 2020

Long-wave Infrared and Visible Spectrum sensors

This research proposes combining the advantages of long-wave infrared (LWIR) and visible spectrum sensors using machine learning for vision-based detection of small unmanned air systems (sUAS). Utilizing the heightened background contrast from the LWIR sensor, combined and synchronized with the higher resolution of the visible spectrum sensor, a deep learning model was trained to detect sUAS in previously difficult environments. More specifically, the approach demonstrated effective detection of multiple sUAS flying above and below the treeline, in the presence of birds, and in glare from the sun. With a network of these small and affordable sensors, one can accurately estimate the 3D position of an sUAS, which could then be used for elimination, or for further localization by narrower field-of-view sensors such as a fire-control radar (FCR).
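One common way to realize this kind of sensor combination is early fusion: upsampling the registered LWIR frame to the visible frame's resolution and stacking it as an extra input channel for the detection network. The sketch below assumes pre-registered, time-synchronized frames; the paper's actual fusion architecture may differ:

```python
import numpy as np

def early_fusion(visible_rgb, lwir):
    """Stack a registered LWIR frame onto an RGB frame as a 4th channel.

    visible_rgb: (H, W, 3) float32 in [0, 1]
    lwir:        (h, w) float32 radiometric frame (coarser resolution)
    Returns an (H, W, 4) tensor suitable as input to a detection network.
    """
    H, W, _ = visible_rgb.shape
    # Nearest-neighbour upsample of the coarser LWIR frame (assumes the
    # two sensors are already spatially registered and time-synchronized).
    rows = np.arange(H) * lwir.shape[0] // H
    cols = np.arange(W) * lwir.shape[1] // W
    lwir_up = lwir[rows][:, cols]
    # Normalize LWIR intensities to [0, 1] to match the visible channels.
    lo, hi = lwir_up.min(), lwir_up.max()
    lwir_up = (lwir_up - lo) / (hi - lo + 1e-8)
    return np.concatenate([visible_rgb, lwir_up[..., None]], axis=-1).astype(np.float32)
```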

A summary video of the system can be found here, along with videos with all predictions for the single-vehicle case and multiple-vehicle case.

The paper was presented at the 2020 SPIE Defense + Commercial Sensing Conference Digital Forum, held 27 April – 8 May 2020. The paper is available at the SPIE Digital Library, and the preprint version is available at arXiv.

Integrating Behavior Cloning and Reinforcement Learning for Improved Performance in Dense and Sparse Reward Environments

Army Research Laboratory

Co-Principal Investigator

9 August 2019 – 8 August 2024

This work investigates how to efficiently transition and update policies, trained initially with demonstrations, using off-policy actor-critic reinforcement learning. In this work we propose the Cycle-of-Learning (CoL) framework that uses an actor-critic architecture with a loss function that combines behavior cloning and 1-step Q-learning losses with an off-policy pre-training step from human demonstrations. This enables transition from behavior cloning to reinforcement learning without performance degradation and improves reinforcement learning in terms of overall performance and training time. Additionally, we carefully study the composition of these combined losses and their impact on overall policy learning and show that our approach outperforms state-of-the-art techniques for combining behavior cloning and reinforcement learning for both dense and sparse reward scenarios.

The Cycle-of-Learning (CoL) framework is a method for transitioning behavior cloning (BC) policies to reinforcement learning (RL) by utilizing an actor-critic architecture with a combined BC+RL loss function and a pre-training phase, for continuous state-action spaces in dense- and sparse-reward environments. This combined BC+RL loss function consists of the following components: an expert behavior cloning loss that bounds the actor's actions to previous human trajectories, a 1-step return Q-learning loss to propagate the values of human trajectories to previous states, the actor loss, and an L2 regularization loss on the actor and critic to stabilize performance and prevent over-fitting during training. The implementation of each loss component can be seen in our paper.
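The structure of that combined loss can be sketched as follows. Here `actor`, `critic`, and the weights are stand-in placeholders; the actual loss weights and network details are given in the paper and its supplemental material, and the L2 weight-regularization term is only noted in a comment because this parameter-free sketch has no network weights to regularize:

```python
import numpy as np

def col_loss(actor, critic, batch, weights=(1.0, 1.0, 1.0), gamma=0.99):
    """Cycle-of-Learning combined loss over a batch of expert transitions.

    actor(s) -> actions; critic(s, a) -> Q-values. `weights` are illustrative
    placeholders for the behavior-cloning, Q-learning, and actor terms.
    """
    l_bc, l_q, l_actor = weights
    s, a, r, s2, done = (batch[k] for k in ("s", "a", "r", "s2", "done"))
    # Behavior-cloning loss: bound the actor's actions to the human trajectories.
    bc_loss = np.mean((actor(s) - a) ** 2)
    # 1-step return Q-learning loss: propagate demonstrated values to earlier states.
    target = r + gamma * (1.0 - done) * critic(s2, actor(s2))
    q_loss = np.mean((critic(s, a) - target) ** 2)
    # Actor loss: prefer actions the critic scores highly.
    actor_loss = -np.mean(critic(s, actor(s)))
    # (An L2 regularization term on the actor/critic weights is added in practice.)
    return l_bc * bc_loss + l_q * q_loss + l_actor * actor_loss
```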

Our approach starts by collecting contiguous trajectories from expert policies (in this case, humans) and stores the current and subsequent state-action pairs, the reward received, and the task completion signal in a permanent expert memory buffer. We validate our approach in three environments with continuous observation and action spaces: LunarLanderContinuous-v2 (dense and sparse reward cases) and a custom quadrotor landing task with wind disturbance implemented using Microsoft AirSim.

The paper documenting this work, “Integrating Behavior Cloning and Reinforcement Learning for Improved Performance in Dense and Sparse Reward Environments,” is available at the official AAMAS 2020 proceedings, together with the supplemental material detailing the training hyperparameters. A summary video of the proposed method can be found here, along with the project page that accompanied the paper submission.

Working with me on this project are:

Graduate Students:

    -Vinicius Goecks, Ph.D. AERO

    -Ritwik Bera, Ph.D. AERO

 

Acknowledgments

Research was sponsored by the U.S. Army Research Laboratory and was accomplished under Cooperative Agreement Number W911NF-18-2-0134. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Army Research Laboratory or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation herein.

Interferometric Vision and Optomechanical Accelerometer Sensing for Navigation, Guidance, and Adaptive Control of Hypersonic Vehicle Platforms

DOD-Navy-Naval Surface Warfare Center

Co-Principal Investigator

9 September 2020 – 8 September 2021

Total award $500,000

This research will develop and analyze methods to support the positioning and navigation of hypersonic vehicle platforms in contested environments, and develop and analyze the effectiveness of an adaptive control and observer framework to address elastic-body nonlinear hypersonic vehicle dynamics.  Furthermore, this research effort will include some development of suitable curricula for developing the next generation hypersonic workforce.

This research will be supported by recent advances in optomechanical accelerometer technologies and interferometric vision technologies to provide alternative positioning and navigation solutions for hypersonic vehicles.  Optomechanical accelerometers are well suited for onboard adaptive guidance and control applications because these sensing technologies are immune to the high-frequency aerodynamic noise disturbances that usually corrupt capacitance-based accelerometers: the mechanical characteristics of the device act as a passive low-pass filter.  Additional advantages include the thermal stability of the materials (glass), integrated optics, and the small form factor.  The research will also be supported by a nonlinear dynamic inversion adaptive control architecture with a control allocation scheme.  These ideas avoid the use of gain scheduling and are robust with respect to parametric uncertainty and slowly time-varying parameters.  Additionally, nonlinear observers and bounding function methods will be used to prove the stability of the control laws.
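The passive low-pass behavior mentioned above follows directly from modeling the proof mass and flexure as a second-order mass-spring-damper. A minimal sketch, using an assumed, illustrative natural frequency and damping ratio rather than the device's actual specifications:

```python
import numpy as np

def accel_response_mag(f_input_hz, f_natural_hz=1.5e3, zeta=0.7):
    """Magnitude of a 2nd-order proof-mass accelerometer frequency response.

    The proof-mass/flexure mechanics behave as a mass-spring-damper, i.e. a
    passive 2nd-order low-pass filter:
        |H(jw)| = wn^2 / |(jw)^2 + 2*zeta*wn*(jw) + wn^2|
    f_natural_hz and zeta are illustrative values, not the device's specs.
    """
    w = 2 * np.pi * f_input_hz
    wn = 2 * np.pi * f_natural_hz
    return wn**2 / np.abs((1j * w) ** 2 + 2 * zeta * wn * (1j * w) + wn**2)
```

Inputs well below the mechanical natural frequency pass with near-unity gain, while aerodynamic noise far above it is attenuated by roughly (f_n/f)^2.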

This research will leverage the past and current autonomy and adaptive flight controls research performed by Dr Valasek; the smart sensing technologies and computational vision research performed by Dr Majji; and the technical expertise in precision optical metrology, laser interferometry, and novel optomechanical inertial sensors contributed by Dr Guzman.  The outcomes will include an optimal set of design parameters for a sensing system for onboard adaptive guidance and control applications for hypersonic flight vehicles.  This research effort will also leverage the past experience of Dr Hurtado, who, as Associate Dean in the College of Engineering, developed several undergraduate-level, graduate-level, and extra-curricular programs and led them through the Texas A&M University approval process.

This research will support the Department of Defense efforts on enabling hypersonic vehicles to provide responsive, long-range, strike options against distant, defended, and time-critical threats.

Working with me on this project are:

Graduate Students:

    -Kameron Eves , Ph.D. AERO

Malware In The Loop: Investigating Takeover of UAS via Sensor Data Falsification

To date, there have been a number of simulated, experimental, and real-world cyber-attacks on unmanned aerial systems (UAS). These attacks have been carried out by different actors including researchers, militaries, rogue hackers, and unidentified sources [1]. Successful cyber-intrusions on aircraft are often considered extremely dangerous because an attacker could, hypothetically, control the vehicle once inside. However, few such attempts have been made, and successful attacks have achieved only partial and moderate control of the vehicle at best.

What we have learned from these attacks is that installing malware on a UAS is effectively a solved problem. While there are known safeguards to prevent the installation of malware onto a system, it is mostly a matter of operational security at this point. Furthermore, while a cyber-attack on a UAS could in theory take full control of the vehicle, no such attempt has been made, and the feasibility of such an attack remains unknown. In short, we do not know whether it is possible to hijack a UAS via cyber-attack.

This research will investigate the possibility of taking full control of a UAS via a malware program installed on the flight computer, and will identify safeguards to defend against such an attack. The malware will use a novel algorithm designed to manipulate the control of the vehicle by intercepting and modifying its sensor data, altering the data to fool the controllers into performing the actions desired by the malware. Furthermore, this attack could potentially be carried out remotely via sensor spoofing technology.

The objectives of this research are:

  1. Investigate the possibility of controlling a UAS via installed malware by altering the sensor data.
  2. Assess the feasibility and seriousness of such an attack.
  3. Investigate the possibility of performing the attack remotely with sensor spoofing.
  4. Develop a method of defense against this attack.

Methods

Installing malware onto a UAS is a solved problem and is not the focus of this research. Instead, for the purposes of this research, we will assume that the malware already has access to the system. We can break the investigation into research on both an onboard and a remote attack. Additionally, sensor data can be broken into two categories: structured and unstructured. Structured data, such as GPS location or bank angle, is much better defined and provides a simpler model to work with than unstructured data, such as image data.

Thus, the first phase will be to perform an onboard attack on a controller that runs purely on structured data. In this phase, traditional control techniques will likely provide an effective solution.
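To make the structured-data case concrete, consider a toy altitude-hold loop, a hypothetical illustration of the bias-injection idea rather than the project's actual algorithm:

```python
def attacked_reading(true_altitude_m, commanded_descent_m):
    """Malware-in-the-loop falsification of a structured sensor channel.

    The flight controller regulates the *reported* altitude to its setpoint,
    so adding a bias equal to the desired descent makes the controller fly
    the vehicle to (setpoint - bias) while believing it is on target.
    Illustrative sketch only, not the project's actual algorithm.
    """
    return true_altitude_m + commanded_descent_m

def p_controller(reported_m, setpoint_m=100.0, kp=0.5):
    """Simple proportional climb-rate controller acting on the reported altitude."""
    return kp * (setpoint_m - reported_m)
```

Because the controller only ever sees the falsified reading, it steadily flies the vehicle to the attacker's chosen altitude while its own error signal reads zero.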

In the second phase, we can begin to investigate onboard attacks on unstructured data. Often, good control performance with unstructured data relies on machine learning and optimal control techniques which are likely good candidates for this problem.

Finally, in the third phase we can begin to look at remote attacks. This remote attack will likely remain theoretical, reliant on the development of new spoofing techniques. However, the algorithm should still be evaluated to assess its ability to perform despite the error introduced by estimation and spoofing tools.

Working with me on this project are:

Graduate Students:

    -Garrett Jares , Ph.D. AERO

This material is based upon work supported by the National Science Foundation Graduate Research Fellowship Program under Grant No. DGE:1746932. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.

Novel Multiple Time Scale Adaptive Control for Uncertain Nonlinear Dynamical Systems

Office of Naval Research

Principal Investigator and Technical Lead

1 May 2023 – 30 April 2026

Total award $597,468

Many naval aerospace systems such as unmanned air systems (UAS), high performance aircraft, and satellites are multiple time scale (MTS) systems. MTS systems are systems with some states that evolve quickly and some states that evolve slowly. These systems can have coupled fast and slow modes which occur simultaneously. For example, in aircraft the short period mode is fast and the phugoid mode is slow. MTS systems are particularly interesting from a controls perspective because the time scale separation in the plant can cause degraded performance or even instability under traditional control methods. Accounting for the time scales can remedy this problem. For example, an MTS control technique demonstrated significantly reduced rise times over traditional Nonlinear Dynamic Inversion (NDI). Similarly, traditional adaptive control has been demonstrated to have reduced performance on MTS systems. On the other hand, traditional control techniques that are specifically designed for MTS systems cannot account for systems with model uncertainties. Thus, a method of MTS control for uncertain systems is needed.
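The fast/slow structure described above is usually written in singular-perturbation form, with a small parameter ε multiplying the fast state's derivative. The toy linear system below (illustrative dynamics, not a specific aircraft model) shows the fast state collapsing onto its slow manifold almost immediately, after which only the slow dynamics remain:

```python
import numpy as np

def simulate_mts(eps=0.01, dt=1e-4, t_final=1.0, x0=1.0, z0=0.0):
    """Euler simulation of a standard singular-perturbation form:

        x_dot     = -2*x + z    (slow state, e.g. phugoid-like)
        eps*z_dot = -z + x      (fast state, e.g. short-period-like)

    For small eps, z collapses onto its quasi-steady manifold z ~ x within a
    few multiples of eps, after which the slow dynamics reduce to x_dot ~ -x.
    Returns an (N, 2) history of (x, z).
    """
    n = int(t_final / dt)
    x, z = x0, z0
    hist = np.empty((n, 2))
    for k in range(n):
        hist[k] = x, z
        x_dot = -2.0 * x + z
        z_dot = (-z + x) / eps
        x, z = x + dt * x_dot, z + dt * z_dot
    return hist
```

Note the step size must resolve the fast mode (dt well below 2·eps here), which is exactly the stiffness that makes naive control and simulation of MTS systems expensive.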

A novel methodology called [K]Control of Adaptive MTS Systems (KAMS) is developed which expands the class of dynamical systems to which MTS control and adaptive control can apply. While other techniques use elements of both adaptive control and MTS control, prior research stops short of fully and rigorously combining them. KAMS is a significant improvement over prior methods and provides insight into the physics of the system. It is capable of controlling systems with model uncertainty, unlike traditional MTS control, and is robust to systems with unstable zeros, unlike traditional adaptive control and feedback linearization. Further, KAMS is expected to provide the following benefits:

  • Method can be generalized.
  • Underlying physics inherent in the time scale separation are evident in the control law. This allows for improved analysis.
  • Does not suffer from the curse of dimensionality.
  • Derivation and implementation are simplified.
  • KAMS is agnostic to the type of adaptive control and MTS control used. This could allow the new technique to take advantage of the most recent research.
  • Improves performance for some systems by reducing rise time and overshoot compared to prior methods.
  • Improves robustness to changes in time scale separation.

Figure: KAMS Control Loop Block Diagram

KAMS has low technical maturity but high technical potential. The research plan is to investigate KAMS so that it becomes more mature and closer to implementation on naval systems. This requires a theoretical understanding of the capabilities of KAMS and its limitations. In addition to investigating theoretical research questions, hardware validation of the resulting theory will be performed with a flight test evaluation campaign using small unmanned air systems (UAS), both fixed-wing and rotorcraft, operating in a challenging environment.

TECHNICAL OBJECTIVES

  1. Evaluate the performance of KAMS compared to other traditional control methods
  2. Identify systems which benefit from KAMS
  3. Evaluate KAMS’s performance on naval systems
  4. Generalize KAMS for multi-input multi-output (MIMO), uncertain, nonlinear, nonstandard, adaptive MTS systems
  5. Identify the stable range for the time scale separation parameter
  6. Identify how KAMS changes when adaptive control is applied to the slow control, the fast control, or both

Working with me on this program are Research Assistants:

– Ph.D. student Christopher Leshikar (B. S. Aerospace Engineering ‘20, Texas A&M University)

– M.S. student Jillian Bennett (B. S. Aerospace Engineering ‘23, Texas A&M University)

– M.S. student Noah Luna (B. S. Aerospace Engineering ‘23, United States Air Force Academy)

Phase I IUCRC Texas A&M University: Center for Unmanned Air Systems

National Science Foundation

Principal Investigator

15 March 2020 – 28 February 2022

Total award $200,000

Vision Enabled Markerless Landing on a Moving Ground Vehicle

The cooperation of autonomous air and ground assets is a heavily researched topic, and enables autonomy that air and ground assets could not accomplish alone. This cooperation often requires an unmanned air system (UAS) to deploy from and return to the ground agent after completing a task. This work aims to develop vision-based perception methods that enable a UAS to autonomously land on a moving ground target, assuming communication with the ground target is not available. To alleviate the infrastructure requirement of a ground station computer to aid in computational power, this work aims to complete all necessary computations onboard the UAS. The UAS must detect the ground target within an image, estimate the target's position relative to the UAS, and compute the controls necessary for a successful landing.

This image shows a UAS detecting a ground target in a simulation environment.

Key technical areas associated with this work are:

1. Using computer vision methods that are computationally lightweight, so that detection and localization occur at a high enough update frequency to be usable by the landing controller.

2. Employing various computer vision methods to detect the ground target in the image and then estimate the target's position.

3. Designing the landing controller as an outer loop around the existing controller used to fly the UAS, translating the position of the ground target into commands that can be fed to the existing controller.
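Under a nadir-camera, flat-ground assumption, the last step reduces to a pinhole projection plus a proportional gain feeding the existing velocity controller. The camera parameters and gain below are illustrative placeholders, not the project's actual values:

```python
import numpy as np

def landing_velocity_command(target_px, image_size=(640, 480),
                             hfov_deg=90.0, altitude_m=10.0, kp=0.8):
    """Outer-loop landing guidance: pixel offset -> horizontal velocity command.

    Projects the detected target's pixel offset from the image center into a
    metric ground offset using a flat-ground, downward-looking camera model,
    then applies a proportional gain. Returns (vx_east, vy_north) commands
    for the vehicle's existing inner-loop velocity controller.
    """
    w, h = image_size
    fx = (w / 2) / np.tan(np.deg2rad(hfov_deg) / 2)   # focal length, pixels
    dx_px = target_px[0] - w / 2
    dy_px = target_px[1] - h / 2
    # Pinhole projection: ground offset = altitude * pixel offset / focal length
    east_m = altitude_m * dx_px / fx
    north_m = -altitude_m * dy_px / fx   # image y grows downward
    return kp * east_m, kp * north_m
```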

This work is funded through an ORAU Journeyman Fellowship from the Army Research Lab in Aberdeen, Maryland.

Working with me on this project is:

Graduate Student:

    -Blake Krpec, M.S. AERO

 

Robust Threat Detection for Ground Combat Vehicles with Multi-Domain Surveillance in Hostile Environments

Army Research Laboratory

Co-Principal Investigator

19 March 2019 – 18 March 2023

Total award $2,499,458

This project investigates complex battlefield operations through vehicle automation, coordination, multi-domain surveillance, and intelligent decision support systems.  In the complex, emergent scenarios we address, heterogeneous ground and aerial vehicles perform coordinated maneuvers to achieve common goals as a team, and each vehicle normally has inherently significant perception, autonomy, and intelligence. These scenarios therefore differ from typical swarm robotics, where the individual elements of the swarm are normally simpler while cooperatively serving a larger purpose.  This research will develop a decision support system that enables the commander of the operations to effectively coordinate a fleet of vehicles toward successful completion of various missions.

A critical requirement for effective coordination is rich, accurate, and shared situational awareness. This is achieved by using a variety of sensors on both air and ground vehicles and fusing the information, an approach often denoted Multi-Domain Surveillance (MDS). Vision-based cameras are popular sensors for MDS because of their cost, their ability to observe passively, and the richness of the information they can provide.  However, using vision-based cameras in an MDS still poses significant fundamental challenges that we strive to address in this research.  While significant advances have been made in vision processing, through both image processing and machine learning algorithms, many methods perform well only under “nominal” environmental conditions. In degraded visual environments (DVEs), such as low lighting, fog, or rain, vision algorithms perform poorly, significantly reducing the effectiveness of an MDS.
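As a concrete baseline for the low-lighting part of the DVE problem, classical histogram equalization stretches a low-contrast frame's intensity range before it reaches a detector; the learned approaches pursued in this project aim to go well beyond such fixed preprocessing. A minimal sketch:

```python
import numpy as np

def equalize(gray):
    """Global histogram equalization, a classic first step for low-light frames.

    gray: (H, W) uint8 image. Maps intensities through the normalized
    cumulative histogram so the output uses the full 0-255 range.
    Only a fixed baseline, not the project's robust-vision approach.
    """
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = np.cumsum(hist).astype(np.float64)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min() + 1e-12)
    lut = np.round(255 * cdf).astype(np.uint8)
    return lut[gray]
```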

The Key Challenges addressed in this project are:

Challenge 1: Robust Vision Algorithms in the presence of Degraded Visual Environments

Challenge 2: Effective MDS through optimal location of sensors (and aerial vehicles with sensors)

Challenge 3: Collation and Abstraction of information from diverse sources with varying levels of fidelity and timeliness to generate a coherent and succinct “Situational Awareness”

Challenge 4: Use of semantic situational awareness to identify suspicious activities and malicious threats

Tightly Integrated Navigation and Guidance for Multiple Autonomous Agents

Sandia National Laboratory

1 October 2019 – 30 September 2022

Total award $300,000

Reliable, autonomous navigation is a highly desirable capability that is typically viewed through the lens of sensor system development. However, methods for aiding both global and local (i.e., relative to the target) navigation via guidance/mission planning must also be considered: the choice of path can significantly impact the utility of sensor measurements.

This project seeks to demonstrate tightly-integrated navigation and guidance and experimentally verify it by incorporating information about the sensor and sensing environment into the trajectory generation problem.  In addition, autonomous team targeting and multi-objective decision making utilizing enhanced target  localization are investigated.
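The idea of folding sensing information into trajectory generation can be illustrated with a toy cost that trades path length against time-on-target for a fixed-boresight sensor footprint. All names and numbers below are illustrative assumptions, not the project's actual formulation:

```python
import numpy as np

def trajectory_cost(waypoints, target, footprint_m=5.0, w_info=2.0):
    """Sensor-aware trajectory cost: path length + information penalty.

    Penalizes the fraction of waypoints from which the target falls outside
    a fixed-boresight sensor footprint of radius footprint_m, so paths that
    keep the target in view are preferred. Lower cost is better.
    """
    wp = np.asarray(waypoints, dtype=float)
    length = np.sum(np.linalg.norm(np.diff(wp, axis=0), axis=1))
    dists = np.linalg.norm(wp - np.asarray(target, dtype=float), axis=1)
    missed = np.mean(dists > footprint_m)  # fraction of path with no target view
    return length + w_info * missed
```

A planner minimizing this cost will bend the route toward viewpoints that improve target localization, which is the coupling between guidance and navigation this project exploits.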

The technical objectives of this work are to:

  1. Investigate the capabilities of single-vehicle Reinforcement Learning (RL) agents, which utilize Deep Deterministic Policy Gradient (DDPG) learning, to task a multi-vehicle platoon.
  2. Develop a baseline automatic target recognition algorithm to characterize, in simulation and experimentally, the sensing requirements for acquiring the target with fixed-boresight aerial sensing platforms.
  3. Investigate training an RL agent via simulation to follow a predetermined trajectory to the target sensing field of view, sense and refine the target location, and fly to the target autonomously in a payload-directed flight.
  4. Demonstrate these capabilities in real-time on vehicle platforms.
  5. Demonstrate the utility of the VSCL Clark Heterogeneous Multi-Vehicle Modular Control Framework for multi-vehicle tasking communication.

Working with me on this project is:

Graduate Student:

    -Hannah Lehman

 

