

Research

Our research is focused on bridging the scientific gaps between traditional computer science topics and aerospace engineering topics, while achieving a high degree of closure between theory and experiment. We focus on machine learning and multi-agent systems, intelligent autonomous control, nonlinear control theory, vision-based navigation systems, fault-tolerant adaptive control, and cockpit systems and displays. What sets our work apart is a unique systems approach and an ability to seamlessly integrate different disciplines such as dynamics & control, artificial intelligence, and bio-inspiration. Our body of work integrates these disciplines, creating a lasting impact on technical communities ranging from smart materials to General Aviation flight safety to Unmanned Air Systems (UAS) to guidance, navigation & control theory. Our research has been funded by AFOSR, ARO, ONR, AFRL, ARL, AFC, NSF, NASA, FAA, and industry.

Autonomous and Nonlinear Control of Cyber-Physical Air, Space and Ground Systems

Vision Based Sensors and Navigation Systems

Cybersecurity for Air and Space Vehicles

Air Vehicle Control and Management

Space Vehicle Control and Management

Advanced Cockpit/UAS Systems and Displays

Control of Bio-Nano Materials and Structures

Enhancing the Cycle-of-Learning for Autonomous Systems to Facilitate Human-Agent Teaming

Army Research Laboratory

9 August 2019 – 8 August 2024

Total award $1,250,000

Current state-of-the-art research on learning algorithms focuses on end-to-end approaches: the learning agent is initialized with no prior knowledge of the task or the environment, and the trial-and-error action-selection process begins almost at random. To efficiently and safely train autonomous systems in real time, the Cycle-of-Learning (CoL) for Autonomous Systems combines supervised and reinforcement learning with human input modalities. This approach has been shown to improve task performance while requiring fewer interactions with the environment. The research in this project directly supports essential Human-Agent Teaming research by enabling efficient training of autonomous systems through novel forms of human interaction.
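As a rough illustration of the idea only (not the project's actual implementation), the sketch below blends a supervised behavior-cloning term computed on human demonstrations with an on-policy policy-gradient term; the network shape, the fixed mixing weight, and the data handling are all assumptions made for the example.

```python
# Minimal sketch of blending supervised (human demonstration) and
# reinforcement learning losses, in the spirit of the Cycle-of-Learning.
# Architecture, loss weights, and data handling are illustrative assumptions.
import torch
import torch.nn as nn

class GaussianPolicy(nn.Module):
    def __init__(self, obs_dim, act_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim, 64), nn.Tanh(),
                                 nn.Linear(64, act_dim))
        self.log_std = nn.Parameter(torch.zeros(act_dim))

    def forward(self, obs):
        mean = self.net(obs)
        return torch.distributions.Normal(mean, self.log_std.exp())

def cycle_of_learning_loss(policy, demo_obs, demo_act,
                           rl_obs, rl_act, advantages, bc_weight=0.5):
    """Blend a behavior-cloning term (human data) with a policy-gradient term."""
    bc_loss = -policy(demo_obs).log_prob(demo_act).sum(-1).mean()              # imitate human actions
    pg_loss = -(policy(rl_obs).log_prob(rl_act).sum(-1) * advantages).mean()   # on-policy RL term
    return bc_weight * bc_loss + (1.0 - bc_weight) * pg_loss
```

In practice a schedule that shifts weight from the supervised term toward the reinforcement learning term as training progresses is a natural extension; the fixed weight above is only for illustration.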

This project investigates whether it is possible to train a learning agent to learn a latent space, initially from human interaction, and to perform tasks entirely in this latent-space world before interacting with the hardware. This approach has benefits for real-world robotic applications where safety is critical and interactions with the environment are expensive.

 

The objectives of this research are:

  1. Extending previous CoL work by developing a model-based reinforcement learning algorithm that learns the environment dynamics from human interaction and on-policy data (a minimal sketch follows this list).
  2. Demonstrating in hardware the current stage of the CoL, specifically in a human and small Unmanned Air System (human-sUAS) scenario.
  3. Extending the Cycle-of-Learning to a multi-agent setting to accommodate a mixed squad of multiple humans and sUAS.
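As a rough sketch of the first objective (not the project's implementation), the code below learns a latent dynamics model from logged transitions: an encoder maps observations into a latent space and a transition network predicts the next latent state, so a policy could later be rolled out in the learned latent world before touching hardware. The layer sizes, loss, and training details are assumptions.

```python
# Illustrative latent dynamics model: encode observations into a latent
# space and predict the next latent state from (latent, action) pairs.
# Layer sizes, loss, and training details are assumptions for illustration.
import torch
import torch.nn as nn

class LatentDynamics(nn.Module):
    def __init__(self, obs_dim, act_dim, latent_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(obs_dim, 128), nn.ReLU(),
                                     nn.Linear(128, latent_dim))
        self.transition = nn.Sequential(nn.Linear(latent_dim + act_dim, 128), nn.ReLU(),
                                        nn.Linear(128, latent_dim))

    def forward(self, obs, act):
        z = self.encoder(obs)
        z_next_pred = self.transition(torch.cat([z, act], dim=-1))
        return z, z_next_pred

def dynamics_loss(model, obs, act, next_obs):
    """Consistency loss: predicted next latent should match the encoded next observation."""
    _, z_next_pred = model(obs, act)
    with torch.no_grad():
        z_next = model.encoder(next_obs)   # target latent (stop-gradient)
    return nn.functional.mse_loss(z_next_pred, z_next)
```

The transitions used to fit such a model could come from both human demonstrations and on-policy rollouts, consistent with the objective above.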

The hardware implementation aims to replicate, on a real vehicle, the CoL results observed in the Microsoft AirSim simulated environment for the quadrotor landing task. This hardware demonstration includes investigating the effects of computational limitations, sensor and platform disparities, dynamic-range limits, and vehicle dynamics on the CoL. Alternative approaches for the perceptual front-end extraction will be investigated, as well as methodologies for human intervention and possible autonomous Return to Launch (RTL). Hardware results, in terms of task performance and sample efficiency, will be compared with the results already achieved in the Microsoft AirSim simulated environment.

The third area will involve extending the Cycle-of-Learning to a multi-agent domain to solve tasks that require coordination with multiple sUAS and human teammates.

The result of this project will be algorithms and models that enable novel forms of human and AI integration for fast and efficient training of autonomous systems, together with the software and hardware infrastructure needed to deploy those algorithms and models on physical quadrotor systems. Work on improving these algorithms will be conducted in collaboration with ARL technical representatives with expertise in human-in-the-loop reinforcement learning and machine learning.

Working with me on this project are:

Graduate Students:

    -Ritwik Bera, Ph.D. AERO

    -Ravi Thakur, Ph.D. AERO

Agile Technology Development (ATD)  – Air-Ground Coordinated Teaming

Army Futures Command

1 August 2019 – 31 July 2024

Total award $65,000,000

Coordinated teams of Ground Vehicles (GVs), Air Vehicles (AVs), and human operators conducting platoon maneuvers in a contested Multi-Operational Domain environment will need coordinated capabilities and resilience beyond current state-of-the-art systems, many of which were designed for the similar but not equivalent off-road/on-road missions of emergency response and disaster relief. A critical enabler for such movement is the availability of situational awareness (SA) information that includes localization of the platoon vehicles as well as localization and characterization of dynamic and static objects in a neighborhood of the platoon. A wide body of research is ongoing in both localization based on multi-sensor fusion and situational awareness that enhances localization by building semantic object maps.

The SA information is often generated using GPS as a key source of information. However, in unexplored off-road/on-road environments, GPS availability may be compromised for a variety of reasons, and in such cases the quality and range of the SA are further reduced. Further, in such challenging environments the fielded system will operate independent of the cloud, there will be no Wi-Fi routers or cell towers, and most if not all computation will be performed onboard the vehicles. Human operators will need to Command, Control, and Communicate (C3) with any platform in the system directly, whenever desired.

A particular concept being researched at Texas A&M is Infrastructure Enabled Autonomy (IEA). IEA uses sensors placed outside the vehicles, co-located with compute capability that processes the sensor information, abstracts the SA information, and sends it wirelessly to the vehicle, leading to enhanced SA at the vehicle. Such an IEA concept can support both basic localization and higher-level semantic information for generating SA that can be used for complex maneuvers. This project will extend the concept to off-road/on-road terrain, creating an “on-demand” IEA through the use of supporting Air Vehicles (AVs) that carry smart sensors and wireless communication capabilities. It is expected to produce significantly accelerated movement of vehicle platoons in unexplored off-road/on-road terrain, while collecting rich SA along the way.
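Purely as an illustration of what the wirelessly shared SA payload might look like (the project's actual message formats are not described here), the sketch below defines a compact, serializable situational-awareness message carrying platoon vehicle poses and abstracted object tracks; every field name is an assumption.

```python
# Hypothetical situational-awareness (SA) message that an infrastructure or
# air node could abstract from its sensors and send wirelessly to ground vehicles.
# All field names and the JSON encoding are illustrative assumptions.
import json
from dataclasses import dataclass, asdict, field
from typing import List

@dataclass
class TrackedObject:
    object_id: int
    category: str          # e.g. "vehicle", "pedestrian", "obstacle"
    x_m: float             # position in a shared local frame, meters
    y_m: float
    vx_mps: float          # estimated velocity, meters/second
    vy_mps: float
    confidence: float      # detector confidence in [0, 1]

@dataclass
class SAMessage:
    source_id: str                  # which AV/infrastructure node produced this
    timestamp_s: float              # time of validity
    ego_poses: dict                 # platoon vehicle id -> (x_m, y_m, heading_rad)
    objects: List[TrackedObject] = field(default_factory=list)

    def to_bytes(self) -> bytes:
        """Serialize for transmission over a bandwidth-limited wireless link."""
        return json.dumps(asdict(self)).encode("utf-8")
```

Transmitting abstracted tracks rather than raw sensor data is what makes the "process at the sensor, send the abstraction" structure of IEA attractive when bandwidth and onboard compute are limited.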

The technical objectives of this work are to:

  1. Investigate how the SA information should be structured so as to enable efficient communication from the Air Vehicles (AVs) to the Ground Vehicles (GVs), while retaining sufficient richness to enable rapid, safe and autonomous motion of GVs.
  2. Investigate path planning of the AVs that coordinates with the path planning of the GVs to ensure minimal communications and satisfaction of movement constraints (such as avoidance of hazardous areas).
  3. Develop AV control laws that support the desired orientation of the sensors along the needed flight paths; in particular, determine what levels of coordination between AVs are required to create coordinated sensory information.
  4. Investigate how the autonomy stack and vehicle controls should be architected to transition seamlessly between using the SA coming from the AVs and operating fully autonomously, while progressing with the platoon motion.

A technology demonstrator platform consisting of a fleet of GVs and AVs will be built and will perform a movement-to-contact (MTC) mission. The mission performance metrics will be jointly developed by TAMU, the Army Research Laboratory (ARL), and the Ground Vehicle Systems Center (GVSC). Testing and evaluation will be performed each year as part of the Innovations Proving Ground to capture these mission performance metrics, with a targeted improvement of 25% per year.

Working with me on this project are:

Graduate Students:

    -Garrett Jares, Ph.D. AERO

    -Morgan Wood, M.S. AERO

 

 

Autonomous Intelligent Detection Tracking and Recognition (AIDTR) 

Army Research Laboratory / National Robotics Engineering Center (NREC)

1 August 2019 – 31 July 2020

Total award $587,000

Working with me on this project are:

Graduate Students:

    -Kameron Eves, Ph.D. AERO

 

Autonomous Navigation in Challenging Operational Environments: Demonstration

Sandia National Laboratory

1 October 2019 – 24 September 2020

Total award $216,500

Small VTOL Unmanned Air System Controller Design and Evaluation for Variable Wind Conditions, Phase I

Bell

1 October 2018 – 30 September 2019

Total award $154,759

Due to their design, rotorcraft are inherently sensitive to gusts and turbulence in hover and transition. With their relatively low disk loading, they are sensitive to gusts compared with fixed-wing aircraft, and disk loading is the major design parameter affecting turbulence response. As effective disk loading increases with forward flight, the sensitivity to turbulence decreases. Along with disk loading, the Center of Gravity (CG) location also plays a major role in gust tolerance. Most rotorcraft have a low CG, which acts as a pendulum and adds stability. However, modern VTOL designs tend to have a very high CG location, which makes them highly sensitive to gusts and turbulence.
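For a sense of scale, disk loading is simply the hover thrust (roughly the vehicle weight) divided by the total rotor disk area; the short sketch below computes it for a hypothetical small quadrotor, with every number an illustrative assumption.

```python
# Disk loading = hover thrust (≈ weight) / total rotor disk area.
# The vehicle numbers below are purely illustrative placeholders.
import math

mass_kg = 2.0                 # hypothetical small quadrotor mass
rotor_radius_m = 0.127        # hypothetical 10-inch propellers (radius ≈ 0.127 m)
num_rotors = 4
g = 9.81

thrust_n = mass_kg * g
disk_area_m2 = num_rotors * math.pi * rotor_radius_m ** 2
disk_loading = thrust_n / disk_area_m2   # N/m^2

print(f"Disk loading ≈ {disk_loading:.1f} N/m^2")
# A lower disk loading generally corresponds to greater gust sensitivity in
# hover, which is the design sensitivity discussed above.
```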

The technical objectives of this work are to:

  1. Investigate and develop a control law and associated control system that allows a VTOL vehicle to sense and correct for gust- and turbulence-induced instabilities.
  2. Design and build a sub-scale demonstrator.
  3. Test the sub-scale demonstrator under full control in the Oran W. Nicks Low Speed Wind Tunnel and in the outdoor environment of the Vehicle Systems & Control Laboratory’s UAS test site at the TAMU RELLIS Campus.

The basic control law will be a disturbance rejection enhanced version of the Proportional Integral Filter – Control Rate Weighting – Nonzero Setpoint (PIF-CRW-NZSP) control law structure.  The PIF-CRW-NZSP controller is a multi-input multi-output (MIMO) optimal control methodology that permits low-pass filtering (smoothing) of feedback signals.  It also permits the rate of servo actuation to be adjusted by the designer.  This is effective in preventing actuators from hitting and riding their rate limits, which often produces poor performance and can lead to pilot induced oscillations (PIO).  Control allocation will be used as needed to distribute the modulation of the gimballed rotors for the disturbance rejection capability.
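A minimal sketch of the control-rate-weighting idea follows (not the program's actual design): the plant state is augmented with the current control and the integral of the tracking error, and a discrete LQR penalizes the control rate directly, which is what keeps actuators away from their rate limits. The nonzero-setpoint piece would enter as a command feedforward that is not shown here, and the plant matrices, time step, and weights are placeholders.

```python
# Minimal sketch of a PIF-CRW-type LQR design: augment the plant state with
# the current control and the integral of the tracking error, and penalize
# the control *rate* as the new input.  The plant model, weights, and time
# step passed in are placeholders, not the project's actual design values.
import numpy as np
from scipy.linalg import solve_discrete_are

def pif_crw_gains(A, B, C, dt, Qx, Qi, Ru):
    """Discrete LQR gains for the augmented [state, control, integral-error] system."""
    n, m = B.shape
    p = C.shape[0]
    # Augmented dynamics: x+ = A x + B u,   u+ = u + dt*v,   e+ = e + dt*(C x - y_cmd)
    A_aug = np.block([
        [A,                 B,                np.zeros((n, p))],
        [np.zeros((m, n)),  np.eye(m),        np.zeros((m, p))],
        [dt * C,            np.zeros((p, m)), np.eye(p)       ],
    ])
    B_aug = np.vstack([np.zeros((n, m)), dt * np.eye(m), np.zeros((p, m))])
    Q_aug = np.block([
        [Qx,                np.zeros((n, m)), np.zeros((n, p))],
        [np.zeros((m, n)),  1e-6 * np.eye(m), np.zeros((m, p))],
        [np.zeros((p, n)),  np.zeros((p, m)), Qi              ],
    ])
    P = solve_discrete_are(A_aug, B_aug, Q_aug, Ru)
    K = np.linalg.solve(Ru + B_aug.T @ P @ B_aug, B_aug.T @ P @ A_aug)
    return K   # control-rate command: v = -K @ [x; u; integral_error]
```

Because the optimization variable is the control rate rather than the control itself, the weight Ru directly penalizes how fast the servos are asked to move, which is the mechanism the paragraph above describes for avoiding rate-limit riding.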

Working with me on this project are:

Graduate Students:

   -Zeke Bowden, MENG AERO

Undergraduate Students:

    -Blake Krpec, AERO

    -Christopher Leshikar, AERO

Multi-Sensor Flight Test

VectorNav Technologies

15 May 2018 – 31 October 2018

Total award $11,748

Working with me on this project are:

Graduate Students:

    -Zeke Bowden, MENG  AERO

    -Garrett Jares, Ph.D. AERO

 

2018 T3 Program: Mars Automated Terrain Analysis for Navigation and Science Targeting

Texas A&M University

1 April 2018 – 31 March 2020

Total award $32,000

Working with me on this project are:

Graduate Students:

    -Blake Krpec, M.S. AERO

 

Intelligent and Safe Technologies for Enhanced UAS Autonomous Air Refueling Operations, Phase I

Air Force Research Laboratory through sub-contract with Barron Associates

1 September 2017 – 30 April 2018

Total award $49,982

Unmanned Aircraft System (UAS) platforms fill many important military roles that require long periods of time aloft. Repeated returns to base for refueling can severely degrade mission operations, so there is a critical need to develop autonomous aerial refueling (AAR) capabilities in which both the tanker and the receiver aircraft are unmanned. One of the challenges in AAR is minimum airspeed; this effort focuses on Group 4 and 5 UASs with maximum airspeeds of 130 KCAS. The main objective is to radically increase mission length and on-station availability of UAS platforms by developing the capability to reliably conduct AAR of Group 4 and 5 UASs at calibrated airspeeds of 130 KCAS or less.

Additional key technical challenges associated with AAR of UAS are:

  1. The refueling procedure requires the UAS to operate in close proximity to the tanker aircraft; the relative location must be known with a high level of accuracy while collision-avoidance procedures are employed.
  2. One or both of the tanker and the UAS must respond quickly if an unsafe refueling condition occurs (see the sketch following this list).
  3. AAR solutions should minimize modifications to both the tanker and the UAS due to cost, maintenance, and SWaP considerations.
  4. The refueling system must operate under broad weather conditions, day and night.
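Purely as an illustration of challenges 1 and 2 above (not a design from this effort), the sketch below monitors the estimated relative position and closure rate of the receiver with respect to the tanker and commands a breakaway when either leaves an assumed safe envelope; every threshold is a made-up placeholder.

```python
# Hypothetical safety monitor for autonomous aerial refueling (AAR):
# trigger a breakaway if the relative-navigation estimate degrades or the
# closure rate exceeds a safe bound.  All thresholds are illustrative.
import numpy as np

def check_breakaway(rel_pos_m, rel_vel_mps, pos_std_m,
                    max_closure_mps=2.0, max_pos_std_m=0.5, min_range_m=3.0):
    """Return True if the receiver should abort the refueling approach.

    rel_pos_m : tanker position relative to the receiver (3-vector, meters)
    rel_vel_mps : time derivative of rel_pos_m (meters/second)
    pos_std_m : 1-sigma uncertainty of the relative position estimate (meters)
    """
    range_m = np.linalg.norm(rel_pos_m)
    # Closure rate: positive when the range to the tanker is decreasing.
    closure_mps = -np.dot(rel_vel_mps, rel_pos_m) / max(range_m, 1e-6)
    if pos_std_m > max_pos_std_m:       # relative navigation no longer accurate enough
        return True
    if closure_mps > max_closure_mps:   # closing on the tanker too fast
        return True
    if range_m < min_range_m and closure_mps > 0.0:  # inside keep-out and still closing
        return True
    return False
```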

Working with me on this project are:

Graduate Students:

    -Zeke Bowden, MENG  AERO

Human Performance Impacts of Head-Worn Displays For General Aviation, Phase I

Federal Aviation Administration, Civil Aerospace Medical Institute (CAMI)

1 August 2017 – 31 July 2018

Total award $227,025

Many systems on the market, or in the conceptual design phase, for Enhanced Flight Vision Systems (EFVS) enhance the visual capability of the pilot during flight at near-to-eye distances similar to spectacles. Although Head Worn Displays (HWDs) have been proposed for Civil Aviation (CA) flight operations by various organizations, a rigorous qualitative and quantitative comparison of candidate devices and their impact on human factors is currently lacking. The technical objective of the proposed effort is to develop a means for evaluating the human factors aspects of emerging HWDs for Enhanced Vision System technologies, and then to conduct a human factors study of their suitability for Civil Aviation, with specific application to General Aviation.

The goal of this project is to collect data and information to be used in a Human Factors study that will quantify answers to the following questions:

  1. What are the CA operational impacts of a Head Worn Display?
  2. What CA pilot approach and landing tasks can be done, and what tasks cannot be done?
  3. Is it operationally suitable for CA pilot approach and landing tasks?
  4. Does use of this system allow CA pilots to adequately conduct Cat 2 approaches?

The expected result of this research is an increased understanding of the effects of enhanced vision systems on CA pilot safety and performance during low-visibility operations due to weather in the approach and landing phases for Cat 1, S.A. Cat 1, Cat 2, and Cat 3.

Working with me on this project are:

Co-PI

-Dr. Thomas Ferris, ISED

Graduate Students

-Emily Fojtik, MENG AERO

Undergraduate Students:

    -Allison Daveid, BSAE

    -Alexandra Heinimann, BSAE

    -Mia Brown, CSCE

 

State Constrained Adaptive Flight Control, Phase II

Air Force Research Laboratory, Air Vehicles Directorate

Principal Investigator and Technical Lead

3 February 2017 – 3 May 2018

Total award $85,389

The development of control architectures for hypersonic vehicles presents a significant challenge due to the widely varying flight conditions in which these vehicles operate and certain aspects unique to hypersonic flight. One particular safety and operational concern in hypersonic flight is inlet unstart, which not only produces a significant decrease in thrust but can also lead to loss of control and possibly loss of the vehicle. One flight condition that can cause an inlet unstart is flying at a large angle of attack or sideslip angle. In Phase I, a nonlinear dynamic inversion (NDI) adaptive controller was developed with the ability to enforce state constraints in order to keep the vehicle away from these large aerodynamic angles. In addition, because of the challenges associated with equipping hypersonic vehicles with traditional external sensor equipment, an observer-based feedback controller for the longitudinal axis of a generic hypersonic vehicle was developed.

Phase II will investigate a single control framework consisting of an observer-based feedback controller capable of tracking for a full six-degree-of-freedom hypersonic vehicle model and an NDI adaptive controller capable of enforcing state constraints without full-state measurements. Additionally, a sampled-data NDI control framework is being developed that not only achieves tracking but also enforces state constraints. The effect of slower sampling times on the ability to control the aircraft and enforce state constraints will be investigated.
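To give the flavor of the approach only (not the Phase I/II controller itself), the sketch below shows nonlinear dynamic inversion on a scalar attitude channel with a simple state-constraint mechanism: the commanded angle of attack is saturated inside a hard limit before the known model is inverted about desired first-order error dynamics. The model functions, gains, and limits are generic placeholders.

```python
# Illustrative scalar nonlinear dynamic inversion (NDI) with a simple
# state-constraint mechanism: the commanded angle of attack is saturated
# inside a hard limit before the model is inverted.  The model f, g and all
# numbers are generic placeholders, not the hypersonic vehicle controller.
import numpy as np

ALPHA_LIMIT_RAD = np.deg2rad(8.0)   # assumed hard angle-of-attack constraint
MARGIN_RAD = np.deg2rad(1.0)        # keep the command inside the limit by a margin
K_ALPHA = 2.0                       # desired error-dynamics bandwidth, 1/s

def f_model(alpha):
    """Placeholder for the modeled natural alpha dynamics."""
    return -0.5 * alpha

def g_model(alpha):
    """Placeholder for the modeled control effectiveness (assumed nonzero)."""
    return 1.2

def ndi_constrained(alpha, alpha_cmd):
    """NDI control: invert the model about desired first-order error dynamics."""
    # State-constraint step: never command alpha outside the allowed envelope.
    alpha_cmd_limited = np.clip(alpha_cmd,
                                -ALPHA_LIMIT_RAD + MARGIN_RAD,
                                 ALPHA_LIMIT_RAD - MARGIN_RAD)
    nu = K_ALPHA * (alpha_cmd_limited - alpha)        # desired alpha_dot
    u = (nu - f_model(alpha)) / g_model(alpha)        # dynamic inversion
    return u
```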

TECHNICAL OBJECTIVES

  1. Develop theory for an observer-based feedback controller capable of tracking commands that account for the full coupled dynamics, i.e., along both the longitudinal and lateral/directional axes of the aircraft.
  2. Develop theory for enforcing state constraints in the observer-feedback adaptive dynamic inversion architecture.
  3. Develop, implement, and analyze a sampled-data control framework based on the continuous-time controller previously developed.

Working with me on this program are Research Assistants:

  • Douglas Famularo, Ph.D. student
  • Sean Whitney, B.S. student

 
