
Machine Learning

VSCL Students Present at 2026 AIAA SciTech Forum

Posted on January 7, 2026 by Cassie-Kay McQuinn

VSCL researchers Raul Santos, Seth Johnson, Carla Zaramella, and Zach Curtis will present papers in January at the 2026 AIAA SciTech Forum in Orlando, Florida.

On 12 January, Raul Santos will present the paper “Deep Reinforcement Learning Waypoint Generation for Attitude Station-Keeping with Sun Avoidance”. This work studies deep reinforcement learning–based waypoint generation for autonomous on-orbit attitude control and examines how observation and action space design influence neural network performance.

Santos, Raul, Binz, Sadie, McQuinn, Cassie-Kay, Valasek, John, Hamilton, Nathaniel, Hobbs, Kerianne L., and Dunlap, Kyle, “Deep Reinforcement Learning Waypoint Generation for Attitude Station-Keeping with Sun Avoidance,” 2026 AIAA Science and Technology Forum and Exposition, Orlando, FL, 12 January 2026.
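The paper’s emphasis on observation and action space design can be illustrated with a minimal, hypothetical Gymnasium-style environment skeleton. The state variables, bounds, placeholder dynamics, and reward terms below are illustrative assumptions for a generic attitude station-keeping task with a sun-avoidance penalty, not the formulation used in the paper.

# Hypothetical Gymnasium-style sketch of an attitude station-keeping task with a
# sun-avoidance penalty.  All quantities below are assumptions for illustration.
import numpy as np
import gymnasium as gym
from gymnasium import spaces


class AttitudeStationKeepingEnv(gym.Env):
    def __init__(self):
        # Observation: attitude error angles (3), body rates (3), and the cosine
        # of the angle between the sensor boresight and the sun (1).
        self.observation_space = spaces.Box(low=-1.0, high=1.0, shape=(7,), dtype=np.float32)
        # Action: a small commanded attitude waypoint as roll/pitch/yaw
        # increments in radians (assumed bounds).
        self.action_space = spaces.Box(low=-0.1, high=0.1, shape=(3,), dtype=np.float32)
        self._state = np.zeros(7, dtype=np.float32)

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self._state = self.np_random.uniform(-0.5, 0.5, size=7).astype(np.float32)
        return self._state, {}

    def step(self, action):
        # Placeholder kinematics: the waypoint increment nudges the attitude error
        # toward zero; a real environment would propagate rigid-body dynamics.
        self._state[:3] = np.clip(self._state[:3] - action, -1.0, 1.0)
        pointing_error = float(np.linalg.norm(self._state[:3]))
        sun_cosine = float(self._state[6])            # > 0 means sun near boresight
        reward = -pointing_error - 10.0 * max(sun_cosine, 0.0)
        terminated = pointing_error < 1e-3
        return self._state, reward, terminated, False, {}

A training script would wrap such an environment with an off-the-shelf RL library; the design question studied in the paper is which quantities enter the observation vector and how the waypoint action is parameterized.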

 

On 12 January, Seth Johnson will present the paper “Modular Open System Architecture for Low-cost Integrated Avionics (MOSA LINA)”. This work investigates a modular, open-system avionics architecture for experimental vehicles that reduces integration complexity and supports platform-agnostic mission reconfiguration through plug-and-play sensor integration. Two case studies are investigated: one focused on synchronized high-fidelity data collection and the other on autonomous fixed-wing target tracking.

Johnson, Seth, Santos, Raul, Martinez-Banda, Isabella, Luna, Noah, and Valasek, John, “Modular Open System Architecture for Low-cost Integrated Avionics (MOSA LINA),” 2026 AIAA Science and Technology Forum and Exposition, Orlando, FL, 12 January 2026.
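The plug-and-play sensor integration described above can be sketched, very roughly, as a common sensor interface that mission code consumes without knowing which hardware is installed. The class and method names below are assumptions made for illustration and are not the MOSA LINA interfaces.

# Hypothetical sketch of plug-and-play sensor integration behind a common
# interface; names and structure are illustrative, not the MOSA LINA design.
from abc import ABC, abstractmethod


class Sensor(ABC):
    """Every sensor exposes the same read() contract, so mission code
    never needs to know which hardware is installed."""
    @abstractmethod
    def read(self) -> dict: ...


class GpsSensor(Sensor):
    def read(self) -> dict:
        return {"lat": 30.61, "lon": -96.34, "alt_m": 98.0}   # stubbed measurement


class ImuSensor(Sensor):
    def read(self) -> dict:
        return {"accel_mps2": [0.0, 0.0, -9.81], "gyro_rps": [0.0, 0.0, 0.0]}


class AvionicsBus:
    """Sensors register themselves; swapping payloads only changes this list."""
    def __init__(self):
        self._sensors = {}

    def register(self, name: str, sensor: Sensor) -> None:
        self._sensors[name] = sensor

    def sample_all(self) -> dict:
        return {name: s.read() for name, s in self._sensors.items()}


bus = AvionicsBus()
bus.register("gps", GpsSensor())
bus.register("imu", ImuSensor())
print(bus.sample_all())

Reconfiguring the platform for a new mission then amounts to registering a different set of sensors rather than rewriting the mission code.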

 

On 15 January, Carla Zaramella will present the paper “Identification of Non-Dimensional Aerodynamic Derivatives using Markov Parameter Based Least Squares Identification Algorithm”. This work expands upon previous development of the MARBLES algorithm to directly identify non-dimensional stability and control derivatives from computed Markov parameters using a least squares estimator and a priori information.

Leshikar, Christopher, Zaramella, Carla, Madewell, Evelyn, and Valasek, John, “Identification of Non-Dimensional Aerodynamic Derivatives using Markov Parameter Based Least Squares Identification Algorithm,” 2026 AIAA Science and Technology Forum and Exposition, Orlando, FL, 15 January 2026.
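For readers unfamiliar with the underlying idea, the sketch below shows a plain least squares fit of the matrices of a discrete-time linear model from input/response data. It is a generic illustration of least squares identification only, using a made-up two-state model; it is not the MARBLES algorithm or its Markov-parameter formulation.

# Generic least squares identification sketch: recover the A and B matrices of a
# discrete-time linear model from input/output data.  Illustrative only.
import numpy as np

rng = np.random.default_rng(0)
A_true = np.array([[0.98, 0.10], [-0.05, 0.95]])   # assumed 2-state model
B_true = np.array([[0.0], [0.02]])

# Simulate a short maneuver with a random elevator-like input.
N = 200
x = np.zeros((N + 1, 2))
u = rng.uniform(-1.0, 1.0, size=(N, 1))
for k in range(N):
    x[k + 1] = A_true @ x[k] + (B_true @ u[k]).ravel()

# Stack the regression x_{k+1} = [A B] [x_k; u_k] and solve it in one shot.
Phi = np.hstack([x[:-1], u])                       # N x 3 regressor matrix
Theta, *_ = np.linalg.lstsq(Phi, x[1:], rcond=None)
A_hat, B_hat = Theta[:2].T, Theta[2:].T
print("A error:", np.max(np.abs(A_hat - A_true)))
print("B error:", np.max(np.abs(B_hat - B_true)))

The MARBLES formulation works with Markov parameters and a priori information rather than raw states, but the estimator at its core is the same least squares machinery.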

 

On 16 January, Zachary Curtis will present the paper “Real-Time Controller Architecture for sUAS Flight Test”. This work investigates a C++/ROS architecture, named Kanan, for real-time controller implementation. The architecture enables safe and rapid integration of custom controllers across a broad range of vehicles and controller types.

Luna, Noah, Valasek, John, and Curtis, Zachary, “Real-Time Controller Architecture for sUAS Flight Test,” 2026 AIAA Science and Technology Forum and Exposition, Orlando, FL, 16 January 2026.
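The integration idea can be outlined with a small, hypothetical Python sketch of a controller interface driven by a fixed-rate loop. Kanan itself is implemented in C++/ROS, so the names and structure below are assumptions for illustration only.

# Hypothetical outline of a real-time controller interface and fixed-rate loop.
# Kanan is a C++/ROS architecture; this Python sketch is illustrative only.
import time
from abc import ABC, abstractmethod


class Controller(ABC):
    """Custom controllers implement one method, so they can be swapped
    without touching the real-time loop or the vehicle interface."""
    @abstractmethod
    def update(self, state: dict, dt: float) -> dict: ...


class AltitudeHoldP(Controller):
    def __init__(self, target_alt_m: float, gain: float = 0.5):
        self.target, self.gain = target_alt_m, gain

    def update(self, state: dict, dt: float) -> dict:
        return {"climb_rate_cmd": self.gain * (self.target - state["alt_m"])}


def run_loop(controller: Controller, rate_hz: float = 50.0, steps: int = 5):
    dt = 1.0 / rate_hz
    state = {"alt_m": 95.0}
    for _ in range(steps):
        t0 = time.monotonic()
        cmd = controller.update(state, dt)
        state["alt_m"] += cmd["climb_rate_cmd"] * dt        # stand-in for vehicle I/O
        time.sleep(max(0.0, dt - (time.monotonic() - t0)))  # hold the loop rate
    print(state)


run_loop(AltitudeHoldP(target_alt_m=100.0))

Because the loop only ever sees the Controller interface, moving from one controller type or vehicle to another does not require changing the real-time infrastructure.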

 

Filed Under: Control, Machine Learning, Presentations, Publications, Reinforcement Learning, System Identification, Uncategorized

Payton Clem Defends Masters Thesis

Posted on December 16, 2025 by Cassie-Kay McQuinn

Payton Clem successfully defended her M.S. thesis, “Autonomous Target Tracking of Hostile Ground Target under Wind Disturbance and Sun Concealment using Deep Reinforcement Learning,” on 12 December. Payton has been with VSCL since the first semester of her senior year and is highly engaged in AI and flight testing.

Intelligence, surveillance, and reconnaissance (ISR) missions benefit from the use of unmanned aircraft systems (UAS) capable of maintaining visual contact with ground targets, referred to here as target tracking. For practical deployment, it is valuable for tracking to be autonomous and function without detailed knowledge of the surrounding environment. The task becomes more complex when additional objectives, such as concealment or avoiding a hostile target, are introduced. To address this problem, a Soft Actor-Critic (SAC) reinforcement learning controller is developed that uses only the target’s location in the image frame. The agent controls a multirotor UAS equipped with a fixed optical sensor, requiring the agent to adjust vehicle attitude to keep the target in view while accounting for wind, varying target behaviors, altitude-based concealment constraints, and sun-related concealment. Previous work on fixed-camera target tracking has shown that RL-based algorithms can produce unstable behaviors such as control oscillations and large altitude changes. This work focuses on reward shaping to mitigate these issues and encourage stable, consistent tracking. In addition, the influence of including solar concealment information in the reward function is examined to assess its effect on vehicle behavior. The results demonstrate that the proposed reward structure effectively reduces unwanted behaviors such as diving and pitch and yaw ringing. The reward structure enables stable, long-duration tracking, despite the incorporation of constraints associated with sun concealment strategies. The resulting policy achieves reliable tracking across the evaluated conditions.
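To make the reward-shaping discussion concrete, a minimal hypothetical sketch of a shaped reward for image-frame target tracking follows. The individual terms and weights are illustrative assumptions, not the reward function developed in the thesis.

# Minimal reward-shaping sketch for image-frame target tracking.  The terms and
# weights below are illustrative assumptions, not the thesis reward function.
import numpy as np


def tracking_reward(pixel_error, altitude_error_m, attitude_rate_cmd, sun_alignment):
    """pixel_error: normalized (u, v) offset of the target from image center.
    attitude_rate_cmd: commanded body rates, penalized to discourage ringing.
    sun_alignment: cosine of the angle between the sun line and the desired
    concealment direction (larger is better for concealment)."""
    r_track = -np.linalg.norm(pixel_error)               # keep the target centered
    r_alt = -0.5 * abs(altitude_error_m)                  # discourage diving
    r_smooth = -0.1 * np.linalg.norm(attitude_rate_cmd)   # damp pitch/yaw oscillation
    r_sun = 0.2 * max(sun_alignment, 0.0)                 # reward sun concealment
    return r_track + r_alt + r_smooth + r_sun


print(tracking_reward(np.array([0.05, -0.02]), 1.5, np.array([0.01, 0.0, 0.02]), 0.8))

The relative weighting of these competing terms is exactly the kind of trade the thesis studies: too little smoothing yields ringing, too much and the tracker stops responding to the target.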

Payton’s research is supported by the Army Research Laboratory under the project “Robust Threat Detection for Ground Combat Vehicles with Multi-Domain Surveillance in Hostile Environments.”

Filed Under: Defense, Machine Learning, Presentations

Krpec and Valasek Publish “Vision-based Marker-less Landing of a UAS on Moving Ground Vehicle” in Journal of Aerospace Information Systems

Posted on May 5, 2024 by Cassie-Kay McQuinn

VSCL alumnus Blake Krpec, Dr. John Valasek, and Dr. Stephen Nogar of the DEVCOM Army Research Laboratory published the paper “Vision-based Marker-less Landing of a UAS on Moving Ground Vehicle” in the Journal of Aerospace Information Systems.

Current autonomous unmanned aerial systems (UAS) commonly use vision-based landing solutions that depend upon fiducial markers to localize a static or mobile landing target relative to the UAS. This paper develops and demonstrates an alternative method to fiducial markers with a combination of neural network-based object detection and camera intrinsic properties to localize an unmanned ground vehicle (UGV) and enable autonomous landing. Implementing this visual approach is challenging given the limited compute power on board the UAS, but is relevant for autonomous landings on targets for which affixing a fiducial marker a priori is not possible, or not practical. The position estimate of the UGV is used to formulate a landing trajectory that is then input to the flight controller. Algorithms are tailored towards low size, weight, and power constraints as all compute and sensing components weigh less than 100 g. Landings were successfully demonstrated in both simulation and experimentally on a UGV traveling in both a straight line and while turning. Simulation landings were successful at UGV speeds of up to 3.0 m/s, and experimental landings at speeds up to 1.0 m/s.
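The localization step described in the abstract, combining a detector's output with camera intrinsic properties, can be sketched roughly as back-projecting the bounding-box center through the intrinsic matrix and intersecting the resulting ray with the ground plane. The intrinsic values and the flat-ground, downward-facing-camera geometry below are simplifying assumptions, not the paper's implementation.

# Rough sketch of localizing a ground vehicle from a detection and camera
# intrinsics.  Assumes a nadir camera over flat ground; values are illustrative.
import numpy as np

K = np.array([[800.0, 0.0, 320.0],     # assumed focal lengths / principal point (px)
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])


def localize_target(bbox_center_px, altitude_m):
    """Back-project the bounding-box center through K and intersect the ray
    with the ground plane to get the target offset in the camera frame."""
    u, v = bbox_center_px
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])   # normalized image ray
    scale = altitude_m / ray[2]                      # stretch the ray to the ground
    return ray * scale                               # (x, y, z) offset in meters


offset = localize_target((400.0, 210.0), altitude_m=20.0)
print("target offset from UAS (m):", offset)

Position estimates of this kind, filtered over time, are what feed the landing-trajectory generation described in the paper.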

 

Filed Under: Control, Machine Learning, Publications

Texas A&M University Becomes Founding Partner of New NSF Center for Autonomous Air Mobility and Sensing

Posted on September 1, 2023 by Cassie-Kay McQuinn

Texas A&M University is a founding partner of the National Science Foundation (NSF) Center for Autonomous Air Mobility and Sensing (CAAMS), along with the University of Colorado Boulder (CU), Brigham Young University (BYU), the University of Michigan (UM), Penn State University (PSU), and Virginia Tech (VT). The center is organized under the NSF Industry-University Cooperative Research Centers (IUCRC) program. CAAMS brings together three primary partners: academia, industry, and government. Academic faculty collaborate with industry and government members to promote long-term, globally competitive research and innovation, creating solutions to the most critical challenges facing the autonomy industry. Dr. John Valasek serves as the Site Director for Texas A&M University. Texas A&M University faculty associated with CAAMS include Dr. Moble Benedict, Dr. Manoranjan Majji, Dr. Sivakumar Rathinam, and Dr. Swaroop Darbha.

In conjunction with the CASS Lab at Penn State, directed by Dr. Puneet Singla, VSCL will be working on the project “Integration of System Theory with Machine Learning Tools for Data Driven System Identification.” The project integrates system theory with machine learning tools for data-driven system identification, with the objective of deriving nonlinear dynamical models through a unique handshake between linear time-varying subspace methods and sparse approximation tools, applied to high-fidelity flight simulations and flight experiments.
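As a generic illustration of pairing a regression model with sparse approximation, not the project's method, the sketch below runs sequential thresholded least squares over a small library of candidate terms and recovers the few terms that actually generated the data.

# Generic sparse-regression sketch (sequential thresholded least squares) over a
# library of candidate terms.  Illustrative only; not the project's algorithm.
import numpy as np

rng = np.random.default_rng(1)
x1 = rng.uniform(-1, 1, 400)
x2 = rng.uniform(-1, 1, 400)
y = 2.0 * x1 - 0.5 * x1**3 + 0.01 * rng.standard_normal(400)   # "truth" to recover

# Candidate-function library: the identified model should use only a few columns.
library = np.column_stack([x1, x2, x1**2, x1 * x2, x2**2, x1**3, x2**3])
names = ["x1", "x2", "x1^2", "x1*x2", "x2^2", "x1^3", "x2^3"]

coeffs, *_ = np.linalg.lstsq(library, y, rcond=None)
for _ in range(10):                                  # threshold small terms, refit
    small = np.abs(coeffs) < 0.1
    coeffs[small] = 0.0
    keep = ~small
    coeffs[keep], *_ = np.linalg.lstsq(library[:, keep], y, rcond=None)

print({n: round(c, 3) for n, c in zip(names, coeffs) if c != 0.0})

In the actual project, the regression targets come from linear time-varying subspace methods applied to flight data, but the sparsity-promoting step serves the same purpose: keeping only the dynamical terms the data support.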

 

Filed Under: Machine Learning, New Items, System Identification

VSCL Student Presents at Interactive Learning with Implicit Human Feedback Workshop at 2023 International Conference on Machine Learning (ICML)

Posted on July 18, 2023 by Cassie-Kay McQuinn

VSCL graduate student M.D. Sunbeam will present a workshop paper on 29 July at the 2023 International Conference on Machine Learning (ICML) in Honolulu, Hawaii.

Sunbeam will present the paper “Imitation Learning with Human Eye Gaze via Multi-Objective Prediction.” Approaches for teaching learning agents via human demonstrations have been widely studied and successfully applied to multiple domains. However, the majority of imitation learning work utilizes only behavioral information from the demonstrator, i.e., which actions were taken, and ignores other useful information. In particular, eye gaze information can give valuable insight into where the demonstrator is allocating visual attention, and holds the potential to improve agent performance and generalization. In this work, we propose Gaze Regularized Imitation Learning (GRIL), a novel context-aware imitation learning architecture that learns concurrently from both human demonstrations and eye gaze to solve tasks where visual attention provides important context.

We apply GRIL to a visual navigation task, in which an unmanned quadrotor is trained to search for and navigate to a target vehicle in a photorealistic simulated environment. We show that GRIL outperforms several state-of-the-art gaze-based imitation learning algorithms, simultaneously learns to predict human visual attention, and generalizes to scenarios not present in the training data. Supplemental videos can be found at https://sites.google.com/view/gaze-regularized-il/, and code will be made available.
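A rough numerical sketch of the multi-objective idea, a single weighted objective combining an action-imitation term with a gaze-prediction term, is shown below. The loss forms, weighting, and variable names are assumptions for illustration, not the GRIL implementation.

# Rough sketch of a gaze-regularized, multi-objective imitation loss: the policy
# is trained to match both the demonstrator's actions and their gaze.  The loss
# forms and weighting are illustrative assumptions, not the GRIL implementation.
import numpy as np


def gril_style_loss(pred_action, demo_action, pred_gaze, demo_gaze, gaze_weight=0.5):
    """Behavior-cloning MSE on actions plus a weighted MSE on predicted gaze;
    the gaze term regularizes where the policy 'looks' in the image."""
    bc_loss = np.mean((pred_action - demo_action) ** 2)
    gaze_loss = np.mean((pred_gaze - demo_gaze) ** 2)
    return bc_loss + gaze_weight * gaze_loss


pred_a = np.array([0.1, -0.2, 0.0, 0.5])      # e.g. quadrotor velocity commands
demo_a = np.array([0.12, -0.25, 0.0, 0.45])
pred_g = np.array([0.40, 0.55])               # predicted gaze point (normalized u, v)
demo_g = np.array([0.42, 0.50])
print(gril_style_loss(pred_a, demo_a, pred_g, demo_g))

Training against both terms at once is what lets the agent learn where to attend as well as what to do, which is the source of the generalization gains reported in the paper.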

Filed Under: Machine Learning, Presentations, Publications
