
Reinforcement Learning

VSCL Students Present at 2026 AIAA SciTech Forum

Posted on January 7, 2026 by Cassie-Kay McQuinn

VSCL researchers Raul Santos, Seth Johnson, Carla Zaramella, and Zachary Curtis will present papers in January at the 2026 AIAA SciTech Forum in Orlando, Florida.

On 12 January, Raul Santos will present the paper “Deep Reinforcement Learning Waypoint Generation for Attitude Station-Keeping with Sun Avoidance”. This work studies deep reinforcement learning–based waypoint generation for autonomous on-orbit attitude control and examines how observation and action space design influence neural network performance.

Santos, Raul, Binz, Sadie, McQuinn, Cassie-Kay, Valasek, John, Hamilton, Nathaniel, Hobbs, Kerianne L., and Dunlap, Kyle, “Deep Reinforcement Learning Waypoint Generation for Attitude Station-Keeping with Sun Avoidance,” 2026 AIAA Science and Technology Forum and Exposition, Orlando, FL, 12 January 2026.
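The paper's actual observation and action space design is not described in this announcement. Purely as an illustration of the kind of design choice being studied, the sketch below (in Python, using the Gymnasium API) defines a toy station-keeping environment whose observation stacks an attitude-error quaternion, body angular rates, and a sun direction vector, and whose action is a commanded waypoint quaternion; every name, dimension, and reward term is an assumption, not the paper's implementation.

    # Illustrative sketch only; not the observation/action design used in the paper.
    import numpy as np
    import gymnasium as gym
    from gymnasium import spaces

    class AttitudeStationKeepingEnv(gym.Env):
        """Toy attitude station-keeping task with a sun keep-out penalty (hypothetical)."""

        def __init__(self, keep_out_half_angle_deg=30.0):
            self.keep_out_cos = np.cos(np.radians(keep_out_half_angle_deg))
            # Observation: attitude-error quaternion (4), body rates (3), sun vector in body frame (3).
            self.observation_space = spaces.Box(-1.0, 1.0, shape=(10,), dtype=np.float32)
            # Action: commanded waypoint quaternion, renormalized before use.
            self.action_space = spaces.Box(-1.0, 1.0, shape=(4,), dtype=np.float32)
            self.state = None

        def reset(self, seed=None, options=None):
            super().reset(seed=seed)
            self.state = self.np_random.uniform(-1.0, 1.0, size=10).astype(np.float32)
            return self.state, {}

        def step(self, action):
            q_cmd = action / (np.linalg.norm(action) + 1e-8)          # commanded waypoint quaternion
            # Placeholder dynamics: a real environment would propagate the attitude toward q_cmd.
            self.state = np.clip(self.state + self.np_random.normal(0.0, 0.01, size=10),
                                 -1.0, 1.0).astype(np.float32)
            q_err, sun_body = self.state[:4], self.state[7:10]
            pointing_cost = 1.0 - abs(float(np.dot(q_err, q_cmd)))    # crude attitude mismatch term
            in_keep_out = sun_body[2] > self.keep_out_cos             # boresight assumed along body z
            reward = -pointing_cost - (10.0 if in_keep_out else 0.0)
            return self.state, reward, False, False, {}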


On 12 January, Seth Johnson will present the paper “Modular Open System Architecture for Low-cost Integrated Avionics (MOSA LINA)”. This work investigates a modular, open-system avionics architecture for experimental vehicles that reduces integration complexity and supports platform-agnostic mission reconfiguration through plug-and-play sensor integration. Two case studies are presented: one focused on synchronized high-fidelity data collection and the other on autonomous fixed-wing target tracking.

Johnson, Seth, Santos, Raul, Martinez-Banda, Isabella, Luna, Noah, and Valasek, John, “Modular Open System Architecture for Low-cost Integrated Avionics (MOSA LINA),” 2026 AIAA Science and Technology Forum and Exposition, Orlando, FL, 12 January 2026.
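MOSA LINA's actual interfaces are not described here. As a generic illustration of the plug-and-play idea behind such architectures, the Python sketch below registers sensor drivers behind one common interface so a mission can be reconfigured from a configuration dictionary without touching the rest of the stack; all class, method, and key names are invented for this example.

    # Generic plug-and-play sensor registry; not MOSA LINA's actual interface (all names hypothetical).
    from abc import ABC, abstractmethod

    class Sensor(ABC):
        """Common contract that every payload sensor driver satisfies."""

        @abstractmethod
        def configure(self, params: dict) -> None: ...

        @abstractmethod
        def read(self) -> dict:
            """Return one measurement as a plain dictionary."""

    SENSOR_REGISTRY = {}  # maps a config name to a Sensor subclass

    def register(name):
        """Class decorator that makes a driver selectable from a mission configuration."""
        def wrap(cls):
            SENSOR_REGISTRY[name] = cls
            return cls
        return wrap

    @register("imu")
    class ImuDriver(Sensor):
        def configure(self, params):
            self.rate_hz = params.get("rate_hz", 200)

        def read(self):
            return {"sensor": "imu", "rate_hz": self.rate_hz}

    def build_payload(mission_config):
        """Instantiate only the sensors a given mission asks for."""
        payload = []
        for name, params in mission_config.items():
            sensor = SENSOR_REGISTRY[name]()   # swap sensors by editing the config, not the code
            sensor.configure(params)
            payload.append(sensor)
        return payload

    # Example: a data-collection mission and a target-tracking mission would differ only in this dict.
    payload = build_payload({"imu": {"rate_hz": 400}})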


On 15 January, Carla Zaramella will present the paper “Identification of Non-Dimensional Aerodynamic Derivatives using Markov Parameter Based Least Squares Identification Algorithm”. This work expands upon previous developments of the MARBLES algorithm to directly identify non-dimensional stability and control derivatives using computed Markov parameters with a least squares estimator and a priori information.

Leshikar, Christopher, Zaramella, Carla, Madewell, Evelyn, and Valasek, John, “Identification of Non-Dimensional Aerodynamic Derivatives using Markov Parameter Based Least Squares Identification Algorithm,” 2026 AIAA Science and Technology Forum and Exposition, Orlando, FL, 15 January 2026.
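The MARBLES algorithm itself is not reproduced in this announcement. As a minimal illustration of the underlying idea, the sketch below estimates the first few Markov parameters (impulse-response coefficients) of a discrete-time system by ordinary least squares from input-output data; mapping those parameters to non-dimensional stability and control derivatives is a further step not shown here. The function name and the toy system are assumptions for this example.

    # Minimal sketch of Markov-parameter estimation by least squares; not the MARBLES algorithm itself.
    import numpy as np

    def estimate_markov_parameters(u, y, m):
        """Fit the first m impulse-response (Markov) parameters of a SISO system.

        Model: y[k] ~ h0*u[k] + h1*u[k-1] + ... + h_{m-1}*u[k-m+1], zero initial conditions assumed.
        """
        u = np.asarray(u, dtype=float)
        y = np.asarray(y, dtype=float)
        n = len(u)
        U = np.zeros((n, m))                 # regressor matrix built from lagged inputs
        for k in range(n):
            for i in range(min(m, k + 1)):
                U[k, i] = u[k - i]
        h, *_ = np.linalg.lstsq(U, y, rcond=None)
        return h

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        u = rng.standard_normal(500)
        h_true = np.array([0.0, 0.8, 0.4, 0.2, 0.1])            # Markov parameters of a toy system
        y = np.convolve(u, h_true)[: len(u)] + 0.01 * rng.standard_normal(len(u))
        print(estimate_markov_parameters(u, y, m=5))            # should be close to h_true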


On 16 January, Zachary Curtis will present the paper “Real-Time Controller Architecture for sUAS Flight Test”. This work investigates a C++/ROS architecture for real-time controller implementation. The architecture, named Kanan, enables safe and rapid integration of custom controllers across a broad range of vehicles and controller types.

Luna, Noah, Valasek, John, and Curtis, Zachary, “Real-Time Controller Architecture for sUAS Flight Test,” 2026 AIAA Science and Technology Forum and Exposition, Orlando, FL, 16 January 2026.
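Kanan itself is a C++/ROS implementation and its interfaces are not described in this announcement. To keep the examples here in one language, the sketch below shows the same general pattern as a rospy node: a fixed-rate loop that wraps any controller object exposing a compute() method, so controllers can be swapped without changing the vehicle-facing code. Topic names, message types, and the controller interface are assumptions, not Kanan's.

    # Illustrative rospy node, not Kanan's actual C++/ROS implementation; topics and interface assumed.
    import rospy
    from geometry_msgs.msg import Twist
    from nav_msgs.msg import Odometry

    class PController:
        """Placeholder controller; any object with a compute(state) -> Twist method could be swapped in."""

        def __init__(self, gain=0.5):
            self.gain = gain

        def compute(self, odom):
            cmd = Twist()
            cmd.linear.x = -self.gain * odom.pose.pose.position.x   # drive x toward zero (toy example)
            return cmd

    class ControlNode:
        def __init__(self, controller, rate_hz=50):
            self.controller = controller
            self.latest_odom = None
            self.cmd_pub = rospy.Publisher("cmd_vel", Twist, queue_size=1)
            rospy.Subscriber("odom", Odometry, self._odom_cb, queue_size=1)
            self.rate = rospy.Rate(rate_hz)                          # fixed control rate

        def _odom_cb(self, msg):
            self.latest_odom = msg

        def spin(self):
            while not rospy.is_shutdown():
                if self.latest_odom is not None:
                    self.cmd_pub.publish(self.controller.compute(self.latest_odom))
                self.rate.sleep()

    if __name__ == "__main__":
        rospy.init_node("controller_node")
        ControlNode(PController()).spin()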


Filed Under: Control, Machine Learning, Presentations, Publications, Reinforcement Learning, System Identification, Uncategorized

Lehman and Valasek Publish “Design, Selection, Evaluation of Reinforcement Learning Single Agents for Ground Target Tracking,” in Journal of Aerospace Information Systems

Posted on September 14, 2023 by Cassie-Kay McQuinn

Ph.D. student Hannah Lehman and Dr. John Valasek of VSCL published the paper “Design, Selection, Evaluation of Reinforcement Learning Single Agents for Ground Target Tracking,” in the Journal of Aerospace Information Systems.

Previous approaches for small fixed-wing unmanned air systems that carry strapdown rather than gimbaled cameras achieved satisfactory ground object tracking performance using both standard and deep reinforcement learning algorithms. However, these approaches imposed significant restrictions and abstractions on the dynamics of the vehicle, such as constant airspeed and constant altitude, because the number of states and actions was necessarily limited, and extensive tuning was required to obtain good tracking performance. The expansion from four state-action degrees-of-freedom to 15 enabled the agent to exploit previous reward functions, which produced novel yet undesirable emergent behavior. This paper investigates the causes of, and various potential solutions to, undesirable emergent behavior in the ground target tracking problem. A combination of changes to the environment, reward structure, action space simplification, command rate, and controller implementation provides insight into obtaining stable tracking results. Consideration is given to reward structure selection to mitigate undesirable emergent behavior. Results presented in the paper are for a simulated environment of a single unmanned air system tracking a randomly moving single ground object and show that a soft actor-critic algorithm can produce feasible tracking trajectories without limiting the state space and action space, provided the environment is properly posed.
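The reward structures studied in the paper are not reproduced in this summary. Purely as an illustration of the kind of shaping choice involved, the sketch below contrasts a distance-only reward, which an agent with more control authority could in principle exploit by staying near the target while pointing a fixed camera elsewhere, with one that also penalizes losing the target from the strapdown camera's field of view; all weights, thresholds, and geometry are invented for this example.

    # Hypothetical reward shaping for strapdown-camera target tracking; not the rewards used in the paper.
    import numpy as np

    def distance_only_reward(uav_pos, target_pos):
        """Naive shaping: reward closeness only, regardless of whether the target is visible."""
        return -np.linalg.norm(np.asarray(uav_pos) - np.asarray(target_pos))

    def tracking_reward(uav_pos, target_pos, camera_axis, half_fov_deg=35.0, w_dist=1.0, w_fov=5.0):
        """Add a field-of-view term so the agent is only rewarded when the target stays visible."""
        uav_pos, target_pos = np.asarray(uav_pos, float), np.asarray(target_pos, float)
        los = target_pos - uav_pos
        los = los / (np.linalg.norm(los) + 1e-9)                       # unit line-of-sight vector
        axis = np.asarray(camera_axis, float)
        axis = axis / (np.linalg.norm(axis) + 1e-9)
        off_axis_deg = np.degrees(np.arccos(np.clip(np.dot(los, axis), -1.0, 1.0)))
        in_view = off_axis_deg <= half_fov_deg
        return -w_dist * np.linalg.norm(target_pos - uav_pos) + (0.0 if in_view else -w_fov)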

This publication is part of VSCL’s ongoing work in the area of Reinforcement Learning and Control. The early access version of the article can be viewed at https://arc.aiaa.org/journal/jais.

Filed Under: Control, Reinforcement Learning, Target Tracking
