
Texas A&M University College of Engineering

Target Tracking

Lehman and Valasek Publish “Design, Selection, Evaluation of Reinforcement Learning Single Agents for Ground Target Tracking” in the Journal of Aerospace Information Systems

Posted on September 14, 2023 by Cassie-Kay McQuinn

Ph.D. student Hannah Lehman and Dr. John Valasek of VSCL published the paper “Design, Selection, Evaluation of Reinforcement Learning Single Agents for Ground Target Tracking” in the Journal of Aerospace Information Systems.

Previous approaches for small fixed-wing unmanned air systems that carry strapdown rather than gimbaled cameras achieved satisfactory ground object tracking performance using both standard and deep reinforcement learning algorithms. However, these approaches imposed significant restrictions and abstractions on the dynamics of the vehicle, such as constant airspeed and constant altitude, because the number of states and actions was necessarily limited. Thus, extensive tuning was required to obtain good tracking performance. Expanding from four state-action degrees of freedom to 15 enabled the agent to exploit the previous reward functions, which produced novel yet undesirable emergent behavior. This paper investigates the causes of, and various potential solutions to, undesirable emergent behavior in the ground target tracking problem. A combination of changes to the environment, reward structure, action-space simplification, command rate, and controller implementation provides insight into obtaining stable tracking results. Consideration is given to reward structure selection to mitigate undesirable emergent behavior. Results, presented for a simulated environment in which a single unmanned air system tracks a single randomly moving ground object, show that a soft actor-critic algorithm can produce feasible tracking trajectories without limiting the state space and action space, provided the environment is properly posed.
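The failure mode described above, an agent exploiting the reward function once it has enough degrees of freedom, can be made concrete with a toy sketch. The Python below is not the paper’s simulation (the environment, function names, and weights are invented for illustration); it only shows the general idea of augmenting a distance-based tracking reward with a control-effort penalty so that aggressive maneuvering is no longer reward-free.

```python
import math

# Toy illustration only: a 2-D constant-speed kinematic vehicle and a fixed
# target at the origin. All names and weights here are invented for this
# sketch; the paper's actual environment and reward structure are richer.

def raw_reward(dist, turn_rate):
    # Pure distance reward: aggressive maneuvering costs nothing, which is
    # one way undesirable emergent behavior can go unpenalized.
    return -dist

def shaped_reward(dist, turn_rate, w_effort=50.0):
    # Adding a control-effort penalty makes aggressive turn commands
    # expensive, nudging the optimum toward smoother trajectories.
    return -dist - w_effort * turn_rate ** 2

def rollout(turn_rate, reward_fn, steps=300, speed=1.0, dt=0.1):
    """Accumulated reward along a constant-turn-rate trajectory that
    starts 10 units east of the target, initially heading north."""
    x, y, heading = 10.0, 0.0, math.pi / 2
    total = 0.0
    for _ in range(steps):
        heading += turn_rate * dt
        x += speed * math.cos(heading) * dt
        y += speed * math.sin(heading) * dt
        total += reward_fn(math.hypot(x, y), turn_rate) * dt
    return total
```

Sweeping `turn_rate` under each reward function shows how the shaped variant down-weights high-rate commands; in a full agent the same idea appears as additional terms in the reward structure rather than a hand-tuned sweep.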

This publication is part of VSCL’s ongoing work in the area of Reinforcement Learning and Control. The early access version of the article can be viewed at https://arc.aiaa.org/journal/jais

Filed Under: Control, Reinforcement Learning, Target Tracking

VSCL’s Reinforcement Learning Control Law for Ground Target Tracking Featured in January’s Aerospace America

Posted on January 28, 2019 by Garrett Jares

The January 2019 edition of Aerospace America’s annual Year in Review section for Information Systems featured the flight demonstration of a machine learning algorithm developed by a team of VSCL students and faculty. The article discussed the progression of the project from its first demonstration in December 2017 to more recent demonstrations. The algorithm is based on Q-Learning and provides a control policy for the vehicle’s orientation to autonomously keep the target fixed in the image frame. The algorithm was tested against stationary and randomly moving targets in both structured and unstructured environments.
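For readers unfamiliar with Q-Learning, the update rule that this class of algorithm is built on can be sketched in a few lines. The example below is a deliberately tiny, hypothetical stand-in (three coarse image-frame bins, three yaw commands), not VSCL’s flight code; it only demonstrates the tabular update that learns a policy for re-centering a target in the frame.

```python
import random

# Hypothetical toy sketch of tabular Q-learning. States are coarse bins for
# where the target sits in the image frame (left / center / right); actions
# are yaw commands. The flight-tested algorithm uses a far richer state and
# action representation; only the update rule below is the textbook one:
#   Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))

ACTIONS = [-1, 0, 1]   # yaw left, hold, yaw right (illustrative)
STATES = [0, 1, 2]     # target in left / center / right of frame

def step(state, action):
    """Toy dynamics: the yaw command shifts the target's frame bin;
    reward is +1 when the target is centered, else -1."""
    new_state = min(2, max(0, state + action))
    reward = 1.0 if new_state == 1 else -1.0
    return new_state, reward

def train(episodes=500, alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
    for _ in range(episodes):
        s = rng.choice(STATES)
        for _ in range(10):
            # Epsilon-greedy action selection for exploration
            if rng.random() < eps:
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda act: q[(s, act)])
            s2, r = step(s, a)
            # Q-learning temporal-difference update
            best_next = max(q[(s2, act)] for act in ACTIONS)
            q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
            s = s2
    return q

q = train()
# Greedy policy: yaw right from the left bin, yaw left from the right bin,
# hold when the target is centered.
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in STATES}
```

The learned greedy policy steers the target back toward the center bin from either side, which is the toy analogue of keeping a ground target fixed in the camera frame.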

The article appears in Aerospace America’s January 2019 issue.

Dr. John Valasek, Vinicius Goecks, Hannah Lehman, Zeke Bowden, and Blake Krpec.

Filed Under: New Items, Target Tracking

Intelligent Motion Video Target Tracking Flight Testing Presented by VSCL at 2018 AIAA Infotech@Aerospace Conference

Posted on January 15, 2018 by Charles Noren

VSCL Undergraduate Research Assistant Chase Noren ’18 presented flight test results of an autonomous tracking intelligent agent at the AIAA Infotech@Aerospace Conference on 11 January, part of the 2018 AIAA SciTech Forum. The goal of this continuing project is to autonomously track fixed and moving user-selected targets with a non-gimbaled image-capturing device mounted on a small/micro fixed-wing UAS. The autonomous intelligent agent acts independently of human operators, and the developed algorithm learns and operates without prior information about road networks or terrain features. The paper documenting this work is “Flight Testing of Intelligent Motion Video Guidance for Unmanned Air System Ground Target Surveillance,” AIAA-2018-1632.

Filed Under: New Items, Target Tracking
