
Texas A&M University College of Engineering

Control

Krpec and Valasek Publish “Vision-based Marker-less Landing of a UAS on Moving Ground Vehicle” in Journal of Aerospace Information Systems

Posted on May 5, 2024 by Cassie-Kay McQuinn

VSCL alumnus Blake Krpec, Dr. John Valasek, and Dr. Stephen Nogar of the DEVCOM Army Research Laboratory published the paper “Vision-based Marker-less Landing of a UAS on Moving Ground Vehicle” in the Journal of Aerospace Information Systems.

Current autonomous unmanned aerial systems (UAS) commonly use vision-based landing solutions that depend upon fiducial markers to localize a static or mobile landing target relative to the UAS. This paper develops and demonstrates an alternative to fiducial markers that combines neural network-based object detection with camera intrinsic properties to localize an unmanned ground vehicle (UGV) and enable autonomous landing. Implementing this visual approach is challenging given the limited compute power on board the UAS, but it is relevant for autonomous landings on targets to which affixing a fiducial marker a priori is not possible or not practical. The position estimate of the UGV is used to formulate a landing trajectory that is then input to the flight controller. Algorithms are tailored toward low size, weight, and power constraints, as all compute and sensing components weigh less than 100 g. Landings were successfully demonstrated both in simulation and in experiment on a UGV traveling in a straight line and while turning. Simulated landings were successful at UGV speeds of up to 3.0 m/s, and experimental landings at speeds up to 1.0 m/s.
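The bounding-box-plus-intrinsics localization idea at the heart of the approach can be illustrated with a short sketch. This is a minimal example, not the paper's implementation: it assumes a downward-facing pinhole camera, a flat ground plane, and a known altitude above ground, and the function and variable names are hypothetical.

```python
import numpy as np

def localize_ugv_ground_plane(bbox_center_px, K, altitude_m):
    """Back-project a detected bounding-box center onto the ground plane.

    Illustrative sketch only: assumes a downward-facing pinhole camera,
    a flat ground plane, and a known altitude above ground (e.g. from a
    rangefinder or barometer). The paper's full pipeline (detector,
    filtering, trajectory generation) is not reproduced here.
    """
    u, v = bbox_center_px
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]

    # Normalized image coordinates of the detection center.
    x_n = (u - cx) / fx
    y_n = (v - cy) / fy

    # The ray through the detection intersects the ground plane at a depth
    # equal to the altitude when the camera looks straight down.
    x_cam = x_n * altitude_m
    y_cam = y_n * altitude_m
    return np.array([x_cam, y_cam, altitude_m])  # camera-frame offset to the UGV

# Example: detection slightly right of and below image center at 10 m altitude.
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
print(localize_ugv_ground_plane((350.0, 250.0), K, 10.0))
```

In practice the camera is rarely pointed exactly straight down, so the detection ray would first be rotated into the vehicle or inertial frame using the UAS attitude before intersecting the ground plane.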


Filed Under: Control, Machine Learning, Publications

McQuinn presents at IEEE Aerospace Conference in Big Sky, Montana

Posted on March 8, 2024 by Cassie-Kay McQuinn

VSCL graduate student Cassie-Kay McQuinn presented “Run Time Assurance for Simultaneous Constraint Satisfaction During Spacecraft Attitude Maneuvering” at the 2024 IEEE Aerospace Conference this month. This work was completed as part of her internship with AFRL in summer 2023.

A fundamental capability for On-orbit Servicing, Assembly, and Manufacturing (OSAM) is inspection of the vehicle to be serviced, or of the structure being assembled. The focus of this research is developing Active-Set Invariance Filtering (ASIF) Run Time Assurance (RTA) filters that monitor system behavior and the output of the primary controller to enforce attitude requirements pertinent to autonomous space operations. Slack variables are introduced into the ASIF controller to prioritize safety constraints when a solution satisfying all safety constraints is infeasible. Monte Carlo simulation results, as well as plots of example cases, are shown and evaluated for a three-degree-of-freedom spacecraft with reaction wheel attitude control. A preprint of the paper is available at https://arxiv.org/abs/2402.14723.
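The slack-variable idea can be illustrated with a small quadratic-program sketch in the spirit of ASIF filters. This is a simplified example rather than the paper's controller: the safety constraints are assumed to be affine in the control (as produced by control barrier functions evaluated at the current state), and the function name, weights, and limits shown are hypothetical.

```python
import numpy as np
import cvxpy as cp

def asif_filter(u_des, constraint_rows, constraint_bounds, priorities,
                u_limit=1.0, slack_weight=1e3):
    """Minimal ASIF-style run time assurance filter with slack variables.

    Illustrative sketch only: each safety constraint is assumed affine in
    the control, row @ u >= bound. Slack variables relax the constraints
    when they cannot all be met, with higher-priority constraints penalized
    more heavily, so the filter returns the control closest to the primary
    controller's command that best respects safety.
    """
    m = len(u_des)
    n = len(constraint_bounds)
    u = cp.Variable(m)
    s = cp.Variable(n, nonneg=True)

    # Stay close to the desired control; penalize slack by priority.
    weights = slack_weight * np.asarray(priorities, dtype=float)
    objective = cp.Minimize(cp.sum_squares(u - u_des) + weights @ s)

    constraints = [constraint_rows @ u + s >= constraint_bounds,
                   cp.abs(u) <= u_limit]  # actuator (reaction wheel) limits
    cp.Problem(objective, constraints).solve()
    return u.value

# Example: 3-axis torque command filtered against two conflicting constraints
# (u_x >= 0.3 and u_x <= -0.3 cannot both hold).
u_des = np.array([0.5, -0.2, 0.1])
A = np.array([[1.0, 0.0, 0.0],
              [-1.0, 0.0, 0.0]])
b = np.array([0.3, 0.3])
print(asif_filter(u_des, A, b, priorities=[10.0, 1.0]))
```

In the example, the two constraints are jointly infeasible, so the weighted slack terms let the filter violate the lower-priority constraint while meeting the higher-priority one and staying as close as possible to the desired control.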

Filed Under: Control, Presentations, Publications

Lehman and Valasek Publish “Design, Selection, Evaluation of Reinforcement Learning Single Agents for Ground Target Tracking,” in Journal of Aerospace Information Systems

Posted on September 14, 2023 by Cassie-Kay McQuinn

Ph.D. student Hannah Lehman and Dr. John Valasek of VSCL published the paper “Design, Selection, Evaluation of Reinforcement Learning Single Agents for Ground Target Tracking” in the Journal of Aerospace Information Systems.

Previous approaches for small fixed-wing unmanned air systems that carry strapdown rather than gimbaled cameras achieved satisfactory ground object tracking performance using both standard and deep reinforcement learning algorithms. However, these approaches imposed significant restrictions and abstractions on the vehicle dynamics, such as constant airspeed and constant altitude, because the number of states and actions was necessarily limited; extensive tuning was therefore required to obtain good tracking performance. Expanding from four state-action degrees of freedom to 15 enabled the agent to exploit the previous reward functions, which produced novel yet undesirable emergent behavior. This paper investigates the causes of, and various potential solutions to, undesirable emergent behavior in the ground target tracking problem. A combination of changes to the environment, reward structure, action space simplification, command rate, and controller implementation provides insight into obtaining stable tracking results. Consideration is given to reward structure selection to mitigate undesirable emergent behavior. Results presented in the paper are for a simulated environment of a single unmanned air system tracking a randomly moving single ground object, and show that a soft actor-critic algorithm can produce feasible tracking trajectories without limiting the state space and action space, provided the environment is properly posed.
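To make the setup concrete, the sketch below shows how a soft actor-critic agent might be trained on a toy tracking environment using the gymnasium and stable-baselines3 libraries. This is a hypothetical, heavily simplified stand-in: the environment, reward, and dimensions are illustrative only and do not reproduce the paper's fixed-wing dynamics, 15-dimensional state-action space, or reward structure.

```python
import numpy as np
import gymnasium as gym
from gymnasium import spaces
from stable_baselines3 import SAC

class TargetTrackingEnv(gym.Env):
    """Toy 2-D kinematic stand-in for the UAS/ground-object tracking task.

    Hypothetical environment for illustration: the agent commands a planar
    velocity and is rewarded for staying near a randomly drifting ground
    object. The paper's vehicle dynamics are not modeled here.
    """
    def __init__(self, dt=0.1):
        super().__init__()
        self.dt = dt
        self.action_space = spaces.Box(-1.0, 1.0, shape=(2,), dtype=np.float32)
        self.observation_space = spaces.Box(-np.inf, np.inf, shape=(4,), dtype=np.float32)

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.uav = self.np_random.uniform(-5.0, 5.0, size=2)
        self.target = self.np_random.uniform(-5.0, 5.0, size=2)
        self.steps = 0
        return self._obs(), {}

    def step(self, action):
        self.uav += self.dt * 5.0 * np.asarray(action, dtype=np.float64)
        self.target += self.dt * self.np_random.uniform(-1.0, 1.0, size=2)  # random drift
        self.steps += 1
        reward = -float(np.linalg.norm(self.uav - self.target))  # stay close to the target
        truncated = self.steps >= 200
        return self._obs(), reward, False, truncated, {}

    def _obs(self):
        return np.concatenate([self.uav, self.target]).astype(np.float32)

# Train a soft actor-critic agent on the toy environment.
model = SAC("MlpPolicy", TargetTrackingEnv(), verbose=0)
model.learn(total_timesteps=10_000)
```

The continuous Box action space is what makes soft actor-critic applicable here; richer state and action spaces, as studied in the paper, change the environment and reward definitions but not this basic training loop.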

This publication is part of VSCL’s ongoing work in the area of Reinforcement Learning and Control.  The early access version of the article can be viewed at https://arc.aiaa.org/journal/jais

Filed Under: Control, Reinforcement Learning, Target Tracking
