
Machine Learning

Krpec and Valasek Publish “Vision-based Marker-less Landing of a UAS on Moving Ground Vehicle” in Journal of Aerospace Information Systems

Posted on May 5, 2024 by Cassie-Kay McQuinn

VSCL alumnus Blake Krpec, Dr. John Valasek, and Dr. Stephen Nogar of the DEVCOM Army Research Laboratory published the paper “Vision-based Marker-less Landing of a UAS on Moving Ground Vehicle” in the Journal of Aerospace Information Systems.

Current autonomous unmanned aerial systems (UAS) commonly use vision-based landing solutions that depend upon fiducial markers to localize a static or mobile landing target relative to the UAS. This paper develops and demonstrates an alternative to fiducial markers that combines neural network-based object detection with camera intrinsic properties to localize an unmanned ground vehicle (UGV) and enable autonomous landing. Implementing this visual approach is challenging given the limited compute power on board the UAS, but it is relevant for autonomous landings on targets for which affixing a fiducial marker a priori is not possible or not practical. The position estimate of the UGV is used to formulate a landing trajectory that is then input to the flight controller. The algorithms are tailored to low size, weight, and power constraints, as all compute and sensing components weigh less than 100 g. Landings were successfully demonstrated both in simulation and in experiment on a UGV traveling in a straight line and while turning. Simulated landings were successful at UGV speeds of up to 3.0 m/s, and experimental landings at speeds up to 1.0 m/s.
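As a minimal sketch of the general idea, not the paper’s implementation, the snippet below shows one common way a bounding-box detection and the camera intrinsic matrix can be combined to estimate the relative position of a ground target under a flat-ground, downward-facing-camera assumption. The intrinsic values, image size, and altitude are illustrative assumptions.

```python
# Minimal sketch (not the paper's implementation): estimating a ground
# vehicle's position relative to the UAS from a single object detection,
# using the pinhole camera model and a known altitude above a flat ground
# plane. All symbols (K, altitude_m, bbox_center_px) are illustrative.
import numpy as np

def ugv_position_from_detection(bbox_center_px, K, altitude_m):
    """Back-project the bounding-box center through the camera intrinsics
    and intersect the ray with the ground plane located altitude_m below
    a downward-facing camera. Returns (x, y, z) in the camera frame, meters."""
    u, v = bbox_center_px
    # Ray direction in normalized camera coordinates: K^-1 @ [u, v, 1]
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])
    # Scale the ray so its z-component equals the altitude (flat-ground assumption)
    scale = altitude_m / ray[2]
    return ray * scale

# Example with an assumed 640x480 camera, 1000 px focal length, a detection
# slightly right of and below the image center, and 10 m altitude.
K = np.array([[1000.0,    0.0, 320.0],
              [   0.0, 1000.0, 240.0],
              [   0.0,    0.0,   1.0]])
rel_pos = ugv_position_from_detection((400.0, 300.0), K, altitude_m=10.0)
print(rel_pos)  # ~[0.8, 0.6, 10.0] m offset from the camera, in the camera frame
```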

Filed Under: Control, Machine Learning, Publications

Texas A&M University Becomes Founding Partner of New NSF Center for Autonomous Air Mobility and Sensing

Posted on September 1, 2023 by Cassie-Kay McQuinn

Texas A&M University is a founding partner of the National Science Foundation (NSF) Center for Autonomous Air Mobility and Sensing (CAAMS), along with the University of Colorado Boulder (CU), Brigham Young University (BYU), the University of Michigan (UM), Penn State University (PSU), and Virginia Tech (VT). The center is organized under the NSF Industry-University Cooperative Research Centers (IUCRC) program. CAAMS brings together three primary partners: academia, industry, and government. Academic faculty collaborate with industry and government members to promote long-term, globally competitive research and innovation and to create solutions to the most critical challenges facing the autonomous systems industry. Dr. John Valasek serves as the Site Director for Texas A&M University. Texas A&M University faculty associated with CAAMS include Dr. Moble Benedict, Dr. Manoranjan Majji, Dr. Sivakumar Rathinam, and Dr. Swaroop Darbha.

In conjunction with the CASS Lab at Penn State, directed by Dr. Puneet Singla, VSCL will be working on the project Integration of System Theory with Machine Learning Tools for Data-Driven System Identification. The objective is to derive nonlinear dynamical models from high-fidelity flight simulations and flight experiments by employing a unique handshake between linear time-varying subspace methods and sparse approximation tools.
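As a rough illustration of the kind of sparse-approximation tools involved (not the project’s actual method), the sketch below identifies a sparse model of a toy two-state system from simulated data using sequentially thresholded least squares. The candidate library, threshold, and toy dynamics are assumptions made only for the example.

```python
# Illustrative sketch: sparse, data-driven identification of a toy dynamical
# system via sequentially thresholded least squares over a candidate library.
import numpy as np

def library(x):
    """Candidate features for a 2-state system: [1, x1, x2, x1^2, x1*x2, x2^2]."""
    x1, x2 = x[:, 0], x[:, 1]
    return np.column_stack([np.ones_like(x1), x1, x2, x1**2, x1 * x2, x2**2])

def sparse_fit(Theta, dXdt, threshold=0.05, iters=10):
    """Least squares followed by repeated thresholding of small coefficients."""
    Xi = np.linalg.lstsq(Theta, dXdt, rcond=None)[0]
    for _ in range(iters):
        small = np.abs(Xi) < threshold
        Xi[small] = 0.0
        for k in range(dXdt.shape[1]):            # refit surviving terms per state
            big = ~small[:, k]
            if big.any():
                Xi[big, k] = np.linalg.lstsq(Theta[:, big], dXdt[:, k], rcond=None)[0]
    return Xi

# Synthetic data from dx1/dt = -0.5*x1, dx2/dt = 0.8*x1 - 0.3*x2 (assumed toy system)
dt, n = 0.01, 5000
X = np.zeros((n, 2)); X[0] = [2.0, 1.0]
for i in range(n - 1):
    x1, x2 = X[i]
    X[i + 1] = X[i] + dt * np.array([-0.5 * x1, 0.8 * x1 - 0.3 * x2])
dXdt = np.gradient(X, dt, axis=0)
Xi = sparse_fit(library(X), dXdt)
print(Xi)  # the nonzero entries should recover roughly -0.5, 0.8, and -0.3
```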

Filed Under: Machine Learning, New Items, System Identification

VSCL Student Presents at Interactive Learning with Implicit Human Feedback Workshop at 2023 International Conference on Machine Learning (ICML)

Posted on July 18, 2023 by Cassie-Kay McQuinn

VSCL graduate student M.D. Sunbeam will present a workshop paper on 29 July at the 2023 International Conference on Machine Learning (ICML) in Honolulu, Hawaii.

Sunbeam will present the paper “Imitation Learning with Human Eye Gaze via Multi-Objective Prediction.” Approaches for teaching learning agents via human demonstrations have been widely studied and successfully applied to multiple domains. However, the majority of imitation learning work utilizes only behavioral information from the demonstrator, i.e., which actions were taken, and ignores other useful information. In particular, eye gaze information can give valuable insight into where the demonstrator is allocating visual attention, and holds the potential to improve agent performance and generalization. In this work, we propose Gaze Regularized Imitation Learning (GRIL), a novel context-aware imitation learning architecture that learns concurrently from both human demonstrations and eye gaze to solve tasks where visual attention provides important context.

We apply GRIL to a visual navigation task, in which an unmanned quadrotor is trained to search for and navigate to a target vehicle in a photorealistic simulated environment. We show that GRIL outperforms several state-of-the-art gaze-based imitation learning algorithms, simultaneously learns to predict human visual attention, and generalizes to scenarios not present in the training data. Supplemental videos can be found at https://sites.google.com/view/gaze-regularized-il/, and code will be made available.
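For intuition, the sketch below shows what a multi-objective, gaze-regularized imitation learning setup can look like: a shared visual encoder with an action head and a gaze head, trained with a behavior-cloning loss plus a gaze-prediction loss. This is an illustration based on the abstract, not the GRIL code; the network sizes, gaze representation, and loss weight are assumptions.

```python
# Minimal sketch (assumptions, not the GRIL release): a multi-objective
# imitation-learning loss that jointly penalizes action-prediction error and
# gaze-prediction error, so the policy is regularized by human visual attention.
import torch
import torch.nn as nn

class GazeRegularizedPolicy(nn.Module):
    """Shared visual encoder with two heads: one predicts the action,
    the other predicts a low-dimensional gaze target (normalized x, y)."""
    def __init__(self, feat_dim=128, action_dim=4, gaze_dim=2):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim), nn.ReLU(),
        )
        self.action_head = nn.Linear(feat_dim, action_dim)
        self.gaze_head = nn.Linear(feat_dim, gaze_dim)

    def forward(self, images):
        z = self.encoder(images)
        return self.action_head(z), self.gaze_head(z)

def gril_style_loss(pred_action, pred_gaze, demo_action, demo_gaze, gaze_weight=0.5):
    """Behavior-cloning term plus a gaze-regularization term; gaze_weight is a guess."""
    bc_loss = nn.functional.mse_loss(pred_action, demo_action)
    gaze_loss = nn.functional.mse_loss(pred_gaze, demo_gaze)
    return bc_loss + gaze_weight * gaze_loss

# One illustrative training step on a random batch of "demonstrations".
policy = GazeRegularizedPolicy()
opt = torch.optim.Adam(policy.parameters(), lr=1e-4)
images = torch.rand(8, 3, 64, 64)
demo_action, demo_gaze = torch.rand(8, 4), torch.rand(8, 2)
pred_action, pred_gaze = policy(images)
loss = gril_style_loss(pred_action, pred_gaze, demo_action, demo_gaze)
opt.zero_grad(); loss.backward(); opt.step()
```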

Filed Under: Machine Learning, Presentations, Publications
