Cooperation between autonomous air and ground assets is a heavily researched topic and enables capabilities that neither asset could accomplish alone. This cooperation often requires an unmanned air system (UAS) to deploy from and return to the ground agent after completing a task. This work aims to develop vision-based perception methods that enable a UAS to autonomously land on a moving ground target when no communication with the target is available. To remove the need for a ground station computer to provide additional computational power, all necessary computation is performed on board the UAS. The UAS must detect the ground target within an image, estimate the target's position relative to the UAS, and compute the controls necessary for a successful landing.
Image: a UAS detecting a ground target in a simulation environment.
Key technical areas associated with this work are:
1. Using computationally lightweight computer vision methods so that detection and localization run at an update rate high enough to be usable by the landing controller.
2. Employing computer vision methods to detect the ground target in the image and then estimate the target's position relative to the UAS (see the first sketch after this list).
3. Designing a landing controller that serves as an outer loop around the existing flight controller, translating the ground target's position into commands that the existing controller can accept (see the second sketch after this list).
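As an illustration of the detection and relative-localization steps, the sketch below assumes the ground target carries an ArUco fiducial marker of known size and that the camera intrinsics are already calibrated. The marker dictionary, marker size, and intrinsics are placeholder values rather than values from this project, and the OpenCV 4.7+ ArUco API is used; the actual detection method in this work may differ.

```python
"""Minimal sketch: fiducial-marker detection and relative position estimation.

Assumptions (not from this project): an ArUco marker of known side length on
the ground target, calibrated camera intrinsics, OpenCV >= 4.7.
"""
import cv2
import numpy as np

MARKER_SIZE = 0.5  # marker side length in meters (assumed)
camera_matrix = np.array([[600.0, 0.0, 320.0],
                          [0.0, 600.0, 240.0],
                          [0.0, 0.0, 1.0]])  # placeholder intrinsics
dist_coeffs = np.zeros(5)                    # assume negligible lens distortion

# 3D marker corners in the marker frame (z = 0 plane), matching the corner
# order returned by the ArUco detector (top-left, top-right, bottom-right, bottom-left).
half = MARKER_SIZE / 2.0
object_points = np.array([[-half,  half, 0.0],
                          [ half,  half, 0.0],
                          [ half, -half, 0.0],
                          [-half, -half, 0.0]], dtype=np.float32)

detector = cv2.aruco.ArucoDetector(
    cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50),
    cv2.aruco.DetectorParameters())

def locate_target(frame_bgr):
    """Return the target position (x, y, z) in the camera frame, or None if not detected."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    corners, ids, _ = detector.detectMarkers(gray)
    if ids is None or len(ids) == 0:
        return None
    image_points = corners[0].reshape(4, 2).astype(np.float32)
    ok, rvec, tvec = cv2.solvePnP(object_points, image_points,
                                  camera_matrix, dist_coeffs)
    return tvec.ravel() if ok else None
```

A loop calling `locate_target` on each camera frame would feed the resulting position estimate to the landing controller; keeping the detector this simple is one way to meet the onboard update-rate requirement noted in item 1.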
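As a sketch of how the outer loop could map the estimated target position into commands for the existing flight controller, the snippet below uses a simple saturated proportional law that produces body-frame velocity setpoints. The gains, limits, frame convention (z positive down), centering threshold, and the assumption of a velocity-setpoint interface on the existing controller are illustrative only, not the controller developed in this work.

```python
"""Minimal sketch: outer-loop landing controller on top of an existing
velocity-setpoint interface (assumed, e.g., an autopilot offboard mode).
All gains and limits below are illustrative placeholders.
"""
import numpy as np

KP_XY = 0.8        # proportional gain on horizontal error (1/s), assumed
KP_Z = 0.4         # proportional gain on vertical error (1/s), assumed
MAX_SPEED = 2.0    # saturation on commanded speed (m/s), assumed

def landing_velocity_command(target_pos_body, descent_bias=0.3):
    """Map the target position to a body-frame velocity setpoint.

    target_pos_body: np.array([x, y, z]) of the target relative to the UAS,
    with x forward, y right, z down (assumed convention).
    Returns a saturated velocity command (vx, vy, vz) in the body frame.
    """
    vx = KP_XY * target_pos_body[0]
    vy = KP_XY * target_pos_body[1]
    # Only add a steady descent once roughly centered over the target (0.5 m radius, assumed).
    centered = np.hypot(target_pos_body[0], target_pos_body[1]) < 0.5
    vz = KP_Z * target_pos_body[2] + (descent_bias if centered else 0.0)
    return np.clip(np.array([vx, vy, vz]), -MAX_SPEED, MAX_SPEED)
```

The returned setpoint would then be handed to the existing flight controller, which closes the inner attitude and velocity loops; the outer loop itself never commands the actuators directly.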
This work is funded through an ORAU Journeyman Fellowship from the Army Research Lab in Aberdeen, Maryland.
Working with me on this project is:
Graduate Student:
- Blake Krpec, M.S. AERO