
Vehicle Systems & Control Laboratory

Texas A&M University College of Engineering

Research

List of Research Grants and Awards:

Intelligent Motion Video Target Tracking

Intelligent Motion Video Algorithms for Unmanned Air Systems, Phase IV

Raytheon Company, Intelligence and Information Systems
1 January – 31 December 2013
Co-P.I. Dr. James D. Turner
Total award $250,000

This project consists of applied research that creates a pathway for basic academic research at Texas A&M University to transition into larger Raytheon Corporate Research and Development efforts for operational systems.

TECHNICAL OBJECTIVES

  1. Demonstrate the utility of motion-based video algorithms developed with the Reinforcement Learning / Approximate Dynamic Programming methodology in Phases I–III.
  2. Develop and demonstrate a Reinforcement Learning / Approximate Dynamic Programming methodology for UAS Autonomous Soaring.
  3. Conceive novel platform positioning algorithms in support of advanced UAS platforms.
  4. Refine and demonstrate video processing algorithms with the Land, Air, and Space Robotics Laboratory (LASR) at Texas A&M University.

Validation and verification flight testing will be conducted using the three Pegasus research UAS owned and operated by the Vehicle Systems & Control Laboratory.

Working with me on this program are:

  • Anshu Siddarth, Postdoctoral Research Associate
  • Kenton Kirkpatrick, Postdoctoral Research Associate
  • Dipanjan Saha, Ph.D. student
  • Jim Henrickson, M.S. student
  • Tim Woodbury, M.S. student
  • Josh Harris, B.S. student
  • Candace Hernandez, B.S. student
  • Alejandro Azocar, B.S. student


Intelligent Motion Video Algorithms for Unmanned Air Systems, Phase III

Raytheon Company, Intelligence and Information Systems
1 December 2011 – 31 December 2012
Total award $200,000

This project is the advanced development and testing phase for algorithms developed during the Phase I and II efforts (described below).

TECHNICAL OBJECTIVES

  1. HARDWARE INTEGRATION: Integrate the sensor and experimental controller, modify the flight controller for off-board control, and validate the integrated sensor and flight computer with the Pegasus UAS via hardware-in-the-loop simulation. Validate the experimental controller in flight test.
  2. ALGORITHM DEVELOPMENT: Perform additional learning for more complicated target paths and flight test the reinforcement learning controller.
  3. GROUND STATION: Extend the ground station to transmit experimental controller commands back to the vehicle. Ground test to ensure proper operation under failure scenarios in the sensor, telemetry links, ground station PC, etc. (a hypothetical sketch of such an uplink follows this list).
  4. FLIGHT VEHICLES: Build three additional Pegasus UAS vehicles.
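
To make the link-failure requirement in objective 3 concrete, below is a hypothetical sketch of a ground-station uplink loop that forwards experimental controller commands to the vehicle and reverts to a neutral failsafe command when no acknowledgment arrives. The address, message framing, rates, timeout, and failsafe values are all illustrative assumptions, not the actual ground station protocol.

    import socket
    import struct
    import time

    VEHICLE_ADDR = ("192.168.1.50", 14550)  # hypothetical telemetry endpoint
    LINK_TIMEOUT_S = 1.0                    # assumed acknowledgment deadline
    FAILSAFE_CMD = (0.0, 0.0)               # assumed neutral roll/pitch command

    def send_command(sock, roll_cmd, pitch_cmd):
        """Frame and send one experimental-controller command (toy format)."""
        sock.sendto(struct.pack("!2f", roll_cmd, pitch_cmd), VEHICLE_ADDR)

    def uplink_loop(get_next_command):
        """Send commands at 10 Hz; revert to failsafe when the link degrades."""
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.settimeout(LINK_TIMEOUT_S)
        while True:
            send_command(sock, *get_next_command())
            try:
                sock.recvfrom(16)  # wait for a short acknowledgment
            except socket.timeout:
                # Failure scenario: telemetry link lost; command the failsafe
                # and keep retrying rather than leaving a stale command active.
                send_command(sock, *FAILSAFE_CMD)
            time.sleep(0.1)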

Working with me on this program are:

  • Kenton Kirkpatrick, Ph.D. student
  • Jim May, M.S. student
  • Drew Beckett, M.S. student
  • Grant Atkinson, M.S. student
  • Jim Henrickson, M.S. student
  • Tim Woodbury, M.S. student
  • Nick Oliviero, B.S. student
  • Josh Harris, B.S. student

Intelligent Motion Video Algorithms for Unmanned Air Systems, Phase II

Raytheon Company, Intelligence and Information Systems
1 June 2011 – 31 August 2011
Total award $45,000

This project will conduct a realistic outdoor flight test demonstration of the autonomous target tracking algorithm developed in the Phase I effort (described below).

The flight vehicle will be the Pegasus fixed-wing Unmanned Air System (UAS) designed, built, and developed by the Vehicle Systems & Control Laboratory. Pegasus has a maximum takeoff weight of 60 lbs, a payload capacity of 20 lbs, and one hour of flight endurance.

All flights will be conducted at the runway complex at the Flight Mechanics Laboratory, Texas A&M University Riverside Campus.

Working with me on this program are Graduate Research Assistants:

  • Kenton Kirkpatrick, Ph.D. student
  • Jim May, M.S. student
  • Drew Beckett, M.S. student

Intelligent Motion Video Algorithms for Unmanned Air Systems, Phase I

Raytheon Company, Intelligence and Information Systems
1 August 2010 – 31 January 2011
Total award $45,000

Advances in unmanned flight have led to Unmanned Air Systems (UASs) capable of carrying state-of-the-art video capture systems for surveillance and tracking. A UAS can fly through a target area with a mounted camera while human operators control both the aircraft and the camera to survey objects deemed targets. These systems have worked well under human control, but having them operate autonomously is more challenging.

One way to introduce autonomy to this problem is to determine a control policy that flies the UAS autonomously along a given trajectory while a human controls the camera. Another is the opposite: fly the UAS manually while the camera gimbals autonomously to capture and track identified targets. Both methods have been explored and have merit, but operating the UAS and the camera autonomously at the same time could provide greater flight and tracking efficiency. A system that controls both the UAS and the camera to keep a selected target visible in the camera frame would free the human supervisor to focus on selecting viable targets and analyzing the images received.

The biggest challenge stems from the need to determine an optimal control policy for keeping the target in the middle of the image. Conventional control techniques require choosing an appropriate cost function and then finding the weights that make the control optimal. Although computing the optimal control for a given cost function is often straightforward, determining the cost function that best describes the problem is not. For this research, Reinforcement Learning (RL) is used to determine the optimal control policy that both gimbals the camera and steers the UAS to provide target tracking.

The specific RL algorithm used is Q-Learning with Adaptive Action Grid (AAG), developed by Lampton and Valasek to provide greater accuracy in reaching the goal state (i.e., the target) while reducing the size of the state-space to be considered. This dramatically decreases the total number of states in the system, making learning times feasible and storage requirements tractable. The objective of the approach is to bring any target located in an image captured by the camera to the center of the image using the AAG-learned control policy. The learning agent initially determines offline how to control the UAS and camera to move a target from any point in the image to the center and hold it there. A feature of this approach is that the agent continues to learn during actual operation, refining and updating the policy learned offline.
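
To illustrate the mechanics described above, here is a minimal one-dimensional sketch of Q-learning with an adaptive action grid for centering a target in the image. The image size, two-level action grid, reward, and learning parameters are illustrative assumptions, not the Lampton and Valasek implementation.

    import random
    from collections import defaultdict

    IMG_HALF = 64        # half-width of a hypothetical image, in pixels
    COARSE, FINE = 8, 1  # coarse/fine command steps (pixels of apparent motion)
    ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1  # assumed learning parameters

    Q = defaultdict(float)  # Q[(state, action)] -> learned value

    def actions(x):
        """Adaptive action grid: coarse steps far from center, fine steps near it."""
        step = COARSE if abs(x) >= COARSE else FINE
        return (-step, +step)

    def reward(x):
        """Zero at the image center, increasingly negative with pixel offset."""
        return -abs(x) / IMG_HALF

    def step(x, a):
        """Toy model: one command shifts the target's apparent position by a."""
        return max(-IMG_HALF, min(IMG_HALF, x + a))

    def train(episodes=2000, horizon=200):
        for _ in range(episodes):
            x = random.randint(-IMG_HALF, IMG_HALF)  # target starts anywhere
            for _ in range(horizon):
                acts = actions(x)
                # Epsilon-greedy selection over the current (adaptive) action set.
                if random.random() < EPS:
                    a = random.choice(acts)
                else:
                    a = max(acts, key=lambda u: Q[(x, u)])
                x2 = step(x, a)
                best_next = max(Q[(x2, u)] for u in actions(x2))
                # Standard one-step Q-learning update.
                Q[(x, a)] += ALPHA * (reward(x2) + GAMMA * best_next - Q[(x, a)])
                x = x2
                if x == 0:  # goal state: target centered
                    break

    train()

Under these assumptions, the online refinement described above would amount to simply continuing these updates during operation, with the offline-learned Q-table as the starting point.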

Working with me on this program are Graduate Research Assistants:

  • Kenton Kirkpatrick, Ph.D. student
  • Jim May, M.S. student
