
Texas A&M University College of Engineering

Presentations

Bera to Present PODNet Paper at AAAI-MAKE 2020 on March 23

Posted on February 25, 2020 by Garrett Jares

VSCL Graduate Research Assistant Ritwik Bera will present a paper titled “PODNet: A Neural Network for Discovery of Plannable Options” at AAAI-MAKE: Combining Machine Learning and Knowledge Engineering in Practice, an AAAI Spring Symposium, on March 23, 2020. Co-authored with researchers from the US Army Research Laboratory’s Human Research and Engineering Directorate, this continuing project investigates how to segment an unstructured set of demonstrated trajectories for option discovery. This enables learning from demonstration to perform multiple tasks and to plan high-level trajectories based on the discovered option labels. The method is composed of several constituent networks that not only segment demonstrated trajectories into options but also concurrently train an option dynamics model; that model can be used for downstream planning tasks and for training on simulated rollouts, minimizing interaction with the environment while the policy is maturing. The paper documenting this work is currently available at https://arxiv.org/abs/1911.00171.
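The segmentation step described above can be sketched in plain Python. This is a hypothetical illustration, not PODNet itself: it assumes an option-inference network has already produced per-timestep probability vectors over K options, and simply recovers option labels and segment boundaries from them.

```python
def segment_into_options(option_probs):
    """Assign each timestep its most likely option and find segment boundaries.

    option_probs: list of per-timestep probability vectors over K options
    (e.g., the output of a hypothetical option-inference network).
    Returns (labels, segments), where each segment is (start, end, option).
    """
    labels = [max(range(len(p)), key=lambda k: p[k]) for p in option_probs]
    segments, start = [], 0
    for t in range(1, len(labels) + 1):
        # Close a segment when the option label changes or the trajectory ends.
        if t == len(labels) or labels[t] != labels[start]:
            segments.append((start, t, labels[start]))
            start = t
    return labels, segments

# Example: a 6-step trajectory whose inferred option switches at t = 3.
probs = [[0.9, 0.1]] * 3 + [[0.2, 0.8]] * 3
labels, segments = segment_into_options(probs)
# labels   -> [0, 0, 0, 1, 1, 1]
# segments -> [(0, 3, 0), (3, 6, 1)]
```

In the actual paper the option labels, the option-conditioned policy, and the option dynamics model are trained jointly rather than inferred in a post-hoc pass like this.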

Filed Under: New Items, Presentations

Goecks to Present Cycle-of-Learning Paper at AAMAS 2020 on May 11

Posted on February 25, 2020 by Garrett Jares

VSCL Graduate Research Assistant Vinicius Goecks will present a paper titled “Integrating Behavior Cloning and Reinforcement Learning for Improved Performance in Dense and Sparse Reward Environments” at the International Conference on Autonomous Agents and Multi-Agent Systems (AAMAS) on May 11, 2020. Co-authored with researchers from the US Army Research Laboratory’s Human Research and Engineering Directorate, this continuing project investigates how to efficiently transition and update policies, trained initially from demonstrations, using off-policy actor-critic reinforcement learning. The method outperforms state-of-the-art techniques for combining behavior cloning and reinforcement learning in both dense and sparse reward scenarios. Results also suggest that directly including the behavior cloning loss on demonstration data helps ensure stable learning and ground future policy updates.
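The core idea of keeping a behavior-cloning term in the actor objective while the actor-critic update takes over can be sketched as a weighted sum. The function name and the weight `lam` below are hypothetical stand-ins; the paper defines the actual loss and its schedule.

```python
def combined_actor_loss(actor_critic_loss, bc_loss, lam=0.5):
    """Weighted sum of the actor-critic loss and the behavior-cloning loss
    computed on demonstration data. The (hypothetical) weight lam trades off
    exploration-driven updates against staying anchored to the demonstrations.
    """
    return actor_critic_loss + lam * bc_loss

# Example with scalar stand-ins for the two loss terms:
loss = combined_actor_loss(1.2, 0.4, lam=0.5)  # -> 1.4
```

The point of the extra term is the one the post describes: gradients from the demonstration data keep policy updates grounded even when the environment reward is sparse.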

The paper documenting this work, “Integrating Behavior Cloning and Reinforcement Learning for Improved Performance in Dense and Sparse Reward Environments,” is available in the official AAMAS 2020 proceedings, together with supplemental material detailing the training hyperparameters.

A summary video of the proposed method is also available, along with the project page that accompanied the paper submission.

Filed Under: New Items, Presentations

