Army research leads to more effective training model for robots

Multi-domain operations, the Army’s future operating concept, requires autonomous agents with learning components to operate alongside the warfighter. New Army research reduces the unpredictability of current reinforcement learning training policies, making them more practically applicable to physical systems, especially ground robots.

These learning components will permit autonomous agents to reason and adapt to changing battlefield conditions, said Army researcher Dr. Alec Koppel from the U.S. Army Combat Capabilities Development Command, now known as DEVCOM, Army Research Laboratory.

The underlying adaptation and re-planning mechanism consists of reinforcement learning-based policies. Making these policies efficiently obtainable is critical to making the MDO operating concept a reality, he said.

According to Koppel, policy gradient methods in reinforcement learning are the foundation for scalable algorithms for continuous spaces, but existing techniques cannot incorporate broader decision-making goals such as risk sensitivity, safety constraints, exploration and divergence to a prior.
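The standard policy gradient setup Koppel refers to optimizes the expected cumulative return. A minimal REINFORCE-style sketch on a small, hypothetical random MDP (illustrative only, not the Army team's implementation) shows the baseline these new methods generalize:

```python
# Minimal REINFORCE sketch: softmax policy, Monte Carlo returns,
# gradient ascent on the cumulative return. The MDP here is a
# hypothetical random one, purely for illustration.
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions, gamma = 4, 2, 0.95

# Hypothetical MDP: P[s, a] is a distribution over next states.
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))
R = rng.uniform(0, 1, size=(n_states, n_actions))

theta = np.zeros((n_states, n_actions))  # softmax policy parameters

def policy(s):
    z = np.exp(theta[s] - theta[s].max())
    return z / z.sum()

def rollout(T=50):
    s, traj = 0, []
    for _ in range(T):
        a = rng.choice(n_actions, p=policy(s))
        traj.append((s, a, R[s, a]))
        s = rng.choice(n_states, p=P[s, a])
    return traj

for _ in range(200):
    traj = rollout()
    # Discounted returns-to-go, computed backwards.
    G, returns = 0.0, []
    for (_, _, r) in reversed(traj):
        G = r + gamma * G
        returns.append(G)
    returns.reverse()
    # Softmax score function: grad log pi(a|s) = e_a - pi(.|s).
    for (s, a, _), G_t in zip(traj, returns):
        grad_logpi = -policy(s)
        grad_logpi[a] += 1.0
        theta[s] += 0.01 * G_t * grad_logpi
```

The high sample demand mentioned in the article shows up here directly: every gradient step consumes a full fresh rollout, and the update's variance grows with the return's volatility.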

Designing autonomous behaviors when the relationship between dynamics and goals is complex may be addressed with reinforcement learning, which has recently gained attention for solving previously intractable tasks such as the strategy games Go and chess, and video games such as Atari and StarCraft II, Koppel said.

Prevailing practice, unfortunately, demands astronomical sample complexity, such as thousands of years of simulated gameplay, he said. This sample complexity renders many common training mechanisms inapplicable to the data-starved settings required by the MDO context for the Next-Generation Combat Vehicle, or NGCV.

“To facilitate reinforcement learning for MDO and NGCV, training mechanisms must improve sample efficiency and reliability in continuous spaces,” Koppel said. “Through the generalization of existing policy search schemes to general utilities, we take a step towards breaking existing sample efficiency barriers of prevailing practice in reinforcement learning.”


Koppel and his research team developed new policy search schemes for general utilities and established their sample complexity. They observed that the resulting policy search schemes reduce the volatility of reward accumulation, yield efficient exploration of unknown domains and provide a mechanism for incorporating prior experience.

“This research contributes an augmentation of the classical Policy Gradient Theorem in reinforcement learning,” Koppel said. “It presents new policy search schemes for general utilities, whose sample complexity is also established. These innovations are impactful to the U.S. Army through their enabling of reinforcement learning objectives beyond the standard cumulative return, such as risk sensitivity, safety constraints, exploration and divergence to a prior.”

Notably, in the context of ground robots, he said, data is costly to acquire.

“Reducing the volatility of reward accumulation, ensuring one explores an unknown domain in an efficient manner, or incorporating prior experience, all contribute towards breaking existing sample efficiency barriers of prevailing practice in reinforcement learning by alleviating the amount of random sampling one requires in order to complete policy optimization,” Koppel said.

The future of this research is bright, and Koppel has dedicated his efforts towards making his findings applicable to innovative technology for Soldiers on the battlefield.

“I am optimistic that reinforcement-learning equipped autonomous robots will be able to assist the warfighter in exploration, reconnaissance and risk assessment on the future battlefield,” Koppel said. “Making this vision a reality is essential to what motivates the research problems to which I dedicate my efforts.”

The next step for this research is to incorporate the broader decision-making goals enabled by general utilities in reinforcement learning into multi-agent settings and investigate how interactive settings between reinforcement learning agents give rise to synergistic and antagonistic reasoning among teams.


According to Koppel, the technology that results from this research will be capable of reasoning under uncertainty in team scenarios.


U.S. Army Research Laboratory

Journal Reference

Variational Policy Gradient Method for Reinforcement Learning with General Utilities


In recent years, reinforcement learning systems with general goals beyond a cumulative sum of rewards have gained traction, such as in constrained problems, exploration, and acting upon prior experiences. In this paper, we consider policy optimization in Markov Decision Problems, where the objective is a general utility function of the state-action occupancy measure, which subsumes several of the aforementioned examples as special cases.
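The objective described above can be sketched formally (notation reconstructed from the abstract's description; the normalization convention is an assumption, not quoted from the paper). The policy is optimized through a general utility $F$ of its discounted state-action occupancy measure:

```latex
\max_{\pi} \; F(\lambda^{\pi}), \qquad
\lambda^{\pi}(s,a) = (1-\gamma) \sum_{t \ge 0} \gamma^{t}\,
\Pr\left(s_t = s,\, a_t = a \mid \pi\right).
```

The standard cumulative-reward objective is recovered as the special case where $F$ is linear, $F(\lambda) = \langle \lambda, r \rangle$; risk-sensitive, constrained, exploratory, and prior-divergence objectives correspond to nonlinear choices of $F$.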

Such generality invalidates the Bellman equation. As this means that dynamic programming no longer works, we focus on direct policy search. Analogously to the Policy Gradient Theorem (Sutton et al., 2000) available for RL with cumulative rewards, we derive a new Variational Policy Gradient Theorem for RL with general utilities, which establishes that the gradient may be obtained as the solution of a stochastic saddle point problem involving the Fenchel dual of the utility function. We develop a variational Monte Carlo gradient estimation algorithm to compute the policy gradient based on sample paths.

Further, we prove that the variational policy gradient scheme converges globally to the optimal policy for the general objective, and we also establish its rate of convergence that matches or improves the convergence rate available in the case of RL with cumulative rewards.
