Bayesian inverse reinforcement learning for collective animal movement

Annals of Applied Statistics

Abstract

Agent-based methods allow for defining simple rules that generate complex group behaviors. The governing rules of such models are typically set a priori, and parameters are tuned from observed behavior trajectories. Instead of making simplifying assumptions across all anticipated scenarios, inverse reinforcement learning provides inference on the short-term (local) rules governing long-term behavior policies by using properties of a Markov decision process. We use the computationally efficient linearly-solvable Markov decision process to learn the local rules governing collective movement for a simulation of the self-propelled particle (SPP) model and an application to data from a captive guppy population. The estimation of the behavioral decision costs is done in a Bayesian framework with basis function smoothing. We recover the true costs in the SPP simulation and find that the guppies value collective movement more than targeted movement toward shelter.
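For context, the linearly-solvable Markov decision process (LMDP) named in the abstract admits a linear Bellman equation; the following is a standard statement in Todorov's formulation, with notation that is illustrative and not necessarily the paper's. Writing $v(s)$ for the value function, $q(s)$ for the state cost, and $p(s' \mid s)$ for the passive dynamics,

$$
z(s) \;=\; e^{-q(s)} \sum_{s'} p(s' \mid s)\, z(s'), \qquad z(s) := e^{-v(s)},
$$

so the recursion is the linear system $z = \operatorname{diag}(e^{-q})\,P\,z$ in the desirability function $z$. The optimal controlled transitions are $u^{*}(s' \mid s) \propto p(s' \mid s)\, z(s')$, and the costs invert as $q(s) = \log\big[(Pz)(s)\big] - \log z(s)$ (up to an additive constant in the average-cost case). This invertibility is what makes the inverse reinforcement learning step tractable: observed behavior constrains $z$, and $z$ determines the costs.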
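The sketch below illustrates these forward and inverse LMDP mechanics on a toy discrete state space. It is a minimal illustration under assumed inputs (a random passive transition matrix and random costs), not the paper's implementation; all function names here are hypothetical.

```python
import numpy as np

def solve_lmdp(P, q, tol=1e-12, max_iter=100_000):
    """Desirability function of an average-cost linearly-solvable MDP.

    Power-iterates z <- diag(exp(-q)) @ P @ z with normalization, so z
    converges to the principal (Perron) eigenvector of diag(exp(-q)) P.
    P is an (n, n) passive transition matrix with rows summing to 1;
    q is an (n,) vector of per-state costs.
    """
    G = np.exp(-q)
    z = np.ones(len(q))
    for _ in range(max_iter):
        z_new = G * (P @ z)
        z_new /= np.linalg.norm(z_new)   # eigenvector is defined up to scale
        if np.max(np.abs(z_new - z)) < tol:
            z = z_new
            break
        z = z_new
    U = P * z[None, :]                   # optimal controlled transitions:
    U /= U.sum(axis=1, keepdims=True)    # u*(s'|s) proportional to p(s'|s) z(s')
    return z, U

def recover_costs(P, z):
    """Invert the linear Bellman equation: q = log(Pz) - log(z).

    In the average-cost formulation this recovers q only up to an additive
    constant (the log principal eigenvalue), so the inferred decision costs
    are relative, not absolute.
    """
    return np.log(P @ z) - np.log(z)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    n = 50
    P = rng.dirichlet(np.ones(n), size=n)  # toy passive dynamics
    q = rng.uniform(0.0, 2.0, size=n)      # toy true state costs
    z, U = solve_lmdp(P, q)
    q_hat = recover_costs(P, z)
    # Centered estimates match the centered truth up to iteration tolerance:
    print(np.max(np.abs((q_hat - q_hat.mean()) - (q - q.mean()))))
```

Per the abstract, the paper goes further than this sketch: the costs are smoothed with basis functions and the coefficients are estimated in a Bayesian framework, rather than read off pointwise as above.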

Publication type: Article
Publication Subtype: Journal Article
Title: Bayesian inverse reinforcement learning for collective animal movement
Series title: Annals of Applied Statistics
DOI: 10.1214/21-AOAS1529
Volume: 16
Issue: 2
Year Published: 2022
Language: English
Publisher: Project Euclid
Contributing office(s): Coop Res Unit Seattle
Description: 15 p.
First page: 999
Last page: 1013