Navigating with grid-like representations in artificial agents
Most animals, including humans, are able to flexibly navigate the world they live in – exploring new areas, returning quickly to remembered places, and taking shortcuts. Indeed, these abilities feel so easy and natural that it is not immediately obvious how complex the underlying processes really are. In contrast, spatial navigation remains a substantial challenge for artificial agents whose abilities are far outstripped by those of mammals.
In 2005, a potentially crucial part of the neural circuitry underlying spatial behaviour was revealed by an astonishing discovery: neurons that fire in a strikingly regular hexagonal pattern as animals explore their environment. This lattice of points is believed to facilitate spatial navigation, similarly to the gridlines on a map. In addition to equipping animals with an internal coordinate system, these neurons - known as grid cells - have recently been hypothesised to support vector-based navigation. That is, they may enable the brain to calculate the distance and direction to a desired destination, “as the crow flies,” allowing an animal to make a direct journey between two places even if it has never followed that exact route before.
The group that first discovered grid cells was jointly awarded the 2014 Nobel Prize in Physiology or Medicine for shedding light on how cognitive representations of space might work. But after more than 10 years of theorising since their discovery, the computational functions of grid cells - and whether they support vector-based navigation - have remained largely a mystery.
In our most recent paper [PDF here], published in Nature, we developed an artificial agent to test the theory that grid cells support vector-based navigation, in keeping with our overarching philosophy that algorithms used for AI can meaningfully approximate elements of the brain.
As a first step, we trained a recurrent network to perform the task of localising itself in a virtual environment, using predominantly movement-related velocity signals. This ability is commonly used by mammals when moving through unfamiliar places or in situations where it is not easy to spot familiar landmarks (e.g. when navigating in the dark).
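The structure of that self-localisation task can be sketched with a toy version of its training data: a simulated random-walk trajectory through a square arena, where the velocities are the network's inputs and the positions they integrate to are its supervised targets. The arena size, speed, and turn noise below are illustrative placeholders, not the values used in the paper.

```python
import numpy as np

def simulate_trajectory(n_steps, box_size=2.2, speed=0.13, turn_sd=0.3, seed=0):
    """Generate a random-walk path in a square arena.

    Returns (velocities, positions): the velocities are the inputs a
    path-integrating network would receive; the positions are the
    targets it must infer by integrating them over time.
    """
    rng = np.random.default_rng(seed)
    vel = np.empty((n_steps, 2))
    pos = np.empty((n_steps, 2))
    p = rng.uniform(0.0, box_size, size=2)
    heading = rng.uniform(0.0, 2.0 * np.pi)
    for t in range(n_steps):
        heading += rng.normal(0.0, turn_sd)               # small random turns
        step = speed * np.array([np.cos(heading), np.sin(heading)])
        new_p = np.clip(p + step, 0.0, box_size)          # stay inside the walls
        vel[t] = new_p - p                                # effective displacement
        pos[t] = new_p
        p = new_p
    return vel, pos

vel, pos = simulate_trajectory(1000)
```

Because the recorded velocity is the effective (post-wall) displacement, summing the velocities from the start position exactly reproduces the trajectory, which is the relationship the network must learn.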
We found that grid-like representations (hereafter grid units) spontaneously emerged within the network - providing a striking convergence with the neural activity patterns observed in foraging mammals, and consistent with the notion that grid cells provide an efficient code for space.
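A standard way to inspect such units for spatial structure is a ratemap: bin the arena into a spatial grid and average a unit's activity within each bin, then look for periodic peaks. This is a generic sketch of that analysis, not the paper's code; the arena size, bin count, and the synthetic "unit" below are illustrative.

```python
import numpy as np

def rate_map(positions, activity, box_size=2.2, bins=20):
    """Mean activity of one unit in each spatial bin (NaN where unvisited)."""
    extent = [[0.0, box_size], [0.0, box_size]]
    occupancy, _, _ = np.histogram2d(positions[:, 0], positions[:, 1],
                                     bins=bins, range=extent)
    summed, _, _ = np.histogram2d(positions[:, 0], positions[:, 1],
                                  bins=bins, range=extent, weights=activity)
    with np.errstate(invalid="ignore", divide="ignore"):
        return summed / occupancy

# Toy check: a unit whose activity is a fixed function of position should
# recover that function in its ratemap.
rng = np.random.default_rng(1)
positions = rng.uniform(0.0, 2.2, size=(20000, 2))
activity = np.cos(4.0 * positions[:, 0])   # striped tuning, purely illustrative
rm = rate_map(positions, activity)
```

For a genuine grid unit, the ratemap would show the hexagonal lattice of firing fields described above, which can then be quantified with the usual autocorrelogram-based grid score.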
We next sought to test the theory that grid cells support vector-based navigation by creating an artificial agent to be used as an experimental guinea pig. This was done by combining the initial “grid network” with a larger network architecture, forming an agent that could be trained using deep reinforcement learning to navigate to goals in challenging virtual reality game environments.
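At the architectural level, the combination can be sketched as the grid network's activations being concatenated with the agent's other inputs and passed to a policy head that outputs action probabilities. The layer sizes, initialisation, and class names here are arbitrary placeholders, not the paper's architecture.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

class PolicyHead:
    """Tiny MLP mapping (grid code + other features) to action probabilities."""
    def __init__(self, n_grid, n_vis, n_hidden, n_actions, seed=0):
        rng = np.random.default_rng(seed)
        self.w1 = rng.normal(0.0, 0.1, (n_grid + n_vis, n_hidden))
        self.w2 = rng.normal(0.0, 0.1, (n_hidden, n_actions))

    def act_probs(self, grid_code, vis_feats):
        # The grid code rides alongside the visual features into the policy.
        h = np.tanh(np.concatenate([grid_code, vis_feats]) @ self.w1)
        return softmax(h @ self.w2)

policy = PolicyHead(n_grid=64, n_vis=32, n_hidden=128, n_actions=6)
probs = policy.act_probs(np.random.default_rng(2).normal(size=64),
                         np.random.default_rng(3).normal(size=32))
```

In the actual agent these weights are learned end-to-end with deep reinforcement learning; the sketch only shows how a grid code can be wired in as one more input to the policy.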
This agent performed at a super-human level, exceeding the ability of a professional game player, and exhibited the type of flexible navigation normally associated with animals, taking novel routes and shortcuts when they became available.
Through a series of experimental manipulations, we showed that grid-like representations were critical for vector-based navigation. For example, when the grid-like units in the network were silenced, the agent’s ability to navigate was impaired, and the representation of key metrics such as distance and direction to the goal became less accurate.
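The silencing manipulation itself is mechanically simple: zero out a chosen subset of units before their activations reach the downstream network, rerun the agent, and compare navigation performance. A minimal sketch, in which the network sizes and the choice of units are placeholders:

```python
import numpy as np

def silence_units(activations, unit_idx):
    """Return a copy of the activations with the chosen units zeroed,
    mimicking the silencing of grid cells in a trained network."""
    lesioned = activations.copy()
    lesioned[..., unit_idx] = 0.0
    return lesioned

# A batch of 10 hypothetical 64-dimensional grid codes.
grid_code = np.random.default_rng(4).normal(size=(10, 64))
# Silence every second unit; the rest pass through untouched.
lesioned = silence_units(grid_code, unit_idx=np.arange(0, 64, 2))
```

Feeding `lesioned` instead of `grid_code` into the rest of the agent is what lets the contribution of those units to navigation be measured directly.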
We believe our study constitutes an important step in understanding the fundamental computational purpose of grid cells in the brain and also highlights the benefits they afford to artificial agents. The evidence provides compelling support for the theory that grid cells provide a Euclidean spatial framework - a concept of space - enabling vector-based navigation.
More broadly, our work reaffirms the potential of utilising algorithms thought to be used by the brain as inspiration for machine learning architectures. The extensive body of previous neuroscience research into grid cells makes interpreting the agent - interpretability being itself a major topic in AI research - significantly easier, by giving us clues about what to look for when trying to understand its internal representations. The work also showcases the potential of using artificial agents actively engaging in complex behaviours within realistic virtual environments to test theories of how the brain works.
Taking this principle further, a similar approach could be used to test theories concerning brain areas that are important for perceiving sound or controlling limbs, for example. In the future such networks may well provide a new way for scientists to conduct ‘experiments’, suggesting new theories and even complementing some of the work that is currently conducted in animals.
UPDATE 14.05.18
We’d encourage you to read The emergence of grid-like representations by training recurrent neural networks to perform spatial localization by Cueva and Wei, which was published contemporaneously at ICLR. While different in scope and findings, it shows interesting results. In brief, the authors found periodic firing that conformed to the shape of the enclosure, e.g. rectangular grids in a square environment and triangular in a triangular environment (fig. 2 of Cueva and Wei). This differs from our study, where we found grid-like units whose firing pattern closely resembles that of rodent grid cells, which typically show hexagonal firing patterns across differently shaped environments (e.g. square and circular arenas).
Read the Nature paper: [PDF]
Download the original paper (unformatted): [PDF]
Read Nobel Prize Laureate Edvard Moser's review of the paper.
This work was done by Andrea Banino, Caswell Barry, Benigno Uria, Charles Blundell, Timothy Lillicrap, Piotr Mirowski, Alexander Pritzel, Martin Chadwick, Thomas Degris, Joseph Modayil, Greg Wayne, Hubert Soyer, Fabio Viola, Brian Zhang, Ross Goroshin, Neil Rabinowitz, Razvan Pascanu, Charlie Beattie, Stig Petersen, Amir Sadik, Stephen Gaffney, Helen King, Koray Kavukcuoglu, Demis Hassabis, Raia Hadsell, and Dharshan Kumaran.