Cambridge University scientists have demonstrated how placing physical constraints on an artificially intelligent system—in much the same way that the human brain has to develop and operate within physical and biological constraints—allows it to develop features of the brains of complex organisms in order to solve tasks.
The team is hopeful that its AI system, which is based on spatially embedded recurrent neural networks (seRNNs), could begin to shed light on how such constraints shape differences between people’s brains and contribute to differences seen in those that experience cognitive or mental health difficulties. The findings are also likely to be of interest to the AI community, and could provide insights that enable the development of more efficient systems, particularly in situations where there are likely to be physical constraints.
Jascha Achterberg, PhD, a Gates Scholar from the Medical Research Council Cognition and Brain Sciences Unit (MRC CBSU) at the University of Cambridge, said, “Artificial ‘brains’ allow us to ask questions that it would be impossible to look at in an actual biological system. We can train the system to perform tasks and then play around experimentally with the constraints we impose, to see if it begins to look more like the brains of particular individuals.” John Duncan, PhD, also from the MRC Cognition and Brain Sciences Unit, University of Cambridge, added, “These artificial brains give us a way to understand the rich and bewildering data we see when the activity of real neurons is recorded in real brains.”
Achterberg, Duncan, and colleagues reported on development of the system in Nature Machine Intelligence, in a paper titled “Spatially embedded recurrent neural networks reveal widespread links between structural and functional neuroscience findings.” In their report the team concluded, “seRNNs incorporate biophysical constraints within a fully artificial system and can serve as a bridge between structural and functional research communities to move neuroscientific understanding forwards.”
As neural systems such as the brain organise themselves and make connections, they have to balance competing demands. “Brain networks exist within the confines of resource limitations,” the authors wrote. As a result, a brain network must overcome the metabolic costs of growing and sustaining the network within its physical space, while simultaneously implementing its required information processing.
“As such, the most basic features of both brain organization and network function—such as its sparse and small-world structure, functional modularity, and characteristic neuronal tuning curves—might arise because of this basic optimization problem.” This trade-off shapes all brains within and across species, which may help explain why many brains converge on similar organisational solutions, the team continued. However, as the team further noted, “we have yet to incorporate both the brain’s anatomy and the brain’s function into a single coherent model, allowing a network to dynamically trade-off its different structural, functional and behavioural objectives in real time.” Achterberg further noted, “Not only is the brain great at solving complex problems, it does so while using very little energy.”
For their reported study the scientists created an artificial system intended to model a very simplified version of the brain, and applied physical constraints. Instead of real neurons, the system uses computational nodes. Neurons and nodes are similar in function, in that each takes an input, transforms it, and produces an output, and a single node or neuron might connect to multiple others, all inputting information to be computed.
The system developed by the Cambridge team introduced spatially embedded recurrent neural networks. An seRNN is optimized to solve a task, making decisions to achieve functional goals, they explained. “However, as it learns to achieve these goals and to optimize its behavioral performance, its constituent neurons face the kind of resource constraints experienced within biological networks.”
In their system, the researchers applied a “physical” constraint on the system. Each node was given a specific location in a virtual space, and the further away two nodes were, the more difficult it was for them to communicate. This is similar to how neurons in the human brain are organized.
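The spatial embedding the article describes can be sketched in a few lines. In this illustrative example (the coordinates, dimensionality, and node count are assumptions, not the paper's exact setup), each node is assigned a position in a virtual space and a pairwise distance matrix captures how far apart any two nodes sit; the further apart they are, the more their connection will later be penalised.

```python
# Minimal sketch (assumed details): nodes placed in a 3D virtual space,
# with pairwise Euclidean distances defining how costly communication is.
import numpy as np

rng = np.random.default_rng(0)
n_nodes = 100

# Assign each node a random coordinate in a unit cube (illustrative choice).
coords = rng.uniform(0.0, 1.0, size=(n_nodes, 3))

# Pairwise Euclidean distance matrix: dist[i, j] = ||coords[i] - coords[j]||.
diff = coords[:, None, :] - coords[None, :, :]
dist = np.sqrt((diff ** 2).sum(axis=-1))

print(dist.shape)                  # (100, 100)
print(np.allclose(dist, dist.T))   # symmetric: True
```

Distant node pairs end up with large entries in `dist`, which is what later makes long-range connections expensive to form.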
The team gave the system a simple task to complete—in this case a simplified version of a maze navigation task typically given to animals such as rats and macaques when studying the brain, where it has to combine multiple pieces of information to decide on the shortest route to get to the end point.
One of the reasons the team chose this particular task is because to complete it, the system needs to maintain a number of elements—start location, end location and intermediate steps—and once it has learned to do the task reliably, it is possible to observe, at different moments in a trial, which nodes are important. For example, one particular cluster of nodes may encode the finish locations, while others encode the available routes, and it is possible to track which nodes are active at different stages of the task.
Initially, the system does not know how to complete the task and makes mistakes. But when it is given feedback it gradually learns to get better at the task. It learns by changing the strength of the connections between its nodes, similar to how the strength of connections between brain cells changes as we learn. The system then repeats the task multiple times, until eventually it learns to perform it correctly.
With their system, however, the physical constraint meant that the further apart two nodes were, the harder it was to build a connection between them in response to the feedback. In the human brain, connections that span a large physical distance are expensive to form and maintain.
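The general recipe behind this constraint can be written as a loss function: task error plus a wiring penalty in which each connection's strength is weighted by the distance between the nodes it joins. The sketch below shows that assumed form; the paper's exact regulariser may differ.

```python
# Sketch of the core idea (assumed form): training loss = task error plus a
# distance-weighted L1 penalty, so long-range connections cost more.
import numpy as np

rng = np.random.default_rng(1)
n = 50
coords = rng.uniform(size=(n, 2))
dist = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)

W = rng.normal(scale=0.1, size=(n, n))   # recurrent weights

def wiring_cost(W, dist, strength=1e-3):
    """L1 penalty scaled by distance: long connections cost more."""
    return strength * np.sum(np.abs(W) * dist)

def total_loss(task_loss, W, dist):
    return task_loss + wiring_cost(W, dist)

print(total_loss(0.5, W, dist) >= 0.5)   # penalty is non-negative: True
```

Gradient descent on a loss like this both improves task performance and prunes long-range connections, so any long link that survives training has to earn its metabolic cost.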
They found that, under these constraints, the system went on to develop key characteristics and strategies similar to those found in human brains. For example, to work around the wiring constraint, the artificial system began to develop hubs, highly connected nodes that act as conduits for passing information across the network.
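Hubs like these can be identified with a standard network-science measure. The sketch below is an illustrative analysis, not taken from the paper: a node's "strength" is the summed absolute weight of its connections, and the top few nodes by strength are flagged as hubs.

```python
# Illustrative hub detection: node strength = summed absolute weight of its
# inbound and outbound connections; hubs are the top nodes by strength.
import numpy as np

rng = np.random.default_rng(2)
n = 30
W = rng.normal(scale=0.1, size=(n, n))
np.fill_diagonal(W, 0.0)             # no self-connections

# Strength: total absolute inbound plus outbound weight per node.
strength = np.abs(W).sum(axis=0) + np.abs(W).sum(axis=1)

# Call the top 10% of nodes (by strength) hubs.
n_hubs = max(1, n // 10)
hubs = np.argsort(strength)[-n_hubs:]
print(hubs)
```

In a trained seRNN-style network, weights are shaped by the distance penalty rather than drawn at random, and a small set of nodes comes to dominate this strength ranking.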
More surprising, however, was that the response profiles of individual nodes themselves began to change: in other words, rather than having a system where each node codes for one particular property of the maze task, like the goal location or the next choice, nodes developed a flexible coding scheme. This means that at different moments in time nodes might be firing for a mix of the properties of the maze. For instance, the same node might be able to encode multiple locations of a maze, rather than needing specialised nodes for encoding specific locations. This is another feature seen in the brains of complex organisms.
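One common way to quantify this kind of flexible, mixed coding is to regress a single unit's activity on several task variables at once; a "mixed" unit carries substantial weight on more than one. The sketch below is an illustrative analysis on simulated data, not the paper's exact method.

```python
# Sketch of a mixed-selectivity check (illustrative, on simulated data):
# fit activity ~ b0 + b1*goal + b2*choice; a mixed unit has both b1 and b2
# substantially non-zero.
import numpy as np

rng = np.random.default_rng(3)
n_trials = 200

goal = rng.integers(0, 3, n_trials)      # task variable 1
choice = rng.integers(0, 2, n_trials)    # task variable 2

# A simulated unit whose firing mixes both variables plus noise.
activity = 0.8 * goal + 0.5 * choice + rng.normal(scale=0.1, size=n_trials)

# Ordinary least-squares fit via numpy.
X = np.column_stack([np.ones(n_trials), goal, choice])
coef, *_ = np.linalg.lstsq(X, activity, rcond=None)
b0, b1, b2 = coef

# Both slope coefficients substantially non-zero -> mixed selectivity.
print(abs(b1) > 0.2 and abs(b2) > 0.2)   # True
```

A purely specialised node would instead load on a single variable, with the other coefficients near zero.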
Co-author Duncan Astle, PhD, University of Cambridge Department of Psychiatry, said: “This simple constraint—it’s harder to wire nodes that are far apart—forces artificial systems to produce some quite complicated characteristics. Interestingly, they are characteristics shared by biological systems like the human brain. I think that tells us something fundamental about why our brains are organised the way they are.” The authors noted, “Our model provides an important tool to continue the work on jointly studying structure and function in neuroscience mode … We believe that the modeling approach shown to work in seRNNs will speed up innovations in neuroscience by allowing us to systematically study the relationships between features that all have been individually discussed to be of high importance to the brain.”
The findings could also allow researchers to develop more efficient AI systems. Co-author Danyal Akarca, PhD, noted, “AI researchers are constantly trying to work out how to make complex, neural systems that can encode and perform in a flexible way that is efficient. To achieve this, we think that neurobiology will give us a lot of inspiration. For example, the overall wiring cost of the system we’ve created is much lower than you would find in a typical AI system.”
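The wiring-cost saving Akarca describes can be illustrated with a toy comparison (the numbers and pruning rule below are illustrative assumptions): summing each connection's strength times its length for a dense network versus one where long-range connections have been pruned.

```python
# Toy comparison (illustrative only): total wiring cost sum(|W_ij| * d_ij)
# of a dense network versus a distance-pruned one.
import numpy as np

rng = np.random.default_rng(4)
n = 40
coords = rng.uniform(size=(n, 2))
dist = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)

W_dense = rng.normal(scale=0.1, size=(n, n))

# Prune: keep only connections between nearby nodes (distance below median).
mask = dist < np.median(dist)
W_sparse = W_dense * mask

def wiring_cost(W):
    return np.sum(np.abs(W) * dist)

print(wiring_cost(W_sparse) < wiring_cost(W_dense))   # True
```

The pruned network drops exactly the connections that dominate the cost, which is the kind of saving a distance penalty pushes a trained network toward.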
Many modern AI solutions involve using architectures that only superficially resemble a brain. The researchers say their work shows that the type of problem the AI is solving will influence which architecture is the most powerful to use. The team stated, “The development of seRNNs allowed us to observe the impact of optimizing task control, structural cost and network communication in a model system that can dynamically trade off its structural and functional objectives … In addition, our results are relevant for developments on the intersection of neuroscience and artificial intelligence (NeuroAI).”
Achterberg said, “If you want to build an artificially intelligent system that solves similar problems to humans, then ultimately the system will end up looking much closer to an actual brain than systems running on large compute clusters that specialise in very different tasks to those carried out by humans. The architecture and structure we see in our artificial ‘brain’ is there because it is beneficial for handling the specific brain-like challenges it faces.”
This means that robots that have to process a large amount of constantly changing information with finite energetic resources could benefit from having brain structures not dissimilar to ours. Achterberg continued, “Brains of robots that are deployed in the real physical world are probably going to look more like our brains because they might face the same challenges as us. They need to constantly process new information coming in through their sensors while controlling their bodies to move through space towards a goal. Many systems will need to run all their computations with a limited supply of electric energy and so, to balance these energetic constraints with the amount of information it needs to process, it will probably need a brain structure similar to ours.”