
Neuroscientists discover a new computational method to make complex dendrite models much simpler

Unlike their simple counterparts in artificial intelligence (AI) applications, neurons in the brain use dendrites – their intricate tree-like branches – to find relevant chunks of information. Now, neuroscientists from the University of Bern have discovered a new computational method to make complex dendrite models much simpler. These faithful reductions may lead AI applications to process information much like the brain does.

Neurons, the fundamental units of the brain, are complex computers by themselves. They receive input signals on a tree-like structure – the dendrite. This structure does more than simply collect the input signals: it integrates and compares them to find those special combinations that are important for the neurons’ role in the brain. Moreover, the dendrites of neurons come in a variety of shapes and forms, indicating that distinct neurons may have separate roles in the brain.

A simple yet faithful model

In neuroscience, there has historically been a tradeoff between a model’s faithfulness to the underlying biological neuron and its complexity. Neuroscientists have constructed detailed computational models of many different types of dendrites. These models mimic the behavior of real dendrites to a high degree of accuracy.



The tradeoff, however, is that such models are very complex. Thus, it is hard to exhaustively characterize all possible responses of such models and to simulate them on a computer. Even the most powerful computers can only simulate a small fraction of the neurons in any given brain area.

Researchers from the Department of Physiology at the University of Bern have long sought to understand the role of dendrites in computations carried out by the brain. On the one hand, they have constructed detailed models of dendrites from experimental measurements, and on the other hand they have constructed neural network models with highly abstract dendrites to learn computations such as object recognition.


A new study set out to find a computational method to make highly detailed models of neurons simpler, while retaining a high degree of faithfulness. This work emerged from the collaboration between experimental and computational neuroscientists from the research groups of Prof. Thomas Nevian and Prof. Walter Senn, and was led by Dr Willem Wybo.

From complex to abstract: the fascinating tree structure of dendrites can now be modelled at many scales. © eLife

“We wanted the method to be flexible, so that it could be applied to all types of dendrites. We also wanted it to be accurate, so that it could faithfully capture the most important functions of any given dendrite. With these simpler models, neural responses can more easily be characterized and simulation of large networks of neurons with dendrites can be conducted,” Dr Wybo explains.



This new approach exploits an elegant mathematical relation between the responses of detailed dendrite models and of simplified dendrite models. Due to this mathematical relation, the objective that is optimized is linear in the parameters of the simplified model.




“This crucial observation allowed us to use the well-known linear least squares method to find the optimized parameters. This method is very efficient compared to methods that use non-linear parameter searches, but also achieves a high degree of accuracy,” says Prof. Senn.
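The key point above is that when the fitting objective is linear in the unknown parameters, an ordinary linear least-squares solve recovers them directly, with no iterative nonlinear search. The following is a minimal illustrative sketch of that idea with synthetic data (the design matrix and parameters here are hypothetical, not the authors' actual dendrite-fitting code):

```python
import numpy as np

# Illustration: if the reduced model's responses depend linearly on its
# unknown parameters (e.g. compartment conductances), we can stack the
# conditions into a design matrix A and solve A @ p ≈ b in one step.
rng = np.random.default_rng(0)

m, n = 200, 5                         # 200 recorded conditions, 5 parameters
A = rng.normal(size=(m, n))           # hypothetical design matrix
true_params = np.array([1.0, 0.5, 2.0, 0.1, 0.8])
b = A @ true_params + 0.01 * rng.normal(size=m)  # "detailed model" responses + noise

# Linear least squares: argmin_p ||A p - b||^2
params, residuals, rank, _ = np.linalg.lstsq(A, b, rcond=None)
print(np.round(params, 2))
```

Because the problem is convex with a closed-form solution, this step is fast and deterministic, which is what makes it attractive compared with nonlinear parameter searches.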

Tools available for AI applications

The main result of the work is the methodology itself: a flexible yet accurate way to construct reduced neuron models from experimental data and morphological reconstructions. “Our methodology shatters the perceived tradeoff between faithfulness and complexity, by showing that extremely simplified models can still capture much of the important response properties of real biological neurons,” Prof. Senn explains. “Which also provides insight into ‘the essential dendrite’, the simplest possible dendrite model that still captures all possible responses of the real dendrite from which it is derived,” Dr Wybo adds.


Thus, in specific situations, hard bounds can be established on how much a dendrite can be simplified while retaining its important response properties. “Furthermore, our methodology greatly simplifies deriving neuron models directly from experimental data,” highlights Prof. Senn, who is also a member of the steering committee of the Center for Artificial Intelligence in Medicine (CAIM) of the University of Bern.

The methodology has been compiled into NEAT (NEural Analysis Toolkit) – an open-source software toolbox that automates the simplification process. NEAT is publicly available on GitHub.



The neurons used currently in AI applications are exceedingly simplistic compared to their biological counterparts, as they don’t include dendrites at all. Neuroscientists believe that including dendrite-like operations in artificial neural networks will lead to the next leap in AI technology. By enabling the inclusion of very simple, but very accurate dendrite models in neural networks, this new approach and toolkit provide an important step towards that goal.
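To make the contrast concrete, here is an illustrative sketch (not the paper's architecture) of the difference between a standard point neuron and a dendrite-like unit in which each branch nonlinearly integrates its own subset of inputs before the soma combines the branch outputs:

```python
import numpy as np

def point_neuron(x, w):
    """Standard AI neuron: one weighted sum, one nonlinearity."""
    return np.tanh(x @ w)

def dendritic_neuron(x, branch_weights, soma_weights):
    """Hypothetical dendrite-like unit: each branch applies its own
    nonlinearity to a subset of the inputs; the soma then sums the
    branch outputs and applies a final nonlinearity."""
    subsets = np.split(x, len(branch_weights))        # one input group per branch
    branch_out = np.array([np.tanh(x_b @ w_b)
                           for x_b, w_b in zip(subsets, branch_weights)])
    return np.tanh(branch_out @ soma_weights)

x = np.array([0.5, -1.0, 0.3, 0.8])                   # 4 inputs over 2 branches
branches = [np.array([1.0, 0.5]), np.array([-0.3, 1.2])]
soma = np.array([0.7, 0.9])
out = dendritic_neuron(x, branches, soma)
print(out)
```

The branch-level nonlinearities let a single such unit detect input combinations that a point neuron with the same number of weights cannot, which is the intuition behind adding dendrite-like operations to artificial networks.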

This work was supported by the Human Brain Project, the Swiss National Science Foundation, and the European Research Council.

Source

University of Bern

Journal Reference

Data-driven reduction of dendritic morphologies with preserved dendro-somatic responses

Abstract

Dendrites shape information flow in neurons. Yet, there is little consensus on the level of spatial complexity at which they operate. Through carefully chosen parameter fits, solvable in the least-squares sense, we obtain accurate reduced compartmental models at any level of complexity.

We show that (back-propagating) action potentials, Ca2+ spikes, and N-methyl-D-aspartate spikes can all be reproduced with few compartments. We also investigate whether afferent spatial connectivity motifs admit simplification by ablating targeted branches and grouping affected synapses onto the next proximal dendrite.

We find that voltage in the remaining branches is reproduced if temporal conductance fluctuations stay below a limit that depends on the average difference in input resistance between the ablated branches and the next proximal dendrite. Furthermore, our methodology fits reduced models directly from experimental data, without requiring morphological reconstructions. We provide software that automatizes the simplification, eliminating a common hurdle toward including dendritic computations in network models.
