Brain-Inspired AI Learns Like Humans

Summary: Today’s AI can read, talk, and analyze data but still has critical limitations. NeuroAI researchers designed a new AI model inspired by the human brain’s efficiency.

This model allows AI neurons to receive feedback and adjust in real time, enhancing learning and memory processes. The innovation could lead to a new generation of more efficient and accessible AI, bringing AI and neuroscience closer together.

Key Facts:

  1. Inspired by the Brain: The new AI model is based on how human brains efficiently process and adjust data.
  2. Real-Time Adjustment: AI neurons can receive feedback and adjust on the fly, improving efficiency.
  3. Potential Impact: This breakthrough could pioneer a new generation of AI that learns like humans, enhancing both AI and neuroscience fields.

Source: CSHL

It reads. It talks. It collates mountains of data and recommends business decisions. Today’s artificial intelligence might seem more human than ever. However, AI still has several critical shortcomings. 

“As impressive as ChatGPT and all these current AI technologies are, in terms of interacting with the physical world, they’re still very limited. Even in things they do, like solve math problems and write essays, they take billions and billions of training examples before they can do them well,” explains Cold Spring Harbor Laboratory (CSHL) NeuroAI Scholar Kyle Daruwalla.

Daruwalla has been searching for new, unconventional ways to design AI that can overcome such computational obstacles. And he might have just found one.

The key was moving data. Most of modern computing’s energy consumption comes from bouncing data around. In artificial neural networks, which are made up of billions of connections, data can have a very long way to go.

So, to find a solution, Daruwalla looked for inspiration in one of the most computationally powerful and energy-efficient machines in existence—the human brain.

Daruwalla designed a new way for AI algorithms to move and process data much more efficiently, based on how our brains take in new information. The design allows individual AI “neurons” to receive feedback and adjust on the fly rather than wait for a whole circuit to update simultaneously. This way, data doesn’t have to travel as far and gets processed in real time.
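
To make that contrast concrete, the sketch below shows a toy two-layer network in which each layer applies its own weight adjustment immediately, from signals available to it, rather than waiting for a synchronized backward pass. This is purely illustrative and is not Daruwalla’s rule; the fixed random feedback matrix is an assumption borrowed from feedback-alignment-style schemes, used only to keep each layer’s update local.

```python
import numpy as np

# Minimal, purely illustrative sketch of "adjust on the fly":
# each layer updates from signals it can see locally, right away,
# instead of waiting for a full backward pass through the whole circuit.
# The fixed random feedback matrix B is an assumption, not the paper's rule.

rng = np.random.default_rng(0)

n_in, n_hidden, n_out = 16, 8, 4
W1 = rng.normal(scale=0.1, size=(n_in, n_hidden))   # input  -> hidden weights
W2 = rng.normal(scale=0.1, size=(n_hidden, n_out))  # hidden -> output weights
B = rng.normal(scale=0.1, size=(n_out, n_hidden))   # fixed random feedback (assumption)
lr = 0.01

for step in range(100):
    x = rng.normal(size=n_in)
    target = rng.normal(size=n_out)          # placeholder data for illustration

    h = np.tanh(x @ W1)                      # hidden activity
    y_hat = h @ W2                           # prediction

    err_out = target - y_hat                 # error available at the output layer
    err_hid = (err_out @ B) * (1.0 - h**2)   # locally projected feedback for the hidden layer

    # Each layer applies its update immediately, with no synchronized backward pass.
    W2 += lr * np.outer(h, err_out)
    W1 += lr * np.outer(x, err_hid)
```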

“In our brains, our connections are changing and adjusting all the time,” Daruwalla says. “It’s not like you pause everything, adjust, and then resume being you.”

The new machine-learning model provides evidence for an as-yet unproven theory that correlates working memory with learning and academic performance. Working memory is the cognitive system that enables us to stay on task while recalling stored knowledge and experiences.

“There have been theories in neuroscience of how working memory circuits could help facilitate learning. But there isn’t something as concrete as our rule that actually ties these two together.

“And so that was one of the nice things we stumbled into here. The theory led out to a rule where adjusting each synapse individually necessitated this working memory sitting alongside it,” says Daruwalla.

Daruwalla’s design may help pioneer a new generation of AI that learns like we do. That would not only make AI more efficient and accessible—it would also be somewhat of a full-circle moment for neuroAI. Neuroscience has been feeding AI valuable data since long before ChatGPT uttered its first digital syllable. Soon, it seems, AI may return the favor.

About this artificial intelligence research news

Author: Sara Giarnieri
Source: CSHL
Contact: Sara Giarnieri – CSHL
Image: The image is credited to Neuroscience News

Original Research: Open access.
“Information bottleneck-based Hebbian learning rule naturally ties working memory and synaptic updates” by Kyle Daruwalla et al. Frontiers in Computational Neuroscience


Abstract

Information bottleneck-based Hebbian learning rule naturally ties working memory and synaptic updates

Deep neural feedforward networks are effective models for a wide array of problems, but training and deploying such networks presents a significant energy cost. Spiking neural networks (SNNs), which are modeled after biologically realistic neurons, offer a potential solution when deployed correctly on neuromorphic computing hardware.

Still, many applications train SNNs offline, and running network training directly on neuromorphic hardware is an ongoing research problem. The primary hurdle is that back-propagation, which makes training such artificial deep networks possible, is biologically implausible.

Neuroscientists are uncertain about how the brain would propagate a precise error signal backward through a network of neurons. Recent progress addresses part of this question, e.g., the weight transport problem, but a complete solution remains intangible.

In contrast, novel learning rules based on the information bottleneck (IB) train each layer of a network independently, circumventing the need to propagate errors across layers. Instead, propagation is implicit due to the layers’ feedforward connectivity.
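
For reference, the textbook information bottleneck objective for a layer’s representation is reproduced below; this is the generic formulation, and the paper’s layer-wise variant may differ in its details. Each layer compresses away information about the input X while retaining information about the target Y, with β setting the trade-off.

```latex
% Generic information bottleneck objective for a layer representation T_l
% (textbook form; not necessarily the paper's exact layer-wise objective):
% minimize input information I(X; T_l) while preserving task information I(T_l; Y).
\[
  \min_{p(t_\ell \mid x)} \; I(X; T_\ell) \;-\; \beta \, I(T_\ell; Y)
\]
```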

These rules take the form of a three-factor Hebbian update in which a global error signal modulates local synaptic updates within each layer. Unfortunately, the global signal for a given layer requires processing multiple samples concurrently, and the brain only sees a single sample at a time.
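
In its generic form, a three-factor update combines two local factors, the pre- and post-synaptic activity, with one layer-wide modulatory factor. The expression below shows only this generic shape; the specific global factor g in the paper is derived from its IB-based objective and is not reproduced here.

```latex
% Generic three-factor Hebbian update (illustrative shape, not the paper's exact rule):
% two local factors (pre- and post-synaptic activity) gated by a global factor g,
% with eta as the learning rate.
\[
  \Delta w_{ij} \;=\; \eta \, g \, r_i^{\mathrm{pre}} \, r_j^{\mathrm{post}}
\]
```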

We propose a new three-factor update rule where the global signal correctly captures information across samples via an auxiliary memory network. The auxiliary network can be trained a priori independently of the dataset being used with the primary network.
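
As a rough illustration of how a recurrent memory could turn a one-sample-at-a-time stream into a usable global signal, the Python sketch below maintains an echo-state-style memory and reads out one scalar modulator per sample. The architecture, weights, and names here are illustrative assumptions, not the paper’s auxiliary network or its a priori training procedure.

```python
import numpy as np

# Hedged sketch: a fixed recurrent "memory" summarizes the samples seen so far,
# so a per-sample readout can stand in for a statistic that would otherwise
# require many samples at once. All names and the echo-state-style update
# are assumptions for illustration only.

rng = np.random.default_rng(1)
d_in, d_mem = 8, 32
W_in = rng.normal(scale=0.2, size=(d_mem, d_in))    # input -> memory (fixed)
W_rec = rng.normal(scale=0.1, size=(d_mem, d_mem))  # memory -> memory (fixed)
w_out = rng.normal(scale=0.1, size=d_mem)           # readout producing the global factor
state = np.zeros(d_mem)

def memory_modulator(sample):
    """Fold one sample into the memory state and read out a scalar
    modulatory signal that reflects the stream seen so far."""
    global state
    state = np.tanh(W_in @ sample + W_rec @ state)
    return float(w_out @ state)

# Feed samples one at a time, as a brain-like learner would see them.
for _ in range(5):
    g = memory_modulator(rng.normal(size=d_in))
```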

We demonstrate comparable performance to baselines on image classification tasks. Interestingly, unlike back-propagation-like schemes where there is no link between learning and memory, our rule presents a direct connection between working memory and synaptic updates. To the best of our knowledge, this is the first rule to make this link explicit.

We explore these implications in initial experiments examining the effect of memory capacity on learning performance. Moving forward, this work suggests an alternate view of learning where each layer balances memory-informed compression against task performance.

This view naturally encompasses several key aspects of neural computation, including memory, efficiency, and locality.
